java.sql2 DynamicMultiOperation with interlaced exceptions

Lukas Eder lukas.eder at
Thu Oct 12 21:50:39 UTC 2017

Hi Douglas

2017-10-09 18:54 GMT+02:00 Douglas Surber <douglas.surber at>:

> It’s not clear to me why you would think Operation does too much.

I feel that the type hierarchy exposes too much coupling between Operation
subtypes that aren't strictly related. Unfortunately, I currently cannot
explain myself better without having played with the API in more depth...

> Operation only has three methods and one convenience method. An Operation
> encapsulates a database action and how the result of that action is
> processed. No Operation subclass does any more. Yes, full specification of
> the action can be complex, e.g. the various Parameterized subclasses, and
> in the case of StaticMultiOperation and DynamicMultiOperation the result
> processing is complex. But every method on Operation either adds to the
> specification of the action or to the processing of the result.
> It would be possible to split Operation into Action and Result components.
> Something like
>   conn.parameterizedAction(selectSql)
>     .set(arg, value, type)
>     .rowResult()
>     .initialValue( () -> new ArrayList() )
>     .rowAggregator( (p, r) -> { p.add(r.get(column, javaType)); return p; } )
>     .submit();
> I don’t see the value. Negotiating the path of fooAction/barResult doesn’t
> seem to add anything.

I'm more than willing to discuss the concept, although I don't completely
understand your example yet, because again, I don't fully understand what
initialValue() or rowAggregator() do. Perhaps that's already quite telling.

I do like the split between Action and Result, though. After all, this is
what the Flow API does as well. From the client's perspective, an Action
corresponds to a Publisher and a Result corresponds to a Subscriber, whereas
from the driver / server perspective, things are inverted: the Action is
hooked to a Subscriber, whereas the Result is hooked to a Publisher.
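That role inversion can be sketched with the Flow API itself. The following is a minimal, self-contained example (all names are mine, not from any proposed API): one side plays the "server" and publishes rows, the other plays the "client" and subscribes with explicit back pressure:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class FlowRoles {

    // The "server" publishes rows; the "client" subscribes and applies
    // back pressure by requesting one row at a time.
    static List<String> collect() throws InterruptedException {
        List<String> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);

        try (SubmissionPublisher<String> results = new SubmissionPublisher<>()) {
            results.subscribe(new Flow.Subscriber<String>() {
                private Flow.Subscription subscription;

                public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1); // back pressure: one element at a time
                }

                public void onNext(String row) {
                    received.add(row);
                    subscription.request(1);
                }

                public void onError(Throwable t) { done.countDown(); }

                public void onComplete() { done.countDown(); }
            });
            results.submit("row 1");
            results.submit("row 2");
        } // close() signals onComplete() once pending rows are delivered

        done.await();
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(collect()); // [row 1, row 2]
    }
}
```

The same two interfaces describe both directions: only which side implements Publisher and which implements Subscriber changes.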

Ultimately, all the different types of Operations are just a combination of:

- Client "publishing" SQL statements, bind variables, result specifications
(e.g. OUT parameter registrations)
- Client "subscribing" to the outcome of the above
- Server "subscribing" to client requests
- Server "publishing" results from the database (including interlaced
update counts / result sets / errors / warnings / server output, as
discussed here [1])

In addition to the above, should the new API embrace vendor-specific
messaging (e.g. Oracle AQ), we could take this one step further:

- Client "publishes" AQ messages by enqueuing them
- Client "subscribes" to AQ queues, receiving messages
- Server "subscribes" to client messages
- Server "publishes" AQ messages to clients

With all this in mind, I think the Operation abstraction might just be the
wrong one if you want to include the Flow API, though, as you've mentioned
several times, you don't seem to want that.

> As I responded to Jens, PublisherOperation is a mistake and should not
> have been uploaded. This is not a Flow based API.

Yes, I've read that thread as well, and I'll be reading through the one
where Konrad and Viktor got involved soon. While I understand the sentiment
that it could be a mistake in the current API's context (either the API is
completely reactive, or not at all), I'm inclined to think that the Flow
API would have been a good thing.

If it is not, may I suggest removing all references to Flow entirely, e.g.:

- Connection.onSubscribe()
- Row -> Flow.Subscription dependency

Also, the DynamicMultiOperation.resultHandler() callback-based API seems, to
me, more in line with a Flow-based API...

> It is a CompletableFuture based API. There is a need for back pressure in
> two places and we have tried, perhaps unfortunately, to shoehorn Flow into
> those places. For those that want a Flow based API this is not that API. We
> explored that problem space and did not come up with anything that met our
> goals. Not to say that it isn’t possible, but we did not succeed.

Just to get this right, those places are:

- Bind variables
- Results (through Submission), including things like Lobs


What I'm having trouble understanding here is: the CompletableFuture API is
primarily an API that runs Java logic on the ForkJoinPool by default, or on
some other Executor on demand.

I have always been a bit surprised by the excitement about this API,
because it seems to be of quite limited use and very tightly coupled to
the ForkJoinPool. For instance, if a custom Executor is passed to any
method, the resulting CompletableFuture will not "remember" this Executor
but will again default to the ForkJoinPool. Yet the "heavy work" doesn't
happen in the client; it happens on the server. I'm afraid that a
CompletableFuture-based API will put a blocking burden on the ForkJoinPool
(which is mainly used for things like client-side parallel stream
processing), saturating it while waiting for server responses.
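The Executor behaviour is easy to demonstrate: a dependent *Async stage that is given no explicit Executor runs on the default executor (the common ForkJoinPool on a typical multi-core JVM), not on the Executor the previous stage used. A small sketch (class and thread names are mine):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class DefaultExecutorDemo {

    // Returns the names of the threads that executed stage 1 and stage 2.
    static String[] stageThreads() throws Exception {
        ExecutorService custom = Executors.newSingleThreadExecutor(
            r -> new Thread(r, "custom-executor"));
        try {
            return CompletableFuture
                // stage 1: explicitly runs on the custom executor
                .supplyAsync(() -> Thread.currentThread().getName(), custom)
                // stage 2: no executor given, so it does NOT "remember"
                // the custom executor; it runs on the default executor
                // (the common ForkJoinPool on a typical JVM)
                .thenApplyAsync(first -> new String[] {
                    first, Thread.currentThread().getName() })
                .get();
        } finally {
            custom.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        String[] t = stageThreads();
        System.out.println("stage 1: " + t[0]); // custom-executor
        System.out.println("stage 2: " + t[1]); // e.g. ForkJoinPool.commonPool-worker-1
    }
}
```

To keep every dependent stage off the common pool, the custom Executor has to be passed to each *Async call individually.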

Perhaps CompletionStage might be better suited, then, because it does not
impose any such implementation?

> If the community wants an async database access API based on Flow in the
> Java 10 equivalent release, someone else is going to have to develop a
> fairly complete initial draft. The EG needs to make progress on what we
> have to have any hope of inclusion in the Java 10 equivalent release.

If Java 10 = Java 18.3, due in 5-6 months, then I'm not too optimistic that
the asynchronous JDBC API will stabilise by then. To me, JDBC seems like
one of the most important Java APIs. Design time here is definitely time
well spent, so I'm not in a hurry to get this Flow-based API :)



More information about the jdbc-spec-discuss mailing list