dl at cs.oswego.edu
Sun Jun 30 16:02:45 PDT 2013
On 06/30/13 17:48, Sam Pullara wrote:
> This is really ugly. How many wrapping layers might I end up with? I
> can't just
> throw the throwable. This is a very common pattern, much more common than
> rescue() semantics. Very rarely on failure do you want to set the result
This is a good point. When the stage methods were part of
CompletableFuture, it was too easy and cheap to be worth a
dedicated method: just call completeExceptionally in the function body.
But maybe there should be some way to get the same effect
using a stage method.
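The pattern referred to here can be sketched against the Java 8 API that shipped: keep the downstream future typed as a CompletableFuture so the function body itself decides whether to complete it normally or exceptionally. (Class and variable names below are illustrative, not from the discussion.)

```java
import java.util.concurrent.CompletableFuture;

public class DirectFail {
    public static void main(String[] args) {
        CompletableFuture<String> source = CompletableFuture.completedFuture("-42");
        CompletableFuture<Integer> result = new CompletableFuture<>();

        // Working at the CompletableFuture layer: the function body chooses
        // between complete() and completeExceptionally() on the downstream future.
        source.thenAccept(s -> {
            int n = Integer.parseInt(s);
            if (n < 0)
                result.completeExceptionally(new IllegalArgumentException("negative: " + n));
            else
                result.complete(n);
        });

        // source was already complete, so the action ran synchronously above.
        System.out.println(result.isCompletedExceptionally()); // true
    }
}
```

This is exactly what a CompletionStage-only type cannot offer, since completeExceptionally is not part of that interface.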
> I'd love to see:
> CompletableFuture<T> ifCompletedExceptionally(Consumer<Throwable> block);
> CompletableFuture<T> whenCompleted(BiConsumer<T, Throwable> block);
The more I try out variations of this, the more I think that
if you are going to do a lot of exception processing, you
will want the ability to call completeExceptionally in various
places, which means that you ought to be writing these at
the CompletableFuture layer, not the CompletionStage layer.
But ignoring this, it's probably reasonable to split the two
paths of at least the exceptionally() method into different methods:
CompletableFuture<T> exceptionallyTransform(Function<Throwable, ? extends T> fn)
CompletableFuture<T> exceptionallyPropagate(Consumer<Throwable> block)
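For reference, neither of these names (nor Sam's proposed spellings above) shipped as such; the Java 8 API that followed covers the same two paths with exceptionally (transform the Throwable into a replacement result) and whenComplete (observe the outcome and propagate it unchanged, essentially the proposed whenCompleted). A minimal sketch of both paths using those shipped methods:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

public class ExceptionPaths {
    public static void main(String[] args) throws Exception {
        CompletableFuture<Integer> failed = new CompletableFuture<>();
        failed.completeExceptionally(new IllegalStateException("boom"));

        // "Transform" path: map the Throwable to a replacement result.
        Integer recovered = failed.exceptionally(t -> -1).get();
        System.out.println(recovered); // -1

        // "Propagate" path: observe the Throwable; the outcome is unchanged.
        CompletableFuture<Integer> observed =
            failed.whenComplete((v, t) -> {
                if (t != null) System.out.println("saw: " + t.getMessage());
            });
        try {
            observed.get();
        } catch (ExecutionException e) {
            System.out.println("still failed: " + e.getCause().getMessage());
        }
    }
}
```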
>> Do you mean that we should try to cancel ongoing asyncs?
>> For the usual reasons, the best we can guarantee is to not run
>> them if they are triggered but haven't started yet. If you want
>> to do more, you have to do it yourself, for example, have some
>> shared atomic sentinel that they can read. (This is the same
>> issue that j.u.c has disappointed you about in the past, and
>> Brian has disappointed you about in Streams. It's not that we
>> don't like you(!), but no one knows of a reasonable general-purpose
>> solution to this, and we are coming to believe that nothing will ever
>> be better than relying on smart developers to roll their own
>> special-purpose solutions.)
>
> Yeah, you guys keep claiming that, while I see people with pretty general
> solutions getting by just fine — for example, not having takeWhile/takeUntil on
> Stream is a real black eye on the API.
The base issues are pretty straightforward, which does not
make them straightforwardly solvable:
* Active cancellation requires polling of non-local atomically
managed shared state.
* Choosing where and how often to poll is a function of the
responsiveness vs throughput vs resource requirements of users.
* No choice satisfies everyone. In particular, applications that do
not require active cancellation are slowed down proportionally to
how often this state is needlessly polled.
* Hence, the optimal solution in terms of expected quality of
service for a library framework is to force users to do the polling
themselves in their own code.
* Unfortunately, most users hate to do this.
Hard to win.
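For what it's worth, the "roll your own" approach sketched in the quoted text amounts to a shared AtomicBoolean that the async body polls, where the polling interval is exactly the responsiveness-vs-throughput knob the bullets describe. (Names are illustrative; the flag is set before launch here only to make the demo deterministic, where real code would flip it from another thread while the work runs.)

```java
import java.util.concurrent.CancellationException;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;
import java.util.concurrent.atomic.AtomicBoolean;

public class SentinelCancel {
    public static void main(String[] args) {
        AtomicBoolean cancelled = new AtomicBoolean(false);

        // Normally another thread flips this while the work runs; it is set
        // up front here only so the demo has a deterministic outcome.
        cancelled.set(true);

        CompletableFuture<Long> work = CompletableFuture.supplyAsync(() -> {
            long sum = 0;
            for (long i = 0; i < 1_000_000_000L; i++) {
                // Poll the shared sentinel occasionally; how often to poll is
                // the responsiveness vs. throughput tradeoff noted above.
                if ((i & 0xFFFF) == 0 && cancelled.get())
                    throw new CancellationException("stopped at i=" + i);
                sum += i;
            }
            return sum;
        });

        try {
            work.join();
        } catch (CompletionException e) {
            System.out.println("cancelled: " + e.getCause().getMessage());
        }
    }
}
```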
More information about the lambda-libs-spec-observers mailing list