Internal and External truncation conditions

Doug Lea dl at
Sun Feb 10 05:12:18 PST 2013

On 02/09/13 18:24, Remi Forax wrote:
> if forEachUntil takes a function that returns a boolean, it's easy:
> try (BufferedReader r = Files.newBufferedReader(path, Charset.defaultCharset())) {
>    return r.lines().parallel().forEachWhile(line -> {
>       if (regex.matcher(line).matches()) {
>         return false;
>       }
>       ...process the line
>       return true;
>     });
> }

Which then becomes a variant of what I do in ConcurrentHashMap
search{InParallel,Sequentially}, which applies not only to this
but to several other usage contexts:

     /**
      * Returns a non-null result from applying the given search
      * function on each (key, value), or null if none.  Upon
      * success, further element processing is suppressed and the
      * results of any other parallel invocations of the search
      * function are ignored.
      * @param searchFunction a function returning a non-null
      * result on success, else null
      * @return a non-null result from applying the given search
      * function on each (key, value), or null if none
      */

You'd use it here with a function that processes each matching
element (returning null), and otherwise returns the first
non-match, stopping the search. Or rework it in any of a couple
of ways to similar effect.
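A minimal sketch of that search-style usage on a plain list, assuming a hypothetical `search` helper that stops at the first non-null result (the actual CHM methods apply to (key, value) pairs and may run in parallel; this sequential version only illustrates the short-circuit protocol):

```java
import java.util.List;
import java.util.function.Function;

public class SearchDemo {
    // Hypothetical search-style helper: applies f to each element and
    // returns the first non-null result, suppressing further processing;
    // returns null if every application returns null.
    static <T, R> R search(List<T> items, Function<T, R> f) {
        for (T item : items) {
            R r = f.apply(item);
            if (r != null)
                return r;          // success: stop processing here
        }
        return null;               // no element produced a result
    }

    public static void main(String[] args) {
        List<String> lines = List.of("foo", "bar", "stop", "baz");
        // Process each matching line (returning null to continue);
        // return the first non-match to short-circuit.
        String firstNonMatch = search(lines, line -> {
            if (line.matches("foo|bar")) {
                // ...process the line
                return null;       // keep going
            }
            return line;           // first non-match ends the search
        });
        System.out.println(firstNonMatch); // prints "stop"
    }
}
```

Note how the single null-vs-non-null convention carries both the "continue" signal and the final answer, which is why the nullness policy matters.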

This works well in CHM because of its nullness policy, which
allows this single method to serve as the basis for all possible
short-circuit/cancel applications. It is so handy when nulls
cannot be actual elements that it might be worth supporting
instead of forEachUntil? People using it would need to ensure
non-null elements.
Just a thought.

While I'm at it:

Sam seems to be asking for asynchronous cancellation of bulk
operations. I can't get myself to appreciate the utility of
doing this. JDK/j.u.c supports several other ways (especially
including the upcoming CompletableFutures) to carefully yet
relatively conveniently arrange/manage cancellation, especially
in IO-related contexts in which they most often arise. None
of them explicitly address bulk computations (although any
of them can do a bulk computation within a task). This is
a feature, not a bug. If you are processing lots
of elements, then only you know the responsiveness vs
overhead tradeoffs of checking for async cancel status.
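For example, a task doing a bulk computation can poll a shared cancel flag itself, at whatever granularity it chooses. This is only an illustration of that tradeoff, not a JDK API; the check interval here is arbitrary:

```java
import java.util.Arrays;
import java.util.concurrent.atomic.AtomicBoolean;

public class CancelDemo {
    // Check the cancel flag only every CHECK_INTERVAL elements:
    // a smaller interval is more responsive, a larger one cheaper.
    static final int CHECK_INTERVAL = 1024; // power of two for cheap masking

    static long sumUntilCancelled(long[] data, AtomicBoolean cancelled) {
        long sum = 0;
        for (int i = 0; i < data.length; i++) {
            if ((i & (CHECK_INTERVAL - 1)) == 0 && cancelled.get())
                break;                     // cooperative cancellation point
            sum += data[i];
        }
        return sum;
    }

    public static void main(String[] args) {
        long[] data = new long[10_000];
        Arrays.fill(data, 1L);
        AtomicBoolean cancelled = new AtomicBoolean(false);
        System.out.println(sumUntilCancelled(data, cancelled)); // prints 10000
    }
}
```

Only the author of such a loop knows whether checking every element, every thousand elements, or never is the right tradeoff, which is why pushing the check into the framework is unattractive.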

Requiring that all Stream bulk computations like reduce
continuously check for async cancel status between each
per-element operation is unlikely to satisfy anyone at all,
yet seems to be the only defensible option if we were to
support it.


More information about the lambda-libs-spec-observers mailing list