taming resource scopes

Maurizio Cimadamore maurizio.cimadamore at oracle.com
Mon May 31 11:57:17 UTC 2021

On 30/05/2021 00:18, Samuel Audet wrote:
> Hi, Maurizio,
> Thanks for taking the time to write this down! It's all very 
> interesting to have a sense of how various resources could be managed.
> This is starting to sound a lot like reference counting, and there are 
> many implementations out there that use reference counting 
> "automatically", most notably Swift. I'm not aware of any concurrent 
> thread-safe implementation though, which would be nice if it can be 
> achieved in general, but even if that works out, I'm assuming we could 
> still end up with reference cycles. What are your thoughts on that 
> subject? CPython deals with those with GC...
Hi Samuel,

I believe Swift's Automatic Reference Counting (not to be confused with 
Rust's Atomic Reference Counting - the two tragically share the same 
acronym :-)) is a form of garbage collection where, rather than having a 
separate process grovelling through memory (like the JVM's GC does), 
increments and decrements are generated (presumably by the compiler) in 
the user code directly, thus achieving a lower footprint solution, which 
might be good in certain situations. Of course, we know the issues with 
reference counting when used as a _general_ mechanism for garbage 
collection - the main ones being the inability to deal with cycles, and 
the cost of making increments and decrements atomic, as you say. By the 
way, the latter _can_ be addressed: in fact, the other ARC, Rust's one, 
does exactly that [1], and, in a way, what we do for (shared) resource 
scopes is inspired by that work. It is just generally less efficient, as 
it involves atomic operations.
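For readers unfamiliar with how the atomic flavor of reference counting works, here is a rough, self-contained Java sketch in the spirit of Rust's Arc. All names here are invented for illustration - this is not part of any Panama (or Swift/Rust) API:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of an atomically reference-counted resource,
// in the spirit of Rust's Arc. Not a Panama API.
final class RefCounted<T> {
    private final T value;
    private final Runnable cleanup;
    private final AtomicInteger count = new AtomicInteger(1);

    RefCounted(T value, Runnable cleanup) {
        this.value = value;
        this.cleanup = cleanup;
    }

    T get() {
        if (count.get() == 0) throw new IllegalStateException("already released");
        return value;
    }

    void acquire() {                    // like cloning an Arc: an atomic increment
        count.incrementAndGet();
    }

    void release() {                    // like dropping an Arc: an atomic decrement
        if (count.decrementAndGet() == 0) {
            cleanup.run();              // last owner deterministically frees the resource
        }
    }
}
```

The atomic increments and decrements are exactly the cost mentioned above: every acquire/release is a contended read-modify-write, which a confined (single-thread) scope can avoid entirely.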

Now, when it comes to resource scopes, it is not our goal to come up 
with a perfect and general garbage collection mechanism. If the users 
wanted that, well, they could just use the GC itself (and use an 
implicit, GC-backed scope). What we're after here is a mechanism which 
provides a _reliable_ programming model in the face of deterministic 
deallocation. The NIO async [2] use case shows the problem pretty clearly:

1. thread A initiates an async operation on a resource R
2. at some point later, thread B picks up resource R and starts working

In this case, you need to define a "bubble" which starts when thread A 
submits the async operation, and ends when thread B has finished 
executing that operation. If R is released between (1) and (2), several 
errors of varying severity can occur - from an exception up to a VM 
crash, if the resource is released after the IO operation has already 
been submitted to the OS.
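A minimal plain-Java sketch of such a bubble (the Scope and submitAsync names are invented for illustration, and an executor stands in for the real async I/O machinery):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of the async "bubble": the scope is kept alive
// from submission (thread A) until completion (thread B).
// None of these names come from the Panama API.
final class AsyncBubbleDemo {
    static final class Scope {
        private final AtomicInteger acquires = new AtomicInteger();
        private volatile boolean closed;

        void acquire() {
            if (closed) throw new IllegalStateException("already closed");
            acquires.incrementAndGet();
        }
        void release() { acquires.decrementAndGet(); }
        void close() {
            if (acquires.get() > 0)
                throw new IllegalStateException("scope has pending acquires");
            closed = true;
        }
        boolean isClosed() { return closed; }
    }

    static Future<?> submitAsync(ExecutorService pool, Scope scope, Runnable op) {
        scope.acquire();                  // thread A: the bubble opens at submission
        return pool.submit(() -> {
            try {
                op.run();                 // thread B: safe to touch the resource
            } finally {
                scope.release();          // the bubble closes on completion
            }
        });
    }
}
```

While the bubble is open, any attempt to close the scope fails with an exception instead of pulling the resource out from under the pending operation.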

In the document, I note that native calls are not too different from the 
async use case. Ideally, you'd like for all resources used by a native 
call to remain alive until the native code completes. These kinds of 
invariants have to be built _on top_ - the JVM's classic garbage 
collection cannot help when deterministic deallocation is in the 
picture. And, while we can use GC-related techniques to speed up access 
to shared segments w/o compromising safety, we can only do that if (a) 
access to a 
resource is lexically enclosed (e.g. if you could write a try/finally 
block around it - which e.g. you can't do in the async case, as it spans 
across multiple threads) and (b) if we can make sure that the number of 
the stack frames involved in the resource access is bounded (which is 
not the case with native calls, as, with upcalls, the stack during a 
native call can grow w/o bounds).

I think it's also very interesting to notice that, even when working 
with a _confined_ segment, you need some way to block deterministic 
closure, otherwise you end up with issues in the following case:

* thread A creates segment S
* thread A passes pointer to S to native code
* native code upcalls to some Java code
* Java code (again, in thread A) closes the scope to which S belongs
* when the upcall completes, control returns to the native code, which 
attempts to dereference S
* crash

Here we only have access from one thread - and even that is not enough 
to guarantee safety, as some accesses (those in native code) are 
blissfully unaware of the liveness checks occurring in the Java code.
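One way to close that hole, sketched in plain Java (again, all names are invented for illustration - this is not the CLinker API): the downcall stub keeps the scope acquired for the whole native call, so a close() attempted from within an upcall fails with an exception rather than letting native code dereference freed memory.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: the downcall stub pins the scope for the
// duration of the native call, so an upcall cannot close it mid-call.
// Names are illustrative, not Panama API.
final class UpcallGuardDemo {
    static final class Scope {
        final AtomicInteger acquires = new AtomicInteger();
        boolean closed;
        void close() {
            if (acquires.get() > 0)
                throw new IllegalStateException("cannot close: scope is in use");
            closed = true;
        }
    }

    static void downcall(Scope scope, Runnable nativeCode) {
        scope.acquires.incrementAndGet();   // pin the scope before entering native code
        try {
            nativeCode.run();               // may upcall back into Java
        } finally {
            scope.acquires.decrementAndGet();
        }
    }
}
```

Note that this turns a VM crash into a well-defined Java exception raised inside the upcall, which is the kind of reliability the document is after.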

For these reasons we need some way to define a "bubble" where close 
operations are restricted. This is not a new concept, in fact the API 
proposed for Java 17 already had a concept of acquire/release; the 
document just describes a possible restacking where, instead of dealing 
with acquire/release calls directly, clients set up temporal 
dependencies between scopes (but under the hood the acquire/release 
remains). The only way (I know of) to avoid reference counting and still 
get the benefits of deterministic deallocation would be to track 
resource usage at compile-time (e.g. memory ownership) - but, when 
calls to foreign functions are involved, not even these more advanced 
systems would be enough.

One last note: what we do is not, strictly speaking, reference counting 
either :-) Reference counting is symmetric, at least in its classic 
definition. The following works (in pseudocode):

     acquire(R);
     // use resource
     release(R);

But so does this:

     // use resource
     release(R);

There is something wrong with the latter example, as a client is 
attempting to decrement a counter which was incremented by some other 
use of the resource. In our API, releasing a scope (or decrementing the 
scope counter, if you will) can only be done by the very client that 
did the acquire (or increment). This is what makes the API safe - a 
plain reference counting mechanism (even if atomic) would have done 
nothing for the NIO use case, for instance, as a client could still have 
decremented the counter enough times so that a call to close() was 
possible, thus defeating the very purpose of the reference counting.
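To make that asymmetry concrete, here is a plain-Java sketch (invented names, not the proposed API) in which releasing is only possible through the handle returned by the matching acquire, and each handle can be released at most once:

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: a client cannot "guess" extra decrements the way
// it could with a plain shared counter - it must hold the handle that
// the acquire produced. Names are illustrative, not Panama API.
final class HandleScope {
    private final AtomicInteger acquires = new AtomicInteger();
    private volatile boolean closed;

    final class Handle {
        private final AtomicBoolean released = new AtomicBoolean();
        public void release() {
            if (!released.compareAndSet(false, true))
                throw new IllegalStateException("handle already released");
            acquires.decrementAndGet();
        }
    }

    Handle acquire() {
        if (closed) throw new IllegalStateException("already closed");
        acquires.incrementAndGet();
        return new Handle();            // release is tied to this handle only
    }

    void close() {
        if (acquires.get() > 0)
            throw new IllegalStateException("scope still acquired");
        closed = true;
    }
}
```

With a plain counter, any client could call release() enough times to let close() succeed; here, a rogue decrement requires a handle the client never received.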


[1] - https://doc.rust-lang.org/std/sync/struct.Arc.html
[2] - https://inside.java/2021/04/21/fma-and-nio-channels/

> Samuel
> On 5/30/21 2:15 AM, Maurizio Cimadamore wrote:
>> On 29/05/2021 15:35, Chris Vest wrote:
>>> Hi,
>>> It's not clear to me why the Handle acquire/release API is inferior 
>>> to this new proposal, or why it can't solve the use cases discussed.
>>> Looks fine, otherwise.
>> Hi Chris,
>> the acquire/release is not inferior in any way. Expressiveness-wise 
>> they do exactly the same thing. As I said in another email, there is 
>> a bit less cost in the sense that not having an explicit release 
>> removes the costs associated with making sure that you cannot release 
>> multiple 
>> times (which is an extra CAS in the shared case). But these are small 
>> things.
>> The approach described in this document is an attempt to simplify the 
>> API a bit. I think the code in the proposal looks a bit better 
>> (especially when looking at how the NIO code can be simplified) and, 
>> more importantly, for the user, reasoning in terms of temporal 
>> dependencies between scopes is, I think, more intuitive than thinking 
>> about incrementing and decrementing a counter. So I was trying to 
>> offer an API which did the same thing as acquire/release, but in a 
>> way that (perhaps!) could be more easily understood.
>> But under the hood we still have counters and acquires and releases: 
>> this is (just) a discussion on how to better surface them to clients.
>> Maurizio
>>> Cheers,
>>> Chris
>>> On Fri, 28 May 2021 at 18:34, Maurizio Cimadamore 
>>> <maurizio.cimadamore at oracle.com> wrote:
>>>     Hi,
>>>     we've been looking beyond 17 at things we can do to improve 
>>> support for
>>>     resource scopes, especially in the context of native calls. I 
>>> tried to
>>>     capture the various things we explored in the writeup below:
>>> https://inside.java/2021/05/28/taming-resource-scopes/
>>>         I think overall the takeaway points are pretty good:
>>>     * we can simplify the acquire/release scope mechanism with a 
>>> better, higher-level API
>>>     * we can enhance safety when calling native functions using _any 
>>> kind_
>>>     of resource scopes (with relatively little overhead)
>>>     * we can make it easier to tailor the safety characteristics of 
>>> CLinker
>>>     to suit application needs
>>>     Cheers
>>>     Maurizio

More information about the panama-dev mailing list