Comments on SegmentAllocator and ResourceScope

Michael Zucchi notzed at
Mon May 17 06:47:34 UTC 2021


On 17/5/21 1:09 am, Stig Rohde Døssing wrote:
> Hi,
> JEP 412 adds a new interface for allocating segments called
> SegmentAllocator (great idea!), and I have a few questions regarding the
> API and implementation as they exist in the foreign-jextract branch.
> 1: The ArenaAllocator block and max alloc sizes are hard coded. Is this
> intended to be permanent, or will the block size become a parameter of
> ArenaAllocator later? I'm not sure why it makes sense to hard code these
> parameters? As far as I can tell, this will cause issues for at least the
> bounded arena allocator, since requests above MAX_ALLOC_SIZE would cause
> calls to newSegment, which throws an error.

That hardcoded default size seems reasonable enough to me as it's a 
trade-off between various issues like system memory fragmentation, 
overheads, performance, and simplicity.  Making it configurable adds more 
unit tests, documentation and general faffing about for questionable 
gain.  It does seem unusually restrictive not to allow other sizes, 
absent any other context where a specific size might be important - like 
having a page-based & page-aligned allocator.  NativeMemorySegmentImpl 
has machinery for handling page-aligned allocations, so that may indeed 
be the reason it's 4KB.

The latter just has to be a bug, as you spotted, and seems to come from 
trying to be clever.  See the next comment.

> 2: The MAX_ALLOC_SIZE constant also seems strange. It is set to half the
> block size, and any requests for segments above that size cause a fresh
> segment to be allocated. I understand why it makes sense to allocate a new
> segment if the request is larger than the block size, but if the request is
> smaller than the block size, why is not good enough to check if there's
> room in the current segment, and if so, return the appropriate slice, even
> if the request exceeds half the block size?
> See

Looks a bit odd, but note that the original segment remains the current 
one for subsequent allocations, so it will still be used for other small 
allocations; that probably doesn't have much effect, but it might reduce 
allocation holes.  Also, whatever the value is, the limit has to be half 
the block allocation size for alignment to work properly, since the base 
allocations are assumed to be unaligned.
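To make that concrete, here's a small sketch (plain Java; BLOCK_SIZE and 
MAX_ALLOC are assumed stand-ins for the branch's hard-coded constants) 
verifying that with the limit at half the block size, the padding needed 
to align a request can never push it past the end of an unaligned block:

```java
public class AlignmentHeadroom {
    // Assumed to mirror the hard-coded values discussed above (4KB / 2KB);
    // the argument itself is independent of the actual numbers.
    static final long BLOCK_SIZE = 4096;
    static final long MAX_ALLOC = BLOCK_SIZE / 2;

    // Would an aligned request of MAX_ALLOC bytes fit in a block whose
    // (unaligned) start address is 'base'?
    static boolean fits(long base, long align) {
        long aligned = (base + align - 1) & -align; // round up to alignment
        long padding = aligned - base;              // at most align - 1 bytes
        return padding + MAX_ALLOC <= BLOCK_SIZE;
    }

    public static void main(String[] args) {
        // padding <= align - 1 <= MAX_ALLOC - 1, so padding + MAX_ALLOC
        // can never exceed BLOCK_SIZE - check exhaustively:
        for (long base = 0; base < BLOCK_SIZE; base++)
            for (long align = 1; align <= MAX_ALLOC; align <<= 1)
                if (!fits(base, align))
                    throw new AssertionError("base=" + base + " align=" + align);
        System.out.println("always fits");
    }
}
```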

It's also testing against the bytesAlignment-aligned size, which is not 
the value bytesAlignment applies to; but by having MAX_ALLOC_SIZE == 
BLOCK_SIZE/2 this also 'magically' tests the unaligned allocation size 
limit required to guarantee the alignment can be satisfied.  Seems 
overly tricksy.  I'm also not a fan of the "this should not be possible" 
comment and assertion, which only /seems/ to be there to check this 
tricksiness and enforce the internal expected behaviour of trySlice() - 
which isn't really serving any purpose as a separate function, since the 
'try' only affects the first invocation and it isn't used anywhere else.

This also causes the bug above, and suggests that the straightforward 
logic flow of "try-to-fit ELSE IF SMALL allocate-and-slice ELSE 
allocate-separate", all in a single function, would be better - even if 
it does end up with more holes in certain cases (you win some, you lose 
some).
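A rough sketch of that single-function flow, using plain long offsets as 
stand-ins for MemorySegment slices (all names here are illustrative, not 
the JDK's):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of "try-to-fit ELSE IF SMALL allocate-and-slice ELSE
// allocate-separate" as one function.  A simulated address counter
// stands in for native allocation.
class ArenaSketch {
    static final long BLOCK_SIZE = 4096;  // assumed block size

    long blockBase = -1;   // simulated address of the current block, -1 = none
    long blockUsed = 0;    // bytes consumed in the current block
    long nextAddr = 7;     // simulated address space, starts misaligned
    final List<long[]> blocks = new ArrayList<>(); // {addr, size} of raw blocks

    long allocate(long size, long align) {
        // 1: try to fit in the current block, whatever the request size
        if (blockBase >= 0) {
            long aligned = align(blockBase + blockUsed, align);
            if (aligned + size <= blockBase + BLOCK_SIZE) {
                blockUsed = aligned + size - blockBase;
                return aligned;
            }
        }
        // 2: small request - start a fresh block and slice from it
        if (size <= BLOCK_SIZE / 2 && align <= BLOCK_SIZE / 2) {
            blockBase = newBlock(BLOCK_SIZE);
            long aligned = align(blockBase, align);
            blockUsed = aligned + size - blockBase;
            return aligned;
        }
        // 3: large request - allocate separately, oversized so it can be aligned
        return align(newBlock(size + align - 1), align);
    }

    static long align(long addr, long a) { return (addr + a - 1) & -a; }

    long newBlock(long size) {
        long b = nextAddr;
        nextAddr += size;
        blocks.add(new long[]{b, size});
        return b;
    }
}
```

Note step 1 deliberately runs for large requests too, which is where this 
version trades a few holes for simpler logic.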

As an aside, the javadoc doesn't seem to mention anywhere that 
bytesAlignment must be a power of 2.  Neither is it checked for 
validity; that may be on purpose for performance reasons, but the 
comparable posix_memalign()/aligned_alloc() fail with EINVAL if it 
isn't, and things can get pretty bizarre if it isn't valid.  It isn't 
an expensive check, e.g. ((i & (i - 1)) == 0), or equivalently 
(Long.lowestOneBit(i) == i).
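For illustration, such a check could look like this (hypothetical helper, 
not proposing an actual JDK method name):

```java
public class PowerOfTwoCheck {
    // Cheap validity check for an alignment argument: a power of two has
    // exactly one bit set, so clearing the lowest set bit leaves zero.
    static boolean isPowerOfTwo(long i) {
        return i > 0 && (i & (i - 1)) == 0;
        // equivalently: i > 0 && Long.lowestOneBit(i) == i
    }

    public static void main(String[] args) {
        System.out.println(isPowerOfTwo(8));   // true
        System.out.println(isPowerOfTwo(12));  // false
    }
}
```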

> 3: Is the SegmentAllocator interface intended to support more advanced
> memory pooling, such as pools where segments can be returned to the pool
> after use? If so, how will the application indicate to the pool that a
> segment can be returned? The Javadoc for MemorySegment and ResourceScope
> discourage implementations outside the JDK, so the pool will not be able to
> e.g. use wrapper classes to make "freed" segments return to the pool.

Looks like a 'no' to me, for the reasons you list.  It's been years in 
the making and it's notable that those facilities are explicitly not 
included at this point.  Also "should not" is a bit stronger than merely 
"discourage", and I highly suspect they will be sealed interfaces when 
that facility is available (such sealing is mentioned for other panama 
interfaces).

Having said that ... you could use the slice() function to split a given 
segment however you wish.  It wouldn't be hard to create any sort of 
pool allocator that way; it would just need an explicit free function 
rather than allowing the slices to close themselves.  In my experience 
such allocators have limited utility anyway.
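For what it's worth, such an explicit-free pool might look something like 
this sketch, with chunk offsets standing in for the MemorySegment slices 
a real implementation would hand out (names are illustrative):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of an explicit-free pool: fixed-size chunks are "sliced" from
// one backing region and returned by an explicit free(), rather than by
// closing the slice.  In real code each offset would correspond to a
// region.asSlice(offset, chunkSize) result.
class FixedPool {
    final long chunkSize;
    final Deque<Long> free = new ArrayDeque<>(); // offsets of free chunks

    FixedPool(long regionSize, long chunkSize) {
        this.chunkSize = chunkSize;
        for (long off = 0; off + chunkSize <= regionSize; off += chunkSize)
            free.push(off);
    }

    long allocate() {
        Long off = free.poll();
        if (off == null) throw new IllegalStateException("pool exhausted");
        return off;
    }

    void free(long off) { free.push(off); } // explicit return to the pool
}
```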

> 4: ResourceScope has the Handles concept, which are objects that are
> acquired from the scope, which will cause the "close" method to fail (and
> throw exceptions) until they are released. Would it make sense to make
> Handles Autocloseable, to allow them to be used with try-with-resources? It
> seems like it is always an error to acquire and not release a handle.

AutoCloseable seems a little unfashionable for some types of resources, 
but there are probably other reasons here.

While a thread might use acquire()/release() as a resource-liveness 
check to bracket its own use, I imagine you're often passing Handle 
objects you create to other threads so that liveness is guaranteed 
there, and instead the liveness check in the main thread guards against 
prematurely freeing the resource.
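A simplified stand-in for that pattern (this is not the JDK's 
ResourceScope implementation, just a sketch of the acquire/release 
shape; it ignores the acquire-vs-close race for brevity):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: a worker holds a "handle" while it uses a resource, and
// close() fails while any handle is outstanding.
class ScopeSketch {
    private final AtomicInteger handles = new AtomicInteger();
    private volatile boolean closed;

    // Returns the matching "release" action for the acquired handle.
    Runnable acquire() {
        if (closed) throw new IllegalStateException("already closed");
        handles.incrementAndGet();
        return handles::decrementAndGet;
    }

    void close() {
        if (handles.get() > 0)
            throw new IllegalStateException("scope still in use");
        closed = true;
    }
}
```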

> 5: The ResourceScope Javadoc indicates that Handles exist to help avoid
> concurrency errors where one thread frees memory currently being accessed
> by another thread. It says that the thread closing the scope should not
> repeatedly call ResourceScope.close until no exceptions are thrown, but
> should instead use proper synchronization mechanisms, e.g. Handles to
> coordinate. How will the thread closing the scope use handles to
> coordinate? I don't see any methods on ResourceScope that allows the
> closing thread to avoid calling "close" on a scope with open handles. Will
> the closing thread just have to call "close" and hope for the best?

It could read clearer, but I read that section as saying you need to 
use the handle methods to ensure proper behaviour; exceptions would then 
expose code faults.  But you need to use external synchronisation 
mechanisms to make sure you aren't closing things while they're in use.

You always need to do this in such contexts, and often it's just a 
natural consequence of a given api, so it seems out of scope (hoho) for 
this one.  e.g. you can't close some context which manages its memory 
if you passed one of the objects it created to another thread which is 
still busy using it.
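e.g. one minimal external-synchronisation shape, where the owning thread 
join()s the worker before it frees anything (illustrative only):

```java
// The owning thread hands a resource to a worker and join()s it before
// "closing", so close can never race with use.  A StringBuilder stands
// in for memory owned by a scope.
public class JoinBeforeClose {
    public static void main(String[] args) throws InterruptedException {
        StringBuilder resource = new StringBuilder();
        Thread worker = new Thread(() -> resource.append("used"));
        worker.start();
        worker.join();  // external synchronisation: wait for the user
        // only now is it safe to close/free the resource
        System.out.println(resource);
    }
}
```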


More information about the panama-dev mailing list