RFR (S): 8131319: Move G1Allocator::_summary_bytes_used back to G1CollectedHeap

Tom Benson tom.benson at oracle.com
Fri Jul 17 17:20:11 UTC 2015

Hi Thomas,

On 7/17/2015 11:55 AM, Thomas Schatzl wrote:
> Hi,
> On Fri, 2015-07-17 at 10:02 -0400, Tom Benson wrote:
>> Hi again,
>> Forgot to reply to this part:
>>>> Actually, the PLAB allocators do exactly that every time a region is
>>>> retired via updating the actual bytes copied in the collector policy
>>>> which is then later added to the total.
>>>> I will look into this since it seems it is not only me not liking the
>>>> "if (archive_allocator != NULL) { ... }" copy&paste code here as it
>>>> might be easy to forget.
>>>     so I looked through the code again, and one way of passing on the
>>> amount of memory actually used would be to return that value in
>>> G1Allocator::end_archive_alloc_range(), to be added by the caller to the
>>> _summary_bytes_used.
>> That would work with archive regions used as they are today, where there
>> will not be a GC while one is 'active.'   But if we want to continue to
>> allow that possibility, doing it there won't eliminate the 'clear_use'
>> calls.   I think calling g1h->increase_used would eliminate both the
>> 'clear_use' calls and the check to see if _archive_allocator exists when
>> computing the used size,  while still allowing GCs while an archive
>> region was active.
> One option could be that the archive allocator got notified of GC and
> did whatever it needs to to update global data like the G1Allocator
> (with init/release_*_regions()).
> The problem with immediate updates would obviously be concurrency, e.g.
> if multiple threads were allocating into an archive region(s). Or
> another allocator doing its updates directly like for pinned regions.
> Not sure about how MT safe the current archive allocator is actually.
It is designed for single-threaded use from the VM thread, not general
multi-threaded use.
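(As a toy illustration only, not actual HotSpot code: the two bookkeeping
approaches discussed above could be modeled with simplified stand-in classes.
All names here are hypothetical; the real G1Allocator / G1CollectedHeap
interfaces differ.)

```cpp
#include <cstddef>

// Hypothetical stand-in for G1CollectedHeap's used-bytes accounting.
class HeapModel {
  size_t _summary_bytes_used;
public:
  HeapModel() : _summary_bytes_used(0) {}
  // Option B from the thread: the allocator updates the heap directly,
  // so no "clear_used"/re-add dance is needed across a GC.
  void increase_used(size_t bytes) { _summary_bytes_used += bytes; }
  size_t used() const { return _summary_bytes_used; }
};

// Hypothetical stand-in for the archive allocator.
class ArchiveAllocatorModel {
  size_t _used_bytes;
public:
  ArchiveAllocatorModel() : _used_bytes(0) {}
  void allocate(size_t bytes) { _used_bytes += bytes; }
  // Option A from the thread: return the bytes actually used so the
  // caller can fold them into the heap's summary total once, at the end.
  size_t end_archive_alloc_range() {
    size_t result = _used_bytes;
    _used_bytes = 0;
    return result;
  }
};
```

Under option A the caller would do
`heap.increase_used(alloc.end_archive_alloc_range());` once the range is
closed; under option B the allocator would call `increase_used()` itself as
regions are retired, which is what keeps the accounting correct even if a GC
happens while an archive range is still active.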

Anyway,  your change for 8131319 still looks good to me.  Tnx,

>> Really, I guess this could be the subject of a separate change later.
>   Okay.
> Thanks,
>    Thomas
