RFC: Epsilon GC JEP

Aleksey Shipilev shade at redhat.com
Tue Jul 18 13:44:24 UTC 2017

On 07/18/2017 03:20 PM, Erik Österlund wrote:
> If I understand this correctly, the motivation for EpsilonGC is to be able to
> measure the overheads due to GC pauses and GC barriers and measure only the
> application throughput without GC jitter, and then use that as a baseline for
> measuring performance of an actual GC implementation compared to EpsilonGC.

There are several motivations, all listed in the "Motivation" section of the
JEP. Performance work is one of them, that's right.

> However, automatic memory management is quite complicated when you think about
> it. 

Yes, and lots of those complications are handled by the shared code that
Epsilon calls into, just like any other GC.

> Will EpsilonGC allocate all memory up-front, or expand the heap? In the case
> where it expanded on-demand until it runs out of memory, what consequences does
> that potential expansion have on throughput? 

It does have consequences, the same kind of consequences TLAB allocation has.
You can trim them down with larger TLABs, larger pages, and pre-touching, all
of which are handled outside of Epsilon, by shared code.
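For reference, all of those knobs are plain shared-runtime flags rather than anything Epsilon-specific. A hypothetical invocation might look like this (the heap size and TLAB size here are made-up illustration values; `-XX:+UseEpsilonGC` is the flag proposed in the JEP, the rest are existing HotSpot flags):

```shell
# Hypothetical invocation; -XX:+UseEpsilonGC is the flag proposed in the JEP.
# -Xms == -Xmx        : fixed heap, no expansion during the run
# -XX:TLABSize        : larger TLABs mean fewer slow-path refills
# -XX:+UseLargePages  : larger pages, handled entirely by shared code
# -XX:+AlwaysPreTouch : touch the heap up front, not on first allocation
java -XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC \
     -Xms4g -Xmx4g -XX:TLABSize=4m \
     -XX:+UseLargePages -XX:+AlwaysPreTouch \
     MyApp
```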

> In the case it is allocated upfront, will pages be pre-touched?
Oh yes, there are two lines of code that also handle AlwaysPreTouch; otherwise
it is handled by the shared heap space allocation code. I would like to see
AlwaysPreTouch handled more consistently across GCs, though. This is my point
from another mail: if Epsilon has to do something on its own, it is a good sign
the shared GC utilities are not of much use.

> If so, what NUMA nodes will the pre-mapped memory map in to? Will mutators
> try to allocate NUMA-local memory?
I think this is handled by shared code, at least for NUMA interleaving. I would
hope that NUMA-aware allocation can be made granular to TLABs, in which case it
goes into shared code too, instead of having to reimplement it for every GC.
If not, then Epsilon is simply not fully NUMA-aware.
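For completeness, the NUMA behavior discussed here is also driven by shared HotSpot flags rather than by anything in Epsilon itself; a sketch (flag names from mainline HotSpot, `MyApp` a stand-in application):

```shell
# Shared HotSpot flags, not Epsilon-specific:
# -XX:+UseNUMAInterleaving interleaves heap memory across NUMA nodes;
# -XX:+UseNUMA enables NUMA-aware allocation where the chosen GC supports it.
java -XX:+UseNUMAInterleaving -XX:+UseNUMA MyApp
```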

> What consequences will the larger heap footprint have on the throughput
> because of decreased memory locality and as a result increased last level
> cache misses and suddenly having to spread to more NUMA nodes?
Yes, it would have consequences. See two paragraphs below:

> Does the larger footprint change the requirements on compressed oops and
> what encoding/decoding of oop compression is required? In case of an
> expanding heap - can it even use compressed oops? In case of a not expanding
> heap allocated up-front, does a comparison of a GC using compressed oops with
> a baseline that can inherently not use it make sense?
I guess the only relevant point here is: what happens if you need more than
32 GB of heap and have to disable compressed oops? In that case, of course, you
will lose. But you have to keep in mind that the target applications that are
supposed to benefit from Epsilon are low-heap, quite probably zero-garbage. In
that case, the question about heap size is moot: you allocate enough heap to
hold your live data, whether with Epsilon or not.

> Will lack of compaction and resulting possibly worse object locality of
> memory accesses affect performance?
Yes, it could. But it cuts both ways: you get more throughput *if* you code
with locality in mind. I am not against GCs that compact, but I do understand
there are cases where I don't want them either.
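The locality trade-off can be sketched with a toy example; this is a hypothetical illustration, not anything from the Epsilon patch. It computes the same reduction over a flat primitive array (sequential, cache-friendly) and over individually allocated objects (one pointer dereference per element, the layout a non-compacting heap can leave you with):

```java
import java.util.concurrent.ThreadLocalRandom;

// Toy illustration (not from the Epsilon patch): the same reduction over
// a cache-friendly primitive array vs. individually allocated boxes.
public class LocalitySketch {
    static final class Box {
        final long value;
        Box(long v) { value = v; }
    }

    static long sumFlat(long[] data) {
        long sum = 0;
        for (long v : data) sum += v;      // sequential, prefetch-friendly
        return sum;
    }

    static long sumBoxed(Box[] data) {
        long sum = 0;
        for (Box b : data) sum += b.value; // pointer chase per element
        return sum;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        long[] flat = new long[n];
        Box[] boxed = new Box[n];
        for (int i = 0; i < n; i++) {
            long v = ThreadLocalRandom.current().nextLong(1000);
            flat[i] = v;
            boxed[i] = new Box(v);
        }
        // Both loops compute the same result; only the memory layout differs.
        if (sumFlat(flat) != sumBoxed(boxed)) throw new AssertionError();
        System.out.println("sums match");
    }
}
```

Both versions are semantically identical; any throughput difference between them comes purely from memory layout, which is exactly the dimension compaction (or its absence) changes.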

> I am not convinced that we can just remove GC-induced overheads from the picture
> and measure the application throughput without the GC by using an EpsilonGC as
> proposed. At least I do not think I would use it to draw conclusions about
> GC-induced throughput loss. It seems like an apples to oranges comparison to me.
> Or perhaps I have missed something?

I think this argument sets up a strawman: it points out all the other things
that could go wrong, in order to claim that the few things an actual no-op GC
implementation does have to do (e.g. an empty BarrierSet, allocation, and
responding to heap exhaustion) are not needed either :)

