JEP 248: Make G1 the Default Garbage Collector

charlie hunt charlie.hunt at
Thu Jul 30 15:42:48 UTC 2015

> On Jul 30, 2015, at 10:07 AM, Andrew Dinn <adinn at> wrote:
> On 30/07/15 15:02, charlie hunt wrote:
>> As an example, let’s contrast the two GC’s from a Java heap growth
>> standpoint, Parallel GC will aggressively try to grow the Java heap
>> to achieve throughput. G1 will grow the Java heap in reaction to
>> meeting the pause time goal. In cases where the initial Java heap
>> size differs from the max Java heap size, I have seen cases where
>> Parallel GC will grow to the max Java heap size, yet G1 does not
>> because G1 was able to meet the pause time goal without expanding to
>> the max Java heap size.
> I believe the ParallelGC behaviour is actually down to a bug in the GC
> (one which Tony Printezis raised some time ago -- look for subject
> containing "GCLocker blues" in the gc-dev archives from 27/06/2014
> onwards). I believe the expansion arises from the fact that the GC keeps
> calculating size targets on the assumption the heap is full even though
> the GC has just run.
> Anyway, whatever the cause I wrote a blog post about how to get round
> this heap expansion behaviour in Parallel GC (it was very important for
> us to be able to configure our OpenShift Java deployments to stick
> within a certain footprint limit by default). The posts are available at
> with the magic Parallel GC setting documented in part 2 as
>  -XX:+UseParallelGC \
>  -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 \
>  -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90
> ParallelGC will actually respect the HeapFreeRatio settings (in this
> case keep the excess pages trimmed to between 20% and 40% of the live
> heap) so long as you provide an over-generous time ratio setting (in
> this case try to keep GC time down to 25% of mutation time!). The GC
> doesn't ever actually risk hitting that time ratio target -- indeed in
> my tests it was only a little bit over 1% (the default setting) even
> with the very tight free limits of 20%-40%. Removing time ratio from the
> calculation allowed the GC to apply the heap trimming heuristic correctly.
> n.b. AdaptiveSizePolicyWeight is not that important -- it just says pay
> more attention to recent GC stats (90%) over historic ones (10%).
> regards,
> Andrew Dinn
> -----------
> Senior Principal Software Engineer
> Red Hat UK Ltd
> Registered in UK and Wales under Company Registration No. 3798903
> Directors: Michael Cunningham (USA), Matt Parson (USA), Charlie Peters
> (USA), Michael O’Neill (Ireland)

Hi Andrew,

Thanks for sharing your experience.

Glad to see you discovered how to tune Parallel GC from its defaults to get it to shrink and grow the Java heap using GCTimeRatio and [Min|Max]HeapFreeRatio to meet your needs. What you have documented is what I would have suggested. Fwiw, did you happen to notice the default setting of GCTimeRatio for Parallel GC (it's 99) versus G1's (it's 9)? In other words, G1's defaults strike more of a balance between throughput and memory footprint.
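For concreteness, GCTimeRatio=N expresses a goal of spending at most 1/(1+N) of total time in GC. A small sketch of that arithmetic (my illustration of the flag's standard interpretation, not code from this thread; class and method names are hypothetical):

```java
// Sketch: the GC-time goal implied by -XX:GCTimeRatio=N is 1/(1+N) of total time.
public class GcTimeRatio {
    static double gcTimeFraction(int gcTimeRatio) {
        return 1.0 / (1 + gcTimeRatio);
    }

    public static void main(String[] args) {
        // Parallel GC default: GCTimeRatio=99 -> at most 1% of total time in GC.
        System.out.printf("Parallel default (99): %.0f%%%n", 100 * gcTimeFraction(99));
        // G1 default: GCTimeRatio=9 -> at most 10% of total time in GC.
        System.out.printf("G1 default (9):        %.0f%%%n", 100 * gcTimeFraction(9));
        // Andrew's setting: GCTimeRatio=4 -> at most 20% of total time in GC,
        // which is 25% of mutation time (20/80), matching his description above.
        System.out.printf("GCTimeRatio=4:         %.0f%%%n", 100 * gcTimeFraction(4));
    }
}
```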

All the above said, in the context of this thread we should avoid the temptation to compare a tuned Parallel GC against G1. We should keep our discussion focused on the population that would be impacted should G1 be made the default GC for JDK 9. That implies an "out of the box" configuration, perhaps with an initial and/or max heap size specified.

Again, those who explicitly specify a GC (such as you) are not impacted; they will continue to get the GC they have been using.
Those who do not specify a GC are the ones who may be impacted: positively, negatively, or not at all.

Those who do not specify a GC and want Parallel GC (for whatever reason) can get Parallel GC by adding -XX:+UseParallelGC or -XX:+UseParallelOldGC.
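For anyone unsure which collector an unconfigured JVM actually selected, a small diagnostic sketch using the standard java.lang.management API (the bean names in the comments are illustrative and vary by collector and JDK version):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Prints the garbage collectors the running JVM selected, e.g.
// "PS Scavenge" / "PS MarkSweep" under Parallel GC, or
// "G1 Young Generation" / "G1 Old Generation" under G1.
public class WhichGc {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName());
        }
    }
}
```

Running this with no GC flags shows what the default would give you; running it with -XX:+UseParallelGC or -XX:+UseG1GC confirms the explicit choice took effect.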

While we really like hearing about experiences with G1 GC (and the other collectors as well), let's try to keep the discussion on this thread focused on the subject of G1 GC being made the default GC for JDK 9, by talking about the scenarios and observations of those potentially impacted.



More information about the hotspot-dev mailing list