g1 not doing partial aggressively enough -> fallback to full gc

Peter Schuller peter.schuller at infidyne.com
Sat May 15 17:10:52 UTC 2010


I have another utterly unscientific (but I think interesting)
variation of an older test. The interesting behavior in this case is
that the heap grows until it hits the maximum heap size and falls
back to a full GC, in spite of there being high-payoff old regions.
Based on the heap size after the full GC, the live working set is
roughly ~230 MB, and the maximum heap size is 4 GB. This means that
during the GCs right before the full GC, g1 is choosing to do
young-generation GCs even though the average live ratio in the older
regions should be roughly 5% (low-hanging fruit for partial
collections).

Links to the test, the GC log file, and an executable .jar file
follow at the bottom of this e-mail for convenience.

A short description of roughly what the test is doing in my particular
invocation and use of it: I have a loop running which repeatedly tells
the httpgctest server to add 25000 "data items" to its in-memory set,
followed by removing 10% of all data items (ratio 0.1, selected
pseudo-randomly from the set). The end result is a steady state of
roughly 250000 data items that are aged pseudo-randomly. Thus, dead
objects will accumulate over time across all regions, though the data
structure overhead of the set itself will tend to generate data that
is shorter-lived on average (due to the use of clojure's immutable
data structures).
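As a sanity check on that steady-state figure, the fixed point of "add
25000 items, then drop 10%" can be computed directly (a throwaway
sketch, not part of the test itself):

```shell
# Iterate n -> (n + 25000) * 0.9: add 25000 items, then drop 10%.
# The post-add count converges to 250000 (post-drop: 225000),
# matching the steady state described above.
awk 'BEGIN {
  n = 0
  for (i = 0; i < 300; i++) n = (n + 25000) * 0.9
  printf "%.0f\n", n + 25000   # item count right after an add cycle
}'
# prints 250000
```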

Given this behavior, no individual region is likely to become
completely empty, so they need to be evacuated in a partial collection
(rather than as part of the 'cleanup' phase).

The bulk of the data collected in the young generation is expected to
be the immutable data structure's internal structure.

There should be very little writing to older generations (again due to
the use of clojure's immutable data structures).

The JVM version is one built from a recently merged-from-main bsd-port:

  changeset:   206:12f0c051d819
  tag:         tip
  parent:      202:44ad17a6ffea
  parent:      205:b7b4797303cb
  user:        Greg Lewis <glewis at eyesbeyond.com>
  date:        Sat May 08 10:53:55 2010 -0700
  summary:     Merge from main OpenJDK repository

The JVM options used in the particular run that produced the log are
(cut'n'paste from -XX:+PrintCommandLineFlags, but re-ordered for
readability):




The log file shows lots of young collections, and a select few
partial collections after marking phases. Typically along the lines
of the following (an excerpt, because the full log is very long):

46.219: [GC pause (young) 2553M->2550M(4096M), 0.0071520 secs]
46.234: [GC pause (young) 2555M->2551M(4096M), 0.0052560 secs]
46.250: [GC pause (young) 2559M->2554M(4096M), 0.0074780 secs]
46.281: [GC pause (young) 2559M->2555M(4096M), 0.0058670 secs]
46.306: [GC pause (young) 2569M->2555M(4096M), 0.0039900 secs]
46.326: [GC pause (young) 2569M->2556M(4096M), 0.0056980 secs]
46.339: [GC concurrent-count-end, 0.4546580]
46.339: [GC cleanup 2562M->2502M(4096M), 0.0702940 secs]
46.410: [GC concurrent-cleanup-start]
46.414: [GC concurrent-cleanup-end, 0.0041480]
46.431: [GC pause (young) 2515M->2497M(4096M), 0.0069320 secs]
46.444: [GC pause (partial) 2501M->2495M(4096M), 0.0056890 secs]
46.469: [GC pause (partial) 2499M->2493M(4096M), 0.0065570 secs]
46.486: [GC pause (partial) 2497M->2493M(4096M), 0.0058280 secs]
46.497: [GC pause (partial) 2496M->2493M(4096M), 0.0108240 secs]
46.525: [GC pause (young) (initial-mark) 2507M->2494M(4096M)46.529:
[GC concurrent-mark-start]
, 0.0044130 secs]
46.544: [GC pause (young) 2508M->2494M(4096M), 0.0053780 secs]
46.574: [GC pause (young) 2513M->2495M(4096M), 0.0058290 secs]
46.727: [GC pause (young) 2512M->2499M(4096M), 0.0122760 secs]
46.761: [GC pause (young) 2507M->2501M(4096M), 0.0121180 secs]
46.799: [GC pause (young) 2512M->2505M(4096M), 0.0116450 secs]
46.826: [GC pause (young) 2513M->2507M(4096M), 0.0098290 secs]
46.852: [GC pause (young) 2515M->2510M(4096M), 0.0111450 secs]
46.873: [GC pause (young) 2517M->2512M(4096M), 0.0095310 secs]
46.893: [GC pause (young) 2519M->2514M(4096M), 0.0113990 secs]
46.918: [GC pause (young) 2519M->2516M(4096M), 0.0092540 secs]

At the very end we see the full GC and the resulting heap size:

74.777: [GC pause (young) 4090M->4074M(4096M), 0.0085890 secs]
74.796: [GC pause (young) 4082M->4075M(4096M), 0.0061880 secs]
74.833: [Full GC 4095M->227M(758M), 2.9635480 secs]
77.940: [GC pause (young) 253M->232M(758M), 0.0206140 secs]
77.970: [GC pause (young) 236M->233M(1426M), 0.0168640 secs]

In the lead-up to this point, there are only young collections in sight.

The effect is lessened by giving g1 less strict pause time demands (I
tested 250/300 ms instead of the 10/15 ms used in this case), but the
behavior *does* remain. It just takes longer to kick in (given the
same heap size).
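For reference, a G1 invocation with pause-time goals of this shape
would look roughly like the following. This is a hypothetical sketch,
not the exact command line from the run; the jar name and log path are
assumptions:

```shell
# Hypothetical G1 invocation with tight pause-time goals: 10 ms
# target pauses, at most one pause per 15 ms interval, 4 GB max heap.
# (In JDK builds of this era, G1 was still behind the experimental
# options gate.)
java -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC \
     -XX:MaxGCPauseMillis=10 -XX:GCPauseIntervalMillis=15 \
     -Xmx4g -Xloggc:gc.log \
     -jar httpgctest.jar
```

The 250/300 variant mentioned above would correspond to raising the
two pause-goal values accordingly.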

Test links/reproduction information:

The test is the httpgctest (as of version


A GC log produced with:


Can be found here:


An executable .jar file (product of 'lein uberjar') is here:


For running the executable .jar with the same options, the direct link
to run.sh of the correct version is:


The input to the test, once running, is the following little loop
running concurrently:

  while true; do
    curl 'http://localhost:9191/gendata?amount=25000'
    curl 'http://localhost:9191/dropdata?ratio=0.1'
    sleep 0.1
  done

/ Peter Schuller
hotspot-gc-use mailing list
hotspot-gc-use at openjdk.java.net
