Any plans to increase G1 max region size to 64m?

Thomas Schatzl thomas.schatzl at
Fri Feb 6 13:27:48 UTC 2015

Hi Thomas,

On Thu, 2015-02-05 at 15:44 +0100, Thomas Viessmann wrote:
> Hi,
> currently maximum value is -XX:G1HeapRegionSize=32m.
> Are there plans to increase that number as there are applications
> which allocate bigger objects which then result in slow humongous
> allocations which in turn  typically exceed  the target pause time.

I have to disagree a little here: while humongous allocations are much
slower than regular allocations, any application allocating too many of
them too quickly will run into out-of-memory situations anyway.

I.e. before you get concerned about the performance of humongous
allocations (presumably you are actually going to do something with
them?), you will most likely first run into trouble with available
memory.

These are my observations though, feel free to give yours.

Allocations of humongous objects have no impact on pause time, except
if they continuously trigger garbage collections, and then you will
probably run into the previously mentioned full GCs anyway. Unless, that
is, you count the time the GC takes as part of the allocation time.

We have tried increasing the maximum humongous object size on some very
large heap (>=100G) applications, without good results.

The main problem is that a heap region size of X results in a maximum
regular object size of X/2.
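As a quick sketch of that rule (the 50% threshold is G1's humongous-object criterion; the class and method names below are made up for illustration):

```java
// Sketch of G1's humongous threshold: an object of size >= regionSize / 2
// is treated as humongous and allocated outside the regular young gen.
public class HumongousThreshold {
    static long humongousThreshold(long regionSize) {
        return regionSize / 2;
    }

    public static void main(String[] args) {
        long region32m = 32L * 1024 * 1024;
        long threshold = humongousThreshold(region32m);
        System.out.println("32m regions -> humongous at >= " + threshold + " bytes");
        // A byte[16 * 1024 * 1024] (plus its header) already crosses this
        // threshold, so with 32m regions a 16m array is humongous.
    }
}
```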

This results in some or all of these issues:

 - generally, large objects are very slow to copy around during young
gc. Just copying a 16M object (at region size 32M) takes a long time,
and then processing up to 16M/sizeof(pointer) references is very slow.

(At the moment there is no load-balancing of the copying across threads,
so many threads may end up waiting on a single thread for a long time;
we have seen balancing issues because of that. That is an implementation
issue and could of course be fixed.)
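To put a rough number on the reference-processing cost mentioned above (a back-of-the-envelope sketch; the 8-byte and 4-byte pointer sizes correspond to plain and compressed oops):

```java
// Rough count of references the GC may have to process when copying a
// single reference-array object that fills half of a 32m region.
public class CopyCost {
    static long maxReferences(long objectSizeBytes, int referenceSize) {
        return objectSizeBytes / referenceSize;
    }

    public static void main(String[] args) {
        long sixteenM = 16L * 1024 * 1024;
        // With plain 8-byte pointers: 16m / 8 = 2,097,152 references.
        System.out.println(maxReferences(sixteenM, 8) + " references (8-byte oops)");
        // With 4-byte compressed oops the count doubles to 4,194,304.
        System.out.println(maxReferences(sixteenM, 4) + " references (compressed oops)");
    }
}
```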

 - copying large objects around tends to fragment the survivor and old
generation spaces. I.e. at the moment there is just a single current
allocation region shared by all threads during GC.
So if, due to timing, one thread copies such a large object and another
thread then allocates only 16 bytes into that same region, there is not
enough space left for another such large object, and the entire
remainder of the region is thrown away.
This is also an implementation issue, and will likely be improved soon,
but it is still relevant at least for 8u40.
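The waste in that scenario can be worked out numerically (a sketch with hypothetical sizes; 32m regions and a made-up helper assumed):

```java
// Worked example of the single-allocation-region fragmentation issue:
// a 16m object is copied into a fresh 32m region, then another thread
// allocates a few bytes into the same region. The remaining space can
// no longer hold a second 16m object, so it is effectively wasted.
public class FragmentationWaste {
    static long wastedBytes(long regionSize, long largeObject, long smallAlloc) {
        long remaining = regionSize - largeObject - smallAlloc;
        // If another equally large object no longer fits, the remainder
        // of the region is retired, i.e. wasted.
        return (remaining < largeObject) ? remaining : 0;
    }

    public static void main(String[] args) {
        long region = 32L * 1024 * 1024;
        long large = 16L * 1024 * 1024;
        // A 16-byte allocation after the 16m copy wastes almost half the region.
        System.out.println(wastedBytes(region, large, 16) + " bytes wasted");
        // With no intervening allocation, a second 16m object still fits.
        System.out.println(wastedBytes(region, large, 0) + " bytes wasted");
    }
}
```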

 - allocation granularity during GC (and also for TLABs, i.e. during
mutator time) is a region. Because of the single-allocation-region rule,
this may waste a lot of space at the start and end of a GC.

 - the decrease in remembered set management overhead from going from
32M to 64M regions is of course significant (it roughly halves), but
overall the result did not seem that much better in the cases we tried.

We have found that it is often much better to keep humongous objects
humongous, and then try to reclaim them at every GC. This already works
extremely well, and will hopefully get even better in the future :)
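The allocation pattern that this eager reclaim targets looks roughly like the following (a sketch; the buffer size, loop, and class name are made up, and the actual reclaim behavior depends on the JDK version and the experimental eager-reclaim option shipped in 8u40):

```java
// Sketch of the workload eager humongous reclaim helps with: large,
// short-lived buffers that become unreachable before the next young GC,
// so G1 can hand their regions back without waiting for a full collection.
public class ShortLivedHumongous {
    // Allocate and immediately drop `iterations` large buffers.
    static long churn(int iterations, int bufferSize) {
        long checksum = 0;
        for (int i = 0; i < iterations; i++) {
            byte[] buffer = new byte[bufferSize]; // humongous with 32m regions
            buffer[0] = (byte) i;                 // pretend to use it
            checksum += buffer[0];
            // buffer becomes unreachable here; an eager-reclaim-capable G1
            // can free its regions at the next young GC.
        }
        return checksum;
    }

    public static void main(String[] args) {
        // 20m buffers exceed half of even the largest (32m) region size.
        System.out.println("checksum: " + churn(8, 20 * 1024 * 1024));
    }
}
```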

[Ignoring the fact that applications could help GC a little in that
respect by better initial sizing or managing of their large objects in
the first place.]

Given all this, there does not seem to be a case right now for
increasing this limit.

You can try for yourselves whether it makes sense for your application
to increase HeapRegionBounds::MAX_REGION_SIZE. As far as I am concerned
it seems to work, although it is not officially supported.


More information about the hotspot-gc-dev mailing list