Reduce cap on maximum heap size?

Paul Hohensee paul.hohensee at
Fri Nov 11 09:26:56 PST 2011

The current 64-bit default heap size policy is "1/4 of physical memory
up to 32gb", which allows for the maximum heap size using compressed oops
of whatever description (in order of efficiency, zero-based unscaled: < 4gb,
zero-based scaled with 8-byte object alignment: 4 to ~26gb, and
non-zero-based scaled with 8-byte object alignment: > ~26gb).  Hotspot
automatically uses the most efficient version based on -Xmx.
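The three encodings differ only in the decode arithmetic applied to the
32-bit narrow oop. A rough sketch of that computation (not HotSpot's actual
implementation; the heap base value below is purely illustrative):

```java
public class CoopsDecode {
    // Decode a 32-bit narrow oop into a 64-bit address.
    //   zero-based unscaled:   addr = narrow                (heap < 4gb)
    //   zero-based scaled:     addr = narrow << 3           (8-byte alignment)
    //   non-zero-based scaled: addr = base + (narrow << 3)
    static long decode(long base, int shift, int narrowOop) {
        return base + ((narrowOop & 0xFFFFFFFFL) << shift);
    }

    public static void main(String[] args) {
        long gb = 1024L * 1024 * 1024;
        // With a 3-bit shift (8-byte alignment), a 32-bit narrow oop
        // spans 4gb << 3 = 32gb.
        System.out.println(((1L << 32) << 3) / gb); // 32
        // Hypothetical heap base for the non-zero-based case.
        long base = 0x800000000L;
        System.out.println(decode(base, 3, 1) - base); // 8
    }
}
```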

You can also increase object alignment in order to get bigger heaps with
zero-based compressed pointers using -XX:ObjectAlignmentInBytes=<n>, where
<n> is a power of 2. <n> == 16 will give you zero-based scaled with up to
~56gb heap, etc.
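As a back-of-the-envelope check (a sketch, not HotSpot's internal sizing
logic), the span a 32-bit narrow oop can cover is 4gb times the object
alignment; the zero-based limit is somewhat lower (~26gb and ~56gb above)
because the whole heap must still fit below the encodable address limit:

```java
public class CoopsSpan {
    // Maximum heap span addressable by a 32-bit compressed oop,
    // given a power-of-two object alignment.
    static long maxSpanBytes(int objectAlignmentInBytes) {
        int shift = Integer.numberOfTrailingZeros(objectAlignmentInBytes);
        return (1L << 32) << shift;
    }

    public static void main(String[] args) {
        long gb = 1024L * 1024 * 1024;
        System.out.println(maxSpanBytes(8) / gb);  // 32 (default alignment)
        System.out.println(maxSpanBytes(16) / gb); // 64
    }
}
```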

On Intel, there's essentially no penalty for using zero-based, scaled or
unscaled, because scaling with an 8-byte alignment costs only an extra index
byte in a typical address mode.  The extra registers in 64-bit mode make
64-bit code faster than 32-bit in all the benchmarks we run.  Non-zero-based
scaled needs a register to hold the heap base, so runs slightly slower than
zero-based.


On 11/11/11 11:57 AM, Tony Printezis wrote:
> Florian,
> Sure, but you can still get zero-based (scaled) coops for heaps larger 
> than 4G; we just have to shift every reference. I'd be surprised if 
> the difference between scaled and unscaled zero-based coops is big 
> and/or noticeable (I don't have perf data handy, someone else might 
> want to share some).
> Regarding the larger default heap size, I'm sure we increased it 
> because we got lots of complaints like "I have 32G of RAM, why do you 
> only use 5%-10% of it?". And for a lot of users with large data sets 
> not getting an OOM is more important than the JVM running a tiny bit 
> faster by default. As I said, folks have different requirements and 
> there is no policy with a clear win here.
> Regards,
> Tony
> On 11/10/2011 8:50 AM, Florian Weimer wrote:
>> * Tony Printezis:
>>> compressed oops should have similar performance to the 32-bit JVM in
>>> most situations (you lose some given that the JVM still uses 64-bit
>>> references, you gain some given that 64-bit architectures typically
>>> have more registers available to the JIT compiler).
>> There used to be several variants of compressed oops: zero-based
>> unscaled, zero-based scaled, and offset and scaled, depending on where
>> the heap segment is located in the process address space.  The
>> zero-based unscaled encoding does not need an address decoding step and
>> should therefore be faster.  However, that is only possible if the heap
>> fits within the lowest 4 GB, which is never the case for a 12 GB heap.
>> For example, one of our tools runs like this (median on five runs,
>> including startup time):
>> -Xmx1g      4194ms (6184ms user+system)
>> default     4428ms (6620ms user+system)
>> -Xmx40g     4351ms (6232ms user+system)
>> (The old generation is never collected, even with the 1GB heap.)
>> I understand that there is a difficult trade-off, but increasing the
>> default heap size into the scaled compressed oops range seems to have
>> its costs.

More information about the hotspot-dev mailing list