Reduce cap on maximum heap size?
tony.printezis at oracle.com
Fri Nov 11 08:57:16 PST 2011
Sure, but you can still get zero-based (scaled) coops for heaps larger
than 4G; we just have to shift every reference. I'd be surprised if the
difference between scaled and unscaled zero-based coops is big and/or
noticeable (I don't have perf data handy; someone else might want to
chime in with numbers).
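For archive readers: the decode step under discussion is at most a shift plus an add. A minimal sketch of the three flavors, with illustrative names and constants rather than HotSpot's actual code (the shift of 3 assumes the default 8-byte object alignment):

```java
// Sketch of compressed-oop ("narrow oop") decoding variants.
// Names and the example heap base are hypothetical, not HotSpot internals.
public class OopDecodeSketch {
    static final int SHIFT = 3; // log2(8-byte object alignment)

    // Zero-based unscaled: heap fits below 4 GB, no decode work at all.
    static long decodeUnscaled(int narrow) {
        return Integer.toUnsignedLong(narrow);
    }

    // Zero-based scaled: heap fits below 32 GB, one shift per reference.
    static long decodeZeroScaled(int narrow) {
        return Integer.toUnsignedLong(narrow) << SHIFT;
    }

    // Offset and scaled: heap based above zero, shift plus add.
    static long decodeOffsetScaled(long heapBase, int narrow) {
        return heapBase + (Integer.toUnsignedLong(narrow) << SHIFT);
    }

    public static void main(String[] args) {
        int narrow = 0x200000;
        System.out.printf("unscaled      = 0x%x%n", decodeUnscaled(narrow));
        System.out.printf("zero-scaled   = 0x%x%n", decodeZeroScaled(narrow));
        System.out.printf("offset-scaled = 0x%x%n",
            decodeOffsetScaled(0x800000000L, narrow));
    }
}
```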
Regarding the larger default heap size: I'm sure we increased it because
we got lots of complaints like "I have 32G of RAM, why do you only use
5%-10% of it?". And for a lot of users with large data sets, not getting
an OOM is more important than the JVM running a tiny bit faster by
default. As I said, folks have different requirements and there is no
policy with a clear win here.
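One way to see what the defaults picked on a given machine is to ask the running VM for its flag values. A small sketch using the HotSpot diagnostic bean (assumes a HotSpot JVM; com.sun.management is not part of the Java SE spec):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

// Prints whether this HotSpot JVM enabled compressed oops for the
// heap size it was started with, plus the ergonomically chosen max heap.
public class CoopsCheck {
    public static void main(String[] args) {
        HotSpotDiagnosticMXBean hs =
            ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        System.out.println("UseCompressedOops = "
            + hs.getVMOption("UseCompressedOops").getValue());
        System.out.println("MaxHeapSize = "
            + hs.getVMOption("MaxHeapSize").getValue());
    }
}
```

Running it with different -Xmx values shows where UseCompressedOops flips off (around 32G with the default 8-byte object alignment).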
On 11/10/2011 8:50 AM, Florian Weimer wrote:
> * Tony Printezis:
>> compressed oops should have similar performance to the 32-bit JVM in
>> most situations (you lose some given that the JVM still uses 64-bit
>> references, you gain some given that 64-bit architectures typically
>> have more registers available to the JIT compiler).
> There used to be several variants of compressed oops: zero-based
> unscaled, zero-based scaled, and offset-and-scaled, depending on where
> the heap segment is located in the process address space. The
> zero-based unscaled encoding does not need an address decoding step and
> should therefore be faster. However, that is only possible if the heap
> fits within the lowest 4 GB, which is never the case for a 12 GB heap.
> For example, one of our tools runs like this (median on five runs,
> including startup time):
> -Xmx1g   4194ms (6184ms user+system)
> default  4428ms (6620ms user+system)
> -Xmx40g  4351ms (6232ms user+system)
> (The old generation is never collected, even with the 1GB heap.)
> I understand that there is a difficult trade-off, but increasing the
> default heap size past the unscaled range, into scaled compressed
> oops, seems to have its costs.