Jon.Masamitsu at Sun.COM
Wed Jul 11 15:06:56 UTC 2007
Paul Hohensee - Java SE wrote:
> You might try running the reference_client and startup2 using the
> client vm and the parallel collector (UseParallelGC, right?) with a
> 1gb heap. I expect you'll see startup time improvements along with a
> footprint increase. At least that's what I saw experimenting with
> 1.4.2 some years ago. Got improvements on startup then, but have no
> idea what would happen now.
With today's GC ergonomics (not server-class default heap sizing), I would
expect the GC to grow the heap faster than a non-ergonomics GC, so there
would be a faster start up and a larger footprint (assuming the start up
continued long enough to do ~10 or more GC's).
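The growth described above can be made visible with a small program. This is an illustrative sketch, not something from the thread; the 32m of retained allocations is an arbitrary choice:

```java
// Illustration: watch the committed heap grow as live data accumulates.
// Run with, e.g., -client -XX:+UseParallelGC -Xmx1g -verbose:gc to see
// the collector resize the heap across GC's.
import java.util.ArrayList;
import java.util.List;

public class HeapGrowth {
    static List<byte[]> keep = new ArrayList<>();

    // Returns {committed before, committed after} in bytes.
    static long[] measure() {
        Runtime rt = Runtime.getRuntime();
        long before = rt.totalMemory();      // committed heap at start
        for (int i = 0; i < 32; i++) {
            keep.add(new byte[1 << 20]);     // retain 32m of live data
        }
        long after = rt.totalMemory();       // committed heap after growth
        return new long[] { before, after };
    }

    public static void main(String[] args) {
        long[] r = measure();
        System.out.println("committed: " + r[0] + " -> " + r[1]
                + " (max " + Runtime.getRuntime().maxMemory() + ")");
    }
}
```

Because the committed size must cover the live data, the "after" value reflects however far ergonomics (or the serial collector's sizing policy) grew the heap.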
> Larger heap sizes really need a parallel collector to avoid excessive
> GC times. 1g is about the outer limit of what the serial collector can
> handle; the 1.4.2 experiments showed it to be ~20% slower than the
> parallel collector on jbb2k with that heap size. Acceptable, but not
> optimal. So use of MaximizeHeapSize could result in an unacceptable gc
> time increase. Maybe instead we should have a switch that enables
> server class machine heap ergo in the client (and server, though it's
> somewhat redundant) vm?
You mean such as the flag "AlwaysActAsServerClassMachine". Yes, there
are more flags in hotspot than the human brain can hold. :^)
Careless use of MaximizeHeapSize would sometimes result in sub-optimal
GC configurations, but we would be trading that off against less frequent
out-of-memory exceptions that are the result of a sub-optimal choice of the
maximum heap size, and against the times when the application
continues to run in a heap that is too small (so is spending too much
time doing garbage collection).
I think the usefulness of MaximizeHeapSize is a question that we can still
consider in a non-ergonomics, non-server-class world. It's an ease-of-use
thing which I hope in that world will mostly do no harm and relieve the
user of having to make a blind decision.
In the GC ergonomics, server-class world MaximizeHeapSize changes the
choice from maximum heap size to "pause time goal" and "throughput goal"
which is where I think we should be heading.
> Jon Masamitsu wrote:
>> We've heard from users in the past that it would be nice if
>> there was no maximum heap size limit. Users say
>> they don't know what the limit for their application is and
>> have to experiment to find an adequate value.
>> I did the following refworkload experiments.
>> Ran "reference_client" and "startup2" (sets of client
>> application-like benchmarks) with the -client VM
>> (uses the serial GC, which does not have GC ergonomics) and
>> a 1g heap. The point of this experiment was to see if the
>> size of the committed heap increased because of the 1g heap
>> size limit. There was no discernible difference in the
>> committed size of the heap nor in the amount of GC work.
>> I ran "reference_server" (a set of server-application-like
>> benchmarks) with -server and the parallel GC (which does
>> have GC ergonomics) with -XX:DefaultMaxRAM=1g (the default)
>> and with -XX:DefaultMaxRAM=1700m (the amount of physical
>> memory on the platform minus some for
>> the OS) and -XX:DefaultMaxRAMFraction=1. The point of this experiment
>> was to see if the
>> GC ergonomics drove the heap to sizes larger than the 1g
>> default limit. As one would expect, the heaps for some
>> benchmarks grew considerably (up to the higher limits
>> in some cases) and for some the heaps did not change.
>> I think that these results are not unexpected. The heap sizing
>> policy used by the serial collector depends on the amount of
>> live data, and the live data for the client-like applications
>> fit nicely into the 64m default heap size (i.e., the larger
>> 1g heap was not needed). On the larger server-like benchmarks run
>> with GC ergonomics, the high default throughput goal of GC
>> ergonomics means that some benchmarks will just use as much
>> heap as they can get in trying to reach the throughput goal.
>> I'd like to propose an additional policy under a new command line
>> flag. Let me use MaximizeHeapSize for the name for now. If
>> MaximizeHeapSize is set, the VM will calculate
>> the amount of physical memory on the platform and
>> try to use that as the upper limit on the heap size. As
>> with the current GC ergonomics the upper limit will
>> be ~4g on a 32bit system. If the VM cannot actually
>> reserve that amount of space at initialization, it will
>> try reserving some smaller amounts until it succeeds. There
>> will be some minimum size (probably 64m) that it will not
>> go below.
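The proposed sizing policy can be sketched as follows. This is in Java only for readability; the real logic would live in the VM's C++ startup code, and tryReserve() here is a hypothetical stand-in for the VM's actual reservation attempt:

```java
// Sketch of the proposed MaximizeHeapSize policy: start from physical
// memory, cap at ~4g on a 32-bit system, and back off to smaller amounts
// on reservation failure, down to a 64m floor.
public class MaximizeHeapSizeSketch {
    static final long M = 1L << 20;
    static final long MIN_HEAP = 64 * M;     // proposed minimum
    static final long CAP_32BIT = 4L << 30;  // ~4g limit on 32-bit

    // Hypothetical stand-in: pretend reservations above 'reservable' fail.
    static boolean tryReserve(long bytes, long reservable) {
        return bytes <= reservable;
    }

    static long chooseMaxHeap(long physicalMemory, boolean is32Bit, long reservable) {
        long size = is32Bit ? Math.min(physicalMemory, CAP_32BIT) : physicalMemory;
        while (size > MIN_HEAP && !tryReserve(size, reservable)) {
            size /= 2;                       // try successively smaller amounts
        }
        return Math.max(size, MIN_HEAP);     // never go below the floor
    }

    public static void main(String[] args) {
        // e.g. 8g of RAM but only 1.5g of reservable address space (32-bit)
        System.out.println(chooseMaxHeap(8L << 30, true, 1536 * M) / M + "m");
    }
}
```

The halving step is one plausible back-off schedule; the proposal only says the VM would "try reserving some smaller amounts until it succeeds."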
>> The user would have to turn MaximizeHeapSize on
>> explicitly. It will be off by default until we get some
>> experience with it. Users who turn it on and also use
>> GC ergonomics will manage the heap size via
>> a smaller or larger throughput goal (i.e., the larger
>> the throughput goal the more heap space the VM will
>> try to get to meet the throughput goal).
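For the parallel collector, the throughput goal is expressed with -XX:GCTimeRatio=N, which asks that at most 1/(1+N) of total time be spent in GC (the default, N=99, is a 99% throughput goal). The grow-to-meet-goal step can be sketched roughly; this is a simplified illustration, not HotSpot's actual ergonomics code:

```java
// Simplified illustration of how a throughput goal drives heap growth:
// while the measured GC time fraction exceeds the goal, grow the heap
// (a larger heap means fewer collections), up to the maximum.
public class ThroughputGoalSketch {
    // With -XX:GCTimeRatio=N, at most 1/(1+N) of time should be in GC.
    static double maxGcFraction(int gcTimeRatio) {
        return 1.0 / (1 + gcTimeRatio);
    }

    // Hypothetical policy step, invoked after a collection.
    static long adjustHeap(long current, long max, double gcFraction, int gcTimeRatio) {
        if (gcFraction > maxGcFraction(gcTimeRatio) && current < max) {
            return Math.min(current * 2, max);   // grow toward the goal
        }
        return current;                          // goal met: leave the heap alone
    }

    public static void main(String[] args) {
        // 5% of time in GC vs. a 1% budget: the 256m heap is doubled.
        System.out.println(adjustHeap(256L << 20, 1L << 30, 0.05, 99) >> 20);
    }
}
```

This is why a higher throughput goal makes the VM reach for more heap, as described above: the GC-time budget shrinks, so growth is the main lever left.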
>> For the non-ergonomics collectors it should only matter
>> if the application would otherwise have bumped into a
>> heap limit (i.e., the application would have thrown
>> an out-of-memory or just spent too much time doing
>> garbage collections because the heap limit set by
>> default or on the command line was too small). In that
>> case this change would allow the application to
>> get the additional space it needs. I expect there will
>> be some corner cases or bugs where the heap will grow to
>> the limit when it shouldn't.
>> Comments on this proposal?
More information about the hotspot-gc-dev mailing list