RFR(XS): 8001425: G1: Change the default values for certain G1 specific flags

Charlie Hunt chunt at salesforce.com
Tue Jan 15 19:20:11 UTC 2013

Avg Response Time ... (sigh) --- one of our favorite subjects.  ;-)

You're right: if the marking cycles start earlier than ideally desired, you end up under-utilizing heap space and potentially having to tame mixed GCs.  But G1 has a tunable we can set to start the marking cycle later.  The challenge there is setting the initiating heap occupancy percent too high and losing the race.  That said, setting it higher (while still winning the race) on larger heaps hopefully translates into more "good candidate" old gen regions to collect, and hopefully also makes the exercise of taming mixed GCs a little easier.
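For reference, the tunable in question can be set on the command line.  A sketch of what that looks like on a large heap (the heap size and the value 70 are purely illustrative, not recommendations, and MyApp is a placeholder):

```shell
# Start concurrent marking later by raising the initiating heap
# occupancy threshold (the default is 45 percent of the heap).
# Values shown are illustrative only; MyApp is a placeholder.
java -XX:+UseG1GC \
     -Xms200g -Xmx200g \
     -XX:InitiatingHeapOccupancyPercent=70 \
     MyApp
```

Setting it too high is exactly the "losing the race" scenario above: marking finishes after the heap has already filled, forcing a full GC.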

Thanks for sharing your thoughts.

charlie ...

On Jan 15, 2013, at 12:45 PM, Monica Beckwith wrote:

Thanks, Charlie -

If I may add two more things to John's points below and also expand a bit on the "latency" comment -
Even though we talk about latency, in reality I have seen many people with bigger heap requirements (around 200 GB) who are really concerned about ART (Average Response Time) / throughput.
Also, we should remember that if the marking cycle is triggered earlier and more often, then we may end up under-utilizing the bigger heaps and will definitely have to spend time "taming the mixed GCs" :)

just my 2 cents.


On 1/15/2013 12:01 PM, Charlie Hunt wrote:

Hi John,

Completely agree with the excellent points you mention below (thanks for being thorough and listing them!).

Given G1 is (somewhat) positioned as a collector to use when improved latency is an important criterion, I think the tradeoffs are something people are willing to live with too.

Fwiw, you have my "ok" to go ahead with your suggestion to apply the new young gen bounds to all heap sizes.


charlie ...

On Jan 15, 2013, at 11:40 AM, John Cuthbertson wrote:

Hi Charlie

Thanks for looking over the changes. Replies inline....

On 1/11/2013 11:32 AM, Charlie Hunt wrote:

Hi John,

Fwiw, I'm fine with Bengt's suggestion of having G1NewSizePercent the same for all Java heap sizes.

I don't have a problem with this. By applying it only to heaps > 4GB, I was
just being conservative.

I'm on the fence with whether to do the same with G1MaxNewSizePercent.  For me, MaxNewSizePercent is a bit trickier than NewSizePercent.  With NewSizePercent, if young gen is sized "too small", I think the worst case is that we have some GCs that come in well below the pause time target.  But with MaxNewSizePercent, if it's allowed to get "too big", then the worst case is evacuation failures.

So, if you did move MaxNewSizePercent down to 60, we'd have a situation where we'd be less likely to have evacuation failures.  Perhaps it's ok to apply this change to all Java heap sizes too?
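For anyone following along, a sketch of overriding these bounds explicitly (in the proposed change these are experimental flags, so unlocking experimental VM options may be required; the values shown are illustrative, and MyApp is a placeholder):

```shell
# Bound the young gen between 5% and 60% of the heap.
# Values are illustrative only; MyApp is a placeholder.
java -XX:+UseG1GC \
     -XX:+UnlockExperimentalVMOptions \
     -XX:G1NewSizePercent=5 \
     -XX:G1MaxNewSizePercent=60 \
     MyApp
```

The point of the change under review is that reasonable values here would become the defaults, so most users would not set these flags at all.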

Again, I don't have a problem with applying the new value to all heap
sizes, but I am a little concerned about the implications. The benefit is
definitely less risk of evacuation failures, but it could also

* increase the number of young GCs:
    ** increasing the GC overhead and growing the heap slightly more
    ** lowering throughput
* slightly increase the amount that gets promoted:
    ** triggering marking cycles earlier and more often (increased SATB
barrier overhead)
    ** causing more cards to be refined (we only refine cards in old regions),
increasing the write barrier costs and the RS updating phase of the pauses
    ** increasing the importance of "taming the mixed GCs".

From Kirk's email it sounds like this is a trade-off people are
prepared to live with.

Unless I hear any objections, I'll apply the new young gen bounds to all
heap sizes.


Monica Beckwith | Java Performance Engineer
VOIP: +1 512 401 1274

