Fwd: Feedback on G1GC

Yu Zhang yu.zhang at oracle.com
Mon Dec 21 20:40:32 UTC 2015


Fabian,

I am late to the party and still trying to figure out what the issue is.
 From what I can follow from
https://groups.google.com/a/jclarity.com/forum/#!msg/friends/hsZiz6HTm9M/MbuttBioCgAJ, 
the original complaint is that the Ref Proc time is very long, and after 
you added -XX:+ParallelRefProcEnabled and (maybe) other flags, it improved?

I looked at the log gc.log.gz
https://groups.google.com/a/jclarity.com/group/friends/attach/b13fb0b7fedd4/gc.log.gz?part=0.1&authuser=0&view=1
In that log, -XX:+ParallelRefProcEnabled is set and the ref-proc times 
look OK. -XX:MaxHeapSize=4294967296 (4g), but the actual heap size is 
only 1588m; G1 might not be expanding the heap aggressively. You can try 
running with fixed (equal) -Xms and -Xmx values.
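A fixed-heap run could look like the following sketch (app.jar and the 
log file name are placeholders, not from the thread; the logging flags 
are the usual JDK 8 ones, adjust to taste):

```shell
# Pin the heap at 4g (matching MaxHeapSize from the log) so G1 never
# has to decide whether to expand; keep ref-proc parallel and keep
# detailed GC logging on for comparison with the earlier run.
java -Xms4g -Xmx4g \
     -XX:+UseG1GC \
     -XX:+ParallelRefProcEnabled \
     -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
     -Xloggc:gc-fixed-heap.log \
     -jar app.jar
```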


As for the tenuring distribution: yes, most of the objects (5-7m) die 
young, but about 2m of objects do not die; they can live up to age 
12-15 and get promoted. Though the old gen usage does not increase 
after the mixed GCs, it is hard to tell whether the mixed GCs cleaned 
those objects or just compacted them. With -Xms4g -Xmx4g the Eden size 
will likely increase, and so will the survivor size, but those 2m of 
objects will still get promoted. I think we need more experiments to 
see if the ergonomics are doing the right thing.
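For such an experiment, something along these lines would show whether 
the long-lived 2m of objects still reach the maximum age once the 
survivor spaces grow (again a sketch; app.jar is a placeholder):

```shell
# Log per-age survivor occupancy and G1's sizing decisions, so we can
# see at what age the ~2m of long-lived objects get promoted and why
# the ergonomics picked the Eden/survivor sizes it did.
java -Xms4g -Xmx4g \
     -XX:+UseG1GC \
     -XX:+ParallelRefProcEnabled \
     -XX:+PrintTenuringDistribution \
     -XX:+PrintAdaptiveSizePolicy \
     -Xloggc:gc-tenuring.log \
     -jar app.jar
```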

Thanks,
Jenny

On 12/20/2015 5:27 AM, Fabian Lange wrote:
> Hi,
> (originally posted on adoption-discuss)
> For a while now I have been recommending and using G1GC for JDK 8 
> applications.
>
> This week I was looking at an application which should be the ideal 
> candidate.
> It was given 4 GB of RAM, has a steady memory usage of about 1-2 GB and 
> during its work it generates only garbage. It reads data from sockets, 
> deserializes it, manipulates it, serializes it and writes it out to 
> sockets. It is processing 100k to 500k of such requests per second.
>
> With the default G1 settings the machine was very loaded. The 
> collection times were pretty long. It even ran out of memory a few 
> times because the GC could not catch up.
>
> When looking at the logs I was surprised to see extremely small 
> eden/young sizes. The old gen was really big (like 3.5GB, but mostly 
> empty) while G1 was churning on 300MB young.
>
> I raised the question on 
> https://groups.google.com/a/jclarity.com/d/msg/friends/hsZiz6HTm9M/MbuttBioCgAJ 
> where Charlie Hunt was so kind to explain the reasons behind the 
> behaviour. It either did not make sense to me, or I did not understand 
> the explanation.
>
> What I did is what I always did regardless of the collector: I 
> increased young space, knowing it contains mostly garbage.
> The overall behaviour of the JVM was much improved by that.
>
> I found it irritating that, according to Charlie, the main reason for 
> the small eden is the pause time goal: because GC was not meeting its 
> goal, it reduced eden. Yet I observed better results doing the opposite.
>
> I also enabled -XX:+ParallelRefProcEnabled.
>
> Logs are available from the above discussion, but I can send them in 
> separate mail if desired.
>
> As far as I can tell the ergonomics are not working for me, and the 
> changes I need to make are counterintuitive. From other discussions I 
> learned that quite a few people observed better overall performance 
> after raising the pause time goal.
>
> Is there public information on why the current defaults are as they 
> are? And how would feedback on these defaults work?
>
> Best regards,
> Fabian
>


