GC logging (again)

Kirk Pepperdine kirk at kodewerk.com
Sat Sep 29 10:02:22 UTC 2012


Hi,

I've just noticed that in the latest builds of 1.6.0 and 1.7.0 there are new combinations/corruptions of ParNew and CMS log records, suggesting something has changed in the logging. I didn't see anything about it on the lists, but...

I've also got this combination of records...

57.535: [GC 57.535: [ParNew
Desired survivor size 1081344 bytes, new threshold 1 (max 4)
- age   1:    2154184 bytes,    2154184 total
: 19136K->2112K(19136K), 0.0291977 secs] 101261K->97883K(126912K), 0.0292664 secs] [Times: user=0.10 sys=0.00, real=0.03 secs] 
57.584: [GC 57.584: [ParNew: 19136K->19136K(19136K), 0.0000203 secs]57.584: [CMS: 95771K->72786K(107776K), 0.2469877 secs] 114907K->72786K(126912K), [CMS Perm : 14265K->14262K(24092K)], 0.2471242 secs] [Times: user=0.24 sys=0.00, real=0.25 secs] 
57.834: [GC [1 CMS-initial-mark: 72786K(107776K)] 73371K(126912K), 0.0013167 secs] [Times: user=0.00 sys=0.00, real=0.01 secs] 

Note that the 57.584 record shows a new corruption that I've not seen before. I assume that the second record is a full GC.
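As an aside, the `before->after(capacity)` triples in these records can be pulled apart mechanically even when the surrounding record is mangled; a minimal sketch (a hypothetical helper, not HotSpot code) of parsing the first occupancy triple out of a line like the ones above:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical helper, not part of HotSpot: extracts the first
// "before->after(capacity)" occupancy triple (in KB) from a GC log line.
public class GcLogParse {
    static final Pattern OCC = Pattern.compile("(\\d+)K->(\\d+)K\\((\\d+)K\\)");

    /** Returns {before, after, capacity} in KB, or null if no triple found. */
    public static long[] firstTriple(String line) {
        Matcher m = OCC.matcher(line);
        if (!m.find()) return null;
        return new long[] { Long.parseLong(m.group(1)),
                            Long.parseLong(m.group(2)),
                            Long.parseLong(m.group(3)) };
    }

    public static void main(String[] args) {
        String line = ": 19136K->2112K(19136K), 0.0291977 secs] "
                    + "101261K->97883K(126912K), 0.0292664 secs]";
        long[] t = firstTriple(line);
        System.out.println(t[0] + " " + t[1] + " " + t[2]); // 19136 2112 19136
    }
}
```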

Have there been any changes to the logging?

Regards,
Kirk

On 2012-09-27, at 9:41 PM, Vitaly Davidovich <vitalyd at gmail.com> wrote:

> Thanks Jon -- that blog entry was useful.
> 
> Vitaly
> 
> On Thu, Sep 27, 2012 at 12:55 AM, Jon Masamitsu <jon.masamitsu at oracle.com> wrote:
> Vitaly,
> 
> The current implementation depends on a thread not migrating
> between nodes.  On Solaris that happens naturally.  I don't
> remember the details, but it's something like this: Solaris sees that
> a thread XX is executing on node AA and using memory on AA,
> so it leaves XX on AA.  On Linux I'm guessing (really guessing)
> that there is a way to create an affinity between XX and AA.
> 
> This has all the things I ever knew about it.
> 
> https://blogs.oracle.com/jonthecollector/entry/help_for_the_numa_weary
> 
> Jon
> 
> 
> On 9/26/2012 4:09 PM, Vitaly Davidovich wrote:
> Hi guys,
> 
> If I understand it correctly, the NUMA allocator splits eden into regions
> and tries to ensure that an allocated object is in a region local to the
> mutator thread.  How does this affect TLABs? Specifically, a TLAB will be
> handed out to a thread from the current node.  If the Java thread then
> migrates to a different node, its TLAB is presumably still on the previous
> node, leading to cross-node traffic. Is there a notion of a processor-local
> TLAB? In that case, access to already-allocated objects will take a hit, but
> new allocations will not.
> 
> The way I imagine a processor-local TLAB working is that when a thread
> migrates, the previous TLAB becomes available for whichever Java thread is
> on-proc there now - that is, TLAB ownership changes.  The migrated thread
> then picks up allocations in the new TLAB.
> 
> It can still be bump-the-pointer allocation, since only one hardware thread
> can be running on the processor at a time.
> 
> Is this or something like it already there? If not, what challenges am I
> overlooking from my high-level view?
> 
> Thanks
> 
> Sent from my phone
> 
> 
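The bump-the-pointer allocation Vitaly mentions can be sketched as a toy model (illustrative only; HotSpot's real TLAB code is C++ inside the VM, and the class and method names below are invented for this sketch):

```java
// Toy model of bump-the-pointer TLAB allocation. Because a TLAB has a
// single owner thread, allocation is just an unsynchronized pointer bump;
// this is the property that a processor-local TLAB would need to preserve.
public class ToyTlab {
    private final int capacity;
    private int top = 0;  // bump pointer: offset of the next free byte

    public ToyTlab(int capacityBytes) { this.capacity = capacityBytes; }

    /** Returns the offset of the allocated block, or -1 if the TLAB is
     *  exhausted and the thread must request a fresh one from eden. */
    public int allocate(int sizeBytes) {
        if (top + sizeBytes > capacity) return -1;  // TLAB full
        int result = top;
        top += sizeBytes;  // single owner thread => no CAS needed
        return result;
    }

    public static void main(String[] args) {
        ToyTlab tlab = new ToyTlab(1024);
        System.out.println(tlab.allocate(100));  // 0
        System.out.println(tlab.allocate(200));  // 100
        System.out.println(tlab.allocate(1024)); // -1, needs a new TLAB
    }
}
```

Handing the exhausted TLAB over to whichever thread is now on-proc, as Vitaly suggests, would amount to transferring ownership of `top` and the backing region rather than retiring the buffer.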


