Repeated ParNews when Young Gen is Empty?
acolombi at palantir.com
Tue Jul 30 11:01:32 PDT 2013
Sure. To be clear, this method isn't as complete or feature-rich as using
a profiler to perform allocation tracking (e.g. YourKit, Eclipse,
NetBeans). But allocation tracking can have a disastrous effect on
production performance, and heap analyzers won't work when your heap
is really big. This method has minimal impact, so it's more suitable for
production monitoring of large heaps (though I wouldn't recommend it on a
long-term basis).
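For reference, the TLAB statistics below come from running the JVM with TLAB
printing turned on. A minimal launch line might look like the sketch below;
MyApp, the log path, and the heap sizes are placeholders, not our actual
settings:

```shell
# Hedged sketch: -XX:+PrintTLAB makes HotSpot print per-thread TLAB
# statistics at each GC; the PrintGC* flags produce the surrounding GC
# log lines. MyApp, gc.log, and the heap sizes are placeholders.
java -XX:+PrintTLAB \
     -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
     -Xloggc:gc.log \
     -Xmx72g -Xms72g \
     MyApp
```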
Take a look at this TLAB output. I edited it for brevity:
2013-07-11T15:32:04.504-0400: 43270.342: [GC TLAB: gc thread:
0x000000001c0d3000 [id: 11377] desired_size: 749KB slow allocs: 0 refill
waste: 11984B alloc: 0.00305 37463KB refills: 1 waste 94.0% gc: 720840B
slow: 0B fast: 0B
TLAB: gc thread: 0x000000001706e000 [id: 11376] desired_size: 749KB slow
allocs: 0 refill waste: 11984B alloc: 0.00305 37463KB refills: 1 waste
94.0% gc: 720840B slow: 0B fast: 0B
.. lots of other TLAB output ..
TLAB: gc thread: 0x00002abdadf84800 [id: 9270] desired_size: 222012KB slow
allocs: 0 refill waste: 3552200B alloc: 0.90337 11100631KB refills: 51
waste 0.5% gc: 57549384B slow: 82776B fast: 0B
.. lots of other TLAB output ..
TLAB: gc thread: 0x00002abdaca27000 [id: 6424] desired_size: 2KB slow
allocs: 45 refill waste: 32B alloc: 0.00000 39KB refills: 17 waste
7.4% gc: 1960B slow: 616B fast: 0B
TLAB totals: thrds: 323 refills: 9893 max: 150 slow allocs: 3602 max 513
waste: 0.7% gc: 93432504B max: 57549384B slow: 1488448B max: 624120B
fast: 0B max: 0B
43270.344: [ParNew: 13197817K->910448K(13824000K), 0.1963360 secs]
46593408K->34402450K(75010048K), 0.1985540 secs] [Times: user=3.99
sys=0.03, real=0.20 secs]
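With hundreds of threads, eyeballing the dump gets tedious. One way (a
sketch, not our exact tooling) is to pull out each record's alloc fraction
and thread address and sort descending; the sample gc.log written here is a
two-line stand-in for real -XX:+PrintTLAB output:

```shell
# Build a tiny sample log for illustration (two records copied from above).
cat > gc.log <<'EOF'
TLAB: gc thread: 0x000000001c0d3000 [id: 11377] desired_size: 749KB slow allocs: 0 refill waste: 11984B alloc: 0.00305 37463KB refills: 1 waste 94.0% gc: 720840B slow: 0B fast: 0B
TLAB: gc thread: 0x00002abdadf84800 [id: 9270] desired_size: 222012KB slow allocs: 0 refill waste: 3552200B alloc: 0.90337 11100631KB refills: 51 waste 0.5% gc: 57549384B slow: 82776B fast: 0B
EOF
# Print "<alloc fraction> <thread address>" for each TLAB record, highest
# allocator first. The value after "alloc:" is located by scanning fields,
# since field positions can drift between records.
awk '/TLAB: gc thread:/ {
  for (i = 1; i < NF; i++)
    if ($i == "alloc:") print $(i+1), $4
}' gc.log | sort -rn | head -5
```

The first line of output names the thread with the largest estimated share
of Eden allocation.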
One TLAB sticks out: the one associated with thread 0x00002abdadf84800.
Why does it stick out? Because it has a giant desired size, and its
estimated share of Eden allocation is 90%. This means this thread is very
likely responsible for the majority of the allocations that are
occurring. So, if you are running regular (e.g. minutely) jstacks (which
we often do in production), you can pair this thread address with the
tid field in the jstack output to learn what's going on. For example:
"MessageProcessorThread" daemon prio=10 tid=0x00002abdadf84800 nid=0x2436
... many more lines
Here we can see 0x00002abdadf84800 is doing a StringUtils.join, which
leads to a toString of a large object.
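The pairing step itself is easy to script: take the address from the hot
TLAB record and look it up among the tid= fields of a thread dump. A sketch,
where jstack.out is a fabricated one-line stand-in for a real dump captured
with something like jstack <pid> > jstack.out:

```shell
# Fabricated sample dump line for illustration; a real jstack.out would
# contain one such header line per thread, plus its stack frames.
printf '"MessageProcessorThread" daemon prio=10 tid=0x00002abdadf84800 nid=0x2436\n' > jstack.out
# Look up the hot TLAB thread address among the dump's tid= fields.
grep 'tid=0x00002abdadf84800' jstack.out
```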
And that's the story of how to track down memory allocation without a
profiler.
On 7/29/13 2:36 PM, "Simone Bordet" <simone.bordet at gmail.com> wrote:
>On Mon, Jul 29, 2013 at 11:32 PM, Andrew Colombi <acolombi at palantir.com> wrote:
>> The problem in production was resolved by reducing the amount of
>> allocation we were doing, and thereby reducing the pressure on the
>> collector. The log output is still very strange to me, and we're going to
>> continue to investigate the potential for a JVM bug.
>> One cool thing this experience taught me is a new debugging technique to
>> identify allocation hotspots. Basically, with a combination of TLAB logs
>> and jstacks, you can identify which threads are heavily allocating and
>> what those threads are doing. We were able to pinpoint a small number of
>> threads doing the lion's share of the allocations, and improve their
>> allocation behavior.
>Care to detail this one, perhaps with an example of yours?
>Finally, no matter how good the architecture and design are,
>to deliver bug-free software with optimal performance and reliability,
>the implementation technique must be flawless. Victoria Livschitz