Request for review - 7197557

Srinivas Ramakrishna ysr1729 at
Tue Sep 18 19:04:22 UTC 2012

On Mon, Sep 17, 2012 at 10:06 PM, Jon Masamitsu <jon.masamitsu at> wrote:

> ... This is the top of the stack trace of the thread doing the GC.  It is
> actually a livelock: the GC has already entered the infinite loop that
> keeps retrying a GC (checking the result of the gc_prologue) until the GC
> succeeds.  We needed to check for an active GC_locker.  A thread trying to
> release a JNI critical section was blocking on a safepoint.
> =>[1] GC_locker::is_active_internal(), line 88 in "gcLocker.hpp"
>   [2] GC_locker::is_active_and_needs_gc(), line 104 in "gcLocker.hpp"
>   [3] VM_GC_Operation::skip_operation(this = 0xfffffd7fec9d5358), line 92
> in "vmGCOperations.cpp"
>   [4] VM_GC_Operation::doit_prologue(this = 0xfffffd7fec9d5358), line 111
> in "vmGCOperations.cpp"
>   [5] VMThread::execute(op = 0xfffffd7fec9d5358), line 587 in
> "vmThread.cpp"
>   [6] CollectorPolicy::satisfy_failed_metadata_allocation(this = 0x432558,
> loader_data = 0x52a62f8, word_size = 0x3a99U, mdtype = NonClassType),
> line 765 in "collectorPolicy.cpp"
>   [7] Metaspace::allocate(loader_data = 0x52a62f8, word_size = 0x3a99U,
> read_only = true, mdtype = NonClassType, __the_thread__ = 0x348d800),
> line 2953 in "metaspace.cpp"
>   [8] Array<unsigned short>::operator new(size = 0x8U, loader_data =
> 0x52a62f8, length = 0xea60, read_only = true, __the_thread__ = 0x348d800),
> line 322 in "array.hpp"
>   [9] MetadataFactory::new_array<unsigned short>(loader_data = 0x52a62f8,
> length = 0xea60, __the_thread__ = 0x348d800), line 38 in
> "metadataFactory.hpp"

Makes sense; thanks!
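For anyone following along, the failure mode Jon describes can be sketched with a small stand-alone simulation. This is not HotSpot code; the class and method names below are illustrative analogues of VM_GC_Operation::skip_operation() and the retry loop around the gc_prologue:

```java
// A self-contained sketch (not HotSpot code) of the livelock described
// above: an allocating thread retries its GC operation until the prologue
// succeeds, so the prologue must skip when the GC_locker is active.
public class GcPrologueSketch {

    static boolean gcLockerActive = true; // a JNI critical section is held

    // Analogue of VM_GC_Operation::skip_operation(): with the fix, an
    // active GC_locker makes the operation bail out instead of retrying.
    static boolean skipOperation(boolean checkGcLocker) {
        return checkGcLocker && gcLockerActive;
    }

    // Returns the number of attempts before bailing out or hitting the cap.
    static int attemptGc(boolean checkGcLocker, int maxTries) {
        int tries = 0;
        while (tries < maxTries) {
            tries++;
            if (skipOperation(checkGcLocker)) {
                return tries; // skip; the GC_locker will induce a GC later
            }
            // Without the check, the prologue keeps "failing" while the
            // locker is held, and we spin here: the livelock.
        }
        return tries;
    }

    public static void main(String[] args) {
        System.out.println(attemptGc(true, 1000));  // bails out on attempt 1
        System.out.println(attemptGc(false, 1000)); // spins to the cap
    }
}
```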

>> A somewhat orthogonal question:
>> Could you tell me if there is any a-priori limit that the JVM sets on the
>> C-heap space used for the metadata?
>
> No limit.  The space is actually not C-heap space.  It is allocated using
> VirtualSpaces, which means mmap'ed space on Unix systems.  You can set a
> limit on the command line with MaxMetaspaceSize, which is analogous to
> MaxPermSize.
>
>> If yes, can that limit be changed from the command line? If there is no
>> such a-priori limit, could you shed any light on a comparison of the
>> memory footprint between the pre-NPG world and the new post-NPG world for
>> some benchmarks that exercise class load/unload etc.?
> We do induce GC to do class unloading.  We have a high water mark (HWM)
> for class metadata used.  When the used class metadata hits the HWM, we
> induce a GC.  The initial value of the HWM is 12 Mbytes to 16 Mbytes,
> depending on the platform.  The HWM may be increased after the GC,
> depending on how much free metadata space there is.  Think
> MinFreeRatio / MaxFreeRatio kind of policy.
> For small benchmarks that don't do extensive class loading, the footprint
> may look smaller because we don't reserve the space for the perm gen.  In
> general it is comparable.  We do lose some space to fragmentation and need
> to do more tuning in that area.

Great; thanks a lot for that information. I am assuming that, in general,
full GCs triggered by the Java heap filling up will in most cases reclaim
enough space in the metadata spaces that no explicit collection is needed
for their total size to stay within the computed HWM. By the way, I assume
there is some way to set the starting and maximum metaspace sizes to the
same value, analogous to setting PermSize equal to MaxPermSize?
(Essentially saying that the initial size of the metaspace is its maximum
size and that the Min/Max free ratios must be ignored.) Anyway, I'll look
at the code and play with the JVM to learn more, but it would be great if
you folks could also put out a brief whitepaper giving an overview of the
implementation, describing the transition to the new perm-gen world, and
explaining how one might size these thresholds so as to empirically
"right-size" the heap based on GC log data etc.
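As an aside, the HWM adjustment policy Jon outlines (a Min/MaxFreeRatio-style scheme) might look roughly like the following stand-alone model. This is not HotSpot code; the ratio values, class name, and method name are made up for illustration:

```java
// A rough, self-contained model (not HotSpot code) of the high-water-mark
// policy: after a GC, raise the HWM if too little metadata space is free,
// lower it if too much is free, otherwise leave it alone.
public class MetaspaceHwmSketch {

    static final double MIN_FREE_RATIO = 0.40; // illustrative value
    static final double MAX_FREE_RATIO = 0.70; // illustrative value

    static long newHighWaterMark(long hwm, long used) {
        double free = (double) (hwm - used) / hwm;
        if (free < MIN_FREE_RATIO) {
            // Expand so that MIN_FREE_RATIO of the new HWM is free.
            return (long) (used / (1.0 - MIN_FREE_RATIO));
        } else if (free > MAX_FREE_RATIO) {
            // Shrink so that only MAX_FREE_RATIO of the new HWM is free.
            return (long) (used / (1.0 - MAX_FREE_RATIO));
        }
        return hwm;
    }

    public static void main(String[] args) {
        long mb = 1024 * 1024;
        // 12 MB used against a 16 MB HWM: only 25% free, so expand to 20 MB.
        System.out.println(newHighWaterMark(16 * mb, 12 * mb) / mb); // 20
        // 2 MB used against a 16 MB HWM: 87.5% free, so shrink the HWM.
        System.out.println(newHighWaterMark(16 * mb, 2 * mb) / mb);
    }
}
```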

thanks a lot again! And sorry for hijacking the review thread for this
-- ramki

More information about the hotspot-gc-dev mailing list