RMI Activity Threads Lock GC o/p

Y Srinivas Ramakrishna Y.S.Ramakrishna at Sun.COM
Fri Apr 4 12:00:02 PDT 2008


> OK got it. So with ParallelOldGC, do you get the Perm clean-up behavior
> and heap compaction by default?

Yes.  [Editorial note for other readers not familiar with CMS: CMS does not,
by default, unload classes during concurrent collection cycles. The choice of
default is historical, but we have stuck with it because of the negative impact
we have seen on CMS remark pauses with certain kinds of pause-sensitive
applications.]
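
For concreteness, a minimal command line enabling class unloading during CMS
concurrent cycles might look like the sketch below; the application jar name
is a made-up placeholder, and on 1.5-era JVMs -XX:+CMSPermGenSweepingEnabled
was needed alongside the class-unloading flag:

    java -server -XX:+UseConcMarkSweepGC \
         -XX:+CMSClassUnloadingEnabled \
         -Xloggc:gc.log -jar YourApp.jar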

> How can we get heap compaction with CMS? With
> -XX:+UseCMSCompactAtFullCollection? Would this clean up old RMI references?
> What does this one do: -XX:+CMSCompactWhenClearAllSoftRefs - would it be
> less intrusive? Does it play well with -XX:SoftRefLRUPolicyMSPerMB=1?

Unfortunately, with CMS, you do not get heap compaction during concurrent
collection cycles. You get it only as a result of compacting, stop-the-world
full collections (such as you might get as a result of System.gc(), or when
there is a concurrent mode failure because of CMS' concurrent collections not
keeping up, or because of excessive fragmentation).
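
As a minimal sketch (the class name below is made up for illustration): under
CMS defaults, the explicit gc request here produces exactly such a
stop-the-world, compacting full collection, unless the JVM was started with
-XX:+ExplicitGCInvokesConcurrent or -XX:+DisableExplicitGC:

    // CompactionDemo.java -- illustrative only.
    public class CompactionDemo {
        public static void main(String[] args) {
            // With -XX:+UseConcMarkSweepGC and default settings, an explicit
            // gc request is serviced by a stop-the-world, compacting full
            // collection (+UseCMSCompactAtFullCollection being the default),
            // which defragments the old generation.
            System.gc();
        }
    }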

Note that +UseCMSCompactAtFullCollection is, in fact, the default. It determines
whether a compacting collection (rather than a mark-sweep, non-compacting one)
is done in response to System.gc() or upon concurrent mode failure. I can think
of almost no situation in which you would not go with the default (+) setting
of this option.

Similarly, +CMSCompactWhenClearAllSoftRefs is true by default as well. Both are
equally intrusive, since they involve a stop-the-world compacting collection
(done, alas, single-threaded). This latter option is obscure enough that you
should never need to use it.
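
For readers not familiar with the soft-reference machinery these flags refer
to: a "clear all soft refs" collection clears every java.lang.ref.SoftReference
in the heap unconditionally, regardless of the SoftRefLRUPolicyMSPerMB aging
policy. A minimal sketch:

    import java.lang.ref.SoftReference;

    public class SoftRefDemo {
        public static void main(String[] args) {
            // A softly reachable 1 MB buffer: the collector may clear it
            // under memory pressure, and a "clear all soft refs" full
            // collection (for example, the one attempted on the way to an
            // OutOfMemoryError) clears it unconditionally.
            SoftReference<byte[]> cache =
                new SoftReference<byte[]>(new byte[1 << 20]);
            System.out.println(cache.get() == null ? "cleared" : "still cached");
        }
    }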

> 
> I also asked elsewhere whether there was an equivalent to the AIX
> environment variable ALLOCATION_THRESHOLD to warn of large allocations
> coming in and threatening to blow the heap.

I am afraid I do not know what that variable does on AIX, or what you mean
here by "blow the heap". Did you want some way of telling (perhaps in the GC
log, or by other means) that the application was generating very large
allocation requests, with control over the size threshold above which such an
event would be reported?
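
If so, I know of no HotSpot switch that does exactly that. As a crude
application-level stand-in, one could funnel large buffer allocations through
a helper like the hypothetical sketch below (the class name and threshold are
made up for illustration):

    // TrackedAlloc.java -- hypothetical helper, not a JVM facility.
    public class TrackedAlloc {
        // Report any single allocation request above 1 MB.
        private static final int THRESHOLD_BYTES = 1 << 20;

        public static byte[] allocate(int size) {
            if (size > THRESHOLD_BYTES) {
                System.err.println("large allocation request: " + size + " bytes");
            }
            return new byte[size];
        }
    }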

-- ramki

> 
> Thanks,
> Fino
> 
> On Fri, Apr 4, 2008 at 8:10 PM, Y Srinivas Ramakrishna <
> Y.S.Ramakrishna at sun.com> wrote:
> 
> >
> > > Can you explain how -XX:+CMSClassUnloadingEnabled is going to help?
> > > I haven't used that parameter before.
> >
> > The idea is that, assuming concurrent collections happen, classes will
> > be unloaded (and the perm gen cleaned) as a result of this flag, thus
> > making it unnecessary for a full gc to reclaim that storage. Sometimes
> > this can have the beneficial effect of also cleaning up a bunch of
> > storage in the non-perm heap which had been referenced from objects in
> > the perm gen that were no longer reachable, but which tended to act as
> > "roots" keeping it alive. It's a general prophylactic in this case,
> > rather than something specifically targeted at the issue Keith is
> > seeing (which specific problem, as I indicated, I do not quite fully
> > understand yet from his original email).
> >
> > -- ramki
> >
> > > Thanks,
> > > Fino
> > >
> > > On Fri, Apr 4, 2008 at 6:42 PM, Y Srinivas Ramakrishna <
> > > Y.S.Ramakrishna at sun.com> wrote:
> > >
> > > >
> > > > Hi Keith --
> > > >
> > > > See inline below:-
> > > >
> > > > > We are running into issues where ostensibly the memory management
> > > > > appears OK; less than 1% of the time is in GC when I put this
> > > > > file into HPJmeter 3.1:
> > > > > 0.000: [ParNew 47626K->6985K(1883840K), 0.5939434 secs]
> > > > > 0.613: [Full GC 6985K->6940K(1883840K), 0.7510576 secs]
> > > > > 288.169: [ParNew 399516K->20399K(1883840K), 3.0827681 secs]
> > > > > 844.451: [ParNew 412975K->26162K(1883840K), 0.3202843 secs]
> > > > > 1491.991: [ParNew 418738K->31914K(1883840K), 0.2347086 secs]
> > > > > 2177.292: [ParNew 424490K->37760K(1883840K), 0.3079626 secs]
> > > > > 2855.229: [ParNew 430336K->43595K(1883840K), 2.0764301 secs]
> > > > > 3575.979: [ParNew 436171K->49438K(1883840K), 0.2961466 secs]
> > > > > 3606.470: [ParNew 69808K->49730K(1883840K), 0.0388510 secs]
> > > > > 3606.511: [Full GC 49730K->42771K(1883840K), 2.5417084 secs]
> > > > > 4292.023: [ParNew 435347K->48662K(1883840K), 0.2445446 secs]
> > > > > 4970.650: [ParNew 441238K->54506K(1883840K), 0.2373110 secs]
> > > > > 5677.603: [ParNew 447082K->60349K(1883840K), 0.3322904 secs]
> > > > > 6367.994: [ParNew 452925K->66188K(1883840K), 0.2645763 secs]
> > > > > 7055.852: [ParNew 458764K->72033K(1883840K), 0.8281927 secs]
> > > > > 7210.009: [ParNew 167469K->73442K(1883840K), 0.0969525 secs]
> > > > > 7210.109: [Full GC 73442K->41123K(1883840K), 2.1642088 secs]
> > > > > 7909.604: [ParNew 433699K->47011K(1883840K), 0.2533163 secs]
> > > > > 8603.519: [ParNew 439587K->52863K(1883840K), 0.2230794 secs]
> > > > > 9289.216: [ParNew 445439K->58709K(1883840K), 0.2359698 secs]
> > > > > 9968.793: [ParNew 451285K->64554K(1883840K), 0.2656911 secs]
> > > > > 10649.694: [ParNew 457130K->70393K(1883840K), 0.2243246 secs]
> > > > > 10813.028: [ParNew 158599K->71696K(1883840K), 0.0770400 secs]
> > > > > 10813.107: [Full GC 71696K->41024K(1883840K), 1.7410828 secs]
> > > > > 11503.339: [ParNew 433600K->46907K(1883840K), 0.2542805 secs]
> > > > > 12191.022: [ParNew 439483K->52751K(1883840K), 0.2257059 secs]
> > > > > 12864.793: [ParNew 445327K->58591K(1883840K), 0.2231573 secs]
> > > > > 13546.217: [ParNew 451167K->64433K(1883840K), 0.2532376 secs]
> > > > > 14247.570: [ParNew 457009K->70278K(1883840K), 0.2111731 secs]
> > > > > 14415.581: [ParNew 168788K->71740K(1883840K), 0.0916532 secs]
> > > > > 14415.675: [Full GC 71740K->41182K(1883840K), 1.7439608 secs]
> > > > > 15096.989: [ParNew 433758K->47062K(1883840K), 0.2752132 secs]
> > > > > 15777.472: [ParNew 439638K->52905K(1883840K), 0.2132059 secs]
> > > > > 16475.184: [ParNew 445481K->58750K(1883840K), 0.2249407 secs]
> > > > > 16956.572: [ParNew 451326K->66543K(1883840K), 0.2237252 secs]
> > > > > 17593.401: [ParNew 459119K->72857K(1883840K), 0.2493865 secs]
> > > > > 18018.152: [ParNew 313587K->76412K(1883840K), 0.1719212 secs]
> > > > > 18018.326: [Full GC 76412K->44673K(1883840K), 1.9000112 secs]
> > > > > 18734.462: [ParNew 437249K->50542K(1883840K), 0.2459797 secs]
> > > > > 19434.180: [ParNew 443118K->56364K(1883840K), 0.2399764 secs]
> > > > > 20026.580: [ParNew 448940K->63103K(1883840K), 0.2327731 secs]
> > > > > 20723.692: [ParNew 455679K->68869K(1883840K), 0.2299928 secs]
> > > > > 21338.875: [ParNew 461445K->74742K(1883840K), 0.2005874 secs]
> > > > > 21620.952: [ParNew 269312K->78103K(1883840K), 0.1174351 secs]
> > > > > 21621.072: [Full GC 78103K->45998K(1883840K), 1.8386129 secs]
> > > > > 22227.195: [ParNew 438574K->51330K(1883840K), 0.2042002 secs]
> > > > > 22696.526: [ParNew 443906K->58015K(1883840K), 0.2154086 secs]
> > > > > 23246.252: [ParNew 450591K->63639K(1883840K), 0.2171688 secs]
> > > > > 23936.816: [ParNew 456215K->69353K(1883840K), 0.2421265 secs]
> > > > > 24529.163: [ParNew 461929K->75718K(1883840K), 0.1985638 secs]
> > > > > 25062.082: [ParNew 468294K->82472K(1883840K), 0.2119384 secs]
> > > > > 25223.640: [ParNew 205230K->84729K(1883840K), 0.0745738 secs]
> > > > > 25223.717: [Full GC 84729K->52981K(1883840K), 1.9445841 secs]
> > > > > 25808.453: [ParNew 445557K->58730K(1883840K), 0.2220857 secs]
> > > > > 27012.025: [ParNew 450888K->65873K(1883840K), 0.1835305 secs]
> > > > > 28826.400: [ParNew 194359K->68617K(1883840K), 0.0476450 secs]
> > > > > 28826.450: [Full GC 68617K->33933K(1883840K), 1.3288466 secs]
> > > > > 31626.367: [ParNew 426509K->39131K(1883840K), 0.1329507 secs]
> > > > > 32428.552: [ParNew 79650K->40294K(1883840K), 0.0451805 secs]
> > > > > 32428.600: [Full GC 40294K->29329K(1883840K), 1.0458070 secs]
> > > > > 36030.356: [ParNew 157110K->31764K(1883840K), 0.1066607 secs]
> > > > > 36030.465: [Full GC 31764K->28476K(1883840K), 0.9791810 secs]
> > > > > 39632.163: [ParNew 96572K->30448K(1883840K), 0.0852053 secs]
> > > > > 39632.251: [Full GC 30448K->27232K(1883840K), 0.9056725 secs]
> > > > > 43233.856: [ParNew 215673K->31439K(1883840K), 0.2064516 secs]
> > > > > 43234.074: [Full GC 31439K->28437K(1883840K), 1.1075595 secs]
> > > > > 46835.908: [ParNew 302993K->39167K(1883840K), 0.1579830 secs]
> > > > > 46836.074: [Full GC 39167K->35187K(1883840K), 1.1977157 secs]
> > > > > 50437.975: [ParNew 233401K->40095K(1883840K), 0.1419100 secs]
> > > > > 50438.130: [Full GC 40095K->36165K(1883840K), 1.3757682 secs]
> > > > > 54040.209: [ParNew 47288K->36927K(1883840K), 2.4154908 secs]
> > > > > 54042.656: [Full GC 36927K->35142K(1883840K), 1.7857094 secs]
> > > > > 57645.546: [ParNew 48404K->36028K(1883840K), 0.9233543 secs]
> > > > > 57646.503: [Full GC 36028K->33941K(1883840K), 1.2575880 secs]
> > > > > 61248.475: [ParNew 62613K->36158K(1883840K), 1.5358356 secs]
> > > > > 61250.042: [Full GC 36158K->34806K(1883840K), 1.1270633 secs]
> > > > > 64852.138: [ParNew 89705K->37904K(1883840K), 2.8467706 secs]
> > > > > 64855.019: [Full GC 37904K->36625K(1883840K), 1.2928314 secs]
> > > > >
> > > >
> > > > Did you notice that towards the end of the log above, your
> > > > allocation rates have plummeted and the scavenges themselves are
> > > > taking pretty long? Perhaps that gives you some ideas as to what
> > > > could be happening?
> > > >
> > > > > Here are our VM args:
> > > > >
> > > > > -server -Xms1840m -Xmx1840m -Xss256k -XX:+UseConcMarkSweepGC
> > > > > -XX:NewSize=384m -XX:MaxNewSize=384m -XX:PermSize=256m
> > > > > -XX:MaxPermSize=256m -Dsun.rmi.dgc.client.gcInterval=3600000
> > > > > -Dsun.rmi.dgc.server.gcInterval=3600000
> > > > > -Djava.headless.awt=true -Xloggc:gc.log
> > > >
> > > > I'd suggest experimenting with either:-
> > > >
> > > > -XX:+ExplicitGCInvokesConcurrent[AndUnloadsClasses]
> > > > -XX:+CMSClassUnloadingEnabled
> > > >
> > > > or, perhaps less desirable, but certainly useful from the
> > > > perspective of your debugging objectives here:-
> > > >
> > > > -XX:+DisableExplicitGC -XX:+CMSClassUnloadingEnabled
> > > >
> > > > >
> > > > > We see the DGC working every hour - 3600 seconds apart, a ParNew
> > > > > followed by a Full GC - and there is a plethora of class unloading
> > > > > of the Sun reflection classes, since we do a lot of RMI
> > > > > serialisation/deserialisation.
> > > > >
> > > > > Should we increase the frequency of DGC? Not sure why the VM hangs
> > > > > - possibly our client code - but we wanted to exclude completely
> > > > > the idea that GC is culpable of creating or contributing to this
> > > > > failure.
> > > >
> > > > Check that you are not paging and running slow, rather than hanging?
> > > >
> > > > When you get the "hung" JVM, if on Solaris, try prstat -L -p <pid>
> > > > to see if any threads are active, and also try pstack <pid> (perhaps
> > > > several seconds apart, to observe any active threads). If the
> > > > application shows no activity (from the above), try jstack <pid> (or
> > > > kill -QUIT <pid>) to see if you can elicit a Java thread stack dump.
> > > >
> > > > (I was not sure from your description whether you believed the JVM
> > > > was hung, or that the JVM was responding -- for example doing the
> > > > occasional gc etc. -- but the application response had plummeted.)
> > > >
> > > > -- ramki
> > > >
> > > > >
> > > > > thanks
> > > > >
> > > > > keith
> > > > >
> > > > >
> > > >
> > >
> > >
> > >
> >
> 
> 
> 
> -- 
> Michael Finocchiaro
> michael.finocchiaro at gmail.com
> Mobile Telephone: +33 6 67 90 64 39
> MSN: le_fino at hotmail.com
_______________________________________________
hotspot-gc-use mailing list
hotspot-gc-use at openjdk.java.net
http://mail.openjdk.java.net/mailman/listinfo/hotspot-gc-use


