Code Cache, Compilation & Inlining

Hiroshi Yamauchi yamauchi at
Wed Jan 14 15:18:15 PST 2009

On Tue, Jan 13, 2009 at 3:45 AM, Nicolas Michael <mail at> wrote:

> Hi,
> we're running Java 1.6.0_07 (Hotspot 10.0-b23) 32bit on SPARC Solaris
> (Client as well as Server VM).
> Our JVM options:
> -Xms1400M -Xmx2400M -XX:NewSize=512M -XX:MaxNewSize=512M
> -XX:SurvivorRatio=6 -XX:MaxTenuringThreshold=15
> -XX:+UseConcMarkSweepGC -XX:+UseLWPSynchronization
> -XX:CMSInitiatingOccupancyFraction=90
> -XX:+UseCMSInitiatingOccupancyOnly -XX:ParallelGCThreads=16
> -XX:MinHeapFreeRatio=25 -XX:MaxHeapFreeRatio=40 -XX:PermSize=32m
> -XX:MaxPermSize=128m -XX:TargetSurvivorRatio=90
> -XX:CompileThreshold=1500 -XX:+PrintCompilation -XX:+PrintInlining
> During the last weeks, I have spent some time looking closer at
> compilation and inlining, partly triggered by some "changing/degrading
> performance" we were observing in some scenarios. To trace compilation,
> I have switched to the 1.7.0-ea-fastdebug JVM (Hotspot
> 14.0-b07-fastdebug) and ran some tests. I have some questions about what
> I've noticed and hope to get some help here.
> 1.
> With the Server JVM, the code cache became full after 7827 compiles:
> "Java HotSpot(TM) Server VM warning: CodeCache is full. Compiler has
> been disabled"
> Setting -XX:ReservedCodeCacheSize to a large value got rid of this error
> and the JVM compiled 8809 methods in this run.
> My questions are:
> - How large is the code cache per default?
> - How can I monitor the current occupancy of the code cache? Neither
> jstat, jinfo nor a dump of all perfdata values seem to give any hint on
> this.

I think one of the memory pools exposed through the java.lang.management
MXBeans is the code cache. See
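A minimal sketch of polling that pool through the standard java.lang.management API. The pool name varies by JVM; on HotSpot of this era it is reported as "Code Cache", so the lookup below just matches on "Code":

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class CodeCacheUsage {
    // Returns the first memory pool whose name mentions "Code".
    // On HotSpot this is the "Code Cache" pool (later JVMs with a
    // segmented cache report several "CodeHeap '...'" pools instead).
    static MemoryPoolMXBean findCodeCachePool() {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getName().contains("Code")) {
                return pool;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        MemoryPoolMXBean pool = findCodeCachePool();
        if (pool != null) {
            MemoryUsage u = pool.getUsage();
            System.out.println(pool.getName() + ": used=" + u.getUsed()
                    + " committed=" + u.getCommitted() + " max=" + u.getMax());
        }
    }
}
```

The same pool is reachable remotely over JMX (e.g. from jconsole), so occupancy can be watched without touching the application.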

> - Can I see the code cache as a memory segment with pmap? Is it an
> "anon" segment?
> - Is there any limit for the code cache? In my tests where I increased
> it, I set it to 128m (which is much too large, I assume, but worked).
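On the default size: one way to read the effective -XX:ReservedCodeCacheSize value from inside the running JVM is the HotSpot-specific diagnostic MXBean. This is a sketch using com.sun.management (not portable to non-HotSpot JVMs); the default of that era was a few tens of megabytes, differing between Client and Server VM and by platform:

```java
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class CodeCacheFlag {
    // Reads ReservedCodeCacheSize in bytes via the HotSpot diagnostic
    // MXBean; returns -1 if the bean or option is unavailable.
    static long reservedCodeCacheBytes() {
        try {
            HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                    ManagementFactory.getPlatformMBeanServer(),
                    "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
            return Long.parseLong(bean.getVMOption("ReservedCodeCacheSize").getValue());
        } catch (Exception e) {
            return -1L;
        }
    }

    public static void main(String[] args) {
        System.out.println("ReservedCodeCacheSize = " + reservedCodeCacheBytes());
    }
}
```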
> 2.
> With the Server JVM and a large-enough code cache, I see some methods in
> the log being compiled, then being "made not entrant", being "made
> zombie" and then being compiled again, made not entrant, zombie, being
> compiled, ... Afaik, this "not entrant" and "zombie" stuff is some kind
> of deoptimization that happens when the JIT decides some of its original
> assumptions about those methods no longer hold, right? Unfortunately, I
> can't find much info on the net about this. Could someone perhaps
> explain what these states really mean and why this might happen?
> BTW, our CompileThreshold is set to 1500 for the Server VM as well, and
> the process has been running for about half an hour or so. I haven't yet
> run any really long tests for hours or days with +PrintCompilation
> enabled, so I don't know whether this is an infinite loop of
> deoptimization and compilation or something that just happens "for some
> time". But even after half an hour I would not expect this any more.
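One common way the "made not entrant" / "made zombie" cycle arises can be provoked deliberately. The sketch below (illustrative names, not the poster's code) lets the compiler optimistically devirtualize a call while only one implementor of an interface is loaded; using a second implementor invalidates that assumption, so the old code is marked not entrant, later zombie, and the method is recompiled. Run with -XX:+PrintCompilation to watch (exact output depends on JVM version and thresholds):

```java
public class DeoptDemo {
    interface Op { int apply(int x); }
    static class Inc implements Op { public int apply(int x) { return x + 1; } }
    static class Dec implements Op { public int apply(int x) { return x - 1; } }

    // Hot loop whose virtual call the JIT may devirtualize/inline while
    // only one implementor of Op has been seen.
    static long run(Op op, int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) sum += op.apply(i);
        return sum;
    }

    public static void main(String[] args) {
        // Warm up with a single receiver type; run() gets compiled.
        long a = run(new Inc(), 200000);
        // A second implementor appears: compiled code that relied on the
        // single-implementor assumption is deoptimized ("made not
        // entrant"), eventually "made zombie", and run() is recompiled.
        long b = run(new Dec(), 200000);
        System.out.println(a + " " + b);
    }
}
```

A bounded number of such recompilations is normal; an endless loop of them usually means the program keeps alternating between states the compiler speculates on.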
> 3.
> Now I'm referring to tests that I have done with the Client VM.
> I have noticed that not all methods are being compiled. A Sun Studio
> profile for a 1h run (after 30min warmup) shows that 2.2% of the
> CPU time is due to Java methods being interpreted:
> Excl.      Incl.       Name
> User CPU   User CPU
> 26363.141  26363.141   <Total>
>  605.223   1049.764   Interpreter
>  587.311    587.311   java.lang.String.equals(java.lang.Object)
> ...
> I tried -XX:-DontCompileHugeMethods: This decreases the CPU time
> consumed by the Interpreter to 435.505 (Excl. User CPU) -- but still,
> some methods seem to be interpreted. We're running an application that's
> doing the same stuff all the time and doesn't have any change in
> behavior, so a CompileThreshold of 1500 should easily be reached quite
> fast. I remember tiered-compilation tests together with Steve Goldman
> about 2 years ago, when he told me that our application is a huge
> application with lots of "lukewarm" methods. Anyway, is there a way to
> find out *which* methods are not being compiled and *why*?
> +PrintCompilation tells a lot of stuff about methods that are being
> compiled, but nothing about methods that Hotspot considers *not* being
> worth compiling ... ;-)
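One option beyond +PrintCompilation: the diagnostic -XX:+LogCompilation flag writes compile tasks and per-call-site inlining decisions as XML, which can show why a candidate was rejected. Availability varies by build (product builds need the diagnostic options unlocked); the command line below is a sketch, with MainClass standing in for the application's entry point:

```
java -XX:+UnlockDiagnosticVMOptions -XX:+LogCompilation \
     -XX:LogFile=hotspot.log MainClass
```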
> 4.
> Also for the Client VM, same with inlining: We have a method (isTraceOn,
> to determine whether tracing is enabled or not) that's being called
> quite often. It's a small method, just 17 bytes:
> 499 (17 bytes)
>           @ 0 (4 bytes)
>         - @ 13 (57 bytes)  callee is too large
> I know from the Sun Studio profile which are the callers of isTraceOn.
> Those callers are getting compiled, e.g. the method traceSimple, but
> don't even seem to consider inlining isTraceOn (I would expect to find
> something like "- @" in the log with a note on *why*
> Hotspot decided not to inline this method -- but it doesn't appear
> anywhere at all):
> 1330 (91 bytes)
>      !  - @ 29 (196 bytes)  callee is too large
>     s     @ 48   java.lang.StringBuffer::length (5 bytes)
>     s     @ 82   java.lang.StringBuffer::toString (17 bytes)
>           - @ 13   java.lang.String::<init> (72 bytes)  callee is too large
> What could be the reason that isTraceOn is not being inlined? BTW, I
> also tried increasing MaxInlineSize to 64, which didn't help. (The
> Server VM does inline isTraceOn, but the Client VM does not.)
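Not an explanation of why C1 skips it, but a common mitigation for trace guards, sketched below with hypothetical names (not the poster's actual classes): if the trace setting can be fixed at startup, exposing it as a static final boolean lets the JIT treat the branch as a compile-time constant and drop the trace path regardless of inlining heuristics.

```java
public class TraceGuard {
    // Read once at startup; a static final boolean becomes a constant
    // to the JIT, so the guarded branch below can be folded away
    // entirely when tracing is off.
    static final boolean TRACE_ON = Boolean.getBoolean("app.trace");

    static String traceSimple(String msg) {
        if (TRACE_ON) {
            return "TRACE: " + msg;  // dead code to the JIT when TRACE_ON is false
        }
        return "";
    }

    public static void main(String[] args) {
        System.out.println(traceSimple("hello"));
    }
}
```

The trade-off is that the flag can no longer be toggled at runtime without restarting (or without accepting a recompile after a class swap).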
> I can of course also provide full log files and everything...
> Thanks a lot,
> Nick.

More information about the hotspot-compiler-dev mailing list