InlineCacheBuffer question

Tony Printezis tprintezis at
Tue Nov 21 21:01:45 UTC 2017


Thanks for the reply. Inline (ha!)

Tony Printezis | @TonyPrintezis | tprintezis at

On November 21, 2017 at 3:40:03 PM, Roman Kennke (rkennke at wrote:

Am 21.11.2017 um 21:15 schrieb Tony Printezis:
> Hi all,
> I’ve been looking at the safepoint behavior of one of our services and I
> noticed that around 55% of the safepoints that happen don’t execute a VM
> operation. I’d need to confirm this, but I assume they only happen in
> order to do some cleanup (SafepointSynchronize::is_cleanup_needed()
> returns true, which in JDK 8 translates
to !InlineCacheBuffer::is_empty()).

This is true. This cleanup is done regularly to clean up zombie
nmethods and deflate idle monitors (and some other relatively minor
tasks).

Sure, there are several things that are done during the pre-safepoint
cleanup phase (8 separate steps). However, the decision on whether to do a
non-VM-op safepoint, basically just to do the cleanup, only seems to
consider whether the InlineCacheBuffer is empty (in JDK 8; in JDK 10 it
also checks the ObjectSynchronizer).

> I’m not familiar with the InlineCacheBuffer at all. How important is it
> to execute InlineCacheBuffer::update_inline_caches() at regular
> intervals? Would a modified heuristic, like "a safepoint is required if
> the buffer is >P% full" (maybe in conjunction with increasing the buffer
> size) be reasonable? Or maybe increase the value of
> GuaranteedSafepointInterval? Or both?

I think such a modified heuristic would be reasonable.

> Note that the overhead of the non-VM op safepoints is actually
> negligible. I’d just like to try to cut down the number of safepoints we
> do, as we had issues in the past with safepoint initiation taking too
> long.
I can feel your pain.

I’m all for decreasing the overhead of the various cleanups that are done
pre-safepoint (and thanks for the pointers). But, I’m also interested in
just decreasing the number of safepoints that we do, if they are not
absolutely necessary.


There are some ongoing (or in fact not-yet-started) efforts to cut this
down. One is concurrent monitor deflation:

Another one is using multiple threads for nmethod sweeping (I proposed a
mechanism to do that some months ago, but it stalled because
upstream didn't want it back then); here's a related issue around that:

I'd be happy to re-open the discussions around parallel safepoint cleanup.


More information about the hotspot-compiler-dev mailing list