RFR 8048268: G1 Code Root Migration performs poorly

Mikael Gerdin mikael.gerdin at oracle.com
Tue Aug 26 15:42:21 UTC 2014


In order to combat the spikes in code root migration times, I suggest that we 
reimplement the code cache remembered set using hash tables instead of the 
current chunked array variant.
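To illustrate the difference, here is a minimal sketch of a code root set backed by a hash table (hypothetical names, not the actual G1CodeRootSet code): add() and contains() become expected O(1) operations, whereas the chunked array variant has to scan linearly on lookup, which is what drives the migration-time spikes when a region accumulates many nmethods.

```cpp
#include <cstddef>
#include <unordered_set>

// Hypothetical stand-in for an nmethod address.
typedef const void* NmethodPtr;

// Sketch of a hash-table-backed code root set. Unlike a chunked
// array, membership tests do not degrade to a linear scan as the
// set grows.
class CodeRootSetSketch {
  std::unordered_set<NmethodPtr> _table;
public:
  void add(NmethodPtr nm)            { _table.insert(nm); }
  bool contains(NmethodPtr nm) const { return _table.count(nm) != 0; }
  std::size_t length() const         { return _table.size(); }
};
```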

While we're at it I suggest that we get rid of the entire migration phase and 
update the code cache remembered set during the parallel RSet scanning phase.
The contains()-check when adding during RSet scanning is designed to be lock-
free in order to reduce contention on the HRRS locks.
This led me to remove some contains()-checks in asserts, since they ran during 
a phase where concurrent operations were performed that could not guarantee 
that lock-free reads would succeed.
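The control flow described above can be sketched as a "check before locking" pattern (hypothetical names; not the HotSpot code): the cheap contains() probe runs without taking the HRRS lock, and only a miss falls through to the locked insert. Note that std::unordered_set itself is not safe for concurrent readers and writers, so this single-threaded sketch only illustrates the locking discipline; the real table would have to support lock-free readers, which is the property the design relies on.

```cpp
#include <mutex>
#include <unordered_set>

// Sketch of the lock-free-read / locked-add pattern described in
// the post. Hypothetical names throughout.
class LockedCodeRootSet {
  std::unordered_set<const void*> _set;
  std::mutex _lock;
public:
  // Fast path: probe without the lock. In the real design this
  // requires a table whose reads are safe against concurrent adds.
  bool contains(const void* nm) const {
    return _set.count(nm) != 0;
  }
  // Slow path: only taken on a miss; the insert under the lock is
  // idempotent, so a racing duplicate add is harmless.
  void add_if_absent(const void* nm) {
    if (contains(nm)) return;              // no lock on the common path
    std::lock_guard<std::mutex> guard(_lock);
    _set.insert(nm);                       // recheck is implicit in insert
  }
};
```

This is why asserts that call contains() from phases without that read guarantee had to go: the fast path is only valid while the table is in a reader-safe state.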

Testing: Kitchensink 14hrs, JPRT, Aurora perf testing of several industry 
benchmarks and CRM Fuse (where it actually makes a difference since we had 
300ms spikes in code root migration times).

The table sizes in G1CodeRootSet are completely unscientific but seem to work 
well enough for now. An even larger table size could be considered for 
pathological cases where we get thousands of nmethods (as can occur in CRM 
Fuse), but currently the two sizes seem sufficient.

This change depends on "JDK-8056084: Refactor Hashtable to allow 
implementations without rehashing support": since the remembered sets are 
allocated and deallocated, I needed to allow for deallocation of HashtableEntry 
instances and of freelist contents.

Webrev: http://cr.openjdk.java.net/~mgerdin/8048268/nm-hashtable/webrev/
Buglink: https://bugs.openjdk.java.net/browse/JDK-8048268

A note about g1RemSetSummary.cpp: the code failed to update 
_max_code_root_mem_sz, so the code to list the most expensive code root 
remembered set was broken.
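A sketch of the kind of bookkeeping the fix restores (hypothetical structure; only the field name _max_code_root_mem_sz comes from the post): when summarizing per-region code root memory, the maximum must be updated alongside the running total, otherwise the "most expensive code root remset" report never sees a nonzero value.

```cpp
#include <cstddef>

// Hypothetical remset-summary bookkeeping. The bug described was
// that the max update (the if-branch below) was missing.
struct RemSetSummarySketch {
  std::size_t _total_code_root_mem_sz;
  std::size_t _max_code_root_mem_sz;

  RemSetSummarySketch()
    : _total_code_root_mem_sz(0), _max_code_root_mem_sz(0) {}

  void register_region(std::size_t code_root_mem_sz) {
    _total_code_root_mem_sz += code_root_mem_sz;
    if (code_root_mem_sz > _max_code_root_mem_sz) {
      _max_code_root_mem_sz = code_root_mem_sz;  // the missing update
    }
  }
};
```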


More information about the hotspot-gc-dev mailing list