RFR 8048268: G1 Code Root Migration performs poorly

Jon Masamitsu jon.masamitsu at oracle.com
Thu Aug 28 21:04:09 UTC 2014




So the fact that there was not a call to rebuild_strong_code_roots() was a bug?


On 08/26/2014 08:42 AM, Mikael Gerdin wrote:
> Hi,
> In order to combat the spikes in code root migration times I suggest that we
> reimplement the code cache remembered set using hash tables instead of the
> current chunked array variant.
> While we're at it, I suggest that we get rid of the entire migration phase and
> update the code cache remembered set during the parallel RSet scanning phase.
> The contains()-check performed when adding during RSet scanning is designed to
> be lock-free in order to reduce contention on the HRRS locks.
> This led me to remove some contains()-checks from asserts, since they ran
> during a phase that performed operations which could not guarantee that
> lock-free reads would succeed.
> Testing: Kitchensink 14hrs, JPRT, Aurora perf testing of several industry
> benchmarks and CRM Fuse (where it actually makes a difference since we had
> 300ms spikes in code root migration times).
> The table sizes in G1CodeRootSet are completely unscientific but seem to work
> well enough for now. An even larger table size could possibly be considered
> for pathological cases where we get thousands of nmethods (as can occur in CRM
> Fuse) but currently the two sizes seem good enough.
> This change depends on "JDK-8056084: Refactor Hashtable to allow
> implementations without rehashing support": since the remembered sets are
> allocated and deallocated, I needed to allow for deallocation of instances of
> HashtableEntry and of freelist contents.
> Webrev: http://cr.openjdk.java.net/~mgerdin/8048268/nm-hashtable/webrev/
> Buglink: https://bugs.openjdk.java.net/browse/JDK-8048268
> A note about g1RemSetSummary.cpp: the code failed to update
> _max_code_root_mem_sz, so the code to list the most expensive code root
> remembered set was broken.
> /Mikael
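
For readers unfamiliar with the pattern Mikael describes, the idea of a hash
table with a lock-free contains()-check and a lock-protected add can be
sketched roughly as below. This is an illustrative sketch only, assuming a
simple chained hash table; the class name, bucket count, and use of std::mutex
are hypothetical stand-ins, not HotSpot's actual G1CodeRootSet or HRRS locking.

```cpp
#include <atomic>
#include <cstddef>
#include <mutex>

// Hypothetical sketch of a set with a lock-free contains() and a
// lock-protected add(). Entries are immutable after publication, so
// readers can traverse the bucket chains without taking a lock.
class CodeRootTable {
  struct Entry {
    void* nm;                    // would be an nmethod* in HotSpot
    std::atomic<Entry*> next;
    Entry(void* n, Entry* nx) : nm(n), next(nx) {}
  };
  static const size_t kBuckets = 128;   // "unscientific", like the original
  std::atomic<Entry*> _buckets[kBuckets];
  std::mutex _lock;                     // stands in for the HRRS lock

  static size_t index_for(void* nm) {
    return (reinterpret_cast<size_t>(nm) >> 3) % kBuckets;
  }

 public:
  CodeRootTable() {
    for (size_t i = 0; i < kBuckets; i++) _buckets[i].store(nullptr);
  }
  // Note: entries are intentionally never freed in this sketch; a real
  // implementation would reclaim them (the webrev uses freelists).

  // Lock-free read: entries are linked in with release stores and never
  // modified afterwards, so acquire loads suffice for safe traversal.
  bool contains(void* nm) const {
    for (Entry* e = _buckets[index_for(nm)].load(std::memory_order_acquire);
         e != nullptr; e = e->next.load(std::memory_order_acquire)) {
      if (e->nm == nm) return true;
    }
    return false;
  }

  // Writers serialize on the lock; the lock-free contains() fast path
  // avoids taking the lock at all when the nmethod is already present.
  bool add(void* nm) {
    if (contains(nm)) return false;     // fast path, no lock
    std::lock_guard<std::mutex> g(_lock);
    if (contains(nm)) return false;     // re-check under the lock
    std::atomic<Entry*>& head = _buckets[index_for(nm)];
    Entry* e = new Entry(nm, head.load(std::memory_order_relaxed));
    head.store(e, std::memory_order_release);   // publish fully-built entry
    return true;
  }
};
```

The point of the re-check under the lock is that two threads can both miss on
the lock-free fast path; only one of them may actually insert the entry.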
