[16] RFR: 8248391: Unify handling of all OopStorage instances in weak root processing

Kim Barrett kim.barrett at oracle.com
Tue Jun 30 07:10:20 UTC 2020

> On Jun 26, 2020, at 9:06 AM, Erik Österlund <erik.osterlund at oracle.com> wrote:
> Hi,
> Today, when a weak OopStorage is added, it has to be plugged in explicitly to ZGC, Shenandoah, and the WeakProcessor (used by Shenandoah, Serial, Parallel and G1). This is especially true when the runtime data structure associated with an OopStorage needs a notification when oops die; then notification code has to be plugged in explicitly in various places in GC code.
> It would be ideal if this process could be completely automated.
> This patch allows each OopStorage to have an associated notification function. This is a callback function into the runtime, stating how many oops have died this GC cycle. This allows runtime data structures to perform accounting for how large a part of the data structure needs cleaning, and whether or not to trigger such cleaning.
> So the interface between the GCs and the OopStorage is that during weak processing, the GC promises to call the callback function with how many oops died. Some shared infrastructure makes this very easy for the GCs.
> Weak processing now uses the OopStorageSet iterators across all GCs, so that adding a new weak OopStorage (even with notification functions) does not require touching any GC code.
> Kudos to Zhengyu for providing some Shenandoah code for this, and StefanK for pre-reviewing it. Also, I am about to go out of office now, so StefanK promised to take it from here. Big thanks for that!
> CR:
> https://bugs.openjdk.java.net/browse/JDK-8248391
> Webrev:
> http://cr.openjdk.java.net/~eosterlund/8248391/webrev.00/
> Thanks,
> /Erik

While I approve of the goal for this change, I have a couple of issues
of some significance with the approach taken.

I think the way the GC notifications are being set up is backward.
The GC (in particular, OopStorage) shouldn't know about client
subsystems in order to record the notification function.  That seems
like a dependency inversion.  Rather, I think client subsystems should
be registering callbacks with the GC.

I've never liked the existing mechanism used to gather iteration
statistics (number of entries cleared, skipped, processed...).  It
always seemed rather bolted on after the fact.  This change continues
that.  I'd rather take this opportunity to improve OopStorage in this
area.

I'm working on a reworked version of Erik's changes.  So far it's
looking pretty good.  Some of Erik's work I'm using as-is, but some
parts not so much.  I'm not yet ready to send anything out for review,
but mentioning it here so others hopefully don't spend too much time
reviewing the existing change.

I've discussed this with Stefan, and he's agreed to hand off
responsibility for this project to me.
