RFR(L): 8186027: C2: loop strip mining
rwestrel at redhat.com
Thu Nov 23 14:18:28 UTC 2017
> I am running testing again. But if this will repeat and presence of this
> Sparse.small regression suggesting to me that may be we should keep this
> optimization off by default - keep UseCountedLoopSafepoints false.
> We may switch it on later with additional changes which address regressions.
> What do you think?
If the inner loop runs for a small number of iterations and the compiler
can't statically prove it, I don't see a way to remove the overhead of
loop strip mining entirely. So I'm not optimistic the regression can be
fully eliminated in that case.
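For context, here is a hedged source-level sketch of the shape loop strip
mining gives a counted loop (the class and method names are illustrative,
not the compiler's actual IR; the strip length is an assumed constant).
It also shows where the overhead for short-running loops comes from: the
outer-loop bookkeeping runs even when the loop only iterates a few times.

```java
// Illustrative sketch only: what C2's loop strip mining roughly does,
// expressed as Java source. All names here are hypothetical.
public class StripMineSketch {
    static final int STRIP_LEN = 1000; // assumed strip length

    // Original counted loop: with UseCountedLoopSafepoints, a safepoint
    // poll would otherwise be needed on every iteration.
    static long plain(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += i;
        }
        return sum;
    }

    // Strip-mined shape: the safepoint poll moves to the outer loop, so
    // the inner loop runs up to STRIP_LEN iterations poll-free. For a
    // small n that can't be proven small statically, the extra outer
    // loop and limit computation are pure overhead.
    static long stripMined(int n) {
        long sum = 0;
        for (int i = 0; i < n; ) {
            int limit = Math.min(i + STRIP_LEN, n);
            for (; i < limit; i++) { // inner loop: no safepoint poll
                sum += i;
            }
            // a safepoint poll would sit here, once per strip
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(plain(2500) == stripMined(2500));
    }
}
```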
If loop strip mining defaults to false, would there be any regular
testing on your side?
It seems to me that it would make sense to enable loop strip mining
depending on what GC is used: it makes little sense for the Parallel GC,
but we'll want it enabled for Shenandoah, for instance. Where does G1 fit? I
can't really say and I don't have a strong opinion. But as I understand,
G1 was made default under the assumption that users would be ok trading
throughput for better latency. Maybe, that same reasoning applies to
loop strip mining?
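To make the per-GC suggestion concrete, a hedged example of how this
could be exercised from the command line (flag names as they appear in
the patch under review; the iteration count is an assumed value):

```shell
# Enable loop strip mining explicitly, e.g. together with G1:
java -XX:+UseG1GC -XX:+UseCountedLoopSafepoints \
     -XX:LoopStripMiningIter=1000 -version

# Leave it off for a throughput-oriented collector:
java -XX:+UseParallelGC -XX:-UseCountedLoopSafepoints -version
```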
More information about the hotspot-compiler-dev mailing list