[9] RFR(S): 8140574: C2 must re-execute checks after deoptimizing from merged uncommon traps

Roland Westrelin roland.westrelin at oracle.com
Wed Oct 28 13:33:59 UTC 2015

>> if (i < 0) {
>>  if (some_condition) {
>>    // never taken so unc unstable_if
>>  }
>> }
> I agree that this would be a problem, but why would we add an uncommon trap here? Either the i < 0 branch is never executed, in which case we would add the uncommon trap like this
> if (i < 0) {
>   // unc unstable_if
> }
> or it was executed and then also "some_condition" was true and we would not add an uncommon trap at all.

Profiling is what tells us that we can cut one branch. It’s not 100% accurate.

We could have:

void m1(int i, boolean flag) {
  if (i < 0) {
    if (flag) {
      // ...
    }
  }
}

that is called a lot with flag false and i sometimes negative, sometimes positive, so m1 is compiled; no more profiling is recorded, and the profiling tells us the inner branch is never taken. Then we run:

void m2(int i) {
  m1(i, flag);
}

with i always positive. During parsing the compiler is unable to prove that flag is always true, so we add the unc; after some optimization, flag is proven always true and the test is removed.

For instance, flag could be computed like this:

int i = 0;
for (; i < 10; i++);
flag = (i == 10);

I think it would be worthwhile to check whether we can write a test case like this.
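A hedged sketch of such a test case, assembled from the fragments above (the class name, field `sink`, and iteration counts are my own assumptions; whether this actually exercises the merged-trap path would need to be verified against a build with the fix):

```java
public class TestUnstableIf {
    static int sink;

    // While profiling, flag is always false, so the inner branch is
    // never taken and C2 should emit an unc unstable_if for it.
    static void m1(int i, boolean flag) {
        if (i < 0) {
            if (flag) {
                sink = 42; // never reached during profiling
            }
        }
    }

    // flag is provably true only after loop optimization,
    // not during parsing.
    static void m2(int i) {
        int j = 0;
        for (; j < 10; j++);
        boolean flag = (j == 10); // always true, but opaque at parse time
        m1(i, flag);
    }

    public static void main(String[] args) {
        // Warm up m1 with flag false and i of both signs, so the outer
        // branch profiles as taken but the inner one as never taken.
        for (int k = 0; k < 20_000; k++) {
            m1((k % 2 == 0) ? k : -k, false);
        }
        // Then compile m2 with i always positive.
        for (int k = 0; k < 20_000; k++) {
            m2(k);
        }
        System.out.println(sink);
    }
}
```

Functionally, sink stays 0 on every path here; the interesting part is only what C2 does with the trap after flag's test is optimized away.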

>> So I think we have to restrict when we apply that transformation even more, maybe matching the expected bytecode pattern more closely?
> I'm not sure how to do this. We would somehow have to make sure that the dominating uncommon trap "belongs" to the dominating if and not to some other if that was removed, right?

I suppose that if we can prove that the bci of the unc for the dominating if dominates the bci of the unc for the dominated if, then we're good. Maybe by iterating over the ciTypeFlow blocks of the method.
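The dominance check itself can be prototyped independently of ciTypeFlow (a HotSpot-internal C++ class). A minimal sketch, assuming we already have the method's basic blocks as a CFG and can map each unc's bci to its block, using the classic iterative dominator data-flow:

```java
import java.util.ArrayList;
import java.util.BitSet;
import java.util.List;

class BciDominance {
    // Iterative dominator computation over a CFG given as successor
    // lists. Block 0 is assumed to be the entry block.
    static List<BitSet> dominators(int[][] succ) {
        int n = succ.length;
        List<List<Integer>> preds = new ArrayList<>();
        for (int b = 0; b < n; b++) preds.add(new ArrayList<>());
        for (int b = 0; b < n; b++)
            for (int s : succ[b]) preds.get(s).add(b);

        List<BitSet> dom = new ArrayList<>();
        for (int b = 0; b < n; b++) {
            BitSet s = new BitSet(n);
            if (b == 0) s.set(0);   // entry is dominated only by itself
            else s.set(0, n);       // start others at "all blocks"
            dom.add(s);
        }
        boolean changed = true;
        while (changed) {
            changed = false;
            for (int b = 1; b < n; b++) {
                BitSet s = new BitSet(n);
                s.set(0, n);
                for (int p : preds.get(b)) s.and(dom.get(p)); // intersect preds
                s.set(b);                                     // b dominates itself
                if (!s.equals(dom.get(b))) { dom.set(b, s); changed = true; }
            }
        }
        return dom;
    }

    // True if block a dominates block b. For the unc check, a and b
    // would be the blocks containing the two traps' bcis.
    static boolean dominates(List<BitSet> dom, int a, int b) {
        return dom.get(b).get(a);
    }
}
```

On a diamond 0 → {1, 2} → 3, dominates(0, 3) is true but dominates(1, 3) is false, which is exactly the distinction needed: the dominated if's trap must be dominated by the dominating if's trap, not merely precede it in bytecode order.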


More information about the hotspot-compiler-dev mailing list