RFR(m): 8221734: Deoptimize with handshakes
patricio.chilano.mateo at oracle.com
Wed May 15 19:45:45 UTC 2019
Biased locking changes look good to me.
Just a small comment based on one of the things I mentioned before. I
can't help thinking that since get_monitors_from_stack() will return
monitors owned by the handshaked thread, the assert in
assert(!mark->has_bias_pattern() || mark->biased_locker() == thread,
could be combined with the later guarantee() in
BiasedLocking::revoke_own_locks_in_handshake() to be:
guarantee(!mark->has_bias_pattern() || (mark->biased_locker() == thread
&& prototype_header->bias_epoch() == mark->bias_epoch()), "Can't revoke");
and that would make it more evident that we can remove the call to
fast_revoke() in BiasedLocking::revoke_own_locks_in_handshake(). I know
you said you don't want to change biased locking behavior, but I don't
think doing that should change anything. We know the condition above
should hold; otherwise, if the handshaked thread is not the real biaser
(because of an expired epoch), then some other thread could rebias the
lock before we were able to revoke it, and we would hit the guarantee()
in BiasedLocking::revoke_own_locks_in_handshake() anyway later on.
On 5/15/19 2:26 AM, Robbin Ehn wrote:
> Hi, please see this update.
> I think I got all review comments fixed.
> Long story short, I was concerned about test coverage, so I added a
> stress test
> using the WB, which sometimes crashed in rubbish code.
> There are two bugs in the methods used by WB_DeoptimizeAll.
> (Seems I'm the first user)
> When iterating the nmethods we could see methods being created in:
> void AdapterHandlerLibrary::create_native_wrapper(const methodHandle&
> and deopt the method while it was in use, or even before.
> Native wrappers are supposed to live as long as the class.
> I filtered out not_installed and native methods.
> The issue is that a not_entrant method can go to zombie at any time.
> There are several ways to keep an nmethod from going zombie:
> nmethodLocker, having it on stack, avoiding the safepoint poll in some
> states, etc., depending on the type of nmethod.
> The iterator only_alive_and_not_unloading returns not_entrant
> nmethods, but we don't know their state prior to the last poll:
> in_use -> not_entrant -> #poll# -> not_entrant -> zombie
> If the iterator returns the nmethod after we passed the poll, it can
> still be not_entrant but then go zombie.
> The problem happens when a second thread marks a method for deopt and
> makes it not_entrant. Then after a poll we end up in
> deoptimize_all_marked(), but the method is not yet a zombie, so the
> iterator returns it; it then becomes a zombie, thus passes the if-check
> and later hits the assert.
> So there is a race between the iterator's check of the state and the
> if-statement's check of the state. Fixed by also filtering out zombies.
> If the stress test with correction of the bugs causes trouble in
> review, I can
> do a follow-up with the stress test separately.
> Good news, no issues found with deopt with handshakes.
> This is v3:
> This is the full inc from v2 (review + stress test):
> This inc is the review part from v2:
> This inc is the additional stress test with bug fixes:
> Additional biased locking change:
> The original code used the same copy of the markOop in revoke_and_rebias().
> To keep the same behavior I now pass that copy into fast_revoke().
> The stress test passes hundreds of iterations in mach5.
> Thousands of stress test runs locally; the issues above were reproducible.
> Inc changes also passes t1-5.
> As usual with this change-set, I'm continuously running more tests.
> Thanks, Robbin
> On 2019-04-25 14:05, Robbin Ehn wrote:
>> Hi all, please review.
>> Let's deopt with handshakes.
>> Removed VM op Deoptimize; instead we handshake.
>> Locks need to be inflated since we are not in a safepoint.
>> Goes on top of:
>> Passes t1-7 and multiple t1-5 runs.
>> A few startup benchmark see a small speedup.
>> Thanks, Robbin
More information about the hotspot-compiler-dev