Request for reviews (S): 7196242: vm/mlvm/indy/stress/java/loopsAndThreads crashed

Christian Thalinger christian.thalinger at
Mon Sep 10 16:06:34 PDT 2012

On Sep 10, 2012, at 3:53 PM, Vitaly Davidovich <vitalyd at> wrote:

> Hi Chris,
> Perhaps the comment above where you now take out a lock should be updated since it's still talking about the CAS race?

No.  These are two different problems.

> Also, curious why the CAS with immediate yield() was used before - seems like that doesn't actually buy much since a losing thread may then immediately lose CPU time and have to wait for scheduling to put it back on (i.e. a lock seems more efficient anyway here :)).

But that's only in a race.  When there is only one thread, the current approach might be faster.  It depends on how fast the lock implementation is.

-- Chris

> Vitaly
> Sent from my phone
> On Sep 10, 2012 5:30 PM, "Christian Thalinger" <christian.thalinger at> wrote:
> 29 lines changed: 13 ins; 10 del; 6 mod; 2239 unchg
> 7196242: vm/mlvm/indy/stress/java/loopsAndThreads crashed
> Reviewed-by:
> The current code in ConstantPoolCacheEntry::set_method_handle_common uses
> a CAS to determine which thread wins the race to link an
> invokehandle/invokedynamic call site.  The winning thread does the linking
> while the other threads are waiting in a loop until the winning thread
> finishes.  The waiting threads enter the Patching_lock and call os::yield
> to give the winning thread more CPU time.
> Unfortunately the implementation of os::yield on Solaris uses the
> Threads_lock and we hit this assert:
> #  fatal error: acquiring lock Threads_lock/15 out of order with lock Patching_lock/1 -- possible deadlock
> Even worse, the CAS might fail spuriously, leaving no winning thread
> at all because we don't loop around the CAS.  This may lead to hangs.
> src/share/vm/interpreter/interpreterRuntime.cpp
> src/share/vm/oops/cpCache.cpp
> src/share/vm/oops/cpCache.hpp

More information about the hotspot-dev mailing list