RFC: deoptimization & stack bang (8032410: compiler/uncommontrap/TestStackBangRbp.java times out on Solaris-Sparc V9)
roland.westrelin at oracle.com
Thu Feb 20 10:02:12 PST 2014
I’m looking for comments on this rather than a review. I think the test deadlocks because, when the stack bang in the deopt or uncommon-trap blob triggers an exception, we throw the exception right away even if the deoptee still holds some monitors.
One solution, prototyped on sparc, is this: rather than propagate the exception from the signal handler, we return to the deopt/uncommon-trap blob and unlock the monitors that the thread has locked in the deoptee. This would need more platform-dependent code to support other platforms.
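The prototype above can be sketched roughly as follows. This is a hedged, self-contained illustration, not the real HotSpot code: `MonitorSlot` and `release_deoptee_monitors` are hypothetical stand-ins for the deoptee's BasicObjectLock slots and the runtime's unlock path.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical stand-in for a deoptee frame's monitor slots; the names
// are illustrative, not the actual HotSpot types.
struct MonitorSlot { bool locked; };

// Sketch of the idea: before letting the StackOverflowError raised by
// the blob's stack bang propagate, release every monitor the deoptee
// still holds, mirroring what normal interpreter unwinding would do.
// Returns the number of monitors released.
size_t release_deoptee_monitors(std::vector<MonitorSlot>& monitors) {
  size_t released = 0;
  for (auto& m : monitors) {
    if (m.locked) {
      m.locked = false;  // the real code would call the runtime unlock path
      ++released;
    }
  }
  return released;
}
```

The point is only ordering: the unlocking has to happen before the exception is thrown, which is why the prototype returns from the signal handler to the blob instead of propagating directly.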
Rather than do that (and this is where I’m looking for comments): why couldn’t the compilers compute the maximum size of the interpreter frames for the nmethod being compiled, and generate a stack bang for that size rather than for the size of the compiled frame? Then we wouldn’t have to worry about banging the stack in the deopt/uncommon-trap blobs, the bug above with the locked monitors couldn’t occur, and we wouldn’t see the stack overflows during deoptimization that we’ve had recently. Is there a reason I’m missing why this would be a bad idea?
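A minimal sketch of the proposed computation, under the assumption that deoptimizing one compiled frame produces one interpreter frame per inlining level, so the bang has to cover whichever is larger: the compiled frame or the sum of the interpreter frames it could expand into. The struct and function names are made up for illustration.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical per-inlined-method record: the interpreter frame size
// (in bytes) this method would need after deoptimization.
struct InlinedMethod { size_t interpreter_frame_size; };

// The stack-bang size for the nmethod: enough for the compiled frame
// itself, but also for the worst case where deopt replaces it with one
// interpreter frame per inlined method.
size_t deopt_bang_size(size_t compiled_frame_size,
                       const std::vector<InlinedMethod>& inlining) {
  size_t interp_total = 0;
  for (const auto& m : inlining) {
    interp_total += m.interpreter_frame_size;
  }
  return std::max(compiled_frame_size, interp_total);
}
```

With a bang of this size performed in the compiled prologue, the deopt and uncommon-trap blobs would never need to bang at all, which is what removes the locked-monitor hazard.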