RFR(S): 8130309: need to bailout cleanly if CompiledStaticCall::emit_to_interp_stub fails when codecache is out of space

Tobias Hartmann tobias.hartmann at oracle.com
Fri Jul 24 11:29:21 UTC 2015

Hi Roland,

thanks for the review!

On 23.07.2015 15:51, Roland Westrelin wrote:
> assembler.cpp
> 68     Compile::current()->env()->record_failure("CodeCache is full");
> That assumes we are calling this from c2 but it can be called from c1 as well.

You are right. I moved this code to the C2 methods calling 'AbstractAssembler::start_a_stub()'. The corresponding C1 methods already contain a call to 'bailout()'.

I also noticed that we multiply the 'to_interp_stub_size()' by 2 when creating the stub (see compiledIC_sparc.cpp):

68   __ start_a_stub(to_interp_stub_size()*2);

This seems to be unnecessary and causes problems in 'Compile::scratch_emit_size()' because we fix the stub section of the CodeBuffer to MAX_stubs_size and therefore fail to emit the to-interpreter stub since to_interp_stub_size()*2 > MAX_stubs_size. I removed the multiplication and added an assert to 'Compile::scratch_emit_size()'.
> Did you add code for c1 to be on the safe side or have you observed problems with c1?

I did not encounter the problem with C1 but added the checks to be on the safe side. Actually, we do call bailout() in C1 'LIR_Assembler::emit_static_call_stub()' if the stub cannot be created, but without checking for bailed_out() in the calling method we continue to emit code and crash (see explanation below).

> "the corresponding code blob is freed. In CodeBuffer::free_blob() -> CodeBuffer::set_blob() we set addresses to 'badAddress' and therefore hit the assert."
> What addresses are set to badAddress?

If CodeBuffer::expand() fails, the corresponding buffer blob in the code cache is freed and the code section addresses are set to 'badAddress' (see CodeBuffer::set_blob(NULL)). If we continue to emit code into this buffer or use its section addresses, we fail. The assert "wrong size of mach node" is hit because we use CodeBuffer::insts_size() which is now -1647855886. Another instance of this bug crashes because we continue to emit code and call CodeSection::emit_int32(), which fails because end() is set to 'badAddress'.

I did some more testing with "java -XX:+StressCodeBuffers -Xcomp -version" and found more places where we have to check for a failed CodeBuffer expansion so that we do not continue to emit code and crash. I added the corresponding checks.

Here is the new webrev:


> Roland.
>> Thanks,
>> Tobias
>> On 21.07.2015 15:40, Tobias Hartmann wrote:
>>> Hi,
>>> please review the following patch.
>>> https://bugs.openjdk.java.net/browse/JDK-8130309
>>> http://cr.openjdk.java.net/~thartmann/8130309/webrev.00/
>>> Problem:
>>> While C2 is emitting code, an assert is hit because the emitted code size does not correspond to the size of the instruction. The problem is that the code cache is full and therefore the creation of a to-interpreter stub failed. Instead of bailing out we continue and hit the assert.
>>> More precisely, we emit code for a CallStaticJavaDirectNode on aarch64 and emit two stubs (see 'aarch64_enc_java_static_call' in aarch64.ad):
>>> - MacroAssembler::emit_trampoline_stub() -> requires 64 bytes
>>> - CompiledStaticCall::emit_to_interp_stub() -> requires 56 bytes
>>> However, we only have 112 bytes of free space in the stub section of the code buffer and need to expand it. Since the code cache is full, the expansion fails and the corresponding code blob is freed. In CodeBuffer::free_blob() -> CodeBuffer::set_blob() we set addresses to 'badAddress' and therefore hit the assert.
>>> Solution:
>>> Even if we have enough space in the instruction section, we should check for a failed expansion of the stub section and bail out immediately. I added the corresponding check and also removed an unnecessary call to 'start_a_stub' after we already failed. 
>>> Testing:
>>> - Failing test
>>> - JPRT
>>> Thanks,
>>> Tobias
