李三红(三红) sanhong.lsh at
Tue Jun 5 09:06:29 UTC 2018

Hi Tobias,
Thanks for your questions, see my inline comments.
(As the formatting in my last mail was messed up, I'm resending it.)

From: hotspot-dev [mailto:hotspot-dev-bounces at] On Behalf Of Tobias Hartmann
Sent: June 4, 2018 15:30
To: yumin qi <yumin.qi at>
Cc: hotspot-dev at
Subject: Re: JEP:

Hi Yumin,

thanks for the details!

On 01.06.2018 05:01, yumin qi wrote:
> Thanks for your review/questions. First I would like to introduce some
> background on how JWarmup applications are used and how we
> implement the interaction between the application and the scheduling (dispatch) system, DS.
> The load of each application is controlled by DS. The profiling data
> is collected against real input data (so it mostly matches the
> application run in production environments, thus reducing the
> chance of deoptimization). When running with profiling data, the application gets
> a notification from DS when compiling should start; the application then
> calls an API to notify the JVM that the hot methods recorded in the file can be compiled. After the compilations, a message is sent to DS so that DS will dispatch load to this application.

Could you elaborate a bit more on how the communication between the DS and the application works? A generic user application should not be aware of the pre-compilation, right? Let's assume I run a little Hello World program: when/how is pre-compilation triggered?

The user application uses an API to tell JWarmup to kick off pre-compilation at some appropriate point, generally after application initialization is done. The basic workflow is as follows:
- DS freezes incoming user requests.
- The app does the necessary initialization.
- Once initialization is done, the app notifies JWarmup to kick off pre-compilation (via *API*).
- JWarmup does the compilation work.
- The app gets notified when the compilation is done (via *API*, by polling).
- DS resumes the requests; the application is now ready for service.
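The app-side half of the hand-off above could be sketched roughly like this. Note the entry-point names (notifyStartup, checkIfCompilationIsComplete) are hypothetical stand-ins, stubbed out here so the flow is runnable; the real JWarmup API may look different:

```java
// Sketch of the app-side warmup hand-off: notify JWarmup after init,
// then poll until pre-compilation finishes before accepting traffic.
public class WarmupFlow {
    // Hypothetical stand-in for the JWarmup API surface.
    static class JWarmupStub {
        private volatile boolean done = false;
        void notifyStartup() {
            // App says: initialization finished, start pre-compilation.
            new Thread(() -> { sleep(50); done = true; }).start();
        }
        boolean checkIfCompilationIsComplete() { return done; }
        static void sleep(long ms) {
            try { Thread.sleep(ms); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
    }

    public static void main(String[] args) {
        JWarmupStub warmup = new JWarmupStub();
        // 1. App finishes its own initialization (elided).
        // 2. Kick off pre-compilation.
        warmup.notifyStartup();
        // 3. Poll until compilation is done; only then would DS resume traffic.
        while (!warmup.checkIfCompilationIsComplete()) {
            JWarmupStub.sleep(10);
        }
        System.out.println("warmup complete; ready for traffic");
    }
}
```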

This is how we use JWarmup, but we do believe the above process could be generalized: any app running inside a cloud datacenter could benefit from this model by integrating Java compilation with DS.
In this way, the Java platform can provide a flexible mechanism for a cloud scheduling system to define compilation behavior according to the load at a given time.

Do I understand correctly that the profile information is only used for "standalone" compilation of a method, or is it also used for inlining? For example, if we have profile information for method B and method A inlines method B, does it use the profile information available for B when there is no profile information available for A?

It does support inlining.
In fact, during the "recording" phase JWarmup also records the "MethodData" information, which can be used for compilation in the next run.
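As a toy illustration of why the callee's recorded profile matters (the class and method names here are made up for the example): if b() is hot and its recorded receiver-type profile says the argument is always a Circle, a compiler can inline b() into a() and devirtualize the area() call there, even when a() itself has no profile yet in the new run:

```java
// Toy shape of the A-inlines-B scenario from the question above.
interface Shape { double area(); }

final class Circle implements Shape {
    final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

public class InlineProfileDemo {
    // Hot, profiled callee: its receiver-type profile (always Circle)
    // is the kind of data MethodData recording would capture.
    static double b(Shape shape) { return shape.area(); }

    // Unprofiled caller: inlining b() lets the compiler reuse b()'s profile.
    static double a(Shape shape) { return b(shape) * 2; }

    public static void main(String[] args) {
        System.out.println(a(new Circle(1.0)) > 6.28);
    }
}
```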

> A: During a run with pre-compiled methods, deoptimization was only seen
> with null-check elimination, so that optimization is disabled (the null check is not eliminated). The profile data
> is not updated or re-used. That is, after being deoptimized, the method starts from interpreter mode as if freshly loaded.

Why do you only see deoptimizations with null-check elimination? A pre-compiled method can still have uncommon traps for reasons like an out-of-bounds array access or some loop predicate that does not hold, right?

We saw that null-check elimination caused the deoptimization in most cases; that's the reason it has been disabled by default in JWarmup.
But you are correct: an assumption might turn out to be wrong in some other cases. That's why JWarmup provides the user an option to deoptimize the pre-compiled methods after the peak load, via the -XX:CompilationWarmUpDeoptTime control flag, which allows the user to choose a time roughly after the peak to do the deoptimization.
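For illustration, such a launch could look like the following. Only the -XX:CompilationWarmUpDeoptTime flag name is taken from this discussion; the value, its unit, and any companion flags depend on the JWarmup build and are assumptions here (the flag does not exist in a stock HotSpot JVM):

```shell
# Sketch: ask JWarmup to deoptimize pre-compiled methods some time after
# startup, chosen to land roughly after the daily traffic peak.
# The value/unit shown is hypothetical.
java -XX:CompilationWarmUpDeoptTime=7200 -jar app.jar
```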

