<p>Remi, </p><div><br></div><div>Did you publish Dalvik changes somewhere? (or plan to?)</div><div><br></div><div>Cheers </div><br><br><div class="gmail_quote"><p>On Wed, Apr 3, 2013 at 6:32 PM, Remi Forax <span dir="ltr"><<a href="mailto:forax@univ-mlv.fr" target="_blank">forax@univ-mlv.fr</a>></span> wrote:<br></p><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;"><p>On 04/03/2013 06:12 PM, Cédric Champeau wrote:
<br>> On 03/04/2013 17:50, Remi Forax wrote:
<br>>> Sorry to be rude, but it's still a micro-benchmark ...
<br>> First of all, yes, it is :) And like the classical Fibonacci benchmark,
<br>> it's useless but relevant for understanding how things work :)
<br>>> for invokedynamic, in theory it's not: if your method handle is constant
<br>>> either because it's a static final or because it is nested in a
<br>>> CallSite, it's constant for the JIT, thus fully optimized.
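<br><br>[A minimal sketch of what Remi describes; the class and method names are mine, not from the thread. A static final MethodHandle is a true constant for the JIT, which can then inline the target:]<br><br>

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class ConstantHandle {
    // static final => the JIT sees a compile-time constant handle
    // and can fully inline the call to String.length()
    static final MethodHandle LENGTH;
    static {
        try {
            LENGTH = MethodHandles.lookup()
                .findVirtual(String.class, "length", MethodType.methodType(int.class));
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public static void main(String[] args) throws Throwable {
        // invokeExact requires the exact symbolic type: (String)int
        int len = (int) LENGTH.invokeExact("hello");
        System.out.println(len); // prints 5
    }
}
```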
<br>>>
<br>>> for a method handle on the stack, the method handle is obviously not constant;
<br>>> moreover, the JIT is not able to use tricks to make it constant (like
<br>>> hoisting it out of the loop,
<br>>> or running the inlining algorithm in a backward way, etc.)
<br>>> More on that later ...
<br>> Well, even if I make my MH variable declaration "final", the performance
<br>> is the same, so I assume there's no local analysis, right?
<br><br>'final' on a local variable is a modifier for the compiler, not for the JIT.
<br>There are several local analyses, but not the ones you think.
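<br><br>[To illustrate the point with a sketch of my own (names are mine, not from the thread): a 'final' local compiles and runs fine, but the modifier only constrains javac; the JIT still sees an ordinary local slot, not a constant handle:]<br><br>

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class FinalLocal {
    static boolean check(String s) throws Throwable {
        // 'final' here is a javac-level modifier (it forbids reassignment);
        // for the JIT, 'mh' is still a mutable local slot, not a constant,
        // so this call is not specialized the way a static final handle is.
        final MethodHandle mh = MethodHandles.lookup()
            .findVirtual(String.class, "isEmpty", MethodType.methodType(boolean.class));
        return (boolean) mh.invokeExact(s);
    }

    public static void main(String[] args) throws Throwable {
        System.out.println(check(""));  // prints true
        System.out.println(check("x")); // prints false
    }
}
```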
<br><br>>> As you said, it's a micro-benchmark, so you end up with unusually good
<br>>> performance;
<br>>> for example, the call to j.l.r.Method is optimized as it will never be in
<br>>> a real program
<br>>> (you call the method in the same unit where it was declared, and
<br>>> you have fewer than 3 instances of Method that are called more than 60
<br>>> times).
<br>>>
<br>>> Now, Krystal is currently working on adding a cache for when a method handle
<br>>> is called,
<br>>> so in a few betas, the performance of the method handles in your
<br>>> micro-benchmark will improve dramatically.
<br>> That's good to know :)
<br>>> And the cache for MethodHandle is better than the cache which is used for
<br>>> j.l.r.Method because it can be local to a callsite and not global (in
<br>>> fact local to one callsite in the code of j.l.r.Method that is used by
<br>>> the invocation path when you call "invoke").
<br>>>
<br>>> Anyway, because it's a micro-benchmark, the result will be as useless as
<br>>> it is now for predicting the behaviour of a real-world program.
<br>> I can perfectly understand why some path is optimized or not; what I
<br>> find surprising is more the order of magnitude here. So yes, calling
<br>> invoke() takes more than 50s whereas reflection takes only 1.2s, and even
<br>> invokeExact is slower (~3 to 1). My point is more that if MethodHandles
<br>> are branded as "faster" than reflection (I heard you say it ;)), then
<br>> there is something wrong.
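<br><br>[For concreteness, a minimal sketch of my own of the three call styles being compared here; this is not Cédric's actual benchmark:]<br><br>

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.lang.reflect.Method;

public class CallStyles {
    public static void main(String[] args) throws Throwable {
        // 1. core reflection: boxed arguments and return value
        Method m = String.class.getMethod("length");
        int viaReflection = (int) m.invoke("hello");

        MethodHandle mh = MethodHandles.lookup()
            .findVirtual(String.class, "length", MethodType.methodType(int.class));

        // 2. generic invoke: adapts the call-site type via asType if needed
        int viaInvoke = (int) mh.invoke("hello");

        // 3. invokeExact: the call-site descriptor (String)int must match exactly
        int viaInvokeExact = (int) mh.invokeExact("hello");

        System.out.println(viaReflection + " " + viaInvoke + " " + viaInvokeExact);
        // prints 5 5 5
    }
}
```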
<br><br>I say that in the context of proxies, i.e. where you can emit an
<br>invokedynamic to store the method handle at the callsite, method handles are
<br>faster than using reflection because you don't need proxies anymore
<br>(remember I said that too :).
<br><br>> You should expect people to do stupid things like I did, thinking it will be faster than plain reflection. At least, the docs should mention something about performance.
<br><br>The javadoc is the spec; we are talking about the Oracle implementation
<br>of the spec.
<br>On Android, Jerome and I have run some tests showing that a
<br>MethodHandle.invokeExact is always faster than a reflective call;
<br>that's just because reflection is super slow on Android, not the opposite.
<br><br>>> cheers,
<br>>> Rémi
<br>> Thanks for your answer :)
<br><br>Rémi
<br><br>_______________________________________________
<br>mlvm-dev mailing list
<br>mlvm-dev@openjdk.java.net
<br>http://mail.openjdk.java.net/mailman/listinfo/mlvm-dev
<br></p></blockquote></div><br>