Introductions
John Rose
John.Rose at Sun.COM
Fri Apr 25 14:54:22 PDT 2008
[CC-ing hotspot-compiler-dev because this is of general interest.]
On Feb 6, 2008, at 3:15 PM, Marcin Mielżyński wrote:
> First, I'd like to introduce the problem a little bit. When testing
> joni performance there was clear evidence that the main bytecode
> interpreter switch got slowed down by the large number of cases
> (reducing them to only the ones used by a compiled regexp
> substantially increased performance). Before I noticed the switch
> problem, I tried to inline some opcode methods into the big switch
> (using local variables everywhere, etc.); the beginnings were very
> promising, as the tested regexps turned out to be 3x faster without
> changing the actual implementation logic, but as I continued to add
> more and more inlined cases the whole switch became something like
> 8x slower.
After a few hundred bytecodes, the inlining heuristic starts to get
scared. Hand inlining is like that: The JVM prefers to do this for
you.
You can play with -XX:FreqInlineSize=N where N is larger than your
switch method. Also maybe bump -XX:MaxInlineSize (default 35).
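For example (these are unsupported, HotSpot-specific -XX flags; the benchmark class name is a placeholder, and exact defaults and diagnostic-unlock requirements vary by build):

```shell
# Raise the hot-method inlining budget past the size of the switch method
# (FreqInlineSize is measured in bytecodes; pick N larger than your method).
java -XX:FreqInlineSize=1000 -XX:MaxInlineSize=100 YourBenchmark

# Watch what the inliner actually decides.
java -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining YourBenchmark
```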
But it is better to use small methods and let the JVM pick
inlinings. You may have found a weakness in the profiling that
causes the inliner to fall down.
> Then I realized that there's something wrong with inlining. After
> that I tried different approaches, like splitting the switch into
> ten switches and even a tree-like one (which turned out to be
> complete nonsense, of course). Although the split switches worked
> better for the cases I tested, the whole thing became a lot slower
> on average.
Another thing that's better since Feb. is we now have a disassembly
story that we can share with the public. I put in a plugin interface
to the Gnu binutils library; it's integrated here, in version 13 of
the JVM:
http://hg.openjdk.java.net/jdk7/hotspot/hotspot/rev/c7c777385a15
There are sources and (imperfect) build instructions for making the
plugin.
The point of the -XX:+PrintAssembly option is to give you a very
concrete view (often too concrete) as to what decisions the JIT is
making. In your experiments you would have seen a range of code
shapes, from decision trees to jump tables.
See this and its child pages (you already know about some of this):
http://wikis.sun.com/display/HotSpotInternals/PerformanceTechniques
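A minimal sketch of how to use it (the flag names are real HotSpot diagnostics; on most builds PrintAssembly needs both the diagnostic unlock and the disassembler plugin on the library path):

```shell
# Dump JIT-compiled code for all methods (needs the binutils/hsdis plugin).
java -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly YourBenchmark

# Narrow the output to a single method with a compile command.
java -XX:+UnlockDiagnosticVMOptions \
     -XX:CompileCommand=print,Aswitch2.testSwitch Aswitch2
```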
> Then I decided to prepare a reduced testcase (attached as
> Aswitch2.java) for the problem and saw rather surprising results.
> With the client compiler the benchmarks didn't depend on the number
> of cases, but for the HotSpot server compiler it turned out to be
> the opposite (though not linearly).
> As you've mentioned before, with a small number of cases (and/or low
> complexity of the whole method) HotSpot is able to inline the method
> and the switch into its caller and then prove that the switching
> value is constant. After increasing the number of cases (to 5..10) I
> began to see:
> <inline_fail reason='already compiled into a big method'/>
That's a weak part of our heuristic. Hmm. It's especially bad when
there is a great variation in size (switch on constant vs. switch on
non-constant). We know we need heuristics that can estimate the
effects of constant folding. Branches (switches) based on parameter
values are a key case.
> and server compiled code was already slower than client compiled
> (250ms client/280-300ms server after being warmed up)
> After adding another 20 cases this:
> <inline_fail reason='hot method too big'/>
That's FreqInlineSize in action.
> appeared in hotspot.log and the timings degraded to 650ms; after
> adding another 30 or so cases the timings got even worse (700ms and
> more after being warmed up).
>
> So I began to draw conclusions that counting case hits (I found it
> in the MultiBranchData class and its consumers) makes HotSpot try to
> inline the most frequent case into the caller or something. I can
> imagine this for a two-way branch with a stable flow path, but when
> the switch case distribution begins to be somewhat 'random/unknown'
> to the HotSpot profiler (in this microbenchmark it is not the case,
> though, as only one case is being hit) and the switch is big enough,
> the whole thing loses the point; yet switches should be O(1) with
> dense values.
In how many places do you expect the switch code to be inlined? Once
for the general case and many places for constant switch selectors?
> Even more surprisingly, the benchmark gets about a 10% boost with
> jump tables turned off
It must be getting smart or lucky with decision trees. If a small
minority of cases determines the performance, and those get compiled
into the heart of the decision tree (and hence the loop), then the
other cases don't really matter.
> whereas -XX:MaxInlineSize=500 or so doesn't affect the benchmark
> (though it gave a boost for some other joni regexp benchmarks)
FreqInlineSize takes precedence over MaxInlineSize for hot methods,
IIRC. If so, it's a bug that MaxInlineSize > FreqInlineSize silently
has no effect.
> Here are my numbers for aforementioned microbenchmark (64 cases in
> switch, 20000000 iterations):
>
> java -client
>
> 266
> 250
> 250
> 250
> 250
> 250
> 250
> 250
> 250
> 250
>
> java -server
>
> 1984
> 625
> 688
> 703
> 687
> 703
> 688
> 703
> 687
> 704
>
> java -server -XX:-UseJumpTables
>
> 1985
> 578
> 657
> 656
> 640
> 657
> 640
> 657
> 656
> 640
>
> Harmony (client)
>
> 422
> 391
> 796
> 219
> 219
> 234
> 219
> 234
> 219
> 235
>
> Harmony -server
>
> 828
> 312
> 485
> 515
> 235
> 234
> 234
> 235
> 234
> 235
>
> I think these numbers show there's something wrong with switches
> and the opto compiler.
>
> If you don't mind I'd like to tell a bit about joni
> (https://svn.codehaus.org/jruby/joni/) internals. It's a byte-based,
> bytecode-compiled regex engine that's a more or less straight port
> of Oniguruma (though some wild pointer manipulations performed by
> Oniguruma made certain things impossible to port directly). It
> uses encoding abstraction to access byte arrays (though there are
> optimizations for single byte encodings that are more aggressive
> than those found in Oniguruma). Classes in encoding package
> represent different encoding implementations extending the abstract
> class Encoding (it was an interface before, but invokevirtual
> turned out to be noticeably faster); they deal with character length
> tables, character folding, code point conversions and navigating
> throughout byte arrays. The parsing engine consists of
> ScannerSupport.java/Lexer.java responsible for scanning/tokenizing,
> quite standard stuff (except they're syntax configurable, there is
> a number of preset syntaxes in Syntax.java). Parser.java is
> responsible for building an AST (from nodes in ast package), it
> also case folds StringNodes when the case-insensitive flag is turned
> on (it also takes advantage of more sharing and adds support for
> COW, which is almost impossible in the C version). Analyzer.java performs
> AST integrity/infinite recursion checks (Oniguruma is able to
> define named groups and either refer them or call them at different
> levels of call stack depth). Analyzer.java is also responsible for
> most optimizations like quantifier reduction/elimination, fixed
> quantifier deletion/expansion, case fold expansion, string
> expansion, unused group elimination and other AST transformations
> (as they call them 'automatic posseivation' or 'look behind
> alternative division'). It also does optional (x*)* combination
> explosion checks and calculates potential minimum/maximum lengths
> based on AST and uses (OptMapInfo.java, OptExactInfo.java,
> NodeOptInfo.java, MinMaxLen.java) classes that contain some (?)
> statistical knowledge to compute fail fast/find fast application
> thresholds and selects fail fast/find fast search algorithms
> (SearchAlgorithm.java), it also selects short circuit paths given
> anchor information. Compiler.java does a pretty standard job
> traversing the AST and emitting opcodes into an int[] array
> (Oniguruma uses a void* array, but in Java we'd need to pack control
> data like string lengths/relative addresses into a byte[] with
> inefficient lookup). It does that in two passes (first it
> calculates code lengths to get eventual jump/call addresses - also
> standard thing), unrolls/eliminates loops and checks whether the
> compiled code will need a stack when run.
That is a well-written, mature RE package. Having worked on a couple
in my own deep past, it warms my heart.
Do you expect to get a different inlining of the Big Switch for each
Encoding? That will be tricky but may be desirable. You almost want
to use something like anonymous classes as a templating mechanism to
forcibly copy the Big Switch into distinct type contexts.
Encodings as virtuals: For megamorphic calls, vtables are a little
cheaper than itables. But you win bigger when you factor things so
that interface or virtual calls are monomorphic or at most bimorphic.
For some info on the bleeding edge of hotspot interface performance,
see: http://openjdk.java.net/projects/mlvm/subprojects.html#interface-perf
> StackMachine.java contains methods responsible for stack management/
> manipulation. ByteCodeMachine.java is the actual bytecode
> interpreter implementation (with matchAt method containing the big
> switch). Each opcode is implemented as a separate method.
That is reasonable. It keeps the switch per se small, and
(implicitly) asks the JVM to inline the opcode methods that really
matter.
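As a hedged sketch of that structure (a toy machine, not joni's actual opcode set): keep the dispatch switch tiny and give each opcode body its own small method, so the JIT is free to inline exactly the opcodes that turn out to be hot.

```java
// A toy bytecode machine: the dispatch switch stays tiny, and each opcode
// body is a small separate method, so the JIT can pick per call site
// which opcode bodies to pull into the hot loop.
class MiniMachine {
    static final int OP_PUSH = 0, OP_ADD = 1, OP_END = 2;

    private final int[] code;
    private final int[] stack = new int[16];
    private int sp;

    MiniMachine(int[] code) { this.code = code; }

    // Each opcode method is well under the default 35-bytecode
    // MaxInlineSize budget, so the JVM can inline it on its own.
    private void opPush(int v) { stack[sp++] = v; }
    private void opAdd()       { int b = stack[--sp]; stack[--sp] += b; }

    int run() {
        int ip = 0;
        while (true) {
            switch (code[ip++]) {
                case OP_PUSH: opPush(code[ip++]); break;
                case OP_ADD:  opAdd();            break;
                case OP_END:  return stack[--sp];
                default: throw new IllegalStateException("bad opcode");
            }
        }
    }

    public static void main(String[] args) {
        // 2 + 3 via PUSH 2, PUSH 3, ADD, END
        int[] prog = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_END };
        System.out.println(new MiniMachine(prog).run()); // prints 5
    }
}
```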
> For some patterns like /a.*b/, opcodes like [anychar*-peek-next-sb:b]
> and similar ones are emitted which contain fast inner loops; for
> those, joni can be even twice as fast as Oniguruma. The issue shows
> up when the code is main-switch heavy; a good example is the
> http://shootout.alioth.debian.org/gp4sandbox/benchmark.php?test=regexdna&lang=ruby&id=0
> benchmark, on which we perform rather poorly (up to 2x slower than
> the C version). Keeping in mind that joni uses single-byte
> specialized bytecodes, this seems a bit strange; the pattern:
>
> /(?i:[cgt]gggtaaa|tttaccc[acg])/
>
> compiles down to:
>
> [push:(41)] [cclass-sb:6] [push:(4)] [exact1:g]
> [jump:(2)] [exact1:G] [push:(4)] [exact1:g] [jump:(2)]
> [exact1:G] [push:(4)] [exact1:g] [jump:(2)] [exact1:G]
> [exactn-ic-sb:4:taaa] [jump:(39)] [push:(4)] [exact1:t] [jump:(2)]
> [exact1:T] [push:(4)] [exact1:t] [jump:(2)] [exact1:T]
> [push:(4)] [exact1:t] [jump:(2)] [exact1:T] [exactn-ic-sb:4:accc]
> [cclass-sb:6] [end] [finish]
>
> for sample string:
>
> GGCCGGGCGCGGTGGCTCACGCCTGTAATCCCAGCACTTTGGGAGGCCGAGGCGGGCGGA
>
> the execution goes like this:
>
> forward_search_range: str: 0, end: 60, s: 0, range: 60
> forward_search_range success: low: -1, high: 0, dmin: 0, dmax: 1
> match_at: str: 0, end: 60, start: 0, sprev: 0
> size: 60, start offset: 0
> 0> "GGCCGGG..." [push:(41)]
> 0> "GGCCGGG..." [cclass-sb:6]
> 1> "GCCGGGC..." [push:(4)]
> 1> "GCCGGGC..." [exact1:g]
> 1> "GCCGGGC..." [exact1:G]
> 2> "CCGGGCG..." [push:(4)]
> 2> "CCGGGCG..." [exact1:g]
> 2> "CCGGGCG..." [exact1:G]
> 0> "GGCCGGG..." [push:(4)]
> 0> "GGCCGGG..." [exact1:t]
> 0> "GGCCGGG..." [exact1:T]
> 0> "GGCCGGG..." [finish]
> forward_search_range: str: 0, end: 60, s: 1, range: 60
> forward_search_range success: low: 0, high: 1, dmin: 0, dmax: 1
> match_at: str: 0, end: 60, start: 1, sprev: 0
> size: 60, start offset: 1
> 1> "GCCGGGC..." [push:(41)]
> 1> "GCCGGGC..." [cclass-sb:6]
> 2> "CCGGGCG..." [push:(4)]
> 2> "CCGGGCG..." [exact1:g]
> 2> "CCGGGCG..." [exact1:G]
> 1> "GCCGGGC..." [push:(4)]
> 1> "GCCGGGC..." [exact1:t]
> 1> "GCCGGGC..." [exact1:T]
> 0> "GGCCGGG..." [finish]
> .....
>
> The case fold expansion optimization actually had the opposite
> effect here, as we seem to have a problem with the switch method
> performance itself (all opcodes are very cheap here). We might get
> rid of this optimization, as [exactn-ic-sb:n:...] is very cheap for
> some single byte encodings (a direct table lookup), but a multibyte
> one ([exactn-ic:n:...]) has a very high cost, as it uses two buffers
> to unfold the code points and make a case-insensitive comparison.
> Also, it would be an ugly workaround.
It sounds like you have a specific need for customizing your Big
Switch for case-folding searches. This takes me one step closer to
the templating precipice. One way to encourage templating-like
behavior from the JVM is to put your Big Switch into an abstract base
class and then specialize it in a range of subclasses. In this case,
perhaps the range of subclasses is dynamically created. (Depends on
the number of degrees of freedom across which you want to customize.
Can you statically enumerate in a bunch of hand-named subclasses, or
not?)
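A minimal sketch of that templating idea (all names invented for illustration): the shared loop lives in an abstract base class, and each concrete subclass pins down one degree of freedom. With monomorphic call sites, the JIT compiles a separate copy per subclass and can fold the test away in each specialized loop.

```java
// Subclass-templating sketch: caseFold() is a compile-time constant in
// each concrete subclass, so each specialized copy of countA() can have
// the branch constant-folded out by the JIT.
abstract class ScanMachine {
    abstract boolean caseFold();  // the degree of freedom we specialize on

    // Count occurrences of 'a', optionally case-folding the input bytes.
    int countA(byte[] input) {
        int n = 0;
        for (byte b : input) {
            int c = caseFold() ? Character.toLowerCase(b) : b;
            if (c == 'a') n++;
        }
        return n;
    }
}

final class ExactScan   extends ScanMachine { boolean caseFold() { return false; } }
final class FoldingScan extends ScanMachine { boolean caseFold() { return true;  } }
```

If the degrees of freedom can't be enumerated by hand, the subclasses could instead be generated at runtime, at the cost of the class-loading overhead discussed below.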
> I admit there still might be bottlenecks in the Java code I'm not
> aware of. On the other hand, work on a JVM-bytecode-compiled version
> has been started. It will require completely new code for stack
> manipulation, fail-fast logic and anchors (Matcher.java). Though,
> it will bring exciting new optimization opportunities, like taking
> advantage of the Java stack and building fail-fast code from
> templates (not to mention even more specialized opcode-equivalent
> implementations that will not use switches).
I don't have much hope that the java stack will give you much
leverage on JVM performance, unless you use it in ways that are very
similar to javac output. The java stack is for naming temporary
values compactly, as opposed to a plain register-based architecture.
On the other hand, compiling to bytecodes is a great way to bypass
all sorts of Big Switch issues. Use anonymous classes for that, if
the JVM supports them. (No plans to standardize these yet on other
JVMs. JSR 292 has its hands full with invokedynamic.)
The PyPy experience shows that partial evaluation of your Big Switch
can take you a long way. I'd like to make our switch optimizations
work better, even if you end up going all the way to bytecodes.
BTW, going to bytecodes means you start dealing with the cost of
registering, compiling, and GC-ing the results of class loading.
Anonymous classes help address this, but it's expensive; we've seen
scaling problems with such things, e.g., databases that compile every
SQL query to a tiny class. Perhaps the best thing to do would be
(like JRuby) to go to bytecodes only after the same RE had been used
N times or more (N = 40, maybe). Meanwhile, the Big Switch gives
good performance for low-use REs. Best of all would be a way to fold
the Big Switch with a constant code vector and tell the JVM "partial
evaluate this". With the right library factoring, we can get close
to that ideal, but bytecode generation is probably part of the mix
for the foreseeable future.
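A sketch of that compile-after-N-uses tiering (all names hypothetical; the `x * 2` bodies stand in for real matching logic):

```java
import java.util.function.IntUnaryOperator;

// Hypothetical tiering sketch: run a pattern through the interpreted
// Big Switch until it has been used N times, and only then pay the
// one-time cost of producing a specialized "compiled" form.
class TieredRegex {
    private static final int COMPILE_THRESHOLD = 40;  // "N = 40, maybe"
    private int uses;
    private IntUnaryOperator impl = this::interpret;  // start interpreted

    private int interpret(int x) {
        if (++uses >= COMPILE_THRESHOLD) {
            impl = this::compiledMatch;  // tier up after N uses
        }
        return x * 2;  // stand-in for the interpreted match loop
    }

    private int compiledMatch(int x) {
        return x * 2;  // stand-in for the generated-bytecode match
    }

    int match(int x) { return impl.applyAsInt(x); }
}
```

In joni's case the "compiled" tier would be a generated bytecode class; the point is only that low-use REs never pay the class registration and GC cost.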
> Here's the code from the output above:
>
> byte[] pat = "(?i:[cgt]gggtaaa|tttaccc[acg])".getBytes();
> byte[] str =
> "GGCCGGGCGCGGTGGCTCACGCCTGTAATCCCAGCACTTTGGGAGGCCGAGGCGGGCGGA".getBytes();
> Regex re = new Regex(pat, 0, pat.length, Option.NONE,
> ASCIIEncoding.INSTANCE, Syntax.RUBY);
>
> Matcher m = re.matcher(str, 0, str.length);
> int result = m.search(0, str.length, Option.DEFAULT);
>
> To enable debug information (which will output the AST structure,
> applied optimizations and the execution itself), the DEBUG_ALL flag
> in Config.java needs to be turned on.
>
>
> So far I've been reading publications/papers from following locations:
>
> http://java.sun.com/javase/technologies/hotspot/publications/
> http://www.ssw.uni-linz.ac.at/Research/Papers/
> https://openjdk.dev.java.net/hotspot/
>
> I've also done some review of the HotSpot sources (I'm overwhelmed
> by them), mainly the opto bytecode parser and profiler structures,
> plus a bit of the call site profiling and caching logic. I also did
> some review of the optimization phases. But it's still too early for
> me to draw any conclusions, so my question is: where can I find
> additional information, if it exists, and what is the best approach
> to take when analyzing the sources?
We've started a wiki for this purpose; see above. It would be great
if you (or anyone else on the hotspot learning curve) would
contribute to it as you discover important facts. I've added stuff,
but since I've been working on this for 10 years, it's hard to have
perspective on what newcomers need to know. And, this is the best
year by far for being a newcomer!
Best,
-- John
>
> Best regards,
> Marcin
>
> public class Aswitch2 {
>
> public static void main(String[]args) {
> Aswitch2 a = new Aswitch2();
> a.bench();
> a.bench();
> a.bench();
> a.bench();
> a.bench();
> a.bench();
> a.bench();
> a.bench();
> a.bench();
> a.bench();
> }
>
> public void bench() {
> long t = System.currentTimeMillis();
> int result = 0;
> for (int i=0; i<10000000;i++) {
> result += testSwitch((i & 1) == 1 ? i&1 : 1, i%10);
> }
> System.out.println(result);
> System.out.println(System.currentTimeMillis() - t);
> }
>
> public int testSwitch(int a, int n) {
> switch (a) {
> // only this case is in use, but hotspot doesn't know about it
> case 1:
> a += 1;
> break;
> case 2:
> for (int i=0; i<n; i++) a+=n;
> a += 2;
> break;
> case 3:
> for (int i=0; i<n; i++) a+=n;
> a += 3;
> break;
> case 4:
> for (int i=0; i<n; i++) n+=n;
> for (int i=0; i<n; i++) n+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) n+=n;
> for (int i=0; i<n; i++) a+=n;
>
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
>
> for (int i=0; i<n; i++) a+=n;
> a += 4;
> break;
> case 5:
> for (int i=0; i<n; i++) a+=n;
> a += 5;
> break;
> case 6:
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) n+=n;
> for (int i=0; i<n; i++) a+=n;
>
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) n+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
>
> a += 10;
>
> a += 6;
> break;
> case 7:
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
>
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
>
> for (int i=0; i<n; i++) a+=n;
> a += 7;
> break;
> case 8:
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
>
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
>
> for (int i=0; i<n; i++) a+=n;
> a += 8;
> break;
> case 9:
> a += 9;
>
> break;
> case 10:
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
>
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
>
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
>
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
>
> a += 10;
> break;
> case 11:
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
>
> a += 11;
> break;
> case 12:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 13:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 14:
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
>
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
>
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 15:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 16:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 17:
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
>
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
>
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 18:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 19:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 20:
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
>
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
>
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 21:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 22:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 23:
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
>
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
> for (int i=0; i<n; i++) a+=n;
>
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 24:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 25:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 26:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 27:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 28:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 29:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 30:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 31:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 32:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 33:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 34:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 35:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 36:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 37:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 38:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 39:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 40:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 41:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 42:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 43:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 44:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 45:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 46:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 47:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 48:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 49:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 50:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 51:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 52:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 53:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 54:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 55:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 56:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 57:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 58:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 59:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 60:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 61:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 62:
>
> for (int i=0; i<n; i++) a+=n;
>
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> case 63:
> for (int i=0; i<n; i++) a+=n;
> if (n > 10) {for (int i=0; i<n; i++) a+=n;}
> if (n > n + 2) {for (int i=0; i<n; i++) a+=n;}
> if (n > 10) {for (int i=0; i<n; i++) a+=n;}
> a += 11;
> break;
> case 64:
> for (int i=0; i<n; i++) a+=n;
> a += 11;
> break;
> default:
> break;
> }
> return a;
> }
>
> }