RFR (S): 7177917: Failed test java/lang/Math/PowTests.java
roland.westrelin at oracle.com
Mon Jun 25 07:43:37 PDT 2012
> Thank you for looking at refworkload. I would suggest writing a microbenchmark; we will need it anyway for the SSE
> implementation. Two separate subtests for pow() and exp(), called in a small loop over pre-generated (stored in an array)
> "random" (starting from the same seed) normal (not NaN) values. Measure its performance after a warmup phase. Then run
> the same methods over different values (to cover at least some cases in our code) which will produce NaNs to force a
> recompile (or not, as in your first implementation). Measure performance with NaNs. Then go back to good values and
> measure again. Test the current code, and your first and last implementations.
I'm not sure I understand what you mean by: "Then run the same methods over different values (to cover at least some cases in our code) which will produce NaNs to force a recompile (or not, as in your first implementation). Measure performance with NaNs."
I wrote a microbenchmark that:
- chooses 1 million "good" random values for pow
- times the computation of pow over those 1 million values
- forces the uncommon trap and recompilation
- repeats the measurement with the same 1 million values
I did the same for exp. I ran the measurements with both the previous and the current version of the code, but I don't see any difference.
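The benchmark described above can be sketched roughly as follows. This is a hedged, self-contained illustration, not the actual test: the class name, array size, seed, input ranges, and the choice of NaN-producing inputs (negative base with a non-integer exponent) are my own assumptions.

```java
import java.util.Random;

public class PowBench {
    static final int N = 1_000_000;

    // Time Math.pow over pre-generated input arrays; return elapsed nanoseconds.
    static long timePow(double[] bases, double[] exps) {
        long start = System.nanoTime();
        double sink = 0.0;               // accumulate so the JIT cannot dead-code the loop
        for (int i = 0; i < bases.length; i++) {
            sink += Math.pow(bases[i], exps[i]);
        }
        long elapsed = System.nanoTime() - start;
        if (sink == Double.MIN_VALUE) System.out.println(sink); // keep sink observable
        return elapsed;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);     // fixed seed => reproducible "random" values
        double[] bases = new double[N];
        double[] exps  = new double[N];
        for (int i = 0; i < N; i++) {
            bases[i] = 1.0 + rnd.nextDouble();   // "good" inputs: normal, non-NaN results
            exps[i]  = rnd.nextDouble() * 10.0;
        }

        // Warm up, then measure with good values.
        for (int w = 0; w < 10; w++) timePow(bases, exps);
        System.out.println("good values:       " + timePow(bases, exps) + " ns");

        // NaN-producing inputs: a negative base with a non-integer exponent makes
        // Math.pow return NaN, exercising the slow path / uncommon trap.
        double[] nanBases = new double[N];
        for (int i = 0; i < N; i++) nanBases[i] = -(1.0 + rnd.nextDouble());
        System.out.println("NaN-producing:     " + timePow(nanBases, exps) + " ns");

        // Back to the same good values to measure post-recompilation performance.
        System.out.println("good values again: " + timePow(bases, exps) + " ns");
    }
}
```

An analogous subtest for exp would replace `Math.pow(bases[i], exps[i])` with `Math.exp(exps[i])` and feed it, for example, very large arguments to produce overflow/infinity cases instead of NaNs.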
Where does the test go when it is ready? In the test subdirectory?
More information about the hotspot-compiler-dev mailing list