JMH and continuous integration

Stijn Debruyckere Stijn.Debruyckere at
Wed Nov 23 08:49:48 UTC 2016

Hi all, 

I wanted to share that we at Luciad have an in-house framework to run performance tests in a continuous integration (CI) set-up. It fully targets the regression-test scenario, where you want to be notified automatically if performance is not OK. We are in the process of open-sourcing it; I'll make sure to inform this mailing list when it is available. 

It brings: 
- Automatic warm-up of HotSpot (uses confidence intervals to decide when it is 'warm') 
- Forked processes, to get a clean HotSpot and to be able to repeat measurements. 
- Storage of all results in a MySQL database (or H2) 
- JUnit integration: a test fails if it is too slow compared to how it behaved in the past (see below). 
- By using JUnit, you also get integration in all IDEs and CI systems. And JUnit is already familiar to many. You can also configure your CI to run your tests on multiple platforms. 
- Informing the responsible person about failures is taken care of by your CI, the same as for your other tests. 
- Balanced server load: limit server load when there seem to be no changes, spend more server resources when needed. 
- Reporting: a web server with all kinds of charts and comparison reports 
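The confidence-interval warm-up in the first bullet could be sketched roughly like this (a simplified, hypothetical illustration — the class and thresholds are mine, not the actual framework code): keep timing the operation until the 95% confidence interval of the most recent measurements is narrow relative to their mean.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Simplified sketch: HotSpot is considered "warm" once the 95% confidence
// interval of the last WINDOW timings is narrower than 5% of their mean.
// (Normal approximation; a real implementation would be more careful.)
class WarmupDetector {
    private static final int WINDOW = 20;
    private static final double Z_95 = 1.96;           // two-sided 95%, normal approx.
    private static final double RELATIVE_WIDTH = 0.05; // 5% of the mean

    private final Deque<Double> timings = new ArrayDeque<>();

    /** Record one timing (nanoseconds); returns true once timings look stable. */
    boolean recordAndCheck(double nanos) {
        timings.addLast(nanos);
        if (timings.size() > WINDOW) {
            timings.removeFirst();
        }
        if (timings.size() < WINDOW) {
            return false; // not enough samples yet
        }
        double mean = timings.stream().mapToDouble(Double::doubleValue).average().orElse(0);
        double variance = timings.stream()
                .mapToDouble(t -> (t - mean) * (t - mean))
                .sum() / (WINDOW - 1);
        double halfWidth = Z_95 * Math.sqrt(variance / WINDOW);
        return halfWidth < RELATIVE_WIDTH * mean;
    }
}
```

The harness would call recordAndCheck after each invocation of the benchmarked operation and only start the measured iterations once it returns true.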

Judging slowdowns is hard, mostly because of HotSpot's tendency to change your code as it runs. There are some pretty advanced statistics in place by now: confidence intervals, piece-wise constant regression analysis, t-tests, coping with non-normally distributed data, etc. Whenever a slowdown is detected, the test fails and CI notifies you. You can always approve a slowdown manually, which sets a new baseline for the future. 
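The t-test part of that judgment could be sketched like this (a simplified illustration using Welch's t-statistic with a normal approximation — not the actual framework code, which also handles regression analysis and non-normal data): only flag a slowdown when the current run is slower than the baseline by a statistically significant margin.

```java
// Simplified sketch of baseline comparison via Welch's t-test.
// A real implementation would use proper t-distribution quantiles
// for small samples and check distributional assumptions.
class SlowdownCheck {
    /** Returns true if 'current' timings are significantly slower than 'baseline'. */
    static boolean isSlower(double[] baseline, double[] current) {
        double mb = mean(baseline), mc = mean(current);
        if (mc <= mb) {
            return false;              // not slower at all
        }
        double vb = variance(baseline, mb), vc = variance(current, mc);
        double se = Math.sqrt(vb / baseline.length + vc / current.length);
        if (se == 0) {
            return true;               // zero noise: any increase counts
        }
        double t = (mc - mb) / se;     // Welch's t-statistic
        return t > 1.96;               // one-sided, normal approximation
    }

    private static double mean(double[] xs) {
        double s = 0;
        for (double x : xs) s += x;
        return s / xs.length;
    }

    private static double variance(double[] xs, double mean) {
        double s = 0;
        for (double x : xs) s += (x - mean) * (x - mean);
        return s / (xs.length - 1);
    }
}
```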

The framework predates the release of JMH, and therefore doesn't make use of it (yet). As long as you stay away from nano/milli benchmarks, and instead focus on testing real-world scenarios, it holds up pretty well in practice. We've been using it for several years now, with over a thousand benchmarks. It does require the resources your tests use to be exclusively yours (read: run tests on bare metal, avoid unpredictable networks). 

A benchmark could be as simple as this: 

@RunWith( Benchmark.class ) 
public class SomeBenchmark { 
    public void someOperation() { 
        // Repeatedly called until the duration stabilizes 
    } 
} 

----- Original Message -----

From: "Radim Vansa" <rvansa at> 
To: jmh-dev at 
Sent: Monday, November 21, 2016 3:34:12 PM 
Subject: Re: JMH and continuous integration 

I think you are looking for Alerting [1] - yes, you can check the test 
result against the average of the last X runs (and much more), and get a 
warning mail if the condition is broken. 
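[Editor's note: the "average of last X runs" condition Radim describes might be sketched like this (a hypothetical illustration only, not PerfRepo's actual code; the class name and tolerance parameter are mine):]

```java
import java.util.List;

// Hypothetical alerting condition: warn when the new result is more than
// 'tolerance' (e.g. 0.10 = 10%) slower than the average of the last X runs.
// Assumes results are timings, so higher means slower.
class Alerting {
    static boolean shouldAlert(List<Double> lastRuns, double newResult, double tolerance) {
        double avg = lastRuns.stream()
                .mapToDouble(Double::doubleValue)
                .average()
                .orElse(newResult); // no history yet: never alert
        return newResult > avg * (1 + tolerance);
    }
}
```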



On 11/19/2016 08:07 PM, Sergey Melnikov wrote: 
> Hi Radim, 
> I've never heard about PerfRepo. Thank you for the hint. 
> Does PerfRepo provide any functionality for automatic performance anomalies checking? 
> Is it possible to calculate the geometric mean for a selected group of scores? 
> --Sergey 
> On Wed, Nov 16, 2016 at 09:09:25AM +0100, Radim Vansa wrote: 
>> On 11/15/2016 04:09 PM, Mark Price wrote: 
>>> Hi Leonardo, 
>>> at LMAX, we run a number of JMH benchmarks continuously. 
>>> We also try to make sure that there is minimal OS jitter in the results [1], using cpu isolation, sched_setaffinity, and making sure that the benchmark thread gets its own CPU, with no contention from child/spawned threads. 
>>> We record the JSON output in a database, and generate charts based on these results. We have tried (unsuccessfully) to implement a pass/fail CI job that will automatically fail if there are regressions in performance. Unfortunately, we haven't got around to open-sourcing the component that persists results & generates charts. 
>> FYI exactly that kind of tool (opensource) is PerfRepo [1]. It does not 
>> have a JMH integration, but it has a Java client [2]/RESTful API to 
>> upload results. 
>> Radim 
>> [1] 
>> [2] 
>>> We found that even with a very highly tuned system, there was enough inter-run noise that we would get false positives frequently. This led to broken-windows syndrome, so these days we just regularly eyeball the charts for any obvious signal. We record the timestamp and revisions associated with each test-run, so finding a culprit isn't too hard. 
>>> With warm-up iterations, measurement iterations and multiple forks, the feedback-loop can become longer than we'd like. Our micro-benchmarks currently take over an hour to run, though with more hardware we could run them in parallel to improve this. That's still not bad, but for comparison, our suite of ~11k acceptance tests only takes ~25mins... 
>>> I'd be interested to hear of any headway you make in this area. 
>>> Mark 
>>> [1] 
>>> ----- On 11 Nov, 2016, at 00:12, Leonardo Gomes leonardo.f.gomes at wrote: 
>>>> Hi, 
>>>> I'm looking for feedback on how people use JMH as part of their development 
>>>> process. 
>>>> I've played a bit with and am 
>>>> wondering if people in this discussion list do use any sort of automation 
>>>> (Jenkins or other) to detect performance regression on their software, on a 
>>>> pull-request basis. 
>>>> I've seen JMH being used on different open-source projects, but the 
>>>> benchmarks seem to have been introduced mostly to compare different 
>>>> implementations (when introducing / validating changes) and not as part of 
>>>> CI. 
>>>> Any feedback would be appreciated. 
>>>> Thank you, 
>>>> Leonardo. 
>>> --- 
>>> LMAX Exchange, Yellow Building, 1A Nicholas Road, London W11 4AN 
>> -- 
>> Radim Vansa <rvansa at> 
>> JBoss Performance Team 

Radim Vansa <rvansa at> 
JBoss Performance Team 
