JMH and continuous integration
daniel.mitterdorfer at gmail.com
Fri Nov 11 13:39:20 UTC 2016
In the Elasticsearch project, we run our microbenchmarks (
https://github.com/elastic/elasticsearch/tree/master/benchmarks) once per
day with Jenkins on the master branch.
We don't run any microbenchmarks on pull requests though.
We use a dedicated server-class bare-metal machine for our microbenchmarks
and isolate the benchmark JVM from OS processes with cgroups. We dump the
results with JMH's JSON output, push them to Elasticsearch (you know, eat
your own dog food ;)) and publish them in a dashboard.
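For context, JMH can emit machine-readable results via `-rf json -rff jmh-result.json`, and such a file can be flattened into one document per benchmark for indexing. Here is a minimal sketch in Python (the `benchmark`, `mode`, and `primaryMetric` fields are part of JMH's JSON output; the index name, the timestamp field, and the helper itself are illustrative assumptions, not our actual pipeline):

```python
import json
from datetime import datetime, timezone

def to_es_bulk(jmh_json: str, index: str = "microbenchmarks") -> str:
    """Flatten a JMH JSON result (produced with -rf json) into an
    Elasticsearch _bulk payload: one action line plus one document
    line per benchmark."""
    timestamp = datetime.now(timezone.utc).isoformat()
    lines = []
    for result in json.loads(jmh_json):  # JMH emits a top-level array
        doc = {
            "@timestamp": timestamp,
            "benchmark": result["benchmark"],  # fully qualified method name
            "mode": result["mode"],            # e.g. "avgt", "thrpt"
            "score": result["primaryMetric"]["score"],
            "scoreUnit": result["primaryMetric"]["scoreUnit"],
        }
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    sample = json.dumps([{
        "benchmark": "org.example.MyBenchmark.measure",
        "mode": "avgt",
        "primaryMetric": {"score": 42.0, "scoreUnit": "ns/op"},
    }])
    print(to_es_bulk(sample))
```

The resulting payload can then be POSTed to the cluster's _bulk endpoint, and the dashboard simply queries that index.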
So far we don't have many benchmarks; I only started with a proof of
concept, but I expect more microbenchmarks over time. We might then also
consider running them on each push.
However, at the moment we don't rely too much on microbenchmarks. We have a
macrobenchmark suite that we run daily to spot performance regressions.
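On the regression-detection point: once you have JSON results from two runs, a first-cut check can just compare primary scores pairwise. A sketch (the helper is hypothetical and assumes average-time mode, where a lower score is better; a serious check would also account for the error margins JMH reports):

```python
import json

def find_regressions(baseline_json: str, current_json: str,
                     threshold: float = 0.10):
    """Compare two JMH JSON result files and return the names of
    benchmarks whose average-time score worsened (grew) by more than
    `threshold`, given as a fraction (0.10 = 10%)."""
    def scores(raw):
        # Map each fully qualified benchmark name to its primary score.
        return {r["benchmark"]: r["primaryMetric"]["score"]
                for r in json.loads(raw)}

    base, curr = scores(baseline_json), scores(current_json)
    return [name for name, score in curr.items()
            if name in base and score > base[name] * (1.0 + threshold)]
```

A Jenkins job could run this after each benchmark run and fail (or notify) when the returned list is non-empty.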
2016-11-11 1:12 GMT+01:00 Leonardo Gomes <leonardo.f.gomes at gmail.com>:
> I'm looking for feedback on how people use JMH as part of their development
> I've played a bit with https://github.com/blackboard/jmh-jenkins and am
> wondering if people in this discussion list do use any sort of automation
> (Jenkins or other) to detect performance regression on their software, on a
> pull-request basis.
> I've seen JMH being used on different open-source projects, but the
> benchmarks seem to have been introduced mostly to compare different
> implementations (when introducing / validating changes) and not as part of
> Any feedback would be appreciated.
> Thank you,