Adding Microbenchmarks to the JDK forest/trees (JEP-230)

mark.reinhold at
Fri Dec 5 00:11:47 UTC 2014

2014/12/4 9:51 -0800, staffan.friberg at
> On 12/03/2014 02:58 AM, Magnus Ihse Bursie wrote:
>> ...
>> My suggestion is that the microbenchmarks are put in the top-level 
>> repo, if only for the reason that it seems fully possible to split 
>> them out to a separate repo some time in the future if it grows too 
>> much, but it seems much more unlikely that it will ever be moved back 
>> into the top-level repo if we realized it was a stupid idea to put it 
>> in a separate repo.
> I like this idea, and agree that shifting in the opposite direction is 
> probably something that would be much more work than breaking it out if 
> size becomes an issue further down the road.
> When moving a directory to a new sub-repository, is there any concern 
> about the diffs for that set of files still lingering in the top repo, 
> or can those be moved as well?

The files can be moved, but their earlier history will remain in the
top-level repo.
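As a sketch of how such a move could be done with Mercurial, `hg convert --filemap` can copy a subtree, together with the history of the included files, into a fresh repository; the old changesets still remain in the source repo unless stripped separately. (The path `test/micro` and the repo names below are hypothetical examples, not the actual layout.)

```shell
# Write a filemap that keeps only the microbenchmark tree
# and hoists it to the root of the new repository.
cat > filemap.txt <<'EOF'
include test/micro
rename test/micro .
EOF

# Copy that subtree, with its history, into a new repository.
# The original changesets are untouched in the source repo.
hg convert --filemap filemap.txt jdk9-root micro-repo
```

Note that `hg convert` rewrites changeset hashes in the new repository, which is one reason the move is effectively one-way.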

Given the forest structure that we have today, and the fact that this
set of tests could grow to a thousand or so files over time, I think it
makes more sense to place them in a new top-level repo of their own.

In the (very?) long term we might dramatically reduce the number of
repositories, but that will be a huge change.  In the meantime, adding
one more repo is a pretty minor change, and also consistent with current
practice.

A pleasant property of the current root repo is that it's very small
(< 9MB) and easy to understand, containing mostly makefiles, build
utilities, and some metadata.  Placing tests in it would start turning
it into more of a grab-bag.

I don't think it makes sense to split the microbenchmarks across the
different repos in the way we already split the unit and regression tests.

The latter types of tests primarily address the correctness of the code
in one repo, on the assumption that any code in other repos upon which
that code depends is correct.  In most cases, therefore, it's pretty
easy to tell whether a unit or regression test is specific to, say, the
jdk repo, the langtools repo, or some other repo.

Microbenchmarks, by contrast, address the performance of a particular
API, language construct, or other programmatic interface and also all of
the code upon which it depends, regardless of which repo it came from.
By nature they're more holistic, and so less strongly associated with
the code in any particular repo.
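To make that concrete: JEP 230 builds the microbenchmark suite on JMH, and even a minimal benchmark ends up exercising code from several repos at once. The sketch below (package and class names are hypothetical, not taken from the actual suite) measures String.format, whose performance depends on library code in the jdk repo as well as the JIT compiler in the hotspot repo, so there is no single repo whose correctness it tests.

```java
package org.openjdk.bench.java.lang;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;

import java.util.concurrent.TimeUnit;

// A minimal JMH microbenchmark sketch.  The measured call reaches
// library code (jdk repo) and is shaped by the JIT compiler
// (hotspot repo), so the result is holistic rather than tied to
// the code in any one repository.
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class StringFormatBench {

    @Benchmark
    public String formatDecimal() {
        return String.format("value = %d", 42);
    }
}
```

Running it requires the JMH harness on the classpath; the annotations drive JMH's code generation, so the class is a fragment rather than a standalone program.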

- Mark

More information about the build-dev mailing list