Need reviewers, jdk7 testing changes
Kelly.Ohair at Sun.COM
Tue Dec 8 16:42:32 PST 2009
Martin Buchholz wrote:
> On Mon, Dec 7, 2009 at 23:30, Martin Buchholz <martinrb at google.com> wrote:
>> You cannot exclude the Mother Of All Tests.
>> This is non-negotiable.
> More seriously...
> It's very difficult to manage creeping failures in regression tests.
Tell me about it. :^(
> In theory, the process is supposed to prevent
> regression test failures from ever creeping in - that's the whole point.
> When they do (inevitably) creep in,
> they are supposed to be aggressively targeted.
> A gatekeeper demonstrates the failure to a developer,
> and the developer is given X time units to fix the test breakage,
> or face reversion of the breakage-inducing change.
But it's not that simple. In some cases the cause of the failure is
a change made to a different repository, by a different team,
or even to the base system software via an update.
The ProblemList.txt file was meant to deal with that situation,
and also with tests that fail spuriously for unknown reasons.
When the gatekeeper can do what you say, that is great, and ideal.
But I just don't see it happening that way in all situations.
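For anyone who hasn't looked at it, ProblemList.txt is just a plain
text exclude list that jtreg can read, roughly one test per line along
with the platforms it is excluded on. A rough sketch of what entries
look like (the test paths and bug numbers below are made-up
illustrations, not real entries):

    # 6XXXXXX test fails intermittently on Windows, cause unknown
    java/util/Foo/Bar.java                        windows-all

    # 6XXXXXX fails everywhere, needs triage
    java/lang/management/Baz.java                 generic-all

The platform keywords (generic-all, windows-all, etc.) let an entry
exclude a test everywhere or only where it is known to fail.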
> I would like to see more effort devoted to fixing the tests
> (or the code!) rather than adding infrastructure that might
> have the effect of hiding the test failures.
Sigh... I'm not trying to hide the failures, and if you haven't
noticed, I have fixed quite a few tests myself.
If anything, I'm making the fact that we have test failures more
public, and the ultimate goal is to fix the tests.
But I needed a baseline, a line in the sand, an expectation
of what tests should always pass. And an ability to run all
the tests in a timely manner.
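With the problem list in place, a full run is just a matter of handing
jtreg the exclude file. A rough sketch of an invocation (the jdk and
test paths here are illustrative, adjust for your own build):

    # Run the automatic regression tests, skipping known problems
    jtreg -a -jdk:/path/to/your/built/jdk \
          -exclude:jdk/test/ProblemList.txt \
          jdk/test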
Now it's time to go through the ProblemList.txt and do a little
triage, file a few bugs, fix what tests can be fixed, and/or
correct my ProblemList.txt file if I got it wrong.
> BTW, I run regtests on java.util collections regularly,
> but only on Linux.
I think it is an expected situation that many developers can only
run the tests on one platform.
But do you expect the gatekeeper to run all the tests on all the
platforms? And if, for example, your tests failed on Windows, you
would be given X time units to fix them? Could you?