Need reviewers, jdk7 testing changes
Joe.Darcy at Sun.COM
Thu Dec 10 00:11:07 UTC 2009
Kelly O'Hair wrote:
> Martin Buchholz wrote:
>> On Tue, Dec 8, 2009 at 16:42, Kelly O'Hair <Kelly.Ohair at sun.com> wrote:
>>>> In theory, the process is supposed to prevent
>>>> regression test failures from ever creeping in - that's the whole point.
>>>> When they do (inevitably) creep in,
>>>> they are supposed to be aggressively targeted.
>>>> A gatekeeper demonstrates the failure to a developer,
>>>> and the developer is given X time units to fix the test breakage,
>>>> or face reversion of the breakage-inducing change.
>>> But it's not that simple. In some cases the cause of the failure is
>>> a change made to a different repository, by a different team,
>> Changes that get in this way are signs of process failure.
>> E.g. failing tests in java.util caused by hotspot commits
> Deja Vu... we have had this discussion before. ;^)
> How can any team possibly run ALL the tests?
> We would all grind to a halt.
> And what is this "signs of process failure"? Are you some kind
> of manager now? ;^) "Hurumph... Every failure is a result of poor
> process." (just yanking your chain. ;^)
> Testing has become a balancing act for everyone, we do what
> we can given the constraints we have. When things fall through
> the cracks, we try and patch the cracks so it doesn't happen
> again. We use what we can to automate, but there are limits.
There is a weeks-long process separating when a change gets pushed into
a JDK 7 integration workspace, like TL, and when the promoted bits are
available for download. The closer a changeset gets to the master, the
more tests should be run against it.
An individual developer should not be expected to run all the tests, but
if at least the regression tests in the code base are not all run on
multiple platforms by the time the change hits the master, I'd argue
there is a process problem. Of course, it is also a problem if the
tests are run and the results aren't examined adequately.
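(As a concrete illustration, here is a minimal sketch of what "run the
affected regression tests before pushing" could look like for an individual
developer; the JDK path, test directory, and jtreg options below are
assumptions and would vary per repository layout and setup.)

#!/usr/bin/env python
# Minimal sketch (assumptions: jtreg is on the PATH, the regression tests
# live under jdk/test, and /path/to/built/jdk is the freshly built image).
# Runs the tests for the area a change touches and refuses to continue
# if any of them fail.
import subprocess
import sys

jdk_under_test = "/path/to/built/jdk"   # hypothetical path to the built JDK
test_dirs = ["jdk/test/java/util"]      # hypothetical: the area the change touches

cmd = ["jtreg", "-jdk:" + jdk_under_test, "-verbose:summary"] + test_dirs
status = subprocess.call(cmd)
if status != 0:
    sys.stderr.write("Regression tests failed; fix or revert before pushing.\n")
sys.exit(status)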