StubToolkit must go!

Richard Bair richard.bair at
Wed Feb 6 08:30:38 PST 2013

>> Another big issue is that right now each test class is fired up in its own VM and takes 10x longer (or thereabouts) to execute than if the tests could all just run in the same VM. Some tests need their own VM (related to initialization etc where we cannot easily reinitialize some code on subsequent test runs), whereas most of them could run concurrently. I'm looking into JTReg as the main system for invoking our tests, which is capable of calling TestNG tests or Jemmy tests or any other type of test out there we need to run (and is the same system used by OpenJDK). I mention this, although I am not going to do it yet (just in case those of you with loads of jtreg experience can help guide my way :-)).
> The problem with running them all in the same VM is that when something behaves poorly it tends to cascade, and you end up with a big snowball of problems and failures occurring later on. That is especially irritating to track down if the poorly behaving test does not cause a failure itself.

I think this is OK in our master plan. The idea is to break our testing up into three levels: level 1 (pre-commit), level 2 (nightly), and level 3 (release). We will define some set of tests that are reliable, fast, and cover enough of the platform that, before any integration into master, all of them must pass on all platforms (see below). This should take ~5 minutes. The second level (nightly) is a different configuration of tests -- maybe additional tests, maybe the same tests run differently, probably a combination of both. These run for 8+ hours and should require < 8 hours to analyze any failures, so if you've introduced an error into master, you'll most likely get feedback on it within 24 hours. The third level is the multi-week testing that does everything from stability testing (hence at least a 2-week cycle) to running all the tests in all configurations. This should catch the majority of bug escapes from the first two levels, though in the grand scheme of things hopefully it isn't catching much (most of it being caught in the first two levels).

So my thought here is that if we can run 30,000 tests in < 5 minutes by letting nearly all of them share a VM, that is probably better for our pre-commit testing than running 3,000 tests in separate VMs. For the nightly run, we can certainly run them each in their own VM so that the analysis phase for failures is much simpler.
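To make the trade-off concrete: jtreg can run tests in a shared VM (its agentvm execution mode), and an individual test that mutates irreversible global state can still opt into a dedicated VM with the `main/othervm` action. Here's a minimal, hypothetical sketch (the test class and property name are made up, not from our suite):

```java
/*
 * @test
 * @summary Hypothetical test that requests its own VM because it
 *          mutates global state that cannot easily be reinitialized.
 * @run main/othervm StaticStateTest
 */
class StaticStateTest {
    public static void main(String[] args) {
        // Irreversible global state: in a shared VM, this property would
        // leak into every test that runs after this one.
        System.setProperty("example.initialized", "true");

        if (!"true".equals(System.getProperty("example.initialized"))) {
            throw new AssertionError("property was not set");
        }
    }
}
```

Tests without the `/othervm` suffix can then share a VM when the suite is run in agentvm mode, while the handful that truly need isolation declare it themselves.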

> I fully support moving to JTReg; my short stint in core-libs taught me it's on a slightly higher shelf than what we have now :)

Cool. I will play with it and see what the reports are like, etc.; I might have to give some patches back to Jon Gibbons if the reports are nappy :)

> Another thought that was recently brought up is running JPRT jobs before integration, since we're integrated with the JDK now and becoming more so by the week. I believe that would require moving to jtreg and quite possibly some changes to JPRT.

Yes, this is part of our continuous integration plan -- whether it's JPRT or something like it is TBD. The idea is that once our build/test is humming along nicely, with the pre-commit / nightly / release testing phases all ironed out, we'll change our workflow so that we no longer do weekly integrations from team forests into master. Instead, you'll submit your fix as a patch to a JPRT-like system, which will integrate, build, and test it (pre-commit tests) on all platforms and, if it passes, do an automatic integration. If it fails, the system kicks the patch out and sends you mail, and you can fix whatever wasn't working and resubmit. Of course there are times when you want a sandbox / team forest, and in those cases you can still work on the side. But when you are ready to integrate to master, you send your wad of patches to the system and it integrates, builds, and tests just like everything else.

Hopefully this will streamline the process: reducing the pain of weekly integrations for each forest, reducing the number of forests we need to know about, and shortening the time it takes for a reasonably well-tested patch to get into master so SQE can start hammering away at it.
