RFR: 8062036: ConcurrentMarkThread::slt may be invoked before ConcurrentMarkThread::makeSurrogateLockerThread causing intermittent crashes
bengt.rutisson at oracle.com
Fri Nov 7 15:42:52 UTC 2014
On 2014-11-06 21:56, Kim Barrett wrote:
> On Nov 6, 2014, at 4:01 AM, Bengt Rutisson <bengt.rutisson at oracle.com> wrote:
>> What does the test verify? When I run the test on a build that does
>> not have your fix it still passes:
>> $ java -showversion -jar /localhome/tests/jtreg/lib/jtreg.jar -noreport test/gc/TestScavengeALot.java
>> java version "1.9.0-ea"
>> Java(TM) SE Runtime Environment (build 1.9.0-ea-b35)
>> Java HotSpot(TM) 64-Bit Server VM (build 1.9.0-ea-b35, mixed mode)
>> Test results: passed: 1
>> Also, since this is a CMS specific test I think it should be located
>> in the "test/gc/concurrentMarkSweep".
> Without the proposed changes, the test program segfaults during VM
> initialization when using a *non-product* build and using *either* G1
> or CMS as the collector (plus the other options specified in the
> test's @run line), e.g. the main() for the test program is never
> reached with those configurations.
> I may be confused about how to go about testing in this situation, but
> the jtreg invocation above isn't one of the failing configurations
> (it appears to be a product build, and the default GC is being used,
> which isn't either G1 or CMS.)
Yes, you are right. I should have realized that I needed a debug build
to use ScavengeALot. Now I get the test to fail the way you described.
> My understanding is that at least some of our test infrastructure runs
> each test with each of several GC configurations. That doesn't appear
> to be true for jtreg run directly though; to test different GC
> configurations I think one needs to use -javaoptions in the jtreg
Right. Our nightly testing is run with different GC options in each of
the four baselines we have. But this is only true for the testing that
is done on the GC repo... more on this below...
> By leaving the GC unspecified in the @run line, I'm allowing the
> external invocation to choose which GC to use. It seems at least
> harmless to run the test for non-concurrent collectors (and has the
> benefit of some minimal testing of the -XX:+ScavengeALot feature with
> those collectors too), and we'd like the test to be run for any
> concurrent collector externally specified.
> I could instead explicitly specify the collector to use, by providing
> two @run lines, one for G1 and one for CMS. But then I should add a
> @requires line, and I don't know how to write that @requires line in
> the face of multiple @run lines with different requirements. (This
> seems like a deficiency in the new @requires mechanism.) I think if I
> were going to explicitly specify the collector, I'd need two copies of
> the test, each with one @run line for the desired collector, with a
> corresponding @requires line. (Script-based test trampolines to a
> common .java test implementation would be an alternative if the .java
> code was less trivial.)
Yes, I totally agree. The current @requires functionality is not enough
for our needs. I pointed this out in the recent review request to add
the @requires tag to many GC tests, and I pointed it out when the
@requires tag was first suggested about a year ago. Unfortunately that
feedback has so far gone unaddressed.
Anyway, I think it would be good to explicitly add the GC to the @run
command. Otherwise these tests will be run in many places (nightly
testing for the other groups, PIT testing, manual testing, ad hoc Aurora
runs, etc.) without actually testing anything.
Since the test is so simple I think splitting it up into two files is
the simplest workaround for the @requires limitation for now.
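As a hypothetical sketch of what one of the two split files could look like (the class name, the exact @requires expression, and the allocation loop are illustrative, not the actual patch), the G1 variant might be:

```java
/*
 * @test
 * @requires vm.gc == "G1" | vm.gc == "null"
 * @run main/othervm -XX:+IgnoreUnrecognizedVMOptions -XX:+UseG1GC -XX:+ScavengeALot TestScavengeALotG1
 */
public class TestScavengeALotG1 {
    public static void main(String[] args) {
        // Allocate enough to give -XX:+ScavengeALot something to scavenge;
        // reaching the end of main without a crash is the pass criterion,
        // since the original bug crashed during VM initialization.
        byte[] sink = null;
        for (int i = 0; i < 10_000; i++) {
            sink = new byte[1024];
        }
        System.out.println("Test passed: " + (sink != null));
    }
}
```

The CMS twin would be identical apart from its @requires expression and -XX:+UseConcMarkSweepGC on the @run line.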
> There's no reason to run the test against a product build, and in fact
> the test would fail in such a configuration were it not for the use of
> -XX:+IgnoreUnrecognizedVMOptions (yuck!). There doesn't seem to be a
> better way at present to deal with that case; but see
Yes, IgnoreUnrecognizedVMOptions is a really ugly hack. It should at
least take a parameter specifying which command line option to ignore
if it is not recognized. As it is now, we run most of our testing with
this flag, which hides tests with incorrect command lines.
> Anyway, that's how I arrived at the proposed test program. I'm happy
> to fix it if there's some mistake in how I got there.
I think you did a good job with the test. Our infrastructure for
selecting how tests are run has lots of room for improvement.