RFR 9: 8168517 : java/lang/ProcessBuilder/Basic.java failed
Roger.Riggs at Oracle.com
Fri Oct 28 21:16:38 UTC 2016
On 10/28/2016 4:59 PM, David Holmes wrote:
>>>>>>> But this is the second waitFor call after the process is destroyed.
>>>>>>> Sorry I don't really see the point.
>>>>>> The tests were added to determine if waitFor(timeout) was
>>>>>> handling the
>>>>>> timeout parameter correctly.
>>>>>> The 2nd test here was to check the datapath when the process had
>>>>>> already been destroyed.
>>>>>> The 'extra' waitFor() ensures the process is gone.
>>>>>> But you are correct, this exercises a path through the waitFor code
>>>>>> that does not even look at the timeout value, so I'll remove the
>>>>>> entire 2nd case.
>>>>>> Webrev updated in place.
>>>>> Okay, but now you don't need the p.waitFor() after the destroy() at
>>>>> all.
>>>>> Hmmm ... I guess either we want the original code with a longer
>>>>> timeout to accommodate slow/loaded platforms (to test that waitFor(n)
>>>>> after a destroy() is timely); or everything after destroy() can be
>>>>> removed.
>>>> Here's another approach to test whether waitFor(timeout) is
>>>> behaving as specified.
>>>> The test should be checking that waitFor does not wait longer than
>>>> the timeout.
>>>> So shorten the timeout to a very short time (10 milliseconds) and
>>>> check that waitFor returns in no more than that time (plus epsilon).
>>>> It is similar to the first test in that section, but the process is
>>>> exiting.
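[A rough sketch of the shortened-timeout check proposed above; this is not the Basic.java code, and it assumes a POSIX `sleep` binary on the PATH as a stand-in for the test's Java child process. The epsilon would have to be generous to tolerate slow/loaded machines.]

```java
import java.util.concurrent.TimeUnit;

public class DestroyedWaitForSketch {
    // Measure how long waitFor(timeoutMs) blocks on a just-destroyed child.
    static long elapsedWaitForMs(long timeoutMs) throws Exception {
        Process p = new ProcessBuilder("sleep", "60").start();
        p.destroy();                       // the child is now exiting
        long start = System.nanoTime();
        p.waitFor(timeoutMs, TimeUnit.MILLISECONDS);
        return TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
    }

    public static void main(String[] args) throws Exception {
        // The check: waitFor must return within timeout-plus-epsilon,
        // whether or not the kill has completed by then. Epsilon must be
        // generous, per the slow/loaded-platform concern in this thread.
        System.out.println(elapsedWaitForMs(10));
    }
}
```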
>>> But the first test is checking that we wait at least as long as the
>>> timeout, i.e. no early return when we know the process is still alive.
>>> Sorry but this isn't making sense to me. The original problem is that
>>> waitFor(8) has timed out and we proceeded to call exitValue() while
>>> the process is still alive. If the process had in fact terminated then
>>> we would have hit the (end-start) check and failed due to a timeout.
>>> So what exactly are we trying to fix here? Do we simply prefer to fail
>>> due to timeout rather than have exitValue() throw
>>> IllegalThreadStateException?
>> Calling exitValue was just additional debugging info; but the test would
>> have failed the timeout condition later.
>> So the test is still broken, because it is not checking the correct
>> timeout behavior of waitFor(timeout) and instead
>> is checking how long it takes the OS to destroy the process. That's not
>> interesting.
>>> If so then check for waitFor(8) timing out, but leave the timeout
>>> value at 8 seconds - just avoid the call to exitValue() if a time-out
>>> occurred. Otherwise if we want to avoid timing out we either bump the
>>> timeout to some value > 8 to accommodate the load problem, or we don't
>>> even bother doing waitFor after the destroy().
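[The first alternative above, keep the 8-second timeout but skip exitValue() on a timeout, might look like the sketch below. A POSIX `sleep` child is used for illustration; the key point is that exitValue() throws IllegalThreadStateException while the process is still alive, so it must be guarded by waitFor's return value.]

```java
import java.util.concurrent.TimeUnit;

public class GuardedExitValueSketch {
    public static void main(String[] args) throws Exception {
        Process p = new ProcessBuilder("sleep", "60").start();
        try {
            boolean exited = p.waitFor(8, TimeUnit.SECONDS);
            if (exited) {
                System.out.println("exit code: " + p.exitValue());
            } else {
                // Timed out: the child is still alive, and exitValue()
                // would throw IllegalThreadStateException, so skip it.
                System.out.println("waitFor(8s) timed out");
            }
        } finally {
            p.destroy();
        }
    }
}
```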
>> Lengthening the time isn't interesting either because it is only testing
>> that it returns when the process has exited, not whether the timeout is
>> hit at the right time.
> There is no real way to test timeouts. You can check you don't get
> early returns but you can't check that you return in a timely fashion.
> At least without knowing the execution environment.
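[The testable half of that observation can be sketched: the lower bound (no early return) is checkable, while any upper bound depends on scheduling and load, which the test cannot control. Again assuming a POSIX `sleep` binary as the child.]

```java
import java.util.concurrent.TimeUnit;

public class NoEarlyReturnSketch {
    // Returns true if waitFor blocked for at least timeoutMs on a live child.
    static boolean waitedAtLeast(long timeoutMs) throws Exception {
        Process p = new ProcessBuilder("sleep", "60").start();
        try {
            long start = System.nanoTime();
            boolean exited = p.waitFor(timeoutMs, TimeUnit.MILLISECONDS);
            long elapsedMs =
                TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
            // Lower bound is testable: the child is still alive, so the
            // full timeout must elapse before waitFor returns false.
            // There is no reliable upper bound to assert.
            return !exited && elapsedMs >= timeoutMs;
        } finally {
            p.destroy();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(waitedAtLeast(200));
    }
}
```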
The spec for waitFor says it waits "until the process represented by this
Process object has terminated, or the specified waiting time elapses."
> Does waitFor implement its own timed-waiting mechanism, or does it
> just delegate to an existing API (like wait() or park()) ? If the
> latter then that is where timeout testing would exist.
Thread.sleep() on Linux with interrupt() from a monitoring thread.
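[That sleep-plus-interrupt pattern, in the abstract: the waiter sleeps for the timeout while a monitor thread interrupts it early when the awaited event occurs. This is a sketch of the pattern, not the JDK source; a CountDownLatch stands in for process exit. A real implementation must also handle the race where the event fires just as the timeout expires.]

```java
import java.util.concurrent.CountDownLatch;

public class SleepInterruptSketch {
    // Timed wait built from Thread.sleep() plus interrupt().
    static boolean timedWait(CountDownLatch event, long timeoutMs) {
        Thread waiter = Thread.currentThread();
        Thread monitor = new Thread(() -> {
            try {
                event.await();          // wait for the event to occur
                waiter.interrupt();     // wake the sleeping waiter early
            } catch (InterruptedException ignored) { }
        });
        monitor.setDaemon(true);
        monitor.start();
        try {
            Thread.sleep(timeoutMs);    // full timeout elapsed, no event
            return false;
        } catch (InterruptedException e) {
            return true;                // event occurred before the timeout
        } finally {
            monitor.interrupt();        // stop the monitor if still waiting
        }
    }

    public static void main(String[] args) {
        CountDownLatch event = new CountDownLatch(1);
        new Thread(() -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) { }
            event.countDown();
        }).start();
        System.out.println(timedWait(event, 5_000)); // event fires first
    }
}
```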
> The earlier part of the test already verifies both that the timeout
> mechanism functions, and that it doesn't return early.
>> So I guess it's time to remove the whole test case on a destroyed
>> process.
> Given destroy() is not guaranteed to work either ... testing waitFor
> in that context will always be somewhat probabilistic I think - and
> timeout testing just makes that more so.
ok, removed the test case.