RFR(S): JDK-8137035 tests got EXCEPTION_STACK_OVERFLOW on Windows 64 bit
gerard.ziemski at oracle.com
Mon Aug 29 15:52:47 UTC 2016
> On Aug 26, 2016, at 4:53 PM, Frederic Parain <frederic.parain at oracle.com> wrote:
>> Is it really an undecidable problem? Why is that exactly?
> How would you compute the max stack size for any call to the VM?
> Just the matrix of all VM options that could impact the stack usage
> is huge: several GC, several JIT compilers, JVMTI hooks, JFR.
> The work to be performed by the JVM can also be dependent on the
> application (class hierarchy, application code itself which can
> be optimized (and deoptimized) in many different ways according to
> compilation policies and application behavior).
> This problem is not specific to the JVM. Linux has a similar issue
> with its kernel stacks: they have a fixed size, but there's no way
> to ensure that the size is sufficient to execute any system call or
> perform any OS operation.
Absolutely. However, now that we have determined we need to increase the number of shadow pages, it seems to me we could take this opportunity to evaluate (somehow) how many we actually need under some hypothetical load condition with all the common options turned on, as an alternative to conservatively increasing the count by 1. After all, as you said, when the networking code was changed we had to find a new default value somehow, so it has been done before.

I don’t know when we last set the page count, but if it has been a while, then given all the new features we have probably added since then (JFR, GC strategies, etc., as you said), shouldn’t we re-evaluate this every now and then? Increasing the pages by 1 and hoping (admittedly backed up by testing) that it’s good enough seems to me not quite good enough. Should we at least file a follow-up issue to address this?
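As a crude illustration of the kind of empirical probing suggested above, one can measure how deep recursion gets before the VM raises StackOverflowError under a given configuration. This is a hypothetical sketch, not a substitute for real load testing with the full option matrix; the measured depth varies with frame size, JIT state, and platform:

```java
// Hypothetical probe: count stack frames until StackOverflowError.
// Run with different -Xss / -XX:StackShadowPages settings to compare
// usable stack depth under each configuration.
public class StackProbe {
    static int depth = 0;

    static void recurse() {
        depth++;
        recurse(); // recurse until the stack is exhausted
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            System.out.println("Overflowed after " + depth + " frames");
        }
    }
}
```

The absolute number is not meaningful on its own, but comparing runs across VM options gives a rough sense of how much headroom a configuration leaves.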