Shouldn't InputStream/Files::readAllBytes throw something other than OutOfMemoryError?
me at noctarius.com
Sun Mar 12 18:00:43 UTC 2017
Fair point though. I guess it might just be for consistency with existing behavior. Anyhow, your explanation makes sense; let's hope it's a temporary limitation, as Arrays 2.0 is on the horizon and will most probably offer 64-bit array indexes :-)
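In the meantime, callers who want a recoverable failure can check the file size up front and fail with an ordinary exception rather than letting the >2G array limit surface as an OutOfMemoryError. A minimal sketch (readAllBytesChecked is a hypothetical helper name, not a JDK API):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class SafeReadAll {
    /**
     * Reads the whole file, but rejects files that cannot fit in a single
     * byte[] with an IOException instead of an OutOfMemoryError.
     */
    static byte[] readAllBytesChecked(Path path) throws IOException {
        long size = Files.size(path);
        if (size > Integer.MAX_VALUE) {
            // Recoverable, and unambiguous about the cause.
            throw new IOException("File too large for a byte[]: " + size + " bytes");
        }
        return Files.readAllBytes(path);
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".bin");
        Files.write(tmp, new byte[] {1, 2, 3});
        System.out.println(readAllBytesChecked(tmp).length);
        Files.delete(tmp);
    }
}
```

Note the check is best-effort: the file can still grow between the size check and the read, so the OOME path is narrowed, not eliminated.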
> On 12. Mar 2017, at 18:31, Anthony Vanelverdinghe <anthony.vanelverdinghe at gmail.com> wrote:
> Hi Chris
> Point well taken, but being unable to create a native thread is definitely a VirtualMachineError, and personally I don't care whether the JVM throws an OOME or any other kind of VirtualMachineError in that case.
> My point was that I don't see how being unable to return a result because of a language limitation (i.e. arrays being indexed by integers, and thus limited to 2G for byte arrays) has anything to do with OutOfMemoryError. I believe it would be much more logical to throw a recoverable RuntimeException in this case (e.g. java.lang.ArrayOverflowException, as an analog of java.nio.BufferOverflowException).
> On 12/03/2017 15:53, Christoph Engelbert wrote:
>> Hey Anthony,
>> The meaning is already overloaded, as "Cannot create native thread"
>> is also an OutOfMemoryError and in like 99% of the cases means
>> "Linux ran out of file handles". The chance that the OS really
>> couldn't allocate a thread because no main memory was available is
>> very narrow :)
>> Am 3/12/2017 um 3:24 PM schrieb Anthony Vanelverdinghe:
>>> Files::readAllBytes is specified to throw an OutOfMemoryError "if
>>> an array of the required size cannot be allocated, for example the
>>> file is larger than 2G". Now in Java 9, InputStream::readAllBytes
>>> does the same.
>>> However, this overloads the meaning of OutOfMemoryError: either
>>> "the JVM is out of memory" or "the resultant array would require
>>> long-based indices".
>>> In my opinion, this overloading is problematic, because:
>>> - OutOfMemoryError has very clear semantics, and I don't see the
>>> link between OOME and the fact that a resultant byte array would
>>> need to be >2G. If I have 5G of free heap space, and try to read a 3G
>>> file, I'd expect something like an UnsupportedOperationException,
>>> but definitely not an OutOfMemoryError.
>>> - the former meaning is an actual Error, whereas the latter is an
>>> Exception from which the application can recover.
>>> - developers might be tempted to catch the OOME and retry to read
>>> the file/input stream in chunks, no matter the cause of the OOME.
>>> What was the rationale for using OutOfMemoryError here? And would
>>> it still be possible to change this before Rampdown Phase 2?
>>> Kind regards,
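The chunk-wise reading mentioned above (the approach developers might reach for after catching the OOME) is better done from the start: processing the stream in fixed-size buffers never materializes the whole content in one array, so no array-size limit applies. A minimal sketch, with a hypothetical countBytes helper standing in for real per-chunk processing:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ChunkedReader {
    /**
     * Reads the stream in 8 KiB chunks and returns the total byte count,
     * without ever holding the full content in a single byte[].
     */
    static long countBytes(InputStream in) throws IOException {
        byte[] buf = new byte[8192];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            // Process buf[0..n) here instead of accumulating it.
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream(new byte[10000]);
        System.out.println(countBytes(in)); // prints 10000
    }
}
```

Because the running total is a long, this pattern handles inputs well past 2G, which is exactly the case where readAllBytes cannot return a result.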