Shouldn't InputStream/Files::readAllBytes throw something other than OutOfMemoryError?
chris.hegarty at oracle.com
Mon Mar 13 11:33:59 UTC 2017
Many of the Collection types throw OOME if asked to grow
beyond ~2GB. Likewise some operations of String and
StringBuilder. Though this behavior is not strictly part of
the current specification, I suspect that it is the de facto
standard (since the implementation has always behaved this
way).
The java.lang.module.ModuleReader::read method is another
method that specifies the behavior if the returned type is
not capable of supporting very large amounts of data.
I agree that the use of OOME here is somewhat overloaded, but
it appears that we are already well down this path, so it is
best to make this clear and consistent in the spec.
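As background for the ~2GB figure that keeps coming up in this thread: Java arrays are indexed by int, so a byte[] can hold at most Integer.MAX_VALUE elements, which is one byte short of 2 GiB (in practice JVMs cap usable array length slightly lower still). A minimal illustration of the arithmetic:

```java
public class ArrayLimit {
    public static void main(String[] args) {
        // The largest theoretical byte[] length is Integer.MAX_VALUE,
        // i.e. 2^31 - 1 bytes, just under 2 GiB.
        long maxBytes = Integer.MAX_VALUE;          // 2147483647
        long twoGiB   = 2L * 1024 * 1024 * 1024;    // 2147483648
        System.out.println(twoGiB - maxBytes);      // prints 1
    }
}
```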
On 12/03/17 14:24, Anthony Vanelverdinghe wrote:
> Files::readAllBytes is specified to throw an OutOfMemoryError "if an
> array of the required size cannot be allocated, for example the file is
> larger than 2G". Now in Java 9, InputStream::readAllBytes does the same.
> However, this overloads the meaning of OutOfMemoryError: either "the JVM
> is out of memory" or "the resultant array would require long-based
> indexing".
> In my opinion, this overloading is problematic, because:
> - OutOfMemoryError has very clear semantics, and I don't see the link
> between OOME and the fact that a resultant byte array would need to be >2G.
> If I have 5G of free heap space, and try to read a 3G file, I'd expect
> something like an UnsupportedOperationException, but definitely not an
> OutOfMemoryError.
> - the former meaning is an actual Error, whereas the latter is an
> Exception from which the application can recover.
> - developers might be tempted to catch the OOME and retry reading the
> file/input stream in chunks, no matter the cause of the OOME.
> What was the rationale for using OutOfMemoryError here? And would it still be
> possible to change this before Rampdown Phase 2?
> Kind regards,
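For readers who land on this thread: the recovery the quoted email alludes to, reading in fixed-size chunks rather than materializing one giant array, can be sketched as below. The names copyInChunks and ChunkSink and the 4096-byte buffer are illustrative choices for this sketch, not part of any JDK API discussed above.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ChunkedRead {

    // Callback that receives each chunk; len is the number of valid
    // bytes at the start of the chunk array.
    interface ChunkSink {
        void accept(byte[] chunk, int len) throws IOException;
    }

    // Reads the stream in fixed-size chunks, passing each chunk to the
    // sink, so no single byte[] ever needs to hold the whole (possibly
    // >2GB) input. Returns the total number of bytes read as a long.
    static long copyInChunks(InputStream in, ChunkSink sink, int chunkSize)
            throws IOException {
        byte[] buf = new byte[chunkSize];
        long total = 0;
        int n;
        while ((n = in.read(buf, 0, buf.length)) != -1) {
            sink.accept(buf, n);
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[10_000];
        long total = copyInChunks(new ByteArrayInputStream(data),
                                  (chunk, len) -> { /* process chunk */ },
                                  4096);
        System.out.println(total); // prints 10000
    }
}
```

Because the running total is a long, this approach sidesteps the int-indexed array limit entirely rather than catching OutOfMemoryError after the fact.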