RFR 8193832: Performance of InputStream.readAllBytes() could be improved

Peter Levart peter.levart at gmail.com
Thu Dec 21 18:23:50 UTC 2017

On 12/21/2017 05:38 PM, Brian Burkhalter wrote:
> Hi Peter,
> On Dec 21, 2017, at 2:03 AM, Peter Levart <peter.levart at gmail.com> wrote:
>> This is OK as is, but I see another possible improvement to the logic. You decide whether it is worth trying to implement it. Currently the logic reads the stream into buffers of DEFAULT_BUFFER_SIZE and adds them to an ArrayList, except the last buffer, which is first copied into a shorter buffer before being appended to the list. This copying is unnecessary. The copied buffer has the same content, just a shorter length. But the information about the length of the final buffer is contained elsewhere too (for example, implicitly in 'total'). So you could change the final "gathering" loop to extract this information for the final buffer and no redundant copying of the final buffer would be necessary.
>> What do you think?
> Actually I had thought of that as well. I think that it would require maintaining a list of the number of valid bytes in each of the buffers or equivalent. If having a second ArrayList for this purpose would not be too much overhead then it would probably be worthwhile. Or I suppose a single List containing an object containing both the bytes and the length would work. One could for example use

I don't think this would be necessary. All buffers but the last one are 
fully filled. The inner reading loop guarantees that a buffer is 
either fully filled or the stream is at EOF. So in the final gathering 
loop you could maintain a 'remaining' value, initialized to 'total' and 
decremented at each iteration. The number of bytes to copy for each 
buffer would then be Math.min(DEFAULT_BUFFER_SIZE, remaining). That's 
one possibility. There are others. But no new structures are necessary.
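To make the idea concrete, here is a minimal sketch (not the actual JDK implementation; the class name, helper method, and use of a local DEFAULT_BUFFER_SIZE are illustrative). The inner loop fills each buffer completely unless EOF is hit, so only the last buffer can be partially filled, and the gathering loop recovers each buffer's valid length from 'remaining' alone:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

public class ReadAllBytesSketch {

    private static final int DEFAULT_BUFFER_SIZE = 8192;

    static byte[] readAllBytes(InputStream in) throws IOException {
        List<byte[]> bufs = new ArrayList<>();
        int total = 0;
        int n;
        do {
            byte[] buf = new byte[DEFAULT_BUFFER_SIZE];
            int nread = 0;
            // Inner loop: fill this buffer completely unless EOF is reached.
            // read(..., 0) returns 0 once the buffer is full, ending the loop.
            while ((n = in.read(buf, nread, buf.length - nread)) > 0) {
                nread += n;
            }
            if (nread > 0) {
                bufs.add(buf);   // only the last buffer may be partially filled
                total += nread;
            }
        } while (n >= 0);        // n == -1 signals EOF

        // Gathering loop: no per-buffer length list is needed. All buffers
        // except possibly the last are full, so 'remaining' determines how
        // many bytes of each buffer are valid.
        byte[] result = new byte[total];
        int offset = 0;
        int remaining = total;
        for (byte[] buf : bufs) {
            int count = Math.min(DEFAULT_BUFFER_SIZE, remaining);
            System.arraycopy(buf, 0, result, offset, count);
            offset += count;
            remaining -= count;
        }
        return result;
    }
}
```

The final buffer is appended to the list at full DEFAULT_BUFFER_SIZE, uncopied; its trailing garbage bytes are simply never read because the gathering loop copies only Math.min(DEFAULT_BUFFER_SIZE, remaining) bytes from it.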

Regards, Peter

> ByteBuffer bb = ByteBuffer.wrap(buf).limit(nread)
> for each set of reads and store the ByteBuffer in the list although maybe this would be too much overhead?
> Thanks,
> Brian
