Loom and reading/writing of data

Kasper Nielsen kasperni at gmail.com
Sun Nov 24 21:51:49 UTC 2019

> MemorySegment::asByteBuffer and MemorySegment.ofByteBuffer is the
> current proposal for interop.
> -Alan

I'm not really looking for interop with ByteBuffer. I'm questioning whether or
not ByteBuffers are the right low-level API for blocking sequential IO going
forward.

While Loom does a great job of freeing people from dealing with schedulers and
manually managing the stack, it currently falls short in providing a good
solution to another major problem in writing high-throughput Java servers:
memory management.

Every single network framework out there, and thereby the majority of Java
users, supports managing pools of byte buffers (or abstractions of them) in
some way. A quick search reveals:

* Undertow : io.undertow.connector.ByteBufferPool
* Grizzly  : org.glassfish.grizzly.memory.MemoryManager
* Jetty    : org.eclipse.jetty.io.ByteBufferPool
* Akka     : akka.io.BufferPool
* Netty    :
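
To make concrete what all of these libraries re-implement, here is a toy
pool (a hypothetical minimal sketch, not the API of any of the frameworks
above): hand out direct buffers, take them back, and hope callers never
forget to return one.

```java
import java.nio.ByteBuffer;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical minimal buffer pool, illustrating the bookkeeping every
// framework listed above re-implements in some form. Real pools add size
// classes, thread-local caches, and leak detection on top of this.
final class SimpleBufferPool {
    private final ConcurrentLinkedQueue<ByteBuffer> free = new ConcurrentLinkedQueue<>();
    private final int bufferSize;

    SimpleBufferPool(int bufferSize) {
        this.bufferSize = bufferSize;
    }

    ByteBuffer acquire() {
        ByteBuffer b = free.poll();
        return b != null ? b : ByteBuffer.allocateDirect(bufferSize);
    }

    void release(ByteBuffer b) {
        b.clear();       // the caller must not touch the buffer after this
        free.offer(b);   // forgetting this call is the classic "leak"
    }
}
```

Note that nothing stops a caller from using a buffer after releasing it;
that hole is exactly why the real implementations grow leak detectors and
reference counting.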

I've included the full GitHub URL to Netty's ByteBuffer abstraction just to
show how complicated some of these implementations get. Netty's
implementation is probably the most advanced, resorting to "tricks" such as
reference counting, leak detectors, and Unsafe.allocateUninitializedArray()
to avoid the overhead of zeroing out data.
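
Reference counting alone imposes a whole usage protocol on buffer consumers.
A toy version (hypothetical, and far simpler than Netty's ReferenceCounted,
which adds leak tracking, derived buffers, and recycling) might look like:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Toy reference-counted buffer holder. Every consumer along a pipeline
// must retain() before keeping a reference and release() when done; one
// missed release() leaks the memory, one extra release() frees it while
// another consumer is still reading.
final class RefCountedBuffer {
    private final AtomicInteger refCnt = new AtomicInteger(1);
    private final byte[] memory;
    private volatile boolean freed;

    RefCountedBuffer(int size) {
        this.memory = new byte[size];
    }

    RefCountedBuffer retain() {
        if (refCnt.getAndIncrement() <= 0)
            throw new IllegalStateException("already released");
        return this;
    }

    // Returns true when this call dropped the count to zero and freed
    // the underlying memory.
    boolean release() {
        int cnt = refCnt.decrementAndGet();
        if (cnt == 0) { freed = true; return true; }
        if (cnt < 0) throw new IllegalStateException("over-released");
        return false;
    }

    boolean isFreed() {
        return freed;
    }
}
```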

If you need to deploy a buffer-management solution with the complexity (leak
detectors, security issues, etc.) of Netty's in order to get comparable
performance with Loom, that does rather undermine the narrative of making it
"easy to write highly concurrent network servers".

The main selling point of ByteBuffer is that it allows random access to binary
data. But for 99% of blocking network IO programming that is not particularly
relevant. All you care about is reading or writing the next element. So why use
it as the lowest-level API?
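
The random-access machinery shows up even in purely sequential use: every
read/write cycle drags ByteBuffer's position/limit state machine along. A
small illustration (a self-contained sketch, not taken from any framework):

```java
import java.nio.ByteBuffer;

// Sequential use of ByteBuffer still requires the flip()/compact()
// ceremony, because the same position/limit cursors serve both reading
// and writing.
public class SequentialCeremony {
    static int[] writeThenRead() {
        ByteBuffer buf = ByteBuffer.allocate(16);
        buf.putInt(42).putInt(7); // write mode: position advances per put
        buf.flip();               // read mode: limit = old position, position = 0
        int first = buf.getInt(); // consume the first element
        buf.compact();            // keep the unread bytes, back to write mode
        buf.flip();               // read mode again for the remainder
        int second = buf.getInt();
        return new int[] { first, second };
    }

    public static void main(String[] args) {
        int[] r = writeThenRead();
        System.out.println(r[0] + " " + r[1]);
    }
}
```

Forget one flip() or compact() and you read garbage or clobber unread data;
none of this ceremony has anything to do with "give me the next element".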

A much better approach would be to push the memory management of buffers down
into the VM, freeing users from this tedious and error-prone bookkeeping while
at the same time allowing the VM to optimize memory usage, memory copying, and
so on.
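
To sketch the shape of such an API (purely hypothetical, not a proposal from
the JDK; java.io.DataInputStream already has a similar surface): the caller
only ever asks for the next element, and whatever buffering happens underneath
is invisible.

```java
import java.io.*;

// Purely hypothetical sketch of a sequential-read facade where buffering
// is an implementation detail the caller never sees. Here a plain
// BufferedInputStream stands in for the VM-managed buffer; the point is
// the API surface, not this particular implementation.
final class BinaryInput implements AutoCloseable {
    private final DataInputStream in;

    BinaryInput(InputStream src) {
        this.in = new DataInputStream(new BufferedInputStream(src));
    }

    int readInt() throws IOException {  // big-endian, like DataInput
        return in.readInt();
    }

    byte readByte() throws IOException {
        return in.readByte();
    }

    @Override
    public void close() throws IOException {
        in.close();
    }
}
```

There are no positions, limits, flips, or pools in sight, which is exactly
the property a VM-managed design could give every user for free.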

Of course, something like this could be added at a later time. But it would
mean that people would continue to depend on libraries for memory management
if they want to go the last mile.
