Value array flattening in L-world

John Rose john.r.rose at
Sat Feb 24 07:17:06 UTC 2018

On Feb 23, 2018, at 6:44 AM, David Simms <david.simms at> wrote:
> ...
>>> * Can all value arrays be unconditionally flat ?
>>> Consider legacy code for a user collection, using "<T> T[] elemData;" as storage, given T=EndPoint.class, whereby "EndPoint" is later recompiled as a value:
>>>     void remove(int index) {
>>>         elemData[index] = null;
>>>     }
>>>     boolean isEmpty(int index) {
>>>         return elemData[index] == null;
>>>     }
>>> This example code is fairly ordinary, yet flat array semantics would completely break it: "remove()" will always NPE, and "isEmpty()" will always return false.
>> This is not true, and is a good example of what I said above about
>> getting nullable arrays of value types, when you need them.  After
>> translation into a class file, "<T> T[] elemData" becomes "Object[]
>> elemData", and the code continues to work, even with value types.
> You're right. Then if we translate "T" to be specifically "EndPoint", we are back to saying: well, this is typed and now obvious at compile time, so we can warn the user when they add the "__VALUE__" keyword and compile against any existing code. Then there is the case where said legacy code wasn't included in the compile, but at least it is clear from the signature which type we are talking about. There may be user advice to avoid this, plus static checking of legacy code for potential problems (e.g. a jlink plugin). Is this good enough? Or am I giving us a pass on this?

I think it's good enough.  I'm relying on our experience with Java
generics, which at the JVM level allow all sorts of heap pollution;
but in practice the rules fit together to make heap pollution a rarity.
I think null pollution is the same kind of thing:  Possible at lots of
points in the JVM, but in practice a rarity.  Especially if we snipe
as many nulls as we can find in new code which knows about value types.
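As a concrete sketch of the translation point above: after erasure, a generic field declared as "<T> T[] elemData" is stored as Object[], so the legacy remove()/isEmpty() pattern keeps working even when T is later instantiated with a null-hostile value type. LegacyBox and its fixed capacity are hypothetical, purely for illustration:

```java
// Sketch of the legacy pattern under erasure: the generic field
// "<T> T[] elemData" compiles to Object[], so storing null keeps
// working regardless of what T is instantiated with.
public class LegacyBox<T> {
    private final Object[] elemData = new Object[10];  // erased storage

    @SuppressWarnings("unchecked")
    T get(int index) { return (T) elemData[index]; }

    void set(int index, T value) { elemData[index] = value; }

    void remove(int index) { elemData[index] = null; }  // fine: array is Object[]

    boolean isEmpty(int index) { return elemData[index] == null; }
}
```

Only a flattened array of the value type itself (rather than this erased Object[] storage) would reject the null store.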

> ...
>> Also, this leads to the question of whether the null check
>> should be built into checkcast or not.  I'm not sure about
>> that one.  I suspect we might want vcheckcast which does
>> a checkcast *and* a null check.  Or maybe a "wide" prefix
>> version of checkcast which accepts mode bits to control
>> the exact details of the checking (this could scale to other
>> types of conversions also).
> Checkcast performing a null check for a value class seems reasonable. It does break the fast path you get with nulls... maybe it's better if javac inserts the null check?

It complicates the fast path in the interpreter, kind of like the
value check in acmp.  The JIT doesn't care at all, because
it is exquisitely aware of all null checks, and handles them
the same whether they are bundled into bytecodes or separate.

I could go either way on this, but something tells me to keep
picking at the idea of bundling the null check into checkcast.
It makes templates simpler in the end to do it that way, but
I think there may be other reasons to bundle the check also.

Basically, I think it will be a rarity (in new code) to do a
checkcast to a value type *but* want to preserve nulls.
We can have a special library call or idiom for that,
instead of coupling every single checkcast (of a VT)
with an extra call to Objects.requireNonNull.
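At the library level, the two alternatives amount to something like the sketch below. EndPoint is just an ordinary class standing in for a value type (real value types don't exist in today's Java), and both helper names are made up for illustration:

```java
import java.util.Objects;

public class ValueCasts {
    // Hypothetical stand-in for a value type; an ordinary final
    // class here, since value types are not yet in the language.
    static final class EndPoint {
        final int port;
        EndPoint(int port) { this.port = port; }
    }

    // What a null-rejecting checkcast ("vcheckcast") would amount to
    // at the library level: cast *and* reject null in one step.
    static EndPoint castValue(Object o) {
        return (EndPoint) Objects.requireNonNull(o, "value types are null-free");
    }

    // The rarer null-preserving idiom: an explicit opt-in that lets
    // null flow through instead of failing fast.
    static EndPoint castNullable(Object o) {
        return o == null ? null : (EndPoint) o;
    }
}
```

Bundling the check into the bytecode makes castValue the cheap default path, while the null-preserving case becomes the one that pays for an explicit idiom.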

— John

More information about the valhalla-dev mailing list