brian.goetz at oracle.com
Fri Dec 20 20:04:32 UTC 2019
> 1) As a design pattern
This was the strawman starting point, shortly after the JVMLS meeting,
which kicked off the "eclair" notion. While this one seems like "the
simplest thing that could work", it strikes me as too simple.
When some version of this approach was floated much earlier, Stephen
commented "I'm not looking forward to making up new names for the inline
flavor of LocalDateTime and friends." I share this concern, but 100x so
on behalf of the clients -- I don't want to force clients to have to
keep a mental database of "what is the inline flavor of this called."
So I think it's basically a forced move that there is some mechanical way
to say "the other flavor of T".
Several folks have come out vocally in favor of the Foo / foo naming
convention, which could conceivably satisfy this requirement. But, I
see this as a move we will likely come to regret. (Among other things,
there goes our source of conditional keywords, forever. On its own,
that's a lot of damage to the future evolution of the language.)
The "mechanical way to describe the reference
companion/projection/pair/whatever" becomes even stronger when we get to
specialized generics, as we'll need to be able to say `T.ref` for a type
variable `T` (this is, for example, the return type of `Map::get`.) The
other direction is plausible too (when `T extends InlineObject`), though
I don't have compelling examples of this in mind right now, so it's
possible that this is only a one-way requirement.
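To make the `Map::get` point concrete, here is a plain-Java sketch (today's
code, no Valhalla syntax) of why the return type has to be the nullable
reference projection:

```java
import java.util.HashMap;
import java.util.Map;

public class MapGetDemo {
    public static void main(String[] args) {
        Map<String, Integer> m = new HashMap<>();
        m.put("a", 1);

        // Map::get signals "no mapping" with null.  If V were an inline
        // (non-nullable) type, there would be no null to return -- so under
        // specialized generics the return type has to be V.ref, not V.
        Integer present = m.get("a");
        Integer absent  = m.get("b");

        System.out.println(present);  // 1
        System.out.println(absent);   // null
    }
}
```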
> 2) As an "advanced" feature of inline classes
> This is the State of Valhalla strategy: inline classes are designed to be inline-default, but as a special-case feature, you can also declare the 'Foo.ref' interface, give it a name, and wire it up to the inline class declaration.
> In reference-default style, the programmer gives the "good name" to the reference projection, and either gives an alternate name to the inline class or is able to elide it entirely (in that case, clients use 'Foo.inline').
> Ways this is different than (1):
> - The 'Foo.inline' type operator
> - Implicit conversions (although sealed types can get us there in (1))
> - There are two types, not three (and two JVM classes, not three)
> - Opportunities for "boilerplate reduction" in the two declarations
Much of the generality of (2) comes from the goals of migrating
primitives to just be declared classes, while retaining the spelling
`Integer` for the ref projection, and not having _two_ box types. If
we're willing to special-case the primitives, then we may be able to get
away with less generality here.
> 3) As an equal partner with inline-default
> An inline class declaration introduces two types, an inline type and a reference type. But a modifier on the declaration determines whether the "good name" goes to the inline type or the reference type. The other type can be derived using an operator ('Foo.ref' or 'Foo.inline'). There's never a need for an alternate name.
> In this case, the language isn't biased to one style or the other; each declaration picks one. The trade-off is that clients need to keep track of one more bit when thinking about the inline class ("Is this a *foo* inline class or a *bar* inline class?" Actual terminology to be bikeshedded...)
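In hypothetical surface syntax (the `ref-default` modifier spelling here is
mine, purely illustrative), option (3) might read:

```
// Polarity 1: the good name denotes the inline type.
inline class Point { int x, y; }
Point p;          // inline, flattened, non-nullable
Point.ref pr;     // derived reference projection

// Polarity 2: the good name denotes the reference type.
ref-default inline class LocalDate { ... }   // hypothetical modifier
LocalDate d;          // reference type, nullable
LocalDate.inline di;  // derived inline projection
```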
In a previous iteration, we had an LV/QV duality at the VM level, which
corresponded to a null-default/zero-default duality at the language
level. We hated both of these (too much complexity for too little
gain), so we ditched them. What you're proposing is to reintroduce a
new duality, `ref-default` vs `inline-default`, which would arbitrate
custody of "the good name".
What I like about this is that _both_ `Foo.ref` and `Foo.inline` become
true projections from the class declaration Foo; there's no "write a
bunch of classes and wire up their relationship". (Though some degree
of special pleading and auto-wiring would be needed for primitives,
which seems like it is probably acceptable.) It is a more principled
position, and not actually all that different in practice from (2), in
that the default is still inline.
What I don't like is that (a) the author has to pick a polarity at
development time (and therefore can pick wrong), (b) to the extent
ref-default is common, the client now has to maintain a mental database
of the polarity of every inline class, and (c) if the polarity is not
effectively a forced move (as in (2), where we only use it for
migration), switching polarities will (at least) not be binary
compatible. So the early choice (made with the least information) is
permanent. From a user perspective, we are introducing _two_ new kinds
of top level abstractions; in (2), we are introducing one, and leaning
on interfaces/abstract classes for the other. On the other other hand,
having more ref-default classes than the migrated ones will make
`.inline` stick out less.
Do we want to step back away from the experiment that is `inline`, and
go back to `Foo.ref` and `Foo.val`? If we're looking to level the
playing field, giving them equally fussy/unfussy names is a leveler...
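That is, neither projection would get the unadorned name:

```
Foo.ref r;   // reference projection
Foo.val v;   // inline (value) projection
```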
> 4) As the only supported style
> An inline class declaration always gives the "good name" to the reference type, and you always use an operator to get to the inline type ('Foo.inline'—but we're gonna need better syntax.)
> This one would represent a significant shift in the design center of the feature. If you want flattening everywhere, you're going to need to make liberal use of the '.inline' operator. But if you just want to declare that a bunch of your classes don't have identity, and hopefully get a cheap performance boost as a result, it's simple. The burden of learning something new is shifted to "advanced" users and APIs to whom flattening is important.
I can't really see this being a winner.
> I'm not ready to completely dismiss any of these designs, but my preferences at the moment are (1) and (3). Options (4) and (5) are more ambitious, discarding some of our assumptions and taking things in a different direction.
> Like many design patterns, (1) suffers from boilerplate overhead ((2) too, without some language help). It also risks some missed opportunities for optimization or language convenience, because the relationship between the inline and reference type is incidental. (I'd like to get a clearer picture of whether this really matters or not.)
The main knock on (1) is that it leans on an ad-hoc convention, and to
the extent this convention is not universally adhered to, user confusion
abounds. (Think about how many brain cycles you've spent being even
mildly miffed that the box for `long` is `Long` but the box for `char`
is `Character`. If it's more than zero, that's a waste of cycles.)
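The irregularity is easy to demonstrate in today's Java (a small runnable
check, nothing Valhalla-specific):

```java
public class BoxNames {
    public static void main(String[] args) {
        // Autoboxing picks the wrapper class; the names are irregular:
        Object l = 5L;    // boxes to java.lang.Long
        Object c = 'x';   // boxes to java.lang.Character, not "Char"

        System.out.println(l.getClass().getSimpleName()); // Long
        System.out.println(c.getClass().getSimpleName()); // Character
    }
}
```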
I really have a hard time seeing (1) as leading where we want.
> (5) feels like something fundamentally new in Java, although if you squint it's "just" a variation on name resolution. What originally prompted this idea was seeing a similar approach in attempts to introduce nullability type operators—legacy code has the "wrong" default, so you need some lightweight way to pick a different default.
(5) could be achieved with another long-standing request, aliased imports:
import Foo.inline as Foo;
Not saying that makes it better, but a lot of people sort of want import
to work this way anyway.
More information about the valhalla-spec-observers mailing list