[records] Record updates for Preview/2

John Rose john.r.rose at oracle.com
Fri Jan 10 04:45:39 UTC 2020

On Jan 8, 2020, at 3:55 PM, Brian Goetz <brian.goetz at oracle.com> wrote:
> We're gathering a list of what non-bugfix things we want to do for the second preview of records.  So far, on my list, I have:
> 1.  Revisiting the j.l.Record supertype.  We want to support inline records when we have inline types.  Until now, we've been working on the assumption in Valhalla that inline classes can only extend interfaces, not abstract classes, so it was suggested that Record should be an interface.  However, that topic in Valhalla is under reconsideration, and I want to wait until that discussion plays out before making any changes here.  
> It has also been pointed out that the name Record is somewhat clash-y.  I'm not really willing to pick a lousy name just to reduce the chance of clashes, but I might be OK with a name like RecordClass.  (Bikeshed alert: if you want to discuss any of these topics, please start a new thread; this one is about curating a to-do list.)
> 2.  Accessibility of mandated members.  Remi noted that the requirement that the mandated record members be public, even for non-public classes, was weird.  But, at the time, the spec was in a state that trying to revisit this was impractical -- Gavin has now left the spec in a much cleaner place, and so it is reasonable to reopen this discussion.  The leading alternate candidate is to propagate the accessibility of the record to its mandated members (public for public, etc), but still require the author to say what they mean.   

+1 on going another round on accessibility.

I specifically want to make sure we are doing the best thing on access defaults (what happens when you don’t mention *any* access mode).  I suppose that’s a separate item, related to this one.  Even though we have a decision of record on this, I’m calling it out
here and now because we are stuck with the final decision we make this time around.
(Maybe like a mandatory appeal for a capital case.)
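To make the status quo concrete, here is a small sketch (class and file names are mine) showing the rule under discussion: even a package-private record is required to expose public mandated accessors.  Runnable on JDK 16+, where records are final:

```java
import java.lang.reflect.Modifier;

// A package-private record: the record class itself is NOT public ...
record Point(int x, int y) { }

public class AccessDemo {
    public static void main(String[] args) throws Exception {
        var accessor = Point.class.getDeclaredMethod("x");
        // ... yet the mandated accessor is public under the current rule.
        System.out.println(Modifier.isPublic(accessor.getModifiers()));      // true
        System.out.println(Modifier.isPublic(Point.class.getModifiers()));   // false
    }
}
```

The alternate candidate above would instead make the accessor package-private here, matching the record's own accessibility.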

> 3.  Nesting considerations.  In 14 we will fix the issues surrounding local records, but we still ban nested records in non-static classes.  We should fix this -- by dropping the restriction on static members in inner classes -- and then bring records, enums, and interfaces to parity (allowing local and nested flavors of all, all implicitly static.)  

Agreed.  It’s time to let static types (and maybe other statics) nest more freely.
(The reasons for the current restrictions are IMO obsolete, because we will
never take the path of making inner classes into true dependent types.  Back
in the day we thought we would leave that door open a crack.  If perchance we
do dependent types in the future, a la Beta and Scala, we’ll surely declare them
with some new, explicit syntax rather than by retrofitting inner classes.)
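A sketch of the parity being proposed (names are invented; this assumes the static-member restriction on inner classes is dropped, as it eventually was in JDK 16): a member record inside an inner class, next to a local record that 14 already permits.

```java
public class Outer {
    class Inner {
        // Under the 14 preview rules this was an error: inner classes could not
        // declare static members, and member records are implicitly static.
        record Nested(int v) { }   // legal once the restriction is dropped
    }

    void m() {
        // Local records are already handled in 14; they too are implicitly static.
        record Pair(int a, int b) { }
        System.out.println(new Pair(1, 2));   // Pair[a=1, b=2]
    }

    public static void main(String[] args) {
        new Outer().m();
    }
}
```

Because Nested is implicitly static, it needs no enclosing Inner (or Outer) instance to be constructed.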

> 4.  Abstract records.  Several people have asked "what about abstract records"; while these are theoretically possible, there are some complications that I think are best left for treating these as a later addition if needed.  But, for the record, here are some thoughts from the last time I looked into this.
> Given that records are nominal tuples, the notion of "width subtyping" comes immediately to mind.  So, let's go with that for a moment: you could say 
>     abstract record A(int a, int b) { }
> and
>     record B(int a, int b, int c) extends A { }
> and there is a natural width subtyping relationship.  We don't have problems with the equality contract because the abstract class leaves equals() abstract.  
> But, this is a story that is unlikely to be entirely satisfactory.  Do we require that the state of A form a prefix for the state of B?  This may not be the API people want.  Do we require that super(a,b) be passed through unchanged?  The constraints on the subclass in this model get, well, constraining.  
> What if there were a more flexible relationship:
>     record B(int a, int b, int c) extends A(a, b) { }
> Now, there's more flexibility, at the cost of a new "extends" construct.  And what if you want to fix some field of A as a constant:
>     record iload2_bytecode() extends bytecode(0x1c) { }
> These all seem like reasonable things to want when you get into abstract records ... but they all start to push on the "records imperative".  So for now, we're doing nothing, until we have a better story for what we actually want to do.

Personally, I don’t have much appetite for abstract records, any more than
I do for the notion of tuple inheritance.  (int x, int y) is not a subtype of (int x).
Remi said it well:  Inheritance is not the right mechanism to express such things.

*But.*  I *do not* want this decision to accidentally drive the translation strategy.
Just because we don’t have an abstract class to plant “toString” in doesn’t mean
we are dispensed from defining a modular way to manage “toString” on records.
We might start with a hardwired public API for this, but I think we also want to
keep an eye on Brian’s dreams for wiring up implementations by delegation.
Where I think this goes is towards a record implementation class with a public
API which (a) works well today and (b) is likely to be the starting point of a
more pluggable story for behaviorally polymorphic APIs like the toString of
record.  (And the equals and hashCode.)  The libraries, not the language,
should define these methods, and the shape of the definition should be
chosen with that more pluggable future in mind.
> 5.  Deconstruction patterns.  Yes, records should be deconstructible with deconstruction patterns.  
> Anything else, that we've already discussed that I left out?

6. Default access modes in records.  (See above.)

Choices:  Package scope like classes, public like interfaces, something
new for records.

7. Translation strategy.  (Is it polished and future-friendly?)  In the preview
it’s done by means of jlr.ObjectMethods::bootstrap, which seems a little
non-specific to records.  Doing something with a simple-enough indy
(and no other javac-generated code!) is probably future-friendly.  When
and if we get “mindy” (indy-flavored method definitions) the indy instruction
can be quietly retired in favor of a mindy-bootstrap, with no semantic
change.  Beyond that, the happy point I want to get to in the future is a
sort of inherited mindy, where javac doesn’t mention “toString” at all
in any given record classfile, and instead there’s some wiring somewhere
that says, “records conform to Object via this mindy provider”.

That verges on an abstract class or even an interface; do we need or want
to add something like that as a mandated feature of all records?  Maybe
not.  But I do think we want to get a foot into the door somehow with
records, today, that we parlay into a channel for new features tomorrow.
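For reference, the bootstrap can be poked at directly.  This sketch (runnable on JDK 16+, where the class landed as java.lang.runtime.ObjectMethods; record and variable names are mine) invokes it the way javac’s indy would, for toString:

```java
import java.lang.invoke.CallSite;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.lang.runtime.ObjectMethods;

record Pt(int x, int y) { }

public class BootstrapDemo {
    public static void main(String[] args) throws Throwable {
        var lookup = MethodHandles.lookup();
        var getX = lookup.findVirtual(Pt.class, "x", MethodType.methodType(int.class));
        var getY = lookup.findVirtual(Pt.class, "y", MethodType.methodType(int.class));

        // Passing a MethodType asks for the indy flavor: a CallSite comes back.
        // Component names travel as a single ';'-joined string.
        CallSite cs = (CallSite) ObjectMethods.bootstrap(
                lookup, "toString",
                MethodType.methodType(String.class, Pt.class),
                Pt.class, "x;y", getX, getY);

        System.out.println((String) cs.dynamicInvoker().invoke(new Pt(1, 2)));
    }
}
```

The same bootstrap serves "equals" and "hashCode"; retiring the indy in favor of a mindy would mean only this call’s packaging changes, not its semantics.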

8. Transactional methods (a.k.a. reconstructors):  Consider generalizing
existing patterns that support multi-component data processing.
This can be left for the future, but it’s worth a look now in order to
make sure today’s similar forms (which are canonical constructors
and deconstruction methods) don’t accidentally tie our hands.

What’s a “transactional” method (or expression or lambda)?  OK, I just
made up the term, and I’ll have a different one tomorrow, but the concept
has been on the table for a long time.  In essence there’s a set of names (record
component names) which are put in scope at the top of a block, may be
sequentially modified by the block, and are committed at the bottom.
I’ll punt the rest of the discussion to a separate email.
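As a rough illustration of the concept in today’s Java (the method shape here is invented by me, not proposed syntax): the components come into scope at the top, the block may adjust them, and the canonical constructor commits at the bottom.

```java
record Range(int lo, int hi) {
    // Hand-written stand-in for what a "transactional" reconstructor
    // might automate.
    Range withLo(int newLo) {
        int lo = newLo, hi = this.hi;                   // names in scope at the top
        if (lo > hi) { int t = lo; lo = hi; hi = t; }   // block may sequentially modify
        return new Range(lo, hi);                       // committed at the bottom
    }
}

public class ReconstructorDemo {
    public static void main(String[] args) {
        System.out.println(new Range(1, 10).withLo(42));   // Range[lo=10, hi=42]
    }
}
```

The canonical constructor is doing the committing here, which is exactly why its form (and that of deconstruction) shouldn’t be allowed to tie our hands.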

9.  If we don’t decide hashCode explicitly, I think we’ll back ourselves into
a bad default decision.  Should the hashing logic for r.hashCode() be specified
in a portable manner?  I say, “no”, since there’s no good portable solution.
Objects.hash is portable, but otherwise it is a poor algorithm.
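Concretely, Objects.hash is specified in terms of Arrays.hashCode’s base-31 polynomial, which is easy to verify:

```java
import java.util.Objects;

public class Hash31Demo {
    public static void main(String[] args) {
        // Arrays.hashCode spec: result = 1; for each e: result = 31*result + e.hashCode()
        int expected = 31 * (31 * 1 + Integer.hashCode(1)) + Integer.hashCode(2);
        System.out.println(Objects.hash(1, 2) == expected);   // true
    }
}
```

That specification is precisely the portability we would be stuck with if records ever document their use of it.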

I’ll fork a separate thread for this.

Right now we don’t *specify* that we combine hashes using the base-31
polynomial, but we *do* use that stupid old thing.  This is plausibly conservative,
but it will get us into a worse spot than necessary.  We can buy ourselves some
time by putting weasel words into Record::hashCode (or its successor) and
ObjectMethods::bootstrap.  We can promise that the hashCode result
depends (typically) on all components of the record, and on nothing else.
We should also explicitly state that the method for combining the bits is
subject to change, very much like we specify that the iteration order for
Set.of and Map.of is subject to change.


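The Set.of precedent is already observable: the unmodifiable collections salt their iteration order per JVM run, so nothing about the order is stable enough to depend on.  A demo (output order may differ from run to run):

```java
import java.util.Set;

public class OrderDemo {
    public static void main(String[] args) {
        // The spec says the iteration order is unspecified and subject to change;
        // the implementation backs that up by salting the order each JVM run.
        Set<String> s = Set.of("a", "b", "c", "d");
        System.out.println(s);   // e.g. [c, a, d, b] -- varies between runs
    }
}
```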
Finally, for luck, we should fold a dash of salt into the hash function to
keep users on their toes.


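A toy sketch of such salting (the scheme and names are invented here, not a proposal for the actual implementation): mix a per-run salt into the combined hash so the exact bit pattern cannot be relied on across runs, while the Object contract still holds within a run.

```java
import java.util.Objects;
import java.util.Random;

final class SaltedRecordHash {
    // Fresh random salt each JVM run (hypothetical scheme).
    private static final int SALT = new Random().nextInt();

    static int hash(Object... components) {
        // Depends on all components and on nothing else observable --
        // except the per-run salt, which is the point.
        return SALT ^ (31 * Objects.hash(components));
    }

    public static void main(String[] args) {
        // Stable within a run, so equals/hashCode consistency is preserved ...
        System.out.println(hash(1, "a") == hash(1, "a"));   // true
        // ... but deliberately unstable across runs.
    }
}
```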
If we aren’t cautious about this, users will eventually discover that we are
using the Objects.hash algorithm, we will document it, and then we’ll be
locked out from an easy improvement to records.

I don’t think we should even promise that the record hashCode is mixed
from the hashCodes of the components.  That’s TMI, and it boxes us in.
(I want a vectorized hashCode someday for records and inlines.)  All we
need to do is promise that a record type’s hashCode fulfills the Object
contract, especially relative to the record’s equals method.  No other
promises, other than that the hashCode is subject to change.  And then
rub some salt on it to change it.

Those are my remaining concerns.

— John

More information about the amber-spec-experts mailing list