RFR 8023463 Update HashMap and LinkedHashMap to use bins/buckets or trees (red/black)

Remi Forax forax at univ-mlv.fr
Sun Aug 25 18:04:42 UTC 2013

On 08/21/2013 02:25 PM, Paul Sandoz wrote:
> Hi,
> Here are Doug's Linked/HashMap changes, discussed in a previous thread, as a webrev:
>    http://cr.openjdk.java.net/~psandoz/tl/JDK-8023463-Linked-HashMap-bin-and-tree/webrev/
> I also added some tests related to characteristics associated with fixing another bug.
> Looking at the diffs will be tricky given there are so many changes.

The code can be simplified a little bit by using the diamond syntax,
HashMap (lines: 919, 963, 1026, 1497, 1569) and
LinkedHashMap (lines: 255, 264, 270, 278)
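For instance, the diamond syntax (Java 7+) lets the compiler infer the
type arguments on the right-hand side of an allocation (a generic
illustration, not the code at those lines):

```java
import java.util.HashMap;
import java.util.Map;

public class DiamondExample {
    public static void main(String[] args) {
        // Before: type arguments repeated on the right-hand side
        Map<String, Integer> verbose = new HashMap<String, Integer>();
        // After: the diamond lets the compiler infer them
        Map<String, Integer> concise = new HashMap<>();
        verbose.put("a", 1);
        concise.put("a", 1);
        System.out.println(verbose.equals(concise)); // prints "true"
    }
}
```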

There are a bunch of @Override annotations missing, which makes the code
harder to read than it should be.

HashMapSpliterator.est should be renamed to estimateSize.

All occurrences of "(n - 1) & hash" should be replaced by "hash & (n - 
1)" because it's easier to explain.
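Both orders compute the same bucket index since & is commutative; the
point is purely about readability. A quick check (values are arbitrary):

```java
public class MaskExample {
    public static void main(String[] args) {
        int n = 16;          // table length, always a power of two in HashMap
        int hash = 0x7A35B;  // an arbitrary (already spread) hash value
        // & is commutative, so both spellings give the same bucket index
        System.out.println(((n - 1) & hash) == (hash & (n - 1))); // prints "true"
        // masking with (n - 1) keeps the low bits, i.e. hash % n for power-of-two n
        System.out.println((hash & (n - 1)) == Math.floorMod(hash, n)); // prints "true"
    }
}
```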

TreeNode.getTreeNode() should be a static method that takes a Node<K,V> as 
its first argument,
so most of the casts to TreeNode<K,V> that bother you will disappear :)
   static <K,V> TreeNode<K,V> getTreeNode(Node<K,V> node, int h, Object k) {
     TreeNode<K,V> first = (TreeNode<K,V>) node;
     return ((first.parent != null) ? first.root() : first).find(h, k, null);
   }

In putVal, line 654, the null check (e != null) makes the method hard to 
read; at least I think a comment is needed saying that it means an 
existing node needs to be altered.
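The convention behind that null check can be illustrated with a small
sketch (this is not the JDK source; putLike is a hypothetical helper that
mimics the search-then-decide shape of putVal):

```java
import java.util.HashMap;
import java.util.Map;

public class PutExistingExample {
    // Mimics the putVal convention: search for the key first; a non-null
    // result ("e != null") means an existing node must be altered, while
    // null means a brand-new node has to be created and linked in.
    static <K, V> V putLike(Map<K, V> map, K key, V value) {
        V old = map.get(key);   // stands in for the bucket/tree search
        map.put(key, value);    // alter the existing node or add a new one
        return old;             // non-null == an existing mapping was replaced
    }

    public static void main(String[] args) {
        Map<String, Integer> m = new HashMap<>();
        System.out.println(putLike(m, "k", 1)); // prints "null" (new node)
        System.out.println(putLike(m, "k", 2)); // prints "1" (existing node altered)
    }
}
```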

And I still don't like the way the method clone() is implemented (the 
older implementation has the same issue):
result should not be initialized to null, and the catch clause should 
throw an InternalError or an AssertionError.
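A minimal sketch of the pattern I mean (simplified, not the HashMap code):
the result is assigned directly inside try, and the impossible
CloneNotSupportedException is rethrown as an InternalError instead of
being swallowed:

```java
public class CloneExample implements Cloneable {
    int value = 42;

    @Override
    public CloneExample clone() {
        try {
            // no null-initialized local needed: return the copy directly
            return (CloneExample) super.clone();
        } catch (CloneNotSupportedException e) {
            // cannot happen: this class implements Cloneable
            throw new InternalError(e);
        }
    }

    public static void main(String[] args) {
        CloneExample copy = new CloneExample().clone();
        System.out.println(copy.value); // prints "42"
    }
}
```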

Taking a look at the generated assembly for HashMap.get(), there are 
three differences:
1) the old implementation and the new implementation both have a 
shortcut to return null,
     but the condition is slightly different: the old implementation 
tests if size == 0 while
     the new one tests if the array length <= 0.
     So the new implementation uses fewer loads, but at the price of a 
less general condition.
2) the new implementation does a lot less bit twiddling on the 
hashCode value.
3) the old implementation uses an array of Object and not an array of 
Node, so
     there is an extra (fast) checkcast.
Apart from that the whole code is identical, so the introduction of the 
tree implementation has
no runtime cost if the tree is not needed.

For HashMap.put, the new implementation has no runtime cost either, and 
it does fewer loads
because the code of the old implementation was written in a stupid way
(the field 'table' had to be loaded several times, for example).

So the code is clearly an improvement over the previous version 
in terms of 'assembly beauty' :)

> I fixed unchecked warnings in LinkedHashMap, but have not done so for HashMap, where there are many more casts from Node to TreeNode. One way to solve that is with a static method:
>      @SuppressWarnings("unchecked")
>      static <K, V> TreeNode<K, V> asTreeNode(Node<K, V> n) {
>          return (TreeNode<K, V>) n;
>      }
> but i dunno what the performance implications are. Perhaps it's best to simply stuff @SuppressWarnings on methods of TreeNode rather than trying to modify code just for the sake of reducing the scope.

You should not need the @SuppressWarnings here: casting a Node<K,V> to 
TreeNode<K,V> is a checked cast, so javac emits no unchecked warning.
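To illustrate with a self-contained sketch (simplified Node/TreeNode
classes, not the JDK's): when the target's type arguments are fully
determined by the source type, the downcast is statically checkable, so
no @SuppressWarnings is required on the helper:

```java
// Simplified stand-ins for HashMap.Node and HashMap.TreeNode
class Node<K, V> { K key; V value; }
class TreeNode<K, V> extends Node<K, V> { TreeNode<K, V> parent; }

public class CastExample {
    // Checked cast: TreeNode<K,V> is the only TreeNode type a Node<K,V>
    // can be, so javac emits no unchecked warning here.
    static <K, V> TreeNode<K, V> asTreeNode(Node<K, V> n) {
        return (TreeNode<K, V>) n;
    }

    public static void main(String[] args) {
        Node<String, Integer> n = new TreeNode<>();
        System.out.println(asTreeNode(n) == n); // prints "true"
    }
}
```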

> A JPRT job has been kicked off.
> I recommend when this code goes in we look closely at code coverage results to see if we are missing areas testing tree functionality and update/add new tests accordingly.
> Paul.


More information about the core-libs-dev mailing list