RFR 8214971: Replace use of string.equals("") with isEmpty()

Tobias Hartmann tobias.hartmann at oracle.com
Mon Dec 10 08:31:56 UTC 2018

Hi Claes,

the difference is the "array encoding" (ae) argument passed to the intrinsic. See
MacroAssembler::string_compare(... int ae). If we compare a UTF16 String, we know that the number
of bytes is always even (two bytes per char), whereas a Latin1 string uses a one-byte-per-char
encoding. So basically, the UTF16 intrinsic loads/compares chars whereas the Latin1 intrinsic
compares bytes (both apply vectorization accordingly).
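To make the distinction concrete, here is a minimal sketch (my own plain-Java illustration, not the JDK intrinsic or its vectorized code): a Latin1-style compare works on raw bytes, while a UTF16-style compare steps two bytes at a time and compares 16-bit char units, relying on the byte count always being even.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class EqualsSketch {
    // Latin1 path: one byte per char, so compare raw bytes.
    static boolean latin1Equals(byte[] a, byte[] b) {
        return Arrays.equals(a, b);
    }

    // UTF16 path: the byte count is always even, so the loop can safely
    // step two bytes at a time and compare 16-bit char units.
    static boolean utf16Equals(byte[] a, byte[] b) {
        if (a.length != b.length) {
            return false;
        }
        for (int i = 0; i < a.length; i += 2) {
            char ca = (char) (((a[i] & 0xff) << 8) | (a[i + 1] & 0xff));
            char cb = (char) (((b[i] & 0xff) << 8) | (b[i + 1] & 0xff));
            if (ca != cb) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        byte[] x = "héllo".getBytes(StandardCharsets.UTF_16BE);
        byte[] y = "héllo".getBytes(StandardCharsets.UTF_16BE);
        byte[] z = "hello".getBytes(StandardCharsets.UTF_16BE);
        System.out.println(utf16Equals(x, y));                       // true
        System.out.println(utf16Equals(x, z));                       // false
        // For equality (unlike ordering), byte-wise and char-wise
        // compares agree on UTF16-encoded data:
        System.out.println(latin1Equals(x, z) == utf16Equals(x, z)); // true
    }
}
```

Note that for a plain equality check the two loops give the same answer; the difference is only in the unit each one loads and vectorizes over.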

But you are right, we could always use the Latin1 intrinsic; in theory, though, it should be slower.
At least that is the case on x86_64; it might be different on ARM or other platforms.
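As background for why the coder check in Claes' quoted snippet is safe: under compact strings, the coder is derived from the content, so two equal strings always end up with the same coder. A hypothetical re-derivation of that rule (the names `coderOf`, `LATIN1`, and `UTF16` here are mine; the real field is the private `String.coder`, and this assumes compact strings are enabled):

```java
public class CoderSketch {
    // Hypothetical mirror of the compact-strings rule: a string is stored
    // LATIN1 iff every char fits in one byte, otherwise UTF16.
    static final byte LATIN1 = 0;
    static final byte UTF16 = 1;

    static byte coderOf(String s) {
        for (int i = 0; i < s.length(); i++) {
            if (s.charAt(i) > 0xFF) {
                return UTF16;
            }
        }
        return LATIN1;
    }

    public static void main(String[] args) {
        // Equal contents imply equal coders, so equals() can return false
        // immediately on a coder mismatch and compare raw bytes otherwise.
        System.out.println(coderOf("hello"));      // 0 (LATIN1)
        System.out.println(coderOf("h\u20ACllo")); // 1 (UTF16, contains a char > 0xFF)
        System.out.println(coderOf("abc") == coderOf(new String("abc"))); // true
    }
}
```

The consequence is that a coder mismatch can never hide two equal strings, which is what makes the early return on matching coders correct.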


On 07.12.18 20:35, Claes Redestad wrote:
> This is an interesting point: can anyone explain why there are two distinct methods for LATIN1 and
> UTF16 equality, with corresponding
> intrinsics? Aleksey? Tobias?
> Testing with Arrays.equals instead, the performance profile is
> quite different due to vectorization (often better, sometimes worse),
> but this is performance-neutral for a variety of latin1 and utf16 inputs:
> if (coder() == aString.coder())
>     return StringLatin1.equals(value, aString.value);
> Is there some UTF16 input where StringLatin1.equals != StringUTF16.equals that forbids the above?
> Performance-wise it seems neutral, and all tests seem to pass with the above (obviously need to run
> more tests...).
> Thanks!
> /Claes
> On 2018-12-07 04:53, James Laskey wrote:
>> Or simply:
>> if (anObject instanceof String) {
>>     String aString = (String)anObject;
>>     if (coder() == aString.coder())
>>         return Arrays.equals(value, aString.value);
>> }
>> Sent from my iPhone
>> Sent from my iPhone
