Request for reviews (S): 6916062: assert(_inserts <= _insert_limit, "hash table overflow") in NodeHash::hash_insert
tom.rodriguez at oracle.com
Thu Sep 30 12:19:22 PDT 2010
1024 * unique seems like a high enough limit, but did you check CTW to see if it ever triggers? Also, would we handle a bailout here gracefully? It would be good to set the limit lower and make sure we handle it properly. I'd be curious to see some statistics on how many iterations we normally perform in that loop, but maybe that's a question for another day.
On Sep 30, 2010, at 12:00 PM, Vladimir Kozlov wrote:
> I updated the changes with a bailout from the IGVN optimization loop:
> On 9/29/10 12:14 PM, Vladimir Kozlov wrote:
>> Thank you, Tom
>> On 9/29/10 12:00 PM, Tom Rodriguez wrote:
>>> That seems fine for a backport. Maybe we should consider some product-level code to bail out if we do too many rounds
>>> of IGVN?
>> Yes, it is a good idea to bail out. I will add it.
>>> On Sep 28, 2010, at 7:08 PM, Vladimir Kozlov wrote:
>>>> Fixed 6916062: assert(_inserts <= _insert_limit, "hash table overflow") in NodeHash::hash_insert
>>>> The changes for 6711117 put a memory node back on the IGVN
>>>> worklist if its address type does not match the cached type,
>>>> in the hope that the address nodes will be processed and
>>>> the type updated. But they do not check that there
>>>> are nodes on the worklist which would trigger the address
>>>> processing. As a result the memory node is pulled from
>>>> and pushed onto the IGVN worklist indefinitely.
>>>> On each iteration the NodeHash::_inserts value is incremented,
>>>> forcing the hash table to grow until the integer value NodeHash::_max
>>>> overflows (becomes 0), which triggers the bug's assert.
>>>> I decided to go with the simplest fix, which is easy to backport,
>>>> and to leave fixing the address type inconsistency for later
>>>> (I have bug 6715629 for it).
>>>> I also added asserts to catch such situations.
>>>> Tested with failing case, CTW, JPRT.