RFR(S): 8204857: ConcurrentHashTable: Fix parallel processing

Robbin Ehn robbin.ehn at oracle.com
Fri Jun 15 21:03:40 UTC 2018

On 2018-06-15 21:46, Gerard Ziemski wrote:
>> On Jun 15, 2018, at 1:27 PM, Robbin Ehn <robbin.ehn at oracle.com> wrote:
>>> #2 I don’t see how the parallel task splits the work to cooperate - is it really useful to have bulk_delete with threads that have the same evaluators? Wouldn’t the threads be competing with each other, rather than dividing the task and cooperating?
>> They work on different ranges in the backing bucket array.
>> The first thread to claim a piece gets 4096 (2^12) buckets and loops over 0 to 4095. The next thread claims the second piece of the same size and loops over 4096->8191, and so on. Each thread claims a new range when it finishes its current one, until the entire table is done.
> Where/how is this partitioning done?

In concurrentHashTableTasks.inline.hpp:

     if (!this->claim(&start, &stop)) {
       return false;
     }
     BucketsOperation::_cht->do_bulk_delete_locked_for(thread, start, stop,
                                                       eval_f, del_f);
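
To illustrate, the claiming scheme described above can be sketched roughly like this: an atomic counter hands out the next unclaimed chunk of 4096 buckets, so threads partition the table instead of competing over the same buckets. This is a simplified standalone sketch, not HotSpot's actual implementation; the names (BucketClaimer, kChunk) are illustrative, and details such as the stride size and interaction with resizing are omitted.

```cpp
#include <atomic>
#include <cstddef>

// Hypothetical sketch of the range-claiming scheme: each thread
// atomically claims the next chunk of 4096 buckets until the whole
// table is covered. Names are illustrative, not HotSpot's.
struct BucketClaimer {
  std::atomic<std::size_t> _next{0};        // index of the next unclaimed chunk
  std::size_t _size;                        // total number of buckets
  static const std::size_t kChunk = 4096;   // 2^12 buckets per claim

  explicit BucketClaimer(std::size_t size) : _size(size) {}

  // Returns false when the table is exhausted; otherwise fills
  // [*start, *stop) with the claimed half-open bucket range.
  bool claim(std::size_t* start, std::size_t* stop) {
    std::size_t chunk = _next.fetch_add(1);
    *start = chunk * kChunk;
    if (*start >= _size) {
      return false;                         // nothing left to claim
    }
    *stop = *start + kChunk;
    if (*stop > _size) {
      *stop = _size;                        // clamp the final partial chunk
    }
    return true;
  }
};
```

Each worker then loops: claim a range, process it with do_bulk_delete_locked_for, repeat until claim returns false.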


> cheers
