RFR: 8267703: runtime/cds/appcds/cacheObject/HeapFragmentationTest.java crashed with OutOfMemory [v2]

Thomas Schatzl tschatzl at openjdk.java.net
Wed Jun 2 07:57:35 UTC 2021

On Tue, 1 Jun 2021 15:14:12 GMT, Stefan Johansson <sjohanss at openjdk.org> wrote:

>> src/hotspot/share/gc/g1/g1FullCollector.cpp line 98:
>>> 96: 
>>> 97:   // Finally consider the amount of used regions.
>>> 98:   uint used_worker_limit = MAX2(heap->num_used_regions(), 1u);
>> Should we actually end up here with zero used regions? It does not hurt, but it seems superfluous.
> No we should not, but I went with the super-safe option and added the `MAX2(...)`. If you like, I can remove it :)

I would prefer to either remove it or turn it into an assert. It confused at least me, and more than it seemed to be worth.
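For illustration, the clamp-vs-assert choice could be sketched standalone like this. This is a hypothetical sketch, not the actual `g1FullCollector.cpp` code: the function name, parameters, and the way the three limits are combined are assumptions made for the example.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Hypothetical standalone sketch of the worker-count computation discussed
// above. Names and the exact combination of limits are illustrative only.
static uint32_t calc_active_workers(uint32_t num_used_regions,
                                    uint32_t heap_waste_worker_limit,
                                    uint32_t active_worker_limit,
                                    uint32_t max_worker_count) {
  // A full collection is only triggered when something is on the heap,
  // so assert instead of clamping with MAX2(num_used_regions, 1u).
  assert(num_used_regions > 0 && "full gc with an empty heap");
  uint32_t used_worker_limit = num_used_regions;

  // Take the most restrictive of the limits, capped by the pool size.
  uint32_t worker_count = std::min({heap_waste_worker_limit,
                                    active_worker_limit,
                                    used_worker_limit,
                                    max_worker_count});
  return std::max<uint32_t>(worker_count, 1u);
}
```

With the assert in place, the final `std::max` clamp is belt-and-braces only; the point of the review comment is that one of the two guards is enough.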

>> src/hotspot/share/gc/g1/g1FullCollector.cpp line 106:
>>> 104:                       worker_count, heap_waste_worker_limit, active_worker_limit, used_worker_limit);
>>> 105:   worker_count = heap->workers()->update_active_workers(worker_count);
>>> 106:   log_info(gc, task)("Using %u workers of %u for full compaction", worker_count, max_worker_count);
>> That's pre-existing, but this will change the number of active workers for the rest of the garbage collection. That made some sense previously, as `G1FullCollector::calc_active_workers()` typically was very aggressive, but now it may limit other phases a bit, particularly marking, which distributes work on a per-reference basis.
>> Overall it might not make much difference though, as we are talking about the case of a very lightly occupied heap.
>> I.e. some rough per-full-gc-phase worker sizing might be better, and could probably be derived easily too.
> This was one of the reasons I went with using "just used regions" and skipping the assumption that each worker will handle a fixed set of regions. In most cases, looking at used regions will not limit the workers much, and when it does, we don't have much work to do anyway. I've done some benchmarking and have not seen any significant regressions with this patch. The biggest problem was not using enough workers for the bitmap work.
> Calculating workers per phase might be a good improvement to consider, but that would require some more refactoring.

Okay. If you think it would take too long, please file an issue for per-phase thread sizing in the parallel full gc then (if there isn't one already).
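The per-phase idea mentioned above could be sketched roughly as follows: instead of fixing one worker count for the whole full gc, each phase derives its own count from the amount of work it actually has. Everything here is hypothetical (the struct, the names, the per-worker granularity); it is not HotSpot code.

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical description of one phase's workload: how many work items
// there are, and roughly how many items keep one worker busy.
struct PhaseWork {
  uint64_t items;
  uint64_t items_per_worker;
};

// Derive a worker count for a single phase from its own workload,
// clamped to [1, max_workers]. Illustrative only.
static uint32_t workers_for_phase(const PhaseWork& w, uint32_t max_workers) {
  // Ceiling division: enough workers so each gets at most
  // items_per_worker items.
  uint64_t wanted = (w.items + w.items_per_worker - 1) / w.items_per_worker;
  return static_cast<uint32_t>(std::clamp<uint64_t>(wanted, 1, max_workers));
}
```

A marking phase that distributes per-reference would then size on reference counts, while a compaction phase would size on regions, which is the distinction raised in the review.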


PR: https://git.openjdk.java.net/jdk/pull/4225

More information about the hotspot-gc-dev mailing list