RFR: 8062063: Usage of UseHugeTLBFS, UseLargePagesInMetaspace and huge SurvivorAlignmentInBytes cause crashes in CMBitMapClosure::do_bit

Kim Barrett kim.barrett at oracle.com
Wed Jan 7 18:12:34 UTC 2015

On Jan 7, 2015, at 10:14 AM, Stefan Johansson <stefan.johansson at oracle.com> wrote:
> Hi,
> Please review this fix for:
> https://bugs.openjdk.java.net/browse/JDK-8062063
> Webrev:
> http://cr.openjdk.java.net/~sjohanss/8062063/hotspot.00
> Summary:
> When using large pages on Linux we never actually uncommit memory; we just mark it as currently not used. When later re-committing those pages we currently only mark them as in use again. This works fine until someone expects to get cleared memory back from a commit, as is expected for the memory backing certain bitmaps. This fix makes sure that we always clear large pages when they are re-committed.


Generally looks good.  I have one question:

 137     for (uintptr_t page_index = start; page_index < start + size_in_pages; page_index++) {
 138       if (_needs_clear_on_commit.at(page_index)) {
 139         Copy::zero_to_bytes((HeapWord*)page_start(page_index), _page_size);
 140         _needs_clear_on_commit.clear_bit(page_index);
 141       }
 142     }

I'm not sure how large the size_in_pages argument for commit can be or
tends to be, nor how often or in what circumstances the commit
operation gets called.  With those caveats, would it be worth making
the scan for pages that need to be cleared, and the clearing itself,
chunkier by using BitMap::get_next_[zero,one]_offset to search for
ranges that need to be cleared?  It makes the code a little more
complicated than the present bit-at-a-time iteration, but is probably
faster if there are long runs in the bitmap, which seems plausible but
should be tested.  It might not be worth doing if performance isn't
important here.
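For illustration, the chunkier scan being suggested would look roughly like the sketch below. It finds runs of consecutive set bits and zeroes each run's backing memory with one call, instead of clearing page by page. This is a standalone approximation, not HotSpot code: std::vector<bool> stands in for BitMap (its two inner scan loops play the role of get_next_one_offset / get_next_zero_offset), a plain byte buffer stands in for the committed region, and the function name is made up for the example.

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>
#include <vector>

// Hypothetical sketch of run-based clearing on commit.  Instead of
// testing and clearing one page bit at a time, locate each run of
// pages marked as needing a clear and zero the whole run at once.
static void clear_dirty_runs(std::vector<bool>& needs_clear,
                             unsigned char* base,
                             size_t page_size,
                             size_t start,
                             size_t size_in_pages) {
  size_t end = start + size_in_pages;
  size_t idx = start;
  while (idx < end) {
    // Advance to the next page needing a clear (get_next_one_offset).
    while (idx < end && !needs_clear[idx]) idx++;
    if (idx >= end) break;
    // Extend to the end of the dirty run (get_next_zero_offset).
    size_t run_end = idx;
    while (run_end < end && needs_clear[run_end]) run_end++;
    // Zero the entire run with one call, then clear its bits.
    memset(base + idx * page_size, 0, (run_end - idx) * page_size);
    for (size_t i = idx; i < run_end; i++) {
      needs_clear[i] = false;
    }
    idx = run_end;
  }
}
```

The payoff depends on the bitmap's shape: with long dirty runs this does a few large memsets rather than many page-sized ones, while a fully alternating bitmap degenerates to the same per-page work plus the scan overhead.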

More information about the hotspot-gc-dev mailing list