RFR: 8251158: Implementation of JEP 387: Elastic Metaspace

Thomas Stüfe thomas.stuefe at gmail.com
Mon Aug 24 12:26:02 UTC 2020


On Mon, Aug 24, 2020 at 12:47 PM Albert Yang <albert.m.yang at oracle.com>
wrote:

> Hi Thomas,
>
>  > Enlarging the chunk has nothing to do with reservation.
>
> I didn't mean reserving the virtual space; instead, the chunk is
> "reserved" and can't be
> used for later allocation. Using `MetaspaceArena::allocate` to illustrate
> my point.
>
> ```
>      if (current_chunk()->free_words() < raw_word_size) {
>        // step 1: "reserve" the chunk; assume the current chunk is 64K,
>        // and after enlarging, it becomes (64+64) K.
>        if (!attempt_enlarge_current_chunk(raw_word_size)) {
>          current_chunk_too_small = true;
>        } else {
>          DEBUG_ONLY(InternalStats::inc_num_chunks_enlarged();)
>          UL(debug, "enlarged chunk.");
>        }
>      }
>
>      // Commit the chunk far enough to hold the requested word size. If
>      // that fails, we hit a limit (either GC threshold or MaxMetaspaceSize).
>      // In that case retire the chunk.
>      if (!current_chunk_too_small) {
>        // step 2: commit to physical memory; if this fails, the newly
>        // "reserved" 64K is leaked, right?
>        if (!current_chunk()->ensure_committed_additional(raw_word_size)) {
>          UL2(info, "commit failure (requested size: " SIZE_FORMAT ")",
>              raw_word_size);
>          commit_failure = true;
>        }
>      }
> ```
>
> Step 2, if it fails, needs to undo step 1. That's what I meant by steps 1
> and 2 being tightly coupled.
>
>
OK I see now what you mean. There are two kinds of "reserved": "reserved
and publicly available"; and "reserved but earmarked for one loader, denied
to other loaders". The latter comes in the form of uncommitted memory in
the current or retired chunks of a live arena.

The problem is more generic than the enlarge-chunk path. If an arena is
unable to further commit the current chunk, it asks the ChunkManager for a
new, suitably committed chunk. If (and only if) it gets one, it will switch
to that new chunk from then on. Before changing the current chunk to the new
one, the arena will salvage the old chunk. Salvaging means extracting the
last words of remaining committed space from it and storing those words in
the free block list. The remaining uncommitted space is ignored right now,
"wasted" in the sense you describe.
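
To make the salvage step concrete, here is a minimal sketch. The names
(Chunk, FreeBlockList, salvage) are invented for illustration and only model
the idea; they are not the actual HotSpot classes:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical simplified model: a chunk tracks used vs. committed words;
// before an arena retires a chunk it salvages the leftover committed words
// into a free block list so they remain usable by this arena.
struct Chunk {
  size_t committed_words;
  size_t used_words;
  size_t free_committed_words() const { return committed_words - used_words; }
};

struct FreeBlockList {
  std::vector<size_t> blocks;   // sizes of salvaged blocks, in words
  void add_block(size_t words) { if (words > 0) blocks.push_back(words); }
};

// Salvage: hand the committed-but-unused tail of the old chunk to the
// free block list. Any *uncommitted* space in the chunk is left alone;
// that is exactly the "wasted" part discussed above.
void salvage(Chunk& old_chunk, FreeBlockList& fbl) {
  fbl.add_block(old_chunk.free_committed_words());
  old_chunk.used_words = old_chunk.committed_words;  // chunk is now retired
}
```

In this toy model, salvaging a chunk with 8192 committed and 1000 used words
moves a single 7192-word block into the free block list.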

While this is true, I do not think it is a real problem. It can only happen
if we hit MaxMetaspaceSize (not the GC threshold: there, the caller will
increase the commit quota, and follow-up commits on the current chunk will
succeed). Hitting MaxMetaspaceSize is very rare; by default the option is
unlimited. We may conceivably run into a pathological situation where we
"hover" at that limit, hitting it again and again, with the subsequent GCs
releasing some loaders, enough to avoid OOM but never quite enough to get
away from the limit for good... I don't know. Seems far-fetched.

The simplest and most elegant solution would be to split off the unused
portions of a salvaged chunk into separate chunks (e.g. when you salvage a
64K chunk of which you used 13K, you could split it into 16K+16K+32K,
retain the first 16K chunk as the retired chunk, and return the second and
third chunks to the ChunkManager). I thought about a similar solution at
other points too, e.g. periodically "shaving off" remaining space from the
in-use chunks of live arenas.
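
The split could be sketched like this. This is a hypothetical helper, not
patch code: sizes are in KB for readability, the function name is invented,
and the real chunk geometry and minimum chunk size would differ:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Buddy-style split of a salvaged power-of-two chunk: repeatedly halve the
// chunk, keeping the lower half (which contains the used portion) and
// returning the upper half, until halving further would cut into the used
// part. Returns the sizes of the split-off chunks, largest first; the
// retained (retired) chunk size is written to retained_kb.
std::vector<size_t> split_salvaged_chunk(size_t chunk_kb, size_t used_kb,
                                         size_t& retained_kb) {
  std::vector<size_t> returned;            // chunks given back to the manager
  retained_kb = chunk_kb;
  while (retained_kb / 2 >= used_kb && retained_kb > 1) {
    retained_kb /= 2;                      // keep the lower buddy
    returned.push_back(retained_kb);       // return the upper buddy
  }
  return returned;
}
```

For the example above (a 64K chunk with 13K used), this yields a retained
16K chunk plus a 32K and a 16K chunk returned to the ChunkManager.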

But there is no pressing need, and I want to keep the patch simple. Let's
keep this in mind for a future enhancement.



> Maybe leaking 64K of virtual address space is not that significant. Better
> to explain it in the comments so that future readers know this problem is
> known.
>
>  > When putting out a new webrev, it will come with an updated version of
> the review
> guide. The master data for both are kept at github:
> https://github.com/tstuefe/jep387/tree/master/review in case
> openjdk.java.net is not
> reachable.
>
> Thank you very much; the new guide is even better than the version I
> originally followed.
>
> --
> /Albert
>

Thanks, Thomas


More information about the hotspot-gc-dev mailing list