Make Metaspace more elastic?
thomas.stuefe at gmail.com
Tue May 29 10:35:02 UTC 2018
The "Elastic JVM" discussion on hotspot-gc triggered an old idea which
I have been throwing around in my head for a while, but before
investing any time I'd like to check if it is actually useful.
How useful would it be to make Metaspace more elastic?
The Metaspace implementation attempts to recover from spikes, but is not
always able to do so. Spikes in metadata allocation may therefore lead to a
permanently higher memory footprint.
The way metaspace allocation currently works is that we have a linked
list of mmaped nodes. Classloader metadata lives in those nodes, but
it is not a 1:1 thing. One node may host metadata for multiple class
loaders, and metadata for one class loader may spread over multiple
nodes. A node is removed (its memory returned to the OS) only if it is
completely unused, i.e. all class loaders whose metadata occupies this
node have been unloaded.
From that it follows that one node may be kept alive even though most
of its data belongs to unloaded class loaders.
Even worse, this process does not work at all for the compressed class
space (used if -XX:+UseCompressedClassPointers is set, which is the
default). The compressed class space is a special part of the metaspace
which needs to be one contiguous mapping and in the current
implementation cannot shrink. Basically, that space is never returned
to the OS, regardless of how many loaders one unloads. Instead, it will
always belong to the process; it will be reused for future
allocations, if there are any.
I think this would be possible to solve with reasonable effort. Since we
have chunk merging (https://bugs.openjdk.java.net/browse/JDK-8198423),
metaspace chunks are allocated at chunk-sized boundaries, which plays
nicely with common page sizes. In theory one could uncommit chunks
when they are returned to the freelists. Lots of buts here, but I
think the general idea is sound.
My question is, is that actually a problem worth solving?
Do we have metaspace consumption spikes we want to (better) recover from?
(posting this on gc-dev, hope it is a good fit.)