Elastic JVM improvements [Was: Re: OpenJDK G1 Patch]
thomas.schatzl at oracle.com
Mon May 28 15:27:59 UTC 2018
On Fri, 2018-05-25 at 17:56 +0200, Ruslan Synytsky wrote:
> Dear Thomas, thank you for supporting this initiative, your efforts
> and time. Please review my comments inline.
> On 24 May 2018 at 12:03, Thomas Schatzl <thomas.schatzl at oracle.com>
> > Hi Rodrigo, Ruslan,
> > first, sorry for the late reply. I have been travelling, so a bit
> > short on time on thinking about and looking through this.
> > Thanks for your contribution. I think these ideas are a very
> > interesting and generally useful additions to the collector and/or
> > community.
> A long story short :). The first thoughts about elasticity of the JVM
> came to our team in January 2011. I reached out to Aleksey Shipilëv to
> discuss the idea, looking for advice on how to make it happen.
> We found no "out-of-the-box" solution at that time. Unfortunately, a
> Full GC is still the only way to tell the JVM to give back unused but
> committed heap. Parallel GC, which was the default, is not friendly
> with RAM consumption at all. G1 was not production ready. So we had a
> challenging time. A little later we figured out that it is possible
> to achieve the required behavior with a javaagent that monitors RAM
> usage and forces a GC when the application is not busy. We ran an
> experiment over several years analyzing how customers react to
> vertical scaling. The result was impressive, better than we expected.
> We never had complaints; on the contrary, many customers gave positive
> feedback because they simply saved money. Some customers did not even
> believe that resizing was possible! Many people still have that
> perception of "greedy" Java that never gives RAM back to the OS.
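For illustration, the javaagent approach Ruslan describes can be sketched in plain Java. This is not Jelastic's actual agent; the class name, threshold, and polling interval below are all hypothetical:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

// Hypothetical sketch of a javaagent that monitors heap usage and
// forces a GC when a large fraction of committed memory is unused.
// The full GC lets the collector shrink the heap and return memory.
public class IdleGcAgent {
    // Pure helper: force a GC when less than `minUsedRatio`
    // of the committed heap is actually in use.
    static boolean shouldForceGc(long used, long committed, double minUsedRatio) {
        return committed > 0 && (double) used / committed < minUsedRatio;
    }

    public static void premain(String agentArgs) {
        Thread t = new Thread(() -> {
            while (true) {
                MemoryUsage heap =
                    ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
                if (shouldForceGc(heap.getUsed(), heap.getCommitted(), 0.3)) {
                    System.gc(); // full GC; the heap can shrink afterwards
                }
                try { Thread.sleep(60_000); } catch (InterruptedException e) { return; }
            }
        });
        t.setDaemon(true);
        t.start();
    }
}
```

The agent would be attached with `-javaagent:idle-gc-agent.jar`; real-world versions would also consult the system load, as discussed below.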
Releasing memory in other places is "only" an implementation issue;
there are some really old requests for enhancement (RFEs) out, see e.g.
https://bugs.openjdk.java.net/browse/JDK-6490394 . Note that some of
the comments on that one, in particular the requirement on JDK-8071913,
are outdated by now.
E.g. ideal situations for this kind of freeing could be the last mixed
gc, or the Remark pause.
> Nowadays this topic is even more relevant as Java and containers are
> a perfect couple (regardless of cgroups issues). More use cases and
> implementations are coming including new garbage collectors. Also,
> OpenJ9 provides -XX:+IdleTuningCompactOnIdle and -Xsoftmx already. We
> have not tested it yet, but the idea is clear, it looks similar from
> the description.
Thanks for the background. Imho we should not diverge too much from
their naming, but I am not insisting on anything in particular.
Not sure if there is a problem with just taking theirs (at least the
-XX options).
> > - [creating a JEP]
> > I can guide you through this, but in the beginning it might be
> > useful to just fill out the description in form of email.
I will follow up with one of these as an example, given what this
discussion has yielded so far.
> > Following are some initial questions and thoughts to the proposals.
> > They may be a bit confusing or somewhat unrelated though, please
> > bear with me :)
> > On Sat, 2018-05-19 at 19:01 +0100, Rodrigo Bruno wrote:
> > > Dear OpenJDK community,
> > >
> > > Jelastic and INESC-ID have developed a patch for OpenJDK that
> > > improves the elasticity of the JVM under variable loads. The
> > > detailed patch description follows:
> > >
> > > Elastic JVM Patch Description
> > >
> > > Elasticity is a key feature of cloud computing. It enables
> > > resources to be scaled according to application workloads in a
> > > timely manner. Now
> > > we live in the container era. Containers can be scaled vertically
> > > on the fly without downtime. This provides much better elasticity
> > > and density compared to VMs. However, JVM-based applications are
> > > not fully container-ready. The first issue is that the HotSpot
> > > JVM doesn’t release unused committed heap memory automatically,
> > > and, therefore, the JVM can’t scale down without an explicit full
> > > GC call.
> > > Secondly, it is not possible to increase the size of the JVM heap
> > > at runtime. If your production application has an unpredictable
> > > traffic spike, the only way to increase the heap size is to
> > > restart the JVM with a new Xmx parameter.
> > >
> > > To solve these 2 major issues and make JVM more container
> > > friendly, we have implemented the following improvements: i)
> > > timely reduce the amount of unused committed memory; and ii)
> > > dynamically limit how large the used and committed memory can
> > > grow. The patch is implemented for the Garbage First collector.
> > >
> > >
> > > Timely Reducing Unused Committed Memory
> > >
> > > To accomplish this goal, the HotSpot JVM was modified to
> > > periodically trigger a full collection. Two full collections
> > > should not be separated by more than GCFrequency seconds, a
> > > dynamically user-defined variable. The GCFrequency value is
> > > ignored (i.e., no full collection is triggered) if:
> > >
> > > GCFrequency is zero or below;
> > A time span seems to be different from a "frequency"; this seems to
> > be more of an interval (like CMSTriggerInterval). Also I do not
> > completely follow why this interval is the minimum time between
> > two *full* collections. I would expect that any collection (or GC-
> > related pause) would reset that timer.
> Can we end up in a situation where small collections happen more
> often than MinTimeBetweenGCs? Like this, for example:
Okay. So there are the distinct problems of how to detect this "idle"
situation and what to do when this is triggered.
> Then the memory shrinking will not be triggered, as I understand,
> because the small collections in the blue area do not help; we need a
> way to reduce the orange committed RAM. So Full GC only?
The memory shrinking is not triggered because it is actually never
tried; one could add a call to shrink at the end of every collection if
desired. The advantage of a full collection is that it is also
maximally compacting, as you know :)
> > > the average load on the host system is above MaxLoadGC. The
> > > MaxLoadGC is a dynamically user-defined variable. This check is
> > > ignored if MaxLoadGC is zero or below;
> > What is the scale for the "load", e.g. ranging from 0.0 to 1.0, and
> > 1.0 is "full load"? Depending on that this condition makes sense.
> The logic uses os::loadavg and can be found at the link
Some good information to have.
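As a side note, the closest Java-level analogue of HotSpot's `os::loadavg` is `OperatingSystemMXBean.getSystemLoadAverage()`. The sketch below models the MaxLoadGC check from the patch description; the class and method names are illustrative, not part of the patch:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

// Illustrative model of the MaxLoadGC condition: the periodic GC is
// skipped when the host's 1-minute load average exceeds MaxLoadGC,
// and the check is ignored when MaxLoadGC is zero or below.
public class LoadCheck {
    static double systemLoad() {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        // Returns the 1-minute load average, or a negative value
        // if it is unavailable on this platform.
        return os.getSystemLoadAverage();
    }

    static boolean loadAllowsGc(double load, double maxLoadGc) {
        if (maxLoadGc <= 0) return true; // check disabled
        return load <= maxLoadGc;        // skip the GC on a busy host
    }
}
```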
> > The paper does not mention this.
> > > the committed memory is above MinCommitted bytes. MinCommitted
> > > is a dynamically user-defined variable. This check is ignored if
> > > MinCommitted is zero or below;
> > While this is a different concern, have you ever considered using
> > MinHeapSize or InitialHeapSize here?
> Good point. I think we can replace MinCommitted with Xms. We added it
> just in case Xms is set to a low number (for example 32m), memory
> usage grows significantly over time, and you do not want to bring it
> back down as low as Xms, but rather keep it at a specific higher
> level (for example 1g). I believe this case is very rare; we can
> ignore it.
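Taken together, the three conditions above (GCFrequency, MaxLoadGC, MinCommitted) amount to a single trigger predicate. The following is an illustrative standalone rewrite of that logic, not the actual HotSpot code:

```java
// Sketch of the patch's periodic full-GC trigger conditions.
// GCFrequency, MaxLoadGC and MinCommitted are the dynamically
// user-defined variables from the description; each check is
// disabled when its variable is zero or below.
public class PeriodicGcPolicy {
    static boolean shouldTriggerFullGc(long secondsSinceLastGc,
                                       long gcFrequency,
                                       double loadAvg, double maxLoadGc,
                                       long committed, long minCommitted) {
        if (gcFrequency <= 0) return false;                       // feature disabled
        if (secondsSinceLastGc < gcFrequency) return false;       // too soon
        if (maxLoadGc > 0 && loadAvg > maxLoadGc) return false;   // host busy
        if (minCommitted > 0 && committed <= minCommitted) return false; // heap already small
        return true;
    }
}
```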
> > Having read Kirk P.'s concern about the mechanism to actually
> > uncommit memory being too simplistic, I kind of agree. The
> > alternative, to trigger a concurrent cycle plus multiple mixed
> > collections (plus uncommitting the heap at the end of that mixed
> > phase), is a bit harder to implement. I would certainly help you
> > with that. :)
> I do believe the code / implementation can be improved. No religion
> about Full GC :). I would prefer to avoid it too if we can come up
> with another solution at reasonable effort.
We can start with Full GC.
The options for what to do when the request to give back memory comes
in seem to be the following. This applies to both features, although in
the second case there is the problem that the effect should be
"instantaneous":
1) do not do anything (already using little enough memory)
2) do a young-only gc and then shrink the heap (the young gen typically
holds a lot of free space)
3) do a concurrent full gc, i.e. start marking and at Remark or Cleanup
pause shrink the heap.
4) do a concurrent full gc, i.e. start marking and at the end of the
mixed gc (compaction) phase, shrink.
5) do a full gc and shrink the heap
From an effort POV, 1), 2), 3) and 5) are very similar: just start the
correct phase, which should translate to one method call or another.
With 3) you need to store somewhere that you want to shrink the heap at
the end of the concurrent marking (at Cleanup/Remark), and possibly
tweak a few settings to make sure that "most" space is cleared (soft
references, remembered sets etc).
4) is the only one that requires a bit of additional thought and
effort: assuming that the application is "really" idle, there needs to
be a way to force potentially multiple mixed collections; i.e. at the
moment the trigger to do a young collection (which is what mixed is) is
memory exhaustion in eden. This might not be sufficient to do that in
some reasonable time frame.
From a memory reclamation POV, obviously the order of more memory
potentially being freed is in order 1)-5); 4) should be very close to
5) in an "idle" system.
From an intrusiveness POV, 1) is the least intrusive :); 2)-4) I would
rate as "little" overhead, with 5) the worst.
> > Also assuming that at that point the VM is idle, doing a full gc
> > would not hurt the application.
> > Also there is Michal's use case of periodically doing global
> > reference processing to clean out weak references regularly. This
> > seems to be a different use case, but would seem easy to do given
> > that this change probably implements something like
> > CMSTriggerInterval for G1.
> > Maybe there is some way to marry these two issues somehow.
> Does CMSTriggerInterval influence the committed memory shrinking?
I only mentioned CMSTriggerInterval as an example; we do not want
switches in G1 that start with "CMS", particularly because CMS is on
its way out. I do not think CMS shrinks the heap either.
> > > -Xmx Dynamic Limit Update
> > >
> > > To dynamically limit how large the committed memory (i.e. the
> > > heap size) can grow, a new dynamically user-defined variable was
> > > introduced: CurrentMaxHeapSize. This variable (defined in bytes)
> > > limits how large the heap can be expanded. It can be set at
> > > launch time and changed at runtime. Regardless of when it is
> > > defined, it must always have a value equal to or below
> > > MaxHeapSize (Xmx - the launch time option that limits how large
> > > the heap can grow). Unlike
> > > MaxHeapSize, CurrentMaxHeapSize, can be dynamically changed at
> > > runtime.
> > >
> > > For example, to dynamically set 1 GB as the new Xmx limit:
> > >
> > > jinfo -flag CurrentMaxHeapSize=1g <java_pid>
> > >
> > > Setting CurrentMaxHeapSize at runtime will trigger a full
> > > collection if the desired value is below the current heap size.
> > > After finishing the full collection, a second test is done to
> > > verify if the desired
> > > value is still above the heap size (note that a full collection
> > > will try to shrink the heap as much as possible). If the value is
> > > still below the current heap size, then an error is reported to
> > > the user.
> > > Otherwise, the operation is successful.
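The update protocol described above (shrink via full GC if needed, then re-check against the new limit) can be modeled roughly as follows. The `Heap` interface here is a hypothetical stand-in for the real G1 heap, not the actual HotSpot API:

```java
// Illustrative model of the CurrentMaxHeapSize update protocol:
// if the requested limit is below the current heap size, try a full
// collection (which shrinks the heap as much as possible), then
// verify whether the heap now fits under the new limit.
public class CurrentMaxUpdate {
    interface Heap {
        long currentSize();      // committed heap size in bytes
        void fullGcAndShrink();  // full collection; shrinks as much as possible
    }

    /** Returns true if the new limit could be applied, false if an
     *  error must be reported to the user. */
    static boolean setCurrentMaxHeapSize(Heap heap, long newLimit) {
        if (newLimit < heap.currentSize()) {
            heap.fullGcAndShrink();            // try to get under the new limit
        }
        return heap.currentSize() <= newLimit; // still above -> failure
    }
}
```

This mirrors the "hard limit" semantics Ruslan confirms below: the caller gets an immediate yes/no answer it can use to continue or cancel a container resize.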
> > One alternative here could be to use a marking cycle + mixed gcs to
> > reach that new CurrentMaxHeapSize again, which again is a bit
> > more complicated to achieve. I can help you implement that if you
> > are interested.
> > In some cases you might even get away with just uncommitting empty
> > regions and doing nothing else in response to this command.
> > As Kirk mentioned, as another optimization, triggering a young gc
> > could free enough regions too.
> Ok, I pass this question to Rodrigo Bruno and he has the required
> technical knowledge on the implementation.
> > > The limit imposed by the CurrentMaxHeapSize can be disabled if
> > > the variable is unset at launch time or if it is set to zero or
> > > below at runtime.
> > >
> > > This feature is important to cope with changes in workload
> > > demands and to avoid having to restart JVMs to cope with workload
> > > changes.
> > I have only one question about this here at this time: is this
> > CurrentMaxHeapSize a new "hard" heap size (causing OOME in the
> > worst case), or could this be temporarily exceeded and any excess
> > memory given back asap?
> For now it's the new hard limit.
> > Would it be useful to have G1 adapt more slowly to that new
> > goal size?
> We may have a problem here, as CurrentMaxHeapSize will usually be
> bound to container/VM limits. If you resize the container/VM you
> need to get back a clear answer, yes or no (not possible to
> decrease), so that you can continue or cancel the resizing action.
> Adding async behavior would complicate the logic and code. I would
> prefer to keep it simple in the first implementation. We can adjust
> it later based on feedback from more use cases.
> > As you can see I am pretty interested in the changes... :)
> Good! Just to clarify one more time: at Jelastic we are fine
> already. The ultimate goal is to evangelize the elasticity of the
> JVM. The cloud world is more dynamic now than it was in the past.
> > So overall, if you agree, I will open two JEPs in our bug tracker
> > and we can start discussing and filling out the details.
> Yep, let's move on! Thank you!
I will send out the JEP template with some details partially filled in
as a follow-up. It would be nice if you could take over improving it a
little before I (ideally) copy&paste it into the JEP template.
You are the ones with the most insight into this problem.
More information about the hotspot-gc-dev mailing list