OpenJDK G1 Patch

Ruslan Synytsky synytskyy at jelastic.com
Mon May 21 09:15:22 UTC 2018


Dear Kirk, thank you for the feedback. Please see inline.

On Mon, May 21, 2018 at 07:45 Kirk Pepperdine <kirk at kodewerk.com> wrote:

> Hi Rodrigo,
>
> Interesting idea. IMHO, this solution is too simplistic as it focuses on
> the needs of providers at the detriment of the goals of the consumers.
>
I’m very concerned about this statement. Which part of the patch description
gives you the feeling that this work has been done in favor of providers?
Believe me, based on my experience of working with hosting providers
worldwide, cloud vendors are the least interested in resource usage
optimization, because the more resources customers use, the more money they
pay. So, please give us a hint on how the description can be improved.

> Full collections in normal G1 workloads tend to be very pause expensive,
> and one of the goals when tuning the garbage collector is to make sure you
> never see a Full collection. I would suggest that a better, but more
> complex, system would be to work on adjusting how regions are selected for
> evacuation, somehow favoring those regions whose memory may be returned to
> the OS.
>
> That said, it is my current opinion that ZGC or Shenandoah will replace G1
> given that they promise better pause time characteristics. Once these
> collectors do become more popular you will end up with the same situation.
> Again, triggering a full collection to solve your problem may not be in the
> best interest of applications owners.
>
We tested Shenandoah and it looks very promising; it is a good candidate
for the default GC. A high-level overview can be found at
https://dzone.com/articles/choosing-the-right-gc

However, as I understand it, Shenandoah and ZGC are still in an active
development phase, so end users can’t fully rely on them yet. At the same
time, I personally do not see any problem with incremental improvements to
existing technologies. For example, a related improvement for CMS,
-XX:-ShrinkHeapInSteps, was incorporated in the past:
https://bugs.openjdk.java.net/browse/JDK-8146436.

While high performance is one of the main advantages of the JVM, the real
world is much more diverse: a huge number of running JVMs are idle or
underutilize their allocated resources. In any case, we do not suggest
replacing one solution with another; both high performance and high
efficiency can coexist. The feature should be optional and enabled at the
end user’s discretion.

Regards


> Kind regards,
> Kirk Pepperdine
>
> On May 20, 2018, at 3:01 AM, Rodrigo Bruno <rbruno at gsd.inesc-id.pt> wrote:
>
> Dear OpenJDK community,
>
> Jelastic <https://jelastic.com/> and INESC-ID <http://www.inesc-id.pt/> have
> developed a patch for OpenJDK that improves the elasticity of the JVM under
> variable loads. A detailed description of the patch can be found below. We
> would like to share this patch with the community and push it upstream. We
> believe this work will help the Java community make the JVM even better and
> improve memory resource usage (saving money) in modern cloud environments.
> A more complete description can be found in the paper
> <http://www.gsd.inesc-id.pt/~rbruno/publications/rbruno-ismm18.pdf> that
> will be presented at ISMM 2018.
>
> Elastic JVM Patch Description
>
> Elasticity is a key feature of cloud computing. It enables resources to be
> scaled in a timely manner according to application workload. We now live in
> the container era. Containers can be scaled vertically on the fly without
> downtime, which provides much better elasticity and density compared to
> VMs. However, JVM-based applications are not yet fully container-ready. The
> first issue is that the HotSpot JVM doesn’t release unused committed heap
> memory automatically, and therefore the JVM can’t scale down without an
> explicit call to a full GC. Secondly, it is not possible to increase the
> size of the JVM heap at runtime. If your production application hits an
> unpredictable traffic spike, the only way to increase the heap size is to
> restart the JVM with a new -Xmx parameter.
>
> To solve these two major issues and make the JVM more container friendly,
> we have implemented the following improvements: i) timely reduction of the
> amount of unused committed memory; and ii) a dynamic limit on how large the
> used and committed memory can grow. The patch is implemented for the
> Garbage-First (G1) collector.
>
>
> Timely Reducing Unused Committed Memory
>
> To accomplish this goal, the HotSpot JVM was modified to periodically
> trigger a full collection. Two full collections should not be separated by
> more than GCFrequency seconds, where GCFrequency is a dynamically
> user-defined variable. The GCFrequency value is ignored (i.e., no full
> collection is triggered) if:
>
>
>    - GCFrequency is zero or below;
>    - the average load on the host system is above MaxLoadGC. MaxLoadGC is
>    a dynamically user-defined variable. This check is ignored if
>    MaxLoadGC is zero or below;
>    - the committed memory is below MinCommitted bytes. MinCommitted is a
>    dynamically user-defined variable. This check is ignored if
>    MinCommitted is zero or below;
>    - the difference between the current heap capacity and the current
>    heap usage is below MaxOverCommitted bytes. MaxOverCommitted is a
>    dynamically user-defined variable. This check is ignored if
>    MaxOverCommitted is zero or below.
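As an illustration only, the gating rules above can be condensed into a
single predicate. This is a hypothetical sketch, not the actual HotSpot
patch code; the struct, function name, and signature are all invented for
the example:

```cpp
// Hypothetical sketch of the periodic-GC gating rules described above.
// All names and the signature are illustrative, not the real patch code.
struct PeriodicGCConfig {
  double gc_frequency;       // seconds between periodic full GCs; <= 0 disables
  double max_load_gc;        // skip if host load average is above this; <= 0 ignored
  long   min_committed;      // skip if committed memory is below this; <= 0 ignored
  long   max_over_committed; // skip if (committed - used) is below this; <= 0 ignored
};

bool should_trigger_periodic_gc(const PeriodicGCConfig& cfg,
                                double seconds_since_last_full_gc,
                                double host_load_avg,
                                long committed_bytes,
                                long used_bytes) {
  if (cfg.gc_frequency <= 0) return false;                 // feature disabled
  if (seconds_since_last_full_gc < cfg.gc_frequency) return false;
  if (cfg.max_load_gc > 0 && host_load_avg > cfg.max_load_gc)
    return false;                                          // system too busy
  if (cfg.min_committed > 0 && committed_bytes < cfg.min_committed)
    return false;                                          // heap already small
  if (cfg.max_over_committed > 0 &&
      committed_bytes - used_bytes < cfg.max_over_committed)
    return false;                                          // little to reclaim
  return true;
}
```

Note that every check after the first is individually disabled by a
zero-or-below value, matching the list above.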
>
>
>
> The previously mentioned concepts are illustrated in the figure below:
>
> [figure: image attachment not preserved in the plain-text archive]
>
> The figure above depicts an application execution example where all the
> aforementioned variables come into play. The default value for all
> introduced variables (GCFrequency, MaxLoadGC, MaxOverCommitted, and
> MinCommitted) is zero. In other words, by default, there are no periodic
> GCs.
>
>
> With these modifications, it is possible to periodically eliminate unused
> committed memory in HotSpot. This is very important for applications that
> do not trigger collections very frequently and that might hold large
> amounts of unused committed memory. One example is web servers, whose
> caches can time out after some minutes and whose memory might be
> underutilized (after the caches time out) at night, when the number of
> requests is very low.
>
> -Xmx Dynamic Limit Update
>
> To dynamically limit how large the committed memory (i.e., the heap size)
> can grow, a new dynamically user-defined variable was introduced:
> CurrentMaxHeapSize. This variable (defined in bytes) limits how far the
> heap can be expanded. It can be set at launch time and changed at runtime.
> Regardless of when it is defined, it must always have a value equal to or
> below MaxHeapSize (-Xmx, the launch-time option that limits how large the
> heap can grow). Unlike MaxHeapSize, CurrentMaxHeapSize can be dynamically
> changed at runtime.
>
> For example, to dynamically set 1 GB as the new effective limit:
>
> jinfo -flag CurrentMaxHeapSize=1g <java_pid>
>
> Setting CurrentMaxHeapSize at runtime will trigger a full collection if
> the desired value is below the current heap size. After the full
> collection finishes, a second test verifies whether the desired value is
> now at or above the heap size (note that a full collection will try to
> shrink the heap as much as possible). If the desired value is still below
> the current heap size, an error is reported to the user. Otherwise, the
> operation is successful.
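The set-then-verify flow just described can be sketched with a toy heap
model. This is purely illustrative, under the simplifying assumption that a
full collection can shrink committed memory down to the live-data size;
none of these names come from the actual patch:

```cpp
#include <algorithm>

// Toy heap model: assume a full collection can shrink committed memory
// down to the live-data size. Purely illustrative, not HotSpot code.
struct Heap {
  long committed; // current committed heap size (bytes)
  long live;      // bytes that survive a full collection
  void full_gc() { committed = std::max(live, 0L); } // shrink as far as possible
};

// Mirrors the flow above: returns true if the new limit is accepted,
// false if an error would be reported to the user.
bool set_current_max_heap_size(Heap& heap, long desired, long max_heap_size) {
  if (desired > max_heap_size) return false;    // must stay <= -Xmx
  if (desired < heap.committed) {
    heap.full_gc();                             // try to shrink under the limit
    if (desired < heap.committed) return false; // still too large: error
  }
  return true;                                  // limit now in effect
}
```

The key design point is that the error is reported only after the full
collection has had a chance to bring the heap under the new limit.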
>
> The limit imposed by CurrentMaxHeapSize can be disabled by leaving the
> variable unset at launch time or by setting it to zero or below at
> runtime.
>
> This feature is important for coping with changes in workload demands
> without having to restart JVMs.
>
>
>
> --
Ruslan
CEO @ Jelastic


More information about the hotspot-gc-dev mailing list