RFC: Adaptively resize heap at any GC/SoftMaxHeapSize for G1

Ruslan Synytsky rs at jelastic.com
Tue Jul 7 16:23:32 UTC 2020

> >> Unfortunately GC.heap_info and VM.info do not provide information about
> >> COMMITTED heap. And jstat documentation
> That is actually not true :) While looking into JDK-8248136, G1 actually
> already prints committed heap with GC.heap_info.
> E.g. on an application with -Xms64m -Xmx1024m the output is:
> $ jcmd 30653 GC.heap_info
> 30653:
>   garbage-first heap   total 519168K, used 315920K [0x00000000c0000000,
> 0x0000000100000000)
>    region size 1024K, 116 young (118784K), 23 survivors (23552K)
> [...]
> The "total" is "available" regions (i.e. ~committed) as explained in the
> previous post (I kept the relevant part below).
Thomas, thank you, that helps! I'm glad I was wrong. The "total" naming
confused me; I thought it was the max (Xmx) heap.
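The same distinction is visible from inside the JVM via the Runtime API, which may help avoid this confusion: totalMemory() reports the currently committed heap (what G1 prints as "total"), while maxMemory() reports the -Xmx limit. A minimal sketch (class name is illustrative):

```java
public class CommittedVsMax {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // totalMemory() = committed heap (G1's "total");
        // maxMemory()   = the -Xmx limit.
        System.out.println("committed: " + rt.totalMemory() / 1024 + "K");
        System.out.println("max (Xmx): " + rt.maxMemory() / 1024 + "K");
    }
}
```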

As a follow-up question: how can one get the non-heap usage? That would be
useful for understanding the full picture. JMX provides the following option:

MemoryMXBean mem = ManagementFactory.getPlatformMXBean(mbsc, MemoryMXBean.class);
MemoryUsage nonHeap = mem.getNonHeapMemoryUsage();
System.out.println(nonHeap.getInit() + "," + nonHeap.getUsed() + "," +
    nonHeap.getCommitted() + "," + nonHeap.getMax());
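For the local JVM, no MBeanServerConnection is needed; a self-contained sketch of the same query (class name is illustrative):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class NonHeapInfo {
    public static void main(String[] args) {
        // The platform MemoryMXBean of the current JVM can be
        // obtained directly, without a remote connection.
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage nonHeap = mem.getNonHeapMemoryUsage();
        // Note: -1 means "undefined" for init/max of non-heap memory.
        System.out.println(nonHeap.getInit() + "," + nonHeap.getUsed()
                + "," + nonHeap.getCommitted() + "," + nonHeap.getMax());
    }
}
```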

> I recently filed JDK-8248136 for improving the heap info output for G1.

Improving the heap info output sounds like a useful enhancement. At the moment
GC.heap_info is printed by the different GCs with different naming:


garbage-first heap   total 204800K, used 1806K [0x0000000080000000, 0x0000000100000000)
 region size 1024K, 0 young (0K), 0 survivors (0K)
Metaspace       used 1039K, capacity 4619K, committed 4864K, reserved 1056768K
 class space    used 98K, capacity 431K, committed 512K, reserved

par new generation   total 27648K, used 4493K [0x000000068cc00000, 0x000000068ea00000, 0x000000068ea00000)
 eden space 24576K,  18% used [0x000000068cc00000, 0x000000068d063638, 0x000000068e400000)
 from space 3072K,   0% used [0x000000068e400000, 0x000000068e400000, 0x000000068e700000)
 to   space 3072K,   0% used [0x000000068e700000, 0x000000068e700000, 0x000000068ea00000)
concurrent mark-sweep generation total 2048K, used 0K [0x000000068ea00000, 0x000000068ec00000, 0x00000007c0000000)
Metaspace       used 4439K, capacity 4602K, committed 4864K, reserved 1056768K
 class space    used 423K, capacity 426K, committed 512K, reserved 1048576K

ZHeap           used 10M, capacity 48M, max capacity 3482M
Metaspace       used 7608K, capacity 7668K, committed 7680K, reserved 8192K

Shenandoah Heap
 4914M total, 32768K committed, 3345K used
 2457 x 2048K regions
Status: not cancelled
Reserved region:
 - [0x000000068cc00000, 0x00000007c0000000)
Collection set:
 - map (vanilla): 0x00007f9d98e52466
 - map (biased):  0x00007f9d98e4f000
Metaspace       used 4406K, capacity 4538K, committed 4864K, reserved 1056768K
 class space    used 422K, capacity 426K, committed 512K, reserved 1048576K

Shenandoah provides the most detailed output, including Xmx, which it calls
"total", while "total" means committed in G1. Also, ZGC prints the committed
heap as "capacity", while Metaspace has both capacity and committed... Does it
make sense to harmonize the naming?
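For comparison, the JMX MemoryUsage API already uses a uniform vocabulary (init/used/committed/max) across all collectors, which could serve as a reference point for harmonization. A small sketch (class name is illustrative):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapTerms {
    public static void main(String[] args) {
        MemoryUsage heap =
                ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        // "committed" here corresponds to G1's "total" and ZGC's
        // "capacity"; "max" corresponds to Shenandoah's "total" (-Xmx).
        System.out.printf("init=%d used=%d committed=%d max=%d%n",
                heap.getInit(), heap.getUsed(),
                heap.getCommitted(), heap.getMax());
    }
}
```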

>> Thomas and Liang, is there a possibility to easily add an optional
>> manageable parameter that regulates behaviour of JVM when memory
>> consumption goes above SoftMaxHeapSize? For example, by default JVM
>> allocates more memory when it can't keep memory usage
>> below SoftMaxHeapSize, and if the optional parameter was specified then
>> throws OutOfMemoryError. In this case we will cover two cases with the
>> code base.

> I think there is still the patch from Rodrigo; from my understanding
> from last time there were some issues around when you are allowed to
> change that (as Current/HardMaxHeapSize is read at "arbitrary" locations
> you need to make sure the gc has a consistent view of it), and naming: I
> am not sure but CurrentMaxHeapSize has been the favorite or so.

Ok, I will talk to Rodrigo and Liang to clarify what we can do as the next
step.
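As a side note, manageable flags can already be changed at runtime through the HotSpotDiagnosticMXBean, which is presumably the mechanism such an optional parameter would use. A sketch using HeapDumpOnOutOfMemoryError, a long-standing manageable flag (SoftMaxHeapSize, where supported, can be set the same way):

```java
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class ManageableFlagDemo {
    public static void main(String[] args) {
        HotSpotDiagnosticMXBean diag =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // setVMOption only accepts flags marked "manageable";
        // it throws for flags that cannot be changed at runtime.
        diag.setVMOption("HeapDumpOnOutOfMemoryError", "true");
        System.out.println(
                diag.getVMOption("HeapDumpOnOutOfMemoryError").getValue());
        // prints: true
    }
}
```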

Thank you

More information about the hotspot-gc-dev mailing list