Submitted JEP 189: Shenandoah: An Ultra-Low-Pause-Time Garbage Collector

Erik Helin erik.helin at
Wed May 25 09:33:08 UTC 2016

On 2016-05-23, Christine Flood wrote:
> Hi
> I'm sorry I didn't respond sooner.  Answers are inline.
> ----- Original Message -----
> > From: "Erik Helin" <erik.helin at>
> > To: "Christine Flood" <chf at>
> > Cc: "mark reinhold" <mark.reinhold at>, hotspot-dev at
> > Sent: Monday, May 16, 2016 8:44:02 AM
> > Subject: Re: Submitted JEP 189: Shenandoah: An Ultra-Low-Pause-Time Garbage Collector
> > 
> > I'm moving this thread to hotspot-dev since this JEP affects all of
> > hotspot. I guess that the members from the runtime team and compiler
> > team will want to comment on the changes to the interpreter, C1 and C2.
> > 
> > On 2016-05-13, Christine Flood wrote:
> > > OK, I've put together a pdf document which summarizes our changes.
> > 
> > Thank you for writing this up. Will you incorporate most of this
> > document into the JEP?
> Yes, I'm going to make a second pass over the document incorporating
> all of the comments and answering any questions.

> > > I'm happy to go into more detail or answer questions.
> > 
> > Reading through the document, I have a few initial high-level questions:
> > - Which platforms does Shenandoah run on (CPUs/OSs)? Which platforms do
> >   you intend Shenandoah to run on?
> For now we are implementing it on 64-bit Intel (x86_64) and AArch64. It doesn't
> make much sense for 32-bit architectures, since their heaps are smaller.
> > - For the goal of the JEP, do you have any particular benchmark in mind
> >   for determining when you have reached the goal? You have stated less
> >   than 100 ms pauses on 100 GB, but variables such as allocation rate,
> >   live set, etc. also tend to affect the GC. It might be easier to
> >   determine that you have reached your goal if you have a specific setup
> >   (OS, CPU, RAM) and a specific benchmark in mind.
> I'm somewhat regretting the original 10ms on 100GB heaps goal.  The reality
> is that our pause times are proportional to the size of the thread stacks.
> We are working on keeping them as small as possible, but you are correct
> that it's benchmark-, OS-, and CPU-specific.
> The only concrete goal we have is for SPECjbb2015: no worse than a 10% decrease
> in max-jOPS (throughput) and significantly better critical-jOPS (response time).

A 10% decrease in max-jOPS compared to which configuration? ParallelGC?
G1? A highly tuned G1?
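The claim above, that pause time scales with thread-stack size, comes from the
collector having to scan every thread's stack for roots at a stop-the-world
pause. The sketch below is purely illustrative (the class name, thread count,
and recursion depth are made up, and it is not a Shenandoah benchmark): it just
makes visible how the number of stack frames a root scan must visit grows with
threads times depth.

```java
// Illustration only: the root set a stop-the-world stack scan must visit
// grows with (number of threads) x (frames per stack).
public class StackRoots {
    // Recurse to 'depth' frames; each frame pins a local reference on the
    // stack, which a pausing collector would treat as a GC root.
    static int recurse(int depth, Object local) {
        if (depth == 0) return 1;
        Object perFrameRoot = new Object();       // hypothetical per-frame root
        return 1 + recurse(depth - 1, perFrameRoot);
    }

    public static void main(String[] args) throws Exception {
        final int threads = 8, depth = 1000;      // arbitrary illustrative numbers
        long[] frames = new long[threads];
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            final int idx = i;
            ts[i] = new Thread(() -> frames[idx] = recurse(depth, null));
            ts[i].start();
        }
        long total = 0;
        for (int i = 0; i < threads; i++) { ts[i].join(); total += frames[i]; }
        // A stop-the-world stack scan touches every frame of every thread:
        System.out.println("frames to scan: " + total);   // 8 * 1001 = 8008
    }
}
```

Keeping application stacks shallow (and scanning them concurrently where
possible) is therefore the lever for keeping such pauses small.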

> > - When you state "most" GCs will be below 100 ms, do you have any number
> >   in mind? 99% of all GCs? 99.9%?

I have another question related to this: what will Shenandoah do when it
can't keep up? What happens if the concurrent copying can't free enough
memory in time (because the allocation rate is too high)? A full GC?
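The failure mode asked about above can be pictured with a toy allocation loop
(the class and method names are invented for illustration, and nothing here is
Shenandoah-specific): the application keeps a small live window but allocates
garbage continuously, so if the mutator's allocation rate exceeds the
concurrent collector's reclamation rate, the collector has to fall back to
some slow path, such as a full stop-the-world GC.

```java
import java.util.ArrayDeque;

// Illustrative allocation-pressure loop: constant garbage production with a
// small live set, the shape of workload that can outrun a concurrent GC.
public class AllocPressure {
    public static long churn(int iterations, int liveWindow, int chunkBytes) {
        ArrayDeque<byte[]> live = new ArrayDeque<>();
        long allocated = 0;
        for (int i = 0; i < iterations; i++) {
            live.addLast(new byte[chunkBytes]);   // fresh allocation each pass
            if (live.size() > liveWindow) {
                live.removeFirst();               // older chunks become garbage
            }
            allocated += chunkBytes;
        }
        return allocated;                         // total bytes allocated
    }

    public static void main(String[] args) {
        // Small, fast numbers for illustration; a real stress run would use
        // -Xmx close to the live set and far larger chunks and counts.
        long bytes = churn(10_000, 100, 1024);
        System.out.println("allocated bytes: " + bytes); // 10000 * 1024 = 10240000
    }
}
```

Run with a heap only slightly larger than the live window, this kind of loop
is how one would probe what a concurrent collector does when it cannot keep up.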

> > - Who will maintain the Shenandoah code?
> Red Hat will maintain the Shenandoah code.

Ok, then I would like you to make sure that compiling the Shenandoah
code is controlled with a configure variable, similar to jvmci, shark,
dtrace, vm-structs, etc:

sh configure --with-jvm-features=shenandoah

> > Reading through the JEP, I noticed the line " opposed to G1 which
> > focuses on young generation collection to help limit remembered-set
> > size." The main reason for the young collections in G1 is to improve
> > throughput, not to limit the size of the remembered sets. If an
> > application follows the generational hypothesis, then a young generation
> > can be quite a boost to throughput. We still have a remembered set per
> > young region, but it only keeps track of pointers from old regions to
> > young regions (not pointers between young regions). Would you please
> > remove this statement from the JEP?
> I will fix the comment. I didn't mean to imply that young generation collections
> were only about remembered set overhead, I just meant to say that not keeping track
> of young to old pointers aids in keeping remembered set sizes manageable.  The initial
> (non-generational) G1 implementation had embarrassingly large remembered sets.

Sure, I understand. And yes, not keeping track of young to young
pointers reduces the size of the rem sets, but the main purpose of the
young gen is to increase throughput.
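The generational hypothesis referenced above says that most objects die young,
so a collector that evacuates only the young space reclaims the bulk of garbage
cheaply. The toy simulation below (class name and the 2% survival figure are
invented for illustration) just shows the shape of such a workload: a flood of
short-lived temporaries and a small surviving set.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the generational hypothesis: most allocations are short-lived
// temporaries, so scanning only the young space reclaims most garbage, which
// is where the throughput win of a young generation comes from.
public class GenHypothesis {
    public static int[] simulate(int allocations, int survivalPercent) {
        List<byte[]> oldGen = new ArrayList<>();  // the small surviving set
        int diedYoung = 0;
        for (int i = 0; i < allocations; i++) {
            byte[] obj = new byte[64];            // a typical short-lived temp
            if (i % 100 < survivalPercent) {
                oldGen.add(obj);                  // rare survivor, "promoted"
            } else {
                diedYoung++;                      // unreachable again: dies young
            }
        }
        return new int[] { diedYoung, oldGen.size() };
    }

    public static void main(String[] args) {
        int[] r = simulate(100_000, 2);           // assume ~2% survive
        System.out.println("died young: " + r[0] + ", promoted: " + r[1]);
    }
}
```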

> Thank you for your thoughtful comments.

> Christine
