RSet Tuning

Kirk Pepperdine kirk.pepperdine at gmail.com
Mon Jun 8 12:44:22 UTC 2015


Very interesting…. I need to find the one GC log that clearly explains what I’m seeing and maybe that will help add some context. To your last point, I’ve found it almost impossible to “out-tune” the G1 default settings.

Regards,
Kirk

On Jun 8, 2015, at 2:12 PM, Thomas Schatzl <thomas.schatzl at oracle.com> wrote:

> Hi,
> 
> On Mon, 2015-06-01 at 21:17 +0200, Simone Bordet wrote:
>> Hi,
>> 
>> On Mon, Jun 1, 2015 at 8:20 PM, charlie hunt <charlie.hunt at oracle.com> wrote:
>>> The observations you mention here are certainly of interest to the team working on G1. If you would like to start a new thread on the performance and tuning of RSets I’d be glad to offer my $0.02. :-)
>>> 
>> 
>> I'm interested in hearing about RSet tuning.
>> 
>> While I don't have the vast experience of Kirk in number of G1 cases
>> tuned, for those that I worked on G1 was only able to respect the
>> target pause for about 50% of the pauses with a max of about 3x the
>> target pause.
> 
> G1 tends to meet the pause time target only on average, i.e. about 50%
> of the time. There is no real cut-off anywhere during a collection
> that forces the pause to end when it runs over (or approaches) the max
> pause time.
> 
> There is a feedback mechanism that tries to make the next pause(s)
> shorter if the recent ones were too long, and vice versa.
> 
> Depending on the size of the machine, we have sometimes seen good
> improvements on spikes by decreasing GC concurrency, as these
> outliers are sometimes caused by hitting contended locks.
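> 
> As a hedged sketch of such an experiment (the thread counts below are
> illustrative placeholders, not recommendations - the right values
> depend on the machine and workload, and "app.jar" stands in for the
> application):
> 
> ```shell
> # Probe whether pause spikes come from contended locks by lowering
> # GC concurrency (parallel and concurrent GC thread counts).
> java -XX:+UseG1GC \
>      -XX:ParallelGCThreads=4 \
>      -XX:ConcGCThreads=1 \
>      -jar app.jar
> ```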
> 
>> Our data shows that for the 50% that exceeded the target pause there
>> is no clear cause: sometimes it is RSet scanning, sometimes RSet
>> updating, sometimes Object copying, etc.
>> 
>> In the context of RSet tuning, what advice do you have?
> 
> No advice, some ideas though:
> 
> - avoid "coarsening"; it is typically a bad idea, particularly with
> large regions. G1SummarizeRSetStats together with
> G1SummarizeRSetStatsPeriod=1 helps you look at this. Increase
> G1RSetRegionEntries/G1RSetRegionEntriesBase as needed.
> Expect memory requirements (and the overhead of managing it) to go up
> if you had coarsening before.
> - there are some known concurrency issues with the sparse remembered
> set, since adding to it requires taking a per-region lock, and
> contended locks are somewhat of an issue in HotSpot. You can try to
> decrease its size by changing
> G1RSetSparseRegionEntries/G1RSetSparseRegionEntriesBase.
> - the G1RSetUpdatingPauseTimePercent setting also has some impact on
> the refinement thresholds.
> However, lowering the thresholds has diminishing returns, as lower
> thresholds may mean a lot of repeated work.
> In the extreme case, you can let the application threads do that work
> themselves (set the green/yellow/red thresholds to zero, and disable
> G1UseAdaptiveConcRefinement).
> - the main contributor to the Update RS pause time is the contents of
> the per-thread refinement buffers. You can decrease the size of these
> buffers so that fewer entries remain to be worked on during the
> pause; G1UpdateBufferSize is the setting you may want to tune.
> Note that this may hurt throughput, as the buffers will fill up more
> often, which requires more context switches (to get new ones).
> - another contributor is the hot card cache (like the per-thread
> buffers, it needs to be flushed at every GC). Try decreasing
> G1ConcRSLogCacheSize - at the cost of throughput, because the cache
> is another way of reducing repeated refinement of the same card.
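> 
> Pulling the knobs above together, a hedged sketch of a command line
> for experimenting (all numeric values are placeholders to tune from,
> not recommendations; "app.jar" stands in for the application; and
> depending on the JDK build, some of these flags require
> -XX:+UnlockDiagnosticVMOptions or -XX:+UnlockExperimentalVMOptions):
> 
> ```shell
> # Illustrative G1 RSet tuning experiment combining the flags above.
> java -XX:+UseG1GC \
>      -XX:+UnlockDiagnosticVMOptions -XX:+UnlockExperimentalVMOptions \
>      -XX:+G1SummarizeRSetStats -XX:G1SummarizeRSetStatsPeriod=1 \
>      -XX:G1RSetRegionEntries=768 \
>      -XX:G1RSetSparseRegionEntries=32 \
>      -XX:G1UpdateBufferSize=128 \
>      -XX:G1ConcRSLogCacheSize=8 \
>      -jar app.jar
> ```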
> 
> Note that tuning these settings gives very application-specific
> results; often they either do almost nothing (just wasting cycles) or
> make the problem worse.
> 
> Thanks,
>  Thomas
More information about the hotspot-gc-dev mailing list