Low-Overhead Heap Profiling
jeremymanson at google.com
Fri Jun 26 05:47:30 UTC 2015
On Thu, Jun 25, 2015 at 2:08 PM, Tony Printezis <tprintezis at twitter.com> wrote:
> Hi Kirk,
> (long time!) See inline.
> On June 25, 2015 at 2:54:04 AM, Kirk Pepperdine (kirk.pepperdine at gmail.com) wrote:
> But, seriously, why didn’t you like my proposal? It can do anything your
> scheme can with fewer and simpler code changes. The only thing that it
> cannot do is to sample based on object count (i.e., every 100 objects)
> instead of based on object size (i.e., every 1MB of allocations). But I
> think doing sampling based on size is the right approach here (IMHO).
> I would think that the size based sampling would create a size based bias
> in your sampling.
> That’s actually true. And this could be good (if you’re interested in
> what’s filling up your eden, the larger objects might be of more interest)
> or bad (if you want to get a general idea of what’s being allocated, the
> size bias might make you miss some types of objects / allocation sites).
Note that it catches both large objects and objects that are frequently
allocated in the same way. Both of those are useful pieces of information.
Particularly, if we find, say, 200 of the same stack trace, and we know
they aren't in the live set, then we know we have a place in the code that
generates a lot of garbage. That can be a useful piece of information for
the developer.
> Since, IME, allocation frequency is more damaging to performance, I'd
> prefer to see time boxed sampling
> Do you mean “sample every X ms, say”?
This is not impossible, but a little weird. The only obvious way I can
think of to do it without enormous overhead is to have a thread that wakes up
once every X ms and sets a shared location to 1. Then you check that
shared location on every allocation. If it is 1, you go into a slow path
where you try to CAS it to 0. If the CAS succeeds, take the sample.
You could imagine some sampling problems caused by, say, thread priority
issues.
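The flag-and-CAS scheme above can be sketched roughly as follows (names and structure are mine, purely illustrative; `arm()` stands in for the timer thread's periodic store):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: a timer thread periodically arms a shared flag;
// allocation paths check it with a plain read and race via CAS so that
// only one thread takes each sample.
class TimeBoxedSampler {
    private final AtomicInteger shouldSample = new AtomicInteger(0);
    private final AtomicLong samplesTaken = new AtomicLong(0);

    // Timer thread: wake up every X ms and set the shared location to 1.
    void startTimer(long intervalMillis) {
        Thread t = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                shouldSample.set(1);
                try { Thread.sleep(intervalMillis); }
                catch (InterruptedException e) { return; }
            }
        });
        t.setDaemon(true);
        t.start();
    }

    // Fast path on every allocation: a cheap read, then a CAS only when
    // the flag is armed. The CAS winner takes the sample.
    void onAllocation() {
        if (shouldSample.get() == 1 && shouldSample.compareAndSet(1, 0)) {
            samplesTaken.incrementAndGet(); // record stack trace here
        }
    }

    long samplesTaken() { return samplesTaken.get(); }

    // Test hook: arm the flag directly, as the timer thread would.
    void arm() { shouldSample.set(1); }
}
```

The point of the CAS is that between timer ticks the fast path is a single read of a shared location, and at most one allocating thread pays the slow path per tick.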