TLAB and NUMA aware allocator

Vitaly Davidovich vitalyd at gmail.com
Thu Sep 27 19:41:02 UTC 2012


Thanks Jon -- that blog entry was useful.

Vitaly

On Thu, Sep 27, 2012 at 12:55 AM, Jon Masamitsu <jon.masamitsu at oracle.com> wrote:

> Vitaly,
>
> The current implementation depends on a thread not migrating
> between nodes.  On Solaris that happens naturally.  I don't
> remember the details, but it's something like: Solaris sees that
> a thread XX is executing on node AA and using memory on AA,
> so it leaves XX on AA.  On Linux I'm guessing (really guessing)
> that there is a way to create an affinity between XX and AA.
>
> This has all the things I ever knew about it.
>
> https://blogs.oracle.com/jonthecollector/entry/help_for_the_numa_weary
>
> Jon
>
>
> On 9/26/2012 4:09 PM, Vitaly Davidovich wrote:
>
>> Hi guys,
>>
>> If I understand it correctly, the NUMA-aware allocator splits eden into
>> regions and tries to ensure that an allocated object lands in a region
>> local to the mutator thread.  How does this affect TLABs?  Specifically,
>> a TLAB will be handed out to a thread from the current node.  If the Java
>> thread then migrates to a different node, its TLAB is presumably still on
>> the previous node, leading to cross-node traffic.  Is there a notion of a
>> processor-local TLAB?  In that case, access to already-allocated objects
>> would take a hit, but new allocations would not.
>>
>> The way I imagine a processor-local TLAB working: when a thread migrates,
>> its previous TLAB becomes available to whichever Java thread is on-proc
>> there now -- that is, TLAB ownership changes.  The migrated thread then
>> picks up allocations in the new TLAB.
>>
>> Allocation can still be bump-the-pointer, since only one hardware thread
>> can be running on the processor at a time.
>>
>> Is this or something like it already there? If not, what challenges am I
>> overlooking from my high-level view?
>>
>> Thanks
>>
>> Sent from my phone
>>
>>


More information about the hotspot-gc-dev mailing list