RFR: 8146115 - Improve docker container detection and resource configuration usage
robbin.ehn at oracle.com
Tue Oct 3 08:00:31 UTC 2017
On 10/03/2017 12:46 AM, David Holmes wrote:
> Hi Robbin,
> I have some views on this :)
> On 3/10/2017 6:20 AM, Robbin Ehn wrote:
>> Hi Bob,
>> As I said in your presentation for RT.
>> If the kernel is configured with cgroups, this should always be read (otherwise we get wrong values).
>> E.g. Fedora has had cgroups on by default for several years (I believe most distros do).
>> - No option is needed at all: right now we get wrong values and your fix will provide the right ones, so why would you ever want to turn that off?
> It's not that you would want to turn that off (necessarily), but just because the cgroups capability exists doesn't mean cgroups have actually been enabled and configured - in
> which case reading all the cgroup info is unnecessary startup overhead. So for now this is opt-in - as was the experimental cgroup support we added. Once it becomes clearer
> how this needs to be used we can adjust the defaults. For now this is enabling technology only.
If cgroups are mounted they are on, and the only way to know the configuration (such as no limits) is to actually read the cgroup filesystem.
Therefore the flag makes no sense.
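The mount check itself is cheap. A minimal stand-alone sketch (my own illustration, not the webrev's code; the function name and input shape are assumptions) of detecting a cgroup mount from /proc/self/mounts-style lines:

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Returns true if any line of a /proc/self/mounts-style listing
// reports a "cgroup" (v1) or "cgroup2" filesystem. The third
// whitespace-separated field of each line is the filesystem type.
bool cgroups_mounted(const std::vector<std::string>& mount_lines) {
  for (const std::string& line : mount_lines) {
    std::istringstream fields(line);
    std::string device, mount_point, fs_type;
    if (fields >> device >> mount_point >> fs_type) {
      if (fs_type == "cgroup" || fs_type == "cgroup2") {
        return true;
      }
    }
  }
  return false;
}
```

In a real VM this would be fed the contents of /proc/self/mounts at startup; if nothing cgroup-related is mounted, the controllers are simply not in effect.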
>> - A container log target would make little sense since almost all Linux systems run with cgroups on.
> Again the capability is present but may not be enabled/configured.
The capability is on if cgroups are mounted, and the only way to know the configuration is to read the cgroup filesystem.
>> - For cpuset, the process's affinity mask already reflects the cgroup setting, so you don't need to look into cgroups for that.
>> If you did, you would miss any process-specific affinity mask. So _cpu_count() should already be returning the right number of CPUs.
> While the process affinity mask reflects cpusets (and we already use it for that reason), it doesn't reflect shares and quotas. And if shares/quotas are enforced and someone
> sets a custom affinity mask, what is it all supposed to mean? That's one of the main reasons to allow the number of CPUs to be hardwired via a flag. So it's better IMHO to
> read everything from the cgroups if configured to use cgroups.
I'm not talking about shares and quotas, of course they should be read, but cpuset should be checked the way _cpu_count does it.
Here is the bug:
[rehn at rehn-ws dev]$ taskset --cpu-list 0-2,6 java -Xlog:os=debug -cp . ForEver | grep proc
[0.002s][debug][os] Initial active processor count set to 4
[rehn at rehn-ws dev]$ taskset --cpu-list 0-2,6 java -XX:+UseContainerSupport -Xlog:os=debug -cp . ForEver | grep proc
[0.003s][debug][os] Initial active processor count set to 32
_cpu_count already does the right thing.
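For reference, the affinity-mask approach can be sketched as follows (a simplified stand-alone version, not HotSpot's actual _cpu_count, assuming Linux/glibc):

```cpp
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <sched.h>  // sched_getaffinity, CPU_ZERO, CPU_COUNT (Linux/glibc)
#include <cassert>

// The scheduler affinity mask already reflects both taskset-style
// restrictions and cgroup cpusets, so counting its set bits yields
// the number of CPUs the process may actually run on.
int affinity_cpu_count() {
  cpu_set_t mask;
  CPU_ZERO(&mask);
  if (sched_getaffinity(0, sizeof(mask), &mask) == 0) {
    return CPU_COUNT(&mask);
  }
  return -1;  // caller should fall back to another source on failure
}
```

Run under `taskset --cpu-list 0-2,6`, this returns 4, matching the first log line above.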
>> Thanks for trying to fix this!
>> On 09/22/2017 04:27 PM, Bob Vandette wrote:
>>> Please review these changes that improve on docker container detection and the
>>> automatic configuration of the number of active CPUs and total and free memory
>>> based on the containers resource limitation settings and metric data files.
>>> http://cr.openjdk.java.net/~bobv/8146115/webrev.00/
>>> These changes are enabled with -XX:+UseContainerSupport.
>>> You can enable logging for this support via -Xlog:os+container=trace.
>>> Since the dynamic selection of CPUs based on cpusets, quotas and shares
>>> may not satisfy every user's needs, I’ve added an additional flag to allow the
>>> number of CPUs to be overridden. This flag is named -XX:ActiveProcessorCount=xx.
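To make the "dynamic selection" concrete, here is one plausible way the three cgroup inputs could combine into an effective CPU count. This is purely illustrative: the function name, rounding, and precedence are my assumptions, not necessarily what the webrev implements.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Combine cgroup CPU limits into a single effective CPU count:
//   cpuset_cpus            : CPUs in the cpuset (or online CPUs if unrestricted)
//   cpu_quota / cpu_period : hard cap; quota <= 0 means "no limit"
//   cpu_shares / 1024      : relative weight, interpreted here as a CPU count;
//                            shares <= 0 means "no limit"
int container_cpu_count(int cpuset_cpus, long quota, long period, long shares) {
  int limit = cpuset_cpus;
  if (quota > 0 && period > 0) {
    int quota_cpus = (int)std::ceil((double)quota / (double)period);
    limit = std::min(limit, quota_cpus);
  }
  if (shares > 0) {
    int share_cpus = std::max(1, (int)(shares / 1024));
    limit = std::min(limit, share_cpus);
  }
  return std::max(1, limit);  // never report fewer than one CPU
}
```

For example, with 8 cpuset CPUs and a quota of 200000us per 100000us period, this yields 2; with no limits set at all it yields the full 8. A user who disagrees with any such heuristic can pin the result with -XX:ActiveProcessorCount.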