RFR(s): 8210549: Runtime.exec: in closeDescriptors(), use FD_CLOEXEC instead of close()
thomas.stuefe at gmail.com
Wed Oct 24 18:16:04 UTC 2018
Sorry for letting this rest for so long - I got sidetracked by other issues.
On Wed, Sep 12, 2018 at 3:17 PM Martin Buchholz <martinrb at google.com> wrote:
> In 2018, we can assume everyone has implemented FD_CLOEXEC.
> Back in 1970, 20 file descriptors was enough for any program, but
> today they're effectively unlimited, so futilely closing _SC_OPEN_MAX
> descriptors has become unreasonable (as you say). OTOH, there is
> still no standard way of ensuring all our file descriptors are closed.
> Do we need the fallback on your beloved HP-UX? Should we simply close
> the first 64k file descriptors and pray, as is traditional?
This sounds like a reasonable limit.
If I understand this correctly, this only affects stray JVM/JDK file
descriptors which were not opened with FD_CLOEXEC - which is an error
- and third-party native code. In HotSpot, os::open() sets FD_CLOEXEC,
and Java file descriptors are set to FD_CLOEXEC too.
So the error surface is quite small nowadays, and I think giving up
after 64K is okay.
> freebsd offers closefrom(2)
> and today we can autoconfiscate the existence of closefrom and use
> that where available (BSD and Solaris?)
Here is what I think is theoretically possible:
AIX: has an fcntl command (F_CLOSEM) which closes all file descriptors
above X, which is fine.
BSD (but apparently not macOS): has int closefrom(2), which is fine.
Solaris: has a useless closefrom(3) which returns nothing, so failures
cannot be detected. Unless one wants to ignore errors, this is not an
option. However, Solaris has fdwalk(3), which we use in our port to
walk the open file descriptors and close them. But I am not sure about
the merit here; it may actually be more expensive than walking the
proc file system ourselves.
In any case I would suggest keeping the fallback code, which walks the
proc file system, for all Unices, and also keeping the fallback from
that, which closes a range by brute force - possibly with a reasonable
max. Is that too strict?
The question now is what is more important to you: code clarity (few
platform ifdefs) or trying out every technique we can think of?
A) If code clarity and simplicity are important, I would basically keep
the code we have now: just walk the proc file system - including the
AIX ifdefs, unfortunately - and as a fallback close the range by brute
force up to a reasonable max.
B) If we want to try everything, we will have more ifdefs, not fewer,
which would look something like this:
#if defined(_ALL_BSD_SOURCE) && !defined(__APPLE__)
try fdwalk (?)
fallback: walk proc file system
fallback: close range until 64K or so.
Not sure this is worth it...
In our port I did (B), but today I personally would go with (A). (B)
has too many difficult-to-test branches. And it may actually be
slower, since you may do work twice (e.g. if fdwalk fails for the same
reason that manually walking the proc fs fails).
What do you think?
> closeDescriptors needs a better name now - maybe ensureDescriptorsAreClosed ?
> I'd give this function a javadoc style docstring.
Will fix both.
> (Finding the FD_DIR directory has become rather ugly, especially the
> AIX-specific code)
I keep hoping that sometime in the future IBM will actually take part
in this AIX port and share some of the work. And blame :(
Best Regards, Thomas
More information about the core-libs-dev