Re: Child process limits cumulative or instantaneous?
- Subject: Re: Child process limits cumulative or instantaneous?
- From: Terry Lambert <email@hidden>
- Date: Sun, 1 Mar 2009 00:25:50 -0800
This usually comes down to whether or not they are in their own session or process group. The parent/child relationship is extremely cloudy because Launch Services sends a message to launchd to start the process on your behalf, rather than it actually being a child of your process, so there's a lot of strange stuff that can happen based on how you are starting things.
Processes which are actually orphaned (a term with a very specific POSIX meaning) are given over to init (well, launchd in Mac OS X) and reaped by it (well, in the kernel, rather than by launchd doing the work, in Mac OS X). The only implication there would be whether or not the process counts as an orphan.
Add to that, you can get processes generated under the covers if you use certain frameworks, and I have no idea what kind of expectations they have about the signal handlers being set by the framework vs. the parent process, etc.
It'd be seriously more clear if there were "ps gaxlwww" output and we knew the signal mask/block/pend/ignore state for SIGCHLD in any parent(s) involved.
At that point, it's complicated enough that if you need that level of looking at it, you will have to file a bug report to get it, hopefully with a small example program that can demonstrate the problem.
-- Terry
On Feb 28, 2009, at 11:05 PM, Jeff Stearns wrote:
Ralph -
When you look at your event library, consider Terry's comment about inheriting your grandchildren.

On many UNIX-like systems, orphaned processes are automatically adopted by the "init" process. This was a very common implementation for many years, and your event library may have been written with this behavior in mind.

But the POSIX specification says that this behavior is "implementation-dependent". Terry makes an important point about OS X behavior: you may be forced to inherit your orphaned grandchildren, even if you were completely unaware of their birth.
Computers are getting more like humans every day.
-jeff
On Feb 28, 2009, at 5:00 PM, Ralph Castain wrote:
You made some good points here, particularly about SIGCHLD. Let us look a little more deeply into how our event library + internal waitpid code is handling this situation. It could be that we are indeed seeing a bit of a race condition.

Interestingly, this program runs >>10K iterations on Linux (actually, it runs until I get tired of waiting for it and manually kill it). So it appears that either there is some enforcement difference between the two environments, or some difference in the way waitpid operates, or (most likely) simply enough difference in the timing that the race condition always "wins" under Linux.
Let me get back to the list after we dig a little deeper.
Thanks
Ralph
On Feb 27, 2009, at 5:15 PM, Terry Lambert wrote:
On Feb 27, 2009, at 8:43 AM, Ralph Castain wrote:
I appreciate that info. There is plenty of swap space available. We are not exceeding the total number of processes under execution at any time, nor the total number of processes under execution by a single user - assuming that these values are interpreted as instantaneous and not cumulative.

In other words, if you look at any given time, you will see that the total number of processes is well under the system limit, and the number of processes under execution by the user is only 4, which is well under the system limit.
You say this, but... it's not clear that you would be including zombies in your calculation.

You really need to look at the output of "ps gaxwww", check the "STAT" column, and see if there are zombies. If there are, then you need to look at the PPID column to find the parent process; that's the process that's failing to reap its zombie children.
It's important to know that if your child process starts its own child process and then terminates, *you* inherit the child of the child and are expected to reap it on behalf of your child process.
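A minimal sketch for observing this reparenting directly (the sleeps and prints are illustrative only): the grandchild reports getppid() before and after the middle process exits, so you can see who actually inherits it on your system.

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t child = fork();
    if (child == 0) {                      /* child */
        pid_t grandchild = fork();
        if (grandchild == 0) {             /* grandchild */
            printf("grandchild: parent is %d\n", (int)getppid());
            sleep(2);                      /* wait for the child to exit */
            printf("grandchild: parent is now %d\n", (int)getppid());
            _exit(0);
        }
        _exit(0);                          /* child exits, orphaning the grandchild */
    }
    waitpid(child, NULL, 0);               /* reap the child itself */
    sleep(3);                              /* give the grandchild time to print */
    return 0;
}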
It's also important to note that SIGCHLD is a persistent condition, not an event: if more than one child terminates, and you reap them in a signal handler, you aren't necessarily going to get one signal per terminating child. The condition gets set by a child terminating, so if another child terminates before you service the signal, the signal is simply set again (and it's already set). When you process it and clear the signal, unless you reap all possible children in a loop and only leave the loop when there are no more children to reap, you can miss some and "leak" zombies, because your expectation of a signal per child is fundamentally wrong.
To do this processing of both your own and orphaned children, you should probably use the WNOHANG flag with wait4 without specifying a particular pid to wait for.
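A minimal sketch of that pattern, using waitpid(-1, ..., WNOHANG) as the portable equivalent of wait4 with no specific pid (the handler and names are illustrative, not Open MPI's actual code):

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

static void sigchld_handler(int signo)
{
    (void)signo;
    int saved_errno = errno;      /* waitpid may clobber errno */
    int status;
    pid_t pid;

    /* Drain every reapable child: one SIGCHLD may stand for several
     * terminations.  A pid of -1 means "any child", which also covers
     * grandchildren this process may have inherited. */
    while ((pid = waitpid(-1, &status, WNOHANG)) > 0) {
        /* child 'pid' has been reaped; record it if the main loop cares */
    }
    /* pid == 0: children remain but none have exited yet.
     * pid == -1 with errno == ECHILD: no children left at all. */
    errno = saved_errno;
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = sigchld_handler;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_RESTART;     /* restart interrupted system calls */
    if (sigaction(SIGCHLD, &sa, NULL) == -1) {
        perror("sigaction");
        return EXIT_FAILURE;
    }

    /* ... fork children and run the event loop here ... */
    return EXIT_SUCCESS;
}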
-- Terry
However, the total number of processes executed by the user (cumulative over the entire time the job has been executing) is over 263 and thus pushing the system limit IF that limit is cumulative and not instantaneous.
Hope that helps clarify the situation
Ralph
On Feb 27, 2009, at 9:37 AM, mm w wrote:
ERRORS
     Fork() will fail and no child process will be created if:

     [EAGAIN]  The system-imposed limit on the total number of processes
               under execution would be exceeded. This limit is
               configuration-dependent.

     [EAGAIN]  The system-imposed limit MAXUPRC (<sys/param.h>) on the
               total number of processes under execution by a single user
               would be exceeded.

     [ENOMEM]  There is insufficient swap space for the new process.
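A small sketch of distinguishing those failures at the call site (illustrative only; the errno values are the ones listed above):

#include <errno.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == -1) {
        switch (errno) {
        case EAGAIN:
            /* Either the system-wide limit or the per-user MAXUPRC limit
             * would be exceeded; zombies count toward both until reaped. */
            perror("fork: process limit");
            break;
        case ENOMEM:
            perror("fork: insufficient swap space");
            break;
        default:
            perror("fork");
        }
        return 1;
    }
    if (pid == 0)
        _exit(0);                 /* child exits immediately */
    waitpid(pid, NULL, 0);        /* reap, so no zombie lingers */
    return 0;
}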
On Fri, Feb 27, 2009 at 8:13 AM, Ralph Castain <email@hidden> wrote:
Hello folks
I'm the run-time developer for Open MPI and am encountering a resource starvation problem that I don't understand. What we have is a test program that spawns a child process, exchanges a single message with it, and then the child terminates. We then spawn another child process and go through the same procedure.
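A minimal sketch of that cycle, assuming a pipe carries the single message (illustrative, not Open MPI's actual code; the key detail is that each child is reaped with waitpid before the next fork):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    for (int i = 0; i < 10000; i++) {
        int fds[2];
        if (pipe(fds) == -1) { perror("pipe"); return 1; }

        pid_t pid = fork();
        if (pid == -1) {
            fprintf(stderr, "fork failed at iteration %d: %s\n",
                    i, strerror(errno));
            return 1;
        }
        if (pid == 0) {                    /* child: send one message, exit */
            close(fds[0]);
            (void)write(fds[1], "x", 1);
            close(fds[1]);
            _exit(0);
        }
        close(fds[1]);                     /* parent: read the message */
        char buf;
        (void)read(fds[0], &buf, 1);
        close(fds[0]);
        if (waitpid(pid, NULL, 0) == -1) { /* reap: no zombie is left behind */
            perror("waitpid");
            return 1;
        }
    }
    return 0;
}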
This paradigm is typical of some of our users who want to build client-server applications using MPI. In these cases, they want the job to run essentially continuously, but they have a rate limiter in their application so only one client is alive at any time.
We have verified that the child processes are properly terminating. We have monitored and observed that all file descriptors/pipes are being fully recovered after each cycle.
However, after 263 cycles, the fork command returns an error indicating that we have exceeded the number of allowed child processes for a given process. This is fully repeatable, yet the number of child processes in existence at any time is 1, as verified by ps.
Do you have any suggestions as to what could be causing this problem? Is the limit on child processes a cumulative one, or instantaneous?
Appreciate any help you can give
Ralph
--
-mmw
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Darwin-kernel mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden