Re: Child process limits cumulative or instantaneous?
- Subject: Re: Child process limits cumulative or instantaneous?
- From: Ralph Castain <email@hidden>
- Date: Sat, 28 Feb 2009 18:00:31 -0700
You made some good points here, particularly about SIGCHLD. Let us
dig a little deeper into how our event library + internal waitpid
code is handling this situation. It could be that we are indeed
seeing a bit of a race condition.
Interestingly, this program runs >> 10K iterations (actually, it runs
until I get tired of waiting for it and manually kill it) on Linux. So
it appears that either there is some enforcement difference between
the two environments, or some difference in the way waitpid operates,
or (most likely) simply enough difference in the timing of the race
that it always "wins" under Linux.
Let me get back to the list after we dig a little deeper.
Thanks
Ralph
On Feb 27, 2009, at 5:15 PM, Terry Lambert wrote:
On Feb 27, 2009, at 8:43 AM, Ralph Castain wrote:
I appreciate that info. There is plenty of swap space available. We
are not exceeding the total number of processes under execution at
any time, nor the total number of processes under execution by a
single user - assuming that these values are interpreted as
instantaneous and not cumulative.
In other words, if you look at any given time, you will see that
the total number of processes is well under the system limit, and
the number of processes under execution by the user is only 4,
which is well under the system limit.
You say this, but... it's not clear that you would be including
zombies in your calculation.
You really need to look at the output of "ps gaxwww", check the
"STAT" column, and see if there are zombies. If there are, then look
at the PPID column to find the parent process; that's the process
that's failing to reap its zombie children.
It's important to know that if your child process starts its own
child process and terminates, *you* inherit the child of the child
and are expected to reap it on behalf of your child process.
It's also important to note that SIGCHLD is a persistent condition,
not an event: if you have more than one child terminate, and reap
them in a signal handler, you aren't necessarily going to get one
signal per terminating child. Since the condition gets set by a
child terminating, if another child terminates before you service
the signal, the condition is simply set again when it's already set,
and the two deliveries collapse into one. If you process it and
clear the signal, then unless you reap all possible children in a
loop, and only leave the loop when there are no more children to
process, you can miss some and "leak" zombies, because your
expectation of a signal-per-child is fundamentally wrong.
To do this processing of both your own and orphaned children, you
should probably pass the WNOHANG flag to wait4 without specifying a
particular pid to wait for.
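A minimal sketch of such a loop, using the portable waitpid(-1, ...)
form (equivalent to wait4 without a pid); the function names here are
illustrative, not Open MPI's actual code:

    #include <errno.h>
    #include <signal.h>
    #include <sys/wait.h>

    /* Reap every dead child, not just one: -1 matches any child
     * (including inherited orphans) and WNOHANG makes waitpid
     * return 0 as soon as no more zombies remain. */
    static void reap_children(int sig)
    {
        int saved_errno = errno;      /* waitpid can clobber errno */
        (void)sig;
        while (waitpid(-1, NULL, WNOHANG) > 0)
            ;
        errno = saved_errno;
    }

    /* Install at startup, e.g.: */
    int install_sigchld(void)
    {
        struct sigaction sa;
        sa.sa_handler = reap_children;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = SA_RESTART;     /* restart interrupted syscalls */
        return sigaction(SIGCHLD, &sa, NULL);
    }

It's the loop, not the number of signal deliveries, that guarantees
no zombie is left behind.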
-- Terry
However, the total number of processes executed by the user
(cumulative over the entire time the job has been executing) is
over 263 and thus pushing the system limit IF that limit is
cumulative and not instantaneous.
Hope that helps clarify the situation
Ralph
On Feb 27, 2009, at 9:37 AM, mm w wrote:
ERRORS
     Fork() will fail and no child process will be created if:

     [EAGAIN]      The system-imposed limit on the total number of
                   processes under execution would be exceeded.  This
                   limit is configuration-dependent.

     [EAGAIN]      The system-imposed limit MAXUPRC (<sys/param.h>) on
                   the total number of processes under execution by a
                   single user would be exceeded.

     [ENOMEM]      There is insufficient swap space for the new process.
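So when fork() fails, errno distinguishes a process-count limit from
a memory shortage. A minimal sketch of telling the cases apart
(try_fork is an illustrative name, not part of any API):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Sketch: separate the failure classes fork(2) documents. */
    pid_t try_fork(void)
    {
        pid_t pid = fork();
        if (pid == -1) {
            if (errno == EAGAIN)        /* a process-count limit */
                fprintf(stderr, "fork: process limit: %s\n",
                        strerror(errno));
            else if (errno == ENOMEM)   /* swap exhaustion */
                fprintf(stderr, "fork: no swap: %s\n",
                        strerror(errno));
        }
        return pid;
    }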
On Fri, Feb 27, 2009 at 8:13 AM, Ralph Castain <email@hidden> wrote:
Hello folks
I'm the run-time developer for Open MPI and am encountering a
resource starvation problem that I don't understand. What we have is
a test program that spawns a child process, exchanges a single
message with it, and then the child terminates. We then spawn another
child process and go through the same procedure.
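Stripped to its essentials, the cycle looks something like the sketch
below (plain fork/pipe standing in for our actual MPI spawn path;
purely illustrative):

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[16];
        for (int i = 0; ; i++) {        /* run until fork fails */
            int fd[2];
            if (pipe(fd) == -1) {
                perror("pipe");
                return 1;
            }

            pid_t pid = fork();
            if (pid == -1) {
                fprintf(stderr, "cycle %d: ", i);
                perror("fork");         /* EAGAIN after ~263 cycles? */
                return 1;
            }

            if (pid == 0) {             /* child: send one message, exit */
                close(fd[0]);
                write(fd[1], "hello", 6);
                _exit(0);
            }

            close(fd[1]);               /* parent: read the message */
            read(fd[0], buf, sizeof(buf));
            close(fd[0]);
            waitpid(pid, NULL, 0);      /* reap before the next cycle */
        }
    }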
This paradigm is typical of some of our users who want to build
client-server applications using MPI. In these cases, they want the
job to run essentially continuously, but have a rate limiter in their
application so only one client is alive at any time.
We have verified that the child processes are properly terminating.
We have monitored and observed that all file descriptors/pipes are
being fully recovered after each cycle.
However, after 263 cycles, the fork command returns an error
indicating that we have exceeded the number of allowed child
processes for a given process. This is fully repeatable, yet the
number of child processes in existence at any time is 1, as verified
by ps.
Do you have any suggestions as to what could be causing this problem?
Is the limit on child processes a cumulative one, or instantaneous?
Appreciate any help you can give
Ralph
--
-mmw
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Darwin-kernel mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden