Re: Hard-limits for kern.maxproc
- Subject: Re: Hard-limits for kern.maxproc
- From: William Kucharski <email@hidden>
- Date: Fri, 06 Feb 2009 13:29:35 -0700
Steve Checkoway wrote:
How do other *nix systems accommodate a much larger process limit on 32-bit?
Some just hope for the best.
For example, Linux notably has its "OOM killer," which is under constant revision.
Basically, when the system is out of memory, it starts killing off processes
(hopefully nothing important) until it can proceed.
It's the "hopefully nothing important" bit that's the subject of most revisions
and the most controversy.
For example, if the kernel knows not to kill off daemons, but decides to kill the
word processor holding the document you haven't saved for the last hour so that
someone else can read their email, is that a good thing or a bad one?
Good for the person reading their email, bad for you.
You get the general idea.
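For what it's worth, Linux exposes that policy choice per process: with enough
privilege, a daemon can lower its own /proc/self/oom_score_adj so the OOM killer
leaves it alone. A minimal sketch (Linux-only, obviously not applicable to
Darwin, and lowering the score needs CAP_SYS_RESOURCE or root):

    /* Sketch: a daemon exempting itself from the Linux OOM killer by
     * writing -1000 to its own /proc/self/oom_score_adj. */
    #include <stdio.h>

    int main(void)
    {
        FILE *fp = fopen("/proc/self/oom_score_adj", "w");
        if (fp == NULL) {
            perror("fopen /proc/self/oom_score_adj");
            return 1;
        }
        /* -1000 = never pick this process; 0 is the default;
         * +1000 = pick this process first. */
        if (fprintf(fp, "-1000\n") < 0 || fclose(fp) != 0) {
            perror("writing oom_score_adj");
            return 1;
        }
        return 0;
    }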
So it's really a question of the level at which you want things to fail, and
where in the chain the processes involved are able to deal with that failure.
Do you want the creation of a process to just fail if limits are reached?
Do you want the creation to succeed but memory allocations to fail?
Do you want allocations to always succeed, even if that means killing off other
processes until the allocation can be satisfied?
All (at least somewhat) valid design rationales, depending on the context and
the design parameters of the operating system itself.
I _personally_ believe it shouldn't be a hard-coded limit but rather one that is
configurable for each admin's needs, on the understanding that if you change it,
you know what you're doing, and when things blow up later it's your own fault, so
don't come crying to your vendor about it.
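The tunable half of that already exists on Darwin via the sysctl interface; the
argument is only about the compiled-in ceiling. A quick sketch using
sysctlbyname(3) to read (and, for root, attempt to raise) kern.maxproc:

    #include <stdio.h>
    #include <sys/sysctl.h>

    int main(void)
    {
        int maxproc = 0;
        size_t len = sizeof(maxproc);

        /* Read the current limit, as "sysctl kern.maxproc" would. */
        if (sysctlbyname("kern.maxproc", &maxproc, &len, NULL, 0) == -1) {
            perror("sysctlbyname(kern.maxproc)");
            return 1;
        }
        printf("kern.maxproc = %d\n", maxproc);

        /* A privileged admin could try to raise it the same way, e.g.:
         *
         *   int wanted = 4096;
         *   sysctlbyname("kern.maxproc", NULL, NULL, &wanted, sizeof(wanted));
         *
         * but the result is still bounded by the compiled-in hard limit
         * this thread is discussing. */
        return 0;
    }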
William Kucharski