Re: Hard-limits for kern.maxproc
- Subject: Re: Hard-limits for kern.maxproc
- From: Steve Checkoway <email@hidden>
- Date: Fri, 6 Feb 2009 11:38:20 -0800

On Feb 6, 2009, at 5:18 AM, Andrew Gallatin wrote:

> Nathan wrote:
>> emails a really large file to their entire department. I'll hit the
>> arbitrary process limit long before my hardware suffers. But if I do,
>> I'll recompile the kernel with higher limits and the machine will keep
>> on chugging. Sounds like a software design issue to me.
>
> As people have tried to explain, there are good reasons why the limits
> are set the way they are. Remember, with a 32-bit kernel, even if
> you've got 8GB of RAM, the kernel itself can only use less than 4GB of
> RAM for its own data structures (for example, trivial things like the
> process table :). So if you increase the process limits, you run a real
> risk of running the kernel out of memory and/or kernel virtual address
> space, leading to system crashes or hangs.
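
For reference, the limits being discussed can be read back from userspace
with sysctl(3). A minimal sketch, assuming a Darwin box where the
kern.maxproc and kern.maxprocperuid sysctl names are present:

    /* Sketch: print the current process limits via sysctlbyname(). */
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/sysctl.h>

    int main(void)
    {
        int maxproc = 0, maxprocperuid = 0;
        size_t len = sizeof(maxproc);

        if (sysctlbyname("kern.maxproc", &maxproc, &len, NULL, 0) == -1)
            perror("kern.maxproc");
        len = sizeof(maxprocperuid);
        if (sysctlbyname("kern.maxprocperuid", &maxprocperuid, &len,
                         NULL, 0) == -1)
            perror("kern.maxprocperuid");

        printf("kern.maxproc = %d, kern.maxprocperuid = %d\n",
               maxproc, maxprocperuid);
        return 0;
    }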

I'm not a kernel developer, so maybe this is naive, but if running out
of memory is the issue, then shouldn't the limit be based on how much
physical memory there is, as opposed to a fixed limit? It seems like
this number could be dynamic too: when the system is running low on
memory, start returning EAGAIN to fork() calls.
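
Callers already have to cope with fork() failing with EAGAIN when the
process limit is reached, so making the limit dynamic wouldn't change
the userspace contract. A minimal sketch of that handling; the backoff
policy here is just an illustration, not something from the thread:

    /* Sketch: retry fork() when it fails with EAGAIN (process limit
     * reached or resources temporarily unavailable). */
    #include <errno.h>
    #include <sys/types.h>
    #include <unistd.h>

    static pid_t fork_with_retry(int attempts)
    {
        for (int i = 0; i < attempts; i++) {
            pid_t pid = fork();
            if (pid >= 0)
                return pid;    /* 0 in the child, child's pid in the parent */
            if (errno != EAGAIN)
                return -1;     /* a different error: give up immediately */
            sleep(1U << i);    /* back off, then try again */
        }
        errno = EAGAIN;
        return -1;
    }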

As a side note, does the kernel not page out its own memory? It seems
like it should have access to more space using the same mechanism, as
long as it is careful about which data is truly on disk and which is
in memory.
--
Steve Checkoway
"Anyone who says that the solution is to educate the users
hasn't ever met an actual user." -- Bruce Schneier