Re: Hard-limits for kern.maxproc
- Subject: Re: Hard-limits for kern.maxproc
- From: Andrew Gallatin <email@hidden>
- Date: Fri, 06 Feb 2009 08:18:22 -0500
Nathan wrote:
> emails a really large file to their entire department. I'll hit the
> arbitrary process limit long before my hardware suffers. But if I do,
> I'll recompile the kernel with higher limits and the machine will keep
> on chugging. Sounds like software design issue to me.
As people have tried to explain, there are good reasons why the limits
are set the way they are. Remember, with a 32-bit kernel, even if
you've got 8GB of RAM, the kernel itself can address less than 4GB of
that for its own data structures (for example, trivial things like the
process table :) So if you increase the process limits, you run a real
risk of running the kernel out of memory and/or kernel virtual address
space, leading to system crashes or hangs. It would be a better option
to switch to an OS which can handle your workload in a supported
fashion (Solaris, Linux, FreeBSD, etc.).
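The concern can be sketched with back-of-envelope arithmetic. The per-process
size below is an illustrative assumption, not Darwin's actual proc-structure
footprint; the point is only that a fixed-size table scales linearly with the
limit while the 32-bit kernel's address space does not grow:

```python
# Rough estimate of kernel memory pinned by the process table at a given
# kern.maxproc. Sizes are illustrative assumptions, not real Darwin figures.
KERNEL_VA_BYTES = 4 * 1024**3   # 32-bit kernel: < 4 GB of virtual address space
PER_PROC_BYTES = 4 * 1024       # assumed kernel bookkeeping per process
                                # (proc struct, credentials, file tables, ...)

def process_table_bytes(maxproc):
    """Kernel memory consumed by per-process bookkeeping at a given limit."""
    return maxproc * PER_PROC_BYTES

for maxproc in (1_000, 10_000, 100_000):
    used = process_table_bytes(maxproc)
    print(f"maxproc={maxproc:>7}: {used / 1024**2:8.1f} MiB "
          f"({100 * used / KERNEL_VA_BYTES:.2f}% of kernel address space)")
```

Even generous limits stay small in absolute terms; the real squeeze comes from
everything else competing for the same sub-4GB kernel address space (buffer
caches, network buffers, per-process kernel stacks), which is why the limits
are conservative.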
Drew
Darwin-kernel mailing list (email@hidden)