Re: Hard-limits for kern.maxproc
- Subject: Re: Hard-limits for kern.maxproc
- From: Andrew Gallatin <email@hidden>
- Date: Fri, 06 Feb 2009 14:50:08 -0500
Steve Checkoway wrote:
>
> On Feb 6, 2009, at 5:18 AM, Andrew Gallatin wrote:
>
>> Nathan wrote:
>>
>> > emails a really large file to their entire department. I'll hit the
>> > arbitrary process limit long before my hardware suffers. But if I do,
>> > I'll recompile the kernel with higher limits and the machine will keep
>> > on chugging. Sounds like software design issue to me.
>>
>> As people have tried to explain, there are good reasons why the limits
>> are set the way they are. Remember, with a 32-bit kernel, even if
>> you've got 8GB of RAM, the kernel itself can use less than 4GB of
>> address space for its own data structures (for example, trivial things
>> like the process table :) So if you increase the process limits, you
>> run a real risk of running the kernel out of memory and/or kernel
>> virtual address space, leading to system crashes or hangs.
>
> I'm not a kernel developer so maybe this is naive, but if running out of
> memory is the issue, then shouldn't the limit be based on how much
> physical memory there is, as opposed to a fixed limit? It seems like this
When a 4GB address space is the problem, it actually gets *WORSE* as
you increase the amount of physical RAM. The more RAM you have, the
bigger the page tables and associated housekeeping data structures you
need to have, and the less space you have available for everything
else.
> number could be dynamic too, when the system is running low on memory,
> start returning EAGAIN to fork() calls.
What about kernel call chains that need to succeed? (Like allocating
the data structures required for disk writes, which are in turn
required to page something out and free memory.)
> As a side note, does the kernel not page out its own memory? It seems
> like it should have access to more space using the same mechanism, as
> long as it is careful about which data is truly on disk and which is in
> memory.
Mostly not. Paging kernel memory is a nightmare. Off the top of my
head, I cannot think of any OS that actually does it, except maybe
AIX. I know that certain kernel-resident portions of processes were
pageable in FreeBSD a long time ago, and it caused quite a few
problems.
The best way to "fix" these scaling issues is to move to a 64-bit
kernel, which Apple is finally doing.
Drew
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Darwin-kernel mailing list (email@hidden)