Re: Hard-limits for kern.maxproc
- Subject: Re: Hard-limits for kern.maxproc
- From: Michael Smith <email@hidden>
- Date: Sat, 7 Feb 2009 00:07:00 -0800
On Feb 6, 2009, at 9:14 PM, Michael Cashwell wrote:
>> Pages are cleaned by a reasonably high-priority kernel thread. The
>> issue is not priority inversion; it's that determining a priori
>> the working set for the pageout path is NP-hard.
> OK, but what confuses me is this: if the issue is not priority
> inversion (which I had taken to be another name for deadlock), then
> why is pre-computing the working set for the pageout path necessary?
> Is it that if the pageout code (or data) itself were paged out,
> you're stuck?
Correct.
> If that's it, I'm surprised that some sort of memory pool dedicated
> to that task couldn't confine those elements to one wired memory
> region. Wiring most of the kernel to protect just that seems
> draconian. (I'm not saying that the pageout code and data aren't
> important, just that it would seem they could be isolated.)
> I suppose that's tough given that all manner of IOKit drivers
> related to disk IO (and RAID) might be involved.
Now you're getting it. 8)
It's important to understand that we're not just talking about paging
out to the swap files here; you can have dirty pages associated with
files from *any* filesystem. It might be a network filesystem or a
USB device or your Time Capsule or even (gack) a filesystem coming in
via FUSE.
In order to make reliable forward progress, the system can't catch-22
itself against any of these things, and it's simply not practical to
track every page that might at some point in the future be required on
the pageout path.
= Mike
_______________________________________________
Darwin-kernel mailing list (email@hidden)