Re: kernel paging (was Hard-limits for kern.maxproc)
- Subject: Re: kernel paging (was Hard-limits for kern.maxproc)
- From: Steve Checkoway <email@hidden>
- Date: Sat, 7 Feb 2009 15:26:06 -0800
On Feb 7, 2009, at 2:15 PM, Terry Lambert wrote:
Interestingly enough, I'm looking at an older 16-bit embedded
system that takes this approach. By carefully maintaining an
overlay call stack (I don't have a better name for it) and a table
of functions and which overlay they are in, it's possible for one
function to call another function in a different overlay by calling
a helper function that's always mapped into the address space that
remaps the other region. In this case, the code resides in several
ROMs and the mapping can be changed by a single instruction.
Obviously it'd be more complicated to do in a general purpose kernel.
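As a rough sketch of that trampoline scheme -- with names like
overlay_table, map_overlay, and overlay_call invented here rather
than taken from any real firmware -- the always-mapped helper might
look something like this in C:

/* Rough sketch of an overlay call trampoline, loosely following the
 * scheme described above.  All names (overlay_table, map_overlay,
 * overlay_call, ...) are illustrative, not from any real system. */

#include <stdint.h>

#define NUM_FUNCS 3

typedef void (*func_t)(void);

/* Table of functions and the overlay (ROM bank) each one lives in. */
struct overlay_entry {
    func_t  fn;       /* address of the function within its overlay */
    uint8_t overlay;  /* which ROM bank must be mapped to call it    */
};

static void task_a(void);
static void task_b(void);
static void task_c(void);

static const struct overlay_entry overlay_table[NUM_FUNCS] = {
    { task_a, 0 },
    { task_b, 1 },
    { task_c, 1 },
};

/* The "overlay call stack": which bank was mapped before each call,
 * so returning restores the caller's overlay. */
static uint8_t overlay_stack[32];
static int     overlay_sp;
static uint8_t current_overlay;

/* On the real hardware this would be the single instruction that
 * switches the ROM bank; here it is just a stub. */
static void map_overlay(uint8_t bank)
{
    current_overlay = bank;
}

/* Helper that is always mapped: switch overlays, call, switch back. */
static void overlay_call(int func_index)
{
    const struct overlay_entry *e = &overlay_table[func_index];

    overlay_stack[overlay_sp++] = current_overlay;
    map_overlay(e->overlay);
    e->fn();
    map_overlay(overlay_stack[--overlay_sp]);
}

static void task_a(void) { /* lives in overlay 0 */ }
static void task_c(void) { /* lives in overlay 1 */ }

/* A function in overlay 1 calling one in overlay 0 goes through the
 * trampoline rather than calling task_a() directly. */
static void task_b(void)
{
    overlay_call(0);   /* calls task_a, remapping overlay 0 first */
}

int main(void)
{
    overlay_call(1);   /* call task_b in overlay 1 from always-mapped code */
    return 0;
}

Only the helper and the table need to live in the always-mapped
region; everything else can move between banks.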
Call graphs are by definition directed graphs. DCGs are not
severable, while DAGs are. It is computationally hard to detect
what are called Hamiltonian cycles - A calls B calls C calls ...
calls B - but it's possible using what's called Warshall's
algorithm, and if you are clever, it's possible to do this
incrementally.
So it's not actually that complicated to do, but it does need
planning.
Hamiltonian cycles don't seem to apply here. You're not looking for a
cycle that visits every vertex exactly once. Warshall's algorithm
finds the transitive closure of the graph.
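To make that concrete, here is a toy version of Warshall's
algorithm over a small, made-up call graph stored as an adjacency
matrix. After the closure is computed, reach[i][j] says whether i
can transitively call j, and a function on a call cycle shows up as
reach[i][i] being set:

/* Toy illustration: Warshall's algorithm computing the transitive
 * closure of a call graph given as an adjacency matrix. */

#include <stdio.h>

#define N 4   /* functions A, B, C, D */

int main(void)
{
    /* reach[i][j] = 1 if function i directly calls function j.
     * Here: A->B, B->C, C->B, C->D (so B and C form a cycle). */
    int reach[N][N] = {
        {0, 1, 0, 0},
        {0, 0, 1, 0},
        {0, 1, 0, 1},
        {0, 0, 0, 0},
    };

    /* Warshall: allow paths through intermediate vertex k. */
    for (int k = 0; k < N; k++)
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                if (reach[i][k] && reach[k][j])
                    reach[i][j] = 1;

    for (int i = 0; i < N; i++)
        if (reach[i][i])
            printf("function %c is on a call cycle\n", 'A' + i);

    return 0;
}

The incremental version Terry mentions would update the matrix as
edges are added rather than recomputing the closure from scratch,
but that's beyond a toy example.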
It is possible to do this for third party drivers using a
technique called static analysis. This is relatively
straightforward for fixed-size, aligned instructions -- call them
ppc and arm -- and a lot harder for variable sized instruction set
processors. You effectively look at the code and decide the
subtrees of code that will be used for a given set of code: the
functions called by the functions called by the ... even through
calls through function pointers or vtables.
Certainly this is not decidable in general, by straightforward
reduction from the halting problem. When is the analysis happening?
When the driver is loaded into the kernel?
It's not analogous to the halting problem. It just becomes 4!
harder when you have to guess the instruction boundary on a 32-bit
processor, and you end up with false positives when you guess
wrong. It comes down to determining code reachability.
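To illustrate the reachability step, here is a schematic worklist
pass over a call graph that some earlier disassembly pass is
assumed to have already produced (recovering that graph, and
guessing instruction boundaries on a variable-length ISA, is the
hard part being discussed). Indirect calls are handled
conservatively by treating every address-taken function as a
possible target, so the result over-approximates what actually
runs. All names and tables below are invented for the example:

/* Schematic sketch of the reachability step: given a call graph,
 * compute every function a driver's entry point can reach. */

#include <stdio.h>

#define NFUNCS 6

static const char *name[NFUNCS] = {
    "probe", "attach", "setup_dma", "irq_handler", "helper", "debug_dump"
};

/* calls[i][j] = 1 if function i has a direct call to function j. */
static const int calls[NFUNCS][NFUNCS] = {
    /* probe       */ {0, 1, 0, 0, 0, 0},
    /* attach      */ {0, 0, 1, 0, 0, 0},
    /* setup_dma   */ {0, 0, 0, 0, 1, 0},
    /* irq_handler */ {0, 0, 0, 0, 1, 0},
    /* helper      */ {0, 0, 0, 0, 0, 0},
    /* debug_dump  */ {0, 0, 0, 0, 1, 0},
};

/* Functions whose address is taken (e.g. stored in a vtable or passed
 * as a callback); an indirect call site could reach any of these. */
static const int address_taken[NFUNCS] = {0, 0, 0, 1, 0, 0};

/* has_indirect_call[i] = 1 if function i contains an indirect call. */
static const int has_indirect_call[NFUNCS] = {0, 1, 0, 0, 0, 0};

int main(void)
{
    int reachable[NFUNCS] = {0};
    int worklist[NFUNCS], top = 0;

    reachable[0] = 1;          /* the driver's entry point: probe() */
    worklist[top++] = 0;

    while (top > 0) {
        int f = worklist[--top];

        for (int g = 0; g < NFUNCS; g++) {
            int callee = calls[f][g] ||
                         (has_indirect_call[f] && address_taken[g]);
            if (callee && !reachable[g]) {
                reachable[g] = 1;
                worklist[top++] = g;
            }
        }
    }

    for (int i = 0; i < NFUNCS; i++)
        printf("%-12s %s\n", name[i],
               reachable[i] ? "reachable" : "unreachable");
    return 0;
}

With the example data, debug_dump comes out unreachable, while
irq_handler is pulled in only through the conservative
indirect-call edge -- exactly the kind of over-approximation the
static-versus-dynamic distinction below is about.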
Trying to compute if a function gets called at run time is definitely
undecidable. Assume not and let D be the decider for the language L =
{<P,x,F> : P is a program that when run on input x calls the function
F}. Let E be a program that takes as input a program and input pair
<P,x> and emulates running P on x. If at any time during the emulation
P halts on x, E calls a function Halt().
Let <P,x> be an arbitrary program/input pair. We can determine if P
halts on x by running our decider D on input <E,<P,x>,Halt>. D
accepts if and only if running E on <P,x> would call Halt(), which
happens if and only if running P on x would have halted. Since such
a D would decide the halting problem, which is impossible, no such
D can exist.
It's one thing to compute a conservative static call graph (which,
in the face of function pointers, requires data flow analysis that
is undecidable in general, so you end up having to include far more
possible function calls), and another to compute the actual dynamic
call graph.
This is one of the reasons why graphics nerds are obsessed with
frame rates past the 60 per second you could ever possibly display
on an LCD panel with a 120Hz refresh rate: they don't know *which*
60 of the ones they've calculated they will need to display.
Makes sense.
--
Steve Checkoway