Long latencies in realtime threads when doing BSD syscalls -- Bug or expected behaviour?
- Subject: Long latencies in realtime threads when doing BSD syscalls -- Bug or expected behaviour?
- From: "Mario Kleiner" <email@hidden>
- Date: Fri, 30 Jan 2004 17:19:34 +0100
Hello,
I'm new to this list, so please forgive me if I'm posting to
the wrong one, but I couldn't find a clear answer in the list
archives.
We have written a simple C timing loop to test the realtime
behaviour/latencies of OS X. It is basically:

    while (true) {
        some BSD syscall or libc call like setjmp(), getpid(),
        getXXX(), fopen(), fprintf(), fwrite(), ...;
        usleep(500);   /* 500 microseconds */
    }

The loop runs in the THREAD_TIME_CONSTRAINT_POLICY (realtime)
class.
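For reference, here is a minimal, self-contained sketch of the
kind of loop we run. The getpid() call, the 120-second duration
and the concrete period/computation/constraint values are just
one variant of what we tested, not our exact code:

    #include <stdio.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <sys/time.h>
    #include <mach/mach.h>
    #include <mach/mach_time.h>
    #include <mach/thread_policy.h>

    /* Put the calling thread into the Mach time-constraint
     * (realtime) class. The period/computation/constraint
     * values below are example numbers, converted from
     * nanoseconds into absolute-time units via the timebase. */
    static void make_realtime(void)
    {
        struct mach_timebase_info tb;
        thread_time_constraint_policy_data_t policy;
        uint64_t one_ms;

        mach_timebase_info(&tb);
        one_ms = 1000000ULL * tb.denom / tb.numer;

        policy.period      = one_ms;     /* run every ~1 ms      */
        policy.computation = one_ms / 5; /* ~0.2 ms work/period  */
        policy.constraint  = one_ms;     /* finish within period */
        policy.preemptible = 1;

        thread_policy_set(mach_thread_self(),
                          THREAD_TIME_CONSTRAINT_POLICY,
                          (thread_policy_t)&policy,
                          THREAD_TIME_CONSTRAINT_POLICY_COUNT);
    }

    /* Current time in milliseconds, via the same gettimeofday()
     * that we use for all our measurements. */
    static double now_ms(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec * 1000.0 + tv.tv_usec / 1000.0;
    }

    int main(void)
    {
        double start, worst = 0.0;

        make_realtime();
        start = now_ms();

        while (now_ms() - start < 120000.0) { /* 120 seconds */
            double t0 = now_ms(), dt;

            (void)getpid();   /* the BSD syscall under test */
            usleep(500);      /* sleep 500 microseconds     */

            dt = now_ms() - t0;
            if (dt > worst) {
                worst = dt;
                printf("new worst-case pass: %.3f ms\n", worst);
            }
        }
        return 0;
    }

The real test just swaps getpid() for the other calls listed
above.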
When we run it for, e.g., 120 seconds, most loop passes take
less than 1 ms (usually around 0.6 ms), even on a loaded
system -- which is good.
But every thirty seconds, the sync() syscall issued by the
update daemon triggers spikes of up to 14 ms in the loop
duration of our realtime thread, although a realtime thread is
supposed to preempt other processes as soon as it becomes
runnable.
To me, it seems that any BSD syscall (especially one that does
I/O) in any (timesharing) thread can produce high latencies in
concurrently running realtime threads by delaying their own
BSD syscalls. If we remove the syscalls from the timing loop,
it stays unaffected, so preemption itself works.
From everything I have read on the web and in this list's
archives, my assumption is that the cause is the kernel
funnel, which seems to prevent concurrent execution of BSD
calls in the kernel, but I would like confirmation of this
from somebody knowledgeable:
So, am I right that the kernel funnel has this effect on all
BSD syscalls, even on uniprocessor machines?
Does this mean that the funnel is automatically released when
a thread sleeps in the kernel, but not when a thread gets
preempted by a realtime thread -- which would cause the
realtime thread to be delayed/blocked on the lock as soon as
it makes a BSD syscall itself -- basically a kind of priority
inversion?
That would make perfect sense to me, but then why doesn't the
same thing happen with the gettimeofday() syscall I use for
timing, or with usleep() when it returns from sleeping?
Is there a way to avoid this, e.g. by using only Apple's
frameworks or Mach APIs instead of the POSIX calls?
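For example, I could imagine replacing our gettimeofday()/
usleep() pair with the Mach timing calls, roughly like the
sketch below -- though whether that actually bypasses the
funnel is exactly what I am asking:

    #include <stdio.h>
    #include <stdint.h>
    #include <mach/mach_time.h>

    int main(void)
    {
        struct mach_timebase_info tb;
        uint64_t t0, t1, delay;
        double elapsed_ns;

        mach_timebase_info(&tb);
        t0 = mach_absolute_time();

        /* Sleep ~500 us: convert nanoseconds to absolute-time
         * units and wait for an absolute deadline instead of
         * a relative interval. */
        delay = 500000ULL * tb.denom / tb.numer;
        mach_wait_until(t0 + delay);

        t1 = mach_absolute_time();
        elapsed_ns = (double)(t1 - t0) * tb.numer / tb.denom;
        printf("slept for %.3f ms\n", elapsed_ns / 1e6);
        return 0;
    }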
Our current -- ugly -- solution is to kill the update daemon
during the realtime parts of our application. Is there
something better, e.g. an equivalent of the SCHED_FIFO
scheduling found on other Unices?
And one last question: is there an equivalent of mlockall() to
prevent paging for a realtime app?
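The closest thing I have found so far is the per-range mlock()
from <sys/mman.h>. Something along these lines (the 1 MiB
buffer is just an arbitrary example) seems to work:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 1 << 20;   /* 1 MiB working set */
        void *buf = malloc(len);
        if (buf == NULL)
            return 1;

        /* Touch the pages so they are resident, then wire
         * them so the pager leaves them alone. */
        memset(buf, 0, len);
        if (mlock(buf, len) != 0) {
            perror("mlock");
            return 1;
        }

        /* ... realtime work using buf ... */

        munlock(buf, len);
        free(buf);
        return 0;
    }

But that only wires individual ranges, not the whole address
space the way mlockall() would.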
Sorry for the long post. Any comments would be appreciated.
ciao,
-mario
_______________________________________________
darwin-kernel mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/darwin-kernel
Do not post admin requests to the list. They will be ignored.