Re: Audio threads scheduling
- Subject: Re: Audio threads scheduling
- From: Stéphane LETZ <email@hidden>
- Date: Sun, 4 Apr 2004 22:01:22 +0200
On Apr 3, 2004, at 21:56, Shaun Wexler wrote:
On Apr 3, 2004, at 3:23 AM, Stéphane LETZ wrote:
If I understand correctly, a "normal" IO thread is woken regularly at
the beginning of each audio cycle, does its job (possibly in several
steps, since it may be preempted by other real-time threads), then is
suspended again.
Correct.
And suspension is done using a "pthread_cond_timedwait" (or something
similar) whose period equals the buffer duration. Is this correct?
Not exactly; the wait time is a maximum timeout, and the thread is
blocked waiting on a condition variable semaphore to become available.
In effect, the thread awakes when signaled and/or times out.
Who is going to signal the thread?
Now if a thread is suspended *inside* its audio cycle because it is
waiting on another resource (a lock or mutex) but will be unblocked
in the *same* audio cycle, the scheduler will elect another thread
and will have to go back to the suspended real-time thread during the
*same* audio cycle to finish its job. So in the first case, the
scheduler knows that the thread will be suspended until the next
audio cycle, because of the use of the pthread_cond_timedwait call,
and in the second case the scheduler does not know when the thread is
going to be runnable again.
If the blocked thread exceeds its period, the audio engine will throw
an overload and restart its cycle. The kernel scheduler isn't the
problem here; it's the ioProc thread's calculation of wait times and
computation/constraint values. If the thread is not consistent in its
execution times, the HAL has a hard time stabilizing its own
scheduling of the threads. Blocking a realtime thread in favor of
another one is acceptable if you can guarantee consistency and attempt
to design accordingly, but you can't "control" the kernel scheduler
and threads from all other processes; thus it's considered bad
practice. I have successfully used pthread_rwlock in a realtime
ioProc thread, though I don't anymore, and would never recommend it.
I think the question here is to see if there are differences for the
scheduler in the following cases:
- a real-time thread is preempted because it reached the end of its
"computation" slice (the "computation" parameter defined in the time
constraints setting)
- a real-time thread finished its job; control goes back to the HAL,
which finally suspends the thread with something like
"pthread_cond_timedwait" and returns control to the scheduler
- a real-time thread suspends itself with "pthread_cond_wait", but it
will be resumed in the same audio cycle.
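For reference, the "computation" parameter mentioned in the first case
comes from the Mach time-constraint policy. The sketch below shows how
such a policy might be set; it only compiles on Mac OS X, and the ratios
(50% computation, 85% constraint for a 512-frame buffer at 44.1 kHz) are
illustrative values, not recommendations from the HAL.

```c
#include <mach/mach.h>
#include <mach/thread_policy.h>
#include <mach/mach_time.h>

/* Sketch: request realtime (time-constraint) scheduling for the calling
   thread. All numeric values here are hypothetical examples. */
static kern_return_t set_time_constraints(void)
{
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);
    /* Convert nanoseconds to Mach absolute-time units. */
    double ns_to_abs = (double)tb.denom / (double)tb.numer;
    double cycle_ns  = 512.0 / 44100.0 * 1e9;   /* one buffer's duration */

    thread_time_constraint_policy_data_t policy;
    policy.period      = (uint32_t)(cycle_ns * ns_to_abs);
    policy.computation = (uint32_t)(cycle_ns * 0.50 * ns_to_abs);
    policy.constraint  = (uint32_t)(cycle_ns * 0.85 * ns_to_abs);
    policy.preemptible = 1;   /* may be preempted within the constraint */

    return thread_policy_set(mach_thread_self(),
                             THREAD_TIME_CONSTRAINT_POLICY,
                             (thread_policy_t)&policy,
                             THREAD_TIME_CONSTRAINT_POLICY_COUNT);
}
```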
This, in effect, extends the non-pre-emptible
section of the cycle a great deal and again leads to the priority
inversion.
Why? If the thread is suspended on a mutex, for example, control goes
back to the scheduler, which is free to schedule another real-time
thread?
This allows a low-priority thread the opportunity to lock the mutex,
thus preventing realtime thread(s) from running, hence the potential
for priority inversion.
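One common mitigation for the inversion scenario just described, if a
realtime thread really must share a lock with lower-priority threads, is a
priority-inheritance mutex: a low-priority holder is temporarily boosted
to the priority of the highest waiter instead of stalling it. This is a
POSIX-level sketch (availability varies by platform), not the HAL's own
mechanism:

```c
#define _GNU_SOURCE
#include <pthread.h>

/* Sketch: initialize a mutex with the priority-inheritance protocol.
   Returns 0 on success, an errno-style code on failure. */
static int make_pi_mutex(pthread_mutex_t *m)
{
    pthread_mutexattr_t attr;
    int rc = pthread_mutexattr_init(&attr);
    if (rc != 0)
        return rc;
    /* A thread holding this mutex inherits the priority of the
       highest-priority thread blocked on it. */
    rc = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    if (rc == 0)
        rc = pthread_mutex_init(m, &attr);
    pthread_mutexattr_destroy(&attr);
    return rc;
}
```

Even with inheritance, the realtime thread still pays the cost of the
critical section, so the advice above about consistency still applies.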
I agree in the general case. But here I'm thinking of cases where the
lock is never going to be taken by a low-priority thread.
Basically I'm designing an application that will evaluate a graph of
connected audio components, and I would like to take advantage of
dual-processor machines. Although I read some older mails that advise
against that... I am trying to understand under what conditions it
could be done in a correct way. The idea is to have a lock-free list
of runnable sub-tasks and a "mini-scheduler" that will feed one
thread running on each processor. At some point in the audio cycle,
one thread may have to wait for the other one to finish because of data
dependencies. Thus inside an audio cycle, each thread may have to be
suspended but resumed in the same cycle.
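The lock-free runnable-list idea above can be sketched with C11 atomics:
worker threads (one per CPU) claim the next sub-task with an atomic
fetch-and-add, so no mutex is needed to hand out work. All names here
(task_list, run_worker, demo_task) are invented for the example, and the
data-dependency synchronization between workers is deliberately left out.

```c
#include <stdatomic.h>
#include <stddef.h>

typedef void (*subtask_fn)(void *arg);

/* A flat list of sub-tasks that are runnable in the current cycle. */
typedef struct {
    subtask_fn   *tasks;    /* runnable subtasks for this cycle */
    size_t        count;
    atomic_size_t next;     /* index of the next unclaimed task */
} task_list;

/* Each worker loops: atomically claim a task index, run the task,
   repeat until the list is exhausted. Returns how many tasks this
   worker executed. Safe to call from several threads at once. */
static size_t run_worker(task_list *tl, void *arg)
{
    size_t done = 0;
    for (;;) {
        size_t i = atomic_fetch_add(&tl->next, 1);
        if (i >= tl->count)
            break;          /* list exhausted for this cycle */
        tl->tasks[i](arg);
        done++;
    }
    return done;
}

/* Tiny demo task: just counts how many times it ran. */
static atomic_int work_done;
static void demo_task(void *arg)
{
    (void)arg;
    atomic_fetch_add(&work_done, 1);
}
```

The real difficulty, as the discussion notes, is not handing out tasks
but what a worker does when its next task's inputs aren't ready yet:
blocking there reintroduces exactly the mid-cycle suspension being
debated.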
This is possible, but the dependencies may become a problem. I find
that one CPU is plenty to handle most audio DSP tasks for one ioProc,
while the other one may be simultaneously processing DSP for another
ioProc, the GUI, disk I/O, etc.
Realtime threads aren't any "faster" than other threads; they just
awake with better timing accuracy and predictability, and will be
preempted less or not at all. If you are unable to achieve sufficient
performance from single audio graphs, where is your bottleneck (have
you profiled?)...
But designing systems that can make the most of dual-processor machines
(and probably soon dual + HT) is challenging...
Stephane Letz
_______________________________________________
coreaudio-api mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/coreaudio-api
Do not post admin requests to the list. They will be ignored.