Message: 1
Date: Wed, 9 Sep 2009 12:10:56 -0700
From: William Stewart <email@hidden>
Subject: Re: Audio Units and OpenCL?
To: philippe wicker <email@hidden>, Murray Jason
<email@hidden>, Edward Agabeg <email@hidden>
Cc: CoreAudio list <email@hidden>
Message-ID: <email@hidden>
Content-Type: text/plain; charset=us-ascii; format=flowed; delsp=yes
On Sep 9, 2009, at 9:56 AM, philippe wicker wrote:
I think that the difficulty in a plugin context is to meet a short -
and known - latency constraint when dispatching a job to several
threads. A solution is to pass the data to be worked on to some threads
on one Render and get the result back on the next Render, or even two
Render calls later, which gives a one- or two-buffer latency. To be sure
that the worker threads meet that kind of deadline, they have to be
time-constrained and their scheduling parameters carefully tuned. My
guess is that this is probably a difficult task for a generic
dispatching API such as GCD. Maybe an ad-hoc "hand-made" delegation
to a limited number of worker threads would give better results?
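
(A minimal sketch of that kind of "hand-made" delegation - one worker
thread, with the result picked up one Render later - might look like the
following. None of these names come from the CoreAudio API, the worker's
time-constraint setup is omitted, and a real implementation would
ping-pong between two slots so that a late worker never races the render
thread.)

#include <mach/mach.h>
#include <pthread.h>
#include <string.h>

#define kMaxFrames 4096

typedef struct {
    float        input[kMaxFrames];   // written by the render thread
    float        output[kMaxFrames];  // written by the worker thread
    unsigned     frames;
    semaphore_t  workReady;           // render -> worker
    volatile int resultReady;         // worker -> render
} WorkerState;

static void *WorkerThread(void *arg)
{
    WorkerState *s = (WorkerState *)arg;
    for (;;) {
        semaphore_wait(s->workReady);          // block until a buffer arrives
        // ... heavy DSP on s->input goes here; this just copies it ...
        memcpy(s->output, s->input, s->frames * sizeof(float));
        __sync_synchronize();
        s->resultReady = 1;
    }
    return NULL;
}

static int StartWorker(WorkerState *s, pthread_t *thread)
{
    memset(s, 0, sizeof(*s));
    if (semaphore_create(mach_task_self(), &s->workReady,
                         SYNC_POLICY_FIFO, 0) != KERN_SUCCESS)
        return -1;
    return pthread_create(thread, NULL, WorkerThread, s);
}

// Called from the plugin's Render: emit last cycle's result, queue this one.
static void RenderCycle(WorkerState *s, const float *in, float *out, unsigned frames)
{
    if (s->resultReady)                             // worker met its deadline
        memcpy(out, s->output, frames * sizeof(float));
    else
        memset(out, 0, frames * sizeof(float));     // late: output silence

    memcpy(s->input, in, frames * sizeof(float));   // hand over the next job
    s->frames      = frames;
    s->resultReady = 0;
    semaphore_signal(s->workReady);                 // wake the worker, non-blocking
}
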
We already provide support for this.
In 10.5 we shipped an AU called the "deferred renderer" - it is an
'aufc' audio unit, and it plugs into an AU graph (or AU rendering
chain) like any other audio unit does. It dispatches for its input
(whatever is connected to it) on a different thread than the one it is
called on for output (whatever thread AudioUnitRender is called on).
There are some properties that allow you to control the interaction,
in terms of latency, etc., between the calling thread and the thread
run by the AU itself.
It's mainly of use to host apps, where portions of a rendering graph
can be done on different threads, with a minimal, specifiable latency
introduced between the various sections of the graph. You still have,
of course, the problem of constructing your graph and knowing where you
can thread it in this way, but the intricacies of buffer management,
threading policy, time constraints, etc., are all handled within
the AU itself.
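
For instance, a sketch of inserting the deferred renderer between two
nodes of an AUGraph could look like the code below. The component
constants are the usual ones from AUComponent.h; the latency property
used at the end (kAudioUnitProperty_DeferredRendererExtraLatency) and
the value chosen for it are my assumption of how the tuning looks, so
check AudioUnitProperties.h before relying on them.

#include <AudioToolbox/AudioToolbox.h>

// Insert the deferred renderer so that sourceNode renders on the AU's own
// secondary thread while destNode keeps pulling on the normal render thread.
static OSStatus InsertDeferredRenderer(AUGraph graph,
                                       AUNode  sourceNode,
                                       AUNode  destNode)
{
    AudioComponentDescription desc = {
        .componentType         = kAudioUnitType_FormatConverter,   // 'aufc'
        .componentSubType      = kAudioUnitSubType_DeferredRenderer,
        .componentManufacturer = kAudioUnitManufacturer_Apple
    };

    AUNode deferredNode;
    OSStatus err = AUGraphAddNode(graph, &desc, &deferredNode);
    if (err) return err;

    // source -> deferred renderer -> destination
    err = AUGraphConnectNodeInput(graph, sourceNode, 0, deferredNode, 0);
    if (err) return err;
    err = AUGraphConnectNodeInput(graph, deferredNode, 0, destNode, 0);
    if (err) return err;

    // Once the graph has been opened, fetch the unit and tune its latency
    // behaviour (property name and type assumed, see above).
    AudioUnit deferredAU;
    err = AUGraphNodeInfo(graph, deferredNode, NULL, &deferredAU);
    if (err) return err;

    UInt32 extraLatencyFrames = 512;   // trade extra latency for scheduling slack
    return AudioUnitSetProperty(deferredAU,
                                kAudioUnitProperty_DeferredRendererExtraLatency,
                                kAudioUnitScope_Global, 0,
                                &extraLatencyFrames, sizeof(extraLatencyFrames));
}
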
In terms of other "threading"-type AUs, both the scheduled slice
player and the file player AU have an implicit notion of multi-
threading, but with the semantics of deadline-driven computation.
With the scheduled slice player, you can schedule buffers for playback
from any thread, and when this AU renders, it appropriately plays out
your buffers of audio. Essentially it gives you a push model into the
AU's usual pull-model rendering approach.
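
As a sketch, pushing one buffer into the scheduled slice player from an
arbitrary thread looks roughly like this (ScheduledAudioSlice and
kAudioUnitProperty_ScheduleAudioSlice are in AudioUnitProperties.h; how
you obtain the unit and compute the start time is up to you, and the
slice and its buffers have to stay alive until they have been played):

#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

// Schedule 'buffers' to start playing at 'startSampleTime' on the player's
// own timeline. 'slice' is caller-owned and must outlive playback.
static OSStatus ScheduleBuffer(AudioUnit            scheduledPlayer,
                               ScheduledAudioSlice *slice,
                               AudioBufferList     *buffers,
                               UInt32               numberFrames,
                               Float64              startSampleTime)
{
    memset(slice, 0, sizeof(*slice));
    slice->mTimeStamp.mFlags      = kAudioTimeStampSampleTimeValid;
    slice->mTimeStamp.mSampleTime = startSampleTime;
    slice->mNumberFrames          = numberFrames;
    slice->mBufferList            = buffers;

    return AudioUnitSetProperty(scheduledPlayer,
                                kAudioUnitProperty_ScheduleAudioSlice,
                                kAudioUnitScope_Global, 0,
                                slice, sizeof(*slice));
}

(You also have to start the player's timeline once with
kAudioUnitProperty_ScheduleStartTimeStamp; as I recall, a sample time
of -1 means "start on the next render".)
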
The file player handles this detail for you (you give it a file, and
it schedules the reads, etc., as needed to meet the deadlines of the
AU's rendering graph).
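
A sketch of the file player setup, with error handling collapsed (the
property names are the ones in AudioUnitProperties.h, but treat the
exact sequence as an illustration rather than gospel):

#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

// Play an already-opened audio file from its beginning; the AU schedules
// the disk reads itself to meet the render thread's deadlines.
static OSStatus PlayWholeFile(AudioUnit   filePlayer,   // kAudioUnitSubType_AudioFilePlayer
                              AudioFileID audioFile,    // e.g. from AudioFileOpenURL
                              UInt32      totalFrames)  // length of the file in frames
{
    OSStatus err;

    // 1. Hand the open file to the AU.
    err = AudioUnitSetProperty(filePlayer, kAudioUnitProperty_ScheduledFileIDs,
                               kAudioUnitScope_Global, 0,
                               &audioFile, sizeof(audioFile));
    if (err) return err;

    // 2. Schedule a region covering the whole file, starting at time 0.
    ScheduledAudioFileRegion region;
    memset(&region, 0, sizeof(region));
    region.mTimeStamp.mFlags      = kAudioTimeStampSampleTimeValid;
    region.mTimeStamp.mSampleTime = 0;
    region.mAudioFile             = audioFile;
    region.mFramesToPlay          = totalFrames;
    err = AudioUnitSetProperty(filePlayer, kAudioUnitProperty_ScheduledFileRegion,
                               kAudioUnitScope_Global, 0,
                               &region, sizeof(region));
    if (err) return err;

    // 3. Prime the internal reader (0 = default read-ahead).
    UInt32 primeFrames = 0;
    err = AudioUnitSetProperty(filePlayer, kAudioUnitProperty_ScheduledFilePrime,
                               kAudioUnitScope_Global, 0,
                               &primeFrames, sizeof(primeFrames));
    if (err) return err;

    // 4. Start the timeline; -1 = start as soon as the next render happens.
    AudioTimeStamp startTime;
    memset(&startTime, 0, sizeof(startTime));
    startTime.mFlags      = kAudioTimeStampSampleTimeValid;
    startTime.mSampleTime = -1;
    return AudioUnitSetProperty(filePlayer, kAudioUnitProperty_ScheduleStartTimeStamp,
                                kAudioUnitScope_Global, 0,
                                &startTime, sizeof(startTime));
}
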
I think it's interesting to explore these a bit, play around with them
and see how they can be used to good effect. Comments, etc., are always
welcome, and we can certainly look at generating some more
documentation or examples in this area (bugreporter.apple.com is a
good way to go for requests on these matters).
Bill