Re: Audio Units and OpenCL?
- Subject: Re: Audio Units and OpenCL?
- From: Mike Lemmon <email@hidden>
- Date: Fri, 11 Sep 2009 01:54:03 -0700
Thank you very much for the clarification -- I'll start digging right
in. I would only add that an example based on any of the AUs
you've mentioned (file player, deferred renderer, etc.) would be quite
helpful, simply as a demonstration of good practice when creating
a multi-threaded AU.
Many thanks,
Mike
On 10/09/2009, at 18:22, William Stewart wrote:
On Sep 9, 2009, at 6:10 PM, Mike Lemmon wrote:
So is multi-threading an AU ok, or only in host-oriented cases such
as these? What sort of "host interference" are people worried
about? I suppose this is because hosts assume that AUs are never
multi-threaded?
I'll explain my own situation here to provide an example for the
discussion. I have a synthesizer that models a complex physical
system of (ideally) thousands of discrete units; the audio output
generated is based on the state of the system. The system changes
gradually, so introducing a latency of even one or two seconds
would be OK if it meant that I could increase the complexity of the
system by an order of magnitude. While concurrency isn't a viable
option for most audio plug-ins, it could still make a huge
difference in the few places where it is viable.
I'll jump in at this point (I also like some of the other issues
that are being raised)
We ship several AUs that run more than a single thread
- the file player and the net send/receive, for example, all depend on
threads other than the render thread to produce
their audio.
There are also some instruments that do the kinds of things you are
discussing above - they tend to operate in what we've called a "dual
scheduling mode" (see the notes in AudioUnit/AudioUnitProperties.h),
which you should support. Briefly, you would have a pool of one or
more threads doing longer-latency processing (1 or 2 seconds
ahead), but you would still want to react to notes that come in and
need a "real time" response - this is what dual scheduling enables
(and Logic supports this, I believe). You see this change when the
track your AU is on goes from being a background track to a
live track. It is possible to get "longer delay" and "immediate
response" events in the AU at the same time, so you'd have to adjust
your logic for this.
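To make that concrete, here is a rough sketch of how an instrument
might split scheduled vs. live events. The handler and the queue/voice
helpers are hypothetical names used only for illustration; the
sample-frame masks are, I believe, declared in AudioToolbox/MusicDevice.h
alongside MusicDeviceMIDIEvent -- check the header rather than taking
my word for it.

    #include <AudioToolbox/MusicDevice.h>   // MusicDeviceMIDIEvent, sample-frame masks

    // Hypothetical stand-ins for whatever queue/voice machinery the
    // instrument actually uses -- named here only so the sketch reads
    // end to end.
    struct MIDIEventRecord { UInt32 status, data1, data2, offset; };
    void PushToLookaheadQueue(const MIDIEventRecord &e);  // worker pool, renders 1-2 s ahead
    void StartLiveNote(const MIDIEventRecord &e);         // immediate path, render thread

    static OSStatus HandleIncomingMIDIEvent(UInt32 inStatus, UInt32 inData1,
                                            UInt32 inData2, UInt32 inOffsetSampleFrame)
    {
        MIDIEventRecord e = {
            inStatus, inData1, inData2,
            inOffsetSampleFrame & kMusicDeviceSampleFrameMask_SampleOffset
        };

        if (inOffsetSampleFrame & kMusicDeviceSampleFrameMask_IsScheduled) {
            // Background track: the host handed us this event ahead of
            // time, so the worker pool can render it 1-2 seconds in advance.
            PushToLookaheadQueue(e);
        } else {
            // Live track: the user is playing right now, so take the
            // cheap, immediate path so the note sounds in this buffer.
            StartLiveNote(e);
        }
        return noErr;
    }

The point is simply that the same entry point can see both kinds of
events in the same render cycle, so the two paths have to coexist.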
The basic contract for AudioUnitRender is that, from the time the
call comes in, you have a hard deadline: your rendering must finish
before the duration of the buffer expires. That is, the time
you spend in AudioUnitRender (on the thread you are called on) has
to be hard real time. You can't enter unconstrained blocking
situations (waits, file system access, memory allocations), because
they can block this thread and make you miss your deadline.
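As a minimal sketch of the kind of handoff that satisfies this
contract (the class and names are mine, and it assumes a single
producer and a single consumer): a worker thread renders ahead into a
preallocated ring buffer, and the render thread only does atomic loads
and a bounded copy, so it never locks, allocates, or touches the file
system.

    #include <atomic>
    #include <cstddef>
    #include <vector>

    // Single-producer / single-consumer FIFO. The worker thread may block,
    // allocate, or touch the file system when it writes; the render thread
    // only performs atomic loads and a bounded element copy when it reads.
    class LookaheadFIFO {
    public:
        explicit LookaheadFIFO(std::size_t capacityFrames)
            : mBuffer(capacityFrames), mWrite(0), mRead(0) {}

        // Worker thread: push pre-rendered frames; returns how many fit.
        std::size_t Write(const float *src, std::size_t frames) {
            std::size_t w = mWrite.load(std::memory_order_relaxed);
            std::size_t r = mRead.load(std::memory_order_acquire);
            std::size_t freeSpace = mBuffer.size() - (w - r);
            std::size_t n = frames < freeSpace ? frames : freeSpace;
            for (std::size_t i = 0; i < n; ++i)
                mBuffer[(w + i) % mBuffer.size()] = src[i];
            mWrite.store(w + n, std::memory_order_release);
            return n;
        }

        // Render thread (inside AudioUnitRender): no locks, no allocation.
        std::size_t Read(float *dst, std::size_t frames) {
            std::size_t r = mRead.load(std::memory_order_relaxed);
            std::size_t w = mWrite.load(std::memory_order_acquire);
            std::size_t avail = w - r;
            std::size_t n = frames < avail ? frames : avail;
            for (std::size_t i = 0; i < n; ++i)
                dst[i] = mBuffer[(r + i) % mBuffer.size()];
            mRead.store(r + n, std::memory_order_release);
            return n;   // caller zero-fills (or falls back) if n < frames
        }

    private:
        std::vector<float>       mBuffer;       // preallocated up front
        std::atomic<std::size_t> mWrite, mRead; // monotonic frame counters
    };

If Read comes up short, the render callback should zero-fill or fall
back to a cheaper path rather than wait for the worker.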
I'm not sure that we can really provide a good example or
documentation on this. As long as you understand the contract of AU
usage, that's really the main thing (and I'm happy to
clarify any questions you have). In short, as long as you meet the
semantics of AudioUnitRender, you can use whatever you like to produce
your data...
So, I think you can easily do what you describe above. AULab is a
good vehicle to test this on - it always and only calls your AU on a
single thread, the I/O thread of the device. If your AU has
problems, you get device overloads (which AULab indicates to you),
and with HAL Lab you can even trace the problem areas if
you are having difficulties. I think that will get you to a
functional and correct implementation. Then I would also test with a
more sophisticated host (like Logic with a complex, many-track
document) to ensure that the threading model you have adopted is not
causing problems in a more complex situation.
Bill