Re: Mixing arbitrary numbers of sounds out an AUHAL unit (was: MOTU devices on Intel machines)
- Subject: Re: Mixing arbitrary numbers of sounds out an AUHAL unit (was: MOTU devices on Intel machines)
- From: William Stewart <email@hidden>
- Date: Wed, 15 Nov 2006 19:37:54 -0800
So - if you want to prototype this, you can set up this kind of
service in the AULab application.
On 15/11/2006, at 7:03 PM, Christopher Ashworth wrote:
Summary of problem: Using multiple AUGraphs appears not to work
with MOTU devices on Intel machines--after three AUGraphs (attached
to the same output device) are running, the render callbacks from
additional AUGraphs request precisely zero frames, resulting
in silence.
I ordered an Intel machine and a MOTU UltraLight to try to solve
this properly. Since they finally arrived, I can now talk slightly
more intelligently about this.
I tried Kurt's suggestion, which was to create a single AUHAL, with
a mixer in front of it.
The good news is that the prototype of this design works--the
driver only sees it as a single source, and whatever was confusing
it before does not come into play.
The bad news is that it appears I need to violate the CoreAudio
guidelines to make this work.
To wit:
Because I need to support playing a potentially large number of
audio files (the specific number of which is not known a priori), I
create a Matrix Mixer with a very large number of input busses. If
I set the render callback for every bus, the performance penalty is
(unsurprisingly) high. However, if most of the busses have a NULL
callback, the mixer performs great even with a high number of input
busses, because it appears to just ignore the busses with a NULL
callback.
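(For concreteness, a rough sketch of that setup - written with the current AUGraph calls, with kNumBusses, MyFileRenderProc and the absence of error handling all being illustrative:)

// Sketch: a matrix mixer node with a large, fixed input bus count,
// where only busses that are actually playing get a render callback.
#include <AudioToolbox/AudioToolbox.h>

enum { kNumBusses = 64 };   // illustrative upper bound on simultaneous sounds

static OSStatus MyFileRenderProc(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData);

static void SetupMatrixMixer(AUGraph graph, AUNode *outMixerNode, AudioUnit *outMixerAU)
{
    AudioComponentDescription desc = { kAudioUnitType_Mixer,
                                       kAudioUnitSubType_MatrixMixer,
                                       kAudioUnitManufacturer_Apple, 0, 0 };
    AUGraphAddNode(graph, &desc, outMixerNode);
    AUGraphNodeInfo(graph, *outMixerNode, NULL, outMixerAU);

    // Ask for a large number of input busses up front.
    UInt32 busCount = kNumBusses;
    AudioUnitSetProperty(*outMixerAU, kAudioUnitProperty_ElementCount,
                         kAudioUnitScope_Input, 0, &busCount, sizeof(busCount));
}

// Attach a playing file to one bus; busses with no callback stay idle.
static void AttachFileToBus(AUGraph graph, AUNode mixerNode, UInt32 bus, void *fileRefCon)
{
    AURenderCallbackStruct cb = { MyFileRenderProc, fileRefCon };
    AUGraphSetNodeInputCallback(graph, mixerNode, bus, &cb);
}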
Each audio file that plays is "attached" to the output device by
assigning it to one of the mixer busses. It is removed when it is
no longer playing.
Here's the problem:
"For other properties, such as the
kAudioUnitProperty_SetRenderCallback property, the audio unit
specification prohibits hosts from changing the property on an
initialized audio unit but there is no programmatic enforcement
against it."
http://developer.apple.com/documentation/MusicAudio/Conceptual/AudioUnitProgrammingGuide/TheAudioUnit/chapter_4_section_6.html
The problem is one of threading - we do not require AUs in general to
provide thread-safe access to many properties - and an initialised AU
is one that could potentially be rendering. So, in particular, removing
an input on one thread while the AU is actually trying to get input
data on its render thread is not a good idea :-).
So, this is in large part the motivation for the Connection APIs in
AUGraph, and why AUGraph requires you to make a connection request,
but that request will not be executed until you explicitly call
AUGraphUpdate (if the graph is running) - it implements a message API
and will actually do the connect/disconnect on the render thread
before it calls up the render chain (or after it has called up).
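(In code that pattern looks roughly like this - node names are illustrative; the change is only scheduled by the connect call and is applied by AUGraphUpdate:)

#include <AudioToolbox/AudioToolbox.h>

// Sketch: request a connection change, then let AUGraphUpdate apply it safely.
static OSStatus ConnectPlayerToMixer(AUGraph graph, AUNode playerNode,
                                     AUNode mixerNode, UInt32 mixerBus)
{
    // This only queues the request...
    OSStatus err = AUGraphConnectNodeInput(graph, playerNode, 0, mixerNode, mixerBus);
    if (err) return err;

    // ...and this asks the graph to carry out pending changes. If the graph
    // is running, the connect/disconnect happens on the render thread.
    Boolean updated = 0;
    return AUGraphUpdate(graph, &updated);
}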
We also realise that we don't have this mechanism for callbacks in
the AUGraph API - so we're adding this feature in Leopard.
In the meantime, what you need to do is to manipulate the callbacks
from the render thread - the AUGraph has an add/remove render notify
call and you can do this from there. There's an example implementation
of this in the code we provide for OpenAL (openal.org), which uses a
lock-free message queue to do this.
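(A very rough sketch of that approach - this is not the actual OpenAL source; the queue type, names, and the single-producer assumption are all illustrative:)

#include <AudioToolbox/AudioToolbox.h>
#include <stdatomic.h>
#include <stdbool.h>

// One pending "set this bus's render callback" request.
// callback.inputProc == NULL means "clear the callback on this bus".
typedef struct {
    UInt32                 bus;
    AURenderCallbackStruct callback;
} BusRequest;

// Minimal single-producer / single-consumer ring buffer.
#define kQueueSize 32
typedef struct {
    BusRequest  items[kQueueSize];
    atomic_uint head;      // advanced by the render thread
    atomic_uint tail;      // advanced by the non-render thread
    AudioUnit   mixer;     // the matrix mixer unit
} BusQueue;

// Called from the non-render thread to request a callback change.
static bool EnqueueBusRequest(BusQueue *q, const BusRequest *req)
{
    unsigned tail = atomic_load_explicit(&q->tail, memory_order_relaxed);
    unsigned next = (tail + 1) % kQueueSize;
    if (next == atomic_load_explicit(&q->head, memory_order_acquire))
        return false;                          // queue full; try again later
    q->items[tail] = *req;
    atomic_store_explicit(&q->tail, next, memory_order_release);
    return true;
}

// Render notify installed with AUGraphAddRenderNotify; drains the queue on
// the render thread before the graph renders, so the callback changes never
// race with an in-flight render.
static OSStatus BusMaintenanceNotify(void *inRefCon,
                                     AudioUnitRenderActionFlags *ioActionFlags,
                                     const AudioTimeStamp *inTimeStamp,
                                     UInt32 inBusNumber,
                                     UInt32 inNumberFrames,
                                     AudioBufferList *ioData)
{
    if (!(*ioActionFlags & kAudioUnitRenderAction_PreRender))
        return noErr;

    BusQueue *q = (BusQueue *)inRefCon;
    unsigned head = atomic_load_explicit(&q->head, memory_order_relaxed);
    while (head != atomic_load_explicit(&q->tail, memory_order_acquire)) {
        BusRequest *req = &q->items[head];
        AudioUnitSetProperty(q->mixer, kAudioUnitProperty_SetRenderCallback,
                             kAudioUnitScope_Input, req->bus,
                             &req->callback, sizeof(req->callback));
        head = (head + 1) % kQueueSize;
        atomic_store_explicit(&q->head, head, memory_order_release);
    }
    return noErr;
}

// Installation (queue zero-initialised, q.mixer set to the matrix mixer AU):
//   AUGraphAddRenderNotify(graph, BusMaintenanceNotify, &busQueue);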
Although my preliminary tests are working fine so far, I appear to
be playing with fire by changing the bus render callbacks while the
AUGraph is initialized.
If that is true, what other approach can I use? I do not presently
see any other way to support an arbitrary number of files out of
the same AUHAL audio unit without either interrupting playback or
having a relatively low limit on the number of sounds that can play
simultaneously.
Surely I'm missing something...?
One other thing you could do with the matrix mixer is to just connect
everything up and then use the MM's enable/disable parameter - this
has the effect of the MMixer NOT calling that input if the input is
disabled (and, when you disable an input, it will also fade that
input out to avoid glitches).
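(That parameter is set with AudioUnitSetParameter on the input scope; a quick sketch, assuming the kMatrixMixerParam_Enable constant from AudioUnitParameters.h:)

#include <AudioToolbox/AudioToolbox.h>

// Sketch: enable/disable a matrix mixer input bus instead of touching its callback.
static OSStatus SetBusEnabled(AudioUnit matrixMixer, UInt32 bus, Boolean enabled)
{
    return AudioUnitSetParameter(matrixMixer,
                                 kMatrixMixerParam_Enable,
                                 kAudioUnitScope_Input,
                                 bus,
                                 enabled ? 1.0f : 0.0f,
                                 0 /* inBufferOffsetInFrames */);
}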
You should also make sure that you set the silence flag - the MM will
not spend time mixing a buffer if the input provider describes the
buffer as being silent.
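(Inside the bus render callback that means something like the following; NothingToPlay is a hypothetical helper standing in for your own "is this source idle" check:)

#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

extern Boolean NothingToPlay(void *refCon);   // hypothetical helper

// Sketch: mark a buffer as silent so the mixer can skip mixing it.
static OSStatus MyFileRenderProc(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    if (NothingToPlay(inRefCon)) {
        for (UInt32 i = 0; i < ioData->mNumberBuffers; i++)
            memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
        *ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
        return noErr;
    }
    // ... otherwise fill ioData with audio from the file ...
    return noErr;
}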
Bill
Thanks for any insight,
Chris
On Oct 28, 2006, at 6:14 PM, Kurt Revis wrote:
- My application allows sound designers to play back multiple
sounds during a show.
- Each sound is driven by an AUGraph, with an AUHAL audio unit
for the output.
Are you creating a separate AUHAL for each sound you play? If so,
why not just create one, put a stereo mixer AU in front of it, and
then play each sound through a separate input to the mixer?
My understanding is that the intended usage pattern is to use one
output AU and the mixer. You *can* create separate output AUs for
the same device in the same process, and it will work, but it
probably isn't as efficient. Odds are that the bug you're hitting
now would not happen if you used a mixer, since all the mixing
would be happening in your process, and the driver level would be
completely unaware of how many sounds you're playing at once.
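(A bare-bones sketch of that single-output-plus-mixer arrangement, illustrative only and with error handling omitted; the stereo mixer here could just as well be the matrix mixer discussed above:)

#include <AudioToolbox/AudioToolbox.h>

// Sketch: one AUGraph with a single AUHAL output fed by a stereo mixer.
static void BuildOutputGraph(AUGraph *outGraph, AUNode *outMixerNode)
{
    NewAUGraph(outGraph);

    AudioComponentDescription outDesc = { kAudioUnitType_Output,
                                          kAudioUnitSubType_HALOutput,
                                          kAudioUnitManufacturer_Apple, 0, 0 };
    AudioComponentDescription mixDesc = { kAudioUnitType_Mixer,
                                          kAudioUnitSubType_StereoMixer,
                                          kAudioUnitManufacturer_Apple, 0, 0 };
    AUNode outputNode;
    AUGraphAddNode(*outGraph, &outDesc, &outputNode);
    AUGraphAddNode(*outGraph, &mixDesc, outMixerNode);

    AUGraphOpen(*outGraph);

    // Mixer output 0 -> output unit input 0; each sound then feeds its own
    // mixer input bus, so the device driver only ever sees one client.
    AUGraphConnectNodeInput(*outGraph, *outMixerNode, 0, outputNode, 0);

    AUGraphInitialize(*outGraph);
    AUGraphStart(*outGraph);
}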
--
mailto:email@hidden
tel: +1 408 974 4056
__________________________________________________________________________
"Much human ingenuity has gone into finding the ultimate Before.
The current state of knowledge can be summarized thus:
In the beginning, there was nothing, which exploded" - Terry Pratchett
__________________________________________________________________________
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden