On Nov 25, 2008, at 12:15 PM, Pierre Zeeman wrote:
Bill,
On Mon, Nov 24, 2008 at 11:13 PM, William Stewart
<email@hidden> wrote:
So, for iPhone what you would do is:
Create a graph that is a mixer unit connected to an output unit. Each input to the mixer is going to be a render callback. So, in this instance it is very similar to the default output example, except that now you have many potential inputs and the ability to mix them, control their volume, their pan/location (depending on which mixer you use).
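[Editor's note: a minimal sketch of that setup, not from the original thread. It assumes the AudioToolbox AUGraph API on iPhone OS; the MultiChannelMixer subtype is shown, but the same shape applies to the 3D mixer. Error checking is elided.]

```c
#include <AudioToolbox/AudioToolbox.h>

// Build a graph: mixer unit -> RemoteIO output unit.
static AUGraph BuildMixerGraph(void)
{
    AUGraph graph = NULL;
    NewAUGraph(&graph);

    AudioComponentDescription mixerDesc = {
        .componentType         = kAudioUnitType_Mixer,
        .componentSubType      = kAudioUnitSubType_MultiChannelMixer, // or the 3D mixer
        .componentManufacturer = kAudioUnitManufacturer_Apple
    };
    AudioComponentDescription outputDesc = {
        .componentType         = kAudioUnitType_Output,
        .componentSubType      = kAudioUnitSubType_RemoteIO,
        .componentManufacturer = kAudioUnitManufacturer_Apple
    };

    AUNode mixerNode, outputNode;
    AUGraphAddNode(graph, &mixerDesc, &mixerNode);
    AUGraphAddNode(graph, &outputDesc, &outputNode);

    // Mixer output bus 0 feeds the output unit's input bus 0.
    AUGraphConnectNodeInput(graph, mixerNode, 0, outputNode, 0);

    AUGraphOpen(graph);
    AUGraphInitialize(graph);
    return graph;  // caller can then AUGraphStart(graph)
}
```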
The documentation is a little thin on the ground here (I find it strange that the otherwise comprehensive TN2112 makes no mention of the input(s) to the 3dmixerAU). So, am I correct in thinking that we should call
AudioUnitSetProperty( <mixer unit>, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input, <AudioUnitElement>, &<AURenderCallbackStruct>, sizeof(<AURenderCallbackStruct>))
for each render callback we wish to add to the mixer in question?
yes
And, also, that the <AudioUnitElement> here will be the bus number of the mixer that we wish to add the callback to?
yes
So, by analogy with a mixing desk, each callback is an audio source (file player, generator, what have you) that we are connecting to a particular channel in the mixer.
yes - an audio unit can have a source from either a callback or an audio unit connection.
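[Editor's note: a concrete sketch of the call discussed above, not from the original thread. MyRenderProc and mySourceState are placeholder names for your own callback and context.]

```c
#include <AudioToolbox/AudioToolbox.h>

// One render callback per mixer input bus.
static void AttachSourceToBus(AudioUnit mixerUnit, UInt32 busNumber,
                              AURenderCallback MyRenderProc, void *mySourceState)
{
    AURenderCallbackStruct callback = {
        .inputProc       = MyRenderProc,
        .inputProcRefCon = mySourceState  // passed back to your proc on each render
    };
    AudioUnitSetProperty(mixerUnit,
                         kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input,
                         busNumber,           // the mixer input bus for this source
                         &callback,
                         sizeof(callback));
}
```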
However, one thing to note here is that on their own these calls are not thread-safe - so if you set connections or callbacks on an audio unit while it is also rendering, you can have problems. You are better off using AUGraph, as it manages that for you.
Callbacks and Connections in an AUGraph are called graph interactions (so you use that API rather than AudioUnitSetProperty)
If I understand your reply here correctly, it's not possible to have those sources themselves be on an audio graph: one couldn't feed the output of a source into the input of another and then send that to the mixer unit. Any serial processing must occur entirely within each callback.
not quite.
What isn't possible is this: on the iPhone, the only audio units that AUGraph knows about are the ones that Apple ships. So, you couldn't do the following (which you could on a desktop):
(1) Write an audio unit yourself (say an AUGenerator)
(2) Register it when your app launches (or install it so any app can use it)
(3) Tell AUGraph your audio unit's component type for a graph node - AUGraph would then use the AudioComponent APIs and be able to find it.
However, because you can't register your audio unit on the iPhone, AUGraph can't find it (we should try to fix this, I think)....
So, you can have the sources in an AUGraph be a callback - and then from that callback you call directly into your code - however you decide to implement that code.
And yes, the processing must occur within each callback, just like when you call AudioUnitRender, the processing occurs within that call as well.
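[Editor's note: a skeleton of such a render callback, added for illustration; the signature is the standard AURenderCallback shape, and the body here just writes silence where your source and per-source DSP would go.]

```c
#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

// All per-source processing happens inside this call:
// pull from your file reader/generator, then apply any serial DSP in place.
static OSStatus MyRenderProc(void                       *inRefCon,
                             AudioUnitRenderActionFlags *ioActionFlags,
                             const AudioTimeStamp       *inTimeStamp,
                             UInt32                      inBusNumber,
                             UInt32                      inNumberFrames,
                             AudioBufferList            *ioData)
{
    // Placeholder: fill every buffer with silence.
    for (UInt32 i = 0; i < ioData->mNumberBuffers; ++i)
        memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
    return noErr;
}
```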
hope that makes sense
Bill
Or do I have this wrong?
Thanks and regards,
Pierre