Re: modify-in-place AudioUnits in graphs for iOS
- Subject: Re: modify-in-place AudioUnits in graphs for iOS
- From: Charlie Roberts <email@hidden>
- Date: Thu, 16 Jun 2011 15:01:12 -0700
OK, I see that by explicitly calling AudioUnitRender I can have samples processed by AudioUnits that are external to the graph. So is the correct way to implement a chain like this:
mic -> echo -> reverb -> mixer -> out
to do the following (see the sketch after this list)?
1. Create a graph with mixer and RemoteIO audio unit nodes
2. Create echo and reverb audio units and DO NOT PLACE THEM IN THE GRAPH
3. Assign the mixer an input callback that pulls from the reverb audio unit using AudioUnitRender
4. Give the reverb unit an input callback that pulls from the echo audio unit, again via AudioUnitRender
5. Give the echo unit an input callback that pulls from bus 1 (the mic input) of the RemoteIO audio unit
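Concretely, here is roughly what I mean for step 3. This is a minimal sketch; reverbUnit, graph, and mixerNode are placeholder names of mine, and passing the reverb unit as the refCon is my own assumption:

#include <AudioToolbox/AudioToolbox.h>

// Input callback for the mixer: pulls rendered samples from the
// out-of-graph reverb unit (passed as the refCon when the callback was set).
static OSStatus mixerInputCallback(void *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp *inTimeStamp,
                                   UInt32 inBusNumber,
                                   UInt32 inNumberFrames,
                                   AudioBufferList *ioData)
{
    AudioUnit reverbUnit = (AudioUnit)inRefCon;
    // The reverb's own input callback pulls from echo (step 4),
    // which in turn pulls from RemoteIO bus 1 (step 5).
    return AudioUnitRender(reverbUnit, ioActionFlags, inTimeStamp,
                           0 /* reverb output bus */, inNumberFrames, ioData);
}

// In setup code, attach the callback to the mixer's input bus 0:
AURenderCallbackStruct cb = { mixerInputCallback, reverbUnit };
AUGraphSetNodeInputCallback(graph, mixerNode, 0, &cb);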
I can't imagine this is actually the right way to do it, as hardcoding calls to AudioUnitRender against specific audio units defeats the ability to dynamically insert nodes into the graph. But I also can't figure out a better way, and I haven't found any example code that shows one.
Ideally, in my mind, I would connect every signal processing block as a node in the graph. Each node would have a callback associated with it, and the buffer list passed to each callback would automatically be filled with samples produced by the node one step upstream in the graph. But that doesn't seem to be how it works...
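For nodes that don't need a custom callback, connections do give you exactly that automatic buffer handoff. A sketch, assuming echoNode, reverbNode, and mixerNode were created with AUGraphAddNode:

// Each connection makes the destination's input pull its samples
// from the source node's output bus.
AUGraphConnectNodeInput(graph, echoNode, 0,    // source node, output bus
                        reverbNode, 0);        // destination node, input bus
AUGraphConnectNodeInput(graph, reverbNode, 0, mixerNode, 0);

The trouble, as far as I can tell, is combining a connection like that with a custom render callback on the same input.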
Any clarification is much appreciated... or if anyone knows of a code sample that chains multiple audio units together in iOS (with more than one of the units having a custom render callback), please let me know! - Charlie
On Tue, Jun 14, 2011 at 5:28 PM, Charlie Roberts <email@hidden> wrote:
I want to pass a buffer into a node from another node and then use a callback to process that buffer. But whenever I connect an output node to an input node that has a render callback assigned to it, the graph fails to initialize. So I'm assuming this is not the correct way to do things in a graph.
I've tried using both mixer and format converter audio units with the same result. I'm obviously missing something conceptually here. How does an audio unit in a graph on iOS accept input and process it in a render callback?
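To be concrete, the setup that fails looks roughly like this (node and callback names are placeholders of mine):

// A connection feeds the mixer's input bus 0...
AUGraphConnectNodeInput(graph, sourceNode, 0, mixerNode, 0);

// ...while a render callback is also assigned to that same input bus:
AURenderCallbackStruct cb = { myProcessingCallback, NULL };
AUGraphSetNodeInputCallback(graph, mixerNode, 0, &cb);

// With both in place, initializing the graph fails:
OSStatus err = AUGraphInitialize(graph);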
Thanks in advance for any clarification - Charlie