Need to access/manipulate mixed output: MultiChannelMixer + AudioConverter, or mix manually?
- Subject: Need to access/manipulate mixed output: MultiChannelMixer + AudioConverter, or mix manually?
- From: Michael Tyson <email@hidden>
- Date: Tue, 14 Feb 2012 19:25:08 +0100
Hello!
I'm looking for a little advice:
I'm working on my iOS audio engine and implementing support for fairly sophisticated rendering/processing graphs, so that subsets of channels can be grouped and pre-mixed prior to extracting or manipulating their combined output (which is more efficient and tractable than, say, filtering the audio of each track individually).
For example, say I have 10 channels, and the user wants to apply an echo effect on five of them. Rather than run five echo filters in parallel, it's much more efficient to pre-mix those five channels, then apply a single echo filter.
The graph would then look like:
{ { C1 } { C2 } { C3 } { C4 } { C5 } => filter } { C6 } { C7 } { C8 } { C9 } { C10 }
I'm also refactoring the audio engine to support arbitrary audio stream formats.
Consequently, I need to be able to access and manipulate the mixed output of each track group, in the assigned audio stream format.
I originally implemented this with a MultiChannelMixer at each node of the graph, and accessed the rendered audio by registering a callback with AudioUnitAddRenderNotify (which also lets me manipulate the audio in place in the buffers provided). I quickly realised it wasn't going to work with arbitrary audio formats, however: the output format of the MultiChannelMixer can't really be set, except for the sample rate. From the docs: "On the output scope, set just the application sample rate." (In practice, a couple of formats did work, but a few didn't, throwing kAudioUnitErr_FormatNotSupported errors.)
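For reference, the render-notify approach looks roughly like this (a minimal sketch; the `mixerUnit` handle and the registration call site are assumptions, not code from my engine):

```c
#include <AudioToolbox/AudioToolbox.h>

// Post-render tap on a group's MultiChannelMixer. The notify callback
// fires both before and after the unit renders; we only act in the
// post-render phase, once the mixer has filled ioData.
static OSStatus groupRenderNotify(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData) {
    if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
        // ioData holds the group's mixed output, in the mixer's own
        // output format; it can be read or modified in place here
        // (e.g. run the shared echo filter over it).
    }
    return noErr;
}

// Registered once during graph setup, e.g.:
// AudioUnitAddRenderNotify(mixerUnit, groupRenderNotify, contextPtr);
```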
Obviously, if the engine is configured with an output format that the MultiChannelMixer can provide, then there's no problem. But this won't always be the case (for example, non-interleaved 16-bit audio doesn't work).
So, I'm considering two options for accessing/manipulating the mixed output of subsets of channels:
1. MultiChannelMixers + AudioConverters - Use an AudioConverter to convert from whatever the MultiChannelMixer's output format is into the main format, then extract/process the audio. If the audio is altered, it then needs to be converted back to the mixer's format and written into the buffer provided by the mixer's render callback. Alternatively, I could replace the AUGraph connection with an input callback on the next mixer in the chain, which calls AudioUnitRender on the MultiChannelMixer and performs just one conversion, from the mixer's output format.
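The single-conversion variant of option 1 would look something like this (a sketch under assumptions: the `GroupContext` struct, the `scratch` buffer allocation, and bus numbers are mine; both formats are linear PCM at the same sample rate, so the simple buffer-to-buffer conversion call applies):

```c
#include <AudioToolbox/AudioToolbox.h>

typedef struct {
    AudioUnit         groupMixer;  // the group's MultiChannelMixer
    AudioConverterRef converter;   // mixer output format -> engine format
    AudioBufferList  *scratch;     // pre-allocated, in the mixer's format
} GroupContext;

// Input callback installed on the next mixer in the chain, replacing the
// AUGraph connection: render the group mixer into a scratch buffer, then
// convert once into the engine's format in ioData.
static OSStatus groupInputCallback(void *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp *inTimeStamp,
                                   UInt32 inBusNumber,
                                   UInt32 inNumberFrames,
                                   AudioBufferList *ioData) {
    GroupContext *ctx = (GroupContext *)inRefCon;
    OSStatus status = AudioUnitRender(ctx->groupMixer, ioActionFlags,
                                      inTimeStamp, 0, inNumberFrames,
                                      ctx->scratch);
    if (status != noErr) return status;
    // Same sample rate on both sides, so no rate conversion state is
    // involved; this is effectively a per-buffer format translation.
    return AudioConverterConvertComplexBuffer(ctx->converter, inNumberFrames,
                                              ctx->scratch, ioData);
}
```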
2. Perform all mixing manually from an input callback, using the Accelerate framework for the heavy lifting, and keep everything in the same audio format.
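The core of option 2 is just an accumulate loop per channel. A portable sketch (the function names are mine; on iOS the inner loop would be handed to Accelerate, e.g. vDSP_vadd, rather than written by hand):

```c
#include <stddef.h>

// Accumulate one source channel into a mix buffer
// (non-interleaved float samples).
static void mix_accumulate(float *mix, const float *src, size_t frames) {
    for (size_t i = 0; i < frames; i++)
        mix[i] += src[i];
}

// Mix a group of channels into `mix`, which must be zeroed beforehand.
// Everything stays in the engine's own format, so no conversion step
// is needed anywhere in the graph.
static void mix_group(float *mix, const float *const *channels,
                      size_t numChannels, size_t frames) {
    for (size_t c = 0; c < numChannels; c++)
        mix_accumulate(mix, channels[c], frames);
}
```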
Is one option clearly superior? I'm always in favour of using what's already there rather than re-inventing the wheel, but given the limited output formats MultiChannelMixer can provide, am I better off throwing it all away and rolling my own? Or is the audio converter likely to be light enough to make it worth sticking with what's there?
Many thanks,
Michael
Coreaudio-api mailing list (email@hidden)