Re: iOS: Multiple AUs versus everything in one render callback
- Subject: Re: iOS: Multiple AUs versus everything in one render callback
- From: Brian Willoughby <email@hidden>
- Date: Sun, 08 Jul 2012 14:01:58 -0700
On Jul 6, 2012, at 12:26, Hari Karam Singh wrote:
> I'm doing some development on the custom sampler and audio engine
> for my (iPhone 4+) app, particularly adding recording and send
> effect features. I'm stuck trying to decide whether to go down the
> route of having everything handled in one big RemoteIO render
> callback or breaking it up into separate AU nodes.
>
> Might anyone know whether a more complex AUGraph with multiple
> RemoteIOs and a mixer AU to sum it all imposes significant overhead
> compared to doing it all in a single, well-tuned render callback?
> Is there any other reason why one would want to go one way or the
> other (such as, perhaps, the AU boundaries clipping or truncating
> the audio)?
>
> Performance is a big issue, and I'd probably just go with the single
> render callback, but I don't want to miss out on the ever-growing
> list of fx AUs available.
The answer depends significantly upon your coding skills with regard
to optimization, and also upon the specific set of operations you
need to perform versus the registers available in the processor.
Apple's Accelerate (vecLib) framework has been mentioned, but those
routines provide only very basic operations. If all you need is to
apply gain, or perform a single FFT, then Accelerate is probably
coded to be faster than anything you can do in a reasonable amount of
time.
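To make "basic operations" concrete, here is a minimal sketch of those
two examples using vDSP. The function names (apply_gain, forward_fft)
and the scratch-buffer handling are my own illustration; only the vDSP
calls themselves come from the framework:

#include <Accelerate/Accelerate.h>

// Apply a gain to a buffer of float samples in a single vDSP call.
static void apply_gain(const float *in, float *out,
                       float gain, vDSP_Length n)
{
    vDSP_vsmul(in, 1, &gain, out, 1, n);
}

// One in-place forward real FFT on 2^log2n samples. The FFTSetup
// (from vDSP_create_fftsetup) and the split-complex scratch buffers
// must be allocated once at init time, never inside the render callback.
static void forward_fft(FFTSetup setup, float *samples,
                        float *scratchReal, float *scratchImag,
                        vDSP_Length log2n)
{
    DSPSplitComplex split = { scratchReal, scratchImag };
    vDSP_Length n = 1UL << log2n;

    // Pack the real signal into split-complex form, transform in
    // place, then unpack back to the original layout.
    vDSP_ctoz((const DSPComplex *)samples, 2, &split, 1, n / 2);
    vDSP_fft_zrip(setup, &split, 1, log2n, FFT_FORWARD);
    vDSP_ztoc(&split, 1, (DSPComplex *)samples, 2, n / 2);
}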
But there are tradeoffs. You can chain several Accelerate routines,
one after the other, but each of them has to make a complete pass
through the audio data buffer. In certain situations, you can beat
the time needed by multiple Accelerate routines by writing your own
custom loop that makes only one pass through the audio data buffer,
performing several operations on the samples while they are in
registers. This requires a level of skill on par with Apple's
Accelerate developers, and those guys really know what they're doing.
There is also a reverse tradeoff: if your loop becomes too complex,
the data will not fit in the available registers, and intermediate
results will have to be written to memory. Once your loop gets that
big, you may not gain much performance by condensing everything into
a single render callback.
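As a concrete (if simplified) illustration: a gain stage followed by
a hard clip, written first as two chained vDSP passes and then as one
fused loop. The function names are mine; note that the compiler may
auto-vectorize the simple loop, and vDSP has a few fused routines of
its own (vDSP_vsma, for example), so profile both on the device
before committing to either:

#include <stddef.h>
#include <Accelerate/Accelerate.h>

// Two chained vDSP passes: apply gain, then hard-clip to [-1, 1].
// Each call traverses the whole buffer.
static void gain_then_clip_vdsp(const float *in, float *out,
                                float gain, vDSP_Length n)
{
    float lo = -1.0f, hi = 1.0f;
    vDSP_vsmul(in, 1, &gain, out, 1, n);      // pass 1 over the buffer
    vDSP_vclip(out, 1, &lo, &hi, out, 1, n);  // pass 2 over the buffer
}

// One custom pass: both operations happen while each sample sits in a
// register, so the buffer is read and written only once.
static void gain_then_clip_fused(const float *in, float *out,
                                 float gain, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        float s = in[i] * gain;
        if (s >  1.0f) s =  1.0f;
        if (s < -1.0f) s = -1.0f;
        out[i] = s;
    }
}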
Thus, one big render callback could be less efficient in some cases,
and far more efficient in others. Another possibility is to create an
AUGraph with a certain number of Apple's AU nodes as well as a render
callback that combines a subset of your operations - enough to gain
efficiency, but not so many that the tradeoffs backfire.
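One possible wiring for that hybrid approach - and this is only a
sketch, with Reverb2 chosen arbitrarily as the Apple effect node, and
error checking and stream-format setup omitted - looks like this:

#include <AudioToolbox/AudioToolbox.h>

static OSStatus MyRender(void *inRefCon,
                         AudioUnitRenderActionFlags *ioActionFlags,
                         const AudioTimeStamp *inTimeStamp,
                         UInt32 inBusNumber, UInt32 inNumberFrames,
                         AudioBufferList *ioData)
{
    // Fill ioData with the subset of custom DSP worth hand-optimizing.
    return noErr;
}

static AUGraph BuildHybridGraph(void)
{
    AUGraph graph;
    AUNode ioNode, mixNode, fxNode;

    AudioComponentDescription io  = { kAudioUnitType_Output,
        kAudioUnitSubType_RemoteIO, kAudioUnitManufacturer_Apple, 0, 0 };
    AudioComponentDescription mix = { kAudioUnitType_Mixer,
        kAudioUnitSubType_MultiChannelMixer,
        kAudioUnitManufacturer_Apple, 0, 0 };
    AudioComponentDescription fx  = { kAudioUnitType_Effect,
        kAudioUnitSubType_Reverb2, kAudioUnitManufacturer_Apple, 0, 0 };

    // Error checking of the OSStatus returns omitted for brevity.
    NewAUGraph(&graph);
    AUGraphAddNode(graph, &io,  &ioNode);
    AUGraphAddNode(graph, &mix, &mixNode);
    AUGraphAddNode(graph, &fx,  &fxNode);
    AUGraphOpen(graph);

    // Custom callback feeds the effect, the effect feeds mixer bus 0,
    // and the mixer feeds RemoteIO. In practice you would also set
    // kAudioUnitProperty_StreamFormat on each link.
    AURenderCallbackStruct cb = { MyRender, NULL };
    AUGraphSetNodeInputCallback(graph, fxNode, 0, &cb);
    AUGraphConnectNodeInput(graph, fxNode, 0, mixNode, 0);
    AUGraphConnectNodeInput(graph, mixNode, 0, ioNode, 0);

    AUGraphInitialize(graph);
    return graph;  // caller runs AUGraphStart(graph) when ready
}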
I realize that this is not the concrete answer that you perhaps
desired, but I believe it accurately reflects the issues at hand.
Brian Willoughby
Sound Consulting