Re: iphone Audio Unit mixer balance
- Subject: Re: iphone Audio Unit mixer balance
- From: Brian Willoughby <email@hidden>
- Date: Thu, 25 Jun 2009 16:56:43 -0700
On Jun 25, 2009, at 16:28, james mccartney wrote:
On Jun 25, 2009, at 3:42 PM, Brian Willoughby wrote:
On Jun 25, 2009, at 15:23, james mccartney wrote:
For performance, you shouldn't be using an ObjC message send per
sample to fill your buffer. It would be better to write a method
that returns a buffer pointer and do the buffer filling in C.
Agreed.
However, to be pedantic, one also shouldn't even be using a
standard C function call per sample to fill a buffer
Yes, I wasn't suggesting that.
I was saying basically, replace this:
for (int i = 0; i<n; ++i) out[i] = [myObj nextSample];
with this:
short* in = [myObj getBuffer];
for (int i = 0; i<n; ++i) out[i] = in[i];
No worries. I didn't think you were suggesting anything improper,
which is why I said that I "agreed."
However, I did want to clarify one thing: Your optimization has very
little to do with the original language (ObjC vs. C) and everything
to do with the standard optimization of operating on buffers rather
than individual samples.
However, it does depend.
If you are doing some very complex algorithm, it may be a win to
call a vecLib routine per sample.
Measure.
And another thing to keep in mind is that most vecLib routines are
optimized for operating on entire buffers rather than individual
samples. For the majority of cases, you'll probably end up calling a
few vecLib routines, each of which processes the entire buffer, not
just one sample. In some cases it ends up being more efficient to
pass through the entire buffer three times for three operations than
to do three operations on each individual sample before proceeding
to the next sample.
There are exceptions. First of all, once the number of operations
gets lengthy enough, the only way to get optimum performance is to
write your own vecLib-style routine which is properly coded to take
advantage of the vector instructions of the processor. Second,
there are a few algorithms, such as convolution, that perform a
whole buffer's worth of work for each output sample; but even in
that case there is a vecLib call which handles the entire buffer, so
you would probably never find a case where you call a vecLib routine
per sample. If I'm missing something on the latter, feel free to point
it out. My goal here is to enlighten folks about the proper way to
optimize audio processing by setting some basic expectations.
Brian Willoughby
Sound Consulting
Coreaudio-api mailing list (email@hidden)