Re: Processing question
- Subject: Re: Processing question
- From: Brian Willoughby <email@hidden>
- Date: Tue, 9 Jun 2009 02:24:22 -0700
On Jun 9, 2009, at 01:08, Brian Davies wrote:
My proposal was to pull frames ahead of the current Render request
(this might involve several invocations of theInput->PullInput, since
the size of theInput's buffer is out of my control), copy them into a
large internal ring buffer (the accounting will be dynamic because of
the mismatch of buffer sizes), move my processing forward nFrames in
the ring buffer, and then output precisely nFrames as requested.
Hence, with L = T = 0, the host doesn't need to know anything,
and I don't need to make assumptions about the host. Will this
play well or badly with the AU framework?
No. This will not work at all. Despite the misleading name
"PullInput," I don't think you'll be able to pull ahead of the
current time, because this would break graphs which include live
input. Perhaps there is an exception for offline audio units, but I
doubt it.
Instead, what you need to do is implement your large internal ring
buffer, report its size in the latency of your plugin, wait for
enough samples to come in to fill your buffer, then run your FFT or
other buffered process into a separate output buffer. You'll need
another ring buffer for this separate output buffer, from which you
will begin reading immediately, but there won't be any actual sample
data until the input ring buffer has been processed at least once.
In other words, the FFT consumes from the first ring buffer and
produces for the second ring buffer. Meanwhile, the input samples are
produced into the first ring buffer, and the output samples are
consumed from the second ring buffer. This is why you need Reset() -
to clear the ring buffers and zero them out with silent samples so
you don't get garbage at the beginning of a track.
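Here is a minimal sketch of that flow in plain C++, independent of the
AU SDK. The names (RingBuffer, Reset(), ProcessBlock(), kWindowSize,
kMaxHostFrames), the single-channel layout, and the non-overlapping
window hop are my own choices for illustration, not anything from the
CoreAudio headers; everything is assumed to run on the render thread,
so no locking is shown.

#include <algorithm>
#include <cstddef>
#include <vector>

// Simple single-channel ring buffer.
class RingBuffer {
public:
    explicit RingBuffer(std::size_t capacity) : buf(capacity, 0.0f) {}
    std::size_t Available() const { return count; }
    void Clear() {
        std::fill(buf.begin(), buf.end(), 0.0f);
        rd = wr = count = 0;
    }
    void Push(const float *src, std::size_t n) {   // caller ensures room
        for (std::size_t i = 0; i < n; ++i) {
            buf[wr] = src[i];
            wr = (wr + 1) % buf.size();
        }
        count += n;
    }
    void Pop(float *dst, std::size_t n) {          // caller checks Available()
        for (std::size_t i = 0; i < n; ++i) {
            dst[i] = buf[rd];
            rd = (rd + 1) % buf.size();
        }
        count -= n;
    }
private:
    std::vector<float> buf;
    std::size_t rd = 0, wr = 0, count = 0;
};

constexpr std::size_t kWindowSize    = 1024;  // FFT window, reported as latency
constexpr std::size_t kMaxHostFrames = 4096;  // worst-case host buffer size

RingBuffer inputRing(kWindowSize + kMaxHostFrames);
RingBuffer outputRing(kWindowSize + kMaxHostFrames);

// Stand-in for whatever fixed-size process you actually run (FFT, etc.).
void ProcessWindow(const float *in, float *out, std::size_t n);

// Reset(): zero both rings and pre-load the output ring with one window
// of silence, so the very first Render calls read silence instead of
// garbage while the input ring is still filling.
void Reset() {
    inputRing.Clear();
    outputRing.Clear();
    std::vector<float> silence(kWindowSize, 0.0f);
    outputRing.Push(silence.data(), kWindowSize);
}

// Called once per Render with the host's nFrames of input.
void ProcessBlock(const float *in, float *out, std::size_t nFrames) {
    inputRing.Push(in, nFrames);                    // produce into ring 1
    while (inputRing.Available() >= kWindowSize) {  // FFT consumes ring 1 ...
        float winIn[kWindowSize], winOut[kWindowSize];
        inputRing.Pop(winIn, kWindowSize);
        ProcessWindow(winIn, winOut, kWindowSize);
        outputRing.Push(winOut, kWindowSize);       // ... and produces into ring 2
    }
    outputRing.Pop(out, nFrames);                   // consume from ring 2
}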
You'll obviously need to make sure that your ring buffers are each
large enough to hold a full buffer of your FFT window plus a full
buffer of audio from the host, because you'll have no guarantee of
the overlap between the mismatched buffer sizes. You'll need to be
able to store all input samples into your ring buffer without
overflow, and you'll need to be able to consume the maximum AU host
buffer size from your output ring buffer. You could probably get by
with smaller buffers than the sum of the dissimilar sizes, if you're
willing to loop and copy buffer segments, processing the FFT to make
room whenever possible. Don't forget the buffer alignment
restrictions on the FFT libraries, too.
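To put rough numbers on it (mine, purely for illustration): with a
1024-frame window and a worst-case host buffer of 4096 frames, each
ring needs at least 1024 + 4096 = 5120 frames per channel, about 20 KB
of 32-bit floats. Note that the alignment restrictions apply to the
contiguous scratch buffers you hand to the FFT rather than to the
rings themselves, so copying each window out of the ring into an
aligned scratch buffer is usually the simplest way to satisfy them.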
I cannot remember whether there is an example of this. Perhaps
someone else on the list can point you in the right direction. For
myself, I wrote a simple AU that did nothing but report the latency
and run the two sequential ring buffers, so that I could confirm that
the audio was not corrupted during all of this shuffling between
buffers. Once you have the mismatched buffers taken care of, you can
insert the FFT or any other fixed-size-buffer processing and be
confident that there will be no problems.
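In case it helps, here is a rough sketch of that kind of sanity check,
reusing the RingBuffer, Reset(), ProcessBlock(), and kWindowSize from
my sketch above (again, my own illustration, not an SDK example): a
pass-through stand-in for the FFT, mismatched block sizes, and a check
that the output is exactly the input delayed by the reported latency.

#include <cassert>
#include <cstdio>

void ProcessWindow(const float *in, float *out, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        out[i] = in[i];                    // pass-through stands in for the FFT
}

int main() {
    Reset();
    std::vector<float> in, out;
    float sample = 1.0f;
    const std::size_t blocks[] = { 480, 512, 441, 1024, 333 };  // mismatched sizes
    for (std::size_t n : blocks) {
        std::vector<float> i(n), o(n);
        for (float &s : i) s = sample++;   // counting ramp, easy to verify
        ProcessBlock(i.data(), o.data(), n);
        in.insert(in.end(), i.begin(), i.end());
        out.insert(out.end(), o.begin(), o.end());
    }
    // The first kWindowSize output frames are the reported latency: silence.
    for (std::size_t i = 0; i < kWindowSize; ++i)
        assert(out[i] == 0.0f);
    // After that, the output is the input, sample for sample.
    for (std::size_t i = kWindowSize; i < out.size(); ++i)
        assert(out[i] == in[i - kWindowSize]);
    std::printf("no corruption across %zu frames\n", out.size());
    return 0;
}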
Brian Willoughby
Sound Consulting