Message: 2
Date: Thu, 28 Feb 2013 19:30:15 -0800
From: Jeff Moore <email@hidden>
To: "email@hidden" <email@hidden>
Subject: Re: Where should CAStreamBasicDescription be instantiated?
Message-ID: <email@hidden>
Content-Type: text/plain; charset=windows-1252
AudioUnits don't get to control the buffer size. That belongs to the host application. Further, as it says in <AudioUnit/AUComponent.h>, all AUs, with a few exceptions, are expected to work in real time and thus can only request the same amount of audio input as they are being asked to produce for output.
That said, there is no restriction on the amount of latency an AU can introduce, provided that this amount is published through the appropriate properties. This allows you to buffer up the data a bit. For example, if the algorithm needs X frames, the AU would return silence for the first X frames it gets pulled for while still pulling on its input. Then, once the X frames have been accumulated, the AU would start putting out actual data. This is how you would do a look-ahead limiter, for example.
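For illustration, here is a rough, untested sketch of that scheme against the SDK's C++ AUEffectBase class. The class name, kLookaheadFrames, and the std::vector FIFO are all made up for the example; a real AU would preallocate a real-time-safe ring buffer rather than grow a vector on the render thread.

    #include "AUEffectBase.h"   // Core Audio SDK C++ base class
    #include <string.h>
    #include <vector>

    // Illustrative look-ahead amount; assumed here to be a multiple of the
    // host's slice size so the delay comes out exact (otherwise it rounds
    // up to the next slice boundary).
    static const UInt32 kLookaheadFrames = 2048;

    class LookaheadEffect : public AUEffectBase {
    public:
        LookaheadEffect(AudioComponentInstance inUnit) : AUEffectBase(inUnit) {}

        // Publish the delay (in seconds) so hosts can compensate for it.
        // This is what backs kAudioUnitProperty_Latency.
        virtual Float64 GetLatency()
        {
            return (Float64)kLookaheadFrames /
                   GetOutput(0)->GetStreamFormat().mSampleRate;
        }

        virtual OSStatus ProcessBufferLists(AudioUnitRenderActionFlags &ioFlags,
                                            const AudioBufferList &inBufferList,
                                            AudioBufferList &outBufferList,
                                            UInt32 inFramesToProcess)
        {
            // Mono for brevity; a real effect would walk every buffer/channel.
            const float *in = (const float *)inBufferList.mBuffers[0].mData;
            float *out = (float *)outBufferList.mBuffers[0].mData;

            // Always consume the slice we were handed.
            mFIFO.insert(mFIFO.end(), in, in + inFramesToProcess);

            if (mFIFO.size() < kLookaheadFrames + inFramesToProcess) {
                // Still priming: emit silence while the look-ahead fills.
                memset(out, 0, inFramesToProcess * sizeof(float));
            } else {
                // Primed: emit the oldest frames, now kLookaheadFrames late.
                memcpy(out, &mFIFO[0], inFramesToProcess * sizeof(float));
                mFIFO.erase(mFIFO.begin(), mFIFO.begin() + inFramesToProcess);
            }
            return noErr;
        }

    private:
        std::vector<float> mFIFO;   // stand-in for a preallocated ring buffer
    };

Hosts that honor kAudioUnitProperty_Latency will line the delayed output back up with the rest of the mix.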
--
Jeff Moore
Core Audio
Apple
On Feb 28, 2013, at 12:50 PM, Jim Griffin <email@hidden> wrote:
Hello Jeff,
I am subclassing the public Audio Unit AUEffectBase class and have overridden the Render method to try to use the PullInput method and GetInput(0)->GetBufferList() to retrieve more than one input buffer.
I'm trying to implement the PICOLA algorithm to control the time scale and pitch of an audio stream. The algorithm computes a pitch-period value used to determine which parts of the audio stream can be removed while still leaving the stream intelligible. I want to minimize the chipmunk-voice effect when the audio stream is sped up to several times its normal rate.
The pitch-period stage of the PICOLA algorithm needs about 1500-2000 samples to begin its calculations, and the default buffer size of 512 frames isn't enough to start with.
I've tried calling the PullInput method and GetInput(0)->GetBufferList() in a do … while loop to get 3 or 4 buffers of audio data, but the calls don't return new data; I just get the same buffer contents 3 or 4 times in a row.
I am looking for a way to have the Audio Unit give me more than 512 float samples per audio channel at a time.
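Applying Jeff's advice above to this case would look roughly like the following sketch: pull the input once per render cycle, append each 512-frame slice to an analysis buffer, and run the algorithm only once enough samples have accumulated. MyPicolaUnit, mAnalysisBuffer, kMinAnalysisFrames, and ProcessPicola are hypothetical names; Render, PullInput, GetInput, and GetBufferList are the actual AUEffectBase/AUInputElement calls.

    // Hypothetical AUEffectBase subclass; only the render path is shown
    // (requires "AUEffectBase.h", <string.h>, and <vector>).
    // The host advances its timeline between render cycles, so PullInput
    // should be called once per cycle -- calling it repeatedly inside a
    // single Render just returns the same 512 frames each time.
    static const size_t kMinAnalysisFrames = 2000;   // PICOLA warm-up size

    OSStatus MyPicolaUnit::Render(AudioUnitRenderActionFlags &ioActionFlags,
                                  const AudioTimeStamp &inTimeStamp,
                                  UInt32 inNumberFrames)
    {
        OSStatus err = GetInput(0)->PullInput(ioActionFlags, inTimeStamp,
                                              0 /* element */, inNumberFrames);
        if (err != noErr) return err;

        const AudioBufferList &inList = GetInput(0)->GetBufferList();
        const float *in = (const float *)inList.mBuffers[0].mData;

        // Append this cycle's slice; mAnalysisBuffer is a member
        // std::vector<float> (a preallocated ring buffer in real code).
        mAnalysisBuffer.insert(mAnalysisBuffer.end(), in, in + inNumberFrames);

        AudioBufferList &outList = GetOutput(0)->GetBufferList();
        float *out = (float *)outList.mBuffers[0].mData;

        if (mAnalysisBuffer.size() < kMinAnalysisFrames) {
            // Not enough context for the pitch-period analysis yet: emit
            // silence and report the delay through GetLatency().
            memset(out, 0, inNumberFrames * sizeof(float));
        } else {
            // Enough accumulated: run the time-scale processing over the
            // buffered samples and emit inNumberFrames of its output.
            // ProcessPicola is a placeholder for the PICOLA stage itself.
            ProcessPicola(mAnalysisBuffer, out, inNumberFrames);
        }
        return noErr;
    }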
------------------------------
_______________________________________________
Coreaudio-api mailing list
email@hidden
https://lists.apple.com/mailman/listinfo/coreaudio-api
End of Coreaudio-api Digest, Vol 10, Issue 71
*********************************************