Re: Where should CAStreamBasicDescription be instantiated?
- Subject: Re: Where should CAStreamBasicDescription be instantiated?
- From: Jeff Moore <email@hidden>
- Date: Tue, 05 Mar 2013 11:19:46 -0800
On Mar 4, 2013, at 1:50 PM, Jim Griffin <email@hidden> wrote:
> Hello Jeff,
>
> I have been looking over your suggestions and I can set the AU Lab maximum buffer size but that does not seem to get propagated through to my component.
That seems odd. Presumably, you mean you went into AU Lab's prefs, clicked on the device's tab and changed the "Frames" pop-up?
At any rate, I just tested this and it seemed to work correctly as far as I could tell.
> It looks like I need to set up a listener to handle that.
>
> I looked through the docs for kAudioDevicePropertyBufferFrameSize but the docs I located do not seem to address setting up listeners between AU Lab and a component.
>
> Do you know which docs I should read through and any example apps that would address that?
You would never do that. Like I said initially, the AU generally has no influence over or access to the IO buffer size of the app. What the AU will be told is the largest buffer it will ever be pulled for, via the property kAudioUnitProperty_MaximumFramesPerSlice.
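For illustration, the host side looks roughly like this (an untested sketch, not taken from any sample; error handling omitted), and inside an AUBase/AUEffectBase subclass the value the host set can then be read back with GetMaxFramesPerSlice():

    // Host side: tell the AU the largest number of frames it will ever be
    // asked to render in a single pull. Set this before AudioUnitInitialize().
    UInt32 maxFrames = 4096;   // e.g. sized to match the device's IO buffer
    OSStatus err = AudioUnitSetProperty(audioUnit,
                                        kAudioUnitProperty_MaximumFramesPerSlice,
                                        kAudioUnitScope_Global,
                                        0,                 // element
                                        &maxFrames,
                                        sizeof(maxFrames));

    // AU side (in an AUBase/AUEffectBase subclass): once the host has set
    // the property, the value is available as GetMaxFramesPerSlice().
    UInt32 largestPull = GetMaxFramesPerSlice();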
--
Jeff Moore
Core Audio
Apple
> Thanks.
>
> Jim Griffin
>
>
> On Mar 1, 2013, at 4:29 PM, Jim Griffin <email@hidden> wrote:
>
>> Jeff,
>>
>> Thanks for the suggestions. The audio component I am working on will be controlled by my host application so I take it that I can have the buffers increased to accommodate the buffer needs of the component?
>>
>> Increasing the latency of the component at the start of my computations sounds pretty good also. I will need to experiment on what happens when a user changes the playback speed in the middle of playback.
>>
>>
>> Jim Griffin
>> Macintosh Software Developer
>> email@hidden
>>
>>
>>>
>>> Message: 2
>>> Date: Thu, 28 Feb 2013 19:30:15 -0800
>>> From: Jeff Moore <email@hidden>
>>> To: "email@hidden" <email@hidden>
>>> Subject: Re: Where should CAStreamBasicDescription be instantiated?
>>> Message-ID: <email@hidden>
>>> Content-Type: text/plain; charset=windows-1252
>>>
>>> AudioUnits don't get to control the buffer size. That belongs to the host application. Further, as it says in <AudioUnit/AUComponent.h>, all AUs, with a few exceptions, are expected to work in real time and thus can only request the same amount of audio input as they are being asked to produce for output.
>>>
>>> That said, there is no restriction on the amount of latency an AU can introduce, provided that this amount is published through the appropriate properties. This allows you to buffer up the data a bit. For example, if the algorithm needs X frames, the AU would return silence for the first X frames of getting pulled while still pulling on its input. Then, once the X frames have been accumulated, the AU would start putting out actual data. This is how you would do a look-ahead limiter, for example.
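To make that concrete, here is a rough, untested sketch of that pattern in an AUEffectBase subclass (kLookAheadFrames and the mFifo helper are made-up names for illustration, not SDK API):

    // In the AU subclass: publish the look-ahead as latency, in seconds,
    // so the host can compensate for it.
    virtual Float64 GetLatency()
    {
        return (Float64)kLookAheadFrames / GetSampleRate();
    }

    // In the kernel: the base class keeps pulling input for every render call.
    // Append it to an internal FIFO and emit silence until kLookAheadFrames
    // have accumulated; after that, emit the delayed samples.
    void MyKernel::Process(const Float32 *inSource, Float32 *inDest,
                           UInt32 inFramesToProcess, UInt32 inNumChannels,
                           bool &ioSilence)
    {
        mFifo.Write(inSource, inFramesToProcess);
        if (mFifo.FramesAvailable() < kLookAheadFrames + inFramesToProcess) {
            memset(inDest, 0, inFramesToProcess * sizeof(Float32)); // still priming
        } else {
            mFifo.Read(inDest, inFramesToProcess);                  // delayed output
            ioSilence = false;
        }
    }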
>>>
>>> --
>>>
>>> Jeff Moore
>>> Core Audio
>>> Apple
>>>
>>>
>>>
>>>
>>> On Feb 28, 2013, at 12:50 PM, Jim Griffin <email@hidden> wrote:
>>>
>>>> Hello Jeff,
>>>>
>>>> I am subclassing the Audio Unit public AUEffectBase class and have overridden the Render method to try to use the PullInput method and GetInput(0)->GetBufferList() to retrieve more than one input buffer.
>>>>
>>>> I'm trying to implement a PICOLA algorithm method to control the time-scale and pitch of an audio stream. This algorithm computes a pitch period value used to determine which parts of the audio stream can be removed and still let the audio stream be understandable. I want to minimize the chipmunk voice effect when the audio stream is sped up a few times.
>>>>
>>>> The pitch period of the PICOLA algorithm needs about 1500-2000 data points to begin its calculations, and the default buffer value of 512 isn't enough to start with.
>>>>
>>>> I've tried using the PullInput method and the GetInput(0)->GetBufferList() method in a do … while loop to get 3 or 4 buffers of audio data but the methods don't seem to get new data. I just get the same buffer data 3 or 4 times in a row.
>>>>
>>>> I am looking for a way to have the Audio Unit give me more than 512 float data points per audio channel at a time.
>>>
>>>
>>>
>>>
>>
>
>
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden