Re: Coreaudio-api Digest, Vol 12, Issue 198
- Subject: Re: Coreaudio-api Digest, Vol 12, Issue 198
- From: Daniel Wilson <email@hidden>
- Date: Mon, 30 Nov 2015 14:34:06 -0600
Thank you, Roman! How do I generate a buffer from the GetLatency function? I have it declared in my default template, and it defaults to zero. Fortunately the DSP isn't my issue; I just can't figure out how to get the buffer to do the actual FFT on :(
Sent from my iPhone.
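[Editorial note: GetLatency() does not supply a buffer; the plugin maintains its own buffer, and GetLatency() only reports, in seconds, the delay that buffering introduces. A minimal sketch of that calculation, assuming a full FFT frame is accumulated before any output appears; `bufferingLatencySeconds`, `fftSize`, and `sampleRate` are illustrative names, not SDK identifiers:]

```cpp
#include <cstdint>

// Reports, in seconds, the delay introduced by buffering a full FFT frame
// of fftSize samples before any processed audio is emitted. This is the
// value a GetLatency() override would return instead of the default zero.
double bufferingLatencySeconds(uint32_t fftSize, double sampleRate) {
    return static_cast<double>(fftSize) / sampleRate;
}
```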
> On Nov 30, 2015, at 2:00 PM, email@hidden wrote:
>
> Send Coreaudio-api mailing list submissions to
> email@hidden
>
> To subscribe or unsubscribe via the World Wide Web, visit
> https://lists.apple.com/mailman/listinfo/coreaudio-api
> or, via email, send a message with subject or body 'help' to
> email@hidden
>
> You can reach the person managing the list at
> email@hidden
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Coreaudio-api digest..."
>
>
> Today's Topics:
>
> 1. Frame Size for Audio Unit Rendering (ex. FFT/IFFT) (Daniel Wilson)
> 2. Re: Frame Size for Audio Unit Rendering (ex. FFT/IFFT) (Roman)
> 3. Re: Frame Size for Audio Unit Rendering (ex. FFT/IFFT)
> (Paul Davis)
> 4. Re: Frame Size for Audio Unit Rendering (ex. FFT/IFFT)
> (Daniel Wilson)
> 5. Re: Frame Size for Audio Unit Rendering (ex. FFT/IFFT)
> (Paul Davis)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Sun, 29 Nov 2015 23:08:01 -0600
> From: Daniel Wilson <email@hidden>
> To: email@hidden
> Subject: Frame Size for Audio Unit Rendering (ex. FFT/IFFT)
> Message-ID: <email@hidden>
> Content-Type: text/plain; charset=windows-1252
>
> Does anyone know how to change the frame size when doing the digital signal processing in an audio unit? Currently my audio unit is set up so that it receives a single sample, does the signal processing, outputs the sample, and repeats the process for each sample of the audio signal. I have created quite a few audio units with this setup, but now I want to process multiple samples at the same time to do the FFT/IFFT, etc. Does anyone know how to do this? It seems like most people are using audio units for iOS, but my audio units are for OS X, to be used in programs like Logic Pro. Don’t know if that makes a difference.
>
> -Daniel
>
>
> ------------------------------
>
> Message: 2
> Date: Mon, 30 Nov 2015 13:54:04 +0300
> From: Roman <email@hidden>
> To: Daniel Wilson <email@hidden>,
> email@hidden
> Subject: Re: Frame Size for Audio Unit Rendering (ex. FFT/IFFT)
> Message-ID: <email@hidden>
> Content-Type: text/plain; charset=utf-8; format=flowed
>
> Hi Daniel,
>
> You need to implement buffering and output silence while you don't yet
> have enough audio samples for your FFT/IFFT transformation. You also need
> to report the correct values from the GetLatency/GetTail functions.
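[Editorial note: Roman's scheme, buffer the input, emit silence until a full frame exists, and report the resulting delay as latency, can be sketched outside the Audio Unit SDK like this. `BlockProcessor` and its members are illustrative names, and the "transform" is a pass-through placeholder where FFT, spectral processing, and IFFT would go:]

```cpp
#include <cstddef>
#include <vector>

// Collect input samples into a FIFO; until a full FFT frame has been
// accumulated, emit silence. The silent samples are exactly the latency
// the plugin must report (in seconds: frame size divided by sample rate).
class BlockProcessor {
public:
    explicit BlockProcessor(size_t fftSize)
        : fftSize_(fftSize), inFifo_(fftSize, 0.0f),
          outFifo_(fftSize, 0.0f), filled_(0) {}

    // Latency introduced by the buffering, in samples.
    size_t latencySamples() const { return fftSize_; }

    // Process one host-sized block of arbitrary length.
    void process(const float* in, float* out, size_t n) {
        for (size_t i = 0; i < n; ++i) {
            inFifo_[filled_] = in[i];
            out[i] = outFifo_[filled_];  // zero (silence) for the first frame
            if (++filled_ == fftSize_) {
                // A full frame is ready: FFT -> spectral DSP -> IFFT would
                // run here. The placeholder just passes the frame through.
                outFifo_ = inFifo_;
                filled_ = 0;
            }
        }
    }

private:
    size_t fftSize_;
    std::vector<float> inFifo_, outFifo_;
    size_t filled_;
};
```

With a frame of N samples, the first N output samples are silence and every later sample arrives exactly N samples late, which is the value to report from GetLatency.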
>
> On 30.11.2015 08:08, Daniel Wilson wrote:
>> Does anyone know how to change the frame size when doing the digital signal processing on an audio unit? Currently my audio unit is set up so that it receives a single sample, does the signal processing, outputs the sample, and repeats the process for each sample of the audio signal. I have created quite a few audio units with this set up but now I want to process multiple samples at the same time to do the FFT/IFFT, etc. Does anyone know how to do this? It seems like most people are using audio units for iOS, but my audio units are for OS X to be used in programs like Logic Pro. Don’t know if that makes a difference.
>>
>> -Daniel
>> _______________________________________________
>> Do not post admin requests to the list. They will be ignored.
>> Coreaudio-api mailing list (email@hidden)
>> Help/Unsubscribe/Update your Subscription:
>>
>> This email sent to email@hidden
>
> --
> Best regards,
> Roman
>
>
>
> ------------------------------
>
> Message: 3
> Date: Mon, 30 Nov 2015 08:43:48 -0500
> From: Paul Davis <email@hidden>
> To: Daniel Wilson <email@hidden>
> Cc: CoreAudio API <email@hidden>
> Subject: Re: Frame Size for Audio Unit Rendering (ex. FFT/IFFT)
> Message-ID:
> <CAFa_cKk0PEaVFzw3Uv2jFAJ=email@hidden>
> Content-Type: text/plain; charset="utf-8"
>
> AudioUnits do not get to control the buffer size delivered via a render
> call. The host decides this.
>
> On Mon, Nov 30, 2015 at 12:08 AM, Daniel Wilson <email@hidden>
> wrote:
>
>> Does anyone know how to change the frame size when doing the digital
>> signal processing on an audio unit? Currently my audio unit is set up so
>> that it receives a single sample, does the signal processing, outputs the
>> sample, and repeats the process for each sample of the audio signal. I have
>> created quite a few audio units with this set up but now I want to process
>> multiple samples at the same time to do the FFT/IFFT, etc. Does anyone know
>> how to do this? It seems like most people are using audio units for iOS,
>> but my audio units are for OS X to be used in programs like Logic Pro.
>> Don’t know if that makes a difference.
>>
>> -Daniel
>
> ------------------------------
>
> Message: 4
> Date: Mon, 30 Nov 2015 07:52:26 -0600
> From: Daniel Wilson <email@hidden>
> To: Paul Davis <email@hidden>
> Cc: CoreAudio API <email@hidden>
> Subject: Re: Frame Size for Audio Unit Rendering (ex. FFT/IFFT)
> Message-ID: <email@hidden>
> Content-Type: text/plain; charset="utf-8"
>
> Thank you, Paul. That makes perfect sense. How do I switch my processing to handle the entire buffer at once instead of one sample at a time?
>
> Sent from my iPhone.
>
>> On Nov 30, 2015, at 7:43 AM, Paul Davis <email@hidden> wrote:
>>
>> AudioUnits do not get to control the buffer size delivered via a render call. The host decides this.
>>
>>> On Mon, Nov 30, 2015 at 12:08 AM, Daniel Wilson <email@hidden> wrote:
>>> Does anyone know how to change the frame size when doing the digital signal processing on an audio unit? Currently my audio unit is set up so that it receives a single sample, does the signal processing, outputs the sample, and repeats the process for each sample of the audio signal. I have created quite a few audio units with this set up but now I want to process multiple samples at the same time to do the FFT/IFFT, etc. Does anyone know how to do this? It seems like most people are using audio units for iOS, but my audio units are for OS X to be used in programs like Logic Pro. Don’t know if that makes a difference.
>>>
>>> -Daniel
>
> ------------------------------
>
> Message: 5
> Date: Mon, 30 Nov 2015 09:07:14 -0500
> From: Paul Davis <email@hidden>
> To: Daniel Wilson <email@hidden>
> Cc: CoreAudio API <email@hidden>
> Subject: Re: Frame Size for Audio Unit Rendering (ex. FFT/IFFT)
> Message-ID:
> <email@hidden>
> Content-Type: text/plain; charset="utf-8"
>
> Sorry, no idea. I'm a host author (Ardour / Mixbus / Tracks Live), not a
> plugin writer. A host just gives you a block of samples, sized however it
> chooses. What you do with them is up to you. As Roman mentioned, you need
> to plan on buffering them and running your FFT periodically.
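[Editorial note: since the host picks the block size, and may vary it from call to call, the result of an FFT effect must not depend on how the host chunks the audio. A sketch of the accumulate-and-fire pattern Paul describes; `FFTAccumulator` and its members are illustrative names, and the transform itself is elided:]

```cpp
#include <cstdint>
#include <vector>

// Accumulate samples from render calls of arbitrary size and fire the
// FFT each time a full frame is ready. The number of frames processed
// depends only on the total sample count, not on how the host split it.
struct FFTAccumulator {
    std::vector<float> frame;   // samples collected so far
    uint32_t frameSize;
    uint32_t framesFired = 0;   // full FFT frames processed so far

    explicit FFTAccumulator(uint32_t n) : frameSize(n) { frame.reserve(n); }

    // Called from the render callback with whatever nFrames the host chose.
    void render(const float* in, uint32_t nFrames) {
        for (uint32_t i = 0; i < nFrames; ++i) {
            frame.push_back(in[i]);
            if (frame.size() == frameSize) {
                // Full frame: FFT -> processing -> IFFT would run here.
                ++framesFired;
                frame.clear();
            }
        }
    }
};
```

Feeding 1024 samples as a single render call, or as uneven calls of 100, 700, and 224 samples, fires the same number of FFTs, which is a handy invariance check when testing a buffering implementation.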
>
> On Mon, Nov 30, 2015 at 8:52 AM, Daniel Wilson <email@hidden>
> wrote:
>
>> Paul thank you. That makes perfect sense. How do I switch my processing to
>> process the entire buffer at once and not just one sample at a time?
>>
>> Sent from my iPhone.
>>
>> On Nov 30, 2015, at 7:43 AM, Paul Davis <email@hidden>
>> wrote:
>>
>> AudioUnits do not get to control the buffer size delivered via a render
>> call. The host decides this.
>>
>> On Mon, Nov 30, 2015 at 12:08 AM, Daniel Wilson <email@hidden
>>> wrote:
>>
>>> Does anyone know how to change the frame size when doing the digital
>>> signal processing on an audio unit? Currently my audio unit is set up so
>>> that it receives a single sample, does the signal processing, outputs the
>>> sample, and repeats the process for each sample of the audio signal. I have
>>> created quite a few audio units with this set up but now I want to process
>>> multiple samples at the same time to do the FFT/IFFT, etc. Does anyone know
>>> how to do this? It seems like most people are using audio units for iOS,
>>> but my audio units are for OS X to be used in programs like Logic Pro.
>>> Don’t know if that makes a difference.
>>>
>>> -Daniel
>
> ------------------------------
>
> _______________________________________________
> Coreaudio-api mailing list
> email@hidden
> https://lists.apple.com/mailman/listinfo/coreaudio-api
>
> End of Coreaudio-api Digest, Vol 12, Issue 198
> **********************************************
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden