

Re: Coreaudio-api Digest, Vol 12, Issue 198


  • Subject: Re: Coreaudio-api Digest, Vol 12, Issue 198
  • From: Roman <email@hidden>
  • Date: Tue, 01 Dec 2015 11:51:21 +0300

GetLatency/GetTail are used to report how much data you are going to buffer. The units are seconds, so you need to return a floating-point value equal to your buffer size in samples divided by the sample rate. Say you need 4096 samples for your algorithm and the host sends you data in frames of arbitrary size, anywhere from 512 to 2048 samples. You need a circular (ring) buffer large enough to hold the input data, and in your Render function you put all incoming samples into it. Once the ring buffer holds 4096 samples, you hand them to your FFT algorithm and fill the output buffers with the result. Until it holds 4096 samples, you fill the output buffers with zeroes. When your first 4096 output samples become available, you may need to fill an output buffer partly with zeroes and partly with your data, so that you output exactly 4096 samples of silence before the processed audio begins.
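
For illustration only (this sketch is not from the thread; the names BlockProcessor, kFFTSize and processBlock are invented, and the FFT step is left as a pass-through placeholder), here is a minimal C++ version of the scheme described above:

    #include <cstddef>
    #include <vector>

    class BlockProcessor {
    public:
        static constexpr std::size_t kFFTSize = 4096;  // samples needed per transform

        BlockProcessor()
            : input_(kFFTSize, 0.0f), output_(kFFTSize, 0.0f), filled_(0) {}

        // The value to report from GetLatency/GetTail: buffered samples
        // divided by the sample rate, in seconds.
        double latencySeconds(double sampleRate) const {
            return static_cast<double>(kFFTSize) / sampleRate;
        }

        // Called from the render function with whatever frame count the host
        // chose (for example 512..2048). Emits zeroes until the first full
        // block has been processed, i.e. exactly kFFTSize samples of silence.
        void render(const float* in, float* out, std::size_t numFrames) {
            for (std::size_t i = 0; i < numFrames; ++i) {
                input_[filled_] = in[i];      // accumulate incoming audio
                out[i] = output_[filled_];    // emit previously processed block (or silence)
                if (++filled_ == kFFTSize) {  // a full block is ready
                    processBlock(input_.data(), output_.data(), kFFTSize);
                    filled_ = 0;
                }
            }
        }

    private:
        // Placeholder for the real FFT -> spectral processing -> IFFT step.
        void processBlock(const float* in, float* out, std::size_t n) {
            for (std::size_t i = 0; i < n; ++i) out[i] = in[i];  // pass-through
        }

        std::vector<float> input_, output_;
        std::size_t filled_;
    };

Calling render() once per host block keeps the output length equal to the input length; the 4096-sample delay is exactly what you account for with the latency value you report.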



On 30.11.2015 23:34, Daniel Wilson wrote:
Thank you, Roman! How do I generate a buffer from the GetLatency function? I have it declared in my default template, and it defaults to zero. Fortunately the DSP isn't my issue; I just can't figure out how to get the buffer to do the actual FFT on :(

Sent from my iPhone.

On Nov 30, 2015, at 2:00 PM, email@hidden wrote:

Send Coreaudio-api mailing list submissions to
    email@hidden

To subscribe or unsubscribe via the World Wide Web, visit
    https://lists.apple.com/mailman/listinfo/coreaudio-api
or, via email, send a message with subject or body 'help' to
    email@hidden

You can reach the person managing the list at
    email@hidden

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Coreaudio-api digest..."


Today's Topics:

   1. Frame Size for Audio Unit Rendering (ex. FFT/IFFT) (Daniel Wilson)
   2. Re: Frame Size for Audio Unit Rendering (ex. FFT/IFFT) (Roman)
   3. Re: Frame Size for Audio Unit Rendering (ex. FFT/IFFT)
      (Paul Davis)
   4. Re: Frame Size for Audio Unit Rendering (ex. FFT/IFFT)
      (Daniel Wilson)
   5. Re: Frame Size for Audio Unit Rendering (ex. FFT/IFFT)
      (Paul Davis)


----------------------------------------------------------------------

Message: 1
Date: Sun, 29 Nov 2015 23:08:01 -0600
From: Daniel Wilson <email@hidden>
To: email@hidden
Subject: Frame Size for Audio Unit Rendering (ex. FFT/IFFT)
Message-ID: <email@hidden>
Content-Type: text/plain; charset=windows-1252

Does anyone know how to change the frame size when doing the digital signal processing on an audio unit? Currently my audio unit is set up so that it receives a single sample, does the signal processing, outputs the sample, and repeats the process for each sample of the audio signal. I have created quite a few audio units with this setup but now I want to process multiple samples at the same time to do the FFT/IFFT, etc. Does anyone know how to do this? It seems like most people are using audio units for iOS, but my audio units are for OS X to be used in programs like Logic Pro. Don’t know if that makes a difference.

-Daniel


------------------------------

Message: 2
Date: Mon, 30 Nov 2015 13:54:04 +0300
From: Roman <email@hidden>
To: Daniel Wilson <email@hidden>,
    email@hidden
Subject: Re: Frame Size for Audio Unit Rendering (ex. FFT/IFFT)
Message-ID: <email@hidden>
Content-Type: text/plain; charset=utf-8; format=flowed

Hi Daniel,

You need to implement buffering and output silence while you don't have
enough audio samples for your FFT/IFFT transformation. You also need to
return the correct value from the GetLatency/GetTail functions.
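
As an illustration only (not part of the original message): in a C++ audio unit built on the AUBase/AUEffectBase classes, latency and tail are reported in seconds. The exact override names and the way you obtain the sample rate differ between Core Audio SDK versions; MyFFTUnit, kFFTSize and mSampleRate below are assumed names.

    // Hedged sketch; kFFTSize and mSampleRate are hypothetical members.
    Float64 MyFFTUnit::GetLatency()
    {
        return static_cast<Float64>(kFFTSize) / mSampleRate;   // e.g. 4096 / 44100.0
    }

    Float64 MyFFTUnit::GetTailTime()
    {
        return static_cast<Float64>(kFFTSize) / mSampleRate;
    }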

On 30.11.2015 08:08, Daniel Wilson wrote:
Does anyone know how to change the frame size when doing the digital signal processing on an audio unit? Currently my audio unit is set up so that it receives a single sample, does the signal processing, outputs the sample, and repeats the process for each sample of the audio signal. I have created quite a few audio units with this setup but now I want to process multiple samples at the same time to do the FFT/IFFT, etc. Does anyone know how to do this? It seems like most people are using audio units for iOS, but my audio units are for OS X to be used in programs like Logic Pro. Don’t know if that makes a difference.

-Daniel
--
Best regards,
Roman



------------------------------

Message: 3
Date: Mon, 30 Nov 2015 08:43:48 -0500
From: Paul Davis <email@hidden>
To: Daniel Wilson <email@hidden>
Cc: CoreAudio API <email@hidden>
Subject: Re: Frame Size for Audio Unit Rendering (ex. FFT/IFFT)
Message-ID:
    <CAFa_cKk0PEaVFzw3Uv2jFAJ=email@hidden>
Content-Type: text/plain; charset="utf-8"

AudioUnits do not get to control the buffer size delivered via a render
call. The host decides this.
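
A host-side sketch (illustrative, not from the thread): the frame count is an argument the host passes into the render call, so the plug-in only ever sees it as an input.

    #include <AudioUnit/AudioUnit.h>

    // The host picks inNumberFrames, and the audio unit's render call must
    // produce exactly that many frames.
    OSStatus pullAudio(AudioUnit unit, AudioTimeStamp ts, AudioBufferList *buffers)
    {
        AudioUnitRenderActionFlags flags = 0;
        const UInt32 framesThisCall = 512;   // the host's choice, not the plug-in's
        return AudioUnitRender(unit, &flags, &ts, 0 /* output bus */,
                               framesThisCall, buffers);
    }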

On Mon, Nov 30, 2015 at 12:08 AM, Daniel Wilson <email@hidden>
wrote:

Does anyone know how to change the frame size when doing the digital
signal processing on an audio unit? Currently my audio unit is set up so
that it receives a single sample, does the signal processing, outputs the
sample, and repeats the process for each sample of the audio signal. I have
created quite a few audio units with this setup but now I want to process
multiple samples at the same time to do the FFT/IFFT, etc. Does anyone know
how to do this? It seems like most people are using audio units for iOS,
but my audio units are for OS X to be used in programs like Logic Pro.
Don’t know if that makes a difference.

-Daniel

------------------------------

Message: 4
Date: Mon, 30 Nov 2015 07:52:26 -0600
From: Daniel Wilson <email@hidden>
To: Paul Davis <email@hidden>
Cc: CoreAudio API <email@hidden>
Subject: Re: Frame Size for Audio Unit Rendering (ex. FFT/IFFT)
Message-ID: <email@hidden>
Content-Type: text/plain; charset="utf-8"

Paul, thank you. That makes perfect sense. How do I switch my processing to process the entire buffer at once and not just one sample at a time?

Sent from my iPhone.

On Nov 30, 2015, at 7:43 AM, Paul Davis <email@hidden> wrote:

AudioUnits do not get to control the buffer size delivered via a render call. The host decides this.

On Mon, Nov 30, 2015 at 12:08 AM, Daniel Wilson <email@hidden> wrote:
Does anyone know how to change the frame size when doing the digital signal processing on an audio unit? Currently my audio unit is set up so that it receives a single sample, does the signal processing, outputs the sample, and repeats the process for each sample of the audio signal. I have created quite a few audio units with this setup but now I want to process multiple samples at the same time to do the FFT/IFFT, etc. Does anyone know how to do this? It seems like most people are using audio units for iOS, but my audio units are for OS X to be used in programs like Logic Pro. Don’t know if that makes a difference.

-Daniel

------------------------------

Message: 5
Date: Mon, 30 Nov 2015 09:07:14 -0500
From: Paul Davis <email@hidden>
To: Daniel Wilson <email@hidden>
Cc: CoreAudio API <email@hidden>
Subject: Re: Frame Size for Audio Unit Rendering (ex. FFT/IFFT)
Message-ID:
    <email@hidden>
Content-Type: text/plain; charset="utf-8"

Sorry, no idea. I'm a host author (Ardour / Mixbus / Tracks Live), not a
plugin writer. A host just gives you a block of samples, of a size of its
own choosing. What you do with them is up to you. As Roman mentioned,
you need to plan on buffering them and running your FFT periodically.
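
Hypothetical glue code (assuming the BlockProcessor sketch shown earlier on this page; MyFFTUnit and processor are invented names): the host-chosen frame count is simply forwarded to the accumulator, which runs the FFT whenever a full block is available.

    void MyFFTUnit::ProcessMono(const float* in, float* out, UInt32 numFrames)
    {
        // numFrames is whatever count the host passed to the render call.
        processor.render(in, out, numFrames);
    }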

On Mon, Nov 30, 2015 at 8:52 AM, Daniel Wilson <email@hidden>
wrote:

Paul, thank you. That makes perfect sense. How do I switch my processing to
process the entire buffer at once and not just one sample at a time?

Sent from my iPhone.

On Nov 30, 2015, at 7:43 AM, Paul Davis <email@hidden>
wrote:

AudioUnits do not get to control the buffer size delivered via a render
call. The host decides this.

On Mon, Nov 30, 2015 at 12:08 AM, Daniel Wilson <email@hidden>
wrote:
Does anyone know how to change the frame size when doing the digital
signal processing on an audio unit? Currently my audio unit is set up so
that it receives a single sample, does the signal processing, outputs the
sample, and repeats the process for each sample of the audio signal. I have
created quite a few audio units with this setup but now I want to process
multiple samples at the same time to do the FFT/IFFT, etc. Does anyone know
how to do this? It seems like most people are using audio units for iOS,
but my audio units are for OS X to be used in programs like Logic Pro.
Don’t know if that makes a difference.

-Daniel

------------------------------

_______________________________________________
Coreaudio-api mailing list
email@hidden
https://lists.apple.com/mailman/listinfo/coreaudio-api

End of Coreaudio-api Digest, Vol 12, Issue 198
**********************************************

--
Best regards,
Roman


