

Re: Compensating for latency (how to determine rendering slice size)


  • Subject: Re: Compensating for latency (how to determine rendering slice size)
  • From: William Stewart <email@hidden>
  • Date: Mon, 5 Oct 2009 19:13:02 -0700

There is an audio unit property called kAudioUnitProperty_PresentationLatency.

The intention of the property is for the host to describe the latency of the samples arriving at your audio unit and the latency of the samples leaving it (that is, input and output latency).

I think support for this property in different host apps might be sketchy, so even though your AU can look at the value, it may not be getting set correctly. Without it, though, it is "difficult" to do what you are trying to do, so I don't really have a good immediate answer for you.
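For reference, a host that does support the property would set it along these lines. This is a minimal sketch, not taken from any shipping host: the value is a Float64 in seconds, and the bus index and latency figures here are placeholders.

#include <AudioUnit/AudioUnit.h>

// Host-side sketch: describe the latency of the samples arriving at an
// AU's input and leaving its output. The property value is a Float64 in
// seconds; bus 0 and the latency figures are placeholders.
static OSStatus SetPresentationLatency(AudioUnit unit,
                                       Float64 inputLatencySecs,
                                       Float64 outputLatencySecs)
{
    OSStatus err = AudioUnitSetProperty(unit,
                                        kAudioUnitProperty_PresentationLatency,
                                        kAudioUnitScope_Input,
                                        0, // input bus 0
                                        &inputLatencySecs,
                                        sizeof(inputLatencySecs));
    if (err != noErr) return err;

    return AudioUnitSetProperty(unit,
                                kAudioUnitProperty_PresentationLatency,
                                kAudioUnitScope_Output,
                                0, // output bus 0
                                &outputLatencySecs,
                                sizeof(outputLatencySecs));
}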

Bill

On Oct 5, 2009, at 2:40 PM, Eyal Redler wrote:

Hi,

I would like to compensate for latency in my audio unit. My audio unit plays back some audio while recording the input. While recording, the playback is in time with the recording, but when I later play the recording back together with the earlier playback there is an apparent timing mismatch.

Please forgive me if I'm stating the obvious, and please correct me if I'm wrong, but as I understand it, what's happening is this:
1. The host chops the audio into slices.
2. After a slice of input is captured by the input buffer, the slice is handed to the audio unit(s).
3. After the audio unit finishes, the slice is handed to the output buffer.
4. The output buffer plays the slice.
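
In code terms this corresponds to the render callback the host drives once per slice; a bare-bones sketch, where only the AURenderCallback signature is standard and the body is illustrative:

#include <AudioUnit/AudioUnit.h>
#include <string.h>

// Bare-bones render callback: the host hands over one slice of
// inNumberFrames frames per call and expects ioData to be filled
// before the callback returns. The processing here is a placeholder
// (silence); a real AU would read its input and write processed
// samples instead.
static OSStatus MyRenderCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames, // the slice size for this call
                                 AudioBufferList *ioData)
{
    for (UInt32 b = 0; b < ioData->mNumberBuffers; ++b) {
        memset(ioData->mBuffers[b].mData, 0,
               ioData->mBuffers[b].mDataByteSize);
    }
    return noErr;
}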


So I have three levels: "real time" (what the user is doing), processing, and playback. I might draw it like this (numbers are slice numbers; RT = real-time input, PR = what my AU is processing at the moment, PB = playback, i.e. what the user is hearing):
RT: 01 02 03 04 05 06 07 08 09 10 11
PR: -- 01 02 03 04 05 06 07 08 09 10 11
PB: -- -- 01 02 03 04 05 06 07 08 09 10 11


So if the player plays along with what he hears in slice 03, I get to process his playing in slice 05. If I want to synchronize the two so they can be played back together, I need to match every input I receive with the playback from two slices back, and to do that I need to know the size of a slice.
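
Assuming a fixed slice size (the 512 below is just a placeholder, not a queried value), the compensation itself is simple arithmetic:

#include <stdio.h>

// Illustrative arithmetic only: with the two-slice lag in the diagram
// above, input captured during slice N lines up with playback material
// from slice N-2, so the recording has to be shifted back by
// latencySlices * sliceFrames samples.
int main(void)
{
    unsigned sliceFrames   = 512; // assumed slice size (placeholder)
    unsigned latencySlices = 2;   // RT -> PB lag from the diagram
    unsigned offsetFrames  = latencySlices * sliceFrames;

    printf("shift the recording back by %u frames\n", offsetFrames);
    return 0;
}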

So my questions are: How do I know the slice size? Can the slice size change? And are there any tips in general for doing the compensation?
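
For illustration, one standard way to at least bound the slice size is to ask for kAudioUnitProperty_MaximumFramesPerSlice; a minimal sketch (note it is only an upper bound, since the inNumberFrames passed to each render call can be smaller and can vary from call to call):

#include <AudioUnit/AudioUnit.h>

// Query the upper bound on the slice size. The actual slice handed to
// each render call (inNumberFrames) can be smaller than this and can
// change from call to call, so this is a bound, not a guarantee.
static OSStatus GetMaxFramesPerSlice(AudioUnit unit, UInt32 *outMaxFrames)
{
    UInt32 size = sizeof(*outMaxFrames);
    return AudioUnitGetProperty(unit,
                                kAudioUnitProperty_MaximumFramesPerSlice,
                                kAudioUnitScope_Global,
                                0,
                                outMaxFrames,
                                &size);
}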

TIA,


Eyal Redler
------------------------------------------------------------------------------------------------
"If Uri Geller bends spoons with divine powers, then he's doing it the hard way."
--James Randi
www.eyalredler.com



References:
  • Compensating for latency (how to determine rendering slice size) (From: Eyal Redler <email@hidden>)
