Re: Is there an accepted input/output latency compensation technique?


  • Subject: Re: Is there an accepted input/output latency compensation technique?
  • From: Michael Tyson <email@hidden>
  • Date: Sat, 28 Apr 2012 22:33:05 +0200

Okay, I was completely wrong in my statement of how I've been handling latency. Utterly, embarrassingly wrong. I was being thrown by a misbehaving prebuffer routine, which is why I started using the input/output latency values.

It turns out, using the timestamps seems to be all that's needed to output audio at the right time.

So, it appears that if you're using the AudioTimeStamp values, there's nothing else you need to do. kAudioSessionProperty_CurrentHardwareOutputLatency and kAudioSessionProperty_CurrentHardwareInputLatency are, in this case, entirely superfluous.

Happy!


-- 
Michael Tyson | atastypixel.com

Live, app-to-app audio streaming is coming soon.
Don't want to miss our launch? Then sign up here: http://audiob.us



On 28 Apr 2012, at 19:31, Michael Tyson wrote:

Hi folks,

I'm wondering if anyone knows the ground truth about latency compensation on iOS?

There's kAudioSessionProperty_CurrentHardwareOutputLatency and kAudioSessionProperty_CurrentHardwareInputLatency, and kAudioSessionProperty_CurrentHardwareIOBufferDuration and audio timestamps, of course, and that's all lovely, but the documentation doesn't really seem to have anything to say about what it all *means*. 

I've always just taken the audio timestamps given in the Remote IO callbacks and added or subtracted the kAudioSessionProperty_CurrentHardwareOutputLatency/kAudioSessionProperty_CurrentHardwareInputLatency values as required. That seems to do the trick when saving synchronised recorded audio: latency ends up pretty much zero (recording a loop playing out the speaker of the same device yields a recording that plays back perfectly in time). But I'd love to know if there's a Right Way somewhere.

I'd also like to know why, if the system knows the device input and output latencies, the system doesn't automatically add/subtract these to/from the audio timestamps handed to us in the callbacks. Surely those timestamps should aim to reflect the time the audio wavefront hit the mic, or the time the speaker starts vibrating in response to the audio in the buffer?

Anyone know anything about this?

Cheers!
Michael



