Hi folks,
I'm wondering if anyone knows the ground truth about latency compensation on iOS?
There's kAudioSessionProperty_CurrentHardwareOutputLatency, kAudioSessionProperty_CurrentHardwareInputLatency, kAudioSessionProperty_CurrentHardwareIOBufferDuration, and of course the audio timestamps themselves. That's all lovely, but the documentation doesn't really say anything about what any of it *means*.
I've always just taken the audio timestamps given in the Remote IO callbacks and added or subtracted the kAudioSessionProperty_CurrentHardwareOutputLatency/kAudioSessionProperty_CurrentHardwareInputLatency values as required, and that seems to do the trick when saving synchronised recorded audio. Latency comes out pretty much zero: recording a loop playing out the speaker of the same device yields a recording that plays back perfectly in time. But I'd love to know if there's a Right Way somewhere.
I'd also like to know why, if the system knows the device input and output latencies, the system doesn't automatically add/subtract these to/from the audio timestamps handed to us in the callbacks. Surely those timestamps should aim to reflect the time the audio wavefront hit the mic, or the time the speaker starts vibrating in response to the audio in the buffer?
Anyone know anything about this?
Cheers!