Audio Queue - Audio/Video Sync issue
- Subject: Audio Queue - Audio/Video Sync issue
- From: Jonathan Watson <email@hidden>
- Date: Sat, 9 Oct 2010 15:34:12 +0100
Hi,
I'm trying to fix a synchronisation issue in a video player developed
using the Core Video display link and Core Audio audio queue APIs on
Mac OS X 10.5 and above (not iPhone). The stream is sourced and buffered
from a network, and while video and audio play as expected they are
always slightly out of sync. I believe the issue may be down to audio
hardware latency not being taken into account in my timing routine, but I
can't find a way around this.
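For what it's worth, I gather the HAL can report a per-device latency in
frames. The sketch below is how I understand that query would look (untested
on my part), though I haven't confirmed it accounts for the whole output delay:

#include <CoreAudio/CoreAudio.h>

// Query the default output device's latency, in frames, from the HAL.
// Dividing the result by the sample rate would give seconds.
static UInt32 GetOutputDeviceLatencyFrames(void)
{
    AudioDeviceID device = kAudioObjectUnknown;
    UInt32 size = sizeof(device);
    AudioObjectPropertyAddress addr = {
        kAudioHardwarePropertyDefaultOutputDevice,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };
    if (AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr,
                                   0, NULL, &size, &device) != noErr)
        return 0;

    UInt32 latencyFrames = 0;
    size = sizeof(latencyFrames);
    addr.mSelector = kAudioDevicePropertyLatency;
    addr.mScope = kAudioDevicePropertyScopeOutput;
    if (AudioObjectGetPropertyData(device, &addr,
                                   0, NULL, &size, &latencyFrames) != noErr)
        return 0;
    return latencyFrames;
}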
When I get a display link callback I take the PTS of the next video
frame (I have verified that these are valid for this purpose) and
convert it into milliseconds using the timebase of the video
container. I then call AudioQueueCreateTimeline followed by
AudioQueueGetCurrentTime to get an AudioTimeStamp from my running audio
queue (I have verified that I am getting a valid response and that my
audio queue is playing as expected). Currently I take the mSampleTime
from that timestamp and divide it by the sample rate; in the case of my
streams this is LPCM audio with a sample rate of 48000 Hz. This gives
me the current time since the start of the audio queue, or thereabouts.
I suspect this is where the problem lies, as I don't believe it takes
into account any latency or delays caused by the audio hardware. I then
compare the next video PTS to mSampleTime/48000, both in milliseconds:
if the video PTS is less than or equal to the audio time I display a new
frame, otherwise I repeat the current frame. Unfortunately this is
clearly not working completely, and the audio is ever so slightly (yet
noticeably) out of sync with the video.
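In outline, the comparison in my callback looks something like this
(videoTimebase, nextFramePTS, displayNextFrame and repeatCurrentFrame are
placeholders standing in for my real player state):

AudioTimeStamp audioNow = {0};
Boolean discontinuity = false;
// audioTimeline was created earlier with AudioQueueCreateTimeline().
OSStatus err = AudioQueueGetCurrentTime(playbackAudioQueue, audioTimeline,
                                        &audioNow, &discontinuity);
if (err == noErr && (audioNow.mFlags & kAudioTimeStampSampleTimeValid)) {
    // Milliseconds of audio played since the queue started (48000 Hz LPCM).
    double audioMillis = (audioNow.mSampleTime / 48000.0) * 1000.0;
    // Next frame's PTS converted to milliseconds via the container timebase.
    double videoMillis = (nextFramePTS * 1000.0) / videoTimebase;
    if (videoMillis <= audioMillis)
        displayNextFrame();   // audio has caught up; show the next frame
    else
        repeatCurrentFrame(); // hold the current frame for another refresh
}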
How exactly can I go about resolving this issue? Are there other
fields in the AudioTimeStamp that I can use to take into account the
drift/latency that is occurring? If so, which fields, and how do they
get converted to a usable value in milliseconds? I have tried
converting mHostTime to nanoseconds, but I don't know a) the start host
time of the audio queue, which I would need to calculate how far into
the stream I am from the beginning, or b) whether this is any more
accurate than using mSampleTime. Alternatively, if this is not the
recommended way of synchronising audio and video using Core Audio/Core
Video, should I be taking another approach altogether?
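For reference, my host-time experiment looks roughly like the sketch below.
I've assumed startHostTime could be captured with AudioQueueDeviceGetCurrentTime
immediately after AudioQueueStart, but that anchor is exactly the part I'm
unsure about:

#include <CoreAudio/HostTime.h>

AudioTimeStamp now = {0};
if (AudioQueueDeviceGetCurrentTime(playbackAudioQueue, &now) == noErr &&
    (now.mFlags & kAudioTimeStampHostTimeValid)) {
    // Host-time delta since the queue (supposedly) started, in nanoseconds.
    UInt64 elapsedNanos = AudioConvertHostTimeToNanos(now.mHostTime - startHostTime);
    double elapsedMillis = elapsedNanos / 1.0e6;
    // ...compare elapsedMillis against the next video PTS in millis...
}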
I have now also tried AudioQueueEnqueueBufferWithParameters to set the start timestamp, but I'm fairly sure I'm not using it correctly and so I'm scheduling my audio data incorrectly. See below:
AudioTimeStamp timeToPlay = {0};
timeToPlay.mSampleTime = customAQ->sampleTime;
timeToPlay.mFlags = kAudioTimeStampSampleTimeValid;
// The desired start time goes in the ninth argument (inStartTime); the
// tenth (outActualStartTime) only reports back what the queue actually used.
err = AudioQueueEnqueueBufferWithParameters(playbackAudioQueue,
        customAQ->audioQueueBuffer[currentBuffer],
        0, NULL, 0, 0, 0, NULL, &timeToPlay, NULL);
customAQ->sampleTime += customAQ->sampleRate; // assumes one second of audio per buffer
Thanks for any pointers anyone might be able to give.
Jon.