Re: Audio Video sync with Core Audio Queue Services
- Subject: Re: Audio Video sync with Core Audio Queue Services
- From: William Stewart <email@hidden>
- Date: Thu, 22 Oct 2009 19:50:26 -0700
ok, so rather than address your points, I'll just explain some basic
things about AQ.
The AQ is used as the implementation vehicle for QT X playback on the
desktop and for all media/movie playback on the iPhone. It provides the
timing services required to keep audio and video synchronised.
The time is known through the AQ's timing APIs - the AQ's timeline is
in the sample rate of your audio queue, and anytime you ask for that
time it will tell you which sample is currently being played on your
time line. Zero is the first sample that was played when you started a
given AQ object. The AQ can also give you the time on the device it is
playing to; this is in the native format and timeline of the device,
not your AQ. There is also an AQ timeline object. You can use
that to determine if there is some kind of discontinuity in the
playback - for instance, maybe the device you were playing to was
changed (someone plugged in headphones, etc). You can then use that
information to do any re-synching that you may have to do because the
underlying device changed.
Because you know how many samples you have enqueued - because you
supply the buffers to be played - and because you can determine what
sample is being played at any time, you can derive any timing
information you need I think.
Bill
On Oct 21, 2009, at 9:01 PM, Stephen Thomas wrote:
Technology used: Core Audio Queue Services. (I could have used the HAL,
but it seems much more complicated than Queue Services.)
Requirement:
I need to know each time an enqueued buffer gets played.
As per the documentation, there is no direct way to know this using
Queue Services.
Proposed solution:
Slice the audio data into smaller chunks of 32 ms or less.
In this scenario, we get a notification when a buffer moves from the
audio queue buffer to the playback buffer.
The delay would be 32 ms * (length of the playback queue), so we always
have an audio lag of a known amount, which is tolerable as long as it
stays within the limit.
On most systems this worked fine, except for one MacBook Pro.
(Strangely, on another MacBook Pro with a similar configuration it
worked fine.)
To solve this, I experimented with slicing the audio data into 8 ms
chunks, which solved the A/V sync issue.
But this does not seem to be a viable solution, because the playback
queue size could increase (and we do not know its size), which would
break the above approach.
Please suggest how to sync audio and video using Core Audio Queue
Services.
Stephen
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden