Re: Audio/Video synchronization
- Subject: Re: Audio/Video synchronization
- From: Jeff Moore <email@hidden>
- Date: Mon, 12 Jun 2006 12:06:47 -0700
On Jun 12, 2006, at 8:50 AM, Rahul wrote:
> Hi Jeff,
>
> Thank you for your inputs. I have tried the following method to
> achieve synchronization.
>
> The app receives a stream of video and audio packets. Each of them
> has a timestamp on it.
You probably ought to go into what this means. It sounds like the
media you want to play has its own embedded clock in it, like MPEG
does.
> I have a thread whose sole responsibility is to take the input
> timestamp from the render callback and set the master clock based on
> it.
What is this "master clock"? What is it tracking? What units is it
using?
> In the thread I do the following:
>
> 1. Read the "inTimeStamp" value from the buffer shared with the
> render callback.
I presume you are referring to the input proc of an instance of AUHAL
or are you directly using the HAL now? I'm a tad confused.
> 2. Use AudioDeviceTranslateTime to convert this "inTimeStamp" value
> to the device sample time. I presume this is a value in the future,
> right?
Until I know where this "inTimeStamp" is coming from, it's hard to
say. At any rate, the time stamps the HAL provides for input data are
actually in the past. But if you mean the time stamp for the render
callback of AUHAL, then I believe that this is a sample time in the
future.

But, depending on the circumstances, the sample time you get from
AUHAL does not have the same zero point as what the HAL uses for
translating time with AudioDeviceTranslateTime(). Whereas the HAL
derives its time stamps directly from the hardware, AUHAL supplies
time stamps that are (I think; hopefully Doug will correct me if I'm
wrong about this) zeroed when AUHAL starts playing.

The net effect is that you have to use
kAudioOutputUnitProperty_StartTimestampsAtZero to turn off AUHAL's
remapping of the time stamps to get values you can pass to
AudioDeviceTranslateTime().
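In code, turning that off looks something like this (just a sketch,
assuming "outputUnit" is an AUHAL instance you've already opened but
not yet started):

    #include <AudioUnit/AudioUnit.h>

    // Turn off AUHAL's zero-based time stamps so the sample times in
    // its callbacks share the HAL's zero point. "outputUnit" is an
    // assumption: your already-opened AUHAL instance.
    static OSStatus UseDeviceTimeline(AudioUnit outputUnit)
    {
        UInt32 startAtZero = 0;
        return AudioUnitSetProperty(outputUnit,
                   kAudioOutputUnitProperty_StartTimestampsAtZero,
                   kAudioUnitScope_Global, 0,
                   &startAtZero, sizeof(startAtZero));
    }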
> Let us assume that the render callback gave a device sample time of
> 2126.00. Now I assume that until the audio device sample time
> reaches 2126.00, 2048 bytes (depending on the output stream, this
> might vary) will be played.
I'm not following this statement too well. You can't measure a
duration without two points on the timeline. You mention one, 2126,
but not the other that you are measuring against.
> 3. I also have the information for how long the audio packet will
> play. This is a multiple of 64 ms.
I presume you mean how long its nominal duration is. Let's assume
that the data is at a 48K sample rate, which makes the 64ms nominally
be 3072 samples. Unfortunately, it is very rare to get an audio
device that plays at its true nominal rate. The device's true rate
can vary by a great deal. As such, you cannot know, a priori, the
amount of time it will take a given device to play those 3072 samples.
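You can measure the true rate against the CPU clock with something
like the following sketch ("theDevice" stands for the AudioDeviceID
you are playing to, and the device has to be running):

    #include <CoreAudio/CoreAudio.h>
    #include <unistd.h>

    // Compare the sample-time delta to the host-time delta between two
    // readings of the device's clock. The instantaneous ratio is also
    // available as mRateScalar in the time stamps the HAL hands you.
    static Float64 MeasureTrueRate(AudioDeviceID theDevice)
    {
        AudioTimeStamp t0 = { 0 }, t1 = { 0 };
        AudioDeviceGetCurrentTime(theDevice, &t0);
        usleep(500 * 1000);    // let some time elapse between readings
        AudioDeviceGetCurrentTime(theDevice, &t1);

        Float64 samples = t1.mSampleTime - t0.mSampleTime;
        Float64 seconds = (AudioConvertHostTimeToNanos(t1.mHostTime) -
                           AudioConvertHostTimeToNanos(t0.mHostTime)) * 1.0e-9;
        return samples / seconds;  // e.g. 48009.7 on a nominal 48K device
    }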
> However, I also have its size in bytes. I now use
> AudioDeviceGetCurrentTime in a loop. Each time through the loop, I
> get the delta between the two calls to AudioDeviceGetCurrentTime and
> multiply this by 4. This is the bytes played. When the number of
> bytes that I have played becomes equal to the size of the audio
> packet, the master clock is set to the next audio packet's time
> stamp.
That will work, I suppose, but it isn't particularly efficient. If
you want to know when a particular sample is going to be reached by
the hardware, it's better to just use AudioDeviceTranslateTime() and
ask for it directly.
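That direct query is more or less a one-liner. A sketch, where
"theDevice" and "packetEndSampleTime" are assumptions (your output
device and the device sample time at which your packet ends):

    #include <CoreAudio/CoreAudio.h>

    // Ask the HAL when a given device sample time will occur, instead
    // of polling AudioDeviceGetCurrentTime in a loop.
    static UInt64 HostTimeForSample(AudioDeviceID theDevice,
                                    Float64 packetEndSampleTime)
    {
        AudioTimeStamp in = { 0 }, out = { 0 };
        in.mSampleTime = packetEndSampleTime;
        in.mFlags = kAudioTimeStampSampleTimeValid;
        out.mFlags = kAudioTimeStampHostTimeValid; // what we want back
        AudioDeviceTranslateTime(theDevice, &in, &out);
        return out.mHostTime; // CPU clock time when that sample plays
    }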
> Now to the problem that I face. The video is running ahead of the
> audio. The reason is that the master clock is set earlier than
> required. This happens because, for some reason, the "deltaTime"
> values add up to the audio packet size faster than required. This is
> causing the problem.
It sounds to me like you aren't accounting for the true rate of the
audio hardware, like I mentioned above. It's also possible that you
still have a Garbage In/Garbage Out problem due to the discrepancy
between how AUHAL tracks the sample time and how the HAL tracks it.
> Basically, what I want to know is how far I have played through the
> bytes I have supplied to the device.
There is no buffering going on here. There is no added latency in the
software. You can know precisely when a given sample is going to hit
the wire using the HAL's time stamps and the latency figures
provided. Data is consumed at a rate that is expressed through the
time stamps.
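The latency figures I mean are the ones the HAL publishes as
properties. A sketch of fetching them with the property API of this
era ("theDevice" again stands for your output device):

    #include <CoreAudio/CoreAudio.h>

    // Roughly, the device latency plus the safety offset (both in
    // frames) is the gap between a buffer's time stamp and its samples
    // actually hitting the wire.
    static UInt32 TotalOutputLatencyFrames(AudioDeviceID theDevice)
    {
        UInt32 latency = 0, safety = 0;
        UInt32 size = sizeof(UInt32);
        AudioDeviceGetProperty(theDevice, 0, false,
            kAudioDevicePropertyLatency, &size, &latency);
        size = sizeof(UInt32);
        AudioDeviceGetProperty(theDevice, 0, false,
            kAudioDevicePropertySafetyOffset, &size, &safety);
        return latency + safety; // add stream latency similarly
    }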
Can you please give your inputs on the current method?
It seems to me that you are having a great deal of difficulty
correlating the time stamps the system is giving you to your position
in the media. I suspect that is because you have not been formal
enough in how you handle things. You need to be very formal in how
you relate the presentation time stamps in the media to be played to
the CPU clock. From there, you can easily map that into the timelines
of both the video hardware and the audio hardware.
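Concretely, that formal relation can be as simple as one anchor pair
recorded when playback starts. A sketch (the names here are
illustrative, not an existing API):

    #include <CoreAudio/CoreAudio.h>

    // Map a media presentation time onto the CPU clock given an anchor
    // pair (media time, host time) captured when playback started.
    static UInt64 PresentationToHostTime(Float64 mediaSeconds,
                                         Float64 anchorMediaSeconds,
                                         UInt64  anchorHostTime)
    {
        Float64 elapsed = mediaSeconds - anchorMediaSeconds;
        return anchorHostTime +
               AudioConvertNanosToHostTime((UInt64)(elapsed * 1.0e9));
    }

Feed the result through AudioDeviceTranslateTime() (host time in,
sample time out) to land on the audio device's timeline; the video
side can consume the host time directly.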
> Thank you.
>
> Regards,
> Rahul.
> On 6/8/06 12:31 AM, "Jeff Moore" <email@hidden> wrote:
>> On Jun 7, 2006, at 5:15 AM, Rahul wrote:
>>> The master clock is set with the time stamp on the audio packet.
>>> This is done through regular notifications from the input
>>> procedure that supplies data to the Audio Converter
>>> (ACComplexInputProc in the PlayAudioFileLite example). We get the
>>> time difference (in AbsoluteTime) between two calls to this input
>>> procedure. We consider this time difference as the duration of the
>>> sample already played. When this difference adds up to the
>>> duration of the input audio packet, we set the master clock to the
>>> new audio packet timestamp.
>> This calculation has error in it. The current time when your input
>> proc is called is not the time at which the input data was
>> acquired. Thus, the difference between that and the succeeding call
>> is only a very rough approximation of the duration of the packet.
>> At the very least, it contains an enormous amount of jitter due to
>> scheduling latency in the IO thread and any variances in timing in
>> the code path that leads to your input proc getting called.
>>> But the same logic:
>>>
>>> 1. For a default output device at a 44100 sample rate, the video
>>> runs behind the audio.
>>>
>>> 2. For a default output device at a 32000 sample rate, the audio
>>> runs behind the video.
>>>
>>> We have also observed that CoreAudio plays an audio packet for a
>>> time longer than its calculated duration. It looks like it is
>>> extrapolating the packet. Any inputs on this? Or is there any
>>> other method in CoreAudio (truly indicating the playing status)
>>> which we could use to update our master clock?
>> Basically, you have a case of garbage in/garbage out. The way you
>> are calculating the time stamps is introducing some error into your
>> calculation.
>>
>> There are any number of alternatives. At the HAL level, the IOProc
>> is handed the time stamp for the data directly (I'm not exactly
>> sure how this time stamp percolates through AUHAL). The HAL also
>> provides AudioDeviceGetCurrentTime() and AudioDeviceTranslateTime()
>> to aid in tracking the audio device's time base.
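For reference, here is a sketch of a HAL IOProc showing where those
time stamps arrive:

    #include <CoreAudio/CoreAudio.h>

    // inOutputTime gives the device sample time at which the first
    // frame written to outOutputData will play; inInputTime does the
    // same (in the past) for inInputData.
    static OSStatus MyIOProc(AudioDeviceID inDevice,
                             const AudioTimeStamp* inNow,
                             const AudioBufferList* inInputData,
                             const AudioTimeStamp* inInputTime,
                             AudioBufferList* outOutputData,
                             const AudioTimeStamp* inOutputTime,
                             void* inClientData)
    {
        // Fill outOutputData here, using inOutputTime->mSampleTime to
        // locate this buffer on the device's timeline.
        return kAudioHardwareNoError;
    }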
--
Jeff Moore
Core Audio
Apple