Re: Audio/Video synchronization


  • Subject: Re: Audio/Video synchronization
  • From: Jeff Moore <email@hidden>
  • Date: Wed, 14 Jun 2006 15:30:13 -0700

Sorry this took so long. At any rate, the back-and-forthing in this thread has gotten a bit long and hard to follow, so here's a summary of my understanding of your problem (numbered for easy reference):

1) You are trying to play media containing audio and video in synch, but so far haven't had much success.
2) The media has an embedded clock in it, similar to MPEG (the actual rate of that clock isn't really that important).
3) The audio packets in the media have a start time that is calculated from an anchor point plus the sum of the durations of the preceding packets, implying that the audio data is to be played contiguously.
4) Your code has a "master" clock that wants to track the number of samples played. The actual values that this clock provides to its callers are in nanoseconds.
5) Your code uses AUHAL as its interface to the audio hardware and has set things up so that the time stamps in the render callbacks are the raw time stamps from the HAL.
6) When you look at why things aren't in synch with this setup, it appears that the audio data is playing back too fast.


I think that covers it.

I'm concerned about your master clock. How are you converting the samples you count into nanoseconds? I think this is the crux of your problem. This is what I was talking about when I mentioned accounting for the true playback rate of the audio hardware. From the sound of it, the audio hardware is running a tad slow (not an uncommon thing) and you aren't adjusting this calculation accordingly.
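
To make the difference concrete, here's a sketch of the naive conversion next to a rate-adjusted one (the function names are mine, and where the rate scalar comes from is covered below):

#include <CoreAudio/CoreAudioTypes.h>

// Naive conversion: assumes the device runs at exactly its nominal
// sample rate. Any difference between the nominal and actual rate
// accumulates as drift against the system clock.
Float64 NaiveSamplesToNanos(Float64 sampleCount, Float64 nominalSampleRate)
{
    return (sampleCount / nominalSampleRate) * 1.0e9;
}

// Adjusted conversion: scales by the rate scalar, the observed ratio
// of host ticks per sample to the nominal host ticks per sample. A
// device running a tad slow has a rate scalar a bit above 1.0, so N
// samples really take a bit longer than the nominal math says.
Float64 AdjustedSamplesToNanos(Float64 sampleCount, Float64 nominalSampleRate, Float64 rateScalar)
{
    return (sampleCount / nominalSampleRate) * 1.0e9 * rateScalar;
}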

The HAL provides tools for dealing with the rate scalar of the hardware. The first tool is the time stamps the HAL provides. These time stamps relate the device's sample clock to the system clock (aka host time) by providing them together in one structure: each valid value in the AudioTimeStamp refers to the same instant of time in the different time bases. The two most useful values are the sample time and the host time. The AudioTimeStamp can also contain a rate scalar value, which is the ratio between the observed number of host ticks per sample and the nominal number of host ticks per sample.
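
In an AUHAL render callback, reading those fields looks something like this (a sketch; always check the flags before trusting a value):

#include <AudioUnit/AudioUnit.h>

static OSStatus MyRenderCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    Float64 sampleTime = 0.0, rateScalar = 1.0;
    UInt64 hostTime = 0;

    // Only the fields whose "valid" flags are set are meaningful.
    if (inTimeStamp->mFlags & kAudioTimeStampSampleTimeValid)
        sampleTime = inTimeStamp->mSampleTime;   // the device's sample clock
    if (inTimeStamp->mFlags & kAudioTimeStampHostTimeValid)
        hostTime = inTimeStamp->mHostTime;       // the same instant in host time
    if (inTimeStamp->mFlags & kAudioTimeStampRateScalarValid)
        rateScalar = inTimeStamp->mRateScalar;   // observed/nominal host ticks per sample

    // ... use these to drive your clock, then fill ioData with
    // inNumberFrames frames of audio ...
    return noErr;
}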

The other tools the HAL provides are the two API calls AudioDeviceGetCurrentTime() and AudioDeviceTranslateTime(). The first call just returns the current time in the time bases requested in the AudioTimeStamp. The second call is used to translate a time in one time base into a time in others (again, as requested by the AudioTimeStamps). Both of these calls use the HAL's support for following the rate scalar of the device, so they are reasonably accurate.
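
You tell both calls which time bases you want by setting the flags in the AudioTimeStamps. A sketch (theDevice and someHostTime stand in for your own values):

#include <CoreAudio/CoreAudio.h>

static void TimeExamples(AudioDeviceID theDevice, UInt64 someHostTime)
{
    // What time is it right now, in both sample time and host time?
    AudioTimeStamp now = { 0 };
    now.mFlags = kAudioTimeStampSampleTimeValid | kAudioTimeStampHostTimeValid;
    OSStatus err = AudioDeviceGetCurrentTime(theDevice, &now);

    // Which sample will be hitting the hardware at someHostTime?
    AudioTimeStamp inTime = { 0 }, outTime = { 0 };
    inTime.mFlags = kAudioTimeStampHostTimeValid;
    inTime.mHostTime = someHostTime;
    outTime.mFlags = kAudioTimeStampSampleTimeValid;  // what we're asking for
    err = AudioDeviceTranslateTime(theDevice, &inTime, &outTime);
    // On success, outTime.mSampleTime holds the answer.
}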

The key thing to note about how the HAL keeps time is that it is always able to relate the sample time to the host time and vice versa. Thus, it becomes easy to relate the sample time to basically any part of the system that counts time using host time, which includes most of our APIs, especially CoreVideo, which uses a scheme essentially identical to the audio HAL's.
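
For instance, in a CVDisplayLink output callback you can take the host time at which a frame will actually be displayed and ask the HAL which audio sample coincides with it (a sketch; I'm assuming you passed your AudioDeviceID as the display link's context pointer):

#include <CoreAudio/CoreAudio.h>
#include <CoreVideo/CoreVideo.h>

static CVReturn MyDisplayLinkCallback(CVDisplayLinkRef displayLink,
                                      const CVTimeStamp *inNow,
                                      const CVTimeStamp *inOutputTime,
                                      CVOptionFlags flagsIn,
                                      CVOptionFlags *flagsOut,
                                      void *context)
{
    AudioDeviceID theDevice = *(AudioDeviceID *)context;

    if (inOutputTime->flags & kCVTimeStampHostTimeValid)
    {
        // The frame goes up at inOutputTime->hostTime; find the audio
        // sample that hits the hardware at that same instant.
        AudioTimeStamp in = { 0 }, out = { 0 };
        in.mFlags = kAudioTimeStampHostTimeValid;
        in.mHostTime = inOutputTime->hostTime;
        out.mFlags = kAudioTimeStampSampleTimeValid;
        if (AudioDeviceTranslateTime(theDevice, &in, &out) == noErr)
        {
            // out.mSampleTime is the audio sample for this video frame.
        }
    }
    return kCVReturnSuccess;
}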

So if I were in your shoes, I would design things so that I could anchor my media timeline in host time when I started playback. I'd also write a conversion function that converts from media time into host time using that anchor. From there, you can figure out quite easily when each audio sample is going to be played, and can skip counting altogether and go straight to using AudioDeviceTranslateTime() to get the actual sample times based on the host time of your media.
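
Here's roughly what I mean (a sketch; the names are mine, and the nanosecond/host-tick conversions come from <CoreAudio/HostTime.h>):

#include <CoreAudio/CoreAudio.h>
#include <CoreAudio/HostTime.h>

static UInt64 gAnchorHostTime;    // host time at the moment playback starts
static UInt64 gAnchorMediaNanos;  // the media clock's value (in nanos) at that moment

// Call once when you start playback.
static void AnchorMediaTimeline(UInt64 mediaNanosAtStart)
{
    gAnchorHostTime   = AudioGetCurrentHostTime();
    gAnchorMediaNanos = mediaNanosAtStart;
}

// Convert a time on the media's clock into a host time via the anchor.
static UInt64 MediaTimeToHostTime(UInt64 mediaNanos)
{
    UInt64 elapsed = mediaNanos - gAnchorMediaNanos;
    return gAnchorHostTime + AudioConvertNanosToHostTime(elapsed);
}

Feed the result of MediaTimeToHostTime() to AudioDeviceTranslateTime() as in the earlier example and you get the device sample time at which that bit of media should be heard, with the device's rate scalar already accounted for.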

One other thing to note is that it doesn't sound like you are doing anything with the presentation latencies of the two devices involved. Audio and video devices tend to be quite different in this regard. To properly synch the media right from the start, you need to account for the difference when you schedule the audio and video to be played. The HAL provides the device's latency via the kAudioDevicePropertyLatency property.
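
Querying it looks like this (a sketch; note the value is in frames, not seconds):

#include <CoreAudio/CoreAudio.h>

// The device's presentation latency for output, in frames.
static UInt32 GetOutputDeviceLatency(AudioDeviceID theDevice)
{
    UInt32 latencyFrames = 0;
    UInt32 size = sizeof(latencyFrames);
    OSStatus err = AudioDeviceGetProperty(theDevice,
                                          0,      // master channel
                                          false,  // output section
                                          kAudioDevicePropertyLatency,
                                          &size,
                                          &latencyFrames);
    return (err == noErr) ? latencyFrames : 0;
}

The streams have their own latency (kAudioStreamPropertyLatency) and the device has a safety offset (kAudioDevicePropertySafetyOffset) on top of that, so add those in if you want the full picture.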

On Jun 13, 2006, at 8:43 AM, Rahul wrote:

Hello Jeff,


--

Jeff Moore
Core Audio
Apple

