Re: AudioTimeStamps between devices

  • Subject: Re: AudioTimeStamps between devices
  • From: Jeff Moore <email@hidden>
  • Date: Thu, 11 Nov 2004 12:05:27 -0800


On Nov 11, 2004, at 11:13 AM, Ethan Funk wrote:

> I need to get audio input sample data from one device, process it, and pass it out to a different device. The devices will have different clock sources, so the sampling will not be exactly in sync. I have been playing around with ComplexPlayThru and now have a bunch of questions:

> Is the following correct:
> 1. <TimeStamp>.mSampleTime = first sample number in the buffer relative to the *audio device* clock. (always in sequence, without skips)

This is correct.

> 2. <TimeStamp>.mHostTime = host time stamp of the first sample in the buffer relative to the CPU core clock. (may exhibit buffer-to-buffer drift relative to mSampleTime)

This is also correct. The drift isn't random; it tracks with the rate scalar. Both the sample time and the host time indicate the same point in time, just described in two different ways. By and large, you don't really need the host time to do the synch work, since you don't have to derive the rate scalar yourself.
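
For concreteness, here is a minimal sketch (not from the original thread) of a HAL IOProc that logs the three fields under discussion. The proc name is hypothetical, and it assumes the device was already registered with AudioDeviceAddIOProc:

    #include <CoreAudio/CoreAudio.h>
    #include <CoreAudio/HostTime.h>
    #include <stdio.h>

    static OSStatus MyIOProc(AudioDeviceID inDevice,
                             const AudioTimeStamp *inNow,
                             const AudioBufferList *inInputData,
                             const AudioTimeStamp *inInputTime,
                             AudioBufferList *outOutputData,
                             const AudioTimeStamp *outOutputTime,
                             void *inClientData)
    {
        // mSampleTime: position of the first frame on the device's own
        // sample clock; mHostTime: the same instant on the CPU clock;
        // mRateScalar: the measured deviation from the nominal rate.
        printf("sample %f  host %llu (%llu ns)  rate scalar %f\n",
               inInputTime->mSampleTime,
               (unsigned long long)inInputTime->mHostTime,
               (unsigned long long)AudioConvertHostTimeToNanos(inInputTime->mHostTime),
               inInputTime->mRateScalar);
        return noErr;
    }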


> 3. <TimeStamp>.mRateScalar = average (over some unknown time span?) deviation of the audio device's sample clock rate from the expected sample rate, relative to the CPU core clock. For example, if the expected rate is 96,000 sps and the device clock is 0.01% fast relative to the CPU clock, mRateScalar will settle out at a value of 1.0001, which corresponds to a sample rate of 96,009.6 sps.

This is correct. The rate scalar is the ratio of the observed number of host ticks per sample to the nominal number of host ticks per sample. Devices running faster than nominal have a rate scalar greater than 1.
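
The arithmetic from the example above, as a tiny self-contained check (the numbers are the ones from the question; expressing the scalar as measured rate over nominal rate is my shorthand for the same ratio):

    #include <stdio.h>

    int main(void)
    {
        double nominalRate = 96000.0;  // expected sample rate, in sps
        double rateScalar  = 1.0001;   // device running 0.01% fast

        // Effective device rate implied by the rate scalar.
        printf("effective rate: %.1f sps\n", nominalRate * rateScalar);  // 96009.6
        return 0;
    }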


> Assuming I have all the above stuff correct, I can then sync my two devices by getting samples from the source device, taking note of the mRateScalar in the time stamp, and passing the samples along to the output device through a VarRate filter. When doing a render for the VarRate, I would check the mRateScalar of the source and the mRateScalar of the destination, divide the two, and set the VarRate to this playback rate value. I'm skipping over the ring buffer, baseline sample rate differences, and latency stuff here to get at the heart of the synchronization problem.
>
> Am I using the VarRate filter properly here? Can it be updated during a render callback? Or should I be writing my own decimation/interpolation code?

You can certainly use the rate scalars of the various devices to perform synchronization more or less the way you describe.
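
A sketch of the rate computation Ethan describes, assuming the "VarRate filter" here is Apple's Varispeed AudioUnit (kAudioUnitSubType_Varispeed) and that the unit has already been opened and initialized; the function name is mine:

    #include <AudioUnit/AudioUnit.h>

    static OSStatus UpdatePlaybackRate(AudioUnit varispeedUnit,
                                       Float64 sourceRateScalar,
                                       Float64 destRateScalar)
    {
        // If the source device runs fast relative to the destination,
        // play back a hair faster than 1.0 to keep up, and vice versa.
        Float32 rate = (Float32)(sourceRateScalar / destRateScalar);
        return AudioUnitSetParameter(varispeedUnit,
                                     kVarispeedParam_PlaybackRate,
                                     kAudioUnitScope_Global,
                                     0,     // element
                                     rate,
                                     0);    // buffer offset in frames
    }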


One thing to keep in mind, because it complicates the bookkeeping, is that you usually need to work with whole sample numbers while the math will produce fractional sample numbers. Those fractions are important to maintaining synch over time, so be sure to account for them in your math.
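
One way to picture that bookkeeping (my illustration, not code from the thread): keep the exact fractional read position in a double and only truncate when you actually index the buffer, so the remainder carries forward instead of being thrown away:

    #include <math.h>

    typedef struct {
        double position;  // exact fractional read position, in samples
    } ReadCursor;

    static long AdvanceCursor(ReadCursor *cursor, double rate, long frames)
    {
        // Advance by a possibly non-integer number of input samples per
        // block; truncating here and discarding the fraction would let
        // the two devices drift apart over time.
        cursor->position += rate * (double)frames;
        return (long)floor(cursor->position);  // whole-sample index to read
    }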

--

Jeff Moore
Core Audio
Apple

