Re: Deriving timing information for scheduled playback on an AUGraph
- Subject: Re: Deriving timing information for scheduled playback on an AUGraph
- From: Kyle Sluder <email@hidden>
- Date: Thu, 17 Jun 2010 11:58:34 -0700
On Thu, Jun 17, 2010 at 11:03 AM, William Stewart <email@hidden> wrote:
> The time stamp that you get from the output unit contains a mHostTime value. That is the CPU clock time of when the samples provided in that render cycle will be played by the hardware (so it is "in the future").
>
> Movie playback of audio and video is driven by the audio clock (based on relating the sample count to the progression of real time): that is, the "real" versus the "ideal" sample rate the playback device is running at. The rate scalar in this time stamp expresses that ratio.
While I must confess I am inexperienced in writing digital audio
software, the documentation for AudioTimeStamp is somewhat terse. I'm
not the first to stumble on it:
http://lists.apple.com/archives/coreaudio-api/2003/Apr/msg00147.html
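For reference, here are the AudioTimeStamp fields as declared in
CoreAudioTypes.h, annotated with my current understanding (corrections
welcome):

struct AudioTimeStamp
{
    Float64             mSampleTime;    // position on the device's sample timeline
    UInt64              mHostTime;      // host (mach) time at which these samples hit the hardware
    Float64             mRateScalar;    // actual vs. nominal device rate, as you describe above
    UInt64              mWordClockTime; // word clock time, for external sync hardware
    SMPTETime           mSMPTETime;     // SMPTE time, for external sync hardware
    AudioTimeStampFlags mFlags;         // which of the above fields are actually valid
    UInt32              mReserved;
};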
Incidentally, this is where the Fundamentals of Digital Audio session
at WWDC was quite helpful. I just saw that the WWDC session videos are
now online, so I'll be revisiting those quite a bit.
> If you use HostTime (mach_absolute_time) you have a way to then relate that time to the audio time line.
I believe this question came up on the list recently, but from the
on-device perspective. Since I'm on the desktop, I have access to the
functions in <CoreAudio/HostTime.h>. Should I be using those rather
than relying on the HostTime/mach_absolute_time equivalency?
Basically, should I use AudioConvertHostTimeToNanos, or
AbsoluteToNanoseconds?
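In case it helps frame the question, here is roughly what I have now.
The names are mine, and I'm assuming the two routes are equivalent
because mHostTime is in the same units as mach_absolute_time():

#include <CoreAudio/CoreAudio.h>   // AudioTimeStamp
#include <CoreAudio/HostTime.h>    // AudioConvertHostTimeToNanos, AudioGetCurrentHostTime
#include <mach/mach_time.h>        // mach_absolute_time, mach_timebase_info

// How far in the future, in nanoseconds, the samples in this render
// cycle will reach the hardware, relative to "now".
static UInt64 NanosUntilPlayback(const AudioTimeStamp *ts)
{
    if (!(ts->mFlags & kAudioTimeStampHostTimeValid))
        return 0;

    // Route 1: the helpers from <CoreAudio/HostTime.h>.
    UInt64 nowNanos  = AudioConvertHostTimeToNanos(AudioGetCurrentHostTime());
    UInt64 playNanos = AudioConvertHostTimeToNanos(ts->mHostTime);

    // Route 2: scale mach_absolute_time() ticks by hand (overflow
    // ignored for the purposes of this sketch). The two routes should
    // agree if host time really is mach time.
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);
    UInt64 playNanosViaMach = ts->mHostTime * tb.numer / tb.denom;
    (void)playNanosViaMach;  // kept only for comparison while debugging

    return (playNanos > nowNanos) ? (playNanos - nowNanos) : 0;
}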
>
>>
>> My second question is, does Core Audio perform any clock drift
>> compensation when multiple output units are used in a graph?
>
> No
>
>> The user
>> might schedule some tracks to play back on a different hardware
>> device; even if both devices are nominally running at the same clock
>> speed, I'm worried that Core Audio can only say "I've fed 10,000
>> samples to the hardware so far, and it claims to be running at
>> 44.1kHz, so the wall clock time for the next samples I ask for must be
>> 10000/44100 seconds since I started running." What if the same
>> generator unit is being used to provide audio for two out-of-sync
>> devices?
>
> This is why we provide aggregate devices. An aggregate device is a union of two or more distinct audio devices, which are often running from different clocks. The AggDevice keeps these devices in sync by resampling the devices that are NOT the master clock for the aggregate.
Okay, so the takeaway seems to be to use a single output unit pointed
at an aggregate device rather than multiple output units, and let the
aggregate device do all the internal SRC necessary to make sure
nothing plays at the wrong rate. I hadn't even thought about aggregate
devices; it seems rather logical now.
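Just to make sure I have it: something like this on the HAL output
unit, assuming I already have the AudioDeviceID of an aggregate device
(created in Audio MIDI Setup for now)? The names are mine:

#include <AudioUnit/AudioUnit.h>
#include <CoreAudio/CoreAudio.h>

// Point the output unit at the aggregate device; the aggregate then
// handles the cross-device resampling internally.
static OSStatus UseAggregateDevice(AudioUnit outputUnit, AudioDeviceID aggregateID)
{
    return AudioUnitSetProperty(outputUnit,
                                kAudioOutputUnitProperty_CurrentDevice,
                                kAudioUnitScope_Global,
                                0,
                                &aggregateID,
                                sizeof(aggregateID));
}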
>
>>
>> If Core Audio can't do any compensation, what does this mean for
>> determining the current wall clock time for my application? Might,
>> over a long period of time, a slow- or fast-running hardware device
>> cause its timestamps to become out of sync with reality?
>
> Yes, see my note above about the relationship between host time and sample time and the rate scalar. There have been other more detailed posts about this topic in the past.
The most helpful one I found was a post from you about synchronizing
video:
http://lists.apple.com/archives/coreaudio-api/2005/Jan/msg00179.html
Since I just need to keep the start times of different audio tracks in
rough sync, I suppose the best approach would be to use a render
notification callback on the aggregate output device to keep a running
tally of elapsed real time (sample count divided by the sample rate,
adjusted by the rate scalar). Then, when that time gets within a
certain range of the next audio event, start prepping that event (add
its node to the graph, prime the buffers, etc.).
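Something along these lines is what I have in mind. This is only a
sketch: the globals and the one-second threshold are placeholders, the
rate scalar is really an instantaneous value that should be
accumulated per buffer, and the actual prep work would happen off the
render thread:

#include <AudioUnit/AudioUnit.h>

static Float64 gNominalSampleRate = 44100.0;  // assumed device rate
static Float64 gNextEventSeconds  = 0.0;      // when the next track should start

static OSStatus RenderNotify(void                        *inRefCon,
                             AudioUnitRenderActionFlags  *ioActionFlags,
                             const AudioTimeStamp        *inTimeStamp,
                             UInt32                       inBusNumber,
                             UInt32                       inNumberFrames,
                             AudioBufferList             *ioData)
{
    if (!(*ioActionFlags & kAudioUnitRenderAction_PostRender))
        return noErr;
    if (!(inTimeStamp->mFlags & kAudioTimeStampSampleTimeValid))
        return noErr;

    // Elapsed device time: samples rendered so far divided by the
    // nominal rate, scaled by the rate scalar to account for the
    // device running slightly fast or slow.
    Float64 rateScalar = (inTimeStamp->mFlags & kAudioTimeStampRateScalarValid)
                             ? inTimeStamp->mRateScalar : 1.0;
    Float64 elapsedSeconds =
        (inTimeStamp->mSampleTime / gNominalSampleRate) * rateScalar;

    // If the next scheduled event is close, flag the main thread to
    // start prepping it (add its node, prime buffers, etc.). None of
    // that work should happen here on the render thread.
    if (gNextEventSeconds - elapsedSeconds < 1.0) {
        // e.g. set an atomic flag or enqueue a message (omitted)
    }
    return noErr;
}

// Installed with: AudioUnitAddRenderNotify(outputUnit, RenderNotify, NULL);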
Thanks for the help, Bill!
--Kyle Sluder
>
>> I also need
>> to look into running different output units at different clock speeds,
>> but I suppose I'm expected to insert a varispeed unit into the graph
>> to compensate.
>
> Yes - but using Aggregate Devices is really the best way to accomplish this. There have been numerous posts about this as well, and somewhere there are details about how to make these in your App. (You can see the UI we provide for this in the Utility App: Audio MIDI Setup)
>
> Bill
>
>>
>> Thanks for the help all,
>> --Kyle Sluder