Re: Audio Unit/AudioConverter and getting what's playing *now*
- Subject: Re: Audio Unit/AudioConverter and getting what's playing *now*
- From: Simon Fraser <email@hidden>
- Date: Fri, 22 Oct 2004 10:11:30 -0700
On Oct 22, 2004, at 9:41 am, Doug Wyatt wrote:
> On Oct 21, 2004, at 23:23, Simon Fraser wrote:
>> My problem is that this sound has to sync to video. There's a delay
>> between calling AudioOutputUnitStart() and the sound coming out, I think
>> because of some thread-starvation issues that mean that my
>> ACComplexInputProc can't always supply as many packets as the converter
>> wants (which may happen because of network bandwidth issues in the
>> normal course of events). So I need to be able to ask the audio device
>> (or some other component) for the timestamp of the samples that are
>> currently being played.
>>
>> I've tried getting the audio device for the AU with
>> AudioUnitGetProperty(myUnit, kAudioOutputUnitProperty_CurrentDevice,
>> kAudioUnitScope_Output...), and then calling
>> AudioDeviceGetCurrentTime(), but this just returns the number of samples
>> processed since I called AudioOutputUnitStart() (i.e. it continues to
>> increase even during the initial period of silence).
>> AudioDeviceTranslateTime() didn't seem to help me either.
> AudioDeviceGetCurrentTime() does return the current hardware sample
> number. This is actually an arbitrary number; it will start at something
> other than 0 in many situations (e.g. if another process was already
> playing sound when yours started). This time also has nothing to do with
> the current buffer you're playing; it is the HAL's computation of what
> sample number is currently hitting the hardware.
>
> AudioDeviceTranslateTime() just converts between those driver-absolute
> sample numbers and host time (and other units if the driver supports
> them, though I don't know of any drivers that do, yet).
Thanks for clearing that up; I suspected that this was the case.
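So my mental model of the translation is a linear clock anchor. A sketch of that model, with made-up numbers (this is my own illustration, not the real HAL call or its data structures):

```c
/* Rough model of sample-time <-> host-time translation: the HAL keeps an
   anchor pair (sample number, host time) plus the nominal rate, and
   extrapolates linearly. The struct and values are illustrative only. */
typedef struct {
    double anchorSampleTime;  /* arbitrary driver-absolute sample number */
    double anchorHostSeconds; /* host clock (seconds) at that sample */
    double sampleRate;        /* nominal device rate, e.g. 44100.0 */
} ClockAnchor;

double HostSecondsForSampleTime(const ClockAnchor *a, double sampleTime)
{
    return a->anchorHostSeconds +
           (sampleTime - a->anchorSampleTime) / a->sampleRate;
}

double SampleTimeForHostSeconds(const ClockAnchor *a, double hostSeconds)
{
    return a->anchorSampleTime +
           (hostSeconds - a->anchorHostSeconds) * a->sampleRate;
}
```

This is consistent with the sample number being arbitrary: only differences from the anchor matter, never its absolute value.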
> You can also look at the stream of timestamps coming from the output
> unit in its calls to get input from another AU or an input callback.
> These timestamps do start at 0 when you start the output unit.
Right, but I still need to account for the delay between starting the
output unit and getting sound. Maybe priming the audio unit with a
known period of silence would help here?
> To do anything more with these timestamps requires some notion of where
> you are in your timeline -- you'll need an idea of what hardware
> reference time is the beginning of the timeline, and make your
> calculations relative to that start time.
The problem is that the "beginning of the timeline" needs to be the
moment sound starts coming out of the speakers, and I haven't found a way
to get a callback at that point, to request a timestamp for it, or to
request a time relative to it.
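Once I do manage to capture such an anchor (say, the sample time of the first buffer that actually carries audio), the rest is simple arithmetic. A sketch, with hypothetical names of my own:

```c
/* Sketch: interpret later callback timestamps relative to a captured
   anchor. gAnchorSampleTime is hypothetical state set once, at the
   moment I decide the timeline starts; it is not a Core Audio global. */
static double gAnchorSampleTime = -1.0; /* < 0 means "not captured yet" */

/* Media time (seconds into my timeline) for a callback's sample time. */
double MediaTimeForSampleTime(double sampleTime, double sampleRate)
{
    if (gAnchorSampleTime < 0.0)
        gAnchorSampleTime = sampleTime; /* first real buffer: t = 0 */
    return (sampleTime - gAnchorSampleTime) / sampleRate;
}
```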
> You should start the audio hardware running before you do anything
> related to sync; in some situations it can take a noticeable amount of
> time for the hardware to start. One way to do this is simply to start
> the output unit and feed it silence before your timeline begins.
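If I follow, the input proc would look something like this while priming (a sketch; FeedState and FillBuffer are names I invented, not Core Audio API):

```c
#include <string.h>

/* Sketch of "feed it silence before your timeline begins": the input
   proc writes zeros until playback is armed, so the hardware's startup
   cost is paid before any real audio is due. */
typedef struct {
    int playing; /* 0 = still priming with silence */
} FeedState;

void FillBuffer(FeedState *s, float *buf, int frames)
{
    if (!s->playing) {
        memset(buf, 0, (size_t)frames * sizeof(float)); /* silence */
        return;
    }
    /* ... supply real decoded samples here ... */
}
```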
I can try this (I assume I'll still get a delay between feeding silence
and "hearing" silence). But once the audio unit is warmed up, how can I
measure the latency between the samples I'm feeding it in the input
proc, and the sound that's currently playing?
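From what I've read since, the fixed part of that latency should be the sum of the per-component frame counts the HAL reports (device latency and safety offset via properties like kAudioDevicePropertyLatency and kAudioDevicePropertySafetyOffset, plus stream latency and the buffer size). A sketch of the arithmetic only; the values passed in are assumptions, and nothing here is a real API call:

```c
/* Sketch: total fixed output latency in seconds, given per-component
   frame counts (device latency, stream latency, safety offset, buffer
   size). The inputs are illustrative; on a real device you'd query the
   HAL for each component. */
double OutputLatencySeconds(unsigned deviceLatency, unsigned streamLatency,
                            unsigned safetyOffset, unsigned bufferFrames,
                            double sampleRate)
{
    unsigned totalFrames = deviceLatency + streamLatency +
                           safetyOffset + bufferFrames;
    return (double)totalFrames / sampleRate;
}
```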
Thanks again for the help; this is my first foray into Core Audio!
Simon
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden