Re: AudioDeviceGetCurrentTime after headphone plug/unplug
- Subject: Re: AudioDeviceGetCurrentTime after headphone plug/unplug
- From: Bjorn Roche <email@hidden>
- Date: Thu, 18 Jan 2007 18:30:06 -0500
Brant,
I am CC'ing both the Apple CoreAudio dev and PortAudio lists because I
think this is relevant to both of them.
Responses below.
On Jan 18, 2007, at 6:01 PM, Brant Sears wrote:
> Hi Bjorn,
> I am using the data in the callback - but I also need to be able to
> access the same clock via another thread at a different time. My
> app does streaming video and I'm using this to time the video frames.
> All of my frames have timestamps, so what I do is store a reference
> timestamp and a reference time during the audio callback. Then in
> another thread (that shows video), I grab the current time (in this
> case via Pa_GetStreamTime()) and then compute a current time that
> can be compared to the timestamp in the video frame:
> (currentTime - referenceTime) + referenceTimestamp.
Seems reasonable. You may want to read
http://www.portaudio.com/docs/portaudio_sync_acmc2003.pdf
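As a rough illustration of that bookkeeping (this is not code from either app; the names g_refStreamTime, g_refMediaTime, and currentMediaTime are placeholders, and making the two shared values properly thread-safe is left out), it might look something like this against the PortAudio v19 API:

/* Sketch only: the reference pair is captured in the audio callback and
 * read from the video thread. Use a lock or an atomic snapshot for the
 * two shared values in real code. */
#include "portaudio.h"

static volatile PaTime g_refStreamTime = 0;  /* outputBufferDacTime from the last callback */
static volatile double g_refMediaTime  = 0;  /* media timestamp of the audio written in that callback */

static int audioCallback( const void *input, void *output,
                          unsigned long frameCount,
                          const PaStreamCallbackTimeInfo *timeInfo,
                          PaStreamCallbackFlags statusFlags,
                          void *userData )
{
    (void) input; (void) output; (void) frameCount; (void) statusFlags; (void) userData;

    /* ... fill `output` with the next frameCount samples of program audio ... */
    double mediaTimeOfThisBuffer = 0.0;  /* placeholder: timestamp of the samples just written */

    g_refStreamTime = timeInfo->outputBufferDacTime;  /* when those samples should hit the DAC */
    g_refMediaTime  = mediaTimeOfThisBuffer;
    return paContinue;
}

/* Video thread: map "now" on the stream clock to a media time that can be
 * compared against each video frame's timestamp. */
static double currentMediaTime( PaStream *stream )
{
    return (Pa_GetStreamTime( stream ) - g_refStreamTime) + g_refMediaTime;
}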
> So, I am using timeInfo->outputBufferDacTime for the reference
> time. The reason I am doing this is that I believe (correct me if I
> am wrong) that this time refers to the time that it will be when
> the audio I'm supplying is actually played.
That is my guess. I wrote the code a while ago, so I could be wrong.
> Like I said, the video thread is using Pa_GetStreamTime().
Based on what Apple said, it seems like there's no way to make this
function call reliably tell you the time as reported by the device
unless you don't care about when people plug/unplug the headphones,
so it seems the info in the callback is all you get.
But I'm able to use (even less than) this info to sync video quite
well (well within a frame) in my app, www.xowave.com, so it's possible.
> So, using Pa_GetStreamTime in both places results in problems with
> the audio and video not being in sync. I tried changing the
> suggestedLatency field in the PaStreamParameters structure that I am
> using to create my PaStream to defaultLowOutputLatency, but this
> still is not sufficiently accurate to achieve lip sync.
Latency on Mac OS X is controlled primarily not by the latency
settings but by the frames-per-buffer setting. This is because
there's hardly any latency on the Mac beyond the buffer size, except
in some cases described in the notes.txt file in PortAudio's
CoreAudio directory, which I just updated to reflect some of this.
Bottom line: in all cases, your framesPerBuffer setting will be a
significant factor; in some cases, latency settings will also be a factor.
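For concreteness, a hypothetical way of opening such a stream (the 256-frame buffer and 44100 Hz rate are just example values, error checking is trimmed, and Pa_Initialize() is assumed to have been called already):

#include "portaudio.h"

PaStream *openLowLatencyOutput( PaStreamCallback *callback, void *userData )
{
    PaStreamParameters out;
    out.device = Pa_GetDefaultOutputDevice();
    out.channelCount = 2;
    out.sampleFormat = paFloat32;
    out.suggestedLatency =
        Pa_GetDeviceInfo( out.device )->defaultLowOutputLatency;
    out.hostApiSpecificStreamInfo = NULL;

    PaStream *stream = NULL;
    /* On the Mac the framesPerBuffer argument (256 here, roughly 5.8 ms at
     * 44100 Hz) is the main knob for output latency; suggestedLatency is
     * filled in but matters less, per the note above. */
    Pa_OpenStream( &stream, NULL, &out, 44100.0, 256, paClipOff,
                   callback, userData );
    return stream;
}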
If you'd like to work on a patch to portaudio that uses the callback
data + interpolation (based on some other clock), I could give you
some guidance, but I don't have a lot of time to work on it myself ATM.
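To make that idea concrete, here is a minimal sketch of the interpolation (not actual pa_mac_core.c code; the helper names are invented, cross-thread synchronization is omitted, and CoreAudio's host clock stands in for the "other clock"):

#include <CoreAudio/HostTime.h>
#include "portaudio.h"

static volatile PaTime g_cbStreamTime = 0;  /* timeInfo->currentTime at the last callback */
static volatile UInt64 g_cbHostNanos  = 0;  /* host clock reading taken at the same moment */

/* Call from the stream callback with its timeInfo argument. */
static void noteCallbackTime( const PaStreamCallbackTimeInfo *timeInfo )
{
    g_cbStreamTime = timeInfo->currentTime;
    g_cbHostNanos  = AudioConvertHostTimeToNanos( AudioGetCurrentHostTime() );
}

/* An interpolated "stream time now", readable from any thread between callbacks. */
static PaTime interpolatedStreamTime( void )
{
    UInt64 nowNanos = AudioConvertHostTimeToNanos( AudioGetCurrentHostTime() );
    return g_cbStreamTime + (PaTime)( nowNanos - g_cbHostNanos ) * 1e-9;
}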
> OK, so the problem is that the "device" in
> AudioDeviceGetCurrentTime() becomes a new device?
> What I don't understand is why the time in the callback isn't also
> reset? This value seems to be coming from the TimeStamp that is
> passed as an argument to IOAudioProc in pa_mac_core.c. I'm not
> sure where this is ultimately coming from.
According to Apple, the callback comes through an extra abstraction
layer (the AUHAL), which seems to take care of that.
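For reference, a bare-bones example of asking the HAL directly (not code from this thread; it simply re-queries the default output device on every call instead of caching an AudioDeviceID that a plug/unplug may have invalidated, using the AudioHardwareGetProperty/AudioDeviceGetCurrentTime calls discussed here):

#include <CoreAudio/CoreAudio.h>

static OSStatus currentDeviceSampleTime( Float64 *outSampleTime )
{
    AudioDeviceID dev = kAudioDeviceUnknown;
    UInt32 size = sizeof( dev );
    OSStatus err = AudioHardwareGetProperty( kAudioHardwarePropertyDefaultOutputDevice,
                                             &size, &dev );
    if( err != noErr ) return err;

    AudioTimeStamp now;
    err = AudioDeviceGetCurrentTime( dev, &now );  /* fails if the device isn't running */
    if( err != noErr ) return err;

    *outSampleTime = now.mSampleTime;  /* note: restarts from zero when the device (re)starts */
    return noErr;
}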
> Would it help if I open a DTS issue on this? How should I
> characterize this?
I already filed a feature request. If you file a separate report, you
could refer to my bug number: 4939739.
bjorn
-----------------------------
Bjorn Roche
XO Wave
Digital Audio Production and Post-Production Software
http://www.xowave.com
http://blog.bjornroche.com
http://myspace.com/xowave
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden