Re: Question regarding recording and accounting for drift
- Subject: Re: Question regarding recording and accounting for drift
- From: Brad Ford <email@hidden>
- Date: Mon, 12 Jan 2009 11:17:58 -0800
On Jan 11, 2009, at 8:06 PM, Neil Clayton wrote:
Hello all, and a happy new year!
I've written a bit of code that can record from some device to a QT
Movie. It's using a couple of AUs, a home-grown circular buffer, and
an Audio Converter. It'll also let you play through (much like
CAPlayThrough) if you're recording via Soundflower (or
AudioReflectorDriver).
I'm seeing some drift. Probably not surprising, since I haven't
accounted for it on the recording side.
I'd like to ask some more experienced developers if my current
reasoning seems valid:
Diagram here:
http://dl.getdropbox.com/u/421935/Audio Recorder Design.pdf
This example shows recording from a 10ch device, playing through to
a 2ch output (Speakers) and recording the 10ch discretely into QT.
The dashed lines show my proposed change below.
The input / output flow is for playthru.
The input / recorder flow is what I'm concentrating on.
Assuming that the circular buffer places data in "time" according to
the timestamps received, if the input source were running slightly
faster than the Audio Converter itself, would it be reasonable to
assume that over time the audio would get ahead of the video (not
shown)?
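To put a number on the scale of the problem (figures invented purely
for illustration, not measured from my device):

#include <stdio.h>

int main(void)
{
    /* Back-of-envelope: a 44.1 kHz input clock running about 23 ppm
       fast delivers one extra frame every wall-clock second. */
    double nominalRate  = 44100.0;
    double actualRate   = 44101.0;   /* hypothetical */
    double driftPerHour = 3600.0 * (actualRate - nominalRate) / nominalRate;
    printf("audio leads video by %.3f s per hour\n", driftPerHour);
    return 0;   /* ~0.082 s/hour -- well past lip-sync tolerance */
}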
So would it therefore be reasonable to add another varispeed before
the AC unit, as a way of controlling this? Would that be enough?
The input/output rate of the varispeed and input rate of the AC
would all match the nominal output rate of the Input HAL. I'd then
account for drift using a method similar to what I've seen in
CAPlayThrough (primarily because it's all I've got to go on).
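For concreteness, here's roughly how I'd wire up that extra varispeed
(a sketch only, using the 10.5-era Component Manager calls as in
CAPlayThrough; error checking omitted, and 'rate' would come from
whatever drift estimate I end up with):

#include <AudioUnit/AudioUnit.h>

static AudioUnit MakeVarispeed(Float32 rate)
{
    ComponentDescription desc = { kAudioUnitType_FormatConverter,
                                  kAudioUnitSubType_Varispeed,
                                  kAudioUnitManufacturer_Apple, 0, 0 };
    Component comp = FindNextComponent(NULL, &desc);
    AudioUnit vs = NULL;
    OpenAComponent(comp, &vs);
    AudioUnitInitialize(vs);
    /* 1.0 == no correction; >1.0 consumes its input faster */
    AudioUnitSetParameter(vs, kVarispeedParam_PlaybackRate,
                          kAudioUnitScope_Global, 0, rate, 0);
    return vs;
}

The point is just that once something upstream of the AC can vary its
rate, the drift correction has a knob to turn.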
Am I barking up the right tree?
So...you're recording from 1) an audio input device and 2) a video
input device to file and also performing a real-time preview to 3) an
audio output device, and 4) the screen. That means you've got 4
different devices with potentially 4 different clocks involved (unless
1) and 2) are coming from a muxed device). I'd recommend syncing 2)
to 1), since it's easier to duplicate or drop (or change the timing
of) video frames than to resample audio. That will get 1 and 2 onto
the same timeline. You could drive a timebase off of 1)'s clock --
that will be your master. For preview, you're going to probably have
to use that master timebase to correct for timing differences on the
output devices -- in other words you're going to have to resample the
audio for audio preview, and you're going to have to correct the video
(duplicate or drop, or change the frame durations) to keep everything
in sync.
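For the audio side of that, one handle on the relative clock rates is
the mRateScalar field the HAL stamps on its AudioTimeStamps. Something
along these lines (an untested sketch; all the names are mine) could
feed the varispeed you proposed:

#include <CoreAudio/CoreAudio.h>
#include <AudioUnit/AudioUnit.h>

/* Latest rate scalars observed from each device's IOProc. mRateScalar
   is the ratio of the device clock's measured rate to its nominal
   rate (1.0 == running exactly at nominal). */
static Float64 gInputRateScalar  = 1.0;
static Float64 gOutputRateScalar = 1.0;

static OSStatus InputIOProc(AudioDeviceID inDevice,
                            const AudioTimeStamp *inNow,
                            const AudioBufferList *inInputData,
                            const AudioTimeStamp *inInputTime,
                            AudioBufferList *outOutputData,
                            const AudioTimeStamp *inOutputTime,
                            void *inClientData)
{
    if (inInputTime->mFlags & kAudioTimeStampRateScalarValid)
        gInputRateScalar = inInputTime->mRateScalar;
    /* ... write inInputData into the ring buffer at
       inInputTime->mSampleTime ... */
    return noErr;
}

/* Called periodically (or from the output IOProc, which would update
   gOutputRateScalar the same way) to nudge the varispeed so the
   preview chain consumes at the input device's actual rate. */
static void UpdateVarispeed(AudioUnit varispeed)
{
    Float32 rate = (Float32)(gInputRateScalar / gOutputRateScalar);
    AudioUnitSetParameter(varispeed, kVarispeedParam_PlaybackRate,
                          kAudioUnitScope_Global, 0, rate, 0);
}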
Hard problems. This is exactly the kind of thing QTKit capture does
well behind the covers. Are those APIs not sufficient for you?
-Brad Ford
QuickTime Engineering
Neil Clayton
email@hidden
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden