Re: Timing in a user-land driver
- Subject: Re: Timing in a user-land driver
- From: Jeff Moore <email@hidden>
- Date: Thu, 3 Apr 2008 12:37:20 -0700
First off, before I get too far into the details: IOProcs expect the
sample time to be incremented by the IO buffer frame size each time
they are called. Usually the folks that care about the time stamps
will do a resync operation whenever this is not true, which is
probably why QT was behaving the way it did.
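For illustration, a client that cares about the time stamps might detect such a discontinuity roughly like this (a hypothetical sketch; NeedsResync is a made-up name, not a real Core Audio call):

```cpp
#include <cmath>
#include <cstdint>

// Hypothetical client-side check: an IOProc expects each new sample time to
// equal the previous one plus the IO buffer frame size. Any other jump is a
// discontinuity that should trigger a resync of the client's timing state.
bool NeedsResync(double previousSampleTime, double currentSampleTime,
                 uint32_t bufferFrameSize)
{
    return std::fabs(currentSampleTime -
                     (previousSampleTime + bufferFrameSize)) > 0.0;
}
```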
At any rate, just so I'm following what you are asking about: it
sounds like your device has a completely software-derived clock rather
than one that reflects the clock of some other device, right? In fact,
it sounds like you want your device's clock to have a perfect 1.0 rate
scalar, correct?
If so, you'll probably want to look at how I implemented the clock of
the SampleHardwarePlugIn. Specifically, you'll want to check out the
methods SHP_Device::GetCurrentTime and SHP_Device::TranslateTime to
see how it calculates the various aspects of the time stamp. You'll
also want to check out the HP_IOThread class to see how those methods
are used to build the time stamps that get passed to the IOProcs.
Basically what you'll find is that the clock works like this:
When IO is started, an anchor time (sample time = 0, host time =
current host time) is taken which forms the basis of projection for
the other calculations.
Because the rate scalar is exactly 1.0, you can calculate the number
of host ticks per sample by dividing the host clock frequency by the
nominal sample rate. This figure is important for the other
calculations and will only change when the sample rate changes.
Calculating the current sample time then becomes: (current host time -
anchor host time) / host ticks per sample.
Translating between sample and host times is also a simple linear
projection (the gory details of which you can see in the sample code).
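Sketched in code, the whole projection might look like this (hypothetical names and a made-up host clock frequency; the real logic lives in SHP_Device::GetCurrentTime and SHP_Device::TranslateTime):

```cpp
#include <cstdint>

// Hypothetical software clock with a perfect 1.0 rate scalar.
// Anchor: sample time 0 corresponds to the host time at which IO started.
struct SoftwareClock
{
    uint64_t anchorHostTime;     // host time captured when IO started
    double   hostTicksPerSample; // host clock frequency / nominal sample rate

    SoftwareClock(uint64_t startHostTime, double hostClockFrequency,
                  double nominalSampleRate)
        : anchorHostTime(startHostTime),
          hostTicksPerSample(hostClockFrequency / nominalSampleRate) {}

    // Current sample time: (current host time - anchor host time) /
    // host ticks per sample.
    double SampleTimeForHostTime(uint64_t hostTime) const
    {
        return static_cast<double>(hostTime - anchorHostTime) /
               hostTicksPerSample;
    }

    // The inverse linear projection: sample time back to host time.
    uint64_t HostTimeForSampleTime(double sampleTime) const
    {
        return anchorHostTime +
               static_cast<uint64_t>(sampleTime * hostTicksPerSample + 0.5);
    }
};
```

Note that the ticks-per-sample figure only needs to be recomputed when the nominal sample rate changes.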
The SampleHardwarePlugIn's IO thread then keeps a frame counter. Each
time it wakes up to call the IOProcs, it increments this counter by
the IO buffer frame size.
The sample times that are passed to the IOProcs use the frame counter
as an offset from the IO thread's own anchor time. The corresponding
host times are then calculated using SHP_Device::TranslateTime. This
means that unless something bad happens, each successive call to the
IOProcs will see its time stamps increment by the IO buffer frame
size.
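A minimal sketch of that frame-counter scheme (hypothetical names; not the actual HP_IOThread code):

```cpp
#include <cstdint>

// Hypothetical IO-thread time stamper. Each wakeup stamps the buffer with
// the anchor sample time plus a running frame counter, then advances the
// counter by the IO buffer frame size, so successive IOProc calls always
// see sample times exactly one buffer apart.
struct IOTimeStamper
{
    double   anchorSampleTime; // IO thread's anchor (sample time at start)
    uint32_t bufferFrameSize;  // frames per IO buffer
    uint64_t frameCounter;     // frames delivered so far

    IOTimeStamper(double anchor, uint32_t frameSize)
        : anchorSampleTime(anchor), bufferFrameSize(frameSize),
          frameCounter(0) {}

    // Sample time for the next IOProc call; the corresponding host time
    // would come from the clock's sample-to-host projection.
    double NextSampleTime()
    {
        double sampleTime = anchorSampleTime +
                            static_cast<double>(frameCounter);
        frameCounter += bufferFrameSize;
        return sampleTime;
    }
};
```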
On Apr 3, 2008, at 5:39 AM, Stéphane Letz wrote:
Our user-land driver has to provide correct timing information as
parameters when calling the application's IOProc. So we need to fill
an AudioTimeStamp structure with the appropriate mSampleTime,
mHostTime, and mRateScalar values. We were just using
CAHostTimeBase::GetTheCurrentTime() to fill in mHostTime, setting
mRateScalar to 1.0, and using a function that *estimates* the
device's real frame position when our device's audio callback is
called. This means the estimated frame times were not always
separated by exactly our device buffer size at each callback.
This timing behaviour was actually causing problems with QuickTime,
which was dropping a sample from time to time when playing audio
files. By correcting mSampleTime to simply increment by one complete
buffer size at each callback, everything seems to work now. I just
wanted to confirm that this way of computing timing is the right one?
--
Jeff Moore
Core Audio
Apple
_______________________________________________
Coreaudio-api mailing list (email@hidden)