Sample clock, gui synch issues
- Subject: Sample clock, gui synch issues
- From: Andrew Coad <email@hidden>
- Date: Wed, 25 Aug 2010 16:49:28 -0400
- Importance: Normal
I know there has been a lot of discussion on this list about the use of "sample counting" to keep strict time in applications such as sequencers and metronomes. However, I've just abandoned using mSampleTime as my fundamental timing reference, so I thought I would write this note to explain why, and to ask whether the mechanism I am currently using (it seems to work fine in simple tests, but you never know...) is valid under all circumstances.
The problem I found with using mSampleTime as the fundamental timing reference is that it does not have sufficient granularity at high BPM rates with subdivisions within each beat. For example, if you define your smallest unit of time (maybe a 1/32nd or 1/64th of a measure) as an integral number of samples, your actual playback BPM can differ from your desired BPM by a small but noticeable amount (professional musicians are amazing - they can pick up on very small deviations).
What I am now doing is using machine "ticks" as the fundamental timing reference and converting between machine ticks and samples when I want to calculate where in a render buffer the audio data needs to start. A little more detail:
In my particular case, I know ahead of time when display events and audio events should occur, measured from some arbitrary starting point, so I precompute all of these event times in nanoseconds. The display thread gets machine ticks and converts to nanos using:
hostTime = mach_absolute_time();
hostTimeInNanos = hostTime * hostTimeToNanosConversionFactor;
and the conversion factor was derived using:
kern_return_t kerror;
mach_timebase_info_data_t tinfo;
kerror = mach_timebase_info(&tinfo);
if (kerror != KERN_SUCCESS) handleError();
hostTimeToNanosConversionFactor = (double)tinfo.numer / tinfo.denom;
The display update method is triggered by an NSTimer running at 60 Hz. Whether a display update needs to occur or not can be computed by:
if (startSynchTime + nextEvent.timeOffset < hostTimeInNanos + (displayControl.displayRefreshPeriodInNanos / 2)) {
The display update method displays all as-yet-undisplayed events prior to this time, so it won't miss events if NSTimer misses any ticks; NSTimer would need to be grossly wrong/slow for there to be a visually noticeable problem. Notionally, events go on screen within +/- 0.5 * displayRefreshPeriodInNanos, i.e. +/- 8.3 ms of the desired time. Not a problem.
The audio render callback gets the start time of the render buffer in nanos using:
frameBufferTime = inTimeStamp->mHostTime * hostTimeToNanosConversionFactor;
it checks to see if the next audio event falls within the time window defined by the callback buffer size using:
if (startSynchTime + nextEvent.timeOffset < frameBufferTime + (inNumberFrames * nanosPerSample)) {
and starts writing the next audio event into the buffer if required; otherwise it writes silence. Which sample in the buffer the audio has to start at is computed as:
audioStartSample = (startSynchTime + nextEvent.timeOffset - frameBufferTime >= 0)
    ? (startSynchTime + nextEvent.timeOffset - frameBufferTime) / nanosPerSample
    : 0;
Silence is prepended if necessary.
startSynchTime is the single data item shared between the display update method and the audio render callback. This variable is set to the value of inTimeStamp->mHostTime of the first render callback following the start of the AUGraph. Both the display update method and the audio render callback measure their nanosecond timing from this reference point.
As mentioned above, this mechanism seems to work fine. Stability of the BPM rate over long periods of time is good, and jitter between individual audio events is not large enough to be noticeable (on the order of hundreds of microseconds). I have noticed that the relationship between inTimeStamp->mHostTime and inTimeStamp->mSampleTime is not constant; i.e. mHostTime on callback N+1 is not equal to mHostTime on callback N plus (inNumberFrames * nanosPerSample). I have seen a deviation of roughly 30 samples between callbacks. mSampleTime cannot be wrong, so it looks like the deviation comes from the reading of mHostTime. This in turn means my actual audio start time is +/- 15 samples from the desired start sample, or about +/- 340 us. I don't see this as a problem.
That's it. Would appreciate any feedback on anything goofy that I'm doing or cases where this approach might be problematic.
AC
_______________________________________________
Coreaudio-api mailing list (email@hidden)