> I've been using time stamps in an audio unit render callback to
count samples. Problem with that is that the callback processes 512
frames of audio at once; so it's not really accurate either.
Disagree.
The number of frames being processed per callback has no impact on
accuracy for an application like a metronome or sequencer where you know
ahead of time what needs to be played and at what time. The accuracy of
a single metronome click (i.e. the difference between the desired time of
the sound and the actual time) will never be worse than one half of one
sample period, which is 1/(2 × 44100) s ≈ 11.3 µs. At 140 BPM (428.6 ms
per beat) this is a worst-case relative error of about 0.003%. You can
never get more accurate than this. Even if you calculated timing at the
nanosecond level you can only start the sound on sample boundaries, so
you will still be off by the same amount. Fortunately, an error that
small has no meaning in music applications.
The number of samples being
processed per callback does have an impact in applications where you
don't know ahead of time what needs to be played - e.g. all cases of
user-initiated real-time sound. For these cases, the worst-case delay in
getting the user-initiated sound to actually play is (buffer size in
samples)/(44100 samples per second) = 512/44100 s ≈ 11.6 ms for a
512-sample buffer. I don't know how much delay is perceivable by a human
being (it will surely vary person to person), but 11.6 ms is not
perceivable. In cases where even
this delay is undesirable you can reduce the number of samples per
callback (512 is the default) but this increases the frequency at which
the callback occurs so there is a trade-off here.
Andrew Coad