Core Audio playback precision on iOS devices (and simulator)
- Subject: Core Audio playback precision on iOS devices (and simulator)
- From: Antonio Nunes <email@hidden>
- Date: Fri, 16 Jul 2010 16:01:27 +0100
CORRECTION: I made a mistake in the "driver" function listing. I'm repeating the whole email, this time with the driver function corrected (I was putting the thread to sleep in the wrong place).
I'm testing playing some ticks at specific points in time using the RemoteIO audio unit. To that end I have a very simple render callback that reads data from a buffer:
static OSStatus multiChannelMixerRenderCallback(void *inRefCon,
                                                AudioUnitRenderActionFlags *ioActionFlags,
                                                const AudioTimeStamp *inTimeStamp,
                                                UInt32 inBusNumber,
                                                UInt32 inNumberFrames,
                                                AudioBufferList *ioData)
{
    AudioUnitSampleType *out = (AudioUnitSampleType *)ioData->mBuffers[0].mData;

    // Pull one sample per frame from the FIFO that the driver thread keeps filled.
    for (UInt32 i = 0; i < inNumberFrames; ++i) {
        out[i] = MT_PopSample(inBusNumber);
    }
    return noErr;
}
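For reference, this is roughly how such a callback gets attached to an input bus of the mixer. This is a minimal sketch rather than my actual setup code; mixerUnit stands in for an already-created mixer instance:

#include <AudioUnit/AudioUnit.h>

// Sketch only: attach the render callback to input bus 0 of a mixer unit.
// 'mixerUnit' is assumed to be an AudioUnit that has already been created.
static OSStatus attachTickCallback(AudioUnit mixerUnit)
{
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc       = multiChannelMixerRenderCallback;
    callbackStruct.inputProcRefCon = NULL; // the callback above needs no context

    return AudioUnitSetProperty(mixerUnit,
                                kAudioUnitProperty_SetRenderCallback,
                                kAudioUnitScope_Input,
                                0, // input bus 0
                                &callbackStruct,
                                sizeof(callbackStruct));
}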
I fill the buffer with samples from a file. The sound is very short. I put a tick on every second (by resetting the packet counter on the audio file controller object):
- (void)driver
{
    [NSThread setThreadPriority:1.0];
    NSAutoreleasePool *threadPool = [[NSAutoreleasePool alloc] init];

    continuePlaying = YES;
    [audioFile reset];
    totalEntries = 0;
    retrievePosition = 0;
    storePosition = 0;

    NSTimeInterval sampleTime = 0;
    NSTimeInterval nextTime = sampleTime + 1.0;
    NSTimeInterval renderIncrement = (NSTimeInterval)1.0 / SAMPLE_RATE;

    while (continuePlaying) {
        if (MT_NumberOfAvailableSlots() > 0) {
            // Restart the tick file exactly once per second of sample time.
            if (sampleTime >= nextTime) {
                //printf("sampleTime = %0.15f\n", sampleTime);
                [audioFile reset];
                nextTime += 1.0;
            }
            MT_PushSample(0, [audioFile nextPacket]);
            sampleTime += renderIncrement;
        } else {
            // FIFO is full; back off briefly before trying again.
            usleep(2);
        }
    }

    [threadPool release];
}
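The MT_* helpers are essentially a single-producer/single-consumer sample FIFO between the driver thread and the render callback. A minimal sketch of that idea (illustrative only, not my exact implementation; BUFFER_SLOTS is a stand-in for the real capacity):

#include <AudioUnit/AudioUnit.h>

// Illustrative FIFO sketch: the driver thread pushes, the render callback pops.
// The bus argument is ignored here because this sketch handles a single bus.
#define BUFFER_SLOTS 8192  // power of two, so the index masking below works

static AudioUnitSampleType sampleFIFO[BUFFER_SLOTS];
static volatile UInt32 storePosition    = 0;  // only advanced by the driver thread
static volatile UInt32 retrievePosition = 0;  // only advanced by the render callback

static UInt32 MT_NumberOfAvailableSlots(void)
{
    return BUFFER_SLOTS - (storePosition - retrievePosition);
}

static void MT_PushSample(UInt32 bus, AudioUnitSampleType sample)
{
    sampleFIFO[storePosition & (BUFFER_SLOTS - 1)] = sample;
    storePosition++;
}

static AudioUnitSampleType MT_PopSample(UInt32 bus)
{
    if (retrievePosition == storePosition) {
        return 0;  // underrun: output silence rather than stale data
    }
    AudioUnitSampleType sample = sampleFIFO[retrievePosition & (BUFFER_SLOTS - 1)];
    retrievePosition++;
    return sample;
}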
To the best of my knowledge, the technique I use here is sample precise, and should achieve near absolute precision (well, sub-microsecond precision anyway). However, when I play this back and record the sound in a sound editor, I do not see the precision I was expecting. Playback is very good, but there is a small, accumulating lag on each subsequent tick. I first thought I had my math wrong somewhere, or had set up the audio unit incorrectly, but upon double-checking all the parameters look correct. In addition, since I was getting these results in the simulator, I decided to measure performance on the actual iPad too. Not to my surprise, there was a small accumulating lag, but much to my surprise, the lag was significantly smaller than in the simulator.
In the simulator on my machine, after a 5-minute test run there is an accumulated lag of nearly 17 microseconds.
On the iPad, after a 5-minute test run the accumulated lag is less than 5.5 microseconds.
These differing results suggest to me that the issue is likely not with the techniques employed, but rather that playback does not run at exactly the same speed on different devices, and that in neither case is it completely accurate.
Is my conclusion correct? Am I overlooking something, or am I simply going about this the wrong way? Is there any way to achieve sub-microsecond precision? I would have thought that if I place the samples with absolute precision relative to the sample rate (44100), I should then see (hear) that precision reflected during playback.
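For what it's worth, one way I can think of to verify that suspicion is to compare the sample clock against host time from inside the render callback. This is a rough, untested sketch; the function name and statics are just illustrative, and it assumes the timestamp's mHostTime is valid:

#include <AudioUnit/AudioUnit.h>
#include <mach/mach_time.h>
#include <stdio.h>

// Sketch (untested): call this from the render callback with its inTimeStamp.
// It compares how far the sample clock has advanced against elapsed host time,
// giving the effective hardware sample rate. A value slightly off 44100.0
// would explain a slowly accumulating lag.
static Float64 firstSampleTime = -1.0;
static UInt64  firstHostTime   = 0;

static void logEffectiveSampleRate(const AudioTimeStamp *inTimeStamp)
{
    if (firstSampleTime < 0.0) {
        firstSampleTime = inTimeStamp->mSampleTime;
        firstHostTime   = inTimeStamp->mHostTime;
        return;
    }

    mach_timebase_info_data_t timebase;
    mach_timebase_info(&timebase);

    Float64 elapsedSamples = inTimeStamp->mSampleTime - firstSampleTime;
    Float64 elapsedSeconds = (Float64)(inTimeStamp->mHostTime - firstHostTime)
                             * timebase.numer / timebase.denom / 1e9;
    if (elapsedSeconds > 0.0) {
        // printf only for this experiment; not something to leave in a render callback.
        printf("effective sample rate: %0.6f Hz\n", elapsedSamples / elapsedSeconds);
    }
}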
António
----------------------------------------------------
Energy is like a muscle,
it grows stronger through being used.
----------------------------------------------------