Re: Core Audio playback precision on iOS devices (and simulator)
- Subject: Re: Core Audio playback precision on iOS devices (and simulator)
- From: Jeff Moore <email@hidden>
- Date: Fri, 16 Jul 2010 13:56:29 -0700
I'll admit to having some trouble deciphering your code, but the busy loop in the driver function isn't going to yield particularly great timing. At best, you will have an error that averages about half the duration of the IO cycle (well, given the short sleep period, probably much less than that) plus scheduling latency.
But it's unclear if that will really matter much. It looks like the goal is to keep at least one buffer on the stack so that the render proc has something to hand out. In that respect, it should probably be OK. It will fall down if the thread executing the driver function ever has a scheduling latency larger than the duration of the IO cycle, though. Not a common occurrence, but it could happen if the system was particularly busy. You could fix it by keeping at least one buffer queued at all times.
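The "keep at least one buffer queued" suggestion can be sketched as a small FIFO: the driver thread tops the queue up to two buffers whenever it runs, so a single late wakeup cannot starve the render proc. This is an illustrative sketch, not the original poster's code; the queue type, sizes, and function names are all assumptions, and a real implementation would guard `head`/`count` with a lock or atomics.

```c
#include <string.h>

#define QUEUE_SLOTS    4
#define FRAMES_PER_BUF 512

typedef struct {
    float data[QUEUE_SLOTS][FRAMES_PER_BUF];
    int   head, count;   /* guard with a lock or atomics in real code */
} BufferQueue;

/* Driver thread: refill until at least two buffers are queued, so one
 * scheduling hiccup cannot starve the render proc. Returns buffers added. */
int queue_refill(BufferQueue *q, const float *src) {
    int added = 0;
    while (q->count < 2) {
        int tail = (q->head + q->count) % QUEUE_SLOTS;
        memcpy(q->data[tail], src, sizeof(float) * FRAMES_PER_BUF);
        q->count++;
        added++;
    }
    return added;
}

/* Render proc: pop the oldest queued buffer. Returns 0 on underrun,
 * in which case the caller should hand out silence instead. */
int queue_pop(BufferQueue *q, float *dst) {
    if (q->count == 0)
        return 0;
    memcpy(dst, q->data[q->head], sizeof(float) * FRAMES_PER_BUF);
    q->head = (q->head + 1) % QUEUE_SLOTS;
    q->count--;
    return 1;
}
```

With two buffers always in flight, the render proc survives one full IO cycle of driver-thread scheduling latency before it underruns.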
On Jul 16, 2010, at 7:53 AM, Antonio Nunes wrote:
> To the best of my knowledge, the technique I use here is sample precise, and should achieve near absolute precision (well, sub-microsecond precision anyway).
True. It appears that modulo a catastrophe, this code will play continuous data.
> However, when I play this back and record the sound in a sound editor, I do not see the precision I was expecting. Playback is very good, but there is a small, accumulating lag on each subsequent tick. I first thought I had my math wrong somewhere, or had set up the audio unit incorrectly, but on double-checking all the parameters look correct. In addition, since I was getting these results in the simulator, I decided to measure performance on the actual iPad too. Not surprisingly, there was a small accumulating lag, but much to my surprise the lag was significantly smaller than in the simulator.
>
> In the simulator on my machine, after a 5 minute test run there is an accumulated lag of roughly 17 microseconds.
> On the iPad, after a 5 minute test run there is an accumulated lag of less than 5.5 microseconds.
>
> These differing results suggest to me that the issue likely lies not with the techniques employed, but rather that playback on different devices does not run at absolutely identical speeds, and in neither case is it totally accurate.
>
> Is my conclusion correct?
Yes. It sounds like you have measured the difference in playback rate among the devices.
> Am I overlooking something, and am I simply going about this the wrong way?
I think the issue here is one of expectation. You are expecting the hardware to run at the nominal rate. The fact is that few audio devices run at exactly their nominal rate. Nearly all of them will run a little faster or a little slower than nominal.
Generally this is not considered an issue, as most apps only ever use one device at a time. Plus, the system keeps track of the mapping between the system clock (aka mHostTime in an AudioTimeStamp) and the device's clock (aka mSampleTime in an AudioTimeStamp) so it can feed the hardware at the rate the hardware is actually running.
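That host-time/sample-time mapping is also how you can measure the rate a device is actually running at: take two timestamps from the same device and divide the sample-time delta by the host-time delta. A hedged sketch, with a simplified struct standing in for Core Audio's AudioTimeStamp (real code would convert mHostTime ticks to seconds first):

```c
/* Simplified stand-in for an AudioTimeStamp pair: the device's
 * sample clock plus the host clock already converted to seconds. */
typedef struct {
    double mSampleTime;   /* samples, device clock */
    double mHostSeconds;  /* host clock, in seconds */
} TimeStampSketch;

/* Estimate the rate the hardware is actually running at, in Hz. */
double measured_rate(TimeStampSketch a, TimeStampSketch b) {
    return (b.mSampleTime - a.mSampleTime) /
           (b.mHostSeconds - a.mHostSeconds);
}
```

For example, a nominal 44100 Hz device that delivers 4,410,010 samples in 100 host-clock seconds is really running at 44100.1 Hz, about 2.3 ppm fast.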
> Is there any way to achieve sub-microsecond precision?
You have to correct for this imperfection in cases where multiple audio devices and/or computers are involved, or where some other reason makes delivering the data at the nominal rate important. In a studio environment, you usually have a device that is the master clock, and all the other devices drive their clocks from it. The clock is distributed in a variety of ways. For example, it will often be the clock sent over an AES or SPDIF digital interface.
Another common case where this comes up is in video conferencing. The two computers involved will be running at different rates with respect to one another. The software needs to be able to judge the relative difference in rates and scale its data accordingly (often in the form of doing a small sample rate conversion).
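The "small sample rate conversion" can be as simple as reading the input at a fractional step. This is an illustrative linear-interpolation resampler, not production-quality SRC (real code would use a proper polyphase or windowed-sinc converter); `ratio` is source rate over sink rate, so a drift correction would use a ratio very close to 1.0, e.g. 1.0001.

```c
#include <stddef.h>

/* Minimal linear-interpolation resampler. Reads `in` at a fractional
 * step of `ratio` and writes up to `out_cap` samples into `out`.
 * Returns the number of output samples produced. */
size_t resample_linear(const float *in, size_t in_len,
                       float *out, size_t out_cap, double ratio) {
    size_t n = 0;
    double pos = 0.0;
    while (n < out_cap && pos < (double)(in_len - 1)) {
        size_t i = (size_t)pos;
        double frac = pos - (double)i;
        /* Blend the two neighboring input samples. */
        out[n++] = (float)(in[i] * (1.0 - frac) + in[i + 1] * frac);
        pos += ratio;
    }
    return n;
}
```

At ratio 1.0001 this consumes one extra input sample roughly every 10,000 output samples, which is exactly the kind of ppm-scale correction clock drift requires.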
In Mac OS X, the HAL provides Aggregate Devices for this job. An Aggregate Device is a virtual device that is composed of other real devices. The Aggregate will, when told to, handle doing the appropriate sample rate conversions to correct for the drift between the clocks of the various devices it is made of to keep things in sync.
> I would have thought that if I place the sample with absolute precision in relation to the sample rate (44100), that I should then see (hear) that precision reflected during playback.
Actually, I'd argue that you are. The fact that the audio signal is continuous shows that. I think you just had the wrong expectations about what you were going to see with your test.
--
Jeff Moore
Core Audio
Apple
_______________________________________________
Coreaudio-api mailing list (email@hidden)