Re: achieving very low latency
- Subject: Re: achieving very low latency
- From: William Stewart <email@hidden>
- Date: Mon, 09 Jul 2012 17:58:12 -0700
You can't use AudioQueueStart here and expect low latency response. We can't make (and don't make) any guarantees about start-up latency, and the audio queue itself is far enough removed from the basic I/O mechanism that it cannot (and wasn't designed to) perform under these low-latency conditions. So, if that is your metric, it isn't at all surprising that you are seeing tens of msecs.
The way to do this is to use AUHAL (or AURemoteIO on iOS, which can also do single-digit-msec I/O) and have it running. Why? HW takes time to start. The reason any system can give you these low numbers is because all of those bits and pieces are powered up and running. None of these systems can respond in single-digit msecs from a "not running" state to a "running" state. On Mac OS X, we reflect this running state as AudioDeviceStart (or, for AUHAL, AudioOutputUnitStart) - we have some additional overhead (which I think is an acceptable minimum) of your code actually having to do something. What does your code do?
memset(audioBuffer, 0, sizeof(audioBuffer));
It's basically what all the keyboards, etc., do when they have nothing to sound; it's just that we make your code do it, because there's probably other things you'd like to do at other times (like make sound with the low latency).
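To make the idea concrete, here's a minimal sketch of that "always running, emitting silence" pattern. This is a hypothetical, simplified stand-in, not the real AUHAL render callback signature (the real one takes AudioUnitRenderActionFlags, an AudioTimeStamp, an AudioBufferList, etc.); the PlayerState type and render function are invented for illustration:

```c
#include <string.h>
#include <stddef.h>

/* Hypothetical state shared with the render callback. */
typedef struct {
    const float *pending;   /* sound waiting to be played, or NULL */
    size_t pendingFrames;
    size_t cursor;          /* frames of the pending sound already rendered */
} PlayerState;

/* Called for every hardware buffer, whether or not there is anything to play. */
void render(PlayerState *state, float *out, size_t frames) {
    if (state->pending == NULL || state->cursor >= state->pendingFrames) {
        /* Nothing to sound: write silence, exactly the memset above. */
        memset(out, 0, frames * sizeof(float));
        return;
    }
    size_t n = state->pendingFrames - state->cursor;
    if (n > frames) n = frames;
    memcpy(out, state->pending + state->cursor, n * sizeof(float));
    state->cursor += n;
    /* Zero-fill any tail past the end of the sound. */
    memset(out + n, 0, (frames - n) * sizeof(float));
}
```

Because the callback fires every buffer regardless, the very next buffer after `pending` is set carries real audio - that's where the low-latency response comes from.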
Now, could you do this with an audio queue?
You'd have to start the queue, then when you have a sound to play, you go get the current time of the queue. Then you'll have to schedule ahead, because, damn, "now" is already past, right, by the time you know when "now" is (or, more correctly, "was"). So, how far do you schedule ahead? Tricky... (to quote Douglas Adams). If you guess too close, we'll truncate some of your sound (your schedule was in the past, and we catch up by dropping sound). If you guess too far away, you introduce latency in your response. So, there's no good way to do this using Audio Queue. Really. Not unless you want 30 msec here (which is what you have currently).
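The guess-ahead tradeoff can be sketched in a few lines. The helper below is hypothetical (the real flow would use AudioQueueGetCurrentTime and a timestamp on the enqueued buffer); it just shows how a guard interval in milliseconds becomes a start time in samples, and why shrinking it risks scheduling in the past:

```c
/* Hypothetical helper: given the queue's current sample time, pick a start
 * time guard_ms in the future. Too small a guard and the schedule may
 * already be in the past (sound gets truncated); too large and the guard
 * itself becomes added latency. */
long long scheduled_start(long long now_sample, double sample_rate,
                          double guard_ms) {
    return now_sample + (long long)(guard_ms * sample_rate / 1000.0);
}
```

At 44.1 kHz, a 30 msec guard means the sound starts 1323 samples after the time you read - which is exactly the ~30 msec floor described above.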
The best way to ensure that you get the lowest latency possible is to run AUHAL, and when you know you have a sound to play, start it in the next output buffer - you can even attach this to a mixer and mix multiple sounds at once - all with the same low-latency response. You can use a lock-free queue to message your I/O thread, etc..., so the response time (as Markus pointed out) on Apple HW, can certainly be comfortably under 5msec from knowing "I have to play a sound" to have that sound heard.
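For the "lock-free queue to message your I/O thread" part, here is one common shape: a single-producer/single-consumer ring buffer using C11 atomics. This is a generic sketch (the element type, capacity, and names are assumptions, not anything CoreAudio provides); the key property is that neither push nor pop ever blocks, so it is safe to call pop from the render callback:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

#define QCAP 16  /* capacity; keep it a power of two */

/* Single-producer (app thread) / single-consumer (I/O thread) queue. */
typedef struct {
    _Atomic size_t head, tail;  /* monotonically increasing indices */
    int slots[QCAP];            /* message payload; int as a placeholder */
} SPSCQueue;

/* Producer side: returns false if the queue is full (never blocks). */
bool q_push(SPSCQueue *q, int msg) {
    size_t head = atomic_load_explicit(&q->head, memory_order_relaxed);
    size_t tail = atomic_load_explicit(&q->tail, memory_order_acquire);
    if (head - tail == QCAP) return false;          /* full */
    q->slots[head % QCAP] = msg;
    atomic_store_explicit(&q->head, head + 1, memory_order_release);
    return true;
}

/* Consumer side: safe from the render callback (never blocks). */
bool q_pop(SPSCQueue *q, int *msg) {
    size_t tail = atomic_load_explicit(&q->tail, memory_order_relaxed);
    size_t head = atomic_load_explicit(&q->head, memory_order_acquire);
    if (tail == head) return false;                 /* empty */
    *msg = q->slots[tail % QCAP];
    atomic_store_explicit(&q->tail, tail + 1, memory_order_release);
    return true;
}
```

The release/acquire pairing ensures the consumer sees the slot contents before it sees the advanced head, without any lock that could priority-invert the I/O thread.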
Does that make sense, or is there still some misunderstandings/questions about what is involved here?
Bill
On Jul 6, 2012, at 8:31 PM, Jorge Chamorro wrote:
> On 07/07/2012, at 01:32, William Stewart wrote:
>>
>> On Jul 6, 2012, at 4:44 AM, Jorge Chamorro wrote:
>>> On 05/07/2012, at 16:08, Hari Karam Singh wrote:
>>>>
>>>> As far the CoreAudio code goes, the trick is as you said, to preload at least part of the audio file, and then as also suggested, leave the audio graph running, feeding in silence until the sound is triggered at which point you start feeding the audio file. If you don’t wish to load the entire sound into memory, you need to feed the audio into the render callback via a ring buffer which you monitor from a separate thread and refill from disk when it depletes below a certain threshold.
>>>
>>> Hi,
>>>
>>> That's what I'm doing here <https://github.com/xk/node-sound> and the test <https://github.com/xk/node-sound/blob/master/tests/test00.js> shows that latency is still between 25 and 30 ms, so in my experience that won't cut it.
>>>
>> I don't understand the limitation you are describing. We consistently measure the ability of many systems to achieve analog-in to analog-out latency under 5 msec (and lower, depending on the quality of the driver implementation and the latency of the HW itself - DAC/ADC buffering, transport medium latency, etc.).
>>
>> By looking at the various paths of the I/O, you can determine the latency of a sample placed in a buffer to when it will appear in the analog domain:
>>
>> Output buffer size
>> + safety offset
>> + presentation delay
>>
>> This (and we've measured it) can be under 2.5 msec, so there is something seriously wrong in your characterisation of 25-30 ms if you believe that this is all audio system latency
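The latency budget Bill lays out above (output buffer size + safety offset + presentation delay) can be worked through with representative numbers. The figures below are assumptions for illustration; real values come from the device and driver:

```c
/* Sum the three components of output latency, in milliseconds.
 * buffer_frames and safety_frames are in samples; presentation_ms is the
 * DAC/transport delay, already in milliseconds. */
double io_latency_ms(double buffer_frames, double safety_frames,
                     double sample_rate, double presentation_ms) {
    return (buffer_frames + safety_frames) / sample_rate * 1000.0
           + presentation_ms;
}
```

With an assumed 64-frame buffer, 32-frame safety offset, 48 kHz sample rate, and 0.5 msec presentation delay, the total comes to 2.5 msec - the same ballpark as the measured figure quoted above.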
>
> That test calls AudioQueueStart(player->AQ, NULL); at 10, 15, 20, 25, 30 ... 100 ms intervals, where player->AQ are 3ms sounds. I've not plugged it into the oscilloscope, but when I run it I hear clearly that at any interval below 25-30 it sounds identical.
> --
> Jorge.
Coreaudio-api mailing list (email@hidden)