Re: Latency terminology and computation


  • Subject: Re: Latency terminology and computation
  • From: William Stewart <email@hidden>
  • Date: Thu, 5 May 2005 14:49:03 -0700


On 05/05/2005, at 1:14 PM, david tay wrote:

In many USB audio devices, there is a control setting that governs a property that is labeled as latency, safety offset, buffer size, etc. There are also varying manufacturer claims of low device latency and a general notion (misconception) that the driver buffer size directly affects the device latency.

I understand that the total latency as measured from the DAW application usually differs significantly from the stated manufacturer claims and that the concept of latency differs somewhat for Mac and Windows.

It is certainly reported differently, that is for sure. To my mind, latency in this context is really the measurement of the time it takes for a signal to propagate from an input to an output.


How would one explain the difference between
a. safety offset

This is essentially the "no go" area of a device from a given time t. So, for output, the safety offset is how much ahead of now I have to be to write data that the driver can use. In other words, I can't get any closer to the driver's now time than its safety offset.


In Core Audio, this is explicitly abstracted from the I/O path - so the closest we let you get is now + safety offset for output (and now - safety offset for input).
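
As an illustration of that rule, here is a minimal sketch (mine, not part of the original mail) that reads the output safety offset of the default device through the Tiger-era HAL C API; kAudioDevicePropertySafetyOffset and AudioDeviceGetProperty are the real property and call, while the channel choice (0 = master) and the bare-bones error handling are just assumptions for the sake of the example:

#include <CoreAudio/CoreAudio.h>
#include <stdio.h>

int main(void)
{
    AudioDeviceID device = kAudioDeviceUnknown;
    UInt32 size = sizeof(device);

    /* default output device */
    if (AudioHardwareGetProperty(kAudioHardwarePropertyDefaultOutputDevice,
                                 &size, &device) != noErr ||
        device == kAudioDeviceUnknown)
        return 1;

    /* output safety offset, in sample frames (channel 0 = master) */
    UInt32 safetyOffset = 0;
    size = sizeof(safetyOffset);
    if (AudioDeviceGetProperty(device, 0, 0 /* isInput = false */,
                               kAudioDevicePropertySafetyOffset,
                               &size, &safetyOffset) != noErr)
        return 1;

    /* The earliest sample time a client may schedule output for is
       now + safetyOffset; on the input side the latest readable time
       is now - safetyOffset. */
    printf("output safety offset: %u frames\n", (unsigned)safetyOffset);
    return 0;
}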

In ASIO, the safety offset is implicit in the double buffering. However, the bad effect of this is that you have two limits as a client:
(1) You *have* to return your second buffer to the driver before it's really done with the first (i.e. now - safety offset for output) - in other words, the duration of a buffer must be *at least* as big as the safety offset.
(2) The difference between the size of the buffer and the safety offset is the amount of time you have to do your work. So this ultimately limits how small you can make your buffer. The advantage is that you don't incur the output latency as an extra factor (as it is absorbed by the current buffer).


Now, for an ASIO system, the critical issue is to make the safety offset small enough so that the I/O buffer sizes an application uses can be small. Thus, typically, the latency figures we see reported for ASIO drivers are *not* the overall latency of the signal, but rather how small they were able to make the ASIO buffers and still give a client a "reasonable" amount of time to do work.

For instance, I've seen claims that some particular hardware's ASIO drivers on Windows have 5 msec of latency. However, what this means is that the driver's minimum ASIO buffer size is 128 samples (the safety offset is probably 24 samples, for instance)... Then the quoted latency figure is 128 (input) + 128 (output), or 256 samples @ 44100, or 5.8 msec. Now, when I actually measure this, from jack to jack, the latency is around 10 msec (because they aren't talking about the overall signal latency, just the driver latency). It sounds great (5 msec of latency), but it's not the whole story.
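
To make that arithmetic explicit, here is a throwaway sketch (not from the mail) that plugs in the numbers above; the 128-sample buffer and 24-sample safety offset are the example figures, not properties of any particular driver:

#include <stdio.h>

/* convert a latency in sample frames to milliseconds */
static double frames_to_ms(double frames, double sampleRate)
{
    return 1000.0 * frames / sampleRate;
}

int main(void)
{
    const double fs           = 44100.0;
    const double asioBuffer   = 128.0;  /* minimum ASIO buffer in the example */
    const double safetyOffset = 24.0;   /* "probably 24 samples for instance" */

    /* ASIO double buffering: the buffer must be at least as big as the
       safety offset, and the compute time per cycle is the difference. */
    double computeTime = asioBuffer - safetyOffset;          /* 104 frames */

    /* The quoted figure is just one buffer of input plus one of output. */
    double quoted = asioBuffer + asioBuffer;                  /* 256 frames */

    printf("compute time per cycle: %.0f frames (%.2f ms)\n",
           computeTime, frames_to_ms(computeTime, fs));
    printf("quoted driver latency:  %.0f frames (%.2f ms)\n",
           quoted, frames_to_ms(quoted, fs));                 /* ~5.8 ms */

    /* The jack-to-jack measurement (~10 ms above) is larger because the
       safety offsets and converter latency are not in the quoted number. */
    return 0;
}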

OK, now on Mac OS X it's different. Because the safety offset is abstracted out, it doesn't have an effect on the buffer sizes of the application - they can be as big or as small as you need/want (within reason). So using Apple's FW Audio driver with this same piece of hardware, I can get about 10 msec of overall latency - but in this case the I/O size is 64 samples (yes, we still have some work to do here). The problem at this stage is that the driver itself is adding enough latency (its safety offset in this case) that changing the I/O size of the application doesn't really make a significant impact on the overall measurements.

However, basically the two systems have an identical measured latency for that particular piece of hardware.

For Mac OS X, where it gets interesting is with hardware that has low latency in its drivers. If I take an interface which has, say, 24 samples of safety offset and 32 samples of device latency (see below), that means that the device/driver is going to contribute:
32 + 24 + 24 + 32 or 112 samples


So, this 112 samples is the total latency added by the driver and the device.

Then I have to add 2x my I/O size in normal circumstances (see below) - so at a 64-sample I/O size this gets down to 240 samples (112 + 64 + 64), or 5.44 msec (all these figures are at 44100 Hz).

Here's the trick now. I can also run my I/Os at 32 or 24 samples if I'm on a machine that will let me do anything with those small sizes (for instance, we typically find that DP machines are more reliable at these small I/O sizes) - so now I can take my latency down to, say, 160 samples (112 + 24 + 24), or 3.62 msec...
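
A small sketch of that bookkeeping (mine, with the symmetric 32-sample device latency and 24-sample safety offset taken straight from the example above; real devices can report different values for input and output):

#include <stdio.h>

/* total signal latency = device/driver contribution on both sides
   plus one I/O buffer each for input and output */
static double total_latency_frames(double deviceLatency, double safetyOffset,
                                   double ioSize)
{
    double driverAndDevice = 2.0 * (deviceLatency + safetyOffset); /* 112 here */
    return driverAndDevice + 2.0 * ioSize;
}

int main(void)
{
    const double fs = 44100.0;
    double at64 = total_latency_frames(32.0, 24.0, 64.0);  /* 240 frames */
    double at24 = total_latency_frames(32.0, 24.0, 24.0);  /* 160 frames */

    printf("64-frame I/O: %.0f frames = %.2f ms\n", at64, 1000.0 * at64 / fs);
    printf("24-frame I/O: %.0f frames = %.2f ms\n", at24, 1000.0 * at24 / fs);
    return 0;
}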

We also have drivers that have lower safety offsets than these (the 32 samples for the converters (device latency) seems to be a pretty typical figure), so these overall figures can be lower with the right hardware.

One more point - in Tiger we added a new feature called I/O Cycle Usage to the HAL. In the above, we're assuming that there is always going to be a full I/O cycle's worth of time to do output - thus, if I have an output buffer size of 64 samples, I have to add 64 samples of latency (because we're always a full buffer ahead). I/O Cycle Usage adds the ability to trade off time for latency - so you can basically say this:

"I have a 64 sample I/O Buffer, but I want the output to go out 25% of my buffer size" - in other words, instead of the output latency being the full 64 samples, it would 25% of 64 samples (or 16 samples). This also means that I only have 16 samples worth of time now, not 64 samples (even though my buffer size is 64 samples).

So, if I ran my latency measurement as above, I'd now have 192 samples (112 + 64 + 16), or 4.35 msec.
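
If you want to set this from code rather than AU Lab, a hedged sketch follows; I believe the property behind this feature is the Float32 kAudioDevicePropertyIOCycleUsage added to the HAL in Tiger, but treat that constant (and the output-scope/master-channel choice) as an assumption and check AudioHardware.h:

#include <CoreAudio/CoreAudio.h>
#include <stdio.h>

int main(void)
{
    AudioDeviceID device = kAudioDeviceUnknown;
    UInt32 size = sizeof(device);

    if (AudioHardwareGetProperty(kAudioHardwarePropertyDefaultOutputDevice,
                                 &size, &device) != noErr)
        return 1;

    /* Ask for output to go out 25% of an I/O buffer ahead instead of a
       full buffer ahead (assumed property constant, see above). */
    Float32 usage = 0.25f;
    if (AudioDeviceSetProperty(device,
                               NULL,  /* apply as soon as possible    */
                               0,     /* master channel               */
                               0,     /* isInput = false: output side */
                               kAudioDevicePropertyIOCycleUsage,
                               sizeof(usage), &usage) != noErr)
        return 1;

    /* With a 64-frame buffer the output latency term drops to 16 frames,
       so the example total becomes 112 + 64 + 16 = 192 frames (~4.35 ms
       at 44.1 kHz), at the cost of only 16 frames of compute time per cycle. */
    printf("I/O cycle usage set to 0.25\n");
    return 0;
}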

You can see this feature in AU Lab - go to the preferences window and look at the device settings. You can also measure this using the AU Pulse Detector that is in the DiagnosticAUs part of our SDK - that AU is mono to mono and will send a pulse out and see how long it takes for it to get back. So with AU Lab, create a mono-output, mono-input document, wire a cable from output to input, put the Pulse Detector in the AU Lab document, and send out a pulse. (These results can also be verified with the more traditional Y-cable type of tests.)

b. device & driver latency

2 things, really:

How long does it take to get the data to the driver's converters - driver latency.
How long does it take to get the data out of the converters - device latency.


Core Audio is, I believe, the only system currently that reports this number directly to an application.
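
For the question below about computing the driver/device latency on Mac OS X, here is a hedged sketch of how an application might read those numbers from the HAL (output side only); the properties are real Tiger-era HAL constants, but mapping them onto the driver-latency/device-latency split above is my reading of the headers, and some devices also report extra per-stream latency (kAudioStreamPropertyLatency) that this ignores:

#include <CoreAudio/CoreAudio.h>
#include <stdio.h>

int main(void)
{
    AudioDeviceID dev = kAudioDeviceUnknown;
    UInt32 size = sizeof(dev);

    if (AudioHardwareGetProperty(kAudioHardwarePropertyDefaultOutputDevice,
                                 &size, &dev) != noErr)
        return 1;

    UInt32  latency = 0, safety = 0, bufferFrames = 0;
    Float64 sampleRate = 0.0;

    size = sizeof(latency);
    AudioDeviceGetProperty(dev, 0, 0, kAudioDevicePropertyLatency,
                           &size, &latency);
    size = sizeof(safety);
    AudioDeviceGetProperty(dev, 0, 0, kAudioDevicePropertySafetyOffset,
                           &size, &safety);
    size = sizeof(bufferFrames);
    AudioDeviceGetProperty(dev, 0, 0, kAudioDevicePropertyBufferFrameSize,
                           &size, &bufferFrames);
    size = sizeof(sampleRate);
    AudioDeviceGetProperty(dev, 0, 0, kAudioDevicePropertyNominalSampleRate,
                           &size, &sampleRate);

    /* Output-direction total: device/driver latency + safety offset + one
       I/O buffer. Repeat with isInput = 1 and sum both directions to
       approximate the jack-to-jack figure discussed above. */
    UInt32 outputSide = latency + safety + bufferFrames;
    printf("output: %u + %u + %u = %u frames (%.2f ms @ %.0f Hz)\n",
           (unsigned)latency, (unsigned)safety, (unsigned)bufferFrames,
           (unsigned)outputSide,
           sampleRate > 0.0 ? 1000.0 * outputSide / sampleRate : 0.0,
           sampleRate);
    return 0;
}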

Bill

c. total latency (from DAW down to the device)

in a manner that makes sense across platforms?

How would one compute the driver / device latency on MacOS X for a given sample rate?

Thanks,

David

--
mailto:email@hidden
tel: +1 408 974 4056
__________________________________________________________________________
"Much human ingenuity has gone into finding the ultimate Before.
The current state of knowledge can be summarized thus:
In the beginning, there was nothing, which exploded" - Terry Pratchett
__________________________________________________________________________




References:
  • Latency terminology and computation (From: david tay <email@hidden>)
