Re: Work-arounds for driver-level input latency
- Subject: Re: Work-arounds for driver-level input latency
- From: Zachary Schneirov <email@hidden>
- Date: Fri, 4 Sep 2009 14:49:02 -0500
Jeff,
I'm writing an audio conferencing application but am willing to
augment it with a driver/plugin if that could improve the issue. I
realize this is beyond what most applications are expected to care
about, but in the case of this app these devices are used with it
almost exclusively, so my scope is expanded by necessity. This is
commodity hardware and I don't have full control of it, but at a low
enough level controlling the input latency seems almost possible. So
the answer to your first question is probably: Both.
Regarding measurements: because I can't know when the hardware itself
is actually reading any given sample from the mic, timing was done
three different ways: a) wiring output to input and comparing timing
between played-out pulses, b) sending audio to another computer on the
network (with a known network latency), c) using software playthrough
on the same computer (with the latter being mostly perceptual).
Obviously this doesn't completely isolate input latency from output
latency, but exact measurements aren't that useful anyway as the
latency is always increasing over time. On some machines the total
input + output latency can reach half a second.
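For illustration, method (a) boils down to something like the following sketch (the names and the threshold are made up, and it assumes the played impulse actually lands in the buffer being scanned): find the first sample above a threshold in the captured loopback buffer and convert its offset to milliseconds.

/* Toy sketch for measurement method (a): given a buffer captured over the
 * output-to-input loopback, find the first sample above a threshold and
 * report its offset as milliseconds of delay. Names and the threshold are
 * illustrative only. */
#include <math.h>
#include <stddef.h>

static double EstimatePulseDelayMS(const float *captured, size_t frameCount,
                                   double sampleRate, float threshold)
{
    for (size_t i = 0; i < frameCount; ++i) {
        if (fabsf(captured[i]) > threshold) {
            return ((double)i / sampleRate) * 1000.0; /* offset in ms */
        }
    }
    return -1.0; /* pulse not found in this buffer */
}

In practice the pulse rarely lands in the very first captured buffer, so you would also add the host-time difference between when the pulse was written and when the scanned buffer was captured.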
Input latency is close to zero when the device is first connected (or
after any of its stream formats are set), and builds quickly from
there. It seems that the device either is not providing accurate
timings for its samples or has a sample rate that is too dynamic for
CoreAudio to track; regardless, even though I don't work for the
manufacturer, I want to get the lowest latency possible for my
application.
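One thing that might show whether the HAL's rate estimate is the part that is wandering is to compare the device's nominal sample rate with the rate the HAL thinks it is actually running at. A rough sketch (error handling omitted; it assumes kAudioDevicePropertyActualSampleRate is available on the OS versions in question):

#include <CoreAudio/CoreAudio.h>
#include <stdio.h>

/* Sketch: log how far the HAL's measured rate for a device has drifted
 * from its nominal rate. Call periodically from a non-real-time thread
 * while audio is running. Error handling is omitted. */
static void LogRateDrift(AudioDeviceID device)
{
    Float64 nominal = 0.0, actual = 0.0;
    UInt32 size = sizeof(nominal);

    AudioObjectPropertyAddress addr = {
        kAudioDevicePropertyNominalSampleRate,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };
    AudioObjectGetPropertyData(device, &addr, 0, NULL, &size, &nominal);

    addr.mSelector = kAudioDevicePropertyActualSampleRate;
    size = sizeof(actual);
    AudioObjectGetPropertyData(device, &addr, 0, NULL, &size, &actual);

    printf("nominal %.2f Hz, actual %.2f Hz, drift %.2f Hz\n",
           nominal, actual, actual - nominal);
}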
Thank you, I really appreciate it,
Zach
---------------------------------------
Zachary Schneirov
Northwestern University
On Sep 4, 2009, at 12:03 PM, Jeff Moore wrote:
So I'm confused. The first part of this message sounds like you are
writing a driver for a piece of hardware. Yet, in the second part of
the message you talk about adapting your application. Which are you
doing? A driver? An app? Both? The answer to your questions really
does depend on what you are doing.
Also, I don't see where you are describing what you are measuring
and how you are measuring it. Please be specific! We can't really
help you without knowing the actual details of what you are doing.
There are a lot of ways to do this and get misleading results.
Finally, I will also say that an application really has no control
over hardware latency. The best an app can do is to lower its IO
buffer size, which has a direct effect on latency at the cost of
having the IO thread run more often, and use
kAudioDevicePropertyIOCycleUsage, which trades time in the IOProc
for lower latency. But there is nothing an app can do or change
about what the driver is doing.
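For reference, the two knobs mentioned above look roughly like this in code (the 64-frame and 0.5 values are only examples; error handling is omitted):

#include <CoreAudio/CoreAudio.h>

/* Sketch of the two application-level latency knobs: a smaller IO buffer
 * and a reduced IO cycle usage. The values below are examples only. */
static void TightenIOSettings(AudioDeviceID device)
{
    AudioObjectPropertyAddress addr = {
        kAudioDevicePropertyBufferFrameSize,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };

    UInt32 frames = 64; /* smaller buffer: lower latency, IOProc runs more often */
    AudioObjectSetPropertyData(device, &addr, 0, NULL, sizeof(frames), &frames);

    addr.mSelector = kAudioDevicePropertyIOCycleUsage;
    Float32 usage = 0.5f; /* give the IOProc less of the cycle, trading headroom for latency */
    AudioObjectSetPropertyData(device, &addr, 0, NULL, sizeof(usage), &usage);
}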
On Sep 3, 2009, at 9:34 PM, Zachary Schneirov wrote:
I'm currently facing the difficult task of achieving low-latency
throughput on a class of USB chipset from C-Media (CM108/109/119)
whose sample timings CoreAudio apparently cannot consistently track.
Problem: Over time (about 5 minutes), frames grabbed from the input
stream become increasingly delayed, often by up to 250 ms. I'm
guessing the HAL's IO engine is underestimating the actual sample
rate of the device, leaving behind some number of frames during
each IO cycle. Conversely, audio is sometimes garbled, perhaps from
overestimating the sample rate and under-running the driver's
buffer (?), though this is less common.
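One crude way to watch this from an IOProc is to log the HAL's own view of the input timing: how old it thinks the captured data is, and what rate scalar it is applying to the device. This only shows the HAL's model (if the model is wrong, the real delay can be worse than these numbers suggest), and printf from the IO thread is only tolerable as a quick diagnostic. A sketch:

#include <CoreAudio/CoreAudio.h>
#include <CoreAudio/HostTime.h>
#include <stdio.h>

/* Sketch: log the apparent age of the captured input data and the rate
 * scalar the HAL is using for the device, once per IO cycle. */
static OSStatus DriftWatchIOProc(AudioDeviceID inDevice,
                                 const AudioTimeStamp *inNow,
                                 const AudioBufferList *inInputData,
                                 const AudioTimeStamp *inInputTime,
                                 AudioBufferList *outOutputData,
                                 const AudioTimeStamp *inOutputTime,
                                 void *inClientData)
{
    if ((inNow->mFlags & kAudioTimeStampHostTimeValid) &&
        (inInputTime->mFlags & kAudioTimeStampHostTimeValid)) {
        UInt64 ageNanos = AudioConvertHostTimeToNanos(inNow->mHostTime -
                                                      inInputTime->mHostTime);
        printf("input is %.2f ms old, rate scalar %.6f\n",
               ageNanos / 1.0e6, inInputTime->mRateScalar);
    }
    return kAudioHardwareNoError;
}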
I can observe the effect using a simple HAL IOProc input callback
or with any application that does software playthrough (e.g.,
CAPlayThrough, HALLab's Input window, etc.). On 10.5 and 10.6 I
can reset this latency only by either unplugging the device or
setting the stream format on any section. On 10.4 stopping the HAL
engine for the app seems necessary.
This chipset is common in USB headsets (especially those for
education) and has some desirable qualities (e.g., hardware-
playthrough control), so I'm motivated to adapt my application
(which involves very-low-latency audio conferencing) to work with
it as well as possible.
If knowledgeable CoreAudio people could tell me which of these work-
arounds might set me on the right track or provide better
suggestions, I would be extremely obliged:
a) Avoid AudioDeviceAddIOProc() and instead call AudioDeviceRead()
from a real-time thread with jitter-buffer semantics, dropping a
few frames every now and then
b) Create a user-space HAL driver to manipulate whatever
underlying ring buffer is feeding the HAL IO engine
c) Create an AppleUSBAudio plug-in kext to do the same
d) Set the stream format every few minutes to trigger a reset
(extremely disruptive when playing or recording; see the sketch after this list)
e) Send commands directly to the chipset with IOKitLib to trigger
a reset, aiming for fewer side-effects
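For what it's worth, work-around (d) could look roughly like the sketch below: read back the first input stream's current physical format and set the same format again. Whether a no-op set is enough to trigger the reset, and how disruptive it is mid-stream, would still need to be verified; stream selection and error handling are deliberately minimal.

#include <CoreAudio/CoreAudio.h>

/* Sketch of work-around (d): re-apply the first input stream's current
 * physical format, which (per the report above) resets the accumulated
 * input latency. */
static void ReapplyInputFormat(AudioDeviceID device)
{
    AudioObjectPropertyAddress addr = {
        kAudioDevicePropertyStreams,
        kAudioDevicePropertyScopeInput,
        kAudioObjectPropertyElementMaster
    };

    AudioStreamID streams[8];
    UInt32 size = sizeof(streams);
    if (AudioObjectGetPropertyData(device, &addr, 0, NULL, &size, streams) != noErr ||
        size < sizeof(AudioStreamID))
        return;

    AudioObjectPropertyAddress fmtAddr = {
        kAudioStreamPropertyPhysicalFormat,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };

    AudioStreamBasicDescription format;
    UInt32 fmtSize = sizeof(format);
    if (AudioObjectGetPropertyData(streams[0], &fmtAddr, 0, NULL, &fmtSize, &format) == noErr)
        AudioObjectSetPropertyData(streams[0], &fmtAddr, 0, NULL, sizeof(format), &format);
}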
--
Jeff Moore
Core Audio
Apple
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden