Re: Pinning IO Thread to a particular processor
- Subject: Re: Pinning IO Thread to a particular processor
- From: Jeff Moore <email@hidden>
- Date: Wed, 1 Feb 2006 11:57:02 -0800
On Feb 1, 2006, at 9:46 AM, Stefan Haller wrote:

> Jeff Moore <email@hidden> wrote:
>
>> On Jan 30, 2006, at 11:02 AM, Stefan Haller wrote:
>>> We are observing that on a Quad G5 we get glitches or dropouts when
>>> we lower the buffer size under high audio load (of course); but with
>>> all four cores enabled, we get these glitches far earlier (with a
>>> higher buffer size) than with only one core enabled.
>> At what buffer size and sample rate are you seeing the problems?
> We did another test today with a 44.1kHz sample rate and a buffer size
> of 128 frames. Audio load was around 70%, as displayed in Live's CPU
> meter. When turning on only one CPU, the sound was perfect; when
> turning on any number of CPUs > 1 (no matter which ones), there were
> not only glitches, but continuous crackle. It didn't matter which CPUs
> we turned on (e.g. 1+2 or 1+3, or all four), which leads me to believe
> it's not a cache issue.
Hmm. 128 at 44100 is ~2.9ms. That's not an unreasonable buffer size
for a lot of apps, especially Live.
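(For reference, the arithmetic is just frames divided by sample rate. A
minimal sketch in plain C, using the figures from the test above:)

    #include <stdio.h>

    /* IO cycle duration = frames per buffer / sample rate. */
    int main(void)
    {
        double sampleRate = 44100.0;     /* sample rate from the test above */
        double framesPerBuffer = 128.0;  /* Live's buffer size in the test  */
        double cycleMs = framesPerBuffer / sampleRate * 1000.0;
        printf("IO cycle duration: %.3f ms\n", cycleMs);  /* ~2.902 ms */
        return 0;
    }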
>> You'll have to tell us. You can use HALLab's IO cycle telemetry
>> window to track what the IO thread is doing. If you run it as root,
>> you can also take kernel traces under a variety of circumstances
>> based on the telemetry.
> I'm afraid I need help with interpreting the data.
That's what we're here for.
On the whole, I see that the single CPU IO thread is getting about 37
mics of scheduling latency whereas the quad IO thread is seeing 18-23
mics. That's ~0.63% of the IO cycle duration for the single and
~0.31-0.39% for the quad when the IO cycle is 256 frames at 44100. I
would imagine that this is probably not going to be a huge factor in
the problem.
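(To get comparable numbers from inside an app rather than from HALLab's
telemetry, one rough approach is to timestamp each IOProc wakeup and
compare successive wakeups against the nominal cycle length. A sketch
only; the IOProc name and globals are hypothetical, and the 256-frame
cycle at 44100 is taken from the discussion above:)

    #include <CoreAudio/CoreAudio.h>
    #include <mach/mach_time.h>

    static uint64_t gLastWake = 0;  /* host ticks at the last IOProc entry */
    static double gNominalCycleNs = 256.0 / 44100.0 * 1.0e9;  /* ~5.805 ms */

    static OSStatus MyIOProc(AudioDeviceID inDevice,
                             const AudioTimeStamp *inNow,
                             const AudioBufferList *inInputData,
                             const AudioTimeStamp *inInputTime,
                             AudioBufferList *outOutputData,
                             const AudioTimeStamp *inOutputTime,
                             void *inClientData)
    {
        uint64_t now = mach_absolute_time();
        if (gLastWake != 0) {
            mach_timebase_info_data_t tb;
            mach_timebase_info(&tb);
            double elapsedNs = (double)(now - gLastWake) * tb.numer / tb.denom;
            /* Positive jitter approximates time lost to scheduling;
               dividing by the nominal cycle gives a fraction comparable
               to the ~0.3-0.6% figures above. Stash it somewhere
               lock-free; never log from the IO thread. */
            double jitterFraction = (elapsedNs - gNominalCycleNs) / gNominalCycleNs;
            (void)jitterFraction;
        }
        gLastWake = now;
        /* ... render into outOutputData here ... */
        return kAudioHardwareNoError;
    }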
> With 1 CPU, we get occasional overloads (pretty infrequent); here's a
> screenshot of one:
>
> <http://home.snafu.de/stk/tmp/1_CPU.png>
This shows a fairly ordinary overload caused by the IOProc taking too
much time. It took ~5.797ms. When you add that to the driver's
portion of the cycle (0.040ms), the total is ~5.837ms. A 256 frame IO
cycle at 44100 is nominally ~5.804ms.
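(Spelled out, the overload condition is just the IOProc time plus the
driver's portion exceeding the nominal cycle duration; a sketch with the
figures from the screenshot:)

    #include <stdio.h>

    /* Overload check with the figures quoted above. */
    int main(void)
    {
        double ioprocMs = 5.797;                     /* IOProc execution time   */
        double driverMs = 0.040;                     /* driver's share of cycle */
        double budgetMs = 256.0 / 44100.0 * 1000.0;  /* nominal cycle, ~5.805ms */
        printf("used %.3f ms of %.3f ms -> %s\n",
               ioprocMs + driverMs, budgetMs,
               (ioprocMs + driverMs > budgetMs) ? "overload" : "ok");
        return 0;
    }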
> With 4 CPUs we get very frequent red entries in the telemetry window;
> here's one:
>
> <http://home.snafu.de/stk/tmp/4_CPUs.png>
This one shows the driver returning kIOReturnIsoTooOld from the
kernel trap. The point at which the HAL calls the kernel trap appears
to be ~5.638ms into this particular IO cycle (whose nominal length is
~5.837ms). That's pretty close to the edge, but without seeing what a
"normal" cycle looks like, it's hard to say if the driver is
warranted in returning the error.
> Now what does this tell me?
It tells me that your IOProc is right out there on the edge of using
all the available time in these cases. I'd be very curious to see how
these compare to the cycles that don't overload.
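(One way to gather that comparison on the app side is to time the IOProc
body every cycle and stash the result for later inspection. Again just a
sketch; RecordCycle() is a hypothetical lock-free store, since logging
from the IO thread would itself cause overloads:)

    #include <CoreAudio/CoreAudio.h>
    #include <mach/mach_time.h>

    /* Hypothetical lock-free recorder; read it from a non-IO thread. */
    extern void RecordCycle(double usedMs, double budgetMs);

    static OSStatus TimedIOProc(AudioDeviceID inDevice,
                                const AudioTimeStamp *inNow,
                                const AudioBufferList *inInputData,
                                const AudioTimeStamp *inInputTime,
                                AudioBufferList *outOutputData,
                                const AudioTimeStamp *inOutputTime,
                                void *inClientData)
    {
        uint64_t start = mach_absolute_time();

        /* ... the actual rendering into outOutputData goes here ... */

        uint64_t end = mach_absolute_time();
        mach_timebase_info_data_t tb;
        mach_timebase_info(&tb);
        double usedMs = (double)(end - start) * tb.numer / tb.denom / 1.0e6;
        double budgetMs = 256.0 / 44100.0 * 1000.0;  /* nominal cycle length */
        RecordCycle(usedMs, budgetMs);  /* compare overloaded vs. normal cycles */
        return kAudioHardwareNoError;
    }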
> Also, we turned on the "Latency trace on overload" checkbox, and the
> console said it did take a trace, but it didn't mention where it was
> written to; I didn't manage to find it.
The traces are written to /tmp.
--
Jeff Moore
Core Audio
Apple