Re: RME & latency discussion
- Subject: Re: RME & latency discussion
- From: Dennis Gunn <email@hidden>
- Date: Fri, 15 Oct 2004 00:18:40 +0900
On Oct 14, 2004, at 9:21 PM, Martin Kirst wrote:
> There are two reasons why the monitoring latency on Mac OS X is about 128 samples higher. Logic uses two buffers (2 * 64 = 128 samples) on Mac OS 9 and three buffers (3 * 64 = 192 samples) on Mac OS X.
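Martin's buffer arithmetic can be checked directly. A minimal sketch (the 44.1 kHz sample rate used for the millisecond conversion is my assumption; it is not stated in the thread):

```python
BUFFER = 64          # samples per Logic I/O buffer, per Martin's figures
SAMPLE_RATE = 44100  # Hz; assumed only for the ms conversion

os9_buffering = 2 * BUFFER   # double buffering on OS 9: 128 samples
osx_buffering = 3 * BUFFER   # triple buffering on OS X: 192 samples
extra = osx_buffering - os9_buffering

print(f"extra buffering on OS X: {extra} samples "
      f"= {1000 * extra / SAMPLE_RATE:.1f} ms")
# extra buffering on OS X: 64 samples = 1.5 ms
```

The remaining 64 samples of the roughly 128-sample difference come from the safety offset Martin describes later in the thread.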
Aha, thank you. Now I understand why increasing the Logic buffer from 32 to 64 samples produced a different result than I expected when I was testing latencies. I had thought there were only two buffers in Logic. As far as I can recall, no one ever mentioned that Emagic added a third buffer when we moved to OS X.
> It is possible to use two buffers on Mac OS X, but most software uses three. Why? Because it makes sense. Audio processing in Logic on Mac OS 9 was done at hardware interrupt level. This is unacceptable for a real operating system like Windows XP
?!?!

I can see that you are from RME, so you should know, and maybe by now you have seen my other post, but a friend of mine just ran Logic on an Athlon machine running Windows XP and got the same low latency as OS 9 does. He measured 190 samples total monitoring latency.

I don't know what is going on there, but for DAW users working in XP it is a good thing, and I envy them.

I often hear OS X fans on the tech side talking about what a "real" OS should or should not do, but as a DAW user I definitely prefer the "unreal" way it was done in OS 9. (And, judging from those measurements, the way it evidently continues to be done in Windows XP?)

What does not seem to be getting heard here is that the higher latency is simply unacceptable for DAW users. The extra latency added by the offset buffer forces the user to set the DAW's I/O buffer lower to get an acceptable monitoring latency, with the result that the plugin count goes down and the plugins become unstable and begin to click and pop. In other words, you may see the offset buffer as increasing stability, but the user's reaction to it, lowering the I/O buffer in the application, makes things way more unstable.

I know all the arguments about how a millisecond of latency is just like moving a couple more feet away from the speaker; in fact I have been hearing them for years. But the fact is that ears and brains are extremely sensitive instruments, latency is cumulative, and for most users there is a sharp threshold where it goes suddenly from imperceptible to highly annoying. That threshold varies from user to user, but the most demanding clients I have ever recorded found the 224-sample monitoring latency I get with the Logic I/O buffer set to 32 samples in OS X acceptable.

So what is the problem, and why am I still complaining? Logic cannot record stably at that setting on a dual 2 GHz G5 with 3 GB of RAM installed, using an RME Multiface/CardBus at my small studio. It can't do it even with no plugins active at all.

By the way, the person who ran the test for me is a producer, small-studio owner, and dedicated Logic user who had been planning to buy a new G5 so that he could use L7 in his studio. When he heard about the increased latency, it stopped him dead in his tracks. To give you some measure of how important this issue is to users: he chooses to go on using Logic 5.5, with its drastically reduced feature set, rather than move to L7 and lose the ability to do software monitoring at a reasonable latency.

You may think that extra little bit of latency is no big deal. I assure you that it is. For that matter, 128 samples is about 3 ms, and I would hardly call that "little".
> or Mac OS X. When audio processing is done in a user-mode thread, three buffers of 64 samples usually work more stably than two buffers of 128 samples, even though the overall latency is lower.
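The tradeoff Martin is pointing at can be made concrete. In this sketch (my framing, not from the thread), "slack" is how many samples late a refill can run before the output underruns; three small buffers give the same slack as two large ones, at lower total latency:

```python
def ring(n_buffers, buffer_size):
    latency = n_buffers * buffer_size      # samples queued end to end
    slack = (n_buffers - 1) * buffer_size  # headroom before an underrun
    return latency, slack

print(ring(2, 128))  # OS 9-style double buffering -> (256, 128)
print(ring(3, 64))   # OS X-style triple buffering -> (192, 128)
```

Smaller buffers also mean shorter CPU bursts per wakeup, which is friendlier to a scheduled user-mode thread than two large, infrequent refills.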
Interesting.
> The second reason for the higher latency on Mac OS X is the safety offset. The Multiface driver sets the safety offset registry value to 32. This results in an additional latency of 64 samples (32 for input + 32 for output). Both the RME PCI cards and the RME Fireface also work with lower values, e.g. 24 like MOTU. So you may debate what the perfect value for the safety offset is, but you cannot set the safety offset to zero.
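Taken together with the triple buffering, Martin's numbers suggest a simple additive model of monitoring latency. This is a hedged sketch: the decomposition below is my assumption, and it ignores converter and DMA delays, which is presumably part of why Dennis's measured 224 samples at a 32-sample I/O buffer exceeds the figure it produces:

```python
def monitoring_latency(io_buffer, n_buffers=3, safety_offset=32):
    """Additive model (assumed): the app's ring of CoreAudio buffers
    plus the driver's safety offset on both the input and output side."""
    return n_buffers * io_buffer + 2 * safety_offset

print(monitoring_latency(32))  # 3*32 + 64 = 160 samples
print(monitoring_latency(64))  # 3*64 + 64 = 256 samples
```

With a 24-sample safety offset, as Martin says MOTU uses, the 32-sample case would drop to 144 samples under the same assumed model.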
Again, apparently that is exactly what they do on Windows XP systems right now.
> On the one hand this causes a slightly higher latency; on the other hand, the timer-driven Core Audio model has many advantages over the interrupt-driven model.
Not for DAW users. Give me the interrupts any day.
_______________________________________________
Coreaudio-api mailing list (email@hidden)