Re: Coreaudio-api Digest, Vol 1, Issue 43
- Subject: Re: Coreaudio-api Digest, Vol 1, Issue 43
- From: Ev <email@hidden>
- Date: Fri, 15 Oct 2004 15:23:23 -0500
On Oct 15, 2004, at 2:04 PM, email@hidden wrote:
On Oct 15, 2004, at 5:29 AM, Herbie Robinson wrote:
I have had situations where performers were annoyed with the latency
in Pro Tools TDM. This is especially true when monitoring vocals via
headphones, because you get phase cancelation in the air cavities in
your head. It's not even clear that less latency is always better
(unless you can make it zero latency). Sometimes a long delay
actually helps!
I hear what you are saying in a hypothetical kind of way, but in my
experience the lower the latency the better with vocalists, every time.
Increasing the latency is not an acceptable way to deal with the phase
cancellation, because by the time you completely get rid of the phase
problem you have an annoyingly perceptible delay.
The ONLY way to deal with monitoring in a way that will keep anybody
happy is to use an analog mixer for the monitoring.
I guess you mean "everybody," but that way is no good either. The
market is moving relentlessly toward plugin processing, and there are
things I use on live inputs that I don't think are even available as
analog devices.
The way to make DAW users happy is to get the latency right.
As an owner and operator of a professional digital studio, let me put
my two cents in on this issue.
We built our studio years ago, when latency was an unavoidable problem
with no workaround. We knew we were building a digital studio from
the ground up, and we knew computers were going to get faster, but we
also knew latency would *always* exist. Here's how we did it.
One of the fundamental rules of our studio is: *never* monitor live
inputs from the computer.
We use a digital console (the Sony DMXR-100) at the beginning of the
signal path. The musician listens to the output of the console
headphone outs or whatever, and I buss the result out via the ADAT
cards to the MOTU 2408. I know there's some fancy math going on with the latency
on playback vs. recording, but everything comes out lined up as you'd
expect on playback. The buffers on my "multitrack computer" are set to
1024 so I can handle plug-ins (on mixing) without stretching too hard.
Which leads to rule number two: *never* use effects while recording.
We break this rule only a handful of times, and always when another
processor (a Line 6 Pod, some other computer) is producing the effects.
Never use the multitrack for effects or compression right off the bat -
always use the console.
Which leads to rule number three: distribute the load.
We've got 3 computers right now, but in a week or so (once our new
machine comes in) we'll reconfigure and distribute even more. Use old
computers for synthesizers and basic effects. Use others for 2-track
machines. Use others for storage. Use the console (which is actually a
souped-up QNX-based box) for audio throughput. Etc etc etc. You get the
idea.
With those three rules in hand, we've NEVER had to deal with latency in
our studio.
To comment on a particular note: if a singer is hearing "phase
problems" in their headphones, either (1) reverse the phase of the
signal going to their phones, (2) put an actual delay (slapback) or
reverb on their voice for foldback at a reasonable level to give the
voice some "space", or (3) accept that the singer is listening too
hard and tell them to lighten up. I don't believe the problem is ever
really *phase* so much as it is the singer just not being comfortable.
Don't look too hard for the problem.
Ev
Co-owner/head engineer
Integral Studio
St Paul, MN
http://www.integral-studio.com/