
Re: Coreaudio-api Digest, Vol 1, Issue 43


  • Subject: Re: Coreaudio-api Digest, Vol 1, Issue 43
  • From: Dennis Gunn <email@hidden>
  • Date: Sat, 16 Oct 2004 13:49:44 +0900


On Oct 16, 2004, at 5:23 AM, Ev wrote:
As an owner and operator of a professional digital studio, let me put my two cents in on this issue.

We built our studio years ago, when latency was a problem that had no workaround. We knew we were building a digital studio from the ground up, and we knew computers were going to get faster, but we also knew latency would *always* exist. Here's how we did it.

One of the fundamental rules of our studio is: *never* monitor live inputs from the computer.


Interesting points and you obviously have your strategies worked out.

As I mentioned at the beginning, I am a musician too. I play guitar in some bands backing some major and some not-so-major artists, and I have a decent studio in my house, but my "day job" is mainly singing TV commercials, which has me going to a lot of small preproduction studios as well as the big full-on facilities. In the *smaller* ones, the most common configuration I encounter has me monitoring through Pro Tools. The producers' engineers are usually working fast, and they are basically aiming for something like the final vocal processing even as I am putting my vocal on. They do that for lots of reasons, including the fact that the producer wants to hear whether he is getting what he wants while I am doing my track, and sometimes the client is there listening too, so things have to sound right from the word go.

Also, I would point out that Apple has put Guitar Amp Pro in Logic. As you know, electric guitarists 'play to the sound' they are getting. If they are expected to get their sound using Guitar Amp Pro, how can they do that without monitoring through the Mac?
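To put rough numbers on monitoring through the computer: the round-trip delay a player hears is at least one input buffer plus one output buffer at the interface's sample rate, before converter and driver overhead. A back-of-the-envelope sketch (the double-buffer assumption is illustrative, not a measurement of any particular interface):

```python
def io_latency_ms(buffer_frames, sample_rate_hz, buffers=2):
    """Rough round-trip latency: `buffers` buffer periods at the given size.

    Ignores converter and driver overhead, so real figures are higher.
    """
    return buffers * buffer_frames / sample_rate_hz * 1000.0

# A 128-frame buffer at 44.1 kHz gives roughly a 5.8 ms round trip.
print(round(io_latency_ms(128, 44100), 1))  # prints 5.8
```

Dropping the buffer to 64 frames at 96 kHz gets the same formula down near 1.3 ms, which is why small buffers and high sample rates are the usual levers when monitoring through the box.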

Which leads to rule number two: *never* use effects while recording.

This rule is broken a handful of times, but always while another processor (Line 6 Pod, some other computer) is making the effects. Never use the multitrack for effects or compression right off the bat - always use the console.

Everybody I know breaks that one. But then I think you mean never record *only* the wet signal? Of course, one of the main points of the way the channel strips work in Pro Tools and most DAWs is that a dry signal gets recorded while the effected signal is what gets monitored, so if you get the effect wrong you can change it later.
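That channel-strip behaviour can be sketched in a few lines: the untouched input goes to disk while the performer hears the processed copy. (The `process_buffer` helper and the gain "effect" below are hypothetical stand-ins, not any DAW's actual API.)

```python
# Sketch of the record-dry / monitor-wet pattern described above.
# `effect` stands in for whatever plug-in chain is on the channel strip.

def process_buffer(input_buffer, effect, recorded_track, monitor_out):
    # The untouched input is appended to the recorded track...
    recorded_track.extend(input_buffer)
    # ...while the monitor feed gets the processed version.
    monitor_out.extend(effect(sample) for sample in input_buffer)

recorded, monitored = [], []
gain_down = lambda sample: sample * 0.5   # hypothetical "effect": -6 dB gain
process_buffer([1.0, -1.0, 0.25], gain_down, recorded, monitored)
# recorded stays dry: [1.0, -1.0, 0.25]
# monitored is wet:   [0.5, -0.5, 0.125]
```

Because only the dry signal hits disk, the effect can be swapped or re-tuned at mix time without re-recording the take.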


Which leads to rule number three: distribute the load.

We've got 3 computers right now, but in a week or so (once our new machine comes in) we'll reconfigure and distribute even more. Use old computers for synthesizers and basic effects. Use others for 2-track machines. Use others for storage. Use the console (which is actually a souped-up QNX-based box) for audio throughput. Etc etc etc. You get the idea.

That is the concept behind Logic nodes. I was thrilled to death to see that in Logic 7. But it does little, if anything, to help with latency.



With those three rules in hand, we've NEVER had to deal with latency in our studio.

To comment on a particular note - if a singer is hearing "phase problems" in their headphones, either 1. reverse the phase of the signal going to their phones, 2. put an actual delay (slapback) or reverb on their voice for foldback at a reasonable level to give the voice some "space", or 3. the singer is listening too hard, tell them to lighten up. I really don't believe the problem is ever really *phase* as much as it is the singer's just not comfortable. Don't look too hard for the problem.

As a singer, phase does not *usually* bother me much at all; at least I have learned to live with it. But a lot of singers I know specifically say it is an irritant. An engineer does not want to be in the position of telling the singer he is just being too picky; do that too many times and you may find that when the singer is the one doing the picking, he will pick another engineer. In fact, a really good friend of mine is the owner-engineer of a studio where I have done a lot of work, and I love his mixes, but on those rare occasions when I am the one doing the picking, I pick a different engineer every time, simply because he refuses to separate the feeds to the cue box and his headphone mixes are nearly impossible for me to deal with.


_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list      (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden


References: 
 >Re: Coreaudio-api Digest, Vol 1, Issue 43 (From: Ev <email@hidden>)
