Re: Channels and frames
- Subject: Re: Channels and frames
- From: "john smith" <email@hidden>
- Date: Fri, 28 Oct 2005 09:53:52 +0200
I'm sorry, but let's stop this conversation and just agree that we disagree. I appreciate the input, and what you're trying to do, but obviously we're coming from two different directions, and there's not really any use in you trying to convince me.
Don't get me wrong, it's not that I have already made up my mind and am closed to input; it's just that I do understand what you mean, and I do understand your points, I just don't agree with them.
So, since you're not agreeing to my points, and I'm not agreeing to yours, we seem to be kinda stuck.
Anyway, my question was answered, and with a little luck I should be able to
get the channel configurations I need.
Thanks a lot,
Michael Olsen
KeyToSound / PhonoXone
From: Brian Willoughby <email@hidden>
To: Michael Olsen <email@hidden>
CC: email@hidden
Subject: Re: Channels and frames
Date: Thu, 27 Oct 2005 16:32:31 -0700
Michael,
The question is whether your AudioUnit can calculate one channel independent of all others, ... or, if channels do depend upon each other (as in a dynamics processor), then they must all be treated equally as a group. As Marc said, if there are dependencies between channels, then you cannot take advantage of the ability of an AU to scale to any multichannel usage. But it is worth the effort to rethink existing code, such that parts of the code which are useful in multichannel settings can be taken advantage of in more ways.
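To make that concrete, here is a minimal sketch of a channel-independent kernel, assuming the AUEffectBase/AUKernelBase classes from Apple's CoreAudio SDK (the GainKernel name and the fixed gain are just illustrations, not anyone's shipping code). AUEffectBase instantiates one kernel per channel, which is exactly how an effect written this way scales to any channel count:

#include "AUEffectBase.h"

// Sketch only: a kernel that sees exactly one channel and knows nothing
// about the others.  AUEffectBase creates one of these per channel.
class GainKernel : public AUKernelBase {
public:
    GainKernel(AUEffectBase *inAudioUnit) : AUKernelBase(inAudioUnit) {}

    virtual void Process(const Float32 *inSourceP, Float32 *inDestP,
                         UInt32 inFramesToProcess, UInt32 inNumChannels,
                         bool &ioSilence)
    {
        for (UInt32 i = 0; i < inFramesToProcess; ++i)
            inDestP[i] = inSourceP[i] * 0.5f;  // hypothetical fixed gain
        ioSilence = false;  // we produced output
    }
};

The host then sees one effect that works equally well in mono, stereo, 5.1, and so on, with one kernel instance per channel.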
I'd like to comment on each of the effect types you listed, in hopes that this will make the topic clearer.
[ Reverb: Has stereo outputs.
In general, most reverbs have significant channel dependencies. As such, most reverbs are not a candidate for automatic multichannel scaling in AU as Marc was describing. Most reverbs recreate an ambient space, with some number of sound sources, each with a specified or assumed location, and a usually fixed number of outputs, representing "ears" or "microphones" that pick up the ambient reflections. Even though the dry channels just pass through, it is not wise to think of any relation between the input and output other than reflected sound. The algorithm is generally tied to the number of outputs. As you said, it would be very difficult to pull three outputs from a stereo reverb, because the ambient space has been defined by the stereo pickup channels.
A very simple and artificial reverb could be created which does not have interchannel dependencies, and that reverb could be implemented as an AU with any number of channels. Alternatively, advanced algorithms could automatically create a virtual ambient space with any number of outputs, which would also support AU multichannel. You probably aren't working with any reverb that falls into either of the above categories.
You are right, most existing reverb code will probably be limited to stereo, so you will need to write your AU so it advertises the stereo limitation, and is not available in multichannel.
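For what it's worth, here is a sketch of how an AU built on AUEffectBase might advertise that limitation, by overriding SupportedNumChannels (which backs kAudioUnitProperty_SupportedNumChannels); the StereoReverb class name is hypothetical:

// Sketch only: advertise stereo-in/stereo-out as the sole supported layout.
static const AUChannelInfo kStereoOnly[] = { { 2, 2 } };  // { in, out }

UInt32 StereoReverb::SupportedNumChannels(const AUChannelInfo **outInfo)
{
    if (outInfo) *outInfo = kStereoOnly;
    return sizeof(kStereoOnly) / sizeof(AUChannelInfo);  // one entry
}

The host can then see that no other channel configuration is available and will not offer the unit in, say, a 5.1 slot.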
[ Chorus/Flanger: Might have a "width" control,
[ which controls the stereo field.
[ I.e. stereo output
You're talking about a Stereo Chorus, or Stereo Flanger.
There is nothing about the basic chorus or flanging algorithms which requires interchannel dependencies. Most Stereo Chorus implementations create problematic stereo signals which cause phase cancellations when played over mono equipment (e.g. AM Radio). It is the Stereo part of the algorithm that has channel dependencies, not the Chorus or Flanging itself.
You could make the Stereo controls disappear when there are other than 2 output channels. Or, you could break the effect into two parts, such that the chorus/flanger is available in any multichannel configuration, but the Stereo-izer is only available in the mono->stereo or stereo->stereo configuration.
P.S. Sometimes "width" is just LFO Depth, and creates no channel dependencies.
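To illustrate that case, here is a minimal per-channel chorus sketch in plain C++ (all names hypothetical). Each channel owns its own delay line and LFO phase, and "width" is interpreted purely as LFO depth, so there are no interchannel dependencies and the same code runs for any channel count:

#include <cmath>
#include <vector>

// Sketch only: one instance per channel.  Assumes depthSamples <=
// baseDelaySamples and baseDelaySamples + depthSamples < delay.size().
struct ChorusChannel {
    std::vector<float> delay;   // circular delay buffer
    size_t writePos = 0;
    float lfoPhase = 0.0f;

    explicit ChorusChannel(size_t maxDelaySamples) : delay(maxDelaySamples, 0.0f) {}

    void process(const float *in, float *out, int frames, float sampleRate,
                 float rateHz, float depthSamples, float baseDelaySamples)
    {
        const float twoPi = 6.283185f;
        for (int i = 0; i < frames; ++i) {
            // "width" here is just LFO depth -- no stereo coupling at all
            float mod = baseDelaySamples + depthSamples * std::sin(lfoPhase);
            lfoPhase += twoPi * rateHz / sampleRate;
            if (lfoPhase > twoPi) lfoPhase -= twoPi;

            // fractional delay read with linear interpolation
            float readPos = (float)writePos - mod;
            if (readPos < 0.0f) readPos += (float)delay.size();
            size_t i0 = (size_t)readPos;
            size_t i1 = (i0 + 1) % delay.size();
            float frac = readPos - (float)i0;
            float delayed = delay[i0] + frac * (delay[i1] - delay[i0]);

            delay[writePos] = in[i];
            writePos = (writePos + 1) % delay.size();

            out[i] = 0.5f * (in[i] + delayed);  // equal dry/wet mix
        }
    }
};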
[ Phaser: Same as above
Yep, same as above. There is nothing about the Phaser algorithm which has channel dependencies. A Phaser is really just a special all-pass EQ with LFO. As you mentioned, EQ is a fine candidate for multichannel adaptation. If you do have stereo-specific code, it is probably phase-challenged (i.e. inferior) and best left as an option that can be defeated.
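To show how little of the basic algorithm is stereo-specific, here is a sketch in plain C++ (names hypothetical): a chain of first-order all-pass filters whose shared coefficient is swept by an LFO, mixed back with the dry signal. One instance per channel means no interchannel dependencies:

#include <cmath>

// Sketch only: four swept all-pass stages per channel.
struct PhaserChannel {
    static const int kStages = 4;
    float x1[kStages] = {0}, y1[kStages] = {0};  // per-stage history
    float lfoPhase = 0.0f;

    void process(const float *in, float *out, int frames,
                 float sampleRate, float rateHz)
    {
        const float twoPi = 6.283185f;
        for (int i = 0; i < frames; ++i) {
            // sweep the all-pass coefficient with the LFO
            float a = 0.7f * std::sin(lfoPhase);
            lfoPhase += twoPi * rateHz / sampleRate;
            if (lfoPhase > twoPi) lfoPhase -= twoPi;

            float s = in[i];
            for (int k = 0; k < kStages; ++k) {
                // first-order all-pass: y[n] = a*x[n] + x[n-1] - a*y[n-1]
                float y = a * s + x1[k] - a * y1[k];
                x1[k] = s;
                y1[k] = y;
                s = y;
            }
            out[i] = 0.5f * (in[i] + s);  // dry + phase-shifted mix
        }
    }
};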
[ Compressor: May use a single volume detector for stereo channels.
[ Or may be a stereo compressor (converts l/r into m/s).
[ Limiter: Same as compressor
The Compressor and Limiter are dynamics processors. If the channels are linked, then you might think that there are channel dependencies, but in an AU you have the ability to collect all of the channels together for an overall "level" calculation, and then you can process each channel independently using the "level" result from the global code. The "rider" AU that I released does this successfully in a way that allows for any number of channels. You basically need to separate the volume detection from the processing.
If the channels are not linked, then your dynamics effect is completely channel independent.
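Here is a sketch of that separation in plain C++ (names hypothetical, and deliberately simplified: the static gain curve has no attack/release smoothing, and this is not the code from my "rider" AU). The detector looks across all channels, but the resulting gain is applied to each channel independently, so it works for any channel count:

#include <algorithm>
#include <cmath>
#include <vector>

// Sketch only: threshold is linear amplitude (0..1); ratio is e.g. 4 for 4:1.
void processLinkedDynamics(const std::vector<const float*> &ins,
                           const std::vector<float*> &outs,
                           int frames, float threshold, float ratio)
{
    const size_t numChannels = ins.size();
    for (int i = 0; i < frames; ++i) {
        // volume detection: peak across ALL channels for this frame
        float level = 0.0f;
        for (size_t ch = 0; ch < numChannels; ++ch)
            level = std::max(level, std::fabs(ins[ch][i]));

        // gain computer: one shared gain from the global level
        float gain = 1.0f;
        if (level > threshold)
            gain = (threshold + (level - threshold) / ratio) / level;

        // processing: each channel independently, using the shared gain
        for (size_t ch = 0; ch < numChannels; ++ch)
            outs[ch][i] = ins[ch][i] * gain;
    }
}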
So, only stereo reverb algorithms and polarity-based pseudo-stereo effects cannot take advantage of the AudioUnit ability to scale to multiple channels.
I hope this helps.
Brian Willoughby
Sound Consulting