Re: multi-channel AUs
- Subject: Re: multi-channel AUs
- From: Kurt Bigler <email@hidden>
- Date: Sat, 29 Mar 2003 16:58:12 -0800
on 3/29/03 8:13 AM, Jeff DuMonthier <email@hidden> wrote:
> On Friday, March 28, 2003, at 10:50 PM, Bill Stewart wrote:
>
>> Really it's not a question of whether the data is interleaved or not,
>> but whether you process a channel at a time or across all channels.
>> You can certainly do that if you want - you just have to get each
>> channel from its separate buffer.
>>
>> We could probably do some work to make this easier for you, but
>> there's been little demand for it as most people seem to prefer to do
>> their processing a channel at a time, so I doubt we will.
>>
>> Bill
>
> I guess no one important makes spatializers, ambisonics or
> channel-mixing reverbs. This kind of design will certainly increase
> the demand for third-party SDKs and abstraction layers.
>
> Maybe I am getting the wrong idea from this, since I am just starting
> to delve into the examples and documentation, but doesn't expanding
> the set of allowed data formats (as compared to something like VST)
> and expecting all hosts and AUs to fend for themselves seem like a bad
> plan? A real problem with VST is that the SDK and scant documentation
> leave a lot of things unclear or unspecified. Practices evolve by
> consensus, and testing with lots of plugs/hosts is always required.
>
> One of the major functions of something like the AU SDK, IMHO, should
> be to provide a clean interface for negotiating formats between hosts
> and units, along with optimized code to automatically do any necessary
> conversion (e.g. interleaving/deinterleaving). Yes, that is extra
> overhead, and efficiency would be best served if everyone simply used
> the same format, but it would allow almost all hosts and units to work
> together without requiring every developer to include code to handle
> exceptions to the norm. It would also provide optimized conversion for
> those who might want interleaved multi-channel data. That is a very
> efficient format for vector processing using AltiVec and whatever it
> will be called on the 970.
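[For readers new to the thread: channel-at-a-time processing over separate (deinterleaved) buffers, as Bill describes, can be sketched roughly as below. This is a minimal illustration with made-up function names, not the actual AudioUnit render API.]

```c
#include <stddef.h>

/* Hypothetical sketch: each channel lives in its own buffer, and a
   per-channel kernel is simply applied to each buffer in turn.
   Names are illustrative, not part of the real AudioUnit API. */
static void process_channel(float *data, size_t nframes, float gain)
{
    for (size_t i = 0; i < nframes; i++)
        data[i] *= gain;
}

static void process_all_channels(float **channels, size_t nchannels,
                                 size_t nframes, float gain)
{
    /* Cross-channel work is still possible here: it just means
       indexing several of these separate buffers at once. */
    for (size_t ch = 0; ch < nchannels; ch++)
        process_channel(channels[ch], nframes, gain);
}
```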
That's curious. I'm not much of an expert, but in my case I am just
celebrating the fact that I have converted my code base over from
interleaved to deinterleaved, and one of the reasons for this is how easy
it has made it for me to write AltiVec optimizations for cross-channel
DSP. Certainly this is true for multi-tap delay implementations.
I am aware that the deinterleaved approach is conceivably less optimal than
might be possible with an interleaved approach. However, the deinterleaved
approach vastly facilitates development, making it possible to crank out
large quantities of new multi-channel functionality quickly, because I can
easily write single-channel AltiVec primitives that I can instantiate many
times in a multi-channel situation, and be free to reconfigure the whole
graph at a high level without writing any more special-cased code.
Additional 2 or 4 channel special-case primitives can be written that
combine the functionality of several single-channel primitives, and these
can be instantiated in place of the single-channel primitives when they can
map into the desired DSP graph.
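[The pattern described above - a single-channel primitive with its own state, instantiated once per channel - might look something like this scalar sketch. The AltiVec version would vectorize the inner loop; the one-pole filter and all names here are assumptions chosen purely for illustration.]

```c
#include <stddef.h>

/* A single-channel primitive carrying its own state: a one-pole
   lowpass, chosen only as an example. */
typedef struct { float z1; } OnePole;

static void onepole_process(OnePole *s, float *buf, size_t nframes,
                            float coeff)
{
    for (size_t i = 0; i < nframes; i++) {
        s->z1 = coeff * s->z1 + (1.0f - coeff) * buf[i];
        buf[i] = s->z1;
    }
}

/* Multi-channel use is just N instances of the primitive, so the
   channel count can change without any new special-cased DSP code. */
static void onepole_process_multi(OnePole *states, float **channels,
                                  size_t nchannels, size_t nframes,
                                  float coeff)
{
    for (size_t ch = 0; ch < nchannels; ch++)
        onepole_process(&states[ch], channels[ch], nframes, coeff);
}
```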
If you want to write AltiVec code that deals with a fixed number of
channels you might do better with interleaved data, but you will be stuck
with that implementation, or have to start over, when the number of
channels involved changes.
In any case the cost of converting to/from interleaved will probably be
small compared to the other DSP you are talking about.
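[For scale, the conversion mentioned above is just a copy loop. A plain deinterleave might be sketched like this; the function name is made up for the example.]

```c
#include <stddef.h>

/* Copy interleaved frames (L R L R ...) into one buffer per channel.
   A single linear pass over the input, so it is cheap next to real
   DSP such as a reverb or spatializer. */
static void deinterleave(const float *in, float **out,
                         size_t nchannels, size_t nframes)
{
    for (size_t f = 0; f < nframes; f++)
        for (size_t c = 0; c < nchannels; c++)
            out[c][f] = in[f * nchannels + c];
}
```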
-Kurt Bigler
>
> -Jeff DuMonthier
_______________________________________________
coreaudio-api mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/coreaudio-api
Do not post admin requests to the list. They will be ignored.