Re: multi-channel AUs
- Subject: Re: multi-channel AUs
- From: Jeff DuMonthier <email@hidden>
- Date: Sat, 29 Mar 2003 11:13:25 -0500
On Friday, March 28, 2003, at 10:50 PM, Bill Stewart wrote:
No.
For a start, all an AU can do is accept or reject a format, and this
is really decided by the context in which it is run.
For the AU, the canonical (expected and most widely used) format is
NON-interleaved Float32 linear PCM. That is what all of the hosts are
going to tell you they will give you, so if you want to play, you have
to deal with that.
(BTW - the correct way to accept or reject formats is to use the
ValidFormat and StreamFormatWritable virtual methods from AUBase.cpp)
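To make the accept/reject idea concrete, here is an illustrative sketch of that kind of format check. The struct and function names below are hypothetical stand-ins, not the actual AUBase/CoreAudio declarations; a real override would inspect an AudioStreamBasicDescription inside ValidFormat.

```cpp
// Hypothetical stand-in types; NOT the real CoreAudio SDK declarations.
#include <cstdint>

struct StreamFormat {
    double   sampleRate;
    uint32_t bitsPerChannel;
    bool     isFloat;
    bool     isInterleaved;
    uint32_t channelsPerFrame;
};

// Accept only the AU canonical format: non-interleaved Float32 linear PCM.
bool IsCanonicalAUFormat(const StreamFormat& f)
{
    return f.isFloat
        && f.bitsPerChannel == 32
        && !f.isInterleaved
        && f.channelsPerFrame > 0;
}
```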
I'd say that you can't use the kernel approach for doing interleaved
processing, since by default it is going to give you one channel at a
time.
So, you should look at either:
(1) Do your processing using non-interleaved
(2) Take out the kernel stuff from AUEffectBase and basically walk
through your non-interleaved data, one sample at a time for each
channel of data, i.e. you can still process across all the channels
for each sample
Really it's not a question of whether the data is interleaved or not,
but whether you process one channel at a time or across all channels.
You can certainly do the latter if you want; you just have to get each
channel from its separate buffer.
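A minimal sketch of option (2): walking the non-interleaved buffers one sample frame at a time, touching every channel at each frame. The mix-to-mono body is just a hypothetical stand-in for a real cross-channel algorithm (spatializer, matrix mixer, etc.); the function name and signature are assumptions, not SDK API.

```cpp
#include <cstddef>
#include <vector>

// Process non-interleaved audio "across all channels for each sample":
// for every frame, read that frame's sample from every channel buffer,
// then write a result that depends on all of them.
void ProcessAcrossChannels(const std::vector<const float*>& in,
                           const std::vector<float*>& out,
                           std::size_t numFrames)
{
    const std::size_t numChannels = in.size();
    for (std::size_t frame = 0; frame < numFrames; ++frame) {
        // Gather this frame's sample from each separate channel buffer.
        float sum = 0.0f;
        for (std::size_t ch = 0; ch < numChannels; ++ch)
            sum += in[ch][frame];
        // Placeholder cross-channel operation: equal-weight mix to mono.
        const float mono = sum / static_cast<float>(numChannels);
        for (std::size_t ch = 0; ch < numChannels; ++ch)
            out[ch][frame] = mono;
    }
}
```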
We could probably do some work to make this easier for you, but there's
been little demand for it as most people seem to prefer to do their
processing a channel at a time, so I doubt we will.
Bill
I guess no one important makes spatializers, ambisonics or
channel-mixing reverbs. This kind of design will certainly increase the
demand for third-party SDKs and abstraction layers.
Maybe I am getting the wrong idea, since I am just starting to delve
into the examples and documentation, but doesn't expanding the set of
allowed data formats (compared to something like VST) and expecting all
hosts and AUs to fend for themselves seem like a bad plan? A real
problem with VST is that the SDK and scant documentation leave a lot of
things unclear or unspecified. Practices evolve by consensus, and
testing with lots of plug-ins and hosts is always required.
One of the major functions of something like the AU SDK, IMHO, should
be to provide a clean interface to negotiate formats between
hosts/units along with optimized code to automatically do any necessary
conversion (e.g. interleaving/deinterleaving). Yes, that is extra
overhead and efficiency would be best served if everyone simply used
the same format, but it would allow almost all hosts and units to work
together without requiring every developer to include code to handle
exceptions to the norm. It would also provide optimized conversion for
those who might want interleaved multi-channel data. That is a very
efficient format for vector processing using AltiVec and whatever it
will be called on the 970.
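A sketch of the kind of conversion helper being asked for here: packing N separate channel buffers into one interleaved buffer and back. The names and signatures are my own illustration, not an SDK API, and this is plain scalar code; an optimized version would use AltiVec/SIMD as suggested above.

```cpp
#include <cstddef>

// Pack per-channel buffers in[ch][frame] into one interleaved buffer
// out[frame * numChannels + ch].
void Interleave(const float* const* in, float* out,
                std::size_t numChannels, std::size_t numFrames)
{
    for (std::size_t f = 0; f < numFrames; ++f)
        for (std::size_t ch = 0; ch < numChannels; ++ch)
            out[f * numChannels + ch] = in[ch][f];
}

// Unpack an interleaved buffer back into separate per-channel buffers.
void Deinterleave(const float* in, float* const* out,
                  std::size_t numChannels, std::size_t numFrames)
{
    for (std::size_t f = 0; f < numFrames; ++f)
        for (std::size_t ch = 0; ch < numChannels; ++ch)
            out[ch][f] = in[f * numChannels + ch];
}
```

A round trip through Interleave then Deinterleave should reproduce the original channel buffers exactly.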
-Jeff DuMonthier
_______________________________________________
coreaudio-api mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/coreaudio-api
Do not post admin requests to the list. They will be ignored.