Re: AU "offline" processing
- Subject: Re: AU "offline" processing
- From: Bill Stewart <email@hidden>
- Date: Mon, 3 Feb 2003 18:13:37 -0800
Interesting discussion...
On the first issue (can an AU have different sizes of audio data on its
input and output), the general idea has been:
- Effect units: they leave the data sizes unchanged.
- Format Converter units: they don't :)
We already ship a generic AUConverter unit that wraps up the broad
functionality of the AudioConverter and presents it to the Audio Unit
world... This type of unit ('aufc') is *expected* to have different
buffer sizes between its inputs and outputs, for instance... The
AudioOutputUnits all present this functionality as well - so you can
pass an interleaved 16-bit stereo stream straight to one of these guys
and it handles any reinterleaving, sample rate conversion, bit-depth
transformation, etc., on the way to the hardware...
So, this side is already done and has been around for a while.
With the host apps we've concentrated on the hosting of 1-to-1 effect
units, as that seemed to us the most common case of existing DSP and so
forth. However, I'd talk to the host app companies about also providing
the capacity to host 'aufc' units for this kind of functionality - we
could define another audio unit type if we wanted to distinguish
run-of-the-mill format conversions from musical effect type conversions
(like time-stretching)... ('auxf')... :)
Offline - I'd rather another thread was started on this, restating the
desired needs, usage scenarios and so forth. We've never seen the
distinction between offline and real-time as a necessary one to make at
the AU level - typically that is a host-level decision. On a related
topic, we do provide rendering quality properties (and CPU usage
properties that the DLSSynth unit uses, for instance) that allow a host
to tell an AU that it doesn't care about CPU constraints, but does care
about quality... So, I don't think there is anything else that the AU
spec is missing here.
Random access to different data streams from the host is *another*
topic entirely - I don't see that it has anything to do with either
offline/RT rendering (or format conversion)... I'm not sure that I
understand the reversal semantic - it seems to me this is a host
problem, not an AU one (the host just has to feed that data a
different way - and it gets really complicated for soft-synths :)
But, this is still an interesting area to explore - could we get this
restated...
Bill
--
mailto:email@hidden
tel: +1 408 974 4056
__________________________________________________________________________
"Much human ingenuity has gone into finding the ultimate Before.
The current state of knowledge can be summarized thus:
In the beginning, there was nothing, which exploded" - Terry Pratchett
__________________________________________________________________________
_______________________________________________
coreaudio-api mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/coreaudio-api
Do not post admin requests to the list. They will be ignored.