Offline processing
- Subject: Offline processing
- From: Steve Hoek <email@hidden>
- Date: Tue, 04 Feb 2003 17:13:38 +1300
> Offline - I'd rather another thread was started on this, restating the
> desired needs, usage scenarios and so forth.
> We've never seen the distinction between off-line and real time as a
> necessary distinction to make at the AU level - typically that is a host
> level decision.
Okay, let's kick off the offline discussion.
Forgive my ignorance, but does the AudioUnit standard provide for the
analysis/pre-pass step that the usual "off-line" plug-ins - such as
normalize - require?
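To make it concrete: what I imagine is the host driving the same render
mechanism twice - once as an analysis pass, once for real. A rough sketch
follows; only AudioUnitRender/AudioUnitSetProperty are real API, and the
"pass" property is entirely hypothetical:

    #include <AudioUnit/AudioUnit.h>

    /* Hypothetical property telling the AU which pass this is; the AU spec
       has nothing like it today. The ID is made up. */
    enum { kMyProperty_OfflinePass = 64000 };
    enum { kMyPass_Analyze = 0, kMyPass_Render = 1 };

    static OSStatus RunPass(AudioUnit au, UInt32 pass,
                            SInt64 selStart, SInt64 selLength)
    {
        enum { kSlice = 512 };
        Float32 data[kSlice];

        /* Tell the AU whether this is the analysis pre-pass or the render. */
        AudioUnitSetProperty(au, kMyProperty_OfflinePass,
                             kAudioUnitScope_Global, 0, &pass, sizeof(pass));

        /* Assumes selLength is a multiple of the slice, for brevity. */
        for (SInt64 pos = selStart; pos < selStart + selLength; pos += kSlice) {
            AudioTimeStamp ts = { 0 };
            ts.mSampleTime = (Float64)pos;
            ts.mFlags = kAudioTimeStampSampleTimeValid;

            AudioUnitRenderActionFlags flags = 0;
            AudioBufferList abl;                 /* one mono buffer for brevity */
            abl.mNumberBuffers = 1;
            abl.mBuffers[0].mNumberChannels = 1;
            abl.mBuffers[0].mDataByteSize = kSlice * sizeof(Float32);
            abl.mBuffers[0].mData = data;

            OSStatus err = AudioUnitRender(au, &flags, &ts, 0, kSlice, &abl);
            if (err) return err;
            /* Analysis pass: the AU inspects its input, updates its overview.
               Render pass: the host writes abl out to the destination. */
        }
        return noErr;
    }

The host would feed the selection in via kAudioUnitProperty_SetRenderCallback
as usual; running the analysis pass first is what would let a plug-in build a
waveform overview before the user hits process.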
Pitch 'n Time, for example, uses this feature of the AudioSuite standard to
capture and present a waveform overview of the audio being transformed. See
http://www.serato.com/products/pnt/screenshot.jpg for an idea of what I
mean.
Another small but very useful feature of AudioSuite is the ability to
suggest labels for host GUI buttons. Notice in the screenshot that the
"Analyze" button is labeled "Update Waveform".
Our "Offline" plug-in Pitch 'n Time also makes extensive use of the host
app's "current selection", ie what range of samples is selected. This is
reflected in the GUI and allows for several time saving features for our
users. A simple example is the user's ability to type in the required target
length instead of the expansion ratio. I understand you don't want to
restrict the AU standard to conventional hosts, but a standard way to access
this information when it /is/ available is extremely important.
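A standard shape for it might be nothing more than a small struct-valued
property the host publishes. Again, these names are invented - the point is
only what a plug-in could do with the information:

    #include <CoreAudio/CoreAudioTypes.h>

    /* Invented: a read-only property through which the host publishes the
       current selection to the AU. */
    typedef struct {
        Float64 sampleRate;     /* rate of the selected audio */
        SInt64  startSample;    /* first selected sample      */
        SInt64  lengthSamples;  /* number of samples selected */
    } MyHostSelection;

    enum { kMyProperty_HostSelection = 64002 };

    /* With this, "type in the target length" is trivial for the plug-in:
       ratio = (Float64)targetLengthSamples / (Float64)sel.lengthSamples; */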
Another desired consequence of time stretching some audio is that the host's
corresponding "automation" is remapped in time. I fear this functionality
may be a pipe dream.
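The arithmetic itself is trivial for a constant stretch ratio - events inside
the selection scale, events after it shift - it's getting hosts to apply it
that's hard. A sketch, with made-up host-side types:

    #include <CoreAudio/CoreAudioTypes.h>

    /* Made-up host-side representation; the point is only the time mapping. */
    typedef struct { Float64 time; Float32 value; } AutomationEvent;

    static void RemapAutomation(AutomationEvent *ev, UInt32 count,
                                Float64 selStart, Float64 selLength,
                                Float64 ratio)
    {
        for (UInt32 i = 0; i < count; i++) {
            if (ev[i].time <= selStart)
                continue;                          /* before: untouched */
            if (ev[i].time < selStart + selLength) /* inside: scale     */
                ev[i].time = selStart + (ev[i].time - selStart) * ratio;
            else                                   /* after: shift      */
                ev[i].time += selLength * (ratio - 1.0);
        }
    }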
I may post other ideas as they come to me.
--
Steve Hoek Ph. +64-9-480-2396
Director of R&D Fax. +64-9-480-2397
Serato Audio Research Ltd
http://www.serato.com/
> From: Bill Stewart <email@hidden>
> Date: Mon, 3 Feb 2003 18:13:37 -0800
> To: email@hidden
> Subject: Re: AU "offline" processing
>
> Interesting discussion...
>
> On the first issue (can an AU have different sizes of audio data on its
> input and output) the general idea has been:
>
> AudioUnit - Effects
> They leave the data sizes unchanged
>
> Format Converters
> They don't :)
>
> We already ship a generic AUConverter unit that wraps up the broad
> functionality of the AudioConverter and presents it to the Audio Unit
> world... This type of unit ('aufc') is *expected* to have different
> buffer sizes, for instance between its inputs and outputs... Also, the
> AudioOutputUnits all present this functionality as well - so you can
> pass an interleaved stereo 16-bit stream straight to one of these guys
> and it handles any reinterleaving, sample rate conversion, bit-depth
> transformations, to the hardware, etc...
>
> So, this side is already done and has been around for a while.
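(For anyone following along: getting at the generic converter looks roughly
like this - find the 'aufc'/'conv' component, open it, and hang a different
stream format on each side. Format fill-in and error checking elided.)

    #include <AudioUnit/AudioUnit.h>

    static AudioUnit OpenAUConverter(void)
    {
        ComponentDescription desc = { 0 };
        desc.componentType         = kAudioUnitType_FormatConverter; /* 'aufc' */
        desc.componentSubType      = kAudioUnitSubType_AUConverter;  /* 'conv' */
        desc.componentManufacturer = kAudioUnitManufacturer_Apple;   /* 'appl' */

        Component comp = FindNextComponent(NULL, &desc);
        AudioUnit converter = NULL;
        OpenAComponent(comp, &converter);

        AudioStreamBasicDescription inFmt, outFmt;
        /* ... e.g. 16-bit interleaved stereo in, Float32 de-interleaved out ... */
        AudioUnitSetProperty(converter, kAudioUnitProperty_StreamFormat,
                             kAudioUnitScope_Input,  0, &inFmt,  sizeof(inFmt));
        AudioUnitSetProperty(converter, kAudioUnitProperty_StreamFormat,
                             kAudioUnitScope_Output, 0, &outFmt, sizeof(outFmt));
        AudioUnitInitialize(converter);
        return converter;
    }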
> With the host apps we've concentrated on the hosting of the 1-1
> effects-based units, as that seemed to us the most common case of existing
> DSP and so forth. However, I'd talk to the host app companies about also
> providing the capacity to host 'aufc' units for this kind of
> functionality - we could define another audio unit type if we wanted to
> distinguish between more run-of-the-mill format conversions and musical
> effect type conversions (like time-stretching)... ('auxf')... :)
> Offline - I'd rather another thread was started on this, restating the
> desired needs, usage scenarios and so forth. We've never seen the
> distinction between off-line and real time as a necessary distinction
> to make at the AU level - typically that is a host level decision. On a
> related topic, we do provide rendering quality properties (and CPU
> usage properties that the DLSSynth unit uses, for instance) that allow
> a host to tell an AU that it doesn't care about CPU constraints, but
> does about quality... So, I don't think there is anything else that the
> AU spec is missing here.
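(For reference, the quality property is straightforward to use from the host
side when bouncing offline:)

    #include <AudioUnit/AudioUnit.h>

    static void PrepareForOfflineBounce(AudioUnit au)
    {
        /* Ask for best quality regardless of CPU cost - appropriate when
           rendering offline rather than in real time. */
        UInt32 quality = kRenderQuality_Max;
        AudioUnitSetProperty(au, kAudioUnitProperty_RenderQuality,
                             kAudioUnitScope_Global, 0,
                             &quality, sizeof(quality));
    }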
> Random access to different data streams from the host is *another*
> topic entirely - I don't see that it has anything to do with either
> offline/RT rendering (or format conversion)... I'm not sure that I
> understand the reversal semantic - it seems to me this is a host
> problem, not an AU one (and then the host just has to feed that data a
> different way - and it gets really complicated for soft-synths :)
>
> But, this is still an interesting area to explore - could we get this
> restated...
>
> Bill