Re: Choosing an AU base class
- Subject: Re: Choosing an AU base class
- From: William Stewart <email@hidden>
- Date: Fri, 20 Nov 2009 17:36:51 -0800
On Nov 20, 2009, at 3:28 PM, Brian Willoughby wrote:
On Nov 20, 2009, at 13:08, patrick machielse wrote:
I'm working on a pitch shifting Audio Unit. It is based on
AUEffectBase and has been tested/working in the field for 2 years.
Up until now the Audio Unit would not alter the tempo (and hence
the file length) of the processed audio files. I believe that this
made it possible to use AUEffectBase, because it honored the
AUEffectBase contract 'number of input frames == number of output
frames'.
Now a new pitch shift mode must be implemented; the tempo must vary
with the pitch ('vinyl style'). I suspect that this breaks the
boundaries of AUEffectBase, and that I should drop down to AUBase.
Can I pull in as much data as I like in AUBase::Render()?
Yes - but your AU can no longer be an aufx type.
Basically, aufx units (and some of the other types) are expected to
operate as real-time processors - in one render cycle they can pull
exactly as much input data as they are asked to produce on output,
no more and no less. AUEffectBase is designed to support this notion.
For an 'aufc' unit (audio converter type), this restriction is
relaxed, so that you can do varispeeding, sample rate conversion,
etc. You can use such a unit in a real-time app, but with some
restrictions - its source must be non-real-time. So, in AULab, you
can add 'aufc' units to a track that has a generator unit as its
source - for example the file player - and you can varispeed the
playback of a file. But we don't allow 'aufc' units in a track with
audio input, or with a synth (also a real-time unit), or on effect
buses.
When we develop 'aufc' units, we subclass from AUBase directly
(which we do for some other cases as well).
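For reference, the shape of such a subclass is roughly this - a
sketch from memory of the AUPublic SDK headers, untested here;
MyVarispeedAU is a made-up name, and the AUBase constructor
arguments may differ between SDK versions:

#include "AUBase.h"

class MyVarispeedAU : public AUBase {
public:
    MyVarispeedAU(ComponentInstance inInstance)
        : AUBase(inInstance, 1 /* input elements */, 1 /* output elements */) {}

    // Unlike AUEffectBase, which hands you exactly inNumberFrames of
    // input, an AUBase subclass overrides Render and decides for itself
    // how much input to pull (see the pull loop sketched further down).
    virtual OSStatus Render(AudioUnitRenderActionFlags &ioActionFlags,
                            const AudioTimeStamp &inTimeStamp,
                            UInt32 inNumberFrames);
};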
Brian's point about max frame size also needs to be observed. We use
MaxFramesPerSlice to limit how big any given pull is as well. For
instance, in AULab, max frames is set to the render size (let's say
512 samples). If you are varispeeding and playing back at 4X, then
you need 4 times as much input data to generate any given output
buffer. So the varispeed unit (the time/pitch unit works the same
way) will make 4 pulls of 512 samples each to get the 2048 input
samples it needs to generate the 512 output samples.
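To make that arithmetic concrete, the pull loop inside an
AUBase::Render override looks something like this (an untested
sketch; PullInput and GetBufferList come from AUInputElement in the
SDK sources, while StashInput is a made-up helper that would copy
each chunk into a private buffer):

OSStatus MyVarispeedAU::Render(AudioUnitRenderActionFlags &ioActionFlags,
                               const AudioTimeStamp &inTimeStamp,
                               UInt32 inNumberFrames)
{
    // At 4X playback we need 4 times as many input frames as output frames.
    double rate = 4.0;
    UInt32 framesNeeded = (UInt32)(inNumberFrames * rate);  // e.g. 2048 for 512 out
    UInt32 maxPull = GetMaxFramesPerSlice();                // e.g. 512 in AULab

    AUInputElement *input = GetInput(0);
    AudioTimeStamp ts = inTimeStamp;
    UInt32 framesPulled = 0;
    while (framesPulled < framesNeeded) {
        UInt32 chunk = framesNeeded - framesPulled;
        if (chunk > maxPull) chunk = maxPull;   // never pull more than max frames

        OSStatus err = input->PullInput(ioActionFlags, ts, 0 /* element */, chunk);
        if (err) return err;

        StashInput(input->GetBufferList(), chunk);  // made-up: copy chunk aside
        ts.mSampleTime += chunk;                    // advance the input timeline
        framesPulled += chunk;
    }

    // ... resample framesNeeded input frames into inNumberFrames of output ...
    return noErr;
}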
Bill
The answer is either "no" or "yes and no." You certainly cannot
pull more than a buffer of data when the host has connected live
audio interface inputs to your AU, because the additional data
simply isn't there yet. AudioUnits do operate on a pull model, but
there is a maximum frame size, and the output device is really in
control of how much data is pulled. In any event, your AU should
always be prepared to receive a different number of frames in any
given buffer than it might expect.
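In AUEffectBase terms that simply means never hard-coding a buffer
size in your kernel - always loop over the count the host hands you.
A minimal sketch of an AUKernelBase::Process override (the
pass-through body is just a placeholder):

virtual void Process(const Float32 *inSourceP, Float32 *inDestP,
                     UInt32 inFramesToProcess, UInt32 inNumChannels,
                     bool &ioSilence)
{
    // Use inFramesToProcess, never an assumed fixed size such as 512.
    for (UInt32 i = 0; i < inFramesToProcess; ++i)
        inDestP[i] = inSourceP[i];   // pass-through; real DSP goes here
}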
There are offline AudioUnits which can do what you want, so perhaps
you want to make two flavors of your new pitch shifter.
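If you go that route, only the component type in your description
changes; the type constants are in AUComponent.h. A sketch - the
subtype and manufacturer codes below are placeholders:

AudioComponentDescription desc;
desc.componentType         = kAudioUnitType_FormatConverter;  // 'aufc'
// or, for an offline flavor:
// desc.componentType      = kAudioUnitType_OfflineEffect;    // 'auol'
desc.componentSubType      = 'Ptch';    // placeholder four-char code
desc.componentManufacturer = 'Demo';    // placeholder
desc.componentFlags        = 0;
desc.componentFlagsMask    = 0;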
The reason I ask here is that the AU programming guide is very
vague about basic questions like this (in fact, it doesn't mention
AUEffectBase's contract). The only guidance it gives is:
"The AUEffectBase class is strictly for building n-to-n channel
effect units. If you are building an effect unit that does not
employ a direct mapping of input to output channels, you subclass
the AUBase superclass instead."
Which isn't really helpful, since my 'effect' is n-to-n channels.
Is there a document that discusses AUBase vs. AUEffectBase in more
detail?
But source code is self-documenting, isn't it? ;-)
Brian Willoughby
Sound Consulting
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden