Re: Calling TellListener & AUParameterSet in audio thread
- Subject: Re: Calling TellListener & AUParameterSet in audio thread
- From: Urs Heckmann <email@hidden>
- Date: Fri, 21 Mar 2003 10:55:49 +0100
On Friday, 21.03.03, at 06:22 (Europe/Berlin), Bill Stewart wrote:
It should be possible for a plugin to begin and end automation gestures for any reason, at any time, with or without any GUI interaction.
Why - what does the plugin know about the beginning or end of an automation gesture?
For example, think of an effect plugin that has two modes: "analyse" and "filter".
In "filter" mode it would do just what it is supposed to do: work on the incoming audio stream and pass that stream back to the output.
In "analyse" mode, certain performance-critical analysis algorithms (pitch detection, formant distribution, speech recognition) could be used to "pre-compute" parameter changes from the audio input. These changes would be recorded as automation data in the host. That would be elegant, because the musician then has further means to tweak the results of the analysis stage. Look at Melodyne for an example of the advantages such a split can give you.
Furthermore, pure analysis plugins could generate automation data which could be re-used by other plugins, etc. etc.
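To make the idea concrete, here is a minimal sketch of how such an "analyse" pass might tell the host about a plugin-initiated gesture, using the AUEventListener notification API from AudioUnitUtilities.h (passing NULL as the sending listener broadcasts to all registered listeners). The parameter ID and the calling context are assumptions for illustration; whether your host records these events as automation depends on the host.

```c
#include <AudioToolbox/AudioToolbox.h>
#include <AudioToolbox/AudioUnitUtilities.h>

/* Sketch: wrap an analysis-derived parameter change in a begin/end
 * gesture pair so the host can treat it like a user automation move.
 * Called from a non-realtime analysis context, not the render thread. */
static void PostAnalysedParameterChange(AudioUnit inUnit,
                                        AudioUnitParameterID inParamID,
                                        AudioUnitParameterValue inValue)
{
    AudioUnitEvent event;
    event.mArgument.mParameter.mAudioUnit   = inUnit;
    event.mArgument.mParameter.mParameterID = inParamID;
    event.mArgument.mParameter.mScope       = kAudioUnitScope_Global;
    event.mArgument.mParameter.mElement     = 0;

    /* 1. Announce that a gesture (a contiguous run of changes) begins. */
    event.mEventType = kAudioUnitEvent_BeginParameterChangeGesture;
    AUEventListenerNotify(NULL, NULL, &event);

    /* 2. Apply the pre-computed value and notify listeners of it. */
    AudioUnitSetParameter(inUnit, inParamID, kAudioUnitScope_Global,
                          0, inValue, 0);
    event.mEventType = kAudioUnitEvent_ParameterValueChange;
    AUEventListenerNotify(NULL, NULL, &event);

    /* 3. Close the gesture so the host can finalize the automation pass. */
    event.mEventType = kAudioUnitEvent_EndParameterChangeGesture;
    AUEventListenerNotify(NULL, NULL, &event);
}
```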
Cheers,
;) Urs
_______________________________________________
coreaudio-api mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/coreaudio-api
Do not post admin requests to the list. They will be ignored.