
Re: Convolution Audio Unit how to?


  • Subject: Re: Convolution Audio Unit how to?
  • From: Paul Davis <email@hidden>
  • Date: Tue, 22 Nov 2011 20:11:23 -0500

On 11/22/11, Mark Heath <email@hidden> wrote:

> The documentation I read regarding tail was talking about reverb
> filters, where N samples in produce N + decay samples out. My
> filter is still N samples in to N samples out.

no. all AU plugins run as N samples in and N samples out EXCEPT for
format converters; reverbs, and whatever you're doing, still run N in
and N out. what is different about reverbs and your plugin is latency
and tail: the first X samples of output will be silent, and once
non-silent input has arrived and then stopped, you'll keep generating
a "tail" of up to X samples that will (likely) be non-silent.
basically: no difference, because I think you're confused about what
reverb plugins do.
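
(to make the latency/tail bookkeeping concrete, here is a minimal
sketch assuming the classic C++ AU SDK's AUEffectBase; the
ConvolutionAU name and the block/IR lengths are hypothetical. an AU
reports both values, in seconds, by overriding GetLatency() and
GetTailTime():)

    #include "AUEffectBase.h"

    class ConvolutionAU : public AUEffectBase {
    public:
        ConvolutionAU(AudioUnit component) : AUEffectBase(component) {}

        // latency: seconds of delay before the first non-silent output
        // (e.g. one FFT block of an overlap-add convolver).
        virtual Float64 GetLatency() { return kBlockSize / GetSampleRate(); }

        // tail: seconds the output keeps ringing after the input goes
        // silent -- the length of the impulse response.
        virtual Float64 GetTailTime() { return kIRLength / GetSampleRate(); }

    private:
        enum { kBlockSize = 512, kIRLength = 48000 }; // hypothetical values
    };

the host reads these and compensates; the plugin itself still just
processes N in, N out on every call.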

> So clarifying, there is one instance of my AU class per channel, any
> buffering I am doing in this instance will not clash with another
> channel?

I think you're a little confused about "channels" and "plugins".

AU plugins declare their I/O configuration options. they can be static
(e.g. "I do 1 in and 1 out"), variable-with-limits (e.g. "I can do up
to 8 in and up to 4 out"), or completely variable (e.g. "I can support
any number of inputs and 8 outputs", or "I support any I/O
configuration").

the number of channels of data your plugin receives is determined by
the I/O configuration the host selects from the options the plugin
declares. if it declares "I do 1 in and 1 out", then it will only ever
process a single channel of audio data, and that channel will be the
same "stream" unless the host allows the user to do something to
"reconnect" the signal flow reaching the plugin.
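
(this is also why per-channel buffering is safe in the SDK's kernel
model: for an n-in/n-out effect, AUEffectBase creates one kernel
object per channel, and each kernel's state belongs to that channel
alone. a minimal sketch, with hypothetical class name and IR length:)

    #include <vector>

    class ConvolutionKernel : public AUKernelBase {
    public:
        ConvolutionKernel(AUEffectBase* inAudioUnit)
            : AUKernelBase(inAudioUnit), mHistory(kIRLength, 0.0f) {}

        // one call processes one channel's stream: N frames in, N out.
        virtual void Process(const Float32* inSourceP, Float32* inDestP,
                             UInt32 inFramesToProcess, UInt32 inNumChannels,
                             bool& ioSilence)
        {
            for (UInt32 i = 0; i < inFramesToProcess; ++i)
                inDestP[i] = inSourceP[i];   // convolution would go here
            ioSilence = false;               // a tail can outlive silent input
        }

        virtual void Reset() { mHistory.assign(mHistory.size(), 0.0f); }

    private:
        enum { kIRLength = 48000 };          // hypothetical
        std::vector<Float32> mHistory;       // private to this channel
    };

    // in the AU subclass, hand the SDK one kernel per channel:
    // virtual AUKernelBase* NewKernel() { return new ConvolutionKernel(this); }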

> My experience has been in writing video filtering plugins, where all
> spatial data is available at once, and requesting one frame at a time
> is quite common.
>
> My audio dsp has been with non realtime libraries where the filter
> itself requests sample(s) as it needs them and latency isn't an issue.

you're going to find that video filtering and non-realtime audio are
quite different, in some fairly important ways, from realtime audio
processing. there are a few conceptual similarities, and the math
involved in the processing is much the same, but the "framework" in
which it takes place is quite different: the host pushes fixed-size
buffers at you on its own schedule, and your filter cannot pull
samples on demand.
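
(a minimal sketch of that push model, in plain C++ rather than any
CoreAudio API, with hypothetical names: a filter that needs lookahead
has to buffer internally, pay the lookahead as latency, and drain it
as tail, while still returning exactly N samples per call:)

    #include <cstddef>
    #include <deque>

    class LookaheadFilter {
    public:
        explicit LookaheadFilter(std::size_t lookahead)
            : mFifo(lookahead, 0.0f) {}    // pre-fill = initial silence

        // push model: the host calls this with exactly nframes in and
        // out, on its schedule. the first `lookahead` outputs are
        // silence (the latency); after input stops, `lookahead`
        // buffered samples keep draining out (the tail).
        void process(const float* in, float* out, std::size_t nframes)
        {
            for (std::size_t i = 0; i < nframes; ++i) {
                mFifo.push_back(in[i]);    // newest sample enters
                out[i] = mFifo.front();    // oldest sample leaves
                mFifo.pop_front();
            }
        }

    private:
        std::deque<float> mFifo; // real code would use a preallocated ring
    };

compare that with a pull-style offline library, where the filter just
asks its source for the next sample whenever it wants one.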

References:
  • Convolution Audio Unit how to? (From: Mark Heath <email@hidden>)
  • Re: Convolution Audio Unit how to? (From: Brian Willoughby <email@hidden>)
  • Re: Convolution Audio Unit how to? (From: Mark Heath <email@hidden>)
