Re: "Best Practices" for temp buffers in Audio Units?
Re: "Best Practices" for temp buffers in Audio Units?
- Subject: Re: "Best Practices" for temp buffers in Audio Units?
- From: Stephen Blinkhorn <email@hidden>
- Date: Mon, 2 Nov 2009 15:19:47 -0600
Hi Sean,
I do all my block allocations up front during AU initialization. When
you initialize, you can find out the maximum block size you will ever
be asked to process via GetMaxFramesPerSlice(); that value won't
change until the AU is reinitialized.
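Something along these lines (a rough sketch only, assuming the CoreAudio
SDK's C++ AUEffectBase base class; exact method signatures differ between
SDK versions, and mTempBuffer is just a placeholder member name, not SDK
API):

// Rough sketch - assumes the C++ AUEffectBase from the CoreAudio SDK.
#include <vector>
#include "AUEffectBase.h"

class MyEffect : public AUEffectBase {
public:
    MyEffect(AudioUnit component) : AUEffectBase(component) {}

    virtual OSStatus Initialize()
    {
        OSStatus result = AUEffectBase::Initialize();
        if (result != noErr) return result;

        // Allocate once, sized for the largest slice the host can send.
        // GetMaxFramesPerSlice() stays fixed until the AU is reinitialized.
        mTempBuffer.resize(GetMaxFramesPerSlice());
        return noErr;
    }

    virtual OSStatus ProcessBufferLists(AudioUnitRenderActionFlags& ioActionFlags,
                                        const AudioBufferList& inBufferList,
                                        AudioBufferList& outBufferList,
                                        UInt32 inFramesToProcess)
    {
        // inFramesToProcess is never larger than GetMaxFramesPerSlice(),
        // so mTempBuffer is always big enough - no allocation happens on
        // the render thread.
        // ... process using &mTempBuffer[0] as scratch space ...
        return noErr;
    }

private:
    std::vector<Float32> mTempBuffer;   // scratch block, allocated in Initialize()
};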
Maybe that helps if I understood you correctly,
Stephen
On 2 Nov 2009, at 12:43, Sean Costello wrote:
Hi all:
I am working on some algorithms where I want to use block-based
processing, instead of sample-by-sample. The algorithms I am working
on are feedforward, so I can use temporary blocks that are the same
size as the input/output buffers. A few questions:
- Can I assume that the input and output buffers are separate, or
should I assume aliasing of these?
- Is there a good built-in method of allocating a temporary buffer
in the Audio Unit API?
I have done lots of block-based processing before, but in
environments (embedded, synthesis languages) with a predictable
block size that won't change at random times. I can't presume this
in a DAW environment, so I was wondering how other people deal with
this, or whether there is an official way.
Thanks,
Sean Costello
Valhalla DSP, LLC