"Best Practices" for temp buffers in Audio Units?
"Best Practices" for temp buffers in Audio Units?
- Subject: "Best Practices" for temp buffers in Audio Units?
- From: Sean Costello <email@hidden>
- Date: Mon, 2 Nov 2009 10:43:10 -0800
Hi all:
I am working on some algorithms where I want to use block-based
processing, instead of sample-by-sample. The algorithms I am working
on are feedforward, so I can use temporary blocks that are the same
size as the input/output buffers. A few questions:
- Can I assume that the input and output buffers are separate, or
should I assume that they may alias?
- Is there a good built-in method of allocating a temporary buffer in
the Audio Unit API?
I have done lots of block-based processing before, but in environments
(embedded, synthesis languages) with a predictable block size that
won't change at random times. I can't presume this in a DAW
environment, so I was wondering how other people have dealt with this,
or whether there is an official way.
Thanks,
Sean Costello
Valhalla DSP, LLC
Coreaudio-api mailing list (email@hidden)