Use of AudioQueue buffers in case of sound synthesis
- Subject: Use of AudioQueue buffers in case of sound synthesis
- From: "cparodi.ugemi" <email@hidden>
- Date: Sat, 30 Jan 2010 15:35:36 +0100
Following advice from Jeff, I started prototyping a simple callback-based sound generation test program built on AudioQueue Services, so I can avoid dealing with the lower-level APIs (I only need the default output device and I don't need AUs).
One problem I am facing is the three-buffer approach: do I really need to allocate three queue buffers and invoke my callback render function three times in a priming loop (see the sketch below)? Isn't that insane from a performance point of view? IMHO I was already struggling with performance with a single callback.
That would mean tripling the processor workload per synth voice (think of a polyphonic synth...).
Any way to avoid that?
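
For reference, here is a minimal sketch of the priming pattern I'm describing (names like RenderSine, SynthState and the constants are just placeholders I made up; the AudioQueue calls themselves are the standard AudioToolbox ones, assuming a mono 16-bit PCM output queue):

#include <AudioToolbox/AudioToolbox.h>
#include <math.h>

#define kNumBuffers      3
#define kBufferByteSize  4096
#define kSampleRate      44100.0

typedef struct { double phase; double freq; } SynthState;

/* Called by the queue whenever a buffer has finished playing and needs refilling. */
static void RenderSine(void *userData, AudioQueueRef queue, AudioQueueBufferRef buffer) {
    SynthState *state = (SynthState *)userData;
    SInt16 *samples = (SInt16 *)buffer->mAudioData;
    UInt32 frames = buffer->mAudioDataBytesCapacity / sizeof(SInt16);
    for (UInt32 i = 0; i < frames; i++) {
        samples[i] = (SInt16)(sin(state->phase) * 32767.0);
        state->phase += 2.0 * M_PI * state->freq / kSampleRate;
    }
    buffer->mAudioDataByteSize = frames * sizeof(SInt16);
    AudioQueueEnqueueBuffer(queue, buffer, 0, NULL);
}

int main(void) {
    AudioStreamBasicDescription fmt = {0};
    fmt.mSampleRate       = kSampleRate;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
    fmt.mChannelsPerFrame = 1;
    fmt.mBitsPerChannel   = 16;
    fmt.mBytesPerFrame    = 2;
    fmt.mFramesPerPacket  = 1;
    fmt.mBytesPerPacket   = 2;

    SynthState state = { 0.0, 440.0 };
    AudioQueueRef queue;
    AudioQueueNewOutput(&fmt, RenderSine, &state, NULL, NULL, 0, &queue);

    /* Priming: the render callback runs once per buffer up front;
       after AudioQueueStart each buffer is refilled only as it finishes playing. */
    for (int i = 0; i < kNumBuffers; i++) {
        AudioQueueBufferRef buffer;
        AudioQueueAllocateBuffer(queue, kBufferByteSize, &buffer);
        RenderSine(&state, queue, buffer);
    }

    AudioQueueStart(queue, NULL);
    CFRunLoopRunInMode(kCFRunLoopDefaultMode, 10.0, false); /* play for ~10 s */
    AudioQueueDispose(queue, true);
    return 0;
}
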
Regards,
Charles