Re: Use of AudioQueue buffers in case of sound synthesis
- Subject: Re: Use of AudioQueue buffers in case of sound synthesis
- From: "email@hidden" <email@hidden>
- Date: Sun, 31 Jan 2010 15:21:32 +0100 (CET)
David wrote:
> You need to queue buffers to get a callback. If the buffers are empty you'll
> get silence.
> So you fill the initial buffers with content, and you're fine. Since you need
> to fill content in the callback too, it might as well serve both duties.
That's clear now, thanks.
But in the case of sound synthesis, sound generation is fully delegated to a
callback render function with no file access, so there is no slow peripheral
(hard disk or otherwise) to take into account. In that scenario, is there any
risk of hearing clicks or other audible problems if my queue has a single
buffer instead of three?
The reason I'm asking is that I also need to minimize latency (the time
between key press and sound start), so I'm concerned that a three-buffer
approach may delay the start of playback. Please forgive the not deeply
technical nature of my questions; I'm starting to prototype something and
want to settle on the right approach beforehand.
Regards,
Charles
Coreaudio-api mailing list (email@hidden)