Re: Use of AudioQueue buffers in case of sound synthesis
- Subject: Re: Use of AudioQueue buffers in case of sound synthesis
- From: William Stewart <email@hidden>
- Date: Wed, 3 Feb 2010 18:49:45 -0800
I would think that for synthesis responding to real-time input of some
nature, you would want "just in time" note generation. An audio queue is
going to have too much latency for this to feel like a real-time response.
The output units (AUHAL on the desktop, AURemoteIO on the phone) would be
the way to do this. You set them up to provide the linear PCM format you
want (the default I/O size is probably good enough to start with), and then
the render callback asks you to calculate that much audio for any given
time.
It is not that difficult to set up the audio units to do this. For the
desktop, have a look at the DefaultOutputUnit example. For the phone, we
don't have an example that is explicitly like this, but if you start with
that example and then look at aurioTouch for some of the particulars of the
iPhone, that should be enough to get you going.
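For reference, here is a minimal sketch of that pattern in C against the
10.6-style AudioComponent API (the DefaultOutputUnit sample itself uses the
older Component Manager calls). It opens the default output unit, declares
an explicit linear PCM stream format, and installs a render callback that
synthesizes audio just in time; the 440 Hz sine, the non-interleaved stereo
Float32 format, and the missing error checking are illustrative choices
only. Link against the AudioUnit framework.

#include <AudioUnit/AudioUnit.h>
#include <math.h>
#include <unistd.h>

static double gPhase = 0.0;
static const double kSampleRate = 44100.0;
static const double kFrequency  = 440.0;

// Render callback: the output unit asks for inNumberFrames of audio "just in time".
static OSStatus RenderSine(void *inRefCon,
                           AudioUnitRenderActionFlags *ioActionFlags,
                           const AudioTimeStamp *inTimeStamp,
                           UInt32 inBusNumber,
                           UInt32 inNumberFrames,
                           AudioBufferList *ioData)
{
    double step = 2.0 * M_PI * kFrequency / kSampleRate;
    // Non-interleaved: one buffer per channel, all carrying the same sine here.
    for (UInt32 ch = 0; ch < ioData->mNumberBuffers; ++ch) {
        Float32 *out = (Float32 *)ioData->mBuffers[ch].mData;
        double phase = gPhase;
        for (UInt32 i = 0; i < inNumberFrames; ++i) {
            out[i] = (Float32)(0.25 * sin(phase));
            phase += step;
        }
    }
    gPhase = fmod(gPhase + inNumberFrames * step, 2.0 * M_PI);
    return noErr;
}

int main(void)
{
    // Find and open the default output unit (the AUHAL path on the desktop;
    // on the phone you would ask for kAudioUnitSubType_RemoteIO instead).
    AudioComponentDescription desc = { 0 };
    desc.componentType         = kAudioUnitType_Output;
    desc.componentSubType      = kAudioUnitSubType_DefaultOutput;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    AudioComponent comp = AudioComponentFindNext(NULL, &desc);
    AudioUnit outputUnit;
    AudioComponentInstanceNew(comp, &outputUnit);

    // Tell the unit which linear PCM format the callback will supply.
    AudioStreamBasicDescription fmt = { 0 };
    fmt.mSampleRate       = kSampleRate;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kAudioFormatFlagsNativeFloatPacked | kAudioFormatFlagIsNonInterleaved;
    fmt.mBitsPerChannel   = 32;
    fmt.mChannelsPerFrame = 2;
    fmt.mFramesPerPacket  = 1;
    fmt.mBytesPerFrame    = sizeof(Float32);
    fmt.mBytesPerPacket   = sizeof(Float32);
    AudioUnitSetProperty(outputUnit, kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Input, 0, &fmt, sizeof(fmt));

    // Install the render callback on the input scope of bus 0.
    AURenderCallbackStruct cb = { RenderSine, NULL };
    AudioUnitSetProperty(outputUnit, kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input, 0, &cb, sizeof(cb));

    AudioUnitInitialize(outputUnit);
    AudioOutputUnitStart(outputUnit);
    sleep(5);                              // let it play for a few seconds
    AudioOutputUnitStop(outputUnit);
    AudioUnitUninitialize(outputUnit);
    AudioComponentInstanceDispose(outputUnit);
    return 0;
}

On the phone the shape is the same: swap kAudioUnitSubType_DefaultOutput for
kAudioUnitSubType_RemoteIO and configure the audio session before starting
the unit.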
Bill
On Jan 31, 2010, at 3:50 PM, Brian Willoughby wrote:
On Jan 31, 2010, at 06:21, email@hidden wrote:
But in the case of sound synthesis, sound generation is fully delegated to a
callback render function with no file access, so there is no low-speed
peripheral (HD or whatever) to take into account. In that scenario, is there
any risk of hearing clicks or any other sort of audible problem if my queue
has a single buffer instead of three?
There are a couple of ways to use CoreAudio. In one approach, you
could have just the AU output unit, and your application could
provide the audio buffers just in time. In the queue approach, you
have to send the audio buffers in advance, because the callbacks are
telling you that the buffer is finished, not that it is needed right
away.
The former approach requires time-sensitive code with fixed time-constraint
priority. The queue approach is much easier because you don't have to worry
about precise timing so much.
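To make the difference concrete, here is a rough sketch of the queue
approach in C, assuming 16-bit mono PCM at 44.1 kHz; the buffer count, the
4096-byte buffer size, and the stand-in FillWithSynth() sine generator are
illustrative choices, not anything the API requires. The callback fires when
a buffer has finished playing, so everything enqueued up front is audio the
listener hears before the first sample you generate in response to a key
press.

#include <AudioToolbox/AudioToolbox.h>
#include <math.h>

enum { kNumBuffers = 3 };                  // classic triple-buffering; try 1 to hear the trade-off
static const UInt32 kBufferBytes = 4096;   // ~46 ms of 16-bit mono at 44.1 kHz
static const double kSampleRate  = 44100.0;

static double gPhase = 0.0;

// Stand-in synthesis routine (not a CoreAudio call): a 440 Hz sine so the
// sketch is self-contained. A real synth would render its voices here.
static void FillWithSynth(SInt16 *samples, UInt32 frames)
{
    const double step = 2.0 * M_PI * 440.0 / kSampleRate;
    for (UInt32 i = 0; i < frames; ++i) {
        samples[i] = (SInt16)(8000.0 * sin(gPhase));
        gPhase += step;
    }
}

// Called when the queue has *finished* with a buffer: refill it with the
// next chunk of synthesized audio and hand it back, so playback always stays
// one or more buffers ahead of what you are generating right now.
static void AQOutputCallback(void *inUserData, AudioQueueRef inAQ,
                             AudioQueueBufferRef inBuffer)
{
    UInt32 frames = inBuffer->mAudioDataBytesCapacity / sizeof(SInt16);
    FillWithSynth((SInt16 *)inBuffer->mAudioData, frames);
    inBuffer->mAudioDataByteSize = inBuffer->mAudioDataBytesCapacity;
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}

// Call this from your app when playback should begin.
void StartSynthQueue(void)
{
    AudioStreamBasicDescription fmt = { 0 };
    fmt.mSampleRate       = kSampleRate;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    fmt.mBitsPerChannel   = 16;
    fmt.mChannelsPerFrame = 1;
    fmt.mFramesPerPacket  = 1;
    fmt.mBytesPerFrame    = 2;
    fmt.mBytesPerPacket   = 2;

    AudioQueueRef queue;
    AudioQueueNewOutput(&fmt, AQOutputCallback, NULL, NULL, NULL, 0, &queue);

    // Prime the queue: everything enqueued here plays before the first sample
    // generated after a key press, so it is latency by construction.
    for (int i = 0; i < kNumBuffers; ++i) {
        AudioQueueBufferRef buf;
        AudioQueueAllocateBuffer(queue, kBufferBytes, &buf);
        AQOutputCallback(NULL, queue, buf);  // fill and enqueue
    }
    AudioQueueStart(queue, NULL);
}

With a single buffer there is nothing left queued while you refill it, so a
late callback becomes an audible gap; smaller or fewer buffers trade that
safety margin for latency, which is the trade-off behind the advice to code
without the queue when you need the lowest latency.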
The reason I'm asking is that I also need to reduce the latency (the time
between key press and sound start), so I'm concerned that a three-buffer
approach may add some sort of delayed start. Please forgive me for the not
deeply technical nature of my questions; I am starting to prototype
something and just want to decide on the right approach beforehand.
I forget whether you're coding for iPhone or full OS X. I also forget
whether the iPhone offers all of the CoreAudio options described above. But
if you want the lowest latency, then you might want to try coding without
the queue, if that's not too advanced for your experience level.
Brian Willoughby
Sound Consulting