AudioQueue API v/s AudioUnit API
- Subject: AudioQueue API v/s AudioUnit API
- From: Bankim Bhavsar <email@hidden>
- Date: Fri, 06 May 2011 11:47:28 -0700
Hello CoreAudio folks,
I'm working on a sound output module for a Mac OS 10.6+ application
that produces a buffer of 10 milliseconds of PCM data every 10
milliseconds and stores it in an internal buffer that holds up to 1
second of audio.
Currently the sound module uses the AudioQueue API, configured with 4
buffers each holding 20 milliseconds of PCM data, with callbacks
running on AudioQueue's internal thread. These buffers need to be
played as soon as possible, and we would like worst-case latency not
to exceed 200 milliseconds (the amount of PCM data sitting in the
application's internal buffer).
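For reference, the current setup is roughly like the sketch below
(ReadFromRingBuffer, kNumBuffers, and the other names are placeholders,
not our real identifiers):

// Rough sketch of the current AudioQueue setup. Passing NULL for the
// callback run loop makes callbacks arrive on AudioQueue's internal thread.
#include <AudioToolbox/AudioToolbox.h>

#define kNumBuffers   4
#define kBufferMillis 20

// Placeholder: copies up to maxBytes of PCM out of the 1-second internal buffer.
extern UInt32 ReadFromRingBuffer(void *dst, UInt32 maxBytes);

static void OutputCallback(void *inUserData, AudioQueueRef inAQ,
                           AudioQueueBufferRef inBuffer)
{
    UInt32 bytes = ReadFromRingBuffer(inBuffer->mAudioData,
                                      inBuffer->mAudioDataBytesCapacity);
    inBuffer->mAudioDataByteSize = bytes;
    // (The real code handles the case where no data is available yet.)
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}

static OSStatus StartPlayback(const AudioStreamBasicDescription *fmt)
{
    AudioQueueRef queue;
    OSStatus err = AudioQueueNewOutput(fmt, OutputCallback, NULL,
                                       NULL, NULL, 0, &queue);
    if (err != noErr) return err;

    // 4 buffers of 20 ms each, primed before starting the queue.
    UInt32 bufferBytes = (UInt32)(fmt->mSampleRate * kBufferMillis / 1000)
                         * fmt->mBytesPerFrame;
    for (int i = 0; i < kNumBuffers; i++) {
        AudioQueueBufferRef buf;
        AudioQueueAllocateBuffer(queue, bufferBytes, &buf);
        OutputCallback(NULL, queue, buf);
    }
    return AudioQueueStart(queue, NULL);
}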
However, with the AudioQueue API, when the application (process) is doing
heavy 3D graphics processing or disk activity, the audio queue
callback sometimes gets delayed considerably (200 milliseconds or more). I
tried mitigating this by increasing the playback rate (to 1.2x)
whenever a threshold of buffered bytes is reached or exceeded, and
scaling back to the normal rate afterwards. However, changing the
playback rate causes audible blips and is not considered an acceptable
solution by the team.
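For concreteness, that kind of rate adjustment can be sketched with
AudioQueue's time-pitch parameter (a sketch only; BufferedBytes and the
threshold argument are placeholder names, not our real code):

// Sketch of the rate-scaling workaround.
#include <AudioToolbox/AudioToolbox.h>

extern UInt32 BufferedBytes(void);   // placeholder: bytes waiting in the internal buffer

static void AdjustRateForBacklog(AudioQueueRef queue, UInt32 highWaterBytes)
{
    // Time/pitch processing has to be enabled on the queue (normally done
    // once at setup) before kAudioQueueParam_PlayRate has any effect.
    UInt32 enable = 1;
    AudioQueueSetProperty(queue, kAudioQueueProperty_EnableTimePitch,
                          &enable, sizeof(enable));

    // Play at 1.2x while the backlog is at or above the threshold, else 1.0x.
    Float32 rate = (BufferedBytes() >= highWaterBytes) ? 1.2f : 1.0f;
    AudioQueueSetParameter(queue, kAudioQueueParam_PlayRate, rate);
}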
I'm wondering whether AudioQueue is the right API to be used in such a
situation.
Should the AudioUnit API be used instead? Can anyone list pros and cons of
one API over the other?
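For comparison, the AudioUnit route I'm considering would look roughly
like the sketch below (the default output unit pulling straight from our
internal buffer in a render callback; ReadFromRingBuffer is again a
placeholder):

// Sketch of the AudioUnit alternative: the output unit pulls PCM from our
// buffer on its own I/O thread via a render callback.
#include <AudioUnit/AudioUnit.h>
#include <string.h>

extern UInt32 ReadFromRingBuffer(void *dst, UInt32 maxBytes);   // placeholder

static OSStatus RenderCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    AudioBuffer *buf = &ioData->mBuffers[0];
    UInt32 got = ReadFromRingBuffer(buf->mData, buf->mDataByteSize);
    if (got < buf->mDataByteSize) {
        // Underrun: pad with silence instead of stalling the I/O thread.
        memset((char *)buf->mData + got, 0, buf->mDataByteSize - got);
    }
    return noErr;
}

static OSStatus StartOutputUnit(const AudioStreamBasicDescription *fmt)
{
    AudioComponentDescription desc = { 0 };
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_DefaultOutput;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    AudioComponent comp = AudioComponentFindNext(NULL, &desc);
    AudioUnit unit;
    OSStatus err = AudioComponentInstanceNew(comp, &unit);
    if (err != noErr) return err;

    AudioUnitSetProperty(unit, kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Input, 0, fmt, sizeof(*fmt));

    AURenderCallbackStruct cb = { RenderCallback, NULL };
    AudioUnitSetProperty(unit, kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input, 0, &cb, sizeof(cb));

    err = AudioUnitInitialize(unit);
    if (err != noErr) return err;
    return AudioOutputUnitStart(unit);
}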
Thanks,
Bankim.