How to properly use AudioQueueOfflineRender?
- Subject: How to properly use AudioQueueOfflineRender?
- From: "Mark Zuber" <email@hidden>
- Date: Wed, 7 Jan 2009 13:27:29 -0500
Hi,
I have been trying to follow all the posts on this mailing list regarding AudioQueueOfflineRender, but I can't find any that explain how to use it properly. Is that really the case?

I have been trying to modify the "SpeakHere" example app for iPhone OS to read in an audio file (WAV, MP3, AAC, etc.) and render it as linear PCM into a memory buffer, but I can't figure out the proper order of calls for AudioQueueOfflineRender and AudioQueueSetOfflineRenderFormat. Has anyone been able to do this successfully?

Rolf mentioned on the list that he got it working using three audio queues, but I can't understand why you would need that many. I would expect one input queue to enqueue the audio data read from the file and one output queue to receive the uncompressed linear PCM. Rolf also mentioned an upcoming technote that explains this in more detail. Does anyone have sample code that gets this all to work? Any help would be greatly appreciated.
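For reference, here is my best guess at the single-queue sequence, pieced together from the AudioQueue.h header comments: create one output queue in the file's format, switch it into offline mode with AudioQueueSetOfflineRenderFormat, prime it with source buffers, start it, and then pull PCM out with AudioQueueOfflineRender. The Player struct, HandleOutputBuffer, and RenderFileToPCM names are my own placeholders, all error checking is stripped out, and I am not at all sure the priming and stop logic is right:

#include <AudioToolbox/AudioToolbox.h>
#include <stdlib.h>

typedef struct {
    AudioFileID file;
    SInt64      packetIndex;      // next packet to read from the file
    UInt32      bytesPerBuffer;   // capacity of each source buffer
    UInt32      packetsPerBuffer; // packets that fit in one buffer
    AudioStreamPacketDescription *packetDescs; // non-NULL for VBR data
    Boolean     done;
} Player;

// Output callback: refill a queue buffer with packets from the file,
// or stop the queue (letting it drain) when the file runs out.
static void HandleOutputBuffer(void *inUserData, AudioQueueRef inAQ,
                               AudioQueueBufferRef inBuffer)
{
    Player *p = (Player *)inUserData;
    if (p->done) return;
    UInt32 numBytes   = p->bytesPerBuffer;
    UInt32 numPackets = p->packetsPerBuffer;
    AudioFileReadPackets(p->file, false, &numBytes, p->packetDescs,
                         p->packetIndex, &numPackets, inBuffer->mAudioData);
    if (numPackets > 0) {
        inBuffer->mAudioDataByteSize = numBytes;
        AudioQueueEnqueueBuffer(inAQ, inBuffer,
                                p->packetDescs ? numPackets : 0,
                                p->packetDescs);
        p->packetIndex += numPackets;
    } else {
        AudioQueueStop(inAQ, false);  // async stop: drain what's queued
        p->done = true;
    }
}

void RenderFileToPCM(CFURLRef url)
{
    Player p = {0};
    AudioFileOpenURL(url, kAudioFileReadPermission, 0, &p.file);
    AudioStreamBasicDescription fileFormat;
    UInt32 size = sizeof(fileFormat);
    AudioFileGetProperty(p.file, kAudioFilePropertyDataFormat, &size, &fileFormat);

    // One output queue, created in the file's (possibly compressed) format.
    AudioQueueRef queue;
    AudioQueueNewOutput(&fileFormat, HandleOutputBuffer, &p, NULL, NULL, 0, &queue);

    // Ask the queue to render offline into 16-bit interleaved linear PCM.
    AudioStreamBasicDescription pcm = {0};
    pcm.mSampleRate       = fileFormat.mSampleRate;
    pcm.mFormatID         = kAudioFormatLinearPCM;
    pcm.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
    pcm.mChannelsPerFrame = fileFormat.mChannelsPerFrame;
    pcm.mBitsPerChannel   = 16;
    pcm.mBytesPerFrame    = 2 * pcm.mChannelsPerFrame;
    pcm.mFramesPerPacket  = 1;
    pcm.mBytesPerPacket   = pcm.mBytesPerFrame;
    AudioChannelLayout layout = {0};
    layout.mChannelLayoutTag = (pcm.mChannelsPerFrame == 1)
        ? kAudioChannelLayoutTag_Mono : kAudioChannelLayoutTag_Stereo;
    AudioQueueSetOfflineRenderFormat(queue, &pcm, &layout);

    // Size the source buffers from the largest packet in the file.
    UInt32 maxPacketSize;
    size = sizeof(maxPacketSize);
    AudioFileGetProperty(p.file, kAudioFilePropertyPacketSizeUpperBound, &size, &maxPacketSize);
    p.bytesPerBuffer   = 0x10000;
    p.packetsPerBuffer = p.bytesPerBuffer / maxPacketSize;
    p.packetDescs = (fileFormat.mBytesPerPacket == 0)  // VBR needs descriptions
        ? malloc(p.packetsPerBuffer * sizeof(AudioStreamPacketDescription))
        : NULL;

    // Prime the queue with source data by calling the callback by hand.
    for (int i = 0; i < 3; ++i) {
        AudioQueueBufferRef buf;
        AudioQueueAllocateBuffer(queue, p.bytesPerBuffer, &buf);
        HandleOutputBuffer(&p, queue, buf);
    }
    AudioQueueStart(queue, NULL);

    // Capture buffer that receives the rendered PCM, pulled in a loop.
    const UInt32 framesPerRender = 4096;
    AudioQueueBufferRef capture;
    AudioQueueAllocateBuffer(queue, framesPerRender * pcm.mBytesPerFrame, &capture);
    AudioTimeStamp ts = {0};
    ts.mFlags      = kAudioTimeStampSampleTimeValid;
    ts.mSampleTime = 0;
    AudioQueueOfflineRender(queue, &ts, capture, 0);  // 0-frame priming call
    for (;;) {
        AudioQueueOfflineRender(queue, &ts, capture, framesPerRender);
        UInt32 frames = capture->mAudioDataByteSize / pcm.mBytesPerFrame;
        if (frames == 0) break;
        // ...append capture->mAudioData to the in-memory PCM buffer here...
        ts.mSampleTime += frames;
    }
    AudioQueueDispose(queue, true);
    AudioFileClose(p.file);
    free(p.packetDescs);
}

If a single queue in offline-render mode is not enough and three queues really are required, I would love to understand what the other two are for.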