AUiPodEQ + Audio Queue: Threading implications of AudioQueueOfflineRender()
- Subject: AUiPodEQ + Audio Queue: Threading implications of AudioQueueOfflineRender()
- From: Chris Adamson <email@hidden>
- Date: Mon, 09 May 2011 11:45:30 -0400
I'm looking at ways to apply the AUiPodEQ to an audio stream that currently goes through one of two Audio Queues. This is on iOS.
We have it working in one case where we decode with ffmpeg and feed PCM to the queue. Our processing goes like this:
1. Decode with FFMPEG
2. Apply AUiPodEQ [via a simple AUGraph: AUConverter -> AUiPodEQ -> GenericOutputUnit; see the sketch after this list]
3. Enqueue
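For reference, setting up that graph looks roughly like the sketch below (error handling and stream-format setup omitted; the function name is just for illustration):

#include <AudioToolbox/AudioToolbox.h>

static OSStatus BuildEQGraph(AUGraph *outGraph, AudioUnit *outGenericOutput)
{
    AUNode converterNode, eqNode, outputNode;

    AudioComponentDescription converterDesc = {
        kAudioUnitType_FormatConverter, kAudioUnitSubType_AUConverter,
        kAudioUnitManufacturer_Apple, 0, 0 };
    AudioComponentDescription eqDesc = {
        kAudioUnitType_Effect, kAudioUnitSubType_AUiPodEQ,
        kAudioUnitManufacturer_Apple, 0, 0 };
    AudioComponentDescription outputDesc = {
        kAudioUnitType_Output, kAudioUnitSubType_GenericOutput,
        kAudioUnitManufacturer_Apple, 0, 0 };

    NewAUGraph(outGraph);
    AUGraphAddNode(*outGraph, &converterDesc, &converterNode);
    AUGraphAddNode(*outGraph, &eqDesc, &eqNode);
    AUGraphAddNode(*outGraph, &outputDesc, &outputNode);
    AUGraphOpen(*outGraph);

    // Wire AUConverter -> AUiPodEQ -> GenericOutput
    AUGraphConnectNodeInput(*outGraph, converterNode, 0, eqNode, 0);
    AUGraphConnectNodeInput(*outGraph, eqNode, 0, outputNode, 0);

    // Keep a handle on the generic output so we can pull rendered,
    // EQ'd samples out of it with AudioUnitRender() before enqueueing.
    AUGraphNodeInfo(*outGraph, outputNode, NULL, outGenericOutput);

    AUGraphInitialize(*outGraph);
    return noErr;
}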
When we change presets on the AUiPodEQ, we don't hear a change until the samples that have had the new preset applied make it all the way through the queue, so latency is a big problem with this approach. But it works.
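For what it's worth, the preset switch itself is just the usual factory-preset property dance, something like this (sketch only; the helper name and the index are mine):

static OSStatus SelectEQPreset(AudioUnit eqUnit, CFIndex presetIndex)
{
    // Copy the AUiPodEQ's factory preset list (caller owns the array).
    CFArrayRef presets = NULL;
    UInt32 size = sizeof(presets);
    OSStatus err = AudioUnitGetProperty(eqUnit,
                                        kAudioUnitProperty_FactoryPresets,
                                        kAudioUnitScope_Global, 0,
                                        &presets, &size);
    if (err != noErr) return err;

    // Select one of the presets ("Bass Booster", "Spoken Word", etc.).
    AUPreset *preset = (AUPreset *)CFArrayGetValueAtIndex(presets, presetIndex);
    err = AudioUnitSetProperty(eqUnit,
                               kAudioUnitProperty_PresentPreset,
                               kAudioUnitScope_Global, 0,
                               preset, sizeof(AUPreset));
    CFRelease(presets);
    return err;
}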
The other problem is that our second path doesn't use ffmpeg, but rather uses Core Audio's built-in decoders (for stuff like MP3, AAC, etc.), and therefore just feeds encoded samples (with packet description arrays, etc.) into the audio queue. Of course, that audio would need to be converted to PCM first in order to use AUiPodEQ, so I'd been planning to do something like this, with a rough sketch of the conversion step after the list:
1. Convert to PCM with Audio Conversion Services
2. Apply AUiPodEQ [via the same AUGraph: AUConverter -> AUiPodEQ -> GenericOutputUnit]
3. Enqueue (now as PCM)
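A minimal sketch of step 1, assuming a hypothetical MyEncodedSource that wraps the packets (and packet descriptions) we currently hand to the audio queue; only the AudioConverter calls are real API:

#include <AudioToolbox/AudioToolbox.h>

typedef struct MyEncodedSource MyEncodedSource;   // placeholder for our packet source

// Called by AudioConverterFillComplexBuffer() whenever it needs more encoded
// input. Real code would point ioData at the next run of encoded bytes, hand
// back the matching packet descriptions, and set *ioNumberDataPackets
// (0 when the source is exhausted).
static OSStatus MyProvidePackets(AudioConverterRef inConverter,
                                 UInt32 *ioNumberDataPackets,
                                 AudioBufferList *ioData,
                                 AudioStreamPacketDescription **outPacketDescs,
                                 void *inUserData)
{
    MyEncodedSource *source = (MyEncodedSource *)inUserData;
    (void)source;                 // body elided in this sketch
    *ioNumberDataPackets = 0;     // placeholder: report no packets supplied
    return noErr;
}

// Pull a chunk of decoded PCM out of the converter; this is what would then
// go through the AUConverter -> AUiPodEQ -> GenericOutput graph and be
// enqueued as LPCM.
static OSStatus DecodeSomePCM(AudioConverterRef converter,
                              MyEncodedSource *source,
                              AudioBufferList *pcmOut,
                              UInt32 *ioOutputPackets)
{
    return AudioConverterFillComplexBuffer(converter, MyProvidePackets, source,
                                           ioOutputPackets, pcmOut, NULL);
}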
This will surely work, though it will have the same latency as the ffmpeg case.
Looking at AudioQueueOfflineRender(), it occurs to me that I might be able to put my AUiPodEQ effect *after* the queue, since the point of an offline queue is to not go straight out to H/W. This would solve my latency problem. With this approach (call it option #2), I would rework my graph like this:
[Render callback: call AudioQueueOfflineRender()] -> AUiPodEQ -> AURemoteIO
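To make that concrete, the render callback I have in mind would look something like the sketch below. It assumes AudioQueueSetOfflineRenderFormat() has already been called with the graph's PCM format, and that the buffer and timestamp in the context were set up elsewhere (buffer from AudioQueueAllocateBuffer(), timestamp with mSampleTime valid):

#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

typedef struct {
    AudioQueueRef       queue;           // the offline-render audio queue
    AudioQueueBufferRef offlineBuffer;   // sized for the graph's frame count/format
    AudioTimeStamp      renderTime;      // mSampleTime valid, advanced per callback
} OfflineRenderContext;

static OSStatus EQInputCallback(void *inRefCon,
                                AudioUnitRenderActionFlags *ioActionFlags,
                                const AudioTimeStamp *inTimeStamp,
                                UInt32 inBusNumber,
                                UInt32 inNumberFrames,
                                AudioBufferList *ioData)
{
    OfflineRenderContext *rc = (OfflineRenderContext *)inRefCon;

    // The call in question: pull decoded LPCM out of the offline queue,
    // right here on the render thread.
    OSStatus err = AudioQueueOfflineRender(rc->queue, &rc->renderTime,
                                           rc->offlineBuffer, inNumberFrames);
    if (err != noErr) return err;

    // Hand the rendered samples to the AUiPodEQ's input. Assumes the buffer
    // was allocated for this frame count and the graph's PCM format.
    memcpy(ioData->mBuffers[0].mData,
           rc->offlineBuffer->mAudioData,
           rc->offlineBuffer->mAudioDataByteSize);
    ioData->mBuffers[0].mDataByteSize = rc->offlineBuffer->mAudioDataByteSize;

    rc->renderTime.mSampleTime += inNumberFrames;
    return noErr;
}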
What gives me pause here is whether AudioQueueOfflineRender() is OK to call in the time-constrained AU render callback. I strongly suspect someone will tell me that it's not. Which is why I'm asking.
And yes, option #3 would be to rip out the audio queues, have both ffmpeg and Audio Conversion Services fill a CARingBuffer, and then have my graph's render callback just pull samples from there. That might be cleaner and more flexible for the future. But it's a pretty old codebase and I'm treading carefully, and I only want to tear things up and start over if I have a really good reason to.
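In that world the graph's input callback would collapse to something like this sketch, where MyRingBufferFetch() is a stand-in for whatever lock-free ring buffer we'd actually use (e.g. CARingBuffer::Fetch from the Core Audio utility classes) and the decoders would store into it from their own threads:

#include <AudioToolbox/AudioToolbox.h>

typedef struct MyRingBuffer MyRingBuffer;   // hypothetical lock-free ring buffer

// Hypothetical: copy nFrames of PCM starting at startFrame into ioData.
extern OSStatus MyRingBufferFetch(MyRingBuffer *rb, AudioBufferList *ioData,
                                  UInt32 nFrames, SInt64 startFrame);

typedef struct {
    MyRingBuffer *ring;
    SInt64        nextFrame;    // next sample frame the graph expects
} RingContext;

// Graph input callback for option #3: no queue, no offline render, just a
// non-blocking read from the ring buffer the decoders keep filled.
static OSStatus RingInputCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData)
{
    RingContext *rc = (RingContext *)inRefCon;
    OSStatus err = MyRingBufferFetch(rc->ring, ioData, inNumberFrames,
                                     rc->nextFrame);
    if (err == noErr)
        rc->nextFrame += inNumberFrames;
    return err;
}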
Thanks in advance for any thoughts and suggestions.
--Chris