Re: Grand Central Dispatch in audio render callback
- Subject: Re: Grand Central Dispatch in audio render callback
- From: Robert Bielik <email@hidden>
- Date: Fri, 29 Jul 2011 11:16:20 +0200
Support wrote on 2011-07-29 10:57:
But with GCD this can be done in parallel rendering each voice into its own sample buffer, e.g.
const size_t N = voiceCount;
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
dispatch_apply(N, queue, ^(size_t i) {
    ClearBuffer(buffer[i], numSamples);
    voice[i]->Process(buffer[i]);
});
Ooh... I shudder when OS functions are called within a real-time rendering callback. I'd say it is unwise, because you are then at the whim of the OS: I don't think the time taken by the call is guaranteed to be bounded, which means that eventually (or often) you will fail to meet the rendering deadline.
I don't know that it is much better (though possibly more controllable), but could you use Intel Threading Building Blocks (http://threadingbuildingblocks.org/) instead?
Regards
/Rob
Then mixing each voice buffer into the final output:
for (int i = 0; i < voiceCount; i++)
{
    /* accumulate in place: outBuffer += buffer[i], unit strides */
    vDSP_vadd(buffer[i], 1, outBuffer, 1, outBuffer, 1, numSamples);
}
So far I'm finding that this works on the iPad 1 and in the Simulator, but I'm wondering: is this correct or advisable? And on multi-core devices such as the iPad 2, will this spread the load across the cores and so improve performance?
kind regards
peter johnson
one red dog media
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden