Hi
I'm wondering what the general rule of thumb is for using Grand Central Dispatch from within a real-time audio render callback. For example, on iOS, with RemoteIO rendering the voices of a polyphonic synthesizer, you might have a loop like:
for (int i = 0; i < voiceCount; i++) {
    ClearBuffer(outBuffer, numSamples);
    voice[i]->Process(outBuffer);
}
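For context, here is roughly where that loop sits. This is only a minimal sketch: the render callback uses the standard AURenderCallback signature, but the Voice class, ClearBuffer, and the Synth container are illustrative stand-ins for my real code, and I'm assuming each voice accumulates into the shared output buffer (so it is cleared once before the loop):

#include <AudioUnit/AudioUnit.h>
#include <cmath>
#include <cstring>
#include <vector>

// Illustrative voice: Process() accumulates a trivial oscillator into out.
struct Voice {
    float phase = 0.0f;
    float increment = 0.01f;
    void Process(float *out, UInt32 numSamples) {
        for (UInt32 n = 0; n < numSamples; n++) {
            out[n] += sinf(phase);
            phase += increment;
        }
    }
};

struct Synth {
    std::vector<Voice *> voice;
};

static void ClearBuffer(float *buf, UInt32 numSamples) {
    memset(buf, 0, numSamples * sizeof(float));
}

// RemoteIO render callback (standard AURenderCallback signature).
static OSStatus RenderCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    Synth *synth = static_cast<Synth *>(inRefCon);
    float *outBuffer = static_cast<float *>(ioData->mBuffers[0].mData);

    ClearBuffer(outBuffer, inNumberFrames);
    for (size_t i = 0; i < synth->voice.size(); i++) {
        synth->voice[i]->Process(outBuffer, inNumberFrames);
    }
    return noErr;
}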
But with GCD this could be done in parallel, rendering each voice into its own sample buffer, e.g.:
const size_t N = voiceCount;
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
dispatch_apply(N, queue, ^(size_t i) {
    ClearBuffer(buffer[i], numSamples);
    voice[i]->Process(buffer[i]);
});
Then each voice buffer is mixed into the final output:
for (int i = 0; i < voiceCount; i++) {
    // outBuffer = outBuffer + buffer[i], unit strides
    vDSP_vadd(buffer[i], 1, outBuffer, 1, outBuffer, 1, numSamples);
}
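Putting it together, this is a fuller sketch of the parallel path, reusing the Voice and ClearBuffer from the sketch above. The per-voice buffers are assumed to be allocated once, off the audio thread, and sized for the largest render slice; whether calling dispatch_apply from the render thread is safe at all is exactly what I'm asking about below.

#include <Accelerate/Accelerate.h>
#include <dispatch/dispatch.h>

enum { kMaxVoices = 16 };

// Assumed to be allocated once, off the audio thread, each large enough
// for the maximum render slice (illustrative).
static float *buffer[kMaxVoices];

static void RenderVoicesParallel(Voice *voice[], int voiceCount,
                                 float *outBuffer, UInt32 numSamples)
{
    dispatch_queue_t queue =
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);

    // Render each voice into its own buffer; dispatch_apply does not
    // return until every iteration has completed.
    dispatch_apply((size_t)voiceCount, queue, ^(size_t i) {
        ClearBuffer(buffer[i], numSamples);
        voice[i]->Process(buffer[i], numSamples);
    });

    // Mix: outBuffer = outBuffer + buffer[i], unit strides throughout.
    ClearBuffer(outBuffer, numSamples);
    for (int i = 0; i < voiceCount; i++) {
        vDSP_vadd(buffer[i], 1, outBuffer, 1, outBuffer, 1, numSamples);
    }
}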
So far I'm finding that this works on the iPad 1 and in the iOS Simulator, but I'm wondering whether it is correct or advisable. And on multi-core devices such as the iPad 2, will this spread the load across the cores and thus improve performance?
Kind regards,
Peter Johnson
One Red Dog Media