Re: Grand Central Dispatch in audio render callback
- Subject: Re: Grand Central Dispatch in audio render callback
- From: Support <email@hidden>
- Date: Fri, 29 Jul 2011 19:50:15 +1000
On 29/07/2011, at 7:16 PM, Robert Bielik wrote:
> Support wrote 2011-07-29 10:57:
>> But with GCD this can be done in parallel rendering each voice into its own sample buffer, e.g.
>>
>> const size_t N = voiceCount;
>> dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
>> dispatch_apply(N, queue,
>>     ^(size_t i) {
>>         ClearBuffer(buffer[i], numSamples);
>>         voice[i]->Process(buffer[i]);
>>     });
>
> Ooh... I shudder when OS functions are called within a real-time rendering callback. I'd say it is not wise to use it, because you're then at the
> whim of the OS, and I don't think the time for the call is guaranteed to be bounded, which means that eventually (or often) you'll fail to
> meet the deadline for rendering.
>
This is what I was wondering about, and it does seem hairy: how heavy the overhead of scheduling each job is, and whether that scheduling can block. In which case, how does Logic schedule its processing across multiple cores?
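One rough way to get a feel for that scheduling overhead (just a sketch I'd run outside the render callback, with placeholder voice and buffer counts, not anything from Apple's docs) is to time a dispatch_apply fan-out on its own:

// Rough timing sketch: measures one dispatch_apply fan-out from a normal
// thread. The voice count, frame count and dummy work are placeholders, and
// a typical figure here says nothing about the worst case on the audio thread.
#include <dispatch/dispatch.h>
#include <mach/mach_time.h>
#include <cstdio>
#include <vector>

int main()
{
    const size_t kVoices = 16;    // hypothetical voice count
    const size_t kFrames = 512;   // hypothetical frames per buffer

    std::vector<float> storage(kVoices * kFrames);
    float *base = storage.data(); // raw pointer so the block can write to it

    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);

    dispatch_queue_t queue =
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);

    uint64_t t0 = mach_absolute_time();
    dispatch_apply(kVoices, queue, ^(size_t i) {
        float *buf = base + i * kFrames;
        for (size_t n = 0; n < kFrames; ++n)
            buf[n] = 0.0f;        // stand-in for ClearBuffer() + Process()
    });
    uint64_t t1 = mach_absolute_time();

    double us = (double)(t1 - t0) * tb.numer / tb.denom / 1000.0;
    printf("dispatch_apply over %zu voices: %.1f us\n", kVoices, us);
    return 0;
}

Even a small average from something like that wouldn't prove it's safe, though; it's the worst case that blows the render deadline.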
> I don't know if it's that much better (though possibly more controllable), but can't you use Intel Threading Building Blocks instead (http://threadingbuildingblocks.org/)?
I'm not sure about TBB's implementation, whether it includes ARM or not, or how it handles the job-creation overhead. I mean, with GCD, TBB and OpenMP alike, I guess they ultimately pull a thread out of a thread pool.
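For comparison, here's roughly what I imagine the TBB equivalent of that dispatch_apply loop looks like; the Voice type, buffer layout and numSamples below are just stand-ins, not the actual code from earlier in the thread:

// Hedged sketch of the same per-voice fan-out using tbb::parallel_for.
// TBB likewise services the iterations from its internal thread pool, so
// the same real-time caveats apply.
#include <tbb/parallel_for.h>
#include <cstddef>

struct Voice {
    float gain = 1.0f;
    void Process(float *buffer, size_t numSamples) {
        for (size_t n = 0; n < numSamples; ++n)
            buffer[n] *= gain;                 // trivial stand-in for real DSP
    }
};

void RenderVoices(Voice **voice, float **buffer,
                  size_t voiceCount, size_t numSamples)
{
    tbb::parallel_for(size_t(0), voiceCount, [=](size_t i) {
        for (size_t n = 0; n < numSamples; ++n)
            buffer[i][n] = 0.0f;               // clear, then render this voice
        voice[i]->Process(buffer[i], numSamples);
    });
}

Either way the worker threads come out of a pool the library owns, so the question of what priority they run at relative to the audio thread doesn't go away.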
regards
peter