Re: Concurrency programming in Audio Units...
- Subject: Re: Concurrency programming in Audio Units...
- From: Doug Wyatt <email@hidden>
- Date: Tue, 2 Mar 2010 07:55:53 -0800
On Mar 2, 2010, at 0:42 , Craig Webb wrote:
> Hi there,
>
> I have a couple of general questions concerning Concurrency programming and Audio Units...
>
> 1. Is it possible to use Grand Central Dispatch or OpenCl when developing an AU component using the Core Audio API ?
Possible, yes; advisable, I don't know. A high-priority dispatch worker thread is not realtime (priority something like 60-63 last I looked). That could be OK for a sufficiently large chunk of work (say 50+ ms) consuming a sufficiently small slice of CPU time (say under 50%), but almost certainly not if you're going to try to wake up worker threads every 12 ms and consume 90% of a CPU.
> 2. Is the DSP within an AU always computed on a single core, or does the host application spread the load when multiple cores are available ?
A host is free to schedule its work however it likes. It certainly makes sense to parallelize by spreading, for example, different mixer channels across multiple cores.
> Basically, I'm into doing direct numerical simulation of physical modelling synthesis (eg finite difference schemes) and would like to know if it is possible to spread the computation over multiple cores / GPU.
A challenge here is that there's no way for you to negotiate use of other cores with the host; it's going to feel free to schedule other work as if those other cores are free.
Doug
_______________________________________________
Coreaudio-api mailing list (email@hidden)