
Re: Audio Units and OpenCL?


  • Subject: Re: Audio Units and OpenCL?
  • From: philippe wicker <email@hidden>
  • Date: Thu, 10 Sep 2009 09:27:58 +0200

I would think that the problem arises if you want your additional threads to meet a short latency constraint, e.g. one audio buffer. Then the threads you create must be high-priority time-constrained threads: they tell the system that they need a given period and a given amount of computation time within it. Now if a "multi-threaded" AU becomes a common design pattern, we may get a situation where the system has to schedule a number of time-constrained threads whose combined computational needs exceed the available time (e.g. roughly 11.6 ms for 512 frames at 44.1 kHz, per core).
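As an illustration of what philippe describes (this sketch is editorial, not from the original thread), a worker thread can be promoted to Mach's time-constraint policy roughly like this; the period, computation and constraint figures are purely illustrative:

#include <mach/mach.h>
#include <mach/mach_time.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

/* Convert nanoseconds to Mach absolute-time units. */
static uint64_t ns_to_abs(uint64_t ns)
{
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);
    return ns * tb.denom / tb.numer;
}

static void *worker(void *arg)
{
    (void)arg;

    /* One render cycle of 512 frames at 44.1 kHz is roughly an 11.6 ms
       period; claim (say) 3 ms of computation inside a 6 ms window.
       These numbers are examples only and must be tuned per workload. */
    thread_time_constraint_policy_data_t policy;
    policy.period      = (uint32_t)ns_to_abs(11600000ULL);
    policy.computation = (uint32_t)ns_to_abs(3000000ULL);
    policy.constraint  = (uint32_t)ns_to_abs(6000000ULL);
    policy.preemptible = 1;

    kern_return_t kr = thread_policy_set(mach_thread_self(),
                                         THREAD_TIME_CONSTRAINT_POLICY,
                                         (thread_policy_t)&policy,
                                         THREAD_TIME_CONSTRAINT_POLICY_COUNT);
    if (kr != KERN_SUCCESS)
        fprintf(stderr, "thread_policy_set failed: %d\n", kr);

    /* ... block until the render thread hands over work, process it,
       signal completion, repeat ... */
    return NULL;
}

Every thread set up this way is another real-time claim the scheduler has to honour, which is exactly the over-commitment problem described above.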

Dispatching jobs across a number of time-constrained threads is feasible as long as it is done in a consistent manner, I mean with the whole picture in mind. But that is not the case here: the DAW designers decide to dispatch their load across some number of RT threads on their side, the AU designers decide to dispatch their load across a number of RT threads on their side, and there is no reason for this to lead to a consistent, well-tuned distribution of threads from the scheduler's point of view.

On 10 Sept 2009, at 03:10, Mike Lemmon wrote:

So is multi-threading an AU OK in general, or only in host-oriented cases such as these? What sort of "host interference" are people worried about? I suppose the concern is that hosts assume AUs are never multi-threaded?

I'll explain my own situation here to provide an example for the discussion. I have a synthesizer that models a complex physical system of (ideally) thousands of discrete units; the audio output generated is based on the state of the system. The system changes gradually, so introducing a latency of even one or two seconds would be OK if it meant that I could increase the complexity of the system by an order of magnitude. While concurrency isn't a viable option for most audio plug-ins, it could still make a huge difference in the few places where it is viable.


On 9/09/2009, at 12:10 PM, William Stewart wrote:


On Sep 9, 2009, at 9:56 AM, philippe wicker wrote:

I think that the difficulty in a plugin context is to meet a short, and known, latency constraint when dispatching a job to several threads. A solution is to pass the data to be worked on to some threads during one Render call and get the result back on the next Render call, or even two Render calls later, which gives a latency of one or two buffers. To be sure that the worker threads meet that kind of deadline they have to be time-constrained and their scheduling parameters carefully tuned. My guess is that this is probably a difficult task for a generic dispatching API such as GCD. Maybe an ad hoc, "hand-made" delegation to a limited number of worker threads would give better results?
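To make that scheme concrete, here is a minimal editorial sketch (not from the thread) of such a hand-made, one-buffer-latency delegation built on Mach semaphores; WorkerState, worker_loop and render_block are illustrative names, not an existing API:

#include <mach/mach.h>
#include <string.h>

#define MAX_FRAMES 512

typedef struct {
    semaphore_t work_ready;          /* signalled by the render thread    */
    semaphore_t work_done;           /* signalled by the worker thread    */
    float       input[MAX_FRAMES];   /* block handed to the worker        */
    float       output[MAX_FRAMES];  /* result, consumed one Render later */
    unsigned    frames;
} WorkerState;

static void worker_init(WorkerState *w)
{
    memset(w, 0, sizeof(*w));
    semaphore_create(mach_task_self(), &w->work_ready, SYNC_POLICY_FIFO, 0);
    /* work_done starts at 1 so the very first Render call does not block. */
    semaphore_create(mach_task_self(), &w->work_done,  SYNC_POLICY_FIFO, 1);
}

/* Worker thread body (run as a time-constrained thread, as discussed). */
static void *worker_loop(void *arg)
{
    WorkerState *w = (WorkerState *)arg;
    for (;;) {
        semaphore_wait(w->work_ready);
        /* ... heavy per-block processing of w->input into w->output ... */
        semaphore_signal(w->work_done);
    }
    return NULL;
}

/* Called from the AU's Render: emit last block's result, queue this one. */
static void render_block(WorkerState *w, const float *in, float *out,
                         unsigned frames)
{
    semaphore_wait(w->work_done);                  /* result of block N-1 */
    memcpy(out, w->output, frames * sizeof(float));

    memcpy(w->input, in, frames * sizeof(float));  /* hand over block N   */
    w->frames = frames;
    semaphore_signal(w->work_ready);
}

The render callback only blocks for as long as the previous block is still in flight, which is precisely the deadline the worker's time-constraint parameters have to guarantee.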

We already provide support for this.

In 10.5 we shipped an AU called the "deferred renderer". It is an 'aufc' audio unit, and it plugs into an AU graph (or AU rendering chain) like any other audio unit does. It pulls its input (whatever is connected to it) on a different thread from the one its output is pulled on (whatever thread AudioUnitRender is called on). There are some properties to let you control the interaction, latency, etc., between the calling thread and the thread run by the AU itself.

It's mainly of use to host apps, where portions of a rendering graph can be done on different threads, with a minimal, specifiable latency introduced between the various sections of the graph. You still have, of course, the problem of constructing your graph and knowing where you can thread it in this way, but the intricacies of buffer management, threading policy, time constraints, etc., are all handled within the AU itself.
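For illustration (editorial, not from the thread), wiring the deferred renderer into an AUGraph looks much like inserting any other 'aufc' unit; the latency-related properties Bill mentions are declared in AudioUnitProperties.h and should be checked against your SDK:

#include <AudioToolbox/AudioToolbox.h>

/* Insert the deferred renderer between two existing graph nodes, so that
   everything upstream of it is pulled on the deferred renderer's own
   thread. Error handling is abbreviated. */
static OSStatus insert_deferred_renderer(AUGraph graph,
                                         AUNode upstream, AUNode downstream)
{
    AudioComponentDescription desc = {
        .componentType         = kAudioUnitType_FormatConverter,  /* 'aufc' */
        .componentSubType      = kAudioUnitSubType_DeferredRenderer,
        .componentManufacturer = kAudioUnitManufacturer_Apple
    };

    AUNode deferredNode;
    OSStatus err = AUGraphAddNode(graph, &desc, &deferredNode);
    if (err != noErr) return err;

    /* upstream -> deferred renderer -> downstream */
    err = AUGraphConnectNodeInput(graph, upstream, 0, deferredNode, 0);
    if (err != noErr) return err;
    return AUGraphConnectNodeInput(graph, deferredNode, 0, downstream, 0);
}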

In terms of other "threading" type AUs, both the scheduled slice player and the file player AU have an implicit notion of multi-threading, but with the semantics of deadline-driven computation. With the scheduled slice player, you can schedule buffers for playback from any thread, and when this AU renders, it plays out your buffers of audio at the appropriate times. Essentially it gives you a push model on top of the AU's usual pull-model rendering approach. The file player handles this detail for you (you give it a file, and it schedules the reads, etc., as needed to meet the deadlines of the AU's rendering graph).
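A rough editorial sketch (not from the thread) of pushing a buffer into a kAudioUnitSubType_ScheduledSoundPlayer instance from an arbitrary thread; playback is then kicked off by setting kAudioUnitProperty_ScheduleStartTimeStamp (a sample time of -1 is documented to mean "start now"):

#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

/* Schedule 'frames' frames from 'bufferList' to start playing at sample
   time 'startSample' on the player's output timeline. The slice and its
   buffers must remain valid until the unit has finished with them. */
static OSStatus schedule_slice(AudioUnit player, ScheduledAudioSlice *slice,
                               AudioBufferList *bufferList,
                               UInt32 frames, Float64 startSample)
{
    memset(slice, 0, sizeof(*slice));
    slice->mTimeStamp.mFlags      = kAudioTimeStampSampleTimeValid;
    slice->mTimeStamp.mSampleTime = startSample;
    slice->mNumberFrames          = frames;
    slice->mBufferList            = bufferList;

    return AudioUnitSetProperty(player,
                                kAudioUnitProperty_ScheduleAudioSlice,
                                kAudioUnitScope_Global, 0,
                                slice, sizeof(*slice));
}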

I think it's interesting to explore these a bit, play around with them and see how they can be used to good effect. Comments, etc., are always welcome, and we can certainly look at generating some more documentation or examples in this area (bugreporter.apple.com is a good way to go for requests on these matters).

Bill



On 9 Sept 2009, at 17:42, Richard Dobson wrote:

So, is there anything in this much-hyped technology that is actually a benefit to audio developers, to enable them to do what has not been done before (like, use all "available" cores for audio processing)?

If not, it is another nail in the coffin of the general-purpose computer as an audio processor, and the industry will move even further towards custom hardware in which they can implement whatever parallel processing they need, and have full control over it.

Also, presumably an older dual-core iMac with the Radeon X1600 chipset cannot run OpenCL code, so can it at least degrade gracefully, so that such code still builds and runs on that kind of machine?

On an 8-core Mac Pro, how many cores will we see Logic Pro using?

This all rather reinforces the speculation I have voiced before that, in the headlong rush to concurrent multi-core nirvana, audio will be left behind, or simply ignored or underestimated as a relevant activity.

Richard Dobson






Markus Fritze wrote:
Ehm, you know that GCD is running at the main thread level? That can be blocked by UI operations, etc., which doesn't seem like a wise choice for real-time processing. OpenCL also doesn't have a threading mode, so your tasks will be shared among all the others and your latency becomes unpredictable.
Markus

References:
  • Audio Units and OpenCL? (From: Mike Lemmon <email@hidden>)
  • Re: Audio Units and OpenCL? (From: Jean-Daniel Dupas <email@hidden>)
  • Re: Audio Units and OpenCL? (From: Markus Fritze <email@hidden>)
  • Re: Audio Units and OpenCL? (From: Richard Dobson <email@hidden>)
  • Re: Audio Units and OpenCL? (From: philippe wicker <email@hidden>)
  • Re: Audio Units and OpenCL? (From: William Stewart <email@hidden>)
  • Re: Audio Units and OpenCL? (From: Mike Lemmon <email@hidden>)
