
Re: Audio Units and OpenCL?


  • Subject: Re: Audio Units and OpenCL?
  • From: Stéphane Letz <email@hidden>
  • Date: Thu, 10 Sep 2009 10:11:46 +0200


Message: 1
Date: Wed, 9 Sep 2009 12:10:56 -0700
From: William Stewart <email@hidden>
Subject: Re: Audio Units and OpenCL?
To: philippe wicker <email@hidden>, Murray Jason <email@hidden>, Edward Agabeg <email@hidden>
Cc: CoreAudio list <email@hidden>


On Sep 9, 2009, at 9:56 AM, philippe wicker wrote:

I think that the difficulty in a plugin context is to meet a
short - and known - latency constraint when dispatching a job
to several threads. A solution is to pass the data to be
worked on to some threads during one Render call and get the
result back on the next Render call, or even two Render calls
later, which gives a latency of one or two buffers. To be sure
that the worker threads meet that kind of deadline, they have
to be time-constrained and their scheduling parameters
carefully tuned. My guess is that this is probably a difficult
task for a generic dispatching API such as GCD. Maybe an
ad-hoc, "hand-made" delegation to a limited number of worker
threads would give better results?
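
A minimal sketch of the handoff scheme described above (not from the original message): heavy_dsp() is a hypothetical processing routine, the Handoff struct is assumed to be zero-initialized with work_ready created at count 0 and work_done at count 1, and the worker thread is assumed to be created elsewhere with suitable (time-constraint) scheduling.

#include <dispatch/dispatch.h>
#include <string.h>

#define BLOCK_FRAMES 512

void heavy_dsp(const float *in, float *out, int nframes);  /* hypothetical DSP routine */

typedef struct {
    float input[2][BLOCK_FRAMES];     /* double buffer handed to the worker       */
    float output[2][BLOCK_FRAMES];    /* results picked up one Render call later  */
    int   slot;                       /* which half this Render call fills        */
    dispatch_semaphore_t work_ready;  /* created at 0: signalled by Render        */
    dispatch_semaphore_t work_done;   /* created at 1: first Render reads silence */
} Handoff;

/* Called from the Render callback: submit this buffer, pick up the previous result. */
static void render_handoff(Handoff *h, const float *in, float *out)
{
    int cur = h->slot, prev = cur ^ 1;

    memcpy(h->input[cur], in, sizeof(float) * BLOCK_FRAMES);
    dispatch_semaphore_signal(h->work_ready);

    /* Blocks only if the worker missed its deadline for the previous buffer. */
    dispatch_semaphore_wait(h->work_done, DISPATCH_TIME_FOREVER);
    memcpy(out, h->output[prev], sizeof(float) * BLOCK_FRAMES);

    h->slot = prev;
}

/* Worker thread body: one job per Render call, hence one buffer of latency. */
static void *worker_thread(void *arg)
{
    Handoff *h = arg;
    for (int slot = 0; ; slot ^= 1) {
        dispatch_semaphore_wait(h->work_ready, DISPATCH_TIME_FOREVER);
        heavy_dsp(h->input[slot], h->output[slot], BLOCK_FRAMES);
        dispatch_semaphore_signal(h->work_done);
    }
    return NULL;
}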

We already provide support for this.

In 10.5 we shipped an AU called the "deferred renderer" - it is an
'aufc' audio unit, and it plugs into an AU graph (or AU rendering
chain) just as any other audio unit does. It pulls its input
(whatever is connected to it) on a different thread from the one it
is rendered on (whatever thread AudioUnitRender is called on).
There are properties that let you control the interaction, latency,
etc., between the calling thread and the thread run by the AU
itself.

It's mainly of use to host apps, where portions of a rendering graph
can be done on different threads, with a minimal, specifiable latency
introduced between the various sections of the graph. You still
have, of course, the problem of constructing your graph and knowing
where you can thread it in this way, but the intricacies of buffer
management, threading policy, time constraints, etc., are all handled
within the AU itself.
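
A minimal sketch (not from the original message) of dropping the deferred renderer between two already-connected nodes of an AUGraph; sourceNode and destNode are assumed to exist, and error handling is omitted.

#include <AudioToolbox/AudioToolbox.h>

static void InsertDeferredRenderer(AUGraph graph, AUNode sourceNode, AUNode destNode)
{
    AudioComponentDescription desc = {
        .componentType         = kAudioUnitType_FormatConverter,      /* 'aufc' */
        .componentSubType      = kAudioUnitSubType_DeferredRenderer,
        .componentManufacturer = kAudioUnitManufacturer_Apple
    };

    AUNode deferredNode;
    AUGraphAddNode(graph, &desc, &deferredNode);

    /* Rewire: source -> deferred renderer -> destination.  Input to the
       deferred renderer is now pulled on a different thread from the one
       that calls AudioUnitRender on it; its latency behaviour can be tuned
       via the properties mentioned in the post above. */
    AUGraphDisconnectNodeInput(graph, destNode, 0);
    AUGraphConnectNodeInput(graph, sourceNode, 0, deferredNode, 0);
    AUGraphConnectNodeInput(graph, deferredNode, 0, destNode, 0);

    Boolean updated;
    AUGraphUpdate(graph, &updated);   /* apply the changes if the graph is running */
}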

In terms of other "threading" type AUs, both the scheduled slice
player and the file player AU have an implicit notion of
multi-threading, but with the semantics of deadline-driven
computation. With the scheduled slice player, you can schedule
buffers for playback from any thread, and when this AU renders, it
plays out your buffers of audio at the appropriate times. Essentially
it gives you a push model into the AU's usual pull-model rendering
approach. The file player handles this detail for you (you give it a
file, and it schedules the reads, etc., as needed to meet the
deadlines of the AU's rendering graph).
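
A minimal sketch (not from the original message) of that push model: scheduling one buffer into the scheduled slice player from an arbitrary thread. The slice and its AudioBufferList are assumed to stay valid until the AU has played them, and starting playback (setting kAudioUnitProperty_ScheduleStartTimeStamp) is omitted.

#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

static void ScheduleOneSlice(AudioUnit scheduledPlayer,
                             AudioBufferList *bufferList,
                             UInt32 numberFrames,
                             Float64 startSampleTime,
                             ScheduledAudioSlice *slice)
{
    memset(slice, 0, sizeof(*slice));
    slice->mTimeStamp.mFlags      = kAudioTimeStampSampleTimeValid;
    slice->mTimeStamp.mSampleTime = startSampleTime;  /* when to play, in the AU's timeline */
    slice->mNumberFrames          = numberFrames;
    slice->mBufferList            = bufferList;

    /* May be called from any thread (the "push" side); the AU plays the
       slice out when its render thread reaches the scheduled sample time. */
    AudioUnitSetProperty(scheduledPlayer,
                         kAudioUnitProperty_ScheduleAudioSlice,
                         kAudioUnitScope_Global, 0,
                         slice, sizeof(*slice));
}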

I think it's interesting to explore these a bit, play around with
them and see how they can be used to good effect. Comments, etc., are
always welcome, and we can certainly look at generating some more
documentation or examples in this area (bugreporter.apple.com is a
good way to go for requests on these matters).

Bill


What is available to audio developers through specialized AUs does not solve the issues for people who would like to use the newest Apple technologies (GCD, OpenCL) in audio applications outside of an AU context.


Would it make sense to have a special version of GCD for real-time usage? Right now GCD uses a pool of "normal" threads to execute pending tasks waiting on GCD queues. Why not have a pool of "time-constraint" threads to deal with RT tasks? Then we could have an extended API like:

dispatch_queue_t rt_queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_REAL_TIME, 0);

and then tasks could be enqueued and executed (assuming they do not contain non-RT code... as usual). We would need a way to set time constraints for the pool of RT threads (and assume the entire application would use the same time-constraint parameters).
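
Purely hypothetical usage of the proposed queue (DISPATCH_QUEUE_PRIORITY_REAL_TIME does not exist in today's GCD, and do_rt_work() is an invented placeholder):

dispatch_async(rt_queue, ^{
    /* RT-safe work only: no allocation, no locks, no blocking I/O. */
    do_rt_work(buffer, num_frames);   /* invented placeholder */
});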

[In the JACK on OSX project (http://www.jackosx.com/), we have JACK clients define a "time-constraint" thread that just uses the same time-constraint parameters currently used by the RT thread of the JACK server. So in essence we duplicate the time-constraint parameters given by CoreAudio to the JACK server across all JACK clients... and this works quite well.]
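
For reference (not from the original message), applying such time-constraint scheduling to a thread comes down to the standard Mach call below; the period/computation/constraint values would be the ones copied from the server, as described above.

#include <mach/mach.h>
#include <mach/thread_policy.h>
#include <pthread.h>
#include <stdint.h>

static int set_time_constraint_policy(pthread_t thread,
                                      uint32_t period,       /* cycle length, in Mach absolute time units */
                                      uint32_t computation,  /* CPU time needed per cycle                 */
                                      uint32_t constraint)   /* deadline within the cycle                 */
{
    thread_time_constraint_policy_data_t policy = {
        .period      = period,
        .computation = computation,
        .constraint  = constraint,
        .preemptible = 1
    };
    kern_return_t res = thread_policy_set(pthread_mach_thread_np(thread),
                                          THREAD_TIME_CONSTRAINT_POLICY,
                                          (thread_policy_t)&policy,
                                          THREAD_TIME_CONSTRAINT_POLICY_COUNT);
    return (res == KERN_SUCCESS) ? 0 : -1;
}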

Comments?

Stéphane Letz





  • Follow-Ups:
    • Re: Audio Units and OpenCL?
      • From: Markus Fritze <email@hidden>
    • Re: Audio Units and OpenCL?
      • From: philippe wicker <email@hidden>