Re: audio computation and feeder threads


  • Subject: Re: audio computation and feeder threads
  • From: Lionel Woog <email@hidden>
  • Date: Fri, 28 Feb 2003 23:24:38 -0500

Typically, an IOProc will be called with a 512-frame request. If you can
generate that much in, or close to, real time, you are fine. If you cannot
always do that (e.g. you generate 8K frames at a time, so the lag to produce
the first 512 frames is large), then you need a feeder thread that
pre-computes and buffers the data.
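
To make that concrete, here is a minimal sketch of the pattern in plain C
with pthreads. The ring buffer, the names, the sizes, and the sleep-based
waits are placeholders of my own invention, not recommendations:

#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define RING_FRAMES  16384u   /* power of two, larger than one chunk */
#define CHUNK_FRAMES 8192u    /* the synth works in big chunks       */

static float ring[RING_FRAMES];
static volatile unsigned writePos = 0;  /* advanced only by the feeder */
static volatile unsigned readPos  = 0;  /* advanced only by the IOProc */

/* Free-running counters: with a power-of-two ring this subtraction
   stays correct even after the unsigned counters wrap. */
static unsigned framesBuffered(void) { return writePos - readPos; }

/* Feeder thread: pre-computes audio well ahead of the IOProc. */
static void *feederThread(void *arg)
{
    (void)arg;
    for (;;) {
        /* Wait until a whole chunk fits in the free space. */
        while (RING_FRAMES - framesBuffered() < CHUNK_FRAMES)
            usleep(1000);               /* real code would block properly */
        for (unsigned i = 0; i < CHUNK_FRAMES; i++)
            ring[(writePos + i) % RING_FRAMES] = 0.0f;  /* synthesis here */
        writePos += CHUNK_FRAMES;
    }
    return NULL;
}

/* Called from the IOProc with e.g. a 512-frame request: it never
   blocks and never synthesizes, it only copies buffered data out. */
static void pullFrames(float *dst, unsigned n)
{
    if (framesBuffered() < n) {                  /* underrun */
        memset(dst, 0, n * sizeof(float));       /* emit silence */
        return;
    }
    for (unsigned i = 0; i < n; i++)
        dst[i] = ring[(readPos + i) % RING_FRAMES];
    readPos += n;
}

/* Toy driver standing in for ten 512-frame IOProc callbacks. */
int main(void)
{
    pthread_t t;
    float out[512];
    pthread_create(&t, NULL, feederThread, NULL);
    for (int i = 0; i < 10; i++) {
        pullFrames(out, 512);
        printf("callback %d: %u frames still buffered\n",
               i, framesBuffered());
        usleep(11610);                  /* ~512 frames at 44.1 kHz */
    }
    return 0;
}

In production code the feeder would block on a semaphore or condition
variable instead of sleeping, and the volatile counters would need real
atomic operations or memory barriers; volatile alone is not a
synchronization primitive.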

I believe you will find Core Audio very capable of keeping itself fed.

> I am trying to glean from all the information which has gone by on this list
> what I need for a general plan for thread design in my app. I would like to
> focus for the moment on the case of audio output only, when the audio is
> being synthesized algorithmically. I would specifically like to include the
> possibility of multiple IOProcs being active to support multiple output
> devices running at once.
>
> The topic of feeder threads has come up a lot, although I believe usually
> this has been in connection with playing audio files.
>
> I am trying to decide when it is a worthy thing to use a feeder thread in
> connection with an IOProc thread. The following thoughts come to mind as I
> try to put together my ideas on this, and I would appreciate feedback.
>
> First of all, the mere fact of outputting synthesized audio does not in
> itself appear to constitute a reason for having a feeder thread. I am
> assuming (though maybe I am wrong) that Apple's audio units do not have
> any multi-thread decoupling/buffering going on in them - particularly audio
> units that do synthesis from MIDI would be the issue here. Can I assume
> that the DLS Synth (which I know _absolutely_ nothing about, yet need to use
> here as an example) does its synthesis right in the IOProc thread? If yes,
> then can I assume that this is therefore an "ok thing"?
>
> So, I can think of several reasons to use a feeder thread (together with
> appropriate buffering and consequent additional latency) to feed synthesized
> audio to an IOProc thread:
>
>
> (1) to keep blocking calls out of the IOProc
>
> (2) to buffer against irregularities in the synthesis process, possibly
> allowing early detection of a processing lag, allowing corrective responses
> (e.g. reduced precision) to be applied gracefully rather than having to be
> applied instantly (e.g. note dropping)
>
> (3) to buffer against irregularities in system performance, such as some
> "randomness" in the scheduler together with the unpredictable nature of the
> demands put on the scheduler
>
> (4) to buffer against much nastier things like thread-performance
> inter-dependencies caused by caching issues. For example, suppose two
> memory-hungry threads (possibly even two different IOProcs) happen to get
> scheduled in parallel on 2 CPUs, with the result that performance drops
> sharply in both because of collisions in use of memory. It might have been
> better if they had been scheduled serially on the same CPU. But I assume
> there is nothing in the scheduler that could recognize and avoid these kinds
> of situations, even heuristically.
>
> (5) perhaps - to make good use of multiple processors on the system when
> they are available. In this case I am not so much thinking of things like
> cache problems, but rather how to balance the synthesis computations across
> all available processors, while not interfering with IOProc performance.
> For example I could spawn a bunch of synthesis feeder threads, possibly more
> threads than there are processors, so that the scheduler is left with lots
> of flexibility - in case other system loads cannot be distributed evenly
> between the processors, my own threads can take up the slack wherever it
> exists.
>
> (6) to get back that extra 10% of the 90% that the HAL will allow, as per
> Jeff Moore's message of 2/26/03 6:02 PM:
>
>> The HAL has a hard limit of roughly 90% CPU usage as measured against
>> the deadlines it calculates internally before it will put you in the
>> penalty box and issue kAudioDevicePropertyOverload notifications (this
>
>
> Am I basically on the right track in my thinking here? Is that just about
> it? Are there any other compelling reasons for using a feeder thread?
>
> Item (5) is particularly of interest to me right now. I first tried
> improving performance on a 2-processor machine with 2 output devices active
> by doing synthesis independently in each IOProc thread. I found that in
> some cases I got a 50% increase in performance, and in other cases no
> reliable improvement at all. In particular my AltiVec-optimized synthesis
> got no reliable increase, and in fact sometimes a sharp drop. This is in
> spite of attempts to keep memory use to a bare minimum, although I
> can't prove that there were not many scattered accesses to memory that did
> not happen to collide badly in their timing on 2 processors. Anyway, not
> to dwell on these details here.
>
>
> So, if I am basically on the right track with my thinking, is it fair to
> say that optimized audio synthesis directed towards multiple IOProcs should
> probably always use feeder threads, if the goal is to be able to get as
> close as possible to saturating the CPU?
>
> And if so... would it be useful - and possible - to create one or more audio
> units whose sole purpose is to decouple its pulling callback from its
> rendering callback? Would such an audio unit make it possible for audio
> developers to deal with a much broader set of requirements without having to
> develop so much in-house expertise on thread management? I envision an
> audio unit that permits sufficiently flexible control over buffering and
> latency (and thread priority?) that almost all audio application thread
> management could be vastly simplified.
>
> Note that this technology would also make possible an output audio unit
> that could span any number of output devices, possibly running at
> different sample rates. Obviously I'm glossing over a lot of detail here -
> but I'm actually hoping that someone at Apple will do this, in which case
> all the details will be taken care of!
>
>
> Thanks,
> Kurt Bigler
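
P.S. Regarding your item (6): here is a minimal sketch of watching for
those overload notifications with the HAL's property-listener API. Note
that the constant is spelled kAudioDeviceProcessorOverload in the copy of
AudioHardware.h I have here; the function names are my own, error handling
is omitted, and the default output device is only an example target.

#include <CoreAudio/AudioHardware.h>
#include <stdio.h>

/* Invoked by the HAL when it decides a deadline was blown; a real
   app might respond by dropping precision or notes, per item (2). */
static OSStatus overloadListener(AudioDeviceID         inDevice,
                                 UInt32                inChannel,
                                 Boolean               isInput,
                                 AudioDevicePropertyID inPropertyID,
                                 void                 *inClientData)
{
    (void)inChannel; (void)isInput; (void)inPropertyID; (void)inClientData;
    fprintf(stderr, "processor overload on device %lu\n",
            (unsigned long)inDevice);
    return noErr;
}

static void watchForOverloads(void)
{
    AudioDeviceID device;
    UInt32        size = sizeof(device);

    /* Example target: the default output device. */
    AudioHardwareGetProperty(kAudioHardwarePropertyDefaultOutputDevice,
                             &size, &device);
    AudioDeviceAddPropertyListener(device, 0, false,
                                   kAudioDeviceProcessorOverload,
                                   overloadListener, NULL);
}

Getting the notification is of course only half the job; deciding what to
shed when it arrives is the hard part.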

--
Lionel Woog, Ph.D.
CTO
Adapted Wave Technologies, Inc.
email: email@hidden
phone: 212-645-0670
_______________________________________________
coreaudio-api mailing list | email@hidden
Help/Unsubscribe/Archives: http://www.lists.apple.com/mailman/listinfo/coreaudio-api
Do not post admin requests to the list. They will be ignored.
