
Re: CPU Usage difference


  • Subject: Re: CPU Usage difference
  • From: Ian Kemmish <email@hidden>
  • Date: Fri, 6 Mar 2009 20:56:01 +0000


On 6 Mar 2009, at 15:16, Richard Dobson <email@hidden> wrote:


> It has to be said though, that the implications of multi-core (in all
> its many forms) for audio have barely been considered in the musical or
> technical press (or academic press for that matter), and only then in
> the most vague and general terms. I indeed await Snow Leopard to learn
> just what it offers, but will not be able to get a good idea in the
> absence of a quad or eight-core machine.


That may be because it's a vague problem :-)

All through the '90s, OEMs kept asking me when I would provide a proper multi-processing version of our PostScript ripping software. Every time I looked into it, it turned out that doing it properly would take longer than it would take CPU manufacturers to double the speed of their products.... so it never got done!

The problems with real-time audio on multi-core CPUs, I think, hinge very specifically on the limited amount of time you have available to do things, which demands very fine-grained synchronisation. In an asymmetric multi-core CPU such as Cell, this is designed in from the ground up, and I've certainly roughed out a design for my synth running on a PS3 (and if anyone ever ports a legal clone of Aqua and AppKit to the PS3, I may actually do it :-))

But on a symmetric multi-core CPU running a general-purpose symmetric multiprocessing OS, the problems become much harder. If you have a two-core CPU and splitting your hypothetical reverb AU across both cores means that each core spends 50% of its time in synchronisation code, then your net speed gain is precisely zero! Of course you can put completely independent bits of an audio graph on different cores, but at some point they have to be synchronised and mixed together (I guess the Nodes people know all about how to do this).

This might work if, as advertised, Snow Leopard lets you offload calculations onto the GPU, and _if_ it lets you treat the GPU as a coprocessor, so that there are no synchronisation issues. But then the OS has the problem of virtualising that precious resource so that more than one AU can use it.....

At least, that's what _I_ think is hard about it.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Ian Kemmish 18 Durham Close, Biggleswade, Beds SG18 8HZ
email@hidden Tel: +44 1767 601361 Mob: +44 7952 854387
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -



_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden