
Re: Swift and DSP/Audio


  • Subject: Re: Swift and DSP/Audio
  • From: Paul Davis <email@hidden>
  • Date: Wed, 04 Jun 2014 16:01:49 -0400




On Wed, Jun 4, 2014 at 3:50 PM, Ian Kemmish <email@hidden> wrote:
Naturally, any parallel language or environment is suitable for signal processing.

Equally naturally, almost none of them will be suitable for *real-time* signal processing, which I suspect is what you're interested in doing.

Fashion dictates that current commercial multiprocessing products follow the symmetric multiprocessing model, where all cores are seen as equal and able to be deployed on any task (even where physically, they may not be, such as in GPGPU applications - you can't tell afterwards whether your code got run on one of the main CPU's cores or on one of the GPUs).  The only available core might be in the middle of something Really Important just at the moment your audio engine wants to use it. Oops :-(

The more suitable parallel model for real-time work is asymmetric multiprocessing, where one core decides exactly what all the others will be doing, and when they will be doing it.  The Cell processor used in Sony PS3s did it this way.  It is said to be very difficult to develop code for such an environment.  I was once going to get a PS3 to try it, but then I managed to wring 30,000 real-time sine oscillators out of my iMac and the desire went away :-)

Hope this hasn't been entirely off topic....

No, but it does confuse issues at rather different levels.

Core availability is the domain of the operating system kernel, not the programming language. You can write very careful assembler, but if the scheduler doesn't give your task a core, it makes no difference how well optimized the code was.

The main reason for the prevalence of symmetric MP is that NUMA (non-uniform memory access) systems have really hard caching problems that need to be solved at the OS and hardware level, and in general, never really have been. As technology made reasonably large-scale symmetric MP systems cheaper and readily available, people have naturally gravitated towards them - they actually work (*). You can easily use a master thread + worker thread pool model on a symmetric MP system, and for many people it seems to come more naturally than a design without a master thread.

GPUs continue, thus far, to remain computationally impressive but latency-constrained. The language used to program them has little to no impact on this.

The issue with LANGUAGE choice has almost nothing to do with the above sorts of issues, but has everything to do with (a) the relative slowdown caused by language features that depend on an (invisible) runtime component (e.g. Objective C's method dispatch **) (b) memory allocation policies within the runtime (e.g. garbage collection at inopportune moments).
 

(*) Ask me about Kendall Square Research machines. Or don't, please.
(**) Compare what actually happens when the CPU executes the machine code for the Objective-C statement "[ object method ]" with what happens when executing the C++ statement "object->method()".

 _______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list      (email@hidden)
Help/Unsubscribe/Update your Subscription:

This email sent to email@hidden

References:
  • Re: Swift and DSP/Audio (From: Ian Kemmish <email@hidden>)
