
Re: Real-time buffer playback in Swift?


  • Subject: Re: Real-time buffer playback in Swift?
  • From: Paul Davis <email@hidden>
  • Date: Wed, 17 Sep 2014 12:20:24 -0400


Continuing on ....

On Wed, Sep 17, 2014 at 11:46 AM, Zack Morris <email@hidden> wrote:

So the main problem seems to be stop-the-world garbage collection. 

and implicit memory allocation/free.
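To make "implicit" concrete, here is a minimal Swift sketch (Voice and renderBad are my own illustration, not code from this thread): none of these lines mentions malloc, yet each one can allocate or free heap memory behind your back, which is exactly what a real-time render callback cannot afford.

// minimal sketch: every commented line below can hit the heap
import Foundation

final class Voice { var phase: Float = 0 }   // hypothetical type, for illustration

func renderBad(frames: Int) {
    var samples = [Float]()          // growable array: reallocates as it grows
    for _ in 0..<frames {
        samples.append(0)            // may call malloc/free under the hood
    }
    let voice = Voice()              // class instance: heap allocation plus ARC traffic
    let label = "frames=\(frames)"   // string interpolation allocates too
    _ = (samples, voice, label)
}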
 
Generally garbage collection is implemented pretty poorly.  For example, the implementers make assumptions like: the process will be long-lived, memory constraints will be tight, all of the threads have to be stopped in case they share memory, etc.  Small, one-off processes like the kind run under Apache likely don't need garbage collection.  So why can't functions in high level languages work more like goroutines (from Go), except running in their own isolated processes, and avoid most of these issues?

because context switching between processes implies a TLB flush, the cost of which depends on the size of the working set of the process being switched to. Either way, it is potentially MUCH more expensive than a thread context switch, in which the TLB remains untouched. This is particularly true if the switched-to process is handling many audio (or video) streams, thus implying that it will touch several (data) pages during its execution time, and thus requiring more refills of the TLB (which is the truly slow part of a context switch).
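For a rough feel for the gap, here is a minimal sketch you could run yourself (modern command-line Swift on OS X; echo, drive, and makePipe are my own names, not anything from this thread): it ping-pongs one byte over pipes, once between two threads sharing an address space and once between fork()ed processes. The absolute numbers are machine-dependent; the difference between the two is the point.

// minimal sketch, macOS command-line Swift
import Foundation

let iterations = 100_000

// echo loop: read one byte, write it straight back
func echo(inFD: Int32, outFD: Int32) {
    var b: UInt8 = 0
    for _ in 0..<iterations {
        read(inFD, &b, 1)
        write(outFD, &b, 1)
    }
}

// drive one round trip per iteration, return mean ns per round trip
func drive(outFD: Int32, inFD: Int32) -> UInt64 {
    var b: UInt8 = 0
    let start = clock_gettime_nsec_np(CLOCK_UPTIME_RAW)
    for _ in 0..<iterations {
        write(outFD, &b, 1)
        read(inFD, &b, 1)
    }
    return (clock_gettime_nsec_np(CLOCK_UPTIME_RAW) - start) / UInt64(iterations)
}

func makePipe() -> (r: Int32, w: Int32) {
    var fds = [Int32](repeating: 0, count: 2)
    pipe(&fds)
    return (fds[0], fds[1])
}

// threads: the echo side shares our address space, so switching to it
// never forces a page-table switch (no TLB flush)
let (tr1, tw1) = makePipe()            // driver -> echo
let (tr2, tw2) = makePipe()            // echo -> driver
Thread.detachNewThread { echo(inFD: tr1, outFD: tw2) }
print("thread  round trip: \(drive(outFD: tw1, inFD: tr2)) ns")

// processes: every round trip switches to a different address space,
// invalidating TLB entries that then have to be refilled
let (pr1, pw1) = makePipe()
let (pr2, pw2) = makePipe()
if fork() == 0 {                       // child sticks to async-signal-safe calls
    echo(inFD: pr1, outFD: pw2)
    _exit(0)
}
print("process round trip: \(drive(outFD: pw1, inFD: pr2)) ns")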

we know all this from very careful measurement of JACK, which does in fact allow totally separate processes to be chained together in (platform-specific) highly efficient ways, and to share data and all that stuff. what we know is that this model functions OK at low latencies when the number of processes is low. it totally falls apart as the number of processes grows. so sure, if you want to glue together a couple of disparate blobs of DSP written in <whatever>, you can use this sort of approach (you can actually do it today). but scale up to large processing chains, more participating components, and it falls apart.

garbage collection isn't related to the size of the process, but to the design of the language/libraries used by the process. you could write a very small java program that required garbage collection and a very very large C application that did not. if what you really mean is something more simplistic, along the lines of "small blobs of code don't need to micro-manage their memory allocation in the way that big blobs of code do", then that might be true. but what this means in the real world is hard to say, especially if you are writing a host application (big blob of code) which loads arbitrary other blobs of code (small to large).
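For what that micro-management discipline typically looks like in a real-time context, a minimal Swift sketch (RenderState and its shape are my invention, not an API from this thread): do the one explicit allocation during setup, and let the real-time thread touch only memory that already exists.

// minimal sketch: allocate up front, render from preallocated memory
import Foundation

struct RenderState {
    let capacity: Int
    let buffer: UnsafeMutablePointer<Float>

    init(capacity: Int) {
        self.capacity = capacity
        // the one explicit allocation, done before playback starts
        buffer = UnsafeMutablePointer<Float>.allocate(capacity: capacity)
        buffer.initialize(repeating: 0, count: capacity)
    }

    // safe from a real-time thread: no allocation, no locks, no ARC
    // retain/release, just arithmetic on memory that already exists
    func render(into out: UnsafeMutablePointer<Float>, frames: Int) {
        for i in 0..<min(frames, capacity) {
            out[i] = buffer[i]
        }
    }

    // teardown happens back on a non-real-time thread
    func dispose() { buffer.deallocate() }
}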
 

  • Follow-Ups:
    • Re: Real-time buffer playback in Swift?
      • From: Zack Morris <email@hidden>
  • References:
    • Re: Real-time buffer playback in Swift? (From: Ian Kemmish <email@hidden>)
    • Re: Real-time buffer playback in Swift? (From: Zack Morris <email@hidden>)
