Re: Real-time buffer playback in Swift?
- Subject: Re: Real-time buffer playback in Swift?
- From: Zack Morris <email@hidden>
- Date: Wed, 17 Sep 2014 11:09:40 -0600
On Sep 17, 2014, at 10:20 AM, Paul Davis <email@hidden> wrote:
>
> Continuing on ....
>
> On Wed, Sep 17, 2014 at 11:46 AM, Zack Morris <email@hidden> wrote:
>
> So the main problem seems to be stop-the-world garbage collection.
>
> and implicit memory allocation/free.
>
> Generally garbage collection is implemented pretty poorly. For example, they make assumptions like: the process will be long-lived, memory constraints will be tight, all of the threads have to be stopped in case they share memory, etc. Small, one-off processes like the kind run in Apache likely don't need garbage collection. So why can’t functions in high level languages work more like goroutines (from Go), except running in their own isolated processes, and avoid most of these issues?
>
> because context switching between processes implies a TLB flush, the cost of which depends on the size of the working set of the process being switched to. Either way, it is potentially MUCH more expensive than a thread context switch, in which the TLB remains untouched. This is particularly true if the switched-to process is handling many audio (or video) streams, thus implying that it will touch several (data) pages during its execution time, and thus requiring more refills of the TLB (which is the truly slow part of a context switch).
>
> we know all this from very careful measurement of JACK, which does in fact allow totally separate processes to be chained together in (platform-specific) highly efficient ways, and to share data and all that stuff. what we know is that this model functions OK at low latencies when the number of processes is low. it totally falls apart as the number of processes grows. so sure, if you want to glue together a couple of disparate blobs of DSP written in <whatever>, you can use this sort of approach (you can actually do it today). but scale up to large processing chains, more participating components, and it falls apart.
>
> garbage collection isn't related to the size of the process, but to the design of the language/libraries used by the process. you could write a very small java program that required garbage collection and a very very large C application that did not. if what you really mean is something more simplistic, along the lines of "small blobs of code don't need to micro-manage their memory allocation in the way that big blobs of code do", then that might be true. but what this means in the real world is hard to say, especially if you are writing a host application (big blob of code) which loads arbitrary other blobs of code (small to large).
Ya, good points, and I definitely don't disagree that processing time and unpredictable overheads matter. I'm just asking: at what point will computers be fast enough that we can work with more forgiving languages? Imagine some photonic CPU comes out tomorrow running at a terahertz, but it still can't play an mp3 without a lot of specialty realtime APIs provided by the kernel, because waking up after a context switch still takes on the order of milliseconds. From the CPU's perspective it would be like the history of the Earth (roughly a trillion days) passing every second, and yet it can't handle the load of spitting out a sample every 15,000 years or so (longer than humans have had civilization). That's a back-of-the-napkin calculation: 1 trillion cycles per second divided by 175,000 samples per second is about 5.7 million cycles per sample, or 5.7 million "days" in that analogy.
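Spelled out as a tiny Swift snippet, where every number is the rough figure above rather than a measurement:

    // Napkin math only; all values are illustrative.
    let cyclesPerSecond  = 1e12        // hypothetical 1 THz CPU
    let samplesPerSecond = 175_000.0   // a high-rate audio stream
    let cyclesPerSample  = cyclesPerSecond / samplesPerSecond   // ~5.7 million
    // If one cycle "feels like" one day to the CPU, each sample
    // deadline arrives ~5.7 million days apart:
    let yearsPerSample   = cyclesPerSample / 365.25             // ~15,600 years
    print(cyclesPerSample, yearsPerSample)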
I think on some level the way computers perform context switching is wrong. It probably won't be fixed in typical UNIX kernels anytime soon, so I'm wondering if there's a way to sidestep the kernel and queue everything up ahead of time to iron out the kinks of sleeping for so long between wakeups (see the sketch below). I realize that probably isn't possible when we need latencies under 30 milliseconds for realtime audio processing. But if CoreAudio can do it, why have the separation between it and normal kernel processes at all? I think that's an honest question.
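On the queueing idea: the new AVAudioEngine APIs already smooth things over this way; you schedule buffers ahead of time and the realtime machinery drains them, so your own thread never faces a hard deadline. A sketch, where the buffer count and size are arbitrary and the buffers are left as silence where real code would fill in samples:

    import AVFoundation

    // Sketch: pre-queue buffers so the realtime side never waits on us.
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    engine.attach(player)
    engine.connect(player, to: engine.mainMixerNode, format: nil)
    try engine.start()

    let format = engine.mainMixerNode.outputFormat(forBus: 0)
    let frames: AVAudioFrameCount = 4096
    for _ in 0..<8 {
        let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frames)!
        buffer.frameLength = frames
        if let ch = buffer.floatChannelData {
            for c in 0..<Int(format.channelCount) {
                for i in 0..<Int(frames) { ch[c][i] = 0 }   // placeholder: silence
            }
        }
        player.scheduleBuffer(buffer) {
            // Completion fires on a non-realtime thread; refill and
            // reschedule here to keep the queue topped up.
        }
    }
    player.play()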
Totally wacky idea here, but what if someone made CoreAudio's realtime API the basis of a userland "kernel", where a language like Go acted as the OS and applications were separate processes running within it, communicating through pipes? Then the rest of the OS could be shut off and we'd have a realtime system.
I'm sure there are lots of problems with dog-slow hard drive latencies and things like that, but maybe it's time to just get rid of spinning disks and go straight to flash, treating hard drives as remote streams with the same latencies and considerations. There are probably lots of other limitations on what CoreAudio callbacks can and can't do that I don't understand. I don't even know how a CoreAudio callback is fundamentally different from a high-priority thread. If someone knows, please enlighten us!
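The closest I've gotten to an answer so far: the render callback fires on a thread that Core Audio promotes to the Mach time-constraint ("realtime") scheduling class, which guarantees it a periodic slice of CPU, whereas an ordinary high-priority thread only gets a better place in the normal run queue. Any thread can request the same class itself. A sketch in Swift, where the period and computation numbers are invented for illustration:

    import Darwin

    // Ask the Mach scheduler to treat the current thread as realtime,
    // i.e. (roughly) what Core Audio does for its I/O thread.
    func promoteCurrentThreadToRealtime() {
        var timebase = mach_timebase_info_data_t()
        mach_timebase_info(&timebase)
        let ticksPerNs = Double(timebase.denom) / Double(timebase.numer)

        // Illustrative: wake every ~2.9 ms (128 frames @ 44.1 kHz) and
        // promise to need at most ~0.5 ms of CPU before each deadline.
        var policy = thread_time_constraint_policy(
            period:      UInt32(2_900_000.0 * ticksPerNs),
            computation: UInt32(500_000.0 * ticksPerNs),
            constraint:  UInt32(2_900_000.0 * ticksPerNs),
            preemptible: 1)

        let count = mach_msg_type_number_t(
            MemoryLayout<thread_time_constraint_policy>.size
                / MemoryLayout<integer_t>.size)
        withUnsafeMutablePointer(to: &policy) { ptr in
            ptr.withMemoryRebound(to: integer_t.self, capacity: Int(count)) {
                _ = thread_policy_set(mach_thread_self(),
                                      thread_policy_flavor_t(THREAD_TIME_CONSTRAINT_POLICY),
                                      $0, count)
            }
        }
    }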
Zack Morris