Re: Grand Central Dispatch in audio render callback
- Subject: Re: Grand Central Dispatch in audio render callback
- From: Brian Willoughby <email@hidden>
- Date: Fri, 29 Jul 2011 14:19:15 -0700
I disagree with your phrasing on a couple of points.
First, by every definition of "real time" that I am aware of,
CoreAudio is absolutely real time. It may be processing one buffer
at a time rather than one sample at a time, but that is still real
time. Besides, as the buffer size is made smaller and smaller, the
computer ends up working as hard as if it were literally processing
one sample at a time. If you miss the time slot for even a single
buffer by going over the allotted time, then there will be a
real-time glitch. In computing terms, "real time" means that a
process representing a timeline must run in sync with the wall clock
(real time). You may be confusing latency with near real time, but
they are not the same concept. CoreAudio is not zero latency, but it
is real time.
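
To make that per-buffer deadline concrete, here is a minimal sketch
of a render callback (the sine-generator state and names are mine,
purely for illustration, assuming a non-interleaved Float32 stream
format). Whatever happens inside this function has to finish and
return before the hardware needs the buffer, every single time:

    #include <AudioUnit/AudioUnit.h>
    #include <math.h>

    typedef struct {
        double phase;       // running phase of the illustrative sine
        double sampleRate;  // e.g. 44100.0
    } SineState;

    // Called by CoreAudio on its real-time thread, once per hardware
    // buffer.  Everything here must complete before the buffer's
    // deadline or the output glitches.
    static OSStatus RenderSine(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
    {
        SineState *state = (SineState *)inRefCon;
        double phaseStep = 2.0 * M_PI * 440.0 / state->sampleRate;
        double startPhase = state->phase;

        for (UInt32 buf = 0; buf < ioData->mNumberBuffers; buf++) {
            Float32 *out = (Float32 *)ioData->mBuffers[buf].mData;
            double phase = startPhase;
            for (UInt32 i = 0; i < inNumberFrames; i++) {
                out[i] = (Float32)(0.25 * sin(phase));
                phase += phaseStep;
            }
        }
        state->phase = fmod(startPhase + phaseStep * inNumberFrames,
                            2.0 * M_PI);
        return noErr;
    }

Nothing about it being a buffer rather than a single sample relaxes
that constraint; it only changes how often the deadline comes around.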
Second, when you say that "the audio is prepared entirely ahead of
time," I also find that misleading. The entire audio sequence is not
prepared ahead of time, only a tiny fraction of a second of audio is
prepared ahead of time. The entire rest of that second of audio is
prepared on-the-fly, or real-time.
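
To put a number on that tiny fraction (using 44.1 kHz and a 512-frame
I/O buffer purely as illustrative figures): 512 / 44100 is roughly
0.0116 seconds, so only about 12 milliseconds of audio exists ahead
of the playhead at any moment, and the callback has to produce the
next 12 milliseconds of audio within the next 12 milliseconds of
wall-clock time.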
I almost see the point you're trying to make, but I believe that the
situation is slightly different than you describe. If thread
coordination had no cost, then there actually would be a benefit to
doing things in parallel, even with the buffer latency. The real
reason that parallel processing is not usually beneficial is not
because of the ahead-of-time processing, but because of the overhead
of multi-processing and coordination.
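
For what it's worth, here is a rough sketch of what that coordination
looks like if you try to split one buffer across cores with GCD from
inside the render callback (ProcessRange, the queue choice, and the
half-and-half split are mine, purely for illustration; blocking the
real-time thread like this is exactly what is usually advised
against):

    #include <dispatch/dispatch.h>
    #include <CoreAudio/CoreAudioTypes.h>

    // Hypothetical per-sample DSP routine; stands in for the real work.
    extern void ProcessRange(Float32 *samples, UInt32 frameCount);

    // Split the buffer in half, process the halves on a concurrent
    // queue, and block until both finish.  The enqueue, worker wake-up,
    // and wait are the coordination overhead; for a few hundred frames
    // that overhead can easily exceed the work being parallelized.
    static void ProcessBufferInParallel(Float32 *samples,
                                        UInt32 frameCount)
    {
        dispatch_group_t group = dispatch_group_create();
        dispatch_queue_t queue =
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
        UInt32 half = frameCount / 2;

        dispatch_group_async(group, queue, ^{
            ProcessRange(samples, half);                      // 1st half
        });
        dispatch_group_async(group, queue, ^{
            ProcessRange(samples + half, frameCount - half);  // 2nd half
        });

        // The render callback cannot return until the audio is ready,
        // so the real-time thread must wait here; scheduling jitter on
        // the worker threads now counts against the buffer deadline.
        dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
        dispatch_release(group);  // manual dispatch object management
    }

Even when both halves finish quickly, the callback has already paid
for two enqueues, two thread wake-ups, and a wait; that fixed cost
only pays off when the per-buffer work is large compared to the
buffer duration itself.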
Brian Willoughby
Sound Consulting
On Jul 29, 2011, at 10:46, Gregory Wieber wrote:
Keep in mind that the render callback is actually processing data
slightly before it is audible -- roughly a buffer length ahead of
time. Running tasks in parallel is therefore not as necessary as you
might think, because you're not really processing things in real
time -- it's 'near' real time. You have a certain number of CPU
cycles to spend, and once they're spent that's it -- but because
the audio is prepared entirely ahead of time, there's not really
any benefit to doing things in parallel -- which, as pointed out by
Kyle, has been covered elsewhere on the list.