Re: audio computation and feeder threads
- Subject: Re: audio computation and feeder threads
- From: Kurt Bigler <email@hidden>
- Date: Tue, 04 Mar 2003 15:45:02 -0800
on 3/3/03 12:09 PM, Chris Rogers <email@hidden> wrote:
> Kurt,
>
> It *is* OK to do processing/synthesis directly in the IOProc thread,
> and this is what the DLS synth does. In general, if latency is an issue
> for you, either because you're processing live audio input or because
> you're responding to live MIDI events, then doing things directly
> will work better for you. For each buffer you always have to make
> sure the CPU bandwidth you take up is less than roughly 90% (may be
> less with certain audio devices) to avoid glitches. If your application
> does not need extremely low latency, then using a feeder-thread
> buffering scheme is a very good idea to smooth out irregularities in
> CPU usage from buffer to buffer, and may even allow you to gain some
> of that extra 10% of the CPU. The tradeoff here is latency vs.
> processing power.
Thanks. It sounds like I am on the right track with my thinking. Now I
just have to figure out how to implement it!
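For concreteness, here is the sort of direct-in-the-IOProc rendering I take
Chris to mean. This is only an untested sketch, assuming a single interleaved
Float32 output stream at 44.1 kHz; all the names are mine:

#include <CoreAudio/CoreAudio.h>
#include <math.h>

/* Untested sketch: synthesize directly on the I/O thread, the way the
   DLS synth does its processing. Assumes one interleaved Float32
   output stream at a 44.1 kHz nominal sample rate. */
static OSStatus MyIOProc(AudioDeviceID inDevice,
                         const AudioTimeStamp *inNow,
                         const AudioBufferList *inInputData,
                         const AudioTimeStamp *inInputTime,
                         AudioBufferList *outOutputData,
                         const AudioTimeStamp *inOutputTime,
                         void *inClientData)
{
    static double phase = 0.0;
    AudioBuffer *buf = &outOutputData->mBuffers[0];
    Float32 *out = (Float32 *)buf->mData;
    UInt32 chans = buf->mNumberChannels;
    UInt32 frames = buf->mDataByteSize / (chans * sizeof(Float32));
    for (UInt32 i = 0; i < frames; i++) {
        Float32 s = (Float32)(0.1 * sin(phase));        /* quiet 440 Hz tone */
        for (UInt32 c = 0; c < chans; c++)
            out[i * chans + c] = s;
        phase += 2.0 * M_PI * 440.0 / 44100.0;
    }
    /* Keep this path fast: the work per buffer has to stay well under
       the ~90% CPU budget Chris mentions, or the device will glitch. */
    return noErr;
}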
I am going to try to solve the problem for the case of simultaneous output
to multiple devices (with separate IOProcs). This makes it more difficult,
perhaps horrendously so, I realize. Perhaps this is why I got no "bites" on
my suggestion to have a multi-device output unit (provided by Apple). I
know I will have to solve the problem of sample-rate conversion on the
fly, using the techniques suggested by Jeff Moore in the thread "Re:
Timing & Callbacks":
on 2/24/03 10:57 AM, Jeff Moore <email@hidden> wrote:
> The HAL provides enough timing information about the devices in
> question that it is possible to judge how much faster or slower device
> A is going than Device B. This is known by tracking the relationship
> each device's sample time has with host time. Armed with this
> information, you can make decisions about the amount of rate conversion
> you do to make up for the missing samples.
Has anyone successfully solved this problem and would be willing to share
their code? I am thinking of it in the feeder-thread context. However, I
suppose the sample-rate synchronization problem can be solved without
involving a feeder thread, and it would be great to know how others have
dealt with this.
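For the sake of discussion, here is how I currently read Jeff's suggestion,
as an untested sketch (the function and variable names are mine): capture
(sample time, host time) pairs from the AudioTimeStamps the HAL passes to
each device's IOProc, and derive a resampling ratio from them.

#include <CoreAudio/CoreAudio.h>

/* Untested sketch of the rate-tracking idea Jeff describes: compare how
   many samples each device moved over the same stretch of host time.
   The "then"/"now" pairs would be captured from the AudioTimeStamps the
   HAL hands each device's IOProc. Host time is in the same units for
   both devices, so no conversion to nanoseconds is needed here. */
static double EstimateRateRatio(const AudioTimeStamp *aThen,
                                const AudioTimeStamp *aNow,
                                const AudioTimeStamp *bThen,
                                const AudioTimeStamp *bNow)
{
    double aRate = (aNow->mSampleTime - aThen->mSampleTime) /
                   (double)(aNow->mHostTime - aThen->mHostTime);
    double bRate = (bNow->mSampleTime - bThen->mSampleTime) /
                   (double)(bNow->mHostTime - bThen->mHostTime);
    /* Assuming both devices share a nominal sample rate, the amount this
       ratio deviates from 1.0 is the drift the rate converter has to
       make up. Single measurements are noisy, so in practice this would
       be smoothed over many buffers. */
    return aRate / bRate;
}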
If the problem can be solved using a feeder-thread approach (by me or anyone
else), it seems to me that wrapping the functionality to create a
multi-device Audio Unit would be a relatively minor thing. Thoughts?
Anyone else interested in working on any aspect of this?
I would also greatly appreciate any starting points on appropriate
feeder-thread code, never mind the sample-rate synchronization. As I
recall, there have been pointers before in relation to playing audio
files, but I'm not sure the same techniques used for playing files would
be applicable here.
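To be concrete about what I mean by feeder-thread code, here is the general
shape I have in mind: an untested single-producer/single-consumer ring
buffer sketch (all names mine), where the feeder thread renders ahead and
writes, and the IOProc only reads and never blocks.

#include <CoreAudio/CoreAudio.h>
#include <string.h>

/* Untested sketch: single-producer/single-consumer ring buffer. The
   feeder thread is the only writer and the IOProc the only reader, so
   no locks are taken on the I/O thread. Positions are monotonically
   increasing frame counters; real code would also want memory barriers. */
#define kRingFrames 16384   /* capacity in frames, power of two */
#define kChannels   2

typedef struct {
    Float32         data[kRingFrames * kChannels];
    volatile UInt32 writePos;   /* advanced only by the feeder thread */
    volatile UInt32 readPos;    /* advanced only by the IOProc */
} Ring;

/* Feeder thread: write up to 'frames' interleaved frames; returns how
   many actually fit. */
static UInt32 RingWrite(Ring *r, const Float32 *src, UInt32 frames)
{
    UInt32 avail = kRingFrames - (r->writePos - r->readPos);
    if (frames > avail) frames = avail;
    for (UInt32 i = 0; i < frames; i++)
        memcpy(&r->data[((r->writePos + i) % kRingFrames) * kChannels],
               &src[i * kChannels], kChannels * sizeof(Float32));
    r->writePos += frames;
    return frames;
}

/* IOProc: read 'frames' frames, padding with silence on underrun
   rather than ever blocking the I/O thread. */
static void RingRead(Ring *r, Float32 *dst, UInt32 frames)
{
    UInt32 avail = r->writePos - r->readPos;
    UInt32 n = (frames < avail) ? frames : avail;
    for (UInt32 i = 0; i < n; i++)
        memcpy(&dst[i * kChannels],
               &r->data[((r->readPos + i) % kRingFrames) * kChannels],
               kChannels * sizeof(Float32));
    r->readPos += n;
    if (n < frames)
        memset(&dst[n * kChannels], 0,
               (frames - n) * kChannels * sizeof(Float32));
}

The feeder thread would then sit in a loop, rendering whenever the fill
level drops below some threshold; that slack is where the smoothing Chris
describes comes from.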
on 3/3/03 12:09 PM, Chris Rogers <email@hidden> wrote:
> We've thought about releasing an AudioUnit which will essentially do
> this buffering for you, deferring processing to a lower priority
> thread. Hooking a couple of these into a mixer AudioUnit would be
> a good way to generically make processing multi-threaded to take
> advantage of more than one processor.
And thanks for the confirmation here also. This really clarifies the
approach I need to take.
Thanks,
Kurt Bigler
>
> Chris Rogers
> Core Audio
> Apple Computer
_______________________________________________
coreaudio-api mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/coreaudio-api
Do not post admin requests to the list. They will be ignored.