Re: Multi-threaded render
- Subject: Re: Multi-threaded render
- From: Craig Hopson <email@hidden>
- Date: Sat, 5 Aug 2006 14:16:19 -0600
Hi Ethan,
1. Yes, we are doing something similar... but different. We've
implemented an event-driven processor which feeds a ring buffer. The
CA pull is on the output side of the buffer; the processor(s) feed
the input side. Rather than blocking the CA thread (probably not a
good idea), we feed zeros until a predetermined (short) delay period
has passed, and then we start feeding the ring buffer data to the CA
thread. Our system requires a "large" processing time for some
things, and this setup allows for that. Once we have this bit of
breathing room, processing can always stay ahead of the CA pull.
Even with heavy processing requirements we can run on a 500 Ti with
absolutely no dropouts. (A sketch of this scheme follows below.)
2. You say "All is well except, in the interest of efficiently using
multi-processor systems..." So I'm curious: if your app already runs
on current hardware, why are you so concerned with the efficiency of
the system? How will that help your application? We did what we did
because we could not run on lower-end hardware, and even dual G5s
sometimes lost audio. Although I have not tried, I imagine we could
cut the delay to almost nothing on a Core Duo machine.
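
Returning to point 1, here is a minimal sketch of that zero-priming
scheme, assuming a single-producer/single-consumer float ring. The
PrimedRing class and its details are illustrative only, not our
actual implementation:

    // Minimal single-producer/single-consumer ring with zero-priming:
    // the consumer (the CoreAudio render callback) outputs silence
    // until the priming delay's worth of audio has been buffered,
    // then drains the ring. Never blocks or allocates on the
    // consumer side.
    #include <atomic>
    #include <cstring>
    #include <vector>

    class PrimedRing {
    public:
        PrimedRing(size_t capacityFrames, size_t primeFrames)
            : buf_(capacityFrames), prime_(primeFrames) {}

        // Producer side: the event-driven processor pushes audio here.
        size_t write(const float* src, size_t frames) {
            size_t w = writePos_.load(std::memory_order_relaxed);
            size_t r = readPos_.load(std::memory_order_acquire);
            size_t space = buf_.size() - (w - r);    // room left in the ring
            size_t n = frames < space ? frames : space;
            for (size_t i = 0; i < n; ++i)
                buf_[(w + i) % buf_.size()] = src[i];
            writePos_.store(w + n, std::memory_order_release);
            return n;                                // frames actually written
        }

        // Consumer side: called from the CoreAudio render thread.
        void read(float* dst, size_t frames) {
            size_t w = writePos_.load(std::memory_order_acquire);
            size_t r = readPos_.load(std::memory_order_relaxed);
            if (!primed_ && w - r >= prime_)
                primed_ = true;                      // delay period has passed
            if (!primed_) {                          // still priming: feed zeros
                std::memset(dst, 0, frames * sizeof(float));
                return;
            }
            size_t avail = w - r;
            size_t n = frames < avail ? frames : avail;
            for (size_t i = 0; i < n; ++i)
                dst[i] = buf_[(r + i) % buf_.size()];
            if (n < frames)                          // underrun: pad with zeros
                std::memset(dst + n, 0, (frames - n) * sizeof(float));
            readPos_.store(r + n, std::memory_order_release);
        }

    private:
        std::vector<float> buf_;
        size_t prime_;
        bool primed_ = false;                        // touched only by consumer
        std::atomic<size_t> writePos_{0};            // monotonic frame counters
        std::atomic<size_t> readPos_{0};
    };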
-Craig
On Aug 5, 2006, at 1:41 PM, Ethan Funk wrote:
Currently, I have an audio application, like many, where multiple
audio sources are mixed together by a custom mixer and delivered to
the HALOutput AU. All is well except that, in the interest of
efficiently using multi-processor systems, I would like to move
away from the single-threaded render approach and have each mixer
input run in its own thread.
For example, the HALOutput AU would call back to my mixer on the
usual CoreAudio rendering thread. My mixer would then wake a
render thread associated with each of its inputs and block the
calling rendering thread until all the input threads have buffers
ready for my mixer. My mixer would then continue executing on the
CoreAudio render thread, where it would mix the input results and
return the final buffer to the HALOutput AU. All this should be
simple to implement; however, I believe that rendering threads have
some special realtime properties, and I have no idea how to create
such threads for each of my mixer inputs. Likewise, I don't
understand threading on OS X well enough to know what scheduling
problems I may encounter. Any ideas where I can go to learn more
about this sort of thing?
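
For reference, the "special realtime properties" here are Mach
time-constraint scheduling, which any thread can request through
thread_policy_set. A minimal sketch follows; the helper names are
made up, and in practice the period/computation/constraint values
would be derived from the device's buffer duration and measured
render cost:

    #include <cstdint>
    #include <mach/mach.h>
    #include <mach/mach_time.h>
    #include <mach/thread_policy.h>

    // Convert nanoseconds to Mach absolute-time units, which is what
    // the time-constraint policy expects.
    static uint32_t NanosToAbs(uint64_t ns) {
        mach_timebase_info_data_t tb;
        mach_timebase_info(&tb);
        return (uint32_t)(ns * tb.denom / tb.numer);
    }

    // Ask the scheduler to treat the calling thread as a realtime
    // (time-constraint) thread: it wants 'computationNs' of CPU time
    // every 'periodNs', finished within 'constraintNs' of each wakeup.
    static bool PromoteToTimeConstraint(uint64_t periodNs,
                                        uint64_t computationNs,
                                        uint64_t constraintNs) {
        thread_time_constraint_policy_data_t policy;
        policy.period      = NanosToAbs(periodNs);      // e.g. one I/O buffer
        policy.computation = NanosToAbs(computationNs); // CPU time per period
        policy.constraint  = NanosToAbs(constraintNs);  // deadline per period
        policy.preemptible = true;
        return thread_policy_set(mach_thread_self(),
                                 THREAD_TIME_CONSTRAINT_POLICY,
                                 (thread_policy_t)&policy,
                                 THREAD_TIME_CONSTRAINT_POLICY_COUNT)
               == KERN_SUCCESS;
    }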
My ultimate goal is to move each mixer input's render into its own
thread so that concurrent processing COULD occur when there is more
than one processor available. It would appear that as things stand
with the single-threaded approach, each mixer input chain must be
rendered one at a time: the total render time is the sum of the
times required to render all the inputs. If I have a heavy
processing load (a lot of AU effects in the input chains), even
with multiple processors, the whole render process will only use
one of the available processors. If I have two or more processors,
it would be nice to have each processor work on different renders
to improve the chance that it would all get done in time to deliver
the results to the HALOutput AU.
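
A sketch of that fan-out/join structure: the render callback wakes
one pre-created worker per mixer input, then blocks until every
worker has signaled that its buffer is ready. All names here are
illustrative, and std::binary_semaphore is used only for brevity; a
realtime-safe version would use Mach semaphores
(semaphore_signal/semaphore_wait) instead:

    #include <cstddef>
    #include <semaphore>
    #include <vector>

    struct InputWorker {
        std::binary_semaphore start{0};  // render thread -> worker: go
        std::binary_semaphore done{0};   // worker -> render thread: ready
        std::vector<float> buffer;       // this input's rendered audio
    };

    // Body of each pre-created worker thread (one per mixer input),
    // ideally promoted to time-constraint scheduling as shown earlier.
    void WorkerLoop(InputWorker& w) {
        for (;;) {
            w.start.acquire();           // sleep until the callback fires
            // ... render this input's AU chain into w.buffer ...
            w.done.release();            // signal: buffer is ready to mix
        }
    }

    // Called from the CoreAudio render callback: fan out, join, mix.
    void RenderMix(InputWorker* workers, size_t count,
                   float* out, size_t frames) {
        for (size_t i = 0; i < count; ++i)
            workers[i].start.release();  // wake every input chain at once
        for (size_t i = 0; i < count; ++i)
            workers[i].done.acquire();   // block until all buffers are ready
        for (size_t f = 0; f < frames; ++f) {
            float acc = 0.0f;
            for (size_t i = 0; i < count; ++i)
                acc += workers[i].buffer[f];
            out[f] = acc;                // simple sum mix of all inputs
        }
    }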
Anyone else doing anything like this? Am I going down the wrong
road here?
Ethan...
Craig Hopson
Red Rock Software
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden