Re: AUGraph deadlocks
- Subject: Re: AUGraph deadlocks
- From: Brian Willoughby <email@hidden>
- Date: Thu, 08 Dec 2011 12:13:14 -0800
On Dec 5, 2011, at 08:02, patrick machielse wrote:
I'm afraid I might not have been clear in outlining my
implementation, or that I've been confusing in my use of the term
'rendering engine'.
The processingRecipe prescribes the AudioUnit settings to use as a
function of time, f(t). The settings change continuously during
processing (whether the user changes the recipe or not) and should
be adjusted on each render loop.
Performing these adjustments from the 'outside' seems to be harder
to achieve than performing them from the renderCallback function.
Also, the implementation of my custom audio unit would require more
care (threading-wise).
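The per-cycle adjustment described above could be sketched, framework-free, as a render callback that evaluates the recipe at the start of each buffer and applies the result to every frame. All names here (recipeGain, renderCycle) are illustrative, not from Core Audio or from Patrick's code:

```cpp
#include <cmath>
#include <vector>

// Hypothetical recipe: gain as a function of time, g(t) = 0.5 + 0.5*cos(t).
// In the real setup this value would come from the processingRecipe object.
static float recipeGain(double t) {
    return 0.5f + 0.5f * static_cast<float>(std::cos(t));
}

// A callback in the spirit of AURenderCallback: it pulls the recipe value
// once per buffer, processes the frames with that setting, then advances
// the render clock.
void renderCycle(std::vector<float>& buffer, double& timeSec, double sampleRate) {
    const float gain = recipeGain(timeSec);   // adjust settings for this cycle
    for (float& s : buffer) s *= gain;        // apply to every frame
    timeSec += buffer.size() / sampleRate;    // advance time for the next cycle
}
```

The point of the sketch is only that the settings lookup happens on the render thread itself, once per cycle, with no cross-thread SetParameter traffic.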
Stepping back for a bit, it seems that there is still something
wrong. Typically, an AU algorithm has parameter variables and state
variables. The parameter variables are all those values which are
set by the user, or perhaps by automation in a host that offers that
feature. The state variables are internal-only values that are used
by the engine to keep track of ongoing processes, but you really
should not be using SetParameter() for these internal state variables
- they should be accessed directly as read/write members of the class
instance.
I find myself wondering if perhaps you've got too many layers in your
AU plus AUGraph setup. Are you developing an AudioUnit that is
nothing more than an AUGraph of other, existing AudioUnits? Are you
finding it necessary to change parameters continuously because you
have not implemented the processing code yourself? If yes, then
maybe it bears pointing out that it would be way more efficient for
you to implement your graph of multiple AudioUnits as a single,
monolithic piece of code where everything has access to the same
state variables. In other words, grab the source for all of the AUs
and refactor them together into a single render function. Then you
would not ever need to communicate continuously-changing values from
one AU to another.
In any case, it seems like those of us who are offering advice have a
very incomplete picture of what you're really trying to do. As is
typical on this list, most questions do not appear here until a new
developer has already decided on a particular implementation, and
when they get stuck they come here asking how to realize a very
bizarre implementation. As the saying goes: If I had a dollar for
every time an Apple engineer asked "Tell us what you're really trying
to do, at the highest level," then I'd be a very rich man. In other
words, rather than focus on your current stumbling block of
continuous calls to SetParameter within your pre-render callback,
why not step back to the highest level and explain what you're
trying to accomplish from the user's point of view, and/or the end
results of the audio process? It may well be possible to solve this
with a completely different approach.
Brian Willoughby
Sound Consulting
Coreaudio-api mailing list (email@hidden)