RE: Audio Units - newbie questions
- Subject: RE: Audio Units - newbie questions
- From: "Muon Software Ltd - Dave" <email@hidden>
- Date: Wed, 12 Oct 2005 11:31:49 +0100
- Importance: Normal
> But are you getting sound out correctly on each of your output buses?
> I think it might kind of work because AUBase's RenderBus call will
> ONLY call your render call once and I suspect almost by accident it
> might be getting the output data because of what you're doing with
> the element's buffer lists. I would certainly not want to guarantee
> that this would always work.
Probably not - I would need to get a bit more used to Logic 7 before
addressing that particular test :-)
> You've essentially got the right approach I think (ie you calculate
> all your data into an array of channels), but it should be integrated
> into the AUBase semantics more robustly.
>
> We're going to clean up some code we have that demonstrates how to do
> this and try to get this out soon (I'll try and send you a copy
> privately before then, so you can see how we're handling this)
Looking forward to it, thank you.
> Yes - that tells you when you have to render new data - with multiple
> buses that's important.
So with four buses I get four render calls, one for each bus, all with the
same time stamp. When the time stamp matches the previous one, I know I'm
rendering for the same slice.
I don't know if I could add a method to my synth to render one output at a
time. It might be better if, on the first Render call, I rendered into 8
mono temporary buffers which I then copy into the AU buses on subsequent
Render calls with the same time stamp, depending on which bus I've been
passed. That sounds vaguely workable to me, though it does mean I have to
maintain my temporary buffers quite carefully, and there would be some
copying overhead in the AU that the VST version doesn't suffer from.
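Roughly what I have in mind, as a sketch only - mLastRenderTime, mScratch
and RenderWholeSynth are my own placeholder names, not SDK API, and I'm
assuming AUBase::RenderBus has more or less this shape and that each bus
carries a stereo pair taken from the 8 mono scratch buffers:

    // Assumes the usual AU SDK headers (AUBase etc.) and
    // float* mScratch[8] as members of MySynthAU.
    ComponentResult MySynthAU::RenderBus(
        AudioUnitRenderActionFlags& ioActionFlags,
        const AudioTimeStamp&       inTimeStamp,
        UInt32                      inBusNumber,
        UInt32                      inNumberFrames)
    {
        if (inTimeStamp.mSampleTime != mLastRenderTime) {
            // First call for this slice: run the whole synth once
            // into the 8 mono scratch buffers.
            RenderWholeSynth(mScratch, inNumberFrames);
            mLastRenderTime = inTimeStamp.mSampleTime;
        }
        // Copy the scratch channels belonging to this bus into its
        // output buffer list (stereo pair per bus assumed).
        AudioBufferList& abl = GetOutput(inBusNumber)->GetBufferList();
        for (UInt32 ch = 0; ch < abl.mNumberBuffers; ++ch)
            memcpy(abl.mBuffers[ch].mData,
                   mScratch[inBusNumber * 2 + ch],
                   inNumberFrames * sizeof(Float32));
        return noErr;
    }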
> Sure - but why should you crash? You should certainly not be
> initialising yourself.
The problem here is that on construction we do all the soft stuff.
Traditionally in a VST plugin you don't know, during the constructor, what
the sample rate and block size are. Instead, you wait until you get told -
then at that point you can do your samplerate/blocksize-dependent stuff. So
my plugin already has init and reset methods, with the heavyweight stuff
done in reset. Unfortunately it was simply never designed to be used in a
case where reset could come in *before* init, because that's impossible in
a VST host.
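The obvious band-aid on our side is to make reset a no-op until init has
happened. A sketch of what I mean (Init/Reset, mInitDone and the buffer
calls are names from our own shared core, nothing to do with the AU SDK):

    void SynthCore::Reset()
    {
        if (!mInitDone)
            return;  // reset arrived before init: nothing to rebuild yet

        // Heavyweight samplerate/blocksize-dependent work lives here.
        FreeVoiceBuffers();
        AllocateVoiceBuffers(mSampleRate, mBlockSize);
    }

    void SynthCore::Init(double sampleRate, int blockSize)
    {
        mSampleRate = sampleRate;
        mBlockSize  = blockSize;
        mInitDone   = true;
        Reset();     // now the heavyweight setup is safe to do
    }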
> If I can't do something sensible with your plugin between open and
> init, then why would we have bothered discriminating between these
> two states?
There's a school of thought in coding that you should never do anything much
in a class's constructor, especially if construction could fail. Instead the
constructor should just nuke the class's internal state and provide a
separate init method that can return success or failure. That is one good
reason for discriminating between "open" and "initialised".
However my synth will permit the editor to be drawn and will probably accept
parameter changes etc., so it isn't a total loss. It just doesn't want to be
reset or rendered during that intermediate state.
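The pattern in miniature (illustrative names only, not our real code):

    #include <cstdlib>   // malloc/free

    class Engine {
    public:
        Engine() : mBuffer(0), mReady(false) {}  // cannot fail: just zeroes state

        bool Init(unsigned frames)               // all the fallible work goes here
        {
            mBuffer = static_cast<float*>(std::malloc(frames * sizeof(float)));
            mReady  = (mBuffer != 0);
            return mReady;                       // caller learns of any failure
        }

        ~Engine() { std::free(mBuffer); }

    private:
        float* mBuffer;
        bool   mReady;
    };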
> Some hosts, in order to streamline the user experience, wanted to
> manage resources in use at any given time. So, they spent
> considerable time optimising their own engines and DSP processes,
> etc. One step they wanted to take was to place AUs in a quiescent
> state, but NOT close them, when some part of the mix was not in use -
> let's say the track was made inactive. The user can still see the AU,
> and the AU's view may indeed still be open. Interacting with it may
> be a meaningless gesture, but at least the view should open/close
> properly, and perhaps even allow some basic manipulations.
Now you'd think any sensibly-written plugin would know when it isn't doing
anything in particular and shunt over to a low-effort state. Tachyon
certainly does.
> The AU Semantic we defined for this was Initialisation - so, the host
> apps concerned would try to uninitialise the AUs that were in this
> place, only to find that they would crash.
>
> This was so poorly understood and supported by AUs that the
> feature was essentially dropped. Which is a shame to our minds,
> because this is one of the uses we had envisaged when we
> discriminated between these two states.
>
> In the AU documentation, this is discussed. AU Lab added a debug menu
> so you can test out your plugin in an uninitialised state. We would
> love to see AU developers take this into account.
I'll give this a go and see what happens.
> Please read the documentation about the states of AUs.
>
> It is basic to the design of AUs that the states are properly
> tracked. For instance, uninitialising and re-initialising an AU are
> done when reformatting the AU is desired (even if this is only a
> sample rate change). If you aren't allowing your initialised state to
> be cleared by Uninitialise, then many operations are no longer
> possible. auval will also test this path, so this should be causing
> you problems already.
VST plugins are used to having things like sample rate changes and block
size changes thrown at them, so our synth is robust in that respect. It
doesn't need to be uninitialised to be able to do that. At the moment it
has no concept of being uninitialised and does nothing with that state, but
I don't think this makes anything impossible as such. I will test what
happens to our synth when there are lots of I/O configuration, blocksize
and sample rate changes in various AU hosts later today to make sure it is
robust. I'm not planning to add support for uninitialisation, though,
unless I absolutely have to.
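If I did end up having to support it, I gather the hooks would look
something like this - a sketch only, assuming AUBase's Initialize() and
Cleanup() virtuals behave as I've understood them, with mCore and its
methods being my own names:

    ComponentResult MySynthAU::Initialize()
    {
        ComponentResult err = AUBase::Initialize();
        if (err != noErr) return err;

        // Formats are now fixed, so samplerate/blocksize-dependent
        // setup is safe at this point.
        mCore.Init(GetOutput(0)->GetStreamFormat().mSampleRate,
                   GetMaxFramesPerSlice());
        return noErr;
    }

    void MySynthAU::Cleanup()
    {
        // Drop the heavyweight state so the host can reformat us and
        // re-Initialize with new settings later.
        mCore.Teardown();
        AUBase::Cleanup();
    }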
Regards
Dave