RE: Using an AU directly
- Subject: RE: Using an AU directly
- From: Darrell Gibson <email@hidden>
- Date: Thu, 10 Sep 2009 20:43:21 +0100
- Acceptlanguage: en-US, en-GB
- Thread-topic: Using an AU directly
Hi Philippe,
Thanks for your help. I think I may have "muddied the waters" in an earlier post when I said a "network of AUs". By this I meant a series of interconnected AUs, not a computer network.
Nonetheless, your comments make sense anyway. I'll give it a go.
Darrell.
________________________________________
From: philippe wicker [email@hidden]
Sent: 10 September 2009 12:50
To: Darrell Gibson
Cc: William Stewart; email@hidden
Subject: Re: Using an AU directly
On 10 sept. 09, at 11:27, Darrell Gibson wrote:
> Bill,
>
> Sorry for the delay in replying (I've been out of the office.)
> Thanks for your very detailed post, which has cleared up a lot of
> confusion and vagueness I had.
>
> Can I just clarify a couple of points with you? Is it possible for
> an AU to have both an input callback AND a connection? If so, what
> would happen in that situation?
>
> If I wanted to work on an example that used case (2), then as you say
> the caller would be in complete control. Presumably I could create a
> thread using CAPThread and then, from that thread, call the input
> callback or connected AU? How would I manage the timing?
The last post from William Stewart on this topic, and the remarks of
others, persuaded me that you don't need threading. I'm assuming here
that you want to process a continuous audio stream created somewhere on
another machine and that you read this stream using a network receiver
AU (AUNetReceive). You're receiving audio data at an average rate
determined by the sample rate (e.g. 11.6 ms per 512 samples at
44.1 kHz), so iterating on calls to AudioUnitRender on this AU will
lock your loop to the network stream rate.
Your execution flow will look like:
- call AudioUnitRender
- process the audio buffer
- call AudioUnitRender
- process the audio buffer
- ...and so on
As long as your processing time is short enough, you'll be able to
keep up with the network rate.
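To make that concrete, here is a minimal sketch of such a pull loop
(netReceiveUnit is assumed to be an initialized, mono-configured
AUNetReceive instance, and ProcessBuffer is a placeholder for your own
analysis code):

#include <AudioUnit/AudioUnit.h>
#include <stdlib.h>

extern void ProcessBuffer(AudioBufferList *buffers, UInt32 frames);

void PullLoop(AudioUnit netReceiveUnit)
{
    const UInt32 kFrames = 512;

    // One mono buffer of Float32 samples for the AU to render into.
    AudioBufferList bufList;
    bufList.mNumberBuffers = 1;
    bufList.mBuffers[0].mNumberChannels = 1;
    bufList.mBuffers[0].mDataByteSize = kFrames * sizeof(Float32);
    bufList.mBuffers[0].mData = malloc(bufList.mBuffers[0].mDataByteSize);

    AudioTimeStamp ts = {0};
    ts.mFlags = kAudioTimeStampSampleTimeValid;
    ts.mSampleTime = 0;

    for (;;) {
        AudioUnitRenderActionFlags flags = 0;
        OSStatus err = AudioUnitRender(netReceiveUnit, &flags, &ts,
                                       0 /* output bus */, kFrames,
                                       &bufList);
        if (err != noErr) break;

        ProcessBuffer(&bufList, kFrames);   // your analysis goes here
        ts.mSampleTime += kFrames;          // we own the timeline
    }
    free(bufList.mBuffers[0].mData);
}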
There may be one case where this simple solution won't work: if,
during the analysis of the incoming data, you occasionally need to do
some lengthy processing (that depends on your application; only you
can tell). If you then wait too long before calling AudioUnitRender
again, you may lose data at the network level. Depending on your
application, that may or may not matter. If you can't afford to lose
network data, you should delegate the lengthy processing to a worker
thread, ideally with a lower priority than your read-loop thread. By
the way, on 10.6 this delegation could be done using the GCD services
(see the thread "Audio Units and OpenCL").
>
> Thanks again for your help,
>
> Darrell.
>
> ________________________________________
> From: coreaudio-api-bounces+gibsond=email@hidden On Behalf Of
> William Stewart [email@hidden]
> Sent: 08 September 2009 19:23
> To: philippe wicker
> Cc: email@hidden API
> Subject: Re: Using an AU directly
>
> My, there is some confusion here. I hope I can clarify some basic
> concepts.
>
> First: there are no additional threading implications within an audio
> unit. An audio unit's basic rendering operations are invoked by the
> "host" of the audio unit by calling AudioUnitRender. Here's what then
> happens:
>
> AudioUnitRender
>     audio unit calls render notification (sets the "pre" flag)
>     audio unit renders
>         calls input callback/connection (if needed and if present)
>         operates on returned input data (or, in the case of a
>             generator/synth, generates audio data)
>             - this audio data is then placed in the buffer that is
>               provided by the caller of AudioUnitRender
>     audio unit calls render notification (sets the "post" flag)
>
> When the audio unit calls out at "post", it provides the same data in
> the AudioBufferList as is returned to the caller of AudioUnitRender.
> This can be very useful (it is how AULab is able to syphon off the
> data of its processing graph and write it to a file).
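> For illustration, a render notification installed with
> AudioUnitAddRenderNotify might look like this sketch (WriteToFile is
> just a placeholder for whatever the tap does with the data):
>
> #include <AudioUnit/AudioUnit.h>
>
> extern void WriteToFile(void *refCon, AudioBufferList *data,
>                         UInt32 frames);          // placeholder
>
> static OSStatus MyNotify(void *inRefCon,
>                          AudioUnitRenderActionFlags *ioActionFlags,
>                          const AudioTimeStamp *inTimeStamp,
>                          UInt32 inBusNumber,
>                          UInt32 inNumberFrames,
>                          AudioBufferList *ioData)
> {
>     if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
>         // At "post", ioData holds the same samples that are being
>         // returned to the caller of AudioUnitRender.
>         WriteToFile(inRefCon, ioData, inNumberFrames);
>     }
>     return noErr;
> }
>
> // registration, done once at setup ("unit" and "myState" assumed):
> AudioUnitAddRenderNotify(unit, MyNotify, myState);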
>
> There is no real mystery here - you can see ALL of this implemented in
> AUBase::DoRender (and for the "get the input data" part, have a look
> at AUEffectBase).
>
> Now, AUGraph and Audio Units are only coincidentally related: they
> were released in the same OS release, and the main purpose of AUGraph
> was to provide a way to manage audio units, in particular to connect
> and disconnect them while the graph is running. What does a "running
> graph" mean? All it means is that AudioOutputUnitStart has been called
> on the head (the output unit) of the graph.
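> As a reference point, a bare-bones graph setup looks roughly like this
> (effectDesc and outputDesc are assumed to be filled-in
> AudioComponentDescriptions):
>
> #include <AudioToolbox/AudioToolbox.h>
>
> AUGraph graph;
> AUNode effectNode, outputNode;
> NewAUGraph(&graph);
> AUGraphAddNode(graph, &effectDesc, &effectNode);
> AUGraphAddNode(graph, &outputDesc, &outputNode);
> AUGraphConnectNodeInput(graph, effectNode, 0, outputNode, 0);
> AUGraphOpen(graph);
> AUGraphInitialize(graph);
> AUGraphStart(graph);          // "running" from here on
> // ...and while it is running you may legally reconnect:
> AUGraphDisconnectNodeInput(graph, outputNode, 0);
> AUGraphUpdate(graph, NULL);   // applies pending connection changes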
>
> There are two primary use cases of interest for a "running graph".
> (1) The head of a graph is an AU like AUHAL, which is attached to an
> audio device. In this case the graph's rendering occurs on a
> thread that is owned by the HAL. The HAL defines the timing, the duty
> cycle, etc., and it calls the client's IOProc. AUHAL establishes an
> IOProc, from which it then calls whatever audio unit is connected to
> its input (or callback). When you start AUHAL, it starts the audio
> device and the audio I/O mechanism.
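> Concretely, feeding AUHAL from a render callback is just this
> (MyRenderProc and myState are placeholders):
>
> AURenderCallbackStruct cb = { MyRenderProc, myState };
> AudioUnitSetProperty(auhal, kAudioUnitProperty_SetRenderCallback,
>                      kAudioUnitScope_Input, 0, &cb, sizeof(cb));
> AudioOutputUnitStart(auhal);   // starts the device; the HAL's I/O
>                                // thread now drives MyRenderProc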
>
> (2) The head of a graph is an AU like the GenericOutput. In this case,
> starting this audio unit does nothing but set its state to running.
> The generic output unit has NO thread context of its own; that must
> be provided by the caller. In this case, the caller is in COMPLETE
> control of the rendering operation - whether it is constrained by
> real-time considerations, whether it is processing offline, whether it
> is generating a file, and so forth.
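> So with a generic output head, the "duty cycle" is simply whatever
> loop you write (genericOut, ts and bufList as in any render loop;
> totalSlices is an assumption):
>
> AudioOutputUnitStart(genericOut);      // only marks it running
> for (UInt32 i = 0; i < totalSlices; ++i) {
>     AudioUnitRenderActionFlags f = 0;
>     AudioUnitRender(genericOut, &f, &ts, 0, 512, &bufList);
>     ts.mSampleTime += 512;             // the caller owns the timeline
>     // write bufList to a file, analyze it, etc. - no real-time
>     // constraint unless you impose one
> }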
>
> Case (2) can be seen in use in the PlaySequence example, where
> one option is to generate an audio file that is a rendering of the
> MIDI file. The "normal" option in PlaySequence is to play the MIDI
> file back (this is (1) above).
>
> When you call audio units without a graph, that is analogous to
> case (2) - you have complete control over when you call
> AudioUnitRender on the audio unit to get it to do work. There is no
> need to "hack" around this, as audio units are designed to be used in
> both types of context (in fact the only real difference between the
> two cases above is that in (1) the HAL provides the thread and calling
> semantics, whereas in (2) the host application does).
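> And for completeness, opening a unit directly, with no graph at all,
> is just this (10.6 API shown; the Apple delay effect is an arbitrary
> example):
>
> AudioComponentDescription desc = { kAudioUnitType_Effect,
>     kAudioUnitSubType_Delay, kAudioUnitManufacturer_Apple, 0, 0 };
> AudioComponent comp = AudioComponentFindNext(NULL, &desc);
> AudioUnit unit;
> AudioComponentInstanceNew(comp, &unit);
> AudioUnitInitialize(unit);
> // ...from here, call AudioUnitRender(unit, ...) whenever you choose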
>
> HTH
>
> Bill
>