Re: AudioUnitV3 effect and "maximumFramesToRender"
- Subject: Re: AudioUnitV3 effect and "maximumFramesToRender"
- From: Brian Willoughby <email@hidden>
- Date: Tue, 18 Dec 2018 21:47:34 -0800
On Dec 18, 2018, at 1:32 PM, Waverly Edwards <email@hidden> wrote:
> Hmmmm, so the host is not required to adhere to the maximum when
> rendering. This is a shame, but I believe I understand.
The way you’ve phrased it is completely backwards and misleading. The host is
the authority on the maximum, and therefore it is the plugins that are required
to ask what that value is and then adjust. You might still be missing some
critical elements.
Part of the basic design of CoreAudio is that the buffer size can change from
one render call to the next. The maximum buffer size is there because memory
cannot be allocated while audio is being processed in real time, so the plugin
must ask the host at startup for the maximum, allocate any necessary memory in
advance, and then use the pre-allocated memory as needed during render calls.
No plugin can control the number of frames pulled in a single render cycle. If
you’re trying to do anything that doesn’t fit into that scheme, then you need
to rethink your design.
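
For illustration, here is a minimal sketch of that pattern in Swift (the
subclass and buffer names are placeholders, and the bus setup is assumed to
happen in init as usual):

    import AudioToolbox
    import AVFoundation

    class MyEffectUnit: AUAudioUnit {
        // Scratch memory sized from the host-set maximum; never
        // allocated inside the render block.
        private var scratchBuffer: AVAudioPCMBuffer?

        override func allocateRenderResources() throws {
            try super.allocateRenderResources()
            // By this point the host has set maximumFramesToRender.
            let format = outputBusses[0].format
            scratchBuffer = AVAudioPCMBuffer(pcmFormat: format,
                                             frameCapacity: maximumFramesToRender)
        }

        override func deallocateRenderResources() {
            scratchBuffer = nil
            super.deallocateRenderResources()
        }
    }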
That said, the host *is* required to honor the maximum buffer size that the
host itself advertises. All plugins are safe to assume that, if they start by
querying the host maximum frame size, then there will never be a render call
with more frames than the maximum. You may have a situation where the “host” is
not fulfilling its responsibilities.
What may be confusing is that the audio interfaces are separate from the host,
and the audio interfaces have their own maximum buffer size. I’m sure that it’s
especially confusing that the modern way to access an audio interface is to use
an AudioUnit that is hosted by your app, and I can see how that might look
extra complicated because that connection does not require the audio interface
to ask the host for the maximum size and adjust.
Another source of confusion is that each AudioUnit has a local variable that
gets initialized to some value, but that value is not accurate unless the host
is queried to obtain the actual value. As was mentioned earlier in this thread,
a good host developer will make sure that the audio interface buffer size is
queried, and then adjust the host maximum buffer size if that’s appropriate and
possible. After the host adjusts to the interface, it’s the responsibility of
all the AudioUnits to adjust to the host maximum buffer size.
A great number of plugins have been designed that work with an internal frame
size that is fixed, but the AU is still designed to handle completely arbitrary
frame sizes from the host render calls at run time. If this describes your
situation, then you probably need to design some code to handle frame size
mismatches, with the resulting latency that must be incurred to make that
possible.
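
One common shape for that buffering, as a rough sketch (inputFIFO,
outputFIFO, workBlock, and processFixedBlock are hypothetical names, not any
real API):

    // Run the DSP in fixed 512-frame blocks while accepting arbitrary
    // host frame counts. The FIFOs are pre-sized in
    // allocateRenderResources from maximumFramesToRender plus one block.
    let internalBlockSize = 512

    func render(frameCount: Int) {
        inputFIFO.write(hostInput, count: frameCount)
        // Process as many full internal blocks as are available.
        while inputFIFO.availableFrames >= internalBlockSize {
            inputFIFO.read(into: &workBlock, count: internalBlockSize)
            processFixedBlock(&workBlock)  // DSP that needs exactly 512
            outputFIFO.write(workBlock, count: internalBlockSize)
        }
        // Emitting frameCount frames now implies up to one internal
        // block of extra latency at the start.
        outputFIFO.read(into: &hostOutput, count: frameCount)
    }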
Brian Willoughby
Sound Consulting
> I do have an option, but not one that I really wanted to completely rely
> upon, which is to use offline rendering. From my tests, the request for
> maximum frame count in offline rendering is always honored.
> This means that audio units also adhere to the request. I prefer the
> offline rendering but wanted the flexibility of realtime. I'll try to find
> another way to get this working in realtime.
>
> try self.enableManualRenderingMode(.offline,
>     format: audioFile.processingFormat,
>     maximumFrameCount: maximumFrameCount)
>
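
For what it's worth, the rest of the offline loop around that call looks
roughly like this (a sketch only; "engine" stands for the AVAudioEngine,
i.e. self in the snippet above, and the graph and file scheduling are
assumed to be set up already):

    let buffer = AVAudioPCMBuffer(
        pcmFormat: engine.manualRenderingFormat,
        frameCapacity: engine.manualRenderingMaximumFrameCount)!

    try engine.start()
    while engine.manualRenderingSampleTime < audioFile.length {
        let remaining = audioFile.length - engine.manualRenderingSampleTime
        let frames = min(AVAudioFrameCount(remaining), buffer.frameCapacity)
        // In .offline mode the requested maximum is honored exactly.
        let status = try engine.renderOffline(frames, to: buffer)
        if status == .success {
            // Append the buffer to an output file here.
        }
    }
    engine.stop()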
> I would like to thank everyone who provided their insight and wisdom on the
> subject. It was a welcomed education.
>
>
> From: Jonatan Liljedahl
> Sent: Monday, December 17, 2018 7:14 AM
>
> Hi,
>
> If you read the documentation for maximumFramesToRender you'll see:
>
> "This must be set by the *host* before render resources are
> allocated." (my emphasis).
>
> This property is set by the host, to let the plugin know how many
> frames *maximum* it will ask it to render, nothing else. This is to
> allow the plugin to allocate internal buffers etc at suitable sizes in
> its allocateRenderResources method.
>
> The reason it's also set in the init method of the plugin itself is
> probably simply to provide a default value in case the host does not
> set it.
>
> Then, the actual number of frames that the host renders from the
> plugin could differ from call to call, and should be less than or
> equal to the value of maximumFramesToRender as it was when your
> allocateRenderResources was called. If not, it means the host is buggy
> and is rendering more frames than what it said should be the maximum,
> which would likely make a lot of plugins crash.
>
> All of this is only related to host-plugin relationship, and has
> actually nothing to do directly with the core audio driver/hardware
> buffer size. Though, many hosts (including my own host app, AUM) will
> render exactly the same number of frames as the current hardware
> buffer size. I've never looked at AVAudioEngine, but my guess is that
> it also does so.
>
> To control the hardware buffer size, you may ask the AVAudioSession
> for a preferred buffer duration, but that's by no means a guarantee
> that you'll actually get that size.
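
In code that request is just (a sketch):

    let session = AVAudioSession.sharedInstance()
    try session.setPreferredIOBufferDuration(0.005) // about 5 ms
    // The actual granted duration, which may differ:
    print(session.ioBufferDuration)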
>
> To summarise, there are two things a plugin developer needs to do
> regarding buffer size:
> 1) Read the value of maximumFramesToRender in your
> allocateRenderResources in case you need to allocate internal buffers.
> 2) Only rely on and adapt to the actual number of frames asked for, as
> passed to your render block/callback when called.
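
In AUAudioUnit terms, point 2 means trusting only the frameCount argument
handed to the render block on each call. A rough sketch of that block:

    override var internalRenderBlock: AUInternalRenderBlock {
        return { actionFlags, timestamp, frameCount, outputBusNumber,
                 outputData, realtimeEventListHead, pullInputBlock in
            // frameCount may differ on every call, but a correct host
            // keeps it at or below the maximumFramesToRender that was
            // in effect at allocation time. Use only buffers that were
            // pre-allocated in allocateRenderResources; never allocate
            // here.
            // ... process exactly frameCount frames into outputData ...
            return noErr
        }
    }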
>
> Cheers
> /Jonatan Liljedahl - http://kymatica.com
>
> > > On 14 Dec 2018, at 17:48, Waverly Edwards wrote:
> > > init(componentDescription: AudioComponentDescription,options:
> > > AudioComponentInstantiationOptions = [])
> > >
> > > I created an AudioUnitV3 effect and I set the variable
> > > "maximumFramesToRender" within the above method. The effect does work but
> > > the system overwrites this value with 512 frames, leaving me with no way
> > > to
> > > alter how many frames are pulled.
> > >
> > >
> > > https://developer.apple.com/documentation/audiotoolbox/auaudiounit/1387654-maximumframestorender
> > >
> > > The documentation states this value must be set before render
> > > resources are allocated, and I do so in my init. What else must I do
> > > in order to change how many frames are pulled in a single cycle?
> > >
> > > Does anyone have advice on this matter?