RE: fixed buffer sizes?
- Subject: RE: fixed buffer sizes?
- From: "Timo K" <email@hidden>
- Date: Tue, 16 Feb 2010 20:25:12 +0100
Hi Paul,
> -----Original Message-----
> To: CoreAudio API
> We (ardour/mixbus development team) have been grappling with an issue
> with a family of plugins that for now I'm going to leave
> unnamed. Using google, its clear that several other AU hosts,
> including Reaper, Fruity Loops and Digital Performer have all had
> issues with these plugins over the years. At one point, even Logic had
> some issues with them as well.
> Distilling what we've observed down to its essence, our question is
> this: should a host be able to call AudioUnitRender() with any number
> of frames as long it is below the last-set maximum-frames-per-slice
> value?
From what I have read in the standards documentation and heard from host
developers: yes, this is what a plugin should be able to manage.
Every render call can have a different number of samples to process, as
long as the maximum frame size is not exceeded. Buffer sizes do not
have to accumulate to the maximum.
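To illustrate the contract (a minimal plain-C sketch; render_slice() and
MAX_FRAMES_PER_SLICE are hypothetical stand-ins for AudioUnitRender and the
last-set maximum-frames-per-slice value):

```c
#include <assert.h>
#include <stddef.h>

#define MAX_FRAMES_PER_SLICE 512  /* set before the unit was initialized */

/* Stand-in for one render call; a real host would call AudioUnitRender
 * with inNumberFrames = count here. The only contract is the maximum. */
static size_t render_slice(size_t count) {
    assert(count <= MAX_FRAMES_PER_SLICE);
    return count;
}

/* Drive a plugin with varying slice sizes: every call may use a
 * different count, and the counts need not add up to the maximum. */
static size_t run_schedule(const size_t *counts, size_t n) {
    size_t total = 0;
    for (size_t i = 0; i < n; ++i)
        total += render_slice(counts[i]);
    return total;
}
```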
> Or is there some unstated model that has allowed some plugin
> developers to have an expectation that the host will use the same
> frame cnt for every render call for every plugin, until the next
> time maximum-frames-per-slice is set?
No. The maximum-frames-per-slice value is set while the unit is in the
suspended (uninitialized) state, but every render call may use any frame
count at or below that maximum.
This is how it works in AU, VST, VST3 and RTAS alike.
> It is very clear that other host developers have grappled with these
> issues (just read various Reaper release notes and search for "fixed
> size buffers"; or look on various DP or FL forums from a couple of
> years ago), we are wondering if:
>
> 1) the expected behaviour of the host is to always run the entire
> "graph"
> (not necessarily an AUGraph) with a fixed buffer size (in between
> any resets, naturally) ?
No. E.g. Logic: if the track is focused, the Core Audio buffer size
is used (e.g. 64 samples); if not, the process buffer size is used
(e.g. 512 samples). The "maximum" is set to 512 in this case.
So the host can use this as a strategy to save CPU cycles on plugins
whose tracks have no "live input" focus, etc.
So, no, it is not expected (though, I think, desirable).
> 2) if not, is there any interest in extending auval to detect
> some specific cases that we've found where plugins break (badly)
> when the buffer size (ie. the frame count passed to AudioUnitRender)
> is varied (but remains below the maximum-frames-per-slice property)?
> The behaviour we've seen will never be revealed by testing single
> plugin, even with multiple instances of the plugin. This makes the
> use of auval to detect the problem rather difficult ...
I am not sure whether such behaviour can be tested explicitly. How do
they "break"? Do they crash? Do they overwrite memory they shouldn't?
But regarding the topic of arbitrary render slices:
Over the years, I have found that host programmers mainly want to
achieve two things with this strategy:
1) lower the latency for live input
2) provide sample-accurate automation when the API does not offer
sample-accurate parameter changes.
For 1: This strategy fails as soon as a plugin introduces latency,
algorithmically or otherwise (say: hardware-based plugins like
UAD, Powercore, Focusrite Liquidmix, Virus TI etc.). Worse, the
per-call overhead accumulates across the many small slices, so
there is no net gain here.
For 2: The APIs have since evolved in this respect; additionally,
there is practically no serious plugin that does not smooth its
parameter updates at least a bit, so anything finer than a fraction
of 16 samples is simply wasted anyway. In a future standard I would
strongly vote for some kind of lower barrier here (perhaps driven by
a plugin property such as "least reasonable render slice size").
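For example, a typical one-pole parameter smoother (a minimal sketch; the
struct, names and coefficient are made up for illustration) is still gliding
after 16 samples, so host parameter updates delivered on a finer grid than
that are largely absorbed:

```c
/* One-pole parameter smoother: the current value glides toward the
 * target a little on every sample, so per-sample host updates finer
 * than the smoothing time constant have almost no audible effect. */
typedef struct {
    double current;
    double target;
    double coeff;   /* 0 < coeff < 1; closer to 1 = slower glide */
} smoother_t;

static void smoother_set_target(smoother_t *s, double t) { s->target = t; }

/* Advance one sample and return the smoothed value. */
static double smoother_tick(smoother_t *s) {
    s->current = s->target + (s->current - s->target) * s->coeff;
    return s->current;
}
```

With coeff = 0.9, a 0-to-1 step has only reached about 0.81 after 16 samples,
which is why sub-16-sample automation granularity buys little.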
The only extra burden this places on plugin programmers is that they
must allocate more memory and write more code to make sure they can
cover calling sequences like this: 193-187-392-192-2-7-403
(given max: 512) with appropriate double buffering and buffer
clipping.
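A common way to cover such sequences is a FIFO ("double buffer") adapter:
input is accumulated into a fixed-size block, the block is processed
whenever it fills, and output is read from the previously processed block,
at the cost of one block of latency that the plugin must report to the
host. A minimal sketch (the pass-through process_block() and all names are
hypothetical):

```c
#include <stddef.h>
#include <string.h>

#define BLOCK 512  /* the plugin's fixed internal block size */

/* Wraps a fixed-block DSP routine so the host may render with any
 * slice size up to the maximum. Costs BLOCK samples of latency. */
typedef struct {
    float in_fifo[BLOCK];    /* input being accumulated */
    float out_fifo[BLOCK];   /* last processed block (initially silence) */
    size_t fill;             /* samples accumulated in in_fifo */
} block_adapter_t;

/* Hypothetical fixed-block DSP: here just a -6 dB pass-through. */
static void process_block(const float *in, float *out) {
    for (size_t i = 0; i < BLOCK; ++i) out[i] = in[i] * 0.5f;
}

static void adapter_render(block_adapter_t *a,
                           const float *in, float *out, size_t n) {
    for (size_t i = 0; i < n; ++i) {
        out[i] = a->out_fifo[a->fill];   /* read: one block behind */
        a->in_fifo[a->fill] = in[i];     /* write: accumulate input */
        if (++a->fill == BLOCK) {        /* block full: process it */
            process_block(a->in_fifo, a->out_fifo);
            a->fill = 0;
        }
    }
}
```

Feeding the 193-187-392-192-2-7-403 sequence (1376 samples total) through
this adapter yields 512 samples of silence followed by the processed signal,
regardless of how the calls were sliced.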
There is one more point that host programmers often do not cover
correctly when running plugins with arbitrary slice sizes:
the timing information (ppq position/sample position). I have found
that many of the non-major hosts have issues here. Often they only
update the timing information once per audio-card buffer, or their
interpolation is plainly wrong, jitters, jumps forward and backward,
and is prone to rounding errors.
But even some major hosts end up rendering some audio in the UI
thread, switching to the audio thread later, and messing up the
timing (repeating song times in consecutive buffers etc.) instead
of keeping it nice and steady.
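A robust approach is to derive the musical position of every sub-slice
from a single absolute sample counter instead of re-interpolating per
hardware buffer (a sketch assuming constant tempo; the names are made up):

```c
/* Derive the ppq (quarter-note) position for each render slice from an
 * absolute sample counter, so consecutive slices line up exactly no
 * matter how the hardware buffer was subdivided. */
typedef struct {
    double sample_rate;             /* e.g. 48000.0 */
    double tempo_bpm;               /* assumed constant here */
    unsigned long long sample_pos;  /* absolute position in samples */
} transport_t;

/* ppq = seconds * beats-per-second = (samples / rate) * (bpm / 60) */
static double transport_ppq(const transport_t *t) {
    return (double)t->sample_pos / t->sample_rate * t->tempo_bpm / 60.0;
}

/* Advance by exactly the frames just rendered: no drift, no jitter. */
static void transport_advance(transport_t *t, unsigned long long frames) {
    t->sample_pos += frames;
}
```

Because the position is always recomputed from the same counter, repeated
or backward-jumping song times between consecutive buffers cannot occur.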
just my 2ct.
cu
-timo
_______________________________________________
Coreaudio-api mailing list (email@hidden)