Re: fixed buffer sizes?
- Subject: Re: fixed buffer sizes?
- From: Paul Davis <email@hidden>
- Date: Tue, 16 Feb 2010 14:58:06 -0500
On Tue, Feb 16, 2010 at 2:25 PM, Timo K <email@hidden> wrote:
>> 2) if not, is there any interest in extending auval to detect
>> some specific cases that we've found where plugins break (badly)
>> when the buffer size (i.e. the frame count passed to AudioUnitRender)
>> is varied (but remains below the maximum-frames-per-slice property)?
>> The behaviour we've seen will never be revealed by testing a single
>> plugin, even with multiple instances of the plugin. This makes the
>> use of auval to detect the problem rather difficult ...
>
> I am not sure if such a behaviour can be explicitly tested. How do
> they "break"? Do they crash? Do they overwrite memory they shouldn't?
it's not entirely clear what they do internally, but from a user
perspective the result is a varying set of awful noises. we've
established to a fairly high level of confidence that the problem
occurs when either of two things happens:
1) we make a render call with a frame count other than the last one
used (workaround: global reset before render if the frame count
doesn't match; see the sketch after this list)
2) we have *2+* plugin instances from this family, and for one reason
or another, we make a render call on one of them that has a different
frame count from the one we use with the other(s). (workaround: the
fix from (1) plus reducing the situations that can lead to this
occurring by trying to avoid split-block processing).
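To make (1) concrete, here is a minimal sketch of the reset-before-render
guard in C. AudioUnitReset() and AudioUnitRender() are the real CoreAudio
calls; the struct and function names are hypothetical illustration, not
our actual code:

#include <AudioUnit/AudioUnit.h>

typedef struct {
    AudioUnit unit;
    UInt32    last_nframes;   /* frame count of the previous render call */
} PluginInstance;

static OSStatus
render_with_reset_guard (PluginInstance* p,
                         AudioUnitRenderActionFlags* flags,
                         const AudioTimeStamp* ts,
                         UInt32 bus,
                         UInt32 nframes,
                         AudioBufferList* bufs)
{
    if (nframes != p->last_nframes) {
        /* workaround (1): global reset before render whenever the
         * frame count does not match the previous call */
        AudioUnitReset (p->unit, kAudioUnitScope_Global, 0);
        p->last_nframes = nframes;
    }
    return AudioUnitRender (p->unit, flags, ts, bus, nframes, bufs);
}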
Either of these behaviours by us (the host) will pretty much
predictably result in noise glitches that vary from irritating to
ear-threatening. Our workarounds are limited solely to the particular
plugins we've observed this problem with. However, the plugins will
pass auval's tests with no issues. I believe that auval cannot
possibly detect the problem in (1) even though it could exercise the
pattern - detecting it would require knowing what the signal output of
a plugin should look like, which is clearly impossible.
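For what it's worth, the stimulus side of such a test is easy to
sketch. Something like the following would exercise the
varying-frame-count pattern (a hypothetical harness, not auval code;
only the AudioUnit calls and types are real, and it assumes the
canonical deinterleaved Float32 format) - but it has no reference
output to judge the result against:

#include <AudioUnit/AudioUnit.h>

static OSStatus
render_varying_block_sizes (AudioUnit unit, AudioBufferList* bufs,
                            UInt32 max_frames)
{
    /* hypothetical pattern of frame counts, all kept at or below
     * kAudioUnitProperty_MaximumFramesPerSlice */
    const UInt32 sizes[] = { 64, 512, 17, 256, 1, 128 };

    AudioTimeStamp ts = { 0 };
    ts.mFlags = kAudioTimeStampSampleTimeValid;

    for (unsigned i = 0; i < sizeof (sizes) / sizeof (sizes[0]); ++i) {
        UInt32 n = sizes[i] <= max_frames ? sizes[i] : max_frames;

        /* bufs must be allocated for max_frames by the caller;
         * resize each buffer to the current frame count */
        for (UInt32 b = 0; b < bufs->mNumberBuffers; ++b) {
            bufs->mBuffers[b].mDataByteSize = n * sizeof (Float32);
        }

        AudioUnitRenderActionFlags flags = 0;
        OSStatus err = AudioUnitRender (unit, &flags, &ts, 0, n, bufs);
        if (err != noErr) {
            return err;
        }
        ts.mSampleTime += n;
    }
    return noErr;
}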
> With the years, I found that host programmers want to achieve mainly
> two things with this strategy:
>
> 1) lower the latency for live input
> 2) providing sample accurate automation, if the API does not provide
> sample accurate parameter changes.
in our case, it would be (2) combined with a buffer-free approach to
latency compensation - rather than feed tracks through a delay line,
we instead delay the onset of feeding on-disk data into the signal
pathway. this can result in different frame counts being used
for plugins on different tracks just as playback starts up, since we
feed small amounts of silence into the signal pathway before the disk
data starts.
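A sketch of how that startup path can work, with entirely hypothetical
names for the machinery: each track owes some frames of silence at
startup, and when the silence runs out mid-block, the block is split,
so plugins on neighbouring tracks render with different frame counts at
the same song time:

#include <string.h>
#include <stdint.h>

typedef struct {
    int64_t silence_remaining;  /* latency-compensation offset, in frames */
} Track;

static void
run_track (Track* t, float* buf, uint32_t nframes,
           void (*read_from_disk) (float*, uint32_t),
           void (*run_plugins) (float*, uint32_t))
{
    if (t->silence_remaining >= nframes) {
        /* whole block is still silent */
        memset (buf, 0, nframes * sizeof (float));
        run_plugins (buf, nframes);
        t->silence_remaining -= nframes;
        return;
    }

    uint32_t silent = (uint32_t) t->silence_remaining;
    t->silence_remaining = 0;

    if (silent > 0) {
        /* split block: silence first, then the start of the disk data.
         * the two plugin runs use different frame counts, and tracks
         * with different offsets split at different points */
        memset (buf, 0, silent * sizeof (float));
        run_plugins (buf, silent);
    }

    read_from_disk (buf + silent, nframes - silent);
    run_plugins (buf + silent, nframes - silent);
}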
> For 2: The APIs have changed in that perspective, additionally there
> is actually no serious plugin that does not smooth their controller
> updates at least a bit so anything below fractions of 16 samples is
> simply wasted anyways.
although i broadly agree with this, it's not necessarily true of toggle
parameters. some plugins may do a little xfade when these are changed,
but some may not, since it's quite complex to achieve (you have to
render twice ...)
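To spell out why it means rendering twice: a de-clicked toggle amounts
to something like the sketch below, where process_block() is a purely
illustrative stand-in for a plugin's internal render routine. The
awkward part in real life is that the two calls need independent copies
of the DSP state:

#include <stdint.h>

static void
toggle_with_xfade (float* out, const float* in, uint32_t nframes,
                   void (*process_block) (float*, const float*,
                                          uint32_t, int),
                   int old_state, int new_state,
                   float* scratch /* at least nframes floats */)
{
    /* render the same input twice, once per toggle state; a real
     * plugin would need two copies of its processing state here */
    process_block (out,     in, nframes, old_state);
    process_block (scratch, in, nframes, new_state);

    /* linear crossfade from the old rendering to the new one */
    for (uint32_t i = 0; i < nframes; ++i) {
        float w = (float) (i + 1) / (float) nframes;
        out[i] = (1.0f - w) * out[i] + w * scratch[i];
    }
}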
> But even some major hosts end up rendering some audio in the
> UI thread, switching to the audio thread later, but messing up
> the timing (repeating song times in consecutive buffers etc.)
> instead of keeping it nice and steady.
i have yet to see any audio plugin API specification that describes
thread models in anything even remotely approaching the necessary
level of detail.