Re: render inNumberFrames vs. pre/postRender inNumberFrames
- Subject: Re: render inNumberFrames vs. pre/postRender inNumberFrames
- From: William Stewart <email@hidden>
- Date: Mon, 26 Jun 2006 17:50:23 -0700
On 26/06/2006, at 5:05 PM, Evan Olcott wrote:
> I think this might count as a bug report, but I thought I'd check
> here first:
> I have an AUFormatter with its render callback set to a routine
> in my app, gathering data to render.
OK - we have no AUFormatter, but I presume you mean some kind of
format converter AU (one that can pull a different number of input
samples than its requested output).
I think specifically (given your discussion about sample rates) that
you are describing this AU:
aufc conv appl - Apple: AUConverter
> The same AUFormatter has a pre/post render notify callback
> attached to it as well, assigning automation (in the pre-render,
> technically).
Yes - your statement below is correct - the pre/post is a
notification that occurs BEFORE the AU does any work, and then AFTER
it has done work.
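As an aside, a host sees both phases from a single render-notify proc by checking the action flags. Here is a minimal sketch; the type and constant declarations are stand-ins mirroring the AudioUnit headers so the example is self-contained (in a real host they, and the registration call AudioUnitAddRenderNotify, come from <AudioToolbox/AudioToolbox.h>):

```c
#include <stdio.h>

/* Stand-ins mirroring the AudioUnit headers so this sketch is
   self-contained; real code includes <AudioToolbox/AudioToolbox.h>. */
typedef unsigned int AudioUnitRenderActionFlags;
typedef unsigned int UInt32;
enum {
    kAudioUnitRenderAction_PreRender  = (1 << 2),
    kAudioUnitRenderAction_PostRender = (1 << 3)
};

/* Called once with PreRender set (before the AU does any work) and
   once with PostRender set (after).  inNumberFrames is always the
   number of *output* frames the AU is being asked to produce. */
static void RenderNotify(AudioUnitRenderActionFlags *ioActionFlags,
                         UInt32 inNumberFrames)
{
    if (*ioActionFlags & kAudioUnitRenderAction_PreRender) {
        /* schedule ramped parameters here; offsets are output frames */
        printf("pre-render: %u output frames\n", inNumberFrames);
    }
    if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
        printf("post-render: %u output frames\n", inNumberFrames);
    }
}
```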
> When I render a file whose sample rate is different from the
> hardware's rate, I change the sample rate of the AUFormatter's
> input stream, which gathers data accordingly:
> - if the machine is running at 44.1kHz and the source file is
> 96kHz, the render call gets asked for the appropriate number of
> samples (if the machine has a 512 sample buffer, the render call
> gets asked for 1114+ samples at a time, which is fine)
> However, the pre- and post-render callbacks always reference a
> 512 sample buffer! This makes me calculate the sample range
> incorrectly, so my ramped automation is off - I'm getting the
> wrong length to work with.
> Shouldn't the pre-render calls represent the number of samples
> it's about to ask for, instead of the system's buffer size?
No - and it also raises the question of which axis you would
use when specifying the sample offset to, say, AudioUnitSetParameter.
We expect this to be on the output axis - you can see this in the
AUBase implementation for scheduling parameters (where it
breaks up the render operation based on the output buffers), and in
the music sequence/player we schedule in the pre-render notification,
so that is also on the output axis.
> Would I be forced to do some "length math" to get the proper
> number of samples about to be asked for so I can preRender my
> automation properly?
This is why we schedule on the output axis - otherwise there would be
no way to know - and at the time it schedules, the only fixed value a
host can use to schedule a parameter against is how many samples it
is next going to ask for in the AU's render call.
Bill
Further reading shows that the *preRender* and *postRender* calls
are always in reference to the *output* of the audio unit to which
they are attached. That explains why I'm getting buffer size & info
based on the AUFormatter's output buffer size, not on what the AU is
"about to grab". Moderately inconvenient, but that's the way it is.
Just writing it out so it goes into the mailing list DB in case
someone searches for this.
-ev
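A hypothetical illustration of the bookkeeping Evan describes: since the pre-render notification reports *output* frames, a running position on the source file's sample axis has to be derived from the rate ratio. All names here are mine, not CoreAudio API:

```c
/* Hypothetical bookkeeping: the pre-render notification hands us
   inNumberFrames on the OUTPUT axis, so the span of source-file
   samples consumed per render cycle must be derived from the ratio. */
typedef struct {
    double sourceRate; /* e.g. 96000.0, the file's sample rate   */
    double outputRate; /* e.g. 44100.0, the hardware sample rate */
    double sourcePos;  /* running position, in source-file samples */
} AutomationClock;

/* Advance by one output buffer; returns how many source-file samples
   this render cycle consumes - the length to ramp automation over. */
static double AdvanceClock(AutomationClock *clock, unsigned int outFrames)
{
    double span = (double)outFrames * clock->sourceRate / clock->outputRate;
    clock->sourcePos += span;
    return span;
}
```

Calling AdvanceClock once per pre-render notification keeps automation ramps aligned with the source file even though the notification only reports the output buffer size.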
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden