Re: AUTimePitch -- edge artifacts
- Subject: Re: AUTimePitch -- edge artifacts
- From: Brian Whitman <email@hidden>
- Date: Mon, 28 Nov 2005 19:16:12 -0500
Hi Chris, I see that the callback might get called multiple times per
render and can request more samples than I ask for.
Ignoring stretching (say the stretch was at 1.0) -- in my example, I
looked at the first callback and saw that it starts at synth buffer
position 0 and asks for 11920 samples total during the render when I
ask it to analyse 8192 samples.
But I can't then increase my synth buffer pointer by 11920 samples--
the next time I ask for 8192 samples I want those samples to be read
starting at synth buffer pointer 8192 as the returned rendered data
represents only 8192 samples.
So that was the purpose of "askFor"-- to advance the starting
position within the synth buffer every time I ask for more samples.
Internal to the AU callback, the mSampleTime takes care of sample
position within the analysis buffer.
I have a feeling I'm missing something obvious here though. I'll keep
playing with my pointers.
Brian
On Nov 28, 2005, at 5:12 PM, Chris Rogers wrote:
Brian,
It looks like your problem is the way that you're advancing
def->processIndex by askFor (last line of getNMoreSamplesAtRate()).
You're calculating askFor as:
UInt32 askFor = (UInt32) roundf((float)n*rate);
But you can't make this assumption at all. Your callback function
memoryBufferCallback() may be called multiple times
per render cycle (each call to AudioUnitRender() ). Also, the
total number of sample-frames asked for per render cycle
may not necessarily add up to the value you calculate for "askFor".
You need to advance def->processIndex in memoryBufferCallback()
each time it's called according to the actual number
of sample-frames requested.
Chris Rogers
Core Audio
Apple Computer
On Nov 28, 2005, at 9:56 AM, Brian Whitman wrote:
Following up on my mail from Friday, I got the callback mode
working but am encountering some edge-effect badness of the
AUTimePitch unit. Briefly, I'd like to have a synthesized buffer
(for our sake, this buffer is infinitely long or loops) that I can
control the stretch rate of through the AUTimePitch unit. I'm
finding that the unit is either asking for or returning some
unexpected samples at the start of each process chunk that come
out as "clicks" in playback-- I've verified this by rendering a
small buffer at a "stretch rate" of 1.0 and A/Bing it in Matlab
with the original input.
What seems to be happening is that there are artifacts coming from
the AU's input-- the clicks change rate, so it sounds like the AU
is getting misaligned data and then timestretching it. At a rate
of 1, the clicks are every buffer length (the n that I ask the AU
to synth is always 8192 samples in this case), so they happen
right at the start of getNMoreSamplesAtRate below. Plotting out the
artifacts shows something that looks like noise that could be
filtered out, but I doubt it's expected behavior. Otherwise
everything is to spec-- the samples are in the right position and
the stretch is fine.
I cannot find any sample code that uses an Offline unit in such a
"real time" manner so I had to hack it out myself, but as the
AULab file generator->AUTimePitch path does not exhibit this
behavior, I assume it's something I've implemented wrongly. I've
put the necessary code below; I've mixed James McCartney's
Sinewave Cocoa example with auprocess and added a "stretch"
parameter.
I am sure this is something obvious, but it's hard slogging
through this with no obvious sample code to look against. I hope
at least that my mess helps someone else looking through the
archives later!
From what I understand, I have to create a callback that reads
from my synthesis buffer whenever the AU needs more data. This
looks like:
OSStatus memoryBufferCallback(void *inRefCon,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp *inTimeStamp,
                              UInt32 inBusNumber, UInt32 inNumberFrames,
                              AudioBufferList *ioData)
{
    sinewavedef *def = (sinewavedef *)inRefCon; // get access to Sinewave's data
    SInt64 sampleStart = SInt64(inTimeStamp->mSampleTime);
    // set up a temp buffer to grab the next inNumberFrames samples
    float *tempbuf = (float *)malloc(inNumberFrames * sizeof(float));
    AudioBuffer *buf = ioData->mBuffers;
    memset((Byte *)buf->mData, 0, buf->mDataByteSize);
    uint samplePos = 0;
    for (UInt32 i = 0; i < inNumberFrames; i++) {
        samplePos = (sampleStart + i + def->processIndex) % def->synthBufferLen;
        tempbuf[i] = def->synthBuffer[samplePos];
    }
    // now copy the temp buffer into the AU read buffer
    memcpy(buf[0].mData, tempbuf, buf->mDataByteSize);
    free(tempbuf);
    return noErr;
}
In the audio playback callback (straight from James' code) we set
a semaphore when we wrap around the buffer. This then tells a
spawned thread (which just polls the semaphore while sound is
playing) to kick off a process to render n samples of audio
through the AU. n is the size of our playback buffer that the
audio playback callback spins through. When that buffer is
finished being played we swap in the AU rendered buffer. The code
that generates n samples given a stretch rate is below; the rate is
set on the AU using AudioUnitSetParameter as the slider changes.
// puts n more samples in renderBuffer (which is malloc'ed properly already)
void getNMoreSamplesAtRate(void *defptr, uint n, float rate)
{
    sinewavedef *def = (sinewavedef *)defptr; // get access to Sinewave's data
    // how many samples to ask the AU to analyse to return n samples.
    // This changes per stretch rate, obviously.
    UInt32 askFor = (UInt32)roundf((float)n * rate);
    UInt32 numFrames = def->processor->MaxFramesPerRender();
    int synthIndex = 0;
    OSErr err;
    // set up the AU buffer list
    AUOutputBL outputList(*def->streamdescription);
    err = def->processor->Initialize(*def->streamdescription, askFor);
    bool outisDone = false;
    err = def->processor->OfflineAUPreflight(askFor, outisDone);
    bool isDone = false;
    bool isSilence = false;
    bool needsPostProcessing = false;
    float *tempbuf; // pointer to returned data from the AU, mem already allocated
    // This is the render -- we should have generated n samples given
    // askFor samples in.
    while (!isDone) {
        outputList.Prepare(); // have to do this every time...
        err = def->processor->Render(outputList.ABL(), numFrames,
                                     isSilence, &isDone, &needsPostProcessing);
        AudioBuffer *buf = outputList.ABL()->mBuffers;
        tempbuf = (float *)buf[0].mData;
        for (UInt32 j = 0; j < numFrames; j++) {
            def->renderBuffer[synthIndex++] = tempbuf[j];
        }
    }
    // when we are done rendering we need to increase our play
    // position to know where to analyse next time!
    def->processIndex = (def->processIndex + askFor) % def->synthBufferLen;
}
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden
--
Brian Whitman. http://variogr.am/
The Echo Nest Corporation
email@hidden