AUTimePitch -- edge artifacts
- Subject: AUTimePitch -- edge artifacts
- From: Brian Whitman <email@hidden>
- Date: Mon, 28 Nov 2005 12:56:45 -0500
Following up on my mail from Friday, I got the callback mode working
but am encountering some edge-effect badness from the AUTimePitch unit.
Briefly, I'd like to have a synthesized buffer (for our purposes, this
buffer is infinitely long or loops) whose stretch rate I can control
through the AUTimePitch unit. I'm finding that the unit is either
asking for or returning some unexpected samples at the start of each
process chunk, which come out as "clicks" in playback -- I've verified
this by rendering a small buffer at a "stretch rate" of 1.0 and A/Bing
it in Matlab against the original input.
What seems to be happening is that there are artifacts coming from
the AU's input -- the clicks change rate, so it sounds like the AU is
getting misaligned data and then timestretching it. At a rate of 1,
the clicks come every buffer length (the n I ask the AU to synthesize
is always 8192 samples in this case), so they happen right at the
start of getNMoreSamplesAtRate below. Plotting the artifacts shows
something that looks like noise that could be filtered out, but I
doubt it's expected behavior. Otherwise everything is to spec -- the
samples are in the right position and the stretch is fine.
I cannot find any sample code that uses an Offline unit in such a
"real time" manner, so I had to hack it out myself; but since the
AULab file generator->AUTimePitch path does not exhibit this behavior,
I assume it's something I've implemented incorrectly. I've put the
necessary code below: I've mixed James McCartney's Sinewave Cocoa
example with auprocess and added a "stretch" parameter.
I am sure this is something obvious, but it's hard slogging through
this with no obvious sample code to check against. I hope at least
that my mess helps someone else looking through the archives later!
From what I understand, I have to create a callback that reads from
my synthesis buffer whenever the AU needs more data. This looks like:
OSStatus memoryBufferCallback(void *inRefCon,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp *inTimeStamp,
                              UInt32 inBusNumber,
                              UInt32 inNumberFrames,
                              AudioBufferList *ioData)
{
    sinewavedef *def = (sinewavedef *)inRefCon; // get access to Sinewave's data
    SInt64 sampleStart = SInt64(inTimeStamp->mSampleTime);

    // set up a temp buffer to grab the next inNumberFrames samples
    float *tempbuf = (float *)malloc(inNumberFrames * sizeof(float));
    AudioBuffer *buf = ioData->mBuffers;
    memset((Byte *)buf->mData, 0, buf->mDataByteSize);

    // pull samples out of the looping synthesis buffer, offset by where
    // the previous process pass left off
    UInt32 samplePos = 0;
    for (UInt32 i = 0; i < inNumberFrames; i++) {
        samplePos = (sampleStart + i + def->processIndex) % def->synthBufferLen;
        tempbuf[i] = def->synthBuffer[samplePos];
    }

    // now copy the temp buffer into the AU read buffer
    memcpy(buf[0].mData, tempbuf, buf->mDataByteSize);
    free(tempbuf);
    return noErr;
}
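For reference, here's roughly how that callback gets wired up. This is
a sketch from memory rather than a paste -- I'm assuming the unit is
wrapped in a CAAUProcessor (from PublicUtility, as in the auprocess
sample) that def->processor points at, and I'm going from memory on the
exact ComponentDescription, so double-check the constants:

// Find the Apple time-pitch unit, wrap it in a CAAUProcessor, and hand
// it the input callback above so renders pull from our synthesis buffer.
// (Since I'm using it as an Offline unit I open the OfflineEffect type;
// the realtime variant would be kAudioUnitType_FormatConverter.)
CAComponentDescription desc(kAudioUnitType_OfflineEffect,
                            kAudioUnitSubType_TimePitch,
                            kAudioUnitManufacturer_Apple);
CAComponent comp(desc);
def->processor = new CAAUProcessor(comp);

AURenderCallbackStruct input;
input.inputProc = memoryBufferCallback;
input.inputProcRefCon = def; // the same sinewavedef the callback casts out
def->processor->EstablishInputCallback(input);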
In the audio playback callback (straight from James' code) we set a
semaphore when we wrap around the buffer. This tells a spawned thread
(which just polls the semaphore while sound is playing) to kick off a
process that renders n samples of audio through the AU, where n is the
size of the playback buffer that the audio playback callback spins
through. When that buffer finishes playing, we swap in the AU-rendered
buffer. The code that generates n samples at a given stretch rate is
below; the rate is set on the AU using AudioUnitSetParameter as the
slider changes (a sketch of that call follows the code).
// puts n more samples in renderBuffer (which is malloc'ed properly already)
void getNMoreSamplesAtRate(void *defptr, UInt32 n, float rate)
{
    sinewavedef *def = (sinewavedef *)defptr; // get access to Sinewave's data

    // how many samples to ask the AU to analyse to return n samples;
    // this changes per stretch rate, obviously
    UInt32 askFor = (UInt32)roundf((float)n * rate);
    UInt32 numFrames = def->processor->MaxFramesPerRender();
    int synthIndex = 0;
    OSErr err;

    // set up the AU buffer list
    AUOutputBL outputList(*def->streamdescription);
    err = def->processor->Initialize(*def->streamdescription, askFor);

    bool outIsDone = false;
    err = def->processor->OfflineAUPreflight(askFor, outIsDone);

    bool isDone = false;
    bool isSilence = false;
    bool needsPostProcessing = false;
    float *tempbuf; // pointer to returned data from the AU, mem already allocated

    // This is the render -- we should have generated n samples given
    // askFor samples in.
    while (!isDone) {
        outputList.Prepare(); // have to do this every time...
        err = def->processor->Render(outputList.ABL(), numFrames,
                                     isSilence, &isDone, &needsPostProcessing);
        AudioBuffer *buf = outputList.ABL()->mBuffers;
        tempbuf = (float *)buf[0].mData;
        for (UInt32 j = 0; j < numFrames; j++) {
            def->renderBuffer[synthIndex++] = tempbuf[j];
        }
    }

    // when we are done rendering we need to increase our play position
    // so we know where to analyse next time!
    def->processIndex = (def->processIndex + askFor) % def->synthBufferLen;
}
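And for completeness, the slider action just pushes the new rate at the
unit with a plain AudioUnitSetParameter call -- something like the
sketch below. kTimePitchParam_Rate is the rate parameter ID from
AudioUnitParameters.h; how you dig the raw AudioUnit out of the
CAAUProcessor I'm leaving as an assumption, so check the PublicUtility
headers for the accessor.

// Sketch of the rate-setting call the slider triggers. "timePitchAU"
// is assumed to be the raw AudioUnit behind def->processor.
void setStretchRate(AudioUnit timePitchAU, float rate)
{
    // the rate parameter lives in the global scope of the time-pitch unit
    AudioUnitSetParameter(timePitchAU, kTimePitchParam_Rate,
                          kAudioUnitScope_Global, 0 /* element */,
                          rate, 0 /* buffer offset in frames */);
}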