Using AudioConverterConvertBuffer?
- Subject: Using AudioConverterConvertBuffer?
- From: Wesley Miaw <email@hidden>
- Date: Tue, 7 May 2002 16:56:17 -0700
I've gotten the DefaultAudioUnit to work by looking at the example
provided with the Developer tools, but I'm having trouble getting my
audio converted in my AudioConverterInputDataProc from the source sample
rate to the audio unit's sample rate.
I've set my source description as follows:
mashStreamBasicDescription_.mSampleRate = 8000.0;
mashStreamBasicDescription_.mFormatID = kAudioFormatLinearPCM;
mashStreamBasicDescription_.mFormatFlags = kLinearPCMFormatFlagIsFloat |
    kLinearPCMFormatFlagIsBigEndian | kLinearPCMFormatFlagIsPacked;
mashStreamBasicDescription_.mBytesPerPacket = 4;
mashStreamBasicDescription_.mFramesPerPacket = 1;
mashStreamBasicDescription_.mBytesPerFrame = 4;
mashStreamBasicDescription_.mChannelsPerFrame = 1;
mashStreamBasicDescription_.mBitsPerChannel = 32;
The description returned by the DefaultAudioUnit is:
mSampleRate = 44100,
mFormatID = 1819304813,
mFormatFlags = 11,
mBytesPerPacket = 8,
mFramesPerPacket = 1,
mBytesPerFrame = 8,
mChannelsPerFrame = 2,
mBitsPerChannel = 32
So I successfully create an AudioConverterRef using these source and
destination descriptions, and try to use it in my
AudioConverterInputDataProc:
// Zero the output buffer if Write has not been called or there is no
// new data written. Otherwise copy one block of new data into the
// output buffer.
if (audio->writeFactor_ == -1 ||
    audio->writeFactor_ == audio->readFactor_) {
    bzero(audio->outputBuffer_, audio->outputBufferSize_);
} else {
    // Copy one block of new data into the output buffer.
    writeBufferPtr = audio->writeBuffer_ +
        audio->readFactor_ * audio->outputBufferSize_ /
        audio->mashStreamBasicDescription_.mBytesPerFrame / 2;
    err = AudioConverterConvertBuffer(inAudioConverter, 160,
        audio->writeBuffer_, &audio->outputBufferSize_,
        audio->outputBuffer_);
    if (err != noErr) return err;
    // memcpy(audio->outputBuffer_, writeBufferPtr,
    //     audio->outputBufferSize_);

    // Increment the readFactor, looping around if necessary.
    audio->readFactor_++;
    if (audio->readFactor_ == audio->ringBufferFactor_)
        audio->readFactor_ = 0;
}
audio->writeBuffer_ is a float[160] holding my 16-bit linear PCM values
as floats; this equals 20ms at 8000Hz. audio->outputBuffer_ is a
float[1280], and audio->outputBufferSize_ has a value of 1280 both
before and after calling AudioConverterConvertBuffer (double the
channels, upsampled from 8kHz to 44.1kHz).
The problem is after calling AudioConverterConvertBuffer,
audio->outputBuffer_ contains huge chunks of zero values, instead of the
160 floats of source data spread out over the 1280 floats sent to the
sound card. What am I doing wrong?
TIA,
Wes
_______________________________________________
coreaudio-api mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/coreaudio-api
Do not post admin requests to the list. They will be ignored.