Combining AUHAL input and an AudioConverter
- Subject: Combining AUHAL input and an AudioConverter
- From: Heath Raftery <email@hidden>
- Date: Sat, 07 May 2005 14:27:07 +1000
Hello there,
I've been trying for some time to allow sound input and compression in
my app. So far I've struggled through the process of getting sound in -
that appears to be working well now. I'm now stuck getting the input to
feed into the converter for compression. Here's what I have:
<CODE>
//connect to input device
...
//connect to input converter
AudioStreamBasicDescription asbdIn, asbdOut;
theStatus = AudioUnitGetProperty(*fInputUnit,
                                 kAudioUnitProperty_StreamFormat,
                                 kAudioUnitScope_Input, 0,
                                 &asbdIn, &theSize);
memset(&asbdOut, 0, sizeof(asbdOut));
asbdOut.mFormatID = kAudioFormatQUALCOMM;
asbdOut.mBytesPerFrame = asbdIn.mBytesPerFrame;
asbdOut.mBytesPerPacket = asbdIn.mBytesPerPacket;
asbdOut.mChannelsPerFrame = asbdIn.mChannelsPerFrame;
//I was hoping this would fill in some details of the QUALCOMM format,
//but alas, it does nothing despite trying many alternatives
theStatus = AudioFormatGetProperty(kAudioFormatProperty_FormatInfo,
                                   theSize, &asbdOut, &theSize, &asbdOut);
//so instead I fill it out manually...
asbdOut.mBitsPerChannel = 32;
asbdOut.mBytesPerFrame = 4;
asbdOut.mBytesPerPacket = 4;
asbdOut.mChannelsPerFrame = 1;
asbdOut.mFormatFlags = asbdIn.mFormatFlags;
asbdOut.mFormatID = kAudioFormatQUALCOMM;
asbdOut.mFramesPerPacket = 1;
asbdOut.mSampleRate = 512; //compression?
theStatus = AudioConverterNew(&asbdIn, &asbdOut, &fInputConverter);
</CODE>
All is well up to this point (despite having to fill the format out
manually). I then set the sample rate of the input AU to match the input
device (which sets it to 48000 instead of 44100, from memory). Then I
initialise a couple of buffers for the data:
<CODE>
err = AudioDeviceGetPropertyInfo(fInputDeviceID, 0, YES,
                                 kAudioDevicePropertyStreamConfiguration,
                                 &theSize, NULL);
fBufferList = (AudioBufferList *)malloc(theSize);
err = AudioDeviceGetProperty(fInputDeviceID, 0, YES,
                             kAudioDevicePropertyStreamConfiguration,
                             &theSize, fBufferList);
//try the variable buffer size first, falling back to BufferFrameSize
err = AudioUnitGetProperty(*fInputUnit,
                           kAudioDevicePropertyUsesVariableBufferFrameSizes,
                           kAudioUnitScope_Global, 0,
                           &bufferSizeFrames, &theSize);
if(err)
    err = AudioUnitGetProperty(*fInputUnit,
                               kAudioDevicePropertyBufferFrameSize,
                               kAudioUnitScope_Global, 0,
                               &bufferSizeFrames, &theSize);
fBufferList->mBuffers[0].mData = malloc(bufferSizeBytes);
//why isn't this set right in the get
//kAudioDevicePropertyStreamConfiguration call earlier???
fBufferList->mBuffers[0].mDataByteSize = bufferSizeBytes;
//and why is the number of channels set to 1, even though the asbdIn
//says two per frame?
//fBufferList->mBuffers[0].mNumberChannels = asbdIn.mChannelsPerFrame;
//and set up the converted buffer list... by making guesses...
fConvertedBufferList = (AudioBufferList *)malloc(sizeof(AudioBufferList));
fConvertedBufferList->mNumberBuffers = 1;
fConvertedBufferList->mBuffers[0].mNumberChannels = 1;
fConvertedBufferList->mBuffers[0].mDataByteSize = bufferSizeBytes; // / (44100/512);
fConvertedBufferList->mBuffers[0].mData =
    malloc(fConvertedBufferList->mBuffers[0].mDataByteSize);
</CODE>
Next I set the thing going and enter the callbacks. The first callback
is the audioArrived one:
<CODE>
AudioUnitRender(*fInputUnit, ioActionFlags, inTimeStamp, inBusNumber,
                inNumberFrames, fBufferList);
int convertedFrames = 0;
while(convertedFrames < inNumberFrames)
{
    UInt32 framesRequested = inNumberFrames - convertedFrames;
    AudioConverterFillComplexBuffer(fInputConverter,
                                    supplyDataForConversionProc, self,
                                    &framesRequested,
                                    fConvertedBufferList, NULL);
    convertedFrames += framesRequested;
}
</CODE>
Next my SupplyDataForConversion callback gets called:
<CODE>
for(i = 0; i < fBufferList->mNumberBuffers; ++i)
{
    ioData->mBuffers[i].mNumberChannels =
        fBufferList->mBuffers[i].mNumberChannels;
    ioData->mBuffers[i].mData = fBufferList->mBuffers[i].mData;
    ioData->mBuffers[i].mDataByteSize =
        fBufferList->mBuffers[i].mDataByteSize;
}
*ioNumberDataPackets = ioData->mBuffers[0].mDataByteSize / sizeof(Float32);
outDataPacketDescription = (AudioStreamPacketDescription **)
    malloc(sizeof(AudioStreamPacketDescription *));
*outDataPacketDescription = (AudioStreamPacketDescription *)
    malloc(sizeof(AudioStreamPacketDescription));
(*outDataPacketDescription)[0].mDataByteSize =
    ioData->mBuffers[0].mDataByteSize;
(*outDataPacketDescription)[0].mStartOffset = 0;
(*outDataPacketDescription)[0].mVariableFramesInPacket = 0;
</CODE>
And after that gets called twice, the app crashes with EXC_BAD_ACCESS.
Can anyone suggest where I might be going wrong, or where I might find
some resources to help with this task?
Regards,
Heath
--
____________________________________________________________________
| Heath Raftery |
| email@hidden |
| *The search for a new personality is futile; what is fruitful is |
| the interest the old personality can take in new activities* |
| - Cesare Pavese _\|/_ |
|___________________________________________________m(. .)m__________|