Hi Core Audio Lovers,
I was looking at the AudioMixerHost sample code, which (among other things) does the following:
- (void) setupStereoStreamFormat {

    // The AudioUnitSampleType data type is the recommended type for sample data in audio
    // units. This obtains the byte size of the type for use in filling in the ASBD.
    size_t bytesPerSample = sizeof (AudioUnitSampleType);

    // Fill the application audio format struct's fields to define a linear PCM,
    // stereo, noninterleaved stream at the hardware sample rate.
    stereoStreamFormat.mFormatID         = kAudioFormatLinearPCM;
    stereoStreamFormat.mFormatFlags      = kAudioFormatFlagsAudioUnitCanonical;
    stereoStreamFormat.mBytesPerPacket   = bytesPerSample;
    stereoStreamFormat.mFramesPerPacket  = 1;
    stereoStreamFormat.mBytesPerFrame    = bytesPerSample;
    stereoStreamFormat.mChannelsPerFrame = 2;                   // 2 indicates stereo
    stereoStreamFormat.mBitsPerChannel   = 8 * bytesPerSample;
    stereoStreamFormat.mSampleRate       = graphSampleRate;

    NSLog (@"The stereo stream format for the \"guitar\" mixer input bus:");
    [self printASBD: stereoStreamFormat];
}
which sets up a stereo stream format for later use.
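(For anyone who does not have the sample handy: printASBD: is just the sample's logging helper for the AudioStreamBasicDescription fields. From memory it does something roughly like the sketch below; I am not quoting the sample verbatim.)

- (void) printASBD: (AudioStreamBasicDescription) asbd {

    // Turn the four-character format ID into a printable C string.
    char formatID[5];
    UInt32 fourCC = CFSwapInt32HostToBig (asbd.mFormatID);
    memcpy (formatID, &fourCC, 4);
    formatID[4] = '\0';

    // Log every field of the ASBD so the stream format can be checked in the console.
    NSLog (@"  Sample Rate:         %10.0f",  asbd.mSampleRate);
    NSLog (@"  Format ID:           %10s",    formatID);
    NSLog (@"  Format Flags:        %10lu",   (unsigned long) asbd.mFormatFlags);
    NSLog (@"  Bytes per Packet:    %10lu",   (unsigned long) asbd.mBytesPerPacket);
    NSLog (@"  Frames per Packet:   %10lu",   (unsigned long) asbd.mFramesPerPacket);
    NSLog (@"  Bytes per Frame:     %10lu",   (unsigned long) asbd.mBytesPerFrame);
    NSLog (@"  Channels per Frame:  %10lu",   (unsigned long) asbd.mChannelsPerFrame);
    NSLog (@"  Bits per Channel:    %10lu",   (unsigned long) asbd.mBitsPerChannel);
}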
And:
NSLog (@"Setting stereo stream format for mixer unit \"guitar\" input bus");
result = AudioUnitSetProperty (
mixerUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
guitarBus,
&stereoStreamFormat,
sizeof (stereoStreamFormat)
);
which uses that format definition to set up a stereo stream on one of the mixer's input buses.
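(The sample checks result right after this call; a minimal version of that check would look roughly like the following, with the error handling being my own rather than the sample's:)

if (noErr != result) {
    // Setting the stream format failed -- log the OSStatus and bail out.
    // (If I remember correctly, the sample uses its own error-printing helper here.)
    NSLog (@"Error setting the stream format on the mixer input bus: %ld", (long) result);
    return;
}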
I do much the same thing in my own audio unit initialization, and what I get in the callback is an interleaved stereo buffer: FrameBuffer[0] is the first left sample and FrameBuffer[1] is the first right sample.
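For reference, my own format setup looks roughly like the sketch below (myStreamFormat is just a placeholder name, and I am quoting from memory, so the details may not be exact):

// Canonical 16-bit stereo: both channels share a single buffer,
// with left and right samples alternating inside each frame.
size_t bytesPerSample = sizeof (AudioSampleType);          // AudioSampleType is SInt16 on iOS

AudioStreamBasicDescription myStreamFormat = {0};
myStreamFormat.mFormatID         = kAudioFormatLinearPCM;
myStreamFormat.mFormatFlags      = kAudioFormatFlagsCanonical;
myStreamFormat.mChannelsPerFrame = 2;                            // stereo
myStreamFormat.mFramesPerPacket  = 1;
myStreamFormat.mBytesPerFrame    = 2 * bytesPerSample;           // left + right in the same frame
myStreamFormat.mBytesPerPacket   = myStreamFormat.mBytesPerFrame;
myStreamFormat.mBitsPerChannel   = 8 * bytesPerSample;
myStreamFormat.mSampleRate       = graphSampleRate;              // the hardware sample rate, as above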
In the Apple sample code, the buffer list carries two buffers, one for the left channel and one for the right, which I don't use yet. As a result, they can access the samples this way:
outSamplesChannelLeft = (AudioUnitSampleType *) ioData->mBuffers[0].mData;
if (isStereo) outSamplesChannelRight = (AudioUnitSampleType *) ioData->mBuffers[1].mData;
In my case, I only have access to ioData->mBuffers[0].mData, which, as said above, gives me interleaved samples.
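So in the render callback I currently pull the two channels out of that single buffer roughly like this (frame, left and right are just placeholder names, and inNumberFrames is the frame count passed to the callback):

AudioSampleType *frameBuffer = (AudioSampleType *) ioData->mBuffers[0].mData;

for (UInt32 frame = 0; frame < inNumberFrames; ++frame) {
    AudioSampleType left  = frameBuffer[2 * frame];        // even indices hold the left channel
    AudioSampleType right = frameBuffer[2 * frame + 1];    // odd indices hold the right channel
    // ... process left / right here ...
}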
Does one of the masters of the list know which line in the sample code makes the callback receive two buffers instead of a single interleaved one, and how that gets hooked up to the audio unit render callback?
In other words, when I enter my callback, ioData->mNumberBuffers is 1, and I don't understand how Apple manages to make it 2. Could somebody explain?
Thanks
Pat