Confusion with AudioStreamBasicDescription
- Subject: Confusion with AudioStreamBasicDescription
- From: Arshan Gailus <email@hidden>
- Date: Wed, 15 Sep 2010 23:30:51 -0400
Hi,
I'm new to programming with Core Audio, and am having a bit of trouble
wrapping my head around some nuances of the fields in an
AudioStreamBasicDescription when hosting AUs. One of the developer
docs on the topic has the following example:
size_t bytesPerSample = sizeof (AudioUnitSampleType);
AudioStreamBasicDescription stereoStreamFormat = {0};
stereoStreamFormat.mFormatID = kAudioFormatLinearPCM;
stereoStreamFormat.mFormatFlags = kAudioFormatFlagsAudioUnitCanonical;
stereoStreamFormat.mBytesPerPacket = bytesPerSample;
stereoStreamFormat.mBytesPerFrame = bytesPerSample;
stereoStreamFormat.mFramesPerPacket = 1;
stereoStreamFormat.mBitsPerChannel = 8 * bytesPerSample;
stereoStreamFormat.mChannelsPerFrame = 2; // 2 indicates stereo
stereoStreamFormat.mSampleRate = graphSampleRate;
My confusion is this: since mChannelsPerFrame is set to 2, this will
be a stereo stream, which I take to be interleaved. Each frame should
therefore contain two samples, one per channel. Why then are
mBytesPerPacket and mBytesPerFrame not 2 * bytesPerSample, given that
each frame holds two samples?
I'm sure I'm overlooking something rather straightforward here, but
can't seem to see what. Where am I going wrong?
Thanks in advance!
-Arshan
Coreaudio-api mailing list (email@hidden)