On Aug 30, 2013, at 10:43 AM, Will Pragnell <email@hidden> wrote:
Note that there's some difference in meaning relating to mChannelsPerFrame depending on whether you're dealing with interleaved or non-interleaved audio. From CoreAudioTypes.h:
Thanks, Will. I've been staring at this paragraph for hours. In my ASBD setup I have
audioFormat.mFormatFlags = kAudioFormatFlagsCanonical; // signed integers
Since the CA_PREFER_FIXED_POINT macro is 1 for both the iOS device and the Simulator, the Canonical flag expands to this definition from CoreAudioTypes.h:
kAudioFormatFlagsCanonical = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked,
So the kAudioFormatFlagIsNonInterleaved flag is 0, which implies the channels ARE interleaved, and I can disregard the second paragraph here (which I don't think I understand anyway), right?
Typically, when an ASBD is being used, the fields describe the complete layout of the sample data in the buffers that are represented by this description - where typically those buffers are represented by an AudioBuffer that is contained in an AudioBufferList.
However, when an ASBD has the kAudioFormatFlagIsNonInterleaved flag, the AudioBufferList has a different structure and semantic. In this case, the ASBD fields will describe the format of ONE of the AudioBuffers that are contained in the list, AND each AudioBuffer in the list is determined to have a single (mono) channel of audio data. Then, the ASBD's mChannelsPerFrame will indicate the total number of AudioBuffers that are contained within the AudioBufferList - where each buffer contains one channel. This is used primarily with the AudioUnit (and AudioConverter) representation of this list - and won't be found in the AudioHardware usage of this structure.
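To make sure I'm reading the interleaved case correctly, here's a sketch of what I think my canonical 16-bit stereo ASBD looks like once everything is written out. The 44.1 kHz rate and the helper name are just placeholders for illustration, not my actual values:

#include <AudioToolbox/AudioToolbox.h>

// Canonical interleaved 16-bit stereo on iOS (CA_PREFER_FIXED_POINT == 1).
static AudioStreamBasicDescription MakeCanonicalStereoASBD(void)
{
    AudioStreamBasicDescription asbd = {0};
    asbd.mSampleRate       = 44100.0;                    // placeholder rate
    asbd.mFormatID         = kAudioFormatLinearPCM;
    asbd.mFormatFlags      = kAudioFormatFlagsCanonical; // signed int | native endian | packed
    asbd.mChannelsPerFrame = 2;                          // interleaved: both channels share one buffer
    asbd.mBitsPerChannel   = 8 * sizeof(SInt16);
    asbd.mBytesPerFrame    = asbd.mChannelsPerFrame * sizeof(SInt16); // L+R = 4 bytes per frame
    asbd.mBytesPerPacket   = asbd.mBytesPerFrame;        // 1 frame per packet for linear PCM
    asbd.mFramesPerPacket  = 1;
    return asbd;
}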
However, re-reading that paragraph made me look more closely, and I do see another difference between Simulator and device.
In the Simulator I have 1 buffer with 2 identical channels and smooth, low-amplitude wavy samples, as Douglas observed earlier:
(lldb) expr ioData->mNumberBuffers
(UInt32) $0 = 1
(lldb) expr ioData->mBuffers[0].mNumberChannels
(UInt32) $1 = 2
(lldb) expr (SInt16*) ioData->mBuffers[0].mData
(SInt16 *) $2 = 0x087ce000 [-484,-484,-518,-518,-531,-531,-537,-537,-537,-537,-506,-506,-466,-466,-421,-421,-366,-366,-317,-317,-260,-260,-193,-193,-133,-133,-73,-73,-2,-2,64,64,124,124,185,185,245,245,296,296,326,326,346,346,367,367,386,386,390,390,380,380,368,368,340,340,310,310,287,287,251,251,213,213]
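If I've got that right, walking that single interleaved buffer in the render callback would look roughly like this (a sketch only; ioData and inNumberFrames are the usual render-callback arguments):

if (ioData->mNumberBuffers == 1 && ioData->mBuffers[0].mNumberChannels == 2) {
    // Interleaved frames: [L0, R0, L1, R1, ...]
    SInt16 *samples = (SInt16 *)ioData->mBuffers[0].mData;
    for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
        SInt16 left  = samples[2 * frame];      // channel 0
        SInt16 right = samples[2 * frame + 1];  // channel 1
        // ... do something with left/right ...
    }
}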
But on device I have 2 buffers with 1 channel each:
(lldb) expr ioData->mNumberBuffers
(UInt32) $3 = 2
(lldb) expr ioData->mBuffers[0].mNumberChannels
(UInt32) $5 = 1
and each of them seems to hold an identical copy of the loud, jagged data (even if I skip the zeros):
(lldb) expr (SInt16*) ioData->mBuffers[0].mData
(SInt16 *) $0 = 0x03f2f000 [0,14872,0,15008,0,14992,0,-18432,0,-17864,0,-18048,0,14888,0,14784,0,-18112,0,14592,0,14528,0,14816,0,-18176,0,14784,0,-18304,0,14952,0,14928,0,15088,0,15040,0,15096,0,15044,0,15140,0,15076,0,15068,0,15080,0,14920,0,14688,0,14988,0,15020,0,14984,0,-17920,0,-17776]
(lldb) expr (SInt16*) ioData->mBuffers[1].mData
(SInt16 *) $1 = 0x03f33000 [0,14872,0,15008,0,14992,0,-18432,0,-17864,0,-18048,0,14888,0,14784,0,-18112,0,14592,0,14528,0,14816,0,-18176,0,14784,0,-18304,0,14952,0,14928,0,15088,0,15040,0,15096,0,15044,0,15140,0,15076,0,15068,0,15080,0,14920,0,14688,0,14988,0,15020,0,14984,0,-17920,0,-17776]
I really don't understand where those values are coming from. They do not match anything the mic would be hearing.
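In case it helps anyone reproduce this, here's a rough render-callback fragment I could use to log the layout at run time, so the same code copes with both the Simulator's single interleaved buffer and the device's two mono buffers. The function name is made up, and the printf calls are only for debugging; they aren't something to leave in a real render callback:

#include <stdio.h>
#include <AudioToolbox/AudioToolbox.h>

static OSStatus InspectLayoutCallback(void *inRefCon,
                                      AudioUnitRenderActionFlags *ioActionFlags,
                                      const AudioTimeStamp *inTimeStamp,
                                      UInt32 inBusNumber,
                                      UInt32 inNumberFrames,
                                      AudioBufferList *ioData)
{
    if (ioData->mNumberBuffers == 1) {
        // Interleaved: one buffer, mNumberChannels channels woven together.
        SInt16 *interleaved = (SInt16 *)ioData->mBuffers[0].mData;
        printf("interleaved: %u channels, first sample %d\n",
               (unsigned)ioData->mBuffers[0].mNumberChannels, (int)interleaved[0]);
    } else {
        // Non-interleaved: one mono buffer per channel;
        // mNumberBuffers == ASBD.mChannelsPerFrame.
        for (UInt32 b = 0; b < ioData->mNumberBuffers; b++) {
            SInt16 *mono = (SInt16 *)ioData->mBuffers[b].mData;
            printf("buffer %u: first sample %d\n", (unsigned)b, (int)mono[0]);
        }
    }
    return noErr;
}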