Hello everyone, I have been lurking but am finally so stumped I have to cry out for any hint!
I have worked through Adamson & Avila's masterpiece, I've spent months working with the Simulator, and I'm starting to get the hang of all this. I'm analyzing mic input for a simple chromatic tuner, and it's fine on the Simulator, but when I finally plunked down $99 and put it on a real iOS device, I started seeing totally different data in my render callback.
I'm using an AUGraph with RemoteIO components for input and output (with a file input swapped in for unit testing), and I think I have my AudioStreamBasicDescription configured to ask for 16-bit signed integers:
// Set up an ASBD in the iPhone canonical format.
// Using signed int PCM so we don't have to mess with 8.24 fixed point math.
memset(&audioFormat, 0, sizeof(audioFormat));
audioFormat.mSampleRate       = hardwareSampleRate;
audioFormat.mFormatID         = kAudioFormatLinearPCM;
audioFormat.mFormatFlags      = kAudioFormatFlagsCanonical; // signed integers, right?
audioFormat.mBitsPerChannel   = 16;
audioFormat.mChannelsPerFrame = 2;
audioFormat.mFramesPerPacket  = 1;
audioFormat.mBytesPerFrame    = 4; // 2 channels x 2 bytes, interleaved
audioFormat.mBytesPerPacket   = 4;
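For reference, this is roughly how I'm applying that format to the RemoteIO unit. A minimal sketch, assuming the usual RemoteIO conventions (element 1 is the mic side, element 0 the speaker side); remoteIOUnit is just my name for the AudioUnit pulled out of the graph, and error checking is omitted:

// Mic data comes out of RemoteIO on the output scope of element 1...
AudioUnitSetProperty(remoteIOUnit,
                     kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Output,
                     1,                    // input element (mic)
                     &audioFormat,
                     sizeof(audioFormat));

// ...and playback data goes into RemoteIO on the input scope of element 0.
AudioUnitSetProperty(remoteIOUnit,
                     kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Input,
                     0,                    // output element (speaker)
                     &audioFormat,
                     sizeof(audioFormat));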
But the problem starts in my PostRender callback, where I cast the mData member to the SInt16 pointer I'm expecting:
SInt16* samples = (SInt16*)(ioData->mBuffers[i].mData);
It works on the Simulator, but is all messed up on the device.
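In case it matters, the cast happens inside a render-notify callback registered with AUGraphAddRenderNotify, in the post-render phase. A trimmed-down sketch of what mine looks like (MyTunerState is a stand-in for my own analysis state, and the actual pitch analysis is elided):

#include <AudioToolbox/AudioToolbox.h>

typedef struct {
    double detectedPitchHz; // stand-in for my real analysis state
} MyTunerState;

static OSStatus MyRenderNotify(void                       *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp       *inTimeStamp,
                               UInt32                      inBusNumber,
                               UInt32                      inNumberFrames,
                               AudioBufferList            *ioData)
{
    // Only inspect the buffers after the unit has rendered into them.
    if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
        MyTunerState *state = (MyTunerState *)inRefCon;
        for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
            SInt16 *samples = (SInt16 *)(ioData->mBuffers[i].mData);
            // ... feed inNumberFrames worth of samples into the analysis ...
        }
    }
    return noErr;
}

// Registered once, after the graph is built:
// AUGraphAddRenderNotify(graph, MyRenderNotify, &tunerState);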
First, on the Simulator, I see:

(lldb) expr taas->asbd
(AudioStreamBasicDescription) $1 = {
  (Float64) mSampleRate = 44100
  (UInt32) mFormatID = 1819304813
  (UInt32) mFormatFlags = 12
  (UInt32) mBytesPerPacket = 4
  (UInt32) mFramesPerPacket = 1
  (UInt32) mBytesPerFrame = 4
  (UInt32) mChannelsPerFrame = 2
  (UInt32) mBitsPerChannel = 16
  (UInt32) mReserved = 0
}

(lldb) type summary add -s "${var[0-63]}" "SInt16 *"
(lldb) frame variable samples
(SInt16 *) samples = 0x06fb9000 [13,13,14,14,-5,-5,-9,-9,-5,-5,-18,-18,-22,-22,-20,-20,-34,-34,-44,-44,-35,-35,-37,-37,-44,-44,-46,-46,-54,-54,-49,-49,22,22,35,35,35,35,32,32,33,33,17,17,20,20,27,27,27,27,20,20,-1,-1,-6,-6,-5,-5,-4,-4,1,1,-4,-4]
On an iOS device (iPad 1 + iOS 5.1 and iPhone 4 + iOS 6.1, so far), the ASBD is identical,
(lldb) expr taas->asbd
(AudioStreamBasicDescription) $2 = {
  (Float64) mSampleRate = 44100
  (UInt32) mFormatID = 1819304813
  (UInt32) mFormatFlags = 12
  (UInt32) mBytesPerPacket = 4
  (UInt32) mFramesPerPacket = 1
  (UInt32) mBytesPerFrame = 4
  (UInt32) mChannelsPerFrame = 2
  (UInt32) mBitsPerChannel = 16
  (UInt32) mReserved = 0
}
but the samples are so weird:
(lldb) frame variable samples
(SInt16 *) samples = 0x0245c000 [0,14856,0,14800,0,14656,0,14768,0,-18032,0,-18048,0,-17984,0,-18112,0,-18112,0,-17952,0,-17936,0,-17896,0,-18080,0,-18176,0,-18000,0,14528,0,14688,0,-18000,0,-17920,0,-17952,0,14528,0,-18080,0,14528,0,14624,0,-18176,0,14528,0,14336,0,-18176,0,-18000,0,-18032,0,-18176,0,14816]
Viewing the mData memory directly, I see 16-bit runs of zeros that make me think the data is actually 32-bit, or non-interleaved, or something else I don't want (like the scary 8.24 fixed point):
00 00 08 3A  00 00 D0 39  00 00 40 39  00 00 B0 39  00 00 90 B9
00 00 80 B9  00 00 C0 B9  00 00 40 B9  00 00 40 B9  00 00 E0 B9
But even the non-zero portions look like garbage: the values are all extremely high or low.
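To test the 8.24 theory, this is the kind of reinterpretation check I was going to try next; just a sketch (in 8.24, the sample value is the SInt32 divided by 2^24, and printf in a render callback is not real-time safe, so this is strictly a throwaway debugging aid):

// Reinterpret the same buffer as 8.24 fixed-point SInt32 and convert
// the first few samples to floats for eyeballing.
SInt32 *fixedSamples = (SInt32 *)(ioData->mBuffers[i].mData);
for (int n = 0; n < 8; n++) {
    Float32 f = fixedSamples[n] / (Float32)(1 << 24);
    printf("sample %d: raw 0x%08X -> %f\n", n, (unsigned int)fixedSamples[n], f);
}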
What is going on? I've searched with my usual methods but can't find what I need. Is there some secret difference between the canonical sample format on the Simulator and on a real device? How should I make this work on both?
Thanks for all the high-quality discussion I've seen on this list, and for reading about my woes. I hope my mistake is obvious to someone out there!
Cheers, Nathan