I just started encountering a problem similar to this one today. I'm working from some example code that I used for a Core Audio conference presentation; it applies a simple gain to samples pulled from the RemoteIO unit's mic input. Trying it today at home, I found that I'm getting inNumberFrames == 706 on the first callback, and inNumberFrames == 1 (or sometimes 2) on all subsequent callbacks.
This problem only appears in the simulator, not on the device (where I always get inNumberFrames == 1024).
Moreover, I think the problem is only with USB input devices. If I plug my iPhone headphones into my MacBook and use the clicker-mic, it's fine (inNumberFrames == 512). Using a Logitech USB headset shows the problem on MacBook and Mac Pro, and I also have the problem using a MacAlly iMic USB audio adapter on the Mac Pro.
Doug, were you using USB input devices for this?
I've pasted the relevant view controller's code to <http://pastie.org/937276>. To replicate, your GUI needs a "start" button that calls this VC's handleStartTapped, and a UISlider (range 0.0 to 1.0) whose valueChanged action is connected to handleSlider1ValueChanged.
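In case the pastie ever goes away, the callback in question is shaped roughly like this (a trimmed sketch rather than the exact pastie code, assuming 16-bit mono samples; gGain stands in for whatever variable the slider action actually sets):

#include <AudioToolbox/AudioToolbox.h>

static Float32 gGain = 1.0;  // set from the UISlider's valueChanged action

static OSStatus GainThru(void *inRefCon,
                         AudioUnitRenderActionFlags *ioActionFlags,
                         const AudioTimeStamp *inTimeStamp,
                         UInt32 inBusNumber,
                         UInt32 inNumberFrames,
                         AudioBufferList *ioData)
{
    AudioUnit rioUnit = (AudioUnit)inRefCon;  // RemoteIO unit passed as the refCon
    // Pull the mic samples from input bus 1 into ioData, then scale in place.
    OSStatus err = AudioUnitRender(rioUnit, ioActionFlags, inTimeStamp,
                                   1, inNumberFrames, ioData);
    if (err != noErr) return err;
    SInt16 *samples = (SInt16 *)ioData->mBuffers[0].mData;
    for (UInt32 i = 0; i < inNumberFrames; i++) {
        samples[i] = (SInt16)(samples[i] * gGain);  // simple gain, 16-bit mono
    }
    return noErr;
}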
--Chris
On Apr 12, 2010, at 8:28 PM, Doug McCoy wrote:

I am using the RemoteIO AudioUnit on iPhone for live I/O.
I have been setting this up with various sample rates and buffer sizes to tune my algorithm. For 44.1 kHz and 22 kHz, I get the expected number of frames in the callback's IO buffer.
For 16 kHz, I get strangeness.
My setup goes like this:
set kAudioSessionProperty_PreferredHardwareSampleRate = 16 kHz
set kAudioSessionProperty_PreferredHardwareIOBufferDuration = 0.032 seconds (512 frames)
AudioSessionSetActive(true)
get kAudioSessionProperty_CurrentHardwareSampleRate = 16 kHz
get kAudioSessionProperty_CurrentHardwareIOBufferDuration = 0.02322 seconds (371.52 frames, which I round up to 372)
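Concretely, the session setup looks roughly like this (a trimmed sketch; error handling and the interruption listener omitted, plus the PlayAndRecord category, which mic input requires):

#include <AudioToolbox/AudioToolbox.h>

AudioSessionInitialize(NULL, NULL, NULL, NULL);

// PlayAndRecord category so the mic input is available
UInt32 category = kAudioSessionCategory_PlayAndRecord;
AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
                        sizeof(category), &category);

Float64 preferredRate = 16000.0;  // 16 kHz
AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareSampleRate,
                        sizeof(preferredRate), &preferredRate);

Float32 preferredDuration = 0.032f;  // 512 frames at 16 kHz
AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration,
                        sizeof(preferredDuration), &preferredDuration);

AudioSessionSetActive(true);

// Read back what the hardware actually granted
Float64 actualRate;
UInt32 size = sizeof(actualRate);
AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareSampleRate,
                        &size, &actualRate);      // comes back 16000

Float32 actualDuration;
size = sizeof(actualDuration);
AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareIOBufferDuration,
                        &size, &actualDuration);  // comes back 0.02322
// 0.02322 s * 16000 Hz = 371.52 frames, rounded up to 372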
I then set up the RemoteIO unit with:
AudioStreamBasicDescription audioFormat;
memset(&audioFormat, 0, sizeof(audioFormat));
audioFormat.mSampleRate       = 16000;  // FS
audioFormat.mFormatID         = kAudioFormatLinearPCM;
audioFormat.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mFramesPerPacket  = 1;
audioFormat.mBitsPerChannel   = sizeof(short) * 8;  // 16-bit
audioFormat.mBytesPerFrame    = audioFormat.mBitsPerChannel / 8 * audioFormat.mChannelsPerFrame;
audioFormat.mBytesPerPacket   = audioFormat.mBytesPerFrame * audioFormat.mFramesPerPacket;
I enable input, and the input and output formats are identical.
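That part looks roughly like this (a sketch with error checks trimmed; rioUnit is my RemoteIO instance):

AudioComponentDescription desc = {0};
desc.componentType         = kAudioUnitType_Output;
desc.componentSubType      = kAudioUnitSubType_RemoteIO;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;

AudioUnit rioUnit;
AudioComponentInstanceNew(AudioComponentFindNext(NULL, &desc), &rioUnit);

// Enable input on bus 1 (output on bus 0 is enabled by default)
UInt32 one = 1;
AudioUnitSetProperty(rioUnit, kAudioOutputUnitProperty_EnableIO,
                     kAudioUnitScope_Input, 1, &one, sizeof(one));

// Same 16-bit mono format on both sides of the unit
AudioUnitSetProperty(rioUnit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Output, 1,  // data coming from the mic
                     &audioFormat, sizeof(audioFormat));
AudioUnitSetProperty(rioUnit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Input, 0,   // data going to the speaker
                     &audioFormat, sizeof(audioFormat));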
My callback is declared as follows:
static OSStatus PerformThru(void                       *inRefCon,
                            AudioUnitRenderActionFlags *ioActionFlags,
                            const AudioTimeStamp       *inTimeStamp,
                            UInt32                      inBusNumber,
                            UInt32                      inNumberFrames,
                            AudioBufferList            *ioData)
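It gets attached as the render callback on the output bus, with the unit itself as the refCon so the callback can pull input via AudioUnitRender (sketch; unit creation boilerplate as above):

AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc       = PerformThru;
callbackStruct.inputProcRefCon = rioUnit;  // lets the callback call AudioUnitRender

AudioUnitSetProperty(rioUnit, kAudioUnitProperty_SetRenderCallback,
                     kAudioUnitScope_Input, 0,
                     &callbackStruct, sizeof(callbackStruct));

AudioUnitInitialize(rioUnit);
AudioOutputUnitStart(rioUnit);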
For the first callback,
inNumberFrames == 372 and ioData->mBuffers[0].mDataByteSize == 372 * 2 (16-bit samples, mono).
This is to be expected.
But subsequent callbacks have
inNumberFrames == 1 or 2
and
ioData->mBuffers[0].mDataByteSize == 2 or 4, respectively.
This discrepancy is causing havoc in my processing.