Re: iOS - AudioUnitRender returned error -10876 on device, but running fine in simulator
- Subject: Re: iOS - AudioUnitRender returned error -10876 on device, but running fine in simulator
- From: "Brian Wang (TB)" <email@hidden>
- Date: Mon, 04 Apr 2011 16:02:57 +0800
I solved the issue on my own. The -10876 error from AudioUnitRender() was
caused by a bug in my own code.

I had set the category of my AVAudioSession to
AVAudioSessionCategoryPlayback instead of
AVAudioSessionCategoryPlayAndRecord. Once I changed the category to
AVAudioSessionCategoryPlayAndRecord, I could finally capture microphone
input successfully on the device.

Using AVAudioSessionCategoryPlayback does not produce any error in the
simulator and recording appears to work there, so I think this is a
(non-critical) discrepancy in the iOS simulator.
On Sat, Apr 2, 2011 at 1:24 AM, Brian Wang (TB) <email@hidden> wrote:
> Hello all,
>
> I have run into a problem that prevents me from capturing the input
> signal from the microphone on the device (an iPhone 4); however, the same
> code runs fine in the simulator. The code was originally adapted from the
> MixerHostAudio class in Apple's MixerHost sample code, and it ran fine
> both on the device and in the simulator before I started adding the code
> for capturing mic input. I am wondering if somebody could help me out.
> Thanks in advance!
>
> Here is my inputRenderCallback function, which feeds the captured signal into the mixer input:
>
> //--------- CODE START
> static OSStatus inputRenderCallback (
>     void                        *inRefCon,
>     AudioUnitRenderActionFlags  *ioActionFlags,
>     const AudioTimeStamp        *inTimeStamp,
>     UInt32                      inBusNumber,
>     UInt32                      inNumberFrames,
>     AudioBufferList             *ioData) {
>
>     recorderStructPtr recorderStructPointer = (recorderStructPtr) inRefCon;
>     // ....
>     AudioUnitRenderActionFlags renderActionFlags = 0;
>     OSStatus err = AudioUnitRender(recorderStructPointer->iOUnit,
>                                    &renderActionFlags,
>                                    inTimeStamp,
>                                    1,               // bus number for input
>                                    inNumberFrames,
>                                    recorderStructPointer->fInputAudioBuffer
>                                    );
>     // error returned is -10876
>     // ....
> }
> //--------- CODE END
>
> Here is my related initialization code. For now I keep only one input on
> the mixer, so the mixer may look redundant, but everything worked fine
> before I added the input-capture code.
>
> //--------- CODE START
> // Convenience function to allocate our audio buffers
> - (AudioBufferList *) allocateAudioBufferListByNumChannels:(UInt32)numChannels
>                                                   withSize:(UInt32)size {
>     AudioBufferList* list;
>     UInt32 i;
>
>     list = (AudioBufferList*)calloc(1, sizeof(AudioBufferList) +
>                                        numChannels * sizeof(AudioBuffer));
>     if(list == NULL)
>         return NULL;
>
>     list->mNumberBuffers = numChannels;
>     for(i = 0; i < numChannels; ++i) {
>         list->mBuffers[i].mNumberChannels = 1;
>         list->mBuffers[i].mDataByteSize = size;
>         list->mBuffers[i].mData = malloc(size);
>         if(list->mBuffers[i].mData == NULL) {
>             [self destroyAudioBufferList:list];
>             return NULL;
>         }
>     }
>     return list;
> }
>
> // initialize audio buffer list for input capture
> recorderStructInstance.fInputAudioBuffer = [self
> allocateAudioBufferListByNumChannels:1 withSize:4096];
>
> // I/O unit description
> AudioComponentDescription iOUnitDescription;
> iOUnitDescription.componentType = kAudioUnitType_Output;
> iOUnitDescription.componentSubType = kAudioUnitSubType_RemoteIO;
> iOUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
> iOUnitDescription.componentFlags = 0;
> iOUnitDescription.componentFlagsMask = 0;
>
> // Multichannel mixer unit description
> AudioComponentDescription MixerUnitDescription;
> MixerUnitDescription.componentType = kAudioUnitType_Mixer;
> MixerUnitDescription.componentSubType =
> kAudioUnitSubType_MultiChannelMixer;
> MixerUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
> MixerUnitDescription.componentFlags = 0;
> MixerUnitDescription.componentFlagsMask = 0;
>
> AUNode iONode; // node for I/O unit
> AUNode mixerNode; // node for Multichannel Mixer unit
>
> // Add the nodes to the audio processing graph
> result = AUGraphAddNode (
> processingGraph,
> &iOUnitDescription,
> &iONode);
>
> result = AUGraphAddNode (
> processingGraph,
> &MixerUnitDescription,
> &mixerNode
> );
>
> result = AUGraphOpen (processingGraph);
>
> // fetch mixer AudioUnit instance
> result = AUGraphNodeInfo (
> processingGraph,
> mixerNode,
> NULL,
> &mixerUnit
> );
>
> // fetch RemoteIO AudioUnit instance
> result = AUGraphNodeInfo (
> processingGraph,
> iONode,
> NULL,
> &(recorderStructInstance.iOUnit)
> );
>
>
> // enable input of RemoteIO unit
> UInt32 enableInput = 1;
> AudioUnitElement inputBus = 1;
> result = AudioUnitSetProperty(recorderStructInstance.iOUnit,
> kAudioOutputUnitProperty_EnableIO,
> kAudioUnitScope_Input,
> inputBus,
> &enableInput,
> sizeof(enableInput)
> );
> // setup mixer inputs
> UInt32 busCount = 1;
>
> result = AudioUnitSetProperty (
> mixerUnit,
> kAudioUnitProperty_ElementCount,
> kAudioUnitScope_Input,
> 0,
> &busCount,
> sizeof (busCount)
> );
>
>
> UInt32 maximumFramesPerSlice = 4096;
>
> result = AudioUnitSetProperty (
> mixerUnit,
> kAudioUnitProperty_MaximumFramesPerSlice,
> kAudioUnitScope_Global,
> 0,
> &maximumFramesPerSlice,
> sizeof (maximumFramesPerSlice)
> );
>
>
> for (UInt16 busNumber = 0; busNumber < busCount; ++busNumber) {
>
>     // set up input callback
>     AURenderCallbackStruct inputCallbackStruct;
>     inputCallbackStruct.inputProc = &inputRenderCallback;
>     inputCallbackStruct.inputProcRefCon = &recorderStructInstance;
>
>     result = AUGraphSetNodeInputCallback (
>                  processingGraph,
>                  mixerNode,
>                  busNumber,
>                  &inputCallbackStruct
>              );
>
>     // set up stream format
>     AudioStreamBasicDescription mixerBusStreamFormat;
>     size_t bytesPerSample = sizeof (AudioUnitSampleType);
>
>     mixerBusStreamFormat.mFormatID         = kAudioFormatLinearPCM;
>     mixerBusStreamFormat.mFormatFlags      = kAudioFormatFlagsAudioUnitCanonical;
>     mixerBusStreamFormat.mBytesPerPacket   = bytesPerSample;
>     mixerBusStreamFormat.mFramesPerPacket  = 1;
>     mixerBusStreamFormat.mBytesPerFrame    = bytesPerSample;
>     mixerBusStreamFormat.mChannelsPerFrame = 2;
>     mixerBusStreamFormat.mBitsPerChannel   = 8 * bytesPerSample;
>     mixerBusStreamFormat.mSampleRate       = graphSampleRate;
>
>     result = AudioUnitSetProperty (
>                  mixerUnit,
>                  kAudioUnitProperty_StreamFormat,
>                  kAudioUnitScope_Input,
>                  busNumber,
>                  &mixerBusStreamFormat,
>                  sizeof (mixerBusStreamFormat)
>              );
> }
>
> // set sample rate of mixer output
> result = AudioUnitSetProperty (
> mixerUnit,
> kAudioUnitProperty_SampleRate,
> kAudioUnitScope_Output,
> 0,
> &graphSampleRate,
> sizeof (graphSampleRate)
> );
>
>
> // connect mixer output to RemoteIO
> result = AUGraphConnectNodeInput (
> processingGraph,
> mixerNode, // source node
> 0, // source node output bus number
> iONode, // destination node
> 0 // destination node input bus number
> );
>
>
> // initialize AudioGraph
> result = AUGraphInitialize (processingGraph);
>
> // start AudioGraph
> result = AUGraphStart (processingGraph);
>
> // enable mixer input
> result = AudioUnitSetParameter (
> mixerUnit,
> kMultiChannelMixerParam_Enable,
> kAudioUnitScope_Input,
> 0, // bus number
> 1, // on
> 0
> );
> //--------- CODE END
>
> (This issue is also posted on StackOverflow:
> http://stackoverflow.com/questions/5502170/ios-audiounitrender-returned-error-10876-on-device-but-running-fine-in-simula
> )
>
> Thanks!
>
> Best regards,
> Brian
>