Re: iPhone AU best practices
- Subject: Re: iPhone AU best practices
- From: uɐıʇəqɐz pnoqɥɒɯ <email@hidden>
- Date: Thu, 17 Jun 2010 02:29:45 -0700
Well, I've made some changes and now have a version that works with just one bus of the mixer (I will remove the mixer and go directly to the RemoteIO unit once things are working). This is what I am doing in my render callback. Mind you, I have not optimized at all; I will be changing my structures so I can pass a pointer to a SourceBuffer instead of a pointer to an array plus an index into it. What I'd like you to note, however, is that I am getting clipping when soundVal gets close to the maximum, and I am not sure how to mix without exceeding the limits. Any ideas?
I tried to switch to 8.24 by using SetAUCanonical instead of SetCanonical on both my source and mixer input formats, but I get error −10868 (kAudioUnitErr_FormatNotSupported). I was hoping I might have a few more bits to play with that way. I am getting clipping either when four notes play at once or when two higher-frequency notes play together. Anyone have any thoughts?
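In case it helps to see it spelled out, here is my understanding of the 8.24 "AU canonical" format as a hand-written ASBD — a sketch based on my reading of the headers rather than on a working setup, so treat the field values as assumptions:

AudioStreamBasicDescription auFormat = {0};
auFormat.mSampleRate       = 44100.0;                             // whatever the graph is running at
auFormat.mFormatID         = kAudioFormatLinearPCM;
auFormat.mFormatFlags      = kAudioFormatFlagsAudioUnitCanonical; // signed, packed, non-interleaved, 24 fraction bits
auFormat.mChannelsPerFrame = 2;
auFormat.mFramesPerPacket  = 1;
auFormat.mBitsPerChannel   = 8 * sizeof(AudioUnitSampleType);     // 32-bit container holding 8.24 fixed point
auFormat.mBytesPerFrame    = sizeof(AudioUnitSampleType);         // per channel, since the format is non-interleaved
auFormat.mBytesPerPacket   = sizeof(AudioUnitSampleType);

One thing I notice while writing this out: the AU canonical flags include kAudioFormatFlagIsNonInterleaved, so my render callback below, which writes interleaved samples into mBuffers[0], would have to change to write each channel into its own ioData->mBuffers[n].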
static OSStatus renderInput4(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData)
{
    SourceAudioBufferDataPtr userData = (SourceAudioBufferDataPtr)inRefCon;
    AudioSampleType *out = (AudioSampleType *)ioData->mBuffers[0].mData;
    SInt16 soundVal[2];

    // Interleaved stereo output: two AudioSampleType values per frame.
    for (UInt32 i = 0; i < inNumberFrames * 2; i += 2) {
        soundVal[0] = soundVal[1] = 0;
        for (UInt32 j = 0; j < MAXBUFS; j++) {
            for (UInt16 k = 0; k < 2; k++) {
                // change the struct so that userData->soundBuffer[] can be passed in, not as an array plus index
                soundVal[k] += GetOneFrameValue(j, userData, k);
            }
        }
        // The SInt16 sums can exceed the 16-bit range here -- this is where the clipping shows up.
        out[i]     = soundVal[0];
        out[i + 1] = soundVal[1];
    }
    return noErr;
}
SInt16 GetOneFrameValue(short voice, SourceAudioBufferDataPtr userData, UInt16 channel)
{
    if (!userData->mPlaying[voice])
        return 0;
    if (userData->frameNum[voice] >= userData->soundBuffer[voice]->numFrames)
        return 0;

    // Index of the requested channel within the current frame of this voice's interleaved data.
    UInt32 offset = userData->frameNum[voice] * userData->soundBuffer[voice]->asbd.mChannelsPerFrame + channel;
    SInt16 sample = (userData->soundBuffer[voice]->data)[offset];

    // Advance to the next frame once the last channel of this frame has been read.
    if (channel + 1 == userData->soundBuffer[voice]->asbd.mChannelsPerFrame) { // find a way around this!
        userData->frameNum[voice]++;
        if (userData->frameNum[voice] == userData->soundBuffer[voice]->numFrames) {
            // Voice has played out: stop it and rewind for next time.
            userData->mPlaying[voice] = false;
            userData->frameNum[voice] = 0;
        }
    }
    return sample;
}
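To make concrete what I mean by "exceeding the limits": the two workarounds I have considered are accumulating into something wider than 16 bits and then either clamping the sum or attenuating each voice. A rough sketch of what I mean (ClampToSInt16 is just a helper I made up, not anything from the SDK):

static inline SInt16 ClampToSInt16(SInt32 value)
{
    // Clamp a 32-bit running sum back into the 16-bit canonical range.
    if (value >  32767) return  32767;
    if (value < -32768) return -32768;
    return (SInt16)value;
}

// In renderInput4, soundVal would become SInt32 soundVal[2] so the sum cannot wrap,
// and the stores at the end of each frame would be either
//     out[i]     = ClampToSInt16(soundVal[0]);      // hard clamp: no wraparound, but loud passages still flatten
//     out[i + 1] = ClampToSInt16(soundVal[1]);
// or, trading loudness for headroom instead of clamping,
//     out[i]     = (SInt16)(soundVal[0] / MAXBUFS); // attenuate by the number of voices
//     out[i + 1] = (SInt16)(soundVal[1] / MAXBUFS);

Neither feels ideal, which is why I was hoping the extra bits of the 8.24 format would give me more room to play with.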