Re: How to fill the ioData parameter in the Audio Unit rendering call back?
- Subject: Re: How to fill the ioData parameter in the Audio Unit rendering call back?
- From: james mccartney <email@hidden>
- Date: Thu, 12 Mar 2009 15:36:57 -0700
On Mar 12, 2009, at 5:00 AM, Xu Ting wrote:
Hi, list,
I am working on an iPhone app in which I use the CARingBuffer to hold data fragments that are read from the audio files, and mix them using an Audio Unit named Multichannel Mixer.
I have correctly set up the Multichannel Mixer unit, the RemoteIO unit, and the CARingBuffer. When I call AUGraphStart(myGraph), it correctly calls the callback function I set up.
But I am confused about the ioData parameter, which is passed in to hold the data that needs to be rendered, because right now my app plays these 2 audio files like an old cassette tape player that is running out of battery (the playing speed is OK; only the sound sounds like it).
One reason could be that the sound file's sample rate is not the same as the graph's output sample rate. If the file's sample rate is higher than the graph's sample rate, then it will sound slow.
In this case you'd need to set the graph output's sample rate to match the files' (assuming the files' rates match). If you need to play back files at different rates, then you'll need to use AUConverter or AUVarispeed.
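As a quick sanity check of the arithmetic behind that symptom (a tiny illustrative helper, not anything from the poster's code):

```c
/* Perceived playback speed when a file recorded at file_rate is consumed
 * by a graph running at graph_rate with no rate conversion: the graph
 * pulls samples at graph_rate, so a factor below 1.0 means slower
 * (and lower-pitched) playback. */
static double playback_speed_factor(double file_rate, double graph_rate)
{
    return graph_rate / file_rate;
}
```

For example, a 44.1 kHz file pulled through a 22.05 kHz graph plays at factor 0.5, i.e. half speed, which matches the "dying cassette player" description.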
Otherwise, if the sample rate is not the problem, then let's continue with the rest..
I guess the problem lies in the ioData parameter, so my question is: is my guess wrong? If not, how do I set the ioData parameter correctly?
My purpose is to play 2 audio files simultaneously, with one file playing in the left earphone and the other in the right one.
I had successfully done this by reading in the whole audio files' data at once when I tried the sample code from CAPlayThru, but because the iPhone cannot hold that much data, I have to use the ring buffer and read the data when it is needed.
Here is my current callback function code, which I learned from the CAPlayThru example. I added some of my understanding in the comments, including the 2 parts that were hardest for me to understand.
[My callback function]
OSStatus renderInput(void *inRefCon,
                     AudioUnitRenderActionFlags *ioActionFlags,
                     const AudioTimeStamp *inTimeStamp,
                     UInt32 inBusNumber,
                     UInt32 inNumberFrames,
                     AudioBufferList *ioData)
{
    // in order to get access to my CARingBuffer data
    MultichannelMixer *THIS = (MultichannelMixer *)inRefCon;
    float *inDataL = NULL, *inDataR = NULL;
    AudioBufferList *bufferListL, *bufferListR;
    bufferListL = CAAudioBufferList::Create(1);
    bufferListR = CAAudioBufferList::Create(1);
OK, first of all: you should not be allocating in your render proc, since it runs on the real-time thread. You should preallocate this memory in an object somewhere that you can get to (e.g. THIS).
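A minimal sketch of that preallocation pattern in plain C (the struct and field names here only mimic CoreAudio's AudioBufferList; in the real code you would call CAAudioBufferList::Create once at setup time, not these stand-ins):

```c
#include <stdlib.h>

/* Stand-in for one CoreAudio AudioBuffer; illustrative only. */
typedef struct {
    unsigned mNumberChannels;
    unsigned mDataByteSize;
    float   *mData;
} Buffer;

/* The object your inRefCon points at (what the poster calls THIS). */
typedef struct {
    Buffer left;
    Buffer right;
} MixerState;

/* Called once, before AUGraphStart: all malloc happens here. */
static int mixer_state_init(MixerState *s, unsigned max_frames)
{
    unsigned bytes = max_frames * sizeof(float);
    s->left  = (Buffer){ 1, bytes, malloc(bytes) };
    s->right = (Buffer){ 1, bytes, malloc(bytes) };
    return (s->left.mData && s->right.mData) ? 0 : -1;
}

/* The render proc only *uses* the preallocated memory;
 * it never calls malloc or free. */
static float *render_scratch(MixerState *s, int bus)
{
    return bus == 0 ? s->left.mData : s->right.mData;
}

/* Called once at teardown, after the graph has stopped. */
static void mixer_state_free(MixerState *s)
{
    free(s->left.mData);
    free(s->right.mData);
}
```

The point is simply that allocation and deallocation bracket the graph's lifetime; the real-time callback only dereferences pointers that already exist.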
    // pre-malloc buffers for AudioBufferLists
    bufferListL->mBuffers[0].mNumberChannels = 2;
    bufferListL->mBuffers[0].mDataByteSize = THIS->bufferedFrames;
    bufferListL->mBuffers[0].mData = malloc(THIS->bytesOfBufferedFrames);
same here.
    bufferListR->mBuffers[0].mNumberChannels = 2;
    bufferListR->mBuffers[0].mDataByteSize = THIS->bufferedFrames;
    bufferListR->mBuffers[0].mData = malloc(THIS->bytesOfBufferedFrames);
..
    OSStatus err = noErr;
    // Here is where I am confused. If I do not read and set the
    // data one bus at a time, the Fetch method shows an error
    // and reads nothing. So I should read and set the data for
    // one bus at a time -- is my understanding wrong?
I'm not sure if I'm understanding how you have this set up, so perhaps
this next advice is useless.
If you have two files and you want to have each one come out a
separate speaker, then you should be able to do this directly in the
code that fills the ring buffer. It would read the same number of
frames from each file and fill the two channels from the corresponding
two files. You would not need a mixer then.
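The fill-time version of that idea can be sketched in plain C (assuming an interleaved stereo buffer; the "file reads" are faked as arrays here, and all names are illustrative):

```c
/* At ring-buffer fill time, read the same number of frames from each
 * file and interleave them as L/R pairs into one stereo buffer, so the
 * output unit can play both files directly with no mixer.
 * left_file / right_file stand in for data already read from disk. */
static void fill_stereo(const float *left_file, const float *right_file,
                        float *stereo_out, unsigned frames)
{
    for (unsigned i = 0; i < frames; ++i) {
        stereo_out[2 * i]     = left_file[i];   /* left channel  */
        stereo_out[2 * i + 1] = right_file[i];  /* right channel */
    }
}
```

With a non-interleaved stream format the same idea holds, except each file fills its own buffer in the AudioBufferList instead of alternating slots.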
What error are you getting from Fetch? Errors from Fetch indicate you are asking for sample times that are not in the buffer. Your ring buffer writer is running either ahead of or behind the ring buffer reader.
    if (inBusNumber == 0) {
        err = THIS->LRingBuffer->Fetch(bufferListL, inNumberFrames,
                  SInt64(inTimeStamp->mSampleTime - THIS->SampleTimeUnit), false);
        checkErr(err);
        inDataL = (float *)bufferListL->mBuffers[0].mData;
    } else {
        err = THIS->RRingBuffer->Fetch(bufferListR, inNumberFrames,
                  SInt64(inTimeStamp->mSampleTime - THIS->SampleTimeUnit), false);
        checkErr(err);
        inDataR = (float *)bufferListR->mBuffers[0].mData;
    }
    // The ioData parameter does have 2 buffers, because I set
    // the input bus count to 2.
    float *outA = (float *)ioData->mBuffers[0].mData;
    float *outB = (float *)ioData->mBuffers[1].mData;
    // So, if I got bus #1's data, I set it to outA
    // (assuming outA is the left channel).
    if (inDataL) {
        for (UInt32 i = 0; i < inNumberFrames; ++i) {
            outA[i] = inDataL[i];
        }
    }
    // And here is the right one.
    if (inDataR) {
        for (UInt32 i = 0; i < inNumberFrames; ++i) {
            outB[i] = inDataR[i];
        }
    }
    // After finishing with the AudioBufferList structures, destroy them.
    CAAudioBufferList::Destroy(bufferListL);
    CAAudioBufferList::Destroy(bufferListR);
    // Read the next necessary data.
    [THIS readNext:inNumberFrames forBusNumber:inBusNumber];
If this code is directly calling a read from the file system, you
should not be doing that on the render thread either. Making an
Objective C message send on the render thread is also not recommended
(for various reasons that might cause a flame fest if I were to go
into them again here).
You should signal another thread to do the reading by using, for example, pthread_cond_signal.
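A minimal sketch of that producer/consumer signaling pattern with POSIX threads (all names are illustrative, not from the poster's code):

```c
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

/* The render proc only flips a flag and signals a condition variable;
 * a separate, ordinary thread does the actual file reading. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  need_data;
    bool            read_requested;
    bool            done;
    int             reads_performed; /* stands in for "refilled the ring buffer" */
} ReaderState;

/* Cheap enough to call from the render thread: no I/O, no allocation. */
static void request_read(ReaderState *s)
{
    pthread_mutex_lock(&s->lock);
    s->read_requested = true;
    pthread_cond_signal(&s->need_data);
    pthread_mutex_unlock(&s->lock);
}

/* Runs on a normal thread; this is where the file-system reads live. */
static void *reader_thread(void *arg)
{
    ReaderState *s = (ReaderState *)arg;
    pthread_mutex_lock(&s->lock);
    while (!s->done) {
        while (!s->read_requested && !s->done)
            pthread_cond_wait(&s->need_data, &s->lock);
        if (s->read_requested) {
            s->read_requested = false;
            /* ...read from disk into the ring buffer here... */
            s->reads_performed++;
        }
    }
    pthread_mutex_unlock(&s->lock);
    return NULL;
}

/* Ask the reader thread to exit (call from a non-real-time thread). */
static void stop_reader(ReaderState *s)
{
    pthread_mutex_lock(&s->lock);
    s->done = true;
    pthread_cond_signal(&s->need_data);
    pthread_mutex_unlock(&s->lock);
}
```

Strictly speaking, even pthread_cond_signal can block briefly on the mutex, so fully real-time-safe code sometimes goes further (lock-free flags plus a semaphore), but this is the shape of the idea.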
    return noErr;
}
[Callback function in CAPlayThru sample code]
OSStatus renderInput(void *inRefCon,
                     AudioUnitRenderActionFlags *ioActionFlags,
                     const AudioTimeStamp *inTimeStamp,
                     UInt32 inBusNumber,
                     UInt32 inNumberFrames,
                     AudioBufferList *ioData)
{
    SynthData& d = *(SynthData *)inRefCon; // get access to Sinewave's data
    UInt32 bufSamples = d.bufs[inBusNumber].numFrames << 1;
    float *in = d.bufs[inBusNumber].data;
    float *outA = (float *)ioData->mBuffers[0].mData;
    float *outB = (float *)ioData->mBuffers[1].mData;
    if (!in) {
        for (UInt32 i = 0; i < inNumberFrames; ++i) {
            outA[i] = 0.f;
            outB[i] = 0.f;
        }
    } else {
        // Here is the most difficult part for me to understand.
        // Q1: Why does it fill outA and outB with the same bus's
        // data, and actually 2 times the inNumberFrames required?
It is duplicating the mono sine wave into two stereo channels. And it is not two times inNumberFrames: each channel is a separate buffer, and each one gets filled with inNumberFrames samples.
        // Q2: What does ioData mean in this callback? Or what is
        // supposed to be fed to ioData?
        UInt32 phase = d.bufs[inBusNumber].phase;
        for (UInt32 i = 0; i < inNumberFrames; ++i) {
            outA[i] = in[phase++];
            outB[i] = in[phase++];
            if (phase >= bufSamples) phase = 0;
        }
        d.bufs[inBusNumber].phase = phase;
    }
    return noErr;
}
Thank you in advance.
Tonny Xu
--
Life is like a box of chocolates, u never know what u'r gonna get.
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden