Re: How to fill the ioData parameter in the Audio Unit rendering call back?
- Subject: Re: How to fill the ioData parameter in the Audio Unit rendering call back?
- From: Xu Ting <email@hidden>
- Date: Fri, 13 Mar 2009 23:05:20 +0900
Hey, James,
Thank you for your reply, it clarifies a lot of things for me. I have
added my understanding below; let's see whether it is right or not.
> Otherwise if the sample rate is not the problem, then let's continue with
> the rest..
No, the sample rate is not the problem. I have double-checked it, and
the client data format is the same as the output stream data format.
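For reference, this is roughly how I set the client format on the input
bus (a sketch from memory, not a paste of my real code; mixerUnit is
just a placeholder for my AudioUnit variable):

    // Assumes <AudioUnit/AudioUnit.h> is included.
    // Canonical iPhone Audio Unit format: 8.24 fixed point, non-interleaved.
    AudioStreamBasicDescription clientFormat = {0};
    clientFormat.mSampleRate       = 44100.0;
    clientFormat.mFormatID         = kAudioFormatLinearPCM;
    clientFormat.mFormatFlags      = kAudioFormatFlagsAudioUnitCanonical;
    clientFormat.mChannelsPerFrame = 2;
    clientFormat.mFramesPerPacket  = 1;
    clientFormat.mBitsPerChannel   = 8 * sizeof(AudioUnitSampleType);
    clientFormat.mBytesPerFrame    = sizeof(AudioUnitSampleType);  // per channel, non-interleaved
    clientFormat.mBytesPerPacket   = clientFormat.mBytesPerFrame;

    OSStatus err = AudioUnitSetProperty(mixerUnit,
                                        kAudioUnitProperty_StreamFormat,
                                        kAudioUnitScope_Input,
                                        0,                  // input bus number
                                        &clientFormat,
                                        sizeof(clientFormat));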
>> float *inDataL, *inDataR;
>> AudioBufferList *bufferListL, *bufferListR;
>> bufferListL = CAAudioBufferList::Create(1);
>> bufferListR = CAAudioBufferList::Create(1);
>
> OK first of all you should not be allocating in your render proc since it
> runs on the real time thread.
> You should preallocate this memory in an object somewhere that you can get
> to (e.g. THIS).
Thank you for your advice, I will correct it.
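What I plan to do instead is preallocate everything once at setup time
and only reuse it in the callback, roughly like this (a sketch, assuming
PublicUtility's CAAudioBufferList.h; "engine" is a placeholder for the
object the callback later receives as inRefCon / THIS):

    // Done once at setup time, never in the render proc: allocate a
    // 2-channel, non-interleaved buffer list big enough for the largest
    // slice the render callback can ask for.
    const UInt32 kMaxFrames = 4096;                 // assumed upper bound per render slice
    engine->bufferList = CAAudioBufferList::Create(2);
    for (UInt32 i = 0; i < 2; ++i) {
        engine->bufferList->mBuffers[i].mNumberChannels = 1;
        engine->bufferList->mBuffers[i].mDataByteSize   = kMaxFrames * sizeof(AudioUnitSampleType);
        engine->bufferList->mBuffers[i].mData           = malloc(kMaxFrames * sizeof(AudioUnitSampleType));
    }
    // The render callback then only reads and writes engine->bufferList
    // and never calls Create() or malloc().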
>> // Here is where I am confusing about. If I do not read and set the
>> // data one bus by one bus, the Fetch method will show error
>> // and read nothing.
>> // So, I should read and set the data for that bus once a
>> // time, is my understanding wrong?
>
> I'm not sure if I'm understanding how you have this set up, so perhaps this
> next advice is useless.
> If you have two files and you want to have each one come out a separate
> speaker, then you should be able to do this directly in the code that fills
> the ring buffer. It would read the same number of frames from each file and
> fill the two channels from the corresponding two files. You would not need a
> mixer then.
I am not sure I understand you here. Do you mean that, depending on
whether the stream is interleaved or non-interleaved, I can directly
feed the left channel data to ioData->mBuffers[0].mData and the right
channel data to mBuffers[1].mData?
Actually, that was my original plan, but at that time I did not know
what interleaved/non-interleaved exactly means, nor how to do it. Since
the Core Audio Overview says Core Audio has an Audio Unit named
Multichannel Mixer (called Stereo Mixer in some other places), I
thought that should be the right solution. I was, and still am, a
newbie to Core Audio programming, so it even cost me one ITS credit to
get the Multichannel Mixer to work!
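Just to make sure I understand, do you mean something like this in the
render callback (a rough sketch of my understanding, with made-up
names: MyPlayer and FetchFromRingBuffer are placeholders for my own
object and my ring buffer read routine)?

    static OSStatus MyRenderCallback(void                        *inRefCon,
                                     AudioUnitRenderActionFlags  *ioActionFlags,
                                     const AudioTimeStamp        *inTimeStamp,
                                     UInt32                       inBusNumber,
                                     UInt32                       inNumberFrames,
                                     AudioBufferList             *ioData)
    {
        MyPlayer *player = (MyPlayer *)inRefCon;

        // With a non-interleaved stereo output format, mBuffers[0] is
        // the left channel and mBuffers[1] is the right channel.
        AudioUnitSampleType *outL = (AudioUnitSampleType *)ioData->mBuffers[0].mData;
        AudioUnitSampleType *outR = (AudioUnitSampleType *)ioData->mBuffers[1].mData;

        // Copy inNumberFrames samples from each mono ring buffer into
        // the matching channel; no mixer needed.
        FetchFromRingBuffer(player->ringL, outL, inNumberFrames);
        FetchFromRingBuffer(player->ringR, outR, inNumberFrames);

        return noErr;
    }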
>> // Read the next necessary data.
>> [THIS readNext:inNumberFrames forBusNumber:inBusNumber];
>
> If this code is directly calling a read from the file system, you should not
> be doing that on the render thread either. Making an Objective C message
> send on the render thread is also not recommended (for various reasons that
> might cause a flame fest if I were to go into them again here).
> You should signal another thread to do the reading by using for example,
> pthread_cond_signal.
No, this code is an interface that calls the read function on another
thread; I used NSOperation and NSOperationQueue to create and manage
that thread.
Yes, you are right: before I put that Objective-C message send in the
render callback, I did consider whether it was the proper way to do it.
But I don't know how to avoid it, since I chose NSOperation as my
multithreading solution. Do you have any recommendations?
I even considered writing the reading thread in pure C, but that would
cost me some additional time, so I decided to go ahead this way for now
and refactor the code once the main functionality works.
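If I do drop NSOperation later, I guess the pthread version you
describe would look roughly like this (only a sketch of the signalling
idea, not working code):

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t gReadMutex    = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  gReadCond     = PTHREAD_COND_INITIALIZER;
    static bool            gNeedMoreData = false;

    // Called from the render callback: no file I/O and no Obj-C here,
    // and trylock so the render thread never blocks on the mutex.
    static void SignalReaderThread(void)
    {
        if (pthread_mutex_trylock(&gReadMutex) == 0) {
            gNeedMoreData = true;
            pthread_cond_signal(&gReadCond);
            pthread_mutex_unlock(&gReadMutex);
        }
    }

    // Body of the reader thread: sleep until signalled, then refill
    // the ring buffer from the files.
    static void *ReaderThread(void *arg)
    {
        for (;;) {
            pthread_mutex_lock(&gReadMutex);
            while (!gNeedMoreData)
                pthread_cond_wait(&gReadCond, &gReadMutex);
            gNeedMoreData = false;
            pthread_mutex_unlock(&gReadMutex);

            // ... read the next chunk from the files into the ring buffer ...
        }
        return NULL;
    }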
>> // Here is the most difficult part for me to understand.
>> // Q1: Why does it fill the outA and outB with the same
>> // bus's data
>
> it is duplicating the mono sine wave into two stereo channels.
>
> it is not two times inNumberFrames.
> Each channel is a separate buffer, and each one gets filled with
> inNumberFrames samples.
Well, I have to confirm this: do you mean that ioData->mBuffers[0] is
the left channel and ioData->mBuffers[1] is the right channel of the
output stream?
If so, that would make things much easier. (The documentation does not
seem to say anything about it; maybe it is mentioned somewhere, but I
could not find it even using Spotlight.)
Also, I checked ioData->mBuffers[0].mDataByteSize: when inNumberFrames
is 512, for example, mDataByteSize is 2048, which is half the size of
what I read from my ring buffer. Only mBuffers[0] holding just the left
(or right) channel would explain this, am I right?
I also checked the Core Audio Overview, and it points out that one
frame contains two samples, one for the left channel and one for the
right. So when you said "each one gets filled with inNumberFrames
samples", you meant inNumberFrames samples per channel. Am I right
about this?
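If so, the numbers add up: each 8.24 sample is 4 bytes, so one
channel's buffer for 512 frames is 512 * 4 = 2048 bytes, exactly the
mDataByteSize I see. I could even put a small sanity check like this in
the callback (just to confirm my understanding):

    // Non-interleaved: each buffer holds one channel, so its size is
    // inNumberFrames samples * 4 bytes per 8.24 sample.
    assert(ioData->mBuffers[0].mDataByteSize ==
           inNumberFrames * sizeof(AudioUnitSampleType));   // 512 * 4 == 2048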
~~~~~
In your next reply you were right: I was mistaken about where that
snippet of code came from. It did come from MatrixMixerTest. You are
great!
And some other questions.
The Audio Unit on the iPhone uses 8.24 fixed-point linear PCM data;
here, 8 stands for the integer part and 24 for the fractional part, am
I right?
If that's right, it means the Audio Unit on the iPhone uses 4 bytes to
represent one 8.24 fixed-point sample. So, assuming L and R each stand
for one 4-byte 8.24 fixed-point sample, interleaved data can be
represented as LRLRLRLR.....LRLRLR and non-interleaved data as
LLLLLL....LLLLLL,RRRRRR....RRRRRR
Since the Audio Unit needs non-interleaved data, I can simply set the
first half of the data to outA and the second half to outB, am I right
about this?
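In code form, this is what I imagine (only a sketch of my
understanding; ringData is a placeholder for the non-interleaved block
I read from my ring buffer, assumed to be an AudioUnitSampleType
pointer):

    // If the ring buffer hands me one block of inNumberFrames * 2
    // samples laid out as LLLL....RRRR, the split into the two output
    // channels would just be two memcpys.
    AudioUnitSampleType *outA = (AudioUnitSampleType *)ioData->mBuffers[0].mData;
    AudioUnitSampleType *outB = (AudioUnitSampleType *)ioData->mBuffers[1].mData;

    memcpy(outA, ringData,                  inNumberFrames * sizeof(AudioUnitSampleType));
    memcpy(outB, ringData + inNumberFrames, inNumberFrames * sizeof(AudioUnitSampleType));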
Many thanks!
Tonny Xu