Hi
I'm trying to build a multimedia application for Mac OS X in which I receive audio data as an 8 kHz PCM stream.
It seems that on Mac OS X my output AudioUnit can only be set to a 44.1 kHz sample rate, so I know I need an AudioConverter between my incoming data and the output AudioUnit to resample, using the AudioConverterFillComplexBuffer function and supplying it a callback that fills the input buffers.
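Something like this is what I have in mind for creating the converter (just a sketch; the mono / 16-bit input format is my assumption, and I suppose the output format should really be queried from the output unit rather than hard-coded):

```c
#include <AudioToolbox/AudioToolbox.h>

// Sketch: a converter from my 8 kHz / 16-bit network stream to the
// 44.1 kHz format the output AudioUnit seems to require.
static AudioConverterRef MakeConverter(void) {
    AudioStreamBasicDescription inFmt = {0};
    inFmt.mSampleRate       = 8000.0;
    inFmt.mFormatID         = kAudioFormatLinearPCM;
    inFmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger |
                              kLinearPCMFormatFlagIsPacked;
    inFmt.mChannelsPerFrame = 1;     // assuming a mono stream
    inFmt.mBitsPerChannel   = 16;    // short (SInt16) samples
    inFmt.mBytesPerFrame    = 2;     // 1 channel * 2 bytes
    inFmt.mFramesPerPacket  = 1;
    inFmt.mBytesPerPacket   = 2;

    AudioStreamBasicDescription outFmt = inFmt;
    outFmt.mSampleRate = 44100.0;    // what the output unit wants

    AudioConverterRef conv = NULL;
    OSStatus err = AudioConverterNew(&inFmt, &outFmt, &conv);
    return (err == noErr) ? conv : NULL;
}
```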
As a test, I now have a working example that reads a 44.1 kHz stream from a file, so the basic AU graph is set up correctly. In my real application, one thread will receive input from a socket and another (the internal AU thread) will play it.
OK, now I will try the test with an 8 kHz file.
My doubt is where and how I have to do the resampling. I imagine I can use a resample function where I pass in, say, 160 bytes from my 8 kHz stream and get back, for example, 512 bytes for a 44.1 kHz stream. When the input callback for AudioConverterFillComplexBuffer fires, I call that function and copy its output into the buffers supplied by AudioConverterFillComplexBuffer, which will finally deliver them to my output AudioUnit. Is this scheme correct?
For this to work, I must set an 8 kHz rate somewhere so my samples are fed to the AudioUnit at the correct rate. Is that right?
Or should I synchronize my calls so that an 8 kHz sample rate is achieved? In my 44.1 kHz test I have no need for that, and the sound plays correctly because, I imagine, my AudioUnit is already calling my callback to request data at that rate.
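My understanding of the pull scheme looks roughly like this (a sketch; the InputState struct and the idea of reading from a receive buffer are my assumptions). The converter's input callback hands back raw 8 kHz packets whenever the converter asks for them, so I would never have to time my own calls to "achieve" 8 kHz:

```c
#include <AudioToolbox/AudioToolbox.h>

// Hypothetical state: 8 kHz mono 16-bit data received from the socket.
typedef struct {
    SInt16 *samples;     // pointer into the received data
    UInt32  framesLeft;  // frames not yet consumed by the converter
} InputState;

// Input proc for AudioConverterFillComplexBuffer: the converter pulls
// as many 8 kHz input packets as it needs to produce 44.1 kHz output.
static OSStatus InputProc(AudioConverterRef conv,
                          UInt32 *ioNumberDataPackets,
                          AudioBufferList *ioData,
                          AudioStreamPacketDescription **outPacketDesc,
                          void *inUserData) {
    InputState *st = (InputState *)inUserData;
    UInt32 frames = *ioNumberDataPackets;
    if (frames > st->framesLeft) frames = st->framesLeft;

    ioData->mBuffers[0].mData           = st->samples;  // raw 8 kHz PCM
    ioData->mBuffers[0].mDataByteSize   = frames * sizeof(SInt16);
    ioData->mBuffers[0].mNumberChannels = 1;

    st->samples    += frames;
    st->framesLeft -= frames;
    *ioNumberDataPackets = frames;   // 0 signals "no more input for now"
    return noErr;
}

// Inside the output AudioUnit's render callback, the converter would
// be driven like this:
//   AudioConverterFillComplexBuffer(conv, InputProc, &state,
//                                   &ioFrames, ioData, NULL);
```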
Where do I configure that rate? In all the examples I have seen, the sample rate is taken from the microphone and the AudioUnit is set to the same rate (44.1 kHz), so there is no need for conversion.
Also, all the examples use a Float32 sample size, but I will be using short (16-bit) samples, which must also be set somewhere.
I know AudioConverter can do this, but I haven't found any example of sample rate conversion. Sorry for asking so many questions on my first attempt with Core Audio, but I have lots of doubts.
Thanks
Pablo J. Royo