Thanks Tahome. Sorry to labour the point, but what I'm trying to get at is how to make Core Audio play nicely with these floats. You seem to be saying that Core Audio allows you to pass floats up the chain until the output stage? I'm being forced to convert back & forth at every stage, e.g. between a mixer and master AU. If you try to create an ASBD with floats, you'll get an "unsupported format" error on iOS. In other words: I know how to read a file as floats, how to process as floats, and how to convert back to SInt16. The problem is that Core Audio seems to disallow floats anywhere in its chain. Apologies if my original post wasn't clear on this; I only want to know how to make Core Audio accept floats.
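For reference, this is a minimal sketch of the kind of float32 ASBD that triggers the error when set on a mixer or remote IO bus on iOS of this era (the canonical iOS AU sample format then was 8.24 fixed point, not float); the specific field values here are my assumptions for 44.1 kHz interleaved stereo, not taken from the thread:

```c
#include <CoreAudio/CoreAudioTypes.h>

// Hypothetical float32 stream description -- the kind iOS rejects
// (e.g. with kAudioUnitErr_FormatNotSupported) on a mixer/remote IO bus.
AudioStreamBasicDescription asbd = {0};
asbd.mSampleRate       = 44100.0;
asbd.mFormatID         = kAudioFormatLinearPCM;
asbd.mFormatFlags      = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked;
asbd.mBitsPerChannel   = 32;
asbd.mChannelsPerFrame = 2;                                         // stereo
asbd.mFramesPerPacket  = 1;                                         // PCM
asbd.mBytesPerFrame    = asbd.mChannelsPerFrame * sizeof(Float32);  // 8
asbd.mBytesPerPacket   = asbd.mBytesPerFrame;                       // 8
```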
On 18 Jan 2011, at 09:06, tahome izwah wrote:

In our particular case we use the EAFReader class from the Dirac project to convert chunks of audio coming from a file to float32, process the data entirely in the float32 domain using our own DSP, and convert it back to SInt16 for playback at the very end of our processing chain. Not sure if this answers your question, but as I read it this sounded like what you wanted to do in your application, so I recommended it.

--th

2011/1/18 Steve gfx < email@hidden>:

Well, that sounds like the OP's original question then (see below): how do you remain in float32 land given the constraints of iOS & Core Audio? (Or were you originally implying that some Dirac code handles this for you? Or do you only have ONE stage of processing anyway?)
"1) Load some audio data as 32-bit floats (2-ch stereo) into buffers.
2) Keep it in this format so that in the mixer and master callbacks no conversion is necessary to perform per-channel or global DSP.
3) Finally, convert it to whatever is necessary for output."
On 18 Jan 2011, at 06:07, tahome izwah wrote:
No. We convert on input (and output) and stay entirely in float32 land
for all computations.