RE: AUHAL for input - format change?
- Subject: RE: AUHAL for input - format change?
- From: "Tim Dorcey" <email@hidden>
- Date: Mon, 25 Apr 2005 13:02:15 -0700
- Importance: Normal
> So you'll have to listen for the stream format notifications anyways
> to keep your client sample rate the same as the hardware's (you can
Thanks. I see that I can install an AUHAL property listener that behaves the
same way my device listener does now, i.e., reconfigures my AudioConverter
to deal with any new format coming from the AUHAL.
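For concreteness, here is a rough sketch of what I have in mind.
RebuildConverter and InstallFormatListener are hypothetical names of mine,
and I am assuming the hardware-side format lives on the input scope of
element 1 (the input element):

    #include <AudioUnit/AudioUnit.h>

    /* Hypothetical helper of mine that tears down and rebuilds the
       AudioConverter for a new source format: */
    static void RebuildConverter(const AudioStreamBasicDescription *asbd);

    static void FormatListener(void *inRefCon, AudioUnit inUnit,
                               AudioUnitPropertyID inID,
                               AudioUnitScope inScope,
                               AudioUnitElement inElement)
    {
        AudioStreamBasicDescription asbd;
        UInt32 size = sizeof(asbd);
        /* Re-read the hardware-side format and rebuild the converter. */
        if (inID == kAudioUnitProperty_StreamFormat &&
            AudioUnitGetProperty(inUnit, kAudioUnitProperty_StreamFormat,
                                 kAudioUnitScope_Input, 1,
                                 &asbd, &size) == noErr)
            RebuildConverter(&asbd);
    }

    static OSStatus InstallFormatListener(AudioUnit auhal)
    {
        return AudioUnitAddPropertyListener(auhal,
                                            kAudioUnitProperty_StreamFormat,
                                            FormatListener, NULL);
    }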
> The really big thing AUHAL helps with is dealing with devices with
> multiple streams -- it flattens them into an array of
> channels so you
> don't have to.
I think this might be the root of the problems with my current code, which I
should have looked at more closely before jumping into AUHAL. What I need in
the end is 16-bit, 8 kHz, mono. I make no requests of the device; I just
query what it is doing and set up an AudioConverter to convert that to what I
need. However, I had been using AudioConverterConvertBuffer, assuming the
device ioProc would deliver the data in a single buffer, interleaved if
stereo. So, if Tiger/iSight is now delivering stereo to the ioProc as an
array of per-channel buffers, and I am only feeding one of them to
ConvertBuffer, that would explain my failure.
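To spell out my suspicion (a sketch; the names are mine):

    #include <CoreAudio/CoreAudio.h>

    static OSStatus MyIOProc(AudioDeviceID inDevice,
                             const AudioTimeStamp *inNow,
                             const AudioBufferList *inInputData,
                             const AudioTimeStamp *inInputTime,
                             AudioBufferList *outOutputData,
                             const AudioTimeStamp *inOutputTime,
                             void *inClientData)
    {
        /* Interleaved stereo arrives as:
               mNumberBuffers == 1, mBuffers[0].mNumberChannels == 2
           De-interleaved stereo arrives as:
               mNumberBuffers == 2, each mBuffers[i].mNumberChannels == 1
           My old code handed only mBuffers[0] to
           AudioConverterConvertBuffer, which loses every channel after
           the first in the de-interleaved case. */
        for (UInt32 i = 0; i < inInputData->mNumberBuffers; i++) {
            /* ...queue mBuffers[i] (all of them, not just the first)
               for conversion... */
        }
        return noErr;
    }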
The answer seems to be to use AudioConverterFillComplexBuffer (with or
without AUHAL). The aim is to consume all of the new input data, produce as
much output data as possible, and leave any left-over input state in the
converter until new input data is available. I thought I could get this
behavior by asking for an arbitrarily large amount of output from
FillComplexBuffer and having my inputProc set ioNumberDataPackets to 0 once
it had consumed the available input. However, that made FillComplexBuffer
call my inputProc in a tight loop. Returning a non-zero error code from my
inputProc instead solved that problem, and I got capture basically working,
except for a continuous high-pitched background tone.
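Here is the pattern I am describing, as a sketch; MyCaptureState and
kNoMoreInputErr are names I am making up, and the key point is that the
inputProc reports zero packets together with a private error code rather
than looping:

    #include <AudioToolbox/AudioToolbox.h>

    enum { kNoMoreInputErr = 'nomo' };  /* my sentinel, not a system code */

    typedef struct {
        AudioBufferList *inputData;    /* buffers handed over by the ioProc */
        UInt32           packetCount;  /* frames available this cycle */
        Boolean          consumed;     /* already given to the converter? */
    } MyCaptureState;

    static OSStatus InputProc(AudioConverterRef inConverter,
                              UInt32 *ioNumberDataPackets,
                              AudioBufferList *ioData,
                              AudioStreamPacketDescription **outPacketDesc,
                              void *inUserData)
    {
        MyCaptureState *state = (MyCaptureState *)inUserData;
        if (state->consumed) {
            /* Nothing left this cycle: report zero packets AND a non-zero
               error, so FillComplexBuffer returns instead of asking again. */
            *ioNumberDataPackets = 0;
            return kNoMoreInputErr;
        }
        /* Point the converter at the fresh input, one buffer at a time
           (a struct assignment would copy only the first AudioBuffer). */
        for (UInt32 i = 0; i < ioData->mNumberBuffers &&
                           i < state->inputData->mNumberBuffers; i++)
            ioData->mBuffers[i] = state->inputData->mBuffers[i];
        *ioNumberDataPackets = state->packetCount;
        state->consumed = true;
        return noErr;
    }

Then, per capture cycle, I ask AudioConverterFillComplexBuffer for a
generous amount of output, treat kNoMoreInputErr as success, and let
ioOutputDataPacketSize tell me how much output was actually produced.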
Still digging to figure that out, but at least wanted to confirm that I am
on the right track here.
One other thing I see I am doing wrong in the old code is performing the
stream format conversion inside the ioProc, rather than in a lower-priority
thread. I remember I set it up that way because it was the easiest way to
fit into my existing OS9 code. Then, as I got around to implementing new
buffering code to move it out of the ioProc, I decided maybe it wasn't
necessary. If the typical device output is going to be 44.1 kHz, floating
point, stereo, then the data reduction to 8 kHz, 16-bit, mono is roughly
22:1 (44,100 frames/sec x 2 channels x 4 bytes, about 353 KB/sec, down to
8,000 x 2 bytes = 16 KB/sec), so perhaps the conversion would not be much
slower than copying the raw data?
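If I do end up moving the conversion out of the ioProc, something like the
following single-writer/single-reader ring buffer would let the ioProc just
copy and return. This is only a sketch: RawRing and RingWrite are my own
names, and stdatomic is a present-day stand-in for whatever atomic/barrier
mechanism is appropriate on the target system:

    #include <stdatomic.h>

    #define RING_BYTES (1 << 18)   /* power of two; a few seconds of audio */

    typedef struct {
        unsigned char data[RING_BYTES];
        atomic_uint   writePos;   /* advanced only by the ioProc */
        atomic_uint   readPos;    /* advanced only by the converter thread */
    } RawRing;

    /* Called from the ioProc: copy the raw frames and get out. The
       lower-priority thread drains from readPos and runs the converter. */
    static int RingWrite(RawRing *r, const void *src, unsigned len)
    {
        unsigned w     = atomic_load(&r->writePos);
        unsigned rd    = atomic_load(&r->readPos);
        unsigned space = RING_BYTES - (w - rd);
        if (len > space)
            return 0;   /* overrun: drop this buffer rather than block */
        for (unsigned i = 0; i < len; i++)
            r->data[(w + i) % RING_BYTES] = ((const unsigned char *)src)[i];
        atomic_store(&r->writePos, w + len);
        return 1;
    }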
Tim