Re: Acquiring input Data
- Subject: Re: Acquiring input Data
- From: Heath Raftery <email@hidden>
- Date: Thu, 23 Jun 2005 10:42:35 +1000
On 23/06/2005, at 3:41 AM, Paul Barbot wrote:
On 6/22/05, Heath Raftery <email@hidden> wrote:
Well that's progress :)
Keep an eye on your InputUnit in the debugger. It should start to
fill with values as you go through the functions to set it up. Follow
the technote you were reading, and make sure you have included the
functions we've mentioned in this thread. You'll get there!
I hope so ...
Okay, I think I may see the problem. If someone at Apple is still
reading, I think this may be a very good example of what I was
talking about during the G&M Feedback Session at WWDC05. The TechNote
"TN2091: Device Input using the HAL Output Audio Unit" sounds very
promising, but doesn't quite get a newbie very far in the end. If I
can formalise what's missing, I'll be sure to provide that as
feedback.
Here's the basic steps required to get sound in from an input device:
//1. Create AudioOutputUnit from ComponentDescription
//2. Enable input, disable output
//3. Set the AudioUnit's current device to system input device
//4. Match input sample rate to device
At this point your StreamBasicDescriptions should be good to read.
//5. Set the call back function for when input data arrives
//6. Initialise Audio Buffers
//7. Initialise and start the audio input device
//8, 9, ... AudioArrived, AudioUnitRender and friends
I think you've addressed everything, but let's review your Step 4. The
TechNote says this:
<CODE>
CAStreamBasicDescription DeviceFormat;
CAStreamBasicDescription DesiredFormat;
UInt32 size = sizeof(CAStreamBasicDescription);
//Get the input device format
AudioUnitGetProperty(InputUnit, kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input, 1, &DeviceFormat, &size);
//set the desired format to the device's sample rate
DesiredFormat.mSampleRate = DeviceFormat.mSampleRate;
//set format to output scope
AudioUnitSetProperty(InputUnit, kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output, 1, &DesiredFormat, sizeof
(CAStreamBasicDescription));
</CODE>
And in the implementation you posted, you changed the SetProperty to
AudioUnitSetProperty(InputUnit, kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output, /*1,*/ 0, &DesiredFormat, sizeof
(CAStreamBasicDescription));
If you were like me during development of this stuff, you
experimented with these scope and element values all the time without
really knowing why, and wouldn't actually recall why you changed the
element in that call from 1 to 0. I'd go as far as to say the tech
note, and your modification, look wrong. Here's what I think is
going on:
The purpose of this step is to make sure the device and client side
of the AudioUnit have the same sample rate. That's because the
AudioUnit is capable of "simple" conversions (like deinterleaving the
data) but not sample rate conversions (which requires buffering and a
separate AudioConverter). So when you established this AudioUnit, it
picked its default format on the client side:
2 ch, 44100 Hz, 'lpcm' (0x0000002B) 32-bit big-endian float,
deinterleaved
When you connected it to the input device, it set the device side of
the unit to the device's format, say:
2 ch, 48000 Hz, 'lpcm' (0x0000000B) 32-bit big-endian float
What you need to do is make sure the sample rate on the client side
matches that of the device (you can't change the device format of
course, unless you have an interface to the device itself). So by the
end of the function, the client format should look like this:
2 ch, 48000 Hz, 'lpcm' (0x0000002B) 32-bit big-endian float,
deinterleaved
The unit is capable of deinterleaving to produce the client format,
as long as the sample rates are the same and both formats are lpcm
(linear PCM).
Now let's look at the tech note code:
<CODE>
AudioUnitGetProperty(InputUnit, kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input, 1, &DeviceFormat, &size);
</CODE>
eg. DeviceFormat = 2 ch, 48000 Hz, 'lpcm' (0x0000000B) 32-bit big-
endian float
Then
<CODE>
DesiredFormat.mSampleRate = DeviceFormat.mSampleRate;
</CODE>
DesiredFormat has just been allocated on the stack, so it could have
anything in it. Often, it's all zeroes.
eg. DesiredFormat = 0 ch, 48000 Hz, ' ' (0x00000000) 0 bits/
channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame
<CODE>
AudioUnitSetProperty(InputUnit, kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output, 1, &DesiredFormat, sizeof
(CAStreamBasicDescription));
</CODE>
eg. InputUnit looks like this:
Device side: 2 ch, 48000 Hz, 'lpcm' (0x0000000B) 32-bit big-endian
float
Client side: 0 ch, 48000 Hz, ' ' (0x00000000) 0 bits/channel, 0
bytes/packet, 0 frames/packet, 0 bytes/frame
Which is the crap you are seeing. Instead, try this for your Step 4:
<CODE>
//get the client side stream format
CAStreamBasicDescription asbdClient;
UInt32 theSize = sizeof(asbdClient);
AudioUnitGetProperty(InputUnit, kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output, 1, &asbdClient, &theSize);
printf("client format:\n");
asbdClient.Print();
CAStreamBasicDescription asbdDevice;
theSize = sizeof(asbdDevice);
AudioUnitGetProperty(InputUnit, kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input, 1, &asbdDevice, &theSize);
printf("device format:\n");
asbdDevice.Print();
Float64 rate=0;
theSize = sizeof(Float64);
//you could probably pull this rate value straight from
//asbdDevice.mSampleRate, but here's another way if you have your
//InputDeviceID handy.
AudioDeviceGetProperty(InputDeviceID, 0 /*inChannel*/, true /*isInput*/,
kAudioDevicePropertyNominalSampleRate, &theSize, &rate);
asbdClient.mSampleRate = rate;
//Set the stream format of AUHAL to match the sample rate of the
//input device
theSize = sizeof(asbdClient);
AudioUnitSetProperty(InputUnit, kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output, 1, &asbdClient, theSize);
theSize = sizeof(asbdClient);
AudioUnitGetProperty(InputUnit, kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output, 1, &asbdClient, &theSize);
printf("client format after:\n");
asbdClient.Print();
AudioUnitGetProperty(InputUnit, kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input, 1, &asbdDevice, &theSize);
printf("device format after:\n");
asbdDevice.Print();
</CODE>
Instead of blowing away the client side of the AudioUnit, this
function just sets its sample rate value to that of the device.
There's a lot of debug logging code in there too, so you can see
what's going on.
Incidentally, from what you've posted I'd guess that your device
sample rate is 44100 anyway (which is common), so you won't see this
48000 popping up. That'll make it a bit harder to see when values are
changing, so you'll have to keep an eye on that.
I'm very interested to see if I'm on the right track here - good luck!
Heath
_______________________________________________
Coreaudio-api mailing list (email@hidden)