Re: Acquiring input Data
- Subject: Re: Acquiring input Data
- From: Paul Barbot <email@hidden>
- Date: Thu, 23 Jun 2005 19:05:16 +0200
On 6/23/05, Heath Raftery <email@hidden> wrote:
> On 23/06/2005, at 3:41 AM, Paul Barbot wrote:
>
> > On 6/22/05, Heath Raftery <email@hidden> wrote:
> >
> >> Well that's progress :)
> >> Keep an eye on your InputUnit in the debugger. It should start to
> >> fill with values as you go through the functions to set it up. Follow
> >> the technote you were reading, and make sure you have included the
> >> functions we've mentioned in this thread. You'll get there!
> >>
> > I hope so ...
Hey, it seems I have data in my buffer now!!!!
Do you know a simple way to verify that it's really good audio data
from the input (e.g. by reading it back)?
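One quick sanity check: inside the input callback, after AudioUnitRender
has filled your AudioBufferList, scan the samples and print the peak level.
A rough sketch, assuming the deinterleaved 32-bit float client format
discussed below; "theBufferList" is a placeholder for whatever
AudioBufferList you pass to AudioUnitRender:
<CODE>
// Needs <stdio.h> and <math.h>. Runs inside the input callback,
// after AudioUnitRender has filled theBufferList.
Float32 peak = 0.0f;
for (UInt32 i = 0; i < theBufferList->mNumberBuffers; i++) {
    Float32 *samples = (Float32 *)theBufferList->mBuffers[i].mData;
    UInt32 count = theBufferList->mBuffers[i].mDataByteSize / sizeof(Float32);
    for (UInt32 j = 0; j < count; j++) {
        if (fabsf(samples[j]) > peak)
            peak = fabsf(samples[j]);
    }
}
// Live input gives a nonzero, varying peak; a peak that is always
// exactly 0 means you are rendering silence.
printf("peak: %f\n", peak);
</CODE>
Talking into the microphone should make the printed peak jump around;
silence or a dead device gives values at or near zero.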
> Okay, I think I may see the problem. If someone at Apple is still
> reading, I think this may be a very good example of what I was
> talking about during the G&M Feedback Session at WWDC05. The TechNote
> "TN2091: Device Input using the HAL Output Audio Unit" sounds very
> promising, but doesn't quite get a newbie very far in the end. If I
> can formalise what's missing, I'll be sure to provide that as
> feedback.
>
> Here are the basic steps required to get sound in from an input
> device (sketched in code after the list):
>
> //1. Create AudioOutputUnit from ComponentDescription
> //2. Enable input, disable output
> //3. Set the AudioUnit's current device to system input device
> //4. Match input sample rate to device
> At this point your StreamBasicDescriptions should be good to read.
> //5. Set the call back function for when input data arrives
> //6. Initialise Audio Buffers
> //7. Initialise and start the audio input device
> //8, 9, ... AudioArrived, AudioUnitRender and friends
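For reference, here is roughly what steps 1-3 and 5-7 can look like with
the (2005-era) Component Manager API. This is only a sketch: error
checking is omitted, it belongs inside your own setup function, and
AudioInputProc is a placeholder name for your callback:
<CODE>
#include <AudioUnit/AudioUnit.h>
#include <CoreAudio/CoreAudio.h>

AudioUnit InputUnit;

// 1. Create the AudioOutputUnit (AUHAL)
ComponentDescription desc = { kAudioUnitType_Output,
                              kAudioUnitSubType_HALOutput,
                              kAudioUnitManufacturer_Apple, 0, 0 };
Component comp = FindNextComponent(NULL, &desc);
OpenAComponent(comp, &InputUnit);

// 2. Enable input on element 1, disable output on element 0
UInt32 enableIO = 1;
AudioUnitSetProperty(InputUnit, kAudioOutputUnitProperty_EnableIO,
                     kAudioUnitScope_Input, 1, &enableIO, sizeof(enableIO));
enableIO = 0;
AudioUnitSetProperty(InputUnit, kAudioOutputUnitProperty_EnableIO,
                     kAudioUnitScope_Output, 0, &enableIO, sizeof(enableIO));

// 3. Set the unit's current device to the default input device
AudioDeviceID inputDevice;
UInt32 size = sizeof(inputDevice);
AudioHardwareGetProperty(kAudioHardwarePropertyDefaultInputDevice,
                         &size, &inputDevice);
AudioUnitSetProperty(InputUnit, kAudioOutputUnitProperty_CurrentDevice,
                     kAudioUnitScope_Global, 0, &inputDevice,
                     sizeof(inputDevice));

// ... step 4, matching the sample rate, is discussed below ...

// 5-7. Install the input callback, then initialise and start
AURenderCallbackStruct input;
input.inputProc = AudioInputProc;   // placeholder: your own callback
input.inputProcRefCon = NULL;       // or a pointer to your own state
AudioUnitSetProperty(InputUnit, kAudioOutputUnitProperty_SetInputCallback,
                     kAudioUnitScope_Global, 0, &input, sizeof(input));
AudioUnitInitialize(InputUnit);
AudioOutputUnitStart(InputUnit);
</CODE>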
>
> I think you've addressed everything, but let's review your Step 4. The
> TechNote says this:
> <CODE>
> CAStreamBasicDescription DeviceFormat;
> CAStreamBasicDescription DesiredFormat;
> UInt32 size = sizeof(CAStreamBasicDescription);
>
> //Get the input device format
> AudioUnitGetProperty(InputUnit, kAudioUnitProperty_StreamFormat,
> kAudioUnitScope_Input, 1, &DeviceFormat, &size);
>
> //set the desired format to the device's sample rate
> DesiredFormat.mSampleRate = DeviceFormat.mSampleRate;
>
> //set format to output scope
> AudioUnitSetProperty(InputUnit, kAudioUnitProperty_StreamFormat,
> kAudioUnitScope_Output, 1, &DesiredFormat, sizeof
> (CAStreamBasicDescription));
> </CODE>
>
> And in the implementation you posted, you changed the SetProperty to
> AudioUnitSetProperty(InputUnit, kAudioUnitProperty_StreamFormat,
> kAudioUnitScope_Output, /*1,*/ 0, &DesiredFormat, sizeof
> (CAStreamBasicDescription));
>
> If you were like me during development of this stuff, you
> experimented with these scope and element values all the time without
> really knowing why, and wouldn't actually recall why you changed the
> element in that call from 1 to 0. I'd go as far as to say the tech
> note, and your modification, look wrong. Here's what I think is
> going on:
Yes, I tried different values to see if anything behaved differently.
> The purpose of this step is to make sure the device and client side
> of the AudioUnit have the same sample rate. That's because the
> AudioUnit is capable of "simple" conversions (like deinterleaving the
> data) but not sample rate conversions (which requires buffering and a
> separate AudioConverter). So when you established this AudioUnit, it
> picked its default format on the client side:
> 2 ch, 44100 Hz, 'lpcm' (0x0000002B) 32-bit big-endian float,
> deinterleaved
> When you connected it to the input device, it set the device side of
> the unit to the device's format, say:
> 2 ch, 48000 Hz, 'lpcm' (0x0000000B) 32-bit big-endian float
> What you need to do is make sure the sample rate on the client side
> matches that of the device (you can't change the device format of
> course, unless you have an interface to the device itself). So by the
> end of the function, the client format should look like this:
> 2 ch, 48000 Hz, 'lpcm' (0x0000002B) 32-bit big-endian float,
> deinterleaved
>
> The unit is capable of deinterleaving to produce the client format,
> as long as the sample rates are the same and both formats are lpcm
> (linear PCM).
>
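Concretely: on the AUHAL, element 1 is the input element and element 0 is
the output element. For the input element, the device side is its input
scope and the client (application) side is its output scope, so the format
your callback sees lives at (kAudioUnitScope_Output, element 1). That is
why the element should stay 1, and it also suggests reading the client
format back first and changing only its sample rate, rather than writing a
mostly uninitialised DesiredFormat. A sketch (error checking omitted;
CAStreamBasicDescription is the PublicUtility wrapper the technote uses):
<CODE>
CAStreamBasicDescription DeviceFormat;
CAStreamBasicDescription DesiredFormat;
UInt32 size = sizeof(CAStreamBasicDescription);

// Device side of the input element: input scope, element 1
AudioUnitGetProperty(InputUnit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Input, 1, &DeviceFormat, &size);

// Client side of the input element: output scope, element 1
size = sizeof(CAStreamBasicDescription);
AudioUnitGetProperty(InputUnit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Output, 1, &DesiredFormat, &size);

// Change only the sample rate; keep the rest of the client format
DesiredFormat.mSampleRate = DeviceFormat.mSampleRate;

AudioUnitSetProperty(InputUnit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Output, 1, &DesiredFormat,
                     sizeof(CAStreamBasicDescription));
</CODE>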
I have tried the example code you gave me, and here is what I get:
client format:
AudioStreamBasicDescription: 2 ch, 44100 Hz, 'lpcm' (0x0000002B) 32-bit big-endian float, deinterleaved
device format:
AudioStreamBasicDescription: 2 ch, 44100 Hz, 'lpcm' (0x0000000B) 32-bit big-endian float
this is the rate of the device that we set to the client:
asbdclient rate : 1088784512
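(Side note on that number: 1088784512 is 0x40E58880, which is exactly the
top 32 bits of 44100.0 stored as a Float64, and mSampleRate is a Float64.
So the rate itself is fine; it is almost certainly just being printed with
an integer format specifier. Something like
printf("asbdclient rate : %f\n", DesiredFormat.mSampleRate);
would show 44100.000000.)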
client format after:
AudioStreamBasicDescription: 2 ch, 44100 Hz, 'lpcm' (0x0000002B) 32-bit big-endian float, deinterleaved
device format after:
AudioStreamBasicDescription: 2 ch, 44100 Hz, 'lpcm' (0x0000000B) 32-bit big-endian float
So, except for the rate being 44100 Hz instead of 48000, it looks
like you said. But I left in the previous print-description call you
gave me, and it's always full of zeros :(
client format2:
AudioStreamBasicDescription: 0 ch, 44100 Hz, ' ' (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame
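An all-zero ASBD like that usually means the AudioUnitGetProperty call
behind that printout failed (perhaps a scope/element combination that
doesn't exist on this unit) and the zero-initialised struct was never
filled in. It's worth checking the return code, along these lines (a
sketch; the variable names are placeholders):
<CODE>
AudioStreamBasicDescription fmt;
memset(&fmt, 0, sizeof(fmt));           // needs <string.h>
UInt32 size = sizeof(fmt);
OSStatus err = AudioUnitGetProperty(InputUnit,
                                    kAudioUnitProperty_StreamFormat,
                                    kAudioUnitScope_Output, 1,
                                    &fmt, &size);
if (err != noErr)
    printf("AudioUnitGetProperty failed: %ld\n", (long)err);
</CODE>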
>
> Instead of blowing away the client side of the AudioUnit, this
> function just sets its sample rate value to that of the device.
> There's a lot of debug logging code in there too, so you can see
> what's going on.
>
> Incidentally, from what you've posted I'd guess that your device
> sample rate is 44100 anyway (which is common) so you won't see this
> 48000 popping up. That'll make it a bit harder to see when values are
> changing, so you'll have to keep an eye on that.
>
> I'm very interested to see if I'm on the right track here - good luck!
> Heath
Thanks very much for your help!!
--
Paul Barbot