Re: what would cause QTSetComponentProperty to fail with a kAudioUnitErr_InvalidPropertyValue err?
- Subject: Re: what would cause QTSetComponentProperty to fail with a kAudioUnitErr_InvalidPropertyValue err?
- From: Jeff Moore <email@hidden>
- Date: Tue, 20 Mar 2007 14:44:34 -0700
On Mar 20, 2007, at 2:30 PM, Michael Dautermann wrote:
On Mar 20, 2007, at 11:21 AM, Brad Ford wrote:
I see how that's a confusing choice of words on my part.
The more succinct question I should have asked was: "if somebody
in DTS were looking at my plug-in code, they should be able to
select & open the plug-in via the sequence grabber even though
they wouldn't have my unique device connected nor would they
necessarily hear modulating audio, right?"
My plug-in claims an AudioDeviceID in its Initialize function. At
the same time, the plug-in also claims two AudioStreamIDs (one for
the left and one for the right channel) and marks those two stream
IDs as created via AudioHardwareStreamsCreated. It's through
those IDs that the "DeviceGetProperty" and "StreamGetProperty"
functions are presumably being called.
I totally agree with Jeff that you should be using HALLab as your
primary bring-up and validation tool. Your device needs to work
right in HALLab before it has a prayer of working in Sequence
Grabber. That said, here's what's going on in SG. Sequence
Grabber's audio channel has an API to get a list of available
devices. Each device is described as a CFDictionary of key/value
pairs. To compile this dictionary, SG queries your audio device
for a number of properties. As currently implemented (that's a big
caveat -- all of this is subject to change in the future) you'll
get asked:
Hi guys,
I'd love to be able to fully use HALLab for my validation tool, and
the plug-in seems to do all the right things at least on the Info
dialog (the main dialog) of HALLab.
In the big picture, audio data for the plug-in comes from a
Quicktime 'vdig' component I've written that gathers video and audio
streams from a USB device. Because this 'vdig' needs to be running,
I don't think I can easily use HALLab play-through to do
validation. I have to do it this way as a USB device can only be
exclusively opened by one process / app at a time (and 'vdig' counts
for one and 'audi' for another).
This pretty much means that your whole scheme is doomed to failure.
There's pretty much no way you can always guarantee that the vdig is
around when the audio side of things is in use.
Let me reinforce what Brad said: if your device can't be used in
HALLab, you have basically a zero chance of getting it to work
reliably anywhere else. This includes making it work with the Input
Window.
I do create the streams but even with no audio data making it across
to the audio plugin, the plug-in should still be selectable via the
Sequence Grabber. One would expect the audio stream would just be
silent until the streaming starts up.
The normal way that a device like this is brought up is to use a
daemon to handle moving the data from the kernel out into user space,
and then to use shared memory and IPC to move the data from the daemon to
the process that is consuming the data. For example, this is how our
iSight driver works. I imagine that the iSight has a lot in common
with the device you are working with.
Jeff said I needed to "start with the basics of creating a proper
HAL plug-in". I'd love to find out what those basics are. Outside
of the header files and the open source project that's been
graciously pointed out to me, I've yet to find any clear
documentation on how to do this.
That's because that's all there is. I'm happy to answer your questions
about the API here on this list.
--
Jeff Moore
Core Audio
Apple
_______________________________________________
Coreaudio-api mailing list (email@hidden)