Re: Enable system volume when driver doesn't
- Subject: Re: Enable system volume when driver doesn't
- From: "Mikael Hakman" <email@hidden>
- Date: Wed, 12 Mar 2008 00:58:01 +0100
- Organization: Datakonsulten AB
On March 11, 2008 8:29 PM, Jeff Moore wrote:
On Mar 11, 2008, at 7:36 AM, Mikael Hakman wrote:
On March 10, 2008 11:43 PM, Jeff Moore wrote:
You'd be dealing with the HAL's user-land driver API. The sample code
for such a driver is in
/Developer/Examples/CoreAudio/HAL/SampleHardwarePlugIn.
No. You still have to implement all the HAL semantics. There isn't
anything you can do to get rid of that work in a user-land driver.
Fortunately, our SDK has a C++ class library that can help you out
with this. You can see it in the sample project I mentioned.
I looked into the sample. I have no problem with the technology used there,
which I know from other programming areas, but I wish there were a
description, in the form of a guide or tutorial on HAL user-land driver
implementation, describing the semantics, functions, and roles of the
various pieces in a sample project. You know: "In order to implement a
HAL user-land driver you have to implement the following N software
pieces. Piece number 1 should implement the following M calls. Call number 1
in piece number 1 should do this and return that." Unfortunately I'm
unable to find such a description. Perhaps I don't know where to look.
You didn't find it because there really isn't anything like that for the
various flavors of HAL plug-in there are. This forum is a fine place to
pose questions. The easiest way to understand how things work is to just
see how they work for other devices using HALLab. Plus, there is a lot of
good info in the various HAL headers.
That said, what you propose is easier said than done. There are a lot
of behaviors and semantics you'd have to implement to make this as
transparent as you would like. Basically, you have to implement a
repeater for all the properties and what not. The sample code won't
really help you with this, but will show you the basics of writing
such a driver.
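For example, the core of such a repeater is only a few lines: answer the
handful of properties you synthesize yourself (volume, in this case) and
forward everything else to the device you wrap. A rough sketch, not from the
SDK (gWrappedDevice and gOurVolume are hypothetical stand-ins for your
plug-in's own state):

    #include <CoreAudio/AudioHardware.h>

    static AudioObjectID gWrappedDevice = kAudioObjectUnknown; // the real device we wrap
    static Float32       gOurVolume     = 1.0f;                // the volume we synthesize

    // Answer the volume property locally; pass everything else straight
    // through to the wrapped device so the repeater stays transparent.
    static OSStatus RepeaterGetPropertyData(const AudioObjectPropertyAddress* inAddress,
                                            UInt32 inQualifierDataSize,
                                            const void* inQualifierData,
                                            UInt32* ioDataSize,
                                            void* outData)
    {
        if (inAddress->mSelector == kAudioDevicePropertyVolumeScalar) {
            if (*ioDataSize < sizeof(Float32))
                return kAudioHardwareBadPropertySizeError;
            *static_cast<Float32*>(outData) = gOurVolume;
            *ioDataSize = sizeof(Float32);
            return kAudioHardwareNoError;
        }
        return AudioObjectGetPropertyData(gWrappedDevice, inAddress,
                                          inQualifierDataSize, inQualifierData,
                                          ioDataSize, outData);
    }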
You are quite right here, all the more so given that no guidance on the very
principles of such an implementation is available. Here are some simple
questions. What is the very first software object instantiated by the HAL
That would be your plug-in object. Any further objects need to be
instantiated by your plug-in using the functions defined in
<CoreAudio/AudioHardwarePlugIn.h>.
, how does HAL know about this object
It created it itself as part of the plug-in loading process.
, where does HAL find it
HAL plug-ins live in /Library/Audio/Plug-Ins/HAL. The HAL scans this
directory when it is initialized and loads all the plug-ins it finds.
, and how is it instantiated?
Using standard CFPlugIn calls.
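Concretely, the HAL looks up the factory function named in your Info.plist
and calls it through the standard CFPlugIn machinery. A minimal sketch with
illustrative names (gInterface stands for your filled-in
AudioHardwarePlugInInterface table; real code would also do the CFPlugIn
instance bookkeeping, omitted here):

    #include <CoreFoundation/CoreFoundation.h>
    #include <CoreAudio/AudioHardwarePlugIn.h>

    extern AudioHardwarePlugInInterface gInterface;        // the 27-entry function table, filled in elsewhere
    static AudioHardwarePlugInInterface* gInterfacePtr = &gInterface;

    extern "C" void* MyPlugInFactory(CFAllocatorRef inAllocator, CFUUIDRef inTypeUUID)
    {
        // The HAL only ever asks us for the HAL plug-in type.
        if (CFEqual(inTypeUUID, kAudioHardwarePlugInTypeID))
            return &gInterfacePtr;   // handed back to the HAL as the plug-in ref
        return NULL;
    }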
What externally visible calls and interfaces should this object
implement? What is the meaning and semantics of the calls, interfaces,
and objects implemented, provided or returned by this object? And then,
recursively, the same questions for every object and/or interface. Given
such a description, an implementation need not be as overwhelming as it may
appear, IMHO.
An object, in the sense the HAL's API uses the term, is basically just a
collection of properties. The various properties implemented by the
various classes of objects are all described in the HAL's headers.
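For example, every property is addressed by a selector/scope/element triple,
and implementing an object mostly means answering Has/IsSettable/Get/Set for
the triples you support. A toy illustration (the helper's name is made up):

    #include <CoreAudio/AudioHardware.h>

    // A device object is "just properties": it advertises which
    // selector/scope/element triples it can answer for.
    static Boolean OurDeviceHasProperty(const AudioObjectPropertyAddress* inAddress)
    {
        switch (inAddress->mSelector) {
            case kAudioObjectPropertyName:          // CFStringRef, the device's name
            case kAudioDevicePropertyDeviceIsAlive: // UInt32, 1 while the hardware is present
            case kAudioDevicePropertyVolumeScalar:  // Float32, 0.0 through 1.0
                return true;
            default:
                return false;
        }
    }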
Jeff, you have been very helpful by directing me to right technology (HAL
user-land drivers) and by providing a link to a sample. You also try to
explain things in detail. I thank you very much for this. So when I asked
for more documentation it wasn't by any means directed to you personally, on
the contrary. I simply expressed my opinion on what kind of information would
ease understanding and the work. From your latest answer I understand that
there isn't more information than that in the header file descriptions, which I
find a little limited or narrow with respect to overall comprehension of the
topic. For example, somewhere in the documentation, whether in reference,
guide, technical note or even in sample code, I would expect a paragraph
entitled "Implementing user-land HAL audio drivers" (or equivalent title)
that would start with your own words here:
"HAL plug-ins live in /Library/Audio/Plug-Ins/HAL. The HAL scans this
directory when it is initialized and loads all the plug-ins it finds."
Then the doc would continue with:
"Packages found on this directory are plug-ins to be loaded. For each
plug-in package, HAL finds file Info.plist in Contents directory. In this
file, under the key <CFPlugInFactories>, HAL finds UUIDs and names of
factory functions. HAL loads the corresponding shared library and calls the
named factory functions. These factory functions shall be implemented in the
loaded library and shall return an AudioHardwarePlugInRef. The 2 parameters
to a factory function are <description of parameters>. Factory functions
shall be exported from shared libraries using their names prefixed with an
underscore. AudioHardwarePlugInRef is a pointer to
AudioHardwarePlugInInterface containing pointers to 27 functions that shall
be implemented by the plug-in. These interface functions are documented in
<url> and will be called by HAL in order to perform various actions or
return some information."
I appreciate this forum as a place to pose questions, but the above
information belongs in formal documentation, not a forum. Perhaps this
information is given somewhere I couldn't find, in which case a reference
to it should be given in the CoreAudio docs. It took me a day's
work to play Sherlock and infer this info from the sample and the file system
structure. BTW, the sample focuses more on C++ inheritance and code
structuring capabilities than on actually showing what has to be done when
implementing a driver, which of course is not your fault.
Are you writing the driver for this device, too? If so, there are
easier ways to integrate software volume with your driver than writing
a whole new user-land driver.
Unfortunately (or fortunately, depending on viewpoint) I'm not. We are
system integrators rather than software or hardware vendors. We do write
application and system software when required, possible, and feasible in
our projects. The audio hardware interfaces I'm talking about here are
from very well known vendors, including but not limited to such devices
as the RME Fireface, Lynx Aurora (with FW card), Apogee Rosetta (with FW
card), etc. We are in the process of building a high quality multichannel
audio/video output only path starting with a computer and ending with
digital audio and video monitors. The system will be used as a model for
other setups and installations. Remote control of volume and other
pertinent properties is required in target environments. I could however
use information on these easier ways to integrate volume control with
existing drivers as a hint (or an argument) when discussing the subject
with audio hardware vendors.
I don't mean to poke holes, but I can't imagine an audio professional who
would buy the gear you list would want to mangle their audio with a
digital volume control. There are reasons why these high end hardware
developers didn't build volume controls into their gear. You are taking
on a huge job with a steep investment in implementation and maintenance
just to enable the volume slider in the menu bar. I'm not sure this makes
a lot of sense, but we're here to help =)
Well, the volume has to be controlled somewhere. Given pure digital path,
you can only control it digitally. Messing with unnecessary D/A, then
analogue volume control, and then A/D again is not my notion of
professionalism, rather the opposite. Aren't we all in _digital_ computer
business?
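After all, a digital master volume is nothing more exotic than scaling the
samples somewhere in the IO path. A sketch (gOurVolume stands for whatever
your volume property setter stores):

    #include <CoreAudio/CoreAudioTypes.h>

    static Float32 gOurVolume = 1.0f;   // 0.0 = silence, 1.0 = unity gain

    // Scale every Float32 sample in the buffer list by the current volume.
    static void ApplyVolume(AudioBufferList* ioData)
    {
        for (UInt32 b = 0; b < ioData->mNumberBuffers; ++b) {
            Float32* samples = static_cast<Float32*>(ioData->mBuffers[b].mData);
            UInt32 count = ioData->mBuffers[b].mDataByteSize / sizeof(Float32);
            for (UInt32 i = 0; i < count; ++i)
                samples[i] *= gOurVolume;
        }
    }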
From your answer I understand that I didn't manage to explain why we need a
functional OS X master volume control. It is _not_ the slider control,
nobody here cares about that. It is the IR remote control that the system
volume control reacts to. I'm not free to describe the aim of my current
project, but there are plenty of environments and situations where people
can't, won't, or aren't able to use other means, yet high quality audio is
still required. One such circumstance, not relevant to my project, is people
who are handicapped or otherwise bound to one place. A blind person can use
a remote with ease but can't walk over, or use a keyboard or other devices,
that well.
Furthermore, the high end devices we are talking about are currently the
only means of getting low-jitter, multichannel, digital audio out of the
computer. I'm surprised that these vendors don't even try to broaden their
market by such a simple thing as implementing the OS X master volume control.
Apparently they are unable to imagine anything other than a traditional
recording studio environment. I would say that the other markets are at
least an order of magnitude larger.
Regards
Mikael Hakman
Research & Development
Datakonsulten AB