Re: Enable system volume when driver doesn't
- Subject: Re: Enable system volume when driver doesn't
- From: Jeff Moore <email@hidden>
- Date: Tue, 11 Mar 2008 17:37:38 -0700
On Mar 11, 2008, at 4:58 PM, Mikael Hakman wrote:
Jeff, you have been very helpful by directing me to the right technology
(HAL user-land drivers) and by providing a link to a sample. You
also try to explain things in detail. I thank you very much for
this. So when I asked for more documentation it wasn't by any means
directed at you personally, on the contrary. I simply expressed my
opinion about what kind of information would ease understanding and the
work. From your latest answer I understand that there isn't more
information than what is in the header file descriptions, which I find a
little limited or narrow with respect to overall comprehension of
the topic. For example, somewhere in the documentation, whether in a
reference, guide, technical note, or even in sample code, I would
expect a paragraph entitled "Implementing user-land HAL audio
drivers" (or an equivalent title) that would start with your own words
here:
"HAL plug-ins live in /Library/Audio/Plug-Ins/HAL. The HAL scans
this directory when it is initialized and loads all the plug-ins it
finds."
Then the doc would continue with:
"Packages found on this directory are plug-ins to be loaded. For
each plug-in package, HAL finds file Info.plist in Contents
directory. In this file, under the key <CFPlugInFactories>, HAL
finds UUIDs and names of factory functions. HAL loads the
corresponding shared library and calls the named factory functions.
These factory functions shall be implemented in the loaded library
and shall return an AudioHardwarePlugInRef. The 2 parameters to a
factory function are <description of parameters>. Factory functions
shall be exported from shared libraries using their names prefixed
with an underscore. AudioHardwarePlugInRef is a pointer to
AudioHardwarePlugInInterface containing pointers to 27 functions
that shall be implemented by the plug-in. These interface functions
are documented in <url> and will be called by HAL in order to
perform various actions or return some information."
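The doc section might then close with a minimal sketch like the one
below. (This is only an illustration I pieced together from the sample:
apart from the types, constants, and the CFPlugIn factory calling
convention from <CoreAudio/AudioHardwarePlugIn.h>, every name in it is
a hypothetical placeholder, and the interface table is deliberately
left partial.)

/*
 * In the plug-in package's Contents/Info.plist, CFPlugInFactories maps
 * a factory UUID to the exported factory function's name. The UUID and
 * all "My..." names below are hypothetical placeholders:
 *
 *     <key>CFPlugInFactories</key>
 *     <dict>
 *         <key>01234567-89AB-CDEF-0123-456789ABCDEF</key>
 *         <string>MyPlugInFactory</string>
 *     </dict>
 */
#include <CoreFoundation/CFPlugInCOM.h>
#include <CoreAudio/AudioHardwarePlugIn.h>

/* Hypothetical stubs; a real plug-in implements every function that
 * AudioHardwarePlugInInterface declares. */
static HRESULT MyQueryInterface(void *inSelf, REFIID inUUID, LPVOID *outInterface)
{
    *outInterface = inSelf;   /* sketch only: always hand back ourselves */
    return S_OK;
}

static ULONG MyAddRef(void *inSelf)  { return 1; }   /* no real ref counting */
static ULONG MyRelease(void *inSelf) { return 1; }

static OSStatus MyInitialize(AudioHardwarePlugInRef inSelf)
{
    /* A real plug-in would publish its devices to the HAL here. */
    return noErr;
}

/* The interface table the HAL calls through. Only the first few
 * function pointers are shown; a real plug-in fills in all of them. */
static AudioHardwarePlugInInterface gInterface = {
    NULL,               /* _reserved, per the CFPlugIn COM convention */
    MyQueryInterface,
    MyAddRef,
    MyRelease,
    MyInitialize,
    /* ... Teardown and the remaining functions go here ... */
};

static AudioHardwarePlugInInterface *gInterfacePtr = &gInterface;

/* The factory function named in Info.plist. Its two parameters are the
 * standard CFPlugIn factory arguments: an allocator and the UUID of the
 * type being requested. */
void *MyPlugInFactory(CFAllocatorRef inAllocator, CFUUIDRef inRequestedType)
{
    if (CFEqual(inRequestedType, kAudioHardwarePlugInTypeID))
        return &gInterfacePtr;   /* handed back as the AudioHardwarePlugInRef */
    return NULL;
}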
I appreciate this forum as a place to pose questions, but the
above information belongs in formal documentation, not a forum.
Perhaps this information is given somewhere I couldn't find,
in which case a reference to it should be given in the
CoreAudio docs. It took me a day's work to play Sherlock and infer
this info from the sample and the file system structure. BTW, the sample
focuses more on C++ inheritance and code-structuring capabilities
than on actually showing what has to be done when implementing a
driver, which of course is not your fault.
Yup, in a perfect world you'd be right. Alas, this is far from a
perfect world. Apple has finite resources, and we try to document as
much of our stuff as we can as thoroughly as we can. But this is
always subject to available resources and prioritization. The
number of developers who write user-land audio drivers is several
orders of magnitude smaller than the number who, say, write networking
code or use CFString or write Cocoa apps, so you can start to see why
things like this wait a while before they get a shake at documentation.
Believe me, this situation is not at all new. I long ago stopped
taking documentation requests personally. I just answer the questions
and let the rest sort itself out.
Are you writing the driver for this device, too? If so, there are
easier ways to integrate software volume with your driver than
writing a whole new user-land driver.
Unfortunately (or fortunately, depending on viewpoint) I'm not. We
are system integrators rather than software or hardware vendors.
We do write application and system software when required,
possible, and feasible in our projects. The audio hardware
interfaces I'm talking about here are from very well-known
vendors, including but not limited to such devices as the RME
Fireface, Lynx Aurora (with FW card), Apogee Rosetta (with FW
card), etc. We are in the process of building a high-quality,
multichannel, output-only audio/video path starting with a
computer and ending with digital audio and video monitors. The
system will be used as a model for other setups and
installations. Remote control of volume and other pertinent
properties is required in the target environments. I could, however,
use information on these easier ways to integrate volume control
with existing drivers as a hint (or an argument) when discussing
the subject with audio hardware vendors.
I don't mean to poke holes, but I can't imagine an audio
professional who would buy the gear you list wanting to mangle
their audio with a digital volume control. There are reasons why
these high-end hardware developers didn't build volume controls
into their gear. You are taking on a huge job with a steep
investment in implementation and maintenance just to enable the
volume slider in the menu bar. I'm not sure this makes a lot of
sense, but we're here to help =)
Well, the volume has to be controlled somewhere. Given a pure digital
path, you can only control it digitally. Messing with an unnecessary
D/A conversion, then an analogue volume control, and then A/D again
is not my notion of professionalism, rather the opposite. Aren't we
all in the _digital_ computer business?
Yeah, but a digital volume control will do things like magnify the noise
floor and cause other signal-destroying effects (at least from a
mastering point of view). This is especially important for input
signals. So, no, I wouldn't agree with you that being in the digital
computer business means we need a digital volume control. It just
isn't apropos for lots of apps.
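To put rough numbers on the noise-floor point: with a fixed output
word length, every dB of digital attenuation comes straight off the
signal-to-noise ratio, because the quantization noise floor stays put
while the signal moves down toward it. A back-of-the-envelope sketch,
assuming an ideal 24-bit output path:

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Ideal SNR of an N-bit quantizer is roughly 6.02*N + 1.76 dB. */
    const double bits     = 24.0;
    const double idealSNR = 6.02 * bits + 1.76;   /* ~146 dB */

    /* Digital attenuation scales the signal down while the output
     * quantization noise floor stays fixed, so the attenuation comes
     * directly off the SNR. */
    for (double attenDB = 0.0; attenDB <= 40.0; attenDB += 20.0) {
        double effSNR  = idealSNR - attenDB;
        double effBits = (effSNR - 1.76) / 6.02;
        printf("%5.1f dB of digital attenuation -> ~%6.1f dB SNR (~%4.1f bits)\n",
               attenDB, effSNR, effBits);
    }
    return 0;
}

So 40 dB of digital attenuation on a 24-bit path leaves you with the
effective resolution of roughly a 17-bit converter.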
I've spent time in a lot of studios over the years, many of which have
examples of the gear you list. I have never heard any of the engineers
wish for a digital volume control. They just reach over to their
console and turn down the monitor volumes (in the, yes, analog
domain). In fact, that's exactly what I do in my own studio.
From your answer I understand that I didn't manage to explain why
we need a functional OS X master volume control. It is _not_ the
slider control; nobody here cares about that. It is the remote IR
control that the system volume control reacts to.
It's all the same thing really.
I'm not free to describe the aim of my current project, but there are
plenty of environments and situations where people can't, won't, or
aren't able to use other means and where high-quality audio is still
required. One such circumstance, not relevant to my project, involves
handicapped people or people otherwise confined to one place. A blind
person can use a remote with ease but can't walk over, or use a
keyboard or other devices, as well.
Furthermore, the high-end devices we are talking about are currently
the only means to get low-jitter, multichannel, digital audio
out of the computer. I'm surprised that these vendors don't even
try to broaden their market by such a simple thing as implementing the
OS X master volume control. Apparently they are unable to imagine
anything other than a traditional recording studio environment. I
would say that the other markets are at least an order of magnitude
larger.
I know and talk to many of these vendors. I won't put words in their
mouths (you can ask them yourself about this issue), but I imagine
that they do at least a little market research. If there were demand
from their users for such features, I'm sure that most of them would
bend over backwards to add it. It's interesting to note that many of
the vendors you mention make a pro line of gear and a pro-sumer line
of gear, and it seems that it's the pro gear that lacks volume control
while the pro-sumer gear often has it.
One other thing: what you are proposing _will_ bleed performance from
the system. You are talking about adding non-trivial signal processing,
which means adding more buffers (and therefore memory pressure) to the
system in addition to the extra CPU time you'll spend doing the work.
For example, doing fully dezippered software volume for a 32x32
channel interface is some serious math. It will have the net effect of
reducing the overall track count that a DAW app can achieve, which is
probably the last thing a studio rat is going to want to have happen.
And the extra memory usage just takes away from what the DAW app can
use for its own purposes.
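For a sense of where the cycles go: dezippering means ramping the gain
smoothly across each buffer instead of jumping to the new value, so
every sample of every channel eats at least one multiply. A minimal
sketch of a linear-ramp version (the function name is hypothetical;
real implementations typically use vectorized math and fancier ramp
shapes):

#include <stddef.h>

/* Apply a volume change to one buffer of interleaved float samples,
 * ramping linearly from the previous gain to the new gain so the
 * step doesn't produce an audible "zipper" artifact. */
static void ApplyDezipperedGain(float *samples, size_t frameCount,
                                size_t channelCount,
                                float oldGain, float newGain)
{
    const float step = (newGain - oldGain) / (float)frameCount;
    float gain = oldGain;
    for (size_t frame = 0; frame < frameCount; ++frame) {
        for (size_t ch = 0; ch < channelCount; ++ch)
            samples[frame * channelCount + ch] *= gain;
        gain += step;
    }
}

/* For a 32-channel output stream at 96 kHz, that's roughly 3 million
 * multiplies per second for the gain alone, before any of the extra
 * buffering overhead. */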
All I'm trying to say here is that you are looking at a _lot_ of work
up front to make your user-land driver work, and then even more work
later to make sure it stays compatible with all the devices you want
it to work with, as well as future devices as they come out. I just
want you to be aware of what you have ahead of you before you dive in
is all. You have a tough row to hoe if you are going to take this on
seriously.
Since you say you are a systems integrator, I wonder if you couldn't
achieve what you wanted by building a solution with an already
existing tool like Jack? I'm sure that Stephane and the other Jack
developers will correct me if I get this wrong, but I'm pretty sure
that because Jack has a centralized mixing architecture, it could be
adapted to solving your problem.
--
Jeff Moore
Core Audio
Apple