Re: HAL user-land audio driver
- Subject: Re: HAL user-land audio driver
- From: "Mikael Hakman" <email@hidden>
- Date: Mon, 31 Mar 2008 13:16:31 +0200
- Organization: Datakonsulten AB
On March 28, 2008 10:45 PM, Jeff Moore wrote:
On Mar 28, 2008, at 5:59 AM, Mikael Hakman wrote:
On March 26, 2008 7:39 PM, Jeff Moore wrote:
On Mar 26, 2008, at 5:25 AM, Mikael Hakman wrote:
Are the basic (and laborious) shared memory and/or IPC and/or
dividing the plug-in into multiple clients/shared server architecture
the only alternatives?
You are definitely going to be using shared memory and/or IPC, but
whether you use a client/server or peer-to-peer styled protocol
depends totally on what you are trying to accomplish.
I got global state and global property change notifications working now.
I'm using POSIX named shared memory for global state, POSIX named
semaphore for serializing access to this shared memory, and Core
Foundation Distributed Notifications for notifying all plug-in instances
of the changes. For example, when using my plug-in driver as default
audio output and having both Sound Preferences and AMS panels open, and
also volume control visible on the menu bar, I can see volume changes in
all 3 of them when moving anyone of the 3 sliders. Many thanks for your
help so far, Jeff.
Awesome! It's good to hear that you got things going so quickly.
In general, implementing such multi-process machinery takes a lot more
time. In this particular case I was lucky. First, my code was already
structured for multi-thread serialization, so it was easy to extend it to
multi-process serialization. Second, all my global state variables were already in
static memory, so it was simple to gather them in a struct and map it to
shared memory. Third, I have previous non-negligible experience in UNIX
system programming which made implementation of shared memory and semaphore
a breeze. Last but not least, Core Foundation Distributed Notification
Centre and publish/subscribe model relieved me from keeping track of all
driver instances and processes and from implementing such subscribe/publish
infrastructure on my own. Using CF, each instance subscribes to a
notification event (one call), and any instance may broadcast such an event
to all others, including itself (one call). The AudioObjectID and
AudioObjectPropertyAddress of the changed property are shipped with the
event and arrive at the appropriate callback(s).
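The one-call subscribe and one-call broadcast described above look roughly like this. This is a macOS-only sketch of the CFNotificationCenter distributed-notification pattern; the notification name and function names are illustrative, not the actual driver's.

```c
/* Sketch of the publish/subscribe pattern via the CF distributed
 * notification center (macOS only; CoreFoundation required).
 * The notification name below is hypothetical. */
#include <CoreFoundation/CoreFoundation.h>

#define kMyDriverChangedNotification CFSTR("com.example.driver.changed")

/* Invoked in every subscribed process when a change is broadcast.
 * The changed AudioObjectID / AudioObjectPropertyAddress can be
 * carried in the (property-list) userInfo dictionary, or read back
 * from shared memory. */
static void ChangeCallback(CFNotificationCenterRef center, void *observer,
                           CFNotificationName name, const void *object,
                           CFDictionaryRef userInfo)
{
    /* Re-read the shared global state and update this instance. */
}

/* Subscribe: one call. */
static void Subscribe(void)
{
    CFNotificationCenterAddObserver(
        CFNotificationCenterGetDistributedCenter(),
        NULL,                   /* observer token */
        ChangeCallback,
        kMyDriverChangedNotification,
        NULL,                   /* any sender */
        CFNotificationSuspensionBehaviorDeliverImmediately);
}

/* Broadcast to all subscribed processes, including this one: one call. */
static void Broadcast(void)
{
    CFNotificationCenterPostNotification(
        CFNotificationCenterGetDistributedCenter(),
        kMyDriverChangedNotification,
        NULL, NULL, true /* deliver immediately */);
}
```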
On to the next question. When using the Apple IR remote to control the
output volume level, the system increases/decreases the level for each
press of the +/- buttons on the remote. What algorithm does the system use
to compute how much to increase/decrease the volume level per press,
relative to the decibel range reported by the device, and perhaps also to
the scalar-to-decibel conversions the device provides? I'm trying to
establish a mapping such that the change applied per press agrees with
what is observed for other devices.
TIA
Coreaudio-api mailing list (email@hidden)