Re: User mode driver (HAL plugin) vs Kernel mode
- Subject: Re: User mode driver (HAL plugin) vs Kernel mode
- From: Tuviah Snyder <email@hidden>
- Date: Mon, 28 Jan 2013 23:22:22 +0000
- Thread-topic: User mode driver (HAL plugin) vs Kernel mode
Sounds great. So far it looks like user mode drivers are the way to go.
Can anyone confirm that HAL Core Audio drivers *must* be 64-bit? I have a 32-bit application that writes to shared memory using 32-bit structures. I need to know whether I have to do extra engineering to align the structures so they will be readable by a 64-bit plugin.
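To make the question concrete, this is the sort of layout I'm planning: fixed-width fields, explicit packing, and compile-time checks so both sides agree on offsets. The struct and field names are invented for the example.

// Shared layout between the 32-bit writer and a (possibly) 64-bit reader.
// Only fixed-width types; no pointers, size_t or long, since those change
// size between 32-bit and 64-bit processes.
#include <cstddef>
#include <cstdint>

#pragma pack(push, 4)
struct SharedAudioBlock
{
    uint32_t magic;          // layout/version tag checked by the reader
    uint32_t writeIndex;     // ring-buffer write position, in frames
    uint32_t readIndex;      // ring-buffer read position, in frames
    uint32_t channelCount;
    uint32_t sampleRate;     // Hz
    uint32_t frameCapacity;  // frames available in the sample area
    float    samples[1];     // sample data follows the header
};
#pragma pack(pop)

// Fail the build if the layout drifts between the 32-bit and 64-bit builds.
static_assert(offsetof(SharedAudioBlock, samples) == 24,
              "shared memory layout differs between builds");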
best
Tuviah
On Jan 28, 2013, at 2:49 PM, Joel Reymont <email@hidden> wrote:
My driver supports multiple virtual devices, e.g. with 4 inputs and 2 outputs each.
Audacity can only record from a single device, so I have created aggregates of at least two devices; they show up as 8 inputs and 4 outputs.
That works fine, so it's not true that you cannot add user-mode drivers to an aggregate.
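If anyone wants to do the same thing from code, here is a rough sketch of programmatic aggregate creation. The device UIDs and names are placeholders, and newer SDKs expose AudioHardwareCreateAggregateDevice for this; treat it as a sketch rather than production code.

// Build a description dictionary for an aggregate of two devices, identified
// by their UIDs, and ask the HAL to publish it. Error handling is minimal.
#include <CoreAudio/CoreAudio.h>

static AudioObjectID CreateAggregateOfTwo()
{
    // One sub-device dictionary per device, keyed by the device UID.
    CFStringRef uids[2] = { CFSTR("MyVirtualDevice:0"), CFSTR("MyVirtualDevice:1") };
    CFMutableArrayRef subDevices =
        CFArrayCreateMutable(kCFAllocatorDefault, 2, &kCFTypeArrayCallBacks);
    for (int i = 0; i < 2; ++i)
    {
        CFStringRef key = CFSTR(kAudioSubDeviceUIDKey);
        CFTypeRef value = uids[i];
        CFDictionaryRef entry = CFDictionaryCreate(
            kCFAllocatorDefault, (const void**)&key, (const void**)&value, 1,
            &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
        CFArrayAppendValue(subDevices, entry);
        CFRelease(entry);
    }

    // The aggregate's own description: a name, a UID, and the sub-device list.
    CFMutableDictionaryRef desc = CFDictionaryCreateMutable(
        kCFAllocatorDefault, 0,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue(desc, CFSTR(kAudioAggregateDeviceNameKey), CFSTR("My Aggregate"));
    CFDictionarySetValue(desc, CFSTR(kAudioAggregateDeviceUIDKey), CFSTR("com.example.my-aggregate"));
    CFDictionarySetValue(desc, CFSTR(kAudioAggregateDeviceSubDeviceListKey), subDevices);
    CFRelease(subDevices);

    AudioObjectID aggregate = kAudioObjectUnknown;
    OSStatus err = AudioHardwareCreateAggregateDevice(desc, &aggregate);
    CFRelease(desc);
    return err == kAudioHardwareNoError ? aggregate : kAudioObjectUnknown;
}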
--
http://www.linkedin.com/in/joelreymont
On Monday, January 28, 2013 at 10:40 PM, Dan wrote:
The problem for me with user-mode drivers, as far as I know, is that you can't add them to an aggregate. This is also the reason Jack OS X is unsuitable for me.
I want to aggregate my lower-latency devices and route to my higher-latency devices via a kernel-mode reflector driver. I need to be able to monitor external MIDI instruments and soft synths at low latency, and including a high-latency device in an aggregate slows down every other device.
I wish aggregate devices gave the option to set offsets for each device manually; the offset properties for aggregates seem to be read-only, even when creating the aggregate programmatically. That way I could monitor faster audio interfaces at low latency and then manually add a buffer delay when recording. Other than that, aggregate devices are stable on my system, but they're just too inflexible.
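By "read-only" I mean the HAL reports the property as not settable. A check along these lines will show it; kAudioDevicePropertySafetyOffset below is just a stand-in for whichever offset/latency selector you actually care about.

// Ask the HAL whether a property on a device (or on one of an aggregate's
// sub-devices) can be set. The selector here is only an example.
#include <CoreAudio/CoreAudio.h>

static bool IsOffsetSettable(AudioObjectID device)
{
    AudioObjectPropertyAddress addr = {
        kAudioDevicePropertySafetyOffset,
        kAudioObjectPropertyScopeInput,
        kAudioObjectPropertyElementMaster
    };
    Boolean settable = false;
    OSStatus err = AudioObjectIsPropertySettable(device, &addr, &settable);
    return err == kAudioHardwareNoError && settable;
}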
Danno
On Jan 28, 2013, at 7:58 PM, Tuviah Snyder wrote:
Thanks for all the feedback so far. I'm just using C++, with Boost for threading and for reading from shared memory.
It would also be useful to know which versions of OS X support HAL audio plugins.
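For context, the reader side of the shared memory is roughly this with Boost.Interprocess; the segment name and the block type are placeholders for the example.

// Open the shared memory segment created by the 32-bit app and map it read-only.
#include <boost/interprocess/shared_memory_object.hpp>
#include <boost/interprocess/mapped_region.hpp>
#include <cstdio>

struct SharedAudioBlock;  // the fixed-layout struct shared with the writer

int main()
{
    using namespace boost::interprocess;

    // The writer creates the segment; the plugin only needs to read it.
    shared_memory_object shm(open_only, "com.example.audio-shm", read_only);
    mapped_region region(shm, read_only);

    const SharedAudioBlock* block =
        static_cast<const SharedAudioBlock*>(region.get_address());
    std::printf("mapped %zu bytes at %p\n", region.get_size(), (void*)block);
    return 0;
}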
best
Tuviah
On Jan 28, 2013, at 11:53 AM, Joel Reymont <email@hidden> wrote:
On Mon, Jan 28, 2013 at 7:47 PM, Paul Davis <email@hidden> wrote:
I believe that Apple employees have specifically noted that Objective-C and GCD are inappropriate tools for this purpose.
Works for me so far.
Then again, all the CoreAudio sampling callbacks are doing is either
reading from a ring buffer or writing to one. The output callback is
also making a dispatch_async call.
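The shape of it is roughly this; the Engine type, queue, and ring buffer are placeholders, and the buffer is assumed to be a lock-free single-producer/single-consumer one.

// Output IOProc: copy from a preloaded ring buffer on the real-time thread,
// then hand the refill work to a normal-priority dispatch queue.
#include <CoreAudio/CoreAudio.h>
#include <dispatch/dispatch.h>
#include <cstring>

struct Engine
{
    // Stand-ins for a lock-free SPSC ring buffer; the real code is elided.
    bool pop(float*, UInt32) { return false; }  // consumer side, RT-safe
    void refill() {}                            // producer side, off the RT thread
    dispatch_queue_t refillQueue;
};

static OSStatus OutputIOProc(AudioObjectID /*device*/,
                             const AudioTimeStamp* /*now*/,
                             const AudioBufferList* /*inputData*/,
                             const AudioTimeStamp* /*inputTime*/,
                             AudioBufferList* outputData,
                             const AudioTimeStamp* /*outputTime*/,
                             void* clientData)
{
    Engine* engine = static_cast<Engine*>(clientData);
    AudioBuffer& buf = outputData->mBuffers[0];
    UInt32 frames = buf.mDataByteSize / sizeof(float);

    // Real-time thread: nothing here but the ring-buffer read.
    if (!engine->pop(static_cast<float*>(buf.mData), frames))
        std::memset(buf.mData, 0, buf.mDataByteSize);  // underrun -> silence

    // Non-blocking hand-off; the actual refill runs outside the IOProc.
    dispatch_async(engine->refillQueue, ^{ engine->refill(); });

    return kAudioHardwareNoError;
}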
I know about the party line and actually had the whole thing written
in C++ at one time.
--------------------------------------------------------------------------
for hire: mac osx device driver ninja. kernel, usb and coreaudio drivers
---------------------+------------+---------------------------------------
http://wagerlabs.com | @wagerlabs | http://www.linkedin.com/in/joelreymont
---------------------+------------+---------------------------------------