Re: User mode driver (HAL plugin) vs Kernel mode
- Subject: Re: User mode driver (HAL plugin) vs Kernel mode
- From: Dan <email@hidden>
- Date: Mon, 28 Jan 2013 19:36:16 +0000
+1, I'm also interested in this. I'm currently exploring every avenue at once, flipping between possible solutions.
I haven't been able to get the old PhantomAudio driver to compile yet, and the AudioReflector driver is glitchy as hell for me. In the end, I've taken to modifying Soundflower (minus all the volume/user controls, but with the PCM blitter code from the AudioReflector driver added in). My modified Soundflower driver is as streamlined as it can get.
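For anyone curious, the heart of a PCM blitter is just a clip-and-convert loop. A minimal sketch of the Float32-to-SInt16 case, with illustrative names rather than the actual AudioReflector symbols:

#include <stdint.h>

static void ClipFloat32ToSInt16(const float *src, int16_t *dst, unsigned count)
{
    for (unsigned i = 0; i < count; i++) {
        float s = src[i];
        /* Clip to the legal [-1.0, 1.0] PCM range before converting. */
        if (s > 1.0f)  s = 1.0f;
        if (s < -1.0f) s = -1.0f;
        dst[i] = (int16_t)(s * 32767.0f);
    }
}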
My requirements are slightly different. I'm trying to send 4 channels of reflected Soundflower input directly to channels 9-10 (SPDIF) and 11-12 of my Prism Orpheus FireWire driver, with all the other channels on the Orpheus being controlled via ADAT. I'm trying to avoid the aggregate route because an aggregate with the Orpheus included has too much latency. I've tried sending the Soundflower inputs through AU Lab, but I get 2000+ samples of added latency!
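In case it helps anyone following along, this is roughly how I'd pin the four client channels to device channels 9-12 with AUHAL's channel map property. The 12-channel output count and the scope/element pairing are assumptions from my setup, so verify them against TN2091 and your own AUHAL configuration:

#include <AudioUnit/AudioUnit.h>

enum { kDeviceOutChannels = 12 };  /* assumed Orpheus output channel count */

static OSStatus MapClientChannelsTo9Through12(AudioUnit outputUnit)
{
    /* One entry per device output channel; -1 leaves that channel silent. */
    SInt32 channelMap[kDeviceOutChannels];
    for (UInt32 i = 0; i < kDeviceOutChannels; i++)
        channelMap[i] = -1;

    /* Client channels 0-3 feed device channels 9-12 (zero-based 8-11). */
    channelMap[8]  = 0;
    channelMap[9]  = 1;
    channelMap[10] = 2;
    channelMap[11] = 3;

    return AudioUnitSetProperty(outputUnit,
                                kAudioOutputUnitProperty_ChannelMap,
                                kAudioUnitScope_Output,
                                0,  /* output element */
                                channelMap,
                                sizeof(channelMap));
}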
I'm currently working through CAPlayThrough, which works but has the same amount of latency; I'm just about to lower the buffers. CAPlayThrough uses two AUHAL audio units to pass audio from an input device to an output device, but it also has a hideous sample rate converter that I don't need (my whole system is hardware clocked), so I'll be replacing the SRC with a mixer audio unit for more consistent and hopefully lower latency. I still don't know if it's the right route, though.
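Lowering the buffers themselves is just a property set on the device. Something like this, assuming you already have the AudioDeviceID in hand (the HAL clamps the request to the device's legal range):

#include <CoreAudio/CoreAudio.h>

static OSStatus SetIOBufferFrames(AudioObjectID device, UInt32 frames)
{
    AudioObjectPropertyAddress addr = {
        kAudioDevicePropertyBufferFrameSize,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };
    /* Query kAudioDevicePropertyBufferFrameSizeRange first if you need
       to know what the device will actually accept. */
    return AudioObjectSetPropertyData(device, &addr, 0, NULL,
                                      sizeof(frames), &frames);
}

/* e.g. SetIOBufferFrames(orpheusID, 64);  roughly 1.3 ms at 48 kHz */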
My first instinct was to try to access the Orpheus' mix buffer and clock directly from the kernel-based reflector driver, performance and stability being my main concerns. But after trawling through loads of posts this past week, I'm not sure there's any performance advantage to doing it that way.
Perhaps someone more experienced could offer some advice?
Danno
On Jan 28, 2013, at 6:27 PM, Tuviah Snyder wrote:
> Hello,
>
> A few months ago I asked the list about writing a user mode audio driver. At the time, the available sample code for such a thing was minimal to non-existent, so list members suggested writing a kernel mode audio driver.
>
> Recently, Apple has posted simple and complex Core Audio user-space driver sample code and made several updates. I'm wondering whether I should write a user mode audio driver based on NullAudio, and what the limitations are compared to a kernel mode driver. My driver simply needs to act as a virtual audio device, reading audio from shared memory that my application writes to.
>
> Currently I'm sending audio to AudioReflector, which acts as both an output device for my application and an input device for the other application. But if there's a user mode way that's equally good or better, that would be more ideal.
>
> best
> Tuviah