Re: Choosing path...
- Subject: Re: Choosing path...
- From: Iain McCowan <email@hidden>
- Date: Sat, 2 Oct 2010 21:41:23 +1000
Hello Robert,
Just thought I'd respond to your post to say that I can relate to what you want, as I've been trying to achieve something very similar myself, just coming from the other direction as a microphone: using a standard system driver for my own raw device, doing my own custom signal processing, and then letting the user access this as an enhanced input device in any application, blissfully unaware that there is an extra level between them and the device.
Anyway, I've posted about this recently and had some responses, so here is what I've found from my experience:
1) When I got into Core Audio, I hoped there might be some mechanism to simply associate a user-level Audio Unit with a hardware device - that seemed like a nice, clean way to achieve what we want, but it doesn't appear to be possible. I also got hopeful when I saw SampleHardwarePlugin, but responses to my posts on this list discouraged me from that path, as it involves more work than I'd anticipated.
2) If I understand your situation (you want to insert your processing in front of an output device, which is not necessarily always the same physical device), I think the Soundflower / AudioReflector approach is perhaps the only real solution, though as you point out it can get messy with multiple devices visible. The approach here would be:
a) Rebuild Soundflower with your own name, e.g. RobsDevice.
b) Write an application, e.g. following the CAPlayThrough example source, that launches at login and runs as a background process, inserting your processing between RobsDevice as input and the default output device.
c) Tell the user to output to RobsDevice if they want your effects added.
This achieves what you want in theory, but it's a little messy for a commercial product. It doesn't stop the user doing the wrong thing, and I've also found that Soundflower tends to accumulate latency / clock drift over time - though maybe that's something I'm doing wrong, and there may be a way to detect and avoid it.
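To make step (b) and the drift point concrete, here's a rough sketch in plain C of the piping idea: the input callback writes RobsDevice's samples into a ring buffer, the output callback drains it, and a fill level that trends steadily up or down betrays clock drift between the two devices. This is illustrative only - it isn't Core Audio code, and all the names (RingBuffer, rb_write, etc.) are made up:

```c
#include <stddef.h>

/* Illustrative ring buffer: the input callback writes RobsDevice's
 * samples, the output callback reads them. If the two device clocks
 * drift, the fill level trends up or down over time. */
typedef struct {
    float  *data;
    size_t  capacity;   /* in samples */
    size_t  write_pos;
    size_t  read_pos;
    size_t  fill;       /* samples currently buffered */
} RingBuffer;

void rb_init(RingBuffer *rb, float *storage, size_t capacity) {
    rb->data = storage;
    rb->capacity = capacity;
    rb->write_pos = rb->read_pos = rb->fill = 0;
}

/* Returns the number of samples actually written (drops on overflow). */
size_t rb_write(RingBuffer *rb, const float *src, size_t n) {
    size_t written = 0;
    while (written < n && rb->fill < rb->capacity) {
        rb->data[rb->write_pos] = src[written++];
        rb->write_pos = (rb->write_pos + 1) % rb->capacity;
        rb->fill++;
    }
    return written;
}

/* Returns the number of real samples delivered; zero-fills on underrun. */
size_t rb_read(RingBuffer *rb, float *dst, size_t n) {
    size_t got = 0;
    while (got < n && rb->fill > 0) {
        dst[got++] = rb->data[rb->read_pos];
        rb->read_pos = (rb->read_pos + 1) % rb->capacity;
        rb->fill--;
    }
    size_t real = got;
    while (got < n) dst[got++] = 0.0f;  /* underrun: output silence */
    return real;
}

/* A play-through process would watch this between callbacks: a steady
 * rise or fall indicates clock drift between input and output. */
size_t rb_fill_level(const RingBuffer *rb) { return rb->fill; }
```

In practice you'd resample (or varispeed) when the fill level drifts too far from its target, rather than letting it overflow or underrun.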
3) If the output device were a particular USB device, then one solution seems to be the SampleUSBAudioPlugin kext example:
This lets you essentially patch a custom signal-processing function into the otherwise standard system driver. My problem with this has been the restriction to kernel-space programming it imposes, meaning you won't have the nice user-level DSP libraries you probably want.
But if you want to generically use any output device, then this approach doesn't seem possible anyway.
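For what it's worth, the kernel-space restriction means any processing you patch in has to be completely self-contained - no user-level frameworks or third-party DSP libraries. Something like this little in-place soft clipper is about the level you're limited to (illustrative only; the actual plugin's processing hook has its own signature):

```c
/* In kernel space you can't link against user-level DSP libraries, so
 * a processing hook has to be self-contained, like this in-place cubic
 * soft clipper over float samples. Illustrative only - not the real
 * SampleUSBAudioPlugin API. */
static void process_samples(float *samples, unsigned long count) {
    for (unsigned long i = 0; i < count; i++) {
        float x = samples[i];
        if (x > 1.0f)        samples[i] = 2.0f / 3.0f;   /* hard limit */
        else if (x < -1.0f)  samples[i] = -2.0f / 3.0f;
        else                 samples[i] = x - (x * x * x) / 3.0f;
    }
}
```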
Good luck - would be interested to see how you solve this,
Iain.
On Sat, Oct 2, 2010 at 2:50 PM, Robert Bielik
<email@hidden> wrote:
tahome izwah skrev 2010-10-01 20:13:
Have a look at the CAPlayThrough example project that comes with
Xcode, this does exactly what you need.
Thnx, but not exactly. If I'm not mistaken, CAPlayThrough just opens an input and an output device, then
pipes audio from in to out. In conjunction with a Soundflower device (as default output) it would do the
trick, but I'd rather not expose an input device.
From reading posts on this list regarding the SampleHardwarePlugin example in the SDK, it seems that it could
somehow "wrap" an output device (without using any varispeed stuff), which would be much closer to what I want to do.
Would love some input on that?
TIA
/R
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Coreaudio-api mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden