advice to associate custom signal processing with a device
- Subject: advice to associate custom signal processing with a device
- From: Iain McCowan <email@hidden>
- Date: Mon, 2 Nov 2009 11:38:31 +1000
Hello,
I am new to this list but have been programming with Core Audio for 2 years now.
I am developing a multi-channel USB microphone device.
I want to do custom signal processing on the input streams for speech enhancement. To simplify my hardware design, I have chosen to do this on the host rather than on an embedded processor.
So far I have achieved this within an application based upon the CAPlayThrough example, placing my real-time frame processing function inside the input callback proc.
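For reference, here is the shape of my current approach: a minimal sketch of the input callback, assuming the AUHAL setup from CAPlayThrough (MyPlayThroughState and ProcessFrames are my own names, the latter standing in for the enhancement routine):

#include <AudioUnit/AudioUnit.h>
#include <AudioToolbox/AudioToolbox.h>

typedef struct {
    AudioUnit        inputUnit;   // AUHAL wrapping the USB device
    AudioBufferList *bufferList;  // pre-allocated to match the stream format
} MyPlayThroughState;

// Hypothetical per-frame enhancement routine (stands in for my DSP).
static void ProcessFrames(AudioBufferList *abl, UInt32 inNumberFrames);

// AURenderCallback installed on the AUHAL via
// kAudioOutputUnitProperty_SetInputCallback, as in CAPlayThrough.
static OSStatus InputProc(void *inRefCon,
                          AudioUnitRenderActionFlags *ioActionFlags,
                          const AudioTimeStamp *inTimeStamp,
                          UInt32 inBusNumber,
                          UInt32 inNumberFrames,
                          AudioBufferList *ioData)
{
    MyPlayThroughState *state = (MyPlayThroughState *)inRefCon;

    // Pull the captured frames out of the AUHAL (input is element 1).
    OSStatus err = AudioUnitRender(state->inputUnit, ioActionFlags,
                                   inTimeStamp, 1, inNumberFrames,
                                   state->bufferList);
    if (err) return err;

    // Apply the speech enhancement in place before handing the audio
    // on to the output side of the play-through chain.
    ProcessFrames(state->bufferList, inNumberFrames);
    return noErr;
}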
I would, however, like to achieve this across all applications, so that whenever my device is selected for input, the speech enhancement is always applied (i.e., as far as the user is aware, the signal processing could effectively be happening on the device itself).
From the documentation, it seems that writing an audio plug-in kext is what I should be doing.
I have attempted to extend the SampleUSBAudioPlugin kext example to achieve this; however, I need FFT and vector math routines. These don't seem to be available in the Kernel or IOKit frameworks (I see only a few limited vector math operations), and I am not sure it is possible to build or run a kernel extension linked against other frameworks or libraries that provide FFT and vector routines, such as the Intel IPP library or the Accelerate framework (vDSP).
I imagine this must be possible, but I am struggling to work out the build settings to make it happen without a build-time or load-time error complaining about the external framework.
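For concreteness, this is the kind of user-space code I need to run per frame; a minimal sketch of a forward real FFT with vDSP (buffer handling simplified, and in real-time code the FFT setup and buffers would of course be allocated once up front, not per call):

#include <Accelerate/Accelerate.h>  // user-space only; not linkable from a kext
#include <stdlib.h>

// Forward real FFT of n = 2^log2n samples, producing a split-complex spectrum.
void ForwardFFT(float *samples, vDSP_Length log2n)
{
    vDSP_Length n = 1UL << log2n;
    FFTSetup setup = vDSP_create_fftsetup(log2n, kFFTRadix2);

    DSPSplitComplex split;
    split.realp = (float *)malloc((n / 2) * sizeof(float));
    split.imagp = (float *)malloc((n / 2) * sizeof(float));

    // Pack the real signal into split-complex form, then transform in place.
    vDSP_ctoz((const DSPComplex *)samples, 2, &split, 1, n / 2);
    vDSP_fft_zrip(setup, &split, 1, log2n, kFFTDirection_Forward);

    // ... spectral processing on split.realp / split.imagp here ...

    free(split.realp);
    free(split.imagp);
    vDSP_destroy_fftsetup(setup);
}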
I suspect my real problem is that this code should be running in user space rather than in the kernel, but in that case it seems I can only apply it within a particular application under my control.
Can anyone advise on the best way to achieve my goal of associating my signal processing with a particular hardware device, transparently across all applications?
I guess the two possibilities would be to
1) associate a user-space Core Audio plug-in with the device on connection, but I can't see from the API and documentation how this would be possible (I have looked at the new SampleUSBAudioOverrideDriver as a means to specify an IOAudioEngineCoreAudioPlugIn, but from what I can tell this is just a bundle to extend the device control interface, not a means of implementing processing?), or
2) link an FFT and vector math library, such as the Accelerate framework or Intel IPP, into the SampleUSBAudioPlugin project. (I guess I could implement a complete audio device driver, but this seems unnecessary, and I would hit the same build issues.)
Can anyone offer directions as to how to achieve either of the above, or else offer advice on an alternative means to achieve my goal?
As an alternative, I have considered having an application that takes input from the device, does the processing and routes audio output to a phantom audio device (such as SoundFlower does), and then have other applications use this, but this seems messy and I would prefer a neater solution if one exists.
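If I did go that route, I assume the helper application would also need to steer other applications to the phantom device, e.g. by making it the default input; a minimal sketch, assuming I already have the phantom device's AudioDeviceID:

#include <CoreAudio/CoreAudio.h>

// Make the phantom (pass-through) device the system default input,
// so other applications pick up the processed stream by default.
static OSStatus SetDefaultInputDevice(AudioDeviceID deviceID)
{
    AudioObjectPropertyAddress addr = {
        kAudioHardwarePropertyDefaultInputDevice,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };
    return AudioObjectSetPropertyData(kAudioObjectSystemObject, &addr,
                                      0, NULL,
                                      sizeof(deviceID), &deviceID);
}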
any advice appreciated,
thanks,
Iain.