Interesting. In my opinion, PortAudio looks like a lot more work! It certainly represents a lot more code, even if you're compiling someone else's code.
CAPlayThrough allows you to connect two different interfaces, even if they're running on different sample clocks. Even if you don't need multiple interfaces, USB audio is designed such that a single device still presents its input and output separately, so you have to treat it the same way.
After reviewing CAPlayThrough, I was able to get a working application by ripping out all of the sample rate conversion code and just using the input and output of a single FireWire interface, because I know it is presented as a single clock reference. This means that my modified CAPlayThrough has absolutely no callbacks of any kind (because I am hosting my own AU). If you did something similar, your non-AU DSP code could supply the only callbacks. The obvious drawback is that my code won't work with USB audio interfaces, and if I ever want to listen on a different device than the input, then I'll need all that code I dropped.
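Just to make the shape of that concrete, a stripped-down graph like mine ends up looking roughly like this. This is an untested sketch, not the actual CAPlayThrough code: kAudioUnitSubType_Delay stands in for whichever AU you actually host, error checking is omitted, and the device lookup is left as a comment.

  #include <AudioToolbox/AudioToolbox.h>

  AUGraph graph;
  AUNode ioNode, fxNode;
  AudioUnit ioUnit;

  AudioComponentDescription io = { kAudioUnitType_Output,
      kAudioUnitSubType_HALOutput, kAudioUnitManufacturer_Apple, 0, 0 };
  AudioComponentDescription fx = { kAudioUnitType_Effect,
      kAudioUnitSubType_Delay, kAudioUnitManufacturer_Apple, 0, 0 };

  NewAUGraph(&graph);
  AUGraphAddNode(graph, &io, &ioNode);
  AUGraphAddNode(graph, &fx, &fxNode);          // stand-in for the hosted AU
  AUGraphOpen(graph);
  AUGraphNodeInfo(graph, ioNode, NULL, &ioUnit);

  // Turn on input (element 1); output (element 0) is enabled by default.
  UInt32 one = 1;
  AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_EnableIO,
                       kAudioUnitScope_Input, 1, &one, sizeof(one));

  // Point the AUHAL at the one interface that provides both input and
  // output on a shared clock (set its AudioDeviceID here):
  // AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_CurrentDevice,
  //                      kAudioUnitScope_Global, 0, &device, sizeof(device));

  // Device input -> hosted AU -> device output, with no callbacks anywhere.
  AUGraphConnectNodeInput(graph, ioNode, 1, fxNode, 0);
  AUGraphConnectNodeInput(graph, fxNode, 0, ioNode, 0);

  AUGraphInitialize(graph);
  AUGraphStart(graph);

That direct connection from element 1 back to element 0 only works because input and output belong to the same AUHAL on the same clock, which is exactly the assumption that breaks with USB.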
If you've got a hardware DSP, then your audio data is going to need to make a round trip to external hardware, and that's going to involve additional latency. In other words, you're already going to a lot of trouble, it seems.
Trust me, once you understand what's going on in CAPlayThrough, you don't necessarily need to implement everything. An AUGraph can easily run without any callbacks, and in your case you'd just need callbacks to get data to and from your DSP hardware.
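For the DSP round trip, the only callback you would need is something along these lines. Again a sketch: DSPContext, SendToDSP and ReceiveFromDSP are placeholders for whatever transport API your hardware actually uses, and gContext is a context you would set up elsewhere.

  typedef struct {
      AudioUnit ioUnit;     // the AUHAL, so we can pull live input from it
      void     *dspHandle;  // placeholder for your DSP transport handle
  } DSPContext;

  static OSStatus DSPRenderCallback(void *inRefCon,
                                    AudioUnitRenderActionFlags *ioActionFlags,
                                    const AudioTimeStamp *inTimeStamp,
                                    UInt32 inBusNumber,
                                    UInt32 inNumberFrames,
                                    AudioBufferList *ioData)
  {
      DSPContext *ctx = (DSPContext *)inRefCon;

      // Pull the current input block from the AUHAL's input element (1)...
      OSStatus err = AudioUnitRender(ctx->ioUnit, ioActionFlags, inTimeStamp,
                                     1, inNumberFrames, ioData);
      if (err) return err;

      // ...ship it to the external DSP and collect the processed samples.
      // (Placeholder calls -- substitute your hardware's API.)
      SendToDSP(ctx->dspHandle, ioData, inNumberFrames);
      ReceiveFromDSP(ctx->dspHandle, ioData, inNumberFrames);
      return noErr;
  }

  // Feed the callback's result into the AUHAL's output element:
  AURenderCallbackStruct cb = { DSPRenderCallback, &gContext };
  AUGraphSetNodeInputCallback(graph, ioNode, 0, &cb);

That one function is the entire callback surface; everything else stays inside the graph.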
Brian Willoughby
Sound Consulting
On Sep 18, 2008, at 17:56, Taylor Holliday wrote:
Thanks Brian,
So since the app will eventually do the processing with a hardware DSP, it's not necessary to run other audio units within the app. The CAPlayThrough sample seems like a lot of effort just to get play-through working :-\. I just stumbled on PortAudio; what do you all think of that?
- Taylor
On Thu, Sep 18, 2008 at 4:56 PM, Brian Willoughby
<email@hidden> wrote:
Taylor,
I would suggest the AUGraph API. It would allow you to mix existing AudioUnits from other developers alongside custom code in your application. It's probably easier to get your code running without also learning how to develop an AudioUnit at the same time. Plus, the ability to use existing AudioUnits means that you don't have to create everything.
Take a look at the sample code, particularly the CAPlayThrough sample. Once you understand callbacks and setting up an AUGraph, it would be pretty easy to insert your own processing in one or more places in the graph, along with some standard AUs if you need them.
Brian Willoughby
Sound Consulting
On Sep 18, 2008, at 16:38, Taylor Holliday wrote:
I'm new to core audio (and audio programming in general) and I'm writing a real-time audio processing app. After skimming a lot of the core audio documentation, I'm not sure what's the easiest way to get audio in and out of my app. Should I make my app into an audio unit (I'd rather it be stand-alone)? Should I interface directly with the HAL? Is there another lib out there that would make this easier (and perhaps would be cross platform)? Any help would be greatly appreciated!