Re: easiest way to process realtime audio
- Subject: Re: easiest way to process realtime audio
- From: Brian Willoughby <email@hidden>
- Date: Mon, 22 Sep 2008 12:24:40 -0700
The callbacks I alluded to would be to the audio units before and
after your processing. If you are going direct from the input device
to the output device, then the CAPlayThrough sample code should show
you exactly how the callbacks are used. In the sample code, they are
inserting an AudioConverter for SRC (sample rate conversion), but you could add your own
processing to the audio data stream. If, as I suggest, you add
existing AudioUnits to the graph - e.g. limiter or EQ - then you
should be able to use AudioUnitSetProperty() to connect to the input
or output of an AU as appropriate for the position of your processing
code in the overall graph. In other words, your code would not be an
actual node in the graph, but would manually connect between two
nodes. Note that I have not tried this yet, so it might be a little
tricky. I'm not sure whether AUGraph might check for a 'complete'
graph, or if it would simply be happy so long as the chain of data is
not broken at run time.
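For what it's worth, here is a rough sketch of what I have in mind (again,
untested, and the names gUpstreamUnit and MyProcess are just placeholders for
your own code): give the downstream AU an input render callback via
AudioUnitSetProperty(), pull audio from the upstream AU with AudioUnitRender()
inside that callback, and process the buffer in place before returning.

#include <AudioToolbox/AudioToolbox.h>

static AudioUnit gUpstreamUnit;  // placeholder: the AU feeding your processing

// Render callback attached to the input of the downstream AU.
static OSStatus MyRenderCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    // Pull audio from the upstream AU into ioData...
    OSStatus err = AudioUnitRender(gUpstreamUnit, ioActionFlags, inTimeStamp,
                                   0 /* upstream output bus */,
                                   inNumberFrames, ioData);
    if (err != noErr) return err;

    // ...then run your own DSP in place before the downstream AU consumes it.
    // MyProcess(ioData, inNumberFrames);  // placeholder for your code
    return noErr;
}

// Hook the callback onto input bus 0 of the downstream AU.
static OSStatus InsertProcessing(AudioUnit downstreamUnit)
{
    AURenderCallbackStruct cb = { MyRenderCallback, NULL };
    return AudioUnitSetProperty(downstreamUnit,
                                kAudioUnitProperty_SetRenderCallback,
                                kAudioUnitScope_Input, 0,
                                &cb, sizeof(cb));
}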
P.S. I am also working on a project which will eventually be
dedicated DSP, or possibly even analog processing, but I am using
AUGraph to simulate the design in software before I begin working
with the dedicated hardware. The only difference is that I started
by writing an AU, and now use AUGraph to insert my AU between live
input and output. Had I not already written the AU, I would have
simply embedded the DSP code in my AUGraph application and hung it
off of the SRC stage of CAPlayThrough, and I might have even more
specifics for you...
Brian Willoughby
Sound Consulting
On Sep 18, 2008, at 21:31, Taylor Holliday wrote:
Well, the DSP is a stand-alone unit (it includes analog audio I/O)...
I'm thinking one of those development boards from freescale... so I
wouldn't be sending any audio between the DSP and computer. This
software implementation is a prototype so I can avoid buying the DSP
hardware right now and have a quicker proof-of-concept for my app.
I'm implementing something along the lines of the Nord Modular G2
synthesizer if you're curious. But I digress...
After looking at the AUGraph API in more detail, I think it might
save me lots of time. What were the callbacks you alluded to? Can I
make some lightweight nodes that don't have to be full blown audio
units (i.e. not separate plug-ins) by specifying a processing callback?
thanks!
- Taylor
On Thu, Sep 18, 2008 at 6:05 PM, Brian Willoughby
<email@hidden> wrote:
Interesting. In my opinion, PortAudio looks like a lot more work!
It certainly represents a lot more code, even if you're compiling
someone else's code.
CAPlayThrough allows you to connect two different interfaces, even if
they're running on different sample clocks. Even if you don't need
multiple interfaces, the design of USB audio requires that you treat
input and output as separate clock domains on a single device.
After reviewing CAPlayThrough, I was able to get a working
application by ripping out all of the sample rate conversion code and
just using the input and output of a single FireWire interface
because I know it is presented as a single clock reference. This
means that my modified CAPlayThrough has absolutely no callbacks of
any kind (because I am hosting my own AU). If you did something
similar, you could have your non-AU DSP code supply the only callbacks.
The obvious drawback to this is that my code won't work with USB audio
interfaces, and if I ever want to listen on a different device than the
input, then I'll need all that code I dropped.
If you've got a hardware DSP, then your audio data is going to need
to make a round trip to external hardware, and that's going to
involve additional latency. In other words, you're already going to
a lot of trouble, it seems.
Trust me, once you understand what's going on in CAPlayThrough, you
don't necessarily need to implement everything. An AUGraph can
easily run without any callbacks, and in your case you'd just need
callbacks to get data to and from your DSP hardware.
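If it helps, the skeleton of my stripped-down version looks roughly like the
sketch below (written from memory, error checking omitted, with one of Apple's
stock effects standing in for my own AU). It assumes a single device carrying
both input and output on one clock; depending on your setup you may also need
to point the AUHAL at that device with kAudioOutputUnitProperty_CurrentDevice.

#include <AudioToolbox/AudioToolbox.h>

// Sketch: single-device play-through with one AU in the middle, no callbacks.
static OSStatus BuildPlayThroughGraph(AUGraph *outGraph)
{
    AUGraph graph;
    AUNode halNode, fxNode;
    AudioComponentDescription halDesc = { kAudioUnitType_Output,
        kAudioUnitSubType_HALOutput, kAudioUnitManufacturer_Apple, 0, 0 };
    AudioComponentDescription fxDesc = { kAudioUnitType_Effect,
        kAudioUnitSubType_PeakLimiter, kAudioUnitManufacturer_Apple, 0, 0 };

    NewAUGraph(&graph);
    AUGraphAddNode(graph, &halDesc, &halNode);
    AUGraphAddNode(graph, &fxDesc, &fxNode);
    AUGraphOpen(graph);

    // Enable input on the AUHAL (element 1); it is output-only by default.
    AudioUnit halUnit;
    UInt32 enable = 1;
    AUGraphNodeInfo(graph, halNode, NULL, &halUnit);
    AudioUnitSetProperty(halUnit, kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Input, 1, &enable, sizeof(enable));

    // Device input (bus 1) -> effect -> device output (bus 0).
    AUGraphConnectNodeInput(graph, halNode, 1, fxNode, 0);
    AUGraphConnectNodeInput(graph, fxNode, 0, halNode, 0);

    AUGraphInitialize(graph);
    AUGraphStart(graph);
    *outGraph = graph;
    return noErr;
}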
On Sep 18, 2008, at 17:56, Taylor Holliday wrote:
So since the app will eventually do the processing with a hardware
DSP, it's not necessary to run other audio units within the app. The
CAPlayThrough seems like a lot of effort just to get play-through
working :-\. I just stumbled on PortAudio; what do you all think of that?
On Thu, Sep 18, 2008 at 4:56 PM, Brian Willoughby
<email@hidden> wrote:
I would suggest the AUGraph API. It would allow you to mix existing
AudioUnits from other developers alongside custom code in your
application. It's probably easier to get your code running without
also learning how to develop an AudioUnit at the same time. Plus,
the ability to use existing AudioUnits means that you don't have to
create everything.
Take a look at the sample code, particularly the CAPlayThrough
sample. Once you understand callbacks and setting up an AUGraph, it
would be pretty easy to insert your own processing in one or more
places in the graph, along with some standard AUs if you need them.
On Sep 18, 2008, at 16:38, Taylor Holliday wrote:
I'm new to core audio (and audio programming in general) and I'm
writing a real-time audio processing app. After skimming a lot of the
core audio documentation, I'm not sure what's the easiest way to get
audio in and out of my app. Should I make my app into an audio unit
(I'd rather it be stand-alone)? Should I interface directly with the
HAL? Is there another lib out there that would make this easier (and
perhaps would be cross platform)? Any help would be greatly appreciated!