
Re: easiest way to process realtime audio


  • Subject: Re: easiest way to process realtime audio
  • From: "Taylor Holliday" <email@hidden>
  • Date: Thu, 18 Sep 2008 21:31:57 -0700

Well, the DSP is a stand-alone unit (it includes analog audio I/O)... I'm thinking one of those development boards from Freescale... so I wouldn't be sending any audio between the DSP and the computer. This software implementation is a prototype so I can avoid buying the DSP hardware right now and have a quicker proof of concept for my app. I'm implementing something along the lines of the Nord Modular G2 synthesizer, if you're curious. But I digress...

After looking at the AUGraph API in more detail, I think it might save me lots of time. What were the callbacks you alluded to? Can I make some lightweight nodes that don't have to be full-blown audio units (i.e. not separate plug-ins) by specifying a processing callback?
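
Something like this is what I have in mind (just guessing at the call names from skimming the AUGraph headers, so the details may be off):

#include <AudioToolbox/AudioToolbox.h>

/* An ordinary C function with the AURenderCallback signature -- no
   separate plug-in bundle -- defined elsewhere in my app. */
extern OSStatus MyProcessCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData);

static void AttachProcessing(AUGraph graph, AUNode destNode)
{
    /* Feed destNode's input bus 0 from my callback instead of from
       another node in the graph. */
    AURenderCallbackStruct cb = { MyProcessCallback, NULL /* refCon */ };
    AUGraphSetNodeInputCallback(graph, destNode, 0, &cb);
}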

thanks!
- Taylor

On Thu, Sep 18, 2008 at 6:05 PM, Brian Willoughby <email@hidden> wrote:
Interesting.  In my opinion, PortAudio looks like a lot more work!  It certainly represents a lot more code, even if you're compiling someone else's code.

CAPlayThrough allows you to connect two different interfaces, even if they're running on different sample clocks.  Even if you don't need multiple interfaces, the design of USB audio requires that you treat things this way for a single device.

After reviewing CAPlayThrough, I was able to get a working application by ripping out all of the sample rate conversion code and just using the input and output of a single FireWire interface because I know it is presented as a single clock reference.  This means that my modified CAPlayThrough has absolutely no callbacks of any kind (because I am hosting my own AU).  If you did something similar, you could have your non-AU DSP be the only callbacks.  The obvious drawback to this is that my code won't work with USB audio interfaces, and if I ever want to listen on a different device than input, then I'll need all that code I dropped.

If you've got a hardware DSP, then your audio data is going to need to make a round trip to external hardware, and that's going to involve additional latency.  In other words, you're already going to a lot of trouble, it seems.

Trust me, once you understand what's going on in CAPlayThrough, you don't necessarily need to implement everything.  An AUGraph can easily run without any callbacks, and in your case you'd just need callbacks to get data to and from your DSP hardware.
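
Roughly this shape, off the top of my head (untested, and SendFramesToDSP()/ReceiveFramesFromDSP() are just stand-ins for whatever link you end up with to the hardware or to your software prototype):

#include <AudioToolbox/AudioToolbox.h>

static OSStatus DSPBridgeCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData)
{
    /* Hand the current buffers to the DSP, e.g.
           SendFramesToDSP(ioData, inNumberFrames);
       then copy the processed frames back into ioData before returning:
           ReceiveFramesFromDSP(ioData, inNumberFrames);
       Everything else in the graph runs without callbacks. */
    return noErr;
}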

Brian Willoughby
Sound Consulting


On Sep 18, 2008, at 17:56, Taylor Holliday wrote:

Thanks Brian,

So since the app will eventually do the processing with a hardware DSP, it's not necessary to run other audio units within the app. The CAPlayThrough sample seems like a lot of effort just to get play-through working :-\. I just stumbled on PortAudio; what do you all think of that?

- Taylor

On Thu, Sep 18, 2008 at 4:56 PM, Brian Willoughby <email@hidden> wrote:
Taylor,

I would suggest the AUGraph API.  It would allow you to mix existing AudioUnits from other developers alongside custom code in your application.  It's probably easier to get your code running without also learning how to develop an AudioUnit at the same time.  Plus, the ability to use existing AudioUnits means that you don't have to create everything.

Take a look at the sample code, particularly the CAPlayThrough sample.  Once you understand callbacks and setting up an AUGraph, it would be pretty easy to insert your own processing in one or more places in the graph, along with some standard AUs if you need them.
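
The graph itself is only a handful of calls.  Roughly (from memory, no error checking, and the low-pass filter is just an example of a standard Apple AU you might drop in):

#include <AudioToolbox/AudioToolbox.h>

static AUGraph BuildSimpleGraph(void)
{
    AUGraph graph = NULL;
    AUNode  effectNode, outputNode;

    AudioComponentDescription effectDesc = { kAudioUnitType_Effect,
        kAudioUnitSubType_LowPassFilter, kAudioUnitManufacturer_Apple, 0, 0 };
    AudioComponentDescription outputDesc = { kAudioUnitType_Output,
        kAudioUnitSubType_DefaultOutput, kAudioUnitManufacturer_Apple, 0, 0 };

    NewAUGraph(&graph);
    AUGraphAddNode(graph, &effectDesc, &effectNode);
    AUGraphAddNode(graph, &outputDesc, &outputNode);
    AUGraphOpen(graph);

    /* The standard effect AU feeds the default hardware output.  Your
       own processing can go in front of it as a render callback on the
       effect node's input, or as additional nodes in between. */
    AUGraphConnectNodeInput(graph, effectNode, 0, outputNode, 0);

    AUGraphInitialize(graph);
    AUGraphStart(graph);
    return graph;
}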

Brian Willoughby
Sound Consulting


On Sep 18, 2008, at 16:38, Taylor Holliday wrote:
I'm new to Core Audio (and audio programming in general) and I'm writing a real-time audio processing app. After skimming a lot of the Core Audio documentation, I'm not sure what the easiest way is to get audio in and out of my app. Should I make my app into an audio unit (I'd rather it be stand-alone)? Should I interface directly with the HAL? Is there another lib out there that would make this easier (and perhaps would be cross-platform)? Any help would be greatly appreciated!


