
Re: Newbie question: Looking for advice on how best to integrate a 24x24 audio-effect algorithm into a MacOS/X environment


  • Subject: Re: Newbie question: Looking for advice on how best to integrate a 24x24 audio-effect algorithm into a MacOS/X environment
  • From: Paul Davis <email@hidden>
  • Date: Thu, 12 Feb 2015 09:30:57 -0500



On Wed, Feb 11, 2015 at 11:08 PM, Jeremy Friesner <email@hidden> wrote:
Hi all,

I’ve got an audio algorithm that I’d like to port over to run in a MacOS/X audio environment.  The algorithm is somewhat similar to a 24x24 mixer, in that it takes 24 mono audio streams as input, swizzles them all together, and emits 24 mono audio streams as output.

From studying the Core Audio documentation, it appears that this algorithm could be represented by a “signal processor” style Audio Unit.  What I’m not sure about, however, is how useful such an Audio Unit would be on a machine that doesn’t also have a 24-channel audio input device and a 24-channel audio output device to connect it to.
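From what I can tell, declaring a fixed 24-in/24-out configuration would look roughly like this (a sketch assuming the AUv2 C++ SDK’s AUEffectBase; the class name is hypothetical):

    #include "AUEffectBase.h"

    class MyMatrixFX : public AUEffectBase {
    public:
        MyMatrixFX(AudioComponentInstance au) : AUEffectBase(au) {}

        // Advertise exactly one supported configuration: 24 in, 24 out.
        virtual UInt32 SupportedNumChannels(const AUChannelInfo** outInfo)
        {
            static const AUChannelInfo info = { 24, 24 };
            if (outInfo) *outInfo = &info;
            return 1;  // one AUChannelInfo entry
        }

        // Because the algorithm mixes across channels, the processing would
        // presumably override ProcessBufferLists() rather than use the
        // per-channel kernel mechanism.
    };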

In particular, what I’d like to be able to do is connect whatever audio input or output devices do exist on the host Mac to various user-specified subsets of my algorithm’s inputs and outputs; for example, I might want to feed mixer outputs 5 and 8 to my Mac’s headphone jack, and feed mixer outputs 15 and 16 to my Mac’s line-out jack.  Similarly, I might want my Mac’s microphone jack to feed into input 3 and my USB headset’s mic to feed into input 4 of the mixer algorithm.

If I implement my mixing algorithm as a 24-channel Audio Unit, can audio-routing decisions like those described above be made outside of my app?

this all depends on the host application the user chooses to run your plugin in. there are actually two layers of routing (at least):

     * sources outside of the host application (hardware or software) and how they connect to the inputs of the host application
     * connections between the inputs of the host application and your plugin

some hosts will give you full control over both layers. others will give you control over only one of them, and some will give you no control at all.
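for illustration, the second layer might look like this inside a host (a sketch assuming the AUGraph API; the effect component codes 'mtfx'/'Demo' are hypothetical placeholders for your 24x24 unit):

    #include <AudioToolbox/AudioToolbox.h>

    void buildChain()
    {
        AUGraph graph;
        AUNode  effectNode, outputNode;

        AudioComponentDescription fxDesc  = { kAudioUnitType_Effect,
            'mtfx', 'Demo', 0, 0 };                    // hypothetical 24x24 effect
        AudioComponentDescription outDesc = { kAudioUnitType_Output,
            kAudioUnitSubType_DefaultOutput, kAudioUnitManufacturer_Apple, 0, 0 };

        NewAUGraph(&graph);
        AUGraphAddNode(graph, &fxDesc,  &effectNode);
        AUGraphAddNode(graph, &outDesc, &outputNode);
        AUGraphOpen(graph);

        // the host, not the plugin, decides this wiring:
        AUGraphConnectNodeInput(graph, effectNode, 0, outputNode, 0);

        AUGraphInitialize(graph);
        AUGraphStart(graph);
    }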

writing a plugin has the HUGE benefit that it frees you from dealing directly with audio interface hardware, especially if you imagine users interacting with multiple devices. it has the drawback that your plugin has absolutely no control over where its inputs come from or where its outputs go.
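the first layer (getting device channels into the host at all) is likewise the host's job; for example, the "mic jack into input 3" routing described above might be done with AUHAL's channel map (a sketch assuming an AUHAL instance already configured for input, with that setup omitted; see TN2091 for the details):

    #include <AudioToolbox/AudioToolbox.h>

    void mapMicToInput3(AudioUnit auhal)
    {
        SInt32 channelMap[24];
        for (int i = 0; i < 24; ++i)
            channelMap[i] = -1;     // -1 leaves a destination channel unconnected
        channelMap[3] = 0;          // device channel 0 feeds client channel 3

        // per TN2091, input-side maps go on the output scope of element 1
        AudioUnitSetProperty(auhal, kAudioOutputUnitProperty_ChannelMap,
                             kAudioUnitScope_Output, 1,
                             channelMap, sizeof(channelMap));
    }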


References:
  • Newbie question: Looking for advice on how best to integrate a 24x24 audio-effect algorithm into a MacOS/X environment (From: Jeremy Friesner <email@hidden>)
