
Re: More of Process


  • Subject: Re: More of Process
  • From: Jeff Moore <email@hidden>
  • Date: Thu, 3 Nov 2005 16:08:53 -0800

My bad. I thought I recalled that you had asked in a previous email about adding processing to all the output of a given audio device, which is why I assumed that when you said "kernel" you meant the Mach kernel. My mistake.

On Nov 3, 2005, at 3:41 PM, john smith wrote:


Jeff,

Thanks for your reply.

Surely there's some misunderstanding here. All I'm trying to do is to make simple Audio Units (plug-ins).

When I refer to "channel dependencies" I mean that the plug-in cannot work on channels independently. For instance, a reverb will often write to 2 output channels simultaneously.
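By way of illustration, and assuming the AUEffectBase class from the CoreAudio SDK, a channel-dependent effect can skip the per-channel kernels entirely and override ProcessBufferLists, which receives every channel in one call. This is only a sketch; the mixing below is a stand-in for real reverb math:

    #include "AUEffectBase.h"

    class MyStereoEffect : public AUEffectBase {
    public:
        MyStereoEffect(AudioUnit component) : AUEffectBase(component) {}

        virtual OSStatus ProcessBufferLists(AudioUnitRenderActionFlags &ioActionFlags,
                                            const AudioBufferList &inBuffer,
                                            AudioBufferList &outBuffer,
                                            UInt32 inFramesToProcess)
        {
            // Canonical AU format is de-interleaved: one AudioBuffer per channel.
            const Float32 *inL  = (const Float32 *)inBuffer.mBuffers[0].mData;
            const Float32 *inR  = (const Float32 *)inBuffer.mBuffers[1].mData;
            Float32 *outL = (Float32 *)outBuffer.mBuffers[0].mData;
            Float32 *outR = (Float32 *)outBuffer.mBuffers[1].mData;

            for (UInt32 i = 0; i < inFramesToProcess; ++i) {
                // Each output sample depends on both inputs -- the "channel
                // dependency" a per-channel kernel cannot express.
                outL[i] = 0.5f * (inL[i] + inR[i]);
                outR[i] = 0.5f * (inL[i] - inR[i]);
            }
            return noErr;
        }
    };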

The kernel process I'm talking about is the kernel class living inside the main AU class when you create an AU using the Xcode template. I believe the full name of the class is AUKernelBase.
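For reference, a rough sketch of what that kernel looks like in the template (from memory, so treat names other than AUKernelBase and Process as approximate): the effect owns one kernel per channel, and each kernel's Process() sees a single mono stream.

    #include "AUEffectBase.h"

    class MyEffectKernel : public AUKernelBase {
    public:
        MyEffectKernel(AUEffectBase *inAudioUnit) : AUKernelBase(inAudioUnit) {}

        // "This is where you process the audio" -- one channel at a time.
        virtual void Process(const Float32 *inSourceP, Float32 *inDestP,
                             UInt32 inFramesToProcess, UInt32 inNumChannels,
                             bool &ioSilence)
        {
            for (UInt32 i = 0; i < inFramesToProcess; ++i)
                inDestP[i] = inSourceP[i] * 0.5f;   // trivial per-sample gain
        }
    };

    // In the owning AUEffectBase subclass:
    // virtual AUKernelBase *NewKernel() { return new MyEffectKernel(this); }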


Thanks anyway.


Michael Olsen

I'll let someone else address your AU programming questions, but I'll take a stab at answering your initial question about system-wide processing.

First off, AudioUnits are not available to kernel entities. AUs are user-land-only constructs. So, if you intend to provide a pluggable means of adding processing, you're on your own.

Secondly, there is no API for providing system-wide processing of input or output data. The architecture we have for devices does not lend itself to that sort of operation. So to implement it, you have to, by definition, hack the system. For IOAudio-based devices, you'd need to figure out how to glom onto each individual driver (in the kernel) and inject your processing code into its data handling. Granted, the IOAudio family makes that somewhat easier by providing the framework on which all the drivers are based, but it's still not a job for the faint of heart and likely would not work with all bits of hardware.
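To make the scale of that hack concrete: in an IOAudioFamily driver, the choke point every output sample passes through is IOAudioEngine::clipOutputSamples(). A sketch of the injection, with a made-up VendorAudioEngine class standing in for each real driver and all the kext boilerplate omitted, would look roughly like this -- and you'd have to repeat it for every driver you wanted to touch:

    #include <IOKit/audio/IOAudioEngine.h>

    // "VendorAudioEngine" is hypothetical; each vendor's engine class differs.
    class MyInjectedEngine : public VendorAudioEngine
    {
    public:
        virtual IOReturn clipOutputSamples(const void *mixBuf, void *sampleBuf,
                                           UInt32 firstSampleFrame, UInt32 numSampleFrames,
                                           const IOAudioStreamFormat *streamFormat,
                                           IOAudioStream *audioStream)
        {
            // mixBuf holds the mixed float samples headed for this one device.
            // Casting away const and scribbling on it is exactly the kind of
            // hack being described.
            float *samples = (float *)mixBuf +
                             firstSampleFrame * streamFormat->fNumChannels;
            for (UInt32 i = 0; i < numSampleFrames * streamFormat->fNumChannels; ++i)
                samples[i] *= 0.5f;                      // stand-in processing

            // Let the original driver code do its clip/convert to the hardware format.
            return VendorAudioEngine::clipOutputSamples(mixBuf, sampleBuf,
                                                        firstSampleFrame, numSampleFrames,
                                                        streamFormat, audioStream);
        }
    };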

Another approach would be some kind of redirection scheme like what AudioHijack or Jack do and, in a very primitive way, what the AudioReflectorDriver sample code does. You'd redirect the data to your processing code and then direct it at the actual bit of hardware.
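Stripped to its bones, the user-land half of that redirection is a pair of HAL IOProcs: one on the virtual (reflector) device, where everything the system plays shows up as input and gets processed, and one on the real device, where the processed data is handed to the hardware. A sketch using the AudioDeviceAddIOProc/AudioDeviceStart calls, with placeholder device IDs and a single shared buffer in place of the ring buffering and clock-drift handling a real implementation needs:

    #include <CoreAudio/CoreAudio.h>
    #include <CoreFoundation/CoreFoundation.h>
    #include <string.h>

    static Float32 gScratch[4096];            // grossly simplified shared buffer
    static UInt32  gScratchBytes = 0;

    // Runs on the virtual (reflector) device: everything apps play to it shows
    // up here as input. Process it, then stash it for the hardware device.
    static OSStatus ReflectorIOProc(AudioDeviceID, const AudioTimeStamp *,
                                    const AudioBufferList *inInputData, const AudioTimeStamp *,
                                    AudioBufferList *, const AudioTimeStamp *, void *)
    {
        if (inInputData->mNumberBuffers == 0) return noErr;
        const AudioBuffer &buf = inInputData->mBuffers[0];
        const Float32 *src = (const Float32 *)buf.mData;
        UInt32 nSamples = buf.mDataByteSize / sizeof(Float32);
        for (UInt32 i = 0; i < nSamples && i < 4096; ++i)
            gScratch[i] = src[i] * 0.5f;      // the "system-wide" processing
        gScratchBytes = buf.mDataByteSize;
        return noErr;
    }

    // Runs on the real output device: hand the processed data to the hardware.
    static OSStatus HardwareIOProc(AudioDeviceID, const AudioTimeStamp *,
                                   const AudioBufferList *, const AudioTimeStamp *,
                                   AudioBufferList *outOutputData, const AudioTimeStamp *, void *)
    {
        UInt32 outBytes = outOutputData->mBuffers[0].mDataByteSize;
        memcpy(outOutputData->mBuffers[0].mData, gScratch,
               outBytes < gScratchBytes ? outBytes : gScratchBytes);
        return noErr;
    }

    int main()
    {
        // Placeholder IDs -- look these up for real (e.g. AudioHardwareGetProperty).
        AudioDeviceID reflectorDevice = 0, hardwareDevice = 0;
        AudioDeviceAddIOProc(reflectorDevice, ReflectorIOProc, NULL);
        AudioDeviceAddIOProc(hardwareDevice, HardwareIOProc, NULL);
        AudioDeviceStart(reflectorDevice, ReflectorIOProc);
        AudioDeviceStart(hardwareDevice, HardwareIOProc);
        CFRunLoopRun();                       // keep the process alive
        return 0;
    }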

On Nov 3, 2005, at 2:29 PM, john smith wrote:


Hi,

As I mentioned in a previous letter, I'm unable to locate any documentation for the kernel Process, except that "this is where you process the audio".





--

Jeff Moore
Core Audio
Apple


