
Re: Coreaudio-api Digest, Vol 3, Issue 254


  • Subject: Re: Coreaudio-api Digest, Vol 3, Issue 254
  • From: Jeff Moore <email@hidden>
  • Date: Wed, 16 Aug 2006 17:03:01 -0700


On Aug 16, 2006, at 4:38 PM, Nick Hebner wrote:



On 8/16/06, Nick Hebner <email@hidden> wrote:

> I am new to the Core Audio API and Mac programming in general. I am
> looking at writing an application for which I need to have access to
> all audio streams on the system. My initial idea was to write it
> using the Core Audio APIs to enumerate all streams on all devices,
> and fiddle with these streams.

OK. I'm with you so far. I'll just point out that in
/Developer/Examples/CoreAudio/HAL/HALLab, you'll find a project that
builds an app that puts a GUI on all the HAL's properties. Makes for
an easy way to explore the hardware.


> I wrote a test program to see if this would work; it indicated that
> there was only one stream on my one output device, even when I was
> playing audio with iTunes and GarageBand.

This is correct. You are enumerating the output streams of the
hardware and for our built-in hardware, there is only one.
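
For concreteness, a minimal sketch of that kind of enumeration against
the (2006-era) HAL C API might look like the following. Error handling
is omitted, and on built-in hardware it should report a single output
stream no matter how many apps are playing:

    #include <CoreAudio/CoreAudio.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        UInt32 size = 0;

        // Ask the HAL for the full device list.
        AudioHardwareGetPropertyInfo(kAudioHardwarePropertyDevices,
                                     &size, NULL);
        UInt32 deviceCount = size / sizeof(AudioDeviceID);
        AudioDeviceID *devices = (AudioDeviceID *)malloc(size);
        AudioHardwareGetProperty(kAudioHardwarePropertyDevices,
                                 &size, devices);

        // For each device, count its output streams (isInput == false,
        // channel 0 == the master channel).
        for (UInt32 i = 0; i < deviceCount; ++i) {
            UInt32 streamListSize = 0;
            AudioDeviceGetPropertyInfo(devices[i], 0, false,
                                       kAudioDevicePropertyStreams,
                                       &streamListSize, NULL);
            printf("device %lu: %lu output stream(s)\n",
                   (unsigned long)devices[i],
                   (unsigned long)(streamListSize / sizeof(AudioStreamID)));
        }

        free(devices);
        return 0;
    }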

OK, I am a little confused by this; please let me know if my understanding of the system is correct. From viewing the IORegistry entry for my audio device in IORegistryExplorer, I noticed that a separate IOAudioEngineUserClient is created and destroyed in my audio device each time I open and close an audio application, meaning that these are per-application objects, correct?

User client objects are there to implement communication to/from user-land entities. Basically, you'll see one of these objects get instantiated for each user-land call to IOServiceOpen that targets your IOAudioEngine. There is no guarantee that there will be only one of these per process. That's just the way things are now. You very definitely should not count on that as it is subject to change in future updates to the system.
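
To make the IOServiceOpen relationship concrete, here is a hedged
user-land sketch. The matching class name and the connection type 0
are illustrative assumptions; normally the HAL makes this call on your
behalf, and whether a direct open succeeds depends on the driver:

    #include <IOKit/IOKitLib.h>
    #include <mach/mach.h>
    #include <stdio.h>

    int main(void)
    {
        // Find some IOAudioEngine instance in the registry.
        io_service_t engine = IOServiceGetMatchingService(
            kIOMasterPortDefault, IOServiceMatching("IOAudioEngine"));
        if (engine == IO_OBJECT_NULL)
            return 1;

        // Each successful IOServiceOpen causes the kernel to create a
        // user client object (e.g. an IOAudioEngineUserClient) for
        // this task against the matched service.
        io_connect_t connection = IO_OBJECT_NULL;
        kern_return_t kr = IOServiceOpen(engine, mach_task_self(),
                                         0, &connection);
        printf("IOServiceOpen returned 0x%x\n", kr);

        // Closing the connection tears the user client down again.
        if (connection != IO_OBJECT_NULL)
            IOServiceClose(connection);
        IOObjectRelease(engine);
        return 0;
    }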


I thought that each application should only have one of these, but may have multiple IOAudioStreams within an IOAudioEngine.
Oops, didn't finish my thought, sorry...
Is it the driver that determines how many streams are allowed? I don't understand this.

The driver determines how many engines and streams there are. For this purpose, you can think of each IOAudioStream as a stream of data moving from the driver to the hardware or vice versa. The IOAudioEngine can be thought of as the clock that keeps track of the sample/host time relationship for a group of synchronized streams.
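
One way to see the engine-as-clock idea from user-land is the HAL's
device time call. A small sketch, assuming inDevice is the
AudioDeviceID of a device that is currently running (the call fails
otherwise):

    #include <CoreAudio/CoreAudio.h>
    #include <stdio.h>
    #include <string.h>

    // Read a running device's notion of "now". The engine behind the
    // device is what maintains the sample <-> host time relationship
    // this reports; a stopped engine has no clock to consult.
    static void PrintDeviceTime(AudioDeviceID inDevice)
    {
        AudioTimeStamp now;
        memset(&now, 0, sizeof(now));
        if (AudioDeviceGetCurrentTime(inDevice, &now) == noErr) {
            printf("sample time %f <-> host time %llu\n",
                   now.mSampleTime,
                   (unsigned long long)now.mHostTime);
        }
    }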


> Am I able to access streams from other applications

The short answer is no. There are no APIs for directly accessing the
audio data coming out of other processes.

The long answer involves writing a fake device using the HAL's
user-land driver plug-in API or some kind of in-kernel reflector.

Using the device plug-in approach, would I need a separate plug-in for each different device, or would a generic one work for all audio devices?

I'm not sure I follow your question. The plug-in implements a user-land driver. It is the moral equivalent of what you write in the kernel. You would be creating a fake device that, to applications, looks and behaves exactly like what you'd get with a kernel driver. What you do with the audio once you get it is your business. If you wanted to mix together the audio from multiple processes, you'd need to implement some facility where the instances of your plug-in could rendezvous and do the processing. For example, Jack has a daemon that does this job.
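
Purely to illustrate the shape of such a rendezvous (this is not how
Jack actually does it; the socket path and wire format here are
invented for the example), each plug-in instance could push its
buffers to a mixing daemon over a Unix-domain socket:

    #include <sys/socket.h>
    #include <sys/un.h>
    #include <string.h>
    #include <unistd.h>

    // Hypothetical rendezvous point; the real path and protocol
    // are entirely up to your implementation.
    #define MIXER_SOCKET_PATH "/tmp/my-audio-mixerd.sock"

    // Ship one buffer of interleaved float samples from a plug-in
    // instance to the mixing daemon. Returns -1 on any failure.
    static int SendBufferToMixer(const float *samples, size_t frameCount)
    {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_un addr;
        memset(&addr, 0, sizeof(addr));
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, MIXER_SOCKET_PATH,
                sizeof(addr.sun_path) - 1);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
            write(fd, samples, frameCount * sizeof(float)) < 0) {
            close(fd);
            return -1;
        }
        close(fd);
        return 0;
    }

A real implementation would hold one persistent connection rather than
reconnecting per buffer, and would never do blocking I/O on the IO
thread; shared memory is the more typical choice for audio.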


Is there some in-depth documentation or examples on driver plug-ins somewhere?

Not beyond what's in the header file.

> , and if so how?

You can also just use something like Jack, which uses the user-land
approach. SoundFlower or the lowly AudioReflectorDriver in our SDK do
the job of being a fake device by being an in-kernel reflector.

My aim is slightly different from these in that I do not want to redirect the audio back to an application; I just want to have access to all AudioEngines in the system, so a device plug-in would probably be the best approach for this, right?

If I understand your meaning, "AudioEngine" in this context refers to the audio output of other applications, as opposed to referring to the IOAudioEngines of the drivers, yes?


If so, all of the services I named could accomplish that job in one fashion or another. They are all about moving data between processes by posing as a fake device. One could easily write an app that was the destination of this routing.
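
For example, the destination app could treat the fake device like any
other input device: install an IOProc on it and copy the mixed data
off as it arrives. A sketch against the 2006-era HAL calls, assuming
fakeDevice is the fake device's AudioDeviceID:

    #include <CoreAudio/CoreAudio.h>

    // Receives the fake device's buffers on the HAL's IO thread.
    static OSStatus CaptureProc(AudioDeviceID inDevice,
                                const AudioTimeStamp *inNow,
                                const AudioBufferList *inInputData,
                                const AudioTimeStamp *inInputTime,
                                AudioBufferList *outOutputData,
                                const AudioTimeStamp *inOutputTime,
                                void *inClientData)
    {
        // inInputData holds whatever the fake device mixed together;
        // copy it off quickly -- no blocking work on this thread.
        return noErr;
    }

    // Register the IOProc and start the device pulling data through it.
    static void StartCapture(AudioDeviceID fakeDevice)
    {
        AudioDeviceAddIOProc(fakeDevice, CaptureProc, NULL);
        AudioDeviceStart(fakeDevice, CaptureProc);
    }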


--

Jeff Moore
Core Audio
Apple

