
Re: How to make my AU graph the default output?


  • Subject: Re: How to make my AU graph the default output?
  • From: Brian Willoughby <email@hidden>
  • Date: Sat, 28 Mar 2009 20:51:13 -0700

On Mar 28, 2009, at 19:43, Ondřej Čada wrote:
> On Mar 29, 2009, at 3:30 AM, Brian Willoughby wrote:
>> I'm not quite sure what you really mean when you ask for a "default output AU"

> I have explained it in some of my previous mails, which you probably haven't read: "All I need for that is one "generator AU" which would automatically get all the "standard output": whomever renders any sound to the standard output device, he would actually render it to this unit."

I was planning on writing a brief overview of CoreAudio in the hopes that it might clear things up, and this seems like a good point to do so.


CoreAudio has two basic worlds. One world runs inside each application, and is private to that application. The other world is associated with each specific hardware device, and is partially shared among all applications using that device, but not completely shared. "CoreAudio" as a term encompasses all of the above, but you must remain mindful of the distinction between the two parts. In fact, there are a lot of other parts of CoreAudio such as file format translation and audio data format conversion, but I won't get into those here.

With regard to audio output - e.g. playback - the data can only flow one way. That is, private audio data from a single application can be sent to the partially-shared world of a specific hardware device, where it is mixed together with private audio data coming from other applications. At this point, the audio data can only keep going to the hardware device - it cannot be sent back into an individual application, not even the same application that the data came from. In your case, QuickTime, NSBeep, and NSSpeechSynthesis each hand off the audio data directly to a CoreAudio hardware device, and then you can't get it back. Your only choice is to intercept the audio before that point - something which does not seem to be supported by those APIs - or to re-implement those APIs' features so that you have total control of the audio data flow. Once the audio data has been passed off and mixed with other applications' sounds, you can't access it any more from a single application - it belongs to the system from then on.

An AudioUnit is something which usually exists only in the private application world of CoreAudio. Instead of NSBeep, if you had an AUBeep, then you'd be set. You could build a CoreAudio AUGraph and insert an AUBeep MusicInstrument AudioUnit, then connect it to a 3DMixer, and you'd have exactly what you want. You'd have all kinds of parameters to control the AU. But NSBeep does not allow this, because it skips past all of that and just tosses the beep sound into the semi-public final mix along with all the other audio on a specific audio hardware device. A solution for you is to write your own AUBeep or find some AudioUnit out there which sounds close enough for your needs. Similarly, an AUSpeechSynthesis plugin might be just the ticket for the other features in your application suite.
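
Just to make that concrete, a bare-bones AUGraph along those lines might look roughly like the sketch below. Since AUBeep does not exist, I'm using Apple's DLSSynth as a stand-in MusicDevice, and I've left out all error checking:

    #include <AudioToolbox/AudioToolbox.h>

    // Sketch only: AUBeep does not exist, so DLSSynth stands in for it here.
    static AUGraph BuildBeepGraph(void)
    {
        AUGraph graph = NULL;
        NewAUGraph(&graph);

        AudioComponentDescription synthDesc  = { kAudioUnitType_MusicDevice,
                                                 kAudioUnitSubType_DLSSynth,
                                                 kAudioUnitManufacturer_Apple, 0, 0 };
        AudioComponentDescription mixerDesc  = { kAudioUnitType_Mixer,
                                                 kAudioUnitSubType_3DMixer,
                                                 kAudioUnitManufacturer_Apple, 0, 0 };
        AudioComponentDescription outputDesc = { kAudioUnitType_Output,
                                                 kAudioUnitSubType_DefaultOutput,
                                                 kAudioUnitManufacturer_Apple, 0, 0 };

        AUNode synthNode, mixerNode, outputNode;
        AUGraphAddNode(graph, &synthDesc,  &synthNode);
        AUGraphAddNode(graph, &mixerDesc,  &mixerNode);
        AUGraphAddNode(graph, &outputDesc, &outputNode);

        AUGraphOpen(graph);

        // synth -> mixer input bus 0, mixer -> default output input bus 0
        AUGraphConnectNodeInput(graph, synthNode, 0, mixerNode, 0);
        AUGraphConnectNodeInput(graph, mixerNode, 0, outputNode, 0);

        AUGraphInitialize(graph);
        AUGraphStart(graph);
        return graph;
    }

Once the graph is started, the synth node renders into the 3DMixer, which renders into the default output unit, and you keep full access to every unit's parameters along the way - exactly what NSBeep denies you.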

One special exception to the division between the two worlds of CoreAudio - the application-private versus semi-shared hardware - is the AUHAL. AUHAL makes a one-way link between the two. The output of an AUHAL is always a hardware device, and the input is always the private data from a single application. Thus, QuickTime, NSBeep, et cetera, all use an AUHAL or something similar to pass off the audio data, but the catch is that this is a one-way process.
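
To illustrate that hand-off, here is a rough, untested sketch of how an application typically feeds its private audio into AUHAL by installing a render callback on the unit's input scope. MyRenderProc is just a placeholder name for your own callback:

    #include <AudioUnit/AudioUnit.h>

    // MyRenderProc is a placeholder for the application's own callback; it must
    // fill ioData with inNumberFrames frames each time the device pulls.
    extern OSStatus MyRenderProc(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData);

    static AudioUnit StartAUHALOutput(void)
    {
        // AUHAL: the one-way link from application-private audio to one hardware device.
        AudioComponentDescription desc = { kAudioUnitType_Output,
                                           kAudioUnitSubType_HALOutput,
                                           kAudioUnitManufacturer_Apple, 0, 0 };
        AudioComponent comp = AudioComponentFindNext(NULL, &desc);
        AudioUnit unit = NULL;
        AudioComponentInstanceNew(comp, &unit);

        // The application's callback sits on the input scope of AUHAL; the output
        // scope is the hardware device itself, and data only moves in that direction.
        AURenderCallbackStruct cb = { MyRenderProc, NULL };
        AudioUnitSetProperty(unit, kAudioUnitProperty_SetRenderCallback,
                             kAudioUnitScope_Input, 0, &cb, sizeof(cb));

        AudioUnitInitialize(unit);
        AudioOutputUnitStart(unit);
        return unit;
    }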

Everything I have described above is from the point of view of playback of audio data from some application source to some audio hardware device destination. This is a one-way data flow, and there is a very good reason for this: CoreAudio has a pull model for data flow. Thus, the audio hardware is in full control of the sample rate and timing of the audio data flow. The CoreAudio device driver asks all connected applications for their data, mixes them together, and makes sure that all of this happens without glitches in the sound. This is a vast improvement over other operating systems' audio APIs, and I am really glad that CoreAudio is designed this way.
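
For what it's worth, the pull model boils down to a render callback like the following sketch: the device's I/O thread calls it whenever it needs more frames, and the application never pushes data on its own schedule. The tone generator here is purely illustrative:

    #include <AudioUnit/AudioUnit.h>
    #include <math.h>

    typedef struct {
        double phase;       // running phase of the test tone
        double freqHz;      // e.g. 440.0
        double sampleRate;  // must match the stream format set on the output unit
    } ToneState;

    // The device's I/O thread calls this whenever it needs inNumberFrames more
    // frames; the application never pushes data on its own schedule.
    static OSStatus ToneRenderProc(void *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp *inTimeStamp,
                                   UInt32 inBusNumber,
                                   UInt32 inNumberFrames,
                                   AudioBufferList *ioData)
    {
        ToneState *state = (ToneState *)inRefCon;
        double phaseStep = 2.0 * M_PI * state->freqHz / state->sampleRate;
        double endPhase = state->phase;

        for (UInt32 buf = 0; buf < ioData->mNumberBuffers; buf++) {
            Float32 *out = (Float32 *)ioData->mBuffers[buf].mData;
            double phase = state->phase;   // same tone into every (non-interleaved) channel
            for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
                out[frame] = (Float32)(0.25 * sin(phase));   // quiet test tone
                phase += phaseStep;
            }
            endPhase = phase;
        }
        state->phase = fmod(endPhase, 2.0 * M_PI);
        return noErr;
    }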

There is, of course, the opposite data flow when setting up recording of audio data. When recording, audio data generally originates in some hardware device and is supposed to end up in an application. It can get confusing, because AUHAL also supports this reverse data flow at the same time it is handling playback audio for output, but don't ignore the fact that a great deal of conversion has to happen in order to match the sample rates between the output device and the input device, because they are generally not locked to each other. You can cross-connect these two audio data flows such that inputs go to outputs, or maybe even so that outputs go to inputs, but your options are severely limited unless you can work with the pull model, which is controlled by the output hardware device's driver.
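
As a rough sketch of that input side, recording on an AUHAL instance is switched on with kAudioOutputUnitProperty_EnableIO - element 1 is the device's input side, element 0 its output side (error handling omitted):

    #include <AudioUnit/AudioUnit.h>

    // Element 1 is AUHAL's input side (the device's input streams); element 0 is
    // its output side. Error handling omitted.
    static void EnableInputOnAUHAL(AudioUnit auhal)
    {
        UInt32 enable  = 1;
        UInt32 disable = 0;

        // Turn on recording from the device.
        AudioUnitSetProperty(auhal, kAudioOutputUnitProperty_EnableIO,
                             kAudioUnitScope_Input, 1, &enable, sizeof(enable));

        // Turn off playback on this instance if it is meant to be input-only.
        AudioUnitSetProperty(auhal, kAudioOutputUnitProperty_EnableIO,
                             kAudioUnitScope_Output, 0, &disable, sizeof(disable));
    }

Even with input enabled, you still have to deal with the sample rate conversion mentioned above whenever the input and output devices are not locked to the same clock.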

Just as an individual application cannot reverse the direction of output data to grab audio that any application has already sent to a device, likewise, an individual application cannot create input data that appears to be coming from a shared hardware device to trick some recording application. This is mostly due to CoreAudio's basic design goal to present the hardware accurately, and also to maintain quality control over the timing of the data flow to avoid dropouts.


What you're asking for is the ability to trick certain sound APIs which were designed before CoreAudio, so that they think they are passing their data off to a hardware device, when in actuality you want them to transfer their audio data back into your application so you can process it further. The problem with this is that your application cannot serve as a hardware clock source, and cannot assume responsibility for pulling the data without going way beyond a very simple solution. Even the existing solutions in this area, like Aggregate Devices and Soundflower, introduce slight distortions of the digital data as they struggle to match different clock sources from different pieces of hardware. If you are willing to accept those distortions and are unwilling to structure your application to fit within the primary CoreAudio design, then I'm sure it will work fine for you.


Keep in mind that CoreAudio was designed to provide the most direct access to audio hardware possible, without a lot of abstractions in between, and with the highest quality possible, which can only be achieved via a pull model with one-way data flow. With this purist approach, it's hard to make difficult things seem easy - and what you're asking to do is actually a difficult thing under the hood, even if it seems simple from a top-level description.


>> ... I think what you're missing is that you are not producing the audio data in your application, or at the very least you're passing off the audio data to some API which is not compatible with CoreAudio ...

> That's completely right, of course.

> The thing I am harping on (and I do promise this is my very last message on this subject! :)) is that

> (a) the default output device is in fact nothing else than an N-channel virtual "sound outlet";

Yes, with emphasis on the "out" part of outlet; i.e., you cannot change this into an input that feeds back into your application once the data has been sent.


> (b) thus there is no conceptual reason why it should not play with Core Audio, which is designed to process, well, N-channel sounds, and each of whose units sports (usually two of) such outlets;

Right: provided that you write a pure CoreAudio application where you have total control over the audio data at all times, you can create N-channel sounds easily. As you start adding non-CoreAudio pieces to the design, such as QuickTime, NSBeep, and NSSpeechSynthesis, you start losing control, and thus you lose access to the full features of CoreAudio. Those APIs are too simple to allow the power that you need. Granted, QuickTime isn't exactly simple, but with regard to your needs it is overly simplified, to the point that you can't make it work the way you need it to.
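
As a small example of what that total control buys you in a pure CoreAudio application, here is a sketch of an N-channel stream format (6-channel 5.1 at 48 kHz - both arbitrary choices) that you could hand to a mixer or output unit:

    #include <CoreAudio/CoreAudioTypes.h>

    // 6-channel (5.1) non-interleaved Float32 at 48 kHz; the channel count and
    // sample rate are arbitrary choices for the sake of the example.
    static AudioStreamBasicDescription SurroundFormat(void)
    {
        AudioStreamBasicDescription fmt = { 0 };
        fmt.mSampleRate       = 48000.0;
        fmt.mFormatID         = kAudioFormatLinearPCM;
        fmt.mFormatFlags      = kAudioFormatFlagsNativeFloatPacked
                              | kAudioFormatFlagIsNonInterleaved;
        fmt.mChannelsPerFrame = 6;                  // L R C LFE Ls Rs
        fmt.mBitsPerChannel   = 32;
        fmt.mFramesPerPacket  = 1;
        fmt.mBytesPerFrame    = sizeof(Float32);    // per channel, since non-interleaved
        fmt.mBytesPerPacket   = sizeof(Float32);
        return fmt;
    }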


> (c) especially since all the mentioned technologies are created and fully owned by one company, i.e., Apple, one would suppose they'd play nicely with each other. It seems weird for Apple to design and provide sound technologies that are mutually incompatible by design -- especially when compatibility would be pretty easy to achieve.

You're forgetting something. QuickTime was designed before CoreAudio, and QuickTime must treat compatibility with existing QuickTime applications and existing QT source code as a much more important requirement than full compatibility with CoreAudio's feature capabilities. Likewise, NSBeep was designed before CoreAudio (and I've been using NSBeep since 1991, so I'm well aware of its capabilities), and NSBeep cannot allow things that it was never designed for. Apple could expand each of these APIs with a lot of bloated features to make them support everything that CoreAudio can do, or Apple can simply deal with the reality that these are simpler APIs which were designed for a certain purpose, and leave them to that purpose. Apple has already announced that QuickTime is a dead API and will not be expanded with significant new features - or at least that's what I remember reading. What you really need is an AUBeep, and CoreAudio fully supports that; it's just that nobody has written one yet (or maybe they have!). The APIs you're using are old, their design is old, and it's basically time to move on to new, pure CoreAudio APIs if you want support for surround.


Frankly, surround is a concept that is newer than the APIs you're using, so it's not terribly surprising that you're suddenly running into trouble when using APIs that are older than the technologies you want access to.


> I think there's little point in pursuing this further: I have got my solution, and I do understand the technical problems which prevent an easier one. All I have been saying lately is that I can't see why CA was not designed so that these technical problems don't exist at all, since it would be comparatively easy (for Apple, who has full access to all the audio functionality).

My point is that CoreAudio does not need to change to fix this. In fact, CoreAudio should not change to fix this. The proper place for your problems to be fixed is inside QT, NSBeep, etc. Seeing as those are old APIs, you might be waiting a long time to see such changes.


Brian Willoughby
Sound Consulting



References:
  • How to make my AU graph the default output? (From: Ondřej Čada <email@hidden>)
  • Re: How to make my AU graph the default output? (From: William Stewart <email@hidden>)
  • Re: How to make my AU graph the default output? (From: Ondřej Čada <email@hidden>)
  • Re: How to make my AU graph the default output? (From: Jens Alfke <email@hidden>)
  • Re: How to make my AU graph the default output? (From: Ondřej Čada <email@hidden>)
  • Re: How to make my AU graph the default output? (From: Brian Willoughby <email@hidden>)
  • Re: How to make my AU graph the default output? (From: Ondřej Čada <email@hidden>)
  • Re: How to make my AU graph the default output? (From: Brian Willoughby <email@hidden>)
  • Re: How to make my AU graph the default output? (From: Ondřej Čada <email@hidden>)
