
Re: How to make my AU graph the default output?


  • Subject: Re: How to make my AU graph the default output?
  • From: Brian Willoughby <email@hidden>
  • Date: Sat, 28 Mar 2009 18:30:41 -0700


On Mar 28, 2009, at 17:58, Ondřej Čada wrote:

Brian,

On Mar 29, 2009, at 12:50 AM, Brian Willoughby wrote:
...
... I still can't get the reason why there is no "default output AU" that could be put in the graph: it seems to me as clean a solution as possible... well, never mind :)

There is a "default output AU" but it only deals with audio from the current application.

Is there? Pray tell me where to find the thing.

That's exactly what I needed; the inter-application approach is an ugly hack which I am forced to use for only one reason: that the "default output AU" inside an application's scope does not exist (as far as I and those who answered before know).

I think we may have a terminology overlap here. CoreAudio has a piece that is literally called the "default output AU" - but I think that it does something different from what you want.


In a normal CoreAudio application, where you generate all of the audio data yourself or have access to it, and you want to send this to the default audio output device selected in Audio MIDI Setup without worrying about sample rate conversion or interleave conversion, you simply access the "default output AU" and describe your channels properly. CoreAudio provides an AU which will take the audio data in the format that you provide, and convert it mostly automatically to the format that the output device needs.

Since you have complete control over the environment, hopefully you have audio hardware in your carputer which is four channel or more (5.1?). If so, then CoreAudio provides a nice link between the multi-channel audio data in your application and the multi-channel hardware. If you want to hack together two pieces of hardware from other vendors rather than build your own, then an aggregate device might suffice.

I'm not quite sure what you really mean when you ask for a "default output AU" - so I tried to start by mentioning that there already is such a thing, and now it has hopefully been explained a little more fully.
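To make that concrete, an untested sketch of opening the default output AU and attaching a render callback might look like the following. The names MyRenderProc and StartDefaultOutput are just placeholders, error checking is omitted, and the callback only writes silence where a real application would write its own samples:

#include <string.h>
#include <AudioUnit/AudioUnit.h>

// Render callback: the output AU pulls audio from the application.
// Placeholder body: writes silence; a real app fills in its samples.
static OSStatus MyRenderProc(void *inRefCon,
    AudioUnitRenderActionFlags *ioActionFlags,
    const AudioTimeStamp *inTimeStamp,
    UInt32 inBusNumber, UInt32 inNumberFrames,
    AudioBufferList *ioData)
{
    for (UInt32 i = 0; i < ioData->mNumberBuffers; i++)
        memset(ioData->mBuffers[i].mData, 0,
               ioData->mBuffers[i].mDataByteSize);
    return noErr;
}

void StartDefaultOutput(void)
{
    // Ask for the AU that tracks the device chosen in Audio MIDI Setup.
    AudioComponentDescription desc = { 0 };
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_DefaultOutput;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    AudioComponent comp = AudioComponentFindNext(NULL, &desc);
    AudioUnit outputUnit;
    AudioComponentInstanceNew(comp, &outputUnit);

    // Hand the AU our render callback on its input scope.
    AURenderCallbackStruct cb = { MyRenderProc, NULL };
    AudioUnitSetProperty(outputUnit, kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input, 0, &cb, sizeof(cb));

    AudioUnitInitialize(outputUnit);
    AudioOutputUnitStart(outputUnit); // begins pulling via MyRenderProc
}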


What you're asking for is the ability to direct audio between applications, because you're leveraging applications that you did not write yourself.

Neither.

All I am asking for is the ability to leverage the audio of _one_ application, and to do it inside that _one_ application, as the application's own internal thing.

I think what you're missing is that you are not producing the audio data in your application, or at the very least you're passing off the audio data to some API which is not compatible with CoreAudio. If your application is not directly processing the audio data in its final format, then you can't control how it is sent to CoreAudio.



If the fact that the application offers a plug-in API confuses you, well, forget it, and consider the simplest case possible: a trivial QuickTime-based player. Or, probably an even simpler thing: consider a very plain reader, which has only one text field in its GUI; whatever the user writes into the field is read out through NSSpeechSynthesizer. I estimate twenty-odd source lines are needed to do that.


I think we're repeating ourselves in this thread now. If you use QuickTime or NSBeep or NSSpeechSynthesizer, then you are not producing the audio data yourself; you are asking some other API to produce the audio data for you, and your application has no control over the audio data after that. None of the APIs that you are leveraging allow full access to the surround capabilities of CoreAudio.

If you want to play a movie in your application, and you cannot figure out how to get it in surround (ask the QuickTime guys for help, I don't know), then you need to process the audio data yourself in surround format and pass that off to CoreAudio.

If you want to synthesize beeps in surround, then NSBeep is not the API for you. Instead, you should look into any kind of custom synthesis of simple waveforms, or perhaps just find an AudioUnit MusicInstrument which makes a tone that is similar to NSBeep, and use the AU in an AUGraph with the 3D mixer to take full advantage of CoreAudio's surround capabilities (a sketch of such a graph follows below).

If you want to synthesize speech, and you're not happy with the stereo limitations of NSSpeechSynthesizer, then you need to write your own speech synthesis software and pass the audio data to the 3D mixer.
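As an untested sketch of that AUGraph suggestion, wiring an instrument into the 3D mixer and then into the default output might look like this. I picked Apple's DLS synth purely as a stand-in MusicInstrument, the function name is a placeholder, and error checking is omitted:

#include <AudioToolbox/AudioToolbox.h>

void BuildSurroundBeepGraph(AUGraph *outGraph, AUNode *outMixerNode)
{
    AUGraph graph;
    NewAUGraph(&graph);

    // Instrument -> 3D mixer -> default output.
    AudioComponentDescription synthDesc = { kAudioUnitType_MusicDevice,
        kAudioUnitSubType_DLSSynth, kAudioUnitManufacturer_Apple, 0, 0 };
    AudioComponentDescription mixerDesc = { kAudioUnitType_Mixer,
        kAudioUnitSubType_3DMixer, kAudioUnitManufacturer_Apple, 0, 0 };
    AudioComponentDescription outDesc = { kAudioUnitType_Output,
        kAudioUnitSubType_DefaultOutput, kAudioUnitManufacturer_Apple, 0, 0 };

    AUNode synthNode, mixerNode, outNode;
    AUGraphAddNode(graph, &synthDesc, &synthNode);
    AUGraphAddNode(graph, &mixerDesc, &mixerNode);
    AUGraphAddNode(graph, &outDesc, &outNode);

    AUGraphOpen(graph);
    AUGraphConnectNodeInput(graph, synthNode, 0, mixerNode, 0); // synth into mixer bus 0
    AUGraphConnectNodeInput(graph, mixerNode, 0, outNode, 0);   // mixer into output

    AUGraphInitialize(graph);
    AUGraphStart(graph);

    *outGraph = graph;
    *outMixerNode = mixerNode;
}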

The most important thing to consider is that you are using audio APIs outside CoreAudio, and those outside APIs do not have surround capability. Once you pass off the audio processing to a stereo API, you cannot get the data back in order to expand it into surround and then send it to CoreAudio yourself. Therefore, my suggestion is that you not pass your audio off to a stereo API if you desire surround capabilities.


Now suppose any of these extremely simple applications needs a sound output improvement -- a completely internal sound improvement, which would be encapsulated in the application itself, of course. Suppose it's an improvement which can be very naturally and easily done by a chain of a few CA units. For example, we might want to create the illusion that the "reader" strolls left and right, by moving the synthesized speech gradually between the L/R stereo output channels.

How does one do that with CA? Especially, how does one do that without resorting to ugly KEXT-based inter-application solutions?


QuickTime may allow callbacks, so that you can intercept the audio after it has been produced by QuickTime. However, I am not a QuickTime expert, and this is not the list for those questions. I seem to recall that QuickTime does not allow further processing of its output. In that case, the limitation is not CoreAudio; your real problem is QuickTime.

Same thing with speech. NSSpeechSynthesizer must be a Cocoa API, which I'm guessing based on the NS prefix. However, unless it allows for callbacks which send the synthesized audio samples back to the calling application as buffered data, you're out of luck if you want to pan this around in surround. Perhaps there are some settings for NSSpeechSynthesizer which go beyond stereo pan and into surround, but the Cocoa dev mailing list is probably the place for those questions. Sorry to be pointing you to other mailing lists, but those other APIs came around before CoreAudio, and their limitations are really independent of CoreAudio's capabilities.

In other words, CoreAudio cannot give you access to audio data that you otherwise have no access to. If you can find some way to get access to the audio data from these other APIs, then you can most certainly run it through all manner of plug-ins, graphs, and surround mixing, and take full advantage of the features of CoreAudio.
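To tie this back to the strolling-reader example: once the samples are yours, moving the voice around is just a parameter sweep on the 3D mixer. Here is a hypothetical continuation of the graph sketch above (the graph and mixerNode values carried over, the function name a placeholder, and crude sleep-based timing):

#include <unistd.h>
#include <AudioToolbox/AudioToolbox.h>

void StrollLeftToRight(AUGraph graph, AUNode mixerNode)
{
    AudioUnit mixerUnit;
    AUGraphNodeInfo(graph, mixerNode, NULL, &mixerUnit);

    // Sweep the source on input bus 0 from hard left (-90 degrees)
    // to hard right (+90 degrees). A real application would drive
    // this from a timer rather than sleeping in a loop.
    for (int step = 0; step <= 180; step += 5) {
        Float32 azimuth = -90.0f + (Float32)step;
        AudioUnitSetParameter(mixerUnit, k3DMixerParam_Azimuth,
                              kAudioUnitScope_Input, 0, azimuth, 0);
        usleep(50000);
    }
}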

I realize it's frustrating. I've worked with other technologies such as Optical Character Recognition, where the main program was dependent upon a certain API to produce a result, but the API did not allow any access to the data after the processing was started. When you're not happy with the final output of a certain API, your only choices are to implement the functionality of the API yourself, or ask the developers of the API to provide access to the data before it is sent to the output device hardware.

Brian Willoughby
Sound Consulting


