Re: How to make my AU graph the default output?
- Subject: Re: How to make my AU graph the default output?
- From: Ondřej Čada <email@hidden>
- Date: Sun, 29 Mar 2009 04:43:44 +0200
Brian,
On Mar 29, 2009, at 3:30 AM, Brian Willoughby wrote:
Since you have complete control over the environment, then hopefully
you have audio hardware in your carputer which is four channel or
more (5.1?)
It depends on the client. Some do; others use two two-channel devices
(e.g., the built-in output and an iMic) which, as you recommended and
as I successfully tested a short time ago, can easily be covered by an
aggregate device.
(I suppose it could be done more directly by using a multi-output-bus
Mixer unit and playing each bus into a different output device; but
since the above approach is easier, I'm going to stick with it :))
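Either way, for the archives: pointing the graph at whichever device
you choose (the aggregate one in my case) boils down to a single
property call on the output unit. A minimal sketch, assuming you have
already obtained the device's AudioDeviceID somehow; error checking
elided:

    #include <AudioUnit/AudioUnit.h>
    #include <CoreAudio/CoreAudio.h>

    /* Point an AUHAL (kAudioUnitSubType_HALOutput) output unit at a
       particular device, e.g. an aggregate one. */
    static OSStatus SetOutputDevice(AudioUnit outputUnit, AudioDeviceID deviceID)
    {
        return AudioUnitSetProperty(outputUnit,
                                    kAudioOutputUnitProperty_CurrentDevice,
                                    kAudioUnitScope_Global, 0,
                                    &deviceID, sizeof(deviceID));
    }

As far as I know, this wants the plain AUHAL rather than the
DefaultOutput unit, which tracks the system default device on its own.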
I'm not quite sure what you really mean when you ask for a "default
output AU"
I explained it in some of my previous mails, which you probably
haven't read: "All I need for that is one 'generator AU' which would
automatically get all the 'standard output': whoever renders any
sound to the default output device would actually render it into
this unit."
... I think what you're missing is that you are not producing the
audio data in your application, or at the very least you're passing
off the audio data to some API which is not compatible with
CoreAudio ...
That's completely right, of course.
The thing I am harping on (and I do promise this is my very last
message on this subject! :)) is that
(a) the default output device is in fact nothing other than an N-
channel virtual "sound outlet" (see the sketch after this list);
(b) thus there is no conceptual reason why it should not play with
Core Audio, which is designed to process, well, N-channel sounds, and
each of whose units sports (usually two of) such outlets;
(c) and especially since all the mentioned technologies are created
and fully owned by one company, i.e., Apple, one would suppose they'd
play nicely with each other. It seems weird for Apple to design and
provide sound technologies that are mutually incompatible by design --
especially when compatibility would be fairly easy to achieve.
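To make (a) concrete: the "outlet" is quite tangible -- you can ask
the HAL which device is the default output and count its output
channels, the N in question. A from-memory sketch, error checking
elided, so take the details with a grain of salt:

    #include <CoreAudio/CoreAudio.h>
    #include <stdlib.h>

    /* Ask the HAL for the default output device, then count its output
       channels -- the "N" of the N-channel outlet. */
    static UInt32 DefaultOutputChannelCount(void)
    {
        AudioDeviceID device = kAudioDeviceUnknown;
        UInt32 size = sizeof(device);
        AudioObjectPropertyAddress addr = {
            kAudioHardwarePropertyDefaultOutputDevice,
            kAudioObjectPropertyScopeGlobal,
            kAudioObjectPropertyElementMaster
        };
        AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr,
                                   0, NULL, &size, &device);

        /* The stream configuration is an AudioBufferList whose buffers
           carry the channel counts. */
        addr.mSelector = kAudioDevicePropertyStreamConfiguration;
        addr.mScope    = kAudioDevicePropertyScopeOutput;
        AudioObjectGetPropertyDataSize(device, &addr, 0, NULL, &size);
        AudioBufferList *list = (AudioBufferList *)malloc(size);
        AudioObjectGetPropertyData(device, &addr, 0, NULL, &size, list);

        UInt32 channels = 0;
        for (UInt32 i = 0; i < list->mNumberBuffers; i++)
            channels += list->mBuffers[i].mNumberChannels;
        free(list);
        return channels;
    }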
Consider what I did to implement the functionality I needed:
(i) my application is rendering sound (by any API) to the stereo
default output device;
(ii) the sound data from this device is piped through Soundflower KEXT
to the input sound device of another auxiliary application;
(iii) which can then use AUHAL to get this input into buffers
(sketched after this list)...
(iv) ...and these buffers can be used as the CA chain's source (to be
dumped into a varispeed);
(v) the CA chain behind the varispeed does what I need (in this case,
makes a "surround" sound of the two original channels, and plays it
out to my output device).
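Step (iii) in a nutshell, for the archives: set up an AUHAL instance
with input enabled and output disabled, point it at the Soundflower
device, and pull the data in an input callback. A condensed,
from-memory sketch -- error checking elided, the ring buffer that
hands the data on to the varispeed's render callback not shown, and
the 10.6 AudioComponent calls standing in for
FindNextComponent/OpenAComponent on older systems:

    #include <AudioUnit/AudioUnit.h>
    #include <CoreAudio/CoreAudio.h>

    static AudioUnit sAUHAL;   /* the input AUHAL instance */

    /* Called by AUHAL whenever the Soundflower device has new data;
       pull it with AudioUnitRender into our own buffer list. */
    static OSStatus InputProc(void *inRefCon,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp *inTimeStamp,
                              UInt32 inBusNumber, UInt32 inNumberFrames,
                              AudioBufferList *ioData)
    {
        /* preallocated to match the device format; in real code,
           reset its buffer sizes for inNumberFrames first */
        AudioBufferList *abl = (AudioBufferList *)inRefCon;
        OSStatus err = AudioUnitRender(sAUHAL, ioActionFlags, inTimeStamp,
                                       inBusNumber, inNumberFrames, abl);
        /* ...push abl's contents into a ring buffer for the varispeed... */
        return err;
    }

    static OSStatus SetUpInputAUHAL(AudioDeviceID soundflowerID,
                                    AudioBufferList *abl)
    {
        AudioComponentDescription desc = { kAudioUnitType_Output,
            kAudioUnitSubType_HALOutput, kAudioUnitManufacturer_Apple, 0, 0 };
        AudioComponent comp = AudioComponentFindNext(NULL, &desc);
        AudioComponentInstanceNew(comp, &sAUHAL);

        /* element 1 = AUHAL's input side, element 0 = its output side */
        UInt32 on = 1, off = 0;
        AudioUnitSetProperty(sAUHAL, kAudioOutputUnitProperty_EnableIO,
                             kAudioUnitScope_Input, 1, &on, sizeof(on));
        AudioUnitSetProperty(sAUHAL, kAudioOutputUnitProperty_EnableIO,
                             kAudioUnitScope_Output, 0, &off, sizeof(off));

        /* read from Soundflower rather than the default input device */
        AudioUnitSetProperty(sAUHAL, kAudioOutputUnitProperty_CurrentDevice,
                             kAudioUnitScope_Global, 0,
                             &soundflowerID, sizeof(soundflowerID));

        AURenderCallbackStruct cb = { InputProc, abl };
        AudioUnitSetProperty(sAUHAL, kAudioOutputUnitProperty_SetInputCallback,
                             kAudioUnitScope_Global, 0, &cb, sizeof(cb));

        AudioUnitInitialize(sAUHAL);
        return AudioOutputUnitStart(sAUHAL);
    }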
All I am arguing is that _since all the technologies on this path are
Apple's own_, one would suppose they'd be compatible at least to the
level where steps (ii) and (iii) are not needed, nor is splitting the
functionality into two separate applications.
(Well, I wonder... perhaps it does not, in fact, need to be split this
way, and my main app could read the Soundflower input itself just as
well. I haven't tested this ;))
I think there's little point in pursuing this further: I have got my
solution, and I do understand the technical problems which prevent an
easier one. All I have been saying lately is that I can't see why CA
was not designed so that these technical problems don't exist at all,
since that would be comparatively easy (for Apple, who has full access
to all the audio functionality).
Thanks a lot for your comments and all the information,
---
Ondra Čada
OCSoftware: email@hidden http://www.ocs.cz
private email@hidden http://www.ocs.cz/oc