Re: Multiple Input/Output project Architecture
- Subject: Re: Multiple Input/Output project Architecture
- From: William Stewart <email@hidden>
- Date: Tue, 16 Oct 2007 16:29:08 -0700
On Oct 16, 2007, at 2:38 PM, David McMahon wrote:
A few issues immediately spring to mind.
Can I create an aggregate device on-the-fly in code given a selection of hardware devices ?
I'm not sure what you mean by "on the fly" - you can certainly create your own aggregate devices, start and stop them, etc...
By 'on the fly' I meant two things. I'd like to have a 'monitor' and a 'main' output at least on the output side of things (And a 'cue' output but that would be separate from the main output), both playing exactly the same audio. I want the user to specify what device (and outputs if applicable) they want these signals to go to. So the first part is...
1 - Can I create an aggregate output device at run time from user specified devices ?
I believe the answer to this is a resounding 'Yes'. But the second part is...
Yes
2 - If the user wants to change devices while audio is playing, can I change the aggregate device to accommodate this without interrupting the main output (assuming it's the 'monitor' output being changed of course)?
No
So, perhaps the best way to handle this is to create an "über" aggregate device (an aggregate of every device found on the system) and then just route your audio as needed. The output channel routing in AULab could be used as a guide here if you want to look at some UI we've done for the kinds of things you want to do.
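A minimal sketch of creating an aggregate device programmatically, using `AudioHardwareCreateAggregateDevice` (the current public API, added to `AudioHardware.h` after this 2007 thread; earlier systems went through the HAL plug-in instead). The aggregate name and UID strings here are placeholders; the sub-device UIDs would come from `kAudioDevicePropertyDeviceUID` on each device the user picked:

```c
#include <CoreAudio/CoreAudio.h>
#include <CoreFoundation/CoreFoundation.h>

// Build the description dictionary and ask the HAL to create the aggregate.
// Caller supplies a CFArray of CFStringRef device UIDs.
static OSStatus CreateAggregateFromUIDs(CFArrayRef subDeviceUIDs,
                                        AudioObjectID *outAggregate)
{
    CFIndex count = CFArrayGetCount(subDeviceUIDs);
    CFMutableArrayRef subList =
        CFArrayCreateMutable(NULL, count, &kCFTypeArrayCallBacks);

    // Each entry in the sub-device list is a small dictionary naming one UID.
    for (CFIndex i = 0; i < count; i++) {
        CFStringRef uid = (CFStringRef)CFArrayGetValueAtIndex(subDeviceUIDs, i);
        CFStringRef keys[]   = { CFSTR(kAudioSubDeviceUIDKey) };
        CFTypeRef   values[] = { uid };
        CFDictionaryRef sub = CFDictionaryCreate(
            NULL, (const void **)keys, (const void **)values, 1,
            &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
        CFArrayAppendValue(subList, sub);
        CFRelease(sub);
    }

    CFStringRef keys[] = {
        CFSTR(kAudioAggregateDeviceNameKey),
        CFSTR(kAudioAggregateDeviceUIDKey),
        CFSTR(kAudioAggregateDeviceSubDeviceListKey)
    };
    CFTypeRef values[] = {
        CFSTR("My Aggregate"),              // placeholder display name
        CFSTR("com.example.myaggregate"),   // placeholder UID; must be unique
        subList
    };
    CFDictionaryRef desc = CFDictionaryCreate(
        NULL, (const void **)keys, (const void **)values, 3,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);

    OSStatus err = AudioHardwareCreateAggregateDevice(desc, outAggregate);
    CFRelease(desc);
    CFRelease(subList);
    return err;  // pair with AudioHardwareDestroyAggregateDevice() on teardown
}
```

The aggregate shows up like any other device, so a single AUHAL instance can then be pointed at it.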
That one I'm not so sure about.
Can I have multiple subgraphs each having their own separate input device ?
Subgraphs are, I think, more conceptual than useful in practice. For instance, with AULab we decided not to use subgraphs, but rather to have our own topology objects represent different parts of the main graph. You can get a sense of that by making different session documents and then going to the debug menu and printing out the graph.
I need to take a closer look at AULab! The last time I did anything with Core Audio was shortly after it became available, and that was with MIDI, so I've got a lot of catching up to do. Can you have multiple input devices?

Yes, but you have clocking issues to worry about. So either you can use an aggregate device for this and have it resample the inputs to one master device (or, ideally, have them synced in hardware), or you can do your own varispeeding.
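The "do your own varispeeding" option amounts to resampling each input against a master clock. A minimal single-channel sketch using linear interpolation (the function name is mine, and the fixed `ratio` stands in for a drift estimate that a real implementation would update continuously from clock measurements):

```c
#include <stddef.h>

// Resample one channel of audio from srcRate to dstRate by linear
// interpolation. Returns the number of output frames produced; stops
// early when fewer than two input samples remain to interpolate between.
static size_t varispeed(const float *in, size_t inFrames,
                        float *out, size_t maxOut,
                        double srcRate, double dstRate)
{
    double ratio = srcRate / dstRate;  // input frames consumed per output frame
    double pos = 0.0;                  // fractional read position in `in`
    size_t produced = 0;
    while (produced < maxOut) {
        size_t i = (size_t)pos;
        if (i + 1 >= inFrames) break;  // need two points to interpolate
        double frac = pos - (double)i;
        out[produced++] = (float)((1.0 - frac) * in[i] + frac * in[i + 1]);
        pos += ratio;
    }
    return produced;
}
```

For example, converting a 10-frame buffer from 44100 Hz to 88200 Hz (ratio 0.5) yields 18 output frames, each halfway between its neighbours.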
This has me wondering: are we talking about latency problems, or is this a technical issue? I was assuming that I could basically plug a couple of AUFilePlayers and a couple of AUHAL hardware devices into a mixer and get a mixed signal out of the other end. For my application a little latency wouldn't matter at all (I'm looking at a radio DJ setup: virtual decks, cart machines, and a couple of microphones).
Nope. The file players are fine - they just run off the frequency at which you pull them for data. The multiple AUHALs are the problem - and I think you would really deal with this the way AUHAL does: have the user create an aggregate device of the ins and outs they want, then use a single instance of AUHAL to talk to that device, and provide routing options for input and output channels.
The Debug print menu from AULab is instructive about how you can set up this kind of routing.
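On the render side, the single-AUHAL-plus-routing idea reduces to copying the app's stereo mix into whichever channel pairs of the aggregate's wide interleaved buffer the user assigned to "main", "monitor", or "cue". A plain-C sketch (the function name and the fixed-stereo-pair assumption are mine, not from AULab):

```c
#include <stddef.h>

// Copy an interleaved stereo mix into one or more user-chosen stereo
// pairs of a wide interleaved device buffer. destPairs holds the left
// channel index of each destination pair; left + 1 receives the right.
static void route_stereo(const float *stereo, size_t frames,
                         float *device, size_t deviceChannels,
                         const size_t *destPairs, size_t pairCount)
{
    for (size_t f = 0; f < frames; f++) {
        for (size_t p = 0; p < pairCount; p++) {
            size_t left = destPairs[p];
            device[f * deviceChannels + left]     = stereo[f * 2];
            device[f * deviceChannels + left + 1] = stereo[f * 2 + 1];
        }
    }
}
```

With, say, a 6-channel aggregate, routing the same mix to pairs starting at channels 0 and 4 gives identical "main" and "monitor" feeds, and changing the pair indices re-routes without touching the device itself.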
Thank you for the advice. After some of the people I've been forced to deal with recently, it's a real breath of fresh air to get such invaluable information and such a full answer.
Sure - I try to make it sound easy! :)
Bill
_______________________________________________
Coreaudio-api mailing list (email@hidden)