Audio Units and virtual device implementation
- Subject: Audio Units and virtual device implementation
- From: Colin Martel <email@hidden>
- Date: Mon, 15 Jun 2009 12:21:29 -0400
Hello,
I'm working on Zirkonium, a program with a somewhat particular setup.
Briefly, it handles spatialization of multi-channel (>8) audio for
speakers placed in a dome. To interoperate with host programs, many of
which I'm told don't support panning across a large number of
speakers, it creates a virtual device with an arbitrary number of
channels which the host program can select. Each channel in the device
basically represents an entity which can then be panned by Zirkonium
across the N speakers present in the dome configuration.
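For reference, here's roughly how a client sees such a device's
channel count through the HAL. This is only an illustrative sketch
using the standard AudioObject property API, not Zirkonium's actual
code:

    #include <CoreAudio/CoreAudio.h>
    #include <stdlib.h>

    /* Count the output channels a device publishes -- roughly what a
       host sees when it offers the virtual device for selection. */
    static UInt32 OutputChannelCount(AudioDeviceID device)
    {
        AudioObjectPropertyAddress addr = {
            kAudioDevicePropertyStreamConfiguration,
            kAudioDevicePropertyScopeOutput,
            kAudioObjectPropertyElementMaster
        };
        UInt32 size = 0;
        if (AudioObjectGetPropertyDataSize(device, &addr, 0, NULL,
                                           &size) != noErr)
            return 0;

        AudioBufferList *bufs = (AudioBufferList *)malloc(size);
        UInt32 channels = 0;
        if (AudioObjectGetPropertyData(device, &addr, 0, NULL,
                                       &size, bufs) == noErr) {
            for (UInt32 i = 0; i < bufs->mNumberBuffers; i++)
                channels += bufs->mBuffers[i].mNumberChannels;
        }
        free(bufs);
        return channels;
    }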
In order to change and automate that panning from within the host
program, an Audio Unit is used to send the angles and so on to
Zirkonium. However, since the Audio Unit is a separate piece of code
from the virtual device, I'm having difficulty finding an easy way to
make sure the Audio Unit is panning the channel it's attached to. The
current setup is to select the virtual device channel in the host
program, then match that channel in the Audio Unit. But since the
channel becomes an actual AU parameter, it is subject to automation
and can end up mismatched with the device channel, and the whole thing
is rather confusing and error-prone.
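To make the failure mode concrete: the channel selector looks like
any other parameter to the host, so automation (or a stray gesture)
can move it. A minimal sketch of that situation, using a hypothetical
parameter ID -- these names are not from the actual Zirkonium code:

    #include <AudioUnit/AudioUnit.h>

    /* Hypothetical ID for the channel-selector parameter. */
    enum { kZirkParam_DeviceChannel = 0 };

    /* The host (or its automation engine) can move the selector like
       any other parameter; nothing ties it to the device channel
       actually feeding the unit. */
    static OSStatus SetChannel(AudioUnit zirkUnit, Float32 channelIndex)
    {
        return AudioUnitSetParameter(zirkUnit,
                                     kZirkParam_DeviceChannel,
                                     kAudioUnitScope_Global,
                                     0,            /* element */
                                     channelIndex, /* e.g. 3.0f */
                                     0);           /* buffer offset */
    }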
I'm trying to find an alternative, but since I'm very new to
CoreAudio, I'm unsure what can and cannot work. Here are the ideas
I've come up with:
1. Have the Audio Unit extract the device channel number from the
stream. This would be optimal but impossible as far as I can see, as
it would basically mean exposing low-level details to AudioUnits and
breaking their ability to be used independently. Maybe this could be
done by bypassing the AU SDK? Or maybe it does provide a way to access
device info and I just didn't see it?
2. Abandon the virtual device implementation and instead pass the
audio frames through the AudioUnit along with the spatialization info.
The AudioUnit would then choose the proper device channel to match the
settings. This sounds like an 'aupn' (panner) unit, but the examples
from Apple that I've seen all seemed to involve simulating panning
over stereo with different DSP algorithms, as opposed to actual
speaker panning. Documentation on 'aupn' units is scarce (see the
first sketch after this list).
3. Abandon the virtual device implementation and pass the audio frames
through the AudioUnit in a custom stream format, basically bypassing
the HAL. From what I can see, this is the only solution guaranteed to
work, but sending the information in this custom way sounds like it
might incur latency. Then again, since the virtual device is already a
software solution whether or not it uses the HAL, perhaps it wouldn't
be so bad? (A sketch of such a format follows the list.)
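On idea 2: a panner unit is declared by its component type alone. The
subtype and manufacturer below are invented placeholders, so take this
as a sketch of the registration, not working Zirkonium code:

    #include <AudioUnit/AudioUnit.h>

    static AudioComponent FindZirkPanner(void)
    {
        AudioComponentDescription desc = {
            .componentType         = kAudioUnitType_Panner, /* 'aupn' */
            .componentSubType      = 'zpan',  /* hypothetical */
            .componentManufacturer = 'Zkm ',  /* hypothetical */
            .componentFlags        = 0,
            .componentFlagsMask    = 0
        };
        /* AudioComponentFindNext is the 10.6 API; on earlier systems
           the equivalent is FindNextComponent with a
           ComponentDescription. */
        return AudioComponentFindNext(NULL, &desc);
    }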
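And on idea 3: a custom stream format would just be a non-PCM
AudioStreamBasicDescription that the AU chain passes through opaquely.
Every constant here is invented for illustration -- a sketch of the
shape of the idea, assuming one interleaved pan angle per sample:

    #include <CoreAudio/CoreAudioTypes.h>

    /* Hypothetical interleaved frame: a sample plus its pan angle. */
    typedef struct { Float32 sample; Float32 panAngle; } ZirkFrame;

    static const AudioStreamBasicDescription kZirkFormat = {
        .mSampleRate       = 44100.0,
        .mFormatID         = 'zirk',  /* hypothetical format ID */
        .mFormatFlags      = 0,
        .mBytesPerPacket   = sizeof(ZirkFrame),
        .mFramesPerPacket  = 1,
        .mBytesPerFrame    = sizeof(ZirkFrame),
        .mChannelsPerFrame = 1,
        .mBitsPerChannel   = 0        /* 0 for non-PCM formats */
    };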
I realize this is probably a bit outside of what AudioUnits are
supposed to do, but the original programmer tells me host programs
rarely handle speaker configurations with more speakers than the
standard surround setups, hence the external program requirement. If I
could at least make that interaction simpler, then it'd add a lot of
value to the AudioUnit setup.
Sorry for the lengthy explanations, but I think this is a bit outside
standard fare so I want everything to be clear. :)
-cmartel