Re: Audio Units and virtual device implementation
- Subject: Re: Audio Units and virtual device implementation
- From: Jeff Moore <email@hidden>
- Date: Mon, 15 Jun 2009 13:22:34 -0700
I don't think I understand entirely what you are describing here. At
least not when put together like this. I'll try my best though =)
At any rate, I get the part about using a fake audio device
(presumably implemented via a HAL plug-in) to pipe audio to your app's
process from a target process. It sounds to me like this part is
indispensable presuming that all the interesting spatialization
processing is happening in your app's process.
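For what it's worth, the app-side lookup of such a fake device is just ordinary HAL property enumeration; something like this rough sketch, where inUID stands for whatever UID string your plug-in publishes for the fake device:

#include <CoreAudio/CoreAudio.h>
#include <CoreFoundation/CoreFoundation.h>
#include <stdlib.h>

// Sketch: walk the HAL's device list and return the device whose UID matches.
static AudioDeviceID FindDeviceByUID(CFStringRef inUID)
{
    AudioObjectPropertyAddress addr = {
        kAudioHardwarePropertyDevices,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };

    UInt32 size = 0;
    if (AudioObjectGetPropertyDataSize(kAudioObjectSystemObject, &addr, 0, NULL, &size) != noErr || size == 0)
        return kAudioDeviceUnknown;

    UInt32 count = size / sizeof(AudioDeviceID);
    AudioDeviceID *devices = (AudioDeviceID *)malloc(size);
    AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr, 0, NULL, &size, devices);

    AudioDeviceID found = kAudioDeviceUnknown;
    for (UInt32 i = 0; i < count && found == kAudioDeviceUnknown; ++i) {
        AudioObjectPropertyAddress uidAddr = {
            kAudioDevicePropertyDeviceUID,
            kAudioObjectPropertyScopeGlobal,
            kAudioObjectPropertyElementMaster
        };
        CFStringRef uid = NULL;
        UInt32 uidSize = sizeof(uid);
        if (AudioObjectGetPropertyData(devices[i], &uidAddr, 0, NULL, &uidSize, &uid) == noErr && uid) {
            if (CFStringCompare(uid, inUID, 0) == kCFCompareEqualTo)
                found = devices[i];
            CFRelease(uid);
        }
    }
    free(devices);
    return found;
}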
I also get the part about using an AudioUnit in the process that is using
the fake device to send control information to your application about what
to do with the audio. The AU provides parameters for the host to
automate to control the spatialization processing being done in your
app, including a parameter that says what channel on the fake device
the AU is controlling.
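So from the host's point of view it boils down to ordinary parameter automation. A sketch of what that might look like; the parameter IDs and values here are made up, not Zirkonium's actual layout:

#include <AudioUnit/AudioUnit.h>

// Hypothetical parameter IDs -- the real AU's IDs will differ.
enum {
    kParam_DeviceChannel = 0,   // which fake-device channel this AU instance controls
    kParam_Azimuth       = 1,   // spatialization angle, in degrees
    kParam_Zenith        = 2
};

// 'unit' is an opened instance of the spatialization AU inside the host.
static void AutomateSpatialization(AudioUnit unit)
{
    // Bind this AU instance to channel 3 of the fake device (assumed value).
    AudioUnitSetParameter(unit, kParam_DeviceChannel,
                          kAudioUnitScope_Global, 0, 3.0f, 0);

    // Move the source around the dome; a host would normally do this from
    // its automation lanes rather than by hand.
    AudioUnitSetParameter(unit, kParam_Azimuth,
                          kAudioUnitScope_Global, 0, 45.0f, 0);
    AudioUnitSetParameter(unit, kParam_Zenith,
                          kAudioUnitScope_Global, 0, 30.0f, 0);
}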
I'm inferring that the reason for all of this is that you are
ultimately providing a mapping from an N-channel surround format to an
M-channel speaker array. The fake device acts as the N-channel audio
device and sends the audio to your app, which does the processing and
then sends the resulting M channels to the actual hardware.
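At its core that mapping is just an N-in/M-out gain matrix, however fancy the math that computes the gains is. Schematically, assuming deinterleaved float buffers:

#include <stddef.h>

// Mix N source channels down to M speaker feeds through a gain matrix.
// gains[m * numSources + n] is the contribution of source n to speaker m;
// the matrix is whatever the spatialization math produces for the current
// source positions.
static void RenderSpatialized(const float *const *inSources, size_t numSources,
                              float *const *outSpeakers, size_t numSpeakers,
                              const float *gains, size_t numFrames)
{
    for (size_t m = 0; m < numSpeakers; ++m) {
        for (size_t f = 0; f < numFrames; ++f) {
            float sum = 0.0f;
            for (size_t n = 0; n < numSources; ++n)
                sum += gains[m * numSources + n] * inSources[n][f];
            outSpeakers[m][f] = sum;
        }
    }
}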
My first thought about this is that it seems like a pretty convoluted
solution to something that could be done entirely in the fake device.
Basically, you'd just drop all your spatialization processing into the
N-channel fake device and then drive the fake device using the M-
channel real device. Then your AU could be simplified down to one that
simply tells your fake device which real device to use for IO and
exposes all the parameters for automation.
This solution simplifies the signal path, which ought to increase
performance and lower latency while still affording the integration
you are looking for.
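Concretely, the fake device would install an IOProc on the real M-channel device and render the spatialized output straight into its buffers. A very rough sketch; the render call is a stand-in for wherever your processing actually lives:

#include <CoreAudio/CoreAudio.h>

// Placeholder for the fake device's spatialization renderer (assumption).
extern void ZirkRenderSpatialized(AudioBufferList *outOutput, const AudioTimeStamp *inTime);

// IOProc installed on the real M-channel device by the fake device.
static OSStatus RealDeviceIOProc(AudioDeviceID inDevice,
                                 const AudioTimeStamp *inNow,
                                 const AudioBufferList *inInputData,
                                 const AudioTimeStamp *inInputTime,
                                 AudioBufferList *outOutputData,
                                 const AudioTimeStamp *inOutputTime,
                                 void *inClientData)
{
    // Take the N channels the host wrote to the fake device, spatialize,
    // and write the M speaker feeds straight into the real device's buffers.
    ZirkRenderSpatialized(outOutputData, inOutputTime);
    return noErr;
}

// Called when the AU (or a control app) tells the fake device which real
// device to drive.
static AudioDeviceIOProcID StartDrivingRealDevice(AudioDeviceID realDevice)
{
    AudioDeviceIOProcID procID = NULL;
    if (AudioDeviceCreateIOProcID(realDevice, RealDeviceIOProc, NULL, &procID) == noErr)
        AudioDeviceStart(realDevice, procID);
    return procID;
}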
That said, I'm not an AU expert, so I have glossed over a lot of
details on the AU side of things. But merging your N to M processing
into the fake audio device seems like a big win to me - presuming my
understanding of the problem is correct.
On Jun 15, 2009, at 9:21 AM, Colin Martel wrote:
Hello,
I'm working on Zirkonium, a program that has a bit of a particular
setup. To put it briefly, it handles spatialization of multiple (>8)
channel audio for speakers placed in a dome setup. In order to
collaborate with host programs, many of which I'm told don't support
panning over a large number of speakers, it creates a virtual device
with an arbitrary number of channels which the host program can
select. Each channel in the device basically represents an entity
which can then be panned by Zirkonium over the N speakers present
in the dome configuration.
In order to be able to change and automate that panning from within
the host program, an Audio Unit is used which sends the angles and
whatnot to Zirkonium. However, since the audio unit is a separate
piece of code from the virtual device, I'm running into some
difficulty finding an easy way to make sure the audio unit is panning
the channel it's attached to. The current setup is to select the
virtual device channel in the host program, then match that channel in
the audio unit. But since the channel becomes an actual AU parameter,
it is subject to automation and can end up mismatched with the device
channel, so the whole thing is rather confusing and error-prone.
I'm trying to find an alternative, but since I'm very new to
CoreAudio, I'm unsure what can and cannot work. Here are the ideas
I've come up with:
1. Have the audio unit extract the device channel number from the
stream. This would be optimal but impossible as far as I can see, as
it would basically imply making low-level details visible to
AudioUnits that would break their ability to be used independently.
Maybe this could be done by bypassing the AU SDK? Maybe it does
provide a way to access device info and I just didn't see it?
2. Abandon the virtual device implementation and instead pass the
audio frames through the AudioUnit along with the spatialization info.
The AudioUnit would then choose the proper device channel to match the
settings. This sounds like an 'aupn' (panner) unit (see the sketch
after this list), but the examples from Apple that I've seen all
seemed to involve simulating panning over stereo through different
DSP algorithms, as opposed to actually panning. Documentation on
'aupn' units is scarce.
3. Abandon the virtual device implementation and pass the audio frames
through the AudioUnit in a custom stream format, basically bypassing
the HAL. Based on what I can see, this is the only solution that would
work for sure, but sending the information in this custom way sounds
like it might incur latency. Then again, since the virtual device is
already a software solution whether or not it uses the HAL, perhaps it
wouldn't be so bad?
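For reference on idea 2, this is roughly how I understand a panner-type unit gets located and opened from the host side; the subtype and manufacturer codes here are obviously made up:

#include <CoreServices/CoreServices.h>
#include <AudioUnit/AudioUnit.h>

// A panner Audio Unit is identified by component type 'aupn'
// (kAudioUnitType_Panner). Subtype and manufacturer below are hypothetical.
static AudioUnit OpenPannerUnit(void)
{
    ComponentDescription desc = { 0 };
    desc.componentType         = kAudioUnitType_Panner;   // 'aupn'
    desc.componentSubType      = 'zirk';                   // made-up subtype
    desc.componentManufacturer = 'Demo';                   // made-up manufacturer

    Component comp = FindNextComponent(NULL, &desc);
    if (comp == NULL)
        return NULL;

    AudioUnit unit = NULL;
    OpenAComponent(comp, &unit);
    return unit;
}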
I realize this is probably a bit outside of what AudioUnits are
supposed to do, but the original programmer tells me host programs
rarely handle speaker configurations with more speakers than the
standard surround setups, hence the external program requirement. If I
could at least make that interaction simpler, then it'd add a lot of
value to the AudioUnit setup.
Sorry for the lengthy explanations, but I think this is a bit outside
standard fare so I want everything to be clear. :)
-cmartel
--
Jeff Moore
Core Audio
Apple