Re: User-land driver for channel mapping
- Subject: Re: User-land driver for channel mapping
- From: Jeff Moore <email@hidden>
- Date: Tue, 8 Apr 2008 13:48:28 -0700
On Apr 8, 2008, at 5:10 AM, Dave Addey wrote:
Hi all,
I'm developing a user-land driver based on the SampleHardwarePlugin
code, to work around a limitation in QuickTime. (I'm aware this is
a less-than-elegant solution.) However, I'm not sure of the best
approach for audio sync, and I'm confused by AudioTimeStamps, so I
thought I'd post a few questions.
Why I'm doing this:
There is currently no way for QuickTime to play one stereo movie to
two different pairs of outputs on a multi-channel audio device
whilst applying separate volume levels to the different channels.
(Radar feature request #4145662.)
My current workaround is to play two copies of a movie instead, with
their timebases slaved together by QuickTime. Whilst this works on
Tiger, it currently causes serious audio dropouts on Leopard (Radar
bug #5615865), and on both platforms it imposes a higher processor
load (nearly 2x the load of one movie) than should be necessary.
At the risk of stating the obvious: why are you using QuickTime for
this in the first place? You'd have much better results and a much
easier time of it if you'd just use the Core Audio API directly.
The fact that you want to write a fake user-land device to solve this
problem seems pathological to me. It indicates that you are really,
really, really far down the wrong path here, I think.
I'll try to answer your questions, but I really think you ought to
examine your reasons for needing to ask them in the first place.
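
For concreteness, here is a rough sketch (not taken from any sample
code) of what driving the hardware device directly through the HAL
could look like: grab a device, register an IO proc on it, and start
it. The names StartDirectOutput and MyIOProc are made up for
illustration; the IO proc below just zeroes its output, with the
actual channel routing sketched after the workaround description
further down.

#include <CoreAudio/CoreAudio.h>
#include <cstring>

static AudioDeviceID       gOutputDevice = kAudioDeviceUnknown;
static AudioDeviceIOProcID gIOProcID     = NULL;

// Placeholder IO proc: zeroes the output buffers for now. The HAL calls
// this on the device's IO thread with the device's own buffers and
// time stamps.
static OSStatus MyIOProc(AudioDeviceID           inDevice,
                         const AudioTimeStamp*   inNow,
                         const AudioBufferList*  inInputData,
                         const AudioTimeStamp*   inInputTime,
                         AudioBufferList*        outOutputData,
                         const AudioTimeStamp*   inOutputTime,
                         void*                   inClientData)
{
    for (UInt32 i = 0; i < outOutputData->mNumberBuffers; ++i)
        std::memset(outOutputData->mBuffers[i].mData, 0,
                    outOutputData->mBuffers[i].mDataByteSize);
    return noErr;
}

// Made-up setup routine: use the default output device (or whichever
// multi-channel device you actually want) and start IO on it.
static OSStatus StartDirectOutput()
{
    UInt32 size = sizeof(gOutputDevice);
    OSStatus err = AudioHardwareGetProperty(kAudioHardwarePropertyDefaultOutputDevice,
                                            &size, &gOutputDevice);
    if (err != noErr) return err;

    err = AudioDeviceCreateIOProcID(gOutputDevice, MyIOProc, NULL, &gIOProcID);
    if (err != noErr) return err;

    return AudioDeviceStart(gOutputDevice, gIOProcID);
}
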
My suggested workaround:
Create a "virtual" stereo output device based on
SampleHardwarePlugin, which "wraps" a real multi-channel hardware
device. This virtual stereo device would receive stereo audio from
QuickTime, and would then route the audio to two different pairs of
audio channels on the wrapped hardware device. For example, the
left input channel to hardware channels 1 and 3, and the right input
channel to hardware channels 2 and 4. The device would be able to
apply different volume levels to the different pairs of outputs by
mixing down the input samples before passing them to the output
device. It should be possible to use the same approach to route the
stereo input to two hardware devices via a wrapped aggregate device.
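
(For illustration only: a rough sketch of that per-frame routing and
gain math, written as a HAL IO proc. It assumes the real device
presents a single stream of at least four Float32 channels, and
PullStereoSource() is a hypothetical callback that fills an
interleaved L/R buffer from whatever is producing the stereo audio.)

#include <CoreAudio/CoreAudio.h>
#include <vector>

static Float32 gGainPair12 = 1.0f;   // volume for hardware channels 1 & 2
static Float32 gGainPair34 = 0.5f;   // volume for hardware channels 3 & 4

// Hypothetical source callback: fills inFrames interleaved L/R sample pairs.
extern void PullStereoSource(Float32* outStereo, UInt32 inFrames);

static OSStatus MyRoutingIOProc(AudioDeviceID           inDevice,
                                const AudioTimeStamp*   inNow,
                                const AudioBufferList*  inInputData,
                                const AudioTimeStamp*   inInputTime,
                                AudioBufferList*        outOutputData,
                                const AudioTimeStamp*   inOutputTime,
                                void*                   inClientData)
{
    AudioBuffer& buffer = outOutputData->mBuffers[0];
    UInt32 channels     = buffer.mNumberChannels;   // assumed >= 4
    UInt32 frames       = buffer.mDataByteSize / (channels * sizeof(Float32));
    Float32* out        = static_cast<Float32*>(buffer.mData);

    // Note: a real implementation would preallocate this buffer;
    // allocating on the IO thread is not real-time safe.
    std::vector<Float32> stereo(2 * frames);
    PullStereoSource(&stereo[0], frames);

    for (UInt32 i = 0; i < frames; ++i)
    {
        Float32 left  = stereo[2 * i];
        Float32 right = stereo[2 * i + 1];
        out[i * channels + 0] = gGainPair12 * left;    // left  -> channel 1
        out[i * channels + 1] = gGainPair12 * right;   // right -> channel 2
        out[i * channels + 2] = gGainPair34 * left;    // left  -> channel 3
        out[i * channels + 3] = gGainPair34 * right;   // right -> channel 4
    }
    return noErr;
}
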
The problem:
I've spent a few days experimenting with SampleHardwarePlugin, and
also with CAPlayThrough. I have playthrough working after a
fashion, by bringing elements of CAPlayThrough into
SampleHardwarePlugin and using an AudioRingBuffer to buffer audio
between QuickTime (aka the virtual device) and a real hardware
device. But the playback is distorted in a way that suggests that
my approach to timing is incorrect.
My questions:
Do I need to use a ring buffer in this scenario? (I can set up my
virtual device to mirror the output stream format of the real
hardware device, to avoid any sample rate differences.) Or could I
just have one fixed-size buffer, which the virtual device writes
into, and the hardware device pulls from if and when data exists in
the buffer?
Why would you buffer anything at all? Why wouldn't you just use the
buffers and time stamps provided by the underlying device?
It seems like you are making things a lot harder than they need to be
with this mash-up of stuff.
Likewise, do I need CAPlayThrough's Varispeed unit given that I have
control over the virtual device's stream format? (I’m not using it
at present.)
Nope. Since you should just be sitting on top of a single underlying
device, there shouldn't be any need to resynchronize anything.
Should I just make my virtual device reflect the current timestamps
of the wrapped device, and if so, what in SampleHardwarePlugin would
need modifying to achieve this? e.g. should the virtual device's
GetCurrentTime(AudioTimeStamp& outTime) just return the wrapped
device's current time?
As near as I can tell, what you want is a fake device that says it is
a stereo device but under the hood sends its data to two stereo pairs
on the same underlying real device.
So beyond the need to expose stereo formats to the outside world, the
rest of your device's properties are basically going to come straight
from the underlying device. Indeed, even your IO pathway is
going to be driven directly by the underlying device. This will
include the time stamps. For example, your device's implementation of
GetCurrentTime() is just going to turn around and call
GetCurrentTime() on the underlying device.
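
(A minimal sketch of that call-through. MyVirtualDevice stands in for
the plug-in's device class, and mWrappedDevice is assumed to hold the
AudioDeviceID of the wrapped hardware device; both names are made up
here.)

#include <CoreAudio/CoreAudio.h>
#include <cstring>

class MyVirtualDevice
{
public:
    void            GetCurrentTime(AudioTimeStamp& outTime);
private:
    AudioDeviceID   mWrappedDevice;
};

void MyVirtualDevice::GetCurrentTime(AudioTimeStamp& outTime)
{
    std::memset(&outTime, 0, sizeof(AudioTimeStamp));
    OSStatus err = AudioDeviceGetCurrentTime(mWrappedDevice, &outTime);
    if (err != noErr)
    {
        // AudioDeviceGetCurrentTime() fails if the wrapped device is not
        // running; a real implementation has to decide what to report then.
    }
}
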
Something I can't work out about AudioTimeStamps... in what scenario
would mRateScalar be something other than 1.0? (I've seen it happen
with the hardware device I am wrapping.) Or to put it another way,
is a device's sample rate not always quite what is reported, and
why? Do I need to worry about this in the scenario I have described?
If you follow my advice on this, you won't have to worry about what
the rate scalar is. You'll just be returning what you got back from
the underlying device.
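
(The same pass-through works for the other time-related calls. For
instance, a time translation request can be handed straight to the
HAL so that the wrapped device's clock, rate scalar and all, does the
work; VirtualDeviceTranslateTime and inWrappedDevice are made-up names
for illustration.)

#include <CoreAudio/CoreAudio.h>

static OSStatus VirtualDeviceTranslateTime(AudioDeviceID          inWrappedDevice,
                                           const AudioTimeStamp&  inTime,
                                           AudioTimeStamp&        outTime)
{
    // Let the HAL convert between sample time and host time using the
    // wrapped device's own clock, including whatever rate scalar it reports.
    return AudioDeviceTranslateTime(inWrappedDevice, &inTime, &outTime);
}
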
--
Jeff Moore
Core Audio
Apple