User-land driver for channel mapping
- Subject: User-land driver for channel mapping
- From: Dave Addey <email@hidden>
- Date: Tue, 08 Apr 2008 13:10:18 +0100
- Thread-topic: User-land driver for channel mapping
Hi all,
I'm developing a user-land driver based on the SampleHardwarePlugin code, to work around a limitation in QuickTime. (I'm aware this is a less-than-elegant solution.) However, I'm not sure of the best approach for audio sync, and I'm confused by AudioTimeStamps, so I thought I'd post a few questions.
Why I'm doing this:
There is currently no way for QuickTime to play one stereo movie to two different pairs of outputs on a multi-channel audio device whilst applying separate volume levels to the different channels. (Radar feature request #4145662.)
My current workaround is to play two copies of the movie instead, with their timebases slaved together by QuickTime. Whilst this works on Tiger, it causes serious audio dropouts on Leopard (Radar bug #5615865), and on both platforms it uses nearly twice the processor load of a single movie, which shouldn't be necessary.
My suggested workaround:
Create a "virtual" stereo output device based on SampleHardwarePlugin, which "wraps" a real multi-channel hardware device. This virtual stereo device would receive stereo audio from QuickTime, and would then route the audio to two different pairs of audio channels on the wrapped hardware device. For example, the left input channel to hardware channels 1 and 3, and the right input channel to hardware channels 2 and 4. The device would be able to apply different volume levels to the different pairs of outputs by mixing down the input samples before passing them to the output device. It should be possible to use the same approach to route the stereo input to two hardware devices via a wrapped aggregate device.
The problem:
I've spent a few days experimenting with SampleHardwarePlugin, and also with CAPlayThrough. I have playthrough working after a fashion, by bringing elements of CAPlayThrough into SampleHardwarePlugin and using an AudioRingBuffer to buffer audio between QuickTime (i.e. the virtual device it plays to) and a real hardware device. But the playback is distorted in a way that suggests my approach to timing is incorrect.
My questions:
Do I need to use a ring buffer in this scenario? (I can set up my virtual device to mirror the output stream format of the real hardware device, to avoid any sample rate differences.) Or could I just have one fixed-size buffer, which the virtual device writes into, and the hardware device pulls from if and when data exists in the buffer?
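By "one fixed-size buffer" I really mean just a plain first-in-first-out queue with no timestamps attached, something like the sketch below. (Every name here is my own, and I've deliberately glossed over making the read/write counters safe to touch from the two different IOProc threads, just to keep the sketch short.)

#include <CoreAudio/CoreAudio.h>
#include <vector>
#include <algorithm>

// A plain single-writer / single-reader FIFO of interleaved stereo frames.
class StereoFIFO {
public:
    explicit StereoFIFO(UInt32 capacityFrames)
        : mStore(capacityFrames * 2, 0.0f), mCapacity(capacityFrames),
          mReadFrames(0), mWriteFrames(0) {}

    // Writer side: called from the virtual device with QuickTime's output.
    // Returns how many frames were actually accepted.
    UInt32 Write(const float* stereoIn, UInt32 frames) {
        UInt32 space = mCapacity - (UInt32)(mWriteFrames - mReadFrames);
        UInt32 todo  = std::min(frames, space);
        for (UInt32 i = 0; i < todo; ++i) {
            UInt32 slot = (UInt32)((mWriteFrames + i) % mCapacity);
            mStore[slot * 2]     = stereoIn[i * 2];
            mStore[slot * 2 + 1] = stereoIn[i * 2 + 1];
        }
        mWriteFrames += todo;
        return todo;
    }

    // Reader side: called from the real hardware device's IOProc. Returns
    // how many frames were available; the caller fills the rest with silence.
    UInt32 Read(float* stereoOut, UInt32 frames) {
        UInt32 avail = (UInt32)(mWriteFrames - mReadFrames);
        UInt32 todo  = std::min(frames, avail);
        for (UInt32 i = 0; i < todo; ++i) {
            UInt32 slot = (UInt32)((mReadFrames + i) % mCapacity);
            stereoOut[i * 2]     = mStore[slot * 2];
            stereoOut[i * 2 + 1] = mStore[slot * 2 + 1];
        }
        mReadFrames += todo;
        return todo;
    }

private:
    std::vector<float> mStore;    // interleaved L/R samples
    UInt32             mCapacity; // capacity in frames
    UInt64             mReadFrames;
    UInt64             mWriteFrames;
};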
Likewise, do I need CAPlayThrough's Varispeed unit given that I have control over the virtual device's stream format? (I’m not using it at present.)
Should I just make my virtual device reflect the current timestamps of the wrapped device, and if so, what in SampleHardwarePlugin would need modifying to achieve this? For example, should the virtual device's GetCurrentTime(AudioTimeStamp& outTime) simply return the wrapped device's current time?
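To illustrate what I'm picturing, something along these lines (written as a free function for clarity; in SampleHardwarePlugin I assume it would live in SHP_Device, with the wrapped device's AudioDeviceID held as a member I'd add myself):

#include <CoreAudio/CoreAudio.h>
#include <CoreAudio/HostTime.h>
#include <string.h>

// Have the virtual device report the wrapped hardware device's current time.
static void GetWrappedDeviceTime(AudioDeviceID     wrappedDevice,
                                 AudioTimeStamp&   outTime)
{
    memset(&outTime, 0, sizeof(AudioTimeStamp));

    OSStatus err = AudioDeviceGetCurrentTime(wrappedDevice, &outTime);
    if (err != kAudioHardwareNoError) {
        // A device that isn't running can't report a current time, so fall
        // back to "now" in host time only.
        outTime.mHostTime = AudioGetCurrentHostTime();
        outTime.mFlags    = kAudioTimeStampHostTimeValid;
    }
}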
There's also something I can't work out about AudioTimeStamps: in what scenario would mRateScalar be anything other than 1.0? (I've seen it happen with the hardware device I am wrapping.) To put it another way, is a device's actual sample rate not always quite what it reports, and if so, why? Do I need to worry about this in the scenario I've described?
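For context, this is roughly how I've been sanity-checking the timestamps my wrapped device delivers. All of the scaffolding below is my own; the only Core Audio pieces are AudioConvertHostTimeToNanos and the AudioTimeStamp fields themselves. The measured rate comes out very slightly away from the nominal rate, which (as far as I can tell) is the same drift that mRateScalar is describing.

#include <CoreAudio/CoreAudio.h>
#include <CoreAudio/HostTime.h>

// Frames per second the device actually delivered between two timestamps
// taken from successive IOProc callbacks.
static Float64 MeasuredSampleRate(const AudioTimeStamp& earlier,
                                  const AudioTimeStamp& later)
{
    Float64 sampleDelta = later.mSampleTime - earlier.mSampleTime;
    Float64 hostSeconds =
        (AudioConvertHostTimeToNanos(later.mHostTime) -
         AudioConvertHostTimeToNanos(earlier.mHostTime)) * 1.0e-9;
    return sampleDelta / hostSeconds;
}

// Deviation of the measured rate from the nominal rate (e.g. 44100.0).
// Exactly 1.0 would mean the device clock matches its nominal rate.
static Float64 RateDeviation(const AudioTimeStamp& earlier,
                             const AudioTimeStamp& later,
                             Float64 nominalSampleRate)
{
    return MeasuredSampleRate(earlier, later) / nominalSampleRate;
}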
Thanks in advance for any help, and thanks for making the SampleHardwarePlugin code available!
All the best,
Dave.