Re: Multiple Stereo streams vs. multi-channel stream
- Subject: Re: Multiple Stereo streams vs. multi-channel stream
- From: Bill Stewart <email@hidden>
- Date: Wed, 30 Apr 2003 00:10:42 -0700
The design intention of the HAL was to let a device publish itself in the
format that is most efficient for it, so the driver can do as little work
as possible. This matters both for the CPU usage incurred when an app
interacts with the driver and for resource issues like memory usage,
where kernel/wired memory is an expensive resource.
So the question is: what is the most natural way of representing your
device to an application? It sounds to me like this is a set of mono
streams - which is actually a very efficient way of publishing the
device's channels.
Why? Clients of the device can choose which channels they are using and
can turn off the streams of the device that aren't in use. We are doing
some of this already in Jaguar, but not as much as we should - with the
output units and/or the SoundManager. We will have this working
completely in the next OS release. This can make a considerable
difference to the CPU usage.
Both the SoundMgr and the output units already deal correctly with
these mono-streamed devices. The Mobile I/O and some other drivers
publish their channels as mono streams, and I think most of the apps
are working with those devices now (there were some teething problems,
but I think all of these have been fixed).
Bill
On Tuesday, April 29, 2003, at 11:05 PM, BlazeAudio Developer wrote:
We have a device whose hardware/DMA is designed for multiple stereo
(interleaved) streams.
The first approach we took with our driver was to create 8 audio
engines, each having one stereo stream.
This works fine, but most programs will only work with one audio engine
at a time.
So, we took another approach - we created a single audio engine with 8
stereo streams (the hardware supports treating all 8 DMA engines as
one, so we don't really have a sync issue).
This scheme works well with Logic Platinum 5.5.
However, the driver breaks with other software like Cubase SX and
Spark ME.
Is this a known problem?
What's the best approach to solving this?
One alternative is to do double-buffering: create a single
multi-channel stream, and then distribute the data coming from
CoreAudio into the 8 separate DMA buffers. But that doesn't seem like
quite the right thing to do.
Thanks.
Devendra.
_______________________________________________
coreaudio-api mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/coreaudio-api
Do not post admin requests to the list. They will be ignored.
--
mailto:email@hidden
tel: +1 408 974 4056
__________________________________________________________________________
"Much human ingenuity has gone into finding the ultimate Before.
The current state of knowledge can be summarized thus:
In the beginning, there was nothing, which exploded" - Terry Pratchett
__________________________________________________________________________