Re: Stream enables [was: fIsMixable]
- Subject: Re: Stream enables [was: fIsMixable]
- From: Brian Willoughby <email@hidden>
- Date: Tue, 17 Jun 2003 23:13:24 -0700
Oops, I read the reply below after composing my own response. The situation
does seem a bit worse from this point of view.
I had not considered the relative efficiency of the logic involved with
per-channel enables versus selecting a physical device format as an entire
collection of channels. It certainly seems more understandable for the
programmer to specify an On/Off value for each channel than to wade
through a list of channel combinations to find the most efficient one that
meets the minimum requirements.
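To make the contrast concrete, here is a minimal sketch of the format-list approach: scanning the device's published formats for the cheapest one that covers the client's channel needs. The types and names are purely illustrative, not the actual CoreAudio API.

```c
#include <stddef.h>

/* Stand-in for an AudioStreamBasicDescription; only channel count matters
   for this sketch. */
typedef struct { unsigned channels; } Format;

/* Return the index of the format with the fewest channels that still
   satisfies min_channels, or -1 if no format qualifies. */
static int pick_cheapest_format(const Format *formats, size_t count,
                                unsigned min_channels)
{
    int best = -1;
    for (size_t i = 0; i < count; i++) {
        if (formats[i].channels < min_channels)
            continue;                       /* too few channels */
        if (best < 0 || formats[i].channels < formats[best].channels)
            best = (int)i;                  /* cheaper qualifying format */
    }
    return best;
}
```

With per-channel enables, the client would instead just set an On/Off flag per channel and skip this search entirely.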
But, regardless of the best way to do this, I have also realized that the real
problem seems to be that there needs to be some way for multiple client
applications to negotiate a minimum stream configuration that meets all needs.
My current algorithm changes the physical device format to the most efficient
channel allocation for the current application, but I just realized that if
another application is already running with surround output, then a stereo
application might switch the device to stereo without considering that another
application needs more channels. Without getting into hog mode, is there any
way to negotiate the most efficient channel configuration when multiple clients
have different needs?
I certainly understand that a stereo application cannot guarantee that its two
outputs will appear on the proper channels of a surround output device, but
let's assume that this can be handled. What I would like is for the stereo
application to mix into the surround stream if another application is playing
surround output, but I would prefer to switch the audio device to a stereo-only
physical format if no other application needs more than two channels.
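The negotiation I have in mind could be as simple as taking the maximum channel requirement across all active clients, so a stereo client never narrows the device format while a surround client is running. A minimal sketch, with purely hypothetical names (this is not an existing CoreAudio call):

```c
#include <stddef.h>

/* Hypothetical negotiation: each client reports how many output channels
   it needs; the device format is sized to the largest requirement,
   clamped to what the hardware supports. */
static unsigned negotiate_channel_count(const unsigned *client_needs,
                                        size_t num_clients,
                                        unsigned device_max_channels)
{
    unsigned needed = 2;                    /* fall back to stereo */
    for (size_t i = 0; i < num_clients; i++)
        if (client_needs[i] > needed)
            needed = client_needs[i];       /* surround client wins */
    return needed < device_max_channels ? needed : device_max_channels;
}
```

Re-running this whenever a client starts or stops would let the device drop back to a stereo physical format once the last surround client exits.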
P.S. I am wondering why CoreAudio or the device driver cannot mix all client
input as float values before doing a single conversion from float to the native
device format. Is there a reason why the conversions are done per client? Or
am I missing a distinction between input and output? Are you talking about
recording audio from the device inputs to multiple client applications, or
playing audio from multiple client applications to the device outputs?
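For the output direction, the "mix first, convert once" idea in the P.S. would look something like the following sketch: sum all client buffers as floats, then perform a single float-to-fixed conversion per output sample. This is an illustration of the concept, not how CoreAudio actually structures its mixing.

```c
#include <stdint.h>
#include <stddef.h>

/* Clip to full scale and convert one float sample to 16-bit fixed. */
static int16_t float_to_int16(float s)
{
    if (s > 1.0f)  s = 1.0f;
    if (s < -1.0f) s = -1.0f;
    return (int16_t)(s * 32767.0f);
}

/* Mix every client's float buffer, then convert each mixed sample to the
   native device format exactly once (instead of once per client). */
static void mix_and_convert(const float *const *client_bufs,
                            size_t num_clients,
                            size_t num_samples,
                            int16_t *device_buf)
{
    for (size_t n = 0; n < num_samples; n++) {
        float mix = 0.0f;
        for (size_t c = 0; c < num_clients; c++)
            mix += client_bufs[c][n];       /* mix in float... */
        device_buf[n] = float_to_int16(mix); /* ...convert once */
    }
}
```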
Brian Willoughby
Sound Consulting
Begin forwarded message:
From: "B.J. Buchalter" <email@hidden>
Subject: Re: Stream enables [was: fIsMixable]
> Actually, it is possible to do that with CoreAudio today (we have it
> running in one of our drivers).
You can enable the use of stream enables, but it is meaningless. See below:
> This requires creating multiple streams on a single AudioEngine (pretty
> much like the ASIO channels).
Yes, but even if you do, the driver cannot optimize on the basis of stream
enables, and the amount of optimization that the HAL can do is trivial. So,
while the API is present, the feature is not implemented in any meaningful
way.
> However, if the hardware requires you to send data for all the (20 or
> so!) channels to be present in the DMA buffer, the driver has the task of
> putting the data in appropriate "slots" - but that also applies for ASIO!
No -- it doesn't. In ASIO the driver is informed which channels are active,
so it can optimize for unused slots. In the current CoreAudio implementation
only the HAL has access to the stream enable data, and the driver is unable
to take advantage of possible optimizations. Since all drivers are
responsible for converting Fixed<->Float, there are significant possible
optimizations that the driver could make if it were aware of the stream
enables.
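The optimization being described could be sketched as follows: if the driver's float-to-fixed pass knew the per-channel enables, it could skip the conversion for disabled slots in the interleaved DMA buffer and just write silence. A hypothetical illustration, not actual IOAudioFamily code:

```c
#include <stdint.h>
#include <stddef.h>

/* Convert only the enabled channels of an interleaved float buffer into
   the interleaved DMA buffer; disabled slots are zeroed without paying
   for a multiply and conversion. Samples are scaled to 24-bit fixed
   point carried in 32-bit slots. */
static void convert_enabled_channels(const float *in,     /* interleaved */
                                     int32_t *dma,        /* interleaved */
                                     const int *enabled,  /* per channel */
                                     size_t num_channels,
                                     size_t num_frames)
{
    for (size_t f = 0; f < num_frames; f++) {
        for (size_t ch = 0; ch < num_channels; ch++) {
            size_t i = f * num_channels + ch;
            dma[i] = enabled[ch] ? (int32_t)(in[i] * 8388607.0f) : 0;
        }
    }
}
```

With 100+ channels and only a stereo pair enabled, skipping the disabled slots is exactly the savings a stereo-only user would expect.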
To the HAL folks: I have a bug in on this and I have a DTS incident on this
(neither of which have gone anywhere) -- it is a real problem for those of
us who make devices that can support literally > 100 channels (streams).
Users do not understand why a driver would take more CPU when they are "only
doing stereo". The current situation is a step back from ASIO. Please
propagate the stream enable information down to the userclient so that we
can optimize our drivers for dynamic channel allocation.
On a similar note -- the current implementation of the HAL/userclient causes
the driver to do INT->FLOAT conversions on all input channels for every
client! This means that if N clients are running, the driver does N
conversions for each input sample. This is incredibly wasteful, especially
if the client isn't even using the input (as with iTunes). Please let's think
about how to get this fixed. I think it would be possible for a driver to
fix this by replacing parts of the user client, but the problem is that this
is something that affects all drivers, and as such really ought to be fixed
in the IOAudioFamily.
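[The shared-conversion fix proposed above might be sketched like this: convert the device's fixed-point input buffer to float once per I/O cycle, and hand each client a pointer to the shared float buffer rather than repeating the INT->FLOAT pass per client. All names here are illustrative, not the IOAudioFamily API.]

```c
#include <stdint.h>
#include <stddef.h>

/* One input cycle: a single INT->FLOAT pass over the device buffer,
   then every client gets a view of the same shared float buffer, so the
   conversion cost is constant regardless of client count. */
static void input_cycle(const int16_t *device_in, size_t num_samples,
                        float *shared_float,        /* converted once  */
                        const float **client_views, /* one per client  */
                        size_t num_clients)
{
    for (size_t n = 0; n < num_samples; n++)
        shared_float[n] = device_in[n] / 32768.0f;  /* single conversion */
    for (size_t c = 0; c < num_clients; c++)
        client_views[c] = shared_float;             /* no per-client work */
}
```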
_______________________________________________
coreaudio-api mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/coreaudio-api
Do not post admin requests to the list. They will be ignored.