Format Info and the HAL (was Re: kAudioDevicePropertyStreamFormat error on Tiger)
- Subject: Format Info and the HAL (was Re: kAudioDevicePropertyStreamFormat error on Tiger)
- From: Jeff Moore <email@hidden>
- Date: Wed, 16 Nov 2005 14:55:44 -0800
I'm not picking on Mr. Feira, but his questions and the code he
posted touch on a topic in the HAL that is not so well understood:
how to deal with formats. I figured that now would be an opportune
time to share some info on the topic.
First a little history: Back in Mac OS X 10.0 (aka Cheetah), the HAL
had no concept of data streams as independent entities. All data, no
matter how many channels or how it was transferred in and out of the
box, was expected to be fully interleaved and had to be represented
by a single AudioStreamBasicDescription. The HAL's API presented this
through the device's master channel (channel 0). This scheme was
simple, clear, and easy to deal with.
Unfortunately, this scheme also made it very difficult to write an
efficient driver for hardware whose data streams weren't fully
interleaved, especially if the device had a large number of channels.
From the hardware's point of view, the best approach was to spare the
driver any interleaving/deinterleaving work by presenting its channels
of data grouped together in the way most natural for the device. For
example, the MotU 828mk1 most naturally presents itself as having
three streams: an 8 channel interleaved stream, a 2 channel
interleaved stream, and, if you have LightPipe support enabled,
another 8 channel interleaved stream.
This new bit of complexity necessarily changed how one thinks about
the format of an AudioDevice. Instead of being representable by a
single ASBD, the format of an AudioDevice is now a vector of ASBDs,
with one entry per stream on the device. In order to provide
access to this new information, the HAL's API gained support for
AudioStream objects in 10.1 (aka Puma).
In order to maintain backward compatibility with the older API, the
HAL forwards all format related calls sent to an AudioDevice object
to the appropriate AudioStream object. In the case of the master
channel, the call is forwarded to the first stream. For devices that
have only one stream, apps that don't know about streams won't see
any differences. Unfortunately, such apps are, by definition,
incapable of properly dealing with devices that have multiple
streams. The code that Mr. Feira posted is a good example of such code.
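To make the limitation concrete, here's a minimal sketch of that kind
of device-level query, written against the HAL's C API; the wrapper
name is mine, and error handling is omitted. On a multi-stream device,
this reports the format of the first stream only:

#include <CoreAudio/CoreAudio.h>

// Queries the format on the device's master channel (channel 0).
// The HAL forwards this to the device's first stream, so on a
// multi-stream device the answer is incomplete.
static OSStatus GetDeviceFormatTheOldWay(AudioDeviceID inDevice,
                                         Boolean inIsInput,
                                         AudioStreamBasicDescription *outFormat)
{
    UInt32 theSize = sizeof(AudioStreamBasicDescription);
    return AudioDeviceGetProperty(inDevice, 0, inIsInput,
                                  kAudioDevicePropertyStreamFormat,
                                  &theSize, outFormat);
}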
In order to properly support devices that have more than one stream,
applications have to stop dealing with format information at the
AudioDevice level and start dealing with it at the AudioStream
level. The right thing to do to get the current format of an
AudioDevice is to get the list of AudioStreams from the device,
iterate through the list, and get the format for each stream. The
same is true for getting the list of available formats or setting the
current format. They must be done on a per-stream basis.
The sole exception to dealing with formats at the stream level is
dealing with the nominal sample rate of a device. The HAL requires
that all AudioStreams contained by an AudioDevice be at the same
sample rate. Consequently, when you change the sample rate on one
stream, it gets changed for all streams. For convenience, the HAL
also provides the kAudioDevicePropertyNominalSampleRate family of
properties. These are AudioDevice properties on the master channel
that control the sample rate for the entire device. Apps are
encouraged to use these properties rather than the AudioStream format
properties if all the app wants to do is change the sample rate.
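For example, here's a minimal sketch of changing the rate this way,
with the wrapper name mine and error handling left to the caller; a
more careful app would check
kAudioDevicePropertyAvailableNominalSampleRates first and listen for
the property change to know when the switch actually completes:

#include <CoreAudio/CoreAudio.h>

// Sets the nominal sample rate for the whole device via the master
// channel. Every stream on the device follows along.
static OSStatus SetDeviceSampleRate(AudioDeviceID inDevice, Float64 inNewRate)
{
    return AudioDeviceSetProperty(inDevice, NULL, 0, false,
                                  kAudioDevicePropertyNominalSampleRate,
                                  sizeof(Float64), &inNewRate);
}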
One final thing to note about how the HAL handles formats has to do
with devices that support a range of sample rates for a given sample
format. For this discussion, we'll assume we have a device that
supports stereo 16 bit linear PCM data, but can support any rate
between 32 kHz and 48 kHz. If you ask the stream for
kAudioDevicePropertyStreamFormats, you'll get a list of ASBDs with one
entry for stereo 16 bit linear PCM whose sample rate is set to
kAudioStreamAnyRate. A situation like this is a signal to an
application that it has to ask the device for
kAudioDevicePropertyAvailableNominalSampleRates to see what rate
ranges might apply. Unfortunately, this approach has the problem that
kAudioDevicePropertyAvailableNominalSampleRates represents all the
rates possible for all the streams, and a particular stream may not
support all of those rates for all the sample formats. This would
leave you guessing as to whether a given format is really supported.
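Here's a sketch of that device-level query, for reference (helper name
mine, minimal error handling):

#include <CoreAudio/CoreAudio.h>
#include <stdio.h>
#include <stdlib.h>

// Asks the device for the rate ranges it supports. Per the caveat
// above, these cover the device as a whole, not any one sample format.
static void PrintAvailableNominalRates(AudioDeviceID inDevice)
{
    UInt32 theSize = 0;
    if (AudioDeviceGetPropertyInfo(inDevice, 0, false,
                                   kAudioDevicePropertyAvailableNominalSampleRates,
                                   &theSize, NULL) != noErr) return;

    UInt32 theNumberRanges = theSize / sizeof(AudioValueRange);
    AudioValueRange *theRanges = (AudioValueRange *)malloc(theSize);
    if (AudioDeviceGetProperty(inDevice, 0, false,
                               kAudioDevicePropertyAvailableNominalSampleRates,
                               &theSize, theRanges) == noErr)
    {
        for (UInt32 theIndex = 0; theIndex < theNumberRanges; ++theIndex)
        {
            printf("%.0f to %.0f Hz\n",
                   theRanges[theIndex].mMinimum,
                   theRanges[theIndex].mMaximum);
        }
    }
    free(theRanges);
}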
To resolve this issue, the HAL added a new format struct,
AudioStreamRangedDescription, that contains an ASBD and an
AudioValueRange that specifies the contiguous range of sample rates
that applies to the ASBD when the sample rate is set to
kAudioStreamAnyRate. The HAL also added a new pair of AudioStream
properties, kAudioStreamPropertyAvailableVirtualFormats and
kAudioStreamPropertyAvailablePhysicalFormats, that return the
available format list for a stream in terms of
AudioStreamRangedDescription structs. Note that this support is only
available in Tiger.
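Here's a sketch of using the new property on a single stream (helper
name mine, minimal error handling, Tiger and later only):

#include <CoreAudio/CoreAudio.h>
#include <stdio.h>
#include <stdlib.h>

// Fetches a stream's available physical formats as
// AudioStreamRangedDescriptions, so each sample format carries the
// sample rate range that actually applies to it.
static void PrintAvailablePhysicalFormats(AudioStreamID inStream)
{
    UInt32 theSize = 0;
    if (AudioStreamGetPropertyInfo(inStream, 0,
                                   kAudioStreamPropertyAvailablePhysicalFormats,
                                   &theSize, NULL) != noErr) return;

    UInt32 theNumberFormats = theSize / sizeof(AudioStreamRangedDescription);
    AudioStreamRangedDescription *theFormats =
        (AudioStreamRangedDescription *)malloc(theSize);
    if (AudioStreamGetProperty(inStream, 0,
                               kAudioStreamPropertyAvailablePhysicalFormats,
                               &theSize, theFormats) == noErr)
    {
        for (UInt32 theIndex = 0; theIndex < theNumberFormats; ++theIndex)
        {
            printf("%lu channels, %lu bits, rates %.0f to %.0f Hz\n",
                   (unsigned long)theFormats[theIndex].mFormat.mChannelsPerFrame,
                   (unsigned long)theFormats[theIndex].mFormat.mBitsPerChannel,
                   theFormats[theIndex].mSampleRateRange.mMinimum,
                   theFormats[theIndex].mSampleRateRange.mMaximum);
        }
    }
    free(theFormats);
}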
I hope this helps people a bit as they fiddle around with format info
in the HAL. The takeaway here is that apps have to think a bit
differently about formats when dealing with the HAL. The important
point is that the multi-stream nature of AudioDevices makes it so
that a single ASBD cannot represent the format of the AudioDevice.
Consequently, apps should be accessing and manipulating that
information using the AudioStream objects contained by the AudioDevice.
--
Jeff Moore
Core Audio
Apple