Re: Native Device Formats
- Subject: Re: Native Device Formats
- From: "Mikael Hakman" <email@hidden>
- Date: Sun, 8 Jun 2008 16:43:31 +0200
- Organization: Datakonsulten AB
This is where you end up if you take one specification that is already messed
up (for political/commercial reasons) and obsolete, build upon it, and create
new specs that mess it up further (again for political/commercial reasons). In
the end nobody knows what the spec says or how to implement it 100% correctly.
As the technology develops, many of the special features and wrinkles that were
originally included in the spec are no longer needed, used, relevant, or even
feasible. The arbitrary limitations put there in ancient times (because "we
don't need more now") are quickly becoming a major obstacle to further
development. Just look at the bit-rate/number-of-channels limitation: on coax
we can run in the GHz range, not the few MHz the spec allows. On optical we can
run in the THz range! Then, instead of cleaning up the spec, or even better,
working out a new, modern, and foresighted spec, the people sitting in all
those standardization bodies, all working for the big brand names, continue to
mess it up and cheat on each other.
Historically, you are quite right about the S/PDIF 20/24-bit issue. To begin
with, the 20 most significant bits were called "audio data" and the 4 least
significant bits were called "auxiliary audio data", and vendors were warned
not to use them. At that time there were no 24-bit consumer DACs and hardly any
professional ones. 16 bits was the mainstream, and 20 bits was the high end.
Over time, the name of these 4 bits changed to "auxiliary data" (note the
omission of the word "audio"), and the professional format (AES/EBU) got a flag
telling whether or not to use these 4 bits as the 4 least significant bits of
audio. The consumer spec wasn't changed.
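To make the two readings concrete, here is a small C sketch of my own (not
taken from any spec, and the field value is just an example) showing how the
same 24-bit sample field of a subframe can be interpreted either as 20 bits of
audio plus a 4-bit auxiliary nibble, or as a full 24-bit sample:

/* Illustration only: the 24-bit sample field of an S/PDIF/AES subframe,
 * read two ways as described above (20 MSBs audio + 4 LSBs auxiliary,
 * or all 24 bits as audio). Assumes two's-complement int32_t. */
#include <stdint.h>
#include <stdio.h>

/* Treat all 24 bits as audio (the modern interpretation). */
static int32_t as_24bit_audio(uint32_t field)
{
    return (int32_t)(field << 8) >> 8;   /* sign-extend 24 -> 32 bits */
}

/* Treat only the 20 MSBs as audio and the 4 LSBs as auxiliary data. */
static int32_t as_20bit_audio(uint32_t field, uint8_t *aux)
{
    *aux = (uint8_t)(field & 0x0F);       /* auxiliary nibble          */
    return (int32_t)(field << 8) >> 12;   /* sign-extend 20 -> 32 bits */
}

int main(void)
{
    uint32_t field = 0xFFF005;            /* an arbitrary 24-bit field */
    uint8_t aux;
    printf("24-bit reading: %d\n", (int)as_24bit_audio(field));
    printf("20-bit reading: %d (aux nibble 0x%X)\n",
           (int)as_20bit_audio(field, &aux), aux);
    return 0;
}

The ambiguity is exactly that both functions are legal readings of the same
bits on the wire.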
I suspect that the whole 20/24-bit issue is the result of one or possibly two
vendors playing under the table in the specification committee in those ancient
times: "Yes, S/PDIF conveys only 20 bits, but if you use our equipment at both
ends, you get 24 bits through."
Today, 24 bits is the mainstream, and most vendors on both the professional and
consumer side use all 24. Sending devices set the unused bits to zero, and
receiving devices work on all 24 bits (the cheapest receiving devices simply
discard 4 - 8 bits before DACing). This is the only sensible, simplest, and
cheapest way to interpret the spec, given that there are no advantages at all
in doing anything else. Note also that the Mac's built-in digital I/O follows
the consumer spec. Likewise, the digital I/O on many so-called professional
audio interfaces actually follows the consumer spec. This is emphasized both by
the use of the name S/PDIF, which is the consumer name, and by the coax and/or
optical connectors, both of which also follow the consumer spec. There are,
however, devices that provide true AES/EBU (i.e. professional) interfaces at
the physical (XLR connectors), electrical (balanced 5 V, 110 ohm, jitter
limits, etc.), and logical (professional format) levels. Some devices claim
compatibility with both specs, but it is usually unclear whether they mean only
the voltage levels, the whole specs, or (which) parts thereof.
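For what it's worth, the padding/truncation behaviour described above amounts
to nothing more than this (a sketch of mine, not anybody's actual firmware):

/* Sender side: promote a 16-bit sample to the 24-bit wire field,
 * zero-padding the unused least significant bits. */
#include <stdint.h>

static uint32_t pad_16_to_24(int16_t s)
{
    return ((uint32_t)(uint16_t)s << 8) & 0xFFFFFF;
}

/* Receiver side (cheapest case): keep only the 16 most significant bits
 * of the 24-bit field and hand them to a 16-bit DAC. */
static int16_t truncate_24_to_16(uint32_t field)
{
    return (int16_t)(field >> 8);
}

A 16-bit sample survives a round trip through both functions unchanged, which
is why this interpretation costs nothing and breaks nothing.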
Returning to the question of using Mac software to assess the actual sample
rate and bit width (such as the TobyBear tool you referred to): this is all
right once you know that the device driver is telling you the truth, i.e. that
it reports what happens on the wire and not merely what it accepts from or
delivers to an application, because these two things need not be the same. In
fact, I have verified that with some digital audio circuits, some drivers, and
some operating systems they are different: what the driver reports is not the
same as what is going on the wire. Therefore you need to verify the truth using
some other, external device that you know is telling the truth. A modern
(high-end) AV receiver is such a (cheap) and easy-to-use device. These people
understand the true nature of digital audio, perhaps better than anyone else,
and they never change the bit content when not asked to. This is also why they
show you the sample rate (because they know they can do it right) but not the
bit width: as you say, there is no way to be sure unless you perform some
special measurements involving both the sending and the receiving end.
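On the Mac side you can at least ask the HAL for both views. Here is a rough
sketch of mine (untested) that compares the virtual format (what your
application sees) with the physical format (what the driver claims to put on
the wire) for a given AudioStreamID; of course, even the physical format is
only as truthful as the driver itself:

/* Sketch: query virtual vs. physical format of a HAL audio stream. */
#include <CoreAudio/CoreAudio.h>
#include <stdio.h>

static void PrintStreamFormats(AudioStreamID stream)
{
    AudioStreamBasicDescription fmt;
    UInt32 size = sizeof(fmt);
    AudioObjectPropertyAddress addr = {
        kAudioStreamPropertyVirtualFormat,      /* what the app sees      */
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };

    if (AudioObjectGetPropertyData(stream, &addr, 0, NULL, &size, &fmt)
            == kAudioHardwareNoError)
        printf("virtual : %.0f Hz, %u bits\n",
               fmt.mSampleRate, (unsigned)fmt.mBitsPerChannel);

    addr.mSelector = kAudioStreamPropertyPhysicalFormat;  /* driver's claim */
    size = sizeof(fmt);
    if (AudioObjectGetPropertyData(stream, &addr, 0, NULL, &size, &fmt)
            == kAudioHardwareNoError)
        printf("physical: %.0f Hz, %u bits\n",
               fmt.mSampleRate, (unsigned)fmt.mBitsPerChannel);
}

If the two disagree, the HAL or the driver is converting somewhere between your
application and the wire, which is exactly the case you want to rule out.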
One way to assess the bit width is to generate a signal at a level below the
16-bit and/or below the 20-bit dynamic range. You then send this signal through
your path and measure what you get at the other end (which requires an
oscilloscope and access to a word clock derived from, or synchronous to, the
signal). You can also reproduce the signal using (very) high amplification and
a correspondingly low SNR. If you still hear a -136 dBFS signal, then all
24 bits are there.
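As an illustration (my own sketch; the tone frequency and level are just
examples), such a test signal is trivial to generate: a sine whose peak is only
a couple of 24-bit LSBs, well below both the 16-bit (about -96 dBFS) and the
20-bit (about -120 dBFS) floors, so it survives only if the low bits get
through:

/* Generate a sine with a peak of +/-2 LSBs of a 24-bit word. */
#include <math.h>
#include <stdint.h>
#include <stdio.h>

#define SAMPLE_RATE 48000.0
#define FREQ        1000.0   /* test tone frequency, Hz        */
#define PEAK_LSB    2.0      /* peak amplitude in 24-bit LSBs  */

int main(void)
{
    for (int n = 0; n < 48; n++) {
        double x = PEAK_LSB * sin(2.0 * M_PI * FREQ * n / SAMPLE_RATE);
        int32_t s24 = (int32_t)lrint(x);  /* -2..+2: only the low bits move */
        printf("%2d: %d\n", n, (int)s24); /* feed these to a 24-bit output  */
    }
    return 0;
}

If the receiving end sees anything other than silence, the bits below the
20-bit floor made it across.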
On Saturday, June 07, 2008 2:30 AM, Brian Willoughby wrote:
While it is true that S/PDIF has only one mode due to lack of an
indicator bit in its Channel Status, it is incorrect to claim that this
mode is 24 bits. Several online references warn that the specification
is designed for 20 bits, and therefore most devices ignore the extra 4
bits due to the inability to dependably determine their validity as
audio.
I have been unable to find the original S/PDIF specification. The
evolution of the original specification has found its way into IEC958
1989-03 (Consumer Part) which is now IEC60958. In any event, whether you
assume 24 or assume 20, you cannot be guaranteed to be correct with all
equipment. It might be possible to detect 16-bit audio samples alongside
4-bit metadata, but it is impossible to distinguish 20-bit audio samples
with 4-bit metadata from 24-bit audio samples without metadata.