raw scans and their histograms
- Subject: raw scans and their histograms
- From: Todd Flashner <email@hidden>
- Date: Wed, 26 Sep 2001 15:06:02 -0400
What determines where the endpoints of a RAW high-bit scan (no tonal
manipulation by the driver or image editor, and no profile assigned) end up
in Photoshop's histogram in 16-bit mode? In other words, why does the data
appear to have such low dynamic range, bunched into the dark end of the
histogram? If you scan a chrome which has a dynamic range of 3.7 on a
scanner which has a true dynamic range of 3.7, why wouldn't the data span
the histogram? In practice it still comes out bunched up.
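To put numbers on the "bunched up" part, here's a toy calculation. The
assumption is mine, purely for illustration: that a raw "linear gamma" value
is simply proportional to transmittance (T = 10^-D) mapped onto a 16-bit
scale of 0-65535. I'm not claiming any particular scanner does exactly this.

# Toy numbers only -- assumes raw value is proportional to transmittance,
# T = 10**-D, scaled to a 16-bit range of 0-65535.
for density in (0.0, 0.3, 1.0, 2.0, 3.0, 3.7):
    transmittance = 10 ** -density            # fraction of light passing the film
    code = round(transmittance * 65535)       # hypothetical linear 16-bit value
    print(f"D = {density:3.1f}   T = {transmittance:.5f}   16-bit value ~ {code}")

# Roughly:
# D = 0.0   T = 1.00000   16-bit value ~ 65535
# D = 0.3   T = 0.50119   16-bit value ~ 32845
# D = 1.0   T = 0.10000   16-bit value ~ 6554
# D = 2.0   T = 0.01000   16-bit value ~ 655
# D = 3.0   T = 0.00100   16-bit value ~ 66
# D = 3.7   T = 0.00020   16-bit value ~ 13

If that assumption is anywhere near right, everything denser than about
D 1.0 already sits in the bottom tenth of the scale, which is at least
consistent with what I'm seeing.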
This is a cerebral pursuit for me; I just want to be sure I'm
conceptualizing this properly. I've been told "it's because you haven't
set your endpoints, or corrected its gamma." Yes, I understand about
toning the data, but my question is: what determines where that RAW data
sits in the histogram prior to toning?
I've heard a couple of different explanations, but they are contradictory or
incomplete.
One premise suggests that as the bit depth of the capture device increases,
its scans will occupy a larger portion of the histogram. IOW, a 10-bit
capture would contain fewer tones than a 14-bit capture, and thus would
occupy a smaller portion of the histogram. This suggests that dynamic range
is directly a function of bit depth, which I don't believe to be the case
(though someone always throws in a snippet about A/D converters, which only
confuses me).
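Here's a toy comparison of what I mean -- again, a made-up example of mine,
not anybody's real hardware: take the same normalized signal, quantize it at
10 bits and at 14 bits, and look at where the endpoints fall on a normalized
0.0-1.0 axis.

# Hypothetical scene occupying only the bottom 35% of full scale; the point
# is just to compare where the endpoints land at two bit depths.
signal = [i / 1000 * 0.35 for i in range(1001)]

def quantize(values, bits):
    levels = 2 ** bits - 1
    return [round(v * levels) / levels for v in values]

for bits in (10, 14):
    q = quantize(signal, bits)
    print(f"{bits}-bit: min = {min(q):.4f}  max = {max(q):.4f}  "
          f"distinct levels used = {len(set(q))}")

# Roughly:
# 10-bit: min = 0.0000  max = 0.3500  distinct levels used = 359
# 14-bit: min = 0.0000  max = 0.3500  distinct levels used = 1001

Both scans cover the same portion of the axis; the higher bit depth just
slices that portion into finer steps. That's what I'd expect if dynamic
range and bit depth really are separate things, anyway.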
Another theory proposes exactly the opposite, though it still ties
dynamic range to bit depth. It suggests that the tones look compressed
because the histogram is showing that the dynamic range of the capture
device is greater than that of the film. In this scenario, as the bit depth
of the capture device increases, so does its dynamic range, and as the
dynamic range of the scanner exceeds that of the film, the excess shows up
as "empty space" in the histogram -- IOW, empty space reveals unutilized
dynamic range of the scanner. Again, I don't buy this, because it's still
trying to tie DR to bit depth.
The fact of the matter is that, so long as both scanners have sufficient DR
to handle the film, the 10-bit scan shouldn't have a different histogram
than a 14-bit scan, or should it?
Yet another premise suggests it is because scanners write out raw data in
linear gamma while the histogram represents a working space of a higher
gamma. But gamma is a function of the distribution of the tones inside the
endpoints, not of where the endpoints lie, no? Furthermore, if I convert
the file to a 1.0 gamma working space, the histogram remains essentially
unchanged.
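For what it's worth, here is what I mean by gamma redistributing tones
inside fixed endpoints -- a toy power-law encode, with the gamma value of
2.2 assumed purely for illustration.

# The endpoints 0.0 and 1.0 stay exactly where they are; everything in
# between moves, and the deepest values move the most.
GAMMA = 2.2   # assumed value, just for the sake of the example

for linear in (0.0, 0.0002, 0.001, 0.01, 0.1, 0.5, 1.0):
    encoded = linear ** (1 / GAMMA)           # simple power-law encode
    print(f"linear {linear:<7} -> gamma-2.2 encoded {encoded:.4f}")

# Roughly:
# linear 0.0     -> gamma-2.2 encoded 0.0000
# linear 0.0002  -> gamma-2.2 encoded 0.0208
# linear 0.001   -> gamma-2.2 encoded 0.0433
# linear 0.01    -> gamma-2.2 encoded 0.1233
# linear 0.1     -> gamma-2.2 encoded 0.3511
# linear 0.5     -> gamma-2.2 encoded 0.7297
# linear 1.0     -> gamma-2.2 encoded 1.0000

The endpoints stay put; only the distribution between them changes, which
is what I was trying to say above.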
So what is it? Just why does the raw data appear so compressed, and how is
its shape within the histogram affected by the bit depth of the capture
device?
Todd Flashner