16-bit vs 8-bit images
- Subject: 16-bit vs 8-bit images
- From: Marco Ugolini <email@hidden>
- Date: Mon, 11 Feb 2002 19:19:48 -0800
My understanding of the 16-bit vs. 8-bit issue is that the increased number of
bits per channel in 16 bits allows a wider and deeper range of alterations
(hue/saturation, curves, levels, etc.) than 8 bits does before gaps start to
appear in the histogram. This is possible, clearly, only if we start from an
image (scan, digital capture, etc.) that already makes effective use of the
full 16-bit depth of available color: simply interpolating an 8-bit image up to
16 bits, for example, does not add any actual color depth or detail beyond what
the image already has.
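To put that last point in concrete terms, here is a minimal sketch in Python
with NumPy (my own illustrative example, not something from the original
message): promoting an 8-bit channel to 16-bit precision rescales the code
values but creates no new tones.

```python
import numpy as np

# A hypothetical 8-bit channel: at most 256 distinct tones.
channel_8 = np.random.randint(0, 256, size=(512, 512), dtype=np.uint8)

# "Promote" it to 16 bits the usual way (map 0-255 onto 0-65535).
channel_16 = channel_8.astype(np.uint16) * 257

print(len(np.unique(channel_8)))   # 256 distinct levels
print(len(np.unique(channel_16)))  # still 256; no detail was added
```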
If an image already starts out as problematic (for any reason: too dark, you
name it), deeper changes can be made to it in 16 bits before posterization
occurs (which typically shows up in the histogram as gaps, although, indeed,
the histogram must be interpreted intelligently) and the picture starts to
"fall apart". It seems to me that whether an image "looks better in 16 bits"
is largely idle speculation. Of course, at least theoretically, if an image
really does use all 16 bits of color depth effectively, that in itself gives it
a sizable practical advantage over the best image achievable in 8 bits with the
same system of digital capture, all other variables being equal. We know that
we very rarely end up using images without making any modifications to them,
sometimes quite extensive ones; so, given the SAME image scanned in 8 bits and
in 16 bits on the SAME scanner (if it allows 16-bit scans), which of the two
scans can be altered more radically before visible image deterioration occurs?
That, to me, seems the question to ask, one grounded in the real-life
production scenarios I face in my work every day.
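That comparison is easy to simulate. Below is a minimal sketch (again
Python/NumPy, with an arbitrary levels move of my own choosing standing in for
real editing): the same low-contrast "scan" is quantized at both depths,
stretched identically, and reduced to 8 bits for output.

```python
import numpy as np

def stretch_levels(values, black, white, max_code):
    """Linearly remap [black, white] onto the full code range, clipping the rest."""
    out = (np.asarray(values, dtype=np.float64) - black) / (white - black)
    return np.clip(np.round(out * max_code), 0, max_code)

# The same smooth, low-contrast ramp "scanned" at two bit depths.
ramp   = np.linspace(0.3, 0.7, 200_000)
scan8  = np.round(ramp * 255)     # 8 bits per channel
scan16 = np.round(ramp * 65535)   # 16 bits per channel

# Identical aggressive levels move applied to each scan.
out8  = stretch_levels(scan8,  0.3 * 255,   0.7 * 255,   255)
out16 = stretch_levels(scan16, 0.3 * 65535, 0.7 * 65535, 65535)

# Both end up as 8-bit output, as a final image would.
final_from_8  = out8
final_from_16 = np.round(out16 / 257)

print("distinct output levels, edited in 8 bits: ", len(np.unique(final_from_8)))
print("distinct output levels, edited in 16 bits:", len(np.unique(final_from_16)))
```

With this particular move, the version edited in 8 bits can keep only the
hundred or so distinct tones that survived the original quantization (hence
histogram gaps), while the version edited in 16 bits still fills essentially
the whole 0-255 output range.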
From past experimentation, I have seen for myself that an image in 16 bits is
far more forgiving. Whether the bits are 16 or 15 or 13 seems to be beside the
point: if effectively distributed and exploited, the extra bits offer a cushion
that is simply not available in 8 bits. Since final images always need to be 8
bits per channel (and since, apparently, 8 bits are rarely truly 8 bits, more
often less), working in 8 bits already means having the bare minimum number of
colors necessary: why not accept an additional safety margin, however large it
may be? Besides the difficulty of establishing how truly "16-bit" our images
are, the other substantial problem to be overcome, for the time being, is that
Photoshop's support for a 16-bit workflow is still scant, although better than
in the past. But the day that editing truly 16-bit images in Photoshop becomes
as easy and far-ranging as it is in 8 bits, does anyone doubt that professional
work will be done in 16 bits?
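On the 16-or-15-or-13 point above, the raw arithmetic alone shows why the exact
depth matters less than having any of them; the loop below (a trivial
illustration, not tied to any particular application's internal representation)
just counts the code values available per channel at each depth.

```python
# Distinct code values per channel at a few bit depths.
for bits in (8, 13, 15, 16):
    print(f"{bits:>2} bits per channel: {2 ** bits:>6} levels")
# Even 13 bits gives 32 times as many levels as 8 bits,
# so any of these depths offers a large editing cushion.
```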