Re: 16 bits = 15 bits in Photoshop?
- Subject: Re: 16 bits = 15 bits in Photoshop?
- From: email@hidden
- Date: Mon, 18 Apr 2005 13:51:43 EDT
Marco Ugolini writes,
>>Well, Count Ugolino was actually a character in Canto 33 of Dante's Inferno.
But that was him. I am someone else. MY name is Ugolini. Different guy.>>
Se vuo' ch'i' ti sovvegna,
dimmi chi se', e s'io non ti disbrigo,
al fondo de la ghiaccia ir mi convegna.
["If you wish me to aid you, tell me who you are, and if I do not relieve you, may I be made to go to the bottom of the ice."]
By coincidence, I have just finished Inferno and am working on Purgatorio, since
I will be checking out many of Dante's hangouts during the next two weeks while
I am in Italy. That, no doubt, is the source of my confusion. Mi dispiace (my apologies).
Ugolini it is.
>>I'll try to translate: you seem to say that it's better to trust a measuring
instrument to calibrate a monitor, but, except for that one instance, for
everything else one ought to trust his own eyes. Am I warm?>>
No, I am saying that when color-correcting on a properly calibrated monitor,
the phenomenon of chromatic adaptation causes us to adjust to any cast in the
displayed image, so our ability to evaluate whether an area is truly neutral
is suspect. In determining the gray balance of an image displayed on a monitor,
then, one should rely on the Info palette and not on the eyes.
For almost everything else one ought to rely on the eyes.
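The numbers-over-eyes point can be sketched in a few lines. This is not Photoshop's Info palette, just a minimal NumPy stand-in for it; the 4-level tolerance is my own assumption, not anything Adobe prescribes:

```python
import numpy as np

def is_neutral(region, tolerance=4):
    """Judge gray balance by the numbers, as one would with the Info
    palette: average each channel over a sampled region and call the
    area neutral only if R, G, and B agree within a small tolerance."""
    means = region.reshape(-1, region.shape[-1]).mean(axis=0)
    return float(means.max() - means.min()) <= tolerance

# A swatch with a mild green cast: chromatic adaptation may hide it
# from the eye, but the channel averages give it away.
cast = np.full((8, 8, 3), (120, 129, 118), dtype=np.uint8)
gray = np.full((8, 8, 3), (121, 121, 121), dtype=np.uint8)
print(is_neutral(cast), is_neutral(gray))
```

The point is only that a numeric readout is immune to adaptation: the cast swatch fails the check no matter how neutral it looks after a few minutes of staring.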
>>You must certainly be aware that the eyes can also deceive you. In color
management (forgive me for mentioning this expression, so distasteful to
you)...>>
The expression is only distasteful if certain adjectives apply to it. Lame,
dysfunctional color management that is represented as being the solution to all
color problems is indeed distasteful to me. That was the sort of color
management advocated by several parties on this list in the late 1990s. A good part of
its shortcomings was due to a lack of awareness that it was the measuring
instruments, not the eyes, that were being deceived.
Today, you won't see that even from the most extreme members of this list.
Everybody now accepts that the machine readings are the start and not the end of
the calibration process. Everybody now admits that if the original setup
produces garbage and color management is introduced, the normal result is not
better-looking images but reliable, predictable, and repeatable garbage. Every
color management consultant now gives his clients fairly stern warnings about
what color management does and, more importantly, what it doesn't do. And every
color management consultant now stresses that calibration requires continuing
discipline, not just making one set of profiles and forgetting about it until
the end of time.
There's nothing distasteful about any of that; it's rather a good thing. And,
now that even the color management extremists have adopted the position I've
been taking for more than ten years, I don't even mind that they continue to
demonize me out of force of habit.
>>...I don't think we would be going very far by looking at the individual
patches of a printed test chart and guessing their L*a*b* coordinates...>>
In calibrating an output device I have exactly as much use for printed test
charts as I have in color correction for histograms. However, this is getting
very far afield. Back to the topic of this thread.
>>Let's start with a synthetic image.>>
No, let's not. I specifically said a real-world color photograph, and I
specifically said that 16-bit can be useful in working with computer-generated
gradients. Computer-generated art is not a good surrogate for natural photographs.
If you want to print the artwork out, and then scan or photograph it, then we
can discuss whether 16-bit is useful in correcting that file. But hint: that
experiment has already been done, and the guy who did it got a big surprise.
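The gradient point is easy to reproduce. The sketch below is plain NumPy, not Photoshop's actual pipeline, and it models the "16-bit" depth as Photoshop does internally, as 15 bits (values 0 to 32768, here approximated as 0 to 32767); the tonal range and the stretch are my own choices. A drastic expansion of a narrow synthetic ramp shows banding in the 8-bit path that the higher-precision path avoids:

```python
import numpy as np

# A smooth synthetic ramp covering only a narrow tonal range.
ramp = np.linspace(0.40, 0.60, 4096)

def stretch(x):
    # A drastic correction: expand the 0.40-0.60 range to full scale.
    return np.clip((x - 0.40) / 0.20, 0.0, 1.0)

# 8-bit path: quantize before correcting, as if the file were 8-bit.
out8 = stretch(np.round(ramp * 255) / 255)

# High-bit path: quantize to 15-bit precision instead.
out16 = stretch(np.round(ramp * 32767) / 32767)

# Count distinct output levels: banding shows up as a handful of
# widely spaced levels in the 8-bit result.
levels8 = len(np.unique(np.round(out8 * 255)))
levels16 = len(np.unique(np.round(out16 * 255)))
print(levels8, levels16)
```

The 8-bit ramp emerges with its levels spaced five apart, a classic posterized gradient, while the 15-bit intermediate fills in every output level. This is precisely why computer-generated gradients are the wrong surrogate for photographs: a scan or a capture carries noise that dithers these steps away.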
>>At the same URL, you can also go ahead and see for yourselves what happens
to another grayscale image, a 16 bit scan I made some time back from a black
& white film negative. >>
Again, a real-world COLOR photograph, please. The additional channels soften
the image's appearance, so much of the difference between the 16-bit and 8-bit
corrections vanishes. I've tested around 20 grayscale images with big
corrections and given them to juries for blind evaluation. In around two-thirds of the
examples the juries reported no significant difference. When they had a
preference, they preferred the 8-bit version twice as often as the 16-bit version.
This is as opposed to the evaluation of the color images, which came back no
preference in every case except one, in which the panel preferred the
8-bit correction over the 16-bit.
But yes, there are some cases where it has been shown that massive
corrections to a grayscale file work better in 16-bit mode, although even more work
better in 8-bit. Fortunately, nowadays most people do the massive corrections to
the color file before converting it to black and white.
>>If circumstances force you to apply a heavy hand to your image, it would be
foolish at least not to CONSIDER starting your work in 16 bits. Still, if you
insist that it makes no difference worth your time, then this is no longer
about reason or logic, but about religion. >>
But I don't insist that. Jim Rich doesn't insist that. We both say merely
that we cannot demonstrate that there is ever any advantage, and that if
somebody can demonstrate one, with real-world images under real-world
scenarios, we will have no problem accepting it and will modify our workflows
in certain circumstances. Ray Maxwell and others have
suggested scenarios under which there might be an advantage. I agree that if there
ever is an advantage it probably shows up in images like the ones he mentions.
However, I've tried what I consider to be similar images, and no go.
It's the *other* side that is acting religiously. At least two people have
explicitly said in approximately these words, "I do not require proof of this
proposition because it is so obviously true." If that's not the definition of
religion, what is? After all, (and I will think of this when I visit a certain
tomb in Santa Croce next week) the proposition that the sun revolves around the
earth was so obviously true at one time that no amount of proof that it
doesn't was considered persuasive.
If it were all that easy to produce a real-world color photograph showing a
real-world benefit for 16-bit correction, somebody would have produced it
already. Certainly Bruce would have, rather than spending countless hours
explaining why he has nothing to back up his article of faith.
In November, there was a thread on colortheory in which I criticized one
member for "proving" 16-bit superiority by means of a series of repeated drastic
moves back and forth, as opposed to something even remotely conceivable in the
real world. This prompted the following response from a list member who
vociferously disagrees with a lot of my positions:
"For once I have to agree with you Dan. This thread has prompted me to do a
series of tests, and however bad I can make the histograms look after
attempting to do serious damage in 8 bits vs. doing the same corrections in 16 bits,
the results look the same. They're the same on the monitor, they're the same on
a proof and they're the same even on Ektachrome film output. Now I know that
there are cases where it matters because I've seen it before and I fixed the
problem by using 16 bit file corrections, but now that I want to make it happen,
I can't."
I don't think this concept is so obvious that it doesn't require proof.
Dan Margulis
_______________________________________________
Colorsync-users mailing list (email@hidden)