L* hype -- was: Re: profiling monitor range
- Subject: L* hype -- was: Re: profiling monitor range
- From: Peter Karp <email@hidden>
- Date: Wed, 21 Jun 2006 16:19:26 +0200
> The significant technical issue with flat panel displays is their
> lack of native linearity. Unfortunately, the CRT calibration routines
> can not accurately describe the slope of multiple contrasts in the
> LCD/TFT. basICColor v4 will help produce the smoothest gradation with
> equal steps in contrast by using the L* gamma technology. The info
> can assist the video card with LUT based corrections
Between the lines you prove here that L* calibration of a display has
no benefit over a gamma calibration. ;-)
What would be the advantage of an L* calibration of a display? The
idea is that the tonal response of the display is matched to visual
perception, so that equally spaced steps in RGB space correspond to
equally spaced differences in our lightness perception. This would
ensure that no (or fewer) digital values are 'wasted'. In general
this is a good idea.
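To make that concrete, here is a small sketch comparing the lightness
step per 8-bit code value on an idealized L*-encoded display and on an
idealized gamma-2.2 display (assuming the standard CIE L* formula). On
the L* display every code step is the same ~0.39 L*; on the gamma-2.2
display the step size varies with position on the ramp:

    import numpy as np

    KAPPA = 24389.0 / 27.0  # CIE constant; L* = KAPPA * Y below threshold

    def linear_to_lstar(Y):
        """Linear luminance Y in [0,1] -> CIE lightness L* in [0,100]."""
        Y = np.asarray(Y, dtype=float)
        return np.where(Y > 216.0 / 24389.0,
                        116.0 * np.cbrt(Y) - 16.0,
                        KAPPA * Y)

    for code in (10, 200):  # one step deep in the shadows, one near white
        a, b = code / 255.0, (code + 1) / 255.0
        step_lstar = 100.0 * (b - a)  # constant on an L*-encoded display
        step_gamma = float(linear_to_lstar(b ** 2.2) - linear_to_lstar(a ** 2.2))
        print(f"code {code}: L* step {step_lstar:.2f}, "
              f"gamma-2.2 step {step_gamma:.2f}")

The gamma-2.2 display spends roughly half a code step of lightness in
the deep shadows (~0.17 L*) and most of one near white (~0.35 L*),
while the L* display spends the same ~0.39 L* everywhere.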
_But_ you have to remember that the L* function describes perceived
lightness under specific viewing conditions! And it's _just_ a simple
model that tries to fit our lightness perception. There are other
models which are similarly good and/or which fit other viewing
conditions better!
Now comes the point where one sees that most (if not all) of the L*
promotion is marketing speak:
I have read comments from happy customers who stated that their CRT
calibrated to L* is soooo good and 'linear' (I doubt they knew the
meaning of 'linear' here). Similarly, you state that the L*
calibration will adjust the video card LUT to the 'smoothest
gradation' with equal steps.
For me "smooth" means that there is _no_ banding in a black-to-white
gradient. But if you load curves into the video card LUT you'll
_always_ lose smoothness. You're working with 8-bit/channel data (DVI
offers only 8 data bits per channel), and the video LUT is accessed
with 8-bit depth too. So you start with 256 levels, but you end up
with fewer. You can calculate this for yourself or look at Bruce
Lindbloom's great website:
http://www.brucelindbloom.com/LevelsCalculator.html
There's no option to upload or define a custom tone reproduction
curve there. But let's assume that a simple gamma model describes
your display -- be it a CRT or a TFT. For most displays this
assumption is not too far from reality.
Input and output both have 8-bit depth. Now calculate a transform
from L* to a gamma of 2.2 (or vice versa; that doesn't matter).
You'll see that you have 234 levels left of the original 256. 22
levels are _gone_. That's the result of the quantization errors due
to the limited precision in a computer. I don't even have to look at
a monitor to predict that you will have less smooth gradients! But
you can test this for yourself (see below).
Now calculate the values for a transform from L* to a gamma of 1.8.
You'll lose even more levels and end up with only 225. You can see
that overall a gamma of 2.2 is closer to L* than a gamma of 1.8.
In real life, of course, you might lose a few levels more or less,
because your uncalibrated display will not behave according to a
perfect 2.2 or 1.8 gamma.
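You don't need the website to check these numbers. Here is a minimal
sketch of the calculation in Python -- my assumptions: the standard
CIE L* formula, ideal power-law gammas, and simple round-to-nearest
quantization, so a different rounding convention could shift the
counts by a level or so:

    import numpy as np

    KAPPA = 24389.0 / 27.0  # CIE constant (~903.3)
    EPS = 216.0 / 24389.0   # CIE threshold (~0.008856)

    def lstar_decode(v):
        """L*-encoded value v in [0,1] -> linear luminance Y in [0,1]."""
        L = 100.0 * np.asarray(v, dtype=float)
        return np.where(L > KAPPA * EPS,          # threshold is L* = 8
                        ((L + 16.0) / 116.0) ** 3,
                        L / KAPPA)

    def count_levels(decode_src, encode_dst, bits=8):
        """Push every input code through decode -> encode and count how
        many distinct output codes survive the 8-bit quantization."""
        n = 2 ** bits
        v = np.arange(n) / (n - 1.0)        # every input code, 0..255
        out = encode_dst(decode_src(v))     # re-encode for the target
        return np.unique(np.round(out * (n - 1.0)).astype(int)).size

    print(count_levels(lstar_decode, lambda Y: Y ** (1 / 2.2)))  # ~234
    print(count_levels(lstar_decode, lambda Y: Y ** (1 / 1.8)))  # ~225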
So when you say that a CRT with an L* calibration is smoother than
with a gamma calibration, this is not true, because a CRT has a
native tonal response more or less in the range of a 2.2 gamma (if
you use a simple gamma model; if you use a model which honors the
offset, the values will be somewhat higher, around 2.5). A CRT's
tonal response has to be changed via the video card LUT, so you'll
lose some steps as shown above.
The L* calibration thing is, in my eyes, mostly marketing hype. If
you want the monitor calibration to have the least impact on a smooth
gradient (which many will want!), you have to calibrate the display
in the hardware LUT (as is possible in some TFTs like Eizo, NEC and,
last but not least, Quato) to the gamma which is used for the file.
For example, when working with Adobe RGB files a gamma of 2.2 is
best, for ECI-RGB files a gamma of 1.8, for L*-RGB files a 'gamma' of
L*, and so on.
If the tonal distribution of the file and the monitor differ, the
color management has to take care of that and alter the data sent to
the display accordingly. This will introduce all the quantization
errors you don't want to have, because you'll see the effect as
banding. Just try it for yourself: create a genuine gray ramp [1]
(not dithered), assign different working spaces to this file, and see
what happens in a color-managed application like Photoshop. On an
accurately calibrated display (I work for Quato and mostly look at a
hardware-calibrated Intelli Proof 213) you'll see a perfect gradient
when monitor gamma and file gamma match, but you'll see some banding
when they differ. This is _regardless_ of the actual gamma (or L*)
used.
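The same kind of 8-bit counting predicts this. Assumptions as before:
ideal power-law gammas and round-to-nearest quantization. A real CMM
may convert at higher internal precision, but the data finally sent
to an 8-bit display behaves like this sketch:

    import numpy as np

    def surviving_levels(file_gamma, monitor_gamma, bits=8):
        """Count distinct codes after converting a gray ramp from the
        file's gamma encoding to the monitor's in an 8-bit pipeline."""
        n = 2 ** bits
        v = np.arange(n) / (n - 1.0)                  # genuine gray ramp
        out = (v ** file_gamma) ** (1.0 / monitor_gamma)
        return np.unique(np.round(out * (n - 1.0)).astype(int)).size

    print(surviving_levels(2.2, 2.2))  # matched: all 256 levels survive
    print(surviving_levels(1.8, 2.2))  # mismatched: fewer levels -> banding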
Is L* a dumb idea then? No, in general it's a good idea. But the
benefits will be small and only visible under very specific
conditions. And don't forget that the potential benefits can _only_
have an effect when the data is originally encoded in L*. If your
camera tags its files with Adobe RGB, I would not recommend using L*
for the monitor calibration.
In everyday use you'll not see a difference between an L* calibration
and a calibration to a gamma value. You can always construct
conditions which "prove" that a specific tonal response is "best"...
With kind regards
Peter
[1] I can e-mail a link via PM to a set of gray gradients of
different widths and other monitor testing tools. The gray gradients
are calculated, contain no dither, and all levels are distributed
evenly (exactly the same number of pixels for every step).
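If you'd rather generate such a ramp yourself, here is a minimal
sketch with numpy and Pillow (the step width, image height and file
name are my own choices, not taken from the tools mentioned above):

    import numpy as np
    from PIL import Image

    # 256 steps, each exactly step_w pixels wide: no dithering, and
    # every level appears the same number of times.
    step_w, height = 4, 128
    ramp = np.repeat(np.arange(256, dtype=np.uint8), step_w)  # 1024 px
    img = np.tile(ramp, (height, 1))
    Image.fromarray(img).save("gray_ramp.png")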