In a message dated 6/21/06 7:19 AM, Peter Karp wrote:
Now comes the point where one sees that most (if not all) of the L*
promoting is marketing speech:
That's dedication to the job! If I were a loyal employee of a manufacturer that doesn't offer this feature in its monitor calibration software, I would make a similar statement ;-)
Some of Peter's arguments are valid, though; some are not. This unholy mix makes it hard to separate fact from fiction. Let me try to elaborate on that a bit:
In order to evaluate display quality, you have to look at the entire chain from the creation of an image through (one or several) conversions to the final output.
You convert from RAW to a working space, from there to (an output space and then to) the monitor space. Last but not least, there is another conversion to the physiological properties of the human visual system, which is being mimicked in the L*a*b* color system.
The most lossless chain would thus be: conversion from RAW to a color space with a tonal response curve that equals that of the human visual system, i.e. L*, and then on to a monitor that is calibrated to the same tonal response curve. This results in zero loss, not counting the loss in the LUTs.
If your monitor is calibrated to, let's say, gamma 1.8, your working space is gamma 2.2 and your visual system is (approximately) L*, you lose 18.4% or 46 steps out of 255 in each of your primaries R, G and B. And this is not counting the additional loss in the LUT of the graphics card.
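To make this loss argument concrete, here is a small sketch of my own (assuming idealized pure power-law curves and the textbook CIE 1976 L* formula; real calibration software works with measured curves and higher-bit LUTs, so the exact counts differ). It counts how many distinct 8-bit steps the visual system can still tell apart after the chain working space -> monitor -> eye:

```python
def make_gamma(g):
    """Pure power-law ("gamma") transfer curve as a (decode, encode) pair."""
    def decode(c):          # 8-bit code -> linear light (0..1)
        return (c / 255.0) ** g
    def encode(y):          # linear light (0..1) -> rounded 8-bit code
        return round(255.0 * y ** (1.0 / g))
    return decode, encode

EPS = (6.0 / 29.0) ** 3     # CIE threshold between cube-root and linear segment

def lstar_encode(y):
    """Linear luminance (0..1) -> rounded 8-bit L* code (CIE 1976 lightness)."""
    L = 116.0 * y ** (1.0 / 3.0) - 16.0 if y > EPS else y * (29.0 / 3.0) ** 3
    return round(255.0 * L / 100.0)

def lstar_decode(c):
    """8-bit L* code -> linear luminance (0..1), inverse of lstar_encode."""
    L = 100.0 * c / 255.0
    return ((L + 16.0) / 116.0) ** 3 if L > 8.0 else L * (3.0 / 29.0) ** 3

def surviving_steps(ws_decode, mon_decode, mon_encode):
    """Distinct 8-bit L* codes left after working space -> monitor -> eye,
    with 8-bit quantization at the monitor stage."""
    seen = set()
    for c in range(256):
        y = ws_decode(c)                       # working-space code -> linear light
        m = mon_encode(y)                      # re-quantized to the monitor's curve
        seen.add(lstar_encode(mon_decode(m)))  # emitted light, as the eye grades it
    return len(seen)

g22_dec, g22_enc = make_gamma(2.2)
g18_dec, g18_enc = make_gamma(1.8)

# Mismatched chain: gamma 2.2 working space on a gamma 1.8 monitor.
print(surviving_steps(g22_dec, g18_dec, g18_enc))
# All-L* chain: working space, monitor and eye share one curve.
print(surviving_steps(lstar_decode, lstar_decode, lstar_encode))
```

With both ends on the same curve, the all-L* chain keeps all 256 codes; the mismatched gamma chain collapses codes, which is the loss described above. The exact counts depend on these rounding assumptions, so treat the numbers as illustrative, not as basICColor's actual math.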
Is L* a dumb idea then? No, it's a good idea in general. But benefits
will be small and only be seen under very specific conditions. And
don't forget that potential benefits can _only_ have an effect, when
the data is original in L*.
As I have shown above, the conversion to L* has to occur anyway in Peter's case, between the monitor and the human visual system. So, if at least one component represents the L* tonal response curve, you lose less. I fully agree that working in LStarRGB from the beginning would be better – that's why we designed LStarRGB in the first place. BTW, LStarRGB has been or will be submitted to the ISO for standardization (by ECI and under the name eciRGBv2).
For
example when working with Adobe-RGB files a gamma of 2.2 is best, for
ECI-RGB files a gamma of 1.8, for L*-RGB files a 'gamma' of L* and so
on.
So, you want to switch your monitor calibration each time you display an image with a different working space?
Speaking of working spaces, the most common working space is not AdobeRGB but sRGB. And the people who designed sRGB knew what they were doing. Although it appears to be gamma 2.2 (e.g. in Photoshop), it is not. In the dark region, the tonal response curve of sRGB resembles the L* function; only from the midtones to the highlights is it approximately gamma 2.2. So, if you don't want to switch calibrations constantly, L* comes out on top again.
For those who want to switch constantly (or work in sRGB exclusively), basICColor display 4 offers an sRGB calibration as well.
The L* calibration thing is in my eyes mostly marketing hype
That's why a renowned monitor manufacturer now supports L* calibration as well?
I would never say that L* is the ONLY way to go – the best calibration method depends on many factors, some of which Peter has stated in his posting. That's why display 4 offers gamma values (even with two-decimal accuracy) along with L* and even the sRGB tonal response curve.
So, we can bury the marketing rant and use whatever fits our needs best.