Re: some thoughts on CIELAB, part 2
- Subject: Re: some thoughts on CIELAB, part 2
- From: Robin Myers <email@hidden>
- Date: Fri, 07 Feb 2003 09:31:27 -0800
<Part 2 of response>
Graeme Gill wrote:
As for color appearance models, I made no mention of these in my post. I
applaud the work on the CIECAM model, but until it has scientific
foundations for all the variables in its equations, I will not implement
it for color matching. Notice, I said color matching, not appearance
matching. I do not attempt to match the complete appearance of a color,
especially in reference to surround and surface effects. No one has
yet satisfied me that a model that works well for single colors on a
surround will work as well for a color in a photographic image, where the
surround may be quite varied.
Unfortunately, the deficiency of L*a*b* in the blue region is one of the
big problems in applying L*a*b* in the photographic world. Many color
profile generation programs do very poorly here.
L*a*b* is not all that uniform. If you examine the area occupied by an
equal perceptual difference as the distance from the achromatic point is
increased, the area changes significantly with distance and hue, yet the
L*a*b* delta-E remains constant. So if you sample the space regularly
for a CLUT, the perceptual sampling is irregular!
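That irregularity is easy to show numerically. Here is a small sketch (the helper name and sample points are mine; CIE94 stands in as a rough proxy for perceived difference) comparing two color pairs that are the same Euclidean CIE76 distance of 5.0 apart, one near the neutral axis and one at high chroma:

```python
import math

def delta_e_cie94(lab1, lab2):
    """CIE94 (graphic arts weights) color difference.

    Unlike the plain Euclidean CIE76 delta-E, CIE94 scales down
    chroma and hue differences as chroma increases."""
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    dL = L1 - L2
    C1 = math.hypot(a1, b1)
    C2 = math.hypot(a2, b2)
    dC = C1 - C2
    da, db = a1 - a2, b1 - b2
    # Hue difference is what remains of the a*/b* distance
    # after the chroma component is removed.
    dH2 = max(da * da + db * db - dC * dC, 0.0)
    sC = 1.0 + 0.045 * C1
    sH = 1.0 + 0.015 * C1
    return math.sqrt(dL * dL + (dC / sC) ** 2 + dH2 / (sH * sH))

# Both pairs are exactly 5.0 apart in Euclidean (CIE76) terms:
near_neutral = delta_e_cie94((50, 5, 0), (50, 10, 0))
high_chroma = delta_e_cie94((50, 95, 0), (50, 100, 0))
print(round(near_neutral, 2), round(high_chroma, 2))  # 4.08 0.95
```

The same CIE76 step of 5.0 reads as roughly four times larger near neutral than at high chroma, which is exactly the kind of irregularity a regular L*a*b* grid bakes into a CLUT.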
As for 8-bit representations, it has always been my contention that all
color calculations should be in floating point and that the profile
should use standard floating point representations. The 8-bit L*a*b*
representation results in a built-in error of 0.7338 delta-E from the
encoding alone.
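The 0.7338 figure follows directly from the encoding's step sizes. A minimal check, assuming the standard ICC 8-bit Lab encoding (L* from 0 to 100 in 256 codes, a* and b* from -128 to 127 in 256 codes):

```python
import math

# 8-bit ICC Lab encoding step sizes:
STEP_L = 100.0 / 255.0   # L* range 0..100 over 256 codes
STEP_AB = 1.0            # a*, b* range -128..127 over 256 codes

# Rounding to the nearest code leaves each channel off by at most
# half a step; the worst case hits that bound on all three at once.
worst_case = math.sqrt((STEP_L / 2) ** 2 + 2 * (STEP_AB / 2) ** 2)
print(f"{worst_case:.4f}")  # 0.7338
```

The a* and b* half-steps of 0.5 dominate; the finer L* step contributes comparatively little to the built-in error.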
> Having compared quite a few profiles made for the same
> device in XYZ and L*a*b* PCS, the L*a*b* CLUT version
> seems to almost always have the lowest errors (i.e. highest
> correspondence to the measured data points).
This may be true for the XYZ CLUT profiles you have tested. Perhaps it
is caused by some CMMs that always perform their calculations in L*a*b*
and therefore convert XYZ to L*a*b* before computation. For example, the
last time I checked, the Apple CMM did this.
The XYZ color matching algorithms developed as the basis of ColorSync 1
(notice I did not say the ones IN ColorSync 1) create closer matches
for in-gamut colors than the ICC CLUT profiles and CMMs in my tests.
Unless we get together and exchange data, I see no way to agree on this point.
> <snip>
>
> You can use almost any space you like for gamut mapping if you
> twist the gamut mapping itself appropriately. The whole point
> about choosing a particular space for doing gamut mapping
> is to make the mapping itself as simple as possible.
As I mentioned in my previous post, I perform very little gamut mapping.
Even when working with client devices and ICC profiles, I have almost
never needed to use the perceptual rendering intent. Relative
colorimetric works quite well.
> One starting point criterion might be that the shortest
> distance in the colorspace correspond to the perceptually
> closest distance, on both a small and large scale.
> L*a*b* isn't terribly suitable under this criterion,
> but XYZ is many times worse. Most work currently
> seems to be going on using CIECAM97 or similar,
> since a successful color appearance space will
> tend to meet this criterion. I've certainly had satisfactory
> results using a "shortest distance" weighted algorithm
> in a modified CIECAM97 colorspace.
As noted above, when CIECAM is finished to a level where instruments are
available to measure all the variables in the model, I may switch to
using it.
> <snip>
>
> > For a really interesting weird view of L*a*b* space and how it distorts
> > things, take a look at the spectral locus, including the purple line,
> > converted into L*a*b*. Do not use the diagram in Wyszecki and Stiles
> > since it is a monochrome image and difficult to visualize the effect.
>
Agreed, but as a way of representing color values, so what ? The thing L*a*b*
>
has going for it is that it is a well known, standard, roughly perceptual
>
space. It's generally regarded as having a 3:1 variation from actual perceptual
>
uniformity. While using a linear interpolation algorithm to interpolate
>
between CLUT values may not make much sense from a color mixing model
>
point of view, it makes good sense in the context of performing an
>
interpolation between relatively closely spaced sample points.
I think we agree to disagree here. I consider L*a*b* to be so "rough" as
to be critically flawed for color matching. I only use it as a crude
benchmark for color differences (its original purpose). The ICC has
dismissed out of hand any algorithmic color matching, with a strict,
almost religious fervor for CLUT-only methods. I think that as good
scientists, we are required to keep our minds open to other methods.
However, the ICC seems to be more political than scientific.
All the rest of my objections to L*a*b* for color matching were very
eloquently written by Tom Lianza in his post. I should think the assent
of one of the Grand Masters of Color, Dr. Hunt, about the failings of
L*a*b* would be enough to make anyone question the current ICC model.
Best regards,
Robin Myers
<end of Part 2>
_______________________________________________
colorsync-users mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/colorsync-users
Do not post admin requests to the list. They will be ignored.