Hi Barry, I read your interesting article and have some comments that might be applicable.

When speaking with conservationists, I always ask the question: "What is the intended use of the data after acquisition?" The concept of "accurate" color reproduction cannot be considered apart from the illuminant used to view the color. In my opinion, the selection of D50 as a standard illuminant ranks as one of the worst decisions in the history of color analysis, and applying D50 in the L*a*b* normalization (i.e., "wrong von Kries" scaling) only compounds the issue. D50 is physically unrealizable, and it does not represent any viewing conditions one would expect in a conservation exercise. As a matter of fact, the UV content of D50 far exceeds the allowable exposure for most conservation considerations.

Naturally, to examine the colors as rendered to sRGB, you need to do some chromatic adaptation to "discount" the illuminant. This is also problematic. Take a look at www.brucelindbloom.com -> Info -> Chromatic Adaptation Evaluation. The XYZ scaling method is the one used in L*a*b*, and it performs really, really badly. Note also that chromatic adaptation scaling tends to fail on the low side of the actual color shift; that is, it underestimates the shift in chroma (C*) in the L*a*b* system. It is therefore highly probable that more colors actually fall outside the sRGB gamut than you have calculated, if you treat the data spectrally and then do the conversions. (A small numeric comparison of XYZ scaling versus Bradford is sketched in the first P.S. below.)

My advice to conservationists is to acquire the ground-truth data spectrally, and to avoid L*a*b* at all costs. Apply the physical illuminant to the spectral data, and acquire the camera data under that same illuminant. Build the camera profile using XYZ as calculated under the illuminant in use. Note that the ICC profile will insist that the data in XYZ space be chromatically adapted using Bradford or some other specified adaptation. Great care was taken to ensure that the chromatic adaptation is invertible, so this does no harm to the original data capture; the invertibility of Bradford ensures that nothing is lost on output.

So my advice:
1. Train the camera under the illuminant used to view the art.
2. Measure the art or target spectrally.
3. Convert to XYZ space by applying the viewing illuminant to the spectral data.
4. Build the appropriate profile or calibration table using this data.
(A sketch of steps 2-3 is in the second P.S. below.)

In this instance, you should avoid L*a*b* like the plague. It is an indelicate obscuration of the intent of the rendering.

Regards,
Tom Lianza
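
P.S. For illustration only, here is a minimal numeric sketch (in Python, my choice here) comparing plain XYZ scaling with the Bradford transform when adapting the same color from D50 to D65. The test color is arbitrary and the white points are the usual tabulated values; see the Lindbloom page above for the full evaluation.

```python
# Minimal sketch of the Lindbloom comparison: adapt one XYZ colour from
# D50 to D65 with plain XYZ scaling ("wrong von Kries", the method baked
# into L*a*b*) and with the Bradford transform.  Numbers are illustrative.
import numpy as np

D50 = np.array([0.96422, 1.00000, 0.82521])   # reference white XYZ
D65 = np.array([0.95047, 1.00000, 1.08883])

M_BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                       [-0.7502,  1.7135,  0.0367],
                       [ 0.0389, -0.0685,  1.0296]])

def adapt_xyz_scaling(xyz, src_white, dst_white):
    # Scale each XYZ channel by the ratio of the white points.
    return xyz * (dst_white / src_white)

def adapt_bradford(xyz, src_white, dst_white):
    # Transform to sharpened cone responses, scale, transform back.
    # The matrix is invertible, so the adaptation can be undone exactly.
    cone_src = M_BRADFORD @ src_white
    cone_dst = M_BRADFORD @ dst_white
    scale = np.diag(cone_dst / cone_src)
    return np.linalg.inv(M_BRADFORD) @ scale @ M_BRADFORD @ xyz

xyz = np.array([0.30, 0.20, 0.10])            # an arbitrary test colour
print("XYZ scaling:", adapt_xyz_scaling(xyz, D50, D65))
print("Bradford:   ", adapt_bradford(xyz, D50, D65))
```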
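
P.P.S. And a rough sketch of steps 2-3: integrate measured spectral reflectance against the measured SPD of the actual viewing illuminant to get XYZ, then check whether the result lands inside the sRGB gamut. The file names and the 10 nm sampling are placeholders for whatever your instruments actually produce.

```python
# Minimal sketch: spectral reflectance -> XYZ under the measured viewing
# illuminant -> linear sRGB, with a gamut check.  File names and the
# 380-730 nm / 10 nm sampling are placeholders (assumptions), not a spec.
import numpy as np

wavelengths = np.arange(380, 731, 10)                                 # nm
reflectance = np.loadtxt("patch_reflectance.csv", delimiter=",")      # one patch
illuminant  = np.loadtxt("viewing_illuminant.csv", delimiter=",")     # measured SPD
cmf         = np.loadtxt("cie_1931_2deg_cmf.csv", delimiter=",")      # x_bar, y_bar, z_bar

# Integrate stimulus * observer, normalised so Y of the perfect diffuser = 1.
k   = 1.0 / np.sum(illuminant * cmf[:, 1])
XYZ = k * (reflectance * illuminant) @ cmf                            # shape (3,)

# Linear sRGB from XYZ (standard IEC 61966-2-1 matrix, D65 white).  If the
# viewing illuminant is not D65, you would chromatically adapt XYZ first.
M = np.array([[ 3.2406, -1.5372, -0.4986],
              [-0.9689,  1.8758,  0.0415],
              [ 0.0557, -0.2040,  1.0570]])
rgb_linear = M @ XYZ

# Any channel outside [0, 1] means the colour falls outside the sRGB gamut.
out_of_gamut = bool(np.any(rgb_linear < 0) or np.any(rgb_linear > 1))
print("linear sRGB:", rgb_linear, "out of gamut:", out_of_gamut)
```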