A discussion on accurate color ...
Hi - Some time ago I asked for comments and suggestions as Michael Bennett (UConn Library) and I (Library of Congress) began working on a paper for the IS&T conference that was held last week. In reviewing some of the issues we have in obtaining accurate color as we image cultural heritage materials from our collections, we decided to take direct measurements from our documents using an X-Rite 530 spectro. I got several comments from this group which were extremely helpful, and now I would like to share our results.

Our initial goal was to investigate whether our color space specification (sRGB or occasionally AdobeRGB(98), and perhaps most frequently unspecified RGB - all in TIFF master files, generally 8-bit, although we now use 16-bit for exceptionally high-value materials) was leading to inaccurate colors. We expect to take measurements from a sample of documents throughout the Library and then develop research questions for more detailed study.

I have begun direct measurements, collecting Lab values generally. The measurements are so time-consuming that I only do full spectral readings on special request. So far I have done 10 items from our general collection, 10 items from our Prints and Photographs collection, and 4 items from our Geography and Maps collection. The process is ongoing - I expect to finish maps and move on to our Music, Manuscripts, and Rare Books collections soon. So far we have over 700 readings of a wide variety of colors.

Some initial results are interesting:

1. Of the 700+ readings, only 2 are outside the sRGB color gamut. Both of those are on coated paper on tipped-in illustrations in a 1960s gem identification book.

2. Bright, vivid colors of tropical birds, almost neon colors from posters, and even gold gilt are all within the sRGB gamut.

3. There are many subtle color variations within the central core of sRGB that curators are extremely interested in. Not just paper "white" but also colors of low saturation, given the low reflectivity of the paper and inks in old books and documents.

Using ColorThink, we overlaid the colors on the ColorChecker (the original, large-patch version with 18 colors and a 6-step grayscale) that we generally use to calibrate our cameras, and on the newer ColorChecker SG chart.

4. The colors of the cultural heritage materials are not very similar to the colors of the ColorChecker.

5. The more numerous colors of the ColorChecker SG are more similar and might provide a better calibration set.

We have posted the following documents to the UConn repository.

The complete paper: http://digitalcommons.uconn.edu/libr_pubs/37
The IS&T presentation: http://digitalcommons.uconn.edu/libr_pres/31
The current data: http://digitalcommons.uconn.edu/libr_pubs/35

The datasets will be updated as more document measurements are collected.

Thanks,
-Barry

F. Barry Wheeler
Digital Projects Coordinator
Office of Strategic Initiatives
The Library of Congress
bwhe@loc.gov
202 707 8581
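The gamut check Barry describes can be sketched in a few lines: take a measured Lab value (D50-referenced, as a reflectance spectro reports it), convert to XYZ, adapt D50 to D65 with the Bradford transform, project into linear sRGB, and see whether all three channels land in [0, 1]. This is only a sketch of one plausible pipeline (the matrices are the standard published ones, rounded); it is not necessarily the exact procedure used for the paper.

```python
import numpy as np

# D50 reference white (XYZ, Y normalized to 1)
D50_WHITE = np.array([0.96422, 1.00000, 0.82521])

# Bradford chromatic adaptation matrix, D50 -> D65
BRADFORD_D50_TO_D65 = np.array([
    [ 0.9555766, -0.0230393,  0.0631636],
    [-0.0282895,  1.0099416,  0.0210077],
    [ 0.0122982, -0.0204830,  1.3299098],
])

# XYZ (D65) -> linear sRGB
XYZ_D65_TO_LINEAR_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def lab_to_xyz_d50(L, a, b):
    """CIE Lab (D50) to XYZ, standard inverse f() with the linear toe."""
    fy = (L + 16.0) / 116.0
    fx, fz = fy + a / 500.0, fy - b / 200.0
    def finv(t):
        return t**3 if t > 6.0/29.0 else 3.0 * (6.0/29.0)**2 * (t - 4.0/29.0)
    return D50_WHITE * np.array([finv(fx), finv(fy), finv(fz)])

def in_srgb_gamut(L, a, b, tol=1e-6):
    """True if the Lab value maps inside the linear sRGB unit cube."""
    xyz_d65 = BRADFORD_D50_TO_D65 @ lab_to_xyz_d50(L, a, b)
    rgb = XYZ_D65_TO_LINEAR_SRGB @ xyz_d65
    return bool(np.all(rgb >= -tol) and np.all(rgb <= 1.0 + tol))
```

For example, a mid gray Lab(50, 0, 0) is comfortably inside, while a hyper-saturated Lab(50, 120, 0) clips both the red and green channels.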
Hi Barry, I read your interesting article and I have some comments that might be applicable.

When speaking with conservationists, I always ask the question: "What is the intended use of the data after acquisition?" The concept of "accurate" color reproduction cannot be considered outside of the illuminant that is used to view the color. In my opinion, the selection of D50 as a standard illuminant ranks as one of the worst decisions in the history of color analysis, and the application of D50 to Lab normalization (i.e., wrong von Kries) just compounds the issue. D50 is physically unrealizable, and it does not represent any viewing conditions that one would expect in a conservation exercise. As a matter of fact, the UV content of D50 far exceeds the allowable exposure for most conservation considerations.

Naturally, to examine the colors as rendered to sRGB, you would need to do some chromatic adaptation to "discount" the illuminant. This is also problematic. Take a look at www.brucelindbloom.com -> Info -> Chromatic Adaptation Evaluation. The XYZ scaling is the one used in Lab. Really, really bad... Note also that chromatic adaptation scaling tends to fail on the low side of the actual color shift (i.e., it minimizes the shift in colorfulness (C*) in the Lab system). It is highly probable that more colors actually fall out of the sRGB gamut than you have calculated, if you treat the data spectrally and then do the conversions.

My advice to conservationists is to acquire the ground-truth data spectrally. Avoid Lab at all costs. Apply the physical illuminant to the spectral data and acquire the camera data under that illuminant. Build the camera profile using XYZ as calculated under the illuminant in use. Note that the ICC profile will insist that the data in XYZ space be chromatically adapted using Bradford or some other specified adaptation.
Great care was taken to ensure that the chromatic adaptation be invertible, so this will not do harm to the original data capture. The invertibility of Bradford ensures that nothing is lost on output.

So my advice:

1. Train the camera under the illuminant used to view the art.
2. Measure the art or target spectrally.
3. Convert to XYZ space using the viewing illuminant applied to the spectral data.
4. Build the appropriate profile or calibration table using this data.

In this instance, you should avoid Lab like the plague. It is an indelicate obscuration of the intent of the rendering.

Regards,
Tom Lianza
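Step 3 of Tom's recipe - applying the actual viewing illuminant to the spectral data to get XYZ - is the standard tristimulus integration. A minimal sketch follows; the color-matching functions here are crude Gaussian stand-ins for the CIE 1931 observer (purely illustrative - in practice you would use the tabulated CMFs and the measured SPD of your illuminant):

```python
import numpy as np

# Wavelength grid, 380-730 nm in 10 nm steps
wl = np.arange(380, 731, 10.0)

def gauss(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Rough Gaussian approximations of the CIE 1931 CMF shapes (NOT the real tables)
xbar = 1.06 * gauss(599, 38) + 0.37 * gauss(442, 16)
ybar = 1.00 * gauss(556, 46)
zbar = 1.78 * gauss(449, 22)

def reflectance_to_XYZ(R, S):
    """XYZ of reflectance R viewed under illuminant SPD S (Y of white = 100)."""
    k = 100.0 / np.sum(S * ybar)  # normalize so the perfect diffuser has Y = 100
    return k * np.array([np.sum(S * R * xbar),
                         np.sum(S * R * ybar),
                         np.sum(S * R * zbar)])

S_flat = np.ones_like(wl)    # hypothetical equal-energy illuminant
R_white = np.ones_like(wl)   # perfect reflecting diffuser
XYZ = reflectance_to_XYZ(R_white, S_flat)
```

By construction, the perfect diffuser comes out with Y = 100 under any illuminant, and a flat 50% gray with Y = 50; swapping in the real illuminant SPD is then just a change of `S`.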
Hi Barry, Hi Tom:

The procedure at my institution is to avoid .tif or .jpg and shoot raw from a DSLR camera. Of course, the raw is .dng.

"I always ask the question: 'What is the intended use of the data after acquisition?'" - it can be the institution's website, an inkjet print, or offset, so the DNG raw works with the software tools that most of us have access to: DNG Converter, DNG Profile Editor, PS CS, ...

Are you saying that GaMapICC isn't useful? Or RoughProfiler? You can make conversions taking the light source used for viewing into account. As far as I know, the lights in exhibition spaces are "something near" D50 (halogen tungsten), or the manufacturers try to get there. And what about the fast chromatic adaptation of the human eye?

I reproduce art and photographic artifacts with flash, not with an almost monochromatic light, as anyone can see in an SPD graph, because flash is the least spiky light source that I know of.

On the other hand, isn't L*a*b* the ACR working space? L* for the exposure and brightness, a* for the color temperature, and b* for the hue/shade (I think my translator doesn't work well here).

For me, the least useful illuminant is D65, which is high-latitude daylight most of the time, far away from the 4950 K of subtropical environments.

Cheers,
Jose Bueno
Hi Jose,

No, L*a*b* is not the ACR internal working space. (*)

ACR's primary internal representation uses the ProPhoto/RIMM/ROMM RGB primaries. Exposure and brightness are based on the traditional logarithmic base 2 formulation (i.e., in stops). So boosting Exposure to +1, for example, means doubling the linear scene-referred values (i.e., increasing exposure by 1 stop). The white balance math (color temperature and tint) simply associates user-chosen white points (e.g., tungsten) to corresponding camera whites, by means of a profile's color matrix or matrices. More details (for developers) can be found in the public DNG SDK on the Adobe site.

Cheers,
Eric

(*) ACR does use L*a*b* for some internal color difference estimates, e.g., for auto-calculated masks.
On May 24, 2011, at 11:51 AM, Eric Chan wrote:
So boosting Exposure to +1, for example, means doubling the linear scene-referred values (i.e., increasing exposure by 1 stop).
And based on bracketing I often do, at least with a Canon 5D Mark II, the Exposure slider seems to be fairly accurate in what it reports and produces. Andrew Rodney http://www.digitaldog.net/
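Eric's description of the Exposure control (which Andrew's bracketing tests corroborate) reduces to simple stop arithmetic on linear scene-referred values. A minimal sketch (the function name is mine, not ACR's):

```python
def apply_exposure(linear_value, stops):
    """Scale a linear scene-referred value by 2**stops.

    +1 stop doubles the value, -1 stop halves it, matching the
    logarithmic base-2 formulation Eric describes.
    """
    return linear_value * (2.0 ** stops)
```

So an 18% gray pushed to Exposure +1 becomes 36%, and pulled to -1 becomes 9%.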
On May 24, 2011, at 17:51, Eric Chan <madmanchan2000@yahoo.com> wrote:
Hi Jose,
No, L*a*b* is not the ACR internal working space. (*)
Well, that is a doubt I have had for the last two months.
ACR's primary internal representation uses the ProPhoto/RIMM/ROMM RGB primaries. Exposure and brightness are based on the traditional logarithmic base 2 formulation (i.e., in stops). So boosting Exposure to +1, for example, means doubling the linear scene-referred values (i.e., increasing exposure by 1 stop). The white balance math (color temperature and tint) simply associates user-chosen white points (e.g., tungsten) to corresponding camera whites, by means of a profile's color matrix or matrices. More details (for developers) can be found in public DNG SDK on the Adobe site.
I had thought that the linear values were compensated by Black, Brightness, and Contrast the way gamma correction does, to simulate the non-linear response of the human eye. That works fine as an initial adjustment, but I usually leave those values at zero for scene-referred work if I adopt the ICC workflow proposal.
Cheers, Eric
(*) ACR does use L*a*b* for some internal color difference estimates, e.g., for auto-calculated masks.
Then, are there internal color conversions, or is this something that only has meaning in the interaction with the camera profiles from DNG Profile Editor? Thanks, and excuse my ignorance about the tool I use every day. Jose Bueno
Sorry: Then, are there internal color conversions, or is this something that only has meaning in the interaction with the camera profiles from DNG Profile Editor? It is a question.
Jose,

ACR/LR does perform internal color transforms as needed to carry out its various image processing routines (e.g., noise reduction, vibrance, etc.). However, this is all done internally and has no real connection to the user-specified color space of the rendered output file (e.g., sRGB, Adobe RGB, etc.).

The camera profiles produced by DNG Profile Editor and other software (e.g., X-Rite's ColorChecker Passport) are responsible for transforming the camera-native image values (e.g., the primaries determined by a sensor's color filters) to RIMM. At least a color matrix is required, but other components (such as optional lookup tables and a tone curve) can be included, too. This is largely independent of the other internal color transform described in the previous paragraph.

Cheers,
Eric
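The matrix step Eric describes - a camera profile's 3x3 color matrix mapping camera-native RGB toward the connection space - can be sketched as below. The matrix values here are purely hypothetical stand-ins; a real one is computed by the profiling software from target shots:

```python
import numpy as np

# Hypothetical 3x3 camera matrix (NOT a real profile's values): maps
# camera-native RGB to a scene-referred tristimulus space.
CAMERA_TO_XYZ = np.array([
    [0.70, 0.15, 0.10],
    [0.25, 0.70, 0.05],
    [0.05, 0.10, 0.90],
])

def camera_to_xyz(rgb):
    """Apply the profile's color matrix to a camera-native RGB triple."""
    return CAMERA_TO_XYZ @ np.asarray(rgb, dtype=float)
```

With this matrix, camera white (1, 1, 1) maps to the row sums (0.95, 1.00, 1.05); in a real profile, the optional lookup tables and tone curve Eric mentions would refine what this single linear step cannot capture.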
Eric: On May 28, 2011, at 13:18, Eric Chan <madmanchan2000@yahoo.com> wrote:
Jose, ACR/LR does perform internal color transforms as needed to carry out its various image processing routines (e.g., noise reduction, vibrance, etc.). However, this is all done internally and has no real connection to the user-specified color space of the rendered output file (e.g., sRGB, Adobe RGB, etc.).
From ISO 22028?
The camera profiles produced by DNG Profile Editor and other software (e.g., X-Rite's ColorChecker Passport) are responsible for transforming the camera-native image values (e.g., the primaries determined by a sensor's color filters) to RIMM. At least a color matrix is required, but other components (such as optional lookup tables and a tone curve) can be included, too. This is largely independent of the other internal color transform described in the previous paragraph.
And the only thing I miss in both pieces of software is more information about the quality of the profiles. Also, rereading "Fundamentos de Colorimetría" clarified Tom Lianza's comment for me.
Cheers, Eric
Thank you, Eric.

Cheers,
Jose Bueno
Hello Barry,

Wheeler, Barry wrote:
4. The colors of the cultural heritage materials are not very similar to the colors of the ColorChecker. 5. The more numerous colors of the ColorChecker SG are more similar and might provide a better calibration set.
What do you mean by "similar colors"? Similar in terms of spectral properties, or just in terms of tristimulus values?

I set up a database with approx. 60,000 spectral measurements (among others, many drawdowns of hand-ground oil paints prepared with ancient pigments after historic recipes, in the context of a research project) and found that neither the ColorChecker 24 nor the ColorChecker SG can adequately represent the spectral properties and "metameric challenges" of typical cultural heritage objects to be scanned -- at least not with the sensors and light sources I was confronted with. I cross-checked and confirmed my finding by means of relevant sub-sets of the SOCS database (ISO/TR 16066: Standard object colour spectra database for colour reproduction evaluation) as well as additional spectral data from paint manufacturers.

My research took more than two years and resulted in a tailor-made scanner calibration and profiling target exclusively for Cruse Scanners (with 809 patches). Tests with prototypes are successfully finished and we are starting to manufacture the target right now.

Klaus Karcher
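One simple way to test whether a target's patches "represent" a collection in the spectral sense Klaus describes is to reconstruct each collection spectrum as a least-squares combination of the target spectra and inspect the residual: spectra inside the target's span reconstruct perfectly, spectra outside it do not. All spectra below are synthetic placeholders, not real target or collection data:

```python
import numpy as np

rng = np.random.default_rng(0)
wl_bands = 36  # e.g. 380-730 nm in 10 nm steps

# Stand-in for a 24-patch target's reflectance spectra (columns)
target = rng.random((wl_bands, 24))

# Five spectra constructed to lie in the target's span
inside = target @ rng.random((24, 5))
coef, *_ = np.linalg.lstsq(target, inside, rcond=None)
residual_in = np.linalg.norm(target @ coef - inside)

# One arbitrary spectrum with no such guarantee
outside = rng.random((wl_bands, 1))
coef2, *_ = np.linalg.lstsq(target, outside, rcond=None)
residual_out = np.linalg.norm(target @ coef2 - outside)
```

A target that leaves large residuals on a collection's spectra cannot, by itself, anchor a spectrally faithful calibration for that collection - which is the kind of gap an enlarged, purpose-built patch set aims to close.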
Hello Klaus,

Given successive changes in pigment qualities through time, I wonder how it works if, say, you have several reds in that target that are based on different pigments ranging from the 12th century up to now. They have to be selected as an average of the pigments used through time and are thereby a compromise again. Would it not be better to make more targets, with pigments that represent a certain period in time and/or type of art? That is still a rough classification, considering the different speeds at which new pigments were adopted in each area: what was ground by Jan van Eyck in 1420 may not have been in use in Russia two centuries later.

--
With kind regards, Ernst Dinkla

Gallery Canvas Wrap Actions | Dinkla Grafische Techniek | www.pigment-print.com | ( unvollendet ) |
Hello Ernst,
As far as my experience goes, the properties of the "ideal" training set are more affected by the properties of the sensor and light source than by typical pigments. Sets of spectra that cover large parts of the metamer mismatch regions of a sensor or observer can be found or produced with historic paints as well as with modern colorants (e.g., with color formulation systems, printing inks, ...).

As soon as there are more than 3 basis colors to be considered, the chances of improving reproduction accuracy by excluding "unrealistic" areas from the metamer mismatch regions decrease dramatically. And at least where paintings are concerned, arbitrary mixes of more than 3 basis colors, even within one original, are quite common.

Even if you restrict the basis of your training and test sets to just 4 colors (e.g., if you print all test and training sets with just one particular CMYK printer), but make use of the whole space spanned by this basis (i.e., if you don't use a fixed separation rule), you will note that there are situations where different CMYK values result in the same response for one of your "observers" (camera or human observer), but in distinct responses for the other observer. As soon as this happens, the mapping between camera and observer space can no longer be unambiguous. And as soon as there is more than one CMYK value that induces a certain camera or observer response, you can find /an infinite number/ of CMYK values that induce exactly the same response for one of the observers (but different responses for the other one). If you can, e.g., print the same shade of gray once with pure black and once with pure CMY, you can find an arbitrary number of CMYK combinations "in between" that also result in the same Lab value, but in different RGB values.

The set of spectra that result in the same camera or observer response is the metamer set that induces this response. It is infinite, closed, and convex. The projection of the metamer set for a certain camera response onto the observer space is the metamer mismatch region. This region is also closed and convex.

There are several strategies for selecting patches for the training set: you can, e.g., select just one typical representative for each mismatch region (e.g., a CMYK patch with an "average" separation), you can include extrema (e.g., pure CMY and pure K), or you can create separate training sets for different applications (separations). Error-minimizing mappings can benefit from the fact that mismatch regions are often elongated and oriented along a certain axis.

Klaus Karcher
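Klaus's metamer sets can be demonstrated in miniature: add a "metameric black" (a spectrum in the null space of one observer's sensitivity curves) to a reflectance, and that observer cannot tell the two spectra apart, while a second observer with different sensitivities sees a different color. The sensitivities below are random stand-ins, not real CMFs or camera curves, and physical plausibility (non-negativity) is ignored for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16                        # number of spectral bands

A = rng.random((3, n))        # observer 1 (e.g. camera) sensitivities
B = rng.random((3, n))        # observer 2 (e.g. human) sensitivities

# A metameric black for A: any vector in A's null space. The last rows
# of Vt from the SVD of a 3 x n matrix span that null space.
_, _, Vt = np.linalg.svd(A)
black = Vt[-1]                # A @ black == 0 (to numerical precision)

R1 = rng.random(n)            # some reflectance spectrum
R2 = R1 + 0.1 * black         # physically different spectrum, same A-response
```

Observer A's responses `A @ R1` and `A @ R2` agree to machine precision, while `B @ R1` and `B @ R2` differ: R1 and R2 are a metameric pair for A but not for B, which is exactly why a camera-to-observer mapping over such a set cannot be unambiguous.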
participants (7)

- Andrew Rodney
- Eric Chan
- Ernst Dinkla
- José Ángel Bueno García
- Klaus Karcher
- Tom Lianza
- Wheeler, Barry