Colorimetric Accuracy in the Field
One more issue that rears its head in the quest for colorimetric accuracy in the field is the multiplicity of light sources encountered. Take a sunny-day landscape or architecture shot: we have the sun and then the open sky, and, just as importantly, the multiplicity of environmental reflections coming in from all directions. If you meter the sunlight, what happens to your accuracy as more or less of these lights mix in different parts of the scene?

It is my experience that color temperature will vary over as little as a couple of feet, or with the angle of the metering instrument to the light sources, or the angle of the target to the main light (the sun) and the mix of other reflections, or the color of a wall, pavement, greenery, or trees. It seems to me that colorimetric accuracy would come down to only a very specific white balance in one tiny part of a scene. One would have to completely dominate the lighting with controlled sources in order to have any predictive knowledge of the accuracy of a rendered color. And, as Andrew has pointed out, you would have to measure every color everywhere so as to have two "patches" to actually compare, an impossibility.

I heartily endorse those who seek more accurate control from their equipment, but this seems to me a quixotic quest for anything beyond testing your individual cameras and methods to achieve a precise white balance and an improved rendering-engine interpretation of the camera raw data. To those in scientific or museum situations: how colorimetrically accurate would your work be if you allowed mixed color-temperature lights and colored reflections over any part of the subject being photographed? I think we can agree it would be nil, as it is standard practice to eliminate those very things when doing highly accurate, measured photography.
Even the subtlest variation in color temperature between two light sources used to light a target for profile creation makes for a very bad profile. How can one possibly overcome the variations in lighting over an uncontrolled scene in order to claim colorimetric accuracy? Spend some time doing architectural photography with existing light, limited even to exteriors so as to avoid the extra problems of fluorescents of different age and type, tungsten sources, etc., and you will know well how color temperature varies over a scene. Scene white balance is always a compromise of individual, perceptual selection.

Thanks to all.

Jeff Stevensen
On Jun 7, 2013, at 7:26 AM, Jeffrey Stevensen <jeffstev@maine.rr.com> wrote:
And as Andrew has pointed out you would have to measure every color everywhere so as to have two "patches" to actually compare, an impossibility.
Heck, if the proponents of this colorimetric accuracy in the field measured 24 patches and provided a set of dE values, that would be a start! I wonder if those proponents need to read chapter 3 (Colorimetry) of Mark Fairchild's book again. Heck, even just the first two or three pages...

Andrew Rodney
http://www.digitaldog.net/
On Jun 7, 2013, at 9:30 AM, Andrew Rodney wrote:
I wonder if you really miss the point, Andrew, or are just distorting what other people are saying :)

--
Best regards,
Iliah Borg
Jeffrey Stevensen wrote:
If you meter the sunlight, what happens to your accuracy as more or less of these lights mix in different parts of the scene? It's my experience that color temperature will vary over as little as a couple feet or even the angle of the metering instrument to the light sources, or the angle of the target to the main light (the sun) and the mix of other reflections, or the color of a wall or pavement or greenery and trees.
Right, but this doesn't really matter - in the end it's what the human observer would use as their white point that counts. The illuminant color temperature is just a starting point to guess/estimate what that is.
It seems to me that the colorimetric accuracy would come down to only a very specific white balance in one tiny part of a scene.
Colorimetric accuracy is independent of white point, i.e. XYZ is absolute, not white-point relative. XYZ is the light levels integrated with certain spectral sensitivities, the ones typical of a human observer. The way these three levels are balanced (gain adjusted) in the eye and nervous system is what sets the observer white point. White point is something of interest after you've captured the colorimetry, when you want to re-interpret a colorimetric image for a medium or device which will cause the human observer to be adapted to a different white point than they would be in the original scene.
One would have to completely control and dominate the lighting with controlled lighting in order to have any predicted knowledge of the accuracy of a rendered color.
Not so if you have a colorimetric capture device.
And as Andrew has pointed out you would have to measure every color everywhere so as to have two "patches" to actually compare, an impossibility.
Not so, if you use a camera which is colorimetrically accurate. Using spot measurements of real-world objects under real-world illumination is just a way of confirming that your colorimetric camera is operating accurately.

There are two complementary ways in which a colorimetrically accurate camera can be approached: 1) change its spectral sensitivities to better match the human observer, or 2) come up with ways of compensating (i.e. calibrating) for the interaction of illuminant, object reflectance spectra, and the non-colorimetric camera sensitivities. The latter can never be a perfect way of repairing the first defect, and has various degrees of practical difficulty.

Graeme Gill
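The second approach Graeme describes, calibrating out the interaction between illuminant, reflectances, and non-colorimetric sensitivities, is in its simplest form a least-squares fit of a 3x3 matrix from camera RGB to measured XYZ. A minimal sketch follows; the patch values here are invented for illustration, not real measurements:

```python
import numpy as np

# Hypothetical training data: camera RGB responses and spectrophotometer
# XYZ readings for a handful of patches shot under a single illuminant.
rgb = np.array([
    [0.20, 0.12, 0.08],
    [0.60, 0.55, 0.40],
    [0.15, 0.25, 0.50],
    [0.80, 0.78, 0.75],
    [0.35, 0.10, 0.12],
    [0.10, 0.30, 0.15],
])
xyz = np.array([
    [0.18, 0.15, 0.09],
    [0.55, 0.57, 0.42],
    [0.20, 0.22, 0.55],
    [0.75, 0.79, 0.78],
    [0.28, 0.17, 0.13],
    [0.14, 0.25, 0.17],
])

# Least-squares fit of a 3x3 matrix M such that rgb @ M.T approximates xyz.
M, _, _, _ = np.linalg.lstsq(rgb, xyz, rcond=None)
M = M.T

predicted = rgb @ M.T
rms = np.sqrt(np.mean((predicted - xyz) ** 2))
print("M =\n", M)
print("RMS fit error:", rms)
```

The residual error is exactly Graeme's point: because the camera sensitivities are not a linear combination of the observer's, no single matrix drives this error to zero for all reflectance spectra, and the fit only holds for the illuminant it was trained under.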
It would seem that creating a camera with 3 accurate cone sensitivities, or even 4, is a solvable technical issue. But how can one recreate the observer's white point adaptation, especially in the presence of mixed light, if one does not register the 360-degree scene at high dynamic range?

Edmund
Hi Edmund,

Sony brought out a camera with 4-color sensitivity to closely match the CMFs in 2003. It did not go over well. http://www.dpreview.com/news/2003/7/15/sonyrgbeccd

Your point about observer white point adaptation is very important, and most folks just don't understand the real issue. We really only "see" white over a very small range of viewing conditions. That is why the chromatic adaptation matrices often fail to yield an image that even remotely represents the appearance of the scene. As the illuminant departs from a range of about 4800K to 6700K, the hue of the "white" starts to become noticeable. We don't adapt. If you hold up a white card at sunset, it looks orange. You can stare at the card for minutes and it won't appear white. The same is true at the other end of the CT range.

There is a similar corollary with exposure. As it gets dark, a diffuse white no longer represents a diffuse highlight. This effect happens because we naturally maintain a large headroom to accommodate rapid changes from ambient dark to light. This means that we have a notion of "virtual" white when viewing in dim ambient light. A "properly" exposed image of a dim scene will push the white in the scene into the reproduced highlight, and the overall scene will appear much lighter than the scene as observed. In fact, if the exposure is about 3 EV or less, you should actually underexpose the scene by at least 1.5 stops to begin to simulate what you are seeing. If you keep the camera WB at 5000K or slightly below, the relative white balance of the scene will track appearance much better than adjusting white balance to the actual illuminant. This is especially true at lower light levels.

Regards,
Tom L.
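For concreteness, the kind of full-adaptation matrix transform Tom says can fail is sketched below, using the widely published Bradford matrix and standard white points for illuminant A and D65. The mechanics are simple; the perceptual problem is that applying it at face value assumes complete adaptation, which, as Tom notes, the eye does not deliver outside a narrow CT range:

```python
import numpy as np

# Bradford chromatic adaptation: adapt an XYZ colour from illuminant A
# (roughly tungsten) to D65, assuming *complete* observer adaptation.
M_bradford = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])

wp_A   = np.array([1.09850, 1.00000, 0.35585])   # illuminant A white
wp_D65 = np.array([0.95047, 1.00000, 1.08883])   # D65 white

def bradford_adapt(xyz, wp_src, wp_dst):
    """Von Kries scaling performed in the Bradford 'cone-like' space."""
    lms_src = M_bradford @ wp_src
    lms_dst = M_bradford @ wp_dst
    scale = np.diag(lms_dst / lms_src)
    cat = np.linalg.inv(M_bradford) @ scale @ M_bradford
    return cat @ xyz

# By construction, a neutral under illuminant A maps to the D65 white,
# i.e. the model treats the tungsten white card as fully white.
print(bradford_adapt(wp_A, wp_A, wp_D65))
```

The sunset white card example is exactly where this breaks down: the transform maps the orange card to neutral, while the observer still sees orange.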
On Jun 8, 2013, at 2:50 PM, Thomas Lianza wrote:
We really only "see" white over a very small range of viewing conditions. That is why the chromatic adaptation matrices often fail to yield an image that even remotely represents the appearance of the scene. As the illuminant departs from a range of about 4800K to 6700K, the hue of the "white" starts to become noticeable. We don't adapt. [...]
Tom, do you happen to know whether CIECAM02's "degree of adaptation", D = F * (1 - (1/3.6) * exp((-La - 42)/92)), is able to model this sufficiently well in practice?

Thanks,
Gerhard
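The quoted formula is easy to evaluate directly. A small sketch, using the standard CIECAM02 surround factors (F = 1.0 average, 0.9 dim, 0.8 dark), shows how D behaves as the adapting luminance La rises:

```python
import math

def degree_of_adaptation(F, La):
    """CIECAM02 degree of adaptation:
    D = F * (1 - (1/3.6) * exp((-La - 42) / 92)),
    with F the surround factor and La the adapting luminance in cd/m^2.
    D is clamped to [0, 1] as in the model."""
    D = F * (1.0 - (1.0 / 3.6) * math.exp((-La - 42.0) / 92.0))
    return min(max(D, 0.0), 1.0)

# For an average surround, D rises toward (but never quite predicts less
# than) full adaptation as the adapting luminance increases.
for La in (1, 10, 100, 1000):
    print(La, round(degree_of_adaptation(1.0, La), 3))
```

Note that for an average surround D stays above about 0.8 even at very low La, so the formula predicts substantial adaptation everywhere; this is one quantitative way to frame Gerhard's question about whether it matches Tom's observation that we do not adapt at extreme colour temperatures.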
edmund ronald wrote:
But how can one recreate the observer's white point adaptation, especially in the presence of mixed light, if one does not register the 360-degree scene at high dynamic range?
Q. How does one estimate the white point from photos anyway?

A. There are multiple approaches in the literature. Just because none of them perfectly mimics the human observer doesn't mean that they are worthless, or impractical, or need some new data from the field to function.

Graeme Gill
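One of the classic literature approaches Graeme alludes to is the gray-world estimate: assume the scene's average reflectance is achromatic, so the channel means estimate the illuminant colour. A toy sketch (the function name and synthetic test image are invented for illustration):

```python
import numpy as np

def gray_world_whitepoint(image):
    """Estimate the scene white point under the gray-world assumption:
    the mean of each channel approximates the illuminant colour.
    image: float array of shape (H, W, 3). Returns the white normalised
    so that the green channel equals 1.0."""
    wp = image.reshape(-1, 3).mean(axis=0)
    return wp / wp[1]

# Toy example: a random grey-ish scene given a warm (reddish) cast.
rng = np.random.default_rng(0)
scene = rng.uniform(0.2, 0.8, size=(32, 32, 3))
scene *= np.array([1.3, 1.0, 0.7])   # simulate a warm illuminant
print(gray_world_whitepoint(scene))  # red high, blue low: the cast shows
```

This is the crudest member of the family; max-RGB, shades-of-gray, and learning-based estimators refine it, and none of them, as Graeme says, perfectly mimics the observer, which does not make them useless.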
Graeme,

Approximating a function (white point) by a constant may make sense, and it is a start, but anyone who has seen the circular Monet paintings will agree that one gains a lot by factoring in variable white points that depend on the orientation of the viewer's gaze.

Edmund
On Jun 8, 2013, at 1:14 PM, edmund ronald wrote:
It would seem that creating a camera with 3 accurate cone sensitivities, or even 4, is a solvable technical issue.
But how can one recreate the observer's white point adaptation, especially in the presence of mixed light, if one does not register the 360-degree scene at high dynamic range?
There do exist spatial color appearance models, like iCAM from the Munsell Color Science Laboratory. I have no idea, though, how well they perform in practice. Compared to a traditional color transformation, where the output color is a function of the input color only (independent of the image and of pixel location), such a model requires complex image processing; it is basically similar to HDR dynamic-range compression, but not limited to luminance, extending instead to color appearance.

Best Regards,
Gerhard
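The core idea behind such spatial models is a per-pixel adaptation white derived from a heavily low-passed copy of the image, so adaptation follows the local illumination. The sketch below illustrates only that principle with a simple box blur and a von Kries division per pixel; it is not the actual iCAM pipeline, and the function names are invented:

```python
import numpy as np

def box_blur(channel, radius):
    """Separable box blur: running mean along rows, then columns."""
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    out = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, channel)
    out = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, out)
    return out

def local_von_kries(image, radius=8):
    """Spatially varying adaptation in the spirit of models like iCAM:
    each pixel is divided by a blurred local white, so regions under
    different illuminants are each normalised toward neutral."""
    adapted = np.empty_like(image)
    for ch in range(3):
        local_white = box_blur(image[..., ch], radius) + 1e-6
        adapted[..., ch] = image[..., ch] / local_white
    return adapted

# A scene whose left half has a warm cast and right half a cool cast:
# after local adaptation, both halves come out near neutral.
scene = np.ones((64, 64, 3)) * 0.5
scene[:, :32] *= np.array([1.3, 1.0, 0.7])
scene[:, 32:] *= np.array([0.8, 1.0, 1.2])
out = local_von_kries(scene)
```

This is also why Gerhard compares it to HDR compression: the output at a pixel depends on its neighbourhood, not just its own colour, which is precisely what a conventional profile-style transform cannot express.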
participants (7)
- Andrew Rodney
- edmund ronald
- Gerhard Fuernkranz
- Graeme Gill
- Iliah Borg
- Jeffrey Stevensen
- Thomas Lianza