On Jun 10, 2014, at 7:04 AM, Roger Breton <graxx@videotron.ca> wrote:
> I tend to be biased and still think of sun at midday as the best we have, regardless of all its pitfalls, that it varies during the day, how it changes relative to the position around our hemisphere, time of the year and so forth...
I'm quite partial to the Sun, as well...but your post is really getting at the heart of the way our visual systems work.

Superficially, one might think that our visual systems would function like (imprecise) spectroradiometers -- that they tell us the absolute brightness and color of the light reaching our eyes. After all, that's pretty much how the mechanics of our eyes work: a photon hits one of the photoreceptors in the eye, and that creates an electrochemical impulse in the optic nerve associated with the location and color of the photoreceptor. The more photons in a shorter amount of time, the stronger the nerve signal.

However, the "software" in our brains is much more sophisticated than that. It's able to integrate the entire visual field into a pretty good estimation of the color and intensity of the illuminant, and it then adjusts our perception of objects such that what we actually see is much closer to what a contact spectrophotometer reads. As such, so long as the illuminant falls within a huge range of spectral power distributions, the stuff we look at always looks pretty much the same.

From a practical perspective, as a human living in the real world, this is fantastic. That piece of fruit you picked off the tree in the morning still looks like the same piece of fruit in the evening by the campfire.

However, it's also the source of so many of the practical problems people have to deal with in critical color applications. For, of course, the whole system only roughly approximates constancy of perception; once you start to examine it critically, the deviations not only become apparent, but can even be a bit jarring.

Just yesterday somebody sent me and at least a hundred other people a bunch of photos shot indoors with a generic digicam whose white balance algorithms suck royally. Everything was horridly yellow...and, yet, I'll bet that most people didn't even notice, and the one or two who might have noticed likely weren't particularly bothered. ...and, yet...the light in the space itself didn't at all look weird or unusual, *and* those photos are likely displaying absolute XYZ values closer to those in the original scene than a properly white balanced photo would.

So which rendering is correct? The one that (perhaps) faithfully reproduces the original colors in the scene, complete with non-Sun illuminant, or the one that faithfully reproduces the normalized appearance of what the scene would look like if viewed under the illuminant that's dominant for you right this moment -- your computer display?

I think we may eventually see the day when cameras include a spectroscope designed to record the illuminant, with that information added to the file's metadata and used for white balancing. There's no better solution to the problem, and it's not *that* big a technical challenge, all things considered.

Since I'm not keen on hauling my i1 Pro and a laptop around with me on photographic excursions, I'm personally toying with the idea of using an iPhone for that sort of thing. Lots of people are using iPhones for spectral analysis in chemistry, but I don't know of anybody doing it for graphic arts....

Cheers,

b&
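
P.S. For anyone curious what a sucky white balance algorithm looks like under the hood, here's a minimal sketch, in Python, of the classic gray-world estimate that cheap cameras tend to lean on. None of this is any real camera's firmware -- it just shows the shape of the problem: guess the illuminant from the image itself, then scale the channels von Kries-style. When the scene's average reflectance *isn't* neutral, the guess is wrong, and you get exactly the sort of horridly yellow results I mentioned above.

    import numpy as np

    def gray_world_balance(img):
        # img: float array of shape (H, W, 3), linear RGB in [0, 1].
        # Gray-world assumption: the average reflectance of the scene
        # is neutral, so the per-channel mean *is* the illuminant estimate.
        illuminant = img.reshape(-1, 3).mean(axis=0)
        # Von Kries-style gains: divide out the estimate, normalized so
        # overall brightness stays roughly where it was.
        gains = illuminant.mean() / illuminant
        return np.clip(img * gains, 0.0, 1.0)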
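
P.P.S. And here's a hedged sketch of the other half -- what a camera could do if it *did* have a measured illuminant in the metadata. Assuming the recorded scene white arrives as an XYZ white point, a standard Bradford chromatic adaptation maps scene colors from that white point to the display's D65. The matrix and white points below are textbook values; the "scene" color is a made-up stand-in.

    import numpy as np

    # Bradford cone-response matrix (standard published values).
    M_BFD = np.array([[ 0.8951,  0.2664, -0.1614],
                      [-0.7502,  1.7135,  0.0367],
                      [ 0.0389, -0.0685,  1.0296]])

    def bradford_adapt(xyz, src_white, dst_white):
        # Map XYZ values seen under src_white to their corresponding
        # appearance under dst_white (rows of xyz are colors).
        lms_src = M_BFD @ src_white
        lms_dst = M_BFD @ dst_white
        scale = np.diag(lms_dst / lms_src)  # von Kries scaling in cone space
        M = np.linalg.inv(M_BFD) @ scale @ M_BFD
        return xyz @ M.T

    # Illuminant A (tungsten-ish campfire light) as the recorded scene
    # white, D65 as the display white.
    illum_a = np.array([1.09850, 1.0, 0.35585])
    d65     = np.array([0.95047, 1.0, 1.08883])
    scene   = np.array([[0.40, 0.35, 0.10]])  # hypothetical fruit color
    print(bradford_adapt(scene, illum_a, d65))

Point being, the math is the easy part. The hard part is getting a trustworthy measurement of the illuminant in the first place -- which is exactly what the spectroscope would buy you.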