Hello Chris:

Thanks for your response. As may be evident from my posts, I'm a relative novice at this, so the first couple of paragraphs go a bit over my head. Reading a Wikipedia article - http://en.wikipedia.org/wiki/Color_temperature - I see that CCT as used with LED light sources is an approximation: a way of assigning the black-body color temperature that corresponds most closely to the human color perception of the light the LED emits. I gather that the difference I see between the displays results from the errors that creep in during this process, the limited precision of the colorimeter, &c., and that I'm not likely to ever get a complete match.

I'll focus on tweaking the luminance value, aim for a satisfactory match between the paper/print sample and the screen image, and figure out a way to keep students from using the keyboard controls to adjust the brightness.

Thanks again.

Bruce

--
Bruce Bumbarger
Library Conservator
Magill Library - Haverford College
370 Lancaster Avenue
Haverford, PA 19041
610-896-1165

On Tue, Feb 24, 2015 at 5:46 PM, Chris Murphy <lists@colorremedies.com> wrote:
These temperatures, 6506K and 6522K, if they are on the black body locus, should have these (D50 Lab) values:

L* 100, a* 0.77, b* -21.73
L* 100, a* 0.78, b* -21.57
CIE 1976 ∆E = 0.16
CIE 1994 and CIE 2000 ∆E = ~0.08
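To make the arithmetic concrete: the CIE 1976 ∆E*ab is just the Euclidean distance between the two Lab triples above. A minimal Python sketch that reproduces the 0.16 figure (the 1994 and 2000 formulas add perceptual weighting factors, which is why they come out smaller):

    import math

    # D50 Lab values for the two white points, from the figures above
    lab_6506 = (100.0, 0.77, -21.73)
    lab_6522 = (100.0, 0.78, -21.57)

    def delta_e_1976(lab_a, lab_b):
        # CIE 1976 delta E*ab: straight Euclidean distance in Lab space
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab_a, lab_b)))

    print(round(delta_e_1976(lab_6506, lab_6522), 2))  # prints 0.16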
Those differences are small. These are probably CCT values, however, and each CCT actually corresponds to a whole isotherm of chromaticities, so two light sources with a CCT of 6000K can still appear noticeably different. What you might care about is the ∆u'v' between the two displays rather than either ∆CCT or ∆Duv, and for that you need CIE XYZ/Lab/Luv values, or even just xy chromaticities (with the measured luminance standing in for Y), for each display.
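For illustration, here is what that ∆u'v' computation looks like in Python. This is a sketch only: the xy chromaticities below are hypothetical placeholders, not measurements from the actual displays. (Note too that CCT isotherms are defined in the older CIE 1960 uv diagram, where v = (2/3)v'; the 1976 u'v' distance shown here is the usual modern choice.)

    import math

    def xy_to_uv_prime(x, y):
        # CIE 1931 xy chromaticity -> CIE 1976 u'v' chromaticity
        d = -2.0 * x + 12.0 * y + 3.0
        return 4.0 * x / d, 9.0 * y / d

    # Hypothetical white point chromaticities for the two displays (placeholders)
    u1, v1 = xy_to_uv_prime(0.3127, 0.3290)
    u2, v2 = xy_to_uv_prime(0.3133, 0.3292)

    # delta u'v': Euclidean distance in the u'v' plane
    print("delta u'v' = %.4f" % math.hypot(u1 - u2, v1 - v2))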
But for any of those computations to be worthwhile, and they still wouldn't tell you anything you haven't already figured out (the displays look different, but the difference is acceptable), you'd have to assume the colorimeter sees the displays the same way a human observer does. And that's almost certainly false. Even if the colorimeter had a calibration matrix for itself and this specific make/model/batch of displays, which likely isn't the case (I'm only aware of one project that attempted to "crowdsource" this, doing a lookup and applying something of a custom calibration matrix on the fly; it was too high maintenance and the metrology was unreliable), you're probably within the reproducibility limits of the measuring device + the uniformity of the displays + subtle differences in environment or surround between the two displays + the coarse granularity of control available for adjusting the white point.
So yeah, it's probably pretty decently close. I'm kinda surprised you're able to get two iMac displays within one nit of each other actually.
I agree with Andrew that the difference is probably made more noticeable by the relatively low white luminance. Your goal is presumably to get the display color temperature and white point to approximate your reference media, be that some form of print media or a standard under a particular light source. That means some amount of iteration, and you're probably best off doing it visually anyway; whatever the measured values end up being, those are your aimpoints to replicate in subsequent calibrations. Use those, rather than a somewhat arbitrary aimpoint intended for possibly some other workflow.
--
Chris Murphy