Re: Colorsync-users Digest, Vol 4, Issue 347
- Subject: Re: Colorsync-users Digest, Vol 4, Issue 347
- From: "email@hidden" <email@hidden>
- Date: Sun, 23 Sep 2007 21:09:40 -0400
Hi Marco,
Thank you for your questions; let me address them here.
"What is the meaning of 'population difference' in this context? Which
'population'?"
When we test instruments for relative performance, we test large numbers of instruments (more than 25) on different displays and media against a single calibration standard. The calibration standard, while stable, doesn't necessarily make exactly the same measurement twice, due to setup variability and source variability, so we actually have to run an experiment on the standard as well. Devices such as the PR650 or Minolta CS1000 are pretty good standards, but they require great care to set up. Both instruments measure very small areas and small degrees of view. Both are very susceptible to flare, so special fixtures must be built to hold the calibrator under test and limit the non-imaged area to be measured.

We then characterize the test setup by making a minimum of three independent test measurements of the reference standard. This means you physically remove the standard from the mount, remount, realign, and remeasure. That's our measure of reference setup uncertainty. Naturally, we do this with a warmed-up source. We then measure the devices under test on the test setup using multiple mounts/dismounts as well.

We then compute histograms of the data collected from each of the device populations and analyze the "population differences." We use two principal tests to detect population differences: a t-test of means and an F-test of variances. We also physically look at the scatter diagrams of the data, particularly in CIE xy space, to see if the error trends in color.

We often test what I call media constancy. This checks for product error as a function of the media used as the source. In these tests, it's not unusual to see "flops": one device wins sometimes and another wins at other times.
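The two population tests described above can be sketched in a few lines. This is purely illustrative: the CIE x readings below are invented random data standing in for two instrument groups measuring the same patch, and the test statistics are computed with the standard library rather than any real calibration tooling.

```python
# Sketch of the two population-difference tests: a t-test of means and an
# F-test of variances on repeated chromaticity-x readings from two
# hypothetical instrument groups. All data values are invented.
import math
import random
import statistics

random.seed(7)

# Hypothetical CIE x readings of the same green patch from two device groups.
pop_a = [random.gauss(0.3100, 0.0020) for _ in range(30)]
pop_b = [random.gauss(0.3130, 0.0020) for _ in range(30)]

def welch_t(a, b):
    """Welch's t statistic for a difference in means."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(va / len(a) + vb / len(b))
    return (statistics.mean(a) - statistics.mean(b)) / se

def f_ratio(a, b):
    """F statistic (larger variance over smaller) for a difference in spread."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return max(va, vb) / min(va, vb)

t = welch_t(pop_a, pop_b)
f = f_ratio(pop_a, pop_b)

# With ~29 degrees of freedom per group, |t| > 2.0 flags a mean shift at
# roughly the 5% level; F near 1.0 means the spreads are comparable.
print(f"t = {t:.2f}, F = {f:.2f}")
```

Here the 0.003 shift in the group means is flagged by the t-test, while the F ratio stays near 1 because both groups were drawn with the same spread.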
The specified tolerances for the EyeOne Display 2 and of the Photo Research
PR-650 SpectraScan Colorimeter, when used with a CRT, are the same? Is that
so? That sounds hard to believe.
I did not mean to imply the i1Display is the same as the PR650; I have to be more careful in that regard. That is an excellent question. The specifications for the Photo Research device are relative to an absolute NIST (USA) standard and include what is called a "compounded measurement uncertainty." I'm on the road, so I don't have a link for it, but it is very interesting reading for a color geek. This is a worst-case ABSOLUTE standard. For our consumer products, we reference a population average of i1Pros as standards. Our standard is a RELATIVE standard derived by measuring a population of devices on a population of displays.

What I was trying to point out was that the extremes of the stated specification of either device family will yield large potential visual errors on a display. We have seen that a Minolta CS1000 can differ from a PR650 on the order of 0.008 xy when measuring a wide-gamut green, even though they measure a tungsten white reference identically.
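For readers unfamiliar with the term, a "compounded measurement uncertainty" is conventionally built by combining independent error sources root-sum-square. The sketch below is an assumption-laden illustration: the individual error terms and their magnitudes are invented, not Photo Research's actual uncertainty budget.

```python
# Illustration of a compounded (root-sum-square) measurement uncertainty.
# The individual 1-sigma error terms below are invented for illustration
# only; a real budget comes from the instrument maker's NIST traceability.
import math

# Hypothetical independent 1-sigma error terms, each in CIE xy units.
terms = {
    "calibration transfer to reference standard": 0.0015,
    "wavelength scale error": 0.0010,
    "stray light / flare": 0.0008,
    "mount/alignment repeatability": 0.0012,
}

rss = math.sqrt(sum(v * v for v in terms.values()))
print(f"compounded 1-sigma uncertainty: {rss:.4f} xy")
```

The point of the RSS form is that independent errors do not add linearly; four terms of roughly 0.001 each compound to about 0.0023, not 0.0045.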
So, are you saying that, all things considered, and in your judgment, the
EyeOne Display 2 is actually a finer instrument than the DTP94/Optix XR?
Absolutely not; both products have their pros and cons. I have always been a technical "fan" of the DTP94 and the guys who designed it. Now we're on the same team, and that is a very good thing. I can't talk about future products, but one should assume that we are looking at what customers want: improvements in measurement accuracy, speed, and precision.

My problem with the DTP94 was the quantization errors and time of measurement. It would need some tweaks to work with a DDC-controlled wide-gamut display. It was also very expensive to make. Remember that the product was designed after Sequel got purchased by Gretag and Monaco was purchased by X-Rite. The DTP94 also had a slightly wider field of view, which gave it better mount/dismount repeatability at the expense of usage on lower-end displays. The major problem with the device is that it could not come down in price in a cost-competitive marketplace, and the volumes did not indicate that there would be any major improvements due to scale in market size.
You're saying that "the
stated variability can produce visible difference between displays and
calibrators", but is that intended to mean that the stated tolerance of the
spectroradiometers themselves will possibly cause those very visible
differences?
The answer is yes. I gave an example of the Minolta CS1000 (a fine
instrument, if properly applied) and a PR650 having radically different
measurements on a green primary. If the dominant wavelength is
different, there will definitely be errors. If you are adjusting the
display to a color temperature that is far from native, you will see a
difference in the white. So what reference device do I use when I
calibrate a calibrator? The decisions are not so clear. With the rapid
changes in display technologies we understand that we have to re-think
and re-tool continuously. The next generation of products will
incorporate some of this new thinking.
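To make the green-primary point concrete, here is a sketch of how an instrument disagreement on a primary propagates into the white point. The primaries and white are nominal sRGB/D65 values, and the 0.008 shift in the green primary's x mirrors the CS1000-vs-PR650 disagreement mentioned above; none of these numbers describe a specific real display.

```python
# Sketch: a display is balanced to D65 using nominal primaries, then the
# green primary is shifted by 0.008 in x (instrument disagreement) and the
# white is re-mixed with the SAME channel scalings. Values are illustrative.
import numpy as np

def xy_to_XYZ(x, y):
    # Chromaticity to tristimulus with Y normalized to 1.
    return np.array([x / y, 1.0, (1 - x - y) / y])

def XYZ_to_xy(XYZ):
    return XYZ[:2] / XYZ.sum()

# Nominal sRGB primaries and D65 white (x, y).
R, G, B = (0.640, 0.330), (0.300, 0.600), (0.150, 0.060)
W = (0.3127, 0.3290)

# Solve for the channel scalings that mix the primaries to the target white.
M = np.column_stack([xy_to_XYZ(*p) for p in (R, G, B)])
S = np.linalg.solve(M, xy_to_XYZ(*W))

# Shift the green primary by 0.008 in x and re-mix with the same scalings,
# as if the display had been balanced using the erring instrument.
G_err = (G[0] + 0.008, G[1])
M_err = np.column_stack([xy_to_XYZ(*p) for p in (R, G_err, B)])
w_err = XYZ_to_xy(M_err @ S)

shift = np.hypot(*(w_err - np.array(W)))
print(f"white point shift: {shift:.4f} xy")
```

Under these assumptions the 0.008 error on the green primary alone moves the white by roughly 0.003 xy, which is on the order of a visible white-point difference on a side-by-side comparison.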
Thank you for the challenging questions.
Tom Lianza