Re: Accuracy of Instruments
- Subject: Re: Accuracy of Instruments
- From: Mike Strickler <email@hidden>
- Date: Fri, 2 Nov 2007 11:20:19 -0700
Wait a minute: "In all probability"? The "expert eye"? And "I guess"? And when we say the profiles will be "different," which is possible, of course, will one of them be "worse," and how do we define that? As has already been noted, it's not so easy to say which instrument is more "correct." We have other, sometimes very noticeable differences caused by things like the choice of algorithms used by profiling software. My ProfileMaker profile will be different from my profile made in PrintOpen, quite visibly different, but if you ask me which is "better" I would have to know what you mean by "better." In a given case I may simply prefer one to the other.
I'm also trying to think of a scenario where the same set of eyes needs to make exactly matching profiles with two different instruments. If there are, say, identical proofers in different locales, profiled and/or linearized by two different instruments, and each is tasked with matching a common reference, well, that sounds like one of the few possible examples. But then one has to ask: will those proofs ever be placed side by side, looked at by the same observer, and how "identical" do they need to be? When will this happen in practice? And if it ever does, what are all the factors besides our instruments that frustrate our attempts at perfection, things like ambient temperature and humidity in the prepress areas, or always having paper and ink from the same batch number, and how do these variances stack up against the difference between, say, the two DTP70s used to linearize the printers? Those other factors create both agreement AND repeatability issues that exist independently of our measurement errors. And again, let's remember that the reference itself is the "typical" SWOP or sheetfed press, or a given press on a "typical" day (assuming the profile was made correctly). So at the end of all this, the question is: are our instruments good enough, what do we mean by that, and how do we know?
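One way to put a number on "how do we know" is to measure the same chart with the instruments in question and summarize the disagreement as a color difference. Below is a minimal sketch in Python, assuming paired CIELAB readings per patch; the Lab values are made up for illustration, not real measurements. A delta E*ab around 1 is often cited as roughly a just-noticeable difference under ideal viewing conditions.

import math

def delta_e76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between L*a*b* triples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Hypothetical readings of the same three patches from two instruments.
instrument_a = [(52.1, -3.4, -22.0), (71.8, 18.9, 17.2), (25.3, 0.1, 0.4)]
instrument_b = [(52.7, -3.0, -21.1), (71.2, 19.6, 17.9), (25.9, 0.5, 0.1)]

diffs = [delta_e76(a, b) for a, b in zip(instrument_a, instrument_b)]
print("mean dE*ab: %.2f, max dE*ab: %.2f" % (sum(diffs) / len(diffs), max(diffs)))

Whether the resulting numbers matter then depends on the tolerance of the job, which is exactly the "good enough for what" question.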
Mike Strickler wrote:
there is inevitably a point at which increasing an instrument's accuracy becomes statistically meaningless. (To give a crude example, you don't need a micrometer to frame a house.) Perhaps this can be approached empirically: Can anyone demonstrate a noticeable and objectionable variability in printed color that can be traced to the performance of any recent model of spectrophotometer that has passed its manufacturer's certification process?
Of course one can: Take two new spectros (different models), create two profiles for the same printer, and compare the results. In all probability an expert eye will notice /considerable/ differences between the results at first go. Repeat the procedure with one of the spectros. In all probability no one will notice differences between the two profiles measured with the same instrument. That's what Terry was talking about: repeatability is not the problem, but inter-instrument agreement is. And I guess it's on the same order of magnitude as inter-observer differences.
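A minimal sketch of that two-part test, again in Python with made-up patch readings (instrument A measured twice, instrument B once): repeatability is the average delta E between the two A runs, inter-instrument agreement the average delta E between A and B.

import math

def delta_e76(lab1, lab2):
    """CIE76 color difference between two L*a*b* triples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

def mean_delta_e(run1, run2):
    """Average patchwise delta E*ab between two measurement runs."""
    return sum(delta_e76(p, q) for p, q in zip(run1, run2)) / len(run1)

# Hypothetical readings: instrument A measured twice, instrument B once.
a_run1 = [(52.1, -3.4, -22.0), (71.8, 18.9, 17.2), (25.3, 0.1, 0.4)]
a_run2 = [(52.2, -3.3, -22.1), (71.7, 19.0, 17.1), (25.4, 0.2, 0.4)]
b_run1 = [(52.8, -2.9, -21.0), (71.1, 19.7, 18.0), (26.0, 0.6, 0.0)]

print("repeatability (A vs A): %.2f dE*ab" % mean_delta_e(a_run1, a_run2))
print("agreement (A vs B):     %.2f dE*ab" % mean_delta_e(a_run1, b_run1))

On numbers like these the repeatability figure comes out far smaller than the agreement figure, which is the pattern described above.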
The example conforms with the results of several inter-instrument agreement tests, e.g. those performed by the University of Wuppertal, Germany. Their conclusion: inter-instrument differences are a significant factor in color-management-based process control; see e.g. <http://www.digitalproof-forum.de/rueckblick/ergebnisse04.php> (German).
Klaus