Re: Real world experience w/ GMG and Oris RIPs
- Subject: Re: Real world experience w/ GMG and Oris RIPs
- From: Graeme Gill <email@hidden>
- Date: Wed, 24 Nov 2004 22:47:25 +1100
- Organization: Color Technology Solutions Pty. Ltd.
Mike Eddington wrote:
> After looking into this further I can verify that colors in between
> patches are not left out. GMG does allow one to view the curves that
> exist between fulcrums. If one fulcrum is adjusted and the dE gets
> pulled down, the interpolation curve between the fulcrums will be
> changed too. Therefore colors between the test values are indeed
> adjusted, whether or not this adds to profile accuracy is subjective.
A profile is not really a curve, it is a mapping from 4D space to 3D (so it
could be considered a hyper-surface). So no curve can really show you
what's happening between measurement points.
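To make that concrete, here is a rough Python sketch (the forward model below
is a made-up placeholder, not real measurement data, and this is not how GMG
or any particular CMM stores things) of a CMYK -> Lab characterization held as
a 4D grid; any curve you can plot is just a 1D slice through that hyper-surface:

    # A CMYK -> Lab characterization viewed as a 4D -> 3D mapping on a grid.
    # toy_cmyk_to_lab() is a made-up stand-in for real chart measurements.
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    def toy_cmyk_to_lab(c, m, y, k):
        L = 100.0 * (1.0 - k) * (1.0 - 0.3 * c - 0.25 * m - 0.15 * y)
        a = 60.0 * (m - c) * (1.0 - k)
        b = 60.0 * (y - 0.5 * m) * (1.0 - k)
        return np.stack([L, a, b], axis=-1)

    # Sample the model on a coarse 4D grid, much as a test chart does.
    axis = np.linspace(0.0, 1.0, 6)
    C, M, Y, K = np.meshgrid(axis, axis, axis, axis, indexing="ij")
    lab_grid = toy_cmyk_to_lab(C, M, Y, K)        # shape (6, 6, 6, 6, 3)

    # The "profile" is this 4D -> 3D interpolating structure, not a curve.
    profile = RegularGridInterpolator((axis, axis, axis, axis), lab_grid)

    # A displayable curve is only one slice, e.g. a cyan ramp with
    # magenta/yellow fixed at 40% and black at 0%:
    ramp = np.column_stack([np.linspace(0, 1, 11), np.full(11, 0.4),
                            np.full(11, 0.4), np.zeros(11)])
    print(profile(ramp))                          # 11 Lab values on one slice

Nothing in such a slice tells you how the mapping behaves for the 4D
combinations that lie between the measured points.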
> My results using GMG in both a proprietary and ICC workflow show much
> better numerical and visual results for the iterative method. Here are
> the results comparing the iterative approach GMG implements vs the
> single measurement cycle in creating an ICC profile, both created
> with the IT8.7/3. Upon profile completion, an IT8 was output and
> measured in Gretag MeasureTool.
This is not a valid test. You need to use a verification chart with
test points that are unrelated to the IT8 test values used to create
the profile. Random device values are a good choice, since you can
be certain that they are not correlated in any way with the chart
used to create the profile. In my "virtual" testing I've used
something like 100000 random verification test points. For serious
"real world" profile algorithm tuning, I like to used something
like 5000 to 10000 verification test points. Another approach
(which has just occurred to me) would be to deliberately create
a verification chart that has test points at the furthest possible
locations from the points used to create the profile (at the Voronoi
points, for those of you who know what that is).
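As a sketch of what such a verification pass might look like (pure
illustration: the two functions below are stand-ins for the profile under
test and the spectrophotometer readings of the printed verification chart):

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-ins only: in practice these come from your CMM and your
    # instrument, not from formulas.
    def profile_lookup(cmyk):
        L = 100.0 * (1.0 - cmyk[:, 3]) * (1.0 - 0.3 * cmyk[:, 0])
        a = 50.0 * (cmyk[:, 1] - cmyk[:, 0])
        b = 50.0 * (cmyk[:, 2] - 0.5 * cmyk[:, 1])
        return np.column_stack([L, a, b])

    def read_printed_chart(cmyk):
        return profile_lookup(cmyk) + rng.normal(0.0, 1.5, (len(cmyk), 3))

    n_patches = 5000                   # in the 5000-10000 range suggested above
    cmyk = rng.random((n_patches, 4))  # random device values, so they can't be
                                       # correlated with the profiling chart

    de = np.linalg.norm(profile_lookup(cmyk) - read_printed_chart(cmyk), axis=1)
    print("mean dE %.2f, 95th percentile %.2f, max %.2f"
          % (de.mean(), np.percentile(de, 95), de.max()))

The dE here is plain CIE76; substitute dE2000 or whatever your shop reports.
The worst-case variant mentioned above would replace the random values with
device values as far as possible from those of the profiling chart.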
> Again, whether any of this makes a difference in overall proof quality
> would be subjective...i.e. if you don't trust the numbers, you'll have
> to rely on a visual check. Looking at sample proofs I output, the
> GMG iterative proof is visually closer to the target proof than the ICC
> proof. I also see visual improvement from a proof from the 1st iteration
> to the last. The ICC proof doesn't look horrid, but it doesn't look as
> close visually. Perhaps it is due to the 4D tables of GMG vs the 3D
> tables of an ICC profile...but I can't argue whether or not there is "a
> scientific basis to explain why an iterative approach provides any
> benefit", all I can say is that I see better results using it.
Directly created ICC device link conversions are more accurate than
the conventional A2B -> B2A links used in most systems,
and other considerations can also boost the visual match, irrespective
of whether the ICC format is used to store the profiles and transforms
(i.e. actual lighting spectrum, FWA compensation, alternate observer models,
alternate characterization test charts).
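The interpolation-error part of that is easy to see in a toy 1D analog (this
is not real ICC machinery, just two power-law "devices" and linear tables):
chaining a quantized A2B table into a quantized B2A table picks up
interpolation error at both stages, whereas a directly built device link
samples the exact composite mapping once:

    import numpy as np

    def src_a2b(x): return x ** 2.2          # source device -> "colorimetry"
    def dst_b2a(y): return y ** (1.0 / 1.8)  # "colorimetry" -> dest device

    grid = np.linspace(0.0, 1.0, 33)         # 33 node tables
    a2b_lut = src_a2b(grid)
    b2a_lut = dst_b2a(grid)
    link_lut = dst_b2a(src_a2b(grid))        # device link from the real model

    x = np.linspace(0.0, 1.0, 100001)        # dense test values
    exact = dst_b2a(src_a2b(x))

    chained = np.interp(np.interp(x, grid, a2b_lut), grid, b2a_lut)
    direct = np.interp(x, grid, link_lut)

    print("max error via A2B -> B2A:", np.abs(chained - exact).max())
    print("max error via device link:", np.abs(direct - exact).max())

On this toy the chained tables come out substantially worse near the dark end;
real CLUTs are 3D/4D and use fancier interpolation, but the same accumulation
of error happens when two tables are concatenated.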
One also has to guard against the power of suggestion. If you
just measured what appear to be better delta E's, it's pretty easy
to then "see" better visual results. Double blind tests are the only
certain way to guard against this type of effect.
> Also, in regards to Terry's observation that around 200 patches are not
> measured from the ECI 2002 chart, the explanation I got from GMG is
> below...in slightly broken English ;-)
>
> "on the eci chart there are some patches twice---- we only measure them
> once.
> also we always need all possible overprint combinations to form this 4d
> color matrix which makes our profiles so accorate. example 3% cyan
> is on the cart but 3/3/3 patch is not available, therefore we have to
> skip the 3% patch too. if you want this is a limitation in the way our
> profiles are defined. Well but this limitation also makes us so
> accrate.... so you judge :-)"
This points to a limitation of their approach. They aren't using all the information
available to them, so their profiles aren't as accurate as they might be.
Duplicate patch values should be used (averaged together) to reduce
measurement inconsistency and print spatial inconsistency. All patches
can be used to tune the fit of the profile model to the test data.
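In code terms, the duplicate handling is as simple as grouping by device
value and averaging; a tiny sketch (the readings below are synthetic, just
to show the shape of it):

    import pandas as pd

    # Synthetic readings; real ones come from measuring the chart.
    chart = pd.DataFrame({
        "C": [0.03, 0.03, 0.50, 0.50, 1.00],
        "M": [0.00, 0.00, 0.40, 0.40, 0.00],
        "Y": [0.00, 0.00, 0.00, 0.00, 0.00],
        "K": [0.00, 0.00, 0.00, 0.00, 0.00],
        "L": [95.1, 94.7, 62.3, 62.9, 55.0],
        "a": [-2.1, -2.3, -18.0, -17.6, -37.0],
        "b": [-4.0, -4.2, -30.2, -29.8, -50.1],
    })

    # Patches that share a device value get their Lab readings averaged, so
    # duplicates damp instrument and print-spatial noise instead of being
    # thrown away.
    averaged = chart.groupby(["C", "M", "Y", "K"], as_index=False)[
        ["L", "a", "b"]].mean()

    print(len(chart), "patches ->", len(averaged), "unique device values")

The averaged set then feeds the model fit, so every measured patch
contributes information.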
They may well end up with better results than many others because they
have chosen an underlying device behaviour model that is a good fit
for the types of devices it is intended to work on. A good model
can give a superior fit with fewer test points, and may also give
better behaviour in areas distant from the test points. I don't think
this has any relation to iterative approaches though.
Graeme Gill.