Re: Comparing color performance on Displays
Hi Mike, I wrote "good middle class" :) OK, OK, "well done, good job" was definitely more appropriate; I was just disputing "best on the planet" a little bit. IMO, technically speaking, it depends on what we measure. If you measure a little bar (Fogra), it doesn't matter whether you get an average of 1.1 or 0.90, particularly with regard to instrument precision. But if you measure a big target (ECI), an improvement in both values, average and max, is a bit more important. But I agree with your statement about printer drift, and there are many other practical issues which affect proof precision far more than a few tenths of deltaE. Best regards, Kamil
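[The average-vs-max distinction Kamil describes can be sketched as follows. All per-patch dE values here are hypothetical, invented for illustration, not real measurements from either system:]

```python
import statistics

# Hypothetical per-patch dE values from a measured chart.
# A small control strip has few patches, so one difficult patch
# dominates the max; a large target (e.g. IT8.7/4) averages over
# thousands of patches and both statistics become more meaningful.
de_values = [0.4, 0.8, 1.1, 0.9, 2.7, 1.3, 0.6, 3.1]

avg_de = statistics.mean(de_values)
max_de = max(de_values)
print(f"average dE: {avg_de:.2f}, max dE: {max_de:.2f}")
```

[With instrument repeatability itself often in the tenths of a dE, differences between averages like 1.1 and 0.90 on a small strip are within the noise, which is Kamil's point.]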
I haven't seen the blog in question, but your characterization of 1.05/3.10 as "middle class" for a proofing system is not correct, as anyone who installs these systems knows (and Dan is among this group). In fact, in general, only systems that have some sort of iterative "tuning" of output profiles (e.g., GMG, ORIS, XF) can do better than this immediately after calibration, and even then the improvement is vanishingly slight and the ultra-low numbers rise very quickly as the printer drifts in regular use.
Mike
Message: 12
Date: Sat, 31 Mar 2012 19:09:48 +0200
From: Kamil Tresnak <email@hidden>
To: colorsync-users <email@hidden>
Subject: Re: Comparing color performance on Displays
Message-ID: <email@hidden>
Content-Type: text/plain; charset=UTF-8
Dan:
Maybe I got you wrong, but the numbers you posted are "good middle class" rather than best on the planet, particularly if we are talking about "proofing systems" in general ....
Quote from your blog: "As you see here the results are VERY impressive with the overall average at 1.05 dE and the maximum at 3.10 dE. In case you have no reference for dE values - these are as good if not better than any proofing system on the planet!"
Best,
Kamil Tresnak
Message: 2
Date: Thu, 29 Mar 2012 16:03:39 -0400
From: Dan Gillespie <email@hidden>
To: email@hidden
Subject: Re: Colorsync-users Digest, Vol 9, Issue 54
Message-ID: <email@hidden>
Content-Type: text/plain; charset=us-ascii
Tod,
You can do this in the new i1Profiler software. You can compare to the 24 ColorChecker values or to printing standards/specifications like GRACoL or Fogra. You can read more in the blog I wrote about it here: http://everydaycolormanagement.blogspot.com/2012/01/eizo-vs-nec-monitor-cali....
Hope this helps,
Dan Gillespie 717.475.9007 Toll Free 1.877.COL-RMGT email@hidden www.colormanagement.com | www.colormanagementgroup.com
On Mar 31, 2012, at 3:02 PM, Kamil Tresnak wrote:
i wrote "good middle class" :) OK, OK, "well done, good job" was definitely more appropriate, i was just little bit disputing "best on the planet".
Yes, perhaps a statement that's a bit bold. I would've clarified "for an ICC-based solution, these are very good numbers". One key thing that was missing (in my opinion) is the dE metric used. If it was dE76, good numbers indeed....but if it was dE2000, good but not exactly "great" numbers.
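[Terry's point about the metric matters because dE76 and dE2000 can differ substantially on the same patch pair. dE76 is simply the Euclidean distance in L*a*b*; dE2000 adds perceptual weighting terms and usually reports smaller numbers for the same measurement. A minimal dE76 sketch, with hypothetical patch values chosen only for illustration:]

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in L*a*b* space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Hypothetical reference vs. measured patch
ref = (50.0, 20.0, -10.0)
meas = (50.5, 19.2, -9.4)
print(round(delta_e_76(ref, meas), 2))  # prints 1.12
```

[dE2000 is far more involved (hue/chroma rotation terms, lightness weighting) and is best taken from a tested library rather than hand-coded; the point here is only that a stated average of 1.05 means different things depending on which formula produced it.]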
IMO, technically speaking, it depends on what we measure. If you measure a little bar (Fogra), it doesn't matter whether you get an average of 1.1 or 0.90, particularly with regard to instrument precision. But if you measure a big target (ECI), an improvement in both values, average and max, is a bit more important.
I generally find the (Fogra/IDEAlliance) control strips result in higher dE values (they test the more difficult patches), while the ECI2002-IT8.7/4 charts are perhaps a bit more forgiving (a larger percentage of "easier" patches).

I think there are some clear advantages to some of the proprietary solutions (GMG, et al.) vs. RIPs based on "open" ICC technology....but the gap is perhaps narrowing as more ICC-based solutions offer some form of profile iteration or "optimization". In the case of i1Profiler, I've yet to see its profile optimization deliver an improvement that isn't within the margin of instrument repeatability, assuming you're starting with 2,000+ patches to begin with. I guess folks can decide whether starting with few patches and then optimizing, vs. starting with a lot of patches that don't require optimization, is the better approach...to each his own set of patches. :-)

Having said all THAT, I think what's becoming the more critical piece is the *calibration* (i.e. "linearization") routine employed by the various software RIPs. On a given day, many of these proofing RIPs can achieve a reasonable dE and visual match to a standard dataset....KEEPING it that way becomes the challenge, and this is where I find a clear distinction among the various RIPs.

Regards,
Terry

______________________________________
Terence Wyse, WyseConsul
Color Management Consulting
G7 Certified Expert
FIRST Level II Implementation Specialist