Hello Ernst,
If more sampling is done, it would be nice to throw the two most extreme readings in the bin and average the remaining ones; more samples usually increase the spread of the results too.
This is where a "weighted average", such as the one used in MeasureTool and PatchTool, can help. For at least 3 readings, and fewer than about 20, a weighted average first computes the standard average and then recomputes the average based on each reading's distance from that standard average: the farther a reading is from the average, the less it counts in the result. The effect of "oddball" readings is thus minimized. As you increase the number of readings, the weighted average tends toward the standard average. The effect is most beneficial when averaging fewer than 10 series of measurements; at 15-20 readings there is usually no practical difference. In my view, because of the time required to gather all the data, an average made on between 3 and 5 series of measurements is optimal.
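A minimal sketch of that idea in Python (the exact weighting function MeasureTool and PatchTool use is not stated here, so the Gaussian falloff below is an assumption):

    import numpy as np

    def weighted_average(readings):
        # Standard average first.
        r = np.asarray(readings, dtype=float)
        m = r.mean()
        s = r.std()
        if s == 0.0:  # all readings identical; nothing to weight
            return m
        # Hypothetical weighting: the farther a reading lies from the
        # standard average, the smaller its weight.
        w = np.exp(-0.5 * ((r - m) / s) ** 2)
        return np.average(r, weights=w)

With many readings the weights even out and the result approaches the plain mean, matching the behaviour described above.

Danny
www.babelcolor.com

----- Original Message -----
From: "Ernst Dinkla" <info@pigment-print.com>
To: <colorsync-users@lists.apple.com>
Sent: Sunday, May 22, 2011 8:07 AM
Subject: Re: iSis was Re: Any comments or feedback on i!Publish? (Lou Dina)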
On 05/18/2011 09:22 PM, Tyler Boley wrote:
Regarding the iSis: I'm finding with inkjet-coated fine art papers, the usual suspects, that the rubber drive washers pick up (well-dried) black ink from the black start bar and proceed to deposit it in slight amounts on the lower color patches. With very light colors this clearly taints the result; with repeated measurements for averaging it's clearly seen, the later measurements differing from the first, progressively worse. Has anyone found this problematic? I doubt my averaged data is more reliable than the first pass in the light patches.

This leads me to my second question. If these instruments take multiple samples per patch and average them on the fly, then, other than correcting some gross alignment error, what is the point of measuring a chart multiple times for averaging when significant averaging has already happened in a single pass?

Tyler
I thought that the HP Z spectrometer design was based on the iSis in some respects. During calibration, just after printing the targets, the Z models start by scanning the last-printed patches first, so no patch is touched by a pinch roller before being measured. The pinch rollers of the paper transport do not affect the print surface either, but when I first noticed that reversed scan order I thought someone had paid attention to that aspect too.
Averaging within one patch already helps, but given the trend toward higher patch counts and, related to that, smaller patch sizes, I wonder whether it is enough. It does not compensate for spectrometer temperature deviations, for differences in printing direction or paper texture direction (HM Sugarcane), for hysteresis in spectrometers, etc. In strip readings the boundary detection takes its toll too.
For quite simple custom greyscale targets to be measured on an HP Z3200 (17 patches for linearising it), I made a descending and an ascending range, twice. The patches are large on that machine anyway, and the spectrometer keeps some distance from them. I sometimes wonder whether a random distribution of patches actually dampens spectrometer hysteresis if the distribution is not explicitly checked for that issue. A target page and patch distribution that can be scanned in two directions would be an improvement that reduces more of these effects.
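A minimal sketch of such a two-direction layout (hypothetical; not what the Z3200 firmware actually does): each grey level appears once in the ascending half and once in the descending half, so averaging the two readings per level cancels direction-dependent bias to first order.

    # Hypothetical 17-step grey ramp laid out ascending then descending,
    # so every level is read once in each scan direction.
    levels = [100.0 * i / 16 for i in range(17)]  # 0..100% grey
    ramp = levels + levels[::-1]                  # 34 patches in scan order

    def average_directions(readings):
        # readings: one value per patch, in scan order (34 values).
        # Pair each ascending reading with its descending counterpart;
        # averaging the pair cancels hysteresis bias to first order.
        n = len(readings) // 2
        up, down = readings[:n], readings[n:][::-1]
        return [(a + b) / 2.0 for a, b in zip(up, down)]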
If more sampling is done, it would be nice to throw the two most extreme readings in the bin and average the remaining ones; more samples usually increase the spread of the results too.
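A minimal sketch of that trimming idea (taking "most extreme" to mean farthest from the mean, which is an assumption):

    import numpy as np

    def trimmed_average(readings):
        # Discard the two readings farthest from the mean, average the rest.
        r = np.asarray(readings, dtype=float)
        if r.size < 4:  # too few readings to sensibly trim two
            return r.mean()
        keep = np.argsort(np.abs(r - r.mean()))[:-2]
        return r[keep].mean()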
-- With kind regards, Ernst
Try: http://tech.groups.yahoo.com/group/Wide_Inkjet_Printers/
| Dinkla Grafische Techniek | | www.pigment-print.com | | ( unvollendet ) |