Re: Colorsync-users Digest, Vol 13, Issue 81
- Subject: Re: Colorsync-users Digest, Vol 13, Issue 81
- From: Ben Goren <email@hidden>
- Date: Fri, 18 Mar 2016 09:14:40 -0700
On Mar 17, 2016, at 7:41 PM, Chris Cox <email@hidden> wrote:
> Again, you are making a blanket
> statement based on one aspect of the products,
Chris, if you think I've only a single complaint with the color handling of Adobe products, you haven't read a single post I've made to this thread. Either that, or you don't even begin to understand the scope of the task and associated challenges.
So...enough with the name-calling. What follows is a skeletal outline of what I do. If you can provide a comparable outline of how to do the same thing with Adobe products, I'll happily eat my words.
First, I've measured the per-channel spectral response of my cameras by photographing the image projected by a large homebrew spectroscope. I use Iliah's Raw Photo Processor (RPP) for all image development; for spectral profiling, I dumped the raw (unbalanced!) image data from RPP to a TIFF and then used Graeme's scanin from ArgyllCMS to extract the RGB values. I've also measured the spectral transmission efficiency of my lenses. That data, combined with the SPD of the scene illuminant (either measured or, for D- and A-series illuminants, estimated from chart measurements), gets fed into a spreadsheet. (I hate spreadsheets, but they're good for rapid prototyping.)
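(For anyone wanting to try that extraction step: scanin is a one-liner. Something like the following, with the chart template and reference file swapped for whatever matches your target -- the file names here are placeholders:

    scanin -v chart_raw.tif ColorChecker.cht ColorChecker.cie

That writes the extracted patch values to chart_raw.ti3 alongside the input.)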
The spreadsheet then serves two main purposes. First, it can generate simulated measurement chart readings for that particular combination of camera, lens, and illuminant. I simulate tens of thousands of samples, including several hundred real-world samples (from charts, etc.); the rest are synthetic reflective spectra.
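(The simulation itself is nothing exotic; it's roughly the standard linear camera model. For each patch with reflectance R(lambda), the predicted raw value in channel k is, up to an exposure scale factor,

    rho_k = sum over lambda of I(lambda) * T(lambda) * R(lambda) * S_k(lambda)

where I is the illuminant SPD, T the lens transmission, and S_k the measured sensitivity of channel k.)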
Second, I can feed the spreadsheet actual measurements extracted from a raw (again, unbalanced) image of a chart (or anything with a known reflective spectrum) and get the actual per-channel offsets for exposure. The spreadsheet predicts the expected offset values, but plugging in actual measured values compensates for any discrepancies introduced into the workflow before that point. In practice, I'm getting small or fractional dE discrepancies, save for however many stops of underexposure I dial in to provide headroom for highlights.
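(Concretely, for each channel the offset is just the log-ratio of what I measure to what the spreadsheet predicts:

    offset_k = log2(measured_k / predicted_k)

which is why it comes out in stops, and why a deliberate two-stop underexposure shows up as very nearly -2.0 across the board.)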
I typically use a ColorChecker for the known samples because it's easy, and averaging all those patches makes for fantastic results, but I can get by just fine with a single known sample. (It's like the eyedropper for white balance, except you're not restricted to a single hopefully-but-never-actually spectrally-flat sample.) To get at this data, I use Iliah's RawDigger.
Those offsets go into RPP to normalize exposure and channel balance; the result at this point is a perfectly (to the fractional hundredth of a stop) balanced and normalized development in the camera's native color space.
I then use Argyll's profiling tools to create a profile (again, specific to that camera, lens, and illuminant) from the spreadsheet-simulated data, and then use Argyll to convert RPP's output from that space to my preferred working space (which is task-dependent).
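(In Argyll terms, once the simulated patch set is written out as a .ti3 -- say, cam_sim.ti3 -- both steps are one-liners; the flags, camera description, and file names here are illustrative:

    colprof -v -qh -D "5DII + 50mm + D50" cam_sim
    cctiff -v cam_sim.icm working.icm rpp_out.tif working.tif

with working.icm standing in for whichever working-space profile the task calls for.)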
For straight-ahead copy work, there'll typically be an interlude here that includes flat-fielding. RawDigger supports flat-fielding for chart capture, which is wonderful. I've used Robin Myers's Equalight for the images themselves in the past, but these days I'm tending to use ImageMagick for that. You _can_ use Photoshop for this, but it's a royal pain because you have to combine the color dodge blend mode with an inversion of the image and so on (not to mention the whole gamma blend mess), whereas it's a single (and trivially automated) operation with either the purpose-built Equalight or with ImageMagick.
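(The ImageMagick version really is one operation -- divide the subject by the flat. Assuming linear-light TIFFs of the subject and of an evenly lit white target, something like:

    convert subject.tif flat.tif -compose DivideSrc -composite corrected.tif

optionally blurring the flat first so you're dividing by illumination falloff rather than by sensor noise.)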
Where it goes from there tends to be more conventional. Photoshop is almost adequate to the task save for the gamma blending mess you keep vigorously agreeing is very real but that you also bizarrely insist doesn't matter. Affinity Photo doesn't suffer from those problems, and is so much faster than Photoshop it's not even funny...so that's what I've been using lately. Some input sharpening, cropping, maybe a bit of editing to (for example) remove shadows from a seamless white backdrop or that sort of thing. (One area where Photoshop still shines is with responsiveness with the brush tool and a Wacom Cintiq...Affinity Photo still has some catching up to do there.)
Last, back to Argyll for conversion to the destination space -- typically either my printer's profile or sRGB. (And, if I know the image is only ever going to sRGB or has a small enough gamut, I use sRGB as the working space from the beginning.)
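(Same cctiff pattern as before, plus an intent flag where it matters; names again illustrative:

    cctiff -v working.icm -i r printer.icm edited.tif print_ready.tif

with -i r for relative colorimetric, since the whole point is a colorimetric match.)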
The final result is well within the limits of human perception, with (of course) the usual caveats about the output space's gamut. I've made large prints (especially of watercolors) that I've laid side-by-side with the original, lit by SoLux lamps, and the artist herself couldn't tell which was which until she stuck her nose into the print to see the slightly reduced spatial resolution of the copy. And, note again: this is all done objectively, with absolutely no manual tweaking of color at any step. No eyeballing this, no sliding that, no visually matching the other; it's all by the numbers, start to finish.
*That's* what's involved in colorimetric reprography.
So...what workflow do you suggest can achieve similar results using either exclusively or primarily (or even substantially) Adobe products?
Cheers,
b&