Re: RPP raw photo processor 64
- Subject: Re: RPP raw photo processor 64
- From: Ben Goren <email@hidden>
- Date: Sun, 02 Jun 2013 13:30:20 -0700
On Jun 2, 2013, at 12:22 PM, Andrew Rodney <email@hidden> wrote:
> On Jun 2, 2013, at 12:54 PM, Ben Goren <email@hidden> wrote:
>
>> And, again, it's a close match for how I remember the scene.
>
> So that's another example of colorimetric accuracy based on memory?
I wouldn't claim colorimetric accuracy for that shot. At most, I'd say it's vaguely in the spirit of colorimetric accuracy in that it's a faithful-looking rendition within the limits of the media and that there aren't any significant inversions of the tone map...but there's so much contrast crammed into such a small space that ``colorimetric'' simply isn't a word that can apply in any meaningful sense in such a situation.
But that's not why I referred to that image. You suggested that photographing the Sun is something few would attempt. It's not an everyday sort of thing, sure, but you don't have to look hard to find photos with the Sun in the frame, especially at sunset. My example of the eclipsed Sun over the Grand Canyon is an extreme, yes, but I think it also demonstrates that it's entirely within the realm of possibility.
>> I can tell you from personal experience that I can place the original and a print side-by-side on my kitchen table (with overhead Solux track lights providing the illumination) and the artist herself has to very critically examine the two to spot the differences -- and those are almost always only in the areas where she used paints that lie significantly outside of the iPF8100's gamut.
>
> I expect you can, as can others, without having to resort to whatever colorimetric accuracy is supposed to mean. Many users attempt, and can, match two differing media, or this list and the tools we use wouldn't exist.
Again, these prints generally show low-single-digit DE mismatches from the original, driven primarily by gamut limitations. And Argyll's gamut-mapped perceptual rendering is such that even larger absolute DE differences in spot colors are hard to discern in context in side-by-side prints. Since nobody's about to start folding or cutting up prints or the original to directly compare spot colors, I'm not worried about absolute DE differences, only about differences the artist can spot in side-by-side comparisons of the print as a whole.
Oh -- and I haven't tried it, but I'm pretty sure the prints are good enough that, if they were viewed behind glass from behind a stanchion rope on opposite sides of a gallery room (meaning you'd have to look at one, walk to the other side of the room, and look at the other), even the artist might not be able to say which is which. As far as practical matters go -- salability of a giclee print -- that makes them plenty ``good enough.''
>> Mask off the sky and the rest of the scene is a good colorimetric match.
>
> You say that but I'm trying to understand how anyone else can agree to that conclusion (still begging the question, what defines colorimetric match or accuracy).
Again, I wasn't out there with a spectrophotometer, so I can't quantify this. I was there to make a photograph of wildflowers in the Superstition Mountains, not to make an empirical measurement of the camera's colorimetric accuracy.
But I'll bet you a cup of coffee that, if I had made spot measurements of the objects in that scene with a spectrophotometer and compared the Lab values with the corresponding Lab values in the foreground of the photo, you'd generally find low-single-digit DE differences within the limitations of the working space's gamut. (And, of course, objects in shadow would show significant differences, but only in tone and not in hue.)
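For anybody unclear on what I mean by those DE numbers, the comparison is just the plain CIE76 color difference between two Lab readings. A minimal Python sketch, with made-up Lab values standing in for a real spot measurement and the corresponding value pulled from the photo:

    import math

    def delta_e_76(lab1, lab2):
        # Plain CIE76 color difference between two Lab triples.
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

    # Invented numbers: a spot reading of a flower petal vs. the corresponding
    # Lab value sampled from the photograph's foreground.
    measured = (52.3, 41.0, 30.2)   # spectrophotometer, Lab
    in_photo = (53.1, 40.2, 31.0)   # converted from the working-space RGB

    print(round(delta_e_76(measured, in_photo), 1))   # ~1.4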
>> Didn't I address that in a previous post? I'm sure I did.
>>
>> Average DE is generally in the 1-2 range, with maximums in the 6-10 range for colors so saturated they lie outside the gamuts of typical working spaces.
>
> You provided values, but how you came to them, or how you gauge the accuracy of what you remember at the scene, is still something I don't understand.
ArgyllCMS ships with a tool, profcheck, that compares the chart values your profile predicts against the values actually measured. It's an automated version of what you've described Bruce doing with the ColorChecker, where he eventually arrived at ProPhoto RGB values that matched those predicted by a spectrophotometer measurement.
My chart has 650+ patches: a few dozen painted ones (a spectral match for a ColorChecker, a dozen Golden Fluid Acrylics paints, and another dozen interior home paints), a bunch of ``special'' patches (some PTFE thread seal tape, a light trap, a bunch of wood chips, and more), and the rest filled in by the iPF8100. A low average and maximum DE over that many patches covering that large a gamut gives me a good deal of confidence in the quality of the results.
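For what it's worth, the average and maximum figures themselves are nothing exotic; profcheck reports them directly, but conceptually they boil down to something like this (all the numbers here are invented, and the real chart has 650+ patches, not three):

    # For each patch, compare the Lab value the profile predicts from the
    # camera RGB against the Lab value the spectrophotometer measured, then
    # summarize the errors.
    def delta_e_76(lab1, lab2):
        return sum((a - b) ** 2 for a, b in zip(lab1, lab2)) ** 0.5

    patches = [  # (measured Lab, profile-predicted Lab) -- invented values
        ((96.1, -0.3,  1.8), (95.8, -0.1,  2.0)),
        ((41.2, 52.7, 27.9), (42.0, 51.5, 29.1)),
        ((18.4,  2.2, -9.6), (19.9,  3.0, -8.2)),
    ]

    errors = [delta_e_76(m, p) for m, p in patches]
    print("avg dE = %.2f, max dE = %.2f" % (sum(errors) / len(errors), max(errors)))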
>> Obviously, there are hard practical limits to what one can remember days or weeks after being in the field.
>
> Indeed! Here's my 'beef' and question: you've stated the goal of this so-called colorimetric match and accuracy, but it appears to be based on a perceptual reaction and on what you think you recall seeing. If we were both at that scene and you said you remembered a color looking one way and I disagreed, who's right, and how do we decide scientifically? IF I measure some flower petal at the scene spectrally, that's one set of useful data. IF I recall a color and say it does or doesn't match, who's to say that's correct, close, or a mile off?
If we're talking landscape photography out in the field, all your objections are valid to a first approximation.
If we're talking fine art reproduction in the studio, we can easily compare the original and the print side-by-side looking for mismatches, and we can, if necessary, quantify those mismatches with spot measurements from a spectrophotometer. But, when the original artist herself has trouble telling the two apart, I'd say that spectrophotometer measurements at that point are becoming a bit moot.
Now, if you're starting with a process that gets that type of result in the studio, and you use essentially the same workflow out in the field...do you really think it's unreasonable to suggest that the same equipment used in the same way is doing the same basic thing?
In one sense I'd agree that it's a valid criticism that I don't have hard empirical evidence that this same process works the same outdoors as in the studio. But, at the same time, since this is colorsync-users and not Nature, I think you might be demanding standards of proof not suitable to the discussion medium.
I may yet do a writeup to those standards -- but I've still got a lot of work to do to exhaust my own limits. For example, I have a friend's old Bausch & Lomb Spectronic 20 that I'll be cannibalizing to turn into a monochromatic light source, and I've got some carbon nanotube samples that I'll be incorporating into my next profiling target (which will be in the same spirit as, but much improved over, the one I have today). And I've got some Wratten filters on order that I'll be experimenting with to -- if my hunch proves right -- get essentially the same results as Dr. Berns does but with a much simpler workflow (and, if my hunch doesn't prove right, the same results with his much more complex workflow).
At that point, I might have something worth writing up for an academic journal -- something I've never done before in my life. Until then, all I have to offer is Argyll profile analysis results, my word on what I've seen, and what a few different artists have seen.
If that's not enough for you, may I suggest replicating my workflow, informally documented here:
http://trumpetpower.com/photos/Exposure
and letting us know what kind of results you yourself get?
> On Jun 2, 2013, at 1:12 PM, edmund ronald <email@hidden> wrote:
>
>> All these issues could be solved if Adobe added a repro mode to ACR.
>
> What's a repro mode? Now we're adding another undefined term on top of "colorimetric accuracy" based on someone's memory of a scene, without any actual colorimetry at play?
I hope it's clear by now that there's lots of colorimetry at play here -- that that's the whole point of the exercise. A chart is measured with a spectrophotometer; a photo is made of the chart; and an ICC profile is built from that photo to create a colorimetric mapping between the photo's RGB values and the spectrophotometer's readings. At the other end, RGB values are sent to a display or a printer, a spectrophotometer is used to read the results, and a similar-but-opposite ICC profile is created for output.
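If it helps to make that mapping concrete, here's a deliberately crude Python sketch of the input half of the idea: fitting a single matrix from the camera's chart RGB values to measured XYZ. The numbers are invented, and a real input profile is far more sophisticated than a 3x3 matrix, but it's the same basic notion of anchoring the camera's numbers to measured colorimetry:

    import numpy as np

    # Linear camera RGB for a few chart patches (rows) and the corresponding
    # measured XYZ. All values are invented for illustration.
    camera_rgb = np.array([[0.42, 0.31, 0.20],
                           [0.18, 0.20, 0.33],
                           [0.55, 0.52, 0.49],
                           [0.09, 0.13, 0.07]])
    measured_xyz = np.array([[0.38, 0.35, 0.21],
                             [0.17, 0.18, 0.35],
                             [0.52, 0.54, 0.51],
                             [0.08, 0.12, 0.07]])

    # Least-squares fit of a 3x3 matrix M such that camera_rgb @ M ~ measured_xyz.
    M, *_ = np.linalg.lstsq(camera_rgb, measured_xyz, rcond=None)

    predicted = camera_rgb @ M
    print(np.round(predicted - measured_xyz, 3))   # residuals of the toy fit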
Results can be compared both by holding original and copy side by side and with a spectrophotometer. And, in many cases, how well they'll match can be predicted without that kind of analysis, just by doing sanity checks on the ICC profiles. Indeed, the soft proofs I create with Argyll are, for my own purposes, essentially perfect representations of what to expect from the print. I don't make test prints any more; I already know what's going to come out of the printer before I click the button.
> I'm suggesting there's a bias expressed about one raw processing engine that is said to be unable to produce either which isn't my experience nor that of others I know and respect.
Well, it's not just Adobe's raw processing engine I'm blaming. They're just the most popular. Canon's is just as problematic, and I'd expect Nikon's to be as well. And probably Apple's (in Aperture), though I've never worked with it. And certainly every on-camera raw-to-JPEG converter has serious colorimetric deviations.
For example, just neutralizing the contrast boost ACR universally applies (even with the curve set to linear and the contrast slider in its neutral position) requires hand-creating a DNG profile with what essentially winds up looking like a reverse gamma adjustment -- and it should be obvious why that's never going to produce optimal results. You yourself have acknowledged that it takes a lot of work to get a close match with a 24-patch ColorChecker, and that the transformation that gets you that match isn't going to get you a close match on colors not represented by those 24 patches.
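To be concrete about what I mean by ``reverse gamma adjustment'': if the engine insists on a tone boost you can't switch off, the profile has to bake in the inverse of that boost just to get back to where you started. A toy Python illustration -- the exponent is invented, and Adobe's actual curve is certainly not a simple power function:

    # Illustration only: if a processor bakes in a tone boost of the form
    # y = x ** boost, a hand-built profile curve has to apply the inverse,
    # x ** (1/boost), so the two cancel out.
    boost = 1.2                   # invented exponent for the example

    def baked_in(x):              # the contrast the raw engine applies anyway
        return x ** boost

    def counter_curve(x):         # what the hand-built profile has to undo it with
        return x ** (1.0 / boost)

    for x in (0.1, 0.25, 0.5, 0.75):
        print(x, round(counter_curve(baked_in(x)), 6))   # recovers x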
In contrast, my workflow with Raw Photo Processor amounts to shooting a target, feeding that target to a Perl script that spits out white balance and exposure numbers, pasting those (three) numbers back into Raw Photo Processor, and applying those settings to the shot of the artwork. That's the sum total of my color-correcting workflow; everything else is sharpening, lens geometry and peripheral illumination correction, panorama-type stitching, cleaning up smudges on the original, or the like. I output from RPP to a BetaRGB TIFF, do whatever needs to be done in Photoshop without any color transformations, feed the BetaRGB TIFF to Argyll for a gamut-mapped perceptual rendition to the printer profile, and print the result.
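The script itself is nothing fancy. The gist -- sketched here in Python with invented numbers, not the actual Perl -- is to read the average raw RGB of a known grey patch on the target and turn that into the white balance and exposure numbers:

    import math

    # Average linear raw RGB of a known grey patch on the target (invented).
    r, g, b = 0.412, 0.523, 0.388

    # White-balance multipliers: red and blue channels relative to green.
    wb_red  = g / r
    wb_blue = g / b

    # Exposure offset, in stops, to put the patch where it belongs.
    target_grey = 0.18                       # assumed linear target for the patch
    exposure_ev = math.log2(target_grey / g)

    # The three numbers that get pasted back into the raw processor.
    print(round(wb_red, 3), round(wb_blue, 3), round(exposure_ev, 2))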
I daresay getting comparable quality out of Adobe's rendering engine, if it's even possible, would require many, many hours of intensive and skilled post-processing.
Cheers,
b&