I've been following this discussion, and though it is geek-intriguing (a category in which I include myself), and I'm still catching up, it seems completely irrelevant to being a working photographer in the field. Mr. Goren seems to be advocating, to use a reference to our earlier, photo-chemical past, "the perfect film developer."
I'm sure I could have taken my i1 Pro into the field with me and gotten spot measurements of everything in sight I could lay my hands on
Indeed, that's hard and a lot of work. But short of that, what proof do we have? Isn't the goal of colorimetry an unambiguous set of color measurements that doesn't rely on a personal interpretation of what we see as color?
I've got to side with Mr. Rodney throughout. He asks for the definition of "colorimetric accuracy" and in reply gets assurances based on memory. If I stand on a beach at magic hour in golden light, is the goal a capture of the subject in a perfect colorimetric rendering of that moment's unique and beautiful light? That is what I take to be the end result sought by some on the "pro-colorimetric" side of this debate. Certainly not a capture of the subject that ignores the momentary unique lighting and says, "this is what it would have looked like under some sort of 'standard' light"; there would be no point to doing that except for some scientific goal with clearly stated conditions of measurement accuracy and toolset accuracy.

I think I'm interested in reproducing my perception of what I see, which is the object or scene in that unique lighting, and hopefully a little extra personal poetry of imagination. Anything requiring "matching" must state all the objectively measured conditions and definitions of what constitutes a match. That masterful matching print of the painting or museum piece will definitely include the specification for the conditions of viewing (lighting levels, lighting color temperature, evenness, etc.) to stay within the accepted deviation constituting a "match." There is no such thing as the original and copy matching to some fine, objective numeric standard across all mediums and viewing conditions. And even if there were, Mr. Goren states:
Colorimetry is "the science and technology used to quantify and describe physically the human color perception."
Ah! We are back to human perception, "perceptual rendering"! Anybody on this list care to step up and claim to have perfect human "standard observer" vision?
in contrast, my workflow with Raw Photo Processor amounts to shooting a target, feeding that target to a Perl script that spits out white balance and exposure numbers, doing a copy / paste of those (three) numbers back into Raw Photo Processor, and applying those settings to the shot of the artwork. That's the sum grand total of my color-correcting workflow; everything else is either sharpening or lens geometry and peripheral illumination correction or panorama-type stitching or cleaning up smudges on the original or the like.
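As a sketch of what such a target-driven script might compute (hypothetical Python, not the actual Perl script; the patch values and the 18% mid-gray target are assumptions, not from the thread):

```python
import math

def wb_and_exposure(patch_rgb, target=0.18):
    """Derive white-balance multipliers and an exposure offset (in stops)
    from the linear raw (R, G, B) mean of a neutral patch in a target shot.

    patch_rgb values are normalized 0.0-1.0; target is the assumed
    reflectance to place the patch at (18% mid-gray here)."""
    r, g, b = patch_rgb
    wb_r = g / r                    # scale red so the neutral patch reads neutral
    wb_b = g / b                    # scale blue likewise
    stops = math.log2(target / g)   # stops of push/pull to hit the target
    return wb_r, wb_b, stops

# A gray patch reading warm and one stop under the 18% target:
print(wb_and_exposure((0.12, 0.09, 0.06)))  # -> (0.75, 1.5, 1.0)
```

Those are the three numbers (two white-balance multipliers plus an exposure shift) that would get pasted back into the raw developer.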
So rendering the scene from raw as described, personal expression is thus confined to… composition only? Choice of subject? Angle of view? Perspective and lens choice? Just going to a momentary 2D rendering? How is this "accurate" to a changing, moving, scene-scanning, 4D space-time personal experience through one unique set of eyeballs?
I output from RPP to a BetaRGB TIFF, do whatever needs to be done in Photoshop without any color transformations, and then feed the BetaRGB TIFF to Argyll for a gamut-mapped perceptual rendition to the printer profile, and print the result. We all want and usually get pretty close to making the print look awfully "accurate" in an ICC-managed workflow with a variety of tools, but this comes after rendering the raw file to where we like it, so no argument there.
I daresay getting comparable quality out of Adobe's rendering engine, if even possible, would require many, many hours of very intensive and skilled post-processing.
To be science, one must specify all the conditions under which the experiment is conducted, the single variable to be tested, and how the final result is then measured. Mr. Rodney is right that that has not been provided; nor is it what artists do, nor even, generally, what clients hire photographers to do.

My sincere appreciation to all parties for a fascinating diversion, but now to make some money...

Jeff Stevensen Photography
82 Gilman Street
Portland, ME 04102
207-773-5175
207-807-6961 cell
http://www.jstevensen.com
blog http://photosightlines.com
Those of us in the museum world are very concerned with colorimetric accuracy, and go to great lengths to achieve it -- even to the point of developing multi-spectral capture methods. (I worked recently with Roy Berns and Sinar AG to develop a six-channel system that achieves accuracy on a ColorChecker SG of less than 1 CIEDE2000.) Our need for this has little to do with how the image ends up looking on whatever display or paper -- it is mostly useful in scientific painting-conservation analysis. I honestly cannot think of another situation where this level of accuracy is useful or practical.

It's like walking toward a wall in intervals covering half the distance each time. Will you ever get to the wall? No, but you'll be close enough. Photographers don't care -- they goose the colors later anyway to express their interpretation of a scene. This is not new -- Ansel burned and dodged fiercely. Leave the science to the scientists. Go out and make art.

Stanley Smith
Head of Collection Information and Access
J. Paul Getty Museum
1200 Getty Center Drive, Suite 1000
Los Angeles, CA 90049-1687
(310) 440-7286
On 6/3/2013 at 02:49 PM, in message <C09A839A-5424-475A-9533-44246968F984@maine.rr.com>, Jeffrey Stevensen <jeffstev@maine.rr.com> wrote:
<snipped>
A great deal of colour accuracy is needed in the advertising business. Some field applications (flower photography, minerals, etc., including for scientific needs) also require it. If we are not given accuracy, what amount of inaccuracy are we given? How far can it go?

On Jun 3, 2013, at 6:44 PM, Stanley Smith wrote:
<snipped>
-- Best regards, Iliah Borg
On Jun 3, 2013, at 4:54 PM, Iliah Borg <iliah.i.borg@gmail.com> wrote:
A great deal of colour accuracy is needed in the advertising business. Some field applications (flower photography, minerals, etc., including for scientific needs) also require it. If we are not given accuracy, what amount of inaccuracy are we given? How far can it go?
I don't think anyone here, including Jeffrey and Stanley, would disagree. What we (I) keep asking for is a thorough explanation of what some parties keep calling colorimetric accuracy. When Stanley talks about his goals and the work he does with museums, I get a strong idea of what he's talking about; yet when I ask, I hear terms like matching what one 'remembers' one saw, or affecting parts of an image manually because presumably that portion isn't "colorimetrically accurate." We've been given no clear definition of what this means, along with a number of fudge factors.

If I measure the illuminant and the scene, then I measure something in the scene, then reproduce those identical colors onto some device, is that colorimetric accuracy? And does it produce a match too? Further, on this idea of matching visually or remembering the colors of the scene: observer metamerism hasn't as yet even reared its ugly head in the discussion.

I'm not suggesting that what Ben and others are saying isn't correct or achievable, pretty much because I have no idea what they are actually referring to -- hence my continued questions, which haven't been addressed, or I just 'don't get it.' What is Colorimetric Accuracy in the Field, and do you have to measure something in that field to know you got it? What about the scene illuminant, which Tom did mention in his post? What's not considered Colorimetric Accuracy in the Field in terms of dE, and how many samples does one have to look at to say we got it or we didn't?

Andrew Rodney
http://www.digitaldog.net/
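One way to make that last question concrete (a sketch only, using the simple CIE76 distance rather than CIEDE2000; the patch Lab values and the 2.0 dE tolerance are made up for illustration):

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB.
    (CIEDE2000 weights lightness, chroma, and hue differently,
    but the comparison idea is the same.)"""
    return math.dist(lab1, lab2)

def accuracy_report(measured, reproduced, tolerance=2.0):
    """Summarize per-patch dE between scene measurements and the
    reproduction; 'tolerance' is an arbitrary pass/fail threshold."""
    des = [delta_e_76(m, r) for m, r in zip(measured, reproduced)]
    return {"mean_dE": sum(des) / len(des),
            "max_dE": max(des),
            "passes": max(des) <= tolerance}

# Made-up Lab values for two patches, as measured in the scene
# and as reproduced by the workflow under test:
scene      = [(52.0, 10.0, -8.0), (70.0, -3.0, 15.0)]
reproduced = [(51.2, 10.5, -7.4), (70.8, -2.1, 14.1)]
print(accuracy_report(scene, reproduced))
```

Answering Andrew's question would then amount to agreeing on the tolerance, the sample count, and which patches (and which illuminant measurement) go into `scene`.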
On Jun 3, 2013, at 4:27 PM, Andrew Rodney <andrew@digitaldog.net> wrote:
What we (I) keep asking for is a thorough explanation of what some parties keep calling colorimetric accuracy.
Andrew, I've addressed this multiple times. Let me try once again.

In the studio, I have a workflow that I've verified multiple ways, including empirically with spectrophotometer measurements and side-by-side visual comparisons, such that, within the gamut limitations and minor (low-single-digit dE) variations we're all quite familiar with, I get output that's the same as the input. When I apply the same process in the field, everything gives every appearance of working exactly the same way as it does in the studio. For obvious reasons, I haven't attempted to empirically quantify those results in the same manner as I have in the studio.

If this were a peer-reviewed journal, you'd be right to rip me a new one for sloppiness in verification methodology. But this isn't a peer-reviewed journal, and I'd suggest that I've more than satisfied the burden of description for a forum such as this -- especially since I've repeatedly described, in great detail, exactly what it is that I'm doing.

At this point I'd suggest that, given your obvious interest in the subject, you owe it to yourself to attempt to replicate my findings, especially since you should already have all the equipment* and skills necessary to do so; the only investment you need is about as much time as it's probably already taken you to type responses on this thread.

Cheers,

b&

*I'm assuming you've got a chart at least as good as a ColorChecker SG. It's not ideal, but it'll at least work for a proof-of-concept demonstration. Better is a hand-made chart, but that's a lot of work.
On Jun 3, 2013, at 5:40 PM, Ben Goren <ben@trumpetpower.com> wrote:
When I apply the same process in the field, everything gives every appearance of working exactly the same way as it does in the studio. For obvious reasons, I haven't attempted to empirically quantify those results in the same manner as I have in the studio.
I suggest you do (and with the studio setup as well) or redefine the term colorimetric accuracy.
If this was a peer-reviewed journal, you'd be right to rip me a new one for sloppiness in verification methodology.
My intention isn't to rip anything, but rather to understand and maybe implement this when warranted. And yes, peer review, once something can be reviewed with specific steps, would be lovely.
*I'm assuming you've got a chart at least as good as a ColorChecker SG. It's not ideal but it'll at least work for a proof-of-concept demonstration. Better is a hand-made chart, but that's a lot of work.
That part of the equation I indeed have. I believe I stated that I haven't had any issue using ACR and a Macbeth to produce color numbers, in an output-referred ProPhoto RGB color space, that match what the Macbeth is supposed to produce. Neither did Bruce Fraser. That *seems* to be colorimetric accuracy. Move the same camera system (and processing) elsewhere, and it's not necessarily so. So I take this target into the field and capture it and.....? Or even in the studio. To ask again (specifically):
If I measure the illuminant and the scene, then I measure something in the scene, then reproduce those identical colors onto some device, is that colorimetric accuracy? And does it produce a match too?
Andrew Rodney http://www.digitaldog.net/
On Jun 3, 2013, at 5:18 PM, Andrew Rodney <andrew@digitaldog.net> wrote:
So I take this target into the field and capture it and.....?
<sigh /> Once again: http://trumpetpower.com/photos/Exposure Cheers, b&
Go on... At least with profile application, I can run any number of processes to understand what, where in color space, and how high or low the accuracy of a profile is. I can also run that profile on test images and just look at them. Let's take the same process from scene to output (or processed) raw using the same processes. I need a reference first. What would you suggest I measure in the field (and how, along with the scene illuminant) such that at some point I can compare that to what I process?

Andrew Rodney
http://www.digitaldog.net/

On Jun 3, 2013, at 6:35 PM, Iliah Borg <iliah.i.borg@gmail.com> wrote:
What we (I) keep asking for is a thorough explanation of what some parties keep calling colorimetric accuracy.
Let's start with an obvious. Colorimetric profile application.
-- Best regards, Iliah Borg
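To make that obvious starting point concrete: applying a matrix profile is just a 3x3 transform from linear camera RGB to CIE XYZ. A minimal sketch (the matrix values below are illustrative only, not from any real camera):

```python
def apply_matrix_profile(rgb, m):
    """Apply a 3x3 matrix profile: linear camera RGB -> CIE XYZ."""
    return tuple(sum(m[i][j] * rgb[j] for j in range(3)) for i in range(3))

# Illustrative matrix only; a real one comes from characterizing
# the camera (e.g. from measured responses to known stimuli).
M = [[0.41, 0.36, 0.18],
     [0.21, 0.72, 0.07],
     [0.02, 0.12, 0.95]]

xyz = apply_matrix_profile((0.5, 0.4, 0.3), M)
print(xyz)
```

Everything contentious in the thread -- how the matrix is derived, for which illuminant, and how the result is judged -- sits outside this one-line transform.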
I, too, have thoroughly enjoyed this discussion. I am much more the artist than the technician, but I need a standard so I can attempt to have a say in what is going on with all the various permutations of image capture and output that are beyond my control, yet attempt to make repeatable results to some degree of satisfaction.

So many stories I heard in the darkroom days about special agitation techniques, secret ingredients in developers, and other rolls of the dice that I had to laugh. They were stories of blind luck. When I finally learned some sensible and very accurate exposure/development control with Phil Davis, I finally felt a sense of relief that I could create a negative that fit nicely within the range of my printing paper's capabilities, with amazing repeatability.

Similarly, I respect Iliah's, Ben's, and others' attempts to explain from what basis they work and why it is satisfying to them. It makes a lot of sense even though it is way outside my skill set. I appreciate their efforts to explain. I prefer to have as much control of my medium as my abilities and tools allow. I'll never capture the perfect evening light, but if I can consistently capture it on location to some repeatable standard, then at least I have a starting point from which to achieve what I would like to represent in post (to use a video editor's term).

Therefore, the question came up before and I would like to ask again: how do I do an adequate color balance at "sunset with the model at the beach during magic hour"? The Passport? Then what?

Thanks.

don
--
don schaefer
On Jun 3, 2013, at 6:13 PM, Don Schaefer wrote:
<snipped>
Therefore, the question came up before and I would like to ask again, how do I do an adequate color balance at "sunset with the model at the beach during magic hour"? The Passport? Then what?
Set up your camera and a color chart holder (an artist's easel works well) outdoors in the summer at noon. Illuminate the easel at 45 degrees to where the chart's surface will be; this means having the sun to your left or right. The camera is positioned perpendicular to the center of the chart. Put the chart on the easel (I use a dummy piece of cardboard until everything is ready, to reduce UV exposure of my chart) and expose the chart properly. By this I mean that the white patch in the center of the ColorChecker SG is as close to 245 in the green channel as possible (slight under-exposure is better than overexposure). Make sure that the area around the chart is a matte black material (this is to reduce veiling glare), and also use a lens hood.

To improve your profile, make sure you turn off as much of the camera processing as possible. This includes automatic exposure, contrast enhancement, sharpening, noise reduction, etcetera.

Once you have a properly exposed chart image (exposed to the white number given above), adjust the image for the lens fall-off (the lighting should be even or there is something seriously wrong), then neutral-balance your image using a mid-tone gray. I suggest using one, or all, of the patches G5, G6, H5, H6. Do not use any of the white, light, or dark gray patches for neutral balancing. Make your profile with your favorite tool.

Now you have a profile to produce the equivalent of daylight film. When you are shooting late or early in the day, use this profile and you will keep the golden colors. Do not make a profile from the early morning or late afternoon light; you will not like the results.

If you are using a ColorChecker Classic (either the Classic itself or the ColorChecker Passport version), the white is exposed to produce as close to 243 as possible, and use the N6.5 or N5 patches for the neutral balance.

Robin Myers
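Those exposure and neutral-balance targets can be sanity-checked numerically. A rough sketch (the 5-level acceptance window and the sample patch values are my assumptions, not part of Robin's instructions):

```python
def white_exposure_ok(white_green, target=245, window=5):
    """Check the SG white patch's green value against the target.
    Slight under-exposure is acceptable; overexposure is not.
    (The 'window' of 5 levels below target is an assumed tolerance.)"""
    return white_green <= target and target - white_green <= window

def neutral_balance_gains(gray_patches):
    """Per-channel gains that neutralize the average of the chosen
    mid-gray patches (e.g. SG patches G5, G6, H5, H6), anchored on green."""
    n = len(gray_patches)
    avg = [sum(p[c] for p in gray_patches) / n for c in range(3)]
    return (avg[1] / avg[0], 1.0, avg[1] / avg[2])

print(white_exposure_ok(243))   # slightly under the 245 target
print(white_exposure_ok(248))   # over target: reshoot
print(neutral_balance_gains([(121, 118, 112), (123, 118, 114)]))
```

For a ColorChecker Classic the same checks apply with `target=243` and the N6.5 or N5 patch readings fed to `neutral_balance_gains`.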
On Jun 3, 2013, at 7:03 PM, Robin Myers <robin@rmimaging.com> wrote:
On Jun 3, 2013, at 6:13 PM, Don Schaefer wrote:
Then what?
Set up your camera and a color chart holder, an artist's easel works well, outdoors in the summer at noon.
While there's much good advice in Mr. Myers's words, there are many ways to follow the instructions exactly and still end up with a less-than-satisfactory profile.

Some variations I'd suggest...well, hesitatingly, I'd suggest using a light tent (outdoors at noon in the middle of summer) to illuminate the chart. It doesn't solve all the problems of properly illuminating a chart in an uncontrolled environment, but it solves more problems than it creates. For better results, you'll have to devise a contraption that blocks extraneous reflections and specular highlights -- not a straightforward task, especially if you still want some portion of skylight to contribute to the illumination of the chart.

I've already beaten to death the amount of pre-cooking of the data that most raw processing engines do that can't be turned off or reversed, which is why I recommend Raw Photo Processor or one of its other free / open-source cousins. If that's an option, you can precisely and perfectly normalize white balance and exposure using the method I detail here:

http://trumpetpower.com/photos/Exposure#The_right_way_to_do_it

If that's not an option, then here:

http://trumpetpower.com/photos/Exposure#How_to_make_the_most_of_a_bad_situat...

you'll find my recommendation of how to get not-quite-so-bad results out of Adobe Camera Raw and similar raw developers.

And I can't stress enough the importance of the chart. The ColorChecker SG is barely adequate. You can make your own, far superior chart...and, in so doing, you'll learn both why the commercial charts mostly don't cut the mustard and why almost nobody could afford to buy or manufacture a really good chart.

Once you've got a good profile, when you're out in the field, instead of photographing a gray card and doing an eyedropper white balance, you'd photograph a chart (the ColorChecker Passport is ideal for this purpose) and follow the same steps at the links above to normalize white balance and exposure. You'd apply your profile...and, within the quality of your workflow, that's all there is to it.

Cheers,

b&
The purpose of creating a daylight profile is not to achieve colorimetrically accurate images, but to have a profile that will perform most of the heavy lifting in producing an acceptable image with an ICC profile. This is what I thought the poster had requested.

Robin Myers

On Jun 3, 2013, at 8:16 PM, Ben Goren wrote:
On Jun 3, 2013, at 7:03 PM, Robin Myers <robin@rmimaging.com> wrote:
On Jun 3, 2013, at 6:13 PM, Don Schaefer wrote:
Then what?
Set up your camera and a color chart holder, an artist's easel works well, outdoors in the summer at noon.
While there's much good advice in Mr. Myers's words, there are many ways to follow the instructions exactly and still end up with a less-than-satisfactory profile.
Some variations I'd suggest...well, hesitatingly, I'd suggest using a light tent (outdoors at noon in the middle of summer) to illuminate the chart. It doesn't solve all the problems of properly illuminating a chart in an uncontrolled environment, but it solves more problems than it creates. For better results, you'll have to devise a contraption that blocks extraneous reflections and specular highlights -- not a straightforward task, especially if you still want some portion of skylight to contribute to the illumination of the chart.
I've already beaten to death the amount of pre-cooking of the data that most raw processing engines do that can't be turned off or reversed, which is why I recommend Raw Photo Processor or one of its other free / open-source cousins. If that's an option, you can precisely and perfectly normalize white balance and exposure using the method I detail here:
http://trumpetpower.com/photos/Exposure#The_right_way_to_do_it
If that's not an option, then here:
http://trumpetpower.com/photos/Exposure#How_to_make_the_most_of_a_bad_situat...
you'll find my recommendation of how to get not-quite-so-bad results out of Adobe Camera Raw and similar raw developers.
And I can't stress enough the importance of the chart. The ColorChecker SG is barely adequate. You can make your own, far superior chart...and, in so doing, you'll learn both why the commercial charts mostly don't cut the mustard and why almost nobody could afford to buy or manufacture a really good chart.
Once you've got a good profile, when you're out in the field, instead of photographing a gray card and doing an eyedropper white balance, you'd photograph a chart (and the ColorChecker Passport is ideal for this purpose) and follow the same steps at the links above to normalize white balance and exposure. You'd apply your profile...and, within the quality of your workflow, that's all there is to it.
Cheers,
b&
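[Editor's note: the white-balance / exposure normalization Ben describes — the job his Perl script does — can be sketched roughly as below. This is a hypothetical stand-in, not his actual script; the patch values and 18% target are made up for illustration.]

```python
# Hypothetical sketch of white-balance / exposure normalization,
# NOT the actual Perl script referenced above: given the average raw
# RGB of a known gray patch, compute per-channel multipliers that
# neutralize it and an exposure scale that puts it at a target level.

def normalize(gray_rgb, target=0.18):
    r, g, b = gray_rgb
    # White balance: scale R and B so the gray patch becomes neutral
    # (green is conventionally left at 1.0).
    wb = (g / r, 1.0, g / b)
    # Exposure: after white balancing, all channels equal g;
    # scale so the patch sits at the target (e.g. 18% gray).
    exposure = target / g
    return wb, exposure

wb, exp = normalize((0.21, 0.30, 0.27))
balanced = [c * m * exp for c, m in zip((0.21, 0.30, 0.27), wb)]
# All three channels now sit at the 0.18 target.
```

Those three numbers (two white-balance multipliers plus an exposure scale) are then what get pasted back into the raw converter.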
On Jun 3, 2013, at 8:37 PM, Robin Myers <robin@rmimaging.com> wrote:
The purpose of creating a daylight profile is not to achieve colorimetrically accurate images, but to have a profile that will perform most of the heavy lifting in producing an acceptable image with an ICC profile. This is what I thought the poster had requested.
I would agree with that. I would just suggest that the only way to get quality results from an ICC profile, without undesirable artifacts, is to create one that actually is suitable for colorimetric accuracy. Otherwise, you're just introducing another set of errors into the workflow, this time coming from inaccurate characterizations of your camera's behavior baked into the ICC profile. And I'd further suggest that it's exactly those sorts of inaccuracies from less-than-carefully-constructed ICC profiles that have given colorimetric workflows such a bad reputation. Cheers, b&
On Jun 3, 2013, at 8:46 PM, Ben Goren <ben@trumpetpower.com> wrote:
I would just suggest that the only way to get quality results from an ICC profile, without undesirable artifacts, is to create one that actually is suitable for colorimetric accuracy.
Hmm...I should further add: Raw Photo Processor ships with some very high quality matrix profiles that our own Iliah has made from lens-less measurements of camera responses to monochromatic light sources. They're general-purpose tools not suited for precision reprographic work, but few photographers will have the patience and / or skills to create better profiles than what's built into RPP. Again, it's entirely doable to create very high quality ICC profiles for your particular combination of camera / illuminant / lens / etc. You're just not going to get there by photographing a classic ColorChecker (and, yes, you recommended the SG -- I'm just using the classic for hyperbole) on an easel in your back yard and processing it through Adobe Camera Raw. The basic idea is the same, but there's a bit more to the execution. Cheers, b&
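[Editor's note: Iliah's RPP profiles are built from monochromatic response measurements, a far more rigorous process than anything shown here. As a rough illustration of the general idea behind a matrix profile, though, a 3×3 camera-to-XYZ matrix can be fit by least squares from corresponding patch measurements. All the numbers below are invented.]

```python
import numpy as np

# Illustrative only: fit a 3x3 camera-RGB -> XYZ matrix by least
# squares from corresponding patch measurements. This is NOT how the
# RPP matrix profiles mentioned above were made; it just shows what
# a "matrix profile" is mathematically.
rgb = np.array([[0.20, 0.12, 0.05],   # camera raw averages per patch
                [0.05, 0.15, 0.25],
                [0.30, 0.30, 0.30],
                [0.10, 0.25, 0.08]])
xyz = np.array([[0.25, 0.18, 0.08],   # measured XYZ of the same patches
                [0.10, 0.14, 0.35],
                [0.40, 0.42, 0.45],
                [0.15, 0.28, 0.12]])

# Solve rgb @ M ~= xyz in the least-squares sense.
M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
predicted = rgb @ M
```

With more patches than unknowns the fit is over-determined, which is exactly why chart quality and patch count matter so much.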
On 06/04/2013 05:16 AM, Ben Goren wrote:
Some variations I'd suggest...well, hesitatingly, I'd suggest using a light tent (outdoors at noon in the middle of summer) to illuminate the chart.
Then preferably pure Teflon cloth, or build an Ulbricht-sphere igloo from fresh polystyrene foam. -- Kind regards, Ernst Dinkla http://www.pigment-print.com/spectralplots/spectrumviz_1.htm December 2012: 500+ inkjet media paper-white spectral plots.
On Jun 4, 2013, at 3:49 AM, Ernst Dinkla <E.Dinkla@onsneteindhoven.nl> wrote:
On 06/04/2013 05:16 AM, Ben Goren wrote:
Some variations I'd suggest...well, hesitatingly, I'd suggest using a light tent (outdoors at noon in the middle of summer) to illuminate the chart.
Then preferably pure Teflon cloth, or build an Ulbricht-sphere igloo from fresh polystyrene foam.
The nylon (or whatever) in a standard commercial photographic light tent is sufficiently spectrally neutral for these purposes, and the resulting illumination is sufficiently even. In practice, I don't think it's all that different from the integrating sphere you suggest. The biggest problem you'll face with a light tent is with (diffuse) specular reflections, and if you tried to use an IS, you'd have the exact same problem with specular reflections.

An IS is great for reflective spectrometry when set up in d/8° geometry...but only for a very, very narrow field of view. And even then you need the light trap at the opposite 8° position on the sphere; the size (and shape) of that light trap determines the size of sample you can get good measurements from. You could probably build a rig using an integrating sphere that mimicked a spectrophotometer, but it'd only work for small samples at a time: you'd have to reposition either the sphere over the chart or the chart under the sphere, and shoot the chart in small sections. Considering how rapidly the Sun moves in the sky and how fast atmospheric conditions (and thus exposure) can change, I don't think that'll be very practical.

Oh -- and polystyrene is a good material to use for constructing an IS, but I'd recommend first spray-painting it with a high-quality flat white home interior paint and, while the paint is still wet, liberally dusting the interior with the purest reagent-grade barium sulfate powder you can get (which will still be quite inexpensive). The result should be not that far off from a commercial IS. And you can buy hollow polystyrene spheres (in halves) from a hobby / crafts store. (Or you can spend many thousands of dollars on a commercial IS, of course.)

What I'm working on is something not entirely unlike a monitor hood that will hold a chart and will have baffles that block everything but direct Sun and skylight at angles that won't produce specular reflections.
The dimensions will depend on the final dimensions of the new chart I'm working on, so I won't really start on this gadget until after I'm done with the chart. But I'm probably just going to make it out of foamcore and glue some black flock velvet to the insides, so it shouldn't be expensive or hard to make. The hard part is going to be figuring out the geometry, and maybe assembling some odd angles.

I should add: this is all for building a profile of sunlight. In the studio, you just have to set your lighting up right. But even there, proper lighting isn't 45°/0°; it's more like (45° + [angle of view] / 2)/0°. For a 50mm lens on 135 format, if you fill the frame (not necessarily a good idea), that means 68°/0°.

Cheers, b&
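[Editor's note: Ben's 68°/0° figure checks out if the *diagonal* angle of view is meant — an assumption on my part, since he doesn't say which. A quick sanity check:]

```python
import math

# Sanity check of the 45 deg + (angle of view)/2 rule above, assuming
# the diagonal angle of view (my assumption) for a 50 mm lens on
# 135 format (36 x 24 mm frame, ~43.3 mm diagonal).
focal = 50.0
diagonal = math.hypot(36.0, 24.0)                          # ~43.27 mm
aov = 2 * math.degrees(math.atan(diagonal / (2 * focal)))  # ~46.8 deg
lighting_angle = 45 + aov / 2                              # ~68.4 deg
```

Rounding ~68.4° gives the "68°/0°" Ben quotes.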
On Jun 3, 2013, at 8:03 PM, Robin Myers <robin@rmimaging.com> wrote:
Do not make a profile for the early morning or late afternoon light, you will not like the results.
Please go on! Would this be due to the assumption of, say, D50 for the profile, or because the reference data (which came from where?) assumes something similar? IOW, the conversation has touched only briefly on the scene illuminant, but I suspect this is a hugely important factor. Setting memory color and the resulting perceived accuracy aside as far as possible: how can "Colorimetric Accuracy in the Field" be produced without measuring something in that field, including the light? Or does that not matter? Andrew Rodney http://www.digitaldog.net/
From: "Iliah Borg"
If we are not given accuracy, what amount of inaccuracy are we given? How far can it go?
You are not ever going to get 100% accuracy, so you also have to specify what % accuracy you are aiming for, taking into account all the errors of your instrumentation (including eyeballs and brains!). Bob Frost
On Jun 4, 2013, at 3:27 AM, Bob Frost wrote:
From: "Iliah Borg"
If we are not given accuracy, what amount of inaccuracy are we given? How far can it go?
You are not ever going to get 100% accuracy
"Accuracy" as a mode. Meaning if the converter was not designed to allow it. -- Best regards, Iliah Borg
On Jun 4, 2013, at 5:57 AM, Iliah Borg <iliah.i.borg@gmail.com> wrote:
"Accuracy" as a mode. Meaning if the converter was not designed to allow it.
Yes. The problem isn't failing to achieve perfection. The problem is not even trying in the first place. It's entirely practical to get into the realm of "I have to squint hard to see the imperfections." Where the commercial raw processors are, though, is "One glance shows it's not even close." Cheers, b&
This reminds me of a Soviet joke of the mid-'70s. A Western journalist comes to GUM (http://en.wikipedia.org/wiki/GUM_(department_store)) and asks if they carry black caviar.
- No, we don't.
- But why?
- There is no demand.
The journalist spent two days watching the counter, and no, nobody asked for black caviar.
On Jun 4, 2013, at 9:38 AM, Ben Goren wrote:
Yes.
The problem isn't failing to achieve perfection.
The problem is not even trying in the first place.
It's entirely practical to get into the realm of "I have to squint hard to see the imperfections." Where the commercial raw processors are, though, is "One glance shows it's not even close."
Cheers,
b&
-- Best regards, Iliah Borg
I'm a photographer. I don't do art. Go out with your Sinar and your SG ColorChecker. 2013/6/3 Stanley Smith <ssmith@getty.edu>
Those of us in the museum world are very concerned with colorimetric accuracy, and go to great lengths to achieve it -- even to the point of developing multi-spectral capture methods (I worked recently with Roy Berns and Sinar AG to develop a six-channel system that achieves accuracy on an SG ColorChecker of less than 1 CIEDE2000). Our need for this has little to do with how the image ends up looking on whatever display or paper; it is mostly useful in scientific painting-conservation analysis. I honestly cannot think of another situation where this level of accuracy is useful or practical. It's like walking toward a wall in intervals covering half the distance each time. Will you ever get to the wall? No, but you'll be close enough. Photographers don't care -- they goose the colors later anyway to express their interpretation of a scene. This is not new: Ansel burned and dodged fiercely. Leave the science to the scientists. Go out and make art.
Stanley Smith Head of Collection Information and Access J. Paul Getty Museum 1200 Getty Center Drive, Suite 1000 Los Angeles, CA 90049-1687 (310) 440-7286
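[Editor's note: for readers unfamiliar with the metric Stanley cites, a ΔE is a distance between two Lab colors. The Getty figure is CIEDE2000, whose full formula is lengthy; the older CIE76 difference below — a plain Euclidean distance, not the metric quoted above — conveys the idea. The patch values are made up.]

```python
import math

# CIE76 color difference: Euclidean distance in Lab space. Shown for
# illustration only; the "<1" figure quoted above is CIEDE2000, a
# more perceptually uniform (and far more elaborate) formula.
def delta_e_76(lab1, lab2):
    return math.dist(lab1, lab2)

# Average error over a (made-up) pair of chart patches,
# measured vs. reference Lab values:
measured  = [(52.0, 48.1, 15.2), (61.5, -20.3, 40.0)]
reference = [(51.5, 48.9, 14.8), (62.0, -21.0, 39.5)]
avg_de = sum(map(delta_e_76, measured, reference)) / len(measured)
```

An average around 1 is roughly the threshold at which differences stop being visible side by side, which is what makes the sub-1 CIEDE2000 result remarkable.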
On 6/3/2013 at 02:49 PM, in message <C09A839A-5424-475A-9533-44246968F984@maine.rr.com>, Jeffrey Stevensen <jeffstev@maine.rr.com> wrote:
I've been following this discussion, and though it is geek-intriguing (and I include myself among the geeks; I'm still catching up), it seems completely irrelevant to being a working photographer in the field. Mr. Goren seems to be advocating, to use a reference to our earlier, photo-chemical past, "the perfect film developer."
I'm sure I could have taken my i1 Pro into the field with me and
gotten spot measurements of everything in sight I could lay my hands on
Indeed, that's hard and a lot of work. But short of that, what proof do we have? Isn't the goal of colorimetry a non-ambiguous set of color measurements that doesn't rely on a personal interpretation of what we see as color?
I've got to side with Mr. Rodney throughout. He asks for the definition of "colorimetric accuracy" and in reply gets assurances based on memory. If I stand on a beach at magic hour in golden light, is the goal a capture of the subject in a perfect colorimetric rendering of that moment's unique and beautiful light, which is what I take to be the end result of some in the "pro-colorimetric" side of this debate? Certainly not a capture of the subject that ignores the momentary unique lighting and says, this is what it would have looked like under some sort of "standard" light; there would be no point to doing that except for some scientific goal with clearly stated conditions of measurement accuracy and toolset accuracy.
I think I'm interested in reproducing my perception of what I see, which is the object or scene in that unique lighting, and hopefully a little extra personal poetry of imagination. Anything requiring "matching" must state all the objectively-measured conditions and definitions of what is a match. That masterful matching print of the painting or museum piece will definitely include the specification for the conditions of viewing, including lighting levels, lighting color temperature, evenness, etc. to stay within the accepted deviation constituting a "match." There is no such thing as the original and copy matching to some fine, objective numeric standard across all mediums and viewing conditions. And even if it did, Mr. Goren states
Colorimetry is "the science and technology used to quantify and describe physically the human color perception."
Ah! We are back to human perception, "perceptual rendering"! Anybody on this list care to step up and claim to have perfect human "standard observer" vision?
in contrast, my workflow with Raw Photo Processor amounts to shooting a target, feeding that target to a Perl script that spits out white balance and exposure numbers, copying and pasting those (three) numbers back into Raw Photo Processor, and applying those settings to the shot of the artwork. That's the grand total of my color-correcting workflow; everything else is sharpening, lens geometry and peripheral-illumination correction, panorama-type stitching, cleaning up smudges on the original, or the like.
So rendering the scene from raw as described, personal expression is thus confined to… composition only? Choice of subject? Angle of view? Perspective and lens choice? Just going to a momentary 2D rendering? How is this "accurate" to a changing, moving, scene-scanning, 4D space-time personal experience through one unique set of eyeballs?
I output from RPP to a BetaRGB TIFF, do whatever needs to be done in Photoshop without any color transformations, and then feed the BetaRGB TIFF to Argyll for a gamut-mapped perceptual rendition to the printer profile, and print the result.

We all want, and usually get, pretty close to making the print look awfully "accurate" in an ICC-managed workflow with a variety of tools, but this comes after rendering the raw file to where we like it, so no argument there.
I daresay getting comparable quality out of Adobe's rendering engine, if even possible, would require many, many hours of very intensive and skilled post-processing.
To be science, one must specify all the conditions under which the experiment is conducted, the single variable to be tested, and how the final result is then measured. Mr. Rodney is right that this has not been provided; nor is it what artists do, nor even, generally, what clients hire photographers to do.
My sincere appreciation to all parties for a fascinating diversion, but now to make some money...
Jeff Stevensen Photography 82 Gilman Street Portland, ME 04102 207-773-5175 207-807-6961 cell
http://www.jstevensen.com blog http://photosightlines.com
_______________________________________________ Do not post admin requests to the list. They will be ignored. Colorsync-users mailing list (Colorsync-users@lists.apple.com) Help/Unsubscribe/Update your Subscription: https://lists.apple.com/mailman/options/colorsync-users/ssmith%40getty.edu
This email sent to ssmith@getty.edu
And with a Nikon D90 I get dE2000 = 1.67 on a ColorChecker, with flash units without Pyrex. Where is your goal? And your images from X-ray or UV, where do you evaluate and compare them? Isn't it a display? And when you go to any kind of meeting, or publish your analysis and conclusions, don't you print on paper? 2013/6/3 José Ángel Bueno García <jbueno61@gmail.com>
I'm a photographer. I don't do art. Go out with your Sinar and your SG ColorChecker.
Well, the obvious goal is dE2000 = 0.0 -- a goal that is equally obviously not attainable. Sure, we reproduce our paintings in books, on displays at a press check, and projected on unpredictable machines for lectures. I would gently suggest that, given the truly bad color memory we all have, if the color presented on any of these platforms is not "accurate" to the degree that everyone here seems to be striving for, it won't make a damn bit of difference to the viewer. These differences are really only noticeable when you compare the reproduction with the original in a controlled setting, and I never see visitors to the Getty walking around the gallery with the catalog open, eyes darting from the book to the painting. (I'm somewhat embarrassed to note that I do this all the time, and it is often a sobering experience.) Of course we need to be close, and we do strive for that, but we have no control over the viewing conditions of a person leafing through one of our catalogs -- a fact that renders extreme color-accuracy standards moot.

One more thing: just because curators strive for accuracy in reproduction, it doesn't mean that publishing professionals always do. I've told this story more than once. We had a big Paul Strand show a few years back, and I did what I mentioned above -- walked around the gallery with the book open. Strand printed very dark and flat; he loved the murky print. Our reproductions were shockingly different, but looked much like other reproductions of his work that I had seen in the past: normal contrast and more open shadows. When I asked our publisher why there was such a difference, the answer was very revealing: they couldn't sell any books if they had faithfully reproduced the original prints -- it would have looked like a mistake. In Strand's defense, I believe he was used to throwing a lot of light on a print when evaluating, not a lighting condition that is often reproduced by a casual viewer.

Stanley Smith
Head of Collection Information and Access
J. Paul Getty Museum
1200 Getty Center Drive, Suite 1000
Los Angeles, CA 90049-1687
(310) 440-7286
On 6/3/2013 at 04:24 PM, in message <CAD5o2tktMcuPKn+JmtL8pSd83+CP1P4MfhvmnvRY2LhC_xz8dA@mail.gmail.com>, José Ángel Bueno García<jbueno61@gmail.com> wrote:
And with a Nikon D90 I get dE2000 = 1.67 on a ColorChecker, with flash units without Pyrex. Where is your goal? And your images from X-ray or UV, where do you evaluate and compare them? Isn't it a display? And when you go to any kind of meeting, or publish your analysis and conclusions, don't you print on paper?
Stanley Smith wrote:
They couldn't sell any books if they had faithfully reproduced the original prints-- it would have looked like a mistake. In Strand's defense, I believe he was used to throwing a lot of light on a print when evaluating-- not a lighting condition that is often reproduced by a casual viewer.
If that was actually the case, then technically it is correct for the book not to colorimetrically match the originals: an appearance adjustment has been made to allow for different viewing conditions. What's interesting is whether that appearance adjustment was made empirically or in accordance with an appearance model.

It's also interesting that if preservation considerations mean that a very much lower light level is being used to display the original works than was used by the artist when creating them, then in fact an appearance-adjusted reproduction may be truer to the original intention than the works themselves as exhibited.

[This brings to mind an experience I had recently viewing an exhibition of (mainly) clothing, where the light levels were set very low. <http://www.vogue.com.au/fashion/news/grace+kelly+style+icon+at+bendigo+art+gallery+,17229> Rather than using lower-wattage lights, the curators had simply dimmed down their usual incandescent lamps to extreme levels. The resulting "white" point was very, very red, and that, combined with the very low light levels, meant that adaptation to the white point was extremely slow and incomplete. The result was exceptionally poor color rendering. In fact they knew they had a problem, because there were signs basically saying "don't complain about it - we have to do it". I don't know whether to be surprised or not that the curators of a regional museum would appear to be so ignorant of lighting technology and its interaction with human vision.]

Graeme Gill.
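[Editor's note: one piece of the machinery behind such an appearance adjustment is chromatic adaptation. The sketch below uses the standard Bradford transform; it covers only the white-point piece of a full appearance model, not the luminance-level and surround effects Graeme describes.]

```python
import numpy as np

# Minimal chromatic-adaptation sketch (Bradford transform): map XYZ
# colors measured under one white point so they appear as they would
# under another. This is only the white-point piece of appearance
# adjustment; a full model also handles luminance level, surround, etc.
M = np.array([[ 0.8951,  0.2664, -0.1614],
              [-0.7502,  1.7135,  0.0367],
              [ 0.0389, -0.0685,  1.0296]])  # Bradford XYZ -> "cone" space

def adapt(xyz, src_white, dst_white):
    src = M @ np.asarray(src_white)
    dst = M @ np.asarray(dst_white)
    scale = np.diag(dst / src)                 # von Kries-style scaling
    return np.linalg.inv(M) @ scale @ M @ np.asarray(xyz)

D50 = [0.9642, 1.0000, 0.8249]
A   = [1.0985, 1.0000, 0.3558]   # illuminant A: incandescent, like the dimmed lamps
# By construction, adapting the source white lands on the destination white:
white_adapted = adapt(A, A, D50)
```

The very red, very dim "white" in the exhibition is precisely the regime where the eye's own version of this adaptation stays slow and incomplete.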
On Jun 4, 2013, at 4:35 PM, Graeme Gill <graeme2@argyllcms.com> wrote:
[I]n fact an appearance adjusted reproduction may be truer to the original intention than the works themselves as exhibited.
A few things spring to mind.

First, I haven't done a lot of this sort of thing, but ArgyllCMS makes it very easy to create a printer profile optimized for viewing conditions as measured with a spectrophotometer. The few times I've made use of that feature, it's worked fantastically. If you actually know where the print is going to be viewed and what the light is like there, you can easily make a print that's more true-to-life than the original -- but only for that one viewing condition. It may well look ugly in daylight, but it'll look great in its intended viewing condition.

Graeme, is there any chance that that might extend to the input phase as well? That is, might one be able to take a spectrophotometer reading in an artist's studio and somehow combine that with the image made on the copy stand, so as to create a digital original that matches the print not under standard viewing conditions but under the artist's working conditions?

Last...my parents subscribe to National Geographic. And the photography is, of course, wonderful...but everything always looks dark in my parents' living room in the evening. It looks great when I hold it under a bright light, but I don't think that's a typical viewing condition for this publication. Knowing the standards of the publisher, I'm sure it perfectly meets whatever specification for standard viewing conditions they're targeting...but I would question whether they've chosen an optimal standard to adhere to.

Cheers, b&
Stanley Smith wrote:
Photographers don't care-- they goose the colors later anyway to express their interpretation of a scene-- this is not new-- Ansel burned and dodged fiercely. Leave the science to the scientists. Go out and make art.
I have to chuckle at this point. This is so like the printing industry of 20 years ago (and some sections of it even today): "Why do we need these fancy shmancy device profiles? We can just fiddle the ink ducts/roller pressure/water balance by hand to get it to match the proof print/get OK'd by the customer."

That's a perfectly practical and well tested approach to getting stuff done. If you've got no other option, then by all means do that. But that doesn't mean that some other approach with a firmer foundation in science (translation: "being smarter") mightn't offer some advantages in terms of being able to get where you want to go faster, more predictably, or even offering places to go that you couldn't conceive of reaching before.

And when it's bundled up in a neat package of cameras + workflows that's accessible to all, it will be a "revolution" that "we should have switched to years ago" :-)

Graeme Gill.
On 06/04/2013 12:44 AM, Stanley Smith wrote:
(I worked recently with Roy Berns and Sinar AG to develop a six-channel system that achieves accuracy on an SG ColorChecker of less than 1 CIEDE2000.)
Stanley Smith
Head of Collection Information and Access
J. Paul Getty Museum
1200 Getty Center Drive, Suite 1000
Los Angeles, CA 90049-1687
(310) 440-7286
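For readers who haven't worked with it: the CIEDE2000 metric cited in this thread is considerably more involved than the old Euclidean ΔE*ab, adding chroma- and hue-dependent weighting and a rotation term for the problematic blue region. Below is a self-contained sketch following the published Sharma, Wu & Dalal formulation; it is offered as an illustration of the metric's structure, not as a validated reference implementation.

```python
import math

def ciede2000(lab1, lab2):
    """CIEDE2000 color difference between two CIELAB triples (L*, a*, b*).

    A sketch following the Sharma, Wu & Dalal (2005) formulation.
    """
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    C1, C2 = math.hypot(a1, b1), math.hypot(a2, b2)
    Cbar = (C1 + C2) / 2.0
    # Chroma-dependent rescaling of the a* axis.
    G = 0.5 * (1 - math.sqrt(Cbar**7 / (Cbar**7 + 25.0**7)))
    a1p, a2p = (1 + G) * a1, (1 + G) * a2
    C1p, C2p = math.hypot(a1p, b1), math.hypot(a2p, b2)
    h1p = math.degrees(math.atan2(b1, a1p)) % 360 if (a1p or b1) else 0.0
    h2p = math.degrees(math.atan2(b2, a2p)) % 360 if (a2p or b2) else 0.0

    dLp = L2 - L1
    dCp = C2p - C1p
    if C1p * C2p == 0:
        dhp = 0.0
    else:
        dhp = h2p - h1p
        if dhp > 180:
            dhp -= 360
        elif dhp < -180:
            dhp += 360
    dHp = 2 * math.sqrt(C1p * C2p) * math.sin(math.radians(dhp) / 2)

    # Mean lightness, chroma, and hue of the pair.
    Lbp = (L1 + L2) / 2.0
    Cbp = (C1p + C2p) / 2.0
    if C1p * C2p == 0:
        hbp = h1p + h2p
    elif abs(h1p - h2p) <= 180:
        hbp = (h1p + h2p) / 2.0
    elif h1p + h2p < 360:
        hbp = (h1p + h2p + 360) / 2.0
    else:
        hbp = (h1p + h2p - 360) / 2.0

    T = (1 - 0.17 * math.cos(math.radians(hbp - 30))
           + 0.24 * math.cos(math.radians(2 * hbp))
           + 0.32 * math.cos(math.radians(3 * hbp + 6))
           - 0.20 * math.cos(math.radians(4 * hbp - 63)))
    d_theta = 30 * math.exp(-(((hbp - 275) / 25.0) ** 2))
    RC = 2 * math.sqrt(Cbp**7 / (Cbp**7 + 25.0**7))
    # Weighting functions and the blue-region rotation term.
    SL = 1 + 0.015 * (Lbp - 50) ** 2 / math.sqrt(20 + (Lbp - 50) ** 2)
    SC = 1 + 0.045 * Cbp
    SH = 1 + 0.015 * Cbp * T
    RT = -math.sin(math.radians(2 * d_theta)) * RC

    return math.sqrt((dLp / SL) ** 2 + (dCp / SC) ** 2 + (dHp / SH) ** 2
                     + RT * (dCp / SC) * (dHp / SH))
```

Sharma's published test data is the sanity check here: the pair (50, 2.6772, -79.7751) vs. (50, 0, -82.7485) should come out at about 2.0425 -- far from the roughly 4.0 that plain Euclidean ΔE*ab gives for the same pair, which is why quoting "ΔE" without naming the formula is ambiguous.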
Considering the results above, what is your opinion of the HP G4010/4050 (dual-illumination mode) scanner test results Image Engineering published in this PDF? http://tinyurl.com/lqtk7wp

About 4 DeltaE on an acrylic paint chart described as follows: "In addition an acrylic painting was produced using 23 acrylic colors which were mixed with black in 5 steps so that a total of 115 patches were created." That chart most likely used between 8 and 10 pigment varieties, if I consider the usual art paint offerings.

Scoring on average <4 DeltaE on the other charts: "An IT8 target on Fuji crystal archive paper represents the photographic samples. A sample derived from the IT8 reference data was printed on three different ink jet printers the HP 9180 which uses pigment inks and the Epson Stylus Color 1290 and Canon i560 representing dye based inks." They are all based on CMY(K) mixes, but at least a CMYK pigment chart was included.

While your measuring unit is stricter, the Image Engineering DIY acrylic chart represents one of the many cases found in practice. The CCSG chart may represent the average of jobs like that, but it is usually also the target the profiling is aimed at. This desktop scanner is of course a poor man's solution compared to multi-spectral capture methods, but I wonder whether this HP dual-illumination method could be improved for repro work with scanners and cameras.

There is an analogy to the X-Rite Passport CC dual-illumination DNG profiling, which seems to produce better color fidelity than a single-illumination DNG profile even when the light conditions are kept consistent. The Passport target is limited, and the RAW route is probably not best for repro work (if I interpreted this thread correctly), but it would be interesting if that profiling step could be enhanced with, say, the CCSG target. HP Artist, also mentioned in this thread, works somewhat differently, but I expect the developments were related. HP had a lot of research going on in this field 10 to 5 years ago.
-- With kind regards, Ernst Dinkla http://www.pigment-print.com/spectralplots/spectrumviz_1.htm December 2012: 500+ inkjet media paper white spectral plots.
participants (11)
- Andrew Rodney
- Ben Goren
- Bob Frost
- Don Schaefer
- Ernst Dinkla
- Graeme Gill
- Iliah Borg
- Jeffrey Stevensen
- José Ángel Bueno García
- Robin Myers
- Stanley Smith