Re: Feedback on success, creating a camera profile
Ok, I have been Googling a bit and I think that the answer to my question is called ColorChecker and DNG Profile Editor. Is that the only way to easily create a .dcp camera profile for Lightroom, or would you suggest another way which can make use of the transparent HCT target for more precision?
I had to process the RAW files prior to seeing the right colors, and I wish now that there were a way to make a profile that could be assigned directly in Lightroom in place of the generic canned profiles. Do you know if that is possible? (I mean, possible even for me?)
Paul Schilliger
On May 4, 2013, at 12:55 AM, Paul Schilliger <pschilliger@smile.ch> wrote:
Ok, I have been Googling a bit and I think that the answer to my question is called ColorChecker and DNG Profile Editor. Is that the only way to easily create a .dcp camera profile for Lightroom, or would you suggest another way which can make use of the transparent HCT target for more precision?
The ColorChecker Passport is an excellent small, portable target. Really, it's almost perfect for field use; there's very little I'd personally do differently if I were designing it myself, and what I would do probably isn't practical from a mass-manufacturing perspective. If you understand that colorimetric accuracy is *not* the goal of DNG profiles, they serve their intended purpose rather well.

But if you're looking for colorimetric matching, run away from DNG profiles and anything else to do with Adobe's processing of raw images and color profiling as far and as fast as you can. Instead, you're looking at a long and bumpy road...but the view from the top once you get there is spectacular.

For that, you need an ICC-based workflow and software that can perform linear raw developments. There are many options; my favorite is Raw Photo Processor for image processing and ArgyllCMS for ICC profiling. You'll also want a target with many more patches than the Passport offers, though the Passport is still very useful for setting exposure and white balance (but by building a profile and analyzing the results, not by clicking on selected patches). I went the homemade route for the chart, which is what I'd recommend for serious work.

Of course, that also means you need a spectrophotometer -- but it's an essential tool for anything to do with colorimetry, so you really don't have much choice in the matter. Here again it's really hard to beat or go wrong with the X-Rite products -- preferably the i1 Pro, though the ColorMunki is, by all accounts, a very respectable instrument. And, once again, ditch the bundled X-Rite software in favor of a real tool such as ArgyllCMS or something expensive from the commercial realm.

Cheers, b&
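For readers who haven't seen what a "linear raw development" looks like in practice, here is a minimal sketch using the rawpy/LibRaw bindings as a stand-in for RPP (which is what Ben actually uses); the file names are hypothetical, and the point is only the settings: gamma 1.0, no auto-brightening, unity white balance, camera-native color.

    import rawpy      # LibRaw bindings; a stand-in for RPP's linear development
    import tifffile   # assumed available for writing the 16-bit TIFF

    # Hypothetical raw file of the chart shot.
    with rawpy.imread("chart_shot.cr2") as raw:
        rgb = raw.postprocess(
            gamma=(1, 1),                        # linear, gamma 1.0
            no_auto_bright=True,                 # leave the raw exposure alone
            use_camera_wb=False,
            user_wb=[1, 1, 1, 1],                # "UniWB": no channel scaling
            output_color=rawpy.ColorSpace.raw,   # stay in camera-native color
            output_bps=16,
        )

    # This TIFF is what you would feed to the ICC profiler.
    tifffile.imwrite("chart_linear_uniwb.tif", rgb)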
I was struck by this statement: "run away from DNG profiles and anything else to do with Adobe's processing of raw images and color profiling as far and as fast as you can." As most of the imaging community world-wide is spending gazillions of time and money using this stuff, I'm not sure such a bald, dismissive sentence with no explanation and no justification should pass unnoticed. Perhaps you would care to elaborate?

Mark
On May 4, 2013, at 4:48 AM, MARK SEGAL <mgsegal@rogers.com> wrote:
I was struck by this statement:
"run away from DNG profiles and anything else to do with Adobe's processing of raw images and color profiling as far and as fast as you can."
As most of the imaging community world-wide is spending gazillions of time and money using this stuff, I'm not sure such a bald, dismissive sentence with no explanation and no justification should pass unnoticed. Perhaps you would care to elaborate?
I provided exactly that explanation and justification in the introductory clause to the sentence you only partially quoted, as well as the preceding sentence. Here's the full paragraph again:
If you understand that colorimetric accuracy is *not* the goal of DNG profiles, they serve their intended purpose rather well. But if you're looking for colorimetric matching, run away from DNG profiles and anything else to do with Adobe's processing of raw images and color profiling as far and as fast as you can.
The key thing to understand is that Adobe's raw processing software is not and never has been intended to create colorimetric renderings. Its oft-stated goal is instead "pleasing" color. Which is a good thing for shareholders, because most of Adobe's customers are much more interested in "pleasing" color than accurate color.

Or, at least they *think* they are...but that's a rant for another time. The short version is that virtually all of the complaints that photographers have, all of the holy grails they keep chasing, are a result of the compromises necessary to achieve "pleasing" color. For example, that S-curve that virtually always gets applied to "enhance contrast and give an image 'pop'" is what destroys shadow and highlight detail. It *has* to; TANSTAAFL.

Anyway, the end result is that there's no way to get colorimetric accuracy out of Adobe's raw processing software, and even getting in the ballpark is a challenge. But with different software, it's quite practical to, for example, photograph an artwork and make a print such that the artist herself has to very closely and carefully examine the two side-by-side to spot the differences.

A similar workflow can produce superlative results in general photography, including landscape and portraiture and the like. The key there is to always shoot in good light, which means finding or making good light. Which also means seeing good light and being able to recognize what is and isn't good light. Many of those techniques used for "pleasing" results are really just tools to fix bad light in post-processing. But that's yet another rant for yet another time....

I should also add: so long as you don't rely upon Adobe products for the colorimetrically-critical parts of your workflow, especially raw development and profile conversion, you'll be hard-pressed to find better tools for the rest of your editing tasks, from noise reduction to sharpening to cleanup to geometry corrections to compositing to layout and design to all the rest. You're especially not going to find anything else as comprehensive and as well integrated. It's just that Adobe's engineers are solving a different problem from the one of colorimetric accuracy. They solve the problems they're intending to solve rather well, but that (of course! TANSTAAFL again) creates insolvable problems for those seeking colorimetric accuracy.

TL/DR: Use the right tool for the right job, and Adobe products are great tools designed for uses that exclude colorimetric accuracy.

Cheers, b&
We have differing ideas about what constitutes explanation and justification. I'll leave it at that. The Adobe folks would be better positioned than me to address these contentions scientifically, if they chose to.

Mark
On May 4, 2013, at 7:33 AM, MARK SEGAL <mgsegal@rogers.com> wrote:
We have differing ideas about what constitutes explanation and justification. I'll leave it at that. The Adobe folks would be better positioned than me to address these contentions scientifically, if they chose to.
I'm not aware that there's anything even vaguely controversial about what I wrote. Read the DNG specification, and it's overwhelmingly clear that it's meant as a general-purpose tool. Indeed, right at the top of chapter 6 on mapping camera space to XYZ, they discuss the reasoning behind supporting two different illuminants and the recommended strategy for extrapolating to other illuminants based on the user-selected color temperature. "Extrapolation" is what you want for this type of general-purpose tool, but -- obviously! -- it's not going to get you precise results.

And this sort of fuzzy logic pervades the spec. I kid you not, the first step in "Translating Camera Neutral Coordinates to White Balance xy Coordinates" is, "Guess an xy value." They then describe an iterative process that will, indeed, provide not-bad results in a generic tool.

But if you actually want to perfectly normalize white balance and exposure, the way to do it is to shoot a chart, dump the raw data to a linear 1.0 gamma UNIWB TIFF, build a matrix profile, do a reverse lookup of D50 white, and use the resulting RGB values to derive linear channel multipliers for subsequent raw development. There's no guessing and the results are perfect within the error bars of your equipment.

It should be obvious why Adobe wouldn't take such an approach for their tools, and equally obvious why, as a result, their tools aren't capable of delivering the same results as different tools actually intended for the job.

Really, I have no idea why you're getting so upset over this. It's hardly different from cautioning somebody against trying to pull a boat on a trailer with a sports car. Sure, you can strap a canoe to the roof and you can probably jerry-rig something for a fishing boat, but that's not what the car is designed for and you'll get *much* better results using something that *is* designed for the task.

Cheers, b&
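To make the last two steps of that procedure concrete, here is a minimal numpy sketch, assuming the 3x3 camera-to-XYZ matrix has already been extracted from the matrix profile; the matrix values shown are placeholders, not real colorants.

    import numpy as np

    # Placeholder matrix-profile colorants: linear camera RGB -> XYZ (D50).
    cam_to_xyz = np.array([
        [0.7347, 0.2653, 0.0000],
        [0.2735, 0.7179, 0.0086],
        [0.0000, 0.0572, 0.8184],
    ])

    # "Reverse lookup of D50 white": for a pure matrix profile this is just
    # solving for the camera RGB that the profile maps to the D50 white point.
    d50_xyz = np.array([0.9642, 1.0000, 0.8249])
    white_rgb = np.linalg.solve(cam_to_xyz, d50_xyz)

    # Linear channel multipliers for subsequent raw development,
    # normalized so the green channel gets a multiplier of 1.0.
    multipliers = 1.0 / white_rgb
    multipliers /= multipliers[1]
    print(multipliers)   # feed these to the raw developer's custom-WB setting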
I'm not getting the least bit upset; it isn't a personal issue; I don't personalize technical matters. I was looking for explanatory value and you are starting to provide some. That helps.

Mark
On May 4, 2013, at 10:31 AM, Ben Goren <ben@trumpetpower.com> wrote:
Indeed, right at the top of chapter 6 on mapping camera space to XYZ, they discuss the reasoning behind supporting two different illuminants and the recommended strategy for extrapolating to other illuminants based on the user-selected color temperature. "Extrapolation" is what you want for this type of general-purpose tool, but -- obviously! -- it's not going to get you precise results.
Ben, I'm wondering if this might not be a better mousetrap. But let's back up. To build an ICC profile, you have to feed the software a rendered, output-referred image, don't you? It's been processed and presumably there's an application of white balance (but maybe not). Is this then 'colorimetrically correct', and if so, by what metric?

I can take an ICC profile and compare what it predicts I'll get on, say, 500 color patches to the 500-patch reference used to build it. While it doesn't tell me close to everything I'd want to know about how that profile performs, it's a useful start to see how well this process has worked and allows me to say how 'colorimetrically close' the process was. How would someone do this with ICC camera profiles vs. DNG profiles?

I can see this being OK in a studio setup where the illuminant is the same and consistent. But if you build said profile, what happens if you move out of that environment? The idea of building a profile (in this case DNG) that allows extrapolation seems pretty useful in such situations where the illuminant is a moving target. I don't know how this does or doesn't provide 'precise results', at least if one plays with WB and tint/temp to get the image appearance they desire.
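Purely as an illustration, that patch-by-patch check is easy to script once you have the two sets of Lab values -- the measured reference and what the profile predicts for the same patches. A rough sketch, using CIE76 for simplicity; the arrays are placeholders for however the values were extracted.

    import numpy as np

    def delta_e76(lab_ref, lab_pred):
        """CIE76 delta E between two (N, 3) arrays of Lab values."""
        diff = np.asarray(lab_ref, float) - np.asarray(lab_pred, float)
        return np.sqrt((diff ** 2).sum(axis=1))

    def report(lab_ref, lab_pred):
        """Summarize how 'colorimetrically close' the profile's predictions are."""
        de = delta_e76(lab_ref, lab_pred)
        print(f"patches: {de.size}")
        print(f"mean dE: {de.mean():.2f}   95th pct: {np.percentile(de, 95):.2f}   max: {de.max():.2f}")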
And this sort of fuzzy logic pervades the spec. I kid you not, the first step in "Translating Camera Neutral Coordinates to White Balance xy Coordinates" is, "Guess an xy value." They then describe an iterative process that will, indeed, provide not-bad results in a generic tool.
Don't all 3rd party raw converters have to make a guess of the camera color space at some point? Unless Nikon or Canon, for example, provide the spectral response of the chip(s), who's to say what the native color space is? My understanding is that everyone has to make this assumption.
But if you actually want to perfectly normalize white balance and exposure, the way to do it is to shoot a chart, dump the raw data to a linear 1.0 gamma UNIWB TIFF, build a matrix profile, do a reverse lookup of D50 white, and use the resulting RGB values to derive linear channel multipliers for subsequent raw development. There's no guessing and the results are perfect within the error bars of your equipment.
Setting the issue of target gamut aside (can a target define the gamut of such a device?), again, isn't this process happening on rendered data? And how, then, does moving the camera system out of the environment where the target was shot affect these results?
It should be obvious why Adobe wouldn't take such an approach for their tools, and equally obvious why, as a result, their tools aren't capable of delivering the same results as different tools actually intended for the job.
I'd love to see this demonstrated. Is there any source that could provide two such samples and provide a means to evaluate this?
Sure, you can strap a canoe to the roof and you can probably jerry-rig something for a fishing boat, but that's not what the car is designed for and you'll get *much* better results using something that *is* designed for the task.
True indeed. But at this point, I'm taking baby steps in this topic, though I'm pretty savvy about the differences between a boat and a car <g>. Let's assume I need to understand better the limitations of both profiling processes. Can you suggest a test process, or a site that has done this, so I can see more closely what you have described in terms of ICC and DNG camera profiles?

Andrew Rodney
http://www.digitaldog.net/
Timidly, because I'm waiting for the storm: QPcard, with its QPcalibration software, makes DCPs.

Ben, you are talking about imaging engineers and research departments in imaging software and imaging devices. I have been watching your site's photographic images and don't see the difference from images edited in ACR/Lightroom/RPP by myself. You have stated that you keep "colors" slightly (naturally) saturated to maintain detail and surface texture, as opposed to consumer camera adjustments, and that's OK.

In my opinion, and as a reference to Paul's comments, after the jump from Photoshop CS4 to CS6 or the latest Lightroom, the editing of shadows and highlights, the "version" of the process, and the lens profiles have been improved to the level of other raw editors. I suppose the people at Adobe have been working hard and have incorporated external researchers, because a bunch of the enhancements were part of research at imaging-engineering universities that I have been reading over the last four years.

"S" curves destroy detail in highlights and shadows? I try to avoid presets that don't work in my workflow, or adapt them to my preferred appearance. What is your image editing software? Only RPP?

I agree with identifying and making use of the best light source that you can afford, but with more attention to SPD than to CRI. Maybe you are right about colorimetric accuracy as a general aspect of image capture with a DSLR, but the alternative, as far as I know, is to take spectral measurements of the light source and of the scene (talking about reproducing art) and make use of your favourite profiling software. Some colleagues say the RGB CFA is to blame for colorimetric accuracy limitations. Meanwhile, ColorCheckers are the only generic, de facto standard tool for those of us who intentionally keep consistency and try to have confidence in color management techniques.

And what about the printing path?

Salud

Jose Bueno
On May 4, 2013, at 10:02 AM, José Ángel Bueno García <jbueno61@gmail.com> wrote:
Timidly, because I'm waiting for the storm: QPcard, with its QPcalibration software, makes DCPs.
I don't have any actual experience with QPcard products, but their published specifications and marketing materials don't give me a lot of confidence. At the very least, based on specs alone, I'd choose the ColorChecker Passport as a target any day over the QPcard. The Passport has more patches, a bigger gamut, and a more useful patch distribution.
I have been watching your site's photographic images and don't see the difference from images edited in ACR/Lightroom/RPP by myself. You have stated that you keep "colors" slightly (naturally) saturated to maintain detail and surface texture, as opposed to consumer camera adjustments, and that's OK.
I think you might have me confused with somebody else. I haven't updated my Web site in ages, and the only photos there are some really old ones from an exhibit I had in the Tempe Public Library.
"S" curves destroy detail in highlight and shadow?
They must, by their very nature. They enhance detail in the midtones by increasing contrast. But that means that they decrease contrast in highlights and shadows...and less contrast means less detail.

You can -- and Adobe does -- do some seeming black magic with tone mapping, whereby you apply localized contrast enhancements. And ACR really does an amazing job at that, with the highlight recovery and fill light. But what happens then -- what has to happen -- is that contrast / detail is lost in the transitional parts of the image. Done skillfully (and this is one of the parts where the skills of the Adobe engineers really shine), that loss of detail isn't visible or objectionable, especially if you're starting with an image that already has a lot of dynamic range and contrast. But you're still mapping so many input bits to the exact same number of output bits, and all you have control over is the spacing and distribution of those bits.
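The bit-mapping argument is easy to demonstrate numerically. A toy numpy sketch: apply a generic midtone-boosting S-curve to every possible 16-bit code and count how many distinct output codes each tonal band ends up with (the curve here is a made-up logistic, not Adobe's actual curve).

    import numpy as np

    x = np.arange(65536) / 65535.0   # every 16-bit input code, normalized to 0..1

    def s_curve(v, k=8.0):
        """A generic logistic S-curve, rescaled to pass through 0 and 1."""
        y = 1.0 / (1.0 + np.exp(-k * (v - 0.5)))
        y0 = 1.0 / (1.0 + np.exp(k * 0.5))
        y1 = 1.0 / (1.0 + np.exp(-k * 0.5))
        return (y - y0) / (y1 - y0)

    out = np.round(s_curve(x) * 65535).astype(np.int64)

    def band(lo, hi):
        codes = out[(x >= lo) & (x < hi)]
        return codes.size, np.unique(codes).size

    # Where the curve's slope is below 1 (shadows, highlights), many input
    # codes collapse onto the same output code -- that's the lost detail.
    for name, lo, hi in [("shadows", 0.0, 0.1), ("midtones", 0.45, 0.55), ("highlights", 0.9, 1.0)]:
        n_in, n_out = band(lo, hi)
        print(f"{name:10s}: {n_in} input codes -> {n_out} distinct output codes")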
I try to avoid presets that don't work in my workflow, or adapt them to my preferred appearance. What is your image editing software? Only RPP?
That depends a great deal on what type of photography I'm doing.

If it's fine art reproduction, I'm typically developing with RPP to BetaRGB, which is my preferred working space. There's typically a lot of post-processing I'll further do in Photoshop. At the very least, to minimize exposure variations (such as if one flash doesn't fire at the perfect voltage, which very rarely happens with the Einsteins but does occasionally happen), I'll take a few exposures, load them into a stack, and set the blend mode to Median. (That of course also reduces noise, but there really isn't any noise at base ISO in modern DSLRs to begin with.) I'll also take a white card exposure (several blended, of course) and use Robin Myers's EquaLight to adjust for any and all illumination unevenness, whether from the lens or failure on my part to get perfectly flat illumination (though I've gotten quite good at that). In Photoshop again, if the lens needs any geometry corrections I'll do that, and I'll also use Photoshop for sharpening (typically high pass). And, if it's a large work, I'll shoot in sections (shifting the art under the camera) and stitch the panorama, again with Photoshop (and I still haven't figured out a really good way to do this that I'm especially happy with, though I've got some fresh ideas to experiment with).

If it's landscape photography...well, again, I'll do the development with RPP basically the same way as for giclee work. Unless the light was absolutely perfect, I'll often do multiple developments, either of the same file or of different exposures from the bracket (I always bracket in the field). For example, I might do one development of the foreground with a normal exposure, then another a stop or two underexposed to capture maximum detail and color in the sky. The trick then is to composite the two together, generally with a mask and a soft brush. The hard part is to not make the top part of the ground look like it's shadowed (though it will be slightly) and to not make the bottom part of the sky look like it's brightened (though, again, it will be). Essentially, I'm creating a custom-shaped graduated neutral density filter suitable for just that one scene.

I'm very much a fan of realism in photography. I don't go for the tonemapped HDR look at all. Even when I'm shooting insanely high contrast scenes (such as last summer when I shot the annular eclipse over the Grand Canyon), I strive to avoid all inversions of the tonal range. My results don't necessarily have the same kind of "pop" that a lot of other people go for, but I get a great deal more fine detail and the results are much truer to what you would have actually seen had you been there in person. But that's my own personal preferred aesthetic...and, again, it means I have to pay a *lot* of attention to the light.
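Two of those steps -- the Median stack and the white-card correction -- are simple enough to sketch in numpy for anyone who wants to see the arithmetic rather than the Photoshop / EquaLight front ends. The file names are hypothetical, and this shows only the core idea, not everything EquaLight does.

    import numpy as np
    import tifffile   # assumed: frames already developed to linear 16-bit TIFFs

    def median_stack(paths):
        """Blend identically framed exposures; the per-pixel median suppresses
        frame-to-frame flash variation and noise (Photoshop's Median stack mode)."""
        frames = np.stack([tifffile.imread(p).astype(np.float64) for p in paths])
        return np.median(frames, axis=0)

    def flat_field(image, white_card):
        """Divide out illumination and lens falloff measured from a white-card
        frame, normalized so the overall exposure is preserved."""
        gain = white_card.mean(axis=(0, 1), keepdims=True) / np.clip(white_card, 1e-6, None)
        return np.clip(image * gain, 0, 65535)

    art = median_stack(["art_1.tif", "art_2.tif", "art_3.tif"])
    white = median_stack(["white_1.tif", "white_2.tif", "white_3.tif"])
    tifffile.imwrite("art_flat.tif", flat_field(art, white).astype(np.uint16))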
I agree with identifying and making use of the best light source that you can afford, but with more attention to SPD than to CRI.
Any studio flash is going to be capable of producing excellent to superlative results. I love my Paul C. Buff Einsteins. I wouldn't recommend any other type of light source for color critical work, though I'm sure you could make do with most anything if your technique and workflow is up to the job.
Maybe you are right about colorimetric accuracy as a general aspect of image capture with a DSLR, but the alternative, as far as I know, is to take spectral measurements of the light source and of the scene (talking about reproducing art) and make use of your favourite profiling software.
Actually, the state of the art of multi-spectral imaging, as I understand it -- the sort of thing that they're doing at the Smithsonian and that few others are crazy enough to bother with -- is to either use a black-and-white camera with many different tuned color filters, or to use a regular RGB array camera but again take multiple exposures with a few different combinations of wideband color correction filters. But your favorite profiling software isn't going to know what to do with the results; you're going to need some custom stuff. But few outside of places like the Smithsonian actually need anything like that. All you really need is a good copy stand setup with decent lights and a quality profile built from a large patch count target shot on the copy stand with the same setup.
Some colleagues say the RGB CFA is to blame for colorimetric accuracy limitations.
I wouldn't disagree with the theory of that, but I'd also note that any modern quality DSLR is going to be close enough to satisfying the Luther Condition that, with a quality profile shot in the same light as you're shooting the art, it's a complete non-issue in practice.
Meanwhile, ColorCheckers are the only generic, de facto standard tool for those of us who intentionally keep consistency and try to have confidence in color management techniques.
There is a great deal to be said for the ColorChecker, but it's not all that hard to build your own chart that will vastly outperform it in the studio.

Start by taking your ColorChecker to your local paint store; if they've got a reasonably modern formulation, they'll be able to mix for you paints that are spectral matches within the same tolerance historically observed in official charts. If they'll sell you pint samples, you'll get a lifetime supply of paint for (classic 24-patch) ColorCheckers for about as much as the Passport costs.

Then, head to your local artist's supply store and pick out a bunch of paints there with "interesting" spectra (be sure to bring along your spectrophotometer so you can measure the painted samples they should have on hand). Also get some white paint to mix with them so you can have a few different tints, especially of the darker paints.

Figure out how big you want your chart, count up how many patches you already have, and figure out how many more patches you need to fill up the rest of the chart (ideally at least a couple hundred patches in total, and more is better). Generate that many patches, evenly distributed in perceptual space, with your favorite ICC profiling toolset. (Argyll is great for this, and you'll obviously need a good profile for your printer / paper.) You'll want to be sure to include a number of neutral patches.

For bonus points, add a bit of PTFE (Teflon) thread tape (as close to 100% flat spectral response as you'll get) and a light trap (make the chart a hollow black-lined box with a patch-sized hole on top). For extra bonus points, include samples of real objects / pigments you care about, such as wood chips.

Print out the chart, paint on the paints and otherwise assemble it, measure the finished chart with your spectrophotometer, and you'll have something that's so superior to anything you can buy on the market it's not even funny.
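The "generate that many patches, evenly distributed in perceptual space" step is normally the profiling toolset's job, and Argyll does it properly. Purely to illustrate the idea, here is a crude numpy version that lays down a regular Lab grid plus a neutral ramp and trims combinations no print could plausibly reach; converting the Lab values to printable device RGB is left to the printer profile.

    import numpy as np

    def lab_grid(n_target):
        """A regular L*, a*, b* grid, grown until it yields at least n_target
        plausible patches. Crude; a real target generator is far smarter."""
        steps = 2
        while True:
            L = np.linspace(5, 98, steps + 2)
            ab = np.linspace(-60, 60, steps + 3)
            pts = np.array([[l, a, b] for l in L for a in ab for b in ab])
            chroma = np.hypot(pts[:, 1], pts[:, 2])
            pts = pts[~((pts[:, 0] < 20) & (chroma > 40))]  # drop very dark, very saturated combos
            if len(pts) >= n_target:
                return pts
            steps += 1

    # A neutral ramp, as suggested above, plus a couple hundred chromatic patches.
    neutrals = np.column_stack([np.linspace(2, 98, 16), np.zeros(16), np.zeros(16)])
    patches = np.vstack([neutrals, lab_grid(250)])
    print(len(patches), "Lab targets generated")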
And what about the printing path?
Well, of course, that's a whole other can of worms, so to speak. But any modern inkjet printer, especially the large format ones, should be quite amenable to profiling. I love my Canon iPF8100. And the prints I get from it...well, put a print side-by-side next to the original, and the artist herself has to stare for a while to be able to spot the differences. And if all the original colors fit inside the iPF8100's gamut, that might take a very long time.

...but, of course, you're not going to want to use Adobe's color management for that. Instead, you'll want to keep everything in Photoshop in your preferred working space and use something like Argyll to do a gamut-mapped perceptual conversion to the printer's space, and then print the resulting file with no further color management. The Canon Photoshop print plugin makes that easy, but I understand it's a royal pain to otherwise get Photoshop to not do any color transformations.

Cheers, b&
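As a sketch of where that conversion sits in the pipeline -- done here with Pillow's ImageCms (LittleCMS) rather than Argyll, so this is not the full gamut mapping recommended above, and the profile and file names are hypothetical:

    from PIL import Image, ImageCms

    # Convert from the working space to the printer/paper profile once, up front,
    # with perceptual intent; then print the result with color management off.
    img = Image.open("artwork_master.tif")        # assumed to be an 8-bit RGB TIFF
    converted = ImageCms.profileToProfile(
        img,
        "BetaRGB.icc",                 # working-space profile (hypothetical path)
        "iPF8100_MyPaper.icc",         # printer/paper profile (hypothetical path)
        renderingIntent=ImageCms.INTENT_PERCEPTUAL,
        outputMode="RGB",
    )
    converted.save("artwork_for_print.tif")
    # Send artwork_for_print.tif to the printer with "no color management" selected.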
Wonderful, extended answer. For the last printed publication I worked on, I did all the conversions with GaMapICC for urban landscape and with Photoshop for architecture and interiors. You are invited to come to the Canary Islands to see the book. Better, I'll try to pick up one from the curator of the edition and send a copy to you. Light and contrast are different than at high latitudes.

What I mean is that your workflow is as stable, predictable and nervous as an engine. BUT, given the requirements of a theoretical or ideal storage system, I can't afford to give more than one opportunity to any scene -- be it a stage where the object is a person, a piece of art in oil on canvas, a photographic artifact, a sculpture... That is the policy of entrance (is it better to say ingest?) of raw DNG image files as masters. Just the same as shooting with a Sinar and 4x5" Ektachrome.

On the S curve, I usually make use of the opposite option in every session except for reproducing art, so I'm not a defender of any kind of preset, be it linear, low contrast or high contrast. I haven't purchased any QPcard or SpyderCHECKR, but make use of SpectraShop and the grey card from Robin Myers Imaging; I have no experience with EquaLight and have just discovered TIFF Lister. I mentioned the QPcalibration software because of the initial post from Paul.

I think you might have me confused with somebody else. I haven't updated my Web site in ages, and the only photos there are some really old ones from an exhibit I had in the Tempe Public Library.

trumpetpower and whyevolutionistrue.

At the institution I print with a Canon iPF5000. All is fine. Since I'm a minimalist (writing in English exhausts me), I agree with almost all of your statements above, and if you do a search in the previous years you'll find some threads on multispectral imaging from when I was interested, until I learned it is a lot of money for a single photographer. My colleague, Jose Pereira, is trying to experiment with such a system with a government agency. This is Spain, and the crisis is under the skin.

Glad to meet you.

Jose Bueno.
José, Andrew, and David,

I've been meaning for a while to write up a description of my workflow, and I can see that now is probably the time to do so...but it's going to take a bit of work to do it justice. It's going to need staged examples, a bit of thought into presentation, that sort of thing.

I've dropped enough hints here to reproduce what I do...but you probably have to already be on the same page as me to make sense of the hints. What's missing is the bootstrapping for people new to this type of photographic processing -- for all those people whose only experience with raw development is Adobe's and the vendor's tools and C1 and the like.

So, if you can bear with me, wait a few days or a week or so...I promise I'll post the link to the full version here first. In the mean time, I hope y'all will forgive me for leaving you with a teaser...but, yes, it *is* possible with a regular DSLR and a bit of knowledge and patience to make reproductions of art whose quality is limited basically only by the gamut of your printer (and paper), and to do so by the numbers. And the same techniques carry over very well into general-purpose photography.

Cheers, b&
Hello Ben. I am completely confounded by your essays. I need you to help me understand.

What is this QPcard with QPcalibration software that makes DCPs? What is TANSTAAFL? I don't use open source CMS. I don't update my camera, the Canon EOS 1Ds Mark II. I keep my small "L" lens collection. The point is, I don't keep up with anybody! I must use what I have to the best of my ability. When I used film cameras, I kept them, regardless of age. I just changed film.

My client work is representation of my clients' original intellectual property, whether 2D or 3D. Clients are advised ahead of the procedure that nothing will look exactly like your original art! When they are interested, I go into the physics of it (what I know).

To profile the one and only camera I have, with its lens and lighting, I use the X-Rite Passport for white balance without tethering to the computer. Then I shoot, tethered, the X-Rite 24-patch target, convert to DNG with the Adobe DNG Converter, and then use Adobe's DNG Profile Editor. At this point I want to interject that the X-Rite software does not give me decent results in my workflow. I have tested over and over again, and must use the Adobe DNG Profile Editor. I don't care that it takes a few extra steps to get the results.

Help me understand: what part of Adobe, in your opinion, must I run run run from?

Thanks

David B Miller, Pharm. D.
member
Millers' Photography L.L.C.
dba Spinnaker Photo Imaging Center
Bellingham, WA
www.spinnakerphotoimagingcenter.com
360 714 1345
On Sun, May 5, 2013 at 2:01 AM, Spinnaker Photo Imaging Center wrote:
My client work is representation of my clients' original intellectual property, whether 2D or 3D. Clients are advised ahead of the procedure that nothing will look exactly like your original art!
Fancy meeting you here again, David -- now that is a smart piece of advice indeed :) Anyway, I think one could say that Adobe makes software which is useful for photography but less so for reprography.

Edmund
+1

Steven Kornreich
steve@kuau.com
http://www.kuau.com
I think this thread comes from uncertainties in the photographer's workflow. I don't relegate my workflow to targets, but I do make strict use of them. This isn't 1+1=2 but a sequence of approaches. I don't expose targets the same way as the original, be it a scene from nature or a reproduction of art.

It has been repeatedly said on this list that the spectral sensitivity of the sensor's CFA and the interpolation are far from perfect for reproducing 1 to 1 all the spectral reflectances (radiances?) of any scene, but are enough for human eyes. I'm getting dE of 1.2/1.4 when reproducing art with old Multiblitz studio flash units and a Nikon D90, and I try to confirm the results with two different profiling/validation applications. This doesn't imply that I transfer all the data from DNG Profile Editor -- only Camera Calibration in ACR -- to a determined series of reproduced artifacts under the same conditions, and I pay attention to the out-of-gamut warnings in ACR and PS, plus visual evaluation.

https://plus.google.com/photos/102552601155218515388/albums/5769462859632635...
Participants (8)
- Andrew Rodney
- Ben Goren
- edmund ronald
- José Ángel Bueno García
- MARK SEGAL
- Paul Schilliger
- Spinnaker Photo Imaging Center
- Steven Kornreich