Testing Perceptual Rendering in Output Profiles
- Subject: Testing Perceptual Rendering in Output Profiles
- From: Lorenzo Ridolfi <email@hidden>
- Date: Mon, 17 Feb 2003 10:34:24 -0300
Hi,
I'm trying to run some tests on output profiles using perceptual rendering.
Unfortunately, I don't have a RIP, so I'm testing an RGB output profile,
specifically an Epson 1280 profile made with ProfileMaker Pro.
The test is based on the following steps:
1) I made a test image in AdobeRGB space with ramps of the following
colors: black, red, green, blue, cyan, magenta and yellow.
2) In Photoshop, I converted the test image to the printer profile space
using perceptual rendering and black point compensation.
3) I printed the image with "Same As Source" print space and, of course,
the same printer settings I used when the profile was built.
4) With an Eye-One, I measured some patches and compared them with the Lab
values in the image converted to the printer profile (a small sketch of this
comparison follows the list).
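For reference, a minimal sketch of the comparison in step 4, assuming plain
CIE76 delta-E (the function name and the choice of the red patch below are
only for illustration):

import math

def delta_e76(lab1, lab2):
    # Plain CIE76 delta-E: Euclidean distance between two Lab triplets.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

expected = (56.0, 64.0, 53.0)   # red patch in the image after conversion to the printer profile
measured = (54.6, 61.2, 46.6)   # same patch read off the print with the Eye-One
print(round(delta_e76(expected, measured), 1))   # about 7.1

That 7.1 figure for the red patch is at the low end of the range I describe next.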
The measured patches were within 7 to 10 delta-E units of the values in the
converted image for the most saturated colors. In the reds in particular, I'm
hitting the "reds turn orange" problem. The difference between the converted
image and the actual values on the paper is about 3 hue units in LCH(ab). The
Lab values are:
Original image in AdobeRGB: 63, 90, 78
Image converted to print space: 56, 64, 53
Actual values on the paper: 54.6, 61.2, 46.6
Comparing the hue of the original image with the image converted to the print
space, they have more or less the same value (40.9/39.6). However, if you
compare the actual print with the converted image, you'll see a difference of
about 3 hue units in LCH(ab). The chroma values of the converted image and of
the actual paper are both around 80.
I know that hue in Lab is non-uniform, but if you plot the original red
ramp in a "spider graph", you'll see that the non-linearity is more
pronounced around a chroma value of 40, not up in the 80s. In that
particular region, Lab seems almost linear.
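For reference, the hue and chroma figures above come from the usual
Lab-to-LCH(ab) arithmetic (chroma = sqrt(a^2 + b^2), hue = atan2(b, a) in
degrees). A minimal sketch, with the function name only for illustration:

import math

def lab_to_lch(L, a, b):
    # LCH(ab): chroma is the radius in the a/b plane, hue is the angle in degrees.
    C = math.hypot(a, b)
    h = math.degrees(math.atan2(b, a)) % 360
    return L, C, h

for name, lab in [("AdobeRGB original", (63, 90, 78)),
                  ("converted to print space", (56, 64, 53)),
                  ("measured on paper", (54.6, 61.2, 46.6))]:
    L, C, h = lab_to_lch(*lab)
    print("%s: C = %.1f, h = %.1f" % (name, C, h))

# AdobeRGB original:        C = 119.1, h = 40.9
# converted to print space: C = 83.1,  h = 39.6
# measured on paper:        C = 76.9,  h = 37.3

These are the 40.9/39.6 hues and the roughly-80 chroma values quoted above.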
My question is: after converting the image to the output color space using
perceptual rendering (or any other rendering intent), can I expect a
reasonable match between the actual printed patches and the values in the
converted test image?
In my view, once the gamut mapping of the perceptual rendering has been done,
the profile conversion should match the actual printed patches more closely.
In other words, it seems that the "red turns orange" problem in my case is not
in the gamut mapping but in the device characterization (the profile values,
or the CMM interpolation?).
Any clues?
Best Regards,
Lorenzo