RE: ScannerRGB to WorkingSpaceRGB (rendering intents)
- Subject: RE: ScannerRGB to WorkingSpaceRGB (rendering intents)
- From: "Fred Bunting" <email@hidden>
- Date: Wed, 11 Jul 2001 11:21:56 -0700
- Thread-topic: ScannerRGB to WorkingSpaceRGB (rendering intents)
Roger Breton wrote:
> So, Fred, if I convert from a super-duper large RGB gamut like ROMM RGB or
> Joe Holmes Ekta RGB on down to a tiny-wheeny Newsprint gamut, you see a
> larger compression taking place as compared to Glossy Coated gamut?
>
> What if the starting XYZ colors are well within the Newsprint gamut, will
> the colors still get compressed by the Perceptual algorithm?
>
> And what if I convert from my scannerRGB to EktaRGB, then, I do not get any
> compression? Because these two gamuts are more or less the same "size"?
Yes ... yes ... and (hmmm... ) yes. At least that's the theory.
(My 'hmmm ...' on the third question is wondering if it's an
oversimplification to say it's just about the relative "sizes" of the
gamuts ... two gamuts could be the same "size" but different shapes, or
have different overlap, which also affects the compression ... but this
is more detail than I want to get into here.)
What we have is a pixel engine pre-constructed to convert colors in the
source space to colors in the destination space. This pixel engine is
constructed before any image is presented to it. So the content of each
image (e.g. the gamut of the colors actually contained in the image) is
irrelevant. What matters is the image's source space.
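To make that concrete, here is a minimal sketch using Pillow's ImageCms
module (a LittleCMS wrapper); the profile and file names are placeholders,
not anything from this thread. The transform is built once, from the two
profiles and a rendering intent, before any image is seen, and is then
applied unchanged to every image tagged with that source space.

    from PIL import Image, ImageCms

    # Hypothetical profiles standing in for a working space and a newsprint press.
    src_profile = ImageCms.getOpenProfile("EktaSpace.icc")
    dst_profile = ImageCms.getOpenProfile("Newsprint.icc")

    # The "pixel engine": built from the profiles and the intent alone,
    # before any image content is available.
    to_press = ImageCms.buildTransform(
        src_profile, dst_profile, "RGB", "CMYK",
        renderingIntent=ImageCms.Intent.PERCEPTUAL,  # older Pillow: ImageCms.INTENT_PERCEPTUAL
    )

    # Every image in the source space gets exactly the same mapping,
    # whether or not its own colors ever leave the newsprint gamut.
    for name in ("studio_shot.tif", "pastel_scan.tif"):
        img = Image.open(name).convert("RGB")
        ImageCms.applyTransform(img, to_press).save("press_" + name)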
While this seems not to bode well for perceptual rendering, you have to
consider the alternatives.
My point about a "smart CMM" that adjusts the amount of gamut compression
depending on the gamut of the image content is that, while this sounds
interesting, one needs to be careful. If you have two images that are
pixel-for-pixel identical except for one or two out-of-gamut pixels in one
of them, and this results in one getting major gamut compression and the
other none, that is a recipe for users wondering what the heck is
happening. So some further image analysis is necessary.
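Here is a toy numpy sketch of that fragility (not a real CMM; the "gamut"
is just a clipped RGB cube used as a stand-in): a rule that switches
intents whenever any pixel is out of gamut flips its decision when only
two pixels change.

    import numpy as np

    def naive_intent_choice(img_rgb, lo=30, hi=225):
        # Pretend the destination gamut is the RGB cube [lo, hi]^3 (a crude stand-in).
        out_of_gamut = np.any((img_rgb < lo) | (img_rgb > hi), axis=-1)
        return "perceptual" if out_of_gamut.any() else "relative colorimetric"

    rng = np.random.default_rng(0)
    img_a = rng.integers(60, 200, size=(256, 256, 3), dtype=np.uint8)  # entirely "in gamut"
    img_b = img_a.copy()
    img_b[0, 0] = (255, 0, 0)   # two rogue pixels push img_b out of gamut
    img_b[0, 1] = (0, 255, 0)

    print(naive_intent_choice(img_a))  # relative colorimetric -> no compression
    print(naive_intent_choice(img_b))  # perceptual -> the whole image gets compressed

Any workable smart CMM would need a graded response (for example, based on
how many colors are out of gamut and by how much) rather than a binary
switch.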
My point about relative colorimetric is that it can indeed produce better
results than perceptual for some images, but it depends on the image. The
danger is that relative colorimetric can introduce artifacts (clipping or
banding) that are worse than the desaturation you get with perceptual.
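If you want to see which failure mode a particular image is prone to, the
simplest check is to render it both ways and compare; a sketch, again with
placeholder profile and file names:

    from PIL import Image, ImageCms

    src = ImageCms.getOpenProfile("ScannerRGB.icc")   # hypothetical scanner profile
    dst = ImageCms.getOpenProfile("Newsprint.icc")    # hypothetical press profile
    img = Image.open("scan.tif").convert("RGB")

    for name, intent in [("perceptual", ImageCms.Intent.PERCEPTUAL),
                         ("relcol", ImageCms.Intent.RELATIVE_COLORIMETRIC)]:
        xform = ImageCms.buildTransform(src, dst, "RGB", "CMYK", renderingIntent=intent)
        ImageCms.applyTransform(img, xform).save("proof_%s.tif" % name)

    # Inspect the two proofs: look for clipped or banded saturated areas in the
    # relative-colorimetric version, and overall desaturation in the perceptual one.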
Fred Bunting