RE:notches on the L*
- Subject: RE:notches on the L*
- From: "eugene appert" <email@hidden>
- Date: Sun, 9 Dec 2007 12:45:25 -0500
Marco, Graeme and Klaus,
Thank you for your explanations; you have managed to answer my question despite its lack of clarity. I have seen the L* decimals while measuring densities with the Eye-One, and I have seen the 16-bit data in Photoshop, but I assumed these were mathematical derivatives and not true perceivable distinctions. If I have understood your answers, you are saying that the 255 brightness distinctions of 8-bit files are transcribed along the L* axis by falling between the 100 integers visible in Photoshop. This explains how distinctions in RGB that appear (in Photoshop) to translate to the same L* are preserved after a profile or mode conversion. I had assumed that the 100 positions along the L* scale corresponded to brightness distinctions perceivable to the human eye.
>What exactly are you converting, and to what target space?
>Why should the RGB steps from 1 to 9 map to L* 1?
>It would help to know more about the source space.
At least in Photoshop, all gamma 1.8 RGB and grayscale working spaces appear to map levels 1 (1,1,1) through 13 (13,13,13) to L* 1. In gamma 2.2 spaces, RGB 2 through RGB 13 are mapped to L* 1. I always imagined these distinctions would be lost, since they translate to the same L* value. You are telling me that they are not lost, because they are encoded using decimals I cannot see in Photoshop, which will carry them through to the target profile and presumably to the printer.
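The fractional L* values being discussed are easy to reproduce. The sketch below computes L* for a few 8-bit gray levels, assuming a pure power-law transfer curve and the standard CIE lightness function; it is not Photoshop's exact math (Photoshop's figures also depend on the working space's white point, chromatic adaptation, and internal precision), but it shows how distinct levels can round to the same integer L* while remaining distinct in the decimals.

```python
def lstar_from_level(level, gamma):
    """CIE L* of an 8-bit gray level, assuming relative luminance
    Y = (level/255)**gamma and the standard CIE lightness function."""
    y = (level / 255.0) ** gamma
    epsilon = 216.0 / 24389.0   # CIE threshold, (6/29)**3
    kappa = 24389.0 / 27.0      # CIE slope for dark values, ~903.3
    if y > epsilon:
        return 116.0 * y ** (1.0 / 3.0) - 16.0
    return kappa * y

# Several gamma 2.2 levels land on different fractional L* values
# that all display as the same integer when rounded.
for level in (1, 5, 10, 13):
    print(level, round(lstar_from_level(level, 2.2), 3))
```

Under these assumptions, levels 10 and 13 at gamma 2.2 both round to L* 1 yet differ in their decimals, which is exactly the kind of distinction an integer readout hides.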
In reality, though, they are never transcribed to the print, and since L* 1 is below the black point of any output device, the question seems to be more about how the rendering intent or BPC interprets and remaps those decimal values. For example, are L* 1.1 and L* 1.2 remapped as L* 4.1 and L* 4.2 (in the case of a target black point of L* 4), or could they be remapped as L* 4 and L* 5?
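One way to see what a linear compensation would do to those values is a sketch. The code below rescales luminance (Y) linearly so that source black lands on the target black, which is roughly the idea behind Adobe-style BPC (the real thing scales all three XYZ channels, and real CMMs and profiles differ); the target black of L* 4 is taken from the question, not from any real printer profile.

```python
def lstar_to_y(lstar):
    """Invert the CIE lightness function: L* -> relative luminance Y."""
    epsilon, kappa = 216.0 / 24389.0, 24389.0 / 27.0
    if lstar > kappa * epsilon:  # kappa * epsilon == 8.0
        return ((lstar + 16.0) / 116.0) ** 3
    return lstar / kappa

def y_to_lstar(y):
    """CIE lightness function: relative luminance Y -> L*."""
    epsilon, kappa = 216.0 / 24389.0, 24389.0 / 27.0
    if y > epsilon:
        return 116.0 * y ** (1.0 / 3.0) - 16.0
    return kappa * y

def bpc_remap(lstar_in, dest_black_lstar=4.0):
    """Linearly rescale Y so source black (Y=0) lands on the target
    black and white (Y=1) stays at white."""
    yb = lstar_to_y(dest_black_lstar)
    y = lstar_to_y(lstar_in)
    return y_to_lstar(yb + y * (1.0 - yb))

print(round(bpc_remap(1.1), 3), round(bpc_remap(1.2), 3))
```

Under this particular assumption, L* 1.1 and L* 1.2 come out near L* 5.1 and L* 5.2 (not 4.1 and 4.2, because L* is not additive), and the 0.1 distinction between them survives almost intact rather than collapsing to 4 and 5.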
Here is a new question, if you're still game. I naively believed that the CIE Lab model precisely represented the human response to colour and light. I know that humans cannot perceive the distinction between L* 16.347 and L* 16.348, so these are purely mathematically derived brightness distinctions. If the CIE Lab model can be used to define precisely the extreme ranges of saturation, hue and brightness perceived by the human eye, why does the model not appear to function for intermediate distinctions as well?
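For scale, the step in that example can be expressed as a CIE76 colour difference. A just-noticeable difference is usually put somewhere around delta E 1, so a step of 0.001 is far below anything the eye can resolve; the model still describes such a step numerically, it just is not a perceptual one.

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in Lab space."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))

# Two grays differing only in the third decimal of L*.
d = delta_e76((16.347, 0.0, 0.0), (16.348, 0.0, 0.0))
print(d)
```

The result is a delta E of 0.001, roughly a thousandth of the commonly cited visibility threshold.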
Thanks for your patience
Eugene Appert
_______________________________________________
Colorsync-users mailing list (email@hidden)