Making progress on my calibration project. I have a question regarding the merits of pure gamma calibration.

The following graph shows the relationship between 8-bit input RGB numbers and measured output luminance: https://1drv.ms/b/s!AkD78CVR1NBqktJmMSOB1tc6aOdiwg?e=5ujwEq

The horizontal axis represents input RGB numbers, from 0 to 255, as they would come out of Photoshop, and the vertical axis represents output luminance, as measured on the face of the monitor. There are two "curves" on this graph: the blue curve represents the measured response of a Dell 17" laptop, while the red curve represents a 2.2 gamma, as best as I can calculate it.

Judging by the shape of the graph in the shadows, I am tempted to conclude that there is poor separation of tones: from 0 to about 25, increasing RGB counts map to practically the same luminance value. This "gamma calibration method" does not make sense to me. Whether 2.2 or 1.8, I don't see how the response could be improved at the shadow end. Which makes me wonder whether the other popular calibration schemes, like L* or sRGB, create better shadow separation. Worth investigating...
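To make the question concrete, here is a rough Python sketch (my own back-of-the-envelope, not from any calibration package) comparing how gamma 2.2, sRGB, and L* encodings spread the first 26 codes over a display's luminance range. The 0.30 cd/m^2 black and 120 cd/m^2 white figures are assumptions for illustration, not measurements from the Dell panel; the curve definitions are the standard published ones.

# Compare shadow separation of three encoding curves.
# BLACK and WHITE are assumed display luminances, not measured values.

BLACK = 0.30   # assumed black point, cd/m^2
WHITE = 120.0  # assumed white point, cd/m^2

def gamma_22(c):
    """Pure power-law gamma 2.2: relative luminance for 8-bit code c."""
    return (c / 255.0) ** 2.2

def srgb(c):
    """IEC 61966-2-1 sRGB EOTF (linear toe below 0.04045)."""
    v = c / 255.0
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def lstar(c):
    """CIE L*: code mapped linearly to L* 0..100, then inverted to Y."""
    L = 100.0 * c / 255.0
    return ((L + 16.0) / 116.0) ** 3 if L > 8.0 else L / 903.3

def absolute(y_rel):
    """Scale relative luminance onto the assumed display range."""
    return BLACK + (WHITE - BLACK) * y_rel

print(f"{'code':>4} {'gamma2.2':>10} {'sRGB':>10} {'L*':>10}  (cd/m^2)")
for c in range(0, 26, 5):
    print(f"{c:>4} {absolute(gamma_22(c)):>10.3f} "
          f"{absolute(srgb(c)):>10.3f} {absolute(lstar(c)):>10.3f}")

# Total luminance spanned by codes 0-25 under each curve:
for name, f in [("gamma 2.2", gamma_22), ("sRGB", srgb), ("L*", lstar)]:
    span = absolute(f(25)) - absolute(f(0))
    print(f"{name:>10}: codes 0-25 span {span:.3f} cd/m^2")

On these assumed numbers, gamma 2.2 packs codes 0-25 into roughly 0.7 cd/m^2 while L* spreads them over nearly twice that, which at least puts a figure on the "better shadow separation" question.

/ Roger Breton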