So, I'm getting my hands truly soaked with spectral color calculations for the first time, and I've noticed an oddity that I'm hoping somebody here might be able to clarify.

The spectral data, formulas, and whatnot published by the CIE are explicitly standardized so that illuminants all have a relative spectral power of 100 at 560 nm. But the 1931 2° Standard Observer table is normalized so that y(lambda) = 1.000000 at 555 nm, and the 1964 10° Standard Observer has y(lambda) = 0.999110 at 555 nm, with no entry exactly equal to 1 (though 555 nm is still the peak of y(lambda)).

I would naively expect everything to be normalized at the same wavelength. Is there a reason why the illuminants are normalized at 560 nm but the Standard Observers at 555 nm? Does it make a practical difference, let alone a real-world one?

Thanks,

b&
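
P.S. In case it helps frame the question, here's a toy sketch (Python, with made-up Gaussian spectra standing in for the real CIE tables, apart from the two defining points above) of where each normalization shows up in the usual tristimulus integration. As far as I can tell, the k factor cancels any overall scale on either table, which is part of why I'm wondering whether the mismatch matters in practice:

    import numpy as np

    wavelengths = np.arange(380, 785, 5)  # 5 nm sampling, 380-780 nm

    # Placeholder illuminant SPD (not real CIE data), scaled so that
    # S(560) = 100 per the illuminant convention.
    S = np.exp(-((wavelengths - 560) / 120.0) ** 2)
    S *= 100.0 / S[wavelengths == 560]

    # Placeholder y-bar CMF, scaled so its peak (555 nm for the 1931
    # observer) is exactly 1.
    ybar = np.exp(-((wavelengths - 555) / 45.0) ** 2)
    ybar /= ybar.max()

    # The normalizing constant k forces Y = 100 for a perfect reflector,
    # so any overall scale on S or y-bar drops out at this step.
    k = 100.0 / np.sum(S * ybar)

    reflectance = np.ones_like(wavelengths, dtype=float)  # perfect diffuser
    Y = k * np.sum(S * ybar * reflectance)
    print(Y)  # ~100.0 regardless of how S or y-bar were scaled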