re: camera gamut
- Subject: re: camera gamut
- From: email@hidden
- Date: Thu, 02 Feb 2012 23:12:10 +0000 (UTC)
I presented a paper on wide gamut imaging at the Fall 2011 SMPTE conference ["Theoretical and Practical Limits to Wide Color Gamut Imaging in Objects, Reproducers, and Cameras"], with the presentation focused specifically on "camera gamut." It has not yet been published in print, but I think I can summarize some helpful points here.
The first two points follow Jack Holm (reference below).
First of all, a useful definition of "camera gamut" has to be stated in a color space, not a spectral space: it is the gamut of colors that a camera can report for any possible spectral input.
Secondly, a "camera" has to be defined as the combination of a set of spectral sensitivities and a transformation from the sensed quantities to a color space (usually CIE XYZ). It is this output in a color space that exhibits a defined gamut.
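To make these first two points concrete, here is a minimal sketch of that definition in Python/NumPy. The Gaussian channel sensitivities and the 3x3 matrix M below are purely hypothetical, chosen only to illustrate the structure (sensitivities plus transform); they are not taken from my paper or from any real camera.

import numpy as np

wavelengths = np.arange(380, 781, 1)           # nm, 1 nm spacing

def gaussian(center, width):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

# Hypothetical channel sensitivities (rows: R, G, B), illustrative only
S = np.stack([gaussian(610, 30), gaussian(540, 40), gaussian(450, 35)])

# Hypothetical linear transform from the three sensed quantities to CIE XYZ
M = np.array([[ 1.2, -0.1, -0.1],
              [ 0.4,  0.7, -0.1],
              [ 0.0, -0.1,  1.1]])

def camera_xyz(spectrum):
    # The "camera": sensed quantities are the integrals of spectrum times
    # sensitivity (approximated here as sums over 1 nm samples); the
    # reported color is the transform of those sensed quantities.
    raw = S @ spectrum
    return M @ raw

def chromaticity(xyz):
    # CIE (x, y) chromaticity of a reported X, Y, Z triple
    return xyz[:2] / xyz.sum()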
Third, when a particular camera (sensor plus transform) is characterized, it exhibits a spectral locus in color space, as elucidated by Jack Holm ( http://www.color.org/documents/CaptureColorAnalysisGamuts_ppt.pdf ). While Holm implied that this locus defines the capture gamut, the camera gamut is really the convex envelope enclosing the camera spectral locus; Holm's "color analysis gamut" on slide 3 of that presentation is actually the camera spectral locus. An input spectrum containing two carefully chosen wavelengths, one at the extreme green point and one somewhere along the cyan edge, can produce a reported output chromaticity outside the camera spectral locus (though in this case still well within the gamut of visible chromaticities).
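Continuing with the hypothetical camera sketched above, the distinction between the camera spectral locus and the camera gamut can be shown directly: sweep monochromatic inputs to trace the locus, then feed a two-wavelength mixture. The wavelengths 530 nm and 490 nm below are only stand-ins for "extreme green" and "somewhere along the cyan edge".

# Camera spectral locus: reported chromaticity for each monochromatic input
locus = []
for i in range(len(wavelengths)):
    line = np.zeros(len(wavelengths))
    line[i] = 1.0
    locus.append(chromaticity(camera_xyz(line)))
locus = np.array(locus)

# Two-line input: an "extreme green" wavelength plus one on the cyan edge
mix = np.zeros(len(wavelengths))
mix[wavelengths == 530] = 1.0
mix[wavelengths == 490] = 1.0
print("mixture chromaticity:", chromaticity(camera_xyz(mix)))

# Because camera_xyz is linear, the mixture's chromaticity lies on the
# straight chord joining the 530 nm and 490 nm locus points. Wherever the
# locus is non-convex (as it is for real camera sensitivities), such a chord
# falls outside the locus itself, which is why the camera gamut is the
# convex envelope of the locus rather than the locus.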
The limitations in camera gamut arise because the camera's spectral sensitivities are relatively narrow compared to those of the human visual system (especially the "red" or long-wavelength channel). In my paper, this was explored by exciting the camera with a series of progressively narrower Gaussian spectra. In the case of spectra centered in the cyan/green part of the spectrum (say, at 510 nm), once the bandwidth is reduced beyond a certain point, the camera's red response becomes essentially zero, and further decreases in bandwidth therefore produce no increase in reported color saturation. When a reasonable transform matrix is used for correct reporting of common, lower-saturation colors, all cyans of medium to high saturation (as seen by the human visual system's "standard observer") are reported by the camera as having the same medium saturation.
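The narrowing-Gaussian experiment is easy to reproduce with the same hypothetical camera (the real-camera results are in the paper): watch the long-wavelength raw value collapse relative to green as the 510 nm input narrows, after which the reported chromaticity stops gaining saturation.

for width in (80, 40, 20, 10, 5, 2):           # Gaussian width parameter, nm
    spectrum = np.exp(-0.5 * ((wavelengths - 510) / width) ** 2)
    raw = S @ spectrum
    x, y = chromaticity(camera_xyz(spectrum))
    print(f"width {width:3d} nm   R/G raw ratio {raw[0] / raw[1]:.4f}   "
          f"chromaticity ({x:.4f}, {y:.4f})")

# Once the red channel's relative response is essentially zero, further
# narrowing only rescales the remaining green/blue pair, so the reported
# chromaticity converges instead of continuing toward the saturation that
# the standard observer would see.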
When the transform used in a camera is a simple linear 3x3 matrix, the resulting camera gamut has exact chromaticity limits, namely the convex envelope enclosing the camera spectral locus, and those limits are independent of the input level. However, a non-linear transform (usually a look-up table) can modify this, provided the camera's spectral response roll-offs are gradual enough, and overlap sufficiently, that some reasonable degree of response remains in all three sensor channels as the bandwidth of the input spectrum is reduced. My paper gave examples based on the spectral responses of several different real cameras.
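The level-independence of the linear-matrix gamut follows directly from linearity; here is a quick check with the same hypothetical camera.

spectrum = np.exp(-0.5 * ((wavelengths - 510) / 10) ** 2)
for level in (0.1, 1.0, 10.0):
    print(level, chromaticity(camera_xyz(level * spectrum)))

# All three lines print the same (x, y): scaling the input scales X, Y and Z
# equally, so the chromaticity, and hence the gamut boundary, does not move
# with level. A per-channel non-linearity or a 3-D look-up table applied in
# place of (or before) the simple matrix breaks this invariance, which is
# what lets such transforms reshape the gamut as long as all three channels
# retain some usable response.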
Wayne Bretl