Re: Linear-light RAW 12bit vs R G B 8bit: how much better is it
- Subject: Re: Linear-light RAW 12bit vs R G B 8bit: how much better is it
- From: "Bob Frost" <email@hidden>
- Date: Tue, 24 Jul 2007 12:11:22 +0100
Hi Ray,
> Now, the only problem with this is that humans don't react the same way
> as sensor chips. Most of the human senses are logarithmic.
I think that the human sensors react in the same way as our camera sensors.
The sensor molecule in cones & rods (retinal) absorbs one photon to change
its structure, and then this absorbed energy is passed through a cascade of
chemical reactions, resulting in the closure of a gate in the cell membrane
that stops current flow through it (sounds like a transistor?). Each cone in the retina
contains thousands of these sensor molecules, each of which will absorb one
photon and then change structure. So at that sensor level, it is probably
linear - one photon = one molecule changes, 100 photons = 100 molecules
change. But then in the retina-brain neural complexes, that info is
processed into non-linear form, just as our camera/computer processes our
linear camera sensor data into non-linear form.
From what I read, the main difference is that the cones (and rods) in the
eye can adapt to light intensity. Like the camera, the eye has a gain
control that helps in dim light (auto-ISO), but the cones (and rods) can
also become more or less sensitive to light as needed. So instead of
the Fuji having to have two sensors of differing sensitivity to cope with a
wide dynamic range, the eye sensors just change their sensitivity. This
seems to be done partly by slowing down the regeneration of the active
retinal (fewer unchanged retinal molecules means fewer new photons
absorbed), but partly by other more complex and still unknown means.
So I think there is more similarity between the complete camera system and
the complete human system than you imply, and that the basic sensors operate
on the same linear principle. The non-linearity in both systems is
introduced later during processing.
Bob Frost.
----- Original Message -----
From: "Ray Maxwell" <email@hidden>
First, the data that comes out of the sensor of your digital camera is
"linear" with respect to energy. That means if twice the energy comes
into the sensor (opening the aperture one f-stop, or doubling the
exposure time), the number that comes out will be twice as big. It is
correct that the first stop below clipping, at maximum exposure, contains
2048 levels in a 12-bit system. The next stop down contains 1024, and so on.
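
A quick sketch of that arithmetic in Python, assuming an ideal 12-bit ADC
with 4096 code values and no noise or black-level offset:

bits = 12
top = 2 ** bits          # 4096 code values in total (0..4095)

for stop in range(1, 7):
    upper = top // (2 ** (stop - 1))
    lower = top // (2 ** stop)
    print(f"stop {stop} below clipping: codes {lower}..{upper - 1}"
          f" -> {upper - lower} levels")

The brightest stop gets codes 2048..4095 (2048 levels), the next stop down
gets 1024 levels, and so on, halving each time.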
This is why we usually convert to a color space with a gamma of 2.2, which
is approximately visually "linear". This means that if we make a change of
five units in the highlights and then a change of five units in the shadows
or midtones, our eyes will perceive roughly the same amount of change.
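
A rough sketch of that in Python, using a plain 2.2 power law rather than
the exact sRGB curve (which adds a short linear segment near black): the
same five-unit step in the 8-bit encoded signal covers far more linear
light in the highlights than in the shadows, which roughly matches how our
eyes weight things, so the two steps look about equally big.

# Plain 2.2 power-law decode (encoding would be the inverse,
# linear ** (1 / 2.2)).  Compare what a 5-unit step in the 8-bit
# encoded signal means in linear light, shadows versus highlights.

def decode(code):
    """Gamma-2.2 encoded value 0..255 -> linear light 0..1."""
    return (code / 255) ** 2.2

for code in (20, 200):   # a shadow value and a highlight value
    step = decode(code + 5) - decode(code)
    print(f"code {code}: +5 encoded units = {step:.4f} in linear light")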