Marco Ugolini wrote:
> And let's not forget that in all likelihood the reason for a linear internal
> color space is to be found in the fact that digital cameras are themselves
> linear capture devices -- and not because a linear color space is inherently
> a better editing space.
There is a difference in goals between how image data is encoded
for storage or transmission, and how it is encoded for manipulation.
Due to our (essentially) ratiometric response to light, it is more efficient
to encode images in a non-linear space (i.e. gamma encoded) given
the limits of interference noise (e.g. analog television) or quantization
noise (e.g. 8-bit encoding). For image manipulations, however, we
are often attempting to mimic the behavior of light in the real world
(e.g. resizing, filtering, changing brightness, etc.), or dealing with raw
light readings from a sensor (scene-referred) prior to rendering
into an output-referred state, so for these goals a linear-light
representation is appropriate.
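As a rough illustration of that efficiency argument (a sketch of my own, with an assumed gamma of 2.2, not anything from the original post): quantizing a deep shadow value directly to 8 bits in linear light loses far more precision than quantizing its gamma-encoded value.

```python
# Sketch (assumption: a simple gamma-2.2 curve, not the exact sRGB formula):
# compare 8-bit quantization error for a dark tone, linear vs gamma encoded.

GAMMA = 2.2

def quantize_linear(v):
    """Round a [0,1] linear value directly to 8 bits and back."""
    return round(v * 255) / 255

def quantize_gamma(v):
    """Gamma-encode, round to 8 bits, then decode back to linear."""
    code = round((v ** (1 / GAMMA)) * 255)
    return (code / 255) ** GAMMA

dark = 0.001  # a deep shadow value in linear light
err_linear = abs(quantize_linear(dark) - dark)
err_gamma = abs(quantize_gamma(dark) - dark)

# Gamma encoding spends its 256 codes where vision is most sensitive,
# so the shadow error is orders of magnitude smaller.
print(err_linear, err_gamma)
```

Here the dark value rounds to code 0 in the linear case (all shadow detail lost), while the gamma-encoded path lands on code 11 and reconstructs it almost exactly.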
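To make the manipulation side concrete (again my own sketch, assuming a plain gamma-2.2 encoding): averaging two pixels, as any resize filter does, gives a physically wrong answer if done on the gamma-encoded values.

```python
# Sketch (assumed gamma 2.2): average a black and a white pixel, as a
# 2:1 downscale would, in gamma-encoded space vs linear-light space.

GAMMA = 2.2

black, white = 0.0, 1.0  # linear-light intensities

# Averaging the gamma-encoded values, then decoding, darkens the result.
avg_gamma_space = ((black ** (1 / GAMMA) + white ** (1 / GAMMA)) / 2) ** GAMMA

# Averaging the linear intensities mimics how light actually mixes.
avg_linear_space = (black + white) / 2

print(avg_gamma_space)   # about 0.218 -- too dark
print(avg_linear_space)  # 0.5 -- half the light, as expected
```

This is exactly why a linear internal space suits resizing and filtering even though a gamma space suits storage.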
These two aims are different, but not incompatible, since we can easily
convert from one encoding to the other. Of course, in the current world of
increasingly fast processors and larger hard drives, there is less pressure
to be efficient at storing and transmitting images.
Colorsync-users mailing list (email@hidden)