8 bit precision vs 16 bit
- Subject: 8 bit precision vs 16 bit
- From: "Mark Rice" <email@hidden>
- Date: Wed, 12 Dec 2007 15:58:42 -0500
I may be wrong here, as I am speaking from my experience with 16-bit to 8-bit film recorders and other similar devices, but here goes:

I suspect that Photoshop may perform internal 16-bit calculations, which are then passed through a 16-bit-to-8-bit lookup table. Such LUTs are often not bidirectional. Also, what good is a fractional 0-255 color value if the video card only accepts 8-bit integer values? It would have to be rounded off anyway. Even if the video card operates at a greater bit depth internally, I don't believe that most of them accept more than 8-bit input. The same is true of other output devices - film recorders, inkjet printers, Lambdas and Lightjets, imagesetters and platesetters - almost all of them accept 8-bit input only, even though they may operate at a higher bit depth internally. That higher internal bit depth is usually there for curve manipulation: most engineers do NOT want to use logarithmic amplifiers because of their instability, and driving linear amplifiers through lookup tables is much more stable.
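To make the point about non-bidirectional LUTs concrete, here is a rough Python sketch - not Photoshop's actual pipeline; the gamma-2.2 curve and the 255/65535 scaling are only assumptions for the demo:

    import numpy as np

    # Build a 16-bit-to-8-bit LUT that applies a tone curve (gamma 2.2,
    # purely as an example curve) and quantizes to the 0-255 output range.
    gamma = 2.2
    lut_16_to_8 = np.round(
        255.0 * (np.arange(65536) / 65535.0) ** (1.0 / gamma)
    ).astype(np.uint8)

    # A plausible inverse LUT, built the same way in reverse.
    lut_8_to_16 = np.round(
        65535.0 * (np.arange(256) / 255.0) ** gamma
    ).astype(np.uint16)

    # Round-trip a full 16-bit ramp: 65536 input codes collapse onto at
    # most 256 output codes, so the inverse cannot recover the originals.
    ramp = np.arange(65536, dtype=np.uint16)
    round_trip = lut_8_to_16[lut_16_to_8[ramp]]
    print("distinct values surviving the round trip:",
          len(np.unique(round_trip)))      # at most 256, not 65536
    print("max round-trip error (in 16-bit code values):",
          int(np.abs(round_trip.astype(np.int32)
                     - ramp.astype(np.int32)).max()))

Since thousands of 16-bit codes land on each 8-bit code, an inverse table can only guess at which value you started from - that is what I mean by the LUTs not being bidirectional.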
So as far as I can tell, introducing decimal points into 8-bit values would simply deceive the end user into seeing chimerical values.
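For example, three different internal 16-bit values can produce three different decimal readouts while an 8-bit device receives exactly the same number. Again, just a sketch, assuming the usual 255/65535 mapping for the readout:

    def to_8bit_display(val16):
        # Express a 16-bit code (0-65535) on the 0-255 scale with
        # decimals, the way a "high-precision" readout might.
        return val16 * 255.0 / 65535.0

    for val16 in (46003, 46050, 46100):
        shown = to_8bit_display(val16)
        sent = round(shown)    # what an 8-bit device actually receives
        print(f"16-bit {val16} -> displayed {shown:.2f}"
              f" -> device gets {sent}")

All three display differently (179.00, 179.18, 179.38), but the device gets 179 every time - the decimals carry no information the output path can use.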
Mark Rice.
Marco Ugolini wrote:
>Yes, it would offer a peek into the dark underbelly of 8-bit calculations.
>But, as they say, you can't please everyone... :-)
>Marco Ugolini"