Re: floats & Color APIs
- Subject: Re: floats & Color APIs
- From: John Stiles <email@hidden>
- Date: Mon, 23 Oct 2006 12:14:06 -0700
Even QuickDraw used 16-bit shorts for representing color.
I think there are good reasons to support a wider range of color
values than what an average monitor can display (256 unique shades of
red, green, or blue). A good printer can probably use 10-12 bits of
color information per channel. Also, any time you are compositing
images, having "extra" resolution above and beyond what you strictly
need can make the final output look better. In a similar vein, audio
editing tools usually work at 96 kHz with 24- or 32-bit precision,
even though that's far more precision than the ear can distinguish;
the extra headroom absorbs the rounding error that normally
accumulates during mixing.
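
To put a rough number on that headroom argument, here is a small
sketch in Swift (illustrative only, not from the original post): it
darkens a 256-level ramp to 10% brightness and then brightens it back,
once through an 8-bit intermediate and once in floating point. The 10%
factor and the use of a ramp are arbitrary choices for the demo.

import Foundation

// Darken a gradient to 10% brightness, then brighten it back, once
// through an 8-bit intermediate and once in floating point, to show
// how quantizing the intermediate result destroys distinct levels.
let darken = 0.1   // arbitrary illustrative adjustment factor

// A ramp of 256 source levels in [0, 1].
let ramp = (0..<256).map { Double($0) / 255.0 }

// Floating-point pipeline: darken, then brighten; detail survives.
let floatResult = ramp.map { ($0 * darken) / darken }

// 8-bit pipeline: darken, round to a byte, then brighten.
let byteResult = ramp.map { value -> Double in
    let stored = UInt8((value * darken * 255.0).rounded())  // quantized intermediate
    return (Double(stored) / 255.0) / darken
}

print("distinct levels, float path: \(Set(floatResult).count)")  // typically 256
print("distinct levels, 8-bit path: \(Set(byteResult).count)")   // far fewer (heavy banding)

The 8-bit path is left with only a few dozen distinct levels, which is
exactly the kind of banding that extra intermediate precision avoids.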
For compact storage, you could certainly scale down the values if you
want. Unless your customers demand exact color precision, it should
be OK.
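
As a sketch of what that scaling down might look like (again
illustrative, in modern Swift/AppKit rather than the Objective-C of
the era), the helpers below pack an NSColor's floating-point
components into single bytes and back. The names packedRGBA(from:) and
color(fromPackedRGBA:), and the choice of sRGB as the storage space,
are assumptions for the example, not anything from the thread.

import AppKit

// Convert an NSColor's floating-point components (0.0-1.0) into
// single-byte components for compact file storage, then back again.
// Converting to a known RGB color space first avoids the exception
// NSColor raises when you ask a non-RGB color for its RGB components.
func packedRGBA(from color: NSColor) -> [UInt8]? {
    guard let rgb = color.usingColorSpace(.sRGB) else { return nil }
    var r: CGFloat = 0, g: CGFloat = 0, b: CGFloat = 0, a: CGFloat = 0
    rgb.getRed(&r, green: &g, blue: &b, alpha: &a)
    // Scale [0, 1] to [0, 255] and round; precision below 1/255 is discarded.
    return [r, g, b, a].map { UInt8((max(0, min(1, $0)) * 255).rounded()) }
}

func color(fromPackedRGBA bytes: [UInt8]) -> NSColor {
    NSColor(srgbRed: CGFloat(bytes[0]) / 255,
            green: CGFloat(bytes[1]) / 255,
            blue: CGFloat(bytes[2]) / 255,
            alpha: CGFloat(bytes[3]) / 255)
}

// Example: the round trip is lossy only below the 1/255 quantization step.
let original = NSColor(srgbRed: 0.123, green: 0.456, blue: 0.789, alpha: 1.0)
if let bytes = packedRGBA(from: original) {
    print(bytes)                          // e.g. [31, 116, 201, 255]
    print(color(fromPackedRGBA: bytes))   // close to, but not exactly, the original
}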
On Oct 23, 2006, at 11:47 AM, Eric Gorr wrote:
I'm just curious...
What is the reasoning behind moving to floats to represent color in
Apple's Cocoa color APIs?
For example,
<http://developer.apple.com/documentation/Cocoa/Reference/ApplicationKit/Classes/NSColor_Class/Reference/Reference.html#//apple_ref/occ/instm/NSColor/blueComponent>
Why wasn't 1 byte or even 2 bytes per component considered enough?
Were the reasons purely about the speed of certain calculations?
Is there any reason not to just convert these numbers to single-byte
components for the purpose of file storage?
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Cocoa-dev mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden