Re: Problem with converting an NSImage to grayscale
- Subject: Re: Problem with converting an NSImage to grayscale
- From: Heinrich Giesen <email@hidden>
- Date: Tue, 30 Aug 2005 15:44:42 +0200
On 30.08.2005, at 13:06, Arik Devens wrote:
I'm trying to write some code to convert arbitrary RGB and RGBA NSImages to grayscale. I've written a routine, which started out as the negative image sample from ImageDifference, but I'm running into some odd problems. The code works on some images and not others, and I haven't been able to figure out why some work and some don't.
John C. Randolph answered:
I wrote the Image Difference sample, and the code I have in there to dig through the pixel data is no longer necessary. Just use the monochrome filter from CoreImage. It will be much faster, since the processing will happen on the GPU.
Maybe the monochrome filter from CoreImage is faster, but that does not explain why Arik's code doesn't work. Simple answer: his code (clearly influenced by the ImageDifference code) is wrong because the ImageDifference code is wrong, especially this "digging through the image data".
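(For completeness, the CoreImage route would look roughly like this. This is only a sketch, untested; the function name grayscaleImageViaCoreImage is my own invention, and it needs Tiger plus QuartzCore.)

    #import <Cocoa/Cocoa.h>
    #import <QuartzCore/QuartzCore.h>

    // Sketch: run an NSImage through the CIColorMonochrome filter and
    // wrap the result in a new NSImage via NSCIImageRep.
    NSImage *grayscaleImageViaCoreImage(NSImage *source)
    {
        CIImage  *input = [CIImage imageWithData:[source TIFFRepresentation]];
        CIFilter *mono  = [CIFilter filterWithName:@"CIColorMonochrome"];
        [mono setDefaults];
        [mono setValue:input forKey:@"inputImage"];
        [mono setValue:[CIColor colorWithRed:1.0 green:1.0 blue:1.0]
                forKey:@"inputColor"];
        [mono setValue:[NSNumber numberWithFloat:1.0] forKey:@"inputIntensity"];

        CIImage      *output = [mono valueForKey:@"outputImage"];
        NSCIImageRep *rep    = [NSCIImageRep imageRepWithCIImage:output];
        NSImage      *result = [[[NSImage alloc] initWithSize:[rep size]] autorelease];
        [result addRepresentation:rep];
        return result;
    }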
Before I offer (as a very humble programmer) my code, I have some remarks:
Remark 1: for the description of the storage of a pixel you use a C struct. Don't do this. You cannot know how the 4 consecutive bytes in the source are mapped into a C struct; the compiler knows, and it may use different mappings on big-endian (PPC) and little-endian (Intel) machines.
Remark 2: the conversion from RGB to gray is a weighted one. Usually (in JPEG or in television) these coefficients are used: gray = 0.299 * Red + 0.587 * Green + 0.114 * Blue. In my code below I use an integer approximation to avoid floating-point arithmetic.
Remark 3: you have to respect that the 3 RGB bytes of a pixel may occupy 4 bytes of storage, for optimization.
Remark 4: a row of pixels may also have padding bytes at the end (Tiger uses a different row-padding algorithm than Panther did), so always step from one row to the next using bytesPerRow.
And now follows my code, which works only if the source imageRep
  - has 8 bits per sample
  - has 3 or 4 samples per pixel
  - has an RGB colorspace
  - is meshed (not planar).
(This should go in a category on NSBitmapImageRep. APPLE MAKES NO WARRANTIES, and neither do I.)
- (NSBitmapImageRep *)grayscaleImageRep
{
    unsigned char *pixels = [self bitmapData];
    int row;
    int column;
    int widthInPixels;
    int heightInPixels;
    int bytesPerRow   = [self bytesPerRow];
    int bytesPerPixel = [self bitsPerPixel] / [self bitsPerSample];

    // check that the source imageRep is one this method can handle:
    // 8 bits per sample, 3 or 4 samples per pixel, meshed (not planar)
    if ([self bitsPerSample] != 8 || [self isPlanar] ||
        bytesPerPixel < 3 || bytesPerPixel > 4)
        return nil;

    widthInPixels  = [self pixelsWide];
    heightInPixels = [self pixelsHigh];

    NSBitmapImageRep *destImageRep = [[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes:nil
                      pixelsWide:widthInPixels
                      pixelsHigh:heightInPixels
                   bitsPerSample:8
                 samplesPerPixel:1
                        hasAlpha:NO
                        isPlanar:NO
                  colorSpaceName:NSCalibratedWhiteColorSpace
                     bytesPerRow:widthInPixels
                    bitsPerPixel:8];

    unsigned char *grayPixels = [destImageRep bitmapData];

    for (row = 0; row < heightInPixels; row++) {
        // the source row may be padded, so step by bytesPerRow;
        // the destination was created without padding
        unsigned char *sourcePixel   = pixels + row*bytesPerRow;
        unsigned char *thisgrayPixel = grayPixels + row*widthInPixels;

        for (column = 0; column < widthInPixels; column++, sourcePixel += bytesPerPixel) {
            int gray = (77*sourcePixel[0] + 150*sourcePixel[1] + 29*sourcePixel[2]) / 256;
            /*
                 77/256 = 0.30078   should be 0.299
                150/256 = 0.58594   should be 0.587
                 29/256 = 0.11328   should be 0.114
                JPEG (the Independent JPEG Group) uses:
                Y = 0.29900 * R + 0.58700 * G + 0.11400 * B
            */
            if (gray > 255) gray = 255;   // be cautious
            thisgrayPixel[column] = gray;
        }
    }
    return [destImageRep autorelease];
}
The code is a bit simpler for planar imageReps. And making use of the alpha channel is left to the reader :-)
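A caller could use the category roughly like this (again only a sketch; someImage stands for whatever NSImage you start with, and imageRepWithData:/TIFFRepresentation is just one convenient way to get a meshed 8-bit RGB rep):

    NSBitmapImageRep *srcRep =
        [NSBitmapImageRep imageRepWithData:[someImage TIFFRepresentation]];
    NSBitmapImageRep *grayRep = [srcRep grayscaleImageRep];

    if (grayRep != nil) {
        NSImage *grayImage =
            [[[NSImage alloc] initWithSize:[grayRep size]] autorelease];
        [grayImage addRepresentation:grayRep];
        // grayImage is now the grayscale version of someImage
    }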
--
Heinrich Giesen
email@hidden