Re: NSImage representations
- Subject: Re: NSImage representations
- From: Jeremy Rotsztain <email@hidden>
- Date: Tue, 17 Jul 2007 13:33:58 -0400
This is great, Heinrich. Thanks for taking such a thorough look at my
code.
I didn't realize that I could load images into NSImageReps. That
saves me some work.
What is the correct method for changing the NSImageReps to
NSBitmapImageReps? I've been using [ myImageRep copy ].
I should mention that I ran into similar problems parsing through the
bytes of a scaled NSImageRep. The resolution of the image is correct
(320 x 240), but the pixel data looks completely skewed (spatially)
when I do the quantize image transformation. My guess is that the data
of the original image is maintained, but the resolution of the image
is changed to the scaled values.
Is NSImageRep caching the images as I load them? I'm loading (and
releasing) thousands of images and I want to make sure that I'm not
running out of space on my machine.
Things are already working better,
Jeremy
PS - It should only be 1000 colors. If we have a bright r/g/b color,
it would be 255. So 255 / 25 = 10.2. But the decimal point would be
ignored, right?
On Jul 9, 2007, at 5:02 PM, Heinrich Giesen wrote:
Hi,
On 09.07.2007, at 21:02, Jeremy Rotsztain wrote:
I'll look into that.
For the time being, I just created a new NSImage (at a lower
resolution) and drew into it using NSImage drawInRect: fromRect:
Don't do that; remove the (last) bugs in your program. Your biggest
mistake was to use the proposed code from an O'Reilly tutorial. You
are not the first and not the only victim of that tutorial. I read it
again yesterday and must say: I was not amused! And now let us repair
that code.
Because you need the pixels of a raster image there is no need to use
an NSImage. (There is rarely a good reason for using an NSImage;
sometimes it is nice for drawing.) It complicates the situation more
than it helps. Let us assume you read an image file that contains
jpeg, png, gif, tiff, ... (raster) data:
NSImageRep *theRep = [NSImageRep imageRepWithContentsOfFile:fileName];
or better:
NSImageRep *theRep = [NSImageRep imageRepWithContentsOfURL:aURL];
(Note: the singular imageRepWithContentsOf... methods return a single
rep; the plural imageRepsWithContentsOf... variants return an NSArray.)
The result should be an NSBitmapImageRep object.
Now to your quantization method:
- (NSBitmapImageRep *) quantizeImageRep: (NSBitmapImageRep *) paintImageRep
{
    NSBitmapImageRep *quantizeRep = // as in your first post
    int row, column;   // names changed, was x, y
    //int imgWidth  = imageSize.width;
    //int imgHeight = imageSize.height;
    // To use these values for looping over the pixels is wrong!
    // The size is only needed for rendering for a device, not here;
    // otherwise you are stuck with the resolution. Use this:
    int bytesPerRowSrc = [paintImageRep bytesPerRow];
    int bytesPerRowDst = [quantizeRep bytesPerRow];
    // In Tiger bytesPerRow is divisible by 32, before Tiger it was 16 (or 8?).
    // The tutorial doesn't respect padding bytes and therefore it cannot work.
    int pixelWide = [paintImageRep pixelsWide];
    int pixelHigh = [paintImageRep pixelsHigh];
    unsigned char *srcData = [paintImageRep bitmapData];
    unsigned char *dstData = [quantizeRep bitmapData];
    unsigned char *p1, *p2;
    int n = [paintImageRep bitsPerPixel] / 8;

    for( row = 0; row < pixelHigh; row++ ){
        p1 = srcData + row*bytesPerRowSrc; // first pixel of this source row
        p2 = dstData + row*bytesPerRowDst; // first pixel of this destination row
        for( column = 0; column < pixelWide; column++ ){
            p2[0] = (p1[0] / 25) * 25;
            p2[1] = (p1[1] / 25) * 25;
            p2[2] = (p1[2] / 25) * 25;
            p1 += n;
            p2 += n;
        }
    }
    return [quantizeRep autorelease];
}
Some further remarks:
- The code is written in Mail.
- This code silently assumes that the incoming NSBitmapImageRep has
  exactly three samples per pixel. It should be tested!
- This means gray images are not allowed in this code.
- Address operations are simplified and much faster than in the
  original code.
- The p2[..] statements in the tutorial are weird. There we find:
      *p2 = (unsigned char)rint( /* int ops only */ );
  rint() expects a double value and returns a double value, hence the
  cast. But the parameter for rint() is already an int, so this int is
  converted to double, rint() does some work and returns a double, and
  this (for the assignment) is converted back to int, i.e. to the
  original value. rint() uses CPU time and changes nothing.
- You wrote you wanted to reduce the number of colors to 1000, but
  using the constant 25 the colors are reduced to 1331 (11x11x11).
- In the tutorial (creating the new NSBitmapImageRep) there is:
      initWithBitmapDataPlanes:NULL, wrong, must be nil
  same with the parameters:
      bytesPerRow:NULL, wrong, must be 0
  and:
      bitsPerPixel:NULL, wrong, must be 0
  (OK, the compiler is smart enough to do the right thing.)
Last remark: if you want to do the same for a given NSImage, write a
new method:

- (NSImage *) quantizeImage: (NSImage *) myImage
{
    NSBitmapImageRep *rep = // extract the rep from myImage
    NSImage *img = [[NSImage alloc] initWithSize:NSZeroSize];
    [img addRepresentation:[self quantizeImageRep:rep]];
    return [img autorelease];
}
And now: good luck
Heinrich
--
Heinrich Giesen
email@hidden
_______________________________________________
Cocoa-dev mailing list (email@hidden)