Re: Bitmaps and Cocoa
- Subject: Re: Bitmaps and Cocoa
- From: John Randolph <email@hidden>
- Date: Mon, 21 Jul 2003 14:27:07 -0700
On Saturday, July 19, 2003, at 2:28 PM, Darren Ford wrote:
Hi all,
I've implemented a cross-platform raytracer (in C++) and have hooked
it into a Cocoa frontend. The raytracer works on a pixel-by-pixel
basis and I have a question regarding pixel information and NSImage.
Currently I'm using 'bitmapData' on an NSBitmapImageRep in order
to place pixel data into an NSImage, i.e.:

myRep = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL
    pixelsWide:x pixelsHigh:y bitsPerSample:8 samplesPerPixel:4
    hasAlpha:YES isPlanar:NO colorSpaceName:NSDeviceRGBColorSpace
    bytesPerRow:0 bitsPerPixel:0];

pData = [myRep bitmapData];

myImage = [[NSImage alloc] initWithSize:ImageSize];
[myImage addRepresentation:myRep];
and then using weird offsets to place the raw pixel data, i.e.:
pData[yy*4*x + xx*4 + 2] = 50;
pData[yy*4*x + xx*4 + 3] = 70;
etc..
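
One caveat with those offsets: passing bytesPerRow:0 lets the rep choose
its own row stride, which may be padded beyond 4*x. A minimal sketch of
stride-aware indexing, reusing myRep, xx and yy from above (rowStride and
pixel are just illustrative names):

unsigned char *pData = [myRep bitmapData];
int rowStride = [myRep bytesPerRow];   // may be larger than 4 * pixelsWide

unsigned char *pixel = pData + (yy * rowStride) + (xx * 4);
pixel[0] = 255;   // red
pixel[1] = 0;     // green
pixel[2] = 50;    // blue
pixel[3] = 70;    // alpha (assuming a meshed RGBA layout for this rep)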
All of this appears to work fine, but I was wondering whether there is a
more elegant way of working with raw bitmaps. I'm also concerned that the
current implementation is not thread safe: my own access to the data is,
but with Cocoa's automatic redrawing there's the potential that Cocoa
tries to read the bitmap at the same time I'm writing my raw data.
That shouldn't be an issue unless two threads are both trying to write
the data at the same time. Are you seeing display glitches?
Also, the AppKit won't be redrawing in the window unless you mark the
view (or a part of the view) as needing to be redisplayed with
-setNeedsDisplay: or -setNeedsDisplayInRect:. Things like the window
getting uncovered by a window above it are handled by the window server
just copying the needed pixels from the backing store.
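
As a rough illustration (imageView is an assumed name for the view
displaying myImage, not something from the post): once a batch of pixels
has been written into the rep, it's the dirty-marking call that actually
triggers the redraw:

[imageView setNeedsDisplay:YES];    // redraw the whole view
// or, to restrict the redraw to the freshly rendered scanlines:
[imageView setNeedsDisplayInRect:NSMakeRect(0, yy, x, 1)];

If the raytracer runs on a secondary thread, those calls should be
funnelled to the main thread.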
As for talking to the pixels directly, you may want to do something
like the Image Difference sample, at:
http://developer.apple.com/samplecode/Sample_Code/Graphics_2D/Image_Difference.htm
In PixelTypes.h, I defined a few structs for various pixel data
formats, and then in Negative.m, I cast the pointer returned by
-bitmapData to a pointer to the appropriate pixel type.
For example:
typedef struct _RGBPixel
{
    unsigned char redByte, greenByte, blueByte;
} RGBPixel;
and:
- (NSBitmapImageRep *) invertRGB
{
    // -bitmapData returns a void*, not an NSData object ;-)
    RGBPixel *pixels = (RGBPixel *)[self bitmapData];

    int row, column,
        widthInPixels  = [self pixelsWide],
        heightInPixels = [self pixelsHigh];

    for (row = 0; row < heightInPixels; row++)
        for (column = 0; column < widthInPixels; column++)
        {
            RGBPixel *thisPixel = &(pixels[(widthInPixels * row) + column]);

            thisPixel->redByte   = (255 - thisPixel->redByte);
            thisPixel->greenByte = (255 - thisPixel->greenByte);
            thisPixel->blueByte  = (255 - thisPixel->blueByte);
        }

    return self;
}
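
The same cast-to-a-pixel-struct idea maps onto the 4-samples-per-pixel rep
from the original post; a rough sketch, assuming a meshed RGBA layout and
borrowing myRep, xx and yy from above (RGBAPixel, widthInPixels, r, g and
b are illustrative names, not from the sample):

typedef struct _RGBAPixel
{
    unsigned char redByte, greenByte, blueByte, alphaByte;
} RGBAPixel;

RGBAPixel *pixels = (RGBAPixel *)[myRep bitmapData];
int widthInPixels = [myRep pixelsWide];

// write one pixel of the raytracer's output at column xx, row yy
RGBAPixel *thisPixel = &pixels[(widthInPixels * yy) + xx];
thisPixel->redByte   = r;
thisPixel->greenByte = g;
thisPixel->blueByte  = b;
thisPixel->alphaByte = 255;

(The same bytesPerRow caveat as before applies: this assumes the rep's
rows aren't padded beyond widthInPixels * 4 bytes.)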
-jcr
_______________________________________________
cocoa-dev mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/cocoa-dev
Do not post admin requests to the list. They will be ignored.