- Subject: (no subject)
- From: "Joseph M Striedl" <email@hidden>
- Date: Thu, 22 Jul 2004 23:03:50 -0500
Hi all,
I'm a bit of a newbie to Cocoa (and the inner workings of the Mac environment),
and I'm working on a (IMHO) semi-ambitious first program. Basically, I'm
creating large montage images by tiling portrait images into an NSImage, then
compositing that image with portrait-sized rectangles of pixel colors sampled
from a target image. The idea is to create a rough large image that appears to
be made of smaller images. The general problem is that the compositing
operators defined by AppKit are a bit too limited for my needs. Specifically, I
take a pixel color sample from the larger image (from a location that
corresponds to one of the smaller portrait images), use drawSwatchInRect: to
draw that color into an NSImage (the rectangle being equal in size to the
smaller portrait image), and then composite that NSImage onto the big bad image
composed of the smaller portraits. The composite operators won't let me
fine-tune enough to make the result look right. If I use
NSCompositePlusLighter, for example, the completed image is far too bright,
even if I modify the component values of the sampled color to make it as subtle
as possible. I'm currently experimenting with compositing the image multiple
times using a combination of composite operators to achieve the desired effect,
but I thought I'd toss a line out there and see if anyone can give me some
advice or point me in a less roundabout direction.
I AM new to Cocoa, so I could just be experiencing serious cranial flatulence.
So here's kinda where I'm coming from and the direction I'm investigating. My
understanding is that each compositing operator defines an equation that is
applied to the component values of corresponding pixels in two images to
produce the result image.
For instance, NSCompositePlusLighter seems to simply add the component values
of a given pixel in the first image to the component values of the
corresponding pixel in the second, producing a color with the summed values,
clamped at the maximum. (Ex: in an RGB image, a pixel with values R:52 G:123
B:235 plus a pixel with values R:107 G:122 B:53 would produce R:159 G:245
B:255, the blue sum of 288 clamping to 255.)
The operator goes through every pixel of the images and applies this equation.
Is there a way to customize that equation at a low level, i.e., define my own
composite operators? I could write a for loop that walks every pixel of the
image and applies the desired equation, but the images I'm working with have
around 27 million pixels, so that takes a while, to say the least. The
composite operators seem to pull this off without such overhead, so I'd like to
dig deeper and attack this issue from as low a level as possible. I've started
poring through documentation and just started reading up on Quartz 2D, which
seems promising, but I'd hate to eat up time learning that tool only to
discover a different method of accomplishing the same result I've already been
getting. Any help, advice, or even ridicule of my poor communication skills
would be appreciated!
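For what it's worth, here's the kind of custom equation I have in mind, sketched over a raw byte buffer: instead of straight addition, tint each pixel toward the sampled swatch color by a weight, which would avoid the too-bright result. I'm assuming a packed 8-bit RGBA layout like what NSBitmapImageRep's -bitmapData hands back; the function name, the layout, and the 0-256 weight scale are all my own assumptions, not any Cocoa API.

```c
#include <stddef.h>

/* Sketch of a custom "composite operator": blend each pixel toward a
   swatch color by weight w (0 = untouched, 256 = fully the tint).
   Assumes a packed 8-bit RGBA buffer; names and layout are assumed. */
static void tint_toward(unsigned char *rgba, size_t npixels,
                        const unsigned char tint[3], unsigned int w)
{
    for (size_t i = 0; i < npixels; i++) {
        unsigned char *p = rgba + i * 4;
        for (int c = 0; c < 3; c++)   /* leave alpha alone */
            p[c] = (unsigned char)((p[c] * (256 - w) + tint[c] * w) >> 8);
    }
}
```

With w = 128 this is a 50/50 mix, so a gray pixel {100, 100, 100} tinted toward {200, 0, 0} lands at {150, 50, 50} with alpha untouched. The inner loop is exactly the 27-million-iteration cost I mentioned, which is why I'd rather hook into whatever the built-in operators do.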
Sincerest thanks in advance,
Joe Striedl
_______________________________________________
cocoa-dev mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/cocoa-dev
Do not post admin requests to the list. They will be ignored.