RE: NSImage question
- Subject: RE: NSImage question
- From: Jeff Laing <email@hidden>
- Date: Fri, 30 Mar 2007 10:28:21 +1000
> > My best guess at the moment is:
> >
> > - image 1 is my RGBA data
> > - create image 2 with the same size as image 1
> > - fill image 2 with white
> > - compositeToPoint:operation: image 1 into image 2
> > - create image 3 with the target size
> > - drawInRect:fromRect:operation: image 2 into image 3
I tried this last night and it didn't work either, for reasons that aren't
clear to me. My best guess is that it's something like "even though the fill
with white set every pixel's alpha to 1.0, the composite then overwrote the
alpha values - i.e., I gained nothing".
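Roughly, what I tried looks like this (reconstructed from memory, so treat
it as an untested sketch - FlattenAndScale is just a name I'm using here):

    #import <Cocoa/Cocoa.h>

    // Untested sketch of the steps quoted above.
    static NSImage *FlattenAndScale(NSImage *image1, NSSize targetSize)
    {
        NSSize srcSize = [image1 size];
        NSRect srcRect = NSMakeRect(0, 0, srcSize.width, srcSize.height);

        // image 2: same size as image 1, filled with white, then
        // image 1 composited over the white.
        NSImage *image2 = [[NSImage alloc] initWithSize:srcSize];
        [image2 lockFocus];
        [[NSColor whiteColor] set];
        NSRectFill(srcRect);
        [image1 compositeToPoint:NSZeroPoint
                       operation:NSCompositeSourceOver];
        [image2 unlockFocus];

        // image 3: the target size, with image 2 scaled into it.
        NSImage *image3 = [[NSImage alloc] initWithSize:targetSize];
        [image3 lockFocus];
        [image2 drawInRect:NSMakeRect(0, 0, targetSize.width,
                                      targetSize.height)
                  fromRect:srcRect
                 operation:NSCompositeSourceOver
                  fraction:1.0];
        [image3 unlockFocus];

        [image2 release];
        return [image3 autorelease];
    }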
At the moment, I'm thinking I need to extract the NSBitmapImageRep and step
through it, forcing all the alpha bytes back to 1.0 after the composite,
which can't be right (see the sketch below). What I'm expecting is that
there's some magic API that does the equivalent of Photoshop's 'flatten' -
one that combines two images (taking alpha into account) and then lets me
discard the alpha channel.
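The brute-force version would be something like this hypothetical helper,
assuming 8-bit meshed (interleaved) RGBA - and note it ignores
premultiplication, which is part of why it feels wrong:

    #import <Cocoa/Cocoa.h>

    // Hypothetical brute-force helper: force every alpha byte to 255.
    // Only handles the simple case: 8 bits per sample, meshed RGBA.
    // If the bitmap is premultiplied, the colour samples have already
    // been scaled by the old alpha, so this is not a true flatten.
    static void ForceAlphaOpaque(NSBitmapImageRep *rep)
    {
        if ([rep isPlanar] || ![rep hasAlpha] ||
            [rep samplesPerPixel] != 4 || [rep bitsPerSample] != 8)
            return;

        unsigned char *data = [rep bitmapData];
        int bytesPerRow = [rep bytesPerRow];
        int width = [rep pixelsWide];
        int height = [rep pixelsHigh];
        int x, y;

        for (y = 0; y < height; y++) {
            unsigned char *pixel = data + y * bytesPerRow;
            for (x = 0; x < width; x++, pixel += 4)
                pixel[3] = 255;   // alpha is the last of the four samples
        }
    }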
> Have you considered a custom NSView? Override drawRect: and draw
> your images with whatever scale/location you want. By doing so,
> you don't need an extra NSImage as an off-screen buffer any more.
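For reference, I take that to mean something like the following hypothetical
view (class and ivar names are mine):

    #import <Cocoa/Cocoa.h>

    // Sketch of the suggestion: a view that draws the image itself,
    // scaled to its bounds, with no intermediate NSImage.
    @interface ScaledImageView : NSView
    {
        NSImage *image;   // assumed to be set elsewhere
    }
    @end

    @implementation ScaledImageView
    - (void)drawRect:(NSRect)rect
    {
        [[NSColor whiteColor] set];
        NSRectFill([self bounds]);   // opaque white background
        NSSize s = [image size];
        [image drawInRect:[self bounds]
                 fromRect:NSMakeRect(0, 0, s.width, s.height)
                operation:NSCompositeSourceOver
                 fraction:1.0];
    }
    @end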
The issue isn't NSImageView - I want to get NSImage under control. At the
moment, I don't seem to be able to composite images that have alpha channels
if there is any scaling involved, and I want to be able to do that sort of
thing when no NSImageViews are involved.