Re: Image classes (Re: How to detect a Retina Mac)
- Subject: Re: Image classes (Re: How to detect a Retina Mac)
- From: Ken Ferry <email@hidden>
- Date: Fri, 06 Sep 2013 16:16:21 -0700
FYI, I went into this question in some detail in this talk from WWDC 2009:
Session 111: NSImage in Snow Leopard
<https://deimos.apple.com/WebObjects/Core.woa/BrowsePrivately/adc.apple.com.2233538716.02233538722.2238039498?i=1820509221>
The last part of the talk is a discussion of the differences between the
different APIs available, and when you'd use what.
> Can result in odd behavior, for example scaling the cached bitmap
> instead of re-rasterizing the PDF or even using the cached bitmap when
> printing (yikes!).
Marcel, you shouldn't have seen this since 10.6. :-)
-ken
On Wed, Aug 21, 2013 at 1:49 AM, Marcel Weiher <email@hidden> wrote:
>
> On Aug 20, 2013, at 18:02, Uli Kusterer <email@hidden>
> wrote:
>
> > On Aug 20, 2013, at 12:36 PM, Gerriet M. Denkmann <email@hidden>
> wrote:
> >> Well, that much I know. And I also know that many NS/UI classes
> (Objective-C) have a CF counterpart written in plain C, and the two are
> often toll-free bridged. The latter kind is typically used when one needs
> more options or finer control (e.g. NSDictionary / CFDictionary).
> >>
> >> But what is the story behind NS/UIImage relative to CIImage, CGImage?
> When to use what? What are the relevant advantages?
> >>
> >> I would really like a link to some documentation that explains these
> questions.
> >
> >
> > NSImage/UIImage: Highest-level image abstraction, usually independent of
> pixels (e.g. may hold vector graphics, values are measured in Points, not
> pixels)
> > CGImage: Highest-level pixel-based representation of an image, mostly
> measured in actual pixels; you have to do all the Retina work yourself.
> > CIImage: Abstraction on top of textures on a graphics card for use as
> images. Useful if you want to quickly apply effects (CIFilter, transitions
> etc.) to an image, because the image is kept in GPU memory instead of RAM,
> so for applying several filters you save repeated up/downloads, and the
> filters run on many cores on the GPU, instead of blocking the few CPU cores
> your phone has.
> >
> > Note that the abstraction is transparent. E.g. for bitmap images,
> NSImage these days uses a CGImageRef under the hood. Also, CGImageRefs try
> to be smart about keeping image data on the GPU if they can (so
> conceptually "use CIImage", if not actually).
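> >
> > A minimal sketch of moving between those three levels (assuming 10.6+;
> > the file path is hypothetical):
> >
> >     NSImage *image = [[NSImage alloc]
> >         initWithContentsOfFile:@"/tmp/photo.png"];  // hypothetical path
> >     // Ask the NSImage for a CGImage suited to a destination rect.
> >     CGImageRef cgImage = [image CGImageForProposedRect:NULL
> >                                                context:nil
> >                                                  hints:nil];
> >     // Wrap the pixels in a CIImage for GPU-side filter work.
> >     CIImage *ciImage = [CIImage imageWithCGImage:cgImage];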
>
> NSImage: a container format for multiple representations of an image for
> optimized display.
>
> Can contain different image representations (NSImageReps), for example a
> PDF and bitmaps at multiple resolutions, and will create additional cached
> representation(s) if the display and the available representations don’t
> match, for example rasterizing a PDF and then using that rasterized
> representation for display. This can result in odd behavior, for example
> scaling the cached bitmap instead of re-rasterizing the PDF or even using the
> cached bitmap when printing (yikes!). If you want control, don’t use this
> (devs have been learning this lesson over and over since early NeXTStep
> days). I think of it as the NSIcon class.
>
> (Note: since 10.6 there has been some access to the selection mechanism
> with bestRepresentationForRect:context:hints:)
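>
> A minimal sketch of that selection mechanism, assuming an NSImage
> “image” holding several representations:
>
>     NSImageRep *rep = [image
>         bestRepresentationForRect:NSMakeRect(0, 0, 512, 512)
>                           context:nil
>                             hints:nil];
>     // rep is whichever representation NSImage itself would pick for
>     // that destination rect, e.g. a 512 px bitmap over a 128 px one.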
>
> UIImage / NSBitmapImageRep / CGImage: a bitmap in UIKit, AppKit and
> CoreGraphics, respectively
>
> Both UIImage and NSBitmapImageRep wrap a CGImage(Ref) (UIImage always did,
> with NSBitmapImageRep this happened a while ago, I think in Leopard). To
> the best of my knowledge, UIImage is equivalent to NSBitmapImageRep, not
> NSImage, because it does not allow vector data.
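>
> Both expose the wrapped object directly, e.g. (assuming a UIImage
> “uiImage” and an NSBitmapImageRep “bitmapRep”):
>
>     CGImageRef cg = uiImage.CGImage;        // UIKit
>     CGImageRef cg2 = [bitmapRep CGImage];   // AppKit, 10.5 and later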
>
> Due to the history of NSBitmapImageRep, it does have methods for direct
> access to the bitmap data, but because it is now a wrapper for CGImageRef,
> using those tends to be somewhat inefficient, requiring sending the bitmap
> data back and forth between graphics card and main memory. IIRC, the
> recommended way of getting the color at a pixel is to create a context and
> draw the image into the context with appropriate clipping. That way, the
> extraction can be done on the graphics card and only the result shipped
> back to the CPU.
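>
> A sketch of that approach for a single pixel, assuming an existing
> CGImageRef “cgImage”, an 8-bit RGBA layout, and x/y measured from the
> top-left of the image:
>
>     unsigned char pixel[4] = {0};
>     CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
>     CGContextRef ctx = CGBitmapContextCreate(pixel, 1, 1, 8, 4,
>         space, kCGImageAlphaPremultipliedLast);
>     CGColorSpaceRelease(space);
>     size_t h = CGImageGetHeight(cgImage);
>     // Offset the image so the wanted pixel lands on the 1x1 context
>     // (Quartz puts the origin at the bottom-left, hence the flipped y).
>     CGContextDrawImage(ctx,
>         CGRectMake(-(CGFloat)x, -(CGFloat)(h - 1 - y),
>                    CGImageGetWidth(cgImage), h),
>         cgImage);
>     CGContextRelease(ctx);
>     // pixel[0..3] now hold premultiplied R, G, B, A.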
>
> In UIImage, they dropped support for the direct access methods.
>
>
> CIImage: for CoreImage, a virtual image
>
> "Although a CIImage object has image data associated with it, it is not an
> image. You can think of a CIImageobject as an image “recipe.” A CIImage
> object has all the information necessary to produce an image, but Core
> Image doesn’t actually render an image until it is told to do so. This
> “lazy evaluation” method allows Core Image to operate as efficiently as
> possible.”
>
> The recipe is executed if you draw the CIImage on a context or initialize
> another type of image with it.
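>
> A minimal sketch, assuming an existing CIImage “input” and a CIContext
> “context”:
>
>     CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
>     [blur setValue:input forKey:kCIInputImageKey];
>     [blur setValue:@4.0 forKey:kCIInputRadiusKey];
>     CIImage *output = [blur valueForKey:kCIOutputImageKey];
>     // Nothing has been rendered yet; this draw executes the recipe.
>     [context drawImage:output
>                 inRect:CGRectMake(0, 0, 200, 200)
>               fromRect:[input extent]];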
>
> I personally know very little about CIImage, have only used it a couple of
> times. Are there interesting uses for this outside of CoreImage?
>
>
> Marcel
_______________________________________________
Cocoa-dev mailing list (email@hidden)
Please do not post admin requests or moderator comments to the list.
Contact the moderators at cocoa-dev-admins(at)lists.apple.com
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden