Re: CIImage slower than NSImage?


  • Subject: Re: CIImage slower than NSImage?
  • From: Kenny Leung <email@hidden>
  • Date: Tue, 29 Nov 2005 15:00:11 -0800

Hi Scott.

Yes, I've now learned these lessons the hard way. I would not be using CoreImage if all I wanted to do was geometrically transform the image. In fact, I was doing just fine using NSImage. NSImage performance is amazing - almost unbelievable at times.

Once I crossed the line into doing image processing on the images I was handling (auto-ranging and enhancement of X-rays), it seemed natural to turn all the NSImages into CIImages. I guess I fell into the trap of believing that cool new technology was the panacea for all my problems. While CoreImage is definitely the bee's knees for doing image processing, one should still beware when using it. For instance, CoreImage's general performance in processing images may lead you to believe that it can do *everything* in real time, and this belief may greatly affect the way you design your application. Later, when you find out that what you want to do can't be done in real time, you may have to redesign your application. Or worse, you may ship your application to a customer and have them find out that your software doesn't live up to its billing because they don't have a powerful enough GPU.

OK, enough soap box. These are the concrete things I have learned with my week or so attempt at optimizing CoreImage usage:

- Geometric transforms in general, and rotations in particular, are slower with Core Image than with NSImage. If possible, render your CIImage into an NSImage, then composite the NSImage on-screen.
- There are thresholds tied to the on-screen size of an image. Below a particular threshold, operations are fast; above it, they can become much, much slower. My guess is that this has to do with how much video memory is available and the need to swap textures into and out of video memory.
- There are bugs in either Core Image or the GPU driver that can cause "holes" in images when they reach large on-screen sizes.
- Using the CIContext API to composite CIImages on-screen is, in general, much faster than using the AppKit additions to CIImage (the drawAtPoint: and drawInRect: methods), especially when you have transformed the current context.
- Just because your GPU is Quartz Extreme capable does not mean it is Core Image capable. Check System Profiler to see whether it is Core Image enabled.
- Not all GPUs are created equal. You are at the mercy of the GPU you have available, so things that look fine on one machine may not look so good on another machine with a similar CPU and memory.
- Resist the urge to send integer images to Core Image to save memory; go straight to floating point.
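As a concrete sketch of the drawing suggestions above, in 10.4-era Cocoa (the names processedImage and destRect are placeholders, not from the original message):

```objc
// One-time: wrap the processed CIImage in an NSImage so that later
// rotations and composites take NSImage's faster path.
NSCIImageRep *rep = [NSCIImageRep imageRepWithCIImage:processedImage];
NSImage *cached = [[[NSImage alloc] initWithSize:[rep size]] autorelease];
[cached addRepresentation:rep];

// In drawRect:, composite the cached NSImage (NSZeroRect = whole image).
[cached drawInRect:destRect
          fromRect:NSZeroRect
         operation:NSCompositeSourceOver
          fraction:1.0];

// Alternatively, draw the CIImage directly through the view's CIContext,
// which is generally faster than the AppKit additions to CIImage.
CIContext *context = [[NSGraphicsContext currentContext] CIContext];
[context drawImage:processedImage
           atPoint:CGPointMake(0, 0)
          fromRect:[processedImage extent]];
```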


-Kenny


On Nov 29, 2005, at 7:22 AM, Scott Thompson wrote:



On Nov 22, 2005, at 09:00, Kenny Leung wrote:


Hi All.

I am using CoreImage in my application, and I'm finding that geometric operations, particularly rotation, are much slower with CIImage than NSImage. In fact, rotating an NSImage causes no noticeable slowdown while rotating a CIImage causes a very noticeable slowdown.

Also, applying a transform to a CIImage as a filter yields different results than when the transform is applied to the current graphics context.

Can someone shed some light on this?

Thanks!

-Kenny

When you simply rotate the context, the system just uses the same representation of the image and performs sampling to create the rotated image. Core Image goes through many more steps.
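To make the contrast concrete, the two rotation paths might look like this (nsImage, ciImage, and the 45-degree angle are illustrative placeholders): rotating the context just re-samples the existing representation, while the filter route sends the image through Core Image's full pipeline.

```objc
// Path 1: rotate the graphics context and draw the NSImage as-is.
NSAffineTransform *transform = [NSAffineTransform transform];
[transform rotateByDegrees:45.0];
[transform concat];
[nsImage drawAtPoint:NSZeroPoint
            fromRect:NSZeroRect
           operation:NSCompositeSourceOver
            fraction:1.0];

// Path 2: rotate through a Core Image filter; the result goes through
// CI's float-pixel processing pipeline before it is drawn.
CIFilter *rotate = [CIFilter filterWithName:@"CIAffineTransform"];
[rotate setValue:ciImage forKey:@"inputImage"];
[rotate setValue:transform forKey:@"inputTransform"];
CIImage *rotated = [rotate valueForKey:@"outputImage"];
```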


On Nov 22, 2005, at 11:54 AM, Kenny Leung wrote:

Also, the 128-bit floating point version of an image performs better than the 32-bit integer version!

-Kenny

That doesn't surprise me much.

Core Image processes its images in a floating point space. Conceptually, it takes the source pixels, color converts them to a device-independent RGB space that uses floating point pixels, performs its processing, and then "renders the results down" to the destination.

Your performance observation is likely caused by the need to convert 32-bit integers to a floating point representation. In other words, the floating point version of the image requires less "mangling" than the 32-bit integer version.
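This suggests handing Core Image float pixels up front. A minimal sketch, assuming floatPixels already holds width × height RGBA pixels with 32-bit float components and colorSpace is a CGColorSpaceRef the caller provides (all names here are hypothetical):

```objc
// Wrap the float pixel buffer without copying it; Core Image can use
// kCIFormatRGBAf data directly, skipping the integer-to-float step.
NSData *bitmap = [NSData dataWithBytesNoCopy:floatPixels
                                      length:rowBytes * height
                                freeWhenDone:NO];
CIImage *floatImage = [CIImage imageWithBitmapData:bitmap
                                       bytesPerRow:rowBytes
                                              size:CGSizeMake(width, height)
                                            format:kCIFormatRGBAf
                                        colorSpace:colorSpace];
```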

Core Image is remarkably fast at providing complex pixel processing. For simpler things, however, the careful steps it goes through to maintain the fidelity of the pixels it manages add too much overhead. If you want to do something simple, like rotate an image, consider using the framework images (CGImage and NSImage) or vImage in the Accelerate framework.
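For the simple-rotation case, a vImage sketch might look like the following; the source and destination buffers are assumed to be 8-bit ARGB pixels the caller has already allocated (all variable names are placeholders):

```objc
#import <Accelerate/Accelerate.h>

// vImage_Buffer fields are {data, height, width, rowBytes}.
vImage_Buffer src = { srcPixels, height, width, rowBytes };
vImage_Buffer dst = { dstPixels, height, width, rowBytes };
Pixel_8888 background = { 0, 0, 0, 0 };  // fill for corners the rotation uncovers

vImage_Error err = vImageRotate_ARGB8888(&src, &dst, NULL,
                                         M_PI / 4.0f,  // angle in radians
                                         background,
                                         kvImageBackgroundColorFill);
if (err != kvImageNoError) {
    NSLog(@"vImageRotate_ARGB8888 failed: %ld", (long)err);
}
```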

Scott




Cocoa-dev mailing list