Re: Core Graphics: Is it better to up-sample or down-sample images when drawing into a rect?
- Subject: Re: Core Graphics: Is it better to up-sample or down-sample images when drawing into a rect?
- From: Jeff Szuhay <email@hidden>
- Date: Wed, 24 Aug 2016 12:59:49 -0700
> On Aug 24, 2016, at 10:37 AM, Jean-Daniel Dupas <email@hidden> wrote:
>
>> Moreover, the performance will greatly depend on the sampling algorithm you choose. CGImage provides a couple of algorithms with different tradeoffs (see CGContextSetInterpolationQuality() and the NSImage and UIImage equivalents).
>
> Just for the record, here is a link covering the different techniques available on OS X to do that:
>
> http://nshipster.com/image-resizing/
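For concreteness, that interpolation tradeoff is a one-line context setting made before the scaled draw. A minimal Swift sketch, with function and parameter names of my own invention:

    import CoreGraphics

    func drawScaled(_ image: CGImage, into rect: CGRect, in ctx: CGContext) {
        ctx.saveGState()
        // Pick the resampling tradeoff: .low/.none is cheaper, .high looks best.
        ctx.interpolationQuality = .high
        // Core Graphics resamples the image to fit rect using that quality.
        ctx.draw(image, in: rect)
        ctx.restoreGState()
    }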
Yeah, but CoreImage is a different kettle of fish. CoreImage has deprecated creating a CIImage from a CGLayer, in favor of bitmaps and static image sources.
I’m writing an app for Mac OS X/macOS using primarily CoreGraphics and Quartz2D with a smattering of image filters for special effects.
I draw my images (clocks) into a “reference”-sized rectangle—for simplified position calculations—and then have CoreGraphics scale that into the destination view rect.
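In rough Swift terms, and assuming a 1000-point reference space and a hypothetical drawClockFace of my own naming, the approach is:

    import CoreGraphics

    let referenceSize = CGSize(width: 1000, height: 1000) // fixed design space

    func drawClockFace(in ctx: CGContext) {
        // All position math is done in reference coordinates.
        ctx.strokeEllipse(in: CGRect(origin: .zero, size: referenceSize))
    }

    func drawClock(into ctx: CGContext, destRect: CGRect) {
        ctx.saveGState()
        // Map the reference space onto the destination view rect;
        // CoreGraphics applies the scale when it rasterizes.
        ctx.translateBy(x: destRect.minX, y: destRect.minY)
        ctx.scaleBy(x: destRect.width / referenceSize.width,
                    y: destRect.height / referenceSize.height)
        drawClockFace(in: ctx)
        ctx.restoreGState()
    }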
On my rather old but quite functional 2007 MBP, I notice a considerable difference in CPU use when the clocks are on the laptop’s main screen (low CPU) versus on a 2nd monitor (high CPU—about 4x). I’ve surmised that the main screen is GPU-assisted whereas the 2nd monitor is not. Same clock, just on different screens. I have yet to test whether this will be true on newer MacBook Pros with different video cabling/interfaces.
I’m in the process of converting my clocks to draw exclusively into offscreen layers (instead of drawing their parts into the device context directly), composing them only when needed, and finally drawing into the view’s graphics context. This, as we’ll soon see, depends upon how efficient CGContextDrawLayerInRect is when the source layer and the target rect are of different sizes—scaling in a single call.
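Sketched in Swift (the names are placeholders of mine; the point is the single scaled call at the end):

    import CoreGraphics

    func makeClockLayer(matching ctx: CGContext, size: CGSize) -> CGLayer? {
        // Offscreen layer sharing the destination context's characteristics.
        guard let layer = CGLayer(ctx, size: size, auxiliaryInfo: nil),
              let layerCtx = layer.context else { return nil }
        // Draw the clock parts once into the layer, not the device context.
        layerCtx.strokeEllipse(in: CGRect(origin: .zero, size: size))
        return layer
    }

    func composite(_ layer: CGLayer, into ctx: CGContext, destRect: CGRect) {
        // CGContextDrawLayerInRect: scales the layer to destRect in one call.
        ctx.draw(layer, in: destRect)
    }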
Now I’m wondering if applying an intermediate transformation to scale the layer to the same size as the destination and then just calling CGContextDrawLayerAtPoint would be more efficient.
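That alternative would look roughly like this (again a sketch; whether the CTM scale beats the in-rect scale is exactly what I need to measure):

    func compositeViaCTM(_ layer: CGLayer, into ctx: CGContext, destRect: CGRect) {
        ctx.saveGState()
        // Scale the context so the layer's native size maps onto destRect...
        ctx.translateBy(x: destRect.minX, y: destRect.minY)
        ctx.scaleBy(x: destRect.width / layer.size.width,
                    y: destRect.height / layer.size.height)
        // ...then CGContextDrawLayerAtPoint draws with no per-call rect scaling.
        ctx.draw(layer, at: .zero)
        ctx.restoreGState()
    }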
And, yeah, I’m trying to trade off CPU work for GPU work, and get more work done on the GPU.