Re: Quickly drawing non-antialiased, discrete, pixels

  • Subject: Re: Quickly drawing non-antialiased, discrete, pixels
  • From: Bill Bumgarner <email@hidden>
  • Date: Sun, 5 Jan 2003 14:04:15 -0500

On Sunday, Jan 5, 2003, at 12:58 US/Eastern, Marcel Weiher wrote:
On Saturday, January 4, 2003, at 03:19, Bill Bumgarner wrote:
Quick question: How does one quickly draw lots and lots of discrete, non-contiguous, pixels into an NSView that is not anti-aliased?

Maybe with an image-mask? Sadly, there is no Cocoa class for this, so you would have to drop down to CG.

Have to wrap those APIs... shouldn't be *that* hard, but is not straightforward, either.

I'm running an algorithm that produces several hundred thousand points. The points are divided into sets by color-- i.e., if I define a color palette of 1,000 colors, then I will get N hundred consecutive points of the first color, N hundred consecutive points of the second color, etc.

I'm using the color change as an 'end frame' marker. That is, after each color change, I plot all the points of the previously produced color.
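The "color change = end of frame" idea above can be sketched in plain Python (the thread's code is PyObjC, but the data and names here are purely illustrative, not from the original program):

```python
from itertools import groupby

# Hypothetical point stream: (x, y, color_index) tuples, arriving in
# long consecutive runs of a single color, as described above.
points = [(0, 0, 5), (1, 0, 5), (2, 1, 5), (0, 3, 6), (4, 4, 6), (9, 9, 7)]

# Group consecutive points by color; each group is one "frame" to plot
# as soon as the color changes.
frames = [(color, [(x, y) for x, y, _ in run])
          for color, run in groupby(points, key=lambda p: p[2])]
```

`groupby` only merges *adjacent* items with the same key, which matches the stream's structure: each color transition closes one frame and opens the next.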

How quickly are "frames" produced? Might it be better to simply mark the image as needing display and only update a fixed number of times?

A frame could be considered a single point-- if I don't do any drawing, the code produces about 100,000 points in less than 15 seconds on a TiBook 667. Initially, I defined a 'frame' as a color transition with a 1,000-color palette; this yielded 100 'frames' in 15 seconds-- assuming no drawing.

I have since moved to using perform:afterDelay: with an initial delay of 0.5 seconds to get something on the screen quickly, and 3-second intervals for updating after that. The calculation loop also runs on timers, so each 'frame' (as defined by color transitions) is produced as part of the main event loop as quickly as possible (fast enough that the app remains responsive as long as the user isn't typing-- since there are no text boxes, that isn't a problem).
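The update policy described above -- an initial short delay to get something on screen, then redraws coalesced to one every few seconds -- can be sketched clock-agnostically. The class and parameter names here are hypothetical, not from the original code:

```python
import time

class RedrawThrottle:
    """Minimal sketch: allow one redraw after first_delay, then at most
    one redraw per interval thereafter."""

    def __init__(self, first_delay=0.5, interval=3.0, clock=time.monotonic):
        self.clock = clock
        self.interval = interval
        self.next_due = clock() + first_delay

    def should_redraw(self):
        # Call this each time a frame is ready; redraw only when due.
        now = self.clock()
        if now >= self.next_due:
            self.next_due = now + self.interval
            return True
        return False
```

Passing a fake `clock` makes the policy testable without real sleeps; in the app, the timer callback would check `should_redraw()` before marking the view as needing display.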

NSBezierPath does not seem to be the right way to go, as NSBP does not perpetuate the concept of a 1x1 pixel -- a point in NSBP is truly a point, a single location with zero width/height. And it is slow-- not slow for what it is doing, but very slow for something like trying to set the color of an individual pixel within an NSView. (Actually, I was slapping each color's set of points into an NSBP instance and rendering each path into an NSImage.
Hmm, that might run afoul of NSBezierPath's "lots of combined paths" performance bug. I think it does the self-intersection tests even when you're not asking for anti-aliasing. Have you tried one of the rect-fill functions instead? NSRectFillList(const NSRect *rects, int count) springs to mind.

For a rect that represents a single point at 100,100, would I use ((100,100), (1,1)) or ((99.5,99.5), (1,1)) or something else?
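One plausible reading, sketched below as an assumption rather than a definitive answer: with anti-aliasing off, integer Quartz coordinates fall on pixel boundaries, so a fill of ((100,100),(1,1)) covers exactly one pixel with no half-unit offset (the 0.5 offset matters for stroking lines, not for filling rects). The helper names are hypothetical; the rect-fill function name is the one mentioned in the thread:

```python
# Assumption: filling the unit rect at integer origin (x, y) covers
# exactly the pixel "at" (x, y) when anti-aliasing is disabled.
def pixel_rect(x, y):
    """Rect ((x, y), (1, 1)) covering one device pixel."""
    return ((float(x), float(y)), (1.0, 1.0))

def rects_for_points(points):
    """Batch one color's points into a rect list, the shape of input a
    rect-fill call such as NSRectFillList would take (plus a count)."""
    return [pixel_rect(x, y) for x, y in points]
```

Batching all of one color's points into a single rect-fill call amortizes the per-call overhead that made per-point NSBezierPath drawing so slow.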

The NSImage would accumulate the rendered NSBPs and the image was then composited into the view as needed.)

NSBitmapImageRep works-- I tried accumulating the points/colors in a single NSBIR instance and slapping it on screen in the View's drawSelf:-- but is even slower than NSBP. I'm using a 24bit deep image on a 'millions of colors' screen, so the to-screen resampling should be minimal and it is still about 4 times slower than drawing the same image with NSBP.

Actually, you are always getting resampling with NSBitmapImageRep, because none of the native byte orders (ARGB for both the 32-bit (8/8/8/8) and 16-bit (1/5/5/5) depths) are among the supported NSBitmapImageRep layouts (RGBA for the 32-bit and 16-bit (4/4/4/4) depths). So for better performance, you need to drop down to CoreGraphics.

Oh-- interesting and truly unfortunate. That sounds like a bug to be filed-- hopefully, it has already been filed n times over. The AppKit's image classes should definitely support the native byte ordering of the underlying blitting layer.
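The byte-order mismatch just described can be made concrete with a toy swizzle: if the blit layer wants ARGB but the image rep stores RGBA, every pixel has to be rotated on the way to the screen. A pure-Python sketch (illustration only -- the real conversion happens inside the framework, in native code):

```python
def rgba_to_argb(buf):
    """Rotate each 4-byte RGBA pixel into ARGB order."""
    out = bytearray(len(buf))
    for i in range(0, len(buf), 4):
        r, g, b, a = buf[i:i + 4]
        out[i:i + 4] = bytes((a, r, g, b))
    return bytes(out)
```

One touch of every byte per frame is exactly the kind of hidden per-pixel cost that storing the image in the native layout would avoid.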

In any case, I'm actually using plain RGB with no alpha channel. It takes 20 seconds to render 100,000 points with 2-second update intervals.

Moving to RGBA from RGB-- planar arrangement-- drawing time increased to 22 seconds; a 10% performance hit.

Using RGB in a non-planar (interleaved-- RGBRGBRGBRGB) mode caused drawing time to drop to 16 seconds; a 20% performance increase. RGBA in non-planar mode yields the same rendering speed.

So-- zipping together planar images for display is more expensive than rendering interleaved images (makes sense).
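The cost being described -- zipping planar channels together at display time -- looks like this in miniature (illustrative sketch; the framework does this in native code, not Python):

```python
def interleave_rgb(r_plane, g_plane, b_plane):
    """Merge three planar channels into one interleaved RGBRGB... buffer,
    touching every pixel once -- work that interleaved storage avoids."""
    out = bytearray()
    for r, g, b in zip(r_plane, g_plane, b_plane):
        out += bytes((r, g, b))
    return bytes(out)
```

Interleaved data is already in scan order, so it can be handed to the blit directly; planar data forces this extra per-pixel merge pass, consistent with the 16 s vs. 20-22 s timings above.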

Would it be possible to push a custom color space into the image rep that represents ARGB data (and gain any performance that way)?

Compositing the whole image for each point plotted, or after each color change? Are the plotted points geometrically close? Clipping to just the affected region helps if it is significantly smaller than the whole image.

At the moment, it is the whole image-- since the image generally takes up most of the view, clipping isn't going to help.

What are my alternatives from here? I would like to avoid OpenGL, if possible.

Alas, OpenGL seems to be the (only) way to go if you want really great performance.

That's one can of worms this vacation cannot handle having opened...

Some more context: This entire exercise is being done in PyObjC while I'm on vacation (this is vacating for me :-). A friend of mine and I were having a discussion comparing PyObjC, Cocoa, wx*, wxPython, and other 'high level' GUI development environments. At random, he tossed a draw-a-bunch-of-discrete-points-on-screen example my direction that used PythonCard (wxPython) and I ported it to PyObjC. The drawing performance is abysmal and it isn't because of the Python VM. It is most likely because of my ignorance.

Or even more likely because of Quartz...

Yes. It strikes me that there are a number of fairly major opportunities for optimization within Quartz. Not a criticism -- it is a young technology and it is clear that Apple has been optimizing Quartz over time. Each major dot release of OS X has brought significant performance improvements in the graphics layer.

b.bum
_______________________________________________
cocoa-dev mailing list | email@hidden
Help/Unsubscribe/Archives: http://www.lists.apple.com/mailman/listinfo/cocoa-dev
Do not post admin requests to the list. They will be ignored.

  • Follow-Ups:
    • Re: Quickly drawing non-antialiased, discrete, pixels
      • From: Marcel Weiher <email@hidden>
    • Re: Quickly drawing non-antialiased, discrete, pixels
      • From: Cameron Hayne <email@hidden>
  • References:
    • Re: Quickly drawing non-antialiased, discrete, pixels
      • From: Marcel Weiher <email@hidden>
