Re: Best drawing technology for audio waveforms, envelopes, etc.?
- Subject: Re: Best drawing technology for audio waveforms, envelopes, etc.?
- From: Uli Kusterer <email@hidden>
- Date: Mon, 25 Jun 2007 10:11:20 +0200
On 23.06.2007, at 21:35, Hans Kuder wrote:
I'm just starting a project along the lines of an audio editor that
will be displaying lots of audio waveforms, user-drawn envelopes,
frequency spectra, and so on. The central thrust of the project
involves the concept of "overlays" - both visual and auditory - on
sections of the sound file. So my waveform displays have to show
semi-transparent layers and so on.
My question is - what would be the best technology to accomplish
this? Quartz 2D, an NSView subclass, or something else?
Hans,
I don't think it'll make much of a difference. If you're mostly
drawing static images and just compositing one over the other, any
drawing API will do just fine. If your basic shapes are lines, curves
or outlines, you'll probably want to use some form of Quartz (NSView
drawing is Quartz, too, the main difference being that the Cocoa API
is a little easier to use from an object-oriented language). The
ground rules of view drawing apply, of course: take advantage of
drawRect:'s parameter (and of -getRectsBeingDrawn:count: and
-needsToDrawRect:, which have since been added) to only redraw the
parts that actually need to be redrawn. The same goes for
invalidating: use setNeedsDisplayInRect: rather than plain
setNeedsDisplay:.
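Roughly along these lines (the -sampleRangeForRect: etc. method
names are made up here, just to illustrate the idea):

- (void)drawRect: (NSRect)dirtyRect
{
    // Only walk the samples that fall inside the rect Cocoa hands
    // us, not the whole file:
    NSRange  visibleSamples = [self sampleRangeForRect: dirtyRect];
    [self drawWaveformForSampleRange: visibleSamples];
}

- (void)moveSelectionFrom: (NSRect)oldRect to: (NSRect)newRect
{
    // Invalidate only the pixels that actually changed:
    [self setNeedsDisplayInRect: oldRect];
    [self setNeedsDisplayInRect: newRect];
}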
NSView also has -scrollRect:by:, which moves the existing pixels
(which can possibly happen on the graphics card) so you only have to
redraw the newly-exposed parts. You'll want to use that to reduce the
amount of unnecessary drawing going on when showing what is being
played, or when redrawing because your waveform has been scrolled.
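E.g. for following the playback position, something like this
(untested sketch, deltaX being the horizontal distance scrolled, in
view coordinates):

- (void)scrollContentBy: (float)deltaX
{
    NSRect  visible = [self visibleRect];

    // Copy the existing pixels over...
    [self scrollRect: visible by: NSMakeSize( -deltaX, 0.0 )];

    // ...and only mark the newly exposed strip on the right as dirty:
    NSRect  exposed = visible;
    exposed.origin.x = NSMaxX( visible ) - deltaX;
    exposed.size.width = deltaX;
    [self setNeedsDisplayInRect: exposed];
}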
Similarly, avoid doing too much compositing. Alpha-compositing
means every pixel gets looked at several times before it gets
written, which can add up quickly across several layers (while "copy"
mode only reads the source pixel once and overwrites the destination
pixel with it). If you have many layers that rarely* change relative
to each other, you may want to use an NSImage to cache the composited
result, and just draw that. In some cases you could split that cached
image into several segments, so if the user only ever looks at the
first 2 minutes, you don't draw and composite the whole 90 minutes of
a podcast.
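A rough sketch of such a cache (the waveformCache ivar and
-drawAllWaveformLayers are made-up names):

- (void)rebuildWaveformCache
{
    [waveformCache release];
    waveformCache = [[NSImage alloc] initWithSize: [self bounds].size];

    [waveformCache lockFocus];
    [self drawAllWaveformLayers];   // The expensive multi-layer compositing, done once.
    [waveformCache unlockFocus];
}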
If you have overlays over the waveform that don't scroll along with
it, or that shouldn't be highlighted along with it, it might be a
good idea to keep two NSImages, one for the overlays and one for the
waveform. Either way, avoid having to put together too many NSImages
during a single drawRect:.
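With two caches like that, drawRect: just blits them instead of
re-compositing every layer (again the ivar names are made up, and
this assumes the caches are the same size as the view):

- (void)drawRect: (NSRect)dirtyRect
{
    // The waveform is opaque, so a plain copy is enough:
    [waveformCache drawInRect: dirtyRect fromRect: dirtyRect
                    operation: NSCompositeCopy fraction: 1.0];
    // The overlays go on top with their alpha intact:
    [overlayCache drawInRect: dirtyRect fromRect: dirtyRect
                   operation: NSCompositeSourceOver fraction: 1.0];
}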
So, in short: Don't draw and don't clip wherever you can avoid it.
Not drawing will always be faster than drawing.
But what you should really do is just write the code once and then
use Shark to find out what parts are slow, then optimize those.
OpenGL, CoreAnimation etc. are mainly useful if you have lots of
layers whose images won't change, but whose positions may change.
Also note that, in general, OpenGL only does polygons and raw
bitmaps. So you won't get Quartz-style anti-aliased 2D drawing for
free, and whenever you need to draw into a texture, you need to draw
it using Quartz first (or your own direct pixel-manipulation code,
but I don't recommend that). So, OpenGL is effective if you plan on
doing actual animation, not just scrolling several composited
elements and maybe drawing an insertion mark or selection on top. I
don't think OpenGL will be of much help otherwise.
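If you do go down that route, the usual approach is to draw into a
CGBitmapContext and upload the resulting pixels as a texture. Very
roughly, with all error handling omitted (the function name is made
up; you'd need <OpenGL/gl.h> and the Quartz headers):

static GLuint CreateTextureFromQuartzDrawing( int width, int height )
{
    // Draw with Quartz into a plain memory bitmap:
    void*            pixels = calloc( height, width * 4 );
    CGColorSpaceRef  space = CGColorSpaceCreateDeviceRGB();
    CGContextRef     ctx = CGBitmapContextCreate( pixels, width, height,
                                8, width * 4, space,
                                kCGImageAlphaPremultipliedLast );

    // ... CGContext... drawing calls for the waveform would go here ...

    // Then hand the pixels to OpenGL:
    GLuint  textureName = 0;
    glGenTextures( 1, &textureName );
    glBindTexture( GL_TEXTURE_2D, textureName );
    glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                  GL_RGBA, GL_UNSIGNED_BYTE, pixels );

    CGContextRelease( ctx );
    CGColorSpaceRelease( space );
    free( pixels );
    return textureName;
}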
Anyway, always use Shark (or OpenGL Profiler, where appropriate) to
verify that what you're doing has actually made things better, not
worse. In particular, if your caches are too big, your app will
launch and load too slowly, and may have to swap caches out to disk
and thus defeat the purpose of a cache, while if you choose your
caches too small, you'll end up re-caching too frequently and burn
cycles caching stuff that'll only be needed once (each cache has an
overhead, after all). This is really something that you have to try
out and measure. There are too many factors that can influence this
to give a simple rule-of-thumb algorithm. Or maybe I'm just too lazy
to come up with that algorithm ;-)
*) "rarely" means e.g. only upon a user action. Usually, the computer
is waiting for the user, not the other way round... well, unless
someone is doing heavy lifting and maxing out the CPU in some other
way...
Cheers,
-- M. Uli Kusterer
http://www.zathras.de