Re: multidimensional arrays
- Subject: Re: multidimensional arrays
- From: Pandaa <email@hidden>
- Date: Wed, 16 Mar 2005 00:29:55 +0100
On 2005-03-15 at 23:33, Thomas Davie wrote:
I never said that it wasn't important to optimize or that it wasn't
OK to discuss optimizations.
Sorry, you didn't. It's just the general attitude on some lists and
forums that got to me.
Fair enough – my personal take on the subject is... Write the god damn
app, ignore optimizations.
When and only when it works, try to run it through the profiler if and
only if it runs slowly on some systems.
Yes. I agree completely. A slow and usable app is a lot better than a
fast application that never reaches any users.
It also takes less work to first write a correct but inefficient
implementation and then optimize it than to try to write a highly
efficient implementation from scratch. But data layout is something
that can be difficult to change later, so it makes sense to think
about it from the outset.
But what you demonstrated is something that I would barely count as
being in the category of optimizations. Optimizations are things
that bring the complexity of your program down, not things that gain
you a clock cycle here and a clock cycle there.
After you've done everything you can to reduce your program
complexity, these kinds of optimization are what's left that you can
do and may give significant gains. It's not one thing or the other,
one can do both. Most of the time, there are better things to do but
sometimes it's worth the effort.
In the end, it's all about that clock cycle here and there. They add
up.
While I agree to a certain extent, I have never seen a case where this
kind of optimization can actually gain you anything significant other
than when you have *massive* data sets
From my perspective, datasets are usually massive - and what limits
the size of the datasets is the speed of the application. A faster
implementation means you can process more data, have higher-quality
results, or give more interactive user feedback.
– if you get to a nice low-order (preferably quadratic) polynomial
algorithm, then you're likely to gain much more than sitting there with
an exponential algorithm with a very highly optimized loop body.
Yes, unless the lower-order algorithm is less suited to efficient
implementation at your data size. And once you have changed your
algorithm, you're back to lower-level optimizations to improve
efficiency further.
But even that is not what I'm really trying to say - what I'm really
trying to convey is that in general the kind of optimization that
saves you a clock cycle here or there is also the kind that makes your
code harder to read. I tend to say that it's better for my code to be
readable and run on 99% of systems than for my code to be totally
unreadable/unmaintainable and run on 100% of systems.
I think there's an important distinction to make here between
high-level application code, which is mostly architectural, and the
lower-level code that actually does the work your app is built around.
In higher-level code, clarity is the most important thing, but in
lower-level code, processing speed is critical.
But optimization is not only about making an existing app run faster,
it also provides room to extend the application! This is the best way
to use a faster computer - to give extended visualization and user
feedback, and more robust and intelligent processing.
But if you really wanted to bring this up, writing code that looks
nice has very little to do with whether you're writing efficient
code or not. The fastest known algorithm has probably already been
selected, or its design requires deep specialist knowledge of
topics very different from computer programming. Cache-tuning and
other optimizations can usually improve your speed by a rather
large integer factor - such as going from 20 seconds to less than
one second to process a large data set.
Absolutely, but eliminating one multiply to go from
array[x][y]
to
*(array + x * kYSize + y)
Make that array[x*ysize+y], where ysize was a variable.
is a particularly ugly hack and it is debatable whether it even
gains you anything.
It's not debatable, it's a question for profiling. It's always been a
locally significant gain in my cases. The denser syntax will only be
worth the gain in tight loop nests that are frequently run.
But probably not worth as much as getting rid of the tightly nested
loops all together and using a better algorithm ;)
Of course, but that is often not possible. Generally, you will need at
least one pass over the data. If you can do that, that one remaining
loop nest will still benefit from performance tuning.
If you know the field of your application well, you'll usually be using
the best known practical algorithms (unless they are patented). And
developing better algorithms could be a several-year research project,
if it is possible at all.
. email@hidden . . www.synapticpulse.net .
_______________________________________________
Cocoa-dev mailing list (email@hidden)