Re: [OT] Premature optimizations
- Subject: Re: [OT] Premature optimizations
- From: Georg Tuparev <email@hidden>
- Date: Sun, 1 Aug 2004 16:29:29 +0300
On Jul 30, 2004, at 5:01 AM, m wrote:
"Premature optimization is the root of all evil in programming."
Yes, and "goto considered harmful"... Is this favoring the point of
view that "programmers are notorious for optimizing the wrong code",
or is it against it? It has nothing to do with it.
The quote reflects the fact that, before you know the overall
behavior of your system, it is difficult to know which parts will
dominate its performance. In a complex enough system it is difficult
to accurately predict which parts should be optimized - even for
*really good* algorithmists like Knuth and Hoare.
For many years my team has been following (loosely) eXtreme
Programming. One of the things we do differently is that we optimize
during refactoring. Traditional wisdom tells us that performance
tuning should come only at the end of the release timetable, and only
if really needed. Experienced XPers even go so far as to say that the
customer needs to write performance-related stories, just as they
would write a story related to functionality. We strongly disagree.
First of all, the developer (and the customer) might be unaware of any
potential performance issues until the negative feedback of the
end-users hits back. After all, developers almost always use high-end
hardware with a lot of RAM; this will not be the case for many of the
end-users. But there is a more important factor - application design!
If you make performance a constant priority from day one, you will end
up with a different, and almost certainly better, application design.
Imagine you are working on an inspector-like UI that has several form
fields, buttons, radio-button groups, etc. (and you are not allowed to
use bindings because of compatibility requirements). All these
controls need to be kept in sync in order to prevent the user from
making unintentional mistakes (e.g. pressing the "OK" button before
all required values are typed in). Often the developer ends up writing
one method, called after every action or edit notification, that
synchronizes all the controls on the window. This "harmless" solution
actually creates many local couplings, whereas good design is
orthogonal design. If you fire up QuartzDebug you will immediately see
the problem: a flashing screen caused by repeated and mostly redundant
redraws! To eliminate this performance issue you will get rid of the
update-all-controls method, and will probably end up either with a
more orthogonal design based on lazy, validation-triggered updates -
similar to how Cocoa enables and disables NSMenuItems - or with no
synchronizing code at all, relying on the Controller layer. Another
example: how often, after closing an inspector-like panel, do we
forget to unregister for change notifications? This slows down not
only our application, but the responsiveness of the entire system.
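A minimal sketch of the lazy, validation-triggered idea (in Python for
brevity, since the pattern is language-neutral; the `Inspector` class
and its field names are hypothetical, not from any real framework):
each edit runs one cheap validation pass, and a control is "redrawn"
only when its computed state actually changes.

```python
# Hypothetical sketch of lazy, validation-triggered updates: instead
# of an update-all-controls method that redraws everything after every
# event, a single validation pass flips only the state that changed.

class Inspector:
    def __init__(self, required_fields):
        self.required = required_fields
        self.values = {}
        self.ok_enabled = False   # cached state of the "OK" button
        self.redraws = 0          # counts actual UI updates

    def field_changed(self, name, value):
        self.values[name] = value
        self._validate()          # single, cheap validation pass

    def _validate(self):
        should_enable = all(self.values.get(f) for f in self.required)
        if should_enable != self.ok_enabled:  # update only on change
            self.ok_enabled = should_enable
            self.redraws += 1


ui = Inspector(["host", "port"])
ui.field_changed("host", "example.org")  # still invalid: no redraw
ui.field_changed("port", "8080")         # now valid: one redraw
ui.field_changed("port", "9090")         # still valid: no redraw
```

Note that the redundant redraws disappear as a side effect of the
cleaner, decoupled design, which is exactly the point above.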
But often we are told that such performance tuning has no significant
impact (the 80/20 rule). The tuning I am discussing here is actually
the elimination of meaningless method calls, which for the runtime
system are analogous to unused lines of source code. These extra
method calls are harmless only until the developer changes the logic
of the program; then a harmless extra call can easily turn into a
hard-to-uncover, nasty bug. Even when performance tuning does not
improve performance, it often improves design! (Thinking about the
source code - in this case from another point of view - is always a
good thing!)
While refactoring, the developer also needs to ask herself about
performance issues from the perspective of the end-user. The question
should be: "How can the user finish this task fast?" By thinking about
possible solutions, the developer often eliminates unneeded user
events and interactions. This makes the UI design cleaner and simpler,
and we end up writing less source code. (This is definitely not the
algorithmic optimization that was the subject of the thread - but yet
another closely related optimization aspect.)
Another hidden aspect of performance is memory usage (an application
can run fast until it starts swapping). In my experience, memory-hungry
applications very often duplicate data. (This is similar to
duplication of code - I would claim that data duplication is almost as
bad as code duplication; it is one thing I was always critical about
in EOF.) But developers have (almost) no tools to find this out, and
they do not see the hidden dangers. The most trivial example is
reading data from a file into memory: now we have two copies of the
same data. If this is a file in a shared directory, a colleague of
yours could access and modify the same file, and later you could
unintentionally overwrite her modifications. If the developer instead
memory-maps and locks the file (with the intention of optimizing the
memory usage of the program), he will also prevent someone else from
losing work. Again, optimization concerns lead to a better user
experience, fewer bugs, and better design!
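To make the memory-mapping point concrete, here is an illustrative
sketch in Python (file name and contents are invented for the
example). Mapping the file lets the OS share pages instead of keeping
a second private in-memory copy; the file locking mentioned above
(e.g. fcntl on POSIX) would guard against concurrent modification and
is omitted here to keep the sketch portable.

```python
# Illustrative sketch: memory-mapping a file instead of reading it
# into a private buffer, so the OS shares pages rather than keeping
# a duplicate copy of the data in the application's memory.
import mmap
import os
import tempfile

# Create a small stand-in for the "shared" file.
path = os.path.join(tempfile.mkdtemp(), "shared.dat")
with open(path, "wb") as f:
    f.write(b"settings: v1\n")

with open(path, "r+b") as f:
    # Map the whole file; pages are paged in on demand, not copied
    # up front into a second buffer.
    with mmap.mmap(f.fileno(), 0) as view:
        data = view[:]            # read through the mapping
        assert data.startswith(b"settings")
```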
So the bottom line is: make performance a key requirement from the
beginning of any software project. Think about optimization (both
speed and memory usage) every time you refactor. Write tests to prove
your performance criteria.
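As a sketch of what such a committed performance test might look like
(the function under test and its time budget are entirely invented
here), the test simply fails when an operation exceeds its budget:

```python
# Hypothetical performance test: fail when an operation exceeds its
# time budget. The operation and the 0.5 s threshold are invented
# for this sketch; real budgets come from your own requirements.
import time

def fill_inspector(n):
    # Stand-in for the operation under test.
    return [i * i for i in range(n)]

def test_fill_inspector_budget():
    start = time.perf_counter()
    fill_inspector(10_000)
    elapsed = time.perf_counter() - start
    assert elapsed < 0.5, f"too slow: {elapsed:.3f}s"

test_fill_inspector_budget()
```

Run as part of the commit checks, such a test turns a performance
criterion into something enforceable, in the spirit of the Safari
anecdote below.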
(BTW: This all reminds me of one of the Safari talks at WWDC 2003. One
of the developers said that the team made performance a requirement
from day 1! It was forbidden to commit sources that did not pass the
performance tests.)
Georg Tuparev
Tuparev Technologies
Klipper 13
1186 VR Amstelveen
The Netherlands
Mobile: +31-6-55798196
_______________________________________________
cocoa-dev mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/cocoa-dev
Do not post admin requests to the list. They will be ignored.