Re: ObjC in time-critical parts of the code
- Subject: Re: ObjC in time-critical parts of the code
- From: Justin Carlson <email@hidden>
- Date: Sun, 18 Jan 2009 17:19:16 -0600
Michael Ash wrote:
> > I would (personally) rather avoid caching selectors and debugging/
> > maintaining a program that used that behaviour when well-tested
> > alternatives are built into another language's model.
>
> I agree that it's ugly, but it's good to have the option.
Agreed.
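For reference, selector caching amounts to hoisting a method lookup out of a hot loop and calling through the cached function pointer (in ObjC, the IMP returned by -methodForSelector:). Here is a minimal sketch in plain C; the toy method table and the names `lookup`, `sum_doubled`, etc. are hypothetical stand-ins for the runtime's machinery, not the real API:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

typedef int (*method_impl)(int);

static int double_it(int x) { return 2 * x; }
static int negate_it(int x) { return -x; }

/* Toy "method table": uncached dispatch pays a string-keyed lookup. */
static const struct { const char *name; method_impl impl; } methods[] = {
    { "double", double_it },
    { "negate", negate_it },
};

static method_impl lookup(const char *name) {
    for (size_t i = 0; i < sizeof methods / sizeof methods[0]; i++)
        if (strcmp(methods[i].name, name) == 0)
            return methods[i].impl;
    return NULL;
}

static int sum_doubled(const int *values, size_t n) {
    /* Hoist the lookup out of the hot loop: the "cached IMP". */
    method_impl cached = lookup("double");
    int sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += cached(values[i]);
    return sum;
}
```

The maintenance cost being debated is visible even here: the cached pointer silently bypasses any later change to what the lookup would resolve to.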
> My preferred approach is to use plain ObjC without such ugly
> optimizations, and then add in such optimizations later if they become
> necessary. So far they have almost never been necessary. Where they
> have been necessary, I've found them to be quite clean, at least from
> my perspective as a long-time C programmer. Here's one example of
> using dynamic-sized public ivars to store per-thread per-object data
> structures instead of storing everything in a dictionary:
>
> http://www.mikeash.com/svn/ChemicalBurn/ChemicalBurnNode.h
>
> This is a bit ugly but IMO not terribly so, and the speed gain was
> impressive. The point here is that there's no reason to be afraid that
> using ObjC will get you stuck in some performance hole that you can't
> dig out of without a rewrite. Just like any other language, do good
> high-level design and otherwise write the simplest thing that will
> work, then come back later and make the hotspots faster.
And my stance is that these languages (or dialects) interoperate well;
each has strengths and weaknesses as part of its standard language and
written conventions.
I'd take this saying and expand on it:
Just like any other language, do good high-level design. Emphasize
reuse, genericity, simplicity, expected behaviour, make designs small
and minimal (optimal) from the beginning, expect that they will be
used beyond your immediate needs. Understand what you are writing, and
choose the best tool for the program/task.
Programs written to this model tend not to need as many optimizations
or precarious/time-consuming changes after the fact.
When I write 'Understand what you are writing, and choose the best
tool for the program/task', I mean that a programmer needs to know the
environment, how their program will be compiled, and the consequences
of their choices. This doesn't imply 'dream in assembly', but it does
imply that making the wrong choices carries consequences which tend to
outweigh the time investment up front. Therefore, choosing correctly
between ObjC and C++ (or any other combination) at an object's
inception is imperative to good design. Knowing the difference is one
distinction between a good programmer and a not-so-good one.
Otherwise, it is too easy to get stuck in a trap where you're doing
extensive rewriting -- assuming one cares enough to do more about it
than cite Moore's Law. If you scroll back a few messages, you'll see
that I have been through this multiple times, and it includes code
that I have written. So I'd say a programmer *should* be wary of
design decisions and implementations which they do not fully
understand, and should attempt to understand them in order to make a
decision. Your synopsis does not go into enough detail to specify
strongly one way or the other, so don't take this as me calling you
out on the matter. One could interpret the last bit of what you've
written as 'lazy programming, until it becomes a problem', which is
unfortunately too common these days.
ObjC and C++ are different enough that there are right and wrong
choices when designing, and these choices can considerably affect how
a program runs (as I've mentioned). Assuming you have both languages
available, making all objects C++ classes or all objects ObjC classes
in a _real_ (or complex) program is a significant design flaw (IMO).
> I would place autorelease in with "object allocation", which I
> already mentioned. Autoreleasing an object just somewhat increases
> its allocation costs. There's the question of memory pressure, but if
> you encounter that, it's easily dealt with by adding inner
> autorelease pools, so it's not inevitable.
>
> I'd be really surprised if you turned up a speed difference between
> otherwise identical code using NSString and CFString or other
> toll-free-bridged Foundation/CF classes. They use the same
> implementation underneath, after all, so the only difference is
> messaging and autorelease, neither of which are going to be
> significant compared to the real work going on underneath in most
> cases.
These were not functional rewrites (in the sense of a user's
perspective), but restructurings and informed optimizations. Consider
it refactoring -- on a very large scale (multiple projects, with
hundreds of thousands of source lines changed). This tells me either
that you're skeptical and don't know the differences yourself because
you have never tried, *or* that I am terrifyingly wrong in my
observations across multiple extensive works. On one side there are
assumptions and little motivation to understand(1), even when provided
with enough detail to analyze specific points/cases. On the other
side, there is significant firsthand experience. Why should I be
motivated to prove it to you when you have more than enough to observe
what the changes actually entail? FWIW, the changes were not initially
intended to be so widespread, but we changed our tune as the results
came back and we understood more.
1) Please don't misinterpret that as an insult, or as me calling you a
bad programmer. I can tell you're knowledgeable. This comment was in
regard to the specifics of the discussion.
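On the memory-pressure point quoted above: an inner pool bounds peak
usage because each iteration's temporaries are reclaimed at the pool
boundary. A hedged sketch in C, using a toy arena with mark/release
points (illustrative only -- this is not how the ObjC runtime is
implemented, and every name here is invented for the example):

```c
#include <assert.h>
#include <stddef.h>

enum { ARENA_SIZE = 1024 };
static unsigned char arena[ARENA_SIZE];
static size_t arena_used = 0;

static void *arena_alloc(size_t n) {
    if (arena_used + n > ARENA_SIZE) return NULL; /* out of space */
    void *p = &arena[arena_used];
    arena_used += n;
    return p;
}

/* A "pool" is just the high-water mark recorded when it was opened. */
static size_t pool_push(void) { return arena_used; }
static void pool_pop(size_t mark) { arena_used = mark; }

/* Allocate per-iteration temporaries inside an inner pool and report
 * peak arena usage; popping bounds it at one iteration's worth. */
static size_t peak_with_inner_pools(size_t iterations, size_t per_iter) {
    size_t peak = 0;
    for (size_t i = 0; i < iterations; i++) {
        size_t mark = pool_push();   /* inner pool per iteration */
        arena_alloc(per_iter);       /* this pass's temporaries */
        if (arena_used > peak) peak = arena_used;
        pool_pop(mark);              /* temporaries reclaimed here */
    }
    return peak;
}
```

Without the inner push/pop, peak usage would grow with the iteration
count instead of staying flat.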
> Not sure how numerical code gets a monopoly on performance critical
> code here.... There's a whole universe of performance critical code
> out there which doesn't involve straight-up number crunching.
Well, it all resolves to bits. :^)
I offered 'numerical' simply as an example; IIRC, the word referenced
your own writing. There are a few angles to this discussion; if I am
providing Foundation -> CoreFoundation changes as an example,
hopefully a numerical monopoly is not implied. In case it wasn't
clear, the benefits extend well beyond 'numerical'.
> >> It's simply that they're not good at it, which is fine, they're
> >> not supposed to be.
> >
> > I'd say: Not good at it as a general byproduct of implementation
> > or poor design choices and careless maintenance/evolution, but not
> > specifically due to the underlying runtime/dispatcher. In other
> > words, the improper implementation of some form of an
> > 'intermediate' dispatcher within the libraries used. We already
> > know that C++ dispatch is faster than Objective C (correct?).
>
> I'm afraid I don't really understand most of this paragraph. Yes, C++
> dispatch is faster than Objective-C dispatch, roughly 3 cycles for a
> virtual function versus roughly 13 cycles for objc_msgSend when I
> tested it. It's also vastly less capable. The part I don't understand
> is everything before that point. What do you mean by "not
> specifically due to the underlying runtime/dispatcher", and "the
> improper implementation of some form of an 'intermediate'
> dispatcher"?
The last sentence was rhetorical. The rest of the paragraph referred
to source-level implementation that provides runtime dynamism to an
object (i.e. the inevitable problems with an object that tries to do
everything). In short, generally poor design decisions were made,
which you pick up on later.
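The two dispatch mechanisms being compared can be sketched
mechanically in C: a virtual call is one indirect call through a fixed
table slot, while a message send first resolves a selector to an
implementation before making the same kind of indirect call (the real
runtime caches that lookup). The names below are invented for the
sketch, and it shows mechanism only, not the cycle counts Mike
measured:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

typedef int (*impl_fn)(int);

static int twice(int x) { return x * 2; }

/* vtable-style dispatch: a fixed slot index, one indirect call. */
struct vtable { impl_fn slots[4]; };
static int vcall(const struct vtable *vt, int slot, int arg) {
    return vt->slots[slot](arg);
}

/* message-style dispatch: resolve a selector to an implementation,
 * then make the same indirect call. Real runtimes cache this lookup,
 * which is also why cached dispatch stays cheap in practice. */
struct method { const char *sel; impl_fn imp; };
static int msg_send(const struct method *list, size_t n,
                    const char *sel, int arg) {
    for (size_t i = 0; i < n; i++)
        if (strcmp(list[i].sel, sel) == 0)
            return list[i].imp(arg);
    return 0; /* a real runtime would forward or raise here */
}
```

The extra capability comes from the same place as the extra cost: the
string-keyed resolution step is what lets a message-style system add,
replace, or forward methods at runtime, which a fixed slot index
cannot do.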
> Right, that's basically all I'm saying here. Dispatch is important in
> the sorts of programs that the people on this list usually write, and
> by using ObjC you get to let Apple write a really good, fast
> implementation of it instead of writing a bad, slow implementation of
> it the way you might in a different language.
Or library design.
> > Respectfully, there is clearly more than dispatch involved. I
> > would agree that this is unrelated to the original post. It *is*
> > (IMO) related to the observation Jens had along the way, that
> > changing the implementation to plain C solved the issue.
>
> I must again disagree. If he was seeing frequent multi-millisecond
> spikes from calling an empty ObjC method then I think it's pretty
> clear that he is essentially an unreliable narrator in this story,
> and if the conversion to C made his problem go away, we still have no
> idea what particular part of that conversion was responsible. Since
> using ObjC simply could not cause the symptom he observed, the fact
> that he eliminated it by switching languages would tend to imply that
> something else, whatever was actually responsible for the problem,
> got changed as well.
>
> Mike
OK... I (again) get the feeling you're not understanding what I've
written, or that you've taken it out of context -- and/or vice versa.
Justin
_______________________________________________
Cocoa-dev mailing list (email@hidden)