Re: Instance Variable access
- Subject: Re: Instance Variable access
- From: Jim Witte <email@hidden>
- Date: Fri, 7 May 2004 14:05:05 -0500
> using direct access records 6.8ns per loop on a 1.25GHz G4 whereas
> using methods to get and set the same instance variable takes 91ns per loop
Yow!
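(Presumably those numbers come from a harness along these lines - my own
sketch, not the original poster's code; the Thing class, its accessors, and
the @public ivar are all made up for illustration:)

    #import <Foundation/Foundation.h>
    #include <stdio.h>
    #include <sys/time.h>

    @interface Thing : NSObject {
    @public
        int value;
    }
    - (int)value;
    - (void)setValue:(int)v;
    @end

    @implementation Thing
    - (int)value { return value; }
    - (void)setValue:(int)v { value = v; }
    @end

    static double now(void) {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec + tv.tv_usec / 1e6;
    }

    int main(void) {
        Thing *t = [[Thing alloc] init];
        const int n = 10000000;
        int i;

        double start = now();
        for (i = 0; i < n; i++)
            [t setValue:[t value] + 1];   /* two objc_msgSend calls per pass */
        double methods = (now() - start) / n * 1e9;

        start = now();
        for (i = 0; i < n; i++)
            t->value = t->value + 1;      /* plain load/store at a fixed offset */
        double direct = (now() - start) / n * 1e9;

        printf("methods: %.1f ns/loop, direct: %.1f ns/loop (final %d)\n",
               methods, direct, t->value);
        [t release];
        return 0;
    }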
I wonder: if Apple decided to ship a static-lib version of
objc_msgSend that could be linked into your own code, would we
suddenly see our Cocoa apps speed up...
Moreover, why are simple instance variable accesses even going through
objc_msgSend (dynamic or otherwise) *at all*? Is there some reason why
ObjC object instance vars can't be accessed the same way they are in
compiled C++ - by direct offset access inside the object struct? There
may well be thread synchronization issues to deal with, or ObjC/Cocoa
objects may be too complicated in some way for this kind of direct
access to work - I don't know how Cocoa programs work down at the
assembly level.
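As far as I can tell, the compiler already does exactly that for ivar
references *inside* a method body (reusing the made-up Thing class from the
sketch above):

    @implementation Thing
    - (void)bump {
        /* Inside a method body this compiles to direct offset access -
           effectively self->value++, a plain load/store at an offset
           known at compile time.  No objc_msgSend involved. */
        value++;
    }
    @end

    /* It's only the message sends themselves - [t bump], or accessor
       calls like [t setValue:0] - that go through the dynamic
       dispatcher. */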
If the same were to be done for accessor calls, it would (I think)
require that "simple instance var access" methods be labeled/identified
by the compiler as such, so the optimization could take place.
What exactly does objc_msgSend do, anyway? In C++, a method call is
just parameter (and object-pointer) loads and then a jump (on PPC, I
believe, the return address goes into the link register). Cocoa
can't be doing the same level of optimization, because in Crash
Reports, etc., you can see which methods were called through objc_msgSend.
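From what I've read, it conceptually has to take the receiver's isa (class)
pointer and resolve the selector to a function pointer at runtime, on every
send. A much-simplified sketch in plain C - nothing like Apple's actual
hand-tuned assembly, and all of these struct names are made up:

    #include <stddef.h>
    #include <string.h>

    typedef struct method {
        const char *selector;  /* real SELs are interned, so the runtime
                                  compares pointers rather than strings */
        void *imp;             /* pointer to the compiled method body */
    } method_t;

    typedef struct class {
        struct class *superclass;
        method_t *methods;
        size_t method_count;
    } class_t;

    typedef struct object {
        class_t *isa;          /* every object starts with its class ptr */
    } object_t;

    /* Resolve selector -> implementation by walking the class chain.
       The real runtime checks a per-class method cache first, so the
       common case is one hashed probe, not a linear search - but it's
       still more work than C++'s fixed vtable-slot load. */
    void *lookup_imp(object_t *receiver, const char *selector)
    {
        class_t *cls;
        size_t i;
        for (cls = receiver->isa; cls != NULL; cls = cls->superclass)
            for (i = 0; i < cls->method_count; i++)
                if (strcmp(cls->methods[i].selector, selector) == 0)
                    return cls->methods[i].imp;
        return NULL;           /* the real runtime would start its
                                  forwarding machinery here */
    }

That lookup would also explain why the symbol shows up in crash logs: every
message send funnels through it before reaching your method.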
> [...] the IMP pointer and dispatch through that. That should remove at
> least 2 of the hoops the CPU has to jump through to get to your method code.
IMP pointer? What's that?
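(From the NSObject docs, an IMP appears to be just a C function pointer to
the compiled method body, and -methodForSelector: hands you one. A sketch of
the caching trick the quoted poster seems to describe - Counter and
increment are made up for illustration:)

    #import <Foundation/Foundation.h>

    @interface Counter : NSObject {
        int count;
    }
    - (void)increment;
    @end

    @implementation Counter
    - (void)increment { count++; }
    @end

    void spin(void) {
        Counter *c = [[Counter alloc] init];
        SEL sel = @selector(increment);
        int i;

        /* One dynamic lookup, done once, outside the loop... */
        void (*imp)(id, SEL) = (void (*)(id, SEL))[c methodForSelector:sel];

        /* ...then a plain C function call per pass: no objc_msgSend,
           no selector lookup. */
        for (i = 0; i < 1000000; i++)
            imp(c, sel);

        [c release];
    }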
One of the "concerns" (possibly unfounded) I've always had about
high-level object-based environments like Cocoa (and highly-layered
OS's like OS X) is a nagging feeling that all the extra time for
message dispatch and other overhead really adds up (do ObjC compilers
still build stack frames even for tail calls to other routines,
possibly leading to implicit recursion?).
(Yes, this is probably old-school thinking - but when I started
writing Mac programs, it was in Pascal and C with the Toolbox on 68K,
which I figure had a lot fewer layers in it. Of course I'm not sure -
anybody have the source code for the 68K Toolbox handy? ;-)
One answer to this concern (if it is indeed real) is that "computers
will always be faster, memories will always be bigger". Yes, but
there's that old rule of thumb: "Even as Moore's law says that
computers will get faster (at some rate), the time it takes to log into
Windows always seems to stay the same." Granted, the CS lecturer who
said that was talking about Windows, but I wonder if the same would
apply to any modern, highly-layered OS.
Another argument is that you don't really *need* all that speed (the
bottleneck in most applications is not the code but the human using
it), but of course that fails in some cases, such as repeated
instance variable access in a tight loop. And if these "problems" with
actual vs. potential speed are propagated throughout the OS, then at
some point the user's experience WILL be affected. We may have already
seen this with the unresponsiveness of OS X 10.0 (which was fixed - I
don't know how - did Apple just leave the optimization flags all 'off'
for the 10.0 build or something?).
Jim
_______________________________________________
cocoa-dev mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/cocoa-dev
Do not post admin requests to the list. They will be ignored.