Re: ObjC in time-critical parts of the code
- Subject: Re: ObjC in time-critical parts of the code
- From: Shawn Erickson <email@hidden>
- Date: Fri, 16 Jan 2009 09:26:31 -0800
On Fri, Jan 16, 2009 at 8:00 AM, Michael Ash <email@hidden> wrote:
> To repeat: something else is going on.
If I had to guess (and we do have to guess, since the OP didn't post any
actionable information) the following possibilities come to mind...
1) logic bug in the code he replaced
2) memory management issue in the code he replaced that caused memory
pressure or maybe triggered GC (he didn't state whether he was using
garbage collection or not)
3) some other application task stalling the event loop, now not
affected by the code he replaced (he didn't state what was driving the
rendering)
4) rendering tasks ran long enough to bump up against the scheduler
quantum, given other pressure in the system
5) etc.
Jens,
Knowing the Objective-C dispatch implementation, I can say with a high
level of certainty that you are blaming the wrong thing (it won't cause
a delay anywhere close to what you reported; a couple of orders of
magnitude lower at worst) and that your code change corrected or avoided
something you happened to be doing wrong or inefficiently, and/or a bad
assumption you made.
Shark will help you understand hot code paths with a time profile, and
with a Shark system trace you can see how you are interacting with the
system, including thread context switches. With dtrace and/or
Instruments you could actually trigger on every entry into and exit
from messages sent to your render object to build up timing
information, etc.
Finally, the accuracy of Microseconds() is on the order of microseconds
or tens of microseconds (and who knows its exact jitter behavior).
Message sends are on the order of nanoseconds (for a hot message) to
tens of nanoseconds (for a cool one). Something like UpTime() can get
you under a microsecond in accuracy, but that is still easily an order
of magnitude too coarse to measure a single message send. You will do
nothing but mislead yourself if you try to use something with
microsecond accuracy to understand something that takes nanoseconds.
(CPU performance counters can get you better resolution, but...)
Also, the time spent in UpTime() or Microseconds() itself is on the
order of tens of nanoseconds, which is far longer than a simple message send.
[0:562] > gcc -framework Foundation -framework CoreServices Test.m; ./a.out
...
2009-01-16 09:08:34.647 a.out[42577:10b] average of 10000 calls to
Microseconds: 91 ns
2009-01-16 09:08:34.648 a.out[42577:10b] average of 10000 calls to UpTime: 46 ns
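For anyone who wants to reproduce this, a minimal Test.m along the
following lines will give you the same kind of numbers. This is just a
sketch, not the exact file I ran; in particular the outer timing via
mach_absolute_time() and the exact NSLog wording are illustrative.

#import <Foundation/Foundation.h>
#import <CoreServices/CoreServices.h>
#include <mach/mach_time.h>

int main(void)
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    const int count = 10000;
    mach_timebase_info_data_t info;
    UnsignedWide us;
    uint64_t start, end;
    double scale;
    int i;

    /* Conversion factor from mach_absolute_time() ticks to nanoseconds. */
    mach_timebase_info(&info);
    scale = (double)info.numer / (double)info.denom;

    /* Average the cost of Microseconds() over 'count' back-to-back calls. */
    start = mach_absolute_time();
    for (i = 0; i < count; i++) Microseconds(&us);
    end = mach_absolute_time();
    NSLog(@"average of %d calls to Microseconds: %.0f ns", count,
          (end - start) * scale / count);

    /* Same measurement for UpTime(). */
    start = mach_absolute_time();
    for (i = 0; i < count; i++) (void)UpTime();
    end = mach_absolute_time();
    NSLog(@"average of %d calls to UpTime: %.0f ns", count,
          (end - start) * scale / count);

    [pool drain];
    return 0;
}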
You really need to sample across a batch of operations and/or record
time stamps at important locations and then average the deltas, so that
timer granularity doesn't mislead you (a sketch of this batching
approach follows below). Again, it is best to leverage profiling tools
to help track down how your application is spending its time and/or
affecting (or being affected by) the system.
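To make the batch-and-average idea concrete, here is a sketch applied
to a plain Objective-C message send. The Probe class, its -doNothing
method, and the one-million iteration count are made up for
illustration; they are not from the original discussion.

#import <Foundation/Foundation.h>
#include <mach/mach_time.h>

@interface Probe : NSObject
- (void)doNothing;
@end

@implementation Probe
- (void)doNothing { }   /* empty method; we only want the dispatch cost */
@end

int main(void)
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    Probe *probe = [[Probe alloc] init];
    const int count = 1000000;
    mach_timebase_info_data_t info;
    uint64_t start, end;
    double nanos;
    int i;

    mach_timebase_info(&info);

    /* One timestamp pair around the whole batch, then divide by the count. */
    start = mach_absolute_time();
    for (i = 0; i < count; i++)
        [probe doNothing];          /* one Objective-C message send per pass */
    end = mach_absolute_time();

    nanos = (double)(end - start) * info.numer / info.denom;
    NSLog(@"average per message send: %.1f ns over %d sends",
          nanos / count, count);

    [probe release];
    [pool drain];
    return 0;
}

Built the same way as the Test.m above (you can drop -framework
CoreServices here), the per-send number should land in the single-digit
to tens-of-nanoseconds range, in line with the figures I quoted.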
...as to why we are pushing back on your statements...
I want to make sure others who wander into this thread in the future
don't get misled and make the wrong decisions. Also we are trying to
push you to understand what your problem truly was (...guess it is now
too late to help you avoid rewriting code, which was my first goal).
-Shawn
p.s. Samples were taken on a first-generation 2.66 GHz Mac Pro, but a G5
wouldn't be drastically different from my Mac Pro in message dispatch
speed.