Re: message to nil? (very basic question)
- Subject: Re: message to nil? (very basic question)
- From: Bill Bumgarner <email@hidden>
- Date: Mon, 21 Jan 2002 11:21:39 -0500
It's a rather touchy subject... just like there are ObjC / Java flame wars,
there are 'nil eats messages' flame wars with folks on both sides of the
fence.
Personally, I used to be a firm believer in 'nil eats messages' as a
fundamental part of any decent OO system. My opinion has changed over the
last decade+ of working with dynamic OO systems ranging from ObjC to Java
to Python to oddities that have never really enjoyed any popularity.
Mostly, that opinion has changed because of some very deep scars caused by
many hours lost figuring out what the heck was wrong with an app, only to
discover that some method somewhere deep in the bowels of the system--
often in code not my own-- was assuming that an object reference was
non-nil.
But that doesn't answer the question....
The rationale behind it is that nil-eats-messages allows for a fluidity in
object-oriented code that wouldn't otherwise be possible.
I.e. you could....
bar = [[[foo doThis] doThat] doSomethingElse];
assert(bar != nil);
... and any of -doThis, -doThat, or -doSomethingElse could return nil without
this rather compact line of code breaking. It is extremely convenient
and minimizes the number of lines of code. It is also extremely readable
(once you are used to the ObjC syntax).
However, as has been previously noted, it is also extremely difficult to
debug. Say the assert() is failing. Instead of only having to check that
the input (foo) and the return value of one method are non-nil, you now have
to check the return values of three methods. Not only that, but any one of
those three methods may not be something that could be called repeatedly.
That is, any one or all three of the methods may cause a change to the
underlying state of the object graph that cannot be trivially undone.
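For illustration, here is the same chain unrolled so that a failure pinpoints
which step returned nil without having to re-invoke anything (same
hypothetical -doThis/-doThat/-doSomethingElse selectors as in the snippet
above):

bar = nil;
id step1 = [foo doThis];
assert(step1 != nil);            // fails here if -doThis returned nil
id step2 = [step1 doThat];
assert(step2 != nil);            // fails here if -doThat returned nil
bar = [step2 doSomethingElse];
assert(bar != nil);              // fails here if -doSomethingElse returned nil

Each intermediate result is captured exactly once, so nothing with side
effects has to be run a second time while you are poking at it in the
debugger.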
Looking at the source to the ObjC runtime-- it is in Darwin-- there is a
hook in place already to make ObjC toss an exception when a nil object is
messaged. If you decide to use this, do so with *great care*. It is
very likely that the AppKit and Foundation rely upon nil-eats-messages--
consciously or otherwise-- and bad things will happen if the hook is enabled.
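Purely as a sketch of the effect-- this is *not* the runtime hook, just an
application-level guard with a made-up name-- you can get the "complain
loudly instead of eating the message" behavior for a single receiver with
something like:

#import <Foundation/Foundation.h>

// Illustrative only; REQUIRE_NONNIL is a made-up macro, not a runtime API.
// Raises instead of letting messages to nil be silently eaten.
// Uses the GCC statement-expression extension.
#define REQUIRE_NONNIL(obj) \
    ({ id _obj = (obj); \
       if (_obj == nil) \
           [NSException raise:NSInvalidArgumentException \
                       format:@"nil receiver where an object was expected"]; \
       _obj; })

// e.g.  bar = [[[REQUIRE_NONNIL(foo) doThis] doThat] doSomethingElse];

Note that only foo is guarded there; to guard the intermediate results you
would have to wrap every bracket, which is exactly the ugliness discussed
below.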
There is an article here...
http://www.smalltalkchronicles.net/edition2-1/null_object_pattern.htm
... that tries to make the case for a useful null object pattern and uses
ObjC as an example.
The bottom line is that 'nil eats messages' can certainly reduce the
complexity and the number of lines of code required in a *working*
application. However, it can greatly increase the difficulty of tracking
down and fixing defects when the code breaks.
Personally, I'll take the few extra lines of code and the little bit of
extra ugliness if it saves me several hours of hair-pulling debugging
later. In particular, if not using nil-eats-messages results in more
robust code because I'm never assuming that an object reference is non-nil,
I'll live with the ugliness...
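Concretely, "never assuming non-nil" just means the boring, explicit style--
a hypothetical setter, written the pre-checked way:

- (void)setTitle:(NSString *)aTitle
{
    // Hypothetical method; the point is the explicit check up front
    // instead of letting a nil argument silently eat messages later.
    if (aTitle == nil) {
        NSLog(@"-setTitle: passed nil; substituting an empty string");
        aTitle = @"";
    }
    [aTitle retain];
    [title release];
    title = aTitle;
}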
As much as I really really like ObjC's syntax over Java's, I have noticed
that the Java code-- as long as it isn't heavily multithreaded-- has been
easier to maintain over time. But that's just one developer's experience.
... others will likely have had different experiences [with which they
will berate me vehemently, no doubt].
b.bum
On Monday, January 21, 2002, at 11:02 AM, Smith, Bradley wrote:
I'm a Cocoa and Obj-C convert these days, but I still have to deal with C++
on an everyday basis. IMHO being able to make method calls on Nil is an
extremely bad thing to be able to get away with and it doesn't help anybody.
Going through an exercise in the Hillegas book I was struggling for an hour
because I hadn't set up an outlet properly. I was making calls on it and
there was no error but the outlet was Nil so no message was going anywhere.
Can anyone explain the logic / rationale behind it all?
Brad
b.bum
Do the voices in my head bother you?