Re: Is Apple's singleton sample code correct?
- Subject: Re: Is Apple's singleton sample code correct?
- From: David Gimeno Gost <email@hidden>
- Date: Tue, 29 Nov 2005 22:08:01 +0100
On 29 Nov 2005, at 10:17, Ondra Cada wrote:
No, overriding those methods does not achieve that. There is no need
to override those methods to achieve that.
There is.
There isn't.
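[For context, the overrides under discussion are the ones from Apple's old singleton sample. Roughly, they look like this sketch (non-ARC; NSUIntegerMax conventionally denotes an object that cannot be released):

```objc
// Sketch of the retain/release overrides in Apple's old singleton
// sample (non-ARC). They make the shared instance ignore ownership
// operations entirely, which is the behavior being debated here.
- (id)retain {
    return self;            // the singleton cannot gain a reference
}

- (NSUInteger)retainCount {
    return NSUIntegerMax;   // denotes an object that is never released
}

- (void)release {
    // do nothing: the singleton is never deallocated
}

- (id)autorelease {
    return self;
}
```
]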
(*) Of course, can be done also without overriding
retain/release/dealloc e.g. by overretaining in init. For all
practical reasons that's exactly the same.
No, it's not the same. Keeping the contract is not the same as breaking
it and preventing the client from knowing that it's been broken.
What you call "overretaining" is actually retaining the correct number
of times: once for the reference that is kept in the shared instance
variable and once for the reference that is returned by [[alloc] init].
What the current implementation suggested by Apple's sample code does
is actually "underretaining": it lets the client believe that it owns a
reference to an object which it really doesn't own. It breaks the
contract, and it is that breakage that requires the rest of the methods
to go out of their way just to ensure that the client never knows that
the contract has been broken.
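[A minimal sketch of that contract-keeping approach (my illustration, not Apple's sample; non-ARC, class name hypothetical): the static variable owns one reference, and every [[alloc] init] caller owns the reference handed back to it, so releasing it is always correct.

```objc
#import <Foundation/Foundation.h>

@interface MySingleton : NSObject
+ (MySingleton *)sharedInstance;
@end

@implementation MySingleton

static MySingleton *shared = nil;

- (id)init {
    if (shared) {
        // A later alloc/init: discard the fresh allocation and hand the
        // caller a reference it genuinely owns.
        [self release];
        return [shared retain];
    }
    self = [super init];
    if (self) {
        shared = [self retain];  // the reference kept by the static variable
    }
    return self;
}

+ (MySingleton *)sharedInstance {
    if (!shared) {
        // -init stores an owning reference in 'shared'; balance our own.
        [[[self alloc] init] release];
    }
    return shared;  // not owned by the caller, per the usual convention
}

@end
```

With this sketch a client that does [[MySingleton alloc] init] followed by release behaves exactly like a client of any ordinary class, which is the point being argued.]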
What I am aiming at is *this is what we have to do to adhere to Cocoa
memory-management conventions*, whilst keeping the immortal
singleton contract as well.
You're aiming at the wrong target then.
You get the correct behavior by just requiring client code to follow
Cocoa's memory management rules.
No you don't.
Yes, you do if your singleton doesn't break the contract with client
code that chooses to get a reference to it through [[alloc] init].
Correct client would first alloc/init the singleton, then release it.
Fine. If you give it a reference that it actually owns, this should
cause no trouble at all.
Actually, overriding those methods allows client code to break
Cocoa's memory management rules without a hitch.
Sure it allows. *So what*?
You want to allow your singleton be used as any other object, remember?
You don't want client code to break just because you later realize that
your singleton shouldn't really be a singleton at all.
You don't care? Fine. I do. I want mistakes to be detected as close to
when they were introduced as possible. This prevents me from
replicating them when code is copied/pasted, and makes debugging easier
because the offending code is still in my mind and I don't have to read
it again just to understand what it was supposed to do.
(The most obvious example is a forgotten (auto)release: that is a
violation of the memory management rules all right, but the
application runs without a glitch; it leaks a bit of memory, but unless
the object in question is big, or there are too many of them, and/or
the app runs for a long, long time, nobody ever notices.)
Gee, so I shouldn't care about such bugs, then?
[*] Without overriding retain/release/dealloc (or other trick with the
same outcome) it would not apply. In other words, our singleton *would
not* adhere to the memory management rules. [*]
[...]
+sharedInstance {
    static id myself = nil;
    if (!myself) myself = [[self alloc] init];
    return myself;
}
That's a very valid *singleton* we just made here. It does not quite
adhere to the memory-management conventions though, or rather, it
does, alas breaking so the contract of being a singleton :)
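[To make that trade-off concrete, here is hypothetical client code against the minimal +sharedInstance above (non-ARC): every ownership rule is respected, yet the class is no longer a singleton.

```objc
MySingleton *a = [MySingleton sharedInstance];
MySingleton *b = [[MySingleton alloc] init]; // creates a second instance
// a != b: the memory-management rules hold (the client owns 'b' and
// must release it), but the singleton contract is broken.
[b release];  // correctly balances the alloc; 'a' is unaffected
```
]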
It doesn't? Why?
Since [*] ... [*] above.
You can 'or' two expressions and, if one of them is true, the resulting
global expression will also be true, but that does not mean that the
other subexpression is true as well.
You've just provided an explanation for the "or rather, it does, alas
breaking so the contract of being a singleton" part. I already knew
this. The part I was interested in is "does not quite adhere to the
memory-management conventions".
There are different kinds of resources, and they are to be released a
different way in different moments.
I know of no such resources, but anyway that's not a reason for not
decoupling disposal of resources from its cause when the resources
happen to always be released the same way, as is often the case.
Note: the fact that the way resources must be released may depend on
the state of the object that manages them does not mean that the object
should be aware of the reasons it's in that particular state.
For example, it is completely nonsensical to release the memory when
the process is about to end.
You are confusing things. It may be unnecessary, but that does not mean
it's nonsensical. There is a tradeoff between decoupling and efficiency
here. Choosing the first over the second, unless measurable data shows
the other choice should be preferred instead, makes a lot of sense to
me.
Does sending a message to a nil object make sense to you? Do you test
for nil before sending a message every time there is the possibility
that the receiver of the message could be nil?
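[For the record, messaging nil is well-defined in Objective-C: the send is a no-op that returns zero/nil, which is why Cocoa code rarely tests for nil before every send. For example:

```objc
NSString *s = nil;
NSUInteger len = [s length];  // no crash: messaging nil returns 0
[s release];                  // likewise a safe no-op (non-ARC)
```
]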
This is what -dealloc accomplishes. You just tell the object that
manages the resources that you no longer need it.
That's nice from the theoretical point of view. In practice though, it
brings more problems than advantages.
Such as?
app-quit cleanup is a very very different beast from object
deallocation.
And 'for' loops also are a very different beast, but that doesn't
mean you shouldn't use them to do the cleanup, does it?
It does, for HOM is better :D
Analogically, whilst you *can* tweak dealloc to do your application
termination cleanup, you should not. It's ugly, error-prone, it makes
maintenance a bitch. If I want to see what exactly my application with
hundreds of source files in a number of loadable bundles and
subprojects does before quit, I grep for
NSApplicationWillTerminateNotification, and that's that. What would
you do, check *all* them deallocs, whether some of them may contain
more code than plain [ivar release]?
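[For comparison, the notification-based cleanup being described might look like this sketch (the handler name and cleanup body are my illustration; the observer API is the standard NSNotificationCenter one):

```objc
// An object that does its app-quit cleanup from
// NSApplicationWillTerminateNotification rather than from -dealloc,
// so all termination work is greppable by the notification name.
- (id)init {
    self = [super init];
    if (self) {
        [[NSNotificationCenter defaultCenter]
            addObserver:self
               selector:@selector(applicationWillTerminate:)
                   name:NSApplicationWillTerminateNotification
                 object:nil];
    }
    return self;
}

- (void)applicationWillTerminate:(NSNotification *)notification {
    // Flush caches, close files, persist state, etc.
}
```
]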
If you want us to believe that using object deallocation for proper
disposal of resources is bad, you'll have to give us something more
consistent. Just saying "it's a very different beast" or "it's ugly,
error-prone, it makes maintenance a bitch" clearly doesn't qualify.
Neither does your NSApplicationWillTerminateNotification debugging
example. BTW, have you considered the possibility that reduced coupling
and increased encapsulation could have avoided such bugs in the first
place?
There is nothing wrong about _using_ object deallocation to do the
app-quit cleanup.
There's everything wrong with it.
Yeah, sure. And the reason is...? None.
Oh, wait! There is one:
I did it, long ago. I've learnt the hard way.
Ah, well, I see the light now. Thanks! :-)
Regards.
_______________________________________________
Cocoa-dev mailing list (email@hidden)