Re: Is Apple's singleton sample code correct?
- Subject: Re: Is Apple's singleton sample code correct?
- From: Christian Brunschen <email@hidden>
- Date: Mon, 28 Nov 2005 08:10:02 +0000
On 28 Nov 2005, at 06:01, David Gimeno Gost wrote:
On 28 Nov 2005, at 02:54, mmalcolm crawford wrote:
The problem is that your implementation of the pattern strives to
prevent the singleton from ever being deallocated.
I explained this at least once previously. A singleton is
sometimes an object that is resource-intensive to create.
Repeated creation and destruction would result in poor performance.
I don't know why you keep saying this. I've clearly never talked
about repeated creation and destruction in the general case. I've
always been talking about proper deallocation when appropriate,
which would be when the application quits in the cases you are
considering. I really don't understand why you don't understand me.
I'm not sure why I would care? The aim is to make things easier
for the consumer. The current implementation means the object can
be used consistently with Cocoa's memory management rules,
whatever pattern is used.
No, it doesn't. It only works for the most common resources (e.g.
memory), not for resources that need special clean up procedures.
The approach I'm suggesting can also be used consistently with
Cocoa's memory management rules; in fact, it's so consistent with them
that there is not even a need to override the retain/release
methods. What makes it different is that it does not impose
additional unnecessary constraints that prevent the object from
being properly deallocated.
One can never be certain that any application will actually be
properly terminated - consider application crashes, or if a user
'force quits' or 'kills' the program in question. Thus, classes
should be written such that they, and their instances, can gracefully
handle that situation - i.e., the situation where they themselves are
not involved in their own cleanup, and the cleanup is instead done
by the system. So, if an object has a lifetime which is expected to
last essentially until the end of the lifetime of the application
itself, there is little point in making that cleanup functionality
available separately, or indeed in hooking into it manually - since
the class already has to handle the 'cleanup by the application
dying' case, we may as well make that the _default_ case, which also
gives us the advantage of not having to try to clean up our singleton
before the last possible user of our singleton has gone away.
If the application terminates, then the process' memory is
reclaimed by the system, and I'm not bothered about "tidying up".
Yes, and there is nothing in the approach I'm suggesting that
prevents client code from continuing to do that. What makes it
different is that, in addition to that usage, client code would
have the option to properly deallocate the object if (and when)
appropriate.
But who decides 'when appropriate'?
One point of a singleton is that the client code is not involved in
making the creation or destruction decisions about the singleton -
from the point of view of the client code, the singleton is 'always
there'. Usually a singleton is lazily created the first time a client
actually asks for it, and then kept around for the lifetime of the
application, as has been noted. If one were to allow destruction of
singletons, for instance by allowing their retain count to drop to
zero, then we would have a situation where two distinct, non-overlapping
uses of a singleton in the same program would lead to two different
singleton instances being created and then deallocated: First, one
'singleton' would be created by the first client, and then
deallocated after the first client was done with it; then the second
client would ask for the singleton, which of course is not available
so has to be created again, and will then once more be deallocated.
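To make the scenario concrete, here is a hypothetical sketch of the 'retain count may drop to zero' variant under discussion; the class name and structure are illustrative only, not anyone's actual code:

```objc
// Hypothetical 'serial singleton': the shared pointer is cleared in
// -dealloc, so a later client silently triggers a second creation.
static MySingleton *sharedInstance = nil;

@implementation MySingleton

+ (MySingleton *)sharedInstance {
    if (sharedInstance == nil) {
        sharedInstance = [[self alloc] init]; // first client creates it
    }
    return sharedInstance;
}

- (void)dealloc {
    // Last client released us: clear the shared pointer so the next
    // request creates a fresh instance -- hence the repeated
    // create/destroy cycles described above.
    sharedInstance = nil;
    [super dealloc];
}

@end
```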
One of the points of a singleton is that it is an object that is
accessed through a globally visible, shared access point (such as a
class method on the singleton's class). This means that it becomes
essentially impossible to predict which code, other than the one you
yourself write, may or may not be a client, a user, of the singleton.
Thus, just because your own code is done with the singleton, doesn't
mean that the program as a whole is.
Now, if the singleton, as quite frequently is the case, requires a
fair amount of resources to instantiate, then your suggested pattern
could lead to significantly worse performance than the traditional
pattern, as in the traditional pattern a single instance would be
allocated and kept around for the life of the application, whereas
with your pattern there could be an arbitrary number of
'singletons' (each with distinct, non-overlapping lifespans) that
would be created and destroyed through the lifetime of the application.
Essentially, in my experience, a 'singleton' is intended to describe
an object that is conceptually 'one single object that is always
there, and is always the same, single, object', which is often
implemented by lazily creating the singleton instance the first time
it is created, purely as an optimization, because it will save
resources to not have to initialize those singletons that may never
be used, and indeed to defer the creation of those singletons that
will be used, until they actually are. However, the singleton has a
conceptually infinite lifetime, with (conceptually) no start or end.
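The lazy creation just described is conventionally written along these lines (a minimal sketch; the class name is illustrative, and thread safety is ignored for brevity):

```objc
// Conventional lazily-created, never-deallocated singleton accessor.
+ (MySingleton *)sharedInstance {
    static MySingleton *sharedInstance = nil;
    if (sharedInstance == nil) {
        sharedInstance = [[self alloc] init]; // created on first request...
    }
    return sharedInstance; // ...then kept for the life of the application
}
```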
Your pattern is more one of a 'serial singleton' - a class of which
either none, or at most one, will be in use at any given time, but
whose life cycle is not intended to be conceptually infinite. This is
not a pattern that I have come across much before. However, I would
say that it is conceptually firmly _different_ from that which is
generally known as a 'singleton'.
"that's the whole point of the pattern" was given as a
counterpoint to your apparently simple complaint that a singleton
is "immortal". It wasn't clear what the root cause of your
complaint was.
You've used that argument twice in this discussion. I don't see how
not knowing what the root cause of my complaint was, changes the
fact that you've used it to justify that the singleton object
should never be deallocated. If that's not the case and I've
misunderstood you (not "misrepresented"), then I'd like to know
because, as I've already said, as of now I've still seen no
technical reason why the singleton should be prevented from being
deallocated. The only technical reasons I've seen were to justify
that the singleton _doesn't need_ to be deallocated _in some (the
most frequent) cases_, not that it _shouldn't_ be deallocated _in
the general case_.
As has been mentioned, quite often there is significant cost
associated with creating a singleton instance. This suggests that
there should be as few instantiations of such an object as possible -
ideally, either 0 (if the object is never used) or 1 (if it is). With
a lazily instantiated and never deallocated singleton, you have
exactly that; with your pattern, you can have an arbitrary number of
such instantiations, though you will never have more than one
instance active at the same time.
"that's the way things have always been done" was a counterpoint
to your apparent assertion that Cocoa should adhere to a pattern
that is described in a book that was published several years after
the Cocoa pattern has already been established.
You've also used that kind of argument twice in this discussion.
The second time was after I had already clarified what I was saying:
On 27 Nov 2005, at 01:23, mmalcolm crawford wrote:
The current pattern has been used (as far as I can tell) without
issue for almost two decades now, and it's still not clear to me
what the problem is?
Actually, looking at the archives in http://www.cocoabuilder.com/,
I see that the time difference between both mails is only a few
minutes, so it's entirely possible that you hadn't read the
pertinent explanations yet when you wrote that. But, given the
information I had at the moment (i.e. that you had given that
argument after I had clarified the issue), I don't think the way
I've (mis)interpreted that argument qualifies as a
"misrepresentation", just as it wouldn't be appropriate either to
qualify as a "misrepresentation" on your part the fact that you
misunderstood what I was saying in the first place.
I think that
"greatly reducing the number of methods that must be overridden"
is a straw man. There are fewer than half a dozen methods to
implement in all
I was talking in relative, not absolute, terms. The number of
methods that don't have to be overridden if the constraint is
removed is more than half the total number of methods that need to
be overridden if the constraint is maintained.
But by 'removing the constraint' you are changing one of the generally
accepted behaviours of a singleton. If you want something that
behaves differently, then by all means do so, but don't try to
shoehorn it into something that already has a well-defined behaviour
from which your suggested behaviour differs. In other words, go ahead
and use your pattern, but please do it in such a way as to minimize
confusion: call it something other than 'singleton' and call your
method for accessing your instances something other than
'sharedInstance', because both of those terms have well-defined
meanings with well-defined semantics and behaviours in the context
of Cocoa.
and most of the code is trivial and can be copied and pasted.
Yes. Still, if they can be removed without increasing the
complexity of the methods that must be overridden, the
implementation becomes simpler. This is a fact, there is no straw
man here.
Moreover, they have the benefit of conceptual simplicity.
Well, that's debatable, but I'm not going to argue that again. I
will note, however, that I've seen in this thread other people
expressing their concerns about the apparent complexity of that
implementation.
There are no memory management bugs, since the object is designed
not to be deallocated.
The object has several design goals (as stated in the
documentation). One is not to be deallocated. Another is that it
must be possible to use it as any other object. To me that implies
that client code shouldn't need to be concerned with whether they
are dealing with a singleton or not, i.e. existing client code
shouldn't break just because several months later the developers
realize that what they initially thought should be a singleton,
actually should be something else.
The only difference in the way that someone uses a singleton object,
is in the way that instances of such an object are initially
accessed. Other than that, singletons can be used exactly like any
other object, precisely because they handle memory management by
overriding it, and thus taking themselves out of the equation and
staying alive indefinitely.
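The overriding mentioned here amounts to something like the following sketch, per the commonly published Cocoa idiom of the time (the exact sample code may differ in details):

```objc
// Memory-management overrides that take the singleton out of the
// retain/release equation entirely, so it is never deallocated.
- (id)retain            { return self; }     // retains are no-ops
- (unsigned)retainCount { return UINT_MAX; } // denotes 'cannot be released'
- (void)release         { }                  // releases are ignored
- (id)autorelease       { return self; }     // autorelease likewise
```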
There is no need to dispose of resources, since the object is
immortal
Both the need to dispose of resources and the life-cycle of the
object depend on the purpose of the singleton and its performance
requirements. Actually, I would say that there is no such thing as
immortal objects. There are objects whose destruction _must_ be
considered and objects whose destruction _can_ be ignored, but
there are no objects whose destruction _must_ be ignored.
There are cases where it makes performance-wise sense to say
'creating one of these is expensive, so if we ever create one, we
don't deallocate it, because someone might want it again later'. In
fact, the cost of instantiating an object is one of the things that
might prompt one to consider making it a singleton in the first
place. And this isn't even considering the cost of _de_allocation,
which might also be significant: in a 'traditional' singleton, that
cost would only be incurred when the application terminates, whereas
with your pattern, it would be incurred every time the last remaining
client _at the time_ releases the 'pseudo-singleton', which, as has
already been shown, can happen an arbitrary number of times during
the lifetime of an application.
If you really do need to close resources when the application
terminates, then you can register as an observer of the
NSApplication object (a singleton...) to receive the
NSApplicationWillTerminateNotification notification...
Yes, of course... except there is a problem with that. If the
resources that need special clean up procedures are managed by a
singleton, then I must add a method whose semantics would be the
same as those of -dealloc, and whose only reason to exist would be
precisely that I have chosen to not let -dealloc ever be called
just for the sake of it.
Yes, it can be done, but it is a hack to solve a problem caused by
an arbitrary constraint that shouldn't be there in the first place.
Except that it isn't an _arbitrary_ constraint. It's a _deliberate
design choice_ that gives specific semantics and indeed performance
characteristics.
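For reference, the notification-based cleanup being debated here would look roughly like this; the selector and method names other than the Cocoa APIs themselves are illustrative:

```objc
// Sketch: a never-deallocated singleton registers for the app's
// termination notification and does its special cleanup there,
// since -dealloc will never run.
- (id)init {
    if ((self = [super init])) {
        [[NSNotificationCenter defaultCenter]
            addObserver:self
               selector:@selector(applicationWillTerminate:)
                   name:NSApplicationWillTerminateNotification
                 object:nil];
    }
    return self;
}

- (void)applicationWillTerminate:(NSNotification *)notification {
    // Release non-memory resources (files, sockets, ...) here --
    // effectively duplicating what -dealloc would otherwise do,
    // which is precisely the objection raised above.
}
```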
Please, then, make clear what your actual issues are.
I had already stated them at the beginning of the email you are
replying to. The way you've replied to it makes me believe that
this discussion has already reached the end from a technical point
of view.
I have the feeling that, for some reason, you keep thinking that
the approach I propose doesn't cover the cases already covered by
the current implementation or that it conflicts with the singletons
provided by Cocoa, or something like that. I can only guess
because, as I've said, I've still seen no technical reason to
_require_ that singletons shouldn't be deallocated. I've seen
reasons to justify that the constraint shouldn't be a problem in
most cases and ways to avoid the consequences of the constraint
when they are a problem, but I have seen no technical reason for
the existence of the constraint itself.
I have presented a few such arguments.
Hint: volume is not a substitute for precision.
Volume is a consequence of trying to avoid misunderstanding. I
believe that the last emails demonstrate that precision by itself
is not enough to avoid that misunderstanding, but reasonable people
may disagree.
I believe the root of the problem here is that different people
have a different understanding of what the singleton pattern is and
what it's not, and this is bad because it defeats one of the
purposes of having a library of design patterns, namely that we can
communicate design ideas without having to fully describe them
again and again. But such is life, time to move on. There is
clearly no point in trying to suggest a different implementation
for the sample code in Apple's documentation.
One point to keep in mind is that Cocoa is a generally very nicely
coherent framework: a developer who starts picking up the patterns as
they are used within Cocoa, can generally expect to keep finding the
same patterns repeatedly, and when finding them can thus reuse the
knowledge gained before. So, for instance, all of Cocoa's singletons
exhibit the same behaviour in regard to life cycle, etc. You are
essentially suggesting 'breaking the mold', i.e., introducing
something that superficially looks the same as Cocoa's established
singletons, but in some significant aspects behaves differently -
thus breaking the expectations of its users. It is really up to you
to make a compelling case for why something that is working should be
changed.
You might be more successful and encounter more understanding for
your suggestion if you offered it up as an alternative to the
established singleton pattern rather than trying to change the
singleton pattern. Give it a different name, make it clear that it's
a similar but distinct thing, and everybody's happy.
Regards.
Best wishes,
// Christian Brunschen
_______________________________________________
Cocoa-dev mailing list (email@hidden)