Re: ARC vs Manual Reference Counting
- Subject: Re: ARC vs Manual Reference Counting
- From: Marcel Weiher <email@hidden>
- Date: Mon, 09 Sep 2013 13:15:26 +0200
On Sep 9, 2013, at 11:33 , Tom Davie <email@hidden> wrote:
>> On 9 Sep 2013, at 10:18, Jean-Daniel Dupas <email@hidden> wrote:
>>
>> And does the profiler explicitly shows that ARC runtime code is the culprit ?
>
> Yes, it does.
Isn’t it strange how when someone says “oh, and ARC is faster”, without measurements, that passes without comment?
> [] The last time I tried this was with Xcode 4.5, and after I’d added a bunch of extra autorelease pools all over the place, which reduced ARC’s overhead to “only” 50%. This in itself suggests to me that ARC causes a significant increase in the number of autoreleased objects (which surprised me, given the runtime optimisation to get rid of autorelease/retain pairs in callee/caller).
It shouldn’t really be surprising. ARC adds an astounding number of additional reference counting ops to all code involving object pointers. If that code were compiled as-is, ObjC would be completely unusable and slower than all the so-called scripting languages out there. So for things to be usable, ARC then has the optimizer try to undo most of the damage, and finally adds some clever runtime hacks to mitigate the rest.
Since the hacks and the remaining damage are somewhat orthogonal, you sometimes end up ahead and sometimes you end up behind.
The other thing that should be considered when seeing heroic hacks like the autorelease-undoer is that such techniques rarely arise spontaneously from an idle moment of relaxed performance optimization. More usually, they happen because there is some sort of “ho lee f*k” moment, where performance regression is so bad/project-threatening that something drastic/heroic needs to be done.
Corporations and people being what they are, official communication tends to focus more on the heroics than the “ho lee f*k”, so documentation on the performance of new technology tends to be, er, “optimistic”.
GC was a classic example of this: yes, there were a few edge cases you could present where there was a speedup, but overall the performance picture was pretty dismal. Guess what got communicated? I just saw the same thing with dispatch_io, which was presented as a nice performance win in the 2011 WWDC talk (fortunately with the caveat that this was a specific case). Of course, my measurements show it to be a (small) performance loss in most other cases...
I don’t even think there’s anything nefarious going on here, just people reporting on a nice win in a specific case with their shiny new toy, and not really that eager to talk about the not-so-good cases. It is probably a good idea to keep that mechanism in mind while gulping down the Kool-Aid :-)
Cheers,
Marcel
_______________________________________________
Cocoa-dev mailing list (email@hidden)