Re: autorelease: how does this work!? (if at all)
- Subject: Re: autorelease: how does this work!? (if at all)
- From: "Clark S. Cox III" <email@hidden>
- Date: Fri, 18 Jun 2010 14:48:12 -0700
That depends on how you define "subsystem". I see nothing wrong with the way the OP is structuring this (i.e. doing the transformation on one serial queue, and doing the writing on another). This allows many of the transformations to complete without ever waiting on the disk, yet still serializes the disk access in a nice, orderly fashion.
If this were all on a single serial queue, you'd end up with the queue blocked while writing to disk, when the CPUs are otherwise idle and could have already started doing useful work on the next image(s).
As Bill said, this is a very common GCD idiom.
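
For concreteness, here's a minimal sketch of that idiom. The names and the no-op "transform" are made up for illustration, not taken from the OP's code; the point is just the two serial queues and where the autoreleased temporaries live:

#import <Foundation/Foundation.h>
#import <dispatch/dispatch.h>

static dispatch_queue_t transformQueue;
static dispatch_queue_t diskQueue;

static void ProcessImageFile(NSString *path)
{
    dispatch_async(transformQueue, ^{
        // dispatch_async copies this block, and the copy retains 'path'
        // until the block has run. A local pool keeps autoreleased
        // temporaries from piling up on the GCD worker thread.
        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

        NSData *input = [NSData dataWithContentsOfFile:path];
        NSData *transformed = [input subdataWithRange:
                                  NSMakeRange(0, [input length])]; // stand-in "transform"

        dispatch_async(diskQueue, ^{
            // All disk access funnels through one serial queue, so writes
            // never overlap, while the transform queue is already free to
            // start on the next image. Copying this block retained
            // 'transformed', so it survives the pool drain below.
            [transformed writeToFile:[path stringByAppendingPathExtension:@"out"]
                          atomically:YES];
        });

        [pool drain];
    });
}

int main(void)
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    transformQueue = dispatch_queue_create("com.example.transform", NULL);
    diskQueue      = dispatch_queue_create("com.example.diskio", NULL);

    ProcessImageFile(@"/tmp/image1.dat");
    ProcessImageFile(@"/tmp/image2.dat");

    // Crude way to let the sketch finish: an empty block queued behind the
    // real work won't run until everything ahead of it on that serial queue
    // is done.
    dispatch_sync(transformQueue, ^{});
    dispatch_sync(diskQueue, ^{});

    [pool drain];
    return 0;
}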
Sent from my iPhone
On Jun 18, 2010, at 12:44, Tony Romano <email@hidden> wrote:
> Just because you make two queues doesn't mean they will internally operate on separate cores/threads; GCD may coalesce these based on system resources. I agree with the best practice you outlined; however, the example in this email thread has two queues within the same subsystem, so there must be some "trick" I am not grokking within the snippet of code. Anyhow, I think the OP got the answer to his original question.
>
> -Tony
>
> On Jun 18, 2010, at 12:34 PM, Bill Bumgarner wrote:
>
>>
>> On Jun 18, 2010, at 12:09 PM, Tony Romano wrote:
>>
>>> First, the objects are retained by dispatch_async as others have mentioned. Second, I'm not sure why you used 2 queues for the tasks in your code; it seems overly complex. Serial queues execute one job at a time, which means that you can keep adding to the queue and the jobs will be done in the order in which they were added. The next job will not start until the previous one in the queue has completed.
>>
>> To be precise, it is the Blocks runtime that takes care of memory management, triggered by dispatch_async()'s copying of the block passed to it.
>>
>> As for there being two queues, that pattern is actually pretty common. A best practice is to subdivide your application into subsystems and then have one (or more, depending on concurrency used) queue per subsystem. The queues both allow the application to do work across many cores simultaneously while also providing a natural lock-less exclusion primitive per subsystem.
>>
>> The trick is to keep the object graphs being acted upon within the subsystems relatively isolated from each other (with the points of contention being carefully considered).
>>
>> b.bum
>>
>>
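
And for what it's worth, here's a rough sketch of the one-serial-queue-per-subsystem pattern Bill describes (again, all names are hypothetical). The subsystem's private serial queue is the only thing that ever touches its state, which is what makes it a lock-less exclusion primitive; the comments also call out where the block copy performed by dispatch_async retains the captured objects:

#import <Foundation/Foundation.h>
#import <dispatch/dispatch.h>

@interface ImageCatalog : NSObject {
    dispatch_queue_t _queue;     // the subsystem's private serial queue
    NSMutableArray  *_entries;   // only ever touched on _queue
}
- (void)addEntry:(NSString *)entry;
- (void)entryCountWithBlock:(void (^)(NSUInteger count))reply;
@end

@implementation ImageCatalog

- (id)init
{
    if ((self = [super init])) {
        _queue   = dispatch_queue_create("com.example.imagecatalog", NULL);
        _entries = [[NSMutableArray alloc] init];
    }
    return self;
}

- (void)dealloc
{
    dispatch_release(_queue);
    [_entries release];
    [super dealloc];
}

- (void)addEntry:(NSString *)entry
{
    // dispatch_async copies the block; the copy retains 'entry' (and 'self',
    // via the ivar access) until the block has run. That is the memory
    // management the Blocks runtime handles for you.
    dispatch_async(_queue, ^{
        [_entries addObject:entry];
    });
}

- (void)entryCountWithBlock:(void (^)(NSUInteger count))reply
{
    // 'reply' is itself copied along with the enclosing block, and it is
    // invoked on the catalog's queue.
    dispatch_async(_queue, ^{
        reply([_entries count]);
    });
}

@end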
_______________________________________________
Cocoa-dev mailing list (email@hidden)
Please do not post admin requests or moderator comments to the list.
Contact the moderators at cocoa-dev-admins(at)lists.apple.com
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden