Re: NSImage drawInRect deadlock
- Subject: Re: NSImage drawInRect deadlock
- From: Andrew Keller <email@hidden>
- Date: Wed, 10 Aug 2016 13:44:18 -0400
On Aug 10, 2016, at 2:48 AM, Quincey Morris <email@hidden> wrote:
> On Aug 9, 2016, at 20:47, Andrew Keller <email@hidden> wrote:
>>
>> 2. When utilizing Mike’s approach to limiting the number of parallel tasks down to, say, 1-8, I have been completely unable to reproduce the deadlock.
>> 3. When utilizing Mike’s approach to limiting the number of parallel tasks, Xcode is still saying that threads are being created like crazy — almost one thread per block submitted to the queue.
>
> I’m not a big fan of Mike’s proposed solution. If you want to use N-wide parallelism, then use NSOperationQueue, not GCD.
>
> Blocks dispatched to GCD queues should not contain any internal waits, such as for I/O. Instead, a dispatched block should occupy the CPU continuously, and at the end do one of 3 things:
>
> 1. Just exit.
>
> 2. Start an asynchronous action, such as GCD I/O, with a completion handler that’s not scheduled until the action is done.
>
> 3. Queue another block that represents another processing step in the overall task being performed.
>
> The point of #3 is that I think it’s also a mistake to queue large numbers of blocks to GCD all at once, for the pragmatic reason that if you accidentally violate the non-internal-waits rule, the size of the thread explosion depends on the amount of combustible material that’s queued. It’s better for *each operation* to queue its successor, and to start the whole thing off by priming the pump with a modest number of blocks.
>
> The other thing to be very careful of is global locks. If your code (perhaps outside your direct control) hits any global locks that affect multiple threads, *and* if the kind of lock being used is slower to test when locked than when unlocked, then more parallelism can be counterproductive.
>
> I’ve run into this in cases where adding more operations on more CPUs just adds a disproportionate amount of system overhead, decreasing the throughput of the actual calculation.
>
> The point of all this is that you may not have enough control of the internal behavior of those NSImage methods to safely use GCD parallelism for a job like this. NSOperationQueue might be a better solution.
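For reference, the semaphore-based throttle I had been using (my reading of Mike’s approach) was roughly the Swift sketch below; Tile, tiles, and renderTile(_:) are placeholders for my real work items and the NSImage drawing:

    import Foundation

    struct Tile {}                                  // placeholder for one unit of rendering work
    let tiles: [Tile] = []                          // everything that needs rendering
    func renderTile(_ tile: Tile) {}                // stands in for the expensive NSImage drawing

    let semaphore = DispatchSemaphore(value: 8)     // allow at most 8 blocks in flight
    let renderQueue = DispatchQueue(label: "render", attributes: .concurrent)

    for tile in tiles {
        semaphore.wait()                            // blocks the submitter until a slot frees up
        renderQueue.async {
            renderTile(tile)
            semaphore.signal()                      // give the slot back
        }
    }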
The successor-queueing idea in #3 is interesting. I’ve modified my scheduler so that existing work posts the next work to GCD. This implementation has no semaphores and no custom dispatch queues at all. Interestingly, I get roughly the same results: no crazy swapping to disk, no deadlock, and Xcode still reports that threads are piling up like crazy. (Note that I never let more than 8 blocks run concurrently.)
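In sketch form, the new version looks something like this (same Tile/renderTile placeholders as above; nextMostUsefulTile() is a hypothetical stand-in for my prioritization logic, and the in-flight count is only touched on the main queue):

    let workQueue = DispatchQueue.global(qos: .userInitiated)
    let maxInFlight = 8
    var inFlight = 0                                // only read/written on the main queue

    func nextMostUsefulTile() -> Tile? {            // placeholder: pick based on what the user is looking at
        return nil
    }

    func pump() {                                   // always called on the main queue
        while inFlight < maxInFlight, let tile = nextMostUsefulTile() {
            inFlight += 1
            workQueue.async {
                renderTile(tile)                    // the expensive part
                DispatchQueue.main.async {
                    inFlight -= 1
                    pump()                          // finished work posts the next work
                }
            }
        }
    }

    pump()                                          // prime the pump with up to maxInFlight blocks

Priming with up to 8 blocks and having each completion call pump() again means there are never more than 8 blocks outstanding, which is what I meant above by not letting the count get past 8.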
That said, this implementation does “feel” better than before from an architectural point of view. I believe what I have is more along the lines of “computationally intensive things I can do to iteratively improve the UI” rather than “a long list of work to do”. Based on what the user is doing, I can prioritize certain parts of the UI to be rendered before others, and having existing work post the next work to GCD makes a lot of sense because the concept of “future work” can change so easily. I know NSOperationQueue supports giving certain tasks priority, but I’d have to actually set all of those values — whereas with blocks posting other blocks, I get this behavior almost for free, because there is always “the next most useful thing to render in the UI”. If some distant future work happens to become irrelevant because the user moved to another screen, then this approach simply never gets around to queueing that work, which is usually a good thing.
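For comparison, my understanding of the NSOperationQueue route (OperationQueue in current Swift naming) is roughly the sketch below, where queuePriority is the per-operation value I’d have to pick by hand for every piece of work:

    let opQueue = OperationQueue()
    opQueue.maxConcurrentOperationCount = 8         // the N-wide parallelism

    for tile in tiles {
        let op = BlockOperation { renderTile(tile) }
        op.queuePriority = .normal                  // in practice, a real per-tile value goes here
        opQueue.addOperation(op)
    }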
Thanks,
- Andrew