Re: Concurrent tasks are getting hung up
- Subject: Re: Concurrent tasks are getting hung up
- From: Kevin Meaney <email@hidden>
- Date: Fri, 10 Oct 2014 17:20:01 +0100
On 10 Oct 2014, at 16:27, Jim Crate <email@hidden> wrote:
> On Oct 10, 2014, at 11:00 AM, Kyle Sluder <email@hidden> wrote:
>
>>> On Oct 10, 2014, at 6:42 AM, Steve Mills <email@hidden> wrote:
>>>
>>> I've only created one NSOperationQueue and added many NSInvocationOperation to it.
>>
>> NSOperationQueue works by dispatching your blocks to the global async GCD queue. If you send a thousand blocks to be async processed at once, GCD will keep spinning up threads to try to service them until it exhausts the thread limit and your process deadlocks.
>>
>> It sucks, but you have to be judicious in the number of blocks you submit to the global queue. Set a max operation count on your NSOperationQueue.
>
> I ran into the same problem in my first foray into using NSOperationQueue. When adding several hundred block operations to the queue, it pretty much choked trying to run all several hundred at once. Now, when I create an NSOperationQueue for what could be more than a few operations, I always use:
>
> myQueue.maxConcurrentOperationCount = [[NSProcessInfo processInfo] processorCount];
>
> The processorCount is actually logical core count, so on my 4-core i7 it returns 8.
I prefer working with the actual number of physical CPUs. As Jim shows, the logical processor count returns 8 for a four-core i7 (which is also what I get): hyperthreading doubles the number of reported CPUs.
The reason I prefer the physical count is that my performance testing achieved the fastest throughput when the number of concurrent tasks was equal to the number of physical CPUs. Unfortunately NSProcessInfo doesn't report the number of physical CPUs.
The following code demonstrates various ways of calling sysctl to get system info:
#include <stdio.h>
#include <sys/types.h>
#include <sys/sysctl.h>

int main(void)
{
    int mib[4];
    int numCPU;
    size_t len = sizeof(numCPU);
    printf("sizeof numCPU: %zu\n", len);
    // sysctlnametomib wants the number of elements in mib, not its size in bytes
    size_t mibSize = sizeof(mib) / sizeof(mib[0]);

    printf("\nhw.physicalcpu\n");
    sysctlnametomib("hw.physicalcpu", mib, &mibSize);
    printf("mib[0] = %d, mib[1] = %d\n", mib[0], mib[1]);
    printf("mibSize = %zu\n", mibSize);
    sysctl(mib, 2, &numCPU, &len, NULL, 0);
    printf("hw.physicalcpu = %d\n", numCPU);

    printf("\nhw.logicalcpu\n");
    mibSize = sizeof(mib) / sizeof(mib[0]); // reset, the previous call changed it
    sysctlnametomib("hw.logicalcpu", mib, &mibSize);
    printf("mib[0] = %d, mib[1] = %d\n", mib[0], mib[1]);
    sysctl(mib, 2, &numCPU, &len, NULL, 0);
    printf("hw.logicalcpu = %d\n", numCPU);

    // sysctlbyname does the name lookup and the query in one call
    sysctlbyname("hw.physicalcpu", &numCPU, &len, NULL, 0);
    printf("\nsysctlbyname:hw.physicalcpu = %d\n", numCPU);
    return 0;
}
Now, I've been optimizing to maximize throughput for processing image files, whereas most people should be optimizing to keep the system and their application responsive while getting the work done with as little overhead as possible, which implies fewer concurrent operations, not more. Also, you don't always know when an operation you spawn might start some task that then spawns further concurrent operations. I suppose what I'm saying is: err on the side of fewer concurrent tasks rather than more.
And I've seen exactly what Jim describes: an application completely locked up after being flooded with concurrent operations.
Kevin