Re: Lockless thread-safe accessor using blocks: how to?
- Subject: Re: Lockless thread-safe accessor using blocks: how to?
- From: Dave Zarzycki <email@hidden>
- Date: Thu, 14 Apr 2011 10:02:15 -0700
On Apr 14, 2011, at 6:20 AM, WT wrote:
> Hi all,
>
> I've started to use GCD in my projects and I found myself using a certain pattern that I now realize isn't actually thread safe. The goal is to write a thread-safe lazy accessor without using locks, @synchronized, or an atomic property. At first I thought that
>
> - (SomeObjType) foo
> {
>     __block SomeObjType foo = nil;
>
>     dispatch_sync(queue,
>     ^{
>         // retrieve bah
>
>         if (nil == bah)
>         {
>             // compute and store bah
>         }
>
>         foo = bah;
>     });
>
>     return foo;
> }
>
> would do it. Here, bah is some resource that may be changed by multiple threads and queue is a serial GCD queue defined as a static variable in the class where this accessor is defined. The queue is not any of the global queues, but is created by the class.
Please keep in mind that while GCD is certainly more efficient than locks, @synchronized, or atomic properties, it isn't magic. In a retain/release (not GC) world, it simply is impossible to implement lockless accessors around instance variables that are objects. It doesn't matter whether one uses GCD or other kinds of locks or lock-like APIs:
- (Obj *)foo
{
    Obj *tmp;
    lock(); // to ensure that 'ivar' isn't changing, because the act of retaining dereferences 'ivar'
    tmp = [ivar retain];
    unlock();
    return [tmp autorelease];
}
Note: like atomic properties, the above code ONLY ensures thread safety at the memory-management layer. It doesn't ensure that the concurrent design is correct. In fact, with the above code and with atomic properties, nothing stops multiple threads from concurrently mutating the returned object, and the pattern blatantly encourages time-of-check-vs-time-of-use bugs.
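For comparison, here is roughly the same contract expressed with a private serial queue instead of a lock. This is a sketch only; it assumes the same 'ivar' and a serial 'queue' created elsewhere, retain/release, not GC:

    - (Obj *)foo
    {
        __block Obj *tmp = nil;
        dispatch_sync(queue, ^{
            // 'ivar' cannot be released out from under us here, because the
            // setter below only mutates it on this same serial queue.
            tmp = [ivar retain];
        });
        return [tmp autorelease];
    }

    - (void)setFoo:(Obj *)newValue
    {
        dispatch_sync(queue, ^{
            if (newValue != ivar)
            {
                [ivar release];
                ivar = [newValue retain];
            }
        });
    }

It carries exactly the same caveat: thread safety at the memory-management layer, nothing more.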
>
> I see two problems with this pattern.
>
> The first is that if the method gets invoked while already on the queue's automatic thread, there will be a deadlock. That's easy to fix by wrapping the dispatch call in a function that tests queue against the currently executing queue and simply executes the block when they coincide.
Actually, this isn't easy to fix, because of X->A->B->A problems: the code starts on queue X, dispatches synchronously to A, then to B, and then deadlocks trying to dispatch_sync() against A again, because the check sees that A "isn't the current queue" and lets the call through.
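Here is a sketch of that failure mode, assuming two private serial queues, queueA and queueB, created elsewhere:

    // Starts on some thread/queue X.
    dispatch_sync(queueA, ^{
        dispatch_sync(queueB, ^{
            // A naive guard compares the current queue (B) against A, decides
            // it is "safe", and calls dispatch_sync() against A anyway.
            // A is still busy running the outermost block, which cannot finish
            // until this inner call returns: deadlock.
            if (dispatch_get_current_queue() != queueA)
            {
                dispatch_sync(queueA, ^{ /* never runs */ });
            }
        });
    });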
If you want to permit thread-local reentrancy, you must use pthread_self(), not dispatch_get_current_queue(), to detect and allow "safe" reentrancy.
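A sketch of what that might look like (illustrative names, not a drop-in implementation; it assumes Darwin, where pthread_t is a pointer and a plain word-sized store suffices for this test):

    #include <dispatch/dispatch.h>
    #include <pthread.h>

    static dispatch_queue_t queue;   // the class's private serial queue
    static pthread_t owningThread;   // thread currently running a block on 'queue'

    static void runOnQueueReentrantly(dispatch_block_t block)
    {
        if (owningThread == pthread_self())
        {
            // We are already inside a block submitted to 'queue' (possibly via
            // intermediate queues), so run inline instead of deadlocking.
            block();
        }
        else
        {
            // Other threads only ever read NULL or some other thread's
            // pthread_t above, so their equality test correctly fails.
            dispatch_sync(queue, ^{
                owningThread = pthread_self();
                block();
                owningThread = NULL;
            });
        }
    }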
Personally speaking, reentrant behavior is a sign that the object graph of your code contains cycles, which are problematic for all sorts of reasons, not just synchronization. If you fix the design, then you can avoid the thread local reentrancy problem entirely.
>
> The second problem is that the pattern isn't actually thread safe. If two threads (that aren't the automatic thread of the queue) enter the accessor, two blocks will be enqueued serially. The two threads will then block, waiting for their blocks to finish executing.
>
> So far so good, but when the first block finishes, the first thread may not get to resume until the second block has also finished, at which time the foo value computed by the first block will have been replaced by the value computed by the second block, since foo is shared among all blocks that captured it.
>
> Thus, when the first thread resumes, it will report the wrong foo value.
Can you please provide a compilable test case?
davez