Re: Implementing a "Synchronous Channel" with dispatch
- Subject: Re: Implementing a "Synchronous Channel" with dispatch
- From: Dave Zarzycki <email@hidden>
- Date: Wed, 14 Sep 2011 11:00:37 -0700
On Sep 14, 2011, at 9:38 AM, Andreas Grosam wrote:
>
> On Sep 14, 2011, at 4:20 PM, Dave Zarzycki wrote:
>
>> Andreas,
>>
>> This is probably not the answer you're looking for, but when we find coding patterns like this in the OS, we advise developers to replace this pattern with a GCD queue and use either dispatch_sync() or dispatch_async(). The reason for this advice is that parking threads to wait for events isn't an efficient use of system resources.
>>
>> davez
>
> Well, I *am* using GCD ;)
>
> A "Synchronous Channel" (or Synchronous Queue) is a well-known pattern in multithreaded programming.
No argument there.
> Basically, it is used to "hand off" objects from one thread to another, with the requirement that the "producer" waits until a "consumer" took the object.
Dispatch queues exist to solve both the synchronous and asynchronous producer/consumer pattern. If you want the producer to wait until the consumer is done, then use dispatch_sync() instead of dispatch_async():
x = create_something();
dispatch_sync(consumer_q, ^{
    do_something_with(x);
});
// do_something_with() is done
That's it. Easy, huh? :-)
A note about dispatch semaphores: While they are powerful, they are not meant to replace queues. They exist for two very specific problems: 1) Refactoring synchronous APIs on top of asynchronous APIs and 2) managing finite resource pools.
davez
>
> The synchronous queue is a blocking queue, and this blocking is used as a means to "throttle" the use of system resources.
>
> In certain scenarios, the only way to stop code that allocates resources is to block its thread. When you use dispatch queues to schedule "work items", these work items may hold resources. But consider: those resources stay allocated for as long as the block waits in the dispatch queue, and are deallocated again only once the block has finished.
>
> So, imagine this:
> A producer, and consumer in action:
> while (!canceled) {
>     NSData * data = … create new data
>     dispatch_async(process_data_queue, ^{
>         [consumer processData:data];
>     });
> }
>
> If you do this, you will very probably drive your system into an insane state if the "producer rate" is higher than the "consumer rate". To fix the balance, you would use semaphores, and that is exactly what the Synchronous Channel does.
>
>
> Regards
> Andreas
>
>>
>> On Sep 14, 2011, at 5:40 AM, Andreas Grosam wrote:
>>
>>> Dear List,
>>>
>>> I've implemented a simple "Synchronous Channel" using dispatch lib.
>>> A "synchronous channel" is an actor in which each producer offering an item (via a put operation) must wait for a consumer to take this item (via a get operation), and vice versa.
>>>
>>> The following is probably the simplest implementation, which lacks some important features like timeout etc.
>>>
>>> But anyway, is the following a proper and *efficient* implementation when using the dispatch lib, where consumers and producers are scheduled on queues and the semaphore is implemented with dispatch_semaphore?
>>>
>>> Any hints to improve this?
>>>
>>>
>>> template <typename T>
>>> class SimpleSynchronousChannel {
>>> public:
>>>     SimpleSynchronousChannel()
>>>     : sync_(0), send_(1), recv_(0)
>>>     {
>>>     }
>>>
>>>     void put(const T& v) {
>>>         send_.wait();
>>>         value_ = v;
>>>         recv_.signal();
>>>         sync_.wait();
>>>     }
>>>
>>>     T get() {
>>>         recv_.wait();
>>>         T result = value_;
>>>         sync_.signal();
>>>         send_.signal();
>>>         return result;
>>>     }
>>>
>>> private:
>>>     T value_;
>>>     semaphore sync_;
>>>     semaphore send_;
>>>     semaphore recv_;
>>> };
>>>
>>>
>>>
>>> Benchmark info: if I run one producer (performing put()) and one consumer (performing get()) concurrently, I get a throughput of about 200,000 items/sec on a MacBook Pro.
>>>
>>> class semaphore is a simple wrapper around dispatch_semaphore:
>>>
>>>
>>> class semaphore : noncopyable {
>>> public:
>>>     explicit semaphore(long n) : sem_(dispatch_semaphore_create(n)) {
>>>         assert(sem_);
>>>     }
>>>     ~semaphore() {
>>>         dispatch_release(sem_);
>>>     }
>>>     void signal() {
>>>         dispatch_semaphore_signal(sem_);
>>>     }
>>>     bool wait() {
>>>         return dispatch_semaphore_wait(sem_, DISPATCH_TIME_FOREVER) == 0;
>>>     }
>>>     bool wait(double timeout_sec) {
>>>         long result = dispatch_semaphore_wait(sem_,
>>>             timeout_sec >= 0 ?
>>>                 dispatch_time(DISPATCH_TIME_NOW, (int64_t)(timeout_sec * NSEC_PER_SEC))
>>>               : DISPATCH_TIME_FOREVER);
>>>         return result == 0;
>>>     }
>>>
>>> private:
>>>     dispatch_semaphore_t sem_;
>>> };
>>>
>
_______________________________________________
Cocoa-dev mailing list (email@hidden)
Please do not post admin requests or moderator comments to the list.
Contact the moderators at cocoa-dev-admins(at)lists.apple.com
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden