
Re: multiple threads mutex, unlocking order


  • Subject: Re: multiple threads mutex, unlocking order
  • From: Terry Lambert <email@hidden>
  • Date: Tue, 17 Jun 2008 14:33:20 -0700

On Jun 17, 2008, at 12:47 AM, Matt Connolly wrote:
Thanks, Mike.
In my application, the kernel has to initiate the messaging, since it's intercepting an event in kernel space, but the user-space process is responsible for handling it (to reduce code in the kernel, and to use higher-level OS interfaces).

Typically this is done by having user space initiate a request for work to do. The kernel blocks the completion of that request until there is actually work to do. When there is work to do, the kernel queues it to a work-to-do queue and sends a wakeup to the synchronizer on which the user space process is blocked in the kernel. The (now unblocked) user space thread dequeues the information and returns it to user space, completing the request. When the work is done, the user space process servicing the kernel's request calls back down with the answer and asks for more work (same call).


Basically (U is your daemon):

U -> K : I am here and give me work

,->
|
|  (A) K -> U : Here is your work
|
`--- (a) U -> K : Here is your result and give me more work

        (b) U -> K : Here is your result and I quit

   (B) K -> U : I have no work for you; go away

With the possibility of an async:

        (c) U -> K : Alas, I am dead!

This effectively inverts the normal system call order, but it's what NFS daemons historically do, and it's how the external identity resolver in Directory Services works.
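
A minimal user-space sketch of that request loop, under assumed names (kWorkSelector, work_request_t, work_reply_t and the user client connection are all placeholders for whatever your own IOUserClient sub-class defines), might look like this:

    /* User-space daemon loop: ask the kernel for work, block until there is
     * some, do it, and hand the result back on the next call down.
     * Hypothetical sketch -- kWorkSelector, work_request_t and work_reply_t
     * are placeholders, not a real interface. */
    #include <IOKit/IOKitLib.h>

    enum { kWorkSelector = 0 };                 /* hypothetical method selector */

    typedef struct { uint64_t token; int32_t status; }        work_reply_t;   /* result of previous item */
    typedef struct { uint64_t token; uint8_t payload[128]; }  work_request_t; /* next item to do */

    static void service_loop(io_connect_t connection)
    {
        work_reply_t   reply = { 0, 0 };        /* first call carries no result yet */
        work_request_t request;

        for (;;) {
            size_t requestSize = sizeof(request);
            /* "Here is my (previous) result, give me more work."  The kext
             * blocks this call until it has queued something for us. */
            kern_return_t kr = IOConnectCallStructMethod(connection, kWorkSelector,
                                                         &reply, sizeof(reply),
                                                         &request, &requestSize);
            if (kr != KERN_SUCCESS)
                break;                          /* (B) no work for you / driver went away */

            reply.token  = request.token;       /* do the work in user space ... */
            reply.status = 0;                   /* ... and report it next time around */
        }
    }

Each call down carries the previous item's result and blocks until the kext has queued the next item, so the daemon spends its idle time parked inside the kernel rather than polling.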

We haven't formalized this design pattern into an API/KPI because in general we believe it's very bad to block a kernel operation on something running in user space: you can get a deadly embrace deadlock if the user space process triggers a message to itself (through however long a chain of services), and you can get a starvation deadlock if the user space process does not get sufficient opportunity to run.

The deadly embrace is generally avoidable only by not relying on frameworks or libraries for which you have not audited the source code for Hamiltonian cycles in the call graph. The starvation deadlock is generally avoidable only by running at very high priority.

Typically, this means that you run minimal code and exempt root-owned processes from triggering up to user space. This makes it ideally suited to kauth and similar mechanisms, which run as root and know that root is always authorized, and significantly less suited to all other uses (for example, a firewall that pops up an authorization dialog before permitting a network request to complete).
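
As a sketch of that exemption (illustrative only; the choice of scope and the policy logic are assumptions, not part of the original message), a kauth listener can decide immediately for root and only queue non-root requests up to the daemon:

    /* Sketch of a kauth vnode-scope listener that never blocks root-owned
     * processes on the user-space daemon.  Policy details omitted. */
    #include <sys/kauth.h>

    static kauth_listener_t gListener;

    static int
    my_vnode_callback(kauth_cred_t cred, void *idata, kauth_action_t action,
                      uintptr_t arg0, uintptr_t arg1, uintptr_t arg2, uintptr_t arg3)
    {
        /* Root is always authorized: decide right here, never call up to
         * user space, so the daemon itself (running as root) cannot deadlock
         * against its own listener. */
        if (kauth_cred_getuid(cred) == 0)
            return KAUTH_RESULT_DEFER;

        /* Non-root: queue a request for the user-space daemon and wait for
         * its verdict (the queueing/waiting code is not shown here). */
        return KAUTH_RESULT_DEFER;      /* placeholder until that path exists */
    }

    static void
    my_listener_start(void)
    {
        gListener = kauth_listen_scope(KAUTH_SCOPE_VNODE, my_vnode_callback, NULL);
    }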

-- Terry

The user-space connects via an IOUserClient sub-class. Only when this sub-class is instantiated does the driver activate. So, yes, I'm checking what to do if the user-space process crashes out -> the IOUserClient sub-class will get a clientDied message, and the rest of the driver deactivates.
Semaphores seem like the go. But I read somewhere that when a semaphore is signalled, the waiting thread will not activate immediately - I was hoping I could make that happen to reduce latency.
Alternatively, mach messaging might be best. In both of these cases, you can do a timed wait, so if the user-space process crashes, the timeout will revert the driver to a deactivated state.
It seems, though, that there's a lack of nice IOLock-type classes for semaphores and mach messages. Or have I missed something?
Cheers, Matt
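
For the timed wait mentioned above, the kernel side can already do this with plain IOLock primitives (IOLockSleepDeadline / IOLockWakeup); the sketch below uses made-up names (fLock, fReplyReady) and an arbitrary timeout, and treats a missed deadline as the daemon having gone away:

    /* Timed wait on an IOLock: block a kernel thread until user space posts a
     * reply, or give up after a timeout and treat the daemon as gone.
     * Sketch only -- fLock, fReplyReady and the timeout are illustrative. */
    #include <IOKit/IOLib.h>
    #include <IOKit/IOLocks.h>
    #include <kern/clock.h>

    static IOLock  *fLock;          /* IOLockAlloc()'d at driver start */
    static bool     fReplyReady;    /* set by the user client when a reply arrives */

    static bool
    wait_for_reply(uint32_t timeoutMS)
    {
        uint64_t deadline;
        bool     gotReply;

        clock_interval_to_deadline(timeoutMS, kMillisecondScale, &deadline);

        IOLockLock(fLock);
        while (!fReplyReady) {
            /* Atomically drops fLock while asleep, reacquires it on wakeup. */
            if (IOLockSleepDeadline(fLock, &fReplyReady, deadline,
                                    THREAD_UNINT) == THREAD_TIMED_OUT)
                break;              /* daemon presumed dead; deactivate the driver */
        }
        gotReply = fReplyReady;
        fReplyReady = false;
        IOLockUnlock(fLock);
        return gotReply;
    }

    /* The user client's reply path would do the matching:
     *     IOLockLock(fLock); fReplyReady = true;
     *     IOLockWakeup(fLock, &fReplyReady, true); IOLockUnlock(fLock); */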



On Jun 16, 2008, at 8:09 AM, Matt Connolly wrote:
Do the IOLock lock and unlock routines have to be called from the same thread?
They should, yes.
If multiple threads are waiting on the IOLock, are they executed in the order that they began waiting on the Lock?
No.
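
In other words, IOLock is an ordinary mutex: whichever thread takes it releases it, and there is no FIFO promise for waiters. A rough sketch of the two distinct uses (mutual exclusion versus waiting for another thread), with illustrative names:

    /* IOLock is a mutex: lock and unlock on the same thread.  To wait for
     * another thread, sleep on an event while holding the lock instead.
     * Illustrative sketch; gLock and gDataReady are made-up names. */
    #include <IOKit/IOLocks.h>

    static IOLock *gLock;
    static bool    gDataReady;

    static void consumer_thread(void)
    {
        IOLockLock(gLock);                       /* this thread takes the lock ... */
        while (!gDataReady)
            IOLockSleep(gLock, &gDataReady, THREAD_UNINT);  /* drops + retakes it */
        gDataReady = false;
        IOLockUnlock(gLock);                     /* ... and this same thread drops it */
    }

    static void producer_thread(void)
    {
        IOLockLock(gLock);
        gDataReady = true;
        IOLockWakeup(gLock, &gDataReady, true);  /* wake one sleeper (no FIFO promise) */
        IOLockUnlock(gLock);
    }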
I'm looking for a way for a kernel task to wait for a user space process to respond to a message.
This is generally a bad way to do things; you want the user process to drive the messaging, not the kernel.


The underlying issue is that user processes are transient and unreliable, so you need to behave correctly in the case where the process fails to complete the work, or terminates while the work is in progress.

IFF we assume that you have arranged for the user process to re-start, you need to maintain queues of work items (in progress, waiting) inside the kernel so that you can re-issue work to your user process if it checks in after crashing. (And your work items need to be devised such that they can safely be re-started.)
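
One possible shape for that inside the kext (a sketch under assumed names; the item layout, the lists and the crash hook are illustrative) is two queues under one lock, with in-progress items pushed back onto the waiting queue from the clientDied path:

    /* Two queues under one lock: items waiting to be handed to user space and
     * items a user-space thread is currently working on.  If the client dies,
     * restartable in-progress items go back on the waiting queue so they can
     * be re-issued when a new daemon checks in.  Names are illustrative. */
    #include <stdint.h>
    #include <sys/queue.h>
    #include <IOKit/IOLocks.h>

    struct work_item {
        uint64_t               sequence;     /* assigned when the item is queued */
        bool                   restartable;  /* safe to hand out a second time? */
        TAILQ_ENTRY(work_item) link;
    };

    static TAILQ_HEAD(, work_item) gWaiting    = TAILQ_HEAD_INITIALIZER(gWaiting);
    static TAILQ_HEAD(, work_item) gInProgress = TAILQ_HEAD_INITIALIZER(gInProgress);
    static IOLock *gQueueLock;               /* IOLockAlloc()'d at driver start */

    /* A user-space thread asks for work: move the oldest waiting item onto the
     * in-progress list and hand it out (blocking while both lists are empty is
     * omitted from this sketch). */
    static struct work_item *dequeue_work(void)
    {
        struct work_item *item;
        IOLockLock(gQueueLock);
        item = TAILQ_FIRST(&gWaiting);
        if (item != NULL) {
            TAILQ_REMOVE(&gWaiting, item, link);
            TAILQ_INSERT_TAIL(&gInProgress, item, link);
        }
        IOLockUnlock(gQueueLock);
        return item;
    }

    /* clientDied path: anything the dead daemon was holding gets re-queued;
     * non-restartable items would be failed and freed here instead. */
    static void requeue_after_crash(void)
    {
        struct work_item *item;
        IOLockLock(gQueueLock);
        while ((item = TAILQ_FIRST(&gInProgress)) != NULL) {
            TAILQ_REMOVE(&gInProgress, item, link);
            if (item->restartable)
                TAILQ_INSERT_HEAD(&gWaiting, item, link);
        }
        IOLockUnlock(gQueueLock);
    }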

Basically, I want to serialise things, so that if there are multiple kernel threads sending to the user-space process, they will have their information processed, and be woken from their locks, in the correct order.
A better model is for the work items to be queued inside the kernel, and for the user process to come along and pick up these work items as it has resources available. As for "the correct order", since kernel threads run concurrently with user threads, it is entirely your responsibility to ensure ordering. You can use a global sequence number, or sort the queue, or... there are many options depending solely on what sort of ordering you're actually talking about.
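
Continuing the sketch above, a global sequence number can simply be stamped on each item under the same lock that protects the queue, and whoever consumes the items processes (or sorts) strictly by that number:

    /* Stamp each work item with a monotonically increasing sequence number
     * under the queue lock; the consumer then handles items strictly in
     * sequence order.  Illustrative sketch, reusing the names above. */
    static uint64_t gNextSequence;               /* protected by gQueueLock */

    static void enqueue_work(struct work_item *item)
    {
        IOLockLock(gQueueLock);
        item->sequence = gNextSequence++;        /* total order of submission */
        TAILQ_INSERT_TAIL(&gWaiting, item, link);
        IOLockWakeup(gQueueLock, &gWaiting, true);  /* wake one waiting requester */
        IOLockUnlock(gQueueLock);
    }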


HTH.
= Mike

_______________________________________________
Do not post admin requests to the list. They will be ignored.
Darwin-kernel mailing list      (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden
