Re: VFS KPI: advisory locking
- Subject: Re: VFS KPI: advisory locking
- From: Terry Lambert <email@hidden>
- Date: Mon, 13 Feb 2006 17:03:57 -0800
Advisory locking changed in Tiger.
Prior to Tiger, advisory locks were maintained via the vnop_advlock
entry point in the file system's VNOP table. The FS was responsible
for providing an in-core inode, cnode, or other per-file structure
containing a pointer to hang the lock list off of; it passed the
address of that element in its per-FS structure to the locking code.
In addition, each FS that supported advisory locking carried its own
FS-specific implementation of the advisory locking code that knew
about the per-inode lock list.
This was problematic: each FS had to call lf_advlock() and related
routines directly, and locking support had to be integrated at the
FS layer (which many FS's never did).
As of Tiger, there are two types of locks:
o Locally maintained locks (these are hung off the vnode)
o Remotely maintained locks (these are proxied to a server, which
maintains them for you)
For locally maintained advisory locks, the locks are hung off the
vnode itself (&vp->v_lockf is the list head). The net upshot of this
is that advisory locking "just works" for any vnode, as long as it's
not a FIFO, pipe, or socket.
For remotely maintained advisory locks, the VNOP_ADVLOCK entry point
of the FS is called with the lock information, as before.
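A remote FS's entry point might look something like the sketch below. The vnop_advlock_args layout matches the Tiger-era header; the myfs_sendlock() helper that proxies the request to the server is purely hypothetical, and this fragment only builds inside a kext project against the kernel headers.

```c
#include <sys/vnode_if.h>
#include <sys/fcntl.h>
#include <sys/errno.h>

/* Hypothetical helper: marshals the lock request to the remote server. */
extern int myfs_sendlock(vnode_t vp, caddr_t id, int op, struct flock *fl,
                         int flags, vfs_context_t ctx);

static int
myfs_vnop_advlock(struct vnop_advlock_args *ap)
{
    /* ap->a_op is F_SETLK, F_UNLCK, or F_GETLK; ap->a_fl gives the range. */
    switch (ap->a_op) {
    case F_SETLK:
    case F_UNLCK:
    case F_GETLK:
        /* Proxy to the server, which maintains the lock state for us. */
        return myfs_sendlock(ap->a_vp, ap->a_id, ap->a_op, ap->a_fl,
                             ap->a_flags, ap->a_context);
    default:
        return EINVAL;
    }
}
```

Because the server owns the lock state, the local FS never touches the vnode's lock list in this path.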
If you want the system to maintain the advisory locks for you (i.e.
you have a local file system, and you aren't trying to change the
semantics from the POSIX semantics for some reason), then call
vfs_setlocklocal() on the mount point in the FS's mount routine.
This sets the MNTK_LOCK_LOCAL flag in the mp's mnt_kern_flag field,
and recursively marks the FS's outstanding vnodes to indicate local
locking semantics.
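A minimal sketch of that mount-time call, assuming a hypothetical local FS named "myfs" (the mount routine's name and setup are illustrative; vfs_setlocklocal() is the private symbol described above, so the extern declaration stands in for a KPI header that doesn't exist yet):

```c
#include <sys/mount.h>
#include <sys/vnode.h>

/* Private kernel symbol, not in the KPI sets (see the caveat below). */
extern void vfs_setlocklocal(mount_t mp);

static int
myfs_mount(mount_t mp, vnode_t devvp, user_addr_t data, vfs_context_t ctx)
{
    int error = 0;

    /* ... normal per-FS mount setup would go here ... */

    if (error == 0) {
        /*
         * Ask the VFS layer to maintain POSIX advisory locks for us:
         * sets MNTK_LOCK_LOCAL, so locks hang off each vnode's v_lockf
         * and VNOP_ADVLOCK is handled generically.
         */
        vfs_setlocklocal(mp);
    }
    return error;
}
```

After this call the FS needs no vnop_advlock implementation of its own for standard POSIX semantics.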
Right now, this is technically not KPI, and there's some cleanup
that's bound to happen down the road before it can be formalized as
KPI, so in order to use this, you will need to link against the raw
kernel symbols, rather than the KPI symbol sets (this has been
discussed before, and you can get information on how to do this from
developer.apple.com).
Again: for remote FS's this should not be an issue for you, since
nothing should change; it's only on local FS's where you were calling
into the advisory locking routines directly that it is an issue.
If you already have a local implementation of your own per-FS locking
code, and you don't want to change over to the new system until the
KPIs are more baked, then everything should still work - just model
your code on the argument list to the NFS VNOPs in the OpenDarwin
sources.
Hope that helps!
-- Terry
On Feb 13, 2006, at 3:43 PM, Brian Bergstrand wrote:
On Feb 13, 2006, at 11:15 AM, Jim Magee wrote:
There is nothing in Tiger/Darwin 8 to generate kevents at the VFS
layer. But that is being worked on for a future release. When it
arrives, network filesystems will have locally-driven changes auto-
reflected into the kevent/knote system. They will also have a way
to be informed when someone is watching a file locally so they can
subscribe to whatever remote event system their protocol might
support and then reflect remote changes into the kevent system as
well.
Thanks Jim, that's all good news. But for the present there is no
way for a third-party VFS plugin to provide knotes while remaining
KPI compliant (not linking against the kernel proper)?
So moving my file system plugin from 10.3 to 10.4 has now lost me
knote and advisory locking support (vfs_setlocklocal() is private
and my old locking code was associating context with a proc struct).
--Jim
On Feb 11, 2006, at 2:01 PM, Brian Bergstrand wrote:
Since the knote interface is no longer public, does this mean that
knotes are auto-generated by the kernel? This is for a local file
system, but I'd be interested in the status for network file systems
too.
I had a quick look at the VFS source and didn't see anything to
suggest this is the case, but thought maybe the new kernel event
system generated corresponding knotes. Then again, both HFS and
UFS still generate their own notes...
If knotes are not auto-generated, is there a way for VFS plugins
to do so while still remaining KPI compliant?
Knotes and byte-range locking (vfs_setlocklocal) seem to be some
rather big oversights in the KPI.
Brian Bergstrand
<http://www.bergstrand.org/brian/> PGP Key ID: 0xB6C7B6A2
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Darwin-kernel mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden