Re: Yielding the processor in a kext?
- Subject: Re: Yielding the processor in a kext?
- From: Jeremy Pereira <email@hidden>
- Date: Wed, 12 Sep 2007 12:14:54 +0100
On 11 Sep 2007, at 12:53, Anton Altaparmakov wrote:
I think he means the design of the code that implements your file
system, not the file system design itself. Your code should be
designed so that a failed memory allocation caused by a memory
shortage cannot corrupt it.
You try and put a relational database in the kernel as a file system,
with lots of metadata duplication and cross-references all over the
place, and then try to keep it both fast in the 99.9999% case and
100% correct in the 0.0001% case where a memory shortage occurs,
without journalling, COW, or other modern approaches to fs
consistency. Then you will be in my boat and see how retrying
memory allocations suddenly seems like a great idea... (-;
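[A minimal sketch of the "don't corrupt state on allocation failure" point above:
allocate everything a request needs up front and unwind completely on failure,
rather than retrying. The names my_node, my_create_node and my_tag are made up
for illustration and are not from either file system; only the OSMalloc KPI is real.]

#include <libkern/OSMalloc.h>
#include <sys/errno.h>
#include <sys/types.h>

struct my_node {
    void   *data;
    size_t  data_len;
};

static OSMallocTag my_tag;   /* created once at kext start with OSMalloc_Tagalloc() */

static int
my_create_node(uint32_t data_len, struct my_node **out)
{
    struct my_node *node = OSMalloc(sizeof(*node), my_tag);
    if (node == NULL)
        return ENOMEM;                         /* nothing has been touched yet */

    node->data = OSMalloc(data_len, my_tag);
    if (node->data == NULL) {
        OSFree(node, sizeof(*node), my_tag);   /* undo the partial work completely */
        return ENOMEM;                         /* caller sees a clean failure, no dangling state */
    }

    node->data_len = data_len;
    *out = node;                               /* publish the node only after everything succeeded */
    return 0;
}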
Funnily enough, the VFS I am currently porting from Solaris to Mac
OS X also has a database in it, although it is not relational. The
solution adopted by the original developers, and one which I believe
Apple recommends, was to put all of the complicated stuff in a
user-space daemon. The kext is really just a stub that services the
VFS and vnode API and hands the calls off to the user-space program.
The only memory allocations in the kernel are those for the
file-system-specific information attached to the vfs and vnode
structures.
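[To make the stub idea concrete, here is a rough sketch of what one forwarded
vnode operation might look like. Only the vnop_getattr_args / vnode_fsnode KPI
is real; fwd_call(), struct fwd_request and struct my_fsnode are invented
placeholders for whatever IPC the kext actually uses to reach the daemon,
e.g. a kernel control socket or a Mach port.]

#include <sys/errno.h>
#include <sys/vnode.h>
#include <sys/vnode_if.h>

struct my_fsnode {                 /* the per-vnode private data: the only kernel-side allocation */
    uint64_t node_id;
};

struct fwd_request {               /* hypothetical wire format for the upcall to the daemon */
    uint64_t node_id;
    int      op;
};

#define FWD_GETATTR 1

/* Hypothetical transport: sends the request to the user-space daemon and
 * blocks until the reply arrives. */
extern int fwd_call(const struct fwd_request *req, struct vnode_attr *vap);

static int
my_vnop_getattr(struct vnop_getattr_args *ap)
{
    struct my_fsnode *np = vnode_fsnode(ap->a_vp);  /* private data attached when the vnode was created */
    struct fwd_request req = { .node_id = np->node_id, .op = FWD_GETATTR };

    /* All the complicated work (the database lookup etc.) happens in the daemon;
     * the kext just marshals the call and returns whatever the daemon answers. */
    return fwd_call(&req, ap->a_vap);
}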
So the answer is: no, I won't try putting a relational database in the
kernel. It sounds like a world of pain to me. I think I'll accept
the performance hit and do the complicated stuff in user space. It
means I can unit test most of the VFS without needing two machines,
I have access to all of the user-space libraries, and a bug in the
code is less likely to cause a panic.