In the latest issue of c't (a German fortnightly computer magazine) they
compare an Apple Xserve with a single 1 GHz G4 against a Dell PowerEdge
1650 with a single 1.4 GHz P III; both machines have 256 MB of RAM. The
Xserve runs Mac OS X Server, while the PowerEdge runs Red Hat 7.2. The
result so far is that the PowerEdge outperforms the Xserve on almost all
tasks by roughly a factor of two (the exception is SSL with keepalive,
where the Xserve wins).
The interesting thing is that they were able to improve the Xserve's
performance by tweaking the number of cacheable files (vnodes). They
raised the default of 4912 (for 256 MB of RAM) to 15000 using:
sysctl -w kern.maxvnodes=15000
and got a huge performance boost from that. But they also say that it
might be dangerous to play around with this value (Apple warns against
doing so),
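For what it's worth, the tweak can be applied and checked like this (a
sketch for Mac OS X Server of that era; it needs root, the change is
runtime-only, and, as said, Apple warns against touching this value):

```shell
# Show the current limit first (4912 was the default on the 256 MB box)
sysctl kern.maxvnodes
# Raise the cap as in the c't test (root required, not persistent
# across reboots)
sysctl -w kern.maxvnodes=15000
# Confirm the new value took effect
sysctl kern.maxvnodes
```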
and they claim that the concept of a static value for this parameter is
technically outdated; in the case of Linux it has been replaced by a
limit that adapts dynamically to main-memory usage.
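To illustrate the difference (a purely hypothetical sketch, not Darwin's
or Linux's actual code): a static default gives roughly 4912 vnodes per
256 MB regardless of how much memory the machine has, while a dynamic
scheme would derive the limit from the installed RAM:

```shell
# Static default from the article: 4912 vnodes for 256 MB of RAM.
# A hypothetical dynamic limit scales linearly with RAM instead.
RAM_MB=1024                                 # assumed machine size
DYNAMIC=$(( RAM_MB * 4912 / 256 ))          # 4912 vnodes per 256 MB
echo "static: 4912  dynamic for ${RAM_MB} MB: ${DYNAMIC}"
```

At 256 MB the two agree; at 1 GB the hypothetical dynamic limit would
already be four times the static default, without anyone running sysctl
by hand.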
So, now my question: are there any plans or work in progress on a
dynamic caching mechanism for the Darwin kernel? Is such a mechanism
technically feasible? And would it bring a large or only a minor benefit
for I/O performance?
Thanks, Lars
References:
>Darwin IO performance and maxvnodes (From: "Lars Sonchocky-Helldorf" <email@hidden>)