On Aug 28, 2008, at 6:42 AM, alejandro <alejandro@openstudionetworks.com> wrote:

> I imagine that the memory allocation calls are protected with a mutex. The question is whether any read/write operation on that memory must wait for the same mutex, because the real-time threads do not perform any memory allocations at all.

The answer is "no, not the same mutex". But depending on how your code functions, there's plenty of room left for other contention.

At a guess, you are allocating a large enough chunk that you are getting multiple pages of page-aligned memory. Such allocations are merely reservations of virtual address space, and do not have physical pages associated with them until they are faulted, at which point physical backing pages have to be found and mapped.

If these are not kernel allocations, you need to mlock() them (I recommend against this for user programs; wired pages are a scarce resource: memory is not free). If these are kernel allocations, I hope they are not huge, since kernel virtual address space is limited to a percentage of physical memory in any case, and so is more scarce; if it becomes fragmented, it will take exponentially longer each time to find contiguous virtual space for large runs of pages.

Fault overhead can be avoided later on such pages by paying the cost up front: touching the pages results in the backing pages being pre-reserved, which avoids the search for physical pages later.

If the problem is in fact that the fault-time allocation is taking a long time, it is likely that your system is experiencing paging pressure. This is usually indicative of too high a workload for the available hardware resources, or of your (or some other) code behaving badly. The most common cause is code that does not hint to the VM system that data should not be cached, or that cached data is no longer needed (non-use of direct I/O or madvise()).

If the allocations are small enough, then consider both maintaining your own object pool of prefaulted pages and aggressively recovering objects to your free list as early as possible during processing. Realize, however, that every page you hold in your pool is one less page for the system, and that much more paging pressure on everything else; so if possible, it would be best to go after the actual culprit instead of taking the "I've got mine, to heck with everyone else" approach.

-- Terry
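
To make the prefaulting point above concrete, here is a minimal user-space sketch, assuming a plain malloc'd buffer whose size is known ahead of time; the helper name and the commented-out mlock() call are illustrative additions, not anything from the post itself.

/* Sketch: pay the fault cost up front by touching every page of a
 * freshly allocated buffer, so the backing pages are found and mapped
 * before any real-time thread uses the memory.  Buffer size and the
 * optional mlock() are assumptions for illustration. */
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>

static void *alloc_prefaulted(size_t bytes)
{
    long page = sysconf(_SC_PAGESIZE);
    unsigned char *buf;

    if (page <= 0)
        page = 4096;            /* conservative fallback */

    buf = malloc(bytes);
    if (buf == NULL)
        return NULL;

    /* Write one byte per page so each virtual page gets a physical
     * backing page now, instead of faulting later in the hot path. */
    for (size_t off = 0; off < bytes; off += (size_t)page)
        buf[off] = 0;

    /* Optional, and discouraged above for ordinary user programs:
     * wiring keeps the pages from being paged out, but wired pages
     * are a scarce resource. */
    /* mlock(buf, bytes); */

    return buf;
}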
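
For the hinting Terry mentions (direct I/O and madvise), a rough Darwin user-space sketch follows; the file path, scratch buffer, and function name are made up for illustration, and the scratch region is assumed to be page-aligned (for example, obtained with mmap).

/* Sketch: tell the VM system that file data should not be cached
 * (F_NOCACHE on Darwin) and that a scratch region's contents are no
 * longer needed (madvise), so bulk work does not create paging
 * pressure for everyone else. */
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

void hint_example(const char *path, void *scratch, size_t scratch_len)
{
    int fd = open(path, O_RDONLY);
    if (fd >= 0) {
        /* Ask that I/O on this descriptor bypass the buffer cache,
         * so streaming through a large file does not evict other
         * processes' pages. */
        fcntl(fd, F_NOCACHE, 1);
        /* ... read and process the file ... */
        close(fd);
    }

    /* Once the scratch data is no longer needed, say so; the pages
     * can then be reclaimed without being written out.  scratch is
     * assumed to be page-aligned. */
    madvise(scratch, scratch_len, MADV_FREE);
}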
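
And as one possible shape for the object pool in the last paragraph, here is a small free-list sketch, assuming fixed-size objects carved out of a single prefaulted allocation; the sizes and names are arbitrary, and there is no locking here, so a pool shared between threads would need its own synchronization.

/* Sketch: an object pool backed by one prefaulted allocation, with a
 * free list so objects can be recovered as early as possible during
 * processing.  OBJ_SIZE, POOL_OBJS and the function names are
 * assumptions for illustration. */
#include <stdlib.h>
#include <string.h>

#define OBJ_SIZE   256
#define POOL_OBJS  1024

struct free_node { struct free_node *next; };

static struct free_node *free_list;

void pool_init(void)
{
    /* One big allocation, touched once so every page has a physical
     * backing page before the real-time threads start drawing on it. */
    unsigned char *mem = malloc((size_t)OBJ_SIZE * POOL_OBJS);
    if (mem == NULL)
        return;
    memset(mem, 0, (size_t)OBJ_SIZE * POOL_OBJS);

    for (int i = 0; i < POOL_OBJS; i++) {
        struct free_node *n = (struct free_node *)(mem + i * OBJ_SIZE);
        n->next = free_list;
        free_list = n;
    }
}

void *pool_get(void)
{
    struct free_node *n = free_list;
    if (n != NULL)
        free_list = n->next;
    return n;                   /* NULL means the pool is exhausted */
}

void pool_put(void *obj)        /* return objects as early as possible */
{
    struct free_node *n = obj;
    n->next = free_list;
    free_list = n;
}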