On Thursday, July 10, 2003, at 7:53 PM, Tom Arnold wrote:

> On my OS X v10.2.6 server, I run a "top" command every 30 minutes and log
> its output to a file. I started doing this to debug a persistent crashing
> problem that always ends in a memory allocation panic. By logging "top"
> output this way, I've noticed that process 0, the kernel task, uses about
> 0.3 MB more physical memory each time, i.e., every 30 minutes. Does anyone
> have any idea WHY this is happening? Eventually it consumes so much memory
> that the machine crashes.

Kernel wired memory will grow at first, as system-wide resource limits are reached on vnodes, etc. But it should level out at about 10-25% of available memory at the default configuration (higher on smaller-memory systems).

Some other resources, however, have no global limits, and a process (or set of processes) may be asking the kernel to allocate more and more of them. You can get a better handle on which kernel resource type is growing without bound by using the "zprint" tool to dump statistics about the kernel's zone-allocated resource pools, and comparing its output over time. Once you know which resource pool is growing (seemingly without bound), you will be able to get much better help.

You may also want to look at the top output for the other processes. Does any of them show an ever-increasing number of memory regions, threads, or Mach ports? That could point out the real culprit.

--Jim
_______________________________________________
darwin-kernel mailing list | darwin-kernel@lists.apple.com
Help/Unsubscribe/Archives: http://www.lists.apple.com/mailman/listinfo/darwin-kernel
Do not post admin requests to the list. They will be ignored.
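[Archive note: the "compare zprint output over time" advice above can be sketched as a small script that diffs two saved snapshots and reports which zones grew. This is a minimal illustration, not from the original thread: it assumes a simplified two-column snapshot format (zone name, current size in bytes); real zprint output has more columns and varies between OS releases, so the parsing would need adjusting to match your system.]

```python
# Diff two saved "zprint"-style snapshots and report which kernel zones grew.
# Assumed simplified format per line: <zone-name> <current-size-in-bytes>.
# (Real zprint output differs; adapt parse_snapshot() to your columns.)

def parse_snapshot(text):
    """Map zone name -> current size in bytes from simplified snapshot lines."""
    zones = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[-1].isdigit():
            zones[parts[0]] = int(parts[-1])
    return zones

def growing_zones(before, after):
    """Return (zone, byte-delta) pairs for zones that grew, largest first."""
    old, new = parse_snapshot(before), parse_snapshot(after)
    deltas = [(name, size - old.get(name, 0)) for name, size in new.items()]
    return sorted(((n, d) for n, d in deltas if d > 0),
                  key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    # Hypothetical snapshots taken 30 minutes apart: vnodes has leveled
    # out, while the kalloc.256 pool keeps growing.
    snap1 = "vnodes 7420032\nkalloc.256 1048576\n"
    snap2 = "vnodes 7420032\nkalloc.256 2097152\n"
    for name, delta in growing_zones(snap1, snap2):
        print(f"{name} grew by {delta} bytes")
```

In practice you would capture the snapshots alongside the existing half-hourly top log (e.g. `zprint > /var/log/zprint.$(date +%s)` from the same cron job) and diff the earliest against the latest; a zone that grows monotonically across many samples is the one to investigate.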