As the person who did much of the necessary work to ensure UNIX conformance for setrlimit, unless Python is tracking and voluntarily enforcing the working set limit on itself like gcc does, I can pretty much guarantee you that your problem is not as solved as you think it is.
What pushes swap usage up is dirty pages (usually anonymous, usually from malloc), since if they were backed by a vnode, the dirty data would be flushed to disk via the backing vnode, not to a swap file.
-- Terry

On Aug 5, 2009, at 8:45 AM, Roger Herikstad <email@hidden> wrote:

Hi, I don't really want to turn off swap as such; I just want to limit it so that a process can't take down the entire machine by trying to allocate more memory than is available. In my particular situation, I'm using 64-bit Python to analyze a big data set that, in some cases, may require more than my 16 GB. I want to allow that, but if my hard drive only has 80 GB available, say, I want to cap the amount of address space at 80 GB. For Python, I found that I can use the resource module and call setrlimit directly, which fixes my problem.
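Roger's resource-module approach can be sketched as follows. This is a minimal illustration, not his exact code; the 80 GB figure mirrors his example, and whether the kernel actually enforces RLIMIT_AS is platform-dependent, which is precisely Terry's caveat above.

```python
import resource

# Sketch of the setrlimit approach described in the thread.
# 80 GB mirrors Roger's example; adjust to your own disk headroom.
LIMIT_BYTES = 80 * 1024**3

soft, hard = resource.getrlimit(resource.RLIMIT_AS)

# Stay within the existing hard limit so setrlimit doesn't raise
# ValueError in environments that already cap address space.
if hard == resource.RLIM_INFINITY:
    new_soft = LIMIT_BYTES
else:
    new_soft = min(LIMIT_BYTES, hard)

resource.setrlimit(resource.RLIMIT_AS, (new_soft, hard))

# Where RLIMIT_AS is enforced, an allocation that would push the
# process past the limit raises MemoryError instead of driving the
# machine into swap.
```

Note that on Mac OS X of that era, enforcement of RLIMIT_AS was incomplete, so this guards well on platforms that honor the limit but may not fully solve the problem Terry describes.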
~ Roger

On Wed, Aug 5, 2009 at 11:31 PM, Tom Duffy <email@hidden> wrote:
Do you mean swap? If so, you can turn off the dynamic pager.
On Aug 5, 2009, at 7:57 AM, Roger Herikstad <email@hidden> wrote:
Hi list,
I was wondering if there is a way to set a limit on how much virtual memory the operating system should be allowed to allocate. I have a MacPro with 16 GB of memory running 10.5.6. When using 64-bit applications to analyze big data sets, it occasionally happens that an application requests more memory than the system can accommodate. As a result, pages are written to disk until there's no more disk space available, which hangs the machine. Is there a way to change this behavior, so that I would instead get a memory allocation error when (sometimes accidentally) trying to allocate such huge chunks of memory? Thanks!
~ Roger