Re: OSMalloc Fails (returns 0)
- Subject: Re: OSMalloc Fails (returns 0)
- From: Terry Lambert <email@hidden>
- Date: Wed, 4 Jan 2006 16:13:18 -0800
On Jan 4, 2006, at 2:48 PM, Russ Seehafer wrote:
On Jan 2, 2006, at 8:34 PM, Terry Lambert wrote:
On Dec 25, 2005, at 2:22 PM, Russ Seehafer wrote:
I have a socket filter nke based on the tcplognke sample. I make a
call to OSMalloc with the gOSMallocTag to malloc some space for a
struct inside the nke's tl_attach_fn function. Usually this works
great, but every now and then the malloc fails, usually after the
system has been running for a while. Anyone know why this would
happen? I couldn't find any documentation on it.
In general, this shouldn't happen unless you are leaking memory
(e.g. not freeing the memory on detach, attaching too many times,
etc.).
You can use the command "zprint" to display memory usage by zone on
the system; I suspect you will see one of the kalloc.# zones that
is the size of your structure (or larger, if your structure is not
an even power of 2 in size) has hit its maximum number of elements.
If you need a lot of memory and are willing to manage it yourself,
you should alloc a page at a time; then you will also allocate
pageable memory instead of wired memory (internally OSMalloc will
call either kalloc() or kmem_alloc_pageable()).
-- Terry
OK, I have gone through my mallocs and frees and I can't find any
leaks. I did notice that the structure that causes the malloc to
fail is bigger than I thought, a lot bigger. It is about 220000
bytes. Yes, you read that right. What are my options for getting this
much memory so frequently in the nke? I'm not exactly clear on what
you mean by getting memory a page at a time and managing it myself.
Thanks so much for your input.
Your only options are pretty much to grab it up front, and manage it
yourself, or find a more memory efficient data structure for what
you're trying to do. Better to avoid the memory fragmentation issue
entirely, by not allocating and freeing big chunks like that.
To elaborate: say you grab the first chunk that large you can find,
free it later, and even one small allocation lands in the freed space.
Now you are hunting for the next region of memory that can hold a 220K
contiguous object. Do that enough times, and there won't be any place
left in memory capable of satisfying your allocation, and you get your
0 return.
You will also *definitely* want to guarantee that the size you request
to allocate is aligned to an even multiple of the page size, for
something that large.
-- Terry
_______________________________________________
Darwin-kernel mailing list (email@hidden)