Re: memory allocation and virtual memory increase
- Subject: Re: memory allocation and virtual memory increase
- From: John Stiles <email@hidden>
- Date: Tue, 28 Nov 2006 06:43:08 -0800
Bill Bumgarner wrote:
On Nov 27, 2006, at 10:22 PM, Bruce Johnson wrote:
My program constantly allocates (via malloc) chunks of memory to hold
computer generated data. It then writes the data out to disk via
fwrite. The memory is then freed.
As Shawn indicated, it certainly sounds like your application has a
memory leak. The other, extremely remote, explanation is that you
have a seriously pathological allocation/deallocation pattern that is
fragmenting the heap so badly that you are exhausting addresses without
actually exhausting memory. Could happen, but *extremely* unlikely
-- safer to assume that it really is a memory leak.
The leaks tool can lie, because memory isn't always zeroed across
allocations. Try setting the MallocScribble environment variable (and
MallocPreScribble, as it catches other bugs). At a very minor cost in
allocation performance, it will vastly reduce the number of leaks that
slip past as false negatives.
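To make that concrete, here is a minimal sketch, assuming the MallocScribble
and MallocPreScribble variables described in the macOS malloc man page, of the
kind of bug that scribbling exposes (the filenames and values printed are just
illustrative):

/* scribble_demo.c -- illustrative only.
 * Build:  cc -Wall -o scribble_demo scribble_demo.c
 * Run:    MallocScribble=1 MallocPreScribble=1 ./scribble_demo
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *buf = malloc(16);
    if (buf == NULL)
        return 1;

    /* With MallocPreScribble=1 the fresh allocation is filled with 0xAA
     * instead of whatever stale data happened to be in the heap, so code
     * that only "works" because of leftover contents breaks immediately. */
    printf("first byte after malloc: 0x%02x\n", (unsigned char)buf[0]);

    strcpy(buf, "hello");
    free(buf);

    /* With MallocScribble=1 the freed block is filled with 0x55, so this
     * use-after-free reads back garbage rather than the old string -- and
     * stale pointer-like values no longer hide leaked blocks from leaks. */
    printf("first byte after free:   0x%02x\n", (unsigned char)buf[0]);
    return 0;
}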
Now, though, a second question is raised:
Why is your app "constantly allocat[ing] .... writing ... then the
memory is freed"?
This sounds like a fairly standard situation in which you would
typically want to allocate a handful of buffers that are filled with
data, emptied of said data, and then recycled.
If your app has a steady rate of data production / consumption, then
such a buffer management policy is easy to implement.
In the case where your app has to deal with bursts of data, writing a
buffer management algorithm to manage both the contents and length of
a queue of buffers is relatively straightforward.
This will increase performance in a number of ways. First, it avoids
malloc/free entirely. Secondly, by allocating a handful of large
buffers up front, you give the allocator and VM subsystem a chance to
lay things out in memory a bit more efficiently than can be pulled off
with constant reallocation.
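For illustration, here is a minimal, single-threaded sketch of the kind of
recycled-buffer scheme being described. The pool size and buffer size are
arbitrary assumptions, and a real app would add locking if the producer and
consumer run on different threads:

/* pool_sketch.c -- recycle a handful of large buffers instead of calling
 * malloc/free for every chunk of generated data. */
#include <stdio.h>
#include <stdlib.h>

#define POOL_COUNT 4
#define BUF_SIZE   (1024 * 1024)   /* 1 MB per buffer; arbitrary */

static void *pool[POOL_COUNT];
static int   pool_top = 0;         /* number of buffers currently free */

/* Allocate the buffers once, up front. */
static int pool_init(void)
{
    for (pool_top = 0; pool_top < POOL_COUNT; pool_top++) {
        pool[pool_top] = malloc(BUF_SIZE);
        if (pool[pool_top] == NULL)
            return -1;
    }
    return 0;
}

/* Hand out a recycled buffer; returns NULL if every buffer is in use. */
static void *pool_acquire(void)
{
    return (pool_top > 0) ? pool[--pool_top] : NULL;
}

/* Return a buffer to the pool after its contents have been written out. */
static void pool_release(void *buf)
{
    pool[pool_top++] = buf;
}

int main(void)
{
    if (pool_init() != 0)
        return 1;

    for (int i = 0; i < 10; i++) {
        void *buf = pool_acquire();
        if (buf == NULL)
            break;                 /* all buffers busy; a real app would wait */

        /* ... fill buf with generated data, fwrite() it to disk ... */

        pool_release(buf);         /* recycle instead of free() */
    }
    return 0;
}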
I'm not sure I completely agree with your recommendation of recycling
buffers here. In my experience, unless an app is doing hundreds of
allocations and frees per second, there isn't a strong need for pooling
buffers. Malloc basically /is/ a buffer pooling system, and it's darn
fast on OS X, so unless you have special needs and malloc isn't cutting
it for you, why reinvent that wheel?
Anyway, to the OP, when you say "no leaks are detected," using what
tool? I've had excellent luck with MallocDebug in the past, so I'd
recommend starting there. BTW, it's possible that you're calling "free"
on the wrong address. If you alter the pointer returned from malloc
(for example, if you march it forward while filling in the contents of
the allocated buffer), then freeing the altered pointer will fail.
Sometimes a warning is printed in this case, so check your run logs.
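To make that last point concrete, here is a small hypothetical example of the
pattern being warned about: keep the original pointer malloc returned, and pass
that, and only that, to free. The function and its arguments are made up for
illustration:

#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch of the "freeing the wrong address" bug. */
void write_chunk(const char *data, size_t len)
{
    char *buf = malloc(len);       /* keep this original pointer */
    if (buf == NULL)
        return;

    char *cursor = buf;            /* march a separate cursor, not buf itself */
    memcpy(cursor, data, len);
    cursor += len;                 /* advancing 'cursor' is fine... */

    /* free(cursor);  <-- WRONG: not the address malloc returned */
    free(buf);                     /* RIGHT: free the original pointer */
}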