Re: Resident Memory, Private Memory, Shared Memory (top, Activity Monitor and vmmap)
- Subject: Re: Resident Memory, Private Memory, Shared Memory (top, Activity Monitor and vmmap)
- From: Michael Smith <email@hidden>
- Date: Thu, 29 Nov 2007 21:59:41 -0800
On Nov 28, 2007, at 8:46 AM, email@hidden wrote:
On Wed, 2007-11-28, at 00:08, Jim Magee wrote:
This value as returned by task_info() indicates how many page-level
pmap mappings are currently present in the task's address space.
In other words, how many valid page table entries there are for
this process's current use of its virtual address space.
This also means this value is the most uninteresting one for me as
a developer.
As I already attempted to point out (you did read to the end of my
last message, right?), these numbers are not going to be interesting
to you.
Apple makes tools that will tell you much more useful and interesting
things about your application. You should use them, rather than
complaining that numbers that do other things aren't doing what you
want.
We can try to explain some of the finer details here, but really if
you're not going to take advice, or do the background reading that's
necessary for you to understand what's going on, there's a limit to
how much this is going to help...
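For what it's worth, here is a minimal sketch of reading these
counters yourself with task_info(), using the classic TASK_BASIC_INFO
flavor; the resident_size it reports is the pmap-level count described
above:

    #include <stdio.h>
    #include <mach/mach.h>
    #include <mach/mach_error.h>

    int main(void)
    {
        struct task_basic_info info;
        mach_msg_type_number_t count = TASK_BASIC_INFO_COUNT;

        /* Ask the kernel for the basic accounting of our own task. */
        kern_return_t kr = task_info(mach_task_self(), TASK_BASIC_INFO,
                                     (task_info_t)&info, &count);
        if (kr != KERN_SUCCESS) {
            fprintf(stderr, "task_info: %s\n", mach_error_string(kr));
            return 1;
        }

        /* virtual_size is reserved address space; resident_size counts
           pages with live page-table entries, as described above. */
        printf("virtual:  %lu bytes\n", (unsigned long)info.virtual_size);
        printf("resident: %lu bytes\n", (unsigned long)info.resident_size);
        return 0;
    }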
But if the objects are bigger than what we have mapped, we may not
be able to produce dead-accurate accounting.
Not sure what you are trying to say with that sentence. If an object
is 200 pages, and 20 pages are in fact present in physical memory
(180 are not), then the accounting is perfectly correct when it says
20 * page_size is the used memory of this object. It may not be my
process causing these 20 pages to be in physical memory, but since
this object is mapped into my address space, I keep the memory object
itself alive and thus I keep the 20 pages alive and mapped to memory,
don't I?
No.
Private and shared memory sizes are an even greater approximation
than the resident memory size. But they return different info.
The private and shared memory counts say "how many pages of the
objects I have mapped are cached in memory", regardless of whether
I've accessed them yet/lately, whereas the resident memory size says
"how many page table entries does my process already have to
quickly access those cached pages".
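To make the distinction concrete, here's a rough sketch of how a
top-style tool derives private/shared counts: walk the address space
with mach_vm_region() asking for the VM_REGION_TOP_INFO flavor, and
sum the resident pages of each backing object by share mode. This
deliberately simplifies the real classification rules (the actual
tools also consider ref_count, submaps, and so on), so treat it as an
approximation of the approximation:

    #include <stdio.h>
    #include <mach/mach.h>
    #include <mach/mach_vm.h>

    int main(void)
    {
        mach_vm_address_t addr = 0;
        natural_t priv_pages = 0, shared_pages = 0;

        for (;;) {
            mach_vm_size_t size = 0;
            vm_region_top_info_data_t info;
            mach_msg_type_number_t count = VM_REGION_TOP_INFO_COUNT;
            mach_port_t object_name = MACH_PORT_NULL;

            if (mach_vm_region(mach_task_self(), &addr, &size,
                               VM_REGION_TOP_INFO,
                               (vm_region_info_t)&info, &count,
                               &object_name) != KERN_SUCCESS)
                break;  /* walked off the end of the address space */

            /* These counters describe pages of the backing object that
               are cached in memory -- not pages we have PTEs for. */
            switch (info.share_mode) {
            case SM_PRIVATE:
                priv_pages += info.private_pages_resident;
                break;
            case SM_COW:
                priv_pages += info.private_pages_resident;
                shared_pages += info.shared_pages_resident;
                break;
            case SM_SHARED:
                shared_pages += info.shared_pages_resident;
                break;
            }
            addr += size;
        }

        printf("private: %u pages, shared: %u pages\n",
               priv_pages, shared_pages);
        return 0;
    }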
Okay, maybe I asked the wrong question here. Maybe the question
should rather be how to interpret these values.
E.g. what does it mean if a process has very little resident memory,
but huge shared and huge private memory?
Without other contextual information, not a lot.
This means I have memory
objects mapped into my address space, large ones, that have many
physical memory pages assigned to them and that are either private or
shared; however, I have not touched many of these pages lately, or at
all, right?
Not necessarily.
You have mappings that touch objects with large resident counts, both
shared and private mappings. You don't have many pages currently
mapped, either because you haven't touched many, or because they've
been stolen, or evicted.
So my process is not really using many of the pages of
these objects; still, I keep these objects alive, and since they are
alive, a lot of memory pages are tied up, as these objects need them
to keep process-private or process-shared data in memory.
No. The fact that you have mappings against these objects implies
nothing (reliable) about the residency status of pages outside the
ranges you have mapped.
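A small illustration of that point, using a hypothetical file name:
if you map only a window of a large file, your task's numbers can at
most reflect that window, regardless of what the kernel has cached
for the rest of the file.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* "/tmp/big.dat" stands in for any large file. */
        int fd = open("/tmp/big.dat", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        /* Map only the first 16 pages (assuming 4K pages).  This
           task's accounting covers that window alone; pages the kernel
           caches for the rest of the file are neither visible in, nor
           implied by, the task's numbers. */
        size_t window = 16 * 4096;
        void *p = mmap(NULL, window, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        munmap(p, window);
        close(fd);
        return 0;
    }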
A real-life
example: I load a big library, a huge one, that is in use by many
other processes, too. The code segment of that library will be shared
(COW).
If this is a system framework, you may inherit a shared sub-pmap, and
the residency stats for the shared pmap are common to all of the
address spaces containing it.
The text segment for a shared library is normally mapped read-only.
Since some other process uses this library heavily, many of
these COW code pages are resident in physical memory (cached, so to
speak, if you view all physical RAM as just a cache for the swap file
and other mapped binary files). This will inflate my shared memory
value enormously. However, if I never access any of the pages of the
library's code segment, none of this memory is accounted towards my
resident size. Is this interpretation more or less accurate?
In the case of a shared pmap, the pages are technically resident
regardless of whether you've touched them or not. For non-system
libraries (which are much less frequently shared) this is a more
reasonable approximation.
I think I understood the case above, but how should I interpret a
process that has a huge resident memory size, while its shared and
private memory together are much less? What would be a real-life
example of that? Why would this process have so many physical pages
mapped into its virtual address space, if all the objects it has
mapped there have so few physical pages?
It's possible (and common) to allocate process virtual space that isn't
backed by a mapped object that's accounted for in those numbers.
What kind of pages would those be? Doesn't
every page mapped into my process space need to belong to some
memory object in the kernel?
As has been previously noted in this thread, the accounting for these
numbers is an approximation.
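A simple way to watch the numbers diverge is plain anonymous memory;
this sketch (sizes chosen arbitrarily) allocates and touches a large
anonymous region, which you can then inspect from another terminal
with top or vmmap:

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* 64 MB of anonymous memory: no named file backs it. */
        size_t len = 64 * 1024 * 1024;
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_ANON | MAP_PRIVATE, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* Touch every page so each one gets a page-table entry:
           resident size grows by the full 64 MB. */
        memset(p, 1, len);

        getchar();  /* pause here; inspect with top or vmmap */
        munmap(p, len);
        return 0;
    }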
One side question: A virtual page is just virtual. The opposite of a
virtual page is... a physical one?
There is no "opposite" for a virtual page.
If I run vmmap with -resident, it
will print the virtual size of every object and the physical size of
every object (the resident size). For the resident size, does it
matter whether the page is currently actually in memory or swapped out?
See above regarding the definition of "resident".
= Mike