Re: Resident Memory, Private Memory, Shared Memory (top, Activity Monitor and vmmap)
- Subject: Re: Resident Memory, Private Memory, Shared Memory (top, Activity Monitor and vmmap)
- From: Markus Hanauska <email@hidden>
- Date: Wed, 28 Nov 2007 12:14:43 +0100
On Wed, 2007-11-28, at 00:08, Jim Magee wrote:
Hi Jim,
> This value as returned by task_info() indicates how many page-level
> pmap mappings are currently present in the task's address space.
> In other words, how many valid page table entries there are for
> this process's current use of its virtual address space.
This also means this value is the least interesting one for me as a developer. It does not take into account whether a page is shared or private, so it bears no relation to the memory my process really needs. Basically it only says how many pages I can access without faulting the MMU, but faulting the MMU is nothing bad. A reasonable system will fault the MMU a couple of times a second without negative effects (okay, it degrades performance slightly, since mapping in a physical page takes some processing time). IOW, the sum of private and shared memory is a much better representation of my process's demands than this value; this value would only be of interest if something like shared memory didn't exist. I can maybe use it to find out how many pages I have already "touched" so far, but is that really interesting to know? If I touch a shared page, this value will increase, but the physical memory demand on my system will not necessarily rise, since the page might already have been resident in physical memory before I touched it; it just had no mapping into my process space.
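
Just to make sure we are talking about the same number, here is a minimal sketch of reading that value back (my own illustration, assuming the TASK_BASIC_INFO flavor of task_info(); error handling kept short):

    #include <mach/mach.h>
    #include <stdio.h>

    int main(void)
    {
        /* Query the calling task's basic info; resident_size is the
         * value under discussion, virtual_size its counterpart. */
        struct task_basic_info info;
        mach_msg_type_number_t count = TASK_BASIC_INFO_COUNT;

        kern_return_t kr = task_info(mach_task_self(), TASK_BASIC_INFO,
                                     (task_info_t)&info, &count);
        if (kr != KERN_SUCCESS) {
            fprintf(stderr, "task_info: %s\n", mach_error_string(kr));
            return 1;
        }

        printf("virtual size:  %lu bytes\n", (unsigned long)info.virtual_size);
        printf("resident size: %lu bytes\n", (unsigned long)info.resident_size);
        return 0;
    }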
> The Private and Shared memory sizes:
> Are calculated by walking the virtual address space of a task,
> looking at the objects mapped there, the resident page counts in
> those objects, and the reference counts on those objects to try and
> approximate the number of pages that fall into each category.
Isn't that a much better representation of the memory demands? I mean, if I load a library into my process space, some pages of that library will be backed by physical memory even though I have never touched any of it. My process might not be the cause of that, but I keep the library in memory at that time (together with maybe other processes), so I'm responsible, among other processes, for the fact that this memory object can't be dumped and that these pages stay in memory for the time being.
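
For what it's worth, that walk can be approximated from user space, too. A minimal sketch (my own, certainly not the code the kernel actually uses) with the VM_REGION_TOP_INFO flavor of mach_vm_region(), which reports per mapped object its reference count and its resident pages split into private and shared:

    #include <mach/mach.h>
    #include <mach/mach_vm.h>
    #include <stdio.h>

    int main(void)
    {
        /* Walk our own address space region by region. */
        mach_vm_address_t addr = 0;

        for (;;) {
            mach_vm_size_t size = 0;
            vm_region_top_info_data_t info;
            mach_msg_type_number_t count = VM_REGION_TOP_INFO_COUNT;
            mach_port_t object_name = MACH_PORT_NULL;

            kern_return_t kr = mach_vm_region(mach_task_self(), &addr,
                                              &size, VM_REGION_TOP_INFO,
                                              (vm_region_info_t)&info,
                                              &count, &object_name);
            if (kr != KERN_SUCCESS)
                break;  /* no more regions */

            printf("0x%llx-0x%llx refs=%u private=%u shared=%u\n",
                   (unsigned long long)addr,
                   (unsigned long long)(addr + size),
                   info.ref_count,
                   info.private_pages_resident,
                   info.shared_pages_resident);

            addr += size;  /* continue behind this region */
        }
        return 0;
    }

Summing the private and shared resident page counts over all regions should roughly reproduce the two sizes in question.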
> But if the objects are bigger than what we have mapped, we may not
> be able to make dead accurate accounting.
I'm not sure what you are trying to say with that sentence. If an object is 200 pages and 20 of them are in fact present in physical memory (180 are not), then the accounting is perfectly correct when it says 20 * page_size is the memory used by this object. It may not be my process that caused these 20 pages to be in physical memory, but since this object is mapped into my address space, I keep the memory object itself alive and thus I keep the 20 pages alive and in memory, don't I? So accounting these 20 pages to my process seems reasonable.
> Private and shared memory sizes are an even greater approximation
> than the resident memory size. But they return different info.
> The private and shared memory counts say "how many pages of the
> objects I have mapped are cached in memory", regardless of whether
> I've accessed them yet/lately, where the resident memory size says
> "how many page-table-entries does my process already have to
> quickly access those cached pages".
Okay, maybe I asked the wrong question here. Maybe the question
should rather be how to interpret these values.
E.g. what does it mean if a process has very little resident memory, but huge shared and huge private memory? It means I have large memory objects mapped into my address space that have many physical pages assigned to them, either private or shared, but I have not touched many of these pages lately or at all, right? So my process is not really using many of the pages of these objects; still, I keep the objects alive, and as long as they are alive a lot of physical pages stay claimed, because the objects need them to keep their process-private or process-shared data in memory. A real-life example: I load a big library, a huge one, that is in use by many other processes, too. The code segment of that library will be shared (COW). Since some other process uses this library heavily, many of these COW code pages are resident in physical memory (cached, so to speak, if you see all physical RAM as just a cache for the swap file or other mapped binary files). This will drive my shared memory value up to a huge number. However, if I never access any pages of the library's code segment, none of this memory is accounted towards my resident size. Is this interpretation more or less accurate?
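
That interpretation could at least be probed with mincore(2), which reports for every page of a mapping whether it is currently resident in physical memory, without touching the pages and thus without adding page-table entries to this process. A minimal sketch (illustrative only, file name taken from the command line):

    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s file\n", argv[0]);
            return 1;
        }

        int fd = open(argv[1], O_RDONLY);
        struct stat st;
        if (fd < 0 || fstat(fd, &st) < 0) {
            perror(argv[1]);
            return 1;
        }

        /* Map the file, but never read from the mapping. */
        size_t len = (size_t)st.st_size;
        void *base = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
        if (base == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        long pagesize = sysconf(_SC_PAGESIZE);
        size_t npages = (len + (size_t)pagesize - 1) / (size_t)pagesize;
        char *vec = malloc(npages);
        size_t resident = 0;

        /* One byte per page; MINCORE_INCORE means "in physical memory". */
        if (vec != NULL && mincore(base, len, vec) == 0) {
            for (size_t i = 0; i < npages; i++)
                if (vec[i] & MINCORE_INCORE)
                    resident++;
        }
        printf("%zu of %zu pages resident\n", resident, npages);
        return 0;
    }

If some other process keeps the file's pages hot, this reports them as resident even though we never faulted a single one of them in ourselves.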
I think I understood the case above, but how should I interpret a process that has a huge resident memory size while shared and private memory together are much smaller? What would be a real-life example of that? Why would this process have so many physical pages mapped into its virtual address space if all the objects it has mapped there have so few resident pages? What kind of pages would those be? Doesn't every page mapped into my process space need to belong to some memory object in the kernel? If so, how can I have more mapped pages than there are resident pages in all the memory objects I have mapped? This case seems impossible, but I'm almost positive I have seen it happen before.
One side question: a virtual page is just virtual. The opposite of a virtual page is... a physical one? If I run vmmap with -resident, it will print the virtual size and the resident (physical) size of every object. For the resident size, does it matter whether a page is currently really in memory or has been swapped out? Once swapped out, is the page still resident, or only virtual again? Or does it make a difference how it is swapped, e.g. whether it's anonymous memory swapped by the default pager or a mapped file paged out by the vnode pager?
--
Best Regards,
Markus Hanauska