Re: Problem using .zerofill / -segaddr to create very large segments
- Subject: Re: Problem using .zerofill / -segaddr to create very large segments
- From: Jay Reynolds Freeman <email@hidden>
- Date: Tue, 22 Sep 2009 22:12:05 -0700
Be patient with me, folks, I am not a Unix system expert and
don't play one on Apple TV (tm), either. :-) I am just a
garden-variety coder trying to link an application, so what
if I need a few hundred GByte ... :-)
Anyway ...
I ran vmmap on my "good" process (160 GByte shared segment);
I hope I have made the correct observations concerning its
voluminous output (I have never used vmmap before):
1) There are no regions of any kind, writable or otherwise,
anywhere near my mmapped chunk except one called "__LINKEDIT"
that starts immediately above the upper end of my chunk;
my chunk is mapped properly to the (hex) range
0000010000000000-0000012800000000.
2) In particular, there is a lot of framework, dylib and
other system stuff way at the high end of (48-bit) memory,
at addresses like 00007fffxxxxxxxx. Everything else (except
__LINKEDIT and my chunk) is way low, at addresses below
00000001c0000000. It appears that my process's __TEXT and
__DATA start at 0x100000000; there is lots of system __TEXT
and system __DATA way high, above 0x00007fff00000000.
3) The vmmap summary for the "good" process looks like this:
REGION TYPE          [ VIRTUAL]
===========          [ =======]
CG backing stores    [   2480K]
CG image             [    120K]
CG raster data       [     64K]
CG shared images     [   2260K]
Carbon               [   2876K]
CoreGraphics         [     16K]
IOKit                [  256.0M]
MALLOC               [   91.0M]
Memory tag=240       [      4K]
Memory tag=242       [     12K]
Memory tag=243       [      4K]
Memory tag=249       [    156K]
Memory tag=251       [      8K]
STACK GUARD          [   56.1M]
Stack                [   9232K]
VM_ALLOCATE          [   16.3M]
__DATA               [   9616K]
__IMAGE              [   1240K]
__LINKEDIT           [   30.8M]
__TEXT               [   79.6M]
__UNICODE            [    536K]
__WSS                [  160.0G]  <== My mmapped segment.
mapped file          [   31.2M]
shared memory        [   4396K]
4) The vmmap output from the 161-GByte version looks the
same, except that the high end of my chunk is now at
0000012840000000 (and that is where __LINKEDIT now
starts), and the vmmap summary shows that segment __WSS
has size 161.0G. Both changes are consistent with one
more 1-GByte .zerofill.
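Incidentally, the arithmetic in (1) checks out:
0x12800000000 - 0x10000000000 = 0x2800000000 bytes = 160 GByte.
For anyone who wants to confirm what vmmap shows without wading
through its output, here is a minimal sketch -- it assumes the
chunk's base address above and is meant to be called from inside
the process itself -- that asks Mach directly what is mapped there:

#include <stdio.h>
#include <mach/mach.h>
#include <mach/mach_vm.h>
#include <mach/mach_error.h>

/* Report the memory region at (or next above) the given address
   in the calling task; should agree with vmmap's __WSS line. */
static void report_region(mach_vm_address_t where)
{
    mach_vm_address_t addr = where;
    mach_vm_size_t size = 0;
    vm_region_basic_info_data_64_t info;
    mach_msg_type_number_t count = VM_REGION_BASIC_INFO_COUNT_64;
    mach_port_t object = MACH_PORT_NULL;

    kern_return_t kr = mach_vm_region(mach_task_self(), &addr, &size,
                                      VM_REGION_BASIC_INFO_64,
                                      (vm_region_info_t)&info,
                                      &count, &object);
    if (kr != KERN_SUCCESS) {
        fprintf(stderr, "mach_vm_region: %s\n", mach_error_string(kr));
        return;
    }
    /* For the 160-GByte build this should print
       0x10000000000 .. 0x12800000000, i.e. 0x2800000000 bytes. */
    printf("region at/above 0x%llx: 0x%llx .. 0x%llx (0x%llx bytes)\n",
           (unsigned long long)where, (unsigned long long)addr,
           (unsigned long long)(addr + size), (unsigned long long)size);
}

int main(void)
{
    report_region(0x0000010000000000ULL); /* base of my chunk */
    return 0;
}

Run standalone there is of course nothing at that address, so it
just reports whatever region lies next above; inside the
application it should print the same range vmmap does.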
I also ran "otool -l" on both executables; it too produced
voluminous output. I am not quite sure what to look for --
I have never used otool before, either -- but:
5) In both executables, each of the 160 (resp. 161) sections of
my big chunk -- each ".zerofill" -- has its own entry,
which typically looks like this:
Section
sectname __Wss156
segname __WSS
addr 0x00000109c0000000
size 0x0000000040000000 (past end of file)
offset 0
align 2^0 (1)
reloff 0
nreloc 0
flags 0x00000001
reserved1 0
reserved2 0
6) However, in each executable there appears to be only
one LC_SEGMENT_64 load command associated with segment
__WSS. In the "good" executable it is this:
Load command 3
cmd LC_SEGMENT_64
cmdsize 12872
segname __WSS
vmaddr 0x0000010000000000
vmsize 0x0000002800000000
fileoff 606208
filesize 0
maxprot 0x00000007
initprot 0x00000003
nsects 160
flags 0x0
And in the "bad" one it is this:
Load command 3
cmd LC_SEGMENT_64
cmdsize 12952
segname __WSS
vmaddr 0x0000010000000000
vmsize 0x0000002840000000
fileoff 606208
filesize 0
maxprot 0x00000007
initprot 0x00000003
nsects 161
flags 0x0
The vmaddrs, vmsizes, and nsects are in each
case correct. The otool output shows 19 load
commands for each of the two executables.
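For what it is worth, those numbers are also internally
consistent with the structure sizes in <mach-o/loader.h>: a
struct segment_command_64 is 72 bytes and each struct section_64
is 80 bytes, so 72 + 160*80 = 12872 and 72 + 161*80 = 12952,
exactly the two cmdsize values above; and the flags value
0x00000001 in each section entry of (5) is S_ZEROFILL. A minimal
check, in case anyone wants to verify the sizes on their own
machine:

#include <stdio.h>
#include <mach-o/loader.h>

int main(void)
{
    /* Each section contributes one struct section_64 to its
       segment's load command, after the fixed-size header. */
    printf("sizeof(struct segment_command_64) = %zu\n",
           sizeof(struct segment_command_64));       /* 72 */
    printf("sizeof(struct section_64) = %zu\n",
           sizeof(struct section_64));               /* 80 */

    /* The cmdsize values from the two otool -l dumps above: */
    printf("good: 72 + 160*80 = %d\n", 72 + 160 * 80);   /* 12872 */
    printf("bad:  72 + 161*80 = %d\n", 72 + 161 * 80);   /* 12952 */

    /* The vmsize difference is exactly one more 1-GByte section: */
    unsigned long long delta = 0x2840000000ULL - 0x2800000000ULL;
    printf("vmsize delta = 0x%llx bytes (%llu GByte)\n",
           delta, delta >> 30);                      /* 0x40000000, 1 */

    /* And section flags 0x00000001 in the dumps is S_ZEROFILL. */
    printf("S_ZEROFILL = 0x%x\n", S_ZEROFILL);       /* 0x1 */
    return 0;
}

So as far as I can tell the headers themselves are well-formed
in both executables.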
-- Jay Reynolds Freeman
---------------------
email@hidden
http://web.mac.com/jay_reynolds_freeman (personal web site)
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Darwin-dev mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden