Re: Problem using .zerofill / -segaddr to create very large segments
- Subject: Re: Problem using .zerofill / -segaddr to create very large segments
- From: Greg Parker <email@hidden>
- Date: Tue, 22 Sep 2009 17:42:09 -0700
On Sep 22, 2009, at 5:29 PM, Jay Reynolds Freeman wrote:
>> Ideally, you'd fix the runtime to support PIC code generation so you
>> can relocate segments for locality of reference to keep the address
>> space access patterns non-fragmented over time (via feedback).
> You are thinking relocatable binary files, I believe. Think "linked
> lists" and "structs with pointers to other structs", instead -- that
> is what is in this big memory-mapped block.
The traditional way to pic-ify such data structures is to store
relative offsets instead of absolute pointers.
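To make that concrete, here is a minimal C sketch of the technique
(the type and function names are mine, not from this thread): every
cross-reference inside the block is stored as a byte offset from the
block's base, so the structure survives being mapped at a different
address.

    #include <stddef.h>
    #include <stdint.h>

    /* A list node whose link is a byte offset from the block base,
     * not an absolute pointer, so the block is position-independent. */
    typedef struct node {
        int64_t next_off;   /* offset of the next node; 0 means "none" */
        int     value;
    } node;

    /* Turn a stored offset back into a pointer at the current base. */
    static node *node_at(void *base, int64_t off) {
        return off ? (node *)((char *)base + off) : NULL;
    }

    /* Turn a pointer into an offset suitable for storing in the block. */
    static int64_t offset_of(void *base, node *n) {
        return n ? (int64_t)((char *)n - (char *)base) : 0;
    }

Traversal then goes through node_at(base, n->next_off) instead of
n->next, and the base may differ from one run to the next.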
> I believe you, but this is fundamentally a static-linking problem,
> and I keep thinking that is precisely what the linker is supposed to
> do! Using run-time operations to solve a static linking problem just
> does not sound like the right thing to do ...
The linker and loader are not typically called upon to handle hundreds
of gigabytes of data. You may have found a bug in one of them. Or you
may have an error somewhere and not be getting as much error reporting
as you'd like.
One alternative to using hundreds of .zerofill sections is to use one
oversized __UNIXSTACK segment. Use -stack_addr and -stack_size to
specify a __UNIXSTACK segment big enough to hold your main thread's
stack plus your mmap data. The stack will start at the high address so
you can mmap at the low address. Note that you'll lose stack-overflow
protection unless you do more work. Be warned that -stack_addr may not
take the value you expect.
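A sketch of how that might be wired up, under the caveats above (the
addresses and sizes here are invented for illustration, and the
-stack_addr value in particular should be checked against what ld
actually does with it):

    /* Hypothetical link line reserving a huge __UNIXSTACK:
     *
     *   cc prog.c -o prog -Wl,-stack_addr,0x200000000000 \
     *                     -Wl,-stack_size,0x1F0000000000
     *
     * The main thread's stack sits at the high end of the reservation,
     * leaving the low end free for the program to claim with mmap. */
    #include <sys/mman.h>
    #include <stdio.h>

    #define WSS_BASE ((void *)0x10000000000ULL)  /* invented low address */
    #define WSS_SIZE (161ULL << 30)              /* 161 GBytes, as below */

    int main(void) {
        void *p = mmap(WSS_BASE, WSS_SIZE, PROT_READ | PROT_WRITE,
                       MAP_FIXED | MAP_PRIVATE | MAP_ANON, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        /* ... build the big data structures in p ... */
        return 0;
    }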
> That was for my .s file with 160 1-GByte sections. When I add another
> .zerofill, making 161 1-GByte sections, I get a size -l output that
> appears to be exactly the same, except it correctly lists 161 sections
> in Segment __WSS, for a total segment size of 172872433664.
>
> The first executable -- the one with 160 sections -- loads and runs
> fine. The second gives errors early in the load about not being able
> to load certain classes. This with *NO* other changes to anything.
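For reference, such a .s file would presumably contain a run of
.zerofill directives, one per section. A small C generator for a file
like that might look as follows; the segment name __WSS is from the
report above, but the section and symbol names are made up, and
0x40000000 is 1 GByte:

    #include <stdio.h>

    /* Emit a .s file with n 1-GByte .zerofill sections in segment
     * __WSS, similar to the test described above. */
    int main(void) {
        const int n = 161;   /* 160 works, 161 fails, per the report */
        for (int i = 0; i < n; i++)
            printf(".zerofill __WSS, __wss_%d, _wss_%d, 0x40000000\n",
                   i, i);
        return 0;
    }

With n = 161 the total segment size comes to 161 * 0x40000000 =
172872433664 bytes, matching the size -l output quoted above.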
What is the vmmap output of the bad and good versions?
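(For instance, something like

    vmmap <pid>

run against each process, comparing the regions in and around the
__WSS addresses; vmmap accepts a pid or a process name.)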
--
Greg Parker email@hidden Runtime Wrangler