Re: [Fwd: Re: execv bug???]
- Subject: Re: [Fwd: Re: execv bug???]
- From: Jonas Maebe <email@hidden>
- Date: Mon, 28 Jan 2008 17:30:18 +0100
On 28 Jan 2008, at 16:26, Dave Zarzycki wrote:
> On Jan 28, 2008, at 4:28 AM, Jonas Maebe wrote:
>> If it only hurt performance a bit I would never have changed the
>> code from using fork (which is what we used on all other *nix
>> ports) into using vfork in the first place. But a 25% to 40%
>> slowdown caused by 173 system calls in the process of compiling
>> about 180 kloc is astronomical in my view.
> Well, if performance really mattered, then multiple processes
> wouldn't be used. Debating fork() versus vfork() is like debating
> whether you can run faster with 10 or 20 pounds attached to your
> ankles. Either way, you're going to run slower than the guy with no
> weights attached to his ankles.
I'm calling the (default, system-supplied) assembler from the compiler
once per compiled source file. I think that is a perfectly normal use
case and not an example of using the wrong tools/APIs for the job
(AFAIK, the assembler does not support batch processing).
The people maintaining the Windows/Linux ports have created i386/
x86_64 internal assemblers for ELF/(PE)COFF (and recently also an
internal linker for those targets), but that was mainly because a
couple of years back, the only way to achieve dead code stripping with
the GNU linker was to put every symbol in a separate object file. And
assembling thousands of object files per source file indeed completely
kills performance, no matter how efficient fork/vfork/posix_spawn/
CreateProcess/... is (well, that and also some general hostility
towards depending on external tools, but I don't share that
sentiment).
When I did those tests, the total time spent on calling the assembler
and letting it assemble was 3.5 seconds (out of a total of 15 seconds
for compiling + assembling + linking) when using vfork on a 1.8GHz G5.
I just didn't consider it worth spending a lot of implementation,
debugging and future maintenance time on reducing those 3.5 seconds
for compiling 180 kloc of code to 1 or 2 seconds.
Writing an internal assembler is a non-trivial amount of work spent
reinventing the wheel, as far as I'm concerned (not to mention that
I'm not an ADC Select member and hence can only find out what's been
broken after new Xcode releases are out, and enough things break
already when using the external assembler/linker because we do some
things slightly differently than gcc, which are hence not caught by
Apple's internal testing).
> In any case, if you really want all sorts of forwards and backwards
> compatibility, our dynamic linker APIs will let you dynamically
> probe the presence of posix_spawn. That will let you use the API,
> but fall back to vfork (or fork or whatever else you like) on
> older releases.
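For reference, the probing Dave describes might look roughly like this
(a sketch assuming dlsym() and RTLD_DEFAULT are available, which, as
noted below, is itself not a given on older releases):

    #include <dlfcn.h>
    #include <spawn.h>
    #include <unistd.h>

    /* Sketch: look up posix_spawn at run time and fall back to
       vfork()/execve() when it isn't there. Assumes the binary was
       built against an SDK that provides <spawn.h>. */
    typedef int (*posix_spawn_fn)(pid_t *, const char *,
                                  const posix_spawn_file_actions_t *,
                                  const posix_spawnattr_t *,
                                  char *const [], char *const []);

    static pid_t spawn_child(const char *path, char *const argv[],
                             char *const envp[])
    {
        posix_spawn_fn spawn_fn =
            (posix_spawn_fn)dlsym(RTLD_DEFAULT, "posix_spawn");
        pid_t pid;
        if (spawn_fn != NULL)
            return spawn_fn(&pid, path, NULL, NULL, argv, envp) == 0
                   ? pid : -1;
        pid = vfork();                /* older release: fall back */
        if (pid == 0) {
            execve(path, argv, envp);
            _exit(127);
        }
        return pid;
    }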
Some dynamic linker APIs themselves are also not implemented in older
versions (<= 10.2 has no native dl*) and, IIRC, others have changed
behaviour (and deprecation status) over time. It's not that simple to
add this sort of stuff in a non-cluttering and cross-platform way.
I agree that in general this is indeed the way to go, but I guess I'm
mainly a bit miffed because:
a) our Linux and FreeBSD ports use only direct syscalls and no libc
stuff, because glibc breaks backwards binary compatibility every other
full moon, while syscalls are an extremely stable interface (we've
never had a newer Linux kernel break our old programs, and we've
supported Linux since 1995 or so; see the sketch after this list)
b) when I first asked about that on this list a long time ago (http://lists.apple.com/archives/darwin-development/2003/May/msg00185.html),
I was told that using syscalls on Darwin would be an extremely bad
idea (which I agree with)
c) I concluded from this that I should use libc instead as the system
interface, and that it would be just as stable (if not more so), since
Apple is a commercial company
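To illustrate (a): a direct Linux system call needs nothing from libc
at all; on x86_64 it is just the syscall instruction plus the kernel's
register convention (a hypothetical C example, not our actual Pascal
RTL code):

    #include <stddef.h>

    /* Hypothetical illustration of a raw Linux syscall on x86_64,
       bypassing libc entirely: write(2) is syscall number 1, with
       arguments passed in rdi, rsi and rdx. The syscall instruction
       clobbers rcx and r11. */
    static long raw_write(int fd, const void *buf, size_t count)
    {
        long ret;
        __asm__ volatile ("syscall"
                          : "=a" (ret)
                          : "a" (1L), "D" ((long)fd), "S" (buf),
                            "d" (count)
                          : "rcx", "r11", "memory");
        return ret;
    }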
The fact that even this interface may not remain backward compatible
is disappointing, however. When I first learned about the concept of
frameworks, I (naively) assumed this meant that breaking backwards
library compatibility would no longer be required, since you could
just bump the compatibility version so that all older programs could
transparently keep using the older version of the framework.
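(The version numbers dyld records can be inspected at run time with
NSVersionOfLinkTimeLibrary()/NSVersionOfRunTimeLibrary() from
<mach-o/dyld.h>; a small sketch. Note these report the libraries'
current versions; the compatibility version is the oldest interface a
library still promises to provide.)

    #include <mach-o/dyld.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Sketch: print the version of libSystem this binary was linked
       against versus the one it is running with. dyld packs versions
       as X.Y.Z in 16.8.8 bits. */
    int main(void)
    {
        int32_t lt = NSVersionOfLinkTimeLibrary("System");
        int32_t rt = NSVersionOfRunTimeLibrary("System");
        printf("libSystem: linked %d.%d.%d, running %d.%d.%d\n",
               lt >> 16, (lt >> 8) & 0xff, lt & 0xff,
               rt >> 16, (rt >> 8) & 0xff, rt & 0xff);
        return 0;
    }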
Obviously, for something like libSystem this is not a solution "for
free", since you'd still have to keep patching the old version to keep
it running on the new syscall interfaces if older syscalls are removed
or changed. But it's not a solution for higher-level frameworks either
if even libSystem isn't safe. So I'm starting to wonder how it is
supposed to work at all.
Jonas