Re: IP over FireWire Performance
- Subject: Re: IP over FireWire Performance
- From: "Justin C. Walker" <email@hidden>
- Date: Sat, 7 Jun 2003 16:02:36 -0700
On Thursday, June 5, 2003, at 10:15 PM, Wade Tregaskis wrote:
Any ideas as to why the FireWire throughput isn't higher? I've read
that it "can reserve up to 80% ... for one or more isochronous
channels." Does this mean I should never expect better than 640Mbps?
Any ideas why the FireWire receiver would consume so much more CPU in
kernel_task as compared to Ethernet? Could it be caused by differences
in interrupt handling or DMA support? I would expect Ethernet and
FireWire to perform similarly in hardware, considering both PHYs
connect to the U2 controller.
I'm not a driver or FireWire expert, so this is only my [slightly]
educated guess, but I'd say there are two primary problems here:
a) The FireWire controller doesn't have all the extra capabilities of
the ethernet controller, so there are probably quite a few things being
done in software 'emulation', if you like. Plus, it's probably tuned
for moving a handful of very large blocks, not thousands of tiny
ones*
b) The drivers simply aren't as tuned as those for the ethernet
device. Remember that most of Darwin's ethernet code is probably
derived almost entirely from well-tested, well-optimised code in the
various BSDs. Apple has, from what I can gather, written the
majority of its IP over FireWire driver from scratch. It certainly
borrows from the ethernet interface code, but there will
undoubtedly be a lot more the IP over FireWire driver has to do (like
emulating in software certain things normally done in hardware, as
mentioned above).
* = Why the heck does TCP still use 1500-byte MTUs? Shouldn't the packet
size have scaled along with the bandwidth? Surely the original packet
size was chosen to provide the best performance on the first networks...
why should it not be changed as appropriate for newer networks?
Neither TCP nor IP mandates packet sizes (up to a rough limit of 64 KB,
the maximum size of an IP datagram; TCP's window 'scaling' option lets
the *window*, not individual packets, grow beyond 64 KB). Packet size
is dictated by the underlying networks. The directly attached network
is generally used as the guide for packet size; but for TCP, there is
the Path MTU Discovery convention, which lets the sending TCP engine
figure out what the minimum MTU will be along the route to the
destination.
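As an aside, if you want to see what MTU a given interface is actually
using (the IP-over-FireWire interface shows up as fw0, if memory
serves), the standard BSD SIOCGIFMTU ioctl will tell you. A quick
sketch, typed straight into the mail client and untested, so caveat
emptor:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/ioctl.h>
#include <sys/sockio.h>   /* SIOCGIFMTU on Mac OS X / BSD */
#include <net/if.h>

/* Print the MTU of an interface, e.g. "./getmtu en0" or "./getmtu fw0". */
int main(int argc, char *argv[])
{
    struct ifreq ifr;
    int fd;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <interface>\n", argv[0]);
        return 1;
    }
    fd = socket(AF_INET, SOCK_DGRAM, 0);    /* any socket works for the ioctl */
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, argv[1], sizeof(ifr.ifr_name) - 1);
    if (ioctl(fd, SIOCGIFMTU, &ifr) < 0) {  /* ask the driver for its MTU */
        perror("SIOCGIFMTU");
        return 1;
    }
    printf("%s MTU: %d bytes\n", ifr.ifr_name, ifr.ifr_mtu);
    close(fd);
    return 0;
}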
As for why ethernet still uses the 1500-byte MTU, it's economics and a
desire for success. If you look at history, Token Ring, Token Bus, and
FDDI all had different (bigger) MTUs, but that became a headache for
switch/hub/bridge vendors, and the only way to make it work in a given
environment was to force all media to the same 1500-byte MTU that
Ethernet used.
For Gigabit Ethernet, there is an extension to the low-level protocol
that lets cooperating stations agree on a larger MTU. It hasn't gotten
much play, though, because of the same issues as above (you can't get
across a switch/hub/bridge with mismatched MTUs).
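If the gear on both ends supports it, you can ask for a bigger MTU
yourself with the companion SIOCSIFMTU ioctl (root required, and the
driver will refuse sizes the hardware can't handle). Same caveat as
the sketch above: untested.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/ioctl.h>
#include <sys/sockio.h>   /* SIOCSIFMTU on Mac OS X / BSD */
#include <net/if.h>

/* Try to set an interface's MTU, e.g. "./setmtu en0 9000" for jumbo
 * frames. Needs root; fails if the driver can't do that size. */
int main(int argc, char *argv[])
{
    struct ifreq ifr;
    int fd;

    if (argc != 3) {
        fprintf(stderr, "usage: %s <interface> <mtu>\n", argv[0]);
        return 1;
    }
    fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, argv[1], sizeof(ifr.ifr_name) - 1);
    ifr.ifr_mtu = atoi(argv[2]);            /* the size we'd like */
    if (ioctl(fd, SIOCSIFMTU, &ifr) < 0) {  /* hand it to the driver */
        perror("SIOCSIFMTU");
        return 1;
    }
    printf("%s MTU set to %d\n", ifr.ifr_name, ifr.ifr_mtu);
    close(fd);
    return 0;
}

I believe ifconfig's "mtu" argument does the same thing from the
command line.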
It's annoying, but I think the market leads the development community
to believe that sticking with 1500 bytes is an acceptable cost compared
to dealing with the effects of mismatched MTUs.
Regards,
Justin
--
Justin C. Walker, Curmudgeon-At-Large  *
Institute for General Semantics        | Men are from Earth.
                                       | Women are from Earth.
                                       |   Deal with it.
*--------------------------------------*-------------------------------*