site_archiver@lists.apple.com
Delivered-To: darwin-kernel@lists.apple.com

On Sep 23, 2008, at 10:20 PM, Michael Tüxen wrote:
> On Sep 23, 2008, at 11:46 PM, Terry Lambert wrote:
>> On Sep 23, 2008, at 9:44 AM, Michael Tüxen wrote:
>>> Dear all,
>>> is there a way to look at the output queue sizes of Ethernet interfaces? And can I configure these sizes?
>> Do you mean the MTU, or do you mean the ring buffer size?
> The ring buffer size... Actually, I found the description of Remote Desktop (http://www.apple.com/remotedesktop/specs.html), which states:
>   Output Statistics: Output Queue Capacity, Output Queue Size, Output Queue Peak Size, Output Queue Drop Count, Output Queue Output Count, Output Queue Retry Count, Output Queue Stall Count
> which looks like the numbers I'm interested in. I have neither a license for Remote Desktop, nor do I need it for a remote machine...

These are layer 7 statistics particular to this application. These are not the droids you're looking for.

> Maybe it is better to describe what I'm trying to understand. During testing I configured the Ethernet interfaces on two G5s to use 10 MBit/s (yes, 10). Then I do a bulk transfer of full-sized frames using SCTP. In parallel I measure the RTT with the ping tool. I observe a multi-second RTT, although no messages are lost. So I'm wondering whether there are queues in the IP or link layer which store the packets. If so, I would like to make them smaller to force some packet loss (so that congestion control kicks in).

I assume from your statement on RTT for ICMP echo datagrams (ping) that you've discovered that increasing the MSL or the bandwidth-delay product is not the same as packet collisions and drops resulting from actual congestion. Making the changes you want to make, at the level you want to make them, will not properly simulate network congestion.

There are a number of hardware and software solutions that can simulate congestion. Some of them work by putting a lot of machines (generally ~8) on the same segment channel -- either the data is unswitched, or there is a single mux channel between the target machines at some point in the network topology -- and some of them work by putting a machine with two cards between the target machines in order to simulate an intermediate network. Without advocating a particular product, here are some examples (google for more):

<http://www.candelatech.com/index1.html>
<http://www.shunra.com/network_emulation.aspx>

The only free software that actually does congestion simulation correctly (without requiring multiple machines on an unswitched segment in order to get actual congestion) is NIST Net:

<http://snad.ncsl.nist.gov/nistnet/>

Unfortunately, it has been poorly maintained since 2005, when it became a (rather idle) SourceForge project.

There are a number of research projects that have simulated network congestion as well, but in general they have not made their simulation source code available to third parties.

If you have a router (not just a switch) in between, and some understanding of OSPF, you could technically simulate congestion by randomizing the input queue depth on the router, but this will not work for you on a host machine. This technique is used in a number of research papers, and the authors admit that it makes a number of bad assumptions compared to "real world" network traffic.
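As an aside on the original question about inspecting transmit-side interface counters from user space: a minimal sketch using the BSD/Mac OS X `netstat` (flag support varies by release, and `en0` is just an example interface name). These show driver-level counters only, not the per-queue capacity/peak/stall numbers Remote Desktop reports:

```shell
# Show per-interface packet, error, and collision counters.
netstat -i

# -b adds input/output byte counts; -I restricts the display to
# a single interface (en0 here is an example).
netstat -ib -I en0

# Sample traffic on one interface once per second; watching this
# during the bulk transfer shows whether output errors accumulate.
netstat -w 1 -I en0
```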
Theoretically, "tc" on Linux or DummyNet on the BSDs would also be able to do this, but in practice all they really do is latency and RED-style faking-out, which won't trigger congestion control unless you carefully craft what they do in order to trigger it. If you do that, then you have no assurance that your congestion control will actually trigger under real congestion conditions.

Also, before I go: it's not clear whether you are using a native SCTP (i.e., you've written a protocol family for Darwin), a tunnelled one (over UDP), or a translated one (over TCP). In the second case you are unlikely to see real-world congestion control behaviour, and in the last one you are unlikely to see congestion control kick in at all (unless it's TCP's congestion control rather than SCTP's).

-- Terry
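For reference, the DummyNet configuration mentioned above -- shrinking a pipe's queue so that tail drops occur on a 10 MBit/s link -- would look roughly like this on an ipfw-era system (a sketch; the rule number, pipe number, and en0 are placeholders, and as noted this produces tail drop at a shaper rather than genuine congestion):

```shell
# Requires root and ipfw+dummynet (FreeBSD, or Mac OS X of this era).
# Push all outbound traffic on en0 through dummynet pipe 1.
ipfw add 100 pipe 1 ip from any to any out via en0

# Emulate a 10 Mbit/s link whose queue holds only 5 packets, so a
# sustained bulk transfer overflows the queue and packets are dropped.
ipfw pipe 1 config bw 10Mbit/s queue 5

# Inspect the pipe (the output includes drop counts).
ipfw pipe 1 show

# Remove the rule when done.
ipfw delete 100
```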