Re: output queue size
- Subject: Re: output queue size
- From: Michael Tüxen <email@hidden>
- Date: Wed, 24 Sep 2008 22:03:43 +0200
Hi Terry,
thank you very much for your help. Comments in-line.
Best regards
Michael
On Sep 24, 2008, at 9:11 PM, Terry Lambert wrote:
On Sep 23, 2008, at 10:20 PM, Michael Tüxen wrote:
On Sep 23, 2008, at 11:46 PM, Terry Lambert wrote:
On Sep 23, 2008, at 9:44 AM, Michael Tüxen wrote:
Dear all,
is there a way to look at the output queue sizes of Ethernet
interfaces?
And can I configure these sizes?
Do you mean the MTU or do you mean the ring buffer size?
ring buffer size... Actually I found the description of Remote
Desktop
(http://www.apple.com/remotedesktop/specs.html) which states:
Output Statistics: Output Queue Capacity, Output Queue Size,
Output Queue Peak Size, Output Queue Drop Count,
Output Queue Output Count, Output Queue Retry Count,
Output Queue Stall Count
which looks like the numbers I'm interested in. However, I neither
have a Remote Desktop license nor do I need it for a remote
machine...
These are layer 7 statistics particular to this application. These
are not the droids you're looking for.
OK.
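For completeness: the only per-interface numbers I seem to be able to
get through a public API are the counters in struct if_data, via
getifaddrs(). Below is a minimal sketch of my own (nothing to do with
the Remote Desktop statistics); as far as I can tell it reports
packet/error/drop counters and the MTU, but not the current output
queue depth.

/* Sketch: dump per-interface counters using the public getifaddrs()
 * API. For AF_LINK entries, ifa_data points to a struct if_data
 * (net/if_var.h on Darwin), which carries packet/error/drop counters
 * and the MTU, but not the instantaneous output queue depth. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <net/if.h>
#include <net/if_var.h>
#include <ifaddrs.h>

int main(void)
{
    struct ifaddrs *ifap, *ifa;

    if (getifaddrs(&ifap) != 0) {
        perror("getifaddrs");
        return 1;
    }
    for (ifa = ifap; ifa != NULL; ifa = ifa->ifa_next) {
        if (ifa->ifa_addr == NULL || ifa->ifa_addr->sa_family != AF_LINK)
            continue;
        struct if_data *ifd = (struct if_data *)ifa->ifa_data;
        if (ifd == NULL)
            continue;
        printf("%s: mtu %lu opackets %lu oerrors %lu iqdrops %lu\n",
               ifa->ifa_name,
               (unsigned long)ifd->ifi_mtu,
               (unsigned long)ifd->ifi_opackets,
               (unsigned long)ifd->ifi_oerrors,
               (unsigned long)ifd->ifi_iqdrops);
    }
    freeifaddrs(ifap);
    return 0;
}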
Maybe it is better to describe why I'm trying to understand....
During testing I have configured the Ethernet interfaces on two G5s
to 10 MBit/sec (yes, 10). Then I'm doing a bulk transfer of
full-sized frames using SCTP. In parallel I'm measuring the RTT with
the ping tool.
I'm observing a multi-second RTT, although no messages are lost. So
I'm wondering if there are queues in the IP or link layer which
store the packets. If so, I would like to make them smaller
to force some packet loss (so that congestion control kicks in).
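Just to put a rough number on what such a queue would mean (the
1500-byte frame size here is my assumption): at 10 MBit/sec every
full-sized frame sitting in front of a ping adds about 1.2 ms, so a
multi-second RTT without any loss would correspond to something like
a thousand or more buffered frames along the path. A tiny sketch of
that arithmetic:

/* Back-of-the-envelope: extra delay contributed by N full-sized
 * frames queued ahead of a packet on a 10 MBit/sec link. Frame size
 * and the queue depths below are assumptions for illustration only. */
#include <stdio.h>

int main(void)
{
    const double link_bps    = 10e6;        /* 10 MBit/sec */
    const double frame_bits  = 1500.0 * 8;  /* full-sized frame */
    const double per_frame_s = frame_bits / link_bps;  /* ~1.2 ms */

    for (int n = 100; n <= 1600; n *= 2)
        printf("%4d queued frames -> %6.0f ms added delay\n",
               n, n * per_frame_s * 1000.0);
    return 0;
}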
I assume from your statement on the RTT for ICMP echo datagrams
(ping) that you've discovered that increasing the MSL or the
bandwidth-delay product is not the same as the packet collisions and
drops resulting from actual congestion. Making the changes you want
to make, at the level you want to make them, will not properly
simulate network congestion.
I do not want to simulate congestion. I have done more testing.
When using ping on the data sink, the RTT is smaller. On the data
source side the ICMP handling is done in the kernel, so I assume that
process scheduling also has some effect.
I also looked at the wire and figured out that the ACKs are sent for
every other data packet.
So the SCTP delay (about 300 messages) must come from a delay between
the Ethernet card receiving the ACKs and the transport stack (SCTP)
processing them. There must be a queue for about 150 small packets
somewhere...
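One candidate for such a queue would be the IP input queue. Here is
the check I would try; the sysctl names are the FreeBSD ones
(net.inet.ip.intr_queue_maxlen / intr_queue_drops), and whether the
Darwin kernel exports them at all is just an assumption on my part;
the lookup simply fails if the OID does not exist.

/* Sketch: query the IP input queue limit and drop counter by sysctl
 * name. These OIDs exist on FreeBSD; whether this Darwin kernel
 * exposes them is an assumption, so a failed lookup is reported
 * instead of treated as an error. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/sysctl.h>

static void show(const char *name)
{
    int value = 0;
    size_t len = sizeof(value);

    if (sysctlbyname(name, &value, &len, NULL, 0) == 0)
        printf("%s = %d\n", name, value);
    else
        printf("%s: not available on this kernel\n", name);
}

int main(void)
{
    show("net.inet.ip.intr_queue_maxlen");
    show("net.inet.ip.intr_queue_drops");
    return 0;
}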
There are a number of hardware and software solutions that can
simulate congestion; some of them work by putting a lot of
(generally ~8) machines on the same segment channel -- either the
data is unswitched, or there is a single mux channel between the
target machines at some point in the network topology, and some of
them work by putting a machine with two cards in between the target
machines in order to simulate an intermediate network.
Without advocating a particular product, here are some examples
(google for more):
<http://www.candelatech.com/index1.html>
<http://www.shunra.com/network_emulation.aspx>
The only free software that actually does congestion simulation
correctly (without requiring multiple machines on an unswitched
segment in order to get actual congestion) is NIST Net:
<http://snad.ncsl.nist.gov/nistnet/>
Unfortunately, it has been poorly maintained since 2005, as it has
become a (rather idle) SourceForge project.
I have used DUMMYNET for that in the past and was quite happy with it.
To be clear, I want to understand
why the transport layer has about 300 messages in flight. It must be
some buffering on the receive path while processing the ACKs (called
SACKs in SCTP). Any idea? Since every other packet is SACKed,
I'm talking about 150 small packets (20 bytes IP header + 12 bytes
SCTP common header + 20 bytes SACK chunk each).
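To put a number on it (the Ethernet framing overhead is my
assumption): a 52-byte SACK datagram is roughly 90 bytes on the wire
including header, FCS, preamble and inter-frame gap, so 150 of them
occupy only about 10 ms of wire time at 10 MBit/sec. The delay can
therefore not be the SACKs' own serialization; they must be sitting
in a queue between the card and the SCTP input processing.

/* Back-of-the-envelope for 150 queued SACKs at 10 MBit/sec.
 * Packet sizes are from the mail; the Ethernet framing overhead
 * (header, FCS, preamble, inter-frame gap) is my assumption. */
#include <stdio.h>

int main(void)
{
    const int    sacks      = 150;
    const int    ip_bytes   = 20 + 12 + 20;  /* IP + SCTP common header + SACK chunk */
    const int    wire_bytes = ip_bytes + 14 + 4 + 8 + 12;
    const double link_bps   = 10e6;

    printf("one SACK on the wire: %d bytes, %.1f us\n",
           wire_bytes, wire_bytes * 8 / link_bps * 1e6);
    printf("%d SACKs back to back: %.1f ms\n",
           sacks, sacks * wire_bytes * 8 / link_bps * 1e3);
    return 0;
}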
There are a number of research projects that have simulated network
congestion as well, but in general, they have not made their
simulation source code available to third parties.
If you have a router (not just a switch) in between, and some
understanding of OSPF, you could technically simulate congestion by
randomization of the input queue depth on the router, but this will
not work for you on a host machine. This is a technique used in a
number of research papers, and the authors admit to this technique
having a number of bad assumptions compared to "real world" network
traffic. Theoretically, "tc" on Linux or "DummyNet" on the BSDs
would also be able to do this, but in practice, all they really do
is latency and RED-style faking out, which won't trigger congestion
control unless you carefully craft what they do in order to trigger
them. If you do that, then you've no assurance that under real
congestion conditions your congestion control is going to actually
trigger.
Also, before I go, it's not clear if you are using a native SCTP
(you've written a protocol family for Darwin), a tunnelled one (over
UDP) or a translated one (over TCP). In the case of the second one,
you are unlikely to see
I'm using a native SCTP implemented as an NKE (using an unsupported
API of Leopard). Actually, the implementation shares most of its code
with the SCTP kernel implementation in FreeBSD 7.0.
real world congestion control behaviour, and in the last one, you
are unlikely to see congestion control kick in at all (unless it's
TCP congestion control, not the SCTP congestion control).
I expected the queueing to happen on the data sender side, but it is
the receiving side (receiving the SACKs) of the data source.
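For anyone reading along who wants to reproduce the setup: with the
NKE loaded, the stack is driven through the ordinary socket API, the
same way as the FreeBSD implementation it is derived from. A minimal
one-to-one style sketch (peer address, port and the IPPROTO_SCTP
fallback definition are placeholders/assumptions):

/* Minimal one-to-one SCTP client sketch. Assumes the SCTP NKE
 * registers IPPROTO_SCTP with the socket layer as FreeBSD does.
 * Peer address and port are placeholders. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

#ifndef IPPROTO_SCTP
#define IPPROTO_SCTP 132   /* IANA value; normally from <netinet/in.h> */
#endif

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_SCTP);
    if (fd < 0) {
        perror("socket(IPPROTO_SCTP)");
        return 1;
    }

    struct sockaddr_in peer;
    memset(&peer, 0, sizeof(peer));
    peer.sin_family = AF_INET;
    peer.sin_port   = htons(5001);                   /* placeholder port */
    inet_pton(AF_INET, "192.0.2.1", &peer.sin_addr); /* placeholder peer */

    if (connect(fd, (struct sockaddr *)&peer, sizeof(peer)) < 0) {
        perror("connect");
        close(fd);
        return 1;
    }

    const char msg[] = "hello over SCTP";
    if (send(fd, msg, sizeof(msg), 0) < 0)
        perror("send");

    close(fd);
    return 0;
}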
-- Terry