Re: quantity in send queue ?
- Subject: Re: quantity in send queue ?
- From: Justin Walker <email@hidden>
- Date: Wed, 28 Apr 2004 17:24:02 -0700
On Apr 28, 2004, at 16:32, Nicolas Berloquin wrote:
[snip]
>> I forget - is this TCP or UDP? Trying to outsmart TCP is a waste of
>> effort, and generally doesn't do what you want. If it's UDP, no data
>> is stored in socket buffers - either it gets sent, or it gets
>> dropped, down in the driver layer.
>
> This is TCP. I'm not completely sure that I'm trying to outsmart TCP.
That's what it sounds like, from what you say below. The TCP algorithms
are fairly sophisticated, and rely on information involving the amount
of data available (in the socket buffer) locally; the amount of buffer
space remotely (indicated by the window advertised by the remote end);
and the rate at which data enters and leaves the network (indicated by
the progress of ACKs coming back from the remote).
> The more I think about it, the more I think it would be convenient for
> me to know when the send queue is empty.
I haven't seen this yet, but perhaps I'm being dense.
> I have n peers. I'm sending streams of data to each of them (and
> receiving too, but that's another story).
> I need to be able to set the max kB uploaded per second overall.
> So my basic algorithm is: divide the available rate by the number of
> peers, and by the number of times the loop passes per second, and tell
> each peer that they can send total/numPeers/times-per-second bytes.
> Each socket is set up (right now) as non-blocking. It would be very
> inconvenient for me if the sockets were blocking.
You definitely want non-blocking.
> Therefore, I only know the TCP send buffer is full when I get
> EWOULDBLOCK back from my send() call. And at that time, the send
> queue is usually around 65 kbytes.
Actually, this indicates two things: you have filled up your socket
buffer to the size reserved for you at initialization time; and the
remote end has told the local end not to send any more (either by
closing the advertised window to zero; or by not acking quickly). A
glitch in this is that the ACKs could be getting clobbered/lost between
receiver and sender, or there is packet loss on the send side.
The end result is that you can never "know" what the state of your
connection is; you can only guess. Since you are not in the kernel,
you are not privy to most of the information received there. This is
part of what I mean by trying to outsmart TCP.
> With such big buffers lying dormant, the potential for a few kB/s of
> extra data is great, and that's what I observe with bandwidth
> monitoring software: instead of, say, a max rate of 22 kB/s, I get
> 4-5 peaks over 32 kB every 30 secs.
Is this across the board or for selected connections? Can you tell
whether packets are being lost? Is there one process sending to n
peers, or is it 1-1?
> So this led me into thinking that I needed to know, when I do my
> send() (or even right before), whether the previous slice of bytes
> was sent (otherwise, I can determine that the send buffer will grow,
> and give up on this peer for the current loop, then test next time
> around, etc).
Like I said, this won't help that much.
> Hence my questions about knowing the amount of data inside the TCP
> send queue, or whether the send queue is empty.
Note that you typically want to keep the send queue full, because of
TCP's penchant for sending full segments. If you want to get really
fancy, look up Nagle's Algorithm, and check out a copy of Stevens's
"Unix Network Programming, v1. 2nd Ed". (There is a 3rd Ed. out, by
someone else, and I have not looked at it to know how good it is).
> Another possibility would be to know when the ACK is sent back from
> the peer. And I don't know how to do that with the technique I'm
> using (non-blocking socket, with send()).
> I tried setting up a write callback, and setting a writeable flag to
> false right after each write, only to set it back to true in the
> callback, but I believe that I fall into the same problem with data
> queued up in the TCP send buffer (my callback is called even while
> the buffer is filling up).
There is no way that you can know when an ACK is returned by the
remote. You aren't operating in the kernel, and the kernel does not
publish that information.
> So now I'm stuck in a loop ;-)
A maze of twisty little passages, all alike. Another part of the rush
of programming.
I would suggest that, before you invest a lot of effort in trying to
beat TCP into submission and invent all sorts of fancy tricks that will
probably not work as well as you hope, you invest the time to figure
out why you see those bandwidth bursts. Either I'm being dense (a not
infrequent occurrence), or you haven't explained exactly why you want
to "fix" this. Do the bursts take away from overall performance
somehow? Affect 'fairness'? Other issues?
Hope that helps.
Regards,
Justin
--
Justin C. Walker, Curmudgeon-At-Large  *
Institute for General Semantics        | It's not whether you win or lose...
                                       | It's whether *I* win or lose.
*--------------------------------------*-------------------------------*
_______________________________________________
macnetworkprog mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/macnetworkprog
Do not post admin requests to the list. They will be ignored.