Sorry, you're not even close. Data is still being read into buffers
regardless of whether or not your app reads it, until the receive
window fills, at which point TCP flow control takes over. You have to
do this at a lower level, mucking with TCP congestion control. I'd
suggest Richard Stevens' books on TCP/IP.
So is that the (only) way all desktop apps implement it? (e.g.
Azureus comes to mind, which is written in Java.)
It sounds like you want to implement rate limiting in the application
layer. The "traditional family values" approach to this problem is
not to limit the rate at which you *read* octets, but rather to limit
the rate at which you *write* them.
The algorithm you want is called a "token bucket". Every N
milliseconds or so, you put a constant quantity of tokens "into" the
bucket. If the bucket "overflows" you dump the "extra" tokens on the
floor. Whenever you have octets to write, you pull enough tokens out
of the bucket to cover the cost of transmitting them. If there
aren't enough tokens in the bucket, you queue whatever octets you
can't afford to transmit. If the holding queue has octets in it when
it's time to add tokens to the bucket, you spend the tokens
immediately instead by transmitting from the queue. If the holding
queue gets too long, then you either need to throttle back on the
octet source or decide which octets to drop.
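The algorithm described above can be sketched roughly like this — a minimal token bucket in Python (the class name, parameters, and the use of wall-clock refill instead of a fixed N-millisecond timer are my own choices, not from the original post; a real sender would also need the holding queue and drop policy described above):

```python
import time

class TokenBucket:
    """Refill at `rate` tokens (octets) per second, up to `capacity`.
    Tokens that would overflow the bucket are dropped on the floor."""

    def __init__(self, rate, capacity):
        self.rate = rate          # octets of credit added per second
        self.capacity = capacity  # bucket size; bounds the largest burst
        self.tokens = capacity    # start with a full bucket
        self.last = time.monotonic()

    def _refill(self):
        # Equivalent to adding a constant quantity every N ms,
        # but computed lazily from elapsed time.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now

    def consume(self, n):
        """Try to spend n tokens to cover transmitting n octets.
        Returns True if allowed; False means queue the octets."""
        self._refill()
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False
```

A write loop would call `consume(len(chunk))` before each `send()`, and park unaffordable chunks on a queue to retry once enough credit has accumulated.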
james woodyatt <email@hidden>
member of technical staff
apple computer, inc.
Macnetworkprog mailing list (email@hidden)