Re: UDP Rate
- Subject: Re: UDP Rate
- From: Zack Morris <email@hidden>
- Date: Fri, 27 Feb 2004 16:53:51 -0700
> As I indicated in my previous response, I don't know what success means
> here. Note that, since UDP won't tell you if packets get dropped at
> any point prior to transmission, you can't really measure success
> directly. Also, this is affected by media speed. If you are blasting
> packets from your app, the system will do its best to handle them, but
> it is limited by media speed.
> ...
>
> Regards,
>
> Justin
------------------
> Below is the actual code that I use in my socket thread.
> ...
> -- Eric Lengyel
------------------
> The maximum size for outgoing UDP datagrams is controlled by the sysctl
> net.inet.udp.maxdgram. The default value is 9216.
>
> Vincent
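
(A minimal, untested sketch of checking that limit at runtime with sysctlbyname():)

#include <stdio.h>
#include <sys/types.h>
#include <sys/sysctl.h>

int main(void)
{
    int    maxdgram = 0;
    size_t len = sizeof(maxdgram);

    /* net.inet.udp.maxdgram caps the size of a single outgoing UDP datagram */
    if (sysctlbyname("net.inet.udp.maxdgram", &maxdgram, &len, NULL, 0) == 0)
        printf("net.inet.udp.maxdgram = %d\n", maxdgram);   /* default is 9216 */
    else
        perror("sysctlbyname");

    return 0;
}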
------------------
> Perform your own tests, but here's my experience:
>
> 100MBit Ethernet LAN - 99.9% success
> US only Cable or DSL - 98% success
> Analog Modems - 95% success (very sensitive to flooding tho)
> Transatlantic - varies: 97% on a good link, 85% on a bad one
> ...
>
> Good luck,
>
> Matt
Hey, thanks guys! To find the success rate, I count how many packets I
send every "x" seconds (in my case 1-3, still testing), and each time I resend
a packet I add 1 to a count stored with it in the queue. As I take packets
out of the queue when they are acked, I add those counts up. So I may send
10 packets, with half of them getting resent, giving me 15 packets in 3
seconds at 512 bytes of data each. I tried to send 15x512 = 7680 bytes and
10x512 = 5120 bytes made it through, so that's a 67% success rate. Since I am
under 75%, I throttle my send rate down by 10%. My tests are actually a little
more complicated than this, because I compare "now" against the timestamp in
each packet to get an accurate percentage, but it works. I am assuming
these as my maximum packet size and payload; do they seem right?
#define kMaxModemPacketSize  576
#define kMaxModemPayloadSize 534 // assumes 576 bytes total: -20 IP, -8 UDP, -14 Ethernet
                                 // (or -8 PPP; actually 5-7 bytes for PPP, but play it safe)
I don't want to be even 1 byte over, because I can't afford to have packets
split and my success rate multiplied down toward 0! I have a lot of header
info that goes in each packet (my own 32-bit checksum, sequence number, game
identifier, etc.), so I limit the user to 512 bytes. Is there a way to check
whether a packet has been split?
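Putting the above together, the send-side bookkeeping boils down to roughly
this (a simplified sketch; the function and variable names are placeholders,
not my actual code):

#include <stddef.h>

#define kPayloadBytes 512            /* user data per packet, under kMaxModemPayloadSize */

static long  gBytesAttempted = 0;    /* every send, including resends: e.g. 15 x 512 = 7680 */
static long  gBytesAcked     = 0;    /* only what the other side acked: e.g. 10 x 512 = 5120 */
static float gSendsPerSecond = 10.0f;

/* Called once per packet handed to sendto(); resends call it again. */
void NoteSend(size_t payloadLen)
{
    if (payloadLen <= kPayloadBytes)          /* never risk having the packet split */
        gBytesAttempted += (long)payloadLen;
}

/* Called when an ack removes a packet from the queue. */
void NoteAck(size_t payloadLen)
{
    gBytesAcked += (long)payloadLen;
}

/* Called every "x" seconds (1-3 in my tests). */
void UpdateSendRate(void)
{
    if (gBytesAttempted > 0) {
        float success = (float)gBytesAcked / (float)gBytesAttempted;  /* 5120/7680 = 67% */
        if (success < 0.75f)
            gSendsPerSecond *= 0.90f;         /* under 75% -> throttle down by 10% */
    }
    gBytesAttempted = gBytesAcked = 0;        /* start the next window */
}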
As for the pipe, my Idle() loop already just puts the packets on an
atomic queue, so I think I can move it into another thread. The hard part is
that the loop also uses a few variables from the main loop, and I will get
nightmares trying to keep it all atomic. The fact that both threads can run at
once on OS X, versus the interrupt nature of OS 9, makes things very ugly. I
think I will check for OS X and put a mutex around all the routines, but I
sure hope I don't miss one, bleh. Really, that thread will just be for
bouncing my ping and clock-sync packets back.
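Roughly, the plan is to wrap every routine that touches the shared queue and
variables in one mutex, something like this (a simplified sketch with
placeholder type and function names):

#include <pthread.h>

typedef struct Packet      Packet;               /* placeholders for my real types */
typedef struct PacketQueue PacketQueue;
void EnqueuePacket(PacketQueue *q, Packet *p);   /* the existing, unlocked routine */

static pthread_mutex_t gNetLock = PTHREAD_MUTEX_INITIALIZER;

/* Every routine shared between the socket thread and the main loop takes
 * the same lock, since on OS X both threads really can run at once. */
void EnqueuePacketLocked(PacketQueue *q, Packet *p)
{
    pthread_mutex_lock(&gNetLock);
    EnqueuePacket(q, p);
    pthread_mutex_unlock(&gNetLock);
}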
I am using the SO_TIMESTAMP option on incoming packets, so perhaps I
don't need to react to them right away. It would also be nice not to have to
remember to call Idle() in my main loop, although I do call Idle() from send
and receive (no more often than every 10 ms), so that might not matter too
much. I was hoping that my windows would be long enough that I wouldn't have
to call Idle() very often, but it turns out that 512 x 16 x 100 = 819,200
bytes/sec is my theoretical maximum. That should work for everything but
streaming video, LOL. But I am only getting 350k/sec in my tests, and I think
it is because the GUI slows the main loop down to 50 fps instead of 100. And
if my game runs at 30 fps, that will slow it even more, to about 250k/sec. So
basically 50 players downloading at 5k/sec. That's another can of worms,
though.
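For what it's worth, pulling the kernel receive time off an incoming packet
looks roughly like this (a trimmed-down sketch; it assumes SO_TIMESTAMP was
already enabled with setsockopt(), and RecvWithTimestamp is a made-up name):

#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/uio.h>

/* Returns bytes read; *stamp gets the kernel's receive time if present. */
ssize_t RecvWithTimestamp(int sock, void *buf, size_t len, struct timeval *stamp)
{
    struct iovec   iov = { buf, len };
    char           ctrl[CMSG_SPACE(sizeof(struct timeval))];
    struct msghdr  msg;
    struct cmsghdr *cm;
    ssize_t        n;

    memset(&msg, 0, sizeof(msg));
    msg.msg_iov        = &iov;
    msg.msg_iovlen     = 1;
    msg.msg_control    = ctrl;
    msg.msg_controllen = sizeof(ctrl);

    n = recvmsg(sock, &msg, 0);
    if (n >= 0) {
        cm = CMSG_FIRSTHDR(&msg);
        if (cm != NULL && cm->cmsg_level == SOL_SOCKET && cm->cmsg_type == SCM_TIMESTAMP)
            memcpy(stamp, CMSG_DATA(cm), sizeof(*stamp));   /* when the kernel saw it */
    }
    return n;
}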
One more thing: the modem is extremely stuttery. I think it buffers the
data, because I get rates of around 12k a second for a while (even uploading
to the ISP, which should be limited to about 3k/s?!) and then many seconds of
nothing, which works out to basically 500 BYTES/sec. Does anyone have a
solution for throttling this besides manually telling it to send a packet
only every 1/5 of a second or so? I thought I had a good scheme that limited
it to (current rate)/(current duration), so if I am at 5k/sec it would send
512-byte packets no more than every 1/10 of a second, but evidently that is
not good enough? Or maybe my code is broken?
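My current pacing check boils down to roughly this (a simplified sketch;
Now() and the variable names are placeholders for my real timer and state):

#include <stddef.h>

extern double Now(void);                       /* placeholder: current time in seconds */

static double gTargetBytesPerSec = 5120.0;     /* e.g. 5k/sec over the modem */
static double gBytesSentTotal    = 0.0;
static double gStartTime         = 0.0;        /* set to Now() when the connection opens */

/* Only let a packet out if the running average (bytes so far / elapsed time)
 * stays under the target.  512-byte packets at 5120 bytes/sec end up spaced
 * about 1/10 of a second apart. */
int OkToSendNow(size_t packetBytes)
{
    double elapsed = Now() - gStartTime;

    if (elapsed <= 0.0)
        return 1;

    if ((gBytesSentTotal + (double)packetBytes) / elapsed <= gTargetBytesPerSec) {
        gBytesSentTotal += (double)packetBytes;
        return 1;
    }
    return 0;                                  /* leave it on the queue for now */
}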
Sorry for the long letter, and thanks for all your help,
----------------------------------------------------------------------------
Zack Morris Z Sculpt Entertainment This Space
email@hidden
http://www.zsculpt.com For Rent
----------------------------------------------------------------------------
If the doors of perception were cleansed, everything would appear to man as
it is, infinite. -William Blake, The Marriage of Heaven and Hell
_______________________________________________
macnetworkprog mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/macnetworkprog
Do not post admin requests to the list. They will be ignored.
References:
  Re: UDP Rate (From: Justin Walker <email@hidden>)