Re: data lost in AF_INET sockets?
- Subject: Re: data lost in AF_INET sockets?
- From: Josh Graessley <email@hidden>
- Date: Tue, 26 Mar 2002 11:02:42 -0800
As Vincent said, there is no flow control with UDP. There are no
retransmissions either: if data gets lost, UDP won't automatically resend
it. That's not a huge deal with local sockets; the lack of flow control is
what's really causing problems. Another option, if you really want to go
with a protocol that works over IP, is to use TCP. TCP has flow control and
handles retransmissions, so you wouldn't have to roll your own. It also
guarantees that the data will be delivered in the same order it was sent.
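A minimal sketch of the TCP behavior described above, using Python's socket module as a stand-in for the BSD socket API (the demo function and its names are illustrative, not from the original thread). sendall() blocks when the receiver falls behind, which is exactly the flow control UDP lacks:

```python
import socket
import threading

def run_tcp_demo(payloads):
    """Send payloads over a local TCP connection; return what arrived."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    received = []

    def reader():
        conn, _ = srv.accept()
        while chunk := conn.recv(4096):   # recv returns b"" at EOF
            received.append(chunk)
        conn.close()

    t = threading.Thread(target=reader)
    t.start()

    cli = socket.create_connection(("127.0.0.1", port))
    for p in payloads:
        cli.sendall(p)       # blocks if the receiver can't keep up
    cli.close()              # EOF tells the reader we're done
    t.join()
    srv.close()
    return b"".join(received)

data = [b"x" * 1000] * 100
assert run_tcp_demo(data) == b"".join(data)   # every byte arrives, in order
```

Unlike the UDP case in the original question, nothing here is dropped no matter how fast the sender pushes: the kernel's TCP window throttles the sender for you.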
-josh
On 3/26/02 8:35 AM, "Vincent Lubet" <email@hidden> wrote:

> Michael,
>
> It is perfectly normal to suffer significant data loss over UDP sockets
> when sending a lot of data: there is no built-in flow control mechanism
> in UDP. Packets are sent at a much higher rate than the receiving process
> can handle.
>
> For local sockets (AF_UNIX), the socket buffer size of the receiving end
> is the flow control mechanism. The sender backs off when the receiver's
> socket buffer is nearly full.
>
> A quick workaround is to increase the size of the receive socket buffer
> of your UDP sockets with the socket option SO_RCVBUF, or to pace the
> sender side to slow it down a bit.
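The SO_RCVBUF workaround Vincent suggests can be sketched as follows (Python standing in for the C setsockopt() call; the 1 MB figure is illustrative, and the kernel may round, double, or cap the size you request):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Read the default receive buffer size, then ask for a larger one.
default = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)  # request ~1 MB
actual = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)

print("default:", default, "after setsockopt:", actual)
sock.close()
```

A bigger receive buffer only absorbs bursts; if the sender outpaces the receiver on average, packets will still be dropped once the buffer fills.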
>
> The real solution is to implement flow control in your protocol over UDP.
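One way to read Vincent's "real solution" is to layer an acknowledgment scheme on top of UDP yourself. The sketch below uses the simplest possible scheme, stop-and-wait: the sender transmits one datagram and blocks until the receiver acknowledges it. The one-byte sequence header and ACK format are invented for illustration, and both endpoints run in one process for the demo:

```python
import socket

recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))          # receiver on an ephemeral port
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
dest = recv_sock.getsockname()

received = []
for seq, payload in enumerate([b"alpha", b"beta", b"gamma"]):
    send_sock.sendto(bytes([seq]) + payload, dest)  # 1-byte sequence header
    data, addr = recv_sock.recvfrom(2048)           # receiver side
    received.append(data[1:])
    recv_sock.sendto(b"ACK" + data[:1], addr)       # ack the sequence number
    ack, _ = send_sock.recvfrom(16)                 # sender blocks on the ack
    assert ack == b"ACK" + bytes([seq])             # ...before sending more

print(received)
recv_sock.close()
send_sock.close()
```

A real protocol would add retransmission timeouts and a sliding window instead of one-packet-at-a-time, but the principle is the same: the sender never gets far enough ahead to overflow the receiver's buffer.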
>
> Vincent
>
> On Monday, March 25, 2002, at 12:41 PM, Michael Swan wrote:
>
> > I have a UNIX application sending a lot of data to an OS X
> > Carbon-CFM application (i.e. running natively on OS X) via
> > a UDP socket. Unfortunately, a lot of data seems to be dropped
> > between the UNIX app and the OS X app. Neither side reports
> > errors: the UNIX app's sendto() and the OS X app's OTRcvUData return
> > without error. Some part of the socket mechanism is dropping the
> > data, but it isn't clear where. When we're really pushing data
> > through the socket, we end up losing more than 85% of the data!
> >
> > I understand from previous threads on this list that converting to
> > native sockets on the OS X side wouldn't help with performance.
> > Is this really the case?
> >
> > Might it help performance if I switched from using AF_INET sockets
> > to AF_UNIX sockets? AF_UNIX sockets would seem to me to be more
> > efficient, since there is less protocol overhead (although that
> > might be replaced by UNIX filesystem overhead?).
> >
> > Any other ideas to improve the IPC throughput? If there is another
> > mechanism that should be used, please let me know. Pipes would be
> > a natural fit, but I'm not quite sure how that would work in the
> > OS X app...
_______________________________________________
macnetworkprog mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/macnetworkprog
Do not post admin requests to the list. They will be ignored.