Re: connected SOCK_DGRAM disconnects
- Subject: Re: connected SOCK_DGRAM disconnects
- From: Godfrey van der Linden <email@hidden>
- Date: Tue, 02 Feb 2010 17:49:17 +1100
On 2010-02-02, at 4:32 PM, Michael Smith wrote:
>
> On Feb 1, 2010, at 6:40 PM, Godfrey van der Linden wrote:
>
>> I'm trying something a bit different for performance testing and have tripped over a problem that I just haven't been able to solve. Maybe you netheads out there can enlighten me.
>>
>> I have a pair of threads each of which have independent AF_UNIX, SOCK_DGRAM sockets.
>>
>> The client knows the server's AF_UNIX address and connect()s its socket to the server before sending the first message.
>>
>> On recvfrom()ing the first message, the server connect()s its own socket to the received client address, then falls into a simple mirror loop: send a reply, block in recv().
>>
>> After a few messages, the client closes its socket.
>>
>> Q> Shouldn't the server's blocked recv() call be aborted with an ECONNRESET error, when the client's socket is closed?
>
> No. ECONNRESET is how the TCP RST behaviour is surfaced; there is no analogue for datagram sockets.
>
Yeah, I came to that realisation. I saw the error returned once during my testing, and it never turned up again after moving the code around a bit.
Pity; what I'd really love is an implementation of SOCK_SEQPACKET in the UNIX and INETx protocol families.
>> I know that this is an unusual usage of DGRAM sockets, but I eventually need some sort of control on the packet size when it goes through my instrumented network driver.
>
>
> Why not just advertise an appropriate MSS and let the network stack handle the rest for you?
>
Sounds interesting. I'm not really a network programmer, so all of my knowledge is just what you can glean from Stevens. What is MSS?
My research problem:
I wish to build an energy consumption model of a heavily instrumented IOEthernetController subclass.
To determine the per-byte and per-packet energy cost of sending and receiving packets, I'll need to finely control the frame sizes as they go on the bus to calibrate the energy model. I don't think there is any sensible way of controlling the TCP/IP frame size on a per-packet basis, hence the UDP/IP decision. I'd really, really love a connection-oriented per-packet socket, but there doesn't seem to be such an option.
> = Mike
_______________________________________________
Darwin-kernel mailing list (email@hidden)