
Re: More on detecting dropped connections


  • Subject: Re: More on detecting dropped connections
  • From: Vincent Lubet <email@hidden>
  • Date: Mon, 15 Apr 2002 08:41:19 -0700

When an error condition is detected on a socket, it is marked as both readable and writable for select(). For a TCP/IP socket, such an error condition is usually the detection of a non-orderly tear-down of the connection. The usual error codes you get are ECONNRESET, ECONNABORTED, or ECONNREFUSED.

You can fetch and clear the error by calling getsockopt() with the SO_ERROR option; alternatively, a read() or write() will fail and set errno to the error code.

For more information, I recommend the section "Under What Conditions Is a Descriptor Ready?" in chapter 6 of "UNIX Network Programming, Volume 1, Second Edition: Networking APIs: Sockets and XTI" by W. Richard Stevens.

Vincent Lubet

On Friday, April 12, 2002, at 10:56 PM, Tom Bayley wrote:

I've noticed something strange. Our architecture involves a single process polling connections to a bunch of other processes, using select() (with zero time-out) and non-blocking recv(). Most of the 'other processes' are running on the same machine as the polling one. If one of the 'other processes' crashes, select() says there is data to be read on that socket, though nothing has actually been sent. That's the strange part. Then, when we try to read the socket there is nothing there. This strikes me as rather daft - why doesn't BSD just error the select() or the recv() rather than manufacture this inconsistency?

I understand that I would get SIGPIPE/EPIPE on a write to one of these duff sockets, but our polling architecture does not use writes. (Under Windows this is OK because you can rely on getting an error from reads.) So I'm wondering if I can rely on the quirky behaviour of BSD sockets described above to discover lost/crashed connections without having to write?

Tom

We have a bunch of processes talking to each other over local TCP connections. On NT we can detect if one of the processes goes away (crashes even) because we get an error from recv() on that socket. But on OS X we do not seem to get an error! Why are these behaviours different and how are we *supposed* to discover that sockets have gone bad?
_______________________________________________
macnetworkprog mailing list | email@hidden
Help/Unsubscribe/Archives: http://www.lists.apple.com/mailman/listinfo/macnetworkprog
Do not post admin requests to the list. They will be ignored.

References:
  • More on detecting dropped connections (From: Tom Bayley <email@hidden>)
