More on detecting dropped connections
- Subject: More on detecting dropped connections
- From: Tom Bayley <email@hidden>
- Date: Sat, 13 Apr 2002 06:56:39 +0100
I've noticed something strange. Our architecture involves a single
process polling connections to a bunch of other processes, using
select() (with a zero timeout) and non-blocking recv(). Most of the
'other processes' run on the same machine as the polling one. If one of
the 'other processes' crashes, select() reports that there is data to
be read on that socket, even though nothing has actually been sent.
That's the strange part. Then, when we try to read the socket, recv()
returns nothing rather than an error. This strikes me as rather daft -
why doesn't BSD just return an error from select() or recv() rather
than manufacture this inconsistency?
I understand that I would get SIGPIPE/EPIPE on a write to one of these
duff sockets, but our polling architecture does not use writes. (Under
Windows this is OK because you can rely on getting an error from reads.)
So I'm wondering whether I can rely on the quirky behaviour of BSD
sockets described above to discover lost or crashed connections without
having to write.
Tom
We have a bunch of processes talking to each other over local TCP
connections. On NT we can detect if one of the processes goes away
(crashes even) because we get an error from recv() on that socket. But
on OS X we do not seem to get an error! Why are these behaviours
different and how are we *supposed* to discover that sockets have gone
bad?
_______________________________________________
macnetworkprog mailing list | email@hidden
Help/Unsubscribe/Archives:
http://www.lists.apple.com/mailman/listinfo/macnetworkprog
Do not post admin requests to the list. They will be ignored.