BSD socket oddity with SO_REUSEADDR
- Subject: BSD socket oddity with SO_REUSEADDR
- From: Aaron Ballman <email@hidden>
- Date: Tue, 15 Jul 2003 09:45:14 -0500
I am doing a socket implementation using native BSD sockets on OS X
(10.2.6), and have run into a rather odd quirk. It seems as though
SO_REUSEADDR isn't functioning properly. First, I create the
socket's file descriptor with a call to socket, then I try to set the
SO_REUSEADDR socket option with setsockopt. That call returns 0
(success), and because I was being paranoid, I call getsockopt to
make sure the flag was actually set (it was). Yet a subsequent
attempt to bind to the same port still fails, with errno set to
EADDRINUSE.
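For what it's worth, the paranoia check is along these lines (a
sketch rather than the exact code; v_getsockopt follows the same
function-pointer convention I describe after the snippet below):

    int optval = 0;
    socklen_t optlen = sizeof( optval );
    if (v_getsockopt( mSocket, SOL_SOCKET, SO_REUSEADDR,
                      &optval, &optlen ) == 0) {
        // optval comes back non-zero, so SO_REUSEADDR really is
        // set on the descriptor before bind is ever called
    }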
The reason I am using this socket option is to get around the 240
second delay from the TIME_WAIT state after closing the socket down.
The first time I try to bind to the port (13789 is the port # btw),
the bind occurs and all works as expected. When I close that socket
down (with a call to close), everything terminates properly. Doing a
netstat -p tcp at this point does _not_ show my socket in the list
(it's a SOCK_STREAM socket, and I do see it in the list while the
application is running). However, the next time I try to bind to
that port, the problem shows up.
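Boiled down, with the v_* indirection stripped away, each run of the
app does essentially this (a simplified sketch of the sequence, not
our actual code):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main( void )
    {
        struct sockaddr_in addr;
        int val = 1;
        int fd = socket( AF_INET, SOCK_STREAM, 0 );

        setsockopt( fd, SOL_SOCKET, SO_REUSEADDR, &val, sizeof( val ) );

        memset( &addr, 0, sizeof( addr ) );
        addr.sin_family = AF_INET;
        addr.sin_port = htons( 13789 );
        addr.sin_addr.s_addr = htonl( INADDR_ANY );

        // First run: this succeeds. Run again shortly after the
        // previous run has closed down, and bind fails with
        // errno == EADDRINUSE.
        if (bind( fd, (struct sockaddr *)&addr, sizeof( addr ) ) < 0) {
            perror( "bind" );
            return 1;
        }
        listen( fd, 5 );

        // ... accept and service connections here ...

        close( fd );
        return 0;
    }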
Here's the actual snippet from the app; maybe someone can spot
something I'm not seeing. (Note: mSocket is the socket's file
descriptor and is already set up. See the comments after the snippet
for what the v_* stuff is about.)
void TCPSocketPosix::Listen( unsigned long port )
{
    struct sockaddr_in localAddr = { 0 };
    localAddr.sin_family = AF_INET;
    localAddr.sin_port = v_htons( port );
    localAddr.sin_addr.s_addr = v_htonl( INADDR_ANY );

    int val = 1;
    if (v_setsockopt( mSocket, SOL_SOCKET, SO_REUSEADDR,
                      &val, sizeof( int ) ) < 0) {
        // Couldn't set the option, so we should get the
        // reason from errno and bail
        SocketError( v_errno );
        Shutdown();
        return;
    }

    if (v_bind( mSocket, (struct sockaddr *)&localAddr,
                sizeof( localAddr ) ) < 0) {
        // We weren't able to bind to the port we specified,
        // so we should get the reason from errno and bail
        SocketError( v_errno );
        Shutdown();
        return;
    }

    mAccepting = true;  // We're now accepting connections

    if (v_listen( mSocket, 5 ) < 0) {
        // Listen caused an error to occur, so we should get
        // the reason from errno and bail
        SocketError( v_errno );
        Shutdown();
        return;
    }
}
The reason I'm using things like v_setsockopt is that I'm calling
through function pointers. The application is a Carbon app (we've
yet to port to Mach-O), and I want to use the same code base on all
our platforms (Linux, Mac OS X, and Win32 with WinSock). All the
appropriate CFBundle calls are happening and the functions are being
loaded, so I don't suspect that as the problem.
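For completeness, the lookup for each pointer is roughly like this
(simplified; LoadSocketCalls is a made-up name for illustration, and
since we're CFM the real code also has to wrap each returned Mach-O
pointer in CFM glue before calling it):

    #include <CoreFoundation/CoreFoundation.h>

    typedef int (*setsockopt_fn)( int, int, int, const void *, socklen_t );
    static setsockopt_fn v_setsockopt;

    static Boolean LoadSocketCalls( void )
    {
        // System.framework is where the BSD socket calls live
        CFURLRef url = CFURLCreateWithFileSystemPath( kCFAllocatorDefault,
            CFSTR("/System/Library/Frameworks/System.framework"),
            kCFURLPOSIXPathStyle, true );
        CFBundleRef sysBundle = CFBundleCreate( kCFAllocatorDefault, url );
        CFRelease( url );
        if (sysBundle == NULL)
            return false;

        v_setsockopt = (setsockopt_fn)CFBundleGetFunctionPointerForName(
                           sysBundle, CFSTR("setsockopt") );
        return (v_setsockopt != NULL);
    }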
Can anyone spot anything I might be doing wrong? Or does anyone know
of a bug in setsockopt (I _highly_ doubt that's the case... but I've
seen weirder)?
Thanks for any help!
~Aaron
--
Handy UNIX Commands:
sudo grep -e "My mind" -H -r /
mv /mnt/fuji /mnt/everest
mv "Ignorance" /dev/null