Re: Sockets Closing Unexpectedly
- Subject: Re: Sockets Closing Unexpectedly
- From: Josh Graessley <email@hidden>
- Date: Thu, 15 Mar 2007 19:34:28 -0700
You can't delay this.

The best solution is to write a filter that runs in a process in user space. Your in-kernel kext is responsible for intercepting the outbound connection via the connect-out filter. If the socket is a synchronous/blocking socket, call sock_connect with the loopback address and the port your user-space process is listening on. Send the address and port the connect was originally trying to reach to your user-space process. Have your user-space process connect to that destination and then relay the connect result back to your kext. If the result was success, return EJUSTRETURN from your connect-out filter function. Otherwise, call sock_shutdown on the socket and return the error. If the socket is a non-blocking socket, things may be a little trickier. I can't remember the exact details, but it can be made to work.

Anyhow, once you have done this, the client thinks they're connected to some remote server when they're really connected to your process. Your process is responsible for forwarding data between the two sockets. When the remote socket closes, you can finish writing your data before closing the socket between your transparent proxy app and the app that initiated the connection. There are some other tricky things related to TCP half closes and whatnot.
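The forwarding loop in the user-space proxy can be sketched roughly like this. This is my own illustration, not code from the thread: a minimal select()-based relay over plain POSIX sockets that handles the half-close point above by propagating EOF with shutdown(SHUT_WR) rather than close(), so data still queued in the opposite direction can drain before the connection is fully torn down.

```c
#include <sys/socket.h>
#include <sys/select.h>
#include <unistd.h>
#include <string.h>

/* Relay bytes between sockets a and b until both directions have seen
 * EOF. When one side stops sending, propagate a TCP half close to the
 * other side with shutdown(SHUT_WR) instead of close(), so any data
 * still flowing the other way can finish draining first. */
static void relay(int a, int b)
{
    char buf[4096];
    int a_open = 1, b_open = 1;   /* read sides still producing data */

    while (a_open || b_open) {
        fd_set rfds;
        FD_ZERO(&rfds);
        if (a_open) FD_SET(a, &rfds);
        if (b_open) FD_SET(b, &rfds);
        int maxfd = (a > b ? a : b) + 1;
        if (select(maxfd, &rfds, NULL, NULL, NULL) < 0)
            break;
        if (a_open && FD_ISSET(a, &rfds)) {
            ssize_t n = read(a, buf, sizeof(buf));
            if (n <= 0) { a_open = 0; shutdown(b, SHUT_WR); }
            else write(b, buf, n);
        }
        if (b_open && FD_ISSET(b, &rfds)) {
            ssize_t n = read(b, buf, sizeof(buf));
            if (n <= 0) { b_open = 0; shutdown(a, SHUT_WR); }
            else write(a, buf, n);
        }
    }
}
```

A real proxy would also have to handle partial writes and non-blocking sockets; this only shows the half-close bookkeeping.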
-josh
On Mar 15, 2007, at 6:22 PM, Jones Curtis wrote:
My socket filter is deferring the sending/receiving of some data (specifically, in this case, the receiving of some data) for a while, and the remote host is closing the socket connection before the data has been re-injected. I haven't been able to figure out how to delay the closing of the socket (with respect to the local process that's using the socket) until I've had a chance to re-inject the deferred data.
I noted that by the time the detach callback is made, the socket is
already closed.
I noted that the notify callback is called first for sock_evt_cantrecvmore, then sock_evt_closing, then sock_evt_disconnecting, and finally sock_evt_disconnected, and that if I block on any or all of those (until the data is re-injected), it does not prevent the local process from finding out about the closed socket - which appears to happen very shortly after the sock_evt_cantsendmore notification.
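For reference, the event sequence above arrives through the filter's notify callback. This is a bare sketch of that callback (it only compiles inside a kext against the kernel socket-filter KPI, and the logging stands in for whatever signals the re-injection logic in a real filter):

```c
#include <sys/kpi_socketfilter.h>

/* Notify callback observing the teardown sequence described above:
 * cantrecvmore -> closing -> disconnecting -> disconnected. */
static void
my_notify(void *cookie, socket_t so, sflt_event_t event, void *param)
{
    switch (event) {
    case sock_evt_cantrecvmore:
        /* Remote sent FIN: last chance to deal with deferred inbound
         * data before the local process sees EOF. */
        printf("filter: cantrecvmore\n");
        break;
    case sock_evt_closing:
        printf("filter: closing\n");
        break;
    case sock_evt_disconnecting:
        printf("filter: disconnecting\n");
        break;
    case sock_evt_disconnected:
        printf("filter: disconnected\n");
        break;
    default:
        break;
    }
}
```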
I've started looking at sock_retain/sock_release, and they appear to delay the closing/disconnecting/disconnected events, but not cantrecvmore.
I've dug through tcplognke and it doesn't appear to have any
specific provisions for this - it just blocks on
sock_evt_disconnecting. So I'm not sure what else to try at this
point. Any suggestions?
Thanks.
--
Curtis Jones
email@hidden
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Darwin-kernel mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden