Re: Teardown race with socket filter
- Subject: Re: Teardown race with socket filter
- From: Nick Blievers <email@hidden>
- Date: Thu, 11 Nov 2010 15:51:41 +1100
Thanks for pointing that out. I can confirm I am seeing both of these issues (missing detaches, and sometimes missing unregister callbacks).
On 10/11/2010, at 8:29 PM, Antoine Missout wrote:
> Check my email from yesterday.
>
> http://lists.apple.com/archives/darwin-kernel/2010/Nov//msg00011.html
>
> I see the same leaks.
> I've filed rdar://8648013
> - Antoine
>
>
>
> On 2010-11-09, at 22:38, Nick Blievers wrote:
>
>> Hi,
>>
>> I have a socket filter which works most of the time, but I have noticed that sometimes connections leak, i.e., I never receive an sf_detach callback. The circumstance that causes this is two threads racing during teardown: specifically, a close() from userspace and a tcp_close() from an aio_worker thread.
>>
>> This is what the aio thread looks like:
>> 0x31db3798 0x21b455 <panic+445>
>> 0x31db37e8 0x21b54b <Assert+65>
>> 0x31db3808 0x383de582 <com.trustdefender.kext.PacketFilter + 0x6582>
>> 0x31db3a78 0x4c44dc <sflt_notify+92>
>> 0x31db3aa8 0x4b1ac1 <soisdisconnected+49>
>> 0x31db3ac8 0x355f63 <tcp_close+669>
>> 0x31db3b08 0x3513dd <tcp_input+13323>
>> 0x31db3cd8 0x3473f6 <ip_proto_dispatch_in+405>
>> 0x31db3d18 0x348a95 <ip_input+5766>
>> 0x31db3e48 0x348bbc <ip_proto_input+47>
>> 0x31db3e68 0x32f121 <proto_input+144>
>> 0x31db3ea8 0x31be3c <lo_input+25>
>> 0x31db3ec8 0x317a3c <dlil_ifproto_input+117>
>> 0x31db3ef8 0x31a1b4 <dlil_input_packet_list+698>
>> 0x31db3f68 0x31a3f5 <dlil_input_thread_func+457>
>> 0x31db3fc8 0x29e6cc <call_continuation+28>
>>
>> and this is the userspace thread (the exact stack trace varies because the threads are racing; I don't usually manage to catch it, but got lucky this time):
>>
>> 0x3794b988 0x226e57 <thread_invoke+1213>
>> 0x3794ba08 0x2270f6 <thread_block_reason+331>
>> 0x3794ba78 0x227184 <thread_block+33>
>> 0x3794ba98 0x29d846 <lck_mtx_lock_wait_x86+330>
>> 0x3794baf8 0x298328 <lck_mtx_lock+504>
>> 0x3794bb08 0x383de363 <com.trustdefender.kext.PacketFilter + 0x6363>
>> 0x3794bd78 0x4c44dc <sflt_notify+92>
>> 0x3794bda8 0x4b1b3f <soisdisconnecting+48>
>> 0x3794bdc8 0x358582 <tcp_disconnect+77>
>> 0x3794bde8 0x358631 <tcp_usr_detach+59>
>> 0x3794be08 0x4ad59e <soclose_locked+742>
>> 0x3794be68 0x4ad646 <soclose+66>
>> 0x3794be88 0x4696fc <fo_close+13>
>> 0x3794bef8 0x46b4ad <close_internal_locked+302>
>> 0x3794bf38 0x46b57e <close_nocancel+141>
>> 0x3794bf78 0x4edaf8 <unix_syscall64+617>
>> 0x3794bfc8 0x29f43d <lo64_unix_scall+77>
>>
>> Usually, the second thread is long gone, and the socket's so_filt has been cleared... but we are still in the sflt_notify() callback, and detach hasn't been called. Also, the socket itself has been put into the socket cache, so it has finished its life cycle.
>>
>> This doesn't seem to have anything to do with my code, simply because my code shouldn't affect when the callbacks happen. Has anyone seen anything like this before? Got any suggestions?
>>
>>
>> Thanks
>>
>> Nick
>
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Darwin-kernel mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden