Re: deadlock in 10.12, maybe?
- Subject: Re: deadlock in 10.12, maybe?
- From: Vivek Verma <email@hidden>
- Date: Sat, 01 Oct 2016 00:48:07 -0700
Can you file a radar for this (preferably with the kernel core dump attached)? I would like to see some vnode state that isn't in just the stack trace.
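If it helps, assuming the xnu lldb macros from the matching Kernel Debug Kit are loaded against the core, something along these lines should capture the vnode and mount state (macro names are the ones I'd expect from the KDK scripts and may differ between releases; the address is just the vnode from the zombie stack below):

    (lldb) settings set target.load-script-from-symbol-file true
    (lldb) showallstacks                       # all kernel threads with backtraces
    (lldb) showallmounts                       # mount list and per-mount flags
    (lldb) showvnode 0xffffff8036f52a28        # iocount/usecount/v_lflag of the stuck vnode
    (lldb) showvnodepath 0xffffff8036f52a28    # where that vnode sits in the namespace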
> On Sep 27, 2016, at 11:56 PM, Jorgen Lundman <email@hidden> wrote:
>
>
> Hello list,
>
> So the OpenZFS testing framework experiences deadlocks on 10.12 that were
> not present in 10.11. I'm perfectly willing to accept we have a problem in
> OpenZFS, but the most recent deadlock occurs with no ZFS volumes mounted.
> (The last ZFS dataset is trying to unmount, but is stuck in vfs_busy.)
>
> The most interesting stacks are:
>
> 0xffffff90c1dabc10 0xffffff8029f8e973 _sleep((caddr_t) chan =
> 0xffffff8036f52a8c "\x02", (int) pri = 20, (const char *) wmsg =
> 0xffffff802a195750 "vnode_drain", (u_int64_t) abstime = 0, (int (*)(int))
> continuation = <>, , (lck_mtx_t *) mtx = 0xffffff8036f52a28)
> 0xffffff90c1dabc50 0xffffff8029d1e808 msleep [inlined]((int) pri = 20,
> (const char *) wmsg = 0xffffff802a195750 "vnode_drain")
> 0xffffff90c1dabc50 0xffffff8029d1e7e8 vnode_drain [inlined](void)
> 0xffffff90c1dabc50 0xffffff8029d1e7bb vnode_reclaim_internal((vnode *)
> vp = <>, , (int) locked = 1, (int) reuse = <>, , (int) flags = <>, )
> 0xffffff90c1dabcd0 0xffffff8029d22c12 vflush((mount *) mp = <>, ,
> (vnode *) skipvp = 0x0000000000000000, (int) flags = <>, )
> 0xffffff90c1dabd70 0xffffff8029d2dea4 dounmount((mount *) mp = <>, ,
> (int) flags = <>, , (int) withref = 1, (vfs_context_t) ctx = <>, )
> 0xffffff90c1dabf50 0xffffff8029d2d98a unmount((proc_t) p = <>, ,
> (unmount_args *) uap = 0xffffff8037f79660, (int32_t *) retval = <>, )
> 0xffffff90c1dabfb0 0xffffff802a02a366 unix_syscall64((x86_saved_state_t
> *) state = <>, )
> 0x0000000000000000 0xffffff8029aa9f46 kernel`hndl_unix_scall64 + 0x16
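For anyone reading along: this thread is parked in vnode_drain(), i.e. the reclaim path has marked the vnode and is waiting for every outstanding iocount to be dropped. A minimal sketch of that wait, paraphrased from my reading of xnu's bsd/vfs/vfs_subr.c (field and flag names from memory, so illustrative rather than the literal 10.12 source):

    /*
     * Simplified sketch: reclaim marks the vnode as draining, then sleeps
     * until every other holder of an iocount releases it.  This is the
     * "vnode_drain" wait message in the unmount stack above.
     */
    static void
    vnode_drain(vnode_t vp)
    {
            vp->v_lflag |= VL_DRAIN;          /* new iocount takers must now wait */
            vp->v_owner = current_thread();

            while (vp->v_iocount > 1)
                    msleep(&vp->v_iocount, &vp->v_lock, PVFS, "vnode_drain", NULL);
    }

So the unmount can only make progress once whoever holds the remaining reference lets go of it.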
>
>
>
> Zombie Stacks:
>
> 0xffffff90e5f23aa0 0xffffff8029f8e973 _sleep((caddr_t) chan =
> 0xffffff8036f52a80 "\x0eà", (int) pri = 20, (const char *) wmsg =
> 0xffffff802a195f86 "vnode getiocount", (u_int64_t) abstime = 0, (int
> (*)(int)) continuation = <>, , (lck_mtx_t *) mtx = 0xffffff8036f52a28)
> 0xffffff90e5f23af0 0xffffff8029d1e550 msleep [inlined]((const char *)
> wmsg = <no location, value may have been optimized out>, )
> 0xffffff90e5f23af0 0xffffff8029d1e536 vnode_getiocount((vnode_t) vp =
> <>, , (unsigned int) vid = 0, (int) vflags = <>, )
> 0xffffff90e5f23b60 0xffffff8029fb6e44 vget_internal [inlined]((vnode_t)
> vp = 0xffffff8036f52a28, (int) vid = 0, (int) vflags = 0)
> 0xffffff90e5f23b60 0xffffff8029fb6e33 vnode_getwithref
> [inlined]((vnode_t) vp = 0xffffff8036f52a28)
> 0xffffff90e5f23b60 0xffffff8029fb6e33 ubc_unmap((vnode *) vp =
> 0xffffff8036f52a28)
> 0xffffff90e5f23b70 0xffffff8029b60f3d
> vnode_pager_last_unmap((memory_object_t) mem_obj = <>, )
> 0xffffff90e5f23bb0 0xffffff8029b92ebe memory_object_last_unmap
> [inlined]((memory_object_t) memory_object = <>, )
> 0xffffff90e5f23bb0 0xffffff8029b92eb4
> vm_object_deallocate((vm_object_t) object = <>, )
> 0xffffff90e5f23c80 0xffffff8029b83dfe
> vm_map_enter_mem_object_control((vm_map_t) target_map = <>, ,
> (vm_map_offset_t *) address = <>, , (vm_map_size_t) initial_size = <>, ,
> (vm_map_offset_t) mask = <>, , (int) flags = <>, ,
> (memory_object_control_t) control = <>, , (vm_object_offset_t) offset = <>,
> , (boolean_t) copy = <>, , (vm_prot_t) cur_protection = <no location, value
> may have been optimized out>, , (vm_prot_t) max_protection = <no location,
> value may have been optimized out>, , (vm_inherit_t) inheritance = <no
> location, value may have been optimized out>, )
> 0xffffff90e5f23f50 0xffffff8029f7b4c8 mmap((proc_t) p = <>, ,
> (mmap_args *) uap = <>, , (user_addr_t *) retval = <>, )
> 0xffffff90e5f23fb0 0xffffff802a02a366 unix_syscall64((x86_saved_state_t
> *) state = <>, )
> 0x0000000000000000 0xffffff8029aa9f46 kernel`hndl_unix_scall64 + 0x16
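And this looks like the other half of the hang: vnode_getiocount() refuses to hand out a new iocount while the vnode is being drained or terminated, so this mmap/ubc_unmap thread sleeps on "vnode getiocount" while the unmount thread above sleeps in "vnode_drain" on the same vnode. Roughly (again a paraphrase of vfs_subr.c, names approximate):

    /*
     * Simplified sketch: a thread asking for an iocount blocks while the
     * vnode is draining or being terminated -- the "vnode getiocount"
     * wait message in the zombie stack above.
     */
    int
    vnode_getiocount(vnode_t vp, unsigned int vid, int vflags)
    {
            while (vp->v_lflag & (VL_DRAIN | VL_TERMINATE | VL_DEAD))
                    msleep(&vp->v_lflag, &vp->v_lock, PVFS, "vnode getiocount", NULL);

            vp->v_iocount++;
            return (0);
    }

If the reference that vnode_drain() is waiting on can only be released after this ubc_unmap() call completes, the two threads wait on each other forever, which would match what the testing framework sees.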
>
>
>
>
> All other IO threads are consequently stuck in vfs_busy(), and the
> remaining threads appear to be in normal sleeps.
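That cascading part is expected: once dounmount() has flagged the mount as unmounting, any other thread entering the filesystem parks in vfs_busy() until the unmount finishes. A rough sketch of that gate (same caveats as above, paraphrased with approximate flag names and the mount-mutex handling elided for brevity):

    /*
     * Simplified sketch: while an unmount is in flight, vfs_busy() makes
     * other callers wait instead of taking the mount's rwlock, which is
     * why the remaining IO threads pile up here.
     */
    int
    vfs_busy(mount_t mp, int flags)
    {
            if (mp->mnt_lflag & MNT_LUNMOUNT) {
                    if (flags & LK_NOWAIT)
                            return (ENOENT);
                    mp->mnt_lflag |= MNT_LWAIT;
                    msleep((caddr_t)mp, &mp->mnt_mlock, PVFS, "vfsbusy", NULL);
                    return (ENOENT);          /* caller re-resolves or fails */
            }
            lck_rw_lock_shared(&mp->mnt_rwlock);
            return (0);
    }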
>
> The full stackdump is available here; http://www.lundman.net/hardcopy9.txt
>
>
>
>
> --
> Jorgen Lundman | <email@hidden>
> Unix Administrator | +81 (0)90-5578-8500
> Shibuya-ku, Tokyo | Japan
>
_______________________________________________
Do not post admin requests to the list. They will be ignored.
Filesystem-dev mailing list (email@hidden)
Help/Unsubscribe/Update your Subscription:
This email sent to email@hidden