Re: panic(cpu 1 caller ...): vnode_put(...): iocount < 1
- Subject: Re: panic(cpu 1 caller ...): vnode_put(...): iocount < 1
- From: James Reynolds <email@hidden>
- Date: Tue, 25 Oct 2005 15:10:36 -0600
I'm still debugging my iocount < 1 panic.
I noticed something about 2 different vp's: they have the same v_op
values. If two vnodes share the same v_op, will they clobber each other?
I'm hoping like crazy someone on the list knows more about this and can confirm.
My theory is that vnode_create and vnode_put_locked aren't thread
safe. First, the panic only happens on dual processors. Second, it only
happens when accessing many, many files at the same time with 2
different processes that are launched at the same time.
I've tested on a clean install and it happens there too, so I know it
is a bug in the shipping OS. I've also had friends at other
universities test this and it happens for them too.
See below for gdb info.
--
Thanks,
James Reynolds
University of Utah
Student Computing Labs
email@hidden
801-585-9811
------------------------------------------------------------------------------------------------------------------
Here are the backtraces of the 2 processes in question:
The find that panicked:
#0 Debugger (message=0x309284 "panic") at
/SourceCache/xnu/xnu-792.2.4/osfmk/ppc/model_dep.c:635
#1 0x0002683c in panic (str=0x31005c "vnode_put(%x): iocount < 1")
at /SourceCache/xnu/xnu-792.2.4/osfmk/kern/debug.c:202
#2 0x000e6d70 in vnode_put_locked (vp=0x3e2a840) at
/SourceCache/xnu/xnu-792.2.4/bsd/vfs/vfs_subr.c:3205
#3 0x000e6d1c in vnode_put (vp=0x3e2a840) at
/SourceCache/xnu/xnu-792.2.4/bsd/vfs/vfs_subr.c:3192
#4 0x000ed6d0 in stat2 (ctx=0x2c413e30, ndp=0x2c413cd0, ub=0,
xsecurity=0, xsecurity_size=0) at
/SourceCache/xnu/xnu-792.2.4/bsd/vfs/vfs_syscalls.c:2496
#5 0x000eda50 in lstat1 (p=0x31e4630, path=0, ub=0, xsecurity=0,
xsecurity_size=0) at
/SourceCache/xnu/xnu-792.2.4/bsd/vfs/vfs_syscalls.c:2591
#6 0x000edaf0 in lstat (p=0x31e4630, uap=0x0, retval=0x1) at
/SourceCache/xnu/xnu-792.2.4/bsd/vfs/vfs_syscalls.c:2603
#7 0x002a7a94 in unix_syscall (regs=0x3e0000) at
/SourceCache/xnu/xnu-792.2.4/bsd/dev/ppc/systemcalls.c:207
#8 0x000abcb0 in noassist ()
#9 0x00000000 in lowGlo ()
The other find:
#0 cpu_signal_handler () at /SourceCache/xnu/xnu-792.2.4/osfmk/ppc/cpu.c:648
#1 0x00791d5c in mhp.1748 ()
#2 0x00505c3c in mhp.1748 ()
#3 0x002deed4 in IOCPUInterruptController::handleInterrupt
(this=0x0, source=3520184) at
/SourceCache/xnu/xnu-792.2.4/iokit/Kernel/IOCPU.cpp:482
#4 0x000adba0 in interrupt (type=46247680, ssp=0x2c14a00, dsisr=0,
dar=1) at /SourceCache/xnu/xnu-792.2.4/osfmk/ppc/interrupt.c:110
#5 0x000ac0c8 in ihsetback ()
#6 0x0023675c in hfs_fsync (vp=0x14, waitfor=3530752,
fullsync=704448, p=0x0) at
/SourceCache/xnu/xnu-792.2.4/bsd/hfs/hfs_vnops.c:1129
#7 0x00239dd0 in hfs_vnop_fsync (ap=0x2c3f3470) at
/SourceCache/xnu/xnu-792.2.4/bsd/hfs/hfs_vnops.c:3563
#8 0x000fb164 in VNOP_FSYNC (vp=0x3efa948, waitfor=1,
context=0xc0000000) at
/SourceCache/xnu/xnu-792.2.4/bsd/vfs/kpi_vfs.c:3012
#9 0x000e4a8c in vclean (vp=0x3efa948, flags=8, p=0x314a9ac) at
/SourceCache/xnu/xnu-792.2.4/bsd/vfs/vfs_subr.c:1736
#10 0x000e4e28 in vgone (vp=0x3efa948) at
/SourceCache/xnu/xnu-792.2.4/bsd/vfs/vfs_subr.c:1891
#11 0x000e7284 in vnode_reclaim_internal (vp=0x3efa948, locked=1,
reuse=1) at /SourceCache/xnu/xnu-792.2.4/bsd/vfs/vfs_subr.c:3432
#12 0x000e6b60 in new_vnode (vpp=0x2c3f3670) at
/SourceCache/xnu/xnu-792.2.4/bsd/vfs/vfs_subr.c:3108
#13 0x000e7408 in vnode_create (flavor=0, size=1, data=0x2c3f36e0,
vpp=0x3effbcc) at /SourceCache/xnu/xnu-792.2.4/bsd/vfs/vfs_subr.c:3495
#14 0x0021faf4 in hfs_getnewvnode (hfsmp=0x3effbcc, dvp=0x413a210,
cnp=0x2c3f3dfc, descp=0x2c3f37a0, wantrsrc=0, attrp=0x2c3f37e0,
forkp=0x2c3f3840, vpp=0x2c3f38a4) at
/SourceCache/xnu/xnu-792.2.4/bsd/hfs/hfs_cnode.c:632
#15 0x00224d44 in hfs_lookup (dvp=0x413a210, vpp=0x2c3f3ce8,
cnp=0x2c3f3dfc, context=0x8000, cnode_locked=0x2c3f3944) at
/SourceCache/xnu/xnu-792.2.4/bsd/hfs/hfs_lookup.c:329
#16 0x00224fe0 in hfs_vnop_lookup (ap=0x2c3f39c0) at
/SourceCache/xnu/xnu-792.2.4/bsd/hfs/hfs_lookup.c:488
#17 0x000f9eb0 in VNOP_LOOKUP (dvp=0x413a210, vpp=0x1,
cnp=0x2c3f3dfc, context=0x35b6b8) at
/SourceCache/xnu/xnu-792.2.4/bsd/vfs/kpi_vfs.c:2052
#18 0x000e0e48 in lookup (ndp=0x2c3f3cd0) at
/SourceCache/xnu/xnu-792.2.4/bsd/vfs/vfs_lookup.c:507
#19 0x000e09dc in namei (ndp=0x2c3f3cd0) at
/SourceCache/xnu/xnu-792.2.4/bsd/vfs/vfs_lookup.c:224
#20 0x000ed698 in stat2 (ctx=0x2c3f3e30, ndp=0x2c3f3cd0, ub=0,
xsecurity=0, xsecurity_size=0) at
/SourceCache/xnu/xnu-792.2.4/bsd/vfs/vfs_syscalls.c:2491
#21 0x000eda50 in lstat1 (p=0x0, path=0, ub=0, xsecurity=0,
xsecurity_size=0) at
/SourceCache/xnu/xnu-792.2.4/bsd/vfs/vfs_syscalls.c:2591
#22 0x000edaf0 in lstat (p=0x0, uap=0x1, retval=0xc0000000) at
/SourceCache/xnu/xnu-792.2.4/bsd/vfs/vfs_syscalls.c:2603
#23 0x002a7a94 in unix_syscall (regs=0x3e0000) at
/SourceCache/xnu/xnu-792.2.4/bsd/dev/ppc/systemcalls.c:207
#24 0x000abcb0 in noassist ()
#25 0x00000000 in lowGlo ()
You will notice one find is doing a vnode_put_locked while the other
is in the middle of a vnode_create. Switch to the frames with the *vp
in each find. Printing *vp in each gives the following values:
Panicked find:
(gdb) print *vp
$2 = {
v_lock = {
opaque = {52315696, 0, 0}
},
v_freelist = {
tqe_next = 0x3e2a738,
tqe_prev = 0x3e2a7c8
},
v_mntvnodes = {
tqe_next = 0x3e2a8c4,
tqe_prev = 0x3e2a7d0
},
v_nclinks = {
lh_first = 0x3e2be70
},
v_ncchildren = {
lh_first = 0x0
},
v_defer_reclaimlist = 0x0,
v_flag = 542720,
v_lflag = 57344,
v_iterblkflags = 0 '\0',
v_references = 1 '\001',
v_kusecount = 0,
v_usecount = 0,
v_iocount = 0,
v_owner = 0x0,
v_type = VREG,
v_id = 421672419,
v_un = {
vu_mountedhere = 0x3e26a68,
vu_socket = 0x3e26a68,
vu_specinfo = 0x3e26a68,
vu_fifoinfo = 0x3e26a68,
vu_ubcinfo = 0x3e26a68
},
v_cleanblkhd = {
lh_first = 0x0
},
v_dirtyblkhd = {
lh_first = 0x0
},
v_cred = 0x0,
v_cred_timestamp = 0,
v_numoutput = 0,
v_writecount = 0,
v_name = 0x2b50cf4 "NetIOSFTP",
v_parent = 0x3e10e70,
v_lockf = 0x0,
v_unsafefs = 0x0,
v_op = 0x2c59204,
v_tag = VT_HFS,
v_mount = 0x2c6dd00,
v_data = 0x3e23f78
}
Not panicked find:
(gdb) print *vp
$1 = {
v_lock = {
opaque = {0, 0, 0}
},
v_freelist = {
tqe_next = 0x0,
tqe_prev = 0xdeadb
},
v_mntvnodes = {
tqe_next = 0x0,
tqe_prev = 0x0
},
v_nclinks = {
lh_first = 0x0
},
v_ncchildren = {
lh_first = 0x0
},
v_defer_reclaimlist = 0x0,
v_flag = 542720,
v_lflag = 40966,
v_iterblkflags = 0 '\0',
v_references = 1 '\001',
v_kusecount = 0,
v_usecount = 0,
v_iocount = 0,
v_owner = 0x31e4c60,
v_type = VREG,
v_id = 545265429,
v_un = {
vu_mountedhere = 0x3f00ee8,
vu_socket = 0x3f00ee8,
vu_specinfo = 0x3f00ee8,
vu_fifoinfo = 0x3f00ee8,
vu_ubcinfo = 0x3f00ee8
},
v_cleanblkhd = {
lh_first = 0x0
},
v_dirtyblkhd = {
lh_first = 0x0
},
v_cred = 0x0,
v_cred_timestamp = 0,
v_numoutput = 0,
v_writecount = 0,
v_name = 0x2df74b4 "schema.m",
v_parent = 0x3eb1ce4,
v_lockf = 0x0,
v_unsafefs = 0x0,
v_op = 0x2c59204,
v_tag = VT_HFS,
v_mount = 0x2c6dd00,
v_data = 0x3effacc
}
_______________________________________________
Darwin-kernel mailing list (email@hidden)