Date: Thu, 13 Feb 2014 16:26:45 -0500
From: Tejun Heo
To: Li Zhong
Cc: Peter Zijlstra, Tommi Rantala, Ingo Molnar, LKML, Dave Jones, trinity@vger.kernel.org
Subject: Re: lockdep: strange %s#5 lock name
Message-ID: <20140213212645.GG17608@htj.dyndns.org>
In-Reply-To: <1392266124.4974.35.camel@ThinkPad-T5421.cn.ibm.com>
References: <20140210192846.GF27965@twins.programming.kicks-ass.net>
 <20140210215224.GB25350@mtj.dyndns.org>
 <20140211110036.GT9987@twins.programming.kicks-ass.net>
 <20140211152741.GA24490@htj.dyndns.org>
 <1392266124.4974.35.camel@ThinkPad-T5421.cn.ibm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Feb 13, 2014 at 12:35:24PM +0800, Li Zhong wrote:
> [    5.251993] ------------[ cut here ]------------
> [    5.252019] WARNING: CPU: 0 PID: 221 at kernel/locking/lockdep.c:710 __lock_acquire+0x1761/0x1f60()
> [    5.252019] Modules linked in: e1000
> [    5.252019] CPU: 0 PID: 221 Comm: lvm Not tainted 3.14.0-rc2-next-20140212 #1
> [    5.252019] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2007
> [    5.252019] 0000000000000009 ffff880118e91938 ffffffff8155fe12 ffff880118e91978
> [    5.252019] ffffffff8105c195 ffff880118e91958 ffffffff81eb33d0 0000000000000002
> [    5.252019] ffff880118dd2318 0000000000000000 0000000000000000 ffff880118e91988
> [    5.252019] Call Trace:
> [    5.252019] [] dump_stack+0x19/0x1b
> [    5.252019] [] warn_slowpath_common+0x85/0xb0
> [    5.252019] [] warn_slowpath_null+0x1a/0x20
> [    5.252019] [] __lock_acquire+0x1761/0x1f60
> [    5.252019] [] ? mark_held_locks+0xae/0x120
> [    5.252019] [] ? debug_check_no_locks_freed+0x8e/0x160
> [    5.252019] [] ? lockdep_init_map+0xac/0x600
> [    5.252019] [] lock_acquire+0x9a/0x120
> [    5.252019] [] ? flush_workqueue+0x5/0x750
> [    5.252019] [] flush_workqueue+0x109/0x750
> [    5.252019] [] ? flush_workqueue+0x5/0x750
> [    5.252019] [] ? _raw_spin_unlock_irq+0x30/0x40
> [    5.252019] [] ? srcu_reschedule+0xe0/0xf0
> [    5.252019] [] dm_suspend+0xe9/0x1e0
> [    5.252019] [] dev_suspend+0x1e3/0x270
> [    5.252019] [] ? table_load+0x350/0x350
> [    5.252019] [] ctl_ioctl+0x26c/0x510
> [    5.252019] [] ? __lock_acquire+0x41c/0x1f60
> [    5.252019] [] ? vtime_account_user+0x98/0xb0
> [    5.252019] [] dm_ctl_ioctl+0x13/0x20
> [    5.252019] [] do_vfs_ioctl+0x88/0x570
> [    5.252019] [] ? __fget_light+0x129/0x150
> [    5.252019] [] SyS_ioctl+0x91/0xb0
> [    5.252019] [] tracesys+0xcf/0xd4
> [    5.252019] ---[ end trace ff1fa506f34be3bc ]---
>
> It seems to me that when alloc_workqueue() is called a second time from
> the same code path, there would be two locks with the same key but not
> the same &wq->name, which doesn't meet lockdep's assumption.

Dang... I reverted the previous patch for now.

Peter, does this approach sound good to you?

Thanks.

-- 
tejun