Subject: Re: Linux 2.6.20-rc6 - suspend lockdep warning
From: Thomas Gleixner
Reply-To: tglx@linutronix.de
To: Linus Torvalds
Cc: Linux Kernel Mailing List, Ingo Molnar, Arjan van de Ven
Date: Sat, 27 Jan 2007 21:47:34 +0100
Message-Id: <1169930854.17469.114.camel@localhost.localdomain>

On Wed, 2007-01-24 at 18:58 -0800, Linus Torvalds wrote:
> It's been more than a week since -rc5, but I blame everybody (including
> me) being away for Linux.conf.au and then me waiting for a few days
> afterwards to let everybody sync up.

2.6.20-rc6-git (today) on a dual core laptop:

PM: Preparing system for mem sleep
Disabling non-boot CPUs ...

=======================================================
[ INFO: possible circular locking dependency detected ]
2.6.20-rc6 #3
-------------------------------------------------------
pm-suspend/3601 is trying to acquire lock:
 (cpu_bitmask_lock){--..}, at: [] mutex_lock+0x1c/0x1f

but task is already holding lock:
 (workqueue_mutex){--..}, at: [] mutex_lock+0x1c/0x1f

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #3 (workqueue_mutex){--..}:
       [] __lock_acquire+0x8dd/0xa04
       [] lock_acquire+0x56/0x6f
       [] __mutex_lock_slowpath+0xe5/0x274
       [] mutex_lock+0x1c/0x1f
       [] __create_workqueue+0x61/0x136
       [] cpufreq_governor_dbs+0xa1/0x30e [cpufreq_ondemand]
       [] __cpufreq_governor+0x9e/0xd2
       [] __cpufreq_set_policy+0x187/0x209
       [] store_scaling_governor+0x164/0x1b1
       [] store+0x37/0x48
       [] sysfs_write_file+0xb3/0xdb
       [] vfs_write+0xaf/0x163
       [] sys_write+0x3d/0x61
       [] sysenter_past_esp+0x5d/0x99
       [] 0xffffffff

-> #2 (dbs_mutex){--..}:
       [] __lock_acquire+0x8dd/0xa04
       [] lock_acquire+0x56/0x6f
       [] __mutex_lock_slowpath+0xe5/0x274
       [] mutex_lock+0x1c/0x1f
       [] cpufreq_governor_dbs+0x85/0x30e [cpufreq_ondemand]
       [] __cpufreq_governor+0x9e/0xd2
       [] __cpufreq_set_policy+0x187/0x209
       [] store_scaling_governor+0x164/0x1b1
       [] store+0x37/0x48
       [] sysfs_write_file+0xb3/0xdb
       [] vfs_write+0xaf/0x163
       [] sys_write+0x3d/0x61
       [] sysenter_past_esp+0x5d/0x99
       [] 0xffffffff

-> #1 (&policy->lock){--..}:
       [] __lock_acquire+0x8dd/0xa04
       [] lock_acquire+0x56/0x6f
       [] __mutex_lock_slowpath+0xe5/0x274
       [] mutex_lock+0x1c/0x1f
       [] cpufreq_set_policy+0x29/0x79
       [] cpufreq_add_dev+0x342/0x48a
       [] sysdev_driver_register+0x5f/0xa9
       [] cpufreq_register_driver+0xac/0x175
       [] centrino_init+0x9b/0xa2
       [] init+0x11b/0x2c8
       [] kernel_thread_helper+0x7/0x10
       [] 0xffffffff

-> #0 (cpu_bitmask_lock){--..}:
       [] __lock_acquire+0x7de/0xa04
       [] lock_acquire+0x56/0x6f
       [] __mutex_lock_slowpath+0xe5/0x274
       [] mutex_lock+0x1c/0x1f
       [] lock_cpu_hotplug+0x6c/0x78
       [] cpufreq_driver_target+0x28/0x5e
       [] cpufreq_cpu_callback+0x42/0x52
       [] notifier_call_chain+0x20/0x31
       [] raw_notifier_call_chain+0x8/0xa
       [] _cpu_down+0x47/0x1fb
       [] disable_nonboot_cpus+0x7b/0x100
       [] enter_state+0x91/0x1bb
       [] state_store+0x86/0x9c
       [] subsys_attr_store+0x20/0x25
       [] sysfs_write_file+0xb3/0xdb
       [] vfs_write+0xaf/0x163
       [] sys_write+0x3d/0x61
       [] sysenter_past_esp+0x5d/0x99
       [] 0xffffffff

other info that might help us debug this:

4 locks held by pm-suspend/3601:
 #0:  (pm_mutex){--..}, at: [] enter_state+0x40/0x1bb
 #1:  (cpu_add_remove_lock){--..}, at: [] mutex_lock+0x1c/0x1f
 #2:  (cache_chain_mutex){--..}, at: [] mutex_lock+0x1c/0x1f
 #3:  (workqueue_mutex){--..}, at: [] mutex_lock+0x1c/0x1f

stack backtrace:
 [] show_trace_log_lvl+0x1a/0x2f
 [] show_trace+0x12/0x14
 [] dump_stack+0x16/0x18
 [] print_circular_bug_tail+0x5f/0x68
 [] __lock_acquire+0x7de/0xa04
 [] lock_acquire+0x56/0x6f
 [] __mutex_lock_slowpath+0xe5/0x274
 [] mutex_lock+0x1c/0x1f
 [] lock_cpu_hotplug+0x6c/0x78
 [] cpufreq_driver_target+0x28/0x5e
 [] cpufreq_cpu_callback+0x42/0x52
 [] notifier_call_chain+0x20/0x31
 [] raw_notifier_call_chain+0x8/0xa
 [] _cpu_down+0x47/0x1fb
 [] disable_nonboot_cpus+0x7b/0x100
 [] enter_state+0x91/0x1bb
 [] state_store+0x86/0x9c
 [] subsys_attr_store+0x20/0x25
 [] sysfs_write_file+0xb3/0xdb
 [] vfs_write+0xaf/0x163
 [] sys_write+0x3d/0x61
 [] sysenter_past_esp+0x5d/0x99
=======================

Breaking affinity for irq 1
Breaking affinity for irq 12
Breaking affinity for irq 21
Breaking affinity for irq 22
Breaking affinity for irq 219
CPU 1 is now offline
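For readers not fluent in lockdep reports: the cycle above reduces to a
classic lock-order inversion. The suspend path takes workqueue_mutex and
then wants cpu_bitmask_lock; the cpufreq/sysfs path establishes the
opposite ordering (via dbs_mutex and policy->lock). Below is a minimal
user-space sketch of that reduction, with pthread mutexes standing in for
the kernel locks; the names mirror two of the four locks involved but this
is an illustration, not the kernel code paths themselves.

/*
 * Reduced two-lock version of the inversion lockdep flags above.
 * Thread A mimics the suspend path (workqueue_mutex, then
 * cpu_bitmask_lock); thread B mimics the cpufreq path, which reaches
 * the same two locks in the opposite order through two intermediate
 * mutexes. Run often enough, the threads deadlock; lockdep reports
 * the *possibility* the first time both orderings are observed.
 *
 * Build: gcc -pthread inversion.c
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t workqueue_mutex  = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t cpu_bitmask_lock = PTHREAD_MUTEX_INITIALIZER;

/* suspend path: workqueue_mutex -> cpu_bitmask_lock */
static void *suspend_path(void *arg)
{
	pthread_mutex_lock(&workqueue_mutex);
	pthread_mutex_lock(&cpu_bitmask_lock);	/* blocks if B holds it */
	pthread_mutex_unlock(&cpu_bitmask_lock);
	pthread_mutex_unlock(&workqueue_mutex);
	return NULL;
}

/* cpufreq path: cpu_bitmask_lock -> workqueue_mutex */
static void *cpufreq_path(void *arg)
{
	pthread_mutex_lock(&cpu_bitmask_lock);
	pthread_mutex_lock(&workqueue_mutex);	/* blocks if A holds it */
	pthread_mutex_unlock(&workqueue_mutex);
	pthread_mutex_unlock(&cpu_bitmask_lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, suspend_path, NULL);
	pthread_create(&b, NULL, cpufreq_path, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	printf("no deadlock this run, but the ordering cycle exists\n");
	return 0;
}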