Date: Thu, 8 Apr 2010 22:31:23 +0200
From: Borislav Petkov
To: Linus Torvalds
Cc: Rik van Riel, KOSAKI Motohiro, Andrew Morton, Minchan Kim,
	Linux Kernel Mailing List, Lee Schermerhorn, Nick Piggin,
	Andrea Arcangeli, Hugh Dickins, sgunderson@bigfoot.com,
	hannes@cmpxchg.org
Subject: Re: [PATCH -v2] rmap: make anon_vma_prepare link in all the
	anon_vmas of a mergeable VMA
Message-ID: <20100408203123.GA24632@a1.tnic>
References: <20100408101925.FB9F.A69D9226@jp.fujitsu.com>
	<20100408054707.GA9299@a1.tnic> <4BBE1F92.3060802@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
User-Agent: Mutt/1.5.20 (2009-06-14)

Here we go, another night of testing starts... got more caffeine this
time :)

On Thu, Apr 08, 2010 at 11:32:06AM -0700, Linus Torvalds wrote:
> > I haven't seen any places that insert VMAs by itself.
> > Several strange places that allocate them, but they
> > all appear to use the standard functions to insert them.
>
> Yeah, it's complicated enough to add a vma with all the rbtree etc stuff
> that I hope nobody actually cooks their own. But I too grepped for vma
> allocations, and there were more of them than I expected, so...
and of course, I just hit that WARN_ONCE on the first suspend (it did
suspend ok though):

[   88.078958] ------------[ cut here ]------------
[   88.079007] WARNING: at mm/memory.c:3110 handle_mm_fault+0x56/0x67c()
[   88.079032] Hardware name: System Product Name
[   88.079056] Mapping with no anon_vma
[   88.079082] Modules linked in: powernow_k8 cpufreq_ondemand cpufreq_powersave cpufreq_userspace freq_table cpufreq_conservative binfmt_misc kvm_amd kvm ipv6 vfat fat dm_crypt dm_mod k10temp 8250_pnp 8250 serial_core edac_core ohci_hcd pcspkr
[   88.079637] Pid: 1965, comm: console-kit-dae Not tainted 2.6.34-rc3-00290-g2156db9 #7
[   88.079676] Call Trace:
[   88.079713]  [] warn_slowpath_common+0x7c/0x94
[   88.079744]  [] warn_slowpath_fmt+0x41/0x43
[   88.079774]  [] handle_mm_fault+0x56/0x67c
[   88.079805]  [] do_page_fault+0x30b/0x32d
[   88.079838]  [] ? put_lock_stats+0xe/0x27
[   88.079866]  [] ? lock_release_holdtime+0x104/0x109
[   88.079898]  [] ? error_sti+0x5/0x6
[   88.079929]  [] ? trace_hardirqs_off_thunk+0x3a/0x3c
[   88.079960]  [] page_fault+0x1f/0x30
[   88.079988] ---[ end trace 154dd7f6249e1cc3 ]---

and then sysfs triggered that lockdep circular locking warning - I
thought it was fixed already :(

[  256.831204] =======================================================
[  256.831210] [ INFO: possible circular locking dependency detected ]
[  256.831216] 2.6.34-rc3-00290-g2156db9 #7
[  256.831221] -------------------------------------------------------
[  256.831226] hib.sh/2464 is trying to acquire lock:
[  256.831231]  (s_active#80){++++.+}, at: [] sysfs_addrm_finish+0x36/0x5f
[  256.831250]
[  256.831252] but task is already holding lock:
[  256.831256]  (&per_cpu(cpu_policy_rwsem, cpu)){+++++.}, at: [] lock_policy_rwsem_write+0x4f/0x80
[  256.831271]
[  256.831273] which lock already depends on the new lock.
[  256.831275]
[  256.831278]
[  256.831280] the existing dependency chain (in reverse order) is:
[  256.831284]
[  256.831286] -> #1 (&per_cpu(cpu_policy_rwsem, cpu)){+++++.}:
[  256.831294]        [] __lock_acquire+0x1306/0x169f
[  256.831305]        [] lock_acquire+0xf2/0x118
[  256.831314]        [] down_read+0x4c/0x91
[  256.831323]        [] lock_policy_rwsem_read+0x4f/0x80
[  256.831332]        [] show+0x38/0x71
[  256.831341]        [] sysfs_read_file+0xb9/0x13e
[  256.831348]        [] vfs_read+0xaf/0x150
[  256.831357]        [] sys_read+0x4a/0x71
[  256.831364]        [] system_call_fastpath+0x16/0x1b
[  256.831375]
[  256.831376] -> #0 (s_active#80){++++.+}:
[  256.831385]        [] __lock_acquire+0xfbd/0x169f
[  256.831385]        [] lock_acquire+0xf2/0x118
[  256.831385]        [] sysfs_deactivate+0x91/0xe6
[  256.831385]        [] sysfs_addrm_finish+0x36/0x5f
[  256.831385]        [] sysfs_remove_dir+0x7a/0x8d
[  256.831385]        [] kobject_del+0x16/0x37
[  256.831385]        [] kobject_release+0x3e/0x66
[  256.831385]        [] kref_put+0x43/0x4d
[  256.831385]        [] kobject_put+0x47/0x4b
[  256.831385]        [] __cpufreq_remove_dev+0x1e5/0x241
[  256.831385]        [] cpufreq_cpu_callback+0x67/0x7f
[  256.831385]        [] notifier_call_chain+0x37/0x63
[  256.831385]        [] __raw_notifier_call_chain+0xe/0x10
[  256.831385]        [] _cpu_down+0x98/0x2a6
[  256.831385]        [] disable_nonboot_cpus+0x74/0x10d
[  256.831385]        [] hibernation_snapshot+0xac/0x1e1
[  256.831385]        [] hibernate+0xce/0x172
[  256.831385]        [] state_store+0x5c/0xd3
[  256.831385]        [] kobj_attr_store+0x17/0x19
[  256.831385]        [] sysfs_write_file+0x108/0x144
[  256.831385]        [] vfs_write+0xb2/0x153
[  256.831385]        [] sys_write+0x4a/0x71
[  256.831385]        [] system_call_fastpath+0x16/0x1b
[  256.831385]
[  256.831385] other info that might help us debug this:
[  256.831385]
[  256.831385] 6 locks held by hib.sh/2464:
[  256.831385]  #0:  (&buffer->mutex){+.+.+.}, at: [] sysfs_write_file+0x3c/0x144
[  256.831385]  #1:  (s_active#49){.+.+.+}, at: [] sysfs_write_file+0xe7/0x144
[  256.831385]  #2:  (pm_mutex){+.+.+.}, at: [] hibernate+0x1c/0x172
[  256.831385]  #3:  (cpu_add_remove_lock){+.+.+.}, at: [] cpu_maps_update_begin+0x17/0x19
[  256.831385]  #4:  (cpu_hotplug.lock){+.+.+.}, at: [] cpu_hotplug_begin+0x2c/0x53
[  256.831385]  #5:  (&per_cpu(cpu_policy_rwsem, cpu)){+++++.}, at: [] lock_policy_rwsem_write+0x4f/0x80
[  256.831385]
[  256.831385] stack backtrace:
[  256.831385] Pid: 2464, comm: hib.sh Tainted: G        W   2.6.34-rc3-00290-g2156db9 #7
[  256.831385] Call Trace:
[  256.831385]  [] print_circular_bug+0xae/0xbd
[  256.831385]  [] __lock_acquire+0xfbd/0x169f
[  256.831385]  [] ? sysfs_addrm_finish+0x36/0x5f
[  256.831385]  [] lock_acquire+0xf2/0x118
[  256.831385]  [] ? sysfs_addrm_finish+0x36/0x5f
[  256.831385]  [] sysfs_deactivate+0x91/0xe6
[  256.831385]  [] ? sysfs_addrm_finish+0x36/0x5f
[  256.831385]  [] ? trace_hardirqs_on+0xd/0xf
[  256.831385]  [] ? release_sysfs_dirent+0x89/0xa9
[  256.831385]  [] sysfs_addrm_finish+0x36/0x5f
[  256.831385]  [] sysfs_remove_dir+0x7a/0x8d
[  256.831385]  [] kobject_del+0x16/0x37
[  256.831385]  [] kobject_release+0x3e/0x66
[  256.831385]  [] ? kobject_release+0x0/0x66
[  256.831385]  [] kref_put+0x43/0x4d
[  256.831385]  [] kobject_put+0x47/0x4b
[  256.831385]  [] __cpufreq_remove_dev+0x1e5/0x241
[  256.831385]  [] cpufreq_cpu_callback+0x67/0x7f
[  256.831385]  [] notifier_call_chain+0x37/0x63
[  256.831385]  [] __raw_notifier_call_chain+0xe/0x10
[  256.831385]  [] _cpu_down+0x98/0x2a6
[  256.831385]  [] disable_nonboot_cpus+0x74/0x10d
[  256.831385]  [] hibernation_snapshot+0xac/0x1e1
[  256.831385]  [] hibernate+0xce/0x172
[  256.831385]  [] state_store+0x5c/0xd3
[  256.831385]  [] kobj_attr_store+0x17/0x19
[  256.831385]  [] sysfs_write_file+0x108/0x144
[  256.831385]  [] vfs_write+0xb2/0x153
[  256.831385]  [] ? trace_hardirqs_on_caller+0x120/0x14b
[  256.831385]  [] sys_write+0x4a/0x71
[  256.831385]  [] system_call_fastpath+0x16/0x1b

--
Regards/Gruss,
Boris.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/