Message-ID: <53C1788D.9080800@oracle.com>
Date: Sat, 12 Jul 2014 14:03:57 -0400
From: Sasha Levin
To: paulmck@linux.vnet.ibm.com, Thomas Gleixner
CC: Christoph Lameter, Pekka Enberg, Matt Mackall, Andrew Morton,
    Dave Jones, "linux-mm@kvack.org", LKML
Subject: Re: slub/debugobjects: lockup when freeing memory
References: <53A2F406.4010109@oracle.com>
 <20140619165247.GA4904@linux.vnet.ibm.com>
 <20140619202928.GG4904@linux.vnet.ibm.com>
 <20140619205307.GL4904@linux.vnet.ibm.com>
 <20140619220449.GT4904@linux.vnet.ibm.com>
 <20140620154014.GC4904@linux.vnet.ibm.com>
In-Reply-To: <20140620154014.GC4904@linux.vnet.ibm.com>

On 06/20/2014 11:40 AM, Paul E. McKenney wrote:
>     rcu: Export debug_init_rcu_head() and debug_rcu_head_free()
>
>     Currently, call_rcu() relies on implicit allocation and initialization
>     for the debug-objects handling of RCU callbacks.  If you hammer the
>     kernel hard enough with Sasha's modified version of trinity, you can
>     end up with the sl*b allocators recursing into themselves via this
>     implicit call_rcu() allocation.
>
>     This commit therefore exports the debug_init_rcu_head() and
>     debug_rcu_head_free() functions, which permits the allocators to
>     allocate and pre-initialize the debug-objects information, so that
>     there is no longer any need for call_rcu() to do that initialization,
>     which in turn prevents the recursion into the memory allocators.
>
>     Reported-by: Sasha Levin
>     Suggested-by: Thomas Gleixner
>     Signed-off-by: Paul E. McKenney
>     Acked-by: Thomas Gleixner

Hi Paul,

Oddly enough, I still see the issue in -next (I made sure that this
patch was in the tree):

[ 393.810123] =============================================
[ 393.810123] [ INFO: possible recursive locking detected ]
[ 393.810123] 3.16.0-rc4-next-20140711-sasha-00046-g07d3099-dirty #813 Not tainted
[ 393.810123] ---------------------------------------------
[ 393.810123] trinity-c32/9762 is trying to acquire lock:
[ 393.810123]  (&(&n->list_lock)->rlock){-.-...}, at: get_partial_node.isra.39 (mm/slub.c:1628)
[ 393.810123]
[ 393.810123] but task is already holding lock:
[ 393.810123]  (&(&n->list_lock)->rlock){-.-...}, at: __kmem_cache_shutdown (mm/slub.c:3210 mm/slub.c:3233 mm/slub.c:3244)
[ 393.810123]
[ 393.810123] other info that might help us debug this:
[ 393.810123]  Possible unsafe locking scenario:
[ 393.810123]
[ 393.810123]        CPU0
[ 393.810123]        ----
[ 393.810123]   lock(&(&n->list_lock)->rlock);
[ 393.810123]   lock(&(&n->list_lock)->rlock);
[ 393.810123]
[ 393.810123]  *** DEADLOCK ***
[ 393.810123]
[ 393.810123]  May be due to missing lock nesting notation
[ 393.810123]
[ 393.810123] 5 locks held by trinity-c32/9762:
[ 393.810123]  #0: (net_mutex){+.+.+.}, at: copy_net_ns (net/core/net_namespace.c:254)
[ 393.810123]  #1: (cpu_hotplug.lock){++++++}, at: get_online_cpus (kernel/cpu.c:90)
[ 393.810123]  #2: (mem_hotplug.lock){.+.+.+}, at: get_online_mems (mm/memory_hotplug.c:83)
[ 393.810123]  #3: (slab_mutex){+.+.+.}, at: kmem_cache_destroy (mm/slab_common.c:344)
[ 393.810123]  #4: (&(&n->list_lock)->rlock){-.-...},
at: __kmem_cache_shutdown (mm/slub.c:3210 mm/slub.c:3233 mm/slub.c:3244)
[ 393.810123]
[ 393.810123] stack backtrace:
[ 393.810123] CPU: 32 PID: 9762 Comm: trinity-c32 Not tainted 3.16.0-rc4-next-20140711-sasha-00046-g07d3099-dirty #813
[ 393.843284]  ffff880bc26730e0 0000000000000000 ffffffffb4ae7ff0 ffff880bc26a3848
[ 393.843284]  ffffffffb0e47068 ffffffffb4ae7ff0 ffff880bc26a38f0 ffffffffac258586
[ 393.843284]  ffff880bc2673e30 000000050000000a ffffffffb444dee0 ffff880bc2673e48
[ 393.843284] Call Trace:
[ 393.843284] dump_stack (lib/dump_stack.c:52)
[ 393.843284] __lock_acquire (kernel/locking/lockdep.c:1739 kernel/locking/lockdep.c:1783 kernel/locking/lockdep.c:2115 kernel/locking/lockdep.c:3182)
[ 393.843284] lock_acquire (kernel/locking/lockdep.c:3602)
[ 393.843284] ? get_partial_node.isra.39 (mm/slub.c:1628)
[ 393.843284] _raw_spin_lock (include/linux/spinlock_api_smp.h:143 kernel/locking/spinlock.c:151)
[ 393.843284] ? get_partial_node.isra.39 (mm/slub.c:1628)
[ 393.843284] get_partial_node.isra.39 (mm/slub.c:1628)
[ 393.843284] ? check_irq_usage (kernel/locking/lockdep.c:1638)
[ 393.843284] ? __slab_alloc (mm/slub.c:2307)
[ 393.843284] ? __this_cpu_preempt_check (lib/smp_processor_id.c:63)
[ 393.843284] __slab_alloc (mm/slub.c:1730 mm/slub.c:2208 mm/slub.c:2372)
[ 393.843284] ? __debug_object_init (lib/debugobjects.c:100 lib/debugobjects.c:312)
[ 393.843284] ? kvm_clock_read (./arch/x86/include/asm/preempt.h:90 arch/x86/kernel/kvmclock.c:86)
[ 393.843284] ? sched_clock (./arch/x86/include/asm/paravirt.h:192 arch/x86/kernel/tsc.c:304)
[ 393.843284] kmem_cache_alloc (mm/slub.c:2445 mm/slub.c:2487 mm/slub.c:2492)
[ 393.843284] ? debug_smp_processor_id (lib/smp_processor_id.c:57)
[ 393.843284] ? __debug_object_init (lib/debugobjects.c:100 lib/debugobjects.c:312)
[ 393.843284] ? check_chain_key (kernel/locking/lockdep.c:2188)
[ 393.843284] __debug_object_init (lib/debugobjects.c:100 lib/debugobjects.c:312)
[ 393.843284] ? _raw_spin_unlock_irqrestore (include/linux/spinlock_api_smp.h:160 kernel/locking/spinlock.c:191)
[ 393.843284] ? __this_cpu_preempt_check (lib/smp_processor_id.c:63)
[ 393.843284] debug_object_init (lib/debugobjects.c:365)
[ 393.843284] rcuhead_fixup_activate (kernel/rcu/update.c:260)
[ 393.843284] debug_object_activate (lib/debugobjects.c:280 lib/debugobjects.c:439)
[ 393.843284] ? preempt_count_sub (kernel/sched/core.c:2600)
[ 393.843284] ? slab_cpuup_callback (mm/slub.c:1484)
[ 393.843284] __call_rcu (kernel/rcu/rcu.h:76 (discriminator 8) kernel/rcu/tree.c:2665 (discriminator 8))
[ 393.843284] ? __kmem_cache_shutdown (mm/slub.c:3210 mm/slub.c:3233 mm/slub.c:3244)
[ 393.843284] call_rcu (kernel/rcu/tree_plugin.h:679)
[ 393.843284] discard_slab (mm/slub.c:1522)
[ 393.843284] __kmem_cache_shutdown (mm/slub.c:3210 mm/slub.c:3233 mm/slub.c:3244)
[ 393.843284] kmem_cache_destroy (mm/slab_common.c:350)
[ 393.843284] nf_conntrack_cleanup_net_list (net/netfilter/nf_conntrack_core.c:1569 (discriminator 3))
[ 393.843284] nf_conntrack_pernet_exit (net/netfilter/nf_conntrack_standalone.c:558)
[ 393.843284] ops_exit_list.isra.1 (net/core/net_namespace.c:135)
[ 393.843284] setup_net (net/core/net_namespace.c:180 (discriminator 3))
[ 393.843284] copy_net_ns (net/core/net_namespace.c:255)
[ 393.843284] create_new_namespaces (kernel/nsproxy.c:95)
[ 393.843284] unshare_nsproxy_namespaces (kernel/nsproxy.c:190 (discriminator 4))
[ 393.843284] SyS_unshare (kernel/fork.c:1865 kernel/fork.c:1814)
[ 393.843284] tracesys (arch/x86/kernel/entry_64.S:542)

Thanks,
Sasha