From: Nick Piggin
To: Dave Chinner
Cc: Nick Piggin, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, Peter Zijlstra
Date: Wed, 8 Dec 2010 02:24:23 +1100
Subject: Re: [PATCH 00/46] rcu-walk and dcache scaling

On Tue, Dec 7, 2010 at 10:25 PM, Dave Chinner wrote:
> On Sat, Nov 27, 2010 at 09:15:58PM +1100, Nick Piggin wrote:
>>
>> git://git.kernel.org/pub/scm/linux/kernel/git/npiggin/linux-npiggin.git vfs-scale-working
>>
>> Here is a new set of vfs patches for review, not that there was much
>> interest last time they were posted. It is structured like:
>>
>> * preparation patches
>> * introduce new locks to take over dcache_lock, then remove it
>> * clean up and rework things for the new locks
>> * rcu-walk path walking
>> * start on some fine grained locking steps
>
> Just got this set of traces doing an 8-way parallel remove of 50
> million inodes, at about 40M inodes unlinked:

Thanks for testing...

> [ 5954.061633] BUG: sleeping function called from invalid context at arch/x86/mm/fault.c:1081
> [ 5954.062466] in_atomic(): 0, irqs_disabled(): 1, pid: 2927, name: rm
> [ 5954.063122] 3 locks held by rm/2927:
> [ 5954.063476]  #0:  (&sb->s_type->i_mutex_key#12/1){+.+.+.}, at: [] do_rmdir+0x81/0x130
> [ 5954.064014]  #1:  (&sb->s_type->i_mutex_key#12){+.+.+.}, at: [] vfs_rmdir+0x58/0xe0
> [ 5954.064014]  #2:  (rcu_read_lock){.+.+..}, at: [] shrink_dentry_list+0x0/0x430
> [ 5954.064014] irq event stamp: 1484376719
> [ 5954.064014] hardirqs last  enabled at (1484376719): [] __call_rcu+0xd7/0x1a0
> [ 5954.064014] hardirqs last disabled at (1484376718): [] __call_rcu+0x4a/0x1a0
> [ 5954.064014] softirqs last  enabled at (1484376586): [] __do_softirq+0x161/0x270
> [ 5954.064014] softirqs last disabled at (1484376581): [] call_softirq+0x1c/0x50
> [ 5954.064014] Pid: 2927, comm: rm Not tainted 2.6.37-rc4-dgc+ #794
> [ 5954.064014] Call Trace:
> [ 5954.064014]  [] ? print_irqtrace_events+0xd0/0xe0
> [ 5954.064014]  [] __might_sleep+0xf5/0x130
> [ 5954.064014]  [] do_page_fault+0x103/0x4f0
> [ 5954.064014]  [] ? pvclock_clocksource_read+0x58/0xd0
> [ 5954.064014]  [] ? pvclock_clocksource_read+0x58/0xd0
> [ 5954.064014]  [] ? pvclock_clocksource_read+0x58/0xd0
> [ 5954.064014]  [] ? trace_hardirqs_off_thunk+0x3a/0x3c
> [ 5954.064014]  [] ? usage_match+0x0/0x20
> [ 5954.064014]  [] page_fault+0x25/0x30
> [ 5954.064014]  [] ? usage_match+0x0/0x20
> [ 5954.064014]  [] ? __bfs+0xc8/0x260
> [ 5954.064014]  [] ? __bfs+0x23/0x260
> [ 5954.064014]  [] find_usage_backwards+0x42/0x80
> [ 5954.064014]  [] check_usage_backwards+0x64/0xf0
> [ 5954.064014]  [] ? save_stack_trace+0x2f/0x50
> [ 5954.064014]  [] ? check_usage_backwards+0x0/0xf0
> [ 5954.064014]  [] mark_lock+0x1a9/0x440
> [ 5954.064014]  [] __lock_acquire+0x5a9/0x14b0
> [ 5954.064014]  [] ? __lock_acquire+0x336/0x14b0
> [ 5954.064014]  [] ? pvclock_clocksource_read+0x58/0xd0
> [ 5954.064014]  [] lock_acquire+0xb4/0x140
> [ 5954.064014]  [] ? shrink_dentry_list+0x5c/0x430
> [ 5954.064014]  [] _raw_spin_lock+0x36/0x70
> [ 5954.064014]  [] ? shrink_dentry_list+0x5c/0x430
> [ 5954.064014]  [] shrink_dentry_list+0x5c/0x430
> [ 5954.064014]  [] ? shrink_dentry_list+0x0/0x430
> [ 5954.064014]  [] ? do_raw_spin_unlock+0x5e/0xb0
> [ 5954.064014]  [] __shrink_dcache_sb+0x13d/0x1c0
> [ 5954.064014]  [] shrink_dcache_parent+0x32f/0x390
> [ 5954.064014]  [] dentry_unhash+0x3d/0x70
> [ 5954.064014]  [] vfs_rmdir+0x60/0xe0
> [ 5954.064014]  [] do_rmdir+0x113/0x130
> [ 5954.064014]  [] ? sysret_check+0x2e/0x69
> [ 5954.064014]  [] ? trace_hardirqs_on_thunk+0x3a/0x3f
> [ 5954.064014]  [] sys_unlinkat+0x35/0x40
> [ 5954.064014]  [] system_call_fastpath+0x16/0x1b

Seems that lockdep exploded.
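(For anyone not familiar with lockdep internals: __bfs() is lockdep's
breadth-first search over the lock dependency graph. A toy userspace
model of that kind of walk -- names and layout are mine, not the
kernel's, which queues struct lock_list entries -- shows how a
corrupted graph entry turns straight into a NULL dereference:

	#include <stdio.h>
	#include <string.h>

	#define QSZ 64

	struct dep {
		const char *name;
		struct dep **deps;	/* NULL-terminated dependency list */
	};

	/* Breadth-first walk with a small circular queue, much as
	 * lockdep's __bfs() does.  Visited-marking is omitted; fine
	 * for this acyclic toy graph. */
	static const struct dep *bfs_find(struct dep *root, const char *want)
	{
		struct dep *queue[QSZ];
		unsigned int head = 0, tail = 0;
		struct dep **child;
		struct dep *d;

		queue[tail++ % QSZ] = root;
		while (head != tail) {
			d = queue[head++ % QSZ];
			/* The walk trusts the graph: if an entry were
			 * corrupted to NULL, the d->name dereference
			 * below is where the kernel would oops. */
			if (strcmp(d->name, want) == 0)
				return d;
			for (child = d->deps; *child; child++)
				queue[tail++ % QSZ] = *child;
		}
		return NULL;
	}

	int main(void)
	{
		static struct dep *none[] = { NULL };
		static struct dep d_lock = { "d_lock", none };
		static struct dep *rcu_deps[] = { &d_lock, NULL };
		static struct dep rcu = { "rcu_read_lock", rcu_deps };
		static struct dep *mutex_deps[] = { &rcu, NULL };
		static struct dep i_mutex = { "i_mutex", mutex_deps };
		const struct dep *hit = bfs_find(&i_mutex, "d_lock");

		printf("found: %s\n", hit ? hit->name : "(nothing)");
		return 0;
	}

So a NULL pointer showing up inside that search points at the graph
itself having been corrupted, not at the locks being taken.)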
> [ 5954.092916] BUG: unable to handle kernel NULL pointer dereference at (null)
> [ 5954.093806] IP: [] __bfs+0xc8/0x260
> [ 5954.094331] PGD 1084e5067 PUD 102368067 PMD 0
> [ 5954.094830] Oops: 0000 [#1] SMP
> [ 5954.095194] last sysfs file: /sys/devices/system/cpu/online
> [ 5954.095760] CPU 6
> [ 5954.095954] Modules linked in:
> [ 5954.096319]
> [ 5954.096483] Pid: 2927, comm: rm Not tainted 2.6.37-rc4-dgc+ #794 /Bochs
> [ 5954.096665] RIP: 0010:[]  [] __bfs+0xc8/0x260
> [ 5954.096665] RSP: 0018:ffff8801175539a8  EFLAGS: 00010046
> [ 5954.096665] RAX: ffffffff8267d980 RBX: ffffffff8267d980 RCX: ffff880117553a48
> [ 5954.096665] RDX: ffff8801175539d0 RSI: 0000000000000000 RDI: ffff880117553a48
> [ 5954.096665] RBP: ffff880117553a08 R08: 0000000000000000 R09: 0000000000000000
> [ 5954.096665] R10: 0000000000000000 R11: 0000000000000001 R12: 0000000000000000
> [ 5954.096665] R13: ffffffff810b8e20 R14: ffff880117553a90 R15: 0000000000000000
> [ 5954.096665] FS:  00007f4594cf3700(0000) GS:ffff8800dfa00000(0000) knlGS:0000000000000000
> [ 5954.096665] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
> [ 5954.096665] CR2: 00007f2f21e89c60 CR3: 0000000110b0f000 CR4: 00000000000006e0
> [ 5954.096665] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> [ 5954.096665] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> [ 5954.096665] Process rm (pid: 2927, threadinfo ffff880117552000, task ffff88010ff10b00)
> [ 5954.096665] Stack:
> [ 5954.096665]  ffffffff8267d868 00007fffa02d1428 ffff8800ffffffff ffff880100000000
> [ 5954.096665]  000000000000b720 ffff880117553a48 ffffffff8267d868 ffff880117553a48
> [ 5954.096665]  0000000000000000 ffff88010ff10b00 0000000000000000 ffffffff81dacba0
> [ 5954.096665] Call Trace:
> [ 5954.096665]  [] find_usage_backwards+0x42/0x80
> [ 5954.096665]  [] check_usage_backwards+0x64/0xf0
> [ 5954.096665]  [] ? save_stack_trace+0x2f/0x50
> [ 5954.096665]  [] ? check_usage_backwards+0x0/0xf0
> [ 5954.096665]  [] mark_lock+0x1a9/0x440
> [ 5954.096665]  [] __lock_acquire+0x5a9/0x14b0
> [ 5954.096665]  [] ? __lock_acquire+0x336/0x14b0
> [ 5954.096665]  [] ? pvclock_clocksource_read+0x58/0xd0
> [ 5954.096665]  [] lock_acquire+0xb4/0x140
> [ 5954.096665]  [] ? shrink_dentry_list+0x5c/0x430
> [ 5954.096665]  [] _raw_spin_lock+0x36/0x70
> [ 5954.096665]  [] ? shrink_dentry_list+0x5c/0x430
> [ 5954.096665]  [] shrink_dentry_list+0x5c/0x430
> [ 5954.096665]  [] ? shrink_dentry_list+0x0/0x430
> [ 5954.096665]  [] ? do_raw_spin_unlock+0x5e/0xb0
> [ 5954.096665]  [] __shrink_dcache_sb+0x13d/0x1c0
> [ 5954.096665]  [] shrink_dcache_parent+0x32f/0x390
> [ 5954.096665]  [] dentry_unhash+0x3d/0x70
> [ 5954.096665]  [] vfs_rmdir+0x60/0xe0
> [ 5954.096665]  [] do_rmdir+0x113/0x130
> [ 5954.096665]  [] ? sysret_check+0x2e/0x69
> [ 5954.096665]  [] ? trace_hardirqs_on_thunk+0x3a/0x3f
> [ 5954.096665]  [] sys_unlinkat+0x35/0x40
> [ 5954.096665]  [] system_call_fastpath+0x16/0x1b
> [ 5954.096665] Code: 0a 89 05 dc 0f a8 01 48 8b 41 10 48 85 c0 0f 84 1f 01 00 00 48 8d 98 70 01 00 00 48 05 80 01 00 00 45 85 c0 48 0f 44 d8 4c 8b 3b <49> 8b 07 49 39
> [ 5954.096665] RIP  [] __bfs+0xc8/0x260
> [ 5954.096665]  RSP
> [ 5954.096665] CR2: 0000000000000000
> [ 5954.127991] ---[ end trace 85a6727c2d4e3d90 ]---
So the vfs-scale-working branch may not be entirely in the clear,
seeing as it touches the code lower in the call chain. However, I
don't know what could cause lockdep to go off the rails like this.

There is a sequence I used to hack around lockdep nesting
restrictions, following this pattern:

repeat:
	spin_lock(&parent->d_lock);
	spin_lock_nested(&dentry->d_lock, DENTRY_D_LOCK_NESTED);
	/* do stuff */
	spin_unlock(&parent->d_lock);
	spin_release(&dentry->d_lock.dep_map, 1, _RET_IP_);
	parent = dentry;
	spin_acquire(&parent->d_lock.dep_map, 0, 1, _RET_IP_);
	goto repeat;

It's not directly in this call chain, but I wonder if it could have
caused any problem?
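(To spell out what that abbreviated pattern is doing, here is a
minimal sketch of the descent it comes from. Note the abbreviation
above glosses over the fact that the new parent's lock is already
held when control returns to repeat:, so only the child is locked
each iteration. pick_next_child() is a made-up helper, and dentry
refcounting, trylock failures, and rename_lock handling are all
omitted:

	/* Hand-over-hand descent through d_lock: at most two d_locks
	 * are held at once, with the child nested inside the parent
	 * via a lockdep subclass. */
	static void walk_down(struct dentry *parent)
	{
		struct dentry *dentry;

		spin_lock(&parent->d_lock);
	repeat:
		dentry = pick_next_child(parent);	/* hypothetical */
		if (!dentry) {
			spin_unlock(&parent->d_lock);
			return;
		}
		spin_lock_nested(&dentry->d_lock, DENTRY_D_LOCK_NESTED);

		/* ... do stuff with parent and dentry ... */

		spin_unlock(&parent->d_lock);

		/*
		 * Descend: the child becomes the new parent.  Its
		 * spinlock stays held the whole time, but lockdep
		 * knows it as the nested (subclass) acquisition, and
		 * nesting another child under *that* would exceed
		 * lockdep's fixed subclass depth.  So drop it from
		 * lockdep's view only, and immediately re-acquire it
		 * at subclass 0; the hardware lock is never released.
		 */
		spin_release(&dentry->d_lock.dep_map, 1, _RET_IP_);
		parent = dentry;
		spin_acquire(&parent->d_lock.dep_map, 0, 1, _RET_IP_);
		goto repeat;
	}

The spin_release/spin_acquire pair touches only lockdep's
bookkeeping, which is what makes it a candidate for confusing the
dependency graph if anything about it is wrong.)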