Date: Tue, 22 May 2007 14:01:03 +0100 (BST)
From: Hugh Dickins
To: Srihari Vijayaraghavan
cc: Ingo Molnar, Christoph Lameter, Oliver Xymoron, Andrew Morton, Jens Axboe, linux-kernel@vger.kernel.org
Subject: Re: [PROBLEM] 2.6.22-rc2 panics on x86-64 with slub
In-Reply-To: <559665.2852.qm@web52608.mail.re2.yahoo.com>

On Tue, 22 May 2007, Srihari Vijayaraghavan wrote:
> --- Ingo Molnar wrote:
> > * Srihari Vijayaraghavan wrote:
> > > Yup, with CONFIG_SMP=n, I'm unable to reproduce the problem. It's
> > > quite stable actually (having completed a dozen kernel compile
> > > sessions so far).
> > [...]
> > could you enable CONFIG_PROVE_LOCKING - does it spit out any warning
> > into the syslog?
>
> Compiled slub with SMP & CONFIG_PROVE_LOCKING. No luck. It still hangs solid
> after the second spinlock lockup call trace.
>
> Here's the relevant sections of the kernel logs:
>
> ...
> Freeing unused kernel memory: 228k freed
> BUG: spinlock bad magic on CPU#1, init/1
> lock: ffff81011f5f1100, .magic: ffff8101, .owner: /-1, .owner_cpu: -1
>
> Call Trace:
>  [] _raw_spin_lock+0x22/0xf6
>  [] vma_adjust+0x21c/0x446
>  [] vma_adjust+0x21c/0x446
>  [] vma_merge+0x10c/0x195
>  [] do_mmap_pgoff+0x3f5/0x794
>  [] _spin_unlock_irq+0x24/0x27
>  [] sys_mmap+0xe5/0x110
>  [] system_call+0x7e/0x83
> ...
> PM: Adding info for No Bus:vcsa1
> BUG: spinlock lockup on CPU#1, hostname/369, ffff81011f5f1fc0
>
> Call Trace:
>  [] _raw_spin_lock+0xcf/0xf6
>  [] anon_vma_unlink+0x1c/0x68
>  [] anon_vma_unlink+0x1c/0x68
>  [] free_pgtables+0x69/0xc4
>  [] exit_mmap+0x91/0xeb
>  [] mmput+0x2c/0x9f
>  [] do_exit+0x22e/0x82e
>  [] sys_exit_group+0x0/0xe
>  [] system_call+0x7e/0x83
>
> Surprisingly, with CONFIG_SMP=n, CONFIG_PROVE_LOCKING produces this with slub
> (then hangs solid):
>
> Freeing unused kernel memory: 188k freed
> BUG: spinlock lockup on CPU#0, init/1, ffff81011e9d3160
>
> Call Trace:
>  [] _raw_spin_lock+0xca/0xe8
>  [] vma_adjust+0x218/0x442
>  [] vma_adjust+0x218/0x442
>  [] vma_merge+0x10c/0x195
>  [] do_mmap_pgoff+0x3f5/0x790
>  [] _spin_unlock_irq+0x24/0x27
>  [] sys_mmap+0xe5/0x110
>  [] system_call+0x7e/0x83
>
> To recap:
> 1. No problems with slub on CONFIG_SMP=n & CONFIG_PROVE_LOCKING=n
> 2. Problem with slub on CONFIG_SMP=n & CONFIG_PROVE_LOCKING=y (perhaps a. some
>    locking issue when slub is activated or b. something is wrong with 'prove
>    locking' mechanism when slub is activated or c. something else I don't see)
> 3. Problem with slub on CONFIG_SMP=y (even without CONFIG_PROVE_LOCKING=y)

You've made no mention of trying the patch I sent yesterday, or better,
the patch Christoph replied with to replace it.  Please clarify whether
you're getting the above after applying one of those patches - thanks.
Hugh
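
For readers puzzling over the "spinlock bad magic" report in the traces
above, here is a minimal userspace sketch of the idea behind that debug
check (the kernel's real check lives behind CONFIG_DEBUG_SPINLOCK; the
struct, names, magic constant and output format below are simplified
illustrations, not the kernel's actual implementation).  A pointer-like
value such as ".magic: ffff8101" is exactly what this kind of check flags
when a lock was never initialised or its memory has been overwritten or
recycled - consistent with the anon_vma locks in the traces above.

/*
 * Sketch of a magic-value sanity check on a lock structure, in the
 * spirit of the kernel's spinlock debugging.  All names here are
 * illustrative only.
 */
#include <stdio.h>
#include <stdint.h>

#define LOCK_MAGIC 0xdead4eadU   /* known pattern written at init time */

struct dbg_lock {
	volatile int locked;
	uint32_t magic;          /* set at init, verified before each use */
	int owner_cpu;
};

static void dbg_lock_init(struct dbg_lock *l)
{
	l->locked = 0;
	l->magic = LOCK_MAGIC;
	l->owner_cpu = -1;
}

/* The check that produces a "bad magic" style report. */
static void dbg_lock_check(const struct dbg_lock *l, const char *who)
{
	if (l->magic != LOCK_MAGIC)
		printf("BUG: lock bad magic, %s: lock %p, .magic: %08x, .owner_cpu: %d\n",
		       who, (const void *)l, (unsigned int)l->magic, l->owner_cpu);
}

int main(void)
{
	struct dbg_lock good, bad;

	dbg_lock_init(&good);
	dbg_lock_check(&good, "good");   /* silent: magic matches */

	/*
	 * For demonstration we skip dbg_lock_init() and force a
	 * pointer-looking value into .magic, mimicking a lock whose
	 * memory was reused or never initialised.
	 */
	bad.locked = 0;
	bad.magic = 0xffff8101;
	bad.owner_cpu = -1;
	dbg_lock_check(&bad, "bad");     /* prints the bad-magic report */

	return 0;
}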