Date: Fri, 22 Aug 2008 22:48:55 +0100
From: "Daniel J Blueman"
To: "Linux Kernel"
Subject: [2.6.27-rc4] SLUB list_lock vs obj_hash.lock...

When booting 2.6.27-rc4 with SLUB and debug_objects=1, we see (after
some activity) a lock-ordering issue between obj_hash.lock and SLUB's
list_lock [1].

Thanks,
  Daniel

---

[1]

=======================================================
[ INFO: possible circular locking dependency detected ]
2.6.27-rc4-224c #1
-------------------------------------------------------
hald/4680 is trying to acquire lock:
 (&n->list_lock){++..}, at: [] add_partial+0x26/0x80

but task is already holding lock:
 (&obj_hash[i].lock){++..}, at: [] debug_object_free+0x5c/0x120

which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:

-> #1 (&obj_hash[i].lock){++..}:
       [] __lock_acquire+0xdb1/0x1150
       [] lock_acquire+0x91/0xc0
       [] _spin_lock_irqsave+0x49/0x90
       [] __debug_check_no_obj_freed+0x6e/0x170
       [] debug_check_no_obj_freed+0x15/0x20
       [] free_hot_cold_page+0x11f/0x240
       [] free_hot_page+0x10/0x20
       [] __free_pages+0x3d/0x50
       [] __free_slab+0x7e/0x90
       [] discard_slab+0x18/0x40
       [] kmem_cache_shrink+0x17b/0x220
       [] acpi_os_purge_cache+0xe/0x12
       [] acpi_purge_cached_objects+0x15/0x3d
       [] acpi_initialize_objects+0x4e/0x59
       [] acpi_init+0x91/0x226
       [] do_one_initcall+0x45/0x190
       [] kernel_init+0x145/0x1a2
       [] child_rip+0xa/0x11
       [] 0xffffffffffffffff

-> #0 (&n->list_lock){++..}:
       [] __lock_acquire+0xe95/0x1150
       [] lock_acquire+0x91/0xc0
       [] _spin_lock+0x36/0x70
       [] add_partial+0x26/0x80
       [] __slab_free+0x106/0x110
       [] kmem_cache_free+0xa7/0x110
       [] free_object+0x68/0xc0
       [] debug_object_free+0xb3/0x120
       [] schedule_timeout+0x7e/0xe0
       [] do_sys_poll+0x3b9/0x440
       [] sys_poll+0x38/0xa0
       [] system_call_fastpath+0x16/0x1b
       [] 0xffffffffffffffff

other info that might help us debug this:

1 lock held by hald/4680:
 #0:  (&obj_hash[i].lock){++..}, at: [] debug_object_free+0x5c/0x120

stack backtrace:
Pid: 4680, comm: hald Not tainted 2.6.27-rc4-224c #1

Call Trace:
 [] print_circular_bug_tail+0x9f/0xe0
 [] __lock_acquire+0xe95/0x1150
 [] lock_acquire+0x91/0xc0
 [] ? add_partial+0x26/0x80
 [] _spin_lock+0x36/0x70
 [] ? add_partial+0x26/0x80
 [] add_partial+0x26/0x80
 [] __slab_free+0x106/0x110
 [] kmem_cache_free+0xa7/0x110
 [] ? free_object+0x68/0xc0
 [] free_object+0x68/0xc0
 [] debug_object_free+0xb3/0x120
 [] schedule_timeout+0x7e/0xe0
 [] ? process_timeout+0x0/0x10
 [] ? schedule_timeout+0x62/0xe0
 [] do_sys_poll+0x3b9/0x440
 [] ? __pollwait+0x0/0x120
 [] ? default_wake_function+0x0/0x10
 [] ? default_wake_function+0x0/0x10
 [] ? default_wake_function+0x0/0x10
 [] ? default_wake_function+0x0/0x10
 [] ? default_wake_function+0x0/0x10
 [] ? default_wake_function+0x0/0x10
 [] ? default_wake_function+0x0/0x10
 [] ? default_wake_function+0x0/0x10
 [] ? default_wake_function+0x0/0x10
 [] ? default_wake_function+0x0/0x10
 [] ? trace_hardirqs_on_thunk+0x3a/0x3f
 [] sys_poll+0x38/0xa0
 [] system_call_fastpath+0x16/0x1b
-- 
Daniel J Blueman