From: "Alexander Beregalov"
Date: Sat, 21 Jun 2008 16:57:38 +0400
To: kernel-testers@vger.kernel.org, "kernel list", linux-mm@kvack.org,
    "Mel Gorman", "Christoph Lameter", "Lee Schermerhorn",
    "KAMEZAWA Hiroyuki", "Hugh Dickins", "Nick Piggin",
    "Andrew Morton", "Linus Torvalds", bfields@fieldses.org,
    neilb@suse.de, linux-nfs@vger.kernel.org
Subject: Re: 2.6.26-rc: nfsd hangs for a few sec

One more try, with some CCs added. (Two illustrative sketches, of the
lock inversion and of the new zonelist iterator, follow after the trace.)

2008/6/12 Alexander Beregalov:
> I have bisected it, and it seems to have been introduced here.
> How could that be?
>
> 54a6eb5c4765aa573a030ceeba2c14e3d2ea5706 is first bad commit
> commit 54a6eb5c4765aa573a030ceeba2c14e3d2ea5706
> Author: Mel Gorman
> Date:   Mon Apr 28 02:12:16 2008 -0700
>
>     mm: use two zonelist that are filtered by GFP mask
>
>     Currently a node has two sets of zonelists, one for each zone type in the
>     system and a second set for GFP_THISNODE allocations.  Based on the zones
>     allowed by a gfp mask, one of these zonelists is selected.  All of these
>     zonelists consume memory and occupy cache lines.
>
>     This patch replaces the multiple zonelists per node with two zonelists.  The
>     first contains all populated zones in the system, ordered by distance, for
>     fallback allocations when the target/preferred node has no free pages.  The
>     second contains all populated zones in the node suitable for GFP_THISNODE
>     allocations.
>
>     An iterator macro, for_each_zone_zonelist(), is introduced that iterates
>     through each zone allowed by the GFP flags in the selected zonelist.
>
>     Signed-off-by: Mel Gorman
>     Acked-by: Christoph Lameter
>     Signed-off-by: Lee Schermerhorn
>     Cc: KAMEZAWA Hiroyuki
>     Cc: Mel Gorman
>     Cc: Christoph Lameter
>     Cc: Hugh Dickins
>     Cc: Nick Piggin
>     Signed-off-by: Andrew Morton
>     Signed-off-by: Linus Torvalds
>
> :040000 040000 89cdad93d855fa839537454113f2716011ca0e26 57aa307f4bddd264e70c759a2fb2076bfde363eb M  arch
> :040000 040000 4add802178c0088a85d3738b42ec42ca33e07d60 126d3b170424a18b60074a7901c4e9b98f3bdee5 M  fs
> :040000 040000 9d215d6248382dab53003d230643f0169f3e3e84 67d196d890a27d2211b3bf7e833e6366addba739 M  include
> :040000 040000 6502d185e8ea6338953027c29cc3ab960d6f9bad c818e0fc538cdc40016e2d5fe33661c9c54dc8a5 M  mm
>
> To recap the log message (it still happens on -rc5):
> The machine hangs for a few seconds.
> I can catch it within the first hour of running.
>
> =======================================================
> [ INFO: possible circular locking dependency detected ]
> 2.6.26-rc5-00084-g39b945a #3
> -------------------------------------------------------
> nfsd/3457 is trying to acquire lock:
>  (iprune_mutex){--..}, at: [] shrink_icache_memory+0x38/0x19b
>
> but task is already holding lock:
>  (&(&ip->i_iolock)->mr_lock){----}, at: [] xfs_ilock+0xa2/0xd6
>
> which lock already depends on the new lock.
>
> the existing dependency chain (in reverse order) is:
>
> -> #1 (&(&ip->i_iolock)->mr_lock){----}:
>        [] __lock_acquire+0xa0c/0xbc6
>        [] lock_acquire+0x6a/0x86
>        [] down_write_nested+0x33/0x6a
>        [] xfs_ilock+0x7b/0xd6
>        [] xfs_ireclaim+0x1d/0x59
>        [] xfs_finish_reclaim+0x173/0x195
>        [] xfs_reclaim+0xb3/0x138
>        [] xfs_fs_clear_inode+0x55/0x8e
>        [] clear_inode+0x83/0xd2
>        [] dispose_list+0x3c/0xc1
>        [] shrink_icache_memory+0x173/0x19b
>        [] shrink_slab+0xda/0x153
>        [] try_to_free_pages+0x1e0/0x2a1
>        [] __alloc_pages_internal+0x23f/0x3a7
>        [] __alloc_pages+0xa/0xc
>        [] __slab_alloc+0x1c7/0x513
>        [] kmem_cache_alloc+0x45/0xb3
>        [] reiserfs_alloc_inode+0x12/0x23
>        [] alloc_inode+0x14/0x1a9
>        [] iget5_locked+0x47/0x133
>        [] reiserfs_iget+0x29/0x7d
>        [] reiserfs_lookup+0xb1/0xee
>        [] do_lookup+0xa9/0x146
>        [] __link_path_walk+0x734/0xb2f
>        [] path_walk+0x49/0x96
>        [] do_path_lookup+0x12f/0x149
>        [] __user_walk_fd+0x2f/0x48
>        [] vfs_lstat_fd+0x16/0x3d
>        [] vfs_lstat+0x11/0x13
>        [] sys_lstat64+0x14/0x28
>        [] sysenter_past_esp+0x6a/0xb1
>        [] 0xffffffff
>
> -> #0 (iprune_mutex){--..}:
>        [] __lock_acquire+0x929/0xbc6
>        [] lock_acquire+0x6a/0x86
>        [] mutex_lock_nested+0xba/0x232
>        [] shrink_icache_memory+0x38/0x19b
>        [] shrink_slab+0xda/0x153
>        [] try_to_free_pages+0x1e0/0x2a1
>        [] __alloc_pages_internal+0x23f/0x3a7
>        [] __alloc_pages+0xa/0xc
>        [] __do_page_cache_readahead+0xaa/0x16a
>        [] ondemand_readahead+0x119/0x127
>        [] page_cache_async_readahead+0x52/0x5d
>        [] generic_file_splice_read+0x290/0x4a8
>        [] xfs_splice_read+0x4b/0x78
>        [] xfs_file_splice_read+0x24/0x29
>        [] do_splice_to+0x45/0x63
>        [] splice_direct_to_actor+0xc3/0x190
>        [] nfsd_vfs_read+0x1ed/0x2d0
>        [] nfsd_read+0x82/0x99
>        [] nfsd3_proc_read+0xdf/0x12a
>        [] nfsd_dispatch+0xcf/0x19e
>        [] svc_process+0x3b3/0x68b
>        [] nfsd+0x168/0x26b
>        [] kernel_thread_helper+0x7/0x10
>        [] 0xffffffff
>
> other info that might help us debug this:
>
> 3 locks held by nfsd/3457:
>  #0:  (hash_sem){..--}, at: [] exp_readlock+0xd/0xf
>  #1:  (&(&ip->i_iolock)->mr_lock){----}, at: [] xfs_ilock+0xa2/0xd6
>  #2:  (shrinker_rwsem){----}, at: [] shrink_slab+0x24/0x153
>
> stack backtrace:
> Pid: 3457, comm: nfsd Not tainted 2.6.26-rc5-00084-g39b945a #3
>  [] print_circular_bug_tail+0x5a/0x65
>  [] ? print_circular_bug_header+0xa8/0xb3
>  [] __lock_acquire+0x929/0xbc6
>  [] lock_acquire+0x6a/0x86
>  [] ? shrink_icache_memory+0x38/0x19b
>  [] mutex_lock_nested+0xba/0x232
>  [] ? shrink_icache_memory+0x38/0x19b
>  [] ? shrink_icache_memory+0x38/0x19b
>  [] shrink_icache_memory+0x38/0x19b
>  [] shrink_slab+0xda/0x153
>  [] try_to_free_pages+0x1e0/0x2a1
>  [] ? isolate_pages_global+0x0/0x3e
>  [] __alloc_pages_internal+0x23f/0x3a7
>  [] __alloc_pages+0xa/0xc
>  [] __do_page_cache_readahead+0xaa/0x16a
>  [] ondemand_readahead+0x119/0x127
>  [] page_cache_async_readahead+0x52/0x5d
>  [] generic_file_splice_read+0x290/0x4a8
>  [] ? _spin_unlock+0x27/0x3c
>  [] ? _atomic_dec_and_lock+0x25/0x30
>  [] ? __lock_acquire+0xbaa/0xbc6
>  [] ? spd_release_page+0x0/0xf
>  [] xfs_splice_read+0x4b/0x78
>  [] xfs_file_splice_read+0x24/0x29
>  [] do_splice_to+0x45/0x63
>  [] splice_direct_to_actor+0xc3/0x190
>  [] ? nfsd_direct_splice_actor+0x0/0xf
>  [] nfsd_vfs_read+0x1ed/0x2d0
>  [] nfsd_read+0x82/0x99
>  [] nfsd3_proc_read+0xdf/0x12a
>  [] nfsd_dispatch+0xcf/0x19e
>  [] svc_process+0x3b3/0x68b
>  [] nfsd+0x168/0x26b
>  [] ? nfsd+0x0/0x26b
>  [] kernel_thread_helper+0x7/0x10
>  =======================
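To make the inversion easier to see: chain #1 takes iprune_mutex and then
the XFS iolock (reclaim disposing of inodes via shrink_icache_memory ->
dispose_list -> xfs_ilock), while chain #0 holds the iolock and then reaches
for iprune_mutex (nfsd's splice read triggering readahead allocation, which
re-enters reclaim). Below is a minimal userspace sketch of the same AB-BA
shape, with pthread mutexes standing in for the kernel locks; all names are
illustrative, not actual kernel code.

#include <pthread.h>
#include <stdio.h>

/* Stand-ins for iprune_mutex and ip->i_iolock; illustrative only. */
static pthread_mutex_t iprune_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t i_iolock     = PTHREAD_MUTEX_INITIALIZER;

/* Chain #1: reclaim disposes of XFS inodes while holding iprune_mutex,
 * taking the per-inode iolock inside it. */
static void *reclaim_path(void *arg)
{
	pthread_mutex_lock(&iprune_mutex);
	pthread_mutex_lock(&i_iolock);      /* A then B */
	pthread_mutex_unlock(&i_iolock);
	pthread_mutex_unlock(&iprune_mutex);
	return NULL;
}

/* Chain #0: nfsd holds the iolock for a splice read, and an allocation
 * under memory pressure re-enters reclaim, which wants iprune_mutex. */
static void *nfsd_path(void *arg)
{
	pthread_mutex_lock(&i_iolock);
	pthread_mutex_lock(&iprune_mutex);  /* B then A: the inversion */
	pthread_mutex_unlock(&iprune_mutex);
	pthread_mutex_unlock(&i_iolock);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	/* If each thread wins its first lock before the other releases,
	 * neither can take its second: the circular dependency lockdep
	 * warns about.  Most runs complete, but the hazard is real. */
	pthread_create(&t1, NULL, reclaim_path, NULL);
	pthread_create(&t2, NULL, nfsd_path, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	puts("no deadlock this run; the ordering hazard remains");
	return 0;
}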
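For reference on the bisected commit itself: for_each_zone_zonelist() walks
the node's single full zonelist, capped at the highest zone the GFP mask
allows. Here is a rough sketch of how a 2.6.26-era caller would iterate it;
walk_allowed_zones() is a made-up function for illustration, not anything in
the kernel tree.

#include <linux/gfp.h>
#include <linux/kernel.h>
#include <linux/mmzone.h>

/* Sketch only: iterate every populated zone the GFP mask permits.
 * node_zonelist() selects the node's full or GFP_THISNODE zonelist
 * based on the mask; gfp_zone() gives the highest allowed zone index. */
static void walk_allowed_zones(int nid, gfp_t gfp_mask)
{
	struct zonelist *zonelist = node_zonelist(nid, gfp_mask);
	enum zone_type highidx = gfp_zone(gfp_mask);
	struct zoneref *z;
	struct zone *zone;

	for_each_zone_zonelist(zone, z, zonelist, highidx) {
		/* Zones arrive in fallback (distance) order; a real
		 * caller would check watermarks or attempt an
		 * allocation here. */
		printk(KERN_DEBUG "zone %s on node %d\n",
		       zone->name, zone_to_nid(zone));
	}
}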