From: Dave Chinner <david@fromorbit.com>
To: xfs@oss.sgi.com
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 0/3] shrinker fixes for XFS for 2.6.35
Date: Thu, 15 Jul 2010 21:46:55 +1000
Message-Id: <1279194418-16119-1-git-send-email-david@fromorbit.com>

Per-superblock shrinkers are not baked well enough for 2.6.36. However,
we still need fixes for the XFS shrinker lockdep problems caused by the
global mount list lock, and for other problems, before 2.6.35 releases.
The lockdep issues look like:

=======================================================
[ INFO: possible circular locking dependency detected ]
2.6.35-rc5-dgc+ #34
-------------------------------------------------------
kswapd0/471 is trying to acquire lock:
 (&(&ip->i_lock)->mr_lock){++++-.}, at: [] xfs_ilock+0x10b/0x190

but task is already holding lock:
 (&xfs_mount_list_lock){++++.-}, at: [] xfs_reclaim_inode_shrink+0xd6/0x150

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (&xfs_mount_list_lock){++++.-}:
       [] lock_acquire+0xa6/0x160
       [] _raw_spin_lock_irqsave+0x55/0xa0
       [] __wake_up+0x32/0x70
       [] wakeup_kswapd+0xab/0xb0
       [] __alloc_pages_nodemask+0x27d/0x760
       [] kmem_getpages+0x62/0x160
       [] fallback_alloc+0x18f/0x260
       [] ____cache_alloc_node+0x9b/0x180
       [] kmem_cache_alloc+0x16b/0x1e0
       [] kmem_zone_alloc+0x94/0xe0
       [] xfs_inode_alloc+0x29/0x1b0
       [] xfs_iget+0x2ec/0x7a0
       [] xfs_trans_iget+0x27/0x60
       [] xfs_ialloc+0xca/0x790
       [] xfs_dir_ialloc+0xaf/0x340
       [] xfs_create+0x3dc/0x710
       [] xfs_vn_mknod+0xa7/0x1c0
       [] xfs_vn_create+0x10/0x20
       [] vfs_create+0xac/0xd0
       [] do_last+0x51c/0x620
       [] do_filp_open+0x224/0x640
       [] do_sys_open+0x69/0x140
       [] sys_open+0x20/0x30
       [] system_call_fastpath+0x16/0x1b

-> #0 (&(&ip->i_lock)->mr_lock){++++-.}:
       [] __lock_acquire+0x11c3/0x1450
       [] lock_acquire+0xa6/0x160
       [] down_write_nested+0x65/0xb0
       [] xfs_ilock+0x10b/0x190
       [] xfs_reclaim_inode+0x9d/0x250
       [] xfs_inode_ag_walk+0x8b/0x150
       [] xfs_inode_ag_iterator+0x8b/0xf0
       [] xfs_reclaim_inode_shrink+0x10c/0x150
       [] shrink_slab+0x135/0x1a0
       [] balance_pgdat+0x421/0x6a0
       [] kswapd+0x11d/0x320
       [] kthread+0x96/0xa0
       [] kernel_thread_helper+0x4/0x10

other info that might help us debug this:

2 locks held by kswapd0/471:
 #0:  (shrinker_rwsem){++++..}, at: [] shrink_slab+0x3d/0x1a0
 #1:  (&xfs_mount_list_lock){++++.-}, at: [] xfs_reclaim_inode_shrink+0xd6/0x150

There are also a few variations of this report, as these paths are
entered from different locations by different workloads. There are also
scanning overhead problems caused by the global shrinker, as seen in
https://bugzilla.kernel.org/show_bug.cgi?id=16348. This is not helped
by the fact that every shrinker call can potentially traverse multiple
filesystems to find one with reclaimable inodes.

The context-based shrinker solution is very simple and has no effect
outside XFS. For XFS, it lets us avoid the locking a global list
requires, and removes the repeated scanning of clean filesystems on
every shrinker call.
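To make the mechanism concrete, here is a minimal sketch of a
context-based, per-mount shrinker. It assumes the context argument
takes the form of passing the struct shrinker itself to the ->shrink()
callback so it can recover its owner with container_of(); the
example_* names are purely illustrative and are not the actual XFS
code:

#include <linux/kernel.h>
#include <linux/mm.h>

/* Hypothetical per-mount structure; a filesystem would embed the
 * shrinker in its own mount structure. */
struct example_mount {
	struct shrinker		m_inode_shrink;
	atomic_t		m_reclaimable;	/* reclaimable inode count */
};

static int example_shrink(struct shrinker *shrink, int nr_to_scan,
			  gfp_t gfp_mask)
{
	struct example_mount *mp = container_of(shrink,
				struct example_mount, m_inode_shrink);

	if (nr_to_scan) {
		if (!(gfp_mask & __GFP_FS))
			return -1;	/* don't recurse into the fs */
		/* ... reclaim up to nr_to_scan inodes from this mount
		 * only - no global mount list, no global lock ... */
	}
	return atomic_read(&mp->m_reclaimable);
}

static void example_mount_init_shrinker(struct example_mount *mp)
{
	mp->m_inode_shrink.shrink = example_shrink;
	mp->m_inode_shrink.seeks = DEFAULT_SEEKS;
	register_shrinker(&mp->m_inode_shrink);
}

Because each mount owns its shrinker, teardown is local as well: a
plain unregister_shrinker() at unmount, with no global list to
serialise against.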
In combination with tagging the per-AG index to track AGs with
reclaimable inodes, all the unnecessary AG scanning is removed and the
overhead is minimised. Hence kswapd CPU usage and reclaim progress are
no longer hindered. A sketch of the tagging pattern is appended at the
end of this mail.

The patch set is also available at:

git://git.kernel.org/pub/scm/git/linux/dgc/xfsdev shrinker
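As a rough illustration of the per-AG tagging (again with made-up
example_* names, and assuming the per-AG index is a radix tree so the
standard tag API applies):

#include <linux/radix-tree.h>

#define EXAMPLE_RECLAIM_TAG	0	/* illustrative tag number */

struct example_perag {
	unsigned long	pag_agno;	/* index of this AG in the tree */
	/* ... per-AG inode cache state ... */
};

/* Called (under whatever lock protects the tree's tags) when an AG
 * gains its first reclaimable inode; radix_tree_tag_clear() would
 * drop the tag again once the AG goes clean. */
static void example_set_reclaim_tag(struct radix_tree_root *ag_tree,
				    struct example_perag *pag)
{
	radix_tree_tag_set(ag_tree, pag->pag_agno, EXAMPLE_RECLAIM_TAG);
}

/* Shrinker-side walk: the tagged gang lookup only ever returns AGs
 * with something to reclaim, so clean AGs are never touched. */
static void example_walk_reclaimable(struct radix_tree_root *ag_tree)
{
	struct example_perag *batch[8];
	unsigned long first = 0;
	int found, i;

	while ((found = radix_tree_gang_lookup_tag(ag_tree,
					(void **)batch, first, 8,
					EXAMPLE_RECLAIM_TAG))) {
		for (i = 0; i < found; i++) {
			/* ... reclaim inodes from batch[i] ... */
			first = batch[i]->pag_agno + 1;
		}
	}
}

With this structure, a shrinker call against a filesystem with no
reclaimable inodes does no per-AG work at all, which, together with
the per-mount shrinker above, is how the series removes the
unnecessary scanning.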