Date: Thu, 4 Feb 2010 14:39:11 +1100
From: Dave Chinner <david@fromorbit.com>
To: tytso@mit.edu, Christoph Lameter, Andi Kleen, Miklos Szeredi,
	Alexander Viro, Christoph Hellwig, Rik van Riel, Pekka Enberg,
	akpm@linux-foundation.org, Nick Piggin, Hugh Dickins,
	linux-kernel@vger.kernel.org
Subject: Re: inodes: Support generic defragmentation
Message-ID: <20100204033911.GE5332@discord.disaster>
In-Reply-To: <20100204030736.GB25885@thunk.org>

On Wed, Feb 03, 2010 at 10:07:36PM -0500, tytso@mit.edu wrote:
> On Thu, Feb 04, 2010 at 11:34:10AM +1100, Dave Chinner wrote:
> > What it comes down to is that the slab has two states for objects -
> > allocated and free - but what we really need here is three states -
> > allocated, unused and freed. We currently track unused objects
> > outside the slab in LRU lists and, IMO, that is the source of our
> > fragmentation problems because it has no knowledge of the spatial
> > layout of the slabs and the state of other objects in the page.
> >
> > What I'm suggesting is that we ditch the external LRUs and track the
> > "unused" state inside the slab and then use that knowledge to decide
> > which pages to reclaim.
>
> Or maybe we need a way to track the LRU of the slab page as a whole?
> Any time we touch an object on the slab page, we touch the last-used
> time of the slab as a whole.

Yes, that's pretty much what I have been trying to describe. ;)

(And, IIUC, what I think Nick has been trying to describe as well when
he's been saying we should "turn reclaim upside down".)

It seems to me to be pretty simple to track, too, if we define pages
for reclaim to only be those that are full of unused objects. i.e.
the pages have two states:

	- Active: some allocated and referenced object on the page
	  => no need for LRU tracking of these
	- Unused: all allocated objects on the page are not in use
	  => these pages are LRU tracked within the slab

A single referenced object is enough to change the state of the page
from Unused to Active, and when a page transitions from Active to
Unused it goes on the MRU end of the LRU queue. Reclaim would then
start with the oldest pages on the LRU....

> It's actually more complicated than that, though. Even if no one has
> touched a particular inode, if one of the inodes in the slab page is
> pinned down because it is in use,

A single active object like this would keep the slab page Active, and
therefore it would not be a candidate for reclaim.

Also, we already reclaim dentries before inodes because dentries pin
inodes, so our algorithms for reclaim already deal with these ordering
issues for us.

....
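To make the Active/Unused idea above a bit more concrete, here is a rough
userspace-only sketch -- not the real slab code; names like slab_page,
page_object_referenced(), page_object_unused() and reclaim_one_page() are
made up for illustration -- of how the page state transitions and the
in-slab LRU could hang together:

/*
 * Illustration only: a page is Active while any object on it is
 * referenced; when the last referenced object goes idle the page
 * becomes Unused and is put on the MRU end of an LRU kept inside the
 * slab.  Reclaim frees whole pages from the oldest end of that LRU.
 */
#include <assert.h>
#include <stdio.h>

enum page_state { PAGE_ACTIVE, PAGE_UNUSED };

struct slab_page {
	enum page_state	state;
	unsigned int	nr_referenced;	/* objects currently in use */
	struct slab_page *lru_prev, *lru_next;
};

/* LRU of Unused pages: head is the oldest (reclaim first), tail is MRU. */
static struct slab_page *lru_head, *lru_tail;

static void lru_del(struct slab_page *p)
{
	if (p->lru_prev)
		p->lru_prev->lru_next = p->lru_next;
	else
		lru_head = p->lru_next;
	if (p->lru_next)
		p->lru_next->lru_prev = p->lru_prev;
	else
		lru_tail = p->lru_prev;
	p->lru_prev = p->lru_next = NULL;
}

static void lru_add_mru(struct slab_page *p)
{
	p->lru_prev = lru_tail;
	p->lru_next = NULL;
	if (lru_tail)
		lru_tail->lru_next = p;
	else
		lru_head = p;
	lru_tail = p;
}

/* An object on @p was referenced: one reference is enough to make it Active. */
static void page_object_referenced(struct slab_page *p)
{
	p->nr_referenced++;
	if (p->state == PAGE_UNUSED) {
		lru_del(p);
		p->state = PAGE_ACTIVE;
	}
}

/* An object on @p went idle: the last one out moves the page to the MRU end. */
static void page_object_unused(struct slab_page *p)
{
	assert(p->nr_referenced > 0);
	if (--p->nr_referenced == 0) {
		p->state = PAGE_UNUSED;
		lru_add_mru(p);
	}
}

/* Reclaim takes the oldest all-unused page; Active pages are never seen. */
static struct slab_page *reclaim_one_page(void)
{
	struct slab_page *p = lru_head;

	if (p)
		lru_del(p);
	return p;
}

int main(void)
{
	struct slab_page a = { .state = PAGE_ACTIVE };
	struct slab_page b = { .state = PAGE_ACTIVE };

	/* One object on each page is referenced, then both go idle, a first. */
	page_object_referenced(&a);
	page_object_referenced(&b);
	page_object_unused(&a);
	page_object_unused(&b);

	/* a aged onto the LRU first, so it is reclaimed before b. */
	printf("reclaimed %s first\n", reclaim_one_page() == &a ? "a" : "b");
	return 0;
}

Reclaim only ever walks the Unused LRU, so an Active page - one with even
a single referenced object - is never a candidate, which is the property
we are after here.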
> And of course, if the inode is pinned down because it is opened and/or
> mmapped, then its associated dcache entry can't be freed either, so
> there's no point trying to trash all of its sibling dentries on the
> same page as that dcache entry.

Agreed - that's why I think preventing fragmentation caused by LRU
reclaim is best dealt with internally to the slab, where both object
age and locality can be taken into account.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com