Date: Fri, 23 Jul 2010 07:44:11 -0700 (PDT)
From: Dan Magenheimer
To: Christoph Hellwig
Cc: ngupta@vflare.org, akpm@linux-foundation.org, Chris Mason,
    viro@zeniv.linux.org.uk, adilger@sun.com, tytso@mit.edu,
    mfasheh@suse.com, Joel Becker, matthew@wil.cx,
    linux-btrfs@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-ext4@vger.kernel.org,
    ocfs2-devel@oss.oracle.com, linux-mm@kvack.org, jeremy@goop.org,
    JBeulich@novell.com, Kurt Hackel, npiggin@suse.de, Dave McCracken,
    riel@redhat.com, avi@redhat.com, Konrad Wilk
Subject: RE: [PATCH V3 0/8] Cleancache: overview
In-Reply-To: <20100723140440.GA12423@infradead.org>

> From: Christoph Hellwig [mailto:hch@infradead.org]
> Subject: Re: [PATCH V3 0/8] Cleancache: overview
>
> On Fri, Jul 23, 2010 at 06:58:03AM -0700, Dan Magenheimer wrote:
> > CHRISTOPH AND ANDREW, if you disagree and your concerns have
> > not been resolved, please speak up.

Hi Christoph --

Thanks very much for the quick (instantaneous?) reply!

> Anything that needs modification of a normal non-shared fs is
> utterly broken and you'll get a clear NAK, so the proposal before
> is a good one.

Unless/until all filesystems are 100% built on top of the VFS, I
have to disagree.  Abstractions (e.g. the VFS) are never perfect,
and the relevant filesystem maintainers have acked, so I'm wondering
who you are NAK'ing for.

Nitin's proposal attempts to move the VFS hooks around to fix usage
for one fs (btrfs) that, for whatever reason, has chosen not to
layer itself completely on top of the VFS; this sounds to me like a
recipe for disaster.  I think Minchan's reply quickly pointed out
one issue: what other filesystems that haven't been changed might
encounter a rare data-corruption issue because cleancache is
transparently enabled for their page cache pages?  It would also
require support to be dropped entirely for another fs (ocfs2), which
one user (zcache) can't use but the other (tmem) makes very good
use of.

No, the per-fs opt-in is very sensible, and its design is very
minimal.  Could you please explain your objection further?

> There's a couple more issues, like the still weird prototypes;
> e.g., i_ino might not be enough to uniquely identify an inode on
> several filesystems that use 64-bit inode numbers on 32-bit
> systems.

This reinforces my per-fs opt-in point.  Such filesystems should not
enable cleancache (or should enable it only on the appropriate
systems).
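To illustrate the width problem, here's a toy userspace sketch (the
inode numbers are made up; this is not code from the patch series):
on a 32-bit kernel i_ino is an unsigned long, i.e. 32 bits, so two
distinct 64-bit inode numbers can collapse to the same key.

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          /* two distinct 64-bit inode numbers (values invented) */
          uint64_t ino_a = 0x100000001ULL;
          uint64_t ino_b = 0x200000001ULL;

          /* what a 32-bit unsigned long i_ino would retain */
          uint32_t key_a = (uint32_t)ino_a;
          uint32_t key_b = (uint32_t)ino_b;

          /* prints "key_a=1 key_b=1 collide=1": both truncate to 1,
           * so keying a cache on i_ino alone would mix their pages */
          printf("key_a=%u key_b=%u collide=%d\n",
                 key_a, key_b, key_a == key_b);
          return 0;
  }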
> Also making the ops vector global is just a bad idea.
> There is nothing making this sort of caching inherently global.

I'm not sure I understand your point, but two very different users
of cleancache have been provided, and more will be discussed at the
MM summit next month.  Do you have a suggestion on how to avoid a
global ops vector while still serving the needs of both existing
users?  (One possible per-superblock shape is sketched below.)

Thanks,
Dan
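P.S. Purely as a discussion aid, a minimal standalone sketch of a
per-superblock alternative.  The types and names here are simplified
stand-ins (struct super_block is a stub, and cleancache_init_fs as
written below is hypothetical, not the V3 API): each opted-in fs
would carry its own ops pointer and pool id instead of every
filesystem sharing one file-scope global vector.

  #include <stddef.h>

  struct page;  /* opaque for this sketch */

  struct cleancache_ops {
          int  (*init_fs)(size_t pagesize);
          int  (*get_page)(int pool_id, unsigned long ino,
                           unsigned long index, struct page *page);
          void (*put_page)(int pool_id, unsigned long ino,
                           unsigned long index, struct page *page);
  };

  /* stub standing in for the kernel's struct super_block */
  struct super_block {
          struct cleancache_ops *cleancache_ops; /* NULL = opted out */
          int cleancache_poolid;
  };

  /* mount-time opt-in: the fs decides, and names its backend */
  static int cleancache_init_fs(struct super_block *sb,
                                struct cleancache_ops *ops,
                                size_t pagesize)
  {
          sb->cleancache_ops = ops;
          sb->cleancache_poolid = ops ? ops->init_fs(pagesize) : -1;
          return sb->cleancache_poolid;
  }

  /* the VFS-side hook becomes a per-sb indirection rather than a
   * test of one global vector, so different backends could serve
   * different filesystems (e.g. zcache for one, tmem for another) */
  static int cleancache_get_page(struct super_block *sb,
                                 unsigned long ino,
                                 unsigned long index, struct page *page)
  {
          if (!sb->cleancache_ops || sb->cleancache_poolid < 0)
                  return -1;
          return sb->cleancache_ops->get_page(sb->cleancache_poolid,
                                              ino, index, page);
  }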