[PATCH V3 0/8] Cleancache: overview
Changes from V2 to V3:
- Rebased to 2.6.35-rc2 (no significant functional changes)
- Use one cleancache_ops struct to avoid pointer hops (Andrew Morton)
- Document and ensure PageLocked requirements are met (Andrew Morton)
- Moved primary doc to Documentation/vm and added a FAQ (Christoph Hellwig)
- Document sysfs API in Documentation/ABI (Andrew Morton)
- Use standard success/fail codes (0/<0) (Nitin Gupta)
- Switch ops function types to void where retval is ignored (Nitin Gupta)
- Clarify in doc: init_fs and flush_fs occur at mount/unmount (Nitin Gupta)
- Fix bug where pool_id==0 is considered an error on fs unmount (Nitin Gupta)
Changes from V1 to V2:
- Rebased to 2.6.34 (no functional changes)
- Convert to sane types (Al Viro)
- Define some raw constants (Konrad Wilk)
- Add ack from Andreas Dilger
In previous patch postings, cleancache was part of the Transcendent
Memory ("tmem") patchset. This patchset refocuses not on the underlying
technology (tmem) but instead on the useful functionality provided for Linux,
and provides a clean API so that cleancache can provide this very useful
functionality either via a Xen tmem driver OR completely independent of tmem.
For example: Nitin Gupta (of compcache and ramzswap fame) is implementing
an in-kernel compression "backend" for cleancache; some believe
cleancache will be a very nice interface for building RAM-like functionality
for pseudo-RAM devices such as SSD or phase-change memory; and a Pune
University team is looking at a backend for virtio (see OLS'2010).
A more complete description of cleancache can be found in Documentation/vm/
cleancache.txt (in PATCH 1/7) which is included below for convenience.
Note that an earlier version of this patch is now shipping in OpenSuSE 11.2
and will soon ship in a release of Oracle Enterprise Linux. Underlying
tmem technology is now shipping in Oracle VM 2.2 and was released
in Xen 4.0 on April 15, 2010.
Signed-off-by: Dan Magenheimer <[email protected]>
Reviewed-by: Jeremy Fitzhardinge <[email protected]>
Documentation/ABI/testing/sysfs-kernel-mm-cleancache | 11 +
Documentation/vm/cleancache.txt | 194 +++++++++++++++++++
fs/btrfs/extent_io.c | 9
fs/btrfs/super.c | 2
fs/buffer.c | 5
fs/ext3/super.c | 2
fs/ext4/super.c | 2
fs/mpage.c | 7
fs/ocfs2/super.c | 3
fs/super.c | 7
include/linux/cleancache.h | 88 ++++++++
include/linux/fs.h | 5
mm/Kconfig | 22 ++
mm/Makefile | 1
mm/cleancache.c | 169 ++++++++++++++++
mm/filemap.c | 11 +
mm/truncate.c | 10
17 files changed, 548 insertions(+)
(following is a copy of Documentation/vm/cleancache.txt)
MOTIVATION
Cleancache can be thought of as a page-granularity victim cache for clean
pages that the kernel's pageframe replacement algorithm (PFRA) would like
to keep around, but can't since there isn't enough memory. So when the
PFRA "evicts" a page, it first attempts to put it into a synchronous
concurrency-safe page-oriented "pseudo-RAM" device (such as Xen's Transcendent
Memory, aka "tmem", or in-kernel compressed memory, aka "zmem", or other
RAM-like devices) which is not directly accessible or addressable by the
kernel and is of unknown and possibly time-varying size. And when a
cleancache-enabled filesystem wishes to access a page in a file on disk,
it first checks cleancache to see if it already contains it; if it does,
the page is copied into the kernel and a disk access is avoided.
A FAQ is included below:
IMPLEMENTATION OVERVIEW
A cleancache "backend" that interfaces to this pseudo-RAM links itself
to the kernel's cleancache "frontend" by setting the cleancache_ops funcs
appropriately; the functions it provides must conform to the following
semantics:
Most important, cleancache is "ephemeral". Pages which are copied into
cleancache have an indefinite lifetime which is completely unknowable
by the kernel and so may or may not still be in cleancache at any later time.
Thus, as its name implies, cleancache is not suitable for dirty pages.
Cleancache has complete discretion over what pages to preserve and what
pages to discard and when.
When a cleancache-enabled filesystem is mounted, an "init_fs" call obtains a
pool id which, if non-negative, must be saved in the filesystem's superblock;
a negative return value indicates failure. A "put_page" will copy a
(presumably about-to-be-evicted) page into cleancache and associate it with
the pool id, the file inode, and a page index into the file. (The combination
of a pool id, an inode, and an index is sometimes called a "handle".)
A "get_page" will copy the page, if found, from cleancache into kernel memory.
A "flush_page" will ensure the page no longer is present in cleancache;
a "flush_inode" will flush all pages associated with the specified inode;
and, when a filesystem is unmounted, a "flush_fs" will flush all pages in
all inodes specified by the given pool id and also surrender the pool id.
An "init_shared_fs", like "init_fs", obtains a pool id but tells cleancache
to treat the pool as shared using a 128-bit UUID as a key. On systems
that may run multiple kernels (such as hard partitioned or virtualized
systems) that may share a clustered filesystem, and where cleancache
may be shared among those kernels, calls to init_shared_fs that specify the
same UUID will receive the same pool id, thus allowing the pages to
be shared. Note that any security requirements must be imposed outside
of the kernel (e.g. by "tools" that control cleancache). Or a
cleancache implementation can simply disable init_shared_fs by always
returning a negative value.
If a get_page is successful on a non-shared pool, the page is flushed (thus
making cleancache an "exclusive" cache). On a shared pool, the page
is NOT flushed on a successful get_page so that it remains accessible to
other sharers. The kernel is responsible for ensuring coherency between
cleancache (shared or not), the page cache, and the filesystem, using
cleancache flush operations as required.
Note that cleancache must enforce put-put-get coherency and get-get
coherency. For the former, if two puts are made to the same handle but
with different data, say AAA by the first put and BBB by the second, a
subsequent get can never return the stale data (AAA). For get-get coherency,
if a get for a given handle fails, subsequent gets for that handle will
never succeed unless preceded by a successful put with that handle.
Last, cleancache provides no SMP serialization guarantees; if two
different Linux threads are simultaneously putting and flushing a page
with the same handle, the results are indeterminate.
CLEANCACHE PERFORMANCE METRICS
Cleancache monitoring is done by sysfs files in the
/sys/kernel/mm/cleancache directory. The effectiveness of cleancache
can be measured (across all filesystems) with:
succ_gets - number of gets that were successful
failed_gets - number of gets that failed
puts - number of puts attempted (all "succeed")
flushes - number of flushes attempted
A backend implementation may provide additional metrics.
FAQ
1) Where's the value? (Andrew Morton)
Cleancache (and its sister code "frontswap") provide interfaces for
a new pseudo-RAM memory type that conceptually lies between fast
kernel-directly-addressable RAM and slower DMA/asynchronous devices.
Disallowing direct kernel or userland reads/writes to this pseudo-RAM
is ideal when data is transformed to a different form and size (such
as with compression) or secretly moved (as might be useful for
write-balancing for some RAM-like devices). Evicted page-cache pages (and
swap pages) are a great use for this kind of slower-than-RAM-but-much-
faster-than-disk pseudo-RAM and the cleancache (and frontswap)
"page-object-oriented" specification provides a nice way to read and
write -- and indirectly "name" -- the pages.
In the virtual case, the whole point of virtualization is to statistically
multiplex physical resources across the varying demands of multiple
virtual machines. This is really hard to do with RAM and efforts to
do it well with no kernel change have essentially failed (except in some
well-publicized special-case workloads). Cleancache -- and frontswap --
with a fairly small impact on the kernel, provide a huge amount
of flexibility for more dynamic, flexible RAM multiplexing.
Specifically, the Xen Transcendent Memory backend allows otherwise
"fallow" hypervisor-owned RAM to not only be "time-shared" between multiple
virtual machines, but the pages can be compressed and deduplicated to
optimize RAM utilization. And when guest OS's are induced to surrender
underutilized RAM (e.g. with "self-ballooning"), page cache pages
are the first to go, and cleancache allows those pages to be
saved and reclaimed if overall host system memory conditions allow.
2) Why does cleancache have its sticky fingers so deep inside the
filesystems and VFS? (Andrew Morton and Christoph Hellwig)
The core hooks for cleancache in VFS are in most cases a single line
and the minimum set are placed precisely where needed to maintain
coherency (via cleancache_flush operations) between cleancache,
the page cache, and disk. All hooks compile into nothingness if
cleancache is config'ed off and turn into a function-pointer-
compare-to-NULL if config'ed on but no backend claims the ops
functions, or to a compare-struct-element-to-negative if a
backend claims the ops functions but a filesystem doesn't enable
cleancache.
Some filesystems are built entirely on top of VFS and the hooks
in VFS are sufficient, so they don't require an "init_fs" hook; the
initial implementation of cleancache didn't provide this hook.
But for some filesystems (such as btrfs), the VFS hooks are
incomplete and one or more hooks in fs-specific code are required.
And for some other filesystems, such as tmpfs, cleancache may
be counterproductive. So it seemed prudent to require a filesystem
to "opt in" to use cleancache, which requires adding a hook in
each filesystem. Some filesystems are not yet supported by cleancache
simply because they haven't been tested. The existing set should
be sufficient to validate the concept, the opt-in approach means
that untested filesystems are not affected, and the hooks in the
existing filesystems should make it very easy to add more
filesystems in the future.
3) Why not make cleancache asynchronous and batched so it can
more easily interface with real devices with DMA instead
of copying each individual page? (Minchan Kim)
The one-page-at-a-time copy semantics simplifies the implementation
on both the frontend and backend and also allows the backend to
do fancy things on-the-fly like page compression and
page deduplication. And since the data is "gone" (copied into/out
of the pageframe) before the cleancache get/put call returns,
many race conditions and potential coherency issues are avoided.
While the interface seems odd for a "real device" or for real
kernel-addressable RAM, it makes perfect sense for
pseudo-RAM.
4) Why is non-shared cleancache "exclusive"? And where is the
page "flushed" after a "get"? (Minchan Kim)
The main reason is to free up memory in pseudo-RAM and to avoid
unnecessary cleancache_flush calls. If you want inclusive,
the page can be "put" immediately following the "get". If
put-after-get for inclusive becomes common, the interface could
be easily extended to add a "get_no_flush" call.
The flush is done by the cleancache backend implementation.
5) What's the performance impact?
Performance analysis has been presented at OLS'09 and LCA'10.
Briefly, performance gains can be significant on most workloads,
especially when memory pressure is high (e.g. when RAM is
overcommitted in a virtual workload); and because the hooks are
invoked primarily in place of or in addition to a disk read/write,
overhead is negligible even in worst case workloads. Basically
cleancache replaces I/O with memory-copy-CPU-overhead; on older
single-core systems with slow memory-copy speeds, cleancache
has little value, but in newer multicore machines, especially
consolidated/virtualized machines, it has great value.
6) Does cleancache work with KVM?
The memory model of KVM is sufficiently different that a cleancache
backend may have little value for KVM. This remains to be tested,
especially in an overcommitted system.
7) Does cleancache work in userspace? It sounds useful for
memory hungry caches like web browsers. (Jamie Lokier)
No plans yet, though we agree it sounds useful, at least for
apps that bypass the page cache (e.g. O_DIRECT).
Last updated: Dan Magenheimer, June 21 2010
What all this fails to explain is what this actually is useful for.
Your series adds lots of crappy code, entirely stupid interactions with a
handful of filesystems, but no actual users.
On Mon, Jun 21, 2010 at 04:18:09PM -0700, Dan Magenheimer wrote:
> [PATCH V3 0/8] Cleancache: overview
Dan,
Two comments:
- Mention where one can get an implementation of the cleancache API:
either a link to where the patches reside or a git branch.
If you need pointers on branch names:
http://lkml.org/lkml/2010/6/7/269
- Point out the presentation you did on this. It has an excellent
overview of how this API works, and most importantly: a) images
and b) performance numbers.
Otherwise, please consider all of these patches to have
Reviewed-by: Konrad Rzeszutek Wilk <[email protected]>
tag.
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to [email protected]. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: [email protected]
On 06/22/2010 04:48 AM, Dan Magenheimer wrote:
> [PATCH V3 0/8] Cleancache: overview
>
<snip>
>
> Documentation/ABI/testing/sysfs-kernel-mm-cleancache | 11 +
> Documentation/vm/cleancache.txt | 194 +++++++++++++++++++
> fs/btrfs/extent_io.c | 9
> fs/btrfs/super.c | 2
> fs/buffer.c | 5
> fs/ext3/super.c | 2
> fs/ext4/super.c | 2
> fs/mpage.c | 7
> fs/ocfs2/super.c | 3
> fs/super.c | 7
> include/linux/cleancache.h | 88 ++++++++
> include/linux/fs.h | 5
> mm/Kconfig | 22 ++
> mm/Makefile | 1
> mm/cleancache.c | 169 ++++++++++++++++
> mm/filemap.c | 11 +
> mm/truncate.c | 10
> 17 files changed, 548 insertions(+)
>
> (following is a copy of Documentation/vm/cleancache.txt)
>
> MOTIVATION
>
> Cleancache can be thought of as a page-granularity victim cache for clean
> pages that the kernel's pageframe replacement algorithm (PFRA) would like
> to keep around, but can't since there isn't enough memory. So when the
> PFRA "evicts" a page, it first attempts to put it into a synchronous
> concurrency-safe page-oriented "pseudo-RAM" device (such as Xen's Transcendent
> Memory, aka "tmem", or in-kernel compressed memory, aka "zmem", or other
> RAM-like devices) which is not directly accessible or addressable by the
> kernel and is of unknown and possibly time-varying size. And when a
> cleancache-enabled filesystem wishes to access a page in a file on disk,
> it first checks cleancache to see if it already contains it; if it does,
> the page is copied into the kernel and a disk access is avoided.
>
Since zcache is now one of its use cases, I think the major objection that
remains against cleancache is its intrusiveness -- in particular, need to
change individual filesystems (even though one liners). Changes below should
help avoid these per-fs changes and make it more self contained. I haven't
tested these changes myself, so there might be missed cases or other mysterious
problems:
1. Cleancache requires filesystem specific changes primarily to make a call to
cleancache init and store a (per-fs-instance) pool_id. I think we can get rid of
these by directly passing a 'struct super_block' pointer, which is also
sufficient to identify the FS instance a page belongs to. This should then be used
as a 'handle' by cleancache_ops provider to find corresponding memory pool or
create a new pool when a new handle is encountered.
This leaves out the case of ocfs2, for which cleancache needs a 'uuid' to decide if a
shared pool should be created. IMHO, this case (and cleancache.init_shared_fs)
should be removed from cleancache_ops since it is applicable only for Xen's
cleancache_ops provider.
2. I think the change in btrfs can be avoided by moving cleancache_get_page()
from do_mpage_readpage() to filemap_fault() and this should work for all
filesystems. See:
handle_pte_fault() -> do_(non)linear_fault() -> __do_fault()
-> vma->vm_ops->fault()
which is defined as filemap_fault() for all filesystems. If some future
filesystem uses its own custom function (why?) then it will have to arrange for
a call to cleancache_get_page(), if it wants this feature.
With the above changes, cleancache will be fairly self-contained:
- cleancache_put_page() when a page is removed from the page cache
- cleancache_get_page() when a page fault occurs (and after the page cache is searched)
- cleancache_flush_*() on truncate_*()
Thanks,
Nitin
On Fri, Jul 23, 2010 at 4:36 PM, Nitin Gupta <[email protected]> wrote:
>
> 2. I think change in btrfs can be avoided by moving cleancache_get_page()
> from do_mpage_readpage() to filemap_fault() and this should work for all
> filesystems. See:
>
> handle_pte_fault() -> do_(non)linear_fault() -> __do_fault()
>                                                 -> vma->vm_ops->fault()
>
> which is defined as filemap_fault() for all filesystems. If some future
> filesystem uses its own custom function (why?) then it will have to arrange for
> call to cleancache_get_page(), if it wants this feature.
filemap_fault works only in the case of a file-backed page which is mapped,
but doesn't work for a not-mapped cache page. So we could miss cache pages
via the read system call if we move it into filemap_fault.
--
Kind regards,
Minchan Kim
> Since zcache is now one of its use cases, I think the major
> objection that remains against cleancache is its intrusiveness
> -- in particular, need to change individual filesystems (even
> though one liners). Changes below should help avoid these
> per-fs changes and make it more self contained.
Hi Nitin --
I think my reply at http://lkml.org/lkml/2010/6/22/202 adequately
refutes the claim of intrusiveness (43 lines!). And FAQ #2 near
the end of the original posting at http://lkml.org/lkml/2010/6/21/411
explains why the per-fs "opt-in" approach is sensible and necessary.
CHRISTOPH AND ANDREW, if you disagree and your concerns have
not been resolved, please speak up.
Further, the maintainers of the changed filesystems have acked
the very minor cleancache patches; and maintainers of other
filesystems are not affected unless they choose to opt-in,
whereas these other filesystems MAY be affected with your
suggested changes to the patches.
So I think it's just a matter of waiting for the Linux wheels
to turn for a patch that (however lightly) touches a number of
maintainers' code, though I would very much welcome any
input on anything I can do to make those wheels turn faster.
Thanks,
Dan
On Fri, Jul 23, 2010 at 06:58:03AM -0700, Dan Magenheimer wrote:
> CHRISTOPH AND ANDREW, if you disagree and your concerns have
> not been resolved, please speak up.
Anything that needs modification of a normal non-shared fs is utterly
broken and you'll get a clear NAK, so the proposal before is a good
one. There are a couple more issues, like the still weird prototypes:
e.g. i_ino might not be enough to uniquely identify an inode
on several filesystems that use 64-bit inode numbers on 32-bit
systems. Also making the ops vector global is just a bad idea.
There is nothing making this sort of caching inherently global.
> From: Christoph Hellwig [mailto:[email protected]]
> Subject: Re: [PATCH V3 0/8] Cleancache: overview
>
> On Fri, Jul 23, 2010 at 06:58:03AM -0700, Dan Magenheimer wrote:
> > CHRISTOPH AND ANDREW, if you disagree and your concerns have
> > not been resolved, please speak up.
Hi Christoph --
Thanks very much for the quick (instantaneous?) reply!
> Anything that needs modification of a normal non-shared fs is utterly
> broken and you'll get a clear NAK, so the proposal before is a good
> one.
Unless/until all filesystems are 100% built on top of VFS,
I have to disagree. Abstractions (e.g. VFS) are never perfect.
And the relevant filesystem maintainers have acked, so I'm
wondering who you are NAK'ing for?
Nitin's proposal attempts to move the VFS hooks around to fix
usage for one fs (btrfs) that, for whatever reason, has
chosen to not layer itself completely on top of VFS; this
sounds to me like a recipe for disaster.
I think Minchan's reply quickly pointed out one issue...
what other filesystems that haven't been changed might
encounter a rare data corruption issue because cleancache is
transparently enabled for their page cache pages?
It also requires support to be dropped entirely for
another fs (ocfs2), which one user (zcache) can't use but
the other (tmem) makes very good use of.
No, the per-fs opt-in is very sensible; and its design is
very minimal.
Could you please explain your objection further?
> There's a couple more issues like the still weird prototypes,
> e.g. i_ino might not be enough to uniquely identify an inode
> on several filesystems that use 64-bit inode numbers on 32-bit
> systems.
This reinforces my per-fs opt-in point. Such filesystems
should not enable cleancache (or enable them only on the
appropriate systems).
> Also making the ops vector global is just a bad idea.
> There is nothing making this sort of caching inherently global.
I'm not sure I understand your point, but two very different
users of cleancache have been provided, and more will be
discussed at the MM summit next month.
Do you have a suggestion on how to avoid a global ops
vector while still serving the needs of both existing
users?
Thanks,
Dan
On 07/23/2010 01:46 PM, Minchan Kim wrote:
> On Fri, Jul 23, 2010 at 4:36 PM, Nitin Gupta <[email protected]> wrote:
>>
>> 2. I think change in btrfs can be avoided by moving cleancache_get_page()
>> from do_mpage_readpage() to filemap_fault() and this should work for all
>> filesystems. See:
>>
>> handle_pte_fault() -> do_(non)linear_fault() -> __do_fault()
>> -> vma->vm_ops->fault()
>>
>> which is defined as filemap_fault() for all filesystems. If some future
>> filesystem uses its own custom function (why?) then it will have to arrange for
>> call to cleancache_get_page(), if it wants this feature.
>
>
> filemap_fault works only in the case of a file-backed page which is mapped,
> but doesn't work for a not-mapped cache page. So we could miss cache pages
> via the read system call if we move it into filemap_fault.
>
>
Oh, yes. Then we need a cleancache_get_page() call in do_generic_file_read() too.
So, unless I am missing anything now, we should be able to get rid of per-fs
changes.
Thanks,
Nitin
On 07/23/2010 08:14 PM, Dan Magenheimer wrote:
>> From: Christoph Hellwig [mailto:[email protected]]
>> Also making the ops vector global is just a bad idea.
>> There is nothing making this sort of caching inherently global.
>
> I'm not sure I understand your point, but two very different
> users of cleancache have been provided, and more will be
> discussed at the MM summit next month.
>
> Do you have a suggestion on how to avoid a global ops
> vector while still serving the needs of both existing
> users?
Maybe introduce cleancache_register(struct cleancache_ops *ops)?
This will allow making cleancache_ops non-global. No value add
but maybe that's cleaner?
Thanks,
Nitin
> From: Dan Magenheimer
> Subject: RE: [PATCH V3 0/8] Cleancache: overview
>
> > From: Christoph Hellwig [mailto:[email protected]]
> > Subject: Re: [PATCH V3 0/8] Cleancache: overview
> >
> > On Fri, Jul 23, 2010 at 06:58:03AM -0700, Dan Magenheimer wrote:
> > > CHRISTOPH AND ANDREW, if you disagree and your concerns have
> > > not been resolved, please speak up.
>
> Hi Christoph --
>
> Thanks very much for the quick (instantaneous?) reply!
>
> > Anything that needs modification of a normal non-shared fs is utterly
> > broken and you'll get a clear NAK, so the proposal before is a good
> > one.
>
> Unless/until all filesystems are 100% built on top of VFS,
> I have to disagree. Abstractions (e.g. VFS) are never perfect.
After thinking about this some more, I can see a way
to enforce "opt-in" in the cleancache backend without
any changes to non-generic fs code. I think it's a horrible
hack and we can try it, but I expect fs maintainers
would prefer the explicit one-line-patch opt-in.
1) Cleancache backend maintains a list of "known working"
filesystems (those that have been tested).
2) Nitin's proposed changes pass the *sb as a parameter.
The string name of the filesystem type is available via
sb->s_type->name. This can be compared against
the "known working" list.
Using the sb pointer as a "handle" requires an extra
table search on every cleancache get/put/flush,
and fs/super.c changes are required for fs unmount
notification anyway (e.g. to call cleancache_flush_fs)
so I'd prefer to keep the cleancache_poolid addition
to the sb. I'll assume this is OK since this is in generic
fs code.
Dan
> From: Nitin Gupta [mailto:[email protected]]
> Sent: Friday, July 23, 2010 9:05 AM
> To: Dan Magenheimer
> Subject: Re: [PATCH V3 0/8] Cleancache: overview
>
> On 07/23/2010 08:14 PM, Dan Magenheimer wrote:
> >> From: Christoph Hellwig [mailto:[email protected]]
>
>
> >> Also making the ops vector global is just a bad idea.
> >> There is nothing making this sort of caching inherently global.
> >
> > I'm not sure I understand your point, but two very different
> > users of cleancache have been provided, and more will be
> > discussed at the MM summit next month.
> >
> > Do you have a suggestion on how to avoid a global ops
> > vector while still serving the needs of both existing
> > users?
>
> Maybe introduce cleancache_register(struct cleancache_ops *ops)?
> This will allow making cleancache_ops non-global. No value add
> but maybe that's cleaner?
Oh, OK, that seems reasonable.
Dan
On 07/23/2010 11:07 PM, Dan Magenheimer wrote:
>> From: Dan Magenheimer
>> Subject: RE: [PATCH V3 0/8] Cleancache: overview
>>
>>> From: Christoph Hellwig [mailto:[email protected]]
>>> Subject: Re: [PATCH V3 0/8] Cleancache: overview
>>>
>>> On Fri, Jul 23, 2010 at 06:58:03AM -0700, Dan Magenheimer wrote:
>>>> CHRISTOPH AND ANDREW, if you disagree and your concerns have
>>>> not been resolved, please speak up.
>>
>> Hi Christoph --
>>
>> Thanks very much for the quick (instantaneous?) reply!
>>
>>> Anything that needs modification of a normal non-shared fs is utterly
>>> broken and you'll get a clear NAK, so the proposal before is a good
>>> one.
>>
>> Unless/until all filesystems are 100% built on top of VFS,
>> I have to disagree. Abstractions (e.g. VFS) are never perfect.
>
> After thinking about this some more, I can see a way
> to enforce "opt-in" in the cleancache backend without
> any changes to non-generic fs code. I think it's a horrible
> hack and we can try it, but I expect fs maintainers
> would prefer the explicit one-line-patch opt-in.
>
> 1) Cleancache backend maintains a list of "known working"
> filesystems (those that have been tested).
Checks against a "known working" list indeed look horrible.
Isn't there any way to identify pagecache -> disk I/O boundaries
which every filesystem obeys? I'm not yet sure but if this is
doable, then we won't require such hacks.
>
> 2) Nitin's proposed changes pass the *sb as a parameter.
> The string name of the filesystem type is available via
> sb->s_type->name. This can be compared against
> the "known working" list.
>
sb->s_magic could also be used, or better if we can somehow
get rid of these checks :)
> Using the sb pointer as a "handle" requires an extra
> table search on every cleancache get/put/flush,
> and fs/super.c changes are required for fs unmount
> notification anyway (e.g. to call cleancache_flush_fs)
> so I'd prefer to keep the cleancache_poolid addition
> to the sb. I'll assume this is OK since this is in generic
> fs code.
>
I will also try making changes to cleancache so it does not
touch any fs-specific code. Though IMHO one-liners in fs code
should really be acceptable, unfortunately this doesn't seem
to be the case. Maybe generic cleancache will have better
chances.
Thanks,
Nitin
> > On Fri, Jul 23, 2010 at 06:58:03AM -0700, Dan Magenheimer wrote:
> > > CHRISTOPH AND ANDREW, if you disagree and your concerns have
> > > not been resolved, please speak up.
>
> Hi Christoph --
>
> Thanks very much for the quick (instantaneous?) reply!
>
> > Anything that needs modification of a normal non-shared fs is utterly
> > broken and you'll get a clear NAK, so the proposal before is a good
> > one.
>
> No, the per-fs opt-in is very sensible; and its design is
> very minimal.
Not to belabor the point, but maybe the right way to think about
this is:
Cleancache is a new optional feature provided by the VFS layer
that potentially dramatically increases page cache effectiveness
for many workloads in many environments at a negligible cost.
Filesystems that are well-behaved and conform to certain restrictions
can utilize cleancache simply by making a call to cleancache_init_fs
at mount time. Unusual, misbehaving, or poorly layered filesystems
must either add additional hooks and/or undergo extensive additional
testing... or should just not enable the optional cleancache.
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to [email protected]. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"[email protected]"> [email protected] </a>
On 07/24/2010 12:17 AM, Dan Magenheimer wrote:
>>> On Fri, Jul 23, 2010 at 06:58:03AM -0700, Dan Magenheimer wrote:
>>>> CHRISTOPH AND ANDREW, if you disagree and your concerns have
>>>> not been resolved, please speak up.
>>
>> Hi Christoph --
>>
>> Thanks very much for the quick (instantaneous?) reply!
>>
>>> Anything that needs modification of a normal non-shared fs is utterly
>>> broken and you'll get a clear NAK, so the proposal before is a good
>>> one.
>>
>> No, the per-fs opt-in is very sensible; and its design is
>> very minimal.
>
> Not to belabor the point, but maybe the right way to think about
> this is:
>
> Cleancache is a new optional feature provided by the VFS layer
> that potentially dramatically increases page cache effectiveness
> for many workloads in many environments at a negligible cost.
>
> Filesystems that are well-behaved and conform to certain restrictions
> can utilize cleancache simply by making a call to cleancache_init_fs
> at mount time. Unusual, misbehaving, or poorly layered filesystems
> must either add additional hooks and/or undergo extensive additional
> testing... or should just not enable the optional cleancache.
OK, so I maintain a filesystem in the kernel. How do I know whether
my FS is "unusual, misbehaving, or poorly layered"?
Thanks
Boaz
> From: Boaz Harrosh [mailto:[email protected]]
> Sent: Tuesday, August 03, 2010 10:23 AM
> To: Dan Magenheimer
> Subject: Re: [PATCH V3 0/8] Cleancache: overview
>
> On 07/24/2010 12:17 AM, Dan Magenheimer wrote:
> >>> On Fri, Jul 23, 2010 at 06:58:03AM -0700, Dan Magenheimer wrote:
> >>>> CHRISTOPH AND ANDREW, if you disagree and your concerns have
> >>>> not been resolved, please speak up.
> >>
> >> Hi Christoph --
> >>
> >> Thanks very much for the quick (instantaneous?) reply!
> >>
> >>> Anything that needs modification of a normal non-shared fs is utterly
> >>> broken and you'll get a clear NAK, so the proposal before is a good
> >>> one.
> >>
> >> No, the per-fs opt-in is very sensible; and its design is
> >> very minimal.
> >
> > Not to belabor the point, but maybe the right way to think about
> > this is:
> >
> > Cleancache is a new optional feature provided by the VFS layer
> > that potentially dramatically increases page cache effectiveness
> > for many workloads in many environments at a negligible cost.
> >
> > Filesystems that are well-behaved and conform to certain restrictions
> > can utilize cleancache simply by making a call to cleancache_init_fs
> > at mount time. Unusual, misbehaving, or poorly layered filesystems
> > must either add additional hooks and/or undergo extensive additional
> > testing... or should just not enable the optional cleancache.
>
> OK, so I maintain a filesystem in the kernel. How do I know whether
> my FS is "unusual, misbehaving, or poorly layered"?
A reasonable question. I'm not a FS expert so this may not be
a complete answer, but please consider it a start:
- The FS should be block-device-based (e.g. a ram-based FS
such as tmpfs should not enable cleancache)
- To ensure coherency/correctness, the FS must ensure that all
file removal or truncation operations either go through VFS
  or add hooks to do the equivalent "flush" operations (e.g.
  I started looking at FS-cache-based net FSs and was concerned
  there might be problems, but I don't know for sure)
- To ensure coherency/correctness, inode numbers must be unique
(e.g. no emulating 64-bit inode space on 32-bit inode numbers)
- The FS must call the VFS superblock alloc and deactivate routines
or add hooks to do the equivalent cleancache calls done there.
- To maximize performance, all pages fetched from the FS should
go through the do_mpage_readpage routine or the FS should add
hooks to do the equivalent (e.g. btrfs requires a hook for this)
- Currently, the FS blocksize must be the same as PAGE_SIZE. This
  is not an architectural restriction, but no backends currently
  support anything different (e.g. hugetlbfs? should not enable
  cleancache)
- A clustered FS should invoke the "shared_init_fs" cleancache
hook to get best performance for some backends.
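As a rough sketch of the opt-in shape (cleancache_init_fs and
cleancache_flush_fs are the names from this patchset, but the stub
bodies and the simplified super_block below are hypothetical, just
to show the calling convention):

```c
#include <assert.h>

/* Hypothetical sketch of the per-fs opt-in.  The real
 * cleancache_init_fs() lives in the kernel and asks the registered
 * backend for a pool; it is stubbed here only to illustrate the
 * convention that a negative pool id means "cleancache disabled
 * for this fs". */

struct super_block {
        int cleancache_poolid;
};

static int cleancache_init_fs(void)
{
        static int next_pool_id;        /* stub: hand out 0, 1, 2, ... */
        return next_pool_id++;
}

/* A conforming, well-behaved fs asks for a pool once, at mount time... */
static void example_fill_super(struct super_block *sb)
{
        sb->cleancache_poolid = cleancache_init_fs();
        /* ...and the matching cleancache_flush_fs() is invoked from
         * the generic superblock deactivation path at unmount, so a
         * well-behaved fs needs no teardown hook of its own. */
}
```

A clustered fs would call the shared_init_fs variant instead, so that
backends which can deduplicate across cluster nodes get a shared pool.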
Does that help?
Thanks,
Dan
On 2010-08-03, at 11:35, Dan Magenheimer wrote:
> - The FS should be block-device-based (e.g. a ram-based FS
> such as tmpfs should not enable cleancache)
When you say "block device based", does this exclude network filesystems? It would seem cleancache, like fscache, is actually best suited to high-latency network filesystems.
> - To ensure coherency/correctness, inode numbers must be unique
> (e.g. no emulating 64-bit inode space on 32-bit inode numbers)
Does it need to be restricted to inode numbers at all (i.e. can it use an opaque internal identifier like the NFS file handle)? Disallowing cleancache on a filesystem that uses 64-bit (or larger) inodes on a 32-bit system reduces its usefulness.
Cheers, Andreas
--
Andreas Dilger
Lustre Technical Lead
Oracle Corporation Canada Inc.
> From: Andreas Dilger
> Sent: Tuesday, August 03, 2010 12:34 PM
> To: Dan Magenheimer
> Subject: Re: [PATCH V3 0/8] Cleancache: overview
>
> On 2010-08-03, at 11:35, Dan Magenheimer wrote:
> > - The FS should be block-device-based (e.g. a ram-based FS
> > such as tmpfs should not enable cleancache)
>
> When you say "block device based", does this exclude network
> filesystems? It would seem cleancache, like fscache, is actually best
> suited to high-latency network filesystems.
I don't think it should exclude network FSs and agree cleancache
might be well-suited for them. So if "block device based"
leaves out the possibility of network FSs, I am just
displaying my general ignorance of FSs and I/O, and
welcome clarification from FS developers. What I really
meant is: Don't use cleancache for RAM-based filesystems.
> > - To ensure coherency/correctness, inode numbers must be unique
> > (e.g. no emulating 64-bit inode space on 32-bit inode numbers)
>
> Does it need to be restricted to inode numbers at all (i.e. can it use
> an opaque internal identifier like the NFS file handle)? Disallowing
> cleancache on a filesystem that uses 64-bit (or larger) inodes on a 32-
> bit system reduces its usefulness.
True... Earlier versions of the patch did not use ino_t but
instead used an opaque always-64-bit-unsigned "object id".
The patch changed to use ino_t in response to Al Viro's comment
to "use sane types".
The <pool_id,object_id,pg_offset> triple must uniquely
and permanently (unless explicitly flushed) describe
exactly one page of FS data. So if usefulness is increased
by changing object_id back to an explicit 64-bit value,
I'm happy to do that. The only disadvantage I can
see is that 32-bit systems pass an extra 32 bits on
every call that may always be zero on most FSs.
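For concreteness, the uniqueness requirement on the key might be
sketched like this (the struct and helper are hypothetical; the real
interface passes pool id, ino_t, and page offset as separate
arguments rather than as one struct):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical key: every cleancache page is addressed by this
 * triple, which must map to at most one page of fs data until the
 * page is explicitly flushed. */
struct cleancache_key {
        int32_t  pool_id;   /* one pool per mounted fs */
        uint64_t object_id; /* ino_t today; an explicit 64-bit id
                             * would relax the restriction against
                             * emulated inode numbers on 32-bit */
        uint32_t pg_offset; /* page index within the object */
};

static int cleancache_key_eq(const struct cleancache_key *a,
                             const struct cleancache_key *b)
{
        return a->pool_id == b->pool_id &&
               a->object_id == b->object_id &&
               a->pg_offset == b->pg_offset;
}
```

With object_id widened to 64 bits unconditionally, a 32-bit system
passes 32 extra (usually zero) bits per call, which is the only cost
mentioned above.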
Thanks,
Dan