2021-01-28 15:00:07

by David Wysochanski

Subject: [PATCH 00/10] Convert NFS fscache read paths to netfs API

This minimal set of patches updates the NFS client to use the new
readahead method and converts the NFS fscache code to use the new
netfs IO API. The patches are at:
https://github.com/DaveWysochanskiRH/kernel/releases/tag/fscache-iter-lib-nfs-20210128
https://github.com/DaveWysochanskiRH/kernel/commit/74357eb291c9c292f3ab3bc9ed1227cb76f52c51

The patches are based on David Howells' fscache-netfs-lib tree at
https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/log/?h=fscache-netfs-lib

The first 6 patches refactor some of the NFS read code to facilitate
re-use; the next 4 patches do the conversion to the new API. Note
that patch 8 converts nfs_readpages to nfs_readahead.

Changes since my last posting on Jan 27, 2021
- Fix oops with fscache enabled on parallel read unit test
- Add patches to handle invalidate and releasepage
- Use #define FSCACHE_USE_NEW_IO_API to select the new API
- Minor cleanup in nfs_readahead_from_fscache

Still TODO
1. Fix known bugs
a) nfs_issue_op: takes rcu_read_lock but may call nfs_page_alloc()
with GFP_KERNEL, which may sleep (dhowells noted this in a review)
b) nfs_refresh_inode() takes inode->i_lock but may call
__fscache_invalidate() which may sleep (found with lockdep)
c) WARN with xfstest fscache/netapp/pnfs/nfs41
2. Fixup NFS fscache stats (NFSIOS_FSCACHE_*)
* Compare with netfs stats and determine if still needed
3. Cleanup dfprintks and/or convert to tracepoints
4. Further tests (see "Not tested yet")

Tests run
1. Custom NFS+fscache unit tests for basic operation: PASS
* vers=3,4.0,4.1,4.2,sec=sys,server=localhost (same kernel)
2. cthon04: PASS
* test options "-b -g -s -l", fsc,vers=3,4.0,4.1,4.2,sec=sys
* No failures, oopses or hangs
3. iozone tests: PASS
* nofsc,fsc,vers=3,4.0,4.1,4.2,sec=sys,server=rhel7,rhel8
* No failures, oopses, or hangs
4. xfstests/generic: PASS*
* no hangs or crashes (one WARN); failures unrelated to these patches
* Ran following configurations
* vers=4.1,fsc,sec=sys,rhel7-server: PASS
* vers=4.0,fsc,sec=sys,rhel7-server: PASS
* vers=3,fsc,sec=sys,rhel7-server: PASS
* vers=4.1,nofsc,sec=sys,netapp-server(pnfs/files): PASS
* vers=4.1,fsc,sec=sys,netapp-server(pnfs/files): INCOMPLETE
* WARN_ON fs/netfs/read_helper.c:616
* ran with kernel.panic_on_oops=1
* vers=4.2,fsc,sec=sys,rhel7-server: running at generic/438
* vers=4.2,fsc,sec=sys,rhel8-server: running at generic/127
5. kernel build: PASS
* vers=4.2,fsc,sec=sys,rhel8-server: PASS

Not tested yet:
* error injections (for example, connection disruptions, server errors during IO, etc)
* many process mixed read/write on same file
* performance

Dave Wysochanski (10):
NFS: Clean up nfs_readpage() and nfs_readpages()
NFS: In nfs_readpage() only increment NFSIOS_READPAGES when read
succeeds
NFS: Refactor nfs_readpage() and nfs_readpage_async() to use
nfs_readdesc
NFS: Call readpage_async_filler() from nfs_readpage_async()
NFS: Add nfs_pageio_complete_read() and remove nfs_readpage_async()
NFS: Allow internal use of read structs and functions
NFS: Convert to the netfs API and nfs_readpage to use netfs_readpage
NFS: Convert readpages to readahead and use netfs_readahead for
fscache
NFS: Update releasepage to handle new fscache kiocb IO API
NFS: update various invalidation code paths for new IO API

fs/nfs/file.c | 22 +++--
fs/nfs/fscache.c | 230 +++++++++++++++++++------------------------
fs/nfs/fscache.h | 105 +++-----------------
fs/nfs/internal.h | 8 ++
fs/nfs/pagelist.c | 2 +
fs/nfs/read.c | 240 ++++++++++++++++++++-------------------------
fs/nfs/write.c | 10 +-
include/linux/nfs_fs.h | 5 +-
include/linux/nfs_iostat.h | 2 +-
include/linux/nfs_page.h | 1 +
include/linux/nfs_xdr.h | 1 +
11 files changed, 257 insertions(+), 369 deletions(-)

--
1.8.3.1


2021-01-28 15:00:11

by David Wysochanski

Subject: [PATCH 09/10] NFS: Update releasepage to handle new fscache kiocb IO API

When using the new fscache kiocb IO API, netfs callers should
no longer use fscache_maybe_release_page() in releasepage, but
should instead just wait on PG_fscache as needed. The PG_fscache
page bit now means the page is being written to the cache.

Signed-off-by: Dave Wysochanski <[email protected]>
---
fs/nfs/file.c | 11 +++++++++--
fs/nfs/fscache.c | 24 ------------------------
fs/nfs/fscache.h | 5 -----
fs/nfs/write.c | 10 ++++------
4 files changed, 13 insertions(+), 37 deletions(-)

diff --git a/fs/nfs/file.c b/fs/nfs/file.c
index ebcaa164db5f..9e41745c3faf 100644
--- a/fs/nfs/file.c
+++ b/fs/nfs/file.c
@@ -431,8 +431,15 @@ static int nfs_release_page(struct page *page, gfp_t gfp)

/* If PagePrivate() is set, then the page is not freeable */
if (PagePrivate(page))
- return 0;
- return nfs_fscache_release_page(page, gfp);
+ return false;
+
+ if (PageFsCache(page)) {
+ if (!(gfp & __GFP_DIRECT_RECLAIM) || !(gfp & __GFP_FS))
+ return false;
+ wait_on_page_fscache(page);
+ }
+
+ return true;
}

static void nfs_check_dirty_writeback(struct page *page,
diff --git a/fs/nfs/fscache.c b/fs/nfs/fscache.c
index 2ff631da62ec..dd8cf3cfed0a 100644
--- a/fs/nfs/fscache.c
+++ b/fs/nfs/fscache.c
@@ -332,30 +332,6 @@ void nfs_fscache_open_file(struct inode *inode, struct file *filp)
EXPORT_SYMBOL_GPL(nfs_fscache_open_file);

/*
- * Release the caching state associated with a page, if the page isn't busy
- * interacting with the cache.
- * - Returns true (can release page) or false (page busy).
- */
-int nfs_fscache_release_page(struct page *page, gfp_t gfp)
-{
- if (PageFsCache(page)) {
- struct fscache_cookie *cookie = nfs_i_fscache(page->mapping->host);
-
- BUG_ON(!cookie);
- dfprintk(FSCACHE, "NFS: fscache releasepage (0x%p/0x%p/0x%p)\n",
- cookie, page, NFS_I(page->mapping->host));
-
- if (!fscache_maybe_release_page(cookie, page, gfp))
- return 0;
-
- nfs_inc_fscache_stats(page->mapping->host,
- NFSIOS_FSCACHE_PAGES_UNCACHED);
- }
-
- return 1;
-}
-
-/*
* Release the caching state associated with a page if undergoing complete page
* invalidation.
*/
diff --git a/fs/nfs/fscache.h b/fs/nfs/fscache.h
index faccf4549d55..9f8b1f8e69f3 100644
--- a/fs/nfs/fscache.h
+++ b/fs/nfs/fscache.h
@@ -94,7 +94,6 @@ struct nfs_fscache_inode_auxdata {
extern void nfs_fscache_open_file(struct inode *, struct file *);

extern void __nfs_fscache_invalidate_page(struct page *, struct inode *);
-extern int nfs_fscache_release_page(struct page *, gfp_t);
extern int nfs_readpage_from_fscache(struct file *file,
struct page *page,
struct nfs_readdesc *desc);
@@ -163,10 +162,6 @@ static inline void nfs_fscache_clear_inode(struct inode *inode) {}
static inline void nfs_fscache_open_file(struct inode *inode,
struct file *filp) {}

-static inline int nfs_fscache_release_page(struct page *page, gfp_t gfp)
-{
- return 1; /* True: may release page */
-}
static inline void nfs_fscache_invalidate_page(struct page *page,
struct inode *inode) {}
static inline void nfs_fscache_wait_on_page_write(struct nfs_inode *nfsi,
diff --git a/fs/nfs/write.c b/fs/nfs/write.c
index 639c34fec04a..156508fb6730 100644
--- a/fs/nfs/write.c
+++ b/fs/nfs/write.c
@@ -2102,17 +2102,15 @@ int nfs_migrate_page(struct address_space *mapping, struct page *newpage,
struct page *page, enum migrate_mode mode)
{
/*
- * If PagePrivate is set, then the page is currently associated with
- * an in-progress read or write request. Don't try to migrate it.
+ * If PagePrivate or PageFsCache is set, then the page is currently
+ * associated with an in-progress read or write request. Don't try
+ * to migrate it.
*
* FIXME: we could do this in principle, but we'll need a way to ensure
* that we can safely release the inode reference while holding
* the page lock.
*/
- if (PagePrivate(page))
- return -EBUSY;
-
- if (!nfs_fscache_release_page(page, GFP_KERNEL))
+ if (PagePrivate(page) || PageFsCache(page))
return -EBUSY;

return migrate_page(mapping, newpage, page, mode);
--
1.8.3.1

2021-01-28 15:00:13

by David Wysochanski

Subject: [PATCH 08/10] NFS: Convert readpages to readahead and use netfs_readahead for fscache

The new FS-Cache API does not have a readpages equivalent function,
and instead of fscache_read_or_alloc_pages() it implements a readahead
function, netfs_readahead(). Call netfs_readahead() if fscache is
enabled, and note that netfs_readahead() has good tracing so we can
remove one dfprintk.

If fscache is not enabled, utilize readahead_page() to run through
the pages needed calling readpage_async_filler(). If we get an error
on any page, then exit the loop, which matches the behavior of
previously called read_cache_pages() when 'filler' returns an error.

Signed-off-by: Dave Wysochanski <[email protected]>
---
fs/nfs/file.c | 2 +-
fs/nfs/fscache.c | 49 +++++++---------------------------------------
fs/nfs/fscache.h | 28 ++++----------------------
fs/nfs/read.c | 36 +++++++++++++++++-----------------
include/linux/nfs_fs.h | 3 +--
include/linux/nfs_iostat.h | 2 +-
6 files changed, 32 insertions(+), 88 deletions(-)

diff --git a/fs/nfs/file.c b/fs/nfs/file.c
index 63940a7a70be..ebcaa164db5f 100644
--- a/fs/nfs/file.c
+++ b/fs/nfs/file.c
@@ -515,7 +515,7 @@ static void nfs_swap_deactivate(struct file *file)

const struct address_space_operations nfs_file_aops = {
.readpage = nfs_readpage,
- .readpages = nfs_readpages,
+ .readahead = nfs_readahead,
.set_page_dirty = __set_page_dirty_nobuffers,
.writepage = nfs_writepage,
.writepages = nfs_writepages,
diff --git a/fs/nfs/fscache.c b/fs/nfs/fscache.c
index fede075209f5..2ff631da62ec 100644
--- a/fs/nfs/fscache.c
+++ b/fs/nfs/fscache.c
@@ -502,51 +502,16 @@ int nfs_readpage_from_fscache(struct file *file,
/*
* Retrieve a set of pages from fscache
*/
-int __nfs_readpages_from_fscache(struct nfs_open_context *ctx,
- struct inode *inode,
- struct address_space *mapping,
- struct list_head *pages,
- unsigned *nr_pages)
+int nfs_readahead_from_fscache(struct nfs_readdesc *desc,
+ struct readahead_control *ractl)
{
- unsigned npages = *nr_pages;
- int ret;
-
- dfprintk(FSCACHE, "NFS: nfs_getpages_from_fscache (0x%p/%u/0x%p)\n",
- nfs_i_fscache(inode), npages, inode);
-
- ret = fscache_read_or_alloc_pages(nfs_i_fscache(inode),
- mapping, pages, nr_pages,
- NULL,
- ctx,
- mapping_gfp_mask(mapping));
- if (*nr_pages < npages)
- nfs_add_fscache_stats(inode, NFSIOS_FSCACHE_PAGES_READ_OK,
- npages);
- if (*nr_pages > 0)
- nfs_add_fscache_stats(inode, NFSIOS_FSCACHE_PAGES_READ_FAIL,
- *nr_pages);
-
- switch (ret) {
- case 0: /* read submitted to the cache for all pages */
- BUG_ON(!list_empty(pages));
- BUG_ON(*nr_pages != 0);
- dfprintk(FSCACHE,
- "NFS: nfs_getpages_from_fscache: submitted\n");
-
- return ret;
-
- case -ENOBUFS: /* some pages aren't cached and can't be */
- case -ENODATA: /* some pages aren't cached */
- dfprintk(FSCACHE,
- "NFS: nfs_getpages_from_fscache: no page: %d\n", ret);
- return 1;
+ if (!NFS_I(ractl->mapping->host)->fscache)
+ return -ENOBUFS;

- default:
- dfprintk(FSCACHE,
- "NFS: nfs_getpages_from_fscache: ret %d\n", ret);
- }
+ netfs_readahead(ractl, &nfs_fscache_req_ops, desc);

- return ret;
+ /* FIXME: NFSIOS_FSCACHE_* stats */
+ return 0;
}

/*
diff --git a/fs/nfs/fscache.h b/fs/nfs/fscache.h
index 858f28b1ce03..faccf4549d55 100644
--- a/fs/nfs/fscache.h
+++ b/fs/nfs/fscache.h
@@ -98,12 +98,10 @@ struct nfs_fscache_inode_auxdata {
extern int nfs_readpage_from_fscache(struct file *file,
struct page *page,
struct nfs_readdesc *desc);
-extern int __nfs_readpages_from_fscache(struct nfs_open_context *,
- struct inode *, struct address_space *,
- struct list_head *, unsigned *);
+extern int nfs_readahead_from_fscache(struct nfs_readdesc *desc,
+ struct readahead_control *ractl);
extern void nfs_read_completion_to_fscache(struct nfs_pgio_header *hdr,
unsigned long bytes);
-
/*
* wait for a page to complete writing to the cache
*/
@@ -126,21 +124,6 @@ static inline void nfs_fscache_invalidate_page(struct page *page,
}

/*
- * Retrieve a set of pages from an inode data storage object.
- */
-static inline int nfs_readpages_from_fscache(struct nfs_open_context *ctx,
- struct inode *inode,
- struct address_space *mapping,
- struct list_head *pages,
- unsigned *nr_pages)
-{
- if (NFS_I(inode)->fscache)
- return __nfs_readpages_from_fscache(ctx, inode, mapping, pages,
- nr_pages);
- return -ENOBUFS;
-}
-
-/*
* Invalidate the contents of fscache for this inode. This will not sleep.
*/
static inline void nfs_fscache_invalidate(struct inode *inode)
@@ -195,11 +178,8 @@ static inline int nfs_readpage_from_fscache(struct file *file,
{
return -ENOBUFS;
}
-static inline int nfs_readpages_from_fscache(struct nfs_open_context *ctx,
- struct inode *inode,
- struct address_space *mapping,
- struct list_head *pages,
- unsigned *nr_pages)
+static inline int nfs_readahead_from_fscache(struct nfs_readdesc *desc,
+ struct readahead_control *ractl)
{
return -ENOBUFS;
}
diff --git a/fs/nfs/read.c b/fs/nfs/read.c
index b47e4f38539b..8be4f179a371 100644
--- a/fs/nfs/read.c
+++ b/fs/nfs/read.c
@@ -390,50 +390,50 @@ int nfs_readpage(struct file *file, struct page *page)
return ret;
}

-int nfs_readpages(struct file *file, struct address_space *mapping,
- struct list_head *pages, unsigned nr_pages)
+void nfs_readahead(struct readahead_control *ractl)
{
struct nfs_readdesc desc;
- struct inode *inode = mapping->host;
+ struct inode *inode = ractl->mapping->host;
+ struct page *page;
int ret;

- dprintk("NFS: nfs_readpages (%s/%Lu %d)\n",
- inode->i_sb->s_id,
- (unsigned long long)NFS_FILEID(inode),
- nr_pages);
+ dprintk("NFS: %s (%s/%llu %lld)\n", __func__,
+ inode->i_sb->s_id,
+ (unsigned long long)NFS_FILEID(inode),
+ readahead_length(ractl));
nfs_inc_stats(inode, NFSIOS_VFSREADPAGES);

- ret = -ESTALE;
if (NFS_STALE(inode))
- goto out;
+ return;

- if (file == NULL) {
- ret = -EBADF;
+ if (ractl->file == NULL) {
desc.ctx = nfs_find_open_context(inode, NULL, FMODE_READ);
if (desc.ctx == NULL)
- goto out;
+ return;
} else
- desc.ctx = get_nfs_open_context(nfs_file_open_context(file));
+ desc.ctx = get_nfs_open_context(nfs_file_open_context(ractl->file));

/* attempt to read as many of the pages as possible from the cache
* - this returns -ENOBUFS immediately if the cookie is negative
*/
- ret = nfs_readpages_from_fscache(desc.ctx, inode, mapping,
- pages, &nr_pages);
+ ret = nfs_readahead_from_fscache(&desc, ractl);
if (ret == 0)
goto read_complete; /* all pages were read */

nfs_pageio_init_read(&desc.pgio, inode, false,
&nfs_async_read_completion_ops);

- ret = read_cache_pages(mapping, pages, readpage_async_filler, &desc);
+ while ((page = readahead_page(ractl))) {
+ ret = readpage_async_filler(&desc, page);
+ put_page(page);
+ if (unlikely(ret))
+ break;
+ }

nfs_pageio_complete_read(&desc.pgio, inode);

read_complete:
put_nfs_open_context(desc.ctx);
-out:
- return ret;
}

int __init nfs_init_readpagecache(void)
diff --git a/include/linux/nfs_fs.h b/include/linux/nfs_fs.h
index 3cfcf219e96b..968c79b1b09b 100644
--- a/include/linux/nfs_fs.h
+++ b/include/linux/nfs_fs.h
@@ -568,8 +568,7 @@ extern int nfs_access_get_cached(struct inode *inode, const struct cred *cred, s
* linux/fs/nfs/read.c
*/
extern int nfs_readpage(struct file *, struct page *);
-extern int nfs_readpages(struct file *, struct address_space *,
- struct list_head *, unsigned);
+extern void nfs_readahead(struct readahead_control *rac);

/*
* inline functions
diff --git a/include/linux/nfs_iostat.h b/include/linux/nfs_iostat.h
index 027874c36c88..8baf8fb7551d 100644
--- a/include/linux/nfs_iostat.h
+++ b/include/linux/nfs_iostat.h
@@ -53,7 +53,7 @@
* NFS page counters
*
* These count the number of pages read or written via nfs_readpage(),
- * nfs_readpages(), or their write equivalents.
+ * nfs_readahead(), or their write equivalents.
*
* NB: When adding new byte counters, please include the measured
* units in the name of each byte counter to help users of this
--
1.8.3.1

2021-01-28 15:00:21

by David Wysochanski

Subject: [PATCH 10/10] NFS: update various invalidation code paths for new IO API

The new fscache IO API removes the following older invalidation-related
APIs: fscache_uncache_all_inode_pages(), fscache_uncache_page(),
and fscache_wait_on_page_write(). Update the various code paths for
the new API, which only requires that we wait on PG_fscache, the bit
that indicates fscache IO is in progress on the page.

Signed-off-by: Dave Wysochanski <[email protected]>
---
fs/nfs/file.c | 9 +++++----
fs/nfs/fscache.c | 23 +----------------------
fs/nfs/fscache.h | 28 +---------------------------
3 files changed, 7 insertions(+), 53 deletions(-)

diff --git a/fs/nfs/file.c b/fs/nfs/file.c
index 9e41745c3faf..e81e11603b9a 100644
--- a/fs/nfs/file.c
+++ b/fs/nfs/file.c
@@ -416,7 +416,7 @@ static void nfs_invalidate_page(struct page *page, unsigned int offset,
/* Cancel any unstarted writes on this page */
nfs_wb_page_cancel(page_file_mapping(page)->host, page);

- nfs_fscache_invalidate_page(page, page->mapping->host);
+ wait_on_page_fscache(page);
}

/*
@@ -482,12 +482,11 @@ static void nfs_check_dirty_writeback(struct page *page,
static int nfs_launder_page(struct page *page)
{
struct inode *inode = page_file_mapping(page)->host;
- struct nfs_inode *nfsi = NFS_I(inode);

dfprintk(PAGECACHE, "NFS: launder_page(%ld, %llu)\n",
inode->i_ino, (long long)page_offset(page));

- nfs_fscache_wait_on_page_write(nfsi, page);
+ wait_on_page_fscache(page);
return nfs_wb_page(inode, page);
}

@@ -562,7 +561,9 @@ static vm_fault_t nfs_vm_page_mkwrite(struct vm_fault *vmf)
sb_start_pagefault(inode->i_sb);

/* make sure the cache has finished storing the page */
- nfs_fscache_wait_on_page_write(NFS_I(inode), page);
+ if (PageFsCache(vmf->page) &&
+ wait_on_page_bit_killable(vmf->page, PG_fscache) < 0)
+ return VM_FAULT_RETRY;

wait_on_bit_action(&NFS_I(inode)->flags, NFS_INO_INVALIDATING,
nfs_wait_bit_killable, TASK_KILLABLE);
diff --git a/fs/nfs/fscache.c b/fs/nfs/fscache.c
index dd8cf3cfed0a..d18eeea9c1b5 100644
--- a/fs/nfs/fscache.c
+++ b/fs/nfs/fscache.c
@@ -16,6 +16,7 @@
#include <linux/slab.h>
#include <linux/iversion.h>
#include <linux/xarray.h>
+#define FSCACHE_USE_NEW_IO_API
#include <linux/fscache.h>
#include <linux/netfs.h>

@@ -320,7 +321,6 @@ void nfs_fscache_open_file(struct inode *inode, struct file *filp)
dfprintk(FSCACHE, "NFS: nfsi 0x%p disabling cache\n", nfsi);
clear_bit(NFS_INO_FSCACHE, &nfsi->flags);
fscache_disable_cookie(cookie, &auxdata, true);
- fscache_uncache_all_inode_pages(cookie, inode);
} else {
dfprintk(FSCACHE, "NFS: nfsi 0x%p enabling cache\n", nfsi);
fscache_enable_cookie(cookie, &auxdata, nfsi->vfs_inode.i_size,
@@ -331,27 +331,6 @@ void nfs_fscache_open_file(struct inode *inode, struct file *filp)
}
EXPORT_SYMBOL_GPL(nfs_fscache_open_file);

-/*
- * Release the caching state associated with a page if undergoing complete page
- * invalidation.
- */
-void __nfs_fscache_invalidate_page(struct page *page, struct inode *inode)
-{
- struct fscache_cookie *cookie = nfs_i_fscache(inode);
-
- BUG_ON(!cookie);
-
- dfprintk(FSCACHE, "NFS: fscache invalidatepage (0x%p/0x%p/0x%p)\n",
- cookie, page, NFS_I(inode));
-
- fscache_wait_on_page_write(cookie, page);
-
- BUG_ON(!PageLocked(page));
- fscache_uncache_page(cookie, page);
- nfs_inc_fscache_stats(page->mapping->host,
- NFSIOS_FSCACHE_PAGES_UNCACHED);
-}
-
static void nfs_issue_op(struct netfs_read_subrequest *subreq)
{
struct inode *inode = subreq->rreq->inode;
diff --git a/fs/nfs/fscache.h b/fs/nfs/fscache.h
index 9f8b1f8e69f3..f9d0464188af 100644
--- a/fs/nfs/fscache.h
+++ b/fs/nfs/fscache.h
@@ -11,6 +11,7 @@
#include <linux/nfs_fs.h>
#include <linux/nfs_mount.h>
#include <linux/nfs4_mount.h>
+#define FSCACHE_USE_NEW_IO_API
#include <linux/fscache.h>

#ifdef CONFIG_NFS_FSCACHE
@@ -93,7 +94,6 @@ struct nfs_fscache_inode_auxdata {
extern void nfs_fscache_clear_inode(struct inode *);
extern void nfs_fscache_open_file(struct inode *, struct file *);

-extern void __nfs_fscache_invalidate_page(struct page *, struct inode *);
extern int nfs_readpage_from_fscache(struct file *file,
struct page *page,
struct nfs_readdesc *desc);
@@ -102,27 +102,6 @@ extern int nfs_readahead_from_fscache(struct nfs_readdesc *desc,
extern void nfs_read_completion_to_fscache(struct nfs_pgio_header *hdr,
unsigned long bytes);
/*
- * wait for a page to complete writing to the cache
- */
-static inline void nfs_fscache_wait_on_page_write(struct nfs_inode *nfsi,
- struct page *page)
-{
- if (PageFsCache(page))
- fscache_wait_on_page_write(nfsi->fscache, page);
-}
-
-/*
- * release the caching state associated with a page if undergoing complete page
- * invalidation
- */
-static inline void nfs_fscache_invalidate_page(struct page *page,
- struct inode *inode)
-{
- if (PageFsCache(page))
- __nfs_fscache_invalidate_page(page, inode);
-}
-
-/*
* Invalidate the contents of fscache for this inode. This will not sleep.
*/
static inline void nfs_fscache_invalidate(struct inode *inode)
@@ -162,11 +141,6 @@ static inline void nfs_fscache_clear_inode(struct inode *inode) {}
static inline void nfs_fscache_open_file(struct inode *inode,
struct file *filp) {}

-static inline void nfs_fscache_invalidate_page(struct page *page,
- struct inode *inode) {}
-static inline void nfs_fscache_wait_on_page_write(struct nfs_inode *nfsi,
- struct page *page) {}
-
static inline int nfs_readpage_from_fscache(struct file *file,
struct page *page,
struct nfs_readdesc *desc)
--
1.8.3.1

2021-01-28 15:00:26

by David Wysochanski

Subject: [PATCH 03/10] NFS: Refactor nfs_readpage() and nfs_readpage_async() to use nfs_readdesc

Both nfs_readpage() and nfs_readpages() use similar code.
This patch makes no functional change; it refactors
nfs_readpage_async() to use nfs_readdesc to enable the future
merging of nfs_readpage_async() and readpage_async_filler().

Signed-off-by: Dave Wysochanski <[email protected]>
---
fs/nfs/read.c | 62 ++++++++++++++++++++++++--------------------------
include/linux/nfs_fs.h | 3 +--
2 files changed, 31 insertions(+), 34 deletions(-)

diff --git a/fs/nfs/read.c b/fs/nfs/read.c
index 464077daf62f..8c05e56dab65 100644
--- a/fs/nfs/read.c
+++ b/fs/nfs/read.c
@@ -114,18 +114,23 @@ static void nfs_readpage_release(struct nfs_page *req, int error)
nfs_release_request(req);
}

-int nfs_readpage_async(struct nfs_open_context *ctx, struct inode *inode,
+struct nfs_readdesc {
+ struct nfs_pageio_descriptor pgio;
+ struct nfs_open_context *ctx;
+};
+
+int nfs_readpage_async(void *data, struct inode *inode,
struct page *page)
{
+ struct nfs_readdesc *desc = data;
struct nfs_page *new;
unsigned int len;
- struct nfs_pageio_descriptor pgio;
struct nfs_pgio_mirror *pgm;

len = nfs_page_length(page);
if (len == 0)
return nfs_return_empty_page(page);
- new = nfs_create_request(ctx, page, 0, len);
+ new = nfs_create_request(desc->ctx, page, 0, len);
if (IS_ERR(new)) {
unlock_page(page);
return PTR_ERR(new);
@@ -133,21 +138,21 @@ int nfs_readpage_async(struct nfs_open_context *ctx, struct inode *inode,
if (len < PAGE_SIZE)
zero_user_segment(page, len, PAGE_SIZE);

- nfs_pageio_init_read(&pgio, inode, false,
+ nfs_pageio_init_read(&desc->pgio, inode, false,
&nfs_async_read_completion_ops);
- if (!nfs_pageio_add_request(&pgio, new)) {
+ if (!nfs_pageio_add_request(&desc->pgio, new)) {
nfs_list_remove_request(new);
- nfs_readpage_release(new, pgio.pg_error);
+ nfs_readpage_release(new, desc->pgio.pg_error);
}
- nfs_pageio_complete(&pgio);
+ nfs_pageio_complete(&desc->pgio);

/* It doesn't make sense to do mirrored reads! */
- WARN_ON_ONCE(pgio.pg_mirror_count != 1);
+ WARN_ON_ONCE(desc->pgio.pg_mirror_count != 1);

- pgm = &pgio.pg_mirrors[0];
+ pgm = &desc->pgio.pg_mirrors[0];
NFS_I(inode)->read_io += pgm->pg_bytes_written;

- return pgio.pg_error < 0 ? pgio.pg_error : 0;
+ return desc->pgio.pg_error < 0 ? desc->pgio.pg_error : 0;
}

static void nfs_page_group_set_uptodate(struct nfs_page *req)
@@ -312,7 +317,7 @@ static void nfs_readpage_result(struct rpc_task *task,
*/
int nfs_readpage(struct file *file, struct page *page)
{
- struct nfs_open_context *ctx;
+ struct nfs_readdesc desc;
struct inode *inode = page_file_mapping(page)->host;
int ret;

@@ -339,39 +344,34 @@ int nfs_readpage(struct file *file, struct page *page)

if (file == NULL) {
ret = -EBADF;
- ctx = nfs_find_open_context(inode, NULL, FMODE_READ);
- if (ctx == NULL)
+ desc.ctx = nfs_find_open_context(inode, NULL, FMODE_READ);
+ if (desc.ctx == NULL)
goto out_unlock;
} else
- ctx = get_nfs_open_context(nfs_file_open_context(file));
+ desc.ctx = get_nfs_open_context(nfs_file_open_context(file));

if (!IS_SYNC(inode)) {
- ret = nfs_readpage_from_fscache(ctx, inode, page);
+ ret = nfs_readpage_from_fscache(desc.ctx, inode, page);
if (ret == 0)
goto out;
}

- xchg(&ctx->error, 0);
- ret = nfs_readpage_async(ctx, inode, page);
+ xchg(&desc.ctx->error, 0);
+ ret = nfs_readpage_async(&desc, inode, page);
if (!ret) {
ret = wait_on_page_locked_killable(page);
if (!PageUptodate(page) && !ret)
- ret = xchg(&ctx->error, 0);
+ ret = xchg(&desc.ctx->error, 0);
}
nfs_add_stats(inode, NFSIOS_READPAGES, 1);
out:
- put_nfs_open_context(ctx);
+ put_nfs_open_context(desc.ctx);
return ret;
out_unlock:
unlock_page(page);
return ret;
}

-struct nfs_readdesc {
- struct nfs_pageio_descriptor *pgio;
- struct nfs_open_context *ctx;
-};
-
static int
readpage_async_filler(void *data, struct page *page)
{
@@ -390,9 +390,9 @@ struct nfs_readdesc {

if (len < PAGE_SIZE)
zero_user_segment(page, len, PAGE_SIZE);
- if (!nfs_pageio_add_request(desc->pgio, new)) {
+ if (!nfs_pageio_add_request(&desc->pgio, new)) {
nfs_list_remove_request(new);
- error = desc->pgio->pg_error;
+ error = desc->pgio.pg_error;
nfs_readpage_release(new, error);
goto out;
}
@@ -407,7 +407,6 @@ struct nfs_readdesc {
int nfs_readpages(struct file *file, struct address_space *mapping,
struct list_head *pages, unsigned nr_pages)
{
- struct nfs_pageio_descriptor pgio;
struct nfs_pgio_mirror *pgm;
struct nfs_readdesc desc;
struct inode *inode = mapping->host;
@@ -440,17 +439,16 @@ int nfs_readpages(struct file *file, struct address_space *mapping,
if (ret == 0)
goto read_complete; /* all pages were read */

- desc.pgio = &pgio;
- nfs_pageio_init_read(&pgio, inode, false,
+ nfs_pageio_init_read(&desc.pgio, inode, false,
&nfs_async_read_completion_ops);

ret = read_cache_pages(mapping, pages, readpage_async_filler, &desc);
- nfs_pageio_complete(&pgio);
+ nfs_pageio_complete(&desc.pgio);

/* It doesn't make sense to do mirrored reads! */
- WARN_ON_ONCE(pgio.pg_mirror_count != 1);
+ WARN_ON_ONCE(desc.pgio.pg_mirror_count != 1);

- pgm = &pgio.pg_mirrors[0];
+ pgm = &desc.pgio.pg_mirrors[0];
NFS_I(inode)->read_io += pgm->pg_bytes_written;
npages = (pgm->pg_bytes_written + PAGE_SIZE - 1) >>
PAGE_SHIFT;
diff --git a/include/linux/nfs_fs.h b/include/linux/nfs_fs.h
index 681ed98e4ba8..cb0248a34518 100644
--- a/include/linux/nfs_fs.h
+++ b/include/linux/nfs_fs.h
@@ -570,8 +570,7 @@ extern int nfs_access_get_cached(struct inode *inode, const struct cred *cred, s
extern int nfs_readpage(struct file *, struct page *);
extern int nfs_readpages(struct file *, struct address_space *,
struct list_head *, unsigned);
-extern int nfs_readpage_async(struct nfs_open_context *, struct inode *,
- struct page *);
+extern int nfs_readpage_async(void *, struct inode *, struct page *);

/*
* inline functions
--
1.8.3.1

2021-02-01 02:20:21

by David Wysochanski

Subject: Re: [PATCH 00/10] Convert NFS fscache read paths to netfs API

On Thu, Jan 28, 2021 at 9:59 AM Dave Wysochanski <[email protected]> wrote:
>
> This minimal set of patches update the NFS client to use the new
> readahead method, and convert the NFS fscache to use the new netfs
> IO API, and are at:
> https://github.com/DaveWysochanskiRH/kernel/releases/tag/fscache-iter-lib-nfs-20210128
> https://github.com/DaveWysochanskiRH/kernel/commit/74357eb291c9c292f3ab3bc9ed1227cb76f52c51
>
> The patches are based on David Howells fscache-netfs-lib tree at
> https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/log/?h=fscache-netfs-lib
>
> The first 6 patches refactor some of the NFS read code to facilitate
> re-use, the next 4 patches do the conversion to the new API. Note
> patch 8 converts nfs_readpages to nfs_readahead.
>
> Changes since my last posting on Jan 27, 2021
> - Fix oops with fscache enabled on parallel read unit test
> - Add patches to handle invalidate and releasepage
> - Use #define FSCACHE_USE_NEW_IO_API to select the new API
> - Minor cleanup in nfs_readahead_from_fscache
>
> Still TODO
> 1. Fix known bugs
> a) nfs_issue_op: takes rcu_read_lock but may calls nfs_page_alloc()
> with GFP_KERNEL which may sleep (dhowells noted this in a review)
> b) nfs_refresh_inode() takes inode->i_lock but may call
> __fscache_invalidate() which may sleep (found with lockdep)
> c) WARN with xfstest fscache/netapp/pnfs/nfs41

Turns out this is a bit more involved, and I would not consider pNFS +
fscache stable right now. For now I may have to disable fscache if
pNFS is enabled, unless I can quickly come up with a reasonable fix
for the problem.

The problem is as follows. Once netfs calls us in "issue_op" for a
given subrequest, it expects one callback when that subrequest
completes. The "clamp_length" function was developed so we can tell
the netfs caller how big an IO we can handle. However, right now it
only implements an 'rsize' check; it does not take into account pNFS
characteristics such as segments, which may split the IO up into
multiple RPCs. Each of those RPCs has its own completion, and so far
I've not come up with a way to call back into netfs only when the
last one is done, so I am not sure what the right approach is. One
obvious approach would be a more sophisticated "clamp_length"
function that adds logic similar to the *pg_test() functions, but I
don't want to duplicate that, so it's not really clear.

> 2. Fixup NFS fscache stats (NFSIOS_FSCACHE_*)
> * Compare with netfs stats and determine if still needed
> 3. Cleanup dfprintks and/or convert to tracepoints
> 4. Further tests (see "Not tested yet")
>
> Tests run
> 1. Custom NFS+fscache unit tests for basic operation: PASS
> * vers=3,4.0,4.1,4.2,sec=sys,server=localhost (same kernel)
> 2. cthon04: PASS
> * test options "-b -g -s -l", fsc,vers=3,4.0,4.1,4.2,sec=sys
> * No failures, oopses or hangs
> 3. iozone tests: PASS
> * nofsc,fsc,vers=3,4.0,4.1,4.2,sec=sys,server=rhel7,rhel8
> * No failures, oopses, or hangs
> 4. xfstests/generic: PASS*
> * no hangs or crashes (one WARN); failures unrelated to these patches
> * Ran following configurations
> * vers=4.1,fsc,sec=sys,rhel7-server: PASS
> * vers=4.0,fsc,sec=sys,rhel7-server: PASS
> * vers=3,fsc,sec=sys,rhel7-server: PASS
> * vers=4.1,nofsc,sec=sys,netapp-server(pnfs/files): PASS
> * vers=4.1,fsc,sec=sys,netapp-server(pnfs/files): INCOMPLETE
> * WARN_ON fs/netfs/read_helper.c:616
> * ran with kernel.panic_on_oops=1
> * vers=4.2,fsc,sec=sys,rhel7-server: running at generic/438
> * vers=4.2,fsc,sec=sys,rhel8-server: running at generic/127
> 5. kernel build: PASS
> * vers=4.2,fsc,sec=sys,rhel8-server: PASS
>
> Not tested yet:
> * error injections (for example, connection disruptions, server errors during IO, etc)
> * many process mixed read/write on same file
> * performance
>
> Dave Wysochanski (10):
> NFS: Clean up nfs_readpage() and nfs_readpages()
> NFS: In nfs_readpage() only increment NFSIOS_READPAGES when read
> succeeds
> NFS: Refactor nfs_readpage() and nfs_readpage_async() to use
> nfs_readdesc
> NFS: Call readpage_async_filler() from nfs_readpage_async()
> NFS: Add nfs_pageio_complete_read() and remove nfs_readpage_async()
> NFS: Allow internal use of read structs and functions
> NFS: Convert to the netfs API and nfs_readpage to use netfs_readpage
> NFS: Convert readpages to readahead and use netfs_readahead for
> fscache
> NFS: Update releasepage to handle new fscache kiocb IO API
> NFS: update various invalidation code paths for new IO API
>
> fs/nfs/file.c | 22 +++--
> fs/nfs/fscache.c | 230 +++++++++++++++++++------------------------
> fs/nfs/fscache.h | 105 +++-----------------
> fs/nfs/internal.h | 8 ++
> fs/nfs/pagelist.c | 2 +
> fs/nfs/read.c | 240 ++++++++++++++++++++-------------------------
> fs/nfs/write.c | 10 +-
> include/linux/nfs_fs.h | 5 +-
> include/linux/nfs_iostat.h | 2 +-
> include/linux/nfs_page.h | 1 +
> include/linux/nfs_xdr.h | 1 +
> 11 files changed, 257 insertions(+), 369 deletions(-)
>
> --
> 1.8.3.1
>

2021-02-01 14:32:58

by Anna Schumaker

[permalink] [raw]
Subject: Re: [PATCH 00/10] Convert NFS fscache read paths to netfs API

Hi David,

On Sun, Jan 31, 2021 at 9:20 PM David Wysochanski <[email protected]> wrote:
>
> On Thu, Jan 28, 2021 at 9:59 AM Dave Wysochanski <[email protected]> wrote:
> >
> > This minimal set of patches updates the NFS client to use the new
> > readahead method and converts the NFS fscache code to use the new
> > netfs IO API; the patches are at:
> > https://github.com/DaveWysochanskiRH/kernel/releases/tag/fscache-iter-lib-nfs-20210128
> > https://github.com/DaveWysochanskiRH/kernel/commit/74357eb291c9c292f3ab3bc9ed1227cb76f52c51
> >
> > The patches are based on David Howells fscache-netfs-lib tree at
> > https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/log/?h=fscache-netfs-lib
> >
> > The first 6 patches refactor some of the NFS read code to facilitate
> > re-use, the next 4 patches do the conversion to the new API. Note
> > patch 8 converts nfs_readpages to nfs_readahead.
> >
> > Changes since my last posting on Jan 27, 2021
> > - Fix oops with fscache enabled on parallel read unit test
> > - Add patches to handle invalidate and releasepage
> > - Use #define FSCACHE_USE_NEW_IO_API to select the new API
> > - Minor cleanup in nfs_readahead_from_fscache
> >
> > Still TODO
> > 1. Fix known bugs
> > a) nfs_issue_op: takes rcu_read_lock but may call nfs_page_alloc()
> > with GFP_KERNEL, which may sleep (dhowells noted this in a review)
> > b) nfs_refresh_inode() takes inode->i_lock but may call
> > __fscache_invalidate() which may sleep (found with lockdep)
> > c) WARN with xfstest fscache/netapp/pnfs/nfs41
>
> Turns out this is a bit more involved, and I would not consider pNFS +
> fscache stable right now. For now I may have to disable fscache if
> pNFS is enabled, unless I can quickly come up with a reasonable fix
> for the problem.

So my thought right now is to take the first 6 cleanup / preparation
patches for the 5.12 merge window and save the cutover for 5.13. This
would give you an extra release cycle to fix the pNFS stability issues, and
it would give more time to find and fix any issues in netfs before
switching NFS over to it.

Would that work?
Anna

>
> The problem is as follows. Once netfs calls us in "issue_op" for a
> given subrequest, it expects one callback when that subrequest
> completes. The "clamp_length" function was developed so we can tell
> netfs how big an IO we can handle. However, right now it only
> implements an 'rsize' check; it does not take into account pNFS
> characteristics such as layout segments, which may split the IO into
> multiple RPCs. Each of those RPCs has its own completion, and so far
> I've not come up with a way to call back into netfs only when the
> last one is done, so I'm not sure what the right approach is. One
> obvious approach would be a more sophisticated "clamp_length"
> function that adds logic similar to the *pg_test() functions, but I
> don't want to duplicate that, so it's not really clear.
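[Editor's note: to make the clamp_length limitation concrete, here is a rough, userspace-only sketch of what a segment-aware clamp could look like. This is not the actual NFS or netfs code; the struct and all names below are invented for illustration, and real pNFS layouts are more complicated than a single segment.]

```c
#include <assert.h>

/*
 * Rough sketch only -- not the actual NFS/netfs code.  The real NFS
 * clamp_length hook currently bounds a netfs subrequest by rsize
 * alone; the hypothetical second step below also stops at a pNFS
 * layout segment boundary, so that one subrequest maps onto one RPC
 * (and therefore one completion).  All names here are invented.
 */
struct layout_segment {              /* hypothetical pNFS segment */
	unsigned long long offset;   /* byte offset of segment start */
	unsigned long long length;   /* segment length in bytes */
};

static unsigned long clamp_subreq_len(unsigned long long start,
				      unsigned long len,
				      unsigned long rsize,
				      const struct layout_segment *seg)
{
	/* Existing behaviour: never issue more than rsize per RPC. */
	if (len > rsize)
		len = rsize;

	/* Hypothetical addition: truncate at the segment boundary so
	 * the subrequest never spans two layout segments (two RPCs). */
	if (seg) {
		unsigned long long seg_end = seg->offset + seg->length;

		if (start < seg_end && start + len > seg_end)
			len = (unsigned long)(seg_end - start);
	}
	return len;
}
```

A real implementation would presumably share the boundary logic with the pNFS *pg_test() helpers rather than duplicate it, which is exactly the objection raised above.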

2021-02-02 12:22:50

by David Wysochanski

[permalink] [raw]
Subject: Re: [PATCH 00/10] Convert NFS fscache read paths to netfs API

On Mon, Feb 1, 2021 at 9:30 AM Anna Schumaker <[email protected]> wrote:
>
> Hi David,
>
> On Sun, Jan 31, 2021 at 9:20 PM David Wysochanski <[email protected]> wrote:
> >
> > On Thu, Jan 28, 2021 at 9:59 AM Dave Wysochanski <[email protected]> wrote:
> > >
> > > This minimal set of patches update the NFS client to use the new
> > > readahead method, and convert the NFS fscache to use the new netfs
> > > IO API, and are at:
> > > https://github.com/DaveWysochanskiRH/kernel/releases/tag/fscache-iter-lib-nfs-20210128
> > > https://github.com/DaveWysochanskiRH/kernel/commit/74357eb291c9c292f3ab3bc9ed1227cb76f52c51
> > >
> > > The patches are based on David Howells fscache-netfs-lib tree at
> > > https://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs.git/log/?h=fscache-netfs-lib
> > >
> > > The first 6 patches refactor some of the NFS read code to facilitate
> > > re-use, the next 4 patches do the conversion to the new API. Note
> > > patch 8 converts nfs_readpages to nfs_readahead.
> > >
> > > Changes since my last posting on Jan 27, 2021
> > > - Fix oops with fscache enabled on parallel read unit test
> > > - Add patches to handle invalidate and releasepage
> > > - Use #define FSCACHE_USE_NEW_IO_API to select the new API
> > > - Minor cleanup in nfs_readahead_from_fscache
> > >
> > > Still TODO
> > > 1. Fix known bugs
> > > a) nfs_issue_op: takes rcu_read_lock but may calls nfs_page_alloc()
> > > with GFP_KERNEL which may sleep (dhowells noted this in a review)
> > > b) nfs_refresh_inode() takes inode->i_lock but may call
> > > __fscache_invalidate() which may sleep (found with lockdep)
> > > c) WARN with xfstest fscache/netapp/pnfs/nfs41
> >
> > Turns out this is a bit more involved and I would not consider pNFS +
> > fscache stable right now.
> > For now I may have to disable fscache if pNFS is enabled unless I can
> > quickly come up
> > with a reasonable fix for the problem.
>
> So my thought right now is to take the first 6 cleanup / preparation
> patches for the 5.12 merge window and save the cutover for 5.13. This
> would give you an extra release cycle to fix the pNFS stability, and
> it would give more time to find and fix any issues in netfs before
> switching NFS over to it.
>
> Would that work?
> Anna
>

Yes that's fine.

