2024-05-27 16:59:41

by Christoph Hellwig

Subject: support large folios for NFS

Hi all,

this series adds large folio support to NFS, and almost doubles the
buffered write throughput from the previous bottleneck of ~2.5GB/s
(just like for other file systems).

The first patch is an old one from willy that I've updated very slightly.
Note that this update now requires the mapping_max_folio_size helper
merged into Linus' tree only a few minutes ago.

Diffstat:
fs/nfs/file.c | 4 +++-
fs/nfs/inode.c | 1 +
mm/filemap.c | 40 +++++++++++++++++++++++++---------------
3 files changed, 29 insertions(+), 16 deletions(-)


2024-05-27 19:43:17

by Sagi Grimberg

Subject: Re: support large folios for NFS



On 27/05/2024 19:36, Christoph Hellwig wrote:
> Hi all,
>
> this series adds large folio support to NFS, and almost doubles the
> buffered write throughput from the previous bottleneck of ~2.5GB/s
> (just like for other file systems).
>
> The first patch is an old one from willy that I've updated very slightly.
> Note that this update now requires the mapping_max_folio_size helper
> merged into Linus' tree only a few minutes ago.

I can confirm that NFS buffered writes saw a dramatic >2x improvement
on my test system.

For the series:
Tested-by: Sagi Grimberg <[email protected]>




2024-05-28 21:06:08

by Matthew Wilcox

Subject: Re: support large folios for NFS

On Mon, May 27, 2024 at 06:36:07PM +0200, Christoph Hellwig wrote:
> Hi all,
>
> this series adds large folio support to NFS, and almost doubles the
> buffered write throughput from the previous bottleneck of ~2.5GB/s
> (just like for other file systems).
>
> The first patch is an old one from willy that I've updated very slightly.
> Note that this update now requires the mapping_max_folio_size helper
> merged into Linus' tree only a few minutes ago.

Kind of surprised this didn't fall over given the bugs I just sent a
patch for ... misinterpreting the folio indices seems like it should
have caused a failure in _some_ fstest.

2024-05-28 22:08:10

by Matthew Wilcox

Subject: [PATCH] nfs: Fix misuses of folio_shift() and folio_order()

Page cache indices are in units of PAGE_SIZE, not in units of
the folio size. Revert the change in nfs_grow_file(), and
pass the inode to nfs_folio_length() so it can be reimplemented
in terms of folio_mkwrite_check_truncate() which handles this
correctly.

Fixes: 0c493b5cf16e ("NFS: Convert buffered writes to use folios")
Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Cc: Trond Myklebust <[email protected]>
Cc: Anna Schumaker <[email protected]>
Cc: Christoph Hellwig <[email protected]>
---
fs/nfs/file.c | 6 +++---
fs/nfs/internal.h | 16 +++++-----------
fs/nfs/read.c | 2 +-
fs/nfs/write.c | 9 +++++----
include/linux/pagemap.h | 4 ++--
5 files changed, 16 insertions(+), 21 deletions(-)

diff --git a/fs/nfs/file.c b/fs/nfs/file.c
index 6bd127e6683d..723d78bbfe3f 100644
--- a/fs/nfs/file.c
+++ b/fs/nfs/file.c
@@ -301,7 +301,7 @@ EXPORT_SYMBOL_GPL(nfs_file_fsync);
static bool nfs_folio_is_full_write(struct folio *folio, loff_t pos,
unsigned int len)
{
- unsigned int pglen = nfs_folio_length(folio);
+ unsigned int pglen = nfs_folio_length(folio, folio->mapping->host);
unsigned int offset = offset_in_folio(folio, pos);
unsigned int end = offset + len;

@@ -386,7 +386,7 @@ static int nfs_write_end(struct file *file, struct address_space *mapping,
*/
if (!folio_test_uptodate(folio)) {
size_t fsize = folio_size(folio);
- unsigned pglen = nfs_folio_length(folio);
+ unsigned pglen = nfs_folio_length(folio, mapping->host);
unsigned end = offset + copied;

if (pglen == 0) {
@@ -610,7 +610,7 @@ static vm_fault_t nfs_vm_page_mkwrite(struct vm_fault *vmf)

folio_wait_writeback(folio);

- pagelen = nfs_folio_length(folio);
+ pagelen = nfs_folio_length(folio, inode);
if (pagelen == 0)
goto out_unlock;

diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
index 9f0f4534744b..3b0236e67257 100644
--- a/fs/nfs/internal.h
+++ b/fs/nfs/internal.h
@@ -819,19 +819,13 @@ unsigned int nfs_page_length(struct page *page)
/*
* Determine the number of bytes of data the page contains
*/
-static inline size_t nfs_folio_length(struct folio *folio)
+static inline size_t nfs_folio_length(struct folio *folio, struct inode *inode)
{
- loff_t i_size = i_size_read(folio_file_mapping(folio)->host);
+ ssize_t ret = folio_mkwrite_check_truncate(folio, inode);

- if (i_size > 0) {
- pgoff_t index = folio_index(folio) >> folio_order(folio);
- pgoff_t end_index = (i_size - 1) >> folio_shift(folio);
- if (index < end_index)
- return folio_size(folio);
- if (index == end_index)
- return offset_in_folio(folio, i_size - 1) + 1;
- }
- return 0;
+ if (ret < 0)
+ ret = 0;
+ return ret;
}

/*
diff --git a/fs/nfs/read.c b/fs/nfs/read.c
index a142287d86f6..ba3bb496f832 100644
--- a/fs/nfs/read.c
+++ b/fs/nfs/read.c
@@ -296,7 +296,7 @@ int nfs_read_add_folio(struct nfs_pageio_descriptor *pgio,
unsigned int len, aligned_len;
int error;

- len = nfs_folio_length(folio);
+ len = nfs_folio_length(folio, inode);
if (len == 0)
return nfs_return_empty_folio(folio);

diff --git a/fs/nfs/write.c b/fs/nfs/write.c
index 2329cbb0e446..7713ce7c5b3a 100644
--- a/fs/nfs/write.c
+++ b/fs/nfs/write.c
@@ -278,8 +278,8 @@ static void nfs_grow_file(struct folio *folio, unsigned int offset,

spin_lock(&inode->i_lock);
i_size = i_size_read(inode);
- end_index = ((i_size - 1) >> folio_shift(folio)) << folio_order(folio);
- if (i_size > 0 && folio_index(folio) < end_index)
+ end_index = (i_size - 1) >> PAGE_SHIFT;
+ if (i_size > 0 && folio->index < end_index)
goto out;
end = folio_file_pos(folio) + (loff_t)offset + (loff_t)count;
if (i_size >= end)
@@ -358,7 +358,8 @@ nfs_page_group_search_locked(struct nfs_page *head, unsigned int page_offset)
*/
static bool nfs_page_group_covers_page(struct nfs_page *req)
{
- unsigned int len = nfs_folio_length(nfs_page_to_folio(req));
+ struct folio *folio = nfs_page_to_folio(req);
+ unsigned int len = nfs_folio_length(folio, folio->mapping->host);
struct nfs_page *tmp;
unsigned int pos = 0;

@@ -1356,7 +1357,7 @@ int nfs_update_folio(struct file *file, struct folio *folio,
struct nfs_open_context *ctx = nfs_file_open_context(file);
struct address_space *mapping = folio_file_mapping(folio);
struct inode *inode = mapping->host;
- unsigned int pagelen = nfs_folio_length(folio);
+ unsigned int pagelen = nfs_folio_length(folio, inode);
int status = 0;

nfs_inc_stats(inode, NFSIOS_VFSUPDATEPAGE);
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index c6aaceed0de6..df57d7361a9a 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -212,8 +212,8 @@ enum mapping_flags {
AS_FOLIO_ORDER_MAX = 21, /* Bits 16-25 are used for FOLIO_ORDER */
};

-#define AS_FOLIO_ORDER_MIN_MASK 0x001f0000
-#define AS_FOLIO_ORDER_MAX_MASK 0x03e00000
+#define AS_FOLIO_ORDER_MIN_MASK (31 << AS_FOLIO_ORDER_MIN)
+#define AS_FOLIO_ORDER_MAX_MASK (31 << AS_FOLIO_ORDER_MAX)
#define AS_FOLIO_ORDER_MASK (AS_FOLIO_ORDER_MIN_MASK | AS_FOLIO_ORDER_MAX_MASK)

/**
--
2.43.0


2024-05-29 05:14:52

by Christoph Hellwig

Subject: Re: support large folios for NFS

On Tue, May 28, 2024 at 10:05:58PM +0100, Matthew Wilcox wrote:
> On Mon, May 27, 2024 at 06:36:07PM +0200, Christoph Hellwig wrote:
> > Hi all,
> >
> > this series adds large folio support to NFS, and almost doubles the
> > buffered write throughput from the previous bottleneck of ~2.5GB/s
> > (just like for other file systems).
> >
> > The first patch is an old one from willy that I've updated very slightly.
> > Note that this update now requires the mapping_max_folio_size helper
> > merged into Linus' tree only a few minutes ago.
>
> Kind of surprised this didn't fall over given the bugs I just sent a
> patch for ... misinterpreting the folio indices seems like it should
> have caused a failure in _some_ fstest.

I've run quite a few tests with different NFS protocol versions and saw
no new failures; the one existing failure is an MM issue that is also
reproducible with local XFS. That's indeed a bit odd.

2024-05-29 05:15:52

by Christoph Hellwig

Subject: Re: [PATCH] nfs: Fix misuses of folio_shift() and folio_order()

> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> index c6aaceed0de6..df57d7361a9a 100644
> --- a/include/linux/pagemap.h
> +++ b/include/linux/pagemap.h
> @@ -212,8 +212,8 @@ enum mapping_flags {
> AS_FOLIO_ORDER_MAX = 21, /* Bits 16-25 are used for FOLIO_ORDER */
> };
>
> -#define AS_FOLIO_ORDER_MIN_MASK 0x001f0000
> -#define AS_FOLIO_ORDER_MAX_MASK 0x03e00000
> +#define AS_FOLIO_ORDER_MIN_MASK (31 << AS_FOLIO_ORDER_MIN)
> +#define AS_FOLIO_ORDER_MAX_MASK (31 << AS_FOLIO_ORDER_MAX)

This looks unrelated.

The NFS changes do look good to me, I'll kick off a testing run on
them in a bit.


2024-05-29 06:31:16

by Christoph Hellwig

Subject: Re: [PATCH] nfs: Fix misuses of folio_shift() and folio_order()

On Tue, May 28, 2024 at 10:03:15PM +0100, Matthew Wilcox (Oracle) wrote:
> Page cache indices are in units of PAGE_SIZE, not in units of
> the folio size. Revert the change in nfs_grow_file(), and
> pass the inode to nfs_folio_length() so it can be reimplemented
> in terms of folio_mkwrite_check_truncate() which handles this
> correctly.

I had to apply the incremental patch below to make the change compile.
With that it causes a new xfstests failure in generic/127 that I haven't
looked into yet. The mm-level bugs I've seen even with baseline
Linus' tree also happened more often than in my previous tests, but
that might just be coincidence.


diff --git a/fs/nfs/nfstrace.h b/fs/nfs/nfstrace.h
index 1e710654af1173..0a5d5fa9513735 100644
--- a/fs/nfs/nfstrace.h
+++ b/fs/nfs/nfstrace.h
@@ -938,7 +938,7 @@ TRACE_EVENT(nfs_sillyrename_unlink,

DECLARE_EVENT_CLASS(nfs_folio_event,
TP_PROTO(
- const struct inode *inode,
+ struct inode *inode,
struct folio *folio
),

@@ -954,14 +954,14 @@ DECLARE_EVENT_CLASS(nfs_folio_event,
),

TP_fast_assign(
- const struct nfs_inode *nfsi = NFS_I(inode);
+ struct nfs_inode *nfsi = NFS_I(inode);

__entry->dev = inode->i_sb->s_dev;
__entry->fileid = nfsi->fileid;
__entry->fhandle = nfs_fhandle_hash(&nfsi->fh);
__entry->version = inode_peek_iversion_raw(inode);
__entry->offset = folio_file_pos(folio);
- __entry->count = nfs_folio_length(folio);
+ __entry->count = nfs_folio_length(folio, inode);
),

TP_printk(
@@ -977,14 +977,14 @@ DECLARE_EVENT_CLASS(nfs_folio_event,
#define DEFINE_NFS_FOLIO_EVENT(name) \
DEFINE_EVENT(nfs_folio_event, name, \
TP_PROTO( \
- const struct inode *inode, \
+ struct inode *inode, \
struct folio *folio \
), \
TP_ARGS(inode, folio))

DECLARE_EVENT_CLASS(nfs_folio_event_done,
TP_PROTO(
- const struct inode *inode,
+ struct inode *inode,
struct folio *folio,
int ret
),
@@ -1002,14 +1002,14 @@ DECLARE_EVENT_CLASS(nfs_folio_event_done,
),

TP_fast_assign(
- const struct nfs_inode *nfsi = NFS_I(inode);
+ struct nfs_inode *nfsi = NFS_I(inode);

__entry->dev = inode->i_sb->s_dev;
__entry->fileid = nfsi->fileid;
__entry->fhandle = nfs_fhandle_hash(&nfsi->fh);
__entry->version = inode_peek_iversion_raw(inode);
__entry->offset = folio_file_pos(folio);
- __entry->count = nfs_folio_length(folio);
+ __entry->count = nfs_folio_length(folio, inode);
__entry->ret = ret;
),

@@ -1026,7 +1026,7 @@ DECLARE_EVENT_CLASS(nfs_folio_event_done,
#define DEFINE_NFS_FOLIO_EVENT_DONE(name) \
DEFINE_EVENT(nfs_folio_event_done, name, \
TP_PROTO( \
- const struct inode *inode, \
+ struct inode *inode, \
struct folio *folio, \
int ret \
), \

2024-05-29 13:35:15

by Trond Myklebust

Subject: Re: support large folios for NFS

On Tue, 2024-05-28 at 22:05 +0100, Matthew Wilcox wrote:
> On Mon, May 27, 2024 at 06:36:07PM +0200, Christoph Hellwig wrote:
> > Hi all,
> >
> > this series adds large folio support to NFS, and almost doubles the
> > buffered write throughput from the previous bottleneck of ~2.5GB/s
> > (just like for other file systems).
> >
> > The first patch is an old one from willy that I've updated very
> > slightly.
> > Note that this update now requires the mapping_max_folio_size
> > helper
> > merged into Linus' tree only a few minutes ago.
>
> Kind of surprised this didn't fall over given the bugs I just sent a
> patch for ... misinterpreting the folio indices seems like it should
> have caused a failure in _some_ fstest.

Why wouldn't it work? The code you're replacing isn't assuming that
page cache indices are in units of the folio size. It is just assuming
that folio boundaries are multiples of the folio size.

--
Trond Myklebust
Linux NFS client maintainer, Hammerspace
[email protected]


2024-05-29 14:38:05

by Trond Myklebust

Subject: Re: [PATCH] nfs: Fix misuses of folio_shift() and folio_order()

On Tue, 2024-05-28 at 22:03 +0100, Matthew Wilcox (Oracle) wrote:
> Page cache indices are in units of PAGE_SIZE, not in units of
> the folio size.  Revert the change in nfs_grow_file(), and
> pass the inode to nfs_folio_length() so it can be reimplemented
> in terms of folio_mkwrite_check_truncate() which handles this
> correctly.

For the record, the code being replaced here is not assuming that page
cache indices are in units of the folio size. It is assuming that folio
boundaries will lie on offsets that are multiples of the folio size and
that the current page attributes (page lock, uptodate, etc) are
expected to apply to the data that lies within those folio boundaries.
The way the folio code is written today, that assumption appears to be
correct.

I'm fine with replacing NFS-specific code with generic code when
obviously correct, but AFAICS this would be a cleanup, and not a bug
fix.


--
Trond Myklebust
Linux NFS client maintainer, Hammerspace
[email protected]


2024-05-29 21:59:55

by Trond Myklebust

Subject: Re: support large folios for NFS

On Mon, 2024-05-27 at 18:36 +0200, Christoph Hellwig wrote:
> Hi all,
>
> this series adds large folio support to NFS, and almost doubles the
> buffered write throughput from the previous bottleneck of ~2.5GB/s
> (just like for other file systems).
>
> The first patch is an old one from willy that I've updated very
> slightly.
> Note that this update now requires the mapping_max_folio_size helper
> merged into Linus' tree only a few minutes ago.
>
> Diffstat:
>  fs/nfs/file.c  |    4 +++-
>  fs/nfs/inode.c |    1 +
>  mm/filemap.c   |   40 +++++++++++++++++++++++++---------------
>  3 files changed, 29 insertions(+), 16 deletions(-)
>

Which tree did you intend to merge this through? Willy's or Anna and
mine? I'm OK either way. I just want to make sure we're on the same
page.

--
Trond Myklebust
Linux NFS client maintainer, Hammerspace
[email protected]


2024-05-31 06:14:53

by Christoph Hellwig

Subject: Re: support large folios for NFS

On Wed, May 29, 2024 at 09:59:44PM +0000, Trond Myklebust wrote:
> Which tree did you intend to merge this through? Willy's or Anna and
> mine? I'm OK either way. I just want to make sure we're on the same
> page.

I'm perfectly fine either way too. If willy wants to get any other
work for generic_perform_write in as per his RFC patches the pagecache
tree might be a better place, if not maybe the nfs tree.


2024-06-07 05:29:38

by Christoph Hellwig

Subject: Re: support large folios for NFS

On Fri, May 31, 2024 at 08:14:43AM +0200, [email protected] wrote:
> On Wed, May 29, 2024 at 09:59:44PM +0000, Trond Myklebust wrote:
> > Which tree did you intend to merge this through? Willy's or Anna and
> > mine? I'm OK either way. I just want to make sure we're on the same
> > page.
>
> I'm perfectly fine either way too. If willy wants to get any other
> work for generic_perform_write in as per his RFC patches the pagecache
> tree might be a better place, if not maybe the nfs tree.

That maintainer celebrity death match was a bit boring :) Any takers?

2024-06-07 07:59:42

by Cedric Blancher

Subject: Re: support large folios for NFS

On Fri, 7 Jun 2024 at 07:29, [email protected] <[email protected]> wrote:
>
> On Fri, May 31, 2024 at 08:14:43AM +0200, [email protected] wrote:
> > On Wed, May 29, 2024 at 09:59:44PM +0000, Trond Myklebust wrote:
> > > Which tree did you intend to merge this through? Willy's or Anna and
> > > mine? I'm OK either way. I just want to make sure we're on the same
> > > page.
> >
> > I'm perfectly fine either way too. If willy wants to get any other
> > work for generic_perform_write in as per his RFC patches the pagecache
> > tree might be a better place, if not maybe the nfs tree.
>
> That maintainer celebrity death match was a bit boring :) Any takers?
>

As much as we like to see blood, gore, WH40K chainswords, ripped-off
brains and ethernet cables, it would be easier (and less expensive) to
just apply the patch :)

Ced
--
Cedric Blancher <[email protected]>
[https://plus.google.com/u/0/+CedricBlancher/]
Institute Pasteur

2024-06-07 15:34:22

by Trond Myklebust

Subject: Re: support large folios for NFS

On Fri, 2024-06-07 at 07:29 +0200, [email protected] wrote:
> On Fri, May 31, 2024 at 08:14:43AM +0200, [email protected] wrote:
> > On Wed, May 29, 2024 at 09:59:44PM +0000, Trond Myklebust wrote:
> > > Which tree did you intend to merge this through? Willy's or Anna
> > > and
> > > mine? I'm OK either way. I just want to make sure we're on the
> > > same
> > > page.
> >
> > I'm perfectly fine either way too.  If willy wants to get any other
> > work for generic_perform_write in as per his RFC patches the
> > pagecache
> > tree might be a better place, if not maybe the nfs tree.
>
> That maintainer celebrity death match was a bit boring :)  Any
> takers?
>

We'll push them through the NFS tree.

--
Trond Myklebust
Linux NFS client maintainer, Hammerspace
[email protected]