2023-10-13 16:06:09

by David Howells

Subject: [RFC PATCH 00/53] netfs, afs, cifs: Delegate high-level I/O to netfslib

Hi Jeff, Steve,

I have been working on my netfslib helpers to the point that I can run
xfstests on AFS to completion (both with write-back buffering and, with a
small patch, write-through buffering in the pagecache). I can also run a
certain amount of xfstests on CIFS, though that requires some more
debugging. However, this seems like a good time to post a preview of the
patches.

The patches remove a little over 800 lines from AFS and over 2000 from
CIFS, albeit with around 3000 lines added to netfs. Hopefully, I will be
able to remove a bunch of lines from 9P and Ceph too.

The main aims of these patches are to get high-level I/O and knowledge of
the pagecache out of the filesystem drivers as much as possible and to get
rid, as far as possible, of their knowledge that pages/folios exist.

Further, I would like to see ->write_begin, ->write_end and ->launder_folio
go away.

Features that these patches add to what is already present in netfslib:

(1) NFS-style (and Ceph-style) locking around DIO vs buffered I/O calls to
prevent these from happening at the same time. mmap'd I/O can, of
necessity, happen at any time, ignoring these locks.

(2) Support for unbuffered I/O. The data is kept in the bounce buffer and
the pagecache is not used. This can be turned on with an inode flag.

(3) Support for direct I/O. This is basically unbuffered I/O with some
extra restrictions and no RMW.

(4) Support for using a bounce buffer in an operation. The bounce buffer
may be bigger than the target data/buffer, allowing for crypto
rounding.

(5) Support for content encryption. This isn't supported yet by AFS/CIFS
but is aimed initially at Ceph.

(6) ->write_begin() and ->write_end() are ignored in favour of merging all
of that into one function, netfs_perform_write(), thereby avoiding the
function pointer traversals.

(7) Support for write-through caching in the pagecache.
netfs_perform_write() adds the pages it modifies to an I/O operation
as it goes and directly marks them writeback rather than dirty. When
writing back from write-through, it limits the range written back.
This should allow CIFS to deal with byte-range mandatory locks
correctly.

(8) O_*SYNC and RWF_*SYNC writes use write-through rather than writing to
the pagecache and then flushing afterwards. An AIO O_*SYNC write will
notify of completion when the sub-writes all complete.

(9) Support for write-streaming where modified data is held in !uptodate
folios, with a private struct attached indicating the range that is
valid.

(10) Support for write grouping, multiplexing a pointer to a group in the
folio private data with the write-streaming data. The writepages
algorithm only writes stuff back that's in the nominated group. This
is intended for use by Ceph to write its snaps in order (a sketch of
(9) and (10) follows this list).

(11) Skipping reads for which we know the server could only supply zeros or
EOF (for instance if we've done a local write that leaves a hole in
the file and extends the local inode size).
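
To illustrate (9) and (10), the folio private state might look something
like the sketch below. The structure, field and function names here are
illustrative only, not the final netfslib definitions:

	struct netfs_folio_state {
		void		*group;		/* Write group, or NULL */
		unsigned int	dirty_offset;	/* Start of modified region */
		unsigned int	dirty_len;	/* Length of modified region */
	};

	/* Attach streaming-write state to a !uptodate folio. */
	static int netfs_attach_stream(struct folio *folio, void *group,
				       size_t offset, size_t len)
	{
		struct netfs_folio_state *fs;

		fs = kmalloc(sizeof(*fs), GFP_KERNEL);
		if (!fs)
			return -ENOMEM;
		fs->group	 = group;
		fs->dirty_offset = offset;
		fs->dirty_len	 = len;
		folio_attach_private(folio, fs);
		return 0;
	}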


General notes:

(1) netfslib now makes use of folio->private, which means the filesystem
can't use it.

(2) Use of fscache is not yet tested. I'm not sure whether to allow a
cache to be used with a write-through write.

(3) The filesystem provides wrappers to call the write helpers, allowing
it to do pre-validation, oplock/capability fetching and the passing in
of write group info.

(4) I want to try flushing the data when tearing down an inode, before
invalidating it, to render launder_folio unnecessary.

(5) Write-through caching will generate and dispatch write subrequests as
it gathers enough data to hit wsize and has whole pages that at least
span that size. This needs to be a bit more flexible, allowing for a
filesystem such as CIFS to have a variable wsize.

(6) The filesystem driver is just given read and write calls with an
iov_iter describing the data/buffer to use. Ideally, they don't see
pages or folios at all. A function, extract_iter_to_sg(), is already
available to decant part of an iterator into a scatterlist for crypto
purposes (a usage sketch follows these notes).


CIFS notes:

(1) CIFS is made to use unbuffered I/O for unbuffered caching modes and
write-through caching for cache=strict.

(2) cifs_init_request() occasionally throws an error that it can't get a
writable file when trying to do writeback.

(3) Apparent file corruption frequently appears in the target file when
cifs_copy_file_range() is used, even though that function doesn't use
any netfslib helpers and even if the copy doesn't overlap with any
pages in the pagecache.

(4) I should be able to turn multipage folio support on in CIFS now.

(5) The then-unused CIFS code is removed in three patches, not one, to
stop the git patch generator from producing confusing patches in
which it thinks code is being moved around rather than just being
removed.

David

David Howells (53):
netfs: Add a procfile to list in-progress requests
netfs: Track the fpos above which the server has no data
netfs: Note nonblockingness in the netfs_io_request struct
netfs: Allow the netfs to make the io (sub)request alloc larger
netfs: Add a ->free_subrequest() op
afs: Don't use folio->private to record partial modification
netfs: Provide invalidate_folio and release_folio calls
netfs: Add rsize to netfs_io_request
netfs: Implement unbuffered/DIO vs buffered I/O locking
netfs: Add iov_iters to (sub)requests to describe various buffers
netfs: Add support for DIO buffering
netfs: Provide tools to create a buffer in an xarray
netfs: Add bounce buffering support
netfs: Add func to calculate pagecount/size-limited span of an
iterator
netfs: Limit subrequest by size or number of segments
netfs: Export netfs_put_subrequest() and some tracepoints
netfs: Extend the netfs_io_*request structs to handle writes
netfs: Add a hook to tell the netfs to update its i_size
netfs: Make netfs_put_request() handle a NULL pointer
fscache: Add a function to begin a cache op from a netfslib request
netfs: Make the refcounting of netfs_begin_read() easier to use
netfs: Prep to use folio->private for write grouping and streaming
write
netfs: Dispatch write requests to process a writeback slice
netfs: Provide func to copy data to pagecache for buffered write
netfs: Make netfs_read_folio() handle streaming-write pages
netfs: Allocate multipage folios in the writepath
netfs: Implement support for unbuffered/DIO read
netfs: Implement unbuffered/DIO write support
netfs: Implement buffered write API
netfs: Allow buffered shared-writeable mmap through
netfs_page_mkwrite()
netfs: Provide netfs_file_read_iter()
netfs: Provide a writepages implementation
netfs: Provide minimum blocksize parameter
netfs: Make netfs_skip_folio_read() take account of blocksize
netfs: Perform content encryption
netfs: Decrypt encrypted content
netfs: Support decryption on unbuffered/DIO read
netfs: Support encryption on unbuffered/DIO write
netfs: Provide a launder_folio implementation
netfs: Implement a write-through caching option
netfs: Rearrange netfs_io_subrequest to put request pointer first
afs: Use the netfs write helpers
cifs: Replace cifs_readdata with a wrapper around netfs_io_subrequest
cifs: Share server EOF pos with netfslib
cifs: Replace cifs_writedata with a wrapper around netfs_io_subrequest
cifs: Use more fields from netfs_io_subrequest
cifs: Make wait_mtu_credits take size_t args
cifs: Implement netfslib hooks
cifs: Move cifs_loose_read_iter() and cifs_file_write_iter() to file.c
cifs: Cut over to using netfslib
cifs: Remove some code that's no longer used, part 1
cifs: Remove some code that's no longer used, part 2
cifs: Remove some code that's no longer used, part 3

fs/9p/vfs_addr.c | 51 +-
fs/afs/file.c | 206 +--
fs/afs/inode.c | 15 +-
fs/afs/internal.h | 66 +-
fs/afs/write.c | 816 +---------
fs/ceph/addr.c | 28 +-
fs/ceph/cache.h | 12 -
fs/fscache/io.c | 42 +
fs/netfs/Makefile | 9 +-
fs/netfs/buffered_read.c | 245 ++-
fs/netfs/buffered_write.c | 1223 ++++++++++++++
fs/netfs/crypto.c | 148 ++
fs/netfs/direct_read.c | 263 +++
fs/netfs/direct_write.c | 359 +++++
fs/netfs/internal.h | 121 ++
fs/netfs/io.c | 325 +++-
fs/netfs/iterator.c | 97 ++
fs/netfs/locking.c | 209 +++
fs/netfs/main.c | 101 ++
fs/netfs/misc.c | 237 +++
fs/netfs/objects.c | 64 +-
fs/netfs/output.c | 485 ++++++
fs/netfs/stats.c | 22 +-
fs/smb/client/Kconfig | 1 +
fs/smb/client/cifsfs.c | 65 +-
fs/smb/client/cifsfs.h | 10 +-
fs/smb/client/cifsglob.h | 59 +-
fs/smb/client/cifsproto.h | 10 +-
fs/smb/client/cifssmb.c | 111 +-
fs/smb/client/file.c | 2905 ++++++----------------------------
fs/smb/client/fscache.c | 109 --
fs/smb/client/fscache.h | 54 -
fs/smb/client/inode.c | 25 +-
fs/smb/client/smb2ops.c | 20 +-
fs/smb/client/smb2pdu.c | 168 +-
fs/smb/client/smb2proto.h | 5 +-
fs/smb/client/trace.h | 144 +-
fs/smb/client/transport.c | 17 +-
include/linux/fscache.h | 6 +
include/linux/netfs.h | 173 +-
include/trace/events/afs.h | 31 -
include/trace/events/netfs.h | 158 +-
42 files changed, 5136 insertions(+), 4079 deletions(-)
create mode 100644 fs/netfs/buffered_write.c
create mode 100644 fs/netfs/crypto.c
create mode 100644 fs/netfs/direct_read.c
create mode 100644 fs/netfs/direct_write.c
create mode 100644 fs/netfs/locking.c
create mode 100644 fs/netfs/misc.c
create mode 100644 fs/netfs/output.c


2023-10-13 16:06:58

by David Howells

Subject: [RFC PATCH 07/53] netfs: Provide invalidate_folio and release_folio calls

Provide default invalidate_folio and release_folio calls. These will need
to interact with invalidation correctly at some point. They will be needed
if netfslib is to make use of folio->private for its own purposes.

Signed-off-by: David Howells <[email protected]>
cc: Jeff Layton <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
---
fs/9p/vfs_addr.c | 33 ++-------------------------
fs/afs/file.c | 53 ++++---------------------------------------
fs/ceph/addr.c | 24 ++------------------
fs/netfs/Makefile | 1 +
fs/netfs/misc.c | 51 +++++++++++++++++++++++++++++++++++++++++
include/linux/netfs.h | 6 +++--
6 files changed, 64 insertions(+), 104 deletions(-)
create mode 100644 fs/netfs/misc.c

diff --git a/fs/9p/vfs_addr.c b/fs/9p/vfs_addr.c
index 8a635999a7d6..18a666c43e4a 100644
--- a/fs/9p/vfs_addr.c
+++ b/fs/9p/vfs_addr.c
@@ -104,35 +104,6 @@ const struct netfs_request_ops v9fs_req_ops = {
.issue_read = v9fs_issue_read,
};

-/**
- * v9fs_release_folio - release the private state associated with a folio
- * @folio: The folio to be released
- * @gfp: The caller's allocation restrictions
- *
- * Returns true if the page can be released, false otherwise.
- */
-
-static bool v9fs_release_folio(struct folio *folio, gfp_t gfp)
-{
- if (folio_test_private(folio))
- return false;
-#ifdef CONFIG_9P_FSCACHE
- if (folio_test_fscache(folio)) {
- if (current_is_kswapd() || !(gfp & __GFP_FS))
- return false;
- folio_wait_fscache(folio);
- }
- fscache_note_page_release(v9fs_inode_cookie(V9FS_I(folio_inode(folio))));
-#endif
- return true;
-}
-
-static void v9fs_invalidate_folio(struct folio *folio, size_t offset,
- size_t length)
-{
- folio_wait_fscache(folio);
-}
-
#ifdef CONFIG_9P_FSCACHE
static void v9fs_write_to_cache_done(void *priv, ssize_t transferred_or_error,
bool was_async)
@@ -355,8 +326,8 @@ const struct address_space_operations v9fs_addr_operations = {
.writepage = v9fs_vfs_writepage,
.write_begin = v9fs_write_begin,
.write_end = v9fs_write_end,
- .release_folio = v9fs_release_folio,
- .invalidate_folio = v9fs_invalidate_folio,
+ .release_folio = netfs_release_folio,
+ .invalidate_folio = netfs_invalidate_folio,
.launder_folio = v9fs_launder_folio,
.direct_IO = v9fs_direct_IO,
};
diff --git a/fs/afs/file.c b/fs/afs/file.c
index 0c49b3b6f214..3fea5cd8ef13 100644
--- a/fs/afs/file.c
+++ b/fs/afs/file.c
@@ -20,9 +20,6 @@

static int afs_file_mmap(struct file *file, struct vm_area_struct *vma);
static int afs_symlink_read_folio(struct file *file, struct folio *folio);
-static void afs_invalidate_folio(struct folio *folio, size_t offset,
- size_t length);
-static bool afs_release_folio(struct folio *folio, gfp_t gfp_flags);

static ssize_t afs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter);
static ssize_t afs_file_splice_read(struct file *in, loff_t *ppos,
@@ -57,8 +54,8 @@ const struct address_space_operations afs_file_aops = {
.readahead = netfs_readahead,
.dirty_folio = afs_dirty_folio,
.launder_folio = afs_launder_folio,
- .release_folio = afs_release_folio,
- .invalidate_folio = afs_invalidate_folio,
+ .release_folio = netfs_release_folio,
+ .invalidate_folio = netfs_invalidate_folio,
.write_begin = afs_write_begin,
.write_end = afs_write_end,
.writepages = afs_writepages,
@@ -67,8 +64,8 @@ const struct address_space_operations afs_file_aops = {

const struct address_space_operations afs_symlink_aops = {
.read_folio = afs_symlink_read_folio,
- .release_folio = afs_release_folio,
- .invalidate_folio = afs_invalidate_folio,
+ .release_folio = netfs_release_folio,
+ .invalidate_folio = netfs_invalidate_folio,
.migrate_folio = filemap_migrate_folio,
};

@@ -405,48 +402,6 @@ int afs_write_inode(struct inode *inode, struct writeback_control *wbc)
return 0;
}

-/*
- * invalidate part or all of a page
- * - release a page and clean up its private data if offset is 0 (indicating
- * the entire page)
- */
-static void afs_invalidate_folio(struct folio *folio, size_t offset,
- size_t length)
-{
- _enter("{%lu},%zu,%zu", folio->index, offset, length);
-
- folio_wait_fscache(folio);
- _leave("");
-}
-
-/*
- * release a page and clean up its private state if it's not busy
- * - return true if the page can now be released, false if not
- */
-static bool afs_release_folio(struct folio *folio, gfp_t gfp)
-{
- struct afs_vnode *vnode = AFS_FS_I(folio_inode(folio));
-
- _enter("{{%llx:%llu}[%lu],%lx},%x",
- vnode->fid.vid, vnode->fid.vnode, folio_index(folio), folio->flags,
- gfp);
-
- /* deny if folio is being written to the cache and the caller hasn't
- * elected to wait */
-#ifdef CONFIG_AFS_FSCACHE
- if (folio_test_fscache(folio)) {
- if (current_is_kswapd() || !(gfp & __GFP_FS))
- return false;
- folio_wait_fscache(folio);
- }
- fscache_note_page_release(afs_vnode_cache(vnode));
-#endif
-
- /* Indicate that the folio can be released */
- _leave(" = T");
- return true;
-}
-
static void afs_add_open_mmap(struct afs_vnode *vnode)
{
if (atomic_inc_return(&vnode->cb_nr_mmap) == 1) {
diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index f4863078f7fe..ced19ff08988 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -160,27 +160,7 @@ static void ceph_invalidate_folio(struct folio *folio, size_t offset,
ceph_put_snap_context(snapc);
}

- folio_wait_fscache(folio);
-}
-
-static bool ceph_release_folio(struct folio *folio, gfp_t gfp)
-{
- struct inode *inode = folio->mapping->host;
-
- dout("%llx:%llx release_folio idx %lu (%sdirty)\n",
- ceph_vinop(inode),
- folio->index, folio_test_dirty(folio) ? "" : "not ");
-
- if (folio_test_private(folio))
- return false;
-
- if (folio_test_fscache(folio)) {
- if (current_is_kswapd() || !(gfp & __GFP_FS))
- return false;
- folio_wait_fscache(folio);
- }
- ceph_fscache_note_page_release(inode);
- return true;
+ netfs_invalidate_folio(folio, offset, length);
}

static void ceph_netfs_expand_readahead(struct netfs_io_request *rreq)
@@ -1563,7 +1543,7 @@ const struct address_space_operations ceph_aops = {
.write_end = ceph_write_end,
.dirty_folio = ceph_dirty_folio,
.invalidate_folio = ceph_invalidate_folio,
- .release_folio = ceph_release_folio,
+ .release_folio = netfs_release_folio,
.direct_IO = noop_direct_IO,
};

diff --git a/fs/netfs/Makefile b/fs/netfs/Makefile
index 386d6fb92793..cd22554d9048 100644
--- a/fs/netfs/Makefile
+++ b/fs/netfs/Makefile
@@ -5,6 +5,7 @@ netfs-y := \
io.o \
iterator.o \
main.o \
+ misc.o \
objects.o

netfs-$(CONFIG_NETFS_STATS) += stats.o
diff --git a/fs/netfs/misc.c b/fs/netfs/misc.c
new file mode 100644
index 000000000000..c3baf2b247d9
--- /dev/null
+++ b/fs/netfs/misc.c
@@ -0,0 +1,51 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Miscellaneous routines.
+ *
+ * Copyright (C) 2022 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells ([email protected])
+ */
+
+#include <linux/swap.h>
+#include "internal.h"
+
+/**
+ * netfs_invalidate_folio - Invalidate or partially invalidate a folio
+ * @folio: Folio proposed for release
+ * @offset: Offset of the invalidated region
+ * @length: Length of the invalidated region
+ *
+ * Invalidate part or all of a folio for a network filesystem. The folio will
+ * be removed afterwards if the invalidated region covers the entire folio.
+ */
+void netfs_invalidate_folio(struct folio *folio, size_t offset, size_t length)
+{
+ _enter("{%lx},%zx,%zx", folio_index(folio), offset, length);
+
+ folio_wait_fscache(folio);
+}
+EXPORT_SYMBOL(netfs_invalidate_folio);
+
+/**
+ * netfs_release_folio - Try to release a folio
+ * @folio: Folio proposed for release
+ * @gfp: Flags qualifying the release
+ *
+ * Request release of a folio and clean up its private state if it's not busy.
+ * Returns true if the folio can now be released, false if not
+ */
+bool netfs_release_folio(struct folio *folio, gfp_t gfp)
+{
+ struct netfs_inode *ctx = netfs_inode(folio_inode(folio));
+
+ if (folio_test_private(folio))
+ return false;
+ if (folio_test_fscache(folio)) {
+ if (current_is_kswapd() || !(gfp & __GFP_FS))
+ return false;
+ folio_wait_fscache(folio);
+ }
+
+ fscache_note_page_release(netfs_i_cookie(ctx));
+ return true;
+}
+EXPORT_SYMBOL(netfs_release_folio);
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index ed64d1034afa..daa431c4148d 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -299,8 +299,10 @@ struct readahead_control;
void netfs_readahead(struct readahead_control *);
int netfs_read_folio(struct file *, struct folio *);
int netfs_write_begin(struct netfs_inode *, struct file *,
- struct address_space *, loff_t pos, unsigned int len,
- struct folio **, void **fsdata);
+ struct address_space *, loff_t pos, unsigned int len,
+ struct folio **, void **fsdata);
+void netfs_invalidate_folio(struct folio *folio, size_t offset, size_t length);
+bool netfs_release_folio(struct folio *folio, gfp_t gfp);

void netfs_subreq_terminated(struct netfs_io_subrequest *, ssize_t, bool);
void netfs_get_subrequest(struct netfs_io_subrequest *subreq,

2023-10-13 16:07:12

by David Howells

Subject: [RFC PATCH 06/53] afs: Don't use folio->private to record partial modification

AFS currently uses folio->private to store the range of bytes within a
folio that have been modified - the idea being that if we have, say, a 2MiB
folio and someone writes a single byte, we only have to write back that
single page and not the whole 2MiB folio - thereby saving on network
bandwidth.

Remove this, at least for now, and accept the extra network load (which
doesn't matter in the common case of writing a whole file at a time from
beginning to end).

This makes folio->private available for netfslib to use.

Signed-off-by: David Howells <[email protected]>
cc: Marc Dionne <[email protected]>
cc: Jeff Layton <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
---
fs/afs/file.c | 67 -------------
fs/afs/internal.h | 56 -----------
fs/afs/write.c | 188 ++++++++-----------------------------
include/trace/events/afs.h | 16 +---
4 files changed, 42 insertions(+), 285 deletions(-)

diff --git a/fs/afs/file.c b/fs/afs/file.c
index d37dd201752b..0c49b3b6f214 100644
--- a/fs/afs/file.c
+++ b/fs/afs/file.c
@@ -405,63 +405,6 @@ int afs_write_inode(struct inode *inode, struct writeback_control *wbc)
return 0;
}

-/*
- * Adjust the dirty region of the page on truncation or full invalidation,
- * getting rid of the markers altogether if the region is entirely invalidated.
- */
-static void afs_invalidate_dirty(struct folio *folio, size_t offset,
- size_t length)
-{
- struct afs_vnode *vnode = AFS_FS_I(folio_inode(folio));
- unsigned long priv;
- unsigned int f, t, end = offset + length;
-
- priv = (unsigned long)folio_get_private(folio);
-
- /* we clean up only if the entire page is being invalidated */
- if (offset == 0 && length == folio_size(folio))
- goto full_invalidate;
-
- /* If the page was dirtied by page_mkwrite(), the PTE stays writable
- * and we don't get another notification to tell us to expand it
- * again.
- */
- if (afs_is_folio_dirty_mmapped(priv))
- return;
-
- /* We may need to shorten the dirty region */
- f = afs_folio_dirty_from(folio, priv);
- t = afs_folio_dirty_to(folio, priv);
-
- if (t <= offset || f >= end)
- return; /* Doesn't overlap */
-
- if (f < offset && t > end)
- return; /* Splits the dirty region - just absorb it */
-
- if (f >= offset && t <= end)
- goto undirty;
-
- if (f < offset)
- t = offset;
- else
- f = end;
- if (f == t)
- goto undirty;
-
- priv = afs_folio_dirty(folio, f, t);
- folio_change_private(folio, (void *)priv);
- trace_afs_folio_dirty(vnode, tracepoint_string("trunc"), folio);
- return;
-
-undirty:
- trace_afs_folio_dirty(vnode, tracepoint_string("undirty"), folio);
- folio_clear_dirty_for_io(folio);
-full_invalidate:
- trace_afs_folio_dirty(vnode, tracepoint_string("inval"), folio);
- folio_detach_private(folio);
-}
-
/*
* invalidate part or all of a page
* - release a page and clean up its private data if offset is 0 (indicating
@@ -472,11 +415,6 @@ static void afs_invalidate_folio(struct folio *folio, size_t offset,
{
_enter("{%lu},%zu,%zu", folio->index, offset, length);

- BUG_ON(!folio_test_locked(folio));
-
- if (folio_get_private(folio))
- afs_invalidate_dirty(folio, offset, length);
-
folio_wait_fscache(folio);
_leave("");
}
@@ -504,11 +442,6 @@ static bool afs_release_folio(struct folio *folio, gfp_t gfp)
fscache_note_page_release(afs_vnode_cache(vnode));
#endif

- if (folio_test_private(folio)) {
- trace_afs_folio_dirty(vnode, tracepoint_string("rel"), folio);
- folio_detach_private(folio);
- }
-
/* Indicate that the folio can be released */
_leave(" = T");
return true;
diff --git a/fs/afs/internal.h b/fs/afs/internal.h
index 469a717467a4..03fed7ecfab9 100644
--- a/fs/afs/internal.h
+++ b/fs/afs/internal.h
@@ -892,62 +892,6 @@ static inline void afs_invalidate_cache(struct afs_vnode *vnode, unsigned int fl
i_size_read(&vnode->netfs.inode), flags);
}

-/*
- * We use folio->private to hold the amount of the folio that we've written to,
- * splitting the field into two parts. However, we need to represent a range
- * 0...FOLIO_SIZE, so we reduce the resolution if the size of the folio
- * exceeds what we can encode.
- */
-#ifdef CONFIG_64BIT
-#define __AFS_FOLIO_PRIV_MASK 0x7fffffffUL
-#define __AFS_FOLIO_PRIV_SHIFT 32
-#define __AFS_FOLIO_PRIV_MMAPPED 0x80000000UL
-#else
-#define __AFS_FOLIO_PRIV_MASK 0x7fffUL
-#define __AFS_FOLIO_PRIV_SHIFT 16
-#define __AFS_FOLIO_PRIV_MMAPPED 0x8000UL
-#endif
-
-static inline unsigned int afs_folio_dirty_resolution(struct folio *folio)
-{
- int shift = folio_shift(folio) - (__AFS_FOLIO_PRIV_SHIFT - 1);
- return (shift > 0) ? shift : 0;
-}
-
-static inline size_t afs_folio_dirty_from(struct folio *folio, unsigned long priv)
-{
- unsigned long x = priv & __AFS_FOLIO_PRIV_MASK;
-
- /* The lower bound is inclusive */
- return x << afs_folio_dirty_resolution(folio);
-}
-
-static inline size_t afs_folio_dirty_to(struct folio *folio, unsigned long priv)
-{
- unsigned long x = (priv >> __AFS_FOLIO_PRIV_SHIFT) & __AFS_FOLIO_PRIV_MASK;
-
- /* The upper bound is immediately beyond the region */
- return (x + 1) << afs_folio_dirty_resolution(folio);
-}
-
-static inline unsigned long afs_folio_dirty(struct folio *folio, size_t from, size_t to)
-{
- unsigned int res = afs_folio_dirty_resolution(folio);
- from >>= res;
- to = (to - 1) >> res;
- return (to << __AFS_FOLIO_PRIV_SHIFT) | from;
-}
-
-static inline unsigned long afs_folio_dirty_mmapped(unsigned long priv)
-{
- return priv | __AFS_FOLIO_PRIV_MMAPPED;
-}
-
-static inline bool afs_is_folio_dirty_mmapped(unsigned long priv)
-{
- return priv & __AFS_FOLIO_PRIV_MMAPPED;
-}
-
#include <trace/events/afs.h>

/*****************************************************************************/
diff --git a/fs/afs/write.c b/fs/afs/write.c
index e1c45341719b..cdb1391ec46e 100644
--- a/fs/afs/write.c
+++ b/fs/afs/write.c
@@ -16,7 +16,8 @@

static int afs_writepages_region(struct address_space *mapping,
struct writeback_control *wbc,
- loff_t start, loff_t end, loff_t *_next,
+ unsigned long long start,
+ unsigned long long end, loff_t *_next,
bool max_one_loop);

static void afs_write_to_cache(struct afs_vnode *vnode, loff_t start, size_t len,
@@ -43,25 +44,6 @@ static void afs_folio_start_fscache(bool caching, struct folio *folio)
}
#endif

-/*
- * Flush out a conflicting write. This may extend the write to the surrounding
- * pages if also dirty and contiguous to the conflicting region..
- */
-static int afs_flush_conflicting_write(struct address_space *mapping,
- struct folio *folio)
-{
- struct writeback_control wbc = {
- .sync_mode = WB_SYNC_ALL,
- .nr_to_write = LONG_MAX,
- .range_start = folio_pos(folio),
- .range_end = LLONG_MAX,
- };
- loff_t next;
-
- return afs_writepages_region(mapping, &wbc, folio_pos(folio), LLONG_MAX,
- &next, true);
-}
-
/*
* prepare to perform part of a write to a page
*/
@@ -71,10 +53,6 @@ int afs_write_begin(struct file *file, struct address_space *mapping,
{
struct afs_vnode *vnode = AFS_FS_I(file_inode(file));
struct folio *folio;
- unsigned long priv;
- unsigned f, from;
- unsigned t, to;
- pgoff_t index;
int ret;

_enter("{%llx:%llu},%llx,%x",
@@ -88,49 +66,20 @@ int afs_write_begin(struct file *file, struct address_space *mapping,
if (ret < 0)
return ret;

- index = folio_index(folio);
- from = pos - index * PAGE_SIZE;
- to = from + len;
-
try_again:
/* See if this page is already partially written in a way that we can
* merge the new write with.
*/
- if (folio_test_private(folio)) {
- priv = (unsigned long)folio_get_private(folio);
- f = afs_folio_dirty_from(folio, priv);
- t = afs_folio_dirty_to(folio, priv);
- ASSERTCMP(f, <=, t);
-
- if (folio_test_writeback(folio)) {
- trace_afs_folio_dirty(vnode, tracepoint_string("alrdy"), folio);
- folio_unlock(folio);
- goto wait_for_writeback;
- }
- /* If the file is being filled locally, allow inter-write
- * spaces to be merged into writes. If it's not, only write
- * back what the user gives us.
- */
- if (!test_bit(AFS_VNODE_NEW_CONTENT, &vnode->flags) &&
- (to < f || from > t))
- goto flush_conflicting_write;
+ if (folio_test_writeback(folio)) {
+ trace_afs_folio_dirty(vnode, tracepoint_string("alrdy"), folio);
+ folio_unlock(folio);
+ goto wait_for_writeback;
}

*_page = folio_file_page(folio, pos / PAGE_SIZE);
_leave(" = 0");
return 0;

- /* The previous write and this write aren't adjacent or overlapping, so
- * flush the page out.
- */
-flush_conflicting_write:
- trace_afs_folio_dirty(vnode, tracepoint_string("confl"), folio);
- folio_unlock(folio);
-
- ret = afs_flush_conflicting_write(mapping, folio);
- if (ret < 0)
- goto error;
-
wait_for_writeback:
ret = folio_wait_writeback_killable(folio);
if (ret < 0)
@@ -156,9 +105,6 @@ int afs_write_end(struct file *file, struct address_space *mapping,
{
struct folio *folio = page_folio(subpage);
struct afs_vnode *vnode = AFS_FS_I(file_inode(file));
- unsigned long priv;
- unsigned int f, from = offset_in_folio(folio, pos);
- unsigned int t, to = from + copied;
loff_t i_size, write_end_pos;

_enter("{%llx:%llu},{%lx}",
@@ -188,23 +134,6 @@ int afs_write_end(struct file *file, struct address_space *mapping,
fscache_update_cookie(afs_vnode_cache(vnode), NULL, &write_end_pos);
}

- if (folio_test_private(folio)) {
- priv = (unsigned long)folio_get_private(folio);
- f = afs_folio_dirty_from(folio, priv);
- t = afs_folio_dirty_to(folio, priv);
- if (from < f)
- f = from;
- if (to > t)
- t = to;
- priv = afs_folio_dirty(folio, f, t);
- folio_change_private(folio, (void *)priv);
- trace_afs_folio_dirty(vnode, tracepoint_string("dirty+"), folio);
- } else {
- priv = afs_folio_dirty(folio, from, to);
- folio_attach_private(folio, (void *)priv);
- trace_afs_folio_dirty(vnode, tracepoint_string("dirty"), folio);
- }
-
if (folio_mark_dirty(folio))
_debug("dirtied %lx", folio_index(folio));

@@ -309,7 +238,6 @@ static void afs_pages_written_back(struct afs_vnode *vnode, loff_t start, unsign
}

trace_afs_folio_dirty(vnode, tracepoint_string("clear"), folio);
- folio_detach_private(folio);
folio_end_writeback(folio);
}

@@ -463,17 +391,12 @@ static void afs_extend_writeback(struct address_space *mapping,
long *_count,
loff_t start,
loff_t max_len,
- bool new_content,
bool caching,
- unsigned int *_len)
+ size_t *_len)
{
struct folio_batch fbatch;
struct folio *folio;
- unsigned long priv;
- unsigned int psize, filler = 0;
- unsigned int f, t;
- loff_t len = *_len;
- pgoff_t index = (start + len) / PAGE_SIZE;
+ pgoff_t index = (start + *_len) / PAGE_SIZE;
bool stop = true;
unsigned int i;

@@ -501,7 +424,7 @@ static void afs_extend_writeback(struct address_space *mapping,
continue;
}

- /* Has the page moved or been split? */
+ /* Has the folio moved or been split? */
if (unlikely(folio != xas_reload(&xas))) {
folio_put(folio);
break;
@@ -519,24 +442,13 @@ static void afs_extend_writeback(struct address_space *mapping,
break;
}

- psize = folio_size(folio);
- priv = (unsigned long)folio_get_private(folio);
- f = afs_folio_dirty_from(folio, priv);
- t = afs_folio_dirty_to(folio, priv);
- if (f != 0 && !new_content) {
- folio_unlock(folio);
- folio_put(folio);
- break;
- }
-
- len += filler + t;
- filler = psize - t;
- if (len >= max_len || *_count <= 0)
+ index += folio_nr_pages(folio);
+ *_count -= folio_nr_pages(folio);
+ *_len += folio_size(folio);
+ stop = false;
+ if (*_len >= max_len || *_count <= 0)
stop = true;
- else if (t == psize || new_content)
- stop = false;

- index += folio_nr_pages(folio);
if (!folio_batch_add(&fbatch, folio))
break;
if (stop)
@@ -562,16 +474,12 @@ static void afs_extend_writeback(struct address_space *mapping,
if (folio_start_writeback(folio))
BUG();
afs_folio_start_fscache(caching, folio);
-
- *_count -= folio_nr_pages(folio);
folio_unlock(folio);
}

folio_batch_release(&fbatch);
cond_resched();
} while (!stop);
-
- *_len = len;
}

/*
@@ -581,14 +489,13 @@ static void afs_extend_writeback(struct address_space *mapping,
static ssize_t afs_write_back_from_locked_folio(struct address_space *mapping,
struct writeback_control *wbc,
struct folio *folio,
- loff_t start, loff_t end)
+ unsigned long long start,
+ unsigned long long end)
{
struct afs_vnode *vnode = AFS_FS_I(mapping->host);
struct iov_iter iter;
- unsigned long priv;
- unsigned int offset, to, len, max_len;
- loff_t i_size = i_size_read(&vnode->netfs.inode);
- bool new_content = test_bit(AFS_VNODE_NEW_CONTENT, &vnode->flags);
+ unsigned long long i_size = i_size_read(&vnode->netfs.inode);
+ size_t len, max_len;
bool caching = fscache_cookie_enabled(afs_vnode_cache(vnode));
long count = wbc->nr_to_write;
int ret;
@@ -606,13 +513,9 @@ static ssize_t afs_write_back_from_locked_folio(struct address_space *mapping,
* immediately lockable, is not dirty or is missing, or we reach the
* end of the range.
*/
- priv = (unsigned long)folio_get_private(folio);
- offset = afs_folio_dirty_from(folio, priv);
- to = afs_folio_dirty_to(folio, priv);
trace_afs_folio_dirty(vnode, tracepoint_string("store"), folio);

- len = to - offset;
- start += offset;
+ len = folio_size(folio);
if (start < i_size) {
/* Trim the write to the EOF; the extra data is ignored. Also
* put an upper limit on the size of a single storedata op.
@@ -621,12 +524,10 @@ static ssize_t afs_write_back_from_locked_folio(struct address_space *mapping,
max_len = min_t(unsigned long long, max_len, end - start + 1);
max_len = min_t(unsigned long long, max_len, i_size - start);

- if (len < max_len &&
- (to == folio_size(folio) || new_content))
+ if (len < max_len)
afs_extend_writeback(mapping, vnode, &count,
- start, max_len, new_content,
- caching, &len);
- len = min_t(loff_t, len, max_len);
+ start, max_len, caching, &len);
+ len = min_t(unsigned long long, len, i_size - start);
}

/* We now have a contiguous set of dirty pages, each with writeback
@@ -636,7 +537,7 @@ static ssize_t afs_write_back_from_locked_folio(struct address_space *mapping,
folio_unlock(folio);

if (start < i_size) {
- _debug("write back %x @%llx [%llx]", len, start, i_size);
+ _debug("write back %zx @%llx [%llx]", len, start, i_size);

/* Speculatively write to the cache. We have to fix this up
* later if the store fails.
@@ -646,7 +547,7 @@ static ssize_t afs_write_back_from_locked_folio(struct address_space *mapping,
iov_iter_xarray(&iter, ITER_SOURCE, &mapping->i_pages, start, len);
ret = afs_store_data(vnode, &iter, start, false);
} else {
- _debug("write discard %x @%llx [%llx]", len, start, i_size);
+ _debug("write discard %zx @%llx [%llx]", len, start, i_size);

/* The dirty region was entirely beyond the EOF. */
fscache_clear_page_bits(mapping, start, len, caching);
@@ -702,7 +603,8 @@ static ssize_t afs_write_back_from_locked_folio(struct address_space *mapping,
*/
static int afs_writepages_region(struct address_space *mapping,
struct writeback_control *wbc,
- loff_t start, loff_t end, loff_t *_next,
+ unsigned long long start,
+ unsigned long long end, loff_t *_next,
bool max_one_loop)
{
struct folio *folio;
@@ -914,7 +816,6 @@ vm_fault_t afs_page_mkwrite(struct vm_fault *vmf)
struct inode *inode = file_inode(file);
struct afs_vnode *vnode = AFS_FS_I(inode);
struct afs_file *af = file->private_data;
- unsigned long priv;
vm_fault_t ret = VM_FAULT_RETRY;

_enter("{{%llx:%llu}},{%lx}", vnode->fid.vid, vnode->fid.vnode, folio_index(folio));
@@ -938,24 +839,15 @@ vm_fault_t afs_page_mkwrite(struct vm_fault *vmf)
if (folio_lock_killable(folio) < 0)
goto out;

- /* We mustn't change folio->private until writeback is complete as that
- * details the portion of the page we need to write back and we might
- * need to redirty the page if there's a problem.
- */
if (folio_wait_writeback_killable(folio) < 0) {
folio_unlock(folio);
goto out;
}

- priv = afs_folio_dirty(folio, 0, folio_size(folio));
- priv = afs_folio_dirty_mmapped(priv);
- if (folio_test_private(folio)) {
- folio_change_private(folio, (void *)priv);
+ if (folio_test_dirty(folio))
trace_afs_folio_dirty(vnode, tracepoint_string("mkwrite+"), folio);
- } else {
- folio_attach_private(folio, (void *)priv);
+ else
trace_afs_folio_dirty(vnode, tracepoint_string("mkwrite"), folio);
- }
file_update_time(file);

ret = VM_FAULT_LOCKED;
@@ -1000,30 +892,26 @@ int afs_launder_folio(struct folio *folio)
struct afs_vnode *vnode = AFS_FS_I(folio_inode(folio));
struct iov_iter iter;
struct bio_vec bv;
- unsigned long priv;
- unsigned int f, t;
+ unsigned long long fend, i_size = vnode->netfs.inode.i_size;
+ size_t len;
int ret = 0;

_enter("{%lx}", folio->index);

- priv = (unsigned long)folio_get_private(folio);
- if (folio_clear_dirty_for_io(folio)) {
- f = 0;
- t = folio_size(folio);
- if (folio_test_private(folio)) {
- f = afs_folio_dirty_from(folio, priv);
- t = afs_folio_dirty_to(folio, priv);
- }
+ if (folio_clear_dirty_for_io(folio) && folio_pos(folio) < i_size) {
+ len = folio_size(folio);
+ fend = folio_pos(folio) + len;
+ if (i_size < fend)
+ len = i_size - folio_pos(folio);

- bvec_set_folio(&bv, folio, t - f, f);
- iov_iter_bvec(&iter, ITER_SOURCE, &bv, 1, bv.bv_len);
+ bvec_set_folio(&bv, folio, len, 0);
+ iov_iter_bvec(&iter, WRITE, &bv, 1, len);

trace_afs_folio_dirty(vnode, tracepoint_string("launder"), folio);
- ret = afs_store_data(vnode, &iter, folio_pos(folio) + f, true);
+ ret = afs_store_data(vnode, &iter, folio_pos(folio), true);
}

trace_afs_folio_dirty(vnode, tracepoint_string("laundered"), folio);
- folio_detach_private(folio);
folio_wait_fscache(folio);
return ret;
}
diff --git a/include/trace/events/afs.h b/include/trace/events/afs.h
index 597677acc6b1..08506680350c 100644
--- a/include/trace/events/afs.h
+++ b/include/trace/events/afs.h
@@ -846,26 +846,18 @@ TRACE_EVENT(afs_folio_dirty,
__field(struct afs_vnode *, vnode)
__field(const char *, where)
__field(pgoff_t, index)
- __field(unsigned long, from)
- __field(unsigned long, to)
+ __field(size_t, size)
),

TP_fast_assign(
- unsigned long priv = (unsigned long)folio_get_private(folio);
__entry->vnode = vnode;
__entry->where = where;
__entry->index = folio_index(folio);
- __entry->from = afs_folio_dirty_from(folio, priv);
- __entry->to = afs_folio_dirty_to(folio, priv);
- __entry->to |= (afs_is_folio_dirty_mmapped(priv) ?
- (1UL << (BITS_PER_LONG - 1)) : 0);
+ __entry->size = folio_size(folio);
),

- TP_printk("vn=%p %lx %s %lx-%lx%s",
- __entry->vnode, __entry->index, __entry->where,
- __entry->from,
- __entry->to & ~(1UL << (BITS_PER_LONG - 1)),
- __entry->to & (1UL << (BITS_PER_LONG - 1)) ? " M" : "")
+ TP_printk("vn=%p ix=%05lx s=%05lx %s",
+ __entry->vnode, __entry->index, __entry->size, __entry->where)
);

TRACE_EVENT(afs_call_state,

2023-10-13 16:07:17

by David Howells

Subject: [RFC PATCH 08/53] netfs: Add rsize to netfs_io_request

Add an rsize parameter to netfs_io_request to be filled in by the network
filesystem when the request is initialised. This indicates the maximum
size of a read request that the netfs will honour in that region.

Signed-off-by: David Howells <[email protected]>
cc: Jeff Layton <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
---
fs/afs/file.c | 1 +
fs/ceph/addr.c | 2 ++
include/linux/netfs.h | 1 +
3 files changed, 4 insertions(+)

diff --git a/fs/afs/file.c b/fs/afs/file.c
index 3fea5cd8ef13..3d2e1913ea27 100644
--- a/fs/afs/file.c
+++ b/fs/afs/file.c
@@ -360,6 +360,7 @@ static int afs_symlink_read_folio(struct file *file, struct folio *folio)
static int afs_init_request(struct netfs_io_request *rreq, struct file *file)
{
rreq->netfs_priv = key_get(afs_file_key(file));
+ rreq->rsize = 4 * 1024 * 1024;
return 0;
}

diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index ced19ff08988..92a5ddcd9a76 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -419,6 +419,8 @@ static int ceph_init_request(struct netfs_io_request *rreq, struct file *file)
struct ceph_netfs_request_data *priv;
int ret = 0;

+ rreq->rsize = 1024 * 1024;
+
if (rreq->origin != NETFS_READAHEAD)
return 0;

diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index daa431c4148d..02e888c170da 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -188,6 +188,7 @@ struct netfs_io_request {
struct list_head subrequests; /* Contributory I/O operations */
void *netfs_priv; /* Private data for the netfs */
unsigned int debug_id;
+ unsigned int rsize; /* Maximum read size (0 for none) */
atomic_t nr_outstanding; /* Number of ops in progress */
atomic_t nr_copy_ops; /* Number of copy-to-cache ops in progress */
size_t submitted; /* Amount submitted for I/O so far */

2023-10-13 16:07:21

by David Howells

Subject: [RFC PATCH 09/53] netfs: Implement unbuffered/DIO vs buffered I/O locking

Borrow NFS's direct-vs-buffered I/O locking into netfslib. Similar code is
also used in Ceph.

Modify it to have the correct checker annotations for i_rwsem lock
acquisition/release and to return -ERESTARTSYS if waits are interrupted.

Signed-off-by: David Howells <[email protected]>
cc: Jeff Layton <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
---
fs/netfs/Makefile | 1 +
fs/netfs/locking.c | 209 ++++++++++++++++++++++++++++++++++++++++++
include/linux/netfs.h | 10 ++
3 files changed, 220 insertions(+)
create mode 100644 fs/netfs/locking.c

diff --git a/fs/netfs/Makefile b/fs/netfs/Makefile
index cd22554d9048..647ce1935674 100644
--- a/fs/netfs/Makefile
+++ b/fs/netfs/Makefile
@@ -4,6 +4,7 @@ netfs-y := \
buffered_read.o \
io.o \
iterator.o \
+ locking.o \
main.o \
misc.o \
objects.o
diff --git a/fs/netfs/locking.c b/fs/netfs/locking.c
new file mode 100644
index 000000000000..fecca8ea6322
--- /dev/null
+++ b/fs/netfs/locking.c
@@ -0,0 +1,209 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * I/O and data path helper functionality.
+ *
+ * Borrowed from NFS Copyright (c) 2016 Trond Myklebust
+ */
+
+#include <linux/kernel.h>
+#include <linux/netfs.h>
+
+/*
+ * inode_dio_wait_interruptible - wait for outstanding DIO requests to finish
+ * @inode: inode to wait for
+ *
+ * Waits for all pending direct I/O requests to finish so that we can
+ * proceed with a truncate or equivalent operation.
+ *
+ * Must be called under a lock that serializes taking new references
+ * to i_dio_count, usually by inode->i_mutex.
+ */
+static int inode_dio_wait_interruptible(struct inode *inode)
+{
+ if (!atomic_read(&inode->i_dio_count))
+ return 0;
+
+ wait_queue_head_t *wq = bit_waitqueue(&inode->i_state, __I_DIO_WAKEUP);
+ DEFINE_WAIT_BIT(q, &inode->i_state, __I_DIO_WAKEUP);
+
+ for (;;) {
+ prepare_to_wait(wq, &q.wq_entry, TASK_INTERRUPTIBLE);
+ if (!atomic_read(&inode->i_dio_count))
+ break;
+ if (signal_pending(current))
+ break;
+ schedule();
+ }
+ finish_wait(wq, &q.wq_entry);
+
+ return atomic_read(&inode->i_dio_count) ? -ERESTARTSYS : 0;
+}
+
+/* Call with exclusively locked inode->i_rwsem */
+static int netfs_block_o_direct(struct netfs_inode *ictx)
+{
+ if (!test_bit(NETFS_ICTX_ODIRECT, &ictx->flags))
+ return 0;
+ clear_bit(NETFS_ICTX_ODIRECT, &ictx->flags);
+ return inode_dio_wait_interruptible(&ictx->inode);
+}
+
+/**
+ * netfs_start_io_read - declare the file is being used for buffered reads
+ * @inode: file inode
+ *
+ * Declare that a buffered read operation is about to start, and ensure
+ * that we block all direct I/O.
+ * On exit, the function ensures that the NETFS_ICTX_ODIRECT flag is unset,
+ * and holds a shared lock on inode->i_rwsem to ensure that the flag
+ * cannot be changed.
+ * In practice, this means that buffered read operations are allowed to
+ * execute in parallel, thanks to the shared lock, whereas direct I/O
+ * operations need to wait to grab an exclusive lock in order to set
+ * NETFS_ICTX_ODIRECT.
+ * Note that buffered writes and truncates both take a write lock on
+ * inode->i_rwsem, meaning that those are serialised w.r.t. the reads.
+ */
+int netfs_start_io_read(struct inode *inode)
+ __acquires(inode->i_rwsem)
+{
+ struct netfs_inode *ictx = netfs_inode(inode);
+
+ /* Be an optimist! */
+ if (down_read_interruptible(&inode->i_rwsem) < 0)
+ return -ERESTARTSYS;
+ if (test_bit(NETFS_ICTX_ODIRECT, &ictx->flags) == 0)
+ return 0;
+ up_read(&inode->i_rwsem);
+
+ /* Slow path.... */
+ if (down_write_killable(&inode->i_rwsem) < 0)
+ return -ERESTARTSYS;
+ if (netfs_block_o_direct(ictx) < 0) {
+ up_write(&inode->i_rwsem);
+ return -ERESTARTSYS;
+ }
+ downgrade_write(&inode->i_rwsem);
+ return 0;
+}
+
+/**
+ * netfs_end_io_read - declare that the buffered read operation is done
+ * @inode: file inode
+ *
+ * Declare that a buffered read operation is done, and release the shared
+ * lock on inode->i_rwsem.
+ */
+void netfs_end_io_read(struct inode *inode)
+ __releases(inode->i_rwsem)
+{
+ up_read(&inode->i_rwsem);
+}
+
+/**
+ * netfs_start_io_write - declare the file is being used for buffered writes
+ * @inode: file inode
+ *
+ * Declare that a buffered write operation is about to start, and ensure
+ * that we block all direct I/O.
+ */
+int netfs_start_io_write(struct inode *inode)
+ __acquires(inode->i_rwsem)
+{
+ struct netfs_inode *ictx = netfs_inode(inode);
+
+ if (down_write_killable(&inode->i_rwsem) < 0)
+ return -ERESTARTSYS;
+ if (netfs_block_o_direct(ictx) < 0) {
+ up_write(&inode->i_rwsem);
+ return -ERESTARTSYS;
+ }
+ return 0;
+}
+
+/**
+ * netfs_end_io_write - declare that the buffered write operation is done
+ * @inode: file inode
+ *
+ * Declare that a buffered write operation is done, and release the
+ * lock on inode->i_rwsem.
+ */
+void netfs_end_io_write(struct inode *inode)
+ __releases(inode->i_rwsem)
+{
+ up_write(&inode->i_rwsem);
+}
+
+/* Call with exclusively locked inode->i_rwsem */
+static int netfs_block_buffered(struct inode *inode)
+{
+ struct netfs_inode *ictx = netfs_inode(inode);
+ int ret;
+
+ if (!test_bit(NETFS_ICTX_ODIRECT, &ictx->flags)) {
+ set_bit(NETFS_ICTX_ODIRECT, &ictx->flags);
+ if (inode->i_mapping->nrpages != 0) {
+ unmap_mapping_range(inode->i_mapping, 0, 0, 0);
+ ret = filemap_fdatawait(inode->i_mapping);
+ if (ret < 0) {
+ clear_bit(NETFS_ICTX_ODIRECT, &ictx->flags);
+ return ret;
+ }
+ }
+ }
+ return 0;
+}
+
+/**
+ * netfs_start_io_direct - declare the file is being used for direct i/o
+ * @inode: file inode
+ *
+ * Declare that a direct I/O operation is about to start, and ensure
+ * that we block all buffered I/O.
+ * On exit, the function ensures that the NETFS_ICTX_ODIRECT flag is set,
+ * and holds a shared lock on inode->i_rwsem to ensure that the flag
+ * cannot be changed.
+ * In practice, this means that direct I/O operations are allowed to
+ * execute in parallel, thanks to the shared lock, whereas buffered I/O
+ * operations need to wait to grab an exclusive lock in order to clear
+ * NETFS_ICTX_ODIRECT.
+ * Note that buffered writes and truncates both take a write lock on
+ * inode->i_rwsem, meaning that those are serialised w.r.t. O_DIRECT.
+ */
+int netfs_start_io_direct(struct inode *inode)
+ __acquires(inode->i_rwsem)
+{
+ struct netfs_inode *ictx = netfs_inode(inode);
+ int ret;
+
+ /* Be an optimist! */
+ if (down_read_interruptible(&inode->i_rwsem) < 0)
+ return -ERESTARTSYS;
+ if (test_bit(NETFS_ICTX_ODIRECT, &ictx->flags) != 0)
+ return 0;
+ up_read(&inode->i_rwsem);
+
+ /* Slow path.... */
+ if (down_write_killable(&inode->i_rwsem) < 0)
+ return -ERESTARTSYS;
+ ret = netfs_block_buffered(inode);
+ if (ret < 0) {
+ up_write(&inode->i_rwsem);
+ return ret;
+ }
+ downgrade_write(&inode->i_rwsem);
+ return 0;
+}
+
+/**
+ * netfs_end_io_direct - declare that the direct i/o operation is done
+ * @inode: file inode
+ *
+ * Declare that a direct I/O operation is done, and release the shared
+ * lock on inode->i_rwsem.
+ */
+void netfs_end_io_direct(struct inode *inode)
+ __releases(inode->i_rwsem)
+{
+ up_read(&inode->i_rwsem);
+}
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index 02e888c170da..33d4487a91e9 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -131,6 +131,8 @@ struct netfs_inode {
loff_t remote_i_size; /* Size of the remote file */
loff_t zero_point; /* Size after which we assume there's no data
* on the server */
+ unsigned long flags;
+#define NETFS_ICTX_ODIRECT 0 /* The file has DIO in progress */
};

/*
@@ -315,6 +317,13 @@ ssize_t netfs_extract_user_iter(struct iov_iter *orig, size_t orig_len,
struct iov_iter *new,
iov_iter_extraction_t extraction_flags);

+int netfs_start_io_read(struct inode *inode);
+void netfs_end_io_read(struct inode *inode);
+int netfs_start_io_write(struct inode *inode);
+void netfs_end_io_write(struct inode *inode);
+int netfs_start_io_direct(struct inode *inode);
+void netfs_end_io_direct(struct inode *inode);
+
/**
* netfs_inode - Get the netfs inode context from the inode
* @inode: The inode to query
@@ -341,6 +350,7 @@ static inline void netfs_inode_init(struct netfs_inode *ctx,
ctx->ops = ops;
ctx->remote_i_size = i_size_read(&ctx->inode);
ctx->zero_point = ctx->remote_i_size;
+ ctx->flags = 0;
#if IS_ENABLED(CONFIG_FSCACHE)
ctx->cache = NULL;
#endif
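
As a usage sketch: a network filesystem's ->read_iter() might take these
locks as below; the filesystem name is illustrative, and later patches in
this series wire this up properly for AFS and CIFS:

static ssize_t myfs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
{
	struct inode *inode = file_inode(iocb->ki_filp);
	ssize_t ret;

	if (iocb->ki_flags & IOCB_DIRECT)
		ret = netfs_start_io_direct(inode);
	else
		ret = netfs_start_io_read(inode);
	if (ret < 0)
		return ret;

	ret = generic_file_read_iter(iocb, iter);

	if (iocb->ki_flags & IOCB_DIRECT)
		netfs_end_io_direct(inode);
	else
		netfs_end_io_read(inode);
	return ret;
}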

2023-10-13 16:07:53

by David Howells

Subject: [RFC PATCH 14/53] netfs: Add func to calculate pagecount/size-limited span of an iterator

Add a function to work out how much of an ITER_BVEC or ITER_XARRAY iterator
we can use in a pagecount-limited and size-limited span. This will be
used, for example, to limit the number of segments in a subrequest to the
maximum number of elements that an RDMA transfer can handle.

Signed-off-by: David Howells <[email protected]>
cc: Jeff Layton <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
---
fs/netfs/iterator.c | 97 +++++++++++++++++++++++++++++++++++++++++++
include/linux/netfs.h | 2 +
2 files changed, 99 insertions(+)

diff --git a/fs/netfs/iterator.c b/fs/netfs/iterator.c
index 2ff07ba655a0..b781bbbf1d8d 100644
--- a/fs/netfs/iterator.c
+++ b/fs/netfs/iterator.c
@@ -101,3 +101,100 @@ ssize_t netfs_extract_user_iter(struct iov_iter *orig, size_t orig_len,
return npages;
}
EXPORT_SYMBOL_GPL(netfs_extract_user_iter);
+
+/*
+ * Select the span of a bvec iterator we're going to use. Limit it by both maximum
+ * size and maximum number of segments. Returns the size of the span in bytes.
+ */
+static size_t netfs_limit_bvec(const struct iov_iter *iter, size_t start_offset,
+ size_t max_size, size_t max_segs)
+{
+ const struct bio_vec *bvecs = iter->bvec;
+ unsigned int nbv = iter->nr_segs, ix = 0, nsegs = 0;
+ size_t len, span = 0, n = iter->count;
+ size_t skip = iter->iov_offset + start_offset;
+
+ if (WARN_ON(!iov_iter_is_bvec(iter)) ||
+ WARN_ON(start_offset > n) ||
+ n == 0)
+ return 0;
+
+ while (n && ix < nbv && skip) {
+ len = bvecs[ix].bv_len;
+ if (skip < len)
+ break;
+ skip -= len;
+ n -= len;
+ ix++;
+ }
+
+ while (n && ix < nbv) {
+ len = min3(n, bvecs[ix].bv_len - skip, max_size);
+ span += len;
+ nsegs++;
+ ix++;
+ if (span >= max_size || nsegs >= max_segs)
+ break;
+ skip = 0;
+ n -= len;
+ }
+
+ return min(span, max_size);
+}
+
+/*
+ * Select the span of an xarray iterator we're going to use. Limit it by both
+ * maximum size and maximum number of segments. It is assumed that segments
+ * can be larger than a page in size, provided they're physically contiguous.
+ * Returns the size of the span in bytes.
+ */
+static size_t netfs_limit_xarray(const struct iov_iter *iter, size_t start_offset,
+ size_t max_size, size_t max_segs)
+{
+ struct folio *folio;
+ unsigned int nsegs = 0;
+ loff_t pos = iter->xarray_start + iter->iov_offset;
+ pgoff_t index = pos / PAGE_SIZE;
+ size_t span = 0, n = iter->count;
+
+ XA_STATE(xas, iter->xarray, index);
+
+ if (WARN_ON(!iov_iter_is_xarray(iter)) ||
+ WARN_ON(start_offset > n) ||
+ n == 0)
+ return 0;
+ max_size = min(max_size, n - start_offset);
+
+ rcu_read_lock();
+ xas_for_each(&xas, folio, ULONG_MAX) {
+ size_t offset, flen, len;
+ if (xas_retry(&xas, folio))
+ continue;
+ if (WARN_ON(xa_is_value(folio)))
+ break;
+ if (WARN_ON(folio_test_hugetlb(folio)))
+ break;
+
+ flen = folio_size(folio);
+ offset = offset_in_folio(folio, pos);
+ len = min(max_size, flen - offset);
+ span += len;
+ nsegs++;
+ if (span >= max_size || nsegs >= max_segs)
+ break;
+ }
+
+ rcu_read_unlock();
+ return min(span, max_size);
+}
+
+size_t netfs_limit_iter(const struct iov_iter *iter, size_t start_offset,
+ size_t max_size, size_t max_segs)
+{
+ if (iov_iter_is_bvec(iter))
+ return netfs_limit_bvec(iter, start_offset, max_size, max_segs);
+ if (iov_iter_is_xarray(iter))
+ return netfs_limit_xarray(iter, start_offset, max_size, max_segs);
+ BUG();
+}
+EXPORT_SYMBOL(netfs_limit_iter);
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index a7220e906287..2b5e04ea4db2 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -328,6 +328,8 @@ void netfs_stats_show(struct seq_file *);
ssize_t netfs_extract_user_iter(struct iov_iter *orig, size_t orig_len,
struct iov_iter *new,
iov_iter_extraction_t extraction_flags);
+size_t netfs_limit_iter(const struct iov_iter *iter, size_t start_offset,
+ size_t max_size, size_t max_segs);

int netfs_start_io_read(struct inode *inode);
void netfs_end_io_read(struct inode *inode);
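
As a sketch of the intended use: when carving a subrequest out of a
request's iterator, the span can be clamped by both size and segment count.
The segment limit here is illustrative (e.g. an RDMA device's
scatter-gather element limit), and the io_iter field comes from an earlier
patch in this series:

	size_t max_segs = 16;
	size_t span;

	span = netfs_limit_iter(&rreq->io_iter, 0, subreq->len, max_segs);
	subreq->len = span;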

2023-10-13 16:11:01

by David Howells

Subject: [RFC PATCH 34/53] netfs: Make netfs_skip_folio_read() take account of blocksize

Make netfs_skip_folio_read() take account of blocksize such as crypto
blocksize. For example, assuming a 4KiB minimum block size, a 100-byte
write at file position 5000 covers the block span [4096, 8192), so the
read can only be skipped if that whole span is being written or lies at
or beyond the EOF; with the default min_bsize of 1, the span degenerates
to the folio itself, preserving the old behaviour.

Signed-off-by: David Howells <[email protected]>
cc: Jeff Layton <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
---
fs/netfs/buffered_read.c | 32 +++++++++++++++++++++-----------
1 file changed, 21 insertions(+), 11 deletions(-)

diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
index e06461ef0bfa..de696aaaefbd 100644
--- a/fs/netfs/buffered_read.c
+++ b/fs/netfs/buffered_read.c
@@ -337,6 +337,7 @@ EXPORT_SYMBOL(netfs_read_folio);

/*
* Prepare a folio for writing without reading first
+ * @ctx: File context
* @folio: The folio being prepared
* @pos: starting position for the write
* @len: length of write
@@ -350,32 +351,41 @@ EXPORT_SYMBOL(netfs_read_folio);
* If any of these criteria are met, then zero out the unwritten parts
* of the folio and return true. Otherwise, return false.
*/
-static bool netfs_skip_folio_read(struct folio *folio, loff_t pos, size_t len,
- bool always_fill)
+static bool netfs_skip_folio_read(struct netfs_inode *ctx, struct folio *folio,
+ loff_t pos, size_t len, bool always_fill)
{
struct inode *inode = folio_inode(folio);
- loff_t i_size = i_size_read(inode);
+ loff_t i_size = i_size_read(inode), low, high;
size_t offset = offset_in_folio(folio, pos);
size_t plen = folio_size(folio);
+ size_t min_bsize = 1UL << ctx->min_bshift;
+
+ if (likely(min_bsize == 1)) {
+ low = folio_file_pos(folio);
+ high = low + plen;
+ } else {
+ low = round_down(pos, min_bsize);
+ high = round_up(pos + len, min_bsize);
+ }

if (unlikely(always_fill)) {
- if (pos - offset + len <= i_size)
- return false; /* Page entirely before EOF */
+ if (low < i_size)
+ return false; /* Some part of the block before EOF */
zero_user_segment(&folio->page, 0, plen);
folio_mark_uptodate(folio);
return true;
}

- /* Full folio write */
- if (offset == 0 && len >= plen)
+ /* Full page write */
+ if (pos == low && high == pos + len)
return true;

- /* Page entirely beyond the end of the file */
- if (pos - offset >= i_size)
+ /* pos beyond last page in the file */
+ if (low >= i_size)
goto zero_out;

/* Write that covers from the start of the folio to EOF or beyond */
- if (offset == 0 && (pos + len) >= i_size)
+ if (pos == low && (pos + len) >= i_size)
goto zero_out;

return false;
@@ -454,7 +464,7 @@ int netfs_write_begin(struct netfs_inode *ctx,
* to preload the granule.
*/
if (!netfs_is_cache_enabled(ctx) &&
- netfs_skip_folio_read(folio, pos, len, false)) {
+ netfs_skip_folio_read(ctx, folio, pos, len, false)) {
netfs_stat(&netfs_n_rh_write_zskip);
goto have_folio_no_wait;
}

2023-10-13 16:13:00

by David Howells

Subject: [RFC PATCH 51/53] cifs: Remove some code that's no longer used, part 1

Remove some code that was #if'd out with the netfslib conversion. This is
split into parts for file.c as the diff generator otherwise produces a
hard-to-read diff for part of it where a big chunk is cut out.

Signed-off-by: David Howells <[email protected]>
cc: Steve French <[email protected]>
cc: Shyam Prasad N <[email protected]>
cc: Rohith Surabattula <[email protected]>
cc: Jeff Layton <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
---
fs/smb/client/cifsglob.h | 12 -
fs/smb/client/cifsproto.h | 21 --
fs/smb/client/file.c | 639 --------------------------------------
fs/smb/client/fscache.c | 111 -------
fs/smb/client/fscache.h | 58 ----
5 files changed, 841 deletions(-)

diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
index a5e114eeeb8b..01ea1206ec7e 100644
--- a/fs/smb/client/cifsglob.h
+++ b/fs/smb/client/cifsglob.h
@@ -1443,18 +1443,6 @@ struct cifs_io_subrequest {
struct smbd_mr *mr;
#endif
struct cifs_credits credits;
-
-#if 0 // TODO: Remove following elements
- struct list_head list;
- struct completion done;
- struct work_struct work;
- struct cifsFileInfo *cfile;
- struct address_space *mapping;
- struct cifs_aio_ctx *ctx;
- enum writeback_sync_modes sync_mode;
- bool uncached;
- struct bio_vec *bv;
-#endif
};

/*
diff --git a/fs/smb/client/cifsproto.h b/fs/smb/client/cifsproto.h
index 52ff5e889af2..25985b56cd7f 100644
--- a/fs/smb/client/cifsproto.h
+++ b/fs/smb/client/cifsproto.h
@@ -580,32 +580,11 @@ void __cifs_put_smb_ses(struct cifs_ses *ses);
extern struct cifs_ses *
cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx);

-#if 0 // TODO Remove
-void cifs_readdata_release(struct cifs_io_subrequest *rdata);
-static inline void cifs_put_readdata(struct cifs_io_subrequest *rdata)
-{
- if (refcount_dec_and_test(&rdata->subreq.ref))
- cifs_readdata_release(rdata);
-}
-#endif
int cifs_async_readv(struct cifs_io_subrequest *rdata);
int cifs_readv_receive(struct TCP_Server_Info *server, struct mid_q_entry *mid);

int cifs_async_writev(struct cifs_io_subrequest *wdata);
void cifs_writev_complete(struct work_struct *work);
-#if 0 // TODO Remove
-struct cifs_io_subrequest *cifs_writedata_alloc(work_func_t complete);
-void cifs_writedata_release(struct cifs_io_subrequest *rdata);
-static inline void cifs_get_writedata(struct cifs_io_subrequest *wdata)
-{
- refcount_inc(&wdata->subreq.ref);
-}
-static inline void cifs_put_writedata(struct cifs_io_subrequest *wdata)
-{
- if (refcount_dec_and_test(&wdata->subreq.ref))
- cifs_writedata_release(wdata);
-}
-#endif
int cifs_query_mf_symlink(unsigned int xid, struct cifs_tcon *tcon,
struct cifs_sb_info *cifs_sb,
const unsigned char *path, char *pbuf,
diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c
index 4c9125a98d18..2c64dccdc81d 100644
--- a/fs/smb/client/file.c
+++ b/fs/smb/client/file.c
@@ -411,133 +411,6 @@ const struct netfs_request_ops cifs_req_ops = {
.create_write_requests = cifs_create_write_requests,
};

-#if 0 // TODO remove 397
-/*
- * Remove the dirty flags from a span of pages.
- */
-static void cifs_undirty_folios(struct inode *inode, loff_t start, unsigned int len)
-{
- struct address_space *mapping = inode->i_mapping;
- struct folio *folio;
- pgoff_t end;
-
- XA_STATE(xas, &mapping->i_pages, start / PAGE_SIZE);
-
- rcu_read_lock();
-
- end = (start + len - 1) / PAGE_SIZE;
- xas_for_each_marked(&xas, folio, end, PAGECACHE_TAG_DIRTY) {
- if (xas_retry(&xas, folio))
- continue;
- xas_pause(&xas);
- rcu_read_unlock();
- folio_lock(folio);
- folio_clear_dirty_for_io(folio);
- folio_unlock(folio);
- rcu_read_lock();
- }
-
- rcu_read_unlock();
-}
-
-/*
- * Completion of write to server.
- */
-void cifs_pages_written_back(struct inode *inode, loff_t start, unsigned int len)
-{
- struct address_space *mapping = inode->i_mapping;
- struct folio *folio;
- pgoff_t end;
-
- XA_STATE(xas, &mapping->i_pages, start / PAGE_SIZE);
-
- if (!len)
- return;
-
- rcu_read_lock();
-
- end = (start + len - 1) / PAGE_SIZE;
- xas_for_each(&xas, folio, end) {
- if (xas_retry(&xas, folio))
- continue;
- if (!folio_test_writeback(folio)) {
- WARN_ONCE(1, "bad %x @%llx page %lx %lx\n",
- len, start, folio_index(folio), end);
- continue;
- }
-
- folio_detach_private(folio);
- folio_end_writeback(folio);
- }
-
- rcu_read_unlock();
-}
-
-/*
- * Failure of write to server.
- */
-void cifs_pages_write_failed(struct inode *inode, loff_t start, unsigned int len)
-{
- struct address_space *mapping = inode->i_mapping;
- struct folio *folio;
- pgoff_t end;
-
- XA_STATE(xas, &mapping->i_pages, start / PAGE_SIZE);
-
- if (!len)
- return;
-
- rcu_read_lock();
-
- end = (start + len - 1) / PAGE_SIZE;
- xas_for_each(&xas, folio, end) {
- if (xas_retry(&xas, folio))
- continue;
- if (!folio_test_writeback(folio)) {
- WARN_ONCE(1, "bad %x @%llx page %lx %lx\n",
- len, start, folio_index(folio), end);
- continue;
- }
-
- folio_set_error(folio);
- folio_end_writeback(folio);
- }
-
- rcu_read_unlock();
-}
-
-/*
- * Redirty pages after a temporary failure.
- */
-void cifs_pages_write_redirty(struct inode *inode, loff_t start, unsigned int len)
-{
- struct address_space *mapping = inode->i_mapping;
- struct folio *folio;
- pgoff_t end;
-
- XA_STATE(xas, &mapping->i_pages, start / PAGE_SIZE);
-
- if (!len)
- return;
-
- rcu_read_lock();
-
- end = (start + len - 1) / PAGE_SIZE;
- xas_for_each(&xas, folio, end) {
- if (!folio_test_writeback(folio)) {
- WARN_ONCE(1, "bad %x @%llx page %lx %lx\n",
- len, start, folio_index(folio), end);
- continue;
- }
-
- filemap_dirty_folio(folio->mapping, folio);
- folio_end_writeback(folio);
- }
-
- rcu_read_unlock();
-}
-#endif // end netfslib remove 397
-
/*
* Mark as invalid, all open files on tree connections since they
* were closed when session to server was lost.
@@ -2497,92 +2370,6 @@ cifs_update_eof(struct cifsInodeInfo *cifsi, loff_t offset,
netfs_resize_file(&cifsi->netfs, end_of_write);
}

-#if 0 // TODO remove 2483
-static ssize_t
-cifs_write(struct cifsFileInfo *open_file, __u32 pid, const char *write_data,
- size_t write_size, loff_t *offset)
-{
- int rc = 0;
- unsigned int bytes_written = 0;
- unsigned int total_written;
- struct cifs_tcon *tcon;
- struct TCP_Server_Info *server;
- unsigned int xid;
- struct dentry *dentry = open_file->dentry;
- struct cifsInodeInfo *cifsi = CIFS_I(d_inode(dentry));
- struct cifs_io_parms io_parms = {0};
-
- cifs_dbg(FYI, "write %zd bytes to offset %lld of %pd\n",
- write_size, *offset, dentry);
-
- tcon = tlink_tcon(open_file->tlink);
- server = tcon->ses->server;
-
- if (!server->ops->sync_write)
- return -ENOSYS;
-
- xid = get_xid();
-
- for (total_written = 0; write_size > total_written;
- total_written += bytes_written) {
- rc = -EAGAIN;
- while (rc == -EAGAIN) {
- struct kvec iov[2];
- unsigned int len;
-
- if (open_file->invalidHandle) {
- /* we could deadlock if we called
- filemap_fdatawait from here so tell
- reopen_file not to flush data to
- server now */
- rc = cifs_reopen_file(open_file, false);
- if (rc != 0)
- break;
- }
-
- len = min(server->ops->wp_retry_size(d_inode(dentry)),
- (unsigned int)write_size - total_written);
- /* iov[0] is reserved for smb header */
- iov[1].iov_base = (char *)write_data + total_written;
- iov[1].iov_len = len;
- io_parms.pid = pid;
- io_parms.tcon = tcon;
- io_parms.offset = *offset;
- io_parms.length = len;
- rc = server->ops->sync_write(xid, &open_file->fid,
- &io_parms, &bytes_written, iov, 1);
- }
- if (rc || (bytes_written == 0)) {
- if (total_written)
- break;
- else {
- free_xid(xid);
- return rc;
- }
- } else {
- spin_lock(&d_inode(dentry)->i_lock);
- cifs_update_eof(cifsi, *offset, bytes_written);
- spin_unlock(&d_inode(dentry)->i_lock);
- *offset += bytes_written;
- }
- }
-
- cifs_stats_bytes_written(tcon, total_written);
-
- if (total_written > 0) {
- spin_lock(&d_inode(dentry)->i_lock);
- if (*offset > d_inode(dentry)->i_size) {
- i_size_write(d_inode(dentry), *offset);
- d_inode(dentry)->i_blocks = (512 - 1 + *offset) >> 9;
- }
- spin_unlock(&d_inode(dentry)->i_lock);
- }
- mark_inode_dirty_sync(d_inode(dentry));
- free_xid(xid);
- return total_written;
-}
-#endif // end netfslib remove 2483
-
struct cifsFileInfo *find_readable_file(struct cifsInodeInfo *cifs_inode,
bool fsuid_only)
{
@@ -4844,292 +4631,6 @@ int cifs_file_mmap(struct file *file, struct vm_area_struct *vma)
return rc;
}

-#if 0 // TODO remove 4794
-/*
- * Unlock a bunch of folios in the pagecache.
- */
-static void cifs_unlock_folios(struct address_space *mapping, pgoff_t first, pgoff_t last)
-{
- struct folio *folio;
- XA_STATE(xas, &mapping->i_pages, first);
-
- rcu_read_lock();
- xas_for_each(&xas, folio, last) {
- folio_unlock(folio);
- }
- rcu_read_unlock();
-}
-
-static void cifs_readahead_complete(struct work_struct *work)
-{
- struct cifs_io_subrequest *rdata = container_of(work,
- struct cifs_io_subrequest, work);
- struct folio *folio;
- pgoff_t last;
- bool good = rdata->result == 0 || (rdata->result == -EAGAIN && rdata->got_bytes);
-
- XA_STATE(xas, &rdata->mapping->i_pages, rdata->subreq.start / PAGE_SIZE);
-
- if (good)
- cifs_readahead_to_fscache(rdata->mapping->host,
- rdata->subreq.start, rdata->subreq.len);
-
- if (iov_iter_count(&rdata->subreq.io_iter) > 0)
- iov_iter_zero(iov_iter_count(&rdata->subreq.io_iter), &rdata->subreq.io_iter);
-
- last = (rdata->subreq.start + rdata->subreq.len - 1) / PAGE_SIZE;
-
- rcu_read_lock();
- xas_for_each(&xas, folio, last) {
- if (good) {
- flush_dcache_folio(folio);
- folio_mark_uptodate(folio);
- }
- folio_unlock(folio);
- }
- rcu_read_unlock();
-
- cifs_put_readdata(rdata);
-}
-
-static void cifs_readahead(struct readahead_control *ractl)
-{
- struct cifsFileInfo *open_file = ractl->file->private_data;
- struct cifs_sb_info *cifs_sb = CIFS_FILE_SB(ractl->file);
- struct TCP_Server_Info *server;
- unsigned int xid, nr_pages, cache_nr_pages = 0;
- unsigned int ra_pages;
- pgoff_t next_cached = ULONG_MAX, ra_index;
- bool caching = fscache_cookie_enabled(cifs_inode_cookie(ractl->mapping->host)) &&
- cifs_inode_cookie(ractl->mapping->host)->cache_priv;
- bool check_cache = caching;
- pid_t pid;
- int rc = 0;
-
- /* Note that readahead_count() lags behind our dequeuing of pages from
- * the ractl, wo we have to keep track for ourselves.
- */
- ra_pages = readahead_count(ractl);
- ra_index = readahead_index(ractl);
-
- xid = get_xid();
-
- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_RWPIDFORWARD)
- pid = open_file->pid;
- else
- pid = current->tgid;
-
- server = cifs_pick_channel(tlink_tcon(open_file->tlink)->ses);
-
- cifs_dbg(FYI, "%s: file=%p mapping=%p num_pages=%u\n",
- __func__, ractl->file, ractl->mapping, ra_pages);
-
- /*
- * Chop the readahead request up into rsize-sized read requests.
- */
- while ((nr_pages = ra_pages)) {
- unsigned int i;
- struct cifs_io_subrequest *rdata;
- struct cifs_credits credits_on_stack;
- struct cifs_credits *credits = &credits_on_stack;
- struct folio *folio;
- pgoff_t fsize;
- size_t rsize;
-
- /*
- * Find out if we have anything cached in the range of
- * interest, and if so, where the next chunk of cached data is.
- */
- if (caching) {
- if (check_cache) {
- rc = cifs_fscache_query_occupancy(
- ractl->mapping->host, ra_index, nr_pages,
- &next_cached, &cache_nr_pages);
- if (rc < 0)
- caching = false;
- check_cache = false;
- }
-
- if (ra_index == next_cached) {
- /*
- * TODO: Send a whole batch of pages to be read
- * by the cache.
- */
- folio = readahead_folio(ractl);
- fsize = folio_nr_pages(folio);
- ra_pages -= fsize;
- ra_index += fsize;
- if (cifs_readpage_from_fscache(ractl->mapping->host,
- &folio->page) < 0) {
- /*
- * TODO: Deal with cache read failure
- * here, but for the moment, delegate
- * that to readpage.
- */
- caching = false;
- }
- folio_unlock(folio);
- next_cached += fsize;
- cache_nr_pages -= fsize;
- if (cache_nr_pages == 0)
- check_cache = true;
- continue;
- }
- }
-
- if (open_file->invalidHandle) {
- rc = cifs_reopen_file(open_file, true);
- if (rc) {
- if (rc == -EAGAIN)
- continue;
- break;
- }
- }
-
- if (cifs_sb->ctx->rsize == 0)
- cifs_sb->ctx->rsize =
- server->ops->negotiate_rsize(tlink_tcon(open_file->tlink),
- cifs_sb->ctx);
-
- rc = server->ops->wait_mtu_credits(server, cifs_sb->ctx->rsize,
- &rsize, credits);
- if (rc)
- break;
- nr_pages = min_t(size_t, rsize / PAGE_SIZE, ra_pages);
- if (next_cached != ULONG_MAX)
- nr_pages = min_t(size_t, nr_pages, next_cached - ra_index);
-
- /*
- * Give up immediately if rsize is too small to read an entire
- * page. The VFS will fall back to readpage. We should never
- * reach this point however since we set ra_pages to 0 when the
- * rsize is smaller than a cache page.
- */
- if (unlikely(!nr_pages)) {
- add_credits_and_wake_if(server, credits, 0);
- break;
- }
-
- rdata = cifs_readdata_alloc(cifs_readahead_complete);
- if (!rdata) {
- /* best to give up if we're out of mem */
- add_credits_and_wake_if(server, credits, 0);
- break;
- }
-
- rdata->subreq.start = ra_index * PAGE_SIZE;
- rdata->subreq.len = nr_pages * PAGE_SIZE;
- rdata->cfile = cifsFileInfo_get(open_file);
- rdata->server = server;
- rdata->mapping = ractl->mapping;
- rdata->pid = pid;
- rdata->credits = credits_on_stack;
-
- for (i = 0; i < nr_pages; i++) {
- if (!readahead_folio(ractl))
- WARN_ON(1);
- }
- ra_pages -= nr_pages;
- ra_index += nr_pages;
-
- iov_iter_xarray(&rdata->subreq.io_iter, ITER_DEST, &rdata->mapping->i_pages,
- rdata->subreq.start, rdata->subreq.len);
-
- rc = adjust_credits(server, &rdata->credits, rdata->subreq.len);
- if (!rc) {
- if (rdata->cfile->invalidHandle)
- rc = -EAGAIN;
- else
- rc = server->ops->async_readv(rdata);
- }
-
- if (rc) {
- add_credits_and_wake_if(server, &rdata->credits, 0);
- cifs_unlock_folios(rdata->mapping,
- rdata->subreq.start / PAGE_SIZE,
- (rdata->subreq.start + rdata->subreq.len - 1) / PAGE_SIZE);
- /* Fallback to the readpage in error/reconnect cases */
- cifs_put_readdata(rdata);
- break;
- }
-
- cifs_put_readdata(rdata);
- }
-
- free_xid(xid);
-}
-
-/*
- * cifs_readpage_worker must be called with the page pinned
- */
-static int cifs_readpage_worker(struct file *file, struct page *page,
- loff_t *poffset)
-{
- char *read_data;
- int rc;
-
- /* Is the page cached? */
- rc = cifs_readpage_from_fscache(file_inode(file), page);
- if (rc == 0)
- goto read_complete;
-
- read_data = kmap(page);
- /* for reads over a certain size could initiate async read ahead */
-
- rc = cifs_read(file, read_data, PAGE_SIZE, poffset);
-
- if (rc < 0)
- goto io_error;
- else
- cifs_dbg(FYI, "Bytes read %d\n", rc);
-
- /* we do not want atime to be less than mtime, it broke some apps */
- file_inode(file)->i_atime = current_time(file_inode(file));
- if (timespec64_compare(&(file_inode(file)->i_atime), &(file_inode(file)->i_mtime)))
- file_inode(file)->i_atime = file_inode(file)->i_mtime;
- else
- file_inode(file)->i_atime = current_time(file_inode(file));
-
- if (PAGE_SIZE > rc)
- memset(read_data + rc, 0, PAGE_SIZE - rc);
-
- flush_dcache_page(page);
- SetPageUptodate(page);
- rc = 0;
-
-io_error:
- kunmap(page);
-
-read_complete:
- unlock_page(page);
- return rc;
-}
-
-static int cifs_read_folio(struct file *file, struct folio *folio)
-{
- struct page *page = &folio->page;
- loff_t offset = page_file_offset(page);
- int rc = -EACCES;
- unsigned int xid;
-
- xid = get_xid();
-
- if (file->private_data == NULL) {
- rc = -EBADF;
- free_xid(xid);
- return rc;
- }
-
- cifs_dbg(FYI, "read_folio %p at offset %d 0x%x\n",
- page, (int)offset, (int)offset);
-
- rc = cifs_readpage_worker(file, page, &offset);
-
- free_xid(xid);
- return rc;
-}
-#endif // end netfslib remove 4794
-
static int is_inode_writable(struct cifsInodeInfo *cifs_inode)
{
struct cifsFileInfo *open_file;
@@ -5175,125 +4676,6 @@ bool is_size_safe_to_change(struct cifsInodeInfo *cifsInode, __u64 end_of_file)
return true;
}

-#if 0 // TODO remove 5152
-static int cifs_write_begin(struct file *file, struct address_space *mapping,
- loff_t pos, unsigned len,
- struct page **pagep, void **fsdata)
-{
- int oncethru = 0;
- pgoff_t index = pos >> PAGE_SHIFT;
- loff_t offset = pos & (PAGE_SIZE - 1);
- loff_t page_start = pos & PAGE_MASK;
- loff_t i_size;
- struct page *page;
- int rc = 0;
-
- cifs_dbg(FYI, "write_begin from %lld len %d\n", (long long)pos, len);
-
-start:
- page = grab_cache_page_write_begin(mapping, index);
- if (!page) {
- rc = -ENOMEM;
- goto out;
- }
-
- if (PageUptodate(page))
- goto out;
-
- /*
- * If we write a full page it will be up to date, no need to read from
- * the server. If the write is short, we'll end up doing a sync write
- * instead.
- */
- if (len == PAGE_SIZE)
- goto out;
-
- /*
- * optimize away the read when we have an oplock, and we're not
- * expecting to use any of the data we'd be reading in. That
- * is, when the page lies beyond the EOF, or straddles the EOF
- * and the write will cover all of the existing data.
- */
- if (CIFS_CACHE_READ(CIFS_I(mapping->host))) {
- i_size = i_size_read(mapping->host);
- if (page_start >= i_size ||
- (offset == 0 && (pos + len) >= i_size)) {
- zero_user_segments(page, 0, offset,
- offset + len,
- PAGE_SIZE);
- /*
- * PageChecked means that the parts of the page
- * to which we're not writing are considered up
- * to date. Once the data is copied to the
- * page, it can be set uptodate.
- */
- SetPageChecked(page);
- goto out;
- }
- }
-
- if ((file->f_flags & O_ACCMODE) != O_WRONLY && !oncethru) {
- /*
- * might as well read a page, it is fast enough. If we get
- * an error, we don't need to return it. cifs_write_end will
- * do a sync write instead since PG_uptodate isn't set.
- */
- cifs_readpage_worker(file, page, &page_start);
- put_page(page);
- oncethru = 1;
- goto start;
- } else {
- /* we could try using another file handle if there is one -
- but how would we lock it to prevent close of that handle
- racing with this read? In any case
- this will be written out by write_end so is fine */
- }
-out:
- *pagep = page;
- return rc;
-}
-
-static bool cifs_release_folio(struct folio *folio, gfp_t gfp)
-{
- if (folio_test_private(folio))
- return 0;
- if (folio_test_fscache(folio)) {
- if (current_is_kswapd() || !(gfp & __GFP_FS))
- return false;
- folio_wait_fscache(folio);
- }
- fscache_note_page_release(cifs_inode_cookie(folio->mapping->host));
- return true;
-}
-
-static void cifs_invalidate_folio(struct folio *folio, size_t offset,
- size_t length)
-{
- folio_wait_fscache(folio);
-}
-
-static int cifs_launder_folio(struct folio *folio)
-{
- int rc = 0;
- loff_t range_start = folio_pos(folio);
- loff_t range_end = range_start + folio_size(folio);
- struct writeback_control wbc = {
- .sync_mode = WB_SYNC_ALL,
- .nr_to_write = 0,
- .range_start = range_start,
- .range_end = range_end,
- };
-
- cifs_dbg(FYI, "Launder page: %lu\n", folio->index);
-
- if (folio_clear_dirty_for_io(folio))
- rc = cifs_writepage_locked(&folio->page, &wbc);
-
- folio_wait_fscache(folio);
- return rc;
-}
-#endif // end netfslib remove 5152
-
void cifs_oplock_break(struct work_struct *work)
{
struct cifsFileInfo *cfile = container_of(work, struct cifsFileInfo,
@@ -5383,27 +4765,6 @@ void cifs_oplock_break(struct work_struct *work)
cifs_done_oplock_break(cinode);
}

-#if 0 // TODO remove 5333
-/*
- * The presence of cifs_direct_io() in the address space ops vector
- * allowes open() O_DIRECT flags which would have failed otherwise.
- *
- * In the non-cached mode (mount with cache=none), we shunt off direct read and write requests
- * so this method should never be called.
- *
- * Direct IO is not yet supported in the cached mode.
- */
-static ssize_t
-cifs_direct_io(struct kiocb *iocb, struct iov_iter *iter)
-{
- /*
- * FIXME
- * Eventually need to support direct IO for non forcedirectio mounts
- */
- return -EINVAL;
-}
-#endif // netfs end remove 5333
-
static int cifs_swap_activate(struct swap_info_struct *sis,
struct file *swap_file, sector_t *span)
{
diff --git a/fs/smb/client/fscache.c b/fs/smb/client/fscache.c
index e4cb0938fb15..bd9284923cc6 100644
--- a/fs/smb/client/fscache.c
+++ b/fs/smb/client/fscache.c
@@ -136,114 +136,3 @@ void cifs_fscache_release_inode_cookie(struct inode *inode)
cifsi->netfs.cache = NULL;
}
}
-
-#if 0 // TODO remove
-/*
- * Fallback page reading interface.
- */
-static int fscache_fallback_read_page(struct inode *inode, struct page *page)
-{
- struct netfs_cache_resources cres;
- struct fscache_cookie *cookie = cifs_inode_cookie(inode);
- struct iov_iter iter;
- struct bio_vec bvec;
- int ret;
-
- memset(&cres, 0, sizeof(cres));
- bvec_set_page(&bvec, page, PAGE_SIZE, 0);
- iov_iter_bvec(&iter, ITER_DEST, &bvec, 1, PAGE_SIZE);
-
- ret = fscache_begin_read_operation(&cres, cookie);
- if (ret < 0)
- return ret;
-
- ret = fscache_read(&cres, page_offset(page), &iter, NETFS_READ_HOLE_FAIL,
- NULL, NULL);
- fscache_end_operation(&cres);
- return ret;
-}
-
-/*
- * Fallback page writing interface.
- */
-static int fscache_fallback_write_pages(struct inode *inode, loff_t start, size_t len,
- bool no_space_allocated_yet)
-{
- struct netfs_cache_resources cres;
- struct fscache_cookie *cookie = cifs_inode_cookie(inode);
- struct iov_iter iter;
- int ret;
-
- memset(&cres, 0, sizeof(cres));
- iov_iter_xarray(&iter, ITER_SOURCE, &inode->i_mapping->i_pages, start, len);
-
- ret = fscache_begin_write_operation(&cres, cookie);
- if (ret < 0)
- return ret;
-
- ret = cres.ops->prepare_write(&cres, &start, &len, i_size_read(inode),
- no_space_allocated_yet);
- if (ret == 0)
- ret = fscache_write(&cres, start, &iter, NULL, NULL);
- fscache_end_operation(&cres);
- return ret;
-}
-
-/*
- * Retrieve a page from FS-Cache
- */
-int __cifs_readpage_from_fscache(struct inode *inode, struct page *page)
-{
- int ret;
-
- cifs_dbg(FYI, "%s: (fsc:%p, p:%p, i:0x%p\n",
- __func__, cifs_inode_cookie(inode), page, inode);
-
- ret = fscache_fallback_read_page(inode, page);
- if (ret < 0)
- return ret;
-
- /* Read completed synchronously */
- SetPageUptodate(page);
- return 0;
-}
-
-void __cifs_readahead_to_fscache(struct inode *inode, loff_t pos, size_t len)
-{
- cifs_dbg(FYI, "%s: (fsc: %p, p: %llx, l: %zx, i: %p)\n",
- __func__, cifs_inode_cookie(inode), pos, len, inode);
-
- fscache_fallback_write_pages(inode, pos, len, true);
-}
-
-/*
- * Query the cache occupancy.
- */
-int __cifs_fscache_query_occupancy(struct inode *inode,
- pgoff_t first, unsigned int nr_pages,
- pgoff_t *_data_first,
- unsigned int *_data_nr_pages)
-{
- struct netfs_cache_resources cres;
- struct fscache_cookie *cookie = cifs_inode_cookie(inode);
- loff_t start, data_start;
- size_t len, data_len;
- int ret;
-
- ret = fscache_begin_read_operation(&cres, cookie);
- if (ret < 0)
- return ret;
-
- start = first * PAGE_SIZE;
- len = nr_pages * PAGE_SIZE;
- ret = cres.ops->query_occupancy(&cres, start, len, PAGE_SIZE,
- &data_start, &data_len);
- if (ret == 0) {
- *_data_first = data_start / PAGE_SIZE;
- *_data_nr_pages = len / PAGE_SIZE;
- }
-
- fscache_end_operation(&cres);
- return ret;
-}
-#endif
diff --git a/fs/smb/client/fscache.h b/fs/smb/client/fscache.h
index 7308efeb2d89..b4faf6b8b9bd 100644
--- a/fs/smb/client/fscache.h
+++ b/fs/smb/client/fscache.h
@@ -74,43 +74,6 @@ static inline void cifs_invalidate_cache(struct inode *inode, unsigned int flags
i_size_read(inode), flags);
}

-#if 0 // TODO remove
-extern int __cifs_fscache_query_occupancy(struct inode *inode,
- pgoff_t first, unsigned int nr_pages,
- pgoff_t *_data_first,
- unsigned int *_data_nr_pages);
-
-static inline int cifs_fscache_query_occupancy(struct inode *inode,
- pgoff_t first, unsigned int nr_pages,
- pgoff_t *_data_first,
- unsigned int *_data_nr_pages)
-{
- if (!cifs_inode_cookie(inode))
- return -ENOBUFS;
- return __cifs_fscache_query_occupancy(inode, first, nr_pages,
- _data_first, _data_nr_pages);
-}
-
-extern int __cifs_readpage_from_fscache(struct inode *pinode, struct page *ppage);
-extern void __cifs_readahead_to_fscache(struct inode *pinode, loff_t pos, size_t len);
-
-
-static inline int cifs_readpage_from_fscache(struct inode *inode,
- struct page *page)
-{
- if (cifs_inode_cookie(inode))
- return __cifs_readpage_from_fscache(inode, page);
- return -ENOBUFS;
-}
-
-static inline void cifs_readahead_to_fscache(struct inode *inode,
- loff_t pos, size_t len)
-{
- if (cifs_inode_cookie(inode))
- __cifs_readahead_to_fscache(inode, pos, len);
-}
-#endif
-
#else /* CONFIG_CIFS_FSCACHE */
static inline
void cifs_fscache_fill_coherency(struct inode *inode,
@@ -127,27 +90,6 @@ static inline void cifs_fscache_unuse_inode_cookie(struct inode *inode, bool upd
static inline struct fscache_cookie *cifs_inode_cookie(struct inode *inode) { return NULL; }
static inline void cifs_invalidate_cache(struct inode *inode, unsigned int flags) {}

-#if 0 // TODO remove
-static inline int cifs_fscache_query_occupancy(struct inode *inode,
- pgoff_t first, unsigned int nr_pages,
- pgoff_t *_data_first,
- unsigned int *_data_nr_pages)
-{
- *_data_first = ULONG_MAX;
- *_data_nr_pages = 0;
- return -ENOBUFS;
-}
-
-static inline int
-cifs_readpage_from_fscache(struct inode *inode, struct page *page)
-{
- return -ENOBUFS;
-}
-
-static inline
-void cifs_readahead_to_fscache(struct inode *inode, loff_t pos, size_t len) {}
-#endif
-
#endif /* CONFIG_CIFS_FSCACHE */

#endif /* _CIFS_FSCACHE_H */

2023-10-13 16:13:09

by David Howells

[permalink] [raw]
Subject: [RFC PATCH 36/53] netfs: Decrypt encrypted content

Implement a facility to decrypt the encrypted content of a whole
read request in one go (the request might have been stitched together from
disparate sources with divisions that don't match page boundaries).

Note that this doesn't necessarily achieve the best throughput if the
crypto block size is equal to or smaller than a page (in which case we
might do better to decrypt each page as it is read), but it will handle
crypto blocks larger than a page.
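
To illustrate how a filesystem might plug into this, here's a minimal
sketch of a ->decrypt_block() op, assuming a sync skcipher has already
been attached to the inode.  The netfs_i_crypto() accessor, the IV
derivation and the cipher choice are assumptions made for the example,
not part of this patch; the sg counts go unused here because the
scatterlists are terminated:

static int myfs_decrypt_block(struct netfs_io_request *rreq,
			      loff_t pos, size_t len,
			      struct scatterlist *source_sg, unsigned int n_source,
			      struct scatterlist *dest_sg, unsigned int n_dest)
{
	/* netfs_i_crypto() is a hypothetical per-inode cipher accessor */
	struct crypto_sync_skcipher *tfm = netfs_i_crypto(rreq->inode);
	SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm);
	u8 iv[16] = {};
	int ret;

	/* Derive the IV from the index of the crypto block in the file */
	put_unaligned_le64(pos >> netfs_inode(rreq->inode)->crypto_bshift, iv);

	skcipher_request_set_sync_tfm(req, tfm);
	skcipher_request_set_callback(req, 0, NULL, NULL);
	skcipher_request_set_crypt(req, source_sg, dest_sg, len, iv);
	ret = crypto_skcipher_decrypt(req);
	skcipher_request_zero(req);
	return ret;
}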

Signed-off-by: David Howells <[email protected]>
cc: Jeff Layton <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
---
fs/netfs/crypto.c | 59 ++++++++++++++++++++++++++++++++++++
fs/netfs/internal.h | 1 +
fs/netfs/io.c | 6 +++-
include/linux/netfs.h | 3 ++
include/trace/events/netfs.h | 2 ++
5 files changed, 70 insertions(+), 1 deletion(-)

diff --git a/fs/netfs/crypto.c b/fs/netfs/crypto.c
index 943d01f430e2..6729bcda4f47 100644
--- a/fs/netfs/crypto.c
+++ b/fs/netfs/crypto.c
@@ -87,3 +87,62 @@ bool netfs_encrypt(struct netfs_io_request *wreq)
wreq->error = ret;
return false;
}
+
+/*
+ * Decrypt the result of a read request.
+ */
+void netfs_decrypt(struct netfs_io_request *rreq)
+{
+ struct netfs_inode *ctx = netfs_inode(rreq->inode);
+ struct scatterlist source_sg[16], dest_sg[16];
+ unsigned int n_source;
+ size_t n, chunk, bsize = 1UL << ctx->crypto_bshift;
+ loff_t pos;
+ int ret;
+
+ trace_netfs_rreq(rreq, netfs_rreq_trace_decrypt);
+ if (rreq->start >= rreq->i_size)
+ return;
+
+ n = min_t(unsigned long long, rreq->len, rreq->i_size - rreq->start);
+
+ _debug("DECRYPT %llx-%llx f=%lx",
+ rreq->start, rreq->start + n, rreq->flags);
+
+ pos = rreq->start;
+ for (; n > 0; n -= chunk, pos += chunk) {
+ chunk = min(n, bsize);
+
+ ret = netfs_iter_to_sglist(&rreq->io_iter, chunk,
+ source_sg, ARRAY_SIZE(source_sg));
+ if (ret < 0)
+ goto error;
+ n_source = ret;
+
+ if (test_bit(NETFS_RREQ_CRYPT_IN_PLACE, &rreq->flags)) {
+ ret = ctx->ops->decrypt_block(rreq, pos, chunk,
+ source_sg, n_source,
+ source_sg, n_source);
+ } else {
+ ret = netfs_iter_to_sglist(&rreq->iter, chunk,
+ dest_sg, ARRAY_SIZE(dest_sg));
+ if (ret < 0)
+ goto error;
+ ret = ctx->ops->decrypt_block(rreq, pos, chunk,
+ source_sg, n_source,
+ dest_sg, ret);
+ }
+
+ if (ret < 0)
+ goto error_failed;
+ }
+
+ return;
+
+error_failed:
+ trace_netfs_failure(rreq, NULL, ret, netfs_fail_decryption);
+error:
+ rreq->error = ret;
+ set_bit(NETFS_RREQ_FAILED, &rreq->flags);
+ return;
+}
diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h
index 3f4e64968623..8dc68a75d6cd 100644
--- a/fs/netfs/internal.h
+++ b/fs/netfs/internal.h
@@ -26,6 +26,7 @@ int netfs_prefetch_for_write(struct file *file, struct folio *folio,
* crypto.c
*/
bool netfs_encrypt(struct netfs_io_request *wreq);
+void netfs_decrypt(struct netfs_io_request *rreq);

/*
* direct_write.c
diff --git a/fs/netfs/io.c b/fs/netfs/io.c
index 36a3f720193a..9887b22e4cb3 100644
--- a/fs/netfs/io.c
+++ b/fs/netfs/io.c
@@ -398,6 +398,9 @@ static void netfs_rreq_assess(struct netfs_io_request *rreq, bool was_async)
return;
}

+ if (!test_bit(NETFS_RREQ_FAILED, &rreq->flags) &&
+ test_bit(NETFS_RREQ_CONTENT_ENCRYPTION, &rreq->flags))
+ netfs_decrypt(rreq);
if (rreq->origin != NETFS_DIO_READ)
netfs_rreq_unlock_folios(rreq);
else
@@ -427,7 +430,8 @@ static void netfs_rreq_work(struct work_struct *work)
static void netfs_rreq_terminated(struct netfs_io_request *rreq,
bool was_async)
{
- if (test_bit(NETFS_RREQ_INCOMPLETE_IO, &rreq->flags) &&
+ if ((test_bit(NETFS_RREQ_INCOMPLETE_IO, &rreq->flags) ||
+ test_bit(NETFS_RREQ_CONTENT_ENCRYPTION, &rreq->flags)) &&
was_async) {
if (!queue_work(system_unbound_wq, &rreq->work))
BUG();
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index cdb471938225..524e6f5ff3fd 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -326,6 +326,9 @@ struct netfs_request_ops {
int (*encrypt_block)(struct netfs_io_request *wreq, loff_t pos, size_t len,
struct scatterlist *source_sg, unsigned int n_source,
struct scatterlist *dest_sg, unsigned int n_dest);
+ int (*decrypt_block)(struct netfs_io_request *rreq, loff_t pos, size_t len,
+ struct scatterlist *source_sg, unsigned int n_source,
+ struct scatterlist *dest_sg, unsigned int n_dest);
};

/*
diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
index 70e2f9a48f24..2f35057602fa 100644
--- a/include/trace/events/netfs.h
+++ b/include/trace/events/netfs.h
@@ -40,6 +40,7 @@
#define netfs_rreq_traces \
EM(netfs_rreq_trace_assess, "ASSESS ") \
EM(netfs_rreq_trace_copy, "COPY ") \
+ EM(netfs_rreq_trace_decrypt, "DECRYPT") \
EM(netfs_rreq_trace_done, "DONE ") \
EM(netfs_rreq_trace_encrypt, "ENCRYPT") \
EM(netfs_rreq_trace_free, "FREE ") \
@@ -75,6 +76,7 @@
#define netfs_failures \
EM(netfs_fail_check_write_begin, "check-write-begin") \
EM(netfs_fail_copy_to_cache, "copy-to-cache") \
+ EM(netfs_fail_decryption, "decryption") \
EM(netfs_fail_dio_read_short, "dio-read-short") \
EM(netfs_fail_dio_read_zero, "dio-read-zero") \
EM(netfs_fail_encryption, "encryption") \

2023-10-13 16:13:20

by David Howells

[permalink] [raw]
Subject: [RFC PATCH 46/53] cifs: Use more fields from netfs_io_subrequest

Use more of the fields from netfs_io_subrequest instead of the duplicates
that were incorporated into cifs_io_subrequest from cifs_readdata and
cifs_writedata.
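
For reference, a sketch of how a completion routine now reaches the cifs
wrapper and picks up the offset and length from the embedded subrequest
(the function name is made up for illustration):

static void example_read_done(struct netfs_io_subrequest *subreq)
{
	struct cifs_io_subrequest *rdata =
		container_of(subreq, struct cifs_io_subrequest, subreq);

	/* Offset and length now live in the embedded netfs subrequest */
	pr_debug("read of %zu bytes at %llu done\n",
		 rdata->subreq.len,
		 (unsigned long long)rdata->subreq.start);
}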

Signed-off-by: David Howells <[email protected]>
cc: Steve French <[email protected]>
cc: Shyam Prasad N <[email protected]>
cc: Rohith Surabattula <[email protected]>
cc: Jeff Layton <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
---
fs/smb/client/cifsglob.h | 3 -
fs/smb/client/cifssmb.c | 52 +++++++++---------
fs/smb/client/file.c | 112 +++++++++++++++++++-------------------
fs/smb/client/smb2ops.c | 4 +-
fs/smb/client/smb2pdu.c | 52 +++++++++---------
fs/smb/client/transport.c | 6 +-
6 files changed, 113 insertions(+), 116 deletions(-)

diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
index 0b1835751bda..c7f04f9853c5 100644
--- a/fs/smb/client/cifsglob.h
+++ b/fs/smb/client/cifsglob.h
@@ -1444,9 +1444,6 @@ struct cifs_io_subrequest {
struct list_head list;
struct completion done;
struct work_struct work;
- struct iov_iter iter;
- __u64 offset;
- unsigned int bytes;
};

/*
diff --git a/fs/smb/client/cifssmb.c b/fs/smb/client/cifssmb.c
index 14fca3fa3e08..112a5a2d95b8 100644
--- a/fs/smb/client/cifssmb.c
+++ b/fs/smb/client/cifssmb.c
@@ -1267,12 +1267,12 @@ cifs_readv_callback(struct mid_q_entry *mid)
struct TCP_Server_Info *server = tcon->ses->server;
struct smb_rqst rqst = { .rq_iov = rdata->iov,
.rq_nvec = 2,
- .rq_iter = rdata->iter };
+ .rq_iter = rdata->subreq.io_iter };
struct cifs_credits credits = { .value = 1, .instance = 0 };

- cifs_dbg(FYI, "%s: mid=%llu state=%d result=%d bytes=%u\n",
+ cifs_dbg(FYI, "%s: mid=%llu state=%d result=%d bytes=%zu\n",
__func__, mid->mid, mid->mid_state, rdata->result,
- rdata->bytes);
+ rdata->subreq.len);

switch (mid->mid_state) {
case MID_RESPONSE_RECEIVED:
@@ -1320,14 +1320,14 @@ cifs_async_readv(struct cifs_io_subrequest *rdata)
struct smb_rqst rqst = { .rq_iov = rdata->iov,
.rq_nvec = 2 };

- cifs_dbg(FYI, "%s: offset=%llu bytes=%u\n",
- __func__, rdata->offset, rdata->bytes);
+ cifs_dbg(FYI, "%s: offset=%llu bytes=%zu\n",
+ __func__, rdata->subreq.start, rdata->subreq.len);

if (tcon->ses->capabilities & CAP_LARGE_FILES)
wct = 12;
else {
wct = 10; /* old style read */
- if ((rdata->offset >> 32) > 0) {
+ if ((rdata->subreq.start >> 32) > 0) {
/* can not handle this big offset for old */
return -EIO;
}
@@ -1342,12 +1342,12 @@ cifs_async_readv(struct cifs_io_subrequest *rdata)

smb->AndXCommand = 0xFF; /* none */
smb->Fid = rdata->cfile->fid.netfid;
- smb->OffsetLow = cpu_to_le32(rdata->offset & 0xFFFFFFFF);
+ smb->OffsetLow = cpu_to_le32(rdata->subreq.start & 0xFFFFFFFF);
if (wct == 12)
- smb->OffsetHigh = cpu_to_le32(rdata->offset >> 32);
+ smb->OffsetHigh = cpu_to_le32(rdata->subreq.start >> 32);
smb->Remaining = 0;
- smb->MaxCount = cpu_to_le16(rdata->bytes & 0xFFFF);
- smb->MaxCountHigh = cpu_to_le32(rdata->bytes >> 16);
+ smb->MaxCount = cpu_to_le16(rdata->subreq.len & 0xFFFF);
+ smb->MaxCountHigh = cpu_to_le32(rdata->subreq.len >> 16);
if (wct == 12)
smb->ByteCount = 0;
else {
@@ -1631,13 +1631,13 @@ cifs_writev_callback(struct mid_q_entry *mid)
* client. OS/2 servers are known to set incorrect
* CountHigh values.
*/
- if (written > wdata->bytes)
+ if (written > wdata->subreq.len)
written &= 0xFFFF;

- if (written < wdata->bytes)
+ if (written < wdata->subreq.len)
wdata->result = -ENOSPC;
else
- wdata->bytes = written;
+ wdata->subreq.len = written;
break;
case MID_REQUEST_SUBMITTED:
case MID_RETRY_NEEDED:
@@ -1668,7 +1668,7 @@ cifs_async_writev(struct cifs_io_subrequest *wdata)
wct = 14;
} else {
wct = 12;
- if (wdata->offset >> 32 > 0) {
+ if (wdata->subreq.start >> 32 > 0) {
/* can not handle big offset for old srv */
return -EIO;
}
@@ -1683,9 +1683,9 @@ cifs_async_writev(struct cifs_io_subrequest *wdata)

smb->AndXCommand = 0xFF; /* none */
smb->Fid = wdata->cfile->fid.netfid;
- smb->OffsetLow = cpu_to_le32(wdata->offset & 0xFFFFFFFF);
+ smb->OffsetLow = cpu_to_le32(wdata->subreq.start & 0xFFFFFFFF);
if (wct == 14)
- smb->OffsetHigh = cpu_to_le32(wdata->offset >> 32);
+ smb->OffsetHigh = cpu_to_le32(wdata->subreq.start >> 32);
smb->Reserved = 0xFFFFFFFF;
smb->WriteMode = 0;
smb->Remaining = 0;
@@ -1701,24 +1701,24 @@ cifs_async_writev(struct cifs_io_subrequest *wdata)

rqst.rq_iov = iov;
rqst.rq_nvec = 2;
- rqst.rq_iter = wdata->iter;
- rqst.rq_iter_size = iov_iter_count(&wdata->iter);
+ rqst.rq_iter = wdata->subreq.io_iter;
+ rqst.rq_iter_size = iov_iter_count(&wdata->subreq.io_iter);

- cifs_dbg(FYI, "async write at %llu %u bytes\n",
- wdata->offset, wdata->bytes);
+ cifs_dbg(FYI, "async write at %llu %zu bytes\n",
+ wdata->subreq.start, wdata->subreq.len);

- smb->DataLengthLow = cpu_to_le16(wdata->bytes & 0xFFFF);
- smb->DataLengthHigh = cpu_to_le16(wdata->bytes >> 16);
+ smb->DataLengthLow = cpu_to_le16(wdata->subreq.len & 0xFFFF);
+ smb->DataLengthHigh = cpu_to_le16(wdata->subreq.len >> 16);

if (wct == 14) {
- inc_rfc1001_len(&smb->hdr, wdata->bytes + 1);
- put_bcc(wdata->bytes + 1, &smb->hdr);
+ inc_rfc1001_len(&smb->hdr, wdata->subreq.len + 1);
+ put_bcc(wdata->subreq.len + 1, &smb->hdr);
} else {
/* wct == 12 */
struct smb_com_writex_req *smbw =
(struct smb_com_writex_req *)smb;
- inc_rfc1001_len(&smbw->hdr, wdata->bytes + 5);
- put_bcc(wdata->bytes + 5, &smbw->hdr);
+ inc_rfc1001_len(&smbw->hdr, wdata->subreq.len + 5);
+ put_bcc(wdata->subreq.len + 5, &smbw->hdr);
iov[1].iov_len += 4; /* pad bigger by four bytes */
}

diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c
index c192a38b1b7c..c70d106a413f 100644
--- a/fs/smb/client/file.c
+++ b/fs/smb/client/file.c
@@ -2437,8 +2437,8 @@ cifs_writev_requeue(struct cifs_io_subrequest *wdata)
int rc = 0;
struct inode *inode = d_inode(wdata->cfile->dentry);
struct TCP_Server_Info *server;
- unsigned int rest_len = wdata->bytes;
- loff_t fpos = wdata->offset;
+ unsigned int rest_len = wdata->subreq.len;
+ loff_t fpos = wdata->subreq.start;

server = tlink_tcon(wdata->cfile->tlink)->ses->server;
do {
@@ -2463,14 +2463,14 @@ cifs_writev_requeue(struct cifs_io_subrequest *wdata)
}

wdata2->sync_mode = wdata->sync_mode;
- wdata2->offset = fpos;
- wdata2->bytes = cur_len;
- wdata2->iter = wdata->iter;
+ wdata2->subreq.start = fpos;
+ wdata2->subreq.len = cur_len;
+ wdata2->subreq.io_iter = wdata->subreq.io_iter;

- iov_iter_advance(&wdata2->iter, fpos - wdata->offset);
- iov_iter_truncate(&wdata2->iter, wdata2->bytes);
+ iov_iter_advance(&wdata2->subreq.io_iter, fpos - wdata->subreq.start);
+ iov_iter_truncate(&wdata2->subreq.io_iter, wdata2->subreq.len);

- if (iov_iter_is_xarray(&wdata2->iter))
+ if (iov_iter_is_xarray(&wdata2->subreq.io_iter))
/* Check for pages having been redirtied and clean
* them. We can do this by walking the xarray. If
* it's not an xarray, then it's a DIO and we shouldn't
@@ -2504,7 +2504,7 @@ cifs_writev_requeue(struct cifs_io_subrequest *wdata)
} while (rest_len > 0);

/* Clean up remaining pages from the original wdata */
- if (iov_iter_is_xarray(&wdata->iter))
+ if (iov_iter_is_xarray(&wdata->subreq.io_iter))
cifs_pages_write_failed(inode, fpos, rest_len);

if (rc != 0 && !is_retryable_error(rc))
@@ -2521,19 +2521,19 @@ cifs_writev_complete(struct work_struct *work)

if (wdata->result == 0) {
spin_lock(&inode->i_lock);
- cifs_update_eof(CIFS_I(inode), wdata->offset, wdata->bytes);
+ cifs_update_eof(CIFS_I(inode), wdata->subreq.start, wdata->subreq.len);
spin_unlock(&inode->i_lock);
cifs_stats_bytes_written(tlink_tcon(wdata->cfile->tlink),
- wdata->bytes);
+ wdata->subreq.len);
} else if (wdata->sync_mode == WB_SYNC_ALL && wdata->result == -EAGAIN)
return cifs_writev_requeue(wdata);

if (wdata->result == -EAGAIN)
- cifs_pages_write_redirty(inode, wdata->offset, wdata->bytes);
+ cifs_pages_write_redirty(inode, wdata->subreq.start, wdata->subreq.len);
else if (wdata->result < 0)
- cifs_pages_write_failed(inode, wdata->offset, wdata->bytes);
+ cifs_pages_write_failed(inode, wdata->subreq.start, wdata->subreq.len);
else
- cifs_pages_written_back(inode, wdata->offset, wdata->bytes);
+ cifs_pages_written_back(inode, wdata->subreq.start, wdata->subreq.len);

if (wdata->result != -EAGAIN)
mapping_set_error(inode->i_mapping, wdata->result);
@@ -2767,7 +2767,7 @@ static ssize_t cifs_write_back_from_locked_folio(struct address_space *mapping,
}

wdata->sync_mode = wbc->sync_mode;
- wdata->offset = folio_pos(folio);
+ wdata->subreq.start = folio_pos(folio);
wdata->pid = cfile->pid;
wdata->credits = credits_on_stack;
wdata->cfile = cfile;
@@ -2802,7 +2802,7 @@ static ssize_t cifs_write_back_from_locked_folio(struct address_space *mapping,
len = min_t(loff_t, len, max_len);
}

- wdata->bytes = len;
+ wdata->subreq.len = len;

/* We now have a contiguous set of dirty pages, each with writeback
* set; the first page is still locked at this point, but all the rest
@@ -2811,10 +2811,10 @@ static ssize_t cifs_write_back_from_locked_folio(struct address_space *mapping,
folio_unlock(folio);

if (start < i_size) {
- iov_iter_xarray(&wdata->iter, ITER_SOURCE, &mapping->i_pages,
+ iov_iter_xarray(&wdata->subreq.io_iter, ITER_SOURCE, &mapping->i_pages,
start, len);

- rc = adjust_credits(wdata->server, &wdata->credits, wdata->bytes);
+ rc = adjust_credits(wdata->server, &wdata->credits, wdata->subreq.len);
if (rc)
goto err_wdata;

@@ -3233,7 +3233,7 @@ cifs_uncached_writev_complete(struct work_struct *work)
struct cifsInodeInfo *cifsi = CIFS_I(inode);

spin_lock(&inode->i_lock);
- cifs_update_eof(cifsi, wdata->offset, wdata->bytes);
+ cifs_update_eof(cifsi, wdata->subreq.start, wdata->subreq.len);
if (cifsi->netfs.remote_i_size > inode->i_size)
i_size_write(inode, cifsi->netfs.remote_i_size);
spin_unlock(&inode->i_lock);
@@ -3269,19 +3269,19 @@ cifs_resend_wdata(struct cifs_io_subrequest *wdata, struct list_head *wdata_list
* segments
*/
do {
- rc = server->ops->wait_mtu_credits(server, wdata->bytes,
+ rc = server->ops->wait_mtu_credits(server, wdata->subreq.len,
&wsize, &credits);
if (rc)
goto fail;

- if (wsize < wdata->bytes) {
+ if (wsize < wdata->subreq.len) {
add_credits_and_wake_if(server, &credits, 0);
msleep(1000);
}
- } while (wsize < wdata->bytes);
+ } while (wsize < wdata->subreq.len);
wdata->credits = credits;

- rc = adjust_credits(server, &wdata->credits, wdata->bytes);
+ rc = adjust_credits(server, &wdata->credits, wdata->subreq.len);

if (!rc) {
if (wdata->cfile->invalidHandle)
@@ -3427,19 +3427,19 @@ cifs_write_from_iter(loff_t fpos, size_t len, struct iov_iter *from,

wdata->uncached = true;
wdata->sync_mode = WB_SYNC_ALL;
- wdata->offset = (__u64)fpos;
+ wdata->subreq.start = (__u64)fpos;
wdata->cfile = cifsFileInfo_get(open_file);
wdata->server = server;
wdata->pid = pid;
- wdata->bytes = cur_len;
+ wdata->subreq.len = cur_len;
wdata->credits = credits_on_stack;
- wdata->iter = *from;
+ wdata->subreq.io_iter = *from;
wdata->ctx = ctx;
kref_get(&ctx->refcount);

- iov_iter_truncate(&wdata->iter, cur_len);
+ iov_iter_truncate(&wdata->subreq.io_iter, cur_len);

- rc = adjust_credits(server, &wdata->credits, wdata->bytes);
+ rc = adjust_credits(server, &wdata->credits, wdata->subreq.len);

if (!rc) {
if (wdata->cfile->invalidHandle)
@@ -3501,7 +3501,7 @@ static void collect_uncached_write_data(struct cifs_aio_ctx *ctx)
if (wdata->result)
rc = wdata->result;
else
- ctx->total_len += wdata->bytes;
+ ctx->total_len += wdata->subreq.len;

/* resend call if it's a retryable error */
if (rc == -EAGAIN) {
@@ -3516,10 +3516,10 @@ static void collect_uncached_write_data(struct cifs_aio_ctx *ctx)
wdata, &tmp_list, ctx);
else {
iov_iter_advance(&tmp_from,
- wdata->offset - ctx->pos);
+ wdata->subreq.start - ctx->pos);

- rc = cifs_write_from_iter(wdata->offset,
- wdata->bytes, &tmp_from,
+ rc = cifs_write_from_iter(wdata->subreq.start,
+ wdata->subreq.len, &tmp_from,
ctx->cfile, cifs_sb, &tmp_list,
ctx);

@@ -3842,20 +3842,20 @@ static int cifs_resend_rdata(struct cifs_io_subrequest *rdata,
* segments
*/
do {
- rc = server->ops->wait_mtu_credits(server, rdata->bytes,
+ rc = server->ops->wait_mtu_credits(server, rdata->subreq.len,
&rsize, &credits);

if (rc)
goto fail;

- if (rsize < rdata->bytes) {
+ if (rsize < rdata->subreq.len) {
add_credits_and_wake_if(server, &credits, 0);
msleep(1000);
}
- } while (rsize < rdata->bytes);
+ } while (rsize < rdata->subreq.len);
rdata->credits = credits;

- rc = adjust_credits(server, &rdata->credits, rdata->bytes);
+ rc = adjust_credits(server, &rdata->credits, rdata->subreq.len);
if (!rc) {
if (rdata->cfile->invalidHandle)
rc = -EAGAIN;
@@ -3953,17 +3953,17 @@ cifs_send_async_read(loff_t fpos, size_t len, struct cifsFileInfo *open_file,

rdata->server = server;
rdata->cfile = cifsFileInfo_get(open_file);
- rdata->offset = fpos;
- rdata->bytes = cur_len;
+ rdata->subreq.start = fpos;
+ rdata->subreq.len = cur_len;
rdata->pid = pid;
rdata->credits = credits_on_stack;
rdata->ctx = ctx;
kref_get(&ctx->refcount);

- rdata->iter = ctx->iter;
- iov_iter_truncate(&rdata->iter, cur_len);
+ rdata->subreq.io_iter = ctx->iter;
+ iov_iter_truncate(&rdata->subreq.io_iter, cur_len);

- rc = adjust_credits(server, &rdata->credits, rdata->bytes);
+ rc = adjust_credits(server, &rdata->credits, rdata->subreq.len);

if (!rc) {
if (rdata->cfile->invalidHandle)
@@ -4033,8 +4033,8 @@ collect_uncached_read_data(struct cifs_aio_ctx *ctx)
&tmp_list, ctx);
} else {
rc = cifs_send_async_read(
- rdata->offset + got_bytes,
- rdata->bytes - got_bytes,
+ rdata->subreq.start + got_bytes,
+ rdata->subreq.len - got_bytes,
rdata->cfile, cifs_sb,
&tmp_list, ctx);

@@ -4048,7 +4048,7 @@ collect_uncached_read_data(struct cifs_aio_ctx *ctx)
rc = rdata->result;

/* if there was a short read -- discard anything left */
- if (rdata->got_bytes && rdata->got_bytes < rdata->bytes)
+ if (rdata->got_bytes && rdata->got_bytes < rdata->subreq.len)
rc = -ENODATA;

ctx->total_len += rdata->got_bytes;
@@ -4432,16 +4432,16 @@ static void cifs_readahead_complete(struct work_struct *work)
pgoff_t last;
bool good = rdata->result == 0 || (rdata->result == -EAGAIN && rdata->got_bytes);

- XA_STATE(xas, &rdata->mapping->i_pages, rdata->offset / PAGE_SIZE);
+ XA_STATE(xas, &rdata->mapping->i_pages, rdata->subreq.start / PAGE_SIZE);

if (good)
cifs_readahead_to_fscache(rdata->mapping->host,
- rdata->offset, rdata->bytes);
+ rdata->subreq.start, rdata->subreq.len);

- if (iov_iter_count(&rdata->iter) > 0)
- iov_iter_zero(iov_iter_count(&rdata->iter), &rdata->iter);
+ if (iov_iter_count(&rdata->subreq.io_iter) > 0)
+ iov_iter_zero(iov_iter_count(&rdata->subreq.io_iter), &rdata->subreq.io_iter);

- last = (rdata->offset + rdata->bytes - 1) / PAGE_SIZE;
+ last = (rdata->subreq.start + rdata->subreq.len - 1) / PAGE_SIZE;

rcu_read_lock();
xas_for_each(&xas, folio, last) {
@@ -4580,8 +4580,8 @@ static void cifs_readahead(struct readahead_control *ractl)
break;
}

- rdata->offset = ra_index * PAGE_SIZE;
- rdata->bytes = nr_pages * PAGE_SIZE;
+ rdata->subreq.start = ra_index * PAGE_SIZE;
+ rdata->subreq.len = nr_pages * PAGE_SIZE;
rdata->cfile = cifsFileInfo_get(open_file);
rdata->server = server;
rdata->mapping = ractl->mapping;
@@ -4595,10 +4595,10 @@ static void cifs_readahead(struct readahead_control *ractl)
ra_pages -= nr_pages;
ra_index += nr_pages;

- iov_iter_xarray(&rdata->iter, ITER_DEST, &rdata->mapping->i_pages,
- rdata->offset, rdata->bytes);
+ iov_iter_xarray(&rdata->subreq.io_iter, ITER_DEST, &rdata->mapping->i_pages,
+ rdata->subreq.start, rdata->subreq.len);

- rc = adjust_credits(server, &rdata->credits, rdata->bytes);
+ rc = adjust_credits(server, &rdata->credits, rdata->subreq.len);
if (!rc) {
if (rdata->cfile->invalidHandle)
rc = -EAGAIN;
@@ -4609,8 +4609,8 @@ static void cifs_readahead(struct readahead_control *ractl)
if (rc) {
add_credits_and_wake_if(server, &rdata->credits, 0);
cifs_unlock_folios(rdata->mapping,
- rdata->offset / PAGE_SIZE,
- (rdata->offset + rdata->bytes - 1) / PAGE_SIZE);
+ rdata->subreq.start / PAGE_SIZE,
+ (rdata->subreq.start + rdata->subreq.len - 1) / PAGE_SIZE);
/* Fallback to the readpage in error/reconnect cases */
cifs_put_readdata(rdata);
break;
diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
index e7f765673246..bb1e8415bcf3 100644
--- a/fs/smb/client/smb2ops.c
+++ b/fs/smb/client/smb2ops.c
@@ -4686,7 +4686,7 @@ handle_read_data(struct TCP_Server_Info *server, struct mid_q_entry *mid,

/* Copy the data to the output I/O iterator. */
rdata->result = cifs_copy_pages_to_iter(pages, pages_len,
- cur_off, &rdata->iter);
+ cur_off, &rdata->subreq.io_iter);
if (rdata->result != 0) {
if (is_offloaded)
mid->mid_state = MID_RESPONSE_MALFORMED;
@@ -4700,7 +4700,7 @@ handle_read_data(struct TCP_Server_Info *server, struct mid_q_entry *mid,
/* read response payload is in buf */
WARN_ONCE(pages && !xa_empty(pages),
"read data can be either in buf or in pages");
- length = copy_to_iter(buf + data_offset, data_len, &rdata->iter);
+ length = copy_to_iter(buf + data_offset, data_len, &rdata->subreq.io_iter);
if (length < 0)
return length;
rdata->got_bytes = data_len;
diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c
index 4f98631f2cf4..4fde3d506c60 100644
--- a/fs/smb/client/smb2pdu.c
+++ b/fs/smb/client/smb2pdu.c
@@ -4113,7 +4113,7 @@ smb2_new_read_req(void **buf, unsigned int *total_len,
struct smbd_buffer_descriptor_v1 *v1;
bool need_invalidate = server->dialect == SMB30_PROT_ID;

- rdata->mr = smbd_register_mr(server->smbd_conn, &rdata->iter,
+ rdata->mr = smbd_register_mr(server->smbd_conn, &rdata->subreq.io_iter,
true, need_invalidate);
if (!rdata->mr)
return -EAGAIN;
@@ -4174,17 +4174,17 @@ smb2_readv_callback(struct mid_q_entry *mid)
.rq_nvec = 1 };

if (rdata->got_bytes) {
- rqst.rq_iter = rdata->iter;
- rqst.rq_iter_size = iov_iter_count(&rdata->iter);
+ rqst.rq_iter = rdata->subreq.io_iter;
+ rqst.rq_iter_size = iov_iter_count(&rdata->subreq.io_iter);
}

WARN_ONCE(rdata->server != mid->server,
"rdata server %p != mid server %p",
rdata->server, mid->server);

- cifs_dbg(FYI, "%s: mid=%llu state=%d result=%d bytes=%u\n",
+ cifs_dbg(FYI, "%s: mid=%llu state=%d result=%d bytes=%zu\n",
__func__, mid->mid, mid->mid_state, rdata->result,
- rdata->bytes);
+ rdata->subreq.len);

switch (mid->mid_state) {
case MID_RESPONSE_RECEIVED:
@@ -4237,13 +4237,13 @@ smb2_readv_callback(struct mid_q_entry *mid)
cifs_stats_fail_inc(tcon, SMB2_READ_HE);
trace_smb3_read_err(0 /* xid */,
rdata->cfile->fid.persistent_fid,
- tcon->tid, tcon->ses->Suid, rdata->offset,
- rdata->bytes, rdata->result);
+ tcon->tid, tcon->ses->Suid, rdata->subreq.start,
+ rdata->subreq.len, rdata->result);
} else
trace_smb3_read_done(0 /* xid */,
rdata->cfile->fid.persistent_fid,
tcon->tid, tcon->ses->Suid,
- rdata->offset, rdata->got_bytes);
+ rdata->subreq.start, rdata->got_bytes);

queue_work(cifsiod_wq, &rdata->work);
release_mid(mid);
@@ -4265,16 +4265,16 @@ smb2_async_readv(struct cifs_io_subrequest *rdata)
unsigned int total_len;
int credit_request;

- cifs_dbg(FYI, "%s: offset=%llu bytes=%u\n",
- __func__, rdata->offset, rdata->bytes);
+ cifs_dbg(FYI, "%s: offset=%llu bytes=%zu\n",
+ __func__, rdata->subreq.start, rdata->subreq.len);

if (!rdata->server)
rdata->server = cifs_pick_channel(tcon->ses);

io_parms.tcon = tlink_tcon(rdata->cfile->tlink);
io_parms.server = server = rdata->server;
- io_parms.offset = rdata->offset;
- io_parms.length = rdata->bytes;
+ io_parms.offset = rdata->subreq.start;
+ io_parms.length = rdata->subreq.len;
io_parms.persistent_fid = rdata->cfile->fid.persistent_fid;
io_parms.volatile_fid = rdata->cfile->fid.volatile_fid;
io_parms.pid = rdata->pid;
@@ -4293,7 +4293,7 @@ smb2_async_readv(struct cifs_io_subrequest *rdata)
shdr = (struct smb2_hdr *)buf;

if (rdata->credits.value > 0) {
- shdr->CreditCharge = cpu_to_le16(DIV_ROUND_UP(rdata->bytes,
+ shdr->CreditCharge = cpu_to_le16(DIV_ROUND_UP(rdata->subreq.len,
SMB2_MAX_BUFFER_SIZE));
credit_request = le16_to_cpu(shdr->CreditCharge) + 8;
if (server->credits >= server->max_credits)
@@ -4303,7 +4303,7 @@ smb2_async_readv(struct cifs_io_subrequest *rdata)
min_t(int, server->max_credits -
server->credits, credit_request));

- rc = adjust_credits(server, &rdata->credits, rdata->bytes);
+ rc = adjust_credits(server, &rdata->credits, rdata->subreq.len);
if (rc)
goto async_readv_out;

@@ -4441,13 +4441,13 @@ smb2_writev_callback(struct mid_q_entry *mid)
* client. OS/2 servers are known to set incorrect
* CountHigh values.
*/
- if (written > wdata->bytes)
+ if (written > wdata->subreq.len)
written &= 0xFFFF;

- if (written < wdata->bytes)
+ if (written < wdata->subreq.len)
wdata->result = -ENOSPC;
else
- wdata->bytes = written;
+ wdata->subreq.len = written;
break;
case MID_REQUEST_SUBMITTED:
case MID_RETRY_NEEDED:
@@ -4478,8 +4478,8 @@ smb2_writev_callback(struct mid_q_entry *mid)
cifs_stats_fail_inc(tcon, SMB2_WRITE_HE);
trace_smb3_write_err(0 /* no xid */,
wdata->cfile->fid.persistent_fid,
- tcon->tid, tcon->ses->Suid, wdata->offset,
- wdata->bytes, wdata->result);
+ tcon->tid, tcon->ses->Suid, wdata->subreq.start,
+ wdata->subreq.len, wdata->result);
if (wdata->result == -ENOSPC)
pr_warn_once("Out of space writing to %s\n",
tcon->tree_name);
@@ -4487,7 +4487,7 @@ smb2_writev_callback(struct mid_q_entry *mid)
trace_smb3_write_done(0 /* no xid */,
wdata->cfile->fid.persistent_fid,
tcon->tid, tcon->ses->Suid,
- wdata->offset, wdata->bytes);
+ wdata->subreq.start, wdata->subreq.len);

queue_work(cifsiod_wq, &wdata->work);
release_mid(mid);
@@ -4520,8 +4520,8 @@ smb2_async_writev(struct cifs_io_subrequest *wdata)
_io_parms = (struct cifs_io_parms) {
.tcon = tcon,
.server = server,
- .offset = wdata->offset,
- .length = wdata->bytes,
+ .offset = wdata->subreq.start,
+ .length = wdata->subreq.len,
.persistent_fid = wdata->cfile->fid.persistent_fid,
.volatile_fid = wdata->cfile->fid.volatile_fid,
.pid = wdata->pid,
@@ -4563,10 +4563,10 @@ smb2_async_writev(struct cifs_io_subrequest *wdata)
*/
if (smb3_use_rdma_offload(io_parms)) {
struct smbd_buffer_descriptor_v1 *v1;
- size_t data_size = iov_iter_count(&wdata->iter);
+ size_t data_size = iov_iter_count(&wdata->subreq.io_iter);
bool need_invalidate = server->dialect == SMB30_PROT_ID;

- wdata->mr = smbd_register_mr(server->smbd_conn, &wdata->iter,
+ wdata->mr = smbd_register_mr(server->smbd_conn, &wdata->subreq.io_iter,
false, need_invalidate);
if (!wdata->mr) {
rc = -EAGAIN;
@@ -4593,7 +4593,7 @@ smb2_async_writev(struct cifs_io_subrequest *wdata)

rqst.rq_iov = iov;
rqst.rq_nvec = 1;
- rqst.rq_iter = wdata->iter;
+ rqst.rq_iter = wdata->subreq.io_iter;
rqst.rq_iter_size = iov_iter_count(&rqst.rq_iter);
#ifdef CONFIG_CIFS_SMB_DIRECT
if (wdata->mr)
@@ -4611,7 +4611,7 @@ smb2_async_writev(struct cifs_io_subrequest *wdata)
#endif

if (wdata->credits.value > 0) {
- shdr->CreditCharge = cpu_to_le16(DIV_ROUND_UP(wdata->bytes,
+ shdr->CreditCharge = cpu_to_le16(DIV_ROUND_UP(wdata->subreq.len,
SMB2_MAX_BUFFER_SIZE));
credit_request = le16_to_cpu(shdr->CreditCharge) + 8;
if (server->credits >= server->max_credits)
diff --git a/fs/smb/client/transport.c b/fs/smb/client/transport.c
index 16d87867ef50..c52b9bc10242 100644
--- a/fs/smb/client/transport.c
+++ b/fs/smb/client/transport.c
@@ -1701,8 +1701,8 @@ cifs_readv_receive(struct TCP_Server_Info *server, struct mid_q_entry *mid)
unsigned int buflen = server->pdu_size + HEADER_PREAMBLE_SIZE(server);
bool use_rdma_mr = false;

- cifs_dbg(FYI, "%s: mid=%llu offset=%llu bytes=%u\n",
- __func__, mid->mid, rdata->offset, rdata->bytes);
+ cifs_dbg(FYI, "%s: mid=%llu offset=%llu bytes=%zu\n",
+ __func__, mid->mid, rdata->subreq.start, rdata->subreq.len);

/*
* read the rest of READ_RSP header (sans Data array), or whatever we
@@ -1807,7 +1807,7 @@ cifs_readv_receive(struct TCP_Server_Info *server, struct mid_q_entry *mid)
length = data_len; /* An RDMA read is already done. */
else
#endif
- length = cifs_read_iter_from_socket(server, &rdata->iter,
+ length = cifs_read_iter_from_socket(server, &rdata->subreq.io_iter,
data_len);
if (length > 0)
rdata->got_bytes += length;

2023-10-13 16:16:20

by David Howells

[permalink] [raw]
Subject: [RFC PATCH 38/53] netfs: Support encryption on Unbuffered/DIO write

Support unbuffered and direct I/O writes to an encrypted file. This may
require performing a read-modify-write (RMW) cycle if the write is not
appropriately aligned with respect to the crypto block size.
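
As a standalone sketch of the alignment arithmetic (the helper name and
the example numbers are illustrative only): with a 4KiB crypto block
(min_bshift of 12), a 100-byte write at file position 4196 yields
gap_before = 100 and gap_after = 3896, so the enclosing block must be
read, modified and rewritten:

static bool write_needs_rmw(unsigned long long start, size_t count,
			    unsigned int min_bshift)
{
	unsigned long long end = start + count;
	size_t min_bsize  = 1UL << min_bshift;
	size_t bmask      = min_bsize - 1;
	size_t gap_before = start & bmask;	/* block start to write pos */
	size_t gap_after  = (min_bsize - end) & bmask; /* write end to block end */

	return gap_before || gap_after;
}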

Signed-off-by: David Howells <[email protected]>
cc: Jeff Layton <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
---
fs/netfs/direct_read.c | 2 +-
fs/netfs/direct_write.c | 210 ++++++++++++++++++++++++++++++++++-
fs/netfs/internal.h | 8 ++
fs/netfs/io.c | 117 +++++++++++++++++++
fs/netfs/main.c | 1 +
include/linux/netfs.h | 4 +
include/trace/events/netfs.h | 1 +
7 files changed, 337 insertions(+), 6 deletions(-)

diff --git a/fs/netfs/direct_read.c b/fs/netfs/direct_read.c
index 158719b56900..c01cbe42db8a 100644
--- a/fs/netfs/direct_read.c
+++ b/fs/netfs/direct_read.c
@@ -88,7 +88,7 @@ static int netfs_copy_xarray_to_iter(struct netfs_io_request *rreq,
* If we did a direct read to a bounce buffer (say we needed to decrypt it),
* copy the data obtained to the destination iterator.
*/
-static int netfs_dio_copy_bounce_to_dest(struct netfs_io_request *rreq)
+int netfs_dio_copy_bounce_to_dest(struct netfs_io_request *rreq)
{
struct iov_iter *dest_iter = &rreq->iter;
struct kiocb *iocb = rreq->iocb;
diff --git a/fs/netfs/direct_write.c b/fs/netfs/direct_write.c
index b1a4921ac4a2..f9dea801d6dd 100644
--- a/fs/netfs/direct_write.c
+++ b/fs/netfs/direct_write.c
@@ -23,6 +23,100 @@ static void netfs_cleanup_dio_write(struct netfs_io_request *wreq)
}
}

+/*
+ * Allocate a bunch of pages and add them into the xarray buffer starting at
+ * the given index.
+ */
+static int netfs_alloc_buffer(struct xarray *xa, pgoff_t index, unsigned int nr_pages)
+{
+ struct page *page;
+ unsigned int n;
+ int ret = 0;
+ LIST_HEAD(list);
+
+ n = alloc_pages_bulk_list(GFP_NOIO, nr_pages, &list);
+ if (n < nr_pages)
+ ret = -ENOMEM; /* Free any pages we did get below */
+
+ while (ret == 0 &&
+        (page = list_first_entry_or_null(&list, struct page, lru))) {
+ list_del(&page->lru);
+ page->index = index;
+ ret = xa_insert(xa, index++, page, GFP_NOIO);
+ if (ret < 0)
+ __free_page(page);
+ }
+
+ while ((page = list_first_entry_or_null(&list, struct page, lru))) {
+ list_del(&page->lru);
+ __free_page(page);
+ }
+ return ret;
+}
+
+/*
+ * Copy all of the data from the source iterator into folios in the destination
+ * xarray. We cannot step through and kmap the source iterator if it's an
+ * iovec, so we have to step through the xarray and drop the RCU lock each
+ * time.
+ */
+static int netfs_copy_iter_to_xarray(struct iov_iter *src, struct xarray *xa,
+ unsigned long long start)
+{
+ struct folio *folio;
+ void *base;
+ pgoff_t index = start / PAGE_SIZE;
+ size_t len, copied, count = iov_iter_count(src);
+
+ XA_STATE(xas, xa, index);
+
+ _enter("%zx", count);
+
+ if (!count)
+ return -EIO;
+
+ len = PAGE_SIZE - offset_in_page(start);
+ rcu_read_lock();
+ xas_for_each(&xas, folio, ULONG_MAX) {
+ size_t offset;
+
+ if (xas_retry(&xas, folio))
+ continue;
+
+ /* There shouldn't be a need to call xas_pause() as no one else
+ * can see the xarray we're iterating over.
+ */
+ rcu_read_unlock();
+
+ offset = offset_in_folio(folio, start);
+ _debug("folio %lx +%zx [%llx]", folio->index, offset, start);
+
+ while (offset < folio_size(folio)) {
+ len = min(count, len);
+
+ base = kmap_local_folio(folio, offset);
+ copied = copy_from_iter(base, len, src);
+ kunmap_local(base);
+ if (copied != len)
+ goto out;
+ count -= len;
+ if (count == 0)
+ goto out;
+
+ start += len;
+ offset += len;
+ len = PAGE_SIZE;
+ }
+
+ rcu_read_lock();
+ }
+
+ rcu_read_unlock();
+out:
+ _leave(" = %zx", count);
+ return count ? -EIO : 0;
+}
+
/*
* Perform an unbuffered write where we may have to do an RMW operation on an
* encrypted file. This can also be used for direct I/O writes.
@@ -31,20 +125,47 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *
struct netfs_group *netfs_group)
{
struct netfs_io_request *wreq;
+ struct netfs_inode *ctx = netfs_inode(file_inode(iocb->ki_filp));
+ unsigned long long real_size = ctx->remote_i_size;
unsigned long long start = iocb->ki_pos;
unsigned long long end = start + iov_iter_count(iter);
ssize_t ret, n;
- bool async = !is_sync_kiocb(iocb);
+ size_t min_bsize = 1UL << ctx->min_bshift;
+ size_t bmask = min_bsize - 1;
+ size_t gap_before = start & bmask;
+ size_t gap_after = (min_bsize - end) & bmask;
+ bool use_bounce, async = !is_sync_kiocb(iocb);
+ enum {
+ DIRECT_IO, COPY_TO_BOUNCE, ENC_TO_BOUNCE, COPY_THEN_ENC,
+ } buffering;

_enter("");

+ /* The real size must be rounded out to the crypto block size plus
+ * any trailer we might want to attach.
+ */
+ if (real_size && ctx->crypto_bshift) {
+ size_t cmask = (1UL << ctx->crypto_bshift) - 1;
+
+ if (real_size < ctx->crypto_trailer)
+ return -EIO;
+ if ((real_size - ctx->crypto_trailer) & cmask)
+ return -EIO;
+ real_size -= ctx->crypto_trailer;
+ }
+
/* We're going to need a bounce buffer if what we transmit is going to
* be different in some way to the source buffer, e.g. because it gets
* encrypted/compressed or because it needs expanding to a block size.
*/
- // TODO
+ use_bounce = test_bit(NETFS_ICTX_ENCRYPTED, &ctx->flags);
+ if (gap_before || gap_after) {
+ if (iocb->ki_flags & IOCB_DIRECT)
+ return -EINVAL;
+ use_bounce = true;
+ }

- _debug("uw %llx-%llx", start, end);
+ _debug("uw %llx-%llx +%zx,%zx", start, end, gap_before, gap_after);

wreq = netfs_alloc_request(iocb->ki_filp->f_mapping, iocb->ki_filp,
start, end - start,
@@ -53,7 +174,57 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *
if (IS_ERR(wreq))
return PTR_ERR(wreq);

- {
+ if (use_bounce) {
+ unsigned long long bstart = start - gap_before;
+ unsigned long long bend = end + gap_after;
+ pgoff_t first = bstart / PAGE_SIZE;
+ pgoff_t last = (bend - 1) / PAGE_SIZE;
+
+ _debug("bounce %llx-%llx %lx-%lx", bstart, bend, first, last);
+
+ ret = netfs_alloc_buffer(&wreq->bounce, first, last - first + 1);
+ if (ret < 0)
+ goto out;
+
+ iov_iter_xarray(&wreq->io_iter, READ, &wreq->bounce,
+ bstart, bend - bstart);
+
+ if (gap_before || gap_after)
+ async = false; /* We may have to repeat the RMW cycle */
+ }
+
+repeat_rmw_cycle:
+ if (use_bounce) {
+ /* If we're going to need to do an RMW cycle, fill in the gaps
+ * at the ends of the buffer.
+ */
+ if (gap_before || gap_after) {
+ struct iov_iter buffer = wreq->io_iter;
+
+ if ((gap_before && start - gap_before < real_size) ||
+ (gap_after && end < real_size)) {
+ ret = netfs_rmw_read(wreq, iocb->ki_filp,
+ start - gap_before, gap_before,
+ end, end < real_size ? gap_after : 0);
+ if (ret < 0)
+ goto out;
+ }
+
+ if (gap_before && start - gap_before >= real_size)
+ iov_iter_zero(gap_before, &buffer);
+ if (gap_after && end >= real_size) {
+ iov_iter_advance(&buffer, end - start);
+ iov_iter_zero(gap_after, &buffer);
+ }
+ }
+
+ if (!test_bit(NETFS_RREQ_CONTENT_ENCRYPTION, &wreq->flags))
+ buffering = COPY_TO_BOUNCE;
+ else if (!gap_before && !gap_after && netfs_is_crypto_aligned(wreq, iter))
+ buffering = ENC_TO_BOUNCE;
+ else
+ buffering = COPY_THEN_ENC;
+ } else {
/* If this is an async op and we're not using a bounce buffer,
* we have to save the source buffer as the iterator is only
* good until we return. In such a case, extract an iterator
@@ -77,10 +248,25 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *
}

wreq->io_iter = wreq->iter;
+ buffering = DIRECT_IO;
}

/* Copy the data into the bounce buffer and encrypt it. */
- // TODO
+ if (buffering == COPY_TO_BOUNCE ||
+ buffering == COPY_THEN_ENC) {
+ ret = netfs_copy_iter_to_xarray(iter, &wreq->bounce, wreq->start);
+ if (ret < 0)
+ goto out;
+ wreq->iter = wreq->io_iter;
+ wreq->start -= gap_before;
+ wreq->len += gap_before + gap_after;
+ }
+
+ if (buffering == COPY_THEN_ENC ||
+ buffering == ENC_TO_BOUNCE) {
+ if (!netfs_encrypt(wreq)) {
+ /* netfs_encrypt() records the error on the request. */
+ ret = wreq->error;
+ goto out;
+ }
+ }

/* Dispatch the write. */
__set_bit(NETFS_RREQ_UPLOAD_TO_SERVER, &wreq->flags);
@@ -101,6 +287,20 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *
wait_on_bit(&wreq->flags, NETFS_RREQ_IN_PROGRESS,
TASK_UNINTERRUPTIBLE);

+ /* See if the write failed due to a 3rd party race when doing
+ * an RMW on a partially modified block in an encrypted file.
+ */
+ if (test_and_clear_bit(NETFS_RREQ_REPEAT_RMW, &wreq->flags)) {
+ netfs_clear_subrequests(wreq, false);
+ iov_iter_revert(iter, end - start);
+ wreq->error = 0;
+ wreq->start = start;
+ wreq->len = end - start;
+ wreq->transferred = 0;
+ wreq->submitted = 0;
+ goto repeat_rmw_cycle;
+ }
+
ret = wreq->error;
_debug("waited = %zd", ret);
if (ret == 0) {
diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h
index 7dd37d3aff3f..a25adbe7ec72 100644
--- a/fs/netfs/internal.h
+++ b/fs/netfs/internal.h
@@ -28,6 +28,11 @@ int netfs_prefetch_for_write(struct file *file, struct folio *folio,
bool netfs_encrypt(struct netfs_io_request *wreq);
void netfs_decrypt(struct netfs_io_request *rreq);

+/*
+ * direct_read.c
+ */
+int netfs_dio_copy_bounce_to_dest(struct netfs_io_request *rreq);
+
/*
* direct_write.c
*/
@@ -38,6 +43,9 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *
* io.c
*/
int netfs_begin_read(struct netfs_io_request *rreq, bool sync);
+ssize_t netfs_rmw_read(struct netfs_io_request *wreq, struct file *file,
+ unsigned long long start1, size_t len1,
+ unsigned long long start2, size_t len2);

/*
* main.c
diff --git a/fs/netfs/io.c b/fs/netfs/io.c
index 9887b22e4cb3..14a9f3312d3b 100644
--- a/fs/netfs/io.c
+++ b/fs/netfs/io.c
@@ -775,3 +775,120 @@ int netfs_begin_read(struct netfs_io_request *rreq, bool sync)
out:
return ret;
}
+
+static bool netfs_rmw_read_one(struct netfs_io_request *rreq,
+ unsigned long long start, size_t len)
+{
+ struct netfs_inode *ctx = netfs_inode(rreq->inode);
+ struct iov_iter io_iter;
+ unsigned long long pstart, end = start + len;
+ pgoff_t first, last;
+ ssize_t ret;
+ size_t min_bsize = 1UL << ctx->min_bshift;
+
+ /* Determine the block we need to load and expand the read to cover it. */
+ end = round_up(end, min_bsize);
+ start = round_down(start, min_bsize);
+ len = end - start;
+
+ /* Determine the folios we need to insert. */
+ pstart = round_down(start, PAGE_SIZE);
+ first = pstart / PAGE_SIZE;
+ last = (end - 1) / PAGE_SIZE;
+
+ ret = netfs_add_folios_to_buffer(&rreq->bounce, rreq->mapping,
+ first, last, GFP_NOFS);
+ if (ret < 0) {
+ rreq->error = ret;
+ return false;
+ }
+
+ rreq->start = start;
+ rreq->len = len;
+ rreq->submitted = 0;
+ iov_iter_xarray(&rreq->io_iter, ITER_DEST, &rreq->bounce, start, len);
+
+ io_iter = rreq->io_iter;
+ do {
+ _debug("submit %llx + %zx >= %llx",
+ rreq->start, rreq->submitted, rreq->i_size);
+ if (rreq->start + rreq->submitted >= rreq->i_size)
+ break;
+ if (!netfs_rreq_submit_slice(rreq, &io_iter, &rreq->subreq_counter))
+ break;
+ } while (rreq->submitted < rreq->len);
+
+ if (rreq->submitted < rreq->len) {
+ netfs_put_request(rreq, false, netfs_rreq_trace_put_no_submit);
+ return false;
+ }
+
+ return true;
+}
+
+/*
+ * Begin the process of reading in one or two chunks of data for use by
+ * unbuffered write to perform an RMW cycle. We don't read directly into the
+ * write buffer as this may get called to redo the read in the case that a
+ * conditional write fails due to conflicting 3rd-party modifications.
+ */
+ssize_t netfs_rmw_read(struct netfs_io_request *wreq, struct file *file,
+ unsigned long long start1, size_t len1,
+ unsigned long long start2, size_t len2)
+{
+ struct netfs_io_request *rreq;
+ ssize_t ret;
+
+ rreq = netfs_alloc_request(wreq->mapping, file,
+ start1, start2 - start1 + len2, NETFS_RMW_READ);
+ if (IS_ERR(rreq))
+ return PTR_ERR(rreq);
+
+ /* Trace after allocation: rreq isn't valid until this point. */
+ _enter("RMW:R=%x %llx-%llx %llx-%llx",
+ rreq->debug_id, start1, start1 + len1 - 1, start2, start2 + len2 - 1);
+
+ INIT_WORK(&rreq->work, netfs_rreq_work);
+
+ rreq->iter = wreq->io_iter;
+ __set_bit(NETFS_RREQ_CRYPT_IN_PLACE, &rreq->flags);
+ __set_bit(NETFS_RREQ_USE_BOUNCE_BUFFER, &rreq->flags);
+
+ /* Chop the reads into slices according to what the netfs wants and
+ * submit each one.
+ */
+ netfs_get_request(rreq, netfs_rreq_trace_get_for_outstanding);
+ atomic_set(&rreq->nr_outstanding, 1);
+ if (len1 && !netfs_rmw_read_one(rreq, start1, len1))
+ goto wait;
+ if (len2)
+ netfs_rmw_read_one(rreq, start2, len2);
+
+wait:
+ /* Keep nr_outstanding incremented so that the ref always belongs to us
+ * and the service code isn't punted off to a random thread pool to
+ * process.
+ */
+ for (;;) {
+ wait_var_event(&rreq->nr_outstanding,
+ atomic_read(&rreq->nr_outstanding) == 1);
+ netfs_rreq_assess(rreq, false);
+ if (atomic_read(&rreq->nr_outstanding) == 1)
+ break;
+ cond_resched();
+ }
+
+ trace_netfs_rreq(rreq, netfs_rreq_trace_wait_ip);
+ wait_on_bit(&rreq->flags, NETFS_RREQ_IN_PROGRESS,
+ TASK_UNINTERRUPTIBLE);
+
+ ret = rreq->error;
+ if (ret == 0 && rreq->submitted < rreq->len) {
+ trace_netfs_failure(rreq, NULL, ret, netfs_fail_short_read);
+ ret = -EIO;
+ }
+
+ if (ret == 0)
+ ret = netfs_dio_copy_bounce_to_dest(rreq);
+
+ netfs_put_request(rreq, false, netfs_rreq_trace_put_return);
+ return ret;
+}
diff --git a/fs/netfs/main.c b/fs/netfs/main.c
index 1cf10f9c4c1f..b335e6a50f9c 100644
--- a/fs/netfs/main.c
+++ b/fs/netfs/main.c
@@ -33,6 +33,7 @@ static const char *netfs_origins[nr__netfs_io_origin] = {
[NETFS_READPAGE] = "RP",
[NETFS_READ_FOR_WRITE] = "RW",
[NETFS_WRITEBACK] = "WB",
+ [NETFS_RMW_READ] = "RM",
[NETFS_UNBUFFERED_WRITE] = "UW",
[NETFS_DIO_READ] = "DR",
[NETFS_DIO_WRITE] = "DW",
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index 524e6f5ff3fd..9661ae24120f 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -145,6 +145,7 @@ struct netfs_inode {
#define NETFS_ICTX_ENCRYPTED 2 /* The file contents are encrypted */
unsigned char min_bshift; /* log2 min block size for bounding box or 0 */
unsigned char crypto_bshift; /* log2 of crypto block size */
+ unsigned char crypto_trailer; /* Size of crypto trailer */
};

/*
@@ -233,6 +234,7 @@ enum netfs_io_origin {
NETFS_READPAGE, /* This read is a synchronous read */
NETFS_READ_FOR_WRITE, /* This read is to prepare a write */
NETFS_WRITEBACK, /* This write was triggered by writepages */
+ NETFS_RMW_READ, /* This is an unbuffered read for RMW */
NETFS_UNBUFFERED_WRITE, /* This is an unbuffered write */
NETFS_DIO_READ, /* This is a direct I/O read */
NETFS_DIO_WRITE, /* This is a direct I/O write */
@@ -290,6 +292,7 @@ struct netfs_io_request {
#define NETFS_RREQ_UPLOAD_TO_SERVER 10 /* Need to write to the server */
#define NETFS_RREQ_CONTENT_ENCRYPTION 11 /* Content encryption is in use */
#define NETFS_RREQ_CRYPT_IN_PLACE 12 /* Enc/dec in place in ->io_iter */
+#define NETFS_RREQ_REPEAT_RMW 13 /* Need to repeat RMW cycle */
const struct netfs_request_ops *netfs_ops;
void (*cleanup)(struct netfs_io_request *req);
};
@@ -478,6 +481,7 @@ static inline void netfs_inode_init(struct netfs_inode *ctx,
ctx->flags = 0;
ctx->min_bshift = 0;
ctx->crypto_bshift = 0;
+ ctx->crypto_trailer = 0;
#if IS_ENABLED(CONFIG_FSCACHE)
ctx->cache = NULL;
#endif
diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
index 2f35057602fa..825946f510ee 100644
--- a/include/trace/events/netfs.h
+++ b/include/trace/events/netfs.h
@@ -33,6 +33,7 @@
EM(NETFS_READPAGE, "RP") \
EM(NETFS_READ_FOR_WRITE, "RW") \
EM(NETFS_WRITEBACK, "WB") \
+ EM(NETFS_RMW_READ, "RM") \
EM(NETFS_UNBUFFERED_WRITE, "UW") \
EM(NETFS_DIO_READ, "DR") \
E_(NETFS_DIO_WRITE, "DW")
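
To make the gap arithmetic in netfs_unbuffered_write_iter_locked() concrete,
here is a worked example (the numbers are invented for illustration and are
not taken from the patch):

	/* With min_bshift = 9 (512-byte blocks), a write of bytes
	 * 700..1299 (ki_pos = 700, iov_iter_count() = 600) gives:
	 *
	 *	min_bsize  = 1 << 9             = 512
	 *	bmask      = min_bsize - 1      = 511
	 *	gap_before = 700 & 511          = 188  (RMW bytes 512..699)
	 *	end        = 700 + 600          = 1300
	 *	gap_after  = (512 - 1300) & 511 = 236  (RMW bytes 1300..1535)
	 *
	 * The bounce buffer thus spans 512..1535, two whole blocks, and
	 * netfs_rmw_read() fills the 188- and 236-byte gaps (or they are
	 * zeroed if they lie beyond the remote EOF) before the modified
	 * bytes are copied in and encrypted.
	 */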

2023-10-13 16:23:23

by David Howells

[permalink] [raw]
Subject: [RFC PATCH 42/53] afs: Use the netfs write helpers

Make afs use the netfs write helpers.
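
For orientation, here is a minimal sketch of the write hook a filesystem now
supplies, modeled on the afs conversion below (the myfs_* names are
placeholders, not part of the patch):

	static void myfs_upload_worker(struct work_struct *work)
	{
		struct netfs_io_subrequest *subreq =
			container_of(work, struct netfs_io_subrequest, work);

		/* Transmit subreq->io_iter to the server, then report the
		 * amount transferred or an error code.
		 */
		netfs_write_subrequest_terminated(subreq, subreq->len, false);
	}

	static void myfs_create_write_requests(struct netfs_io_request *wreq,
					       loff_t start, size_t len)
	{
		struct netfs_io_subrequest *subreq;

		subreq = netfs_create_write_request(wreq, NETFS_UPLOAD_TO_SERVER,
						    start, len,
						    myfs_upload_worker);
		if (subreq)
			netfs_queue_write_request(subreq);
	}

The hook is plugged into netfs_request_ops as ->create_write_requests(), as
the afs_req_ops change below shows.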

Signed-off-by: David Howells <[email protected]>
cc: Marc Dionne <[email protected]>
cc: Jeff Layton <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
---
fs/afs/file.c | 65 +++-
fs/afs/internal.h | 10 +-
fs/afs/write.c | 704 ++-----------------------------------
include/trace/events/afs.h | 23 --
4 files changed, 81 insertions(+), 721 deletions(-)

diff --git a/fs/afs/file.c b/fs/afs/file.c
index 5bb78d874292..586a573b1a9b 100644
--- a/fs/afs/file.c
+++ b/fs/afs/file.c
@@ -34,7 +34,7 @@ const struct file_operations afs_file_operations = {
.release = afs_release,
.llseek = generic_file_llseek,
.read_iter = afs_file_read_iter,
- .write_iter = afs_file_write,
+ .write_iter = netfs_file_write_iter,
.mmap = afs_file_mmap,
.splice_read = afs_file_splice_read,
.splice_write = iter_file_splice_write,
@@ -50,16 +50,15 @@ const struct inode_operations afs_file_inode_operations = {
};

const struct address_space_operations afs_file_aops = {
+ .direct_IO = noop_direct_IO,
.read_folio = netfs_read_folio,
.readahead = netfs_readahead,
.dirty_folio = afs_dirty_folio,
- .launder_folio = afs_launder_folio,
+ .launder_folio = netfs_launder_folio,
.release_folio = netfs_release_folio,
.invalidate_folio = netfs_invalidate_folio,
- .write_begin = afs_write_begin,
- .write_end = afs_write_end,
- .writepages = afs_writepages,
.migrate_folio = filemap_migrate_folio,
+ .writepages = afs_writepages,
};

const struct address_space_operations afs_symlink_aops = {
@@ -355,8 +354,10 @@ static int afs_symlink_read_folio(struct file *file, struct folio *folio)

static int afs_init_request(struct netfs_io_request *rreq, struct file *file)
{
- rreq->netfs_priv = key_get(afs_file_key(file));
+ if (file)
+ rreq->netfs_priv = key_get(afs_file_key(file));
rreq->rsize = 4 * 1024 * 1024;
+ rreq->wsize = 16 * 1024;
return 0;
}

@@ -373,12 +374,37 @@ static void afs_free_request(struct netfs_io_request *rreq)
key_put(rreq->netfs_priv);
}

+static void afs_update_i_size(struct inode *inode, loff_t new_i_size)
+{
+ struct afs_vnode *vnode = AFS_FS_I(inode);
+ loff_t i_size;
+
+ write_seqlock(&vnode->cb_lock);
+ i_size = i_size_read(&vnode->netfs.inode);
+ if (new_i_size > i_size) {
+ i_size_write(&vnode->netfs.inode, new_i_size);
+ inode_set_bytes(&vnode->netfs.inode, new_i_size);
+ }
+ write_sequnlock(&vnode->cb_lock);
+ fscache_update_cookie(afs_vnode_cache(vnode), NULL, &new_i_size);
+}
+
+static void afs_netfs_invalidate_cache(struct netfs_io_request *wreq)
+{
+ struct afs_vnode *vnode = AFS_FS_I(wreq->inode);
+
+ afs_invalidate_cache(vnode, 0);
+}
+
const struct netfs_request_ops afs_req_ops = {
.init_request = afs_init_request,
.free_request = afs_free_request,
.begin_cache_operation = fscache_begin_cache_operation,
.check_write_begin = afs_check_write_begin,
.issue_read = afs_issue_read,
+ .update_i_size = afs_update_i_size,
+ .invalidate_cache = afs_netfs_invalidate_cache,
+ .create_write_requests = afs_create_write_requests,
};

int afs_write_inode(struct inode *inode, struct writeback_control *wbc)
@@ -453,28 +479,39 @@ static vm_fault_t afs_vm_map_pages(struct vm_fault *vmf, pgoff_t start_pgoff, pg

static ssize_t afs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
{
- struct afs_vnode *vnode = AFS_FS_I(file_inode(iocb->ki_filp));
+ struct inode *inode = file_inode(iocb->ki_filp);
+ struct afs_vnode *vnode = AFS_FS_I(inode);
struct afs_file *af = iocb->ki_filp->private_data;
int ret;

- ret = afs_validate(vnode, af->key);
+ if (iocb->ki_flags & IOCB_DIRECT)
+ return netfs_unbuffered_read_iter(iocb, iter);
+
+ ret = netfs_start_io_read(inode);
if (ret < 0)
return ret;
-
- return generic_file_read_iter(iocb, iter);
+ ret = afs_validate(vnode, af->key);
+ if (ret == 0)
+ ret = netfs_file_read_iter(iocb, iter);
+ netfs_end_io_read(inode);
+ return ret;
}

static ssize_t afs_file_splice_read(struct file *in, loff_t *ppos,
struct pipe_inode_info *pipe,
size_t len, unsigned int flags)
{
- struct afs_vnode *vnode = AFS_FS_I(file_inode(in));
+ struct inode *inode = file_inode(in);
+ struct afs_vnode *vnode = AFS_FS_I(inode);
struct afs_file *af = in->private_data;
int ret;

- ret = afs_validate(vnode, af->key);
+ ret = netfs_start_io_read(inode);
if (ret < 0)
return ret;
-
- return filemap_splice_read(in, ppos, pipe, len, flags);
+ ret = afs_validate(vnode, af->key);
+ if (ret == 0)
+ ret = filemap_splice_read(in, ppos, pipe, len, flags);
+ netfs_end_io_read(inode);
+ return ret;
}
diff --git a/fs/afs/internal.h b/fs/afs/internal.h
index 03fed7ecfab9..da5de62b5f9c 100644
--- a/fs/afs/internal.h
+++ b/fs/afs/internal.h
@@ -1468,19 +1468,11 @@ bool afs_dirty_folio(struct address_space *, struct folio *);
#else
#define afs_dirty_folio filemap_dirty_folio
#endif
-extern int afs_write_begin(struct file *file, struct address_space *mapping,
- loff_t pos, unsigned len,
- struct page **pagep, void **fsdata);
-extern int afs_write_end(struct file *file, struct address_space *mapping,
- loff_t pos, unsigned len, unsigned copied,
- struct page *page, void *fsdata);
-extern int afs_writepage(struct page *, struct writeback_control *);
extern int afs_writepages(struct address_space *, struct writeback_control *);
-extern ssize_t afs_file_write(struct kiocb *, struct iov_iter *);
extern int afs_fsync(struct file *, loff_t, loff_t, int);
extern vm_fault_t afs_page_mkwrite(struct vm_fault *vmf);
extern void afs_prune_wb_keys(struct afs_vnode *);
-int afs_launder_folio(struct folio *);
+void afs_create_write_requests(struct netfs_io_request *wreq, loff_t start, size_t len);

/*
* xattr.c
diff --git a/fs/afs/write.c b/fs/afs/write.c
index cdb1391ec46e..748d5954f0ee 100644
--- a/fs/afs/write.c
+++ b/fs/afs/write.c
@@ -12,17 +12,9 @@
#include <linux/writeback.h>
#include <linux/pagevec.h>
#include <linux/netfs.h>
+#include <trace/events/netfs.h>
#include "internal.h"

-static int afs_writepages_region(struct address_space *mapping,
- struct writeback_control *wbc,
- unsigned long long start,
- unsigned long long end, loff_t *_next,
- bool max_one_loop);
-
-static void afs_write_to_cache(struct afs_vnode *vnode, loff_t start, size_t len,
- loff_t i_size, bool caching);
-
#ifdef CONFIG_AFS_FSCACHE
/*
* Mark a page as having been made dirty and thus needing writeback. We also
@@ -33,216 +25,16 @@ bool afs_dirty_folio(struct address_space *mapping, struct folio *folio)
return fscache_dirty_folio(mapping, folio,
afs_vnode_cache(AFS_FS_I(mapping->host)));
}
-static void afs_folio_start_fscache(bool caching, struct folio *folio)
-{
- if (caching)
- folio_start_fscache(folio);
-}
-#else
-static void afs_folio_start_fscache(bool caching, struct folio *folio)
-{
-}
#endif

-/*
- * prepare to perform part of a write to a page
- */
-int afs_write_begin(struct file *file, struct address_space *mapping,
- loff_t pos, unsigned len,
- struct page **_page, void **fsdata)
-{
- struct afs_vnode *vnode = AFS_FS_I(file_inode(file));
- struct folio *folio;
- int ret;
-
- _enter("{%llx:%llu},%llx,%x",
- vnode->fid.vid, vnode->fid.vnode, pos, len);
-
- /* Prefetch area to be written into the cache if we're caching this
- * file. We need to do this before we get a lock on the page in case
- * there's more than one writer competing for the same cache block.
- */
- ret = netfs_write_begin(&vnode->netfs, file, mapping, pos, len, &folio, fsdata);
- if (ret < 0)
- return ret;
-
-try_again:
- /* See if this page is already partially written in a way that we can
- * merge the new write with.
- */
- if (folio_test_writeback(folio)) {
- trace_afs_folio_dirty(vnode, tracepoint_string("alrdy"), folio);
- folio_unlock(folio);
- goto wait_for_writeback;
- }
-
- *_page = folio_file_page(folio, pos / PAGE_SIZE);
- _leave(" = 0");
- return 0;
-
-wait_for_writeback:
- ret = folio_wait_writeback_killable(folio);
- if (ret < 0)
- goto error;
-
- ret = folio_lock_killable(folio);
- if (ret < 0)
- goto error;
- goto try_again;
-
-error:
- folio_put(folio);
- _leave(" = %d", ret);
- return ret;
-}
-
-/*
- * finalise part of a write to a page
- */
-int afs_write_end(struct file *file, struct address_space *mapping,
- loff_t pos, unsigned len, unsigned copied,
- struct page *subpage, void *fsdata)
-{
- struct folio *folio = page_folio(subpage);
- struct afs_vnode *vnode = AFS_FS_I(file_inode(file));
- loff_t i_size, write_end_pos;
-
- _enter("{%llx:%llu},{%lx}",
- vnode->fid.vid, vnode->fid.vnode, folio_index(folio));
-
- if (!folio_test_uptodate(folio)) {
- if (copied < len) {
- copied = 0;
- goto out;
- }
-
- folio_mark_uptodate(folio);
- }
-
- if (copied == 0)
- goto out;
-
- write_end_pos = pos + copied;
-
- i_size = i_size_read(&vnode->netfs.inode);
- if (write_end_pos > i_size) {
- write_seqlock(&vnode->cb_lock);
- i_size = i_size_read(&vnode->netfs.inode);
- if (write_end_pos > i_size)
- afs_set_i_size(vnode, write_end_pos);
- write_sequnlock(&vnode->cb_lock);
- fscache_update_cookie(afs_vnode_cache(vnode), NULL, &write_end_pos);
- }
-
- if (folio_mark_dirty(folio))
- _debug("dirtied %lx", folio_index(folio));
-
-out:
- folio_unlock(folio);
- folio_put(folio);
- return copied;
-}
-
-/*
- * kill all the pages in the given range
- */
-static void afs_kill_pages(struct address_space *mapping,
- loff_t start, loff_t len)
-{
- struct afs_vnode *vnode = AFS_FS_I(mapping->host);
- struct folio *folio;
- pgoff_t index = start / PAGE_SIZE;
- pgoff_t last = (start + len - 1) / PAGE_SIZE, next;
-
- _enter("{%llx:%llu},%llx @%llx",
- vnode->fid.vid, vnode->fid.vnode, len, start);
-
- do {
- _debug("kill %lx (to %lx)", index, last);
-
- folio = filemap_get_folio(mapping, index);
- if (IS_ERR(folio)) {
- next = index + 1;
- continue;
- }
-
- next = folio_next_index(folio);
-
- folio_clear_uptodate(folio);
- folio_end_writeback(folio);
- folio_lock(folio);
- generic_error_remove_page(mapping, &folio->page);
- folio_unlock(folio);
- folio_put(folio);
-
- } while (index = next, index <= last);
-
- _leave("");
-}
-
-/*
- * Redirty all the pages in a given range.
- */
-static void afs_redirty_pages(struct writeback_control *wbc,
- struct address_space *mapping,
- loff_t start, loff_t len)
-{
- struct afs_vnode *vnode = AFS_FS_I(mapping->host);
- struct folio *folio;
- pgoff_t index = start / PAGE_SIZE;
- pgoff_t last = (start + len - 1) / PAGE_SIZE, next;
-
- _enter("{%llx:%llu},%llx @%llx",
- vnode->fid.vid, vnode->fid.vnode, len, start);
-
- do {
- _debug("redirty %llx @%llx", len, start);
-
- folio = filemap_get_folio(mapping, index);
- if (IS_ERR(folio)) {
- next = index + 1;
- continue;
- }
-
- next = index + folio_nr_pages(folio);
- folio_redirty_for_writepage(wbc, folio);
- folio_end_writeback(folio);
- folio_put(folio);
- } while (index = next, index <= last);
-
- _leave("");
-}
-
/*
* completion of write to server
*/
static void afs_pages_written_back(struct afs_vnode *vnode, loff_t start, unsigned int len)
{
- struct address_space *mapping = vnode->netfs.inode.i_mapping;
- struct folio *folio;
- pgoff_t end;
-
- XA_STATE(xas, &mapping->i_pages, start / PAGE_SIZE);
-
_enter("{%llx:%llu},{%x @%llx}",
vnode->fid.vid, vnode->fid.vnode, len, start);

- rcu_read_lock();
-
- end = (start + len - 1) / PAGE_SIZE;
- xas_for_each(&xas, folio, end) {
- if (!folio_test_writeback(folio)) {
- kdebug("bad %x @%llx page %lx %lx",
- len, start, folio_index(folio), end);
- ASSERT(folio_test_writeback(folio));
- }
-
- trace_afs_folio_dirty(vnode, tracepoint_string("clear"), folio);
- folio_end_writeback(folio);
- }
-
- rcu_read_unlock();
-
afs_prune_wb_keys(vnode);
_leave("");
}
@@ -379,339 +171,53 @@ static int afs_store_data(struct afs_vnode *vnode, struct iov_iter *iter, loff_t
return afs_put_operation(op);
}

-/*
- * Extend the region to be written back to include subsequent contiguously
- * dirty pages if possible, but don't sleep while doing so.
- *
- * If this page holds new content, then we can include filler zeros in the
- * writeback.
- */
-static void afs_extend_writeback(struct address_space *mapping,
- struct afs_vnode *vnode,
- long *_count,
- loff_t start,
- loff_t max_len,
- bool caching,
- size_t *_len)
+static void afs_upload_to_server(struct netfs_io_subrequest *subreq)
{
- struct folio_batch fbatch;
- struct folio *folio;
- pgoff_t index = (start + *_len) / PAGE_SIZE;
- bool stop = true;
- unsigned int i;
-
- XA_STATE(xas, &mapping->i_pages, index);
- folio_batch_init(&fbatch);
-
- do {
- /* Firstly, we gather up a batch of contiguous dirty pages
- * under the RCU read lock - but we can't clear the dirty flags
- * there if any of those pages are mapped.
- */
- rcu_read_lock();
-
- xas_for_each(&xas, folio, ULONG_MAX) {
- stop = true;
- if (xas_retry(&xas, folio))
- continue;
- if (xa_is_value(folio))
- break;
- if (folio_index(folio) != index)
- break;
-
- if (!folio_try_get_rcu(folio)) {
- xas_reset(&xas);
- continue;
- }
-
- /* Has the folio moved or been split? */
- if (unlikely(folio != xas_reload(&xas))) {
- folio_put(folio);
- break;
- }
-
- if (!folio_trylock(folio)) {
- folio_put(folio);
- break;
- }
- if (!folio_test_dirty(folio) ||
- folio_test_writeback(folio) ||
- folio_test_fscache(folio)) {
- folio_unlock(folio);
- folio_put(folio);
- break;
- }
-
- index += folio_nr_pages(folio);
- *_count -= folio_nr_pages(folio);
- *_len += folio_size(folio);
- stop = false;
- if (*_len >= max_len || *_count <= 0)
- stop = true;
-
- if (!folio_batch_add(&fbatch, folio))
- break;
- if (stop)
- break;
- }
-
- if (!stop)
- xas_pause(&xas);
- rcu_read_unlock();
-
- /* Now, if we obtained any folios, we can shift them to being
- * writable and mark them for caching.
- */
- if (!folio_batch_count(&fbatch))
- break;
-
- for (i = 0; i < folio_batch_count(&fbatch); i++) {
- folio = fbatch.folios[i];
- trace_afs_folio_dirty(vnode, tracepoint_string("store+"), folio);
+ struct afs_vnode *vnode = AFS_FS_I(subreq->rreq->inode);
+ ssize_t ret;

- if (!folio_clear_dirty_for_io(folio))
- BUG();
- if (folio_start_writeback(folio))
- BUG();
- afs_folio_start_fscache(caching, folio);
- folio_unlock(folio);
- }
+ _enter("%x[%x],%zx",
+ subreq->rreq->debug_id, subreq->debug_index, subreq->io_iter.count);

- folio_batch_release(&fbatch);
- cond_resched();
- } while (!stop);
+ trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
+ ret = afs_store_data(vnode, &subreq->io_iter, subreq->start,
+ subreq->rreq->origin == NETFS_LAUNDER_WRITE);
+ netfs_write_subrequest_terminated(subreq, ret < 0 ? ret : subreq->len,
+ false);
}

-/*
- * Synchronously write back the locked page and any subsequent non-locked dirty
- * pages.
- */
-static ssize_t afs_write_back_from_locked_folio(struct address_space *mapping,
- struct writeback_control *wbc,
- struct folio *folio,
- unsigned long long start,
- unsigned long long end)
+static void afs_upload_to_server_worker(struct work_struct *work)
{
- struct afs_vnode *vnode = AFS_FS_I(mapping->host);
- struct iov_iter iter;
- unsigned long long i_size = i_size_read(&vnode->netfs.inode);
- size_t len, max_len;
- bool caching = fscache_cookie_enabled(afs_vnode_cache(vnode));
- long count = wbc->nr_to_write;
- int ret;
-
- _enter(",%lx,%llx-%llx", folio_index(folio), start, end);
-
- if (folio_start_writeback(folio))
- BUG();
- afs_folio_start_fscache(caching, folio);
-
- count -= folio_nr_pages(folio);
-
- /* Find all consecutive lockable dirty pages that have contiguous
- * written regions, stopping when we find a page that is not
- * immediately lockable, is not dirty or is missing, or we reach the
- * end of the range.
- */
- trace_afs_folio_dirty(vnode, tracepoint_string("store"), folio);
-
- len = folio_size(folio);
- if (start < i_size) {
- /* Trim the write to the EOF; the extra data is ignored. Also
- * put an upper limit on the size of a single storedata op.
- */
- max_len = 65536 * 4096;
- max_len = min_t(unsigned long long, max_len, end - start + 1);
- max_len = min_t(unsigned long long, max_len, i_size - start);
-
- if (len < max_len)
- afs_extend_writeback(mapping, vnode, &count,
- start, max_len, caching, &len);
- len = min_t(unsigned long long, len, i_size - start);
- }
-
- /* We now have a contiguous set of dirty pages, each with writeback
- * set; the first page is still locked at this point, but all the rest
- * have been unlocked.
- */
- folio_unlock(folio);
+ struct netfs_io_subrequest *subreq =
+ container_of(work, struct netfs_io_subrequest, work);

- if (start < i_size) {
- _debug("write back %zx @%llx [%llx]", len, start, i_size);
-
- /* Speculatively write to the cache. We have to fix this up
- * later if the store fails.
- */
- afs_write_to_cache(vnode, start, len, i_size, caching);
-
- iov_iter_xarray(&iter, ITER_SOURCE, &mapping->i_pages, start, len);
- ret = afs_store_data(vnode, &iter, start, false);
- } else {
- _debug("write discard %zx @%llx [%llx]", len, start, i_size);
-
- /* The dirty region was entirely beyond the EOF. */
- fscache_clear_page_bits(mapping, start, len, caching);
- afs_pages_written_back(vnode, start, len);
- ret = 0;
- }
-
- switch (ret) {
- case 0:
- wbc->nr_to_write = count;
- ret = len;
- break;
-
- default:
- pr_notice("kAFS: Unexpected error from FS.StoreData %d\n", ret);
- fallthrough;
- case -EACCES:
- case -EPERM:
- case -ENOKEY:
- case -EKEYEXPIRED:
- case -EKEYREJECTED:
- case -EKEYREVOKED:
- case -ENETRESET:
- afs_redirty_pages(wbc, mapping, start, len);
- mapping_set_error(mapping, ret);
- break;
-
- case -EDQUOT:
- case -ENOSPC:
- afs_redirty_pages(wbc, mapping, start, len);
- mapping_set_error(mapping, -ENOSPC);
- break;
-
- case -EROFS:
- case -EIO:
- case -EREMOTEIO:
- case -EFBIG:
- case -ENOENT:
- case -ENOMEDIUM:
- case -ENXIO:
- trace_afs_file_error(vnode, ret, afs_file_error_writeback_fail);
- afs_kill_pages(mapping, start, len);
- mapping_set_error(mapping, ret);
- break;
- }
-
- _leave(" = %d", ret);
- return ret;
+ afs_upload_to_server(subreq);
}

/*
- * write a region of pages back to the server
+ * Set up write requests for a writeback slice. We need to add a write request
+ * for each write we want to make.
*/
-static int afs_writepages_region(struct address_space *mapping,
- struct writeback_control *wbc,
- unsigned long long start,
- unsigned long long end, loff_t *_next,
- bool max_one_loop)
+void afs_create_write_requests(struct netfs_io_request *wreq, loff_t start, size_t len)
{
- struct folio *folio;
- struct folio_batch fbatch;
- ssize_t ret;
- unsigned int i;
- int n, skips = 0;
-
- _enter("%llx,%llx,", start, end);
- folio_batch_init(&fbatch);
-
- do {
- pgoff_t index = start / PAGE_SIZE;
+ struct netfs_io_subrequest *subreq;

- n = filemap_get_folios_tag(mapping, &index, end / PAGE_SIZE,
- PAGECACHE_TAG_DIRTY, &fbatch);
+ _enter("%x,%llx-%llx", wreq->debug_id, start, start + len);

- if (!n)
- break;
- for (i = 0; i < n; i++) {
- folio = fbatch.folios[i];
- start = folio_pos(folio); /* May regress with THPs */
-
- _debug("wback %lx", folio_index(folio));
-
- /* At this point we hold neither the i_pages lock nor the
- * page lock: the page may be truncated or invalidated
- * (changing page->mapping to NULL), or even swizzled
- * back from swapper_space to tmpfs file mapping
- */
-try_again:
- if (wbc->sync_mode != WB_SYNC_NONE) {
- ret = folio_lock_killable(folio);
- if (ret < 0) {
- folio_batch_release(&fbatch);
- return ret;
- }
- } else {
- if (!folio_trylock(folio))
- continue;
- }
-
- if (folio->mapping != mapping ||
- !folio_test_dirty(folio)) {
- start += folio_size(folio);
- folio_unlock(folio);
- continue;
- }
-
- if (folio_test_writeback(folio) ||
- folio_test_fscache(folio)) {
- folio_unlock(folio);
- if (wbc->sync_mode != WB_SYNC_NONE) {
- folio_wait_writeback(folio);
-#ifdef CONFIG_AFS_FSCACHE
- folio_wait_fscache(folio);
-#endif
- goto try_again;
- }
-
- start += folio_size(folio);
- if (wbc->sync_mode == WB_SYNC_NONE) {
- if (skips >= 5 || need_resched()) {
- *_next = start;
- folio_batch_release(&fbatch);
- _leave(" = 0 [%llx]", *_next);
- return 0;
- }
- skips++;
- }
- continue;
- }
-
- if (!folio_clear_dirty_for_io(folio))
- BUG();
- ret = afs_write_back_from_locked_folio(mapping, wbc,
- folio, start, end);
- if (ret < 0) {
- _leave(" = %zd", ret);
- folio_batch_release(&fbatch);
- return ret;
- }
-
- start += ret;
- }
-
- folio_batch_release(&fbatch);
- cond_resched();
- } while (wbc->nr_to_write > 0);
-
- *_next = start;
- _leave(" = 0 [%llx]", *_next);
- return 0;
+ subreq = netfs_create_write_request(wreq, NETFS_UPLOAD_TO_SERVER,
+ start, len, afs_upload_to_server_worker);
+ if (subreq)
+ netfs_queue_write_request(subreq);
}

/*
* write some of the pending data back to the server
*/
-int afs_writepages(struct address_space *mapping,
- struct writeback_control *wbc)
+int afs_writepages(struct address_space *mapping, struct writeback_control *wbc)
{
struct afs_vnode *vnode = AFS_FS_I(mapping->host);
- loff_t start, next;
int ret;

- _enter("");
-
/* We have to be careful as we can end up racing with setattr()
* truncating the pagecache since the caller doesn't take a lock here
* to prevent it.
@@ -721,68 +227,11 @@ int afs_writepages(struct address_space *mapping,
else if (!down_read_trylock(&vnode->validate_lock))
return 0;

- if (wbc->range_cyclic) {
- start = mapping->writeback_index * PAGE_SIZE;
- ret = afs_writepages_region(mapping, wbc, start, LLONG_MAX,
- &next, false);
- if (ret == 0) {
- mapping->writeback_index = next / PAGE_SIZE;
- if (start > 0 && wbc->nr_to_write > 0) {
- ret = afs_writepages_region(mapping, wbc, 0,
- start, &next, false);
- if (ret == 0)
- mapping->writeback_index =
- next / PAGE_SIZE;
- }
- }
- } else if (wbc->range_start == 0 && wbc->range_end == LLONG_MAX) {
- ret = afs_writepages_region(mapping, wbc, 0, LLONG_MAX,
- &next, false);
- if (wbc->nr_to_write > 0 && ret == 0)
- mapping->writeback_index = next / PAGE_SIZE;
- } else {
- ret = afs_writepages_region(mapping, wbc,
- wbc->range_start, wbc->range_end,
- &next, false);
- }
-
+ ret = netfs_writepages(mapping, wbc);
up_read(&vnode->validate_lock);
- _leave(" = %d", ret);
return ret;
}

-/*
- * write to an AFS file
- */
-ssize_t afs_file_write(struct kiocb *iocb, struct iov_iter *from)
-{
- struct afs_vnode *vnode = AFS_FS_I(file_inode(iocb->ki_filp));
- struct afs_file *af = iocb->ki_filp->private_data;
- ssize_t result;
- size_t count = iov_iter_count(from);
-
- _enter("{%llx:%llu},{%zu},",
- vnode->fid.vid, vnode->fid.vnode, count);
-
- if (IS_SWAPFILE(&vnode->netfs.inode)) {
- printk(KERN_INFO
- "AFS: Attempt to write to active swap file!\n");
- return -EBUSY;
- }
-
- if (!count)
- return 0;
-
- result = afs_validate(vnode, af->key);
- if (result < 0)
- return result;
-
- result = generic_file_write_iter(iocb, from);
-
- _leave(" = %zd", result);
- return result;
-}
-
/*
* flush any dirty pages for this process, and check for write errors.
* - the return status from this call provides a reliable indication of
@@ -811,49 +260,11 @@ int afs_fsync(struct file *file, loff_t start, loff_t end, int datasync)
*/
vm_fault_t afs_page_mkwrite(struct vm_fault *vmf)
{
- struct folio *folio = page_folio(vmf->page);
struct file *file = vmf->vma->vm_file;
- struct inode *inode = file_inode(file);
- struct afs_vnode *vnode = AFS_FS_I(inode);
- struct afs_file *af = file->private_data;
- vm_fault_t ret = VM_FAULT_RETRY;
-
- _enter("{{%llx:%llu}},{%lx}", vnode->fid.vid, vnode->fid.vnode, folio_index(folio));
-
- afs_validate(vnode, af->key);
-
- sb_start_pagefault(inode->i_sb);
-
- /* Wait for the page to be written to the cache before we allow it to
- * be modified. We then assume the entire page will need writing back.
- */
-#ifdef CONFIG_AFS_FSCACHE
- if (folio_test_fscache(folio) &&
- folio_wait_fscache_killable(folio) < 0)
- goto out;
-#endif
-
- if (folio_wait_writeback_killable(folio))
- goto out;
-
- if (folio_lock_killable(folio) < 0)
- goto out;
-
- if (folio_wait_writeback_killable(folio) < 0) {
- folio_unlock(folio);
- goto out;
- }
-
- if (folio_test_dirty(folio))
- trace_afs_folio_dirty(vnode, tracepoint_string("mkwrite+"), folio);
- else
- trace_afs_folio_dirty(vnode, tracepoint_string("mkwrite"), folio);
- file_update_time(file);

- ret = VM_FAULT_LOCKED;
-out:
- sb_end_pagefault(inode->i_sb);
- return ret;
+ if (afs_validate(AFS_FS_I(file_inode(file)), afs_file_key(file)) < 0)
+ return VM_FAULT_SIGBUS;
+ return netfs_page_mkwrite(vmf, NULL);
}

/*
@@ -883,60 +294,3 @@ void afs_prune_wb_keys(struct afs_vnode *vnode)
afs_put_wb_key(wbk);
}
}
-
-/*
- * Clean up a page during invalidation.
- */
-int afs_launder_folio(struct folio *folio)
-{
- struct afs_vnode *vnode = AFS_FS_I(folio_inode(folio));
- struct iov_iter iter;
- struct bio_vec bv;
- unsigned long long fend, i_size = vnode->netfs.inode.i_size;
- size_t len;
- int ret = 0;
-
- _enter("{%lx}", folio->index);
-
- if (folio_clear_dirty_for_io(folio) && folio_pos(folio) < i_size) {
- len = folio_size(folio);
- fend = folio_pos(folio) + len;
- if (vnode->netfs.inode.i_size < fend)
- len = fend - i_size;
-
- bvec_set_folio(&bv, folio, len, 0);
- iov_iter_bvec(&iter, WRITE, &bv, 1, len);
-
- trace_afs_folio_dirty(vnode, tracepoint_string("launder"), folio);
- ret = afs_store_data(vnode, &iter, folio_pos(folio), true);
- }
-
- trace_afs_folio_dirty(vnode, tracepoint_string("laundered"), folio);
- folio_wait_fscache(folio);
- return ret;
-}
-
-/*
- * Deal with the completion of writing the data to the cache.
- */
-static void afs_write_to_cache_done(void *priv, ssize_t transferred_or_error,
- bool was_async)
-{
- struct afs_vnode *vnode = priv;
-
- if (IS_ERR_VALUE(transferred_or_error) &&
- transferred_or_error != -ENOBUFS)
- afs_invalidate_cache(vnode, 0);
-}
-
-/*
- * Save the write to the cache also.
- */
-static void afs_write_to_cache(struct afs_vnode *vnode,
- loff_t start, size_t len, loff_t i_size,
- bool caching)
-{
- fscache_write_to_cache(afs_vnode_cache(vnode),
- vnode->netfs.inode.i_mapping, start, len, i_size,
- afs_write_to_cache_done, vnode, caching);
-}
diff --git a/include/trace/events/afs.h b/include/trace/events/afs.h
index 08506680350c..754358149372 100644
--- a/include/trace/events/afs.h
+++ b/include/trace/events/afs.h
@@ -837,29 +837,6 @@ TRACE_EVENT(afs_dir_check_failed,
__entry->vnode, __entry->off, __entry->i_size)
);

-TRACE_EVENT(afs_folio_dirty,
- TP_PROTO(struct afs_vnode *vnode, const char *where, struct folio *folio),
-
- TP_ARGS(vnode, where, folio),
-
- TP_STRUCT__entry(
- __field(struct afs_vnode *, vnode)
- __field(const char *, where)
- __field(pgoff_t, index)
- __field(size_t, size)
- ),
-
- TP_fast_assign(
- __entry->vnode = vnode;
- __entry->where = where;
- __entry->index = folio_index(folio);
- __entry->size = folio_size(folio);
- ),
-
- TP_printk("vn=%p ix=%05lx s=%05lx %s",
- __entry->vnode, __entry->index, __entry->size, __entry->where)
- );
-
TRACE_EVENT(afs_call_state,
TP_PROTO(struct afs_call *call,
enum afs_call_state from,

2023-10-13 16:26:52

by David Howells

[permalink] [raw]
Subject: [RFC PATCH 45/53] cifs: Replace cifs_writedata with a wrapper around netfs_io_subrequest

Replace the cifs_writedata struct with the same wrapper around
netfs_io_subrequest that was used to replace cifs_readdata.
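
The net effect on call sites is that the bespoke kref and the release
function threaded through every put go away in favour of the refcount
embedded in netfs_io_subrequest. Roughly (a sketch distilled from the hunks
below):

	/* Before: each caller had to pass the right release function. */
	kref_put(&wdata->refcount, cifs_uncached_writedata_release);

	/* After: a single helper; the uncached cleanup is selected by
	 * the new wdata->uncached flag in cifs_writedata_release().
	 */
	cifs_put_writedata(wdata);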

Signed-off-by: David Howells <[email protected]>
cc: Steve French <[email protected]>
cc: Shyam Prasad N <[email protected]>
cc: Rohith Surabattula <[email protected]>
cc: Jeff Layton <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
---
fs/smb/client/cifsglob.h | 30 +++------------
fs/smb/client/cifsproto.h | 16 ++++++--
fs/smb/client/cifssmb.c | 9 ++---
fs/smb/client/file.c | 79 ++++++++++++++++-----------------------
fs/smb/client/smb2pdu.c | 9 ++---
fs/smb/client/smb2proto.h | 3 +-
6 files changed, 58 insertions(+), 88 deletions(-)

diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
index 1943d035b8d3..0b1835751bda 100644
--- a/fs/smb/client/cifsglob.h
+++ b/fs/smb/client/cifsglob.h
@@ -238,7 +238,6 @@ struct cifs_fattr;
struct smb3_fs_context;
struct cifs_fid;
struct cifs_io_subrequest;
-struct cifs_writedata;
struct cifs_io_parms;
struct cifs_search_info;
struct cifsInodeInfo;
@@ -413,8 +412,7 @@ struct smb_version_operations {
/* async read from the server */
int (*async_readv)(struct cifs_io_subrequest *);
/* async write to the server */
- int (*async_writev)(struct cifs_writedata *,
- void (*release)(struct kref *));
+ int (*async_writev)(struct cifs_io_subrequest *);
/* sync read from the server */
int (*sync_read)(const unsigned int, struct cifs_fid *,
struct cifs_io_parms *, unsigned int *, char **,
@@ -1438,35 +1436,17 @@ struct cifs_io_subrequest {
#endif
struct cifs_credits credits;

- // TODO: Remove following elements
- struct list_head list;
- struct completion done;
- struct work_struct work;
- struct iov_iter iter;
- __u64 offset;
- unsigned int bytes;
-};
+ enum writeback_sync_modes sync_mode;
+ bool uncached;
+ struct bio_vec *bv;

-/* asynchronous write support */
-struct cifs_writedata {
- struct kref refcount;
+ // TODO: Remove following elements
struct list_head list;
struct completion done;
- enum writeback_sync_modes sync_mode;
struct work_struct work;
- struct cifsFileInfo *cfile;
- struct cifs_aio_ctx *ctx;
struct iov_iter iter;
- struct bio_vec *bv;
__u64 offset;
- pid_t pid;
unsigned int bytes;
- int result;
- struct TCP_Server_Info *server;
-#ifdef CONFIG_CIFS_SMB_DIRECT
- struct smbd_mr *mr;
-#endif
- struct cifs_credits credits;
};

/*
diff --git a/fs/smb/client/cifsproto.h b/fs/smb/client/cifsproto.h
index 7748fe148fb4..561dac1576a5 100644
--- a/fs/smb/client/cifsproto.h
+++ b/fs/smb/client/cifsproto.h
@@ -589,11 +589,19 @@ static inline void cifs_put_readdata(struct cifs_io_subrequest *rdata)
int cifs_async_readv(struct cifs_io_subrequest *rdata);
int cifs_readv_receive(struct TCP_Server_Info *server, struct mid_q_entry *mid);

-int cifs_async_writev(struct cifs_writedata *wdata,
- void (*release)(struct kref *kref));
+int cifs_async_writev(struct cifs_io_subrequest *wdata);
void cifs_writev_complete(struct work_struct *work);
-struct cifs_writedata *cifs_writedata_alloc(work_func_t complete);
-void cifs_writedata_release(struct kref *refcount);
+struct cifs_io_subrequest *cifs_writedata_alloc(work_func_t complete);
+void cifs_writedata_release(struct cifs_io_subrequest *rdata);
+static inline void cifs_get_writedata(struct cifs_io_subrequest *wdata)
+{
+ refcount_inc(&wdata->subreq.ref);
+}
+static inline void cifs_put_writedata(struct cifs_io_subrequest *wdata)
+{
+ if (refcount_dec_and_test(&wdata->subreq.ref))
+ cifs_writedata_release(wdata);
+}
int cifs_query_mf_symlink(unsigned int xid, struct cifs_tcon *tcon,
struct cifs_sb_info *cifs_sb,
const unsigned char *path, char *pbuf,
diff --git a/fs/smb/client/cifssmb.c b/fs/smb/client/cifssmb.c
index 76005b3d5ffe..14fca3fa3e08 100644
--- a/fs/smb/client/cifssmb.c
+++ b/fs/smb/client/cifssmb.c
@@ -1610,7 +1610,7 @@ CIFSSMBWrite(const unsigned int xid, struct cifs_io_parms *io_parms,
static void
cifs_writev_callback(struct mid_q_entry *mid)
{
- struct cifs_writedata *wdata = mid->callback_data;
+ struct cifs_io_subrequest *wdata = mid->callback_data;
struct cifs_tcon *tcon = tlink_tcon(wdata->cfile->tlink);
unsigned int written;
WRITE_RSP *smb = (WRITE_RSP *)mid->resp_buf;
@@ -1655,8 +1655,7 @@ cifs_writev_callback(struct mid_q_entry *mid)

/* cifs_async_writev - send an async write, and set up mid to handle result */
int
-cifs_async_writev(struct cifs_writedata *wdata,
- void (*release)(struct kref *kref))
+cifs_async_writev(struct cifs_io_subrequest *wdata)
{
int rc = -EACCES;
WRITE_REQ *smb = NULL;
@@ -1723,14 +1722,14 @@ cifs_async_writev(struct cifs_writedata *wdata,
iov[1].iov_len += 4; /* pad bigger by four bytes */
}

- kref_get(&wdata->refcount);
+ cifs_get_writedata(wdata);
rc = cifs_call_async(tcon->ses->server, &rqst, NULL,
cifs_writev_callback, NULL, wdata, 0, NULL);

if (rc == 0)
cifs_stats_inc(&tcon->stats.cifs_stats.num_writes);
else
- kref_put(&wdata->refcount, release);
+ cifs_put_writedata(wdata);

async_writev_out:
cifs_small_buf_release(smb);
diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c
index 0383ce61ac35..c192a38b1b7c 100644
--- a/fs/smb/client/file.c
+++ b/fs/smb/client/file.c
@@ -2410,10 +2410,10 @@ cifs_get_readable_path(struct cifs_tcon *tcon, const char *name,
}

void
-cifs_writedata_release(struct kref *refcount)
+cifs_writedata_release(struct cifs_io_subrequest *wdata)
{
- struct cifs_writedata *wdata = container_of(refcount,
- struct cifs_writedata, refcount);
+ if (wdata->uncached)
+ kref_put(&wdata->ctx->refcount, cifs_aio_ctx_release);
#ifdef CONFIG_CIFS_SMB_DIRECT
if (wdata->mr) {
smbd_deregister_mr(wdata->mr);
@@ -2432,7 +2432,7 @@ cifs_writedata_release(struct kref *refcount)
* possible that the page was redirtied so re-clean the page.
*/
static void
-cifs_writev_requeue(struct cifs_writedata *wdata)
+cifs_writev_requeue(struct cifs_io_subrequest *wdata)
{
int rc = 0;
struct inode *inode = d_inode(wdata->cfile->dentry);
@@ -2442,7 +2442,7 @@ cifs_writev_requeue(struct cifs_writedata *wdata)

server = tlink_tcon(wdata->cfile->tlink)->ses->server;
do {
- struct cifs_writedata *wdata2;
+ struct cifs_io_subrequest *wdata2;
unsigned int wsize, cur_len;

wsize = server->ops->wp_retry_size(inode);
@@ -2465,7 +2465,7 @@ cifs_writev_requeue(struct cifs_writedata *wdata)
wdata2->sync_mode = wdata->sync_mode;
wdata2->offset = fpos;
wdata2->bytes = cur_len;
- wdata2->iter = wdata->iter;
+ wdata2->iter = wdata->iter;

iov_iter_advance(&wdata2->iter, fpos - wdata->offset);
iov_iter_truncate(&wdata2->iter, wdata2->bytes);
@@ -2487,11 +2487,10 @@ cifs_writev_requeue(struct cifs_writedata *wdata)
rc = -EBADF;
} else {
wdata2->pid = wdata2->cfile->pid;
- rc = server->ops->async_writev(wdata2,
- cifs_writedata_release);
+ rc = server->ops->async_writev(wdata2);
}

- kref_put(&wdata2->refcount, cifs_writedata_release);
+ cifs_put_writedata(wdata2);
if (rc) {
if (is_retryable_error(rc))
continue;
@@ -2510,14 +2509,14 @@ cifs_writev_requeue(struct cifs_writedata *wdata)

if (rc != 0 && !is_retryable_error(rc))
mapping_set_error(inode->i_mapping, rc);
- kref_put(&wdata->refcount, cifs_writedata_release);
+ cifs_put_writedata(wdata);
}

void
cifs_writev_complete(struct work_struct *work)
{
- struct cifs_writedata *wdata = container_of(work,
- struct cifs_writedata, work);
+ struct cifs_io_subrequest *wdata = container_of(work,
+ struct cifs_io_subrequest, work);
struct inode *inode = d_inode(wdata->cfile->dentry);

if (wdata->result == 0) {
@@ -2538,16 +2537,16 @@ cifs_writev_complete(struct work_struct *work)

if (wdata->result != -EAGAIN)
mapping_set_error(inode->i_mapping, wdata->result);
- kref_put(&wdata->refcount, cifs_writedata_release);
+ cifs_put_writedata(wdata);
}

-struct cifs_writedata *cifs_writedata_alloc(work_func_t complete)
+struct cifs_io_subrequest *cifs_writedata_alloc(work_func_t complete)
{
- struct cifs_writedata *wdata;
+ struct cifs_io_subrequest *wdata;

wdata = kzalloc(sizeof(*wdata), GFP_NOFS);
if (wdata != NULL) {
- kref_init(&wdata->refcount);
+ refcount_set(&wdata->subreq.ref, 1);
INIT_LIST_HEAD(&wdata->list);
init_completion(&wdata->done);
INIT_WORK(&wdata->work, complete);
@@ -2729,7 +2728,7 @@ static ssize_t cifs_write_back_from_locked_folio(struct address_space *mapping,
{
struct inode *inode = mapping->host;
struct TCP_Server_Info *server;
- struct cifs_writedata *wdata;
+ struct cifs_io_subrequest *wdata;
struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);
struct cifs_credits credits_on_stack;
struct cifs_credits *credits = &credits_on_stack;
@@ -2822,10 +2821,9 @@ static ssize_t cifs_write_back_from_locked_folio(struct address_space *mapping,
if (wdata->cfile->invalidHandle)
rc = -EAGAIN;
else
- rc = wdata->server->ops->async_writev(wdata,
- cifs_writedata_release);
+ rc = wdata->server->ops->async_writev(wdata);
if (rc >= 0) {
- kref_put(&wdata->refcount, cifs_writedata_release);
+ cifs_put_writedata(wdata);
goto err_close;
}
} else {
@@ -2835,7 +2833,7 @@ static ssize_t cifs_write_back_from_locked_folio(struct address_space *mapping,
}

err_wdata:
- kref_put(&wdata->refcount, cifs_writedata_release);
+ cifs_put_writedata(wdata);
err_uncredit:
add_credits_and_wake_if(server, credits, 0);
err_close:
@@ -3224,23 +3222,13 @@ int cifs_flush(struct file *file, fl_owner_t id)
return rc;
}

-static void
-cifs_uncached_writedata_release(struct kref *refcount)
-{
- struct cifs_writedata *wdata = container_of(refcount,
- struct cifs_writedata, refcount);
-
- kref_put(&wdata->ctx->refcount, cifs_aio_ctx_release);
- cifs_writedata_release(refcount);
-}
-
static void collect_uncached_write_data(struct cifs_aio_ctx *ctx);

static void
cifs_uncached_writev_complete(struct work_struct *work)
{
- struct cifs_writedata *wdata = container_of(work,
- struct cifs_writedata, work);
+ struct cifs_io_subrequest *wdata = container_of(work,
+ struct cifs_io_subrequest, work);
struct inode *inode = d_inode(wdata->cfile->dentry);
struct cifsInodeInfo *cifsi = CIFS_I(inode);

@@ -3253,11 +3241,11 @@ cifs_uncached_writev_complete(struct work_struct *work)
complete(&wdata->done);
collect_uncached_write_data(wdata->ctx);
/* the below call can possibly free the last ref to aio ctx */
- kref_put(&wdata->refcount, cifs_uncached_writedata_release);
+ cifs_put_writedata(wdata);
}

static int
-cifs_resend_wdata(struct cifs_writedata *wdata, struct list_head *wdata_list,
+cifs_resend_wdata(struct cifs_io_subrequest *wdata, struct list_head *wdata_list,
struct cifs_aio_ctx *ctx)
{
unsigned int wsize;
@@ -3306,8 +3294,7 @@ cifs_resend_wdata(struct cifs_writedata *wdata, struct list_head *wdata_list,
wdata->mr = NULL;
}
#endif
- rc = server->ops->async_writev(wdata,
- cifs_uncached_writedata_release);
+ rc = server->ops->async_writev(wdata);
}
}

@@ -3322,7 +3309,7 @@ cifs_resend_wdata(struct cifs_writedata *wdata, struct list_head *wdata_list,
} while (rc == -EAGAIN);

fail:
- kref_put(&wdata->refcount, cifs_uncached_writedata_release);
+ cifs_put_writedata(wdata);
return rc;
}

@@ -3374,7 +3361,7 @@ cifs_write_from_iter(loff_t fpos, size_t len, struct iov_iter *from,
{
int rc = 0;
size_t cur_len, max_len;
- struct cifs_writedata *wdata;
+ struct cifs_io_subrequest *wdata;
pid_t pid;
struct TCP_Server_Info *server;
unsigned int xid, max_segs = INT_MAX;
@@ -3438,6 +3425,7 @@ cifs_write_from_iter(loff_t fpos, size_t len, struct iov_iter *from,
break;
}

+ wdata->uncached = true;
wdata->sync_mode = WB_SYNC_ALL;
wdata->offset = (__u64)fpos;
wdata->cfile = cifsFileInfo_get(open_file);
@@ -3457,14 +3445,12 @@ cifs_write_from_iter(loff_t fpos, size_t len, struct iov_iter *from,
if (wdata->cfile->invalidHandle)
rc = -EAGAIN;
else
- rc = server->ops->async_writev(wdata,
- cifs_uncached_writedata_release);
+ rc = server->ops->async_writev(wdata);
}

if (rc) {
add_credits_and_wake_if(server, &wdata->credits, 0);
- kref_put(&wdata->refcount,
- cifs_uncached_writedata_release);
+ cifs_put_writedata(wdata);
if (rc == -EAGAIN)
continue;
break;
@@ -3482,7 +3468,7 @@ cifs_write_from_iter(loff_t fpos, size_t len, struct iov_iter *from,

static void collect_uncached_write_data(struct cifs_aio_ctx *ctx)
{
- struct cifs_writedata *wdata, *tmp;
+ struct cifs_io_subrequest *wdata, *tmp;
struct cifs_tcon *tcon;
struct cifs_sb_info *cifs_sb;
struct dentry *dentry = ctx->cfile->dentry;
@@ -3537,8 +3523,7 @@ static void collect_uncached_write_data(struct cifs_aio_ctx *ctx)
ctx->cfile, cifs_sb, &tmp_list,
ctx);

- kref_put(&wdata->refcount,
- cifs_uncached_writedata_release);
+ cifs_put_writedata(wdata);
}

list_splice(&tmp_list, &ctx->list);
@@ -3546,7 +3531,7 @@ static void collect_uncached_write_data(struct cifs_aio_ctx *ctx)
}
}
list_del_init(&wdata->list);
- kref_put(&wdata->refcount, cifs_uncached_writedata_release);
+ cifs_put_writedata(wdata);
}

cifs_stats_bytes_written(tcon, ctx->total_len);
diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c
index cc3d80a66869..4f98631f2cf4 100644
--- a/fs/smb/client/smb2pdu.c
+++ b/fs/smb/client/smb2pdu.c
@@ -4415,7 +4415,7 @@ SMB2_read(const unsigned int xid, struct cifs_io_parms *io_parms,
static void
smb2_writev_callback(struct mid_q_entry *mid)
{
- struct cifs_writedata *wdata = mid->callback_data;
+ struct cifs_io_subrequest *wdata = mid->callback_data;
struct cifs_tcon *tcon = tlink_tcon(wdata->cfile->tlink);
struct TCP_Server_Info *server = wdata->server;
unsigned int written;
@@ -4496,8 +4496,7 @@ smb2_writev_callback(struct mid_q_entry *mid)

/* smb2_async_writev - send an async write, and set up mid to handle result */
int
-smb2_async_writev(struct cifs_writedata *wdata,
- void (*release)(struct kref *kref))
+smb2_async_writev(struct cifs_io_subrequest *wdata)
{
int rc = -EACCES, flags = 0;
struct smb2_write_req *req = NULL;
@@ -4629,7 +4628,7 @@ smb2_async_writev(struct cifs_writedata *wdata,
flags |= CIFS_HAS_CREDITS;
}

- kref_get(&wdata->refcount);
+ cifs_get_writedata(wdata);
rc = cifs_call_async(server, &rqst, NULL, smb2_writev_callback, NULL,
wdata, flags, &wdata->credits);

@@ -4641,7 +4640,7 @@ smb2_async_writev(struct cifs_writedata *wdata,
io_parms->offset,
io_parms->length,
rc);
- kref_put(&wdata->refcount, release);
+ cifs_put_writedata(wdata);
cifs_stats_fail_inc(tcon, SMB2_WRITE_HE);
}

diff --git a/fs/smb/client/smb2proto.h b/fs/smb/client/smb2proto.h
index 02ffe5ec9b21..4d3d51e42d3c 100644
--- a/fs/smb/client/smb2proto.h
+++ b/fs/smb/client/smb2proto.h
@@ -189,8 +189,7 @@ extern int SMB2_get_srv_num(const unsigned int xid, struct cifs_tcon *tcon,
extern int smb2_async_readv(struct cifs_io_subrequest *rdata);
extern int SMB2_read(const unsigned int xid, struct cifs_io_parms *io_parms,
unsigned int *nbytes, char **buf, int *buf_type);
-extern int smb2_async_writev(struct cifs_writedata *wdata,
- void (*release)(struct kref *kref));
+extern int smb2_async_writev(struct cifs_io_subrequest *wdata);
extern int SMB2_write(const unsigned int xid, struct cifs_io_parms *io_parms,
unsigned int *nbytes, struct kvec *iov, int n_vec);
extern int SMB2_echo(struct TCP_Server_Info *server);

2023-10-13 16:27:12

by David Howells

[permalink] [raw]
Subject: [RFC PATCH 41/53] netfs: Rearrange netfs_io_subrequest to put request pointer first

Rearrange the netfs_io_subrequest struct to put the netfs_io_request
pointer (rreq) first. This then allows netfs_io_subrequest to be put in a
union with a pointer to a wrapper around netfs_io_request for cifs.
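
Roughly, the layout this is intended to permit looks like the following
sketch (illustrative only; the actual cifs wrapper types are defined
elsewhere in the series):

	struct cifs_io_request {
		struct netfs_io_request rreq;	/* must be first */
		/* cifs-specific request state follows */
	};

	struct cifs_io_subrequest {
		union {
			struct netfs_io_subrequest subreq;
			struct cifs_io_request *req;	/* aliases subreq.rreq */
		};
		/* cifs-specific subrequest state follows */
	};

With ->rreq placed first in netfs_io_subrequest, a read through either union
member yields a pointer to the same supervising request.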

Signed-off-by: David Howells <[email protected]>
cc: Steve French <[email protected]>
cc: Shyam Prasad N <[email protected]>
cc: Rohith Surabattula <[email protected]>
cc: Jeff Layton <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
cc: [email protected]
---
include/linux/netfs.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index c416645649e1..ff4f86ae64e4 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -209,8 +209,8 @@ struct netfs_cache_resources {
* the pages it points to can be relied on to exist for the duration.
*/
struct netfs_io_subrequest {
- struct work_struct work;
struct netfs_io_request *rreq; /* Supervising I/O request */
+ struct work_struct work;
struct list_head rreq_link; /* Link in rreq->subrequests */
struct iov_iter io_iter; /* Iterator for this subrequest */
loff_t start; /* Where to start the I/O */

2023-10-13 16:27:19

by David Howells

[permalink] [raw]
Subject: [RFC PATCH 30/53] netfs: Allow buffered shared-writeable mmap through netfs_page_mkwrite()

Provide an entry point to delegate a filesystem's ->page_mkwrite() to.
This checks for conflicting writes, then attaches any netfs-specific group
marking (e.g. a ceph snap context) to the folio that is about to be made
dirty.
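
A filesystem's ->page_mkwrite() then reduces to a thin wrapper. As a hedged
sketch (myfs_* is a placeholder; any revalidation step is fs-specific):

	static vm_fault_t myfs_page_mkwrite(struct vm_fault *vmf)
	{
		/* Do any fs-specific revalidation first, then delegate,
		 * passing the write group (or NULL if ungrouped).
		 */
		return netfs_page_mkwrite(vmf, NULL);
	}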

Signed-off-by: David Howells <[email protected]>
cc: Jeff Layton <[email protected]>
cc: [email protected]
cc: [email protected]
cc: [email protected]
---
fs/netfs/buffered_write.c | 59 +++++++++++++++++++++++++++++++++++++++
include/linux/netfs.h | 4 +++
2 files changed, 63 insertions(+)

diff --git a/fs/netfs/buffered_write.c b/fs/netfs/buffered_write.c
index 60e7da53cbd2..3c1f26f32351 100644
--- a/fs/netfs/buffered_write.c
+++ b/fs/netfs/buffered_write.c
@@ -413,3 +413,62 @@ ssize_t netfs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
return ret;
}
EXPORT_SYMBOL(netfs_file_write_iter);
+
+/*
+ * Notification that a previously read-only page is about to become writable.
+ * Note that the caller indicates a single page of a multipage folio.
+ */
+vm_fault_t netfs_page_mkwrite(struct vm_fault *vmf, struct netfs_group *netfs_group)
+{
+ struct folio *folio = page_folio(vmf->page);
+ struct file *file = vmf->vma->vm_file;
+ struct inode *inode = file_inode(file);
+ vm_fault_t ret = VM_FAULT_RETRY;
+ int err;
+
+ _enter("%lx", folio->index);
+
+ sb_start_pagefault(inode->i_sb);
+
+ if (folio_wait_writeback_killable(folio))
+ goto out;
+
+ if (folio_lock_killable(folio) < 0)
+ goto out;
+
+ /* Can we see a streaming write here? */
+ if (WARN_ON(!folio_test_uptodate(folio))) {
+ ret = VM_FAULT_SIGBUS | VM_FAULT_LOCKED;
+ goto out;
+ }
+
+ if (netfs_folio_group(folio) != netfs_group) {
+ folio_unlock(folio);
+ err = filemap_fdatawait_range(inode->i_mapping,
+ folio_pos(folio),
+ folio_pos(folio) + folio_size(folio) - 1);
+ switch (err) {
+ case 0:
+ ret = VM_FAULT_RETRY;
+ goto out;
+ case -ENOMEM:
+ ret = VM_FAULT_OOM;
+ goto out;
+ default:
+ ret = VM_FAULT_SIGBUS;
+ goto out;
+ }
+ }
+
+ if (folio_test_dirty(folio))
+ trace_netfs_folio(folio, netfs_folio_trace_mkwrite_plus);
+ else
+ trace_netfs_folio(folio, netfs_folio_trace_mkwrite);
+ netfs_set_group(folio, netfs_group);
+ file_update_time(file);
+ ret = VM_FAULT_LOCKED;
+out:
+ sb_end_pagefault(inode->i_sb);
+ return ret;
+}
+EXPORT_SYMBOL(netfs_page_mkwrite);
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index d1dc7ba62f17..e2a5a441b7fc 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -403,6 +403,10 @@ int netfs_write_begin(struct netfs_inode *, struct file *,
void netfs_invalidate_folio(struct folio *folio, size_t offset, size_t length);
bool netfs_release_folio(struct folio *folio, gfp_t gfp);

+/* VMA operations API. */
+vm_fault_t netfs_page_mkwrite(struct vm_fault *vmf, struct netfs_group *netfs_group);
+
+/* (Sub)request management API. */
void netfs_subreq_terminated(struct netfs_io_subrequest *, ssize_t, bool);
void netfs_get_subrequest(struct netfs_io_subrequest *subreq,
enum netfs_sreq_ref_trace what);

2023-10-16 15:55:05

by Jeff Layton

[permalink] [raw]
Subject: Re: [RFC PATCH 08/53] netfs: Add rsize to netfs_io_request

On Fri, 2023-10-13 at 17:03 +0100, David Howells wrote:
> Add an rsize parameter to netfs_io_request to be filled in by the network
> filesystem when the request is initialised. This indicates the maximum
> size of a read request that the netfs will honour in that region.
>
> Signed-off-by: David Howells <[email protected]>
> cc: Jeff Layton <[email protected]>
> cc: [email protected]
> cc: [email protected]
> cc: [email protected]
> ---
> fs/afs/file.c | 1 +
> fs/ceph/addr.c | 2 ++
> include/linux/netfs.h | 1 +
> 3 files changed, 4 insertions(+)
>
> diff --git a/fs/afs/file.c b/fs/afs/file.c
> index 3fea5cd8ef13..3d2e1913ea27 100644
> --- a/fs/afs/file.c
> +++ b/fs/afs/file.c
> @@ -360,6 +360,7 @@ static int afs_symlink_read_folio(struct file *file, struct folio *folio)
> static int afs_init_request(struct netfs_io_request *rreq, struct file *file)
> {
> rreq->netfs_priv = key_get(afs_file_key(file));
> + rreq->rsize = 4 * 1024 * 1024;
> return 0;
> }
>
> diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
> index ced19ff08988..92a5ddcd9a76 100644
> --- a/fs/ceph/addr.c
> +++ b/fs/ceph/addr.c
> @@ -419,6 +419,8 @@ static int ceph_init_request(struct netfs_io_request *rreq, struct file *file)
> struct ceph_netfs_request_data *priv;
> int ret = 0;
>
> + rreq->rsize = 1024 * 1024;
> +

Holy magic numbers, Batman! I think this deserves a comment that
explains how you came up with these values.

Also, do 9p and cifs not need this for some reason?

> if (rreq->origin != NETFS_READAHEAD)
> return 0;
>
> diff --git a/include/linux/netfs.h b/include/linux/netfs.h
> index daa431c4148d..02e888c170da 100644
> --- a/include/linux/netfs.h
> +++ b/include/linux/netfs.h
> @@ -188,6 +188,7 @@ struct netfs_io_request {
> struct list_head subrequests; /* Contributory I/O operations */
> void *netfs_priv; /* Private data for the netfs */
> unsigned int debug_id;
> + unsigned int rsize; /* Maximum read size (0 for none) */
> atomic_t nr_outstanding; /* Number of ops in progress */
> atomic_t nr_copy_ops; /* Number of copy-to-cache ops in progress */
> size_t submitted; /* Amount submitted for I/O so far */
>

--
Jeff Layton <[email protected]>

2023-10-16 15:57:25

by Jeff Layton

[permalink] [raw]
Subject: Re: [RFC PATCH 09/53] netfs: Implement unbuffered/DIO vs buffered I/O locking

On Fri, 2023-10-13 at 17:03 +0100, David Howells wrote:
> Borrow NFS's direct-vs-buffered I/O locking into netfslib. Similar code is
> also used in ceph.
>
> Modify it to have the correct checker annotations for i_rwsem lock
> acquisition/release and to return -ERESTARTSYS if waits are interrupted.
>
> Signed-off-by: David Howells <[email protected]>
> cc: Jeff Layton <[email protected]>
> cc: [email protected]
> cc: [email protected]
> cc: [email protected]
> ---
> fs/netfs/Makefile | 1 +
> fs/netfs/locking.c | 209 ++++++++++++++++++++++++++++++++++++++++++
> include/linux/netfs.h | 10 ++
> 3 files changed, 220 insertions(+)
> create mode 100644 fs/netfs/locking.c
>
> diff --git a/fs/netfs/Makefile b/fs/netfs/Makefile
> index cd22554d9048..647ce1935674 100644
> --- a/fs/netfs/Makefile
> +++ b/fs/netfs/Makefile
> @@ -4,6 +4,7 @@ netfs-y := \
> buffered_read.o \
> io.o \
> iterator.o \
> + locking.o \
> main.o \
> misc.o \
> objects.o
> diff --git a/fs/netfs/locking.c b/fs/netfs/locking.c
> new file mode 100644
> index 000000000000..fecca8ea6322
> --- /dev/null
> +++ b/fs/netfs/locking.c
> @@ -0,0 +1,209 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * I/O and data path helper functionality.
> + *
> + * Borrowed from NFS Copyright (c) 2016 Trond Myklebust
> + */
> +
> +#include <linux/kernel.h>
> +#include <linux/netfs.h>
> +
> +/*
> + * inode_dio_wait_interruptible - wait for outstanding DIO requests to finish
> + * @inode: inode to wait for
> + *
> + * Waits for all pending direct I/O requests to finish so that we can
> + * proceed with a truncate or equivalent operation.
> + *
> + * Must be called under a lock that serializes taking new references
> + * to i_dio_count, usually by inode->i_mutex.
> + */
> +static int inode_dio_wait_interruptible(struct inode *inode)
> +{
> + if (!atomic_read(&inode->i_dio_count))
> + return 0;
> +
> + wait_queue_head_t *wq = bit_waitqueue(&inode->i_state, __I_DIO_WAKEUP);
> + DEFINE_WAIT_BIT(q, &inode->i_state, __I_DIO_WAKEUP);
> +
> + for (;;) {
> + prepare_to_wait(wq, &q.wq_entry, TASK_INTERRUPTIBLE);
> + if (!atomic_read(&inode->i_dio_count))
> + break;
> + if (signal_pending(current))
> + break;
> + schedule();
> + }
> + finish_wait(wq, &q.wq_entry);
> +
> + return atomic_read(&inode->i_dio_count) ? -ERESTARTSYS : 0;
> +}
> +
> +/* Call with exclusively locked inode->i_rwsem */
> +static int netfs_block_o_direct(struct netfs_inode *ictx)
> +{
> + if (!test_bit(NETFS_ICTX_ODIRECT, &ictx->flags))
> + return 0;
> + clear_bit(NETFS_ICTX_ODIRECT, &ictx->flags);
> + return inode_dio_wait_interruptible(&ictx->inode);
> +}
> +
> +/**
> + * netfs_start_io_read - declare the file is being used for buffered reads
> + * @inode: file inode
> + *
> + * Declare that a buffered read operation is about to start, and ensure
> + * that we block all direct I/O.
> + * On exit, the function ensures that the NETFS_ICTX_ODIRECT flag is unset,
> + * and holds a shared lock on inode->i_rwsem to ensure that the flag
> + * cannot be changed.
> + * In practice, this means that buffered read operations are allowed to
> + * execute in parallel, thanks to the shared lock, whereas direct I/O
> + * operations need to wait to grab an exclusive lock in order to set
> + * NETFS_ICTX_ODIRECT.
> + * Note that buffered writes and truncates both take a write lock on
> + * inode->i_rwsem, meaning that those are serialised w.r.t. the reads.
> + */
> +int netfs_start_io_read(struct inode *inode)
> + __acquires(inode->i_rwsem)
> +{
> + struct netfs_inode *ictx = netfs_inode(inode);
> +
> + /* Be an optimist! */
> + if (down_read_interruptible(&inode->i_rwsem) < 0)
> + return -ERESTARTSYS;
> + if (test_bit(NETFS_ICTX_ODIRECT, &ictx->flags) == 0)
> + return 0;
> + up_read(&inode->i_rwsem);
> +
> + /* Slow path.... */
> + if (down_write_killable(&inode->i_rwsem) < 0)
> + return -ERESTARTSYS;
> + if (netfs_block_o_direct(ictx) < 0) {
> + up_write(&inode->i_rwsem);
> + return -ERESTARTSYS;
> + }
> + downgrade_write(&inode->i_rwsem);
> + return 0;
> +}
> +
> +/**
> + * netfs_end_io_read - declare that the buffered read operation is done
> + * @inode: file inode
> + *
> + * Declare that a buffered read operation is done, and release the shared
> + * lock on inode->i_rwsem.
> + */
> +void netfs_end_io_read(struct inode *inode)
> + __releases(inode->i_rwsem)
> +{
> + up_read(&inode->i_rwsem);
> +}
> +
> +/**
> + * netfs_start_io_write - declare the file is being used for buffered writes
> + * @inode: file inode
> + *
> + * Declare that a buffered write operation is about to start, and ensure
> + * that we block all direct I/O.
> + */
> +int netfs_start_io_write(struct inode *inode)
> + __acquires(inode->i_rwsem)
> +{
> + struct netfs_inode *ictx = netfs_inode(inode);
> +
> + if (down_write_killable(&inode->i_rwsem) < 0)
> + return -ERESTARTSYS;
> + if (netfs_block_o_direct(ictx) < 0) {
> + up_write(&inode->i_rwsem);
> + return -ERESTARTSYS;
> + }
> + return 0;
> +}
> +
> +/**
> + * netfs_end_io_write - declare that the buffered write operation is done
> + * @inode: file inode
> + *
> + * Declare that a buffered write operation is done, and release the
> + * lock on inode->i_rwsem.
> + */
> +void netfs_end_io_write(struct inode *inode)
> + __releases(inode->i_rwsem)
> +{
> + up_write(&inode->i_rwsem);
> +}
> +
> +/* Call with exclusively locked inode->i_rwsem */
> +static int netfs_block_buffered(struct inode *inode)
> +{
> + struct netfs_inode *ictx = netfs_inode(inode);
> + int ret;
> +
> + if (!test_bit(NETFS_ICTX_ODIRECT, &ictx->flags)) {
> + set_bit(NETFS_ICTX_ODIRECT, &ictx->flags);
> + if (inode->i_mapping->nrpages != 0) {
> + unmap_mapping_range(inode->i_mapping, 0, 0, 0);
> + ret = filemap_fdatawait(inode->i_mapping);
> + if (ret < 0) {
> + clear_bit(NETFS_ICTX_ODIRECT, &ictx->flags);
> + return ret;
> + }
> + }
> + }
> + return 0;
> +}
> +
> +/**
> + * netfs_start_io_direct - declare the file is being used for direct i/o
> + * @inode: file inode
> + *
> + * Declare that a direct I/O operation is about to start, and ensure
> + * that we block all buffered I/O.
> + * On exit, the function ensures that the NETFS_ICTX_ODIRECT flag is set,
> + * and holds a shared lock on inode->i_rwsem to ensure that the flag
> + * cannot be changed.
> + * In practice, this means that direct I/O operations are allowed to
> + * execute in parallel, thanks to the shared lock, whereas buffered I/O
> + * operations need to wait to grab an exclusive lock in order to clear
> + * NETFS_ICTX_ODIRECT.
> + * Note that buffered writes and truncates both take a write lock on
> + * inode->i_rwsem, meaning that those are serialised w.r.t. O_DIRECT.
> + */
> +int netfs_start_io_direct(struct inode *inode)
> + __acquires(inode->i_rwsem)
> +{
> + struct netfs_inode *ictx = netfs_inode(inode);
> + int ret;
> +
> + /* Be an optimist! */
> + if (down_read_interruptible(&inode->i_rwsem) < 0)
> + return -ERESTARTSYS;
> + if (test_bit(NETFS_ICTX_ODIRECT, &ictx->flags) != 0)
> + return 0;
> + up_read(&inode->i_rwsem);
> +
> + /* Slow path.... */
> + if (down_write_killable(&inode->i_rwsem) < 0)
> + return -ERESTARTSYS;
> + ret = netfs_block_buffered(inode);
> + if (ret < 0) {
> + up_write(&inode->i_rwsem);
> + return ret;
> + }
> + downgrade_write(&inode->i_rwsem);
> + return 0;
> +}
> +
> +/**
> + * netfs_end_io_direct - declare that the direct i/o operation is done
> + * @inode: file inode
> + *
> + * Declare that a direct I/O operation is done, and release the shared
> + * lock on inode->i_rwsem.
> + */
> +void netfs_end_io_direct(struct inode *inode)
> + __releases(inode->i_rwsem)
> +{
> + up_read(&inode->i_rwsem);
> +}
> diff --git a/include/linux/netfs.h b/include/linux/netfs.h
> index 02e888c170da..33d4487a91e9 100644
> --- a/include/linux/netfs.h
> +++ b/include/linux/netfs.h
> @@ -131,6 +131,8 @@ struct netfs_inode {
> loff_t remote_i_size; /* Size of the remote file */
> loff_t zero_point; /* Size after which we assume there's no data
> * on the server */
> + unsigned long flags;
> +#define NETFS_ICTX_ODIRECT 0 /* The file has DIO in progress */
> };
>
> /*
> @@ -315,6 +317,13 @@ ssize_t netfs_extract_user_iter(struct iov_iter *orig, size_t orig_len,
> struct iov_iter *new,
> iov_iter_extraction_t extraction_flags);
>
> +int netfs_start_io_read(struct inode *inode);
> +void netfs_end_io_read(struct inode *inode);
> +int netfs_start_io_write(struct inode *inode);
> +void netfs_end_io_write(struct inode *inode);
> +int netfs_start_io_direct(struct inode *inode);
> +void netfs_end_io_direct(struct inode *inode);
> +
> /**
> * netfs_inode - Get the netfs inode context from the inode
> * @inode: The inode to query
> @@ -341,6 +350,7 @@ static inline void netfs_inode_init(struct netfs_inode *ctx,
> ctx->ops = ops;
> ctx->remote_i_size = i_size_read(&ctx->inode);
> ctx->zero_point = ctx->remote_i_size;
> + ctx->flags = 0;
> #if IS_ENABLED(CONFIG_FSCACHE)
> ctx->cache = NULL;
> #endif
>

It's nice to see this go into common code, but why not go ahead and
convert ceph (and possibly NFS) to use this? Is there any reason not to?

--
Jeff Layton <[email protected]>

2023-10-16 16:10:52

by David Howells

[permalink] [raw]
Subject: Re: [RFC PATCH 09/53] netfs: Implement unbuffered/DIO vs buffered I/O locking

Jeff Layton <[email protected]> wrote:

> It's nice to see this go into common code, but why not go ahead and
> convert ceph (and possibly NFS) to use this? Is there any reason not to?

I'm converting ceph on a follow-on branch, so for ceph this will be dealt with
there.

I could do NFS round about here, I suppose.
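
For reference, the conversion mostly amounts to bracketing the buffered read
path like so (a sketch with a hypothetical myfs function, not code from the
series):

	static ssize_t myfs_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
	{
		struct inode *inode = file_inode(iocb->ki_filp);
		ssize_t ret;

		ret = netfs_start_io_read(inode);	/* Excludes in-flight O_DIRECT */
		if (ret < 0)
			return ret;
		ret = generic_file_read_iter(iocb, to);
		netfs_end_io_read(inode);		/* Drops the shared i_rwsem */
		return ret;
	}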

David

2023-10-16 16:38:39

by David Howells

[permalink] [raw]
Subject: Re: [RFC PATCH 08/53] netfs: Add rsize to netfs_io_request

Jeff Layton <[email protected]> wrote:

> > + rreq->rsize = 4 * 1024 * 1024;
> > return 0;
> ...
> > + rreq->rsize = 1024 * 1024;
> > +
>
> Holy magic numbers, batman! I think this deserves a comment that
> explains how you came up with these values.

Actually, that should be set to something like the object size for ceph.

> Also, do 9p and cifs not need this for some reason?

At this point in the series, cifs doesn't yet use netfslib, so its rsize is
set in a later patch.

9p does need this set, but I haven't tested that yet. It probably wants 1MiB
as I think that's the maximum size the 9p transport can handle.

But in the case of cifs, this is actually dynamic, depending on how many
credits we can obtain. The same may be true of ceph, though I'm not entirely
clear on that as yet.

For afs, the maximum [rw]size the protocol supports is actually something like
281350422593565 (i.e. (65535-28) * (2^32-1)) minus a few bytes, but using that
is probably not a good idea. It might be best to set it to something like
256KiB as that's what OpenAFS uses.
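
For illustration, the afs init hook quoted above might then become (a sketch;
the 256KiB figure is just the OpenAFS-derived guess):

	static int afs_init_request(struct netfs_io_request *rreq, struct file *file)
	{
		rreq->netfs_priv = key_get(afs_file_key(file));
		rreq->rsize = 256 * 1024;	/* Chunk size that OpenAFS uses */
		return 0;
	}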

David