2022-03-09 02:29:23

by David Howells

Subject: [PATCH v2 00/19] netfs: Prep for write helpers


Having had a go at implementing write helpers and content encryption
support in netfslib, I found that the netfs_read_{,sub}request structs and
the equivalent write request structs were almost identical, and so they
should be merged, thereby requiring only one set of alloc/get/put functions
and a common set of tracepoints.

Merging the structs also has the advantage that if a bounce buffer is added
to the request struct, a read operation can be performed to fill the bounce
buffer, the contents of the buffer can be modified, and then a write
operation can be performed on it to send the data wherever it needs to go,
using the same request structure all the way through. The I/O handlers
would then transparently perform any required crypto. This should make it
easy to perform RMW cycles if needed.
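
As a purely illustrative sketch of such an RMW cycle (netfs_alloc_request()
comes from this series; the dispatch helpers named here are hypothetical,
not the eventual netfslib API):

    struct netfs_io_request *rreq;

    rreq = netfs_alloc_request(...);

    /* Read the covering blocks into the bounce buffer, transparently
     * decrypting them as they arrive.
     */
    netfs_dispatch_read(rreq);          /* hypothetical */

    /* Modify the plaintext held in the bounce buffer. */
    update_bounce_buffer(rreq);         /* hypothetical */

    /* Re-encrypt and send the modified data back out, reusing the same
     * request struct for the write phase.
     */
    netfs_dispatch_write(rreq);         /* hypothetical */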

However, the names of the potentially common functions and structs all
proclaim them to be associated with the read side of things. The bulk of
these changes address this in the following ways:

(1) Rename struct netfs_read_{,sub}request to netfs_io_{,sub}request.

(2) Rename some enums, members and flags to make them more appropriate.

(3) Adjust some comments to match.

(4) Drop "read"/"rreq" from the names of common functions. For instance,
netfs_get_read_request() becomes netfs_get_request().

(5) The ->init_rreq() and ->issue_op() methods become ->init_request() and
->issue_read(). I've kept the latter as a read-specific function and
in another branch added an ->issue_write() method.

The netfslib source is then reorganised into a number of files:

fs/netfs/buffered_read.c Create read reqs to the pagecache
fs/netfs/io.c Dispatchers for read and write reqs
fs/netfs/main.c Some general miscellaneous bits
fs/netfs/objects.c Alloc, get and put functions
fs/netfs/stats.c Optional procfs statistics

and future development can be fitted into this scheme, e.g.:

fs/netfs/buffered_write.c Modify the pagecache
fs/netfs/buffered_flush.c Writeback from the pagecache
fs/netfs/direct_read.c DIO read support
fs/netfs/direct_write.c DIO write support
fs/netfs/unbuffered_write.c Write modifications directly back

Beyond the renaming, there are also some changes that affect how things
work:

(1) Make fscache_end_operation() generally available.

(2) In the netfs tracing header, generate enums from the symbol -> string
mapping tables rather than manually coding them.

(3) Add a struct for filesystems that use netfslib to embed in their
inode wrapper structs to hold extra state that netfslib is interested
in, such as the fscache cookie. This allows netfslib functions to be
set in filesystem operation tables and jumped to directly without
needing a per-filesystem wrapper (see the sketch after this list).

(4) Add a member to the struct added in (3) to track the remote inode
length as that may differ if local modifications are buffered. We may
need to supply an appropriate EOF pointer when storing data (in AFS
for example).

(5) Pass extra information to netfs_alloc_request() so that the
->init_request() hook can access it and retain information to indicate
the origin of the operation.

(6) Make the ->init_request() hook return an error, thereby allowing a
filesystem that isn't allowed to cache an inode (ceph or cifs, for
example) to skip readahead.

(7) Switch to using refcount_t for subrequests and add tracepoints to log
refcount changes for the request and subrequest structs.

(8) Add a function to consolidate dispatching a read request. Similar
code is used in three places and another couple are likely to be added
in the future.
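
To illustrate (3) and (4), here's a rough sketch of such a context struct
and of a filesystem embedding it in its inode wrapper (the exact field set
is approximate):

    struct netfs_i_context {
        const struct netfs_request_ops *ops;
    #if IS_ENABLED(CONFIG_FSCACHE)
        struct fscache_cookie   *cache;
    #endif
        loff_t                  remote_i_size;  /* Size of the remote file */
    };

    struct afs_vnode {
        struct inode            vfs_inode;      /* The VFS's inode record */
        struct netfs_i_context  netfs_ctx;      /* Netfslib state */
        /* ... AFS-private fields ... */
    };

With the state reachable from a bare inode pointer, netfslib helpers can be
placed straight into afs_file_aops without an AFS-specific wrapper.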


The patches can be found on this branch:

http://git.kernel.org/cgit/linux/kernel/git/dhowells/linux-fs.git/log/?h=fscache-next

This is based on top of ceph's master branch as some of the patches
conflict with changes there.

David
---

Changes
=======
ver #2)
- Change kdoc references to renamed files[1].
- Swapped the order of the begin-read-function patch and the
prepare-to-split patch so that fewer functions need unstatic'ing.
- Fixed an uninitialised var in netfs_begin_read()[2][3].
- Fixed a refleak caused by an unremoved line when netfs_begin_read() was
introduced.
- Use "#if IS_ENABLED()" in netfs_i_cookie(), not "#ifdef".
- Implemented the missing bit of ceph readahead through netfs_readahead().
- Rearranged the patch order to make the ceph readahead possible.

Link: https://lore.kernel.org/r/[email protected]/ [1]
Link: https://lore.kernel.org/r/[email protected]/ [2]
Link: https://lore.kernel.org/r/[email protected]/ [3]
Link: https://lore.kernel.org/r/164622970143.3564931.3656393397237724303.stgit@warthog.procyon.org.uk/ # v1

---
David Howells (17):
netfs: Generate enums from trace symbol mapping lists
netfs: Rename netfs_read_*request to netfs_io_*request
netfs: Finish off rename of netfs_read_request to netfs_io_request
netfs: Split netfs_io_* object handling out
netfs: Adjust the netfs_rreq tracepoint slightly
netfs: Trace refcounting on the netfs_io_request struct
netfs: Trace refcounting on the netfs_io_subrequest struct
netfs: Adjust the netfs_failure tracepoint to indicate non-subreq lines
netfs: Change ->init_request() to return an error code
netfs: Add a netfs inode context
netfs: Add a function to consolidate beginning a read
netfs: Prepare to split read_helper.c
netfs: Rename read_helper.c to io.c
netfs: Split fs/netfs/read_helper.c
netfs: Split some core bits out into their own file
netfs: Keep track of the actual remote file size
afs: Maintain netfs_i_context::remote_i_size

Jeff Layton (1):
netfs: Refactor arguments for netfs_alloc_read_request

Jeffle Xu (1):
fscache: export fscache_end_operation()


Documentation/filesystems/netfs_library.rst | 139 ++-
fs/9p/cache.c | 10 +-
fs/9p/v9fs.c | 4 +-
fs/9p/v9fs.h | 12 +-
fs/9p/vfs_addr.c | 62 +-
fs/9p/vfs_inode.c | 13 +-
fs/afs/dynroot.c | 1 +
fs/afs/file.c | 41 +-
fs/afs/inode.c | 32 +-
fs/afs/internal.h | 23 +-
fs/afs/super.c | 4 +-
fs/afs/write.c | 10 +-
fs/cachefiles/io.c | 10 +-
fs/ceph/addr.c | 113 +-
fs/ceph/cache.c | 28 +-
fs/ceph/cache.h | 15 +-
fs/ceph/inode.c | 6 +-
fs/ceph/super.h | 16 +-
fs/cifs/cifsglob.h | 10 +-
fs/cifs/fscache.c | 19 +-
fs/cifs/fscache.h | 2 +-
fs/fscache/internal.h | 11 -
fs/netfs/Makefile | 8 +-
fs/netfs/buffered_read.c | 428 +++++++
fs/netfs/internal.h | 49 +-
fs/netfs/io.c | 657 ++++++++++
fs/netfs/main.c | 20 +
fs/netfs/objects.c | 161 +++
fs/netfs/read_helper.c | 1205 -------------------
fs/netfs/stats.c | 1 -
fs/nfs/fscache.c | 8 -
include/linux/fscache.h | 14 +
include/linux/netfs.h | 162 ++-
include/trace/events/cachefiles.h | 6 +-
include/trace/events/netfs.h | 188 ++-
35 files changed, 1860 insertions(+), 1628 deletions(-)
create mode 100644 fs/netfs/buffered_read.c
create mode 100644 fs/netfs/io.c
create mode 100644 fs/netfs/main.c
create mode 100644 fs/netfs/objects.c
delete mode 100644 fs/netfs/read_helper.c



2022-03-09 02:29:42

by David Howells

Subject: [PATCH v2 01/19] fscache: export fscache_end_operation()

From: Jeffle Xu <[email protected]>

Export fscache_end_operation() to avoid code duplication.

Also, since the paired fscache_begin_read_operation() is already
exported, it makes sense to export fscache_end_operation() as well.
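
For reference, a caller's begin/end pairing now looks roughly like this
(sketch only; "cookie" is the file's fscache cookie and error handling is
elided):

    struct netfs_cache_resources cres;
    int ret;

    memset(&cres, 0, sizeof(cres));
    ret = fscache_begin_read_operation(&cres, cookie);
    if (ret < 0)
        return ret;

    /* ... issue reads through cres.ops ... */

    fscache_end_operation(&cres);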

Signed-off-by: Jeffle Xu <[email protected]>
Signed-off-by: David Howells <[email protected]>
cc: [email protected]

Link: https://lore.kernel.org/r/[email protected]/ # Jeffle's v4
Link: https://lore.kernel.org/r/164622971432.3564931.12184135678781328146.stgit@warthog.procyon.org.uk/ # v1
---

fs/cifs/fscache.c | 8 --------
fs/fscache/internal.h | 11 -----------
fs/nfs/fscache.c | 8 --------
include/linux/fscache.h | 14 ++++++++++++++
4 files changed, 14 insertions(+), 27 deletions(-)

diff --git a/fs/cifs/fscache.c b/fs/cifs/fscache.c
index 33af72e0ac0c..b47c2011ce5b 100644
--- a/fs/cifs/fscache.c
+++ b/fs/cifs/fscache.c
@@ -134,14 +134,6 @@ void cifs_fscache_release_inode_cookie(struct inode *inode)
}
}

-static inline void fscache_end_operation(struct netfs_cache_resources *cres)
-{
- const struct netfs_cache_ops *ops = fscache_operation_valid(cres);
-
- if (ops)
- ops->end_operation(cres);
-}
-
/*
* Fallback page reading interface.
*/
diff --git a/fs/fscache/internal.h b/fs/fscache/internal.h
index f121c21590dc..ed1c9ed737f2 100644
--- a/fs/fscache/internal.h
+++ b/fs/fscache/internal.h
@@ -70,17 +70,6 @@ static inline void fscache_see_cookie(struct fscache_cookie *cookie,
where);
}

-/*
- * io.c
- */
-static inline void fscache_end_operation(struct netfs_cache_resources *cres)
-{
- const struct netfs_cache_ops *ops = fscache_operation_valid(cres);
-
- if (ops)
- ops->end_operation(cres);
-}
-
/*
* main.c
*/
diff --git a/fs/nfs/fscache.c b/fs/nfs/fscache.c
index cfe901650ab0..39654ca72d3d 100644
--- a/fs/nfs/fscache.c
+++ b/fs/nfs/fscache.c
@@ -249,14 +249,6 @@ void nfs_fscache_release_file(struct inode *inode, struct file *filp)
}
}

-static inline void fscache_end_operation(struct netfs_cache_resources *cres)
-{
- const struct netfs_cache_ops *ops = fscache_operation_valid(cres);
-
- if (ops)
- ops->end_operation(cres);
-}
-
/*
* Fallback page reading interface.
*/
diff --git a/include/linux/fscache.h b/include/linux/fscache.h
index 296c5f1d9f35..d2430da8aa67 100644
--- a/include/linux/fscache.h
+++ b/include/linux/fscache.h
@@ -456,6 +456,20 @@ int fscache_begin_read_operation(struct netfs_cache_resources *cres,
return -ENOBUFS;
}

+/**
+ * fscache_end_operation - End the read operation for the netfs lib
+ * @cres: The cache resources for the read operation
+ *
+ * Clean up the resources at the end of the read request.
+ */
+static inline void fscache_end_operation(struct netfs_cache_resources *cres)
+{
+ const struct netfs_cache_ops *ops = fscache_operation_valid(cres);
+
+ if (ops)
+ ops->end_operation(cres);
+}
+
/**
* fscache_read - Start a read from the cache.
* @cres: The cache resources to use


2022-03-09 02:29:58

by David Howells

Subject: [PATCH v2 02/19] netfs: Generate enums from trace symbol mapping lists

netfs has a number of lists of symbols for use in tracing, listed in an
enum and then listed again in a symbol->string mapping for use with
__print_symbolic(). This is, however, redundant.

Instead, use the symbol->string mapping list to also generate the enum in
the cases where the enum is defined in the same file.
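
To illustrate with an abbreviated mapping list:

    #define netfs_read_traces                           \
        EM(netfs_read_trace_expanded,  "EXPANDED ")     \
        E_(netfs_read_trace_readahead, "READAHEAD")

redefining EM()/E_() to expand to just their first argument:

    #undef EM
    #undef E_
    #define EM(a, b) a,
    #define E_(a, b) a

    enum netfs_read_trace { netfs_read_traces } __mode(byte);

is equivalent to writing the enum out by hand:

    enum netfs_read_trace {
        netfs_read_trace_expanded,
        netfs_read_trace_readahead
    } __mode(byte);

The same list is then redefined again (EM(a, b) -> { a, b }) to feed
__print_symbolic().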

Signed-off-by: David Howells <[email protected]>
cc: [email protected]

Link: https://lore.kernel.org/r/164622980839.3564931.5673300162465266909.stgit@warthog.procyon.org.uk/ # v1
---

include/trace/events/netfs.h | 57 ++++++++++--------------------------------
1 file changed, 14 insertions(+), 43 deletions(-)

diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
index e6f4ebbb4c69..88d9a74dd346 100644
--- a/include/trace/events/netfs.h
+++ b/include/trace/events/netfs.h
@@ -15,49 +15,6 @@
/*
* Define enums for tracing information.
*/
-#ifndef __NETFS_DECLARE_TRACE_ENUMS_ONCE_ONLY
-#define __NETFS_DECLARE_TRACE_ENUMS_ONCE_ONLY
-
-enum netfs_read_trace {
- netfs_read_trace_expanded,
- netfs_read_trace_readahead,
- netfs_read_trace_readpage,
- netfs_read_trace_write_begin,
-};
-
-enum netfs_rreq_trace {
- netfs_rreq_trace_assess,
- netfs_rreq_trace_done,
- netfs_rreq_trace_free,
- netfs_rreq_trace_resubmit,
- netfs_rreq_trace_unlock,
- netfs_rreq_trace_unmark,
- netfs_rreq_trace_write,
-};
-
-enum netfs_sreq_trace {
- netfs_sreq_trace_download_instead,
- netfs_sreq_trace_free,
- netfs_sreq_trace_prepare,
- netfs_sreq_trace_resubmit_short,
- netfs_sreq_trace_submit,
- netfs_sreq_trace_terminated,
- netfs_sreq_trace_write,
- netfs_sreq_trace_write_skip,
- netfs_sreq_trace_write_term,
-};
-
-enum netfs_failure {
- netfs_fail_check_write_begin,
- netfs_fail_copy_to_cache,
- netfs_fail_read,
- netfs_fail_short_readpage,
- netfs_fail_short_write_begin,
- netfs_fail_prepare_write,
-};
-
-#endif
-
#define netfs_read_traces \
EM(netfs_read_trace_expanded, "EXPANDED ") \
EM(netfs_read_trace_readahead, "READAHEAD") \
@@ -98,6 +55,20 @@ enum netfs_failure {
EM(netfs_fail_short_write_begin, "short-write-begin") \
E_(netfs_fail_prepare_write, "prep-write")

+#ifndef __NETFS_DECLARE_TRACE_ENUMS_ONCE_ONLY
+#define __NETFS_DECLARE_TRACE_ENUMS_ONCE_ONLY
+
+#undef EM
+#undef E_
+#define EM(a, b) a,
+#define E_(a, b) a
+
+enum netfs_read_trace { netfs_read_traces } __mode(byte);
+enum netfs_rreq_trace { netfs_rreq_traces } __mode(byte);
+enum netfs_sreq_trace { netfs_sreq_traces } __mode(byte);
+enum netfs_failure { netfs_failures } __mode(byte);
+
+#endif

/*
* Export enum symbols via userspace.


2022-03-09 02:30:13

by David Howells

Subject: [PATCH v2 03/19] netfs: Rename netfs_read_*request to netfs_io_*request

Rename netfs_read_*request to netfs_io_*request so that the same structures
can be used for the write helpers too.

perl -p -i -e 's/netfs_read_(request|subrequest)/netfs_io_$1/g' \
`git grep -l 'netfs_read_\(sub\|\)request'`
perl -p -i -e 's/nr_rd_ops/nr_outstanding/g' \
`git grep -l nr_rd_ops`
perl -p -i -e 's/nr_wr_ops/nr_copy_ops/g' \
`git grep -l nr_wr_ops`
perl -p -i -e 's/netfs_read_source/netfs_io_source/g' \
`git grep -l 'netfs_read_source'`
perl -p -i -e 's/netfs_io_request_ops/netfs_request_ops/g' \
`git grep -l 'netfs_io_request_ops'`
perl -p -i -e 's/init_rreq/init_request/g' \
`git grep -l 'init_rreq'`

Signed-off-by: David Howells <[email protected]>
cc: [email protected]

Link: https://lore.kernel.org/r/164622988070.3564931.7089670190434315183.stgit@warthog.procyon.org.uk/ # v1
---

Documentation/filesystems/netfs_library.rst | 40 +++---
fs/9p/vfs_addr.c | 16 +-
fs/afs/file.c | 12 +-
fs/afs/internal.h | 4 -
fs/cachefiles/io.c | 6 -
fs/ceph/addr.c | 16 +-
fs/ceph/cache.h | 4 -
fs/netfs/read_helper.c | 194 ++++++++++++++-------------
include/linux/netfs.h | 42 +++---
include/trace/events/cachefiles.h | 6 -
include/trace/events/netfs.h | 14 +-
11 files changed, 177 insertions(+), 177 deletions(-)

diff --git a/Documentation/filesystems/netfs_library.rst b/Documentation/filesystems/netfs_library.rst
index 4f373a8ec47b..a997e2d4321d 100644
--- a/Documentation/filesystems/netfs_library.rst
+++ b/Documentation/filesystems/netfs_library.rst
@@ -71,11 +71,11 @@ Read Helper Functions
Three read helpers are provided::

void netfs_readahead(struct readahead_control *ractl,
- const struct netfs_read_request_ops *ops,
+ const struct netfs_request_ops *ops,
void *netfs_priv);
int netfs_readpage(struct file *file,
struct folio *folio,
- const struct netfs_read_request_ops *ops,
+ const struct netfs_request_ops *ops,
void *netfs_priv);
int netfs_write_begin(struct file *file,
struct address_space *mapping,
@@ -84,7 +84,7 @@ Three read helpers are provided::
unsigned int flags,
struct folio **_folio,
void **_fsdata,
- const struct netfs_read_request_ops *ops,
+ const struct netfs_request_ops *ops,
void *netfs_priv);

Each corresponds to a VM operation, with the addition of a couple of parameters
@@ -116,7 +116,7 @@ occurs, the request will get partially completed if sufficient data is read.

Additionally, there is::

- * void netfs_subreq_terminated(struct netfs_read_subrequest *subreq,
+ * void netfs_subreq_terminated(struct netfs_io_subrequest *subreq,
ssize_t transferred_or_error,
bool was_async);

@@ -132,7 +132,7 @@ Read Helper Structures
The read helpers make use of a couple of structures to maintain the state of
the read. The first is a structure that manages a read request as a whole::

- struct netfs_read_request {
+ struct netfs_io_request {
struct inode *inode;
struct address_space *mapping;
struct netfs_cache_resources cache_resources;
@@ -140,7 +140,7 @@ the read. The first is a structure that manages a read request as a whole::
loff_t start;
size_t len;
loff_t i_size;
- const struct netfs_read_request_ops *netfs_ops;
+ const struct netfs_request_ops *netfs_ops;
unsigned int debug_id;
...
};
@@ -187,8 +187,8 @@ The above fields are the ones the netfs can use. They are:
The second structure is used to manage individual slices of the overall read
request::

- struct netfs_read_subrequest {
- struct netfs_read_request *rreq;
+ struct netfs_io_subrequest {
+ struct netfs_io_request *rreq;
loff_t start;
size_t len;
size_t transferred;
@@ -244,23 +244,23 @@ Read Helper Operations
The network filesystem must provide the read helpers with a table of operations
through which it can issue requests and negotiate::

- struct netfs_read_request_ops {
- void (*init_rreq)(struct netfs_read_request *rreq, struct file *file);
+ struct netfs_request_ops {
+ void (*init_request)(struct netfs_io_request *rreq, struct file *file);
bool (*is_cache_enabled)(struct inode *inode);
- int (*begin_cache_operation)(struct netfs_read_request *rreq);
- void (*expand_readahead)(struct netfs_read_request *rreq);
- bool (*clamp_length)(struct netfs_read_subrequest *subreq);
- void (*issue_op)(struct netfs_read_subrequest *subreq);
- bool (*is_still_valid)(struct netfs_read_request *rreq);
+ int (*begin_cache_operation)(struct netfs_io_request *rreq);
+ void (*expand_readahead)(struct netfs_io_request *rreq);
+ bool (*clamp_length)(struct netfs_io_subrequest *subreq);
+ void (*issue_op)(struct netfs_io_subrequest *subreq);
+ bool (*is_still_valid)(struct netfs_io_request *rreq);
int (*check_write_begin)(struct file *file, loff_t pos, unsigned len,
struct folio *folio, void **_fsdata);
- void (*done)(struct netfs_read_request *rreq);
+ void (*done)(struct netfs_io_request *rreq);
void (*cleanup)(struct address_space *mapping, void *netfs_priv);
};

The operations are as follows:

- * ``init_rreq()``
+ * ``init_request()``

[Optional] This is called to initialise the request structure. It is given
the file for reference and can modify the ->netfs_priv value.
@@ -420,12 +420,12 @@ The network filesystem's ->begin_cache_operation() method is called to set up a
cache and this must call into the cache to do the work. If using fscache, for
example, the cache would call::

- int fscache_begin_read_operation(struct netfs_read_request *rreq,
+ int fscache_begin_read_operation(struct netfs_io_request *rreq,
struct fscache_cookie *cookie);

passing in the request pointer and the cookie corresponding to the file.

-The netfs_read_request object contains a place for the cache to hang its
+The netfs_io_request object contains a place for the cache to hang its
state::

struct netfs_cache_resources {
@@ -443,7 +443,7 @@ operation table looks like the following::
void (*expand_readahead)(struct netfs_cache_resources *cres,
loff_t *_start, size_t *_len, loff_t i_size);

- enum netfs_read_source (*prepare_read)(struct netfs_read_subrequest *subreq,
+ enum netfs_io_source (*prepare_read)(struct netfs_io_subrequest *subreq,
loff_t i_size);

int (*read)(struct netfs_cache_resources *cres,
diff --git a/fs/9p/vfs_addr.c b/fs/9p/vfs_addr.c
index 9a10e68c5f30..7b79fabe7593 100644
--- a/fs/9p/vfs_addr.c
+++ b/fs/9p/vfs_addr.c
@@ -31,9 +31,9 @@
* v9fs_req_issue_op - Issue a read from 9P
* @subreq: The read to make
*/
-static void v9fs_req_issue_op(struct netfs_read_subrequest *subreq)
+static void v9fs_req_issue_op(struct netfs_io_subrequest *subreq)
{
- struct netfs_read_request *rreq = subreq->rreq;
+ struct netfs_io_request *rreq = subreq->rreq;
struct p9_fid *fid = rreq->netfs_priv;
struct iov_iter to;
loff_t pos = subreq->start + subreq->transferred;
@@ -52,11 +52,11 @@ static void v9fs_req_issue_op(struct netfs_read_subrequest *subreq)
}

/**
- * v9fs_init_rreq - Initialise a read request
+ * v9fs_init_request - Initialise a read request
* @rreq: The read request
* @file: The file being read from
*/
-static void v9fs_init_rreq(struct netfs_read_request *rreq, struct file *file)
+static void v9fs_init_request(struct netfs_io_request *rreq, struct file *file)
{
struct p9_fid *fid = file->private_data;

@@ -65,7 +65,7 @@ static void v9fs_init_rreq(struct netfs_read_request *rreq, struct file *file)
}

/**
- * v9fs_req_cleanup - Cleanup request initialized by v9fs_init_rreq
+ * v9fs_req_cleanup - Cleanup request initialized by v9fs_init_request
* @mapping: unused mapping of request to cleanup
* @priv: private data to cleanup, a fid, guaranted non-null.
*/
@@ -91,7 +91,7 @@ static bool v9fs_is_cache_enabled(struct inode *inode)
* v9fs_begin_cache_operation - Begin a cache operation for a read
* @rreq: The read request
*/
-static int v9fs_begin_cache_operation(struct netfs_read_request *rreq)
+static int v9fs_begin_cache_operation(struct netfs_io_request *rreq)
{
#ifdef CONFIG_9P_FSCACHE
struct fscache_cookie *cookie = v9fs_inode_cookie(V9FS_I(rreq->inode));
@@ -102,8 +102,8 @@ static int v9fs_begin_cache_operation(struct netfs_read_request *rreq)
#endif
}

-static const struct netfs_read_request_ops v9fs_req_ops = {
- .init_rreq = v9fs_init_rreq,
+static const struct netfs_request_ops v9fs_req_ops = {
+ .init_request = v9fs_init_request,
.is_cache_enabled = v9fs_is_cache_enabled,
.begin_cache_operation = v9fs_begin_cache_operation,
.issue_op = v9fs_req_issue_op,
diff --git a/fs/afs/file.c b/fs/afs/file.c
index 720818a7c166..e55761f8858c 100644
--- a/fs/afs/file.c
+++ b/fs/afs/file.c
@@ -240,7 +240,7 @@ void afs_put_read(struct afs_read *req)
static void afs_fetch_data_notify(struct afs_operation *op)
{
struct afs_read *req = op->fetch.req;
- struct netfs_read_subrequest *subreq = req->subreq;
+ struct netfs_io_subrequest *subreq = req->subreq;
int error = op->error;

if (error == -ECONNABORTED)
@@ -310,7 +310,7 @@ int afs_fetch_data(struct afs_vnode *vnode, struct afs_read *req)
return afs_do_sync_operation(op);
}

-static void afs_req_issue_op(struct netfs_read_subrequest *subreq)
+static void afs_req_issue_op(struct netfs_io_subrequest *subreq)
{
struct afs_vnode *vnode = AFS_FS_I(subreq->rreq->inode);
struct afs_read *fsreq;
@@ -359,7 +359,7 @@ static int afs_symlink_readpage(struct file *file, struct page *page)
return ret;
}

-static void afs_init_rreq(struct netfs_read_request *rreq, struct file *file)
+static void afs_init_request(struct netfs_io_request *rreq, struct file *file)
{
rreq->netfs_priv = key_get(afs_file_key(file));
}
@@ -371,7 +371,7 @@ static bool afs_is_cache_enabled(struct inode *inode)
return fscache_cookie_enabled(cookie) && cookie->cache_priv;
}

-static int afs_begin_cache_operation(struct netfs_read_request *rreq)
+static int afs_begin_cache_operation(struct netfs_io_request *rreq)
{
#ifdef CONFIG_AFS_FSCACHE
struct afs_vnode *vnode = AFS_FS_I(rreq->inode);
@@ -396,8 +396,8 @@ static void afs_priv_cleanup(struct address_space *mapping, void *netfs_priv)
key_put(netfs_priv);
}

-const struct netfs_read_request_ops afs_req_ops = {
- .init_rreq = afs_init_rreq,
+const struct netfs_request_ops afs_req_ops = {
+ .init_request = afs_init_request,
.is_cache_enabled = afs_is_cache_enabled,
.begin_cache_operation = afs_begin_cache_operation,
.check_write_begin = afs_check_write_begin,
diff --git a/fs/afs/internal.h b/fs/afs/internal.h
index b6f02321fc09..c56a0e1719ae 100644
--- a/fs/afs/internal.h
+++ b/fs/afs/internal.h
@@ -207,7 +207,7 @@ struct afs_read {
loff_t file_size; /* File size returned by server */
struct key *key; /* The key to use to reissue the read */
struct afs_vnode *vnode; /* The file being read into. */
- struct netfs_read_subrequest *subreq; /* Fscache helper read request this belongs to */
+ struct netfs_io_subrequest *subreq; /* Fscache helper read request this belongs to */
afs_dataversion_t data_version; /* Version number returned by server */
refcount_t usage;
unsigned int call_debug_id;
@@ -1063,7 +1063,7 @@ extern const struct address_space_operations afs_file_aops;
extern const struct address_space_operations afs_symlink_aops;
extern const struct inode_operations afs_file_inode_operations;
extern const struct file_operations afs_file_operations;
-extern const struct netfs_read_request_ops afs_req_ops;
+extern const struct netfs_request_ops afs_req_ops;

extern int afs_cache_wb_key(struct afs_vnode *, struct afs_file *);
extern void afs_put_wb_key(struct afs_wb_key *);
diff --git a/fs/cachefiles/io.c b/fs/cachefiles/io.c
index 753986ea1583..6ac6fdbc70d3 100644
--- a/fs/cachefiles/io.c
+++ b/fs/cachefiles/io.c
@@ -382,18 +382,18 @@ static int cachefiles_write(struct netfs_cache_resources *cres,
* Prepare a read operation, shortening it to a cached/uncached
* boundary as appropriate.
*/
-static enum netfs_read_source cachefiles_prepare_read(struct netfs_read_subrequest *subreq,
+static enum netfs_io_source cachefiles_prepare_read(struct netfs_io_subrequest *subreq,
loff_t i_size)
{
enum cachefiles_prepare_read_trace why;
- struct netfs_read_request *rreq = subreq->rreq;
+ struct netfs_io_request *rreq = subreq->rreq;
struct netfs_cache_resources *cres = &rreq->cache_resources;
struct cachefiles_object *object;
struct cachefiles_cache *cache;
struct fscache_cookie *cookie = fscache_cres_cookie(cres);
const struct cred *saved_cred;
struct file *file = cachefiles_cres_file(cres);
- enum netfs_read_source ret = NETFS_DOWNLOAD_FROM_SERVER;
+ enum netfs_io_source ret = NETFS_DOWNLOAD_FROM_SERVER;
loff_t off, to;
ino_t ino = file ? file_inode(file)->i_ino : 0;

diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 46e0881ae8b2..9d995f351079 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -183,7 +183,7 @@ static int ceph_releasepage(struct page *page, gfp_t gfp)
return 1;
}

-static void ceph_netfs_expand_readahead(struct netfs_read_request *rreq)
+static void ceph_netfs_expand_readahead(struct netfs_io_request *rreq)
{
struct inode *inode = rreq->inode;
struct ceph_inode_info *ci = ceph_inode(inode);
@@ -200,7 +200,7 @@ static void ceph_netfs_expand_readahead(struct netfs_read_request *rreq)
rreq->len = roundup(rreq->len, lo->stripe_unit);
}

-static bool ceph_netfs_clamp_length(struct netfs_read_subrequest *subreq)
+static bool ceph_netfs_clamp_length(struct netfs_io_subrequest *subreq)
{
struct inode *inode = subreq->rreq->inode;
struct ceph_fs_client *fsc = ceph_inode_to_client(inode);
@@ -219,7 +219,7 @@ static void finish_netfs_read(struct ceph_osd_request *req)
{
struct ceph_fs_client *fsc = ceph_inode_to_client(req->r_inode);
struct ceph_osd_data *osd_data = osd_req_op_extent_osd_data(req, 0);
- struct netfs_read_subrequest *subreq = req->r_priv;
+ struct netfs_io_subrequest *subreq = req->r_priv;
int num_pages;
int err = req->r_result;

@@ -245,9 +245,9 @@ static void finish_netfs_read(struct ceph_osd_request *req)
iput(req->r_inode);
}

-static bool ceph_netfs_issue_op_inline(struct netfs_read_subrequest *subreq)
+static bool ceph_netfs_issue_op_inline(struct netfs_io_subrequest *subreq)
{
- struct netfs_read_request *rreq = subreq->rreq;
+ struct netfs_io_request *rreq = subreq->rreq;
struct inode *inode = rreq->inode;
struct ceph_mds_reply_info_parsed *rinfo;
struct ceph_mds_reply_info_in *iinfo;
@@ -298,9 +298,9 @@ static bool ceph_netfs_issue_op_inline(struct netfs_read_subrequest *subreq)
return true;
}

-static void ceph_netfs_issue_op(struct netfs_read_subrequest *subreq)
+static void ceph_netfs_issue_op(struct netfs_io_subrequest *subreq)
{
- struct netfs_read_request *rreq = subreq->rreq;
+ struct netfs_io_request *rreq = subreq->rreq;
struct inode *inode = rreq->inode;
struct ceph_inode_info *ci = ceph_inode(inode);
struct ceph_fs_client *fsc = ceph_inode_to_client(inode);
@@ -364,7 +364,7 @@ static void ceph_readahead_cleanup(struct address_space *mapping, void *priv)
ceph_put_cap_refs(ci, got);
}

-static const struct netfs_read_request_ops ceph_netfs_read_ops = {
+static const struct netfs_request_ops ceph_netfs_read_ops = {
.is_cache_enabled = ceph_is_cache_enabled,
.begin_cache_operation = ceph_begin_cache_operation,
.issue_op = ceph_netfs_issue_op,
diff --git a/fs/ceph/cache.h b/fs/ceph/cache.h
index 09164389fa66..b8b3b5cb6438 100644
--- a/fs/ceph/cache.h
+++ b/fs/ceph/cache.h
@@ -62,7 +62,7 @@ static inline int ceph_fscache_set_page_dirty(struct page *page)
return fscache_set_page_dirty(page, ceph_fscache_cookie(ci));
}

-static inline int ceph_begin_cache_operation(struct netfs_read_request *rreq)
+static inline int ceph_begin_cache_operation(struct netfs_io_request *rreq)
{
struct fscache_cookie *cookie = ceph_fscache_cookie(ceph_inode(rreq->inode));

@@ -143,7 +143,7 @@ static inline bool ceph_is_cache_enabled(struct inode *inode)
return false;
}

-static inline int ceph_begin_cache_operation(struct netfs_read_request *rreq)
+static inline int ceph_begin_cache_operation(struct netfs_io_request *rreq)
{
return -ENOBUFS;
}
diff --git a/fs/netfs/read_helper.c b/fs/netfs/read_helper.c
index 501da990c259..50035d93f1dc 100644
--- a/fs/netfs/read_helper.c
+++ b/fs/netfs/read_helper.c
@@ -28,23 +28,23 @@ module_param_named(debug, netfs_debug, uint, S_IWUSR | S_IRUGO);
MODULE_PARM_DESC(netfs_debug, "Netfs support debugging mask");

static void netfs_rreq_work(struct work_struct *);
-static void __netfs_put_subrequest(struct netfs_read_subrequest *, bool);
+static void __netfs_put_subrequest(struct netfs_io_subrequest *, bool);

-static void netfs_put_subrequest(struct netfs_read_subrequest *subreq,
+static void netfs_put_subrequest(struct netfs_io_subrequest *subreq,
bool was_async)
{
if (refcount_dec_and_test(&subreq->usage))
__netfs_put_subrequest(subreq, was_async);
}

-static struct netfs_read_request *netfs_alloc_read_request(
- const struct netfs_read_request_ops *ops, void *netfs_priv,
+static struct netfs_io_request *netfs_alloc_read_request(
+ const struct netfs_request_ops *ops, void *netfs_priv,
struct file *file)
{
static atomic_t debug_ids;
- struct netfs_read_request *rreq;
+ struct netfs_io_request *rreq;

- rreq = kzalloc(sizeof(struct netfs_read_request), GFP_KERNEL);
+ rreq = kzalloc(sizeof(struct netfs_io_request), GFP_KERNEL);
if (rreq) {
rreq->netfs_ops = ops;
rreq->netfs_priv = netfs_priv;
@@ -55,27 +55,27 @@ static struct netfs_read_request *netfs_alloc_read_request(
INIT_WORK(&rreq->work, netfs_rreq_work);
refcount_set(&rreq->usage, 1);
__set_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags);
- if (ops->init_rreq)
- ops->init_rreq(rreq, file);
+ if (ops->init_request)
+ ops->init_request(rreq, file);
netfs_stat(&netfs_n_rh_rreq);
}

return rreq;
}

-static void netfs_get_read_request(struct netfs_read_request *rreq)
+static void netfs_get_read_request(struct netfs_io_request *rreq)
{
refcount_inc(&rreq->usage);
}

-static void netfs_rreq_clear_subreqs(struct netfs_read_request *rreq,
+static void netfs_rreq_clear_subreqs(struct netfs_io_request *rreq,
bool was_async)
{
- struct netfs_read_subrequest *subreq;
+ struct netfs_io_subrequest *subreq;

while (!list_empty(&rreq->subrequests)) {
subreq = list_first_entry(&rreq->subrequests,
- struct netfs_read_subrequest, rreq_link);
+ struct netfs_io_subrequest, rreq_link);
list_del(&subreq->rreq_link);
netfs_put_subrequest(subreq, was_async);
}
@@ -83,8 +83,8 @@ static void netfs_rreq_clear_subreqs(struct netfs_read_request *rreq,

static void netfs_free_read_request(struct work_struct *work)
{
- struct netfs_read_request *rreq =
- container_of(work, struct netfs_read_request, work);
+ struct netfs_io_request *rreq =
+ container_of(work, struct netfs_io_request, work);
netfs_rreq_clear_subreqs(rreq, false);
if (rreq->netfs_priv)
rreq->netfs_ops->cleanup(rreq->mapping, rreq->netfs_priv);
@@ -95,7 +95,7 @@ static void netfs_free_read_request(struct work_struct *work)
netfs_stat_d(&netfs_n_rh_rreq);
}

-static void netfs_put_read_request(struct netfs_read_request *rreq, bool was_async)
+static void netfs_put_read_request(struct netfs_io_request *rreq, bool was_async)
{
if (refcount_dec_and_test(&rreq->usage)) {
if (was_async) {
@@ -111,12 +111,12 @@ static void netfs_put_read_request(struct netfs_read_request *rreq, bool was_asy
/*
* Allocate and partially initialise an I/O request structure.
*/
-static struct netfs_read_subrequest *netfs_alloc_subrequest(
- struct netfs_read_request *rreq)
+static struct netfs_io_subrequest *netfs_alloc_subrequest(
+ struct netfs_io_request *rreq)
{
- struct netfs_read_subrequest *subreq;
+ struct netfs_io_subrequest *subreq;

- subreq = kzalloc(sizeof(struct netfs_read_subrequest), GFP_KERNEL);
+ subreq = kzalloc(sizeof(struct netfs_io_subrequest), GFP_KERNEL);
if (subreq) {
INIT_LIST_HEAD(&subreq->rreq_link);
refcount_set(&subreq->usage, 2);
@@ -128,15 +128,15 @@ static struct netfs_read_subrequest *netfs_alloc_subrequest(
return subreq;
}

-static void netfs_get_read_subrequest(struct netfs_read_subrequest *subreq)
+static void netfs_get_read_subrequest(struct netfs_io_subrequest *subreq)
{
refcount_inc(&subreq->usage);
}

-static void __netfs_put_subrequest(struct netfs_read_subrequest *subreq,
+static void __netfs_put_subrequest(struct netfs_io_subrequest *subreq,
bool was_async)
{
- struct netfs_read_request *rreq = subreq->rreq;
+ struct netfs_io_request *rreq = subreq->rreq;

trace_netfs_sreq(subreq, netfs_sreq_trace_free);
kfree(subreq);
@@ -147,7 +147,7 @@ static void __netfs_put_subrequest(struct netfs_read_subrequest *subreq,
/*
* Clear the unread part of an I/O request.
*/
-static void netfs_clear_unread(struct netfs_read_subrequest *subreq)
+static void netfs_clear_unread(struct netfs_io_subrequest *subreq)
{
struct iov_iter iter;

@@ -160,7 +160,7 @@ static void netfs_clear_unread(struct netfs_read_subrequest *subreq)
static void netfs_cache_read_terminated(void *priv, ssize_t transferred_or_error,
bool was_async)
{
- struct netfs_read_subrequest *subreq = priv;
+ struct netfs_io_subrequest *subreq = priv;

netfs_subreq_terminated(subreq, transferred_or_error, was_async);
}
@@ -169,8 +169,8 @@ static void netfs_cache_read_terminated(void *priv, ssize_t transferred_or_error
* Issue a read against the cache.
* - Eats the caller's ref on subreq.
*/
-static void netfs_read_from_cache(struct netfs_read_request *rreq,
- struct netfs_read_subrequest *subreq,
+static void netfs_read_from_cache(struct netfs_io_request *rreq,
+ struct netfs_io_subrequest *subreq,
enum netfs_read_from_hole read_hole)
{
struct netfs_cache_resources *cres = &rreq->cache_resources;
@@ -188,8 +188,8 @@ static void netfs_read_from_cache(struct netfs_read_request *rreq,
/*
* Fill a subrequest region with zeroes.
*/
-static void netfs_fill_with_zeroes(struct netfs_read_request *rreq,
- struct netfs_read_subrequest *subreq)
+static void netfs_fill_with_zeroes(struct netfs_io_request *rreq,
+ struct netfs_io_subrequest *subreq)
{
netfs_stat(&netfs_n_rh_zero);
__set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);
@@ -212,8 +212,8 @@ static void netfs_fill_with_zeroes(struct netfs_read_request *rreq,
* - NETFS_SREQ_CLEAR_TAIL: A short read - the rest of the buffer will be
* cleared.
*/
-static void netfs_read_from_server(struct netfs_read_request *rreq,
- struct netfs_read_subrequest *subreq)
+static void netfs_read_from_server(struct netfs_io_request *rreq,
+ struct netfs_io_subrequest *subreq)
{
netfs_stat(&netfs_n_rh_download);
rreq->netfs_ops->issue_op(subreq);
@@ -222,7 +222,7 @@ static void netfs_read_from_server(struct netfs_read_request *rreq,
/*
* Release those waiting.
*/
-static void netfs_rreq_completed(struct netfs_read_request *rreq, bool was_async)
+static void netfs_rreq_completed(struct netfs_io_request *rreq, bool was_async)
{
trace_netfs_rreq(rreq, netfs_rreq_trace_done);
netfs_rreq_clear_subreqs(rreq, was_async);
@@ -235,10 +235,10 @@ static void netfs_rreq_completed(struct netfs_read_request *rreq, bool was_async
*
* May be called in softirq mode and we inherit a ref from the caller.
*/
-static void netfs_rreq_unmark_after_write(struct netfs_read_request *rreq,
+static void netfs_rreq_unmark_after_write(struct netfs_io_request *rreq,
bool was_async)
{
- struct netfs_read_subrequest *subreq;
+ struct netfs_io_subrequest *subreq;
struct folio *folio;
pgoff_t unlocked = 0;
bool have_unlocked = false;
@@ -267,8 +267,8 @@ static void netfs_rreq_unmark_after_write(struct netfs_read_request *rreq,
static void netfs_rreq_copy_terminated(void *priv, ssize_t transferred_or_error,
bool was_async)
{
- struct netfs_read_subrequest *subreq = priv;
- struct netfs_read_request *rreq = subreq->rreq;
+ struct netfs_io_subrequest *subreq = priv;
+ struct netfs_io_request *rreq = subreq->rreq;

if (IS_ERR_VALUE(transferred_or_error)) {
netfs_stat(&netfs_n_rh_write_failed);
@@ -280,8 +280,8 @@ static void netfs_rreq_copy_terminated(void *priv, ssize_t transferred_or_error,

trace_netfs_sreq(subreq, netfs_sreq_trace_write_term);

- /* If we decrement nr_wr_ops to 0, the ref belongs to us. */
- if (atomic_dec_and_test(&rreq->nr_wr_ops))
+ /* If we decrement nr_copy_ops to 0, the ref belongs to us. */
+ if (atomic_dec_and_test(&rreq->nr_copy_ops))
netfs_rreq_unmark_after_write(rreq, was_async);

netfs_put_subrequest(subreq, was_async);
@@ -291,10 +291,10 @@ static void netfs_rreq_copy_terminated(void *priv, ssize_t transferred_or_error,
* Perform any outstanding writes to the cache. We inherit a ref from the
* caller.
*/
-static void netfs_rreq_do_write_to_cache(struct netfs_read_request *rreq)
+static void netfs_rreq_do_write_to_cache(struct netfs_io_request *rreq)
{
struct netfs_cache_resources *cres = &rreq->cache_resources;
- struct netfs_read_subrequest *subreq, *next, *p;
+ struct netfs_io_subrequest *subreq, *next, *p;
struct iov_iter iter;
int ret;

@@ -303,7 +303,7 @@ static void netfs_rreq_do_write_to_cache(struct netfs_read_request *rreq)
/* We don't want terminating writes trying to wake us up whilst we're
* still going through the list.
*/
- atomic_inc(&rreq->nr_wr_ops);
+ atomic_inc(&rreq->nr_copy_ops);

list_for_each_entry_safe(subreq, p, &rreq->subrequests, rreq_link) {
if (!test_bit(NETFS_SREQ_WRITE_TO_CACHE, &subreq->flags)) {
@@ -334,7 +334,7 @@ static void netfs_rreq_do_write_to_cache(struct netfs_read_request *rreq)
iov_iter_xarray(&iter, WRITE, &rreq->mapping->i_pages,
subreq->start, subreq->len);

- atomic_inc(&rreq->nr_wr_ops);
+ atomic_inc(&rreq->nr_copy_ops);
netfs_stat(&netfs_n_rh_write);
netfs_get_read_subrequest(subreq);
trace_netfs_sreq(subreq, netfs_sreq_trace_write);
@@ -342,20 +342,20 @@ static void netfs_rreq_do_write_to_cache(struct netfs_read_request *rreq)
netfs_rreq_copy_terminated, subreq);
}

- /* If we decrement nr_wr_ops to 0, the usage ref belongs to us. */
- if (atomic_dec_and_test(&rreq->nr_wr_ops))
+ /* If we decrement nr_copy_ops to 0, the usage ref belongs to us. */
+ if (atomic_dec_and_test(&rreq->nr_copy_ops))
netfs_rreq_unmark_after_write(rreq, false);
}

static void netfs_rreq_write_to_cache_work(struct work_struct *work)
{
- struct netfs_read_request *rreq =
- container_of(work, struct netfs_read_request, work);
+ struct netfs_io_request *rreq =
+ container_of(work, struct netfs_io_request, work);

netfs_rreq_do_write_to_cache(rreq);
}

-static void netfs_rreq_write_to_cache(struct netfs_read_request *rreq)
+static void netfs_rreq_write_to_cache(struct netfs_io_request *rreq)
{
rreq->work.func = netfs_rreq_write_to_cache_work;
if (!queue_work(system_unbound_wq, &rreq->work))
@@ -366,9 +366,9 @@ static void netfs_rreq_write_to_cache(struct netfs_read_request *rreq)
* Unlock the folios in a read operation. We need to set PG_fscache on any
* folios we're going to write back before we unlock them.
*/
-static void netfs_rreq_unlock(struct netfs_read_request *rreq)
+static void netfs_rreq_unlock(struct netfs_io_request *rreq)
{
- struct netfs_read_subrequest *subreq;
+ struct netfs_io_subrequest *subreq;
struct folio *folio;
unsigned int iopos, account = 0;
pgoff_t start_page = rreq->start / PAGE_SIZE;
@@ -391,7 +391,7 @@ static void netfs_rreq_unlock(struct netfs_read_request *rreq)
* mixture inside.
*/
subreq = list_first_entry(&rreq->subrequests,
- struct netfs_read_subrequest, rreq_link);
+ struct netfs_io_subrequest, rreq_link);
iopos = 0;
subreq_failed = (subreq->error < 0);

@@ -450,8 +450,8 @@ static void netfs_rreq_unlock(struct netfs_read_request *rreq)
/*
* Handle a short read.
*/
-static void netfs_rreq_short_read(struct netfs_read_request *rreq,
- struct netfs_read_subrequest *subreq)
+static void netfs_rreq_short_read(struct netfs_io_request *rreq,
+ struct netfs_io_subrequest *subreq)
{
__clear_bit(NETFS_SREQ_SHORT_READ, &subreq->flags);
__set_bit(NETFS_SREQ_SEEK_DATA_READ, &subreq->flags);
@@ -460,7 +460,7 @@ static void netfs_rreq_short_read(struct netfs_read_request *rreq,
trace_netfs_sreq(subreq, netfs_sreq_trace_resubmit_short);

netfs_get_read_subrequest(subreq);
- atomic_inc(&rreq->nr_rd_ops);
+ atomic_inc(&rreq->nr_outstanding);
if (subreq->source == NETFS_READ_FROM_CACHE)
netfs_read_from_cache(rreq, subreq, NETFS_READ_HOLE_CLEAR);
else
@@ -471,9 +471,9 @@ static void netfs_rreq_short_read(struct netfs_read_request *rreq,
* Resubmit any short or failed operations. Returns true if we got the rreq
* ref back.
*/
-static bool netfs_rreq_perform_resubmissions(struct netfs_read_request *rreq)
+static bool netfs_rreq_perform_resubmissions(struct netfs_io_request *rreq)
{
- struct netfs_read_subrequest *subreq;
+ struct netfs_io_subrequest *subreq;

WARN_ON(in_interrupt());

@@ -482,7 +482,7 @@ static bool netfs_rreq_perform_resubmissions(struct netfs_read_request *rreq)
/* We don't want terminating submissions trying to wake us up whilst
* we're still going through the list.
*/
- atomic_inc(&rreq->nr_rd_ops);
+ atomic_inc(&rreq->nr_outstanding);

__clear_bit(NETFS_RREQ_INCOMPLETE_IO, &rreq->flags);
list_for_each_entry(subreq, &rreq->subrequests, rreq_link) {
@@ -494,27 +494,27 @@ static bool netfs_rreq_perform_resubmissions(struct netfs_read_request *rreq)
netfs_stat(&netfs_n_rh_download_instead);
trace_netfs_sreq(subreq, netfs_sreq_trace_download_instead);
netfs_get_read_subrequest(subreq);
- atomic_inc(&rreq->nr_rd_ops);
+ atomic_inc(&rreq->nr_outstanding);
netfs_read_from_server(rreq, subreq);
} else if (test_bit(NETFS_SREQ_SHORT_READ, &subreq->flags)) {
netfs_rreq_short_read(rreq, subreq);
}
}

- /* If we decrement nr_rd_ops to 0, the usage ref belongs to us. */
- if (atomic_dec_and_test(&rreq->nr_rd_ops))
+ /* If we decrement nr_outstanding to 0, the usage ref belongs to us. */
+ if (atomic_dec_and_test(&rreq->nr_outstanding))
return true;

- wake_up_var(&rreq->nr_rd_ops);
+ wake_up_var(&rreq->nr_outstanding);
return false;
}

/*
* Check to see if the data read is still valid.
*/
-static void netfs_rreq_is_still_valid(struct netfs_read_request *rreq)
+static void netfs_rreq_is_still_valid(struct netfs_io_request *rreq)
{
- struct netfs_read_subrequest *subreq;
+ struct netfs_io_subrequest *subreq;

if (!rreq->netfs_ops->is_still_valid ||
rreq->netfs_ops->is_still_valid(rreq))
@@ -534,7 +534,7 @@ static void netfs_rreq_is_still_valid(struct netfs_read_request *rreq)
* Note that we could be in an ordinary kernel thread, on a workqueue or in
* softirq context at this point. We inherit a ref from the caller.
*/
-static void netfs_rreq_assess(struct netfs_read_request *rreq, bool was_async)
+static void netfs_rreq_assess(struct netfs_io_request *rreq, bool was_async)
{
trace_netfs_rreq(rreq, netfs_rreq_trace_assess);

@@ -561,8 +561,8 @@ static void netfs_rreq_assess(struct netfs_read_request *rreq, bool was_async)

static void netfs_rreq_work(struct work_struct *work)
{
- struct netfs_read_request *rreq =
- container_of(work, struct netfs_read_request, work);
+ struct netfs_io_request *rreq =
+ container_of(work, struct netfs_io_request, work);
netfs_rreq_assess(rreq, false);
}

@@ -570,7 +570,7 @@ static void netfs_rreq_work(struct work_struct *work)
* Handle the completion of all outstanding I/O operations on a read request.
* We inherit a ref from the caller.
*/
-static void netfs_rreq_terminated(struct netfs_read_request *rreq,
+static void netfs_rreq_terminated(struct netfs_io_request *rreq,
bool was_async)
{
if (test_bit(NETFS_RREQ_INCOMPLETE_IO, &rreq->flags) &&
@@ -600,11 +600,11 @@ static void netfs_rreq_terminated(struct netfs_read_request *rreq,
* If @was_async is true, the caller might be running in softirq or interrupt
* context and we can't sleep.
*/
-void netfs_subreq_terminated(struct netfs_read_subrequest *subreq,
+void netfs_subreq_terminated(struct netfs_io_subrequest *subreq,
ssize_t transferred_or_error,
bool was_async)
{
- struct netfs_read_request *rreq = subreq->rreq;
+ struct netfs_io_request *rreq = subreq->rreq;
int u;

_enter("[%u]{%llx,%lx},%zd",
@@ -648,12 +648,12 @@ void netfs_subreq_terminated(struct netfs_read_subrequest *subreq,
out:
trace_netfs_sreq(subreq, netfs_sreq_trace_terminated);

- /* If we decrement nr_rd_ops to 0, the ref belongs to us. */
- u = atomic_dec_return(&rreq->nr_rd_ops);
+ /* If we decrement nr_outstanding to 0, the ref belongs to us. */
+ u = atomic_dec_return(&rreq->nr_outstanding);
if (u == 0)
netfs_rreq_terminated(rreq, was_async);
else if (u == 1)
- wake_up_var(&rreq->nr_rd_ops);
+ wake_up_var(&rreq->nr_outstanding);

netfs_put_subrequest(subreq, was_async);
return;
@@ -691,10 +691,10 @@ void netfs_subreq_terminated(struct netfs_read_subrequest *subreq,
}
EXPORT_SYMBOL(netfs_subreq_terminated);

-static enum netfs_read_source netfs_cache_prepare_read(struct netfs_read_subrequest *subreq,
+static enum netfs_io_source netfs_cache_prepare_read(struct netfs_io_subrequest *subreq,
loff_t i_size)
{
- struct netfs_read_request *rreq = subreq->rreq;
+ struct netfs_io_request *rreq = subreq->rreq;
struct netfs_cache_resources *cres = &rreq->cache_resources;

if (cres->ops)
@@ -707,11 +707,11 @@ static enum netfs_read_source netfs_cache_prepare_read(struct netfs_read_subrequ
/*
* Work out what sort of subrequest the next one will be.
*/
-static enum netfs_read_source
-netfs_rreq_prepare_read(struct netfs_read_request *rreq,
- struct netfs_read_subrequest *subreq)
+static enum netfs_io_source
+netfs_rreq_prepare_read(struct netfs_io_request *rreq,
+ struct netfs_io_subrequest *subreq)
{
- enum netfs_read_source source;
+ enum netfs_io_source source;

_enter("%llx-%llx,%llx", subreq->start, subreq->start + subreq->len, rreq->i_size);

@@ -748,11 +748,11 @@ netfs_rreq_prepare_read(struct netfs_read_request *rreq,
/*
* Slice off a piece of a read request and submit an I/O request for it.
*/
-static bool netfs_rreq_submit_slice(struct netfs_read_request *rreq,
+static bool netfs_rreq_submit_slice(struct netfs_io_request *rreq,
unsigned int *_debug_index)
{
- struct netfs_read_subrequest *subreq;
- enum netfs_read_source source;
+ struct netfs_io_subrequest *subreq;
+ enum netfs_io_source source;

subreq = netfs_alloc_subrequest(rreq);
if (!subreq)
@@ -777,7 +777,7 @@ static bool netfs_rreq_submit_slice(struct netfs_read_request *rreq,
if (source == NETFS_INVALID_READ)
goto subreq_failed;

- atomic_inc(&rreq->nr_rd_ops);
+ atomic_inc(&rreq->nr_outstanding);

rreq->submitted += subreq->len;

@@ -804,7 +804,7 @@ static bool netfs_rreq_submit_slice(struct netfs_read_request *rreq,
return false;
}

-static void netfs_cache_expand_readahead(struct netfs_read_request *rreq,
+static void netfs_cache_expand_readahead(struct netfs_io_request *rreq,
loff_t *_start, size_t *_len, loff_t i_size)
{
struct netfs_cache_resources *cres = &rreq->cache_resources;
@@ -813,7 +813,7 @@ static void netfs_cache_expand_readahead(struct netfs_read_request *rreq,
cres->ops->expand_readahead(cres, _start, _len, i_size);
}

-static void netfs_rreq_expand(struct netfs_read_request *rreq,
+static void netfs_rreq_expand(struct netfs_io_request *rreq,
struct readahead_control *ractl)
{
/* Give the cache a chance to change the request parameters. The
@@ -866,10 +866,10 @@ static void netfs_rreq_expand(struct netfs_read_request *rreq,
* This is usable whether or not caching is enabled.
*/
void netfs_readahead(struct readahead_control *ractl,
- const struct netfs_read_request_ops *ops,
+ const struct netfs_request_ops *ops,
void *netfs_priv)
{
- struct netfs_read_request *rreq;
+ struct netfs_io_request *rreq;
unsigned int debug_index = 0;
int ret;

@@ -897,7 +897,7 @@ void netfs_readahead(struct readahead_control *ractl,

netfs_rreq_expand(rreq, ractl);

- atomic_set(&rreq->nr_rd_ops, 1);
+ atomic_set(&rreq->nr_outstanding, 1);
do {
if (!netfs_rreq_submit_slice(rreq, &debug_index))
break;
@@ -910,8 +910,8 @@ void netfs_readahead(struct readahead_control *ractl,
while (readahead_folio(ractl))
;

- /* If we decrement nr_rd_ops to 0, the ref belongs to us. */
- if (atomic_dec_and_test(&rreq->nr_rd_ops))
+ /* If we decrement nr_outstanding to 0, the ref belongs to us. */
+ if (atomic_dec_and_test(&rreq->nr_outstanding))
netfs_rreq_assess(rreq, false);
return;

@@ -944,10 +944,10 @@ EXPORT_SYMBOL(netfs_readahead);
*/
int netfs_readpage(struct file *file,
struct folio *folio,
- const struct netfs_read_request_ops *ops,
+ const struct netfs_request_ops *ops,
void *netfs_priv)
{
- struct netfs_read_request *rreq;
+ struct netfs_io_request *rreq;
unsigned int debug_index = 0;
int ret;

@@ -977,19 +977,19 @@ int netfs_readpage(struct file *file,

netfs_get_read_request(rreq);

- atomic_set(&rreq->nr_rd_ops, 1);
+ atomic_set(&rreq->nr_outstanding, 1);
do {
if (!netfs_rreq_submit_slice(rreq, &debug_index))
break;

} while (rreq->submitted < rreq->len);

- /* Keep nr_rd_ops incremented so that the ref always belongs to us, and
+ /* Keep nr_outstanding incremented so that the ref always belongs to us, and
* the service code isn't punted off to a random thread pool to
* process.
*/
do {
- wait_var_event(&rreq->nr_rd_ops, atomic_read(&rreq->nr_rd_ops) == 1);
+ wait_var_event(&rreq->nr_outstanding, atomic_read(&rreq->nr_outstanding) == 1);
netfs_rreq_assess(rreq, false);
} while (test_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags));

@@ -1076,10 +1076,10 @@ static bool netfs_skip_folio_read(struct folio *folio, loff_t pos, size_t len)
int netfs_write_begin(struct file *file, struct address_space *mapping,
loff_t pos, unsigned int len, unsigned int aop_flags,
struct folio **_folio, void **_fsdata,
- const struct netfs_read_request_ops *ops,
+ const struct netfs_request_ops *ops,
void *netfs_priv)
{
- struct netfs_read_request *rreq;
+ struct netfs_io_request *rreq;
struct folio *folio;
struct inode *inode = file_inode(file);
unsigned int debug_index = 0, fgp_flags;
@@ -1153,19 +1153,19 @@ int netfs_write_begin(struct file *file, struct address_space *mapping,
while (readahead_folio(&ractl))
;

- atomic_set(&rreq->nr_rd_ops, 1);
+ atomic_set(&rreq->nr_outstanding, 1);
do {
if (!netfs_rreq_submit_slice(rreq, &debug_index))
break;

} while (rreq->submitted < rreq->len);

- /* Keep nr_rd_ops incremented so that the ref always belongs to us, and
+ /* Keep nr_outstanding incremented so that the ref always belongs to us, and
* the service code isn't punted off to a random thread pool to
* process.
*/
for (;;) {
- wait_var_event(&rreq->nr_rd_ops, atomic_read(&rreq->nr_rd_ops) == 1);
+ wait_var_event(&rreq->nr_outstanding, atomic_read(&rreq->nr_outstanding) == 1);
netfs_rreq_assess(rreq, false);
if (!test_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags))
break;
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index 614f22213e21..a2ca91cb7a68 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -106,7 +106,7 @@ static inline int wait_on_page_fscache_killable(struct page *page)
return folio_wait_private_2_killable(page_folio(page));
}

-enum netfs_read_source {
+enum netfs_io_source {
NETFS_FILL_WITH_ZEROES,
NETFS_DOWNLOAD_FROM_SERVER,
NETFS_READ_FROM_CACHE,
@@ -130,8 +130,8 @@ struct netfs_cache_resources {
/*
* Descriptor for a single component subrequest.
*/
-struct netfs_read_subrequest {
- struct netfs_read_request *rreq; /* Supervising read request */
+struct netfs_io_subrequest {
+ struct netfs_io_request *rreq; /* Supervising read request */
struct list_head rreq_link; /* Link in rreq->subrequests */
loff_t start; /* Where to start the I/O */
size_t len; /* Size of the I/O */
@@ -139,7 +139,7 @@ struct netfs_read_subrequest {
refcount_t usage;
short error; /* 0 or error that occurred */
unsigned short debug_index; /* Index in list (for debugging output) */
- enum netfs_read_source source; /* Where to read from */
+ enum netfs_io_source source; /* Where to read from */
unsigned long flags;
#define NETFS_SREQ_WRITE_TO_CACHE 0 /* Set if should write to cache */
#define NETFS_SREQ_CLEAR_TAIL 1 /* Set if the rest of the read should be cleared */
@@ -152,7 +152,7 @@ struct netfs_read_subrequest {
* Descriptor for a read helper request. This is used to make multiple I/O
* requests on a variety of sources and then stitch the result together.
*/
-struct netfs_read_request {
+struct netfs_io_request {
struct work_struct work;
struct inode *inode; /* The file being accessed */
struct address_space *mapping; /* The mapping being accessed */
@@ -160,8 +160,8 @@ struct netfs_read_request {
struct list_head subrequests; /* Requests to fetch I/O from disk or net */
void *netfs_priv; /* Private data for the netfs */
unsigned int debug_id;
- atomic_t nr_rd_ops; /* Number of read ops in progress */
- atomic_t nr_wr_ops; /* Number of write ops in progress */
+ atomic_t nr_outstanding; /* Number of read ops in progress */
+ atomic_t nr_copy_ops; /* Number of write ops in progress */
size_t submitted; /* Amount submitted for I/O so far */
size_t len; /* Length of the request */
short error; /* 0 or error that occurred */
@@ -176,23 +176,23 @@ struct netfs_read_request {
#define NETFS_RREQ_DONT_UNLOCK_FOLIOS 3 /* Don't unlock the folios on completion */
#define NETFS_RREQ_FAILED 4 /* The request failed */
#define NETFS_RREQ_IN_PROGRESS 5 /* Unlocked when the request completes */
- const struct netfs_read_request_ops *netfs_ops;
+ const struct netfs_request_ops *netfs_ops;
};

/*
* Operations the network filesystem can/must provide to the helpers.
*/
-struct netfs_read_request_ops {
+struct netfs_request_ops {
bool (*is_cache_enabled)(struct inode *inode);
- void (*init_rreq)(struct netfs_read_request *rreq, struct file *file);
- int (*begin_cache_operation)(struct netfs_read_request *rreq);
- void (*expand_readahead)(struct netfs_read_request *rreq);
- bool (*clamp_length)(struct netfs_read_subrequest *subreq);
- void (*issue_op)(struct netfs_read_subrequest *subreq);
- bool (*is_still_valid)(struct netfs_read_request *rreq);
+ void (*init_request)(struct netfs_io_request *rreq, struct file *file);
+ int (*begin_cache_operation)(struct netfs_io_request *rreq);
+ void (*expand_readahead)(struct netfs_io_request *rreq);
+ bool (*clamp_length)(struct netfs_io_subrequest *subreq);
+ void (*issue_op)(struct netfs_io_subrequest *subreq);
+ bool (*is_still_valid)(struct netfs_io_request *rreq);
int (*check_write_begin)(struct file *file, loff_t pos, unsigned len,
struct folio *folio, void **_fsdata);
- void (*done)(struct netfs_read_request *rreq);
+ void (*done)(struct netfs_io_request *rreq);
void (*cleanup)(struct address_space *mapping, void *netfs_priv);
};

@@ -235,7 +235,7 @@ struct netfs_cache_ops {
/* Prepare a read operation, shortening it to a cached/uncached
* boundary as appropriate.
*/
- enum netfs_read_source (*prepare_read)(struct netfs_read_subrequest *subreq,
+ enum netfs_io_source (*prepare_read)(struct netfs_io_subrequest *subreq,
loff_t i_size);

/* Prepare a write operation, working out what part of the write we can
@@ -255,19 +255,19 @@ struct netfs_cache_ops {

struct readahead_control;
extern void netfs_readahead(struct readahead_control *,
- const struct netfs_read_request_ops *,
+ const struct netfs_request_ops *,
void *);
extern int netfs_readpage(struct file *,
struct folio *,
- const struct netfs_read_request_ops *,
+ const struct netfs_request_ops *,
void *);
extern int netfs_write_begin(struct file *, struct address_space *,
loff_t, unsigned int, unsigned int, struct folio **,
void **,
- const struct netfs_read_request_ops *,
+ const struct netfs_request_ops *,
void *);

-extern void netfs_subreq_terminated(struct netfs_read_subrequest *, ssize_t, bool);
+extern void netfs_subreq_terminated(struct netfs_io_subrequest *, ssize_t, bool);
extern void netfs_stats_show(struct seq_file *);

#endif /* _LINUX_NETFS_H */
diff --git a/include/trace/events/cachefiles.h b/include/trace/events/cachefiles.h
index c6f5aa74db89..002d0ae4f9bc 100644
--- a/include/trace/events/cachefiles.h
+++ b/include/trace/events/cachefiles.h
@@ -424,8 +424,8 @@ TRACE_EVENT(cachefiles_vol_coherency,
);

TRACE_EVENT(cachefiles_prep_read,
- TP_PROTO(struct netfs_read_subrequest *sreq,
- enum netfs_read_source source,
+ TP_PROTO(struct netfs_io_subrequest *sreq,
+ enum netfs_io_source source,
enum cachefiles_prepare_read_trace why,
ino_t cache_inode),

@@ -435,7 +435,7 @@ TRACE_EVENT(cachefiles_prep_read,
__field(unsigned int, rreq )
__field(unsigned short, index )
__field(unsigned short, flags )
- __field(enum netfs_read_source, source )
+ __field(enum netfs_io_source, source )
__field(enum cachefiles_prepare_read_trace, why )
__field(size_t, len )
__field(loff_t, start )
diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
index 88d9a74dd346..2d0665b416bf 100644
--- a/include/trace/events/netfs.h
+++ b/include/trace/events/netfs.h
@@ -94,7 +94,7 @@ netfs_failures;
#define E_(a, b) { a, b }

TRACE_EVENT(netfs_read,
- TP_PROTO(struct netfs_read_request *rreq,
+ TP_PROTO(struct netfs_io_request *rreq,
loff_t start, size_t len,
enum netfs_read_trace what),

@@ -127,7 +127,7 @@ TRACE_EVENT(netfs_read,
);

TRACE_EVENT(netfs_rreq,
- TP_PROTO(struct netfs_read_request *rreq,
+ TP_PROTO(struct netfs_io_request *rreq,
enum netfs_rreq_trace what),

TP_ARGS(rreq, what),
@@ -151,7 +151,7 @@ TRACE_EVENT(netfs_rreq,
);

TRACE_EVENT(netfs_sreq,
- TP_PROTO(struct netfs_read_subrequest *sreq,
+ TP_PROTO(struct netfs_io_subrequest *sreq,
enum netfs_sreq_trace what),

TP_ARGS(sreq, what),
@@ -161,7 +161,7 @@ TRACE_EVENT(netfs_sreq,
__field(unsigned short, index )
__field(short, error )
__field(unsigned short, flags )
- __field(enum netfs_read_source, source )
+ __field(enum netfs_io_source, source )
__field(enum netfs_sreq_trace, what )
__field(size_t, len )
__field(size_t, transferred )
@@ -190,8 +190,8 @@ TRACE_EVENT(netfs_sreq,
);

TRACE_EVENT(netfs_failure,
- TP_PROTO(struct netfs_read_request *rreq,
- struct netfs_read_subrequest *sreq,
+ TP_PROTO(struct netfs_io_request *rreq,
+ struct netfs_io_subrequest *sreq,
int error, enum netfs_failure what),

TP_ARGS(rreq, sreq, error, what),
@@ -201,7 +201,7 @@ TRACE_EVENT(netfs_failure,
__field(unsigned short, index )
__field(short, error )
__field(unsigned short, flags )
- __field(enum netfs_read_source, source )
+ __field(enum netfs_io_source, source )
__field(enum netfs_failure, what )
__field(size_t, len )
__field(size_t, transferred )


2022-03-09 02:30:15

by David Howells


Adjust helper function names and comments after the mass rename of
struct netfs_read_*request to struct netfs_io_*request.
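
For illustration only (this snippet is not part of the patch, and the
example_* names are hypothetical), a filesystem's ops table ends up
looking like:

	static const struct netfs_request_ops example_req_ops = {
		.init_request		= example_init_request,
		.begin_cache_operation	= example_begin_cache_operation,
		.issue_read		= example_issue_read,
		.cleanup		= example_cleanup,
	};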

Changes
=======
ver #2)
- Make the changes in the docs also.

Signed-off-by: David Howells <[email protected]>
cc: [email protected]

Link: https://lore.kernel.org/r/164622992433.3564931.6684311087845150271.stgit@warthog.procyon.org.uk/ # v1
---

Documentation/filesystems/netfs_library.rst | 4 +
fs/9p/vfs_addr.c | 6 +-
fs/afs/file.c | 4 +
fs/cachefiles/io.c | 4 +
fs/ceph/addr.c | 6 +-
fs/netfs/read_helper.c | 83 ++++++++++++++-------------
include/linux/netfs.h | 22 ++++---
7 files changed, 65 insertions(+), 64 deletions(-)

diff --git a/Documentation/filesystems/netfs_library.rst b/Documentation/filesystems/netfs_library.rst
index a997e2d4321d..4eb7e7b7b0fc 100644
--- a/Documentation/filesystems/netfs_library.rst
+++ b/Documentation/filesystems/netfs_library.rst
@@ -250,7 +250,7 @@ through which it can issue requests and negotiate::
int (*begin_cache_operation)(struct netfs_io_request *rreq);
void (*expand_readahead)(struct netfs_io_request *rreq);
bool (*clamp_length)(struct netfs_io_subrequest *subreq);
- void (*issue_op)(struct netfs_io_subrequest *subreq);
+ void (*issue_read)(struct netfs_io_subrequest *subreq);
bool (*is_still_valid)(struct netfs_io_request *rreq);
int (*check_write_begin)(struct file *file, loff_t pos, unsigned len,
struct folio *folio, void **_fsdata);
@@ -305,7 +305,7 @@ The operations are as follows:

This should return 0 on success and an error code on error.

- * ``issue_op()``
+ * ``issue_read()``

[Required] The helpers use this to dispatch a subrequest to the server for
reading. In the subrequest, ->start, ->len and ->transferred indicate what
diff --git a/fs/9p/vfs_addr.c b/fs/9p/vfs_addr.c
index 7b79fabe7593..fdc1033a1546 100644
--- a/fs/9p/vfs_addr.c
+++ b/fs/9p/vfs_addr.c
@@ -28,10 +28,10 @@
#include "fid.h"

/**
- * v9fs_req_issue_op - Issue a read from 9P
+ * v9fs_issue_read - Issue a read from 9P
* @subreq: The read to make
*/
-static void v9fs_req_issue_op(struct netfs_io_subrequest *subreq)
+static void v9fs_issue_read(struct netfs_io_subrequest *subreq)
{
struct netfs_io_request *rreq = subreq->rreq;
struct p9_fid *fid = rreq->netfs_priv;
@@ -106,7 +106,7 @@ static const struct netfs_request_ops v9fs_req_ops = {
.init_request = v9fs_init_request,
.is_cache_enabled = v9fs_is_cache_enabled,
.begin_cache_operation = v9fs_begin_cache_operation,
- .issue_op = v9fs_req_issue_op,
+ .issue_read = v9fs_issue_read,
.cleanup = v9fs_req_cleanup,
};

diff --git a/fs/afs/file.c b/fs/afs/file.c
index e55761f8858c..b19d635eed12 100644
--- a/fs/afs/file.c
+++ b/fs/afs/file.c
@@ -310,7 +310,7 @@ int afs_fetch_data(struct afs_vnode *vnode, struct afs_read *req)
return afs_do_sync_operation(op);
}

-static void afs_req_issue_op(struct netfs_io_subrequest *subreq)
+static void afs_issue_read(struct netfs_io_subrequest *subreq)
{
struct afs_vnode *vnode = AFS_FS_I(subreq->rreq->inode);
struct afs_read *fsreq;
@@ -401,7 +401,7 @@ const struct netfs_request_ops afs_req_ops = {
.is_cache_enabled = afs_is_cache_enabled,
.begin_cache_operation = afs_begin_cache_operation,
.check_write_begin = afs_check_write_begin,
- .issue_op = afs_req_issue_op,
+ .issue_read = afs_issue_read,
.cleanup = afs_priv_cleanup,
};

diff --git a/fs/cachefiles/io.c b/fs/cachefiles/io.c
index 6ac6fdbc70d3..b19f496db9ad 100644
--- a/fs/cachefiles/io.c
+++ b/fs/cachefiles/io.c
@@ -406,7 +406,7 @@ static enum netfs_io_source cachefiles_prepare_read(struct netfs_io_subrequest *
}

if (test_bit(FSCACHE_COOKIE_NO_DATA_TO_READ, &cookie->flags)) {
- __set_bit(NETFS_SREQ_WRITE_TO_CACHE, &subreq->flags);
+ __set_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags);
why = cachefiles_trace_read_no_data;
goto out_no_object;
}
@@ -475,7 +475,7 @@ static enum netfs_io_source cachefiles_prepare_read(struct netfs_io_subrequest *
goto out;

download_and_store:
- __set_bit(NETFS_SREQ_WRITE_TO_CACHE, &subreq->flags);
+ __set_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags);
out:
cachefiles_end_secure(cache, saved_cred);
out_no_object:
diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 9d995f351079..9189257476f8 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -259,7 +259,7 @@ static bool ceph_netfs_issue_op_inline(struct netfs_io_subrequest *subreq)
size_t len;

__set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);
- __clear_bit(NETFS_SREQ_WRITE_TO_CACHE, &subreq->flags);
+ __clear_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags);

if (subreq->start >= inode->i_size)
goto out;
@@ -298,7 +298,7 @@ static bool ceph_netfs_issue_op_inline(struct netfs_io_subrequest *subreq)
return true;
}

-static void ceph_netfs_issue_op(struct netfs_io_subrequest *subreq)
+static void ceph_netfs_issue_read(struct netfs_io_subrequest *subreq)
{
struct netfs_io_request *rreq = subreq->rreq;
struct inode *inode = rreq->inode;
@@ -367,7 +367,7 @@ static void ceph_readahead_cleanup(struct address_space *mapping, void *priv)
static const struct netfs_request_ops ceph_netfs_read_ops = {
.is_cache_enabled = ceph_is_cache_enabled,
.begin_cache_operation = ceph_begin_cache_operation,
- .issue_op = ceph_netfs_issue_op,
+ .issue_read = ceph_netfs_issue_read,
.expand_readahead = ceph_netfs_expand_readahead,
.clamp_length = ceph_netfs_clamp_length,
.check_write_begin = ceph_netfs_check_write_begin,
diff --git a/fs/netfs/read_helper.c b/fs/netfs/read_helper.c
index 50035d93f1dc..26d54055b17e 100644
--- a/fs/netfs/read_helper.c
+++ b/fs/netfs/read_helper.c
@@ -37,7 +37,7 @@ static void netfs_put_subrequest(struct netfs_io_subrequest *subreq,
__netfs_put_subrequest(subreq, was_async);
}

-static struct netfs_io_request *netfs_alloc_read_request(
+static struct netfs_io_request *netfs_alloc_request(
const struct netfs_request_ops *ops, void *netfs_priv,
struct file *file)
{
@@ -63,13 +63,12 @@ static struct netfs_io_request *netfs_alloc_read_request(
return rreq;
}

-static void netfs_get_read_request(struct netfs_io_request *rreq)
+static void netfs_get_request(struct netfs_io_request *rreq)
{
refcount_inc(&rreq->usage);
}

-static void netfs_rreq_clear_subreqs(struct netfs_io_request *rreq,
- bool was_async)
+static void netfs_clear_subrequests(struct netfs_io_request *rreq, bool was_async)
{
struct netfs_io_subrequest *subreq;

@@ -81,11 +80,11 @@ static void netfs_rreq_clear_subreqs(struct netfs_io_request *rreq,
}
}

-static void netfs_free_read_request(struct work_struct *work)
+static void netfs_free_request(struct work_struct *work)
{
struct netfs_io_request *rreq =
container_of(work, struct netfs_io_request, work);
- netfs_rreq_clear_subreqs(rreq, false);
+ netfs_clear_subrequests(rreq, false);
if (rreq->netfs_priv)
rreq->netfs_ops->cleanup(rreq->mapping, rreq->netfs_priv);
trace_netfs_rreq(rreq, netfs_rreq_trace_free);
@@ -95,15 +94,15 @@ static void netfs_free_read_request(struct work_struct *work)
netfs_stat_d(&netfs_n_rh_rreq);
}

-static void netfs_put_read_request(struct netfs_io_request *rreq, bool was_async)
+static void netfs_put_request(struct netfs_io_request *rreq, bool was_async)
{
if (refcount_dec_and_test(&rreq->usage)) {
if (was_async) {
- rreq->work.func = netfs_free_read_request;
+ rreq->work.func = netfs_free_request;
if (!queue_work(system_unbound_wq, &rreq->work))
BUG();
} else {
- netfs_free_read_request(&rreq->work);
+ netfs_free_request(&rreq->work);
}
}
}
@@ -121,14 +120,14 @@ static struct netfs_io_subrequest *netfs_alloc_subrequest(
INIT_LIST_HEAD(&subreq->rreq_link);
refcount_set(&subreq->usage, 2);
subreq->rreq = rreq;
- netfs_get_read_request(rreq);
+ netfs_get_request(rreq);
netfs_stat(&netfs_n_rh_sreq);
}

return subreq;
}

-static void netfs_get_read_subrequest(struct netfs_io_subrequest *subreq)
+static void netfs_get_subrequest(struct netfs_io_subrequest *subreq)
{
refcount_inc(&subreq->usage);
}
@@ -141,7 +140,7 @@ static void __netfs_put_subrequest(struct netfs_io_subrequest *subreq,
trace_netfs_sreq(subreq, netfs_sreq_trace_free);
kfree(subreq);
netfs_stat_d(&netfs_n_rh_sreq);
- netfs_put_read_request(rreq, was_async);
+ netfs_put_request(rreq, was_async);
}

/*
@@ -216,7 +215,7 @@ static void netfs_read_from_server(struct netfs_io_request *rreq,
struct netfs_io_subrequest *subreq)
{
netfs_stat(&netfs_n_rh_download);
- rreq->netfs_ops->issue_op(subreq);
+ rreq->netfs_ops->issue_read(subreq);
}

/*
@@ -225,8 +224,8 @@ static void netfs_read_from_server(struct netfs_io_request *rreq,
static void netfs_rreq_completed(struct netfs_io_request *rreq, bool was_async)
{
trace_netfs_rreq(rreq, netfs_rreq_trace_done);
- netfs_rreq_clear_subreqs(rreq, was_async);
- netfs_put_read_request(rreq, was_async);
+ netfs_clear_subrequests(rreq, was_async);
+ netfs_put_request(rreq, was_async);
}

/*
@@ -306,7 +305,7 @@ static void netfs_rreq_do_write_to_cache(struct netfs_io_request *rreq)
atomic_inc(&rreq->nr_copy_ops);

list_for_each_entry_safe(subreq, p, &rreq->subrequests, rreq_link) {
- if (!test_bit(NETFS_SREQ_WRITE_TO_CACHE, &subreq->flags)) {
+ if (!test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags)) {
list_del_init(&subreq->rreq_link);
netfs_put_subrequest(subreq, false);
}
@@ -336,7 +335,7 @@ static void netfs_rreq_do_write_to_cache(struct netfs_io_request *rreq)

atomic_inc(&rreq->nr_copy_ops);
netfs_stat(&netfs_n_rh_write);
- netfs_get_read_subrequest(subreq);
+ netfs_get_subrequest(subreq);
trace_netfs_sreq(subreq, netfs_sreq_trace_write);
cres->ops->write(cres, subreq->start, &iter,
netfs_rreq_copy_terminated, subreq);
@@ -378,9 +377,9 @@ static void netfs_rreq_unlock(struct netfs_io_request *rreq)
XA_STATE(xas, &rreq->mapping->i_pages, start_page);

if (test_bit(NETFS_RREQ_FAILED, &rreq->flags)) {
- __clear_bit(NETFS_RREQ_WRITE_TO_CACHE, &rreq->flags);
+ __clear_bit(NETFS_RREQ_COPY_TO_CACHE, &rreq->flags);
list_for_each_entry(subreq, &rreq->subrequests, rreq_link) {
- __clear_bit(NETFS_SREQ_WRITE_TO_CACHE, &subreq->flags);
+ __clear_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags);
}
}

@@ -408,7 +407,7 @@ static void netfs_rreq_unlock(struct netfs_io_request *rreq)
pg_failed = true;
break;
}
- if (test_bit(NETFS_SREQ_WRITE_TO_CACHE, &subreq->flags))
+ if (test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags))
folio_start_fscache(folio);
pg_failed |= subreq_failed;
if (pgend < iopos + subreq->len)
@@ -453,13 +452,13 @@ static void netfs_rreq_unlock(struct netfs_io_request *rreq)
static void netfs_rreq_short_read(struct netfs_io_request *rreq,
struct netfs_io_subrequest *subreq)
{
- __clear_bit(NETFS_SREQ_SHORT_READ, &subreq->flags);
+ __clear_bit(NETFS_SREQ_SHORT_IO, &subreq->flags);
__set_bit(NETFS_SREQ_SEEK_DATA_READ, &subreq->flags);

netfs_stat(&netfs_n_rh_short_read);
trace_netfs_sreq(subreq, netfs_sreq_trace_resubmit_short);

- netfs_get_read_subrequest(subreq);
+ netfs_get_subrequest(subreq);
atomic_inc(&rreq->nr_outstanding);
if (subreq->source == NETFS_READ_FROM_CACHE)
netfs_read_from_cache(rreq, subreq, NETFS_READ_HOLE_CLEAR);
@@ -493,10 +492,10 @@ static bool netfs_rreq_perform_resubmissions(struct netfs_io_request *rreq)
subreq->error = 0;
netfs_stat(&netfs_n_rh_download_instead);
trace_netfs_sreq(subreq, netfs_sreq_trace_download_instead);
- netfs_get_read_subrequest(subreq);
+ netfs_get_subrequest(subreq);
atomic_inc(&rreq->nr_outstanding);
netfs_read_from_server(rreq, subreq);
- } else if (test_bit(NETFS_SREQ_SHORT_READ, &subreq->flags)) {
+ } else if (test_bit(NETFS_SREQ_SHORT_IO, &subreq->flags)) {
netfs_rreq_short_read(rreq, subreq);
}
}
@@ -553,7 +552,7 @@ static void netfs_rreq_assess(struct netfs_io_request *rreq, bool was_async)
clear_bit_unlock(NETFS_RREQ_IN_PROGRESS, &rreq->flags);
wake_up_bit(&rreq->flags, NETFS_RREQ_IN_PROGRESS);

- if (test_bit(NETFS_RREQ_WRITE_TO_CACHE, &rreq->flags))
+ if (test_bit(NETFS_RREQ_COPY_TO_CACHE, &rreq->flags))
return netfs_rreq_write_to_cache(rreq);

netfs_rreq_completed(rreq, was_async);
@@ -642,8 +641,8 @@ void netfs_subreq_terminated(struct netfs_io_subrequest *subreq,

complete:
__clear_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags);
- if (test_bit(NETFS_SREQ_WRITE_TO_CACHE, &subreq->flags))
- set_bit(NETFS_RREQ_WRITE_TO_CACHE, &rreq->flags);
+ if (test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags))
+ set_bit(NETFS_RREQ_COPY_TO_CACHE, &rreq->flags);

out:
trace_netfs_sreq(subreq, netfs_sreq_trace_terminated);
@@ -674,7 +673,7 @@ void netfs_subreq_terminated(struct netfs_io_subrequest *subreq,
__clear_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags);
}

- __set_bit(NETFS_SREQ_SHORT_READ, &subreq->flags);
+ __set_bit(NETFS_SREQ_SHORT_IO, &subreq->flags);
set_bit(NETFS_RREQ_INCOMPLETE_IO, &rreq->flags);
goto out;

@@ -878,7 +877,7 @@ void netfs_readahead(struct readahead_control *ractl,
if (readahead_count(ractl) == 0)
goto cleanup;

- rreq = netfs_alloc_read_request(ops, netfs_priv, ractl->file);
+ rreq = netfs_alloc_request(ops, netfs_priv, ractl->file);
if (!rreq)
goto cleanup;
rreq->mapping = ractl->mapping;
@@ -916,7 +915,7 @@ void netfs_readahead(struct readahead_control *ractl,
return;

cleanup_free:
- netfs_put_read_request(rreq, false);
+ netfs_put_request(rreq, false);
return;
cleanup:
if (netfs_priv)
@@ -953,7 +952,7 @@ int netfs_readpage(struct file *file,

_enter("%lx", folio_index(folio));

- rreq = netfs_alloc_read_request(ops, netfs_priv, file);
+ rreq = netfs_alloc_request(ops, netfs_priv, file);
if (!rreq) {
if (netfs_priv)
ops->cleanup(folio_file_mapping(folio), netfs_priv);
@@ -975,7 +974,7 @@ int netfs_readpage(struct file *file,
netfs_stat(&netfs_n_rh_readpage);
trace_netfs_read(rreq, rreq->start, rreq->len, netfs_read_trace_readpage);

- netfs_get_read_request(rreq);
+ netfs_get_request(rreq);

atomic_set(&rreq->nr_outstanding, 1);
do {
@@ -989,7 +988,8 @@ int netfs_readpage(struct file *file,
* process.
*/
do {
- wait_var_event(&rreq->nr_outstanding, atomic_read(&rreq->nr_outstanding) == 1);
+ wait_var_event(&rreq->nr_outstanding,
+ atomic_read(&rreq->nr_outstanding) == 1);
netfs_rreq_assess(rreq, false);
} while (test_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags));

@@ -999,7 +999,7 @@ int netfs_readpage(struct file *file,
ret = -EIO;
}
out:
- netfs_put_read_request(rreq, false);
+ netfs_put_request(rreq, false);
return ret;
}
EXPORT_SYMBOL(netfs_readpage);
@@ -1122,7 +1122,7 @@ int netfs_write_begin(struct file *file, struct address_space *mapping,
}

ret = -ENOMEM;
- rreq = netfs_alloc_read_request(ops, netfs_priv, file);
+ rreq = netfs_alloc_request(ops, netfs_priv, file);
if (!rreq)
goto error;
rreq->mapping = folio_file_mapping(folio);
@@ -1146,7 +1146,7 @@ int netfs_write_begin(struct file *file, struct address_space *mapping,
*/
ractl._nr_pages = folio_nr_pages(folio);
netfs_rreq_expand(rreq, &ractl);
- netfs_get_read_request(rreq);
+ netfs_get_request(rreq);

/* We hold the folio locks, so we can drop the references */
folio_get(folio);
@@ -1160,12 +1160,13 @@ int netfs_write_begin(struct file *file, struct address_space *mapping,

} while (rreq->submitted < rreq->len);

- /* Keep nr_outstanding incremented so that the ref always belongs to us, and
- * the service code isn't punted off to a random thread pool to
+ /* Keep nr_outstanding incremented so that the ref always belongs to
+ * us, and the service code isn't punted off to a random thread pool to
* process.
*/
for (;;) {
- wait_var_event(&rreq->nr_outstanding, atomic_read(&rreq->nr_outstanding) == 1);
+ wait_var_event(&rreq->nr_outstanding,
+ atomic_read(&rreq->nr_outstanding) == 1);
netfs_rreq_assess(rreq, false);
if (!test_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags))
break;
@@ -1177,7 +1178,7 @@ int netfs_write_begin(struct file *file, struct address_space *mapping,
trace_netfs_failure(rreq, NULL, ret, netfs_fail_short_write_begin);
ret = -EIO;
}
- netfs_put_read_request(rreq, false);
+ netfs_put_request(rreq, false);
if (ret < 0)
goto error;

@@ -1193,7 +1194,7 @@ int netfs_write_begin(struct file *file, struct address_space *mapping,
return 0;

error_put:
- netfs_put_read_request(rreq, false);
+ netfs_put_request(rreq, false);
error:
folio_unlock(folio);
folio_put(folio);
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index a2ca91cb7a68..f63de27d6f29 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -131,7 +131,7 @@ struct netfs_cache_resources {
* Descriptor for a single component subrequest.
*/
struct netfs_io_subrequest {
- struct netfs_io_request *rreq; /* Supervising read request */
+ struct netfs_io_request *rreq; /* Supervising I/O request */
struct list_head rreq_link; /* Link in rreq->subrequests */
loff_t start; /* Where to start the I/O */
size_t len; /* Size of the I/O */
@@ -139,29 +139,29 @@ struct netfs_io_subrequest {
refcount_t usage;
short error; /* 0 or error that occurred */
unsigned short debug_index; /* Index in list (for debugging output) */
- enum netfs_io_source source; /* Where to read from */
+ enum netfs_io_source source; /* Where to read from/write to */
unsigned long flags;
-#define NETFS_SREQ_WRITE_TO_CACHE 0 /* Set if should write to cache */
+#define NETFS_SREQ_COPY_TO_CACHE 0 /* Set if should copy the data to the cache */
#define NETFS_SREQ_CLEAR_TAIL 1 /* Set if the rest of the read should be cleared */
-#define NETFS_SREQ_SHORT_READ 2 /* Set if there was a short read from the cache */
+#define NETFS_SREQ_SHORT_IO 2 /* Set if the I/O was short */
#define NETFS_SREQ_SEEK_DATA_READ 3 /* Set if ->read() should SEEK_DATA first */
#define NETFS_SREQ_NO_PROGRESS 4 /* Set if we didn't manage to read any data */
};

/*
- * Descriptor for a read helper request. This is used to make multiple I/O
- * requests on a variety of sources and then stitch the result together.
+ * Descriptor for an I/O helper request. This is used to make multiple I/O
+ * operations to a variety of data stores and then stitch the result together.
*/
struct netfs_io_request {
struct work_struct work;
struct inode *inode; /* The file being accessed */
struct address_space *mapping; /* The mapping being accessed */
struct netfs_cache_resources cache_resources;
- struct list_head subrequests; /* Requests to fetch I/O from disk or net */
+ struct list_head subrequests; /* Contributory I/O operations */
void *netfs_priv; /* Private data for the netfs */
unsigned int debug_id;
- atomic_t nr_outstanding; /* Number of read ops in progress */
- atomic_t nr_copy_ops; /* Number of write ops in progress */
+ atomic_t nr_outstanding; /* Number of ops in progress */
+ atomic_t nr_copy_ops; /* Number of copy-to-cache ops in progress */
size_t submitted; /* Amount submitted for I/O so far */
size_t len; /* Length of the request */
short error; /* 0 or error that occurred */
@@ -171,7 +171,7 @@ struct netfs_io_request {
refcount_t usage;
unsigned long flags;
#define NETFS_RREQ_INCOMPLETE_IO 0 /* Some ioreqs terminated short or with error */
-#define NETFS_RREQ_WRITE_TO_CACHE 1 /* Need to write to the cache */
+#define NETFS_RREQ_COPY_TO_CACHE 1 /* Need to write to the cache */
#define NETFS_RREQ_NO_UNLOCK_FOLIO 2 /* Don't unlock no_unlock_folio on completion */
#define NETFS_RREQ_DONT_UNLOCK_FOLIOS 3 /* Don't unlock the folios on completion */
#define NETFS_RREQ_FAILED 4 /* The request failed */
@@ -188,7 +188,7 @@ struct netfs_request_ops {
int (*begin_cache_operation)(struct netfs_io_request *rreq);
void (*expand_readahead)(struct netfs_io_request *rreq);
bool (*clamp_length)(struct netfs_io_subrequest *subreq);
- void (*issue_op)(struct netfs_io_subrequest *subreq);
+ void (*issue_read)(struct netfs_io_subrequest *subreq);
bool (*is_still_valid)(struct netfs_io_request *rreq);
int (*check_write_begin)(struct file *file, loff_t pos, unsigned len,
struct folio *folio, void **_fsdata);


2022-03-09 02:31:45

by David Howells

Subject: [PATCH v2 05/19] netfs: Split netfs_io_* object handling out

Split the netfs_io_* object handling out into a new file,
fs/netfs/objects.c, that will contain the object allocation, get and put
routines.
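
As a hedged sketch (not part of the patch; error handling is elided), a
user of the now-exported helpers follows the usual alloc/put pairing,
with the signatures as they stand at this point in the series:

	struct netfs_io_request *rreq;

	rreq = netfs_alloc_request(ops, netfs_priv, file);
	if (!rreq)
		return -ENOMEM;
	/* ... set up and submit subrequests ... */
	netfs_put_request(rreq, false);	/* drop the allocation ref */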

Signed-off-by: David Howells <[email protected]>
cc: [email protected]

Link: https://lore.kernel.org/r/164622995118.3564931.6089530629052064470.stgit@warthog.procyon.org.uk/ # v1
---

fs/netfs/Makefile | 6 ++
fs/netfs/internal.h | 18 +++++++
fs/netfs/objects.c | 123 ++++++++++++++++++++++++++++++++++++++++++++++++
fs/netfs/read_helper.c | 118 ----------------------------------------------
4 files changed, 147 insertions(+), 118 deletions(-)
create mode 100644 fs/netfs/objects.c

diff --git a/fs/netfs/Makefile b/fs/netfs/Makefile
index c15bfc966d96..939fd00a1fc9 100644
--- a/fs/netfs/Makefile
+++ b/fs/netfs/Makefile
@@ -1,5 +1,9 @@
# SPDX-License-Identifier: GPL-2.0

-netfs-y := read_helper.o stats.o
+netfs-y := \
+ objects.o \
+ read_helper.o
+
+netfs-$(CONFIG_NETFS_STATS) += stats.o

obj-$(CONFIG_NETFS_SUPPORT) := netfs.o
diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h
index b7f2c4459f33..cf7a3ddb16a4 100644
--- a/fs/netfs/internal.h
+++ b/fs/netfs/internal.h
@@ -5,17 +5,35 @@
* Written by David Howells ([email protected])
*/

+#include <linux/netfs.h>
+#include <trace/events/netfs.h>
+
#ifdef pr_fmt
#undef pr_fmt
#endif

#define pr_fmt(fmt) "netfs: " fmt

+/*
+ * objects.c
+ */
+struct netfs_io_request *netfs_alloc_request(const struct netfs_request_ops *ops,
+ void *netfs_priv,
+ struct file *file);
+void netfs_get_request(struct netfs_io_request *rreq);
+void netfs_clear_subrequests(struct netfs_io_request *rreq, bool was_async);
+void netfs_put_request(struct netfs_io_request *rreq, bool was_async);
+struct netfs_io_subrequest *netfs_alloc_subrequest(struct netfs_io_request *rreq);
+void netfs_put_subrequest(struct netfs_io_subrequest *subreq, bool was_async);
+void netfs_get_subrequest(struct netfs_io_subrequest *subreq);
+
/*
* read_helper.c
*/
extern unsigned int netfs_debug;

+void netfs_rreq_work(struct work_struct *work);
+
/*
* stats.c
*/
diff --git a/fs/netfs/objects.c b/fs/netfs/objects.c
new file mode 100644
index 000000000000..f7383c28dc6e
--- /dev/null
+++ b/fs/netfs/objects.c
@@ -0,0 +1,123 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Object lifetime handling and tracing.
+ *
+ * Copyright (C) 2022 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells ([email protected])
+ */
+
+#include <linux/slab.h>
+#include "internal.h"
+
+/*
+ * Allocate an I/O request and initialise it.
+ */
+struct netfs_io_request *netfs_alloc_request(
+ const struct netfs_request_ops *ops, void *netfs_priv,
+ struct file *file)
+{
+ static atomic_t debug_ids;
+ struct netfs_io_request *rreq;
+
+ rreq = kzalloc(sizeof(struct netfs_io_request), GFP_KERNEL);
+ if (rreq) {
+ rreq->netfs_ops = ops;
+ rreq->netfs_priv = netfs_priv;
+ rreq->inode = file_inode(file);
+ rreq->i_size = i_size_read(rreq->inode);
+ rreq->debug_id = atomic_inc_return(&debug_ids);
+ INIT_LIST_HEAD(&rreq->subrequests);
+ INIT_WORK(&rreq->work, netfs_rreq_work);
+ refcount_set(&rreq->usage, 1);
+ __set_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags);
+ if (ops->init_request)
+ ops->init_request(rreq, file);
+ netfs_stat(&netfs_n_rh_rreq);
+ }
+
+ return rreq;
+}
+
+void netfs_get_request(struct netfs_io_request *rreq)
+{
+ refcount_inc(&rreq->usage);
+}
+
+void netfs_clear_subrequests(struct netfs_io_request *rreq, bool was_async)
+{
+ struct netfs_io_subrequest *subreq;
+
+ while (!list_empty(&rreq->subrequests)) {
+ subreq = list_first_entry(&rreq->subrequests,
+ struct netfs_io_subrequest, rreq_link);
+ list_del(&subreq->rreq_link);
+ netfs_put_subrequest(subreq, was_async);
+ }
+}
+
+static void netfs_free_request(struct work_struct *work)
+{
+ struct netfs_io_request *rreq =
+ container_of(work, struct netfs_io_request, work);
+ netfs_clear_subrequests(rreq, false);
+ if (rreq->netfs_priv)
+ rreq->netfs_ops->cleanup(rreq->mapping, rreq->netfs_priv);
+ trace_netfs_rreq(rreq, netfs_rreq_trace_free);
+ if (rreq->cache_resources.ops)
+ rreq->cache_resources.ops->end_operation(&rreq->cache_resources);
+ kfree(rreq);
+ netfs_stat_d(&netfs_n_rh_rreq);
+}
+
+void netfs_put_request(struct netfs_io_request *rreq, bool was_async)
+{
+ if (refcount_dec_and_test(&rreq->usage)) {
+ if (was_async) {
+ rreq->work.func = netfs_free_request;
+ if (!queue_work(system_unbound_wq, &rreq->work))
+ BUG();
+ } else {
+ netfs_free_request(&rreq->work);
+ }
+ }
+}
+
+/*
+ * Allocate and partially initialise an I/O subrequest structure.
+ */
+struct netfs_io_subrequest *netfs_alloc_subrequest(struct netfs_io_request *rreq)
+{
+ struct netfs_io_subrequest *subreq;
+
+ subreq = kzalloc(sizeof(struct netfs_io_subrequest), GFP_KERNEL);
+ if (subreq) {
+ INIT_LIST_HEAD(&subreq->rreq_link);
+ refcount_set(&subreq->usage, 2);
+ subreq->rreq = rreq;
+ netfs_get_request(rreq);
+ netfs_stat(&netfs_n_rh_sreq);
+ }
+
+ return subreq;
+}
+
+void netfs_get_subrequest(struct netfs_io_subrequest *subreq)
+{
+ refcount_inc(&subreq->usage);
+}
+
+static void __netfs_put_subrequest(struct netfs_io_subrequest *subreq,
+ bool was_async)
+{
+ struct netfs_io_request *rreq = subreq->rreq;
+
+ trace_netfs_sreq(subreq, netfs_sreq_trace_free);
+ kfree(subreq);
+ netfs_stat_d(&netfs_n_rh_sreq);
+ netfs_put_request(rreq, was_async);
+}
+
+void netfs_put_subrequest(struct netfs_io_subrequest *subreq, bool was_async)
+{
+ if (refcount_dec_and_test(&subreq->usage))
+ __netfs_put_subrequest(subreq, was_async);
+}
diff --git a/fs/netfs/read_helper.c b/fs/netfs/read_helper.c
index 26d54055b17e..ef23ef9889d5 100644
--- a/fs/netfs/read_helper.c
+++ b/fs/netfs/read_helper.c
@@ -27,122 +27,6 @@ unsigned netfs_debug;
module_param_named(debug, netfs_debug, uint, S_IWUSR | S_IRUGO);
MODULE_PARM_DESC(netfs_debug, "Netfs support debugging mask");

-static void netfs_rreq_work(struct work_struct *);
-static void __netfs_put_subrequest(struct netfs_io_subrequest *, bool);
-
-static void netfs_put_subrequest(struct netfs_io_subrequest *subreq,
- bool was_async)
-{
- if (refcount_dec_and_test(&subreq->usage))
- __netfs_put_subrequest(subreq, was_async);
-}
-
-static struct netfs_io_request *netfs_alloc_request(
- const struct netfs_request_ops *ops, void *netfs_priv,
- struct file *file)
-{
- static atomic_t debug_ids;
- struct netfs_io_request *rreq;
-
- rreq = kzalloc(sizeof(struct netfs_io_request), GFP_KERNEL);
- if (rreq) {
- rreq->netfs_ops = ops;
- rreq->netfs_priv = netfs_priv;
- rreq->inode = file_inode(file);
- rreq->i_size = i_size_read(rreq->inode);
- rreq->debug_id = atomic_inc_return(&debug_ids);
- INIT_LIST_HEAD(&rreq->subrequests);
- INIT_WORK(&rreq->work, netfs_rreq_work);
- refcount_set(&rreq->usage, 1);
- __set_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags);
- if (ops->init_request)
- ops->init_request(rreq, file);
- netfs_stat(&netfs_n_rh_rreq);
- }
-
- return rreq;
-}
-
-static void netfs_get_request(struct netfs_io_request *rreq)
-{
- refcount_inc(&rreq->usage);
-}
-
-static void netfs_clear_subrequests(struct netfs_io_request *rreq, bool was_async)
-{
- struct netfs_io_subrequest *subreq;
-
- while (!list_empty(&rreq->subrequests)) {
- subreq = list_first_entry(&rreq->subrequests,
- struct netfs_io_subrequest, rreq_link);
- list_del(&subreq->rreq_link);
- netfs_put_subrequest(subreq, was_async);
- }
-}
-
-static void netfs_free_request(struct work_struct *work)
-{
- struct netfs_io_request *rreq =
- container_of(work, struct netfs_io_request, work);
- netfs_clear_subrequests(rreq, false);
- if (rreq->netfs_priv)
- rreq->netfs_ops->cleanup(rreq->mapping, rreq->netfs_priv);
- trace_netfs_rreq(rreq, netfs_rreq_trace_free);
- if (rreq->cache_resources.ops)
- rreq->cache_resources.ops->end_operation(&rreq->cache_resources);
- kfree(rreq);
- netfs_stat_d(&netfs_n_rh_rreq);
-}
-
-static void netfs_put_request(struct netfs_io_request *rreq, bool was_async)
-{
- if (refcount_dec_and_test(&rreq->usage)) {
- if (was_async) {
- rreq->work.func = netfs_free_request;
- if (!queue_work(system_unbound_wq, &rreq->work))
- BUG();
- } else {
- netfs_free_request(&rreq->work);
- }
- }
-}
-
-/*
- * Allocate and partially initialise an I/O request structure.
- */
-static struct netfs_io_subrequest *netfs_alloc_subrequest(
- struct netfs_io_request *rreq)
-{
- struct netfs_io_subrequest *subreq;
-
- subreq = kzalloc(sizeof(struct netfs_io_subrequest), GFP_KERNEL);
- if (subreq) {
- INIT_LIST_HEAD(&subreq->rreq_link);
- refcount_set(&subreq->usage, 2);
- subreq->rreq = rreq;
- netfs_get_request(rreq);
- netfs_stat(&netfs_n_rh_sreq);
- }
-
- return subreq;
-}
-
-static void netfs_get_subrequest(struct netfs_io_subrequest *subreq)
-{
- refcount_inc(&subreq->usage);
-}
-
-static void __netfs_put_subrequest(struct netfs_io_subrequest *subreq,
- bool was_async)
-{
- struct netfs_io_request *rreq = subreq->rreq;
-
- trace_netfs_sreq(subreq, netfs_sreq_trace_free);
- kfree(subreq);
- netfs_stat_d(&netfs_n_rh_sreq);
- netfs_put_request(rreq, was_async);
-}
-
/*
* Clear the unread part of an I/O request.
*/
@@ -558,7 +442,7 @@ static void netfs_rreq_assess(struct netfs_io_request *rreq, bool was_async)
netfs_rreq_completed(rreq, was_async);
}

-static void netfs_rreq_work(struct work_struct *work)
+void netfs_rreq_work(struct work_struct *work)
{
struct netfs_io_request *rreq =
container_of(work, struct netfs_io_request, work);


2022-03-09 02:31:57

by David Howells

Subject: [PATCH v2 07/19] netfs: Trace refcounting on the netfs_io_request struct

Add refcount tracing for the netfs_io_request structure. To this end,
rename the ->usage member to ->ref and switch to the __refcount_*()
functions so that the post-operation count can be passed to the new
tracepoint.
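
The pattern, shown here as an illustrative fragment taken from the get
helper, is to capture the old count with the __refcount_*() variants and
log the post-operation value together with the request's debug ID:

	int r;

	__refcount_inc(&rreq->ref, &r);
	trace_netfs_rreq_ref(rreq->debug_id, r + 1, what);

which would produce a trace line along the lines of (values
hypothetical):

	W=00000001 GET SUBREQ  r=2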

Signed-off-by: David Howells <[email protected]>
cc: [email protected]

Link: https://lore.kernel.org/r/164622997668.3564931.14456171619219324968.stgit@warthog.procyon.org.uk/ # v1
---

fs/netfs/internal.h | 11 +++++++++--
fs/netfs/objects.c | 24 +++++++++++++++++-------
fs/netfs/read_helper.c | 14 +++++++-------
include/linux/netfs.h | 2 +-
include/trace/events/netfs.h | 35 +++++++++++++++++++++++++++++++++++
5 files changed, 69 insertions(+), 17 deletions(-)

diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h
index cf7a3ddb16a4..89b02357500d 100644
--- a/fs/netfs/internal.h
+++ b/fs/netfs/internal.h
@@ -20,13 +20,20 @@
struct netfs_io_request *netfs_alloc_request(const struct netfs_request_ops *ops,
void *netfs_priv,
struct file *file);
-void netfs_get_request(struct netfs_io_request *rreq);
+void netfs_get_request(struct netfs_io_request *rreq, enum netfs_rreq_ref_trace what);
void netfs_clear_subrequests(struct netfs_io_request *rreq, bool was_async);
-void netfs_put_request(struct netfs_io_request *rreq, bool was_async);
+void netfs_put_request(struct netfs_io_request *rreq, bool was_async,
+ enum netfs_rreq_ref_trace what);
struct netfs_io_subrequest *netfs_alloc_subrequest(struct netfs_io_request *rreq);
void netfs_put_subrequest(struct netfs_io_subrequest *subreq, bool was_async);
void netfs_get_subrequest(struct netfs_io_subrequest *subreq);

+static inline void netfs_see_request(struct netfs_io_request *rreq,
+ enum netfs_rreq_ref_trace what)
+{
+ trace_netfs_rreq_ref(rreq->debug_id, refcount_read(&rreq->ref), what);
+}
+
/*
* read_helper.c
*/
diff --git a/fs/netfs/objects.c b/fs/netfs/objects.c
index f7383c28dc6e..4e29c3bb6e5a 100644
--- a/fs/netfs/objects.c
+++ b/fs/netfs/objects.c
@@ -27,7 +27,7 @@ struct netfs_io_request *netfs_alloc_request(
rreq->debug_id = atomic_inc_return(&debug_ids);
INIT_LIST_HEAD(&rreq->subrequests);
INIT_WORK(&rreq->work, netfs_rreq_work);
- refcount_set(&rreq->usage, 1);
+ refcount_set(&rreq->ref, 1);
__set_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags);
if (ops->init_request)
ops->init_request(rreq, file);
@@ -37,9 +37,12 @@ struct netfs_io_request *netfs_alloc_request(
return rreq;
}

-void netfs_get_request(struct netfs_io_request *rreq)
+void netfs_get_request(struct netfs_io_request *rreq, enum netfs_rreq_ref_trace what)
{
- refcount_inc(&rreq->usage);
+ int r;
+
+ __refcount_inc(&rreq->ref, &r);
+ trace_netfs_rreq_ref(rreq->debug_id, r + 1, what);
}

void netfs_clear_subrequests(struct netfs_io_request *rreq, bool was_async)
@@ -68,9 +71,16 @@ static void netfs_free_request(struct work_struct *work)
netfs_stat_d(&netfs_n_rh_rreq);
}

-void netfs_put_request(struct netfs_io_request *rreq, bool was_async)
+void netfs_put_request(struct netfs_io_request *rreq, bool was_async,
+ enum netfs_rreq_ref_trace what)
{
- if (refcount_dec_and_test(&rreq->usage)) {
+ unsigned int debug_id = rreq->debug_id;
+ bool dead;
+ int r;
+
+ dead = __refcount_dec_and_test(&rreq->ref, &r);
+ trace_netfs_rreq_ref(debug_id, r - 1, what);
+ if (dead) {
if (was_async) {
rreq->work.func = netfs_free_request;
if (!queue_work(system_unbound_wq, &rreq->work))
@@ -93,7 +103,7 @@ struct netfs_io_subrequest *netfs_alloc_subrequest(struct netfs_io_request *rreq
INIT_LIST_HEAD(&subreq->rreq_link);
refcount_set(&subreq->usage, 2);
subreq->rreq = rreq;
- netfs_get_request(rreq);
+ netfs_get_request(rreq, netfs_rreq_trace_get_subreq);
netfs_stat(&netfs_n_rh_sreq);
}

@@ -113,7 +123,7 @@ static void __netfs_put_subrequest(struct netfs_io_subrequest *subreq,
trace_netfs_sreq(subreq, netfs_sreq_trace_free);
kfree(subreq);
netfs_stat_d(&netfs_n_rh_sreq);
- netfs_put_request(rreq, was_async);
+ netfs_put_request(rreq, was_async, netfs_rreq_trace_put_subreq);
}

void netfs_put_subrequest(struct netfs_io_subrequest *subreq, bool was_async)
diff --git a/fs/netfs/read_helper.c b/fs/netfs/read_helper.c
index 181aeda32649..620c3be5ec0a 100644
--- a/fs/netfs/read_helper.c
+++ b/fs/netfs/read_helper.c
@@ -109,7 +109,7 @@ static void netfs_rreq_completed(struct netfs_io_request *rreq, bool was_async)
{
trace_netfs_rreq(rreq, netfs_rreq_trace_done);
netfs_clear_subrequests(rreq, was_async);
- netfs_put_request(rreq, was_async);
+ netfs_put_request(rreq, was_async, netfs_rreq_trace_put_complete);
}

/*
@@ -799,7 +799,7 @@ void netfs_readahead(struct readahead_control *ractl,
return;

cleanup_free:
- netfs_put_request(rreq, false);
+ netfs_put_request(rreq, false, netfs_rreq_trace_put_failed);
return;
cleanup:
if (netfs_priv)
@@ -858,7 +858,7 @@ int netfs_readpage(struct file *file,
netfs_stat(&netfs_n_rh_readpage);
trace_netfs_read(rreq, rreq->start, rreq->len, netfs_read_trace_readpage);

- netfs_get_request(rreq);
+ netfs_get_request(rreq, netfs_rreq_trace_get_hold);

atomic_set(&rreq->nr_outstanding, 1);
do {
@@ -883,7 +883,7 @@ int netfs_readpage(struct file *file,
ret = -EIO;
}
out:
- netfs_put_request(rreq, false);
+ netfs_put_request(rreq, false, netfs_rreq_trace_put_hold);
return ret;
}
EXPORT_SYMBOL(netfs_readpage);
@@ -1030,13 +1030,13 @@ int netfs_write_begin(struct file *file, struct address_space *mapping,
*/
ractl._nr_pages = folio_nr_pages(folio);
netfs_rreq_expand(rreq, &ractl);
- netfs_get_request(rreq);

/* We hold the folio locks, so we can drop the references */
folio_get(folio);
while (readahead_folio(&ractl))
;

+ netfs_get_request(rreq, netfs_rreq_trace_get_hold);
atomic_set(&rreq->nr_outstanding, 1);
do {
if (!netfs_rreq_submit_slice(rreq, &debug_index))
@@ -1062,7 +1062,7 @@ int netfs_write_begin(struct file *file, struct address_space *mapping,
trace_netfs_failure(rreq, NULL, ret, netfs_fail_short_write_begin);
ret = -EIO;
}
- netfs_put_request(rreq, false);
+ netfs_put_request(rreq, false, netfs_rreq_trace_put_hold);
if (ret < 0)
goto error;

@@ -1078,7 +1078,7 @@ int netfs_write_begin(struct file *file, struct address_space *mapping,
return 0;

error_put:
- netfs_put_request(rreq, false);
+ netfs_put_request(rreq, false, netfs_rreq_trace_put_failed);
error:
folio_unlock(folio);
folio_put(folio);
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index f63de27d6f29..541aebe828f3 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -168,7 +168,7 @@ struct netfs_io_request {
loff_t i_size; /* Size of the file */
loff_t start; /* Start position */
pgoff_t no_unlock_folio; /* Don't unlock this folio after read */
- refcount_t usage;
+ refcount_t ref;
unsigned long flags;
#define NETFS_RREQ_INCOMPLETE_IO 0 /* Some ioreqs terminated short or with error */
#define NETFS_RREQ_COPY_TO_CACHE 1 /* Need to write to the cache */
diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
index daf171de2142..602f3854da81 100644
--- a/include/trace/events/netfs.h
+++ b/include/trace/events/netfs.h
@@ -55,6 +55,15 @@
EM(netfs_fail_short_write_begin, "short-write-begin") \
E_(netfs_fail_prepare_write, "prep-write")

+#define netfs_rreq_ref_traces \
+ EM(netfs_rreq_trace_get_hold, "GET HOLD ") \
+ EM(netfs_rreq_trace_get_subreq, "GET SUBREQ ") \
+ EM(netfs_rreq_trace_put_complete, "PUT COMPLT ") \
+ EM(netfs_rreq_trace_put_failed, "PUT FAILED ") \
+ EM(netfs_rreq_trace_put_hold, "PUT HOLD ") \
+ EM(netfs_rreq_trace_put_subreq, "PUT SUBREQ ") \
+ E_(netfs_rreq_trace_new, "NEW ")
+
#ifndef __NETFS_DECLARE_TRACE_ENUMS_ONCE_ONLY
#define __NETFS_DECLARE_TRACE_ENUMS_ONCE_ONLY

@@ -67,6 +76,7 @@ enum netfs_read_trace { netfs_read_traces } __mode(byte);
enum netfs_rreq_trace { netfs_rreq_traces } __mode(byte);
enum netfs_sreq_trace { netfs_sreq_traces } __mode(byte);
enum netfs_failure { netfs_failures } __mode(byte);
+enum netfs_rreq_ref_trace { netfs_rreq_ref_traces } __mode(byte);

#endif

@@ -83,6 +93,7 @@ netfs_rreq_traces;
netfs_sreq_sources;
netfs_sreq_traces;
netfs_failures;
+netfs_rreq_ref_traces;

/*
* Now redefine the EM() and E_() macros to map the enums to the strings that
@@ -229,6 +240,30 @@ TRACE_EVENT(netfs_failure,
__entry->error)
);

+TRACE_EVENT(netfs_rreq_ref,
+ TP_PROTO(unsigned int rreq_debug_id, int ref,
+ enum netfs_rreq_ref_trace what),
+
+ TP_ARGS(rreq_debug_id, ref, what),
+
+ TP_STRUCT__entry(
+ __field(unsigned int, rreq )
+ __field(int, ref )
+ __field(enum netfs_rreq_ref_trace, what )
+ ),
+
+ TP_fast_assign(
+ __entry->rreq = rreq_debug_id;
+ __entry->ref = ref;
+ __entry->what = what;
+ ),
+
+ TP_printk("W=%08x %s r=%u",
+ __entry->rreq,
+ __print_symbolic(__entry->what, netfs_rreq_ref_traces),
+ __entry->ref)
+ );
+
#endif /* _TRACE_NETFS_H */

/* This part must be outside protection */


2022-03-09 16:02:41

by Jeff Layton

Subject: Re: [PATCH v2 05/19] netfs: Split netfs_io_* object handling out

On Tue, 2022-03-08 at 23:26 +0000, David Howells wrote:
> Split netfs_io_* object handling out into a file that's going to contain
> object allocation, get and put routines.
>
> [...]

Reviewed-by: Jeff Layton <[email protected]>

2022-03-09 16:02:57

by Jeffrey Layton

Subject: Re: [PATCH v2 04/19] netfs: Finish off rename of netfs_read_request to netfs_io_request

On Tue, 2022-03-08 at 23:26 +0000, David Howells wrote:
> Adjust helper function names and comments after mass rename of
> struct netfs_read_*request to struct netfs_io_*request.
>
> Changes
> =======
> ver #2)
> - Make the changes in the docs also.
>
> Signed-off-by: David Howells <[email protected]>
> cc: [email protected]
>
> Link: https://lore.kernel.org/r/164622992433.3564931.6684311087845150271.stgit@warthog.procyon.org.uk/ # v1
> ---
>
> Documentation/filesystems/netfs_library.rst | 4 +
> fs/9p/vfs_addr.c | 6 +-
> fs/afs/file.c | 4 +
> fs/cachefiles/io.c | 4 +
> fs/ceph/addr.c | 6 +-
> fs/netfs/read_helper.c | 83 ++++++++++++++-------------
> include/linux/netfs.h | 22 ++++---
> 7 files changed, 65 insertions(+), 64 deletions(-)
>
> diff --git a/Documentation/filesystems/netfs_library.rst b/Documentation/filesystems/netfs_library.rst
> index a997e2d4321d..4eb7e7b7b0fc 100644
> --- a/Documentation/filesystems/netfs_library.rst
> +++ b/Documentation/filesystems/netfs_library.rst
> @@ -250,7 +250,7 @@ through which it can issue requests and negotiate::
> int (*begin_cache_operation)(struct netfs_io_request *rreq);
> void (*expand_readahead)(struct netfs_io_request *rreq);
> bool (*clamp_length)(struct netfs_io_subrequest *subreq);
> - void (*issue_op)(struct netfs_io_subrequest *subreq);
> + void (*issue_read)(struct netfs_io_subrequest *subreq);
> bool (*is_still_valid)(struct netfs_io_request *rreq);
> int (*check_write_begin)(struct file *file, loff_t pos, unsigned len,
> struct folio *folio, void **_fsdata);
> @@ -305,7 +305,7 @@ The operations are as follows:
>
> This should return 0 on success and an error code on error.
>
> - * ``issue_op()``
> + * ``issue_read()``
>
> [Required] The helpers use this to dispatch a subrequest to the server for
> reading. In the subrequest, ->start, ->len and ->transferred indicate what
> diff --git a/fs/9p/vfs_addr.c b/fs/9p/vfs_addr.c
> index 7b79fabe7593..fdc1033a1546 100644
> --- a/fs/9p/vfs_addr.c
> +++ b/fs/9p/vfs_addr.c
> @@ -28,10 +28,10 @@
> #include "fid.h"
>
> /**
> - * v9fs_req_issue_op - Issue a read from 9P
> + * v9fs_issue_read - Issue a read from 9P
> * @subreq: The read to make
> */
> -static void v9fs_req_issue_op(struct netfs_io_subrequest *subreq)
> +static void v9fs_issue_read(struct netfs_io_subrequest *subreq)
> {
> struct netfs_io_request *rreq = subreq->rreq;
> struct p9_fid *fid = rreq->netfs_priv;
> @@ -106,7 +106,7 @@ static const struct netfs_request_ops v9fs_req_ops = {
> .init_request = v9fs_init_request,
> .is_cache_enabled = v9fs_is_cache_enabled,
> .begin_cache_operation = v9fs_begin_cache_operation,
> - .issue_op = v9fs_req_issue_op,
> + .issue_read = v9fs_issue_read,
> .cleanup = v9fs_req_cleanup,
> };
>
> diff --git a/fs/afs/file.c b/fs/afs/file.c
> index e55761f8858c..b19d635eed12 100644
> --- a/fs/afs/file.c
> +++ b/fs/afs/file.c
> @@ -310,7 +310,7 @@ int afs_fetch_data(struct afs_vnode *vnode, struct afs_read *req)
> return afs_do_sync_operation(op);
> }
>
> -static void afs_req_issue_op(struct netfs_io_subrequest *subreq)
> +static void afs_issue_read(struct netfs_io_subrequest *subreq)
> {
> struct afs_vnode *vnode = AFS_FS_I(subreq->rreq->inode);
> struct afs_read *fsreq;
> @@ -401,7 +401,7 @@ const struct netfs_request_ops afs_req_ops = {
> .is_cache_enabled = afs_is_cache_enabled,
> .begin_cache_operation = afs_begin_cache_operation,
> .check_write_begin = afs_check_write_begin,
> - .issue_op = afs_req_issue_op,
> + .issue_read = afs_issue_read,
> .cleanup = afs_priv_cleanup,
> };
>
> diff --git a/fs/cachefiles/io.c b/fs/cachefiles/io.c
> index 6ac6fdbc70d3..b19f496db9ad 100644
> --- a/fs/cachefiles/io.c
> +++ b/fs/cachefiles/io.c
> @@ -406,7 +406,7 @@ static enum netfs_io_source cachefiles_prepare_read(struct netfs_io_subrequest *
> }
>
> if (test_bit(FSCACHE_COOKIE_NO_DATA_TO_READ, &cookie->flags)) {
> - __set_bit(NETFS_SREQ_WRITE_TO_CACHE, &subreq->flags);
> + __set_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags);
> why = cachefiles_trace_read_no_data;
> goto out_no_object;
> }
> @@ -475,7 +475,7 @@ static enum netfs_io_source cachefiles_prepare_read(struct netfs_io_subrequest *
> goto out;
>
> download_and_store:
> - __set_bit(NETFS_SREQ_WRITE_TO_CACHE, &subreq->flags);
> + __set_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags);
> out:
> cachefiles_end_secure(cache, saved_cred);
> out_no_object:
> diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
> index 9d995f351079..9189257476f8 100644
> --- a/fs/ceph/addr.c
> +++ b/fs/ceph/addr.c
> @@ -259,7 +259,7 @@ static bool ceph_netfs_issue_op_inline(struct netfs_io_subrequest *subreq)
> size_t len;
>
> __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);
> - __clear_bit(NETFS_SREQ_WRITE_TO_CACHE, &subreq->flags);
> + __clear_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags);
>
> if (subreq->start >= inode->i_size)
> goto out;
> @@ -298,7 +298,7 @@ static bool ceph_netfs_issue_op_inline(struct netfs_io_subrequest *subreq)
> return true;
> }
>
> -static void ceph_netfs_issue_op(struct netfs_io_subrequest *subreq)
> +static void ceph_netfs_issue_read(struct netfs_io_subrequest *subreq)
> {
> struct netfs_io_request *rreq = subreq->rreq;
> struct inode *inode = rreq->inode;
> @@ -367,7 +367,7 @@ static void ceph_readahead_cleanup(struct address_space *mapping, void *priv)
> static const struct netfs_request_ops ceph_netfs_read_ops = {
> .is_cache_enabled = ceph_is_cache_enabled,
> .begin_cache_operation = ceph_begin_cache_operation,
> - .issue_op = ceph_netfs_issue_op,
> + .issue_read = ceph_netfs_issue_read,
> .expand_readahead = ceph_netfs_expand_readahead,
> .clamp_length = ceph_netfs_clamp_length,
> .check_write_begin = ceph_netfs_check_write_begin,
> diff --git a/fs/netfs/read_helper.c b/fs/netfs/read_helper.c
> index 50035d93f1dc..26d54055b17e 100644
> --- a/fs/netfs/read_helper.c
> +++ b/fs/netfs/read_helper.c
> @@ -37,7 +37,7 @@ static void netfs_put_subrequest(struct netfs_io_subrequest *subreq,
> __netfs_put_subrequest(subreq, was_async);
> }
>
> -static struct netfs_io_request *netfs_alloc_read_request(
> +static struct netfs_io_request *netfs_alloc_request(
> const struct netfs_request_ops *ops, void *netfs_priv,
> struct file *file)
> {
> @@ -63,13 +63,12 @@ static struct netfs_io_request *netfs_alloc_read_request(
> return rreq;
> }
>
> -static void netfs_get_read_request(struct netfs_io_request *rreq)
> +static void netfs_get_request(struct netfs_io_request *rreq)
> {
> refcount_inc(&rreq->usage);
> }
>
> -static void netfs_rreq_clear_subreqs(struct netfs_io_request *rreq,
> - bool was_async)
> +static void netfs_clear_subrequests(struct netfs_io_request *rreq, bool was_async)
> {
> struct netfs_io_subrequest *subreq;
>
> @@ -81,11 +80,11 @@ static void netfs_rreq_clear_subreqs(struct netfs_io_request *rreq,
> }
> }
>
> -static void netfs_free_read_request(struct work_struct *work)
> +static void netfs_free_request(struct work_struct *work)
> {
> struct netfs_io_request *rreq =
> container_of(work, struct netfs_io_request, work);
> - netfs_rreq_clear_subreqs(rreq, false);
> + netfs_clear_subrequests(rreq, false);
> if (rreq->netfs_priv)
> rreq->netfs_ops->cleanup(rreq->mapping, rreq->netfs_priv);
> trace_netfs_rreq(rreq, netfs_rreq_trace_free);
> @@ -95,15 +94,15 @@ static void netfs_free_read_request(struct work_struct *work)
> netfs_stat_d(&netfs_n_rh_rreq);
> }
>
> -static void netfs_put_read_request(struct netfs_io_request *rreq, bool was_async)
> +static void netfs_put_request(struct netfs_io_request *rreq, bool was_async)
> {
> if (refcount_dec_and_test(&rreq->usage)) {
> if (was_async) {
> - rreq->work.func = netfs_free_read_request;
> + rreq->work.func = netfs_free_request;
> if (!queue_work(system_unbound_wq, &rreq->work))
> BUG();
> } else {
> - netfs_free_read_request(&rreq->work);
> + netfs_free_request(&rreq->work);
> }
> }
> }
> @@ -121,14 +120,14 @@ static struct netfs_io_subrequest *netfs_alloc_subrequest(
> INIT_LIST_HEAD(&subreq->rreq_link);
> refcount_set(&subreq->usage, 2);
> subreq->rreq = rreq;
> - netfs_get_read_request(rreq);
> + netfs_get_request(rreq);
> netfs_stat(&netfs_n_rh_sreq);
> }
>
> return subreq;
> }
>
> -static void netfs_get_read_subrequest(struct netfs_io_subrequest *subreq)
> +static void netfs_get_subrequest(struct netfs_io_subrequest *subreq)
> {
> refcount_inc(&subreq->usage);
> }
> @@ -141,7 +140,7 @@ static void __netfs_put_subrequest(struct netfs_io_subrequest *subreq,
> trace_netfs_sreq(subreq, netfs_sreq_trace_free);
> kfree(subreq);
> netfs_stat_d(&netfs_n_rh_sreq);
> - netfs_put_read_request(rreq, was_async);
> + netfs_put_request(rreq, was_async);
> }
>
> /*
> @@ -216,7 +215,7 @@ static void netfs_read_from_server(struct netfs_io_request *rreq,
> struct netfs_io_subrequest *subreq)
> {
> netfs_stat(&netfs_n_rh_download);
> - rreq->netfs_ops->issue_op(subreq);
> + rreq->netfs_ops->issue_read(subreq);
> }
>
> /*
> @@ -225,8 +224,8 @@ static void netfs_read_from_server(struct netfs_io_request *rreq,
> static void netfs_rreq_completed(struct netfs_io_request *rreq, bool was_async)
> {
> trace_netfs_rreq(rreq, netfs_rreq_trace_done);
> - netfs_rreq_clear_subreqs(rreq, was_async);
> - netfs_put_read_request(rreq, was_async);
> + netfs_clear_subrequests(rreq, was_async);
> + netfs_put_request(rreq, was_async);
> }
>
> /*
> @@ -306,7 +305,7 @@ static void netfs_rreq_do_write_to_cache(struct netfs_io_request *rreq)
> atomic_inc(&rreq->nr_copy_ops);
>
> list_for_each_entry_safe(subreq, p, &rreq->subrequests, rreq_link) {
> - if (!test_bit(NETFS_SREQ_WRITE_TO_CACHE, &subreq->flags)) {
> + if (!test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags)) {
> list_del_init(&subreq->rreq_link);
> netfs_put_subrequest(subreq, false);
> }
> @@ -336,7 +335,7 @@ static void netfs_rreq_do_write_to_cache(struct netfs_io_request *rreq)
>
> atomic_inc(&rreq->nr_copy_ops);
> netfs_stat(&netfs_n_rh_write);
> - netfs_get_read_subrequest(subreq);
> + netfs_get_subrequest(subreq);
> trace_netfs_sreq(subreq, netfs_sreq_trace_write);
> cres->ops->write(cres, subreq->start, &iter,
> netfs_rreq_copy_terminated, subreq);
> @@ -378,9 +377,9 @@ static void netfs_rreq_unlock(struct netfs_io_request *rreq)
> XA_STATE(xas, &rreq->mapping->i_pages, start_page);
>
> if (test_bit(NETFS_RREQ_FAILED, &rreq->flags)) {
> - __clear_bit(NETFS_RREQ_WRITE_TO_CACHE, &rreq->flags);
> + __clear_bit(NETFS_RREQ_COPY_TO_CACHE, &rreq->flags);
> list_for_each_entry(subreq, &rreq->subrequests, rreq_link) {
> - __clear_bit(NETFS_SREQ_WRITE_TO_CACHE, &subreq->flags);
> + __clear_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags);
> }
> }
>
> @@ -408,7 +407,7 @@ static void netfs_rreq_unlock(struct netfs_io_request *rreq)
> pg_failed = true;
> break;
> }
> - if (test_bit(NETFS_SREQ_WRITE_TO_CACHE, &subreq->flags))
> + if (test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags))
> folio_start_fscache(folio);
> pg_failed |= subreq_failed;
> if (pgend < iopos + subreq->len)
> @@ -453,13 +452,13 @@ static void netfs_rreq_unlock(struct netfs_io_request *rreq)
> static void netfs_rreq_short_read(struct netfs_io_request *rreq,
> struct netfs_io_subrequest *subreq)
> {
> - __clear_bit(NETFS_SREQ_SHORT_READ, &subreq->flags);
> + __clear_bit(NETFS_SREQ_SHORT_IO, &subreq->flags);
> __set_bit(NETFS_SREQ_SEEK_DATA_READ, &subreq->flags);
>
> netfs_stat(&netfs_n_rh_short_read);
> trace_netfs_sreq(subreq, netfs_sreq_trace_resubmit_short);
>
> - netfs_get_read_subrequest(subreq);
> + netfs_get_subrequest(subreq);
> atomic_inc(&rreq->nr_outstanding);
> if (subreq->source == NETFS_READ_FROM_CACHE)
> netfs_read_from_cache(rreq, subreq, NETFS_READ_HOLE_CLEAR);
> @@ -493,10 +492,10 @@ static bool netfs_rreq_perform_resubmissions(struct netfs_io_request *rreq)
> subreq->error = 0;
> netfs_stat(&netfs_n_rh_download_instead);
> trace_netfs_sreq(subreq, netfs_sreq_trace_download_instead);
> - netfs_get_read_subrequest(subreq);
> + netfs_get_subrequest(subreq);
> atomic_inc(&rreq->nr_outstanding);
> netfs_read_from_server(rreq, subreq);
> - } else if (test_bit(NETFS_SREQ_SHORT_READ, &subreq->flags)) {
> + } else if (test_bit(NETFS_SREQ_SHORT_IO, &subreq->flags)) {
> netfs_rreq_short_read(rreq, subreq);
> }
> }
> @@ -553,7 +552,7 @@ static void netfs_rreq_assess(struct netfs_io_request *rreq, bool was_async)
> clear_bit_unlock(NETFS_RREQ_IN_PROGRESS, &rreq->flags);
> wake_up_bit(&rreq->flags, NETFS_RREQ_IN_PROGRESS);
>
> - if (test_bit(NETFS_RREQ_WRITE_TO_CACHE, &rreq->flags))
> + if (test_bit(NETFS_RREQ_COPY_TO_CACHE, &rreq->flags))
> return netfs_rreq_write_to_cache(rreq);
>
> netfs_rreq_completed(rreq, was_async);
> @@ -642,8 +641,8 @@ void netfs_subreq_terminated(struct netfs_io_subrequest *subreq,
>
> complete:
> __clear_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags);
> - if (test_bit(NETFS_SREQ_WRITE_TO_CACHE, &subreq->flags))
> - set_bit(NETFS_RREQ_WRITE_TO_CACHE, &rreq->flags);
> + if (test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags))
> + set_bit(NETFS_RREQ_COPY_TO_CACHE, &rreq->flags);
>
> out:
> trace_netfs_sreq(subreq, netfs_sreq_trace_terminated);
> @@ -674,7 +673,7 @@ void netfs_subreq_terminated(struct netfs_io_subrequest *subreq,
> __clear_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags);
> }
>
> - __set_bit(NETFS_SREQ_SHORT_READ, &subreq->flags);
> + __set_bit(NETFS_SREQ_SHORT_IO, &subreq->flags);
> set_bit(NETFS_RREQ_INCOMPLETE_IO, &rreq->flags);
> goto out;
>
> @@ -878,7 +877,7 @@ void netfs_readahead(struct readahead_control *ractl,
> if (readahead_count(ractl) == 0)
> goto cleanup;
>
> - rreq = netfs_alloc_read_request(ops, netfs_priv, ractl->file);
> + rreq = netfs_alloc_request(ops, netfs_priv, ractl->file);
> if (!rreq)
> goto cleanup;
> rreq->mapping = ractl->mapping;
> @@ -916,7 +915,7 @@ void netfs_readahead(struct readahead_control *ractl,
> return;
>
> cleanup_free:
> - netfs_put_read_request(rreq, false);
> + netfs_put_request(rreq, false);
> return;
> cleanup:
> if (netfs_priv)
> @@ -953,7 +952,7 @@ int netfs_readpage(struct file *file,
>
> _enter("%lx", folio_index(folio));
>
> - rreq = netfs_alloc_read_request(ops, netfs_priv, file);
> + rreq = netfs_alloc_request(ops, netfs_priv, file);
> if (!rreq) {
> if (netfs_priv)
> ops->cleanup(folio_file_mapping(folio), netfs_priv);
> @@ -975,7 +974,7 @@ int netfs_readpage(struct file *file,
> netfs_stat(&netfs_n_rh_readpage);
> trace_netfs_read(rreq, rreq->start, rreq->len, netfs_read_trace_readpage);
>
> - netfs_get_read_request(rreq);
> + netfs_get_request(rreq);
>
> atomic_set(&rreq->nr_outstanding, 1);
> do {
> @@ -989,7 +988,8 @@ int netfs_readpage(struct file *file,
> * process.
> */
> do {
> - wait_var_event(&rreq->nr_outstanding, atomic_read(&rreq->nr_outstanding) == 1);
> + wait_var_event(&rreq->nr_outstanding,
> + atomic_read(&rreq->nr_outstanding) == 1);
> netfs_rreq_assess(rreq, false);
> } while (test_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags));
>
> @@ -999,7 +999,7 @@ int netfs_readpage(struct file *file,
> ret = -EIO;
> }
> out:
> - netfs_put_read_request(rreq, false);
> + netfs_put_request(rreq, false);
> return ret;
> }
> EXPORT_SYMBOL(netfs_readpage);
> @@ -1122,7 +1122,7 @@ int netfs_write_begin(struct file *file, struct address_space *mapping,
> }
>
> ret = -ENOMEM;
> - rreq = netfs_alloc_read_request(ops, netfs_priv, file);
> + rreq = netfs_alloc_request(ops, netfs_priv, file);
> if (!rreq)
> goto error;
> rreq->mapping = folio_file_mapping(folio);
> @@ -1146,7 +1146,7 @@ int netfs_write_begin(struct file *file, struct address_space *mapping,
> */
> ractl._nr_pages = folio_nr_pages(folio);
> netfs_rreq_expand(rreq, &ractl);
> - netfs_get_read_request(rreq);
> + netfs_get_request(rreq);
>
> /* We hold the folio locks, so we can drop the references */
> folio_get(folio);
> @@ -1160,12 +1160,13 @@ int netfs_write_begin(struct file *file, struct address_space *mapping,
>
> } while (rreq->submitted < rreq->len);
>
> - /* Keep nr_outstanding incremented so that the ref always belongs to us, and
> - * the service code isn't punted off to a random thread pool to
> + /* Keep nr_outstanding incremented so that the ref always belongs to
> + * us, and the service code isn't punted off to a random thread pool to
> * process.
> */
> for (;;) {
> - wait_var_event(&rreq->nr_outstanding, atomic_read(&rreq->nr_outstanding) == 1);
> + wait_var_event(&rreq->nr_outstanding,
> + atomic_read(&rreq->nr_outstanding) == 1);
> netfs_rreq_assess(rreq, false);
> if (!test_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags))
> break;
> @@ -1177,7 +1178,7 @@ int netfs_write_begin(struct file *file, struct address_space *mapping,
> trace_netfs_failure(rreq, NULL, ret, netfs_fail_short_write_begin);
> ret = -EIO;
> }
> - netfs_put_read_request(rreq, false);
> + netfs_put_request(rreq, false);
> if (ret < 0)
> goto error;
>
> @@ -1193,7 +1194,7 @@ int netfs_write_begin(struct file *file, struct address_space *mapping,
> return 0;
>
> error_put:
> - netfs_put_read_request(rreq, false);
> + netfs_put_request(rreq, false);
> error:
> folio_unlock(folio);
> folio_put(folio);
> diff --git a/include/linux/netfs.h b/include/linux/netfs.h
> index a2ca91cb7a68..f63de27d6f29 100644
> --- a/include/linux/netfs.h
> +++ b/include/linux/netfs.h
> @@ -131,7 +131,7 @@ struct netfs_cache_resources {
> * Descriptor for a single component subrequest.
> */
> struct netfs_io_subrequest {
> - struct netfs_io_request *rreq; /* Supervising read request */
> + struct netfs_io_request *rreq; /* Supervising I/O request */
> struct list_head rreq_link; /* Link in rreq->subrequests */
> loff_t start; /* Where to start the I/O */
> size_t len; /* Size of the I/O */
> @@ -139,29 +139,29 @@ struct netfs_io_subrequest {
> refcount_t usage;
> short error; /* 0 or error that occurred */
> unsigned short debug_index; /* Index in list (for debugging output) */
> - enum netfs_io_source source; /* Where to read from */
> + enum netfs_io_source source; /* Where to read from/write to */
> unsigned long flags;
> -#define NETFS_SREQ_WRITE_TO_CACHE 0 /* Set if should write to cache */
> +#define NETFS_SREQ_COPY_TO_CACHE 0 /* Set if should copy the data to the cache */
> #define NETFS_SREQ_CLEAR_TAIL 1 /* Set if the rest of the read should be cleared */
> -#define NETFS_SREQ_SHORT_READ 2 /* Set if there was a short read from the cache */
> +#define NETFS_SREQ_SHORT_IO 2 /* Set if the I/O was short */
> #define NETFS_SREQ_SEEK_DATA_READ 3 /* Set if ->read() should SEEK_DATA first */
> #define NETFS_SREQ_NO_PROGRESS 4 /* Set if we didn't manage to read any data */
> };
>
> /*
> - * Descriptor for a read helper request. This is used to make multiple I/O
> - * requests on a variety of sources and then stitch the result together.
> + * Descriptor for an I/O helper request. This is used to make multiple I/O
> + * operations to a variety of data stores and then stitch the result together.
> */
> struct netfs_io_request {
> struct work_struct work;
> struct inode *inode; /* The file being accessed */
> struct address_space *mapping; /* The mapping being accessed */
> struct netfs_cache_resources cache_resources;
> - struct list_head subrequests; /* Requests to fetch I/O from disk or net */
> + struct list_head subrequests; /* Contributory I/O operations */
> void *netfs_priv; /* Private data for the netfs */
> unsigned int debug_id;
> - atomic_t nr_outstanding; /* Number of read ops in progress */
> - atomic_t nr_copy_ops; /* Number of write ops in progress */
> + atomic_t nr_outstanding; /* Number of ops in progress */
> + atomic_t nr_copy_ops; /* Number of copy-to-cache ops in progress */
> size_t submitted; /* Amount submitted for I/O so far */
> size_t len; /* Length of the request */
> short error; /* 0 or error that occurred */
> @@ -171,7 +171,7 @@ struct netfs_io_request {
> refcount_t usage;
> unsigned long flags;
> #define NETFS_RREQ_INCOMPLETE_IO 0 /* Some ioreqs terminated short or with error */
> -#define NETFS_RREQ_WRITE_TO_CACHE 1 /* Need to write to the cache */
> +#define NETFS_RREQ_COPY_TO_CACHE 1 /* Need to write to the cache */
> #define NETFS_RREQ_NO_UNLOCK_FOLIO 2 /* Don't unlock no_unlock_folio on completion */
> #define NETFS_RREQ_DONT_UNLOCK_FOLIOS 3 /* Don't unlock the folios on completion */
> #define NETFS_RREQ_FAILED 4 /* The request failed */
> @@ -188,7 +188,7 @@ struct netfs_request_ops {
> int (*begin_cache_operation)(struct netfs_io_request *rreq);
> void (*expand_readahead)(struct netfs_io_request *rreq);
> bool (*clamp_length)(struct netfs_io_subrequest *subreq);
> - void (*issue_op)(struct netfs_io_subrequest *subreq);
> + void (*issue_read)(struct netfs_io_subrequest *subreq);
> bool (*is_still_valid)(struct netfs_io_request *rreq);
> int (*check_write_begin)(struct file *file, loff_t pos, unsigned len,
> struct folio *folio, void **_fsdata);
>
>

Another (mostly) mechanical change...
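
For illustration, the net effect of the rename on a filesystem's op
table looks roughly like this sketch (the fields are the ones shown in
the afs/ceph hunks above; the example_* handlers are hypothetical):

static const struct netfs_request_ops example_netfs_ops = {
	.is_cache_enabled	= example_is_cache_enabled,
	.begin_cache_operation	= example_begin_cache_operation,
	.issue_read		= example_issue_read,	/* was .issue_op */
	.cleanup		= example_cleanup,
};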

Reviewed-by: Jeff Layton <[email protected]>

2022-03-09 16:03:48

by David Howells

[permalink] [raw]
Subject: Re: [PATCH v2 02/19] netfs: Generate enums from trace symbol mapping lists

Jeff Layton <[email protected]> wrote:

> Should you undef EM and E_ here after creating these?

Maybe. So far it hasn't mattered...
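
For context, each consumer of a mapping list re-primes EM()/E_() with
its own #undef/#define pair before expanding the list, which is why a
missing trailing #undef hasn't leaked so far. A minimal, standalone
sketch of the X-macro pattern (all names here are illustrative, not
taken from the patch):

#define example_traces \
	EM(example_trace_first,  "FIRST ") \
	E_(example_trace_second, "SECOND")

/* Consumer 1: expand the list into an enum. */
#undef EM
#undef E_
#define EM(a, b) a,
#define E_(a, b) a
enum example_trace { example_traces };

/* Consumer 2: expand the same list into { value, "string" } pairs
 * for __print_symbolic(). It starts with its own #undefs, so any
 * definitions left over from consumer 1 are harmless.
 */
#undef EM
#undef E_
#define EM(a, b) { a, b },
#define E_(a, b) { a, b }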

David

2022-03-09 16:13:15

by Jeff Layton

[permalink] [raw]
Subject: Re: [PATCH v2 02/19] netfs: Generate enums from trace symbol mapping lists

On Tue, 2022-03-08 at 23:25 +0000, David Howells wrote:
> netfs has a number of lists of symbols for use in tracing, listed in an
> enum and then listed again in a symbol->string mapping for use with
> __print_symbolic(). This is, however, redundant.
>
> Instead, use the symbol->string mapping list to also generate the enum
> where the enum is in the same file.
>
> Signed-off-by: David Howells <[email protected]>
> cc: [email protected]
>
> Link: https://lore.kernel.org/r/164622980839.3564931.5673300162465266909.stgit@warthog.procyon.org.uk/ # v1
> ---
>
> include/trace/events/netfs.h | 57 ++++++++++--------------------------------
> 1 file changed, 14 insertions(+), 43 deletions(-)
>
> diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
> index e6f4ebbb4c69..88d9a74dd346 100644
> --- a/include/trace/events/netfs.h
> +++ b/include/trace/events/netfs.h
> @@ -15,49 +15,6 @@
> /*
> * Define enums for tracing information.
> */
> -#ifndef __NETFS_DECLARE_TRACE_ENUMS_ONCE_ONLY
> -#define __NETFS_DECLARE_TRACE_ENUMS_ONCE_ONLY
> -
> -enum netfs_read_trace {
> - netfs_read_trace_expanded,
> - netfs_read_trace_readahead,
> - netfs_read_trace_readpage,
> - netfs_read_trace_write_begin,
> -};
> -
> -enum netfs_rreq_trace {
> - netfs_rreq_trace_assess,
> - netfs_rreq_trace_done,
> - netfs_rreq_trace_free,
> - netfs_rreq_trace_resubmit,
> - netfs_rreq_trace_unlock,
> - netfs_rreq_trace_unmark,
> - netfs_rreq_trace_write,
> -};
> -
> -enum netfs_sreq_trace {
> - netfs_sreq_trace_download_instead,
> - netfs_sreq_trace_free,
> - netfs_sreq_trace_prepare,
> - netfs_sreq_trace_resubmit_short,
> - netfs_sreq_trace_submit,
> - netfs_sreq_trace_terminated,
> - netfs_sreq_trace_write,
> - netfs_sreq_trace_write_skip,
> - netfs_sreq_trace_write_term,
> -};
> -
> -enum netfs_failure {
> - netfs_fail_check_write_begin,
> - netfs_fail_copy_to_cache,
> - netfs_fail_read,
> - netfs_fail_short_readpage,
> - netfs_fail_short_write_begin,
> - netfs_fail_prepare_write,
> -};
> -
> -#endif
> -
> #define netfs_read_traces \
> EM(netfs_read_trace_expanded, "EXPANDED ") \
> EM(netfs_read_trace_readahead, "READAHEAD") \
> @@ -98,6 +55,20 @@ enum netfs_failure {
> EM(netfs_fail_short_write_begin, "short-write-begin") \
> E_(netfs_fail_prepare_write, "prep-write")
>
> +#ifndef __NETFS_DECLARE_TRACE_ENUMS_ONCE_ONLY
> +#define __NETFS_DECLARE_TRACE_ENUMS_ONCE_ONLY
> +
> +#undef EM
> +#undef E_
> +#define EM(a, b) a,
> +#define E_(a, b) a
> +
> +enum netfs_read_trace { netfs_read_traces } __mode(byte);
> +enum netfs_rreq_trace { netfs_rreq_traces } __mode(byte);
> +enum netfs_sreq_trace { netfs_sreq_traces } __mode(byte);
> +enum netfs_failure { netfs_failures } __mode(byte);
> +

Should you undef EM and E_ here after creating these?

> +#endif
>
> /*
> * Export enum symbols via userspace.
>
>

Looks fine otherwise:

Acked-by: Jeff Layton <[email protected]>

2022-03-10 04:41:18

by Jingbo Xu

[permalink] [raw]
Subject: Re: [PATCH v2 01/19] fscache: export fscache_end_operation()



On 3/9/22 11:26 PM, Jeff Layton wrote:
> On Tue, 2022-03-08 at 23:25 +0000, David Howells wrote:
>> From: Jeffle Xu <[email protected]>
>>
>> Export fscache_end_operation() to avoid code duplication.
>>
>> Besides, considering that the paired fscache_begin_read_operation()
>> is already exported, it makes sense to also export
>> fscache_end_operation().
>>
>
> Not what I think of when you say "exporting", but the patch itself looks
> fine.
>

Yes, maybe "fscache: make fscache_end_operation() generally available",
as David suggested, would be better...

--
Thanks,
Jeffle
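
For reference, the helper under discussion is a small static inline;
after this series it lives in include/linux/fscache.h and looks roughly
like the following (a sketch from memory, not a verbatim copy):

static inline void fscache_end_operation(struct netfs_cache_resources *cres)
{
	const struct netfs_cache_ops *ops = fscache_operation_valid(cres);

	if (ops)
		ops->end_operation(cres);
}

Nothing gains an EXPORT_SYMBOL() here, which is why "make generally
available" describes the change better than "export": the inline moves
into a shared header instead of being duplicated in each filesystem.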