2021-11-23 01:30:57

by NeilBrown

Subject: [PATCH 00/19 v2] SUNRPC: clean up server thread management

This is a revision of my series for cleaning up server thread
management.
Currently lockd, nfsd, and nfs-callback all manage threads slightly
differently. This series unifies them.

Changes since first series include:
- minor bug fixes
- kernel-doc comments for new functions
- split first patch into 3, and make the bugfix a separate patch
- fix management of pool_maps so lockd can use svc_set_num_threads safely
- switch nfs-callback to not request a 'pooled' service.

NeilBrown


---

NeilBrown (19):
SUNRPC/NFSD: clean up get/put functions.
NFSD: handle error better in write_ports_addfd()
SUNRPC: stop using ->sv_nrthreads as a refcount
nfsd: make nfsd_stats.th_cnt atomic_t
SUNRPC: use sv_lock to protect updates to sv_nrthreads.
NFSD: narrow nfsd_mutex protection in nfsd thread
NFSD: Make it possible to use svc_set_num_threads_sync
SUNRPC: discard svo_setup and rename svc_set_num_threads_sync()
NFSD: simplify locking for network notifier.
lockd: introduce nlmsvc_serv
lockd: simplify management of network status notifiers
lockd: move lockd_start_svc() call into lockd_create_svc()
lockd: move svc_exit_thread() into the thread
lockd: introduce lockd_put()
lockd: rename lockd_create_svc() to lockd_get()
SUNRPC: move the pool_map definitions (back) into svc.c
SUNRPC: always treat sv_nrpools==1 as "not pooled"
lockd: use svc_set_num_threads() for thread start and stop
NFS: switch the callback service back to non-pooled.


fs/lockd/svc.c | 194 ++++++++++++-------------------------
fs/nfs/callback.c | 12 +--
fs/nfsd/netns.h | 13 +--
fs/nfsd/nfsctl.c | 24 ++---
fs/nfsd/nfssvc.c | 139 +++++++++++++-------------
fs/nfsd/stats.c | 2 +-
fs/nfsd/stats.h | 4 +-
include/linux/sunrpc/svc.h | 58 ++++-------
net/sunrpc/svc.c | 166 ++++++++++++++-----------------
9 files changed, 248 insertions(+), 364 deletions(-)

--
Signature



2021-11-23 01:31:02

by NeilBrown

Subject: [PATCH 01/19] SUNRPC/NFSD: clean up get/put functions.

svc_destroy() is poorly named - it doesn't necessarily destroy the svc,
it might just reduce the ref count.
nfsd_destroy() is poorly named for the same reason.

This patch:
- removes the refcount functionality from svc_destroy(), moving it to
a new svc_put(). Almost all previous callers of svc_destroy() now
call svc_put().
- renames nfsd_destroy() to nfsd_put() and improves the code, using
the new svc_destroy() rather than svc_put()
- also changes svc_get() to return the serv, which simplifies
some code a little.

The only non-trivial part of this is that svc_destroy() would call
svc_sock_update_bufs() on a non-final decrement. It can no longer do
that, and svc_put() isn't really a good place for it. This call is now
made from svc_exit_thread(), which seems like a good place. This makes
the call *before* sv_nrthreads is decremented rather than after. This
is not particularly important as the call just sets a flag which
causes sv_nrthreads to be checked later. A subsequent patch will
improve the ordering.
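[Editor's sketch: the get/put split this patch introduces, as minimal
userspace C. The struct layout and names are illustrative only — the
kernel versions operate on struct svc_serv under the service mutex,
and svc_destroy() frees real resources.]

```c
#include <stdbool.h>

/* Illustrative stand-in for struct svc_serv; not the kernel layout. */
struct serv {
	int nrthreads;		/* doubles as the refcount, as before this series */
	bool destroyed;
};

/* Unconditionally destroys, like the new svc_destroy(). */
static void serv_destroy(struct serv *s)
{
	s->destroyed = true;	/* real code would free the serv */
}

/* Returning the pointer lets callers write `return serv_get(s);`,
 * which is the simplification the patch mentions. */
static struct serv *serv_get(struct serv *s)
{
	s->nrthreads++;
	return s;
}

static void serv_put(struct serv *s)
{
	if (--s->nrthreads == 0)
		serv_destroy(s);
}
```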

Signed-off-by: NeilBrown <[email protected]>
---
fs/lockd/svc.c | 12 +++---------
fs/nfs/callback.c | 20 ++++----------------
fs/nfsd/nfsctl.c | 4 ++--
fs/nfsd/nfsd.h | 2 +-
fs/nfsd/nfssvc.c | 30 ++++++++++++++++--------------
include/linux/sunrpc/svc.h | 29 +++++++++++++++++++++++++----
net/sunrpc/svc.c | 19 +++++--------------
7 files changed, 56 insertions(+), 60 deletions(-)

diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c
index b220e1b91726..135bd86ed3ad 100644
--- a/fs/lockd/svc.c
+++ b/fs/lockd/svc.c
@@ -430,14 +430,8 @@ static struct svc_serv *lockd_create_svc(void)
/*
* Check whether we're already up and running.
*/
- if (nlmsvc_rqst) {
- /*
- * Note: increase service usage, because later in case of error
- * svc_destroy() will be called.
- */
- svc_get(nlmsvc_rqst->rq_server);
- return nlmsvc_rqst->rq_server;
- }
+ if (nlmsvc_rqst)
+ return svc_get(nlmsvc_rqst->rq_server);

/*
* Sanity check: if there's no pid,
@@ -497,7 +491,7 @@ int lockd_up(struct net *net, const struct cred *cred)
* so we exit through here on both success and failure.
*/
err_put:
- svc_destroy(serv);
+ svc_put(serv);
err_create:
mutex_unlock(&nlmsvc_mutex);
return error;
diff --git a/fs/nfs/callback.c b/fs/nfs/callback.c
index 86d856de1389..edbc7579b4aa 100644
--- a/fs/nfs/callback.c
+++ b/fs/nfs/callback.c
@@ -266,14 +266,8 @@ static struct svc_serv *nfs_callback_create_svc(int minorversion)
/*
* Check whether we're already up and running.
*/
- if (cb_info->serv) {
- /*
- * Note: increase service usage, because later in case of error
- * svc_destroy() will be called.
- */
- svc_get(cb_info->serv);
- return cb_info->serv;
- }
+ if (cb_info->serv)
+ return svc_get(cb_info->serv);

switch (minorversion) {
case 0:
@@ -335,16 +329,10 @@ int nfs_callback_up(u32 minorversion, struct rpc_xprt *xprt)
goto err_start;

cb_info->users++;
- /*
- * svc_create creates the svc_serv with sv_nrthreads == 1, and then
- * svc_prepare_thread increments that. So we need to call svc_destroy
- * on both success and failure so that the refcount is 1 when the
- * thread exits.
- */
err_net:
if (!cb_info->users)
cb_info->serv = NULL;
- svc_destroy(serv);
+ svc_put(serv);
err_create:
mutex_unlock(&nfs_callback_mutex);
return ret;
@@ -370,7 +358,7 @@ void nfs_callback_down(int minorversion, struct net *net)
if (cb_info->users == 0) {
svc_get(serv);
serv->sv_ops->svo_setup(serv, NULL, 0);
- svc_destroy(serv);
+ svc_put(serv);
dprintk("nfs_callback_down: service destroyed\n");
cb_info->serv = NULL;
}
diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
index af8531c3854a..5eb564e58a9b 100644
--- a/fs/nfsd/nfsctl.c
+++ b/fs/nfsd/nfsctl.c
@@ -743,7 +743,7 @@ static ssize_t __write_ports_addfd(char *buf, struct net *net, const struct cred

err = svc_addsock(nn->nfsd_serv, fd, buf, SIMPLE_TRANSACTION_LIMIT, cred);
if (err < 0) {
- nfsd_destroy(net);
+ nfsd_put(net);
return err;
}

@@ -796,7 +796,7 @@ static ssize_t __write_ports_addxprt(char *buf, struct net *net, const struct cr
if (!list_empty(&nn->nfsd_serv->sv_permsocks))
nn->nfsd_serv->sv_nrthreads--;
else
- nfsd_destroy(net);
+ nfsd_put(net);
return err;
}

diff --git a/fs/nfsd/nfsd.h b/fs/nfsd/nfsd.h
index 498e5a489826..3e5008b475ff 100644
--- a/fs/nfsd/nfsd.h
+++ b/fs/nfsd/nfsd.h
@@ -97,7 +97,7 @@ int nfsd_pool_stats_open(struct inode *, struct file *);
int nfsd_pool_stats_release(struct inode *, struct file *);
void nfsd_shutdown_threads(struct net *net);

-void nfsd_destroy(struct net *net);
+void nfsd_put(struct net *net);

bool i_am_nfsd(void);

diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
index 80431921e5d7..2ab0e650a0e2 100644
--- a/fs/nfsd/nfssvc.c
+++ b/fs/nfsd/nfssvc.c
@@ -623,7 +623,7 @@ void nfsd_shutdown_threads(struct net *net)
svc_get(serv);
/* Kill outstanding nfsd threads */
serv->sv_ops->svo_setup(serv, NULL, 0);
- nfsd_destroy(net);
+ nfsd_put(net);
mutex_unlock(&nfsd_mutex);
/* Wait for shutdown of nfsd_serv to complete */
wait_for_completion(&nn->nfsd_shutdown_complete);
@@ -656,7 +656,10 @@ int nfsd_create_serv(struct net *net)
nn->nfsd_serv->sv_maxconn = nn->max_connections;
error = svc_bind(nn->nfsd_serv, net);
if (error < 0) {
- svc_destroy(nn->nfsd_serv);
+ /* NOT nfsd_put() as notifiers (see below) haven't
+ * been set up yet.
+ */
+ svc_put(nn->nfsd_serv);
nfsd_complete_shutdown(net);
return error;
}
@@ -697,16 +700,16 @@ int nfsd_get_nrthreads(int n, int *nthreads, struct net *net)
return 0;
}

-void nfsd_destroy(struct net *net)
+void nfsd_put(struct net *net)
{
struct nfsd_net *nn = net_generic(net, nfsd_net_id);
- int destroy = (nn->nfsd_serv->sv_nrthreads == 1);

- if (destroy)
+ nn->nfsd_serv->sv_nrthreads --;
+ if (nn->nfsd_serv->sv_nrthreads == 0) {
svc_shutdown_net(nn->nfsd_serv, net);
- svc_destroy(nn->nfsd_serv);
- if (destroy)
+ svc_destroy(nn->nfsd_serv);
nfsd_complete_shutdown(net);
+ }
}

int nfsd_set_nrthreads(int n, int *nthreads, struct net *net)
@@ -758,7 +761,7 @@ int nfsd_set_nrthreads(int n, int *nthreads, struct net *net)
if (err)
break;
}
- nfsd_destroy(net);
+ nfsd_put(net);
return err;
}

@@ -795,7 +798,7 @@ nfsd_svc(int nrservs, struct net *net, const struct cred *cred)

error = nfsd_startup_net(net, cred);
if (error)
- goto out_destroy;
+ goto out_put;
error = nn->nfsd_serv->sv_ops->svo_setup(nn->nfsd_serv,
NULL, nrservs);
if (error)
@@ -808,8 +811,8 @@ nfsd_svc(int nrservs, struct net *net, const struct cred *cred)
out_shutdown:
if (error < 0 && !nfsd_up_before)
nfsd_shutdown_net(net);
-out_destroy:
- nfsd_destroy(net); /* Release server */
+out_put:
+ nfsd_put(net);
out:
mutex_unlock(&nfsd_mutex);
return error;
@@ -982,7 +985,7 @@ nfsd(void *vrqstp)
/* Release the thread */
svc_exit_thread(rqstp);

- nfsd_destroy(net);
+ nfsd_put(net);

/* Release module */
mutex_unlock(&nfsd_mutex);
@@ -1109,8 +1112,7 @@ int nfsd_pool_stats_release(struct inode *inode, struct file *file)
struct net *net = inode->i_sb->s_fs_info;

mutex_lock(&nfsd_mutex);
- /* this function really, really should have been called svc_put() */
- nfsd_destroy(net);
+ nfsd_put(net);
mutex_unlock(&nfsd_mutex);
return ret;
}
diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
index 0ae28ae6caf2..d87c3392a1e9 100644
--- a/include/linux/sunrpc/svc.h
+++ b/include/linux/sunrpc/svc.h
@@ -114,15 +114,37 @@ struct svc_serv {
#endif /* CONFIG_SUNRPC_BACKCHANNEL */
};

-/*
- * We use sv_nrthreads as a reference count. svc_destroy() drops
+/**
+ * svc_get() - increment reference count on a SUNRPC serv
+ * @serv: the svc_serv to have count incremented
+ *
+ * Returns: the svc_serv that was passed in.
+ *
+ * We use sv_nrthreads as a reference count. svc_put() drops
* this refcount, so we need to bump it up around operations that
* change the number of threads. Horrible, but there it is.
* Should be called with the "service mutex" held.
*/
-static inline void svc_get(struct svc_serv *serv)
+static inline struct svc_serv *svc_get(struct svc_serv *serv)
{
serv->sv_nrthreads++;
+ return serv;
+}
+
+void svc_destroy(struct svc_serv *serv);
+
+/**
+ * svc_put - decrement reference count on a SUNRPC serv
+ * @serv: the svc_serv to have count decremented
+ *
+ * When the reference count reaches zero, svc_destroy()
+ * is called to clean up and free the serv.
+ */
+static inline void svc_put(struct svc_serv *serv)
+{
+ serv->sv_nrthreads --;
+ if (serv->sv_nrthreads == 0)
+ svc_destroy(serv);
}

/*
@@ -514,7 +536,6 @@ struct svc_serv * svc_create_pooled(struct svc_program *, unsigned int,
int svc_set_num_threads(struct svc_serv *, struct svc_pool *, int);
int svc_set_num_threads_sync(struct svc_serv *, struct svc_pool *, int);
int svc_pool_stats_open(struct svc_serv *serv, struct file *file);
-void svc_destroy(struct svc_serv *);
void svc_shutdown_net(struct svc_serv *, struct net *);
int svc_process(struct svc_rqst *);
int bc_svc_process(struct svc_serv *, struct rpc_rqst *,
diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
index 4292278a9552..55a1bf0d129f 100644
--- a/net/sunrpc/svc.c
+++ b/net/sunrpc/svc.c
@@ -528,17 +528,7 @@ EXPORT_SYMBOL_GPL(svc_shutdown_net);
void
svc_destroy(struct svc_serv *serv)
{
- dprintk("svc: svc_destroy(%s, %d)\n",
- serv->sv_program->pg_name,
- serv->sv_nrthreads);
-
- if (serv->sv_nrthreads) {
- if (--(serv->sv_nrthreads) != 0) {
- svc_sock_update_bufs(serv);
- return;
- }
- } else
- printk("svc_destroy: no threads for serv=%p!\n", serv);
+ dprintk("svc: svc_destroy(%s)\n", serv->sv_program->pg_name);

del_timer_sync(&serv->sv_temptimer);

@@ -892,9 +882,10 @@ svc_exit_thread(struct svc_rqst *rqstp)

svc_rqst_free(rqstp);

- /* Release the server */
- if (serv)
- svc_destroy(serv);
+ if (!serv)
+ return;
+ svc_sock_update_bufs(serv);
+ svc_destroy(serv);
}
EXPORT_SYMBOL_GPL(svc_exit_thread);




2021-11-23 01:31:08

by NeilBrown

Subject: [PATCH 02/19] NFSD: handle error better in write_ports_addfd()

If svc_addsock() fails, we shouldn't destroy the serv, unless we had
only just created it. So if there are any permanent sockets already
attached, leave the serv in place.

Signed-off-by: NeilBrown <[email protected]>
---
fs/nfsd/nfsctl.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
index 5eb564e58a9b..93d417871302 100644
--- a/fs/nfsd/nfsctl.c
+++ b/fs/nfsd/nfsctl.c
@@ -742,7 +742,7 @@ static ssize_t __write_ports_addfd(char *buf, struct net *net, const struct cred
return err;

err = svc_addsock(nn->nfsd_serv, fd, buf, SIMPLE_TRANSACTION_LIMIT, cred);
- if (err < 0) {
+ if (err < 0 && list_empty(&nn->nfsd_serv->sv_permsocks)) {
nfsd_put(net);
return err;
}



2021-11-23 01:31:12

by NeilBrown

Subject: [PATCH 03/19] SUNRPC: stop using ->sv_nrthreads as a refcount

The use of sv_nrthreads as a general refcount results in clumsy code, as
is seen by various comments needed to explain the situation.

This patch introduces a 'struct kref' and uses that for reference
counting, leaving sv_nrthreads to be a pure count of threads. The kref
is managed primarily in svc_get() and svc_put(), and also in nfsd_put().

svc_destroy() now takes a pointer to the embedded kref, rather than to
the serv.

nfsd allows the svc_serv to exist with ->sv_nrthreads being zero. This
happens when a transport is created before the first thread is started.
To support this, a 'keep_active' flag is introduced which holds a ref on
the svc_serv. This is set when any listening socket is successfully
added (unless there are running threads), and cleared when the number of
threads is set. So when the last thread exits, the svc_serv will be
destroyed.
The use of 'keep_active' replaces previous code which checked if there
were any permanent sockets.

We no longer clear ->rq_server when nfsd() exits. This was done
to prevent svc_exit_thread() from calling svc_destroy().
Instead we take an extra reference to the svc_serv to prevent
svc_destroy() from being called.
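[Editor's sketch: a userspace analogue of the kref conversion, with C11
atomics standing in for the kernel's kref and a local container_of()
macro. All names and the struct layout are illustrative; the real
svc_destroy() takes the embedded kref exactly as shown here.]

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct kref { atomic_int refcount; };

static void kref_init(struct kref *k) { atomic_store(&k->refcount, 1); }
static void kref_get(struct kref *k) { atomic_fetch_add(&k->refcount, 1); }

/* Runs release() and returns true only on the final put. */
static bool kref_put(struct kref *k, void (*release)(struct kref *))
{
	if (atomic_fetch_sub(&k->refcount, 1) == 1) {
		release(k);
		return true;
	}
	return false;
}

/* As in the patch, the release callback receives the embedded kref
 * and recovers the containing object itself. */
struct serv {
	struct kref sv_refcnt;
	bool destroyed;
};

static void serv_destroy(struct kref *ref)
{
	struct serv *s = container_of(ref, struct serv, sv_refcnt);

	s->destroyed = true;	/* real code would tear down the service */
}
```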

Signed-off-by: NeilBrown <[email protected]>
---
fs/lockd/svc.c | 4 ----
fs/nfs/callback.c | 2 +-
fs/nfsd/netns.h | 7 +++++++
fs/nfsd/nfsctl.c | 22 ++++++++++------------
fs/nfsd/nfssvc.c | 42 ++++++++++++++++++++++++++----------------
include/linux/sunrpc/svc.h | 14 ++++----------
net/sunrpc/svc.c | 22 +++++++++++-----------
7 files changed, 59 insertions(+), 54 deletions(-)

diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c
index 135bd86ed3ad..a9669b106dbd 100644
--- a/fs/lockd/svc.c
+++ b/fs/lockd/svc.c
@@ -486,10 +486,6 @@ int lockd_up(struct net *net, const struct cred *cred)
goto err_put;
}
nlmsvc_users++;
- /*
- * Note: svc_serv structures have an initial use count of 1,
- * so we exit through here on both success and failure.
- */
err_put:
svc_put(serv);
err_create:
diff --git a/fs/nfs/callback.c b/fs/nfs/callback.c
index edbc7579b4aa..d9d78ffd1d65 100644
--- a/fs/nfs/callback.c
+++ b/fs/nfs/callback.c
@@ -169,7 +169,7 @@ static int nfs_callback_start_svc(int minorversion, struct rpc_xprt *xprt,
if (nrservs < NFS4_MIN_NR_CALLBACK_THREADS)
nrservs = NFS4_MIN_NR_CALLBACK_THREADS;

- if (serv->sv_nrthreads-1 == nrservs)
+ if (serv->sv_nrthreads == nrservs)
return 0;

ret = serv->sv_ops->svo_setup(serv, NULL, nrservs);
diff --git a/fs/nfsd/netns.h b/fs/nfsd/netns.h
index 935c1028c217..08bcd8f23b01 100644
--- a/fs/nfsd/netns.h
+++ b/fs/nfsd/netns.h
@@ -123,6 +123,13 @@ struct nfsd_net {
u32 clverifier_counter;

struct svc_serv *nfsd_serv;
+ /* When a listening socket is added to nfsd, keep_active is set
+ * and this justifies a reference on nfsd_serv. This stops
+ * nfsd_serv from being freed. When the number of threads is
+ * set, keep_active is cleared and the reference is dropped. So
+ * when the last thread exits, the service will be destroyed.
+ */
+ int keep_active;

wait_queue_head_t ntf_wq;
atomic_t ntf_refcnt;
diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
index 93d417871302..2bbc26fbdae8 100644
--- a/fs/nfsd/nfsctl.c
+++ b/fs/nfsd/nfsctl.c
@@ -742,13 +742,12 @@ static ssize_t __write_ports_addfd(char *buf, struct net *net, const struct cred
return err;

err = svc_addsock(nn->nfsd_serv, fd, buf, SIMPLE_TRANSACTION_LIMIT, cred);
- if (err < 0 && list_empty(&nn->nfsd_serv->sv_permsocks)) {
- nfsd_put(net);
- return err;
- }

- /* Decrease the count, but don't shut down the service */
- nn->nfsd_serv->sv_nrthreads--;
+ if (err >= 0 &&
+ !nn->nfsd_serv->sv_nrthreads && !xchg(&nn->keep_active, 1))
+ svc_get(nn->nfsd_serv);
+
+ nfsd_put(net);
return err;
}

@@ -783,8 +782,10 @@ static ssize_t __write_ports_addxprt(char *buf, struct net *net, const struct cr
if (err < 0 && err != -EAFNOSUPPORT)
goto out_close;

- /* Decrease the count, but don't shut down the service */
- nn->nfsd_serv->sv_nrthreads--;
+ if (!nn->nfsd_serv->sv_nrthreads && !xchg(&nn->keep_active, 1))
+ svc_get(nn->nfsd_serv);
+
+ nfsd_put(net);
return 0;
out_close:
xprt = svc_find_xprt(nn->nfsd_serv, transport, net, PF_INET, port);
@@ -793,10 +794,7 @@ static ssize_t __write_ports_addxprt(char *buf, struct net *net, const struct cr
svc_xprt_put(xprt);
}
out_err:
- if (!list_empty(&nn->nfsd_serv->sv_permsocks))
- nn->nfsd_serv->sv_nrthreads--;
- else
- nfsd_put(net);
+ nfsd_put(net);
return err;
}

diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
index 2ab0e650a0e2..5f605e7e8091 100644
--- a/fs/nfsd/nfssvc.c
+++ b/fs/nfsd/nfssvc.c
@@ -60,13 +60,13 @@ static __be32 nfsd_init_request(struct svc_rqst *,
* extent ->sv_temp_socks and ->sv_permsocks. It also protects nfsdstats.th_cnt
*
* If (out side the lock) nn->nfsd_serv is non-NULL, then it must point to a
- * properly initialised 'struct svc_serv' with ->sv_nrthreads > 0. That number
- * of nfsd threads must exist and each must listed in ->sp_all_threads in each
- * entry of ->sv_pools[].
+ * properly initialised 'struct svc_serv' with ->sv_nrthreads > 0 (unless
+ * nn->keep_active is set). That number of nfsd threads must
+ * exist and each must be listed in ->sp_all_threads in some entry of
+ * ->sv_pools[].
*
- * Transitions of the thread count between zero and non-zero are of particular
- * interest since the svc_serv needs to be created and initialized at that
- * point, or freed.
+ * Each active thread holds a counted reference on nn->nfsd_serv, as does
+ * the nn->keep_active flag and various transient calls to svc_get().
*
* Finally, the nfsd_mutex also protects some of the global variables that are
* accessed when nfsd starts and that are settable via the write_* routines in
@@ -700,14 +700,22 @@ int nfsd_get_nrthreads(int n, int *nthreads, struct net *net)
return 0;
}

+/* This is the callback for kref_put() below.
+ * There is no code here as the first thing to be done is
+ * call svc_shutdown_net(), but we cannot get the 'net' from
+ * the kref. So do all the work when kref_put returns true.
+ */
+static void nfsd_noop(struct kref *ref)
+{
+}
+
void nfsd_put(struct net *net)
{
struct nfsd_net *nn = net_generic(net, nfsd_net_id);

- nn->nfsd_serv->sv_nrthreads --;
- if (nn->nfsd_serv->sv_nrthreads == 0) {
+ if (kref_put(&nn->nfsd_serv->sv_refcnt, nfsd_noop)) {
svc_shutdown_net(nn->nfsd_serv, net);
- svc_destroy(nn->nfsd_serv);
+ svc_destroy(&nn->nfsd_serv->sv_refcnt);
nfsd_complete_shutdown(net);
}
}
@@ -803,15 +811,14 @@ nfsd_svc(int nrservs, struct net *net, const struct cred *cred)
NULL, nrservs);
if (error)
goto out_shutdown;
- /* We are holding a reference to nn->nfsd_serv which
- * we don't want to count in the return value,
- * so subtract 1
- */
- error = nn->nfsd_serv->sv_nrthreads - 1;
+ error = nn->nfsd_serv->sv_nrthreads;
out_shutdown:
if (error < 0 && !nfsd_up_before)
nfsd_shutdown_net(net);
out_put:
+ /* Threads now hold service active */
+ if (xchg(&nn->keep_active, 0))
+ nfsd_put(net);
nfsd_put(net);
out:
mutex_unlock(&nfsd_mutex);
@@ -980,11 +987,15 @@ nfsd(void *vrqstp)
nfsdstats.th_cnt --;

out:
- rqstp->rq_server = NULL;
+ /* Take an extra ref so that the svc_put in svc_exit_thread()
+ * doesn't call svc_destroy()
+ */
+ svc_get(nn->nfsd_serv);

/* Release the thread */
svc_exit_thread(rqstp);

+ /* Now if needed we call svc_destroy in appropriate context */
nfsd_put(net);

/* Release module */
@@ -1099,7 +1110,6 @@ int nfsd_pool_stats_open(struct inode *inode, struct file *file)
mutex_unlock(&nfsd_mutex);
return -ENODEV;
}
- /* bump up the psudo refcount while traversing */
svc_get(nn->nfsd_serv);
ret = svc_pool_stats_open(nn->nfsd_serv, file);
mutex_unlock(&nfsd_mutex);
diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
index d87c3392a1e9..3903b4ae8ac5 100644
--- a/include/linux/sunrpc/svc.h
+++ b/include/linux/sunrpc/svc.h
@@ -85,6 +85,7 @@ struct svc_serv {
struct svc_program * sv_program; /* RPC program */
struct svc_stat * sv_stats; /* RPC statistics */
spinlock_t sv_lock;
+ struct kref sv_refcnt;
unsigned int sv_nrthreads; /* # of server threads */
unsigned int sv_maxconn; /* max connections allowed or
* '0' causing max to be based
@@ -119,19 +120,14 @@ struct svc_serv {
* @serv: the svc_serv to have count incremented
*
* Returns: the svc_serv that was passed in.
- *
- * We use sv_nrthreads as a reference count. svc_put() drops
- * this refcount, so we need to bump it up around operations that
- * change the number of threads. Horrible, but there it is.
- * Should be called with the "service mutex" held.
*/
static inline struct svc_serv *svc_get(struct svc_serv *serv)
{
- serv->sv_nrthreads++;
+ kref_get(&serv->sv_refcnt);
return serv;
}

-void svc_destroy(struct svc_serv *serv);
+void svc_destroy(struct kref *);

/**
* svc_put - decrement reference count on a SUNRPC serv
@@ -142,9 +138,7 @@ void svc_destroy(struct svc_serv *serv);
*/
static inline void svc_put(struct svc_serv *serv)
{
- serv->sv_nrthreads --;
- if (serv->sv_nrthreads == 0)
- svc_destroy(serv);
+ kref_put(&serv->sv_refcnt, svc_destroy);
}

/*
diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
index 55a1bf0d129f..acddc6e12e9e 100644
--- a/net/sunrpc/svc.c
+++ b/net/sunrpc/svc.c
@@ -435,7 +435,7 @@ __svc_create(struct svc_program *prog, unsigned int bufsize, int npools,
return NULL;
serv->sv_name = prog->pg_name;
serv->sv_program = prog;
- serv->sv_nrthreads = 1;
+ kref_init(&serv->sv_refcnt);
serv->sv_stats = prog->pg_stats;
if (bufsize > RPCSVC_MAXPAYLOAD)
bufsize = RPCSVC_MAXPAYLOAD;
@@ -526,10 +526,11 @@ EXPORT_SYMBOL_GPL(svc_shutdown_net);
* protect the sv_nrthreads, sv_permsocks and sv_tempsocks.
*/
void
-svc_destroy(struct svc_serv *serv)
+svc_destroy(struct kref *ref)
{
- dprintk("svc: svc_destroy(%s)\n", serv->sv_program->pg_name);
+ struct svc_serv *serv = container_of(ref, struct svc_serv, sv_refcnt);

+ dprintk("svc: svc_destroy(%s)\n", serv->sv_program->pg_name);
del_timer_sync(&serv->sv_temptimer);

/*
@@ -637,6 +638,7 @@ svc_prepare_thread(struct svc_serv *serv, struct svc_pool *pool, int node)
if (!rqstp)
return ERR_PTR(-ENOMEM);

+ svc_get(serv);
serv->sv_nrthreads++;
spin_lock_bh(&pool->sp_lock);
pool->sp_nrthreads++;
@@ -776,8 +778,7 @@ int
svc_set_num_threads(struct svc_serv *serv, struct svc_pool *pool, int nrservs)
{
if (pool == NULL) {
- /* The -1 assumes caller has done a svc_get() */
- nrservs -= (serv->sv_nrthreads-1);
+ nrservs -= serv->sv_nrthreads;
} else {
spin_lock_bh(&pool->sp_lock);
nrservs -= pool->sp_nrthreads;
@@ -814,8 +815,7 @@ int
svc_set_num_threads_sync(struct svc_serv *serv, struct svc_pool *pool, int nrservs)
{
if (pool == NULL) {
- /* The -1 assumes caller has done a svc_get() */
- nrservs -= (serv->sv_nrthreads-1);
+ nrservs -= serv->sv_nrthreads;
} else {
spin_lock_bh(&pool->sp_lock);
nrservs -= pool->sp_nrthreads;
@@ -880,12 +880,12 @@ svc_exit_thread(struct svc_rqst *rqstp)
list_del_rcu(&rqstp->rq_all);
spin_unlock_bh(&pool->sp_lock);

+ serv->sv_nrthreads -= 1;
+ svc_sock_update_bufs(serv);
+
svc_rqst_free(rqstp);

- if (!serv)
- return;
- svc_sock_update_bufs(serv);
- svc_destroy(serv);
+ svc_put(serv);
}
EXPORT_SYMBOL_GPL(svc_exit_thread);




2021-11-23 01:31:20

by NeilBrown

Subject: [PATCH 04/19] nfsd: make nfsd_stats.th_cnt atomic_t

This allows us to move the updates for th_cnt out of the mutex.
This is a step towards reducing mutex coverage in nfsd().
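[Editor's sketch: the underlying pattern — an atomic counter so that
per-thread bookkeeping no longer needs a global mutex — in userspace
C11 with pthreads. Illustrative only; nfsd uses the kernel's atomic_t
and atomic_inc/atomic_dec.]

```c
#include <pthread.h>
#include <stdatomic.h>

static atomic_int th_cnt;	/* analogue of nfsdstats.th_cnt */

static void *service_thread(void *arg)
{
	atomic_fetch_add(&th_cnt, 1);	/* on entry: no mutex required */
	/* ... handle requests ... */
	atomic_fetch_sub(&th_cnt, 1);	/* on exit */
	return NULL;
}
```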

Signed-off-by: NeilBrown <[email protected]>
---
fs/nfsd/nfssvc.c | 6 +++---
fs/nfsd/stats.c | 2 +-
fs/nfsd/stats.h | 4 +---
3 files changed, 5 insertions(+), 7 deletions(-)

diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
index 5f605e7e8091..fc5899502a83 100644
--- a/fs/nfsd/nfssvc.c
+++ b/fs/nfsd/nfssvc.c
@@ -57,7 +57,7 @@ static __be32 nfsd_init_request(struct svc_rqst *,
/*
* nfsd_mutex protects nn->nfsd_serv -- both the pointer itself and the members
* of the svc_serv struct. In particular, ->sv_nrthreads but also to some
- * extent ->sv_temp_socks and ->sv_permsocks. It also protects nfsdstats.th_cnt
+ * extent ->sv_temp_socks and ->sv_permsocks.
*
* If (out side the lock) nn->nfsd_serv is non-NULL, then it must point to a
* properly initialised 'struct svc_serv' with ->sv_nrthreads > 0 (unless
@@ -955,8 +955,8 @@ nfsd(void *vrqstp)
allow_signal(SIGINT);
allow_signal(SIGQUIT);

- nfsdstats.th_cnt++;
mutex_unlock(&nfsd_mutex);
+ atomic_inc(&nfsdstats.th_cnt);

set_freezable();

@@ -983,8 +983,8 @@ nfsd(void *vrqstp)
/* Clear signals before calling svc_exit_thread() */
flush_signals(current);

+ atomic_dec(&nfsdstats.th_cnt);
mutex_lock(&nfsd_mutex);
- nfsdstats.th_cnt --;

out:
/* Take an extra ref so that the svc_put in svc_exit_thread()
diff --git a/fs/nfsd/stats.c b/fs/nfsd/stats.c
index 1d3b881e7382..a8c5a02a84f0 100644
--- a/fs/nfsd/stats.c
+++ b/fs/nfsd/stats.c
@@ -45,7 +45,7 @@ static int nfsd_proc_show(struct seq_file *seq, void *v)
percpu_counter_sum_positive(&nfsdstats.counter[NFSD_STATS_IO_WRITE]));

/* thread usage: */
- seq_printf(seq, "th %u 0", nfsdstats.th_cnt);
+ seq_printf(seq, "th %u 0", atomic_read(&nfsdstats.th_cnt));

/* deprecated thread usage histogram stats */
for (i = 0; i < 10; i++)
diff --git a/fs/nfsd/stats.h b/fs/nfsd/stats.h
index 51ecda852e23..9b43dc3d9991 100644
--- a/fs/nfsd/stats.h
+++ b/fs/nfsd/stats.h
@@ -29,11 +29,9 @@ enum {
struct nfsd_stats {
struct percpu_counter counter[NFSD_STATS_COUNTERS_NUM];

- /* Protected by nfsd_mutex */
- unsigned int th_cnt; /* number of available threads */
+ atomic_t th_cnt; /* number of available threads */
};

-
extern struct nfsd_stats nfsdstats;

extern struct svc_stat nfsd_svcstats;



2021-11-23 01:31:24

by NeilBrown

Subject: [PATCH 05/19] SUNRPC: use sv_lock to protect updates to sv_nrthreads.

Using sv_lock means we don't need to hold the service mutex over these
updates.

In particular, svc_exit_thread() no longer requires synchronisation, so
threads can exit asynchronously.

Signed-off-by: NeilBrown <[email protected]>
---
fs/nfsd/nfssvc.c | 5 ++---
net/sunrpc/svc.c | 9 +++++++--
2 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
index fc5899502a83..e9c9fa820b17 100644
--- a/fs/nfsd/nfssvc.c
+++ b/fs/nfsd/nfssvc.c
@@ -55,9 +55,8 @@ static __be32 nfsd_init_request(struct svc_rqst *,
struct svc_process_info *);

/*
- * nfsd_mutex protects nn->nfsd_serv -- both the pointer itself and the members
- * of the svc_serv struct. In particular, ->sv_nrthreads but also to some
- * extent ->sv_temp_socks and ->sv_permsocks.
+ * nfsd_mutex protects nn->nfsd_serv -- both the pointer itself and some members
+ * of the svc_serv struct such as ->sv_temp_socks and ->sv_permsocks.
*
* If (out side the lock) nn->nfsd_serv is non-NULL, then it must point to a
* properly initialised 'struct svc_serv' with ->sv_nrthreads > 0 (unless
diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
index acddc6e12e9e..2b2042234e4b 100644
--- a/net/sunrpc/svc.c
+++ b/net/sunrpc/svc.c
@@ -523,7 +523,7 @@ EXPORT_SYMBOL_GPL(svc_shutdown_net);

/*
* Destroy an RPC service. Should be called with appropriate locking to
- * protect the sv_nrthreads, sv_permsocks and sv_tempsocks.
+ * protect sv_permsocks and sv_tempsocks.
*/
void
svc_destroy(struct kref *ref)
@@ -639,7 +639,10 @@ svc_prepare_thread(struct svc_serv *serv, struct svc_pool *pool, int node)
return ERR_PTR(-ENOMEM);

svc_get(serv);
- serv->sv_nrthreads++;
+ spin_lock_bh(&serv->sv_lock);
+ serv->sv_nrthreads += 1;
+ spin_unlock_bh(&serv->sv_lock);
+
spin_lock_bh(&pool->sp_lock);
pool->sp_nrthreads++;
list_add_rcu(&rqstp->rq_all, &pool->sp_all_threads);
@@ -880,7 +883,9 @@ svc_exit_thread(struct svc_rqst *rqstp)
list_del_rcu(&rqstp->rq_all);
spin_unlock_bh(&pool->sp_lock);

+ spin_lock_bh(&serv->sv_lock);
serv->sv_nrthreads -= 1;
+ spin_unlock_bh(&serv->sv_lock);
svc_sock_update_bufs(serv);

svc_rqst_free(rqstp);



2021-11-23 01:31:32

by NeilBrown

Subject: [PATCH 06/19] NFSD: narrow nfsd_mutex protection in nfsd thread

There is nothing happening in the start of nfsd() that requires
protection by the mutex, so don't take it until shutting down the thread
- which does still require protection - but only for nfsd_put().

Signed-off-by: NeilBrown <[email protected]>
---
fs/nfsd/nfssvc.c | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
index e9c9fa820b17..097abd8b059c 100644
--- a/fs/nfsd/nfssvc.c
+++ b/fs/nfsd/nfssvc.c
@@ -932,9 +932,6 @@ nfsd(void *vrqstp)
struct nfsd_net *nn = net_generic(net, nfsd_net_id);
int err;

- /* Lock module and set up kernel thread */
- mutex_lock(&nfsd_mutex);
-
/* At this point, the thread shares current->fs
* with the init process. We need to create files with the
* umask as defined by the client instead of init's umask. */
@@ -954,7 +951,6 @@ nfsd(void *vrqstp)
allow_signal(SIGINT);
allow_signal(SIGQUIT);

- mutex_unlock(&nfsd_mutex);
atomic_inc(&nfsdstats.th_cnt);

set_freezable();
@@ -983,7 +979,6 @@ nfsd(void *vrqstp)
flush_signals(current);

atomic_dec(&nfsdstats.th_cnt);
- mutex_lock(&nfsd_mutex);

out:
/* Take an extra ref so that the svc_put in svc_exit_thread()
@@ -995,10 +990,11 @@ nfsd(void *vrqstp)
svc_exit_thread(rqstp);

/* Now if needed we call svc_destroy in appropriate context */
+ mutex_lock(&nfsd_mutex);
nfsd_put(net);
+ mutex_unlock(&nfsd_mutex);

/* Release module */
- mutex_unlock(&nfsd_mutex);
module_put_and_exit(0);
return 0;
}



2021-11-23 01:31:38

by NeilBrown

Subject: [PATCH 07/19] NFSD: Make it possible to use svc_set_num_threads_sync

nfsd cannot currently use svc_set_num_threads_sync. It instead
uses svc_set_num_threads which does *not* wait for threads to all
exit, and has a separate mechanism (nfsd_shutdown_complete) to wait
for completion.

The reason that nfsd is unlike other services is that nfsd threads can
exit separately from svc_set_num_threads being called - they die on
receipt of SIGKILL. Also, when the last thread exits, the service must
be shut down (sockets closed).

For this, the nfsd_mutex needs to be taken, and as that mutex needs to
be held while svc_set_num_threads is called, the one cannot wait for
the other.

This patch changes the nfsd thread so that it can drop the ref on the
service without blocking on nfsd_mutex, so that svc_set_num_threads_sync
can be used:
- if it can drop a non-last reference, it does that. This does not
trigger shutdown and does not require a mutex. This will likely
happen for all but the last thread signalled, and for all threads
being shut down by nfsd_shutdown_threads()
- if it can get the mutex without blocking (trylock), it does that
and then drops the reference. This will likely happen for the
last thread killed by SIGKILL
- Otherwise there might be an unrelated task holding the mutex,
possibly in another network namespace, or nfsd_shutdown_threads()
might be just about to get a reference on the service, after which
we can drop ours safely.
We cannot conveniently get wakeup notifications on these events,
and we are unlikely to need to, so we sleep briefly and check again.

With this we can discard nfsd_shutdown_complete and
nfsd_complete_shutdown(), and switch to svc_set_num_threads_sync.
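[Editor's sketch: the three cases above amount to a retry loop around
mutex_trylock(). A single-threaded userspace sketch with a pthread
mutex in place of nfsd_mutex; the helper names and struct are made up
for illustration and simplified to non-atomic operations.]

```c
#include <pthread.h>
#include <stdbool.h>
#include <unistd.h>

struct serv {
	int ref;
	bool shut_down;
};

static pthread_mutex_t nfsd_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Case 1: drop the reference only if it is not the last one;
 * this needs no mutex. */
static bool put_nonlast(struct serv *s)
{
	if (s->ref > 1) {
		s->ref--;
		return true;
	}
	return false;
}

/* Final put: shuts the service down; must hold nfsd_mutex. */
static void put_last(struct serv *s)
{
	s->ref--;
	s->shut_down = true;	/* stands in for svc_shutdown_net() etc. */
}

static void thread_drop_ref(struct serv *s)
{
	for (;;) {
		if (put_nonlast(s))
			return;			/* common, mutex-free case */
		if (pthread_mutex_trylock(&nfsd_mutex) == 0) {
			if (!put_nonlast(s))	/* re-check under the mutex */
				put_last(s);
			pthread_mutex_unlock(&nfsd_mutex);
			return;
		}
		usleep(1000);		/* mutex busy: sleep briefly, retry */
	}
}
```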

Signed-off-by: NeilBrown <[email protected]>
---
fs/nfsd/netns.h | 3 ---
fs/nfsd/nfssvc.c | 41 ++++++++++++++++++++---------------------
include/linux/sunrpc/svc.h | 13 +++++++++++++
3 files changed, 33 insertions(+), 24 deletions(-)

diff --git a/fs/nfsd/netns.h b/fs/nfsd/netns.h
index 08bcd8f23b01..1fd59eb0730b 100644
--- a/fs/nfsd/netns.h
+++ b/fs/nfsd/netns.h
@@ -134,9 +134,6 @@ struct nfsd_net {
wait_queue_head_t ntf_wq;
atomic_t ntf_refcnt;

- /* Allow umount to wait for nfsd state cleanup */
- struct completion nfsd_shutdown_complete;
-
/*
* clientid and stateid data for construction of net unique COPY
* stateids.
diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
index 097abd8b059c..d0d9107a1b93 100644
--- a/fs/nfsd/nfssvc.c
+++ b/fs/nfsd/nfssvc.c
@@ -593,20 +593,10 @@ static const struct svc_serv_ops nfsd_thread_sv_ops = {
.svo_shutdown = nfsd_last_thread,
.svo_function = nfsd,
.svo_enqueue_xprt = svc_xprt_do_enqueue,
- .svo_setup = svc_set_num_threads,
+ .svo_setup = svc_set_num_threads_sync,
.svo_module = THIS_MODULE,
};

-static void nfsd_complete_shutdown(struct net *net)
-{
- struct nfsd_net *nn = net_generic(net, nfsd_net_id);
-
- WARN_ON(!mutex_is_locked(&nfsd_mutex));
-
- nn->nfsd_serv = NULL;
- complete(&nn->nfsd_shutdown_complete);
-}
-
void nfsd_shutdown_threads(struct net *net)
{
struct nfsd_net *nn = net_generic(net, nfsd_net_id);
@@ -624,8 +614,6 @@ void nfsd_shutdown_threads(struct net *net)
serv->sv_ops->svo_setup(serv, NULL, 0);
nfsd_put(net);
mutex_unlock(&nfsd_mutex);
- /* Wait for shutdown of nfsd_serv to complete */
- wait_for_completion(&nn->nfsd_shutdown_complete);
}

bool i_am_nfsd(void)
@@ -650,7 +638,6 @@ int nfsd_create_serv(struct net *net)
&nfsd_thread_sv_ops);
if (nn->nfsd_serv == NULL)
return -ENOMEM;
- init_completion(&nn->nfsd_shutdown_complete);

nn->nfsd_serv->sv_maxconn = nn->max_connections;
error = svc_bind(nn->nfsd_serv, net);
@@ -659,7 +646,7 @@ int nfsd_create_serv(struct net *net)
* been set up yet.
*/
svc_put(nn->nfsd_serv);
- nfsd_complete_shutdown(net);
+ nn->nfsd_serv = NULL;
return error;
}

@@ -715,7 +702,7 @@ void nfsd_put(struct net *net)
if (kref_put(&nn->nfsd_serv->sv_refcnt, nfsd_noop)) {
svc_shutdown_net(nn->nfsd_serv, net);
svc_destroy(&nn->nfsd_serv->sv_refcnt);
- nfsd_complete_shutdown(net);
+ nn->nfsd_serv = NULL;
}
}

@@ -743,7 +730,7 @@ int nfsd_set_nrthreads(int n, int *nthreads, struct net *net)
if (tot > NFSD_MAXSERVS) {
/* total too large: scale down requested numbers */
for (i = 0; i < n && tot > 0; i++) {
- int new = nthreads[i] * NFSD_MAXSERVS / tot;
+ int new = nthreads[i] * NFSD_MAXSERVS / tot;
tot -= (nthreads[i] - new);
nthreads[i] = new;
}
@@ -989,10 +976,22 @@ nfsd(void *vrqstp)
/* Release the thread */
svc_exit_thread(rqstp);

- /* Now if needed we call svc_destroy in appropriate context */
- mutex_lock(&nfsd_mutex);
- nfsd_put(net);
- mutex_unlock(&nfsd_mutex);
+ /* We need to drop a ref, but may not drop the last reference
+ * without holding nfsd_mutex, and we cannot wait for nfsd_mutex as that
+ * could deadlock with nfsd_shutdown_threads() waiting for us.
+ * So three options are:
+ * - drop a non-final reference,
+ * - get the mutex without waiting
+ * - sleep briefly and try the above again
+ */
+ while (!svc_put_not_last(nn->nfsd_serv)) {
+ if (mutex_trylock(&nfsd_mutex)) {
+ nfsd_put(net);
+ mutex_unlock(&nfsd_mutex);
+ break;
+ }
+ msleep(20);
+ }

/* Release module */
module_put_and_exit(0);
diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
index 3903b4ae8ac5..36bfc0281988 100644
--- a/include/linux/sunrpc/svc.h
+++ b/include/linux/sunrpc/svc.h
@@ -141,6 +141,19 @@ static inline void svc_put(struct svc_serv *serv)
kref_put(&serv->sv_refcnt, svc_destroy);
}

+/**
+ * svc_put_not_last - decrement non-final reference count on SUNRPC serv
+ * @serv: the svc_serv to have count decremented
+ *
+ * Returns: %true if the refcount was decremented.
+ *
+ * If the refcount is 1, it is not decremented and instead failure is reported.
+ */
+static inline bool svc_put_not_last(struct svc_serv *serv)
+{
+ return refcount_dec_not_one(&serv->sv_refcnt.refcount);
+}
+
/*
* Maximum payload size supported by a kernel RPC server.
* This is use to determine the max number of pages nfsd is



2021-11-23 01:31:44

by NeilBrown

Subject: [PATCH 08/19] SUNRPC: discard svo_setup and rename svc_set_num_threads_sync()

The ->svo_setup callback serves no purpose. It is always called from
within the same module that chooses which callback is needed. So
discard it and call the relevant function directly.

Now that svc_set_num_threads() is no longer used remove it and rename
svc_set_num_threads_sync() to remove the "_sync" suffix.

Signed-off-by: NeilBrown <[email protected]>
---
fs/nfs/callback.c | 8 +++----
fs/nfsd/nfssvc.c | 11 ++++------
include/linux/sunrpc/svc.h | 4 ----
net/sunrpc/svc.c | 49 ++------------------------------------------
4 files changed, 10 insertions(+), 62 deletions(-)

diff --git a/fs/nfs/callback.c b/fs/nfs/callback.c
index d9d78ffd1d65..6cdc9d18a7dd 100644
--- a/fs/nfs/callback.c
+++ b/fs/nfs/callback.c
@@ -172,9 +172,9 @@ static int nfs_callback_start_svc(int minorversion, struct rpc_xprt *xprt,
if (serv->sv_nrthreads == nrservs)
return 0;

- ret = serv->sv_ops->svo_setup(serv, NULL, nrservs);
+ ret = svc_set_num_threads(serv, NULL, nrservs);
if (ret) {
- serv->sv_ops->svo_setup(serv, NULL, 0);
+ svc_set_num_threads(serv, NULL, 0);
return ret;
}
dprintk("nfs_callback_up: service started\n");
@@ -235,14 +235,12 @@ static int nfs_callback_up_net(int minorversion, struct svc_serv *serv,
static const struct svc_serv_ops nfs40_cb_sv_ops = {
.svo_function = nfs4_callback_svc,
.svo_enqueue_xprt = svc_xprt_do_enqueue,
- .svo_setup = svc_set_num_threads_sync,
.svo_module = THIS_MODULE,
};
#if defined(CONFIG_NFS_V4_1)
static const struct svc_serv_ops nfs41_cb_sv_ops = {
.svo_function = nfs41_callback_svc,
.svo_enqueue_xprt = svc_xprt_do_enqueue,
- .svo_setup = svc_set_num_threads_sync,
.svo_module = THIS_MODULE,
};

@@ -357,7 +355,7 @@ void nfs_callback_down(int minorversion, struct net *net)
cb_info->users--;
if (cb_info->users == 0) {
svc_get(serv);
- serv->sv_ops->svo_setup(serv, NULL, 0);
+ svc_set_num_threads(serv, NULL, 0);
svc_put(serv);
dprintk("nfs_callback_down: service destroyed\n");
cb_info->serv = NULL;
diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
index d0d9107a1b93..020156e96bdb 100644
--- a/fs/nfsd/nfssvc.c
+++ b/fs/nfsd/nfssvc.c
@@ -593,7 +593,6 @@ static const struct svc_serv_ops nfsd_thread_sv_ops = {
.svo_shutdown = nfsd_last_thread,
.svo_function = nfsd,
.svo_enqueue_xprt = svc_xprt_do_enqueue,
- .svo_setup = svc_set_num_threads_sync,
.svo_module = THIS_MODULE,
};

@@ -611,7 +610,7 @@ void nfsd_shutdown_threads(struct net *net)

svc_get(serv);
/* Kill outstanding nfsd threads */
- serv->sv_ops->svo_setup(serv, NULL, 0);
+ svc_set_num_threads(serv, NULL, 0);
nfsd_put(net);
mutex_unlock(&nfsd_mutex);
}
@@ -750,8 +749,9 @@ int nfsd_set_nrthreads(int n, int *nthreads, struct net *net)
/* apply the new numbers */
svc_get(nn->nfsd_serv);
for (i = 0; i < n; i++) {
- err = nn->nfsd_serv->sv_ops->svo_setup(nn->nfsd_serv,
- &nn->nfsd_serv->sv_pools[i], nthreads[i]);
+ err = svc_set_num_threads(nn->nfsd_serv,
+ &nn->nfsd_serv->sv_pools[i],
+ nthreads[i]);
if (err)
break;
}
@@ -793,8 +793,7 @@ nfsd_svc(int nrservs, struct net *net, const struct cred *cred)
error = nfsd_startup_net(net, cred);
if (error)
goto out_put;
- error = nn->nfsd_serv->sv_ops->svo_setup(nn->nfsd_serv,
- NULL, nrservs);
+ error = svc_set_num_threads(nn->nfsd_serv, NULL, nrservs);
if (error)
goto out_shutdown;
error = nn->nfsd_serv->sv_nrthreads;
diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
index 36bfc0281988..0b38c6eaf985 100644
--- a/include/linux/sunrpc/svc.h
+++ b/include/linux/sunrpc/svc.h
@@ -64,9 +64,6 @@ struct svc_serv_ops {
/* queue up a transport for servicing */
void (*svo_enqueue_xprt)(struct svc_xprt *);

- /* set up thread (or whatever) execution context */
- int (*svo_setup)(struct svc_serv *, struct svc_pool *, int);
-
/* optional module to count when adding threads (pooled svcs only) */
struct module *svo_module;
};
@@ -541,7 +538,6 @@ void svc_pool_map_put(void);
struct svc_serv * svc_create_pooled(struct svc_program *, unsigned int,
const struct svc_serv_ops *);
int svc_set_num_threads(struct svc_serv *, struct svc_pool *, int);
-int svc_set_num_threads_sync(struct svc_serv *, struct svc_pool *, int);
int svc_pool_stats_open(struct svc_serv *serv, struct file *file);
void svc_shutdown_net(struct svc_serv *, struct net *);
int svc_process(struct svc_rqst *);
diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
index 2b2042234e4b..5513f8c9a8d6 100644
--- a/net/sunrpc/svc.c
+++ b/net/sunrpc/svc.c
@@ -743,58 +743,13 @@ svc_start_kthreads(struct svc_serv *serv, struct svc_pool *pool, int nrservs)
return 0;
}

-
-/* destroy old threads */
-static int
-svc_signal_kthreads(struct svc_serv *serv, struct svc_pool *pool, int nrservs)
-{
- struct task_struct *task;
- unsigned int state = serv->sv_nrthreads-1;
-
- /* destroy old threads */
- do {
- task = choose_victim(serv, pool, &state);
- if (task == NULL)
- break;
- send_sig(SIGINT, task, 1);
- nrservs++;
- } while (nrservs < 0);
-
- return 0;
-}
-
/*
* Create or destroy enough new threads to make the number
* of threads the given number. If `pool' is non-NULL, applies
* only to threads in that pool, otherwise round-robins between
* all pools. Caller must ensure that mutual exclusion between this and
* server startup or shutdown.
- *
- * Destroying threads relies on the service threads filling in
- * rqstp->rq_task, which only the nfs ones do. Assumes the serv
- * has been created using svc_create_pooled().
- *
- * Based on code that used to be in nfsd_svc() but tweaked
- * to be pool-aware.
*/
-int
-svc_set_num_threads(struct svc_serv *serv, struct svc_pool *pool, int nrservs)
-{
- if (pool == NULL) {
- nrservs -= serv->sv_nrthreads;
- } else {
- spin_lock_bh(&pool->sp_lock);
- nrservs -= pool->sp_nrthreads;
- spin_unlock_bh(&pool->sp_lock);
- }
-
- if (nrservs > 0)
- return svc_start_kthreads(serv, pool, nrservs);
- if (nrservs < 0)
- return svc_signal_kthreads(serv, pool, nrservs);
- return 0;
-}
-EXPORT_SYMBOL_GPL(svc_set_num_threads);

/* destroy old threads */
static int
@@ -815,7 +770,7 @@ svc_stop_kthreads(struct svc_serv *serv, struct svc_pool *pool, int nrservs)
}

int
-svc_set_num_threads_sync(struct svc_serv *serv, struct svc_pool *pool, int nrservs)
+svc_set_num_threads(struct svc_serv *serv, struct svc_pool *pool, int nrservs)
{
if (pool == NULL) {
nrservs -= serv->sv_nrthreads;
@@ -831,7 +786,7 @@ svc_set_num_threads_sync(struct svc_serv *serv, struct svc_pool *pool, int nrser
return svc_stop_kthreads(serv, pool, nrservs);
return 0;
}
-EXPORT_SYMBOL_GPL(svc_set_num_threads_sync);
+EXPORT_SYMBOL_GPL(svc_set_num_threads);

/**
* svc_rqst_replace_page - Replace one page in rq_pages[]



2021-11-23 01:31:51

by NeilBrown

Subject: [PATCH 09/19] NFSD: simplify locking for network notifier.

nfsd currently maintains an open-coded read/write semaphore (refcount
and wait queue) for each network namespace to ensure the nfs service
isn't shut down while the notifier is running.

This is excessive. As there is unlikely to be contention between
notifiers and they run without sleeping, a single spinlock is sufficient
to avoid problems.

Signed-off-by: NeilBrown <[email protected]>
---
fs/nfsd/netns.h | 3 ---
fs/nfsd/nfsctl.c | 2 --
fs/nfsd/nfssvc.c | 38 ++++++++++++++++++++------------------
3 files changed, 20 insertions(+), 23 deletions(-)

diff --git a/fs/nfsd/netns.h b/fs/nfsd/netns.h
index 1fd59eb0730b..021acdc0d03b 100644
--- a/fs/nfsd/netns.h
+++ b/fs/nfsd/netns.h
@@ -131,9 +131,6 @@ struct nfsd_net {
*/
int keep_active;

- wait_queue_head_t ntf_wq;
- atomic_t ntf_refcnt;
-
/*
* clientid and stateid data for construction of net unique COPY
* stateids.
diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
index 2bbc26fbdae8..376862cf2f14 100644
--- a/fs/nfsd/nfsctl.c
+++ b/fs/nfsd/nfsctl.c
@@ -1483,8 +1483,6 @@ static __net_init int nfsd_init_net(struct net *net)
nn->clientid_counter = nn->clientid_base + 1;
nn->s2s_cp_cl_id = nn->clientid_counter++;

- atomic_set(&nn->ntf_refcnt, 0);
- init_waitqueue_head(&nn->ntf_wq);
seqlock_init(&nn->boot_lock);

return 0;
diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
index 020156e96bdb..070525fbc1ad 100644
--- a/fs/nfsd/nfssvc.c
+++ b/fs/nfsd/nfssvc.c
@@ -434,6 +434,7 @@ static void nfsd_shutdown_net(struct net *net)
nfsd_shutdown_generic();
}

+DEFINE_SPINLOCK(nfsd_notifier_lock);
static int nfsd_inetaddr_event(struct notifier_block *this, unsigned long event,
void *ptr)
{
@@ -443,18 +444,17 @@ static int nfsd_inetaddr_event(struct notifier_block *this, unsigned long event,
struct nfsd_net *nn = net_generic(net, nfsd_net_id);
struct sockaddr_in sin;

- if ((event != NETDEV_DOWN) ||
- !atomic_inc_not_zero(&nn->ntf_refcnt))
+ if (event != NETDEV_DOWN || !nn->nfsd_serv)
goto out;

+ spin_lock(&nfsd_notifier_lock);
if (nn->nfsd_serv) {
dprintk("nfsd_inetaddr_event: removed %pI4\n", &ifa->ifa_local);
sin.sin_family = AF_INET;
sin.sin_addr.s_addr = ifa->ifa_local;
svc_age_temp_xprts_now(nn->nfsd_serv, (struct sockaddr *)&sin);
}
- atomic_dec(&nn->ntf_refcnt);
- wake_up(&nn->ntf_wq);
+ spin_unlock(&nfsd_notifier_lock);

out:
return NOTIFY_DONE;
@@ -474,10 +474,10 @@ static int nfsd_inet6addr_event(struct notifier_block *this,
struct nfsd_net *nn = net_generic(net, nfsd_net_id);
struct sockaddr_in6 sin6;

- if ((event != NETDEV_DOWN) ||
- !atomic_inc_not_zero(&nn->ntf_refcnt))
+ if (event != NETDEV_DOWN || !nn->nfsd_serv)
goto out;

+ spin_lock(&nfsd_notifier_lock);
if (nn->nfsd_serv) {
dprintk("nfsd_inet6addr_event: removed %pI6\n", &ifa->addr);
sin6.sin6_family = AF_INET6;
@@ -486,8 +486,8 @@ static int nfsd_inet6addr_event(struct notifier_block *this,
sin6.sin6_scope_id = ifa->idev->dev->ifindex;
svc_age_temp_xprts_now(nn->nfsd_serv, (struct sockaddr *)&sin6);
}
- atomic_dec(&nn->ntf_refcnt);
- wake_up(&nn->ntf_wq);
+ spin_unlock(&nfsd_notifier_lock);
+
out:
return NOTIFY_DONE;
}
@@ -504,7 +504,6 @@ static void nfsd_last_thread(struct svc_serv *serv, struct net *net)
{
struct nfsd_net *nn = net_generic(net, nfsd_net_id);

- atomic_dec(&nn->ntf_refcnt);
/* check if the notifier still has clients */
if (atomic_dec_return(&nfsd_notifier_refcount) == 0) {
unregister_inetaddr_notifier(&nfsd_inetaddr_notifier);
@@ -512,7 +511,6 @@ static void nfsd_last_thread(struct svc_serv *serv, struct net *net)
unregister_inet6addr_notifier(&nfsd_inet6addr_notifier);
#endif
}
- wait_event(nn->ntf_wq, atomic_read(&nn->ntf_refcnt) == 0);

/*
* write_ports can create the server without actually starting
@@ -624,6 +622,7 @@ int nfsd_create_serv(struct net *net)
{
int error;
struct nfsd_net *nn = net_generic(net, nfsd_net_id);
+ struct svc_serv *serv;

WARN_ON(!mutex_is_locked(&nfsd_mutex));
if (nn->nfsd_serv) {
@@ -633,21 +632,23 @@ int nfsd_create_serv(struct net *net)
if (nfsd_max_blksize == 0)
nfsd_max_blksize = nfsd_get_default_max_blksize();
nfsd_reset_versions(nn);
- nn->nfsd_serv = svc_create_pooled(&nfsd_program, nfsd_max_blksize,
- &nfsd_thread_sv_ops);
- if (nn->nfsd_serv == NULL)
+ serv = svc_create_pooled(&nfsd_program, nfsd_max_blksize,
+ &nfsd_thread_sv_ops);
+ if (serv == NULL)
return -ENOMEM;

- nn->nfsd_serv->sv_maxconn = nn->max_connections;
- error = svc_bind(nn->nfsd_serv, net);
+ serv->sv_maxconn = nn->max_connections;
+ error = svc_bind(serv, net);
if (error < 0) {
/* NOT nfsd_put() as notifiers (see below) haven't
* been set up yet.
*/
- svc_put(nn->nfsd_serv);
- nn->nfsd_serv = NULL;
+ svc_put(serv);
return error;
}
+ spin_lock(&nfsd_notifier_lock);
+ nn->nfsd_serv = serv;
+ spin_unlock(&nfsd_notifier_lock);

set_max_drc();
/* check if the notifier is already set */
@@ -657,7 +658,6 @@ int nfsd_create_serv(struct net *net)
register_inet6addr_notifier(&nfsd_inet6addr_notifier);
#endif
}
- atomic_inc(&nn->ntf_refcnt);
nfsd_reset_boot_verifier(nn);
return 0;
}
@@ -701,7 +701,9 @@ void nfsd_put(struct net *net)
if (kref_put(&nn->nfsd_serv->sv_refcnt, nfsd_noop)) {
svc_shutdown_net(nn->nfsd_serv, net);
svc_destroy(&nn->nfsd_serv->sv_refcnt);
+ spin_lock(&nfsd_notifier_lock);
nn->nfsd_serv = NULL;
+ spin_unlock(&nfsd_notifier_lock);
}
}




2021-11-23 01:31:57

by NeilBrown

Subject: [PATCH 10/19] lockd: introduce nlmsvc_serv

lockd has two globals - nlmsvc_task and nlmsvc_rqst - but mostly it
wants the 'struct svc_serv', and anything else it needs can be reached
through the serv.

This patch is a first step to removing nlmsvc_task and nlmsvc_rqst. It
introduces nlmsvc_serv to store the 'struct svc_serv*'. This is set as
soon as the serv is created, and cleared only when it is destroyed.

Signed-off-by: NeilBrown <[email protected]>
---
fs/lockd/svc.c | 36 ++++++++++++++++++++----------------
1 file changed, 20 insertions(+), 16 deletions(-)

diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c
index a9669b106dbd..83874878f41d 100644
--- a/fs/lockd/svc.c
+++ b/fs/lockd/svc.c
@@ -54,6 +54,7 @@ EXPORT_SYMBOL_GPL(nlmsvc_ops);

static DEFINE_MUTEX(nlmsvc_mutex);
static unsigned int nlmsvc_users;
+static struct svc_serv *nlmsvc_serv;
static struct task_struct *nlmsvc_task;
static struct svc_rqst *nlmsvc_rqst;
unsigned long nlmsvc_timeout;
@@ -306,13 +307,12 @@ static int lockd_inetaddr_event(struct notifier_block *this,
!atomic_inc_not_zero(&nlm_ntf_refcnt))
goto out;

- if (nlmsvc_rqst) {
+ if (nlmsvc_serv) {
dprintk("lockd_inetaddr_event: removed %pI4\n",
&ifa->ifa_local);
sin.sin_family = AF_INET;
sin.sin_addr.s_addr = ifa->ifa_local;
- svc_age_temp_xprts_now(nlmsvc_rqst->rq_server,
- (struct sockaddr *)&sin);
+ svc_age_temp_xprts_now(nlmsvc_serv, (struct sockaddr *)&sin);
}
atomic_dec(&nlm_ntf_refcnt);
wake_up(&nlm_ntf_wq);
@@ -336,14 +336,13 @@ static int lockd_inet6addr_event(struct notifier_block *this,
!atomic_inc_not_zero(&nlm_ntf_refcnt))
goto out;

- if (nlmsvc_rqst) {
+ if (nlmsvc_serv) {
dprintk("lockd_inet6addr_event: removed %pI6\n", &ifa->addr);
sin6.sin6_family = AF_INET6;
sin6.sin6_addr = ifa->addr;
if (ipv6_addr_type(&sin6.sin6_addr) & IPV6_ADDR_LINKLOCAL)
sin6.sin6_scope_id = ifa->idev->dev->ifindex;
- svc_age_temp_xprts_now(nlmsvc_rqst->rq_server,
- (struct sockaddr *)&sin6);
+ svc_age_temp_xprts_now(nlmsvc_serv, (struct sockaddr *)&sin6);
}
atomic_dec(&nlm_ntf_refcnt);
wake_up(&nlm_ntf_wq);
@@ -423,15 +422,17 @@ static const struct svc_serv_ops lockd_sv_ops = {
.svo_enqueue_xprt = svc_xprt_do_enqueue,
};

-static struct svc_serv *lockd_create_svc(void)
+static int lockd_create_svc(void)
{
struct svc_serv *serv;

/*
* Check whether we're already up and running.
*/
- if (nlmsvc_rqst)
- return svc_get(nlmsvc_rqst->rq_server);
+ if (nlmsvc_serv) {
+ svc_get(nlmsvc_serv);
+ return 0;
+ }

/*
* Sanity check: if there's no pid,
@@ -448,14 +449,15 @@ static struct svc_serv *lockd_create_svc(void)
serv = svc_create(&nlmsvc_program, LOCKD_BUFSIZE, &lockd_sv_ops);
if (!serv) {
printk(KERN_WARNING "lockd_up: create service failed\n");
- return ERR_PTR(-ENOMEM);
+ return -ENOMEM;
}
+ nlmsvc_serv = serv;
register_inetaddr_notifier(&lockd_inetaddr_notifier);
#if IS_ENABLED(CONFIG_IPV6)
register_inet6addr_notifier(&lockd_inet6addr_notifier);
#endif
dprintk("lockd_up: service created\n");
- return serv;
+ return 0;
}

/*
@@ -468,11 +470,10 @@ int lockd_up(struct net *net, const struct cred *cred)

mutex_lock(&nlmsvc_mutex);

- serv = lockd_create_svc();
- if (IS_ERR(serv)) {
- error = PTR_ERR(serv);
+ error = lockd_create_svc();
+ if (error)
goto err_create;
- }
+ serv = nlmsvc_serv;

error = lockd_up_net(serv, net, cred);
if (error < 0) {
@@ -487,6 +488,8 @@ int lockd_up(struct net *net, const struct cred *cred)
}
nlmsvc_users++;
err_put:
+ if (nlmsvc_users == 0)
+ nlmsvc_serv = NULL;
svc_put(serv);
err_create:
mutex_unlock(&nlmsvc_mutex);
@@ -501,7 +504,7 @@ void
lockd_down(struct net *net)
{
mutex_lock(&nlmsvc_mutex);
- lockd_down_net(nlmsvc_rqst->rq_server, net);
+ lockd_down_net(nlmsvc_serv, net);
if (nlmsvc_users) {
if (--nlmsvc_users)
goto out;
@@ -519,6 +522,7 @@ lockd_down(struct net *net)
dprintk("lockd_down: service stopped\n");
lockd_svc_exit_thread();
dprintk("lockd_down: service destroyed\n");
+ nlmsvc_serv = NULL;
nlmsvc_task = NULL;
nlmsvc_rqst = NULL;
out:



2021-11-23 01:32:02

by NeilBrown

Subject: [PATCH 11/19] lockd: simplify management of network status notifiers

Now that the network status notifiers use nlmsvc_serv rather than
nlmsvc_rqst, the management can be simplified.

Notifier unregistration synchronises with any pending notifications, so
provided we unregister before nlmsvc_serv is freed, no further interlock
is required.

So we move the unregister call to just before the thread is killed
(which destroys the service) and just before the service is destroyed in
the failure-path of lockd_up().

Then nlm_ntf_refcnt and nlm_ntf_wq can be removed.

Signed-off-by: NeilBrown <[email protected]>
---
fs/lockd/svc.c | 35 +++++++++--------------------------
1 file changed, 9 insertions(+), 26 deletions(-)

diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c
index 83874878f41d..20cebb191350 100644
--- a/fs/lockd/svc.c
+++ b/fs/lockd/svc.c
@@ -59,9 +59,6 @@ static struct task_struct *nlmsvc_task;
static struct svc_rqst *nlmsvc_rqst;
unsigned long nlmsvc_timeout;

-static atomic_t nlm_ntf_refcnt = ATOMIC_INIT(0);
-static DECLARE_WAIT_QUEUE_HEAD(nlm_ntf_wq);
-
unsigned int lockd_net_id;

/*
@@ -303,8 +300,7 @@ static int lockd_inetaddr_event(struct notifier_block *this,
struct in_ifaddr *ifa = (struct in_ifaddr *)ptr;
struct sockaddr_in sin;

- if ((event != NETDEV_DOWN) ||
- !atomic_inc_not_zero(&nlm_ntf_refcnt))
+ if (event != NETDEV_DOWN)
goto out;

if (nlmsvc_serv) {
@@ -314,8 +310,6 @@ static int lockd_inetaddr_event(struct notifier_block *this,
sin.sin_addr.s_addr = ifa->ifa_local;
svc_age_temp_xprts_now(nlmsvc_serv, (struct sockaddr *)&sin);
}
- atomic_dec(&nlm_ntf_refcnt);
- wake_up(&nlm_ntf_wq);

out:
return NOTIFY_DONE;
@@ -332,8 +326,7 @@ static int lockd_inet6addr_event(struct notifier_block *this,
struct inet6_ifaddr *ifa = (struct inet6_ifaddr *)ptr;
struct sockaddr_in6 sin6;

- if ((event != NETDEV_DOWN) ||
- !atomic_inc_not_zero(&nlm_ntf_refcnt))
+ if (event != NETDEV_DOWN)
goto out;

if (nlmsvc_serv) {
@@ -344,8 +337,6 @@ static int lockd_inet6addr_event(struct notifier_block *this,
sin6.sin6_scope_id = ifa->idev->dev->ifindex;
svc_age_temp_xprts_now(nlmsvc_serv, (struct sockaddr *)&sin6);
}
- atomic_dec(&nlm_ntf_refcnt);
- wake_up(&nlm_ntf_wq);

out:
return NOTIFY_DONE;
@@ -362,14 +353,6 @@ static void lockd_unregister_notifiers(void)
#if IS_ENABLED(CONFIG_IPV6)
unregister_inet6addr_notifier(&lockd_inet6addr_notifier);
#endif
- wait_event(nlm_ntf_wq, atomic_read(&nlm_ntf_refcnt) == 0);
-}
-
-static void lockd_svc_exit_thread(void)
-{
- atomic_dec(&nlm_ntf_refcnt);
- lockd_unregister_notifiers();
- svc_exit_thread(nlmsvc_rqst);
}

static int lockd_start_svc(struct svc_serv *serv)
@@ -388,11 +371,9 @@ static int lockd_start_svc(struct svc_serv *serv)
printk(KERN_WARNING
"lockd_up: svc_rqst allocation failed, error=%d\n",
error);
- lockd_unregister_notifiers();
goto out_rqst;
}

- atomic_inc(&nlm_ntf_refcnt);
svc_sock_update_bufs(serv);
serv->sv_maxconn = nlm_max_connections;

@@ -410,7 +391,7 @@ static int lockd_start_svc(struct svc_serv *serv)
return 0;

out_task:
- lockd_svc_exit_thread();
+ svc_exit_thread(nlmsvc_rqst);
nlmsvc_task = NULL;
out_rqst:
nlmsvc_rqst = NULL;
@@ -477,7 +458,6 @@ int lockd_up(struct net *net, const struct cred *cred)

error = lockd_up_net(serv, net, cred);
if (error < 0) {
- lockd_unregister_notifiers();
goto err_put;
}

@@ -488,8 +468,10 @@ int lockd_up(struct net *net, const struct cred *cred)
}
nlmsvc_users++;
err_put:
- if (nlmsvc_users == 0)
+ if (nlmsvc_users == 0) {
+ lockd_unregister_notifiers();
nlmsvc_serv = NULL;
+ }
svc_put(serv);
err_create:
mutex_unlock(&nlmsvc_mutex);
@@ -518,13 +500,14 @@ lockd_down(struct net *net)
printk(KERN_ERR "lockd_down: no lockd running.\n");
BUG();
}
+ lockd_unregister_notifiers();
kthread_stop(nlmsvc_task);
dprintk("lockd_down: service stopped\n");
- lockd_svc_exit_thread();
+ svc_exit_thread(nlmsvc_rqst);
+ nlmsvc_rqst = NULL;
dprintk("lockd_down: service destroyed\n");
nlmsvc_serv = NULL;
nlmsvc_task = NULL;
- nlmsvc_rqst = NULL;
out:
mutex_unlock(&nlmsvc_mutex);
}



2021-11-23 01:32:08

by NeilBrown

Subject: [PATCH 12/19] lockd: move lockd_start_svc() call into lockd_create_svc()

lockd_start_svc() only needs to be called once, just after the svc is
created. If the start fails, the svc is discarded too.

It thus makes sense to call lockd_start_svc() from lockd_create_svc().
This allows us to remove the test against nlmsvc_rqst at the start of
lockd_start_svc() - it must always be NULL.

lockd_up() only held an extra reference on the svc until a thread was
created - then it dropped it. The thread - and thus the extra reference
- will remain until kthread_stop() is called.
Now that the thread is created in lockd_create_svc(), the extra
reference can be dropped there. So the 'serv' variable is no longer
needed in lockd_up().

Signed-off-by: NeilBrown <[email protected]>
---
fs/lockd/svc.c | 22 ++++++++++------------
1 file changed, 10 insertions(+), 12 deletions(-)

diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c
index 20cebb191350..91e7c839841e 100644
--- a/fs/lockd/svc.c
+++ b/fs/lockd/svc.c
@@ -359,9 +359,6 @@ static int lockd_start_svc(struct svc_serv *serv)
{
int error;

- if (nlmsvc_rqst)
- return 0;
-
/*
* Create the kernel thread and wait for it to start.
*/
@@ -406,6 +403,7 @@ static const struct svc_serv_ops lockd_sv_ops = {
static int lockd_create_svc(void)
{
struct svc_serv *serv;
+ int error;

/*
* Check whether we're already up and running.
@@ -432,6 +430,13 @@ static int lockd_create_svc(void)
printk(KERN_WARNING "lockd_up: create service failed\n");
return -ENOMEM;
}
+
+ error = lockd_start_svc(serv);
+ /* The thread now holds the only reference */
+ svc_put(serv);
+ if (error < 0)
+ return error;
+
nlmsvc_serv = serv;
register_inetaddr_notifier(&lockd_inetaddr_notifier);
#if IS_ENABLED(CONFIG_IPV6)
@@ -446,7 +451,6 @@ static int lockd_create_svc(void)
*/
int lockd_up(struct net *net, const struct cred *cred)
{
- struct svc_serv *serv;
int error;

mutex_lock(&nlmsvc_mutex);
@@ -454,25 +458,19 @@ int lockd_up(struct net *net, const struct cred *cred)
error = lockd_create_svc();
if (error)
goto err_create;
- serv = nlmsvc_serv;

- error = lockd_up_net(serv, net, cred);
+ error = lockd_up_net(nlmsvc_serv, net, cred);
if (error < 0) {
goto err_put;
}

- error = lockd_start_svc(serv);
- if (error < 0) {
- lockd_down_net(serv, net);
- goto err_put;
- }
nlmsvc_users++;
err_put:
if (nlmsvc_users == 0) {
lockd_unregister_notifiers();
+ kthread_stop(nlmsvc_task);
nlmsvc_serv = NULL;
}
- svc_put(serv);
err_create:
mutex_unlock(&nlmsvc_mutex);
return error;



2021-11-23 01:32:14

by NeilBrown

Subject: [PATCH 13/19] lockd: move svc_exit_thread() into the thread

The normal place to call svc_exit_thread() is from the thread itself
just before it exits.
Do this for lockd.

This means that nlmsvc_rqst is not used outside of lockd_start_svc(),
so it can be made local to that function, and renamed to 'rqst'.

Signed-off-by: NeilBrown <[email protected]>
---
fs/lockd/svc.c | 23 ++++++++++++-----------
1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c
index 91e7c839841e..9aa499a76159 100644
--- a/fs/lockd/svc.c
+++ b/fs/lockd/svc.c
@@ -56,7 +56,6 @@ static DEFINE_MUTEX(nlmsvc_mutex);
static unsigned int nlmsvc_users;
static struct svc_serv *nlmsvc_serv;
static struct task_struct *nlmsvc_task;
-static struct svc_rqst *nlmsvc_rqst;
unsigned long nlmsvc_timeout;

unsigned int lockd_net_id;
@@ -182,6 +181,11 @@ lockd(void *vrqstp)
nlm_shutdown_hosts();
cancel_delayed_work_sync(&ln->grace_period_end);
locks_end_grace(&ln->lockd_manager);
+
+ dprintk("lockd_down: service stopped\n");
+
+ svc_exit_thread(rqstp);
+
return 0;
}

@@ -358,13 +362,14 @@ static void lockd_unregister_notifiers(void)
static int lockd_start_svc(struct svc_serv *serv)
{
int error;
+ struct svc_rqst *rqst;

/*
* Create the kernel thread and wait for it to start.
*/
- nlmsvc_rqst = svc_prepare_thread(serv, &serv->sv_pools[0], NUMA_NO_NODE);
- if (IS_ERR(nlmsvc_rqst)) {
- error = PTR_ERR(nlmsvc_rqst);
+ rqst = svc_prepare_thread(serv, &serv->sv_pools[0], NUMA_NO_NODE);
+ if (IS_ERR(rqst)) {
+ error = PTR_ERR(rqst);
printk(KERN_WARNING
"lockd_up: svc_rqst allocation failed, error=%d\n",
error);
@@ -374,24 +379,23 @@ static int lockd_start_svc(struct svc_serv *serv)
svc_sock_update_bufs(serv);
serv->sv_maxconn = nlm_max_connections;

- nlmsvc_task = kthread_create(lockd, nlmsvc_rqst, "%s", serv->sv_name);
+ nlmsvc_task = kthread_create(lockd, rqst, "%s", serv->sv_name);
if (IS_ERR(nlmsvc_task)) {
error = PTR_ERR(nlmsvc_task);
printk(KERN_WARNING
"lockd_up: kthread_run failed, error=%d\n", error);
goto out_task;
}
- nlmsvc_rqst->rq_task = nlmsvc_task;
+ rqst->rq_task = nlmsvc_task;
wake_up_process(nlmsvc_task);

dprintk("lockd_up: service started\n");
return 0;

out_task:
- svc_exit_thread(nlmsvc_rqst);
+ svc_exit_thread(rqst);
nlmsvc_task = NULL;
out_rqst:
- nlmsvc_rqst = NULL;
return error;
}

@@ -500,9 +504,6 @@ lockd_down(struct net *net)
}
lockd_unregister_notifiers();
kthread_stop(nlmsvc_task);
- dprintk("lockd_down: service stopped\n");
- svc_exit_thread(nlmsvc_rqst);
- nlmsvc_rqst = NULL;
dprintk("lockd_down: service destroyed\n");
nlmsvc_serv = NULL;
nlmsvc_task = NULL;



2021-11-23 01:32:21

by NeilBrown

Subject: [PATCH 14/19] lockd: introduce lockd_put()

There is some cleanup that is duplicated in lockd_down() and the failure
path of lockd_up().
Factor these out into a new lockd_put() and call it from both places.

lockd_put() does *not* take the mutex - that must be held by the caller.
It decrements nlmsvc_users and if that reaches zero, it cleans up.

Signed-off-by: NeilBrown <[email protected]>
---
fs/lockd/svc.c | 64 ++++++++++++++++++++++++--------------------------------
1 file changed, 27 insertions(+), 37 deletions(-)

diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c
index 9aa499a76159..7f12c280fd30 100644
--- a/fs/lockd/svc.c
+++ b/fs/lockd/svc.c
@@ -351,14 +351,6 @@ static struct notifier_block lockd_inet6addr_notifier = {
};
#endif

-static void lockd_unregister_notifiers(void)
-{
- unregister_inetaddr_notifier(&lockd_inetaddr_notifier);
-#if IS_ENABLED(CONFIG_IPV6)
- unregister_inet6addr_notifier(&lockd_inet6addr_notifier);
-#endif
-}
-
static int lockd_start_svc(struct svc_serv *serv)
{
int error;
@@ -450,6 +442,27 @@ static int lockd_create_svc(void)
return 0;
}

+static void lockd_put(void)
+{
+ if (WARN(nlmsvc_users <= 0, "lockd_down: no users!\n"))
+ return;
+ if (--nlmsvc_users)
+ return;
+
+ unregister_inetaddr_notifier(&lockd_inetaddr_notifier);
+#if IS_ENABLED(CONFIG_IPV6)
+ unregister_inet6addr_notifier(&lockd_inet6addr_notifier);
+#endif
+
+ if (nlmsvc_task) {
+ kthread_stop(nlmsvc_task);
+ dprintk("lockd_down: service stopped\n");
+ nlmsvc_task = NULL;
+ }
+ nlmsvc_serv = NULL;
+ dprintk("lockd_down: service destroyed\n");
+}
+
/*
* Bring up the lockd process if it's not already up.
*/
@@ -461,21 +474,16 @@ int lockd_up(struct net *net, const struct cred *cred)

error = lockd_create_svc();
if (error)
- goto err_create;
+ goto err;
+ nlmsvc_users++;

error = lockd_up_net(nlmsvc_serv, net, cred);
if (error < 0) {
- goto err_put;
+ lockd_put();
+ goto err;
}

- nlmsvc_users++;
-err_put:
- if (nlmsvc_users == 0) {
- lockd_unregister_notifiers();
- kthread_stop(nlmsvc_task);
- nlmsvc_serv = NULL;
- }
-err_create:
+err:
mutex_unlock(&nlmsvc_mutex);
return error;
}
@@ -489,25 +497,7 @@ lockd_down(struct net *net)
{
mutex_lock(&nlmsvc_mutex);
lockd_down_net(nlmsvc_serv, net);
- if (nlmsvc_users) {
- if (--nlmsvc_users)
- goto out;
- } else {
- printk(KERN_ERR "lockd_down: no users! task=%p\n",
- nlmsvc_task);
- BUG();
- }
-
- if (!nlmsvc_task) {
- printk(KERN_ERR "lockd_down: no lockd running.\n");
- BUG();
- }
- lockd_unregister_notifiers();
- kthread_stop(nlmsvc_task);
- dprintk("lockd_down: service destroyed\n");
- nlmsvc_serv = NULL;
- nlmsvc_task = NULL;
-out:
+ lockd_put();
mutex_unlock(&nlmsvc_mutex);
}
EXPORT_SYMBOL_GPL(lockd_down);



2021-11-23 01:32:25

by NeilBrown

Subject: [PATCH 15/19] lockd: rename lockd_create_svc() to lockd_get()

lockd_create_svc() already does an svc_get() if the service already
exists, so it is more like a "get" than a "create".

So:
- Move the increment of nlmsvc_users into the function as well
- rename to lockd_get().

It is now the inverse of lockd_put().

Signed-off-by: NeilBrown <[email protected]>
---
fs/lockd/svc.c | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c
index 7f12c280fd30..1a7c11118b32 100644
--- a/fs/lockd/svc.c
+++ b/fs/lockd/svc.c
@@ -396,16 +396,14 @@ static const struct svc_serv_ops lockd_sv_ops = {
.svo_enqueue_xprt = svc_xprt_do_enqueue,
};

-static int lockd_create_svc(void)
+static int lockd_get(void)
{
struct svc_serv *serv;
int error;

- /*
- * Check whether we're already up and running.
- */
if (nlmsvc_serv) {
svc_get(nlmsvc_serv);
+ nlmsvc_users++;
return 0;
}

@@ -439,6 +437,7 @@ static int lockd_create_svc(void)
register_inet6addr_notifier(&lockd_inet6addr_notifier);
#endif
dprintk("lockd_up: service created\n");
+ nlmsvc_users++;
return 0;
}

@@ -472,10 +471,9 @@ int lockd_up(struct net *net, const struct cred *cred)

mutex_lock(&nlmsvc_mutex);

- error = lockd_create_svc();
+ error = lockd_get();
if (error)
goto err;
- nlmsvc_users++;

error = lockd_up_net(nlmsvc_serv, net, cred);
if (error < 0) {



2021-11-23 01:32:32

by NeilBrown

Subject: [PATCH 16/19] SUNRPC: move the pool_map definitions (back) into svc.c

These definitions are not used outside of svc.c, and there is no
evidence that they ever have been. So move them into svc.c
and make the declarations 'static'.

Signed-off-by: NeilBrown <[email protected]>
---
include/linux/sunrpc/svc.h | 25 -------------------------
net/sunrpc/svc.c | 31 +++++++++++++++++++++++++------
2 files changed, 25 insertions(+), 31 deletions(-)

diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
index 0b38c6eaf985..d69e6108cb83 100644
--- a/include/linux/sunrpc/svc.h
+++ b/include/linux/sunrpc/svc.h
@@ -494,29 +494,6 @@ struct svc_procedure {
const char * pc_name; /* for display */
};

-/*
- * Mode for mapping cpus to pools.
- */
-enum {
- SVC_POOL_AUTO = -1, /* choose one of the others */
- SVC_POOL_GLOBAL, /* no mapping, just a single global pool
- * (legacy & UP mode) */
- SVC_POOL_PERCPU, /* one pool per cpu */
- SVC_POOL_PERNODE /* one pool per numa node */
-};
-
-struct svc_pool_map {
- int count; /* How many svc_servs use us */
- int mode; /* Note: int not enum to avoid
- * warnings about "enumeration value
- * not handled in switch" */
- unsigned int npools;
- unsigned int *pool_to; /* maps pool id to cpu or node */
- unsigned int *to_pool; /* maps cpu or node to pool id */
-};
-
-extern struct svc_pool_map svc_pool_map;
-
/*
* Function prototypes.
*/
@@ -533,8 +510,6 @@ void svc_rqst_replace_page(struct svc_rqst *rqstp,
struct page *page);
void svc_rqst_free(struct svc_rqst *);
void svc_exit_thread(struct svc_rqst *);
-unsigned int svc_pool_map_get(void);
-void svc_pool_map_put(void);
struct svc_serv * svc_create_pooled(struct svc_program *, unsigned int,
const struct svc_serv_ops *);
int svc_set_num_threads(struct svc_serv *, struct svc_pool *, int);
diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
index 5513f8c9a8d6..f0dd9ef7e0cd 100644
--- a/net/sunrpc/svc.c
+++ b/net/sunrpc/svc.c
@@ -41,14 +41,35 @@ static void svc_unregister(const struct svc_serv *serv, struct net *net);

#define SVC_POOL_DEFAULT SVC_POOL_GLOBAL

+/*
+ * Mode for mapping cpus to pools.
+ */
+enum {
+ SVC_POOL_AUTO = -1, /* choose one of the others */
+ SVC_POOL_GLOBAL, /* no mapping, just a single global pool
+ * (legacy & UP mode) */
+ SVC_POOL_PERCPU, /* one pool per cpu */
+ SVC_POOL_PERNODE /* one pool per numa node */
+};
+
/*
* Structure for mapping cpus to pools and vice versa.
* Setup once during sunrpc initialisation.
*/
-struct svc_pool_map svc_pool_map = {
+
+struct svc_pool_map {
+ int count; /* How many svc_servs use us */
+ int mode; /* Note: int not enum to avoid
+ * warnings about "enumeration value
+ * not handled in switch" */
+ unsigned int npools;
+ unsigned int *pool_to; /* maps pool id to cpu or node */
+ unsigned int *to_pool; /* maps cpu or node to pool id */
+};
+
+static struct svc_pool_map svc_pool_map = {
.mode = SVC_POOL_DEFAULT
};
-EXPORT_SYMBOL_GPL(svc_pool_map);

static DEFINE_MUTEX(svc_pool_map_mutex);/* protects svc_pool_map.count only */

@@ -222,7 +243,7 @@ svc_pool_map_init_pernode(struct svc_pool_map *m)
* vice versa). Initialise the map if we're the first user.
* Returns the number of pools.
*/
-unsigned int
+static unsigned int
svc_pool_map_get(void)
{
struct svc_pool_map *m = &svc_pool_map;
@@ -257,7 +278,6 @@ svc_pool_map_get(void)
mutex_unlock(&svc_pool_map_mutex);
return m->npools;
}
-EXPORT_SYMBOL_GPL(svc_pool_map_get);

/*
* Drop a reference to the global map of cpus to pools.
@@ -266,7 +286,7 @@ EXPORT_SYMBOL_GPL(svc_pool_map_get);
* mode using the pool_mode module option without
* rebooting or re-loading sunrpc.ko.
*/
-void
+static void
svc_pool_map_put(void)
{
struct svc_pool_map *m = &svc_pool_map;
@@ -283,7 +303,6 @@ svc_pool_map_put(void)

mutex_unlock(&svc_pool_map_mutex);
}
-EXPORT_SYMBOL_GPL(svc_pool_map_put);

static int svc_pool_map_get_node(unsigned int pidx)
{



2021-11-23 01:32:39

by NeilBrown

Subject: [PATCH 17/19] SUNRPC: always treat sv_nrpools==1 as "not pooled"

Currently 'pooled' services hold a reference on the pool_map, and
'unpooled' services do not.
svc_destroy() uses the presence of ->svo_function (via
svc_serv_is_pooled()) to determine if the reference should be dropped.
There is no direct correlation between being pooled and the use of
svo_function, though in practice, lockd is the only non-pooled service,
and the only one not to use ->svo_function.

This is untidy and would cause problems if we changed lockd to use
svc_set_num_threads(), which requires the use of ->svo_function.

So change the test for "is the service pooled" to "is sv_nrpools > 1".

This means that when svc_pool_map_get() returns 1, it must NOT take a
reference to the pool map.

We discard svc_serv_is_pooled(), and test sv_nrpools directly.

Signed-off-by: NeilBrown <[email protected]>
---
net/sunrpc/svc.c | 54 +++++++++++++++++++++++++++++-------------------------
1 file changed, 29 insertions(+), 25 deletions(-)

diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
index f0dd9ef7e0cd..5fbe7f55289e 100644
--- a/net/sunrpc/svc.c
+++ b/net/sunrpc/svc.c
@@ -37,8 +37,6 @@

static void svc_unregister(const struct svc_serv *serv, struct net *net);

-#define svc_serv_is_pooled(serv) ((serv)->sv_ops->svo_function)
-
#define SVC_POOL_DEFAULT SVC_POOL_GLOBAL

/*
@@ -240,8 +238,10 @@ svc_pool_map_init_pernode(struct svc_pool_map *m)

/*
* Add a reference to the global map of cpus to pools (and
- * vice versa). Initialise the map if we're the first user.
- * Returns the number of pools.
+ * vice versa) if pools are in use.
+ * Initialise the map if we're the first user.
+ * Returns the number of pools. If this is '1', no reference
+ * was taken.
*/
static unsigned int
svc_pool_map_get(void)
@@ -253,6 +253,7 @@ svc_pool_map_get(void)

if (m->count++) {
mutex_unlock(&svc_pool_map_mutex);
+ WARN_ON_ONCE(m->npools <= 1);
return m->npools;
}

@@ -268,29 +269,36 @@ svc_pool_map_get(void)
break;
}

- if (npools < 0) {
+ if (npools <= 0) {
/* default, or memory allocation failure */
npools = 1;
m->mode = SVC_POOL_GLOBAL;
}
m->npools = npools;

+ if (npools == 1)
+ /* service is unpooled, so doesn't hold a reference */
+ m->count--;
+
mutex_unlock(&svc_pool_map_mutex);
- return m->npools;
+ return npools;
}

/*
- * Drop a reference to the global map of cpus to pools.
+ * Drop a reference to the global map of cpus to pools, if
+ * pools were in use, i.e. if npools > 1.
* When the last reference is dropped, the map data is
* freed; this allows the sysadmin to change the pool
* mode using the pool_mode module option without
* rebooting or re-loading sunrpc.ko.
*/
static void
-svc_pool_map_put(void)
+svc_pool_map_put(int npools)
{
struct svc_pool_map *m = &svc_pool_map;

+ if (npools <= 1)
+ return;
mutex_lock(&svc_pool_map_mutex);

if (!--m->count) {
@@ -359,21 +367,18 @@ svc_pool_for_cpu(struct svc_serv *serv, int cpu)
struct svc_pool_map *m = &svc_pool_map;
unsigned int pidx = 0;

- /*
- * An uninitialised map happens in a pure client when
- * lockd is brought up, so silently treat it the
- * same as SVC_POOL_GLOBAL.
- */
- if (svc_serv_is_pooled(serv)) {
- switch (m->mode) {
- case SVC_POOL_PERCPU:
- pidx = m->to_pool[cpu];
- break;
- case SVC_POOL_PERNODE:
- pidx = m->to_pool[cpu_to_node(cpu)];
- break;
- }
+ if (serv->sv_nrpools <= 1)
+ return serv->sv_pools;
+
+ switch (m->mode) {
+ case SVC_POOL_PERCPU:
+ pidx = m->to_pool[cpu];
+ break;
+ case SVC_POOL_PERNODE:
+ pidx = m->to_pool[cpu_to_node(cpu)];
+ break;
}
+
return &serv->sv_pools[pidx % serv->sv_nrpools];
}

@@ -526,7 +531,7 @@ svc_create_pooled(struct svc_program *prog, unsigned int bufsize,
goto out_err;
return serv;
out_err:
- svc_pool_map_put();
+ svc_pool_map_put(npools);
return NULL;
}
EXPORT_SYMBOL_GPL(svc_create_pooled);
@@ -561,8 +566,7 @@ svc_destroy(struct kref *ref)

cache_clean_deferred(serv);

- if (svc_serv_is_pooled(serv))
- svc_pool_map_put();
+ svc_pool_map_put(serv->sv_nrpools);

kfree(serv->sv_pools);
kfree(serv);



2021-11-23 01:32:45

by NeilBrown

[permalink] [raw]
Subject: [PATCH 18/19] lockd: use svc_set_num_threads() for thread start and stop

svc_set_num_threads() does everything that lockd_start_svc() does, except
set sv_maxconn. It also (when passed 0) finds the threads and
stops them with kthread_stop().

So move the setting of sv_maxconn, and use svc_set_num_threads().

We now don't need nlmsvc_task.

Also set svo_module - just for consistency.

svc_prepare_thread is now only used where it is defined, so it can be
made static.

Signed-off-by: NeilBrown <[email protected]>
---
fs/lockd/svc.c | 56 ++++++--------------------------------------
include/linux/sunrpc/svc.h | 2 --
net/sunrpc/svc.c | 3 +-
3 files changed, 8 insertions(+), 53 deletions(-)

diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c
index 1a7c11118b32..93f5a4f262f9 100644
--- a/fs/lockd/svc.c
+++ b/fs/lockd/svc.c
@@ -55,7 +55,6 @@ EXPORT_SYMBOL_GPL(nlmsvc_ops);
static DEFINE_MUTEX(nlmsvc_mutex);
static unsigned int nlmsvc_users;
static struct svc_serv *nlmsvc_serv;
-static struct task_struct *nlmsvc_task;
unsigned long nlmsvc_timeout;

unsigned int lockd_net_id;
@@ -292,8 +291,8 @@ static void lockd_down_net(struct svc_serv *serv, struct net *net)
__func__, net->ns.inum);
}
} else {
- pr_err("%s: no users! task=%p, net=%x\n",
- __func__, nlmsvc_task, net->ns.inum);
+ pr_err("%s: no users! net=%x\n",
+ __func__, net->ns.inum);
BUG();
}
}
@@ -351,49 +350,11 @@ static struct notifier_block lockd_inet6addr_notifier = {
};
#endif

-static int lockd_start_svc(struct svc_serv *serv)
-{
- int error;
- struct svc_rqst *rqst;
-
- /*
- * Create the kernel thread and wait for it to start.
- */
- rqst = svc_prepare_thread(serv, &serv->sv_pools[0], NUMA_NO_NODE);
- if (IS_ERR(rqst)) {
- error = PTR_ERR(rqst);
- printk(KERN_WARNING
- "lockd_up: svc_rqst allocation failed, error=%d\n",
- error);
- goto out_rqst;
- }
-
- svc_sock_update_bufs(serv);
- serv->sv_maxconn = nlm_max_connections;
-
- nlmsvc_task = kthread_create(lockd, rqst, "%s", serv->sv_name);
- if (IS_ERR(nlmsvc_task)) {
- error = PTR_ERR(nlmsvc_task);
- printk(KERN_WARNING
- "lockd_up: kthread_run failed, error=%d\n", error);
- goto out_task;
- }
- rqst->rq_task = nlmsvc_task;
- wake_up_process(nlmsvc_task);
-
- dprintk("lockd_up: service started\n");
- return 0;
-
-out_task:
- svc_exit_thread(rqst);
- nlmsvc_task = NULL;
-out_rqst:
- return error;
-}
-
static const struct svc_serv_ops lockd_sv_ops = {
.svo_shutdown = svc_rpcb_cleanup,
+ .svo_function = lockd,
.svo_enqueue_xprt = svc_xprt_do_enqueue,
+ .svo_module = THIS_MODULE,
};

static int lockd_get(void)
@@ -425,7 +386,8 @@ static int lockd_get(void)
return -ENOMEM;
}

- error = lockd_start_svc(serv);
+ serv->sv_maxconn = nlm_max_connections;
+ error = svc_set_num_threads(serv, NULL, 1);
/* The thread now holds the only reference */
svc_put(serv);
if (error < 0)
@@ -453,11 +415,7 @@ static void lockd_put(void)
unregister_inet6addr_notifier(&lockd_inet6addr_notifier);
#endif

- if (nlmsvc_task) {
- kthread_stop(nlmsvc_task);
- dprintk("lockd_down: service stopped\n");
- nlmsvc_task = NULL;
- }
+ svc_set_num_threads(nlmsvc_serv, NULL, 0);
nlmsvc_serv = NULL;
dprintk("lockd_down: service destroyed\n");
}
diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
index d69e6108cb83..313dd3d1ef16 100644
--- a/include/linux/sunrpc/svc.h
+++ b/include/linux/sunrpc/svc.h
@@ -504,8 +504,6 @@ struct svc_serv *svc_create(struct svc_program *, unsigned int,
const struct svc_serv_ops *);
struct svc_rqst *svc_rqst_alloc(struct svc_serv *serv,
struct svc_pool *pool, int node);
-struct svc_rqst *svc_prepare_thread(struct svc_serv *serv,
- struct svc_pool *pool, int node);
void svc_rqst_replace_page(struct svc_rqst *rqstp,
struct page *page);
void svc_rqst_free(struct svc_rqst *);
diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
index 5fbe7f55289e..2aabec2b4bec 100644
--- a/net/sunrpc/svc.c
+++ b/net/sunrpc/svc.c
@@ -652,7 +652,7 @@ svc_rqst_alloc(struct svc_serv *serv, struct svc_pool *pool, int node)
}
EXPORT_SYMBOL_GPL(svc_rqst_alloc);

-struct svc_rqst *
+static struct svc_rqst *
svc_prepare_thread(struct svc_serv *serv, struct svc_pool *pool, int node)
{
struct svc_rqst *rqstp;
@@ -672,7 +672,6 @@ svc_prepare_thread(struct svc_serv *serv, struct svc_pool *pool, int node)
spin_unlock_bh(&pool->sp_lock);
return rqstp;
}
-EXPORT_SYMBOL_GPL(svc_prepare_thread);

/*
* Choose a pool in which to create a new thread, for svc_set_num_threads



2021-11-23 01:32:50

by NeilBrown

Subject: [PATCH 19/19] NFS: switch the callback service back to non-pooled.

Now that thread management is consistent there is no need for
nfs-callback to use svc_create_pooled() as introduced in Commit
df807fffaabd ("NFSv4.x/callback: Create the callback service through
svc_create_pooled"). So switch back to svc_create().

If service pools were configured, but the number of threads was left at
'1', nfs callback may not work reliably when svc_create_pooled() is used.

Signed-off-by: NeilBrown <[email protected]>
---
fs/nfs/callback.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/nfs/callback.c b/fs/nfs/callback.c
index 6cdc9d18a7dd..c4994c1d4e36 100644
--- a/fs/nfs/callback.c
+++ b/fs/nfs/callback.c
@@ -286,7 +286,7 @@ static struct svc_serv *nfs_callback_create_svc(int minorversion)
printk(KERN_WARNING "nfs_callback_create_svc: no kthread, %d users??\n",
cb_info->users);

- serv = svc_create_pooled(&nfs4_callback_program, NFS4_CALLBACK_BUFSIZE, sv_ops);
+ serv = svc_create(&nfs4_callback_program, NFS4_CALLBACK_BUFSIZE, sv_ops);
if (!serv) {
printk(KERN_ERR "nfs_callback_create_svc: create service failed\n");
return ERR_PTR(-ENOMEM);



2021-11-23 14:49:58

by J. Bruce Fields

Subject: Re: [PATCH 00/19 v2] SUNRPC: clean up server thread management

On Tue, Nov 23, 2021 at 12:29:35PM +1100, NeilBrown wrote:
> This is a revision of my series for cleaning up server thread
> management.

For what it's worth, this version now passes my usual regression tests.

--b.

> Currently lockd, nfsd, and nfs-callback all manage threads slightly
> differently. This series unifies them.
>
> Changes since first series include:
> - minor bug fixes
> - kernel-doc comments for new functions
> - split first patch into 3, and make the bugfix a separate patch
> - fix management of pool_maps so lockd can use svc_set_num_threads
> safely
> - switch nfs-callback to not request a 'pooled' service.
>
> NeilBrown
>
>
> ---
>
> NeilBrown (19):
> SUNRPC/NFSD: clean up get/put functions.
> NFSD: handle error better in write_ports_addfd()
> SUNRPC: stop using ->sv_nrthreads as a refcount
> nfsd: make nfsd_stats.th_cnt atomic_t
> SUNRPC: use sv_lock to protect updates to sv_nrthreads.
> NFSD: narrow nfsd_mutex protection in nfsd thread
> NFSD: Make it possible to use svc_set_num_threads_sync
> SUNRPC: discard svo_setup and rename svc_set_num_threads_sync()
> NFSD: simplify locking for network notifier.
> lockd: introduce nlmsvc_serv
> lockd: simplify management of network status notifiers
> lockd: move lockd_start_svc() call into lockd_create_svc()
> lockd: move svc_exit_thread() into the thread
> lockd: introduce lockd_put()
> lockd: rename lockd_create_svc() to lockd_get()
> SUNRPC: move the pool_map definitions (back) into svc.c
> SUNRPC: always treat sv_nrpools==1 as "not pooled"
> lockd: use svc_set_num_threads() for thread start and stop
> NFS: switch the callback service back to non-pooled.
>
>
> fs/lockd/svc.c | 194 ++++++++++++-------------------------
> fs/nfs/callback.c | 12 +--
> fs/nfsd/netns.h | 13 +--
> fs/nfsd/nfsctl.c | 24 ++---
> fs/nfsd/nfssvc.c | 139 +++++++++++++-------------
> fs/nfsd/stats.c | 2 +-
> fs/nfsd/stats.h | 4 +-
> include/linux/sunrpc/svc.h | 58 ++++-------
> net/sunrpc/svc.c | 166 ++++++++++++++-----------------
> 9 files changed, 248 insertions(+), 364 deletions(-)
>
> --
> Signature

2021-11-23 16:44:23

by Chuck Lever III

Subject: Re: [PATCH 05/19] SUNRPC: use sv_lock to protect updates to sv_nrthreads.



> On Nov 22, 2021, at 8:29 PM, NeilBrown <[email protected]> wrote:
>
> Using sv_lock means we don't need to hold the service mutex over these
> updates.
>
> In particular, svc_exit_thread() no longer requires synchronisation, so
> threads can exit asynchronously.
>
> Signed-off-by: NeilBrown <[email protected]>
> ---
> fs/nfsd/nfssvc.c | 5 ++---
> net/sunrpc/svc.c | 9 +++++++--
> 2 files changed, 9 insertions(+), 5 deletions(-)
>
> diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
> index fc5899502a83..e9c9fa820b17 100644
> --- a/fs/nfsd/nfssvc.c
> +++ b/fs/nfsd/nfssvc.c
> @@ -55,9 +55,8 @@ static __be32 nfsd_init_request(struct svc_rqst *,
> struct svc_process_info *);
>
> /*
> - * nfsd_mutex protects nn->nfsd_serv -- both the pointer itself and the members
> - * of the svc_serv struct. In particular, ->sv_nrthreads but also to some
> - * extent ->sv_temp_socks and ->sv_permsocks.
> + * nfsd_mutex protects nn->nfsd_serv -- both the pointer itself and some members
> + * of the svc_serv struct such as ->sv_temp_socks and ->sv_permsocks.
> *
> * If (out side the lock) nn->nfsd_serv is non-NULL, then it must point to a
> * properly initialised 'struct svc_serv' with ->sv_nrthreads > 0 (unless
> diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
> index acddc6e12e9e..2b2042234e4b 100644
> --- a/net/sunrpc/svc.c
> +++ b/net/sunrpc/svc.c
> @@ -523,7 +523,7 @@ EXPORT_SYMBOL_GPL(svc_shutdown_net);
>
> /*
> * Destroy an RPC service. Should be called with appropriate locking to
> - * protect the sv_nrthreads, sv_permsocks and sv_tempsocks.
> + * protect sv_permsocks and sv_tempsocks.
> */
> void
> svc_destroy(struct kref *ref)
> @@ -639,7 +639,10 @@ svc_prepare_thread(struct svc_serv *serv, struct svc_pool *pool, int node)
> return ERR_PTR(-ENOMEM);
>
> svc_get(serv);
> - serv->sv_nrthreads++;
> + spin_lock_bh(&serv->sv_lock);
> + serv->sv_nrthreads += 1;
> + spin_unlock_bh(&serv->sv_lock);

atomic_t would be somewhat lighter weight. Can it be used here
instead?


> +
> spin_lock_bh(&pool->sp_lock);
> pool->sp_nrthreads++;
> list_add_rcu(&rqstp->rq_all, &pool->sp_all_threads);
> @@ -880,7 +883,9 @@ svc_exit_thread(struct svc_rqst *rqstp)
> list_del_rcu(&rqstp->rq_all);
> spin_unlock_bh(&pool->sp_lock);
>
> + spin_lock_bh(&serv->sv_lock);
> serv->sv_nrthreads -= 1;
> + spin_unlock_bh(&serv->sv_lock);
> svc_sock_update_bufs(serv);
>
> svc_rqst_free(rqstp);
>
>

--
Chuck Lever




2021-11-23 16:44:38

by Chuck Lever III

Subject: Re: [PATCH 02/19] NFSD: handle error better in write_ports_addfd()



> On Nov 22, 2021, at 8:29 PM, NeilBrown <[email protected]> wrote:
>
> If write_ports_add() fails, we shouldn't destroy the serv, unless we had
> only just created it. So if there are any permanent sockets already
> attached, leave the serv in place.
>
> Signed-off-by: NeilBrown <[email protected]>

This needs to go at the front of the series, IMO, to make it
more straightforward to backport if needed.

Though ea068bad27ce ("NFSD: move lockd_up() before svc_addsock()")
appears to have introduced "if (err < 0)" I'm not sure that's
actually where problems were introduced. Is Cc: stable warranted?


> ---
> fs/nfsd/nfsctl.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
> index 5eb564e58a9b..93d417871302 100644
> --- a/fs/nfsd/nfsctl.c
> +++ b/fs/nfsd/nfsctl.c
> @@ -742,7 +742,7 @@ static ssize_t __write_ports_addfd(char *buf, struct net *net, const struct cred
> return err;
>
> err = svc_addsock(nn->nfsd_serv, fd, buf, SIMPLE_TRANSACTION_LIMIT, cred);
> - if (err < 0) {
> + if (err < 0 && list_empty(&nn->nfsd_serv->sv_permsocks)) {
> nfsd_put(net);
> return err;
> }
>
>

--
Chuck Lever




2021-11-23 16:45:52

by Chuck Lever III

Subject: Re: [PATCH 01/19] SUNRPC/NFSD: clean up get/put functions.

Hi Neil-

Some further custodial requests below...


> On Nov 22, 2021, at 8:29 PM, NeilBrown <[email protected]> wrote:
>
> svc_destroy() is poorly named - it doesn't necessarily destroy the svc,
> it might just reduce the ref count.
> nfsd_destroy() is poorly named for the same reason.
>
> This patch:
> - removes the refcount functionality from svc_destroy(), moving it to
> a new svc_put(). Almost all previous callers of svc_destroy() now
> call svc_put().
> - renames nfsd_destroy() to nfsd_put() and improves the code, using
> the new svc_destroy() rather than svc_put()
> - also changes svc_get() to return the serv, which simplifies
> some code a little.
>
> The only non-trivial part of this is that svc_destroy() would call
> svc_sock_update() on a non-final decrement. It can no longer do that,
> and svc_put() isn't really a good place for it. This call is now made
> from svc_exit_thread() which seems like a good place. This makes the
> call *before* sv_nrthreads is decremented rather than after. This
> is not particularly important as the call just sets a flag which
> causes sv_nrthreads to be checked later. A subsequent patch will
> improve the ordering.
>
> Signed-off-by: NeilBrown <[email protected]>
> ---
> fs/lockd/svc.c | 12 +++---------
> fs/nfs/callback.c | 20 ++++----------------
> fs/nfsd/nfsctl.c | 4 ++--
> fs/nfsd/nfsd.h | 2 +-
> fs/nfsd/nfssvc.c | 30 ++++++++++++++++--------------
> include/linux/sunrpc/svc.h | 29 +++++++++++++++++++++++++----
> net/sunrpc/svc.c | 19 +++++--------------
> 7 files changed, 56 insertions(+), 60 deletions(-)
>
> diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c
> index b220e1b91726..135bd86ed3ad 100644
> --- a/fs/lockd/svc.c
> +++ b/fs/lockd/svc.c
> @@ -430,14 +430,8 @@ static struct svc_serv *lockd_create_svc(void)
> /*
> * Check whether we're already up and running.
> */
> - if (nlmsvc_rqst) {
> - /*
> - * Note: increase service usage, because later in case of error
> - * svc_destroy() will be called.
> - */
> - svc_get(nlmsvc_rqst->rq_server);
> - return nlmsvc_rqst->rq_server;
> - }
> + if (nlmsvc_rqst)
> + return svc_get(nlmsvc_rqst->rq_server);

The svc_get-related changes seem like they could be split
into a separate clean-up patch.


> /*
> * Sanity check: if there's no pid,
> @@ -497,7 +491,7 @@ int lockd_up(struct net *net, const struct cred *cred)
> * so we exit through here on both success and failure.
> */
> err_put:
> - svc_destroy(serv);
> + svc_put(serv);
> err_create:
> mutex_unlock(&nlmsvc_mutex);
> return error;
> diff --git a/fs/nfs/callback.c b/fs/nfs/callback.c
> index 86d856de1389..edbc7579b4aa 100644
> --- a/fs/nfs/callback.c
> +++ b/fs/nfs/callback.c
> @@ -266,14 +266,8 @@ static struct svc_serv *nfs_callback_create_svc(int minorversion)
> /*
> * Check whether we're already up and running.
> */
> - if (cb_info->serv) {
> - /*
> - * Note: increase service usage, because later in case of error
> - * svc_destroy() will be called.
> - */
> - svc_get(cb_info->serv);
> - return cb_info->serv;
> - }
> + if (cb_info->serv)
> + return svc_get(cb_info->serv);
>
> switch (minorversion) {
> case 0:
> @@ -335,16 +329,10 @@ int nfs_callback_up(u32 minorversion, struct rpc_xprt *xprt)
> goto err_start;
>
> cb_info->users++;
> - /*
> - * svc_create creates the svc_serv with sv_nrthreads == 1, and then
> - * svc_prepare_thread increments that. So we need to call svc_destroy
> - * on both success and failure so that the refcount is 1 when the
> - * thread exits.
> - */
> err_net:
> if (!cb_info->users)
> cb_info->serv = NULL;
> - svc_destroy(serv);
> + svc_put(serv);
> err_create:
> mutex_unlock(&nfs_callback_mutex);
> return ret;
> @@ -370,7 +358,7 @@ void nfs_callback_down(int minorversion, struct net *net)
> if (cb_info->users == 0) {
> svc_get(serv);
> serv->sv_ops->svo_setup(serv, NULL, 0);
> - svc_destroy(serv);
> + svc_put(serv);
> dprintk("nfs_callback_down: service destroyed\n");
> cb_info->serv = NULL;
> }
> diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
> index af8531c3854a..5eb564e58a9b 100644
> --- a/fs/nfsd/nfsctl.c
> +++ b/fs/nfsd/nfsctl.c
> @@ -743,7 +743,7 @@ static ssize_t __write_ports_addfd(char *buf, struct net *net, const struct cred
>
> err = svc_addsock(nn->nfsd_serv, fd, buf, SIMPLE_TRANSACTION_LIMIT, cred);
> if (err < 0) {
> - nfsd_destroy(net);
> + nfsd_put(net);

Seems like there should be a matching nfsd_get() somewhere.
Perhaps it can just be an alias for svc_get()?


> return err;
> }
>
> @@ -796,7 +796,7 @@ static ssize_t __write_ports_addxprt(char *buf, struct net *net, const struct cr
> if (!list_empty(&nn->nfsd_serv->sv_permsocks))
> nn->nfsd_serv->sv_nrthreads--;
> else
> - nfsd_destroy(net);
> + nfsd_put(net);
> return err;
> }
>
> diff --git a/fs/nfsd/nfsd.h b/fs/nfsd/nfsd.h
> index 498e5a489826..3e5008b475ff 100644
> --- a/fs/nfsd/nfsd.h
> +++ b/fs/nfsd/nfsd.h
> @@ -97,7 +97,7 @@ int nfsd_pool_stats_open(struct inode *, struct file *);
> int nfsd_pool_stats_release(struct inode *, struct file *);
> void nfsd_shutdown_threads(struct net *net);
>
> -void nfsd_destroy(struct net *net);
> +void nfsd_put(struct net *net);
>
> bool i_am_nfsd(void);
>
> diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
> index 80431921e5d7..2ab0e650a0e2 100644
> --- a/fs/nfsd/nfssvc.c
> +++ b/fs/nfsd/nfssvc.c
> @@ -623,7 +623,7 @@ void nfsd_shutdown_threads(struct net *net)
> svc_get(serv);
> /* Kill outstanding nfsd threads */
> serv->sv_ops->svo_setup(serv, NULL, 0);
> - nfsd_destroy(net);
> + nfsd_put(net);
> mutex_unlock(&nfsd_mutex);
> /* Wait for shutdown of nfsd_serv to complete */
> wait_for_completion(&nn->nfsd_shutdown_complete);
> @@ -656,7 +656,10 @@ int nfsd_create_serv(struct net *net)
> nn->nfsd_serv->sv_maxconn = nn->max_connections;
> error = svc_bind(nn->nfsd_serv, net);
> if (error < 0) {
> - svc_destroy(nn->nfsd_serv);
> + /* NOT nfsd_put() as notifiers (see below) haven't
> + * been set up yet.
> + */
> + svc_put(nn->nfsd_serv);
> nfsd_complete_shutdown(net);
> return error;
> }
> @@ -697,16 +700,16 @@ int nfsd_get_nrthreads(int n, int *nthreads, struct net *net)
> return 0;
> }
>
> -void nfsd_destroy(struct net *net)
> +void nfsd_put(struct net *net)
> {
> struct nfsd_net *nn = net_generic(net, nfsd_net_id);
> - int destroy = (nn->nfsd_serv->sv_nrthreads == 1);
>
> - if (destroy)
> + nn->nfsd_serv->sv_nrthreads --;

checkpatch.pl screamed about the whitespace between the variable
and the unary operator here and in svc_put().


> + if (nn->nfsd_serv->sv_nrthreads == 0) {
> svc_shutdown_net(nn->nfsd_serv, net);
> - svc_destroy(nn->nfsd_serv);
> - if (destroy)
> + svc_destroy(nn->nfsd_serv);
> nfsd_complete_shutdown(net);
> + }
> }
>
> int nfsd_set_nrthreads(int n, int *nthreads, struct net *net)
> @@ -758,7 +761,7 @@ int nfsd_set_nrthreads(int n, int *nthreads, struct net *net)
> if (err)
> break;
> }
> - nfsd_destroy(net);
> + nfsd_put(net);
> return err;
> }
>
> @@ -795,7 +798,7 @@ nfsd_svc(int nrservs, struct net *net, const struct cred *cred)
>
> error = nfsd_startup_net(net, cred);
> if (error)
> - goto out_destroy;
> + goto out_put;
> error = nn->nfsd_serv->sv_ops->svo_setup(nn->nfsd_serv,
> NULL, nrservs);
> if (error)
> @@ -808,8 +811,8 @@ nfsd_svc(int nrservs, struct net *net, const struct cred *cred)
> out_shutdown:
> if (error < 0 && !nfsd_up_before)
> nfsd_shutdown_net(net);
> -out_destroy:
> - nfsd_destroy(net); /* Release server */
> +out_put:
> + nfsd_put(net);
> out:
> mutex_unlock(&nfsd_mutex);
> return error;
> @@ -982,7 +985,7 @@ nfsd(void *vrqstp)
> /* Release the thread */
> svc_exit_thread(rqstp);
>
> - nfsd_destroy(net);
> + nfsd_put(net);
>
> /* Release module */
> mutex_unlock(&nfsd_mutex);
> @@ -1109,8 +1112,7 @@ int nfsd_pool_stats_release(struct inode *inode, struct file *file)
> struct net *net = inode->i_sb->s_fs_info;
>
> mutex_lock(&nfsd_mutex);
> - /* this function really, really should have been called svc_put() */
> - nfsd_destroy(net);
> + nfsd_put(net);
> mutex_unlock(&nfsd_mutex);
> return ret;
> }
> diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
> index 0ae28ae6caf2..d87c3392a1e9 100644
> --- a/include/linux/sunrpc/svc.h
> +++ b/include/linux/sunrpc/svc.h
> @@ -114,15 +114,37 @@ struct svc_serv {
> #endif /* CONFIG_SUNRPC_BACKCHANNEL */
> };
>
> -/*
> - * We use sv_nrthreads as a reference count. svc_destroy() drops
> +/**
> + * svc_get() - increment reference count on a SUNRPC serv
> + * @serv: the svc_serv to have count incremented
> + *
> + * Returns: the svc_serv that was passed in.
> + *
> + * We use sv_nrthreads as a reference count. svc_put() drops
> * this refcount, so we need to bump it up around operations that
> * change the number of threads. Horrible, but there it is.
> * Should be called with the "service mutex" held.
> */
> -static inline void svc_get(struct svc_serv *serv)
> +static inline struct svc_serv *svc_get(struct svc_serv *serv)
> {
> serv->sv_nrthreads++;
> + return serv;
> +}
> +
> +void svc_destroy(struct svc_serv *serv);
> +
> +/**
> + * svc_put - decrement reference count on a SUNRPC serv
> + * @serv: the svc_serv to have count decremented
> + *
> + * When the reference count reaches zero, svc_destroy()
> + * is called to clean up and free the serv.
> + */
> +static inline void svc_put(struct svc_serv *serv)
> +{
> + serv->sv_nrthreads --;
> + if (serv->sv_nrthreads == 0)

Nit: The usual idiom is "if (--serv->sv_nrthreads == 0)"


> + svc_destroy(serv);
> }
>
> /*
> @@ -514,7 +536,6 @@ struct svc_serv * svc_create_pooled(struct svc_program *, unsigned int,
> int svc_set_num_threads(struct svc_serv *, struct svc_pool *, int);
> int svc_set_num_threads_sync(struct svc_serv *, struct svc_pool *, int);
> int svc_pool_stats_open(struct svc_serv *serv, struct file *file);
> -void svc_destroy(struct svc_serv *);
> void svc_shutdown_net(struct svc_serv *, struct net *);
> int svc_process(struct svc_rqst *);
> int bc_svc_process(struct svc_serv *, struct rpc_rqst *,
> diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
> index 4292278a9552..55a1bf0d129f 100644
> --- a/net/sunrpc/svc.c
> +++ b/net/sunrpc/svc.c
> @@ -528,17 +528,7 @@ EXPORT_SYMBOL_GPL(svc_shutdown_net);
> void
> svc_destroy(struct svc_serv *serv)
> {
> - dprintk("svc: svc_destroy(%s, %d)\n",
> - serv->sv_program->pg_name,
> - serv->sv_nrthreads);
> -
> - if (serv->sv_nrthreads) {
> - if (--(serv->sv_nrthreads) != 0) {
> - svc_sock_update_bufs(serv);
> - return;
> - }
> - } else
> - printk("svc_destroy: no threads for serv=%p!\n", serv);
> + dprintk("svc: svc_destroy(%s)\n", serv->sv_program->pg_name);

Maybe the dprintk is unnecessary. I would prefer a trace
point if there is real value in observing destruction of
particular svc_serv objects.

Likewise in subsequent patches.

Also... since we're in the clean-up frame of mind, if you
see a BUG() call site remaining in a hunk, ask yourself
if we really need to kill the kernel at that point, or
if a WARN would suffice.


> del_timer_sync(&serv->sv_temptimer);
>
> @@ -892,9 +882,10 @@ svc_exit_thread(struct svc_rqst *rqstp)
>
> svc_rqst_free(rqstp);
>
> - /* Release the server */
> - if (serv)
> - svc_destroy(serv);
> + if (!serv)
> + return;
> + svc_sock_update_bufs(serv);

I don't object to moving the svc_sock_update_bufs() call
site. But....

Note for someday: I'm not sure of a better way of handling
buffer size changes, but this seems like all kinds of layering
violation.


> + svc_destroy(serv);
> }
> EXPORT_SYMBOL_GPL(svc_exit_thread);
>
>
>

--
Chuck Lever




2021-11-24 18:43:33

by Chuck Lever III

Subject: Re: [PATCH 00/19 v2] SUNRPC: clean up server thread management



> On Nov 23, 2021, at 9:49 AM, J. Bruce Fields <[email protected]> wrote:
>
> On Tue, Nov 23, 2021 at 12:29:35PM +1100, NeilBrown wrote:
>> This is a revision of my series for cleaning up server thread
>> management.
>
> For what it's worth, this version now passes my usual regression tests.

Likewise, I tested with both TCP and RDMA.


> --b.
>
>> Currently lockd, nfsd, and nfs-callback all manage threads slightly
>> differently. This series unifies them.
>>
>> Changes since first series include:
>> - minor bug fixes
>> - kernel-doc comments for new functions
>> - split first patch into 3, and make the bugfix a separate patch
>> - fix management of pool_maps so lockd can use svc_set_num_threads
>> safely
>> - switch nfs-callback to not request a 'pooled' service.
>>
>> NeilBrown
>>
>>
>> ---
>>
>> NeilBrown (19):
>> SUNRPC/NFSD: clean up get/put functions.
>> NFSD: handle error better in write_ports_addfd()
>> SUNRPC: stop using ->sv_nrthreads as a refcount
>> nfsd: make nfsd_stats.th_cnt atomic_t
>> SUNRPC: use sv_lock to protect updates to sv_nrthreads.
>> NFSD: narrow nfsd_mutex protection in nfsd thread
>> NFSD: Make it possible to use svc_set_num_threads_sync
>> SUNRPC: discard svo_setup and rename svc_set_num_threads_sync()
>> NFSD: simplify locking for network notifier.
>> lockd: introduce nlmsvc_serv
>> lockd: simplify management of network status notifiers
>> lockd: move lockd_start_svc() call into lockd_create_svc()
>> lockd: move svc_exit_thread() into the thread
>> lockd: introduce lockd_put()
>> lockd: rename lockd_create_svc() to lockd_get()
>> SUNRPC: move the pool_map definitions (back) into svc.c
>> SUNRPC: always treat sv_nrpools==1 as "not pooled"
>> lockd: use svc_set_num_threads() for thread start and stop
>> NFS: switch the callback service back to non-pooled.
>>
>>
>> fs/lockd/svc.c | 194 ++++++++++++-------------------------
>> fs/nfs/callback.c | 12 +--
>> fs/nfsd/netns.h | 13 +--
>> fs/nfsd/nfsctl.c | 24 ++---
>> fs/nfsd/nfssvc.c | 139 +++++++++++++-------------
>> fs/nfsd/stats.c | 2 +-
>> fs/nfsd/stats.h | 4 +-
>> include/linux/sunrpc/svc.h | 58 ++++-------
>> net/sunrpc/svc.c | 166 ++++++++++++++-----------------
>> 9 files changed, 248 insertions(+), 364 deletions(-)
>>
>> --
>> Signature

--
Chuck Lever




2021-11-28 23:38:39

by NeilBrown

Subject: Re: [PATCH 01/19] SUNRPC/NFSD: clean up get/put functions.

On Wed, 24 Nov 2021, Chuck Lever III wrote:
> > diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c
> > index b220e1b91726..135bd86ed3ad 100644
> > --- a/fs/lockd/svc.c
> > +++ b/fs/lockd/svc.c
> > @@ -430,14 +430,8 @@ static struct svc_serv *lockd_create_svc(void)
> > /*
> > * Check whether we're already up and running.
> > */
> > - if (nlmsvc_rqst) {
> > - /*
> > - * Note: increase service usage, because later in case of error
> > - * svc_destroy() will be called.
> > - */
> > - svc_get(nlmsvc_rqst->rq_server);
> > - return nlmsvc_rqst->rq_server;
> > - }
> > + if (nlmsvc_rqst)
> > + return svc_get(nlmsvc_rqst->rq_server);
>
> The svc_get-related changes seem like they could be split
> into a separate clean-up patch.

I guess so.

> > diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
> > index af8531c3854a..5eb564e58a9b 100644
> > --- a/fs/nfsd/nfsctl.c
> > +++ b/fs/nfsd/nfsctl.c
> > @@ -743,7 +743,7 @@ static ssize_t __write_ports_addfd(char *buf, struct net *net, const struct cred
> >
> > err = svc_addsock(nn->nfsd_serv, fd, buf, SIMPLE_TRANSACTION_LIMIT, cred);
> > if (err < 0) {
> > - nfsd_destroy(net);
> > + nfsd_put(net);
>
> Seems like there should be a matching nfsd_get() somewhere.
> Perhaps it can just be an alias for svc_get()?

What purpose would that serve? I really don't like having simple aliases
- they seem to hide information.
In particular, I really don't like reading code, seeing some interface
that I haven't seen before, hunting it out to find out what it means,
and discovering that it is just a wrapper around something that I already
know. Why should I have to learn 2 interfaces when 1 would suffice?

So I am not inclined to a nfsd_get().

> > -void nfsd_destroy(struct net *net)
> > +void nfsd_put(struct net *net)
> > {
> > struct nfsd_net *nn = net_generic(net, nfsd_net_id);
> > - int destroy = (nn->nfsd_serv->sv_nrthreads == 1);
> >
> > - if (destroy)
> > + nn->nfsd_serv->sv_nrthreads --;
>
> checkpatch.pl screamed about the whitespace between the variable
> and the unary operator here and in svc_put().

I've changed it to ".... -= 1;", which I generally prefer anyway.
But it'll probably become "if atomic_dec_and_test()" in a later patch.

> > +/**
> > + * svc_put - decrement reference count on a SUNRPC serv
> > + * @serv: the svc_serv to have count decremented
> > + *
> > + * When the reference count reaches zero, svc_destroy()
> > + * is called to clean up and free the serv.
> > + */
> > +static inline void svc_put(struct svc_serv *serv)
> > +{
> > + serv->sv_nrthreads --;
> > + if (serv->sv_nrthreads == 0)
>
> Nit: The usual idiom is "if (--serv->sv_nrthreads == 0)"

Is it? I thought that changing variables in if() conditions was
generally discouraged (though it is OK in while()).

So I'll leave it as it is (well... -= 1 ..) until it becomes atomic_t.

> > diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
> > index 4292278a9552..55a1bf0d129f 100644
> > --- a/net/sunrpc/svc.c
> > +++ b/net/sunrpc/svc.c
> > @@ -528,17 +528,7 @@ EXPORT_SYMBOL_GPL(svc_shutdown_net);
> > void
> > svc_destroy(struct svc_serv *serv)
> > {
> > - dprintk("svc: svc_destroy(%s, %d)\n",
> > - serv->sv_program->pg_name,
> > - serv->sv_nrthreads);
> > -
> > - if (serv->sv_nrthreads) {
> > - if (--(serv->sv_nrthreads) != 0) {
> > - svc_sock_update_bufs(serv);
> > - return;
> > - }
> > - } else
> > - printk("svc_destroy: no threads for serv=%p!\n", serv);
> > + dprintk("svc: svc_destroy(%s)\n", serv->sv_program->pg_name);
>
> Maybe the dprintk is unnecessary. I would prefer a trace
> point if there is real value in observing destruction of
> particular svc_serv objects.
>
> Likewise in subsequent patches.
>
> Also... since we're in the clean-up frame of mind, if you
> see a BUG() call site remaining in a hunk, ask yourself
> if we really need to kill the kernel at that point, or
> if a WARN would suffice.

Cleanup of BUGs (and dprintks) is, for me, a different frame of mind
than the cleanup I'm currently working on. I'd rather not get distracted.


>
> > del_timer_sync(&serv->sv_temptimer);
> >
> > @@ -892,9 +882,10 @@ svc_exit_thread(struct svc_rqst *rqstp)
> >
> > svc_rqst_free(rqstp);
> >
> > - /* Release the server */
> > - if (serv)
> > - svc_destroy(serv);
> > + if (!serv)
> > + return;
> > + svc_sock_update_bufs(serv);
>
> I don't object to moving the svc_sock_update_bufs() call
> site. But....
>
> Note for someday: I'm not sure of a better way of handling
> buffer size changes, but this seems like all kinds of layering
> violation.

Despite the name, this function doesn't update any bufs. It just sets
some flags.
Maybe we should have a sequential "version" number in the svc_serv which
is updated when the number of threads changes. And each svc_sock holds
a copy of this. If it notices the svc_serv has changed version, it
reassesses its buffer space.

Thanks,
NeilBrown

2021-11-28 23:51:31

by NeilBrown

Subject: Re: [PATCH 02/19] NFSD: handle error better in write_ports_addfd()

On Wed, 24 Nov 2021, Chuck Lever III wrote:
>
> > On Nov 22, 2021, at 8:29 PM, NeilBrown <[email protected]> wrote:
> >
> > If write_ports_add() fails, we shouldn't destroy the serv, unless we had
> > only just created it. So if there are any permanent sockets already
> > attached, leave the serv in place.
> >
> > Signed-off-by: NeilBrown <[email protected]>
>
> This needs to go at the front of the series, IMO, to make it
> more straightforward to backport if needed.

That's reasonable.

>
> Though ea068bad27ce ("NFSD: move lockd_up() before svc_addsock()")
> appears to have introduced "if (err < 0)" I'm not sure that's
> actually where problems were introduced. Is Cc: stable warranted?

I don't think Cc: stable is warranted. I think far too much goes to
'stable', but also not enough.... So I'm selective.

The problem fixed is barely a bug - just a minor inconvenience.
In practice, svc_addsock() doesn't fail, because it is never asked to do
something that it cannot do. So handling failure gracefully will only be
noticed by someone who is doing strange things.
So while we should definitely fix it, I'm not inclined to backport the
fix.

BTW, I think the "bug" was introduced in Commit 0cd14a061e32 ("nfsd: fix
error handling in __write_ports_addxprt"), which fixed a different
(real) bug introduced by the patch you identified.

Thanks,
NeilBrown

>
>
> > ---
> > fs/nfsd/nfsctl.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
> > index 5eb564e58a9b..93d417871302 100644
> > --- a/fs/nfsd/nfsctl.c
> > +++ b/fs/nfsd/nfsctl.c
> > @@ -742,7 +742,7 @@ static ssize_t __write_ports_addfd(char *buf, struct net *net, const struct cred
> > return err;
> >
> > err = svc_addsock(nn->nfsd_serv, fd, buf, SIMPLE_TRANSACTION_LIMIT, cred);
> > - if (err < 0) {
> > + if (err < 0 && list_empty(&nn->nfsd_serv->sv_permsocks)) {
> > nfsd_put(net);
> > return err;
> > }
> >
> >
>
> --
> Chuck Lever
>
>
>
>

2021-11-29 00:13:23

by NeilBrown

Subject: Re: [PATCH 05/19] SUNRPC: use sv_lock to protect updates to sv_nrthreads.

On Wed, 24 Nov 2021, Chuck Lever III wrote:
> > @@ -639,7 +639,10 @@ svc_prepare_thread(struct svc_serv *serv, struct svc_pool *pool, int node)
> > return ERR_PTR(-ENOMEM);
> >
> > svc_get(serv);
> > - serv->sv_nrthreads++;
> > + spin_lock_bh(&serv->sv_lock);
> > + serv->sv_nrthreads += 1;
> > + spin_unlock_bh(&serv->sv_lock);
>
> atomic_t would be somewhat lighter weight. Can it be used here
> instead?
>

We could.... but sv_nrthreads is read-mostly. There are 11 places
where we would need to call "atomic_read()", and just two where we
benefit from the simplicity of atomic_inc/dec.

And even if I did achieve dynamic thread-count management, we would not
be changing sv_nrthreads often enough that any performance difference
would be noticeable.
So I'd rather stick with using the spinlock and keeping the read-side
simple.

Thanks,
NeilBrown