2014-06-06 13:07:18

by Jeff Layton

Subject: [PATCH v2 0/2] nfsd: preliminary patches for client_mutex removal

This is a respin of the last two patches in the 9-patch series I posted
last week. Bruce dropped patch #8, as it caused a lockdep pop due to
lock inversion between the i_lock and the state_lock. Patch #9 was
dropped due to objections from Christoph. Bruce indicated that he would
take the others, so I won't repost them here.

These two patches should address the previous concerns -- let me know if
there are any objections.

Jeff Layton (1):
nfsd: avoid taking the state_lock while holding the i_lock

Trond Myklebust (1):
nfsd: Protect addition to the file_hashtbl

 fs/nfsd/nfs4callback.c |   9 +++-
 fs/nfsd/nfs4state.c    | 123 ++++++++++++++++++++++++++++++++++++-------------
 fs/nfsd/state.h        |   2 +
 3 files changed, 101 insertions(+), 33 deletions(-)

--
1.9.3



2014-06-07 14:31:34

by Christoph Hellwig

Subject: Re: [PATCH v2 2/2] nfsd: avoid taking the state_lock while holding the i_lock

On Sat, Jun 07, 2014 at 10:28:26AM -0400, Jeff Layton wrote:
> Well, I think using the fp->fi_lock instead of the i_lock here is
> reasonable. We at least avoid taking the state_lock (which is likely to
> be much more contended) within the i_lock.

Yes, avoiding i_lock usage inside nfsd is something I'd prefer. But
with the current lock manager ops that are called with i_lock held
we'll have some leakage into the nfsd lock hierarchy anyway
unfortunately.

> The thing that makes this
> patch nasty is all of the shenanigans to queue the delegation to the
> global list from within rpc_prepare or rpc_release.
>
> Personally, I think it'd be cleaner to add some sort of cb_prepare
> operation to the generic callback framework you're building to handle
> that, but let me know what you think.

I guess I'll have to do it that way then. It's not like so-far-unreleased
code should be a hard blocker for a bug fix anyway.

Care to prepare a version that uses fi_lock, but otherwise works like the
first version?
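
[A sketch, for readers following along: the cb_prepare hook Jeff floats
above might look roughly like the following. The struct and member names
are invented for illustration; the generic callback framework under
discussion was unreleased at the time, so none of this is its actual API.]

struct nfsd4_callback;

/*
 * Hypothetical sketch only -- nfsd4_callback_ops and its .prepare member
 * are invented names. The point is to move the del_recall_lru queueing
 * out of the rpc_call_ops and into a per-callback-type hook that the
 * generic framework invokes exactly once before the RPC is sent.
 */
struct nfsd4_callback_ops {
        void (*prepare)(struct nfsd4_callback *cb);  /* before the RPC is queued */
        int  (*done)(struct nfsd4_callback *cb, struct rpc_task *task);
        void (*release)(struct nfsd4_callback *cb);  /* always runs, even on alloc failure */
};

static void nfsd4_cb_recall_prepare_op(struct nfsd4_callback *cb)
{
        struct nfs4_delegation *dp =
                container_of(cb, struct nfs4_delegation, dl_recall);

        /* queue to the LRU exactly once, as patch 2/2 does from rpc_prepare */
        nfsd4_queue_to_del_recall_lru(dp);
}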


2014-06-06 13:07:20

by Jeff Layton

Subject: [PATCH v2 1/2] nfsd: Protect addition to the file_hashtbl

From: Trond Myklebust <[email protected]>

Ensure that we can only have a single struct nfs4_file per inode
in the file_hashtbl, and make addition atomic with respect to lookup.

To prevent an i_lock/state_lock inversion, change nfsd4_init_file to
use ihold instead of igrab. That's also more efficient anyway, as we
definitely hold a reference to the inode at that point.

Signed-off-by: Trond Myklebust <[email protected]>
Signed-off-by: Jeff Layton <[email protected]>
---
fs/nfsd/nfs4state.c | 49 +++++++++++++++++++++++++++++++++++++------------
1 file changed, 37 insertions(+), 12 deletions(-)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index a500033a2f87..cbec573e9445 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -2519,17 +2519,18 @@ static void nfsd4_init_file(struct nfs4_file *fp, struct inode *ino)
 {
         unsigned int hashval = file_hashval(ino);
 
+        lockdep_assert_held(&state_lock);
+
         atomic_set(&fp->fi_ref, 1);
         INIT_LIST_HEAD(&fp->fi_stateids);
         INIT_LIST_HEAD(&fp->fi_delegations);
-        fp->fi_inode = igrab(ino);
+        ihold(ino);
+        fp->fi_inode = ino;
         fp->fi_had_conflict = false;
         fp->fi_lease = NULL;
         memset(fp->fi_fds, 0, sizeof(fp->fi_fds));
         memset(fp->fi_access, 0, sizeof(fp->fi_access));
-        spin_lock(&state_lock);
         hlist_add_head(&fp->fi_hash, &file_hashtbl[hashval]);
-        spin_unlock(&state_lock);
 }
 
 void
@@ -2695,23 +2696,49 @@ find_openstateowner_str(unsigned int hashval, struct nfsd4_open *open,
 
 /* search file_hashtbl[] for file */
 static struct nfs4_file *
-find_file(struct inode *ino)
+find_file_locked(struct inode *ino)
 {
         unsigned int hashval = file_hashval(ino);
         struct nfs4_file *fp;
 
-        spin_lock(&state_lock);
+        lockdep_assert_held(&state_lock);
+
         hlist_for_each_entry(fp, &file_hashtbl[hashval], fi_hash) {
                 if (fp->fi_inode == ino) {
                         get_nfs4_file(fp);
-                        spin_unlock(&state_lock);
                         return fp;
                 }
         }
-        spin_unlock(&state_lock);
         return NULL;
 }
 
+static struct nfs4_file *
+find_file(struct inode *ino)
+{
+        struct nfs4_file *fp;
+
+        spin_lock(&state_lock);
+        fp = find_file_locked(ino);
+        spin_unlock(&state_lock);
+        return fp;
+}
+
+static struct nfs4_file *
+find_or_add_file(struct inode *ino, struct nfs4_file *new)
+{
+        struct nfs4_file *fp;
+
+        spin_lock(&state_lock);
+        fp = find_file_locked(ino);
+        if (fp == NULL) {
+                nfsd4_init_file(new, ino);
+                fp = new;
+        }
+        spin_unlock(&state_lock);
+
+        return fp;
+}
+
 /*
  * Called to check deny when READ with all zero stateid or
  * WRITE with all zero or all one stateid
@@ -3230,21 +3257,19 @@ nfsd4_process_open2(struct svc_rqst *rqstp, struct svc_fh *current_fh, struct nf
          * and check for delegations in the process of being recalled.
          * If not found, create the nfs4_file struct
          */
-        fp = find_file(ino);
-        if (fp) {
+        fp = find_or_add_file(ino, open->op_file);
+        if (fp != open->op_file) {
                 if ((status = nfs4_check_open(fp, open, &stp)))
                         goto out;
                 status = nfs4_check_deleg(cl, open, &dp);
                 if (status)
                         goto out;
         } else {
+                open->op_file = NULL;
                 status = nfserr_bad_stateid;
                 if (nfsd4_is_deleg_cur(open))
                         goto out;
                 status = nfserr_jukebox;
-                fp = open->op_file;
-                open->op_file = NULL;
-                nfsd4_init_file(fp, ino);
         }
 
         /*
--
1.9.3
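
[An illustration of the calling convention the patch establishes: the
caller preallocates a candidate nfs4_file, and find_or_add_file() decides
atomically under the state_lock whether to consume it or to return an
existing entry with a reference held. The helper name below is invented;
nfsd4_process_open2 open-codes this logic in the final hunk above.]

static struct nfs4_file *
get_file_for_open(struct nfsd4_open *open, struct inode *ino)
{
        struct nfs4_file *fp = find_or_add_file(ino, open->op_file);

        if (fp == open->op_file)
                open->op_file = NULL;   /* ownership moved to file_hashtbl */
        /* else: an existing file won; the preallocated op_file is freed later */
        return fp;
}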


2014-06-07 14:28:29

by Jeff Layton

Subject: Re: [PATCH v2 2/2] nfsd: avoid taking the state_lock while holding the i_lock

On Sat, 7 Jun 2014 07:09:04 -0700
Christoph Hellwig <[email protected]> wrote:

> On Fri, Jun 06, 2014 at 09:07:06AM -0400, Jeff Layton wrote:
> > state_lock is a heavily contended global lock. We don't want to grab
> > that while simultaneously holding the inode->i_lock. Avoid doing that in
> > the delegation break callback by ensuring that we add/remove the
> > dl_perfile under a new per-nfs4_file fi_lock, and hold that while walking
> > the fi_delegations list.
> >
> > We still do need to queue the delegations to the global del_recall_lru
> > list. Do that in the rpc_prepare op for the delegation recall RPC. It's
> > possible though that the allocation of the rpc_task will fail, which
> > would cause the delegation to be leaked.
> >
> > If that occurs rpc_release is still called, so we also do it there if
> > the rpc_task failed to run. This brings up another dilemma -- how do
> > we know whether it got queued in rpc_prepare or not?
> >
> > In order to determine that, we set the dl_time to 0 in the delegation
> > break callback from the VFS and only set it when we queue it to the
> > list. If it's still zero by the time we get to rpc_release, then we know
> > that it never got queued and we can do it then.
>
> Compared to this version I have to say the original one that I objected
> to looks like the lesser evil. I'll take another deeper look at it.
>

Well, I think using the fp->fi_lock instead of the i_lock here is
reasonable. We at least avoid taking the state_lock (which is likely to
be much more contended) within the i_lock. The thing that makes this
patch nasty is all of the shenanigans to queue the delegation to the
global list from within rpc_prepare or rpc_release.

Personally, I think it'd be cleaner to add some sort of cb_prepare
operation to the generic callback framework you're building to handle
that, but let me know what you think.

--
Jeff Layton <[email protected]>

2014-06-06 14:14:32

by Christoph Hellwig

Subject: Re: [PATCH v2 1/2] nfsd: Protect addition to the file_hashtbl

On Fri, Jun 06, 2014 at 09:07:05AM -0400, Jeff Layton wrote:
> From: Trond Myklebust <[email protected]>
>
> Ensure that we can only have a single struct nfs4_file per inode
> in the file_hashtbl and make addition atomic with respect to lookup.
>
> To prevent an i_lock/state_lock inversion, change nfsd4_init_file to
> use ihold instead of igrab. That's also more efficient anyway, as we
> definitely hold a reference to the inode at that point.
>
> Signed-off-by: Trond Myklebust <[email protected]>
> Signed-off-by: Jeff Layton <[email protected]>

Looks good,

Reviewed-by: Christoph Hellwig <[email protected]>

2014-06-07 14:09:05

by Christoph Hellwig

Subject: Re: [PATCH v2 2/2] nfsd: avoid taking the state_lock while holding the i_lock

On Fri, Jun 06, 2014 at 09:07:06AM -0400, Jeff Layton wrote:
> state_lock is a heavily contended global lock. We don't want to grab
> that while simultaneously holding the inode->i_lock. Avoid doing that in
> the delegation break callback by ensuring that we add/remove the
> dl_perfile under a new per-nfs4_file fi_lock, and hold that while walking
> the fi_delegations list.
>
> We still do need to queue the delegations to the global del_recall_lru
> list. Do that in the rpc_prepare op for the delegation recall RPC. It's
> possible though that the allocation of the rpc_task will fail, which
> would cause the delegation to be leaked.
>
> If that occurs rpc_release is still called, so we also do it there if
> the rpc_task failed to run. This brings up another dilemma -- how do
> we know whether it got queued in rpc_prepare or not?
>
> In order to determine that, we set the dl_time to 0 in the delegation
> break callback from the VFS and only set it when we queue it to the
> list. If it's still zero by the time we get to rpc_release, then we know
> that it never got queued and we can do it then.

Compared to this version I have to say the original one that I objected
to looks like the lesser evil. I'll take another deeper look at it.


2014-06-06 13:07:22

by Jeff Layton

Subject: [PATCH v2 2/2] nfsd: avoid taking the state_lock while holding the i_lock

state_lock is a heavily contended global lock. We don't want to grab
that while simultaneously holding the inode->i_lock. Avoid doing that in
the delegation break callback by ensuring that we add/remove the
dl_perfile under a new per-nfs4_file fi_lock, and hold that while walking
the fi_delegations list.

We still do need to queue the delegations to the global del_recall_lru
list. Do that in the rpc_prepare op for the delegation recall RPC. It's
possible though that the allocation of the rpc_task will fail, which
would cause the delegation to be leaked.

If that occurs rpc_release is still called, so we also do it there if
the rpc_task failed to run. This brings up another dilemma -- how do
we know whether it got queued in rpc_prepare or not?

In order to determine that, we set the dl_time to 0 in the delegation
break callback from the VFS and only set it when we queue it to the
list. If it's still zero by the time we get to rpc_release, then we know
that it never got queued and we can do it then.

Cc: Christoph Hellwig <[email protected]>
Signed-off-by: Jeff Layton <[email protected]>
---
 fs/nfsd/nfs4callback.c |  9 ++++--
 fs/nfsd/nfs4state.c    | 74 +++++++++++++++++++++++++++++++++++++-------------
 fs/nfsd/state.h        |  2 ++
 3 files changed, 64 insertions(+), 21 deletions(-)

diff --git a/fs/nfsd/nfs4callback.c b/fs/nfsd/nfs4callback.c
index 2c73cae9899d..3d01637d950c 100644
--- a/fs/nfsd/nfs4callback.c
+++ b/fs/nfsd/nfs4callback.c
@@ -810,12 +810,15 @@ static bool nfsd41_cb_get_slot(struct nfs4_client *clp, struct rpc_task *task)
  * TODO: cb_sequence should support referring call lists, cachethis, multiple
  * slots, and mark callback channel down on communication errors.
  */
-static void nfsd4_cb_prepare(struct rpc_task *task, void *calldata)
+static void nfsd4_cb_recall_prepare(struct rpc_task *task, void *calldata)
 {
         struct nfsd4_callback *cb = calldata;
         struct nfs4_client *clp = cb->cb_clp;
+        struct nfs4_delegation *dp = container_of(cb, struct nfs4_delegation, dl_recall);
         u32 minorversion = clp->cl_minorversion;
 
+        nfsd4_queue_to_del_recall_lru(dp);
+
         cb->cb_minorversion = minorversion;
         if (minorversion) {
                 if (!nfsd41_cb_get_slot(clp, task))
@@ -900,6 +903,8 @@ static void nfsd4_cb_recall_release(void *calldata)
         struct nfs4_client *clp = cb->cb_clp;
         struct nfs4_delegation *dp = container_of(cb, struct nfs4_delegation, dl_recall);
 
+        nfsd4_queue_to_del_recall_lru(dp);
+
         if (cb->cb_done) {
                 spin_lock(&clp->cl_lock);
                 list_del(&cb->cb_per_client);
@@ -909,7 +914,7 @@
 }
 
 static const struct rpc_call_ops nfsd4_cb_recall_ops = {
-        .rpc_call_prepare = nfsd4_cb_prepare,
+        .rpc_call_prepare = nfsd4_cb_recall_prepare,
         .rpc_call_done = nfsd4_cb_recall_done,
         .rpc_release = nfsd4_cb_recall_release,
 };
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index cbec573e9445..f429883fb4bb 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -438,7 +438,9 @@ hash_delegation_locked(struct nfs4_delegation *dp, struct nfs4_file *fp)
         lockdep_assert_held(&state_lock);
 
         dp->dl_stid.sc_type = NFS4_DELEG_STID;
+        spin_lock(&fp->fi_lock);
         list_add(&dp->dl_perfile, &fp->fi_delegations);
+        spin_unlock(&fp->fi_lock);
         list_add(&dp->dl_perclnt, &dp->dl_stid.sc_client->cl_delegations);
 }
 
@@ -446,14 +448,20 @@ hash_delegation_locked(struct nfs4_delegation *dp, struct nfs4_file *fp)
 static void
 unhash_delegation(struct nfs4_delegation *dp)
 {
+        struct nfs4_file *fp = dp->dl_file;
+
         spin_lock(&state_lock);
         list_del_init(&dp->dl_perclnt);
-        list_del_init(&dp->dl_perfile);
         list_del_init(&dp->dl_recall_lru);
+        if (!list_empty(&dp->dl_perfile)) {
+                spin_lock(&fp->fi_lock);
+                list_del_init(&dp->dl_perfile);
+                spin_unlock(&fp->fi_lock);
+        }
         spin_unlock(&state_lock);
-        if (dp->dl_file) {
-                nfs4_put_deleg_lease(dp->dl_file);
-                put_nfs4_file(dp->dl_file);
+        if (fp) {
+                nfs4_put_deleg_lease(fp);
+                put_nfs4_file(fp);
                 dp->dl_file = NULL;
         }
 }
@@ -2522,6 +2530,7 @@ static void nfsd4_init_file(struct nfs4_file *fp, struct inode *ino)
         lockdep_assert_held(&state_lock);
 
         atomic_set(&fp->fi_ref, 1);
+        spin_lock_init(&fp->fi_lock);
         INIT_LIST_HEAD(&fp->fi_stateids);
         INIT_LIST_HEAD(&fp->fi_delegations);
         ihold(ino);
@@ -2767,23 +2776,49 @@ out:
         return ret;
 }
 
+/*
+ * We use a dl_time of 0 as an indicator that the delegation is "disconnected"
+ * from the client lists. If we find that that's the case, set the dl_time and
+ * then queue it to the list.
+ */
+void
+nfsd4_queue_to_del_recall_lru(struct nfs4_delegation *dp)
+{
+        struct nfs4_file *fp = dp->dl_file;
+        struct nfsd_net *nn = net_generic(dp->dl_stid.sc_client->net, nfsd_net_id);
+
+        spin_lock(&fp->fi_lock);
+        if (!dp->dl_time) {
+                dp->dl_time = get_seconds();
+                spin_unlock(&fp->fi_lock);
+                spin_lock(&state_lock);
+                list_add_tail(&dp->dl_recall_lru, &nn->del_recall_lru);
+                spin_unlock(&state_lock);
+        } else {
+                spin_unlock(&fp->fi_lock);
+        }
+}
+
 static void nfsd_break_one_deleg(struct nfs4_delegation *dp)
 {
-        struct nfs4_client *clp = dp->dl_stid.sc_client;
-        struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);
+        lockdep_assert_held(&dp->dl_file->fi_lock);
 
-        lockdep_assert_held(&state_lock);
-        /* We're assuming the state code never drops its reference
+        /*
+         * We're assuming the state code never drops its reference
          * without first removing the lease. Since we're in this lease
-         * callback (and since the lease code is serialized by the kernel
-         * lock) we know the server hasn't removed the lease yet, we know
-         * it's safe to take a reference: */
+         * callback (and since the lease code is serialized by the i_lock)
+         * we know the server hasn't removed the lease yet, we know it's
+         * safe to take a reference.
+         */
         atomic_inc(&dp->dl_count);
 
-        list_add_tail(&dp->dl_recall_lru, &nn->del_recall_lru);
-
-        /* Only place dl_time is set; protected by i_lock: */
-        dp->dl_time = get_seconds();
+        /*
+         * We use a dl_time of 0 to indicate that the delegation has
+         * not yet been queued to the nn->del_recall_lru list. That's
+         * done in the rpc_prepare or rpc_release operations (depending
+         * on which one gets there first).
+         */
+        dp->dl_time = 0;
 
         nfsd4_cb_recall(dp);
 }
@@ -2809,11 +2844,11 @@ static void nfsd_break_deleg_cb(struct file_lock *fl)
          */
         fl->fl_break_time = 0;
 
-        spin_lock(&state_lock);
+        spin_lock(&fp->fi_lock);
         fp->fi_had_conflict = true;
         list_for_each_entry(dp, &fp->fi_delegations, dl_perfile)
                 nfsd_break_one_deleg(dp);
-        spin_unlock(&state_lock);
+        spin_unlock(&fp->fi_lock);
 }
 
 static
@@ -3454,8 +3489,9 @@ nfs4_laundromat(struct nfsd_net *nn)
                 dp = list_entry (pos, struct nfs4_delegation, dl_recall_lru);
                 if (net_generic(dp->dl_stid.sc_client->net, nfsd_net_id) != nn)
                         continue;
-                if (time_after((unsigned long)dp->dl_time, (unsigned long)cutoff)) {
-                        t = dp->dl_time - cutoff;
+                t = dp->dl_time;
+                if (time_after((unsigned long)t, (unsigned long)cutoff)) {
+                        t -= cutoff;
                         new_timeo = min(new_timeo, t);
                         break;
                 }
diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
index 374c66283ac5..eae4fcaa5fd4 100644
--- a/fs/nfsd/state.h
+++ b/fs/nfsd/state.h
@@ -382,6 +382,7 @@ static inline struct nfs4_lockowner * lockowner(struct nfs4_stateowner *so)
 /* nfs4_file: a file opened by some number of (open) nfs4_stateowners. */
 struct nfs4_file {
         atomic_t                fi_ref;
+        spinlock_t              fi_lock;
         struct hlist_node       fi_hash;    /* hash by "struct inode *" */
         struct list_head        fi_stateids;
         struct list_head        fi_delegations;
@@ -472,6 +473,7 @@ extern void nfsd4_cb_recall(struct nfs4_delegation *dp);
 extern int nfsd4_create_callback_queue(void);
 extern void nfsd4_destroy_callback_queue(void);
 extern void nfsd4_shutdown_callback(struct nfs4_client *);
+extern void nfsd4_queue_to_del_recall_lru(struct nfs4_delegation *);
 extern void nfs4_put_delegation(struct nfs4_delegation *dp);
 extern struct nfs4_client_reclaim *nfs4_client_to_reclaim(const char *name,
                         struct nfsd_net *nn);
--
1.9.3
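
[Taken together, the hunks above imply a lock ordering worth writing down.
This summary is inferred from the patch, not part of it:]

/*
 * Lock ordering after this patch:
 *
 *      inode->i_lock                    state_lock
 *             \                             /
 *              +------> fp->fi_lock <------+
 *
 * - nfsd_break_deleg_cb() runs under the VFS i_lock (lease callback)
 *   and now takes only fi_lock, never state_lock.
 * - hash_delegation_locked() takes fi_lock while holding state_lock.
 * - nfsd4_queue_to_del_recall_lru() drops fi_lock before taking
 *   state_lock, so fi_lock stays innermost and no path ever orders
 *   state_lock inside i_lock.
 */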


2014-06-07 14:34:11

by Jeff Layton

Subject: Re: [PATCH v2 2/2] nfsd: avoid taking the state_lock while holding the i_lock

On Sat, 7 Jun 2014 07:31:33 -0700
Christoph Hellwig <[email protected]> wrote:

> On Sat, Jun 07, 2014 at 10:28:26AM -0400, Jeff Layton wrote:
> > Well, I think using the fp->fi_lock instead of the i_lock here is
> > reasonable. We at least avoid taking the state_lock (which is likely to
> > be much more contended) within the i_lock.
>
> Yes, avoiding i_lock usage inside nfsd is something I'd prefer. But
> with the current lock manager ops that are called with i_lock held
> we'll have some leakage into the nfsd lock hierarchy anyway
> unfortunately.
>

Yeah. Switching the file locking infrastructure over to the i_lock
seemed like such a good idea at the time...

> > The thing that makes this
> > patch nasty is all of the shenanigans to queue the delegation to the
> > global list from within rpc_prepare or rpc_release.
> >
> > Personally, I think it'd be cleaner to add some sort of cb_prepare
> > operation to the generic callback framework you're building to
> > handle that, but let me know what you think.
>
> I guess I'll have to do it that way then. It's not like so-far-unreleased
> code should be a hard blocker for a bug fix anyway.
>
> Care to prepare a version that uses fi_lock, but otherwise works like
> the first version?
>

Sure, that'd be fine. It might take a few days to respin as I'll be at
the bakeathon next week.

--
Jeff Layton <[email protected]>