2014-07-25 11:34:41

by Jeff Layton

Subject: [PATCH v6 0/9] nfsd: more delegation fixes to prepare for client_mutex removal

v6:
- rebase on top of patches that change how struct nfs4_file is hashed

- add patch to remove the dl_fh field from struct nfs4_delegation

- fix a bug in forget_delegations fault injector

- update some changelogs to be more descriptive

v5:
- fix spinlocking in block_delegations. Lock should be held around all
of the set_bit calls so we don't race with the swap of the two
fields.

- eliminate destroy_revoked_delegation (just use nfs4_put_delegation)

- eliminate unneeded NULL pointer check in nfs4_setlease

v4:
- close more potential races in setlease code, and fix some bugs in
error handling in that code.

- clean up delegation setting functions, eliminating unused arguments
and avoiding allocations when there has already been a delegation
break

- add separate spinlock for block_delegations/delegation_blocked code

v3:
- use alternate method for checking for delegation break races after
getting a lease (just check fi_had_conflict instead)

- drop file_has_lease patch -- no longer needed

- move cl_revoked handling patch into this set. It means altering a
few of the later patches, but it keeps the set more topically
coherent

v2:
- move remove_stid call from nfs4_free_stid and into callers

Hopefully this series won't need much introduction now. This is the
delegation rework to prepare for client_mutex removal, now rebased on
top of the changes to nfs4_file hashing.

Jeff Layton (7):
nfsd: fully unhash delegations when revoking them
nfsd: ensure that clp->cl_revoked list is protected by clp->cl_lock
nfsd: drop unused stp arg to alloc_init_deleg
nfsd: clean up arguments to nfs4_open_delegation
nfsd: clean up nfs4_set_delegation
nfsd: give block_delegation and delegation_blocked its own spinlock
nfsd: remove dl_fh field from struct nfs4_delegation

Trond Myklebust (2):
nfsd: simplify stateid allocation and file handling
nfsd: Convert delegation counter to an atomic_long_t type

fs/nfsd/nfs4callback.c | 2 +-
fs/nfsd/nfs4state.c | 180 +++++++++++++++++++++++++++----------------------
fs/nfsd/state.h | 1 -
3 files changed, 99 insertions(+), 84 deletions(-)

--
1.9.3



2014-07-29 17:53:44

by J. Bruce Fields

Subject: Re: [PATCH v6 2/9] nfsd: fully unhash delegations when revoking them

On Tue, Jul 29, 2014 at 12:41:43PM -0400, Jeff Layton wrote:
> On Tue, 29 Jul 2014 10:53:31 -0400
> "J. Bruce Fields" <[email protected]> wrote:
>
> > On Fri, Jul 25, 2014 at 07:34:20AM -0400, Jeff Layton wrote:
> > > @@ -698,11 +699,10 @@ static void revoke_delegation(struct nfs4_delegation *dp)
> > > struct nfs4_client *clp = dp->dl_stid.sc_client;
> > >
> > > if (clp->cl_minorversion == 0)
> > > - destroy_delegation(dp);
> > > + destroy_revoked_delegation(dp);
> >
> > I don't understand this one--in the NFSv4.0 case, when is the delegation
> > unhashed?
> >
> > (And, apologies if that's a repeat question. I remember wondering about
> > this before but can't find mail in the archives....)
> >
> > --b.
> >
>
> It's unhashed by the laundromat (or the fault injection code). The
> deltas that change that are later in this patch, but it doesn't have
> much context so it's a little tricky to spot. Here's the relevant code
> chunk from nfs4_laundromat:
>
> ---------------[snip]-----------------
> spin_lock(&state_lock);
> list_for_each_safe(pos, next, &nn->del_recall_lru) {
> dp = list_entry (pos, struct nfs4_delegation, dl_recall_lru);
> if (net_generic(dp->dl_stid.sc_client->net, nfsd_net_id) != nn)
> continue;
> if (time_after((unsigned long)dp->dl_time, (unsigned long)cutoff)) {
> t = dp->dl_time - cutoff;
> new_timeo = min(new_timeo, t);
> break;
> }
> unhash_delegation_locked(dp);
> list_add(&dp->dl_recall_lru, &reaplist);
> }
> spin_unlock(&state_lock);
> list_for_each_safe(pos, next, &reaplist) {
> dp = list_entry (pos, struct nfs4_delegation, dl_recall_lru);
> revoke_delegation(dp);
> }
> ---------------[snip]-----------------
>
> ...so the laundromat walks the nn->del_recall_lru list, unhashes any
> delegation that has expired and gathers them up onto the list. Then, it
> calls revoke_delegation on each outside of the lock.

Got it, thanks.

> As a side note, that net_generic comparison there looks bogus to me.
> How could you ever end up with an entry from a different namespace on
> this list? I left it in for now since it didn't seem relevant (or
> terribly expensive), but I suspect that it can be removed (or turned
> into a WARN or something).

Looks like this was an oversight from e8c69d17d1ef. I'd be fine with
just removing it.

--b.

>
> > > else {
> > > - unhash_delegation(dp);
> > > dp->dl_stid.sc_type = NFS4_REVOKED_DELEG_STID;
> > > - list_add(&dp->dl_recall_lru, &clp->cl_revoked);
> > > + list_move(&dp->dl_recall_lru, &clp->cl_revoked);
> > > }
> > > }
> > >
> > > @@ -1458,15 +1458,14 @@ destroy_client(struct nfs4_client *clp)
> > > spin_lock(&state_lock);
> > > while (!list_empty(&clp->cl_delegations)) {
> > > dp = list_entry(clp->cl_delegations.next, struct
> > > nfs4_delegation, dl_perclnt);
> > > - list_del_init(&dp->dl_perclnt);
> > > - /* Ensure that deleg break won't try to requeue it
> > > */
> > > - ++dp->dl_time;
> > > - list_move(&dp->dl_recall_lru, &reaplist);
> > > + unhash_delegation_locked(dp);
> > > + list_add(&dp->dl_recall_lru, &reaplist);
> > > }
> > > spin_unlock(&state_lock);
> > > while (!list_empty(&reaplist)) {
> > > dp = list_entry(reaplist.next, struct
> > > nfs4_delegation, dl_recall_lru);
> > > - destroy_delegation(dp);
> > > + list_del_init(&dp->dl_recall_lru);
> > > + nfs4_put_delegation(dp);
> > > }
> > > list_splice_init(&clp->cl_revoked, &reaplist);
> > > while (!list_empty(&reaplist)) {
> > > @@ -3662,7 +3661,7 @@ nfs4_open_delegation(struct net *net, struct
> > > svc_fh *fh, open->op_delegate_type = NFS4_OPEN_DELEGATE_READ;
> > > return;
> > > out_free:
> > > - destroy_delegation(dp);
> > > + nfs4_put_delegation(dp);
> > > out_no_deleg:
> > > open->op_delegate_type = NFS4_OPEN_DELEGATE_NONE;
> > > if (open->op_claim_type == NFS4_OPEN_CLAIM_PREVIOUS &&
> > > @@ -3900,7 +3899,8 @@ nfs4_laundromat(struct nfsd_net *nn)
> > > new_timeo = min(new_timeo, t);
> > > break;
> > > }
> > > - list_move(&dp->dl_recall_lru, &reaplist);
> > > + unhash_delegation_locked(dp);
> > > + list_add(&dp->dl_recall_lru, &reaplist);
> > > }
> > > spin_unlock(&state_lock);
> > > list_for_each_safe(pos, next, &reaplist) {
> > > @@ -5382,12 +5382,8 @@ static u64 nfsd_find_all_delegations(struct
> > > nfs4_client *clp, u64 max, if (dp->dl_time != 0)
> > > continue;
> > >
> > > - /*
> > > - * Increment dl_time to ensure that
> > > delegation breaks
> > > - * don't monkey with it now that we are.
> > > - */
> > > - ++dp->dl_time;
> > > - list_move(&dp->dl_recall_lru, victims);
> > > + unhash_delegation_locked(dp);
> > > + list_add(&dp->dl_recall_lru, victims);
> > > }
> > > if (++count == max)
> > > break;
> > > @@ -5642,12 +5638,14 @@ nfs4_state_shutdown_net(struct net *net)
> > > spin_lock(&state_lock);
> > > list_for_each_safe(pos, next, &nn->del_recall_lru) {
> > > dp = list_entry (pos, struct nfs4_delegation,
> > > dl_recall_lru);
> > > - list_move(&dp->dl_recall_lru, &reaplist);
> > > + unhash_delegation_locked(dp);
> > > + list_add(&dp->dl_recall_lru, &reaplist);
> > > }
> > > spin_unlock(&state_lock);
> > > list_for_each_safe(pos, next, &reaplist) {
> > > dp = list_entry (pos, struct nfs4_delegation,
> > > dl_recall_lru);
> > > - destroy_delegation(dp);
> > > + list_del_init(&dp->dl_recall_lru);
> > > + nfs4_put_delegation(dp);
> > > }
> > >
> > > nfsd4_client_tracking_exit(net);
> > > --
> > > 1.9.3
> > >
>
>
> --
> Jeff Layton <[email protected]>

2014-07-25 11:34:42

by Jeff Layton

Subject: [PATCH v6 1/9] nfsd: simplify stateid allocation and file handling

From: Trond Myklebust <[email protected]>

Don't allow stateids to clear the open file pointer until they are
being destroyed. In later patches we'll want to rely on the fact that
we have a valid file pointer when dealing with the stateid, and this
will save us from having to do a lot of NULL pointer checks before
doing so.

Also, move to allocating stateids with kzalloc and get rid of the
explicit zeroing of fields.
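
As a rough illustration (not part of the patch itself): kmem_cache_zalloc()
hands back an already-zeroed object, which is what makes the dropped field
initializations redundant.

---------------[snip]-----------------
/*
 * Illustrative only: kmem_cache_zalloc(slab, gfp) behaves like
 * kmem_cache_alloc(slab, gfp) followed by zeroing the whole object,
 * i.e. roughly
 *
 *	p = kmem_cache_alloc(slab, GFP_KERNEL);
 *	if (p)
 *		memset(p, 0, kmem_cache_size(slab));
 *
 * so explicit assignments such as "stid->sc_type = 0;" and
 * "dp->dl_time = 0;" can simply go away.
 */
---------------[snip]-----------------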

Signed-off-by: Trond Myklebust <[email protected]>
Signed-off-by: Jeff Layton <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
---
fs/nfsd/nfs4state.c | 22 ++++++++++------------
1 file changed, 10 insertions(+), 12 deletions(-)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 1dfc8ee85c93..fdbfbcb70914 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -470,7 +470,7 @@ kmem_cache *slab)
struct nfs4_stid *stid;
int new_id;

- stid = kmem_cache_alloc(slab, GFP_KERNEL);
+ stid = kmem_cache_zalloc(slab, GFP_KERNEL);
if (!stid)
return NULL;

@@ -478,11 +478,9 @@ kmem_cache *slab)
if (new_id < 0)
goto out_free;
stid->sc_client = cl;
- stid->sc_type = 0;
stid->sc_stateid.si_opaque.so_id = new_id;
stid->sc_stateid.si_opaque.so_clid = cl->cl_clientid;
/* Will be incremented before return to client: */
- stid->sc_stateid.si_generation = 0;
atomic_set(&stid->sc_count, 1);

/*
@@ -603,10 +601,8 @@ alloc_init_deleg(struct nfs4_client *clp, struct nfs4_ol_stateid *stp, struct sv
INIT_LIST_HEAD(&dp->dl_perfile);
INIT_LIST_HEAD(&dp->dl_perclnt);
INIT_LIST_HEAD(&dp->dl_recall_lru);
- dp->dl_file = NULL;
dp->dl_type = NFS4_OPEN_DELEGATE_READ;
fh_copy_shallow(&dp->dl_fh, &current_fh->fh_handle);
- dp->dl_time = 0;
INIT_WORK(&dp->dl_recall.cb_work, nfsd4_run_cb_recall);
return dp;
}
@@ -627,6 +623,8 @@ void
nfs4_put_delegation(struct nfs4_delegation *dp)
{
if (atomic_dec_and_test(&dp->dl_stid.sc_count)) {
+ if (dp->dl_file)
+ put_nfs4_file(dp->dl_file);
remove_stid(&dp->dl_stid);
nfs4_free_stid(deleg_slab, &dp->dl_stid);
num_delegations--;
@@ -678,13 +676,9 @@ unhash_delegation(struct nfs4_delegation *dp)
list_del_init(&dp->dl_recall_lru);
list_del_init(&dp->dl_perfile);
spin_unlock(&fp->fi_lock);
- if (fp) {
+ if (fp)
nfs4_put_deleg_lease(fp);
- dp->dl_file = NULL;
- }
spin_unlock(&state_lock);
- if (fp)
- put_nfs4_file(fp);
}

static void destroy_revoked_delegation(struct nfs4_delegation *dp)
@@ -892,12 +886,12 @@ static void unhash_generic_stateid(struct nfs4_ol_stateid *stp)
static void close_generic_stateid(struct nfs4_ol_stateid *stp)
{
release_all_access(stp);
- put_nfs4_file(stp->st_file);
- stp->st_file = NULL;
}

static void free_generic_stateid(struct nfs4_ol_stateid *stp)
{
+ if (stp->st_file)
+ put_nfs4_file(stp->st_file);
remove_stid(&stp->st_stid);
nfs4_free_stid(stateid_slab, &stp->st_stid);
}
@@ -4469,6 +4463,10 @@ static void nfsd4_close_open_stateid(struct nfs4_ol_stateid *s)
if (list_empty(&oo->oo_owner.so_stateids))
release_openowner(oo);
} else {
+ if (s->st_file) {
+ put_nfs4_file(s->st_file);
+ s->st_file = NULL;
+ }
oo->oo_last_closed_stid = s;
/*
* In the 4.0 case we need to keep the owners around a
--
1.9.3


2014-07-25 11:34:54

by Jeff Layton

Subject: [PATCH v6 9/9] nfsd: remove dl_fh field from struct nfs4_delegation

Now that the nfs4_file has a filehandle in it, we no longer need to
keep a per-delegation copy of it. Switch to using the one in the
nfs4_file instead.

Signed-off-by: Jeff Layton <[email protected]>
---
fs/nfsd/nfs4callback.c | 2 +-
fs/nfsd/nfs4state.c | 3 +--
fs/nfsd/state.h | 1 -
3 files changed, 2 insertions(+), 4 deletions(-)

diff --git a/fs/nfsd/nfs4callback.c b/fs/nfsd/nfs4callback.c
index c393d6ca3fce..e9813389687b 100644
--- a/fs/nfsd/nfs4callback.c
+++ b/fs/nfsd/nfs4callback.c
@@ -337,7 +337,7 @@ static void encode_cb_recall4args(struct xdr_stream *xdr,
p = xdr_reserve_space(xdr, 4);
*p++ = xdr_zero; /* truncate */

- encode_nfs_fh4(xdr, &dp->dl_fh);
+ encode_nfs_fh4(xdr, &dp->dl_file->fi_fhandle);

hdr->nops++;
}
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index ecfddca9b841..b0f83beeca75 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -604,7 +604,6 @@ alloc_init_deleg(struct nfs4_client *clp, struct svc_fh *current_fh)
INIT_LIST_HEAD(&dp->dl_perclnt);
INIT_LIST_HEAD(&dp->dl_recall_lru);
dp->dl_type = NFS4_OPEN_DELEGATE_READ;
- fh_copy_shallow(&dp->dl_fh, &current_fh->fh_handle);
INIT_WORK(&dp->dl_recall.cb_work, nfsd4_run_cb_recall);
return dp;
out_dec:
@@ -3097,7 +3096,7 @@ void nfsd4_prepare_cb_recall(struct nfs4_delegation *dp)
struct nfs4_client *clp = dp->dl_stid.sc_client;
struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);

- block_delegations(&dp->dl_fh);
+ block_delegations(&dp->dl_file->fi_fhandle);

/*
* We can't do this in nfsd_break_deleg_cb because it is
diff --git a/fs/nfsd/state.h b/fs/nfsd/state.h
index 0097d4771521..39747736e83b 100644
--- a/fs/nfsd/state.h
+++ b/fs/nfsd/state.h
@@ -96,7 +96,6 @@ struct nfs4_delegation {
u32 dl_type;
time_t dl_time;
/* For recall: */
- struct knfsd_fh dl_fh;
int dl_retries;
struct nfsd4_callback dl_recall;
};
--
1.9.3


2014-07-25 11:34:49

by Jeff Layton

Subject: [PATCH v6 5/9] nfsd: drop unused stp arg to alloc_init_deleg

Signed-off-by: Jeff Layton <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
---
fs/nfsd/nfs4state.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index b421b51c3a9e..049ef2ce72bf 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -579,7 +579,7 @@ static void block_delegations(struct knfsd_fh *fh)
}

static struct nfs4_delegation *
-alloc_init_deleg(struct nfs4_client *clp, struct nfs4_ol_stateid *stp, struct svc_fh *current_fh)
+alloc_init_deleg(struct nfs4_client *clp, struct svc_fh *current_fh)
{
struct nfs4_delegation *dp;
long n;
@@ -3649,7 +3649,7 @@ nfs4_open_delegation(struct net *net, struct svc_fh *fh,
default:
goto out_no_deleg;
}
- dp = alloc_init_deleg(oo->oo_owner.so_client, stp, fh);
+ dp = alloc_init_deleg(oo->oo_owner.so_client, fh);
if (dp == NULL)
goto out_no_deleg;
status = nfs4_set_delegation(dp, stp->st_file);
--
1.9.3


2014-07-25 11:34:53

by Jeff Layton

Subject: [PATCH v6 8/9] nfsd: give block_delegation and delegation_blocked its own spinlock

The state lock can be fairly heavily contended, and there's no reason
that nfs4_file lookups and delegation_blocked should be mutually
exclusive. Let's give the new block_delegation code its own spinlock.
It does mean that we'll need to take a different lock in the delegation
break code, but that's not generally as critical to performance.

Cc: Neil Brown <[email protected]>
Signed-off-by: Jeff Layton <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
---
fs/nfsd/nfs4state.c | 21 +++++++++++----------
1 file changed, 11 insertions(+), 10 deletions(-)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 85d7ac664691..ecfddca9b841 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -517,10 +517,11 @@ static struct nfs4_ol_stateid * nfs4_alloc_stateid(struct nfs4_client *clp)
* Each filter is 256 bits. We hash the filehandle to 32bit and use the
* low 3 bytes as hash-table indices.
*
- * 'state_lock', which is always held when block_delegations() is called,
+ * 'blocked_delegations_lock', which is always taken in block_delegations(),
* is used to manage concurrent access. Testing does not need the lock
* except when swapping the two filters.
*/
+static DEFINE_SPINLOCK(blocked_delegations_lock);
static struct bloom_pair {
int entries, old_entries;
time_t swap_time;
@@ -536,7 +537,7 @@ static int delegation_blocked(struct knfsd_fh *fh)
if (bd->entries == 0)
return 0;
if (seconds_since_boot() - bd->swap_time > 30) {
- spin_lock(&state_lock);
+ spin_lock(&blocked_delegations_lock);
if (seconds_since_boot() - bd->swap_time > 30) {
bd->entries -= bd->old_entries;
bd->old_entries = bd->entries;
@@ -545,7 +546,7 @@ static int delegation_blocked(struct knfsd_fh *fh)
bd->new = 1-bd->new;
bd->swap_time = seconds_since_boot();
}
- spin_unlock(&state_lock);
+ spin_unlock(&blocked_delegations_lock);
}
hash = arch_fast_hash(&fh->fh_base, fh->fh_size, 0);
if (test_bit(hash&255, bd->set[0]) &&
@@ -566,16 +567,16 @@ static void block_delegations(struct knfsd_fh *fh)
u32 hash;
struct bloom_pair *bd = &blocked_delegations;

- lockdep_assert_held(&state_lock);
-
hash = arch_fast_hash(&fh->fh_base, fh->fh_size, 0);

+ spin_lock(&blocked_delegations_lock);
__set_bit(hash&255, bd->set[bd->new]);
__set_bit((hash>>8)&255, bd->set[bd->new]);
__set_bit((hash>>16)&255, bd->set[bd->new]);
if (bd->entries == 0)
bd->swap_time = seconds_since_boot();
bd->entries += 1;
+ spin_unlock(&blocked_delegations_lock);
}

static struct nfs4_delegation *
@@ -3096,16 +3097,16 @@ void nfsd4_prepare_cb_recall(struct nfs4_delegation *dp)
struct nfs4_client *clp = dp->dl_stid.sc_client;
struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);

- /*
- * We can't do this in nfsd_break_deleg_cb because it is
- * already holding inode->i_lock
- */
- spin_lock(&state_lock);
block_delegations(&dp->dl_fh);
+
/*
+ * We can't do this in nfsd_break_deleg_cb because it is
+ * already holding inode->i_lock.
+ *
* If the dl_time != 0, then we know that it has already been
* queued for a lease break. Don't queue it again.
*/
+ spin_lock(&state_lock);
if (dp->dl_time == 0) {
dp->dl_time = get_seconds();
list_add_tail(&dp->dl_recall_lru, &nn->del_recall_lru);
--
1.9.3
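
For readers unfamiliar with the blocked-delegations filter the patch above
touches: each filter is a 256-bit bitmap, and the low three bytes of a
32-bit hash of the filehandle each select one bit. A rough, self-contained
userspace sketch of that indexing scheme follows (hypothetical names; the
kernel code uses arch_fast_hash() and __set_bit()/test_bit(), and ages the
two filters roughly every 30 seconds under the new spinlock):

---------------[snip]-----------------
#include <stdint.h>
#include <stdbool.h>

/* Two 256-bit filters, like the kernel's bloom_pair; entries age out of
 * the "old" filter when the pair is swapped. Names are illustrative. */
static unsigned char filter_new[256 / 8];
static unsigned char filter_old[256 / 8];

static void set_bit8(unsigned char *bm, unsigned int bit)
{
	bm[bit / 8] |= 1u << (bit % 8);
}

static bool test_bit8(const unsigned char *bm, unsigned int bit)
{
	return bm[bit / 8] & (1u << (bit % 8));
}

/* "block" a filehandle: set the three bits picked out by the low three
 * bytes of its hash */
static void block_fh(uint32_t hash)
{
	set_bit8(filter_new, hash & 255);
	set_bit8(filter_new, (hash >> 8) & 255);
	set_bit8(filter_new, (hash >> 16) & 255);
}

/* a hit requires all three bits set in the same filter; false positives
 * are possible, false negatives are not */
static bool fh_blocked(uint32_t hash)
{
	return (test_bit8(filter_new, hash & 255) &&
		test_bit8(filter_new, (hash >> 8) & 255) &&
		test_bit8(filter_new, (hash >> 16) & 255)) ||
	       (test_bit8(filter_old, hash & 255) &&
		test_bit8(filter_old, (hash >> 8) & 255) &&
		test_bit8(filter_old, (hash >> 16) & 255));
}
---------------[snip]-----------------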


2014-07-25 11:34:46

by Jeff Layton

Subject: [PATCH v6 3/9] nfsd: ensure that clp->cl_revoked list is protected by clp->cl_lock

Currently, both destroy_revoked_delegation and revoke_delegation
manipulate the cl_revoked list without any locking aside from the
client_mutex. Ensure that the clp->cl_lock is held when manipulating it,
except for the list walking in destroy_client. At that point, the client
should no longer be in use, and so it should be safe to walk the list
without any locking. That also means that we don't need to do the
list_splice_init there either.

Also, the fact that revoke_delegation deletes the dl_recall_lru list_head
without any locking makes it difficult to know whether it's doing so
safely in all cases. Move the list_del_init calls into the callers, and
add a WARN_ON in the event that it's passed a delegation that has a
non-empty list_head.
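
As background for that WARN_ON (a reminder, not patch text): list_del_init()
re-initializes the entry it removes, so a delegation that a caller has
already taken off a list satisfies list_empty().

---------------[snip]-----------------
	/*
	 * Illustrative only: list_del_init(&dp->dl_recall_lru) unlinks
	 * the entry and points it back at itself, so afterwards
	 * list_empty(&dp->dl_recall_lru) is true. That lets
	 * revoke_delegation() assert it was handed an already-unlinked
	 * delegation:
	 */
	WARN_ON(!list_empty(&dp->dl_recall_lru));
---------------[snip]-----------------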

Signed-off-by: Jeff Layton <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
---
fs/nfsd/nfs4state.c | 35 ++++++++++++++++++++---------------
1 file changed, 20 insertions(+), 15 deletions(-)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 618daa0d4109..9c912c004247 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -680,12 +680,6 @@ unhash_delegation_locked(struct nfs4_delegation *dp)
nfs4_put_deleg_lease(fp);
}

-static void destroy_revoked_delegation(struct nfs4_delegation *dp)
-{
- list_del_init(&dp->dl_recall_lru);
- nfs4_put_delegation(dp);
-}
-
static void destroy_delegation(struct nfs4_delegation *dp)
{
spin_lock(&state_lock);
@@ -698,11 +692,15 @@ static void revoke_delegation(struct nfs4_delegation *dp)
{
struct nfs4_client *clp = dp->dl_stid.sc_client;

+ WARN_ON(!list_empty(&dp->dl_recall_lru));
+
if (clp->cl_minorversion == 0)
- destroy_revoked_delegation(dp);
+ nfs4_put_delegation(dp);
else {
dp->dl_stid.sc_type = NFS4_REVOKED_DELEG_STID;
- list_move(&dp->dl_recall_lru, &clp->cl_revoked);
+ spin_lock(&clp->cl_lock);
+ list_add(&dp->dl_recall_lru, &clp->cl_revoked);
+ spin_unlock(&clp->cl_lock);
}
}

@@ -1467,10 +1465,10 @@ destroy_client(struct nfs4_client *clp)
list_del_init(&dp->dl_recall_lru);
nfs4_put_delegation(dp);
}
- list_splice_init(&clp->cl_revoked, &reaplist);
- while (!list_empty(&reaplist)) {
+ while (!list_empty(&clp->cl_revoked)) {
dp = list_entry(reaplist.next, struct nfs4_delegation, dl_recall_lru);
- destroy_revoked_delegation(dp);
+ list_del_init(&dp->dl_recall_lru);
+ nfs4_put_delegation(dp);
}
while (!list_empty(&clp->cl_openowners)) {
oo = list_entry(clp->cl_openowners.next, struct nfs4_openowner, oo_perclient);
@@ -3903,8 +3901,10 @@ nfs4_laundromat(struct nfsd_net *nn)
list_add(&dp->dl_recall_lru, &reaplist);
}
spin_unlock(&state_lock);
- list_for_each_safe(pos, next, &reaplist) {
- dp = list_entry (pos, struct nfs4_delegation, dl_recall_lru);
+ while (!list_empty(&reaplist)) {
+ dp = list_first_entry(&reaplist, struct nfs4_delegation,
+ dl_recall_lru);
+ list_del_init(&dp->dl_recall_lru);
revoke_delegation(dp);
}
list_for_each_safe(pos, next, &nn->close_lru) {
@@ -4248,7 +4248,10 @@ nfsd4_free_stateid(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
break;
case NFS4_REVOKED_DELEG_STID:
dp = delegstateid(s);
- destroy_revoked_delegation(dp);
+ spin_lock(&cl->cl_lock);
+ list_del_init(&dp->dl_recall_lru);
+ spin_unlock(&cl->cl_lock);
+ nfs4_put_delegation(dp);
ret = nfs_ok;
break;
default:
@@ -5401,8 +5404,10 @@ u64 nfsd_forget_client_delegations(struct nfs4_client *clp, u64 max)
count = nfsd_find_all_delegations(clp, max, &victims);
spin_unlock(&state_lock);

- list_for_each_entry_safe(dp, next, &victims, dl_recall_lru)
+ list_for_each_entry_safe(dp, next, &victims, dl_recall_lru) {
+ list_del_init(&dp->dl_recall_lru);
revoke_delegation(dp);
+ }

return count;
}
--
1.9.3


2014-07-29 19:00:03

by J. Bruce Fields

Subject: Re: [PATCH v6 0/9] nfsd: more delegation fixes to prepare for client_mutex removal

OK, these look good to me, thanks! Applied.

--b.

On Fri, Jul 25, 2014 at 07:34:18AM -0400, Jeff Layton wrote:
> v6:
> - rebase on top of patches that change how struct nfs4_file is hashed
>
> - add patch to remove the dl_fh field from struct nfs4_delegation
>
> - fix a bug in forget_delegations fault injector
>
> - update some changelogs to be more descriptive
>
> v5:
> - fix spinlocking in block_delegations. Lock should be held around all
> of the set_bit calls so we don't race with the swap of the two
> fields.
>
> - eliminate destroy_revoked_delegation (just use nfs4_put_delegation)
>
> - eliminate unneeded NULL pointer check in nfs4_setlease
>
> v4:
> - close more potential races in setlease code, and fix some bugs in
> error handling in that code.
>
> - clean up delegation setting functions, eliminating unused arguments
> and avoiding allocations when there has already been a delegation
> break
>
> - add separate spinlock for block_delegations/delegation_blocked code
>
> v3:
> - use alternate method for checking for delegation break races after
> getting a lease (just check fi_had_conflict instead)
>
> - drop file_has_lease patch -- no longer needed
>
> - move cl_revoked handling patch into this set. It means altering a
> few of the later patches, but it keeps the set more topically
> coherent
>
> v2:
> - move remove_stid call from nfs4_free_stid and into callers
>
> Hopefully this series won't need much introduction now. This is the
> delegation rework to prepare for client_mutex removal, now rebased on
> top of the changes to nfs4_file hashing.
>
> Jeff Layton (7):
> nfsd: fully unhash delegations when revoking them
> nfsd: ensure that clp->cl_revoked list is protected by clp->cl_lock
> nfsd: drop unused stp arg to alloc_init_deleg
> nfsd: clean up arguments to nfs4_open_delegation
> nfsd: clean up nfs4_set_delegation
> nfsd: give block_delegation and delegation_blocked its own spinlock
> nfsd: remove dl_fh field from struct nfs4_delegation
>
> Trond Myklebust (2):
> nfsd: simplify stateid allocation and file handling
> nfsd: Convert delegation counter to an atomic_long_t type
>
> fs/nfsd/nfs4callback.c | 2 +-
> fs/nfsd/nfs4state.c | 180 +++++++++++++++++++++++++++----------------------
> fs/nfsd/state.h | 1 -
> 3 files changed, 99 insertions(+), 84 deletions(-)
>
> --
> 1.9.3
>

2014-07-25 11:34:47

by Jeff Layton

Subject: [PATCH v6 4/9] nfsd: Convert delegation counter to an atomic_long_t type

From: Trond Myklebust <[email protected]>

We want to convert to an atomic type so that we don't need to lock
across the call to alloc_init_deleg(). Then convert to a long type so
that we match the size of 'max_delegations'.

None of this is a problem today, but it will be once we remove
client_mutex protection.
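
A rough sketch of why this matters once the mutex goes away (illustrative,
not part of the patch):

---------------[snip]-----------------
/*
 * With a plain int counter, the limit check and the increment are two
 * separate steps, so once client_mutex no longer serializes callers,
 * two tasks can both see "under the limit" and both increment:
 *
 *	if (num_delegations > max_delegations)	// both pass the check
 *		return NULL;
 *	...
 *	num_delegations++;			// limit overshot
 *
 * With the atomic version, the increment and the value observed are a
 * single step, and a failing caller only has to undo its own increment:
 *
 *	n = atomic_long_inc_return(&num_delegations);
 *	if (n < 0 || n > max_delegations)
 *		goto out_dec;			// atomic_long_dec() here
 */
---------------[snip]-----------------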

Signed-off-by: Trond Myklebust <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
---
fs/nfsd/nfs4state.c | 18 +++++++++++-------
1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 9c912c004247..b421b51c3a9e 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -342,7 +342,7 @@ find_any_file(struct nfs4_file *f)
return ret;
}

-static int num_delegations;
+static atomic_long_t num_delegations;
unsigned long max_delegations;

/*
@@ -582,22 +582,23 @@ static struct nfs4_delegation *
alloc_init_deleg(struct nfs4_client *clp, struct nfs4_ol_stateid *stp, struct svc_fh *current_fh)
{
struct nfs4_delegation *dp;
+ long n;

dprintk("NFSD alloc_init_deleg\n");
- if (num_delegations > max_delegations)
- return NULL;
+ n = atomic_long_inc_return(&num_delegations);
+ if (n < 0 || n > max_delegations)
+ goto out_dec;
if (delegation_blocked(&current_fh->fh_handle))
- return NULL;
+ goto out_dec;
dp = delegstateid(nfs4_alloc_stid(clp, deleg_slab));
if (dp == NULL)
- return dp;
+ goto out_dec;
/*
* delegation seqid's are never incremented. The 4.1 special
* meaning of seqid 0 isn't meaningful, really, but let's avoid
* 0 anyway just for consistency and use 1:
*/
dp->dl_stid.sc_stateid.si_generation = 1;
- num_delegations++;
INIT_LIST_HEAD(&dp->dl_perfile);
INIT_LIST_HEAD(&dp->dl_perclnt);
INIT_LIST_HEAD(&dp->dl_recall_lru);
@@ -605,6 +606,9 @@ alloc_init_deleg(struct nfs4_client *clp, struct nfs4_ol_stateid *stp, struct sv
fh_copy_shallow(&dp->dl_fh, &current_fh->fh_handle);
INIT_WORK(&dp->dl_recall.cb_work, nfsd4_run_cb_recall);
return dp;
+out_dec:
+ atomic_long_dec(&num_delegations);
+ return NULL;
}

static void remove_stid(struct nfs4_stid *s)
@@ -627,7 +631,7 @@ nfs4_put_delegation(struct nfs4_delegation *dp)
put_nfs4_file(dp->dl_file);
remove_stid(&dp->dl_stid);
nfs4_free_stid(deleg_slab, &dp->dl_stid);
- num_delegations--;
+ atomic_long_dec(&num_delegations);
}
}

--
1.9.3


2014-07-25 11:34:50

by Jeff Layton

Subject: [PATCH v6 6/9] nfsd: clean up arguments to nfs4_open_delegation

No need to pass in a net pointer since we can derive that.

Signed-off-by: Jeff Layton <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
---
fs/nfsd/nfs4state.c | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 049ef2ce72bf..24065e1b2bb2 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -3607,11 +3607,12 @@ static void nfsd4_open_deleg_none_ext(struct nfsd4_open *open, int status)
* proper support for them.
*/
static void
-nfs4_open_delegation(struct net *net, struct svc_fh *fh,
- struct nfsd4_open *open, struct nfs4_ol_stateid *stp)
+nfs4_open_delegation(struct svc_fh *fh, struct nfsd4_open *open,
+ struct nfs4_ol_stateid *stp)
{
struct nfs4_delegation *dp;
- struct nfs4_openowner *oo = container_of(stp->st_stateowner, struct nfs4_openowner, oo_owner);
+ struct nfs4_openowner *oo = openowner(stp->st_stateowner);
+ struct nfs4_client *clp = stp->st_stid.sc_client;
int cb_up;
int status = 0;

@@ -3630,7 +3631,7 @@ nfs4_open_delegation(struct net *net, struct svc_fh *fh,
* Let's not give out any delegations till everyone's
* had the chance to reclaim theirs....
*/
- if (locks_in_grace(net))
+ if (locks_in_grace(clp->net))
goto out_no_deleg;
if (!cb_up || !(oo->oo_flags & NFS4_OO_CONFIRMED))
goto out_no_deleg;
@@ -3649,7 +3650,7 @@ nfs4_open_delegation(struct net *net, struct svc_fh *fh,
default:
goto out_no_deleg;
}
- dp = alloc_init_deleg(oo->oo_owner.so_client, fh);
+ dp = alloc_init_deleg(clp, fh);
if (dp == NULL)
goto out_no_deleg;
status = nfs4_set_delegation(dp, stp->st_file);
@@ -3762,7 +3763,7 @@ nfsd4_process_open2(struct svc_rqst *rqstp, struct svc_fh *current_fh, struct nf
* Attempt to hand out a delegation. No error return, because the
* OPEN succeeds even if we fail.
*/
- nfs4_open_delegation(SVC_NET(rqstp), current_fh, open, stp);
+ nfs4_open_delegation(current_fh, open, stp);
nodeleg:
status = nfs_ok;

--
1.9.3


2014-07-25 11:34:51

by Jeff Layton

Subject: [PATCH v6 7/9] nfsd: clean up nfs4_set_delegation

Move the alloc_init_deleg call into nfs4_set_delegation and change the
function to return a pointer to the delegation or an ERR_PTR()-encoded error. This
allows us to skip allocating a delegation if the file has already
experienced a lease conflict.
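
For context, the caller-side shape this implies (it mirrors the
nfs4_open_delegation() hunk later in the patch; PTR_ERR() recovers the
errno from the ERR_PTR()-encoded return):

---------------[snip]-----------------
	dp = nfs4_set_delegation(clp, fh, stp->st_file);
	if (IS_ERR(dp))
		goto out_no_deleg;	/* PTR_ERR(dp) is -EAGAIN, -ENOMEM, ... */
	/* success: dp points to a hashed, referenced delegation */
---------------[snip]-----------------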

Signed-off-by: Jeff Layton <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
---
fs/nfsd/nfs4state.c | 34 ++++++++++++++++++++++------------
1 file changed, 22 insertions(+), 12 deletions(-)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 24065e1b2bb2..85d7ac664691 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -3552,12 +3552,20 @@ out_fput:
return status;
}

-static int nfs4_set_delegation(struct nfs4_delegation *dp, struct nfs4_file *fp)
+static struct nfs4_delegation *
+nfs4_set_delegation(struct nfs4_client *clp, struct svc_fh *fh,
+ struct nfs4_file *fp)
{
- int status = 0;
+ int status;
+ struct nfs4_delegation *dp;

if (fp->fi_had_conflict)
- return -EAGAIN;
+ return ERR_PTR(-EAGAIN);
+
+ dp = alloc_init_deleg(clp, fh);
+ if (!dp)
+ return ERR_PTR(-ENOMEM);
+
get_nfs4_file(fp);
spin_lock(&state_lock);
spin_lock(&fp->fi_lock);
@@ -3565,7 +3573,8 @@ static int nfs4_set_delegation(struct nfs4_delegation *dp, struct nfs4_file *fp)
if (!fp->fi_lease) {
spin_unlock(&fp->fi_lock);
spin_unlock(&state_lock);
- return nfs4_setlease(dp);
+ status = nfs4_setlease(dp);
+ goto out;
}
atomic_inc(&fp->fi_delegees);
if (fp->fi_had_conflict) {
@@ -3573,10 +3582,16 @@ static int nfs4_set_delegation(struct nfs4_delegation *dp, struct nfs4_file *fp)
goto out_unlock;
}
hash_delegation_locked(dp, fp);
+ status = 0;
out_unlock:
spin_unlock(&fp->fi_lock);
spin_unlock(&state_lock);
- return status;
+out:
+ if (status) {
+ nfs4_put_delegation(dp);
+ return ERR_PTR(status);
+ }
+ return dp;
}

static void nfsd4_open_deleg_none_ext(struct nfsd4_open *open, int status)
@@ -3650,12 +3665,9 @@ nfs4_open_delegation(struct svc_fh *fh, struct nfsd4_open *open,
default:
goto out_no_deleg;
}
- dp = alloc_init_deleg(clp, fh);
- if (dp == NULL)
+ dp = nfs4_set_delegation(clp, fh, stp->st_file);
+ if (IS_ERR(dp))
goto out_no_deleg;
- status = nfs4_set_delegation(dp, stp->st_file);
- if (status)
- goto out_free;

memcpy(&open->op_delegate_stateid, &dp->dl_stid.sc_stateid, sizeof(dp->dl_stid.sc_stateid));

@@ -3663,8 +3675,6 @@ nfs4_open_delegation(struct svc_fh *fh, struct nfsd4_open *open,
STATEID_VAL(&dp->dl_stid.sc_stateid));
open->op_delegate_type = NFS4_OPEN_DELEGATE_READ;
return;
-out_free:
- nfs4_put_delegation(dp);
out_no_deleg:
open->op_delegate_type = NFS4_OPEN_DELEGATE_NONE;
if (open->op_claim_type == NFS4_OPEN_CLAIM_PREVIOUS &&
--
1.9.3


2014-07-29 14:53:35

by J. Bruce Fields

Subject: Re: [PATCH v6 2/9] nfsd: fully unhash delegations when revoking them

On Fri, Jul 25, 2014 at 07:34:20AM -0400, Jeff Layton wrote:
> @@ -698,11 +699,10 @@ static void revoke_delegation(struct nfs4_delegation *dp)
> struct nfs4_client *clp = dp->dl_stid.sc_client;
>
> if (clp->cl_minorversion == 0)
> - destroy_delegation(dp);
> + destroy_revoked_delegation(dp);

I don't understand this one--in the NFSv4.0 case, when is the delegation
unhashed?

(And, apologies if that's a repeat question. I remember wondering about
this before but can't find mail in the archives....)

--b.

> else {
> - unhash_delegation(dp);
> dp->dl_stid.sc_type = NFS4_REVOKED_DELEG_STID;
> - list_add(&dp->dl_recall_lru, &clp->cl_revoked);
> + list_move(&dp->dl_recall_lru, &clp->cl_revoked);
> }
> }
>
> @@ -1458,15 +1458,14 @@ destroy_client(struct nfs4_client *clp)
> spin_lock(&state_lock);
> while (!list_empty(&clp->cl_delegations)) {
> dp = list_entry(clp->cl_delegations.next, struct nfs4_delegation, dl_perclnt);
> - list_del_init(&dp->dl_perclnt);
> - /* Ensure that deleg break won't try to requeue it */
> - ++dp->dl_time;
> - list_move(&dp->dl_recall_lru, &reaplist);
> + unhash_delegation_locked(dp);
> + list_add(&dp->dl_recall_lru, &reaplist);
> }
> spin_unlock(&state_lock);
> while (!list_empty(&reaplist)) {
> dp = list_entry(reaplist.next, struct nfs4_delegation, dl_recall_lru);
> - destroy_delegation(dp);
> + list_del_init(&dp->dl_recall_lru);
> + nfs4_put_delegation(dp);
> }
> list_splice_init(&clp->cl_revoked, &reaplist);
> while (!list_empty(&reaplist)) {
> @@ -3662,7 +3661,7 @@ nfs4_open_delegation(struct net *net, struct svc_fh *fh,
> open->op_delegate_type = NFS4_OPEN_DELEGATE_READ;
> return;
> out_free:
> - destroy_delegation(dp);
> + nfs4_put_delegation(dp);
> out_no_deleg:
> open->op_delegate_type = NFS4_OPEN_DELEGATE_NONE;
> if (open->op_claim_type == NFS4_OPEN_CLAIM_PREVIOUS &&
> @@ -3900,7 +3899,8 @@ nfs4_laundromat(struct nfsd_net *nn)
> new_timeo = min(new_timeo, t);
> break;
> }
> - list_move(&dp->dl_recall_lru, &reaplist);
> + unhash_delegation_locked(dp);
> + list_add(&dp->dl_recall_lru, &reaplist);
> }
> spin_unlock(&state_lock);
> list_for_each_safe(pos, next, &reaplist) {
> @@ -5382,12 +5382,8 @@ static u64 nfsd_find_all_delegations(struct nfs4_client *clp, u64 max,
> if (dp->dl_time != 0)
> continue;
>
> - /*
> - * Increment dl_time to ensure that delegation breaks
> - * don't monkey with it now that we are.
> - */
> - ++dp->dl_time;
> - list_move(&dp->dl_recall_lru, victims);
> + unhash_delegation_locked(dp);
> + list_add(&dp->dl_recall_lru, victims);
> }
> if (++count == max)
> break;
> @@ -5642,12 +5638,14 @@ nfs4_state_shutdown_net(struct net *net)
> spin_lock(&state_lock);
> list_for_each_safe(pos, next, &nn->del_recall_lru) {
> dp = list_entry (pos, struct nfs4_delegation, dl_recall_lru);
> - list_move(&dp->dl_recall_lru, &reaplist);
> + unhash_delegation_locked(dp);
> + list_add(&dp->dl_recall_lru, &reaplist);
> }
> spin_unlock(&state_lock);
> list_for_each_safe(pos, next, &reaplist) {
> dp = list_entry (pos, struct nfs4_delegation, dl_recall_lru);
> - destroy_delegation(dp);
> + list_del_init(&dp->dl_recall_lru);
> + nfs4_put_delegation(dp);
> }
>
> nfsd4_client_tracking_exit(net);
> --
> 1.9.3
>

2014-07-25 11:34:44

by Jeff Layton

Subject: [PATCH v6 2/9] nfsd: fully unhash delegations when revoking them

Ensure that the delegations cannot be found by the laundromat etc. once
we add them to the various 'revoke' lists.

Signed-off-by: Trond Myklebust <[email protected]>
Signed-off-by: Jeff Layton <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
---
fs/nfsd/nfs4state.c | 44 +++++++++++++++++++++-----------------------
1 file changed, 21 insertions(+), 23 deletions(-)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index fdbfbcb70914..618daa0d4109 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -661,13 +661,13 @@ hash_delegation_locked(struct nfs4_delegation *dp, struct nfs4_file *fp)
list_add(&dp->dl_perclnt, &dp->dl_stid.sc_client->cl_delegations);
}

-/* Called under the state lock. */
static void
-unhash_delegation(struct nfs4_delegation *dp)
+unhash_delegation_locked(struct nfs4_delegation *dp)
{
struct nfs4_file *fp = dp->dl_file;

- spin_lock(&state_lock);
+ lockdep_assert_held(&state_lock);
+
dp->dl_stid.sc_type = NFS4_CLOSED_DELEG_STID;
/* Ensure that deleg break won't try to requeue it */
++dp->dl_time;
@@ -678,7 +678,6 @@ unhash_delegation(struct nfs4_delegation *dp)
spin_unlock(&fp->fi_lock);
if (fp)
nfs4_put_deleg_lease(fp);
- spin_unlock(&state_lock);
}

static void destroy_revoked_delegation(struct nfs4_delegation *dp)
@@ -689,7 +688,9 @@ static void destroy_revoked_delegation(struct nfs4_delegation *dp)

static void destroy_delegation(struct nfs4_delegation *dp)
{
- unhash_delegation(dp);
+ spin_lock(&state_lock);
+ unhash_delegation_locked(dp);
+ spin_unlock(&state_lock);
nfs4_put_delegation(dp);
}

@@ -698,11 +699,10 @@ static void revoke_delegation(struct nfs4_delegation *dp)
struct nfs4_client *clp = dp->dl_stid.sc_client;

if (clp->cl_minorversion == 0)
- destroy_delegation(dp);
+ destroy_revoked_delegation(dp);
else {
- unhash_delegation(dp);
dp->dl_stid.sc_type = NFS4_REVOKED_DELEG_STID;
- list_add(&dp->dl_recall_lru, &clp->cl_revoked);
+ list_move(&dp->dl_recall_lru, &clp->cl_revoked);
}
}

@@ -1458,15 +1458,14 @@ destroy_client(struct nfs4_client *clp)
spin_lock(&state_lock);
while (!list_empty(&clp->cl_delegations)) {
dp = list_entry(clp->cl_delegations.next, struct nfs4_delegation, dl_perclnt);
- list_del_init(&dp->dl_perclnt);
- /* Ensure that deleg break won't try to requeue it */
- ++dp->dl_time;
- list_move(&dp->dl_recall_lru, &reaplist);
+ unhash_delegation_locked(dp);
+ list_add(&dp->dl_recall_lru, &reaplist);
}
spin_unlock(&state_lock);
while (!list_empty(&reaplist)) {
dp = list_entry(reaplist.next, struct nfs4_delegation, dl_recall_lru);
- destroy_delegation(dp);
+ list_del_init(&dp->dl_recall_lru);
+ nfs4_put_delegation(dp);
}
list_splice_init(&clp->cl_revoked, &reaplist);
while (!list_empty(&reaplist)) {
@@ -3662,7 +3661,7 @@ nfs4_open_delegation(struct net *net, struct svc_fh *fh,
open->op_delegate_type = NFS4_OPEN_DELEGATE_READ;
return;
out_free:
- destroy_delegation(dp);
+ nfs4_put_delegation(dp);
out_no_deleg:
open->op_delegate_type = NFS4_OPEN_DELEGATE_NONE;
if (open->op_claim_type == NFS4_OPEN_CLAIM_PREVIOUS &&
@@ -3900,7 +3899,8 @@ nfs4_laundromat(struct nfsd_net *nn)
new_timeo = min(new_timeo, t);
break;
}
- list_move(&dp->dl_recall_lru, &reaplist);
+ unhash_delegation_locked(dp);
+ list_add(&dp->dl_recall_lru, &reaplist);
}
spin_unlock(&state_lock);
list_for_each_safe(pos, next, &reaplist) {
@@ -5382,12 +5382,8 @@ static u64 nfsd_find_all_delegations(struct nfs4_client *clp, u64 max,
if (dp->dl_time != 0)
continue;

- /*
- * Increment dl_time to ensure that delegation breaks
- * don't monkey with it now that we are.
- */
- ++dp->dl_time;
- list_move(&dp->dl_recall_lru, victims);
+ unhash_delegation_locked(dp);
+ list_add(&dp->dl_recall_lru, victims);
}
if (++count == max)
break;
@@ -5642,12 +5638,14 @@ nfs4_state_shutdown_net(struct net *net)
spin_lock(&state_lock);
list_for_each_safe(pos, next, &nn->del_recall_lru) {
dp = list_entry (pos, struct nfs4_delegation, dl_recall_lru);
- list_move(&dp->dl_recall_lru, &reaplist);
+ unhash_delegation_locked(dp);
+ list_add(&dp->dl_recall_lru, &reaplist);
}
spin_unlock(&state_lock);
list_for_each_safe(pos, next, &reaplist) {
dp = list_entry (pos, struct nfs4_delegation, dl_recall_lru);
- destroy_delegation(dp);
+ list_del_init(&dp->dl_recall_lru);
+ nfs4_put_delegation(dp);
}

nfsd4_client_tracking_exit(net);
--
1.9.3


2014-07-29 16:41:46

by Jeff Layton

Subject: Re: [PATCH v6 2/9] nfsd: fully unhash delegations when revoking them

On Tue, 29 Jul 2014 10:53:31 -0400
"J. Bruce Fields" <[email protected]> wrote:

> On Fri, Jul 25, 2014 at 07:34:20AM -0400, Jeff Layton wrote:
> > @@ -698,11 +699,10 @@ static void revoke_delegation(struct nfs4_delegation *dp)
> > struct nfs4_client *clp = dp->dl_stid.sc_client;
> >
> > if (clp->cl_minorversion == 0)
> > - destroy_delegation(dp);
> > + destroy_revoked_delegation(dp);
>
> I don't understand this one--in the NFSv4.0 case, when is the delegation
> unhashed?
>
> (And, apologies if that's a repeat question. I remember wondering about
> this before but can't find mail in the archives....)
>
> --b.
>

It's unhashed by the laundromat (or the fault injection code). The
deltas that change that are later in this patch, but it doesn't have
much context so it's a little tricky to spot. Here's the relevant code
chunk from nfs4_laundromat:

---------------[snip]-----------------
spin_lock(&state_lock);
list_for_each_safe(pos, next, &nn->del_recall_lru) {
dp = list_entry (pos, struct nfs4_delegation, dl_recall_lru);
if (net_generic(dp->dl_stid.sc_client->net, nfsd_net_id) != nn)
continue;
if (time_after((unsigned long)dp->dl_time, (unsigned long)cutoff)) {
t = dp->dl_time - cutoff;
new_timeo = min(new_timeo, t);
break;
}
unhash_delegation_locked(dp);
list_add(&dp->dl_recall_lru, &reaplist);
}
spin_unlock(&state_lock);
list_for_each_safe(pos, next, &reaplist) {
dp = list_entry (pos, struct nfs4_delegation, dl_recall_lru);
revoke_delegation(dp);
}
---------------[snip]-----------------

...so the laundromat walks the nn->del_recall_lru list, unhashes any
delegation that has expired and gathers them up onto the list. Then, it
calls revoke_delegation on each outside of the lock.

As a side note, that net_generic comparison there looks bogus to me.
How could you ever end up with an entry from a different namespace on
this list? I left it in for now since it didn't seem relevant (or
terribly expensive), but I suspect that it can be removed (or turned
into a WARN or something).

> > else {
> > - unhash_delegation(dp);
> > dp->dl_stid.sc_type = NFS4_REVOKED_DELEG_STID;
> > - list_add(&dp->dl_recall_lru, &clp->cl_revoked);
> > + list_move(&dp->dl_recall_lru, &clp->cl_revoked);
> > }
> > }
> >
> > @@ -1458,15 +1458,14 @@ destroy_client(struct nfs4_client *clp)
> > spin_lock(&state_lock);
> > while (!list_empty(&clp->cl_delegations)) {
> > dp = list_entry(clp->cl_delegations.next, struct
> > nfs4_delegation, dl_perclnt);
> > - list_del_init(&dp->dl_perclnt);
> > - /* Ensure that deleg break won't try to requeue it
> > */
> > - ++dp->dl_time;
> > - list_move(&dp->dl_recall_lru, &reaplist);
> > + unhash_delegation_locked(dp);
> > + list_add(&dp->dl_recall_lru, &reaplist);
> > }
> > spin_unlock(&state_lock);
> > while (!list_empty(&reaplist)) {
> > dp = list_entry(reaplist.next, struct
> > nfs4_delegation, dl_recall_lru);
> > - destroy_delegation(dp);
> > + list_del_init(&dp->dl_recall_lru);
> > + nfs4_put_delegation(dp);
> > }
> > list_splice_init(&clp->cl_revoked, &reaplist);
> > while (!list_empty(&reaplist)) {
> > @@ -3662,7 +3661,7 @@ nfs4_open_delegation(struct net *net, struct
> > svc_fh *fh, open->op_delegate_type = NFS4_OPEN_DELEGATE_READ;
> > return;
> > out_free:
> > - destroy_delegation(dp);
> > + nfs4_put_delegation(dp);
> > out_no_deleg:
> > open->op_delegate_type = NFS4_OPEN_DELEGATE_NONE;
> > if (open->op_claim_type == NFS4_OPEN_CLAIM_PREVIOUS &&
> > @@ -3900,7 +3899,8 @@ nfs4_laundromat(struct nfsd_net *nn)
> > new_timeo = min(new_timeo, t);
> > break;
> > }
> > - list_move(&dp->dl_recall_lru, &reaplist);
> > + unhash_delegation_locked(dp);
> > + list_add(&dp->dl_recall_lru, &reaplist);
> > }
> > spin_unlock(&state_lock);
> > list_for_each_safe(pos, next, &reaplist) {
> > @@ -5382,12 +5382,8 @@ static u64 nfsd_find_all_delegations(struct
> > nfs4_client *clp, u64 max, if (dp->dl_time != 0)
> > continue;
> >
> > - /*
> > - * Increment dl_time to ensure that
> > delegation breaks
> > - * don't monkey with it now that we are.
> > - */
> > - ++dp->dl_time;
> > - list_move(&dp->dl_recall_lru, victims);
> > + unhash_delegation_locked(dp);
> > + list_add(&dp->dl_recall_lru, victims);
> > }
> > if (++count == max)
> > break;
> > @@ -5642,12 +5638,14 @@ nfs4_state_shutdown_net(struct net *net)
> > spin_lock(&state_lock);
> > list_for_each_safe(pos, next, &nn->del_recall_lru) {
> > dp = list_entry (pos, struct nfs4_delegation,
> > dl_recall_lru);
> > - list_move(&dp->dl_recall_lru, &reaplist);
> > + unhash_delegation_locked(dp);
> > + list_add(&dp->dl_recall_lru, &reaplist);
> > }
> > spin_unlock(&state_lock);
> > list_for_each_safe(pos, next, &reaplist) {
> > dp = list_entry (pos, struct nfs4_delegation,
> > dl_recall_lru);
> > - destroy_delegation(dp);
> > + list_del_init(&dp->dl_recall_lru);
> > + nfs4_put_delegation(dp);
> > }
> >
> > nfsd4_client_tracking_exit(net);
> > --
> > 1.9.3
> >


--
Jeff Layton <[email protected]>