- Client should not declare unreclaimed locks to be stale just because
SETCLIENTID_CONFIRM or CREATE_SESSION fails with NFS4ERR_STALE_CLIENTID
- Fix open/state recovery error handling
Trond Myklebust (2):
NFSv4: Fix lock recovery when CREATE_SESSION/SETCLIENTID_CONFIRM fails
NFSv4: fix open/lock state recovery error handling
fs/nfs/nfs4state.c | 17 ++++++-----------
1 file changed, 6 insertions(+), 11 deletions(-)
--
1.9.3
The current open/lock state recovery unfortunately does not handle errors
such as NFS4ERR_CONN_NOT_BOUND_TO_SESSION correctly. Instead of looping,
it just proceeds as if the state manager is finished recovering.
This patch ensures that we loop back, handle higher priority errors
and complete the open/lock state recovery.
Cc: [email protected]
Signed-off-by: Trond Myklebust <[email protected]>
---
fs/nfs/nfs4state.c | 16 ++++++----------
1 file changed, 6 insertions(+), 10 deletions(-)
diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
index 26d510d11efd..87b2d0e79797 100644
--- a/fs/nfs/nfs4state.c
+++ b/fs/nfs/nfs4state.c
@@ -1705,7 +1705,8 @@ restart:
if (status < 0) {
set_bit(ops->owner_flag_bit, &sp->so_flags);
nfs4_put_state_owner(sp);
- return nfs4_recovery_handle_error(clp, status);
+ status = nfs4_recovery_handle_error(clp, status);
+ return (status != 0) ? status : -EAGAIN;
}
nfs4_put_state_owner(sp);
@@ -1714,7 +1715,7 @@ restart:
spin_unlock(&clp->cl_lock);
}
rcu_read_unlock();
- return status;
+ return 0;
}
static int nfs4_check_lease(struct nfs_client *clp)
@@ -2365,14 +2366,11 @@ static void nfs4_state_manager(struct nfs_client *clp)
section = "reclaim reboot";
status = nfs4_do_reclaim(clp,
clp->cl_mvops->reboot_recovery_ops);
- if (test_bit(NFS4CLNT_LEASE_EXPIRED, &clp->cl_state) ||
- test_bit(NFS4CLNT_SESSION_RESET, &clp->cl_state))
- continue;
- nfs4_state_end_reclaim_reboot(clp);
- if (test_bit(NFS4CLNT_RECLAIM_NOGRACE, &clp->cl_state))
+ if (status == -EAGAIN)
continue;
if (status < 0)
goto out_error;
+ nfs4_state_end_reclaim_reboot(clp);
}
/* Now recover expired state... */
@@ -2380,9 +2378,7 @@ static void nfs4_state_manager(struct nfs_client *clp)
section = "reclaim nograce";
status = nfs4_do_reclaim(clp,
clp->cl_mvops->nograce_recovery_ops);
- if (test_bit(NFS4CLNT_LEASE_EXPIRED, &clp->cl_state) ||
- test_bit(NFS4CLNT_SESSION_RESET, &clp->cl_state) ||
- test_bit(NFS4CLNT_RECLAIM_REBOOT, &clp->cl_state))
+ if (status == -EAGAIN)
continue;
if (status < 0)
goto out_error;
--
1.9.3
If an NFSv4.x server returns NFS4ERR_STALE_CLIENTID in response to a
CREATE_SESSION or SETCLIENTID_CONFIRM in order to tell us that it rebooted
a second time, then the client will currently take this to mean that it must
declare all locks to be stale, and hence ineligible for reboot recovery.
RFC3530 and RFC5661 both suggest that the client should instead rely on the
server to respond to ineligible open share, lock and delegation reclaim
requests with NFS4ERR_NO_GRACE in this situation.
Cc: [email protected]
Signed-off-by: Trond Myklebust <[email protected]>
---
fs/nfs/nfs4state.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
index 22fe35104c0c..26d510d11efd 100644
--- a/fs/nfs/nfs4state.c
+++ b/fs/nfs/nfs4state.c
@@ -1761,7 +1761,6 @@ static int nfs4_handle_reclaim_lease_error(struct nfs_client *clp, int status)
break;
case -NFS4ERR_STALE_CLIENTID:
clear_bit(NFS4CLNT_LEASE_CONFIRM, &clp->cl_state);
- nfs4_state_clear_reclaim_reboot(clp);
nfs4_state_start_reclaim_reboot(clp);
break;
case -NFS4ERR_CLID_INUSE:
--
1.9.3
On Wed, Oct 1, 2014 at 9:57 AM, Anna Schumaker
<[email protected]> wrote:
> On 09/27/2014 11:54 PM, Trond Myklebust wrote:
>> The current open/lock state recovery unfortunately does not handle errors
>> such as NFS4ERR_CONN_NOT_BOUND_TO_SESSION correctly. Instead of looping,
>> it just proceeds as if the state manager is finished recovering.
>> This patch ensures that we loop back, handle higher priority errors
>> and complete the open/lock state recovery.
>>
>> Cc: [email protected]
>> Signed-off-by: Trond Myklebust <[email protected]>
>> ---
>> fs/nfs/nfs4state.c | 16 ++++++----------
>> 1 file changed, 6 insertions(+), 10 deletions(-)
>>
>> diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
>> index 26d510d11efd..87b2d0e79797 100644
>> --- a/fs/nfs/nfs4state.c
>> +++ b/fs/nfs/nfs4state.c
>> @@ -1705,7 +1705,8 @@ restart:
>> if (status < 0) {
>> set_bit(ops->owner_flag_bit, &sp->so_flags);
>> nfs4_put_state_owner(sp);
>> - return nfs4_recovery_handle_error(clp, status);
>> + status = nfs4_recovery_handle_error(clp, status);
>> + return (status != 0) ? status : -EAGAIN;
>
> Why is the ternary better than doing something like this?
>
> if (status == 0)
> return -EAGAIN;
> return status;
It is 2 lines shorter, yet just as legible in this situation.
--
Trond Myklebust
Linux NFS client maintainer, PrimaryData
[email protected]
On 09/27/2014 11:54 PM, Trond Myklebust wrote:
> The current open/lock state recovery unfortunately does not handle errors
> such as NFS4ERR_CONN_NOT_BOUND_TO_SESSION correctly. Instead of looping,
> it just proceeds as if the state manager is finished recovering.
> This patch ensures that we loop back, handle higher priority errors
> and complete the open/lock state recovery.
>
> Cc: [email protected]
> Signed-off-by: Trond Myklebust <[email protected]>
> ---
> fs/nfs/nfs4state.c | 16 ++++++----------
> 1 file changed, 6 insertions(+), 10 deletions(-)
>
> diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
> index 26d510d11efd..87b2d0e79797 100644
> --- a/fs/nfs/nfs4state.c
> +++ b/fs/nfs/nfs4state.c
> @@ -1705,7 +1705,8 @@ restart:
> if (status < 0) {
> set_bit(ops->owner_flag_bit, &sp->so_flags);
> nfs4_put_state_owner(sp);
> - return nfs4_recovery_handle_error(clp, status);
> + status = nfs4_recovery_handle_error(clp, status);
> + return (status != 0) ? status : -EAGAIN;
Why is the ternary better than doing something like this?
if (status == 0)
return -EAGAIN;
return status;
Anna
> }
>
> nfs4_put_state_owner(sp);
> @@ -1714,7 +1715,7 @@ restart:
> spin_unlock(&clp->cl_lock);
> }
> rcu_read_unlock();
> - return status;
> + return 0;
> }
>
> static int nfs4_check_lease(struct nfs_client *clp)
> @@ -2365,14 +2366,11 @@ static void nfs4_state_manager(struct nfs_client *clp)
> section = "reclaim reboot";
> status = nfs4_do_reclaim(clp,
> clp->cl_mvops->reboot_recovery_ops);
> - if (test_bit(NFS4CLNT_LEASE_EXPIRED, &clp->cl_state) ||
> - test_bit(NFS4CLNT_SESSION_RESET, &clp->cl_state))
> - continue;
> - nfs4_state_end_reclaim_reboot(clp);
> - if (test_bit(NFS4CLNT_RECLAIM_NOGRACE, &clp->cl_state))
> + if (status == -EAGAIN)
> continue;
> if (status < 0)
> goto out_error;
> + nfs4_state_end_reclaim_reboot(clp);
> }
>
> /* Now recover expired state... */
> @@ -2380,9 +2378,7 @@ static void nfs4_state_manager(struct nfs_client *clp)
> section = "reclaim nograce";
> status = nfs4_do_reclaim(clp,
> clp->cl_mvops->nograce_recovery_ops);
> - if (test_bit(NFS4CLNT_LEASE_EXPIRED, &clp->cl_state) ||
> - test_bit(NFS4CLNT_SESSION_RESET, &clp->cl_state) ||
> - test_bit(NFS4CLNT_RECLAIM_REBOOT, &clp->cl_state))
> + if (status == -EAGAIN)
> continue;
> if (status < 0)
> goto out_error;
On Wed, 1 Oct 2014 14:46:43 -0400
Trond Myklebust <[email protected]> wrote:
> On Wed, Oct 1, 2014 at 2:32 PM, Jeff Layton <[email protected]> wrote:
> > On Sat, 27 Sep 2014 23:54:57 -0400
> > Trond Myklebust <[email protected]> wrote:
> >
> >> If an NFSv4.x server returns NFS4ERR_STALE_CLIENTID in response to a
> >> CREATE_SESSION or SETCLIENTID_CONFIRM in order to tell us that it rebooted
> >> a second time, then the client will currently take this to mean that it must
> >> declare all locks to be stale, and hence ineligible for reboot recovery.
> >>
> >> RFC3530 and RFC5661 both suggest that the client should instead rely on the
> >> server to respond to ineligible open share, lock and delegation reclaim
> >> requests with NFS4ERR_NO_GRACE in this situation.
> >>
> >> Cc: [email protected]
> >> Signed-off-by: Trond Myklebust <[email protected]>
> >> ---
> >> fs/nfs/nfs4state.c | 1 -
> >> 1 file changed, 1 deletion(-)
> >>
> >> diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
> >> index 22fe35104c0c..26d510d11efd 100644
> >> --- a/fs/nfs/nfs4state.c
> >> +++ b/fs/nfs/nfs4state.c
> >> @@ -1761,7 +1761,6 @@ static int nfs4_handle_reclaim_lease_error(struct nfs_client *clp, int status)
> >> break;
> >> case -NFS4ERR_STALE_CLIENTID:
> >> clear_bit(NFS4CLNT_LEASE_CONFIRM, &clp->cl_state);
> >> - nfs4_state_clear_reclaim_reboot(clp);
> >> nfs4_state_start_reclaim_reboot(clp);
> >> break;
> >> case -NFS4ERR_CLID_INUSE:
> >
> > What distinguishes between the v4.0 and v4.1+ case here?
>
> Nothing. They are actually supposed to be handled identically here.
> nfs4_handle_reclaim_lease_error() is called if and only if the
> SETCLIENTID_CONFIRM (v4.0) or CREATE_SESSION (v4.1) fails. At that
> point we have not yet sent out any OPEN or LOCK reclaim requests, so
> the failure will really result in an all-or-nothing reclaim for both
> NFSv4 and NFSv4.x.
>
> > For v4.1+, we do want the client to just try to reclaim everything that
> > it can. For v4.0 though, we need to be a little more careful. Consider:
> >
> >
> > Client Server
> > ===================================================================
> > SETCLIENTID
> > OPEN (O1)
> > LOCK (L1)
> > reboot (B1)
> >
> > RENEW (NFS4ERR_STALE_CLIENTID)
> > SETCLIENTID
> > OPEN(reclaim O1) (NFS4_OK)
> >
> > === NETWORK PARTITION ===
> > Grace period is lifted, but client1's
> > lease hasn't expired yet
> >
> > Lock that conflicts with L1 is handed out to client2
> >
> > reboot (B2)
> > === PARTITION HEALS ===
> > LOCK(reclaim L1) (NFS4ERR_STALE_CLIENTID)
> >
> > SETCLIENTID
> > OPEN (reclaim O1) (NFS4_OK)
> > LOCK (reclaim L1) (NFS4_OK)
> >
> >
> > Now we have a conflict. I think that the client should not try to
> > reclaim L1 after B2 in the v4.0 case. Do we need to do something
> > to handle the v4.0 vs. v4.1+ cases differently here?
>
> This patch does not change the existing handling of
> NFS4ERR_STALE_CLIENTID above, so the NFSv4.0 code will continue to
> work as before.
>
> The reason why NFSv4.1 will not need changing above is that the
> SEQUENCE op that we send instead of RENEW will receive a
> NFS4ERR_DEADSESSION or NFS4ERR_BADSESSION instead of the stale
> clientid error.
>
Ahh ok. Got it!
Acked-by: Jeff Layton <[email protected]>
On Wed, Oct 1, 2014 at 2:32 PM, Jeff Layton <[email protected]> wrote:
> On Sat, 27 Sep 2014 23:54:57 -0400
> Trond Myklebust <[email protected]> wrote:
>
>> If an NFSv4.x server returns NFS4ERR_STALE_CLIENTID in response to a
>> CREATE_SESSION or SETCLIENTID_CONFIRM in order to tell us that it rebooted
>> a second time, then the client will currently take this to mean that it must
>> declare all locks to be stale, and hence ineligible for reboot recovery.
>>
>> RFC3530 and RFC5661 both suggest that the client should instead rely on the
>> server to respond to ineligible open share, lock and delegation reclaim
>> requests with NFS4ERR_NO_GRACE in this situation.
>>
>> Cc: [email protected]
>> Signed-off-by: Trond Myklebust <[email protected]>
>> ---
>> fs/nfs/nfs4state.c | 1 -
>> 1 file changed, 1 deletion(-)
>>
>> diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
>> index 22fe35104c0c..26d510d11efd 100644
>> --- a/fs/nfs/nfs4state.c
>> +++ b/fs/nfs/nfs4state.c
>> @@ -1761,7 +1761,6 @@ static int nfs4_handle_reclaim_lease_error(struct nfs_client *clp, int status)
>> break;
>> case -NFS4ERR_STALE_CLIENTID:
>> clear_bit(NFS4CLNT_LEASE_CONFIRM, &clp->cl_state);
>> - nfs4_state_clear_reclaim_reboot(clp);
>> nfs4_state_start_reclaim_reboot(clp);
>> break;
>> case -NFS4ERR_CLID_INUSE:
>
> What distinguishes between the v4.0 and v4.1+ case here?
Nothing. They are actually supposed to be handled identically here.
nfs4_handle_reclaim_lease_error() is called if and only if the
SETCLIENTID_CONFIRM (v4.0) or CREATE_SESSION (v4.1) fails. At that
point we have not yet sent out any OPEN or LOCK reclaim requests, so
the failure will really result in an all-or-nothing reclaim for both
NFSv4 and NFSv4.x.
> For v4.1+, we do want the client to just try to reclaim everything that
> it can. For v4.0 though, we need to be a little more careful. Consider:
>
>
> Client Server
> ===================================================================
> SETCLIENTID
> OPEN (O1)
> LOCK (L1)
> reboot (B1)
>
> RENEW (NFS4ERR_STALE_CLIENTID)
> SETCLIENTID
> OPEN(reclaim O1) (NFS4_OK)
>
> === NETWORK PARTITION ===
> Grace period is lifted, but client1's
> lease hasn't expired yet
>
> Lock that conflicts with L1 is handed out to client2
>
> reboot (B2)
> === PARTITION HEALS ===
> LOCK(reclaim L1) (NFS4ERR_STALE_CLIENTID)
>
> SETCLIENTID
> OPEN (reclaim O1) (NFS4_OK)
> LOCK (reclaim L1) (NFS4_OK)
>
>
> Now we have a conflict. I think that the client should not try to
> reclaim L1 after B2 in the v4.0 case. Do we need to do something
> to handle the v4.0 vs. v4.1+ cases differently here?
This patch does not change the existing handling of
NFS4ERR_STALE_CLIENTID above, so the NFSv4.0 code will continue to
work as before.
The reason why NFSv4.1 will not need changing above is that the
SEQUENCE op that we send instead of RENEW will receive a
NFS4ERR_DEADSESSION or NFS4ERR_BADSESSION instead of the stale
clientid error.
--
Trond Myklebust
Linux NFS client maintainer, PrimaryData
[email protected]
On Wed, Oct 1, 2014 at 9:49 AM, Anna Schumaker
<[email protected]> wrote:
> On 09/27/2014 11:54 PM, Trond Myklebust wrote:
>> If an NFSv4.x server returns NFS4ERR_STALE_CLIENTID in response to a
>> CREATE_SESSION or SETCLIENTID_CONFIRM in order to tell us that it rebooted
>> a second time, then the client will currently take this to mean that it must
>> declare all locks to be stale, and hence ineligible for reboot recovery.
>>
>> RFC3530 and RFC5661 both suggest that the client should instead rely on the
>> server to respond to ineligible open share, lock and delegation reclaim
>> requests with NFS4ERR_NO_GRACE in this situation.
> Has our handling of NFS4ERR_NO_GRACE been tested in this situation?
nfs4_reclaim_open_state() will handle it by calling
nfs4_state_mark_reclaim_nograce(), which clears the flags enabling
reboot recovery, and sets the NFS_STATE_RECLAIM_NOGRACE flag to force
state reclaim using standard OPEN and LOCK.
--
Trond Myklebust
Linux NFS client maintainer, PrimaryData
[email protected]
On Sat, 27 Sep 2014 23:54:57 -0400
Trond Myklebust <[email protected]> wrote:
> If an NFSv4.x server returns NFS4ERR_STALE_CLIENTID in response to a
> CREATE_SESSION or SETCLIENTID_CONFIRM in order to tell us that it rebooted
> a second time, then the client will currently take this to mean that it must
> declare all locks to be stale, and hence ineligible for reboot recovery.
>
> RFC3530 and RFC5661 both suggest that the client should instead rely on the
> server to respond to ineligible open share, lock and delegation reclaim
> requests with NFS4ERR_NO_GRACE in this situation.
>
> Cc: [email protected]
> Signed-off-by: Trond Myklebust <[email protected]>
> ---
> fs/nfs/nfs4state.c | 1 -
> 1 file changed, 1 deletion(-)
>
> diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
> index 22fe35104c0c..26d510d11efd 100644
> --- a/fs/nfs/nfs4state.c
> +++ b/fs/nfs/nfs4state.c
> @@ -1761,7 +1761,6 @@ static int nfs4_handle_reclaim_lease_error(struct nfs_client *clp, int status)
> break;
> case -NFS4ERR_STALE_CLIENTID:
> clear_bit(NFS4CLNT_LEASE_CONFIRM, &clp->cl_state);
> - nfs4_state_clear_reclaim_reboot(clp);
> nfs4_state_start_reclaim_reboot(clp);
> break;
> case -NFS4ERR_CLID_INUSE:
What distinguishes between the v4.0 and v4.1+ case here?
For v4.1+, we do want the client to just try to reclaim everything that
it can. For v4.0 though, we need to be a little more careful. Consider:
Client Server
===================================================================
SETCLIENTID
OPEN (O1)
LOCK (L1)
reboot (B1)
RENEW (NFS4ERR_STALE_CLIENTID)
SETCLIENTID
OPEN(reclaim O1) (NFS4_OK)
=== NETWORK PARTITION ===
Grace period is lifted, but client1's
lease hasn't expired yet
Lock that conflicts with L1 is handed out to client2
reboot (B2)
=== PARTITION HEALS ===
LOCK(reclaim L1) (NFS4ERR_STALE_CLIENTID)
SETCLIENTID
OPEN (reclaim O1) (NFS4_OK)
LOCK (reclaim L1) (NFS4_OK)
Now we have a conflict. I think that the client should not try to
reclaim L1 after B2 in the v4.0 case. Do we need to do something
to handle the v4.0 vs. v4.1+ cases differently here?
--
Jeff Layton <[email protected]>
On 09/27/2014 11:54 PM, Trond Myklebust wrote:
> If an NFSv4.x server returns NFS4ERR_STALE_CLIENTID in response to a
> CREATE_SESSION or SETCLIENTID_CONFIRM in order to tell us that it rebooted
> a second time, then the client will currently take this to mean that it must
> declare all locks to be stale, and hence ineligible for reboot recovery.
>
> RFC3530 and RFC5661 both suggest that the client should instead rely on the
> server to respond to ineligible open share, lock and delegation reclaim
> requests with NFS4ERR_NO_GRACE in this situation.
Has our handling of NFS4ERR_NO_GRACE been tested in this situation?
Anna
>
> Cc: [email protected]
> Signed-off-by: Trond Myklebust <[email protected]>
> ---
> fs/nfs/nfs4state.c | 1 -
> 1 file changed, 1 deletion(-)
>
> diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
> index 22fe35104c0c..26d510d11efd 100644
> --- a/fs/nfs/nfs4state.c
> +++ b/fs/nfs/nfs4state.c
> @@ -1761,7 +1761,6 @@ static int nfs4_handle_reclaim_lease_error(struct nfs_client *clp, int status)
> break;
> case -NFS4ERR_STALE_CLIENTID:
> clear_bit(NFS4CLNT_LEASE_CONFIRM, &clp->cl_state);
> - nfs4_state_clear_reclaim_reboot(clp);
> nfs4_state_start_reclaim_reboot(clp);
> break;
> case -NFS4ERR_CLID_INUSE: