This is a rewrite of the NFSv4.1 DRC, switching from a page-based cache to
a buffer-based cache. The logic for the single slot clientid cache has been
separated from the session slot logic to handle the CREATE_SESSION call
preceded by a SEQUENCE and all the replay combinations therein.
The session DRC now caches encoded operations, with the exception of the
SEQUENCE operation, which on replay is encoded from the current slot and
session values. A review of message sizes indicates that a 1024-byte buffer
for the operations is more than adequate.
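
For reference, after this series each slot's cache entry boils down to a
status plus a kvec over a 1024-byte buffer; roughly (a sketch assembled from
the patches below, not a verbatim copy of state.h):

/* per-slot DRC entry after the switch to buffers */
#define NFSD_SLOT_CACHE_SIZE 1024	/* bytes of encoded ops per slot */

struct nfsd4_cache_entry {
	__be32		ce_status;	/* status of the cached compound */
	struct kvec	ce_datav;	/* encoded operations past SEQUENCE;
					 * iov_base allocated on demand and
					 * holds at most NFSD_SLOT_CACHE_SIZE */
	int		ce_cachethis;	/* client asked for this reply to be cached */
	int		ce_opcnt;	/* number of operations in the cached reply */
};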
Not yet addressed is the pre-operation estimate of each operation's encoded
result size, which is needed to check against the negotiated fore channel
maximum response size cached.
I've tested NFSv4.1 mounts using Connectathon and the new pynfs 4.1 tests,
where I added two new Clientid cache replay tests [to be submitted] to
st_create_session.py.
I've tested NFSv4.0 mounts using Connectathon and the pynfs v4.0 tests.
As always, comments and suggestions welcome.
-->Andy
Clientid single slot cache
0001-nfsd41-add-create-session-slot-buffer-to-struc-nfs4.patch
0002-nfsd41-encode-create_session-result-into-cache.patch
0003-nfsd41-create_session-check-replay-first.patch
0004-nfsd41-replay-solo-and-embedded-create-session.patch
0005-nfsd41-create_session-cache-hold-client-reference.patch
0006-nfsd41-no-nfsd4_release_respages-for-the-clientid-c.patch
0007-nfsd41-slots-are-freed-with-session.patch
Session slot cache
0008-nfsd41-protect-sv_drc_pages_used-with-spinlock.patch
0009-nfsd41-sanity-check-client-drc-maxreqs.patch
0010-nfsd41-change-from-page-to-memory-based-drc-limits.patch
0011-nfsd41-set-the-session-maximum-response-size-cached.patch
0012-nfsd41-allocate-and-use-drc-cache-buffers.patch
0013-nfsd41-free-drc-cache-buffers.patch
0014-nfsd41-obliterate-nfsd4_copy_pages.patch
0015-nfsd41-obliterate-nfsd41_copy_replay_data.patch
0016-nfsd41-obliterate-nfsd4_release_respages.patch
0017-nfsd41-remove-unused-nfsd4_cache_entry-fields.patch
0018-nfsd41-obliterate-nfsd4_set_statp.patch
0019-nfsd41-rename-nfsd4_enc_uncached_replay.patch
0020-nfsd41-encode-replay-sequence-from-the-slot-values.patch
0021-nfsd41-remove-iovlen-field-from-nfsd4_compound_stat.patch
0022-nfsd41-obliterate-nfsd41_copy_replay_data.patch
0023-nfsd41-fix-nfsd4_store_cache_entry-comments.patch
0024-nfsd41-support-16-slots-per-session.patch
0025-nfsd41-use-the-maximum-operations-per-compound-in-n.patch
0026-nfsd41-fix-nfsd4_store_cache_entry-dprintk.patch
0027-nfsd41-add-test-for-failed-sequence-operation.patch
0028-nfsd41-remove-redundant-failed-sequence-check.patch
0029-nfsd41-remove-check-for-session.patch
On Thu, Apr 23, 2009 at 07:21:05PM -0400, bfields wrote:
> On Thu, Apr 23, 2009 at 12:42:41PM -0400, [email protected] wrote:
> > From: Andy Adamson <[email protected]>
> >
> > CREATE_SESSION can be preceded by a SEQUENCE operation and the
> > create session single slot cache must be maintained. Encode the results of
> > a create session call into the cache at the end of processing.
> >
> > The create session result will also be encoded into the RPC response, and if
> > it is preceded by a SEQUENCE operation, will also be encoded into the
> > session slot table cache.
> >
> > Errors that do not change the create session cache:
> > A create session NFS4ERR_STALE_CLIENTID error means that a client record
> > (and associated create session slot) could not be found and therefore can't
> > be changed. NFSERR_SEQ_MISORDERED errors do not change the slot cache.
> >
> > All other errors get cached.
>
> Thanks, this all sounds sensible. (One simple thing I think I was
> confused about: the create_session sequencing is done at a layer above
> sessions; so in the case of a create_session preceded by a sequence, if
> we discover at the point of processing the sequence that this is a
> replay, we use the sequence replay cache and never even get to any
> create_session processing. OK!)
>
> > Signed-off-by: Andy Adamson <[email protected]>
> > ---
> > fs/nfsd/nfs4state.c | 6 +++++-
> > fs/nfsd/nfs4xdr.c | 19 +++++++++++++++++++
> > include/linux/nfsd/xdr4.h | 2 ++
> > 3 files changed, 26 insertions(+), 1 deletions(-)
> >
> > diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> > index 27ad37f..279b47e 100644
> > --- a/fs/nfsd/nfs4state.c
> > +++ b/fs/nfsd/nfs4state.c
> > @@ -1377,7 +1377,7 @@ nfsd4_create_session(struct svc_rqst *rqstp,
> > if (!same_creds(&unconf->cl_cred, &rqstp->rq_cred) ||
> > (ip_addr != unconf->cl_addr)) {
>
> Could we apply the patch removing these ip_addr checks before any of
> these patches? Also I seem to recall expressing some other doubts about
> the correctness of exchange_id and create_session--I'd like to see those
> addressed first, as this builds on those.
>
> > status = nfserr_clid_inuse;
> > - goto out;
> > + goto out_cache;
> > }
> >
> > slot = &unconf->cl_slot;
> > @@ -1424,6 +1424,10 @@ nfsd4_create_session(struct svc_rqst *rqstp,
> > cstate->slot = slot;
> > /* Ensure a page is used for the cache */
> > slot->sl_cache_entry.ce_cachethis = 1;
>
> Is this still needed?
Oh, I see, that's gone in a later patch. I think that's OK.
--b.
>
> --b.
>
> > +out_cache:
> > + /* cache solo and embedded create sessions under the state lock */
> > + nfsd4_cache_create_session(cr_ses, slot, status);
> > +
> > out:
> > nfs4_unlock_state();
> > dprintk("%s returns %d\n", __func__, ntohl(status));
> > diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
> > index b2f8d74..6b34fac 100644
> > --- a/fs/nfsd/nfs4xdr.c
> > +++ b/fs/nfsd/nfs4xdr.c
> > @@ -3046,6 +3046,25 @@ nfsd4_encode_create_session(struct nfsd4_compoundres *resp, int nfserr,
> > return 0;
> > }
> >
> > +/*
> > + * Encode the create_session result into the create session single DRC
> > + * slot cache. Do this for solo or embedded create session operations.
> > + */
> > +void
> > +nfsd4_cache_create_session(struct nfsd4_create_session *cr_ses,
> > + struct nfsd4_slot *slot, int nfserr)
> > +{
> > + struct nfsd4_cache_entry *entry = &slot->sl_cache_entry;
> > + __be32 *p = (__be32 *)entry->ce_datav.iov_base;
> > + struct nfsd4_compoundres tmp = {
> > + .p = p,
> > + .end = p + XDR_QUADLEN(CS_MAX_ENC_SZ),
> > + }, *resp = &tmp;
> > +
> > + entry->ce_status = nfsd4_encode_create_session(resp, nfserr, cr_ses);
> > + entry->ce_datav.iov_len = (char *)resp->p - (char *)p;
> > +}
> > +
> > static __be32
> > nfsd4_encode_destroy_session(struct nfsd4_compoundres *resp, int nfserr,
> > struct nfsd4_destroy_session *destroy_session)
> > diff --git a/include/linux/nfsd/xdr4.h b/include/linux/nfsd/xdr4.h
> > index afd4464..cc50ca0 100644
> > --- a/include/linux/nfsd/xdr4.h
> > +++ b/include/linux/nfsd/xdr4.h
> > @@ -517,6 +517,8 @@ extern __be32 nfsd4_setclientid_confirm(struct svc_rqst *rqstp,
> > extern void nfsd4_store_cache_entry(struct nfsd4_compoundres *resp);
> > extern __be32 nfsd4_replay_cache_entry(struct nfsd4_compoundres *resp,
> > struct nfsd4_sequence *seq);
> > +extern void nfsd4_cache_create_session(struct nfsd4_create_session *cr_ses,
> > + struct nfsd4_slot *slot, int nfserr);
> > extern __be32 nfsd4_exchange_id(struct svc_rqst *rqstp,
> > struct nfsd4_compound_state *,
> > struct nfsd4_exchange_id *);
> > --
> > 1.5.4.3
> >
On Thu, Apr 23, 2009 at 12:42:47PM -0400, [email protected] wrote:
> From: Andy Adamson <[email protected]>
>
> Don't use the sv_lock as per March 31 review by bfields.
If you could remind me (and anyone else) what my complaint was, that'd
be helpful. (OK, I think it's coming back to me. Still, it's a comment
that's not useful to anyone else later.)
>
> Serialize access to sv_drc_pages_used which changes on session creation.
So this information is all server-wide?
Might as well just make it (the lock, and the other sv_drc_* stuff)
global, I guess. It shouldn't be in an rpc-level structure (struct
svc_serv) since it's nfs/sessions-specific.
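
Roughly something along these lines, somewhere under fs/nfsd (purely
illustrative; the names are made up, not what the patch does):

/* nfsd-global DRC accounting instead of fields in struct svc_serv */
static DEFINE_SPINLOCK(nfsd_drc_lock);
static unsigned int nfsd_drc_max_mem;	/* total bytes available to the DRC */
static unsigned int nfsd_drc_mem_used;	/* bytes handed out to sessions */

/* reserve up to "want" bytes of DRC memory for a new session's slots */
static unsigned int nfsd_drc_reserve(unsigned int want)
{
	spin_lock(&nfsd_drc_lock);
	if (nfsd_drc_mem_used + want > nfsd_drc_max_mem)
		want = nfsd_drc_max_mem - nfsd_drc_mem_used;
	nfsd_drc_mem_used += want;
	spin_unlock(&nfsd_drc_lock);
	return want;
}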
--b.
>
> Signed-off-by: Andy Adamson <[email protected]>
> ---
> fs/nfsd/nfs4state.c | 4 ++--
> include/linux/sunrpc/svc.h | 1 +
> net/sunrpc/svc.c | 1 +
> 3 files changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> index 4cc66f3..af21f94 100644
> --- a/fs/nfsd/nfs4state.c
> +++ b/fs/nfsd/nfs4state.c
> @@ -427,11 +427,11 @@ static int set_forechannel_maxreqs(struct nfsd4_channel_attrs *fchan)
> {
> int status = 0, np = fchan->maxreqs * NFSD_PAGES_PER_SLOT;
>
> - spin_lock(&nfsd_serv->sv_lock);
> + spin_lock(&nfsd_serv->sv_drc_lock);
> if (np + nfsd_serv->sv_drc_pages_used > nfsd_serv->sv_drc_max_pages)
> np = nfsd_serv->sv_drc_max_pages - nfsd_serv->sv_drc_pages_used;
> nfsd_serv->sv_drc_pages_used += np;
> - spin_unlock(&nfsd_serv->sv_lock);
> + spin_unlock(&nfsd_serv->sv_drc_lock);
>
> if (np <= 0) {
> status = nfserr_resource;
> diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
> index 2a30775..0d2315c 100644
> --- a/include/linux/sunrpc/svc.h
> +++ b/include/linux/sunrpc/svc.h
> @@ -94,6 +94,7 @@ struct svc_serv {
> struct module * sv_module; /* optional module to count when
> * adding threads */
> svc_thread_fn sv_function; /* main function for threads */
> + spinlock_t sv_drc_lock;
> unsigned int sv_drc_max_pages; /* Total pages for DRC */
> unsigned int sv_drc_pages_used;/* DRC pages used */
> };
> diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
> index 8847add..c25070a 100644
> --- a/net/sunrpc/svc.c
> +++ b/net/sunrpc/svc.c
> @@ -394,6 +394,7 @@ __svc_create(struct svc_program *prog, unsigned int bufsize, int npools,
> INIT_LIST_HEAD(&serv->sv_permsocks);
> init_timer(&serv->sv_temptimer);
> spin_lock_init(&serv->sv_lock);
> + spin_lock_init(&serv->sv_drc_lock);
>
> serv->sv_nrpools = npools;
> serv->sv_pools =
> --
> 1.5.4.3
>
On Thu, Apr 23, 2009 at 12:42:52PM -0400, [email protected] wrote:
> From: Andy Adamson <[email protected]>
This should probably be folded into the last patch.
(For one reason: looks like applying everything up to the last patch
would result in a kernel that leaked memory, since these things never
got freed? The rule here is that applying patches 1...n of any patch
series should never introduce any new problems.)
--b.
>
> Signed-off-by: Andy Adamson <[email protected]>
> ---
> fs/nfsd/nfs4state.c | 2 +-
> 1 files changed, 1 insertions(+), 1 deletions(-)
>
> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> index 9e79e0c..787cffe 100644
> --- a/fs/nfsd/nfs4state.c
> +++ b/fs/nfsd/nfs4state.c
> @@ -590,7 +590,7 @@ free_session(struct kref *kref)
> ses = container_of(kref, struct nfsd4_session, se_ref);
> for (i = 0; i < ses->se_fchannel.maxreqs; i++) {
> struct nfsd4_cache_entry *e = &ses->se_slots[i].sl_cache_entry;
> - nfsd4_release_respages(e->ce_respages, e->ce_resused);
> + kfree(e->ce_datav.iov_base);
> }
> kfree(ses);
> }
> --
> 1.5.4.3
>
On Thu, Apr 23, 2009 at 12:42:39PM -0400, [email protected] wrote:
> This is a rewrite of the NFSv4.1 DRC, switching from a page based cache to
> a buffer based cache. The logic for the single slot clientid cache has been
> separated from the session slot logic to handle the CREATE_SESSION call
> preceded by a SEQUENCE and all the replay combinations therein.
>
> The session DRC now caches encoded operations with the exception of the
> SEQUENCE operation which for a replay is encoded with the current slot and
> session values. A review of message sizes indicates that a 1024 byte buffer
> for the operations is more than adequate.
Thanks, a quick skim suggests the basic idea's right.... I may not get
around to looking at this closer soon; let me know if there's a new
version at some point.
--b.
>
> Not addressed is the necessary pre-operation processing estimate of the encoded
> per operation result to check against the negotiated fore channel maximum
> response size cached.
>
> I've tested NFSv4.1 mounts using Connectathon and the new pynfs 4.1 tests,
> where I added two new Clientid cache replay tests [to be submitted] to
> st_create_session.py.
>
> I've tested NFSv4.0 mounts using Connectathon and the pynfs v4.0 tests.
>
> As always, comments and suggestions welcome.
>
> -->Andy
>
> Clientid single slot cache
> 0001-nfsd41-add-create-session-slot-buffer-to-struc-nfs4.patch
> 0002-nfsd41-encode-create_session-result-into-cache.patch
> 0003-nfsd41-create_session-check-replay-first.patch
> 0004-nfsd41-replay-solo-and-embedded-create-session.patch
> 0005-nfsd41-create_session-cache-hold-client-reference.patch
> 0006-nfsd41-no-nfsd4_release_respages-for-the-clientid-c.patch
> 0007-nfsd41-slots-are-freed-with-session.patch
>
> Session slot cache
> 0008-nfsd41-protect-sv_drc_pages_used-with-spinlock.patch
> 0009-nfsd41-sanity-check-client-drc-maxreqs.patch
> 0010-nfsd41-change-from-page-to-memory-based-drc-limits.patch
> 0011-nfsd41-set-the-session-maximum-response-size-cached.patch
> 0012-nfsd41-allocate-and-use-drc-cache-buffers.patch
> 0013-nfsd41-free-drc-cache-buffers.patch
> 0014-nfsd41-obliterate-nfsd4_copy_pages.patch
> 0015-nfsd41-obliterate-nfsd41_copy_replay_data.patch
> 0016-nfsd41-obliterate-nfsd4_release_respages.patch
> 0017-nfsd41-remove-unused-nfsd4_cache_entry-fields.patch
> 0018-nfsd41-obliterate-nfsd4_set_statp.patch
> 0019-nfsd41-rename-nfsd4_enc_uncached_replay.patch
> 0020-nfsd41-encode-replay-sequence-from-the-slot-values.patch
> 0021-nfsd41-remove-iovlen-field-from-nfsd4_compound_stat.patch
> 0022-nfsd41-obliterate-nfsd41_copy_replay_data.patch
> 0023-nfsd41-fix-nfsd4_store_cache_entry-comments.patch
> 0024-nfsd41-support-16-slots-per-session.patch
> 0025-nfsd41-use-the-maximum-operations-per-compound-in-n.patch
> 0026-nfsd41-fix-nfsd4_store_cache_entry-dprintk.patch
> 0027-nfsd41-add-test-for-failed-sequence-operation.patch
> 0028-nfsd41-remove-redundant-failed-sequence-check.patch
> 0029-nfsd41-remove-check-for-session.patch
>
>
On Apr 23, 2009, at 7:21 PM, J. Bruce Fields wrote:
> On Thu, Apr 23, 2009 at 12:42:41PM -0400, [email protected] wrote:
>> From: Andy Adamson <[email protected]>
>>
>> CREATE_SESSION can be preceded by a SEQUENCE operation and the
>> create session single slot cache must be maintained. Encode the
>> results of
>> a create session call into the cache at the end of processing.
>>
>> The create session result will also be encoded into the RPC
>> response, and if
>> it is preceded by a SEQUENCE operation, will also be encoded into
>> the
>> session slot table cache.
>>
>> Errors that do not change the create session cache:
>> A create session NFS4ERR_STALE_CLIENTID error means that a client
>> record
>> (and associated create session slot) could not be found and
>> therefore can't
>> be changed. NFSERR_SEQ_MISORDERED errors do not change the slot
>> cache.
>>
>> All other errors get cached.
>
> Thanks, this all sounds sensible. (One simple thing I think I was
> confused about: the create_session sequencing is done at a layer above
> sessions; so in the case of a create_session preceded by a sequence,
> if
> we discover at the point of processing the sequence that this is a
> replay, we use the sequence replay cache and never even get to any
> create_session processing. OK!)
Yes! The double DRC of a sequence:create_session is a bit confusing.
It works correctly because the create session or clientid cache is a
single slot. So if there is a sequence replay of a compound with a
create session it's safe to replay the create session stored in the
sequence slot cache because: sequence replay -> client did not get
the response -> create session slot sequence # was not incremented.
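
In terms of the code, the seqid check both caches share is roughly (a
simplified sketch of check_slot_seqid(), not the exact function):

/* simplified: the real check_slot_seqid() also handles an in-use slot */
static __be32 check_slot_seqid(u32 seqid, struct nfsd4_slot *slot)
{
	if (seqid == slot->sl_seqid + 1)	/* normal: the next request */
		return nfs_ok;
	if (seqid == slot->sl_seqid)		/* replay of the last request */
		return nfserr_replay_cache;	/* serve from this slot's cache */
	return nfserr_seq_misordered;
}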
-->Andy
>
>
>> Signed-off-by: Andy Adamson <[email protected]>
>> ---
>> fs/nfsd/nfs4state.c | 6 +++++-
>> fs/nfsd/nfs4xdr.c | 19 +++++++++++++++++++
>> include/linux/nfsd/xdr4.h | 2 ++
>> 3 files changed, 26 insertions(+), 1 deletions(-)
>>
>> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
>> index 27ad37f..279b47e 100644
>> --- a/fs/nfsd/nfs4state.c
>> +++ b/fs/nfsd/nfs4state.c
>> @@ -1377,7 +1377,7 @@ nfsd4_create_session(struct svc_rqst *rqstp,
>> if (!same_creds(&unconf->cl_cred, &rqstp->rq_cred) ||
>> (ip_addr != unconf->cl_addr)) {
>
> Could we apply the patch removing these ip_addr checks before any of
> these patches? Also I seem to recall expressing some other doubts
> about
> the correctness of exchange_id and create_session--I'd like to see
> those
> addressed first, as this builds on those.
>
>> status = nfserr_clid_inuse;
>> - goto out;
>> + goto out_cache;
>> }
>>
>> slot = &unconf->cl_slot;
>> @@ -1424,6 +1424,10 @@ nfsd4_create_session(struct svc_rqst *rqstp,
>> cstate->slot = slot;
>> /* Ensure a page is used for the cache */
>> slot->sl_cache_entry.ce_cachethis = 1;
>
> Is this still needed?
>
> --b.
>
>> +out_cache:
>> + /* cache solo and embedded create sessions under the state lock */
>> + nfsd4_cache_create_session(cr_ses, slot, status);
>> +
>> out:
>> nfs4_unlock_state();
>> dprintk("%s returns %d\n", __func__, ntohl(status));
>> diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
>> index b2f8d74..6b34fac 100644
>> --- a/fs/nfsd/nfs4xdr.c
>> +++ b/fs/nfsd/nfs4xdr.c
>> @@ -3046,6 +3046,25 @@ nfsd4_encode_create_session(struct
>> nfsd4_compoundres *resp, int nfserr,
>> return 0;
>> }
>>
>> +/*
>> + * Encode the create_session result into the create session single
>> DRC
>> + * slot cache. Do this for solo or embedded create session
>> operations.
>> + */
>> +void
>> +nfsd4_cache_create_session(struct nfsd4_create_session *cr_ses,
>> + struct nfsd4_slot *slot, int nfserr)
>> +{
>> + struct nfsd4_cache_entry *entry = &slot->sl_cache_entry;
>> + __be32 *p = (__be32 *)entry->ce_datav.iov_base;
>> + struct nfsd4_compoundres tmp = {
>> + .p = p,
>> + .end = p + XDR_QUADLEN(CS_MAX_ENC_SZ),
>> + }, *resp = &tmp;
>> +
>> + entry->ce_status = nfsd4_encode_create_session(resp, nfserr,
>> cr_ses);
>> + entry->ce_datav.iov_len = (char *)resp->p - (char *)p;
>> +}
>> +
>> static __be32
>> nfsd4_encode_destroy_session(struct nfsd4_compoundres *resp, int
>> nfserr,
>> struct nfsd4_destroy_session *destroy_session)
>> diff --git a/include/linux/nfsd/xdr4.h b/include/linux/nfsd/xdr4.h
>> index afd4464..cc50ca0 100644
>> --- a/include/linux/nfsd/xdr4.h
>> +++ b/include/linux/nfsd/xdr4.h
>> @@ -517,6 +517,8 @@ extern __be32 nfsd4_setclientid_confirm(struct
>> svc_rqst *rqstp,
>> extern void nfsd4_store_cache_entry(struct nfsd4_compoundres *resp);
>> extern __be32 nfsd4_replay_cache_entry(struct nfsd4_compoundres
>> *resp,
>> struct nfsd4_sequence *seq);
>> +extern void nfsd4_cache_create_session(struct nfsd4_create_session
>> *cr_ses,
>> + struct nfsd4_slot *slot, int nfserr);
>> extern __be32 nfsd4_exchange_id(struct svc_rqst *rqstp,
>> struct nfsd4_compound_state *,
>> struct nfsd4_exchange_id *);
>> --
>> 1.5.4.3
>>
On Apr 23, 2009, at 7:28 PM, J. Bruce Fields wrote:
> On Thu, Apr 23, 2009 at 12:42:44PM -0400, [email protected] wrote:
>> From: Andy Adamson <[email protected]>
>>
>> expire_client can be called on a confirmed or unconfirmed client
>> while
>> processing the create session operation and accessing the clientid
>> slot.
>
> I don't understand--isn't all that under the state lock for now?
Yes it is. I now see that expire_client is also always called under
the state lock. Fair enough, this patch can be dropped.
-->Andy
>
>
> --b.
>
>>
>> Signed-off-by: Andy Adamson <[email protected]>
>> ---
>> fs/nfsd/nfs4state.c | 14 ++++++++++----
>> 1 files changed, 10 insertions(+), 4 deletions(-)
>>
>> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
>> index a585a58..accad58 100644
>> --- a/fs/nfsd/nfs4state.c
>> +++ b/fs/nfsd/nfs4state.c
>> @@ -1355,6 +1355,7 @@ nfsd4_create_session(struct svc_rqst *rqstp,
>> conf = find_confirmed_client(&cr_ses->clientid);
>>
>> if (conf) {
>> + atomic_inc(&conf->cl_count);
>> slot = &conf->cl_slot;
>> status = check_slot_seqid(cr_ses->seqid, slot);
>> if (status == nfserr_replay_cache) {
>> @@ -1363,27 +1364,30 @@ nfsd4_create_session(struct svc_rqst *rqstp,
>> cstate->status = nfserr_replay_clientid_cache;
>> /* Return the cached reply status */
>> status = nfsd4_replay_create_session(resp, slot);
>> - goto out;
>> + goto out_put;
>> } else if (cr_ses->seqid != conf->cl_slot.sl_seqid + 1) {
>> status = nfserr_seq_misordered;
>> dprintk("Sequence misordered!\n");
>> dprintk("Expected seqid= %d but got seqid= %d\n",
>> slot->sl_seqid, cr_ses->seqid);
>> - goto out;
>> + goto out_put;
>> }
>> conf->cl_slot.sl_seqid++;
>> } else if (unconf) {
>> + atomic_inc(&unconf->cl_count);
>> slot = &unconf->cl_slot;
>> status = check_slot_seqid(cr_ses->seqid, slot);
>> if (status) {
>> /* an unconfirmed replay returns misordered */
>> status = nfserr_seq_misordered;
>> - goto out;
>> + conf = unconf; /* for put_nfs4_client */
>> + goto out_put;
>> }
>>
>> if (!same_creds(&unconf->cl_cred, &rqstp->rq_cred) ||
>> (ip_addr != unconf->cl_addr)) {
>> status = nfserr_clid_inuse;
>> + conf = unconf; /* for put_nfs4_client */
>> goto out_cache;
>> }
>>
>> @@ -1413,7 +1417,7 @@ nfsd4_create_session(struct svc_rqst *rqstp,
>>
>> status = alloc_init_session(rqstp, conf, cr_ses);
>> if (status)
>> - goto out;
>> + goto out_put;
>>
>> memcpy(cr_ses->sessionid.data, conf->cl_sessionid.data,
>> NFS4_MAX_SESSIONID_LEN);
>> @@ -1423,6 +1427,8 @@ out_cache:
>> /* cache solo and embedded create sessions under the state lock */
>> nfsd4_cache_create_session(cr_ses, slot, status);
>>
>> +out_put:
>> + put_nfs4_client(conf);
>> out:
>> nfs4_unlock_state();
>> dprintk("%s returns %d\n", __func__, ntohl(status));
>> --
>> 1.5.4.3
>>
On Apr 23, 2009, at 7:32 PM, J. Bruce Fields wrote:
> On Thu, Apr 23, 2009 at 07:21:05PM -0400, bfields wrote:
>> On Thu, Apr 23, 2009 at 12:42:41PM -0400, [email protected] wrote:
>>> From: Andy Adamson <[email protected]>
>>>
>>> CREATE_SESSION can be preceded by a SEQUENCE operation and the
>>> create session single slot cache must be maintained. Encode the
>>> results of
>>> a create session call into the cache at the end of processing.
>>>
>>> The create session result will also be encoded into the RPC
>>> response, and if
>>> it is preceded by a SEQUENCE operation, will also be encoded into
>>> the
>>> session slot table cache.
>>>
>>> Errors that do not change the create session cache:
>>> A create session NFS4ERR_STALE_CLIENTID error means that a client
>>> record
>>> (and associated create session slot) could not be found and
>>> therefore can't
>>> be changed. NFSERR_SEQ_MISORDERED errors do not change the slot
>>> cache.
>>>
>>> All other errors get cached.
>>
>> Thanks, this all sounds sensible. (One simple thing I think I was
>> confused about: the create_session sequencing is done at a layer
>> above
>> sessions; so in the case of a create_session preceded by a
>> sequence, if
>> we discover at the point of processing the sequence that this is a
>> replay, we use the sequence replay cache and never even get to any
>> create_session processing. OK!)
>>
>>> Signed-off-by: Andy Adamson <[email protected]>
>>> ---
>>> fs/nfsd/nfs4state.c | 6 +++++-
>>> fs/nfsd/nfs4xdr.c | 19 +++++++++++++++++++
>>> include/linux/nfsd/xdr4.h | 2 ++
>>> 3 files changed, 26 insertions(+), 1 deletions(-)
>>>
>>> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
>>> index 27ad37f..279b47e 100644
>>> --- a/fs/nfsd/nfs4state.c
>>> +++ b/fs/nfsd/nfs4state.c
>>> @@ -1377,7 +1377,7 @@ nfsd4_create_session(struct svc_rqst *rqstp,
>>> if (!same_creds(&unconf->cl_cred, &rqstp->rq_cred) ||
>>> (ip_addr != unconf->cl_addr)) {
>>
>> Could we apply the patch removing these ip_addr checks before any of
>> these patches? Also I seem to recall expressing some other doubts
>> about
>> the correctness of exchange_id and create_session--I'd like to see
>> those
>> addressed first, as this builds on those.
>>
>>> status = nfserr_clid_inuse;
>>> - goto out;
>>> + goto out_cache;
>>> }
>>>
>>> slot = &unconf->cl_slot;
>>> @@ -1424,6 +1424,10 @@ nfsd4_create_session(struct svc_rqst *rqstp,
>>> cstate->slot = slot;
>>> /* Ensure a page is used for the cache */
>>> slot->sl_cache_entry.ce_cachethis = 1;
>>
>> Is this still needed?
>
> Oh, I see, that's gone in a later patch. I think that's OK.
I tried various orderings of adding new functionality and removing the
old....
-->Andy
>
>
> --b.
>
>>
>> --b.
>>
>>> +out_cache:
>>> + /* cache solo and embedded create sessions under the state lock */
>>> + nfsd4_cache_create_session(cr_ses, slot, status);
>>> +
>>> out:
>>> nfs4_unlock_state();
>>> dprintk("%s returns %d\n", __func__, ntohl(status));
>>> diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
>>> index b2f8d74..6b34fac 100644
>>> --- a/fs/nfsd/nfs4xdr.c
>>> +++ b/fs/nfsd/nfs4xdr.c
>>> @@ -3046,6 +3046,25 @@ nfsd4_encode_create_session(struct
>>> nfsd4_compoundres *resp, int nfserr,
>>> return 0;
>>> }
>>>
>>> +/*
>>> + * Encode the create_session result into the create session
>>> single DRC
>>> + * slot cache. Do this for solo or embedded create session
>>> operations.
>>> + */
>>> +void
>>> +nfsd4_cache_create_session(struct nfsd4_create_session *cr_ses,
>>> + struct nfsd4_slot *slot, int nfserr)
>>> +{
>>> + struct nfsd4_cache_entry *entry = &slot->sl_cache_entry;
>>> + __be32 *p = (__be32 *)entry->ce_datav.iov_base;
>>> + struct nfsd4_compoundres tmp = {
>>> + .p = p,
>>> + .end = p + XDR_QUADLEN(CS_MAX_ENC_SZ),
>>> + }, *resp = &tmp;
>>> +
>>> + entry->ce_status = nfsd4_encode_create_session(resp, nfserr,
>>> cr_ses);
>>> + entry->ce_datav.iov_len = (char *)resp->p - (char *)p;
>>> +}
>>> +
>>> static __be32
>>> nfsd4_encode_destroy_session(struct nfsd4_compoundres *resp, int
>>> nfserr,
>>> struct nfsd4_destroy_session *destroy_session)
>>> diff --git a/include/linux/nfsd/xdr4.h b/include/linux/nfsd/xdr4.h
>>> index afd4464..cc50ca0 100644
>>> --- a/include/linux/nfsd/xdr4.h
>>> +++ b/include/linux/nfsd/xdr4.h
>>> @@ -517,6 +517,8 @@ extern __be32 nfsd4_setclientid_confirm(struct
>>> svc_rqst *rqstp,
>>> extern void nfsd4_store_cache_entry(struct nfsd4_compoundres *resp);
>>> extern __be32 nfsd4_replay_cache_entry(struct nfsd4_compoundres
>>> *resp,
>>> struct nfsd4_sequence *seq);
>>> +extern void nfsd4_cache_create_session(struct
>>> nfsd4_create_session *cr_ses,
>>> + struct nfsd4_slot *slot, int nfserr);
>>> extern __be32 nfsd4_exchange_id(struct svc_rqst *rqstp,
>>> struct nfsd4_compound_state *,
>>> struct nfsd4_exchange_id *);
>>> --
>>> 1.5.4.3
>>>
On Fri, Apr 24, 2009 at 09:52:56AM -0400, Andy Adamson wrote:
>
> On Apr 23, 2009, at 7:28 PM, J. Bruce Fields wrote:
>
>> On Thu, Apr 23, 2009 at 12:42:44PM -0400, [email protected] wrote:
>>> From: Andy Adamson <[email protected]>
>>>
>>> expire_client can be called on a confirmed or unconfirmed client
>>> while
>>> processing the create session operation and accessing the clientid
>>> slot.
>>
>> I don't understand--isn't all that under the state lock for now?
>
> Yes it is. I now see that expire_client is also always called under the
> state lock. Fair enough, this patch can be dropped.
OK, thanks. Note this wouldn't be quite right even in the absence of a
lock, because there'd be nothing to guarantee that conf is still good at
the time we do the atomic_inc(&conf->cl_count) (what happens if someone
frees conf before we even get there?)
The way this kind of thing often works is:
acquire some lock
search for an object in some common data structure
bump the reference count on the found object
drop the lock
So the lock protects the object from destruction until we get the
reference count, and then the reference count protects it after we've
dropped the lock.
In this case we'll probably eventually transition to a spinlock
protecting the client hash tables, and then do the above in the
functions that look up clients, so that the object returned from those
functions already comes with its own reference taken.
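
In code that pattern would look something like this (illustrative only;
client_lock is the hypothetical future hash-table spinlock, not something we
have today):

/* lookup that hands back a referenced client */
static struct nfs4_client *find_confirmed_client_get(clientid_t *clid)
{
	struct nfs4_client *clp;

	spin_lock(&client_lock);		/* protects the client hash tables */
	clp = find_confirmed_client(clid);	/* search under the lock */
	if (clp)
		atomic_inc(&clp->cl_count);	/* take the ref before unlocking */
	spin_unlock(&client_lock);
	return clp;	/* caller drops it with put_nfs4_client() */
}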
--b.
>
> -->Andy
>
>>
>>
>> --b.
>>
>>>
>>> Signed-off-by: Andy Adamson <[email protected]>
>>> ---
>>> fs/nfsd/nfs4state.c | 14 ++++++++++----
>>> 1 files changed, 10 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
>>> index a585a58..accad58 100644
>>> --- a/fs/nfsd/nfs4state.c
>>> +++ b/fs/nfsd/nfs4state.c
>>> @@ -1355,6 +1355,7 @@ nfsd4_create_session(struct svc_rqst *rqstp,
>>> conf = find_confirmed_client(&cr_ses->clientid);
>>>
>>> if (conf) {
>>> + atomic_inc(&conf->cl_count);
>>> slot = &conf->cl_slot;
>>> status = check_slot_seqid(cr_ses->seqid, slot);
>>> if (status == nfserr_replay_cache) {
>>> @@ -1363,27 +1364,30 @@ nfsd4_create_session(struct svc_rqst *rqstp,
>>> cstate->status = nfserr_replay_clientid_cache;
>>> /* Return the cached reply status */
>>> status = nfsd4_replay_create_session(resp, slot);
>>> - goto out;
>>> + goto out_put;
>>> } else if (cr_ses->seqid != conf->cl_slot.sl_seqid + 1) {
>>> status = nfserr_seq_misordered;
>>> dprintk("Sequence misordered!\n");
>>> dprintk("Expected seqid= %d but got seqid= %d\n",
>>> slot->sl_seqid, cr_ses->seqid);
>>> - goto out;
>>> + goto out_put;
>>> }
>>> conf->cl_slot.sl_seqid++;
>>> } else if (unconf) {
>>> + atomic_inc(&unconf->cl_count);
>>> slot = &unconf->cl_slot;
>>> status = check_slot_seqid(cr_ses->seqid, slot);
>>> if (status) {
>>> /* an unconfirmed replay returns misordered */
>>> status = nfserr_seq_misordered;
>>> - goto out;
>>> + conf = unconf; /* for put_nfs4_client */
>>> + goto out_put;
>>> }
>>>
>>> if (!same_creds(&unconf->cl_cred, &rqstp->rq_cred) ||
>>> (ip_addr != unconf->cl_addr)) {
>>> status = nfserr_clid_inuse;
>>> + conf = unconf; /* for put_nfs4_client */
>>> goto out_cache;
>>> }
>>>
>>> @@ -1413,7 +1417,7 @@ nfsd4_create_session(struct svc_rqst *rqstp,
>>>
>>> status = alloc_init_session(rqstp, conf, cr_ses);
>>> if (status)
>>> - goto out;
>>> + goto out_put;
>>>
>>> memcpy(cr_ses->sessionid.data, conf->cl_sessionid.data,
>>> NFS4_MAX_SESSIONID_LEN);
>>> @@ -1423,6 +1427,8 @@ out_cache:
>>> /* cache solo and embedded create sessions under the state lock */
>>> nfsd4_cache_create_session(cr_ses, slot, status);
>>>
>>> +out_put:
>>> + put_nfs4_client(conf);
>>> out:
>>> nfs4_unlock_state();
>>> dprintk("%s returns %d\n", __func__, ntohl(status));
>>> --
>>> 1.5.4.3
>>>
>
On Fri, Apr 24, 2009 at 10:02 AM, J. Bruce Fields <[email protected]> wrote:
> On Fri, Apr 24, 2009 at 09:52:56AM -0400, Andy Adamson wrote:
>>
>> On Apr 23, 2009, at 7:28 PM, J. Bruce Fields wrote:
>>
>>> On Thu, Apr 23, 2009 at 12:42:44PM -0400, [email protected] wrote:
>>>> From: Andy Adamson <[email protected]>
>>>>
>>>> expire_client can be called on a confirmed or unconfirmed client
>>>> while
>>>> processing the create session operation and accessing the clientid
>>>> slot.
>>>
>>> I don't understand--isn't all that under the state lock for now?
>>
>> Yes it is. I now see that expire_client is also always called under the
>> state lock. Fair enough, this patch can be dropped.
>
> OK, thanks. Note this wouldn't be quite right even in the absence of a
> lock, because there'd be nothing to guarantee that conf is still good at
> the time we do the atomic_inc(&conf->cl_count) (what happens if someone
> frees conf before we even get there?)
>
> The way this kind of thing often works is:
>
> acquire some lock
> search for an object in some common data structure
> bump the reference count on the found object
> drop the lock
>
> So the lock protects the object from destruction until we get the
> reference count, and then the reference count protects it after we've
> dropped the lock.
>
> In this case we'll probably eventually transition to a spinlock
> protecting the client hash tables, and then do the above in the
> functions that look up clients, so that the object returned from those
> functions already comes with its own reference taken.
Got it - thanks
-->Andy
>
> --b.
>
>>
>> -->Andy
>>
>>>
>>>
>>> --b.
>>>
>>>>
>>>> Signed-off-by: Andy Adamson <[email protected]>
>>>> ---
>>>> fs/nfsd/nfs4state.c | 14 ++++++++++----
>>>> 1 files changed, 10 insertions(+), 4 deletions(-)
>>>>
>>>> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
>>>> index a585a58..accad58 100644
>>>> --- a/fs/nfsd/nfs4state.c
>>>> +++ b/fs/nfsd/nfs4state.c
>>>> @@ -1355,6 +1355,7 @@ nfsd4_create_session(struct svc_rqst *rqstp,
>>>> conf = find_confirmed_client(&cr_ses->clientid);
>>>>
>>>> if (conf) {
>>>> + atomic_inc(&conf->cl_count);
>>>> slot = &conf->cl_slot;
>>>> status = check_slot_seqid(cr_ses->seqid, slot);
>>>> if (status == nfserr_replay_cache) {
>>>> @@ -1363,27 +1364,30 @@ nfsd4_create_session(struct svc_rqst *rqstp,
>>>> cstate->status = nfserr_replay_clientid_cache;
>>>> /* Return the cached reply status */
>>>> status = nfsd4_replay_create_session(resp, slot);
>>>> - goto out;
>>>> + goto out_put;
>>>> } else if (cr_ses->seqid != conf->cl_slot.sl_seqid + 1) {
>>>> status = nfserr_seq_misordered;
>>>> dprintk("Sequence misordered!\n");
>>>> dprintk("Expected seqid= %d but got seqid= %d\n",
>>>> slot->sl_seqid, cr_ses->seqid);
>>>> - goto out;
>>>> + goto out_put;
>>>> }
>>>> conf->cl_slot.sl_seqid++;
>>>> } else if (unconf) {
>>>> + atomic_inc(&unconf->cl_count);
>>>> slot = &unconf->cl_slot;
>>>> status = check_slot_seqid(cr_ses->seqid, slot);
>>>> if (status) {
>>>> /* an unconfirmed replay returns misordered */
>>>> status = nfserr_seq_misordered;
>>>> - goto out;
>>>> + conf = unconf; /* for put_nfs4_client */
>>>> + goto out_put;
>>>> }
>>>>
>>>> if (!same_creds(&unconf->cl_cred, &rqstp->rq_cred) ||
>>>> (ip_addr != unconf->cl_addr)) {
>>>> status = nfserr_clid_inuse;
>>>> + conf = unconf; /* for put_nfs4_client */
>>>> goto out_cache;
>>>> }
>>>>
>>>> @@ -1413,7 +1417,7 @@ nfsd4_create_session(struct svc_rqst *rqstp,
>>>>
>>>> status = alloc_init_session(rqstp, conf, cr_ses);
>>>> if (status)
>>>> - goto out;
>>>> + goto out_put;
>>>>
>>>> memcpy(cr_ses->sessionid.data, conf->cl_sessionid.data,
>>>> NFS4_MAX_SESSIONID_LEN);
>>>> @@ -1423,6 +1427,8 @@ out_cache:
>>>> /* cache solo and embedded create sessions under the state lock */
>>>> nfsd4_cache_create_session(cr_ses, slot, status);
>>>>
>>>> +out_put:
>>>> + put_nfs4_client(conf);
>>>> out:
>>>> nfs4_unlock_state();
>>>> dprintk("%s returns %d\n", __func__, ntohl(status));
>>>> --
>>>> 1.5.4.3
>>>>
>>
On Thu, Apr 23, 2009 at 7:39 PM, J. Bruce Fields <[email protected]> wrote:
> On Thu, Apr 23, 2009 at 12:42:52PM -0400, [email protected] wrote:
>> From: Andy Adamson <[email protected]>
>
> This should probably be folded into the last patch.
>
> (For one reason: looks like applying everything up to the last patch
> would result in a kernel that leaked memory, since these things never
> got freed? The rule here is that applying patches 1...n of any patch
> series should never introduce any new problems.)
OK
>
> --b.
>
>>
>> Signed-off-by: Andy Adamson <[email protected]>
>> ---
>> fs/nfsd/nfs4state.c | 2 +-
>> 1 files changed, 1 insertions(+), 1 deletions(-)
>>
>> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
>> index 9e79e0c..787cffe 100644
>> --- a/fs/nfsd/nfs4state.c
>> +++ b/fs/nfsd/nfs4state.c
>> @@ -590,7 +590,7 @@ free_session(struct kref *kref)
>> ses = container_of(kref, struct nfsd4_session, se_ref);
>> for (i = 0; i < ses->se_fchannel.maxreqs; i++) {
>> struct nfsd4_cache_entry *e = &ses->se_slots[i].sl_cache_entry;
>> - nfsd4_release_respages(e->ce_respages, e->ce_resused);
>> + kfree(e->ce_datav.iov_base);
>> }
>> kfree(ses);
>> }
>> --
>> 1.5.4.3
>>
On Thu, Apr 23, 2009 at 7:41 PM, J. Bruce Fields <[email protected]> wrote:
> On Thu, Apr 23, 2009 at 12:42:39PM -0400, [email protected] wrote:
>> This is a rewrite of the NFSv4.1 DRC, switching from a page based cache to
>> a buffer based cache. The logic for the single slot clientid cache has been
>> separated from the session slot logic to handle the CREATE_SESSION call
>> preceded by a SEQUENCE and all the replay combinations therein.
>>
>> The session DRC now caches encoded operations with the exception of the
>> SEQUENCE operation which for a replay is encoded with the current slot and
>> session values. A review of message sizes indicates that a 1024 byte buffer
>> for the operations is more than adequate.
>
> Thanks, a quick skim suggests the basic idea's right.... I may not get
> around to looking at this closer soon; let me know if there's a new
> version at some point.
OK - Thanks for your time.
-->Andy
>
> --b.
>
>>
>> Not addressed is the necessary pre-operation processing estimate of the encoded
>> per operation result to check against the negotiated fore channel maximum
>> response size cached.
>>
>> I've tested NFSv4.1 mounts using Connectathon and the new pynfs 4.1 tests,
>> where I added two new Clientid cache replay tests [to be submitted] to
>> st_create_session.py.
>>
>> I've tested NFSv4.0 mounts using Connectathon and the pynfs v4.0 tests.
>>
>> As always, comments and suggestions welcome.
>>
>> -->Andy
>>
>> Clientid single slot cache
>> 0001-nfsd41-add-create-session-slot-buffer-to-struc-nfs4.patch
>> 0002-nfsd41-encode-create_session-result-into-cache.patch
>> 0003-nfsd41-create_session-check-replay-first.patch
>> 0004-nfsd41-replay-solo-and-embedded-create-session.patch
>> 0005-nfsd41-create_session-cache-hold-client-reference.patch
>> 0006-nfsd41-no-nfsd4_release_respages-for-the-clientid-c.patch
>> 0007-nfsd41-slots-are-freed-with-session.patch
>>
>> Session slot cache
>> 0008-nfsd41-protect-sv_drc_pages_used-with-spinlock.patch
>> 0009-nfsd41-sanity-check-client-drc-maxreqs.patch
>> 0010-nfsd41-change-from-page-to-memory-based-drc-limits.patch
>> 0011-nfsd41-set-the-session-maximum-response-size-cached.patch
>> 0012-nfsd41-allocate-and-use-drc-cache-buffers.patch
>> 0013-nfsd41-free-drc-cache-buffers.patch
>> 0014-nfsd41-obliterate-nfsd4_copy_pages.patch
>> 0015-nfsd41-obliterate-nfsd41_copy_replay_data.patch
>> 0016-nfsd41-obliterate-nfsd4_release_respages.patch
>> 0017-nfsd41-remove-unused-nfsd4_cache_entry-fields.patch
>> 0018-nfsd41-obliterate-nfsd4_set_statp.patch
>> 0019-nfsd41-rename-nfsd4_enc_uncached_replay.patch
>> 0020-nfsd41-encode-replay-sequence-from-the-slot-values.patch
>> 0021-nfsd41-remove-iovlen-field-from-nfsd4_compound_stat.patch
>> 0022-nfsd41-obliterate-nfsd41_copy_replay_data.patch
>> 0023-nfsd41-fix-nfsd4_store_cache_entry-comments.patch
>> 0024-nfsd41-support-16-slots-per-session.patch
>> 0025-nfsd41-use-the-maximum-operations-per-compound-in-n.patch
>> 0026-nfsd41-fix-nfsd4_store_cache_entry-dprintk.patch
>> 0027-nfsd41-add-test-for-failed-sequence-operation.patch
>> 0028-nfsd41-remove-redundant-failed-sequence-check.patch
>> 0029-nfsd41-remove-check-for-session.patch
>>
>>
On Thu, Apr 23, 2009 at 7:36 PM, J. Bruce Fields <[email protected]> wrote:
> On Thu, Apr 23, 2009 at 12:42:47PM -0400, [email protected] wrote:
>> From: Andy Adamson <[email protected]>
>>
>> Don't use the sv_lock as per March 31 review by bfields.
>
> If you could remind me (and anyone else) what my complaint was, that'd
> be helpful. (OK, I think it's coming back to me. Still, it's a comment
> that's not useful to anyone else later.)
OK - I'll avoid this type of comment in the future.
>
>>
>> Serialize access to sv_drc_pages_used which changes on session creation.
>
> So this information is all server-wide?
>
> Might as well just make it (the lock, and the other sv_drc_* stuff)
> global, I guess. It shouldn't be in an rpc-level structure (struct
> svc_serv) since it's nfs/sessions-specific.
I see your point. I can make it global.
-->Andy
>
> --b.
>
>>
>> Signed-off-by: Andy Adamson <[email protected]>
>> ---
>> fs/nfsd/nfs4state.c | 4 ++--
>> include/linux/sunrpc/svc.h | 1 +
>> net/sunrpc/svc.c | 1 +
>> 3 files changed, 4 insertions(+), 2 deletions(-)
>>
>> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
>> index 4cc66f3..af21f94 100644
>> --- a/fs/nfsd/nfs4state.c
>> +++ b/fs/nfsd/nfs4state.c
>> @@ -427,11 +427,11 @@ static int set_forechannel_maxreqs(struct nfsd4_channel_attrs *fchan)
>> {
>> int status = 0, np = fchan->maxreqs * NFSD_PAGES_PER_SLOT;
>>
>> - spin_lock(&nfsd_serv->sv_lock);
>> + spin_lock(&nfsd_serv->sv_drc_lock);
>> if (np + nfsd_serv->sv_drc_pages_used > nfsd_serv->sv_drc_max_pages)
>> np = nfsd_serv->sv_drc_max_pages - nfsd_serv->sv_drc_pages_used;
>> nfsd_serv->sv_drc_pages_used += np;
>> - spin_unlock(&nfsd_serv->sv_lock);
>> + spin_unlock(&nfsd_serv->sv_drc_lock);
>>
>> if (np <= 0) {
>> status = nfserr_resource;
>> diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
>> index 2a30775..0d2315c 100644
>> --- a/include/linux/sunrpc/svc.h
>> +++ b/include/linux/sunrpc/svc.h
>> @@ -94,6 +94,7 @@ struct svc_serv {
>> struct module * sv_module; /* optional module to count when
>> * adding threads */
>> svc_thread_fn sv_function; /* main function for threads */
>> + spinlock_t sv_drc_lock;
>> unsigned int sv_drc_max_pages; /* Total pages for DRC */
>> unsigned int sv_drc_pages_used;/* DRC pages used */
>> };
>> diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
>> index 8847add..c25070a 100644
>> --- a/net/sunrpc/svc.c
>> +++ b/net/sunrpc/svc.c
>> @@ -394,6 +394,7 @@ __svc_create(struct svc_program *prog, unsigned int bufsize, int npools,
>> INIT_LIST_HEAD(&serv->sv_permsocks);
>> init_timer(&serv->sv_temptimer);
>> spin_lock_init(&serv->sv_lock);
>> + spin_lock_init(&serv->sv_drc_lock);
>>
>> serv->sv_nrpools = npools;
>> serv->sv_pools =
>> --
>> 1.5.4.3
>>
From: Andy Adamson <[email protected]>
Signed-off-by: Andy Adamson <[email protected]>
---
fs/nfsd/nfs4state.c | 4 +---
1 files changed, 1 insertions(+), 3 deletions(-)
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 1e995ec..6be72ad 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -1012,7 +1012,7 @@ nfsd4_store_cache_entry(struct nfsd4_compoundres *resp)
struct nfsd4_compoundargs *args = rqstp->rq_argp;
struct nfsd4_op *op = &args->ops[resp->opcnt];
- dprintk("--> %s entry %p\n", __func__, entry);
+ dprintk("--> %s cachethis %d\n", __func__, entry->ce_cachethis);
/* Don't cache a failed OP_SEQUENCE. */
if (resp->opcnt == 1 && op->opnum == OP_SEQUENCE && resp->cstate.status)
@@ -1024,8 +1024,6 @@ nfsd4_store_cache_entry(struct nfsd4_compoundres *resp)
/* Don't cache the sequence operation, use the slot values on replay. */
if (nfsd4_not_cached(resp)) {
entry->ce_datav.iov_len = 0;
- dprintk("%s Just cache SEQUENCE. ce_cachethis %d\n", __func__,
- resp->cstate.slot->sl_cache_entry.ce_cachethis);
return 0;
}
--
1.5.4.3
From: Andy Adamson <[email protected]>
Replacing page based drc cache with buffer based drc cache.
Signed-off-by: Andy Adamson <[email protected]>
---
fs/nfsd/nfs4state.c | 15 +++------------
1 files changed, 3 insertions(+), 12 deletions(-)
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 1affb0a..efc9374 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -1086,8 +1086,8 @@ nfsd4_enc_sequence_replay(struct nfsd4_compoundargs *args,
}
/*
- * Keep the first page of the replay. Copy the NFSv4.1 data from the first
- * cached page. Replace any futher replay pages from the cache.
+ * The sequence operation is not cached because we can use the slot and
+ * session values.
*/
__be32
nfsd4_replay_cache_entry(struct nfsd4_compoundres *resp,
@@ -1098,16 +1098,7 @@ nfsd4_replay_cache_entry(struct nfsd4_compoundres *resp,
dprintk("--> %s entry %p\n", __func__, entry);
- /*
- * If this is just the sequence operation, we did not keep
- * a page in the cache entry because we can just use the
- * slot info stored in struct nfsd4_sequence that was checked
- * against the slot in nfsd4_sequence().
- *
- * This occurs when seq->cachethis is FALSE, or when the client
- * session inactivity timer fires and a solo sequence operation
- * is sent (lease renewal).
- */
+ /* Target max slot and flags will be set once they are implemented */
seq->maxslots = resp->cstate.session->se_fchannel.maxreqs;
/* Either returns 0 or nfserr_retry_uncached */
--
1.5.4.3
From: Andy Adamson <[email protected]>
Set resp->p in nfsd4_replay_cache_entry so that the nfs4svc_encode_compoundres
iov_len calculation is correct.
Signed-off-by: Andy Adamson <[email protected]>
---
fs/nfsd/nfs4state.c | 1 +
fs/nfsd/nfs4xdr.c | 5 +----
include/linux/nfsd/xdr4.h | 1 -
3 files changed, 2 insertions(+), 5 deletions(-)
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 8c96451..1affb0a 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -1120,6 +1120,7 @@ nfsd4_replay_cache_entry(struct nfsd4_compoundres *resp,
entry->ce_datav.iov_len);
resp->opcnt = entry->ce_opcnt;
+ resp->p = resp->cstate.datap + XDR_QUADLEN(entry->ce_datav.iov_len);
status = entry->ce_status;
return status;
diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
index f4bba66..1e46525 100644
--- a/fs/nfsd/nfs4xdr.c
+++ b/fs/nfsd/nfs4xdr.c
@@ -3359,10 +3359,7 @@ nfs4svc_encode_compoundres(struct svc_rqst *rqstp, __be32 *p, struct nfsd4_compo
iov->iov_len = ((char*)resp->p) - (char*)iov->iov_base;
BUG_ON(iov->iov_len > PAGE_SIZE);
if (nfsd4_has_session(&resp->cstate)) {
- if (resp->cstate.status == nfserr_replay_cache &&
- !nfsd4_not_cached(resp)) {
- iov->iov_len = resp->cstate.iovlen;
- } else {
+ if (resp->cstate.status != nfserr_replay_cache) {
/* FIXME: return ignored for now. Can have -ENOMEM */
nfsd4_store_cache_entry(resp);
dprintk("%s: SET SLOT STATE TO AVAILABLE\n", __func__);
diff --git a/include/linux/nfsd/xdr4.h b/include/linux/nfsd/xdr4.h
index 97f5cee..45ccbc3 100644
--- a/include/linux/nfsd/xdr4.h
+++ b/include/linux/nfsd/xdr4.h
@@ -52,7 +52,6 @@ struct nfsd4_compound_state {
struct nfsd4_session *session;
struct nfsd4_slot *slot;
__be32 *datap;
- size_t iovlen;
u32 minorversion;
u32 status;
};
--
1.5.4.3
From: Andy Adamson <[email protected]>
This function is only used for SEQUENCE replay.
Signed-off-by: Andy Adamson <[email protected]>
---
fs/nfsd/nfs4proc.c | 4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
index 32d5866..f8e7720 100644
--- a/fs/nfsd/nfs4proc.c
+++ b/fs/nfsd/nfs4proc.c
@@ -865,7 +865,7 @@ static const char *nfsd4_op_name(unsigned opnum);
* encode the uncache rep error on the next operation.
*/
static __be32
-nfsd4_enc_uncached_replay(struct nfsd4_compoundargs *args,
+nfsd4_enc_sequence_replay(struct nfsd4_compoundargs *args,
struct nfsd4_compoundres *resp)
{
struct nfsd4_op *op;
@@ -1000,7 +1000,7 @@ encode_op:
if (resp->cstate.status == nfserr_replay_cache) {
dprintk("%s NFS4.1 replay from cache\n", __func__);
if (nfsd4_not_cached(resp))
- status = nfsd4_enc_uncached_replay(args, resp);
+ status = nfsd4_enc_sequence_replay(args, resp);
else
status = op->status;
goto out;
--
1.5.4.3
From: Andy Adamson <[email protected]>
Replacing page based drc cache with buffer based drc cache.
Signed-off-by: Andy Adamson <[email protected]>
---
fs/nfsd/nfs4state.c | 11 -----------
include/linux/nfsd/state.h | 3 ---
2 files changed, 0 insertions(+), 14 deletions(-)
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 1d7f2c0..5e2246c 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -1041,8 +1041,6 @@ nfsd4_store_cache_entry(struct nfsd4_compoundres *resp)
*/
if (nfsd4_not_cached(resp)) {
- entry->ce_resused = 0;
- entry->ce_rpchdrlen = 0;
entry->ce_datav.iov_len = 0;
dprintk("%s Just cache SEQUENCE. ce_cachethis %d\n", __func__,
resp->cstate.slot->sl_cache_entry.ce_cachethis);
@@ -1062,13 +1060,6 @@ nfsd4_store_cache_entry(struct nfsd4_compoundres *resp)
entry->ce_datav.iov_len = (char *)resp->p - (char *)resp->cstate.datap;
memcpy(entry->ce_datav.iov_base, resp->cstate.datap,
entry->ce_datav.iov_len);
-
- entry->ce_resused = rqstp->rq_resused;
- if (entry->ce_resused > NFSD_PAGES_PER_SLOT + 1)
- entry->ce_resused = NFSD_PAGES_PER_SLOT + 1;
- /* Current request rpc header length*/
- entry->ce_rpchdrlen = (char *)resp->cstate.statp -
- (char *)page_address(rqstp->rq_respages[0]);
return 0;
}
@@ -1104,9 +1095,7 @@ nfsd4_replay_cache_entry(struct nfsd4_compoundres *resp,
memcpy(resp->cstate.datap, entry->ce_datav.iov_base,
entry->ce_datav.iov_len);
- resp->rqstp->rq_resused = entry->ce_resused;
resp->opcnt = entry->ce_opcnt;
- resp->cstate.iovlen = entry->ce_datav.iov_len + entry->ce_rpchdrlen;
status = entry->ce_status;
return status;
diff --git a/include/linux/nfsd/state.h b/include/linux/nfsd/state.h
index b8aa47a..29ed755 100644
--- a/include/linux/nfsd/state.h
+++ b/include/linux/nfsd/state.h
@@ -117,11 +117,8 @@ struct nfs4_callback {
struct nfsd4_cache_entry {
__be32 ce_status;
struct kvec ce_datav; /* encoded cached operations */
- struct page *ce_respages[NFSD_PAGES_PER_SLOT + 1];
int ce_cachethis;
- short ce_resused;
int ce_opcnt;
- int ce_rpchdrlen;
};
struct nfsd4_slot {
--
1.5.4.3
From: Andy Adamson <[email protected]>
Change from page based to memory based NFSv4.1 DRC.
Use the existing nfsd4_cache_entry ce_datav kvec to hold up to
NFSD_SLOT_CACHE_SIZE bytes of all encoded operation data past the encoded
sequence operation. Allocate the buffer on demand, and keep it for the life
of the session.
When we have dynamic slot negotiation we will need to shut down the slot upon
allocation error.
Signed-off-by: Andy Adamson <[email protected]>
---
fs/nfsd/nfs4state.c | 31 ++++++++++++++++++++++++-------
fs/nfsd/nfs4xdr.c | 2 ++
include/linux/nfsd/state.h | 2 +-
include/linux/nfsd/xdr4.h | 3 ++-
4 files changed, 29 insertions(+), 9 deletions(-)
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 5666feb..9e79e0c 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -1048,20 +1048,19 @@ nfsd4_copy_pages(struct page **topages, struct page **frompages, short count)
* Store the base and length of the rq_req.head[0] page
* of the NFSv4.1 data, just past the rpc header.
*/
-void
+__be32
nfsd4_store_cache_entry(struct nfsd4_compoundres *resp)
{
struct nfsd4_cache_entry *entry = &resp->cstate.slot->sl_cache_entry;
struct svc_rqst *rqstp = resp->rqstp;
struct nfsd4_compoundargs *args = rqstp->rq_argp;
struct nfsd4_op *op = &args->ops[resp->opcnt];
- struct kvec *resv = &rqstp->rq_res.head[0];
dprintk("--> %s entry %p\n", __func__, entry);
/* Don't cache a failed OP_SEQUENCE. */
if (resp->opcnt == 1 && op->opnum == OP_SEQUENCE && resp->cstate.status)
- return;
+ return 0;
nfsd4_release_respages(entry->ce_respages, entry->ce_resused);
entry->ce_opcnt = resp->opcnt;
@@ -1075,21 +1074,35 @@ nfsd4_store_cache_entry(struct nfsd4_compoundres *resp)
if (nfsd4_not_cached(resp)) {
entry->ce_resused = 0;
entry->ce_rpchdrlen = 0;
+ entry->ce_datav.iov_len = 0;
dprintk("%s Just cache SEQUENCE. ce_cachethis %d\n", __func__,
resp->cstate.slot->sl_cache_entry.ce_cachethis);
- return;
+ return 0;
}
+
+ /* allocate the buffer on demand and keep it for the life of the slot */
+ if (entry->ce_datav.iov_base == NULL) {
+ entry->ce_datav.iov_base = kmalloc(NFSD_SLOT_CACHE_SIZE,
+ GFP_KERNEL);
+ if (!entry->ce_datav.iov_base) {
+ /* FIXME: Should remove slot from session */
+ printk(KERN_WARNING "NFSD: No memory for DRC\n");
+ return -ENOMEM;
+ }
+ }
+ entry->ce_datav.iov_len = (char *)resp->p - (char *)resp->cstate.datap;
+ memcpy(entry->ce_datav.iov_base, resp->cstate.datap,
+ entry->ce_datav.iov_len);
+
entry->ce_resused = rqstp->rq_resused;
if (entry->ce_resused > NFSD_PAGES_PER_SLOT + 1)
entry->ce_resused = NFSD_PAGES_PER_SLOT + 1;
nfsd4_copy_pages(entry->ce_respages, rqstp->rq_respages,
entry->ce_resused);
- entry->ce_datav.iov_base = resp->cstate.statp;
- entry->ce_datav.iov_len = resv->iov_len - ((char *)resp->cstate.statp -
- (char *)page_address(rqstp->rq_respages[0]));
/* Current request rpc header length*/
entry->ce_rpchdrlen = (char *)resp->cstate.statp -
(char *)page_address(rqstp->rq_respages[0]);
+ return 0;
}
/*
@@ -1146,6 +1159,10 @@ nfsd4_replay_cache_entry(struct nfsd4_compoundres *resp,
return nfs_ok;
}
+ /* The sequence operation has been encoded, cstate->datap set. */
+ memcpy(resp->cstate.datap, entry->ce_datav.iov_base,
+ entry->ce_datav.iov_len);
+
if (!nfsd41_copy_replay_data(resp, entry)) {
/*
* Not enough room to use the replay rpc header, send the
diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
index 7459900..f4bba66 100644
--- a/fs/nfsd/nfs4xdr.c
+++ b/fs/nfsd/nfs4xdr.c
@@ -3113,6 +3113,7 @@ nfsd4_encode_sequence(struct nfsd4_compoundres *resp, int nfserr,
WRITE32(0);
ADJUST_ARGS();
+ resp->cstate.datap = p; /* DRC cache data pointer */
return 0;
}
@@ -3362,6 +3363,7 @@ nfs4svc_encode_compoundres(struct svc_rqst *rqstp, __be32 *p, struct nfsd4_compo
!nfsd4_not_cached(resp)) {
iov->iov_len = resp->cstate.iovlen;
} else {
+ /* FIXME: return ignored for now. Can have -ENOMEM */
nfsd4_store_cache_entry(resp);
dprintk("%s: SET SLOT STATE TO AVAILABLE\n", __func__);
resp->cstate.slot->sl_inuse = 0;
diff --git a/include/linux/nfsd/state.h b/include/linux/nfsd/state.h
index 473eb0d..b8aa47a 100644
--- a/include/linux/nfsd/state.h
+++ b/include/linux/nfsd/state.h
@@ -116,7 +116,7 @@ struct nfs4_callback {
struct nfsd4_cache_entry {
__be32 ce_status;
- struct kvec ce_datav; /* encoded NFSv4.1 data in rq_res.head[0] */
+ struct kvec ce_datav; /* encoded cached operations */
struct page *ce_respages[NFSD_PAGES_PER_SLOT + 1];
int ce_cachethis;
short ce_resused;
diff --git a/include/linux/nfsd/xdr4.h b/include/linux/nfsd/xdr4.h
index 8fd5eb1..5622e56 100644
--- a/include/linux/nfsd/xdr4.h
+++ b/include/linux/nfsd/xdr4.h
@@ -52,6 +52,7 @@ struct nfsd4_compound_state {
struct nfsd4_session *session;
struct nfsd4_slot *slot;
__be32 *statp;
+ __be32 *datap;
size_t iovlen;
u32 minorversion;
u32 status;
@@ -514,7 +515,7 @@ extern __be32 nfsd4_setclientid(struct svc_rqst *rqstp,
extern __be32 nfsd4_setclientid_confirm(struct svc_rqst *rqstp,
struct nfsd4_compound_state *,
struct nfsd4_setclientid_confirm *setclientid_confirm);
-extern void nfsd4_store_cache_entry(struct nfsd4_compoundres *resp);
+extern __be32 nfsd4_store_cache_entry(struct nfsd4_compoundres *resp);
extern __be32 nfsd4_replay_cache_entry(struct nfsd4_compoundres *resp,
struct nfsd4_sequence *seq);
extern void nfsd4_cache_create_session(struct nfsd4_create_session *cr_ses,
--
1.5.4.3
From: Andy Adamson <[email protected]>
We won't cache the SEQUENCE operation because we can encode it from the session
slot values.
Signed-off-by: Andy Adamson <[email protected]>
---
fs/nfsd/nfs4state.c | 10 +++++++---
1 files changed, 7 insertions(+), 3 deletions(-)
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 6bcf494..5666feb 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -448,6 +448,12 @@ static int set_forechannel_maxreqs(struct nfsd4_channel_attrs *fchan)
return status;
}
+/* rpc header + encoded OP_SEQUENCE reply + NFSD_SLOT_CACHE_SIZE in bytes */
+#define NFSD_MAX_RESPONSE_CACHED ((RPC_MAX_HEADER_WITH_AUTH + \
+ 2 + /* OP_SEQUENCE header */ \
+ XDR_QUADLEN(NFS4_MAX_SESSIONID_LEN) + 5 + \
+ XDR_QUADLEN(NFSD_SLOT_CACHE_SIZE)) << 2)
+
/*
* fchan holds the client values on input, and the server values on output
*/
@@ -469,9 +475,7 @@ static int init_forechannel_attrs(struct svc_rqst *rqstp,
fchan->maxresp_sz = maxcount;
session_fchan->maxresp_sz = fchan->maxresp_sz;
- /* Set the max response cached size our default which is
- * a multiple of PAGE_SIZE and small */
- session_fchan->maxresp_cached = NFSD_PAGES_PER_SLOT * PAGE_SIZE;
+ session_fchan->maxresp_cached = NFSD_MAX_RESPONSE_CACHED;
fchan->maxresp_cached = session_fchan->maxresp_cached;
/* Use the client's maxops if possible */
--
1.5.4.3
From: Andy Adamson <[email protected]>
NFSD_SLOT_CACHE_SIZE is the size of all encoded operation responses (excluding
the sequence operation) that we want to cache.
As an estimate, use the same size as the v3 DRC upper limit.
Signed-off-by: Andy Adamson <[email protected]>
---
fs/nfsd/nfs4state.c | 13 +++++++------
fs/nfsd/nfssvc.c | 11 ++++++-----
include/linux/nfsd/state.h | 2 ++
include/linux/sunrpc/svc.h | 4 ++--
4 files changed, 17 insertions(+), 13 deletions(-)
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index d570472..6bcf494 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -425,24 +425,25 @@ gen_sessionid(struct nfsd4_session *ses)
*/
static int set_forechannel_maxreqs(struct nfsd4_channel_attrs *fchan)
{
- int status = 0, np = fchan->maxreqs * NFSD_PAGES_PER_SLOT;
+ int status = 0, mem;
if (fchan->maxreqs < 1)
return nfserr_inval;
else if (fchan->maxreqs > NFSD_MAX_SLOTS_PER_SESSION)
fchan->maxreqs = NFSD_MAX_SLOTS_PER_SESSION;
+ mem = fchan->maxreqs * NFSD_SLOT_CACHE_SIZE;
spin_lock(&nfsd_serv->sv_drc_lock);
- if (np + nfsd_serv->sv_drc_pages_used > nfsd_serv->sv_drc_max_pages)
- np = nfsd_serv->sv_drc_max_pages - nfsd_serv->sv_drc_pages_used;
- nfsd_serv->sv_drc_pages_used += np;
+ if (mem + nfsd_serv->sv_drc_mem_used > nfsd_serv->sv_drc_max_mem)
+ mem = nfsd_serv->sv_drc_max_mem - nfsd_serv->sv_drc_mem_used;
+ nfsd_serv->sv_drc_mem_used += mem;
spin_unlock(&nfsd_serv->sv_drc_lock);
- if (np <= 0) {
+ if (mem < NFSD_SLOT_CACHE_SIZE) {
status = nfserr_resource;
fchan->maxreqs = 0;
} else
- fchan->maxreqs = np / NFSD_PAGES_PER_SLOT;
+ fchan->maxreqs = mem / NFSD_SLOT_CACHE_SIZE;
return status;
}
diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
index cbba4a9..125f2ef 100644
--- a/fs/nfsd/nfssvc.c
+++ b/fs/nfsd/nfssvc.c
@@ -238,11 +238,12 @@ static void set_max_drc(void)
{
/* The percent of nr_free_buffer_pages used by the V4.1 server DRC */
#define NFSD_DRC_SIZE_SHIFT 7
- nfsd_serv->sv_drc_max_pages = nr_free_buffer_pages()
- >> NFSD_DRC_SIZE_SHIFT;
- nfsd_serv->sv_drc_pages_used = 0;
- dprintk("%s svc_drc_max_pages %u\n", __func__,
- nfsd_serv->sv_drc_max_pages);
+ nfsd_serv->sv_drc_max_mem = (nr_free_buffer_pages()
+ >> NFSD_DRC_SIZE_SHIFT) * PAGE_SIZE;
+ nfsd_serv->sv_drc_mem_used = 0;
+ dprintk("%s svc_drc_max_mem %u [in pages %lu]\n", __func__,
+ nfsd_serv->sv_drc_max_mem,
+ nfsd_serv->sv_drc_max_mem / PAGE_SIZE);
}
int nfsd_create_serv(void)
diff --git a/include/linux/nfsd/state.h b/include/linux/nfsd/state.h
index 87f3eaa..473eb0d 100644
--- a/include/linux/nfsd/state.h
+++ b/include/linux/nfsd/state.h
@@ -109,6 +109,8 @@ struct nfs4_callback {
#define NFSD_MAX_SLOTS_PER_SESSION 128
/* Maximum number of pages per slot cache entry */
#define NFSD_PAGES_PER_SLOT 1
+/* Size of each slot cache entry */
+#define NFSD_SLOT_CACHE_SIZE 1024
/* Maximum number of operations per session compound */
#define NFSD_MAX_OPS_PER_COMPOUND 16
diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
index 0d2315c..dc21169 100644
--- a/include/linux/sunrpc/svc.h
+++ b/include/linux/sunrpc/svc.h
@@ -95,8 +95,8 @@ struct svc_serv {
* adding threads */
svc_thread_fn sv_function; /* main function for threads */
spinlock_t sv_drc_lock;
- unsigned int sv_drc_max_pages; /* Total pages for DRC */
- unsigned int sv_drc_pages_used;/* DRC pages used */
+ unsigned int sv_drc_max_mem; /* Total memory for DRC */
+ unsigned int sv_drc_mem_used;/* DRC memory used */
};
/*
--
1.5.4.3
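A quick worked example of the new accounting in set_forechannel_maxreqs (the
numbers are illustrative only, not taken from the patch):

	/* Suppose sv_drc_max_mem is 512 KB and sv_drc_mem_used is 500 KB.
	 * A client asking for 64 slots wants
	 *	mem = 64 * NFSD_SLOT_CACHE_SIZE = 64 KB,
	 * which exceeds the 12 KB still available, so mem is clamped to 12 KB,
	 * sv_drc_mem_used becomes 512 KB, and
	 *	fchan->maxreqs = 12 KB / 1024 = 12 slots.
	 * Had less than NFSD_SLOT_CACHE_SIZE remained, the function would
	 * return nfserr_resource with fchan->maxreqs = 0.
	 */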
From: Andy Adamson <[email protected]>
The session and slots are allocated all in one piece.
Signed-off-by: Andy Adamson <[email protected]>
---
fs/nfsd/nfs4state.c | 1 -
1 files changed, 0 insertions(+), 1 deletions(-)
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index dd16196..4cc66f3 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -582,7 +582,6 @@ free_session(struct kref *kref)
struct nfsd4_cache_entry *e = &ses->se_slots[i].sl_cache_entry;
nfsd4_release_respages(e->ce_respages, e->ce_resused);
}
- kfree(ses->se_slots);
kfree(ses);
}
--
1.5.4.3
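For context, "allocated all in one piece" means the slot array lives inside the
session allocation, so freeing the session frees the slots as well; something
along these lines (a hypothetical sketch, not the actual alloc_init_session
code):

	/* Hypothetical sketch only */
	ses = kzalloc(sizeof(*ses) + maxreqs * sizeof(struct nfsd4_slot),
		      GFP_KERNEL);
	if (!ses)
		return nfserr_resource;
	/* ses->se_slots[] occupies the tail of the same allocation, so the
	 * single kfree(ses) in free_session() releases both. */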
From: Andy Adamson <[email protected]>
expire_client can be called on a confirmed or unconfirmed client while the
create session operation is being processed and the clientid slot is in use,
so take a reference on the client (cl_count) for the duration of
nfsd4_create_session and drop it on every exit path.
Signed-off-by: Andy Adamson <[email protected]>
---
fs/nfsd/nfs4state.c | 14 ++++++++++----
1 files changed, 10 insertions(+), 4 deletions(-)
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index a585a58..accad58 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -1355,6 +1355,7 @@ nfsd4_create_session(struct svc_rqst *rqstp,
conf = find_confirmed_client(&cr_ses->clientid);
if (conf) {
+ atomic_inc(&conf->cl_count);
slot = &conf->cl_slot;
status = check_slot_seqid(cr_ses->seqid, slot);
if (status == nfserr_replay_cache) {
@@ -1363,27 +1364,30 @@ nfsd4_create_session(struct svc_rqst *rqstp,
cstate->status = nfserr_replay_clientid_cache;
/* Return the cached reply status */
status = nfsd4_replay_create_session(resp, slot);
- goto out;
+ goto out_put;
} else if (cr_ses->seqid != conf->cl_slot.sl_seqid + 1) {
status = nfserr_seq_misordered;
dprintk("Sequence misordered!\n");
dprintk("Expected seqid= %d but got seqid= %d\n",
slot->sl_seqid, cr_ses->seqid);
- goto out;
+ goto out_put;
}
conf->cl_slot.sl_seqid++;
} else if (unconf) {
+ atomic_inc(&unconf->cl_count);
slot = &unconf->cl_slot;
status = check_slot_seqid(cr_ses->seqid, slot);
if (status) {
/* an unconfirmed replay returns misordered */
status = nfserr_seq_misordered;
- goto out;
+ conf = unconf; /* for put_nfs4_client */
+ goto out_put;
}
if (!same_creds(&unconf->cl_cred, &rqstp->rq_cred) ||
(ip_addr != unconf->cl_addr)) {
status = nfserr_clid_inuse;
+ conf = unconf; /* for put_nfs4_client */
goto out_cache;
}
@@ -1413,7 +1417,7 @@ nfsd4_create_session(struct svc_rqst *rqstp,
status = alloc_init_session(rqstp, conf, cr_ses);
if (status)
- goto out;
+ goto out_put;
memcpy(cr_ses->sessionid.data, conf->cl_sessionid.data,
NFS4_MAX_SESSIONID_LEN);
@@ -1423,6 +1427,8 @@ out_cache:
/* cache solo and embedded create sessions under the state lock */
nfsd4_cache_create_session(cr_ses, slot, status);
+out_put:
+ put_nfs4_client(conf);
out:
nfs4_unlock_state();
dprintk("%s returns %d\n", __func__, ntohl(status));
--
1.5.4.3
From: Andy Adamson <[email protected]>
CREATE_SESSION replays, which used to set the cstate slot but not the session,
no longer set the cstate slot nor use nfsd4_store_cache_entry.
The SEQUENCE operation, which references the session, always sets both the
cstate slot and the session.
Signed-off-by: Andy Adamson <[email protected]>
---
fs/nfsd/nfs4xdr.c | 3 +--
1 files changed, 1 insertions(+), 2 deletions(-)
diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
index 1e46525..5d6059a 100644
--- a/fs/nfsd/nfs4xdr.c
+++ b/fs/nfsd/nfs4xdr.c
@@ -3365,8 +3365,7 @@ nfs4svc_encode_compoundres(struct svc_rqst *rqstp, __be32 *p, struct nfsd4_compo
dprintk("%s: SET SLOT STATE TO AVAILABLE\n", __func__);
resp->cstate.slot->sl_inuse = 0;
}
- if (resp->cstate.session)
- nfsd4_put_session(resp->cstate.session);
+ nfsd4_put_session(resp->cstate.session);
}
return 1;
}
--
1.5.4.3
From: Andy Adamson <[email protected]>
The size of the nfsd4_op array in nfsd4_compoundargs determines the
supported maximum number of operations.
Signed-off-by: Andy Adamson <[email protected]>
---
include/linux/nfsd/state.h | 2 +-
include/linux/nfsd/xdr4.h | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/include/linux/nfsd/state.h b/include/linux/nfsd/state.h
index 0dfdb6e..d65ce6f 100644
--- a/include/linux/nfsd/state.h
+++ b/include/linux/nfsd/state.h
@@ -112,7 +112,7 @@ struct nfs4_callback {
/* Size of each slot cache entry */
#define NFSD_SLOT_CACHE_SIZE 1024
/* Maximum number of operations per session compound */
-#define NFSD_MAX_OPS_PER_COMPOUND 16
+#define NFSD_MAX_OPS_PER_COMPOUND 8
struct nfsd4_cache_entry {
__be32 ce_status;
diff --git a/include/linux/nfsd/xdr4.h b/include/linux/nfsd/xdr4.h
index 45ccbc3..e312601 100644
--- a/include/linux/nfsd/xdr4.h
+++ b/include/linux/nfsd/xdr4.h
@@ -455,7 +455,7 @@ struct nfsd4_compoundargs {
u32 minorversion;
u32 opcnt;
struct nfsd4_op *ops;
- struct nfsd4_op iops[8];
+ struct nfsd4_op iops[NFSD_MAX_OPS_PER_COMPOUND];
};
struct nfsd4_compoundres {
--
1.5.4.3
From: Andy Adamson <[email protected]>
Failed sequence operations are not cached.
Signed-off-by: Andy Adamson <[email protected]>
---
include/linux/nfsd/xdr4.h | 10 +++++++++-
1 files changed, 9 insertions(+), 1 deletions(-)
diff --git a/include/linux/nfsd/xdr4.h b/include/linux/nfsd/xdr4.h
index e312601..8255a87 100644
--- a/include/linux/nfsd/xdr4.h
+++ b/include/linux/nfsd/xdr4.h
@@ -472,6 +472,13 @@ struct nfsd4_compoundres {
struct nfsd4_compound_state cstate;
};
+static inline bool nfsd4_is_failed_sequence(struct nfsd4_compoundres *resp)
+{
+ struct nfsd4_compoundargs *args = resp->rqstp->rq_argp;
+ return resp->opcnt == 1 && args->ops[0].opnum == OP_SEQUENCE &&
+ resp->cstate.status;
+}
+
static inline bool nfsd4_is_solo_sequence(struct nfsd4_compoundres *resp)
{
struct nfsd4_compoundargs *args = resp->rqstp->rq_argp;
@@ -481,7 +488,8 @@ static inline bool nfsd4_is_solo_sequence(struct nfsd4_compoundres *resp)
static inline bool nfsd4_not_cached(struct nfsd4_compoundres *resp)
{
return !resp->cstate.slot->sl_cache_entry.ce_cachethis ||
- nfsd4_is_solo_sequence(resp);
+ nfsd4_is_solo_sequence(resp) ||
+ nfsd4_is_failed_sequence(resp);
}
#define NFS4_SVC_XDRSIZE sizeof(struct nfsd4_compoundargs)
--
1.5.4.3
From: Andy Adamson <[email protected]>
Remove the failed OP_SEQUENCE check from nfsd4_store_cache_entry; that case is
now covered by nfsd4_not_cached().
Signed-off-by: Andy Adamson <[email protected]>
---
fs/nfsd/nfs4state.c | 7 -------
1 files changed, 0 insertions(+), 7 deletions(-)
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 6be72ad..4d211ff 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -1008,16 +1008,9 @@ __be32
nfsd4_store_cache_entry(struct nfsd4_compoundres *resp)
{
struct nfsd4_cache_entry *entry = &resp->cstate.slot->sl_cache_entry;
- struct svc_rqst *rqstp = resp->rqstp;
- struct nfsd4_compoundargs *args = rqstp->rq_argp;
- struct nfsd4_op *op = &args->ops[resp->opcnt];
dprintk("--> %s cachethis %d\n", __func__, entry->ce_cachethis);
- /* Don't cache a failed OP_SEQUENCE. */
- if (resp->opcnt == 1 && op->opnum == OP_SEQUENCE && resp->cstate.status)
- return 0;
-
entry->ce_opcnt = resp->opcnt;
entry->ce_status = resp->cstate.status;
--
1.5.4.3
From: Andy Adamson <[email protected]>
128 slots is a bit much without proof that they are needed.
Signed-off-by: Andy Adamson <[email protected]>
---
include/linux/nfsd/state.h | 4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/include/linux/nfsd/state.h b/include/linux/nfsd/state.h
index 29ed755..0dfdb6e 100644
--- a/include/linux/nfsd/state.h
+++ b/include/linux/nfsd/state.h
@@ -105,8 +105,8 @@ struct nfs4_callback {
struct rpc_clnt * cb_client;
};
-/* Maximum number of slots per session. 128 is useful for long haul TCP */
-#define NFSD_MAX_SLOTS_PER_SESSION 128
+/* Maximum number of slots per session */
+#define NFSD_MAX_SLOTS_PER_SESSION 16
/* Maximum number of pages per slot cache entry */
#define NFSD_PAGES_PER_SLOT 1
/* Size of each slot cache entry */
--
1.5.4.3
From: Andy Adamson <[email protected]>
Update the nfsd4_store_cache_entry comments to describe the buffer based cache
instead of the old reply pages.
Signed-off-by: Andy Adamson <[email protected]>
---
fs/nfsd/nfs4state.c | 16 +++-------------
1 files changed, 3 insertions(+), 13 deletions(-)
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index efc9374..1e995ec 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -1001,14 +1001,8 @@ out_err:
}
/*
- * Cache the reply pages up to NFSD_PAGES_PER_SLOT + 1, clearing the previous
- * pages. We add a page to NFSD_PAGES_PER_SLOT for the case where the total
- * length of the XDR response is less than se_fmaxresp_cached
- * (NFSD_PAGES_PER_SLOT * PAGE_SIZE) but the xdr_buf pages is used for a
- * of the reply (e.g. readdir).
- *
- * Store the base and length of the rq_req.head[0] page
- * of the NFSv4.1 data, just past the rpc header.
+ * Copy all encoded operation responses past the sequence response into the
+ * slot's cache.
*/
__be32
nfsd4_store_cache_entry(struct nfsd4_compoundres *resp)
@@ -1027,11 +1021,7 @@ nfsd4_store_cache_entry(struct nfsd4_compoundres *resp)
entry->ce_opcnt = resp->opcnt;
entry->ce_status = resp->cstate.status;
- /*
- * Don't need a page to cache just the sequence operation - the slot
- * does this for us!
- */
-
+ /* Don't cache the sequence operation, use the slot values on replay. */
if (nfsd4_not_cached(resp)) {
entry->ce_datav.iov_len = 0;
dprintk("%s Just cache SEQUENCE. ce_cachethis %d\n", __func__,
--
1.5.4.3
From: Andy Adamson <[email protected]>
Replace the page-based DRC cache with a buffer-based DRC cache.
Signed-off-by: Andy Adamson <[email protected]>
---
fs/nfsd/nfs4state.c | 8 --------
fs/nfsd/nfssvc.c | 4 ----
include/linux/nfsd/xdr4.h | 1 -
3 files changed, 0 insertions(+), 13 deletions(-)
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 5e2246c..9ac0d71 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -1000,14 +1000,6 @@ out_err:
return;
}
-void
-nfsd4_set_statp(struct svc_rqst *rqstp, __be32 *statp)
-{
- struct nfsd4_compoundres *resp = rqstp->rq_resp;
-
- resp->cstate.statp = statp;
-}
-
/*
* Cache the reply pages up to NFSD_PAGES_PER_SLOT + 1, clearing the previous
* pages. We add a page to NFSD_PAGES_PER_SLOT for the case where the total
diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
index 125f2ef..ea61f62 100644
--- a/fs/nfsd/nfssvc.c
+++ b/fs/nfsd/nfssvc.c
@@ -561,10 +561,6 @@ nfsd_dispatch(struct svc_rqst *rqstp, __be32 *statp)
+ rqstp->rq_res.head[0].iov_len;
rqstp->rq_res.head[0].iov_len += sizeof(__be32);
- /* NFSv4.1 DRC requires statp */
- if (rqstp->rq_vers == 4)
- nfsd4_set_statp(rqstp, statp);
-
/* Now call the procedure handler, and encode NFS status. */
nfserr = proc->pc_func(rqstp, rqstp->rq_argp, rqstp->rq_resp);
nfserr = map_new_errors(rqstp->rq_vers, nfserr);
diff --git a/include/linux/nfsd/xdr4.h b/include/linux/nfsd/xdr4.h
index 5622e56..97f5cee 100644
--- a/include/linux/nfsd/xdr4.h
+++ b/include/linux/nfsd/xdr4.h
@@ -51,7 +51,6 @@ struct nfsd4_compound_state {
/* For sessions DRC */
struct nfsd4_session *session;
struct nfsd4_slot *slot;
- __be32 *statp;
__be32 *datap;
size_t iovlen;
u32 minorversion;
--
1.5.4.3
From: Andy Adamson <[email protected]>
The sequence operation is not cached; on a replay, always encode the sequence
operation from the slot table and session values.
If this is a replay of a compound that was specified not to be cached, return
NFS4ERR_RETRY_UNCACHED_REP.
Signed-off-by: Andy Adamson <[email protected]>
---
fs/nfsd/nfs4proc.c | 33 +--------------------------------
fs/nfsd/nfs4state.c | 40 ++++++++++++++++++++++++++++++++++++----
2 files changed, 37 insertions(+), 36 deletions(-)
diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
index f8e7720..d8e3be9 100644
--- a/fs/nfsd/nfs4proc.c
+++ b/fs/nfsd/nfs4proc.c
@@ -860,34 +860,6 @@ static struct nfsd4_operation nfsd4_ops[];
static const char *nfsd4_op_name(unsigned opnum);
/*
- * This is a replay of a compound for which no cache entry pages
- * were used. Encode the sequence operation, and if cachethis is FALSE
- * encode the uncache rep error on the next operation.
- */
-static __be32
-nfsd4_enc_sequence_replay(struct nfsd4_compoundargs *args,
- struct nfsd4_compoundres *resp)
-{
- struct nfsd4_op *op;
-
- dprintk("--> %s resp->opcnt %d ce_cachethis %u \n", __func__,
- resp->opcnt, resp->cstate.slot->sl_cache_entry.ce_cachethis);
-
- /* Encode the replayed sequence operation */
- BUG_ON(resp->opcnt != 1);
- op = &args->ops[resp->opcnt - 1];
- nfsd4_encode_operation(resp, op);
-
- /*return nfserr_retry_uncached_rep in next operation. */
- if (resp->cstate.slot->sl_cache_entry.ce_cachethis == 0) {
- op = &args->ops[resp->opcnt++];
- op->status = nfserr_retry_uncached_rep;
- nfsd4_encode_operation(resp, op);
- }
- return op->status;
-}
-
-/*
* Enforce NFSv4.1 COMPOUND ordering rules.
*
* TODO:
@@ -999,10 +971,7 @@ encode_op:
/* Only from SEQUENCE */
if (resp->cstate.status == nfserr_replay_cache) {
dprintk("%s NFS4.1 replay from cache\n", __func__);
- if (nfsd4_not_cached(resp))
- status = nfsd4_enc_sequence_replay(args, resp);
- else
- status = op->status;
+ status = op->status;
goto out;
}
/* Only from CREATE_SESSION */
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 9ac0d71..8c96451 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -1056,6 +1056,36 @@ nfsd4_store_cache_entry(struct nfsd4_compoundres *resp)
}
/*
+ * Encode the replay sequence operation from the slot values.
+ * If cache this is FALSE encode the uncached rep error on the next
+ * operation which sets resp->p and increment resp->opcnt so that
+ * nfs4svc_encode_compoundres is happy.
+ *
+ */
+static __be32
+nfsd4_enc_sequence_replay(struct nfsd4_compoundargs *args,
+ struct nfsd4_compoundres *resp)
+{
+ struct nfsd4_op *op;
+ struct nfsd4_slot *slot = resp->cstate.slot;
+
+ dprintk("--> %s resp->opcnt %d ce_cachethis %u \n", __func__,
+ resp->opcnt, resp->cstate.slot->sl_cache_entry.ce_cachethis);
+
+ /* Encode the replayed sequence operation */
+ op = &args->ops[resp->opcnt - 1];
+ nfsd4_encode_operation(resp, op);
+
+ /* Return nfserr_retry_uncached_rep in next operation. */
+ if (args->opcnt > 1 && slot->sl_cache_entry.ce_cachethis == 0) {
+ op = &args->ops[resp->opcnt++];
+ op->status = nfserr_retry_uncached_rep;
+ nfsd4_encode_operation(resp, op);
+ }
+ return op->status;
+}
+
+/*
* Keep the first page of the replay. Copy the NFSv4.1 data from the first
* cached page. Replace any futher replay pages from the cache.
*/
@@ -1078,10 +1108,12 @@ nfsd4_replay_cache_entry(struct nfsd4_compoundres *resp,
* session inactivity timer fires and a solo sequence operation
* is sent (lease renewal).
*/
- if (seq && nfsd4_not_cached(resp)) {
- seq->maxslots = resp->cstate.session->se_fchannel.maxreqs;
- return nfs_ok;
- }
+ seq->maxslots = resp->cstate.session->se_fchannel.maxreqs;
+
+ /* Either returns 0 or nfserr_retry_uncached */
+ status = nfsd4_enc_sequence_replay(resp->rqstp->rq_argp, resp);
+ if (status == nfserr_retry_uncached_rep)
+ return status;
/* The sequence operation has been encoded, cstate->datap set. */
memcpy(resp->cstate.datap, entry->ce_datav.iov_base,
--
1.5.4.3
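Putting the two hunks together, the replay path in nfsd4_replay_cache_entry now
looks roughly like this (a condensed sketch of the code above, not a verbatim
copy):

	seq->maxslots = resp->cstate.session->se_fchannel.maxreqs;

	/* SEQUENCE is re-encoded from the slot/session values ... */
	status = nfsd4_enc_sequence_replay(resp->rqstp->rq_argp, resp);
	if (status == nfserr_retry_uncached_rep)
		return status;	/* ... and the compound itself was not cached */

	/* otherwise replay the cached operations that followed SEQUENCE */
	memcpy(resp->cstate.datap, entry->ce_datav.iov_base,
	       entry->ce_datav.iov_len);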
From: Andy Adamson <[email protected]>
Replace the page-based DRC cache with a buffer-based DRC cache.
Signed-off-by: Andy Adamson <[email protected]>
---
fs/nfsd/nfs4state.c | 18 ------------------
1 files changed, 0 insertions(+), 18 deletions(-)
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index baa0e81..1d7f2c0 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -1009,23 +1009,6 @@ nfsd4_set_statp(struct svc_rqst *rqstp, __be32 *statp)
}
/*
- * Dereference the result pages.
- */
-static void
-nfsd4_release_respages(struct page **respages, short resused)
-{
- int i;
-
- dprintk("--> %s\n", __func__);
- for (i = 0; i < resused; i++) {
- if (!respages[i])
- continue;
- put_page(respages[i]);
- respages[i] = NULL;
- }
-}
-
-/*
* Cache the reply pages up to NFSD_PAGES_PER_SLOT + 1, clearing the previous
* pages. We add a page to NFSD_PAGES_PER_SLOT for the case where the total
* length of the XDR response is less than se_fmaxresp_cached
@@ -1049,7 +1032,6 @@ nfsd4_store_cache_entry(struct nfsd4_compoundres *resp)
if (resp->opcnt == 1 && op->opnum == OP_SEQUENCE && resp->cstate.status)
return 0;
- nfsd4_release_respages(entry->ce_respages, entry->ce_resused);
entry->ce_opcnt = resp->opcnt;
entry->ce_status = resp->cstate.status;
--
1.5.4.3
From: Andy Adamson <[email protected]>
Replace the page-based DRC cache with a buffer-based DRC cache.
Signed-off-by: Andy Adamson <[email protected]>
---
fs/nfsd/nfs4state.c | 39 ---------------------------------------
1 files changed, 0 insertions(+), 39 deletions(-)
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 334a3f3..baa0e81 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -1091,32 +1091,6 @@ nfsd4_store_cache_entry(struct nfsd4_compoundres *resp)
}
/*
- * We keep the rpc header, but take the nfs reply from the replycache.
- */
-static int
-nfsd41_copy_replay_data(struct nfsd4_compoundres *resp,
- struct nfsd4_cache_entry *entry)
-{
- struct svc_rqst *rqstp = resp->rqstp;
- struct kvec *resv = &resp->rqstp->rq_res.head[0];
- int len;
-
- /* Current request rpc header length*/
- len = (char *)resp->cstate.statp -
- (char *)page_address(rqstp->rq_respages[0]);
- if (entry->ce_datav.iov_len + len > PAGE_SIZE) {
- dprintk("%s v41 cached reply too large (%Zd).\n", __func__,
- entry->ce_datav.iov_len);
- return 0;
- }
- /* copy the cached reply nfsd data past the current rpc header */
- memcpy((char *)resv->iov_base + len, entry->ce_datav.iov_base,
- entry->ce_datav.iov_len);
- resv->iov_len = len + entry->ce_datav.iov_len;
- return 1;
-}
-
-/*
* Keep the first page of the replay. Copy the NFSv4.1 data from the first
* cached page. Replace any futher replay pages from the cache.
*/
@@ -1148,19 +1122,6 @@ nfsd4_replay_cache_entry(struct nfsd4_compoundres *resp,
memcpy(resp->cstate.datap, entry->ce_datav.iov_base,
entry->ce_datav.iov_len);
- if (!nfsd41_copy_replay_data(resp, entry)) {
- /*
- * Not enough room to use the replay rpc header, send the
- * cached header. Release all the allocated result pages.
- */
- svc_free_res_pages(resp->rqstp);
- } else {
- /* Release all but the first allocated result page */
-
- resp->rqstp->rq_resused--;
- svc_free_res_pages(resp->rqstp);
- }
-
resp->rqstp->rq_resused = entry->ce_resused;
resp->opcnt = entry->ce_opcnt;
resp->cstate.iovlen = entry->ce_datav.iov_len + entry->ce_rpchdrlen;
--
1.5.4.3
From: Andy Adamson <[email protected]>
Replace the page-based DRC cache with a buffer-based DRC cache.
Signed-off-by: Andy Adamson <[email protected]>
---
fs/nfsd/nfs4state.c | 21 ---------------------
1 files changed, 0 insertions(+), 21 deletions(-)
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 787cffe..334a3f3 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -1025,19 +1025,6 @@ nfsd4_release_respages(struct page **respages, short resused)
}
}
-static void
-nfsd4_copy_pages(struct page **topages, struct page **frompages, short count)
-{
- int i;
-
- for (i = 0; i < count; i++) {
- topages[i] = frompages[i];
- if (!topages[i])
- continue;
- get_page(topages[i]);
- }
-}
-
/*
* Cache the reply pages up to NFSD_PAGES_PER_SLOT + 1, clearing the previous
* pages. We add a page to NFSD_PAGES_PER_SLOT for the case where the total
@@ -1097,8 +1084,6 @@ nfsd4_store_cache_entry(struct nfsd4_compoundres *resp)
entry->ce_resused = rqstp->rq_resused;
if (entry->ce_resused > NFSD_PAGES_PER_SLOT + 1)
entry->ce_resused = NFSD_PAGES_PER_SLOT + 1;
- nfsd4_copy_pages(entry->ce_respages, rqstp->rq_respages,
- entry->ce_resused);
/* Current request rpc header length*/
entry->ce_rpchdrlen = (char *)resp->cstate.statp -
(char *)page_address(rqstp->rq_respages[0]);
@@ -1169,17 +1154,11 @@ nfsd4_replay_cache_entry(struct nfsd4_compoundres *resp,
* cached header. Release all the allocated result pages.
*/
svc_free_res_pages(resp->rqstp);
- nfsd4_copy_pages(resp->rqstp->rq_respages, entry->ce_respages,
- entry->ce_resused);
} else {
/* Release all but the first allocated result page */
resp->rqstp->rq_resused--;
svc_free_res_pages(resp->rqstp);
-
- nfsd4_copy_pages(&resp->rqstp->rq_respages[1],
- &entry->ce_respages[1],
- entry->ce_resused - 1);
}
resp->rqstp->rq_resused = entry->ce_resused;
--
1.5.4.3
From: Andy Adamson <[email protected]>
Free each slot's DRC cache buffer when the session is freed, instead of
releasing reply pages.
Signed-off-by: Andy Adamson <[email protected]>
---
fs/nfsd/nfs4state.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 9e79e0c..787cffe 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -590,7 +590,7 @@ free_session(struct kref *kref)
ses = container_of(kref, struct nfsd4_session, se_ref);
for (i = 0; i < ses->se_fchannel.maxreqs; i++) {
struct nfsd4_cache_entry *e = &ses->se_slots[i].sl_cache_entry;
- nfsd4_release_respages(e->ce_respages, e->ce_resused);
+ kfree(e->ce_datav.iov_base);
}
kfree(ses);
}
--
1.5.4.3
From: Andy Adamson <[email protected]>
The clientid cache no longer uses page-based reply caching.
Signed-off-by: Andy Adamson <[email protected]>
---
fs/nfsd/nfs4state.c | 2 --
1 files changed, 0 insertions(+), 2 deletions(-)
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index accad58..dd16196 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -653,8 +653,6 @@ free_client(struct nfs4_client *clp)
shutdown_callback_client(clp);
if (clp->cl_cb_xprt)
svc_xprt_put(clp->cl_cb_xprt);
- nfsd4_release_respages(clp->cl_slot.sl_cache_entry.ce_respages,
- clp->cl_slot.sl_cache_entry.ce_resused);
if (clp->cl_cred.cr_group_info)
put_group_info(clp->cl_cred.cr_group_info);
kfree(clp->cl_principal);
--
1.5.4.3
From: Andy Adamson <[email protected]>
Replay processing needs to precede other error processing; otherwise, in this
case, a replay would always trigger nfserr_clid_inuse.
Signed-off-by: Andy Adamson <[email protected]>
---
fs/nfsd/nfs4state.c | 12 ++++++------
1 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 279b47e..20485c5 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -1374,12 +1374,6 @@ nfsd4_create_session(struct svc_rqst *rqstp,
}
conf->cl_slot.sl_seqid++;
} else if (unconf) {
- if (!same_creds(&unconf->cl_cred, &rqstp->rq_cred) ||
- (ip_addr != unconf->cl_addr)) {
- status = nfserr_clid_inuse;
- goto out_cache;
- }
-
slot = &unconf->cl_slot;
status = check_slot_seqid(cr_ses->seqid, slot);
if (status) {
@@ -1388,6 +1382,12 @@ nfsd4_create_session(struct svc_rqst *rqstp,
goto out;
}
+ if (!same_creds(&unconf->cl_cred, &rqstp->rq_cred) ||
+ (ip_addr != unconf->cl_addr)) {
+ status = nfserr_clid_inuse;
+ goto out_cache;
+ }
+
slot->sl_seqid++; /* from 0 to 1 */
move_to_confirmed(unconf);
--
1.5.4.3
From: Andy Adamson <[email protected]>
Don't use sv_lock, as per the March 31 review by bfields. Instead, serialize
access to sv_drc_pages_used, which changes on session creation, with a
dedicated sv_drc_lock.
Signed-off-by: Andy Adamson <[email protected]>
---
fs/nfsd/nfs4state.c | 4 ++--
include/linux/sunrpc/svc.h | 1 +
net/sunrpc/svc.c | 1 +
3 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 4cc66f3..af21f94 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -427,11 +427,11 @@ static int set_forechannel_maxreqs(struct nfsd4_channel_attrs *fchan)
{
int status = 0, np = fchan->maxreqs * NFSD_PAGES_PER_SLOT;
- spin_lock(&nfsd_serv->sv_lock);
+ spin_lock(&nfsd_serv->sv_drc_lock);
if (np + nfsd_serv->sv_drc_pages_used > nfsd_serv->sv_drc_max_pages)
np = nfsd_serv->sv_drc_max_pages - nfsd_serv->sv_drc_pages_used;
nfsd_serv->sv_drc_pages_used += np;
- spin_unlock(&nfsd_serv->sv_lock);
+ spin_unlock(&nfsd_serv->sv_drc_lock);
if (np <= 0) {
status = nfserr_resource;
diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
index 2a30775..0d2315c 100644
--- a/include/linux/sunrpc/svc.h
+++ b/include/linux/sunrpc/svc.h
@@ -94,6 +94,7 @@ struct svc_serv {
struct module * sv_module; /* optional module to count when
* adding threads */
svc_thread_fn sv_function; /* main function for threads */
+ spinlock_t sv_drc_lock;
unsigned int sv_drc_max_pages; /* Total pages for DRC */
unsigned int sv_drc_pages_used;/* DRC pages used */
};
diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
index 8847add..c25070a 100644
--- a/net/sunrpc/svc.c
+++ b/net/sunrpc/svc.c
@@ -394,6 +394,7 @@ __svc_create(struct svc_program *prog, unsigned int bufsize, int npools,
INIT_LIST_HEAD(&serv->sv_permsocks);
init_timer(&serv->sv_temptimer);
spin_lock_init(&serv->sv_lock);
+ spin_lock_init(&serv->sv_drc_lock);
serv->sv_nrpools = npools;
serv->sv_pools =
--
1.5.4.3
From: Andy Adamson <[email protected]>
Ensure the client-requested maximum requests value is between 1 and
NFSD_MAX_SLOTS_PER_SESSION.
Signed-off-by: Andy Adamson <[email protected]>
---
fs/nfsd/nfs4state.c | 5 +++++
1 files changed, 5 insertions(+), 0 deletions(-)
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index af21f94..d570472 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -427,6 +427,11 @@ static int set_forechannel_maxreqs(struct nfsd4_channel_attrs *fchan)
{
int status = 0, np = fchan->maxreqs * NFSD_PAGES_PER_SLOT;
+ if (fchan->maxreqs < 1)
+ return nfserr_inval;
+ else if (fchan->maxreqs > NFSD_MAX_SLOTS_PER_SESSION)
+ fchan->maxreqs = NFSD_MAX_SLOTS_PER_SESSION;
+
spin_lock(&nfsd_serv->sv_drc_lock);
if (np + nfsd_serv->sv_drc_pages_used > nfsd_serv->sv_drc_max_pages)
np = nfsd_serv->sv_drc_max_pages - nfsd_serv->sv_drc_pages_used;
--
1.5.4.3
From: Andy Adamson <[email protected]>
CREATE_SESSION can be preceded by a SEQUENCE operation, and the
create session single slot cache must be maintained. Encode the results of
a create session call into the cache at the end of processing.
The create session result will also be encoded into the RPC response, and if
it is preceded by a SEQUENCE operation, will also be encoded into the
session slot table cache.
Errors that do not change the create session cache:
A create session NFS4ERR_STALE_CLIENTID error means that a client record
(and associated create session slot) could not be found and therefore can't
be changed. NFSERR_SEQ_MISORDERED errors do not change the slot cache.
All other errors get cached.
Signed-off-by: Andy Adamson <[email protected]>
---
fs/nfsd/nfs4state.c | 6 +++++-
fs/nfsd/nfs4xdr.c | 19 +++++++++++++++++++
include/linux/nfsd/xdr4.h | 2 ++
3 files changed, 26 insertions(+), 1 deletions(-)
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 27ad37f..279b47e 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -1377,7 +1377,7 @@ nfsd4_create_session(struct svc_rqst *rqstp,
if (!same_creds(&unconf->cl_cred, &rqstp->rq_cred) ||
(ip_addr != unconf->cl_addr)) {
status = nfserr_clid_inuse;
- goto out;
+ goto out_cache;
}
slot = &unconf->cl_slot;
@@ -1424,6 +1424,10 @@ nfsd4_create_session(struct svc_rqst *rqstp,
cstate->slot = slot;
/* Ensure a page is used for the cache */
slot->sl_cache_entry.ce_cachethis = 1;
+out_cache:
+ /* cache solo and embedded create sessions under the state lock */
+ nfsd4_cache_create_session(cr_ses, slot, status);
+
out:
nfs4_unlock_state();
dprintk("%s returns %d\n", __func__, ntohl(status));
diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
index b2f8d74..6b34fac 100644
--- a/fs/nfsd/nfs4xdr.c
+++ b/fs/nfsd/nfs4xdr.c
@@ -3046,6 +3046,25 @@ nfsd4_encode_create_session(struct nfsd4_compoundres *resp, int nfserr,
return 0;
}
+/*
+ * Encode the create_session result into the create session single DRC
+ * slot cache. Do this for solo or embedded create session operations.
+ */
+void
+nfsd4_cache_create_session(struct nfsd4_create_session *cr_ses,
+ struct nfsd4_slot *slot, int nfserr)
+{
+ struct nfsd4_cache_entry *entry = &slot->sl_cache_entry;
+ __be32 *p = (__be32 *)entry->ce_datav.iov_base;
+ struct nfsd4_compoundres tmp = {
+ .p = p,
+ .end = p + XDR_QUADLEN(CS_MAX_ENC_SZ),
+ }, *resp = &tmp;
+
+ entry->ce_status = nfsd4_encode_create_session(resp, nfserr, cr_ses);
+ entry->ce_datav.iov_len = (char *)resp->p - (char *)p;
+}
+
static __be32
nfsd4_encode_destroy_session(struct nfsd4_compoundres *resp, int nfserr,
struct nfsd4_destroy_session *destroy_session)
diff --git a/include/linux/nfsd/xdr4.h b/include/linux/nfsd/xdr4.h
index afd4464..cc50ca0 100644
--- a/include/linux/nfsd/xdr4.h
+++ b/include/linux/nfsd/xdr4.h
@@ -517,6 +517,8 @@ extern __be32 nfsd4_setclientid_confirm(struct svc_rqst *rqstp,
extern void nfsd4_store_cache_entry(struct nfsd4_compoundres *resp);
extern __be32 nfsd4_replay_cache_entry(struct nfsd4_compoundres *resp,
struct nfsd4_sequence *seq);
+extern void nfsd4_cache_create_session(struct nfsd4_create_session *cr_ses,
+ struct nfsd4_slot *slot, int nfserr);
extern __be32 nfsd4_exchange_id(struct svc_rqst *rqstp,
struct nfsd4_compound_state *,
struct nfsd4_exchange_id *);
--
1.5.4.3
From: Andy Adamson <[email protected]>
CREATE_SESSION can be preceded by a SEQUENCE operation (an embedded
CREATE_SESSION) and the create session single slot cache must be maintained.
Always replay a create session from nfsd4_replay_create_session(). Leave
nfsd4_store_cache_entry() for session (sequence operation) replays.
Do not set cstate->slot, so that nfs4svc_encode_compoundres session replay
processing is not called.
Since create session caching and replay all occur in nfsd4_create_session,
no need to set the slot inuse boolean.
Set cstate->status to a new internal error, nfserr_replay_clientid_cache, and
change the logic in nfsd4_proc_compound so that CREATE_SESSION replays replace
encode_operation.
The new internal error is needed to differentiate between an embedded
CREATE_SESSION replay and a SEQUENCE replay.
Signed-off-by: Andy Adamson <[email protected]>
---
fs/nfsd/nfs4proc.c | 9 +++++++--
fs/nfsd/nfs4state.c | 9 ++-------
fs/nfsd/nfs4xdr.c | 18 ++++++++++++++++++
include/linux/nfsd/nfsd.h | 2 ++
include/linux/nfsd/xdr4.h | 2 ++
5 files changed, 31 insertions(+), 9 deletions(-)
diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
index b2883e9..32d5866 100644
--- a/fs/nfsd/nfs4proc.c
+++ b/fs/nfsd/nfs4proc.c
@@ -996,7 +996,7 @@ nfsd4_proc_compound(struct svc_rqst *rqstp,
BUG_ON(op->status == nfs_ok);
encode_op:
- /* Only from SEQUENCE or CREATE_SESSION */
+ /* Only from SEQUENCE */
if (resp->cstate.status == nfserr_replay_cache) {
dprintk("%s NFS4.1 replay from cache\n", __func__);
if (nfsd4_not_cached(resp))
@@ -1005,7 +1005,12 @@ encode_op:
status = op->status;
goto out;
}
- if (op->status == nfserr_replay_me) {
+ /* Only from CREATE_SESSION */
+ if (resp->cstate.status == nfserr_replay_clientid_cache) {
+ dprintk("%s NFS4.1 replay from clientid cache\n",
+ __func__);
+ status = op->status;
+ } else if (op->status == nfserr_replay_me) {
op->replay = &cstate->replay_owner->so_replay;
nfsd4_encode_replay(resp, op);
status = op->status = op->replay->rp_status;
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 20485c5..a585a58 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -1360,10 +1360,9 @@ nfsd4_create_session(struct svc_rqst *rqstp,
if (status == nfserr_replay_cache) {
dprintk("Got a create_session replay! seqid= %d\n",
slot->sl_seqid);
- cstate->slot = slot;
- cstate->status = status;
+ cstate->status = nfserr_replay_clientid_cache;
/* Return the cached reply status */
- status = nfsd4_replay_cache_entry(resp, NULL);
+ status = nfsd4_replay_create_session(resp, slot);
goto out;
} else if (cr_ses->seqid != conf->cl_slot.sl_seqid + 1) {
status = nfserr_seq_misordered;
@@ -1420,10 +1419,6 @@ nfsd4_create_session(struct svc_rqst *rqstp,
NFS4_MAX_SESSIONID_LEN);
cr_ses->seqid = slot->sl_seqid;
- slot->sl_inuse = true;
- cstate->slot = slot;
- /* Ensure a page is used for the cache */
- slot->sl_cache_entry.ce_cachethis = 1;
out_cache:
/* cache solo and embedded create sessions under the state lock */
nfsd4_cache_create_session(cr_ses, slot, status);
diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
index 6b34fac..7459900 100644
--- a/fs/nfsd/nfs4xdr.c
+++ b/fs/nfsd/nfs4xdr.c
@@ -3065,6 +3065,24 @@ nfsd4_cache_create_session(struct nfsd4_create_session *cr_ses,
entry->ce_datav.iov_len = (char *)resp->p - (char *)p;
}
+__be32
+nfsd4_replay_create_session(struct nfsd4_compoundres *resp,
+ struct nfsd4_slot *slot)
+{
+ struct nfsd4_cache_entry *entry = &slot->sl_cache_entry;
+ ENCODE_HEAD;
+
+ RESERVE_SPACE(8);
+ WRITE32(OP_CREATE_SESSION);
+ *p++ = entry->ce_status; /* already in network byte order */
+ ADJUST_ARGS();
+
+ memcpy(resp->p, entry->ce_datav.iov_base, entry->ce_datav.iov_len);
+ p += XDR_QUADLEN(entry->ce_datav.iov_len);
+ ADJUST_ARGS();
+ return entry->ce_status;
+}
+
static __be32
nfsd4_encode_destroy_session(struct nfsd4_compoundres *resp, int nfserr,
struct nfsd4_destroy_session *destroy_session)
diff --git a/include/linux/nfsd/nfsd.h b/include/linux/nfsd/nfsd.h
index 2b49d67..9cd6399 100644
--- a/include/linux/nfsd/nfsd.h
+++ b/include/linux/nfsd/nfsd.h
@@ -301,6 +301,8 @@ void nfsd_lockd_shutdown(void);
#define nfserr_replay_me cpu_to_be32(11001)
/* nfs41 replay detected */
#define nfserr_replay_cache cpu_to_be32(11002)
+/* nfs41 clientid cache replay detected */
+#define nfserr_replay_clientid_cache cpu_to_be32(11003)
/* Check for dir entries '.' and '..' */
#define isdotent(n, l) (l < 3 && n[0] == '.' && (l == 1 || n[1] == '.'))
diff --git a/include/linux/nfsd/xdr4.h b/include/linux/nfsd/xdr4.h
index cc50ca0..8fd5eb1 100644
--- a/include/linux/nfsd/xdr4.h
+++ b/include/linux/nfsd/xdr4.h
@@ -519,6 +519,8 @@ extern __be32 nfsd4_replay_cache_entry(struct nfsd4_compoundres *resp,
struct nfsd4_sequence *seq);
extern void nfsd4_cache_create_session(struct nfsd4_create_session *cr_ses,
struct nfsd4_slot *slot, int nfserr);
+extern __be32 nfsd4_replay_create_session(struct nfsd4_compoundres *resp,
+ struct nfsd4_slot *slot);
extern __be32 nfsd4_exchange_id(struct svc_rqst *rqstp,
struct nfsd4_compound_state *,
struct nfsd4_exchange_id *);
--
1.5.4.3
From: Andy Adamson <[email protected]>
The nfs41 single slot clientid cache holds the create session response, which
has a maximum encoded size of 88 bytes. Add a static buffer to struct nfs4_client
to cache the encoded create session response.
Signed-off-by: Andy Adamson <[email protected]>
---
fs/nfsd/nfs4state.c | 3 +++
include/linux/nfsd/state.h | 8 ++++++++
2 files changed, 11 insertions(+), 0 deletions(-)
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index e82a518..27ad37f 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -1283,6 +1283,9 @@ out_new:
status = nfserr_resource;
goto out;
}
+ /* Set the create session cache buffer */
+ new->cl_slot.sl_cache_entry.ce_datav.iov_base = new->cl_slot_buf;
+ new->cl_slot.sl_cache_entry.ce_datav.iov_len = CS_MAX_ENC_SZ;
copy_verf(new, &verf);
copy_cred(&new->cl_cred, &rqstp->rq_cred);
diff --git a/include/linux/nfsd/state.h b/include/linux/nfsd/state.h
index 14da8f6..87f3eaa 100644
--- a/include/linux/nfsd/state.h
+++ b/include/linux/nfsd/state.h
@@ -174,6 +174,13 @@ struct nfsd4_sessionid {
#define HEXDIR_LEN 33 /* hex version of 16 byte md5 of cl_name plus '\0' */
/*
+ * maximum encoded size of create session response
+ * 16 - sessionid, 8 - sequence # and flags,
+ * 32 - fore channel attrs, 32 - back channel attrs
+ */
+#define CS_MAX_ENC_SZ 88
+
+/*
* struct nfs4_client - one per client. Clientids live here.
* o Each nfs4_client is hashed by clientid.
*
@@ -206,6 +213,7 @@ struct nfs4_client {
/* for nfs41 */
struct list_head cl_sessions;
struct nfsd4_slot cl_slot; /* create_session slot */
+ char cl_slot_buf[CS_MAX_ENC_SZ]; /* slot buffer */
u32 cl_exchange_flags;
struct nfs4_sessionid cl_sessionid;
--
1.5.4.3
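The 88-byte figure is simply the sum of the fields listed in the comment:

	/* 16 (sessionid)
	 * +  8 (sequence # and flags)
	 * + 32 (fore channel attrs)
	 * + 32 (back channel attrs)
	 * = 88 bytes == CS_MAX_ENC_SZ
	 */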
On Thu, Apr 23, 2009 at 12:42:40PM -0400, [email protected] wrote:
> From: Andy Adamson <[email protected]>
>
> The nfs41 single slot clientid cache holds create session response which
> has a maximum size of 88 bytes. Add a static buffer to struct nfs4_client
> to cache the encoded create session response.
>
> Signed-off-by: Andy Adamson <[email protected]>
> ---
> fs/nfsd/nfs4state.c | 3 +++
> include/linux/nfsd/state.h | 8 ++++++++
> 2 files changed, 11 insertions(+), 0 deletions(-)
>
> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> index e82a518..27ad37f 100644
> --- a/fs/nfsd/nfs4state.c
> +++ b/fs/nfsd/nfs4state.c
> @@ -1283,6 +1283,9 @@ out_new:
> status = nfserr_resource;
> goto out;
> }
> + /* Set the create session cache buffer */
> + new->cl_slot.sl_cache_entry.ce_datav.iov_base = new->cl_slot_buf;
> + new->cl_slot.sl_cache_entry.ce_datav.iov_len = CS_MAX_ENC_SZ;
Any reason not to do this as part of the other client initialization in
create_client()?
--b.
>
> copy_verf(new, &verf);
> copy_cred(&new->cl_cred, &rqstp->rq_cred);
> diff --git a/include/linux/nfsd/state.h b/include/linux/nfsd/state.h
> index 14da8f6..87f3eaa 100644
> --- a/include/linux/nfsd/state.h
> +++ b/include/linux/nfsd/state.h
> @@ -174,6 +174,13 @@ struct nfsd4_sessionid {
> #define HEXDIR_LEN 33 /* hex version of 16 byte md5 of cl_name plus '\0' */
>
> /*
> + * maximum encoded size of create session response
> + * 16 - sessionid, 8 - sequence # and flags,
> + * 32 - fore channel attrs, 32 - back channel attrs
> + */
> +#define CS_MAX_ENC_SZ 88
> +
> +/*
> * struct nfs4_client - one per client. Clientids live here.
> * o Each nfs4_client is hashed by clientid.
> *
> @@ -206,6 +213,7 @@ struct nfs4_client {
> /* for nfs41 */
> struct list_head cl_sessions;
> struct nfsd4_slot cl_slot; /* create_session slot */
> + char cl_slot_buf[CS_MAX_ENC_SZ]; /* slot buffer */
> u32 cl_exchange_flags;
> struct nfs4_sessionid cl_sessionid;
>
> --
> 1.5.4.3
>
On Thu, Apr 23, 2009 at 12:42:41PM -0400, [email protected] wrote:
> From: Andy Adamson <[email protected]>
>
> CREATE_SESSION can be preceeded by a SEQUENCE operation and the
> create session single slot cache must be maintained. Encode the results of
> a create session call into the cache at the end of processing.
>
> The create session result will also be encoded into the RPC response, and if
> it is preceeded by a SEQUENCE operation, will also be encoded into the
> session slot table cache.
>
> Errors that do not change the create session cache:
> A create session NFS4ERR_STALE_CLIENTID error means that a client record
> (and associated create session slot) could not be found and therefore can't
> be changed. NFSERR_SEQ_MISORDERED errors do not change the slot cache.
>
> All other errors get cached.
Thanks, this all sounds sensible. (One simple thing I think I was
confused about: the create_session sequencing is done at a layer above
sessions; so in the case of a create_session preceded by a sequence, if
we discover at the point of processing the sequence that this is a
replay, we use the sequence replay cache and never even get to any
create_session processing. OK!)
> Signed-off-by: Andy Adamson <[email protected]>
> ---
> fs/nfsd/nfs4state.c | 6 +++++-
> fs/nfsd/nfs4xdr.c | 19 +++++++++++++++++++
> include/linux/nfsd/xdr4.h | 2 ++
> 3 files changed, 26 insertions(+), 1 deletions(-)
>
> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> index 27ad37f..279b47e 100644
> --- a/fs/nfsd/nfs4state.c
> +++ b/fs/nfsd/nfs4state.c
> @@ -1377,7 +1377,7 @@ nfsd4_create_session(struct svc_rqst *rqstp,
> if (!same_creds(&unconf->cl_cred, &rqstp->rq_cred) ||
> (ip_addr != unconf->cl_addr)) {
Could we apply the patch removing these ip_addr checks before any of
these patches? Also I seem to recall expressing some other doubts about
the correctness of exchange_id and create_session--I'd like to see those
addressed first, as this builds on those.
> status = nfserr_clid_inuse;
> - goto out;
> + goto out_cache;
> }
>
> slot = &unconf->cl_slot;
> @@ -1424,6 +1424,10 @@ nfsd4_create_session(struct svc_rqst *rqstp,
> cstate->slot = slot;
> /* Ensure a page is used for the cache */
> slot->sl_cache_entry.ce_cachethis = 1;
Is this still needed?
--b.
> +out_cache:
> + /* cache solo and embedded create sessions under the state lock */
> + nfsd4_cache_create_session(cr_ses, slot, status);
> +
> out:
> nfs4_unlock_state();
> dprintk("%s returns %d\n", __func__, ntohl(status));
> diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
> index b2f8d74..6b34fac 100644
> --- a/fs/nfsd/nfs4xdr.c
> +++ b/fs/nfsd/nfs4xdr.c
> @@ -3046,6 +3046,25 @@ nfsd4_encode_create_session(struct nfsd4_compoundres *resp, int nfserr,
> return 0;
> }
>
> +/*
> + * Encode the create_session result into the create session single DRC
> + * slot cache. Do this for solo or embedded create session operations.
> + */
> +void
> +nfsd4_cache_create_session(struct nfsd4_create_session *cr_ses,
> + struct nfsd4_slot *slot, int nfserr)
> +{
> + struct nfsd4_cache_entry *entry = &slot->sl_cache_entry;
> + __be32 *p = (__be32 *)entry->ce_datav.iov_base;
> + struct nfsd4_compoundres tmp = {
> + .p = p,
> + .end = p + XDR_QUADLEN(CS_MAX_ENC_SZ),
> + }, *resp = &tmp;
> +
> + entry->ce_status = nfsd4_encode_create_session(resp, nfserr, cr_ses);
> + entry->ce_datav.iov_len = (char *)resp->p - (char *)p;
> +}
> +
> static __be32
> nfsd4_encode_destroy_session(struct nfsd4_compoundres *resp, int nfserr,
> struct nfsd4_destroy_session *destroy_session)
> diff --git a/include/linux/nfsd/xdr4.h b/include/linux/nfsd/xdr4.h
> index afd4464..cc50ca0 100644
> --- a/include/linux/nfsd/xdr4.h
> +++ b/include/linux/nfsd/xdr4.h
> @@ -517,6 +517,8 @@ extern __be32 nfsd4_setclientid_confirm(struct svc_rqst *rqstp,
> extern void nfsd4_store_cache_entry(struct nfsd4_compoundres *resp);
> extern __be32 nfsd4_replay_cache_entry(struct nfsd4_compoundres *resp,
> struct nfsd4_sequence *seq);
> +extern void nfsd4_cache_create_session(struct nfsd4_create_session *cr_ses,
> + struct nfsd4_slot *slot, int nfserr);
> extern __be32 nfsd4_exchange_id(struct svc_rqst *rqstp,
> struct nfsd4_compound_state *,
> struct nfsd4_exchange_id *);
> --
> 1.5.4.3
>
On Thu, Apr 23, 2009 at 12:42:43PM -0400, [email protected] wrote:
> From: Andy Adamson <[email protected]>
>
> CREATE_SESSION can be preceeded by a SEQUENCE operation (an embedded
> CREATE_SESSION) and the create session single slot cache must be maintained.
> Always replay a create session from nfsd4_replay_create_session(). Leave
> nfsd4_store_cache_entry() for session (sequence operation) replays.
>
> Do not set cstate->slot, so that nfs4svc_encode_compoundres session replay
> processing is not called.
>
> Since create sesssion caching and replay all occur in nfsd4_create_session,
> no need to set the slot inuse boolean.
>
> Set cstate->status to a new internal error, nfserr_replay_clientid_cache, and
> change the logic in nfsd4_proc_compound so that CREATE_SESSION replays replace
> encode_operation.
> The new internal error is needed to differentiate between an embedded
> CREATE_SESSION replay and a SEQUENCE replay.
That seems OK, thanks.
--b.
>
> Signed-off-by: Andy Adamson <[email protected]>
> ---
> fs/nfsd/nfs4proc.c | 9 +++++++--
> fs/nfsd/nfs4state.c | 9 ++-------
> fs/nfsd/nfs4xdr.c | 18 ++++++++++++++++++
> include/linux/nfsd/nfsd.h | 2 ++
> include/linux/nfsd/xdr4.h | 2 ++
> 5 files changed, 31 insertions(+), 9 deletions(-)
>
> diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
> index b2883e9..32d5866 100644
> --- a/fs/nfsd/nfs4proc.c
> +++ b/fs/nfsd/nfs4proc.c
> @@ -996,7 +996,7 @@ nfsd4_proc_compound(struct svc_rqst *rqstp,
> BUG_ON(op->status == nfs_ok);
>
> encode_op:
> - /* Only from SEQUENCE or CREATE_SESSION */
> + /* Only from SEQUENCE */
> if (resp->cstate.status == nfserr_replay_cache) {
> dprintk("%s NFS4.1 replay from cache\n", __func__);
> if (nfsd4_not_cached(resp))
> @@ -1005,7 +1005,12 @@ encode_op:
> status = op->status;
> goto out;
> }
> - if (op->status == nfserr_replay_me) {
> + /* Only from CREATE_SESSION */
> + if (resp->cstate.status == nfserr_replay_clientid_cache) {
> + dprintk("%s NFS4.1 replay from clientid cache\n",
> + __func__);
> + status = op->status;
> + } else if (op->status == nfserr_replay_me) {
> op->replay = &cstate->replay_owner->so_replay;
> nfsd4_encode_replay(resp, op);
> status = op->status = op->replay->rp_status;
> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> index 20485c5..a585a58 100644
> --- a/fs/nfsd/nfs4state.c
> +++ b/fs/nfsd/nfs4state.c
> @@ -1360,10 +1360,9 @@ nfsd4_create_session(struct svc_rqst *rqstp,
> if (status == nfserr_replay_cache) {
> dprintk("Got a create_session replay! seqid= %d\n",
> slot->sl_seqid);
> - cstate->slot = slot;
> - cstate->status = status;
> + cstate->status = nfserr_replay_clientid_cache;
> /* Return the cached reply status */
> - status = nfsd4_replay_cache_entry(resp, NULL);
> + status = nfsd4_replay_create_session(resp, slot);
> goto out;
> } else if (cr_ses->seqid != conf->cl_slot.sl_seqid + 1) {
> status = nfserr_seq_misordered;
> @@ -1420,10 +1419,6 @@ nfsd4_create_session(struct svc_rqst *rqstp,
> NFS4_MAX_SESSIONID_LEN);
> cr_ses->seqid = slot->sl_seqid;
>
> - slot->sl_inuse = true;
> - cstate->slot = slot;
> - /* Ensure a page is used for the cache */
> - slot->sl_cache_entry.ce_cachethis = 1;
> out_cache:
> /* cache solo and embedded create sessions under the state lock */
> nfsd4_cache_create_session(cr_ses, slot, status);
> diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
> index 6b34fac..7459900 100644
> --- a/fs/nfsd/nfs4xdr.c
> +++ b/fs/nfsd/nfs4xdr.c
> @@ -3065,6 +3065,24 @@ nfsd4_cache_create_session(struct nfsd4_create_session *cr_ses,
> entry->ce_datav.iov_len = (char *)resp->p - (char *)p;
> }
>
> +__be32
> +nfsd4_replay_create_session(struct nfsd4_compoundres *resp,
> + struct nfsd4_slot *slot)
> +{
> + struct nfsd4_cache_entry *entry = &slot->sl_cache_entry;
> + ENCODE_HEAD;
> +
> + RESERVE_SPACE(8);
> + WRITE32(OP_CREATE_SESSION);
> + *p++ = entry->ce_status; /* already in network byte order */
> + ADJUST_ARGS();
> +
> + memcpy(resp->p, entry->ce_datav.iov_base, entry->ce_datav.iov_len);
> + p += XDR_QUADLEN(entry->ce_datav.iov_len);
> + ADJUST_ARGS();
> + return entry->ce_status;
> +}
> +
> static __be32
> nfsd4_encode_destroy_session(struct nfsd4_compoundres *resp, int nfserr,
> struct nfsd4_destroy_session *destroy_session)
> diff --git a/include/linux/nfsd/nfsd.h b/include/linux/nfsd/nfsd.h
> index 2b49d67..9cd6399 100644
> --- a/include/linux/nfsd/nfsd.h
> +++ b/include/linux/nfsd/nfsd.h
> @@ -301,6 +301,8 @@ void nfsd_lockd_shutdown(void);
> #define nfserr_replay_me cpu_to_be32(11001)
> /* nfs41 replay detected */
> #define nfserr_replay_cache cpu_to_be32(11002)
> +/* nfs41 clientid cache replay detected */
> +#define nfserr_replay_clientid_cache cpu_to_be32(11003)
>
> /* Check for dir entries '.' and '..' */
> #define isdotent(n, l) (l < 3 && n[0] == '.' && (l == 1 || n[1] == '.'))
> diff --git a/include/linux/nfsd/xdr4.h b/include/linux/nfsd/xdr4.h
> index cc50ca0..8fd5eb1 100644
> --- a/include/linux/nfsd/xdr4.h
> +++ b/include/linux/nfsd/xdr4.h
> @@ -519,6 +519,8 @@ extern __be32 nfsd4_replay_cache_entry(struct nfsd4_compoundres *resp,
> struct nfsd4_sequence *seq);
> extern void nfsd4_cache_create_session(struct nfsd4_create_session *cr_ses,
> struct nfsd4_slot *slot, int nfserr);
> +extern __be32 nfsd4_replay_create_session(struct nfsd4_compoundres *resp,
> + struct nfsd4_slot *slot);
> extern __be32 nfsd4_exchange_id(struct svc_rqst *rqstp,
> struct nfsd4_compound_state *,
> struct nfsd4_exchange_id *);
> --
> 1.5.4.3
>
On Thu, Apr 23, 2009 at 12:42:44PM -0400, [email protected] wrote:
> From: Andy Adamson <[email protected]>
>
> expire_client can be called on a confirmed or unconfirmed client while
> processing the create session operation and accessing the clientid slot.
I don't understand--isn't all that under the state lock for now?
--b.
>
> Signed-off-by: Andy Adamson <[email protected]>
> ---
> fs/nfsd/nfs4state.c | 14 ++++++++++----
> 1 files changed, 10 insertions(+), 4 deletions(-)
>
> diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
> index a585a58..accad58 100644
> --- a/fs/nfsd/nfs4state.c
> +++ b/fs/nfsd/nfs4state.c
> @@ -1355,6 +1355,7 @@ nfsd4_create_session(struct svc_rqst *rqstp,
> conf = find_confirmed_client(&cr_ses->clientid);
>
> if (conf) {
> + atomic_inc(&conf->cl_count);
> slot = &conf->cl_slot;
> status = check_slot_seqid(cr_ses->seqid, slot);
> if (status == nfserr_replay_cache) {
> @@ -1363,27 +1364,30 @@ nfsd4_create_session(struct svc_rqst *rqstp,
> cstate->status = nfserr_replay_clientid_cache;
> /* Return the cached reply status */
> status = nfsd4_replay_create_session(resp, slot);
> - goto out;
> + goto out_put;
> } else if (cr_ses->seqid != conf->cl_slot.sl_seqid + 1) {
> status = nfserr_seq_misordered;
> dprintk("Sequence misordered!\n");
> dprintk("Expected seqid= %d but got seqid= %d\n",
> slot->sl_seqid, cr_ses->seqid);
> - goto out;
> + goto out_put;
> }
> conf->cl_slot.sl_seqid++;
> } else if (unconf) {
> + atomic_inc(&unconf->cl_count);
> slot = &unconf->cl_slot;
> status = check_slot_seqid(cr_ses->seqid, slot);
> if (status) {
> /* an unconfirmed replay returns misordered */
> status = nfserr_seq_misordered;
> - goto out;
> + conf = unconf; /* for put_nfs4_client */
> + goto out_put;
> }
>
> if (!same_creds(&unconf->cl_cred, &rqstp->rq_cred) ||
> (ip_addr != unconf->cl_addr)) {
> status = nfserr_clid_inuse;
> + conf = unconf; /* for put_nfs4_client */
> goto out_cache;
> }
>
> @@ -1413,7 +1417,7 @@ nfsd4_create_session(struct svc_rqst *rqstp,
>
> status = alloc_init_session(rqstp, conf, cr_ses);
> if (status)
> - goto out;
> + goto out_put;
>
> memcpy(cr_ses->sessionid.data, conf->cl_sessionid.data,
> NFS4_MAX_SESSIONID_LEN);
> @@ -1423,6 +1427,8 @@ out_cache:
> /* cache solo and embedded create sessions under the state lock */
> nfsd4_cache_create_session(cr_ses, slot, status);
>
> +out_put:
> + put_nfs4_client(conf);
> out:
> nfs4_unlock_state();
> dprintk("%s returns %d\n", __func__, ntohl(status));
> --
> 1.5.4.3
>