If a task fails to get the write lock in the call to xprt_connect(), it
will be queued on xprt->sending. In that case, it is possible for the
request to get transmitted before call_connect_status() runs, in which
case it needs to be handled by call_transmit_status() instead.
Signed-off-by: Trond Myklebust <[email protected]>
---
net/sunrpc/clnt.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
index ae3b8145da35..e35d642558e7 100644
--- a/net/sunrpc/clnt.c
+++ b/net/sunrpc/clnt.c
@@ -1915,6 +1915,13 @@ call_connect_status(struct rpc_task *task)
struct rpc_clnt *clnt = task->tk_client;
int status = task->tk_status;
+ /* Check if the task was already transmitted */
+ if (!test_bit(RPC_TASK_NEED_XMIT, &task->tk_runstate)) {
+ xprt_end_transmit(task);
+ task->tk_action = call_transmit_status;
+ return;
+ }
+
dprint_status(task);
trace_rpc_connect_status(task);
--
2.19.2
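[Editor's note: the guard added above amounts to routing the task on a
per-task flag: if the "still needs transmitting" bit is already clear,
another path has sent the request and the transmit-status handler must
run instead. A minimal userspace sketch of that dispatch (names such as
need_xmit and connect_status_step are illustrative, not the kernel's):]

```c
#include <stdbool.h>

/* Hypothetical task: carries only the flag the check above tests. */
enum next_action { HANDLE_CONNECT_STATUS, HANDLE_TRANSMIT_STATUS };

struct task {
    bool need_xmit;  /* set while the request still awaits transmission */
};

/* Mirrors the added hunk: if the flag is clear, the request was
 * already transmitted, so route to the transmit-status handler. */
enum next_action connect_status_step(const struct task *t)
{
    if (!t->need_xmit)
        return HANDLE_TRANSMIT_STATUS;
    return HANDLE_CONNECT_STATUS;
}
```

[The real code additionally calls xprt_end_transmit() before rerouting;
the sketch only shows the flag-based dispatch.]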
From: Chuck Lever <[email protected]>
call_encode can be invoked more than once per RPC call. Ensure that
each call to gss_wrap_req_priv does not overwrite pointers to
previously allocated memory.
Signed-off-by: Chuck Lever <[email protected]>
Cc: [email protected]
Signed-off-by: Trond Myklebust <[email protected]>
---
net/sunrpc/auth_gss/auth_gss.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/net/sunrpc/auth_gss/auth_gss.c b/net/sunrpc/auth_gss/auth_gss.c
index 5d3f252659f1..ba765473d1f0 100644
--- a/net/sunrpc/auth_gss/auth_gss.c
+++ b/net/sunrpc/auth_gss/auth_gss.c
@@ -1791,6 +1791,7 @@ priv_release_snd_buf(struct rpc_rqst *rqstp)
for (i=0; i < rqstp->rq_enc_pages_num; i++)
__free_page(rqstp->rq_enc_pages[i]);
kfree(rqstp->rq_enc_pages);
+ rqstp->rq_release_snd_buf = NULL;
}
static int
@@ -1799,6 +1800,9 @@ alloc_enc_pages(struct rpc_rqst *rqstp)
struct xdr_buf *snd_buf = &rqstp->rq_snd_buf;
int first, last, i;
+ if (rqstp->rq_release_snd_buf)
+ rqstp->rq_release_snd_buf(rqstp);
+
if (snd_buf->page_len == 0) {
rqstp->rq_enc_pages_num = 0;
return 0;
--
2.19.2
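[Editor's note: the pattern here is "release before re-allocate": since
the allocation routine may run more than once per call (re-encoding),
it first invokes the release hook for any earlier allocation, and the
release hook clears itself so a later release is a no-op. A standalone
userspace sketch under those assumptions (struct request and the field
names mimic the patch but are not the kernel's types):]

```c
#include <stdlib.h>

/* Hypothetical request owning a page array and a release hook. */
struct request {
    void **enc_pages;
    int enc_pages_num;
    void (*release_snd_buf)(struct request *);
};

static void release_pages(struct request *req)
{
    for (int i = 0; i < req->enc_pages_num; i++)
        free(req->enc_pages[i]);
    free(req->enc_pages);
    req->enc_pages = NULL;
    req->enc_pages_num = 0;
    /* as in the patch: clear the hook so a repeat release is safe */
    req->release_snd_buf = NULL;
}

/* May be called more than once per request (re-encode); releasing
 * any earlier allocation first prevents the old array from leaking. */
static int alloc_enc_pages(struct request *req, int n)
{
    if (req->release_snd_buf)
        req->release_snd_buf(req);

    req->enc_pages = calloc(n, sizeof(void *));
    if (!req->enc_pages)
        return -1;
    for (int i = 0; i < n; i++)
        req->enc_pages[i] = malloc(64);
    req->enc_pages_num = n;
    req->release_snd_buf = release_pages;
    return 0;
}
```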
A call to gss_wrap_req_priv() will end up replacing the original array
in rqstp->rq_snd_buf.pages with a new one containing the encrypted
data. In order to avoid having the rqstp->rq_snd_buf.bvec point to the
wrong page data, we need to refresh that too.
Signed-off-by: Trond Myklebust <[email protected]>
---
net/sunrpc/xprtsock.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index ae77c71c1f64..615ef2397fc5 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -843,6 +843,7 @@ static int xs_nospace(struct rpc_rqst *req)
static void
xs_stream_prepare_request(struct rpc_rqst *req)
{
+ xdr_free_bvec(&req->rq_rcv_buf);
req->rq_task->tk_status = xdr_alloc_bvec(&req->rq_rcv_buf, GFP_NOIO);
}
--
2.19.2
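[Editor's note: the one-line fix rebuilds a derived view after the array
it mirrors may have been swapped out. A userspace sketch of that
"free stale view, then rebuild from the current pages" pattern (struct
buf, free_bvec, and prepare_request are illustrative names):]

```c
#include <stdlib.h>

/* Hypothetical buffer with a derived array that must track pages[]. */
struct buf {
    char **pages;    /* may be swapped wholesale by re-encoding */
    int npages;
    char **bvec;     /* derived view of pages[] */
};

static void free_bvec(struct buf *b)
{
    free(b->bvec);
    b->bvec = NULL;
}

/* Mirrors the patch: drop any previously built view, then rebuild it
 * from whatever pages[] currently holds. */
static int prepare_request(struct buf *b)
{
    free_bvec(b);
    b->bvec = calloc(b->npages, sizeof(char *));
    if (!b->bvec)
        return -1;
    for (int i = 0; i < b->npages; i++)
        b->bvec[i] = b->pages[i];
    return 0;
}
```

[Without the free-and-rebuild, a second encode pass would transmit a
view of pages that no longer exist in the buffer.]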
> On Nov 30, 2018, at 4:18 PM, Trond Myklebust <[email protected]> wrote:
>
> A call to gss_wrap_req_priv() will end up replacing the original array
> in rqstp->rq_snd_buf.pages with a new one containing the encrypted
> data. In order to avoid having the rqstp->rq_snd_buf.bvec point to the
> wrong page data, we need to refresh that too.
>
> Signed-off-by: Trond Myklebust <[email protected]>
> ---
> net/sunrpc/xprtsock.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
> index ae77c71c1f64..615ef2397fc5 100644
> --- a/net/sunrpc/xprtsock.c
> +++ b/net/sunrpc/xprtsock.c
> @@ -843,6 +843,7 @@ static int xs_nospace(struct rpc_rqst *req)
> static void
> xs_stream_prepare_request(struct rpc_rqst *req)
> {
> + xdr_free_bvec(&req->rq_rcv_buf);
I've added a pr_warn that fires whenever rq_rcv_buf.bvec != NULL,
and it never made a peep. Am I missing something?
> req->rq_task->tk_status = xdr_alloc_bvec(&req->rq_rcv_buf, GFP_NOIO);
> }
>
> --
> 2.19.2
>
--
Chuck Lever
On Fri, 2018-11-30 at 16:23 -0500, Chuck Lever wrote:
> > On Nov 30, 2018, at 4:18 PM, Trond Myklebust <[email protected]>
> > wrote:
> >
> > A call to gss_wrap_req_priv() will end up replacing the original
> > array
> > in rqstp->rq_snd_buf.pages with a new one containing the encrypted
> > data. In order to avoid having the rqstp->rq_snd_buf.bvec point to
> > the
> > wrong page data, we need to refresh that too.
> >
> > Signed-off-by: Trond Myklebust <[email protected]>
> > ---
> > net/sunrpc/xprtsock.c | 1 +
> > 1 file changed, 1 insertion(+)
> >
> > diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
> > index ae77c71c1f64..615ef2397fc5 100644
> > --- a/net/sunrpc/xprtsock.c
> > +++ b/net/sunrpc/xprtsock.c
> > @@ -843,6 +843,7 @@ static int xs_nospace(struct rpc_rqst *req)
> > static void
> > xs_stream_prepare_request(struct rpc_rqst *req)
> > {
> > + xdr_free_bvec(&req->rq_rcv_buf);
>
> I've added a pr_warn that fires whenever rq_rcv_buf.bvec != NULL,
> and it never made a peep. Am I missing something?
>
It may be non-NULL, but contain an array of pointers to the wrong
pages. That's going to be the case when we re-encode the request,
because (as your patch pointed out) we free the old array of
rq_enc_pages (and its contents) and allocate a whole new array + set of
pages.
>
> > req->rq_task->tk_status = xdr_alloc_bvec(&req->rq_rcv_buf,
> > GFP_NOIO);
> > }
> >
> > --
> > 2.19.2
> >
>
> --
> Chuck Lever
> On Nov 30, 2018, at 4:26 PM, Trond Myklebust <[email protected]> wrote:
>
> On Fri, 2018-11-30 at 16:23 -0500, Chuck Lever wrote:
>>> On Nov 30, 2018, at 4:18 PM, Trond Myklebust <[email protected]>
>>> wrote:
>>>
>>> A call to gss_wrap_req_priv() will end up replacing the original
>>> array
>>> in rqstp->rq_snd_buf.pages with a new one containing the encrypted
>>> data. In order to avoid having the rqstp->rq_snd_buf.bvec point to
>>> the
>>> wrong page data, we need to refresh that too.
>>>
>>> Signed-off-by: Trond Myklebust <[email protected]>
>>> ---
>>> net/sunrpc/xprtsock.c | 1 +
>>> 1 file changed, 1 insertion(+)
>>>
>>> diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
>>> index ae77c71c1f64..615ef2397fc5 100644
>>> --- a/net/sunrpc/xprtsock.c
>>> +++ b/net/sunrpc/xprtsock.c
>>> @@ -843,6 +843,7 @@ static int xs_nospace(struct rpc_rqst *req)
>>> static void
>>> xs_stream_prepare_request(struct rpc_rqst *req)
>>> {
>>> + xdr_free_bvec(&req->rq_rcv_buf);
>>
>> I've added a pr_warn that fires whenever rq_rcv_buf.bvec != NULL,
>> and it never made a peep. Am I missing something?
>>
>
> It may be non-NULL,
I never found a case where call_encode was passing in a stale bvec array.
> but contain an array of pointers to the wrong
> pages. That's going to be the case when we re-encode the request,
> because (as your patch pointed out) we free the old array of
> rq_enc_pages (and its contents) and allocate a whole new array + set of
> pages.
I actually had a patch that looked exactly like yours (except for the
description text). It didn't seem to do anything. So there may be an
issue here, but I don't believe I'm seeing it with my reproducer.
I'll keep an eye out.
>>> req->rq_task->tk_status = xdr_alloc_bvec(&req->rq_rcv_buf,
>>> GFP_NOIO);
>>> }
>>>
>>> --
>>> 2.19.2
>>>
>>
>> --
>> Chuck Lever
--
Chuck Lever
On Fri, 2018-11-30 at 16:32 -0500, Chuck Lever wrote:
> >
> I actually had a patch that looked exactly like yours (except for the
> description text). It didn't seem to do anything. So there may be an
> issue here, but I don't believe I'm seeing it with my reproducer.
>
Argh... Yes, I was too quick off the trigger. The call to
xdr_buf_init() in rpc_xdr_encode() will actually clobber the ->bvec
array...
--
Trond Myklebust
Linux NFS client maintainer, Hammerspace
[email protected]
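[Editor's note: the pitfall Trond identifies is that a wholesale
re-initialization (xdr_buf_init() zeroing the buffer) clobbers the
->bvec member without freeing it, which is why Chuck's NULL check never
fired: by the time the check ran, the pointer had already been zeroed.
A tiny userspace sketch of that clobbering (struct xbuf and xbuf_init
are illustrative names):]

```c
#include <string.h>

/* Hypothetical buffer with an owned derived-pointer member. */
struct xbuf {
    void *bvec;
    int len;
};

/* Like xdr_buf_init(): zeroes the whole struct, silently dropping
 * any allocation ->bvec pointed at. */
static void xbuf_init(struct xbuf *b)
{
    memset(b, 0, sizeof(*b));
}
```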
> On Nov 30, 2018, at 4:37 PM, Trond Myklebust <[email protected]> wrote:
>
> On Fri, 2018-11-30 at 16:32 -0500, Chuck Lever wrote:
>>>
>> I actually had a patch that looked exactly like yours (except for the
>> description text). It didn't seem to do anything. So there may be an
>> issue here, but I don't believe I'm seeing it with my reproducer.
>>
>
> Argh... Yes, I was too quick off the trigger. The call to
> xdr_buf_init() in rpc_xdr_encode() will actually clobber the ->bvec
> array...
Duh. That sounds very plausible!
--
Chuck Lever