A set of patches to ensure the backchannel lifetime cannot exceed the
lifetime of the transport to which it is attached.
v2:
- Fix the case where !defined(CONFIG_SUNRPC_BACKCHANNEL)
- Don't allow xprt->bc_alloc_max to underflow in xprt_destroy_bc()
  (a sketch of the clamped decrement follows the diffstat below)
Trond Myklebust (3):
SUNRPC: The TCP back channel mustn't disappear while requests are
outstanding
SUNRPC: The RDMA back channel mustn't disappear while requests are
outstanding
SUNRPC: Destroy the back channel when we destroy the host transport
include/linux/sunrpc/bc_xprt.h | 5 +++++
net/sunrpc/backchannel_rqst.c | 7 ++++---
net/sunrpc/xprt.c | 5 +++++
net/sunrpc/xprtrdma/backchannel.c | 2 ++
4 files changed, 16 insertions(+), 3 deletions(-)
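
A note on the second v2 item: bc_alloc_max is an unsigned counter, so an
unguarded subtraction while tearing down the back channel could wrap it
around to a huge value. Below is a minimal stand-alone model of the
clamped decrement; the names only echo the kernel's, and this is an
illustration rather than the patch itself:

#include <stdio.h>

struct xprt_model {
	unsigned int bc_alloc_max;	/* cap on preallocated bc requests */
};

/* Tear down 'max_reqs' preallocated slots, clamping the decrement so
 * the unsigned counter cannot underflow. */
static void destroy_bc_model(struct xprt_model *xprt, unsigned int max_reqs)
{
	if (max_reqs > xprt->bc_alloc_max)
		max_reqs = xprt->bc_alloc_max;
	xprt->bc_alloc_max -= max_reqs;
}

int main(void)
{
	struct xprt_model xprt = { .bc_alloc_max = 2 };

	destroy_bc_model(&xprt, 5);	/* unclamped, this would wrap */
	printf("bc_alloc_max = %u\n", xprt.bc_alloc_max);	/* prints 0 */
	return 0;
}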
--
2.21.0
[PATCH v2 1/3] SUNRPC: The TCP back channel mustn't disappear while requests are outstanding

If there are TCP back channel requests being processed by the
server threads, then we should hold a reference to the transport
to ensure it doesn't get freed from underneath us.
Reported-by: Neil Brown <[email protected]>
Fixes: 2ea24497a1b3 ("SUNRPC: RPC callbacks may be split across several..")
Signed-off-by: Trond Myklebust <[email protected]>
---
net/sunrpc/backchannel_rqst.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/net/sunrpc/backchannel_rqst.c b/net/sunrpc/backchannel_rqst.c
index 339e8c077c2d..7eb251372f94 100644
--- a/net/sunrpc/backchannel_rqst.c
+++ b/net/sunrpc/backchannel_rqst.c
@@ -307,8 +307,8 @@ void xprt_free_bc_rqst(struct rpc_rqst *req)
 		 */
 		dprintk("RPC: Last session removed req=%p\n", req);
 		xprt_free_allocation(req);
-		return;
 	}
+	xprt_put(xprt);
 }
 
 /*
@@ -339,7 +339,7 @@ struct rpc_rqst *xprt_lookup_bc_request(struct rpc_xprt *xprt, __be32 xid)
 		spin_unlock(&xprt->bc_pa_lock);
 		if (new) {
 			if (req != new)
-				xprt_free_bc_rqst(new);
+				xprt_free_allocation(new);
 			break;
 		} else if (req)
 			break;
@@ -368,6 +368,7 @@ void xprt_complete_bc_request(struct rpc_rqst *req, uint32_t copied)
 	set_bit(RPC_BC_PA_IN_USE, &req->rq_bc_pa_state);
 
 	dprintk("RPC: add callback request to list\n");
+	xprt_get(xprt);
 	spin_lock(&bc_serv->sv_cb_lock);
 	list_add(&req->rq_bc_list, &bc_serv->sv_cb_list);
 	wake_up(&bc_serv->sv_cb_waitq);
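
The hunks above form a matched pair: xprt_complete_bc_request() takes a
reference to the transport when it queues a callback request for the
server threads, and xprt_free_bc_rqst() drops that reference when the
request is returned. That is also why xprt_lookup_bc_request() now frees
its surplus preallocated request with xprt_free_allocation() directly:
that request was never queued, so it holds no transport reference to
drop. Here is a stand-alone model of the get/put pairing (the names are
illustrative, not the kernel's code):

#include <stdio.h>
#include <stdlib.h>

struct xprt_model {
	unsigned int kref;	/* reference count on the transport */
};

static struct xprt_model *xprt_get_model(struct xprt_model *x)
{
	if (x)
		x->kref++;
	return x;
}

static void xprt_put_model(struct xprt_model *x)
{
	if (x && --x->kref == 0) {
		printf("last reference dropped: transport freed\n");
		free(x);
	}
}

int main(void)
{
	struct xprt_model *x = calloc(1, sizeof(*x));

	if (!x)
		return 1;
	x->kref = 1;		/* the creator's reference */
	xprt_get_model(x);	/* a bc request is queued: pin the transport */
	xprt_put_model(x);	/* creator goes away: transport survives */
	xprt_put_model(x);	/* the request is returned: now it is freed */
	return 0;
}

The transport can thus only disappear after the last outstanding
callback request has been returned. The RDMA patch that follows applies
the identical pairing in rpcrdma_bc_receive_call() and
xprt_rdma_bc_free_rqst().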
--
2.21.0
[PATCH v2 2/3] SUNRPC: The RDMA back channel mustn't disappear while requests are outstanding

If there are RDMA back channel requests being processed by the
server threads, then we should hold a reference to the transport
to ensure it doesn't get freed from underneath us.
Reported-by: Neil Brown <[email protected]>
Fixes: 63cae47005af ("xprtrdma: Handle incoming backward direction RPC calls")
Signed-off-by: Trond Myklebust <[email protected]>
---
net/sunrpc/xprtrdma/backchannel.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/net/sunrpc/xprtrdma/backchannel.c b/net/sunrpc/xprtrdma/backchannel.c
index 50e075fcdd8f..b458bf53ca69 100644
--- a/net/sunrpc/xprtrdma/backchannel.c
+++ b/net/sunrpc/xprtrdma/backchannel.c
@@ -163,6 +163,7 @@ void xprt_rdma_bc_free_rqst(struct rpc_rqst *rqst)
 	spin_lock(&xprt->bc_pa_lock);
 	list_add_tail(&rqst->rq_bc_pa_list, &xprt->bc_pa_list);
 	spin_unlock(&xprt->bc_pa_lock);
+	xprt_put(xprt);
 }
 
 static struct rpc_rqst *rpcrdma_bc_rqst_get(struct rpcrdma_xprt *r_xprt)
@@ -259,6 +260,7 @@ void rpcrdma_bc_receive_call(struct rpcrdma_xprt *r_xprt,
 	/* Queue rqst for ULP's callback service */
 	bc_serv = xprt->bc_serv;
+	xprt_get(xprt);
 	spin_lock(&bc_serv->sv_cb_lock);
 	list_add(&rqst->rq_bc_list, &bc_serv->sv_cb_list);
 	spin_unlock(&bc_serv->sv_cb_lock);
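
One detail worth noting in xprt_rdma_bc_free_rqst() above: the request
is parked back on bc_pa_list, under the lock, before the reference is
dropped. The xprt_put() has to come last, since it may free the
transport, and both the spinlock and the list head live inside that
structure. A small model of that ordering (illustrative names, not
kernel code):

#include <stdio.h>
#include <stdlib.h>

struct rqst_model {
	struct rqst_model *next;
};

struct xprt_model {
	unsigned int kref;
	struct rqst_model *bc_pa_list;	/* free list lives inside the transport */
};

static void xprt_put_model(struct xprt_model *x)
{
	if (--x->kref == 0) {
		printf("transport freed\n");
		free(x);
	}
}

static void bc_free_rqst_model(struct xprt_model *x, struct rqst_model *r)
{
	/* Touch the transport while we still hold our reference... */
	r->next = x->bc_pa_list;
	x->bc_pa_list = r;
	/* ...and only then drop it; x must not be dereferenced after this. */
	xprt_put_model(x);
}

int main(void)
{
	struct xprt_model *x = calloc(1, sizeof(*x));
	struct rqst_model r = { 0 };

	if (!x)
		return 1;
	x->kref = 1;			/* the queued request's reference */
	bc_free_rqst_model(x, &r);	/* parks r, then frees the transport */
	return 0;
}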
--
2.21.0
On Thu, Oct 17 2019, Trond Myklebust wrote:
> A set of patches to ensure the backchannel lifetime cannot exceed the
> lifetime of the transport to which it is attached.
>
> v2:
> - Fix the case where !defined(CONFIG_SUNRPC_BACKCHANNEL)
> - Don't allow xprt->bc_alloc_max to underflow in xprt_destroy_bc()
>
> Trond Myklebust (3):
> SUNRPC: The TCP back channel mustn't disappear while requests are
> outstanding
> SUNRPC: The RDMA back channel mustn't disappear while requests are
> outstanding
> SUNRPC: Destroy the back channel when we destroy the host transport
>
> include/linux/sunrpc/bc_xprt.h | 5 +++++
> net/sunrpc/backchannel_rqst.c | 7 ++++---
> net/sunrpc/xprt.c | 5 +++++
> net/sunrpc/xprtrdma/backchannel.c | 2 ++
> 4 files changed, 16 insertions(+), 3 deletions(-)
>
All look good to me - thanks a lot.
Reviewed-by: NeilBrown <[email protected]>
NeilBrown