2019-05-28 18:21:45

by Chuck Lever

Subject: [PATCH RFC 00/12] for-5.3 NFS/RDMA patches for review

This is a series of fixes and architectural changes that should
improve robustness and result in better scalability of NFS/RDMA.
I'm sure one or two of these could be broken down a little more,
comments welcome.

The fundamental observation is that the RPC work queues are BOUND,
so rescheduling work from the Receive completion handler onto one
of these work queues just forces it to run later on the same CPU.
The series therefore tries to do more work directly in the Receive
completion handler to reduce context-switch overhead.
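
To make the point concrete, here is a minimal sketch of the
difference (the demo_* names are hypothetical, not the xprtrdma
code):

/* Illustrative sketch only; the demo_* names are hypothetical. */
#include <linux/workqueue.h>
#include <rdma/ib_verbs.h>

struct demo_rep {
	struct ib_cqe		cqe;
	struct work_struct	work;	/* used only by the "before" path */
};

static void demo_process_reply(struct demo_rep *rep)
{
	/* non-sleeping Reply processing */
}

static void demo_wc_receive(struct ib_cq *cq, struct ib_wc *wc)
{
	struct demo_rep *rep =
		container_of(wc->wr_cqe, struct demo_rep, cqe);

	/* Before: queue_work(system_wq, &rep->work). On a BOUND
	 * workqueue the work item just runs later on this same CPU,
	 * so the deferral buys no parallelism, only a context switch.
	 *
	 * After: do the non-sleeping part of Reply processing here,
	 * in the completion handler itself.
	 */
	demo_process_reply(rep);
}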

A secondary concern is that the average amount of wall-clock time
it takes to handle a single Receive completion caps the IOPS rate
(both per-xprt and per-NIC). In this patch series I've taken a few
steps to reduce that latency, and I'm looking into a few others.
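
For a rough sense of scale (illustrative arithmetic, not a
measurement from this series): if handling one Receive completion
takes about 10us of wall-clock time and completions for a transport
are handled serially, that path tops out near 1s / 10us = 100,000
IOPS; trimming even a few microseconds from the handler raises that
ceiling proportionally.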

This series can be fetched from:

git://git.linux-nfs.org/projects/cel/cel-2.6.git

in topic branch "nfs-for-5.3".

---

Chuck Lever (12):
xprtrdma: Fix use-after-free in rpcrdma_post_recvs
xprtrdma: Replace use of xdr_stream_pos in rpcrdma_marshal_req
xprtrdma: Fix occasional transport deadlock
xprtrdma: Remove the RPCRDMA_REQ_F_PENDING flag
xprtrdma: Remove fr_state
xprtrdma: Add mechanism to place MRs back on the free list
xprtrdma: Reduce context switching due to Local Invalidation
xprtrdma: Wake RPCs directly in rpcrdma_wc_send path
xprtrdma: Simplify rpcrdma_rep_create
xprtrdma: Streamline rpcrdma_post_recvs
xprtrdma: Refactor chunk encoding
xprtrdma: Remove rpcrdma_req::rl_buffer


include/trace/events/rpcrdma.h | 47 ++++--
net/sunrpc/xprtrdma/frwr_ops.c | 330 ++++++++++++++++++++++++++-------------
net/sunrpc/xprtrdma/rpc_rdma.c | 146 +++++++----------
net/sunrpc/xprtrdma/transport.c | 16 +-
net/sunrpc/xprtrdma/verbs.c | 115 ++++++--------
net/sunrpc/xprtrdma/xprt_rdma.h | 43 +----
6 files changed, 384 insertions(+), 313 deletions(-)

--
Chuck Lever


2019-05-28 18:21:53

by Chuck Lever

Subject: [PATCH RFC 01/12] xprtrdma: Fix use-after-free in rpcrdma_post_recvs

Dereference wr->next /before/ the memory backing wr has been
released. This issue was found by code inspection. It is not
expected to be a significant problem because it is in an error
path that is almost never executed.
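
For illustration, the fix follows the usual "read the next pointer
before the current element can go away" idiom for tearing down a
chain whose elements are released during the walk; a generic sketch
with hypothetical names, not the verbs.c code:

#include <linux/slab.h>

struct demo_node {
	struct demo_node *next;
};

static void demo_release_chain(struct demo_node *node)
{
	while (node) {
		struct demo_node *next = node->next; /* read before release */

		kfree(node);	/* frees the memory that held ->next */
		node = next;
	}
}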

Fixes: 7c8d9e7c8863 ("xprtrdma: Move Receive posting to Receive handler")
Signed-off-by: Chuck Lever <[email protected]>
---
net/sunrpc/xprtrdma/verbs.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index bef5eac..a803e94 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -1554,10 +1554,11 @@ static void rpcrdma_regbuf_free(struct rpcrdma_regbuf *rb)
rc = ib_post_recv(r_xprt->rx_ia.ri_id->qp, wr,
(const struct ib_recv_wr **)&bad_wr);
if (rc) {
- for (wr = bad_wr; wr; wr = wr->next) {
+ for (wr = bad_wr; wr;) {
struct rpcrdma_rep *rep;

rep = container_of(wr, struct rpcrdma_rep, rr_recv_wr);
+ wr = wr->next;
rpcrdma_recv_buffer_put(rep);
--count;
}

2019-05-28 18:21:59

by Chuck Lever

Subject: [PATCH RFC 02/12] xprtrdma: Replace use of xdr_stream_pos in rpcrdma_marshal_req

This is a latent bug. xdr_stream_pos works by subtracting
xdr_stream::nwords from xdr_buf::len. But xdr_stream::nwords is not
initialized by xdr_init_encode().

It works today only because all fields in rpcrdma_req::rl_stream
are initialized to zero by rpcrdma_req_create, making the
subtraction in xdr_stream_pos always a no-op. In other words, with
nwords at zero, xdr_stream_pos here effectively returns
xdr_buf::len, which for this stream is req->rl_hdrbuf.len, so the
patch uses that value directly instead of depending on a field that
xdr_init_encode never sets.

I found this issue via code inspection. It was introduced by commit
39f4cd9e9982 ("xprtrdma: Harden chunk list encoding against send
buffer overflow"), but the code has changed enough since then that
this fix can't be automatically applied to stable.

Signed-off-by: Chuck Lever <[email protected]>
---
net/sunrpc/xprtrdma/rpc_rdma.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
index 85115a2..608b9ac 100644
--- a/net/sunrpc/xprtrdma/rpc_rdma.c
+++ b/net/sunrpc/xprtrdma/rpc_rdma.c
@@ -867,9 +867,9 @@ static bool rpcrdma_prepare_msg_sges(struct rpcrdma_xprt *r_xprt,
if (ret)
goto out_err;

- trace_xprtrdma_marshal(rqst, xdr_stream_pos(xdr), rtype, wtype);
+ trace_xprtrdma_marshal(rqst, req->rl_hdrbuf.len, rtype, wtype);

- ret = rpcrdma_prepare_send_sges(r_xprt, req, xdr_stream_pos(xdr),
+ ret = rpcrdma_prepare_send_sges(r_xprt, req, req->rl_hdrbuf.len,
&rqst->rq_snd_buf, rtype);
if (ret)
goto out_err;

2019-05-28 18:21:59

by Chuck Lever

Subject: [PATCH RFC 04/12] xprtrdma: Remove the RPCRDMA_REQ_F_PENDING flag

Commit 9590d083c1bb ("xprtrdma: Use xprt_pin_rqst in
rpcrdma_reply_handler") pins incoming RPC/RDMA replies so they
can be left in the pending requests queue while they are being
processed without introducing a race between ->buf_free and the
transport's reply handler. Therefore RPCRDMA_REQ_F_PENDING is no
longer necessary.

Signed-off-by: Chuck Lever <[email protected]>
---
net/sunrpc/xprtrdma/rpc_rdma.c | 1 -
net/sunrpc/xprtrdma/transport.c | 4 +---
net/sunrpc/xprtrdma/xprt_rdma.h | 1 -
3 files changed, 1 insertion(+), 5 deletions(-)

diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
index d1f44f4..77fc1e4 100644
--- a/net/sunrpc/xprtrdma/rpc_rdma.c
+++ b/net/sunrpc/xprtrdma/rpc_rdma.c
@@ -1371,7 +1371,6 @@ void rpcrdma_reply_handler(struct rpcrdma_rep *rep)
}
req->rl_reply = rep;
rep->rr_rqst = rqst;
- clear_bit(RPCRDMA_REQ_F_PENDING, &req->rl_flags);

trace_xprtrdma_reply(rqst->rq_task, rep, req, credits);
queue_work(buf->rb_completion_wq, &rep->rr_work);
diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
index 1f73a6a..f84375d 100644
--- a/net/sunrpc/xprtrdma/transport.c
+++ b/net/sunrpc/xprtrdma/transport.c
@@ -618,8 +618,7 @@ static bool rpcrdma_check_regbuf(struct rpcrdma_xprt *r_xprt,
struct rpcrdma_xprt *r_xprt = rpcx_to_rdmax(rqst->rq_xprt);
struct rpcrdma_req *req = rpcr_to_rdmar(rqst);

- if (test_bit(RPCRDMA_REQ_F_PENDING, &req->rl_flags))
- rpcrdma_release_rqst(r_xprt, req);
+ rpcrdma_release_rqst(r_xprt, req);
trace_xprtrdma_op_free(task, req);
}

@@ -667,7 +666,6 @@ static bool rpcrdma_check_regbuf(struct rpcrdma_xprt *r_xprt,
goto drop_connection;
rqst->rq_xtime = ktime_get();

- __set_bit(RPCRDMA_REQ_F_PENDING, &req->rl_flags);
if (rpcrdma_ep_post(&r_xprt->rx_ia, &r_xprt->rx_ep, req))
goto drop_connection;

diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
index 2c6c5d8..3c39aa3 100644
--- a/net/sunrpc/xprtrdma/xprt_rdma.h
+++ b/net/sunrpc/xprtrdma/xprt_rdma.h
@@ -344,7 +344,6 @@ struct rpcrdma_req {

/* rl_flags */
enum {
- RPCRDMA_REQ_F_PENDING = 0,
RPCRDMA_REQ_F_TX_RESOURCES,
};


2019-05-28 18:21:59

by Chuck Lever

Subject: [PATCH RFC 03/12] xprtrdma: Fix occasional transport deadlock

Under high I/O workloads, I've noticed that an RPC/RDMA transport
occasionally deadlocks (IOPS goes to zero, and doesn't recover).
Diagnosis shows that the sendctx queue is empty, but when sendctxs
are returned to the queue, the xprt_write_space wake-up never
occurs. The wake-up logic in rpcrdma_sendctx_put_locked is racy.
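
One plausible lost-wakeup interleaving, reconstructed from the code
in the hunks below rather than from a captured trace:

   sender thread                       Send completion thread
   (rpcrdma_sendctx_get_locked)        (rpcrdma_sendctx_put_locked)
   ----------------------------        ----------------------------
   observes the sendctx queue
     is empty
                                       returns a sendctx to the queue
                                       test_and_clear_bit(EMPTY_SCQ)
                                         finds the bit clear, so
                                         xprt_write_space() is skipped
   set_bit(EMPTY_SCQ); the RPC
     waits for write space

If no further Send completion arrives, nothing ever sees the bit,
the waiter is never woken, and IOPS drops to zero as described
above.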

I noticed that both EMPTY_SCQ and XPRT_WRITE_SPACE are implemented
via an atomic bit. Just one of those is sufficient. Removing
EMPTY_SCQ in favor of the generic bit mechanism makes the deadlock
un-reproducible.

Without EMPTY_SCQ, rpcrdma_buffer::rb_flags is no longer used and
is therefore removed.

Unfortunately this patch does not apply cleanly to stable. If
needed, someone will have to port it and test it.

Fixes: 2fad659209d5 ("xprtrdma: Wait on empty sendctx queue")
Signed-off-by: Chuck Lever <[email protected]>
---
include/trace/events/rpcrdma.h | 27 +++++++++++++++++++++++++++
net/sunrpc/xprtrdma/frwr_ops.c | 6 +++++-
net/sunrpc/xprtrdma/rpc_rdma.c | 26 ++++++++++++--------------
net/sunrpc/xprtrdma/verbs.c | 11 +++--------
net/sunrpc/xprtrdma/xprt_rdma.h | 6 ------
5 files changed, 47 insertions(+), 29 deletions(-)

diff --git a/include/trace/events/rpcrdma.h b/include/trace/events/rpcrdma.h
index df9851cb..a4ab3a2 100644
--- a/include/trace/events/rpcrdma.h
+++ b/include/trace/events/rpcrdma.h
@@ -538,6 +538,33 @@
)
);

+TRACE_EVENT(xprtrdma_prepsend_failed,
+ TP_PROTO(const struct rpc_rqst *rqst,
+ int ret
+ ),
+
+ TP_ARGS(rqst, ret),
+
+ TP_STRUCT__entry(
+ __field(unsigned int, task_id)
+ __field(unsigned int, client_id)
+ __field(u32, xid)
+ __field(int, ret)
+ ),
+
+ TP_fast_assign(
+ __entry->task_id = rqst->rq_task->tk_pid;
+ __entry->client_id = rqst->rq_task->tk_client->cl_clid;
+ __entry->xid = be32_to_cpu(rqst->rq_xid);
+ __entry->ret = ret;
+ ),
+
+ TP_printk("task:%u@%u xid=0x%08x: ret=%d",
+ __entry->task_id, __entry->client_id, __entry->xid,
+ __entry->ret
+ )
+);
+
TRACE_EVENT(xprtrdma_post_send,
TP_PROTO(
const struct rpcrdma_req *req,
diff --git a/net/sunrpc/xprtrdma/frwr_ops.c b/net/sunrpc/xprtrdma/frwr_ops.c
index 794ba4c..ac47314 100644
--- a/net/sunrpc/xprtrdma/frwr_ops.c
+++ b/net/sunrpc/xprtrdma/frwr_ops.c
@@ -391,7 +391,7 @@ struct rpcrdma_mr_seg *frwr_map(struct rpcrdma_xprt *r_xprt,
rpcrdma_mr_recycle(mr);
mr = rpcrdma_mr_get(r_xprt);
if (!mr)
- return ERR_PTR(-EAGAIN);
+ goto out_getmr_err;
} while (mr->frwr.fr_state != FRWR_IS_INVALID);
frwr = &mr->frwr;
frwr->fr_state = FRWR_IS_VALID;
@@ -448,6 +448,10 @@ struct rpcrdma_mr_seg *frwr_map(struct rpcrdma_xprt *r_xprt,
*out = mr;
return seg;

+out_getmr_err:
+ xprt_wait_for_buffer_space(&r_xprt->rx_xprt);
+ return ERR_PTR(-EAGAIN);
+
out_dmamap_err:
mr->mr_dir = DMA_NONE;
trace_xprtrdma_frwr_sgerr(mr, i);
diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
index 608b9ac..d1f44f4 100644
--- a/net/sunrpc/xprtrdma/rpc_rdma.c
+++ b/net/sunrpc/xprtrdma/rpc_rdma.c
@@ -699,22 +699,28 @@ static bool rpcrdma_prepare_msg_sges(struct rpcrdma_xprt *r_xprt,
struct rpcrdma_req *req, u32 hdrlen,
struct xdr_buf *xdr, enum rpcrdma_chunktype rtype)
{
+ int ret;
+
+ ret = -EAGAIN;
req->rl_sendctx = rpcrdma_sendctx_get_locked(r_xprt);
if (!req->rl_sendctx)
- return -EAGAIN;
+ goto err;
req->rl_sendctx->sc_wr.num_sge = 0;
req->rl_sendctx->sc_unmap_count = 0;
req->rl_sendctx->sc_req = req;
__clear_bit(RPCRDMA_REQ_F_TX_RESOURCES, &req->rl_flags);

+ ret = -EIO;
if (!rpcrdma_prepare_hdr_sge(r_xprt, req, hdrlen))
- return -EIO;
-
+ goto err;
if (rtype != rpcrdma_areadch)
if (!rpcrdma_prepare_msg_sges(r_xprt, req, xdr, rtype))
- return -EIO;
-
+ goto err;
return 0;
+
+err:
+ trace_xprtrdma_prepsend_failed(&req->rl_slot, ret);
+ return ret;
}

/**
@@ -877,15 +883,7 @@ static bool rpcrdma_prepare_msg_sges(struct rpcrdma_xprt *r_xprt,

out_err:
trace_xprtrdma_marshal_failed(rqst, ret);
- switch (ret) {
- case -EAGAIN:
- xprt_wait_for_buffer_space(rqst->rq_xprt);
- break;
- case -ENOBUFS:
- break;
- default:
- r_xprt->rx_stats.failed_marshal_count++;
- }
+ r_xprt->rx_stats.failed_marshal_count++;
return ret;
}

diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index a803e94..78be985 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -902,7 +902,7 @@ struct rpcrdma_sendctx *rpcrdma_sendctx_get_locked(struct rpcrdma_xprt *r_xprt)
* completions recently. This is a sign the Send Queue is
* backing up. Cause the caller to pause and try again.
*/
- set_bit(RPCRDMA_BUF_F_EMPTY_SCQ, &buf->rb_flags);
+ xprt_wait_for_buffer_space(&r_xprt->rx_xprt);
r_xprt->rx_stats.empty_sendctx_q++;
return NULL;
}
@@ -937,10 +937,7 @@ struct rpcrdma_sendctx *rpcrdma_sendctx_get_locked(struct rpcrdma_xprt *r_xprt)
/* Paired with READ_ONCE */
smp_store_release(&buf->rb_sc_tail, next_tail);

- if (test_and_clear_bit(RPCRDMA_BUF_F_EMPTY_SCQ, &buf->rb_flags)) {
- smp_mb__after_atomic();
- xprt_write_space(&sc->sc_xprt->rx_xprt);
- }
+ xprt_write_space(&sc->sc_xprt->rx_xprt);
}

static void
@@ -978,8 +975,6 @@ struct rpcrdma_sendctx *rpcrdma_sendctx_get_locked(struct rpcrdma_xprt *r_xprt)
r_xprt->rx_stats.mrs_allocated += count;
spin_unlock(&buf->rb_mrlock);
trace_xprtrdma_createmrs(r_xprt, count);
-
- xprt_write_space(&r_xprt->rx_xprt);
}

static void
@@ -991,6 +986,7 @@ struct rpcrdma_sendctx *rpcrdma_sendctx_get_locked(struct rpcrdma_xprt *r_xprt)
rx_buf);

rpcrdma_mrs_create(r_xprt);
+ xprt_write_space(&r_xprt->rx_xprt);
}

/**
@@ -1090,7 +1086,6 @@ int rpcrdma_buffer_create(struct rpcrdma_xprt *r_xprt)
struct rpcrdma_buffer *buf = &r_xprt->rx_buf;
int i, rc;

- buf->rb_flags = 0;
buf->rb_max_requests = r_xprt->rx_ep.rep_max_requests;
buf->rb_bc_srv_max_requests = 0;
spin_lock_init(&buf->rb_mrlock);
diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
index d1e0749..2c6c5d8 100644
--- a/net/sunrpc/xprtrdma/xprt_rdma.h
+++ b/net/sunrpc/xprtrdma/xprt_rdma.h
@@ -391,7 +391,6 @@ struct rpcrdma_buffer {
struct list_head rb_recv_bufs;
struct list_head rb_allreqs;

- unsigned long rb_flags;
u32 rb_max_requests;
u32 rb_credits; /* most recent credit grant */

@@ -402,11 +401,6 @@ struct rpcrdma_buffer {
struct delayed_work rb_refresh_worker;
};

-/* rb_flags */
-enum {
- RPCRDMA_BUF_F_EMPTY_SCQ = 0,
-};
-
/*
* Statistics for RPCRDMA
*/

2019-05-28 18:22:01

by Chuck Lever

Subject: [PATCH RFC 05/12] xprtrdma: Remove fr_state

Since both the Send and Receive completion queues are processed in
a workqueue context, it should be safe to DMA unmap and return MRs
to the free or recycle lists directly in the completion handlers.

Doing this means rpcrdma_frwr no longer needs to track the state
of each MR... a VALID or FLUSHED MR can no longer appear on an
xprt's MR free list.

Signed-off-by: Chuck Lever <[email protected]>
---
include/trace/events/rpcrdma.h | 19 ----
net/sunrpc/xprtrdma/frwr_ops.c | 202 ++++++++++++++++++---------------------
net/sunrpc/xprtrdma/rpc_rdma.c | 2
net/sunrpc/xprtrdma/xprt_rdma.h | 11 --
4 files changed, 95 insertions(+), 139 deletions(-)

diff --git a/include/trace/events/rpcrdma.h b/include/trace/events/rpcrdma.h
index a4ab3a2..7fe21ba 100644
--- a/include/trace/events/rpcrdma.h
+++ b/include/trace/events/rpcrdma.h
@@ -181,18 +181,6 @@
), \
TP_ARGS(task, mr, nsegs))

-TRACE_DEFINE_ENUM(FRWR_IS_INVALID);
-TRACE_DEFINE_ENUM(FRWR_IS_VALID);
-TRACE_DEFINE_ENUM(FRWR_FLUSHED_FR);
-TRACE_DEFINE_ENUM(FRWR_FLUSHED_LI);
-
-#define xprtrdma_show_frwr_state(x) \
- __print_symbolic(x, \
- { FRWR_IS_INVALID, "INVALID" }, \
- { FRWR_IS_VALID, "VALID" }, \
- { FRWR_FLUSHED_FR, "FLUSHED_FR" }, \
- { FRWR_FLUSHED_LI, "FLUSHED_LI" })
-
DECLARE_EVENT_CLASS(xprtrdma_frwr_done,
TP_PROTO(
const struct ib_wc *wc,
@@ -203,22 +191,19 @@

TP_STRUCT__entry(
__field(const void *, mr)
- __field(unsigned int, state)
__field(unsigned int, status)
__field(unsigned int, vendor_err)
),

TP_fast_assign(
__entry->mr = container_of(frwr, struct rpcrdma_mr, frwr);
- __entry->state = frwr->fr_state;
__entry->status = wc->status;
__entry->vendor_err = __entry->status ? wc->vendor_err : 0;
),

TP_printk(
- "mr=%p state=%s: %s (%u/0x%x)",
- __entry->mr, xprtrdma_show_frwr_state(__entry->state),
- rdma_show_wc_status(__entry->status),
+ "mr=%p: %s (%u/0x%x)",
+ __entry->mr, rdma_show_wc_status(__entry->status),
__entry->status, __entry->vendor_err
)
);
diff --git a/net/sunrpc/xprtrdma/frwr_ops.c b/net/sunrpc/xprtrdma/frwr_ops.c
index ac47314..99871fbf 100644
--- a/net/sunrpc/xprtrdma/frwr_ops.c
+++ b/net/sunrpc/xprtrdma/frwr_ops.c
@@ -168,7 +168,6 @@ int frwr_init_mr(struct rpcrdma_ia *ia, struct rpcrdma_mr *mr)
goto out_list_err;

mr->frwr.fr_mr = frmr;
- mr->frwr.fr_state = FRWR_IS_INVALID;
mr->mr_dir = DMA_NONE;
INIT_LIST_HEAD(&mr->mr_list);
INIT_WORK(&mr->mr_recycle, frwr_mr_recycle_worker);
@@ -298,65 +297,6 @@ size_t frwr_maxpages(struct rpcrdma_xprt *r_xprt)
}

/**
- * frwr_wc_fastreg - Invoked by RDMA provider for a flushed FastReg WC
- * @cq: completion queue (ignored)
- * @wc: completed WR
- *
- */
-static void
-frwr_wc_fastreg(struct ib_cq *cq, struct ib_wc *wc)
-{
- struct ib_cqe *cqe = wc->wr_cqe;
- struct rpcrdma_frwr *frwr =
- container_of(cqe, struct rpcrdma_frwr, fr_cqe);
-
- /* WARNING: Only wr_cqe and status are reliable at this point */
- if (wc->status != IB_WC_SUCCESS)
- frwr->fr_state = FRWR_FLUSHED_FR;
- trace_xprtrdma_wc_fastreg(wc, frwr);
-}
-
-/**
- * frwr_wc_localinv - Invoked by RDMA provider for a flushed LocalInv WC
- * @cq: completion queue (ignored)
- * @wc: completed WR
- *
- */
-static void
-frwr_wc_localinv(struct ib_cq *cq, struct ib_wc *wc)
-{
- struct ib_cqe *cqe = wc->wr_cqe;
- struct rpcrdma_frwr *frwr = container_of(cqe, struct rpcrdma_frwr,
- fr_cqe);
-
- /* WARNING: Only wr_cqe and status are reliable at this point */
- if (wc->status != IB_WC_SUCCESS)
- frwr->fr_state = FRWR_FLUSHED_LI;
- trace_xprtrdma_wc_li(wc, frwr);
-}
-
-/**
- * frwr_wc_localinv_wake - Invoked by RDMA provider for a signaled LocalInv WC
- * @cq: completion queue (ignored)
- * @wc: completed WR
- *
- * Awaken anyone waiting for an MR to finish being fenced.
- */
-static void
-frwr_wc_localinv_wake(struct ib_cq *cq, struct ib_wc *wc)
-{
- struct ib_cqe *cqe = wc->wr_cqe;
- struct rpcrdma_frwr *frwr = container_of(cqe, struct rpcrdma_frwr,
- fr_cqe);
-
- /* WARNING: Only wr_cqe and status are reliable at this point */
- if (wc->status != IB_WC_SUCCESS)
- frwr->fr_state = FRWR_FLUSHED_LI;
- trace_xprtrdma_wc_li_wake(wc, frwr);
- complete(&frwr->fr_linv_done);
-}
-
-/**
* frwr_map - Register a memory region
* @r_xprt: controlling transport
* @seg: memory region co-ordinates
@@ -378,23 +318,15 @@ struct rpcrdma_mr_seg *frwr_map(struct rpcrdma_xprt *r_xprt,
{
struct rpcrdma_ia *ia = &r_xprt->rx_ia;
bool holes_ok = ia->ri_mrtype == IB_MR_TYPE_SG_GAPS;
- struct rpcrdma_frwr *frwr;
struct rpcrdma_mr *mr;
struct ib_mr *ibmr;
struct ib_reg_wr *reg_wr;
int i, n;
u8 key;

- mr = NULL;
- do {
- if (mr)
- rpcrdma_mr_recycle(mr);
- mr = rpcrdma_mr_get(r_xprt);
- if (!mr)
- goto out_getmr_err;
- } while (mr->frwr.fr_state != FRWR_IS_INVALID);
- frwr = &mr->frwr;
- frwr->fr_state = FRWR_IS_VALID;
+ mr = rpcrdma_mr_get(r_xprt);
+ if (!mr)
+ goto out_getmr_err;

if (nsegs > ia->ri_max_frwr_depth)
nsegs = ia->ri_max_frwr_depth;
@@ -423,7 +355,7 @@ struct rpcrdma_mr_seg *frwr_map(struct rpcrdma_xprt *r_xprt,
if (!mr->mr_nents)
goto out_dmamap_err;

- ibmr = frwr->fr_mr;
+ ibmr = mr->frwr.fr_mr;
n = ib_map_mr_sg(ibmr, mr->mr_sg, mr->mr_nents, NULL, PAGE_SIZE);
if (unlikely(n != mr->mr_nents))
goto out_mapmr_err;
@@ -433,7 +365,7 @@ struct rpcrdma_mr_seg *frwr_map(struct rpcrdma_xprt *r_xprt,
key = (u8)(ibmr->rkey & 0x000000FF);
ib_update_fast_reg_key(ibmr, ++key);

- reg_wr = &frwr->fr_regwr;
+ reg_wr = &mr->frwr.fr_regwr;
reg_wr->mr = ibmr;
reg_wr->key = ibmr->rkey;
reg_wr->access = writing ?
@@ -465,6 +397,23 @@ struct rpcrdma_mr_seg *frwr_map(struct rpcrdma_xprt *r_xprt,
}

/**
+ * frwr_wc_fastreg - Invoked by RDMA provider for a flushed FastReg WC
+ * @cq: completion queue (ignored)
+ * @wc: completed WR
+ *
+ */
+static void frwr_wc_fastreg(struct ib_cq *cq, struct ib_wc *wc)
+{
+ struct ib_cqe *cqe = wc->wr_cqe;
+ struct rpcrdma_frwr *frwr =
+ container_of(cqe, struct rpcrdma_frwr, fr_cqe);
+
+ /* WARNING: Only wr_cqe and status are reliable at this point */
+ trace_xprtrdma_wc_fastreg(wc, frwr);
+ /* The MR will get recycled when the associated req is retransmitted */
+}
+
+/**
* frwr_send - post Send WR containing the RPC Call message
* @ia: interface adapter
* @req: Prepared RPC Call
@@ -516,65 +465,104 @@ void frwr_reminv(struct rpcrdma_rep *rep, struct list_head *mrs)
if (mr->mr_handle == rep->rr_inv_rkey) {
list_del_init(&mr->mr_list);
trace_xprtrdma_mr_remoteinv(mr);
- mr->frwr.fr_state = FRWR_IS_INVALID;
rpcrdma_mr_unmap_and_put(mr);
break; /* only one invalidated MR per RPC */
}
}

+static void __frwr_release_mr(struct ib_wc *wc, struct rpcrdma_mr *mr)
+{
+ if (wc->status != IB_WC_SUCCESS)
+ rpcrdma_mr_recycle(mr);
+ else
+ rpcrdma_mr_unmap_and_put(mr);
+}
+
/**
- * frwr_unmap_sync - invalidate memory regions that were registered for @req
- * @r_xprt: controlling transport
- * @mrs: list of MRs to process
+ * frwr_wc_localinv - Invoked by RDMA provider for a LOCAL_INV WC
+ * @cq: completion queue (ignored)
+ * @wc: completed WR
*
- * Sleeps until it is safe for the host CPU to access the
- * previously mapped memory regions.
+ */
+static void frwr_wc_localinv(struct ib_cq *cq, struct ib_wc *wc)
+{
+ struct ib_cqe *cqe = wc->wr_cqe;
+ struct rpcrdma_frwr *frwr =
+ container_of(cqe, struct rpcrdma_frwr, fr_cqe);
+ struct rpcrdma_mr *mr = container_of(frwr, struct rpcrdma_mr, frwr);
+
+ /* WARNING: Only wr_cqe and status are reliable at this point */
+ __frwr_release_mr(wc, mr);
+ trace_xprtrdma_wc_li(wc, frwr);
+}
+
+/**
+ * frwr_wc_localinv_wake - Invoked by RDMA provider for a LOCAL_INV WC
+ * @cq: completion queue (ignored)
+ * @wc: completed WR
*
- * Caller ensures that @mrs is not empty before the call. This
- * function empties the list.
+ * Awaken anyone waiting for an MR to finish being fenced.
*/
-void frwr_unmap_sync(struct rpcrdma_xprt *r_xprt, struct list_head *mrs)
+static void frwr_wc_localinv_wake(struct ib_cq *cq, struct ib_wc *wc)
+{
+ struct ib_cqe *cqe = wc->wr_cqe;
+ struct rpcrdma_frwr *frwr =
+ container_of(cqe, struct rpcrdma_frwr, fr_cqe);
+ struct rpcrdma_mr *mr = container_of(frwr, struct rpcrdma_mr, frwr);
+
+ /* WARNING: Only wr_cqe and status are reliable at this point */
+ __frwr_release_mr(wc, mr);
+ trace_xprtrdma_wc_li_wake(wc, frwr);
+ complete(&frwr->fr_linv_done);
+}
+
+/**
+ * frwr_unmap_sync - invalidate memory regions that were registered for @req
+ * @r_xprt: controlling transport instance
+ * @req: rpcrdma_req with a non-empty list of MRs to process
+ *
+ * Sleeps until it is safe for the host CPU to access the previously mapped
+ * memory regions.
+ */
+void frwr_unmap_sync(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req)
{
struct ib_send_wr *first, **prev, *last;
const struct ib_send_wr *bad_wr;
- struct rpcrdma_ia *ia = &r_xprt->rx_ia;
struct rpcrdma_frwr *frwr;
struct rpcrdma_mr *mr;
- int count, rc;
+ int rc;

/* ORDER: Invalidate all of the MRs first
*
* Chain the LOCAL_INV Work Requests and post them with
* a single ib_post_send() call.
*/
- frwr = NULL;
- count = 0;
prev = &first;
- list_for_each_entry(mr, mrs, mr_list) {
- mr->frwr.fr_state = FRWR_IS_INVALID;
+ while (!list_empty(&req->rl_registered)) {
+ mr = rpcrdma_mr_pop(&req->rl_registered);

- frwr = &mr->frwr;
trace_xprtrdma_mr_localinv(mr);
+ r_xprt->rx_stats.local_inv_needed++;

+ frwr = &mr->frwr;
frwr->fr_cqe.done = frwr_wc_localinv;
last = &frwr->fr_invwr;
- memset(last, 0, sizeof(*last));
+ last->next = NULL;
last->wr_cqe = &frwr->fr_cqe;
+ last->sg_list = NULL;
+ last->num_sge = 0;
last->opcode = IB_WR_LOCAL_INV;
+ last->send_flags = IB_SEND_SIGNALED;
last->ex.invalidate_rkey = mr->mr_handle;
- count++;

*prev = last;
prev = &last->next;
}
- if (!frwr)
- goto unmap;

/* Strong send queue ordering guarantees that when the
* last WR in the chain completes, all WRs in the chain
* are complete.
*/
- last->send_flags = IB_SEND_SIGNALED;
frwr->fr_cqe.done = frwr_wc_localinv_wake;
reinit_completion(&frwr->fr_linv_done);

@@ -582,26 +570,18 @@ void frwr_unmap_sync(struct rpcrdma_xprt *r_xprt, struct list_head *mrs)
* replaces the QP. The RPC reply handler won't call us
* unless ri_id->qp is a valid pointer.
*/
- r_xprt->rx_stats.local_inv_needed++;
bad_wr = NULL;
- rc = ib_post_send(ia->ri_id->qp, first, &bad_wr);
- if (bad_wr != first)
- wait_for_completion(&frwr->fr_linv_done);
- if (rc)
- goto out_release;
+ rc = ib_post_send(r_xprt->rx_ia.ri_id->qp, first, &bad_wr);
+ trace_xprtrdma_post_send(req, rc);

- /* ORDER: Now DMA unmap all of the MRs, and return
- * them to the free MR list.
+ /* The final LOCAL_INV WR in the chain is supposed to
+ * do the wake. If it never gets posted, the wake will
+ * not happen, so don't wait in that case.
*/
-unmap:
- while (!list_empty(mrs)) {
- mr = rpcrdma_mr_pop(mrs);
- rpcrdma_mr_unmap_and_put(mr);
- }
- return;
-
-out_release:
- pr_err("rpcrdma: FRWR invalidate ib_post_send returned %i\n", rc);
+ if (bad_wr != first)
+ wait_for_completion(&frwr->fr_linv_done);
+ if (!rc)
+ return;

/* Unmap and release the MRs in the LOCAL_INV WRs that did not
* get posted.
diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
index 77fc1e4..6c049fd 100644
--- a/net/sunrpc/xprtrdma/rpc_rdma.c
+++ b/net/sunrpc/xprtrdma/rpc_rdma.c
@@ -1277,7 +1277,7 @@ void rpcrdma_release_rqst(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req)
* RPC has relinquished all its Send Queue entries.
*/
if (!list_empty(&req->rl_registered))
- frwr_unmap_sync(r_xprt, &req->rl_registered);
+ frwr_unmap_sync(r_xprt, req);

/* Ensure that any DMA mapped pages associated with
* the Send of the RPC Call have been unmapped before
diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
index 3c39aa3..a9de116 100644
--- a/net/sunrpc/xprtrdma/xprt_rdma.h
+++ b/net/sunrpc/xprtrdma/xprt_rdma.h
@@ -240,17 +240,9 @@ struct rpcrdma_sendctx {
* An external memory region is any buffer or page that is registered
* on the fly (ie, not pre-registered).
*/
-enum rpcrdma_frwr_state {
- FRWR_IS_INVALID, /* ready to be used */
- FRWR_IS_VALID, /* in use */
- FRWR_FLUSHED_FR, /* flushed FASTREG WR */
- FRWR_FLUSHED_LI, /* flushed LOCALINV WR */
-};
-
struct rpcrdma_frwr {
struct ib_mr *fr_mr;
struct ib_cqe fr_cqe;
- enum rpcrdma_frwr_state fr_state;
struct completion fr_linv_done;
union {
struct ib_reg_wr fr_regwr;
@@ -567,8 +559,7 @@ struct rpcrdma_mr_seg *frwr_map(struct rpcrdma_xprt *r_xprt,
struct rpcrdma_mr **mr);
int frwr_send(struct rpcrdma_ia *ia, struct rpcrdma_req *req);
void frwr_reminv(struct rpcrdma_rep *rep, struct list_head *mrs);
-void frwr_unmap_sync(struct rpcrdma_xprt *r_xprt,
- struct list_head *mrs);
+void frwr_unmap_sync(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req);

/*
* RPC/RDMA protocol calls - xprtrdma/rpc_rdma.c

2019-05-28 18:22:01

by Chuck Lever

Subject: [PATCH RFC 06/12] xprtrdma: Add mechanism to place MRs back on the free list

A marshal operation can fail if there are not enough MRs or if
the sendctx queue is empty. In this case, any MRs that were already
set up for that request are recycled. Recycling releases MRs and
creates new ones, which is expensive.

Since commit f2877623082b ("xprtrdma: Chain Send to FastReg WRs")
was merged, recycling FRWRs is unnecessary. This is because before
that commit, frwr_map had already posted FAST_REG Work Requests,
so the MRs were owned by the NIC and dealing with them had to be
delayed.

Since that commit, the FAST_REG WRs are posted at the same time as
the Send WR. This means that if marshaling fails, we are certain
the MRs are safe to simply unmap and place back on the free list
because neither the Send nor the FAST_REG WRs have been posted yet.

This reduces the total number of MRs that the xprt has to create
under heavy workloads and makes the logic a bit less brittle.

Signed-off-by: Chuck Lever <[email protected]>
---
net/sunrpc/xprtrdma/frwr_ops.c | 24 ++++++++++++++++++++++++
net/sunrpc/xprtrdma/rpc_rdma.c | 1 +
net/sunrpc/xprtrdma/xprt_rdma.h | 1 +
3 files changed, 26 insertions(+)

diff --git a/net/sunrpc/xprtrdma/frwr_ops.c b/net/sunrpc/xprtrdma/frwr_ops.c
index 99871fbf..ae97119 100644
--- a/net/sunrpc/xprtrdma/frwr_ops.c
+++ b/net/sunrpc/xprtrdma/frwr_ops.c
@@ -144,6 +144,30 @@ void frwr_release_mr(struct rpcrdma_mr *mr)
frwr_release_mr(mr);
}

+/* frwr_reset - Place MRs back on the free list
+ * @req: request to reset
+ *
+ * Used after a failed marshal. For FRWR, this means the MRs
+ * don't have to be fully released and recreated.
+ *
+ * NB: This is safe only as long as none of @req's MRs are
+ * involved with an ongoing asynchronous FAST_REG or LOCAL_INV
+ * Work Request.
+ */
+void frwr_reset(struct rpcrdma_req *req)
+{
+ /* If this is a retransmit, discard previously registered
+ * chunks. Very likely the connection has been replaced,
+ * so these registrations are invalid and unusable.
+ */
+ while (!list_empty(&req->rl_registered)) {
+ struct rpcrdma_mr *mr;
+
+ mr = rpcrdma_mr_pop(&req->rl_registered);
+ rpcrdma_mr_unmap_and_put(mr);
+ }
+}
+
/**
* frwr_init_mr - Initialize one MR
* @ia: interface adapter
diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
index 6c049fd..f211f81 100644
--- a/net/sunrpc/xprtrdma/rpc_rdma.c
+++ b/net/sunrpc/xprtrdma/rpc_rdma.c
@@ -884,6 +884,7 @@ static bool rpcrdma_prepare_msg_sges(struct rpcrdma_xprt *r_xprt,
out_err:
trace_xprtrdma_marshal_failed(rqst, ret);
r_xprt->rx_stats.failed_marshal_count++;
+ frwr_reset(req);
return ret;
}

diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
index a9de116..a396528 100644
--- a/net/sunrpc/xprtrdma/xprt_rdma.h
+++ b/net/sunrpc/xprtrdma/xprt_rdma.h
@@ -549,6 +549,7 @@ static inline bool rpcrdma_regbuf_dma_map(struct rpcrdma_xprt *r_xprt,
/* Memory registration calls xprtrdma/frwr_ops.c
*/
bool frwr_is_supported(struct ib_device *device);
+void frwr_reset(struct rpcrdma_req *req);
int frwr_open(struct rpcrdma_ia *ia, struct rpcrdma_ep *ep);
int frwr_init_mr(struct rpcrdma_ia *ia, struct rpcrdma_mr *mr);
void frwr_release_mr(struct rpcrdma_mr *mr);

2019-05-28 18:22:06

by Chuck Lever

Subject: [PATCH RFC 07/12] xprtrdma: Reduce context switching due to Local Invalidation

Since commit ba69cd122ece ("xprtrdma: Remove support for FMR memory
registration"), FRWR is the only supported memory registration mode.

We can take advantage of the asynchronous nature of FRWR's LOCAL_INV
Work Requests to get rid of the completion wait by having the
LOCAL_INV completion handler take care of DMA unmapping MRs and
waking the upper layer RPC waiter.

This eliminates two context switches when local invalidation is
necessary: previously the Receive completion handler queued rr_work
to a workqueue, and that deferred worker then slept in
frwr_unmap_sync until the LOCAL_INV completion woke it. Now the
Receive completion posts the LOCAL_INV chain directly, and the
final LOCAL_INV completion handler completes the RPC itself. As a
side benefit, we will no longer need the per-xprt deferred
completion work queue.

Signed-off-by: Chuck Lever <[email protected]>
---
include/trace/events/rpcrdma.h | 1
net/sunrpc/xprtrdma/frwr_ops.c | 104 ++++++++++++++++++++++++++++++++++++++-
net/sunrpc/xprtrdma/rpc_rdma.c | 61 +++++++++++------------
net/sunrpc/xprtrdma/verbs.c | 17 ------
net/sunrpc/xprtrdma/xprt_rdma.h | 8 ++-
5 files changed, 136 insertions(+), 55 deletions(-)

diff --git a/include/trace/events/rpcrdma.h b/include/trace/events/rpcrdma.h
index 7fe21ba..eadcd09 100644
--- a/include/trace/events/rpcrdma.h
+++ b/include/trace/events/rpcrdma.h
@@ -710,6 +710,7 @@
DEFINE_FRWR_DONE_EVENT(xprtrdma_wc_fastreg);
DEFINE_FRWR_DONE_EVENT(xprtrdma_wc_li);
DEFINE_FRWR_DONE_EVENT(xprtrdma_wc_li_wake);
+DEFINE_FRWR_DONE_EVENT(xprtrdma_wc_li_done);

TRACE_EVENT(xprtrdma_frwr_alloc,
TP_PROTO(
diff --git a/net/sunrpc/xprtrdma/frwr_ops.c b/net/sunrpc/xprtrdma/frwr_ops.c
index ae97119..e5500f7 100644
--- a/net/sunrpc/xprtrdma/frwr_ops.c
+++ b/net/sunrpc/xprtrdma/frwr_ops.c
@@ -546,7 +546,10 @@ static void frwr_wc_localinv_wake(struct ib_cq *cq, struct ib_wc *wc)
* @req: rpcrdma_req with a non-empty list of MRs to process
*
* Sleeps until it is safe for the host CPU to access the previously mapped
- * memory regions.
+ * memory regions. This guarantees that registered MRs are properly fenced
+ * from the server before the RPC consumer accesses the data in them. It
+ * also ensures proper Send flow control: waking the next RPC waits until
+ * this RPC has relinquished all its Send Queue entries.
*/
void frwr_unmap_sync(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req)
{
@@ -607,8 +610,7 @@ void frwr_unmap_sync(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req)
if (!rc)
return;

- /* Unmap and release the MRs in the LOCAL_INV WRs that did not
- * get posted.
+ /* Recycle MRs in the LOCAL_INV chain that did not get posted.
*/
while (bad_wr) {
frwr = container_of(bad_wr, struct rpcrdma_frwr,
@@ -620,3 +622,99 @@ void frwr_unmap_sync(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req)
rpcrdma_mr_recycle(mr);
}
}
+
+/**
+ * frwr_wc_localinv_done - Invoked by RDMA provider for a signaled LOCAL_INV WC
+ * @cq: completion queue (ignored)
+ * @wc: completed WR
+ *
+ */
+static void frwr_wc_localinv_done(struct ib_cq *cq, struct ib_wc *wc)
+{
+ struct ib_cqe *cqe = wc->wr_cqe;
+ struct rpcrdma_frwr *frwr =
+ container_of(cqe, struct rpcrdma_frwr, fr_cqe);
+ struct rpcrdma_mr *mr = container_of(frwr, struct rpcrdma_mr, frwr);
+
+ /* WARNING: Only wr_cqe and status are reliable at this point */
+ __frwr_release_mr(wc, mr);
+ trace_xprtrdma_wc_li_done(wc, frwr);
+ rpcrdma_complete_rqst(frwr->fr_req->rl_reply);
+}
+
+/**
+ * frwr_unmap_async - invalidate memory regions that were registered for @req
+ * @r_xprt: controlling transport instance
+ * @req: rpcrdma_req with a non-empty list of MRs to process
+ *
+ * This guarantees that registered MRs are properly fenced from the
+ * server before the RPC consumer accesses the data in them. It also
+ * ensures proper Send flow control: waking the next RPC waits until
+ * this RPC has relinquished all its Send Queue entries.
+ */
+void frwr_unmap_async(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req)
+{
+ struct ib_send_wr *first, *last, **prev;
+ const struct ib_send_wr *bad_wr;
+ struct rpcrdma_frwr *frwr;
+ struct rpcrdma_mr *mr;
+ int rc;
+
+ /* Chain the LOCAL_INV Work Requests and post them with
+ * a single ib_post_send() call.
+ */
+ prev = &first;
+ while (!list_empty(&req->rl_registered)) {
+ mr = rpcrdma_mr_pop(&req->rl_registered);
+
+ trace_xprtrdma_mr_localinv(mr);
+ r_xprt->rx_stats.local_inv_needed++;
+
+ frwr = &mr->frwr;
+ frwr->fr_cqe.done = frwr_wc_localinv;
+ frwr->fr_req = req;
+ last = &frwr->fr_invwr;
+ last->next = NULL;
+ last->wr_cqe = &frwr->fr_cqe;
+ last->sg_list = NULL;
+ last->num_sge = 0;
+ last->opcode = IB_WR_LOCAL_INV;
+ last->send_flags = IB_SEND_SIGNALED;
+ last->ex.invalidate_rkey = mr->mr_handle;
+
+ *prev = last;
+ prev = &last->next;
+ }
+
+ /* Strong send queue ordering guarantees that when the
+ * last WR in the chain completes, all WRs in the chain
+ * are complete.
+ */
+ frwr->fr_cqe.done = frwr_wc_localinv_done;
+
+ /* Transport disconnect drains the receive CQ before it
+ * replaces the QP. The RPC reply handler won't call us
+ * unless ri_id->qp is a valid pointer.
+ */
+ bad_wr = NULL;
+ rc = ib_post_send(r_xprt->rx_ia.ri_id->qp, first, &bad_wr);
+ trace_xprtrdma_post_send(req, rc);
+ if (!rc)
+ return;
+
+ /* Recycle MRs in the LOCAL_INV chain that did not get posted.
+ */
+ while (bad_wr) {
+ frwr = container_of(bad_wr, struct rpcrdma_frwr, fr_invwr);
+ mr = container_of(frwr, struct rpcrdma_mr, frwr);
+ bad_wr = bad_wr->next;
+
+ rpcrdma_mr_recycle(mr);
+ }
+
+ /* The final LOCAL_INV WR in the chain is supposed to
+ * do the wake. If it never gets posted, the wake will
+ * not happen, so wake here in that case.
+ */
+ rpcrdma_complete_rqst(req->rl_reply);
+}
diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
index f211f81..ea39f74 100644
--- a/net/sunrpc/xprtrdma/rpc_rdma.c
+++ b/net/sunrpc/xprtrdma/rpc_rdma.c
@@ -1268,24 +1268,15 @@ void rpcrdma_complete_rqst(struct rpcrdma_rep *rep)
goto out;
}

-void rpcrdma_release_rqst(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req)
+/* Ensure that any DMA mapped pages associated with
+ * the Send of the RPC Call have been unmapped before
+ * allowing the RPC to complete. This protects argument
+ * memory not controlled by the RPC client from being
+ * re-used before we're done with it.
+ */
+static void rpcrdma_release_tx(struct rpcrdma_xprt *r_xprt,
+ struct rpcrdma_req *req)
{
- /* Invalidate and unmap the data payloads before waking
- * the waiting application. This guarantees the memory
- * regions are properly fenced from the server before the
- * application accesses the data. It also ensures proper
- * send flow control: waking the next RPC waits until this
- * RPC has relinquished all its Send Queue entries.
- */
- if (!list_empty(&req->rl_registered))
- frwr_unmap_sync(r_xprt, req);
-
- /* Ensure that any DMA mapped pages associated with
- * the Send of the RPC Call have been unmapped before
- * allowing the RPC to complete. This protects argument
- * memory not controlled by the RPC client from being
- * re-used before we're done with it.
- */
if (test_bit(RPCRDMA_REQ_F_TX_RESOURCES, &req->rl_flags)) {
r_xprt->rx_stats.reply_waits_for_send++;
out_of_line_wait_on_bit(&req->rl_flags,
@@ -1295,24 +1286,23 @@ void rpcrdma_release_rqst(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req)
}
}

-/* Reply handling runs in the poll worker thread. Anything that
- * might wait is deferred to a separate workqueue.
+/**
+ * rpcrdma_release_rqst - Release hardware resources
+ * @r_xprt: controlling transport instance
+ * @req: request with resources to release
+ *
*/
-void rpcrdma_deferred_completion(struct work_struct *work)
+void rpcrdma_release_rqst(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req)
{
- struct rpcrdma_rep *rep =
- container_of(work, struct rpcrdma_rep, rr_work);
- struct rpcrdma_req *req = rpcr_to_rdmar(rep->rr_rqst);
- struct rpcrdma_xprt *r_xprt = rep->rr_rxprt;
+ if (!list_empty(&req->rl_registered))
+ frwr_unmap_sync(r_xprt, req);

- trace_xprtrdma_defer_cmp(rep);
- if (rep->rr_wc_flags & IB_WC_WITH_INVALIDATE)
- frwr_reminv(rep, &req->rl_registered);
- rpcrdma_release_rqst(r_xprt, req);
- rpcrdma_complete_rqst(rep);
+ rpcrdma_release_tx(r_xprt, req);
}

-/* Process received RPC/RDMA messages.
+/**
+ * rpcrdma_reply_handler - Process received RPC/RDMA messages
+ * @rep: Incoming rpcrdma_rep object to process
*
* Errors must result in the RPC task either being awakened, or
* allowed to timeout, to discover the errors at that time.
@@ -1374,7 +1364,16 @@ void rpcrdma_reply_handler(struct rpcrdma_rep *rep)
rep->rr_rqst = rqst;

trace_xprtrdma_reply(rqst->rq_task, rep, req, credits);
- queue_work(buf->rb_completion_wq, &rep->rr_work);
+
+ if (rep->rr_wc_flags & IB_WC_WITH_INVALIDATE)
+ frwr_reminv(rep, &req->rl_registered);
+ if (!list_empty(&req->rl_registered)) {
+ frwr_unmap_async(r_xprt, req);
+ /* LocalInv completion will complete the RPC */
+ } else {
+ rpcrdma_release_tx(r_xprt, req);
+ rpcrdma_complete_rqst(rep);
+ }
return;

out_badversion:
diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index 78be985..0be455b 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -89,14 +89,12 @@
*/
static void rpcrdma_xprt_drain(struct rpcrdma_xprt *r_xprt)
{
- struct rpcrdma_buffer *buf = &r_xprt->rx_buf;
struct rpcrdma_ia *ia = &r_xprt->rx_ia;

/* Flush Receives, then wait for deferred Reply work
* to complete.
*/
ib_drain_rq(ia->ri_id->qp);
- drain_workqueue(buf->rb_completion_wq);

/* Deferred Reply processing might have scheduled
* local invalidations.
@@ -1057,7 +1055,6 @@ static bool rpcrdma_rep_create(struct rpcrdma_xprt *r_xprt, bool temp)

rep->rr_cqe.done = rpcrdma_wc_receive;
rep->rr_rxprt = r_xprt;
- INIT_WORK(&rep->rr_work, rpcrdma_deferred_completion);
rep->rr_recv_wr.next = NULL;
rep->rr_recv_wr.wr_cqe = &rep->rr_cqe;
rep->rr_recv_wr.sg_list = &rep->rr_rdmabuf->rg_iov;
@@ -1118,15 +1115,6 @@ int rpcrdma_buffer_create(struct rpcrdma_xprt *r_xprt)
if (rc)
goto out;

- buf->rb_completion_wq = alloc_workqueue("rpcrdma-%s",
- WQ_MEM_RECLAIM | WQ_HIGHPRI,
- 0,
- r_xprt->rx_xprt.address_strings[RPC_DISPLAY_ADDR]);
- if (!buf->rb_completion_wq) {
- rc = -ENOMEM;
- goto out;
- }
-
return 0;
out:
rpcrdma_buffer_destroy(buf);
@@ -1200,11 +1188,6 @@ static void rpcrdma_rep_destroy(struct rpcrdma_rep *rep)
{
cancel_delayed_work_sync(&buf->rb_refresh_worker);

- if (buf->rb_completion_wq) {
- destroy_workqueue(buf->rb_completion_wq);
- buf->rb_completion_wq = NULL;
- }
-
rpcrdma_sendctxs_destroy(buf);

while (!list_empty(&buf->rb_recv_bufs)) {
diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
index a396528..e465221 100644
--- a/net/sunrpc/xprtrdma/xprt_rdma.h
+++ b/net/sunrpc/xprtrdma/xprt_rdma.h
@@ -202,10 +202,9 @@ struct rpcrdma_rep {
bool rr_temp;
struct rpcrdma_regbuf *rr_rdmabuf;
struct rpcrdma_xprt *rr_rxprt;
- struct work_struct rr_work;
+ struct rpc_rqst *rr_rqst;
struct xdr_buf rr_hdrbuf;
struct xdr_stream rr_stream;
- struct rpc_rqst *rr_rqst;
struct list_head rr_list;
struct ib_recv_wr rr_recv_wr;
};
@@ -240,10 +239,12 @@ struct rpcrdma_sendctx {
* An external memory region is any buffer or page that is registered
* on the fly (ie, not pre-registered).
*/
+struct rpcrdma_req;
struct rpcrdma_frwr {
struct ib_mr *fr_mr;
struct ib_cqe fr_cqe;
struct completion fr_linv_done;
+ struct rpcrdma_req *fr_req;
union {
struct ib_reg_wr fr_regwr;
struct ib_send_wr fr_invwr;
@@ -388,7 +389,6 @@ struct rpcrdma_buffer {
u32 rb_bc_srv_max_requests;
u32 rb_bc_max_requests;

- struct workqueue_struct *rb_completion_wq;
struct delayed_work rb_refresh_worker;
};

@@ -561,6 +561,7 @@ struct rpcrdma_mr_seg *frwr_map(struct rpcrdma_xprt *r_xprt,
int frwr_send(struct rpcrdma_ia *ia, struct rpcrdma_req *req);
void frwr_reminv(struct rpcrdma_rep *rep, struct list_head *mrs);
void frwr_unmap_sync(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req);
+void frwr_unmap_async(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req);

/*
* RPC/RDMA protocol calls - xprtrdma/rpc_rdma.c
@@ -585,7 +586,6 @@ int rpcrdma_prepare_send_sges(struct rpcrdma_xprt *r_xprt,
void rpcrdma_reply_handler(struct rpcrdma_rep *rep);
void rpcrdma_release_rqst(struct rpcrdma_xprt *r_xprt,
struct rpcrdma_req *req);
-void rpcrdma_deferred_completion(struct work_struct *work);

static inline void rpcrdma_set_xdrlen(struct xdr_buf *xdr, size_t len)
{

2019-05-28 18:22:14

by Chuck Lever

Subject: [PATCH RFC 08/12] xprtrdma: Wake RPCs directly in rpcrdma_wc_send path

Eliminate a context switch in the path that handles RPC wake-ups
when a Receive completion has to wait for a Send completion.
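
The replacement mechanism is the standard kref pattern: the Reply
path holds one reference, an extra reference is held while Send
SGEs remain DMA-mapped, and whichever side drops the last reference
completes the RPC. A generic sketch of that shape (hypothetical
demo_* names, not the xprtrdma symbols):

#include <linux/kernel.h>
#include <linux/kref.h>

struct demo_req {
	struct kref ref;
	/* ... request state ... */
};

static void demo_complete_rpc(struct kref *kref)
{
	struct demo_req *req = container_of(kref, struct demo_req, ref);

	/* Wake the waiting RPC. This runs in whichever context
	 * dropped the final reference: the Receive path or the
	 * Send completion.
	 */
	pr_debug("completing req %p\n", req);
}

static void demo_marshal(struct demo_req *req, bool extra_sges_mapped)
{
	kref_init(&req->ref);		/* dropped by the Reply path */
	if (extra_sges_mapped)
		kref_get(&req->ref);	/* dropped after Send unmaps */
}

static void demo_send_completion(struct demo_req *req)
{
	/* ... DMA-unmap the extra Send SGEs ... */
	kref_put(&req->ref, demo_complete_rpc);
}

static void demo_reply_handled(struct demo_req *req)
{
	kref_put(&req->ref, demo_complete_rpc);
}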

Signed-off-by: Chuck Lever <[email protected]>
---
net/sunrpc/xprtrdma/rpc_rdma.c | 61 +++++++++++++++------------------------
net/sunrpc/xprtrdma/transport.c | 9 +++++-
net/sunrpc/xprtrdma/verbs.c | 3 +-
net/sunrpc/xprtrdma/xprt_rdma.h | 12 ++------
4 files changed, 35 insertions(+), 50 deletions(-)

diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
index ea39f74..6de90d4 100644
--- a/net/sunrpc/xprtrdma/rpc_rdma.c
+++ b/net/sunrpc/xprtrdma/rpc_rdma.c
@@ -511,6 +511,16 @@ static bool rpcrdma_results_inline(struct rpcrdma_xprt *r_xprt,
return 0;
}

+static void rpcrdma_sendctx_done(struct kref *kref)
+{
+ struct rpcrdma_req *req =
+ container_of(kref, struct rpcrdma_req, rl_kref);
+ struct rpcrdma_rep *rep = req->rl_reply;
+
+ rpcrdma_complete_rqst(rep);
+ rep->rr_rxprt->rx_stats.reply_waits_for_send++;
+}
+
/**
* rpcrdma_sendctx_unmap - DMA-unmap Send buffer
* @sc: sendctx containing SGEs to unmap
@@ -520,6 +530,9 @@ void rpcrdma_sendctx_unmap(struct rpcrdma_sendctx *sc)
{
struct ib_sge *sge;

+ if (!sc->sc_unmap_count)
+ return;
+
/* The first two SGEs contain the transport header and
* the inline buffer. These are always left mapped so
* they can be cheaply re-used.
@@ -529,9 +542,7 @@ void rpcrdma_sendctx_unmap(struct rpcrdma_sendctx *sc)
ib_dma_unmap_page(sc->sc_device, sge->addr, sge->length,
DMA_TO_DEVICE);

- if (test_and_clear_bit(RPCRDMA_REQ_F_TX_RESOURCES,
- &sc->sc_req->rl_flags))
- wake_up_bit(&sc->sc_req->rl_flags, RPCRDMA_REQ_F_TX_RESOURCES);
+ kref_put(&sc->sc_req->rl_kref, rpcrdma_sendctx_done);
}

/* Prepare an SGE for the RPC-over-RDMA transport header.
@@ -666,7 +677,7 @@ static bool rpcrdma_prepare_msg_sges(struct rpcrdma_xprt *r_xprt,
out:
sc->sc_wr.num_sge += sge_no;
if (sc->sc_unmap_count)
- __set_bit(RPCRDMA_REQ_F_TX_RESOURCES, &req->rl_flags);
+ kref_get(&req->rl_kref);
return true;

out_regbuf:
@@ -708,7 +719,7 @@ static bool rpcrdma_prepare_msg_sges(struct rpcrdma_xprt *r_xprt,
req->rl_sendctx->sc_wr.num_sge = 0;
req->rl_sendctx->sc_unmap_count = 0;
req->rl_sendctx->sc_req = req;
- __clear_bit(RPCRDMA_REQ_F_TX_RESOURCES, &req->rl_flags);
+ kref_init(&req->rl_kref);

ret = -EIO;
if (!rpcrdma_prepare_hdr_sge(r_xprt, req, hdrlen))
@@ -1268,36 +1279,12 @@ void rpcrdma_complete_rqst(struct rpcrdma_rep *rep)
goto out;
}

-/* Ensure that any DMA mapped pages associated with
- * the Send of the RPC Call have been unmapped before
- * allowing the RPC to complete. This protects argument
- * memory not controlled by the RPC client from being
- * re-used before we're done with it.
- */
-static void rpcrdma_release_tx(struct rpcrdma_xprt *r_xprt,
- struct rpcrdma_req *req)
+static void rpcrdma_reply_done(struct kref *kref)
{
- if (test_bit(RPCRDMA_REQ_F_TX_RESOURCES, &req->rl_flags)) {
- r_xprt->rx_stats.reply_waits_for_send++;
- out_of_line_wait_on_bit(&req->rl_flags,
- RPCRDMA_REQ_F_TX_RESOURCES,
- bit_wait,
- TASK_UNINTERRUPTIBLE);
- }
-}
+ struct rpcrdma_req *req =
+ container_of(kref, struct rpcrdma_req, rl_kref);

-/**
- * rpcrdma_release_rqst - Release hardware resources
- * @r_xprt: controlling transport instance
- * @req: request with resources to release
- *
- */
-void rpcrdma_release_rqst(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req)
-{
- if (!list_empty(&req->rl_registered))
- frwr_unmap_sync(r_xprt, req);
-
- rpcrdma_release_tx(r_xprt, req);
+ rpcrdma_complete_rqst(req->rl_reply);
}

/**
@@ -1367,13 +1354,11 @@ void rpcrdma_reply_handler(struct rpcrdma_rep *rep)

if (rep->rr_wc_flags & IB_WC_WITH_INVALIDATE)
frwr_reminv(rep, &req->rl_registered);
- if (!list_empty(&req->rl_registered)) {
+ if (!list_empty(&req->rl_registered))
frwr_unmap_async(r_xprt, req);
/* LocalInv completion will complete the RPC */
- } else {
- rpcrdma_release_tx(r_xprt, req);
- rpcrdma_complete_rqst(rep);
- }
+ else
+ kref_put(&req->rl_kref, rpcrdma_reply_done);
return;

out_badversion:
diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
index f84375d..5b1d5d7 100644
--- a/net/sunrpc/xprtrdma/transport.c
+++ b/net/sunrpc/xprtrdma/transport.c
@@ -618,8 +618,15 @@ static bool rpcrdma_check_regbuf(struct rpcrdma_xprt *r_xprt,
struct rpcrdma_xprt *r_xprt = rpcx_to_rdmax(rqst->rq_xprt);
struct rpcrdma_req *req = rpcr_to_rdmar(rqst);

- rpcrdma_release_rqst(r_xprt, req);
trace_xprtrdma_op_free(task, req);
+
+ if (!list_empty(&req->rl_registered))
+ frwr_unmap_sync(r_xprt, req);
+
+ /* XXX: We should wait for the Send completion here,
+ * although it's very likely it's already fired in
+ * this case.
+ */
}

/**
diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index 0be455b..729266e 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -1463,8 +1463,7 @@ static void rpcrdma_regbuf_free(struct rpcrdma_regbuf *rb)
struct ib_send_wr *send_wr = &req->rl_sendctx->sc_wr;
int rc;

- if (!ep->rep_send_count ||
- test_bit(RPCRDMA_REQ_F_TX_RESOURCES, &req->rl_flags)) {
+ if (!ep->rep_send_count || kref_read(&req->rl_kref) > 1) {
send_wr->send_flags |= IB_SEND_SIGNALED;
ep->rep_send_count = ep->rep_send_batch;
} else {
diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
index e465221..5475f0d 100644
--- a/net/sunrpc/xprtrdma/xprt_rdma.h
+++ b/net/sunrpc/xprtrdma/xprt_rdma.h
@@ -44,7 +44,8 @@

#include <linux/wait.h> /* wait_queue_head_t, etc */
#include <linux/spinlock.h> /* spinlock_t, etc */
-#include <linux/atomic.h> /* atomic_t, etc */
+#include <linux/atomic.h> /* atomic_t, etc */
+#include <linux/kref.h> /* struct kref */
#include <linux/workqueue.h> /* struct work_struct */

#include <rdma/rdma_cm.h> /* RDMA connection api */
@@ -329,17 +330,12 @@ struct rpcrdma_req {
struct rpcrdma_regbuf *rl_recvbuf; /* rq_rcv_buf */

struct list_head rl_all;
- unsigned long rl_flags;
+ struct kref rl_kref;

struct list_head rl_registered; /* registered segments */
struct rpcrdma_mr_seg rl_segments[RPCRDMA_MAX_SEGS];
};

-/* rl_flags */
-enum {
- RPCRDMA_REQ_F_TX_RESOURCES,
-};
-
static inline struct rpcrdma_req *
rpcr_to_rdmar(const struct rpc_rqst *rqst)
{
@@ -584,8 +580,6 @@ int rpcrdma_prepare_send_sges(struct rpcrdma_xprt *r_xprt,
void rpcrdma_set_max_header_sizes(struct rpcrdma_xprt *);
void rpcrdma_complete_rqst(struct rpcrdma_rep *rep);
void rpcrdma_reply_handler(struct rpcrdma_rep *rep);
-void rpcrdma_release_rqst(struct rpcrdma_xprt *r_xprt,
- struct rpcrdma_req *req);

static inline void rpcrdma_set_xdrlen(struct xdr_buf *xdr, size_t len)
{

2019-05-28 18:22:48

by Chuck Lever

Subject: [PATCH RFC 09/12] xprtrdma: Simplify rpcrdma_rep_create

Clean up.

Commit 7c8d9e7c8863 ("xprtrdma: Move Receive posting to Receive
handler") reduced the number of rpcrdma_rep_create call sites to
one. After that commit, the backchannel code no longer invokes it.

Therefore the free list logic added by commit d698c4a02ee0
("xprtrdma: Fix backchannel allocation of extra rpcrdma_reps") is
no longer necessary, and in fact adds some extra overhead that we
can do without.

Simply post any newly created reps. They will get added back to
the rb_recv_bufs list when they subsequently complete.

Signed-off-by: Chuck Lever <[email protected]>
---
net/sunrpc/xprtrdma/verbs.c | 22 ++++++++--------------
1 file changed, 8 insertions(+), 14 deletions(-)

diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index 729266e..390c3bc 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -1037,9 +1037,9 @@ struct rpcrdma_req *rpcrdma_req_create(struct rpcrdma_xprt *r_xprt, size_t size,
return NULL;
}

-static bool rpcrdma_rep_create(struct rpcrdma_xprt *r_xprt, bool temp)
+static struct rpcrdma_rep *rpcrdma_rep_create(struct rpcrdma_xprt *r_xprt,
+ bool temp)
{
- struct rpcrdma_buffer *buf = &r_xprt->rx_buf;
struct rpcrdma_rep *rep;

rep = kzalloc(sizeof(*rep), GFP_KERNEL);
@@ -1050,9 +1050,9 @@ static bool rpcrdma_rep_create(struct rpcrdma_xprt *r_xprt, bool temp)
DMA_FROM_DEVICE, GFP_KERNEL);
if (!rep->rr_rdmabuf)
goto out_free;
+
xdr_buf_init(&rep->rr_hdrbuf, rdmab_data(rep->rr_rdmabuf),
rdmab_length(rep->rr_rdmabuf));
-
rep->rr_cqe.done = rpcrdma_wc_receive;
rep->rr_rxprt = r_xprt;
rep->rr_recv_wr.next = NULL;
@@ -1060,16 +1060,12 @@ static bool rpcrdma_rep_create(struct rpcrdma_xprt *r_xprt, bool temp)
rep->rr_recv_wr.sg_list = &rep->rr_rdmabuf->rg_iov;
rep->rr_recv_wr.num_sge = 1;
rep->rr_temp = temp;
-
- spin_lock(&buf->rb_lock);
- list_add(&rep->rr_list, &buf->rb_recv_bufs);
- spin_unlock(&buf->rb_lock);
- return true;
+ return rep;

out_free:
kfree(rep);
out:
- return false;
+ return NULL;
}

/**
@@ -1498,7 +1494,6 @@ static void rpcrdma_regbuf_free(struct rpcrdma_regbuf *rb)
count = 0;
wr = NULL;
while (needed) {
- struct rpcrdma_regbuf *rb;
struct rpcrdma_rep *rep;

spin_lock(&buf->rb_lock);
@@ -1508,13 +1503,12 @@ static void rpcrdma_regbuf_free(struct rpcrdma_regbuf *rb)
list_del(&rep->rr_list);
spin_unlock(&buf->rb_lock);
if (!rep) {
- if (!rpcrdma_rep_create(r_xprt, temp))
+ rep = rpcrdma_rep_create(r_xprt, temp);
+ if (!rep)
break;
- continue;
}

- rb = rep->rr_rdmabuf;
- if (!rpcrdma_regbuf_dma_map(r_xprt, rb)) {
+ if (!rpcrdma_regbuf_dma_map(r_xprt, rep->rr_rdmabuf)) {
rpcrdma_recv_buffer_put(rep);
break;
}

2019-05-28 18:22:51

by Chuck Lever

Subject: [PATCH RFC 10/12] xprtrdma: Streamline rpcrdma_post_recvs

rb_lock is contended between rpcrdma_buffer_create,
rpcrdma_buffer_put, and rpcrdma_post_recvs.

Commit e340c2d6ef2a ("xprtrdma: Reduce the doorbell rate (Receive)")
causes rpcrdma_post_recvs to take the rb_lock repeatedly when it
determines more Receives are needed. Streamline this code path so
it takes the lock just once in most cases to build the Receive
chain that is about to be posted.
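
The shape of the change is the usual "harvest under one lock, then
do the work outside it" pattern: grab everything available from the
free list in a single critical section, and do the DMA mapping and
posting with the lock dropped. A generic sketch with hypothetical
names, not the verbs.c code:

#include <linux/list.h>
#include <linux/spinlock.h>

struct demo_rep {
	struct list_head list;
};

/* Move up to @needed entries from the shared free list onto a
 * private list in a single critical section, instead of
 * re-acquiring the lock once per entry.
 */
static int demo_harvest(spinlock_t *lock, struct list_head *free_list,
			struct list_head *private, int needed)
{
	int count = 0;

	spin_lock(lock);
	while (count < needed && !list_empty(free_list)) {
		struct demo_rep *rep;

		rep = list_first_entry(free_list, struct demo_rep, list);
		list_move_tail(&rep->list, private);
		count++;
	}
	spin_unlock(lock);
	return count;
}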

Signed-off-by: Chuck Lever <[email protected]>
---
net/sunrpc/xprtrdma/verbs.c | 59 ++++++++++++++++++++++++++++---------------
1 file changed, 38 insertions(+), 21 deletions(-)

diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index 390c3bc..cc1fdca 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -1479,11 +1479,13 @@ static void rpcrdma_regbuf_free(struct rpcrdma_regbuf *rb)
{
struct rpcrdma_buffer *buf = &r_xprt->rx_buf;
struct rpcrdma_ep *ep = &r_xprt->rx_ep;
- struct ib_recv_wr *wr, *bad_wr;
+ struct ib_recv_wr *i, *wr, *bad_wr;
+ struct rpcrdma_rep *rep;
int needed, count, rc;

rc = 0;
count = 0;
+
needed = buf->rb_credits + (buf->rb_bc_srv_max_requests << 1);
if (ep->rep_receive_count > needed)
goto out;
@@ -1491,39 +1493,48 @@ static void rpcrdma_regbuf_free(struct rpcrdma_regbuf *rb)
if (!temp)
needed += RPCRDMA_MAX_RECV_BATCH;

- count = 0;
+ /* fast path: all needed reps can be found on the free list */
wr = NULL;
+ spin_lock(&buf->rb_lock);
while (needed) {
- struct rpcrdma_rep *rep;
-
- spin_lock(&buf->rb_lock);
rep = list_first_entry_or_null(&buf->rb_recv_bufs,
struct rpcrdma_rep, rr_list);
- if (likely(rep))
- list_del(&rep->rr_list);
- spin_unlock(&buf->rb_lock);
- if (!rep) {
- rep = rpcrdma_rep_create(r_xprt, temp);
- if (!rep)
- break;
- }
+ if (!rep)
+ break;

- if (!rpcrdma_regbuf_dma_map(r_xprt, rep->rr_rdmabuf)) {
- rpcrdma_recv_buffer_put(rep);
+ list_del(&rep->rr_list);
+ rep->rr_recv_wr.next = wr;
+ wr = &rep->rr_recv_wr;
+ --needed;
+ }
+ spin_unlock(&buf->rb_lock);
+
+ while (needed) {
+ rep = rpcrdma_rep_create(r_xprt, temp);
+ if (!rep)
break;
- }

- trace_xprtrdma_post_recv(rep->rr_recv_wr.wr_cqe);
rep->rr_recv_wr.next = wr;
wr = &rep->rr_recv_wr;
- ++count;
--needed;
}
- if (!count)
+ if (!wr)
goto out;

+ for (i = wr; i; i = i->next) {
+ rep = container_of(i, struct rpcrdma_rep, rr_recv_wr);
+
+ if (!rpcrdma_regbuf_dma_map(r_xprt, rep->rr_rdmabuf))
+ goto release_wrs;
+
+ trace_xprtrdma_post_recv(rep->rr_recv_wr.wr_cqe);
+ ++count;
+ }
+
rc = ib_post_recv(r_xprt->rx_ia.ri_id->qp, wr,
(const struct ib_recv_wr **)&bad_wr);
+out:
+ trace_xprtrdma_post_recvs(r_xprt, count, rc);
if (rc) {
for (wr = bad_wr; wr;) {
struct rpcrdma_rep *rep;
@@ -1535,6 +1546,12 @@ static void rpcrdma_regbuf_free(struct rpcrdma_regbuf *rb)
}
}
ep->rep_receive_count += count;
-out:
- trace_xprtrdma_post_recvs(r_xprt, count, rc);
+ return;
+
+release_wrs:
+ for (i = wr; i;) {
+ rep = container_of(i, struct rpcrdma_rep, rr_recv_wr);
+ i = i->next;
+ rpcrdma_recv_buffer_put(rep);
+ }
}

2019-05-28 18:24:18

by Chuck Lever

Subject: [PATCH RFC 12/12] xprtrdma: Remove rpcrdma_req::rl_buffer

Clean up.

There is only one remaining function that uses this field; its
caller can supply the correct rpcrdma_buffer, allowing the removal
of an 8-byte pointer field.

Signed-off-by: Chuck Lever <[email protected]>
---
net/sunrpc/xprtrdma/transport.c | 5 ++++-
net/sunrpc/xprtrdma/verbs.c | 6 ++----
net/sunrpc/xprtrdma/xprt_rdma.h | 4 ++--
3 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
index 5b1d5d7..cff090b 100644
--- a/net/sunrpc/xprtrdma/transport.c
+++ b/net/sunrpc/xprtrdma/transport.c
@@ -550,8 +550,11 @@ void xprt_rdma_close(struct rpc_xprt *xprt)
static void
xprt_rdma_free_slot(struct rpc_xprt *xprt, struct rpc_rqst *rqst)
{
+ struct rpcrdma_xprt *r_xprt =
+ container_of(xprt, struct rpcrdma_xprt, rx_xprt);
+
memset(rqst, 0, sizeof(*rqst));
- rpcrdma_buffer_put(rpcr_to_rdmar(rqst));
+ rpcrdma_buffer_put(&r_xprt->rx_buf, rpcr_to_rdmar(rqst));
rpc_wake_up_next(&xprt->backlog);
}

diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index cc1fdca..ea97b98 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -1020,7 +1020,6 @@ struct rpcrdma_req *rpcrdma_req_create(struct rpcrdma_xprt *r_xprt, size_t size,
if (!req->rl_recvbuf)
goto out4;

- req->rl_buffer = buffer;
INIT_LIST_HEAD(&req->rl_registered);
spin_lock(&buffer->rb_lock);
list_add(&req->rl_all, &buffer->rb_allreqs);
@@ -1300,13 +1299,12 @@ struct rpcrdma_req *

/**
* rpcrdma_buffer_put - Put request/reply buffers back into pool
+ * @buffers: buffer pool
* @req: object to return
*
*/
-void
-rpcrdma_buffer_put(struct rpcrdma_req *req)
+void rpcrdma_buffer_put(struct rpcrdma_buffer *buffers, struct rpcrdma_req *req)
{
- struct rpcrdma_buffer *buffers = req->rl_buffer;
struct rpcrdma_rep *rep = req->rl_reply;

req->rl_reply = NULL;
diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
index 5475f0d..117e328 100644
--- a/net/sunrpc/xprtrdma/xprt_rdma.h
+++ b/net/sunrpc/xprtrdma/xprt_rdma.h
@@ -320,7 +320,6 @@ enum {
struct rpcrdma_req {
struct list_head rl_list;
struct rpc_rqst rl_slot;
- struct rpcrdma_buffer *rl_buffer;
struct rpcrdma_rep *rl_reply;
struct xdr_stream rl_stream;
struct xdr_buf rl_hdrbuf;
@@ -499,7 +498,8 @@ struct rpcrdma_req *rpcrdma_req_create(struct rpcrdma_xprt *r_xprt, size_t size,
}

struct rpcrdma_req *rpcrdma_buffer_get(struct rpcrdma_buffer *);
-void rpcrdma_buffer_put(struct rpcrdma_req *);
+void rpcrdma_buffer_put(struct rpcrdma_buffer *buffers,
+ struct rpcrdma_req *req);
void rpcrdma_recv_buffer_put(struct rpcrdma_rep *);

bool rpcrdma_regbuf_realloc(struct rpcrdma_regbuf *rb, size_t size,

2019-05-28 18:24:24

by Chuck Lever

[permalink] [raw]
Subject: [PATCH RFC 11/12] xprtrdma: Refactor chunk encoding

Clean up.

Move the "not present" case into the individual chunk encoders. This
improves code organization and readability.

The reason for the original organization was to optimize for the
case where there are no chunks. That optimization turned out to be
inconsequential.

Signed-off-by: Chuck Lever <[email protected]>
---
net/sunrpc/xprtrdma/rpc_rdma.c | 36 ++++++++++++++++--------------------
1 file changed, 16 insertions(+), 20 deletions(-)

diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
index 6de90d4..310614f 100644
--- a/net/sunrpc/xprtrdma/rpc_rdma.c
+++ b/net/sunrpc/xprtrdma/rpc_rdma.c
@@ -366,6 +366,9 @@ static bool rpcrdma_results_inline(struct rpcrdma_xprt *r_xprt,
unsigned int pos;
int nsegs;

+ if (rtype == rpcrdma_noch)
+ goto done;
+
pos = rqst->rq_snd_buf.head[0].iov_len;
if (rtype == rpcrdma_areadch)
pos = 0;
@@ -389,7 +392,8 @@ static bool rpcrdma_results_inline(struct rpcrdma_xprt *r_xprt,
nsegs -= mr->mr_nents;
} while (nsegs);

- return 0;
+done:
+ return encode_item_not_present(xdr);
}

/* Register and XDR encode the Write list. Supports encoding a list
@@ -417,6 +421,9 @@ static bool rpcrdma_results_inline(struct rpcrdma_xprt *r_xprt,
int nsegs, nchunks;
__be32 *segcount;

+ if (wtype != rpcrdma_writech)
+ goto done;
+
seg = req->rl_segments;
nsegs = rpcrdma_convert_iovs(r_xprt, &rqst->rq_rcv_buf,
rqst->rq_rcv_buf.head[0].iov_len,
@@ -451,7 +458,8 @@ static bool rpcrdma_results_inline(struct rpcrdma_xprt *r_xprt,
/* Update count of segments in this Write chunk */
*segcount = cpu_to_be32(nchunks);

- return 0;
+done:
+ return encode_item_not_present(xdr);
}

/* Register and XDR encode the Reply chunk. Supports encoding an array
@@ -476,6 +484,9 @@ static bool rpcrdma_results_inline(struct rpcrdma_xprt *r_xprt,
int nsegs, nchunks;
__be32 *segcount;

+ if (wtype != rpcrdma_replych)
+ return encode_item_not_present(xdr);
+
seg = req->rl_segments;
nsegs = rpcrdma_convert_iovs(r_xprt, &rqst->rq_rcv_buf, 0, wtype, seg);
if (nsegs < 0)
@@ -859,28 +870,13 @@ static bool rpcrdma_prepare_msg_sges(struct rpcrdma_xprt *r_xprt,
* send a Call message with a Position Zero Read chunk and a
* regular Read chunk at the same time.
*/
- if (rtype != rpcrdma_noch) {
- ret = rpcrdma_encode_read_list(r_xprt, req, rqst, rtype);
- if (ret)
- goto out_err;
- }
- ret = encode_item_not_present(xdr);
+ ret = rpcrdma_encode_read_list(r_xprt, req, rqst, rtype);
if (ret)
goto out_err;
-
- if (wtype == rpcrdma_writech) {
- ret = rpcrdma_encode_write_list(r_xprt, req, rqst, wtype);
- if (ret)
- goto out_err;
- }
- ret = encode_item_not_present(xdr);
+ ret = rpcrdma_encode_write_list(r_xprt, req, rqst, wtype);
if (ret)
goto out_err;
-
- if (wtype != rpcrdma_replych)
- ret = encode_item_not_present(xdr);
- else
- ret = rpcrdma_encode_reply_chunk(r_xprt, req, rqst, wtype);
+ ret = rpcrdma_encode_reply_chunk(r_xprt, req, rqst, wtype);
if (ret)
goto out_err;


2019-05-29 06:41:24

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH RFC 00/12] for-5.3 NFS/RDMA patches for review

On Tue, May 28, 2019 at 02:20:50PM -0400, Chuck Lever wrote:
> This is a series of fixes and architectural changes that should
> improve robustness and result in better scalability of NFS/RDMA.
> I'm sure one or two of these could be broken down a little more,
> comments welcome.

Just curious, do you have any performance numbers?

2019-05-29 14:36:57

by Chuck Lever

[permalink] [raw]
Subject: Re: [PATCH RFC 00/12] for-5.3 NFS/RDMA patches for review



> On May 29, 2019, at 2:40 AM, Christoph Hellwig <[email protected]> wrote:
>
> On Tue, May 28, 2019 at 02:20:50PM -0400, Chuck Lever wrote:
>> This is a series of fixes and architectural changes that should
>> improve robustness and result in better scalability of NFS/RDMA.
>> I'm sure one or two of these could be broken down a little more,
>> comments welcome.
>
> Just curious, do you have any performance numbers?

To watch for performance regressions and improvements, I regularly run several
variations of iozone, fio 70/30 mix, and multi-threaded software builds. I did
not note any change in throughput after applying these patches.

I'm unsure how to measure context switch rate precisely during these tests.
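One rough, system-wide way to watch it (only a sketch, and not specific to the
xprtrdma workqueues) is to sample the "ctxt" counter in /proc/stat across the
test interval; perf stat -e context-switches -a over the run reports a similar
number. For example:

/* Sketch: sample the system-wide context-switch counter in /proc/stat
 * before and after an interval and print the average rate. This counts
 * every context switch on the host, not just the RPC workqueues.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static unsigned long long read_ctxt(void)
{
	FILE *f = fopen("/proc/stat", "r");
	unsigned long long ctxt = 0;
	char line[256];

	if (!f)
		return 0;
	while (fgets(line, sizeof(line), f))
		if (sscanf(line, "ctxt %llu", &ctxt) == 1)
			break;
	fclose(f);
	return ctxt;
}

int main(int argc, char **argv)
{
	unsigned int secs = argc > 1 ? atoi(argv[1]) : 10;
	unsigned long long before;

	if (!secs)
		secs = 10;
	before = read_ctxt();
	sleep(secs);
	printf("context switches/sec: %llu\n",
	       (read_ctxt() - before) / secs);
	return 0;
}

Running something like that alongside the fio job before and after the series
would at least show whether the reduced workqueue rescheduling is visible in
the system-wide numbers.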

This is a typical result for an fio 8KB random 70/30 mix over FDR InfiniBand
on a NUMA client. Not impressive compared to NVMe, I know, but much better
than NFS/TCP. On a single-socket client, the IOPS numbers more than double.

Jobs: 12 (f=12): [m(12)][100.0%][r=897MiB/s,w=386MiB/s][r=115k,w=49.5k IOPS][eta 00m:00s]
8k7030test: (groupid=0, jobs=12): err= 0: pid=2107: Fri May 24 15:22:38 2019
read: IOPS=115k, BW=897MiB/s (941MB/s)(8603MiB/9588msec)
slat (usec): min=2, max=6203, avg= 7.02, stdev=27.49
clat (usec): min=33, max=13553, avg=1131.12, stdev=536.34
lat (usec): min=47, max=13557, avg=1138.37, stdev=537.11
clat percentiles (usec):
| 1.00th=[ 338], 5.00th=[ 515], 10.00th=[ 619], 20.00th=[ 750],
| 30.00th=[ 857], 40.00th=[ 955], 50.00th=[ 1057], 60.00th=[ 1156],
| 70.00th=[ 1270], 80.00th=[ 1434], 90.00th=[ 1696], 95.00th=[ 1926],
| 99.00th=[ 2966], 99.50th=[ 3785], 99.90th=[ 5866], 99.95th=[ 6652],
| 99.99th=[ 8586]
bw ( KiB/s): min=64624, max=82800, per=8.34%, avg=76631.87, stdev=2877.97, samples=227
iops : min= 8078, max=10350, avg=9578.91, stdev=359.76, samples=227
write: IOPS=49.2k, BW=384MiB/s (403MB/s)(3685MiB/9588msec)
slat (usec): min=3, max=7226, avg= 7.54, stdev=29.53
clat (usec): min=64, max=14828, avg=1210.36, stdev=584.82
lat (usec): min=78, max=14834, avg=1218.11, stdev=585.77
clat percentiles (usec):
| 1.00th=[ 359], 5.00th=[ 545], 10.00th=[ 652], 20.00th=[ 791],
| 30.00th=[ 906], 40.00th=[ 1004], 50.00th=[ 1106], 60.00th=[ 1221],
| 70.00th=[ 1369], 80.00th=[ 1549], 90.00th=[ 1844], 95.00th=[ 2147],
| 99.00th=[ 3163], 99.50th=[ 4015], 99.90th=[ 6194], 99.95th=[ 7308],
| 99.99th=[ 9372]
bw ( KiB/s): min=27520, max=36128, per=8.34%, avg=32816.45, stdev=1323.08, samples=227
iops : min= 3440, max= 4516, avg=4101.97, stdev=165.38, samples=227
lat (usec) : 50=0.01%, 100=0.01%, 250=0.31%, 500=3.91%, 750=14.66%
lat (usec) : 1000=24.41%
lat (msec) : 2=51.69%, 4=4.57%, 10=0.44%, 20=0.01%
cpu : usr=3.24%, sys=8.11%, ctx=786935, majf=0, minf=117
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwt: total=1101195,471669,0, short=0,0,0, dropped=0,0,0
latency : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
READ: bw=897MiB/s (941MB/s), 897MiB/s-897MiB/s (941MB/s-941MB/s), io=8603MiB (9021MB), run=9588-9588msec
WRITE: bw=384MiB/s (403MB/s), 384MiB/s-384MiB/s (403MB/s-403MB/s), io=3685MiB (3864MB), run=9588-9588msec


--
Chuck Lever



2019-05-30 14:09:07

by Anna Schumaker

[permalink] [raw]
Subject: Re: [PATCH RFC 05/12] xprtrdma: Remove fr_state

Hi Chuck,

On Tue, 2019-05-28 at 14:21 -0400, Chuck Lever wrote:
> Since both the Send and Receive completion queues are processed in
> a workqueue context, it should be safe to DMA unmap and return MRs
> to the free or recycle lists directly in the completion handlers.
>
> Doing this means rpcrdma_frwr no longer needs to track the state
> of each MR... a VALID or FLUSHED MR can no longer appear on an
> xprt's MR free list.
>
> Signed-off-by: Chuck Lever <[email protected]>
> ---
> include/trace/events/rpcrdma.h | 19 ----
> net/sunrpc/xprtrdma/frwr_ops.c | 202 ++++++++++++++++++------------------
> ---
> net/sunrpc/xprtrdma/rpc_rdma.c | 2
> net/sunrpc/xprtrdma/xprt_rdma.h | 11 --
> 4 files changed, 95 insertions(+), 139 deletions(-)
>
> diff --git a/include/trace/events/rpcrdma.h b/include/trace/events/rpcrdma.h
> index a4ab3a2..7fe21ba 100644
> --- a/include/trace/events/rpcrdma.h
> +++ b/include/trace/events/rpcrdma.h
> @@ -181,18 +181,6 @@
> ), \
> TP_ARGS(task, mr, nsegs))
>
> -TRACE_DEFINE_ENUM(FRWR_IS_INVALID);
> -TRACE_DEFINE_ENUM(FRWR_IS_VALID);
> -TRACE_DEFINE_ENUM(FRWR_FLUSHED_FR);
> -TRACE_DEFINE_ENUM(FRWR_FLUSHED_LI);
> -
> -#define xprtrdma_show_frwr_state(x) \
> - __print_symbolic(x, \
> - { FRWR_IS_INVALID, "INVALID" }, \
> - { FRWR_IS_VALID, "VALID" }, \
> - { FRWR_FLUSHED_FR, "FLUSHED_FR" }, \
> - { FRWR_FLUSHED_LI, "FLUSHED_LI" })
> -
> DECLARE_EVENT_CLASS(xprtrdma_frwr_done,
> TP_PROTO(
> const struct ib_wc *wc,
> @@ -203,22 +191,19 @@
>
> TP_STRUCT__entry(
> __field(const void *, mr)
> - __field(unsigned int, state)
> __field(unsigned int, status)
> __field(unsigned int, vendor_err)
> ),
>
> TP_fast_assign(
> __entry->mr = container_of(frwr, struct rpcrdma_mr, frwr);
> - __entry->state = frwr->fr_state;
> __entry->status = wc->status;
> __entry->vendor_err = __entry->status ? wc->vendor_err : 0;
> ),
>
> TP_printk(
> - "mr=%p state=%s: %s (%u/0x%x)",
> - __entry->mr, xprtrdma_show_frwr_state(__entry->state),
> - rdma_show_wc_status(__entry->status),
> + "mr=%p: %s (%u/0x%x)",
> + __entry->mr, rdma_show_wc_status(__entry->status),
> __entry->status, __entry->vendor_err
> )
> );
> diff --git a/net/sunrpc/xprtrdma/frwr_ops.c b/net/sunrpc/xprtrdma/frwr_ops.c
> index ac47314..99871fbf 100644
> --- a/net/sunrpc/xprtrdma/frwr_ops.c
> +++ b/net/sunrpc/xprtrdma/frwr_ops.c
> @@ -168,7 +168,6 @@ int frwr_init_mr(struct rpcrdma_ia *ia, struct rpcrdma_mr
> *mr)
> goto out_list_err;
>
> mr->frwr.fr_mr = frmr;
> - mr->frwr.fr_state = FRWR_IS_INVALID;
> mr->mr_dir = DMA_NONE;
> INIT_LIST_HEAD(&mr->mr_list);
> INIT_WORK(&mr->mr_recycle, frwr_mr_recycle_worker);
> @@ -298,65 +297,6 @@ size_t frwr_maxpages(struct rpcrdma_xprt *r_xprt)
> }
>
> /**
> - * frwr_wc_fastreg - Invoked by RDMA provider for a flushed FastReg WC
> - * @cq: completion queue (ignored)
> - * @wc: completed WR
> - *
> - */
> -static void
> -frwr_wc_fastreg(struct ib_cq *cq, struct ib_wc *wc)
> -{
> - struct ib_cqe *cqe = wc->wr_cqe;
> - struct rpcrdma_frwr *frwr =
> - container_of(cqe, struct rpcrdma_frwr, fr_cqe);
> -
> - /* WARNING: Only wr_cqe and status are reliable at this point */
> - if (wc->status != IB_WC_SUCCESS)
> - frwr->fr_state = FRWR_FLUSHED_FR;
> - trace_xprtrdma_wc_fastreg(wc, frwr);
> -}
> -
> -/**
> - * frwr_wc_localinv - Invoked by RDMA provider for a flushed LocalInv WC
> - * @cq: completion queue (ignored)
> - * @wc: completed WR
> - *
> - */
> -static void
> -frwr_wc_localinv(struct ib_cq *cq, struct ib_wc *wc)
> -{
> - struct ib_cqe *cqe = wc->wr_cqe;
> - struct rpcrdma_frwr *frwr = container_of(cqe, struct rpcrdma_frwr,
> - fr_cqe);
> -
> - /* WARNING: Only wr_cqe and status are reliable at this point */
> - if (wc->status != IB_WC_SUCCESS)
> - frwr->fr_state = FRWR_FLUSHED_LI;
> - trace_xprtrdma_wc_li(wc, frwr);
> -}
> -
> -/**
> - * frwr_wc_localinv_wake - Invoked by RDMA provider for a signaled LocalInv
> WC
> - * @cq: completion queue (ignored)
> - * @wc: completed WR
> - *
> - * Awaken anyone waiting for an MR to finish being fenced.
> - */
> -static void
> -frwr_wc_localinv_wake(struct ib_cq *cq, struct ib_wc *wc)
> -{
> - struct ib_cqe *cqe = wc->wr_cqe;
> - struct rpcrdma_frwr *frwr = container_of(cqe, struct rpcrdma_frwr,
> - fr_cqe);
> -
> - /* WARNING: Only wr_cqe and status are reliable at this point */
> - if (wc->status != IB_WC_SUCCESS)
> - frwr->fr_state = FRWR_FLUSHED_LI;
> - trace_xprtrdma_wc_li_wake(wc, frwr);
> - complete(&frwr->fr_linv_done);
> -}
> -
> -/**
> * frwr_map - Register a memory region
> * @r_xprt: controlling transport
> * @seg: memory region co-ordinates
> @@ -378,23 +318,15 @@ struct rpcrdma_mr_seg *frwr_map(struct rpcrdma_xprt
> *r_xprt,
> {
> struct rpcrdma_ia *ia = &r_xprt->rx_ia;
> bool holes_ok = ia->ri_mrtype == IB_MR_TYPE_SG_GAPS;
> - struct rpcrdma_frwr *frwr;
> struct rpcrdma_mr *mr;
> struct ib_mr *ibmr;
> struct ib_reg_wr *reg_wr;
> int i, n;
> u8 key;
>
> - mr = NULL;
> - do {
> - if (mr)
> - rpcrdma_mr_recycle(mr);
> - mr = rpcrdma_mr_get(r_xprt);
> - if (!mr)
> - goto out_getmr_err;
> - } while (mr->frwr.fr_state != FRWR_IS_INVALID);
> - frwr = &mr->frwr;
> - frwr->fr_state = FRWR_IS_VALID;
> + mr = rpcrdma_mr_get(r_xprt);
> + if (!mr)
> + goto out_getmr_err;
>
> if (nsegs > ia->ri_max_frwr_depth)
> nsegs = ia->ri_max_frwr_depth;
> @@ -423,7 +355,7 @@ struct rpcrdma_mr_seg *frwr_map(struct rpcrdma_xprt
> *r_xprt,
> if (!mr->mr_nents)
> goto out_dmamap_err;
>
> - ibmr = frwr->fr_mr;
> + ibmr = mr->frwr.fr_mr;
> n = ib_map_mr_sg(ibmr, mr->mr_sg, mr->mr_nents, NULL, PAGE_SIZE);
> if (unlikely(n != mr->mr_nents))
> goto out_mapmr_err;
> @@ -433,7 +365,7 @@ struct rpcrdma_mr_seg *frwr_map(struct rpcrdma_xprt
> *r_xprt,
> key = (u8)(ibmr->rkey & 0x000000FF);
> ib_update_fast_reg_key(ibmr, ++key);
>
> - reg_wr = &frwr->fr_regwr;
> + reg_wr = &mr->frwr.fr_regwr;
> reg_wr->mr = ibmr;
> reg_wr->key = ibmr->rkey;
> reg_wr->access = writing ?
> @@ -465,6 +397,23 @@ struct rpcrdma_mr_seg *frwr_map(struct rpcrdma_xprt
> *r_xprt,
> }
>
> /**
> + * frwr_wc_fastreg - Invoked by RDMA provider for a flushed FastReg WC
> + * @cq: completion queue (ignored)
> + * @wc: completed WR
> + *
> + */
> +static void frwr_wc_fastreg(struct ib_cq *cq, struct ib_wc *wc)
> +{
> + struct ib_cqe *cqe = wc->wr_cqe;
> + struct rpcrdma_frwr *frwr =
> + container_of(cqe, struct rpcrdma_frwr, fr_cqe);
> +
> + /* WARNING: Only wr_cqe and status are reliable at this point */
> + trace_xprtrdma_wc_fastreg(wc, frwr);
> + /* The MR will get recycled when the associated req is retransmitted */
> +}
> +
> +/**
> * frwr_send - post Send WR containing the RPC Call message
> * @ia: interface adapter
> * @req: Prepared RPC Call
> @@ -516,65 +465,104 @@ void frwr_reminv(struct rpcrdma_rep *rep, struct
> list_head *mrs)
> if (mr->mr_handle == rep->rr_inv_rkey) {
> list_del_init(&mr->mr_list);
> trace_xprtrdma_mr_remoteinv(mr);
> - mr->frwr.fr_state = FRWR_IS_INVALID;
> rpcrdma_mr_unmap_and_put(mr);
> break; /* only one invalidated MR per RPC */
> }
> }
>
> +static void __frwr_release_mr(struct ib_wc *wc, struct rpcrdma_mr *mr)
> +{
> + if (wc->status != IB_WC_SUCCESS)
> + rpcrdma_mr_recycle(mr);
> + else
> + rpcrdma_mr_unmap_and_put(mr);
> +}
> +
> /**
> - * frwr_unmap_sync - invalidate memory regions that were registered for @req
> - * @r_xprt: controlling transport
> - * @mrs: list of MRs to process
> + * frwr_wc_localinv - Invoked by RDMA provider for a LOCAL_INV WC
> + * @cq: completion queue (ignored)
> + * @wc: completed WR
> *
> - * Sleeps until it is safe for the host CPU to access the
> - * previously mapped memory regions.
> + */
> +static void frwr_wc_localinv(struct ib_cq *cq, struct ib_wc *wc)
> +{
> + struct ib_cqe *cqe = wc->wr_cqe;
> + struct rpcrdma_frwr *frwr =
> + container_of(cqe, struct rpcrdma_frwr, fr_cqe);
> + struct rpcrdma_mr *mr = container_of(frwr, struct rpcrdma_mr, frwr);
> +
> + /* WARNING: Only wr_cqe and status are reliable at this point */
> + __frwr_release_mr(wc, mr);
> + trace_xprtrdma_wc_li(wc, frwr);
> +}
> +
> +/**
> + * frwr_wc_localinv_wake - Invoked by RDMA provider for a LOCAL_INV WC
> + * @cq: completion queue (ignored)
> + * @wc: completed WR
> *
> - * Caller ensures that @mrs is not empty before the call. This
> - * function empties the list.
> + * Awaken anyone waiting for an MR to finish being fenced.
> */
> -void frwr_unmap_sync(struct rpcrdma_xprt *r_xprt, struct list_head *mrs)
> +static void frwr_wc_localinv_wake(struct ib_cq *cq, struct ib_wc *wc)
> +{
> + struct ib_cqe *cqe = wc->wr_cqe;
> + struct rpcrdma_frwr *frwr =
> + container_of(cqe, struct rpcrdma_frwr, fr_cqe);
> + struct rpcrdma_mr *mr = container_of(frwr, struct rpcrdma_mr, frwr);
> +
> + /* WARNING: Only wr_cqe and status are reliable at this point */
> + __frwr_release_mr(wc, mr);
> + trace_xprtrdma_wc_li_wake(wc, frwr);
> + complete(&frwr->fr_linv_done);
> +}
> +
> +/**
> + * frwr_unmap_sync - invalidate memory regions that were registered for @req
> + * @r_xprt: controlling transport instance
> + * @req: rpcrdma_req with a non-empty list of MRs to process
> + *
> + * Sleeps until it is safe for the host CPU to access the previously mapped
> + * memory regions.
> + */
> +void frwr_unmap_sync(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req)
> {
> struct ib_send_wr *first, **prev, *last;
> const struct ib_send_wr *bad_wr;
> - struct rpcrdma_ia *ia = &r_xprt->rx_ia;
> struct rpcrdma_frwr *frwr;
> struct rpcrdma_mr *mr;
> - int count, rc;
> + int rc;
>
> /* ORDER: Invalidate all of the MRs first
> *
> * Chain the LOCAL_INV Work Requests and post them with
> * a single ib_post_send() call.
> */
> - frwr = NULL;
> - count = 0;
> prev = &first;
> - list_for_each_entry(mr, mrs, mr_list) {
> - mr->frwr.fr_state = FRWR_IS_INVALID;
> + while (!list_empty(&req->rl_registered)) {

Is this list guaranteed to always start full? Because we could potentially use
frwr uninitialized a few lines down if it's not.

net/sunrpc/xprtrdma/frwr_ops.c: In function ‘frwr_unmap_sync’:
net/sunrpc/xprtrdma/frwr_ops.c:582:3: error: ‘frwr’ may be used uninitialized in
this function [-Werror=maybe-uninitialized]
wait_for_completion(&frwr->fr_linv_done);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Thanks,
Anna
> + mr = rpcrdma_mr_pop(&req->rl_registered);
>
> - frwr = &mr->frwr;
> trace_xprtrdma_mr_localinv(mr);
> + r_xprt->rx_stats.local_inv_needed++;
>
> + frwr = &mr->frwr;
> frwr->fr_cqe.done = frwr_wc_localinv;
> last = &frwr->fr_invwr;
> - memset(last, 0, sizeof(*last));
> + last->next = NULL;
> last->wr_cqe = &frwr->fr_cqe;
> + last->sg_list = NULL;
> + last->num_sge = 0;
> last->opcode = IB_WR_LOCAL_INV;
> + last->send_flags = IB_SEND_SIGNALED;
> last->ex.invalidate_rkey = mr->mr_handle;
> - count++;
>
> *prev = last;
> prev = &last->next;
> }
> - if (!frwr)
> - goto unmap;
>
> /* Strong send queue ordering guarantees that when the
> * last WR in the chain completes, all WRs in the chain
> * are complete.
> */
> - last->send_flags = IB_SEND_SIGNALED;
> frwr->fr_cqe.done = frwr_wc_localinv_wake;
> reinit_completion(&frwr->fr_linv_done);
>
> @@ -582,26 +570,18 @@ void frwr_unmap_sync(struct rpcrdma_xprt *r_xprt, struct
> list_head *mrs)
> * replaces the QP. The RPC reply handler won't call us
> * unless ri_id->qp is a valid pointer.
> */
> - r_xprt->rx_stats.local_inv_needed++;
> bad_wr = NULL;
> - rc = ib_post_send(ia->ri_id->qp, first, &bad_wr);
> - if (bad_wr != first)
> - wait_for_completion(&frwr->fr_linv_done);
> - if (rc)
> - goto out_release;
> + rc = ib_post_send(r_xprt->rx_ia.ri_id->qp, first, &bad_wr);
> + trace_xprtrdma_post_send(req, rc);
>
> - /* ORDER: Now DMA unmap all of the MRs, and return
> - * them to the free MR list.
> + /* The final LOCAL_INV WR in the chain is supposed to
> + * do the wake. If it never gets posted, the wake will
> + * not happen, so don't wait in that case.
> */
> -unmap:
> - while (!list_empty(mrs)) {
> - mr = rpcrdma_mr_pop(mrs);
> - rpcrdma_mr_unmap_and_put(mr);
> - }
> - return;
> -
> -out_release:
> - pr_err("rpcrdma: FRWR invalidate ib_post_send returned %i\n", rc);
> + if (bad_wr != first)
> + wait_for_completion(&frwr->fr_linv_done);
> + if (!rc)
> + return;
>
> /* Unmap and release the MRs in the LOCAL_INV WRs that did not
> * get posted.
> diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
> index 77fc1e4..6c049fd 100644
> --- a/net/sunrpc/xprtrdma/rpc_rdma.c
> +++ b/net/sunrpc/xprtrdma/rpc_rdma.c
> @@ -1277,7 +1277,7 @@ void rpcrdma_release_rqst(struct rpcrdma_xprt *r_xprt,
> struct rpcrdma_req *req)
> * RPC has relinquished all its Send Queue entries.
> */
> if (!list_empty(&req->rl_registered))
> - frwr_unmap_sync(r_xprt, &req->rl_registered);
> + frwr_unmap_sync(r_xprt, req);
>
> /* Ensure that any DMA mapped pages associated with
> * the Send of the RPC Call have been unmapped before
> diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
> index 3c39aa3..a9de116 100644
> --- a/net/sunrpc/xprtrdma/xprt_rdma.h
> +++ b/net/sunrpc/xprtrdma/xprt_rdma.h
> @@ -240,17 +240,9 @@ struct rpcrdma_sendctx {
> * An external memory region is any buffer or page that is registered
> * on the fly (ie, not pre-registered).
> */
> -enum rpcrdma_frwr_state {
> - FRWR_IS_INVALID, /* ready to be used */
> - FRWR_IS_VALID, /* in use */
> - FRWR_FLUSHED_FR, /* flushed FASTREG WR */
> - FRWR_FLUSHED_LI, /* flushed LOCALINV WR */
> -};
> -
> struct rpcrdma_frwr {
> struct ib_mr *fr_mr;
> struct ib_cqe fr_cqe;
> - enum rpcrdma_frwr_state fr_state;
> struct completion fr_linv_done;
> union {
> struct ib_reg_wr fr_regwr;
> @@ -567,8 +559,7 @@ struct rpcrdma_mr_seg *frwr_map(struct rpcrdma_xprt
> *r_xprt,
> struct rpcrdma_mr **mr);
> int frwr_send(struct rpcrdma_ia *ia, struct rpcrdma_req *req);
> void frwr_reminv(struct rpcrdma_rep *rep, struct list_head *mrs);
> -void frwr_unmap_sync(struct rpcrdma_xprt *r_xprt,
> - struct list_head *mrs);
> +void frwr_unmap_sync(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req);
>
> /*
> * RPC/RDMA protocol calls - xprtrdma/rpc_rdma.c
>

2019-05-31 13:38:59

by Chuck Lever

[permalink] [raw]
Subject: Re: [PATCH RFC 05/12] xprtrdma: Remove fr_state



> On May 30, 2019, at 10:05 AM, Anna Schumaker <[email protected]> wrote:
>
> Hi Chuck,
>
> On Tue, 2019-05-28 at 14:21 -0400, Chuck Lever wrote:
>> Since both the Send and Receive completion queues are processed in
>> a workqueue context, it should be safe to DMA unmap and return MRs
>> to the free or recycle lists directly in the completion handlers.
>>
>> Doing this means rpcrdma_frwr no longer needs to track the state
>> of each MR... a VALID or FLUSHED MR can no longer appear on an
>> xprt's MR free list.
>>
>> Signed-off-by: Chuck Lever <[email protected]>
>> ---
>> include/trace/events/rpcrdma.h | 19 ----
>> net/sunrpc/xprtrdma/frwr_ops.c | 202 ++++++++++++++++++------------------
>> ---
>> net/sunrpc/xprtrdma/rpc_rdma.c | 2
>> net/sunrpc/xprtrdma/xprt_rdma.h | 11 --
>> 4 files changed, 95 insertions(+), 139 deletions(-)
>>
>> diff --git a/include/trace/events/rpcrdma.h b/include/trace/events/rpcrdma.h
>> index a4ab3a2..7fe21ba 100644
>> --- a/include/trace/events/rpcrdma.h
>> +++ b/include/trace/events/rpcrdma.h
>> @@ -181,18 +181,6 @@
>> ), \
>> TP_ARGS(task, mr, nsegs))
>>
>> -TRACE_DEFINE_ENUM(FRWR_IS_INVALID);
>> -TRACE_DEFINE_ENUM(FRWR_IS_VALID);
>> -TRACE_DEFINE_ENUM(FRWR_FLUSHED_FR);
>> -TRACE_DEFINE_ENUM(FRWR_FLUSHED_LI);
>> -
>> -#define xprtrdma_show_frwr_state(x) \
>> - __print_symbolic(x, \
>> - { FRWR_IS_INVALID, "INVALID" }, \
>> - { FRWR_IS_VALID, "VALID" }, \
>> - { FRWR_FLUSHED_FR, "FLUSHED_FR" }, \
>> - { FRWR_FLUSHED_LI, "FLUSHED_LI" })
>> -
>> DECLARE_EVENT_CLASS(xprtrdma_frwr_done,
>> TP_PROTO(
>> const struct ib_wc *wc,
>> @@ -203,22 +191,19 @@
>>
>> TP_STRUCT__entry(
>> __field(const void *, mr)
>> - __field(unsigned int, state)
>> __field(unsigned int, status)
>> __field(unsigned int, vendor_err)
>> ),
>>
>> TP_fast_assign(
>> __entry->mr = container_of(frwr, struct rpcrdma_mr, frwr);
>> - __entry->state = frwr->fr_state;
>> __entry->status = wc->status;
>> __entry->vendor_err = __entry->status ? wc->vendor_err : 0;
>> ),
>>
>> TP_printk(
>> - "mr=%p state=%s: %s (%u/0x%x)",
>> - __entry->mr, xprtrdma_show_frwr_state(__entry->state),
>> - rdma_show_wc_status(__entry->status),
>> + "mr=%p: %s (%u/0x%x)",
>> + __entry->mr, rdma_show_wc_status(__entry->status),
>> __entry->status, __entry->vendor_err
>> )
>> );
>> diff --git a/net/sunrpc/xprtrdma/frwr_ops.c b/net/sunrpc/xprtrdma/frwr_ops.c
>> index ac47314..99871fbf 100644
>> --- a/net/sunrpc/xprtrdma/frwr_ops.c
>> +++ b/net/sunrpc/xprtrdma/frwr_ops.c
>> @@ -168,7 +168,6 @@ int frwr_init_mr(struct rpcrdma_ia *ia, struct rpcrdma_mr
>> *mr)
>> goto out_list_err;
>>
>> mr->frwr.fr_mr = frmr;
>> - mr->frwr.fr_state = FRWR_IS_INVALID;
>> mr->mr_dir = DMA_NONE;
>> INIT_LIST_HEAD(&mr->mr_list);
>> INIT_WORK(&mr->mr_recycle, frwr_mr_recycle_worker);
>> @@ -298,65 +297,6 @@ size_t frwr_maxpages(struct rpcrdma_xprt *r_xprt)
>> }
>>
>> /**
>> - * frwr_wc_fastreg - Invoked by RDMA provider for a flushed FastReg WC
>> - * @cq: completion queue (ignored)
>> - * @wc: completed WR
>> - *
>> - */
>> -static void
>> -frwr_wc_fastreg(struct ib_cq *cq, struct ib_wc *wc)
>> -{
>> - struct ib_cqe *cqe = wc->wr_cqe;
>> - struct rpcrdma_frwr *frwr =
>> - container_of(cqe, struct rpcrdma_frwr, fr_cqe);
>> -
>> - /* WARNING: Only wr_cqe and status are reliable at this point */
>> - if (wc->status != IB_WC_SUCCESS)
>> - frwr->fr_state = FRWR_FLUSHED_FR;
>> - trace_xprtrdma_wc_fastreg(wc, frwr);
>> -}
>> -
>> -/**
>> - * frwr_wc_localinv - Invoked by RDMA provider for a flushed LocalInv WC
>> - * @cq: completion queue (ignored)
>> - * @wc: completed WR
>> - *
>> - */
>> -static void
>> -frwr_wc_localinv(struct ib_cq *cq, struct ib_wc *wc)
>> -{
>> - struct ib_cqe *cqe = wc->wr_cqe;
>> - struct rpcrdma_frwr *frwr = container_of(cqe, struct rpcrdma_frwr,
>> - fr_cqe);
>> -
>> - /* WARNING: Only wr_cqe and status are reliable at this point */
>> - if (wc->status != IB_WC_SUCCESS)
>> - frwr->fr_state = FRWR_FLUSHED_LI;
>> - trace_xprtrdma_wc_li(wc, frwr);
>> -}
>> -
>> -/**
>> - * frwr_wc_localinv_wake - Invoked by RDMA provider for a signaled LocalInv
>> WC
>> - * @cq: completion queue (ignored)
>> - * @wc: completed WR
>> - *
>> - * Awaken anyone waiting for an MR to finish being fenced.
>> - */
>> -static void
>> -frwr_wc_localinv_wake(struct ib_cq *cq, struct ib_wc *wc)
>> -{
>> - struct ib_cqe *cqe = wc->wr_cqe;
>> - struct rpcrdma_frwr *frwr = container_of(cqe, struct rpcrdma_frwr,
>> - fr_cqe);
>> -
>> - /* WARNING: Only wr_cqe and status are reliable at this point */
>> - if (wc->status != IB_WC_SUCCESS)
>> - frwr->fr_state = FRWR_FLUSHED_LI;
>> - trace_xprtrdma_wc_li_wake(wc, frwr);
>> - complete(&frwr->fr_linv_done);
>> -}
>> -
>> -/**
>> * frwr_map - Register a memory region
>> * @r_xprt: controlling transport
>> * @seg: memory region co-ordinates
>> @@ -378,23 +318,15 @@ struct rpcrdma_mr_seg *frwr_map(struct rpcrdma_xprt
>> *r_xprt,
>> {
>> struct rpcrdma_ia *ia = &r_xprt->rx_ia;
>> bool holes_ok = ia->ri_mrtype == IB_MR_TYPE_SG_GAPS;
>> - struct rpcrdma_frwr *frwr;
>> struct rpcrdma_mr *mr;
>> struct ib_mr *ibmr;
>> struct ib_reg_wr *reg_wr;
>> int i, n;
>> u8 key;
>>
>> - mr = NULL;
>> - do {
>> - if (mr)
>> - rpcrdma_mr_recycle(mr);
>> - mr = rpcrdma_mr_get(r_xprt);
>> - if (!mr)
>> - goto out_getmr_err;
>> - } while (mr->frwr.fr_state != FRWR_IS_INVALID);
>> - frwr = &mr->frwr;
>> - frwr->fr_state = FRWR_IS_VALID;
>> + mr = rpcrdma_mr_get(r_xprt);
>> + if (!mr)
>> + goto out_getmr_err;
>>
>> if (nsegs > ia->ri_max_frwr_depth)
>> nsegs = ia->ri_max_frwr_depth;
>> @@ -423,7 +355,7 @@ struct rpcrdma_mr_seg *frwr_map(struct rpcrdma_xprt
>> *r_xprt,
>> if (!mr->mr_nents)
>> goto out_dmamap_err;
>>
>> - ibmr = frwr->fr_mr;
>> + ibmr = mr->frwr.fr_mr;
>> n = ib_map_mr_sg(ibmr, mr->mr_sg, mr->mr_nents, NULL, PAGE_SIZE);
>> if (unlikely(n != mr->mr_nents))
>> goto out_mapmr_err;
>> @@ -433,7 +365,7 @@ struct rpcrdma_mr_seg *frwr_map(struct rpcrdma_xprt
>> *r_xprt,
>> key = (u8)(ibmr->rkey & 0x000000FF);
>> ib_update_fast_reg_key(ibmr, ++key);
>>
>> - reg_wr = &frwr->fr_regwr;
>> + reg_wr = &mr->frwr.fr_regwr;
>> reg_wr->mr = ibmr;
>> reg_wr->key = ibmr->rkey;
>> reg_wr->access = writing ?
>> @@ -465,6 +397,23 @@ struct rpcrdma_mr_seg *frwr_map(struct rpcrdma_xprt
>> *r_xprt,
>> }
>>
>> /**
>> + * frwr_wc_fastreg - Invoked by RDMA provider for a flushed FastReg WC
>> + * @cq: completion queue (ignored)
>> + * @wc: completed WR
>> + *
>> + */
>> +static void frwr_wc_fastreg(struct ib_cq *cq, struct ib_wc *wc)
>> +{
>> + struct ib_cqe *cqe = wc->wr_cqe;
>> + struct rpcrdma_frwr *frwr =
>> + container_of(cqe, struct rpcrdma_frwr, fr_cqe);
>> +
>> + /* WARNING: Only wr_cqe and status are reliable at this point */
>> + trace_xprtrdma_wc_fastreg(wc, frwr);
>> + /* The MR will get recycled when the associated req is retransmitted */
>> +}
>> +
>> +/**
>> * frwr_send - post Send WR containing the RPC Call message
>> * @ia: interface adapter
>> * @req: Prepared RPC Call
>> @@ -516,65 +465,104 @@ void frwr_reminv(struct rpcrdma_rep *rep, struct
>> list_head *mrs)
>> if (mr->mr_handle == rep->rr_inv_rkey) {
>> list_del_init(&mr->mr_list);
>> trace_xprtrdma_mr_remoteinv(mr);
>> - mr->frwr.fr_state = FRWR_IS_INVALID;
>> rpcrdma_mr_unmap_and_put(mr);
>> break; /* only one invalidated MR per RPC */
>> }
>> }
>>
>> +static void __frwr_release_mr(struct ib_wc *wc, struct rpcrdma_mr *mr)
>> +{
>> + if (wc->status != IB_WC_SUCCESS)
>> + rpcrdma_mr_recycle(mr);
>> + else
>> + rpcrdma_mr_unmap_and_put(mr);
>> +}
>> +
>> /**
>> - * frwr_unmap_sync - invalidate memory regions that were registered for @req
>> - * @r_xprt: controlling transport
>> - * @mrs: list of MRs to process
>> + * frwr_wc_localinv - Invoked by RDMA provider for a LOCAL_INV WC
>> + * @cq: completion queue (ignored)
>> + * @wc: completed WR
>> *
>> - * Sleeps until it is safe for the host CPU to access the
>> - * previously mapped memory regions.
>> + */
>> +static void frwr_wc_localinv(struct ib_cq *cq, struct ib_wc *wc)
>> +{
>> + struct ib_cqe *cqe = wc->wr_cqe;
>> + struct rpcrdma_frwr *frwr =
>> + container_of(cqe, struct rpcrdma_frwr, fr_cqe);
>> + struct rpcrdma_mr *mr = container_of(frwr, struct rpcrdma_mr, frwr);
>> +
>> + /* WARNING: Only wr_cqe and status are reliable at this point */
>> + __frwr_release_mr(wc, mr);
>> + trace_xprtrdma_wc_li(wc, frwr);
>> +}
>> +
>> +/**
>> + * frwr_wc_localinv_wake - Invoked by RDMA provider for a LOCAL_INV WC
>> + * @cq: completion queue (ignored)
>> + * @wc: completed WR
>> *
>> - * Caller ensures that @mrs is not empty before the call. This
>> - * function empties the list.
>> + * Awaken anyone waiting for an MR to finish being fenced.
>> */
>> -void frwr_unmap_sync(struct rpcrdma_xprt *r_xprt, struct list_head *mrs)
>> +static void frwr_wc_localinv_wake(struct ib_cq *cq, struct ib_wc *wc)
>> +{
>> + struct ib_cqe *cqe = wc->wr_cqe;
>> + struct rpcrdma_frwr *frwr =
>> + container_of(cqe, struct rpcrdma_frwr, fr_cqe);
>> + struct rpcrdma_mr *mr = container_of(frwr, struct rpcrdma_mr, frwr);
>> +
>> + /* WARNING: Only wr_cqe and status are reliable at this point */
>> + __frwr_release_mr(wc, mr);
>> + trace_xprtrdma_wc_li_wake(wc, frwr);
>> + complete(&frwr->fr_linv_done);
>> +}
>> +
>> +/**
>> + * frwr_unmap_sync - invalidate memory regions that were registered for @req
>> + * @r_xprt: controlling transport instance
>> + * @req: rpcrdma_req with a non-empty list of MRs to process
>> + *
>> + * Sleeps until it is safe for the host CPU to access the previously mapped
>> + * memory regions.
>> + */
>> +void frwr_unmap_sync(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req)
>> {
>> struct ib_send_wr *first, **prev, *last;
>> const struct ib_send_wr *bad_wr;
>> - struct rpcrdma_ia *ia = &r_xprt->rx_ia;
>> struct rpcrdma_frwr *frwr;
>> struct rpcrdma_mr *mr;
>> - int count, rc;
>> + int rc;
>>
>> /* ORDER: Invalidate all of the MRs first
>> *
>> * Chain the LOCAL_INV Work Requests and post them with
>> * a single ib_post_send() call.
>> */
>> - frwr = NULL;
>> - count = 0;
>> prev = &first;
>> - list_for_each_entry(mr, mrs, mr_list) {
>> - mr->frwr.fr_state = FRWR_IS_INVALID;
>> + while (!list_empty(&req->rl_registered)) {
>
> Is this list guaranteed to always start full? Because we could potentially use
> frwr uninitialized a few lines down if it's not.

The only caller of frwr_unmap_async looks like this:

1353 if (!list_empty(&req->rl_registered))
1354 frwr_unmap_async(r_xprt, req);

The warning is a false positive. I'll see about rearranging the logic.
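For what it's worth, a standalone analogue of the warning (hypothetical names,
nothing from the xprtrdma code) shows why gcc cannot prove initialization
across the caller's list_empty() check, and how a NULL-initialized pointer
plus an explicit empty-list test keeps the warning quiet without changing
behavior for non-empty lists:

/* Hypothetical, self-contained analogue of the -Wmaybe-uninitialized
 * report. The caller guarantees the list is non-empty, so "last" is
 * always assigned in the loop, but the compiler cannot see that
 * guarantee from inside this function.
 */
#include <stdio.h>
#include <stddef.h>

struct item {
	struct item *next;
	int value;
};

static int use_last(const struct item *head)
{
	const struct item *last = NULL;	/* explicit init avoids the warning */
	const struct item *i;

	for (i = head; i; i = i->next)
		last = i;

	if (!last)		/* cannot happen per the caller's contract, */
		return -1;	/* but makes the invariant visible to gcc */

	return last->value;
}

int main(void)
{
	struct item b = { NULL, 2 };
	struct item a = { &b, 1 };

	printf("last value: %d\n", use_last(&a));
	return 0;
}

Something equivalent in frwr_unmap_sync (skip the post-and-wait if no WR was
chained) would presumably keep -Werror builds happy.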


> net/sunrpc/xprtrdma/frwr_ops.c: In function ‘frwr_unmap_sync’:
> net/sunrpc/xprtrdma/frwr_ops.c:582:3: error: ‘frwr’ may be used uninitialized in
> this function [-Werror=maybe-uninitialized]
> wait_for_completion(&frwr->fr_linv_done);
> ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> Thanks,
> Anna
>> + mr = rpcrdma_mr_pop(&req->rl_registered);
>>
>> - frwr = &mr->frwr;
>> trace_xprtrdma_mr_localinv(mr);
>> + r_xprt->rx_stats.local_inv_needed++;
>>
>> + frwr = &mr->frwr;
>> frwr->fr_cqe.done = frwr_wc_localinv;
>> last = &frwr->fr_invwr;
>> - memset(last, 0, sizeof(*last));
>> + last->next = NULL;
>> last->wr_cqe = &frwr->fr_cqe;
>> + last->sg_list = NULL;
>> + last->num_sge = 0;
>> last->opcode = IB_WR_LOCAL_INV;
>> + last->send_flags = IB_SEND_SIGNALED;
>> last->ex.invalidate_rkey = mr->mr_handle;
>> - count++;
>>
>> *prev = last;
>> prev = &last->next;
>> }
>> - if (!frwr)
>> - goto unmap;
>>
>> /* Strong send queue ordering guarantees that when the
>> * last WR in the chain completes, all WRs in the chain
>> * are complete.
>> */
>> - last->send_flags = IB_SEND_SIGNALED;
>> frwr->fr_cqe.done = frwr_wc_localinv_wake;
>> reinit_completion(&frwr->fr_linv_done);
>>
>> @@ -582,26 +570,18 @@ void frwr_unmap_sync(struct rpcrdma_xprt *r_xprt, struct
>> list_head *mrs)
>> * replaces the QP. The RPC reply handler won't call us
>> * unless ri_id->qp is a valid pointer.
>> */
>> - r_xprt->rx_stats.local_inv_needed++;
>> bad_wr = NULL;
>> - rc = ib_post_send(ia->ri_id->qp, first, &bad_wr);
>> - if (bad_wr != first)
>> - wait_for_completion(&frwr->fr_linv_done);
>> - if (rc)
>> - goto out_release;
>> + rc = ib_post_send(r_xprt->rx_ia.ri_id->qp, first, &bad_wr);
>> + trace_xprtrdma_post_send(req, rc);
>>
>> - /* ORDER: Now DMA unmap all of the MRs, and return
>> - * them to the free MR list.
>> + /* The final LOCAL_INV WR in the chain is supposed to
>> + * do the wake. If it never gets posted, the wake will
>> + * not happen, so don't wait in that case.
>> */
>> -unmap:
>> - while (!list_empty(mrs)) {
>> - mr = rpcrdma_mr_pop(mrs);
>> - rpcrdma_mr_unmap_and_put(mr);
>> - }
>> - return;
>> -
>> -out_release:
>> - pr_err("rpcrdma: FRWR invalidate ib_post_send returned %i\n", rc);
>> + if (bad_wr != first)
>> + wait_for_completion(&frwr->fr_linv_done);
>> + if (!rc)
>> + return;
>>
>> /* Unmap and release the MRs in the LOCAL_INV WRs that did not
>> * get posted.
>> diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
>> index 77fc1e4..6c049fd 100644
>> --- a/net/sunrpc/xprtrdma/rpc_rdma.c
>> +++ b/net/sunrpc/xprtrdma/rpc_rdma.c
>> @@ -1277,7 +1277,7 @@ void rpcrdma_release_rqst(struct rpcrdma_xprt *r_xprt,
>> struct rpcrdma_req *req)
>> * RPC has relinquished all its Send Queue entries.
>> */
>> if (!list_empty(&req->rl_registered))
>> - frwr_unmap_sync(r_xprt, &req->rl_registered);
>> + frwr_unmap_sync(r_xprt, req);
>>
>> /* Ensure that any DMA mapped pages associated with
>> * the Send of the RPC Call have been unmapped before
>> diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
>> index 3c39aa3..a9de116 100644
>> --- a/net/sunrpc/xprtrdma/xprt_rdma.h
>> +++ b/net/sunrpc/xprtrdma/xprt_rdma.h
>> @@ -240,17 +240,9 @@ struct rpcrdma_sendctx {
>> * An external memory region is any buffer or page that is registered
>> * on the fly (ie, not pre-registered).
>> */
>> -enum rpcrdma_frwr_state {
>> - FRWR_IS_INVALID, /* ready to be used */
>> - FRWR_IS_VALID, /* in use */
>> - FRWR_FLUSHED_FR, /* flushed FASTREG WR */
>> - FRWR_FLUSHED_LI, /* flushed LOCALINV WR */
>> -};
>> -
>> struct rpcrdma_frwr {
>> struct ib_mr *fr_mr;
>> struct ib_cqe fr_cqe;
>> - enum rpcrdma_frwr_state fr_state;
>> struct completion fr_linv_done;
>> union {
>> struct ib_reg_wr fr_regwr;
>> @@ -567,8 +559,7 @@ struct rpcrdma_mr_seg *frwr_map(struct rpcrdma_xprt
>> *r_xprt,
>> struct rpcrdma_mr **mr);
>> int frwr_send(struct rpcrdma_ia *ia, struct rpcrdma_req *req);
>> void frwr_reminv(struct rpcrdma_rep *rep, struct list_head *mrs);
>> -void frwr_unmap_sync(struct rpcrdma_xprt *r_xprt,
>> - struct list_head *mrs);
>> +void frwr_unmap_sync(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req);
>>
>> /*
>> * RPC/RDMA protocol calls - xprtrdma/rpc_rdma.c

--
Chuck Lever



2019-05-31 14:33:38

by Dennis Dalessandro

[permalink] [raw]
Subject: Re: [PATCH RFC 00/12] for-5.3 NFS/RDMA patches for review

On 5/28/2019 2:20 PM, Chuck Lever wrote:
> This is a series of fixes and architectural changes that should
> improve robustness and result in better scalability of NFS/RDMA.
> I'm sure one or two of these could be broken down a little more,
> comments welcome.
>
> The fundamental observation is that the RPC work queues are BOUND,
> thus rescheduling work in the Receive completion handler to one of
> these work queues just forces it to run later on the same CPU. So
> try to do more work right in the Receive completion handler to
> reduce context switch overhead.
>
> A secondary concern is that the average amount of wall-clock time
> it takes to handle a single Receive completion caps the IOPS rate
> (both per-xprt and per-NIC). In this patch series I've taken a few
> steps to reduce that latency, and I'm looking into a few others.
>
> This series can be fetched from:
>
> git://git.linux-nfs.org/projects/cel/cel-2.6.git
>
> in topic branch "nfs-for-5.3".
>
> ---
>
> Chuck Lever (12):
> xprtrdma: Fix use-after-free in rpcrdma_post_recvs
> xprtrdma: Replace use of xdr_stream_pos in rpcrdma_marshal_req
> xprtrdma: Fix occasional transport deadlock
> xprtrdma: Remove the RPCRDMA_REQ_F_PENDING flag
> xprtrdma: Remove fr_state
> xprtrdma: Add mechanism to place MRs back on the free list
> xprtrdma: Reduce context switching due to Local Invalidation
> xprtrdma: Wake RPCs directly in rpcrdma_wc_send path
> xprtrdma: Simplify rpcrdma_rep_create
> xprtrdma: Streamline rpcrdma_post_recvs
> xprtrdma: Refactor chunk encoding
> xprtrdma: Remove rpcrdma_req::rl_buffer
>
>
> include/trace/events/rpcrdma.h | 47 ++++--
> net/sunrpc/xprtrdma/frwr_ops.c | 330 ++++++++++++++++++++++++++-------------
> net/sunrpc/xprtrdma/rpc_rdma.c | 146 +++++++----------
> net/sunrpc/xprtrdma/transport.c | 16 +-
> net/sunrpc/xprtrdma/verbs.c | 115 ++++++--------
> net/sunrpc/xprtrdma/xprt_rdma.h | 43 +----
> 6 files changed, 384 insertions(+), 313 deletions(-)
>

For hfi1:
Tested-by: Dennis Dalessandro <[email protected]>

2019-05-31 14:37:04

by Chuck Lever

[permalink] [raw]
Subject: Re: [PATCH RFC 00/12] for-5.3 NFS/RDMA patches for review



> On May 31, 2019, at 10:32 AM, Dennis Dalessandro <[email protected]> wrote:
>
> On 5/28/2019 2:20 PM, Chuck Lever wrote:
>> This is a series of fixes and architectural changes that should
>> improve robustness and result in better scalability of NFS/RDMA.
>> I'm sure one or two of these could be broken down a little more,
>> comments welcome.
>> The fundamental observation is that the RPC work queues are BOUND,
>> thus rescheduling work in the Receive completion handler to one of
>> these work queues just forces it to run later on the same CPU. So
>> try to do more work right in the Receive completion handler to
>> reduce context switch overhead.
>> A secondary concern is that the average amount of wall-clock time
>> it takes to handle a single Receive completion caps the IOPS rate
>> (both per-xprt and per-NIC). In this patch series I've taken a few
>> steps to reduce that latency, and I'm looking into a few others.
>> This series can be fetched from:
>> git://git.linux-nfs.org/projects/cel/cel-2.6.git
>> in topic branch "nfs-for-5.3".
>> ---
>> Chuck Lever (12):
>> xprtrdma: Fix use-after-free in rpcrdma_post_recvs
>> xprtrdma: Replace use of xdr_stream_pos in rpcrdma_marshal_req
>> xprtrdma: Fix occasional transport deadlock
>> xprtrdma: Remove the RPCRDMA_REQ_F_PENDING flag
>> xprtrdma: Remove fr_state
>> xprtrdma: Add mechanism to place MRs back on the free list
>> xprtrdma: Reduce context switching due to Local Invalidation
>> xprtrdma: Wake RPCs directly in rpcrdma_wc_send path
>> xprtrdma: Simplify rpcrdma_rep_create
>> xprtrdma: Streamline rpcrdma_post_recvs
>> xprtrdma: Refactor chunk encoding
>> xprtrdma: Remove rpcrdma_req::rl_buffer
>> include/trace/events/rpcrdma.h | 47 ++++--
>> net/sunrpc/xprtrdma/frwr_ops.c | 330 ++++++++++++++++++++++++++-------------
>> net/sunrpc/xprtrdma/rpc_rdma.c | 146 +++++++----------
>> net/sunrpc/xprtrdma/transport.c | 16 +-
>> net/sunrpc/xprtrdma/verbs.c | 115 ++++++--------
>> net/sunrpc/xprtrdma/xprt_rdma.h | 43 +----
>> 6 files changed, 384 insertions(+), 313 deletions(-)
>
> For hfi1:
> Tested-by: Dennis Dalessandro <[email protected]>

Thanks!

--
Chuck Lever