2014-06-23 22:39:03

by Chuck Lever

Subject: [PATCH v1 00/13] NFS/RDMA patches for 3.17

The main purpose of this series is to address more connection drop
recovery issues by fixing FRMR re-use to make it less likely the
client will drop the connection due to a memory operation error.

Some other clean-ups and fixes are present as well.

See topic branch nfs-rdma-for-3.17 in

git://git.linux-nfs.org/projects/cel/cel-2.6.git

I tested with NFSv3 and NFSv4 on all three supported memory
registration modes. Used cthon04 and iozone with both Solaris
and Linux NFS/RDMA servers. Used xfstests with Linux.

---

Chuck Lever (13):
xprtrdma: Fix panic in rpcrdma_register_frmr_external()
xprtrdma: Protect ->qp during FRMR deregistration
xprtrdma: Limit data payload size for ALLPHYSICAL
xprtrdma: Update rkeys after transport reconnect
xprtrdma: Don't drain CQs on transport disconnect
xprtrdma: Unclutter struct rpcrdma_mr_seg
xprtrdma: Encode Work Request opcode in wc->wr_id
xprtrdma: Back off rkey when FAST_REG_MR fails
xprtrdma: Refactor rpcrdma_buffer_put()
xprtrdma: Release FRMR segment buffers during LOCAL_INV completion
xprtrdma: Clean up rpcrdma_ep_disconnect()
xprtrdma: Remove RPCRDMA_PERSISTENT_REGISTRATION macro
xprtrdma: Handle additional connection events


include/linux/sunrpc/xprtrdma.h | 2
net/sunrpc/xprtrdma/rpc_rdma.c | 77 +++++----
net/sunrpc/xprtrdma/transport.c | 17 +-
net/sunrpc/xprtrdma/verbs.c | 330 +++++++++++++++++++++++++++------------
net/sunrpc/xprtrdma/xprt_rdma.h | 63 ++++++-
5 files changed, 332 insertions(+), 157 deletions(-)

--
Chuck Lever


2014-06-24 14:35:20

by Or Gerlitz

Subject: Re: [PATCH v1 00/13] NFS/RDMA patches for 3.17

On Tue, Jun 24, 2014 at 1:39 AM, Chuck Lever <[email protected]> wrote:
>
> The main purpose of this series is to address more connection drop
> recovery issues by fixing FRMR re-use to make it less likely the
> client will drop the connection due to a memory operation error.
>
> Some other clean-ups and fixes are present as well.


From a quick look, patches 1, 2, and 5 of the series (and maybe more)
are a very good match for 3.16-rc (they fix kernel crashes, etc.). I
don't think they need to wait for 3.17.

2014-06-25 16:14:42

by Shirley Ma

Subject: Re: [PATCH v1 10/13] xprtrdma: Release FRMR segment buffers during LOCAL_INV completion


On 06/25/2014 07:32 AM, Chuck Lever wrote:
> Hi Shirley-
>
> On Jun 25, 2014, at 1:17 AM, Shirley Ma <[email protected]> wrote:
>
>> Would it be possible to delay rpcrdma_buffer_put() until the LOCAL_INV request's send completion? That is, remove rpcrdma_buffer_put() from xprt_rdma_free() and add a callback after the LOCAL_INV completes?
>
> That's exactly what this patch does. The relevant part of
> rpcrdma_buffer_put() is:
>
> list_add(&mw->mw_list, &buf->rb_mws);
>
> This is now wrapped with a reference count so that
> rpcrdma_buffer_put() and the LOCAL_INV completion can run in any
> order. The FRMR is added back to the list only after both of those
> two have finished.

What I was thinking of is running rpcrdma_buffer_put() after the LOCAL_INV completion, without a reference count.

> Nothing in xprt_rdma_free() is allowed to sleep, so we can't wait for
> LOCAL_INV completion in there.
>
> The only alternative I can think of is having rpcrdma_buffer_get() check
> fr_state as it removes FRMRs from the rb_mws list. Only if the FRMR is
> marked FRMR_IS_INVALID, rpcrdma_buffer_get() will add it to the
> rpcrdma_req.

I thought about that too; an atomic operation would be better than a lock.

> --
> Chuck Lever
> chuck[dot]lever[at]oracle[dot]com
>
>
>
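A minimal sketch of the alternative being discussed (hypothetical; not part of the posted series): rpcrdma_buffer_get() could skip FRMRs still marked FRMR_IS_VALID while refilling a request, so an MW still owned by the hardware is never handed out:

static struct rpcrdma_req *
rpcrdma_buffer_get_invalid_frmrs(struct rpcrdma_req *req,
                                 struct rpcrdma_buffer *buffers)
{
        struct rpcrdma_mw *r, *tmp;
        int i = 0;

        /* Caller holds buffers->rb_lock.  Hand out only FRMRs whose
         * LOCAL_INV has completed; anything still FRMR_IS_VALID stays
         * on rb_mws for a later pass. */
        list_for_each_entry_safe(r, tmp, &buffers->rb_mws, mw_list) {
                if (r->r.frmr.fr_state != FRMR_IS_INVALID)
                        continue;
                list_del(&r->mw_list);
                r->mw_pool = buffers;
                req->rl_segments[i++].mr_chunk.rl_mw = r;
                if (i == RPCRDMA_MAX_SEGS)
                        return req;
        }
        return NULL;    /* not enough reusable FRMRs; caller backs off */
}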

2014-06-24 15:58:59

by Anna Schumaker

Subject: Re: [PATCH v1 13/13] xprtrdma: Handle additional connection events

On 06/23/2014 06:40 PM, Chuck Lever wrote:
> Commit 38ca83a5 added RDMA_CM_EVENT_TIMEWAIT_EXIT. But that status
> is relevant only for consumers that re-use their QPs on new
> connections. xprtrdma creates a fresh QP on reconnection, so that
> event should be explicitly ignored.
>
> Squelch the alarming "unexpected CM event" message.
>
> Signed-off-by: Chuck Lever <[email protected]>
> ---
> net/sunrpc/xprtrdma/verbs.c | 27 +++++++++++++++++----------
> 1 file changed, 17 insertions(+), 10 deletions(-)
>
> diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
> index ec98e48..dbd5f22 100644
> --- a/net/sunrpc/xprtrdma/verbs.c
> +++ b/net/sunrpc/xprtrdma/verbs.c
> @@ -334,8 +334,16 @@ static const char * const conn[] = {
> "rejected",
> "established",
> "disconnected",
> - "device removal"
> + "device removal",
> + "multicast join",
> + "multicast error",
> + "address change",
> + "timewait exit",
> };
> +
> +#define CONNECTION_MSG(status) \
> + ((status) < ARRAY_SIZE(conn) ? \
> + conn[(status)] : "unrecognized connection error")
> #endif
>
> static int
> @@ -393,13 +401,10 @@ rpcrdma_conn_upcall(struct rdma_cm_id *id, struct rdma_cm_event *event)
> case RDMA_CM_EVENT_DEVICE_REMOVAL:
> connstate = -ENODEV;
> connected:
> - dprintk("RPC: %s: %s: %pI4:%u (ep 0x%p event 0x%x)\n",
> - __func__,
> - (event->event <= 11) ? conn[event->event] :
> - "unknown connection error",
> - &addr->sin_addr.s_addr,
> - ntohs(addr->sin_port),
> - ep, event->event);
> + dprintk("RPC: %s: %pI4:%u (ep 0x%p): %s\n",
> + __func__, &addr->sin_addr.s_addr,
> + ntohs(addr->sin_port), ep,
> + CONNECTION_MSG(event->event));
> atomic_set(&rpcx_to_rdmax(ep->rep_xprt)->rx_buf.rb_credits, 1);
> dprintk("RPC: %s: %sconnected\n",
> __func__, connstate > 0 ? "" : "dis");
> @@ -408,8 +413,10 @@ connected:
> wake_up_all(&ep->rep_connect_wait);
> break;
> default:
> - dprintk("RPC: %s: unexpected CM event %d\n",
> - __func__, event->event);
> + dprintk("RPC: %s: %pI4:%u (ep 0x%p): %s\n",
> + __func__, &addr->sin_addr.s_addr,
> + ntohs(addr->sin_port), ep,
> + CONNECTION_MSG(event->event));

These two dprintk()s are exactly the same, and only a few lines apart. Is there some way to combine them? (I'm just curious, not saying that you need to!)

Anna
> break;
> }
>
>
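One way to combine them (a hedged sketch, not something this patch proposes) is to hoist the shared dprintk above the switch, since addr, ep, and event->event are all in scope before dispatch. The trade-off is that CM events whose arms currently log something else, or nothing at all, would pick up this line too:

        dprintk("RPC: %s: %pI4:%u (ep 0x%p): %s\n",
                __func__, &addr->sin_addr.s_addr,
                ntohs(addr->sin_port), ep,
                CONNECTION_MSG(event->event));
        switch (event->event) {
        /* ... the existing arms, minus the duplicated dprintk ... */
        default:
                break;
        }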


2014-06-23 22:39:27

by Chuck Lever

Subject: [PATCH v1 03/13] xprtrdma: Limit data payload size for ALLPHYSICAL

When the client uses physical memory registration, each page in the
payload gets its own array entry in the RPC/RDMA header's chunk list.

Therefore, don't advertise a maximum payload size that would require
more array entries than can fit in the RPC buffer where RPC/RDMA
headers are built.
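For a rough sense of the cap (illustrative numbers only, not taken from the patch): with a 1024-byte inline buffer, a 28-byte minimal RPC/RDMA header, and 16-byte chunk segments (handle, length, offset), about (1024 - 28) / 16 = 62 list entries fit, which caps the advertised payload near 62 pages (roughly 248 KB with 4 KB pages).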

BugLink: https://bugzilla.linux-nfs.org/show_bug.cgi?id=248
Signed-off-by: Chuck Lever <[email protected]>
---
net/sunrpc/xprtrdma/transport.c | 4 +++-
net/sunrpc/xprtrdma/verbs.c | 41 +++++++++++++++++++++++++++++++++++++++
net/sunrpc/xprtrdma/xprt_rdma.h | 1 +
3 files changed, 45 insertions(+), 1 deletion(-)

diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
index 66f91f0..4185102 100644
--- a/net/sunrpc/xprtrdma/transport.c
+++ b/net/sunrpc/xprtrdma/transport.c
@@ -296,7 +296,6 @@ xprt_setup_rdma(struct xprt_create *args)

xprt->resvport = 0; /* privileged port not needed */
xprt->tsh_size = 0; /* RPC-RDMA handles framing */
- xprt->max_payload = RPCRDMA_MAX_DATA_SEGS * PAGE_SIZE;
xprt->ops = &xprt_rdma_procs;

/*
@@ -382,6 +381,9 @@ xprt_setup_rdma(struct xprt_create *args)
new_ep->rep_xprt = xprt;

xprt_rdma_format_addresses(xprt);
+ xprt->max_payload = rpcrdma_max_payload(new_xprt);
+ dprintk("RPC: %s: transport data payload maximum: %zu bytes\n",
+ __func__, xprt->max_payload);

if (!try_module_get(THIS_MODULE))
goto out4;
diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index f70b8ad..3c7f904 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -1819,3 +1819,44 @@ rpcrdma_ep_post_recv(struct rpcrdma_ia *ia,
rc);
return rc;
}
+
+/* Physical mapping means one Read/Write list entry per-page.
+ * All list entries must fit within an inline buffer
+ *
+ * NB: The server must return a Write list for NFS READ,
+ * which has the same constraint. Factor in the inline
+ * rsize as well.
+ */
+static size_t
+rpcrdma_physical_max_payload(struct rpcrdma_xprt *r_xprt)
+{
+ struct rpcrdma_create_data_internal *cdata = &r_xprt->rx_data;
+ unsigned int inline_size, pages;
+
+ inline_size = min_t(unsigned int,
+ cdata->inline_wsize, cdata->inline_rsize) -
+ RPCRDMA_HDRLEN_MIN;
+ pages = inline_size / sizeof(struct rpcrdma_segment);
+ return pages << PAGE_SHIFT;
+}
+
+static size_t
+rpcrdma_mr_max_payload(struct rpcrdma_xprt *r_xprt)
+{
+ return RPCRDMA_MAX_DATA_SEGS << PAGE_SHIFT;
+}
+
+size_t
+rpcrdma_max_payload(struct rpcrdma_xprt *r_xprt)
+{
+ size_t result;
+
+ switch (r_xprt->rx_ia.ri_memreg_strategy) {
+ case RPCRDMA_ALLPHYSICAL:
+ result = rpcrdma_physical_max_payload(r_xprt);
+ break;
+ default:
+ result = rpcrdma_mr_max_payload(r_xprt);
+ }
+ return result;
+}
diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
index 97ca516..f3d86b2 100644
--- a/net/sunrpc/xprtrdma/xprt_rdma.h
+++ b/net/sunrpc/xprtrdma/xprt_rdma.h
@@ -348,6 +348,7 @@ void rpcrdma_reply_handler(struct rpcrdma_rep *);
* RPC/RDMA protocol calls - xprtrdma/rpc_rdma.c
*/
int rpcrdma_marshal_req(struct rpc_rqst *);
+size_t rpcrdma_max_payload(struct rpcrdma_xprt *);

/* Temporary NFS request map cache. Created in svc_rdma.c */
extern struct kmem_cache *svc_rdma_map_cachep;


2014-06-23 22:40:33

by Chuck Lever

Subject: [PATCH v1 11/13] xprtrdma: Clean up rpcrdma_ep_disconnect()

The return code is used only for dprintk's that are already
redundant.

Signed-off-by: Chuck Lever <[email protected]>
---
net/sunrpc/xprtrdma/transport.c | 2 +-
net/sunrpc/xprtrdma/verbs.c | 13 +++----------
net/sunrpc/xprtrdma/xprt_rdma.h | 2 +-
3 files changed, 5 insertions(+), 12 deletions(-)

diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
index f6d280b..2faac49 100644
--- a/net/sunrpc/xprtrdma/transport.c
+++ b/net/sunrpc/xprtrdma/transport.c
@@ -414,7 +414,7 @@ xprt_rdma_close(struct rpc_xprt *xprt)
if (r_xprt->rx_ep.rep_connected > 0)
xprt->reestablish_timeout = 0;
xprt_disconnect_done(xprt);
- (void) rpcrdma_ep_disconnect(&r_xprt->rx_ep, &r_xprt->rx_ia);
+ rpcrdma_ep_disconnect(&r_xprt->rx_ep, &r_xprt->rx_ia);
}

static void
diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index 52f57f7..b6c52c7 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -838,10 +838,7 @@ rpcrdma_ep_destroy(struct rpcrdma_ep *ep, struct rpcrdma_ia *ia)
cancel_delayed_work_sync(&ep->rep_connect_worker);

if (ia->ri_id->qp) {
- rc = rpcrdma_ep_disconnect(ep, ia);
- if (rc)
- dprintk("RPC: %s: rpcrdma_ep_disconnect"
- " returned %i\n", __func__, rc);
+ rpcrdma_ep_disconnect(ep, ia);
rdma_destroy_qp(ia->ri_id);
ia->ri_id->qp = NULL;
}
@@ -879,10 +876,7 @@ rpcrdma_ep_connect(struct rpcrdma_ep *ep, struct rpcrdma_ia *ia)
struct rpcrdma_xprt *xprt;
retry:
dprintk("RPC: %s: reconnecting...\n", __func__);
- rc = rpcrdma_ep_disconnect(ep, ia);
- if (rc && rc != -ENOTCONN)
- dprintk("RPC: %s: rpcrdma_ep_disconnect"
- " status %i\n", __func__, rc);
+ rpcrdma_ep_disconnect(ep, ia);

xprt = container_of(ia, struct rpcrdma_xprt, rx_ia);
id = rpcrdma_create_id(xprt, ia,
@@ -988,7 +982,7 @@ out:
* This call is not reentrant, and must not be made in parallel
* on the same endpoint.
*/
-int
+void
rpcrdma_ep_disconnect(struct rpcrdma_ep *ep, struct rpcrdma_ia *ia)
{
int rc;
@@ -1004,7 +998,6 @@ rpcrdma_ep_disconnect(struct rpcrdma_ep *ep, struct rpcrdma_ia *ia)
dprintk("RPC: %s: rdma_disconnect %i\n", __func__, rc);
ep->rep_connected = rc;
}
- return rc;
}

/*
diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
index 7a140fe..4f7de2a 100644
--- a/net/sunrpc/xprtrdma/xprt_rdma.h
+++ b/net/sunrpc/xprtrdma/xprt_rdma.h
@@ -343,7 +343,7 @@ int rpcrdma_ep_create(struct rpcrdma_ep *, struct rpcrdma_ia *,
struct rpcrdma_create_data_internal *);
void rpcrdma_ep_destroy(struct rpcrdma_ep *, struct rpcrdma_ia *);
int rpcrdma_ep_connect(struct rpcrdma_ep *, struct rpcrdma_ia *);
-int rpcrdma_ep_disconnect(struct rpcrdma_ep *, struct rpcrdma_ia *);
+void rpcrdma_ep_disconnect(struct rpcrdma_ep *, struct rpcrdma_ia *);

int rpcrdma_ep_post(struct rpcrdma_ia *, struct rpcrdma_ep *,
struct rpcrdma_req *);


2014-06-23 22:40:17

by Chuck Lever

Subject: [PATCH v1 09/13] xprtrdma: Refactor rpcrdma_buffer_put()

Split out the code that manages the rb_mws list.

A little extra error checking is introduced in the code path that
grabs MWs for the next RPC request. If rb_mws were ever to become
empty, the list_entry() would cause a NULL pointer dereference.

Instead, now rpcrdma_buffer_get() returns NULL, which causes
call_allocate() to delay and try again.
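A caller-side sketch of how the NULL return is absorbed (shape only; the actual caller is not quoted in this patch): the transport's buf_alloc method propagates the NULL, and the RPC core treats it as a transient allocation failure:

        /* Illustrative ->buf_alloc fragment, not from the diff: */
        req = rpcrdma_buffer_get(&rpcx_to_rdmax(xprt)->rx_buf);
        if (req == NULL)
                return NULL;    /* call_allocate() delays and retries */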

Signed-off-by: Chuck Lever <[email protected]>
---
net/sunrpc/xprtrdma/verbs.c | 105 +++++++++++++++++++++++++++------------
net/sunrpc/xprtrdma/xprt_rdma.h | 1
2 files changed, 74 insertions(+), 32 deletions(-)

diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index 3efc007..f24f0bf 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -1251,6 +1251,69 @@ rpcrdma_buffer_destroy(struct rpcrdma_buffer *buf)
kfree(buf->rb_pool);
}

+static void
+rpcrdma_put_mw_locked(struct rpcrdma_mw *mw)
+{
+ list_add_tail(&mw->mw_list, &mw->mw_pool->rb_mws);
+}
+
+static void
+rpcrdma_buffer_put_mw(struct rpcrdma_mw **mw)
+{
+ rpcrdma_put_mw_locked(*mw);
+ *mw = NULL;
+}
+
+/* Cycle mw's back in reverse order, and "spin" them.
+ * This delays and scrambles reuse as much as possible.
+ */
+static void
+rpcrdma_buffer_put_mws(struct rpcrdma_req *req)
+{
+ struct rpcrdma_mr_seg *seg1 = req->rl_segments;
+ struct rpcrdma_mr_seg *seg = seg1;
+ int i;
+
+ for (i = 1, seg++; i < RPCRDMA_MAX_SEGS; seg++, i++)
+ rpcrdma_buffer_put_mw(&seg->mr_chunk.rl_mw);
+ rpcrdma_buffer_put_mw(&seg1->mr_chunk.rl_mw);
+}
+
+static void
+rpcrdma_send_buffer_put(struct rpcrdma_req *req, struct rpcrdma_buffer *buffers)
+{
+ buffers->rb_send_bufs[--buffers->rb_send_index] = req;
+ req->rl_niovs = 0;
+ if (req->rl_reply) {
+ buffers->rb_recv_bufs[--buffers->rb_recv_index] = req->rl_reply;
+ req->rl_reply->rr_func = NULL;
+ req->rl_reply = NULL;
+ }
+}
+
+static struct rpcrdma_req *
+rpcrdma_buffer_get_mws(struct rpcrdma_req *req, struct rpcrdma_buffer *buffers)
+{
+ struct rpcrdma_mw *r;
+ int i;
+
+ for (i = RPCRDMA_MAX_SEGS - 1; i >= 0; i--) {
+ if (list_empty(&buffers->rb_mws))
+ goto out_empty;
+
+ r = list_entry(buffers->rb_mws.next,
+ struct rpcrdma_mw, mw_list);
+ list_del(&r->mw_list);
+ r->mw_pool = buffers;
+ req->rl_segments[i].mr_chunk.rl_mw = r;
+ }
+ return req;
+out_empty:
+ rpcrdma_send_buffer_put(req, buffers);
+ rpcrdma_buffer_put_mws(req);
+ return NULL;
+}
+
/*
* Get a set of request/reply buffers.
*
@@ -1263,10 +1326,9 @@ rpcrdma_buffer_destroy(struct rpcrdma_buffer *buf)
struct rpcrdma_req *
rpcrdma_buffer_get(struct rpcrdma_buffer *buffers)
{
+ struct rpcrdma_ia *ia = rdmab_to_ia(buffers);
struct rpcrdma_req *req;
unsigned long flags;
- int i;
- struct rpcrdma_mw *r;

spin_lock_irqsave(&buffers->rb_lock, flags);
if (buffers->rb_send_index == buffers->rb_max_requests) {
@@ -1286,14 +1348,13 @@ rpcrdma_buffer_get(struct rpcrdma_buffer *buffers)
buffers->rb_recv_bufs[buffers->rb_recv_index++] = NULL;
}
buffers->rb_send_bufs[buffers->rb_send_index++] = NULL;
- if (!list_empty(&buffers->rb_mws)) {
- i = RPCRDMA_MAX_SEGS - 1;
- do {
- r = list_entry(buffers->rb_mws.next,
- struct rpcrdma_mw, mw_list);
- list_del(&r->mw_list);
- req->rl_segments[i].mr_chunk.rl_mw = r;
- } while (--i >= 0);
+ switch (ia->ri_memreg_strategy) {
+ case RPCRDMA_FRMR:
+ case RPCRDMA_MTHCAFMR:
+ req = rpcrdma_buffer_get_mws(req, buffers);
+ break;
+ default:
+ break;
}
spin_unlock_irqrestore(&buffers->rb_lock, flags);
return req;
@@ -1308,34 +1369,14 @@ rpcrdma_buffer_put(struct rpcrdma_req *req)
{
struct rpcrdma_buffer *buffers = req->rl_buffer;
struct rpcrdma_ia *ia = rdmab_to_ia(buffers);
- int i;
unsigned long flags;

spin_lock_irqsave(&buffers->rb_lock, flags);
- buffers->rb_send_bufs[--buffers->rb_send_index] = req;
- req->rl_niovs = 0;
- if (req->rl_reply) {
- buffers->rb_recv_bufs[--buffers->rb_recv_index] = req->rl_reply;
- req->rl_reply->rr_func = NULL;
- req->rl_reply = NULL;
- }
+ rpcrdma_send_buffer_put(req, buffers);
switch (ia->ri_memreg_strategy) {
case RPCRDMA_FRMR:
case RPCRDMA_MTHCAFMR:
- /*
- * Cycle mw's back in reverse order, and "spin" them.
- * This delays and scrambles reuse as much as possible.
- */
- i = 1;
- do {
- struct rpcrdma_mw **mw;
- mw = &req->rl_segments[i].mr_chunk.rl_mw;
- list_add_tail(&(*mw)->mw_list, &buffers->rb_mws);
- *mw = NULL;
- } while (++i < RPCRDMA_MAX_SEGS);
- list_add_tail(&req->rl_segments[0].mr_chunk.rl_mw->mw_list,
- &buffers->rb_mws);
- req->rl_segments[0].mr_chunk.rl_mw = NULL;
+ rpcrdma_buffer_put_mws(req);
break;
default:
break;
diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
index 6b5d243..b81e5b5 100644
--- a/net/sunrpc/xprtrdma/xprt_rdma.h
+++ b/net/sunrpc/xprtrdma/xprt_rdma.h
@@ -175,6 +175,7 @@ struct rpcrdma_mw {
struct rpcrdma_frmr frmr;
} r;
struct list_head mw_list;
+ struct rpcrdma_buffer *mw_pool;
};

#define RPCRDMA_BIT_FASTREG (0)


2014-06-23 22:39:52

by Chuck Lever

Subject: [PATCH v1 06/13] xprtrdma: Unclutter struct rpcrdma_mr_seg

Clean ups:
- make it obvious that the rl_mw field is a pointer -- allocated
separately, not as part of struct rpcrdma_mr_seg
- promote "struct {} frmr;" to a named type
- promote the state enum to a named type
- name the MW state field the same way other fields in
rpcrdma_mw are named

Signed-off-by: Chuck Lever <[email protected]>
---
net/sunrpc/xprtrdma/verbs.c | 6 +++--
net/sunrpc/xprtrdma/xprt_rdma.h | 44 +++++++++++++++++++++++++++++----------
2 files changed, 36 insertions(+), 14 deletions(-)

diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index 451e100..e8ed81c 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -156,9 +156,9 @@ rpcrdma_sendcq_process_wc(struct ib_wc *wc)
return;

if (wc->opcode == IB_WC_FAST_REG_MR)
- frmr->r.frmr.state = FRMR_IS_VALID;
+ frmr->r.frmr.fr_state = FRMR_IS_VALID;
else if (wc->opcode == IB_WC_LOCAL_INV)
- frmr->r.frmr.state = FRMR_IS_INVALID;
+ frmr->r.frmr.fr_state = FRMR_IS_INVALID;
}

static int
@@ -1518,7 +1518,7 @@ rpcrdma_register_frmr_external(struct rpcrdma_mr_seg *seg,
dprintk("RPC: %s: Using frmr %p to map %d segments\n",
__func__, seg1->mr_chunk.rl_mw, i);

- if (unlikely(seg1->mr_chunk.rl_mw->r.frmr.state == FRMR_IS_VALID)) {
+ if (unlikely(seg1->mr_chunk.rl_mw->r.frmr.fr_state == FRMR_IS_VALID)) {
dprintk("RPC: %s: frmr %x left valid, posting invalidate.\n",
__func__,
seg1->mr_chunk.rl_mw->r.frmr.fr_mr->rkey);
diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
index c270e59..28c8570 100644
--- a/net/sunrpc/xprtrdma/xprt_rdma.h
+++ b/net/sunrpc/xprtrdma/xprt_rdma.h
@@ -146,6 +146,38 @@ struct rpcrdma_rep {
};

/*
+ * struct rpcrdma_mw - external memory region metadata
+ *
+ * An external memory region is any buffer or page that is registered
+ * on the fly (ie, not pre-registered).
+ *
+ * Each rpcrdma_buffer has a list of these anchored in rb_mws. During
+ * call_allocate, rpcrdma_buffer_get() assigns one to each segment in
+ * an rpcrdma_req. Then rpcrdma_register_external() grabs these to keep
+ * track of registration metadata while each RPC is pending.
+ * rpcrdma_deregister_external() uses this metadata to unmap and
+ * release these resources when an RPC is complete.
+ */
+enum rpcrdma_frmr_state {
+ FRMR_IS_INVALID,
+ FRMR_IS_VALID,
+};
+
+struct rpcrdma_frmr {
+ struct ib_fast_reg_page_list *fr_pgl;
+ struct ib_mr *fr_mr;
+ enum rpcrdma_frmr_state fr_state;
+};
+
+struct rpcrdma_mw {
+ union {
+ struct ib_fmr *fmr;
+ struct rpcrdma_frmr frmr;
+ } r;
+ struct list_head mw_list;
+};
+
+/*
* struct rpcrdma_req -- structure central to the request/reply sequence.
*
* N of these are associated with a transport instance, and stored in
@@ -172,17 +204,7 @@ struct rpcrdma_rep {
struct rpcrdma_mr_seg { /* chunk descriptors */
union { /* chunk memory handles */
struct ib_mr *rl_mr; /* if registered directly */
- struct rpcrdma_mw { /* if registered from region */
- union {
- struct ib_fmr *fmr;
- struct {
- struct ib_fast_reg_page_list *fr_pgl;
- struct ib_mr *fr_mr;
- enum { FRMR_IS_INVALID, FRMR_IS_VALID } state;
- } frmr;
- } r;
- struct list_head mw_list;
- } *rl_mw;
+ struct rpcrdma_mw *rl_mw; /* if registered from region */
} mr_chunk;
u64 mr_base; /* registration result */
u32 mr_rkey; /* registration result */


2014-06-24 16:26:37

by Chuck Lever

Subject: Re: [PATCH v1 08/13] xprtrdma: Back off rkey when FAST_REG_MR fails


On Jun 24, 2014, at 11:47 AM, Anna Schumaker <[email protected]> wrote:

> On 06/23/2014 06:40 PM, Chuck Lever wrote:
>> If posting a FAST_REG_MR Work Request fails, or the FAST_REG WR
>> flushes, revert the rkey update to avoid subsequent
>> IB_WC_MW_BIND_ERR completions.
>>
>> Suggested-by: Steve Wise <[email protected]>
>> Signed-off-by: Chuck Lever <[email protected]>
>> ---
>> net/sunrpc/xprtrdma/verbs.c | 39 +++++++++++++++++++++++++++++----------
>> 1 file changed, 29 insertions(+), 10 deletions(-)
>>
>> diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
>> index cef67fd..3efc007 100644
>> --- a/net/sunrpc/xprtrdma/verbs.c
>> +++ b/net/sunrpc/xprtrdma/verbs.c
>> @@ -61,6 +61,8 @@
>> # define RPCDBG_FACILITY RPCDBG_TRANS
>> #endif
>>
>> +static void rpcrdma_decrement_frmr_rkey(struct rpcrdma_mw *);
>> +
>> /*
>> * internal functions
>> */
>> @@ -154,13 +156,17 @@ rpcrdma_sendcq_process_wc(struct ib_wc *wc)
>>
>> if (wrid == 0)
>> return;
>> - if (wc->status != IB_WC_SUCCESS)
>> - return;
>>
>> fastreg = test_and_clear_bit(RPCRDMA_BIT_FASTREG, &wrid);
>> mw = (struct rpcrdma_mw *)wrid;
>>
>> - mw->r.frmr.fr_state = fastreg ? FRMR_IS_VALID : FRMR_IS_INVALID;
>> + if (wc->status == IB_WC_SUCCESS) {
>> + mw->r.frmr.fr_state = fastreg ?
>> + FRMR_IS_VALID : FRMR_IS_INVALID;
>> + } else {
>> + if (fastreg)
>> + rpcrdma_decrement_frmr_rkey(mw);
>
> Isn't this the same as "else if (fastreg)"?

Yep, those are logically equivalent. I left them separate, and
left the extra braces, so it would be cleaner to add more logic in
both arms of "if (wc->status == IB_WC_SUCCESS)" in subsequent
patches.

Using "switch (wc->status)" might be more future-proof.
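For reference, that switch form might look like this (a sketch, not posted code):

        switch (wc->status) {
        case IB_WC_SUCCESS:
                mw->r.frmr.fr_state = fastreg ?
                                      FRMR_IS_VALID : FRMR_IS_INVALID;
                break;
        default:
                /* flushed or failed: undo the speculative rkey bump */
                if (fastreg)
                        rpcrdma_decrement_frmr_rkey(mw);
                break;
        }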


> Anna
>
>> + }
>> }
>>
>> static int
>> @@ -1480,6 +1486,24 @@ rpcrdma_unmap_one(struct rpcrdma_ia *ia, struct rpcrdma_mr_seg *seg)
>> seg->mr_dma, seg->mr_dmalen, seg->mr_dir);
>> }
>>
>> +static void
>> +rpcrdma_increment_frmr_rkey(struct rpcrdma_mw *mw)
>> +{
>> + struct ib_mr *frmr = mw->r.frmr.fr_mr;
>> + u8 key = frmr->rkey & 0x000000FF;
>> +
>> + ib_update_fast_reg_key(frmr, ++key);
>> +}
>> +
>> +static void
>> +rpcrdma_decrement_frmr_rkey(struct rpcrdma_mw *mw)
>> +{
>> + struct ib_mr *frmr = mw->r.frmr.fr_mr;
>> + u8 key = frmr->rkey & 0x000000FF;
>> +
>> + ib_update_fast_reg_key(frmr, --key);
>> +}
>> +
>> static int
>> rpcrdma_register_frmr_external(struct rpcrdma_mr_seg *seg,
>> int *nsegs, int writing, struct rpcrdma_ia *ia,
>> @@ -1487,8 +1511,6 @@ rpcrdma_register_frmr_external(struct rpcrdma_mr_seg *seg,
>> {
>> struct rpcrdma_mr_seg *seg1 = seg;
>> struct ib_send_wr invalidate_wr, frmr_wr, *bad_wr, *post_wr;
>> -
>> - u8 key;
>> int len, pageoff;
>> int i, rc;
>> int seg_len;
>> @@ -1552,14 +1574,10 @@ rpcrdma_register_frmr_external(struct rpcrdma_mr_seg *seg,
>> rc = -EIO;
>> goto out_err;
>> }
>> -
>> - /* Bump the key */
>> - key = (u8)(seg1->mr_chunk.rl_mw->r.frmr.fr_mr->rkey & 0x000000FF);
>> - ib_update_fast_reg_key(seg1->mr_chunk.rl_mw->r.frmr.fr_mr, ++key);
>> -
>> frmr_wr.wr.fast_reg.access_flags = (writing ?
>> IB_ACCESS_REMOTE_WRITE | IB_ACCESS_LOCAL_WRITE :
>> IB_ACCESS_REMOTE_READ);
>> + rpcrdma_increment_frmr_rkey(seg1->mr_chunk.rl_mw);
>> frmr_wr.wr.fast_reg.rkey = seg1->mr_chunk.rl_mw->r.frmr.fr_mr->rkey;
>> DECR_CQCOUNT(&r_xprt->rx_ep);
>>
>> @@ -1568,6 +1586,7 @@ rpcrdma_register_frmr_external(struct rpcrdma_mr_seg *seg,
>> if (rc) {
>> dprintk("RPC: %s: failed ib_post_send for register,"
>> " status %i\n", __func__, rc);
>> + rpcrdma_decrement_frmr_rkey(seg1->mr_chunk.rl_mw);
>> goto out_err;
>> } else {
>> seg1->mr_rkey = seg1->mr_chunk.rl_mw->r.frmr.fr_mr->rkey;
>>
>

--
Chuck Lever
chuck[dot]lever[at]oracle[dot]com




2014-06-25 22:47:27

by Steve Wise

Subject: RE: [PATCH v1 00/13] NFS/RDMA patches for 3.17

Hey Chuck,

I did some testing on this series. Just two nodes, both NFSv3 and NFSv4 over cxgb4 and mlx4:

cthon b/g/s 10 iterations
iozone -a with direct IO and data validation
fio write and rand-rw testing with large IO/files and 8 threads.
xfs test suite.

No regressions seen.

Tested-by: Steve Wise <[email protected]>


> -----Original Message-----
> From: [email protected] [mailto:[email protected]] On Behalf
> Of Chuck Lever
> Sent: Monday, June 23, 2014 5:39 PM
> To: [email protected]; [email protected]
> Subject: [PATCH v1 00/13] NFS/RDMA patches for 3.17
>
> The main purpose of this series is to address more connection drop
> recovery issues by fixing FRMR re-use to make it less likely the
> client will drop the connection due to a memory operation error.
>
> Some other clean-ups and fixes are present as well.
>
> See topic branch nfs-rdma-for-3.17 in
>
> git://git.linux-nfs.org/projects/cel/cel-2.6.git
>
> I tested with NFSv3 and NFSv4 on all three supported memory
> registration modes. Used cthon04 and iozone with both Solaris
> and Linux NFS/RDMA servers. Used xfstests with Linux.
>
> ---
>
> Chuck Lever (13):
> xprtrdma: Fix panic in rpcrdma_register_frmr_external()
> xprtrdma: Protect ->qp during FRMR deregistration
> xprtrdma: Limit data payload size for ALLPHYSICAL
> xprtrdma: Update rkeys after transport reconnect
> xprtrdma: Don't drain CQs on transport disconnect
> xprtrdma: Unclutter struct rpcrdma_mr_seg
> xprtrdma: Encode Work Request opcode in wc->wr_id
> xprtrdma: Back off rkey when FAST_REG_MR fails
> xprtrdma: Refactor rpcrdma_buffer_put()
> xprtrdma: Release FRMR segment buffers during LOCAL_INV completion
> xprtrdma: Clean up rpcrdma_ep_disconnect()
> xprtrdma: Remove RPCRDMA_PERSISTENT_REGISTRATION macro
> xprtrdma: Handle additional connection events
>
>
> include/linux/sunrpc/xprtrdma.h | 2
> net/sunrpc/xprtrdma/rpc_rdma.c | 77 +++++----
> net/sunrpc/xprtrdma/transport.c | 17 +-
> net/sunrpc/xprtrdma/verbs.c | 330 +++++++++++++++++++++++++++------------
> net/sunrpc/xprtrdma/xprt_rdma.h | 63 ++++++-
> 5 files changed, 332 insertions(+), 157 deletions(-)
>
> --
> Chuck Lever


2014-06-25 05:17:43

by Shirley Ma

Subject: Re: [PATCH v1 10/13] xprtrdma: Release FRMR segment buffers during LOCAL_INV completion

Would it be possible to delay rpcrdma_buffer_put() until the LOCAL_INV request's send completion? That is, remove rpcrdma_buffer_put() from xprt_rdma_free() and add a callback after the LOCAL_INV completes?

Shirley

On 06/23/2014 03:40 PM, Chuck Lever wrote:
> FRMR uses a LOCAL_INV Work Request, which is asynchronous, to
> deregister segment buffers. Other registration strategies use
> synchronous deregistration mechanisms (like ib_unmap_fmr()).
>
> For a synchronous deregistration mechanism, it makes sense for
> xprt_rdma_free() to put segment buffers back into the buffer pool
> immediately once rpcrdma_deregister_external() returns.
>
> This is currently also what FRMR is doing. It is releasing segment
> buffers just after the LOCAL_INV WR is posted.
>
> But segment buffers need to be put back after the LOCAL_INV WR
> _completes_ (or flushes). Otherwise, rpcrdma_buffer_get() can then
> assign these segment buffers to another RPC task while they are
> still "in use" by the hardware.
>
> The result of re-using an FRMR too quickly is that its rkey
> no longer matches the rkey that was registered with the provider.
> This results in FAST_REG_MR or LOCAL_INV Work Requests completing
> with IB_WC_MW_BIND_ERR, and the FRMR, and thus the transport,
> becomes unusable.
>
> Signed-off-by: Chuck Lever <[email protected]>
> ---
> net/sunrpc/xprtrdma/verbs.c | 44 +++++++++++++++++++++++++++++++++++----
> net/sunrpc/xprtrdma/xprt_rdma.h | 2 ++
> 2 files changed, 42 insertions(+), 4 deletions(-)
>
> diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
> index f24f0bf..52f57f7 100644
> --- a/net/sunrpc/xprtrdma/verbs.c
> +++ b/net/sunrpc/xprtrdma/verbs.c
> @@ -62,6 +62,8 @@
> #endif
>
> static void rpcrdma_decrement_frmr_rkey(struct rpcrdma_mw *);
> +static void rpcrdma_get_mw(struct rpcrdma_mw *);
> +static void rpcrdma_put_mw(struct rpcrdma_mw *);
>
> /*
> * internal functions
> @@ -167,6 +169,7 @@ rpcrdma_sendcq_process_wc(struct ib_wc *wc)
> if (fastreg)
> rpcrdma_decrement_frmr_rkey(mw);
> }
> + rpcrdma_put_mw(mw);
> }
>
> static int
> @@ -1034,7 +1037,7 @@ rpcrdma_buffer_create(struct rpcrdma_buffer *buf, struct rpcrdma_ep *ep,
> len += cdata->padding;
> switch (ia->ri_memreg_strategy) {
> case RPCRDMA_FRMR:
> - len += buf->rb_max_requests * RPCRDMA_MAX_SEGS *
> + len += (buf->rb_max_requests + 1) * RPCRDMA_MAX_SEGS *
> sizeof(struct rpcrdma_mw);
> break;
> case RPCRDMA_MTHCAFMR:
> @@ -1076,7 +1079,7 @@ rpcrdma_buffer_create(struct rpcrdma_buffer *buf, struct rpcrdma_ep *ep,
> r = (struct rpcrdma_mw *)p;
> switch (ia->ri_memreg_strategy) {
> case RPCRDMA_FRMR:
> - for (i = buf->rb_max_requests * RPCRDMA_MAX_SEGS; i; i--) {
> + for (i = (buf->rb_max_requests+1) * RPCRDMA_MAX_SEGS; i; i--) {
> r->r.frmr.fr_mr = ib_alloc_fast_reg_mr(ia->ri_pd,
> ia->ri_max_frmr_depth);
> if (IS_ERR(r->r.frmr.fr_mr)) {
> @@ -1252,12 +1255,36 @@ rpcrdma_buffer_destroy(struct rpcrdma_buffer *buf)
> }
>
> static void
> -rpcrdma_put_mw_locked(struct rpcrdma_mw *mw)
> +rpcrdma_free_mw(struct kref *kref)
> {
> + struct rpcrdma_mw *mw = container_of(kref, struct rpcrdma_mw, mw_ref);
> list_add_tail(&mw->mw_list, &mw->mw_pool->rb_mws);
> }
>
> static void
> +rpcrdma_put_mw_locked(struct rpcrdma_mw *mw)
> +{
> + kref_put(&mw->mw_ref, rpcrdma_free_mw);
> +}
> +
> +static void
> +rpcrdma_get_mw(struct rpcrdma_mw *mw)
> +{
> + kref_get(&mw->mw_ref);
> +}
> +
> +static void
> +rpcrdma_put_mw(struct rpcrdma_mw *mw)
> +{
> + struct rpcrdma_buffer *buffers = mw->mw_pool;
> + unsigned long flags;
> +
> + spin_lock_irqsave(&buffers->rb_lock, flags);
> + rpcrdma_put_mw_locked(mw);
> + spin_unlock_irqrestore(&buffers->rb_lock, flags);
> +}
> +
> +static void
> rpcrdma_buffer_put_mw(struct rpcrdma_mw **mw)
> {
> rpcrdma_put_mw_locked(*mw);
> @@ -1304,6 +1331,7 @@ rpcrdma_buffer_get_mws(struct rpcrdma_req *req, struct rpcrdma_buffer *buffers)
> r = list_entry(buffers->rb_mws.next,
> struct rpcrdma_mw, mw_list);
> list_del(&r->mw_list);
> + kref_init(&r->mw_ref);
> r->mw_pool = buffers;
> req->rl_segments[i].mr_chunk.rl_mw = r;
> }
> @@ -1583,6 +1611,7 @@ rpcrdma_register_frmr_external(struct rpcrdma_mr_seg *seg,
> dprintk("RPC: %s: Using frmr %p to map %d segments\n",
> __func__, seg1->mr_chunk.rl_mw, i);
>
> + rpcrdma_get_mw(seg1->mr_chunk.rl_mw);
> if (unlikely(seg1->mr_chunk.rl_mw->r.frmr.fr_state == FRMR_IS_VALID)) {
> dprintk("RPC: %s: frmr %x left valid, posting invalidate.\n",
> __func__,
> @@ -1595,6 +1624,7 @@ rpcrdma_register_frmr_external(struct rpcrdma_mr_seg *seg,
> invalidate_wr.send_flags = IB_SEND_SIGNALED;
> invalidate_wr.ex.invalidate_rkey =
> seg1->mr_chunk.rl_mw->r.frmr.fr_mr->rkey;
> + rpcrdma_get_mw(seg1->mr_chunk.rl_mw);
> DECR_CQCOUNT(&r_xprt->rx_ep);
> post_wr = &invalidate_wr;
> } else
> @@ -1638,6 +1668,9 @@ rpcrdma_register_frmr_external(struct rpcrdma_mr_seg *seg,
> *nsegs = i;
> return 0;
> out_err:
> + rpcrdma_put_mw(seg1->mr_chunk.rl_mw);
> + if (post_wr == &invalidate_wr)
> + rpcrdma_put_mw(seg1->mr_chunk.rl_mw);
> while (i--)
> rpcrdma_unmap_one(ia, --seg);
> return rc;
> @@ -1653,6 +1686,7 @@ rpcrdma_deregister_frmr_external(struct rpcrdma_mr_seg *seg,
>
> while (seg1->mr_nsegs--)
> rpcrdma_unmap_one(ia, seg++);
> + rpcrdma_get_mw(seg1->mr_chunk.rl_mw);
>
> memset(&invalidate_wr, 0, sizeof invalidate_wr);
> invalidate_wr.wr_id = (unsigned long)(void *)seg1->mr_chunk.rl_mw;
> @@ -1664,9 +1698,11 @@ rpcrdma_deregister_frmr_external(struct rpcrdma_mr_seg *seg,
> read_lock(&ia->ri_qplock);
> rc = ib_post_send(ia->ri_id->qp, &invalidate_wr, &bad_wr);
> read_unlock(&ia->ri_qplock);
> - if (rc)
> + if (rc) {
> + rpcrdma_put_mw(seg1->mr_chunk.rl_mw);
> dprintk("RPC: %s: failed ib_post_send for invalidate,"
> " status %i\n", __func__, rc);
> + }
> return rc;
> }
>
> diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
> index b81e5b5..7a140fe 100644
> --- a/net/sunrpc/xprtrdma/xprt_rdma.h
> +++ b/net/sunrpc/xprtrdma/xprt_rdma.h
> @@ -44,6 +44,7 @@
> #include <linux/spinlock.h> /* spinlock_t, etc */
> #include <linux/atomic.h> /* atomic_t, etc */
> #include <linux/workqueue.h> /* struct work_struct */
> +#include <linux/kref.h>
>
> #include <rdma/rdma_cm.h> /* RDMA connection api */
> #include <rdma/ib_verbs.h> /* RDMA verbs api */
> @@ -176,6 +177,7 @@ struct rpcrdma_mw {
> } r;
> struct list_head mw_list;
> struct rpcrdma_buffer *mw_pool;
> + struct kref mw_ref;
> };
>
> #define RPCRDMA_BIT_FASTREG (0)
>

2014-06-23 22:39:19

by Chuck Lever

Subject: [PATCH v1 02/13] xprtrdma: Protect ->qp during FRMR deregistration

Ensure the QP remains valid while posting LOCAL_INV during a
transport reconnect. Otherwise, ia->ri_id->qp is NULL, which
triggers a panic.
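In short, the discipline the diff below introduces: posts that dereference ia->ri_id->qp take ri_qplock for read, and the reconnect path swaps in the new cm_id under the write lock before tearing down the old one:

        /* Reader (I/O path): */
        read_lock(&ia->ri_qplock);
        rc = ib_post_send(ia->ri_id->qp, &invalidate_wr, &bad_wr);
        read_unlock(&ia->ri_qplock);

        /* Writer (reconnect path): */
        write_lock(&ia->ri_qplock);
        old = ia->ri_id;
        ia->ri_id = id;
        write_unlock(&ia->ri_qplock);
        rdma_destroy_qp(old);
        rdma_destroy_id(old);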

BugLink: https://bugzilla.linux-nfs.org/show_bug.cgi?id=259
Fixes: ec62f40d3505a643497d105c297093bb90afd44e
Signed-off-by: Chuck Lever <[email protected]>
---
net/sunrpc/xprtrdma/verbs.c | 14 +++++++++++---
net/sunrpc/xprtrdma/xprt_rdma.h | 1 +
2 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index 78bd7c6..f70b8ad 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -613,6 +613,7 @@ rpcrdma_ia_open(struct rpcrdma_xprt *xprt, struct sockaddr *addr, int memreg)
/* Else will do memory reg/dereg for each chunk */
ia->ri_memreg_strategy = memreg;

+ rwlock_init(&ia->ri_qplock);
return 0;
out2:
rdma_destroy_id(ia->ri_id);
@@ -859,7 +860,7 @@ rpcrdma_ep_destroy(struct rpcrdma_ep *ep, struct rpcrdma_ia *ia)
int
rpcrdma_ep_connect(struct rpcrdma_ep *ep, struct rpcrdma_ia *ia)
{
- struct rdma_cm_id *id;
+ struct rdma_cm_id *id, *old;
int rc = 0;
int retry_count = 0;

@@ -905,9 +906,14 @@ retry:
rc = -ENETUNREACH;
goto out;
}
- rdma_destroy_qp(ia->ri_id);
- rdma_destroy_id(ia->ri_id);
+
+ write_lock(&ia->ri_qplock);
+ old = ia->ri_id;
ia->ri_id = id;
+ write_unlock(&ia->ri_qplock);
+
+ rdma_destroy_qp(old);
+ rdma_destroy_id(old);
} else {
dprintk("RPC: %s: connecting...\n", __func__);
rc = rdma_create_qp(ia->ri_id, ia->ri_pd, &ep->rep_attr);
@@ -1597,7 +1603,9 @@ rpcrdma_deregister_frmr_external(struct rpcrdma_mr_seg *seg,
invalidate_wr.ex.invalidate_rkey = seg1->mr_chunk.rl_mw->r.frmr.fr_mr->rkey;
DECR_CQCOUNT(&r_xprt->rx_ep);

+ read_lock(&ia->ri_qplock);
rc = ib_post_send(ia->ri_id->qp, &invalidate_wr, &bad_wr);
+ read_unlock(&ia->ri_qplock);
if (rc)
dprintk("RPC: %s: failed ib_post_send for invalidate,"
" status %i\n", __func__, rc);
diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
index 89e7cd4..97ca516 100644
--- a/net/sunrpc/xprtrdma/xprt_rdma.h
+++ b/net/sunrpc/xprtrdma/xprt_rdma.h
@@ -59,6 +59,7 @@
* Interface Adapter -- one per transport instance
*/
struct rpcrdma_ia {
+ rwlock_t ri_qplock;
struct rdma_cm_id *ri_id;
struct ib_pd *ri_pd;
struct ib_mr *ri_bind_mem;


2014-06-23 22:40:50

by Chuck Lever

Subject: [PATCH v1 13/13] xprtrdma: Handle additional connection events

Commit 38ca83a5 added RDMA_CM_EVENT_TIMEWAIT_EXIT. But that status
is relevant only for consumers that re-use their QPs on new
connections. xprtrdma creates a fresh QP on reconnection, so that
event should be explicitly ignored.

Squelch the alarming "unexpected CM event" message.

Signed-off-by: Chuck Lever <[email protected]>
---
net/sunrpc/xprtrdma/verbs.c | 27 +++++++++++++++++----------
1 file changed, 17 insertions(+), 10 deletions(-)

diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index ec98e48..dbd5f22 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -334,8 +334,16 @@ static const char * const conn[] = {
"rejected",
"established",
"disconnected",
- "device removal"
+ "device removal",
+ "multicast join",
+ "multicast error",
+ "address change",
+ "timewait exit",
};
+
+#define CONNECTION_MSG(status) \
+ ((status) < ARRAY_SIZE(conn) ? \
+ conn[(status)] : "unrecognized connection error")
#endif

static int
@@ -393,13 +401,10 @@ rpcrdma_conn_upcall(struct rdma_cm_id *id, struct rdma_cm_event *event)
case RDMA_CM_EVENT_DEVICE_REMOVAL:
connstate = -ENODEV;
connected:
- dprintk("RPC: %s: %s: %pI4:%u (ep 0x%p event 0x%x)\n",
- __func__,
- (event->event <= 11) ? conn[event->event] :
- "unknown connection error",
- &addr->sin_addr.s_addr,
- ntohs(addr->sin_port),
- ep, event->event);
+ dprintk("RPC: %s: %pI4:%u (ep 0x%p): %s\n",
+ __func__, &addr->sin_addr.s_addr,
+ ntohs(addr->sin_port), ep,
+ CONNECTION_MSG(event->event));
atomic_set(&rpcx_to_rdmax(ep->rep_xprt)->rx_buf.rb_credits, 1);
dprintk("RPC: %s: %sconnected\n",
__func__, connstate > 0 ? "" : "dis");
@@ -408,8 +413,10 @@ connected:
wake_up_all(&ep->rep_connect_wait);
break;
default:
- dprintk("RPC: %s: unexpected CM event %d\n",
- __func__, event->event);
+ dprintk("RPC: %s: %pI4:%u (ep 0x%p): %s\n",
+ __func__, &addr->sin_addr.s_addr,
+ ntohs(addr->sin_port), ep,
+ CONNECTION_MSG(event->event));
break;
}



2014-06-24 15:47:53

by Anna Schumaker

Subject: Re: [PATCH v1 08/13] xprtrdma: Back off rkey when FAST_REG_MR fails

On 06/23/2014 06:40 PM, Chuck Lever wrote:
> If posting a FAST_REG_MR Work Request fails, or the FAST_REG WR
> flushes, revert the rkey update to avoid subsequent
> IB_WC_MW_BIND_ERR completions.
>
> Suggested-by: Steve Wise <[email protected]>
> Signed-off-by: Chuck Lever <[email protected]>
> ---
> net/sunrpc/xprtrdma/verbs.c | 39 +++++++++++++++++++++++++++++----------
> 1 file changed, 29 insertions(+), 10 deletions(-)
>
> diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
> index cef67fd..3efc007 100644
> --- a/net/sunrpc/xprtrdma/verbs.c
> +++ b/net/sunrpc/xprtrdma/verbs.c
> @@ -61,6 +61,8 @@
> # define RPCDBG_FACILITY RPCDBG_TRANS
> #endif
>
> +static void rpcrdma_decrement_frmr_rkey(struct rpcrdma_mw *);
> +
> /*
> * internal functions
> */
> @@ -154,13 +156,17 @@ rpcrdma_sendcq_process_wc(struct ib_wc *wc)
>
> if (wrid == 0)
> return;
> - if (wc->status != IB_WC_SUCCESS)
> - return;
>
> fastreg = test_and_clear_bit(RPCRDMA_BIT_FASTREG, &wrid);
> mw = (struct rpcrdma_mw *)wrid;
>
> - mw->r.frmr.fr_state = fastreg ? FRMR_IS_VALID : FRMR_IS_INVALID;
> + if (wc->status == IB_WC_SUCCESS) {
> + mw->r.frmr.fr_state = fastreg ?
> + FRMR_IS_VALID : FRMR_IS_INVALID;
> + } else {
> + if (fastreg)
> + rpcrdma_decrement_frmr_rkey(mw);

Isn't this the same as "else if (fastreg)"?

Anna

> + }
> }
>
> static int
> @@ -1480,6 +1486,24 @@ rpcrdma_unmap_one(struct rpcrdma_ia *ia, struct rpcrdma_mr_seg *seg)
> seg->mr_dma, seg->mr_dmalen, seg->mr_dir);
> }
>
> +static void
> +rpcrdma_increment_frmr_rkey(struct rpcrdma_mw *mw)
> +{
> + struct ib_mr *frmr = mw->r.frmr.fr_mr;
> + u8 key = frmr->rkey & 0x000000FF;
> +
> + ib_update_fast_reg_key(frmr, ++key);
> +}
> +
> +static void
> +rpcrdma_decrement_frmr_rkey(struct rpcrdma_mw *mw)
> +{
> + struct ib_mr *frmr = mw->r.frmr.fr_mr;
> + u8 key = frmr->rkey & 0x000000FF;
> +
> + ib_update_fast_reg_key(frmr, --key);
> +}
> +
> static int
> rpcrdma_register_frmr_external(struct rpcrdma_mr_seg *seg,
> int *nsegs, int writing, struct rpcrdma_ia *ia,
> @@ -1487,8 +1511,6 @@ rpcrdma_register_frmr_external(struct rpcrdma_mr_seg *seg,
> {
> struct rpcrdma_mr_seg *seg1 = seg;
> struct ib_send_wr invalidate_wr, frmr_wr, *bad_wr, *post_wr;
> -
> - u8 key;
> int len, pageoff;
> int i, rc;
> int seg_len;
> @@ -1552,14 +1574,10 @@ rpcrdma_register_frmr_external(struct rpcrdma_mr_seg *seg,
> rc = -EIO;
> goto out_err;
> }
> -
> - /* Bump the key */
> - key = (u8)(seg1->mr_chunk.rl_mw->r.frmr.fr_mr->rkey & 0x000000FF);
> - ib_update_fast_reg_key(seg1->mr_chunk.rl_mw->r.frmr.fr_mr, ++key);
> -
> frmr_wr.wr.fast_reg.access_flags = (writing ?
> IB_ACCESS_REMOTE_WRITE | IB_ACCESS_LOCAL_WRITE :
> IB_ACCESS_REMOTE_READ);
> + rpcrdma_increment_frmr_rkey(seg1->mr_chunk.rl_mw);
> frmr_wr.wr.fast_reg.rkey = seg1->mr_chunk.rl_mw->r.frmr.fr_mr->rkey;
> DECR_CQCOUNT(&r_xprt->rx_ep);
>
> @@ -1568,6 +1586,7 @@ rpcrdma_register_frmr_external(struct rpcrdma_mr_seg *seg,
> if (rc) {
> dprintk("RPC: %s: failed ib_post_send for register,"
> " status %i\n", __func__, rc);
> + rpcrdma_decrement_frmr_rkey(seg1->mr_chunk.rl_mw);
> goto out_err;
> } else {
> seg1->mr_rkey = seg1->mr_chunk.rl_mw->r.frmr.fr_mr->rkey;
>


2014-06-23 22:39:44

by Chuck Lever

Subject: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport disconnect

CQs are not destroyed until unmount. By draining CQs on transport
disconnect, successful completions that can change the r.frmr.state
field can be missed.

Signed-off-by: Chuck Lever <[email protected]>
---
net/sunrpc/xprtrdma/verbs.c | 5 -----
1 file changed, 5 deletions(-)

diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index 3c7f904..451e100 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -873,9 +873,6 @@ retry:
dprintk("RPC: %s: rpcrdma_ep_disconnect"
" status %i\n", __func__, rc);

- rpcrdma_clean_cq(ep->rep_attr.recv_cq);
- rpcrdma_clean_cq(ep->rep_attr.send_cq);
-
xprt = container_of(ia, struct rpcrdma_xprt, rx_ia);
id = rpcrdma_create_id(xprt, ia,
(struct sockaddr *)&xprt->rx_data.addr);
@@ -985,8 +982,6 @@ rpcrdma_ep_disconnect(struct rpcrdma_ep *ep, struct rpcrdma_ia *ia)
{
int rc;

- rpcrdma_clean_cq(ep->rep_attr.recv_cq);
- rpcrdma_clean_cq(ep->rep_attr.send_cq);
rc = rdma_disconnect(ia->ri_id);
if (!rc) {
/* returns without wait if not connected */


2014-06-23 22:40:41

by Chuck Lever

Subject: [PATCH v1 12/13] xprtrdma: Remove RPCRDMA_PERSISTENT_REGISTRATION macro

Clean up.

RPCRDMA_PERSISTENT_REGISTRATION was a compile-time switch between
RPCRDMA_REGISTER mode and RPCRDMA_ALLPHYSICAL mode. Since
RPCRDMA_REGISTER has been removed, there's no need for the extra
conditional compilation.

Signed-off-by: Chuck Lever <[email protected]>
---
include/linux/sunrpc/xprtrdma.h | 2 --
net/sunrpc/xprtrdma/verbs.c | 13 -------------
2 files changed, 15 deletions(-)

diff --git a/include/linux/sunrpc/xprtrdma.h b/include/linux/sunrpc/xprtrdma.h
index c2f04e1..64a0a0a 100644
--- a/include/linux/sunrpc/xprtrdma.h
+++ b/include/linux/sunrpc/xprtrdma.h
@@ -62,8 +62,6 @@
#define RPCRDMA_INLINE_PAD_THRESH (512)/* payload threshold to pad (bytes) */

/* memory registration strategies */
-#define RPCRDMA_PERSISTENT_REGISTRATION (1)
-
enum rpcrdma_memreg {
RPCRDMA_BOUNCEBUFFERS = 0,
RPCRDMA_REGISTER,
diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index b6c52c7..ec98e48 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -569,12 +569,7 @@ rpcrdma_ia_open(struct rpcrdma_xprt *xprt, struct sockaddr *addr, int memreg)
if (!ia->ri_id->device->alloc_fmr) {
dprintk("RPC: %s: MTHCAFMR registration "
"not supported by HCA\n", __func__);
-#if RPCRDMA_PERSISTENT_REGISTRATION
memreg = RPCRDMA_ALLPHYSICAL;
-#else
- rc = -ENOMEM;
- goto out2;
-#endif
}
}

@@ -589,20 +584,16 @@ rpcrdma_ia_open(struct rpcrdma_xprt *xprt, struct sockaddr *addr, int memreg)
switch (memreg) {
case RPCRDMA_FRMR:
break;
-#if RPCRDMA_PERSISTENT_REGISTRATION
case RPCRDMA_ALLPHYSICAL:
mem_priv = IB_ACCESS_LOCAL_WRITE |
IB_ACCESS_REMOTE_WRITE |
IB_ACCESS_REMOTE_READ;
goto register_setup;
-#endif
case RPCRDMA_MTHCAFMR:
if (ia->ri_have_dma_lkey)
break;
mem_priv = IB_ACCESS_LOCAL_WRITE;
-#if RPCRDMA_PERSISTENT_REGISTRATION
register_setup:
-#endif
ia->ri_bind_mem = ib_get_dma_mr(ia->ri_pd, mem_priv);
if (IS_ERR(ia->ri_bind_mem)) {
printk(KERN_ALERT "%s: ib_get_dma_mr for "
@@ -1770,7 +1761,6 @@ rpcrdma_register_external(struct rpcrdma_mr_seg *seg,

switch (ia->ri_memreg_strategy) {

-#if RPCRDMA_PERSISTENT_REGISTRATION
case RPCRDMA_ALLPHYSICAL:
rpcrdma_map_one(ia, seg, writing);
seg->mr_rkey = ia->ri_bind_mem->rkey;
@@ -1778,7 +1768,6 @@ rpcrdma_register_external(struct rpcrdma_mr_seg *seg,
seg->mr_nsegs = 1;
nsegs = 1;
break;
-#endif

/* Registration using frmr registration */
case RPCRDMA_FRMR:
@@ -1808,11 +1797,9 @@ rpcrdma_deregister_external(struct rpcrdma_mr_seg *seg,

switch (ia->ri_memreg_strategy) {

-#if RPCRDMA_PERSISTENT_REGISTRATION
case RPCRDMA_ALLPHYSICAL:
rpcrdma_unmap_one(ia, seg);
break;
-#endif

case RPCRDMA_FRMR:
rc = rpcrdma_deregister_frmr_external(seg, ia, r_xprt);


2014-06-25 14:32:43

by Chuck Lever

Subject: Re: [PATCH v1 10/13] xprtrdma: Release FRMR segment buffers during LOCAL_INV completion

Hi Shirley-

On Jun 25, 2014, at 1:17 AM, Shirley Ma <[email protected]> wrote:

> Would it be possible to delay rpcrdma_buffer_put() until the LOCAL_INV request's send completion? That is, remove rpcrdma_buffer_put() from xprt_rdma_free() and add a callback after the LOCAL_INV completes?

That's exactly what this patch does. The relevant part of
rpcrdma_buffer_put() is:

list_add(&mw->mw_list, &buf->rb_mws);

This is now wrapped with a reference count so that
rpcrdma_buffer_put() and the LOCAL_INV completion can run in any
order. The FRMR is added back to the list only after both of those
two have finished.

Nothing in xprt_rdma_free() is allowed to sleep, so we can't wait for
LOCAL_INV completion in there.

The only alternative I can think of is having rpcrdma_buffer_get() check
fr_state as it removes FRMRs from the rb_mws list. Only if the FRMR is
marked FRMR_IS_INVALID, rpcrdma_buffer_get() will add it to the
rpcrdma_req.

--
Chuck Lever
chuck[dot]lever[at]oracle[dot]com




2014-06-23 22:39:11

by Chuck Lever

Subject: [PATCH v1 01/13] xprtrdma: Fix panic in rpcrdma_register_frmr_external()

seg1->mr_nsegs is not yet initialized when it is used to unmap
segments during an error exit. Use the same unmapping logic for
all error exits.

"if (frmr_wr.wr.fast_reg.length < len) {" used to be a BUG_ON check.
The broken code should never be executed under normal operation.
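The unified exit (quoted from the hunk below) unwinds exactly the i segments mapped so far, rather than relying on seg1->mr_nsegs, which is set only on success:

        out_err:
                while (i--)
                        rpcrdma_unmap_one(ia, --seg);
                return rc;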

Fixes: c977dea22708688eae31774f70126c97aa4dfe83
Signed-off-by: Chuck Lever <[email protected]>
---
net/sunrpc/xprtrdma/verbs.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index 13dbd1c..78bd7c6 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -1545,9 +1545,8 @@ rpcrdma_register_frmr_external(struct rpcrdma_mr_seg *seg,
frmr_wr.wr.fast_reg.page_shift = PAGE_SHIFT;
frmr_wr.wr.fast_reg.length = page_no << PAGE_SHIFT;
if (frmr_wr.wr.fast_reg.length < len) {
- while (seg1->mr_nsegs--)
- rpcrdma_unmap_one(ia, seg++);
- return -EIO;
+ rc = -EIO;
+ goto out_err;
}

/* Bump the key */
@@ -1565,8 +1564,7 @@ rpcrdma_register_frmr_external(struct rpcrdma_mr_seg *seg,
if (rc) {
dprintk("RPC: %s: failed ib_post_send for register,"
" status %i\n", __func__, rc);
- while (i--)
- rpcrdma_unmap_one(ia, --seg);
+ goto out_err;
} else {
seg1->mr_rkey = seg1->mr_chunk.rl_mw->r.frmr.fr_mr->rkey;
seg1->mr_base = seg1->mr_dma + pageoff;
@@ -1574,6 +1572,10 @@ rpcrdma_register_frmr_external(struct rpcrdma_mr_seg *seg,
seg1->mr_len = len;
}
*nsegs = i;
+ return 0;
+out_err:
+ while (i--)
+ rpcrdma_unmap_one(ia, --seg);
return rc;
}



2014-06-23 22:39:36

by Chuck Lever

Subject: [PATCH v1 04/13] xprtrdma: Update rkeys after transport reconnect

Various reports of:

rpcrdma_qp_async_error_upcall: QP error 3 on device mlx4_0
ep ffff8800bfd3e848

Ensure that rkeys in already-marshalled RPC/RDMA headers are
refreshed after the QP has been replaced by a reconnect.
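Condensed from the transport.c hunk below (our reading of it): a request retransmitted after a reconnect already has rl_niovs set, so only its chunk lists are re-marshalled, which re-registers the segments and picks up fresh rkeys on the new QP:

        if (req->rl_niovs == 0)
                rc = rpcrdma_marshal_req(rqst);         /* first transmission */
        else if (r_xprt->rx_ia.ri_memreg_strategy == RPCRDMA_FRMR)
                rc = rpcrdma_marshal_chunks(rqst, 0);   /* refresh rkeys */
        if (rc < 0)
                goto failed_marshal;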

BugLink: https://bugzilla.linux-nfs.org/show_bug.cgi?id=249
Suggested-by: Selvin Xavier <[email protected]>
Signed-off-by: Chuck Lever <[email protected]>
---
net/sunrpc/xprtrdma/rpc_rdma.c | 77 ++++++++++++++++++++-------------------
net/sunrpc/xprtrdma/transport.c | 11 +++---
net/sunrpc/xprtrdma/xprt_rdma.h | 10 +++++
3 files changed, 56 insertions(+), 42 deletions(-)

diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
index 693966d..6eeb6d2 100644
--- a/net/sunrpc/xprtrdma/rpc_rdma.c
+++ b/net/sunrpc/xprtrdma/rpc_rdma.c
@@ -53,14 +53,6 @@
# define RPCDBG_FACILITY RPCDBG_TRANS
#endif

-enum rpcrdma_chunktype {
- rpcrdma_noch = 0,
- rpcrdma_readch,
- rpcrdma_areadch,
- rpcrdma_writech,
- rpcrdma_replych
-};
-
#ifdef RPC_DEBUG
static const char transfertypes[][12] = {
"pure inline", /* no chunks */
@@ -286,6 +278,30 @@ out:
}

/*
+ * Marshal chunks. This routine returns the header length
+ * consumed by marshaling.
+ *
+ * Returns positive RPC/RDMA header size, or negative errno.
+ */
+
+ssize_t
+rpcrdma_marshal_chunks(struct rpc_rqst *rqst, ssize_t result)
+{
+ struct rpcrdma_req *req = rpcr_to_rdmar(rqst);
+ struct rpcrdma_msg *headerp = (struct rpcrdma_msg *)req->rl_base;
+
+ if (req->rl_rtype != rpcrdma_noch) {
+ result = rpcrdma_create_chunks(rqst,
+ &rqst->rq_snd_buf, headerp, req->rl_rtype);
+
+ } else if (req->rl_wtype != rpcrdma_noch) {
+ result = rpcrdma_create_chunks(rqst,
+ &rqst->rq_rcv_buf, headerp, req->rl_wtype);
+ }
+ return result;
+}
+
+/*
* Copy write data inline.
* This function is used for "small" requests. Data which is passed
* to RPC via iovecs (or page list) is copied directly into the
@@ -377,7 +393,6 @@ rpcrdma_marshal_req(struct rpc_rqst *rqst)
char *base;
size_t rpclen, padlen;
ssize_t hdrlen;
- enum rpcrdma_chunktype rtype, wtype;
struct rpcrdma_msg *headerp;

/*
@@ -415,13 +430,13 @@ rpcrdma_marshal_req(struct rpc_rqst *rqst)
* into pages; otherwise use reply chunks.
*/
if (rqst->rq_rcv_buf.buflen <= RPCRDMA_INLINE_READ_THRESHOLD(rqst))
- wtype = rpcrdma_noch;
+ req->rl_wtype = rpcrdma_noch;
else if (rqst->rq_rcv_buf.page_len == 0)
- wtype = rpcrdma_replych;
+ req->rl_wtype = rpcrdma_replych;
else if (rqst->rq_rcv_buf.flags & XDRBUF_READ)
- wtype = rpcrdma_writech;
+ req->rl_wtype = rpcrdma_writech;
else
- wtype = rpcrdma_replych;
+ req->rl_wtype = rpcrdma_replych;

/*
* Chunks needed for arguments?
@@ -438,16 +453,16 @@ rpcrdma_marshal_req(struct rpc_rqst *rqst)
* TBD check NFSv4 setacl
*/
if (rqst->rq_snd_buf.len <= RPCRDMA_INLINE_WRITE_THRESHOLD(rqst))
- rtype = rpcrdma_noch;
+ req->rl_rtype = rpcrdma_noch;
else if (rqst->rq_snd_buf.page_len == 0)
- rtype = rpcrdma_areadch;
+ req->rl_rtype = rpcrdma_areadch;
else
- rtype = rpcrdma_readch;
+ req->rl_rtype = rpcrdma_readch;

/* The following simplification is not true forever */
- if (rtype != rpcrdma_noch && wtype == rpcrdma_replych)
- wtype = rpcrdma_noch;
- if (rtype != rpcrdma_noch && wtype != rpcrdma_noch) {
+ if (req->rl_rtype != rpcrdma_noch && req->rl_wtype == rpcrdma_replych)
+ req->rl_wtype = rpcrdma_noch;
+ if (req->rl_rtype != rpcrdma_noch && req->rl_wtype != rpcrdma_noch) {
dprintk("RPC: %s: cannot marshal multiple chunk lists\n",
__func__);
return -EIO;
@@ -461,7 +476,7 @@ rpcrdma_marshal_req(struct rpc_rqst *rqst)
* When padding is in use and applies to the transfer, insert
* it and change the message type.
*/
- if (rtype == rpcrdma_noch) {
+ if (req->rl_rtype == rpcrdma_noch) {

padlen = rpcrdma_inline_pullup(rqst,
RPCRDMA_INLINE_PAD_VALUE(rqst));
@@ -476,7 +491,7 @@ rpcrdma_marshal_req(struct rpc_rqst *rqst)
headerp->rm_body.rm_padded.rm_pempty[1] = xdr_zero;
headerp->rm_body.rm_padded.rm_pempty[2] = xdr_zero;
hdrlen += 2 * sizeof(u32); /* extra words in padhdr */
- if (wtype != rpcrdma_noch) {
+ if (req->rl_wtype != rpcrdma_noch) {
dprintk("RPC: %s: invalid chunk list\n",
__func__);
return -EIO;
@@ -497,30 +512,18 @@ rpcrdma_marshal_req(struct rpc_rqst *rqst)
* on receive. Therefore, we request a reply chunk
* for non-writes wherever feasible and efficient.
*/
- if (wtype == rpcrdma_noch)
- wtype = rpcrdma_replych;
+ if (req->rl_wtype == rpcrdma_noch)
+ req->rl_wtype = rpcrdma_replych;
}
}

- /*
- * Marshal chunks. This routine will return the header length
- * consumed by marshaling.
- */
- if (rtype != rpcrdma_noch) {
- hdrlen = rpcrdma_create_chunks(rqst,
- &rqst->rq_snd_buf, headerp, rtype);
- wtype = rtype; /* simplify dprintk */
-
- } else if (wtype != rpcrdma_noch) {
- hdrlen = rpcrdma_create_chunks(rqst,
- &rqst->rq_rcv_buf, headerp, wtype);
- }
+ hdrlen = rpcrdma_marshal_chunks(rqst, hdrlen);
if (hdrlen < 0)
return hdrlen;

dprintk("RPC: %s: %s: hdrlen %zd rpclen %zd padlen %zd"
" headerp 0x%p base 0x%p lkey 0x%x\n",
- __func__, transfertypes[wtype], hdrlen, rpclen, padlen,
+ __func__, transfertypes[req->rl_wtype], hdrlen, rpclen, padlen,
headerp, base, req->rl_iov.lkey);

/*
diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
index 4185102..f6d280b 100644
--- a/net/sunrpc/xprtrdma/transport.c
+++ b/net/sunrpc/xprtrdma/transport.c
@@ -597,13 +597,14 @@ xprt_rdma_send_request(struct rpc_task *task)
struct rpc_xprt *xprt = rqst->rq_xprt;
struct rpcrdma_req *req = rpcr_to_rdmar(rqst);
struct rpcrdma_xprt *r_xprt = rpcx_to_rdmax(xprt);
- int rc;
+ int rc = 0;

- if (req->rl_niovs == 0) {
+ if (req->rl_niovs == 0)
rc = rpcrdma_marshal_req(rqst);
- if (rc < 0)
- goto failed_marshal;
- }
+ else if (r_xprt->rx_ia.ri_memreg_strategy == RPCRDMA_FRMR)
+ rc = rpcrdma_marshal_chunks(rqst, 0);
+ if (rc < 0)
+ goto failed_marshal;

if (req->rl_reply == NULL) /* e.g. reconnection */
rpcrdma_recv_buffer_get(req);
diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
index f3d86b2..c270e59 100644
--- a/net/sunrpc/xprtrdma/xprt_rdma.h
+++ b/net/sunrpc/xprtrdma/xprt_rdma.h
@@ -99,6 +99,14 @@ struct rpcrdma_ep {
#define INIT_CQCOUNT(ep) atomic_set(&(ep)->rep_cqcount, (ep)->rep_cqinit)
#define DECR_CQCOUNT(ep) atomic_sub_return(1, &(ep)->rep_cqcount)

+enum rpcrdma_chunktype {
+ rpcrdma_noch = 0,
+ rpcrdma_readch,
+ rpcrdma_areadch,
+ rpcrdma_writech,
+ rpcrdma_replych
+};
+
/*
* struct rpcrdma_rep -- this structure encapsulates state required to recv
* and complete a reply, asychronously. It needs several pieces of
@@ -192,6 +200,7 @@ struct rpcrdma_req {
unsigned int rl_niovs; /* 0, 2 or 4 */
unsigned int rl_nchunks; /* non-zero if chunks */
unsigned int rl_connect_cookie; /* retry detection */
+ enum rpcrdma_chunktype rl_rtype, rl_wtype;
struct rpcrdma_buffer *rl_buffer; /* home base for this structure */
struct rpcrdma_rep *rl_reply;/* holder for reply buffer */
struct rpcrdma_mr_seg rl_segments[RPCRDMA_MAX_SEGS];/* chunk segments */
@@ -347,6 +356,7 @@ void rpcrdma_reply_handler(struct rpcrdma_rep *);
/*
* RPC/RDMA protocol calls - xprtrdma/rpc_rdma.c
*/
+ssize_t rpcrdma_marshal_chunks(struct rpc_rqst *, ssize_t);
int rpcrdma_marshal_req(struct rpc_rqst *);
size_t rpcrdma_max_payload(struct rpcrdma_xprt *);



2014-06-23 22:40:25

by Chuck Lever

Subject: [PATCH v1 10/13] xprtrdma: Release FRMR segment buffers during LOCAL_INV completion

FRMR uses a LOCAL_INV Work Request, which is asynchronous, to
deregister segment buffers. Other registration strategies use
synchronous deregistration mechanisms (like ib_unmap_fmr()).

For a synchronous deregistration mechanism, it makes sense for
xprt_rdma_free() to put segment buffers back into the buffer pool
immediately once rpcrdma_deregister_external() returns.

This is currently also what FRMR is doing. It is releasing segment
buffers just after the LOCAL_INV WR is posted.

But segment buffers need to be put back after the LOCAL_INV WR
_completes_ (or flushes). Otherwise, rpcrdma_buffer_get() can then
assign these segment buffers to another RPC task while they are
still "in use" by the hardware.

The result of re-using an FRMR too quickly is that its rkey
no longer matches the rkey that was registered with the provider.
This results in FAST_REG_MR or LOCAL_INV Work Requests completing
with IB_WC_MW_BIND_ERR, and the FRMR, and thus the transport,
becomes unusable.

Signed-off-by: Chuck Lever <[email protected]>
---
net/sunrpc/xprtrdma/verbs.c | 44 +++++++++++++++++++++++++++++++++++----
net/sunrpc/xprtrdma/xprt_rdma.h | 2 ++
2 files changed, 42 insertions(+), 4 deletions(-)

diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index f24f0bf..52f57f7 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -62,6 +62,8 @@
#endif

static void rpcrdma_decrement_frmr_rkey(struct rpcrdma_mw *);
+static void rpcrdma_get_mw(struct rpcrdma_mw *);
+static void rpcrdma_put_mw(struct rpcrdma_mw *);

/*
* internal functions
@@ -167,6 +169,7 @@ rpcrdma_sendcq_process_wc(struct ib_wc *wc)
if (fastreg)
rpcrdma_decrement_frmr_rkey(mw);
}
+ rpcrdma_put_mw(mw);
}

static int
@@ -1034,7 +1037,7 @@ rpcrdma_buffer_create(struct rpcrdma_buffer *buf, struct rpcrdma_ep *ep,
len += cdata->padding;
switch (ia->ri_memreg_strategy) {
case RPCRDMA_FRMR:
- len += buf->rb_max_requests * RPCRDMA_MAX_SEGS *
+ len += (buf->rb_max_requests + 1) * RPCRDMA_MAX_SEGS *
sizeof(struct rpcrdma_mw);
break;
case RPCRDMA_MTHCAFMR:
@@ -1076,7 +1079,7 @@ rpcrdma_buffer_create(struct rpcrdma_buffer *buf, struct rpcrdma_ep *ep,
r = (struct rpcrdma_mw *)p;
switch (ia->ri_memreg_strategy) {
case RPCRDMA_FRMR:
- for (i = buf->rb_max_requests * RPCRDMA_MAX_SEGS; i; i--) {
+ for (i = (buf->rb_max_requests+1) * RPCRDMA_MAX_SEGS; i; i--) {
r->r.frmr.fr_mr = ib_alloc_fast_reg_mr(ia->ri_pd,
ia->ri_max_frmr_depth);
if (IS_ERR(r->r.frmr.fr_mr)) {
@@ -1252,12 +1255,36 @@ rpcrdma_buffer_destroy(struct rpcrdma_buffer *buf)
}

static void
-rpcrdma_put_mw_locked(struct rpcrdma_mw *mw)
+rpcrdma_free_mw(struct kref *kref)
{
+ struct rpcrdma_mw *mw = container_of(kref, struct rpcrdma_mw, mw_ref);
list_add_tail(&mw->mw_list, &mw->mw_pool->rb_mws);
}

static void
+rpcrdma_put_mw_locked(struct rpcrdma_mw *mw)
+{
+ kref_put(&mw->mw_ref, rpcrdma_free_mw);
+}
+
+static void
+rpcrdma_get_mw(struct rpcrdma_mw *mw)
+{
+ kref_get(&mw->mw_ref);
+}
+
+static void
+rpcrdma_put_mw(struct rpcrdma_mw *mw)
+{
+ struct rpcrdma_buffer *buffers = mw->mw_pool;
+ unsigned long flags;
+
+ spin_lock_irqsave(&buffers->rb_lock, flags);
+ rpcrdma_put_mw_locked(mw);
+ spin_unlock_irqrestore(&buffers->rb_lock, flags);
+}
+
+static void
rpcrdma_buffer_put_mw(struct rpcrdma_mw **mw)
{
rpcrdma_put_mw_locked(*mw);
@@ -1304,6 +1331,7 @@ rpcrdma_buffer_get_mws(struct rpcrdma_req *req, struct rpcrdma_buffer *buffers)
r = list_entry(buffers->rb_mws.next,
struct rpcrdma_mw, mw_list);
list_del(&r->mw_list);
+ kref_init(&r->mw_ref);
r->mw_pool = buffers;
req->rl_segments[i].mr_chunk.rl_mw = r;
}
@@ -1583,6 +1611,7 @@ rpcrdma_register_frmr_external(struct rpcrdma_mr_seg *seg,
dprintk("RPC: %s: Using frmr %p to map %d segments\n",
__func__, seg1->mr_chunk.rl_mw, i);

+ rpcrdma_get_mw(seg1->mr_chunk.rl_mw);
if (unlikely(seg1->mr_chunk.rl_mw->r.frmr.fr_state == FRMR_IS_VALID)) {
dprintk("RPC: %s: frmr %x left valid, posting invalidate.\n",
__func__,
@@ -1595,6 +1624,7 @@ rpcrdma_register_frmr_external(struct rpcrdma_mr_seg *seg,
invalidate_wr.send_flags = IB_SEND_SIGNALED;
invalidate_wr.ex.invalidate_rkey =
seg1->mr_chunk.rl_mw->r.frmr.fr_mr->rkey;
+ rpcrdma_get_mw(seg1->mr_chunk.rl_mw);
DECR_CQCOUNT(&r_xprt->rx_ep);
post_wr = &invalidate_wr;
} else
@@ -1638,6 +1668,9 @@ rpcrdma_register_frmr_external(struct rpcrdma_mr_seg *seg,
*nsegs = i;
return 0;
out_err:
+ rpcrdma_put_mw(seg1->mr_chunk.rl_mw);
+ if (post_wr == &invalidate_wr)
+ rpcrdma_put_mw(seg1->mr_chunk.rl_mw);
while (i--)
rpcrdma_unmap_one(ia, --seg);
return rc;
@@ -1653,6 +1686,7 @@ rpcrdma_deregister_frmr_external(struct rpcrdma_mr_seg *seg,

while (seg1->mr_nsegs--)
rpcrdma_unmap_one(ia, seg++);
+ rpcrdma_get_mw(seg1->mr_chunk.rl_mw);

memset(&invalidate_wr, 0, sizeof invalidate_wr);
invalidate_wr.wr_id = (unsigned long)(void *)seg1->mr_chunk.rl_mw;
@@ -1664,9 +1698,11 @@ rpcrdma_deregister_frmr_external(struct rpcrdma_mr_seg *seg,
read_lock(&ia->ri_qplock);
rc = ib_post_send(ia->ri_id->qp, &invalidate_wr, &bad_wr);
read_unlock(&ia->ri_qplock);
- if (rc)
+ if (rc) {
+ rpcrdma_put_mw(seg1->mr_chunk.rl_mw);
dprintk("RPC: %s: failed ib_post_send for invalidate,"
" status %i\n", __func__, rc);
+ }
return rc;
}

diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
index b81e5b5..7a140fe 100644
--- a/net/sunrpc/xprtrdma/xprt_rdma.h
+++ b/net/sunrpc/xprtrdma/xprt_rdma.h
@@ -44,6 +44,7 @@
#include <linux/spinlock.h> /* spinlock_t, etc */
#include <linux/atomic.h> /* atomic_t, etc */
#include <linux/workqueue.h> /* struct work_struct */
+#include <linux/kref.h>

#include <rdma/rdma_cm.h> /* RDMA connection api */
#include <rdma/ib_verbs.h> /* RDMA verbs api */
@@ -176,6 +177,7 @@ struct rpcrdma_mw {
} r;
struct list_head mw_list;
struct rpcrdma_buffer *mw_pool;
+ struct kref mw_ref;
};

#define RPCRDMA_BIT_FASTREG (0)
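
The effect of the kref can be modeled outside the kernel: the MW goes back on
rb_mws only when both xprt_rdma_free() and the LOCAL_INV completion (or flush)
have dropped their references, in whichever order they happen to run. A
stand-alone sketch with C11 atomics (the names and the ref-at-registration
convention are illustrative, not kernel code):

#include <stdatomic.h>
#include <stdio.h>

struct mw {
	atomic_int ref;
	int        in_pool;
};

static void mw_get(struct mw *m)
{
	atomic_fetch_add(&m->ref, 1);
}

/* The final put re-lists the MW, like rpcrdma_free_mw() under rb_lock. */
static void mw_put(struct mw *m)
{
	if (atomic_fetch_sub(&m->ref, 1) == 1)
		m->in_pool = 1;
}

int main(void)
{
	struct mw m = { 1, 0 };  /* ref held since the MW left the pool */

	mw_get(&m);              /* the posted LOCAL_INV WR's reference */

	mw_put(&m);              /* xprt_rdma_free() -> rpcrdma_buffer_put() */
	printf("after buffer_put: in_pool=%d\n", m.in_pool);  /* still 0 */

	mw_put(&m);              /* LOCAL_INV completes (or flushes) */
	printf("after completion: in_pool=%d\n", m.in_pool);  /* now 1 */
	return 0;
}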


2014-06-27 16:17:54

by Shirley Ma

[permalink] [raw]
Subject: Re: [PATCH v1 00/13] NFS/RDMA patches for 3.17

Passed cthon04 and iozone interoperability tests between a Linux client and a Solaris server; no regressions.

Shirley

On 06/25/2014 03:47 PM, Steve Wise wrote:
> Hey Chuck,
>
> I did some testing on this series. Just 2 nodes, both nfs3 and nfs4 over cxgb4 and mlx4:
>
> cthon b/g/s 10 iterations
> iozone -a with direct IO and data validation
> fio write and rand-rw testing of large IO/files and 8 threads.
> xfs test suite.
>
> No regressions seen.
>
> Tested-by: Steve Wise <[email protected]>
>
>
>> -----Original Message-----
>> From: [email protected] [mailto:[email protected]] On Behalf
>> Of Chuck Lever
>> Sent: Monday, June 23, 2014 5:39 PM
>> To: [email protected]; [email protected]
>> Subject: [PATCH v1 00/13] NFS/RDMA patches for 3.17
>>
>> The main purpose of this series is to address more connection drop
>> recovery issues by fixing FRMR re-use to make it less likely the
>> client will drop the connection due to a memory operation error.
>>
>> Some other clean-ups and fixes are present as well.
>>
>> See topic branch nfs-rdma-for-3.17 in
>>
>> git://git.linux-nfs.org/projects/cel/cel-2.6.git
>>
>> I tested with NFSv3 and NFSv4 on all three supported memory
>> registration modes. Used cthon04 and iozone with both Solaris
>> and Linux NFS/RDMA servers. Used xfstests with Linux.
>>
>> ---
>>
>> Chuck Lever (13):
>> xprtrdma: Fix panic in rpcrdma_register_frmr_external()
>> xprtrdma: Protect ->qp during FRMR deregistration
>> xprtrdma: Limit data payload size for ALLPHYSICAL
>> xprtrdma: Update rkeys after transport reconnect
>> xprtrdma: Don't drain CQs on transport disconnect
>> xprtrdma: Unclutter struct rpcrdma_mr_seg
>> xprtrdma: Encode Work Request opcode in wc->wr_id
>> xprtrdma: Back off rkey when FAST_REG_MR fails
>> xprtrdma: Refactor rpcrdma_buffer_put()
>> xprtrdma: Release FRMR segment buffers during LOCAL_INV completion
>> xprtrdma: Clean up rpcrdma_ep_disconnect()
>> xprtrdma: Remove RPCRDMA_PERSISTENT_REGISTRATION macro
>> xprtrdma: Handle additional connection events
>>
>>
>> include/linux/sunrpc/xprtrdma.h | 2
>> net/sunrpc/xprtrdma/rpc_rdma.c | 77 +++++----
>> net/sunrpc/xprtrdma/transport.c | 17 +-
>> net/sunrpc/xprtrdma/verbs.c | 330 +++++++++++++++++++++++++++------------
>> net/sunrpc/xprtrdma/xprt_rdma.h | 63 ++++++-
>> 5 files changed, 332 insertions(+), 157 deletions(-)
>>
>> --
>> Chuck Lever
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
>> the body of a message to [email protected]
>> More majordomo info at http://vger.kernel.org/majordomo-info.html
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html
>

2014-06-23 22:40:00

by Chuck Lever

[permalink] [raw]
Subject: [PATCH v1 07/13] xprtrdma: Encode Work Request opcode in wc->wr_id

The wc->opcode field is unreliable when a completion fails.
Up until now, the completion handler has ignored unsuccessful
completions, so that didn't matter to xprtrdma.

In a subsequent patch, however, the send CQ handler will need
to know which Work Request opcode is completing, even for
error completions.

xprtrdma posts three Work Request opcodes via the send queue:
SEND, FAST_REG_MR, and LOCAL_INV:

For SEND, wc->wr_id is zero. Those completions are ignored.

The other two plant a pointer to an rpcrdma_mw in wc->wr_id. Make
the low-order bit indicate which of FAST_REG_MR or LOCAL_INV is
being done.

Signed-off-by: Chuck Lever <[email protected]>
---
net/sunrpc/xprtrdma/verbs.c | 19 +++++++++++--------
net/sunrpc/xprtrdma/xprt_rdma.h | 2 ++
2 files changed, 13 insertions(+), 8 deletions(-)

diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index e8ed81c..cef67fd 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -145,20 +145,22 @@ rpcrdma_cq_async_error_upcall(struct ib_event *event, void *context)
static void
rpcrdma_sendcq_process_wc(struct ib_wc *wc)
{
- struct rpcrdma_mw *frmr = (struct rpcrdma_mw *)(unsigned long)wc->wr_id;
+ unsigned long wrid = wc->wr_id;
+ struct rpcrdma_mw *mw;
+ int fastreg;

- dprintk("RPC: %s: frmr %p status %X opcode %d\n",
- __func__, frmr, wc->status, wc->opcode);
+ dprintk("RPC: %s: wr_id %lx status %X opcode %d\n",
+ __func__, wrid, wc->status, wc->opcode);

- if (wc->wr_id == 0ULL)
+ if (wrid == 0)
return;
if (wc->status != IB_WC_SUCCESS)
return;

- if (wc->opcode == IB_WC_FAST_REG_MR)
- frmr->r.frmr.fr_state = FRMR_IS_VALID;
- else if (wc->opcode == IB_WC_LOCAL_INV)
- frmr->r.frmr.fr_state = FRMR_IS_INVALID;
+ fastreg = test_and_clear_bit(RPCRDMA_BIT_FASTREG, &wrid);
+ mw = (struct rpcrdma_mw *)wrid;
+
+ mw->r.frmr.fr_state = fastreg ? FRMR_IS_VALID : FRMR_IS_INVALID;
}

static int
@@ -1538,6 +1540,7 @@ rpcrdma_register_frmr_external(struct rpcrdma_mr_seg *seg,
/* Prepare FRMR WR */
memset(&frmr_wr, 0, sizeof frmr_wr);
frmr_wr.wr_id = (unsigned long)(void *)seg1->mr_chunk.rl_mw;
+ frmr_wr.wr_id |= (u64)1 << RPCRDMA_BIT_FASTREG;
frmr_wr.opcode = IB_WR_FAST_REG_MR;
frmr_wr.send_flags = IB_SEND_SIGNALED;
frmr_wr.wr.fast_reg.iova_start = seg1->mr_dma;
diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
index 28c8570..6b5d243 100644
--- a/net/sunrpc/xprtrdma/xprt_rdma.h
+++ b/net/sunrpc/xprtrdma/xprt_rdma.h
@@ -177,6 +177,8 @@ struct rpcrdma_mw {
struct list_head mw_list;
};

+#define RPCRDMA_BIT_FASTREG (0)
+
/*
* struct rpcrdma_req -- structure central to the request/reply sequence.
*
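
The trick relies on alignment: an rpcrdma_mw is at least word-aligned, so bit 0
of its address is always clear and can carry the FAST_REG-vs-LOCAL_INV flag
through the 64-bit wr_id. A stand-alone round trip of the encoding (user-space
C, not kernel code):

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define BIT_FASTREG 0  /* mirrors RPCRDMA_BIT_FASTREG above */

struct mw { int state; };

int main(void)
{
	struct mw mw = { 0 };

	/* Post side: plant the pointer, tag FAST_REG_MR in the low bit. */
	uint64_t wr_id = (uintptr_t)&mw;
	assert((wr_id & 1) == 0);          /* alignment keeps bit 0 free */
	wr_id |= (uint64_t)1 << BIT_FASTREG;

	/* Completion side: recover the flag, then the pointer; the moral
	 * equivalent of test_and_clear_bit() in the send CQ handler. */
	unsigned long wrid = (unsigned long)wr_id;
	int fastreg = (wrid >> BIT_FASTREG) & 1;
	wrid &= ~(1UL << BIT_FASTREG);

	printf("fastreg=%d pointer-ok=%d\n",
	       fastreg, (struct mw *)wrid == &mw);
	return 0;
}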


2014-06-23 22:40:09

by Chuck Lever

[permalink] [raw]
Subject: [PATCH v1 08/13] xprtrdma: Back off rkey when FAST_REG_MR fails

If posting a FAST_REG_MR Work Request fails, or the FAST_REG WR
flushes, revert the rkey update to avoid subsequent
IB_WC_MW_BIND_ERR completions.

Suggested-by: Steve Wise <[email protected]>
Signed-off-by: Chuck Lever <[email protected]>
---
net/sunrpc/xprtrdma/verbs.c | 39 +++++++++++++++++++++++++++++----------
1 file changed, 29 insertions(+), 10 deletions(-)

diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index cef67fd..3efc007 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -61,6 +61,8 @@
# define RPCDBG_FACILITY RPCDBG_TRANS
#endif

+static void rpcrdma_decrement_frmr_rkey(struct rpcrdma_mw *);
+
/*
* internal functions
*/
@@ -154,13 +156,17 @@ rpcrdma_sendcq_process_wc(struct ib_wc *wc)

if (wrid == 0)
return;
- if (wc->status != IB_WC_SUCCESS)
- return;

fastreg = test_and_clear_bit(RPCRDMA_BIT_FASTREG, &wrid);
mw = (struct rpcrdma_mw *)wrid;

- mw->r.frmr.fr_state = fastreg ? FRMR_IS_VALID : FRMR_IS_INVALID;
+ if (wc->status == IB_WC_SUCCESS) {
+ mw->r.frmr.fr_state = fastreg ?
+ FRMR_IS_VALID : FRMR_IS_INVALID;
+ } else {
+ if (fastreg)
+ rpcrdma_decrement_frmr_rkey(mw);
+ }
}

static int
@@ -1480,6 +1486,24 @@ rpcrdma_unmap_one(struct rpcrdma_ia *ia, struct rpcrdma_mr_seg *seg)
seg->mr_dma, seg->mr_dmalen, seg->mr_dir);
}

+static void
+rpcrdma_increment_frmr_rkey(struct rpcrdma_mw *mw)
+{
+ struct ib_mr *frmr = mw->r.frmr.fr_mr;
+ u8 key = frmr->rkey & 0x000000FF;
+
+ ib_update_fast_reg_key(frmr, ++key);
+}
+
+static void
+rpcrdma_decrement_frmr_rkey(struct rpcrdma_mw *mw)
+{
+ struct ib_mr *frmr = mw->r.frmr.fr_mr;
+ u8 key = frmr->rkey & 0x000000FF;
+
+ ib_update_fast_reg_key(frmr, --key);
+}
+
static int
rpcrdma_register_frmr_external(struct rpcrdma_mr_seg *seg,
int *nsegs, int writing, struct rpcrdma_ia *ia,
@@ -1487,8 +1511,6 @@ rpcrdma_register_frmr_external(struct rpcrdma_mr_seg *seg,
{
struct rpcrdma_mr_seg *seg1 = seg;
struct ib_send_wr invalidate_wr, frmr_wr, *bad_wr, *post_wr;
-
- u8 key;
int len, pageoff;
int i, rc;
int seg_len;
@@ -1552,14 +1574,10 @@ rpcrdma_register_frmr_external(struct rpcrdma_mr_seg *seg,
rc = -EIO;
goto out_err;
}
-
- /* Bump the key */
- key = (u8)(seg1->mr_chunk.rl_mw->r.frmr.fr_mr->rkey & 0x000000FF);
- ib_update_fast_reg_key(seg1->mr_chunk.rl_mw->r.frmr.fr_mr, ++key);
-
frmr_wr.wr.fast_reg.access_flags = (writing ?
IB_ACCESS_REMOTE_WRITE | IB_ACCESS_LOCAL_WRITE :
IB_ACCESS_REMOTE_READ);
+ rpcrdma_increment_frmr_rkey(seg1->mr_chunk.rl_mw);
frmr_wr.wr.fast_reg.rkey = seg1->mr_chunk.rl_mw->r.frmr.fr_mr->rkey;
DECR_CQCOUNT(&r_xprt->rx_ep);

@@ -1568,6 +1586,7 @@ rpcrdma_register_frmr_external(struct rpcrdma_mr_seg *seg,
if (rc) {
dprintk("RPC: %s: failed ib_post_send for register,"
" status %i\n", __func__, rc);
+ rpcrdma_decrement_frmr_rkey(seg1->mr_chunk.rl_mw);
goto out_err;
} else {
seg1->mr_rkey = seg1->mr_chunk.rl_mw->r.frmr.fr_mr->rkey;
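
The low byte of an rkey is the consumer-owned "key" portion; the rest
identifies the MR. Bumping that byte before each FAST_REG_MR, and decrementing
it again when the post fails or flushes, keeps xprtrdma's idea of the rkey in
step with what the hardware last registered. A user-space model of the
arithmetic (update_key() stands in for ib_update_fast_reg_key(); the split is
illustrative):

#include <stdint.h>
#include <stdio.h>

static uint32_t update_key(uint32_t rkey, uint8_t newkey)
{
	return (rkey & 0xFFFFFF00u) | newkey;
}

static uint32_t increment_rkey(uint32_t rkey)
{
	uint8_t key = rkey & 0xFF;
	return update_key(rkey, ++key);  /* u8 arithmetic wraps at 0xFF */
}

static uint32_t decrement_rkey(uint32_t rkey)
{
	uint8_t key = rkey & 0xFF;
	return update_key(rkey, --key);
}

int main(void)
{
	uint32_t rkey = 0x12345601;
	uint32_t bumped = increment_rkey(rkey);

	printf("bumped:   %08x\n", bumped);

	/* ib_post_send() failed, or the FAST_REG WR later flushed: back
	 * off so the next attempt starts from the key the HW still holds. */
	printf("reverted: %08x\n", decrement_rkey(bumped));
	return 0;
}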


2014-06-24 17:07:55

by Chuck Lever

[permalink] [raw]
Subject: Re: [PATCH v1 00/13] NFS/RDMA patches for 3.17


On Jun 24, 2014, at 10:35 AM, Or Gerlitz <[email protected]> wrote:

> On Tue, Jun 24, 2014 at 1:39 AM, Chuck Lever <[email protected]> wrote:
>>
>> The main purpose of this series is to address more connection drop
>> recovery issues by fixing FRMR re-use to make it less likely the
>> client will drop the connection due to a memory operation error.
>>
>> Some other clean-ups and fixes are present as well.
>
>
> From quick looking, patches 1,2 and 5 of series and maybe more have
> very good match for 3.16-rc (fix kernel crashes etc), I don't think
> they need to wait for 3.17

My take:

1/13 fixes a panic that should never be hit (that used to be a BUG,
which never fired in operation).

2/13 might be a candidate for 3.16-rc. It's up to Trond.

5/13 doesn't fix a regression, it always worked that way.

I've updated the "Fixes:" tag in these, will appear in v2 of the series.

--
Chuck Lever
chuck[dot]lever[at]oracle[dot]com




2014-06-24 14:37:28

by Or Gerlitz

[permalink] [raw]
Subject: Re: [PATCH v1 01/13] xprtrdma: Fix panic in rpcrdma_register_frmr_external()

On Tue, Jun 24, 2014 at 1:39 AM, Chuck Lever <[email protected]> wrote:
> seg1->mr_nsegs is not yet initialized when it is used to unmap
> segments during an error exit. Use the same unmapping logic for
> all error exits.
>
> "if (frmr_wr.wr.fast_reg.length < len) {" used to be a BUG_ON check.
> The broken code should never be executed under normal operation.
>
> Fixes: c977dea22708688eae31774f70126c97aa4dfe83

Here you should put the information provided by git log --oneline, e.g.

Fixes: c977dea22 ('THE COMMIT TITLE')



> Signed-off-by: Chuck Lever <[email protected]>
> ---
> net/sunrpc/xprtrdma/verbs.c | 12 +++++++-----
> 1 file changed, 7 insertions(+), 5 deletions(-)
>
> diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
> index 13dbd1c..78bd7c6 100644
> --- a/net/sunrpc/xprtrdma/verbs.c
> +++ b/net/sunrpc/xprtrdma/verbs.c
> @@ -1545,9 +1545,8 @@ rpcrdma_register_frmr_external(struct rpcrdma_mr_seg *seg,
> frmr_wr.wr.fast_reg.page_shift = PAGE_SHIFT;
> frmr_wr.wr.fast_reg.length = page_no << PAGE_SHIFT;
> if (frmr_wr.wr.fast_reg.length < len) {
> - while (seg1->mr_nsegs--)
> - rpcrdma_unmap_one(ia, seg++);
> - return -EIO;
> + rc = -EIO;
> + goto out_err;
> }
>
> /* Bump the key */
> @@ -1565,8 +1564,7 @@ rpcrdma_register_frmr_external(struct rpcrdma_mr_seg *seg,
> if (rc) {
> dprintk("RPC: %s: failed ib_post_send for register,"
> " status %i\n", __func__, rc);
> - while (i--)
> - rpcrdma_unmap_one(ia, --seg);
> + goto out_err;
> } else {
> seg1->mr_rkey = seg1->mr_chunk.rl_mw->r.frmr.fr_mr->rkey;
> seg1->mr_base = seg1->mr_dma + pageoff;
> @@ -1574,6 +1572,10 @@ rpcrdma_register_frmr_external(struct rpcrdma_mr_seg *seg,
> seg1->mr_len = len;
> }
> *nsegs = i;
> + return 0;
> +out_err:
> + while (i--)
> + rpcrdma_unmap_one(ia, --seg);
> return rc;
> }
>
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html

2014-07-02 19:40:34

by Chuck Lever

[permalink] [raw]
Subject: Re: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport disconnect


On Jul 2, 2014, at 3:28 PM, Steve Wise <[email protected]> wrote:

> On 7/2/2014 2:06 PM, Devesh Sharma wrote:
>> This change is very much prone to generate poll_cq errors because of un-cleaned completions which still
>> point to the non-existent QPs. On the new connection when these completions are polled, the poll_cq will
>> fail because old QP pointer is already NULL.
>> Did anyone hit this situation during their testing?

I tested this aggressively with a fault injector that triggers regular connection
disruption.

> Hey Devesh,
>
> iw_cxgb4 will silently toss CQEs if the QP is not active.

xprtrdma relies on getting a completion (either successful or in error) for every
WR it has posted. The goal of this patch is to avoid throwing away queued
completions after a transport disconnect so we don't lose track of FRMR rkey
updates (FAST_REG_MR and LOCAL_INV completions) and we can capture all RPC
replies posted before the connection was lost.

Sounds like we also need to keep the QP around, even in error state, until all
known WRs on that QP have completed?


>
>
>>> -----Original Message-----
>>> From: [email protected] [mailto:linux-rdma-
>>> [email protected]] On Behalf Of Chuck Lever
>>> Sent: Tuesday, June 24, 2014 4:10 AM
>>> To: [email protected]; [email protected]
>>> Subject: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport disconnect
>>>
>>> CQs are not destroyed until unmount. By draining CQs on transport
>>> disconnect, successful completions that can change the r.frmr.state field can
>>> be missed.
>>>
>>> Signed-off-by: Chuck Lever <[email protected]>
>>> ---
>>> net/sunrpc/xprtrdma/verbs.c | 5 -----
>>> 1 file changed, 5 deletions(-)
>>>
>>> diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
>>> index 3c7f904..451e100 100644
>>> --- a/net/sunrpc/xprtrdma/verbs.c
>>> +++ b/net/sunrpc/xprtrdma/verbs.c
>>> @@ -873,9 +873,6 @@ retry:
>>> dprintk("RPC: %s: rpcrdma_ep_disconnect"
>>> " status %i\n", __func__, rc);
>>>
>>> - rpcrdma_clean_cq(ep->rep_attr.recv_cq);
>>> - rpcrdma_clean_cq(ep->rep_attr.send_cq);
>>> -
>>> xprt = container_of(ia, struct rpcrdma_xprt, rx_ia);
>>> id = rpcrdma_create_id(xprt, ia,
>>> (struct sockaddr *)&xprt->rx_data.addr);
>>> @@ -985,8 +982,6 @@ rpcrdma_ep_disconnect(struct rpcrdma_ep *ep,
>>> struct rpcrdma_ia *ia) {
>>> int rc;
>>>
>>> - rpcrdma_clean_cq(ep->rep_attr.recv_cq);
>>> - rpcrdma_clean_cq(ep->rep_attr.send_cq);
>>> rc = rdma_disconnect(ia->ri_id);
>>> if (!rc) {
>>> /* returns without wait if not connected */
>>>
>>> --
>>> To unsubscribe from this list: send the line "unsubscribe linux-rdma" in the
>>> body of a message to [email protected] More majordomo info at
>>> http://vger.kernel.org/majordomo-info.html
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html

--
Chuck Lever
chuck[dot]lever[at]oracle[dot]com
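
The invariant behind this patch is worth stating plainly: every WR posted to a
queue must eventually be matched by exactly one processed completion,
successful or flushed, or the FRMR state machine loses track of which rkey the
hardware holds. A toy illustration of why discarding a queued CQE at
disconnect is worse than processing it late (user-space C; the enum mirrors
fr_state):

#include <stdio.h>

enum frmr_state { FRMR_IS_INVALID, FRMR_IS_VALID };

struct cqe { int fastreg; int success; };

/* A CQE polled even after the disconnect still updates the FRMR state
 * it refers to. */
static void process(enum frmr_state *state, const struct cqe *wc)
{
	if (wc->success)
		*state = wc->fastreg ? FRMR_IS_VALID : FRMR_IS_INVALID;
}

int main(void)
{
	/* A FAST_REG_MR completed just before the connection dropped,
	 * but its CQE is still queued. */
	struct cqe queued = { .fastreg = 1, .success = 1 };
	enum frmr_state state = FRMR_IS_INVALID;

	/* Draining throws the CQE away: state stays stale. */
	printf("drained:   state=%d (stale)\n", state);

	/* Processing it instead keeps software and hardware in step. */
	process(&state, &queued);
	printf("processed: state=%d\n", state);
	return 0;
}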





2014-07-02 19:49:56

by Steve Wise

[permalink] [raw]
Subject: RE: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport disconnect



> -----Original Message-----
> From: Devesh Sharma [mailto:[email protected]]
> Sent: Wednesday, July 02, 2014 2:43 PM
> To: Steve Wise; Chuck Lever; [email protected]; [email protected]
> Subject: RE: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport disconnect
>
> > -----Original Message-----
> > From: Steve Wise [mailto:[email protected]]
> > Sent: Thursday, July 03, 2014 12:59 AM
> > To: Devesh Sharma; Chuck Lever; [email protected]; linux-
> > [email protected]
> > Subject: Re: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport
> > disconnect
> >
> > On 7/2/2014 2:06 PM, Devesh Sharma wrote:
> > > This change is very much prone to generate poll_cq errors because of
> > > un-cleaned completions which still point to the non-existent QPs. On
> > > the new connection when these completions are polled, the poll_cq will fail
> > because old QP pointer is already NULL.
> > > Did anyone hit this situation during their testing?
> >
> > Hey Devesh,
> >
> > iw_cxgb4 will silently toss CQEs if the QP is not active.
>
> Ya, just now checked that in mlx and cxgb4 driver code. On the other hand ocrdma is asserting
> a BUG-ON for such CQEs causing system panic.
> Out of curiosity I am asking, how this change is useful here, is it reducing the re-connection
> time...Anyhow rpcrdma_clean_cq was discarding the completions (flush/successful both)
>

Well, I don't think there is anything restricting an application from destroying the QP with pending CQEs on its CQs. So it definitely shouldn't cause a BUG_ON() I think. I'll have to read up in the Verbs specs if destroying a QP kills all the pending CQEs...


> >
> >
> > >> -----Original Message-----
> > >> From: [email protected] [mailto:linux-rdma-
> > >> [email protected]] On Behalf Of Chuck Lever
> > >> Sent: Tuesday, June 24, 2014 4:10 AM
> > >> To: [email protected]; [email protected]
> > >> Subject: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport
> > >> disconnect
> > >>
> > >> CQs are not destroyed until unmount. By draining CQs on transport
> > >> disconnect, successful completions that can change the r.frmr.state
> > >> field can be missed.
>
> Still those are missed isn’t it....Since those successful completions will still be dropped after re-
> connection. Am I missing something to
> understanding the motivation...
>
> > >>
> > >> Signed-off-by: Chuck Lever <[email protected]>
> > >> ---
> > >> net/sunrpc/xprtrdma/verbs.c | 5 -----
> > >> 1 file changed, 5 deletions(-)
> > >>
> > >> diff --git a/net/sunrpc/xprtrdma/verbs.c
> > >> b/net/sunrpc/xprtrdma/verbs.c index 3c7f904..451e100 100644
> > >> --- a/net/sunrpc/xprtrdma/verbs.c
> > >> +++ b/net/sunrpc/xprtrdma/verbs.c
> > >> @@ -873,9 +873,6 @@ retry:
> > >> dprintk("RPC: %s: rpcrdma_ep_disconnect"
> > >> " status %i\n", __func__, rc);
> > >>
> > >> - rpcrdma_clean_cq(ep->rep_attr.recv_cq);
> > >> - rpcrdma_clean_cq(ep->rep_attr.send_cq);
> > >> -
> > >> xprt = container_of(ia, struct rpcrdma_xprt, rx_ia);
> > >> id = rpcrdma_create_id(xprt, ia,
> > >> (struct sockaddr *)&xprt->rx_data.addr);
> > @@ -985,8 +982,6 @@
> > >> rpcrdma_ep_disconnect(struct rpcrdma_ep *ep, struct rpcrdma_ia *ia)
> > >> {
> > >> int rc;
> > >>
> > >> - rpcrdma_clean_cq(ep->rep_attr.recv_cq);
> > >> - rpcrdma_clean_cq(ep->rep_attr.send_cq);
> > >> rc = rdma_disconnect(ia->ri_id);
> > >> if (!rc) {
> > >> /* returns without wait if not connected */
> > >>
> > >> --
> > >> To unsubscribe from this list: send the line "unsubscribe linux-rdma"
> > >> in the body of a message to [email protected] More
> > majordomo
> > >> info at http://vger.kernel.org/majordomo-info.html



2014-07-02 19:56:18

by Steve Wise

[permalink] [raw]
Subject: RE: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport disconnect



> -----Original Message-----
> From: Devesh Sharma [mailto:[email protected]]
> Sent: Wednesday, July 02, 2014 2:54 PM
> To: Steve Wise; 'Chuck Lever'; [email protected]; [email protected]
> Subject: RE: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport disconnect
>
>
>
> > -----Original Message-----
> > From: Steve Wise [mailto:[email protected]]
> > Sent: Thursday, July 03, 2014 1:21 AM
> > To: Devesh Sharma; 'Chuck Lever'; [email protected]; linux-
> > [email protected]
> > Subject: RE: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport
> > disconnect
> >
> >
> >
> > > -----Original Message-----
> > > From: Devesh Sharma [mailto:[email protected]]
> > > Sent: Wednesday, July 02, 2014 2:43 PM
> > > To: Steve Wise; Chuck Lever; [email protected];
> > > [email protected]
> > > Subject: RE: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport
> > > disconnect
> > >
> > > > -----Original Message-----
> > > > From: Steve Wise [mailto:[email protected]]
> > > > Sent: Thursday, July 03, 2014 12:59 AM
> > > > To: Devesh Sharma; Chuck Lever; [email protected]; linux-
> > > > [email protected]
> > > > Subject: Re: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport
> > > > disconnect
> > > >
> > > > On 7/2/2014 2:06 PM, Devesh Sharma wrote:
> > > > > This change is very much prone to generate poll_cq errors because
> > > > > of un-cleaned completions which still point to the non-existent
> > > > > QPs. On the new connection when these completions are polled, the
> > > > > poll_cq will fail
> > > > because old QP pointer is already NULL.
> > > > > Did anyone hit this situation during their testing?
> > > >
> > > > Hey Devesh,
> > > >
> > > > iw_cxgb4 will silently toss CQEs if the QP is not active.
> > >
> > > Ya, just now checked that in mlx and cxgb4 driver code. On the other
> > > hand ocrdma is asserting a BUG-ON for such CQEs causing system panic.
> > > Out of curiosity I am asking, how this change is useful here, is it
> > > reducing the re-connection time...Anyhow rpcrdma_clean_cq was
> > > discarding the completions (flush/successful both)
> > >
> >
> > Well, I don't think there is anything restricting an application from destroying
> > the QP with pending CQEs on its CQs. So it definitely shouldn't cause a
> > BUG_ON() I think. I'll have to read up in the Verbs specs if destroying a QP
> > kills all the pending CQEs...
>
> Oh confusion...let me clarify: in ocrdma BUG ON is hit in poll_cq() after re-connection happens
> and cq is polled again.
> Now the first completion in CQ still points to old QP-ID for which ocrdma does not have valid
> QP pointer.
>

Right. Which means it’s a stale CQE. I don't think that should cause a BUG_ON.


> >
> >
> > > >
> > > >
> > > > >> -----Original Message-----
> > > > >> From: [email protected] [mailto:linux-rdma-
> > > > >> [email protected]] On Behalf Of Chuck Lever
> > > > >> Sent: Tuesday, June 24, 2014 4:10 AM
> > > > >> To: [email protected]; [email protected]
> > > > >> Subject: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport
> > > > >> disconnect
> > > > >>
> > > > >> CQs are not destroyed until unmount. By draining CQs on transport
> > > > >> disconnect, successful completions that can change the
> > > > >> r.frmr.state field can be missed.
> > >
> > > Still those are missed isn’t it....Since those successful completions
> > > will still be dropped after re- connection. Am I missing something to
> > > understanding the motivation...
> > >
> > > > >>
> > > > >> Signed-off-by: Chuck Lever <[email protected]>
> > > > >> ---
> > > > >> net/sunrpc/xprtrdma/verbs.c | 5 -----
> > > > >> 1 file changed, 5 deletions(-)
> > > > >>
> > > > >> diff --git a/net/sunrpc/xprtrdma/verbs.c
> > > > >> b/net/sunrpc/xprtrdma/verbs.c index 3c7f904..451e100 100644
> > > > >> --- a/net/sunrpc/xprtrdma/verbs.c
> > > > >> +++ b/net/sunrpc/xprtrdma/verbs.c
> > > > >> @@ -873,9 +873,6 @@ retry:
> > > > >> dprintk("RPC: %s:
> > rpcrdma_ep_disconnect"
> > > > >> " status %i\n", __func__, rc);
> > > > >>
> > > > >> - rpcrdma_clean_cq(ep->rep_attr.recv_cq);
> > > > >> - rpcrdma_clean_cq(ep->rep_attr.send_cq);
> > > > >> -
> > > > >> xprt = container_of(ia, struct rpcrdma_xprt, rx_ia);
> > > > >> id = rpcrdma_create_id(xprt, ia,
> > > > >> (struct sockaddr *)&xprt-
> > >rx_data.addr);
> > > > @@ -985,8 +982,6 @@
> > > > >> rpcrdma_ep_disconnect(struct rpcrdma_ep *ep, struct rpcrdma_ia
> > > > >> *ia) {
> > > > >> int rc;
> > > > >>
> > > > >> - rpcrdma_clean_cq(ep->rep_attr.recv_cq);
> > > > >> - rpcrdma_clean_cq(ep->rep_attr.send_cq);
> > > > >> rc = rdma_disconnect(ia->ri_id);
> > > > >> if (!rc) {
> > > > >> /* returns without wait if not connected */
> > > > >>
> > > > >> --
> > > > >> To unsubscribe from this list: send the line "unsubscribe linux-rdma"
> > > > >> in the body of a message to [email protected] More
> > > > majordomo
> > > > >> info at http://vger.kernel.org/majordomo-info.html
> >



2014-07-02 19:45:30

by Steve Wise

[permalink] [raw]
Subject: RE: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport disconnect



> -----Original Message-----
> From: Chuck Lever [mailto:[email protected]]
> Sent: Wednesday, July 02, 2014 2:40 PM
> To: Steve Wise; Devesh Sharma
> Cc: [email protected]; Linux NFS Mailing List
> Subject: Re: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport disconnect
>
>
> On Jul 2, 2014, at 3:28 PM, Steve Wise <[email protected]> wrote:
>
> > On 7/2/2014 2:06 PM, Devesh Sharma wrote:
> >> This change is very much prone to generate poll_cq errors because of un-cleaned
> completions which still
> >> point to the non-existent QPs. On the new connection when these completions are polled,
> the poll_cq will
> >> fail because old QP pointer is already NULL.
> >> Did anyone hit this situation during their testing?
>
> I tested this aggressively with a fault injector that triggers regular connection
> disruption.
>
> > Hey Devesh,
> >
> > iw_cxgb4 will silently toss CQEs if the QP is not active.
>
> xprtrdma relies on getting a completion (either successful or in error) for every
> WR it has posted. The goal of this patch is to avoid throwing away queued
> completions after a transport disconnect so we don't lose track of FRMR rkey
> updates (FAST_REG_MR and LOCAL_INV completions) and we can capture all RPC
> replies posted before the connection was lost.
>
> Sounds like we also need to keep the QP around, even in error state, until all
> known WRs on that QP have completed?
>

Perhaps.

>
> >
> >
> >>> -----Original Message-----
> >>> From: [email protected] [mailto:linux-rdma-
> >>> [email protected]] On Behalf Of Chuck Lever
> >>> Sent: Tuesday, June 24, 2014 4:10 AM
> >>> To: [email protected]; [email protected]
> >>> Subject: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport disconnect
> >>>
> >>> CQs are not destroyed until unmount. By draining CQs on transport
> >>> disconnect, successful completions that can change the r.frmr.state field can
> >>> be missed.
> >>>
> >>> Signed-off-by: Chuck Lever <[email protected]>
> >>> ---
> >>> net/sunrpc/xprtrdma/verbs.c | 5 -----
> >>> 1 file changed, 5 deletions(-)
> >>>
> >>> diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
> >>> index 3c7f904..451e100 100644
> >>> --- a/net/sunrpc/xprtrdma/verbs.c
> >>> +++ b/net/sunrpc/xprtrdma/verbs.c
> >>> @@ -873,9 +873,6 @@ retry:
> >>> dprintk("RPC: %s: rpcrdma_ep_disconnect"
> >>> " status %i\n", __func__, rc);
> >>>
> >>> - rpcrdma_clean_cq(ep->rep_attr.recv_cq);
> >>> - rpcrdma_clean_cq(ep->rep_attr.send_cq);
> >>> -
> >>> xprt = container_of(ia, struct rpcrdma_xprt, rx_ia);
> >>> id = rpcrdma_create_id(xprt, ia,
> >>> (struct sockaddr *)&xprt->rx_data.addr);
> >>> @@ -985,8 +982,6 @@ rpcrdma_ep_disconnect(struct rpcrdma_ep *ep,
> >>> struct rpcrdma_ia *ia) {
> >>> int rc;
> >>>
> >>> - rpcrdma_clean_cq(ep->rep_attr.recv_cq);
> >>> - rpcrdma_clean_cq(ep->rep_attr.send_cq);
> >>> rc = rdma_disconnect(ia->ri_id);
> >>> if (!rc) {
> >>> /* returns without wait if not connected */
> >>>
> >>> --
> >>> To unsubscribe from this list: send the line "unsubscribe linux-rdma" in the
> >>> body of a message to [email protected] More majordomo info at
> >>> http://vger.kernel.org/majordomo-info.html
> >
> > --
> > To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> > the body of a message to [email protected]
> > More majordomo info at http://vger.kernel.org/majordomo-info.html
>
> --
> Chuck Lever
> chuck[dot]lever[at]oracle[dot]com
>
>



2014-07-02 19:56:15

by Devesh Sharma

[permalink] [raw]
Subject: RE: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport disconnect

> -----Original Message-----
> From: Steve Wise [mailto:[email protected]]
> Sent: Thursday, July 03, 2014 1:21 AM
> To: Devesh Sharma; 'Chuck Lever'; [email protected]; linux-
> [email protected]
> Subject: RE: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport
> disconnect
> 
> 
> > -----Original Message-----
> > From: Devesh Sharma [mailto:[email protected]]
> > Sent: Wednesday, July 02, 2014 2:43 PM
> > To: Steve Wise; Chuck Lever; [email protected];
> > [email protected]
> > Subject: RE: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport
> > disconnect
> >
> > > -----Original Message-----
> > > From: Steve Wise [mailto:[email protected]]
> > > Sent: Thursday, July 03, 2014 12:59 AM
> > > To: Devesh Sharma; Chuck Lever; [email protected]; linux-
> > > [email protected]
> > > Subject: Re: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport
> > > disconnect
> > >
> > > On 7/2/2014 2:06 PM, Devesh Sharma wrote:
> > > > This change is very much prone to generate poll_cq errors because
> > > > of un-cleaned completions which still point to the non-existent
> > > > QPs. On the new connection when these completions are polled, the
> > > > poll_cq will fail
> > > because old QP pointer is already NULL.
> > > > Did anyone hit this situation during their testing?
> > >
> > > Hey Devesh,
> > >
> > > iw_cxgb4 will silently toss CQEs if the QP is not active.
> >
> > Ya, just now checked that in mlx and cxgb4 driver code. On the other
> > hand ocrdma is asserting a BUG-ON for such CQEs causing system panic.
> > Out of curiosity I am asking, how this change is useful here, is it
> > reducing the re-connection time...Anyhow rpcrdma_clean_cq was
> > discarding the completions (flush/successful both)
> >
> 
> Well, I don't think there is anything restricting an application from destroying
> the QP with pending CQEs on its CQs.   So it definitely shouldn't cause a
> BUG_ON() I think.   I'll have to read up in the Verbs specs if destroying a QP
> kills all the pending CQEs...

I will double check ocrdma_destroy_qp code.

> 
> 
> > >
> > >
> > > >> -----Original Message-----
> > > >> From: [email protected] [mailto:linux-rdma-
> > > >> [email protected]] On Behalf Of Chuck Lever
> > > >> Sent: Tuesday, June 24, 2014 4:10 AM
> > > >> To: [email protected]; [email protected]
> > > >> Subject: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport
> > > >> disconnect
> > > >>
> > > >> CQs are not destroyed until unmount. By draining CQs on transport
> > > >> disconnect, successful completions that can change the
> > > >> r.frmr.state field can be missed.
> >
> > Still those are missed isn’t it....Since those successful completions
> > will still be dropped after re- connection. Am I missing something to
> > understanding the motivation...
> >
> > > >>
> > > >> Signed-off-by: Chuck Lever <[email protected]>
> > > >> ---
> > > >>   net/sunrpc/xprtrdma/verbs.c |    5 -----
> > > >>   1 file changed, 5 deletions(-)
> > > >>
> > > >> diff --git a/net/sunrpc/xprtrdma/verbs.c
> > > >> b/net/sunrpc/xprtrdma/verbs.c index 3c7f904..451e100 100644
> > > >> --- a/net/sunrpc/xprtrdma/verbs.c
> > > >> +++ b/net/sunrpc/xprtrdma/verbs.c
> > > >> @@ -873,9 +873,6 @@ retry:
> > > >> 			dprintk("RPC:       %s:
> rpcrdma_ep_disconnect"
> > > >> 				" status %i\n", __func__, rc);
> > > >>
> > > >> -		rpcrdma_clean_cq(ep->rep_attr.recv_cq);
> > > >> -		rpcrdma_clean_cq(ep->rep_attr.send_cq);
> > > >> -
> > > >> 		xprt = container_of(ia, struct rpcrdma_xprt, rx_ia);
> > > >> 		id = rpcrdma_create_id(xprt, ia,
> > > >> 			(struct sockaddr *)&xprt-
> >rx_data.addr);
> > > @@ -985,8 +982,6 @@
> > > >> rpcrdma_ep_disconnect(struct rpcrdma_ep *ep, struct rpcrdma_ia
> > > >> *ia) {
> > > >> 	int rc;
> > > >>
> > > >> -	rpcrdma_clean_cq(ep->rep_attr.recv_cq);
> > > >> -	rpcrdma_clean_cq(ep->rep_attr.send_cq);
> > > >> 	rc = rdma_disconnect(ia->ri_id);
> > > >> 	if (!rc) {
> > > >> 		/* returns without wait if not connected */
> > > >>
> > > >> --
> > > >> To unsubscribe from this list: send the line "unsubscribe linux-rdma"
> > > >> in the body of a message to [email protected] More
> > > majordomo
> > > >> info at http://vger.kernel.org/majordomo-info.html

2014-07-02 19:28:24

by Steve Wise

[permalink] [raw]
Subject: Re: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport disconnect

On 7/2/2014 2:06 PM, Devesh Sharma wrote:
> This change is very much prone to generate poll_cq errors because of un-cleaned completions which still
> point to the non-existent QPs. On the new connection when these completions are polled, the poll_cq will
> fail because old QP pointer is already NULL.
> Did anyone hit this situation during their testing?

Hey Devesh,

iw_cxgb4 will silently toss CQEs if the QP is not active.


>> -----Original Message-----
>> From: [email protected] [mailto:linux-rdma-
>> [email protected]] On Behalf Of Chuck Lever
>> Sent: Tuesday, June 24, 2014 4:10 AM
>> To: [email protected]; [email protected]
>> Subject: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport disconnect
>>
>> CQs are not destroyed until unmount. By draining CQs on transport
>> disconnect, successful completions that can change the r.frmr.state field can
>> be missed.
>>
>> Signed-off-by: Chuck Lever <[email protected]>
>> ---
>> net/sunrpc/xprtrdma/verbs.c | 5 -----
>> 1 file changed, 5 deletions(-)
>>
>> diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
>> index 3c7f904..451e100 100644
>> --- a/net/sunrpc/xprtrdma/verbs.c
>> +++ b/net/sunrpc/xprtrdma/verbs.c
>> @@ -873,9 +873,6 @@ retry:
>> dprintk("RPC: %s: rpcrdma_ep_disconnect"
>> " status %i\n", __func__, rc);
>>
>> - rpcrdma_clean_cq(ep->rep_attr.recv_cq);
>> - rpcrdma_clean_cq(ep->rep_attr.send_cq);
>> -
>> xprt = container_of(ia, struct rpcrdma_xprt, rx_ia);
>> id = rpcrdma_create_id(xprt, ia,
>> (struct sockaddr *)&xprt->rx_data.addr);
>> @@ -985,8 +982,6 @@ rpcrdma_ep_disconnect(struct rpcrdma_ep *ep,
>> struct rpcrdma_ia *ia) {
>> int rc;
>>
>> - rpcrdma_clean_cq(ep->rep_attr.recv_cq);
>> - rpcrdma_clean_cq(ep->rep_attr.send_cq);
>> rc = rdma_disconnect(ia->ri_id);
>> if (!rc) {
>> /* returns without wait if not connected */
>>
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-rdma" in the
>> body of a message to [email protected] More majordomo info at
>> http://vger.kernel.org/majordomo-info.html


2014-07-02 19:43:00

by Devesh Sharma

[permalink] [raw]
Subject: RE: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport disconnect

> -----Original Message-----
> From: Steve Wise [mailto:[email protected]]
> Sent: Thursday, July 03, 2014 12:59 AM
> To: Devesh Sharma; Chuck Lever; [email protected]; linux-
> [email protected]
> Subject: Re: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport
> disconnect
> 
> On 7/2/2014 2:06 PM, Devesh Sharma wrote:
> > This change is very much prone to generate poll_cq errors because of
> > un-cleaned completions which still point to the non-existent QPs. On
> > the new connection when these completions are polled, the poll_cq will fail
> because old QP pointer is already NULL.
> > Did anyone hit this situation during their testing?
> 
> Hey Devesh,
> 
> iw_cxgb4 will silently toss CQEs if the QP is not active.

Ya, just now checked that in mlx and cxgb4 driver code. On the other hand ocrdma is asserting a BUG-ON for such CQEs causing system panic.
Out of curiosity I am asking, how this change is useful here, is it reducing the re-connection time...Anyhow rpcrdma_clean_cq was discarding the completions (flush/successful both)

> 
> 
> >> -----Original Message-----
> >> From: [email protected] [mailto:linux-rdma-
> >> [email protected]] On Behalf Of Chuck Lever
> >> Sent: Tuesday, June 24, 2014 4:10 AM
> >> To: [email protected]; [email protected]
> >> Subject: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport
> >> disconnect
> >>
> >> CQs are not destroyed until unmount. By draining CQs on transport
> >> disconnect, successful completions that can change the r.frmr.state
> >> field can be missed.

Still those are missed isn’t it....Since those successful completions will still be dropped after re-connection. Am I missing something to 
understanding the motivation...

> >>
> >> Signed-off-by: Chuck Lever <[email protected]>
> >> ---
> >>   net/sunrpc/xprtrdma/verbs.c |    5 -----
> >>   1 file changed, 5 deletions(-)
> >>
> >> diff --git a/net/sunrpc/xprtrdma/verbs.c
> >> b/net/sunrpc/xprtrdma/verbs.c index 3c7f904..451e100 100644
> >> --- a/net/sunrpc/xprtrdma/verbs.c
> >> +++ b/net/sunrpc/xprtrdma/verbs.c
> >> @@ -873,9 +873,6 @@ retry:
> >>   			dprintk("RPC:       %s: rpcrdma_ep_disconnect"
> >>   				" status %i\n", __func__, rc);
> >>
> >> -		rpcrdma_clean_cq(ep->rep_attr.recv_cq);
> >> -		rpcrdma_clean_cq(ep->rep_attr.send_cq);
> >> -
> >>   		xprt = container_of(ia, struct rpcrdma_xprt, rx_ia);
> >>   		id = rpcrdma_create_id(xprt, ia,
> >>   				(struct sockaddr *)&xprt->rx_data.addr);
> @@ -985,8 +982,6 @@
> >> rpcrdma_ep_disconnect(struct rpcrdma_ep *ep, struct rpcrdma_ia *ia)
> >> {
> >>   	int rc;
> >>
> >> -	rpcrdma_clean_cq(ep->rep_attr.recv_cq);
> >> -	rpcrdma_clean_cq(ep->rep_attr.send_cq);
> >>   	rc = rdma_disconnect(ia->ri_id);
> >>   	if (!rc) {
> >>   		/* returns without wait if not connected */
> >>
> >> --
> >> To unsubscribe from this list: send the line "unsubscribe linux-rdma"
> >> in the body of a message to [email protected] More
> majordomo
> >> info at http://vger.kernel.org/majordomo-info.html

2014-07-03 05:34:03

by Devesh Sharma

[permalink] [raw]
Subject: RE: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport disconnect

> -----Original Message-----
> From: Chuck Lever [mailto:[email protected]]
> Sent: Thursday, July 03, 2014 1:30 AM
> To: Devesh Sharma
> Cc: Steve Wise; [email protected]; Linux NFS Mailing List
> Subject: Re: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport
> disconnect
> 
> 
> On Jul 2, 2014, at 3:48 PM, Devesh Sharma <[email protected]>
> wrote:
> 
> >
> >
> >> -----Original Message-----
> >> From: Steve Wise [mailto:[email protected]]
> >> Sent: Thursday, July 03, 2014 1:16 AM
> >> To: 'Chuck Lever'; Devesh Sharma
> >> Cc: [email protected]; 'Linux NFS Mailing List'
> >> Subject: RE: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport
> >> disconnect
> >>
> >>
> >>
> >>> -----Original Message-----
> >>> From: Chuck Lever [mailto:[email protected]]
> >>> Sent: Wednesday, July 02, 2014 2:40 PM
> >>> To: Steve Wise; Devesh Sharma
> >>> Cc: [email protected]; Linux NFS Mailing List
> >>> Subject: Re: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport
> >>> disconnect
> >>>
> >>>
> >>> On Jul 2, 2014, at 3:28 PM, Steve Wise
> <[email protected]>
> >> wrote:
> >>>
> >>>> On 7/2/2014 2:06 PM, Devesh Sharma wrote:
> >>>>> This change is very much prone to generate poll_cq errors because
> >>>>> of un-cleaned
> >>> completions which still
> >>>>> point to the non-existent QPs. On the new connection when these
> >>>>> completions are polled,
> >>> the poll_cq will
> >>>>> fail because old QP pointer is already NULL.
> >>>>> Did anyone hit this situation during their testing?
> >>>
> >>> I tested this aggressively with a fault injector that triggers
> >>> regular connection disruption.
> >>>
> >>>> Hey Devesh,
> >>>>
> >>>> iw_cxgb4 will silently toss CQEs if the QP is not active.
> >>>
> >>> xprtrdma relies on getting a completion (either successful or in
> >>> error) for every WR it has posted. The goal of this patch is to
> >>> avoid throwing away queued completions after a transport disconnect
> >>> so we don't lose track of FRMR rkey updates (FAST_REG_MR and
> >>> LOCAL_INV
> >>> completions) and we can capture all RPC replies posted before the
> >> connection was lost.
> >>>
> >>> Sounds like we also need to keep the QP around, even in error state,
> >>> until all known WRs on that QP have completed?
> >>>
> >
> > Why not poll and process every completion during
> rpcrdma_cq_cleanup()….
> 
> Yes, I have a patch in the next version of this series that does that.
> It just calls rpcrdma_sendcq_upcall() from the connect worker. I will squash
> that change into this patch.

Cool.

> 
> Maybe it needs to invoke rpcrdma_recvcq_upcall() there as well.

Yes, that would be needed.

> 
> 
> >
> >>
> >> Perhaps.
> >>
> >>>
> >>>>
> >>>>
> >>>>>> -----Original Message-----
> >>>>>> From: [email protected] [mailto:linux-rdma-
> >>>>>> [email protected]] On Behalf Of Chuck Lever
> >>>>>> Sent: Tuesday, June 24, 2014 4:10 AM
> >>>>>> To: [email protected]; [email protected]
> >>>>>> Subject: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport
> >>>>>> disconnect
> >>>>>>
> >>>>>> CQs are not destroyed until unmount. By draining CQs on transport
> >>>>>> disconnect, successful completions that can change the
> >>>>>> r.frmr.state field can be missed.
> >>>>>>
> >>>>>> Signed-off-by: Chuck Lever <[email protected]>
> >>>>>> ---
> >>>>>> net/sunrpc/xprtrdma/verbs.c | 5 -----
> >>>>>> 1 file changed, 5 deletions(-)
> >>>>>>
> >>>>>> diff --git a/net/sunrpc/xprtrdma/verbs.c
> >>>>>> b/net/sunrpc/xprtrdma/verbs.c index 3c7f904..451e100 100644
> >>>>>> --- a/net/sunrpc/xprtrdma/verbs.c
> >>>>>> +++ b/net/sunrpc/xprtrdma/verbs.c
> >>>>>> @@ -873,9 +873,6 @@ retry:
> >>>>>> 	dprintk("RPC:       %s: rpcrdma_ep_disconnect"
> >>>>>> 		" status %i\n", __func__, rc);
> >>>>>>
> >>>>>> -	rpcrdma_clean_cq(ep->rep_attr.recv_cq);
> >>>>>> -	rpcrdma_clean_cq(ep->rep_attr.send_cq);
> >>>>>> -
> >>>>>> 	xprt = container_of(ia, struct rpcrdma_xprt, rx_ia);
> >>>>>> 	id = rpcrdma_create_id(xprt, ia,
> >>>>>> 		(struct sockaddr *)&xprt->rx_data.addr);
> >> @@ -985,8 +982,6 @@
> >>>>>> rpcrdma_ep_disconnect(struct rpcrdma_ep *ep, struct rpcrdma_ia
> >>>>>> *ia) {
> >>>>>> 	int rc;
> >>>>>>
> >>>>>> -	rpcrdma_clean_cq(ep->rep_attr.recv_cq);
> >>>>>> -	rpcrdma_clean_cq(ep->rep_attr.send_cq);
> >>>>>> 	rc = rdma_disconnect(ia->ri_id);
> >>>>>> 	if (!rc) {
> >>>>>> 		/* returns without wait if not connected */
> >>>>>>
> >>>>>> --
> >>>>>> To unsubscribe from this list: send the line "unsubscribe
> >>>>>> linux-rdma" in the body of a message to [email protected]
> >>>>>> More majordomo info at http://vger.kernel.org/majordomo-info.html
> >>>>
> >>>> --
> >>>> To unsubscribe from this list: send the line "unsubscribe linux-nfs"
> >>>> in the body of a message to [email protected] More
> >> majordomo
> >>>> info at http://vger.kernel.org/majordomo-info.html
> >>>
> >>> --
> >>> Chuck Lever
> >>> chuck[dot]lever[at]oracle[dot]com
> >>>
> >>>
> >>
> > --
> > To unsubscribe from this list: send the line "unsubscribe linux-rdma"
> > in the body of a message to [email protected] More majordomo
> > info at http://vger.kernel.org/majordomo-info.html
> >
> 
> --
> Chuck Lever
> chuck[dot]lever[at]oracle[dot]com
> 
> 

2014-07-02 20:00:04

by Chuck Lever

[permalink] [raw]
Subject: Re: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport disconnect


On Jul 2, 2014, at 3:48 PM, Devesh Sharma <[email protected]> wrote:

>
>
>> -----Original Message-----
>> From: Steve Wise [mailto:[email protected]]
>> Sent: Thursday, July 03, 2014 1:16 AM
>> To: 'Chuck Lever'; Devesh Sharma
>> Cc: [email protected]; 'Linux NFS Mailing List'
>> Subject: RE: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport
>> disconnect
>>
>>
>>
>>> -----Original Message-----
>>> From: Chuck Lever [mailto:[email protected]]
>>> Sent: Wednesday, July 02, 2014 2:40 PM
>>> To: Steve Wise; Devesh Sharma
>>> Cc: [email protected]; Linux NFS Mailing List
>>> Subject: Re: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport
>>> disconnect
>>>
>>>
>>> On Jul 2, 2014, at 3:28 PM, Steve Wise <[email protected]>
>> wrote:
>>>
>>>> On 7/2/2014 2:06 PM, Devesh Sharma wrote:
>>>>> This change is very much prone to generate poll_cq errors because
>>>>> of un-cleaned
>>> completions which still
>>>>> point to the non-existent QPs. On the new connection when these
>>>>> completions are polled,
>>> the poll_cq will
>>>>> fail because old QP pointer is already NULL.
>>>>> Did anyone hit this situation during their testing?
>>>
>>> I tested this aggressively with a fault injector that triggers regular
>>> connection disruption.
>>>
>>>> Hey Devesh,
>>>>
>>>> iw_cxgb4 will silently toss CQEs if the QP is not active.
>>>
>>> xprtrdma relies on getting a completion (either successful or in
>>> error) for every WR it has posted. The goal of this patch is to avoid
>>> throwing away queued completions after a transport disconnect so we
>>> don't lose track of FRMR rkey updates (FAST_REG_MR and LOCAL_INV
>>> completions) and we can capture all RPC replies posted before the
>> connection was lost.
>>>
>>> Sounds like we also need to keep the QP around, even in error state,
>>> until all known WRs on that QP have completed?
>>>
>
> Why not poll and process every completion during rpcrdma_cq_cleanup()….

Yes, I have a patch in the next version of this series that does that.
It just calls rpcrdma_sendcq_upcall() from the connect worker. I will
squash that change into this patch.

Maybe it needs to invoke rpcrdma_recvcq_upcall() there as well.


>
>>
>> Perhaps.
>>
>>>
>>>>
>>>>
>>>>>> -----Original Message-----
>>>>>> From: [email protected] [mailto:linux-rdma-
>>>>>> [email protected]] On Behalf Of Chuck Lever
>>>>>> Sent: Tuesday, June 24, 2014 4:10 AM
>>>>>> To: [email protected]; [email protected]
>>>>>> Subject: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport
>>>>>> disconnect
>>>>>>
>>>>>> CQs are not destroyed until unmount. By draining CQs on transport
>>>>>> disconnect, successful completions that can change the
>>>>>> r.frmr.state field can be missed.
>>>>>>
>>>>>> Signed-off-by: Chuck Lever <[email protected]>
>>>>>> ---
>>>>>> net/sunrpc/xprtrdma/verbs.c | 5 -----
>>>>>> 1 file changed, 5 deletions(-)
>>>>>>
>>>>>> diff --git a/net/sunrpc/xprtrdma/verbs.c
>>>>>> b/net/sunrpc/xprtrdma/verbs.c index 3c7f904..451e100 100644
>>>>>> --- a/net/sunrpc/xprtrdma/verbs.c
>>>>>> +++ b/net/sunrpc/xprtrdma/verbs.c
>>>>>> @@ -873,9 +873,6 @@ retry:
>>>>>> dprintk("RPC: %s: rpcrdma_ep_disconnect"
>>>>>> " status %i\n", __func__, rc);
>>>>>>
>>>>>> - rpcrdma_clean_cq(ep->rep_attr.recv_cq);
>>>>>> - rpcrdma_clean_cq(ep->rep_attr.send_cq);
>>>>>> -
>>>>>> xprt = container_of(ia, struct rpcrdma_xprt, rx_ia);
>>>>>> id = rpcrdma_create_id(xprt, ia,
>>>>>> (struct sockaddr *)&xprt->rx_data.addr);
>> @@ -985,8 +982,6 @@
>>>>>> rpcrdma_ep_disconnect(struct rpcrdma_ep *ep, struct rpcrdma_ia
>>>>>> *ia) {
>>>>>> int rc;
>>>>>>
>>>>>> - rpcrdma_clean_cq(ep->rep_attr.recv_cq);
>>>>>> - rpcrdma_clean_cq(ep->rep_attr.send_cq);
>>>>>> rc = rdma_disconnect(ia->ri_id);
>>>>>> if (!rc) {
>>>>>> /* returns without wait if not connected */
>>>>>>
>>>
>>> --
>>> Chuck Lever
>>> chuck[dot]lever[at]oracle[dot]com
>>>
>>>
>>

--
Chuck Lever
chuck[dot]lever[at]oracle[dot]com




2014-07-02 19:53:37

by Devesh Sharma

[permalink] [raw]
Subject: RE: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport disconnect

> -----Original Message-----
> From: Steve Wise [mailto:[email protected]]
> Sent: Thursday, July 03, 2014 1:21 AM
> To: Devesh Sharma; 'Chuck Lever'; [email protected]; linux-
> [email protected]
> Subject: RE: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport
> disconnect
>
>
>
> > -----Original Message-----
> > From: Devesh Sharma [mailto:[email protected]]
> > Sent: Wednesday, July 02, 2014 2:43 PM
> > To: Steve Wise; Chuck Lever; [email protected];
> > [email protected]
> > Subject: RE: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport
> > disconnect
> >
> > > -----Original Message-----
> > > From: Steve Wise [mailto:[email protected]]
> > > Sent: Thursday, July 03, 2014 12:59 AM
> > > To: Devesh Sharma; Chuck Lever; [email protected]; linux-
> > > [email protected]
> > > Subject: Re: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport
> > > disconnect
> > >
> > > On 7/2/2014 2:06 PM, Devesh Sharma wrote:
> > > > This change is very much prone to generate poll_cq errors because
> > > > of un-cleaned completions which still point to the non-existent
> > > > QPs. On the new connection when these completions are polled, the
> > > > poll_cq will fail
> > > because old QP pointer is already NULL.
> > > > Did anyone hit this situation during their testing?
> > >
> > > Hey Devesh,
> > >
> > > iw_cxgb4 will silently toss CQEs if the QP is not active.
> >
> > Ya, just now checked that in mlx and cxgb4 driver code. On the other
> > hand ocrdma is asserting a BUG-ON for such CQEs causing system panic.
> > Out of curiosity I am asking, how this change is useful here, is it
> > reducing the re-connection time...Anyhow rpcrdma_clean_cq was
> > discarding the completions (flush/successful both)
> >
>
> Well, I don't think there is anything restricting an application from destroying
> the QP with pending CQEs on its CQs.   So it definitely shouldn't cause a
> BUG_ON() I think.   I'll have to read up in the Verbs specs if destroying a QP
> kills all the pending CQEs...

Oh confusion...let me clarify: in ocrdma BUG ON is hit in poll_cq() after re-connection happens and cq is polled again.
Now the first completion in CQ still points to old QP-ID for which ocrdma does not have valid QP pointer.

>
>
> > >
> > >
> > > >> -----Original Message-----
> > > >> From: [email protected] [mailto:linux-rdma-
> > > >> [email protected]] On Behalf Of Chuck Lever
> > > >> Sent: Tuesday, June 24, 2014 4:10 AM
> > > >> To: [email protected]; [email protected]
> > > >> Subject: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport
> > > >> disconnect
> > > >>
> > > >> CQs are not destroyed until unmount. By draining CQs on transport
> > > >> disconnect, successful completions that can change the
> > > >> r.frmr.state field can be missed.
> >
> > Still those are missed isn't it....Since those successful completions
> > will still be dropped after re- connection. Am I missing something to
> > understanding the motivation...
> >
> > > >>
> > > >> Signed-off-by: Chuck Lever <[email protected]>
> > > >> ---
> > > >>  net/sunrpc/xprtrdma/verbs.c |    5 -----
> > > >>   1 file changed, 5 deletions(-)
> > > >>
> > > >> diff --git a/net/sunrpc/xprtrdma/verbs.c
> > > >> b/net/sunrpc/xprtrdma/verbs.c index 3c7f904..451e100 100644
> > > >> --- a/net/sunrpc/xprtrdma/verbs.c
> > > >> +++ b/net/sunrpc/xprtrdma/verbs.c
> > > >> @@ -873,9 +873,6 @@ retry:
> > > >>   			dprintk("RPC:       %s:
> rpcrdma_ep_disconnect"
> > > >>   				" status %i\n", __func__, rc);
> > > >>
> > > >> -		rpcrdma_clean_cq(ep->rep_attr.recv_cq);
> > > >> -		rpcrdma_clean_cq(ep->rep_attr.send_cq);
> > > >> -
> > > >>   		xprt = container_of(ia, struct rpcrdma_xprt, rx_ia);
> > > >>   		id = rpcrdma_create_id(xprt, ia,
> > > >>   				(struct sockaddr *)&xprt-
> >rx_data.addr);
> > > @@ -985,8 +982,6 @@
> > > >> rpcrdma_ep_disconnect(struct rpcrdma_ep *ep, struct rpcrdma_ia
> > > >> *ia) {
> > > >>   	int rc;
> > > >>
> > > >> -	rpcrdma_clean_cq(ep->rep_attr.recv_cq);
> > > >> -	rpcrdma_clean_cq(ep->rep_attr.send_cq);
> > > >>   	rc = rdma_disconnect(ia->ri_id);
> > > >>   	if (!rc) {
> > > >>   		/* returns without wait if not connected */
> > > >>

2014-07-02 19:57:51

by Devesh Sharma

[permalink] [raw]
Subject: RE: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport disconnect

> -----Original Message-----
> From: Steve Wise [mailto:[email protected]]
> Sent: Thursday, July 03, 2014 1:27 AM
> To: Devesh Sharma; 'Chuck Lever'; [email protected]; linux-
> [email protected]
> Subject: RE: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport
> disconnect
>
>
>
> > -----Original Message-----
> > From: Devesh Sharma [mailto:[email protected]]
> > Sent: Wednesday, July 02, 2014 2:54 PM
> > To: Steve Wise; 'Chuck Lever'; [email protected];
> > [email protected]
> > Subject: RE: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport
> > disconnect
> >
> >
> >
> > > -----Original Message-----
> > > From: Steve Wise [mailto:[email protected]]
> > > Sent: Thursday, July 03, 2014 1:21 AM
> > > To: Devesh Sharma; 'Chuck Lever'; [email protected]; linux-
> > > [email protected]
> > > Subject: RE: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport
> > > disconnect
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: Devesh Sharma [mailto:[email protected]]
> > > > Sent: Wednesday, July 02, 2014 2:43 PM
> > > > To: Steve Wise; Chuck Lever; [email protected];
> > > > [email protected]
> > > > Subject: RE: [PATCH v1 05/13] xprtrdma: Don't drain CQs on
> > > > transport disconnect
> > > >
> > > > > -----Original Message-----
> > > > > From: Steve Wise [mailto:[email protected]]
> > > > > Sent: Thursday, July 03, 2014 12:59 AM
> > > > > To: Devesh Sharma; Chuck Lever; [email protected];
> > > > > linux- [email protected]
> > > > > Subject: Re: [PATCH v1 05/13] xprtrdma: Don't drain CQs on
> > > > > transport disconnect
> > > > >
> > > > > On 7/2/2014 2:06 PM, Devesh Sharma wrote:
> > > > > > This change is very much prone to generate poll_cq errors
> > > > > > because of un-cleaned completions which still point to the
> > > > > > non-existent QPs. On the new connection when these completions
> > > > > > are polled, the poll_cq will fail
> > > > > because old QP pointer is already NULL.
> > > > > > Did anyone hit this situation during their testing?
> > > > >
> > > > > Hey Devesh,
> > > > >
> > > > > iw_cxgb4 will silently toss CQEs if the QP is not active.
> > > >
> > > > Ya, just now checked that in mlx and cxgb4 driver code. On the
> > > > other hand ocrdma is asserting a BUG-ON for such CQEs causing system
> panic.
> > > > Out of curiosity I am asking, how this change is useful here, is
> > > > it reducing the re-connection time...Anyhow rpcrdma_clean_cq was
> > > > discarding the completions (flush/successful both)
> > > >
> > >
> > > Well, I don't think there is anything restricting an application from
> destroying
> > > the QP with pending CQEs on its CQs.   So it definitely shouldn't cause a
> > > BUG_ON() I think.   I'll have to read up in the Verbs specs if destroying a
> QP
> > > kills all the pending CQEs...
> >
> > Oh confusion...let me clarify: in ocrdma BUG ON is hit in poll_cq()
> > after re-connection happens and cq is polled again.
> > Now the first completion in CQ still points to old QP-ID for which
> > ocrdma does not have valid QP pointer.
> >
>
> Right.  Which means it's a stale CQE.  I don't think that should cause a
> BUG_ON.

Yes this surely needs a fix in ocrdma.

>
>
> > >
> > >
> > > > >
> > > > >
> > > > > >> -----Original Message-----
> > > > > >> From: [email protected] [mailto:linux-rdma-
> > > > > >> [email protected]] On Behalf Of Chuck Lever
> > > > > >> Sent: Tuesday, June 24, 2014 4:10 AM
> > > > > >> To: [email protected]; [email protected]
> > > > > >> Subject: [PATCH v1 05/13] xprtrdma: Don't drain CQs on
> > > > > >> transport disconnect
> > > > > >>
> > > > > >> CQs are not destroyed until unmount. By draining CQs on
> > > > > >> transport disconnect, successful completions that can change
> > > > > >> the r.frmr.state field can be missed.
> > > >
> > > > Still those are missed isn't it....Since those successful
> > > > completions will still be dropped after re- connection. Am I
> > > > missing something to understanding the motivation...
> > > >
> > > > > >>
> > > > > >> Signed-off-by: Chuck Lever <[email protected]>
> > > > > >> ---
> > > > > >>   net/sunrpc/xprtrdma/verbs.c |    5 -----
> > > > > >>   1 file changed, 5 deletions(-)
> > > > > >>
> > > > > >> diff --git a/net/sunrpc/xprtrdma/verbs.c
> > > > > >> b/net/sunrpc/xprtrdma/verbs.c index 3c7f904..451e100 100644
> > > > > >> --- a/net/sunrpc/xprtrdma/verbs.c
> > > > > >> +++ b/net/sunrpc/xprtrdma/verbs.c
> > > > > >> @@ -873,9 +873,6 @@ retry:
> > > > > >>   			dprintk("RPC:       %s:
> > > rpcrdma_ep_disconnect"
> > > > > >>   				" status %i\n", __func__, rc);
> > > > > >>
> > > > > >> -		rpcrdma_clean_cq(ep->rep_attr.recv_cq);
> > > > > >> -		rpcrdma_clean_cq(ep->rep_attr.send_cq);
> > > > > >> -
> > > > > >>   		xprt = container_of(ia, struct rpcrdma_xprt, rx_ia);
> > > > > >>   		id = rpcrdma_create_id(xprt, ia,
> > > > > >>   				(struct sockaddr *)&xprt-
> > > >rx_data.addr);
> > > > > @@ -985,8 +982,6 @@
> > > > > >> rpcrdma_ep_disconnect(struct rpcrdma_ep *ep, struct
> > > > > >> rpcrdma_ia
> > > > > >> *ia) {
> > > > > >>   	int rc;
> > > > > >>
> > > > > >> -	rpcrdma_clean_cq(ep->rep_attr.recv_cq);
> > > > > >> -	rpcrdma_clean_cq(ep->rep_attr.send_cq);
> > > > > >>   	rc = rdma_disconnect(ia->ri_id);
> > > > > >>   	if (!rc) {
> > > > > >>   		/* returns without wait if not connected */
> > > > > >>

2014-07-02 19:48:49

by Devesh Sharma

[permalink] [raw]
Subject: RE: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport disconnect

> -----Original Message-----
> From: Steve Wise [mailto:[email protected]]
> Sent: Thursday, July 03, 2014 1:16 AM
> To: 'Chuck Lever'; Devesh Sharma
> Cc: [email protected]; 'Linux NFS Mailing List'
> Subject: RE: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport
> disconnect
>
>
>
> > -----Original Message-----
> > From: Chuck Lever [mailto:[email protected]]
> > Sent: Wednesday, July 02, 2014 2:40 PM
> > To: Steve Wise; Devesh Sharma
> > Cc: [email protected]; Linux NFS Mailing List
> > Subject: Re: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport
> > disconnect
> >
> >
> > On Jul 2, 2014, at 3:28 PM, Steve Wise <[email protected]>
> wrote:
> >
> > > On 7/2/2014 2:06 PM, Devesh Sharma wrote:
> > >> This change is very much prone to generate poll_cq errors because
> > >> of un-cleaned
> > completions which still
> > >> point to the non-existent QPs. On the new connection when these
> > >> completions are polled,
> > the poll_cq will
> > >> fail because old QP pointer is already NULL.
> > >> Did anyone hit this situation during their testing?
> >
> > I tested this aggressively with a fault injector that triggers regular
> > connection disruption.
> >
> > > Hey Devesh,
> > >
> > > iw_cxgb4 will silently toss CQEs if the QP is not active.
> >
> > xprtrdma relies on getting a completion (either successful or in
> > error) for every WR it has posted. The goal of this patch is to avoid
> > throwing away queued completions after a transport disconnect so we
> > don't lose track of FRMR rkey updates (FAST_REG_MR and LOCAL_INV
> > completions) and we can capture all RPC replies posted before the
> connection was lost.
> >
> > Sounds like we also need to keep the QP around, even in error state,
> > until all known WRs on that QP have completed?
> >

Why not poll and process every completion during rpcrdma_cq_cleanup()....

>
> Perhaps.
>
> >
> > >
> > >
> > >>> -----Original Message-----
> > >>> From: [email protected] [mailto:linux-rdma-
> > >>> [email protected]] On Behalf Of Chuck Lever
> > >>> Sent: Tuesday, June 24, 2014 4:10 AM
> > >>> To: [email protected]; [email protected]
> > >>> Subject: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport
> > >>> disconnect
> > >>>
> > >>> CQs are not destroyed until unmount. By draining CQs on transport
> > >>> disconnect, successful completions that can change the
> > >>> r.frmr.state field can be missed.
> > >>>
> > >>> Signed-off-by: Chuck Lever <[email protected]>
> > >>> ---
> > >>>  net/sunrpc/xprtrdma/verbs.c |    5 -----
> > >>>  1 file changed, 5 deletions(-)
> > >>>
> > >>> diff --git a/net/sunrpc/xprtrdma/verbs.c
> > >>> b/net/sunrpc/xprtrdma/verbs.c index 3c7f904..451e100 100644
> > >>> --- a/net/sunrpc/xprtrdma/verbs.c
> > >>> +++ b/net/sunrpc/xprtrdma/verbs.c
> > >>> @@ -873,9 +873,6 @@ retry:
> > >>>  			dprintk("RPC:       %s: rpcrdma_ep_disconnect"
> > >>>  				" status %i\n", __func__, rc);
> > >>>
> > >>> -		rpcrdma_clean_cq(ep->rep_attr.recv_cq);
> > >>> -		rpcrdma_clean_cq(ep->rep_attr.send_cq);
> > >>> -
> > >>>  		xprt = container_of(ia, struct rpcrdma_xprt, rx_ia);
> > >>>  		id = rpcrdma_create_id(xprt, ia,
> > >>>  				(struct sockaddr *)&xprt->rx_data.addr);
> @@ -985,8 +982,6 @@
> > >>> rpcrdma_ep_disconnect(struct rpcrdma_ep *ep, struct rpcrdma_ia
> > >>> *ia)  {
> > >>>  	int rc;
> > >>>
> > >>> -	rpcrdma_clean_cq(ep->rep_attr.recv_cq);
> > >>> -	rpcrdma_clean_cq(ep->rep_attr.send_cq);
> > >>>  	rc = rdma_disconnect(ia->ri_id);
> > >>>  	if (!rc) {
> > >>>  		/* returns without wait if not connected */
> > >>>
> >
> > --
> > Chuck Lever
> > chuck[dot]lever[at]oracle[dot]com
> >
> >
>
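
(The "keep the QP around until all known WRs have completed" step Chuck raises above is what later kernels, v4.6 and on, packaged as a generic verbs helper. A usage sketch, assuming a connected rdma_cm_id like xprtrdma's ia->ri_id; this helper did not exist in the 3.16 time frame discussed here:)

#include <rdma/ib_verbs.h>
#include <rdma/rdma_cm.h>

/* Move the QP to the error state and block until every outstanding
 * send and receive WR has flushed back through the CQs. */
static void
example_quiesce_qp(struct rdma_cm_id *id)
{
	ib_drain_qp(id->qp);
}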

2014-07-02 19:06:03

by Devesh Sharma

[permalink] [raw]
Subject: RE: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport disconnect

This change is very much prone to generate poll_cq errors because of un-cleaned completions which still
point to the non-existent QPs. On the new connection when these completions are polled, the poll_cq will
fail because old QP pointer is already NULL.
Did anyone hit this situation during their testing?

> -----Original Message-----
> From: [email protected] [mailto:linux-rdma-
> [email protected]] On Behalf Of Chuck Lever
> Sent: Tuesday, June 24, 2014 4:10 AM
> To: [email protected]; [email protected]
> Subject: [PATCH v1 05/13] xprtrdma: Don't drain CQs on transport disconnect
>
> CQs are not destroyed until unmount. By draining CQs on transport
> disconnect, successful completions that can change the r.frmr.state field can
> be missed.
>
> Signed-off-by: Chuck Lever <[email protected]>
> ---
>  net/sunrpc/xprtrdma/verbs.c |    5 -----
>  1 file changed, 5 deletions(-)
>
> diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
> index 3c7f904..451e100 100644
> --- a/net/sunrpc/xprtrdma/verbs.c
> +++ b/net/sunrpc/xprtrdma/verbs.c
> @@ -873,9 +873,6 @@ retry:
>  			dprintk("RPC:       %s: rpcrdma_ep_disconnect"
>  				" status %i\n", __func__, rc);
>
> -		rpcrdma_clean_cq(ep->rep_attr.recv_cq);
> -		rpcrdma_clean_cq(ep->rep_attr.send_cq);
> -
>  		xprt = container_of(ia, struct rpcrdma_xprt, rx_ia);
>  		id = rpcrdma_create_id(xprt, ia,
>  				(struct sockaddr *)&xprt->rx_data.addr);
> @@ -985,8 +982,6 @@ rpcrdma_ep_disconnect(struct rpcrdma_ep *ep,
> struct rpcrdma_ia *ia)  {
>  	int rc;
>
> -	rpcrdma_clean_cq(ep->rep_attr.recv_cq);
> -	rpcrdma_clean_cq(ep->rep_attr.send_cq);
>  	rc = rdma_disconnect(ia->ri_id);
>  	if (!rc) {
>  		/* returns without wait if not connected */
>
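
(For reference, the rpcrdma_clean_cq() helper whose calls this patch removes simply polled the CQ and threw every completion away. Roughly, as a paraphrase of the 3.15-era helper rather than a verbatim copy:)

static void
rpcrdma_clean_cq(struct ib_cq *cq)
{
	struct ib_wc wc;
	int count = 0;

	/* Discard everything queued on the CQ, flushed and successful
	 * completions alike; dropping successful FAST_REG_MR and
	 * LOCAL_INV completions is what loses FRMR state updates. */
	while (ib_poll_cq(cq, 1, &wc) == 1)
		++count;

	if (count)
		dprintk("RPC:       %s: flushed %d events (last 0x%x)\n",
			__func__, count, wc.opcode);
}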