2016-10-28 11:15:01

by Hans Westgaard Ry

Subject: [PATCH] IBcore/CM: Issue DREQ when receiving REQ/REP for stale QP

from "InfiBand Architecture Specifications Volume 1":

A QP is said to have a stale connection when only one side has
connection information. A stale connection may result if the remote CM
had dropped the connection and sent a DREQ but the DREQ was never
received by the local CM. Alternatively the remote CM may have lost
all record of past connections because its node crashed and rebooted,
while the local CM did not become aware of the remote node's reboot
and therefore did not clean up stale connections.

and:

A local CM may receive a REQ/REP for a stale connection. It shall
abort the connection issuing REJ to the REQ/REP. It shall then issue
DREQ with "DREQ:remote QPN" set to the remote QPN from the REQ/REP.

This patch solves a problem with reuse of QPNs. The current code base,
that is IPoIB, relies on a reap mechanism to clean up the CM
structures. The problem is the time constants governing this
mechanism: they are up to 768 seconds, and the interface may appear
unresponsive during that period. Issuing a DREQ (and receiving a DREP)
does the necessary cleanup immediately and the interface comes up.
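
For reference, the time constants in question come from the IPoIB
connected-mode reap logic. The defines below are assumptions quoted
for illustration only (believed to live in
drivers/infiniband/ulp/ipoib/ipoib_cm.c), to show where the
768-second figure comes from:

  /* assumed IPoIB connected-mode reap constants, for illustration only */
  #define IPOIB_CM_RX_UPDATE_TIME (256 * HZ)
  #define IPOIB_CM_RX_TIMEOUT     (2 * 256 * HZ)   /* 512 seconds */
  #define IPOIB_CM_RX_DELAY       (3 * 256 * HZ)   /* 768 seconds */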

Signed-off-by: Hans Westgaard Ry <[email protected]>
Reviewed-by: Håkon Bugge <[email protected]>
---
drivers/infiniband/core/cm.c | 24 +++++++++++++++++++++++-
1 file changed, 23 insertions(+), 1 deletion(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index c995255..c97e4d5 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -1519,6 +1519,7 @@ static struct cm_id_private * cm_match_req(struct cm_work *work,
         struct cm_id_private *listen_cm_id_priv, *cur_cm_id_priv;
         struct cm_timewait_info *timewait_info;
         struct cm_req_msg *req_msg;
+        struct ib_cm_id *cm_id;

         req_msg = (struct cm_req_msg *)work->mad_recv_wc->recv_buf.mad;

@@ -1540,10 +1541,18 @@ static struct cm_id_private * cm_match_req(struct cm_work *work,
         timewait_info = cm_insert_remote_qpn(cm_id_priv->timewait_info);
         if (timewait_info) {
                 cm_cleanup_timewait(cm_id_priv->timewait_info);
+                cur_cm_id_priv = cm_get_id(timewait_info->work.local_id,
+                                           timewait_info->work.remote_id);
+
                 spin_unlock_irq(&cm.lock);
                 cm_issue_rej(work->port, work->mad_recv_wc,
                              IB_CM_REJ_STALE_CONN, CM_MSG_RESPONSE_REQ,
                              NULL, 0);
+                if (cur_cm_id_priv) {
+                        cm_id = &cur_cm_id_priv->id;
+                        ib_send_cm_dreq(cm_id, NULL, 0);
+                        cm_deref_id(cur_cm_id_priv);
+                }
                 return NULL;
         }

@@ -1919,6 +1928,9 @@ static int cm_rep_handler(struct cm_work *work)
         struct cm_id_private *cm_id_priv;
         struct cm_rep_msg *rep_msg;
         int ret;
+        struct cm_id_private *cur_cm_id_priv;
+        struct ib_cm_id *cm_id;
+        struct cm_timewait_info *timewait_info;

         rep_msg = (struct cm_rep_msg *)work->mad_recv_wc->recv_buf.mad;
         cm_id_priv = cm_acquire_id(rep_msg->remote_comm_id, 0);
@@ -1953,16 +1965,26 @@ static int cm_rep_handler(struct cm_work *work)
                 goto error;
         }
         /* Check for a stale connection. */
-        if (cm_insert_remote_qpn(cm_id_priv->timewait_info)) {
+        timewait_info = cm_insert_remote_qpn(cm_id_priv->timewait_info);
+        if (timewait_info) {
                 rb_erase(&cm_id_priv->timewait_info->remote_id_node,
                          &cm.remote_id_table);
                 cm_id_priv->timewait_info->inserted_remote_id = 0;
+                cur_cm_id_priv = cm_get_id(timewait_info->work.local_id,
+                                           timewait_info->work.remote_id);
+
                 spin_unlock(&cm.lock);
                 spin_unlock_irq(&cm_id_priv->lock);
                 cm_issue_rej(work->port, work->mad_recv_wc,
                              IB_CM_REJ_STALE_CONN, CM_MSG_RESPONSE_REP,
                              NULL, 0);
                 ret = -EINVAL;
+                if (cur_cm_id_priv) {
+                        cm_id = &cur_cm_id_priv->id;
+                        ib_send_cm_dreq(cm_id, NULL, 0);
+                        cm_deref_id(cur_cm_id_priv);
+                }
+
                 goto error;
         }
         spin_unlock(&cm.lock);
--
2.7.4


2016-10-30 21:06:41

by Sagi Grimberg

Subject: Re: [PATCH] IBcore/CM: Issue DREQ when receiving REQ/REP for stale QP

> from "InfiBand Architecture Specifications Volume 1":
>
> A QP is said to have a stale connection when only one side has
> connection information. A stale connection may result if the remote CM
> had dropped the connection and sent a DREQ but the DREQ was never
> received by the local CM. Alternatively the remote CM may have lost
> all record of past connections because its node crashed and rebooted,
> while the local CM did not become aware of the remote node's reboot
> and therefore did not clean up stale connections.
>
> and:
>
> A local CM may receive a REQ/REP for a stale connection. It shall
> abort the connection issuing REJ to the REQ/REP. It shall then issue
> DREQ with "DREQ:remote QPN" set to the remote QPN from the REQ/REP.
>
> This patch solves a problem with reuse of QPNs. The current code base,
> that is IPoIB, relies on a reap mechanism to clean up the CM
> structures. The problem is the time constants governing this
> mechanism: they are up to 768 seconds, and the interface may appear
> unresponsive during that period. Issuing a DREQ (and receiving a DREP)
> does the necessary cleanup immediately and the interface comes up.

I like this fix, so,

Reviewed-by: Sagi Grimberg <[email protected]>

But I think the CM layer still is buggy in this area.

In vol 1 the state transition table specifically states that DREP
timeouts should move the cm_id to timewait state but the CM doesn't
seem to maintain response timeouts on disconnect requests. If the
DREQ happened to fail (send error completion) things are fine, but
if the DREQ makes it to the peer but it doesn't reply then no one
will take care of it (i.e. we will never see a TIMEWAIT event from
this cm_id)...

I recall some debugging session with Hal on this area a ~year ago
with a new iser target (which didn't reply to DREQs on reboot
sequences). The iser initiator waits for DISCONNECTED/TIMEWAIT events
before destroying the cm_id (which never happened because of the
above). I think I ended up working around that in iser to just go
ahead and destroy the cm_id after issuing a DREQ (but now I realize
it was never included so I'll probably dig it up again soon).
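
Roughly, that workaround looked something like the sketch below (from
memory, assuming an rdma_cm consumer like iser; not the actual patch):

  /*
   * Minimal sketch: send the DREQ and tear the cm_id down immediately
   * instead of waiting for a DISCONNECTED/TIMEWAIT event that a dead
   * peer will never generate.
   */
  static void teardown_without_waiting(struct rdma_cm_id *cm_id)
  {
          rdma_disconnect(cm_id);   /* sends the DREQ if we're connected */
          rdma_destroy_id(cm_id);   /* don't wait for DREP/TIMEWAIT */
  }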

2016-10-31 04:52:56

by Santosh Shilimkar

Subject: Re: [PATCH] IBcore/CM: Issue DREQ when receiving REQ/REP for stale QP

On 10/30/16 2:06 PM, Sagi Grimberg wrote:
>> from "InfiBand Architecture Specifications Volume 1":
>>
>> A QP is said to have a stale connection when only one side has
>> connection information. A stale connection may result if the remote CM
>> had dropped the connection and sent a DREQ but the DREQ was never
>> received by the local CM. Alternatively the remote CM may have lost
>> all record of past connections because its node crashed and rebooted,
>> while the local CM did not become aware of the remote node's reboot
>> and therefore did not clean up stale connections.
>>
>> and:
>>
>> A local CM may receive a REQ/REP for a stale connection. It shall
>> abort the connection issuing REJ to the REQ/REP. It shall then issue
>> DREQ with "DREQ:remote QPN" set to the remote QPN from the REQ/REP.
>>
>> This patch solves a problem with reuse of QPNs. The current code base,
>> that is IPoIB, relies on a reap mechanism to clean up the CM
>> structures. The problem is the time constants governing this
>> mechanism: they are up to 768 seconds, and the interface may appear
>> unresponsive during that period. Issuing a DREQ (and receiving a DREP)
>> does the necessary cleanup immediately and the interface comes up.
>
> I like this fix, so,
>
Me too, and hence I suggested Hans post it to the rdma list when I
saw this patch in internal review.

> Reviewed-by: Sagi Grimberg <[email protected]>
>
> But I think the CM layer still is buggy in this area.
>
> In vol 1 the state transition table specifically states that DREP
> timeouts should move the cm_id to timewait state but the CM doesn't
> seem to maintain response timeouts on disconnect requests. If the
> DREQ happened to fail (send error completion) things are fine, but
> if the DREQ makes it to the peer but it doesn't reply then no one
> will take care of it (i.e. we will never see a TIMEWAIT event from
> this cm_id)...
>
> I recall some debugging session with Hal on this area a ~year ago
> with a new iser target (which didn't reply to DREQs on reboot
> sequences). The iser initiator waits for DISCONNECTED/TIMEWAIT events
> before destroying the cm_id (which never happened because of the
> above). I think I ended up working around that in iser to just go
> ahead and destroy the cm_id after issuing a DREQ (but now I realize
> it was never included so I'll probably dig it up again soon).

There is another fundamental issue with the core CM code wrt DREQs
getting dropped. The MAD agent used to send the DREQ is associated
with a port, and if this port is down, the IB link layer will drop
that DREQ as per the spec. Similarly, if the destination port the
DREQ is supposed to reach is down, then the DREQ gets dropped there
too. The dropped MADs for these cm_ids are retried by the MAD agent
on the same port until the port comes back up.

I am not sure whether the ports were going down in your case, but if
that was the case then IIUC what you are doing for ISER is still
needed (destroying the cm_id up front).

Regards,
Santosh


2016-12-14 17:56:00

by Doug Ledford

Subject: Re: [PATCH] IBcore/CM: Issue DREQ when receiving REQ/REP for stale QP

On 10/28/2016 7:14 AM, Hans Westgaard Ry wrote:
> from "InfiBand Architecture Specifications Volume 1":
>
> A QP is said to have a stale connection when only one side has
> connection information. A stale connection may result if the remote CM
> had dropped the connection and sent a DREQ but the DREQ was never
> received by the local CM. Alternatively the remote CM may have lost
> all record of past connections because its node crashed and rebooted,
> while the local CM did not become aware of the remote node's reboot
> and therefore did not clean up stale connections.
>
> and:
>
> A local CM may receive a REQ/REP for a stale connection. It shall
> abort the connection issuing REJ to the REQ/REP. It shall then issue
> DREQ with "DREQ:remote QPN" set to the remote QPN from the REQ/REP.
>
> This patch solves a problem with reuse of QPNs. The current code base,
> that is IPoIB, relies on a reap mechanism to clean up the CM
> structures. The problem is the time constants governing this
> mechanism: they are up to 768 seconds, and the interface may appear
> unresponsive during that period. Issuing a DREQ (and receiving a DREP)
> does the necessary cleanup immediately and the interface comes up.
>
> Signed-off-by: Hans Westgaard Ry <[email protected]>
> Reviewed-by: Håkon Bugge <[email protected]>

Thanks, applied.


--
Doug Ledford <[email protected]>
GPG Key ID: 0E572FDD

