Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
        id S967728AbeAONXc (ORCPT + 1 other);
        Mon, 15 Jan 2018 08:23:32 -0500
Received: from mail.linuxfoundation.org ([140.211.169.12]:49752 "EHLO
        mail.linuxfoundation.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
        with ESMTP id S934996AbeAOMro (ORCPT);
        Mon, 15 Jan 2018 07:47:44 -0500
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org,
        Steve Wise, Jason Gunthorpe
Subject: [PATCH 4.14 021/118] iw_cxgb4: when flushing, complete all wrs in a chain
Date: Mon, 15 Jan 2018 13:34:08 +0100
Message-Id: <20180115123416.511660063@linuxfoundation.org>
X-Mailer: git-send-email 2.15.1
In-Reply-To: <20180115123415.325497625@linuxfoundation.org>
References: <20180115123415.325497625@linuxfoundation.org>
User-Agent: quilt/0.65
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org
Return-Path:

4.14-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Steve Wise

commit d14587334580bc94d3ee11e8320e0c157f91ae8f upstream.

If a wr chain was posted and needed to be flushed, only the first wr in
the chain was completed with FLUSHED status.  The rest were never
completed.  This caused isert to hang on shutdown due to the missing
completions which left iscsi IO commands referenced, stalling the
shutdown.

Fixes: 4fe7c2962e11 ("iw_cxgb4: refactor sq/rq drain logic")
Signed-off-by: Steve Wise
Signed-off-by: Jason Gunthorpe
Signed-off-by: Greg Kroah-Hartman

---
 drivers/infiniband/hw/cxgb4/qp.c |   28 ++++++++++++++++++++++++++--
 1 file changed, 26 insertions(+), 2 deletions(-)

--- a/drivers/infiniband/hw/cxgb4/qp.c
+++ b/drivers/infiniband/hw/cxgb4/qp.c
@@ -862,6 +862,22 @@ static int complete_sq_drain_wr(struct c
 	return 0;
 }
 
+static int complete_sq_drain_wrs(struct c4iw_qp *qhp, struct ib_send_wr *wr,
+				 struct ib_send_wr **bad_wr)
+{
+	int ret = 0;
+
+	while (wr) {
+		ret = complete_sq_drain_wr(qhp, wr);
+		if (ret) {
+			*bad_wr = wr;
+			break;
+		}
+		wr = wr->next;
+	}
+	return ret;
+}
+
 static void complete_rq_drain_wr(struct c4iw_qp *qhp, struct ib_recv_wr *wr)
 {
 	struct t4_cqe cqe = {};
@@ -894,6 +910,14 @@ static void complete_rq_drain_wr(struct
 	}
 }
 
+static void complete_rq_drain_wrs(struct c4iw_qp *qhp, struct ib_recv_wr *wr)
+{
+	while (wr) {
+		complete_rq_drain_wr(qhp, wr);
+		wr = wr->next;
+	}
+}
+
 int c4iw_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
 		   struct ib_send_wr **bad_wr)
 {
@@ -917,7 +941,7 @@ int c4iw_post_send(struct ib_qp *ibqp, s
 	 */
 	if (qhp->wq.flushed) {
 		spin_unlock_irqrestore(&qhp->lock, flag);
-		err = complete_sq_drain_wr(qhp, wr);
+		err = complete_sq_drain_wrs(qhp, wr, bad_wr);
 		return err;
 	}
 	num_wrs = t4_sq_avail(&qhp->wq);
@@ -1066,7 +1090,7 @@ int c4iw_post_receive(struct ib_qp *ibqp
 	 */
 	if (qhp->wq.flushed) {
 		spin_unlock_irqrestore(&qhp->lock, flag);
-		complete_rq_drain_wr(qhp, wr);
+		complete_rq_drain_wrs(qhp, wr);
 		return err;
 	}
 	num_wrs = t4_rq_avail(&qhp->wq);
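
For context only (not part of the patch): the sketch below shows roughly how a
verbs consumer such as isert might post a chained pair of send work requests
in one ib_post_send() call.  The function name post_two_sends and the sge
arguments are illustrative assumptions, not code from iw_cxgb4 or isert.  The
wrs are linked through wr->next, and the consumer expects one completion per
posted wr even when the QP is already flushed, which is what
complete_sq_drain_wrs() now guarantees.

/*
 * Hypothetical sketch: post two chained send wrs on an RC QP.
 * Assumes the sges already describe registered buffers.
 */
#include <rdma/ib_verbs.h>

static int post_two_sends(struct ib_qp *qp, struct ib_sge *sge0,
			  struct ib_sge *sge1)
{
	struct ib_send_wr wr[2] = {}, *bad_wr;

	wr[0].wr_id      = 0;
	wr[0].sg_list    = sge0;
	wr[0].num_sge    = 1;
	wr[0].opcode     = IB_WR_SEND;
	wr[0].send_flags = IB_SEND_SIGNALED;
	wr[0].next       = &wr[1];	/* chain the second wr */

	wr[1].wr_id      = 1;
	wr[1].sg_list    = sge1;
	wr[1].num_sge    = 1;
	wr[1].opcode     = IB_WR_SEND;
	wr[1].send_flags = IB_SEND_SIGNALED;
	wr[1].next       = NULL;	/* end of chain */

	/*
	 * Both wrs must eventually generate a completion (flushed or
	 * successful); on error, bad_wr points at the failing wr.
	 */
	return ib_post_send(qp, &wr[0], &bad_wr);
}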