From: Ben Hutchings <ben@decadent.org.uk>
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: akpm@linux-foundation.org, "Doug Ledford", "Steve Wise"
Date: Sun, 09 Dec 2018 21:50:33 +0000
Subject: [PATCH 3.16 193/328] iw_cxgb4: atomically flush the qp
3.16.62-rc1 review patch.  If anyone has any objections, please let me know.

------------------

From: Steve Wise

commit bc52e9ca74b9a395897bb640c6671b2cbf716032 upstream.

__flush_qp() has a race condition where during the flush operation,
the qp lock is released allowing another thread to possibly post a WR,
which corrupts the queue state, possibly causing crashes.  The lock was
released to preserve the cq/qp locking hierarchy of cq first, then qp.
However releasing the qp lock is not necessary; both RQ and SQ CQ locks
can be acquired first, followed by the qp lock, and then the RQ and SQ
flushing can be done w/o unlocking.

Signed-off-by: Steve Wise
Signed-off-by: Doug Ledford
[bwh: Backported to 3.16: adjust context]
Signed-off-by: Ben Hutchings
---
 drivers/infiniband/hw/cxgb4/qp.c | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

--- a/drivers/infiniband/hw/cxgb4/qp.c
+++ b/drivers/infiniband/hw/cxgb4/qp.c
@@ -1071,31 +1071,34 @@ static void __flush_qp(struct c4iw_qp *q
 
 	PDBG("%s qhp %p rchp %p schp %p\n", __func__, qhp, rchp, schp);
 
-	/* locking hierarchy: cq lock first, then qp lock. */
+	/* locking hierarchy: cqs lock first, then qp lock. */
 	spin_lock_irqsave(&rchp->lock, flag);
+	if (schp != rchp)
+		spin_lock(&schp->lock);
 	spin_lock(&qhp->lock);
 
 	if (qhp->wq.flushed) {
 		spin_unlock(&qhp->lock);
+		if (schp != rchp)
+			spin_unlock(&schp->lock);
 		spin_unlock_irqrestore(&rchp->lock, flag);
 		return;
 	}
 	qhp->wq.flushed = 1;
+	t4_set_wq_in_error(&qhp->wq);
 
 	c4iw_flush_hw_cq(rchp, qhp);
 	c4iw_count_rcqes(&rchp->cq, &qhp->wq, &count);
 	rq_flushed = c4iw_flush_rq(&qhp->wq, &rchp->cq, count);
-	spin_unlock(&qhp->lock);
-	spin_unlock_irqrestore(&rchp->lock, flag);
 
-	/* locking hierarchy: cq lock first, then qp lock. */
-	spin_lock_irqsave(&schp->lock, flag);
-	spin_lock(&qhp->lock);
 	if (schp != rchp)
 		c4iw_flush_hw_cq(schp, qhp);
 	sq_flushed = c4iw_flush_sq(qhp);
+
 	spin_unlock(&qhp->lock);
-	spin_unlock_irqrestore(&schp->lock, flag);
+	if (schp != rchp)
+		spin_unlock(&schp->lock);
+	spin_unlock_irqrestore(&rchp->lock, flag);
 
 	if (schp == rchp) {
 		if (t4_clear_cq_armed(&rchp->cq) &&
@@ -1129,8 +1132,8 @@ static void flush_qp(struct c4iw_qp *qhp
 	rchp = to_c4iw_cq(qhp->ibqp.recv_cq);
 	schp = to_c4iw_cq(qhp->ibqp.send_cq);
 
-	t4_set_wq_in_error(&qhp->wq);
 	if (qhp->ibqp.uobject) {
+		t4_set_wq_in_error(&qhp->wq);
 		t4_set_cq_in_error(&rchp->cq);
 		spin_lock_irqsave(&rchp->comp_handler_lock, flag);
 		(*rchp->ibcq.comp_handler)(&rchp->ibcq, rchp->ibcq.cq_context);
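
The change boils down to a lock-ordering rule: take the receive-CQ lock, then
the send-CQ lock (only if it is a different CQ), then the QP lock, and hold
all three across both flushes instead of dropping the QP lock between the RQ
and SQ steps.  Below is a minimal userspace sketch of that pattern, not
kernel code; it uses pthread mutexes and invented structure names rather than
the real driver types.

#include <pthread.h>
#include <stdio.h>

struct cq { pthread_mutex_t lock; };
struct qp { pthread_mutex_t lock; int flushed; };

static void flush_qp_atomically(struct qp *qhp, struct cq *rchp, struct cq *schp)
{
	/* locking hierarchy: CQ locks first, then the QP lock */
	pthread_mutex_lock(&rchp->lock);
	if (schp != rchp)
		pthread_mutex_lock(&schp->lock);
	pthread_mutex_lock(&qhp->lock);

	if (!qhp->flushed) {
		qhp->flushed = 1;
		/*
		 * Flush the RQ against rchp and the SQ against schp here,
		 * without ever releasing qhp->lock, so no other thread can
		 * post a work request in the middle of the flush.
		 */
		printf("qp flushed while holding all locks\n");
	}

	/* release in the reverse order of acquisition */
	pthread_mutex_unlock(&qhp->lock);
	if (schp != rchp)
		pthread_mutex_unlock(&schp->lock);
	pthread_mutex_unlock(&rchp->lock);
}

int main(void)
{
	struct cq rcq = { PTHREAD_MUTEX_INITIALIZER };
	struct cq scq = { PTHREAD_MUTEX_INITIALIZER };
	struct qp qp  = { PTHREAD_MUTEX_INITIALIZER, 0 };

	flush_qp_atomically(&qp, &rcq, &scq);	/* distinct send/recv CQs */
	flush_qp_atomically(&qp, &rcq, &rcq);	/* shared CQ: locked only once */
	return 0;
}

As in the patch, the schp != rchp checks prevent the same CQ lock from being
taken twice when a QP uses one CQ for both its send and receive queues.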