From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Lijun Ou, Weihang Li, Jason Gunthorpe
Subject: [PATCH 5.5 146/176] RDMA/hns: Bugfix for posting a wqe with sge
Date: Tue, 3 Mar 2020 18:43:30 +0100
Message-Id: <20200303174321.629798282@linuxfoundation.org>
In-Reply-To: <20200303174304.593872177@linuxfoundation.org>
References: <20200303174304.593872177@linuxfoundation.org>

From: Lijun Ou

commit 468d020e2f02867b8ec561461a1689cd4365e493 upstream.

The driver should first check whether each sge is valid, and only then
fill the valid sges and the calculated total length into hardware;
otherwise, invalid (zero length) sges will cause an error.

Fixes: 52e3b42a2f58 ("RDMA/hns: Filter for zero length of sge in hip08 kernel mode")
Fixes: 7bdee4158b37 ("RDMA/hns: Fill sq wqe context of ud type in hip08")
Link: https://lore.kernel.org/r/1578571852-13704-1-git-send-email-liweihang@huawei.com
Signed-off-by: Lijun Ou
Signed-off-by: Weihang Li
Signed-off-by: Jason Gunthorpe
Signed-off-by: Greg Kroah-Hartman
---
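The core of the change is a single pre-scan of wr->sg_list that skips
zero-length entries, so that the sge count and the total message length
written into the wqe describe only the sges that actually get filled.
A minimal standalone sketch of that pre-scan (count_valid_sges and
struct sge are illustrative stand-ins, not the driver's real types):

#include <stdint.h>

/* Illustrative stand-in for struct ib_sge; only length matters here. */
struct sge {
	uint64_t addr;
	uint32_t lkey;
	uint32_t length;
};

/*
 * Count the sges that will actually be programmed into the wqe and
 * accumulate their total length, skipping zero-length entries. This
 * mirrors the pre-scan the patch adds to hns_roce_v2_post_send().
 */
static int count_valid_sges(const struct sge *sg_list, int num_sge,
			    uint32_t *total_len)
{
	int valid = 0;
	int i;

	*total_len = 0;
	for (i = 0; i < num_sge; i++) {
		if (sg_list[i].length) {
			*total_len += sg_list[i].length;
			valid++;
		}
	}
	return valid;
}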
 drivers/infiniband/hw/hns/hns_roce_hw_v2.c |   41 +++++++++++++++++------------
 1 file changed, 25 insertions(+), 16 deletions(-)

--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -110,7 +110,7 @@ static void set_atomic_seg(struct hns_ro
 }
 
 static void set_extend_sge(struct hns_roce_qp *qp, const struct ib_send_wr *wr,
-			   unsigned int *sge_ind)
+			   unsigned int *sge_ind, int valid_num_sge)
 {
 	struct hns_roce_v2_wqe_data_seg *dseg;
 	struct ib_sge *sg;
@@ -123,7 +123,7 @@ static void set_extend_sge(struct hns_ro
 	if (qp->ibqp.qp_type == IB_QPT_RC || qp->ibqp.qp_type == IB_QPT_UC)
 		num_in_wqe = HNS_ROCE_V2_UC_RC_SGE_NUM_IN_WQE;
 
-	extend_sge_num = wr->num_sge - num_in_wqe;
+	extend_sge_num = valid_num_sge - num_in_wqe;
 	sg = wr->sg_list + num_in_wqe;
 	shift = qp->hr_buf.page_shift;
 
@@ -159,14 +159,16 @@ static void set_extend_sge(struct hns_ro
 static int set_rwqe_data_seg(struct ib_qp *ibqp, const struct ib_send_wr *wr,
 			     struct hns_roce_v2_rc_send_wqe *rc_sq_wqe,
 			     void *wqe, unsigned int *sge_ind,
+			     int valid_num_sge,
 			     const struct ib_send_wr **bad_wr)
 {
 	struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
 	struct hns_roce_v2_wqe_data_seg *dseg = wqe;
 	struct hns_roce_qp *qp = to_hr_qp(ibqp);
+	int j = 0;
 	int i;
 
-	if (wr->send_flags & IB_SEND_INLINE && wr->num_sge) {
+	if (wr->send_flags & IB_SEND_INLINE && valid_num_sge) {
 		if (le32_to_cpu(rc_sq_wqe->msg_len) >
 		    hr_dev->caps.max_sq_inline) {
 			*bad_wr = wr;
@@ -190,7 +192,7 @@ static int set_rwqe_data_seg(struct ib_q
 		roce_set_bit(rc_sq_wqe->byte_4, V2_RC_SEND_WQE_BYTE_4_INLINE_S,
 			     1);
 	} else {
-		if (wr->num_sge <= HNS_ROCE_V2_UC_RC_SGE_NUM_IN_WQE) {
+		if (valid_num_sge <= HNS_ROCE_V2_UC_RC_SGE_NUM_IN_WQE) {
 			for (i = 0; i < wr->num_sge; i++) {
 				if (likely(wr->sg_list[i].length)) {
 					set_data_seg_v2(dseg, wr->sg_list + i);
@@ -203,19 +205,21 @@ static int set_rwqe_data_seg(struct ib_q
 				       V2_RC_SEND_WQE_BYTE_20_MSG_START_SGE_IDX_S,
 				       (*sge_ind) & (qp->sge.sge_cnt - 1));
 
-			for (i = 0; i < HNS_ROCE_V2_UC_RC_SGE_NUM_IN_WQE; i++) {
+			for (i = 0; i < wr->num_sge &&
+			     j < HNS_ROCE_V2_UC_RC_SGE_NUM_IN_WQE; i++) {
 				if (likely(wr->sg_list[i].length)) {
 					set_data_seg_v2(dseg, wr->sg_list + i);
 					dseg++;
+					j++;
 				}
 			}
 
-			set_extend_sge(qp, wr, sge_ind);
+			set_extend_sge(qp, wr, sge_ind, valid_num_sge);
 		}
 
 		roce_set_field(rc_sq_wqe->byte_16,
 			       V2_RC_SEND_WQE_BYTE_16_SGE_NUM_M,
-			       V2_RC_SEND_WQE_BYTE_16_SGE_NUM_S, wr->num_sge);
+			       V2_RC_SEND_WQE_BYTE_16_SGE_NUM_S, valid_num_sge);
 	}
 
 	return 0;
@@ -243,6 +247,7 @@ static int hns_roce_v2_post_send(struct
 	unsigned int sge_idx;
 	unsigned int wqe_idx;
 	unsigned long flags;
+	int valid_num_sge;
 	void *wqe = NULL;
 	bool loopback;
 	int attr_mask;
@@ -292,8 +297,16 @@ static int hns_roce_v2_post_send(struct
 		qp->sq.wrid[wqe_idx] = wr->wr_id;
 		owner_bit =
 		       ~(((qp->sq.head + nreq) >> ilog2(qp->sq.wqe_cnt)) & 0x1);
+		valid_num_sge = 0;
 		tmp_len = 0;
 
+		for (i = 0; i < wr->num_sge; i++) {
+			if (likely(wr->sg_list[i].length)) {
+				tmp_len += wr->sg_list[i].length;
+				valid_num_sge++;
+			}
+		}
+
 		/* Corresponding to the QP type, wqe process separately */
 		if (ibqp->qp_type == IB_QPT_GSI) {
 			ud_sq_wqe = wqe;
@@ -329,9 +342,6 @@ static int hns_roce_v2_post_send(struct
 				       V2_UD_SEND_WQE_BYTE_4_OPCODE_S,
 				       HNS_ROCE_V2_WQE_OP_SEND);
 
-			for (i = 0; i < wr->num_sge; i++)
-				tmp_len += wr->sg_list[i].length;
-
 			ud_sq_wqe->msg_len =
 			 cpu_to_le32(le32_to_cpu(ud_sq_wqe->msg_len) + tmp_len);
 
@@ -367,7 +377,7 @@ static int hns_roce_v2_post_send(struct
 			roce_set_field(ud_sq_wqe->byte_16,
 				       V2_UD_SEND_WQE_BYTE_16_SGE_NUM_M,
 				       V2_UD_SEND_WQE_BYTE_16_SGE_NUM_S,
-				       wr->num_sge);
+				       valid_num_sge);
 
 			roce_set_field(ud_sq_wqe->byte_20,
 				       V2_UD_SEND_WQE_BYTE_20_MSG_START_SGE_IDX_M,
@@ -422,12 +432,10 @@ static int hns_roce_v2_post_send(struct
 			memcpy(&ud_sq_wqe->dgid[0], &ah->av.dgid[0],
 			       GID_LEN_V2);
 
-			set_extend_sge(qp, wr, &sge_idx);
+			set_extend_sge(qp, wr, &sge_idx, valid_num_sge);
 		} else if (ibqp->qp_type == IB_QPT_RC) {
 			rc_sq_wqe = wqe;
 			memset(rc_sq_wqe, 0, sizeof(*rc_sq_wqe));
-			for (i = 0; i < wr->num_sge; i++)
-				tmp_len += wr->sg_list[i].length;
 
 			rc_sq_wqe->msg_len =
 			 cpu_to_le32(le32_to_cpu(rc_sq_wqe->msg_len) + tmp_len);
@@ -548,10 +556,11 @@ static int hns_roce_v2_post_send(struct
 			roce_set_field(rc_sq_wqe->byte_16,
 				       V2_RC_SEND_WQE_BYTE_16_SGE_NUM_M,
 				       V2_RC_SEND_WQE_BYTE_16_SGE_NUM_S,
-				       wr->num_sge);
+				       valid_num_sge);
 		} else if (wr->opcode != IB_WR_REG_MR) {
 			ret = set_rwqe_data_seg(ibqp, wr, rc_sq_wqe,
-						wqe, &sge_idx, bad_wr);
+						wqe, &sge_idx,
+						valid_num_sge, bad_wr);
 			if (ret)
 				goto out;
 		}
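
Why set_extend_sge() needs valid_num_sge as well: for RC/UC QPs the
first HNS_ROCE_V2_UC_RC_SGE_NUM_IN_WQE valid sges live in the wqe
itself and only the remainder spill into the extended sge area, so the
spill count must be derived from the number of sges actually written,
not from wr->num_sge. A hedged sketch of that arithmetic (assuming the
driver's in-wqe sge count of 2; the clamp to zero is a simplification,
since the driver only takes this path when there is something to
spill):

#include <assert.h>

/* Assumed value of HNS_ROCE_V2_UC_RC_SGE_NUM_IN_WQE for RC/UC QPs. */
#define SGE_NUM_IN_WQE 2

/* Number of valid sges that spill past the wqe into the extended area. */
static int extend_sge_count(int valid_num_sge)
{
	return valid_num_sge > SGE_NUM_IN_WQE ?
	       valid_num_sge - SGE_NUM_IN_WQE : 0;
}

int main(void)
{
	/* Five sges posted, one zero-length: four are valid, two fit in
	 * the wqe and two go to the extended sge area. */
	assert(extend_sge_count(4) == 2);
	return 0;
}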