From: Daisuke Matsuda <matsuda-daisuke@fujitsu.com>
To: linux-rdma@vger.kernel.org, leonro@nvidia.com, jgg@nvidia.com,
	zyjzyj2000@gmail.com
Cc: nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org,
	rpearsonhpe@gmail.com, yangx.jy@fujitsu.com, lizhijian@fujitsu.com,
	y-goto@fujitsu.com, Daisuke Matsuda <matsuda-daisuke@fujitsu.com>
Subject: [RFC PATCH 3/7] RDMA/rxe: Cleanup code for responder Atomic operations
Date: Wed, 7 Sep 2022 11:43:01 +0900
Message-Id: <861f3f8f8a07ce066a05cc5a2210bde76740f870.1662461897.git.matsuda-daisuke@fujitsu.com>

Currently, rxe_responder() directly calls the function that executes Atomic
operations. This needs to be modified so that conditional branches can be
inserted for the new RDMA Atomic Write operation and the ODP feature.

Signed-off-by: Daisuke Matsuda <matsuda-daisuke@fujitsu.com>
---
 drivers/infiniband/sw/rxe/rxe_resp.c | 102 +++++++++++++++++----------
 1 file changed, 64 insertions(+), 38 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index e97c55b292f0..cadc8fa64dd0 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -591,60 +591,86 @@ static struct resp_res *rxe_prepare_res(struct rxe_qp *qp,
 /* Guarantee atomicity of atomic operations at the machine level. */
 static DEFINE_SPINLOCK(atomic_ops_lock);
 
-static enum resp_states atomic_reply(struct rxe_qp *qp,
-				     struct rxe_pkt_info *pkt)
+enum resp_states rxe_process_atomic(struct rxe_qp *qp,
+				    struct rxe_pkt_info *pkt, u64 *vaddr)
 {
-	u64 *vaddr;
 	enum resp_states ret;
-	struct rxe_mr *mr = qp->resp.mr;
 	struct resp_res *res = qp->resp.res;
 	u64 value;
 
-	if (!res) {
-		res = rxe_prepare_res(qp, pkt, RXE_ATOMIC_MASK);
-		qp->resp.res = res;
+	/* check vaddr is 8 bytes aligned. */
+	if (!vaddr || (uintptr_t)vaddr & 7) {
+		ret = RESPST_ERR_MISALIGNED_ATOMIC;
+		goto out;
 	}
 
-	if (!res->replay) {
-		if (mr->state != RXE_MR_STATE_VALID) {
-			ret = RESPST_ERR_RKEY_VIOLATION;
-			goto out;
-		}
+	spin_lock(&atomic_ops_lock);
+	res->atomic.orig_val = value = *vaddr;
 
-		vaddr = iova_to_vaddr(mr, qp->resp.va + qp->resp.offset,
-				      sizeof(u64));
+	if (pkt->opcode == IB_OPCODE_RC_COMPARE_SWAP) {
+		if (value == atmeth_comp(pkt))
+			value = atmeth_swap_add(pkt);
+	} else {
+		value += atmeth_swap_add(pkt);
+	}
 
-		/* check vaddr is 8 bytes aligned. */
-		if (!vaddr || (uintptr_t)vaddr & 7) {
-			ret = RESPST_ERR_MISALIGNED_ATOMIC;
-			goto out;
-		}
+	*vaddr = value;
+	spin_unlock(&atomic_ops_lock);
 
-		spin_lock_bh(&atomic_ops_lock);
-		res->atomic.orig_val = value = *vaddr;
+	qp->resp.msn++;
 
-		if (pkt->opcode == IB_OPCODE_RC_COMPARE_SWAP) {
-			if (value == atmeth_comp(pkt))
-				value = atmeth_swap_add(pkt);
-		} else {
-			value += atmeth_swap_add(pkt);
-		}
+	/* next expected psn, read handles this separately */
+	qp->resp.psn = (pkt->psn + 1) & BTH_PSN_MASK;
+	qp->resp.ack_psn = qp->resp.psn;
 
-		*vaddr = value;
-		spin_unlock_bh(&atomic_ops_lock);
+	qp->resp.opcode = pkt->opcode;
+	qp->resp.status = IB_WC_SUCCESS;
 
-		qp->resp.msn++;
+	ret = RESPST_ACKNOWLEDGE;
+out:
+	return ret;
+}
 
-		/* next expected psn, read handles this separately */
-		qp->resp.psn = (pkt->psn + 1) & BTH_PSN_MASK;
-		qp->resp.ack_psn = qp->resp.psn;
+static enum resp_states rxe_atomic_ops(struct rxe_qp *qp,
+				       struct rxe_pkt_info *pkt,
+				       struct rxe_mr *mr)
+{
+	u64 *vaddr;
+	int ret;
 
-		qp->resp.opcode = pkt->opcode;
-		qp->resp.status = IB_WC_SUCCESS;
+	vaddr = iova_to_vaddr(mr, qp->resp.va + qp->resp.offset,
+			      sizeof(u64));
+
+	if (pkt->mask & RXE_ATOMIC_MASK) {
+		ret = rxe_process_atomic(qp, pkt, vaddr);
+	} else {
+		/*ATOMIC WRITE operation will come here. */
+		ret = RESPST_ERR_UNSUPPORTED_OPCODE;
 	}
 
-	ret = RESPST_ACKNOWLEDGE;
-out:
+	return ret;
+}
+
+static enum resp_states rxe_atomic_reply(struct rxe_qp *qp,
+					 struct rxe_pkt_info *pkt)
+{
+	struct rxe_mr *mr = qp->resp.mr;
+	struct resp_res *res = qp->resp.res;
+	int ret;
+
+	if (!res) {
+		res = rxe_prepare_res(qp, pkt, RXE_ATOMIC_MASK);
+		qp->resp.res = res;
+	}
+
+	if (!res->replay) {
+		if (mr->state != RXE_MR_STATE_VALID)
+			return RESPST_ERR_RKEY_VIOLATION;
+
+		ret = rxe_atomic_ops(qp, pkt, mr);
+	} else
+		ret = RESPST_ACKNOWLEDGE;
+
 	return ret;
 }
 
@@ -1327,7 +1353,7 @@ int rxe_responder(void *arg)
 		state = read_reply(qp, pkt);
 		break;
 	case RESPST_ATOMIC_REPLY:
-		state = atomic_reply(qp, pkt);
+		state = rxe_atomic_reply(qp, pkt);
 		break;
 	case RESPST_ACKNOWLEDGE:
 		state = acknowledge(qp, pkt);
-- 
2.31.1
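
For illustration only, here is a rough sketch of how the split introduced above
leaves room for the conditional branches mentioned in the commit message. The
mr->odp_enabled flag and rxe_odp_atomic_ops() are hypothetical placeholders to
show where a later ODP patch could hook in; they are not part of this patch.

	/*
	 * Sketch only, not part of this series: a later patch could dispatch
	 * to an ODP-aware path from the refactored rxe_atomic_reply() while
	 * leaving rxe_process_atomic() untouched.
	 */
	static enum resp_states rxe_atomic_reply(struct rxe_qp *qp,
						 struct rxe_pkt_info *pkt)
	{
		struct rxe_mr *mr = qp->resp.mr;
		struct resp_res *res = qp->resp.res;
		int ret;

		if (!res) {
			res = rxe_prepare_res(qp, pkt, RXE_ATOMIC_MASK);
			qp->resp.res = res;
		}

		if (!res->replay) {
			if (mr->state != RXE_MR_STATE_VALID)
				return RESPST_ERR_RKEY_VIOLATION;

			if (mr->odp_enabled)	/* hypothetical flag */
				ret = rxe_odp_atomic_ops(qp, pkt, mr);	/* hypothetical helper */
			else
				ret = rxe_atomic_ops(qp, pkt, mr);
		} else
			ret = RESPST_ACKNOWLEDGE;

		return ret;
	}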