From: Daisuke Matsuda <matsuda-daisuke@fujitsu.com>
To: linux-rdma@vger.kernel.org, leonro@nvidia.com, jgg@nvidia.com,
	zyjzyj2000@gmail.com
Cc: nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org,
	rpearsonhpe@gmail.com, yangx.jy@fujitsu.com, lizhijian@fujitsu.com,
	y-goto@fujitsu.com, Daisuke Matsuda <matsuda-daisuke@fujitsu.com>
Subject: [RFC PATCH v2 3/7] RDMA/rxe: Cleanup code for responder Atomic operations
Date: Fri, 11 Nov 2022 18:22:24 +0900
Message-Id: <7f0b8e11ed7dfcbfb8ce6198680745cc6d106783.1668157436.git.matsuda-daisuke@fujitsu.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To:
References:
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Currently, rxe_responder() directly calls the function that executes Atomic
operations. This needs to be restructured so that conditional branches for
the new RDMA Atomic Write operation and the ODP feature can be inserted.

Signed-off-by: Daisuke Matsuda <matsuda-daisuke@fujitsu.com>
---
 drivers/infiniband/sw/rxe/rxe_resp.c | 102 +++++++++++++++++----------
 1 file changed, 64 insertions(+), 38 deletions(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index 91f935d60160..03e54cb37d44 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -595,60 +595,86 @@ static struct resp_res *rxe_prepare_res(struct rxe_qp *qp,
 /* Guarantee atomicity of atomic operations at the machine level. */
 static DEFINE_SPINLOCK(atomic_ops_lock);
 
-static enum resp_states atomic_reply(struct rxe_qp *qp,
-				     struct rxe_pkt_info *pkt)
+enum resp_states rxe_process_atomic(struct rxe_qp *qp,
+				    struct rxe_pkt_info *pkt, u64 *vaddr)
 {
-	u64 *vaddr;
 	enum resp_states ret;
-	struct rxe_mr *mr = qp->resp.mr;
 	struct resp_res *res = qp->resp.res;
 	u64 value;
 
-	if (!res) {
-		res = rxe_prepare_res(qp, pkt, RXE_ATOMIC_MASK);
-		qp->resp.res = res;
+	/* check vaddr is 8 bytes aligned. */
+	if (!vaddr || (uintptr_t)vaddr & 7) {
+		ret = RESPST_ERR_MISALIGNED_ATOMIC;
+		goto out;
 	}
 
-	if (!res->replay) {
-		if (mr->state != RXE_MR_STATE_VALID) {
-			ret = RESPST_ERR_RKEY_VIOLATION;
-			goto out;
-		}
+	spin_lock(&atomic_ops_lock);
+	res->atomic.orig_val = value = *vaddr;
 
-		vaddr = iova_to_vaddr(mr, qp->resp.va + qp->resp.offset,
-				      sizeof(u64));
+	if (pkt->opcode == IB_OPCODE_RC_COMPARE_SWAP) {
+		if (value == atmeth_comp(pkt))
+			value = atmeth_swap_add(pkt);
+	} else {
+		value += atmeth_swap_add(pkt);
+	}
 
-		/* check vaddr is 8 bytes aligned. */
-		if (!vaddr || (uintptr_t)vaddr & 7) {
-			ret = RESPST_ERR_MISALIGNED_ATOMIC;
-			goto out;
-		}
+	*vaddr = value;
+	spin_unlock(&atomic_ops_lock);
 
-		spin_lock_bh(&atomic_ops_lock);
-		res->atomic.orig_val = value = *vaddr;
+	qp->resp.msn++;
 
-		if (pkt->opcode == IB_OPCODE_RC_COMPARE_SWAP) {
-			if (value == atmeth_comp(pkt))
-				value = atmeth_swap_add(pkt);
-		} else {
-			value += atmeth_swap_add(pkt);
-		}
+	/* next expected psn, read handles this separately */
+	qp->resp.psn = (pkt->psn + 1) & BTH_PSN_MASK;
+	qp->resp.ack_psn = qp->resp.psn;
 
-		*vaddr = value;
-		spin_unlock_bh(&atomic_ops_lock);
+	qp->resp.opcode = pkt->opcode;
+	qp->resp.status = IB_WC_SUCCESS;
 
-		qp->resp.msn++;
+	ret = RESPST_ACKNOWLEDGE;
+out:
+	return ret;
+}
 
-		/* next expected psn, read handles this separately */
-		qp->resp.psn = (pkt->psn + 1) & BTH_PSN_MASK;
-		qp->resp.ack_psn = qp->resp.psn;
+static enum resp_states rxe_atomic_ops(struct rxe_qp *qp,
+				       struct rxe_pkt_info *pkt,
+				       struct rxe_mr *mr)
+{
+	u64 *vaddr;
+	int ret;
 
-		qp->resp.opcode = pkt->opcode;
-		qp->resp.status = IB_WC_SUCCESS;
+	vaddr = iova_to_vaddr(mr, qp->resp.va + qp->resp.offset,
+			      sizeof(u64));
+
+	if (pkt->mask & RXE_ATOMIC_MASK) {
+		ret = rxe_process_atomic(qp, pkt, vaddr);
+	} else {
+		/*ATOMIC WRITE operation will come here. */
+		ret = RESPST_ERR_UNSUPPORTED_OPCODE;
 	}
 
-	ret = RESPST_ACKNOWLEDGE;
-out:
+	return ret;
+}
+
+static enum resp_states rxe_atomic_reply(struct rxe_qp *qp,
+					 struct rxe_pkt_info *pkt)
+{
+	struct rxe_mr *mr = qp->resp.mr;
+	struct resp_res *res = qp->resp.res;
+	int ret;
+
+	if (!res) {
+		res = rxe_prepare_res(qp, pkt, RXE_ATOMIC_MASK);
+		qp->resp.res = res;
+	}
+
+	if (!res->replay) {
+		if (mr->state != RXE_MR_STATE_VALID)
+			return RESPST_ERR_RKEY_VIOLATION;
+
+		ret = rxe_atomic_ops(qp, pkt, mr);
+	} else
+		ret = RESPST_ACKNOWLEDGE;
+
 	return ret;
 }
 
@@ -1321,7 +1347,7 @@ int rxe_responder(void *arg)
 		state = read_reply(qp, pkt);
 		break;
 	case RESPST_ATOMIC_REPLY:
-		state = atomic_reply(qp, pkt);
+		state = rxe_atomic_reply(qp, pkt);
 		break;
 	case RESPST_ACKNOWLEDGE:
 		state = acknowledge(qp, pkt);
-- 
2.31.1
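
For reference, the compare-swap / fetch-add update that rxe_process_atomic()
performs on the 8-byte target can be modeled in plain user-space C. The sketch
below is illustration only and is not part of the patch: the kernel-specific
pieces (atomic_ops_lock, struct rxe_pkt_info, the atmeth_comp() and
atmeth_swap_add() accessors) are replaced by plain parameters, and the name
model_rxe_atomic() is made up for this example.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Model of the value update in rxe_process_atomic():
 *  - Compare & Swap: write swap_add only if the target equals compare.
 *  - Fetch & Add:    add swap_add to the target.
 * The original value is returned, as it would be carried back to the
 * requester in the atomic ACK.
 */
static uint64_t model_rxe_atomic(uint64_t *vaddr, bool is_compare_swap,
				 uint64_t compare, uint64_t swap_add)
{
	uint64_t orig = *vaddr;

	if (is_compare_swap) {
		if (orig == compare)
			*vaddr = swap_add;
	} else {
		*vaddr = orig + swap_add;
	}

	return orig;
}

int main(void)
{
	uint64_t target = 5;
	uint64_t orig;

	/* Fetch & Add: 5 + 3, target becomes 8, original value 5 is returned */
	orig = model_rxe_atomic(&target, false, 0, 3);
	printf("fetch-add:    orig=%llu target=%llu\n",
	       (unsigned long long)orig, (unsigned long long)target);

	/* Compare & Swap: 8 == 8, so target becomes 100, 8 is returned */
	orig = model_rxe_atomic(&target, true, 8, 100);
	printf("compare-swap: orig=%llu target=%llu\n",
	       (unsigned long long)orig, (unsigned long long)target);

	return 0;
}

Built with any C99 compiler, this prints the original value (what the
responder would return in the ACK) and the updated target for each operation.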