Date: Sun, 27 Jun 2021 10:32:23 +0300
From: Leon Romanovsky
To: Jason Gunthorpe
Cc: Max Gurtovoy, Doug Ledford, Avihai Horon, linux-kernel@vger.kernel.org,
	linux-rdma@vger.kernel.org, Christoph Hellwig, Bart Van Assche,
	Tom Talpey, Santosh Shilimkar, Chuck Lever III, Keith Busch,
	David Laight, Honggang LI
Subject: Re: [PATCH v2 rdma-next] RDMA/mlx5: Enable Relaxed Ordering by default for
 kernel ULPs
Message-ID:
References: <9c5b7ae5-8578-3008-5e78-02e77e121cda@nvidia.com>
 <1ef0ac51-4c7d-d79d-cb30-2e219f74c8c1@nvidia.com>
 <20210624113607.GN2371267@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210624113607.GN2371267@nvidia.com>
Precedence: bulk
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Jun 24, 2021 at 08:36:07AM -0300, Jason Gunthorpe wrote:
> On Thu, Jun 24, 2021 at 10:39:16AM +0300, Max Gurtovoy wrote:
> >
> > On 6/24/2021 9:38 AM, Leon Romanovsky wrote:
> > > On Thu, Jun 24, 2021 at 02:06:46AM +0300, Max Gurtovoy wrote:
> > > > On 6/9/2021 2:05 PM, Leon Romanovsky wrote:
> > > > > From: Avihai Horon
> > > > >
> > > > > Relaxed Ordering is a capability that can only benefit users that support
> > > > > it. All kernel ULPs should support Relaxed Ordering, as they are designed
> > > > > to read data only after observing the CQE and use the DMA API correctly.
> > > > >
> > > > > Hence, implicitly enable Relaxed Ordering by default for kernel ULPs.
> > > > >
> > > > > Signed-off-by: Avihai Horon
> > > > > Signed-off-by: Leon Romanovsky
> > > > > Changelog:
> > > > > v2:
> > > > >  * Dropped IB/core patch and set RO implicitly in mlx5 exactly like in
> > > > >    eth side of mlx5 driver.
> > > > > v1: https://lore.kernel.org/lkml/cover.1621505111.git.leonro@nvidia.com
> > > > >  * Enabled by default RO in IB/core instead of changing all users
> > > > > v0: https://lore.kernel.org/lkml/20210405052404.213889-1-leon@kernel.org
> > > > >
> > > > >  drivers/infiniband/hw/mlx5/mr.c | 10 ++++++----
> > > > >  drivers/infiniband/hw/mlx5/wr.c |  5 ++++-
> > > > >  2 files changed, 10 insertions(+), 5 deletions(-)
> > > > >
> > > > > diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
> > > > > index 3363cde85b14..2182e76ae734 100644
> > > > > --- a/drivers/infiniband/hw/mlx5/mr.c
> > > > > +++ b/drivers/infiniband/hw/mlx5/mr.c
> > > > > @@ -69,6 +69,7 @@ static void set_mkc_access_pd_addr_fields(void *mkc, int acc, u64 start_addr,
> > > > >  					  struct ib_pd *pd)
> > > > >  {
> > > > >  	struct mlx5_ib_dev *dev = to_mdev(pd->device);
> > > > > +	bool ro_pci_enabled = pcie_relaxed_ordering_enabled(dev->mdev->pdev);
> > > > >
> > > > >  	MLX5_SET(mkc, mkc, a, !!(acc & IB_ACCESS_REMOTE_ATOMIC));
> > > > >  	MLX5_SET(mkc, mkc, rw, !!(acc & IB_ACCESS_REMOTE_WRITE));
> > > > > @@ -78,10 +79,10 @@ static void set_mkc_access_pd_addr_fields(void *mkc, int acc, u64 start_addr,
> > > > >  	if (MLX5_CAP_GEN(dev->mdev, relaxed_ordering_write))
> > > > >  		MLX5_SET(mkc, mkc, relaxed_ordering_write,
> > > > > -			 !!(acc & IB_ACCESS_RELAXED_ORDERING));
> > > > > +			 acc & IB_ACCESS_RELAXED_ORDERING && ro_pci_enabled);
> > > > >  	if (MLX5_CAP_GEN(dev->mdev, relaxed_ordering_read))
> > > > >  		MLX5_SET(mkc, mkc, relaxed_ordering_read,
> > > > > -			 !!(acc & IB_ACCESS_RELAXED_ORDERING));
> > > > > +			 acc & IB_ACCESS_RELAXED_ORDERING && ro_pci_enabled);
> > > >
> > > > Jason,
> > > >
> > > > If it's still possible to add a small change, it would be nice to avoid
> > > > calculating "acc & IB_ACCESS_RELAXED_ORDERING && ro_pci_enabled" twice.
> > >
> > > The patch is part of for-next now, so feel free to send a followup patch.
> > >
> > > Thanks
> > >
> > > diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
> > > index c1e70c99b70c..c4f246c90c4d 100644
> > > --- a/drivers/infiniband/hw/mlx5/mr.c
> > > +++ b/drivers/infiniband/hw/mlx5/mr.c
> > > @@ -69,7 +69,8 @@ static void set_mkc_access_pd_addr_fields(void *mkc, int acc, u64 start_addr,
> > >  					  struct ib_pd *pd)
> > >  {
> > >  	struct mlx5_ib_dev *dev = to_mdev(pd->device);
> > > -	bool ro_pci_enabled = pcie_relaxed_ordering_enabled(dev->mdev->pdev);
> > > +	bool ro_pci_enabled = acc & IB_ACCESS_RELAXED_ORDERING &&
> > > +			      pcie_relaxed_ordering_enabled(dev->mdev->pdev);
> > >
> > >  	MLX5_SET(mkc, mkc, a, !!(acc & IB_ACCESS_REMOTE_ATOMIC));
> > >  	MLX5_SET(mkc, mkc, rw, !!(acc & IB_ACCESS_REMOTE_WRITE));
> > > @@ -78,11 +79,9 @@ static void set_mkc_access_pd_addr_fields(void *mkc, int acc, u64 start_addr,
> > >  	MLX5_SET(mkc, mkc, lr, 1);
> > >
> > >  	if (MLX5_CAP_GEN(dev->mdev, relaxed_ordering_write))
> > > -		MLX5_SET(mkc, mkc, relaxed_ordering_write,
> > > -			 (acc & IB_ACCESS_RELAXED_ORDERING) && ro_pci_enabled);
> > > +		MLX5_SET(mkc, mkc, relaxed_ordering_write, ro_pci_enabled);
> > >  	if (MLX5_CAP_GEN(dev->mdev, relaxed_ordering_read))
> > > -		MLX5_SET(mkc, mkc, relaxed_ordering_read,
> > > -			 (acc & IB_ACCESS_RELAXED_ORDERING) && ro_pci_enabled);
> > > +		MLX5_SET(mkc, mkc, relaxed_ordering_read, ro_pci_enabled);
> > >
> > >  	MLX5_SET(mkc, mkc, pd, to_mpd(pd)->pdn);
> > >  	MLX5_SET(mkc, mkc, qpn, 0xffffff);
> >
> > Yes, this looks good.
> >
> > Can you/Avihai create a patch from this? Or I'll do it?
>
> I'd be surprised if it matters.. CSE and all

From a bytecode/performance POV, it shouldn't change anything. However, it looks better.

Thanks

> Jason