Date: Thu, 20 Sep 2018 17:38:29 -0500
From: Bjorn Helgaas
To: Logan Gunthorpe
Cc: linux-kernel@vger.kernel.org, linux-pci@vger.kernel.org,
	linux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org,
	linux-nvdimm@lists.01.org, linux-block@vger.kernel.org,
	Stephen Bates, Christoph Hellwig, Keith Busch, Sagi Grimberg,
	Bjorn Helgaas, Jason Gunthorpe, Max Gurtovoy, Dan Williams,
	Jérôme Glisse, Benjamin Herrenschmidt, Alex Williamson,
	Christian König, Jens Axboe
Subject: Re: [PATCH v6 01/13] PCI/P2PDMA: Support peer-to-peer memory
Message-ID: <20180920223829.GD224714@bhelgaas-glaptop.roam.corp.google.com>
References: <20180913001156.4115-1-logang@deltatee.com>
 <20180913001156.4115-2-logang@deltatee.com>
In-Reply-To: <20180913001156.4115-2-logang@deltatee.com>

On Wed, Sep 12, 2018 at 06:11:44PM -0600, Logan Gunthorpe wrote:
> Some PCI devices may have memory mapped in a BAR space that's
> intended for use in peer-to-peer transactions. In order to enable
> such transactions the memory must be registered with ZONE_DEVICE pages
> so it can be used by DMA interfaces in existing drivers.
>
> Add an interface for other subsystems to find and allocate chunks of P2P
> memory as necessary to facilitate transfers between two PCI peers:
>
> int pci_p2pdma_add_client();
> struct pci_dev *pci_p2pmem_find();
> void *pci_alloc_p2pmem();
>
> The new interface requires a driver to collect a list of client devices
> involved in the transaction with the pci_p2pmem_add_client*() functions
> then call pci_p2pmem_find() to obtain any suitable P2P memory.
> Once this is done the list is bound to the memory and the calling
> driver is free to add and remove clients as necessary (adding
> incompatible clients will fail). With a suitable p2pmem device, memory
> can then be allocated with pci_alloc_p2pmem() for use in DMA
> transactions.
>
> Depending on hardware, using peer-to-peer memory may reduce the bandwidth
> of the transfer but can significantly reduce pressure on system memory.
> This may be desirable in many cases: for example a system could be designed
> with a small CPU connected to a PCIe switch by a small number of lanes
> which would maximize the number of lanes available to connect to NVMe
> devices.
>
> The code is designed to only utilize the p2pmem device if all the devices
> involved in a transfer are behind the same PCI bridge. This is because we
> have no way of knowing whether peer-to-peer routing between PCIe Root Ports
> is supported (PCIe r4.0, sec 1.3.1). Additionally, the benefits of P2P
> transfers that go through the RC is limited to only reducing DRAM usage
> and, in some cases, coding convenience. The PCI-SIG may be exploring
> adding a new capability bit to advertise whether this is possible for
> future hardware.
>
> This commit includes significant rework and feedback from Christoph
> Hellwig.
>
> Signed-off-by: Christoph Hellwig
> Signed-off-by: Logan Gunthorpe

Acked-by: Bjorn Helgaas # PCI pieces

Mostly trivial comments below.  Thanks for persevering with this!

> ---
>  drivers/pci/Kconfig        |  17 +
>  drivers/pci/Makefile       |   1 +
>  drivers/pci/p2pdma.c       | 761 +++++++++++++++++++++++++++++++++++++
>  include/linux/memremap.h   |   5 +
>  include/linux/mm.h         |  18 +
>  include/linux/pci-p2pdma.h | 102 +++++
>  include/linux/pci.h        |   4 +
>  7 files changed, 908 insertions(+)
>  create mode 100644 drivers/pci/p2pdma.c
>  create mode 100644 include/linux/pci-p2pdma.h

> +#define pr_fmt(fmt) "pci-p2pdma: " fmt

Is pr_fmt() actually used anywhere?

> + * Check if a PCI bridge has it's ACS redirection bits set to redirect P2P

s/it's/its/

> + * TLPs upstream via ACS. Returns 1 if the packets will be redirected
> + * upstream, 0 otherwise.
> + */
> +static int pci_bridge_has_acs_redir(struct pci_dev *dev)

Most of your code uses "pdev" for a struct pci_dev *.

> +static void seq_buf_print_bus_devfn(struct seq_buf *buf, struct pci_dev *dev)
> +{
> +	if (!buf)
> +		return;
> +
> +	seq_buf_printf(buf, "%04x:%02x:%02x.%x;", pci_domain_nr(dev->bus),
> +		       dev->bus->number, PCI_SLOT(dev->devfn),
> +		       PCI_FUNC(dev->devfn));

dev vs pdev?

I think you could use pci_name() here (see pci_setup_device()).

> + * In the case where two devices are connected to different PCIe switches,
> + * this function will still return a positive distance as long as both
> + * switches evenutally have a common upstream bridge. Note this covers

s/evenutally/eventually/

> +static int upstream_bridge_distance_warn(struct pci_dev *provider,
> +					 struct pci_dev *client)
> +{
> +	struct seq_buf acs_list;
> +	int ret;
> +
> +	seq_buf_init(&acs_list, kmalloc(PAGE_SIZE, GFP_KERNEL), PAGE_SIZE);

Check for kmalloc() failure here?  Failure would mean acs_list->buffer
is NULL, but I gave up following the chain to see how that would be
handled.
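For the record, I was picturing something like this (completely
untested, and whether -ENOMEM is the right thing to return here is
your call):

	unsigned char *buf;

	buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	seq_buf_init(&acs_list, buf, PAGE_SIZE);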
> +
> +	ret = upstream_bridge_distance(provider, client, &acs_list);
> +	if (ret == -2) {
> +		pci_warn(client, "cannot be used for peer-to-peer DMA as ACS redirect is set between the client and provider\n");

Maybe include pci_name(provider) in these messages?

> +		/* Drop final semicolon */
> +		acs_list.buffer[acs_list.len-1] = 0;
> +		pci_warn(client, "to disable ACS redirect for this path, add the kernel parameter: pci=disable_acs_redir=%s\n",
> +			 acs_list.buffer);
> +
> +	} else if (ret < 0) {
> +		pci_warn(client, "cannot be used for peer-to-peer DMA as the client and provider do not share an upstream bridge\n");

> + * pci_p2pdma_assign_provider - Check compatibily (as per pci_p2pdma_distance)

s/compatibily/compatibility/

> +struct pci_dev *pci_p2pmem_find(struct list_head *clients)
> +{
> +	struct pci_dev *pdev = NULL;
> +	struct pci_p2pdma_client *pos;
> +	int distance;
> +	int closest_distance = INT_MAX;
> +	struct pci_dev **closest_pdevs;
> +	int dev_cnt = 0;
> +	const int max_devs = PAGE_SIZE / sizeof(*closest_pdevs);
> +	int i;
> +
> +	closest_pdevs = kmalloc(PAGE_SIZE, GFP_KERNEL);

Check for kmalloc() failure?

> +++ b/include/linux/pci-p2pdma.h
> @@ -0,0 +1,102 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * PCI Peer 2 Peer DMA support.
> + *
> + * Copyright (c) 2016-2018, Logan Gunthorpe
> + * Copyright (c) 2016-2017, Microsemi Corporation
> + * Copyright (c) 2017, Christoph Hellwig
> + * Copyright (c) 2018, Eideticom Inc.
> + *

Spurious blank line.

> + */
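One last thing, not really a comment on the code: just to make sure I'm
reading the interface correctly, here's roughly how I picture a client
driver using it.  This is only a sketch based on the changelog and the
header; the argument lists are my guesses, "nvme_pdev", "rdma_pdev" and
"xfer_len" are made-up placeholders, and most error handling is
omitted:

	LIST_HEAD(clients);
	struct pci_dev *p2p_dev;
	void *p2p_buf;

	/* collect every device that will touch the buffer */
	if (pci_p2pdma_add_client(&clients, &nvme_pdev->dev) ||
	    pci_p2pdma_add_client(&clients, &rdma_pdev->dev))
		goto out_free_clients;

	/* pick a p2pmem provider that all the clients can reach */
	p2p_dev = pci_p2pmem_find(&clients);
	if (!p2p_dev)
		goto out_free_clients;

	/* carve BAR-backed memory out of the provider for the transfer */
	p2p_buf = pci_alloc_p2pmem(p2p_dev, xfer_len);

Is that the intended flow?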