Date: Fri, 21 Sep 2018 08:15:50 -0500
From: Bjorn Helgaas
To: Logan Gunthorpe
Cc: linux-kernel@vger.kernel.org, linux-pci@vger.kernel.org,
    linux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org,
    linux-nvdimm@lists.01.org, linux-block@vger.kernel.org,
    Stephen Bates, Christoph Hellwig, Keith Busch, Sagi Grimberg,
    Bjorn Helgaas, Jason Gunthorpe, Max Gurtovoy, Dan Williams,
    Jérôme Glisse, Benjamin Herrenschmidt, Alex Williamson,
    Christian König, Jens Axboe
Subject: Re: [PATCH v6 03/13] PCI/P2PDMA: Add PCI p2pmem DMA mappings to adjust the bus offset
Message-ID: <20180921131550.GG224714@bhelgaas-glaptop.roam.corp.google.com>
References: <20180913001156.4115-1-logang@deltatee.com>
 <20180913001156.4115-4-logang@deltatee.com>
In-Reply-To: <20180913001156.4115-4-logang@deltatee.com>

On Wed, Sep 12, 2018 at 06:11:46PM -0600, Logan Gunthorpe wrote:
> The DMA address used when mapping PCI P2P memory must be the PCI bus
> address. Thus, introduce pci_p2pmem_map_sg() to map the correct
> addresses when using P2P memory. Memory mapped in this way does not
> need to be unmapped.

I think the use of "map" in this context is slightly confusing because
the general expectation is that map/unmap must be balanced.  I assume
it's because the "mapping" consumes no resources, e.g., requires no
page table entries.  Possibly there's a better verb than "map", e.g.,
"convert", "convert_to_p2pdma", etc?

If you keep "map", maybe add a sentence or two about why there's no
corresponding unmap?

> For this, we assume that an SGL passed to these functions contain all
> P2P memory or no P2P memory.
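
Just to illustrate the asymmetry I mean, here is roughly what I picture
a caller looking like.  This is a made-up sketch, not code from the
series; "example_map_sg()", "dma_dev", "sgl", and "is_p2p" are names I
invented, and I'm assuming the SGL is homogeneous as the changelog says:

#include <linux/dma-mapping.h>
#include <linux/pci-p2pdma.h>
#include <linux/scatterlist.h>

/* Hypothetical caller: the SGL is either all P2P memory or all regular memory. */
static int example_map_sg(struct device *dma_dev, struct scatterlist *sgl,
			  int nents, bool is_p2p)
{
	if (is_p2p) {
		/*
		 * Only address arithmetic happens here: each dma_address
		 * becomes a PCI bus address and no state is allocated,
		 * so there is no unmap counterpart to call later.
		 */
		return pci_p2pdma_map_sg(dma_dev, sgl, nents, DMA_TO_DEVICE);
	}

	/* Regular memory keeps the usual balanced dma_map/dma_unmap pairing. */
	return dma_map_sg(dma_dev, sgl, nents, DMA_TO_DEVICE);
}

If a couple of sentences to that effect ended up in the changelog or the
kernel-doc, that would address my comment.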
>
> Signed-off-by: Logan Gunthorpe

Acked-by: Bjorn Helgaas

> ---
>  drivers/pci/p2pdma.c       | 43 ++++++++++++++++++++++++++++++++++++++
>  include/linux/memremap.h   |  1 +
>  include/linux/pci-p2pdma.h |  7 +++++++
>  3 files changed, 51 insertions(+)
>
> diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
> index 67c1daf1189e..29bd40a87768 100644
> --- a/drivers/pci/p2pdma.c
> +++ b/drivers/pci/p2pdma.c
> @@ -191,6 +191,8 @@ int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size,
>  	pgmap->res.flags = pci_resource_flags(pdev, bar);
>  	pgmap->ref = &pdev->p2pdma->devmap_ref;
>  	pgmap->type = MEMORY_DEVICE_PCI_P2PDMA;
> +	pgmap->pci_p2pdma_bus_offset = pci_bus_address(pdev, bar) -
> +		pci_resource_start(pdev, bar);
>
>  	addr = devm_memremap_pages(&pdev->dev, pgmap);
>  	if (IS_ERR(addr)) {
> @@ -813,3 +815,44 @@ void pci_p2pmem_publish(struct pci_dev *pdev, bool publish)
>  	pdev->p2pdma->p2pmem_published = publish;
>  }
>  EXPORT_SYMBOL_GPL(pci_p2pmem_publish);
> +
> +/**
> + * pci_p2pdma_map_sg - map a PCI peer-to-peer scatterlist for DMA
> + * @dev: device doing the DMA request
> + * @sg: scatter list to map
> + * @nents: elements in the scatterlist
> + * @dir: DMA direction
> + *
> + * Scatterlists mapped with this function should not be unmapped in any way.
> + *
> + * Returns the number of SG entries mapped or 0 on error.
> + */
> +int pci_p2pdma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
> +		      enum dma_data_direction dir)
> +{
> +	struct dev_pagemap *pgmap;
> +	struct scatterlist *s;
> +	phys_addr_t paddr;
> +	int i;
> +
> +	/*
> +	 * p2pdma mappings are not compatible with devices that use
> +	 * dma_virt_ops. If the upper layers do the right thing
> +	 * this should never happen because it will be prevented
> +	 * by the check in pci_p2pdma_add_client()
> +	 */
> +	if (WARN_ON_ONCE(IS_ENABLED(CONFIG_DMA_VIRT_OPS) &&
> +			 dev->dma_ops == &dma_virt_ops))
> +		return 0;
> +
> +	for_each_sg(sg, s, nents, i) {
> +		pgmap = sg_page(s)->pgmap;
> +		paddr = sg_phys(s);
> +
> +		s->dma_address = paddr - pgmap->pci_p2pdma_bus_offset;
> +		sg_dma_len(s) = s->length;
> +	}
> +
> +	return nents;
> +}
> +EXPORT_SYMBOL_GPL(pci_p2pdma_map_sg);
> diff --git a/include/linux/memremap.h b/include/linux/memremap.h
> index 9553370ebdad..0ac69ddf5fc4 100644
> --- a/include/linux/memremap.h
> +++ b/include/linux/memremap.h
> @@ -125,6 +125,7 @@ struct dev_pagemap {
>  	struct device *dev;
>  	void *data;
>  	enum memory_type type;
> +	u64 pci_p2pdma_bus_offset;
>  };
>
>  #ifdef CONFIG_ZONE_DEVICE
> diff --git a/include/linux/pci-p2pdma.h b/include/linux/pci-p2pdma.h
> index 7b2b0f547528..2f03dbbf5af6 100644
> --- a/include/linux/pci-p2pdma.h
> +++ b/include/linux/pci-p2pdma.h
> @@ -36,6 +36,8 @@ struct scatterlist *pci_p2pmem_alloc_sgl(struct pci_dev *pdev,
>  		unsigned int *nents, u32 length);
>  void pci_p2pmem_free_sgl(struct pci_dev *pdev, struct scatterlist *sgl);
>  void pci_p2pmem_publish(struct pci_dev *pdev, bool publish);
> +int pci_p2pdma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
> +		enum dma_data_direction dir);
>  #else /* CONFIG_PCI_P2PDMA */
>  static inline int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar,
>  		size_t size, u64 offset)
> @@ -98,5 +100,10 @@ static inline void pci_p2pmem_free_sgl(struct pci_dev *pdev,
>  static inline void pci_p2pmem_publish(struct pci_dev *pdev, bool publish)
>  {
>  }
> +static inline int pci_p2pdma_map_sg(struct device *dev,
> +		struct scatterlist *sg, int nents, enum dma_data_direction dir)
> +{
> +	return 0;
> +}
>  #endif /* CONFIG_PCI_P2PDMA */
>  #endif /* _LINUX_PCI_P2P_H */
> -- 
> 2.19.0
> 
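
One more thought on the kernel-doc for pci_p2pdma_map_sg() above: if you
keep the "map" name, wording along these lines (mine, feel free to
reword) would answer the "why is there no unmap?" question right where
readers will look for it:

 * Scatterlists mapped with this function should not be unmapped in any
 * way, because the "mapping" is only arithmetic: each segment's
 * dma_address is rewritten to the PCI bus address of the P2P memory,
 * and no per-mapping state is set up that would need to be released.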