2021-09-17 10:49:39

by Logan Gunthorpe

Subject: [PATCH v3 06/20] dma-direct: support PCI P2PDMA pages in dma-direct map_sg

Add PCI P2PDMA support for dma_direct_map_sg() so that it can map
PCI P2PDMA pages directly without a hack in the callers. This allows
for heterogeneous SGLs that contain both P2PDMA and regular pages.

Mapping a P2PDMA page may have one of three possible outcomes:
1) If the data path between the two devices doesn't go through the
root port, then it should be mapped with a PCI bus address
2) If the data path goes through the host bridge, it should be mapped
normally, as though it were a CPU physical address
3) It is not possible for the two devices to communicate and thus
the mapping operation should fail (and it will return -EREMOTEIO).

SGL segments that contain PCI bus addresses are marked with
sg_dma_mark_pci_p2pdma() and are ignored when unmapped.
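
For context (illustration only, not part of this patch; the function and
error message below are hypothetical): outcome 3 is what a caller
ultimately observes through dma_map_sgtable(), assuming the new error
code is propagated up the stack as this series intends. A minimal
caller-side sketch:

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/*
 * Hypothetical caller (not from this series): an SGL that may mix P2PDMA
 * and regular pages is mapped as usual; -EREMOTEIO means the two devices
 * have no usable P2P path, which is permanent for that device pair, so a
 * real driver would fall back to host memory rather than retry.
 */
static int example_map_mixed_sgl(struct device *dev, struct sg_table *sgt)
{
	int ret;

	ret = dma_map_sgtable(dev, sgt, DMA_BIDIRECTIONAL, 0);
	if (ret == -EREMOTEIO)
		dev_err(dev, "no P2PDMA path to the peer device\n");

	return ret;
}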

Signed-off-by: Logan Gunthorpe <[email protected]>
---
kernel/dma/direct.c | 44 ++++++++++++++++++++++++++++++++++++++------
1 file changed, 38 insertions(+), 6 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 4c6c5e0635e3..fa8317e8ff44 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -13,6 +13,7 @@
#include <linux/vmalloc.h>
#include <linux/set_memory.h>
#include <linux/slab.h>
+#include <linux/pci-p2pdma.h>
#include "direct.h"

/*
@@ -421,29 +422,60 @@ void dma_direct_sync_sg_for_cpu(struct device *dev,
arch_sync_dma_for_cpu_all();
}

+/*
+ * Unmaps segments, except for ones marked as pci_p2pdma which do not
+ * require any further action as they contain a bus address.
+ */
void dma_direct_unmap_sg(struct device *dev, struct scatterlist *sgl,
int nents, enum dma_data_direction dir, unsigned long attrs)
{
struct scatterlist *sg;
int i;

- for_each_sg(sgl, sg, nents, i)
- dma_direct_unmap_page(dev, sg->dma_address, sg_dma_len(sg), dir,
- attrs);
+ for_each_sg(sgl, sg, nents, i) {
+ if (sg_is_dma_pci_p2pdma(sg)) {
+ sg_dma_unmark_pci_p2pdma(sg);
+ } else {
+ dma_direct_unmap_page(dev, sg->dma_address,
+ sg_dma_len(sg), dir, attrs);
+ }
+ }
}
#endif

int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl, int nents,
enum dma_data_direction dir, unsigned long attrs)
{
- int i;
+ struct pci_p2pdma_map_state p2pdma_state = {};
+ enum pci_p2pdma_map_type map;
struct scatterlist *sg;
+ int i, ret;

for_each_sg(sgl, sg, nents, i) {
+ if (is_pci_p2pdma_page(sg_page(sg))) {
+ map = pci_p2pdma_map_segment(&p2pdma_state, dev, sg);
+ switch (map) {
+ case PCI_P2PDMA_MAP_BUS_ADDR:
+ continue;
+ case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
+ /*
+ * Mappings that go through the host bridge must be
+ * mapped with a CPU physical address, so do nothing
+ * here and map the page normally below.
+ */
+ break;
+ default:
+ ret = -EREMOTEIO;
+ goto out_unmap;
+ }
+ }
+
sg->dma_address = dma_direct_map_page(dev, sg_page(sg),
sg->offset, sg->length, dir, attrs);
- if (sg->dma_address == DMA_MAPPING_ERROR)
+ if (sg->dma_address == DMA_MAPPING_ERROR) {
+ ret = -EIO;
goto out_unmap;
+ }
sg_dma_len(sg) = sg->length;
}

@@ -451,7 +483,7 @@ int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl, int nents,

out_unmap:
dma_direct_unmap_sg(dev, sgl, i, dir, attrs | DMA_ATTR_SKIP_CPU_SYNC);
- return -EIO;
+ return ret;
}

dma_addr_t dma_direct_map_resource(struct device *dev, phys_addr_t paddr,
--
2.30.2


2021-09-28 19:10:33

by Jason Gunthorpe

Subject: Re: [PATCH v3 06/20] dma-direct: support PCI P2PDMA pages in dma-direct map_sg

On Thu, Sep 16, 2021 at 05:40:46PM -0600, Logan Gunthorpe wrote:
> Add PCI P2PDMA support for dma_direct_map_sg() so that it can map
> PCI P2PDMA pages directly without a hack in the callers. This allows
> for heterogeneous SGLs that contain both P2PDMA and regular pages.
>
> Mapping a P2PDMA page may have one of three possible outcomes:
> 1) If the data path between the two devices doesn't go through the
> root port, then it should be mapped with a PCI bus address
> 2) If the data path goes through the host bridge, it should be mapped
> normally, as though it were a CPU physical address
> 3) It is not possible for the two devices to communicate and thus
> the mapping operation should fail (and it will return -EREMOTEIO).
>
> SGL segments that contain PCI bus addresses are marked with
> sg_dma_mark_pci_p2pdma() and are ignored when unmapped.
>
> Signed-off-by: Logan Gunthorpe <[email protected]>
> kernel/dma/direct.c | 44 ++++++++++++++++++++++++++++++++++++++------
> 1 file changed, 38 insertions(+), 6 deletions(-)
>
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index 4c6c5e0635e3..fa8317e8ff44 100644
> +++ b/kernel/dma/direct.c
> @@ -13,6 +13,7 @@
> #include <linux/vmalloc.h>
> #include <linux/set_memory.h>
> #include <linux/slab.h>
> +#include <linux/pci-p2pdma.h>
> #include "direct.h"
>
> /*
> @@ -421,29 +422,60 @@ void dma_direct_sync_sg_for_cpu(struct device *dev,
> arch_sync_dma_for_cpu_all();
> }
>
> +/*
> + * Unmaps segments, except for ones marked as pci_p2pdma which do not
> + * require any further action as they contain a bus address.
> + */
> void dma_direct_unmap_sg(struct device *dev, struct scatterlist *sgl,
> int nents, enum dma_data_direction dir, unsigned long attrs)
> {
> struct scatterlist *sg;
> int i;
>
> - for_each_sg(sgl, sg, nents, i)
> - dma_direct_unmap_page(dev, sg->dma_address, sg_dma_len(sg), dir,
> - attrs);
> + for_each_sg(sgl, sg, nents, i) {
> + if (sg_is_dma_pci_p2pdma(sg)) {
> + sg_dma_unmark_pci_p2pdma(sg);
> + } else {
> + dma_direct_unmap_page(dev, sg->dma_address,
> + sg_dma_len(sg), dir, attrs);
> + }

If the main usage of this SGL bit is to indicate whether or not a
segment has been DMA mapped, I think it should be renamed to
something clearer.

p2pdma is being used for lots of things now, and it feels very
counter-intuitive that P2PDMA pages are not flagged with
something called sg_is_dma_pci_p2pdma().

How about sg_is_dma_unmapped_address()?
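
For illustration only (this wrapper is hypothetical and not part of the
series or this reply; sg_is_dma_pci_p2pdma() is the helper the series
adds), the suggested rename would keep the same per-segment flag but
name it for what the unmap path actually cares about:

#include <linux/scatterlist.h>	/* assumes this series' sg_is_dma_pci_p2pdma() is visible here */

/*
 * Hypothetical rename sketch: the same flag bit, but named for the
 * property dma_direct_unmap_sg() actually checks -- "this segment
 * already holds a bus address and was never mapped, so there is
 * nothing to unmap".
 */
static inline bool sg_is_dma_unmapped_address(struct scatterlist *sg)
{
	return sg_is_dma_pci_p2pdma(sg);
}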
>
> for_each_sg(sgl, sg, nents, i) {
> + if (is_pci_p2pdma_page(sg_page(sg))) {
> + map = pci_p2pdma_map_segment(&p2pdma_state, dev, sg);
> + switch (map) {
> + case PCI_P2PDMA_MAP_BUS_ADDR:
> + continue;
> + case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
> + /*
> + * Mappings that go through the host bridge must be
> + * mapped with a CPU physical address, so do nothing
> + * here and map the page normally below.
> + */
> + break;
> + default:
> + ret = -EREMOTEIO;
> + goto out_unmap;
> + }
> + }
> +
> sg->dma_address = dma_direct_map_page(dev, sg_page(sg),
> sg->offset, sg->length, dir, attrs);

dma_direct_map_page() can trigger swiotlb and I didn't see this series
dealing with that?

It would probably be fine for now to fail swiotlb_map() for p2p pages?

Jason
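
For illustration only (not part of this series; the helper below is
hypothetical, while is_pci_p2pdma_page(), swiotlb_map() and
DMA_MAPPING_ERROR are existing kernel symbols), failing the swiotlb
bounce path for P2P pages could look roughly like:

#include <linux/dma-mapping.h>
#include <linux/mm.h>
#include <linux/swiotlb.h>

/*
 * Hypothetical sketch of "fail swiotlb_map() for p2p pages": a PCI BAR
 * page cannot usefully be copied through a CPU bounce buffer, so refuse
 * the mapping instead of bouncing it.
 */
static dma_addr_t example_bounce_or_fail(struct device *dev, struct page *page,
		unsigned long offset, size_t size, enum dma_data_direction dir,
		unsigned long attrs)
{
	if (is_pci_p2pdma_page(page))
		return DMA_MAPPING_ERROR;

	return swiotlb_map(dev, page_to_phys(page) + offset, size, dir, attrs);
}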