2017-06-08 13:26:22

by Christoph Hellwig

Subject: clean up and modularize arch dma_mapping interface

Hi all,

for a while we have had a generic implementation of the dma mapping
routines that calls into per-arch or per-device operations. But right
now there still are various bits in the interfaces that don't clearly
operate on these ops. This series cleans up a lot of those (not all
yet, but the series is big enough as it is). It gets rid of the
DMA_ERROR_CODE way of signaling failures of the mapping routines from
the implementations to the generic code (and fixes various drivers
that were incorrectly using it), and gets rid of the ->set_dma_mask
routine in favor of relying on the ->dma_supported method, which can
be used in the same way but requires less code duplication.
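
To illustrate the end state: after this series drivers check for
mapping failures only via dma_mapping_error(), which dispatches to
the instance's ->mapping_error method, instead of comparing against a
global DMA_ERROR_CODE that several architectures never even defined.
A minimal sketch of the driver-visible pattern (dev/buf/len are
hypothetical placeholders, not code from this series):

	static int example_map(struct device *dev, void *buf, size_t len,
			       dma_addr_t *out)
	{
		dma_addr_t addr;

		addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
		if (dma_mapping_error(dev, addr))	/* portable check */
			return -ENOMEM;

		*out = addr;
		return 0;
	}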

Btw, we don't seem to have a tree for the ever-growing amount of
common dma mapping code, and given that I have a fair amount of
tree-wide work in that area on my plate I'd like to start one. Any
good reason not to? Anyone willing to volunteer as co-maintainer?

The whole series is also available in git:

git://git.infradead.org/users/hch/misc.git dma-map

Gitweb:

http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/dma-map


2017-06-08 13:26:32

by Christoph Hellwig

Subject: [PATCH 02/44] ibmveth: properly unwind on init errors

That way the driver doesn't have to rely on DMA_ERROR_CODE, which
is not a public API and is going away.
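
The replacement is the usual goto-unwind idiom, where each failure
path releases exactly what was set up before it. A condensed sketch
with a hypothetical example_start_hw() helper (the real function
unwinds the buffer list, filter list, rx queue, buffer pools and
bounce buffer in reverse order):

	static int example_open(struct device *dev)
	{
		void *a, *b;
		int rc = -ENOMEM;

		a = kzalloc(PAGE_SIZE, GFP_KERNEL);
		if (!a)
			goto out;

		b = kzalloc(PAGE_SIZE, GFP_KERNEL);
		if (!b)
			goto out_free_a;

		rc = example_start_hw(dev);	/* hypothetical */
		if (rc)
			goto out_free_b;
		return 0;

	out_free_b:
		kfree(b);		/* unwind in reverse order */
	out_free_a:
		kfree(a);
	out:
		return rc;
	}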

Signed-off-by: Christoph Hellwig <[email protected]>
---
drivers/net/ethernet/ibm/ibmveth.c | 159 +++++++++++++++++--------------------
1 file changed, 74 insertions(+), 85 deletions(-)

diff --git a/drivers/net/ethernet/ibm/ibmveth.c b/drivers/net/ethernet/ibm/ibmveth.c
index 72ab7b6bf20b..3ac27f59e595 100644
--- a/drivers/net/ethernet/ibm/ibmveth.c
+++ b/drivers/net/ethernet/ibm/ibmveth.c
@@ -467,56 +467,6 @@ static void ibmveth_rxq_harvest_buffer(struct ibmveth_adapter *adapter)
}
}

-static void ibmveth_cleanup(struct ibmveth_adapter *adapter)
-{
- int i;
- struct device *dev = &adapter->vdev->dev;
-
- if (adapter->buffer_list_addr != NULL) {
- if (!dma_mapping_error(dev, adapter->buffer_list_dma)) {
- dma_unmap_single(dev, adapter->buffer_list_dma, 4096,
- DMA_BIDIRECTIONAL);
- adapter->buffer_list_dma = DMA_ERROR_CODE;
- }
- free_page((unsigned long)adapter->buffer_list_addr);
- adapter->buffer_list_addr = NULL;
- }
-
- if (adapter->filter_list_addr != NULL) {
- if (!dma_mapping_error(dev, adapter->filter_list_dma)) {
- dma_unmap_single(dev, adapter->filter_list_dma, 4096,
- DMA_BIDIRECTIONAL);
- adapter->filter_list_dma = DMA_ERROR_CODE;
- }
- free_page((unsigned long)adapter->filter_list_addr);
- adapter->filter_list_addr = NULL;
- }
-
- if (adapter->rx_queue.queue_addr != NULL) {
- dma_free_coherent(dev, adapter->rx_queue.queue_len,
- adapter->rx_queue.queue_addr,
- adapter->rx_queue.queue_dma);
- adapter->rx_queue.queue_addr = NULL;
- }
-
- for (i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++)
- if (adapter->rx_buff_pool[i].active)
- ibmveth_free_buffer_pool(adapter,
- &adapter->rx_buff_pool[i]);
-
- if (adapter->bounce_buffer != NULL) {
- if (!dma_mapping_error(dev, adapter->bounce_buffer_dma)) {
- dma_unmap_single(&adapter->vdev->dev,
- adapter->bounce_buffer_dma,
- adapter->netdev->mtu + IBMVETH_BUFF_OH,
- DMA_BIDIRECTIONAL);
- adapter->bounce_buffer_dma = DMA_ERROR_CODE;
- }
- kfree(adapter->bounce_buffer);
- adapter->bounce_buffer = NULL;
- }
-}
-
static int ibmveth_register_logical_lan(struct ibmveth_adapter *adapter,
union ibmveth_buf_desc rxq_desc, u64 mac_address)
{
@@ -573,14 +523,17 @@ static int ibmveth_open(struct net_device *netdev)
for(i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++)
rxq_entries += adapter->rx_buff_pool[i].size;

+ rc = -ENOMEM;
adapter->buffer_list_addr = (void*) get_zeroed_page(GFP_KERNEL);
- adapter->filter_list_addr = (void*) get_zeroed_page(GFP_KERNEL);
+ if (!adapter->buffer_list_addr) {
+ netdev_err(netdev, "unable to allocate list pages\n");
+ goto out;
+ }

- if (!adapter->buffer_list_addr || !adapter->filter_list_addr) {
- netdev_err(netdev, "unable to allocate filter or buffer list "
- "pages\n");
- rc = -ENOMEM;
- goto err_out;
+ adapter->filter_list_addr = (void*) get_zeroed_page(GFP_KERNEL);
+ if (!adapter->filter_list_addr) {
+ netdev_err(netdev, "unable to allocate filter pages\n");
+ goto out_free_buffer_list;
}

dev = &adapter->vdev->dev;
@@ -590,22 +543,21 @@ static int ibmveth_open(struct net_device *netdev)
adapter->rx_queue.queue_addr =
dma_alloc_coherent(dev, adapter->rx_queue.queue_len,
&adapter->rx_queue.queue_dma, GFP_KERNEL);
- if (!adapter->rx_queue.queue_addr) {
- rc = -ENOMEM;
- goto err_out;
- }
+ if (!adapter->rx_queue.queue_addr)
+ goto out_free_filter_list;

adapter->buffer_list_dma = dma_map_single(dev,
adapter->buffer_list_addr, 4096, DMA_BIDIRECTIONAL);
+ if (dma_mapping_error(dev, adapter->buffer_list_dma)) {
+ netdev_err(netdev, "unable to map buffer list pages\n");
+ goto out_free_queue_mem;
+ }
+
adapter->filter_list_dma = dma_map_single(dev,
adapter->filter_list_addr, 4096, DMA_BIDIRECTIONAL);
-
- if ((dma_mapping_error(dev, adapter->buffer_list_dma)) ||
- (dma_mapping_error(dev, adapter->filter_list_dma))) {
- netdev_err(netdev, "unable to map filter or buffer list "
- "pages\n");
- rc = -ENOMEM;
- goto err_out;
+ if (dma_mapping_error(dev, adapter->filter_list_dma)) {
+ netdev_err(netdev, "unable to map filter list pages\n");
+ goto out_unmap_buffer_list;
}

adapter->rx_queue.index = 0;
@@ -636,7 +588,7 @@ static int ibmveth_open(struct net_device *netdev)
rxq_desc.desc,
mac_address);
rc = -ENONET;
- goto err_out;
+ goto out_unmap_filter_list;
}

for (i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++) {
@@ -646,7 +598,7 @@ static int ibmveth_open(struct net_device *netdev)
netdev_err(netdev, "unable to alloc pool\n");
adapter->rx_buff_pool[i].active = 0;
rc = -ENOMEM;
- goto err_out;
+ goto out_free_buffer_pools;
}
}

@@ -660,22 +612,21 @@ static int ibmveth_open(struct net_device *netdev)
lpar_rc = h_free_logical_lan(adapter->vdev->unit_address);
} while (H_IS_LONG_BUSY(lpar_rc) || (lpar_rc == H_BUSY));

- goto err_out;
+ goto out_free_buffer_pools;
}

+ rc = -ENOMEM;
adapter->bounce_buffer =
kmalloc(netdev->mtu + IBMVETH_BUFF_OH, GFP_KERNEL);
- if (!adapter->bounce_buffer) {
- rc = -ENOMEM;
- goto err_out_free_irq;
- }
+ if (!adapter->bounce_buffer)
+ goto out_free_irq;
+
adapter->bounce_buffer_dma =
dma_map_single(&adapter->vdev->dev, adapter->bounce_buffer,
netdev->mtu + IBMVETH_BUFF_OH, DMA_BIDIRECTIONAL);
if (dma_mapping_error(dev, adapter->bounce_buffer_dma)) {
netdev_err(netdev, "unable to map bounce buffer\n");
- rc = -ENOMEM;
- goto err_out_free_irq;
+ goto out_free_bounce_buffer;
}

netdev_dbg(netdev, "initial replenish cycle\n");
@@ -687,10 +638,31 @@ static int ibmveth_open(struct net_device *netdev)

return 0;

-err_out_free_irq:
+out_free_bounce_buffer:
+ kfree(adapter->bounce_buffer);
+out_free_irq:
free_irq(netdev->irq, netdev);
-err_out:
- ibmveth_cleanup(adapter);
+out_free_buffer_pools:
+ while (--i >= 0) {
+ if (adapter->rx_buff_pool[i].active)
+ ibmveth_free_buffer_pool(adapter,
+ &adapter->rx_buff_pool[i]);
+ }
+out_unmap_filter_list:
+ dma_unmap_single(dev, adapter->filter_list_dma, 4096,
+ DMA_BIDIRECTIONAL);
+out_unmap_buffer_list:
+ dma_unmap_single(dev, adapter->buffer_list_dma, 4096,
+ DMA_BIDIRECTIONAL);
+out_free_queue_mem:
+ dma_free_coherent(dev, adapter->rx_queue.queue_len,
+ adapter->rx_queue.queue_addr,
+ adapter->rx_queue.queue_dma);
+out_free_filter_list:
+ free_page((unsigned long)adapter->filter_list_addr);
+out_free_buffer_list:
+ free_page((unsigned long)adapter->buffer_list_addr);
+out:
napi_disable(&adapter->napi);
return rc;
}
@@ -698,7 +670,9 @@ static int ibmveth_open(struct net_device *netdev)
static int ibmveth_close(struct net_device *netdev)
{
struct ibmveth_adapter *adapter = netdev_priv(netdev);
+ struct device *dev = &adapter->vdev->dev;
long lpar_rc;
+ int i;

netdev_dbg(netdev, "close starting\n");

@@ -722,7 +696,27 @@ static int ibmveth_close(struct net_device *netdev)

ibmveth_update_rx_no_buffer(adapter);

- ibmveth_cleanup(adapter);
+ dma_unmap_single(dev, adapter->buffer_list_dma, 4096,
+ DMA_BIDIRECTIONAL);
+ free_page((unsigned long)adapter->buffer_list_addr);
+
+ dma_unmap_single(dev, adapter->filter_list_dma, 4096,
+ DMA_BIDIRECTIONAL);
+ free_page((unsigned long)adapter->filter_list_addr);
+
+ dma_free_coherent(dev, adapter->rx_queue.queue_len,
+ adapter->rx_queue.queue_addr,
+ adapter->rx_queue.queue_dma);
+
+ for (i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++)
+ if (adapter->rx_buff_pool[i].active)
+ ibmveth_free_buffer_pool(adapter,
+ &adapter->rx_buff_pool[i]);
+
+ dma_unmap_single(&adapter->vdev->dev, adapter->bounce_buffer_dma,
+ adapter->netdev->mtu + IBMVETH_BUFF_OH,
+ DMA_BIDIRECTIONAL);
+ kfree(adapter->bounce_buffer);

netdev_dbg(netdev, "close complete\n");

@@ -1648,11 +1642,6 @@ static int ibmveth_probe(struct vio_dev *dev, const struct vio_device_id *id)
}

netdev_dbg(netdev, "adapter @ 0x%p\n", adapter);
-
- adapter->buffer_list_dma = DMA_ERROR_CODE;
- adapter->filter_list_dma = DMA_ERROR_CODE;
- adapter->rx_queue.queue_dma = DMA_ERROR_CODE;
-
netdev_dbg(netdev, "registering netdev...\n");

ibmveth_set_features(netdev, netdev->features);
--
2.11.0

2017-06-08 13:26:40

by Christoph Hellwig

Subject: [PATCH 04/44] drm/exynos: don't use DMA_ERROR_CODE

DMA_ERROR_CODE already isn't a valid API for drivers to use and will
go away soon. exynos_drm_fb_dma_addr uses it as an error return when
the passed-in index is invalid, but the callers never check for it;
instead they pass the address straight to the hardware.

Add a WARN_ON_ONCE instead and just return 0.

Signed-off-by: Christoph Hellwig <[email protected]>
---
drivers/gpu/drm/exynos/exynos_drm_fb.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/exynos/exynos_drm_fb.c b/drivers/gpu/drm/exynos/exynos_drm_fb.c
index c77a5aced81a..d48fd7c918f8 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_fb.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_fb.c
@@ -181,8 +181,8 @@ dma_addr_t exynos_drm_fb_dma_addr(struct drm_framebuffer *fb, int index)
{
struct exynos_drm_fb *exynos_fb = to_exynos_fb(fb);

- if (index >= MAX_FB_BUFFER)
- return DMA_ERROR_CODE;
+ if (WARN_ON_ONCE(index >= MAX_FB_BUFFER))
+ return 0;

return exynos_fb->dma_addr[index];
}
--
2.11.0

2017-06-08 13:27:00

by Christoph Hellwig

Subject: [PATCH 07/44] xen-swiotlb: consolidate xen_swiotlb_dma_ops

ARM and x86 had duplicated versions of the dma_ops structure; the
only difference is that x86 hadn't wired up the set_dma_mask, mmap,
and get_sgtable ops yet. On x86 all of them are identical to the
generic version, so they aren't needed but are harmless.

All the symbols used only for xen_swiotlb_dma_ops can now be marked
static as well.

Signed-off-by: Christoph Hellwig <[email protected]>
---
arch/arm/xen/mm.c | 17 --------
arch/x86/xen/pci-swiotlb-xen.c | 14 -------
drivers/xen/swiotlb-xen.c | 93 ++++++++++++++++++++++--------------------
include/xen/swiotlb-xen.h | 62 +---------------------------
4 files changed, 49 insertions(+), 137 deletions(-)

diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
index f0325d96b97a..785d2a562a23 100644
--- a/arch/arm/xen/mm.c
+++ b/arch/arm/xen/mm.c
@@ -185,23 +185,6 @@ EXPORT_SYMBOL_GPL(xen_destroy_contiguous_region);
const struct dma_map_ops *xen_dma_ops;
EXPORT_SYMBOL(xen_dma_ops);

-static const struct dma_map_ops xen_swiotlb_dma_ops = {
- .alloc = xen_swiotlb_alloc_coherent,
- .free = xen_swiotlb_free_coherent,
- .sync_single_for_cpu = xen_swiotlb_sync_single_for_cpu,
- .sync_single_for_device = xen_swiotlb_sync_single_for_device,
- .sync_sg_for_cpu = xen_swiotlb_sync_sg_for_cpu,
- .sync_sg_for_device = xen_swiotlb_sync_sg_for_device,
- .map_sg = xen_swiotlb_map_sg_attrs,
- .unmap_sg = xen_swiotlb_unmap_sg_attrs,
- .map_page = xen_swiotlb_map_page,
- .unmap_page = xen_swiotlb_unmap_page,
- .dma_supported = xen_swiotlb_dma_supported,
- .set_dma_mask = xen_swiotlb_set_dma_mask,
- .mmap = xen_swiotlb_dma_mmap,
- .get_sgtable = xen_swiotlb_get_sgtable,
-};
-
int __init xen_mm_init(void)
{
struct gnttab_cache_flush cflush;
diff --git a/arch/x86/xen/pci-swiotlb-xen.c b/arch/x86/xen/pci-swiotlb-xen.c
index 42b08f8fc2ca..37c6056a7bba 100644
--- a/arch/x86/xen/pci-swiotlb-xen.c
+++ b/arch/x86/xen/pci-swiotlb-xen.c
@@ -18,20 +18,6 @@

int xen_swiotlb __read_mostly;

-static const struct dma_map_ops xen_swiotlb_dma_ops = {
- .alloc = xen_swiotlb_alloc_coherent,
- .free = xen_swiotlb_free_coherent,
- .sync_single_for_cpu = xen_swiotlb_sync_single_for_cpu,
- .sync_single_for_device = xen_swiotlb_sync_single_for_device,
- .sync_sg_for_cpu = xen_swiotlb_sync_sg_for_cpu,
- .sync_sg_for_device = xen_swiotlb_sync_sg_for_device,
- .map_sg = xen_swiotlb_map_sg_attrs,
- .unmap_sg = xen_swiotlb_unmap_sg_attrs,
- .map_page = xen_swiotlb_map_page,
- .unmap_page = xen_swiotlb_unmap_page,
- .dma_supported = xen_swiotlb_dma_supported,
-};
-
/*
* pci_xen_swiotlb_detect - set xen_swiotlb to 1 if necessary
*
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 8dab0d3dc172..a0f006daab48 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -295,7 +295,8 @@ int __ref xen_swiotlb_init(int verbose, bool early)
free_pages((unsigned long)xen_io_tlb_start, order);
return rc;
}
-void *
+
+static void *
xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
dma_addr_t *dma_handle, gfp_t flags,
unsigned long attrs)
@@ -346,9 +347,8 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
memset(ret, 0, size);
return ret;
}
-EXPORT_SYMBOL_GPL(xen_swiotlb_alloc_coherent);

-void
+static void
xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
dma_addr_t dev_addr, unsigned long attrs)
{
@@ -369,8 +369,6 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,

xen_free_coherent_pages(hwdev, size, vaddr, (dma_addr_t)phys, attrs);
}
-EXPORT_SYMBOL_GPL(xen_swiotlb_free_coherent);
-

/*
* Map a single buffer of the indicated size for DMA in streaming mode. The
@@ -379,7 +377,7 @@ EXPORT_SYMBOL_GPL(xen_swiotlb_free_coherent);
* Once the device is given the dma address, the device owns this memory until
* either xen_swiotlb_unmap_page or xen_swiotlb_dma_sync_single is performed.
*/
-dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
+static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size,
enum dma_data_direction dir,
unsigned long attrs)
@@ -429,7 +427,6 @@ dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,

return DMA_ERROR_CODE;
}
-EXPORT_SYMBOL_GPL(xen_swiotlb_map_page);

/*
* Unmap a single streaming mode DMA translation. The dma_addr and size must
@@ -467,13 +464,12 @@ static void xen_unmap_single(struct device *hwdev, dma_addr_t dev_addr,
dma_mark_clean(phys_to_virt(paddr), size);
}

-void xen_swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
+static void xen_swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
size_t size, enum dma_data_direction dir,
unsigned long attrs)
{
xen_unmap_single(hwdev, dev_addr, size, dir, attrs);
}
-EXPORT_SYMBOL_GPL(xen_swiotlb_unmap_page);

/*
* Make physical memory consistent for a single streaming mode DMA translation
@@ -516,7 +512,6 @@ xen_swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
{
xen_swiotlb_sync_single(hwdev, dev_addr, size, dir, SYNC_FOR_CPU);
}
-EXPORT_SYMBOL_GPL(xen_swiotlb_sync_single_for_cpu);

void
xen_swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
@@ -524,7 +519,25 @@ xen_swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
{
xen_swiotlb_sync_single(hwdev, dev_addr, size, dir, SYNC_FOR_DEVICE);
}
-EXPORT_SYMBOL_GPL(xen_swiotlb_sync_single_for_device);
+
+/*
+ * Unmap a set of streaming mode DMA translations. Again, cpu read rules
+ * concerning calls here are the same as for swiotlb_unmap_page() above.
+ */
+static void
+xen_swiotlb_unmap_sg_attrs(struct device *hwdev, struct scatterlist *sgl,
+ int nelems, enum dma_data_direction dir,
+ unsigned long attrs)
+{
+ struct scatterlist *sg;
+ int i;
+
+ BUG_ON(dir == DMA_NONE);
+
+ for_each_sg(sgl, sg, nelems, i)
+ xen_unmap_single(hwdev, sg->dma_address, sg_dma_len(sg), dir, attrs);
+
+}

/*
* Map a set of buffers described by scatterlist in streaming mode for DMA.
@@ -542,7 +555,7 @@ EXPORT_SYMBOL_GPL(xen_swiotlb_sync_single_for_device);
* Device ownership issues as mentioned above for xen_swiotlb_map_page are the
* same here.
*/
-int
+static int
xen_swiotlb_map_sg_attrs(struct device *hwdev, struct scatterlist *sgl,
int nelems, enum dma_data_direction dir,
unsigned long attrs)
@@ -599,27 +612,6 @@ xen_swiotlb_map_sg_attrs(struct device *hwdev, struct scatterlist *sgl,
}
return nelems;
}
-EXPORT_SYMBOL_GPL(xen_swiotlb_map_sg_attrs);
-
-/*
- * Unmap a set of streaming mode DMA translations. Again, cpu read rules
- * concerning calls here are the same as for swiotlb_unmap_page() above.
- */
-void
-xen_swiotlb_unmap_sg_attrs(struct device *hwdev, struct scatterlist *sgl,
- int nelems, enum dma_data_direction dir,
- unsigned long attrs)
-{
- struct scatterlist *sg;
- int i;
-
- BUG_ON(dir == DMA_NONE);
-
- for_each_sg(sgl, sg, nelems, i)
- xen_unmap_single(hwdev, sg->dma_address, sg_dma_len(sg), dir, attrs);
-
-}
-EXPORT_SYMBOL_GPL(xen_swiotlb_unmap_sg_attrs);

/*
* Make physical memory consistent for a set of streaming mode DMA translations
@@ -641,21 +633,19 @@ xen_swiotlb_sync_sg(struct device *hwdev, struct scatterlist *sgl,
sg_dma_len(sg), dir, target);
}

-void
+static void
xen_swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
int nelems, enum dma_data_direction dir)
{
xen_swiotlb_sync_sg(hwdev, sg, nelems, dir, SYNC_FOR_CPU);
}
-EXPORT_SYMBOL_GPL(xen_swiotlb_sync_sg_for_cpu);

-void
+static void
xen_swiotlb_sync_sg_for_device(struct device *hwdev, struct scatterlist *sg,
int nelems, enum dma_data_direction dir)
{
xen_swiotlb_sync_sg(hwdev, sg, nelems, dir, SYNC_FOR_DEVICE);
}
-EXPORT_SYMBOL_GPL(xen_swiotlb_sync_sg_for_device);

/*
* Return whether the given device DMA address mask can be supported
@@ -663,14 +653,13 @@ EXPORT_SYMBOL_GPL(xen_swiotlb_sync_sg_for_device);
* during bus mastering, then you would pass 0x00ffffff as the mask to
* this function.
*/
-int
+static int
xen_swiotlb_dma_supported(struct device *hwdev, u64 mask)
{
return xen_virt_to_bus(xen_io_tlb_end - 1) <= mask;
}
-EXPORT_SYMBOL_GPL(xen_swiotlb_dma_supported);

-int
+static int
xen_swiotlb_set_dma_mask(struct device *dev, u64 dma_mask)
{
if (!dev->dma_mask || !xen_swiotlb_dma_supported(dev, dma_mask))
@@ -680,14 +669,13 @@ xen_swiotlb_set_dma_mask(struct device *dev, u64 dma_mask)

return 0;
}
-EXPORT_SYMBOL_GPL(xen_swiotlb_set_dma_mask);

/*
* Create userspace mapping for the DMA-coherent memory.
* This function should be called with the pages from the current domain only,
* passing pages mapped from other domains would lead to memory corruption.
*/
-int
+static int
xen_swiotlb_dma_mmap(struct device *dev, struct vm_area_struct *vma,
void *cpu_addr, dma_addr_t dma_addr, size_t size,
unsigned long attrs)
@@ -699,13 +687,12 @@ xen_swiotlb_dma_mmap(struct device *dev, struct vm_area_struct *vma,
#endif
return dma_common_mmap(dev, vma, cpu_addr, dma_addr, size);
}
-EXPORT_SYMBOL_GPL(xen_swiotlb_dma_mmap);

/*
* This function should be called with the pages from the current domain only,
* passing pages mapped from other domains would lead to memory corruption.
*/
-int
+static int
xen_swiotlb_get_sgtable(struct device *dev, struct sg_table *sgt,
void *cpu_addr, dma_addr_t handle, size_t size,
unsigned long attrs)
@@ -727,4 +714,20 @@ xen_swiotlb_get_sgtable(struct device *dev, struct sg_table *sgt,
#endif
return dma_common_get_sgtable(dev, sgt, cpu_addr, handle, size);
}
-EXPORT_SYMBOL_GPL(xen_swiotlb_get_sgtable);
+
+const struct dma_map_ops xen_swiotlb_dma_ops = {
+ .alloc = xen_swiotlb_alloc_coherent,
+ .free = xen_swiotlb_free_coherent,
+ .sync_single_for_cpu = xen_swiotlb_sync_single_for_cpu,
+ .sync_single_for_device = xen_swiotlb_sync_single_for_device,
+ .sync_sg_for_cpu = xen_swiotlb_sync_sg_for_cpu,
+ .sync_sg_for_device = xen_swiotlb_sync_sg_for_device,
+ .map_sg = xen_swiotlb_map_sg_attrs,
+ .unmap_sg = xen_swiotlb_unmap_sg_attrs,
+ .map_page = xen_swiotlb_map_page,
+ .unmap_page = xen_swiotlb_unmap_page,
+ .dma_supported = xen_swiotlb_dma_supported,
+ .set_dma_mask = xen_swiotlb_set_dma_mask,
+ .mmap = xen_swiotlb_dma_mmap,
+ .get_sgtable = xen_swiotlb_get_sgtable,
+};
diff --git a/include/xen/swiotlb-xen.h b/include/xen/swiotlb-xen.h
index 1f6d78f044b6..ed2de363da33 100644
--- a/include/xen/swiotlb-xen.h
+++ b/include/xen/swiotlb-xen.h
@@ -1,69 +1,9 @@
#ifndef __LINUX_SWIOTLB_XEN_H
#define __LINUX_SWIOTLB_XEN_H

-#include <linux/dma-direction.h>
-#include <linux/scatterlist.h>
#include <linux/swiotlb.h>

extern int xen_swiotlb_init(int verbose, bool early);
+extern const struct dma_map_ops xen_swiotlb_dma_ops;

-extern void
-*xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
- dma_addr_t *dma_handle, gfp_t flags,
- unsigned long attrs);
-
-extern void
-xen_swiotlb_free_coherent(struct device *hwdev, size_t size,
- void *vaddr, dma_addr_t dma_handle,
- unsigned long attrs);
-
-extern dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
- unsigned long offset, size_t size,
- enum dma_data_direction dir,
- unsigned long attrs);
-
-extern void xen_swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
- size_t size, enum dma_data_direction dir,
- unsigned long attrs);
-extern int
-xen_swiotlb_map_sg_attrs(struct device *hwdev, struct scatterlist *sgl,
- int nelems, enum dma_data_direction dir,
- unsigned long attrs);
-
-extern void
-xen_swiotlb_unmap_sg_attrs(struct device *hwdev, struct scatterlist *sgl,
- int nelems, enum dma_data_direction dir,
- unsigned long attrs);
-
-extern void
-xen_swiotlb_sync_single_for_cpu(struct device *hwdev, dma_addr_t dev_addr,
- size_t size, enum dma_data_direction dir);
-
-extern void
-xen_swiotlb_sync_sg_for_cpu(struct device *hwdev, struct scatterlist *sg,
- int nelems, enum dma_data_direction dir);
-
-extern void
-xen_swiotlb_sync_single_for_device(struct device *hwdev, dma_addr_t dev_addr,
- size_t size, enum dma_data_direction dir);
-
-extern void
-xen_swiotlb_sync_sg_for_device(struct device *hwdev, struct scatterlist *sg,
- int nelems, enum dma_data_direction dir);
-
-extern int
-xen_swiotlb_dma_supported(struct device *hwdev, u64 mask);
-
-extern int
-xen_swiotlb_set_dma_mask(struct device *dev, u64 dma_mask);
-
-extern int
-xen_swiotlb_dma_mmap(struct device *dev, struct vm_area_struct *vma,
- void *cpu_addr, dma_addr_t dma_addr, size_t size,
- unsigned long attrs);
-
-extern int
-xen_swiotlb_get_sgtable(struct device *dev, struct sg_table *sgt,
- void *cpu_addr, dma_addr_t handle, size_t size,
- unsigned long attrs);
#endif /* __LINUX_SWIOTLB_XEN_H */
--
2.11.0

2017-06-08 13:27:34

by Christoph Hellwig

Subject: [PATCH 17/44] hexagon: switch to use ->mapping_error for error reporting

Signed-off-by: Christoph Hellwig <[email protected]>
---
arch/hexagon/include/asm/dma-mapping.h | 2 --
arch/hexagon/kernel/dma.c | 12 +++++++++---
arch/hexagon/kernel/hexagon_ksyms.c | 1 -
3 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/arch/hexagon/include/asm/dma-mapping.h b/arch/hexagon/include/asm/dma-mapping.h
index d3a87bd9b686..00e3f10113b0 100644
--- a/arch/hexagon/include/asm/dma-mapping.h
+++ b/arch/hexagon/include/asm/dma-mapping.h
@@ -29,8 +29,6 @@
#include <asm/io.h>

struct device;
-extern int bad_dma_address;
-#define DMA_ERROR_CODE bad_dma_address

extern const struct dma_map_ops *dma_ops;

diff --git a/arch/hexagon/kernel/dma.c b/arch/hexagon/kernel/dma.c
index e74b65009587..71269dc0f225 100644
--- a/arch/hexagon/kernel/dma.c
+++ b/arch/hexagon/kernel/dma.c
@@ -25,11 +25,11 @@
#include <linux/module.h>
#include <asm/page.h>

+#define HEXAGON_MAPPING_ERROR 0
+
const struct dma_map_ops *dma_ops;
EXPORT_SYMBOL(dma_ops);

-int bad_dma_address; /* globals are automatically initialized to zero */
-
static inline void *dma_addr_to_virt(dma_addr_t dma_addr)
{
return phys_to_virt((unsigned long) dma_addr);
@@ -181,7 +181,7 @@ static dma_addr_t hexagon_map_page(struct device *dev, struct page *page,
WARN_ON(size == 0);

if (!check_addr("map_single", dev, bus, size))
- return bad_dma_address;
+ return HEXAGON_MAPPING_ERROR;

if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
dma_sync(dma_addr_to_virt(bus), size, dir);
@@ -203,6 +203,11 @@ static void hexagon_sync_single_for_device(struct device *dev,
dma_sync(dma_addr_to_virt(dma_handle), size, dir);
}

+static int hexagon_mapping_error(struct device *dev, dma_addr_t dma_addr)
+{
+ return dma_addr == HEXAGON_MAPPING_ERROR;
+}
+
const struct dma_map_ops hexagon_dma_ops = {
.alloc = hexagon_dma_alloc_coherent,
.free = hexagon_free_coherent,
@@ -210,6 +215,7 @@ const struct dma_map_ops hexagon_dma_ops = {
.map_page = hexagon_map_page,
.sync_single_for_cpu = hexagon_sync_single_for_cpu,
.sync_single_for_device = hexagon_sync_single_for_device,
+ .mapping_error = hexagon_mapping_error,
.is_phys = 1,
};

diff --git a/arch/hexagon/kernel/hexagon_ksyms.c b/arch/hexagon/kernel/hexagon_ksyms.c
index 00bcad9cbd8f..aa248f595431 100644
--- a/arch/hexagon/kernel/hexagon_ksyms.c
+++ b/arch/hexagon/kernel/hexagon_ksyms.c
@@ -40,7 +40,6 @@ EXPORT_SYMBOL(memset);
/* Additional variables */
EXPORT_SYMBOL(__phys_offset);
EXPORT_SYMBOL(_dflt_cache_att);
-EXPORT_SYMBOL(bad_dma_address);

#define DECLARE_EXPORT(name) \
extern void name(void); EXPORT_SYMBOL(name)
--
2.11.0

2017-06-08 13:27:45

by Christoph Hellwig

Subject: [PATCH 18/44] iommu/amd: implement ->mapping_error

DMA_ERROR_CODE is going to go away, so don't rely on it.

Signed-off-by: Christoph Hellwig <[email protected]>
---
drivers/iommu/amd_iommu.c | 18 +++++++++++++-----
1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index 63cacf5d6cf2..d41280e869de 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -54,6 +54,8 @@
#include "amd_iommu_types.h"
#include "irq_remapping.h"

+#define AMD_IOMMU_MAPPING_ERROR 0
+
#define CMD_SET_TYPE(cmd, t) ((cmd)->data[1] |= ((t) << 28))

#define LOOP_TIMEOUT 100000
@@ -2394,7 +2396,7 @@ static dma_addr_t __map_single(struct device *dev,
paddr &= PAGE_MASK;

address = dma_ops_alloc_iova(dev, dma_dom, pages, dma_mask);
- if (address == DMA_ERROR_CODE)
+ if (address == AMD_IOMMU_MAPPING_ERROR)
goto out;

prot = dir2prot(direction);
@@ -2431,7 +2433,7 @@ static dma_addr_t __map_single(struct device *dev,

dma_ops_free_iova(dma_dom, address, pages);

- return DMA_ERROR_CODE;
+ return AMD_IOMMU_MAPPING_ERROR;
}

/*
@@ -2483,7 +2485,7 @@ static dma_addr_t map_page(struct device *dev, struct page *page,
if (PTR_ERR(domain) == -EINVAL)
return (dma_addr_t)paddr;
else if (IS_ERR(domain))
- return DMA_ERROR_CODE;
+ return AMD_IOMMU_MAPPING_ERROR;

dma_mask = *dev->dma_mask;
dma_dom = to_dma_ops_domain(domain);
@@ -2560,7 +2562,7 @@ static int map_sg(struct device *dev, struct scatterlist *sglist,
npages = sg_num_pages(dev, sglist, nelems);

address = dma_ops_alloc_iova(dev, dma_dom, npages, dma_mask);
- if (address == DMA_ERROR_CODE)
+ if (address == AMD_IOMMU_MAPPING_ERROR)
goto out_err;

prot = dir2prot(direction);
@@ -2683,7 +2685,7 @@ static void *alloc_coherent(struct device *dev, size_t size,
*dma_addr = __map_single(dev, dma_dom, page_to_phys(page),
size, DMA_BIDIRECTIONAL, dma_mask);

- if (*dma_addr == DMA_ERROR_CODE)
+ if (*dma_addr == AMD_IOMMU_MAPPING_ERROR)
goto out_free;

return page_address(page);
@@ -2732,6 +2734,11 @@ static int amd_iommu_dma_supported(struct device *dev, u64 mask)
return check_device(dev);
}

+static int amd_iommu_mapping_error(struct device *dev, dma_addr_t dma_addr)
+{
+ return dma_addr == AMD_IOMMU_MAPPING_ERROR;
+}
+
static const struct dma_map_ops amd_iommu_dma_ops = {
.alloc = alloc_coherent,
.free = free_coherent,
@@ -2740,6 +2747,7 @@ static const struct dma_map_ops amd_iommu_dma_ops = {
.map_sg = map_sg,
.unmap_sg = unmap_sg,
.dma_supported = amd_iommu_dma_supported,
+ .mapping_error = amd_iommu_mapping_error,
};

static int init_reserved_iova_ranges(void)
--
2.11.0

2017-06-08 13:28:17

by Christoph Hellwig

Subject: [PATCH 28/44] sparc: remove arch specific dma_supported implementations

Usually dma_supported decisions are made by the dma_map_ops instance.
Switch sparc to that model by providing a ->dma_supported instance
for sbus that always returns false, implementations tailored to the
sun4u and sun4v cases for sparc64, and by leaving it unimplemented
for PCI on sparc32, which means it is always supported.

Signed-off-by: Christoph Hellwig <[email protected]>
---
arch/sparc/include/asm/dma-mapping.h | 3 ---
arch/sparc/kernel/iommu.c | 40 +++++++++++++++---------------------
arch/sparc/kernel/ioport.c | 22 ++++++--------------
arch/sparc/kernel/pci_sun4v.c | 17 +++++++++++++++
4 files changed, 39 insertions(+), 43 deletions(-)

diff --git a/arch/sparc/include/asm/dma-mapping.h b/arch/sparc/include/asm/dma-mapping.h
index 98da9f92c318..60bf1633d554 100644
--- a/arch/sparc/include/asm/dma-mapping.h
+++ b/arch/sparc/include/asm/dma-mapping.h
@@ -5,9 +5,6 @@
#include <linux/mm.h>
#include <linux/dma-debug.h>

-#define HAVE_ARCH_DMA_SUPPORTED 1
-int dma_supported(struct device *dev, u64 mask);
-
static inline void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
enum dma_data_direction dir)
{
diff --git a/arch/sparc/kernel/iommu.c b/arch/sparc/kernel/iommu.c
index dafa316d978d..fcbcc031f615 100644
--- a/arch/sparc/kernel/iommu.c
+++ b/arch/sparc/kernel/iommu.c
@@ -746,6 +746,21 @@ static int dma_4u_mapping_error(struct device *dev, dma_addr_t dma_addr)
return dma_addr == SPARC_MAPPING_ERROR;
}

+static int dma_4u_supported(struct device *dev, u64 device_mask)
+{
+ struct iommu *iommu = dev->archdata.iommu;
+
+ if (device_mask > DMA_BIT_MASK(32))
+ return 0;
+ if ((device_mask & iommu->dma_addr_mask) == iommu->dma_addr_mask)
+ return 1;
+#ifdef CONFIG_PCI
+ if (dev_is_pci(dev))
+ return pci64_dma_supported(to_pci_dev(dev), device_mask);
+#endif
+ return 0;
+}
+
static const struct dma_map_ops sun4u_dma_ops = {
.alloc = dma_4u_alloc_coherent,
.free = dma_4u_free_coherent,
@@ -755,32 +770,9 @@ static const struct dma_map_ops sun4u_dma_ops = {
.unmap_sg = dma_4u_unmap_sg,
.sync_single_for_cpu = dma_4u_sync_single_for_cpu,
.sync_sg_for_cpu = dma_4u_sync_sg_for_cpu,
+ .dma_supported = dma_4u_supported,
.mapping_error = dma_4u_mapping_error,
};

const struct dma_map_ops *dma_ops = &sun4u_dma_ops;
EXPORT_SYMBOL(dma_ops);
-
-int dma_supported(struct device *dev, u64 device_mask)
-{
- struct iommu *iommu = dev->archdata.iommu;
- u64 dma_addr_mask = iommu->dma_addr_mask;
-
- if (device_mask > DMA_BIT_MASK(32)) {
- if (iommu->atu)
- dma_addr_mask = iommu->atu->dma_addr_mask;
- else
- return 0;
- }
-
- if ((device_mask & dma_addr_mask) == dma_addr_mask)
- return 1;
-
-#ifdef CONFIG_PCI
- if (dev_is_pci(dev))
- return pci64_dma_supported(to_pci_dev(dev), device_mask);
-#endif
-
- return 0;
-}
-EXPORT_SYMBOL(dma_supported);
diff --git a/arch/sparc/kernel/ioport.c b/arch/sparc/kernel/ioport.c
index dd081d557609..12894f259bea 100644
--- a/arch/sparc/kernel/ioport.c
+++ b/arch/sparc/kernel/ioport.c
@@ -401,6 +401,11 @@ static void sbus_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
BUG();
}

+static int sbus_dma_supported(struct device *dev, u64 mask)
+{
+ return 0;
+}
+
static const struct dma_map_ops sbus_dma_ops = {
.alloc = sbus_alloc_coherent,
.free = sbus_free_coherent,
@@ -410,6 +415,7 @@ static const struct dma_map_ops sbus_dma_ops = {
.unmap_sg = sbus_unmap_sg,
.sync_sg_for_cpu = sbus_sync_sg_for_cpu,
.sync_sg_for_device = sbus_sync_sg_for_device,
+ .dma_supported = sbus_dma_supported,
};

static int __init sparc_register_ioport(void)
@@ -655,22 +661,6 @@ EXPORT_SYMBOL(pci32_dma_ops);
const struct dma_map_ops *dma_ops = &sbus_dma_ops;
EXPORT_SYMBOL(dma_ops);

-
-/*
- * Return whether the given PCI device DMA address mask can be
- * supported properly. For example, if your device can only drive the
- * low 24-bits during PCI bus mastering, then you would pass
- * 0x00ffffff as the mask to this function.
- */
-int dma_supported(struct device *dev, u64 mask)
-{
- if (dev_is_pci(dev))
- return 1;
-
- return 0;
-}
-EXPORT_SYMBOL(dma_supported);
-
#ifdef CONFIG_PROC_FS

static int sparc_io_proc_show(struct seq_file *m, void *v)
diff --git a/arch/sparc/kernel/pci_sun4v.c b/arch/sparc/kernel/pci_sun4v.c
index 8e2a56f4c03a..24f21c726dfa 100644
--- a/arch/sparc/kernel/pci_sun4v.c
+++ b/arch/sparc/kernel/pci_sun4v.c
@@ -24,6 +24,7 @@

#include "pci_impl.h"
#include "iommu_common.h"
+#include "kernel.h"

#include "pci_sun4v.h"

@@ -669,6 +670,21 @@ static void dma_4v_unmap_sg(struct device *dev, struct scatterlist *sglist,
local_irq_restore(flags);
}

+static int dma_4v_supported(struct device *dev, u64 device_mask)
+{
+ struct iommu *iommu = dev->archdata.iommu;
+ u64 dma_addr_mask;
+
+ if (device_mask > DMA_BIT_MASK(32) && iommu->atu)
+ dma_addr_mask = iommu->atu->dma_addr_mask;
+ else
+ dma_addr_mask = iommu->dma_addr_mask;
+
+ if ((device_mask & dma_addr_mask) == dma_addr_mask)
+ return 1;
+ return pci64_dma_supported(to_pci_dev(dev), device_mask);
+}
+
static int dma_4v_mapping_error(struct device *dev, dma_addr_t dma_addr)
{
return dma_addr == SPARC_MAPPING_ERROR;
@@ -681,6 +697,7 @@ static const struct dma_map_ops sun4v_dma_ops = {
.unmap_page = dma_4v_unmap_page,
.map_sg = dma_4v_map_sg,
.unmap_sg = dma_4v_unmap_sg,
+ .dma_supported = dma_4v_supported,
.mapping_error = dma_4v_mapping_error,
};

--
2.11.0

2017-06-08 13:28:38

by Christoph Hellwig

Subject: [PATCH 31/44] hexagon: remove arch-specific dma_supported implementation

This implementation is simply bogus - hexagon only has a simple
direct mapped DMA implementation and thus doesn't care about the
address.

Signed-off-by: Christoph Hellwig <[email protected]>
---
arch/hexagon/include/asm/dma-mapping.h | 2 --
arch/hexagon/kernel/dma.c | 9 ---------
2 files changed, 11 deletions(-)

diff --git a/arch/hexagon/include/asm/dma-mapping.h b/arch/hexagon/include/asm/dma-mapping.h
index 00e3f10113b0..9c15cb5271a6 100644
--- a/arch/hexagon/include/asm/dma-mapping.h
+++ b/arch/hexagon/include/asm/dma-mapping.h
@@ -37,8 +37,6 @@ static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
return dma_ops;
}

-#define HAVE_ARCH_DMA_SUPPORTED 1
-extern int dma_supported(struct device *dev, u64 mask);
extern int dma_is_consistent(struct device *dev, dma_addr_t dma_handle);
extern void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
enum dma_data_direction direction);
diff --git a/arch/hexagon/kernel/dma.c b/arch/hexagon/kernel/dma.c
index 71269dc0f225..9ff1b2041f85 100644
--- a/arch/hexagon/kernel/dma.c
+++ b/arch/hexagon/kernel/dma.c
@@ -35,15 +35,6 @@ static inline void *dma_addr_to_virt(dma_addr_t dma_addr)
return phys_to_virt((unsigned long) dma_addr);
}

-int dma_supported(struct device *dev, u64 mask)
-{
- if (mask == DMA_BIT_MASK(32))
- return 1;
- else
- return 0;
-}
-EXPORT_SYMBOL(dma_supported);
-
static struct gen_pool *coherent_pool;


--
2.11.0

2017-06-08 13:28:46

by Christoph Hellwig

Subject: [PATCH 36/44] dma-mapping: remove HAVE_ARCH_DMA_SUPPORTED

Signed-off-by: Christoph Hellwig <[email protected]>
---
include/linux/dma-mapping.h | 2 --
1 file changed, 2 deletions(-)

diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index a57875309bfd..3e5908656226 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -549,7 +549,6 @@ static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
return 0;
}

-#ifndef HAVE_ARCH_DMA_SUPPORTED
static inline int dma_supported(struct device *dev, u64 mask)
{
const struct dma_map_ops *ops = get_dma_ops(dev);
@@ -560,7 +559,6 @@ static inline int dma_supported(struct device *dev, u64 mask)
return 1;
return ops->dma_supported(dev, mask);
}
-#endif

#ifndef HAVE_ARCH_DMA_SET_MASK
static inline int dma_set_mask(struct device *dev, u64 mask)
--
2.11.0

2017-06-08 13:28:56

by Christoph Hellwig

Subject: [PATCH 38/44] arm: implement ->dma_supported instead of ->set_dma_mask

Same behavior, less code duplication.
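
The transformation is mechanical once the generic dma_set_mask()
(patch 43) does the mask bookkeeping: a ->set_dma_mask method that
validated and stored the mask becomes a ->dma_supported method that
only answers yes or no. A sketch with hypothetical foo_* names:

	/* before: the method had to validate and store the mask itself */
	static int foo_set_dma_mask(struct device *dev, u64 mask)
	{
		if (!dev->dma_mask || !foo_mask_ok(dev, mask))
			return -EIO;
		*dev->dma_mask = mask;
		return 0;
	}

	/* after: just answer the question; generic code stores the mask */
	static int foo_dma_supported(struct device *dev, u64 mask)
	{
		return foo_mask_ok(dev, mask);
	}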

Signed-off-by: Christoph Hellwig <[email protected]>
---
arch/arm/common/dmabounce.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/arm/common/dmabounce.c b/arch/arm/common/dmabounce.c
index 4aabf117e136..d89a0b56b245 100644
--- a/arch/arm/common/dmabounce.c
+++ b/arch/arm/common/dmabounce.c
@@ -445,12 +445,12 @@ static void dmabounce_sync_for_device(struct device *dev,
arm_dma_ops.sync_single_for_device(dev, handle, size, dir);
}

-static int dmabounce_set_mask(struct device *dev, u64 dma_mask)
+static int dmabounce_dma_supported(struct device *dev, u64 dma_mask)
{
if (dev->archdata.dmabounce)
return 0;

- return arm_dma_ops.set_dma_mask(dev, dma_mask);
+ return arm_dma_ops.dma_supported(dev, dma_mask);
}

static int dmabounce_mapping_error(struct device *dev, dma_addr_t dma_addr)
@@ -474,9 +474,8 @@ static const struct dma_map_ops dmabounce_ops = {
.unmap_sg = arm_dma_unmap_sg,
.sync_sg_for_cpu = arm_dma_sync_sg_for_cpu,
.sync_sg_for_device = arm_dma_sync_sg_for_device,
- .set_dma_mask = dmabounce_set_mask,
+ .dma_supported = dmabounce_dma_supported,
.mapping_error = dmabounce_mapping_error,
- .dma_supported = arm_dma_supported,
};

static int dmabounce_init_pool(struct dmabounce_pool *pool, struct device *dev,
--
2.11.0

2017-06-08 13:29:12

by Christoph Hellwig

Subject: [PATCH 41/44] powerpc/cell: clean up fixed mapping dma_ops initialization

By the time cell_pci_dma_dev_setup calls cell_dma_dev_setup, no
device can have the fixed map ops set yet, as they are only set by
the set_dma_mask method. So move the setup for the fixed case so it
is done only in that place, instead of indirecting through
cell_dma_dev_setup.

Signed-off-by: Christoph Hellwig <[email protected]>
---
arch/powerpc/platforms/cell/iommu.c | 27 +++++++--------------------
1 file changed, 7 insertions(+), 20 deletions(-)

diff --git a/arch/powerpc/platforms/cell/iommu.c b/arch/powerpc/platforms/cell/iommu.c
index 948086e33a0c..497bfbdbd967 100644
--- a/arch/powerpc/platforms/cell/iommu.c
+++ b/arch/powerpc/platforms/cell/iommu.c
@@ -663,14 +663,9 @@ static const struct dma_map_ops dma_iommu_fixed_ops = {
.mapping_error = dma_iommu_mapping_error,
};

-static void cell_dma_dev_setup_fixed(struct device *dev);
-
static void cell_dma_dev_setup(struct device *dev)
{
- /* Order is important here, these are not mutually exclusive */
- if (get_dma_ops(dev) == &dma_iommu_fixed_ops)
- cell_dma_dev_setup_fixed(dev);
- else if (get_pci_dma_ops() == &dma_iommu_ops)
+ if (get_pci_dma_ops() == &dma_iommu_ops)
set_iommu_table_base(dev, cell_get_iommu_table(dev));
else if (get_pci_dma_ops() == &dma_direct_ops)
set_dma_offset(dev, cell_dma_direct_offset);
@@ -963,32 +958,24 @@ static int dma_set_mask_and_switch(struct device *dev, u64 dma_mask)
return -EIO;

if (dma_mask == DMA_BIT_MASK(64) &&
- cell_iommu_get_fixed_address(dev) != OF_BAD_ADDR)
- {
+ cell_iommu_get_fixed_address(dev) != OF_BAD_ADDR) {
+ u64 addr = cell_iommu_get_fixed_address(dev) +
+ dma_iommu_fixed_base;
dev_dbg(dev, "iommu: 64-bit OK, using fixed ops\n");
+ dev_dbg(dev, "iommu: fixed addr = %llx\n", addr);
set_dma_ops(dev, &dma_iommu_fixed_ops);
+ set_dma_offset(dev, addr);
} else {
dev_dbg(dev, "iommu: not 64-bit, using default ops\n");
set_dma_ops(dev, get_pci_dma_ops());
+ cell_dma_dev_setup(dev);
}

- cell_dma_dev_setup(dev);
-
*dev->dma_mask = dma_mask;

return 0;
}

-static void cell_dma_dev_setup_fixed(struct device *dev)
-{
- u64 addr;
-
- addr = cell_iommu_get_fixed_address(dev) + dma_iommu_fixed_base;
- set_dma_offset(dev, addr);
-
- dev_dbg(dev, "iommu: fixed addr = %llx\n", addr);
-}
-
static void insert_16M_pte(unsigned long addr, unsigned long *ptab,
unsigned long base_pte)
{
--
2.11.0

2017-06-08 13:29:19

by Christoph Hellwig

Subject: [PATCH 44/44] powerpc: merge __dma_set_mask into dma_set_mask

Signed-off-by: Christoph Hellwig <[email protected]>
---
arch/powerpc/include/asm/dma-mapping.h | 1 -
arch/powerpc/kernel/dma.c | 13 ++++---------
2 files changed, 4 insertions(+), 10 deletions(-)

diff --git a/arch/powerpc/include/asm/dma-mapping.h b/arch/powerpc/include/asm/dma-mapping.h
index 73aedbe6c977..eaece3d3e225 100644
--- a/arch/powerpc/include/asm/dma-mapping.h
+++ b/arch/powerpc/include/asm/dma-mapping.h
@@ -112,7 +112,6 @@ static inline void set_dma_offset(struct device *dev, dma_addr_t off)
#define HAVE_ARCH_DMA_SET_MASK 1
extern int dma_set_mask(struct device *dev, u64 dma_mask);

-extern int __dma_set_mask(struct device *dev, u64 dma_mask);
extern u64 __dma_get_required_mask(struct device *dev);

static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
diff --git a/arch/powerpc/kernel/dma.c b/arch/powerpc/kernel/dma.c
index 466c9f07b288..4194bbbbdb10 100644
--- a/arch/powerpc/kernel/dma.c
+++ b/arch/powerpc/kernel/dma.c
@@ -314,14 +314,6 @@ EXPORT_SYMBOL(dma_set_coherent_mask);

#define PREALLOC_DMA_DEBUG_ENTRIES (1 << 16)

-int __dma_set_mask(struct device *dev, u64 dma_mask)
-{
- if (!dev->dma_mask || !dma_supported(dev, dma_mask))
- return -EIO;
- *dev->dma_mask = dma_mask;
- return 0;
-}
-
int dma_set_mask(struct device *dev, u64 dma_mask)
{
if (ppc_md.dma_set_mask)
@@ -334,7 +326,10 @@ int dma_set_mask(struct device *dev, u64 dma_mask)
return phb->controller_ops.dma_set_mask(pdev, dma_mask);
}

- return __dma_set_mask(dev, dma_mask);
+ if (!dev->dma_mask || !dma_supported(dev, dma_mask))
+ return -EIO;
+ *dev->dma_mask = dma_mask;
+ return 0;
}
EXPORT_SYMBOL(dma_set_mask);

--
2.11.0

2017-06-08 13:29:46

by Christoph Hellwig

Subject: [PATCH 43/44] dma-mapping: remove the set_dma_mask method

Signed-off-by: Christoph Hellwig <[email protected]>
---
arch/powerpc/kernel/dma.c | 4 ----
include/linux/dma-mapping.h | 6 ------
2 files changed, 10 deletions(-)

diff --git a/arch/powerpc/kernel/dma.c b/arch/powerpc/kernel/dma.c
index 41c749586bd2..466c9f07b288 100644
--- a/arch/powerpc/kernel/dma.c
+++ b/arch/powerpc/kernel/dma.c
@@ -316,10 +316,6 @@ EXPORT_SYMBOL(dma_set_coherent_mask);

int __dma_set_mask(struct device *dev, u64 dma_mask)
{
- const struct dma_map_ops *dma_ops = get_dma_ops(dev);
-
- if ((dma_ops != NULL) && (dma_ops->set_dma_mask != NULL))
- return dma_ops->set_dma_mask(dev, dma_mask);
if (!dev->dma_mask || !dma_supported(dev, dma_mask))
return -EIO;
*dev->dma_mask = dma_mask;
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 3e5908656226..527f2ed8c645 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -127,7 +127,6 @@ struct dma_map_ops {
enum dma_data_direction dir);
int (*mapping_error)(struct device *dev, dma_addr_t dma_addr);
int (*dma_supported)(struct device *dev, u64 mask);
- int (*set_dma_mask)(struct device *dev, u64 mask);
#ifdef ARCH_HAS_DMA_GET_REQUIRED_MASK
u64 (*get_required_mask)(struct device *dev);
#endif
@@ -563,11 +562,6 @@ static inline int dma_supported(struct device *dev, u64 mask)
#ifndef HAVE_ARCH_DMA_SET_MASK
static inline int dma_set_mask(struct device *dev, u64 mask)
{
- const struct dma_map_ops *ops = get_dma_ops(dev);
-
- if (ops->set_dma_mask)
- return ops->set_dma_mask(dev, mask);
-
if (!dev->dma_mask || !dma_supported(dev, mask))
return -EIO;
*dev->dma_mask = mask;
--
2.11.0

2017-06-08 13:29:10

by Christoph Hellwig

Subject: [PATCH 42/44] powerpc/cell: use the dma_supported method for ops switching

Besides removing the last instance of the set_dma_mask method this
also reduces code duplication.
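
The trick is that powerpc's dma_set_mask() (patch 44) ends with the
generic check-and-assign sequence, and dma_supported() dispatches to
the ops instance. Returning 1 from the cell method therefore lets
the caller store the mask, while the method itself only switches the
ops. A condensed sketch of the calling side (omitting the ppc_md and
PCI controller hooks):

	int dma_set_mask(struct device *dev, u64 dma_mask)
	{
		/*
		 * This ends up in the cell ->dma_supported method,
		 * which may call set_dma_ops(dev, ...) as a side
		 * effect before returning 1.
		 */
		if (!dev->dma_mask || !dma_supported(dev, dma_mask))
			return -EIO;
		*dev->dma_mask = dma_mask;	/* reached only on success */
		return 0;
	}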

Signed-off-by: Christoph Hellwig <[email protected]>
---
arch/powerpc/platforms/cell/iommu.c | 25 +++++++++----------------
1 file changed, 9 insertions(+), 16 deletions(-)

diff --git a/arch/powerpc/platforms/cell/iommu.c b/arch/powerpc/platforms/cell/iommu.c
index 497bfbdbd967..29d4f96ed33e 100644
--- a/arch/powerpc/platforms/cell/iommu.c
+++ b/arch/powerpc/platforms/cell/iommu.c
@@ -644,20 +644,14 @@ static void dma_fixed_unmap_sg(struct device *dev, struct scatterlist *sg,
direction, attrs);
}

-static int dma_fixed_dma_supported(struct device *dev, u64 mask)
-{
- return mask == DMA_BIT_MASK(64);
-}
-
-static int dma_set_mask_and_switch(struct device *dev, u64 dma_mask);
+static int dma_supported_and_switch(struct device *dev, u64 dma_mask);

static const struct dma_map_ops dma_iommu_fixed_ops = {
.alloc = dma_fixed_alloc_coherent,
.free = dma_fixed_free_coherent,
.map_sg = dma_fixed_map_sg,
.unmap_sg = dma_fixed_unmap_sg,
- .dma_supported = dma_fixed_dma_supported,
- .set_dma_mask = dma_set_mask_and_switch,
+ .dma_supported = dma_supported_and_switch,
.map_page = dma_fixed_map_page,
.unmap_page = dma_fixed_unmap_page,
.mapping_error = dma_iommu_mapping_error,
@@ -952,11 +946,8 @@ static u64 cell_iommu_get_fixed_address(struct device *dev)
return dev_addr;
}

-static int dma_set_mask_and_switch(struct device *dev, u64 dma_mask)
+static int dma_supported_and_switch(struct device *dev, u64 dma_mask)
{
- if (!dev->dma_mask || !dma_supported(dev, dma_mask))
- return -EIO;
-
if (dma_mask == DMA_BIT_MASK(64) &&
cell_iommu_get_fixed_address(dev) != OF_BAD_ADDR) {
u64 addr = cell_iommu_get_fixed_address(dev) +
@@ -965,14 +956,16 @@ static int dma_set_mask_and_switch(struct device *dev, u64 dma_mask)
dev_dbg(dev, "iommu: fixed addr = %llx\n", addr);
set_dma_ops(dev, &dma_iommu_fixed_ops);
set_dma_offset(dev, addr);
- } else {
+ return 1;
+ }
+
+ if (dma_iommu_dma_supported(dev, dma_mask)) {
dev_dbg(dev, "iommu: not 64-bit, using default ops\n");
set_dma_ops(dev, get_pci_dma_ops());
cell_dma_dev_setup(dev);
+ return 1;
}

- *dev->dma_mask = dma_mask;
-
return 0;
}

@@ -1127,7 +1120,7 @@ static int __init cell_iommu_fixed_mapping_init(void)
cell_iommu_setup_window(iommu, np, dbase, dsize, 0);
}

- dma_iommu_ops.set_dma_mask = dma_set_mask_and_switch;
+ dma_iommu_ops.dma_supported = dma_supported_and_switch;
set_pci_dma_ops(&dma_iommu_ops);

return 0;
--
2.11.0

2017-06-08 13:30:29

by Christoph Hellwig

Subject: [PATCH 40/44] tile: remove dma_supported and mapping_error methods

These just duplicate the default behavior if no method is provided.
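
For reference, the generic helpers fall back to exactly this behavior
when the methods are unset; a condensed sketch (compare the
dma_supported() inline quoted in patch 36 above):

	static inline int dma_mapping_error(struct device *dev,
					    dma_addr_t dma_addr)
	{
		const struct dma_map_ops *ops = get_dma_ops(dev);

		if (ops->mapping_error)
			return ops->mapping_error(dev, dma_addr);
		return 0;	/* no method: mappings never fail */
	}

	static inline int dma_supported(struct device *dev, u64 mask)
	{
		const struct dma_map_ops *ops = get_dma_ops(dev);

		if (!ops)
			return 0;
		if (!ops->dma_supported)
			return 1;	/* no method: any mask is accepted */
		return ops->dma_supported(dev, mask);
	}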

Signed-off-by: Christoph Hellwig <[email protected]>
---
arch/tile/kernel/pci-dma.c | 30 ------------------------------
1 file changed, 30 deletions(-)

diff --git a/arch/tile/kernel/pci-dma.c b/arch/tile/kernel/pci-dma.c
index 569bb6dd154a..f2abedc8a080 100644
--- a/arch/tile/kernel/pci-dma.c
+++ b/arch/tile/kernel/pci-dma.c
@@ -317,18 +317,6 @@ static void tile_dma_sync_sg_for_device(struct device *dev,
}
}

-static inline int
-tile_dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
-{
- return 0;
-}
-
-static inline int
-tile_dma_supported(struct device *dev, u64 mask)
-{
- return 1;
-}
-
static const struct dma_map_ops tile_default_dma_map_ops = {
.alloc = tile_dma_alloc_coherent,
.free = tile_dma_free_coherent,
@@ -340,8 +328,6 @@ static const struct dma_map_ops tile_default_dma_map_ops = {
.sync_single_for_device = tile_dma_sync_single_for_device,
.sync_sg_for_cpu = tile_dma_sync_sg_for_cpu,
.sync_sg_for_device = tile_dma_sync_sg_for_device,
- .mapping_error = tile_dma_mapping_error,
- .dma_supported = tile_dma_supported
};

const struct dma_map_ops *tile_dma_map_ops = &tile_default_dma_map_ops;
@@ -504,18 +490,6 @@ static void tile_pci_dma_sync_sg_for_device(struct device *dev,
}
}

-static inline int
-tile_pci_dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
-{
- return 0;
-}
-
-static inline int
-tile_pci_dma_supported(struct device *dev, u64 mask)
-{
- return 1;
-}
-
static const struct dma_map_ops tile_pci_default_dma_map_ops = {
.alloc = tile_pci_dma_alloc_coherent,
.free = tile_pci_dma_free_coherent,
@@ -527,8 +501,6 @@ static const struct dma_map_ops tile_pci_default_dma_map_ops = {
.sync_single_for_device = tile_pci_dma_sync_single_for_device,
.sync_sg_for_cpu = tile_pci_dma_sync_sg_for_cpu,
.sync_sg_for_device = tile_pci_dma_sync_sg_for_device,
- .mapping_error = tile_pci_dma_mapping_error,
- .dma_supported = tile_pci_dma_supported
};

const struct dma_map_ops *gx_pci_dma_map_ops = &tile_pci_default_dma_map_ops;
@@ -578,8 +550,6 @@ static const struct dma_map_ops pci_hybrid_dma_ops = {
.sync_single_for_device = tile_pci_dma_sync_single_for_device,
.sync_sg_for_cpu = tile_pci_dma_sync_sg_for_cpu,
.sync_sg_for_device = tile_pci_dma_sync_sg_for_device,
- .mapping_error = tile_pci_dma_mapping_error,
- .dma_supported = tile_pci_dma_supported
};

const struct dma_map_ops *gx_legacy_pci_dma_map_ops = &pci_swiotlb_dma_ops;
--
2.11.0

2017-06-08 13:28:53

by Christoph Hellwig

Subject: [PATCH 37/44] mips/loongson64: implement ->dma_supported instead of ->set_dma_mask

Same behavior, less code duplication.

Signed-off-by: Christoph Hellwig <[email protected]>
---
arch/mips/loongson64/common/dma-swiotlb.c | 19 +++++--------------
1 file changed, 5 insertions(+), 14 deletions(-)

diff --git a/arch/mips/loongson64/common/dma-swiotlb.c b/arch/mips/loongson64/common/dma-swiotlb.c
index 178ca17a5667..34486c138206 100644
--- a/arch/mips/loongson64/common/dma-swiotlb.c
+++ b/arch/mips/loongson64/common/dma-swiotlb.c
@@ -75,19 +75,11 @@ static void loongson_dma_sync_sg_for_device(struct device *dev,
mb();
}

-static int loongson_dma_set_mask(struct device *dev, u64 mask)
+static int loongson_dma_supported(struct device *dev, u64 mask)
{
- if (!dev->dma_mask || !dma_supported(dev, mask))
- return -EIO;
-
- if (mask > DMA_BIT_MASK(loongson_sysconf.dma_mask_bits)) {
- *dev->dma_mask = DMA_BIT_MASK(loongson_sysconf.dma_mask_bits);
- return -EIO;
- }
-
- *dev->dma_mask = mask;
-
- return 0;
+ if (mask > DMA_BIT_MASK(loongson_sysconf.dma_mask_bits))
+ return 0;
+ return swiotlb_dma_supported(dev, mask);
}

dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr)
@@ -126,8 +118,7 @@ static const struct dma_map_ops loongson_dma_map_ops = {
.sync_sg_for_cpu = swiotlb_sync_sg_for_cpu,
.sync_sg_for_device = loongson_dma_sync_sg_for_device,
.mapping_error = swiotlb_dma_mapping_error,
- .dma_supported = swiotlb_dma_supported,
- .set_dma_mask = loongson_dma_set_mask
+ .dma_supported = loongson_dma_supported,
};

void __init plat_swiotlb_setup(void)
--
2.11.0

2017-06-08 13:31:05

by Christoph Hellwig

Subject: [PATCH 39/44] xen-swiotlb: remove xen_swiotlb_set_dma_mask

This just duplicates the generic implementation.

Signed-off-by: Christoph Hellwig <[email protected]>
---
drivers/xen/swiotlb-xen.c | 12 ------------
1 file changed, 12 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index c3a04b2d7532..82fc54f8eb77 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -661,17 +661,6 @@ xen_swiotlb_dma_supported(struct device *hwdev, u64 mask)
return xen_virt_to_bus(xen_io_tlb_end - 1) <= mask;
}

-static int
-xen_swiotlb_set_dma_mask(struct device *dev, u64 dma_mask)
-{
- if (!dev->dma_mask || !xen_swiotlb_dma_supported(dev, dma_mask))
- return -EIO;
-
- *dev->dma_mask = dma_mask;
-
- return 0;
-}
-
/*
* Create userspace mapping for the DMA-coherent memory.
* This function should be called with the pages from the current domain only,
@@ -734,7 +723,6 @@ const struct dma_map_ops xen_swiotlb_dma_ops = {
.map_page = xen_swiotlb_map_page,
.unmap_page = xen_swiotlb_unmap_page,
.dma_supported = xen_swiotlb_dma_supported,
- .set_dma_mask = xen_swiotlb_set_dma_mask,
.mmap = xen_swiotlb_dma_mmap,
.get_sgtable = xen_swiotlb_get_sgtable,
.mapping_error = xen_swiotlb_mapping_error,
--
2.11.0

2017-06-08 13:31:44

by Christoph Hellwig

Subject: [PATCH 35/44] x86: remove arch specific dma_supported implementation

And instead wire it up as a method for all the dma_map_ops instances.

Note that this also means the arch-specific check will now be applied
fully, instead of only partially, in the AMD iommu driver.

Signed-off-by: Christoph Hellwig <[email protected]>
---
arch/x86/include/asm/dma-mapping.h | 3 ---
arch/x86/include/asm/iommu.h | 2 ++
arch/x86/kernel/amd_gart_64.c | 1 +
arch/x86/kernel/pci-calgary_64.c | 1 +
arch/x86/kernel/pci-dma.c | 7 +------
arch/x86/kernel/pci-nommu.c | 1 +
arch/x86/pci/sta2x11-fixup.c | 3 ++-
drivers/iommu/amd_iommu.c | 2 ++
drivers/iommu/intel-iommu.c | 3 +++
9 files changed, 13 insertions(+), 10 deletions(-)

diff --git a/arch/x86/include/asm/dma-mapping.h b/arch/x86/include/asm/dma-mapping.h
index c35d228aa381..398c79889f5c 100644
--- a/arch/x86/include/asm/dma-mapping.h
+++ b/arch/x86/include/asm/dma-mapping.h
@@ -33,9 +33,6 @@ static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
bool arch_dma_alloc_attrs(struct device **dev, gfp_t *gfp);
#define arch_dma_alloc_attrs arch_dma_alloc_attrs

-#define HAVE_ARCH_DMA_SUPPORTED 1
-extern int dma_supported(struct device *hwdev, u64 mask);
-
extern void *dma_generic_alloc_coherent(struct device *dev, size_t size,
dma_addr_t *dma_addr, gfp_t flag,
unsigned long attrs);
diff --git a/arch/x86/include/asm/iommu.h b/arch/x86/include/asm/iommu.h
index 793869879464..fca144a104e4 100644
--- a/arch/x86/include/asm/iommu.h
+++ b/arch/x86/include/asm/iommu.h
@@ -6,6 +6,8 @@ extern int force_iommu, no_iommu;
extern int iommu_detected;
extern int iommu_pass_through;

+int x86_dma_supported(struct device *dev, u64 mask);
+
/* 10 seconds */
#define DMAR_OPERATION_TIMEOUT ((cycles_t) tsc_khz*10*1000)

diff --git a/arch/x86/kernel/amd_gart_64.c b/arch/x86/kernel/amd_gart_64.c
index 815dd63f49d0..cc0e8bc0ea3f 100644
--- a/arch/x86/kernel/amd_gart_64.c
+++ b/arch/x86/kernel/amd_gart_64.c
@@ -704,6 +704,7 @@ static const struct dma_map_ops gart_dma_ops = {
.alloc = gart_alloc_coherent,
.free = gart_free_coherent,
.mapping_error = gart_mapping_error,
+ .dma_supported = x86_dma_supported,
};

static void gart_iommu_shutdown(void)
diff --git a/arch/x86/kernel/pci-calgary_64.c b/arch/x86/kernel/pci-calgary_64.c
index e75b490f2b0b..5286a4a92cf7 100644
--- a/arch/x86/kernel/pci-calgary_64.c
+++ b/arch/x86/kernel/pci-calgary_64.c
@@ -493,6 +493,7 @@ static const struct dma_map_ops calgary_dma_ops = {
.map_page = calgary_map_page,
.unmap_page = calgary_unmap_page,
.mapping_error = calgary_mapping_error,
+ .dma_supported = x86_dma_supported,
};

static inline void __iomem * busno_to_bbar(unsigned char num)
diff --git a/arch/x86/kernel/pci-dma.c b/arch/x86/kernel/pci-dma.c
index 3a216ec869cd..b6f5684be3b5 100644
--- a/arch/x86/kernel/pci-dma.c
+++ b/arch/x86/kernel/pci-dma.c
@@ -213,10 +213,8 @@ static __init int iommu_setup(char *p)
}
early_param("iommu", iommu_setup);

-int dma_supported(struct device *dev, u64 mask)
+int x86_dma_supported(struct device *dev, u64 mask)
{
- const struct dma_map_ops *ops = get_dma_ops(dev);
-
#ifdef CONFIG_PCI
if (mask > 0xffffffff && forbid_dac > 0) {
dev_info(dev, "PCI: Disallowing DAC for device\n");
@@ -224,9 +222,6 @@ int dma_supported(struct device *dev, u64 mask)
}
#endif

- if (ops->dma_supported)
- return ops->dma_supported(dev, mask);
-
/* Copied from i386. Doesn't make much sense, because it will
only work for pci_alloc_coherent.
The caller just has to use GFP_DMA in this case. */
diff --git a/arch/x86/kernel/pci-nommu.c b/arch/x86/kernel/pci-nommu.c
index 085fe6ce4049..a6d404087fe3 100644
--- a/arch/x86/kernel/pci-nommu.c
+++ b/arch/x86/kernel/pci-nommu.c
@@ -104,4 +104,5 @@ const struct dma_map_ops nommu_dma_ops = {
.sync_sg_for_device = nommu_sync_sg_for_device,
.is_phys = 1,
.mapping_error = nommu_mapping_error,
+ .dma_supported = x86_dma_supported,
};
diff --git a/arch/x86/pci/sta2x11-fixup.c b/arch/x86/pci/sta2x11-fixup.c
index ec008e800b45..53d600217973 100644
--- a/arch/x86/pci/sta2x11-fixup.c
+++ b/arch/x86/pci/sta2x11-fixup.c
@@ -26,6 +26,7 @@
#include <linux/pci_ids.h>
#include <linux/export.h>
#include <linux/list.h>
+#include <asm/iommu.h>

#define STA2X11_SWIOTLB_SIZE (4*1024*1024)
extern int swiotlb_late_init_with_default_size(size_t default_size);
@@ -191,7 +192,7 @@ static const struct dma_map_ops sta2x11_dma_ops = {
.sync_sg_for_cpu = swiotlb_sync_sg_for_cpu,
.sync_sg_for_device = swiotlb_sync_sg_for_device,
.mapping_error = swiotlb_dma_mapping_error,
- .dma_supported = NULL, /* FIXME: we should use this instead! */
+ .dma_supported = x86_dma_supported,
};

/* At setup time, we use our own ops if the device is a ConneXt one */
diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index d41280e869de..521fdf2d41bc 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -2731,6 +2731,8 @@ static void free_coherent(struct device *dev, size_t size,
*/
static int amd_iommu_dma_supported(struct device *dev, u64 mask)
{
+ if (!x86_dma_supported(dev, mask))
+ return 0;
return check_device(dev);
}

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index fc2765ccdb57..53cc0a393f04 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -3981,6 +3981,9 @@ struct dma_map_ops intel_dma_ops = {
.map_page = intel_map_page,
.unmap_page = intel_unmap_page,
.mapping_error = intel_mapping_error,
+#ifdef CONFIG_X86
+ .dma_supported = x86_dma_supported,
+#endif
};

static inline int iommu_domain_cache_init(void)
--
2.11.0
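
With the arch override gone, every mask check funnels through the generic
dma_supported() helper, which behaves roughly like the following (a
condensed sketch, not quoted verbatim from the tree):

static inline int dma_supported(struct device *dev, u64 mask)
{
        const struct dma_map_ops *ops = get_dma_ops(dev);

        if (!ops)
                return 0;
        if (!ops->dma_supported)
                return 1;       /* no method means no address restrictions */
        return ops->dma_supported(dev, mask);
}

so wiring x86_dma_supported into each ops instance preserves the old
forbid_dac check without needing the HAVE_ARCH_DMA_SUPPORTED override.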

2017-06-08 13:32:20

by Christoph Hellwig

Subject: [PATCH 32/44] hexagon: remove the unused dma_is_consistent prototype

Signed-off-by: Christoph Hellwig <[email protected]>
---
arch/hexagon/include/asm/dma-mapping.h | 1 -
1 file changed, 1 deletion(-)

diff --git a/arch/hexagon/include/asm/dma-mapping.h b/arch/hexagon/include/asm/dma-mapping.h
index 9c15cb5271a6..463dbc18f853 100644
--- a/arch/hexagon/include/asm/dma-mapping.h
+++ b/arch/hexagon/include/asm/dma-mapping.h
@@ -37,7 +37,6 @@ static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
return dma_ops;
}

-extern int dma_is_consistent(struct device *dev, dma_addr_t dma_handle);
extern void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
enum dma_data_direction direction);

--
2.11.0

2017-06-08 13:28:35

by Christoph Hellwig

Subject: [PATCH 33/44] openrisc: remove arch-specific dma_supported implementation

This implementation is simply bogus - openrisc only has a simple
direct mapped DMA implementation and thus doesn't care about the
address, yet the removed code rejected every mask except exactly
DMA_BIT_MASK(32).

Signed-off-by: Christoph Hellwig <[email protected]>
---
arch/openrisc/include/asm/dma-mapping.h | 7 -------
1 file changed, 7 deletions(-)

diff --git a/arch/openrisc/include/asm/dma-mapping.h b/arch/openrisc/include/asm/dma-mapping.h
index a4ea139c2ef9..f41bd3cb76d9 100644
--- a/arch/openrisc/include/asm/dma-mapping.h
+++ b/arch/openrisc/include/asm/dma-mapping.h
@@ -33,11 +33,4 @@ static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
return &or1k_dma_map_ops;
}

-#define HAVE_ARCH_DMA_SUPPORTED 1
-static inline int dma_supported(struct device *dev, u64 dma_mask)
-{
- /* Support 32 bit DMA mask exclusively */
- return dma_mask == DMA_BIT_MASK(32);
-}
-
#endif /* __ASM_OPENRISC_DMA_MAPPING_H */
--
2.11.0

2017-06-08 13:33:07

by Christoph Hellwig

Subject: [PATCH 34/44] arm: remove arch specific dma_supported implementation

And instead wire it up as a method for all the dma_map_ops instances.

Note that the code seems a little fishy for dmabounce and iommu, but
for now I'd like to preserve the existing behavior 1:1.

Signed-off-by: Christoph Hellwig <[email protected]>
---
arch/arm/common/dmabounce.c | 1 +
arch/arm/include/asm/dma-iommu.h | 2 ++
arch/arm/include/asm/dma-mapping.h | 3 ---
arch/arm/mm/dma-mapping.c | 7 +++++--
4 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/arch/arm/common/dmabounce.c b/arch/arm/common/dmabounce.c
index bad457395ff1..4aabf117e136 100644
--- a/arch/arm/common/dmabounce.c
+++ b/arch/arm/common/dmabounce.c
@@ -476,6 +476,7 @@ static const struct dma_map_ops dmabounce_ops = {
.sync_sg_for_device = arm_dma_sync_sg_for_device,
.set_dma_mask = dmabounce_set_mask,
.mapping_error = dmabounce_mapping_error,
+ .dma_supported = arm_dma_supported,
};

static int dmabounce_init_pool(struct dmabounce_pool *pool, struct device *dev,
diff --git a/arch/arm/include/asm/dma-iommu.h b/arch/arm/include/asm/dma-iommu.h
index 389a26a10ea3..c090ec675eac 100644
--- a/arch/arm/include/asm/dma-iommu.h
+++ b/arch/arm/include/asm/dma-iommu.h
@@ -35,5 +35,7 @@ int arm_iommu_attach_device(struct device *dev,
struct dma_iommu_mapping *mapping);
void arm_iommu_detach_device(struct device *dev);

+int arm_dma_supported(struct device *dev, u64 mask);
+
#endif /* __KERNEL__ */
#endif
diff --git a/arch/arm/include/asm/dma-mapping.h b/arch/arm/include/asm/dma-mapping.h
index 52a8fd5a8edb..8dabcfdf4505 100644
--- a/arch/arm/include/asm/dma-mapping.h
+++ b/arch/arm/include/asm/dma-mapping.h
@@ -20,9 +20,6 @@ static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
return &arm_dma_ops;
}

-#define HAVE_ARCH_DMA_SUPPORTED 1
-extern int dma_supported(struct device *dev, u64 mask);
-
#ifdef __arch_page_to_dma
#error Please update to __arch_pfn_to_dma
#endif
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 2dbc94b5fe5c..2938b724826e 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -199,6 +199,7 @@ const struct dma_map_ops arm_dma_ops = {
.sync_sg_for_cpu = arm_dma_sync_sg_for_cpu,
.sync_sg_for_device = arm_dma_sync_sg_for_device,
.mapping_error = arm_dma_mapping_error,
+ .dma_supported = arm_dma_supported,
};
EXPORT_SYMBOL(arm_dma_ops);

@@ -218,6 +219,7 @@ const struct dma_map_ops arm_coherent_dma_ops = {
.map_page = arm_coherent_dma_map_page,
.map_sg = arm_dma_map_sg,
.mapping_error = arm_dma_mapping_error,
+ .dma_supported = arm_dma_supported,
};
EXPORT_SYMBOL(arm_coherent_dma_ops);

@@ -1184,11 +1186,10 @@ void arm_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
* during bus mastering, then you would pass 0x00ffffff as the mask
* to this function.
*/
-int dma_supported(struct device *dev, u64 mask)
+int arm_dma_supported(struct device *dev, u64 mask)
{
return __dma_supported(dev, mask, false);
}
-EXPORT_SYMBOL(dma_supported);

#define PREALLOC_DMA_DEBUG_ENTRIES 4096

@@ -2149,6 +2150,7 @@ const struct dma_map_ops iommu_ops = {
.unmap_resource = arm_iommu_unmap_resource,

.mapping_error = arm_dma_mapping_error,
+ .dma_supported = arm_dma_supported,
};

const struct dma_map_ops iommu_coherent_ops = {
@@ -2167,6 +2169,7 @@ const struct dma_map_ops iommu_coherent_ops = {
.unmap_resource = arm_iommu_unmap_resource,

.mapping_error = arm_dma_mapping_error,
+ .dma_supported = arm_dma_supported,
};

/**
--
2.11.0

2017-06-08 13:28:29

by Christoph Hellwig

Subject: [PATCH 30/44] dma-virt: remove dma_supported and mapping_error methods

These just duplicate the default behavior if no method is provided.

Signed-off-by: Christoph Hellwig <[email protected]>
---
lib/dma-virt.c | 12 ------------
1 file changed, 12 deletions(-)

diff --git a/lib/dma-virt.c b/lib/dma-virt.c
index dcd4df1f7174..5c4f11329721 100644
--- a/lib/dma-virt.c
+++ b/lib/dma-virt.c
@@ -51,22 +51,10 @@ static int dma_virt_map_sg(struct device *dev, struct scatterlist *sgl,
return nents;
}

-static int dma_virt_mapping_error(struct device *dev, dma_addr_t dma_addr)
-{
- return false;
-}
-
-static int dma_virt_supported(struct device *dev, u64 mask)
-{
- return true;
-}
-
const struct dma_map_ops dma_virt_ops = {
.alloc = dma_virt_alloc,
.free = dma_virt_free,
.map_page = dma_virt_map_page,
.map_sg = dma_virt_map_sg,
- .mapping_error = dma_virt_mapping_error,
- .dma_supported = dma_virt_supported,
};
EXPORT_SYMBOL(dma_virt_ops);
--
2.11.0
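
As with ->dma_supported, the generic dma_mapping_error() only consults the
method when one is set; stripped down it is just (matching the shape left
behind by the DMA_ERROR_CODE removal patch later in this series):

static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
{
        if (get_dma_ops(dev)->mapping_error)
                return get_dma_ops(dev)->mapping_error(dev, dma_addr);
        return 0;       /* no method, no way to signal failure */
}

so a method that unconditionally returns false merely restates the default,
and the same goes for a ->dma_supported that unconditionally returns true.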

2017-06-08 13:28:27

by Christoph Hellwig

Subject: [PATCH 26/44] dma-mapping: remove DMA_ERROR_CODE

And update the documentation - dma_mapping_error has been supported
everywhere for a long time.

Signed-off-by: Christoph Hellwig <[email protected]>
---
Documentation/DMA-API-HOWTO.txt | 31 +++++--------------------------
include/linux/dma-mapping.h | 5 -----
2 files changed, 5 insertions(+), 31 deletions(-)

diff --git a/Documentation/DMA-API-HOWTO.txt b/Documentation/DMA-API-HOWTO.txt
index 979228bc9035..4ed388356898 100644
--- a/Documentation/DMA-API-HOWTO.txt
+++ b/Documentation/DMA-API-HOWTO.txt
@@ -550,32 +550,11 @@ and to unmap it:
dma_unmap_single(dev, dma_handle, size, direction);

You should call dma_mapping_error() as dma_map_single() could fail and return
-error. Not all DMA implementations support the dma_mapping_error() interface.
-However, it is a good practice to call dma_mapping_error() interface, which
-will invoke the generic mapping error check interface. Doing so will ensure
-that the mapping code will work correctly on all DMA implementations without
-any dependency on the specifics of the underlying implementation. Using the
-returned address without checking for errors could result in failures ranging
-from panics to silent data corruption. A couple of examples of incorrect ways
-to check for errors that make assumptions about the underlying DMA
-implementation are as follows and these are applicable to dma_map_page() as
-well.
-
-Incorrect example 1:
- dma_addr_t dma_handle;
-
- dma_handle = dma_map_single(dev, addr, size, direction);
- if ((dma_handle & 0xffff != 0) || (dma_handle >= 0x1000000)) {
- goto map_error;
- }
-
-Incorrect example 2:
- dma_addr_t dma_handle;
-
- dma_handle = dma_map_single(dev, addr, size, direction);
- if (dma_handle == DMA_ERROR_CODE) {
- goto map_error;
- }
+error. Doing so will ensure that the mapping code will work correctly on all
+DMA implementations without any dependency on the specifics of the underlying
+implementation. Using the returned address without checking for errors could
+result in failures ranging from panics to silent data corruption. The same
+applies to dma_map_page() as well.

You should call dma_unmap_single() when the DMA activity is finished, e.g.,
from the interrupt which told you that the DMA transfer is done.
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 4f3eecedca2d..a57875309bfd 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -546,12 +546,7 @@ static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)

if (get_dma_ops(dev)->mapping_error)
return get_dma_ops(dev)->mapping_error(dev, dma_addr);
-
-#ifdef DMA_ERROR_CODE
- return dma_addr == DMA_ERROR_CODE;
-#else
return 0;
-#endif
}

#ifndef HAVE_ARCH_DMA_SUPPORTED
--
2.11.0
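
In driver terms the documented pattern is short; a minimal sketch, where
addr, size and direction stand in for whatever the driver actually maps:

dma_addr_t dma_handle;

dma_handle = dma_map_single(dev, addr, size, direction);
if (dma_mapping_error(dev, dma_handle))
        goto map_error_handling;

/* ... perform the DMA ... */

dma_unmap_single(dev, dma_handle, size, direction);

No comparison against a magic constant appears anywhere, which is what
keeps the code portable across DMA implementations.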

2017-06-08 13:34:10

by Christoph Hellwig

Subject: [PATCH 29/44] dma-noop: remove dma_supported and mapping_error methods

These just duplicate the default behavior if no method is provided.

Signed-off-by: Christoph Hellwig <[email protected]>
---
lib/dma-noop.c | 12 ------------
1 file changed, 12 deletions(-)

diff --git a/lib/dma-noop.c b/lib/dma-noop.c
index de26c8b68f34..643a074f139d 100644
--- a/lib/dma-noop.c
+++ b/lib/dma-noop.c
@@ -54,23 +54,11 @@ static int dma_noop_map_sg(struct device *dev, struct scatterlist *sgl, int nent
return nents;
}

-static int dma_noop_mapping_error(struct device *dev, dma_addr_t dma_addr)
-{
- return 0;
-}
-
-static int dma_noop_supported(struct device *dev, u64 mask)
-{
- return 1;
-}
-
const struct dma_map_ops dma_noop_ops = {
.alloc = dma_noop_alloc,
.free = dma_noop_free,
.map_page = dma_noop_map_page,
.map_sg = dma_noop_map_sg,
- .mapping_error = dma_noop_mapping_error,
- .dma_supported = dma_noop_supported,
};

EXPORT_SYMBOL(dma_noop_ops);
--
2.11.0

2017-06-08 13:28:12

by Christoph Hellwig

Subject: [PATCH 27/44] sparc: remove leon_dma_ops

We can just use pci32_dma_ops.

Btw, given that leon is 32-bit and appears to be PCI based, do we even
need the special case for it in get_arch_dma_ops at all?

Signed-off-by: Christoph Hellwig <[email protected]>
---
arch/sparc/include/asm/dma-mapping.h | 3 +--
arch/sparc/kernel/ioport.c | 5 +----
2 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/arch/sparc/include/asm/dma-mapping.h b/arch/sparc/include/asm/dma-mapping.h
index b8e8dfcd065d..98da9f92c318 100644
--- a/arch/sparc/include/asm/dma-mapping.h
+++ b/arch/sparc/include/asm/dma-mapping.h
@@ -17,7 +17,6 @@ static inline void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
}

extern const struct dma_map_ops *dma_ops;
-extern const struct dma_map_ops *leon_dma_ops;
extern const struct dma_map_ops pci32_dma_ops;

extern struct bus_type pci_bus_type;
@@ -26,7 +25,7 @@ static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
{
#ifdef CONFIG_SPARC_LEON
if (sparc_cpu_model == sparc_leon)
- return leon_dma_ops;
+ return &pci32_dma_ops;
#endif
#if defined(CONFIG_SPARC32) && defined(CONFIG_PCI)
if (bus == &pci_bus_type)
diff --git a/arch/sparc/kernel/ioport.c b/arch/sparc/kernel/ioport.c
index cf20033a1458..dd081d557609 100644
--- a/arch/sparc/kernel/ioport.c
+++ b/arch/sparc/kernel/ioport.c
@@ -637,6 +637,7 @@ static void pci32_sync_sg_for_device(struct device *device, struct scatterlist *
}
}

+/* note: leon re-uses pci32_dma_ops */
const struct dma_map_ops pci32_dma_ops = {
.alloc = pci32_alloc_coherent,
.free = pci32_free_coherent,
@@ -651,10 +652,6 @@ const struct dma_map_ops pci32_dma_ops = {
};
EXPORT_SYMBOL(pci32_dma_ops);

-/* leon re-uses pci32_dma_ops */
-const struct dma_map_ops *leon_dma_ops = &pci32_dma_ops;
-EXPORT_SYMBOL(leon_dma_ops);
-
const struct dma_map_ops *dma_ops = &sbus_dma_ops;
EXPORT_SYMBOL(dma_ops);

--
2.11.0

2017-06-08 13:34:49

by Christoph Hellwig

Subject: [PATCH 25/44] arm: implement ->mapping_error

DMA_ERROR_CODE is going to go away, so don't rely on it.

Signed-off-by: Christoph Hellwig <[email protected]>
---
arch/arm/common/dmabounce.c | 16 ++++++++++++---
arch/arm/include/asm/dma-iommu.h | 2 ++
arch/arm/include/asm/dma-mapping.h | 1 -
arch/arm/mm/dma-mapping.c | 41 ++++++++++++++++++++++++--------------
4 files changed, 41 insertions(+), 19 deletions(-)

diff --git a/arch/arm/common/dmabounce.c b/arch/arm/common/dmabounce.c
index 9b1b7be2ec0e..bad457395ff1 100644
--- a/arch/arm/common/dmabounce.c
+++ b/arch/arm/common/dmabounce.c
@@ -33,6 +33,7 @@
#include <linux/scatterlist.h>

#include <asm/cacheflush.h>
+#include <asm/dma-iommu.h>

#undef STATS

@@ -256,7 +257,7 @@ static inline dma_addr_t map_single(struct device *dev, void *ptr, size_t size,
if (buf == NULL) {
dev_err(dev, "%s: unable to map unsafe buffer %p!\n",
__func__, ptr);
- return DMA_ERROR_CODE;
+ return ARM_MAPPING_ERROR;
}

dev_dbg(dev, "%s: unsafe buffer %p (dma=%#x) mapped to %p (dma=%#x)\n",
@@ -326,7 +327,7 @@ static dma_addr_t dmabounce_map_page(struct device *dev, struct page *page,

ret = needs_bounce(dev, dma_addr, size);
if (ret < 0)
- return DMA_ERROR_CODE;
+ return ARM_MAPPING_ERROR;

if (ret == 0) {
arm_dma_ops.sync_single_for_device(dev, dma_addr, size, dir);
@@ -335,7 +336,7 @@ static dma_addr_t dmabounce_map_page(struct device *dev, struct page *page,

if (PageHighMem(page)) {
dev_err(dev, "DMA buffer bouncing of HIGHMEM pages is not supported\n");
- return DMA_ERROR_CODE;
+ return ARM_MAPPING_ERROR;
}

return map_single(dev, page_address(page) + offset, size, dir, attrs);
@@ -452,6 +453,14 @@ static int dmabounce_set_mask(struct device *dev, u64 dma_mask)
return arm_dma_ops.set_dma_mask(dev, dma_mask);
}

+static int dmabounce_mapping_error(struct device *dev, dma_addr_t dma_addr)
+{
+ if (dev->archdata.dmabounce)
+ return 0;
+
+ return arm_dma_ops.mapping_error(dev, dma_addr);
+}
+
static const struct dma_map_ops dmabounce_ops = {
.alloc = arm_dma_alloc,
.free = arm_dma_free,
@@ -466,6 +475,7 @@ static const struct dma_map_ops dmabounce_ops = {
.sync_sg_for_cpu = arm_dma_sync_sg_for_cpu,
.sync_sg_for_device = arm_dma_sync_sg_for_device,
.set_dma_mask = dmabounce_set_mask,
+ .mapping_error = dmabounce_mapping_error,
};

static int dmabounce_init_pool(struct dmabounce_pool *pool, struct device *dev,
diff --git a/arch/arm/include/asm/dma-iommu.h b/arch/arm/include/asm/dma-iommu.h
index 2ef282f96651..389a26a10ea3 100644
--- a/arch/arm/include/asm/dma-iommu.h
+++ b/arch/arm/include/asm/dma-iommu.h
@@ -9,6 +9,8 @@
#include <linux/kmemcheck.h>
#include <linux/kref.h>

+#define ARM_MAPPING_ERROR (~(dma_addr_t)0x0)
+
struct dma_iommu_mapping {
/* iommu specific data */
struct iommu_domain *domain;
diff --git a/arch/arm/include/asm/dma-mapping.h b/arch/arm/include/asm/dma-mapping.h
index 680d3f3889e7..52a8fd5a8edb 100644
--- a/arch/arm/include/asm/dma-mapping.h
+++ b/arch/arm/include/asm/dma-mapping.h
@@ -12,7 +12,6 @@
#include <xen/xen.h>
#include <asm/xen/hypervisor.h>

-#define DMA_ERROR_CODE (~(dma_addr_t)0x0)
extern const struct dma_map_ops arm_dma_ops;
extern const struct dma_map_ops arm_coherent_dma_ops;

diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index c742dfd2967b..2dbc94b5fe5c 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -180,6 +180,11 @@ static void arm_dma_sync_single_for_device(struct device *dev,
__dma_page_cpu_to_dev(page, offset, size, dir);
}

+static int arm_dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
+{
+ return dma_addr == ARM_MAPPING_ERROR;
+}
+
const struct dma_map_ops arm_dma_ops = {
.alloc = arm_dma_alloc,
.free = arm_dma_free,
@@ -193,6 +198,7 @@ const struct dma_map_ops arm_dma_ops = {
.sync_single_for_device = arm_dma_sync_single_for_device,
.sync_sg_for_cpu = arm_dma_sync_sg_for_cpu,
.sync_sg_for_device = arm_dma_sync_sg_for_device,
+ .mapping_error = arm_dma_mapping_error,
};
EXPORT_SYMBOL(arm_dma_ops);

@@ -211,6 +217,7 @@ const struct dma_map_ops arm_coherent_dma_ops = {
.get_sgtable = arm_dma_get_sgtable,
.map_page = arm_coherent_dma_map_page,
.map_sg = arm_dma_map_sg,
+ .mapping_error = arm_dma_mapping_error,
};
EXPORT_SYMBOL(arm_coherent_dma_ops);

@@ -799,7 +806,7 @@ static void *__dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
gfp &= ~(__GFP_COMP);
args.gfp = gfp;

- *handle = DMA_ERROR_CODE;
+ *handle = ARM_MAPPING_ERROR;
allowblock = gfpflags_allow_blocking(gfp);
cma = allowblock ? dev_get_cma_area(dev) : false;

@@ -1254,7 +1261,7 @@ static inline dma_addr_t __alloc_iova(struct dma_iommu_mapping *mapping,
if (i == mapping->nr_bitmaps) {
if (extend_iommu_mapping(mapping)) {
spin_unlock_irqrestore(&mapping->lock, flags);
- return DMA_ERROR_CODE;
+ return ARM_MAPPING_ERROR;
}

start = bitmap_find_next_zero_area(mapping->bitmaps[i],
@@ -1262,7 +1269,7 @@ static inline dma_addr_t __alloc_iova(struct dma_iommu_mapping *mapping,

if (start > mapping->bits) {
spin_unlock_irqrestore(&mapping->lock, flags);
- return DMA_ERROR_CODE;
+ return ARM_MAPPING_ERROR;
}

bitmap_set(mapping->bitmaps[i], start, count);
@@ -1445,7 +1452,7 @@ __iommu_create_mapping(struct device *dev, struct page **pages, size_t size,
int i;

dma_addr = __alloc_iova(mapping, size);
- if (dma_addr == DMA_ERROR_CODE)
+ if (dma_addr == ARM_MAPPING_ERROR)
return dma_addr;

iova = dma_addr;
@@ -1472,7 +1479,7 @@ __iommu_create_mapping(struct device *dev, struct page **pages, size_t size,
fail:
iommu_unmap(mapping->domain, dma_addr, iova-dma_addr);
__free_iova(mapping, dma_addr, size);
- return DMA_ERROR_CODE;
+ return ARM_MAPPING_ERROR;
}

static int __iommu_remove_mapping(struct device *dev, dma_addr_t iova, size_t size)
@@ -1533,7 +1540,7 @@ static void *__iommu_alloc_simple(struct device *dev, size_t size, gfp_t gfp,
return NULL;

*handle = __iommu_create_mapping(dev, &page, size, attrs);
- if (*handle == DMA_ERROR_CODE)
+ if (*handle == ARM_MAPPING_ERROR)
goto err_mapping;

return addr;
@@ -1561,7 +1568,7 @@ static void *__arm_iommu_alloc_attrs(struct device *dev, size_t size,
struct page **pages;
void *addr = NULL;

- *handle = DMA_ERROR_CODE;
+ *handle = ARM_MAPPING_ERROR;
size = PAGE_ALIGN(size);

if (coherent_flag == COHERENT || !gfpflags_allow_blocking(gfp))
@@ -1582,7 +1589,7 @@ static void *__arm_iommu_alloc_attrs(struct device *dev, size_t size,
return NULL;

*handle = __iommu_create_mapping(dev, pages, size, attrs);
- if (*handle == DMA_ERROR_CODE)
+ if (*handle == ARM_MAPPING_ERROR)
goto err_buffer;

if (attrs & DMA_ATTR_NO_KERNEL_MAPPING)
@@ -1732,10 +1739,10 @@ static int __map_sg_chunk(struct device *dev, struct scatterlist *sg,
int prot;

size = PAGE_ALIGN(size);
- *handle = DMA_ERROR_CODE;
+ *handle = ARM_MAPPING_ERROR;

iova_base = iova = __alloc_iova(mapping, size);
- if (iova == DMA_ERROR_CODE)
+ if (iova == ARM_MAPPING_ERROR)
return -ENOMEM;

for (count = 0, s = sg; count < (size >> PAGE_SHIFT); s = sg_next(s)) {
@@ -1775,7 +1782,7 @@ static int __iommu_map_sg(struct device *dev, struct scatterlist *sg, int nents,
for (i = 1; i < nents; i++) {
s = sg_next(s);

- s->dma_address = DMA_ERROR_CODE;
+ s->dma_address = ARM_MAPPING_ERROR;
s->dma_length = 0;

if (s->offset || (size & ~PAGE_MASK) || size + s->length > max) {
@@ -1950,7 +1957,7 @@ static dma_addr_t arm_coherent_iommu_map_page(struct device *dev, struct page *p
int ret, prot, len = PAGE_ALIGN(size + offset);

dma_addr = __alloc_iova(mapping, len);
- if (dma_addr == DMA_ERROR_CODE)
+ if (dma_addr == ARM_MAPPING_ERROR)
return dma_addr;

prot = __dma_info_to_prot(dir, attrs);
@@ -1962,7 +1969,7 @@ static dma_addr_t arm_coherent_iommu_map_page(struct device *dev, struct page *p
return dma_addr + offset;
fail:
__free_iova(mapping, dma_addr, len);
- return DMA_ERROR_CODE;
+ return ARM_MAPPING_ERROR;
}

/**
@@ -2056,7 +2063,7 @@ static dma_addr_t arm_iommu_map_resource(struct device *dev,
size_t len = PAGE_ALIGN(size + offset);

dma_addr = __alloc_iova(mapping, len);
- if (dma_addr == DMA_ERROR_CODE)
+ if (dma_addr == ARM_MAPPING_ERROR)
return dma_addr;

prot = __dma_info_to_prot(dir, attrs) | IOMMU_MMIO;
@@ -2068,7 +2075,7 @@ static dma_addr_t arm_iommu_map_resource(struct device *dev,
return dma_addr + offset;
fail:
__free_iova(mapping, dma_addr, len);
- return DMA_ERROR_CODE;
+ return ARM_MAPPING_ERROR;
}

/**
@@ -2140,6 +2147,8 @@ const struct dma_map_ops iommu_ops = {

.map_resource = arm_iommu_map_resource,
.unmap_resource = arm_iommu_unmap_resource,
+
+ .mapping_error = arm_dma_mapping_error,
};

const struct dma_map_ops iommu_coherent_ops = {
@@ -2156,6 +2165,8 @@ const struct dma_map_ops iommu_coherent_ops = {

.map_resource = arm_iommu_map_resource,
.unmap_resource = arm_iommu_unmap_resource,
+
+ .mapping_error = arm_dma_mapping_error,
};

/**
--
2.11.0
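
The shape used here recurs for most conversions in this series: each ops
instance keeps a private sentinel value and exposes only the comparison
through ->mapping_error. Schematically, with hypothetical foo_* names:

#define FOO_MAPPING_ERROR       (~(dma_addr_t)0)

static dma_addr_t foo_map_page(struct device *dev, struct page *page,
                unsigned long offset, size_t size,
                enum dma_data_direction dir, unsigned long attrs)
{
        dma_addr_t addr;

        /* foo_hw_map() stands in for the real mapping work */
        if (foo_hw_map(dev, page, offset, size, &addr))
                return FOO_MAPPING_ERROR;       /* sentinel stays private */
        return addr;
}

static int foo_mapping_error(struct device *dev, dma_addr_t dma_addr)
{
        return dma_addr == FOO_MAPPING_ERROR;
}

Because the sentinel never escapes past dma_mapping_error(), no
cross-architecture DMA_ERROR_CODE contract is needed anymore.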

2017-06-08 13:28:01

by Christoph Hellwig

Subject: [PATCH 24/44] x86: remove DMA_ERROR_CODE

All dma_map_ops instances now handle their errors through
->mapping_error.

Signed-off-by: Christoph Hellwig <[email protected]>
---
arch/x86/include/asm/dma-mapping.h | 2 --
1 file changed, 2 deletions(-)

diff --git a/arch/x86/include/asm/dma-mapping.h b/arch/x86/include/asm/dma-mapping.h
index 08a0838b83fb..c35d228aa381 100644
--- a/arch/x86/include/asm/dma-mapping.h
+++ b/arch/x86/include/asm/dma-mapping.h
@@ -19,8 +19,6 @@
# define ISA_DMA_BIT_MASK DMA_BIT_MASK(32)
#endif

-#define DMA_ERROR_CODE 0
-
extern int iommu_merge;
extern struct device x86_dma_fallback_dev;
extern int panic_on_overflow;
--
2.11.0

2017-06-08 13:36:41

by Christoph Hellwig

Subject: [PATCH 22/44] x86/pci-nommu: implement ->mapping_error

DMA_ERROR_CODE is going to go away, so don't rely on it.

Signed-off-by: Christoph Hellwig <[email protected]>
---
arch/x86/kernel/pci-nommu.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/pci-nommu.c b/arch/x86/kernel/pci-nommu.c
index a88952ef371c..085fe6ce4049 100644
--- a/arch/x86/kernel/pci-nommu.c
+++ b/arch/x86/kernel/pci-nommu.c
@@ -11,6 +11,8 @@
#include <asm/iommu.h>
#include <asm/dma.h>

+#define NOMMU_MAPPING_ERROR 0
+
static int
check_addr(char *name, struct device *hwdev, dma_addr_t bus, size_t size)
{
@@ -33,7 +35,7 @@ static dma_addr_t nommu_map_page(struct device *dev, struct page *page,
dma_addr_t bus = page_to_phys(page) + offset;
WARN_ON(size == 0);
if (!check_addr("map_single", dev, bus, size))
- return DMA_ERROR_CODE;
+ return NOMMU_MAPPING_ERROR;
flush_write_buffers();
return bus;
}
@@ -88,6 +90,11 @@ static void nommu_sync_sg_for_device(struct device *dev,
flush_write_buffers();
}

+static int nommu_mapping_error(struct device *dev, dma_addr_t dma_addr)
+{
+ return dma_addr == NOMMU_MAPPING_ERROR;
+}
+
const struct dma_map_ops nommu_dma_ops = {
.alloc = dma_generic_alloc_coherent,
.free = dma_generic_free_coherent,
@@ -96,4 +103,5 @@ const struct dma_map_ops nommu_dma_ops = {
.sync_single_for_device = nommu_sync_single_for_device,
.sync_sg_for_device = nommu_sync_sg_for_device,
.is_phys = 1,
+ .mapping_error = nommu_mapping_error,
};
--
2.11.0

2017-06-08 13:27:55

by Christoph Hellwig

Subject: [PATCH 23/44] x86/calgary: implement ->mapping_error

DMA_ERROR_CODE is going to go away, so don't rely on it.

Signed-off-by: Christoph Hellwig <[email protected]>
---
arch/x86/kernel/pci-calgary_64.c | 24 ++++++++++++++++--------
1 file changed, 16 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kernel/pci-calgary_64.c b/arch/x86/kernel/pci-calgary_64.c
index fda7867046d0..e75b490f2b0b 100644
--- a/arch/x86/kernel/pci-calgary_64.c
+++ b/arch/x86/kernel/pci-calgary_64.c
@@ -50,6 +50,8 @@
#include <asm/x86_init.h>
#include <asm/iommu_table.h>

+#define CALGARY_MAPPING_ERROR 0
+
#ifdef CONFIG_CALGARY_IOMMU_ENABLED_BY_DEFAULT
int use_calgary __read_mostly = 1;
#else
@@ -252,7 +254,7 @@ static unsigned long iommu_range_alloc(struct device *dev,
if (panic_on_overflow)
panic("Calgary: fix the allocator.\n");
else
- return DMA_ERROR_CODE;
+ return CALGARY_MAPPING_ERROR;
}
}

@@ -272,10 +274,10 @@ static dma_addr_t iommu_alloc(struct device *dev, struct iommu_table *tbl,

entry = iommu_range_alloc(dev, tbl, npages);

- if (unlikely(entry == DMA_ERROR_CODE)) {
+ if (unlikely(entry == CALGARY_MAPPING_ERROR)) {
pr_warn("failed to allocate %u pages in iommu %p\n",
npages, tbl);
- return DMA_ERROR_CODE;
+ return CALGARY_MAPPING_ERROR;
}

/* set the return dma address */
@@ -295,7 +297,7 @@ static void iommu_free(struct iommu_table *tbl, dma_addr_t dma_addr,
unsigned long flags;

/* were we called with bad_dma_address? */
- badend = DMA_ERROR_CODE + (EMERGENCY_PAGES * PAGE_SIZE);
+ badend = CALGARY_MAPPING_ERROR + (EMERGENCY_PAGES * PAGE_SIZE);
if (unlikely(dma_addr < badend)) {
WARN(1, KERN_ERR "Calgary: driver tried unmapping bad DMA "
"address 0x%Lx\n", dma_addr);
@@ -380,7 +382,7 @@ static int calgary_map_sg(struct device *dev, struct scatterlist *sg,
npages = iommu_num_pages(vaddr, s->length, PAGE_SIZE);

entry = iommu_range_alloc(dev, tbl, npages);
- if (entry == DMA_ERROR_CODE) {
+ if (entry == CALGARY_MAPPING_ERROR) {
/* makes sure unmap knows to stop */
s->dma_length = 0;
goto error;
@@ -398,7 +400,7 @@ static int calgary_map_sg(struct device *dev, struct scatterlist *sg,
error:
calgary_unmap_sg(dev, sg, nelems, dir, 0);
for_each_sg(sg, s, nelems, i) {
- sg->dma_address = DMA_ERROR_CODE;
+ sg->dma_address = CALGARY_MAPPING_ERROR;
sg->dma_length = 0;
}
return 0;
@@ -453,7 +455,7 @@ static void* calgary_alloc_coherent(struct device *dev, size_t size,

/* set up tces to cover the allocated range */
mapping = iommu_alloc(dev, tbl, ret, npages, DMA_BIDIRECTIONAL);
- if (mapping == DMA_ERROR_CODE)
+ if (mapping == CALGARY_MAPPING_ERROR)
goto free;
*dma_handle = mapping;
return ret;
@@ -478,6 +480,11 @@ static void calgary_free_coherent(struct device *dev, size_t size,
free_pages((unsigned long)vaddr, get_order(size));
}

+static int calgary_mapping_error(struct device *dev, dma_addr_t dma_addr)
+{
+ return dma_addr == CALGARY_MAPPING_ERROR;
+}
+
static const struct dma_map_ops calgary_dma_ops = {
.alloc = calgary_alloc_coherent,
.free = calgary_free_coherent,
@@ -485,6 +492,7 @@ static const struct dma_map_ops calgary_dma_ops = {
.unmap_sg = calgary_unmap_sg,
.map_page = calgary_map_page,
.unmap_page = calgary_unmap_page,
+ .mapping_error = calgary_mapping_error,
};

static inline void __iomem * busno_to_bbar(unsigned char num)
@@ -732,7 +740,7 @@ static void __init calgary_reserve_regions(struct pci_dev *dev)
struct iommu_table *tbl = pci_iommu(dev->bus);

/* reserve EMERGENCY_PAGES from bad_dma_address and up */
- iommu_range_reserve(tbl, DMA_ERROR_CODE, EMERGENCY_PAGES);
+ iommu_range_reserve(tbl, CALGARY_MAPPING_ERROR, EMERGENCY_PAGES);

/* avoid the BIOS/VGA first 640KB-1MB region */
/* for CalIOC2 - avoid the entire first MB */
--
2.11.0

2017-06-08 13:37:42

by Christoph Hellwig

Subject: [PATCH 21/44] powerpc: implement ->mapping_error

DMA_ERROR_CODE is going to go away, so don't rely on it. Instead
define a ->mapping_error method for all IOMMU based dma operation
instances. The direct ops don't ever return an error and don't
need a ->mapping_error method.

Signed-off-by: Christoph Hellwig <[email protected]>
---
arch/powerpc/include/asm/dma-mapping.h | 4 ----
arch/powerpc/include/asm/iommu.h | 4 ++++
arch/powerpc/kernel/dma-iommu.c | 6 ++++++
arch/powerpc/kernel/iommu.c | 28 ++++++++++++++--------------
arch/powerpc/platforms/cell/iommu.c | 1 +
arch/powerpc/platforms/pseries/vio.c | 3 ++-
6 files changed, 27 insertions(+), 19 deletions(-)

diff --git a/arch/powerpc/include/asm/dma-mapping.h b/arch/powerpc/include/asm/dma-mapping.h
index 181a095468e4..73aedbe6c977 100644
--- a/arch/powerpc/include/asm/dma-mapping.h
+++ b/arch/powerpc/include/asm/dma-mapping.h
@@ -17,10 +17,6 @@
#include <asm/io.h>
#include <asm/swiotlb.h>

-#ifdef CONFIG_PPC64
-#define DMA_ERROR_CODE (~(dma_addr_t)0x0)
-#endif
-
/* Some dma direct funcs must be visible for use in other dma_ops */
extern void *__dma_direct_alloc_coherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t flag,
diff --git a/arch/powerpc/include/asm/iommu.h b/arch/powerpc/include/asm/iommu.h
index 8a8ce220d7d0..20febe0b7f32 100644
--- a/arch/powerpc/include/asm/iommu.h
+++ b/arch/powerpc/include/asm/iommu.h
@@ -139,6 +139,8 @@ struct scatterlist;

#ifdef CONFIG_PPC64

+#define IOMMU_MAPPING_ERROR (~(dma_addr_t)0x0)
+
static inline void set_iommu_table_base(struct device *dev,
struct iommu_table *base)
{
@@ -238,6 +240,8 @@ static inline int __init tce_iommu_bus_notifier_init(void)
}
#endif /* !CONFIG_IOMMU_API */

+int dma_iommu_mapping_error(struct device *dev, dma_addr_t dma_addr);
+
#else

static inline void *get_iommu_table_base(struct device *dev)
diff --git a/arch/powerpc/kernel/dma-iommu.c b/arch/powerpc/kernel/dma-iommu.c
index fb7cbaa37658..8f7abf9baa63 100644
--- a/arch/powerpc/kernel/dma-iommu.c
+++ b/arch/powerpc/kernel/dma-iommu.c
@@ -105,6 +105,11 @@ static u64 dma_iommu_get_required_mask(struct device *dev)
return mask;
}

+int dma_iommu_mapping_error(struct device *dev, dma_addr_t dma_addr)
+{
+ return dma_addr == IOMMU_MAPPING_ERROR;
+}
+
struct dma_map_ops dma_iommu_ops = {
.alloc = dma_iommu_alloc_coherent,
.free = dma_iommu_free_coherent,
@@ -115,5 +120,6 @@ struct dma_map_ops dma_iommu_ops = {
.map_page = dma_iommu_map_page,
.unmap_page = dma_iommu_unmap_page,
.get_required_mask = dma_iommu_get_required_mask,
+ .mapping_error = dma_iommu_mapping_error,
};
EXPORT_SYMBOL(dma_iommu_ops);
diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c
index f2b724cd9e64..233ca3fe4754 100644
--- a/arch/powerpc/kernel/iommu.c
+++ b/arch/powerpc/kernel/iommu.c
@@ -198,11 +198,11 @@ static unsigned long iommu_range_alloc(struct device *dev,
if (unlikely(npages == 0)) {
if (printk_ratelimit())
WARN_ON(1);
- return DMA_ERROR_CODE;
+ return IOMMU_MAPPING_ERROR;
}

if (should_fail_iommu(dev))
- return DMA_ERROR_CODE;
+ return IOMMU_MAPPING_ERROR;

/*
* We don't need to disable preemption here because any CPU can
@@ -278,7 +278,7 @@ static unsigned long iommu_range_alloc(struct device *dev,
} else {
/* Give up */
spin_unlock_irqrestore(&(pool->lock), flags);
- return DMA_ERROR_CODE;
+ return IOMMU_MAPPING_ERROR;
}
}

@@ -310,13 +310,13 @@ static dma_addr_t iommu_alloc(struct device *dev, struct iommu_table *tbl,
unsigned long attrs)
{
unsigned long entry;
- dma_addr_t ret = DMA_ERROR_CODE;
+ dma_addr_t ret = IOMMU_MAPPING_ERROR;
int build_fail;

entry = iommu_range_alloc(dev, tbl, npages, NULL, mask, align_order);

- if (unlikely(entry == DMA_ERROR_CODE))
- return DMA_ERROR_CODE;
+ if (unlikely(entry == IOMMU_MAPPING_ERROR))
+ return IOMMU_MAPPING_ERROR;

entry += tbl->it_offset; /* Offset into real TCE table */
ret = entry << tbl->it_page_shift; /* Set the return dma address */
@@ -328,12 +328,12 @@ static dma_addr_t iommu_alloc(struct device *dev, struct iommu_table *tbl,

/* tbl->it_ops->set() only returns non-zero for transient errors.
* Clean up the table bitmap in this case and return
- * DMA_ERROR_CODE. For all other errors the functionality is
+ * IOMMU_MAPPING_ERROR. For all other errors the functionality is
* not altered.
*/
if (unlikely(build_fail)) {
__iommu_free(tbl, ret, npages);
- return DMA_ERROR_CODE;
+ return IOMMU_MAPPING_ERROR;
}

/* Flush/invalidate TLB caches if necessary */
@@ -478,7 +478,7 @@ int ppc_iommu_map_sg(struct device *dev, struct iommu_table *tbl,
DBG(" - vaddr: %lx, size: %lx\n", vaddr, slen);

/* Handle failure */
- if (unlikely(entry == DMA_ERROR_CODE)) {
+ if (unlikely(entry == IOMMU_MAPPING_ERROR)) {
if (!(attrs & DMA_ATTR_NO_WARN) &&
printk_ratelimit())
dev_info(dev, "iommu_alloc failed, tbl %p "
@@ -545,7 +545,7 @@ int ppc_iommu_map_sg(struct device *dev, struct iommu_table *tbl,
*/
if (outcount < incount) {
outs = sg_next(outs);
- outs->dma_address = DMA_ERROR_CODE;
+ outs->dma_address = IOMMU_MAPPING_ERROR;
outs->dma_length = 0;
}

@@ -563,7 +563,7 @@ int ppc_iommu_map_sg(struct device *dev, struct iommu_table *tbl,
npages = iommu_num_pages(s->dma_address, s->dma_length,
IOMMU_PAGE_SIZE(tbl));
__iommu_free(tbl, vaddr, npages);
- s->dma_address = DMA_ERROR_CODE;
+ s->dma_address = IOMMU_MAPPING_ERROR;
s->dma_length = 0;
}
if (s == outs)
@@ -777,7 +777,7 @@ dma_addr_t iommu_map_page(struct device *dev, struct iommu_table *tbl,
unsigned long mask, enum dma_data_direction direction,
unsigned long attrs)
{
- dma_addr_t dma_handle = DMA_ERROR_CODE;
+ dma_addr_t dma_handle = IOMMU_MAPPING_ERROR;
void *vaddr;
unsigned long uaddr;
unsigned int npages, align;
@@ -797,7 +797,7 @@ dma_addr_t iommu_map_page(struct device *dev, struct iommu_table *tbl,
dma_handle = iommu_alloc(dev, tbl, vaddr, npages, direction,
mask >> tbl->it_page_shift, align,
attrs);
- if (dma_handle == DMA_ERROR_CODE) {
+ if (dma_handle == IOMMU_MAPPING_ERROR) {
if (!(attrs & DMA_ATTR_NO_WARN) &&
printk_ratelimit()) {
dev_info(dev, "iommu_alloc failed, tbl %p "
@@ -869,7 +869,7 @@ void *iommu_alloc_coherent(struct device *dev, struct iommu_table *tbl,
io_order = get_iommu_order(size, tbl);
mapping = iommu_alloc(dev, tbl, ret, nio_pages, DMA_BIDIRECTIONAL,
mask >> tbl->it_page_shift, io_order, 0);
- if (mapping == DMA_ERROR_CODE) {
+ if (mapping == IOMMU_MAPPING_ERROR) {
free_pages((unsigned long)ret, order);
return NULL;
}
diff --git a/arch/powerpc/platforms/cell/iommu.c b/arch/powerpc/platforms/cell/iommu.c
index 71b995bbcae0..948086e33a0c 100644
--- a/arch/powerpc/platforms/cell/iommu.c
+++ b/arch/powerpc/platforms/cell/iommu.c
@@ -660,6 +660,7 @@ static const struct dma_map_ops dma_iommu_fixed_ops = {
.set_dma_mask = dma_set_mask_and_switch,
.map_page = dma_fixed_map_page,
.unmap_page = dma_fixed_unmap_page,
+ .mapping_error = dma_iommu_mapping_error,
};

static void cell_dma_dev_setup_fixed(struct device *dev);
diff --git a/arch/powerpc/platforms/pseries/vio.c b/arch/powerpc/platforms/pseries/vio.c
index 28b09fd797ec..e6f43d546827 100644
--- a/arch/powerpc/platforms/pseries/vio.c
+++ b/arch/powerpc/platforms/pseries/vio.c
@@ -519,7 +519,7 @@ static dma_addr_t vio_dma_iommu_map_page(struct device *dev, struct page *page,
{
struct vio_dev *viodev = to_vio_dev(dev);
struct iommu_table *tbl;
- dma_addr_t ret = DMA_ERROR_CODE;
+ dma_addr_t ret = IOMMU_MAPPING_ERROR;

tbl = get_iommu_table_base(dev);
if (vio_cmo_alloc(viodev, roundup(size, IOMMU_PAGE_SIZE(tbl)))) {
@@ -625,6 +625,7 @@ static const struct dma_map_ops vio_dma_mapping_ops = {
.unmap_page = vio_dma_iommu_unmap_page,
.dma_supported = vio_dma_iommu_dma_supported,
.get_required_mask = vio_dma_get_required_mask,
+ .mapping_error = dma_iommu_mapping_error,
};

/**
--
2.11.0

2017-06-08 13:27:42

by Christoph Hellwig

Subject: [PATCH 19/44] s390: implement ->mapping_error

s390 can also use noop_dma_ops, and while that currently does not return
errors, it will do so in the future. Implementing the mapping_error method
is the proper way to have per-ops error conditions.

Signed-off-by: Christoph Hellwig <[email protected]>
---
arch/s390/include/asm/dma-mapping.h | 2 --
arch/s390/pci/pci_dma.c | 18 +++++++++++++-----
2 files changed, 13 insertions(+), 7 deletions(-)

diff --git a/arch/s390/include/asm/dma-mapping.h b/arch/s390/include/asm/dma-mapping.h
index 3108b8dbe266..512ad0eaa11a 100644
--- a/arch/s390/include/asm/dma-mapping.h
+++ b/arch/s390/include/asm/dma-mapping.h
@@ -8,8 +8,6 @@
#include <linux/dma-debug.h>
#include <linux/io.h>

-#define DMA_ERROR_CODE (~(dma_addr_t) 0x0)
-
extern const struct dma_map_ops s390_pci_dma_ops;

static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
diff --git a/arch/s390/pci/pci_dma.c b/arch/s390/pci/pci_dma.c
index 9081a57fa340..ea623faab525 100644
--- a/arch/s390/pci/pci_dma.c
+++ b/arch/s390/pci/pci_dma.c
@@ -14,6 +14,8 @@
#include <linux/pci.h>
#include <asm/pci_dma.h>

+#define S390_MAPPING_ERROR (~(dma_addr_t) 0x0)
+
static struct kmem_cache *dma_region_table_cache;
static struct kmem_cache *dma_page_table_cache;
static int s390_iommu_strict;
@@ -281,7 +283,7 @@ static dma_addr_t dma_alloc_address(struct device *dev, int size)

out_error:
spin_unlock_irqrestore(&zdev->iommu_bitmap_lock, flags);
- return DMA_ERROR_CODE;
+ return S390_MAPPING_ERROR;
}

static void dma_free_address(struct device *dev, dma_addr_t dma_addr, int size)
@@ -329,7 +331,7 @@ static dma_addr_t s390_dma_map_pages(struct device *dev, struct page *page,
/* This rounds up number of pages based on size and offset */
nr_pages = iommu_num_pages(pa, size, PAGE_SIZE);
dma_addr = dma_alloc_address(dev, nr_pages);
- if (dma_addr == DMA_ERROR_CODE) {
+ if (dma_addr == S390_MAPPING_ERROR) {
ret = -ENOSPC;
goto out_err;
}
@@ -352,7 +354,7 @@ static dma_addr_t s390_dma_map_pages(struct device *dev, struct page *page,
out_err:
zpci_err("map error:\n");
zpci_err_dma(ret, pa);
- return DMA_ERROR_CODE;
+ return S390_MAPPING_ERROR;
}

static void s390_dma_unmap_pages(struct device *dev, dma_addr_t dma_addr,
@@ -429,7 +431,7 @@ static int __s390_dma_map_sg(struct device *dev, struct scatterlist *sg,
int ret;

dma_addr_base = dma_alloc_address(dev, nr_pages);
- if (dma_addr_base == DMA_ERROR_CODE)
+ if (dma_addr_base == S390_MAPPING_ERROR)
return -ENOMEM;

dma_addr = dma_addr_base;
@@ -476,7 +478,7 @@ static int s390_dma_map_sg(struct device *dev, struct scatterlist *sg,
for (i = 1; i < nr_elements; i++) {
s = sg_next(s);

- s->dma_address = DMA_ERROR_CODE;
+ s->dma_address = S390_MAPPING_ERROR;
s->dma_length = 0;

if (s->offset || (size & ~PAGE_MASK) ||
@@ -525,6 +527,11 @@ static void s390_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
s->dma_length = 0;
}
}
+
+static int s390_mapping_error(struct device *dev, dma_addr_t dma_addr)
+{
+ return dma_addr == S390_MAPPING_ERROR;
+}

int zpci_dma_init_device(struct zpci_dev *zdev)
{
@@ -657,6 +664,7 @@ const struct dma_map_ops s390_pci_dma_ops = {
.unmap_sg = s390_dma_unmap_sg,
.map_page = s390_dma_map_pages,
.unmap_page = s390_dma_unmap_pages,
+ .mapping_error = s390_mapping_error,
/* if we support direct DMA this must be conditional */
.is_phys = 0,
/* dma_supported is unconditionally true without a callback */
--
2.11.0

2017-06-08 13:38:20

by Christoph Hellwig

Subject: [PATCH 20/44] sparc: implement ->mapping_error

DMA_ERROR_CODE is going to go away, so don't rely on it.

Signed-off-by: Christoph Hellwig <[email protected]>
---
arch/sparc/include/asm/dma-mapping.h | 2 --
arch/sparc/kernel/iommu.c | 12 +++++++++---
arch/sparc/kernel/iommu_common.h | 2 ++
arch/sparc/kernel/pci_sun4v.c | 14 ++++++++++----
4 files changed, 21 insertions(+), 9 deletions(-)

diff --git a/arch/sparc/include/asm/dma-mapping.h b/arch/sparc/include/asm/dma-mapping.h
index 69cc627779f2..b8e8dfcd065d 100644
--- a/arch/sparc/include/asm/dma-mapping.h
+++ b/arch/sparc/include/asm/dma-mapping.h
@@ -5,8 +5,6 @@
#include <linux/mm.h>
#include <linux/dma-debug.h>

-#define DMA_ERROR_CODE (~(dma_addr_t)0x0)
-
#define HAVE_ARCH_DMA_SUPPORTED 1
int dma_supported(struct device *dev, u64 mask);

diff --git a/arch/sparc/kernel/iommu.c b/arch/sparc/kernel/iommu.c
index c63ba99ca551..dafa316d978d 100644
--- a/arch/sparc/kernel/iommu.c
+++ b/arch/sparc/kernel/iommu.c
@@ -314,7 +314,7 @@ static dma_addr_t dma_4u_map_page(struct device *dev, struct page *page,
bad_no_ctx:
if (printk_ratelimit())
WARN_ON(1);
- return DMA_ERROR_CODE;
+ return SPARC_MAPPING_ERROR;
}

static void strbuf_flush(struct strbuf *strbuf, struct iommu *iommu,
@@ -547,7 +547,7 @@ static int dma_4u_map_sg(struct device *dev, struct scatterlist *sglist,

if (outcount < incount) {
outs = sg_next(outs);
- outs->dma_address = DMA_ERROR_CODE;
+ outs->dma_address = SPARC_MAPPING_ERROR;
outs->dma_length = 0;
}

@@ -573,7 +573,7 @@ static int dma_4u_map_sg(struct device *dev, struct scatterlist *sglist,
iommu_tbl_range_free(&iommu->tbl, vaddr, npages,
IOMMU_ERROR_CODE);

- s->dma_address = DMA_ERROR_CODE;
+ s->dma_address = SPARC_MAPPING_ERROR;
s->dma_length = 0;
}
if (s == outs)
@@ -741,6 +741,11 @@ static void dma_4u_sync_sg_for_cpu(struct device *dev,
spin_unlock_irqrestore(&iommu->lock, flags);
}

+static int dma_4u_mapping_error(struct device *dev, dma_addr_t dma_addr)
+{
+ return dma_addr == SPARC_MAPPING_ERROR;
+}
+
static const struct dma_map_ops sun4u_dma_ops = {
.alloc = dma_4u_alloc_coherent,
.free = dma_4u_free_coherent,
@@ -750,6 +755,7 @@ static const struct dma_map_ops sun4u_dma_ops = {
.unmap_sg = dma_4u_unmap_sg,
.sync_single_for_cpu = dma_4u_sync_single_for_cpu,
.sync_sg_for_cpu = dma_4u_sync_sg_for_cpu,
+ .mapping_error = dma_4u_mapping_error,
};

const struct dma_map_ops *dma_ops = &sun4u_dma_ops;
diff --git a/arch/sparc/kernel/iommu_common.h b/arch/sparc/kernel/iommu_common.h
index 828493329f68..5ea5c192b1d9 100644
--- a/arch/sparc/kernel/iommu_common.h
+++ b/arch/sparc/kernel/iommu_common.h
@@ -47,4 +47,6 @@ static inline int is_span_boundary(unsigned long entry,
return iommu_is_span_boundary(entry, nr, shift, boundary_size);
}

+#define SPARC_MAPPING_ERROR (~(dma_addr_t)0x0)
+
#endif /* _IOMMU_COMMON_H */
diff --git a/arch/sparc/kernel/pci_sun4v.c b/arch/sparc/kernel/pci_sun4v.c
index 68bec7c97cb8..8e2a56f4c03a 100644
--- a/arch/sparc/kernel/pci_sun4v.c
+++ b/arch/sparc/kernel/pci_sun4v.c
@@ -412,12 +412,12 @@ static dma_addr_t dma_4v_map_page(struct device *dev, struct page *page,
bad:
if (printk_ratelimit())
WARN_ON(1);
- return DMA_ERROR_CODE;
+ return SPARC_MAPPING_ERROR;

iommu_map_fail:
local_irq_restore(flags);
iommu_tbl_range_free(tbl, bus_addr, npages, IOMMU_ERROR_CODE);
- return DMA_ERROR_CODE;
+ return SPARC_MAPPING_ERROR;
}

static void dma_4v_unmap_page(struct device *dev, dma_addr_t bus_addr,
@@ -590,7 +590,7 @@ static int dma_4v_map_sg(struct device *dev, struct scatterlist *sglist,

if (outcount < incount) {
outs = sg_next(outs);
- outs->dma_address = DMA_ERROR_CODE;
+ outs->dma_address = SPARC_MAPPING_ERROR;
outs->dma_length = 0;
}

@@ -607,7 +607,7 @@ static int dma_4v_map_sg(struct device *dev, struct scatterlist *sglist,
iommu_tbl_range_free(tbl, vaddr, npages,
IOMMU_ERROR_CODE);
/* XXX demap? XXX */
- s->dma_address = DMA_ERROR_CODE;
+ s->dma_address = SPARC_MAPPING_ERROR;
s->dma_length = 0;
}
if (s == outs)
@@ -669,6 +669,11 @@ static void dma_4v_unmap_sg(struct device *dev, struct scatterlist *sglist,
local_irq_restore(flags);
}

+static int dma_4v_mapping_error(struct device *dev, dma_addr_t dma_addr)
+{
+ return dma_addr == SPARC_MAPPING_ERROR;
+}
+
static const struct dma_map_ops sun4v_dma_ops = {
.alloc = dma_4v_alloc_coherent,
.free = dma_4v_free_coherent,
@@ -676,6 +681,7 @@ static const struct dma_map_ops sun4v_dma_ops = {
.unmap_page = dma_4v_unmap_page,
.map_sg = dma_4v_map_sg,
.unmap_sg = dma_4v_unmap_sg,
+ .mapping_error = dma_4v_mapping_error,
};

static void pci_sun4v_scan_bus(struct pci_pbm_info *pbm, struct device *parent)
--
2.11.0

2017-06-08 13:38:55

by Christoph Hellwig

Subject: [PATCH 16/44] arm64: remove DMA_ERROR_CODE

The dma alloc interface returns an error by returning NULL, and the
mapping interfaces rely on the mapping_error method, which the dummy
ops already implement correctly.

Thus remove the DMA_ERROR_CODE define.

Signed-off-by: Christoph Hellwig <[email protected]>
---
arch/arm64/include/asm/dma-mapping.h | 1 -
arch/arm64/mm/dma-mapping.c | 3 +--
2 files changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/dma-mapping.h b/arch/arm64/include/asm/dma-mapping.h
index 5392dbeffa45..cf8fc8f05580 100644
--- a/arch/arm64/include/asm/dma-mapping.h
+++ b/arch/arm64/include/asm/dma-mapping.h
@@ -24,7 +24,6 @@
#include <xen/xen.h>
#include <asm/xen/hypervisor.h>

-#define DMA_ERROR_CODE (~(dma_addr_t)0)
extern const struct dma_map_ops dummy_dma_ops;

static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index 3216e098c058..147fbb907a2f 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -184,7 +184,6 @@ static void *__dma_alloc(struct device *dev, size_t size,
no_map:
__dma_free_coherent(dev, size, ptr, *dma_handle, attrs);
no_mem:
- *dma_handle = DMA_ERROR_CODE;
return NULL;
}

@@ -487,7 +486,7 @@ static dma_addr_t __dummy_map_page(struct device *dev, struct page *page,
enum dma_data_direction dir,
unsigned long attrs)
{
- return DMA_ERROR_CODE;
+ return 0;
}

static void __dummy_unmap_page(struct device *dev, dma_addr_t dev_addr,
--
2.11.0
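
Callers of the coherent allocator therefore need no handle check at all; a
NULL test covers every failure. Minimal sketch:

void *buf;
dma_addr_t dma_handle;

buf = dma_alloc_coherent(dev, size, &dma_handle, GFP_KERNEL);
if (!buf)
        return -ENOMEM; /* dma_handle carries no information here */

which is why poisoning *dma_handle with DMA_ERROR_CODE on the error path
was dead code to begin with.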

2017-06-08 13:27:29

by Christoph Hellwig

Subject: [PATCH 15/44] xtensa: remove DMA_ERROR_CODE

xtensa already implements the mapping_error method for its only
dma_map_ops instance.

Signed-off-by: Christoph Hellwig <[email protected]>
---
arch/xtensa/include/asm/dma-mapping.h | 2 --
1 file changed, 2 deletions(-)

diff --git a/arch/xtensa/include/asm/dma-mapping.h b/arch/xtensa/include/asm/dma-mapping.h
index c6140fa8c0be..269738dc9d1d 100644
--- a/arch/xtensa/include/asm/dma-mapping.h
+++ b/arch/xtensa/include/asm/dma-mapping.h
@@ -16,8 +16,6 @@
#include <linux/mm.h>
#include <linux/scatterlist.h>

-#define DMA_ERROR_CODE (~(dma_addr_t)0x0)
-
extern const struct dma_map_ops xtensa_dma_map_ops;

static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
--
2.11.0

2017-06-08 13:27:27

by Christoph Hellwig

Subject: [PATCH 10/44] ia64: remove DMA_ERROR_CODE

All ia64 dma_mapping_ops instances already have a mapping_error member.

Signed-off-by: Christoph Hellwig <[email protected]>
---
arch/ia64/include/asm/dma-mapping.h | 2 --
1 file changed, 2 deletions(-)

diff --git a/arch/ia64/include/asm/dma-mapping.h b/arch/ia64/include/asm/dma-mapping.h
index 73ec3c6f4cfe..3ce5ab4339f3 100644
--- a/arch/ia64/include/asm/dma-mapping.h
+++ b/arch/ia64/include/asm/dma-mapping.h
@@ -12,8 +12,6 @@

#define ARCH_HAS_DMA_GET_REQUIRED_MASK

-#define DMA_ERROR_CODE 0
-
extern const struct dma_map_ops *dma_ops;
extern struct ia64_machine_vector ia64_mv;
extern void set_iommu_machvec(void);
--
2.11.0

2017-06-08 13:40:30

by Christoph Hellwig

Subject: [PATCH 14/44] sh: remove DMA_ERROR_CODE

sh does not return errors for dma_map_page.

Signed-off-by: Christoph Hellwig <[email protected]>
---
arch/sh/include/asm/dma-mapping.h | 2 --
1 file changed, 2 deletions(-)

diff --git a/arch/sh/include/asm/dma-mapping.h b/arch/sh/include/asm/dma-mapping.h
index d99008af5f73..9b06be07db4d 100644
--- a/arch/sh/include/asm/dma-mapping.h
+++ b/arch/sh/include/asm/dma-mapping.h
@@ -9,8 +9,6 @@ static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
return dma_ops;
}

-#define DMA_ERROR_CODE 0
-
void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
enum dma_data_direction dir);

--
2.11.0

2017-06-08 13:40:59

by Christoph Hellwig

Subject: [PATCH 11/44] m32r: remove DMA_ERROR_CODE

dma-noop is the only dma_mapping_ops instance for m32r and does not return
errors.

Signed-off-by: Christoph Hellwig <[email protected]>
---
arch/m32r/include/asm/dma-mapping.h | 2 --
1 file changed, 2 deletions(-)

diff --git a/arch/m32r/include/asm/dma-mapping.h b/arch/m32r/include/asm/dma-mapping.h
index c01d9f52d228..aff3ae8b62f7 100644
--- a/arch/m32r/include/asm/dma-mapping.h
+++ b/arch/m32r/include/asm/dma-mapping.h
@@ -8,8 +8,6 @@
#include <linux/dma-debug.h>
#include <linux/io.h>

-#define DMA_ERROR_CODE (~(dma_addr_t)0x0)
-
static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
{
return &dma_noop_ops;
--
2.11.0

2017-06-08 13:27:17

by Christoph Hellwig

Subject: [PATCH 13/44] openrisc: remove DMA_ERROR_CODE

openrisc does not return errors for dma_map_page.

Signed-off-by: Christoph Hellwig <[email protected]>
---
arch/openrisc/include/asm/dma-mapping.h | 2 --
1 file changed, 2 deletions(-)

diff --git a/arch/openrisc/include/asm/dma-mapping.h b/arch/openrisc/include/asm/dma-mapping.h
index 0c0075f17145..a4ea139c2ef9 100644
--- a/arch/openrisc/include/asm/dma-mapping.h
+++ b/arch/openrisc/include/asm/dma-mapping.h
@@ -26,8 +26,6 @@
#include <linux/kmemcheck.h>
#include <linux/dma-mapping.h>

-#define DMA_ERROR_CODE (~(dma_addr_t)0x0)
-
extern const struct dma_map_ops or1k_dma_map_ops;

static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
--
2.11.0

2017-06-08 13:27:20

by Christoph Hellwig

Subject: [PATCH 12/44] microblaze: remove DMA_ERROR_CODE

microblaze does not return errors for dma_map_page.

Signed-off-by: Christoph Hellwig <[email protected]>
---
arch/microblaze/include/asm/dma-mapping.h | 2 --
1 file changed, 2 deletions(-)

diff --git a/arch/microblaze/include/asm/dma-mapping.h b/arch/microblaze/include/asm/dma-mapping.h
index 3fad5e722a66..e15cd2f76e23 100644
--- a/arch/microblaze/include/asm/dma-mapping.h
+++ b/arch/microblaze/include/asm/dma-mapping.h
@@ -28,8 +28,6 @@
#include <asm/io.h>
#include <asm/cacheflush.h>

-#define DMA_ERROR_CODE (~(dma_addr_t)0x0)
-
#define __dma_alloc_coherent(dev, gfp, size, handle) NULL
#define __dma_free_coherent(size, addr) ((void)0)

--
2.11.0

2017-06-08 13:27:10

by Christoph Hellwig

Subject: [PATCH 08/44] xen-swiotlb: implement ->mapping_error

DMA_ERROR_CODE is going to go away, so don't rely on it.

Signed-off-by: Christoph Hellwig <[email protected]>
---
drivers/xen/swiotlb-xen.c | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index a0f006daab48..c3a04b2d7532 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -67,6 +67,8 @@ static unsigned long dma_alloc_coherent_mask(struct device *dev,
}
#endif

+#define XEN_SWIOTLB_ERROR_CODE (~(dma_addr_t)0x0)
+
static char *xen_io_tlb_start, *xen_io_tlb_end;
static unsigned long xen_io_tlb_nslabs;
/*
@@ -410,7 +412,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
map = swiotlb_tbl_map_single(dev, start_dma_addr, phys, size, dir,
attrs);
if (map == SWIOTLB_MAP_ERROR)
- return DMA_ERROR_CODE;
+ return XEN_SWIOTLB_ERROR_CODE;

dev_addr = xen_phys_to_bus(map);
xen_dma_map_page(dev, pfn_to_page(map >> PAGE_SHIFT),
@@ -425,7 +427,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
attrs |= DMA_ATTR_SKIP_CPU_SYNC;
swiotlb_tbl_unmap_single(dev, map, size, dir, attrs);

- return DMA_ERROR_CODE;
+ return XEN_SWIOTLB_ERROR_CODE;
}

/*
@@ -715,6 +717,11 @@ xen_swiotlb_get_sgtable(struct device *dev, struct sg_table *sgt,
return dma_common_get_sgtable(dev, sgt, cpu_addr, handle, size);
}

+static int xen_swiotlb_mapping_error(struct device *dev, dma_addr_t dma_addr)
+{
+ return dma_addr == XEN_SWIOTLB_ERROR_CODE;
+}
+
const struct dma_map_ops xen_swiotlb_dma_ops = {
.alloc = xen_swiotlb_alloc_coherent,
.free = xen_swiotlb_free_coherent,
@@ -730,4 +737,5 @@ const struct dma_map_ops xen_swiotlb_dma_ops = {
.set_dma_mask = xen_swiotlb_set_dma_mask,
.mmap = xen_swiotlb_dma_mmap,
.get_sgtable = xen_swiotlb_get_sgtable,
+ .mapping_error = xen_swiotlb_mapping_error,
};
--
2.11.0

2017-06-08 13:42:12

by Christoph Hellwig

Subject: [PATCH 09/44] c6x: remove DMA_ERROR_CODE

Signed-off-by: Christoph Hellwig <[email protected]>
---
arch/c6x/include/asm/dma-mapping.h | 5 -----
1 file changed, 5 deletions(-)

diff --git a/arch/c6x/include/asm/dma-mapping.h b/arch/c6x/include/asm/dma-mapping.h
index aca9f755e4f8..05daf1038111 100644
--- a/arch/c6x/include/asm/dma-mapping.h
+++ b/arch/c6x/include/asm/dma-mapping.h
@@ -12,11 +12,6 @@
#ifndef _ASM_C6X_DMA_MAPPING_H
#define _ASM_C6X_DMA_MAPPING_H

-/*
- * DMA errors are defined by all-bits-set in the DMA address.
- */
-#define DMA_ERROR_CODE ~0
-
extern const struct dma_map_ops c6x_dma_ops;

static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
--
2.11.0

2017-06-08 13:26:53

by Christoph Hellwig

Subject: [PATCH 06/44] iommu/dma: don't rely on DMA_ERROR_CODE

DMA_ERROR_CODE is not a public API and will go away soon. The dma-iommu
driver already implements a proper ->mapping_error method, so it's only
using the value internally. Add a new local define using the value from
arm64, which is the only current user of dma-iommu.

Signed-off-by: Christoph Hellwig <[email protected]>
---
drivers/iommu/dma-iommu.c | 18 ++++++++++--------
1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 62618e77bedc..638aea814b94 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -31,6 +31,8 @@
#include <linux/scatterlist.h>
#include <linux/vmalloc.h>

+#define IOMMU_MAPPING_ERROR (~(dma_addr_t)0)
+
struct iommu_dma_msi_page {
struct list_head list;
dma_addr_t iova;
@@ -500,7 +502,7 @@ void iommu_dma_free(struct device *dev, struct page **pages, size_t size,
{
__iommu_dma_unmap(iommu_get_domain_for_dev(dev), *handle, size);
__iommu_dma_free_pages(pages, PAGE_ALIGN(size) >> PAGE_SHIFT);
- *handle = DMA_ERROR_CODE;
+ *handle = IOMMU_MAPPING_ERROR;
}

/**
@@ -533,7 +535,7 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
dma_addr_t iova;
unsigned int count, min_size, alloc_sizes = domain->pgsize_bitmap;

- *handle = DMA_ERROR_CODE;
+ *handle = IOMMU_MAPPING_ERROR;

min_size = alloc_sizes & -alloc_sizes;
if (min_size < PAGE_SIZE) {
@@ -627,11 +629,11 @@ static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,

iova = iommu_dma_alloc_iova(domain, size, dma_get_mask(dev), dev);
if (!iova)
- return DMA_ERROR_CODE;
+ return IOMMU_MAPPING_ERROR;

if (iommu_map(domain, iova, phys - iova_off, size, prot)) {
iommu_dma_free_iova(cookie, iova, size);
- return DMA_ERROR_CODE;
+ return IOMMU_MAPPING_ERROR;
}
return iova + iova_off;
}
@@ -671,7 +673,7 @@ static int __finalise_sg(struct device *dev, struct scatterlist *sg, int nents,

s->offset += s_iova_off;
s->length = s_length;
- sg_dma_address(s) = DMA_ERROR_CODE;
+ sg_dma_address(s) = IOMMU_MAPPING_ERROR;
sg_dma_len(s) = 0;

/*
@@ -714,11 +716,11 @@ static void __invalidate_sg(struct scatterlist *sg, int nents)
int i;

for_each_sg(sg, s, nents, i) {
- if (sg_dma_address(s) != DMA_ERROR_CODE)
+ if (sg_dma_address(s) != IOMMU_MAPPING_ERROR)
s->offset += sg_dma_address(s);
if (sg_dma_len(s))
s->length = sg_dma_len(s);
- sg_dma_address(s) = DMA_ERROR_CODE;
+ sg_dma_address(s) = IOMMU_MAPPING_ERROR;
sg_dma_len(s) = 0;
}
}
@@ -836,7 +838,7 @@ void iommu_dma_unmap_resource(struct device *dev, dma_addr_t handle,

int iommu_dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
{
- return dma_addr == DMA_ERROR_CODE;
+ return dma_addr == IOMMU_MAPPING_ERROR;
}

static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev,
--
2.11.0

2017-06-08 13:44:48

by Christoph Hellwig

Subject: [PATCH 05/44] drm/armada: don't abuse DMA_ERROR_CODE

dev_addr isn't even a dma_addr_t, and DMA_ERROR_CODE has never been
a valid driver API. Add a bool mapped flag instead.

Signed-off-by: Christoph Hellwig <[email protected]>
---
drivers/gpu/drm/armada/armada_fb.c | 2 +-
drivers/gpu/drm/armada/armada_gem.c | 5 ++---
drivers/gpu/drm/armada/armada_gem.h | 1 +
3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/armada/armada_fb.c b/drivers/gpu/drm/armada/armada_fb.c
index 2a7eb6817c36..92e6b08ea64a 100644
--- a/drivers/gpu/drm/armada/armada_fb.c
+++ b/drivers/gpu/drm/armada/armada_fb.c
@@ -133,7 +133,7 @@ static struct drm_framebuffer *armada_fb_create(struct drm_device *dev,
}

/* Framebuffer objects must have a valid device address for scanout */
- if (obj->dev_addr == DMA_ERROR_CODE) {
+ if (!obj->mapped) {
ret = -EINVAL;
goto err_unref;
}
diff --git a/drivers/gpu/drm/armada/armada_gem.c b/drivers/gpu/drm/armada/armada_gem.c
index d6c2a5d190eb..a76ca21d063b 100644
--- a/drivers/gpu/drm/armada/armada_gem.c
+++ b/drivers/gpu/drm/armada/armada_gem.c
@@ -175,6 +175,7 @@ armada_gem_linear_back(struct drm_device *dev, struct armada_gem_object *obj)

obj->phys_addr = obj->linear->start;
obj->dev_addr = obj->linear->start;
+ obj->mapped = true;
}

DRM_DEBUG_DRIVER("obj %p phys %#llx dev %#llx\n", obj,
@@ -205,7 +206,6 @@ armada_gem_alloc_private_object(struct drm_device *dev, size_t size)
return NULL;

drm_gem_private_object_init(dev, &obj->obj, size);
- obj->dev_addr = DMA_ERROR_CODE;

DRM_DEBUG_DRIVER("alloc private obj %p size %zu\n", obj, size);

@@ -229,8 +229,6 @@ static struct armada_gem_object *armada_gem_alloc_object(struct drm_device *dev,
return NULL;
}

- obj->dev_addr = DMA_ERROR_CODE;
-
mapping = obj->obj.filp->f_mapping;
mapping_set_gfp_mask(mapping, GFP_HIGHUSER | __GFP_RECLAIMABLE);

@@ -610,5 +608,6 @@ int armada_gem_map_import(struct armada_gem_object *dobj)
return -EINVAL;
}
dobj->dev_addr = sg_dma_address(dobj->sgt->sgl);
+ dobj->mapped = true;
return 0;
}
diff --git a/drivers/gpu/drm/armada/armada_gem.h b/drivers/gpu/drm/armada/armada_gem.h
index b88d2b9853c7..6e524e0676bb 100644
--- a/drivers/gpu/drm/armada/armada_gem.h
+++ b/drivers/gpu/drm/armada/armada_gem.h
@@ -16,6 +16,7 @@ struct armada_gem_object {
void *addr;
phys_addr_t phys_addr;
resource_size_t dev_addr;
+ bool mapped;
struct drm_mm_node *linear; /* for linear backed */
struct page *page; /* for page backed */
struct sg_table *sgt; /* for imported */
--
2.11.0

2017-06-08 13:46:21

by Christoph Hellwig

[permalink] [raw]
Subject: [PATCH 03/44] dmaengine: ioat: don't use DMA_ERROR_CODE

DMA_ERROR_CODE is not a public API and will go away. Instead properly
unwind based on the loop counter.

Signed-off-by: Christoph Hellwig <[email protected]>
---
drivers/dma/ioat/init.c | 24 +++++++-----------------
1 file changed, 7 insertions(+), 17 deletions(-)

diff --git a/drivers/dma/ioat/init.c b/drivers/dma/ioat/init.c
index 6ad4384b3fa8..ed8ed1192775 100644
--- a/drivers/dma/ioat/init.c
+++ b/drivers/dma/ioat/init.c
@@ -839,8 +839,6 @@ static int ioat_xor_val_self_test(struct ioatdma_device *ioat_dma)
goto free_resources;
}

- for (i = 0; i < IOAT_NUM_SRC_TEST; i++)
- dma_srcs[i] = DMA_ERROR_CODE;
for (i = 0; i < IOAT_NUM_SRC_TEST; i++) {
dma_srcs[i] = dma_map_page(dev, xor_srcs[i], 0, PAGE_SIZE,
DMA_TO_DEVICE);
@@ -910,8 +908,6 @@ static int ioat_xor_val_self_test(struct ioatdma_device *ioat_dma)

xor_val_result = 1;

- for (i = 0; i < IOAT_NUM_SRC_TEST + 1; i++)
- dma_srcs[i] = DMA_ERROR_CODE;
for (i = 0; i < IOAT_NUM_SRC_TEST + 1; i++) {
dma_srcs[i] = dma_map_page(dev, xor_val_srcs[i], 0, PAGE_SIZE,
DMA_TO_DEVICE);
@@ -965,8 +961,6 @@ static int ioat_xor_val_self_test(struct ioatdma_device *ioat_dma)
op = IOAT_OP_XOR_VAL;

xor_val_result = 0;
- for (i = 0; i < IOAT_NUM_SRC_TEST + 1; i++)
- dma_srcs[i] = DMA_ERROR_CODE;
for (i = 0; i < IOAT_NUM_SRC_TEST + 1; i++) {
dma_srcs[i] = dma_map_page(dev, xor_val_srcs[i], 0, PAGE_SIZE,
DMA_TO_DEVICE);
@@ -1017,18 +1011,14 @@ static int ioat_xor_val_self_test(struct ioatdma_device *ioat_dma)
goto free_resources;
dma_unmap:
if (op == IOAT_OP_XOR) {
- if (dest_dma != DMA_ERROR_CODE)
- dma_unmap_page(dev, dest_dma, PAGE_SIZE,
- DMA_FROM_DEVICE);
- for (i = 0; i < IOAT_NUM_SRC_TEST; i++)
- if (dma_srcs[i] != DMA_ERROR_CODE)
- dma_unmap_page(dev, dma_srcs[i], PAGE_SIZE,
- DMA_TO_DEVICE);
+ while (--i >= 0)
+ dma_unmap_page(dev, dma_srcs[i], PAGE_SIZE,
+ DMA_TO_DEVICE);
+ dma_unmap_page(dev, dest_dma, PAGE_SIZE, DMA_FROM_DEVICE);
} else if (op == IOAT_OP_XOR_VAL) {
- for (i = 0; i < IOAT_NUM_SRC_TEST + 1; i++)
- if (dma_srcs[i] != DMA_ERROR_CODE)
- dma_unmap_page(dev, dma_srcs[i], PAGE_SIZE,
- DMA_TO_DEVICE);
+ while (--i >= 0)
+ dma_unmap_page(dev, dma_srcs[i], PAGE_SIZE,
+ DMA_TO_DEVICE);
}
free_resources:
dma->device_free_chan_resources(dma_chan);
--
2.11.0

2017-06-08 13:26:26

by Christoph Hellwig

[permalink] [raw]
Subject: [PATCH 01/44] firmware/ivc: use dma_mapping_error

DMA_ERROR_CODE is not supposed to be used by drivers.

Signed-off-by: Christoph Hellwig <[email protected]>
---
drivers/firmware/tegra/ivc.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/firmware/tegra/ivc.c b/drivers/firmware/tegra/ivc.c
index 29ecfd815320..a01461d63f68 100644
--- a/drivers/firmware/tegra/ivc.c
+++ b/drivers/firmware/tegra/ivc.c
@@ -646,12 +646,12 @@ int tegra_ivc_init(struct tegra_ivc *ivc, struct device *peer, void *rx,
if (peer) {
ivc->rx.phys = dma_map_single(peer, rx, queue_size,
DMA_BIDIRECTIONAL);
- if (ivc->rx.phys == DMA_ERROR_CODE)
+ if (dma_mapping_error(peer, ivc->rx.phys))
return -ENOMEM;

ivc->tx.phys = dma_map_single(peer, tx, queue_size,
DMA_BIDIRECTIONAL);
- if (ivc->tx.phys == DMA_ERROR_CODE) {
+ if (dma_mapping_error(peer, ivc->tx.phys)) {
dma_unmap_single(peer, ivc->rx.phys, queue_size,
DMA_BIDIRECTIONAL);
return -ENOMEM;
--
2.11.0

2017-06-08 13:59:21

by Robin Murphy

[permalink] [raw]
Subject: Re: [PATCH 06/44] iommu/dma: don't rely on DMA_ERROR_CODE

Hi Christoph,

On 08/06/17 14:25, Christoph Hellwig wrote:
> DMA_ERROR_CODE is not a public API and will go away soon. The dma-iommu
> driver already implements a proper ->mapping_error method, so it's only
> using the value internally. Add a new local define using the value that
> arm64, the only current user of dma-iommu, uses.

It would be fine to just use 0, since dma-iommu already makes sure that
that will never be allocated for a valid DMA address.
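
Something like this, i.e. just a sketch with the value flipped to 0 -
the mapping_error check itself can stay exactly as in your patch:

	#define IOMMU_MAPPING_ERROR	0

	int iommu_dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
	{
		/* 0 is never handed out as a valid IOVA, so it is free to use */
		return dma_addr == IOMMU_MAPPING_ERROR;
	}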

Otherwise, looks good!

Robin.

> Signed-off-by: Christoph Hellwig <[email protected]>
> ---
> drivers/iommu/dma-iommu.c | 18 ++++++++++--------
> 1 file changed, 10 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 62618e77bedc..638aea814b94 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -31,6 +31,8 @@
> #include <linux/scatterlist.h>
> #include <linux/vmalloc.h>
>
> +#define IOMMU_MAPPING_ERROR (~(dma_addr_t)0)
> +
> struct iommu_dma_msi_page {
> struct list_head list;
> dma_addr_t iova;
> @@ -500,7 +502,7 @@ void iommu_dma_free(struct device *dev, struct page **pages, size_t size,
> {
> __iommu_dma_unmap(iommu_get_domain_for_dev(dev), *handle, size);
> __iommu_dma_free_pages(pages, PAGE_ALIGN(size) >> PAGE_SHIFT);
> - *handle = DMA_ERROR_CODE;
> + *handle = IOMMU_MAPPING_ERROR;
> }
>
> /**
> @@ -533,7 +535,7 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
> dma_addr_t iova;
> unsigned int count, min_size, alloc_sizes = domain->pgsize_bitmap;
>
> - *handle = DMA_ERROR_CODE;
> + *handle = IOMMU_MAPPING_ERROR;
>
> min_size = alloc_sizes & -alloc_sizes;
> if (min_size < PAGE_SIZE) {
> @@ -627,11 +629,11 @@ static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
>
> iova = iommu_dma_alloc_iova(domain, size, dma_get_mask(dev), dev);
> if (!iova)
> - return DMA_ERROR_CODE;
> + return IOMMU_MAPPING_ERROR;
>
> if (iommu_map(domain, iova, phys - iova_off, size, prot)) {
> iommu_dma_free_iova(cookie, iova, size);
> - return DMA_ERROR_CODE;
> + return IOMMU_MAPPING_ERROR;
> }
> return iova + iova_off;
> }
> @@ -671,7 +673,7 @@ static int __finalise_sg(struct device *dev, struct scatterlist *sg, int nents,
>
> s->offset += s_iova_off;
> s->length = s_length;
> - sg_dma_address(s) = DMA_ERROR_CODE;
> + sg_dma_address(s) = IOMMU_MAPPING_ERROR;
> sg_dma_len(s) = 0;
>
> /*
> @@ -714,11 +716,11 @@ static void __invalidate_sg(struct scatterlist *sg, int nents)
> int i;
>
> for_each_sg(sg, s, nents, i) {
> - if (sg_dma_address(s) != DMA_ERROR_CODE)
> + if (sg_dma_address(s) != IOMMU_MAPPING_ERROR)
> s->offset += sg_dma_address(s);
> if (sg_dma_len(s))
> s->length = sg_dma_len(s);
> - sg_dma_address(s) = DMA_ERROR_CODE;
> + sg_dma_address(s) = IOMMU_MAPPING_ERROR;
> sg_dma_len(s) = 0;
> }
> }
> @@ -836,7 +838,7 @@ void iommu_dma_unmap_resource(struct device *dev, dma_addr_t handle,
>
> int iommu_dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
> {
> - return dma_addr == DMA_ERROR_CODE;
> + return dma_addr == IOMMU_MAPPING_ERROR;
> }
>
> static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev,
>

2017-06-08 14:02:10

by Robin Murphy

[permalink] [raw]
Subject: Re: [PATCH 16/44] arm64: remove DMA_ERROR_CODE

On 08/06/17 14:25, Christoph Hellwig wrote:
> The dma alloc interface returns an error by returning NULL, and the
> mapping interfaces rely on the mapping_error method, which the dummy
> ops already implement correctly.
>
> Thus remove the DMA_ERROR_CODE define.

Reviewed-by: Robin Murphy <[email protected]>

> Signed-off-by: Christoph Hellwig <[email protected]>
> ---
> arch/arm64/include/asm/dma-mapping.h | 1 -
> arch/arm64/mm/dma-mapping.c | 3 +--
> 2 files changed, 1 insertion(+), 3 deletions(-)
>
> diff --git a/arch/arm64/include/asm/dma-mapping.h b/arch/arm64/include/asm/dma-mapping.h
> index 5392dbeffa45..cf8fc8f05580 100644
> --- a/arch/arm64/include/asm/dma-mapping.h
> +++ b/arch/arm64/include/asm/dma-mapping.h
> @@ -24,7 +24,6 @@
> #include <xen/xen.h>
> #include <asm/xen/hypervisor.h>
>
> -#define DMA_ERROR_CODE (~(dma_addr_t)0)
> extern const struct dma_map_ops dummy_dma_ops;
>
> static inline const struct dma_map_ops *get_arch_dma_ops(struct bus_type *bus)
> diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
> index 3216e098c058..147fbb907a2f 100644
> --- a/arch/arm64/mm/dma-mapping.c
> +++ b/arch/arm64/mm/dma-mapping.c
> @@ -184,7 +184,6 @@ static void *__dma_alloc(struct device *dev, size_t size,
> no_map:
> __dma_free_coherent(dev, size, ptr, *dma_handle, attrs);
> no_mem:
> - *dma_handle = DMA_ERROR_CODE;
> return NULL;
> }
>
> @@ -487,7 +486,7 @@ static dma_addr_t __dummy_map_page(struct device *dev, struct page *page,
> enum dma_data_direction dir,
> unsigned long attrs)
> {
> - return DMA_ERROR_CODE;
> + return 0;
> }
>
> static void __dummy_unmap_page(struct device *dev, dma_addr_t dev_addr,
>

2017-06-08 14:21:45

by David Miller

[permalink] [raw]
Subject: Re: clean up and modularize arch dma_mapping interface

From: Christoph Hellwig <[email protected]>
Date: Thu, 8 Jun 2017 15:25:25 +0200

> for a while we have a generic implementation of the dma mapping routines
> that call into per-arch or per-device operations. But right now there
> still are various bits in the interfaces where we don't clearly operate
> on these ops. This series tries to clean up a lot of those (not all of
> them yet, but the series is big enough). It gets rid of the DMA_ERROR_CODE
> way of signaling failures of the mapping routines from the
> implementations to the generic code (and cleans up various drivers that
> were incorrectly using it), and gets rid of the ->set_dma_mask routine
> in favor of relying on the ->dma_capable method that can be used in
> the same way, but which requires less code duplication.

There is unlikely to be conflicts for the sparc and net changes, so I
will simply ACK them.

Thanks Christoph.

2017-06-08 14:22:01

by David Miller

[permalink] [raw]
Subject: Re: [PATCH 02/44] ibmveth: properly unwind on init errors

From: Christoph Hellwig <[email protected]>
Date: Thu, 8 Jun 2017 15:25:27 +0200

> That way the driver doesn't have to rely on DMA_ERROR_CODE, which
> is not a public API and going away.
>
> Signed-off-by: Christoph Hellwig <[email protected]>

Acked-by: David S. Miller <[email protected]>

2017-06-08 14:22:41

by David Miller

[permalink] [raw]
Subject: Re: [PATCH 27/44] sparc: remove leon_dma_ops

From: Christoph Hellwig <[email protected]>
Date: Thu, 8 Jun 2017 15:25:52 +0200

> We can just use pci32_dma_ops.
>
> Btw, given that leon is 32-bit and appears to be PCI based, do we even
> need the special case for it in get_arch_dma_ops at all?

I would need to defer to the LEON developers on that, but they haven't
been very active lately, so whether you'll get a response or not is
hard to predict.

> Signed-off-by: Christoph Hellwig <[email protected]>

Acked-by: David S. Miller <[email protected]>

2017-06-08 14:23:14

by Julian Calaby

[permalink] [raw]
Subject: Re: [PATCH 28/44] sparc: remove arch specific dma_supported implementations

Hi Christoph,

On Thu, Jun 8, 2017 at 11:25 PM, Christoph Hellwig <[email protected]> wrote:
> Usually dma_supported decisions are done by the dma_map_ops instance.
> Switch sparc to that model by providing a ->dma_supported instance for
> sbus that always returns false, and implementations tailored to the sun4u
> and sun4v cases for sparc64, and leaving it unimplemented for PCI on
> sparc32, which means always supported.
>
> Signed-off-by: Christoph Hellwig <[email protected]>
> ---
> arch/sparc/include/asm/dma-mapping.h | 3 ---
> arch/sparc/kernel/iommu.c | 40 +++++++++++++++---------------------
> arch/sparc/kernel/ioport.c | 22 ++++++--------------
> arch/sparc/kernel/pci_sun4v.c | 17 +++++++++++++++
> 4 files changed, 39 insertions(+), 43 deletions(-)
>
> diff --git a/arch/sparc/kernel/ioport.c b/arch/sparc/kernel/ioport.c
> index dd081d557609..12894f259bea 100644
> --- a/arch/sparc/kernel/ioport.c
> +++ b/arch/sparc/kernel/ioport.c
> @@ -401,6 +401,11 @@ static void sbus_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
> BUG();
> }
>
> +static int sbus_dma_supported(struct device *dev, u64 mask)
> +{
> + return 0;
> +}
> +

I'm guessing there's a few places that have DMA ops but DMA isn't
actually supported. Why not have a common method for this, maybe
"dma_not_supported"?

> static const struct dma_map_ops sbus_dma_ops = {
> .alloc = sbus_alloc_coherent,
> .free = sbus_free_coherent,
> @@ -410,6 +415,7 @@ static const struct dma_map_ops sbus_dma_ops = {
> .unmap_sg = sbus_unmap_sg,
> .sync_sg_for_cpu = sbus_sync_sg_for_cpu,
> .sync_sg_for_device = sbus_sync_sg_for_device,
> + .dma_supported = sbus_dma_supported,
> };
>
> static int __init sparc_register_ioport(void)

Thanks,

--
Julian Calaby

Email: [email protected]
Profile: http://www.google.com/profiles/julian.calaby/

2017-06-08 14:24:12

by David Miller

[permalink] [raw]
Subject: Re: [PATCH 28/44] sparc: remove arch specific dma_supported implementations

From: Christoph Hellwig <[email protected]>
Date: Thu, 8 Jun 2017 15:25:53 +0200

> Usually dma_supported decisions are done by the dma_map_ops instance.
> Switch sparc to that model by providing a ->dma_supported instance for
> sbus that always returns false, and implementations tailored to the sun4u
> and sun4v cases for sparc64, and leaving it unimplemented for PCI on
> sparc32, which means always supported.
>
> Signed-off-by: Christoph Hellwig <[email protected]>

Acked-by: David S. Miller <[email protected]>

2017-06-08 14:24:35

by David Miller

[permalink] [raw]
Subject: Re: [PATCH 20/44] sparc: implement ->mapping_error

From: Christoph Hellwig <[email protected]>
Date: Thu, 8 Jun 2017 15:25:45 +0200

> DMA_ERROR_CODE is going to go away, so don't rely on it.
>
> Signed-off-by: Christoph Hellwig <[email protected]>

Acked-by: David S. Miller <[email protected]>

2017-06-08 14:43:39

by Russell King (Oracle)

[permalink] [raw]
Subject: Re: [PATCH 25/44] arm: implement ->mapping_error

On Thu, Jun 08, 2017 at 03:25:50PM +0200, Christoph Hellwig wrote:
> +static int dmabounce_mapping_error(struct device *dev, dma_addr_t dma_addr)
> +{
> + if (dev->archdata.dmabounce)
> + return 0;

I'm not convinced that we need this check here:

dev->archdata.dmabounce = device_info;
set_dma_ops(dev, &dmabounce_ops);

There shouldn't be any chance of dev->archdata.dmabounce being NULL if
the dmabounce_ops has been set as the current device DMA ops. So I
think that test can be killed.
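
Assuming the rest of the function just falls through to the normal
check against the error value, it then collapses to a one-liner along
these lines (untested):

	static int dmabounce_mapping_error(struct device *dev, dma_addr_t dma_addr)
	{
		return arm_dma_ops.mapping_error(dev, dma_addr);
	}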

--
RMK's Patch system: http://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line: currently at 9.6Mbps down 400kbps up
according to speedtest.net.

2017-06-08 16:23:28

by Gerald Schaefer

[permalink] [raw]
Subject: Re: [PATCH 19/44] s390: implement ->mapping_error

On Thu, 8 Jun 2017 15:25:44 +0200
Christoph Hellwig <[email protected]> wrote:

> s390 can also use noop_dma_ops, and while that currently does not return
> errors it will do so in the future. Implementing the mapping_error method
> is the proper way to have per-ops error conditions.
>
> Signed-off-by: Christoph Hellwig <[email protected]>

Acked-by: Gerald Schaefer <[email protected]>

2017-06-08 16:32:24

by Dave Jiang

[permalink] [raw]
Subject: Re: [PATCH 03/44] dmaengine: ioat: don't use DMA_ERROR_CODE

On 06/08/2017 06:25 AM, Christoph Hellwig wrote:
> DMA_ERROR_CODE is not a public API and will go away. Instead properly
> unwind based on the loop counter.
>
> Signed-off-by: Christoph Hellwig <[email protected]>

Acked-by: Dave Jiang <[email protected]>

> ---
> drivers/dma/ioat/init.c | 24 +++++++-----------------
> 1 file changed, 7 insertions(+), 17 deletions(-)
>
> diff --git a/drivers/dma/ioat/init.c b/drivers/dma/ioat/init.c
> index 6ad4384b3fa8..ed8ed1192775 100644
> --- a/drivers/dma/ioat/init.c
> +++ b/drivers/dma/ioat/init.c
> @@ -839,8 +839,6 @@ static int ioat_xor_val_self_test(struct ioatdma_device *ioat_dma)
> goto free_resources;
> }
>
> - for (i = 0; i < IOAT_NUM_SRC_TEST; i++)
> - dma_srcs[i] = DMA_ERROR_CODE;
> for (i = 0; i < IOAT_NUM_SRC_TEST; i++) {
> dma_srcs[i] = dma_map_page(dev, xor_srcs[i], 0, PAGE_SIZE,
> DMA_TO_DEVICE);
> @@ -910,8 +908,6 @@ static int ioat_xor_val_self_test(struct ioatdma_device *ioat_dma)
>
> xor_val_result = 1;
>
> - for (i = 0; i < IOAT_NUM_SRC_TEST + 1; i++)
> - dma_srcs[i] = DMA_ERROR_CODE;
> for (i = 0; i < IOAT_NUM_SRC_TEST + 1; i++) {
> dma_srcs[i] = dma_map_page(dev, xor_val_srcs[i], 0, PAGE_SIZE,
> DMA_TO_DEVICE);
> @@ -965,8 +961,6 @@ static int ioat_xor_val_self_test(struct ioatdma_device *ioat_dma)
> op = IOAT_OP_XOR_VAL;
>
> xor_val_result = 0;
> - for (i = 0; i < IOAT_NUM_SRC_TEST + 1; i++)
> - dma_srcs[i] = DMA_ERROR_CODE;
> for (i = 0; i < IOAT_NUM_SRC_TEST + 1; i++) {
> dma_srcs[i] = dma_map_page(dev, xor_val_srcs[i], 0, PAGE_SIZE,
> DMA_TO_DEVICE);
> @@ -1017,18 +1011,14 @@ static int ioat_xor_val_self_test(struct ioatdma_device *ioat_dma)
> goto free_resources;
> dma_unmap:
> if (op == IOAT_OP_XOR) {
> - if (dest_dma != DMA_ERROR_CODE)
> - dma_unmap_page(dev, dest_dma, PAGE_SIZE,
> - DMA_FROM_DEVICE);
> - for (i = 0; i < IOAT_NUM_SRC_TEST; i++)
> - if (dma_srcs[i] != DMA_ERROR_CODE)
> - dma_unmap_page(dev, dma_srcs[i], PAGE_SIZE,
> - DMA_TO_DEVICE);
> + while (--i >= 0)
> + dma_unmap_page(dev, dma_srcs[i], PAGE_SIZE,
> + DMA_TO_DEVICE);
> + dma_unmap_page(dev, dest_dma, PAGE_SIZE, DMA_FROM_DEVICE);
> } else if (op == IOAT_OP_XOR_VAL) {
> - for (i = 0; i < IOAT_NUM_SRC_TEST + 1; i++)
> - if (dma_srcs[i] != DMA_ERROR_CODE)
> - dma_unmap_page(dev, dma_srcs[i], PAGE_SIZE,
> - DMA_TO_DEVICE);
> + while (--i >= 0)
> + dma_unmap_page(dev, dma_srcs[i], PAGE_SIZE,
> + DMA_TO_DEVICE);
> }
> free_resources:
> dma->device_free_chan_resources(dma_chan);
>

2017-06-09 12:20:50

by Geert Uytterhoeven

[permalink] [raw]
Subject: Re: [PATCH 33/44] openrisc: remove arch-specific dma_supported implementation

Hi Christoph,

On Thu, Jun 8, 2017 at 3:25 PM, Christoph Hellwig <[email protected]> wrote:
> This implementation is simply bogus - hexagon only has a simple

openrisc?

> direct mapped DMA implementation and thus doesn't care about the
> address.
>
> Signed-off-by: Christoph Hellwig <[email protected]>
> ---
> arch/openrisc/include/asm/dma-mapping.h | 7 -------

Gr{oetje,eeting}s,

Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- [email protected]

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
-- Linus Torvalds

2017-06-11 02:38:18

by Konrad Rzeszutek Wilk

[permalink] [raw]
Subject: Re: [PATCH 07/44] xen-swiotlb: consolidate xen_swiotlb_dma_ops

On Thu, Jun 08, 2017 at 03:25:32PM +0200, Christoph Hellwig wrote:
> ARM and x86 had duplicated versions of the dma_ops structure, the
> only difference is that x86 hasn't wired up the set_dma_mask,
> mmap, and get_sgtable ops yet. On x86 all of them are identical
> to the generic version, so they aren't needed but harmless.
>
> All the symbols used only for xen_swiotlb_dma_ops can now be marked
> static as well.
>
> Signed-off-by: Christoph Hellwig <[email protected]>
> ---
> arch/arm/xen/mm.c | 17 --------
> arch/x86/xen/pci-swiotlb-xen.c | 14 -------
> drivers/xen/swiotlb-xen.c | 93 ++++++++++++++++++++++--------------------
> include/xen/swiotlb-xen.h | 62 +---------------------------
> 4 files changed, 49 insertions(+), 137 deletions(-)

Yeeey!

Reviewed-by: Konrad Rzeszutek Wilk <[email protected]>

2017-06-11 02:38:37

by Konrad Rzeszutek Wilk

[permalink] [raw]
Subject: Re: [PATCH 08/44] xen-swiotlb: implement ->mapping_error

On Thu, Jun 08, 2017 at 03:25:33PM +0200, Christoph Hellwig wrote:
> DMA_ERROR_CODE is going to go away, so don't rely on it.

Reviewed-by: Konrad Rzeszutek Wilk <[email protected]>

2017-06-12 08:33:45

by Andreas Larsson

[permalink] [raw]
Subject: Re: [PATCH 27/44] sparc: remove leon_dma_ops

On 2017-06-08 15:25, Christoph Hellwig wrote:
> We can just use pci32_dma_ops.
>
> Btw, given that leon is 32-bit and appears to be PCI based, do we even
> need the special case for it in get_arch_dma_ops at all?

Hi!

Yes, it is needed. LEON systems are AMBA bus based. The common case here
is DMA over AMBA buses. Some LEON systems have PCI bridges, but in
general CONFIG_PCI is not a given.

--
Andreas Larsson
Software Engineer
Cobham Gaisler

2017-06-13 12:35:46

by Thierry Reding

[permalink] [raw]
Subject: Re: [PATCH 01/44] firmware/ivc: use dma_mapping_error

On Thu, Jun 08, 2017 at 03:25:26PM +0200, Christoph Hellwig wrote:
> DMA_ERROR_CODE is not supposed to be used by drivers.
>
> Signed-off-by: Christoph Hellwig <[email protected]>
> ---
> drivers/firmware/tegra/ivc.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)

Acked-by: Thierry Reding <[email protected]>


2017-06-14 08:28:58

by Vinod Koul

[permalink] [raw]
Subject: Re: [PATCH 03/44] dmaengine: ioat: don't use DMA_ERROR_CODE

On Thu, Jun 08, 2017 at 03:25:28PM +0200, Christoph Hellwig wrote:
> DMA_ERROR_CODE is not a public API and will go away. Instead properly
> unwind based on the loop counter.

Acked-by: Vinod Koul <[email protected]>

--
~Vinod

2017-06-14 09:18:09

by Michael Ellerman

[permalink] [raw]
Subject: Re: [PATCH 21/44] powerpc: implement ->mapping_error

Christoph Hellwig <[email protected]> writes:

> DMA_ERROR_CODE is going to go away, so don't rely on it. Instead
> define a ->mapping_error method for all IOMMU based dma operation
> instances. The direct ops don't ever return an error and don't
> need a ->mapping_error method.
>
> Signed-off-by: Christoph Hellwig <[email protected]>
> ---
> arch/powerpc/include/asm/dma-mapping.h | 4 ----
> arch/powerpc/include/asm/iommu.h | 4 ++++
> arch/powerpc/kernel/dma-iommu.c | 6 ++++++
> arch/powerpc/kernel/iommu.c | 28 ++++++++++++++--------------
> arch/powerpc/platforms/cell/iommu.c | 1 +
> arch/powerpc/platforms/pseries/vio.c | 3 ++-
> 6 files changed, 27 insertions(+), 19 deletions(-)

I also see:

arch/powerpc/kernel/dma.c:const struct dma_map_ops dma_direct_ops = {

Which you mentioned can't fail.

arch/powerpc/platforms/pseries/ibmebus.c:static const struct dma_map_ops ibmebus_dma_ops = {

Which can't fail.

And:

arch/powerpc/platforms/powernv/npu-dma.c:static const struct dma_map_ops dma_npu_ops = {
arch/powerpc/platforms/ps3/system-bus.c:static const struct dma_map_ops ps3_sb_dma_ops = {
arch/powerpc/platforms/ps3/system-bus.c:static const struct dma_map_ops ps3_ioc0_dma_ops = {

All of which look like they definitely can fail, but return 0 on error
and don't implement ->mapping_error.

So I guess I'm acking this and adding a TODO to fix up the NPU code at
least; the ps3 code is probably better left alone these days.
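
(For the NPU case that would presumably be something as simple as the
below - untested - given that it returns 0 on failure:

	static int dma_npu_mapping_error(struct device *dev, dma_addr_t dma_addr)
	{
		/* dma_npu_map_page()/_sg() return 0 when the mapping fails */
		return dma_addr == 0;
	}

wired up as .mapping_error in dma_npu_ops.)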

Acked-by: Michael Ellerman <[email protected]>

cheers

2017-06-16 00:19:21

by Richard Kuo

[permalink] [raw]
Subject: Re: [PATCH 17/44] hexagon: switch to use ->mapping_error for error reporting

On Thu, Jun 08, 2017 at 03:25:42PM +0200, Christoph Hellwig wrote:
> Signed-off-by: Christoph Hellwig <[email protected]>
> ---
> arch/hexagon/include/asm/dma-mapping.h | 2 --
> arch/hexagon/kernel/dma.c | 12 +++++++++---
> arch/hexagon/kernel/hexagon_ksyms.c | 1 -
> 3 files changed, 9 insertions(+), 6 deletions(-)
>

Acked-by: Richard Kuo <[email protected]>

--
Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project

2017-06-16 00:20:30

by Richard Kuo

[permalink] [raw]
Subject: Re: [PATCH 31/44] hexagon: remove arch-specific dma_supported implementation

On Thu, Jun 08, 2017 at 03:25:56PM +0200, Christoph Hellwig wrote:
> This implementation is simply bogus - hexagon only has a simple
> direct mapped DMA implementation and thus doesn't care about the
> address.
>
> Signed-off-by: Christoph Hellwig <[email protected]>
> ---
> arch/hexagon/include/asm/dma-mapping.h | 2 --
> arch/hexagon/kernel/dma.c | 9 ---------
> 2 files changed, 11 deletions(-)
>

Acked-by: Richard Kuo <[email protected]>

--
Qualcomm Innovation Center, Inc.
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project

2017-06-16 08:37:06

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH 06/44] iommu/dma: don't rely on DMA_ERROR_CODE

On Thu, Jun 08, 2017 at 02:59:07PM +0100, Robin Murphy wrote:
> Hi Christoph,
>
> On 08/06/17 14:25, Christoph Hellwig wrote:
> > DMA_ERROR_CODE is not a public API and will go away soon. The dma-iommu
> > driver already implements a proper ->mapping_error method, so it's only
> > using the value internally. Add a new local define using the value that
> > arm64, the only current user of dma-iommu, uses.
>
> It would be fine to just use 0, since dma-iommu already makes sure that
> that will never be allocated for a valid DMA address.

I'll change it to 0.

> Otherwise, looks good!

Can you give me a formal ACK or Reviewed-by: ?

2017-06-16 08:39:12

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH 33/44] openrisc: remove arch-specific dma_supported implementation

On Fri, Jun 09, 2017 at 02:20:42PM +0200, Geert Uytterhoeven wrote:
> Hi Christoph,
>
> On Thu, Jun 8, 2017 at 3:25 PM, Christoph Hellwig <[email protected]> wrote:
> > This implementation is simply bogus - hexagon only has a simple
>
> openrisc?

Yeah.

2017-06-16 08:43:35

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH 25/44] arm: implement ->mapping_error

On Thu, Jun 08, 2017 at 03:43:14PM +0100, Russell King - ARM Linux wrote:
> On Thu, Jun 08, 2017 at 03:25:50PM +0200, Christoph Hellwig wrote:
> > +static int dmabounce_mapping_error(struct device *dev, dma_addr_t dma_addr)
> > +{
> > + if (dev->archdata.dmabounce)
> > + return 0;
>
> I'm not convinced that we need this check here:
>
> dev->archdata.dmabounce = device_info;
> set_dma_ops(dev, &dmabounce_ops);
>
> There shouldn't be any chance of dev->archdata.dmabounce being NULL if
> the dmabounce_ops has been set as the current device DMA ops. So I
> think that test can be killed.

Ok, I'll fix it up.

2017-06-16 08:45:48

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH 27/44] sparc: remove leon_dma_ops

On Mon, Jun 12, 2017 at 10:06:26AM +0200, Andreas Larsson wrote:
> Yes, it is needed. LEON systems are AMBA bus based. The common case here is
> DMA over AMBA buses. Some LEON systems have PCI bridges, but in general
> CONFIG_PCI is not a given.

Ok, and even for AMBA we use the pci ops, so I'll leave it in and drop
the comment from the commit.

2017-06-16 08:47:07

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH 28/44] sparc: remove arch specific dma_supported implementations

On Fri, Jun 09, 2017 at 12:22:48AM +1000, Julian Calaby wrote:
> I'm guessing there's a few places that have DMA ops but DMA isn't
> actually supported. Why not have a common method for this, maybe
> "dma_not_supported"?

It's not common at all. Except for sbus, all dma API users first
call set_dma_mask, which ends up in the dma_supported call. sbus
is the weird outlier here.
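
(Roughly, the generic path after this series is, slightly simplified:

	int dma_set_mask(struct device *dev, u64 mask)
	{
		if (!dev->dma_mask || !dma_supported(dev, mask))
			return -EIO;
		*dev->dma_mask = mask;
		return 0;
	}

so an instance whose ->dma_supported always returns 0 makes every mask
setting fail, while leaving the method unimplemented means always
supported.)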

2017-06-20 09:19:13

by Daniel Vetter

[permalink] [raw]
Subject: Re: clean up and modularize arch dma_mapping interface

On Thu, Jun 08, 2017 at 03:25:25PM +0200, Christoph Hellwig wrote:
> Hi all,
>
> for a while we have a generic implementation of the dma mapping routines
> that call into per-arch or per-device operations. But right now there
> still are various bits in the interfaces where we don't clearly operate
> on these ops. This series tries to clean up a lot of those (not all of
> them yet, but the series is big enough). It gets rid of the DMA_ERROR_CODE
> way of signaling failures of the mapping routines from the
> implementations to the generic code (and cleans up various drivers that
> were incorrectly using it), and gets rid of the ->set_dma_mask routine
> in favor of relying on the ->dma_capable method that can be used in
> the same way, but which requires less code duplication.
>
> Btw, we don't seem to have a tree for the ever-growing amount of common
> dma mapping code, and given that I have a fair amount of work in that
> area all over the tree on my plate I'd like to start one. Any good
> reason not to? Anyone willing to volunteer as co-maintainer?
>
> The whole series is also available in git:
>
> git://git.infradead.org/users/hch/misc.git dma-map

Ack for the 2 drm patches, but I can also pick them up through drm-misc if
you prefer that (but then it'll be 4.14).
-Daniel

>
> Gitweb:
>
> http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/dma-map
> _______________________________________________
> dri-devel mailing list
> [email protected]
> https://lists.freedesktop.org/mailman/listinfo/dri-devel

--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

2017-06-20 13:17:17

by Christoph Hellwig

[permalink] [raw]
Subject: Re: clean up and modularize arch dma_mapping interface

On Tue, Jun 20, 2017 at 11:19:02AM +0200, Daniel Vetter wrote:
> Ack for the 2 drm patches, but I can also pick them up through drm-misc if
> you prefer that (but then it'll be 4.14).

Nah, I'll plan to set up a dma-mapping tree so that we'll have a common
place for dma-mapping work.