2024-02-09 16:54:27

by Robin Murphy

Subject: [PATCH v3 0/7] dma-mapping: Simplify arch_setup_dma_ops()

v2: https://lore.kernel.org/linux-iommu/[email protected]/

Hi all,

Here's v3, rebased and fixing the thinko from v2, so unless anything
else has changed behind my back I hope it's good to go (via the IOMMU
tree, as mentioned before).

Thanks,
Robin.


Robin Murphy (7):
OF: Retire dma-ranges mask workaround
OF: Simplify DMA range calculations
ACPI/IORT: Handle memory address size limits as limits
dma-mapping: Add helpers for dma_range_map bounds
iommu/dma: Make limit checks self-contained
iommu/dma: Centralise iommu_setup_dma_ops()
dma-mapping: Simplify arch_setup_dma_ops()

arch/arc/mm/dma.c | 3 +--
arch/arm/mm/dma-mapping-nommu.c | 3 +--
arch/arm/mm/dma-mapping.c | 16 +++++++------
arch/arm64/mm/dma-mapping.c | 5 +---
arch/loongarch/kernel/dma.c | 9 ++-----
arch/mips/mm/dma-noncoherent.c | 3 +--
arch/riscv/mm/dma-noncoherent.c | 3 +--
drivers/acpi/arm64/dma.c | 17 ++++---------
drivers/acpi/arm64/iort.c | 20 ++++++++--------
drivers/acpi/scan.c | 7 +-----
drivers/hv/hv_common.c | 6 +----
drivers/iommu/amd/iommu.c | 8 -------
drivers/iommu/dma-iommu.c | 39 ++++++++++++------------------
drivers/iommu/dma-iommu.h | 14 +++++------
drivers/iommu/intel/iommu.c | 7 ------
drivers/iommu/iommu.c | 20 ++++++----------
drivers/iommu/s390-iommu.c | 6 -----
drivers/iommu/virtio-iommu.c | 10 --------
drivers/of/device.c | 42 ++++++---------------------------
include/linux/acpi_iort.h | 4 ++--
include/linux/dma-direct.h | 18 ++++++++++++++
include/linux/dma-map-ops.h | 6 ++---
include/linux/iommu.h | 7 ------
23 files changed, 89 insertions(+), 184 deletions(-)

--
2.39.2.101.g768bb238c484.dirty



2024-02-09 16:55:53

by Robin Murphy

Subject: [PATCH v3 4/7] dma-mapping: Add helpers for dma_range_map bounds

Several places want to compute the lower and/or upper bounds of a
dma_range_map, so let's factor that out into reusable helpers.
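
As a quick illustration (not part of the patch, and with made-up
values), a hypothetical two-entry map reduces to its inclusive bounds
like so:

	static const struct bus_dma_region example_map[] = {
		{ .dma_start = 0x40000000, .size = SZ_1G },
		{ .dma_start = 0xc0000000, .size = SZ_256M },
		{ } /* sentinel: size == 0 terminates the walk */
	};

	/* dma_range_map_min(example_map) == 0x40000000 */
	/* dma_range_map_max(example_map) == 0xcfffffff */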

Acked-by: Rob Herring <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: Jason Gunthorpe <[email protected]>
Signed-off-by: Robin Murphy <[email protected]>
---
v2: fix warning for 32-bit builds
---
arch/loongarch/kernel/dma.c | 9 ++-------
drivers/acpi/arm64/dma.c | 8 +-------
drivers/of/device.c | 11 ++---------
include/linux/dma-direct.h | 18 ++++++++++++++++++
4 files changed, 23 insertions(+), 23 deletions(-)

diff --git a/arch/loongarch/kernel/dma.c b/arch/loongarch/kernel/dma.c
index 7a9c6a9dd2d0..429555fb4e13 100644
--- a/arch/loongarch/kernel/dma.c
+++ b/arch/loongarch/kernel/dma.c
@@ -8,17 +8,12 @@
void acpi_arch_dma_setup(struct device *dev)
{
int ret;
- u64 mask, end = 0;
+ u64 mask, end;
const struct bus_dma_region *map = NULL;

ret = acpi_dma_get_range(dev, &map);
if (!ret && map) {
- const struct bus_dma_region *r = map;
-
- for (end = 0; r->size; r++) {
- if (r->dma_start + r->size - 1 > end)
- end = r->dma_start + r->size - 1;
- }
+ end = dma_range_map_max(map);

mask = DMA_BIT_MASK(ilog2(end) + 1);
dev->bus_dma_limit = end;
diff --git a/drivers/acpi/arm64/dma.c b/drivers/acpi/arm64/dma.c
index b98a149f8d50..52b2abf88689 100644
--- a/drivers/acpi/arm64/dma.c
+++ b/drivers/acpi/arm64/dma.c
@@ -28,13 +28,7 @@ void acpi_arch_dma_setup(struct device *dev)

ret = acpi_dma_get_range(dev, &map);
if (!ret && map) {
- const struct bus_dma_region *r = map;
-
- for (end = 0; r->size; r++) {
- if (r->dma_start + r->size - 1 > end)
- end = r->dma_start + r->size - 1;
- }
-
+ end = dma_range_map_max(map);
dev->dma_range_map = map;
}

diff --git a/drivers/of/device.c b/drivers/of/device.c
index 841ccd3a19d1..9e7963972fa7 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -117,16 +117,9 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
if (!force_dma)
return ret == -ENODEV ? 0 : ret;
} else {
- const struct bus_dma_region *r = map;
-
/* Determine the overall bounds of all DMA regions */
- for (dma_start = ~0; r->size; r++) {
- /* Take lower and upper limits */
- if (r->dma_start < dma_start)
- dma_start = r->dma_start;
- if (r->dma_start + r->size > end)
- end = r->dma_start + r->size;
- }
+ dma_start = dma_range_map_min(map);
+ end = dma_range_map_max(map);
}

/*
diff --git a/include/linux/dma-direct.h b/include/linux/dma-direct.h
index 3eb3589ff43e..edbe13d00776 100644
--- a/include/linux/dma-direct.h
+++ b/include/linux/dma-direct.h
@@ -54,6 +54,24 @@ static inline phys_addr_t translate_dma_to_phys(struct device *dev,
return (phys_addr_t)-1;
}

+static inline dma_addr_t dma_range_map_min(const struct bus_dma_region *map)
+{
+ dma_addr_t ret = (dma_addr_t)U64_MAX;
+
+ for (; map->size; map++)
+ ret = min(ret, map->dma_start);
+ return ret;
+}
+
+static inline dma_addr_t dma_range_map_max(const struct bus_dma_region *map)
+{
+ dma_addr_t ret = 0;
+
+ for (; map->size; map++)
+ ret = max(ret, map->dma_start + map->size - 1);
+ return ret;
+}
+
#ifdef CONFIG_ARCH_HAS_PHYS_TO_DMA
#include <asm/dma-direct.h>
#ifndef phys_to_dma_unencrypted
--
2.39.2.101.g768bb238c484.dirty


2024-02-09 16:56:11

by Robin Murphy

Subject: [PATCH v3 5/7] iommu/dma: Make limit checks self-contained

It's now easy to retrieve the device's DMA limits if we want to check
them against the domain aperture, so do that ourselves instead of
relying on them being passed through the callchain.
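
For reference, the check reduces to a standard overlap test between
two inclusive intervals: the device's usable range taken from its
dma_range_map, and the domain aperture. A minimal sketch (the names
are illustrative, not from the patch):

	/* true iff [lo, hi] and [ap_lo, ap_hi] share an address */
	static bool ranges_overlap(u64 lo, u64 hi, u64 ap_lo, u64 ap_hi)
	{
		return lo <= ap_hi && hi >= ap_lo;
	}

The patch warns and bails out when this test fails.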

Reviewed-by: Jason Gunthorpe <[email protected]>
Signed-off-by: Robin Murphy <[email protected]>
---
drivers/iommu/dma-iommu.c | 21 +++++++++------------
1 file changed, 9 insertions(+), 12 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index dbe3c225e0d5..52126f73f690 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -660,19 +660,16 @@ static void iommu_dma_init_options(struct iommu_dma_options *options,
/**
* iommu_dma_init_domain - Initialise a DMA mapping domain
* @domain: IOMMU domain previously prepared by iommu_get_dma_cookie()
- * @base: IOVA at which the mappable address space starts
- * @limit: Last address of the IOVA space
* @dev: Device the domain is being initialised for
*
- * @base and @limit + 1 should be exact multiples of IOMMU page granularity to
- * avoid rounding surprises. If necessary, we reserve the page at address 0
+ * If the geometry and dma_range_map include address 0, we reserve that page
* to ensure it is an invalid IOVA. It is safe to reinitialise a domain, but
* any change which could make prior IOVAs invalid will fail.
*/
-static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
- dma_addr_t limit, struct device *dev)
+static int iommu_dma_init_domain(struct iommu_domain *domain, struct device *dev)
{
struct iommu_dma_cookie *cookie = domain->iova_cookie;
+ const struct bus_dma_region *map = dev->dma_range_map;
unsigned long order, base_pfn;
struct iova_domain *iovad;
int ret;
@@ -684,18 +681,18 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,

/* Use the smallest supported page size for IOVA granularity */
order = __ffs(domain->pgsize_bitmap);
- base_pfn = max_t(unsigned long, 1, base >> order);
+ base_pfn = 1;

/* Check the domain allows at least some access to the device... */
- if (domain->geometry.force_aperture) {
+ if (map) {
+ dma_addr_t base = dma_range_map_min(map);
if (base > domain->geometry.aperture_end ||
- limit < domain->geometry.aperture_start) {
+ dma_range_map_max(map) < domain->geometry.aperture_start) {
pr_warn("specified DMA range outside IOMMU capability\n");
return -EFAULT;
}
/* ...then finally give it a kicking to make sure it fits */
- base_pfn = max_t(unsigned long, base_pfn,
- domain->geometry.aperture_start >> order);
+ base_pfn = max(base, domain->geometry.aperture_start) >> order;
}

/* start_pfn is always nonzero for an already-initialised domain */
@@ -1746,7 +1743,7 @@ void iommu_setup_dma_ops(struct device *dev, u64 dma_base, u64 dma_limit)
* underlying IOMMU driver needs to support via the dma-iommu layer.
*/
if (iommu_is_dma_domain(domain)) {
- if (iommu_dma_init_domain(domain, dma_base, dma_limit, dev))
+ if (iommu_dma_init_domain(domain, dev))
goto out_err;
dev->dma_ops = &iommu_dma_ops;
}
--
2.39.2.101.g768bb238c484.dirty


2024-02-09 16:57:37

by Robin Murphy

Subject: [PATCH v3 2/7] OF: Simplify DMA range calculations

Juggling start, end, and size values for a range is somewhat redundant
and a little hard to follow. Consolidate down to just using inclusive
start and end, which saves us worrying about size overflows for full
64-bit ranges (note that passing a potentially-overflowed value through
to arch_setup_dma_ops() is benign for all current implementations, and
this is working towards removing that anyway).
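
Concretely, a map covering the full 64-bit space has an inclusive end
of U64_MAX, but its size would be U64_MAX + 1, which wraps to 0 in a
u64. A hedged sketch of the failure mode being sidestepped:

	u64 start = 0, end = U64_MAX;	/* full 64-bit range */
	u64 size = end - start + 1;	/* wraps to 0: unusable */
	u64 last = end;			/* inclusive end: always representable */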

Acked-by: Rob Herring <[email protected]>
Reviewed-by: Jason Gunthorpe <[email protected]>
Signed-off-by: Robin Murphy <[email protected]>
---
drivers/of/device.c | 19 ++++++++-----------
1 file changed, 8 insertions(+), 11 deletions(-)

diff --git a/drivers/of/device.c b/drivers/of/device.c
index a988bee2ee5a..841ccd3a19d1 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -96,7 +96,7 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
const struct bus_dma_region *map = NULL;
struct device_node *bus_np;
u64 dma_start = 0;
- u64 mask, end, size = 0;
+ u64 mask, end = 0;
bool coherent;
int iommu_ret;
int ret;
@@ -118,17 +118,15 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
return ret == -ENODEV ? 0 : ret;
} else {
const struct bus_dma_region *r = map;
- u64 dma_end = 0;

/* Determine the overall bounds of all DMA regions */
for (dma_start = ~0; r->size; r++) {
/* Take lower and upper limits */
if (r->dma_start < dma_start)
dma_start = r->dma_start;
- if (r->dma_start + r->size > dma_end)
- dma_end = r->dma_start + r->size;
+ if (r->dma_start + r->size > end)
+ end = r->dma_start + r->size;
}
- size = dma_end - dma_start;
}

/*
@@ -142,16 +140,15 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
dev->dma_mask = &dev->coherent_dma_mask;
}

- if (!size && dev->coherent_dma_mask)
- size = max(dev->coherent_dma_mask, dev->coherent_dma_mask + 1);
- else if (!size)
- size = 1ULL << 32;
+ if (!end && dev->coherent_dma_mask)
+ end = dev->coherent_dma_mask;
+ else if (!end)
+ end = (1ULL << 32) - 1;

/*
* Limit coherent and dma mask based on size and default mask
* set by the driver.
*/
- end = dma_start + size - 1;
mask = DMA_BIT_MASK(ilog2(end) + 1);
dev->coherent_dma_mask &= mask;
*dev->dma_mask &= mask;
@@ -185,7 +182,7 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
} else
dev_dbg(dev, "device is behind an iommu\n");

- arch_setup_dma_ops(dev, dma_start, size, coherent);
+ arch_setup_dma_ops(dev, dma_start, end - dma_start + 1, coherent);

if (iommu_ret)
of_dma_set_restricted_buffer(dev, np);
--
2.39.2.101.g768bb238c484.dirty


2024-02-09 17:16:57

by Robin Murphy

Subject: [PATCH v3 1/7] OF: Retire dma-ranges mask workaround

The fixup adding 1 to the dma-ranges size may have been for the benefit
of some early AMD Seattle DTs, or may have merely been a just-in-case,
but either way anyone who might have deserved to get the message has
hopefully seen the warning in the 9 years we've had it there. The modern
dma_range_map mechanism should happily handle odd-sized ranges with no
ill effect, so there's little need to care anyway now. Clean it up.

Acked-by: Rob Herring <[email protected]>
Signed-off-by: Robin Murphy <[email protected]>
---
v2: Tweak commit message
---
drivers/of/device.c | 16 ----------------
1 file changed, 16 deletions(-)

diff --git a/drivers/of/device.c b/drivers/of/device.c
index de89f9906375..a988bee2ee5a 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -129,22 +129,6 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
dma_end = r->dma_start + r->size;
}
size = dma_end - dma_start;
-
- /*
- * Add a work around to treat the size as mask + 1 in case
- * it is defined in DT as a mask.
- */
- if (size & 1) {
- dev_warn(dev, "Invalid size 0x%llx for dma-range(s)\n",
- size);
- size = size + 1;
- }
-
- if (!size) {
- dev_err(dev, "Adjusted size 0x%llx invalid\n", size);
- kfree(map);
- return -EINVAL;
- }
}

/*
--
2.39.2.101.g768bb238c484.dirty


2024-02-09 17:17:44

by Robin Murphy

Subject: [PATCH v3 3/7] ACPI/IORT: Handle memory address size limits as limits

Return the Root Complex/Named Component memory address size limit as an
inclusive limit value, rather than an exclusive size. This saves having
to fudge an off-by-one for the 64-bit case, and simplifies our caller.
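
For example, a memory_address_limit of 32 now yields 0xffffffff
directly, and the 64-bit case returns U64_MAX without ever evaluating
the undefined expression 1ULL << 64. As a hedged restatement of the
conversion (an illustrative helper, not code from the patch):

	static u64 addr_bits_to_limit(unsigned int bits)
	{
		return bits >= 64 ? U64_MAX : (1ULL << bits) - 1;
	}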

Reviewed-by: Jason Gunthorpe <[email protected]>
Signed-off-by: Robin Murphy <[email protected]>
---
v2: Avoid undefined shifts (grr...)
---
drivers/acpi/arm64/dma.c | 9 +++------
drivers/acpi/arm64/iort.c | 20 ++++++++++----------
include/linux/acpi_iort.h | 4 ++--
3 files changed, 15 insertions(+), 18 deletions(-)

diff --git a/drivers/acpi/arm64/dma.c b/drivers/acpi/arm64/dma.c
index 93d796531af3..b98a149f8d50 100644
--- a/drivers/acpi/arm64/dma.c
+++ b/drivers/acpi/arm64/dma.c
@@ -8,7 +8,6 @@ void acpi_arch_dma_setup(struct device *dev)
{
int ret;
u64 end, mask;
- u64 size = 0;
const struct bus_dma_region *map = NULL;

/*
@@ -23,9 +22,9 @@ void acpi_arch_dma_setup(struct device *dev)
}

if (dev->coherent_dma_mask)
- size = max(dev->coherent_dma_mask, dev->coherent_dma_mask + 1);
+ end = dev->coherent_dma_mask;
else
- size = 1ULL << 32;
+ end = (1ULL << 32) - 1;

ret = acpi_dma_get_range(dev, &map);
if (!ret && map) {
@@ -36,18 +35,16 @@ void acpi_arch_dma_setup(struct device *dev)
end = r->dma_start + r->size - 1;
}

- size = end + 1;
dev->dma_range_map = map;
}

if (ret == -ENODEV)
- ret = iort_dma_get_ranges(dev, &size);
+ ret = iort_dma_get_ranges(dev, &end);
if (!ret) {
/*
* Limit coherent and dma mask based on size retrieved from
* firmware.
*/
- end = size - 1;
mask = DMA_BIT_MASK(ilog2(end) + 1);
dev->bus_dma_limit = end;
dev->coherent_dma_mask = min(dev->coherent_dma_mask, mask);
diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c
index 6496ff5a6ba2..c0b1c2c19444 100644
--- a/drivers/acpi/arm64/iort.c
+++ b/drivers/acpi/arm64/iort.c
@@ -1367,7 +1367,7 @@ int iort_iommu_configure_id(struct device *dev, const u32 *input_id)
{ return -ENODEV; }
#endif

-static int nc_dma_get_range(struct device *dev, u64 *size)
+static int nc_dma_get_range(struct device *dev, u64 *limit)
{
struct acpi_iort_node *node;
struct acpi_iort_named_component *ncomp;
@@ -1384,13 +1384,13 @@ static int nc_dma_get_range(struct device *dev, u64 *size)
return -EINVAL;
}

- *size = ncomp->memory_address_limit >= 64 ? U64_MAX :
- 1ULL<<ncomp->memory_address_limit;
+ *limit = ncomp->memory_address_limit >= 64 ? U64_MAX :
+ (1ULL << ncomp->memory_address_limit) - 1;

return 0;
}

-static int rc_dma_get_range(struct device *dev, u64 *size)
+static int rc_dma_get_range(struct device *dev, u64 *limit)
{
struct acpi_iort_node *node;
struct acpi_iort_root_complex *rc;
@@ -1408,8 +1408,8 @@ static int rc_dma_get_range(struct device *dev, u64 *size)
return -EINVAL;
}

- *size = rc->memory_address_limit >= 64 ? U64_MAX :
- 1ULL<<rc->memory_address_limit;
+ *limit = rc->memory_address_limit >= 64 ? U64_MAX :
+ (1ULL << rc->memory_address_limit) - 1;

return 0;
}
@@ -1417,16 +1417,16 @@ static int rc_dma_get_range(struct device *dev, u64 *size)
/**
* iort_dma_get_ranges() - Look up DMA addressing limit for the device
* @dev: device to lookup
- * @size: DMA range size result pointer
+ * @limit: DMA limit result pointer
*
* Return: 0 on success, an error otherwise.
*/
-int iort_dma_get_ranges(struct device *dev, u64 *size)
+int iort_dma_get_ranges(struct device *dev, u64 *limit)
{
if (dev_is_pci(dev))
- return rc_dma_get_range(dev, size);
+ return rc_dma_get_range(dev, limit);
else
- return nc_dma_get_range(dev, size);
+ return nc_dma_get_range(dev, limit);
}

static void __init acpi_iort_register_irq(int hwirq, const char *name,
diff --git a/include/linux/acpi_iort.h b/include/linux/acpi_iort.h
index 1cb65592c95d..d4ed5622cf2b 100644
--- a/include/linux/acpi_iort.h
+++ b/include/linux/acpi_iort.h
@@ -39,7 +39,7 @@ void iort_get_rmr_sids(struct fwnode_handle *iommu_fwnode,
void iort_put_rmr_sids(struct fwnode_handle *iommu_fwnode,
struct list_head *head);
/* IOMMU interface */
-int iort_dma_get_ranges(struct device *dev, u64 *size);
+int iort_dma_get_ranges(struct device *dev, u64 *limit);
int iort_iommu_configure_id(struct device *dev, const u32 *id_in);
void iort_iommu_get_resv_regions(struct device *dev, struct list_head *head);
phys_addr_t acpi_iort_dma_get_max_cpu_address(void);
@@ -55,7 +55,7 @@ void iort_get_rmr_sids(struct fwnode_handle *iommu_fwnode, struct list_head *hea
static inline
void iort_put_rmr_sids(struct fwnode_handle *iommu_fwnode, struct list_head *head) { }
/* IOMMU interface */
-static inline int iort_dma_get_ranges(struct device *dev, u64 *size)
+static inline int iort_dma_get_ranges(struct device *dev, u64 *limit)
{ return -ENODEV; }
static inline int iort_iommu_configure_id(struct device *dev, const u32 *id_in)
{ return -ENODEV; }
--
2.39.2.101.g768bb238c484.dirty


2024-02-09 17:18:50

by Robin Murphy

Subject: [PATCH v3 6/7] iommu/dma: Centralise iommu_setup_dma_ops()

It's somewhat hard to see, but arm64's arch_setup_dma_ops() should only
ever call iommu_setup_dma_ops() after a successful iommu_probe_device(),
which means there should be no harm in achieving the same order of
operations by running it off the back of iommu_probe_device() itself.
This then puts it in line with the x86 and s390 .probe_finalize bodges,
letting us pull it all into the main flow properly. As a bonus this lets
us fold in and de-scope the PCI workaround setup as well.

At this point we can also then pull the call up inside the group mutex,
and avoid having to think about whether iommu_group_store_type() could
theoretically race and free the domain if iommu_setup_dma_ops() ran just
*before* iommu_device_use_default_domain() claims it... Furthermore we
replace one .probe_finalize call completely, since the only remaining
implementations are now one which only needs to run once for the initial
boot-time probe, and two which themselves render that path unreachable.

This leaves us a big step closer to realistically being able to unpick
the variety of different things that iommu_setup_dma_ops() has been
muddling together, and further streamline iommu-dma into core API flows
in future.
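
A simplified sketch of the resulting probe flow, per the iommu.c hunk
below (error paths elided):

	/*
	 * __iommu_probe_device(dev)
	 *	mutex_lock(&group->mutex);
	 *	... attach dev to its group / default domain ...
	 *	if (group->default_domain)
	 *		iommu_setup_dma_ops(dev);  <-- replaces .probe_finalize
	 *	mutex_unlock(&group->mutex);
	 */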

Signed-off-by: Robin Murphy <[email protected]>
---
v2: Shuffle around to make sure the iommu_group_do_probe_finalize() case
is covered as well, with bonus side-effects as above.
v3: *Really* do that, remembering the other two probe_finalize sites too.
---
arch/arm64/mm/dma-mapping.c | 2 --
drivers/iommu/amd/iommu.c | 8 --------
drivers/iommu/dma-iommu.c | 18 ++++++------------
drivers/iommu/dma-iommu.h | 14 ++++++--------
drivers/iommu/intel/iommu.c | 7 -------
drivers/iommu/iommu.c | 20 +++++++-------------
drivers/iommu/s390-iommu.c | 6 ------
drivers/iommu/virtio-iommu.c | 10 ----------
include/linux/iommu.h | 7 -------
9 files changed, 19 insertions(+), 73 deletions(-)

diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index 61886e43e3a1..313d8938a2f0 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -58,8 +58,6 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
ARCH_DMA_MINALIGN, cls);

dev->dma_coherent = coherent;
- if (device_iommu_mapped(dev))
- iommu_setup_dma_ops(dev, dma_base, dma_base + size - 1);

xen_setup_dma_ops(dev);
}
diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index 4283dd8191f0..00c0d5de1225 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -1968,13 +1968,6 @@ static struct iommu_device *amd_iommu_probe_device(struct device *dev)
return iommu_dev;
}

-static void amd_iommu_probe_finalize(struct device *dev)
-{
- /* Domains are initialized for this device - have a look what we ended up with */
- set_dma_ops(dev, NULL);
- iommu_setup_dma_ops(dev, 0, U64_MAX);
-}
-
static void amd_iommu_release_device(struct device *dev)
{
struct amd_iommu *iommu;
@@ -2626,7 +2619,6 @@ const struct iommu_ops amd_iommu_ops = {
.domain_alloc_user = amd_iommu_domain_alloc_user,
.probe_device = amd_iommu_probe_device,
.release_device = amd_iommu_release_device,
- .probe_finalize = amd_iommu_probe_finalize,
.device_group = amd_iommu_device_group,
.get_resv_regions = amd_iommu_get_resv_regions,
.is_attach_deferred = amd_iommu_is_attach_deferred,
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 52126f73f690..fa1cdca39da6 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1727,25 +1727,20 @@ static const struct dma_map_ops iommu_dma_ops = {
.opt_mapping_size = iommu_dma_opt_mapping_size,
};

-/*
- * The IOMMU core code allocates the default DMA domain, which the underlying
- * IOMMU driver needs to support via the dma-iommu layer.
- */
-void iommu_setup_dma_ops(struct device *dev, u64 dma_base, u64 dma_limit)
+void iommu_setup_dma_ops(struct device *dev)
{
struct iommu_domain *domain = iommu_get_domain_for_dev(dev);

- if (!domain)
- goto out_err;
+ if (dev_is_pci(dev))
+ dev->iommu->pci_32bit_workaround = !iommu_dma_forcedac;

- /*
- * The IOMMU core code allocates the default DMA domain, which the
- * underlying IOMMU driver needs to support via the dma-iommu layer.
- */
if (iommu_is_dma_domain(domain)) {
if (iommu_dma_init_domain(domain, dev))
goto out_err;
dev->dma_ops = &iommu_dma_ops;
+ } else if (dev->dma_ops == &iommu_dma_ops) {
+ /* Clean up if we've switched *from* a DMA domain */
+ dev->dma_ops = NULL;
}

return;
@@ -1753,7 +1748,6 @@ void iommu_setup_dma_ops(struct device *dev, u64 dma_base, u64 dma_limit)
pr_warn("Failed to set up IOMMU for device %s; retaining platform DMA ops\n",
dev_name(dev));
}
-EXPORT_SYMBOL_GPL(iommu_setup_dma_ops);

static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev,
phys_addr_t msi_addr, struct iommu_domain *domain)
diff --git a/drivers/iommu/dma-iommu.h b/drivers/iommu/dma-iommu.h
index c829f1f82a99..c12d63457c76 100644
--- a/drivers/iommu/dma-iommu.h
+++ b/drivers/iommu/dma-iommu.h
@@ -9,6 +9,8 @@

#ifdef CONFIG_IOMMU_DMA

+void iommu_setup_dma_ops(struct device *dev);
+
int iommu_get_dma_cookie(struct iommu_domain *domain);
void iommu_put_dma_cookie(struct iommu_domain *domain);

@@ -17,13 +19,13 @@ int iommu_dma_init_fq(struct iommu_domain *domain);
void iommu_dma_get_resv_regions(struct device *dev, struct list_head *list);

extern bool iommu_dma_forcedac;
-static inline void iommu_dma_set_pci_32bit_workaround(struct device *dev)
-{
- dev->iommu->pci_32bit_workaround = !iommu_dma_forcedac;
-}

#else /* CONFIG_IOMMU_DMA */

+static inline void iommu_setup_dma_ops(struct device *dev)
+{
+}
+
static inline int iommu_dma_init_fq(struct iommu_domain *domain)
{
return -EINVAL;
@@ -42,9 +44,5 @@ static inline void iommu_dma_get_resv_regions(struct device *dev, struct list_he
{
}

-static inline void iommu_dma_set_pci_32bit_workaround(struct device *dev)
-{
-}
-
#endif /* CONFIG_IOMMU_DMA */
#endif /* __DMA_IOMMU_H */
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 6fb5f6fceea1..f1c599429408 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -4291,12 +4291,6 @@ static void intel_iommu_release_device(struct device *dev)
set_dma_ops(dev, NULL);
}

-static void intel_iommu_probe_finalize(struct device *dev)
-{
- set_dma_ops(dev, NULL);
- iommu_setup_dma_ops(dev, 0, U64_MAX);
-}
-
static void intel_iommu_get_resv_regions(struct device *device,
struct list_head *head)
{
@@ -4748,7 +4742,6 @@ const struct iommu_ops intel_iommu_ops = {
.domain_alloc = intel_iommu_domain_alloc,
.domain_alloc_user = intel_iommu_domain_alloc_user,
.probe_device = intel_iommu_probe_device,
- .probe_finalize = intel_iommu_probe_finalize,
.release_device = intel_iommu_release_device,
.get_resv_regions = intel_iommu_get_resv_regions,
.device_group = intel_iommu_device_group,
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index d14413916f93..e793b2f4bdc8 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -571,10 +571,11 @@ static int __iommu_probe_device(struct device *dev, struct list_head *group_list
if (list_empty(&group->entry))
list_add_tail(&group->entry, group_list);
}
- mutex_unlock(&group->mutex);

- if (dev_is_pci(dev))
- iommu_dma_set_pci_32bit_workaround(dev);
+ if (group->default_domain)
+ iommu_setup_dma_ops(dev);
+
+ mutex_unlock(&group->mutex);

return 0;

@@ -2010,6 +2011,8 @@ int bus_iommu_probe(const struct bus_type *bus)
mutex_unlock(&group->mutex);
return ret;
}
+ for_each_group_device(group, gdev)
+ iommu_setup_dma_ops(gdev->dev);
mutex_unlock(&group->mutex);

/*
@@ -3248,18 +3251,9 @@ static ssize_t iommu_group_store_type(struct iommu_group *group,
if (ret)
goto out_unlock;

- /*
- * Release the mutex here because ops->probe_finalize() call-back of
- * some vendor IOMMU drivers calls arm_iommu_attach_device() which
- * in-turn might call back into IOMMU core code, where it tries to take
- * group->mutex, resulting in a deadlock.
- */
- mutex_unlock(&group->mutex);
-
/* Make sure dma_ops is appropriatley set */
for_each_group_device(group, gdev)
- iommu_group_do_probe_finalize(gdev->dev);
- return count;
+ iommu_setup_dma_ops(gdev->dev);

out_unlock:
mutex_unlock(&group->mutex);
diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c
index 9a5196f523de..d8eaa7ea380b 100644
--- a/drivers/iommu/s390-iommu.c
+++ b/drivers/iommu/s390-iommu.c
@@ -695,11 +695,6 @@ static size_t s390_iommu_unmap_pages(struct iommu_domain *domain,
return size;
}

-static void s390_iommu_probe_finalize(struct device *dev)
-{
- iommu_setup_dma_ops(dev, 0, U64_MAX);
-}
-
struct zpci_iommu_ctrs *zpci_get_iommu_ctrs(struct zpci_dev *zdev)
{
if (!zdev || !zdev->s390_domain)
@@ -785,7 +780,6 @@ static const struct iommu_ops s390_iommu_ops = {
.capable = s390_iommu_capable,
.domain_alloc_paging = s390_domain_alloc_paging,
.probe_device = s390_iommu_probe_device,
- .probe_finalize = s390_iommu_probe_finalize,
.release_device = s390_iommu_release_device,
.device_group = generic_device_group,
.pgsize_bitmap = SZ_4K,
diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index 34db37fd9675..ec7640127125 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -1025,15 +1025,6 @@ static struct iommu_device *viommu_probe_device(struct device *dev)
return ERR_PTR(ret);
}

-static void viommu_probe_finalize(struct device *dev)
-{
-#ifndef CONFIG_ARCH_HAS_SETUP_DMA_OPS
- /* First clear the DMA ops in case we're switching from a DMA domain */
- set_dma_ops(dev, NULL);
- iommu_setup_dma_ops(dev, 0, U64_MAX);
-#endif
-}
-
static void viommu_release_device(struct device *dev)
{
struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
@@ -1072,7 +1063,6 @@ static struct iommu_ops viommu_ops = {
.capable = viommu_capable,
.domain_alloc = viommu_domain_alloc,
.probe_device = viommu_probe_device,
- .probe_finalize = viommu_probe_finalize,
.release_device = viommu_release_device,
.device_group = viommu_device_group,
.get_resv_regions = viommu_get_resv_regions,
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 79c3dac8be75..509add6699c8 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -1367,9 +1367,6 @@ static inline void iommu_debugfs_setup(void) {}
#ifdef CONFIG_IOMMU_DMA
#include <linux/msi.h>

-/* Setup call for arch DMA mapping code */
-void iommu_setup_dma_ops(struct device *dev, u64 dma_base, u64 dma_limit);
-
int iommu_get_msi_cookie(struct iommu_domain *domain, dma_addr_t base);

int iommu_dma_prepare_msi(struct msi_desc *desc, phys_addr_t msi_addr);
@@ -1380,10 +1377,6 @@ void iommu_dma_compose_msi_msg(struct msi_desc *desc, struct msi_msg *msg);
struct msi_desc;
struct msi_msg;

-static inline void iommu_setup_dma_ops(struct device *dev, u64 dma_base, u64 dma_limit)
-{
-}
-
static inline int iommu_get_msi_cookie(struct iommu_domain *domain, dma_addr_t base)
{
return -ENODEV;
--
2.39.2.101.g768bb238c484.dirty


2024-02-09 17:19:16

by Robin Murphy

Subject: [PATCH v3 7/7] dma-mapping: Simplify arch_setup_dma_ops()

The dma_base, size and iommu arguments are only used by ARM, and can
now easily be deduced from the device itself, so there's no need to pass
them through the callchain as well.

Acked-by: Rob Herring <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Signed-off-by: Robin Murphy <[email protected]>
---
v2: Make sure the ARM changes actually build (oops...)
---
arch/arc/mm/dma.c | 3 +--
arch/arm/mm/dma-mapping-nommu.c | 3 +--
arch/arm/mm/dma-mapping.c | 16 +++++++++-------
arch/arm64/mm/dma-mapping.c | 3 +--
arch/mips/mm/dma-noncoherent.c | 3 +--
arch/riscv/mm/dma-noncoherent.c | 3 +--
drivers/acpi/scan.c | 7 +------
drivers/hv/hv_common.c | 6 +-----
drivers/of/device.c | 4 +---
include/linux/dma-map-ops.h | 6 ++----
10 files changed, 19 insertions(+), 35 deletions(-)

diff --git a/arch/arc/mm/dma.c b/arch/arc/mm/dma.c
index 197707bc7658..6b85e94f3275 100644
--- a/arch/arc/mm/dma.c
+++ b/arch/arc/mm/dma.c
@@ -90,8 +90,7 @@ void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
/*
* Plug in direct dma map ops.
*/
-void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
- bool coherent)
+void arch_setup_dma_ops(struct device *dev, bool coherent)
{
/*
* IOC hardware snoops all DMA traffic keeping the caches consistent
diff --git a/arch/arm/mm/dma-mapping-nommu.c b/arch/arm/mm/dma-mapping-nommu.c
index b94850b57995..97db5397c320 100644
--- a/arch/arm/mm/dma-mapping-nommu.c
+++ b/arch/arm/mm/dma-mapping-nommu.c
@@ -33,8 +33,7 @@ void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
}
}

-void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
- bool coherent)
+void arch_setup_dma_ops(struct device *dev, bool coherent)
{
if (IS_ENABLED(CONFIG_CPU_V7M)) {
/*
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index f68db05eba29..5adf1769eee4 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -1709,11 +1709,15 @@ void arm_iommu_detach_device(struct device *dev)
}
EXPORT_SYMBOL_GPL(arm_iommu_detach_device);

-static void arm_setup_iommu_dma_ops(struct device *dev, u64 dma_base, u64 size,
- bool coherent)
+static void arm_setup_iommu_dma_ops(struct device *dev)
{
struct dma_iommu_mapping *mapping;
+ u64 dma_base = 0, size = 1ULL << 32;

+ if (dev->dma_range_map) {
+ dma_base = dma_range_map_min(dev->dma_range_map);
+ size = dma_range_map_max(dev->dma_range_map) - dma_base;
+ }
mapping = arm_iommu_create_mapping(dev->bus, dma_base, size);
if (IS_ERR(mapping)) {
pr_warn("Failed to create %llu-byte IOMMU mapping for device %s\n",
@@ -1744,8 +1748,7 @@ static void arm_teardown_iommu_dma_ops(struct device *dev)

#else

-static void arm_setup_iommu_dma_ops(struct device *dev, u64 dma_base, u64 size,
- bool coherent)
+static void arm_setup_iommu_dma_ops(struct device *dev)
{
}

@@ -1753,8 +1756,7 @@ static void arm_teardown_iommu_dma_ops(struct device *dev) { }

#endif /* CONFIG_ARM_DMA_USE_IOMMU */

-void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
- bool coherent)
+void arch_setup_dma_ops(struct device *dev, bool coherent)
{
/*
* Due to legacy code that sets the ->dma_coherent flag from a bus
@@ -1774,7 +1776,7 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
return;

if (device_iommu_mapped(dev))
- arm_setup_iommu_dma_ops(dev, dma_base, size, coherent);
+ arm_setup_iommu_dma_ops(dev);

xen_setup_dma_ops(dev);
dev->archdata.dma_ops_setup = true;
diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index 313d8938a2f0..0b320a25a471 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -46,8 +46,7 @@ void arch_teardown_dma_ops(struct device *dev)
}
#endif

-void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
- bool coherent)
+void arch_setup_dma_ops(struct device *dev, bool coherent)
{
int cls = cache_line_size_of_cpu();

diff --git a/arch/mips/mm/dma-noncoherent.c b/arch/mips/mm/dma-noncoherent.c
index 0f3cec663a12..ab4f2a75a7d0 100644
--- a/arch/mips/mm/dma-noncoherent.c
+++ b/arch/mips/mm/dma-noncoherent.c
@@ -137,8 +137,7 @@ void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
#endif

#ifdef CONFIG_ARCH_HAS_SETUP_DMA_OPS
-void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
- bool coherent)
+void arch_setup_dma_ops(struct device *dev, bool coherent)
{
dev->dma_coherent = coherent;
}
diff --git a/arch/riscv/mm/dma-noncoherent.c b/arch/riscv/mm/dma-noncoherent.c
index 843107f834b2..cb89d7e0ba88 100644
--- a/arch/riscv/mm/dma-noncoherent.c
+++ b/arch/riscv/mm/dma-noncoherent.c
@@ -128,8 +128,7 @@ void arch_dma_prep_coherent(struct page *page, size_t size)
ALT_CMO_OP(FLUSH, flush_addr, size, riscv_cbom_block_size);
}

-void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
- bool coherent)
+void arch_setup_dma_ops(struct device *dev, bool coherent)
{
WARN_TAINT(!coherent && riscv_cbom_block_size > ARCH_DMA_MINALIGN,
TAINT_CPU_OUT_OF_SPEC,
diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
index e6ed1ba91e5c..f5df17d11717 100644
--- a/drivers/acpi/scan.c
+++ b/drivers/acpi/scan.c
@@ -1640,12 +1640,7 @@ int acpi_dma_configure_id(struct device *dev, enum dev_dma_attr attr,
if (ret == -EPROBE_DEFER)
return -EPROBE_DEFER;

- /*
- * Historically this routine doesn't fail driver probing due to errors
- * in acpi_iommu_configure_id()
- */
-
- arch_setup_dma_ops(dev, 0, U64_MAX, attr == DEV_DMA_COHERENT);
+ arch_setup_dma_ops(dev, attr == DEV_DMA_COHERENT);

return 0;
}
diff --git a/drivers/hv/hv_common.c b/drivers/hv/hv_common.c
index 0285a74363b3..0e2decd1167a 100644
--- a/drivers/hv/hv_common.c
+++ b/drivers/hv/hv_common.c
@@ -484,11 +484,7 @@ EXPORT_SYMBOL_GPL(hv_query_ext_cap);

void hv_setup_dma_ops(struct device *dev, bool coherent)
{
- /*
- * Hyper-V does not offer a vIOMMU in the guest
- * VM, so pass 0/NULL for the IOMMU settings
- */
- arch_setup_dma_ops(dev, 0, 0, coherent);
+ arch_setup_dma_ops(dev, coherent);
}
EXPORT_SYMBOL_GPL(hv_setup_dma_ops);

diff --git a/drivers/of/device.c b/drivers/of/device.c
index 9e7963972fa7..312c63361211 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -95,7 +95,6 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
{
const struct bus_dma_region *map = NULL;
struct device_node *bus_np;
- u64 dma_start = 0;
u64 mask, end = 0;
bool coherent;
int iommu_ret;
@@ -118,7 +117,6 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
return ret == -ENODEV ? 0 : ret;
} else {
/* Determine the overall bounds of all DMA regions */
- dma_start = dma_range_map_min(map);
end = dma_range_map_max(map);
}

@@ -175,7 +173,7 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
} else
dev_dbg(dev, "device is behind an iommu\n");

- arch_setup_dma_ops(dev, dma_start, end - dma_start + 1, coherent);
+ arch_setup_dma_ops(dev, coherent);

if (iommu_ret)
of_dma_set_restricted_buffer(dev, np);
diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index 4abc60f04209..ed89e1ce0114 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -426,11 +426,9 @@ bool arch_dma_unmap_sg_direct(struct device *dev, struct scatterlist *sg,
#endif

#ifdef CONFIG_ARCH_HAS_SETUP_DMA_OPS
-void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
- bool coherent);
+void arch_setup_dma_ops(struct device *dev, bool coherent);
#else
-static inline void arch_setup_dma_ops(struct device *dev, u64 dma_base,
- u64 size, bool coherent)
+static inline void arch_setup_dma_ops(struct device *dev, bool coherent)
{
}
#endif /* CONFIG_ARCH_HAS_SETUP_DMA_OPS */
--
2.39.2.101.g768bb238c484.dirty


2024-02-09 19:38:13

by Michael Kelley

Subject: RE: [PATCH v3 7/7] dma-mapping: Simplify arch_setup_dma_ops()

From: Robin Murphy <[email protected]> Sent: Friday, February 9, 2024 8:50 AM
>
> The dma_base, size and iommu arguments are only used by ARM, and can
> now easily be deduced from the device itself, so there's no need to pass
> them through the callchain as well.
>
> Acked-by: Rob Herring <[email protected]>
> Reviewed-by: Christoph Hellwig <[email protected]>
> Signed-off-by: Robin Murphy <[email protected]>
> ---
> v2: Make sure the ARM changes actually build (oops...)
> ---
> arch/arc/mm/dma.c | 3 +--
> arch/arm/mm/dma-mapping-nommu.c | 3 +--
> arch/arm/mm/dma-mapping.c | 16 +++++++++-------
> arch/arm64/mm/dma-mapping.c | 3 +--
> arch/mips/mm/dma-noncoherent.c | 3 +--
> arch/riscv/mm/dma-noncoherent.c | 3 +--
> drivers/acpi/scan.c | 7 +------
> drivers/hv/hv_common.c | 6 +-----
> drivers/of/device.c | 4 +---
> include/linux/dma-map-ops.h | 6 ++----
> 10 files changed, 19 insertions(+), 35 deletions(-)
>

For the Hyper-V related change in drivers/hv/hv_common.c,

Reviewed-by: Michael Kelley <[email protected]>



2024-02-10 11:28:07

by Baolu Lu

Subject: Re: [PATCH v3 6/7] iommu/dma: Centralise iommu_setup_dma_ops()

On 2024/2/10 0:50, Robin Murphy wrote:
> It's somewhat hard to see, but arm64's arch_setup_dma_ops() should only
> ever call iommu_setup_dma_ops() after a successful iommu_probe_device(),
> which means there should be no harm in achieving the same order of
> operations by running it off the back of iommu_probe_device() itself.
> This then puts it in line with the x86 and s390 .probe_finalize bodges,
> letting us pull it all into the main flow properly. As a bonus this lets
> us fold in and de-scope the PCI workaround setup as well.
>
> At this point we can also then pull the call up inside the group mutex,
> and avoid having to think about whether iommu_group_store_type() could
> theoretically race and free the domain if iommu_setup_dma_ops() ran just
> *before* iommu_device_use_default_domain() claims it... Furthermore we
> replace one .probe_finalize call completely, since the only remaining
> implementations are now one which only needs to run once for the initial
> boot-time probe, and two which themselves render that path unreachable.
>
> This leaves us a big step closer to realistically being able to unpick
> the variety of different things that iommu_setup_dma_ops() has been
> muddling together, and further streamline iommu-dma into core API flows
> in future.
>
> Signed-off-by: Robin Murphy <[email protected]>
> ---
> v2: Shuffle around to make sure the iommu_group_do_probe_finalize() case
> is covered as well, with bonus side-effects as above.
> v3: *Really* do that, remembering the other two probe_finalize sites too.
> ---
> arch/arm64/mm/dma-mapping.c | 2 --
> drivers/iommu/amd/iommu.c | 8 --------
> drivers/iommu/dma-iommu.c | 18 ++++++------------
> drivers/iommu/dma-iommu.h | 14 ++++++--------
> drivers/iommu/intel/iommu.c | 7 -------
> drivers/iommu/iommu.c | 20 +++++++-------------
> drivers/iommu/s390-iommu.c | 6 ------
> drivers/iommu/virtio-iommu.c | 10 ----------
> include/linux/iommu.h | 7 -------
> 9 files changed, 19 insertions(+), 73 deletions(-)

For the changes in the Intel IOMMU driver,

Reviewed-by: Lu Baolu <[email protected]>

Best regards,
baolu

2024-02-24 11:43:43

by Hanjun Guo

Subject: Re: [PATCH v3 3/7] ACPI/IORT: Handle memory address size limits as limits

On 2024/2/10 0:50, Robin Murphy wrote:
> Return the Root Complex/Named Component memory address size limit as an
> inclusive limit value, rather than an exclusive size. This saves having
> to fudge an off-by-one for the 64-bit case, and simplifies our caller.
>
> Reviewed-by: Jason Gunthorpe <[email protected]>
> Signed-off-by: Robin Murphy <[email protected]>
> ---
> v2: Avoid undefined shifts (grr...)
> ---
> drivers/acpi/arm64/dma.c | 9 +++------
> drivers/acpi/arm64/iort.c | 20 ++++++++++----------
> include/linux/acpi_iort.h | 4 ++--
> 3 files changed, 15 insertions(+), 18 deletions(-)

This also makes the code easier to read,

Acked-by: Hanjun Guo <[email protected]>

Thanks
Hanjun

2024-02-24 11:32:33

by Hanjun Guo

Subject: Re: [PATCH v3 0/7] dma-mapping: Simplify arch_setup_dma_ops()

On 2024/2/10 0:49, Robin Murphy wrote:
> v2: https://lore.kernel.org/linux-iommu/[email protected]/
>
> Hi all,
>
> Here's v3, rebased and fixing the thinko from v2, so unless anything
> else has changed behind my back I hope it's good to go (via the IOMMU
> tree, as mentioned before).

Compiled with/without ACPI enabled, and tested this patch set on an
ARM64 server with ACPI booting; looks good.

Tested-by: Hanjun Guo <[email protected]>

Thanks
Hanjun

2024-02-24 11:48:57

by Hanjun Guo

Subject: Re: [PATCH v3 4/7] dma-mapping: Add helpers for dma_range_map bounds

On 2024/2/10 0:50, Robin Murphy wrote:
> Several places want to compute the lower and/or upper bounds of a
> dma_range_map, so let's factor that out into reusable helpers.
>
> Acked-by: Rob Herring <[email protected]>
> Reviewed-by: Christoph Hellwig <[email protected]>
> Reviewed-by: Jason Gunthorpe <[email protected]>
> Signed-off-by: Robin Murphy <[email protected]>
> ---
> v2: fix warning for 32-bit builds
> ---
> arch/loongarch/kernel/dma.c | 9 ++-------
> drivers/acpi/arm64/dma.c | 8 +-------
> drivers/of/device.c | 11 ++---------
> include/linux/dma-direct.h | 18 ++++++++++++++++++
> 4 files changed, 23 insertions(+), 23 deletions(-)

For the ARM64 code,

Reviewed-by: Hanjun Guo <[email protected]>

Thanks
Hanjun

2024-03-06 12:03:18

by Robin Murphy

Subject: Re: [PATCH v3 0/7] dma-mapping: Simplify arch_setup_dma_ops()

Hi Joerg, Christoph,

On 2024-02-09 4:49 pm, Robin Murphy wrote:
> v2: https://lore.kernel.org/linux-iommu/[email protected]/
>
> Hi all,
>
> Here's v3, rebased and fixing the thinko from v2, so unless anything
> else has changed behind my back I hope it's good to go (via the IOMMU
> tree, as mentioned before).

Are either of you happy to pick this series up now that we have Hanjun's
acks for the IORT parts? As it stands it still applies cleanly to both
iommu/next and dma/for-next. I do have some followup IOMMU patches
prepared already (continuing to delete more code, yay!), but I don't
want to get too far ahead of myself.

Cheers,
Robin.

>
> Thanks,
> Robin.
>
>
> Robin Murphy (7):
> OF: Retire dma-ranges mask workaround
> OF: Simplify DMA range calculations
> ACPI/IORT: Handle memory address size limits as limits
> dma-mapping: Add helpers for dma_range_map bounds
> iommu/dma: Make limit checks self-contained
> iommu/dma: Centralise iommu_setup_dma_ops()
> dma-mapping: Simplify arch_setup_dma_ops()
>
> arch/arc/mm/dma.c | 3 +--
> arch/arm/mm/dma-mapping-nommu.c | 3 +--
> arch/arm/mm/dma-mapping.c | 16 +++++++------
> arch/arm64/mm/dma-mapping.c | 5 +---
> arch/loongarch/kernel/dma.c | 9 ++-----
> arch/mips/mm/dma-noncoherent.c | 3 +--
> arch/riscv/mm/dma-noncoherent.c | 3 +--
> drivers/acpi/arm64/dma.c | 17 ++++---------
> drivers/acpi/arm64/iort.c | 20 ++++++++--------
> drivers/acpi/scan.c | 7 +-----
> drivers/hv/hv_common.c | 6 +----
> drivers/iommu/amd/iommu.c | 8 -------
> drivers/iommu/dma-iommu.c | 39 ++++++++++++------------------
> drivers/iommu/dma-iommu.h | 14 +++++------
> drivers/iommu/intel/iommu.c | 7 ------
> drivers/iommu/iommu.c | 20 ++++++----------
> drivers/iommu/s390-iommu.c | 6 -----
> drivers/iommu/virtio-iommu.c | 10 --------
> drivers/of/device.c | 42 ++++++---------------------------
> include/linux/acpi_iort.h | 4 ++--
> include/linux/dma-direct.h | 18 ++++++++++++++
> include/linux/dma-map-ops.h | 6 ++---
> include/linux/iommu.h | 7 ------
> 23 files changed, 89 insertions(+), 184 deletions(-)
>

2024-03-06 12:11:12

by Christoph Hellwig

Subject: Re: [PATCH v3 0/7] dma-mapping: Simplify arch_setup_dma_ops()

On Wed, Mar 06, 2024 at 12:02:41PM +0000, Robin Murphy wrote:
> Are either of you happy to pick this series up now that we have Hanjun's
> acks for the IORT parts? As it stands it still applies cleanly to both
> iommu/next and dma/for-next. I do have some followup IOMMU patches prepared
> already (continuing to delete more code, yay!), but I don't want to get too
> far ahead of myself.

I expected this to go in through the iommu tree. But if Joerg wants
the series (or part of it) to go through the dma-mapping tree, I'd
be happy to pick it up ASAP.


2024-03-08 20:37:29

by Jason Gunthorpe

Subject: Re: [PATCH v3 6/7] iommu/dma: Centralise iommu_setup_dma_ops()

On Fri, Feb 09, 2024 at 04:50:03PM +0000, Robin Murphy wrote:
> It's somewhat hard to see, but arm64's arch_setup_dma_ops() should only
> ever call iommu_setup_dma_ops() after a successful iommu_probe_device(),
> which means there should be no harm in achieving the same order of
> operations by running it off the back of iommu_probe_device() itself.
> This then puts it in line with the x86 and s390 .probe_finalize bodges,
> letting us pull it all into the main flow properly. As a bonus this lets
> us fold in and de-scope the PCI workaround setup as well.
>
> At this point we can also then pull the call up inside the group mutex,
> and avoid having to think about whether iommu_group_store_type() could
> theoretically race and free the domain if iommu_setup_dma_ops() ran just
> *before* iommu_device_use_default_domain() claims it... Furthermore we
> replace one .probe_finalize call completely, since the only remaining
> implementations are now one which only needs to run once for the initial
> boot-time probe, and two which themselves render that path unreachable.
>
> This leaves us a big step closer to realistically being able to unpick
> the variety of different things that iommu_setup_dma_ops() has been
> muddling together, and further streamline iommu-dma into core API flows
> in future.
>
> Signed-off-by: Robin Murphy <[email protected]>
> ---
> v2: Shuffle around to make sure the iommu_group_do_probe_finalize() case
> is covered as well, with bonus side-effects as above.
> v3: *Really* do that, remembering the other two probe_finalize sites too.
> ---
> arch/arm64/mm/dma-mapping.c | 2 --
> drivers/iommu/amd/iommu.c | 8 --------
> drivers/iommu/dma-iommu.c | 18 ++++++------------
> drivers/iommu/dma-iommu.h | 14 ++++++--------
> drivers/iommu/intel/iommu.c | 7 -------
> drivers/iommu/iommu.c | 20 +++++++-------------
> drivers/iommu/s390-iommu.c | 6 ------
> drivers/iommu/virtio-iommu.c | 10 ----------
> include/linux/iommu.h | 7 -------
> 9 files changed, 19 insertions(+), 73 deletions(-)

Reviewed-by: Jason Gunthorpe <[email protected]>

Jason

2024-03-08 20:37:47

by Jason Gunthorpe

Subject: Re: [PATCH v3 7/7] dma-mapping: Simplify arch_setup_dma_ops()

On Fri, Feb 09, 2024 at 04:50:04PM +0000, Robin Murphy wrote:
> The dma_base, size and iommu arguments are only used by ARM, and can
> now easily be deduced from the device itself, so there's no need to pass
> them through the callchain as well.
>
> Acked-by: Rob Herring <[email protected]>
> Reviewed-by: Christoph Hellwig <[email protected]>
> Signed-off-by: Robin Murphy <[email protected]>
> ---
> v2: Make sure the ARM changes actually build (oops...)
> ---
> arch/arc/mm/dma.c | 3 +--
> arch/arm/mm/dma-mapping-nommu.c | 3 +--
> arch/arm/mm/dma-mapping.c | 16 +++++++++-------
> arch/arm64/mm/dma-mapping.c | 3 +--
> arch/mips/mm/dma-noncoherent.c | 3 +--
> arch/riscv/mm/dma-noncoherent.c | 3 +--
> drivers/acpi/scan.c | 7 +------
> drivers/hv/hv_common.c | 6 +-----
> drivers/of/device.c | 4 +---
> include/linux/dma-map-ops.h | 6 ++----
> 10 files changed, 19 insertions(+), 35 deletions(-)

Reviewed-by: Jason Gunthorpe <[email protected]>

Jason