v3: https://lore.kernel.org/linux-iommu/[email protected]/
Hi all,
Since this ended up missing the boat for 6.9, here's a rebase and resend
with the additional tags from v3 collected.
Cheers,
Robin.
Robin Murphy (7):
OF: Retire dma-ranges mask workaround
OF: Simplify DMA range calculations
ACPI/IORT: Handle memory address size limits as limits
dma-mapping: Add helpers for dma_range_map bounds
iommu/dma: Make limit checks self-contained
iommu/dma: Centralise iommu_setup_dma_ops()
dma-mapping: Simplify arch_setup_dma_ops()
arch/arc/mm/dma.c | 3 +--
arch/arm/mm/dma-mapping-nommu.c | 3 +--
arch/arm/mm/dma-mapping.c | 16 +++++++------
arch/arm64/mm/dma-mapping.c | 5 +---
arch/loongarch/kernel/dma.c | 9 ++-----
arch/mips/mm/dma-noncoherent.c | 3 +--
arch/riscv/mm/dma-noncoherent.c | 3 +--
drivers/acpi/arm64/dma.c | 17 ++++---------
drivers/acpi/arm64/iort.c | 20 ++++++++--------
drivers/acpi/scan.c | 7 +-----
drivers/hv/hv_common.c | 6 +----
drivers/iommu/amd/iommu.c | 8 -------
drivers/iommu/dma-iommu.c | 39 ++++++++++++------------------
drivers/iommu/dma-iommu.h | 14 +++++------
drivers/iommu/intel/iommu.c | 7 ------
drivers/iommu/iommu.c | 20 ++++++----------
drivers/iommu/s390-iommu.c | 6 -----
drivers/iommu/virtio-iommu.c | 10 --------
drivers/of/device.c | 42 ++++++---------------------------
include/linux/acpi_iort.h | 4 ++--
include/linux/dma-direct.h | 18 ++++++++++++++
include/linux/dma-map-ops.h | 6 ++---
include/linux/iommu.h | 7 ------
23 files changed, 89 insertions(+), 184 deletions(-)
--
2.39.2.101.g768bb238c484.dirty
The fixup adding 1 to the dma-ranges size may have been for the benefit
of some early AMD Seattle DTs, or may have merely been a just-in-case,
but either way anyone who might have deserved to get the message has
hopefully seen the warning in the 9 years we've had it there. The modern
dma_range_map mechanism should happily handle odd-sized ranges with no
ill effect, so there's little need to care anyway now. Clean it up.
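For reference, the retired fixup boils down to the following standalone
sketch (illustrative only, not the kernel code itself):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        /* e.g. a DT that gave dma-ranges a size of 0xffffffff, really meaning a mask */
        uint64_t size = 0xffffffffULL;

        if (size & 1)           /* an odd "size" was assumed to be a mask... */
                size += 1;      /* ...and treated as mask + 1 (after a warning) */

        printf("adjusted size = 0x%llx\n", (unsigned long long)size);
        return 0;
}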
Acked-by: Rob Herring <[email protected]>
Signed-off-by: Robin Murphy <[email protected]>
---
v2: Tweak commit message
---
drivers/of/device.c | 16 ----------------
1 file changed, 16 deletions(-)
diff --git a/drivers/of/device.c b/drivers/of/device.c
index de89f9906375..a988bee2ee5a 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -129,22 +129,6 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
dma_end = r->dma_start + r->size;
}
size = dma_end - dma_start;
-
- /*
- * Add a work around to treat the size as mask + 1 in case
- * it is defined in DT as a mask.
- */
- if (size & 1) {
- dev_warn(dev, "Invalid size 0x%llx for dma-range(s)\n",
- size);
- size = size + 1;
- }
-
- if (!size) {
- dev_err(dev, "Adjusted size 0x%llx invalid\n", size);
- kfree(map);
- return -EINVAL;
- }
}
/*
--
2.39.2.101.g768bb238c484.dirty
Juggling start, end, and size values for a range is somewhat redundant
and a little hard to follow. Consolidate down to just using inclusive
start and end, which saves us worrying about size overflows for full
64-bit ranges (note that passing a potentially-overflowed value through
to arch_setup_dma_ops() is benign for all current implementations, and
this is working towards removing that anyway).
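To illustrate the overflow concern with a standalone sketch (not kernel
code): for a map covering the full 64-bit space, an exclusive size wraps
to zero, whereas the inclusive end remains representable.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint64_t dma_start = 0, end = UINT64_MAX;       /* full 64-bit range */
        uint64_t size = end - dma_start + 1;            /* 2^64 wraps to 0 */

        printf("size = 0x%llx (overflowed)\n", (unsigned long long)size);
        printf("end  = 0x%llx (exact)\n", (unsigned long long)end);
        return 0;
}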
Acked-by: Rob Herring <[email protected]>
Reviewed-by: Jason Gunthorpe <[email protected]>
Signed-off-by: Robin Murphy <[email protected]>
---
drivers/of/device.c | 19 ++++++++-----------
1 file changed, 8 insertions(+), 11 deletions(-)
diff --git a/drivers/of/device.c b/drivers/of/device.c
index a988bee2ee5a..841ccd3a19d1 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -96,7 +96,7 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
const struct bus_dma_region *map = NULL;
struct device_node *bus_np;
u64 dma_start = 0;
- u64 mask, end, size = 0;
+ u64 mask, end = 0;
bool coherent;
int iommu_ret;
int ret;
@@ -118,17 +118,15 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
return ret == -ENODEV ? 0 : ret;
} else {
const struct bus_dma_region *r = map;
- u64 dma_end = 0;
/* Determine the overall bounds of all DMA regions */
for (dma_start = ~0; r->size; r++) {
/* Take lower and upper limits */
if (r->dma_start < dma_start)
dma_start = r->dma_start;
- if (r->dma_start + r->size > dma_end)
- dma_end = r->dma_start + r->size;
+ if (r->dma_start + r->size > end)
+ end = r->dma_start + r->size;
}
- size = dma_end - dma_start;
}
/*
@@ -142,16 +140,15 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
dev->dma_mask = &dev->coherent_dma_mask;
}
- if (!size && dev->coherent_dma_mask)
- size = max(dev->coherent_dma_mask, dev->coherent_dma_mask + 1);
- else if (!size)
- size = 1ULL << 32;
+ if (!end && dev->coherent_dma_mask)
+ end = dev->coherent_dma_mask;
+ else if (!end)
+ end = (1ULL << 32) - 1;
/*
* Limit coherent and dma mask based on size and default mask
* set by the driver.
*/
- end = dma_start + size - 1;
mask = DMA_BIT_MASK(ilog2(end) + 1);
dev->coherent_dma_mask &= mask;
*dev->dma_mask &= mask;
@@ -185,7 +182,7 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
} else
dev_dbg(dev, "device is behind an iommu\n");
- arch_setup_dma_ops(dev, dma_start, size, coherent);
+ arch_setup_dma_ops(dev, dma_start, end - dma_start + 1, coherent);
if (iommu_ret)
of_dma_set_restricted_buffer(dev, np);
--
2.39.2.101.g768bb238c484.dirty
Return the Root Complex/Named Component memory address size limit as an
inclusive limit value, rather than an exclusive size. This saves having
to fudge an off-by-one for the 64-bit case, and simplifies our caller.
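The conversion can be sketched standalone as below; the helper name is
made up for illustration, but the pattern mirrors the diff (a plain
64-bit shift would be undefined, hence the clamp):

#include <stdint.h>
#include <stdio.h>

static uint64_t addr_bits_to_limit(unsigned int bits)
{
        /* 1ULL << 64 is undefined behaviour, so clamp the 64-bit case */
        return bits >= 64 ? UINT64_MAX : (1ULL << bits) - 1;
}

int main(void)
{
        printf("32 bits -> 0x%llx\n", (unsigned long long)addr_bits_to_limit(32));
        printf("48 bits -> 0x%llx\n", (unsigned long long)addr_bits_to_limit(48));
        printf("64 bits -> 0x%llx\n", (unsigned long long)addr_bits_to_limit(64));
        return 0;
}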
Acked-by: Hanjun Guo <[email protected]>
Reviewed-by: Jason Gunthorpe <[email protected]>
Tested-by: Hanjun Guo <[email protected]>
Signed-off-by: Robin Murphy <[email protected]>
---
v2: Avoid undefined shifts (grr...)
---
drivers/acpi/arm64/dma.c | 9 +++------
drivers/acpi/arm64/iort.c | 20 ++++++++++----------
include/linux/acpi_iort.h | 4 ++--
3 files changed, 15 insertions(+), 18 deletions(-)
diff --git a/drivers/acpi/arm64/dma.c b/drivers/acpi/arm64/dma.c
index 93d796531af3..b98a149f8d50 100644
--- a/drivers/acpi/arm64/dma.c
+++ b/drivers/acpi/arm64/dma.c
@@ -8,7 +8,6 @@ void acpi_arch_dma_setup(struct device *dev)
{
int ret;
u64 end, mask;
- u64 size = 0;
const struct bus_dma_region *map = NULL;
/*
@@ -23,9 +22,9 @@ void acpi_arch_dma_setup(struct device *dev)
}
if (dev->coherent_dma_mask)
- size = max(dev->coherent_dma_mask, dev->coherent_dma_mask + 1);
+ end = dev->coherent_dma_mask;
else
- size = 1ULL << 32;
+ end = (1ULL << 32) - 1;
ret = acpi_dma_get_range(dev, &map);
if (!ret && map) {
@@ -36,18 +35,16 @@ void acpi_arch_dma_setup(struct device *dev)
end = r->dma_start + r->size - 1;
}
- size = end + 1;
dev->dma_range_map = map;
}
if (ret == -ENODEV)
- ret = iort_dma_get_ranges(dev, &size);
+ ret = iort_dma_get_ranges(dev, &end);
if (!ret) {
/*
* Limit coherent and dma mask based on size retrieved from
* firmware.
*/
- end = size - 1;
mask = DMA_BIT_MASK(ilog2(end) + 1);
dev->bus_dma_limit = end;
dev->coherent_dma_mask = min(dev->coherent_dma_mask, mask);
diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c
index 6496ff5a6ba2..c0b1c2c19444 100644
--- a/drivers/acpi/arm64/iort.c
+++ b/drivers/acpi/arm64/iort.c
@@ -1367,7 +1367,7 @@ int iort_iommu_configure_id(struct device *dev, const u32 *input_id)
{ return -ENODEV; }
#endif
-static int nc_dma_get_range(struct device *dev, u64 *size)
+static int nc_dma_get_range(struct device *dev, u64 *limit)
{
struct acpi_iort_node *node;
struct acpi_iort_named_component *ncomp;
@@ -1384,13 +1384,13 @@ static int nc_dma_get_range(struct device *dev, u64 *size)
return -EINVAL;
}
- *size = ncomp->memory_address_limit >= 64 ? U64_MAX :
- 1ULL<<ncomp->memory_address_limit;
+ *limit = ncomp->memory_address_limit >= 64 ? U64_MAX :
+ (1ULL << ncomp->memory_address_limit) - 1;
return 0;
}
-static int rc_dma_get_range(struct device *dev, u64 *size)
+static int rc_dma_get_range(struct device *dev, u64 *limit)
{
struct acpi_iort_node *node;
struct acpi_iort_root_complex *rc;
@@ -1408,8 +1408,8 @@ static int rc_dma_get_range(struct device *dev, u64 *size)
return -EINVAL;
}
- *size = rc->memory_address_limit >= 64 ? U64_MAX :
- 1ULL<<rc->memory_address_limit;
+ *limit = rc->memory_address_limit >= 64 ? U64_MAX :
+ (1ULL << rc->memory_address_limit) - 1;
return 0;
}
@@ -1417,16 +1417,16 @@ static int rc_dma_get_range(struct device *dev, u64 *size)
/**
* iort_dma_get_ranges() - Look up DMA addressing limit for the device
* @dev: device to lookup
- * @size: DMA range size result pointer
+ * @limit: DMA limit result pointer
*
* Return: 0 on success, an error otherwise.
*/
-int iort_dma_get_ranges(struct device *dev, u64 *size)
+int iort_dma_get_ranges(struct device *dev, u64 *limit)
{
if (dev_is_pci(dev))
- return rc_dma_get_range(dev, size);
+ return rc_dma_get_range(dev, limit);
else
- return nc_dma_get_range(dev, size);
+ return nc_dma_get_range(dev, limit);
}
static void __init acpi_iort_register_irq(int hwirq, const char *name,
diff --git a/include/linux/acpi_iort.h b/include/linux/acpi_iort.h
index 1cb65592c95d..d4ed5622cf2b 100644
--- a/include/linux/acpi_iort.h
+++ b/include/linux/acpi_iort.h
@@ -39,7 +39,7 @@ void iort_get_rmr_sids(struct fwnode_handle *iommu_fwnode,
void iort_put_rmr_sids(struct fwnode_handle *iommu_fwnode,
struct list_head *head);
/* IOMMU interface */
-int iort_dma_get_ranges(struct device *dev, u64 *size);
+int iort_dma_get_ranges(struct device *dev, u64 *limit);
int iort_iommu_configure_id(struct device *dev, const u32 *id_in);
void iort_iommu_get_resv_regions(struct device *dev, struct list_head *head);
phys_addr_t acpi_iort_dma_get_max_cpu_address(void);
@@ -55,7 +55,7 @@ void iort_get_rmr_sids(struct fwnode_handle *iommu_fwnode, struct list_head *hea
static inline
void iort_put_rmr_sids(struct fwnode_handle *iommu_fwnode, struct list_head *head) { }
/* IOMMU interface */
-static inline int iort_dma_get_ranges(struct device *dev, u64 *size)
+static inline int iort_dma_get_ranges(struct device *dev, u64 *limit)
{ return -ENODEV; }
static inline int iort_iommu_configure_id(struct device *dev, const u32 *id_in)
{ return -ENODEV; }
--
2.39.2.101.g768bb238c484.dirty
Several places want to compute the lower and/or upper bounds of a
dma_range_map, so let's factor that out into reusable helpers.
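As a rough standalone model of the new helpers and how a caller uses
them (the struct layout mirrors bus_dma_region; a zero-size entry
terminates the map, as in the kernel):

#include <stdint.h>
#include <stdio.h>

struct bus_dma_region {
        uint64_t dma_start;
        uint64_t size;
};

/* Same logic as the dma_range_map_min()/_max() helpers added below */
static uint64_t range_min(const struct bus_dma_region *map)
{
        uint64_t ret = UINT64_MAX;

        for (; map->size; map++)
                if (map->dma_start < ret)
                        ret = map->dma_start;
        return ret;
}

static uint64_t range_max(const struct bus_dma_region *map)
{
        uint64_t ret = 0;

        for (; map->size; map++)
                if (map->dma_start + map->size - 1 > ret)
                        ret = map->dma_start + map->size - 1;
        return ret;
}

int main(void)
{
        struct bus_dma_region map[] = {
                { .dma_start = 0x80000000ULL, .size = 0x40000000ULL },
                { .dma_start = 0x00000000ULL, .size = 0x20000000ULL },
                { }     /* terminator */
        };

        printf("min = 0x%llx, max = 0x%llx\n",
               (unsigned long long)range_min(map),
               (unsigned long long)range_max(map));
        return 0;
}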
Acked-by: Rob Herring <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: Hanjun Guo <[email protected]> # For arm64
Reviewed-by: Jason Gunthorpe <[email protected]>
Tested-by: Hanjun Guo <[email protected]>
Signed-off-by: Robin Murphy <[email protected]>
---
v2: fix warning for 32-bit builds
---
arch/loongarch/kernel/dma.c | 9 ++-------
drivers/acpi/arm64/dma.c | 8 +-------
drivers/of/device.c | 11 ++---------
include/linux/dma-direct.h | 18 ++++++++++++++++++
4 files changed, 23 insertions(+), 23 deletions(-)
diff --git a/arch/loongarch/kernel/dma.c b/arch/loongarch/kernel/dma.c
index 7a9c6a9dd2d0..429555fb4e13 100644
--- a/arch/loongarch/kernel/dma.c
+++ b/arch/loongarch/kernel/dma.c
@@ -8,17 +8,12 @@
void acpi_arch_dma_setup(struct device *dev)
{
int ret;
- u64 mask, end = 0;
+ u64 mask, end;
const struct bus_dma_region *map = NULL;
ret = acpi_dma_get_range(dev, &map);
if (!ret && map) {
- const struct bus_dma_region *r = map;
-
- for (end = 0; r->size; r++) {
- if (r->dma_start + r->size - 1 > end)
- end = r->dma_start + r->size - 1;
- }
+ end = dma_range_map_max(map);
mask = DMA_BIT_MASK(ilog2(end) + 1);
dev->bus_dma_limit = end;
diff --git a/drivers/acpi/arm64/dma.c b/drivers/acpi/arm64/dma.c
index b98a149f8d50..52b2abf88689 100644
--- a/drivers/acpi/arm64/dma.c
+++ b/drivers/acpi/arm64/dma.c
@@ -28,13 +28,7 @@ void acpi_arch_dma_setup(struct device *dev)
ret = acpi_dma_get_range(dev, &map);
if (!ret && map) {
- const struct bus_dma_region *r = map;
-
- for (end = 0; r->size; r++) {
- if (r->dma_start + r->size - 1 > end)
- end = r->dma_start + r->size - 1;
- }
-
+ end = dma_range_map_max(map);
dev->dma_range_map = map;
}
diff --git a/drivers/of/device.c b/drivers/of/device.c
index 841ccd3a19d1..9e7963972fa7 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -117,16 +117,9 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
if (!force_dma)
return ret == -ENODEV ? 0 : ret;
} else {
- const struct bus_dma_region *r = map;
-
/* Determine the overall bounds of all DMA regions */
- for (dma_start = ~0; r->size; r++) {
- /* Take lower and upper limits */
- if (r->dma_start < dma_start)
- dma_start = r->dma_start;
- if (r->dma_start + r->size > end)
- end = r->dma_start + r->size;
- }
+ dma_start = dma_range_map_min(map);
+ end = dma_range_map_max(map);
}
/*
diff --git a/include/linux/dma-direct.h b/include/linux/dma-direct.h
index 3eb3589ff43e..edbe13d00776 100644
--- a/include/linux/dma-direct.h
+++ b/include/linux/dma-direct.h
@@ -54,6 +54,24 @@ static inline phys_addr_t translate_dma_to_phys(struct device *dev,
return (phys_addr_t)-1;
}
+static inline dma_addr_t dma_range_map_min(const struct bus_dma_region *map)
+{
+ dma_addr_t ret = (dma_addr_t)U64_MAX;
+
+ for (; map->size; map++)
+ ret = min(ret, map->dma_start);
+ return ret;
+}
+
+static inline dma_addr_t dma_range_map_max(const struct bus_dma_region *map)
+{
+ dma_addr_t ret = 0;
+
+ for (; map->size; map++)
+ ret = max(ret, map->dma_start + map->size - 1);
+ return ret;
+}
+
#ifdef CONFIG_ARCH_HAS_PHYS_TO_DMA
#include <asm/dma-direct.h>
#ifndef phys_to_dma_unencrypted
--
2.39.2.101.g768bb238c484.dirty
The dma_base, size and iommu arguments are only used by ARM, and can
now easily be deduced from the device itself, so there's no need to pass
them through the callchain as well.
Acked-by: Rob Herring <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: Michael Kelley <[email protected]> # For Hyper-V
Reviewed-by: Jason Gunthorpe <[email protected]>
Tested-by: Hanjun Guo <[email protected]>
Signed-off-by: Robin Murphy <[email protected]>
---
v2: Make sure the ARM changes actually build (oops...)
---
arch/arc/mm/dma.c | 3 +--
arch/arm/mm/dma-mapping-nommu.c | 3 +--
arch/arm/mm/dma-mapping.c | 16 +++++++++-------
arch/arm64/mm/dma-mapping.c | 3 +--
arch/mips/mm/dma-noncoherent.c | 3 +--
arch/riscv/mm/dma-noncoherent.c | 3 +--
drivers/acpi/scan.c | 7 +------
drivers/hv/hv_common.c | 6 +-----
drivers/of/device.c | 4 +---
include/linux/dma-map-ops.h | 6 ++----
10 files changed, 19 insertions(+), 35 deletions(-)
diff --git a/arch/arc/mm/dma.c b/arch/arc/mm/dma.c
index 197707bc7658..6b85e94f3275 100644
--- a/arch/arc/mm/dma.c
+++ b/arch/arc/mm/dma.c
@@ -90,8 +90,7 @@ void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
/*
* Plug in direct dma map ops.
*/
-void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
- bool coherent)
+void arch_setup_dma_ops(struct device *dev, bool coherent)
{
/*
* IOC hardware snoops all DMA traffic keeping the caches consistent
diff --git a/arch/arm/mm/dma-mapping-nommu.c b/arch/arm/mm/dma-mapping-nommu.c
index b94850b57995..97db5397c320 100644
--- a/arch/arm/mm/dma-mapping-nommu.c
+++ b/arch/arm/mm/dma-mapping-nommu.c
@@ -33,8 +33,7 @@ void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
}
}
-void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
- bool coherent)
+void arch_setup_dma_ops(struct device *dev, bool coherent)
{
if (IS_ENABLED(CONFIG_CPU_V7M)) {
/*
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index f68db05eba29..5adf1769eee4 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -1709,11 +1709,15 @@ void arm_iommu_detach_device(struct device *dev)
}
EXPORT_SYMBOL_GPL(arm_iommu_detach_device);
-static void arm_setup_iommu_dma_ops(struct device *dev, u64 dma_base, u64 size,
- bool coherent)
+static void arm_setup_iommu_dma_ops(struct device *dev)
{
struct dma_iommu_mapping *mapping;
+ u64 dma_base = 0, size = 1ULL << 32;
+ if (dev->dma_range_map) {
+ dma_base = dma_range_map_min(dev->dma_range_map);
+ size = dma_range_map_max(dev->dma_range_map) - dma_base;
+ }
mapping = arm_iommu_create_mapping(dev->bus, dma_base, size);
if (IS_ERR(mapping)) {
pr_warn("Failed to create %llu-byte IOMMU mapping for device %s\n",
@@ -1744,8 +1748,7 @@ static void arm_teardown_iommu_dma_ops(struct device *dev)
#else
-static void arm_setup_iommu_dma_ops(struct device *dev, u64 dma_base, u64 size,
- bool coherent)
+static void arm_setup_iommu_dma_ops(struct device *dev)
{
}
@@ -1753,8 +1756,7 @@ static void arm_teardown_iommu_dma_ops(struct device *dev) { }
#endif /* CONFIG_ARM_DMA_USE_IOMMU */
-void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
- bool coherent)
+void arch_setup_dma_ops(struct device *dev, bool coherent)
{
/*
* Due to legacy code that sets the ->dma_coherent flag from a bus
@@ -1774,7 +1776,7 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
return;
if (device_iommu_mapped(dev))
- arm_setup_iommu_dma_ops(dev, dma_base, size, coherent);
+ arm_setup_iommu_dma_ops(dev);
xen_setup_dma_ops(dev);
dev->archdata.dma_ops_setup = true;
diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index 313d8938a2f0..0b320a25a471 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -46,8 +46,7 @@ void arch_teardown_dma_ops(struct device *dev)
}
#endif
-void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
- bool coherent)
+void arch_setup_dma_ops(struct device *dev, bool coherent)
{
int cls = cache_line_size_of_cpu();
diff --git a/arch/mips/mm/dma-noncoherent.c b/arch/mips/mm/dma-noncoherent.c
index 0f3cec663a12..ab4f2a75a7d0 100644
--- a/arch/mips/mm/dma-noncoherent.c
+++ b/arch/mips/mm/dma-noncoherent.c
@@ -137,8 +137,7 @@ void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
#endif
#ifdef CONFIG_ARCH_HAS_SETUP_DMA_OPS
-void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
- bool coherent)
+void arch_setup_dma_ops(struct device *dev, bool coherent)
{
dev->dma_coherent = coherent;
}
diff --git a/arch/riscv/mm/dma-noncoherent.c b/arch/riscv/mm/dma-noncoherent.c
index 843107f834b2..cb89d7e0ba88 100644
--- a/arch/riscv/mm/dma-noncoherent.c
+++ b/arch/riscv/mm/dma-noncoherent.c
@@ -128,8 +128,7 @@ void arch_dma_prep_coherent(struct page *page, size_t size)
ALT_CMO_OP(FLUSH, flush_addr, size, riscv_cbom_block_size);
}
-void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
- bool coherent)
+void arch_setup_dma_ops(struct device *dev, bool coherent)
{
WARN_TAINT(!coherent && riscv_cbom_block_size > ARCH_DMA_MINALIGN,
TAINT_CPU_OUT_OF_SPEC,
diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
index 7c157bf92695..b1a88992c1a9 100644
--- a/drivers/acpi/scan.c
+++ b/drivers/acpi/scan.c
@@ -1675,12 +1675,7 @@ int acpi_dma_configure_id(struct device *dev, enum dev_dma_attr attr,
if (ret == -EPROBE_DEFER)
return -EPROBE_DEFER;
- /*
- * Historically this routine doesn't fail driver probing due to errors
- * in acpi_iommu_configure_id()
- */
-
- arch_setup_dma_ops(dev, 0, U64_MAX, attr == DEV_DMA_COHERENT);
+ arch_setup_dma_ops(dev, attr == DEV_DMA_COHERENT);
return 0;
}
diff --git a/drivers/hv/hv_common.c b/drivers/hv/hv_common.c
index dde3f9b6871a..9c452bfbd571 100644
--- a/drivers/hv/hv_common.c
+++ b/drivers/hv/hv_common.c
@@ -561,11 +561,7 @@ EXPORT_SYMBOL_GPL(hv_query_ext_cap);
void hv_setup_dma_ops(struct device *dev, bool coherent)
{
- /*
- * Hyper-V does not offer a vIOMMU in the guest
- * VM, so pass 0/NULL for the IOMMU settings
- */
- arch_setup_dma_ops(dev, 0, 0, coherent);
+ arch_setup_dma_ops(dev, coherent);
}
EXPORT_SYMBOL_GPL(hv_setup_dma_ops);
diff --git a/drivers/of/device.c b/drivers/of/device.c
index 9e7963972fa7..312c63361211 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -95,7 +95,6 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
{
const struct bus_dma_region *map = NULL;
struct device_node *bus_np;
- u64 dma_start = 0;
u64 mask, end = 0;
bool coherent;
int iommu_ret;
@@ -118,7 +117,6 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
return ret == -ENODEV ? 0 : ret;
} else {
/* Determine the overall bounds of all DMA regions */
- dma_start = dma_range_map_min(map);
end = dma_range_map_max(map);
}
@@ -175,7 +173,7 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
} else
dev_dbg(dev, "device is behind an iommu\n");
- arch_setup_dma_ops(dev, dma_start, end - dma_start + 1, coherent);
+ arch_setup_dma_ops(dev, coherent);
if (iommu_ret)
of_dma_set_restricted_buffer(dev, np);
diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index 4abc60f04209..ed89e1ce0114 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -426,11 +426,9 @@ bool arch_dma_unmap_sg_direct(struct device *dev, struct scatterlist *sg,
#endif
#ifdef CONFIG_ARCH_HAS_SETUP_DMA_OPS
-void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
- bool coherent);
+void arch_setup_dma_ops(struct device *dev, bool coherent);
#else
-static inline void arch_setup_dma_ops(struct device *dev, u64 dma_base,
- u64 size, bool coherent)
+static inline void arch_setup_dma_ops(struct device *dev, bool coherent)
{
}
#endif /* CONFIG_ARCH_HAS_SETUP_DMA_OPS */
--
2.39.2.101.g768bb238c484.dirty
It's now easy to retrieve the device's DMA limits if we want to check
them against the domain aperture, so do that ourselves instead of
relying on them being passed through the callchain.
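Sketched standalone with made-up names, the check that moves into
iommu_dma_init_domain() is just an overlap test between the device's
inclusive DMA range and the domain aperture:

#include <stdint.h>
#include <stdio.h>

static int dma_range_usable(uint64_t map_min, uint64_t map_max,
                            uint64_t aper_start, uint64_t aper_end)
{
        /* The two inclusive intervals must overlap at least partially */
        return map_min <= aper_end && map_max >= aper_start;
}

int main(void)
{
        /* e.g. a 32-bit-limited device against a typical IOVA aperture */
        if (dma_range_usable(0, 0xffffffffULL, 0x1000, 0xffffffffffffULL))
                printf("usable\n");
        else
                printf("specified DMA range outside IOMMU capability\n");
        return 0;
}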
Reviewed-by: Jason Gunthorpe <[email protected]>
Tested-by: Hanjun Guo <[email protected]>
Signed-off-by: Robin Murphy <[email protected]>
---
drivers/iommu/dma-iommu.c | 21 +++++++++------------
1 file changed, 9 insertions(+), 12 deletions(-)
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index a3039005b696..f542eabaefa4 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -660,19 +660,16 @@ static void iommu_dma_init_options(struct iommu_dma_options *options,
/**
* iommu_dma_init_domain - Initialise a DMA mapping domain
* @domain: IOMMU domain previously prepared by iommu_get_dma_cookie()
- * @base: IOVA at which the mappable address space starts
- * @limit: Last address of the IOVA space
* @dev: Device the domain is being initialised for
*
- * @base and @limit + 1 should be exact multiples of IOMMU page granularity to
- * avoid rounding surprises. If necessary, we reserve the page at address 0
+ * If the geometry and dma_range_map include address 0, we reserve that page
* to ensure it is an invalid IOVA. It is safe to reinitialise a domain, but
* any change which could make prior IOVAs invalid will fail.
*/
-static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
- dma_addr_t limit, struct device *dev)
+static int iommu_dma_init_domain(struct iommu_domain *domain, struct device *dev)
{
struct iommu_dma_cookie *cookie = domain->iova_cookie;
+ const struct bus_dma_region *map = dev->dma_range_map;
unsigned long order, base_pfn;
struct iova_domain *iovad;
int ret;
@@ -684,18 +681,18 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
/* Use the smallest supported page size for IOVA granularity */
order = __ffs(domain->pgsize_bitmap);
- base_pfn = max_t(unsigned long, 1, base >> order);
+ base_pfn = 1;
/* Check the domain allows at least some access to the device... */
- if (domain->geometry.force_aperture) {
+ if (map) {
+ dma_addr_t base = dma_range_map_min(map);
if (base > domain->geometry.aperture_end ||
- limit < domain->geometry.aperture_start) {
+ dma_range_map_max(map) < domain->geometry.aperture_start) {
pr_warn("specified DMA range outside IOMMU capability\n");
return -EFAULT;
}
/* ...then finally give it a kicking to make sure it fits */
- base_pfn = max_t(unsigned long, base_pfn,
- domain->geometry.aperture_start >> order);
+ base_pfn = max(base, domain->geometry.aperture_start) >> order;
}
/* start_pfn is always nonzero for an already-initialised domain */
@@ -1760,7 +1757,7 @@ void iommu_setup_dma_ops(struct device *dev, u64 dma_base, u64 dma_limit)
* underlying IOMMU driver needs to support via the dma-iommu layer.
*/
if (iommu_is_dma_domain(domain)) {
- if (iommu_dma_init_domain(domain, dma_base, dma_limit, dev))
+ if (iommu_dma_init_domain(domain, dev))
goto out_err;
dev->dma_ops = &iommu_dma_ops;
}
--
2.39.2.101.g768bb238c484.dirty
It's somewhat hard to see, but arm64's arch_setup_dma_ops() should only
ever call iommu_setup_dma_ops() after a successful iommu_probe_device(),
which means there should be no harm in achieving the same order of
operations by running it off the back of iommu_probe_device() itself.
This then puts it in line with the x86 and s390 .probe_finalize bodges,
letting us pull it all into the main flow properly. As a bonus this lets
us fold in and de-scope the PCI workaround setup as well.
At this point we can also then pull the call up inside the group mutex,
and avoid having to think about whether iommu_group_store_type() could
theoretically race and free the domain if iommu_setup_dma_ops() ran just
*before* iommu_device_use_default_domain() claims it... Furthermore we
replace one .probe_finalize call completely, since the only remaining
implementations are now one which only needs to run once for the initial
boot-time probe, and two which themselves render that path unreachable.
This leaves us a big step closer to realistically being able to unpick
the variety of different things that iommu_setup_dma_ops() has been
muddling together, and further streamline iommu-dma into core API flows
in future.
Reviewed-by: Lu Baolu <[email protected]> # For Intel IOMMU
Reviewed-by: Jason Gunthorpe <[email protected]>
Tested-by: Hanjun Guo <[email protected]>
Signed-off-by: Robin Murphy <[email protected]>
---
v2: Shuffle around to make sure the iommu_group_do_probe_finalize() case
is covered as well, with bonus side-effects as above.
v3: *Really* do that, remembering the other two probe_finalize sites too.
---
arch/arm64/mm/dma-mapping.c | 2 --
drivers/iommu/amd/iommu.c | 8 --------
drivers/iommu/dma-iommu.c | 18 ++++++------------
drivers/iommu/dma-iommu.h | 14 ++++++--------
drivers/iommu/intel/iommu.c | 7 -------
drivers/iommu/iommu.c | 20 +++++++-------------
drivers/iommu/s390-iommu.c | 6 ------
drivers/iommu/virtio-iommu.c | 10 ----------
include/linux/iommu.h | 7 -------
9 files changed, 19 insertions(+), 73 deletions(-)
diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index 61886e43e3a1..313d8938a2f0 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -58,8 +58,6 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
ARCH_DMA_MINALIGN, cls);
dev->dma_coherent = coherent;
- if (device_iommu_mapped(dev))
- iommu_setup_dma_ops(dev, dma_base, dma_base + size - 1);
xen_setup_dma_ops(dev);
}
diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index d35c1b8c8e65..085abf098fa9 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -2175,13 +2175,6 @@ static struct iommu_device *amd_iommu_probe_device(struct device *dev)
return iommu_dev;
}
-static void amd_iommu_probe_finalize(struct device *dev)
-{
- /* Domains are initialized for this device - have a look what we ended up with */
- set_dma_ops(dev, NULL);
- iommu_setup_dma_ops(dev, 0, U64_MAX);
-}
-
static void amd_iommu_release_device(struct device *dev)
{
struct amd_iommu *iommu;
@@ -2784,7 +2777,6 @@ const struct iommu_ops amd_iommu_ops = {
.domain_alloc_user = amd_iommu_domain_alloc_user,
.probe_device = amd_iommu_probe_device,
.release_device = amd_iommu_release_device,
- .probe_finalize = amd_iommu_probe_finalize,
.device_group = amd_iommu_device_group,
.get_resv_regions = amd_iommu_get_resv_regions,
.is_attach_deferred = amd_iommu_is_attach_deferred,
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index f542eabaefa4..89a53c2f2cf9 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1741,25 +1741,20 @@ static const struct dma_map_ops iommu_dma_ops = {
.max_mapping_size = iommu_dma_max_mapping_size,
};
-/*
- * The IOMMU core code allocates the default DMA domain, which the underlying
- * IOMMU driver needs to support via the dma-iommu layer.
- */
-void iommu_setup_dma_ops(struct device *dev, u64 dma_base, u64 dma_limit)
+void iommu_setup_dma_ops(struct device *dev)
{
struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
- if (!domain)
- goto out_err;
+ if (dev_is_pci(dev))
+ dev->iommu->pci_32bit_workaround = !iommu_dma_forcedac;
- /*
- * The IOMMU core code allocates the default DMA domain, which the
- * underlying IOMMU driver needs to support via the dma-iommu layer.
- */
if (iommu_is_dma_domain(domain)) {
if (iommu_dma_init_domain(domain, dev))
goto out_err;
dev->dma_ops = &iommu_dma_ops;
+ } else if (dev->dma_ops == &iommu_dma_ops) {
+ /* Clean up if we've switched *from* a DMA domain */
+ dev->dma_ops = NULL;
}
return;
@@ -1767,7 +1762,6 @@ void iommu_setup_dma_ops(struct device *dev, u64 dma_base, u64 dma_limit)
pr_warn("Failed to set up IOMMU for device %s; retaining platform DMA ops\n",
dev_name(dev));
}
-EXPORT_SYMBOL_GPL(iommu_setup_dma_ops);
static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev,
phys_addr_t msi_addr, struct iommu_domain *domain)
diff --git a/drivers/iommu/dma-iommu.h b/drivers/iommu/dma-iommu.h
index c829f1f82a99..c12d63457c76 100644
--- a/drivers/iommu/dma-iommu.h
+++ b/drivers/iommu/dma-iommu.h
@@ -9,6 +9,8 @@
#ifdef CONFIG_IOMMU_DMA
+void iommu_setup_dma_ops(struct device *dev);
+
int iommu_get_dma_cookie(struct iommu_domain *domain);
void iommu_put_dma_cookie(struct iommu_domain *domain);
@@ -17,13 +19,13 @@ int iommu_dma_init_fq(struct iommu_domain *domain);
void iommu_dma_get_resv_regions(struct device *dev, struct list_head *list);
extern bool iommu_dma_forcedac;
-static inline void iommu_dma_set_pci_32bit_workaround(struct device *dev)
-{
- dev->iommu->pci_32bit_workaround = !iommu_dma_forcedac;
-}
#else /* CONFIG_IOMMU_DMA */
+static inline void iommu_setup_dma_ops(struct device *dev)
+{
+}
+
static inline int iommu_dma_init_fq(struct iommu_domain *domain)
{
return -EINVAL;
@@ -42,9 +44,5 @@ static inline void iommu_dma_get_resv_regions(struct device *dev, struct list_he
{
}
-static inline void iommu_dma_set_pci_32bit_workaround(struct device *dev)
-{
-}
-
#endif /* CONFIG_IOMMU_DMA */
#endif /* __DMA_IOMMU_H */
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 50eb9aed47cc..b2f6d8564463 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -4349,12 +4349,6 @@ static void intel_iommu_release_device(struct device *dev)
set_dma_ops(dev, NULL);
}
-static void intel_iommu_probe_finalize(struct device *dev)
-{
- set_dma_ops(dev, NULL);
- iommu_setup_dma_ops(dev, 0, U64_MAX);
-}
-
static void intel_iommu_get_resv_regions(struct device *device,
struct list_head *head)
{
@@ -4839,7 +4833,6 @@ const struct iommu_ops intel_iommu_ops = {
.domain_alloc = intel_iommu_domain_alloc,
.domain_alloc_user = intel_iommu_domain_alloc_user,
.probe_device = intel_iommu_probe_device,
- .probe_finalize = intel_iommu_probe_finalize,
.release_device = intel_iommu_release_device,
.get_resv_regions = intel_iommu_get_resv_regions,
.device_group = intel_iommu_device_group,
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index a95a483def2d..f01133b906e2 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -581,10 +581,11 @@ static int __iommu_probe_device(struct device *dev, struct list_head *group_list
if (list_empty(&group->entry))
list_add_tail(&group->entry, group_list);
}
- mutex_unlock(&group->mutex);
- if (dev_is_pci(dev))
- iommu_dma_set_pci_32bit_workaround(dev);
+ if (group->default_domain)
+ iommu_setup_dma_ops(dev);
+
+ mutex_unlock(&group->mutex);
return 0;
@@ -1828,6 +1829,8 @@ int bus_iommu_probe(const struct bus_type *bus)
mutex_unlock(&group->mutex);
return ret;
}
+ for_each_group_device(group, gdev)
+ iommu_setup_dma_ops(gdev->dev);
mutex_unlock(&group->mutex);
/*
@@ -3066,18 +3069,9 @@ static ssize_t iommu_group_store_type(struct iommu_group *group,
if (ret)
goto out_unlock;
- /*
- * Release the mutex here because ops->probe_finalize() call-back of
- * some vendor IOMMU drivers calls arm_iommu_attach_device() which
- * in-turn might call back into IOMMU core code, where it tries to take
- * group->mutex, resulting in a deadlock.
- */
- mutex_unlock(&group->mutex);
-
/* Make sure dma_ops is appropriatley set */
for_each_group_device(group, gdev)
- iommu_group_do_probe_finalize(gdev->dev);
- return count;
+ iommu_setup_dma_ops(gdev->dev);
out_unlock:
mutex_unlock(&group->mutex);
diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c
index 9a5196f523de..d8eaa7ea380b 100644
--- a/drivers/iommu/s390-iommu.c
+++ b/drivers/iommu/s390-iommu.c
@@ -695,11 +695,6 @@ static size_t s390_iommu_unmap_pages(struct iommu_domain *domain,
return size;
}
-static void s390_iommu_probe_finalize(struct device *dev)
-{
- iommu_setup_dma_ops(dev, 0, U64_MAX);
-}
-
struct zpci_iommu_ctrs *zpci_get_iommu_ctrs(struct zpci_dev *zdev)
{
if (!zdev || !zdev->s390_domain)
@@ -785,7 +780,6 @@ static const struct iommu_ops s390_iommu_ops = {
.capable = s390_iommu_capable,
.domain_alloc_paging = s390_domain_alloc_paging,
.probe_device = s390_iommu_probe_device,
- .probe_finalize = s390_iommu_probe_finalize,
.release_device = s390_iommu_release_device,
.device_group = generic_device_group,
.pgsize_bitmap = SZ_4K,
diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index 04048f64a2c0..8e776f6c6e35 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -1025,15 +1025,6 @@ static struct iommu_device *viommu_probe_device(struct device *dev)
return ERR_PTR(ret);
}
-static void viommu_probe_finalize(struct device *dev)
-{
-#ifndef CONFIG_ARCH_HAS_SETUP_DMA_OPS
- /* First clear the DMA ops in case we're switching from a DMA domain */
- set_dma_ops(dev, NULL);
- iommu_setup_dma_ops(dev, 0, U64_MAX);
-#endif
-}
-
static void viommu_release_device(struct device *dev)
{
struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
@@ -1073,7 +1064,6 @@ static struct iommu_ops viommu_ops = {
.capable = viommu_capable,
.domain_alloc = viommu_domain_alloc,
.probe_device = viommu_probe_device,
- .probe_finalize = viommu_probe_finalize,
.release_device = viommu_release_device,
.device_group = viommu_device_group,
.get_resv_regions = viommu_get_resv_regions,
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 190173906ec9..ae6e5adebbd1 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -1445,9 +1445,6 @@ static inline void iommu_debugfs_setup(void) {}
#ifdef CONFIG_IOMMU_DMA
#include <linux/msi.h>
-/* Setup call for arch DMA mapping code */
-void iommu_setup_dma_ops(struct device *dev, u64 dma_base, u64 dma_limit);
-
int iommu_get_msi_cookie(struct iommu_domain *domain, dma_addr_t base);
int iommu_dma_prepare_msi(struct msi_desc *desc, phys_addr_t msi_addr);
@@ -1458,10 +1455,6 @@ void iommu_dma_compose_msi_msg(struct msi_desc *desc, struct msi_msg *msg);
struct msi_desc;
struct msi_msg;
-static inline void iommu_setup_dma_ops(struct device *dev, u64 dma_base, u64 dma_limit)
-{
-}
-
static inline int iommu_get_msi_cookie(struct iommu_domain *domain, dma_addr_t base)
{
return -ENODEV;
--
2.39.2.101.g768bb238c484.dirty
On Fri, Apr 19, 2024 at 05:54:39PM +0100, Robin Murphy wrote:
> v3: https://lore.kernel.org/linux-iommu/[email protected]/
>
> Hi all,
>
> Since this ended up missing the boat for 6.9, here's a rebase and resend
> with the additional tags from v3 collected.
And just to clarify: I expect this to go in through the iommu tree.
If you need further action from me, just let me know.
On Fri, Apr 19, 2024 at 05:54:46PM +0100, Robin Murphy wrote:
> diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
> index 313d8938a2f0..0b320a25a471 100644
> --- a/arch/arm64/mm/dma-mapping.c
> +++ b/arch/arm64/mm/dma-mapping.c
> @@ -46,8 +46,7 @@ void arch_teardown_dma_ops(struct device *dev)
> }
> #endif
>
> -void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
> - bool coherent)
> +void arch_setup_dma_ops(struct device *dev, bool coherent)
> {
> int cls = cache_line_size_of_cpu();
Acked-by: Catalin Marinas <[email protected]>
On Fri, Apr 19, 2024 at 05:54:45PM +0100, Robin Murphy wrote:
> diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
> index 61886e43e3a1..313d8938a2f0 100644
> --- a/arch/arm64/mm/dma-mapping.c
> +++ b/arch/arm64/mm/dma-mapping.c
> @@ -58,8 +58,6 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
> ARCH_DMA_MINALIGN, cls);
>
> dev->dma_coherent = coherent;
> - if (device_iommu_mapped(dev))
> - iommu_setup_dma_ops(dev, dma_base, dma_base + size - 1);
>
> xen_setup_dma_ops(dev);
> }
In case you need an ack for the arm64 changes:
Acked-by: Catalin Marinas <[email protected]>
On Fri, Apr 19, 2024 at 05:54:39PM +0100, Robin Murphy wrote:
> Since this ended up missing the boat for 6.9, here's a rebase and resend
> with the additional tags from v3 collected.
Applied, thanks.
On Fri, Apr 19, 2024 at 05:54:45PM +0100, Robin Murphy wrote:
> It's somewhat hard to see, but arm64's arch_setup_dma_ops() should only
> ever call iommu_setup_dma_ops() after a successful iommu_probe_device(),
> which means there should be no harm in achieving the same order of
> operations by running it off the back of iommu_probe_device() itself.
> This then puts it in line with the x86 and s390 .probe_finalize bodges,
> letting us pull it all into the main flow properly. As a bonus this lets
> us fold in and de-scope the PCI workaround setup as well.
>
> At this point we can also then pull the call up inside the group mutex,
> and avoid having to think about whether iommu_group_store_type() could
> theoretically race and free the domain if iommu_setup_dma_ops() ran just
> *before* iommu_device_use_default_domain() claims it... Furthermore we
> replace one .probe_finalize call completely, since the only remaining
> implementations are now one which only needs to run once for the initial
> boot-time probe, and two which themselves render that path unreachable.
>
> This leaves us a big step closer to realistically being able to unpick
> the variety of different things that iommu_setup_dma_ops() has been
> muddling together, and further streamline iommu-dma into core API flows
> in future.
>
> Reviewed-by: Lu Baolu <[email protected]> # For Intel IOMMU
> Reviewed-by: Jason Gunthorpe <[email protected]>
> Tested-by: Hanjun Guo <[email protected]>
> Signed-off-by: Robin Murphy <[email protected]>
> ---
> v2: Shuffle around to make sure the iommu_group_do_probe_finalize() case
> is covered as well, with bonus side-effects as above.
> v3: *Really* do that, remembering the other two probe_finalize sites too.
> ---
> arch/arm64/mm/dma-mapping.c | 2 --
> drivers/iommu/amd/iommu.c | 8 --------
> drivers/iommu/dma-iommu.c | 18 ++++++------------
> drivers/iommu/dma-iommu.h | 14 ++++++--------
> drivers/iommu/intel/iommu.c | 7 -------
> drivers/iommu/iommu.c | 20 +++++++-------------
> drivers/iommu/s390-iommu.c | 6 ------
> drivers/iommu/virtio-iommu.c | 10 ----------
> include/linux/iommu.h | 7 -------
> 9 files changed, 19 insertions(+), 73 deletions(-)
This patch breaks UFS on Qualcomm SC8180X Primus platform:
[ 3.846856] arm-smmu 15000000.iommu: Unhandled context fault: fsr=0x402, iova=0x1032db3e0, fsynr=0x130000, cbfrsynra=0x300, cb=4
[ 3.846880] ufshcd-qcom 1d84000.ufshc: ufshcd_check_errors: saved_err 0x20000 saved_uic_err 0x0
[ 3.846929] host_regs: 00000000: 1587031f 00000000 00000300 00000000
[ 3.846935] host_regs: 00000010: 01000000 00010217 00000000 00000000
[ 3.846941] host_regs: 00000020: 00000000 00070ef5 00000000 00000000
[ 3.846946] host_regs: 00000030: 0000000f 00000001 00000000 00000000
[ 3.846951] host_regs: 00000040: 00000000 00000000 00000000 00000000
[ 3.846956] host_regs: 00000050: 032db000 00000001 00000000 00000000
[ 3.846962] host_regs: 00000060: 00000000 80000000 00000000 00000000
[ 3.846967] host_regs: 00000070: 032dd000 00000001 00000000 00000000
[ 3.846972] host_regs: 00000080: 00000000 00000000 00000000 00000000
[ 3.846977] host_regs: 00000090: 00000016 00000000 00000000 0000000c
[ 3.847074] ufshcd-qcom 1d84000.ufshc: ufshcd_err_handler started; HBA state eh_fatal; powered 1; shutting down 0; saved_err = 131072; saved_uic_err = 0; force_reset = 0
[ 4.406550] ufshcd-qcom 1d84000.ufshc: ufshcd_verify_dev_init: NOP OUT failed -11
[ 4.417953] ufshcd-qcom 1d84000.ufshc: ufshcd_async_scan failed: -11
--
With best wishes
Dmitry
On Mon, 29 Apr 2024 at 19:31, Dmitry Baryshkov
<[email protected]> wrote:
>
> On Fri, Apr 19, 2024 at 05:54:45PM +0100, Robin Murphy wrote:
> > It's somewhat hard to see, but arm64's arch_setup_dma_ops() should only
> > ever call iommu_setup_dma_ops() after a successful iommu_probe_device(),
> > which means there should be no harm in achieving the same order of
> > operations by running it off the back of iommu_probe_device() itself.
> > This then puts it in line with the x86 and s390 .probe_finalize bodges,
> > letting us pull it all into the main flow properly. As a bonus this lets
> > us fold in and de-scope the PCI workaround setup as well.
> >
> > At this point we can also then pull the call up inside the group mutex,
> > and avoid having to think about whether iommu_group_store_type() could
> > theoretically race and free the domain if iommu_setup_dma_ops() ran just
> > *before* iommu_device_use_default_domain() claims it... Furthermore we
> > replace one .probe_finalize call completely, since the only remaining
> > implementations are now one which only needs to run once for the initial
> > boot-time probe, and two which themselves render that path unreachable.
> >
> > This leaves us a big step closer to realistically being able to unpick
> > the variety of different things that iommu_setup_dma_ops() has been
> > muddling together, and further streamline iommu-dma into core API flows
> > in future.
> >
> > Reviewed-by: Lu Baolu <[email protected]> # For Intel IOMMU
> > Reviewed-by: Jason Gunthorpe <[email protected]>
> > Tested-by: Hanjun Guo <[email protected]>
> > Signed-off-by: Robin Murphy <[email protected]>
> > ---
> > v2: Shuffle around to make sure the iommu_group_do_probe_finalize() case
> > is covered as well, with bonus side-effects as above.
> > v3: *Really* do that, remembering the other two probe_finalize sites too.
> > ---
> > arch/arm64/mm/dma-mapping.c | 2 --
> > drivers/iommu/amd/iommu.c | 8 --------
> > drivers/iommu/dma-iommu.c | 18 ++++++------------
> > drivers/iommu/dma-iommu.h | 14 ++++++--------
> > drivers/iommu/intel/iommu.c | 7 -------
> > drivers/iommu/iommu.c | 20 +++++++-------------
> > drivers/iommu/s390-iommu.c | 6 ------
> > drivers/iommu/virtio-iommu.c | 10 ----------
> > include/linux/iommu.h | 7 -------
> > 9 files changed, 19 insertions(+), 73 deletions(-)
>
> This patch breaks UFS on Qualcomm SC8180X Primus platform:
>
>
> [ 3.846856] arm-smmu 15000000.iommu: Unhandled context fault: fsr=0x402, iova=0x1032db3e0, fsynr=0x130000, cbfrsynra=0x300, cb=4
> [ 3.846880] ufshcd-qcom 1d84000.ufshc: ufshcd_check_errors: saved_err 0x20000 saved_uic_err 0x0
> [ 3.846929] host_regs: 00000000: 1587031f 00000000 00000300 00000000
> [ 3.846935] host_regs: 00000010: 01000000 00010217 00000000 00000000
> [ 3.846941] host_regs: 00000020: 00000000 00070ef5 00000000 00000000
> [ 3.846946] host_regs: 00000030: 0000000f 00000001 00000000 00000000
> [ 3.846951] host_regs: 00000040: 00000000 00000000 00000000 00000000
> [ 3.846956] host_regs: 00000050: 032db000 00000001 00000000 00000000
> [ 3.846962] host_regs: 00000060: 00000000 80000000 00000000 00000000
> [ 3.846967] host_regs: 00000070: 032dd000 00000001 00000000 00000000
> [ 3.846972] host_regs: 00000080: 00000000 00000000 00000000 00000000
> [ 3.846977] host_regs: 00000090: 00000016 00000000 00000000 0000000c
> [ 3.847074] ufshcd-qcom 1d84000.ufshc: ufshcd_err_handler started; HBA state eh_fatal; powered 1; shutting down 0; saved_err = 131072; saved_uic_err = 0; force_reset = 0
> [ 4.406550] ufshcd-qcom 1d84000.ufshc: ufshcd_verify_dev_init: NOP OUT failed -11
> [ 4.417953] ufshcd-qcom 1d84000.ufshc: ufshcd_async_scan failed: -11
Just to confirm: reverting f091e93306e0 ("dma-mapping: Simplify
arch_setup_dma_ops()") and b67483b3c44e ("iommu/dma: Centralise
iommu_setup_dma_ops()" fixes the issue for me. Please ping me if you'd
like me to test a fix.
--
With best wishes
Dmitry
On 2024-04-29 5:31 pm, Dmitry Baryshkov wrote:
> On Fri, Apr 19, 2024 at 05:54:45PM +0100, Robin Murphy wrote:
>> It's somewhat hard to see, but arm64's arch_setup_dma_ops() should only
>> ever call iommu_setup_dma_ops() after a successful iommu_probe_device(),
>> which means there should be no harm in achieving the same order of
>> operations by running it off the back of iommu_probe_device() itself.
>> This then puts it in line with the x86 and s390 .probe_finalize bodges,
>> letting us pull it all into the main flow properly. As a bonus this lets
>> us fold in and de-scope the PCI workaround setup as well.
>>
>> At this point we can also then pull the call up inside the group mutex,
>> and avoid having to think about whether iommu_group_store_type() could
>> theoretically race and free the domain if iommu_setup_dma_ops() ran just
>> *before* iommu_device_use_default_domain() claims it... Furthermore we
>> replace one .probe_finalize call completely, since the only remaining
>> implementations are now one which only needs to run once for the initial
>> boot-time probe, and two which themselves render that path unreachable.
>>
>> This leaves us a big step closer to realistically being able to unpick
>> the variety of different things that iommu_setup_dma_ops() has been
>> muddling together, and further streamline iommu-dma into core API flows
>> in future.
>>
>> Reviewed-by: Lu Baolu <[email protected]> # For Intel IOMMU
>> Reviewed-by: Jason Gunthorpe <[email protected]>
>> Tested-by: Hanjun Guo <[email protected]>
>> Signed-off-by: Robin Murphy <[email protected]>
>> ---
>> v2: Shuffle around to make sure the iommu_group_do_probe_finalize() case
>> is covered as well, with bonus side-effects as above.
>> v3: *Really* do that, remembering the other two probe_finalize sites too.
>> ---
>> arch/arm64/mm/dma-mapping.c | 2 --
>> drivers/iommu/amd/iommu.c | 8 --------
>> drivers/iommu/dma-iommu.c | 18 ++++++------------
>> drivers/iommu/dma-iommu.h | 14 ++++++--------
>> drivers/iommu/intel/iommu.c | 7 -------
>> drivers/iommu/iommu.c | 20 +++++++-------------
>> drivers/iommu/s390-iommu.c | 6 ------
>> drivers/iommu/virtio-iommu.c | 10 ----------
>> include/linux/iommu.h | 7 -------
>> 9 files changed, 19 insertions(+), 73 deletions(-)
>
> This patch breaks UFS on Qualcomm SC8180X Primus platform:
>
>
> [ 3.846856] arm-smmu 15000000.iommu: Unhandled context fault: fsr=0x402, iova=0x1032db3e0, fsynr=0x130000, cbfrsynra=0x300, cb=4
Hmm, a context fault implies that the device did get attached to a DMA
domain, thus has successfully been through __iommu_probe_device(), yet
somehow still didn't get the right DMA ops (since that "IOVA" looks more
like a PA to me). Do you see the "Adding to IOMMU group..." message for
this device, and/or any other relevant messages or errors before this
point? I'm guessing there's a fair chance probe deferral might be
involved as well. I'd like to understand what path(s) this ends up
taking through __iommu_probe_device() and of_dma_configure(), or at
least the number and order of probe attempts between the UFS and SMMU
drivers.
I'll stare at the code in the morning and see if I can spot any
overlooked ways in which what I think might be happening could happen,
but any more info to help narrow it down would be much appreciated.
Thanks,
Robin.
> [ 3.846880] ufshcd-qcom 1d84000.ufshc: ufshcd_check_errors: saved_err 0x20000 saved_uic_err 0x0
> [ 3.846929] host_regs: 00000000: 1587031f 00000000 00000300 00000000
> [ 3.846935] host_regs: 00000010: 01000000 00010217 00000000 00000000
> [ 3.846941] host_regs: 00000020: 00000000 00070ef5 00000000 00000000
> [ 3.846946] host_regs: 00000030: 0000000f 00000001 00000000 00000000
> [ 3.846951] host_regs: 00000040: 00000000 00000000 00000000 00000000
> [ 3.846956] host_regs: 00000050: 032db000 00000001 00000000 00000000
> [ 3.846962] host_regs: 00000060: 00000000 80000000 00000000 00000000
> [ 3.846967] host_regs: 00000070: 032dd000 00000001 00000000 00000000
> [ 3.846972] host_regs: 00000080: 00000000 00000000 00000000 00000000
> [ 3.846977] host_regs: 00000090: 00000016 00000000 00000000 0000000c
> [ 3.847074] ufshcd-qcom 1d84000.ufshc: ufshcd_err_handler started; HBA state eh_fatal; powered 1; shutting down 0; saved_err = 131072; saved_uic_err = 0; force_reset = 0
> [ 4.406550] ufshcd-qcom 1d84000.ufshc: ufshcd_verify_dev_init: NOP OUT failed -11
> [ 4.417953] ufshcd-qcom 1d84000.ufshc: ufshcd_async_scan failed: -11
>
On Tue, 30 Apr 2024 at 01:26, Robin Murphy <[email protected]> wrote:
>
> On 2024-04-29 5:31 pm, Dmitry Baryshkov wrote:
> > On Fri, Apr 19, 2024 at 05:54:45PM +0100, Robin Murphy wrote:
> >> It's somewhat hard to see, but arm64's arch_setup_dma_ops() should only
> >> ever call iommu_setup_dma_ops() after a successful iommu_probe_device(),
> >> which means there should be no harm in achieving the same order of
> >> operations by running it off the back of iommu_probe_device() itself.
> >> This then puts it in line with the x86 and s390 .probe_finalize bodges,
> >> letting us pull it all into the main flow properly. As a bonus this lets
> >> us fold in and de-scope the PCI workaround setup as well.
> >>
> >> At this point we can also then pull the call up inside the group mutex,
> >> and avoid having to think about whether iommu_group_store_type() could
> >> theoretically race and free the domain if iommu_setup_dma_ops() ran just
> >> *before* iommu_device_use_default_domain() claims it... Furthermore we
> >> replace one .probe_finalize call completely, since the only remaining
> >> implementations are now one which only needs to run once for the initial
> >> boot-time probe, and two which themselves render that path unreachable.
> >>
> >> This leaves us a big step closer to realistically being able to unpick
> >> the variety of different things that iommu_setup_dma_ops() has been
> >> muddling together, and further streamline iommu-dma into core API flows
> >> in future.
> >>
> >> Reviewed-by: Lu Baolu <[email protected]> # For Intel IOMMU
> >> Reviewed-by: Jason Gunthorpe <[email protected]>
> >> Tested-by: Hanjun Guo <[email protected]>
> >> Signed-off-by: Robin Murphy <[email protected]>
> >> ---
> >> v2: Shuffle around to make sure the iommu_group_do_probe_finalize() case
> >> is covered as well, with bonus side-effects as above.
> >> v3: *Really* do that, remembering the other two probe_finalize sites too.
> >> ---
> >> arch/arm64/mm/dma-mapping.c | 2 --
> >> drivers/iommu/amd/iommu.c | 8 --------
> >> drivers/iommu/dma-iommu.c | 18 ++++++------------
> >> drivers/iommu/dma-iommu.h | 14 ++++++--------
> >> drivers/iommu/intel/iommu.c | 7 -------
> >> drivers/iommu/iommu.c | 20 +++++++-------------
> >> drivers/iommu/s390-iommu.c | 6 ------
> >> drivers/iommu/virtio-iommu.c | 10 ----------
> >> include/linux/iommu.h | 7 -------
> >> 9 files changed, 19 insertions(+), 73 deletions(-)
> >
> > This patch breaks UFS on Qualcomm SC8180X Primus platform:
> >
> >
> > [ 3.846856] arm-smmu 15000000.iommu: Unhandled context fault: fsr=0x402, iova=0x1032db3e0, fsynr=0x130000, cbfrsynra=0x300, cb=4
>
> Hmm, a context fault implies that the device did get attached to a DMA
> domain, thus has successfully been through __iommu_probe_device(), yet
> somehow still didn't get the right DMA ops (since that "IOVA" looks more
> like a PA to me). Do you see the "Adding to IOMMU group..." message for
> this device, and/or any other relevant messages or errors before this
> point?
No, nothing relevant.
[ 8.372395] ufshcd-qcom 1d84000.ufshc: Adding to iommu group 6
(please ignore the timestamp, it comes before ufshc being probed).
> I'm guessing there's a fair chance probe deferral might be
> involved as well. I'd like to understand what path(s) this ends up
> taking through __iommu_probe_device() and of_dma_configure(), or at
> least the number and order of probe attempts between the UFS and SMMU
> drivers.
__iommu_probe_device() gets called twice and returns early because ops is NULL.
Then finally of_dma_configure_id() is called. The following branches are taken:
np == dev->of_node
of_dma_get_range() returned 0
bus_dma_limit and dma_range_map are set
__iommu_probe_device() is called, using the `!group->default_domain &&
!group_list` case, then group->default_domain is not NULL.
In the end, iommu_setup_dma_ops() is called.
Then the ufshc probe defers (most likely the PHY is not present or
some other device is not there yet).
On the next (succeeding) try, of_dma_configure_id() is called again.
The call trace is more or less the same, except that
__iommu_probe_device() is not called
> I'll stare at the code in the morning and see if I can spot any
> overlooked ways in which what I think might be happening could happen,
> but any more info to help narrow it down would be much appreciated.
>
> Thanks,
> Robin.
>
> > [ 3.846880] ufshcd-qcom 1d84000.ufshc: ufshcd_check_errors: saved_err 0x20000 saved_uic_err 0x0
> > [ 3.846929] host_regs: 00000000: 1587031f 00000000 00000300 00000000
> > [ 3.846935] host_regs: 00000010: 01000000 00010217 00000000 00000000
> > [ 3.846941] host_regs: 00000020: 00000000 00070ef5 00000000 00000000
> > [ 3.846946] host_regs: 00000030: 0000000f 00000001 00000000 00000000
> > [ 3.846951] host_regs: 00000040: 00000000 00000000 00000000 00000000
> > [ 3.846956] host_regs: 00000050: 032db000 00000001 00000000 00000000
> > [ 3.846962] host_regs: 00000060: 00000000 80000000 00000000 00000000
> > [ 3.846967] host_regs: 00000070: 032dd000 00000001 00000000 00000000
> > [ 3.846972] host_regs: 00000080: 00000000 00000000 00000000 00000000
> > [ 3.846977] host_regs: 00000090: 00000016 00000000 00000000 0000000c
> > [ 3.847074] ufshcd-qcom 1d84000.ufshc: ufshcd_err_handler started; HBA state eh_fatal; powered 1; shutting down 0; saved_err = 131072; saved_uic_err = 0; force_reset = 0
> > [ 4.406550] ufshcd-qcom 1d84000.ufshc: ufshcd_verify_dev_init: NOP OUT failed -11
> > [ 4.417953] ufshcd-qcom 1d84000.ufshc: ufshcd_async_scan failed: -11
> >
--
With best wishes
Dmitry
On 2024-04-30 1:41 am, Dmitry Baryshkov wrote:
> On Tue, 30 Apr 2024 at 01:26, Robin Murphy <[email protected]> wrote:
>>
>> On 2024-04-29 5:31 pm, Dmitry Baryshkov wrote:
>>> On Fri, Apr 19, 2024 at 05:54:45PM +0100, Robin Murphy wrote:
>>>> It's somewhat hard to see, but arm64's arch_setup_dma_ops() should only
>>>> ever call iommu_setup_dma_ops() after a successful iommu_probe_device(),
>>>> which means there should be no harm in achieving the same order of
>>>> operations by running it off the back of iommu_probe_device() itself.
>>>> This then puts it in line with the x86 and s390 .probe_finalize bodges,
>>>> letting us pull it all into the main flow properly. As a bonus this lets
>>>> us fold in and de-scope the PCI workaround setup as well.
>>>>
>>>> At this point we can also then pull the call up inside the group mutex,
>>>> and avoid having to think about whether iommu_group_store_type() could
>>>> theoretically race and free the domain if iommu_setup_dma_ops() ran just
>>>> *before* iommu_device_use_default_domain() claims it... Furthermore we
>>>> replace one .probe_finalize call completely, since the only remaining
>>>> implementations are now one which only needs to run once for the initial
>>>> boot-time probe, and two which themselves render that path unreachable.
>>>>
>>>> This leaves us a big step closer to realistically being able to unpick
>>>> the variety of different things that iommu_setup_dma_ops() has been
>>>> muddling together, and further streamline iommu-dma into core API flows
>>>> in future.
>>>>
>>>> Reviewed-by: Lu Baolu <[email protected]> # For Intel IOMMU
>>>> Reviewed-by: Jason Gunthorpe <[email protected]>
>>>> Tested-by: Hanjun Guo <[email protected]>
>>>> Signed-off-by: Robin Murphy <[email protected]>
>>>> ---
>>>> v2: Shuffle around to make sure the iommu_group_do_probe_finalize() case
>>>> is covered as well, with bonus side-effects as above.
>>>> v3: *Really* do that, remembering the other two probe_finalize sites too.
>>>> ---
>>>> arch/arm64/mm/dma-mapping.c | 2 --
>>>> drivers/iommu/amd/iommu.c | 8 --------
>>>> drivers/iommu/dma-iommu.c | 18 ++++++------------
>>>> drivers/iommu/dma-iommu.h | 14 ++++++--------
>>>> drivers/iommu/intel/iommu.c | 7 -------
>>>> drivers/iommu/iommu.c | 20 +++++++-------------
>>>> drivers/iommu/s390-iommu.c | 6 ------
>>>> drivers/iommu/virtio-iommu.c | 10 ----------
>>>> include/linux/iommu.h | 7 -------
>>>> 9 files changed, 19 insertions(+), 73 deletions(-)
>>>
>>> This patch breaks UFS on Qualcomm SC8180X Primus platform:
>>>
>>>
>>> [ 3.846856] arm-smmu 15000000.iommu: Unhandled context fault: fsr=0x402, iova=0x1032db3e0, fsynr=0x130000, cbfrsynra=0x300, cb=4
>>
>> Hmm, a context fault implies that the device did get attached to a DMA
>> domain, thus has successfully been through __iommu_probe_device(), yet
>> somehow still didn't get the right DMA ops (since that "IOVA" looks more
>> like a PA to me). Do you see the "Adding to IOMMU group..." message for
>> this device, and/or any other relevant messages or errors before this
>> point?
>
> No, nothing relevant.
>
> [ 8.372395] ufshcd-qcom 1d84000.ufshc: Adding to iommu group 6
>
> (please ignore the timestamp, it comes before ufshc being probed).
>
>> I'm guessing there's a fair chance probe deferral might be
>> involved as well. I'd like to understand what path(s) this ends up
>> taking through __iommu_probe_device() and of_dma_configure(), or at
>> least the number and order of probe attempts between the UFS and SMMU
>> drivers.
>
> __iommu_probe_device() gets called twice and returns early because ops is NULL.
>
> Then finally of_dma_configure_id() is called. The following branches are taken:
>
> np == dev->of_node
> of_dma_get_range() returned 0
> bus_dma_limit and dma_range_map are set
> __iommu_probe_device() is called, using the `!group->default_domain &&
> !group_list` case, then group->default_domain is not NULL.
> In the end, iommu_setup_dma_ops() is called.
>
> Then the ufshc probe defers (most likely the PHY is not present or
> some other device is not there yet).
Ah good, probe deferral. And indeed the half-formed hunch from last
night grew into a pretty definite idea by this morning... patch incoming.
Thanks,
Robin.
> On the next (succeeding) try, of_dma_configure_id() is called again.
> The call trace is more or less the same, except that
> __iommu_probe_device() is not called
>
>> I'll stare at the code in the morning and see if I can spot any
>> overlooked ways in which what I think might be happening could happen,
>> but any more info to help narrow it down would be much appreciated.
>>
>> Thanks,
>> Robin.
>>
>>> [ 3.846880] ufshcd-qcom 1d84000.ufshc: ufshcd_check_errors: saved_err 0x20000 saved_uic_err 0x0
>>> [ 3.846929] host_regs: 00000000: 1587031f 00000000 00000300 00000000
>>> [ 3.846935] host_regs: 00000010: 01000000 00010217 00000000 00000000
>>> [ 3.846941] host_regs: 00000020: 00000000 00070ef5 00000000 00000000
>>> [ 3.846946] host_regs: 00000030: 0000000f 00000001 00000000 00000000
>>> [ 3.846951] host_regs: 00000040: 00000000 00000000 00000000 00000000
>>> [ 3.846956] host_regs: 00000050: 032db000 00000001 00000000 00000000
>>> [ 3.846962] host_regs: 00000060: 00000000 80000000 00000000 00000000
>>> [ 3.846967] host_regs: 00000070: 032dd000 00000001 00000000 00000000
>>> [ 3.846972] host_regs: 00000080: 00000000 00000000 00000000 00000000
>>> [ 3.846977] host_regs: 00000090: 00000016 00000000 00000000 0000000c
>>> [ 3.847074] ufshcd-qcom 1d84000.ufshc: ufshcd_err_handler started; HBA state eh_fatal; powered 1; shutting down 0; saved_err = 131072; saved_uic_err = 0; force_reset = 0
>>> [ 4.406550] ufshcd-qcom 1d84000.ufshc: ufshcd_verify_dev_init: NOP OUT failed -11
>>> [ 4.417953] ufshcd-qcom 1d84000.ufshc: ufshcd_async_scan failed: -11
>>>
>
>
>
On Tue, 30 Apr 2024 at 13:20, Robin Murphy <[email protected]> wrote:
>
> On 2024-04-30 1:41 am, Dmitry Baryshkov wrote:
> > On Tue, 30 Apr 2024 at 01:26, Robin Murphy <[email protected]> wrote:
> >>
> >> On 2024-04-29 5:31 pm, Dmitry Baryshkov wrote:
> >>> On Fri, Apr 19, 2024 at 05:54:45PM +0100, Robin Murphy wrote:
> >>>> It's somewhat hard to see, but arm64's arch_setup_dma_ops() should only
> >>>> ever call iommu_setup_dma_ops() after a successful iommu_probe_device(),
> >>>> which means there should be no harm in achieving the same order of
> >>>> operations by running it off the back of iommu_probe_device() itself.
> >>>> This then puts it in line with the x86 and s390 .probe_finalize bodges,
> >>>> letting us pull it all into the main flow properly. As a bonus this lets
> >>>> us fold in and de-scope the PCI workaround setup as well.
> >>>>
> >>>> At this point we can also then pull the call up inside the group mutex,
> >>>> and avoid having to think about whether iommu_group_store_type() could
> >>>> theoretically race and free the domain if iommu_setup_dma_ops() ran just
> >>>> *before* iommu_device_use_default_domain() claims it... Furthermore we
> >>>> replace one .probe_finalize call completely, since the only remaining
> >>>> implementations are now one which only needs to run once for the initial
> >>>> boot-time probe, and two which themselves render that path unreachable.
> >>>>
> >>>> This leaves us a big step closer to realistically being able to unpick
> >>>> the variety of different things that iommu_setup_dma_ops() has been
> >>>> muddling together, and further streamline iommu-dma into core API flows
> >>>> in future.
> >>>>
> >>>> Reviewed-by: Lu Baolu <[email protected]> # For Intel IOMMU
> >>>> Reviewed-by: Jason Gunthorpe <[email protected]>
> >>>> Tested-by: Hanjun Guo <[email protected]>
> >>>> Signed-off-by: Robin Murphy <[email protected]>
> >>>> ---
> >>>> v2: Shuffle around to make sure the iommu_group_do_probe_finalize() case
> >>>> is covered as well, with bonus side-effects as above.
> >>>> v3: *Really* do that, remembering the other two probe_finalize sites too.
> >>>> ---
> >>>> arch/arm64/mm/dma-mapping.c | 2 --
> >>>> drivers/iommu/amd/iommu.c | 8 --------
> >>>> drivers/iommu/dma-iommu.c | 18 ++++++------------
> >>>> drivers/iommu/dma-iommu.h | 14 ++++++--------
> >>>> drivers/iommu/intel/iommu.c | 7 -------
> >>>> drivers/iommu/iommu.c | 20 +++++++-------------
> >>>> drivers/iommu/s390-iommu.c | 6 ------
> >>>> drivers/iommu/virtio-iommu.c | 10 ----------
> >>>> include/linux/iommu.h | 7 -------
> >>>> 9 files changed, 19 insertions(+), 73 deletions(-)
> >>>
> >>> This patch breaks UFS on Qualcomm SC8180X Primus platform:
> >>>
> >>>
> >>> [ 3.846856] arm-smmu 15000000.iommu: Unhandled context fault: fsr=0x402, iova=0x1032db3e0, fsynr=0x130000, cbfrsynra=0x300, cb=4
> >>
> >> Hmm, a context fault implies that the device did get attached to a DMA
> >> domain, thus has successfully been through __iommu_probe_device(), yet
> >> somehow still didn't get the right DMA ops (since that "IOVA" looks more
> >> like a PA to me). Do you see the "Adding to IOMMU group..." message for
> >> this device, and/or any other relevant messages or errors before this
> >> point?
> >
> > No, nothing relevant.
> >
> > [ 8.372395] ufshcd-qcom 1d84000.ufshc: Adding to iommu group 6
> >
> > (please ignore the timestamp, it comes before ufshc being probed).
> >
> >> I'm guessing there's a fair chance probe deferral might be
> >> involved as well. I'd like to understand what path(s) this ends up
> >> taking through __iommu_probe_device() and of_dma_configure(), or at
> >> least the number and order of probe attempts between the UFS and SMMU
> >> drivers.
> >
> > __iommu_probe_device() gets called twice and returns early because ops is NULL.
> >
> > Then finally of_dma_configure_id() is called. The following branches are taken:
> >
> > np == dev->of_node
> > of_dma_get_range() returned 0
> > bus_dma_limit and dma_range_map are set
> > __iommu_probe_device() is called, using the `!group->default_domain &&
> > !group_list` case, then group->default_domain is not NULL.
> > In the end, iommu_setup_dma_ops() is called.
> >
> > Then the ufshc probe defers (most likely the PHY is not present or
> > some other device is not there yet).
>
> Ah good, probe deferral. And indeed the half-formed hunch from last
> night grew into a pretty definite idea by this morning... patch incoming.
Thanks a lot for the quick fix!
--
With best wishes
Dmitry
On 29.04.2024 11:26 PM, Dmitry Baryshkov wrote:
> On Mon, 29 Apr 2024 at 19:31, Dmitry Baryshkov
> <[email protected]> wrote:
>>
>> On Fri, Apr 19, 2024 at 05:54:45PM +0100, Robin Murphy wrote:
>>> It's somewhat hard to see, but arm64's arch_setup_dma_ops() should only
>>> ever call iommu_setup_dma_ops() after a successful iommu_probe_device(),
>>> which means there should be no harm in achieving the same order of
>>> operations by running it off the back of iommu_probe_device() itself.
>>> This then puts it in line with the x86 and s390 .probe_finalize bodges,
>>> letting us pull it all into the main flow properly. As a bonus this lets
>>> us fold in and de-scope the PCI workaround setup as well.
>>>
>>> At this point we can also then pull the call up inside the group mutex,
>>> and avoid having to think about whether iommu_group_store_type() could
>>> theoretically race and free the domain if iommu_setup_dma_ops() ran just
>>> *before* iommu_device_use_default_domain() claims it... Furthermore we
>>> replace one .probe_finalize call completely, since the only remaining
>>> implementations are now one which only needs to run once for the initial
>>> boot-time probe, and two which themselves render that path unreachable.
>>>
>>> This leaves us a big step closer to realistically being able to unpick
>>> the variety of different things that iommu_setup_dma_ops() has been
>>> muddling together, and further streamline iommu-dma into core API flows
>>> in future.
>>>
>>> Reviewed-by: Lu Baolu <[email protected]> # For Intel IOMMU
>>> Reviewed-by: Jason Gunthorpe <[email protected]>
>>> Tested-by: Hanjun Guo <[email protected]>
>>> Signed-off-by: Robin Murphy <[email protected]>
>>> ---
>>> v2: Shuffle around to make sure the iommu_group_do_probe_finalize() case
>>> is covered as well, with bonus side-effects as above.
>>> v3: *Really* do that, remembering the other two probe_finalize sites too.
>>> ---
>>> arch/arm64/mm/dma-mapping.c | 2 --
>>> drivers/iommu/amd/iommu.c | 8 --------
>>> drivers/iommu/dma-iommu.c | 18 ++++++------------
>>> drivers/iommu/dma-iommu.h | 14 ++++++--------
>>> drivers/iommu/intel/iommu.c | 7 -------
>>> drivers/iommu/iommu.c | 20 +++++++-------------
>>> drivers/iommu/s390-iommu.c | 6 ------
>>> drivers/iommu/virtio-iommu.c | 10 ----------
>>> include/linux/iommu.h | 7 -------
>>> 9 files changed, 19 insertions(+), 73 deletions(-)
>>
>> This patch breaks UFS on Qualcomm SC8180X Primus platform:
>>
>>
>> [ 3.846856] arm-smmu 15000000.iommu: Unhandled context fault: fsr=0x402, iova=0x1032db3e0, fsynr=0x130000, cbfrsynra=0x300, cb=4
>> [ 3.846880] ufshcd-qcom 1d84000.ufshc: ufshcd_check_errors: saved_err 0x20000 saved_uic_err 0x0
>> [ 3.846929] host_regs: 00000000: 1587031f 00000000 00000300 00000000
>> [ 3.846935] host_regs: 00000010: 01000000 00010217 00000000 00000000
>> [ 3.846941] host_regs: 00000020: 00000000 00070ef5 00000000 00000000
>> [ 3.846946] host_regs: 00000030: 0000000f 00000001 00000000 00000000
>> [ 3.846951] host_regs: 00000040: 00000000 00000000 00000000 00000000
>> [ 3.846956] host_regs: 00000050: 032db000 00000001 00000000 00000000
>> [ 3.846962] host_regs: 00000060: 00000000 80000000 00000000 00000000
>> [ 3.846967] host_regs: 00000070: 032dd000 00000001 00000000 00000000
>> [ 3.846972] host_regs: 00000080: 00000000 00000000 00000000 00000000
>> [ 3.846977] host_regs: 00000090: 00000016 00000000 00000000 0000000c
>> [ 3.847074] ufshcd-qcom 1d84000.ufshc: ufshcd_err_handler started; HBA state eh_fatal; powered 1; shutting down 0; saved_err = 131072; saved_uic_err = 0; force_reset = 0
>> [ 4.406550] ufshcd-qcom 1d84000.ufshc: ufshcd_verify_dev_init: NOP OUT failed -11
>> [ 4.417953] ufshcd-qcom 1d84000.ufshc: ufshcd_async_scan failed: -11
>
> Just to confirm: reverting f091e93306e0 ("dma-mapping: Simplify
> arch_setup_dma_ops()") and b67483b3c44e ("iommu/dma: Centralise
> iommu_setup_dma_ops()" fixes the issue for me. Please ping me if you'd
> like me to test a fix.
This also triggers a different issue (that also comes down to "ufs bad") on
another QC platform (SM8550):
[ 4.282098] scsi host0: ufshcd
[ 4.315970] ufshcd-qcom 1d84000.ufs: ufshcd_check_errors: saved_err 0x20000 saved_uic_err 0x0
[ 4.330155] host_regs: 00000000: 3587031f 00000000 00000400 00000000
[ 4.343955] host_regs: 00000010: 01000000 00010217 00000000 00000000
[ 4.356027] host_regs: 00000020: 00000000 00070ef5 00000000 00000000
[ 4.370136] host_regs: 00000030: 0000000f 00000003 00000000 00000000
[ 4.376662] host_regs: 00000040: 00000000 00000000 00000000 00000000
[ 4.383192] host_regs: 00000050: 85109000 00000008 00000000 00000000
[ 4.389719] host_regs: 00000060: 00000000 80000000 00000000 00000000
[ 4.396245] host_regs: 00000070: 8510a000 00000008 00000000 00000000
[ 4.402773] host_regs: 00000080: 00000000 00000000 00000000 00000000
[ 4.409298] host_regs: 00000090: 00000016 00000000 00000000 0000000c
[ 4.415900] arm-smmu 15000000.iommu: Unhandled context fault: fsr=0x402, iova=0x8851093e0, fsynr=0x3b0001, cbfrsynra=0x60, cb=2
[ 4.416135] ufshcd-qcom 1d84000.ufs: ufshcd_err_handler started; HBA state eh_fatal; powered 1; shutting down 0; saved_err = 131072; saved_uic_err = 0; force_reset = 0
[ 4.951750] ufshcd-qcom 1d84000.ufs: ufshcd_verify_dev_init: NOP OUT failed -11
[ 4.960644] ufshcd-qcom 1d84000.ufs: ufshcd_async_scan failed: -11
Reverting the commits Dmitry mentioned also fixes this.
Konrad
On 30/04/2024 1:23 pm, Konrad Dybcio wrote:
> On 29.04.2024 11:26 PM, Dmitry Baryshkov wrote:
>> On Mon, 29 Apr 2024 at 19:31, Dmitry Baryshkov
>> <[email protected]> wrote:
>>>
>>> On Fri, Apr 19, 2024 at 05:54:45PM +0100, Robin Murphy wrote:
>>>> It's somewhat hard to see, but arm64's arch_setup_dma_ops() should only
>>>> ever call iommu_setup_dma_ops() after a successful iommu_probe_device(),
>>>> which means there should be no harm in achieving the same order of
>>>> operations by running it off the back of iommu_probe_device() itself.
>>>> This then puts it in line with the x86 and s390 .probe_finalize bodges,
>>>> letting us pull it all into the main flow properly. As a bonus this lets
>>>> us fold in and de-scope the PCI workaround setup as well.
>>>>
>>>> At this point we can also then pull the call up inside the group mutex,
>>>> and avoid having to think about whether iommu_group_store_type() could
>>>> theoretically race and free the domain if iommu_setup_dma_ops() ran just
>>>> *before* iommu_device_use_default_domain() claims it... Furthermore we
>>>> replace one .probe_finalize call completely, since the only remaining
>>>> implementations are now one which only needs to run once for the initial
>>>> boot-time probe, and two which themselves render that path unreachable.
>>>>
>>>> This leaves us a big step closer to realistically being able to unpick
>>>> the variety of different things that iommu_setup_dma_ops() has been
>>>> muddling together, and further streamline iommu-dma into core API flows
>>>> in future.
>>>>
>>>> Reviewed-by: Lu Baolu <[email protected]> # For Intel IOMMU
>>>> Reviewed-by: Jason Gunthorpe <[email protected]>
>>>> Tested-by: Hanjun Guo <[email protected]>
>>>> Signed-off-by: Robin Murphy <[email protected]>
>>>> ---
>>>> v2: Shuffle around to make sure the iommu_group_do_probe_finalize() case
>>>> is covered as well, with bonus side-effects as above.
>>>> v3: *Really* do that, remembering the other two probe_finalize sites too.
>>>> ---
>>>> arch/arm64/mm/dma-mapping.c | 2 --
>>>> drivers/iommu/amd/iommu.c | 8 --------
>>>> drivers/iommu/dma-iommu.c | 18 ++++++------------
>>>> drivers/iommu/dma-iommu.h | 14 ++++++--------
>>>> drivers/iommu/intel/iommu.c | 7 -------
>>>> drivers/iommu/iommu.c | 20 +++++++-------------
>>>> drivers/iommu/s390-iommu.c | 6 ------
>>>> drivers/iommu/virtio-iommu.c | 10 ----------
>>>> include/linux/iommu.h | 7 -------
>>>> 9 files changed, 19 insertions(+), 73 deletions(-)
>>>
>>> This patch breaks UFS on Qualcomm SC8180X Primus platform:
>>>
>>>
>>> [ 3.846856] arm-smmu 15000000.iommu: Unhandled context fault: fsr=0x402, iova=0x1032db3e0, fsynr=0x130000, cbfrsynra=0x300, cb=4
>>> [ 3.846880] ufshcd-qcom 1d84000.ufshc: ufshcd_check_errors: saved_err 0x20000 saved_uic_err 0x0
>>> [ 3.846929] host_regs: 00000000: 1587031f 00000000 00000300 00000000
>>> [ 3.846935] host_regs: 00000010: 01000000 00010217 00000000 00000000
>>> [ 3.846941] host_regs: 00000020: 00000000 00070ef5 00000000 00000000
>>> [ 3.846946] host_regs: 00000030: 0000000f 00000001 00000000 00000000
>>> [ 3.846951] host_regs: 00000040: 00000000 00000000 00000000 00000000
>>> [ 3.846956] host_regs: 00000050: 032db000 00000001 00000000 00000000
>>> [ 3.846962] host_regs: 00000060: 00000000 80000000 00000000 00000000
>>> [ 3.846967] host_regs: 00000070: 032dd000 00000001 00000000 00000000
>>> [ 3.846972] host_regs: 00000080: 00000000 00000000 00000000 00000000
>>> [ 3.846977] host_regs: 00000090: 00000016 00000000 00000000 0000000c
>>> [ 3.847074] ufshcd-qcom 1d84000.ufshc: ufshcd_err_handler started; HBA state eh_fatal; powered 1; shutting down 0; saved_err = 131072; saved_uic_err = 0; force_reset = 0
>>> [ 4.406550] ufshcd-qcom 1d84000.ufshc: ufshcd_verify_dev_init: NOP OUT failed -11
>>> [ 4.417953] ufshcd-qcom 1d84000.ufshc: ufshcd_async_scan failed: -11
>>
>> Just to confirm: reverting f091e93306e0 ("dma-mapping: Simplify
>> arch_setup_dma_ops()") and b67483b3c44e ("iommu/dma: Centralise
>> iommu_setup_dma_ops()" fixes the issue for me. Please ping me if you'd
>> like me to test a fix.
>
> This also triggers a different issue (that also comes down to "ufs bad") on
> another QC platform (SM8550):
>
> [ 4.282098] scsi host0: ufshcd
> [ 4.315970] ufshcd-qcom 1d84000.ufs: ufshcd_check_errors: saved_err 0x20000 saved_uic_err 0x0
> [ 4.330155] host_regs: 00000000: 3587031f 00000000 00000400 00000000
> [ 4.343955] host_regs: 00000010: 01000000 00010217 00000000 00000000
> [ 4.356027] host_regs: 00000020: 00000000 00070ef5 00000000 00000000
> [ 4.370136] host_regs: 00000030: 0000000f 00000003 00000000 00000000
> [ 4.376662] host_regs: 00000040: 00000000 00000000 00000000 00000000
> [ 4.383192] host_regs: 00000050: 85109000 00000008 00000000 00000000
> [ 4.389719] host_regs: 00000060: 00000000 80000000 00000000 00000000
> [ 4.396245] host_regs: 00000070: 8510a000 00000008 00000000 00000000
> [ 4.402773] host_regs: 00000080: 00000000 00000000 00000000 00000000
> [ 4.409298] host_regs: 00000090: 00000016 00000000 00000000 0000000c
> [ 4.415900] arm-smmu 15000000.iommu: Unhandled context fault: fsr=0x402, iova=0x8851093e0, fsynr=0x3b0001, cbfrsynra=0x60, cb=2
> [ 4.416135] ufshcd-qcom 1d84000.ufs: ufshcd_err_handler started; HBA state eh_fatal; powered 1; shutting down 0; saved_err = 131072; saved_uic_err = 0; force_reset = 0
> [ 4.951750] ufshcd-qcom 1d84000.ufs: ufshcd_verify_dev_init: NOP OUT failed -11
> [ 4.960644] ufshcd-qcom 1d84000.ufs: ufshcd_async_scan failed: -11
>
> Reverting the commits Dmitry mentioned also fixes this.
Yeah, it'll be the same thing - it doesn't really matter exactly *how* the
UFS goes wrong due to the SMMU blocking it; the issue is that the SMMU
is erroneously blocking it in the first place due to a DMA ops mixup.
Fix is now here:
https://lore.kernel.org/linux-iommu/d4cc20cbb0c45175e98dd76bf187e2ad6421296d.1714472573.git.robin.murphy@arm.com/
Thanks,
Robin.
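As a purely illustrative toy model (not kernel code; all names here are invented) of what such a DMA ops mixup looks like from the device's side: if the device is left with direct DMA ops while attached to a translating domain, the address handed to the hardware is a plain PA that the domain has no mapping for, hence a context fault on an "IOVA" that is really a PA.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t dma_addr_t;

/* Toy device state: whether iommu-dma ops ended up installed for it */
struct toy_device {
	bool has_iommu_dma_ops;
};

/* dma-direct flavour: the bus address handed to the device is just the CPU PA */
static dma_addr_t toy_map_direct(dma_addr_t phys)
{
	return phys;
}

/* iommu-dma flavour: an IOVA is allocated and mapped to the PA in the domain */
static dma_addr_t toy_map_iommu(dma_addr_t phys)
{
	(void)phys;
	return 0xfffff000;	/* some freshly allocated IOVA */
}

static dma_addr_t toy_dma_map(const struct toy_device *dev, dma_addr_t phys)
{
	return dev->has_iommu_dma_ops ? toy_map_iommu(phys) : toy_map_direct(phys);
}

int main(void)
{
	/* Device attached to a translating domain but left without iommu-dma ops */
	struct toy_device ufshc = { .has_iommu_dma_ops = false };

	/* The SMMU faults on this address since the DMA domain has no mapping
	 * for it - compare the fault addresses in the logs above */
	printf("bus address handed to device: %#llx\n",
	       (unsigned long long)toy_dma_map(&ufshc, 0x1032db3e0ULL));
	return 0;
}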
On Fri, 2024-04-19 at 17:54 +0100, Robin Murphy wrote:
> It's somewhat hard to see, but arm64's arch_setup_dma_ops() should only
> ever call iommu_setup_dma_ops() after a successful iommu_probe_device(),
> which means there should be no harm in achieving the same order of
> operations by running it off the back of iommu_probe_device() itself.
> This then puts it in line with the x86 and s390 .probe_finalize bodges,
> letting us pull it all into the main flow properly. As a bonus this lets
> us fold in and de-scope the PCI workaround setup as well.
>
> At this point we can also then pull the call up inside the group mutex,
> and avoid having to think about whether iommu_group_store_type() could
> theoretically race and free the domain if iommu_setup_dma_ops() ran just
> *before* iommu_device_use_default_domain() claims it... Furthermore we
> replace one .probe_finalize call completely, since the only remaining
> implementations are now one which only needs to run once for the initial
> boot-time probe, and two which themselves render that path unreachable.
>
> This leaves us a big step closer to realistically being able to unpick
> the variety of different things that iommu_setup_dma_ops() has been
> muddling together, and further streamline iommu-dma into core API flows
> in future.
>
> Reviewed-by: Lu Baolu <[email protected]> # For Intel IOMMU
> Reviewed-by: Jason Gunthorpe <[email protected]>
> Tested-by: Hanjun Guo <[email protected]>
> Signed-off-by: Robin Murphy <[email protected]>
> ---
---8<---
> diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c
> index 9a5196f523de..d8eaa7ea380b 100644
> --- a/drivers/iommu/s390-iommu.c
> +++ b/drivers/iommu/s390-iommu.c
> @@ -695,11 +695,6 @@ static size_t s390_iommu_unmap_pages(struct iommu_domain *domain,
> return size;
> }
>
> -static void s390_iommu_probe_finalize(struct device *dev)
> -{
> - iommu_setup_dma_ops(dev, 0, U64_MAX);
> -}
> -
> struct zpci_iommu_ctrs *zpci_get_iommu_ctrs(struct zpci_dev *zdev)
> {
> if (!zdev || !zdev->s390_domain)
> @@ -785,7 +780,6 @@ static const struct iommu_ops s390_iommu_ops = {
> .capable = s390_iommu_capable,
> .domain_alloc_paging = s390_domain_alloc_paging,
> .probe_device = s390_iommu_probe_device,
> - .probe_finalize = s390_iommu_probe_finalize,
> .release_device = s390_iommu_release_device,
> .device_group = generic_device_group,
> .pgsize_bitmap = SZ_4K,
I gave this whole series a test boot on s390 and also tried running a
KVM guest with vfio-pci pass-through. For the s390 part feel free to
add my:
Acked-by: Niklas Schnelle <[email protected]> # for s390
Tested-by: Niklas Schnelle <[email protected]> # for s390
Thanks,
Niklas
Hi Robin,
On 19/04/2024 17:54, Robin Murphy wrote:
> It's now easy to retrieve the device's DMA limits if we want to check
> them against the domain aperture, so do that ourselves instead of
> relying on them being passed through the callchain.
>
> Reviewed-by: Jason Gunthorpe <[email protected]>
> Tested-by: Hanjun Guo <[email protected]>
> Signed-off-by: Robin Murphy <[email protected]>
> ---
> drivers/iommu/dma-iommu.c | 21 +++++++++------------
> 1 file changed, 9 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index a3039005b696..f542eabaefa4 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -660,19 +660,16 @@ static void iommu_dma_init_options(struct iommu_dma_options *options,
> /**
> * iommu_dma_init_domain - Initialise a DMA mapping domain
> * @domain: IOMMU domain previously prepared by iommu_get_dma_cookie()
> - * @base: IOVA at which the mappable address space starts
> - * @limit: Last address of the IOVA space
> * @dev: Device the domain is being initialised for
> *
> - * @base and @limit + 1 should be exact multiples of IOMMU page granularity to
> - * avoid rounding surprises. If necessary, we reserve the page at address 0
> + * If the geometry and dma_range_map include address 0, we reserve that page
> * to ensure it is an invalid IOVA. It is safe to reinitialise a domain, but
> * any change which could make prior IOVAs invalid will fail.
> */
> -static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
> - dma_addr_t limit, struct device *dev)
> +static int iommu_dma_init_domain(struct iommu_domain *domain, struct device *dev)
> {
> struct iommu_dma_cookie *cookie = domain->iova_cookie;
> + const struct bus_dma_region *map = dev->dma_range_map;
> unsigned long order, base_pfn;
> struct iova_domain *iovad;
> int ret;
> @@ -684,18 +681,18 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
>
> /* Use the smallest supported page size for IOVA granularity */
> order = __ffs(domain->pgsize_bitmap);
> - base_pfn = max_t(unsigned long, 1, base >> order);
> + base_pfn = 1;
>
> /* Check the domain allows at least some access to the device... */
> - if (domain->geometry.force_aperture) {
> + if (map) {
> + dma_addr_t base = dma_range_map_min(map);
> if (base > domain->geometry.aperture_end ||
> - limit < domain->geometry.aperture_start) {
> + dma_range_map_max(map) < domain->geometry.aperture_start) {
> pr_warn("specified DMA range outside IOMMU capability\n");
> return -EFAULT;
> }
> /* ...then finally give it a kicking to make sure it fits */
> - base_pfn = max_t(unsigned long, base_pfn,
> - domain->geometry.aperture_start >> order);
> + base_pfn = max(base, domain->geometry.aperture_start) >> order;
> }
>
> /* start_pfn is always nonzero for an already-initialised domain */
> @@ -1760,7 +1757,7 @@ void iommu_setup_dma_ops(struct device *dev, u64 dma_base, u64 dma_limit)
> * underlying IOMMU driver needs to support via the dma-iommu layer.
> */
> if (iommu_is_dma_domain(domain)) {
> - if (iommu_dma_init_domain(domain, dma_base, dma_limit, dev))
> + if (iommu_dma_init_domain(domain, dev))
> goto out_err;
> dev->dma_ops = &iommu_dma_ops;
> }
I have noticed some random test failures on Tegra186 and Tegra194 and
bisect is pointing to this commit. Reverting this along with the various
dependencies does fix the problem. On Tegra186 CPU hotplug is failing
and on Tegra194 suspend is failing. Unfortunately, on neither platform
do I see any particular crash but the boards hang somewhere.
If you have any ideas on things we can try let me know.
Cheers
Jon
--
nvpublic
Hi Jon,
On 2024-05-14 2:27 pm, Jon Hunter wrote:
> Hi Robin,
>
> On 19/04/2024 17:54, Robin Murphy wrote:
>> It's now easy to retrieve the device's DMA limits if we want to check
>> them against the domain aperture, so do that ourselves instead of
>> relying on them being passed through the callchain.
>>
>> Reviewed-by: Jason Gunthorpe <[email protected]>
>> Tested-by: Hanjun Guo <[email protected]>
>> Signed-off-by: Robin Murphy <[email protected]>
>> ---
>> drivers/iommu/dma-iommu.c | 21 +++++++++------------
>> 1 file changed, 9 insertions(+), 12 deletions(-)
>>
>> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
>> index a3039005b696..f542eabaefa4 100644
>> --- a/drivers/iommu/dma-iommu.c
>> +++ b/drivers/iommu/dma-iommu.c
>> @@ -660,19 +660,16 @@ static void iommu_dma_init_options(struct
>> iommu_dma_options *options,
>> /**
>> * iommu_dma_init_domain - Initialise a DMA mapping domain
>> * @domain: IOMMU domain previously prepared by iommu_get_dma_cookie()
>> - * @base: IOVA at which the mappable address space starts
>> - * @limit: Last address of the IOVA space
>> * @dev: Device the domain is being initialised for
>> *
>> - * @base and @limit + 1 should be exact multiples of IOMMU page
>> granularity to
>> - * avoid rounding surprises. If necessary, we reserve the page at
>> address 0
>> + * If the geometry and dma_range_map include address 0, we reserve
>> that page
>> * to ensure it is an invalid IOVA. It is safe to reinitialise a
>> domain, but
>> * any change which could make prior IOVAs invalid will fail.
>> */
>> -static int iommu_dma_init_domain(struct iommu_domain *domain,
>> dma_addr_t base,
>> - dma_addr_t limit, struct device *dev)
>> +static int iommu_dma_init_domain(struct iommu_domain *domain, struct
>> device *dev)
>> {
>> struct iommu_dma_cookie *cookie = domain->iova_cookie;
>> + const struct bus_dma_region *map = dev->dma_range_map;
>> unsigned long order, base_pfn;
>> struct iova_domain *iovad;
>> int ret;
>> @@ -684,18 +681,18 @@ static int iommu_dma_init_domain(struct
>> iommu_domain *domain, dma_addr_t base,
>> /* Use the smallest supported page size for IOVA granularity */
>> order = __ffs(domain->pgsize_bitmap);
>> - base_pfn = max_t(unsigned long, 1, base >> order);
>> + base_pfn = 1;
>> /* Check the domain allows at least some access to the device... */
>> - if (domain->geometry.force_aperture) {
>> + if (map) {
>> + dma_addr_t base = dma_range_map_min(map);
>> if (base > domain->geometry.aperture_end ||
>> - limit < domain->geometry.aperture_start) {
>> + dma_range_map_max(map) < domain->geometry.aperture_start) {
>> pr_warn("specified DMA range outside IOMMU capability\n");
>> return -EFAULT;
>> }
>> /* ...then finally give it a kicking to make sure it fits */
>> - base_pfn = max_t(unsigned long, base_pfn,
>> - domain->geometry.aperture_start >> order);
>> + base_pfn = max(base, domain->geometry.aperture_start) >> order;
>> }
>> /* start_pfn is always nonzero for an already-initialised domain */
>> @@ -1760,7 +1757,7 @@ void iommu_setup_dma_ops(struct device *dev, u64
>> dma_base, u64 dma_limit)
>> * underlying IOMMU driver needs to support via the dma-iommu
>> layer.
>> */
>> if (iommu_is_dma_domain(domain)) {
>> - if (iommu_dma_init_domain(domain, dma_base, dma_limit, dev))
>> + if (iommu_dma_init_domain(domain, dev))
>> goto out_err;
>> dev->dma_ops = &iommu_dma_ops;
>> }
>
>
> I have noticed some random test failures on Tegra186 and Tegra194 and
> bisect is pointing to this commit. Reverting this along with the various
> dependencies does fix the problem. On Tegra186 CPU hotplug is failing
> and on Tegra194 suspend is failing. Unfortunately, on neither platform
> do I see any particular crash but the boards hang somewhere.
That is... thoroughly bemusing :/ Not only is there supposed to be no
real functional change here - we should merely be recalculating the same
information from dev->dma_range_map that the callers were already doing
to generate the base/limit arguments - but the act of initially setting
up a default domain for a device behind an IOMMU should have no
connection whatsoever to suspend and especially not to CPU hotplug.
> If you have any ideas on things we can try let me know.
Since the symptom seems inexplicable, I'd throw the usual memory
debugging stuff like KASAN at it first. I'd also try
"no_console_suspend" to check whether any late output is being missed in
the suspend case (and if it's already broken, then any additional issues
that may be caused by the console itself hopefully shouldn't matter).
For more base-covering, do you have the "arm64: Properly clean up
iommu-dma remnants" fix in there already as well? That bug has bisected
to patch #6 each time though, so I do still suspect that what you're
seeing is likely something else. It does seem potentially significant
that those Tegra platforms are making fairly wide use of dma-ranges, but
there's no clear idea forming out of that observation just yet...
Thanks,
Robin.
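For reference, a minimal standalone sketch of how such bounds can be recomputed from a bus_dma_region-style array. The struct layout and helper bodies below are simplified assumptions for illustration, not the exact dma_range_map_min()/dma_range_map_max() helpers added earlier in the series (the real struct also carries CPU-side fields):

#include <stdint.h>
#include <stdio.h>

typedef uint64_t dma_addr_t;

/* Simplified stand-in for struct bus_dma_region; the array is assumed to be
 * terminated by a zero-size entry */
struct bus_dma_region {
	dma_addr_t dma_start;
	uint64_t size;
};

/* Lowest bus (DMA) address covered by the map */
static dma_addr_t range_map_min(const struct bus_dma_region *map)
{
	dma_addr_t min = UINT64_MAX;

	for (; map->size; map++)
		if (map->dma_start < min)
			min = map->dma_start;
	return min;
}

/* Highest (inclusive) bus address covered by the map */
static dma_addr_t range_map_max(const struct bus_dma_region *map)
{
	dma_addr_t max = 0;

	for (; map->size; map++)
		if (map->dma_start + map->size - 1 > max)
			max = map->dma_start + map->size - 1;
	return max;
}

int main(void)
{
	/* e.g. a dma-ranges property describing two windows */
	struct bus_dma_region map[] = {
		{ .dma_start = 0x00000000, .size = 0x40000000 },
		{ .dma_start = 0x80000000, .size = 0x40000000 },
		{ 0 }
	};

	printf("min %#llx max %#llx\n",
	       (unsigned long long)range_map_min(map),
	       (unsigned long long)range_map_max(map));
	return 0;
}

The minimum and maximum reachable bus addresses derived this way are the same information the old base/limit arguments carried, which is why iommu_dma_init_domain() can now compare them against the domain aperture itself.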
On 15/05/2024 15:59, Robin Murphy wrote:
> Hi Jon,
>
> On 2024-05-14 2:27 pm, Jon Hunter wrote:
>> Hi Robin,
>>
>> On 19/04/2024 17:54, Robin Murphy wrote:
>>> It's now easy to retrieve the device's DMA limits if we want to check
>>> them against the domain aperture, so do that ourselves instead of
>>> relying on them being passed through the callchain.
>>>
>>> Reviewed-by: Jason Gunthorpe <[email protected]>
>>> Tested-by: Hanjun Guo <[email protected]>
>>> Signed-off-by: Robin Murphy <[email protected]>
>>> ---
>>> drivers/iommu/dma-iommu.c | 21 +++++++++------------
>>> 1 file changed, 9 insertions(+), 12 deletions(-)
>>>
>>> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
>>> index a3039005b696..f542eabaefa4 100644
>>> --- a/drivers/iommu/dma-iommu.c
>>> +++ b/drivers/iommu/dma-iommu.c
>>> @@ -660,19 +660,16 @@ static void iommu_dma_init_options(struct
>>> iommu_dma_options *options,
>>> /**
>>> * iommu_dma_init_domain - Initialise a DMA mapping domain
>>> * @domain: IOMMU domain previously prepared by iommu_get_dma_cookie()
>>> - * @base: IOVA at which the mappable address space starts
>>> - * @limit: Last address of the IOVA space
>>> * @dev: Device the domain is being initialised for
>>> *
>>> - * @base and @limit + 1 should be exact multiples of IOMMU page
>>> granularity to
>>> - * avoid rounding surprises. If necessary, we reserve the page at
>>> address 0
>>> + * If the geometry and dma_range_map include address 0, we reserve
>>> that page
>>> * to ensure it is an invalid IOVA. It is safe to reinitialise a
>>> domain, but
>>> * any change which could make prior IOVAs invalid will fail.
>>> */
>>> -static int iommu_dma_init_domain(struct iommu_domain *domain,
>>> dma_addr_t base,
>>> - dma_addr_t limit, struct device *dev)
>>> +static int iommu_dma_init_domain(struct iommu_domain *domain, struct
>>> device *dev)
>>> {
>>> struct iommu_dma_cookie *cookie = domain->iova_cookie;
>>> + const struct bus_dma_region *map = dev->dma_range_map;
>>> unsigned long order, base_pfn;
>>> struct iova_domain *iovad;
>>> int ret;
>>> @@ -684,18 +681,18 @@ static int iommu_dma_init_domain(struct
>>> iommu_domain *domain, dma_addr_t base,
>>> /* Use the smallest supported page size for IOVA granularity */
>>> order = __ffs(domain->pgsize_bitmap);
>>> - base_pfn = max_t(unsigned long, 1, base >> order);
>>> + base_pfn = 1;
>>> /* Check the domain allows at least some access to the
>>> device... */
>>> - if (domain->geometry.force_aperture) {
>>> + if (map) {
>>> + dma_addr_t base = dma_range_map_min(map);
>>> if (base > domain->geometry.aperture_end ||
>>> - limit < domain->geometry.aperture_start) {
>>> + dma_range_map_max(map) < domain->geometry.aperture_start) {
>>> pr_warn("specified DMA range outside IOMMU capability\n");
>>> return -EFAULT;
>>> }
>>> /* ...then finally give it a kicking to make sure it fits */
>>> - base_pfn = max_t(unsigned long, base_pfn,
>>> - domain->geometry.aperture_start >> order);
>>> + base_pfn = max(base, domain->geometry.aperture_start) >> order;
>>> }
>>> /* start_pfn is always nonzero for an already-initialised
>>> domain */
>>> @@ -1760,7 +1757,7 @@ void iommu_setup_dma_ops(struct device *dev,
>>> u64 dma_base, u64 dma_limit)
>>> * underlying IOMMU driver needs to support via the dma-iommu
>>> layer.
>>> */
>>> if (iommu_is_dma_domain(domain)) {
>>> - if (iommu_dma_init_domain(domain, dma_base, dma_limit, dev))
>>> + if (iommu_dma_init_domain(domain, dev))
>>> goto out_err;
>>> dev->dma_ops = &iommu_dma_ops;
>>> }
>>
>>
>> I have noticed some random test failures on Tegra186 and Tegra194 and
>> bisect is pointing to this commit. Reverting this along with the
>> various dependencies does fix the problem. On Tegra186 CPU hotplug is
>> failing and on Tegra194 suspend is failing. Unfortunately, on neither
>> platform do I see any particular crash but the boards hang somewhere.
>
> That is... thoroughly bemusing :/ Not only is there supposed to be no
> real functional change here - we should merely be recalculating the same
> information from dev->dma_range_map that the callers were already doing
> to generate the base/limit arguments - but the act of initially setting
> up a default domain for a device behind an IOMMU should have no
> connection whatsoever to suspend and especially not to CPU hotplug.
Yes it does look odd, but this is what bisect reported ...
git bisect start
# good: [a38297e3fb012ddfa7ce0321a7e5a8daeb1872b6] Linux 6.9
git bisect good a38297e3fb012ddfa7ce0321a7e5a8daeb1872b6
# bad: [6ba6c795dc73c22ce2c86006f17c4aa802db2a60] Add linux-next specific files for 20240513
git bisect bad 6ba6c795dc73c22ce2c86006f17c4aa802db2a60
# good: [29e7f949865a023a21ecdfbd82d68ac697569f34] Merge branch 'main' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git
git bisect good 29e7f949865a023a21ecdfbd82d68ac697569f34
# skip: [150e6cc14e51f2a07034106a4529cdaafd812c46] Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input.git
git bisect skip 150e6cc14e51f2a07034106a4529cdaafd812c46
# good: [f5d75327d30af49acf2e4b55f35ce2e6c45d1287] drm/amd/display: Fix invalid Copyright notice
git bisect good f5d75327d30af49acf2e4b55f35ce2e6c45d1287
# skip: [f1ec9a9ffc526df7c9523006c2abbb8ea554cdd8] Merge branch 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/krzk/linux-dt.git
git bisect skip f1ec9a9ffc526df7c9523006c2abbb8ea554cdd8
# bad: [f091e93306e0429ebb7589b9874590b6a9705e64] dma-mapping: Simplify arch_setup_dma_ops()
git bisect bad f091e93306e0429ebb7589b9874590b6a9705e64
# good: [91cfd679f9e8b9a7bf2f26adf66eff99dbe2026b] ACPI/IORT: Handle memory address size limits as limits
git bisect good 91cfd679f9e8b9a7bf2f26adf66eff99dbe2026b
# bad: [ad4750b07d3462ce29a0c9b1e88b2a1f9795290e] iommu/dma: Make limit checks self-contained
git bisect bad ad4750b07d3462ce29a0c9b1e88b2a1f9795290e
# good: [fece6530bf4b59b01a476a12851e07751e73d69f] dma-mapping: Add helpers for dma_range_map bounds
git bisect good fece6530bf4b59b01a476a12851e07751e73d69f
# first bad commit: [ad4750b07d3462ce29a0c9b1e88b2a1f9795290e] iommu/dma: Make limit checks self-contained
There are a couple of skips in there and so I will try this again.
>> If you have any ideas on things we can try let me know.
>
> Since the symptom seems inexplicable, I'd throw the usual memory
> debugging stuff like KASAN at it first. I'd also try
> "no_console_suspend" to check whether any late output is being missed in
> the suspend case (and if it's already broken, then any additional issues
> that may be caused by the console itself hopefully shouldn't matter).
>
> For more base-covering, do you have the "arm64: Properly clean up
> iommu-dma remnants" fix in there already as well? That bug has bisected
> to patch #6 each time though, so I do still suspect that what you're
> seeing is likely something else. It does seem potentially significant
> that those Tegra platforms are making fairly wide use of dma-ranges, but
> there's no clear idea forming out of that observation just yet...
I was hoping it was the same issue other people had reported,
but the fix provided did not help. I have also tried today's
-next and I am still seeing the issue.
I should have more time next week to look at this further. Let
me confirm which change is causing this and add more debug.
Jon
--
nvpublic
On 17/05/2024 3:21 pm, Jon Hunter wrote:
>
> On 15/05/2024 15:59, Robin Murphy wrote:
>> Hi Jon,
>>
>> On 2024-05-14 2:27 pm, Jon Hunter wrote:
>>> Hi Robin,
>>>
>>> On 19/04/2024 17:54, Robin Murphy wrote:
>>>> It's now easy to retrieve the device's DMA limits if we want to check
>>>> them against the domain aperture, so do that ourselves instead of
>>>> relying on them being passed through the callchain.
>>>>
>>>> Reviewed-by: Jason Gunthorpe <[email protected]>
>>>> Tested-by: Hanjun Guo <[email protected]>
>>>> Signed-off-by: Robin Murphy <[email protected]>
>>>> ---
>>>> drivers/iommu/dma-iommu.c | 21 +++++++++------------
>>>> 1 file changed, 9 insertions(+), 12 deletions(-)
>>>>
>>>> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
>>>> index a3039005b696..f542eabaefa4 100644
>>>> --- a/drivers/iommu/dma-iommu.c
>>>> +++ b/drivers/iommu/dma-iommu.c
>>>> @@ -660,19 +660,16 @@ static void iommu_dma_init_options(struct
>>>> iommu_dma_options *options,
>>>> /**
>>>> * iommu_dma_init_domain - Initialise a DMA mapping domain
>>>> * @domain: IOMMU domain previously prepared by
>>>> iommu_get_dma_cookie()
>>>> - * @base: IOVA at which the mappable address space starts
>>>> - * @limit: Last address of the IOVA space
>>>> * @dev: Device the domain is being initialised for
>>>> *
>>>> - * @base and @limit + 1 should be exact multiples of IOMMU page
>>>> granularity to
>>>> - * avoid rounding surprises. If necessary, we reserve the page at
>>>> address 0
>>>> + * If the geometry and dma_range_map include address 0, we reserve
>>>> that page
>>>> * to ensure it is an invalid IOVA. It is safe to reinitialise a
>>>> domain, but
>>>> * any change which could make prior IOVAs invalid will fail.
>>>> */
>>>> -static int iommu_dma_init_domain(struct iommu_domain *domain,
>>>> dma_addr_t base,
>>>> - dma_addr_t limit, struct device *dev)
>>>> +static int iommu_dma_init_domain(struct iommu_domain *domain,
>>>> struct device *dev)
>>>> {
>>>> struct iommu_dma_cookie *cookie = domain->iova_cookie;
>>>> + const struct bus_dma_region *map = dev->dma_range_map;
>>>> unsigned long order, base_pfn;
>>>> struct iova_domain *iovad;
>>>> int ret;
>>>> @@ -684,18 +681,18 @@ static int iommu_dma_init_domain(struct
>>>> iommu_domain *domain, dma_addr_t base,
>>>> /* Use the smallest supported page size for IOVA granularity */
>>>> order = __ffs(domain->pgsize_bitmap);
>>>> - base_pfn = max_t(unsigned long, 1, base >> order);
>>>> + base_pfn = 1;
>>>> /* Check the domain allows at least some access to the
>>>> device... */
>>>> - if (domain->geometry.force_aperture) {
>>>> + if (map) {
>>>> + dma_addr_t base = dma_range_map_min(map);
>>>> if (base > domain->geometry.aperture_end ||
>>>> - limit < domain->geometry.aperture_start) {
>>>> + dma_range_map_max(map) <
>>>> domain->geometry.aperture_start) {
>>>> pr_warn("specified DMA range outside IOMMU
>>>> capability\n");
>>>> return -EFAULT;
>>>> }
>>>> /* ...then finally give it a kicking to make sure it fits */
>>>> - base_pfn = max_t(unsigned long, base_pfn,
>>>> - domain->geometry.aperture_start >> order);
>>>> + base_pfn = max(base, domain->geometry.aperture_start) >>
>>>> order;
>>>> }
>>>> /* start_pfn is always nonzero for an already-initialised
>>>> domain */
>>>> @@ -1760,7 +1757,7 @@ void iommu_setup_dma_ops(struct device *dev,
>>>> u64 dma_base, u64 dma_limit)
>>>> * underlying IOMMU driver needs to support via the dma-iommu
>>>> layer.
>>>> */
>>>> if (iommu_is_dma_domain(domain)) {
>>>> - if (iommu_dma_init_domain(domain, dma_base, dma_limit, dev))
>>>> + if (iommu_dma_init_domain(domain, dev))
>>>> goto out_err;
>>>> dev->dma_ops = &iommu_dma_ops;
>>>> }
>>>
>>>
>>> I have noticed some random test failures on Tegra186 and Tegra194 and
>>> bisect is pointing to this commit. Reverting this along with the
>>> various dependencies does fix the problem. On Tegra186 CPU hotplug is
>>> failing and on Tegra194 suspend is failing. Unfortunately, on neither
>>> platform do I see any particular crash but the boards hang somewhere.
>>
>> That is... thoroughly bemusing :/ Not only is there supposed to be no
>> real functional change here - we should merely be recalculating the
>> same information from dev->dma_range_map that the callers were already
>> doing to generate the base/limit arguments - but the act of initially
>> setting up a default domain for a device behind an IOMMU should have
>> no connection whatsoever to suspend and especially not to CPU hotplug.
>
>
> Yes it does look odd, but this is what bisect reported ...
>
> git bisect start
> # good: [a38297e3fb012ddfa7ce0321a7e5a8daeb1872b6] Linux 6.9
> git bisect good a38297e3fb012ddfa7ce0321a7e5a8daeb1872b6
> # bad: [6ba6c795dc73c22ce2c86006f17c4aa802db2a60] Add linux-next
> specific files for 20240513
> git bisect bad 6ba6c795dc73c22ce2c86006f17c4aa802db2a60
> # good: [29e7f949865a023a21ecdfbd82d68ac697569f34] Merge branch 'main'
> of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git
> git bisect good 29e7f949865a023a21ecdfbd82d68ac697569f34
> # skip: [150e6cc14e51f2a07034106a4529cdaafd812c46] Merge branch 'next'
> of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input.git
> git bisect skip 150e6cc14e51f2a07034106a4529cdaafd812c46
> # good: [f5d75327d30af49acf2e4b55f35ce2e6c45d1287] drm/amd/display: Fix
> invalid Copyright notice
> git bisect good f5d75327d30af49acf2e4b55f35ce2e6c45d1287
> # skip: [f1ec9a9ffc526df7c9523006c2abbb8ea554cdd8] Merge branch
> 'for-next' of
> git://git.kernel.org/pub/scm/linux/kernel/git/krzk/linux-dt.git
> git bisect skip f1ec9a9ffc526df7c9523006c2abbb8ea554cdd8
> # bad: [f091e93306e0429ebb7589b9874590b6a9705e64] dma-mapping: Simplify
> arch_setup_dma_ops()
> git bisect bad f091e93306e0429ebb7589b9874590b6a9705e64
> # good: [91cfd679f9e8b9a7bf2f26adf66eff99dbe2026b] ACPI/IORT: Handle
> memory address size limits as limits
> git bisect good 91cfd679f9e8b9a7bf2f26adf66eff99dbe2026b
> # bad: [ad4750b07d3462ce29a0c9b1e88b2a1f9795290e] iommu/dma: Make limit
> checks self-contained
> git bisect bad ad4750b07d3462ce29a0c9b1e88b2a1f9795290e
> # good: [fece6530bf4b59b01a476a12851e07751e73d69f] dma-mapping: Add
> helpers for dma_range_map bounds
> git bisect good fece6530bf4b59b01a476a12851e07751e73d69f
> # first bad commit: [ad4750b07d3462ce29a0c9b1e88b2a1f9795290e]
> iommu/dma: Make limit checks self-contained
>
> There are a couple of skips in there and so I will try this again.
>
>>> If you have any ideas on things we can try let me know.
>>
>> Since the symptom seems inexplicable, I'd throw the usual memory
>> debugging stuff like KASAN at it first. I'd also try
>> "no_console_suspend" to check whether any late output is being missed
>> in the suspend case (and if it's already broken, then any additional
>> issues that may be caused by the console itself hopefully shouldn't
>> matter).
>>
>> For more base-covering, do you have the "arm64: Properly clean up
>> iommu-dma remnants" fix in there already as well? That bug has
>> bisected to patch #6 each time though, so I do still suspect that what
>> you're seeing is likely something else. It does seem potentially
>> significant that those Tegra platforms are making fairly wide use of
>> dma-ranges, but there's no clear idea forming out of that observation
>> just yet...
>
> I was hoping it was the same issue other people had reported,
> but the fix provided did not help. I have also tried today's
> -next and I am still seeing the issue.
>
> I should have more time next week to look at this further. Let
> me confirm which change is causing this and add more debug.
Thanks. From staring at the code I think I've spotted one subtlety which
may not be quite as intended - can you see if the diff below helps? It
occurs to me that suspend and CPU hotplug may not *cause* the symptom,
but they could certainly stall if one or more relevant CPUs is *already*
stuck in a loop somewhere...
Thanks,
Robin.
----->8-----
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 89a53c2f2cf9..85eb1846c637 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -686,6 +686,7 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, struct device *dev
/* Check the domain allows at least some access to the device... */
if (map) {
dma_addr_t base = dma_range_map_min(map);
+ base = max(base, (dma_addr_t)1 << order);
if (base > domain->geometry.aperture_end ||
dma_range_map_max(map) < domain->geometry.aperture_start) {
pr_warn("specified DMA range outside IOMMU capability\n");
On Fri, May 17, 2024 at 04:03:57PM GMT, Robin Murphy wrote:
> On 17/05/2024 3:21 pm, Jon Hunter wrote:
> >
> > On 15/05/2024 15:59, Robin Murphy wrote:
> > > Hi Jon,
> > >
> > > On 2024-05-14 2:27 pm, Jon Hunter wrote:
> > > > Hi Robin,
> > > >
> > > > On 19/04/2024 17:54, Robin Murphy wrote:
> > > > > It's now easy to retrieve the device's DMA limits if we want to check
> > > > > them against the domain aperture, so do that ourselves instead of
> > > > > relying on them being passed through the callchain.
> > > > >
> > > > > Reviewed-by: Jason Gunthorpe <[email protected]>
> > > > > Tested-by: Hanjun Guo <[email protected]>
> > > > > Signed-off-by: Robin Murphy <[email protected]>
> > > > > ---
> > > > >  drivers/iommu/dma-iommu.c | 21 +++++++++------------
> > > > >  1 file changed, 9 insertions(+), 12 deletions(-)
> > > > >
> > > > > diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> > > > > index a3039005b696..f542eabaefa4 100644
> > > > > --- a/drivers/iommu/dma-iommu.c
> > > > > +++ b/drivers/iommu/dma-iommu.c
> > > > > @@ -660,19 +660,16 @@ static void iommu_dma_init_options(struct iommu_dma_options *options,
> > > > >  /**
> > > > >   * iommu_dma_init_domain - Initialise a DMA mapping domain
> > > > >   * @domain: IOMMU domain previously prepared by iommu_get_dma_cookie()
> > > > > - * @base: IOVA at which the mappable address space starts
> > > > > - * @limit: Last address of the IOVA space
> > > > >   * @dev: Device the domain is being initialised for
> > > > >   *
> > > > > - * @base and @limit + 1 should be exact multiples of IOMMU page granularity to
> > > > > - * avoid rounding surprises. If necessary, we reserve the page at address 0
> > > > > + * If the geometry and dma_range_map include address 0, we reserve that page
> > > > >   * to ensure it is an invalid IOVA. It is safe to reinitialise a domain, but
> > > > >   * any change which could make prior IOVAs invalid will fail.
> > > > >   */
> > > > > -static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
> > > > > -		dma_addr_t limit, struct device *dev)
> > > > > +static int iommu_dma_init_domain(struct iommu_domain *domain, struct device *dev)
> > > > >  {
> > > > >  	struct iommu_dma_cookie *cookie = domain->iova_cookie;
> > > > > +	const struct bus_dma_region *map = dev->dma_range_map;
> > > > >  	unsigned long order, base_pfn;
> > > > >  	struct iova_domain *iovad;
> > > > >  	int ret;
> > > > > @@ -684,18 +681,18 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
> > > > >  	/* Use the smallest supported page size for IOVA granularity */
> > > > >  	order = __ffs(domain->pgsize_bitmap);
> > > > > -	base_pfn = max_t(unsigned long, 1, base >> order);
> > > > > +	base_pfn = 1;
> > > > >  	/* Check the domain allows at least some access to the device... */
> > > > > -	if (domain->geometry.force_aperture) {
> > > > > +	if (map) {
> > > > > +		dma_addr_t base = dma_range_map_min(map);
> > > > >  		if (base > domain->geometry.aperture_end ||
> > > > > -		    limit < domain->geometry.aperture_start) {
> > > > > +		    dma_range_map_max(map) < domain->geometry.aperture_start) {
> > > > >  			pr_warn("specified DMA range outside IOMMU capability\n");
> > > > >  			return -EFAULT;
> > > > >  		}
> > > > >  		/* ...then finally give it a kicking to make sure it fits */
> > > > > -		base_pfn = max_t(unsigned long, base_pfn,
> > > > > -				domain->geometry.aperture_start >> order);
> > > > > +		base_pfn = max(base, domain->geometry.aperture_start) >> order;
> > > > >  	}
> > > > >  	/* start_pfn is always nonzero for an already-initialised domain */
> > > > > @@ -1760,7 +1757,7 @@ void iommu_setup_dma_ops(struct device *dev, u64 dma_base, u64 dma_limit)
> > > > >  	 * underlying IOMMU driver needs to support via the dma-iommu layer.
> > > > >  	 */
> > > > >  	if (iommu_is_dma_domain(domain)) {
> > > > > -		if (iommu_dma_init_domain(domain, dma_base, dma_limit, dev))
> > > > > +		if (iommu_dma_init_domain(domain, dev))
> > > > >  			goto out_err;
> > > > >  		dev->dma_ops = &iommu_dma_ops;
> > > > >  	}
> > > >
> > > >
> > > > I have noticed some random test failures on Tegra186 and
> > > > Tegra194 and bisect is pointing to this commit. Reverting this
> > > > along with the various dependencies does fix the problem. On
> > > > Tegra186 CPU hotplug is failing and on Tegra194 suspend is
> > > > failing. Unfortunately, on neither platform do I see any
> > > > particular crash but the boards hang somewhere.
> > >
> > > That is... thoroughly bemusing :/ Not only is there supposed to be
> > > no real functional change here - we should merely be recalculating
> > > the same information from dev->dma_range_map that the callers were
> > > already doing to generate the base/limit arguments - but the act of
> > > initially setting up a default domain for a device behind an IOMMU
> > > should have no connection whatsoever to suspend and especially not
> > > to CPU hotplug.
> >
> >
> > Yes it does look odd, but this is what bisect reported ...
> >
> > git bisect start
> > # good: [a38297e3fb012ddfa7ce0321a7e5a8daeb1872b6] Linux 6.9
> > git bisect good a38297e3fb012ddfa7ce0321a7e5a8daeb1872b6
> > # bad: [6ba6c795dc73c22ce2c86006f17c4aa802db2a60] Add linux-next
> > specific files for 20240513
> > git bisect bad 6ba6c795dc73c22ce2c86006f17c4aa802db2a60
> > # good: [29e7f949865a023a21ecdfbd82d68ac697569f34] Merge branch 'main'
> > of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git
> > git bisect good 29e7f949865a023a21ecdfbd82d68ac697569f34
> > # skip: [150e6cc14e51f2a07034106a4529cdaafd812c46] Merge branch 'next'
> > of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input.git
> > git bisect skip 150e6cc14e51f2a07034106a4529cdaafd812c46
> > # good: [f5d75327d30af49acf2e4b55f35ce2e6c45d1287] drm/amd/display: Fix
> > invalid Copyright notice
> > git bisect good f5d75327d30af49acf2e4b55f35ce2e6c45d1287
> > # skip: [f1ec9a9ffc526df7c9523006c2abbb8ea554cdd8] Merge branch
> > 'for-next' of
> > git://git.kernel.org/pub/scm/linux/kernel/git/krzk/linux-dt.git
> > git bisect skip f1ec9a9ffc526df7c9523006c2abbb8ea554cdd8
> > # bad: [f091e93306e0429ebb7589b9874590b6a9705e64] dma-mapping: Simplify
> > arch_setup_dma_ops()
> > git bisect bad f091e93306e0429ebb7589b9874590b6a9705e64
> > # good: [91cfd679f9e8b9a7bf2f26adf66eff99dbe2026b] ACPI/IORT: Handle
> > memory address size limits as limits
> > git bisect good 91cfd679f9e8b9a7bf2f26adf66eff99dbe2026b
> > # bad: [ad4750b07d3462ce29a0c9b1e88b2a1f9795290e] iommu/dma: Make limit
> > checks self-contained
> > git bisect bad ad4750b07d3462ce29a0c9b1e88b2a1f9795290e
> > # good: [fece6530bf4b59b01a476a12851e07751e73d69f] dma-mapping: Add
> > helpers for dma_range_map bounds
> > git bisect good fece6530bf4b59b01a476a12851e07751e73d69f
> > # first bad commit: [ad4750b07d3462ce29a0c9b1e88b2a1f9795290e]
> > iommu/dma: Make limit checks self-contained
> >
> > There are a couple of skips in there and so I will try this again.
> >
> > > > If you have any ideas on things we can try let me know.
> > >
> > > Since the symptom seems inexplicable, I'd throw the usual memory
> > > debugging stuff like KASAN at it first. I'd also try
> > > "no_console_suspend" to check whether any late output is being
> > > missed in the suspend case (and if it's already broken, then any
> > > additional issues that may be caused by the console itself hopefully
> > > shouldn't matter).
> > >
> > > For more base-covering, do you have the "arm64: Properly clean up
> > > iommu-dma remnants" fix in there already as well? That bug has
> > > bisected to patch #6 each time though, so I do still suspect that
> > > what you're seeing is likely something else. It does seem
> > > potentially significant that those Tegra platforms are making fairly
> > > wide use of dma-ranges, but there's no clear idea forming out of
> > > that observation just yet...
> >
> > I was hoping it was the same issue other people had reported,
> > but the fix provided did not help. I have also tried today's
> > -next and I am still seeing the issue.
> >
> > I should have more time next week to look at this further. Let
> > me confirm which change is causing this and add more debug.
>
> Thanks. From staring at the code I think I've spotted one subtlety which
> may not be quite as intended - can you see if the diff below helps? It
> occurs to me that suspend and CPU hotplug may not *cause* the symptom,
> but they could certainly stall if one or more relevant CPUs is *already*
> stuck in a loop somewhere...
>
> Thanks,
> Robin.
I ran into an issue with arm-smmu as well on an nvidia orin system. What I could
see with the system, which seemed a bit odd to me, was that it had a bridge and a
wireless nic in the same iommu group, with a mapping for the bridge at 0xffff000.
It was failing when it tried to set up pci resources for the wireless nic, as it
was trying to map it at 0xffff000 and the arm_lpae_map path would reject it since
there was already a mapping there.
I'll try to spend more time with it today if I can grab one of the systems.
>
> ----->8-----
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 89a53c2f2cf9..85eb1846c637 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -686,6 +686,7 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, struct device *dev
> /* Check the domain allows at least some access to the device... */
> if (map) {
> dma_addr_t base = dma_range_map_min(map);
> + base = max(base, (dma_addr_t)1 << order);
> if (base > domain->geometry.aperture_end ||
> dma_range_map_max(map) < domain->geometry.aperture_start) {
> pr_warn("specified DMA range outside IOMMU capability\n");
On Fri, May 17, 2024 at 04:03:57PM GMT, Robin Murphy wrote:
> On 17/05/2024 3:21 pm, Jon Hunter wrote:
> >
> > On 15/05/2024 15:59, Robin Murphy wrote:
> > > Hi Jon,
> > >
> > > On 2024-05-14 2:27 pm, Jon Hunter wrote:
> > > > Hi Robin,
> > > >
> > > > On 19/04/2024 17:54, Robin Murphy wrote:
> > > > > It's now easy to retrieve the device's DMA limits if we want to check
> > > > > them against the domain aperture, so do that ourselves instead of
> > > > > relying on them being passed through the callchain.
> > > > >
> > > > > Reviewed-by: Jason Gunthorpe <[email protected]>
> > > > > Tested-by: Hanjun Guo <[email protected]>
> > > > > Signed-off-by: Robin Murphy <[email protected]>
> > > > > ---
> > > > >  drivers/iommu/dma-iommu.c | 21 +++++++++------------
> > > > >  1 file changed, 9 insertions(+), 12 deletions(-)
> > > > >
> > > > > diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> > > > > index a3039005b696..f542eabaefa4 100644
> > > > > --- a/drivers/iommu/dma-iommu.c
> > > > > +++ b/drivers/iommu/dma-iommu.c
> > > > > @@ -660,19 +660,16 @@ static void iommu_dma_init_options(struct iommu_dma_options *options,
> > > > >  /**
> > > > >   * iommu_dma_init_domain - Initialise a DMA mapping domain
> > > > >   * @domain: IOMMU domain previously prepared by iommu_get_dma_cookie()
> > > > > - * @base: IOVA at which the mappable address space starts
> > > > > - * @limit: Last address of the IOVA space
> > > > >   * @dev: Device the domain is being initialised for
> > > > >   *
> > > > > - * @base and @limit + 1 should be exact multiples of IOMMU page granularity to
> > > > > - * avoid rounding surprises. If necessary, we reserve the page at address 0
> > > > > + * If the geometry and dma_range_map include address 0, we reserve that page
> > > > >   * to ensure it is an invalid IOVA. It is safe to reinitialise a domain, but
> > > > >   * any change which could make prior IOVAs invalid will fail.
> > > > >   */
> > > > > -static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
> > > > > -                 dma_addr_t limit, struct device *dev)
> > > > > +static int iommu_dma_init_domain(struct iommu_domain *domain, struct device *dev)
> > > > >  {
> > > > >      struct iommu_dma_cookie *cookie = domain->iova_cookie;
> > > > > +    const struct bus_dma_region *map = dev->dma_range_map;
> > > > >      unsigned long order, base_pfn;
> > > > >      struct iova_domain *iovad;
> > > > >      int ret;
> > > > > @@ -684,18 +681,18 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
> > > > >
> > > > >      /* Use the smallest supported page size for IOVA granularity */
> > > > >      order = __ffs(domain->pgsize_bitmap);
> > > > > -    base_pfn = max_t(unsigned long, 1, base >> order);
> > > > > +    base_pfn = 1;
> > > > >
> > > > >      /* Check the domain allows at least some access to the device... */
> > > > > -    if (domain->geometry.force_aperture) {
> > > > > +    if (map) {
> > > > > +        dma_addr_t base = dma_range_map_min(map);
> > > > >          if (base > domain->geometry.aperture_end ||
> > > > > -            limit < domain->geometry.aperture_start) {
> > > > > +            dma_range_map_max(map) < domain->geometry.aperture_start) {
> > > > >              pr_warn("specified DMA range outside IOMMU capability\n");
> > > > >              return -EFAULT;
> > > > >          }
> > > > >          /* ...then finally give it a kicking to make sure it fits */
> > > > > -        base_pfn = max_t(unsigned long, base_pfn,
> > > > > -                domain->geometry.aperture_start >> order);
> > > > > +        base_pfn = max(base, domain->geometry.aperture_start) >> order;
> > > > >      }
> > > > >
> > > > >      /* start_pfn is always nonzero for an already-initialised domain */
> > > > > @@ -1760,7 +1757,7 @@ void iommu_setup_dma_ops(struct device *dev, u64 dma_base, u64 dma_limit)
> > > > >       * underlying IOMMU driver needs to support via the dma-iommu layer.
> > > > >       */
> > > > >      if (iommu_is_dma_domain(domain)) {
> > > > > -        if (iommu_dma_init_domain(domain, dma_base, dma_limit, dev))
> > > > > +        if (iommu_dma_init_domain(domain, dev))
> > > > >              goto out_err;
> > > > >          dev->dma_ops = &iommu_dma_ops;
> > > > >      }
> > > >
> > > >
> > > > I have noticed some random test failures on Tegra186 and
> > > > Tegra194 and bisect is pointing to this commit. Reverting this
> > > > along with the various dependencies does fix the problem. On
> > > > Tegra186 CPU hotplug is failing and on Tegra194 suspend is
> > > > failing. Unfortunately, on neither platform do I see any
> > > > particular crash but the boards hang somewhere.
> > >
> > > That is... thoroughly bemusing :/ Not only is there supposed to be
> > > no real functional change here - we should merely be recalculating
> > > the same information from dev->dma_range_map that the callers were
> > > already doing to generate the base/limit arguments - but the act of
> > > initially setting up a default domain for a device behind an IOMMU
> > > should have no connection whatsoever to suspend and especially not
> > > to CPU hotplug.
> >
> >
> > Yes it does look odd, but this is what bisect reported ...
> >
> > git bisect start
> > # good: [a38297e3fb012ddfa7ce0321a7e5a8daeb1872b6] Linux 6.9
> > git bisect good a38297e3fb012ddfa7ce0321a7e5a8daeb1872b6
> > # bad: [6ba6c795dc73c22ce2c86006f17c4aa802db2a60] Add linux-next
> > specific files for 20240513
> > git bisect bad 6ba6c795dc73c22ce2c86006f17c4aa802db2a60
> > # good: [29e7f949865a023a21ecdfbd82d68ac697569f34] Merge branch 'main'
> > of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git
> > git bisect good 29e7f949865a023a21ecdfbd82d68ac697569f34
> > # skip: [150e6cc14e51f2a07034106a4529cdaafd812c46] Merge branch 'next'
> > of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input.git
> > git bisect skip 150e6cc14e51f2a07034106a4529cdaafd812c46
> > # good: [f5d75327d30af49acf2e4b55f35ce2e6c45d1287] drm/amd/display: Fix
> > invalid Copyright notice
> > git bisect good f5d75327d30af49acf2e4b55f35ce2e6c45d1287
> > # skip: [f1ec9a9ffc526df7c9523006c2abbb8ea554cdd8] Merge branch
> > 'for-next' of
> > git://git.kernel.org/pub/scm/linux/kernel/git/krzk/linux-dt.git
> > git bisect skip f1ec9a9ffc526df7c9523006c2abbb8ea554cdd8
> > # bad: [f091e93306e0429ebb7589b9874590b6a9705e64] dma-mapping: Simplify
> > arch_setup_dma_ops()
> > git bisect bad f091e93306e0429ebb7589b9874590b6a9705e64
> > # good: [91cfd679f9e8b9a7bf2f26adf66eff99dbe2026b] ACPI/IORT: Handle
> > memory address size limits as limits
> > git bisect good 91cfd679f9e8b9a7bf2f26adf66eff99dbe2026b
> > # bad: [ad4750b07d3462ce29a0c9b1e88b2a1f9795290e] iommu/dma: Make limit
> > checks self-contained
> > git bisect bad ad4750b07d3462ce29a0c9b1e88b2a1f9795290e
> > # good: [fece6530bf4b59b01a476a12851e07751e73d69f] dma-mapping: Add
> > helpers for dma_range_map bounds
> > git bisect good fece6530bf4b59b01a476a12851e07751e73d69f
> > # first bad commit: [ad4750b07d3462ce29a0c9b1e88b2a1f9795290e]
> > iommu/dma: Make limit checks self-contained
> >
> > There are a couple of skips in there, so I will try this again.
> >
> > > > If you have any ideas on things we can try let me know.
> > >
> > > Since the symptom seems inexplicable, I'd throw the usual memory
> > > debugging stuff like KASAN at it first. I'd also try
> > > "no_console_suspend" to check whether any late output is being
> > > missed in the suspend case (and if it's already broken, then any
> > > additional issues that may be caused by the console itself hopefully
> > > shouldn't matter).
> > >
> > > For more base-covering, do you have the "arm64: Properly clean up
> > > iommu-dma remnants" fix in there already as well? That bug has
> > > bisected to patch #6 each time though, so I do still suspect that
> > > what you're seeing is likely something else. It does seem
> > > potentially significant that those Tegra platforms are making fairly
> > > wide use of dma-ranges, but there's no clear idea forming out of
> > > that observation just yet...
> >
> > I was hoping it was the same issue other people had reported,
> > but the fix provided did not help. I have also tried today's
> > -next and I am still seeing the issue.
> >
> > I should have more time next week to look at this further. Let
> > me confirm which change is causing this and add more debug.
>
> Thanks. From staring at the code I think I've spotted one subtlety which
> may not be quite as intended - can you see if the diff below helps? It
> occurs to me that suspend and CPU hotplug may not *cause* the symptom,
> but they could certainly stall if one or more relevant CPUs is *already*
> stuck in a loop somewhere...
>
> Thanks,
> Robin.
>
> ----->8-----
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 89a53c2f2cf9..85eb1846c637 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -686,6 +686,7 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, struct device *dev
> /* Check the domain allows at least some access to the device... */
> if (map) {
> dma_addr_t base = dma_range_map_min(map);
> + base = max(base, (dma_addr_t)1 << order);
> if (base > domain->geometry.aperture_end ||
> dma_range_map_max(map) < domain->geometry.aperture_start) {
> pr_warn("specified DMA range outside IOMMU capability\n");
With this in place I no longer see the mapping fail on the nvidia system.
Regards,
Jerry
On 2024-05-18 7:31 pm, Jerry Snitselaar wrote:
> On Fri, May 17, 2024 at 04:03:57PM GMT, Robin Murphy wrote:
>> On 17/05/2024 3:21 pm, Jon Hunter wrote:
>>>
>>> On 15/05/2024 15:59, Robin Murphy wrote:
>>>> Hi Jon,
>>>>
>>>> On 2024-05-14 2:27 pm, Jon Hunter wrote:
>>>>> Hi Robin,
>>>>>
>>>>> On 19/04/2024 17:54, Robin Murphy wrote:
>>>>>> It's now easy to retrieve the device's DMA limits if we want to check
>>>>>> them against the domain aperture, so do that ourselves instead of
>>>>>> relying on them being passed through the callchain.
>>>>>>
>>>>>> Reviewed-by: Jason Gunthorpe <[email protected]>
>>>>>> Tested-by: Hanjun Guo <[email protected]>
>>>>>> Signed-off-by: Robin Murphy <[email protected]>
>>>>>> ---
>>>>>>  drivers/iommu/dma-iommu.c | 21 +++++++++------------
>>>>>>  1 file changed, 9 insertions(+), 12 deletions(-)
>>>>>>
>>>>>> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
>>>>>> index a3039005b696..f542eabaefa4 100644
>>>>>> --- a/drivers/iommu/dma-iommu.c
>>>>>> +++ b/drivers/iommu/dma-iommu.c
>>>>>> @@ -660,19 +660,16 @@ static void iommu_dma_init_options(struct iommu_dma_options *options,
>>>>>>  /**
>>>>>>   * iommu_dma_init_domain - Initialise a DMA mapping domain
>>>>>>   * @domain: IOMMU domain previously prepared by iommu_get_dma_cookie()
>>>>>> - * @base: IOVA at which the mappable address space starts
>>>>>> - * @limit: Last address of the IOVA space
>>>>>>   * @dev: Device the domain is being initialised for
>>>>>>   *
>>>>>> - * @base and @limit + 1 should be exact multiples of IOMMU page granularity to
>>>>>> - * avoid rounding surprises. If necessary, we reserve the page at address 0
>>>>>> + * If the geometry and dma_range_map include address 0, we reserve that page
>>>>>>   * to ensure it is an invalid IOVA. It is safe to reinitialise a domain, but
>>>>>>   * any change which could make prior IOVAs invalid will fail.
>>>>>>   */
>>>>>> -static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
>>>>>> -                 dma_addr_t limit, struct device *dev)
>>>>>> +static int iommu_dma_init_domain(struct iommu_domain *domain, struct device *dev)
>>>>>>  {
>>>>>>      struct iommu_dma_cookie *cookie = domain->iova_cookie;
>>>>>> +    const struct bus_dma_region *map = dev->dma_range_map;
>>>>>>      unsigned long order, base_pfn;
>>>>>>      struct iova_domain *iovad;
>>>>>>      int ret;
>>>>>> @@ -684,18 +681,18 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
>>>>>>
>>>>>>      /* Use the smallest supported page size for IOVA granularity */
>>>>>>      order = __ffs(domain->pgsize_bitmap);
>>>>>> -    base_pfn = max_t(unsigned long, 1, base >> order);
>>>>>> +    base_pfn = 1;
>>>>>>
>>>>>>      /* Check the domain allows at least some access to the device... */
>>>>>> -    if (domain->geometry.force_aperture) {
>>>>>> +    if (map) {
>>>>>> +        dma_addr_t base = dma_range_map_min(map);
>>>>>>          if (base > domain->geometry.aperture_end ||
>>>>>> -            limit < domain->geometry.aperture_start) {
>>>>>> +            dma_range_map_max(map) < domain->geometry.aperture_start) {
>>>>>>              pr_warn("specified DMA range outside IOMMU capability\n");
>>>>>>              return -EFAULT;
>>>>>>          }
>>>>>>          /* ...then finally give it a kicking to make sure it fits */
>>>>>> -        base_pfn = max_t(unsigned long, base_pfn,
>>>>>> -                domain->geometry.aperture_start >> order);
>>>>>> +        base_pfn = max(base, domain->geometry.aperture_start) >> order;
>>>>>>      }
>>>>>>
>>>>>>      /* start_pfn is always nonzero for an already-initialised domain */
>>>>>> @@ -1760,7 +1757,7 @@ void iommu_setup_dma_ops(struct device *dev, u64 dma_base, u64 dma_limit)
>>>>>>       * underlying IOMMU driver needs to support via the dma-iommu layer.
>>>>>>       */
>>>>>>      if (iommu_is_dma_domain(domain)) {
>>>>>> -        if (iommu_dma_init_domain(domain, dma_base, dma_limit, dev))
>>>>>> +        if (iommu_dma_init_domain(domain, dev))
>>>>>>              goto out_err;
>>>>>>          dev->dma_ops = &iommu_dma_ops;
>>>>>>      }
>>>>>
>>>>>
>>>>> I have noticed some random test failures on Tegra186 and
>>>>> Tegra194 and bisect is pointing to this commit. Reverting this
>>>>> along with the various dependencies does fix the problem. On
>>>>> Tegra186 CPU hotplug is failing and on Tegra194 suspend is
>>>>> failing. Unfortunately, on neither platform do I see any
>>>>> particular crash but the boards hang somewhere.
>>>>
>>>> That is... thoroughly bemusing :/ Not only is there supposed to be
>>>> no real functional change here - we should merely be recalculating
>>>> the same information from dev->dma_range_map that the callers were
>>>> already doing to generate the base/limit arguments - but the act of
>>>> initially setting up a default domain for a device behind an IOMMU
>>>> should have no connection whatsoever to suspend and especially not
>>>> to CPU hotplug.
>>>
>>>
>>> Yes it does look odd, but this is what bisect reported ...
>>>
>>> git bisect start
>>> # good: [a38297e3fb012ddfa7ce0321a7e5a8daeb1872b6] Linux 6.9
>>> git bisect good a38297e3fb012ddfa7ce0321a7e5a8daeb1872b6
>>> # bad: [6ba6c795dc73c22ce2c86006f17c4aa802db2a60] Add linux-next
>>> specific files for 20240513
>>> git bisect bad 6ba6c795dc73c22ce2c86006f17c4aa802db2a60
>>> # good: [29e7f949865a023a21ecdfbd82d68ac697569f34] Merge branch 'main'
>>> of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git
>>> git bisect good 29e7f949865a023a21ecdfbd82d68ac697569f34
>>> # skip: [150e6cc14e51f2a07034106a4529cdaafd812c46] Merge branch 'next'
>>> of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input.git
>>> git bisect skip 150e6cc14e51f2a07034106a4529cdaafd812c46
>>> # good: [f5d75327d30af49acf2e4b55f35ce2e6c45d1287] drm/amd/display: Fix
>>> invalid Copyright notice
>>> git bisect good f5d75327d30af49acf2e4b55f35ce2e6c45d1287
>>> # skip: [f1ec9a9ffc526df7c9523006c2abbb8ea554cdd8] Merge branch
>>> 'for-next' of
>>> git://git.kernel.org/pub/scm/linux/kernel/git/krzk/linux-dt.git
>>> git bisect skip f1ec9a9ffc526df7c9523006c2abbb8ea554cdd8
>>> # bad: [f091e93306e0429ebb7589b9874590b6a9705e64] dma-mapping: Simplify
>>> arch_setup_dma_ops()
>>> git bisect bad f091e93306e0429ebb7589b9874590b6a9705e64
>>> # good: [91cfd679f9e8b9a7bf2f26adf66eff99dbe2026b] ACPI/IORT: Handle
>>> memory address size limits as limits
>>> git bisect good 91cfd679f9e8b9a7bf2f26adf66eff99dbe2026b
>>> # bad: [ad4750b07d3462ce29a0c9b1e88b2a1f9795290e] iommu/dma: Make limit
>>> checks self-contained
>>> git bisect bad ad4750b07d3462ce29a0c9b1e88b2a1f9795290e
>>> # good: [fece6530bf4b59b01a476a12851e07751e73d69f] dma-mapping: Add
>>> helpers for dma_range_map bounds
>>> git bisect good fece6530bf4b59b01a476a12851e07751e73d69f
>>> # first bad commit: [ad4750b07d3462ce29a0c9b1e88b2a1f9795290e]
>>> iommu/dma: Make limit checks self-contained
>>>
>>> There are a couple of skips in there, so I will try this again.
>>>
>>>>> If you have any ideas on things we can try let me know.
>>>>
>>>> Since the symptom seems inexplicable, I'd throw the usual memory
>>>> debugging stuff like KASAN at it first. I'd also try
>>>> "no_console_suspend" to check whether any late output is being
>>>> missed in the suspend case (and if it's already broken, then any
>>>> additional issues that may be caused by the console itself hopefully
>>>> shouldn't matter).
>>>>
>>>> For more base-covering, do you have the "arm64: Properly clean up
>>>> iommu-dma remnants" fix in there already as well? That bug has
>>>> bisected to patch #6 each time though, so I do still suspect that
>>>> what you're seeing is likely something else. It does seem
>>>> potentially significant that those Tegra platforms are making fairly
>>>> wide use of dma-ranges, but there's no clear idea forming out of
>>>> that observation just yet...
>>>
>>> I was hoping it was the same issue other people had reported,
>>> but the fix provided did not help. I have also tried today's
>>> -next and I am still seeing the issue.
>>>
>>> I should have more time next week to look at this further. Let
>>> me confirm which change is causing this and add more debug.
>>
>> Thanks. From staring at the code I think I've spotted one subtlety which
>> may not be quite as intended - can you see if the diff below helps? It
>> occurs to me that suspend and CPU hotplug may not *cause* the symptom,
>> but they could certainly stall if one or more relevant CPUs is *already*
>> stuck in a loop somewhere...
>>
>> Thanks,
>> Robin.
>>
>> ----->8-----
>> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
>> index 89a53c2f2cf9..85eb1846c637 100644
>> --- a/drivers/iommu/dma-iommu.c
>> +++ b/drivers/iommu/dma-iommu.c
>> @@ -686,6 +686,7 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, struct device *dev
>> /* Check the domain allows at least some access to the device... */
>> if (map) {
>> dma_addr_t base = dma_range_map_min(map);
>> + base = max(base, (dma_addr_t)1 << order);
>> if (base > domain->geometry.aperture_end ||
>> dma_range_map_max(map) < domain->geometry.aperture_start) {
>> pr_warn("specified DMA range outside IOMMU capability\n");
>
> With this in place I no longer see the mapping fail on the nvidia system.
Cheers Jerry, that's reassuring. I'll write up a proper patch shortly -
with Monday morning eyes I realise this isn't entirely the right fix for
how I messed up here - and hope that my guess was right and it's the
source of Jon's issues as well. From experience I know that the effects
of the IOVA allocator going wrong can be varied and downright weird...
Thanks,
Robin.
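
For background on why a zero base_pfn matters at all: the value ends up as the IOVA
domain's start_pfn (the "start_pfn is always nonzero for an already-initialised
domain" context line in the quoted hunks refers to exactly this), and the reservation
of the page at IOVA 0 described in the patch only holds while it stays at least 1.
A toy model of just that property, assumptions and all:

#include <stdbool.h>
#include <stdio.h>

/*
 * Toy model, not kernel code: an allocator whose lowest usable page frame
 * is start_pfn. With start_pfn == 1 the page at IOVA 0 can never be handed
 * out, so address 0 remains a safely invalid IOVA; with start_pfn == 0
 * that guarantee is gone.
 */
static bool pfn_is_allocatable(unsigned long start_pfn, unsigned long pfn)
{
        return pfn >= start_pfn;
}

int main(void)
{
        printf("start_pfn=1: pfn 0 allocatable? %d\n", pfn_is_allocatable(1, 0)); /* 0 */
        printf("start_pfn=0: pfn 0 allocatable? %d\n", pfn_is_allocatable(0, 0)); /* 1 */
        return 0;
}
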
Hi Jens,
On 2024-06-03 8:37 pm, Jens Glathe wrote:
> Hi Robin,
>
> an observation from 6.10-rc1: On sc8280xp (Lenovo X13s, Windows Dev Kit
> 2023), when booted to EL2 with the arm-smmuv3 under control of Linux, it
> fails to set up DMA transfers to nvme. My box boots from nvme, so I only
> got a black screen. @craftyguy booted from USB, and got this:
Indeed, I see there's a dma-ranges property with a base of 0 in that DT,
so all manner of hilarity may ensue. The fix is here, just waiting to be
picked up:
https://lore.kernel.org/linux-iommu/159193e80b6a7701c61b32d6119ac68989d457bd.1716997607.git.robin.murphy@arm.com/
Thanks,
Robin.
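
To connect Jens's report back to the discussion above: a dma-ranges entry with a DMA
base of 0 makes the minimum reported by the bounds helpers from "dma-mapping: Add
helpers for dma_range_map bounds" come out as 0, which is the value that then reaches
the base_pfn calculation. A standalone sketch under that assumption follows; the
struct and helper are paraphrased stand-ins, not the kernel definitions:

#include <stdint.h>
#include <stdio.h>

/* Stand-in for struct bus_dma_region, reduced to the fields relevant here. */
struct region {
        uint64_t dma_start;
        uint64_t size;          /* a zero size terminates the array */
};

/* Paraphrase of what a dma_range_map_min()-style helper computes. */
static uint64_t range_map_min(const struct region *map)
{
        uint64_t ret = UINT64_MAX;

        for (; map->size; map++)
                if (map->dma_start < ret)
                        ret = map->dma_start;
        return ret;
}

int main(void)
{
        const struct region map[] = {
                { .dma_start = 0, .size = 0x80000000ull }, /* a base-0 dma-ranges entry */
                { 0 }
        };

        printf("min DMA address = %#llx\n",
               (unsigned long long)range_map_min(map));    /* 0 */
        return 0;
}
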
Hi Robin,
an observation from 6.10-rc1: On sc8280xp (Lenovo X13s, Windows Dev Kit
2023), when booted to EL2 with the arm-smmuv3 under control of Linux, it
fails to set up DMA transfers to nvme. My box boots from nvme, so I only
got a black screen. @craftyguy booted from USB, and got this:
[ 0.008641] CPU: All CPU(s) started at EL2
...
[ 1.475359] nvme 0002:01:00.0: Adding to iommu group 5
[ 1.476346] nvme nvme0: pci function 0002:01:00.0
[ 1.477134] nvme 0002:01:00.0: enabling device (0000 -> 0002)
[ 1.478457] ------------[ cut here ]------------
[ 1.479233] WARNING: CPU: 5 PID: 95 at drivers/iommu/io-pgtable-arm.c:304 __arm_lpae_map+0x2d0/0x3f0
[ 1.480040] Modules linked in: pcie_qcom phy_qcom_qmp_pcie nvme nvme_core
[ 1.480858] CPU: 5 PID: 95 Comm: kworker/u32:4 Not tainted 6.10.0-rc1 #1-lenovo-21bx
[ 1.481669] Hardware name: LENOVO 21BXCTO1WW/21BXCTO1WW, BIOS N3HET88W (1.60 ) 03/14/2024
[ 1.482483] Workqueue: async async_run_entry_fn
[ 1.483309] pstate: 60400009 (nZCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[ 1.484136] pc : __arm_lpae_map+0x2d0/0x3f0
[ 1.484963] lr : __arm_lpae_map+0x128/0x3f0
[ 1.485789] sp : ffff80008116b2e0
[ 1.486613] x29: ffff80008116b2f0 x28: 0000000000000001 x27: ffff591c40fb9ff8
[ 1.487447] x26: 0000000000000001 x25: 0000000000001000 x24: 00000000fffff000
[ 1.488285] x23: 0000000000000003 x22: 00000001038c8000 x21: 0000000000000f44
[ 1.489117] x20: 0000000000000001 x19: ffff591c4947bd80 x18: ffffffffffffffff
[ 1.489944] x17: ffff591c4945bc00 x16: ffffc0bbfadf8768 x15: ffff591c4389e000
[ 1.490771] x14: 0000000000000000 x13: ffff591c4947bd80 x12: ffff80008116b540
[ 1.491599] x11: 0000000000000dc0 x10: 0000000000000001 x9 : 0000000000000009
[ 1.492410] x8 : 00000000000001ff x7 : 0000000000000000 x6 : 0000000000000002
[ 1.493197] x5 : 0000000000000003 x4 : 0000000000000001 x3 : 0000000000000001
[ 1.493978] x2 : 0000000000000003 x1 : 0000000000000000 x0 : ffff591c4947bd80
[ 1.494771] Call trace:
[ 1.495541] __arm_lpae_map+0x2d0/0x3f0
[ 1.496320] __arm_lpae_map+0x128/0x3f0
[ 1.497091] __arm_lpae_map+0x128/0x3f0
[ 1.497861] __arm_lpae_map+0x128/0x3f0
[ 1.498620] arm_lpae_map_pages+0xfc/0x1d0
[ 1.499377] arm_smmu_map_pages+0x24/0x40
[ 1.500132] __iommu_map+0x134/0x2c8
[ 1.500888] iommu_map_sg+0xc0/0x1d0
[ 1.501640] __iommu_dma_alloc_noncontiguous.isra.0+0x2d8/0x4c0
[ 1.502403] iommu_dma_alloc+0x25c/0x3c8
[ 1.503162] dma_alloc_attrs+0x100/0x110
[ 1.503919] nvme_alloc_queue+0x6c/0x170 [nvme]
[ 1.504684] nvme_pci_enable+0x228/0x518 [nvme]
[ 1.505438] nvme_probe+0x290/0x6d8 [nvme]
[ 1.506188] local_pci_probe+0x48/0xb8
[ 1.506937] pci_device_probe+0xb0/0x1d8
[ 1.507683] really_probe+0xc8/0x3a0
[ 1.508433] __driver_probe_device+0x84/0x170
[ 1.509182] driver_probe_device+0x44/0x120
[ 1.509930] __device_attach_driver+0xc4/0x168
[ 1.510675] bus_for_each_drv+0x90/0xf8
[ 1.511434] __device_attach+0xa8/0x1c8
[ 1.512183] device_attach+0x1c/0x30
[ 1.512927] pci_bus_add_device+0x6c/0xe8
[ 1.513653] pci_bus_add_devices+0x40/0x98
[ 1.514357] pci_bus_add_devices+0x6c/0x98
[ 1.515058] pci_host_probe+0x4c/0xd0
[ 1.515756] dw_pcie_host_init+0x250/0x660
[ 1.516452] qcom_pcie_probe+0x234/0x320 [pcie_qcom]
[ 1.517155] platform_probe+0x70/0xd8
[ 1.517854] really_probe+0xc8/0x3a0
[ 1.518543] __driver_probe_device+0x84/0x170
[ 1.519230] driver_probe_device+0x44/0x120
[ 1.519914] __driver_attach_async_helper+0x58/0x100
[ 1.520603] async_run_entry_fn+0x3c/0x160
[ 1.521295] process_one_work+0x160/0x3f0
[ 1.521991] worker_thread+0x304/0x420
[ 1.522666] kthread+0x118/0x128
[ 1.523318] ret_from_fork+0x10/0x20
[ 1.523968] ---[ end trace 0000000000000000 ]---
[ 1.524788] nvme 0002:01:00.0: probe with driver nvme failed with error -12
From bisecting this I landed at this patchset, and I had to revert it
to make 6.10-rc1 work again. From what I've seen, the issue appears to
come from of_dma_configure_id(): instead of the base and size parameters,
a range map stored in the device structure is now what
arch_setup_dma_ops() gets to work with.
Since this only happens with the arm-smmuv3 under Linux control, it could
be either a malformed DT or some gap in deriving the right map for DMA
from the given parameters.
The additional definition in the DT (sc8280xp.dtsi) for the pcie2a port
is this:
iommu-map = <0 &pcie_smmu 0x20000 0x10000>;
This has worked since v6.8. The repository with a working version can be
found here:
https://github.com/jglathe/linux_ms_dev_kit/tree/jg/el2-blackrock-v6.10.y
with best regards
Jens
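
For reference, the iommu-map entry quoted in Jens's message follows the generic PCI
iommu-map binding: requester IDs starting at the rid-base are translated linearly
into the IOMMU's input ID space. The helper below is only an illustration of that
<rid-base &iommu iommu-base length> arithmetic for this particular entry, not the
kernel's actual translation code:

#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative translation for iommu-map = <0 &pcie_smmu 0x20000 0x10000>:
 * RIDs 0x0000-0xffff map onto stream IDs 0x20000-0x2ffff.
 */
static int map_rid(uint32_t rid, uint32_t rid_base, uint32_t iommu_base,
                   uint32_t length, uint32_t *out)
{
        if (rid < rid_base || rid - rid_base >= length)
                return -1;      /* RID not covered by this entry */
        *out = iommu_base + (rid - rid_base);
        return 0;
}

int main(void)
{
        uint32_t sid;

        /* e.g. the NVMe function at 0002:01:00.0 has RID 0x0100 */
        if (!map_rid(0x0100, 0x0, 0x20000, 0x10000, &sid))
                printf("stream ID %#x\n", sid);             /* 0x20100 */
        return 0;
}
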
On 4/19/24 18:54, Robin Murphy wrote:
> It's now easy to retrieve the device's DMA limits if we want to check
> them against the domain aperture, so do that ourselves instead of
> relying on them being passed through the callchain.
>
> Reviewed-by: Jason Gunthorpe <[email protected]>
> Tested-by: Hanjun Guo <[email protected]>
> Signed-off-by: Robin Murphy <[email protected]>
> ---
> drivers/iommu/dma-iommu.c | 21 +++++++++------------
> 1 file changed, 9 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index a3039005b696..f542eabaefa4 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -660,19 +660,16 @@ static void iommu_dma_init_options(struct iommu_dma_options *options,
> /**
> * iommu_dma_init_domain - Initialise a DMA mapping domain
> * @domain: IOMMU domain previously prepared by iommu_get_dma_cookie()
> - * @base: IOVA at which the mappable address space starts
> - * @limit: Last address of the IOVA space
> * @dev: Device the domain is being initialised for
> *
> - * @base and @limit + 1 should be exact multiples of IOMMU page granularity to
> - * avoid rounding surprises. If necessary, we reserve the page at address 0
> + * If the geometry and dma_range_map include address 0, we reserve that page
> * to ensure it is an invalid IOVA. It is safe to reinitialise a domain, but
> * any change which could make prior IOVAs invalid will fail.
> */
> -static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
> - dma_addr_t limit, struct device *dev)
> +static int iommu_dma_init_domain(struct iommu_domain *domain, struct device *dev)
> {
> struct iommu_dma_cookie *cookie = domain->iova_cookie;
> + const struct bus_dma_region *map = dev->dma_range_map;
> unsigned long order, base_pfn;
> struct iova_domain *iovad;
> int ret;
> @@ -684,18 +681,18 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base,
>
> /* Use the smallest supported page size for IOVA granularity */
> order = __ffs(domain->pgsize_bitmap);
> - base_pfn = max_t(unsigned long, 1, base >> order);
> + base_pfn = 1;
>
> /* Check the domain allows at least some access to the device... */
> - if (domain->geometry.force_aperture) {
> + if (map) {
> + dma_addr_t base = dma_range_map_min(map);
> if (base > domain->geometry.aperture_end ||
> - limit < domain->geometry.aperture_start) {
> + dma_range_map_max(map) < domain->geometry.aperture_start) {
> pr_warn("specified DMA range outside IOMMU capability\n");
> return -EFAULT;
> }
> /* ...then finally give it a kicking to make sure it fits */
> - base_pfn = max_t(unsigned long, base_pfn,
> - domain->geometry.aperture_start >> order);
> + base_pfn = max(base, domain->geometry.aperture_start) >> order;
> }
>
> /* start_pfn is always nonzero for an already-initialised domain */
> @@ -1760,7 +1757,7 @@ void iommu_setup_dma_ops(struct device *dev, u64 dma_base, u64 dma_limit)
> * underlying IOMMU driver needs to support via the dma-iommu layer.
> */
> if (iommu_is_dma_domain(domain)) {
> - if (iommu_dma_init_domain(domain, dma_base, dma_limit, dev))
> + if (iommu_dma_init_domain(domain, dev))
> goto out_err;
> dev->dma_ops = &iommu_dma_ops;
> }
Hi Robin,
oh that was fast. Will test, thanks!
with best regards
Jens
On 6/3/24 21:46, Robin Murphy wrote:
> Hi Jens,
>
> On 2024-06-03 8:37 pm, Jens Glathe wrote:
>> Hi Robin,
>>
>> an observation from 6.10-rc1: On sc8280xp (Lenovo X13s, Windows Dev Kit
>> 2023), when booted to EL2 with the arm-smmuv3 under control of Linux, it
>> fails to set up DMA transfers to nvme. My box boots from nvme, so I only
>> got a black screen. @craftyguy booted from USB, and got this:
>
> Indeed, I see there's a dma-ranges property with a base of 0 in that
> DT, so all manner of hilarity may ensue. The fix is here, just waiting
> to be picked up:
>
> https://lore.kernel.org/linux-iommu/159193e80b6a7701c61b32d6119ac68989d457bd.1716997607.git.robin.murphy@arm.com/
>
>
> Thanks,
> Robin.