2021-07-19 12:32:50

by Will Deacon

Subject: [PATCH 0/5] Fix restricted DMA vs swiotlb_exit()

Hi all,

This series fixes the issues which have been reported against the
Restricted DMA series in -next:

* Fix the build for Sparc as reported by Guenter [1].

* Rework the lifetime of 'io_tlb_default_mem' so that devices
can retain valid references to it even after swiotlb_exit(). This,
in turn, fixes the x86/AMD IOMMU regressions reported by Nathan [2].

I then added a diagnostic to swiotlb_exit(), as suggested by Konrad
[3], and the final patch frees the underlying buffer memory during
teardown, although I must confess that I don't know why this wasn't
being done already.

A massive thank you to Nathan for helping to debug this and also for
testing these patches to confirm that they address the issue on his
machine.

Patches are based against swiotlb devel/for-linus-5.15.

Cheers,

Will

[1] https://lore.kernel.org/r/[email protected]
[2] https://lore.kernel.org/r/[email protected]
[3] https://lore.kernel.org/r/[email protected]

Cc: Guenter Roeck <[email protected]>
Cc: Claire Chang <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Robin Murphy <[email protected]>
Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: Nathan Chancellor <[email protected]>

--->8

Will Deacon (5):
of: Return success from of_dma_set_restricted_buffer() when
!OF_ADDRESS
swiotlb: Point io_default_tlb_mem at static allocation
swiotlb: Remove io_tlb_default_mem indirection
swiotlb: Emit diagnostic in swiotlb_exit()
swiotlb: Free tbl memory in swiotlb_exit()

drivers/base/core.c | 2 +-
drivers/of/of_private.h | 3 +-
drivers/xen/swiotlb-xen.c | 4 +-
include/linux/swiotlb.h | 4 +-
kernel/dma/swiotlb.c | 82 +++++++++++++++++++++++----------------
5 files changed, 56 insertions(+), 39 deletions(-)

--
2.32.0.402.g57bb445576-goog


2021-07-19 12:33:07

by Will Deacon

Subject: [PATCH 3/5] swiotlb: Remove io_tlb_default_mem indirection

The indirection from the global 'io_tlb_default_mem' pointer to the
static '_io_tlb_default_mem' structure is ugly and unnecessary.

Convert all users to reference the static structure directly, using the
'nslabs' field to determine whether swiotlb has been initialised.
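The resulting "has swiotlb been initialised?" idiom can be illustrated with a standalone userspace sketch (not kernel code: the struct and helper names mirror the patch, but the scaffolding around them is invented for illustration):

```c
#include <assert.h>
#include <string.h>

/* Simplified stand-in for the kernel's struct io_tlb_mem. */
struct io_tlb_mem {
	unsigned long nslabs;
};

/* Static and zero-initialised: nslabs == 0 now means "not initialised",
 * replacing the old NULL check on a global pointer. */
static struct io_tlb_mem io_tlb_default_mem;

static int swiotlb_initialised(void)
{
	return io_tlb_default_mem.nslabs != 0;
}

static void fake_init(unsigned long nslabs)
{
	io_tlb_default_mem.nslabs = nslabs;
}

static void fake_exit(void)
{
	/* mirrors the memset(mem, 0, sizeof(*mem)) in swiotlb_exit() */
	memset(&io_tlb_default_mem, 0, sizeof(io_tlb_default_mem));
}
```

Because the structure itself is never freed, stale per-device pointers to it remain safe to dereference; they merely observe nslabs == 0.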

Cc: Claire Chang <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Robin Murphy <[email protected]>
Cc: Konrad Rzeszutek Wilk <[email protected]>
Tested-by: Nathan Chancellor <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
---
drivers/base/core.c | 2 +-
drivers/xen/swiotlb-xen.c | 4 ++--
include/linux/swiotlb.h | 2 +-
kernel/dma/swiotlb.c | 38 ++++++++++++++++++--------------------
4 files changed, 22 insertions(+), 24 deletions(-)

diff --git a/drivers/base/core.c b/drivers/base/core.c
index ea5b85354526..b49824001cfa 100644
--- a/drivers/base/core.c
+++ b/drivers/base/core.c
@@ -2848,7 +2848,7 @@ void device_initialize(struct device *dev)
dev->dma_coherent = dma_default_coherent;
#endif
#ifdef CONFIG_SWIOTLB
- dev->dma_io_tlb_mem = io_tlb_default_mem;
+ dev->dma_io_tlb_mem = &io_tlb_default_mem;
#endif
}
EXPORT_SYMBOL_GPL(device_initialize);
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 785ec7e8be01..f06d9b4f1e0f 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -164,7 +164,7 @@ int __ref xen_swiotlb_init(void)
int rc = -ENOMEM;
char *start;

- if (io_tlb_default_mem != NULL) {
+ if (io_tlb_default_mem.nslabs) {
pr_warn("swiotlb buffer already initialized\n");
return -EEXIST;
}
@@ -547,7 +547,7 @@ xen_swiotlb_sync_sg_for_device(struct device *dev, struct scatterlist *sgl,
static int
xen_swiotlb_dma_supported(struct device *hwdev, u64 mask)
{
- return xen_phys_to_dma(hwdev, io_tlb_default_mem->end - 1) <= mask;
+ return xen_phys_to_dma(hwdev, io_tlb_default_mem.end - 1) <= mask;
}

const struct dma_map_ops xen_swiotlb_dma_ops = {
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index d3b617c19045..b0cb2a9973f4 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -105,7 +105,7 @@ struct io_tlb_mem {
unsigned int list;
} *slots;
};
-extern struct io_tlb_mem *io_tlb_default_mem;
+extern struct io_tlb_mem io_tlb_default_mem;

static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
{
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 992d73cdc944..7948f274f9bb 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -70,8 +70,7 @@

enum swiotlb_force swiotlb_force;

-struct io_tlb_mem *io_tlb_default_mem;
-static struct io_tlb_mem _io_tlb_default_mem;
+struct io_tlb_mem io_tlb_default_mem;

/*
* Max segment that we can provide which (if pages are contingous) will
@@ -102,7 +101,7 @@ early_param("swiotlb", setup_io_tlb_npages);

unsigned int swiotlb_max_segment(void)
{
- return io_tlb_default_mem ? max_segment : 0;
+ return io_tlb_default_mem.nslabs ? max_segment : 0;
}
EXPORT_SYMBOL_GPL(swiotlb_max_segment);

@@ -135,9 +134,9 @@ void __init swiotlb_adjust_size(unsigned long size)

void swiotlb_print_info(void)
{
- struct io_tlb_mem *mem = io_tlb_default_mem;
+ struct io_tlb_mem *mem = &io_tlb_default_mem;

- if (!mem) {
+ if (!mem->nslabs) {
pr_warn("No low mem\n");
return;
}
@@ -164,11 +163,11 @@ static inline unsigned long nr_slots(u64 val)
*/
void __init swiotlb_update_mem_attributes(void)
{
- struct io_tlb_mem *mem = io_tlb_default_mem;
+ struct io_tlb_mem *mem = &io_tlb_default_mem;
void *vaddr;
unsigned long bytes;

- if (!mem || mem->late_alloc)
+ if (!mem->nslabs || mem->late_alloc)
return;
vaddr = phys_to_virt(mem->start);
bytes = PAGE_ALIGN(mem->nslabs << IO_TLB_SHIFT);
@@ -202,14 +201,14 @@ static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,

int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
{
- struct io_tlb_mem *mem = &_io_tlb_default_mem;
+ struct io_tlb_mem *mem = &io_tlb_default_mem;
size_t alloc_size;

if (swiotlb_force == SWIOTLB_NO_FORCE)
return 0;

/* protect against double initialization */
- if (WARN_ON_ONCE(io_tlb_default_mem))
+ if (WARN_ON_ONCE(mem->nslabs))
return -ENOMEM;

alloc_size = PAGE_ALIGN(array_size(sizeof(*mem->slots), nslabs));
@@ -220,7 +219,6 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)

swiotlb_init_io_tlb_mem(mem, __pa(tlb), nslabs, false);

- io_tlb_default_mem = mem;
if (verbose)
swiotlb_print_info();
swiotlb_set_max_segment(mem->nslabs << IO_TLB_SHIFT);
@@ -305,14 +303,14 @@ swiotlb_late_init_with_default_size(size_t default_size)
int
swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
{
- struct io_tlb_mem *mem = &_io_tlb_default_mem;
+ struct io_tlb_mem *mem = &io_tlb_default_mem;
unsigned long bytes = nslabs << IO_TLB_SHIFT;

if (swiotlb_force == SWIOTLB_NO_FORCE)
return 0;

/* protect against double initialization */
- if (WARN_ON_ONCE(io_tlb_default_mem))
+ if (WARN_ON_ONCE(mem->nslabs))
return -ENOMEM;

mem->slots = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
@@ -323,7 +321,6 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
swiotlb_init_io_tlb_mem(mem, virt_to_phys(tlb), nslabs, true);

- io_tlb_default_mem = mem;
swiotlb_print_info();
swiotlb_set_max_segment(mem->nslabs << IO_TLB_SHIFT);
return 0;
@@ -331,10 +328,10 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)

void __init swiotlb_exit(void)
{
- struct io_tlb_mem *mem = io_tlb_default_mem;
size_t size;
+ struct io_tlb_mem *mem = &io_tlb_default_mem;

- if (!mem)
+ if (!mem->nslabs)
return;

size = array_size(sizeof(*mem->slots), mem->nslabs);
@@ -342,7 +339,6 @@ void __init swiotlb_exit(void)
free_pages((unsigned long)mem->slots, get_order(size));
else
memblock_free_late(__pa(mem->slots), PAGE_ALIGN(size));
- io_tlb_default_mem = NULL;
memset(mem, 0, sizeof(*mem));
}

@@ -697,7 +693,9 @@ size_t swiotlb_max_mapping_size(struct device *dev)

bool is_swiotlb_active(struct device *dev)
{
- return dev->dma_io_tlb_mem != NULL;
+ struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
+
+ return mem && mem->nslabs;
}
EXPORT_SYMBOL_GPL(is_swiotlb_active);

@@ -712,10 +710,10 @@ static void swiotlb_create_debugfs_files(struct io_tlb_mem *mem)

static int __init swiotlb_create_default_debugfs(void)
{
- struct io_tlb_mem *mem = io_tlb_default_mem;
+ struct io_tlb_mem *mem = &io_tlb_default_mem;

debugfs_dir = debugfs_create_dir("swiotlb", NULL);
- if (mem) {
+ if (mem->nslabs) {
mem->debugfs = debugfs_dir;
swiotlb_create_debugfs_files(mem);
}
@@ -814,7 +812,7 @@ static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
static void rmem_swiotlb_device_release(struct reserved_mem *rmem,
struct device *dev)
{
- dev->dma_io_tlb_mem = io_tlb_default_mem;
+ dev->dma_io_tlb_mem = &io_tlb_default_mem;
}

static const struct reserved_mem_ops rmem_swiotlb_ops = {
--
2.32.0.402.g57bb445576-goog

2021-07-19 12:33:07

by Will Deacon

Subject: [PATCH 1/5] of: Return success from of_dma_set_restricted_buffer() when !OF_ADDRESS

When CONFIG_OF_ADDRESS=n, of_dma_set_restricted_buffer() returns -ENODEV
and breaks the boot for sparc[64] machines. Return 0 instead, since the
function is essentially a glorified NOP in this configuration.
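As a hedged illustration of why the stub's return value matters (the caller below is hypothetical and simplified; the real of_dma_configure() plumbing differs), an error-returning stub for a compiled-out feature propagates failure even though nothing is wrong:

```c
#include <assert.h>

/* Sketch of the stub pattern. When the feature is compiled out, the
 * inline stub must report success; otherwise any caller that
 * propagates errors fails spuriously. */
#ifdef CONFIG_OF_ADDRESS
int of_dma_set_restricted_buffer(void);	/* real implementation elsewhere */
#else
static inline int of_dma_set_restricted_buffer(void)
{
	/* Do nothing, successfully. */
	return 0;	/* was: return -ENODEV; */
}
#endif

/* Hypothetical caller that propagates the stub's return value. */
static int configure_device_dma(void)
{
	int ret = of_dma_set_restricted_buffer();
	if (ret)
		return ret;	/* -ENODEV here aborted device setup */
	return 0;
}
```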

Cc: Claire Chang <[email protected]>
Cc: Konrad Rzeszutek Wilk <[email protected]>
Reported-by: Guenter Roeck <[email protected]>
Suggested-by: Robin Murphy <[email protected]>
Tested-by: Guenter Roeck <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Fixes: fec9b625095f ("of: Add plumbing for restricted DMA pool")
Signed-off-by: Will Deacon <[email protected]>
---
drivers/of/of_private.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/of/of_private.h b/drivers/of/of_private.h
index 376462798f7e..f557bd22b0cf 100644
--- a/drivers/of/of_private.h
+++ b/drivers/of/of_private.h
@@ -173,7 +173,8 @@ static inline int of_dma_get_range(struct device_node *np,
static inline int of_dma_set_restricted_buffer(struct device *dev,
struct device_node *np)
{
- return -ENODEV;
+ /* Do nothing, successfully. */
+ return 0;
}
#endif

--
2.32.0.402.g57bb445576-goog

2021-07-19 12:33:07

by Will Deacon

Subject: [PATCH 2/5] swiotlb: Point io_default_tlb_mem at static allocation

Since commit 69031f500865 ("swiotlb: Set dev->dma_io_tlb_mem to the
swiotlb pool used"), 'struct device' may hold a copy of the global
'io_default_tlb_mem' pointer if the device is using swiotlb for DMA. A
subsequent call to swiotlb_exit() will therefore leave dangling pointers
behind in these device structures, resulting in KASAN splats such as:

| BUG: KASAN: use-after-free in __iommu_dma_unmap_swiotlb+0x64/0xb0
| Read of size 8 at addr ffff8881d7830000 by task swapper/0/0
|
| CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.12.0-rc3-debug #1
| Hardware name: HP HP Desktop M01-F1xxx/87D6, BIOS F.12 12/17/2020
| Call Trace:
| <IRQ>
| dump_stack+0x9c/0xcf
| print_address_description.constprop.0+0x18/0x130
| kasan_report.cold+0x7f/0x111
| __iommu_dma_unmap_swiotlb+0x64/0xb0
| nvme_pci_complete_rq+0x73/0x130
| blk_complete_reqs+0x6f/0x80
| __do_softirq+0xfc/0x3be

Point 'io_default_tlb_mem' at a static structure, so that the per-device
pointers remain valid after swiotlb_exit() has been invoked. The 'slots'
array is still allocated dynamically and referenced via a pointer rather
than a flexible array member.
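The data-structure change can be sketched in isolation (a userspace stand-in: field names follow the patch, but calloc()/free() replace the kernel's memblock and page allocators):

```c
#include <assert.h>
#include <stdlib.h>

struct io_tlb_slot {
	unsigned long orig_addr;
	size_t alloc_size;
	unsigned int list;
};

/* A flexible array member ('struct io_tlb_slot slots[]') forces the
 * slots to live in the same allocation as the struct, so the struct
 * could not be static. A pointer decouples the two lifetimes. */
struct io_tlb_mem {
	unsigned long nslabs;
	struct io_tlb_slot *slots;	/* was: slots[] */
};

static struct io_tlb_mem io_tlb_default_mem;	/* never freed */

static int sketch_init(unsigned long nslabs)
{
	io_tlb_default_mem.slots = calloc(nslabs, sizeof(struct io_tlb_slot));
	if (!io_tlb_default_mem.slots)
		return -1;
	io_tlb_default_mem.nslabs = nslabs;
	return 0;
}

static void sketch_exit(void)
{
	free(io_tlb_default_mem.slots);	/* only the slots go away */
	io_tlb_default_mem.slots = NULL;
	io_tlb_default_mem.nslabs = 0;	/* struct stays addressable */
}
```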

Cc: Claire Chang <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Robin Murphy <[email protected]>
Cc: Konrad Rzeszutek Wilk <[email protected]>
Fixes: 69031f500865 ("swiotlb: Set dev->dma_io_tlb_mem to the swiotlb pool used")
Reported-by: Nathan Chancellor <[email protected]>
Tested-by: Nathan Chancellor <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
---
include/linux/swiotlb.h | 2 +-
kernel/dma/swiotlb.c | 34 +++++++++++++++++++++-------------
2 files changed, 22 insertions(+), 14 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 39284ff2a6cd..d3b617c19045 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -103,7 +103,7 @@ struct io_tlb_mem {
phys_addr_t orig_addr;
size_t alloc_size;
unsigned int list;
- } slots[];
+ } *slots;
};
extern struct io_tlb_mem *io_tlb_default_mem;

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index f1a9ae7fad8f..992d73cdc944 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -71,6 +71,7 @@
enum swiotlb_force swiotlb_force;

struct io_tlb_mem *io_tlb_default_mem;
+static struct io_tlb_mem _io_tlb_default_mem;

/*
* Max segment that we can provide which (if pages are contingous) will
@@ -201,7 +202,7 @@ static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,

int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
{
- struct io_tlb_mem *mem;
+ struct io_tlb_mem *mem = &_io_tlb_default_mem;
size_t alloc_size;

if (swiotlb_force == SWIOTLB_NO_FORCE)
@@ -211,9 +212,9 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
if (WARN_ON_ONCE(io_tlb_default_mem))
return -ENOMEM;

- alloc_size = PAGE_ALIGN(struct_size(mem, slots, nslabs));
- mem = memblock_alloc(alloc_size, PAGE_SIZE);
- if (!mem)
+ alloc_size = PAGE_ALIGN(array_size(sizeof(*mem->slots), nslabs));
+ mem->slots = memblock_alloc(alloc_size, PAGE_SIZE);
+ if (!mem->slots)
panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
__func__, alloc_size, PAGE_SIZE);

@@ -304,7 +305,7 @@ swiotlb_late_init_with_default_size(size_t default_size)
int
swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
{
- struct io_tlb_mem *mem;
+ struct io_tlb_mem *mem = &_io_tlb_default_mem;
unsigned long bytes = nslabs << IO_TLB_SHIFT;

if (swiotlb_force == SWIOTLB_NO_FORCE)
@@ -314,12 +315,11 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
if (WARN_ON_ONCE(io_tlb_default_mem))
return -ENOMEM;

- mem = (void *)__get_free_pages(GFP_KERNEL,
- get_order(struct_size(mem, slots, nslabs)));
- if (!mem)
+ mem->slots = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
+ get_order(array_size(sizeof(*mem->slots), nslabs)));
+ if (!mem->slots)
return -ENOMEM;

- memset(mem, 0, sizeof(*mem));
set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
swiotlb_init_io_tlb_mem(mem, virt_to_phys(tlb), nslabs, true);

@@ -337,12 +337,13 @@ void __init swiotlb_exit(void)
if (!mem)
return;

- size = struct_size(mem, slots, mem->nslabs);
+ size = array_size(sizeof(*mem->slots), mem->nslabs);
if (mem->late_alloc)
- free_pages((unsigned long)mem, get_order(size));
+ free_pages((unsigned long)mem->slots, get_order(size));
else
- memblock_free_late(__pa(mem), PAGE_ALIGN(size));
+ memblock_free_late(__pa(mem->slots), PAGE_ALIGN(size));
io_tlb_default_mem = NULL;
+ memset(mem, 0, sizeof(*mem));
}

/*
@@ -783,10 +784,17 @@ static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
* to it.
*/
if (!mem) {
- mem = kzalloc(struct_size(mem, slots, nslabs), GFP_KERNEL);
+ mem = kzalloc(sizeof(*mem), GFP_KERNEL);
if (!mem)
return -ENOMEM;

+ mem->slots = kzalloc(array_size(sizeof(*mem->slots), nslabs),
+ GFP_KERNEL);
+ if (!mem->slots) {
+ kfree(mem);
+ return -ENOMEM;
+ }
+
set_memory_decrypted((unsigned long)phys_to_virt(rmem->base),
rmem->size >> PAGE_SHIFT);
swiotlb_init_io_tlb_mem(mem, rmem->base, nslabs, false);
--
2.32.0.402.g57bb445576-goog

2021-07-19 12:33:13

by Will Deacon

Subject: [PATCH 5/5] swiotlb: Free tbl memory in swiotlb_exit()

Although swiotlb_exit() frees the 'slots' metadata array referenced by
'io_tlb_default_mem', it leaves the underlying buffer pages allocated
despite no longer being usable.

Extend swiotlb_exit() to free the buffer pages as well as the slots
array.
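The extended teardown can be sketched in userspace (a stand-in only: plain free() substitutes for both free_pages() and memblock_free_late(), and the set_memory_encrypted() step is reduced to a comment):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Sketch: swiotlb_exit() now frees both the bounce buffer pages
 * ("tbl") and the slots metadata, not just the slots. */
struct io_tlb_mem {
	unsigned long nslabs;
	int late_alloc;
	void *tbl;	/* stand-in for the start..end buffer pages */
	void *slots;
};

static int frees;	/* counts buffer + slots releases */

static void release(void *p)
{
	free(p);
	frees++;
}

static void sketch_swiotlb_exit(struct io_tlb_mem *mem)
{
	if (!mem->nslabs)
		return;
	/* the kernel also re-encrypts the buffer here with
	 * set_memory_encrypted() before handing it back */
	release(mem->tbl);	/* previously leaked */
	release(mem->slots);	/* already freed before this patch */
	memset(mem, 0, sizeof(*mem));
}
```

Zeroing the structure last makes a second call a harmless no-op, matching the nslabs guard at the top.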

Cc: Claire Chang <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Robin Murphy <[email protected]>
Cc: Konrad Rzeszutek Wilk <[email protected]>
Tested-by: Nathan Chancellor <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
---
kernel/dma/swiotlb.c | 21 +++++++++++++++------
1 file changed, 15 insertions(+), 6 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index b3c793ed9e64..87c40517e822 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -328,18 +328,27 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)

void __init swiotlb_exit(void)
{
- size_t size;
struct io_tlb_mem *mem = &io_tlb_default_mem;
+ unsigned long tbl_vaddr;
+ size_t tbl_size, slots_size;

if (!mem->nslabs)
return;

pr_info("tearing down default memory pool\n");
- size = array_size(sizeof(*mem->slots), mem->nslabs);
- if (mem->late_alloc)
- free_pages((unsigned long)mem->slots, get_order(size));
- else
- memblock_free_late(__pa(mem->slots), PAGE_ALIGN(size));
+ tbl_vaddr = (unsigned long)phys_to_virt(mem->start);
+ tbl_size = PAGE_ALIGN(mem->end - mem->start);
+ slots_size = PAGE_ALIGN(array_size(sizeof(*mem->slots), mem->nslabs));
+
+ set_memory_encrypted(tbl_vaddr, tbl_size >> PAGE_SHIFT);
+ if (mem->late_alloc) {
+ free_pages(tbl_vaddr, get_order(tbl_size));
+ free_pages((unsigned long)mem->slots, get_order(slots_size));
+ } else {
+ memblock_free_late(mem->start, tbl_size);
+ memblock_free_late(__pa(mem->slots), slots_size);
+ }
+
memset(mem, 0, sizeof(*mem));
}

--
2.32.0.402.g57bb445576-goog

2021-07-19 12:33:57

by Will Deacon

Subject: [PATCH 4/5] swiotlb: Emit diagnostic in swiotlb_exit()

A recent debugging session would have been made a little bit easier if
we had noticed sooner that swiotlb_exit() was being called during boot.

Add a simple diagnostic message to swiotlb_exit() to complement the one
from swiotlb_print_info() during initialisation.

Cc: Claire Chang <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Robin Murphy <[email protected]>
Link: https://lore.kernel.org/r/20210705190352.GA19461@willie-the-truck
Suggested-by: Konrad Rzeszutek Wilk <[email protected]>
Tested-by: Nathan Chancellor <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
---
kernel/dma/swiotlb.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 7948f274f9bb..b3c793ed9e64 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -334,6 +334,7 @@ void __init swiotlb_exit(void)
if (!mem->nslabs)
return;

+ pr_info("tearing down default memory pool\n");
size = array_size(sizeof(*mem->slots), mem->nslabs);
if (mem->late_alloc)
free_pages((unsigned long)mem->slots, get_order(size));
--
2.32.0.402.g57bb445576-goog

2021-07-20 03:39:02

by Claire Chang

Subject: Re: [PATCH 1/5] of: Return success from of_dma_set_restricted_buffer() when !OF_ADDRESS

On Mon, Jul 19, 2021 at 8:31 PM Will Deacon <[email protected]> wrote:
>
> When CONFIG_OF_ADDRESS=n, of_dma_set_restricted_buffer() returns -ENODEV
> and breaks the boot for sparc[64] machines. Return 0 instead, since the
> function is essentially a glorified NOP in this configuration.
>
> Cc: Claire Chang <[email protected]>
> Cc: Konrad Rzeszutek Wilk <[email protected]>
> Reported-by: Guenter Roeck <[email protected]>
> Suggested-by: Robin Murphy <[email protected]>
> Tested-by: Guenter Roeck <[email protected]>

Tested-by: Claire Chang <[email protected]>

> Link: https://lore.kernel.org/r/[email protected]
> Fixes: fec9b625095f ("of: Add plumbing for restricted DMA pool")
> Signed-off-by: Will Deacon <[email protected]>
> ---
> [...]

2021-07-20 03:39:03

by Claire Chang

Subject: Re: [PATCH 2/5] swiotlb: Point io_default_tlb_mem at static allocation

On Mon, Jul 19, 2021 at 8:31 PM Will Deacon <[email protected]> wrote:
>
> Since commit 69031f500865 ("swiotlb: Set dev->dma_io_tlb_mem to the
> swiotlb pool used"), 'struct device' may hold a copy of the global
> 'io_default_tlb_mem' pointer if the device is using swiotlb for DMA. A
> subsequent call to swiotlb_exit() will therefore leave dangling pointers
> behind in these device structures, resulting in KASAN splats such as:
>
> | BUG: KASAN: use-after-free in __iommu_dma_unmap_swiotlb+0x64/0xb0
> | Read of size 8 at addr ffff8881d7830000 by task swapper/0/0
> |
> | CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.12.0-rc3-debug #1
> | Hardware name: HP HP Desktop M01-F1xxx/87D6, BIOS F.12 12/17/2020
> | Call Trace:
> | <IRQ>
> | dump_stack+0x9c/0xcf
> | print_address_description.constprop.0+0x18/0x130
> | kasan_report.cold+0x7f/0x111
> | __iommu_dma_unmap_swiotlb+0x64/0xb0
> | nvme_pci_complete_rq+0x73/0x130
> | blk_complete_reqs+0x6f/0x80
> | __do_softirq+0xfc/0x3be
>
> Point 'io_default_tlb_mem' at a static structure, so that the per-device
> pointers remain valid after swiotlb_exit() has been invoked. The 'slots'
> array is still allocated dynamically and referenced via a pointer rather
> than a flexible array member.
>
> Cc: Claire Chang <[email protected]>
> Cc: Christoph Hellwig <[email protected]>
> Cc: Robin Murphy <[email protected]>
> Cc: Konrad Rzeszutek Wilk <[email protected]>
> Fixes: 69031f500865 ("swiotlb: Set dev->dma_io_tlb_mem to the swiotlb pool used")
> Reported-by: Nathan Chancellor <[email protected]>
> Tested-by: Nathan Chancellor <[email protected]>

Tested-by: Claire Chang <[email protected]>

> Signed-off-by: Will Deacon <[email protected]>
> ---
> [...]

2021-07-20 03:40:04

by Claire Chang

Subject: Re: [PATCH 5/5] swiotlb: Free tbl memory in swiotlb_exit()

On Mon, Jul 19, 2021 at 8:31 PM Will Deacon <[email protected]> wrote:
>
> Although swiotlb_exit() frees the 'slots' metadata array referenced by
> 'io_tlb_default_mem', it leaves the underlying buffer pages allocated
> despite no longer being usable.
>
> Extend swiotlb_exit() to free the buffer pages as well as the slots
> array.
>
> Cc: Claire Chang <[email protected]>
> Cc: Christoph Hellwig <[email protected]>
> Cc: Robin Murphy <[email protected]>
> Cc: Konrad Rzeszutek Wilk <[email protected]>
> Tested-by: Nathan Chancellor <[email protected]>

Tested-by: Claire Chang <[email protected]>

> Signed-off-by: Will Deacon <[email protected]>
> ---
> [...]

2021-07-20 03:40:10

by Claire Chang

Subject: Re: [PATCH 3/5] swiotlb: Remove io_tlb_default_mem indirection

On Mon, Jul 19, 2021 at 8:31 PM Will Deacon <[email protected]> wrote:
>
> The indirection from the global 'io_tlb_default_mem' pointer to the
> static '_io_tlb_default_mem' structure is ugly and unnecessary.
>
> Convert all users to reference the static structure directly, using the
> 'nslabs' field to determine whether swiotlb has been initialised.
>
> Cc: Claire Chang <[email protected]>
> Cc: Christoph Hellwig <[email protected]>
> Cc: Robin Murphy <[email protected]>
> Cc: Konrad Rzeszutek Wilk <[email protected]>
> Tested-by: Nathan Chancellor <[email protected]>

Tested-by: Claire Chang <[email protected]>

> Signed-off-by: Will Deacon <[email protected]>
> ---
> [...]
> @@ -342,7 +339,6 @@ void __init swiotlb_exit(void)
> free_pages((unsigned long)mem->slots, get_order(size));
> else
> memblock_free_late(__pa(mem->slots), PAGE_ALIGN(size));
> - io_tlb_default_mem = NULL;
> memset(mem, 0, sizeof(*mem));
> }
>
> @@ -697,7 +693,9 @@ size_t swiotlb_max_mapping_size(struct device *dev)
>
> bool is_swiotlb_active(struct device *dev)
> {
> - return dev->dma_io_tlb_mem != NULL;
> + struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
> +
> + return mem && mem->nslabs;
> }
> EXPORT_SYMBOL_GPL(is_swiotlb_active);
>
> @@ -712,10 +710,10 @@ static void swiotlb_create_debugfs_files(struct io_tlb_mem *mem)
>
> static int __init swiotlb_create_default_debugfs(void)
> {
> - struct io_tlb_mem *mem = io_tlb_default_mem;
> + struct io_tlb_mem *mem = &io_tlb_default_mem;
>
> debugfs_dir = debugfs_create_dir("swiotlb", NULL);
> - if (mem) {
> + if (mem->nslabs) {
> mem->debugfs = debugfs_dir;
> swiotlb_create_debugfs_files(mem);
> }
> @@ -814,7 +812,7 @@ static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
> static void rmem_swiotlb_device_release(struct reserved_mem *rmem,
> struct device *dev)
> {
> - dev->dma_io_tlb_mem = io_tlb_default_mem;
> + dev->dma_io_tlb_mem = &io_tlb_default_mem;
> }
>
> static const struct reserved_mem_ops rmem_swiotlb_ops = {
> --
> 2.32.0.402.g57bb445576-goog
>

2021-07-20 03:41:14

by Claire Chang

Subject: Re: [PATCH 4/5] swiotlb: Emit diagnostic in swiotlb_exit()

On Mon, Jul 19, 2021 at 8:31 PM Will Deacon <[email protected]> wrote:
>
> A recent debugging session would have been made a little bit easier if
> we had noticed sooner that swiotlb_exit() was being called during boot.
>
> Add a simple diagnostic message to swiotlb_exit() to complement the one
> from swiotlb_print_info() during initialisation.
>
> Cc: Claire Chang <[email protected]>
> Cc: Christoph Hellwig <[email protected]>
> Cc: Robin Murphy <[email protected]>
> Link: https://lore.kernel.org/r/20210705190352.GA19461@willie-the-truck
> Suggested-by: Konrad Rzeszutek Wilk <[email protected]>
> Tested-by: Nathan Chancellor <[email protected]>

Tested-by: Claire Chang <[email protected]>

> Signed-off-by: Will Deacon <[email protected]>
> ---
> kernel/dma/swiotlb.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 7948f274f9bb..b3c793ed9e64 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -334,6 +334,7 @@ void __init swiotlb_exit(void)
> if (!mem->nslabs)
> return;
>
> + pr_info("tearing down default memory pool\n");
> size = array_size(sizeof(*mem->slots), mem->nslabs);
> if (mem->late_alloc)
> free_pages((unsigned long)mem->slots, get_order(size));
> --
> 2.32.0.402.g57bb445576-goog
>

2021-07-20 07:55:44

by Christoph Hellwig

Subject: Re: [PATCH 0/5] Fix restricted DMA vs swiotlb_exit()

Looks fine except for the patch split nitpick mentioned:

Reviewed-by: Christoph Hellwig <[email protected]>

2021-07-20 07:57:09

by Christoph Hellwig

Subject: Re: [PATCH 2/5] swiotlb: Point io_default_tlb_mem at static allocation

I'd prefer if the next patch is merged into this one, to avoid the
confusing state in between entirely.

2021-07-20 08:39:42

by Will Deacon

Subject: Re: [PATCH 5/5] swiotlb: Free tbl memory in swiotlb_exit()

On Tue, Jul 20, 2021 at 11:36:19AM +0800, Claire Chang wrote:
> On Mon, Jul 19, 2021 at 8:31 PM Will Deacon <[email protected]> wrote:
> >
> > Although swiotlb_exit() frees the 'slots' metadata array referenced by
> > 'io_tlb_default_mem', it leaves the underlying buffer pages allocated
> > despite no longer being usable.
> >
> > Extend swiotlb_exit() to free the buffer pages as well as the slots
> > array.
> >
> > Cc: Claire Chang <[email protected]>
> > Cc: Christoph Hellwig <[email protected]>
> > Cc: Robin Murphy <[email protected]>
> > Cc: Konrad Rzeszutek Wilk <[email protected]>
> > Tested-by: Nathan Chancellor <[email protected]>
>
> Tested-by: Claire Chang <[email protected]>

Thanks, Claire!

Will

2021-07-20 08:58:45

by Will Deacon

Subject: Re: [PATCH 2/5] swiotlb: Point io_default_tlb_mem at static allocation

On Tue, Jul 20, 2021 at 09:51:12AM +0200, Christoph Hellwig wrote:
> I'd prefer if the next patch is merged into this one, to avoid the
> confusing state in between entirely.

Of course. For some reason, I half thought this patch would need to go to
stable, but the restricted DMA stuff didn't land so there's no backporting
to worry about.

I'll merge 'em for v2.

Thanks for having a look!

Will