2021-08-30 05:02:04

by David Stevens

Subject: [PATCH v7 0/7] Fixes for dma-iommu swiotlb bounce buffers

This patch set includes various fixes for dma-iommu's swiotlb bounce
buffers for untrusted devices.

The min_align_mask issue was found when running fio on an untrusted nvme
device with bs=512. The other issues were found via code inspection, so
I don't have any specific use cases where things were not working, nor
any concrete performance numbers.

There are two issues related to min_align_mask that this patch series
does not attempt to fix. First, it does not address the case where
min_align_mask is larger than the IOVA granule. Doing so requires
changes to IOVA allocation, and is not specific to when swiotlb bounce
buffers are used. This is not a problem in practice today, since the
only driver which uses min_align_mask is nvme, which sets it to 4096.
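
To make the constraint concrete (the 64KB mask below is hypothetical;
no current driver uses one): dma-iommu preserves a buffer's offset
within a granule, so with a 4KB granule the low 12 bits of the DMA
address always match the physical address:

	dma_addr = iova + iova_offset(iovad, phys); /* bits 0-11 match phys */

A min_align_mask of 0xffff would additionally require bits 12-15 to
match, which granule-aligned IOVA allocation cannot guarantee.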

The second issue this series does not address is the fact that extra
swiotlb slots adjacent to a bounce buffer can be exposed to untrusted
devices whose drivers use min_align_mask. Fixing this requires being
able to allocate padding slots at the beginning of a swiotlb allocation.
This is a rather significant change that I am not comfortable making.
Without being able to handle this, there is also little point in
clearing the padding at the start of such a buffer, since we can only
clear based on (IO_TLB_SIZE - 1) instead of iova_mask.
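
As a rough illustration of that exposure (layout hypothetical), with a
4KB granule and 2KB swiotlb slots:

	/*
	 * |<------ granule mapped to the device ------>|
	 * [ slot owned by another mapping ][ pad | buf ]
	 *   ^ exposed                        ^ offset kept to honor
	 *                                      min_align_mask
	 */

Because the bounce buffer's offset within the granule must match the
original buffer's, the slots in front of it inside the same granule may
belong to other allocations.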

v6 -> v7:
- Remove unsafe attempt to clear padding at start of swiotlb buffer
- Rewrite commit message for min_align_mask commit to better explain
  the problem it's fixing
- Rebase on iommu/core
- Acknowledge unsolved issues in cover letter

v5 -> v6:
- Remove unnecessary line break
- Remove redundant config check

v4 -> v5:
- Fix xen build error
- Move _swiotlb refactor into its own patch

v3 -> v4:
- Fold _swiotlb functions into _page functions
- Add patch to align swiotlb buffer to iovad granule
- Combine if checks in iommu_dma_sync_sg_* functions

v2 -> v3:
- Add new patch to address min_align_mask bug
- Set SKIP_CPU_SYNC flag after syncing in map/unmap
- Properly call arch_sync_dma_for_cpu in iommu_dma_sync_sg_for_cpu

v1 -> v2:
- Split fixes into dedicated patches
- Less invasive changes to fix arch_sync when mapping
- Leave dev_is_untrusted check for strict iommu

David Stevens (7):
dma-iommu: fix sync_sg with swiotlb
dma-iommu: fix arch_sync_dma for map
dma-iommu: skip extra sync during unmap w/swiotlb
dma-iommu: fold _swiotlb helpers into callers
dma-iommu: Check CONFIG_SWIOTLB more broadly
swiotlb: support aligned swiotlb buffers
dma-iommu: account for min_align_mask w/swiotlb

drivers/iommu/dma-iommu.c | 188 +++++++++++++++++---------------------
drivers/xen/swiotlb-xen.c |   2 +-
include/linux/swiotlb.h   |   3 +-
kernel/dma/swiotlb.c      |  11 ++-
4 files changed, 93 insertions(+), 111 deletions(-)

--
2.33.0.259.gc128427fd7-goog


2021-08-30 05:02:15

by David Stevens

Subject: [PATCH v7 1/7] dma-iommu: fix sync_sg with swiotlb

From: David Stevens <[email protected]>

The is_swiotlb_buffer function takes the physical address of the swiotlb
buffer, not the physical address of the original buffer. The sglist
contains the physical addresses of the original buffer, so for the
sync_sg functions to work properly when a bounce buffer might have been
used, we need to use iommu_iova_to_phys to look up the physical address.
This is what sync_single does, so call that function on each sglist
segment.

The previous code mostly worked because swiotlb does the transfer on map
and unmap. However, any callers which use DMA_ATTR_SKIP_CPU_SYNC with
sglists or which call sync_sg would not have had anything copied to the
bounce buffer.
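
For reference, iommu_dma_sync_single_for_cpu already performs this
lookup; a simplified sketch of the existing function:

	phys_addr_t phys = iommu_iova_to_phys(iommu_get_dma_domain(dev),
					      dma_handle);

	if (!dev_is_dma_coherent(dev))
		arch_sync_dma_for_cpu(phys, size, dir);
	if (is_swiotlb_buffer(phys))
		swiotlb_sync_single_for_cpu(dev, phys, size, dir);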

Fixes: 82612d66d51d ("iommu: Allow the dma-iommu api to use bounce buffers")
Signed-off-by: David Stevens <[email protected]>
Reviewed-by: Robin Murphy <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
---
drivers/iommu/dma-iommu.c | 33 +++++++++++++--------------------
1 file changed, 13 insertions(+), 20 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index bac7370ead3e..d6ae87212768 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -827,17 +827,13 @@ static void iommu_dma_sync_sg_for_cpu(struct device *dev,
struct scatterlist *sg;
int i;

- if (dev_is_dma_coherent(dev) && !dev_is_untrusted(dev))
- return;
-
- for_each_sg(sgl, sg, nelems, i) {
- if (!dev_is_dma_coherent(dev))
+ if (dev_is_untrusted(dev))
+ for_each_sg(sgl, sg, nelems, i)
+ iommu_dma_sync_single_for_cpu(dev, sg_dma_address(sg),
+ sg->length, dir);
+ else if (!dev_is_dma_coherent(dev))
+ for_each_sg(sgl, sg, nelems, i)
arch_sync_dma_for_cpu(sg_phys(sg), sg->length, dir);
-
- if (is_swiotlb_buffer(sg_phys(sg)))
- swiotlb_sync_single_for_cpu(dev, sg_phys(sg),
- sg->length, dir);
- }
}

static void iommu_dma_sync_sg_for_device(struct device *dev,
@@ -847,17 +843,14 @@ static void iommu_dma_sync_sg_for_device(struct device *dev,
struct scatterlist *sg;
int i;

- if (dev_is_dma_coherent(dev) && !dev_is_untrusted(dev))
- return;
-
- for_each_sg(sgl, sg, nelems, i) {
- if (is_swiotlb_buffer(sg_phys(sg)))
- swiotlb_sync_single_for_device(dev, sg_phys(sg),
- sg->length, dir);
-
- if (!dev_is_dma_coherent(dev))
+ if (dev_is_untrusted(dev))
+ for_each_sg(sgl, sg, nelems, i)
+ iommu_dma_sync_single_for_device(dev,
+ sg_dma_address(sg),
+ sg->length, dir);
+ else if (!dev_is_dma_coherent(dev))
+ for_each_sg(sgl, sg, nelems, i)
arch_sync_dma_for_device(sg_phys(sg), sg->length, dir);
- }
}

static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
--
2.33.0.259.gc128427fd7-goog

2021-08-30 05:02:16

by David Stevens

Subject: [PATCH v7 2/7] dma-iommu: fix arch_sync_dma for map

From: David Stevens <[email protected]>

When calling arch_sync_dma, we need to pass it the memory that's
actually being used for dma. When using swiotlb bounce buffers, this is
the bounce buffer. Move arch_sync_dma into the __iommu_dma_map_swiotlb
helper, so it can use the bounce buffer address if necessary.

Now that iommu_dma_map_sg delegates to a function which takes care of
architectural syncing in the untrusted device case, the call to
iommu_dma_sync_sg_for_device can be moved so it only occurs for trusted
devices. Doing the sync for untrusted devices before mapping never
really worked, since it needs to be able to target swiotlb buffers.

This also moves the architectural sync to before the call to
__iommu_dma_map, to guarantee that untrusted devices can't see stale
data they shouldn't see.
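
Condensed, the order of operations in __iommu_dma_map_swiotlb after
this patch is (a sketch of the diff below, error handling omitted):

	phys = swiotlb_tbl_map_single(...);	/* bounce, if needed */
	memset(padding_start, 0, padding_size);	/* zero the padding */
	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
		arch_sync_dma_for_device(phys, org_size, dir);
	iova = __iommu_dma_map(dev, phys, aligned_size, prot, dma_mask);

The mapping visible to the device is only created after the memory
behind it has been synced.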

Fixes: 82612d66d51d ("iommu: Allow the dma-iommu api to use bounce buffers")
Signed-off-by: David Stevens <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: Robin Murphy <[email protected]>
---
drivers/iommu/dma-iommu.c | 16 +++++++---------
1 file changed, 7 insertions(+), 9 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index d6ae87212768..12197fdc3b1c 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -593,6 +593,9 @@ static dma_addr_t __iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
memset(padding_start, 0, padding_size);
}

+ if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
+ arch_sync_dma_for_device(phys, org_size, dir);
+
iova = __iommu_dma_map(dev, phys, aligned_size, prot, dma_mask);
if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(phys))
swiotlb_tbl_unmap_single(dev, phys, org_size, dir, attrs);
@@ -859,14 +862,9 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
{
phys_addr_t phys = page_to_phys(page) + offset;
bool coherent = dev_is_dma_coherent(dev);
- dma_addr_t dma_handle;

- dma_handle = __iommu_dma_map_swiotlb(dev, phys, size, dma_get_mask(dev),
+ return __iommu_dma_map_swiotlb(dev, phys, size, dma_get_mask(dev),
coherent, dir, attrs);
- if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
- dma_handle != DMA_MAPPING_ERROR)
- arch_sync_dma_for_device(phys, size, dir);
- return dma_handle;
}

static void iommu_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
@@ -1009,12 +1007,12 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
iommu_deferred_attach(dev, domain))
return 0;

- if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
- iommu_dma_sync_sg_for_device(dev, sg, nents, dir);
-
if (dev_is_untrusted(dev))
return iommu_dma_map_sg_swiotlb(dev, sg, nents, dir, attrs);

+ if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
+ iommu_dma_sync_sg_for_device(dev, sg, nents, dir);
+
/*
* Work out how much IOVA space we need, and align the segments to
* IOVA granules for the IOMMU driver to handle. With some clever
--
2.33.0.259.gc128427fd7-goog

2021-08-30 05:02:25

by David Stevens

Subject: [PATCH v7 3/7] dma-iommu: skip extra sync during unmap w/swiotlb

From: David Stevens <[email protected]>

Calling the iommu_dma_sync_*_for_cpu functions during unmap can cause
two copies out of the swiotlb buffer. Do the arch sync directly in
__iommu_dma_unmap_swiotlb instead to avoid this. This makes the call to
iommu_dma_sync_sg_for_cpu for untrusted devices in iommu_dma_unmap_sg no
longer necessary, so move that invocation later in the function.
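
Condensed, the unmap path after this patch copies out of the bounce
buffer exactly once (a sketch of the resulting
__iommu_dma_unmap_swiotlb):

	phys = iommu_iova_to_phys(domain, dma_addr);
	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) && !dev_is_dma_coherent(dev))
		arch_sync_dma_for_cpu(phys, size, dir);
	__iommu_dma_unmap(dev, dma_addr, size);
	if (unlikely(is_swiotlb_buffer(phys)))
		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);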

Signed-off-by: David Stevens <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: Robin Murphy <[email protected]>
---
drivers/iommu/dma-iommu.c | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 12197fdc3b1c..abc528ed653c 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -521,6 +521,9 @@ static void __iommu_dma_unmap_swiotlb(struct device *dev, dma_addr_t dma_addr,
if (WARN_ON(!phys))
return;

+ if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) && !dev_is_dma_coherent(dev))
+ arch_sync_dma_for_cpu(phys, size, dir);
+
__iommu_dma_unmap(dev, dma_addr, size);

if (unlikely(is_swiotlb_buffer(phys)))
@@ -870,8 +873,6 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
static void iommu_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
size_t size, enum dma_data_direction dir, unsigned long attrs)
{
- if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
- iommu_dma_sync_single_for_cpu(dev, dma_handle, size, dir);
__iommu_dma_unmap_swiotlb(dev, dma_handle, size, dir, attrs);
}

@@ -1079,14 +1080,14 @@ static void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
struct scatterlist *tmp;
int i;

- if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
- iommu_dma_sync_sg_for_cpu(dev, sg, nents, dir);
-
if (dev_is_untrusted(dev)) {
iommu_dma_unmap_sg_swiotlb(dev, sg, nents, dir, attrs);
return;
}

+ if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
+ iommu_dma_sync_sg_for_cpu(dev, sg, nents, dir);
+
/*
* The scatterlist segments are mapped into a single
* contiguous IOVA allocation, so this is incredibly easy.
--
2.33.0.259.gc128427fd7-goog

2021-08-30 05:02:29

by David Stevens

Subject: [PATCH v7 5/7] dma-iommu: Check CONFIG_SWIOTLB more broadly

From: David Stevens <[email protected]>

Introduce a new dev_use_swiotlb function to guard swiotlb code, instead
of overloading dev_is_untrusted. This allows CONFIG_SWIOTLB to be
checked more broadly, so the swiotlb related code can be removed more
aggressively.

Signed-off-by: David Stevens <[email protected]>
Reviewed-by: Robin Murphy <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
---
drivers/iommu/dma-iommu.c | 20 ++++++++++++--------
1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 95bfa57be488..714bec7a53c2 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -317,6 +317,11 @@ static bool dev_is_untrusted(struct device *dev)
return dev_is_pci(dev) && to_pci_dev(dev)->untrusted;
}

+static bool dev_use_swiotlb(struct device *dev)
+{
+ return IS_ENABLED(CONFIG_SWIOTLB) && dev_is_untrusted(dev);
+}
+
/* sysfs updates are serialised by the mutex of the group owning @domain */
int iommu_dma_init_fq(struct iommu_domain *domain)
{
@@ -730,7 +735,7 @@ static void iommu_dma_sync_single_for_cpu(struct device *dev,
{
phys_addr_t phys;

- if (dev_is_dma_coherent(dev) && !dev_is_untrusted(dev))
+ if (dev_is_dma_coherent(dev) && !dev_use_swiotlb(dev))
return;

phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
@@ -746,7 +751,7 @@ static void iommu_dma_sync_single_for_device(struct device *dev,
{
phys_addr_t phys;

- if (dev_is_dma_coherent(dev) && !dev_is_untrusted(dev))
+ if (dev_is_dma_coherent(dev) && !dev_use_swiotlb(dev))
return;

phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
@@ -764,7 +769,7 @@ static void iommu_dma_sync_sg_for_cpu(struct device *dev,
struct scatterlist *sg;
int i;

- if (dev_is_untrusted(dev))
+ if (dev_use_swiotlb(dev))
for_each_sg(sgl, sg, nelems, i)
iommu_dma_sync_single_for_cpu(dev, sg_dma_address(sg),
sg->length, dir);
@@ -780,7 +785,7 @@ static void iommu_dma_sync_sg_for_device(struct device *dev,
struct scatterlist *sg;
int i;

- if (dev_is_untrusted(dev))
+ if (dev_use_swiotlb(dev))
for_each_sg(sgl, sg, nelems, i)
iommu_dma_sync_single_for_device(dev,
sg_dma_address(sg),
@@ -807,8 +812,7 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
* If both the physical buffer start address and size are
* page aligned, we don't need to use a bounce page.
*/
- if (IS_ENABLED(CONFIG_SWIOTLB) && dev_is_untrusted(dev) &&
- iova_offset(iovad, phys | size)) {
+ if (dev_use_swiotlb(dev) && iova_offset(iovad, phys | size)) {
void *padding_start;
size_t padding_size;

@@ -991,7 +995,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
iommu_deferred_attach(dev, domain))
return 0;

- if (dev_is_untrusted(dev))
+ if (dev_use_swiotlb(dev))
return iommu_dma_map_sg_swiotlb(dev, sg, nents, dir, attrs);

if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
@@ -1063,7 +1067,7 @@ static void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
struct scatterlist *tmp;
int i;

- if (dev_is_untrusted(dev)) {
+ if (dev_use_swiotlb(dev)) {
iommu_dma_unmap_sg_swiotlb(dev, sg, nents, dir, attrs);
return;
}
--
2.33.0.259.gc128427fd7-goog

2021-08-30 05:03:11

by David Stevens

Subject: [PATCH v7 6/7] swiotlb: support aligned swiotlb buffers

From: David Stevens <[email protected]>

Add an argument to swiotlb_tbl_map_single that specifies the desired
alignment of the allocated buffer. This is used by dma-iommu to ensure
the buffer is aligned to the iova granule size when using swiotlb with
untrusted sub-granule mappings. This addresses an issue where adjacent
slots could be exposed to the untrusted device if IO_TLB_SIZE < iova
granule < PAGE_SIZE.
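
For example (configuration hypothetical): with IO_TLB_SIZE = 2KB
(IO_TLB_SHIFT = 11) and a 4KB IOVA granule, dma-iommu passes
alloc_align_mask = iova_mask(iovad) = 0xfff, and find_slots computes

	stride = max(stride, (alloc_align_mask >> IO_TLB_SHIFT) + 1);
	/* 0xfff >> 11 == 1, so stride == 2: only granule-aligned
	 * slot indices are candidates */

Without this, a two-slot bounce buffer could straddle a granule
boundary, so the granule mapped to the device would also cover a
neighbouring slot.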

Signed-off-by: David Stevens <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
---
drivers/iommu/dma-iommu.c |  4 ++--
drivers/xen/swiotlb-xen.c |  2 +-
include/linux/swiotlb.h   |  3 ++-
kernel/dma/swiotlb.c      | 11 +++++++----
4 files changed, 12 insertions(+), 8 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 714bec7a53c2..9b8c17c3d29b 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -817,8 +817,8 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
size_t padding_size;

aligned_size = iova_align(iovad, size);
- phys = swiotlb_tbl_map_single(dev, phys, size,
- aligned_size, dir, attrs);
+ phys = swiotlb_tbl_map_single(dev, phys, size, aligned_size,
+ iova_mask(iovad), dir, attrs);

if (phys == DMA_MAPPING_ERROR)
return DMA_MAPPING_ERROR;
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 24d11861ac7d..8b03d2c93428 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -382,7 +382,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
*/
trace_swiotlb_bounced(dev, dev_addr, size, swiotlb_force);

- map = swiotlb_tbl_map_single(dev, phys, size, size, dir, attrs);
+ map = swiotlb_tbl_map_single(dev, phys, size, size, 0, dir, attrs);
if (map == (phys_addr_t)DMA_MAPPING_ERROR)
return DMA_MAPPING_ERROR;

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 216854a5e513..93d82e43eb3a 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -44,7 +44,8 @@ extern void __init swiotlb_update_mem_attributes(void);

phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t phys,
size_t mapping_size, size_t alloc_size,
- enum dma_data_direction dir, unsigned long attrs);
+ unsigned int alloc_aligned_mask, enum dma_data_direction dir,
+ unsigned long attrs);

extern void swiotlb_tbl_unmap_single(struct device *hwdev,
phys_addr_t tlb_addr,
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index e50df8d8f87e..d4c45d8cd1fa 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -427,7 +427,7 @@ static unsigned int wrap_index(struct io_tlb_mem *mem, unsigned int index)
* allocate a buffer from that IO TLB pool.
*/
static int find_slots(struct device *dev, phys_addr_t orig_addr,
- size_t alloc_size)
+ size_t alloc_size, unsigned int alloc_align_mask)
{
struct io_tlb_mem *mem = io_tlb_default_mem;
unsigned long boundary_mask = dma_get_seg_boundary(dev);
@@ -450,6 +450,7 @@ static int find_slots(struct device *dev, phys_addr_t orig_addr,
stride = (iotlb_align_mask >> IO_TLB_SHIFT) + 1;
if (alloc_size >= PAGE_SIZE)
stride = max(stride, stride << (PAGE_SHIFT - IO_TLB_SHIFT));
+ stride = max(stride, (alloc_align_mask >> IO_TLB_SHIFT) + 1);

spin_lock_irqsave(&mem->lock, flags);
if (unlikely(nslots > mem->nslabs - mem->used))
@@ -504,7 +505,8 @@ static int find_slots(struct device *dev, phys_addr_t orig_addr,

phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
size_t mapping_size, size_t alloc_size,
- enum dma_data_direction dir, unsigned long attrs)
+ unsigned int alloc_align_mask, enum dma_data_direction dir,
+ unsigned long attrs)
{
struct io_tlb_mem *mem = io_tlb_default_mem;
unsigned int offset = swiotlb_align_offset(dev, orig_addr);
@@ -524,7 +526,8 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
return (phys_addr_t)DMA_MAPPING_ERROR;
}

- index = find_slots(dev, orig_addr, alloc_size + offset);
+ index = find_slots(dev, orig_addr,
+ alloc_size + offset, alloc_align_mask);
if (index == -1) {
if (!(attrs & DMA_ATTR_NO_WARN))
dev_warn_ratelimited(dev,
@@ -636,7 +639,7 @@ dma_addr_t swiotlb_map(struct device *dev, phys_addr_t paddr, size_t size,
trace_swiotlb_bounced(dev, phys_to_dma(dev, paddr), size,
swiotlb_force);

- swiotlb_addr = swiotlb_tbl_map_single(dev, paddr, size, size, dir,
+ swiotlb_addr = swiotlb_tbl_map_single(dev, paddr, size, size, 0, dir,
attrs);
if (swiotlb_addr == (phys_addr_t)DMA_MAPPING_ERROR)
return DMA_MAPPING_ERROR;
--
2.33.0.259.gc128427fd7-goog

2021-08-30 05:03:43

by David Stevens

Subject: [PATCH v7 7/7] dma-iommu: account for min_align_mask w/swiotlb

From: David Stevens <[email protected]>

Pass the non-aligned size to __iommu_dma_map when using swiotlb bounce
buffers in iommu_dma_map_page, to account for min_align_mask.

To deal with granule alignment, __iommu_dma_map maps iova_align(size +
iova_off) bytes starting at phys - iova_off. If iommu_dma_map_page
passes aligned size when using swiotlb, then this becomes
iova_align(iova_align(orig_size) + iova_off). Normally iova_off will be
zero when using swiotlb. However, this is not the case for devices that
set min_align_mask. When iova_off is non-zero, __iommu_dma_map ends up
mapping an extra page at the end of the buffer. Beyond just being a
security issue, the extra page is not cleaned up by __iommu_dma_unmap.
This causes problems when the IOVA is reused, due to collisions in the
iommu driver. Just passing the original size is sufficient, since
__iommu_dma_map will take care of granule alignment.
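
To make the arithmetic concrete (offset hypothetical): with a 4KB
granule, a 512-byte buffer bounced at offset 0xa00 within its granule
gives

	iova_align(iova_align(512) + 0xa00) = iova_align(4096 + 2560) = 8192
	iova_align(512 + 0xa00)             = iova_align(3072)        = 4096

so passing the aligned size maps an extra, never-unmapped granule,
while passing the original size maps exactly the one granule needed.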

Fixes: 1f221a0d0dbf ("swiotlb: respect min_align_mask")
Signed-off-by: David Stevens <[email protected]>
---
drivers/iommu/dma-iommu.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 9b8c17c3d29b..addcaa09db12 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -805,7 +805,6 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
struct iommu_domain *domain = iommu_get_dma_domain(dev);
struct iommu_dma_cookie *cookie = domain->iova_cookie;
struct iova_domain *iovad = &cookie->iovad;
- size_t aligned_size = size;
dma_addr_t iova, dma_mask = dma_get_mask(dev);

/*
@@ -814,7 +813,7 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
*/
if (dev_use_swiotlb(dev) && iova_offset(iovad, phys | size)) {
void *padding_start;
- size_t padding_size;
+ size_t padding_size, aligned_size;

aligned_size = iova_align(iovad, size);
phys = swiotlb_tbl_map_single(dev, phys, size, aligned_size,
@@ -839,7 +838,7 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
arch_sync_dma_for_device(phys, size, dir);

- iova = __iommu_dma_map(dev, phys, aligned_size, prot, dma_mask);
+ iova = __iommu_dma_map(dev, phys, size, prot, dma_mask);
if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(phys))
swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
return iova;
--
2.33.0.259.gc128427fd7-goog

2021-08-30 05:03:54

by David Stevens

Subject: [PATCH v7 4/7] dma-iommu: fold _swiotlb helpers into callers

From: David Stevens <[email protected]>

Fold the _swiotlb helper functions into the respective _page functions,
since recent fixes have moved all logic from the _page functions to the
_swiotlb functions.

Signed-off-by: David Stevens <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: Robin Murphy <[email protected]>
---
drivers/iommu/dma-iommu.c | 135 +++++++++++++++++---------------------
1 file changed, 59 insertions(+), 76 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index abc528ed653c..95bfa57be488 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -510,26 +510,6 @@ static void __iommu_dma_unmap(struct device *dev, dma_addr_t dma_addr,
iommu_dma_free_iova(cookie, dma_addr, size, &iotlb_gather);
}

-static void __iommu_dma_unmap_swiotlb(struct device *dev, dma_addr_t dma_addr,
- size_t size, enum dma_data_direction dir,
- unsigned long attrs)
-{
- struct iommu_domain *domain = iommu_get_dma_domain(dev);
- phys_addr_t phys;
-
- phys = iommu_iova_to_phys(domain, dma_addr);
- if (WARN_ON(!phys))
- return;
-
- if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) && !dev_is_dma_coherent(dev))
- arch_sync_dma_for_cpu(phys, size, dir);
-
- __iommu_dma_unmap(dev, dma_addr, size);
-
- if (unlikely(is_swiotlb_buffer(phys)))
- swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
-}
-
static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
size_t size, int prot, u64 dma_mask)
{
@@ -556,55 +536,6 @@ static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
return iova + iova_off;
}

-static dma_addr_t __iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
- size_t org_size, dma_addr_t dma_mask, bool coherent,
- enum dma_data_direction dir, unsigned long attrs)
-{
- int prot = dma_info_to_prot(dir, coherent, attrs);
- struct iommu_domain *domain = iommu_get_dma_domain(dev);
- struct iommu_dma_cookie *cookie = domain->iova_cookie;
- struct iova_domain *iovad = &cookie->iovad;
- size_t aligned_size = org_size;
- void *padding_start;
- size_t padding_size;
- dma_addr_t iova;
-
- /*
- * If both the physical buffer start address and size are
- * page aligned, we don't need to use a bounce page.
- */
- if (IS_ENABLED(CONFIG_SWIOTLB) && dev_is_untrusted(dev) &&
- iova_offset(iovad, phys | org_size)) {
- aligned_size = iova_align(iovad, org_size);
- phys = swiotlb_tbl_map_single(dev, phys, org_size,
- aligned_size, dir, attrs);
-
- if (phys == DMA_MAPPING_ERROR)
- return DMA_MAPPING_ERROR;
-
- /* Cleanup the padding area. */
- padding_start = phys_to_virt(phys);
- padding_size = aligned_size;
-
- if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
- (dir == DMA_TO_DEVICE ||
- dir == DMA_BIDIRECTIONAL)) {
- padding_start += org_size;
- padding_size -= org_size;
- }
-
- memset(padding_start, 0, padding_size);
- }
-
- if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
- arch_sync_dma_for_device(phys, org_size, dir);
-
- iova = __iommu_dma_map(dev, phys, aligned_size, prot, dma_mask);
- if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(phys))
- swiotlb_tbl_unmap_single(dev, phys, org_size, dir, attrs);
- return iova;
-}
-
static void __iommu_dma_free_pages(struct page **pages, int count)
{
while (count--)
@@ -865,15 +796,68 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
{
phys_addr_t phys = page_to_phys(page) + offset;
bool coherent = dev_is_dma_coherent(dev);
+ int prot = dma_info_to_prot(dir, coherent, attrs);
+ struct iommu_domain *domain = iommu_get_dma_domain(dev);
+ struct iommu_dma_cookie *cookie = domain->iova_cookie;
+ struct iova_domain *iovad = &cookie->iovad;
+ size_t aligned_size = size;
+ dma_addr_t iova, dma_mask = dma_get_mask(dev);
+
+ /*
+ * If both the physical buffer start address and size are
+ * page aligned, we don't need to use a bounce page.
+ */
+ if (IS_ENABLED(CONFIG_SWIOTLB) && dev_is_untrusted(dev) &&
+ iova_offset(iovad, phys | size)) {
+ void *padding_start;
+ size_t padding_size;
+
+ aligned_size = iova_align(iovad, size);
+ phys = swiotlb_tbl_map_single(dev, phys, size,
+ aligned_size, dir, attrs);
+
+ if (phys == DMA_MAPPING_ERROR)
+ return DMA_MAPPING_ERROR;

- return __iommu_dma_map_swiotlb(dev, phys, size, dma_get_mask(dev),
- coherent, dir, attrs);
+ /* Cleanup the padding area. */
+ padding_start = phys_to_virt(phys);
+ padding_size = aligned_size;
+
+ if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+ (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)) {
+ padding_start += size;
+ padding_size -= size;
+ }
+
+ memset(padding_start, 0, padding_size);
+ }
+
+ if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
+ arch_sync_dma_for_device(phys, size, dir);
+
+ iova = __iommu_dma_map(dev, phys, aligned_size, prot, dma_mask);
+ if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(phys))
+ swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
+ return iova;
}

static void iommu_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
size_t size, enum dma_data_direction dir, unsigned long attrs)
{
- __iommu_dma_unmap_swiotlb(dev, dma_handle, size, dir, attrs);
+ struct iommu_domain *domain = iommu_get_dma_domain(dev);
+ phys_addr_t phys;
+
+ phys = iommu_iova_to_phys(domain, dma_handle);
+ if (WARN_ON(!phys))
+ return;
+
+ if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) && !dev_is_dma_coherent(dev))
+ arch_sync_dma_for_cpu(phys, size, dir);
+
+ __iommu_dma_unmap(dev, dma_handle, size);
+
+ if (unlikely(is_swiotlb_buffer(phys)))
+ swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
}

/*
@@ -958,7 +942,7 @@ static void iommu_dma_unmap_sg_swiotlb(struct device *dev, struct scatterlist *s
int i;

for_each_sg(sg, s, nents, i)
- __iommu_dma_unmap_swiotlb(dev, sg_dma_address(s),
+ iommu_dma_unmap_page(dev, sg_dma_address(s),
sg_dma_len(s), dir, attrs);
}

@@ -969,9 +953,8 @@ static int iommu_dma_map_sg_swiotlb(struct device *dev, struct scatterlist *sg,
int i;

for_each_sg(sg, s, nents, i) {
- sg_dma_address(s) = __iommu_dma_map_swiotlb(dev, sg_phys(s),
- s->length, dma_get_mask(dev),
- dev_is_dma_coherent(dev), dir, attrs);
+ sg_dma_address(s) = iommu_dma_map_page(dev, sg_page(s),
+ s->offset, s->length, dir, attrs);
if (sg_dma_address(s) == DMA_MAPPING_ERROR)
goto out_unmap;
sg_dma_len(s) = s->length;
--
2.33.0.259.gc128427fd7-goog

2021-08-30 17:07:49

by Rajat Jain

Subject: Re: [PATCH v7 0/7] Fixes for dma-iommu swiotlb bounce buffers

I'm wondering why I don't see v7 of these patches on patchwork
(https://lore.kernel.org/patchwork/project/lkml/list/?series=&submitter=27643&state=&q=&archive=&delegate=)?

On Sun, Aug 29, 2021 at 10:00 PM David Stevens <[email protected]> wrote:
>
> This patch set includes various fixes for dma-iommu's swiotlb bounce
> buffers for untrusted devices.
>
> [...]

2021-09-13 08:24:10

by David Stevens

[permalink] [raw]
Subject: Re: [PATCH v7 0/7] Fixes for dma-iommu swiotlb bounce buffers

Is there further feedback on these patches? Only patch 7 is still
pending review.

-David

On Mon, Aug 30, 2021 at 2:00 PM David Stevens <[email protected]> wrote:
>
> This patch set includes various fixes for dma-iommu's swiotlb bounce
> buffers for untrusted devices.
>
> [...]

2021-09-28 09:07:01

by Joerg Roedel

[permalink] [raw]
Subject: Re: [PATCH v7 0/7] Fixes for dma-iommu swiotlb bounce buffers

On Mon, Aug 30, 2021 at 01:59:18PM +0900, David Stevens wrote:
> David Stevens (7):
> dma-iommu: fix sync_sg with swiotlb
> dma-iommu: fix arch_sync_dma for map
> dma-iommu: skip extra sync during unmap w/swiotlb
> dma-iommu: fold _swiotlb helpers into callers
> dma-iommu: Check CONFIG_SWIOTLB more broadly
> swiotlb: support aligned swiotlb buffers
> dma-iommu: account for min_align_mask w/swiotlb

This doesn't apply to v5.15-rc3. Can you please sort this out and
re-send?