2023-02-20 15:23:18

by Niklas Schnelle

Subject: [PATCH v7 0/6] iommu/dma: s390 DMA API conversion and optimized IOTLB flushing

Hi All,

This patch series converts s390's PCI support from its platform specific DMA
API implementation in arch/s390/pci/pci_dma.c to the common DMA IOMMU layer.
The conversion itself is done in patches 3-4 with patch 2 providing the final
necessary IOMMU driver improvement to handle s390's special IOTLB flush
out-of-resource indication in virtualized environments. Patches 1-2 may be
applied independently. The conversion itself only touches the s390 IOMMU driver
and s390 arch code, moving over the remaining functions from the s390 DMA API
implementation. No changes to common code are necessary.

After patch 4 the basic conversion is done, and under our partitioning machine
hypervisor (LPAR) performance matches or exceeds the existing code. When
running under z/VM or KVM, however, performance plummets to about half of the
existing code due to a much higher rate of IOTLB flushes for unmapped pages.
Because the hypervisors use IOTLB flushes to synchronize their shadow tables,
these flushes are very expensive, and minimizing them is key to regaining the
lost performance.

To this end patches 5-6 propose a new, single-queue IOTLB flushing scheme as
an alternative to the existing per-CPU flush queues. Introducing an alternative
scheme was also suggested by Robin Murphy[1]. In the previous RFC of this
conversion Robin suggested reusing more of the existing queuing logic, which
I have incorporated since v2. The single queue mode is introduced in patch 5
together with a new dma_iommu_options struct and a tune_dma_iommu callback in
the IOMMU ops, which allows IOMMU drivers to switch to a single flush queue.
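
To give an idea of the interface, a minimal sketch of such a callback is shown
below; my_iommu_tune_dma_iommu() and my_dev_wants_single_queue() are
hypothetical placeholders (the real example is s390's callback, which keys off
zdev->tlb_refresh), and the callback is hooked up via the new tune_dma_iommu
member of the driver's IOMMU ops:

static void my_iommu_tune_dma_iommu(struct device *dev,
				    struct dma_iommu_options *options)
{
	/* Hypothetical check; the s390 driver tests zdev->tlb_refresh here. */
	if (my_dev_wants_single_queue(dev))
		options->flags |= IOMMU_DMA_OPTS_SINGLE_QUEUE;
}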

Patch 6 then enables variable queue sizes, restricted to powers of 2 so that
shift/mask arithmetic keeps performance as close to the existing code as
possible. The variable queue size and a variable timeout are added to the
dma_iommu_options struct and utilized by s390 in the z/VM and KVM guest cases.
As it is implemented in common code, the single-queue IOTLB flushing scheme can
of course be used by other platforms with expensive IOTLB flushes; virtio-iommu
in particular may be a candidate.
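
For illustration, here is the core of this in condensed form; it is a fragment
of the dma-iommu.c changes in patch 6, not standalone code. With the queue size
verified to be a power of 2, ring indices wrap with a cheap AND instead of a
modulo (for the single queue case s390 requests 32768 entries and a 1000 ms
timeout):

	/* Set up once when the flush queue is initialized. */
	fq->mod_mask = fq_size - 1;	/* fq_size is a power of 2 */

	/* Hot path: advance the ring, previously (tail + 1) % IOVA_FQ_SIZE. */
	fq->tail = (fq->tail + 1) & fq->mod_mask;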

In a previous version I verified that the new scheme does work on my x86_64
Ryzen workstation by locally modifying iommu_subsys_init() to default to the
single queue mode and verifying its use via "/sys/.../iommu_group/type". I did
not find problems with an AMD GPU, Intel NIC (with SR-IOV and KVM
pass-through), NVMes, or any on-board peripherals.

As with previous versions, this series is available via my git.kernel.org
tree[3] in the dma_iommu_v7 branch with the signed s390_dma_iommu_v7 tag. This
version applies on top of iommu-next to incorporate the ops->set_platform_dma()
and GFP changes.

NOTE: Due to the large drop in performance I think we should not merge the DMA
API conversion (patch 4) until we have a better suited IOVA flushing scheme
with improvements similar to the proposed changes.

Best regards,
Niklas

[0] https://lore.kernel.org/linux-iommu/[email protected]/
[1] https://lore.kernel.org/linux-iommu/[email protected]/
[2] https://lore.kernel.org/linux-iommu/[email protected]/
[3] https://git.kernel.org/pub/scm/linux/kernel/git/niks/linux.git/

Changes since v6:
- Rebased on iommu-next branch (Matt)
- No need for ops->set_platform_dma() anymore
- Add gfp_t gfp parameters for page allocations
- In patch 4 removed a superfluous s390_domain->dma_table assignment
- Added R-bs from Matt

Changes since v5:
- Instead of introducing a new IOMMU domain type utilize a new options
mechanism that allows IOMMU drivers to tune the DMA IOMMU flushing (Jason,
Robin)
- The above reworks patches 5 and 6
- Dropped patch 7 as its functionality is no longer needed

Changes since v4:
- Picked up R-bs for patches 1, 2 and 3
- In patch 5 fixed iommu_group_store_type() mistakenly initializing DMA-SQ
instead of DMA-FQ. This was caused by iommu_dma_init_fq() being called before
domain->type is set; instead pass the type as a parameter. This also closes
a window where domain->type is still DMA while the FQ is already used. (Gerd)
- Replaced a missed check for IOMMU_DOMAIN_DMA_FQ with the new generic
__IOMMU_DOMAIN_DMA_LAZY in patch 5
- Made the ISM PCI Function Type a define (Matt)
- Removed stale TODO comment (Matt)

Changes since v3:
- Reword commit message of patch 2 for more clarity
- Correct typo in comment added by patch 2 (Alexandra)
- Adapted the signature of the .iotlb_sync_map op for the sun50i IOMMU driver
added in v6.2-rc1 (kernel test robot)
- Add R-b from Alexandra for patch 1

Changes since v2:
- Move the IOTLB out-of-resource handling into the IOMMU driver, enabling it
also for the IOMMU API (patch 2). This also makes it independent of the DMA
API conversion (Robin, Jason).
- Rename __IOMMU_DOMAIN_DMA_FQ to __IOMMU_DOMAIN_DMA_LAZY when introducing
single queue flushing mode.
- Make selecting between single and per-CPU flush queues an explicit IOMMU op
(patch 7)

Changes since RFC v1:
- Patch 1 uses dma_set_mask_and_coherent() (Christoph)
- Patch 3 now documents and allows the use of iommu.strict=0|1 on s390 and
deprecates s390_iommu=strict while making it an alias.
- Patches 5-7 completely reworked to reuse existing queue logic (Robin)
- Added patch 4 to allow using iommu.strict=0|1 to override
ops->def_domain_type.

Niklas Schnelle (6):
s390/ism: Set DMA coherent mask
iommu: Allow .iotlb_sync_map to fail and handle s390's -ENOMEM return
s390/pci: prepare is_passed_through() for dma-iommu
s390/pci: Use dma-iommu layer
iommu/dma: Allow a single FQ in addition to per-CPU FQs
iommu/dma: Make flush queue sizes and timeout driver configurable

.../admin-guide/kernel-parameters.txt | 9 +-
arch/s390/include/asm/pci.h | 7 -
arch/s390/include/asm/pci_clp.h | 3 +
arch/s390/include/asm/pci_dma.h | 121 +--
arch/s390/pci/Makefile | 2 +-
arch/s390/pci/pci.c | 22 +-
arch/s390/pci/pci_bus.c | 5 -
arch/s390/pci/pci_debug.c | 12 +-
arch/s390/pci/pci_dma.c | 735 ------------------
arch/s390/pci/pci_event.c | 17 +-
arch/s390/pci/pci_sysfs.c | 19 +-
drivers/iommu/Kconfig | 4 +-
drivers/iommu/amd/iommu.c | 5 +-
drivers/iommu/apple-dart.c | 5 +-
drivers/iommu/dma-iommu.c | 189 +++--
drivers/iommu/dma-iommu.h | 4 +-
drivers/iommu/intel/iommu.c | 5 +-
drivers/iommu/iommu.c | 24 +-
drivers/iommu/msm_iommu.c | 5 +-
drivers/iommu/mtk_iommu.c | 5 +-
drivers/iommu/s390-iommu.c | 435 ++++++++++-
drivers/iommu/sprd-iommu.c | 5 +-
drivers/iommu/sun50i-iommu.c | 4 +-
drivers/iommu/tegra-gart.c | 5 +-
drivers/s390/net/ism_drv.c | 2 +-
include/linux/iommu.h | 29 +-
26 files changed, 671 insertions(+), 1007 deletions(-)
delete mode 100644 arch/s390/pci/pci_dma.c

--
2.37.2



2023-02-20 15:23:23

by Niklas Schnelle

Subject: [PATCH v7 6/6] iommu/dma: Make flush queue sizes and timeout driver configurable

Flush queues currently use a fixed compile-time size of 256 entries. This
being a power of 2 allows the compiler to use shift and mask instead of more
expensive modulo operations. With per-CPU flush queues, larger queue sizes
would hit per-CPU allocation limits; with a single flush queue, however, these
limits do not apply. Moreover, since single queues are particularly suitable
for virtualized environments with expensive IOTLB flushes, they benefit
especially from larger queues and thus fewer flushes.

To this end, re-order struct iova_fq so that a flexible array can be used and
introduce the flush queue size and timeout as new options in the
dma_iommu_options struct. So as not to lose the shift and mask optimization,
check that the variable queue size is a power of 2 and use explicit shift and
mask instead of letting the compiler optimize this.

In the s390 IOMMU driver a large fixed queue size and timeout are then set
together with single queue mode, bringing performance for s390 paged memory
guests on par with the previous s390-specific DMA API implementation.

Reviewed-by: Matthew Rosato <[email protected]> #s390
Signed-off-by: Niklas Schnelle <[email protected]>
---
drivers/iommu/dma-iommu.c | 40 ++++++++++++++++++++++++--------------
drivers/iommu/s390-iommu.c | 8 +++++++-
include/linux/iommu.h | 6 +++++-
3 files changed, 37 insertions(+), 17 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 6f5fd110e0e0..ec71dda87521 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -89,10 +89,10 @@ static int __init iommu_dma_forcedac_setup(char *str)
early_param("iommu.forcedac", iommu_dma_forcedac_setup);

/* Number of entries per flush queue */
-#define IOVA_FQ_SIZE 256
+#define IOVA_DEFAULT_FQ_SIZE 256

/* Timeout (in ms) after which entries are flushed from the queue */
-#define IOVA_FQ_TIMEOUT 10
+#define IOVA_DEFAULT_FQ_TIMEOUT 10

/* Flush queue entry for deferred flushing */
struct iova_fq_entry {
@@ -104,18 +104,19 @@ struct iova_fq_entry {

/* Per-CPU flush queue structure */
struct iova_fq {
- struct iova_fq_entry entries[IOVA_FQ_SIZE];
- unsigned int head, tail;
spinlock_t lock;
+ unsigned int head, tail;
+ unsigned int mod_mask;
+ struct iova_fq_entry entries[];
};

#define fq_ring_for_each(i, fq) \
- for ((i) = (fq)->head; (i) != (fq)->tail; (i) = ((i) + 1) % IOVA_FQ_SIZE)
+ for ((i) = (fq)->head; (i) != (fq)->tail; (i) = ((i) + 1) & (fq)->mod_mask)

static inline bool fq_full(struct iova_fq *fq)
{
assert_spin_locked(&fq->lock);
- return (((fq->tail + 1) % IOVA_FQ_SIZE) == fq->head);
+ return (((fq->tail + 1) & fq->mod_mask) == fq->head);
}

static inline unsigned int fq_ring_add(struct iova_fq *fq)
@@ -124,7 +125,7 @@ static inline unsigned int fq_ring_add(struct iova_fq *fq)

assert_spin_locked(&fq->lock);

- fq->tail = (idx + 1) % IOVA_FQ_SIZE;
+ fq->tail = (idx + 1) & fq->mod_mask;

return idx;
}
@@ -146,7 +147,7 @@ static void fq_ring_free(struct iommu_dma_cookie *cookie, struct iova_fq *fq)
fq->entries[idx].iova_pfn,
fq->entries[idx].pages);

- fq->head = (fq->head + 1) % IOVA_FQ_SIZE;
+ fq->head = (fq->head + 1) & fq->mod_mask;
}
}

@@ -244,7 +245,7 @@ static void queue_iova(struct iommu_dma_cookie *cookie,
if (!atomic_read(&cookie->fq_timer_on) &&
!atomic_xchg(&cookie->fq_timer_on, 1))
mod_timer(&cookie->fq_timer,
- jiffies + msecs_to_jiffies(IOVA_FQ_TIMEOUT));
+ jiffies + msecs_to_jiffies(cookie->options.fq_timeout));
}

static void iommu_dma_free_fq_single(struct iova_fq *fq)
@@ -286,27 +287,29 @@ static void iommu_dma_free_fq(struct iommu_dma_cookie *cookie)
}


-static void iommu_dma_init_one_fq(struct iova_fq *fq)
+static void iommu_dma_init_one_fq(struct iova_fq *fq, size_t fq_size)
{
int i;

fq->head = 0;
fq->tail = 0;
+ fq->mod_mask = fq_size - 1;

spin_lock_init(&fq->lock);

- for (i = 0; i < IOVA_FQ_SIZE; i++)
+ for (i = 0; i < fq_size; i++)
INIT_LIST_HEAD(&fq->entries[i].freelist);
}

static int iommu_dma_init_fq_single(struct iommu_dma_cookie *cookie)
{
+ size_t fq_size = cookie->options.fq_size;
struct iova_fq *queue;

- queue = vzalloc(sizeof(*queue));
+ queue = vzalloc(struct_size(queue, entries, fq_size));
if (!queue)
return -ENOMEM;
- iommu_dma_init_one_fq(queue);
+ iommu_dma_init_one_fq(queue, fq_size);
cookie->single_fq = queue;

return 0;
@@ -314,15 +317,17 @@ static int iommu_dma_init_fq_single(struct iommu_dma_cookie *cookie)

static int iommu_dma_init_fq_percpu(struct iommu_dma_cookie *cookie)
{
+ size_t fq_size = cookie->options.fq_size;
struct iova_fq __percpu *queue;
int cpu;

- queue = alloc_percpu(struct iova_fq);
+ queue = __alloc_percpu(struct_size(queue, entries, fq_size),
+ __alignof__(*queue));
if (!queue)
return -ENOMEM;

for_each_possible_cpu(cpu)
- iommu_dma_init_one_fq(per_cpu_ptr(queue, cpu));
+ iommu_dma_init_one_fq(per_cpu_ptr(queue, cpu), fq_size);
cookie->percpu_fq = queue;
return 0;
}
@@ -340,6 +345,9 @@ int iommu_dma_init_fq(struct device *dev, struct iommu_domain *domain)
if (ops->tune_dma_iommu)
ops->tune_dma_iommu(dev, &cookie->options);

+ if (WARN_ON_ONCE(!is_power_of_2(cookie->options.fq_size)))
+ cookie->options.fq_size = IOVA_DEFAULT_FQ_SIZE;
+
atomic64_set(&cookie->fq_flush_start_cnt, 0);
atomic64_set(&cookie->fq_flush_finish_cnt, 0);

@@ -382,6 +390,8 @@ static struct iommu_dma_cookie *cookie_alloc(enum iommu_dma_cookie_type type)
INIT_LIST_HEAD(&cookie->msi_page_list);
cookie->type = type;
cookie->options.flags = IOMMU_DMA_OPTS_PER_CPU_QUEUE;
+ cookie->options.fq_size = IOVA_DEFAULT_FQ_SIZE;
+ cookie->options.fq_timeout = IOVA_DEFAULT_FQ_TIMEOUT;
}
return cookie;
}
diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c
index 7059d45c36df..24922ba99783 100644
--- a/drivers/iommu/s390-iommu.c
+++ b/drivers/iommu/s390-iommu.c
@@ -453,13 +453,19 @@ static void s390_iommu_get_resv_regions(struct device *dev,
}
}

+#define S390_IOMMU_SINGLE_FQ_SIZE 32768
+#define S390_IOMMU_SINGLE_FQ_TIMEOUT 1000
+
static void s390_iommu_tune_dma_iommu(struct device *dev,
struct dma_iommu_options *options)
{
struct zpci_dev *zdev = to_zpci_dev(dev);

- if (zdev->tlb_refresh)
+ if (zdev->tlb_refresh) {
options->flags |= IOMMU_DMA_OPTS_SINGLE_QUEUE;
+ options->fq_size = S390_IOMMU_SINGLE_FQ_SIZE;
+ options->fq_timeout = S390_IOMMU_SINGLE_FQ_TIMEOUT;
+ }
}

static struct iommu_device *s390_iommu_probe_device(struct device *dev)
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 0f8508c18cb2..8efe9a5b5812 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -224,6 +224,8 @@ struct iommu_iotlb_gather {
* struct dma_iommu_options - Options for dma-iommu
*
* @flags: Flag bits for enabling/disabling dma-iommu settings
+ * @fq_size: Size of the IOTLB flush queue(s), must be a power of two
+ * @fq_timeout: Timeout (in ms) used for queued IOTLB flushes
*
* This structure is intended to provide IOMMU drivers a way to influence the
* behavior of the dma-iommu DMA API implementation. This allows optimizing for
@@ -232,7 +234,9 @@ struct iommu_iotlb_gather {
struct dma_iommu_options {
#define IOMMU_DMA_OPTS_PER_CPU_QUEUE (0L << 0)
#define IOMMU_DMA_OPTS_SINGLE_QUEUE (1L << 0)
- u64 flags;
+ u64 flags;
+ size_t fq_size;
+ unsigned int fq_timeout;
};

/**
--
2.37.2


2023-02-20 15:23:25

by Niklas Schnelle

Subject: [PATCH v7 2/6] iommu: Allow .iotlb_sync_map to fail and handle s390's -ENOMEM return

On s390, when using a paging hypervisor, .iotlb_sync_map is used to sync
mappings by letting the hypervisor inspect the synced IOVA range and update
a shadow table. This, however, means that .iotlb_sync_map can fail, as the
hypervisor may run out of resources while doing the sync. This can be because
the hypervisor is unable to pin guest pages, because of a limit on mapped
addresses such as vfio_iommu_type1.dma_entry_limit, or because of a lack of
other resources. Either way, such a failure to sync a mapping should result
in a DMA_MAPPING_ERROR.

Now, especially when running with batched IOTLB flushes for unmap, it may be
that some IOVAs have already been invalidated but not yet synced via
.iotlb_sync_map. Thus, if the hypervisor indicates running out of resources,
first do a global flush allowing the hypervisor to free resources associated
with these mappings as well as retry creating the new mappings, and only if
that also fails report the error to callers.

Reviewed-by: Lu Baolu <[email protected]>
Reviewed-by: Matthew Rosato <[email protected]>
Signed-off-by: Niklas Schnelle <[email protected]>
---
v3 -> v4:
- Adapted the signature of the .iotlb_sync_map op for the sun50i IOMMU driver
added in v6.2-rc1 (kernel test robot)

drivers/iommu/amd/iommu.c | 5 +++--
drivers/iommu/apple-dart.c | 5 +++--
drivers/iommu/intel/iommu.c | 5 +++--
drivers/iommu/iommu.c | 20 ++++++++++++++++----
drivers/iommu/msm_iommu.c | 5 +++--
drivers/iommu/mtk_iommu.c | 5 +++--
drivers/iommu/s390-iommu.c | 29 ++++++++++++++++++++++++-----
drivers/iommu/sprd-iommu.c | 5 +++--
drivers/iommu/sun50i-iommu.c | 4 +++-
drivers/iommu/tegra-gart.c | 5 +++--
include/linux/iommu.h | 4 ++--
11 files changed, 66 insertions(+), 26 deletions(-)

diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index c20c41dd9c91..d0e378408d79 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -2187,14 +2187,15 @@ static int amd_iommu_attach_device(struct iommu_domain *dom,
return ret;
}

-static void amd_iommu_iotlb_sync_map(struct iommu_domain *dom,
- unsigned long iova, size_t size)
+static int amd_iommu_iotlb_sync_map(struct iommu_domain *dom,
+ unsigned long iova, size_t size)
{
struct protection_domain *domain = to_pdomain(dom);
struct io_pgtable_ops *ops = &domain->iop.iop.ops;

if (ops->map_pages)
domain_flush_np_cache(domain, iova, size);
+ return 0;
}

static int amd_iommu_map_pages(struct iommu_domain *dom, unsigned long iova,
diff --git a/drivers/iommu/apple-dart.c b/drivers/iommu/apple-dart.c
index 06169d36eab8..cbed1f87eae9 100644
--- a/drivers/iommu/apple-dart.c
+++ b/drivers/iommu/apple-dart.c
@@ -506,10 +506,11 @@ static void apple_dart_iotlb_sync(struct iommu_domain *domain,
apple_dart_domain_flush_tlb(to_dart_domain(domain));
}

-static void apple_dart_iotlb_sync_map(struct iommu_domain *domain,
- unsigned long iova, size_t size)
+static int apple_dart_iotlb_sync_map(struct iommu_domain *domain,
+ unsigned long iova, size_t size)
{
apple_dart_domain_flush_tlb(to_dart_domain(domain));
+ return 0;
}

static phys_addr_t apple_dart_iova_to_phys(struct iommu_domain *domain,
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 86ee06ac058b..c1a6d64be89a 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -4747,8 +4747,8 @@ static bool risky_device(struct pci_dev *pdev)
return false;
}

-static void intel_iommu_iotlb_sync_map(struct iommu_domain *domain,
- unsigned long iova, size_t size)
+static int intel_iommu_iotlb_sync_map(struct iommu_domain *domain,
+ unsigned long iova, size_t size)
{
struct dmar_domain *dmar_domain = to_dmar_domain(domain);
unsigned long pages = aligned_nrpages(iova, size);
@@ -4758,6 +4758,7 @@ static void intel_iommu_iotlb_sync_map(struct iommu_domain *domain,

xa_for_each(&dmar_domain->iommu_array, i, info)
__mapping_notify_one(info->iommu, dmar_domain, pfn, pages);
+ return 0;
}

static void intel_iommu_remove_dev_pasid(struct device *dev, ioasid_t pasid)
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index b3f847b25b4f..a3de8d5928bf 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2386,8 +2386,17 @@ int iommu_map(struct iommu_domain *domain, unsigned long iova,
return -EINVAL;

ret = __iommu_map(domain, iova, paddr, size, prot, gfp);
- if (ret == 0 && ops->iotlb_sync_map)
- ops->iotlb_sync_map(domain, iova, size);
+ if (ret == 0 && ops->iotlb_sync_map) {
+ ret = ops->iotlb_sync_map(domain, iova, size);
+ if (ret)
+ goto out_err;
+ }
+
+ return ret;
+
+out_err:
+ /* undo mappings already done */
+ iommu_unmap(domain, iova, size);

return ret;
}
@@ -2528,8 +2537,11 @@ ssize_t iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
sg = sg_next(sg);
}

- if (ops->iotlb_sync_map)
- ops->iotlb_sync_map(domain, iova, mapped);
+ if (ops->iotlb_sync_map) {
+ ret = ops->iotlb_sync_map(domain, iova, mapped);
+ if (ret)
+ goto out_err;
+ }
return mapped;

out_err:
diff --git a/drivers/iommu/msm_iommu.c b/drivers/iommu/msm_iommu.c
index 454f6331c889..2033716eac78 100644
--- a/drivers/iommu/msm_iommu.c
+++ b/drivers/iommu/msm_iommu.c
@@ -486,12 +486,13 @@ static int msm_iommu_map(struct iommu_domain *domain, unsigned long iova,
return ret;
}

-static void msm_iommu_sync_map(struct iommu_domain *domain, unsigned long iova,
- size_t size)
+static int msm_iommu_sync_map(struct iommu_domain *domain, unsigned long iova,
+ size_t size)
{
struct msm_priv *priv = to_msm_priv(domain);

__flush_iotlb_range(iova, size, SZ_4K, false, priv);
+ return 0;
}

static size_t msm_iommu_unmap(struct iommu_domain *domain, unsigned long iova,
diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
index d5a4955910ff..29769fb5c51e 100644
--- a/drivers/iommu/mtk_iommu.c
+++ b/drivers/iommu/mtk_iommu.c
@@ -750,12 +750,13 @@ static void mtk_iommu_iotlb_sync(struct iommu_domain *domain,
mtk_iommu_tlb_flush_range_sync(gather->start, length, dom->bank);
}

-static void mtk_iommu_sync_map(struct iommu_domain *domain, unsigned long iova,
- size_t size)
+static int mtk_iommu_sync_map(struct iommu_domain *domain, unsigned long iova,
+ size_t size)
{
struct mtk_iommu_domain *dom = to_mtk_domain(domain);

mtk_iommu_tlb_flush_range_sync(iova, size, dom->bank);
+ return 0;
}

static phys_addr_t mtk_iommu_iova_to_phys(struct iommu_domain *domain,
diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c
index 6849644e2892..f79aa527d621 100644
--- a/drivers/iommu/s390-iommu.c
+++ b/drivers/iommu/s390-iommu.c
@@ -207,6 +207,14 @@ static void s390_iommu_release_device(struct device *dev)
__s390_iommu_detach_device(zdev);
}

+
+static int zpci_refresh_all(struct zpci_dev *zdev)
+{
+ return zpci_refresh_trans((u64)zdev->fh << 32, zdev->start_dma,
+ zdev->end_dma - zdev->start_dma + 1);
+
+}
+
static void s390_iommu_flush_iotlb_all(struct iommu_domain *domain)
{
struct s390_domain *s390_domain = to_s390_domain(domain);
@@ -214,8 +222,7 @@ static void s390_iommu_flush_iotlb_all(struct iommu_domain *domain)

rcu_read_lock();
list_for_each_entry_rcu(zdev, &s390_domain->devices, iommu_list) {
- zpci_refresh_trans((u64)zdev->fh << 32, zdev->start_dma,
- zdev->end_dma - zdev->start_dma + 1);
+ zpci_refresh_all(zdev);
}
rcu_read_unlock();
}
@@ -239,20 +246,32 @@ static void s390_iommu_iotlb_sync(struct iommu_domain *domain,
rcu_read_unlock();
}

-static void s390_iommu_iotlb_sync_map(struct iommu_domain *domain,
+static int s390_iommu_iotlb_sync_map(struct iommu_domain *domain,
unsigned long iova, size_t size)
{
struct s390_domain *s390_domain = to_s390_domain(domain);
struct zpci_dev *zdev;
+ int ret = 0;

rcu_read_lock();
list_for_each_entry_rcu(zdev, &s390_domain->devices, iommu_list) {
if (!zdev->tlb_refresh)
continue;
- zpci_refresh_trans((u64)zdev->fh << 32,
- iova, size);
+ ret = zpci_refresh_trans((u64)zdev->fh << 32,
+ iova, size);
+ /*
+ * let the hypervisor discover invalidated entries
+ * allowing it to free IOVAs and unpin pages
+ */
+ if (ret == -ENOMEM) {
+ ret = zpci_refresh_all(zdev);
+ if (ret)
+ break;
+ }
}
rcu_read_unlock();
+
+ return ret;
}

static int s390_iommu_validate_trans(struct s390_domain *s390_domain,
diff --git a/drivers/iommu/sprd-iommu.c b/drivers/iommu/sprd-iommu.c
index ae94d74b73f4..74bcae69653c 100644
--- a/drivers/iommu/sprd-iommu.c
+++ b/drivers/iommu/sprd-iommu.c
@@ -315,8 +315,8 @@ static size_t sprd_iommu_unmap(struct iommu_domain *domain, unsigned long iova,
return size;
}

-static void sprd_iommu_sync_map(struct iommu_domain *domain,
- unsigned long iova, size_t size)
+static int sprd_iommu_sync_map(struct iommu_domain *domain,
+ unsigned long iova, size_t size)
{
struct sprd_iommu_domain *dom = to_sprd_domain(domain);
unsigned int reg;
@@ -328,6 +328,7 @@ static void sprd_iommu_sync_map(struct iommu_domain *domain,

/* clear IOMMU TLB buffer after page table updated */
sprd_iommu_write(dom->sdev, reg, 0xffffffff);
+ return 0;
}

static void sprd_iommu_sync(struct iommu_domain *domain,
diff --git a/drivers/iommu/sun50i-iommu.c b/drivers/iommu/sun50i-iommu.c
index 2d993d0cea7d..60a983f4a494 100644
--- a/drivers/iommu/sun50i-iommu.c
+++ b/drivers/iommu/sun50i-iommu.c
@@ -402,7 +402,7 @@ static void sun50i_iommu_flush_iotlb_all(struct iommu_domain *domain)
spin_unlock_irqrestore(&iommu->iommu_lock, flags);
}

-static void sun50i_iommu_iotlb_sync_map(struct iommu_domain *domain,
+static int sun50i_iommu_iotlb_sync_map(struct iommu_domain *domain,
unsigned long iova, size_t size)
{
struct sun50i_iommu_domain *sun50i_domain = to_sun50i_domain(domain);
@@ -412,6 +412,8 @@ static void sun50i_iommu_iotlb_sync_map(struct iommu_domain *domain,
spin_lock_irqsave(&iommu->iommu_lock, flags);
sun50i_iommu_zap_range(iommu, iova, size);
spin_unlock_irqrestore(&iommu->iommu_lock, flags);
+
+ return 0;
}

static void sun50i_iommu_iotlb_sync(struct iommu_domain *domain,
diff --git a/drivers/iommu/tegra-gart.c b/drivers/iommu/tegra-gart.c
index a482ff838b53..44966d7b07ba 100644
--- a/drivers/iommu/tegra-gart.c
+++ b/drivers/iommu/tegra-gart.c
@@ -252,10 +252,11 @@ static int gart_iommu_of_xlate(struct device *dev,
return 0;
}

-static void gart_iommu_sync_map(struct iommu_domain *domain, unsigned long iova,
- size_t size)
+static int gart_iommu_sync_map(struct iommu_domain *domain, unsigned long iova,
+ size_t size)
{
FLUSH_GART_REGS(gart_handle);
+ return 0;
}

static void gart_iommu_sync(struct iommu_domain *domain,
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 3589d1b8f922..1d5f9250d9ea 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -334,8 +334,8 @@ struct iommu_domain_ops {
struct iommu_iotlb_gather *iotlb_gather);

void (*flush_iotlb_all)(struct iommu_domain *domain);
- void (*iotlb_sync_map)(struct iommu_domain *domain, unsigned long iova,
- size_t size);
+ int (*iotlb_sync_map)(struct iommu_domain *domain, unsigned long iova,
+ size_t size);
void (*iotlb_sync)(struct iommu_domain *domain,
struct iommu_iotlb_gather *iotlb_gather);

--
2.37.2


2023-03-07 13:18:22

by Niklas Schnelle

Subject: Re: [PATCH v7 0/6] iommu/dma: s390 DMA API conversion and optimized IOTLB flushing

On Mon, 2023-02-20 at 16:22 +0100, Niklas Schnelle wrote:
> Hi All,
>
> This patch series converts s390's PCI support from its platform specific DMA
> API implementation in arch/s390/pci/pci_dma.c to the common DMA IOMMU layer.
> [...]

FYI this patch set now applies cleanly (and works) on v6.3-rc1. If need
be I can resend with Matt's R-b added, but other than that I currently
don't have open TODOs for this, so review away.

Thanks,
Niklas