Hi All,
This is v6 of a follow-up to Matt's recent series[0] in which he tackled
a race that turned out to lie outside the s390 IOMMU driver itself, as
well as duplicate device attachments. After an internal discussion we
came up with what I believe is a cleaner fix: instead of actively
checking for duplicates, we detach the device from any previous domain
on attach. From my cursory reading of the code this is what the Intel
IOMMU driver does as well.
Moreover we drop the attempt to re-attach the device to its previous IOMMU
domain on failure. This was fragile, unlikely to help and unexpected for
calling code. Thanks Jason for the suggestion.
We can also get rid of struct s390_domain_device entirely if we instead
thread the list through the attached struct zpci_devs. This saves us from
having to allocate during attach and gets rid of one level of indirection
during IOMMU operations.
Additionally 3 more fixes have been added in v3 that weren't in v2 of this
series. One is for a potential situation where the aperture of a domain
could shrink and leave invalid translations. The next one fixes an
off-by-one in the validity check of an IOVA and the last one fixes
a wrong value for pgsize_bitmap.
In v4 we also add a patch changing to the map_pages()/unmap_pages()
interface in order to prevent a performance regression due to the
pgsize_bitmap change.
*Note*:
This series is against the s390 features branch[1] which already contains
the bus_next field removal that was part of v2.
It is also available as branch iommu_fixes_v6 with the GPG signed tag
s390_iommu_fixes_v5 on my niks/linux.git on git.kernel.org[2].
*Open Question*:
Which tree should this go via?
Best regards,
Niklas
Changes since v5:
- Only set zdev->dma_table once zpci_register_ioat() has succeeded like
we correctly did in v4 (Matt)
- In patch 3 WARN_ON() aperture violation in .unmap_pages (Matt)
- In patch 3 return after the WARN_ON() aperture check in attach
Changes since v4:
- Add patch to change to the map_pages()/unmap_pages() API to prevent
a performance regression from the pgsize_bitmap change (Robin)
- In patch 1 unregister IOAT on error (Matt)
- Turn the aperture check in attach into a WARN_ON() in patch 3 (Jason)
Changes since v3:
- Drop s390_domain from __s390_iommu_detach_device() (Jason)
- WARN_ON() mismatched domain in s390_iommu_detach_device() (Jason)
- Use __s390_iommu_detach_device() in s390_iommu_release_device() (Jason)
- Make aperture check resistant against overflow (Jason)
Changes since v2:
- The patch removing the unused bus_next field has been spun out and
already made it into the s390 feature branch on git.kernel.org
- Make __s390_iommu_detach_device() return void (Jason)
- Remove the re-attach on failure dance as it is unlikely to help
and complicates debug and recovery (Jason)
- Ignore attempts to detach from domain that is not the active one
- Add patch to fix potential shrinking of the aperture and use
reserved ranges per device instead of the aperture to respect
IOVA range restrictions (Jason)
- Add a fix for an off by one error on checking an IOVA against
the aperture
- Add a fix for wrong pgsize_bitmap
Changes since v1:
- After patch 3 we don't have to search the devices list on detach as
we already have hold of the zpci_dev (Jason)
- Add a WARN_ON() if we somehow end up detaching a device from a domain
that isn't the device's current domain.
- Removed the iteration and list delete from s390_domain_free(); instead
just WARN_ON() when we're freeing without having detached
- The last two points should help catching sequencing errors much more
quickly in the future.
[0] https://lore.kernel.org/linux-iommu/[email protected]/
[1] https://git.kernel.org/pub/scm/linux/kernel/git/s390/linux.git/
[2] https://git.kernel.org/pub/scm/linux/kernel/git/niks/linux.git
Niklas Schnelle (6):
iommu/s390: Fix duplicate domain attachments
iommu/s390: Get rid of s390_domain_device
iommu/s390: Fix potential s390_domain aperture shrinking
iommu/s390: Fix incorrect aperture check
iommu/s390: Fix incorrect pgsize_bitmap
iommu/s390: Implement map_pages()/unmap_pages() instead of
map()/unmap()
arch/s390/include/asm/pci.h | 1 +
drivers/iommu/s390-iommu.c | 223 +++++++++++++++++-------------------
2 files changed, 109 insertions(+), 115 deletions(-)
--
2.34.1
The struct s390_domain_device serves solely as a list entry for the
devices list of a struct s390_domain. As it contains no information
besides a list_head and a pointer to the struct zpci_dev, we can
simplify things and thread the device list through struct zpci_dev
directly. This removes the need to allocate during domain attach and
removes one level of indirection from mapping operations.
Reviewed-by: Matthew Rosato <[email protected]>
Signed-off-by: Niklas Schnelle <[email protected]>
---
v5->v6:
- Drop Jason's R-b as he didn't explicitly extend it for
the rollback change
arch/s390/include/asm/pci.h | 1 +
drivers/iommu/s390-iommu.c | 37 +++++++------------------------------
2 files changed, 8 insertions(+), 30 deletions(-)
diff --git a/arch/s390/include/asm/pci.h b/arch/s390/include/asm/pci.h
index 108e732d7b14..15f8714ca9b7 100644
--- a/arch/s390/include/asm/pci.h
+++ b/arch/s390/include/asm/pci.h
@@ -117,6 +117,7 @@ struct zpci_bus {
struct zpci_dev {
struct zpci_bus *zbus;
struct list_head entry; /* list of all zpci_devices, needed for hotplug, etc. */
+ struct list_head iommu_list;
struct kref kref;
struct hotplug_slot hotplug_slot;
diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c
index 96173cfee324..399c31b97f65 100644
--- a/drivers/iommu/s390-iommu.c
+++ b/drivers/iommu/s390-iommu.c
@@ -29,11 +29,6 @@ struct s390_domain {
spinlock_t list_lock;
};
-struct s390_domain_device {
- struct list_head list;
- struct zpci_dev *zdev;
-};
-
static struct s390_domain *to_s390_domain(struct iommu_domain *dom)
{
return container_of(dom, struct s390_domain, domain);
@@ -87,21 +82,13 @@ static void s390_domain_free(struct iommu_domain *domain)
static void __s390_iommu_detach_device(struct zpci_dev *zdev)
{
struct s390_domain *s390_domain = zdev->s390_domain;
- struct s390_domain_device *domain_device, *tmp;
unsigned long flags;
if (!s390_domain)
return;
spin_lock_irqsave(&s390_domain->list_lock, flags);
- list_for_each_entry_safe(domain_device, tmp, &s390_domain->devices,
- list) {
- if (domain_device->zdev == zdev) {
- list_del(&domain_device->list);
- kfree(domain_device);
- break;
- }
- }
+ list_del_init(&zdev->iommu_list);
spin_unlock_irqrestore(&s390_domain->list_lock, flags);
zpci_unregister_ioat(zdev, 0);
@@ -114,17 +101,12 @@ static int s390_iommu_attach_device(struct iommu_domain *domain,
{
struct s390_domain *s390_domain = to_s390_domain(domain);
struct zpci_dev *zdev = to_zpci_dev(dev);
- struct s390_domain_device *domain_device;
unsigned long flags;
int cc, rc = 0;
if (!zdev)
return -ENODEV;
- domain_device = kzalloc(sizeof(*domain_device), GFP_KERNEL);
- if (!domain_device)
- return -ENOMEM;
-
if (zdev->s390_domain)
__s390_iommu_detach_device(zdev);
else if (zdev->dma_table)
@@ -132,10 +114,8 @@ static int s390_iommu_attach_device(struct iommu_domain *domain,
cc = zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma,
virt_to_phys(s390_domain->dma_table));
- if (cc) {
- rc = -EIO;
- goto out_free;
- }
+ if (cc)
+ return -EIO;
zdev->dma_table = s390_domain->dma_table;
spin_lock_irqsave(&s390_domain->list_lock, flags);
@@ -151,9 +131,8 @@ static int s390_iommu_attach_device(struct iommu_domain *domain,
rc = -EINVAL;
goto out_unregister;
}
- domain_device->zdev = zdev;
zdev->s390_domain = s390_domain;
- list_add(&domain_device->list, &s390_domain->devices);
+ list_add(&zdev->iommu_list, &s390_domain->devices);
spin_unlock_irqrestore(&s390_domain->list_lock, flags);
return 0;
@@ -161,8 +140,6 @@ static int s390_iommu_attach_device(struct iommu_domain *domain,
out_unregister:
zpci_unregister_ioat(zdev, 0);
zdev->dma_table = NULL;
-out_free:
- kfree(domain_device);
return rc;
}
@@ -201,10 +178,10 @@ static int s390_iommu_update_trans(struct s390_domain *s390_domain,
phys_addr_t pa, dma_addr_t dma_addr,
size_t size, int flags)
{
- struct s390_domain_device *domain_device;
phys_addr_t page_addr = pa & PAGE_MASK;
dma_addr_t start_dma_addr = dma_addr;
unsigned long irq_flags, nr_pages, i;
+ struct zpci_dev *zdev;
unsigned long *entry;
int rc = 0;
@@ -229,8 +206,8 @@ static int s390_iommu_update_trans(struct s390_domain *s390_domain,
}
spin_lock(&s390_domain->list_lock);
- list_for_each_entry(domain_device, &s390_domain->devices, list) {
- rc = zpci_refresh_trans((u64) domain_device->zdev->fh << 32,
+ list_for_each_entry(zdev, &s390_domain->devices, iommu_list) {
+ rc = zpci_refresh_trans((u64)zdev->fh << 32,
start_dma_addr, nr_pages * PAGE_SIZE);
if (rc)
break;
--
2.34.1
Since commit fa7e9ecc5e1c ("iommu/s390: Tolerate repeat attach_dev
calls") we can end up with duplicates in the list of devices attached to
a domain. This is inefficient and confusing since only one domain can
actually be in control of the IOMMU translations for a device. Fix this
by detaching the device from the previous domain, if any, on attach.
Add a WARN_ON() in case we still have attached devices on freeing the
domain. While here remove the re-attach on failure dance as it was
determined to be unlikely to help and may confuse debug and recovery.
Fixes: fa7e9ecc5e1c ("iommu/s390: Tolerate repeat attach_dev calls")
Signed-off-by: Niklas Schnelle <[email protected]>
---
v5->v6:
- Only set zdev->dma_table once zpci_register_ioat() succeeded (Matt)
v4->v5:
- Unregister IOAT and set zdev->dma_table on error (Matt)
drivers/iommu/s390-iommu.c | 106 ++++++++++++++++---------------------
1 file changed, 45 insertions(+), 61 deletions(-)
diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c
index c898bcbbce11..96173cfee324 100644
--- a/drivers/iommu/s390-iommu.c
+++ b/drivers/iommu/s390-iommu.c
@@ -79,10 +79,36 @@ static void s390_domain_free(struct iommu_domain *domain)
{
struct s390_domain *s390_domain = to_s390_domain(domain);
+ WARN_ON(!list_empty(&s390_domain->devices));
dma_cleanup_tables(s390_domain->dma_table);
kfree(s390_domain);
}
+static void __s390_iommu_detach_device(struct zpci_dev *zdev)
+{
+ struct s390_domain *s390_domain = zdev->s390_domain;
+ struct s390_domain_device *domain_device, *tmp;
+ unsigned long flags;
+
+ if (!s390_domain)
+ return;
+
+ spin_lock_irqsave(&s390_domain->list_lock, flags);
+ list_for_each_entry_safe(domain_device, tmp, &s390_domain->devices,
+ list) {
+ if (domain_device->zdev == zdev) {
+ list_del(&domain_device->list);
+ kfree(domain_device);
+ break;
+ }
+ }
+ spin_unlock_irqrestore(&s390_domain->list_lock, flags);
+
+ zpci_unregister_ioat(zdev, 0);
+ zdev->s390_domain = NULL;
+ zdev->dma_table = NULL;
+}
+
static int s390_iommu_attach_device(struct iommu_domain *domain,
struct device *dev)
{
@@ -90,7 +116,7 @@ static int s390_iommu_attach_device(struct iommu_domain *domain,
struct zpci_dev *zdev = to_zpci_dev(dev);
struct s390_domain_device *domain_device;
unsigned long flags;
- int cc, rc;
+ int cc, rc = 0;
if (!zdev)
return -ENODEV;
@@ -99,24 +125,18 @@ static int s390_iommu_attach_device(struct iommu_domain *domain,
if (!domain_device)
return -ENOMEM;
- if (zdev->dma_table && !zdev->s390_domain) {
- cc = zpci_dma_exit_device(zdev);
- if (cc) {
- rc = -EIO;
- goto out_free;
- }
- }
-
if (zdev->s390_domain)
- zpci_unregister_ioat(zdev, 0);
+ __s390_iommu_detach_device(zdev);
+ else if (zdev->dma_table)
+ zpci_dma_exit_device(zdev);
- zdev->dma_table = s390_domain->dma_table;
cc = zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma,
- virt_to_phys(zdev->dma_table));
+ virt_to_phys(s390_domain->dma_table));
if (cc) {
rc = -EIO;
- goto out_restore;
+ goto out_free;
}
+ zdev->dma_table = s390_domain->dma_table;
spin_lock_irqsave(&s390_domain->list_lock, flags);
/* First device defines the DMA range limits */
@@ -127,9 +147,9 @@ static int s390_iommu_attach_device(struct iommu_domain *domain,
/* Allow only devices with identical DMA range limits */
} else if (domain->geometry.aperture_start != zdev->start_dma ||
domain->geometry.aperture_end != zdev->end_dma) {
- rc = -EINVAL;
spin_unlock_irqrestore(&s390_domain->list_lock, flags);
- goto out_restore;
+ rc = -EINVAL;
+ goto out_unregister;
}
domain_device->zdev = zdev;
zdev->s390_domain = s390_domain;
@@ -138,14 +158,9 @@ static int s390_iommu_attach_device(struct iommu_domain *domain,
return 0;
-out_restore:
- if (!zdev->s390_domain) {
- zpci_dma_init_device(zdev);
- } else {
- zdev->dma_table = zdev->s390_domain->dma_table;
- zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma,
- virt_to_phys(zdev->dma_table));
- }
+out_unregister:
+ zpci_unregister_ioat(zdev, 0);
+ zdev->dma_table = NULL;
out_free:
kfree(domain_device);
@@ -155,32 +170,12 @@ static int s390_iommu_attach_device(struct iommu_domain *domain,
static void s390_iommu_detach_device(struct iommu_domain *domain,
struct device *dev)
{
- struct s390_domain *s390_domain = to_s390_domain(domain);
struct zpci_dev *zdev = to_zpci_dev(dev);
- struct s390_domain_device *domain_device, *tmp;
- unsigned long flags;
- int found = 0;
- if (!zdev)
- return;
+ WARN_ON(zdev->s390_domain != to_s390_domain(domain));
- spin_lock_irqsave(&s390_domain->list_lock, flags);
- list_for_each_entry_safe(domain_device, tmp, &s390_domain->devices,
- list) {
- if (domain_device->zdev == zdev) {
- list_del(&domain_device->list);
- kfree(domain_device);
- found = 1;
- break;
- }
- }
- spin_unlock_irqrestore(&s390_domain->list_lock, flags);
-
- if (found && (zdev->s390_domain == s390_domain)) {
- zdev->s390_domain = NULL;
- zpci_unregister_ioat(zdev, 0);
- zpci_dma_init_device(zdev);
- }
+ __s390_iommu_detach_device(zdev);
+ zpci_dma_init_device(zdev);
}
static struct iommu_device *s390_iommu_probe_device(struct device *dev)
@@ -193,24 +188,13 @@ static struct iommu_device *s390_iommu_probe_device(struct device *dev)
static void s390_iommu_release_device(struct device *dev)
{
struct zpci_dev *zdev = to_zpci_dev(dev);
- struct iommu_domain *domain;
/*
- * This is a workaround for a scenario where the IOMMU API common code
- * "forgets" to call the detach_dev callback: After binding a device
- * to vfio-pci and completing the VFIO_SET_IOMMU ioctl (which triggers
- * the attach_dev), removing the device via
- * "echo 1 > /sys/bus/pci/devices/.../remove" won't trigger detach_dev,
- * only release_device will be called via the BUS_NOTIFY_REMOVED_DEVICE
- * notifier.
- *
- * So let's call detach_dev from here if it hasn't been called before.
+ * release_device is expected to detach any domain currently attached
+ * to the device, but keep it attached to other devices in the group.
*/
- if (zdev && zdev->s390_domain) {
- domain = iommu_get_domain_for_dev(dev);
- if (domain)
- s390_iommu_detach_device(domain, dev);
- }
+ if (zdev)
+ __s390_iommu_detach_device(zdev);
}
static int s390_iommu_update_trans(struct s390_domain *s390_domain,
--
2.34.1
While s390-iommu currently implements the map()/unmap() operations,
which map/unmap only a single page at a time, the internal
s390_iommu_update_trans() already supports mapping/unmapping a range
of pages at once. Take advantage of this by implementing the
map_pages()/unmap_pages() operations instead, allowing users of the
IOMMU driver to map multiple pages in a single call followed by
a single I/O TLB flush if needed.
Reviewed-by: Matthew Rosato <[email protected]>
Signed-off-by: Niklas Schnelle <[email protected]>
---
v5->v6:
- WARN_ON() aperture violation on .unmap_pages (Matt)
drivers/iommu/s390-iommu.c | 48 +++++++++++++++++++++++++-------------
1 file changed, 32 insertions(+), 16 deletions(-)
diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c
index 1524f18f8523..77acfad6f919 100644
--- a/drivers/iommu/s390-iommu.c
+++ b/drivers/iommu/s390-iommu.c
@@ -191,20 +191,15 @@ static void s390_iommu_release_device(struct device *dev)
static int s390_iommu_update_trans(struct s390_domain *s390_domain,
phys_addr_t pa, dma_addr_t dma_addr,
- size_t size, int flags)
+ unsigned long nr_pages, int flags)
{
phys_addr_t page_addr = pa & PAGE_MASK;
dma_addr_t start_dma_addr = dma_addr;
- unsigned long irq_flags, nr_pages, i;
+ unsigned long irq_flags, i;
struct zpci_dev *zdev;
unsigned long *entry;
int rc = 0;
- if (dma_addr < s390_domain->domain.geometry.aperture_start ||
- (dma_addr + size - 1) > s390_domain->domain.geometry.aperture_end)
- return -EINVAL;
-
- nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
if (!nr_pages)
return 0;
@@ -247,11 +242,24 @@ static int s390_iommu_update_trans(struct s390_domain *s390_domain,
return rc;
}
-static int s390_iommu_map(struct iommu_domain *domain, unsigned long iova,
- phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
+static int s390_iommu_map_pages(struct iommu_domain *domain,
+ unsigned long iova, phys_addr_t paddr,
+ size_t pgsize, size_t pgcount,
+ int prot, gfp_t gfp, size_t *mapped)
{
struct s390_domain *s390_domain = to_s390_domain(domain);
int flags = ZPCI_PTE_VALID, rc = 0;
+ size_t size = pgcount << __ffs(pgsize);
+
+ if (pgsize != SZ_4K)
+ return -EINVAL;
+
+ if (iova < s390_domain->domain.geometry.aperture_start ||
+ (iova + size - 1) > s390_domain->domain.geometry.aperture_end)
+ return -EINVAL;
+
+ if (!IS_ALIGNED(iova | paddr, pgsize))
+ return -EINVAL;
if (!(prot & IOMMU_READ))
return -EINVAL;
@@ -260,7 +268,9 @@ static int s390_iommu_map(struct iommu_domain *domain, unsigned long iova,
flags |= ZPCI_TABLE_PROTECTED;
rc = s390_iommu_update_trans(s390_domain, paddr, iova,
- size, flags);
+ pgcount, flags);
+ if (!rc)
+ *mapped = size;
return rc;
}
@@ -296,21 +306,27 @@ static phys_addr_t s390_iommu_iova_to_phys(struct iommu_domain *domain,
return phys;
}
-static size_t s390_iommu_unmap(struct iommu_domain *domain,
- unsigned long iova, size_t size,
- struct iommu_iotlb_gather *gather)
+static size_t s390_iommu_unmap_pages(struct iommu_domain *domain,
+ unsigned long iova,
+ size_t pgsize, size_t pgcount,
+ struct iommu_iotlb_gather *gather)
{
struct s390_domain *s390_domain = to_s390_domain(domain);
+ size_t size = pgcount << __ffs(pgsize);
int flags = ZPCI_PTE_INVALID;
phys_addr_t paddr;
int rc;
+ if (WARN_ON(iova < s390_domain->domain.geometry.aperture_start ||
+ (iova + size - 1) > s390_domain->domain.geometry.aperture_end))
+ return 0;
+
paddr = s390_iommu_iova_to_phys(domain, iova);
if (!paddr)
return 0;
rc = s390_iommu_update_trans(s390_domain, paddr, iova,
- size, flags);
+ pgcount, flags);
if (rc)
return 0;
@@ -356,8 +372,8 @@ static const struct iommu_ops s390_iommu_ops = {
.default_domain_ops = &(const struct iommu_domain_ops) {
.attach_dev = s390_iommu_attach_device,
.detach_dev = s390_iommu_detach_device,
- .map = s390_iommu_map,
- .unmap = s390_iommu_unmap,
+ .map_pages = s390_iommu_map_pages,
+ .unmap_pages = s390_iommu_unmap_pages,
.iova_to_phys = s390_iommu_iova_to_phys,
.free = s390_domain_free,
}
--
2.34.1
The domain->geometry.aperture_end specifies the last valid address;
treat it as such when checking whether a DMA address is valid.
Reviewed-by: Pierre Morel <[email protected]>
Reviewed-by: Matthew Rosato <[email protected]>
Signed-off-by: Niklas Schnelle <[email protected]>
---
drivers/iommu/s390-iommu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c
index 65835a5ca328..a4c6a1a63fef 100644
--- a/drivers/iommu/s390-iommu.c
+++ b/drivers/iommu/s390-iommu.c
@@ -208,7 +208,7 @@ static int s390_iommu_update_trans(struct s390_domain *s390_domain,
int rc = 0;
if (dma_addr < s390_domain->domain.geometry.aperture_start ||
- dma_addr + size > s390_domain->domain.geometry.aperture_end)
+ (dma_addr + size - 1) > s390_domain->domain.geometry.aperture_end)
return -EINVAL;
nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
--
2.34.1
The s390 IOMMU driver currently sets the IOMMU domain's aperture to
match the device-specific DMA address range of the device that is first
attached. This is not ideal. For one, if the domain temporarily has no
devices attached, the aperture could shrink, allowing translations
outside the aperture to remain in the translation tables. It is also
a misuse of the aperture, which should describe what addresses can be
translated, not device-specific limitations.
Instead of misusing the aperture like this, we can create reserved
ranges for the ranges inaccessible to the attached devices, allowing
devices with overlapping ranges to still share an IOMMU domain.
This also significantly simplifies s390_iommu_attach_device() allowing
us to move the aperture check to the beginning of the function and
removing the need to hold the device list's lock to check the aperture.
As we then use the same aperture for all domains and it only depends on
the table properties we can already check zdev->start_dma/end_dma at
probe time and turn the check on attach into a WARN_ON().
Suggested-by: Jason Gunthorpe <[email protected]>
Reviewed-by: Matthew Rosato <[email protected]>
Signed-off-by: Niklas Schnelle <[email protected]>
---
v5->v6:
- Return -EINVAL after WARN_ON() in attach
v4->v5:
- Make the aperture check in attach a WARN_ON() and fail probe if
zdev->start_dma/end_dma doesn't fit in the aperture (Jason)
drivers/iommu/s390-iommu.c | 63 ++++++++++++++++++++++++++------------
1 file changed, 43 insertions(+), 20 deletions(-)
diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c
index 399c31b97f65..65835a5ca328 100644
--- a/drivers/iommu/s390-iommu.c
+++ b/drivers/iommu/s390-iommu.c
@@ -62,6 +62,9 @@ static struct iommu_domain *s390_domain_alloc(unsigned domain_type)
kfree(s390_domain);
return NULL;
}
+ s390_domain->domain.geometry.force_aperture = true;
+ s390_domain->domain.geometry.aperture_start = 0;
+ s390_domain->domain.geometry.aperture_end = ZPCI_TABLE_SIZE_RT - 1;
spin_lock_init(&s390_domain->dma_table_lock);
spin_lock_init(&s390_domain->list_lock);
@@ -102,11 +105,15 @@ static int s390_iommu_attach_device(struct iommu_domain *domain,
struct s390_domain *s390_domain = to_s390_domain(domain);
struct zpci_dev *zdev = to_zpci_dev(dev);
unsigned long flags;
- int cc, rc = 0;
+ int cc;
if (!zdev)
return -ENODEV;
+ if (WARN_ON(domain->geometry.aperture_start > zdev->end_dma ||
+ domain->geometry.aperture_end < zdev->start_dma))
+ return -EINVAL;
+
if (zdev->s390_domain)
__s390_iommu_detach_device(zdev);
else if (zdev->dma_table)
@@ -118,30 +125,14 @@ static int s390_iommu_attach_device(struct iommu_domain *domain,
return -EIO;
zdev->dma_table = s390_domain->dma_table;
- spin_lock_irqsave(&s390_domain->list_lock, flags);
- /* First device defines the DMA range limits */
- if (list_empty(&s390_domain->devices)) {
- domain->geometry.aperture_start = zdev->start_dma;
- domain->geometry.aperture_end = zdev->end_dma;
- domain->geometry.force_aperture = true;
- /* Allow only devices with identical DMA range limits */
- } else if (domain->geometry.aperture_start != zdev->start_dma ||
- domain->geometry.aperture_end != zdev->end_dma) {
- spin_unlock_irqrestore(&s390_domain->list_lock, flags);
- rc = -EINVAL;
- goto out_unregister;
- }
+ zdev->dma_table = s390_domain->dma_table;
zdev->s390_domain = s390_domain;
+
+ spin_lock_irqsave(&s390_domain->list_lock, flags);
list_add(&zdev->iommu_list, &s390_domain->devices);
spin_unlock_irqrestore(&s390_domain->list_lock, flags);
return 0;
-
-out_unregister:
- zpci_unregister_ioat(zdev, 0);
- zdev->dma_table = NULL;
-
- return rc;
}
static void s390_iommu_detach_device(struct iommu_domain *domain,
@@ -155,10 +146,41 @@ static void s390_iommu_detach_device(struct iommu_domain *domain,
zpci_dma_init_device(zdev);
}
+static void s390_iommu_get_resv_regions(struct device *dev,
+ struct list_head *list)
+{
+ struct zpci_dev *zdev = to_zpci_dev(dev);
+ struct iommu_resv_region *region;
+
+ if (zdev->start_dma) {
+ region = iommu_alloc_resv_region(0, zdev->start_dma, 0,
+ IOMMU_RESV_RESERVED);
+ if (!region)
+ return;
+ list_add_tail(&region->list, list);
+ }
+
+ if (zdev->end_dma < ZPCI_TABLE_SIZE_RT - 1) {
+ region = iommu_alloc_resv_region(zdev->end_dma + 1,
+ ZPCI_TABLE_SIZE_RT - zdev->end_dma - 1,
+ 0, IOMMU_RESV_RESERVED);
+ if (!region)
+ return;
+ list_add_tail(&region->list, list);
+ }
+}
+
static struct iommu_device *s390_iommu_probe_device(struct device *dev)
{
struct zpci_dev *zdev = to_zpci_dev(dev);
+ if (zdev->start_dma > zdev->end_dma ||
+ zdev->start_dma > ZPCI_TABLE_SIZE_RT - 1)
+ return ERR_PTR(-EINVAL);
+
+ if (zdev->end_dma > ZPCI_TABLE_SIZE_RT - 1)
+ zdev->end_dma = ZPCI_TABLE_SIZE_RT - 1;
+
return &zdev->iommu_dev;
}
@@ -337,6 +359,7 @@ static const struct iommu_ops s390_iommu_ops = {
.release_device = s390_iommu_release_device,
.device_group = generic_device_group,
.pgsize_bitmap = S390_IOMMU_PGSIZES,
+ .get_resv_regions = s390_iommu_get_resv_regions,
.default_domain_ops = &(const struct iommu_domain_ops) {
.attach_dev = s390_iommu_attach_device,
.detach_dev = s390_iommu_detach_device,
--
2.34.1
The .pgsize_bitmap property of struct iommu_ops is not a page mask but
rather has a bit set for each page size the IOMMU supports. As the
removed comment correctly pointed out, at this moment the code only
supports 4K pages, so simply use SZ_4K here.
Reviewed-by: Matthew Rosato <[email protected]>
Reviewed-by: Jason Gunthorpe <[email protected]>
Signed-off-by: Niklas Schnelle <[email protected]>
---
drivers/iommu/s390-iommu.c | 9 +--------
1 file changed, 1 insertion(+), 8 deletions(-)
diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c
index a4c6a1a63fef..1524f18f8523 100644
--- a/drivers/iommu/s390-iommu.c
+++ b/drivers/iommu/s390-iommu.c
@@ -12,13 +12,6 @@
#include <linux/sizes.h>
#include <asm/pci_dma.h>
-/*
- * Physically contiguous memory regions can be mapped with 4 KiB alignment,
- * we allow all page sizes that are an order of 4KiB (no special large page
- * support so far).
- */
-#define S390_IOMMU_PGSIZES (~0xFFFUL)
-
static const struct iommu_ops s390_iommu_ops;
struct s390_domain {
@@ -358,7 +351,7 @@ static const struct iommu_ops s390_iommu_ops = {
.probe_device = s390_iommu_probe_device,
.release_device = s390_iommu_release_device,
.device_group = generic_device_group,
- .pgsize_bitmap = S390_IOMMU_PGSIZES,
+ .pgsize_bitmap = SZ_4K,
.get_resv_regions = s390_iommu_get_resv_regions,
.default_domain_ops = &(const struct iommu_domain_ops) {
.attach_dev = s390_iommu_attach_device,
--
2.34.1
On 10/7/22 5:50 AM, Niklas Schnelle wrote:
> Since commit fa7e9ecc5e1c ("iommu/s390: Tolerate repeat attach_dev
> calls") we can end up with duplicates in the list of devices attached to
> a domain. This is inefficient and confusing since only one domain can
> actually be in control of the IOMMU translations for a device. Fix this
> by detaching the device from the previous domain, if any, on attach.
> Add a WARN_ON() in case we still have attached devices on freeing the
> domain. While here remove the re-attach on failure dance as it was
> determined to be unlikely to help and may confuse debug and recovery.
>
> Fixes: fa7e9ecc5e1c ("iommu/s390: Tolerate repeat attach_dev calls")
> Signed-off-by: Niklas Schnelle <[email protected]>
Looks good to me now, thanks!
Reviewed-by: Matthew Rosato <[email protected]>
> ---
> v5->v6:
> - Only set zdev->dma_table once zpci_register_ioat() succeeded (Matt)
> v4->v5:
> - Unregister IOAT and set zdev->dma_table on error (Matt)
>
> drivers/iommu/s390-iommu.c | 106 ++++++++++++++++---------------------
> 1 file changed, 45 insertions(+), 61 deletions(-)
>
> diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c
> index c898bcbbce11..96173cfee324 100644
> --- a/drivers/iommu/s390-iommu.c
> +++ b/drivers/iommu/s390-iommu.c
> @@ -79,10 +79,36 @@ static void s390_domain_free(struct iommu_domain *domain)
> {
> struct s390_domain *s390_domain = to_s390_domain(domain);
>
> + WARN_ON(!list_empty(&s390_domain->devices));
> dma_cleanup_tables(s390_domain->dma_table);
> kfree(s390_domain);
> }
>
> +static void __s390_iommu_detach_device(struct zpci_dev *zdev)
> +{
> + struct s390_domain *s390_domain = zdev->s390_domain;
> + struct s390_domain_device *domain_device, *tmp;
> + unsigned long flags;
> +
> + if (!s390_domain)
> + return;
> +
> + spin_lock_irqsave(&s390_domain->list_lock, flags);
> + list_for_each_entry_safe(domain_device, tmp, &s390_domain->devices,
> + list) {
> + if (domain_device->zdev == zdev) {
> + list_del(&domain_device->list);
> + kfree(domain_device);
> + break;
> + }
> + }
> + spin_unlock_irqrestore(&s390_domain->list_lock, flags);
> +
> + zpci_unregister_ioat(zdev, 0);
> + zdev->s390_domain = NULL;
> + zdev->dma_table = NULL;
> +}
> +
> static int s390_iommu_attach_device(struct iommu_domain *domain,
> struct device *dev)
> {
> @@ -90,7 +116,7 @@ static int s390_iommu_attach_device(struct iommu_domain *domain,
> struct zpci_dev *zdev = to_zpci_dev(dev);
> struct s390_domain_device *domain_device;
> unsigned long flags;
> - int cc, rc;
> + int cc, rc = 0;
>
> if (!zdev)
> return -ENODEV;
> @@ -99,24 +125,18 @@ static int s390_iommu_attach_device(struct iommu_domain *domain,
> if (!domain_device)
> return -ENOMEM;
>
> - if (zdev->dma_table && !zdev->s390_domain) {
> - cc = zpci_dma_exit_device(zdev);
> - if (cc) {
> - rc = -EIO;
> - goto out_free;
> - }
> - }
> -
> if (zdev->s390_domain)
> - zpci_unregister_ioat(zdev, 0);
> + __s390_iommu_detach_device(zdev);
> + else if (zdev->dma_table)
> + zpci_dma_exit_device(zdev);
>
> - zdev->dma_table = s390_domain->dma_table;
> cc = zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma,
> - virt_to_phys(zdev->dma_table));
> + virt_to_phys(s390_domain->dma_table));
> if (cc) {
> rc = -EIO;
> - goto out_restore;
> + goto out_free;
> }
> + zdev->dma_table = s390_domain->dma_table;
>
> spin_lock_irqsave(&s390_domain->list_lock, flags);
> /* First device defines the DMA range limits */
> @@ -127,9 +147,9 @@ static int s390_iommu_attach_device(struct iommu_domain *domain,
> /* Allow only devices with identical DMA range limits */
> } else if (domain->geometry.aperture_start != zdev->start_dma ||
> domain->geometry.aperture_end != zdev->end_dma) {
> - rc = -EINVAL;
> spin_unlock_irqrestore(&s390_domain->list_lock, flags);
> - goto out_restore;
> + rc = -EINVAL;
> + goto out_unregister;
> }
> domain_device->zdev = zdev;
> zdev->s390_domain = s390_domain;
> @@ -138,14 +158,9 @@ static int s390_iommu_attach_device(struct iommu_domain *domain,
>
> return 0;
>
> -out_restore:
> - if (!zdev->s390_domain) {
> - zpci_dma_init_device(zdev);
> - } else {
> - zdev->dma_table = zdev->s390_domain->dma_table;
> - zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma,
> - virt_to_phys(zdev->dma_table));
> - }
> +out_unregister:
> + zpci_unregister_ioat(zdev, 0);
> + zdev->dma_table = NULL;
> out_free:
> kfree(domain_device);
>
> @@ -155,32 +170,12 @@ static int s390_iommu_attach_device(struct iommu_domain *domain,
> static void s390_iommu_detach_device(struct iommu_domain *domain,
> struct device *dev)
> {
> - struct s390_domain *s390_domain = to_s390_domain(domain);
> struct zpci_dev *zdev = to_zpci_dev(dev);
> - struct s390_domain_device *domain_device, *tmp;
> - unsigned long flags;
> - int found = 0;
>
> - if (!zdev)
> - return;
> + WARN_ON(zdev->s390_domain != to_s390_domain(domain));
>
> - spin_lock_irqsave(&s390_domain->list_lock, flags);
> - list_for_each_entry_safe(domain_device, tmp, &s390_domain->devices,
> - list) {
> - if (domain_device->zdev == zdev) {
> - list_del(&domain_device->list);
> - kfree(domain_device);
> - found = 1;
> - break;
> - }
> - }
> - spin_unlock_irqrestore(&s390_domain->list_lock, flags);
> -
> - if (found && (zdev->s390_domain == s390_domain)) {
> - zdev->s390_domain = NULL;
> - zpci_unregister_ioat(zdev, 0);
> - zpci_dma_init_device(zdev);
> - }
> + __s390_iommu_detach_device(zdev);
> + zpci_dma_init_device(zdev);
> }
>
> static struct iommu_device *s390_iommu_probe_device(struct device *dev)
> @@ -193,24 +188,13 @@ static struct iommu_device *s390_iommu_probe_device(struct device *dev)
> static void s390_iommu_release_device(struct device *dev)
> {
> struct zpci_dev *zdev = to_zpci_dev(dev);
> - struct iommu_domain *domain;
>
> /*
> - * This is a workaround for a scenario where the IOMMU API common code
> - * "forgets" to call the detach_dev callback: After binding a device
> - * to vfio-pci and completing the VFIO_SET_IOMMU ioctl (which triggers
> - * the attach_dev), removing the device via
> - * "echo 1 > /sys/bus/pci/devices/.../remove" won't trigger detach_dev,
> - * only release_device will be called via the BUS_NOTIFY_REMOVED_DEVICE
> - * notifier.
> - *
> - * So let's call detach_dev from here if it hasn't been called before.
> + * release_device is expected to detach any domain currently attached
> + * to the device, but keep it attached to other devices in the group.
> */
> - if (zdev && zdev->s390_domain) {
> - domain = iommu_get_domain_for_dev(dev);
> - if (domain)
> - s390_iommu_detach_device(domain, dev);
> - }
> + if (zdev)
> + __s390_iommu_detach_device(zdev);
> }
>
> static int s390_iommu_update_trans(struct s390_domain *s390_domain,
On Fri, 2022-10-07 at 11:49 +0200, Niklas Schnelle wrote:
> [...]
>
> *Open Question*:
> Which tree should this go via?
The conflicting commit that removed the bus_next field from struct
zpci_dev has now made it into Linus' tree via the s390 pull. So this
series now applies cleanly on mainline master. Still not sure though
which tree this would best go into.
On Mon, Oct 10, 2022 at 04:54:07PM +0200, Niklas Schnelle wrote:
> On Fri, 2022-10-07 at 11:49 +0200, Niklas Schnelle wrote:
> > [...]
> >
>
> The conflicting commit that removed the bus_next field from struct
> zpci_dev has now made it into Linus' tree via the s390 pull. So this
> series now applies cleanly on mainline master. Still not sure though
> which tree this would best go into.
Arguably it should go through Joerg's iommu tree, since it only touches
the iommu driver.
If you need it on a branch to share with the s390 tree then send Joerg
a PR.
Jason
On Mon, 2022-10-10 at 15:45 -0300, Jason Gunthorpe wrote:
> On Mon, Oct 10, 2022 at 04:54:07PM +0200, Niklas Schnelle wrote:
> > On Fri, 2022-10-07 at 11:49 +0200, Niklas Schnelle wrote:
> > > [...]
> > >
> >
> > The conflicting commit that removed the bus_next field from struct
> > zpci_dev has now made it into Linus' tree via the s390 pull. So this
> > series now applies cleanly on mainline master. Still not sure though
> > which tree this would best go into.
>
> Arguably it should go through Joerg's iommu tree since it is only in
> the iommu driver..
>
> If you need it on a branch to share with the s390 tree then send Joerg
> a PR.
>
> Jason
Ok, makes sense. I don't think I need it on an extra branch, and
whatever is easiest for Joerg is fine. Since all but patch 6 are fixes,
and that one is quite simple, I hope this could maybe still go into
v6.1.
Not sure if Joerg is still waiting on some Acks or R-bs though. I did
remove yours on patches 1, 2, and 3 as there were some changes since
you gave them. I don't think you gave one for patch 4, and patch 6 is new.
I plan on sending further IOMMU improvements and the DMA conversion
based on this, but will just reference it and provide private branches
on git.kernel.org. I think those will target the next merge window at
the earliest, so that should be fine.
Thanks,
Niklas
On Tue, Oct 11, 2022 at 01:03:27PM +0200, Niklas Schnelle wrote:
> Ok makes sense, I don't think I need it on an extra branch and whatever
> is easiest for Joerg is fine. I hope that since all but patch 6 are
> fixes and that one is quite simple that this could maybe still go into
> v6.1.
Oh there is no way for 6.1 at this point.
You will have to respin it for 6.1-rc1 and Joerg usually waits until
rc3 before taking any patches.
But I would post and organize all the things you want for 6.2 next
week.
Jason