This series allows a virtualizer to program the IOMMU in nested stage
mode. This is useful when both the host and the guest expose
an SMMUv3 and a PCI device is assigned to the guest using VFIO.
In this mode, the physical IOMMU must be programmed to translate
the two stages: the one set up by the guest (IOVA -> GPA) and the
one set up by the host VFIO driver as part of the assignment process
(GPA -> HPA).
On Intel, this is traditionally achieved by combining the 2 stages
into a single physical stage. However this relies on the ability
to trap each guest translation structure update, which is possible
by using the VT-d Caching Mode. Unfortunately the ARM SMMUv3 does
not offer a similar mechanism.
However, the ARM SMMUv3 architecture supports 2 physical stages,
which were devised exactly with that use case in mind. Assuming the HW
implements both stages (they are optional), the guest can now use
stage 1 while the host uses stage 2.
This assumes the virtualizer has a means to propagate guest settings
to the host SMMUv3 driver. This series brings that VFIO/IOMMU
infrastructure. The services are:
- bind the guest stage 1 configuration to the stream table entry
- propagate guest TLB invalidations
- bind MSI IOVAs
- propagate faults collected at physical level up to the virtualizer
This series largely reuses the user API and infrastructure originally
devised for SVA/SVM and patches submitted by Jacob, Yi Liu, Tianyu in
[1-2] and Jean-Philippe [3-4].
Best Regards
Eric
This series can be found at:
https://github.com/eauger/linux/tree/v5.0-2stage-v6
References:
[1] [PATCH v5 00/23] IOMMU and VT-d driver support for Shared Virtual
Address (SVA)
https://lwn.net/Articles/754331/
[2] [RFC PATCH 0/8] Shared Virtual Memory virtualization for VT-d
(VFIO part)
https://lists.linuxfoundation.org/pipermail/iommu/2017-April/021475.html
[3] [v2,00/40] Shared Virtual Addressing for the IOMMU
https://patchwork.ozlabs.org/cover/912129/
[4] [PATCH v3 00/10] Shared Virtual Addressing for the IOMMU
https://patchwork.kernel.org/cover/10608299/
History:
v5 -> v6:
- Fix compilation issue when CONFIG_IOMMU_API is unset
v4 -> v5:
- fix bug reported by Vincent: fault handler unregistration now happens in
vfio_pci_release
- IOMMU_FAULT_PERM_* moved outside of the struct definition + small
  uapi changes suggested by Jean-Philippe (except fetch_addr)
- iommu: introduce device fault report API: removed the PRI part.
- see individual logs for more details
- reset the ste abort flag on detach
v3 -> v4:
- took into account Alex's, Jean-Philippe's and Robin's comments on v3
- rework of the smmuv3 driver integration
- add tear down ops for msi binding and PASID table binding
- fix S1 fault propagation
- put fault reporting patches at the beginning of the series following
Jean-Philippe's request
- update of the cache invalidate and fault API uapis
- VFIO fault reporting rework with 2 separate regions and one mmappable
segment for the fault queue
- moved to PATCH
v2 -> v3:
- When registering the S1 MSI binding we now store the device handle. This
addresses Robin's comment about discrimination of devices belonging to
different S1 groups and using different physical MSI doorbells.
- Change the fault reporting API: use VFIO_PCI_DMA_FAULT_IRQ_INDEX to
set the eventfd and expose the faults through an mmappable fault region
v1 -> v2:
- Added the fault reporting capability
- asid properly passed on invalidation (fix assignment of multiple
devices)
- see individual change logs for more info
Eric Auger (13):
iommu: Introduce bind/unbind_guest_msi
vfio: VFIO_IOMMU_BIND/UNBIND_MSI
iommu/smmuv3: Get prepared for nested stage support
iommu/smmuv3: Implement attach/detach_pasid_table
iommu/smmuv3: Implement cache_invalidate
dma-iommu: Implement NESTED_MSI cookie
iommu/smmuv3: Implement bind/unbind_guest_msi
iommu/smmuv3: Report non recoverable faults
vfio-pci: Add a new VFIO_REGION_TYPE_NESTED region type
vfio-pci: Register an iommu fault handler
vfio_pci: Allow to mmap the fault queue
vfio-pci: Add VFIO_PCI_DMA_FAULT_IRQ_INDEX
vfio: Document nested stage control
Jacob Pan (4):
driver core: add per device iommu param
iommu: introduce device fault data
iommu: introduce device fault report API
iommu: Introduce attach/detach_pasid_table API
Jean-Philippe Brucker (2):
iommu/arm-smmu-v3: Link domains and devices
iommu/arm-smmu-v3: Maintain a SID->device structure
Liu, Yi L (3):
iommu: Introduce cache_invalidate API
vfio: VFIO_IOMMU_ATTACH/DETACH_PASID_TABLE
vfio: VFIO_IOMMU_CACHE_INVALIDATE
Documentation/vfio.txt | 83 ++++
drivers/iommu/arm-smmu-v3.c | 581 ++++++++++++++++++++++++++--
drivers/iommu/dma-iommu.c | 145 ++++++-
drivers/iommu/iommu.c | 201 +++++++++-
drivers/vfio/pci/vfio_pci.c | 214 ++++++++++
drivers/vfio/pci/vfio_pci_intrs.c | 19 +
drivers/vfio/pci/vfio_pci_private.h | 18 +
drivers/vfio/pci/vfio_pci_rdwr.c | 73 ++++
drivers/vfio/vfio_iommu_type1.c | 158 ++++++++
include/linux/device.h | 3 +
include/linux/dma-iommu.h | 18 +
include/linux/iommu.h | 137 +++++++
include/uapi/linux/iommu.h | 233 +++++++++++
include/uapi/linux/vfio.h | 102 +++++
14 files changed, 1951 insertions(+), 34 deletions(-)
create mode 100644 include/uapi/linux/iommu.h
--
2.20.1
From: Jacob Pan <[email protected]>
DMA faults can be detected by the IOMMU at the device level. Adding
a pointer to struct device allows the IOMMU subsystem to report
relevant faults back to the device driver for further handling.
For directly assigned devices (or user-space drivers), the guest OS
is responsible for handling and responding to per-device IOMMU
faults. Therefore we need a fault reporting mechanism to propagate
faults beyond the IOMMU subsystem.
There are two other IOMMU data pointers under struct device today;
here we introduce iommu_param as a parent pointer so that all device
IOMMU data can be consolidated there. The idea was suggested by Greg
KH and Joerg. The name iommu_param is chosen since iommu_data is
already used.
Suggested-by: Greg Kroah-Hartman <[email protected]>
Reviewed-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Jacob Pan <[email protected]>
Link: https://lkml.org/lkml/2017/10/6/81
---
include/linux/device.h | 3 +++
1 file changed, 3 insertions(+)
diff --git a/include/linux/device.h b/include/linux/device.h
index b425a7ee04ce..39b4dd1b01f5 100644
--- a/include/linux/device.h
+++ b/include/linux/device.h
@@ -42,6 +42,7 @@ struct iommu_ops;
struct iommu_group;
struct iommu_fwspec;
struct dev_pin_info;
+struct iommu_param;
struct bus_attribute {
struct attribute attr;
@@ -961,6 +962,7 @@ struct dev_links_info {
* device (i.e. the bus driver that discovered the device).
* @iommu_group: IOMMU group the device belongs to.
* @iommu_fwspec: IOMMU-specific properties supplied by firmware.
+ * @iommu_param: Per device generic IOMMU runtime data
*
* @offline_disabled: If set, the device is permanently online.
* @offline: Set after successful invocation of bus type's .offline().
@@ -1054,6 +1056,7 @@ struct device {
void (*release)(struct device *dev);
struct iommu_group *iommu_group;
struct iommu_fwspec *iommu_fwspec;
+ struct iommu_param *iommu_param;
bool offline_disabled:1;
bool offline:1;
--
2.20.1
From: Jacob Pan <[email protected]>
Device faults detected by the IOMMU can be reported outside the
IOMMU subsystem for further processing. This patch introduces
a generic device fault data structure.
The fault can be either an unrecoverable fault or a page request,
also referred to as a recoverable fault.
We only care about non-internal faults that are likely to be
reported to an external subsystem.
Signed-off-by: Jacob Pan <[email protected]>
Signed-off-by: Jean-Philippe Brucker <[email protected]>
Signed-off-by: Liu, Yi L <[email protected]>
Signed-off-by: Ashok Raj <[email protected]>
Signed-off-by: Eric Auger <[email protected]>
---
v4 -> v5:
- simplified struct iommu_fault_event comment
- Moved IOMMU_FAULT_PERM outside of the struct
- Removed IOMMU_FAULT_PERM_INST
- s/IOMMU_FAULT_PAGE_REQUEST_PASID_PRESENT/
IOMMU_FAULT_PAGE_REQUEST_PASID_VALID
v3 -> v4:
- use a union containing either an unrecoverable fault or a page
request message. Move the device private data into the page request
structure. Reshuffle the fields and use flags.
- move fault perm attributes to the uapi
- remove a bunch of iommu_fault_reason enum values that were related
to internal errors
---
include/linux/iommu.h | 44 ++++++++++++++
include/uapi/linux/iommu.h | 115 +++++++++++++++++++++++++++++++++++++
2 files changed, 159 insertions(+)
create mode 100644 include/uapi/linux/iommu.h
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index ffbbc7e39cee..c6f398f7e6e0 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -25,6 +25,7 @@
#include <linux/errno.h>
#include <linux/err.h>
#include <linux/of.h>
+#include <uapi/linux/iommu.h>
#define IOMMU_READ (1 << 0)
#define IOMMU_WRITE (1 << 1)
@@ -48,6 +49,7 @@ struct bus_type;
struct device;
struct iommu_domain;
struct notifier_block;
+struct iommu_fault_event;
/* iommu fault flags */
#define IOMMU_FAULT_READ 0x0
@@ -55,6 +57,7 @@ struct notifier_block;
typedef int (*iommu_fault_handler_t)(struct iommu_domain *,
struct device *, unsigned long, int, void *);
+typedef int (*iommu_dev_fault_handler_t)(struct iommu_fault_event *, void *);
struct iommu_domain_geometry {
dma_addr_t aperture_start; /* First address that can be mapped */
@@ -247,6 +250,46 @@ struct iommu_device {
struct device *dev;
};
+/**
+ * struct iommu_fault_event - Generic fault event
+ *
+ * Can represent recoverable faults such as a page requests or
+ * unrecoverable faults such as DMA or IRQ remapping faults.
+ *
+ * @fault: fault descriptor
+ * @iommu_private: used by the IOMMU driver for storing fault-specific
+ * data. Users should not modify this field before
+ * sending the fault response.
+ */
+struct iommu_fault_event {
+ struct iommu_fault fault;
+ u64 iommu_private;
+};
+
+/**
+ * struct iommu_fault_param - per-device IOMMU fault data
+ * @dev_fault_handler: Callback function to handle IOMMU faults at device level
+ * @data: handler private data
+ *
+ */
+struct iommu_fault_param {
+ iommu_dev_fault_handler_t handler;
+ void *data;
+};
+
+/**
+ * struct iommu_param - collection of per-device IOMMU data
+ *
+ * @fault_param: IOMMU detected device fault reporting data
+ *
+ * TODO: migrate other per device data pointers under iommu_dev_data, e.g.
+ * struct iommu_group *iommu_group;
+ * struct iommu_fwspec *iommu_fwspec;
+ */
+struct iommu_param {
+ struct iommu_fault_param *fault_param;
+};
+
int iommu_device_register(struct iommu_device *iommu);
void iommu_device_unregister(struct iommu_device *iommu);
int iommu_device_sysfs_add(struct iommu_device *iommu,
@@ -422,6 +465,7 @@ struct iommu_ops {};
struct iommu_group {};
struct iommu_fwspec {};
struct iommu_device {};
+struct iommu_fault_param {};
static inline bool iommu_present(struct bus_type *bus)
{
diff --git a/include/uapi/linux/iommu.h b/include/uapi/linux/iommu.h
new file mode 100644
index 000000000000..edcc0dda7993
--- /dev/null
+++ b/include/uapi/linux/iommu.h
@@ -0,0 +1,115 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/*
+ * IOMMU user API definitions
+ */
+
+#ifndef _UAPI_IOMMU_H
+#define _UAPI_IOMMU_H
+
+#include <linux/types.h>
+
+#define IOMMU_FAULT_PERM_WRITE (1 << 0) /* write */
+#define IOMMU_FAULT_PERM_EXEC (1 << 1) /* exec */
+#define IOMMU_FAULT_PERM_PRIV (1 << 2) /* privileged */
+
+/* Generic fault types, can be expanded IRQ remapping fault */
+enum iommu_fault_type {
+ IOMMU_FAULT_DMA_UNRECOV = 1, /* unrecoverable fault */
+ IOMMU_FAULT_PAGE_REQ, /* page request fault */
+};
+
+enum iommu_fault_reason {
+ IOMMU_FAULT_REASON_UNKNOWN = 0,
+
+ /* Could not access the PASID table (fetch caused external abort) */
+ IOMMU_FAULT_REASON_PASID_FETCH,
+
+ /* pasid entry is invalid or has configuration errors */
+ IOMMU_FAULT_REASON_BAD_PASID_ENTRY,
+
+ /*
+ * PASID is out of range (e.g. exceeds the maximum PASID
+ * supported by the IOMMU) or disabled.
+ */
+ IOMMU_FAULT_REASON_PASID_INVALID,
+
+ /*
+ * An external abort occurred fetching (or updating) a translation
+ * table descriptor
+ */
+ IOMMU_FAULT_REASON_WALK_EABT,
+
+ /*
+ * Could not access the page table entry (Bad address),
+ * actual translation fault
+ */
+ IOMMU_FAULT_REASON_PTE_FETCH,
+
+ /* Protection flag check failed */
+ IOMMU_FAULT_REASON_PERMISSION,
+
+ /* access flag check failed */
+ IOMMU_FAULT_REASON_ACCESS,
+
+ /* Output address of a translation stage caused Address Size fault */
+ IOMMU_FAULT_REASON_OOR_ADDRESS,
+};
+
+/**
+ * Unrecoverable fault data
+ * @reason: reason of the fault
+ * @addr: offending page address
+ * @fetch_addr: address that caused a fetch abort, if any
+ * @pasid: contains process address space ID, used in shared virtual memory
+ * @perm: requested permissions used by the incoming transaction
+ * (IOMMU_FAULT_PERM_* values)
+ */
+struct iommu_fault_unrecoverable {
+ __u32 reason; /* enum iommu_fault_reason */
+#define IOMMU_FAULT_UNRECOV_PASID_VALID (1 << 0)
+#define IOMMU_FAULT_UNRECOV_PERM_VALID (1 << 1)
+#define IOMMU_FAULT_UNRECOV_ADDR_VALID (1 << 2)
+#define IOMMU_FAULT_UNRECOV_FETCH_ADDR_VALID (1 << 3)
+ __u32 flags;
+ __u32 pasid;
+ __u32 perm;
+ __u64 addr;
+ __u64 fetch_addr;
+};
+
+/*
+ * Page Request data (aka. recoverable fault data)
+ * @flags : encodes whether the pasid is valid and whether this
+ * is the last page in group
+ * @pasid: pasid
+ * @grpid: page request group index
+ * @perm: requested page permissions (IOMMU_FAULT_PERM_* values)
+ * @addr: page address
+ */
+struct iommu_fault_page_request {
+#define IOMMU_FAULT_PAGE_REQUEST_PASID_VALID (1 << 0)
+#define IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE (1 << 1)
+#define IOMMU_FAULT_PAGE_REQUEST_PRIV_DATA (1 << 2)
+ __u32 flags;
+ __u32 pasid;
+ __u32 grpid;
+ __u32 perm;
+ __u64 addr;
+ __u64 private_data[2];
+};
+
+/**
+ * struct iommu_fault - Generic fault data
+ *
+ * @type contains fault type
+ */
+
+struct iommu_fault {
+ __u32 type; /* enum iommu_fault_type */
+ __u32 reserved;
+ union {
+ struct iommu_fault_unrecoverable event;
+ struct iommu_fault_page_request prm;
+ };
+};
+#endif /* _UAPI_IOMMU_H */
--
2.20.1
From: Jacob Pan <[email protected]>
Traditionally, device-specific faults are detected and handled within
their own device drivers. When an IOMMU is enabled, faults such as
those raised on DMA transactions are detected by the IOMMU, and there
is no generic mechanism to report them back to the in-kernel device
driver or, for assigned devices, to the guest OS.
This patch introduces a registration API for device-specific fault
handlers. It differs from the existing iommu_set_fault_handler/
report_iommu_fault infrastructure in several ways:
- it allows reporting more sophisticated fault events (both
unrecoverable faults and page request faults) thanks to the nature
of the iommu_fault struct
- it is device specific rather than domain specific.
The current iommu_report_device_fault() implementation only handles
the "fire and forget" unrecoverable fault case. Handling of page
request faults and stalled faults will come later.
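As a rough usage sketch (not part of this patch; the my_device_ctx
context below is hypothetical), a consumer such as a device driver or
the VFIO layer would register and unregister a handler like this:

struct my_device_ctx {			/* hypothetical consumer context */
	struct device *dev;
};

static int my_dma_fault_handler(struct iommu_fault_event *evt, void *data)
{
	struct my_device_ctx *ctx = data;

	if (evt->fault.type == IOMMU_FAULT_DMA_UNRECOV)
		dev_warn(ctx->dev, "unrecoverable DMA fault, reason %u\n",
			 evt->fault.event.reason);
	return 0;
}

static int my_enable_fault_reporting(struct my_device_ctx *ctx)
{
	return iommu_register_device_fault_handler(ctx->dev,
						   my_dma_fault_handler, ctx);
}

static void my_disable_fault_reporting(struct my_device_ctx *ctx)
{
	/* fails with -EBUSY while faults are still pending */
	iommu_unregister_device_fault_handler(ctx->dev);
}

Note that iommu_report_device_fault() invokes the handler with the
per-device iommu_param lock held, so the handler should not try to
unregister itself.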
Signed-off-by: Jacob Pan <[email protected]>
Signed-off-by: Ashok Raj <[email protected]>
Signed-off-by: Jean-Philippe Brucker <[email protected]>
Signed-off-by: Eric Auger <[email protected]>
---
v4 -> v5:
- remove stuff related to recoverable faults
---
drivers/iommu/iommu.c | 134 +++++++++++++++++++++++++++++++++++++++++-
include/linux/iommu.h | 36 +++++++++++-
2 files changed, 168 insertions(+), 2 deletions(-)
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 33a982e33716..56d5bf68de53 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -648,6 +648,13 @@ int iommu_group_add_device(struct iommu_group *group, struct device *dev)
goto err_free_name;
}
+ dev->iommu_param = kzalloc(sizeof(*dev->iommu_param), GFP_KERNEL);
+ if (!dev->iommu_param) {
+ ret = -ENOMEM;
+ goto err_free_name;
+ }
+ mutex_init(&dev->iommu_param->lock);
+
kobject_get(group->devices_kobj);
dev->iommu_group = group;
@@ -678,6 +685,7 @@ int iommu_group_add_device(struct iommu_group *group, struct device *dev)
mutex_unlock(&group->mutex);
dev->iommu_group = NULL;
kobject_put(group->devices_kobj);
+ kfree(dev->iommu_param);
err_free_name:
kfree(device->name);
err_remove_link:
@@ -724,7 +732,7 @@ void iommu_group_remove_device(struct device *dev)
sysfs_remove_link(&dev->kobj, "iommu_group");
trace_remove_device_from_group(group->id, dev);
-
+ kfree(dev->iommu_param);
kfree(device->name);
kfree(device);
dev->iommu_group = NULL;
@@ -858,6 +866,130 @@ int iommu_group_unregister_notifier(struct iommu_group *group,
}
EXPORT_SYMBOL_GPL(iommu_group_unregister_notifier);
+/**
+ * iommu_register_device_fault_handler() - Register a device fault handler
+ * @dev: the device
+ * @handler: the fault handler
+ * @data: private data passed as argument to the handler
+ *
+ * When an IOMMU fault event is received, this handler gets called with the
+ * fault event and data as argument.
+ *
+ * Return 0 if the fault handler was installed successfully, or an error.
+ */
+int iommu_register_device_fault_handler(struct device *dev,
+ iommu_dev_fault_handler_t handler,
+ void *data)
+{
+ struct iommu_param *param = dev->iommu_param;
+ int ret = 0;
+
+ /*
+ * Device iommu_param should have been allocated when device is
+ * added to its iommu_group.
+ */
+ if (!param)
+ return -EINVAL;
+
+ mutex_lock(&param->lock);
+ /* Only allow one fault handler registered for each device */
+ if (param->fault_param) {
+ ret = -EBUSY;
+ goto done_unlock;
+ }
+
+ get_device(dev);
+ param->fault_param =
+ kzalloc(sizeof(struct iommu_fault_param), GFP_KERNEL);
+ if (!param->fault_param) {
+ put_device(dev);
+ ret = -ENOMEM;
+ goto done_unlock;
+ }
+ mutex_init(&param->fault_param->lock);
+ param->fault_param->handler = handler;
+ param->fault_param->data = data;
+ INIT_LIST_HEAD(&param->fault_param->faults);
+
+done_unlock:
+ mutex_unlock(&param->lock);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(iommu_register_device_fault_handler);
+
+/**
+ * iommu_unregister_device_fault_handler() - Unregister the device fault handler
+ * @dev: the device
+ *
+ * Remove the device fault handler installed with
+ * iommu_register_device_fault_handler().
+ *
+ * Return 0 on success, or an error.
+ */
+int iommu_unregister_device_fault_handler(struct device *dev)
+{
+ struct iommu_param *param = dev->iommu_param;
+ int ret = 0;
+
+ if (!param)
+ return -EINVAL;
+
+ mutex_lock(&param->lock);
+
+ if (!param->fault_param)
+ goto unlock;
+
+ /* we cannot unregister handler if there are pending faults */
+ if (!list_empty(&param->fault_param->faults)) {
+ ret = -EBUSY;
+ goto unlock;
+ }
+
+ kfree(param->fault_param);
+ param->fault_param = NULL;
+ put_device(dev);
+unlock:
+ mutex_unlock(&param->lock);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(iommu_unregister_device_fault_handler);
+
+
+/**
+ * iommu_report_device_fault() - Report fault event to device
+ * @dev: the device
+ * @evt: fault event data
+ *
+ * Called by IOMMU model specific drivers when fault is detected, typically
+ * in a threaded IRQ handler.
+ *
+ * Return 0 on success, or an error.
+ */
+int iommu_report_device_fault(struct device *dev, struct iommu_fault_event *evt)
+{
+ struct iommu_fault_param *fparam;
+ int ret = 0;
+
+ /* iommu_param is allocated when device is added to group */
+ if (!dev->iommu_param || !evt)
+ return -EINVAL;
+ /* we only report device fault if there is a handler registered */
+ mutex_lock(&dev->iommu_param->lock);
+ if (!dev->iommu_param->fault_param ||
+ !dev->iommu_param->fault_param->handler) {
+ ret = -EINVAL;
+ goto done_unlock;
+ }
+ fparam = dev->iommu_param->fault_param;
+ ret = fparam->handler(evt, fparam->data);
+done_unlock:
+ mutex_unlock(&dev->iommu_param->lock);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(iommu_report_device_fault);
+
/**
* iommu_group_id - Return ID for a group
* @group: the group to ID
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index c6f398f7e6e0..aeb4b615cb44 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -257,11 +257,13 @@ struct iommu_device {
* unrecoverable faults such as DMA or IRQ remapping faults.
*
* @fault: fault descriptor
+ * @list: pending fault event list, used for tracking responses
* @iommu_private: used by the IOMMU driver for storing fault-specific
* data. Users should not modify this field before
* sending the fault response.
*/
struct iommu_fault_event {
+ struct list_head list;
struct iommu_fault fault;
u64 iommu_private;
};
@@ -270,10 +272,13 @@ struct iommu_fault_event {
* struct iommu_fault_param - per-device IOMMU fault data
* @dev_fault_handler: Callback function to handle IOMMU faults at device level
* @data: handler private data
- *
+ * @faults: holds the pending faults which need a response, e.g. page response.
+ * @lock: protect pending PRQ event list
*/
struct iommu_fault_param {
iommu_dev_fault_handler_t handler;
+ struct list_head faults;
+ struct mutex lock;
void *data;
};
@@ -287,6 +292,7 @@ struct iommu_fault_param {
* struct iommu_fwspec *iommu_fwspec;
*/
struct iommu_param {
+ struct mutex lock;
struct iommu_fault_param *fault_param;
};
@@ -379,6 +385,15 @@ extern int iommu_group_register_notifier(struct iommu_group *group,
struct notifier_block *nb);
extern int iommu_group_unregister_notifier(struct iommu_group *group,
struct notifier_block *nb);
+extern int iommu_register_device_fault_handler(struct device *dev,
+ iommu_dev_fault_handler_t handler,
+ void *data);
+
+extern int iommu_unregister_device_fault_handler(struct device *dev);
+
+extern int iommu_report_device_fault(struct device *dev,
+ struct iommu_fault_event *evt);
+
extern int iommu_group_id(struct iommu_group *group);
extern struct iommu_group *iommu_group_get_for_dev(struct device *dev);
extern struct iommu_domain *iommu_group_default_domain(struct iommu_group *);
@@ -659,6 +674,25 @@ static inline int iommu_group_unregister_notifier(struct iommu_group *group,
return 0;
}
+static inline
+int iommu_register_device_fault_handler(struct device *dev,
+ iommu_dev_fault_handler_t handler,
+ void *data)
+{
+ return -ENODEV;
+}
+
+static inline int iommu_unregister_device_fault_handler(struct device *dev)
+{
+ return 0;
+}
+
+static inline
+int iommu_report_device_fault(struct device *dev, struct iommu_fault_event *evt)
+{
+ return -ENODEV;
+}
+
static inline int iommu_group_id(struct iommu_group *group)
{
return -ENODEV;
--
2.20.1
From: "Liu, Yi L" <[email protected]>
In any virtualization use case, when the first translation stage
is "owned" by the guest OS, the host IOMMU driver has no knowledge
of caching structure updates unless the guest invalidation activities
are trapped by the virtualizer and passed down to the host.
Since the invalidation data are obtained from user space and will be
written into the physical IOMMU, we must allow security checks at
various layers. Therefore, a generic invalidation data format is
proposed here; model-specific IOMMU drivers need to convert it into
their own format.
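For illustration, a caller (e.g. the VFIO layer acting on behalf of
the virtualizer) might describe a guest IOTLB invalidation of a range
of pages tagged with an ASID roughly as follows; this is a sketch,
not code from this series:

static int invalidate_guest_range(struct iommu_domain *domain,
				  struct device *dev,
				  u32 asid, u64 iova, u64 granule, u64 nr)
{
	struct iommu_cache_invalidate_info info = {
		.version     = IOMMU_CACHE_INVALIDATE_INFO_VERSION_1,
		.cache       = IOMMU_CACHE_INV_TYPE_IOTLB,
		.granularity = IOMMU_INV_GRANU_ADDR,
	};

	info.addr_info.flags        = IOMMU_INV_ADDR_FLAGS_ARCHID;
	info.addr_info.archid       = asid;	/* e.g. SMMUv3 stage 1 ASID */
	info.addr_info.addr         = iova;
	info.addr_info.granule_size = granule;
	info.addr_info.nb_granules  = nr;

	return iommu_cache_invalidate(domain, dev, &info);
}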
Signed-off-by: Liu, Yi L <[email protected]>
Signed-off-by: Jean-Philippe Brucker <[email protected]>
Signed-off-by: Jacob Pan <[email protected]>
Signed-off-by: Ashok Raj <[email protected]>
Signed-off-by: Eric Auger <[email protected]>
---
v5 -> v6:
- fix merge issue
v3 -> v4:
- full reshape of the API following Alex' comments
v1 -> v2:
- add arch_id field
- renamed tlb_invalidate into cache_invalidate as this API allows
to invalidate context caches on top of IOTLBs
v1:
renamed sva_invalidate into tlb_invalidate and add iommu_ prefix in
header. Commit message reworded.
---
drivers/iommu/iommu.c | 14 ++++++++
include/linux/iommu.h | 15 ++++++++
include/uapi/linux/iommu.h | 71 ++++++++++++++++++++++++++++++++++++++
3 files changed, 100 insertions(+)
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 7d9285cea100..b72e326ddd41 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -1544,6 +1544,20 @@ void iommu_detach_pasid_table(struct iommu_domain *domain)
}
EXPORT_SYMBOL_GPL(iommu_detach_pasid_table);
+int iommu_cache_invalidate(struct iommu_domain *domain, struct device *dev,
+ struct iommu_cache_invalidate_info *inv_info)
+{
+ int ret = 0;
+
+ if (unlikely(!domain->ops->cache_invalidate))
+ return -ENODEV;
+
+ ret = domain->ops->cache_invalidate(domain, dev, inv_info);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(iommu_cache_invalidate);
+
static void __iommu_detach_device(struct iommu_domain *domain,
struct device *dev)
{
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index fb9b7a8de25f..7c7c6bad1420 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -191,6 +191,7 @@ struct iommu_resv_region {
* driver init to device driver init (default no)
* @attach_pasid_table: attach a pasid table
* @detach_pasid_table: detach the pasid table
+ * @cache_invalidate: invalidate translation caches
* @pgsize_bitmap: bitmap of all possible supported page sizes
*/
struct iommu_ops {
@@ -239,6 +240,9 @@ struct iommu_ops {
struct iommu_pasid_table_config *cfg);
void (*detach_pasid_table)(struct iommu_domain *domain);
+ int (*cache_invalidate)(struct iommu_domain *domain, struct device *dev,
+ struct iommu_cache_invalidate_info *inv_info);
+
unsigned long pgsize_bitmap;
};
@@ -349,6 +353,9 @@ extern void iommu_detach_device(struct iommu_domain *domain,
extern int iommu_attach_pasid_table(struct iommu_domain *domain,
struct iommu_pasid_table_config *cfg);
extern void iommu_detach_pasid_table(struct iommu_domain *domain);
+extern int iommu_cache_invalidate(struct iommu_domain *domain,
+ struct device *dev,
+ struct iommu_cache_invalidate_info *inv_info);
extern struct iommu_domain *iommu_get_domain_for_dev(struct device *dev);
extern struct iommu_domain *iommu_get_dma_domain(struct device *dev);
extern int iommu_map(struct iommu_domain *domain, unsigned long iova,
@@ -797,6 +804,14 @@ int iommu_attach_pasid_table(struct iommu_domain *domain,
static inline
void iommu_detach_pasid_table(struct iommu_domain *domain) {}
+static inline int
+iommu_cache_invalidate(struct iommu_domain *domain,
+ struct device *dev,
+ struct iommu_cache_invalidate_info *inv_info)
+{
+ return -ENODEV;
+}
+
#endif /* CONFIG_IOMMU_API */
#ifdef CONFIG_IOMMU_DEBUGFS
diff --git a/include/uapi/linux/iommu.h b/include/uapi/linux/iommu.h
index 532a64075f23..e4c6a447e85a 100644
--- a/include/uapi/linux/iommu.h
+++ b/include/uapi/linux/iommu.h
@@ -159,4 +159,75 @@ struct iommu_pasid_table_config {
};
};
+/* defines the granularity of the invalidation */
+enum iommu_inv_granularity {
+ IOMMU_INV_GRANU_DOMAIN, /* domain-selective invalidation */
+ IOMMU_INV_GRANU_PASID, /* pasid-selective invalidation */
+ IOMMU_INV_GRANU_ADDR, /* page-selective invalidation */
+};
+
+/**
+ * Address Selective Invalidation Structure
+ *
+ * @flags indicates the granularity of the address-selective invalidation
+ * - if PASID bit is set, @pasid field is populated and the invalidation
+ * relates to cache entries tagged with this PASID and matching the
+ * address range.
+ * - if ARCHID bit is set, @archid is populated and the invalidation relates
+ * to cache entries tagged with this architecture specific id and matching
+ * the address range.
+ * - Both PASID and ARCHID can be set as they may tag different caches.
+ * - if neither PASID nor ARCHID is set, global addr invalidation applies
+ * - LEAF flag indicates whether only the leaf PTE caching needs to be
+ * invalidated and other paging structure caches can be preserved.
+ * @pasid: process address space id
+ * @archid: architecture-specific id
+ * @addr: first stage/level input address
+ * @granule_size: page/block size of the mapping in bytes
+ * @nb_granules: number of contiguous granules to be invalidated
+ */
+struct iommu_inv_addr_info {
+#define IOMMU_INV_ADDR_FLAGS_PASID (1 << 0)
+#define IOMMU_INV_ADDR_FLAGS_ARCHID (1 << 1)
+#define IOMMU_INV_ADDR_FLAGS_LEAF (1 << 2)
+ __u32 flags;
+ __u32 archid;
+ __u64 pasid;
+ __u64 addr;
+ __u64 granule_size;
+ __u64 nb_granules;
+};
+
+/**
+ * First level/stage invalidation information
+ * @cache: bitfield that allows to select which caches to invalidate
+ * @granularity: defines the lowest granularity used for the invalidation:
+ * domain > pasid > addr
+ *
+ * Not all the combinations of cache/granularity make sense:
+ *
+ * type | DEV_IOTLB | IOTLB | PASID |
+ * granularity | | | cache |
+ * -------------+---------------+---------------+---------------+
+ * DOMAIN | N/A | Y | Y |
+ * PASID | Y | Y | Y |
+ * ADDR | Y | Y | N/A |
+ */
+struct iommu_cache_invalidate_info {
+#define IOMMU_CACHE_INVALIDATE_INFO_VERSION_1 1
+ __u32 version;
+/* IOMMU paging structure cache */
+#define IOMMU_CACHE_INV_TYPE_IOTLB (1 << 0) /* IOMMU IOTLB */
+#define IOMMU_CACHE_INV_TYPE_DEV_IOTLB (1 << 1) /* Device IOTLB */
+#define IOMMU_CACHE_INV_TYPE_PASID (1 << 2) /* PASID cache */
+ __u8 cache;
+ __u8 granularity;
+ __u8 padding[2];
+ union {
+ __u64 pasid;
+ struct iommu_inv_addr_info addr_info;
+ };
+};
+
+
#endif /* _UAPI_IOMMU_H */
--
2.20.1
From: Jacob Pan <[email protected]>
In the virtualization use case, when a guest is assigned
a PCI host device protected by a virtual IOMMU on the guest,
the physical IOMMU must be programmed to be consistent with
the guest mappings. If the physical IOMMU supports two
translation stages, it makes sense to program the guest mappings
onto the first stage/level (ARM/Intel terminology) while the host
owns stage/level 2.
In that case, guest configuration settings must be trapped
and passed to the physical IOMMU driver.
This patch adds a new API to the IOMMU subsystem that allows
setting and unsetting the PASID table information.
A generic iommu_pasid_table_config struct is introduced in
a new iommu.h uapi header. It is going to be used by the VFIO
user API.
Signed-off-by: Jean-Philippe Brucker <[email protected]>
Signed-off-by: Liu, Yi L <[email protected]>
Signed-off-by: Ashok Raj <[email protected]>
Signed-off-by: Jacob Pan <[email protected]>
Signed-off-by: Eric Auger <[email protected]>
Reviewed-by: Jean-Philippe Brucker <[email protected]>
---
This patch generalizes the API introduced by Jacob & co-authors in
https://lwn.net/Articles/754331/
v4 -> v5:
- no returned valued for dummy definition of iommu_detach_pasid_table
- fix order in comment
- added Jean's R-b
v3 -> v4:
- s/set_pasid_table/attach_pasid_table
- restore detach_pasid_table. Detach can be used on unwind path.
- add padding
- remove @abort
- signature used for config and format
- add comments for fields in the SMMU struct
v2 -> v3:
- replace unbind/bind by set_pasid_table
- move table pointer and pasid bits in the generic part of the struct
v1 -> v2:
- restore the original pasid table name
- remove the struct device * parameter in the API
- reworked iommu_pasid_smmuv3
---
drivers/iommu/iommu.c | 19 +++++++++++++++
include/linux/iommu.h | 19 +++++++++++++++
include/uapi/linux/iommu.h | 47 ++++++++++++++++++++++++++++++++++++++
3 files changed, 85 insertions(+)
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 56d5bf68de53..7d9285cea100 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -1525,6 +1525,25 @@ int iommu_attach_device(struct iommu_domain *domain, struct device *dev)
}
EXPORT_SYMBOL_GPL(iommu_attach_device);
+int iommu_attach_pasid_table(struct iommu_domain *domain,
+ struct iommu_pasid_table_config *cfg)
+{
+ if (unlikely(!domain->ops->attach_pasid_table))
+ return -ENODEV;
+
+ return domain->ops->attach_pasid_table(domain, cfg);
+}
+EXPORT_SYMBOL_GPL(iommu_attach_pasid_table);
+
+void iommu_detach_pasid_table(struct iommu_domain *domain)
+{
+ if (unlikely(!domain->ops->detach_pasid_table))
+ return;
+
+ domain->ops->detach_pasid_table(domain);
+}
+EXPORT_SYMBOL_GPL(iommu_detach_pasid_table);
+
static void __iommu_detach_device(struct iommu_domain *domain,
struct device *dev)
{
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index aeb4b615cb44..fb9b7a8de25f 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -189,6 +189,8 @@ struct iommu_resv_region {
* @of_xlate: add OF master IDs to iommu grouping
* @is_attach_deferred: Check if domain attach should be deferred from iommu
* driver init to device driver init (default no)
+ * @attach_pasid_table: attach a pasid table
+ * @detach_pasid_table: detach the pasid table
* @pgsize_bitmap: bitmap of all possible supported page sizes
*/
struct iommu_ops {
@@ -233,6 +235,10 @@ struct iommu_ops {
int (*of_xlate)(struct device *dev, struct of_phandle_args *args);
bool (*is_attach_deferred)(struct iommu_domain *domain, struct device *dev);
+ int (*attach_pasid_table)(struct iommu_domain *domain,
+ struct iommu_pasid_table_config *cfg);
+ void (*detach_pasid_table)(struct iommu_domain *domain);
+
unsigned long pgsize_bitmap;
};
@@ -340,6 +346,9 @@ extern int iommu_attach_device(struct iommu_domain *domain,
struct device *dev);
extern void iommu_detach_device(struct iommu_domain *domain,
struct device *dev);
+extern int iommu_attach_pasid_table(struct iommu_domain *domain,
+ struct iommu_pasid_table_config *cfg);
+extern void iommu_detach_pasid_table(struct iommu_domain *domain);
extern struct iommu_domain *iommu_get_domain_for_dev(struct device *dev);
extern struct iommu_domain *iommu_get_dma_domain(struct device *dev);
extern int iommu_map(struct iommu_domain *domain, unsigned long iova,
@@ -778,6 +787,16 @@ const struct iommu_ops *iommu_ops_from_fwnode(struct fwnode_handle *fwnode)
return NULL;
}
+static inline
+int iommu_attach_pasid_table(struct iommu_domain *domain,
+ struct iommu_pasid_table_config *cfg)
+{
+ return -ENODEV;
+}
+
+static inline
+void iommu_detach_pasid_table(struct iommu_domain *domain) {}
+
#endif /* CONFIG_IOMMU_API */
#ifdef CONFIG_IOMMU_DEBUGFS
diff --git a/include/uapi/linux/iommu.h b/include/uapi/linux/iommu.h
index edcc0dda7993..532a64075f23 100644
--- a/include/uapi/linux/iommu.h
+++ b/include/uapi/linux/iommu.h
@@ -112,4 +112,51 @@ struct iommu_fault {
struct iommu_fault_page_request prm;
};
};
+
+/**
+ * SMMUv3 Stream Table Entry stage 1 related information
+ * The PASID table is referred to as the context descriptor (CD) table.
+ *
+ * @s1fmt: STE s1fmt (format of the CD table: single CD, linear table
+ or 2-level table)
+ * @s1dss: STE s1dss (specifies the behavior when pasid_bits != 0
+ and no pasid is passed along with the incoming transaction)
+ * Please refer to the smmu 3.x spec (ARM IHI 0070A) for full details
+ */
+struct iommu_pasid_smmuv3 {
+#define PASID_TABLE_SMMUV3_CFG_VERSION_1 1
+ __u32 version;
+ __u8 s1fmt;
+ __u8 s1dss;
+ __u8 padding[2];
+};
+
+/**
+ * PASID table data used to bind guest PASID table to the host IOMMU
+ * Note PASID table corresponds to the Context Table on ARM SMMUv3.
+ *
+ * @version: API version to prepare for future extensions
+ * @format: format of the PASID table
+ * @base_ptr: guest physical address of the PASID table
+ * @pasid_bits: number of PASID bits used in the PASID table
+ * @config: indicates whether the guest translation stage must
+ * be translated, bypassed or aborted.
+ */
+struct iommu_pasid_table_config {
+#define PASID_TABLE_CFG_VERSION_1 1
+ __u32 version;
+#define IOMMU_PASID_FORMAT_SMMUV3 1
+ __u32 format;
+ __u64 base_ptr;
+ __u8 pasid_bits;
+#define IOMMU_PASID_CONFIG_TRANSLATE 1
+#define IOMMU_PASID_CONFIG_BYPASS 2
+#define IOMMU_PASID_CONFIG_ABORT 3
+ __u8 config;
+ __u8 padding[6];
+ union {
+ struct iommu_pasid_smmuv3 smmuv3;
+ };
+};
+
#endif /* _UAPI_IOMMU_H */
--
2.20.1
On ARM, MSIs are translated by the SMMU. An IOVA is allocated
for each MSI doorbell. If both the host and the guest expose an
SMMU, we end up with two different IOVAs, one allocated by each:
the guest allocates an IOVA (gIOVA) to map onto the guest MSI
doorbell (gDB), and the host allocates another IOVA (hIOVA) to map
onto the physical doorbell (hDB).
So we end up with two untied mappings:
         S1             S2
gIOVA    ->    gDB
               hIOVA    ->    hDB
Currently the PCI device is programmed by the host with hIOVA
as the MSI doorbell, so this does not work.
This patch introduces an API to pass gIOVA/gDB to the host so
that gIOVA can be reused by the host instead of allocating
a new IOVA. The goal is to create the following nested mapping:
         S1             S2
gIOVA    ->    gDB     ->    hDB
and to program the PCI device with gIOVA as the MSI doorbell.
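As an illustration of the intended usage (the VFIO plumbing comes
later in this series with VFIO_IOMMU_BIND/UNBIND_MSI), the host could
forward a trapped guest MSI doorbell binding roughly as follows:

static int bind_guest_msi_doorbell(struct iommu_domain *domain,
				   struct device *dev,
				   dma_addr_t giova, phys_addr_t gdb,
				   size_t granule)
{
	/* returns -ENODEV if the IOMMU driver has no bind_guest_msi op */
	return iommu_bind_guest_msi(domain, dev, giova, gdb, granule);
}

static void unbind_guest_msi_doorbell(struct iommu_domain *domain,
				      struct device *dev, dma_addr_t giova)
{
	iommu_unbind_guest_msi(domain, dev, giova);
}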
Signed-off-by: Eric Auger <[email protected]>
---
v5 -> v6:
-fix compile issue when IOMMU_API is not set
v3 -> v4:
- add unbind
v2 -> v3:
- add a struct device handle
---
drivers/iommu/iommu.c | 34 ++++++++++++++++++++++++++++++++++
include/linux/iommu.h | 25 +++++++++++++++++++++++++
2 files changed, 59 insertions(+)
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index b72e326ddd41..0b6569fbfcb7 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -1572,6 +1572,40 @@ static void __iommu_detach_device(struct iommu_domain *domain,
trace_detach_device_from_domain(dev);
}
+/**
+ * iommu_bind_guest_msi - Passes the stage1 GIOVA/GPA mapping of the
+ * virtual doorbell used by the assigned device @dev.
+ *
+ * @domain: iommu domain the stage 1 mapping will be attached to
+ * @dev: assigned device which uses this stage1 mapping
+ * @giova: IOVA allocated by the guest for the doorbell
+ * @gpa: guest physical address of the virtual doorbell
+ * @size: granule size used for the mapping
+ *
+ * The associated IOVA can be reused by the host to create a nested
+ * stage2 binding mapping onto the physical doorbell used by @dev
+ */
+
+int iommu_bind_guest_msi(struct iommu_domain *domain, struct device *dev,
+ dma_addr_t giova, phys_addr_t gpa, size_t size)
+{
+ if (unlikely(!domain->ops->bind_guest_msi))
+ return -ENODEV;
+
+ return domain->ops->bind_guest_msi(domain, dev, giova, gpa, size);
+}
+EXPORT_SYMBOL_GPL(iommu_bind_guest_msi);
+
+void iommu_unbind_guest_msi(struct iommu_domain *domain, struct device *dev,
+ dma_addr_t iova)
+{
+ if (unlikely(!domain->ops->unbind_guest_msi))
+ return;
+
+ domain->ops->unbind_guest_msi(domain, dev, iova);
+}
+EXPORT_SYMBOL_GPL(iommu_unbind_guest_msi);
+
void iommu_detach_device(struct iommu_domain *domain, struct device *dev)
{
struct iommu_group *group;
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 7c7c6bad1420..a4c12d14417c 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -192,6 +192,8 @@ struct iommu_resv_region {
* @attach_pasid_table: attach a pasid table
* @detach_pasid_table: detach the pasid table
* @cache_invalidate: invalidate translation caches
+ * @bind_guest_msi: provides a stage1 giova/gpa MSI doorbell mapping
+ * @unbind_guest_msi: withdraw a stage1 giova/gpa MSI doorbell mapping
* @pgsize_bitmap: bitmap of all possible supported page sizes
*/
struct iommu_ops {
@@ -243,6 +245,11 @@ struct iommu_ops {
int (*cache_invalidate)(struct iommu_domain *domain, struct device *dev,
struct iommu_cache_invalidate_info *inv_info);
+ int (*bind_guest_msi)(struct iommu_domain *domain, struct device *dev,
+ dma_addr_t giova, phys_addr_t gpa, size_t size);
+ void (*unbind_guest_msi)(struct iommu_domain *domain,
+ struct device *dev, dma_addr_t giova);
+
unsigned long pgsize_bitmap;
};
@@ -356,6 +363,11 @@ extern void iommu_detach_pasid_table(struct iommu_domain *domain);
extern int iommu_cache_invalidate(struct iommu_domain *domain,
struct device *dev,
struct iommu_cache_invalidate_info *inv_info);
+extern int iommu_bind_guest_msi(struct iommu_domain *domain, struct device *dev,
+ dma_addr_t giova, phys_addr_t gpa, size_t size);
+extern void iommu_unbind_guest_msi(struct iommu_domain *domain,
+ struct device *dev, dma_addr_t giova);
+
extern struct iommu_domain *iommu_get_domain_for_dev(struct device *dev);
extern struct iommu_domain *iommu_get_dma_domain(struct device *dev);
extern int iommu_map(struct iommu_domain *domain, unsigned long iova,
@@ -812,6 +824,19 @@ iommu_cache_invalidate(struct iommu_domain *domain,
return -ENODEV;
}
+static inline
+int iommu_bind_guest_msi(struct iommu_domain *domain, struct device *dev,
+ dma_addr_t giova, phys_addr_t gpa, size_t size)
+{
+ return -ENODEV;
+}
+static inline
+void iommu_unbind_guest_msi(struct iommu_domain *domain,
+ struct device *dev,
+ dma_addr_t giova)
+{
+}
+
#endif /* CONFIG_IOMMU_API */
#ifdef CONFIG_IOMMU_DEBUGFS
--
2.20.1
From: Jean-Philippe Brucker <[email protected]>
When removing a mapping from a domain, we need to send an invalidation to
all devices that might have stored it in their Address Translation Cache
(ATC). In addition, with SVM, we'll need to invalidate context descriptors
of all devices attached to a live domain.
Maintain a list of devices in each domain, protected by a spinlock. It is
updated every time we attach or detach devices to and from domains.
It needs to be a spinlock because we'll invalidate ATC entries from
within hardirq-safe contexts, but it may be possible to relax the read
side with RCU later.
Signed-off-by: Jean-Philippe Brucker <[email protected]>
---
drivers/iommu/arm-smmu-v3.c | 28 ++++++++++++++++++++++++++++
1 file changed, 28 insertions(+)
diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index d3880010c6cf..ff998c967a0a 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -594,6 +594,11 @@ struct arm_smmu_device {
struct arm_smmu_master_data {
struct arm_smmu_device *smmu;
struct arm_smmu_strtab_ent ste;
+
+ struct arm_smmu_domain *domain;
+ struct list_head list; /* domain->devices */
+
+ struct device *dev;
};
/* SMMU private data for an IOMMU domain */
@@ -618,6 +623,9 @@ struct arm_smmu_domain {
};
struct iommu_domain domain;
+
+ struct list_head devices;
+ spinlock_t devices_lock;
};
struct arm_smmu_option_prop {
@@ -1493,6 +1501,9 @@ static struct iommu_domain *arm_smmu_domain_alloc(unsigned type)
}
mutex_init(&smmu_domain->init_mutex);
+ INIT_LIST_HEAD(&smmu_domain->devices);
+ spin_lock_init(&smmu_domain->devices_lock);
+
return &smmu_domain->domain;
}
@@ -1713,6 +1724,16 @@ static void arm_smmu_detach_dev(struct device *dev)
{
struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
struct arm_smmu_master_data *master = fwspec->iommu_priv;
+ unsigned long flags;
+ struct arm_smmu_domain *smmu_domain = master->domain;
+
+ if (smmu_domain) {
+ spin_lock_irqsave(&smmu_domain->devices_lock, flags);
+ list_del(&master->list);
+ spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
+
+ master->domain = NULL;
+ }
master->ste.assigned = false;
arm_smmu_install_ste_for_dev(fwspec);
@@ -1722,6 +1743,7 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
{
int ret = 0;
struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
+ unsigned long flags;
struct arm_smmu_device *smmu;
struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
struct arm_smmu_master_data *master;
@@ -1757,6 +1779,11 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
}
ste->assigned = true;
+ master->domain = smmu_domain;
+
+ spin_lock_irqsave(&smmu_domain->devices_lock, flags);
+ list_add(&master->list, &smmu_domain->devices);
+ spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
if (smmu_domain->stage == ARM_SMMU_DOMAIN_BYPASS) {
ste->s1_cfg = NULL;
@@ -1883,6 +1910,7 @@ static int arm_smmu_add_device(struct device *dev)
return -ENOMEM;
master->smmu = smmu;
+ master->dev = dev;
fwspec->iommu_priv = master;
}
--
2.20.1
When a stage 1 related fault event is read from the event queue,
let's propagate it to potential external fault listeners, i.e. users
who have registered a fault handler.
Signed-off-by: Eric Auger <[email protected]>
---
v4 -> v5:
- s/IOMMU_FAULT_PERM_INST/IOMMU_FAULT_PERM_EXEC
---
drivers/iommu/arm-smmu-v3.c | 169 +++++++++++++++++++++++++++++++++---
1 file changed, 158 insertions(+), 11 deletions(-)
diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index a4b82c520647..8b2160788c9b 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -167,6 +167,26 @@
#define ARM_SMMU_PRIQ_IRQ_CFG1 0xd8
#define ARM_SMMU_PRIQ_IRQ_CFG2 0xdc
+/* Events */
+#define ARM_SMMU_EVT_F_UUT 0x01
+#define ARM_SMMU_EVT_C_BAD_STREAMID 0x02
+#define ARM_SMMU_EVT_F_STE_FETCH 0x03
+#define ARM_SMMU_EVT_C_BAD_STE 0x04
+#define ARM_SMMU_EVT_F_BAD_ATS_TREQ 0x05
+#define ARM_SMMU_EVT_F_STREAM_DISABLED 0x06
+#define ARM_SMMU_EVT_F_TRANSL_FORBIDDEN 0x07
+#define ARM_SMMU_EVT_C_BAD_SUBSTREAMID 0x08
+#define ARM_SMMU_EVT_F_CD_FETCH 0x09
+#define ARM_SMMU_EVT_C_BAD_CD 0x0a
+#define ARM_SMMU_EVT_F_WALK_EABT 0x0b
+#define ARM_SMMU_EVT_F_TRANSLATION 0x10
+#define ARM_SMMU_EVT_F_ADDR_SIZE 0x11
+#define ARM_SMMU_EVT_F_ACCESS 0x12
+#define ARM_SMMU_EVT_F_PERMISSION 0x13
+#define ARM_SMMU_EVT_F_TLB_CONFLICT 0x20
+#define ARM_SMMU_EVT_F_CFG_CONFLICT 0x21
+#define ARM_SMMU_EVT_E_PAGE_REQUEST 0x24
+
/* Common MSI config fields */
#define MSI_CFG0_ADDR_MASK GENMASK_ULL(51, 2)
#define MSI_CFG2_SH GENMASK(5, 4)
@@ -332,6 +352,15 @@
#define EVTQ_MAX_SZ_SHIFT 7
#define EVTQ_0_ID GENMASK_ULL(7, 0)
+#define EVTQ_0_SSV GENMASK_ULL(11, 11)
+#define EVTQ_0_SUBSTREAMID GENMASK_ULL(31, 12)
+#define EVTQ_0_STREAMID GENMASK_ULL(63, 32)
+#define EVTQ_1_PNU GENMASK_ULL(33, 33)
+#define EVTQ_1_IND GENMASK_ULL(34, 34)
+#define EVTQ_1_RNW GENMASK_ULL(35, 35)
+#define EVTQ_1_S2 GENMASK_ULL(39, 39)
+#define EVTQ_1_CLASS GENMASK_ULL(41, 40)
+#define EVTQ_3_FETCH_ADDR GENMASK_ULL(51, 3)
/* PRI queue */
#define PRIQ_ENT_DWORDS 2
@@ -639,6 +668,64 @@ struct arm_smmu_domain {
spinlock_t devices_lock;
};
+/* fault propagation */
+
+#define IOMMU_FAULT_F_FIELDS (IOMMU_FAULT_UNRECOV_PASID_VALID | \
+ IOMMU_FAULT_UNRECOV_PERM_VALID | \
+ IOMMU_FAULT_UNRECOV_ADDR_VALID)
+
+struct arm_smmu_fault_propagation_data {
+ enum iommu_fault_reason reason;
+ bool s1_check;
+ u32 fields; /* IOMMU_FAULT_UNRECOV_*_VALID bits */
+};
+
+/*
+ * Describes how SMMU faults translate into generic IOMMU faults
+ * and if they need to be reported externally
+ */
+static const struct arm_smmu_fault_propagation_data fault_propagation[] = {
+[ARM_SMMU_EVT_F_UUT] = { },
+[ARM_SMMU_EVT_C_BAD_STREAMID] = { },
+[ARM_SMMU_EVT_F_STE_FETCH] = { },
+[ARM_SMMU_EVT_C_BAD_STE] = { },
+[ARM_SMMU_EVT_F_BAD_ATS_TREQ] = { },
+[ARM_SMMU_EVT_F_STREAM_DISABLED] = { },
+[ARM_SMMU_EVT_F_TRANSL_FORBIDDEN] = { },
+[ARM_SMMU_EVT_C_BAD_SUBSTREAMID] = {IOMMU_FAULT_REASON_PASID_INVALID,
+ false,
+ IOMMU_FAULT_UNRECOV_PASID_VALID
+ },
+[ARM_SMMU_EVT_F_CD_FETCH] = {IOMMU_FAULT_REASON_PASID_FETCH,
+ false,
+ IOMMU_FAULT_UNRECOV_PASID_VALID |
+ IOMMU_FAULT_UNRECOV_FETCH_ADDR_VALID
+ },
+[ARM_SMMU_EVT_C_BAD_CD] = {IOMMU_FAULT_REASON_BAD_PASID_ENTRY,
+ false,
+ IOMMU_FAULT_UNRECOV_PASID_VALID
+ },
+[ARM_SMMU_EVT_F_WALK_EABT] = {IOMMU_FAULT_REASON_WALK_EABT, true,
+ IOMMU_FAULT_F_FIELDS |
+ IOMMU_FAULT_UNRECOV_FETCH_ADDR_VALID
+ },
+[ARM_SMMU_EVT_F_TRANSLATION] = {IOMMU_FAULT_REASON_PTE_FETCH, true,
+ IOMMU_FAULT_F_FIELDS
+ },
+[ARM_SMMU_EVT_F_ADDR_SIZE] = {IOMMU_FAULT_REASON_OOR_ADDRESS, true,
+ IOMMU_FAULT_F_FIELDS
+ },
+[ARM_SMMU_EVT_F_ACCESS] = {IOMMU_FAULT_REASON_ACCESS, true,
+ IOMMU_FAULT_F_FIELDS
+ },
+[ARM_SMMU_EVT_F_PERMISSION] = {IOMMU_FAULT_REASON_PERMISSION, true,
+ IOMMU_FAULT_F_FIELDS
+ },
+[ARM_SMMU_EVT_F_TLB_CONFLICT] = { },
+[ARM_SMMU_EVT_F_CFG_CONFLICT] = { },
+[ARM_SMMU_EVT_E_PAGE_REQUEST] = { },
+};
+
struct arm_smmu_option_prop {
u32 opt;
const char *prop;
@@ -1258,7 +1345,6 @@ static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid)
return 0;
}
-__maybe_unused
static struct arm_smmu_master_data *
arm_smmu_find_master(struct arm_smmu_device *smmu, u32 sid)
{
@@ -1284,24 +1370,85 @@ arm_smmu_find_master(struct arm_smmu_device *smmu, u32 sid)
return master;
}
+/* Populates the record fields according to the input SMMU event */
+static bool arm_smmu_transcode_fault(u64 *evt, u8 type,
+ struct iommu_fault_unrecoverable *record)
+{
+ const struct arm_smmu_fault_propagation_data *data;
+ u32 fields;
+
+ if (type >= ARRAY_SIZE(fault_propagation))
+ return false;
+
+ data = &fault_propagation[type];
+ if (!data->reason)
+ return false;
+
+ fields = data->fields;
+
+ if (data->s1_check & FIELD_GET(EVTQ_1_S2, evt[1]))
+ return false; /* S2 related fault, don't propagate */
+
+ if (fields & IOMMU_FAULT_UNRECOV_PASID_VALID) {
+ if (FIELD_GET(EVTQ_0_SSV, evt[0]))
+ record->pasid = FIELD_GET(EVTQ_0_SUBSTREAMID, evt[0]);
+ else
+ fields &= ~IOMMU_FAULT_UNRECOV_PASID_VALID;
+ }
+ if (fields & IOMMU_FAULT_UNRECOV_PERM_VALID) {
+ if (!FIELD_GET(EVTQ_1_RNW, evt[1]))
+ record->perm |= IOMMU_FAULT_PERM_WRITE;
+ if (FIELD_GET(EVTQ_1_PNU, evt[1]))
+ record->perm |= IOMMU_FAULT_PERM_PRIV;
+ if (FIELD_GET(EVTQ_1_IND, evt[1]))
+ record->perm |= IOMMU_FAULT_PERM_EXEC;
+ }
+ if (fields & IOMMU_FAULT_UNRECOV_ADDR_VALID)
+ record->addr = evt[2];
+
+ if (fields & IOMMU_FAULT_UNRECOV_FETCH_ADDR_VALID)
+ record->fetch_addr = FIELD_GET(EVTQ_3_FETCH_ADDR, evt[3]);
+
+ record->flags = fields;
+ return true;
+}
+
+static void arm_smmu_report_event(struct arm_smmu_device *smmu, u64 *evt)
+{
+ u32 sid = FIELD_GET(EVTQ_0_STREAMID, evt[0]);
+ u8 type = FIELD_GET(EVTQ_0_ID, evt[0]);
+ struct arm_smmu_master_data *master;
+ struct iommu_fault_event event = {};
+ int i;
+
+ master = arm_smmu_find_master(smmu, sid);
+ if (WARN_ON(!master))
+ return;
+
+ event.fault.type = IOMMU_FAULT_DMA_UNRECOV;
+
+ if (arm_smmu_transcode_fault(evt, type, &event.fault.event)) {
+ iommu_report_device_fault(master->dev, &event);
+ return;
+ }
+
+ dev_info(smmu->dev, "event 0x%02x received:\n", type);
+ for (i = 0; i < EVTQ_ENT_DWORDS; ++i) {
+ dev_info(smmu->dev, "\t0x%016llx\n",
+ (unsigned long long)evt[i]);
+ }
+}
+
/* IRQ and event handlers */
static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
{
- int i;
struct arm_smmu_device *smmu = dev;
struct arm_smmu_queue *q = &smmu->evtq.q;
u64 evt[EVTQ_ENT_DWORDS];
do {
- while (!queue_remove_raw(q, evt)) {
- u8 id = FIELD_GET(EVTQ_0_ID, evt[0]);
-
- dev_info(smmu->dev, "event 0x%02x received:\n", id);
- for (i = 0; i < ARRAY_SIZE(evt); ++i)
- dev_info(smmu->dev, "\t0x%016llx\n",
- (unsigned long long)evt[i]);
-
- }
+ while (!queue_remove_raw(q, evt))
+ arm_smmu_report_event(smmu, evt);
/*
* Not much we can do on overflow, so scream and pretend we're
--
2.20.1
On attach_pasid_table() we program the STE stage 1 related info set
by the guest into the actual physical STEs. At a minimum
we need to program the context descriptor GPA and compute
whether stage 1 is translated, bypassed or aborted.
Signed-off-by: Eric Auger <[email protected]>
---
v3 -> v4:
- adapt to changes in iommu_pasid_table_config
- different programming convention at s1_cfg/s2_cfg/ste.abort
v2 -> v3:
- callback now is named set_pasid_table and struct fields
are laid out differently.
v1 -> v2:
- invalidate the STE before changing them
- hold init_mutex
- handle new fields
---
drivers/iommu/arm-smmu-v3.c | 114 ++++++++++++++++++++++++++++++++++++
1 file changed, 114 insertions(+)
diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index e22e944ffc05..e41f61844d78 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -2207,6 +2207,118 @@ static void arm_smmu_put_resv_regions(struct device *dev,
kfree(entry);
}
+static int arm_smmu_attach_pasid_table(struct iommu_domain *domain,
+ struct iommu_pasid_table_config *cfg)
+{
+ struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+ struct arm_smmu_master_data *entry;
+ struct arm_smmu_s1_cfg *s1_cfg;
+ struct arm_smmu_device *smmu;
+ unsigned long flags;
+ int ret = -EINVAL;
+
+ if (cfg->format != IOMMU_PASID_FORMAT_SMMUV3)
+ return -EINVAL;
+
+ mutex_lock(&smmu_domain->init_mutex);
+
+ smmu = smmu_domain->smmu;
+
+ if (!smmu)
+ goto out;
+
+ if (!((smmu->features & ARM_SMMU_FEAT_TRANS_S1) &&
+ (smmu->features & ARM_SMMU_FEAT_TRANS_S2))) {
+ dev_info(smmu_domain->smmu->dev,
+ "does not implement two stages\n");
+ goto out;
+ }
+
+ if (smmu_domain->stage != ARM_SMMU_DOMAIN_NESTED)
+ goto out;
+
+ switch (cfg->config) {
+ case IOMMU_PASID_CONFIG_ABORT:
+ spin_lock_irqsave(&smmu_domain->devices_lock, flags);
+ list_for_each_entry(entry, &smmu_domain->devices, list) {
+ entry->ste.s1_cfg = NULL;
+ entry->ste.abort = true;
+ arm_smmu_install_ste_for_dev(entry->dev->iommu_fwspec);
+ }
+ spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
+ ret = 0;
+ break;
+ case IOMMU_PASID_CONFIG_BYPASS:
+ spin_lock_irqsave(&smmu_domain->devices_lock, flags);
+ list_for_each_entry(entry, &smmu_domain->devices, list) {
+ entry->ste.s1_cfg = NULL;
+ entry->ste.abort = false;
+ arm_smmu_install_ste_for_dev(entry->dev->iommu_fwspec);
+ }
+ spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
+ ret = 0;
+ break;
+ case IOMMU_PASID_CONFIG_TRANSLATE:
+ /* we currently support a single CD */
+ if (cfg->pasid_bits)
+ goto out;
+
+ s1_cfg = &smmu_domain->s1_cfg;
+ s1_cfg->cdptr_dma = cfg->base_ptr;
+
+ spin_lock_irqsave(&smmu_domain->devices_lock, flags);
+ list_for_each_entry(entry, &smmu_domain->devices, list) {
+ entry->ste.s1_cfg = s1_cfg;
+ entry->ste.abort = false;
+ arm_smmu_install_ste_for_dev(entry->dev->iommu_fwspec);
+ }
+ spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
+ ret = 0;
+ break;
+ default:
+ break;
+ }
+out:
+ mutex_unlock(&smmu_domain->init_mutex);
+ return ret;
+}
+
+static void arm_smmu_detach_pasid_table(struct iommu_domain *domain)
+{
+ struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+ struct arm_smmu_master_data *entry;
+ struct arm_smmu_device *smmu;
+ unsigned long flags;
+
+ mutex_lock(&smmu_domain->init_mutex);
+
+ smmu = smmu_domain->smmu;
+
+ if (!smmu)
+ return;
+
+ if (!((smmu->features & ARM_SMMU_FEAT_TRANS_S1) &&
+ (smmu->features & ARM_SMMU_FEAT_TRANS_S2))) {
+ dev_info(smmu_domain->smmu->dev,
+ "does not implement two stages\n");
+ return;
+ }
+
+ if (smmu_domain->stage != ARM_SMMU_DOMAIN_NESTED)
+ return;
+
+ spin_lock_irqsave(&smmu_domain->devices_lock, flags);
+ list_for_each_entry(entry, &smmu_domain->devices, list) {
+ entry->ste.s1_cfg = NULL;
+ entry->ste.abort = true;
+ arm_smmu_install_ste_for_dev(entry->dev->iommu_fwspec);
+ }
+ spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
+
+ memset(&smmu_domain->s1_cfg, 0, sizeof(struct arm_smmu_s1_cfg));
+ mutex_unlock(&smmu_domain->init_mutex);
+}
+
static struct iommu_ops arm_smmu_ops = {
.capable = arm_smmu_capable,
.domain_alloc = arm_smmu_domain_alloc,
@@ -2225,6 +2337,8 @@ static struct iommu_ops arm_smmu_ops = {
.of_xlate = arm_smmu_of_xlate,
.get_resv_regions = arm_smmu_get_resv_regions,
.put_resv_regions = arm_smmu_put_resv_regions,
+ .attach_pasid_table = arm_smmu_attach_pasid_table,
+ .detach_pasid_table = arm_smmu_detach_pasid_table,
.pgsize_bitmap = -1UL, /* Restricted during device attach */
};
--
2.20.1
From: "Liu, Yi L" <[email protected]>
This patch adds the VFIO_IOMMU_ATTACH/DETACH_PASID_TABLE ioctls,
which pass/withdraw the guest virtual IOMMU configuration
to/from the VFIO driver, down to the IOMMU subsystem.
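For illustration, userspace (e.g. a VMM) could use the new ioctls
roughly as below; vmm_attach_guest_cd_table() is a hypothetical
helper and error handling is minimal:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

static int vmm_attach_guest_cd_table(int container, __u64 cd_table_gpa)
{
	struct vfio_iommu_type1_attach_pasid_table attach;

	memset(&attach, 0, sizeof(attach));
	attach.argsz = sizeof(attach);
	attach.config.version = PASID_TABLE_CFG_VERSION_1;
	attach.config.format = IOMMU_PASID_FORMAT_SMMUV3;
	attach.config.base_ptr = cd_table_gpa;
	attach.config.config = IOMMU_PASID_CONFIG_TRANSLATE;
	/* pasid_bits left at 0: a single CD is supported so far */

	return ioctl(container, VFIO_IOMMU_ATTACH_PASID_TABLE, &attach);
}

static void vmm_detach_guest_cd_table(int container)
{
	/* comprehensive teardown of the nested configuration */
	ioctl(container, VFIO_IOMMU_DETACH_PASID_TABLE);
}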
Signed-off-by: Jacob Pan <[email protected]>
Signed-off-by: Liu, Yi L <[email protected]>
Signed-off-by: Eric Auger <[email protected]>
---
v3 -> v4:
- restore ATTACH/DETACH
- add unwind on failure
v2 -> v3:
- s/BIND_PASID_TABLE/SET_PASID_TABLE
v1 -> v2:
- s/BIND_GUEST_STAGE/BIND_PASID_TABLE
- remove the struct device arg
---
drivers/vfio/vfio_iommu_type1.c | 53 +++++++++++++++++++++++++++++++++
include/uapi/linux/vfio.h | 17 +++++++++++
2 files changed, 70 insertions(+)
diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 73652e21efec..222e9199edbf 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -1644,6 +1644,43 @@ static int vfio_domains_have_iommu_cache(struct vfio_iommu *iommu)
return ret;
}
+static void
+vfio_detach_pasid_table(struct vfio_iommu *iommu)
+{
+ struct vfio_domain *d;
+
+ mutex_lock(&iommu->lock);
+
+ list_for_each_entry(d, &iommu->domain_list, next) {
+ iommu_detach_pasid_table(d->domain);
+ }
+ mutex_unlock(&iommu->lock);
+}
+
+static int
+vfio_attach_pasid_table(struct vfio_iommu *iommu,
+ struct vfio_iommu_type1_attach_pasid_table *ustruct)
+{
+ struct vfio_domain *d;
+ int ret = 0;
+
+ mutex_lock(&iommu->lock);
+
+ list_for_each_entry(d, &iommu->domain_list, next) {
+ ret = iommu_attach_pasid_table(d->domain, &ustruct->config);
+ if (ret)
+ goto unwind;
+ }
+ goto unlock;
+unwind:
+ list_for_each_entry_continue_reverse(d, &iommu->domain_list, next) {
+ iommu_detach_pasid_table(d->domain);
+ }
+unlock:
+ mutex_unlock(&iommu->lock);
+ return ret;
+}
+
static long vfio_iommu_type1_ioctl(void *iommu_data,
unsigned int cmd, unsigned long arg)
{
@@ -1714,6 +1751,22 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
return copy_to_user((void __user *)arg, &unmap, minsz) ?
-EFAULT : 0;
+ } else if (cmd == VFIO_IOMMU_ATTACH_PASID_TABLE) {
+ struct vfio_iommu_type1_attach_pasid_table ustruct;
+
+ minsz = offsetofend(struct vfio_iommu_type1_attach_pasid_table,
+ config);
+
+ if (copy_from_user(&ustruct, (void __user *)arg, minsz))
+ return -EFAULT;
+
+ if (ustruct.argsz < minsz || ustruct.flags)
+ return -EINVAL;
+
+ return vfio_attach_pasid_table(iommu, &ustruct);
+ } else if (cmd == VFIO_IOMMU_DETACH_PASID_TABLE) {
+ vfio_detach_pasid_table(iommu);
+ return 0;
}
return -ENOTTY;
diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 02bb7ad6e986..329d378565d9 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -14,6 +14,7 @@
#include <linux/types.h>
#include <linux/ioctl.h>
+#include <linux/iommu.h>
#define VFIO_API_VERSION 0
@@ -759,6 +760,22 @@ struct vfio_iommu_type1_dma_unmap {
#define VFIO_IOMMU_ENABLE _IO(VFIO_TYPE, VFIO_BASE + 15)
#define VFIO_IOMMU_DISABLE _IO(VFIO_TYPE, VFIO_BASE + 16)
+/**
+ * VFIO_IOMMU_ATTACH_PASID_TABLE - _IOWR(VFIO_TYPE, VFIO_BASE + 22,
+ * struct vfio_iommu_type1_attach_pasid_table)
+ *
+ * Passes the PASID table to the host. Calling ATTACH_PASID_TABLE
+ * while a table is already installed is allowed: it replaces the old
+ * table. DETACH does a comprehensive tear down of the nested mode.
+ */
+struct vfio_iommu_type1_attach_pasid_table {
+ __u32 argsz;
+ __u32 flags;
+ struct iommu_pasid_table_config config;
+};
+#define VFIO_IOMMU_ATTACH_PASID_TABLE _IO(VFIO_TYPE, VFIO_BASE + 22)
+#define VFIO_IOMMU_DETACH_PASID_TABLE _IO(VFIO_TYPE, VFIO_BASE + 23)
+
/* -------- Additional API for SPAPR TCE (Server POWERPC) IOMMU -------- */
/*
--
2.20.1
This patch adds two new regions aiming to handle nested mode
translation faults.
The first region (two host kernel pages) is read-only from the
user-space perspective. The first page contains a header
that provides information about the circular buffer located in the
second page. The circular buffer lives in a separate page so that
it can be mmapped.
The max user API version supported by the kernel is returned
through a dedicated fault region capability.
The prod header contains:
- the user API version in use (potentially lower than the one
returned in the capability),
- the offset of the queue within the region,
- the producer index relative to the start of the queue,
- the max number of fault records,
- the size of each record.
The second region is write-only from the user perspective. It
contains the version of the requested fault ABI and the consumer
index, which the userspace updates each time it has consumed
fault records.
The natural order of operation for the userspace is:
- retrieve the highest supported fault ABI version
- set the requested fault ABI version in the consumer region
Until the ABI version has been set by the userspace, the kernel
cannot return a comprehensive set of information inside the
prod header (entry size and number of entries in the fault queue).
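As an illustration, a hedged userspace sketch of the negotiation step (the
device fd and the consumer region offset, obtained through
VFIO_DEVICE_GET_REGION_INFO, are assumptions):

	#include <sys/types.h>
	#include <unistd.h>
	#include <linux/vfio.h>

	/* request fault ABI version 1 by writing the consumer region header */
	static int set_fault_abi_version(int device, off_t cons_offset)
	{
		struct vfio_region_fault_cons cons = {
			.version = 1,	/* must be >= 1 and <= the advertised max */
			.cons = 0,
		};

		if (pwrite(device, &cons, sizeof(cons), cons_offset) != sizeof(cons))
			return -1;
		return 0;
	}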
Signed-off-by: Eric Auger <[email protected]>
---
v4 -> v5
- check cons is not null in vfio_pci_check_cons_fault
v3 -> v4:
- use 2 separate regions, respectively in read and write modes
- add the version capability
---
drivers/vfio/pci/vfio_pci.c | 105 ++++++++++++++++++++++++++++
drivers/vfio/pci/vfio_pci_private.h | 17 +++++
drivers/vfio/pci/vfio_pci_rdwr.c | 73 +++++++++++++++++++
include/uapi/linux/vfio.h | 42 +++++++++++
4 files changed, 237 insertions(+)
diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
index a25659b5a5d1..01b1b4cb8349 100644
--- a/drivers/vfio/pci/vfio_pci.c
+++ b/drivers/vfio/pci/vfio_pci.c
@@ -260,6 +260,106 @@ int vfio_pci_set_power_state(struct vfio_pci_device *vdev, pci_power_t state)
return ret;
}
+void vfio_pci_fault_release(struct vfio_pci_device *vdev,
+ struct vfio_pci_region *region)
+{
+}
+
+static const struct vfio_pci_fault_abi fault_abi_versions[] = {
+ [0] = {
+ .entry_size = sizeof(struct iommu_fault),
+ },
+};
+
+#define NR_FAULT_ABIS ARRAY_SIZE(fault_abi_versions)
+
+static int vfio_pci_fault_prod_add_capability(struct vfio_pci_device *vdev,
+ struct vfio_pci_region *region, struct vfio_info_cap *caps)
+{
+ struct vfio_region_info_cap_fault cap = {
+ .header.id = VFIO_REGION_INFO_CAP_PRODUCER_FAULT,
+ .header.version = 1,
+ .version = NR_FAULT_ABIS,
+ };
+ return vfio_info_add_capability(caps, &cap.header, sizeof(cap));
+}
+
+static const struct vfio_pci_regops vfio_pci_fault_cons_regops = {
+ .rw = vfio_pci_fault_cons_rw,
+ .release = vfio_pci_fault_release,
+};
+
+static const struct vfio_pci_regops vfio_pci_fault_prod_regops = {
+ .rw = vfio_pci_fault_prod_rw,
+ .release = vfio_pci_fault_release,
+ .add_capability = vfio_pci_fault_prod_add_capability,
+};
+
+static int vfio_pci_init_fault_region(struct vfio_pci_device *vdev)
+{
+ struct vfio_region_fault_prod *header;
+ int ret;
+
+ mutex_init(&vdev->fault_queue_lock);
+
+ vdev->fault_pages = kzalloc(3 * PAGE_SIZE, GFP_KERNEL);
+ if (!vdev->fault_pages)
+ return -ENOMEM;
+
+ ret = vfio_pci_register_dev_region(vdev,
+ VFIO_REGION_TYPE_NESTED,
+ VFIO_REGION_SUBTYPE_NESTED_FAULT_PROD,
+ &vfio_pci_fault_prod_regops, 2 * PAGE_SIZE,
+ VFIO_REGION_INFO_FLAG_READ, vdev->fault_pages);
+ if (ret)
+ goto out;
+
+ ret = vfio_pci_register_dev_region(vdev,
+ VFIO_REGION_TYPE_NESTED,
+ VFIO_REGION_SUBTYPE_NESTED_FAULT_CONS,
+ &vfio_pci_fault_cons_regops,
+ sizeof(struct vfio_region_fault_cons),
+ VFIO_REGION_INFO_FLAG_WRITE,
+ vdev->fault_pages + 2 * PAGE_SIZE);
+ if (ret)
+ goto out;
+
+ header = (struct vfio_region_fault_prod *)vdev->fault_pages;
+ header->version = -1;
+ header->offset = PAGE_SIZE;
+ return 0;
+out:
+ kfree(vdev->fault_pages);
+ return ret;
+}
+
+int vfio_pci_check_cons_fault(struct vfio_pci_device *vdev,
+ struct vfio_region_fault_cons *cons_header)
+{
+ struct vfio_region_fault_prod *prod_header =
+ (struct vfio_region_fault_prod *)vdev->fault_pages;
+
+	if (!cons_header->version || cons_header->version > NR_FAULT_ABIS)
+		return -EINVAL;
+
+ if (!vdev->fault_abi) {
+ vdev->fault_abi = cons_header->version;
+ prod_header->entry_size =
+ fault_abi_versions[vdev->fault_abi - 1].entry_size;
+ prod_header->nb_entries = PAGE_SIZE / prod_header->entry_size;
+ return 0;
+ }
+
+ /* Fault ABI is set */
+ if (cons_header->version != vdev->fault_abi)
+ return -EINVAL;
+
+ if (cons_header->cons && cons_header->cons >= prod_header->nb_entries)
+ return -EINVAL;
+
+ return 0;
+}
+
static int vfio_pci_enable(struct vfio_pci_device *vdev)
{
struct pci_dev *pdev = vdev->pdev;
@@ -362,6 +462,10 @@ static int vfio_pci_enable(struct vfio_pci_device *vdev)
}
}
+ ret = vfio_pci_init_fault_region(vdev);
+ if (ret)
+ goto disable_exit;
+
vfio_pci_probe_mmaps(vdev);
return 0;
@@ -1377,6 +1481,7 @@ static void vfio_pci_remove(struct pci_dev *pdev)
vfio_iommu_group_put(pdev->dev.iommu_group, &pdev->dev);
kfree(vdev->region);
+ kfree(vdev->fault_pages);
mutex_destroy(&vdev->ioeventfds_lock);
if (!disable_idle_d3)
diff --git a/drivers/vfio/pci/vfio_pci_private.h b/drivers/vfio/pci/vfio_pci_private.h
index 1812cf22fc4f..8e0a55682d3f 100644
--- a/drivers/vfio/pci/vfio_pci_private.h
+++ b/drivers/vfio/pci/vfio_pci_private.h
@@ -122,9 +122,12 @@ struct vfio_pci_device {
int ioeventfds_nr;
struct eventfd_ctx *err_trigger;
struct eventfd_ctx *req_trigger;
+ struct mutex fault_queue_lock;
+ int fault_abi;
struct list_head dummy_resources_list;
struct mutex ioeventfds_lock;
struct list_head ioeventfds_list;
+ u8 *fault_pages;
};
#define is_intx(vdev) (vdev->irq_type == VFIO_PCI_INTX_IRQ_INDEX)
@@ -153,6 +156,18 @@ extern ssize_t vfio_pci_vga_rw(struct vfio_pci_device *vdev, char __user *buf,
extern long vfio_pci_ioeventfd(struct vfio_pci_device *vdev, loff_t offset,
uint64_t data, int count, int fd);
+struct vfio_pci_fault_abi {
+ u32 entry_size;
+};
+
+extern size_t vfio_pci_fault_cons_rw(struct vfio_pci_device *vdev,
+ char __user *buf, size_t count,
+ loff_t *ppos, bool iswrite);
+
+extern size_t vfio_pci_fault_prod_rw(struct vfio_pci_device *vdev,
+ char __user *buf, size_t count,
+ loff_t *ppos, bool iswrite);
+
extern int vfio_pci_init_perm_bits(void);
extern void vfio_pci_uninit_perm_bits(void);
@@ -166,6 +181,8 @@ extern int vfio_pci_register_dev_region(struct vfio_pci_device *vdev,
extern int vfio_pci_set_power_state(struct vfio_pci_device *vdev,
pci_power_t state);
+extern int vfio_pci_check_cons_fault(struct vfio_pci_device *vdev,
+ struct vfio_region_fault_cons *header);
#ifdef CONFIG_VFIO_PCI_IGD
extern int vfio_pci_igd_init(struct vfio_pci_device *vdev);
diff --git a/drivers/vfio/pci/vfio_pci_rdwr.c b/drivers/vfio/pci/vfio_pci_rdwr.c
index a6029d0a5524..67cd9363f4e7 100644
--- a/drivers/vfio/pci/vfio_pci_rdwr.c
+++ b/drivers/vfio/pci/vfio_pci_rdwr.c
@@ -277,6 +277,79 @@ ssize_t vfio_pci_vga_rw(struct vfio_pci_device *vdev, char __user *buf,
return done;
}
+/* Read-only region */
+size_t vfio_pci_fault_prod_rw(struct vfio_pci_device *vdev, char __user *buf,
+ size_t count, loff_t *ppos, bool iswrite)
+{
+ unsigned int i = VFIO_PCI_OFFSET_TO_INDEX(*ppos) - VFIO_PCI_NUM_REGIONS;
+ void *base = vdev->region[i].data;
+ loff_t pos = *ppos & VFIO_PCI_OFFSET_MASK;
+ int ret = 0;
+
+ if (iswrite)
+ return 0;
+
+ if (!vdev->fault_abi)
+ return -EINVAL;
+
+ if (pos >= vdev->region[i].size)
+ return -EINVAL;
+
+ count = min(count, (size_t)(vdev->region[i].size - pos));
+
+ mutex_lock(&vdev->fault_queue_lock);
+
+ if (copy_to_user(buf, base + pos, count)) {
+ ret = -EFAULT;
+ goto unlock;
+ }
+ *ppos += count;
+ ret = count;
+unlock:
+ mutex_unlock(&vdev->fault_queue_lock);
+ return ret;
+}
+
+
+/* write only */
+size_t vfio_pci_fault_cons_rw(struct vfio_pci_device *vdev, char __user *buf,
+ size_t count, loff_t *ppos, bool iswrite)
+{
+ unsigned int i = VFIO_PCI_OFFSET_TO_INDEX(*ppos) - VFIO_PCI_NUM_REGIONS;
+ void *base = vdev->region[i].data;
+ loff_t pos = *ppos & VFIO_PCI_OFFSET_MASK;
+ struct vfio_region_fault_cons *header;
+ struct vfio_region_fault_cons orig_header =
+ *(struct vfio_region_fault_cons *)base;
+ int ret = 0;
+
+ if (!iswrite)
+ return 0;
+
+ if (pos >= vdev->region[i].size)
+ return -EINVAL;
+
+ count = min(count, (size_t)(vdev->region[i].size - pos));
+
+ mutex_lock(&vdev->fault_queue_lock);
+
+ if (copy_from_user(base + pos, buf, count)) {
+ ret = -EFAULT;
+ goto unlock;
+ }
+ header = (struct vfio_region_fault_cons *)base;
+ ret = vfio_pci_check_cons_fault(vdev, header);
+ if (ret) {
+ *header = orig_header;
+ goto unlock;
+ }
+ *ppos += count;
+ ret = count;
+unlock:
+ mutex_unlock(&vdev->fault_queue_lock);
+ return ret;
+}
+
static int vfio_pci_ioeventfd_handler(void *opaque, void *unused)
{
struct vfio_pci_ioeventfd *ioeventfd = opaque;
diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 6763389b6adc..40b7aec8fefa 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -307,6 +307,10 @@ struct vfio_region_info_cap_type {
#define VFIO_REGION_TYPE_GFX (1)
#define VFIO_REGION_SUBTYPE_GFX_EDID (1)
+#define VFIO_REGION_TYPE_NESTED (2)
+#define VFIO_REGION_SUBTYPE_NESTED_FAULT_PROD (1)
+#define VFIO_REGION_SUBTYPE_NESTED_FAULT_CONS (2)
+
/**
* struct vfio_region_gfx_edid - EDID region layout.
*
@@ -697,6 +701,44 @@ struct vfio_device_ioeventfd {
#define VFIO_DEVICE_IOEVENTFD _IO(VFIO_TYPE, VFIO_BASE + 16)
+
+/*
+ * Capability exposed by the Producer Fault Region
+ * @version: max fault ABI version supported by the kernel
+ */
+#define VFIO_REGION_INFO_CAP_PRODUCER_FAULT 6
+
+struct vfio_region_info_cap_fault {
+ struct vfio_info_cap_header header;
+ __u32 version;
+};
+
+/*
+ * Producer Fault Region (Read-Only from user space perspective)
+ * Contains the fault circular buffer and the producer index
+ * @version: version of the fault record uapi
+ * @nb_entries: max number of fault records the queue can hold
+ * @entry_size: size of each fault record
+ * @offset: offset of the start of the queue
+ * @prod: producer index relative to the start of the queue
+ */
+struct vfio_region_fault_prod {
+ __u32 version;
+ __u32 nb_entries;
+ __u32 entry_size;
+ __u32 offset;
+ __u32 prod;
+};
+
+/*
+ * Consumer Fault Region (Write-Only from the user space perspective)
+ * @version: ABI version requested by the userspace
+ * @cons: consumer index relative to the start of the queue
+ */
+struct vfio_region_fault_cons {
+ __u32 version;
+ __u32 cons;
+};
+
/* -------- API for Type1 VFIO IOMMU -------- */
/**
--
2.20.1
From: "Liu, Yi L" <[email protected]>
When the guest "owns" the stage 1 translation structures, the host
IOMMU driver has no knowledge of caching structure updates unless
the guest invalidation requests are trapped and passed down to the
host.
This patch adds the VFIO_IOMMU_CACHE_INVALIDATE ioctl, which aims
at propagating guest stage 1 IOMMU cache invalidations to the host.
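For illustration, a hedged userspace sketch of one such propagation for an
address-range IOTLB invalidation (the container fd and values are
assumptions; the exact iommu_cache_invalidate_info layout, including any
version field, follows the companion IOMMU uapi patches):

	#include <string.h>
	#include <sys/ioctl.h>
	#include <linux/vfio.h>

	static int forward_iotlb_inv(int container, __u32 asid, __u64 iova,
				     __u64 granule_size, __u64 nb_granules)
	{
		struct vfio_iommu_type1_cache_invalidate inv;

		memset(&inv, 0, sizeof(inv));
		inv.argsz = sizeof(inv);
		inv.flags = 0;
		inv.info.cache = IOMMU_CACHE_INV_TYPE_IOTLB;
		inv.info.granularity = IOMMU_INV_GRANU_ADDR;
		inv.info.addr_info.pasid = asid;
		inv.info.addr_info.addr = iova;
		inv.info.addr_info.granule_size = granule_size;
		inv.info.addr_info.nb_granules = nb_granules;

		return ioctl(container, VFIO_IOMMU_CACHE_INVALIDATE, &inv);
	}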
Signed-off-by: Liu, Yi L <[email protected]>
Signed-off-by: Eric Auger <[email protected]>
---
v2 -> v3:
- introduce vfio_iommu_for_each_dev back in this patch
v1 -> v2:
- s/TLB/CACHE
- remove vfio_iommu_task usage
- commit message rewording
---
drivers/vfio/vfio_iommu_type1.c | 47 +++++++++++++++++++++++++++++++++
include/uapi/linux/vfio.h | 13 +++++++++
2 files changed, 60 insertions(+)
diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 222e9199edbf..12a40b9db6aa 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -113,6 +113,26 @@ struct vfio_regions {
#define IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu) \
(!list_empty(&iommu->domain_list))
+/* iommu->lock must be held */
+static int
+vfio_iommu_for_each_dev(struct vfio_iommu *iommu, void *data,
+ int (*fn)(struct device *, void *))
+{
+ struct vfio_domain *d;
+ struct vfio_group *g;
+ int ret = 0;
+
+ list_for_each_entry(d, &iommu->domain_list, next) {
+ list_for_each_entry(g, &d->group_list, next) {
+ ret = iommu_group_for_each_dev(g->iommu_group,
+ data, fn);
+ if (ret)
+ break;
+ }
+ }
+ return ret;
+}
+
static int put_pfn(unsigned long pfn, int prot);
/*
@@ -1681,6 +1701,15 @@ vfio_attach_pasid_table(struct vfio_iommu *iommu,
return ret;
}
+static int vfio_cache_inv_fn(struct device *dev, void *data)
+{
+ struct vfio_iommu_type1_cache_invalidate *ustruct =
+ (struct vfio_iommu_type1_cache_invalidate *)data;
+ struct iommu_domain *d = iommu_get_domain_for_dev(dev);
+
+ return iommu_cache_invalidate(d, dev, &ustruct->info);
+}
+
static long vfio_iommu_type1_ioctl(void *iommu_data,
unsigned int cmd, unsigned long arg)
{
@@ -1767,6 +1796,24 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
} else if (cmd == VFIO_IOMMU_DETACH_PASID_TABLE) {
vfio_detach_pasid_table(iommu);
return 0;
+ } else if (cmd == VFIO_IOMMU_CACHE_INVALIDATE) {
+ struct vfio_iommu_type1_cache_invalidate ustruct;
+ int ret;
+
+ minsz = offsetofend(struct vfio_iommu_type1_cache_invalidate,
+ info);
+
+ if (copy_from_user(&ustruct, (void __user *)arg, minsz))
+ return -EFAULT;
+
+ if (ustruct.argsz < minsz || ustruct.flags)
+ return -EINVAL;
+
+ mutex_lock(&iommu->lock);
+ ret = vfio_iommu_for_each_dev(iommu, &ustruct,
+ vfio_cache_inv_fn);
+ mutex_unlock(&iommu->lock);
+ return ret;
}
return -ENOTTY;
diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 329d378565d9..29f0ef2d805d 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -776,6 +776,19 @@ struct vfio_iommu_type1_attach_pasid_table {
#define VFIO_IOMMU_ATTACH_PASID_TABLE _IO(VFIO_TYPE, VFIO_BASE + 22)
#define VFIO_IOMMU_DETACH_PASID_TABLE _IO(VFIO_TYPE, VFIO_BASE + 23)
+/**
+ * VFIO_IOMMU_CACHE_INVALIDATE - _IOWR(VFIO_TYPE, VFIO_BASE + 24,
+ * struct vfio_iommu_type1_cache_invalidate)
+ *
+ * Propagate guest IOMMU cache invalidation to the host.
+ */
+struct vfio_iommu_type1_cache_invalidate {
+ __u32 argsz;
+ __u32 flags;
+ struct iommu_cache_invalidate_info info;
+};
+#define VFIO_IOMMU_CACHE_INVALIDATE _IO(VFIO_TYPE, VFIO_BASE + 24)
+
/* -------- Additional API for SPAPR TCE (Server POWERPC) IOMMU -------- */
/*
--
2.20.1
This patch adds the VFIO_IOMMU_BIND/UNBIND_MSI ioctls, which aim
to pass/withdraw the guest MSI binding to/from the host.
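A minimal userspace sketch of the bind side (the container fd, addresses and
granule are illustrative values only):

	#include <sys/ioctl.h>
	#include <linux/vfio.h>

	/* forward one gIOVA -> gDB (guest doorbell GPA) stage 1 MSI binding */
	static int bind_guest_msi(int container, __u64 giova, __u64 gdb, __u64 size)
	{
		struct vfio_iommu_type1_bind_msi bind = {
			.argsz = sizeof(bind),
			.flags = 0,
			.iova  = giova,
			.gpa   = gdb,
			.size  = size,	/* stage 1 granule, e.g. 0x1000 */
		};

		return ioctl(container, VFIO_IOMMU_BIND_MSI, &bind);
	}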
Signed-off-by: Eric Auger <[email protected]>
---
v3 -> v4:
- add UNBIND
- unwind on BIND error
v2 -> v3:
- adapt to new proto of bind_guest_msi
- directly use vfio_iommu_for_each_dev
v1 -> v2:
- s/vfio_iommu_type1_guest_msi_binding/vfio_iommu_type1_bind_guest_msi
---
drivers/vfio/vfio_iommu_type1.c | 58 +++++++++++++++++++++++++++++++++
include/uapi/linux/vfio.h | 29 +++++++++++++++++
2 files changed, 87 insertions(+)
diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 12a40b9db6aa..66513679081b 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -1710,6 +1710,25 @@ static int vfio_cache_inv_fn(struct device *dev, void *data)
return iommu_cache_invalidate(d, dev, &ustruct->info);
}
+static int vfio_bind_msi_fn(struct device *dev, void *data)
+{
+ struct vfio_iommu_type1_bind_msi *ustruct =
+ (struct vfio_iommu_type1_bind_msi *)data;
+ struct iommu_domain *d = iommu_get_domain_for_dev(dev);
+
+ return iommu_bind_guest_msi(d, dev, ustruct->iova,
+ ustruct->gpa, ustruct->size);
+}
+
+static int vfio_unbind_msi_fn(struct device *dev, void *data)
+{
+ dma_addr_t *iova = (dma_addr_t *)data;
+ struct iommu_domain *d = iommu_get_domain_for_dev(dev);
+
+ iommu_unbind_guest_msi(d, dev, *iova);
+ return 0;
+}
+
static long vfio_iommu_type1_ioctl(void *iommu_data,
unsigned int cmd, unsigned long arg)
{
@@ -1814,6 +1833,45 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
vfio_cache_inv_fn);
mutex_unlock(&iommu->lock);
return ret;
+ } else if (cmd == VFIO_IOMMU_BIND_MSI) {
+ struct vfio_iommu_type1_bind_msi ustruct;
+ int ret;
+
+ minsz = offsetofend(struct vfio_iommu_type1_bind_msi,
+ size);
+
+ if (copy_from_user(&ustruct, (void __user *)arg, minsz))
+ return -EFAULT;
+
+ if (ustruct.argsz < minsz || ustruct.flags)
+ return -EINVAL;
+
+ mutex_lock(&iommu->lock);
+ ret = vfio_iommu_for_each_dev(iommu, &ustruct,
+ vfio_bind_msi_fn);
+ if (ret)
+ vfio_iommu_for_each_dev(iommu, &ustruct.iova,
+ vfio_unbind_msi_fn);
+ mutex_unlock(&iommu->lock);
+ return ret;
+ } else if (cmd == VFIO_IOMMU_UNBIND_MSI) {
+ struct vfio_iommu_type1_unbind_msi ustruct;
+ int ret;
+
+ minsz = offsetofend(struct vfio_iommu_type1_unbind_msi,
+ iova);
+
+ if (copy_from_user(&ustruct, (void __user *)arg, minsz))
+ return -EFAULT;
+
+ if (ustruct.argsz < minsz || ustruct.flags)
+ return -EINVAL;
+
+ mutex_lock(&iommu->lock);
+ ret = vfio_iommu_for_each_dev(iommu, &ustruct.iova,
+ vfio_unbind_msi_fn);
+ mutex_unlock(&iommu->lock);
+ return ret;
}
return -ENOTTY;
diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 29f0ef2d805d..6763389b6adc 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -789,6 +789,35 @@ struct vfio_iommu_type1_cache_invalidate {
};
#define VFIO_IOMMU_CACHE_INVALIDATE _IO(VFIO_TYPE, VFIO_BASE + 24)
+/**
+ * VFIO_IOMMU_BIND_MSI - _IOWR(VFIO_TYPE, VFIO_BASE + 25,
+ * struct vfio_iommu_type1_bind_msi)
+ *
+ * Pass a stage 1 MSI doorbell mapping to the host so that the
+ * latter can build a nested stage 2 mapping.
+ */
+struct vfio_iommu_type1_bind_msi {
+ __u32 argsz;
+ __u32 flags;
+ __u64 iova;
+ __u64 gpa;
+ __u64 size;
+};
+#define VFIO_IOMMU_BIND_MSI _IO(VFIO_TYPE, VFIO_BASE + 25)
+
+/**
+ * VFIO_IOMMU_UNBIND_MSI - _IOWR(VFIO_TYPE, VFIO_BASE + 26,
+ * struct vfio_iommu_type1_unbind_msi)
+ *
+ * Unregister an MSI mapping
+ */
+struct vfio_iommu_type1_unbind_msi {
+ __u32 argsz;
+ __u32 flags;
+ __u64 iova;
+};
+#define VFIO_IOMMU_UNBIND_MSI _IO(VFIO_TYPE, VFIO_BASE + 26)
+
/* -------- Additional API for SPAPR TCE (Server POWERPC) IOMMU -------- */
/*
--
2.20.1
To allow nested stage support, we need to store both
stage 1 and stage 2 configurations (and remove the former
union).
A nested setup is characterized by both s1_cfg and s2_cfg
set.
We introduce a new ste.abort field that gets set when the guest
stage 1 configuration is passed down. If s1_cfg is NULL and
ste.abort is set, traffic can't pass. If ste.abort is not set,
S1 is bypassed.
arm_smmu_write_strtab_ent() is modified to write both stage
fields in the STE and deal with the abort field.
In nested mode, only stage 2 is "finalized" as the host does
not own/configure the stage 1 context descriptor; the guest does.
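In condensed form (this only paraphrases the hunk below, it is not
additional code):

	/*
	 *  abort     = (!ste->assigned && disable_bypass) || ste->abort;
	 *  translate = ste->s1_cfg || ste->s2_cfg;
	 *  bypass    = !abort && !translate;
	 *
	 * abort takes precedence; otherwise the STE is written with whichever
	 * of s1_cfg/s2_cfg is present (both set means the nested config), and
	 * bypass is used when neither stage is configured.
	 */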
Signed-off-by: Eric Auger <[email protected]>
---
v4 -> v5:
- reset ste.abort on detach
v3 -> v4:
- s1_cfg.nested_abort and nested_bypass removed.
- s/ste.nested/ste.abort
- arm_smmu_write_strtab_ent modifications with introduction
of local abort, bypass and translate local variables
- comment updated
v1 -> v2:
- invalidate the STE before moving from a live STE config to another
- add the nested_abort and nested_bypass fields
---
drivers/iommu/arm-smmu-v3.c | 35 ++++++++++++++++++++---------------
1 file changed, 20 insertions(+), 15 deletions(-)
diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index 21d027695181..e22e944ffc05 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -211,6 +211,7 @@
#define STRTAB_STE_0_CFG_BYPASS 4
#define STRTAB_STE_0_CFG_S1_TRANS 5
#define STRTAB_STE_0_CFG_S2_TRANS 6
+#define STRTAB_STE_0_CFG_NESTED 7
#define STRTAB_STE_0_S1FMT GENMASK_ULL(5, 4)
#define STRTAB_STE_0_S1FMT_LINEAR 0
@@ -514,6 +515,7 @@ struct arm_smmu_strtab_ent {
* configured according to the domain type.
*/
bool assigned;
+ bool abort;
struct arm_smmu_s1_cfg *s1_cfg;
struct arm_smmu_s2_cfg *s2_cfg;
};
@@ -628,10 +630,8 @@ struct arm_smmu_domain {
bool non_strict;
enum arm_smmu_domain_stage stage;
- union {
- struct arm_smmu_s1_cfg s1_cfg;
- struct arm_smmu_s2_cfg s2_cfg;
- };
+ struct arm_smmu_s1_cfg s1_cfg;
+ struct arm_smmu_s2_cfg s2_cfg;
struct iommu_domain domain;
@@ -1108,12 +1108,13 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_device *smmu, u32 sid,
__le64 *dst, struct arm_smmu_strtab_ent *ste)
{
/*
- * This is hideously complicated, but we only really care about
- * three cases at the moment:
+ * We care about the following transitions:
*
* 1. Invalid (all zero) -> bypass/fault (init)
- * 2. Bypass/fault -> translation/bypass (attach)
- * 3. Translation/bypass -> bypass/fault (detach)
+	 * 2. Bypass/fault -> single-stage translation/bypass (attach)
+	 * 3. Single-stage translation/bypass -> bypass/fault (detach)
+ * 4. S2 -> S1 + S2 (attach_pasid_table)
+ * 5. S1 + S2 -> S2 (detach_pasid_table)
*
* Given that we can't update the STE atomically and the SMMU
* doesn't read the thing in a defined order, that leaves us
@@ -1124,7 +1125,7 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_device *smmu, u32 sid,
* 3. Update Config, sync
*/
u64 val = le64_to_cpu(dst[0]);
- bool ste_live = false;
+ bool abort, bypass, translate, ste_live = false;
struct arm_smmu_cmdq_ent prefetch_cmd = {
.opcode = CMDQ_OP_PREFETCH_CFG,
.prefetch = {
@@ -1138,11 +1139,11 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_device *smmu, u32 sid,
break;
case STRTAB_STE_0_CFG_S1_TRANS:
case STRTAB_STE_0_CFG_S2_TRANS:
+ case STRTAB_STE_0_CFG_NESTED:
ste_live = true;
break;
case STRTAB_STE_0_CFG_ABORT:
- if (disable_bypass)
- break;
+ break;
default:
BUG(); /* STE corruption */
}
@@ -1152,8 +1153,13 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_device *smmu, u32 sid,
val = STRTAB_STE_0_V;
/* Bypass/fault */
- if (!ste->assigned || !(ste->s1_cfg || ste->s2_cfg)) {
- if (!ste->assigned && disable_bypass)
+
+ abort = (!ste->assigned && disable_bypass) || ste->abort;
+ translate = ste->s1_cfg || ste->s2_cfg;
+ bypass = !abort && !translate;
+
+ if (abort || bypass) {
+ if (abort)
val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_ABORT);
else
val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_BYPASS);
@@ -1172,7 +1178,6 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_device *smmu, u32 sid,
}
if (ste->s1_cfg) {
- BUG_ON(ste_live);
dst[1] = cpu_to_le64(
FIELD_PREP(STRTAB_STE_1_S1CIR, STRTAB_STE_1_S1C_CACHE_WBRA) |
FIELD_PREP(STRTAB_STE_1_S1COR, STRTAB_STE_1_S1C_CACHE_WBRA) |
@@ -1191,7 +1196,6 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_device *smmu, u32 sid,
}
if (ste->s2_cfg) {
- BUG_ON(ste_live);
dst[2] = cpu_to_le64(
FIELD_PREP(STRTAB_STE_2_S2VMID, ste->s2_cfg->vmid) |
FIELD_PREP(STRTAB_STE_2_VTCR, ste->s2_cfg->vtcr) |
@@ -1773,6 +1777,7 @@ static void arm_smmu_detach_dev(struct device *dev)
}
master->ste.assigned = false;
+ master->ste.abort = false;
arm_smmu_install_ste_for_dev(fwspec);
}
--
2.20.1
This patch registers a fault handler which records faults in
a circular buffer and then signals an eventfd. This buffer is
exposed within the fault region.
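For reference, a hedged sketch of the matching userspace consumer, assuming
ABI version 1 was negotiated and prod_offset/cons_offset are the region
offsets obtained via VFIO_DEVICE_GET_REGION_INFO (names and error handling
are illustrative):

	#include <sys/types.h>
	#include <unistd.h>
	#include <linux/vfio.h>
	#include <linux/iommu.h>

	static __u32 drain_faults(int device, off_t prod_offset, off_t cons_offset,
				  __u32 cons_idx)
	{
		struct vfio_region_fault_prod prod;
		struct vfio_region_fault_cons cons = { .version = 1 };
		struct iommu_fault record;

		pread(device, &prod, sizeof(prod), prod_offset);

		while (cons_idx != prod.prod) {
			pread(device, &record, sizeof(record),
			      prod_offset + prod.offset + cons_idx * prod.entry_size);
			/* ... inject the fault into the virtual IOMMU ... */
			cons_idx = (cons_idx + 1) % prod.nb_entries;
		}

		/* publish the new consumer index so the kernel can reuse slots */
		cons.cons = cons_idx;
		pwrite(device, &cons, sizeof(cons), cons_offset);
		return cons_idx;
	}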
Signed-off-by: Eric Auger <[email protected]>
---
v3 -> v4:
- move iommu_unregister_device_fault_handler to vfio_pci_release
---
drivers/vfio/pci/vfio_pci.c | 49 +++++++++++++++++++++++++++++
drivers/vfio/pci/vfio_pci_private.h | 1 +
2 files changed, 50 insertions(+)
diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
index 01b1b4cb8349..cf12204486c3 100644
--- a/drivers/vfio/pci/vfio_pci.c
+++ b/drivers/vfio/pci/vfio_pci.c
@@ -29,6 +29,7 @@
#include <linux/vfio.h>
#include <linux/vgaarb.h>
#include <linux/nospec.h>
+#include <linux/circ_buf.h>
#include "vfio_pci_private.h"
@@ -295,6 +296,46 @@ static const struct vfio_pci_regops vfio_pci_fault_prod_regops = {
.add_capability = vfio_pci_fault_prod_add_capability,
};
+int vfio_pci_iommu_dev_fault_handler(struct iommu_fault_event *evt, void *data)
+{
+ struct vfio_pci_device *vdev = (struct vfio_pci_device *) data;
+ struct vfio_region_fault_prod *prod_region =
+ (struct vfio_region_fault_prod *)vdev->fault_pages;
+ struct vfio_region_fault_cons *cons_region =
+ (struct vfio_region_fault_cons *)(vdev->fault_pages + 2 * PAGE_SIZE);
+ struct iommu_fault *new =
+ (struct iommu_fault *)(vdev->fault_pages + prod_region->offset +
+ prod_region->prod * prod_region->entry_size);
+ int prod, cons, size;
+
+ mutex_lock(&vdev->fault_queue_lock);
+
+ if (!vdev->fault_abi)
+ goto unlock;
+
+ prod = prod_region->prod;
+ cons = cons_region->cons;
+ size = prod_region->nb_entries;
+
+ if (CIRC_SPACE(prod, cons, size) < 1)
+ goto unlock;
+
+ *new = evt->fault;
+ prod = (prod + 1) % size;
+ prod_region->prod = prod;
+ mutex_unlock(&vdev->fault_queue_lock);
+
+ mutex_lock(&vdev->igate);
+ if (vdev->dma_fault_trigger)
+ eventfd_signal(vdev->dma_fault_trigger, 1);
+ mutex_unlock(&vdev->igate);
+ return 0;
+
+unlock:
+ mutex_unlock(&vdev->fault_queue_lock);
+ return -EINVAL;
+}
+
static int vfio_pci_init_fault_region(struct vfio_pci_device *vdev)
{
struct vfio_region_fault_prod *header;
@@ -327,6 +368,13 @@ static int vfio_pci_init_fault_region(struct vfio_pci_device *vdev)
header = (struct vfio_region_fault_prod *)vdev->fault_pages;
header->version = -1;
header->offset = PAGE_SIZE;
+
+ ret = iommu_register_device_fault_handler(&vdev->pdev->dev,
+ vfio_pci_iommu_dev_fault_handler,
+ vdev);
+ if (ret)
+ goto out;
+
return 0;
out:
kfree(vdev->fault_pages);
@@ -574,6 +622,7 @@ static void vfio_pci_release(void *device_data)
if (!(--vdev->refcnt)) {
vfio_spapr_pci_eeh_release(vdev->pdev);
vfio_pci_disable(vdev);
+ iommu_unregister_device_fault_handler(&vdev->pdev->dev);
}
mutex_unlock(&vdev->reflck->lock);
diff --git a/drivers/vfio/pci/vfio_pci_private.h b/drivers/vfio/pci/vfio_pci_private.h
index 8e0a55682d3f..a9276926f008 100644
--- a/drivers/vfio/pci/vfio_pci_private.h
+++ b/drivers/vfio/pci/vfio_pci_private.h
@@ -122,6 +122,7 @@ struct vfio_pci_device {
int ioeventfds_nr;
struct eventfd_ctx *err_trigger;
struct eventfd_ctx *req_trigger;
+ struct eventfd_ctx *dma_fault_trigger;
struct mutex fault_queue_lock;
int fault_abi;
struct list_head dummy_resources_list;
--
2.20.1
From: Jean-Philippe Brucker <[email protected]>
When handling faults from the event or PRI queue, we need to find the
struct device associated with a SID. Add an rb_tree to keep track of SIDs.
Signed-off-by: Jean-Philippe Brucker <[email protected]>
---
drivers/iommu/arm-smmu-v3.c | 136 ++++++++++++++++++++++++++++++++++--
1 file changed, 132 insertions(+), 4 deletions(-)
diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index ff998c967a0a..21d027695181 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -588,6 +588,16 @@ struct arm_smmu_device {
/* IOMMU core code handle */
struct iommu_device iommu;
+
+ struct rb_root streams;
+ struct mutex streams_mutex;
+
+};
+
+struct arm_smmu_stream {
+ u32 id;
+ struct arm_smmu_master_data *master;
+ struct rb_node node;
};
/* SMMU private data for each master */
@@ -597,6 +607,7 @@ struct arm_smmu_master_data {
struct arm_smmu_domain *domain;
struct list_head list; /* domain->devices */
+ struct arm_smmu_stream *streams;
struct device *dev;
};
@@ -1243,6 +1254,32 @@ static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid)
return 0;
}
+__maybe_unused
+static struct arm_smmu_master_data *
+arm_smmu_find_master(struct arm_smmu_device *smmu, u32 sid)
+{
+ struct rb_node *node;
+ struct arm_smmu_stream *stream;
+ struct arm_smmu_master_data *master = NULL;
+
+ mutex_lock(&smmu->streams_mutex);
+ node = smmu->streams.rb_node;
+ while (node) {
+ stream = rb_entry(node, struct arm_smmu_stream, node);
+ if (stream->id < sid) {
+ node = node->rb_right;
+ } else if (stream->id > sid) {
+ node = node->rb_left;
+ } else {
+ master = stream->master;
+ break;
+ }
+ }
+ mutex_unlock(&smmu->streams_mutex);
+
+ return master;
+}
+
/* IRQ and event handlers */
static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
{
@@ -1881,6 +1918,71 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
return sid < limit;
}
+static int arm_smmu_insert_master(struct arm_smmu_device *smmu,
+ struct arm_smmu_master_data *master)
+{
+ int i;
+ int ret = 0;
+ struct arm_smmu_stream *new_stream, *cur_stream;
+ struct rb_node **new_node, *parent_node = NULL;
+ struct iommu_fwspec *fwspec = master->dev->iommu_fwspec;
+
+ master->streams = kcalloc(fwspec->num_ids,
+ sizeof(struct arm_smmu_stream), GFP_KERNEL);
+ if (!master->streams)
+ return -ENOMEM;
+
+ mutex_lock(&smmu->streams_mutex);
+ for (i = 0; i < fwspec->num_ids && !ret; i++) {
+ new_stream = &master->streams[i];
+ new_stream->id = fwspec->ids[i];
+ new_stream->master = master;
+
+ new_node = &(smmu->streams.rb_node);
+ while (*new_node) {
+ cur_stream = rb_entry(*new_node, struct arm_smmu_stream,
+ node);
+ parent_node = *new_node;
+ if (cur_stream->id > new_stream->id) {
+ new_node = &((*new_node)->rb_left);
+ } else if (cur_stream->id < new_stream->id) {
+ new_node = &((*new_node)->rb_right);
+ } else {
+ dev_warn(master->dev,
+ "stream %u already in tree\n",
+ cur_stream->id);
+ ret = -EINVAL;
+ break;
+ }
+ }
+
+ if (!ret) {
+ rb_link_node(&new_stream->node, parent_node, new_node);
+ rb_insert_color(&new_stream->node, &smmu->streams);
+ }
+ }
+ mutex_unlock(&smmu->streams_mutex);
+
+ return ret;
+}
+
+static void arm_smmu_remove_master(struct arm_smmu_device *smmu,
+ struct arm_smmu_master_data *master)
+{
+ int i;
+ struct iommu_fwspec *fwspec = master->dev->iommu_fwspec;
+
+ if (!master->streams)
+ return;
+
+ mutex_lock(&smmu->streams_mutex);
+ for (i = 0; i < fwspec->num_ids; i++)
+ rb_erase(&master->streams[i].node, &smmu->streams);
+ mutex_unlock(&smmu->streams_mutex);
+
+ kfree(master->streams);
+}
+
static struct iommu_ops arm_smmu_ops;
static int arm_smmu_add_device(struct device *dev)
@@ -1929,13 +2031,35 @@ static int arm_smmu_add_device(struct device *dev)
}
}
+ ret = iommu_device_link(&smmu->iommu, dev);
+ if (ret)
+ goto err_free_master;
+
+ ret = arm_smmu_insert_master(smmu, master);
+ if (ret)
+ goto err_unlink;
+
group = iommu_group_get_for_dev(dev);
- if (!IS_ERR(group)) {
- iommu_group_put(group);
- iommu_device_link(&smmu->iommu, dev);
+ if (IS_ERR(group)) {
+ ret = PTR_ERR(group);
+ goto err_remove_master;
}
- return PTR_ERR_OR_ZERO(group);
+ iommu_group_put(group);
+
+ return 0;
+
+err_remove_master:
+ arm_smmu_remove_master(smmu, master);
+
+err_unlink:
+ iommu_device_unlink(&smmu->iommu, dev);
+
+err_free_master:
+ kfree(master);
+ fwspec->iommu_priv = NULL;
+
+ return ret;
}
static void arm_smmu_remove_device(struct device *dev)
@@ -1952,6 +2076,7 @@ static void arm_smmu_remove_device(struct device *dev)
if (master && master->ste.assigned)
arm_smmu_detach_dev(dev);
iommu_group_remove_device(dev);
+ arm_smmu_remove_master(smmu, master);
iommu_device_unlink(&smmu->iommu, dev);
kfree(master);
iommu_fwspec_free(dev);
@@ -2265,6 +2390,9 @@ static int arm_smmu_init_structures(struct arm_smmu_device *smmu)
{
int ret;
+ mutex_init(&smmu->streams_mutex);
+ smmu->streams = RB_ROOT;
+
ret = arm_smmu_init_queues(smmu);
if (ret)
return ret;
--
2.20.1
The Producer Fault region contains the fault queue in its second page.
There is a benefit in letting the userspace mmap this area. So let's
expose this mmappable area through a sparse mmap entry and implement
the mmap operation.
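For illustration, a userspace mapping of the queue page could look as
follows (reg_info.offset is the producer fault region offset returned by
VFIO_DEVICE_GET_REGION_INFO; the host page size is assumed to match):

	#include <sys/mman.h>
	#include <unistd.h>

	/* only the second page of the producer region is advertised as mmappable */
	static void *map_fault_queue(int device, off_t region_offset)
	{
		long psz = sysconf(_SC_PAGESIZE);

		return mmap(NULL, psz, PROT_READ, MAP_SHARED,
			    device, region_offset + psz);
	}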
Signed-off-by: Eric Auger <[email protected]>
---
drivers/vfio/pci/vfio_pci.c | 61 +++++++++++++++++++++++++++++++++++--
1 file changed, 59 insertions(+), 2 deletions(-)
diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
index cf12204486c3..8c895ece4750 100644
--- a/drivers/vfio/pci/vfio_pci.c
+++ b/drivers/vfio/pci/vfio_pci.c
@@ -274,15 +274,70 @@ static const struct vfio_pci_fault_abi fault_abi_versions[] = {
#define NR_FAULT_ABIS ARRAY_SIZE(fault_abi_versions)
+static int vfio_pci_fault_mmap(struct vfio_pci_device *vdev,
+ struct vfio_pci_region *region,
+ struct vm_area_struct *vma)
+{
+ u64 phys_len, req_len, pgoff, req_start;
+ unsigned long long addr;
+	unsigned int index;
+	int ret;
+
+ index = vma->vm_pgoff >> (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT);
+
+ phys_len = region->size;
+
+ req_len = vma->vm_end - vma->vm_start;
+ pgoff = vma->vm_pgoff &
+ ((1U << (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT)) - 1);
+ req_start = pgoff << PAGE_SHIFT;
+
+ /* only the second page of the producer fault region is mmappable */
+ if (req_start < PAGE_SIZE)
+ return -EINVAL;
+
+ if (req_start + req_len > phys_len)
+ return -EINVAL;
+
+ addr = virt_to_phys(vdev->fault_pages);
+ vma->vm_private_data = vdev;
+ vma->vm_pgoff = (addr >> PAGE_SHIFT) + pgoff;
+
+ ret = remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
+ req_len, vma->vm_page_prot);
+ return ret;
+}
+
static int vfio_pci_fault_prod_add_capability(struct vfio_pci_device *vdev,
struct vfio_pci_region *region, struct vfio_info_cap *caps)
{
+ struct vfio_region_info_cap_sparse_mmap *sparse = NULL;
struct vfio_region_info_cap_fault cap = {
.header.id = VFIO_REGION_INFO_CAP_PRODUCER_FAULT,
.header.version = 1,
.version = NR_FAULT_ABIS,
};
- return vfio_info_add_capability(caps, &cap.header, sizeof(cap));
+ size_t size = sizeof(*sparse) + sizeof(*sparse->areas);
+ int ret;
+
+ ret = vfio_info_add_capability(caps, &cap.header, sizeof(cap));
+ if (ret)
+ return ret;
+
+ sparse = kzalloc(size, GFP_KERNEL);
+ if (!sparse)
+ return -ENOMEM;
+
+ sparse->header.id = VFIO_REGION_INFO_CAP_SPARSE_MMAP;
+ sparse->header.version = 1;
+ sparse->nr_areas = 1;
+ sparse->areas[0].offset = PAGE_SIZE;
+ sparse->areas[0].size = PAGE_SIZE;
+
+ ret = vfio_info_add_capability(caps, &sparse->header, size);
+ if (ret)
+ kfree(sparse);
+
+ return ret;
}
static const struct vfio_pci_regops vfio_pci_fault_cons_regops = {
@@ -293,6 +348,7 @@ static const struct vfio_pci_regops vfio_pci_fault_cons_regops = {
static const struct vfio_pci_regops vfio_pci_fault_prod_regops = {
.rw = vfio_pci_fault_prod_rw,
.release = vfio_pci_fault_release,
+ .mmap = vfio_pci_fault_mmap,
.add_capability = vfio_pci_fault_prod_add_capability,
};
@@ -351,7 +407,8 @@ static int vfio_pci_init_fault_region(struct vfio_pci_device *vdev)
VFIO_REGION_TYPE_NESTED,
VFIO_REGION_SUBTYPE_NESTED_FAULT_PROD,
&vfio_pci_fault_prod_regops, 2 * PAGE_SIZE,
- VFIO_REGION_INFO_FLAG_READ, vdev->fault_pages);
+ VFIO_REGION_INFO_FLAG_READ | VFIO_REGION_INFO_FLAG_MMAP,
+ vdev->fault_pages);
if (ret)
goto out;
--
2.20.1
Add a new VFIO_PCI_DMA_FAULT_IRQ_INDEX index. This allows userspace to
set/unset an eventfd that will be triggered when DMA translation
faults are detected at physical level while the nested mode is in use.
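A hedged userspace sketch of wiring the eventfd (device fd and error
handling are illustrative):

	#include <string.h>
	#include <sys/eventfd.h>
	#include <sys/ioctl.h>
	#include <linux/vfio.h>

	static int set_dma_fault_eventfd(int device)
	{
		char buf[sizeof(struct vfio_irq_set) + sizeof(int)];
		struct vfio_irq_set *irq_set = (struct vfio_irq_set *)buf;
		int efd = eventfd(0, EFD_CLOEXEC);

		if (efd < 0)
			return -1;

		memset(buf, 0, sizeof(buf));
		irq_set->argsz = sizeof(buf);
		irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
		irq_set->index = VFIO_PCI_DMA_FAULT_IRQ_INDEX;
		irq_set->start = 0;
		irq_set->count = 1;
		memcpy(irq_set->data, &efd, sizeof(efd));

		return ioctl(device, VFIO_DEVICE_SET_IRQS, irq_set) ? -1 : efd;
	}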
Signed-off-by: Eric Auger <[email protected]>
---
drivers/vfio/pci/vfio_pci.c | 3 +++
drivers/vfio/pci/vfio_pci_intrs.c | 19 +++++++++++++++++++
include/uapi/linux/vfio.h | 1 +
3 files changed, 23 insertions(+)
diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
index 8c895ece4750..36b57fe363d7 100644
--- a/drivers/vfio/pci/vfio_pci.c
+++ b/drivers/vfio/pci/vfio_pci.c
@@ -750,6 +750,8 @@ static int vfio_pci_get_irq_count(struct vfio_pci_device *vdev, int irq_type)
return 1;
} else if (irq_type == VFIO_PCI_REQ_IRQ_INDEX) {
return 1;
+ } else if (irq_type == VFIO_PCI_DMA_FAULT_IRQ_INDEX) {
+ return 1;
}
return 0;
@@ -1086,6 +1088,7 @@ static long vfio_pci_ioctl(void *device_data,
switch (info.index) {
case VFIO_PCI_INTX_IRQ_INDEX ... VFIO_PCI_MSIX_IRQ_INDEX:
case VFIO_PCI_REQ_IRQ_INDEX:
+ case VFIO_PCI_DMA_FAULT_IRQ_INDEX:
break;
case VFIO_PCI_ERR_IRQ_INDEX:
if (pci_is_pcie(vdev->pdev))
diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c
index 1c46045b0e7f..28a96117daf3 100644
--- a/drivers/vfio/pci/vfio_pci_intrs.c
+++ b/drivers/vfio/pci/vfio_pci_intrs.c
@@ -622,6 +622,18 @@ static int vfio_pci_set_req_trigger(struct vfio_pci_device *vdev,
count, flags, data);
}
+static int vfio_pci_set_dma_fault_trigger(struct vfio_pci_device *vdev,
+ unsigned index, unsigned start,
+ unsigned count, uint32_t flags,
+ void *data)
+{
+ if (index != VFIO_PCI_DMA_FAULT_IRQ_INDEX || start != 0 || count > 1)
+ return -EINVAL;
+
+ return vfio_pci_set_ctx_trigger_single(&vdev->dma_fault_trigger,
+ count, flags, data);
+}
+
int vfio_pci_set_irqs_ioctl(struct vfio_pci_device *vdev, uint32_t flags,
unsigned index, unsigned start, unsigned count,
void *data)
@@ -671,6 +683,13 @@ int vfio_pci_set_irqs_ioctl(struct vfio_pci_device *vdev, uint32_t flags,
break;
}
break;
+ case VFIO_PCI_DMA_FAULT_IRQ_INDEX:
+ switch (flags & VFIO_IRQ_SET_ACTION_TYPE_MASK) {
+ case VFIO_IRQ_SET_ACTION_TRIGGER:
+ func = vfio_pci_set_dma_fault_trigger;
+ break;
+ }
+ break;
}
if (!func)
diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 40b7aec8fefa..b47f65df5b86 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -555,6 +555,7 @@ enum {
VFIO_PCI_MSIX_IRQ_INDEX,
VFIO_PCI_ERR_IRQ_INDEX,
VFIO_PCI_REQ_IRQ_INDEX,
+ VFIO_PCI_DMA_FAULT_IRQ_INDEX,
VFIO_PCI_NUM_IRQS
};
--
2.20.1
Implement domain-selective and page-selective IOTLB invalidations.
Signed-off-by: Eric Auger <[email protected]>
---
v3 -> v4:
- adapt to changes in the uapi
- add support for leaf parameter
- do not use arm_smmu_tlb_inv_range_nosync or arm_smmu_tlb_inv_context
anymore
v2 -> v3:
- replace __arm_smmu_tlb_sync by arm_smmu_cmdq_issue_sync
v1 -> v2:
- properly pass the asid
---
drivers/iommu/arm-smmu-v3.c | 57 +++++++++++++++++++++++++++++++++++++
1 file changed, 57 insertions(+)
diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index e41f61844d78..4a05dd66df74 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -2319,6 +2319,62 @@ static void arm_smmu_detach_pasid_table(struct iommu_domain *domain)
mutex_unlock(&smmu_domain->init_mutex);
}
+static int
+arm_smmu_cache_invalidate(struct iommu_domain *domain, struct device *dev,
+ struct iommu_cache_invalidate_info *inv_info)
+{
+ struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+ struct arm_smmu_device *smmu = smmu_domain->smmu;
+
+ if (smmu_domain->stage != ARM_SMMU_DOMAIN_NESTED)
+ return -EINVAL;
+
+ if (!smmu)
+ return -EINVAL;
+
+ if (inv_info->cache & IOMMU_CACHE_INV_TYPE_IOTLB) {
+ if (inv_info->granularity == IOMMU_INV_GRANU_PASID) {
+ struct arm_smmu_cmdq_ent cmd = {
+ .opcode = CMDQ_OP_TLBI_NH_ASID,
+ .tlbi = {
+ .vmid = smmu_domain->s2_cfg.vmid,
+ .asid = inv_info->pasid,
+ },
+ };
+
+ arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+ arm_smmu_cmdq_issue_sync(smmu);
+
+ } else if (inv_info->granularity == IOMMU_INV_GRANU_ADDR) {
+ struct iommu_inv_addr_info *info = &inv_info->addr_info;
+ size_t size = info->nb_granules * info->granule_size;
+ bool leaf = info->flags & IOMMU_INV_ADDR_FLAGS_LEAF;
+ struct arm_smmu_cmdq_ent cmd = {
+ .opcode = CMDQ_OP_TLBI_NH_VA,
+ .tlbi = {
+ .addr = info->addr,
+ .vmid = smmu_domain->s2_cfg.vmid,
+ .asid = info->pasid,
+ .leaf = leaf,
+ },
+ };
+
+ do {
+ arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+ cmd.tlbi.addr += info->granule_size;
+ } while (size -= info->granule_size);
+ arm_smmu_cmdq_issue_sync(smmu);
+ } else {
+ return -EINVAL;
+ }
+ }
+ if (inv_info->cache & IOMMU_CACHE_INV_TYPE_PASID ||
+ inv_info->cache & IOMMU_CACHE_INV_TYPE_DEV_IOTLB) {
+ return -ENOENT;
+ }
+ return 0;
+}
+
static struct iommu_ops arm_smmu_ops = {
.capable = arm_smmu_capable,
.domain_alloc = arm_smmu_domain_alloc,
@@ -2339,6 +2395,7 @@ static struct iommu_ops arm_smmu_ops = {
.put_resv_regions = arm_smmu_put_resv_regions,
.attach_pasid_table = arm_smmu_attach_pasid_table,
.detach_pasid_table = arm_smmu_detach_pasid_table,
+ .cache_invalidate = arm_smmu_cache_invalidate,
.pgsize_bitmap = -1UL, /* Restricted during device attach */
};
--
2.20.1
New ioctls were introduced to pass information about the guest stage 1
to the host through VFIO. Let's document the nested stage control.
Signed-off-by: Eric Auger <[email protected]>
---
v2 -> v3:
- document the new fault API
v1 -> v2:
- use the new ioctl names
- add doc related to fault handling
---
Documentation/vfio.txt | 83 ++++++++++++++++++++++++++++++++++++++++++
1 file changed, 83 insertions(+)
diff --git a/Documentation/vfio.txt b/Documentation/vfio.txt
index f1a4d3c3ba0b..aab59ddf5ebd 100644
--- a/Documentation/vfio.txt
+++ b/Documentation/vfio.txt
@@ -239,6 +239,89 @@ group and can access them as follows::
/* Gratuitous device reset and go... */
ioctl(device, VFIO_DEVICE_RESET);
+IOMMU Dual Stage Control
+------------------------
+
+Some IOMMUs support 2 stages/levels of translation. "Stage" corresponds to
+the ARM terminology while "level" corresponds to Intel's VTD terminology. In
+the following text we use either without distinction.
+
+This is useful when the guest is exposed with a virtual IOMMU and some
+devices are assigned to the guest through VFIO. Then the guest OS can use
+stage 1 (IOVA -> GPA), while the hypervisor uses stage 2 for VM isolation
+(GPA -> HPA).
+
+The guest gets ownership of the stage 1 page tables and also owns stage 1
+configuration structures. The hypervisor owns the root configuration structure
+(for security reason), including stage 2 configuration. This works as long
+configuration structures and page table format are compatible between the
+virtual IOMMU and the physical IOMMU.
+
+Assuming the HW supports it, this nested mode is selected by choosing the
+VFIO_TYPE1_NESTING_IOMMU type through:
+
+ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_NESTING_IOMMU);
+
+This forces the hypervisor to use stage 2, leaving stage 1 available for
+guest usage.
+
+Once groups are attached to the container, the guest stage 1 translation
+configuration data can be passed to VFIO by using
+
+ioctl(container, VFIO_IOMMU_ATTACH_PASID_TABLE, &pasid_table_info);
+
+This allows the guest stage 1 configuration structures to be combined with
+the hypervisor stage 2 configuration structures. Stage 1 configuration
+structures depend on the IOMMU type.
+
+As the stage 1 translation is fully delegated to the HW, physical events that
+may occur (especially translation faults) need to be propagated up to
+the virtualizer and re-injected into the guest.
+
+The userspace must be prepared to receive faults. The VFIO-PCI device
+exposes 2 regions dedicated to HW faults: one read-only "producer" fault
+region (kernel is the producer and writes into this region) and one
+write-only "consumer" fault region, type/subtype respectively:
+- VFIO_REGION_TYPE_NESTED/VFIO_REGION_SUBTYPE_NESTED_FAULT_PROD
+- VFIO_REGION_TYPE_NESTED/VFIO_REGION_SUBTYPE_NESTED_FAULT_CONS
+
+The producer fault region exposes a VFIO_REGION_INFO_CAP_PRODUCER_FAULT
+region capability that allows the userspace to retrieve the max fault
+ABI version supported by the kernel.
+
+The ABI version can be negotiated: the userspace writes the version it
+wants in the consumer region (greater than or equal to 1). Once set, the
+ABI version cannot be changed.
+
+Then by using VFIO_DEVICE_SET_IRQS along with the VFIO_PCI_DMA_FAULT_IRQ_INDEX
+index, the virtualizer can register an eventfd signalled whenever a fault is
+observed at physical level.
+
+The kernel writes the fault records formatted according to the negotiated
+ABI version in the producer region fault queue. This part of the producer
+fault region can be mmapped (see VFIO_REGION_INFO_CAP_SPARSE_MMAP result).
+
+When the userspace consumes a fault in the queue, it should increment
+the consumer index to allow new fault records to replace the used ones.
+The queue size and the entry size can be retrieved in the producer region.
+The consumer index should never overshoot the producer index as in any
+other circular buffer scheme. Also, it must be less than the queue size;
+otherwise the change is ignored by the kernel.
+
+When the guest invalidates stage 1 related caches, invalidations must be
+forwarded to the host through
+ioctl(container, VFIO_IOMMU_CACHE_INVALIDATE, &inv_data);
+Those invalidations can happen at various granularity levels (page, context, ...).
+
+The ARM SMMU specification introduces another challenge: MSIs are translated by
+both the virtual SMMU and the physical SMMU. To build a nested mapping for the
+IOVA programmed into the assigned device, the guest needs to pass its IOVA/MSI
+doorbell GPA binding to the host. Then the hypervisor can build a nested stage 2
+binding eventually translating into the physical MSI doorbell.
+
+This is achieved by
+ioctl(container, VFIO_IOMMU_BIND_MSI, &guest_binding);
+
VFIO User API
-------------------------------------------------------------------------------
--
2.20.1
Up to now, when the type was UNMANAGED, we used to
allocate IOVA pages within a range provided by the user.
This does not work in nested mode.
If both the host and the guest are exposed with SMMUs, each
would allocate an IOVA. The guest allocates an IOVA (gIOVA)
to map onto the guest MSI doorbell (gDB). The Host allocates
another IOVA (hIOVA) to map onto the physical doorbell (hDB).
So we end up with 2 unrelated mappings, at S1 and S2:
S1 S2
gIOVA -> gDB
hIOVA -> hDB
The PCI device would be programmed with hIOVA.
iommu_dma_bind_guest_msi allows the gIOVA/gDB binding to be passed
to the host so that gIOVA can be used by the host instead of
re-allocating a new hIOVA. The device handle is also passed
to guarantee that devices belonging to different stage 1 domains record
distinguishable stage 1 mappings. That way the host can create the
following nested mapping:
S1 S2
gIOVA -> gDB -> hDB
This time, the PCI device will be programmed with the gIOVA MSI
doorbell, which is correctly mapped through the 2 stages.
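As a worked example with arbitrary addresses (purely illustrative):

	/*
	 * guest:   gIOVA 0x8042000 -> gDB 0x0802f000     (stage 1, set by guest)
	 * binding: iommu_dma_bind_guest_msi(domain, dev,
	 *                                   0x8042000, 0x0802f000, 0x1000);
	 * host:    stage 2 maps gDB 0x0802f000 -> hDB    (physical doorbell)
	 *
	 * The device is then programmed with the gIOVA (0x8042000), which
	 * resolves through S1 then S2 to the physical doorbell.
	 */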
Signed-off-by: Eric Auger <[email protected]>
---
v3 -> v4:
- change function names; add unregister
- protect with msi_lock
v2 -> v3:
- also store the device handle on S1 mapping registration.
  This guarantees that the associated S2 mapping binds
  to the correct physical MSI controller.
v1 -> v2:
- unmap stage2 on put()
---
drivers/iommu/dma-iommu.c | 145 ++++++++++++++++++++++++++++++++++++--
include/linux/dma-iommu.h | 18 +++++
2 files changed, 159 insertions(+), 4 deletions(-)
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 77aabe637a60..77ec3d35d41e 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -35,12 +35,16 @@
struct iommu_dma_msi_page {
struct list_head list;
dma_addr_t iova;
+ dma_addr_t gpa;
phys_addr_t phys;
+ size_t s1_granule;
+ struct device *dev;
};
enum iommu_dma_cookie_type {
IOMMU_DMA_IOVA_COOKIE,
IOMMU_DMA_MSI_COOKIE,
+ IOMMU_DMA_NESTED_MSI_COOKIE,
};
struct iommu_dma_cookie {
@@ -110,14 +114,17 @@ EXPORT_SYMBOL(iommu_get_dma_cookie);
*
* Users who manage their own IOVA allocation and do not want DMA API support,
* but would still like to take advantage of automatic MSI remapping, can use
- * this to initialise their own domain appropriately. Users should reserve a
+ * this to initialise their own domain appropriately. Users may reserve a
* contiguous IOVA region, starting at @base, large enough to accommodate the
* number of PAGE_SIZE mappings necessary to cover every MSI doorbell address
- * used by the devices attached to @domain.
+ * used by the devices attached to @domain. The other way round is to provide
+ * usable IOVA pages through the iommu_dma_bind_guest_msi API (nested stages
+ * use case).
*/
int iommu_get_msi_cookie(struct iommu_domain *domain, dma_addr_t base)
{
struct iommu_dma_cookie *cookie;
+ int nesting, ret;
if (domain->type != IOMMU_DOMAIN_UNMANAGED)
return -EINVAL;
@@ -125,7 +132,12 @@ int iommu_get_msi_cookie(struct iommu_domain *domain, dma_addr_t base)
if (domain->iova_cookie)
return -EEXIST;
- cookie = cookie_alloc(IOMMU_DMA_MSI_COOKIE);
+ ret = iommu_domain_get_attr(domain, DOMAIN_ATTR_NESTING, &nesting);
+ if (!ret && nesting)
+ cookie = cookie_alloc(IOMMU_DMA_NESTED_MSI_COOKIE);
+ else
+ cookie = cookie_alloc(IOMMU_DMA_MSI_COOKIE);
+
if (!cookie)
return -ENOMEM;
@@ -146,6 +158,7 @@ void iommu_put_dma_cookie(struct iommu_domain *domain)
{
struct iommu_dma_cookie *cookie = domain->iova_cookie;
struct iommu_dma_msi_page *msi, *tmp;
+ bool s2_unmap = false;
if (!cookie)
return;
@@ -153,7 +166,15 @@ void iommu_put_dma_cookie(struct iommu_domain *domain)
if (cookie->type == IOMMU_DMA_IOVA_COOKIE && cookie->iovad.granule)
put_iova_domain(&cookie->iovad);
+ if (cookie->type == IOMMU_DMA_NESTED_MSI_COOKIE)
+ s2_unmap = true;
+
list_for_each_entry_safe(msi, tmp, &cookie->msi_page_list, list) {
+ if (s2_unmap && msi->phys) {
+ size_t size = cookie_msi_granule(cookie);
+
+ WARN_ON(iommu_unmap(domain, msi->gpa, size) != size);
+ }
list_del(&msi->list);
kfree(msi);
}
@@ -162,6 +183,85 @@ void iommu_put_dma_cookie(struct iommu_domain *domain)
}
EXPORT_SYMBOL(iommu_put_dma_cookie);
+/**
+ * iommu_dma_bind_guest_msi - Allows to pass the stage 1
+ * binding of a virtual MSI doorbell used by @dev.
+ *
+ * @domain: domain handle
+ * @dev: device handle
+ * @iova: guest iova
+ * @gpa: gpa of the virtual doorbell
+ * @size: size of the granule used for the stage1 mapping
+ *
+ * In nested stage use case, the user can provide IOVA/IPA bindings
+ * corresponding to a guest MSI stage 1 mapping. When the host needs
+ * to map its own MSI doorbells, it can use @gpa as stage 2 input
+ * and map it onto the physical MSI doorbell.
+ */
+int iommu_dma_bind_guest_msi(struct iommu_domain *domain, struct device *dev,
+ dma_addr_t iova, phys_addr_t gpa, size_t size)
+{
+ struct iommu_dma_cookie *cookie = domain->iova_cookie;
+ struct iommu_dma_msi_page *msi;
+ int ret = 0;
+
+ if (!cookie)
+ return -EINVAL;
+
+ if (cookie->type != IOMMU_DMA_NESTED_MSI_COOKIE)
+ return -EINVAL;
+
+ iova = iova & ~(dma_addr_t)(size - 1);
+ gpa = gpa & ~(phys_addr_t)(size - 1);
+
+ spin_lock(&cookie->msi_lock);
+
+ list_for_each_entry(msi, &cookie->msi_page_list, list) {
+ if (msi->iova == iova && msi->dev == dev)
+ goto unlock; /* this page is already registered */
+ }
+
+ msi = kzalloc(sizeof(*msi), GFP_ATOMIC);
+ if (!msi) {
+ ret = -ENOMEM;
+ goto unlock;
+ }
+
+ msi->iova = iova;
+ msi->gpa = gpa;
+ msi->dev = dev;
+ msi->s1_granule = size;
+ list_add(&msi->list, &cookie->msi_page_list);
+unlock:
+ spin_unlock(&cookie->msi_lock);
+ return ret;
+}
+EXPORT_SYMBOL(iommu_dma_bind_guest_msi);
+
+void iommu_dma_unbind_guest_msi(struct iommu_domain *domain, struct device *dev,
+ dma_addr_t giova)
+{
+	struct iommu_dma_cookie *cookie = domain->iova_cookie;
+	struct iommu_dma_msi_page *msi, *tmp;
+
+	if (!cookie || cookie->type != IOMMU_DMA_NESTED_MSI_COOKIE)
+		return;
+
+	spin_lock(&cookie->msi_lock);
+
+	list_for_each_entry_safe(msi, tmp, &cookie->msi_page_list, list) {
+		dma_addr_t aligned_giova =
+			giova & ~(dma_addr_t)(msi->s1_granule - 1);
+
+		if (msi->dev == dev && msi->iova == aligned_giova) {
+			if (msi->phys) {
+				/* unmap the stage 2 */
+				size_t size = cookie_msi_granule(cookie);
+
+				WARN_ON(iommu_unmap(domain, msi->gpa, size) != size);
+			}
+			list_del(&msi->list);
+			kfree(msi);
+		}
+	}
+
+	spin_unlock(&cookie->msi_lock);
+}
+EXPORT_SYMBOL(iommu_dma_unbind_guest_msi);
+
/**
* iommu_dma_get_resv_regions - Reserved region driver helper
* @dev: Device from iommu_get_resv_regions()
@@ -855,6 +955,16 @@ void iommu_dma_unmap_resource(struct device *dev, dma_addr_t handle,
__iommu_dma_unmap(iommu_get_dma_domain(dev), handle, size);
}
+static bool msi_page_match(struct iommu_dma_msi_page *msi_page,
+ struct device *dev, phys_addr_t msi_addr)
+{
+ bool match = msi_page->phys == msi_addr;
+
+ if (msi_page->dev)
+ match &= (msi_page->dev == dev);
+ return match;
+}
+
static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev,
phys_addr_t msi_addr, struct iommu_domain *domain)
{
@@ -866,9 +976,36 @@ static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev,
msi_addr &= ~(phys_addr_t)(size - 1);
list_for_each_entry(msi_page, &cookie->msi_page_list, list)
- if (msi_page->phys == msi_addr)
+ if (msi_page_match(msi_page, dev, msi_addr))
return msi_page;
+ /*
+ * In nested stage mode, we do not allocate an MSI page in
+ * a range provided by the user. Instead, IOVA/IPA bindings are
+ * individually provided. We reuse these IOVAs to build the
+ * GIOVA -> GPA -> MSI HPA nested stage mapping.
+ */
+ if (cookie->type == IOMMU_DMA_NESTED_MSI_COOKIE) {
+ list_for_each_entry(msi_page, &cookie->msi_page_list, list)
+ if (!msi_page->phys && msi_page->dev == dev) {
+ int ret;
+
+ /* do the stage 2 mapping */
+ ret = iommu_map(domain,
+ msi_page->gpa, msi_addr, size,
+ IOMMU_MMIO | IOMMU_WRITE);
+ if (ret) {
+ pr_warn("MSI S2 mapping failed (%d)\n",
+ ret);
+ return NULL;
+ }
+ msi_page->phys = msi_addr;
+ return msi_page;
+ }
+ pr_warn("%s no MSI binding found\n", __func__);
+ return NULL;
+ }
+
msi_page = kzalloc(sizeof(*msi_page), GFP_ATOMIC);
if (!msi_page)
return NULL;
diff --git a/include/linux/dma-iommu.h b/include/linux/dma-iommu.h
index e760dc5d1fa8..6fc0f2b4a56a 100644
--- a/include/linux/dma-iommu.h
+++ b/include/linux/dma-iommu.h
@@ -24,6 +24,7 @@
#include <linux/dma-mapping.h>
#include <linux/iommu.h>
#include <linux/msi.h>
+#include <uapi/linux/iommu.h>
int iommu_dma_init(void);
@@ -73,6 +74,10 @@ void iommu_dma_unmap_resource(struct device *dev, dma_addr_t handle,
/* The DMA API isn't _quite_ the whole story, though... */
void iommu_dma_map_msi_msg(int irq, struct msi_msg *msg);
void iommu_dma_get_resv_regions(struct device *dev, struct list_head *list);
+int iommu_dma_bind_guest_msi(struct iommu_domain *domain, struct device *dev,
+ dma_addr_t iova, phys_addr_t gpa, size_t size);
+void iommu_dma_unbind_guest_msi(struct iommu_domain *domain,
+ struct device *dev, dma_addr_t giova);
#else
@@ -103,6 +108,19 @@ static inline void iommu_dma_map_msi_msg(int irq, struct msi_msg *msg)
{
}
+static inline int
+iommu_dma_bind_guest_msi(struct iommu_domain *domain, struct device *dev,
+ dma_addr_t iova, phys_addr_t gpa, size_t size)
+{
+ return -ENODEV;
+}
+
+static inline void
+iommu_dma_unbind_guest_msi(struct iommu_domain *domain,
+			   struct device *dev, dma_addr_t giova)
+{
+}
+
static inline void iommu_dma_get_resv_regions(struct device *dev, struct list_head *list)
{
}
--
2.20.1
The bind/unbind_guest_msi() callbacks check that the domain
is NESTED and redirect to the dma-iommu implementation.
Signed-off-by: Eric Auger <[email protected]>
---
drivers/iommu/arm-smmu-v3.c | 44 +++++++++++++++++++++++++++++++++++++
1 file changed, 44 insertions(+)
diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index 4a05dd66df74..a4b82c520647 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -2207,6 +2207,48 @@ static void arm_smmu_put_resv_regions(struct device *dev,
kfree(entry);
}
+static int
+arm_smmu_bind_guest_msi(struct iommu_domain *domain, struct device *dev,
+ dma_addr_t giova, phys_addr_t gpa, size_t size)
+{
+ struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+ struct arm_smmu_device *smmu;
+ int ret = -EINVAL;
+
+ mutex_lock(&smmu_domain->init_mutex);
+ smmu = smmu_domain->smmu;
+ if (!smmu)
+ goto out;
+
+ if (smmu_domain->stage != ARM_SMMU_DOMAIN_NESTED)
+ goto out;
+
+ ret = iommu_dma_bind_guest_msi(domain, dev, giova, gpa, size);
+out:
+ mutex_unlock(&smmu_domain->init_mutex);
+ return ret;
+}
+
+static void
+arm_smmu_unbind_guest_msi(struct iommu_domain *domain,
+ struct device *dev, dma_addr_t giova)
+{
+ struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+ struct arm_smmu_device *smmu;
+
+ mutex_lock(&smmu_domain->init_mutex);
+ smmu = smmu_domain->smmu;
+ if (!smmu)
+ goto unlock;
+
+ if (smmu_domain->stage != ARM_SMMU_DOMAIN_NESTED)
+ goto unlock;
+
+ iommu_dma_unbind_guest_msi(domain, dev, giova);
+unlock:
+ mutex_unlock(&smmu_domain->init_mutex);
+}
+
static int arm_smmu_attach_pasid_table(struct iommu_domain *domain,
struct iommu_pasid_table_config *cfg)
{
@@ -2396,6 +2438,8 @@ static struct iommu_ops arm_smmu_ops = {
.attach_pasid_table = arm_smmu_attach_pasid_table,
.detach_pasid_table = arm_smmu_detach_pasid_table,
.cache_invalidate = arm_smmu_cache_invalidate,
+ .bind_guest_msi = arm_smmu_bind_guest_msi,
+ .unbind_guest_msi = arm_smmu_unbind_guest_msi,
.pgsize_bitmap = -1UL, /* Restricted during device attach */
};
--
2.20.1
On Sun, 17 Mar 2019 18:22:15 +0100
Eric Auger <[email protected]> wrote:
> From: "Liu, Yi L" <[email protected]>
>
> In any virtualization use case, when the first translation stage
> is "owned" by the guest OS, the host IOMMU driver has no knowledge
> of caching structure updates unless the guest invalidation activities
> are trapped by the virtualizer and passed down to the host.
>
> Since the invalidation data are obtained from user space and will be
> written into physical IOMMU, we must allow security check at various
> layers. Therefore, generic invalidation data format are proposed here,
> model specific IOMMU drivers need to convert them into their own
> format.
>
> Signed-off-by: Liu, Yi L <[email protected]>
> Signed-off-by: Jean-Philippe Brucker <[email protected]>
> Signed-off-by: Jacob Pan <[email protected]>
> Signed-off-by: Ashok Raj <[email protected]>
> Signed-off-by: Eric Auger <[email protected]>
>
> ---
> v5 -> v6:
> - fix merge issue
>
> v3 -> v4:
> - full reshape of the API following Alex' comments
>
> v1 -> v2:
> - add arch_id field
> - renamed tlb_invalidate into cache_invalidate as this API allows
> to invalidate context caches on top of IOTLBs
>
> v1:
> renamed sva_invalidate into tlb_invalidate and add iommu_ prefix in
> header. Commit message reworded.
> ---
> drivers/iommu/iommu.c | 14 ++++++++
> include/linux/iommu.h | 15 ++++++++
> include/uapi/linux/iommu.h | 71 ++++++++++++++++++++++++++++++++++++++
> 3 files changed, 100 insertions(+)
>
> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
> index 7d9285cea100..b72e326ddd41 100644
> --- a/drivers/iommu/iommu.c
> +++ b/drivers/iommu/iommu.c
> @@ -1544,6 +1544,20 @@ void iommu_detach_pasid_table(struct iommu_domain *domain)
>  }
> EXPORT_SYMBOL_GPL(iommu_detach_pasid_table);
>
> +int iommu_cache_invalidate(struct iommu_domain *domain, struct device *dev,
> +			   struct iommu_cache_invalidate_info *inv_info)
> +{
> + int ret = 0;
> +
> + if (unlikely(!domain->ops->cache_invalidate))
> + return -ENODEV;
> +
> + ret = domain->ops->cache_invalidate(domain, dev, inv_info);
> +
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(iommu_cache_invalidate);
> +
> static void __iommu_detach_device(struct iommu_domain *domain,
> struct device *dev)
> {
> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
> index fb9b7a8de25f..7c7c6bad1420 100644
> --- a/include/linux/iommu.h
> +++ b/include/linux/iommu.h
> @@ -191,6 +191,7 @@ struct iommu_resv_region {
>  *				driver init to device driver init (default no)
> * @attach_pasid_table: attach a pasid table
> * @detach_pasid_table: detach the pasid table
> + * @cache_invalidate: invalidate translation caches
> * @pgsize_bitmap: bitmap of all possible supported page sizes
> */
> struct iommu_ops {
> @@ -239,6 +240,9 @@ struct iommu_ops {
>  				    struct iommu_pasid_table_config *cfg);
>  	void (*detach_pasid_table)(struct iommu_domain *domain);
>
> +	int (*cache_invalidate)(struct iommu_domain *domain, struct device *dev,
> +				struct iommu_cache_invalidate_info *inv_info);
> +
> unsigned long pgsize_bitmap;
> };
>
> @@ -349,6 +353,9 @@ extern void iommu_detach_device(struct iommu_domain *domain,
>  extern int iommu_attach_pasid_table(struct iommu_domain *domain,
>  				    struct iommu_pasid_table_config *cfg);
>  extern void iommu_detach_pasid_table(struct iommu_domain *domain);
> +extern int iommu_cache_invalidate(struct iommu_domain *domain,
> +				  struct device *dev,
> +				  struct iommu_cache_invalidate_info *inv_info);
>  extern struct iommu_domain *iommu_get_domain_for_dev(struct device *dev);
>  extern struct iommu_domain *iommu_get_dma_domain(struct device *dev);
>  extern int iommu_map(struct iommu_domain *domain, unsigned long iova,
> @@ -797,6 +804,14 @@ int iommu_attach_pasid_table(struct iommu_domain *domain,
>  static inline void iommu_detach_pasid_table(struct iommu_domain *domain) {}
> +static inline int
> +iommu_cache_invalidate(struct iommu_domain *domain,
> + struct device *dev,
> + struct iommu_cache_invalidate_info *inv_info)
> +{
> + return -ENODEV;
> +}
> +
> #endif /* CONFIG_IOMMU_API */
>
> #ifdef CONFIG_IOMMU_DEBUGFS
> diff --git a/include/uapi/linux/iommu.h b/include/uapi/linux/iommu.h
> index 532a64075f23..e4c6a447e85a 100644
> --- a/include/uapi/linux/iommu.h
> +++ b/include/uapi/linux/iommu.h
> @@ -159,4 +159,75 @@ struct iommu_pasid_table_config {
> };
> };
>
> +/* defines the granularity of the invalidation */
> +enum iommu_inv_granularity {
> +	IOMMU_INV_GRANU_DOMAIN,	/* domain-selective invalidation */
> +	IOMMU_INV_GRANU_PASID,	/* pasid-selective invalidation */
> +	IOMMU_INV_GRANU_ADDR,	/* page-selective invalidation */
> +};
> +
> +/**
> + * Address Selective Invalidation Structure
> + *
> + * @flags indicates the granularity of the address-selective invalidation
> + * - if PASID bit is set, @pasid field is populated and the invalidation
> + *   relates to cache entries tagged with this PASID and matching the
> + *   address range.
> + * - if ARCHID bit is set, @archid is populated and the invalidation relates
> + *   to cache entries tagged with this architecture specific id and matching
> + *   the address range.
> + * - Both PASID and ARCHID can be set as they may tag different caches.
> + * - if neither PASID or ARCHID is set, global addr invalidation applies
> + * - LEAF flag indicates whether only the leaf PTE caching needs to be
> + *   invalidated and other paging structure caches can be preserved.
> + * @pasid: process address space id
> + * @archid: architecture-specific id
> + * @addr: first stage/level input address
> + * @granule_size: page/block size of the mapping in bytes
> + * @nb_granules: number of contiguous granules to be invalidated
> + */
> +struct iommu_inv_addr_info {
> +#define IOMMU_INV_ADDR_FLAGS_PASID (1 << 0)
> +#define IOMMU_INV_ADDR_FLAGS_ARCHID (1 << 1)
> +#define IOMMU_INV_ADDR_FLAGS_LEAF (1 << 2)
> + __u32 flags;
> + __u32 archid;
> + __u64 pasid;
> + __u64 addr;
> + __u64 granule_size;
> + __u64 nb_granules;
> +};
> +
> +/**
> + * First level/stage invalidation information
> + * @cache: bitfield that allows to select which caches to invalidate
> + * @granularity: defines the lowest granularity used for the invalidation:
> + *     domain > pasid > addr
> + *
> + * Not all the combinations of cache/granularity make sense:
> + *
> + * type         |   DEV_IOTLB   |     IOTLB     |     PASID     |
> + * granularity  |               |               |     cache     |
> + * -------------+---------------+---------------+---------------+
> + * DOMAIN       |      N/A      |       Y       |       Y       |
> + * PASID        |       Y       |       Y       |       Y       |
> + * ADDR         |       Y       |       Y       |      N/A      |
> + */
> +struct iommu_cache_invalidate_info {
> +#define IOMMU_CACHE_INVALIDATE_INFO_VERSION_1 1
> + __u32 version;
> +/* IOMMU paging structure cache */
> +#define IOMMU_CACHE_INV_TYPE_IOTLB (1 << 0) /* IOMMU IOTLB */
> +#define IOMMU_CACHE_INV_TYPE_DEV_IOTLB	(1 << 1) /* Device IOTLB */
> +#define IOMMU_CACHE_INV_TYPE_PASID	(1 << 2) /* PASID cache */
Just a clarification, this used to be an enum. You do intend to issue a
single invalidation request on multiple cache types? Perhaps for
virtio-IOMMU? I only see a single cache type in your patch #14. For VT-d
we plan to issue one cache type at a time for now. So this format works
for us.
However, if multiple cache types are issued in a single invalidation,
they must share a single granularity; not all combinations are valid,
e.g. dev IOTLB does not support domain granularity. Just a reminder,
not an issue: the driver could filter out invalid combinations.
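To make the restriction concrete, the check could be as simple as the
sketch below (the helper name is made up; the fields and defines are the
ones introduced by this patch):

	/* Hypothetical helper: reject the combinations marked N/A in the table */
	static int iommu_check_inv_combination(struct iommu_cache_invalidate_info *info)
	{
		/* dev-IOTLB invalidations cannot be domain-selective */
		if ((info->cache & IOMMU_CACHE_INV_TYPE_DEV_IOTLB) &&
		    info->granularity == IOMMU_INV_GRANU_DOMAIN)
			return -EINVAL;

		/* PASID-cache invalidations cannot be address-selective */
		if ((info->cache & IOMMU_CACHE_INV_TYPE_PASID) &&
		    info->granularity == IOMMU_INV_GRANU_ADDR)
			return -EINVAL;

		return 0;
	}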
> + __u8 cache;
> + __u8 granularity;
> + __u8 padding[2];
> + union {
> + __u64 pasid;
> + struct iommu_inv_addr_info addr_info;
> + };
> +};
> +
> +
> #endif /* _UAPI_IOMMU_H */
[Jacob Pan]
On 20/03/2019 16:37, Jacob Pan wrote:
[...]
>> +struct iommu_inv_addr_info {
>> +#define IOMMU_INV_ADDR_FLAGS_PASID (1 << 0)
>> +#define IOMMU_INV_ADDR_FLAGS_ARCHID (1 << 1)
>> +#define IOMMU_INV_ADDR_FLAGS_LEAF (1 << 2)
>> + __u32 flags;
>> + __u32 archid;
>> + __u64 pasid;
>> + __u64 addr;
>> + __u64 granule_size;
>> + __u64 nb_granules;
>> +};
>> +
>> +/**
>> + * First level/stage invalidation information
>> + * @cache: bitfield that allows to select which caches to invalidate
>> + * @granularity: defines the lowest granularity used for the
>> invalidation:
>> + * domain > pasid > addr
>> + *
>> + * Not all the combinations of cache/granularity make sense:
>> + *
>> + * type         |   DEV_IOTLB   |     IOTLB     |     PASID     |
>> + * granularity  |               |               |     cache     |
>> + * -------------+---------------+---------------+---------------+
>> + * DOMAIN       |      N/A      |       Y       |       Y       |
>> + * PASID        |       Y       |       Y       |       Y       |
>> + * ADDR         |       Y       |       Y       |      N/A      |
>> + */
>> +struct iommu_cache_invalidate_info {
>> +#define IOMMU_CACHE_INVALIDATE_INFO_VERSION_1 1
>> + __u32 version;
>> +/* IOMMU paging structure cache */
>> +#define IOMMU_CACHE_INV_TYPE_IOTLB (1 << 0) /* IOMMU IOTLB */
>> +#define IOMMU_CACHE_INV_TYPE_DEV_IOTLB (1 << 1) /* Device
>> IOTLB */ +#define IOMMU_CACHE_INV_TYPE_PASID (1 << 2) /* PASID
>> cache */
> Just a clarification, this used to be an enum. You do intend to issue a
> single invalidation request on multiple cache types? Perhaps for
> virtio-IOMMU? I only see a single cache type in your patch #14. For VT-d
> we plan to issue one cache type at a time for now. So this format works
> for us.
Yes for virtio-iommu I'd like as little overhead as possible, which
means a single invalidation message to hit both IOTLB and ATC at once,
and the ability to specify multiple pages with @nb_granules.
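For illustration only, such a combined range invalidation could be encoded
like this with the proposed uapi (the concrete values are arbitrary and the
archid/addr values are just placeholders, not taken from virtio-iommu code):

	struct iommu_cache_invalidate_info inv = {
		.version	= IOMMU_CACHE_INVALIDATE_INFO_VERSION_1,
		/* hit both the IOTLB and the device ATC with one request */
		.cache		= IOMMU_CACHE_INV_TYPE_IOTLB |
				  IOMMU_CACHE_INV_TYPE_DEV_IOTLB,
		.granularity	= IOMMU_INV_GRANU_ADDR,
		.addr_info	= {
			.flags		= IOMMU_INV_ADDR_FLAGS_ARCHID,
			.archid		= 1,		/* e.g. the guest ASID */
			.addr		= 0x80000000,	/* first page of the range */
			.granule_size	= 4096,
			.nb_granules	= 16,		/* 16 contiguous pages */
		},
	};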
> However, if multiple cache types are issued in a single invalidation.
> They must share a single granularity, not all combinations are valid.
> e.g. dev IOTLB does not support domain granularity. Just a reminder,
> not an issue. Driver could filter out invalid combinations.
Agreed. Even the core could filter out invalid combinations based on the
table above: IOTLB and domain granularity are N/A.
Thanks,
Jean
>
>> + __u8 cache;
>> + __u8 granularity;
>> + __u8 padding[2];
>> + union {
>> + __u64 pasid;
>> + struct iommu_inv_addr_info addr_info;
>> + };
>> +};
>> +
>> +
>> #endif /* _UAPI_IOMMU_H */
>
> [Jacob Pan]
> _______________________________________________
> iommu mailing list
> [email protected]
> https://lists.linuxfoundation.org/mailman/listinfo/iommu
>
Hi Jacob, Jean-Philippe,
On 3/20/19 5:50 PM, Jean-Philippe Brucker wrote:
> On 20/03/2019 16:37, Jacob Pan wrote:
> [...]
>>> +struct iommu_inv_addr_info {
>>> +#define IOMMU_INV_ADDR_FLAGS_PASID (1 << 0)
>>> +#define IOMMU_INV_ADDR_FLAGS_ARCHID (1 << 1)
>>> +#define IOMMU_INV_ADDR_FLAGS_LEAF (1 << 2)
>>> + __u32 flags;
>>> + __u32 archid;
>>> + __u64 pasid;
>>> + __u64 addr;
>>> + __u64 granule_size;
>>> + __u64 nb_granules;
>>> +};
>>> +
>>> +/**
>>> + * First level/stage invalidation information
>>> + * @cache: bitfield that allows to select which caches to invalidate
>>> + * @granularity: defines the lowest granularity used for the
>>> invalidation:
>>> + * domain > pasid > addr
>>> + *
>>> + * Not all the combinations of cache/granularity make sense:
>>> + *
>>> + * type         |   DEV_IOTLB   |     IOTLB     |     PASID     |
>>> + * granularity  |               |               |     cache     |
>>> + * -------------+---------------+---------------+---------------+
>>> + * DOMAIN       |      N/A      |       Y       |       Y       |
>>> + * PASID        |       Y       |       Y       |       Y       |
>>> + * ADDR         |       Y       |       Y       |      N/A      |
>>> + */
>>> +struct iommu_cache_invalidate_info {
>>> +#define IOMMU_CACHE_INVALIDATE_INFO_VERSION_1 1
>>> + __u32 version;
>>> +/* IOMMU paging structure cache */
>>> +#define IOMMU_CACHE_INV_TYPE_IOTLB (1 << 0) /* IOMMU IOTLB */
>>> +#define IOMMU_CACHE_INV_TYPE_DEV_IOTLB (1 << 1) /* Device
>>> IOTLB */ +#define IOMMU_CACHE_INV_TYPE_PASID (1 << 2) /* PASID
>>> cache */
>> Just a clarification, this used to be an enum. You do intend to issue a
>> single invalidation request on multiple cache types? Perhaps for
>> virtio-IOMMU? I only see a single cache type in your patch #14. For VT-d
>> we plan to issue one cache type at a time for now. So this format works
>> for us.
>
> Yes for virtio-iommu I'd like as little overhead as possible, which
> means a single invalidation message to hit both IOTLB and ATC at once,
> and the ability to specify multiple pages with @nb_granules.
The original request/explanation from Jean-Philippe can be found here:
https://lkml.org/lkml/2019/1/28/1497
>
>> However, if multiple cache types are issued in a single invalidation.
>> They must share a single granularity, not all combinations are valid.
>> e.g. dev IOTLB does not support domain granularity. Just a reminder,
>> not an issue. Driver could filter out invalid combinations.
Sure I will add a comment about this restriction.
>
> Agreed. Even the core could filter out invalid combinations based on the
> table above: IOTLB and domain granularity are N/A.
I don't get this sentence. What about VT-d IOTLB domain-selective
invalidation:
"
• IOTLB entries caching mappings associated with the specified domain-id
are invalidated.
• Paging-structure-cache entries caching mappings associated with the
specified domain-id are invalidated.
"
Thanks
Eric
>
> Thanks,
> Jean
>
>>
>>> + __u8 cache;
>>> + __u8 granularity;
>>> + __u8 padding[2];
>>> + union {
>>> + __u64 pasid;
>>> + struct iommu_inv_addr_info addr_info;
>>> + };
>>> +};
>>> +
>>> +
>>> #endif /* _UAPI_IOMMU_H */
>>
>> [Jacob Pan]
>> _______________________________________________
>> iommu mailing list
>> [email protected]
>> https://lists.linuxfoundation.org/mailman/listinfo/iommu
>>
>
On 21/03/2019 13:54, Auger Eric wrote:
> Hi Jacob, Jean-Philippe,
>
> On 3/20/19 5:50 PM, Jean-Philippe Brucker wrote:
>> On 20/03/2019 16:37, Jacob Pan wrote:
>> [...]
>>>> +struct iommu_inv_addr_info {
>>>> +#define IOMMU_INV_ADDR_FLAGS_PASID (1 << 0)
>>>> +#define IOMMU_INV_ADDR_FLAGS_ARCHID (1 << 1)
>>>> +#define IOMMU_INV_ADDR_FLAGS_LEAF (1 << 2)
>>>> + __u32 flags;
>>>> + __u32 archid;
>>>> + __u64 pasid;
>>>> + __u64 addr;
>>>> + __u64 granule_size;
>>>> + __u64 nb_granules;
>>>> +};
>>>> +
>>>> +/**
>>>> + * First level/stage invalidation information
>>>> + * @cache: bitfield that allows to select which caches to invalidate
>>>> + * @granularity: defines the lowest granularity used for the
>>>> invalidation:
>>>> + * domain > pasid > addr
>>>> + *
>>>> + * Not all the combinations of cache/granularity make sense:
>>>> + *
>>>> + * type         |   DEV_IOTLB   |     IOTLB     |     PASID     |
>>>> + * granularity  |               |               |     cache     |
>>>> + * -------------+---------------+---------------+---------------+
>>>> + * DOMAIN       |      N/A      |       Y       |       Y       |
>>>> + * PASID        |       Y       |       Y       |       Y       |
>>>> + * ADDR         |       Y       |       Y       |      N/A      |
>>>> + */
>>>> +struct iommu_cache_invalidate_info {
>>>> +#define IOMMU_CACHE_INVALIDATE_INFO_VERSION_1 1
>>>> + __u32 version;
>>>> +/* IOMMU paging structure cache */
>>>> +#define IOMMU_CACHE_INV_TYPE_IOTLB (1 << 0) /* IOMMU IOTLB */
>>>> +#define IOMMU_CACHE_INV_TYPE_DEV_IOTLB (1 << 1) /* Device
>>>> IOTLB */ +#define IOMMU_CACHE_INV_TYPE_PASID (1 << 2) /* PASID
>>>> cache */
>>> Just a clarification, this used to be an enum. You do intend to issue a
>>> single invalidation request on multiple cache types? Perhaps for
>>> virtio-IOMMU? I only see a single cache type in your patch #14. For VT-d
>>> we plan to issue one cache type at a time for now. So this format works
>>> for us.
>>
>> Yes for virtio-iommu I'd like as little overhead as possible, which
>> means a single invalidation message to hit both IOTLB and ATC at once,
>> and the ability to specify multiple pages with @nb_granules.
> The original request/explanation from Jean-Philippe can be found here:
> https://lkml.org/lkml/2019/1/28/1497
>
>>
>>> However, if multiple cache types are issued in a single invalidation.
>>> They must share a single granularity, not all combinations are valid.
>>> e.g. dev IOTLB does not support domain granularity. Just a reminder,
>>> not an issue. Driver could filter out invalid combinations.
> Sure I will add a comment about this restriction.
>>
>> Agreed. Even the core could filter out invalid combinations based on the
>> table above: IOTLB and domain granularity are N/A.
> I don't get this sentence. What about vtd IOTLB domain-selective
> invalidation:
My mistake: I meant dev-IOTLB and domain granularity are N/A
Thanks,
Jean
> "
> • IOTLB entries caching mappings associated with the specified domain-id
> are invalidated.
> • Paging-structure-cache entries caching mappings associated with the
> specified domain-id are invalidated.
> "
>
> Thanks
>
> Eric
>
>>
>> Thanks,
>> Jean
>>
>>>
>>>> + __u8 cache;
>>>> + __u8 granularity;
>>>> + __u8 padding[2];
>>>> + union {
>>>> + __u64 pasid;
>>>> + struct iommu_inv_addr_info addr_info;
>>>> + };
>>>> +};
>>>> +
>>>> +
>>>> #endif /* _UAPI_IOMMU_H */
>>>
>>> [Jacob Pan]
>>> _______________________________________________
>>> iommu mailing list
>>> [email protected]
>>> https://lists.linuxfoundation.org/mailman/listinfo/iommu
>>>
>>
> _______________________________________________
> iommu mailing list
> [email protected]
> https://lists.linuxfoundation.org/mailman/listinfo/iommu
>
Hi Jean, Jacob,
On 3/21/19 3:13 PM, Jean-Philippe Brucker wrote:
> On 21/03/2019 13:54, Auger Eric wrote:
>> Hi Jacob, Jean-Philippe,
>>
>> On 3/20/19 5:50 PM, Jean-Philippe Brucker wrote:
>>> On 20/03/2019 16:37, Jacob Pan wrote:
>>> [...]
>>>>> +struct iommu_inv_addr_info {
>>>>> +#define IOMMU_INV_ADDR_FLAGS_PASID (1 << 0)
>>>>> +#define IOMMU_INV_ADDR_FLAGS_ARCHID (1 << 1)
>>>>> +#define IOMMU_INV_ADDR_FLAGS_LEAF (1 << 2)
>>>>> + __u32 flags;
>>>>> + __u32 archid;
>>>>> + __u64 pasid;
>>>>> + __u64 addr;
>>>>> + __u64 granule_size;
>>>>> + __u64 nb_granules;
>>>>> +};
>>>>> +
>>>>> +/**
>>>>> + * First level/stage invalidation information
>>>>> + * @cache: bitfield that allows to select which caches to invalidate
>>>>> + * @granularity: defines the lowest granularity used for the
>>>>> invalidation:
>>>>> + * domain > pasid > addr
>>>>> + *
>>>>> + * Not all the combinations of cache/granularity make sense:
>>>>> + *
>>>>> + * type         |   DEV_IOTLB   |     IOTLB     |     PASID     |
>>>>> + * granularity  |               |               |     cache     |
>>>>> + * -------------+---------------+---------------+---------------+
>>>>> + * DOMAIN       |      N/A      |       Y       |       Y       |
>>>>> + * PASID        |       Y       |       Y       |       Y       |
>>>>> + * ADDR         |       Y       |       Y       |      N/A      |
>>>>> + */
>>>>> +struct iommu_cache_invalidate_info {
>>>>> +#define IOMMU_CACHE_INVALIDATE_INFO_VERSION_1 1
>>>>> + __u32 version;
>>>>> +/* IOMMU paging structure cache */
>>>>> +#define IOMMU_CACHE_INV_TYPE_IOTLB (1 << 0) /* IOMMU IOTLB */
>>>>> +#define IOMMU_CACHE_INV_TYPE_DEV_IOTLB (1 << 1) /* Device
>>>>> IOTLB */ +#define IOMMU_CACHE_INV_TYPE_PASID (1 << 2) /* PASID
>>>>> cache */
>>>> Just a clarification, this used to be an enum. You do intend to issue a
>>>> single invalidation request on multiple cache types? Perhaps for
>>>> virtio-IOMMU? I only see a single cache type in your patch #14. For VT-d
>>>> we plan to issue one cache type at a time for now. So this format works
>>>> for us.
>>>
>>> Yes for virtio-iommu I'd like as little overhead as possible, which
>>> means a single invalidation message to hit both IOTLB and ATC at once,
>>> and the ability to specify multiple pages with @nb_granules.
>> The original request/explanation from Jean-Philippe can be found here:
>> https://lkml.org/lkml/2019/1/28/1497
>>
>>>
>>>> However, if multiple cache types are issued in a single invalidation.
>>>> They must share a single granularity, not all combinations are valid.
>>>> e.g. dev IOTLB does not support domain granularity. Just a reminder,
>>>> not an issue. Driver could filter out invalid combinations.
>> Sure I will add a comment about this restriction.
>>>
>>> Agreed. Even the core could filter out invalid combinations based on the
>>> table above: IOTLB and domain granularity are N/A.
>> I don't get this sentence. What about vtd IOTLB domain-selective
>> invalidation:
>
> My mistake: I meant dev-IOTLB and domain granularity are N/A
Ah OK, no worries.
How do we proceed further with those user APIs? Besides the comment to
be added above and the previous suggestion from Jean ("Invalidations by
@granularity use field ..."), have we reached a consensus now on:
- attach/detach_pasid_table
- cache_invalidate
- fault data and fault report API?
If not, please let me know.
Thanks
Eric
>
> Thanks,
> Jean
>
>> "
>> • IOTLB entries caching mappings associated with the specified domain-id
>> are invalidated.
>> • Paging-structure-cache entries caching mappings associated with the
>> specified domain-id are invalidated.
>> "
>>
>> Thanks
>>
>> Eric
>>
>>>
>>> Thanks,
>>> Jean
>>>
>>>>
>>>>> + __u8 cache;
>>>>> + __u8 granularity;
>>>>> + __u8 padding[2];
>>>>> + union {
>>>>> + __u64 pasid;
>>>>> + struct iommu_inv_addr_info addr_info;
>>>>> + };
>>>>> +};
>>>>> +
>>>>> +
>>>>> #endif /* _UAPI_IOMMU_H */
>>>>
>>>> [Jacob Pan]
>>>> _______________________________________________
>>>> iommu mailing list
>>>> [email protected]
>>>> https://lists.linuxfoundation.org/mailman/listinfo/iommu
>>>>
>>>
>> _______________________________________________
>> iommu mailing list
>> [email protected]
>> https://lists.linuxfoundation.org/mailman/listinfo/iommu
>>
>
On Sun, 17 Mar 2019 18:22:13 +0100
Eric Auger <[email protected]> wrote:
> From: Jacob Pan <[email protected]>
>
> Traditionally, device specific faults are detected and handled within
> their own device drivers. When IOMMU is enabled, faults such as DMA
> related transactions are detected by IOMMU. There is no generic
> reporting mechanism to report faults back to the in-kernel device
> driver or the guest OS in case of assigned devices.
>
> This patch introduces a registration API for device specific fault
> handlers. This differs from the existing iommu_set_fault_handler/
> report_iommu_fault infrastructures in several ways:
> - it allows to report more sophisticated fault events (both
> unrecoverable faults and page request faults) due to the nature
> of the iommu_fault struct
> - it is device specific and not domain specific.
>
> The current iommu_report_device_fault() implementation only handles
> the "shoot and forget" unrecoverable fault case. Handling of page
> request faults or stalled faults will come later.
>
> Signed-off-by: Jacob Pan <[email protected]>
> Signed-off-by: Ashok Raj <[email protected]>
> Signed-off-by: Jean-Philippe Brucker <[email protected]>
> Signed-off-by: Eric Auger <[email protected]>
>
> ---
>
> v4 -> v5:
> - remove stuff related to recoverable faults
> ---
> drivers/iommu/iommu.c | 134 +++++++++++++++++++++++++++++++++++++++++-
> include/linux/iommu.h | 36 +++++++++++-
> 2 files changed, 168 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
> index 33a982e33716..56d5bf68de53 100644
> --- a/drivers/iommu/iommu.c
> +++ b/drivers/iommu/iommu.c
> @@ -648,6 +648,13 @@ int iommu_group_add_device(struct iommu_group *group, struct device *dev)
> goto err_free_name;
> }
>
> + dev->iommu_param = kzalloc(sizeof(*dev->iommu_param), GFP_KERNEL);
> + if (!dev->iommu_param) {
> + ret = -ENOMEM;
> + goto err_free_name;
> + }
> + mutex_init(&dev->iommu_param->lock);
> +
> kobject_get(group->devices_kobj);
>
> dev->iommu_group = group;
> @@ -678,6 +685,7 @@ int iommu_group_add_device(struct iommu_group *group, struct device *dev)
> mutex_unlock(&group->mutex);
> dev->iommu_group = NULL;
> kobject_put(group->devices_kobj);
> + kfree(dev->iommu_param);
> err_free_name:
> kfree(device->name);
> err_remove_link:
> @@ -724,7 +732,7 @@ void iommu_group_remove_device(struct device *dev)
> sysfs_remove_link(&dev->kobj, "iommu_group");
>
> trace_remove_device_from_group(group->id, dev);
> -
> + kfree(dev->iommu_param);
> kfree(device->name);
> kfree(device);
> dev->iommu_group = NULL;
> @@ -858,6 +866,130 @@ int iommu_group_unregister_notifier(struct iommu_group *group,
> }
> EXPORT_SYMBOL_GPL(iommu_group_unregister_notifier);
>
> +/**
> + * iommu_register_device_fault_handler() - Register a device fault handler
> + * @dev: the device
> + * @handler: the fault handler
> + * @data: private data passed as argument to the handler
> + *
> + * When an IOMMU fault event is received, this handler gets called with the
> + * fault event and data as argument.
> + *
> + * Return 0 if the fault handler was installed successfully, or an error.
> + */
> +int iommu_register_device_fault_handler(struct device *dev,
> + iommu_dev_fault_handler_t handler,
> + void *data)
> +{
> + struct iommu_param *param = dev->iommu_param;
> + int ret = 0;
> +
> + /*
> + * Device iommu_param should have been allocated when device is
> + * added to its iommu_group.
> + */
> + if (!param)
> + return -EINVAL;
> +
> + mutex_lock(¶m->lock);
> + /* Only allow one fault handler registered for each device */
> + if (param->fault_param) {
> + ret = -EBUSY;
> + goto done_unlock;
> + }
> +
> + get_device(dev);
> + param->fault_param =
> + kzalloc(sizeof(struct iommu_fault_param), GFP_KERNEL);
> + if (!param->fault_param) {
> + put_device(dev);
> + ret = -ENOMEM;
> + goto done_unlock;
> + }
> + mutex_init(¶m->fault_param->lock);
> + param->fault_param->handler = handler;
> + param->fault_param->data = data;
> + INIT_LIST_HEAD(¶m->fault_param->faults);
> +
> +done_unlock:
> + mutex_unlock(¶m->lock);
> +
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(iommu_register_device_fault_handler);
> +
> +/**
> + * iommu_unregister_device_fault_handler() - Unregister the device fault handler
> + * @dev: the device
> + *
> + * Remove the device fault handler installed with
> + * iommu_register_device_fault_handler().
> + *
> + * Return 0 on success, or an error.
> + */
> +int iommu_unregister_device_fault_handler(struct device *dev)
> +{
> + struct iommu_param *param = dev->iommu_param;
> + int ret = 0;
> +
> + if (!param)
> + return -EINVAL;
> +
> + mutex_lock(¶m->lock);
> +
> + if (!param->fault_param)
> + goto unlock;
> +
> + /* we cannot unregister handler if there are pending faults */
> + if (!list_empty(¶m->fault_param->faults)) {
> + ret = -EBUSY;
> + goto unlock;
> + }
> +
> + kfree(param->fault_param);
> + param->fault_param = NULL;
> + put_device(dev);
> +unlock:
> + mutex_unlock(¶m->lock);
> +
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(iommu_unregister_device_fault_handler);
> +
> +
> +/**
> + * iommu_report_device_fault() - Report fault event to device
> + * @dev: the device
> + * @evt: fault event data
> + *
> + * Called by IOMMU model specific drivers when fault is detected, typically
> + * in a threaded IRQ handler.
> + *
> + * Return 0 on success, or an error.
> + */
> +int iommu_report_device_fault(struct device *dev, struct iommu_fault_event *evt)
> +{
> + struct iommu_fault_param *fparam;
> + int ret = 0;
> +
Nit, for consistency with above functions, it'd be useful to have:
struct iommu_param *param = dev->iommu_param;
It's not as obvious as it could be that we're using the same mutex here
as in the register/unregister above. Thanks,
Alex
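i.e. something along these lines (same logic as in the patch, just going
through a local param pointer so the shared lock is obvious):

	int iommu_report_device_fault(struct device *dev, struct iommu_fault_event *evt)
	{
		struct iommu_param *param = dev->iommu_param;
		struct iommu_fault_param *fparam;
		int ret;

		/* iommu_param is allocated when the device is added to its group */
		if (!param || !evt)
			return -EINVAL;

		/* only report the fault if a handler is registered */
		mutex_lock(&param->lock);
		fparam = param->fault_param;
		if (!fparam || !fparam->handler) {
			ret = -EINVAL;
			goto done_unlock;
		}
		ret = fparam->handler(evt, fparam->data);
	done_unlock:
		mutex_unlock(&param->lock);
		return ret;
	}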
> + /* iommu_param is allocated when device is added to group */
> + if (!dev->iommu_param || !evt)
> + return -EINVAL;
> + /* we only report device fault if there is a handler registered */
> + mutex_lock(&dev->iommu_param->lock);
> + if (!dev->iommu_param->fault_param ||
> + !dev->iommu_param->fault_param->handler) {
> + ret = -EINVAL;
> + goto done_unlock;
> + }
> + fparam = dev->iommu_param->fault_param;
> + ret = fparam->handler(evt, fparam->data);
> +done_unlock:
> + mutex_unlock(&dev->iommu_param->lock);
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(iommu_report_device_fault);
> +
> /**
> * iommu_group_id - Return ID for a group
> * @group: the group to ID
> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
> index c6f398f7e6e0..aeb4b615cb44 100644
> --- a/include/linux/iommu.h
> +++ b/include/linux/iommu.h
> @@ -257,11 +257,13 @@ struct iommu_device {
> * unrecoverable faults such as DMA or IRQ remapping faults.
> *
> * @fault: fault descriptor
> + * @list pending fault event list, used for tracking responses
> * @iommu_private: used by the IOMMU driver for storing fault-specific
> * data. Users should not modify this field before
> * sending the fault response.
> */
> struct iommu_fault_event {
> + struct list_head list;
> struct iommu_fault fault;
> u64 iommu_private;
> };
> @@ -270,10 +272,13 @@ struct iommu_fault_event {
> * struct iommu_fault_param - per-device IOMMU fault data
> * @dev_fault_handler: Callback function to handle IOMMU faults at device level
> * @data: handler private data
> - *
> + * @faults: holds the pending faults which needs response, e.g. page response.
> + * @lock: protect pending PRQ event list
> */
> struct iommu_fault_param {
> iommu_dev_fault_handler_t handler;
> + struct list_head faults;
> + struct mutex lock;
> void *data;
> };
>
> @@ -287,6 +292,7 @@ struct iommu_fault_param {
> * struct iommu_fwspec *iommu_fwspec;
> */
> struct iommu_param {
> + struct mutex lock;
> struct iommu_fault_param *fault_param;
> };
>
> @@ -379,6 +385,15 @@ extern int iommu_group_register_notifier(struct iommu_group *group,
> struct notifier_block *nb);
> extern int iommu_group_unregister_notifier(struct iommu_group *group,
> struct notifier_block *nb);
> +extern int iommu_register_device_fault_handler(struct device *dev,
> + iommu_dev_fault_handler_t handler,
> + void *data);
> +
> +extern int iommu_unregister_device_fault_handler(struct device *dev);
> +
> +extern int iommu_report_device_fault(struct device *dev,
> + struct iommu_fault_event *evt);
> +
> extern int iommu_group_id(struct iommu_group *group);
> extern struct iommu_group *iommu_group_get_for_dev(struct device *dev);
> extern struct iommu_domain *iommu_group_default_domain(struct iommu_group *);
> @@ -659,6 +674,25 @@ static inline int iommu_group_unregister_notifier(struct iommu_group *group,
> return 0;
> }
>
> +static inline
> +int iommu_register_device_fault_handler(struct device *dev,
> + iommu_dev_fault_handler_t handler,
> + void *data)
> +{
> + return -ENODEV;
> +}
> +
> +static inline int iommu_unregister_device_fault_handler(struct device *dev)
> +{
> + return 0;
> +}
> +
> +static inline
> +int iommu_report_device_fault(struct device *dev, struct iommu_fault_event *evt)
> +{
> + return -ENODEV;
> +}
> +
> static inline int iommu_group_id(struct iommu_group *group)
> {
> return -ENODEV;
On Sun, 17 Mar 2019 18:22:12 +0100
Eric Auger <[email protected]> wrote:
> From: Jacob Pan <[email protected]>
>
> Device faults detected by IOMMU can be reported outside the IOMMU
> subsystem for further processing. This patch introduces
> a generic device fault data structure.
>
> The fault can be either an unrecoverable fault or a page request,
> also referred to as a recoverable fault.
>
> We only care about non internal faults that are likely to be reported
> to an external subsystem.
>
> Signed-off-by: Jacob Pan <[email protected]>
> Signed-off-by: Jean-Philippe Brucker <[email protected]>
> Signed-off-by: Liu, Yi L <[email protected]>
> Signed-off-by: Ashok Raj <[email protected]>
> Signed-off-by: Eric Auger <[email protected]>
>
> ---
> v4 -> v5:
> - simplified struct iommu_fault_event comment
> - Moved IOMMU_FAULT_PERM outside of the struct
> - Removed IOMMU_FAULT_PERM_INST
> - s/IOMMU_FAULT_PAGE_REQUEST_PASID_PRESENT/
> IOMMU_FAULT_PAGE_REQUEST_PASID_VALID
>
> v3 -> v4:
> - use a union containing aither an unrecoverable fault or a page
> request message. Move the device private data in the page request
> structure. Reshuffle the fields and use flags.
> - move fault perm attributes to the uapi
> - remove a bunch of iommu_fault_reason enum values that were related
> to internal errors
> ---
> include/linux/iommu.h | 44 ++++++++++++++
> include/uapi/linux/iommu.h | 115 +++++++++++++++++++++++++++++++++++++
> 2 files changed, 159 insertions(+)
> create mode 100644 include/uapi/linux/iommu.h
>
> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
> index ffbbc7e39cee..c6f398f7e6e0 100644
> --- a/include/linux/iommu.h
> +++ b/include/linux/iommu.h
> @@ -25,6 +25,7 @@
> #include <linux/errno.h>
> #include <linux/err.h>
> #include <linux/of.h>
> +#include <uapi/linux/iommu.h>
>
> #define IOMMU_READ (1 << 0)
> #define IOMMU_WRITE (1 << 1)
> @@ -48,6 +49,7 @@ struct bus_type;
> struct device;
> struct iommu_domain;
> struct notifier_block;
> +struct iommu_fault_event;
>
> /* iommu fault flags */
> #define IOMMU_FAULT_READ 0x0
> @@ -55,6 +57,7 @@ struct notifier_block;
>
>  typedef int (*iommu_fault_handler_t)(struct iommu_domain *,
>  			struct device *, unsigned long, int, void *);
> +typedef int (*iommu_dev_fault_handler_t)(struct iommu_fault_event *, void *);
>
>  struct iommu_domain_geometry {
>  	dma_addr_t aperture_start; /* First address that can be mapped */
> @@ -247,6 +250,46 @@ struct iommu_device {
> struct device *dev;
> };
>
> +/**
> + * struct iommu_fault_event - Generic fault event
> + *
> + * Can represent recoverable faults such as a page requests or
> + * unrecoverable faults such as DMA or IRQ remapping faults.
> + *
> + * @fault: fault descriptor
> + * @iommu_private: used by the IOMMU driver for storing fault-specific
> + *                 data. Users should not modify this field before
> + *                 sending the fault response.
> + */
> +struct iommu_fault_event {
> + struct iommu_fault fault;
> + u64 iommu_private;
> +};
> +
> +/**
> + * struct iommu_fault_param - per-device IOMMU fault data
> + * @dev_fault_handler: Callback function to handle IOMMU faults at device level
> + * @data: handler private data
> + *
> + */
> +struct iommu_fault_param {
> + iommu_dev_fault_handler_t handler;
> + void *data;
> +};
> +
> +/**
> + * struct iommu_param - collection of per-device IOMMU data
> + *
> + * @fault_param: IOMMU detected device fault reporting data
> + *
> + * TODO: migrate other per device data pointers under iommu_dev_data, e.g.
> + * struct iommu_group *iommu_group;
> + * struct iommu_fwspec *iommu_fwspec;
> + */
> +struct iommu_param {
> + struct iommu_fault_param *fault_param;
> +};
> +
> int iommu_device_register(struct iommu_device *iommu);
> void iommu_device_unregister(struct iommu_device *iommu);
> int iommu_device_sysfs_add(struct iommu_device *iommu,
> @@ -422,6 +465,7 @@ struct iommu_ops {};
> struct iommu_group {};
> struct iommu_fwspec {};
> struct iommu_device {};
> +struct iommu_fault_param {};
>
> static inline bool iommu_present(struct bus_type *bus)
> {
> diff --git a/include/uapi/linux/iommu.h b/include/uapi/linux/iommu.h
> new file mode 100644
> index 000000000000..edcc0dda7993
> --- /dev/null
> +++ b/include/uapi/linux/iommu.h
> @@ -0,0 +1,115 @@
> +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
> +/*
> + * IOMMU user API definitions
> + */
> +
> +#ifndef _UAPI_IOMMU_H
> +#define _UAPI_IOMMU_H
> +
> +#include <linux/types.h>
> +
> +#define IOMMU_FAULT_PERM_WRITE (1 << 0) /* write */
> +#define IOMMU_FAULT_PERM_EXEC (1 << 1) /* exec */
> +#define IOMMU_FAULT_PERM_PRIV (1 << 2) /* privileged */
> +
> +/* Generic fault types, can be expanded IRQ remapping fault */
> +enum iommu_fault_type {
> + IOMMU_FAULT_DMA_UNRECOV = 1, /* unrecoverable fault */
> + IOMMU_FAULT_PAGE_REQ, /* page request fault */
> +};
> +
> +enum iommu_fault_reason {
> + IOMMU_FAULT_REASON_UNKNOWN = 0,
> +
> +	/* Could not access the PASID table (fetch caused external abort) */
> + IOMMU_FAULT_REASON_PASID_FETCH,
> +
> + /* pasid entry is invalid or has configuration errors */
> + IOMMU_FAULT_REASON_BAD_PASID_ENTRY,
> +
> + /*
> + * PASID is out of range (e.g. exceeds the maximum PASID
> + * supported by the IOMMU) or disabled.
> + */
> + IOMMU_FAULT_REASON_PASID_INVALID,
> +
> + /*
> +	 * An external abort occurred fetching (or updating) a translation
> +	 * table descriptor
> + */
> + IOMMU_FAULT_REASON_WALK_EABT,
> +
> + /*
> + * Could not access the page table entry (Bad address),
> + * actual translation fault
> + */
> + IOMMU_FAULT_REASON_PTE_FETCH,
> +
> + /* Protection flag check failed */
> + IOMMU_FAULT_REASON_PERMISSION,
> +
> + /* access flag check failed */
> + IOMMU_FAULT_REASON_ACCESS,
> +
> +	/* Output address of a translation stage caused Address Size fault */
> + IOMMU_FAULT_REASON_OOR_ADDRESS,
> +};
> +
For VT-d scalable mode, fault reason can be further split into first
level faults and second level faults. But since we pin the second level
today and the guest owns the first level, we only need to inject faults
of the FL to the vIOMMU. So I think this is fine today; this enum can be
extended w/o a new version of the structure.
> +/**
> + * Unrecoverable fault data
> + * @reason: reason of the fault
> + * @addr: offending page address
> + * @fetch_addr: address that caused a fetch abort, if any
> + * @pasid: contains process address space ID, used in shared virtual memory
> + * @perm: requested permission access used by the incoming transaction
> + *        (IOMMU_FAULT_PERM_* values)
> + */
> +struct iommu_fault_unrecoverable {
> + __u32 reason; /* enum iommu_fault_reason */
> +#define IOMMU_FAULT_UNRECOV_PASID_VALID (1 << 0)
> +#define IOMMU_FAULT_UNRECOV_PERM_VALID (1 << 1)
> +#define IOMMU_FAULT_UNRECOV_ADDR_VALID (1 << 2)
> +#define IOMMU_FAULT_UNRECOV_FETCH_ADDR_VALID (1 << 3)
> + __u32 flags;
> + __u32 pasid;
> + __u32 perm;
> + __u64 addr;
> + __u64 fetch_addr;
> +};
> +
> +/*
> + * Page Request data (aka. recoverable fault data)
> + * @flags : encodes whether the pasid is valid and whether this
> + * is the last page in group
> + * @pasid: pasid
> + * @grpid: page request group index
> + * @perm: requested page permissions (IOMMU_FAULT_PERM_* values)
> + * @addr: page address
> + */
> +struct iommu_fault_page_request {
> +#define IOMMU_FAULT_PAGE_REQUEST_PASID_VALID (1 << 0)
> +#define IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE (1 << 1)
> +#define IOMMU_FAULT_PAGE_REQUEST_PRIV_DATA (1 << 2)
> + __u32 flags;
> + __u32 pasid;
> + __u32 grpid;
> + __u32 perm;
> + __u64 addr;
> + __u64 private_data[2];
> +};
> +
> +/**
> + * struct iommu_fault - Generic fault data
> + *
> + * @type contains fault type
> + */
> +
> +struct iommu_fault {
> + __u32 type; /* enum iommu_fault_type */
> + __u32 reserved;
> + union {
> + struct iommu_fault_unrecoverable event;
> + struct iommu_fault_page_request prm;
> + };
> +};
> +#endif /* _UAPI_IOMMU_H */
[Jacob Pan]
On Thu, 21 Mar 2019 15:32:45 +0100
Auger Eric <[email protected]> wrote:
> Hi jean, Jacob,
>
> On 3/21/19 3:13 PM, Jean-Philippe Brucker wrote:
> > On 21/03/2019 13:54, Auger Eric wrote:
> >> Hi Jacob, Jean-Philippe,
> >>
> >> On 3/20/19 5:50 PM, Jean-Philippe Brucker wrote:
> >>> On 20/03/2019 16:37, Jacob Pan wrote:
> >>> [...]
> >>>>> +struct iommu_inv_addr_info {
> >>>>> +#define IOMMU_INV_ADDR_FLAGS_PASID (1 << 0)
> >>>>> +#define IOMMU_INV_ADDR_FLAGS_ARCHID (1 << 1)
> >>>>> +#define IOMMU_INV_ADDR_FLAGS_LEAF (1 << 2)
> >>>>> + __u32 flags;
> >>>>> + __u32 archid;
> >>>>> + __u64 pasid;
> >>>>> + __u64 addr;
> >>>>> + __u64 granule_size;
> >>>>> + __u64 nb_granules;
> >>>>> +};
> >>>>> +
> >>>>> +/**
> >>>>> + * First level/stage invalidation information
> >>>>> + * @cache: bitfield that allows to select which caches to
> >>>>> invalidate
> >>>>> + * @granularity: defines the lowest granularity used for the
> >>>>> invalidation:
> >>>>> + * domain > pasid > addr
> >>>>> + *
> >>>>> + * Not all the combinations of cache/granularity make sense:
> >>>>> + *
> >>>>> + * type         |   DEV_IOTLB   |     IOTLB     |     PASID     |
> >>>>> + * granularity  |               |               |     cache     |
> >>>>> + * -------------+---------------+---------------+---------------+
> >>>>> + * DOMAIN       |      N/A      |       Y       |       Y       |
> >>>>> + * PASID        |       Y       |       Y       |       Y       |
> >>>>> + * ADDR         |       Y       |       Y       |      N/A      |
> >>>>> + */
> >>>>> +struct iommu_cache_invalidate_info {
> >>>>> +#define IOMMU_CACHE_INVALIDATE_INFO_VERSION_1 1
> >>>>> + __u32 version;
> >>>>> +/* IOMMU paging structure cache */
> >>>>> +#define IOMMU_CACHE_INV_TYPE_IOTLB (1 << 0) /* IOMMU
> >>>>> IOTLB */ +#define IOMMU_CACHE_INV_TYPE_DEV_IOTLB (1 <<
> >>>>> 1) /* Device IOTLB */ +#define
> >>>>> IOMMU_CACHE_INV_TYPE_PASID (1 << 2) /* PASID cache */
> >>>> Just a clarification, this used to be an enum. You do intend to
> >>>> issue a single invalidation request on multiple cache types?
> >>>> Perhaps for virtio-IOMMU? I only see a single cache type in your
> >>>> patch #14. For VT-d we plan to issue one cache type at a time
> >>>> for now. So this format works for us.
> >>>
> >>> Yes for virtio-iommu I'd like as little overhead as possible,
> >>> which means a single invalidation message to hit both IOTLB and
> >>> ATC at once, and the ability to specify multiple pages with
> >>> @nb_granules.
> >> The original request/explanation from Jean-Philippe can be found
> >> here: https://lkml.org/lkml/2019/1/28/1497
> >>
> >>>
> >>>> However, if multiple cache types are issued in a single
> >>>> invalidation. They must share a single granularity, not all
> >>>> combinations are valid. e.g. dev IOTLB does not support domain
> >>>> granularity. Just a reminder, not an issue. Driver could filter
> >>>> out invalid combinations.
> >> Sure I will add a comment about this restriction.
> >>>
> >>> Agreed. Even the core could filter out invalid combinations based
> >>> on the table above: IOTLB and domain granularity are N/A.
> >> I don't get this sentence. What about vtd IOTLB domain-selective
> >> invalidation:
> >
> > My mistake: I meant dev-IOTLB and domain granularity are N/A
>
> Ah OK, no worries.
>
> How do we proceed further with those user APIs? Besides the comment to
> be added above and previous suggestion from Jean ("Invalidations by
> @granularity use field ...), have we reached a consensus now on:
>
> - attach/detach_pasid_table
> - cache_invalidate
> - fault data and fault report API?
>
These APIs are sufficient for today's VT-d use and leave enough space
for extension. E.g. new fault reasons.
I have cherry picked the above APIs from your patchset to enable VT-d
nested translation. Going forward, I will reuse these until they get
merged.
> If not, please let me know.
>
> Thanks
>
> Eric
>
>
> >
> > Thanks,
> > Jean
> >
> >> "
> >> • IOTLB entries caching mappings associated with the specified
> >> domain-id are invalidated.
> >> • Paging-structure-cache entries caching mappings associated with
> >> the specified domain-id are invalidated.
> >> "
> >>
> >> Thanks
> >>
> >> Eric
> >>
> >>>
> >>> Thanks,
> >>> Jean
> >>>
> >>>>
> >>>>> + __u8 cache;
> >>>>> + __u8 granularity;
> >>>>> + __u8 padding[2];
> >>>>> + union {
> >>>>> + __u64 pasid;
> >>>>> + struct iommu_inv_addr_info addr_info;
> >>>>> + };
> >>>>> +};
> >>>>> +
> >>>>> +
> >>>>> #endif /* _UAPI_IOMMU_H */
> >>>>
> >>>> [Jacob Pan]
> >>>> _______________________________________________
> >>>> iommu mailing list
> >>>> [email protected]
> >>>> https://lists.linuxfoundation.org/mailman/listinfo/iommu
> >>>>
> >>>
> >> _______________________________________________
> >> iommu mailing list
> >> [email protected]
> >> https://lists.linuxfoundation.org/mailman/listinfo/iommu
> >>
> >
[Jacob Pan]
On Sun, 17 Mar 2019 18:22:17 +0100
Eric Auger <[email protected]> wrote:
> From: "Liu, Yi L" <[email protected]>
>
> This patch adds VFIO_IOMMU_ATTACH/DETACH_PASID_TABLE ioctl
> which aims to pass/withdraw the virtual iommu guest configuration
> to/from the VFIO driver downto to the iommu subsystem.
>
> Signed-off-by: Jacob Pan <[email protected]>
> Signed-off-by: Liu, Yi L <[email protected]>
> Signed-off-by: Eric Auger <[email protected]>
>
> ---
> v3 -> v4:
> - restore ATTACH/DETACH
> - add unwind on failure
>
> v2 -> v3:
> - s/BIND_PASID_TABLE/SET_PASID_TABLE
>
> v1 -> v2:
> - s/BIND_GUEST_STAGE/BIND_PASID_TABLE
> - remove the struct device arg
> ---
> drivers/vfio/vfio_iommu_type1.c | 53 +++++++++++++++++++++++++++++++++
> include/uapi/linux/vfio.h | 17 +++++++++++
> 2 files changed, 70 insertions(+)
>
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index 73652e21efec..222e9199edbf 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -1644,6 +1644,43 @@ static int vfio_domains_have_iommu_cache(struct vfio_iommu *iommu)
> return ret;
> }
>
> +static void
> +vfio_detach_pasid_table(struct vfio_iommu *iommu)
> +{
> + struct vfio_domain *d;
> +
> + mutex_lock(&iommu->lock);
> +
> + list_for_each_entry(d, &iommu->domain_list, next) {
> + iommu_detach_pasid_table(d->domain);
> + }
> + mutex_unlock(&iommu->lock);
> +}
> +
> +static int
> +vfio_attach_pasid_table(struct vfio_iommu *iommu,
> + struct vfio_iommu_type1_attach_pasid_table *ustruct)
> +{
> + struct vfio_domain *d;
> + int ret = 0;
> +
> + mutex_lock(&iommu->lock);
> +
> + list_for_each_entry(d, &iommu->domain_list, next) {
> + ret = iommu_attach_pasid_table(d->domain, &ustruct->config);
> + if (ret)
> + goto unwind;
> + }
> + goto unlock;
> +unwind:
> + list_for_each_entry_continue_reverse(d, &iommu->domain_list, next) {
> + iommu_detach_pasid_table(d->domain);
> + }
> +unlock:
> + mutex_unlock(&iommu->lock);
> + return ret;
> +}
> +
> static long vfio_iommu_type1_ioctl(void *iommu_data,
> unsigned int cmd, unsigned long arg)
> {
> @@ -1714,6 +1751,22 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
>
> return copy_to_user((void __user *)arg, &unmap, minsz) ?
> -EFAULT : 0;
> + } else if (cmd == VFIO_IOMMU_ATTACH_PASID_TABLE) {
> + struct vfio_iommu_type1_attach_pasid_table ustruct;
> +
> + minsz = offsetofend(struct vfio_iommu_type1_attach_pasid_table,
> + config);
> +
> + if (copy_from_user(&ustruct, (void __user *)arg, minsz))
> + return -EFAULT;
> +
> + if (ustruct.argsz < minsz || ustruct.flags)
> + return -EINVAL;
Who is responsible for validating the ustruct.config?
arm_smmu_attach_pasid_table() only looks at the format, ignoring the
version field :-\ In fact, where is struct iommu_pasid_smmuv3 ever used
from the config? Should the padding fields be forced to zero? We
don't have flags to incorporate use of them with future extensions, so
maybe we don't care?
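For instance, a first pass of validation at the top of
arm_smmu_attach_pasid_table() could look like the sketch below. The
constant names are made up here; only the version/format fields and the
smmuv3 member come from the uapi discussed in this thread:

	/* sketch only: reject versions/formats we do not understand */
	if (cfg->version != PASID_TABLE_CFG_VERSION_1)		/* made-up name */
		return -EINVAL;
	if (cfg->format != IOMMU_PASID_FORMAT_SMMUV3)		/* made-up name */
		return -EINVAL;
	if (cfg->smmuv3.version != PASID_TABLE_SMMUV3_CFG_VERSION_1)
		return -EINVAL;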
> +
> + return vfio_attach_pasid_table(iommu, &ustruct);
> + } else if (cmd == VFIO_IOMMU_DETACH_PASID_TABLE) {
> + vfio_detach_pasid_table(iommu);
> + return 0;
> }
>
> return -ENOTTY;
> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> index 02bb7ad6e986..329d378565d9 100644
> --- a/include/uapi/linux/vfio.h
> +++ b/include/uapi/linux/vfio.h
> @@ -14,6 +14,7 @@
>
> #include <linux/types.h>
> #include <linux/ioctl.h>
> +#include <linux/iommu.h>
>
> #define VFIO_API_VERSION 0
>
> @@ -759,6 +760,22 @@ struct vfio_iommu_type1_dma_unmap {
> #define VFIO_IOMMU_ENABLE _IO(VFIO_TYPE, VFIO_BASE + 15)
> #define VFIO_IOMMU_DISABLE _IO(VFIO_TYPE, VFIO_BASE + 16)
>
> +/**
> + * VFIO_IOMMU_ATTACH_PASID_TABLE - _IOWR(VFIO_TYPE, VFIO_BASE + 22,
> + * struct vfio_iommu_type1_attach_pasid_table)
> + *
> + * Passes the PASID table to the host. Calling ATTACH_PASID_TABLE
> + * while a table is already installed is allowed: it replaces the old
> + * table. DETACH does a comprehensive tear down of the nested mode.
> + */
> +struct vfio_iommu_type1_attach_pasid_table {
> + __u32 argsz;
> + __u32 flags;
> + struct iommu_pasid_table_config config;
> +};
> +#define VFIO_IOMMU_ATTACH_PASID_TABLE _IO(VFIO_TYPE, VFIO_BASE + 22)
> +#define VFIO_IOMMU_DETACH_PASID_TABLE _IO(VFIO_TYPE, VFIO_BASE + 23)
> +
DETACH should also be documented, it's not clear from the uapi that it
requires no parameters. Thanks,
Alex
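Something like the sketch below, next to the ATTACH comment, would
probably be enough (wording is only a suggestion):

/**
 * VFIO_IOMMU_DETACH_PASID_TABLE - _IO(VFIO_TYPE, VFIO_BASE + 23)
 *
 * Detaches the PASID table previously attached with
 * VFIO_IOMMU_ATTACH_PASID_TABLE and tears down the nested mode.
 * Takes no argument.
 */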
On Sun, 17 Mar 2019 18:22:18 +0100
Eric Auger <[email protected]> wrote:
> From: "Liu, Yi L" <[email protected]>
>
> When the guest "owns" the stage 1 translation structures, the host
> IOMMU driver has no knowledge of caching structure updates unless
> the guest invalidation requests are trapped and passed down to the
> host.
>
> This patch adds the VFIO_IOMMU_CACHE_INVALIDATE ioctl with aims
> at propagating guest stage1 IOMMU cache invalidations to the host.
>
> Signed-off-by: Liu, Yi L <[email protected]>
> Signed-off-by: Eric Auger <[email protected]>
>
> ---
>
> v2 -> v3:
> - introduce vfio_iommu_for_each_dev back in this patch
>
> v1 -> v2:
> - s/TLB/CACHE
> - remove vfio_iommu_task usage
> - commit message rewording
> ---
> drivers/vfio/vfio_iommu_type1.c | 47 +++++++++++++++++++++++++++++++++
> include/uapi/linux/vfio.h | 13 +++++++++
> 2 files changed, 60 insertions(+)
>
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index 222e9199edbf..12a40b9db6aa 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -113,6 +113,26 @@ struct vfio_regions {
> #define IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu) \
> (!list_empty(&iommu->domain_list))
>
static struct foo {
struct iommu_domain *domain;
void *data;
};
> +/* iommu->lock must be held */
> +static int
> +vfio_iommu_for_each_dev(struct vfio_iommu *iommu, void *data,
> + int (*fn)(struct device *, void *))
> +{
> + struct vfio_domain *d;
> + struct vfio_group *g;
> + int ret = 0;
struct foo bar = { .data = data };
> +
> + list_for_each_entry(d, &iommu->domain_list, next) {
bar.domain = d->domain;
> + list_for_each_entry(g, &d->group_list, next) {
> + ret = iommu_group_for_each_dev(g->iommu_group,
> + data, fn);
s/data/&bar/
> + if (ret)
> + break;
> + }
> + }
> + return ret;
> +}
> +
> static int put_pfn(unsigned long pfn, int prot);
>
> /*
> @@ -1681,6 +1701,15 @@ vfio_attach_pasid_table(struct vfio_iommu *iommu,
> return ret;
> }
>
> +static int vfio_cache_inv_fn(struct device *dev, void *data)
> +{
struct foo *bar = data;
> + struct vfio_iommu_type1_cache_invalidate *ustruct =
> + (struct vfio_iommu_type1_cache_invalidate *)data;
... = bar->data;
> + struct iommu_domain *d = iommu_get_domain_for_dev(dev);
... = bar->domain;
¯\_(ツ)_/¯ seems more efficient than doing a lookup.
> +
> + return iommu_cache_invalidate(d, dev, &ustruct->info);
> +}
> +
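Putting the suggestion together, the wrapper and the callback would look
roughly like this ('foo'/'bar' above stand for the same thing, the struct
name below is only illustrative):

	struct vfio_iommu_for_each_data {
		struct iommu_domain *domain;
		void *data;
	};

	/* iommu->lock must be held */
	static int
	vfio_iommu_for_each_dev(struct vfio_iommu *iommu, void *data,
				int (*fn)(struct device *, void *))
	{
		struct vfio_iommu_for_each_data bar = { .data = data };
		struct vfio_domain *d;
		struct vfio_group *g;
		int ret = 0;

		list_for_each_entry(d, &iommu->domain_list, next) {
			bar.domain = d->domain;
			list_for_each_entry(g, &d->group_list, next) {
				ret = iommu_group_for_each_dev(g->iommu_group,
							       &bar, fn);
				if (ret)
					break;
			}
		}
		return ret;
	}

	static int vfio_cache_inv_fn(struct device *dev, void *data)
	{
		struct vfio_iommu_for_each_data *bar = data;
		struct vfio_iommu_type1_cache_invalidate *ustruct = bar->data;

		/* use the domain we already hold instead of a per-device lookup */
		return iommu_cache_invalidate(bar->domain, dev, &ustruct->info);
	}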
> static long vfio_iommu_type1_ioctl(void *iommu_data,
> unsigned int cmd, unsigned long arg)
> {
> @@ -1767,6 +1796,24 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
> } else if (cmd == VFIO_IOMMU_DETACH_PASID_TABLE) {
> vfio_detach_pasid_table(iommu);
> return 0;
> + } else if (cmd == VFIO_IOMMU_CACHE_INVALIDATE) {
> + struct vfio_iommu_type1_cache_invalidate ustruct;
> + int ret;
> +
> + minsz = offsetofend(struct vfio_iommu_type1_cache_invalidate,
> + info);
> +
> + if (copy_from_user(&ustruct, (void __user *)arg, minsz))
> + return -EFAULT;
> +
> + if (ustruct.argsz < minsz || ustruct.flags)
> + return -EINVAL;
> +
> + mutex_lock(&iommu->lock);
> + ret = vfio_iommu_for_each_dev(iommu, &ustruct,
> + vfio_cache_inv_fn);
Guess what has a version field that never gets checked ;)
Thanks,
Alex
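Presumably something like this before walking the devices (sketch only):

		if (ustruct.info.version !=
		    IOMMU_CACHE_INVALIDATE_INFO_VERSION_1)
			return -EINVAL;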
> + mutex_unlock(&iommu->lock);
> + return ret;
> }
>
> return -ENOTTY;
> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> index 329d378565d9..29f0ef2d805d 100644
> --- a/include/uapi/linux/vfio.h
> +++ b/include/uapi/linux/vfio.h
> @@ -776,6 +776,19 @@ struct vfio_iommu_type1_attach_pasid_table {
> #define VFIO_IOMMU_ATTACH_PASID_TABLE _IO(VFIO_TYPE, VFIO_BASE + 22)
> #define VFIO_IOMMU_DETACH_PASID_TABLE _IO(VFIO_TYPE, VFIO_BASE + 23)
>
> +/**
> + * VFIO_IOMMU_CACHE_INVALIDATE - _IOWR(VFIO_TYPE, VFIO_BASE + 24,
> + * struct vfio_iommu_type1_cache_invalidate)
> + *
> + * Propagate guest IOMMU cache invalidation to the host.
> + */
> +struct vfio_iommu_type1_cache_invalidate {
> + __u32 argsz;
> + __u32 flags;
> + struct iommu_cache_invalidate_info info;
> +};
> +#define VFIO_IOMMU_CACHE_INVALIDATE _IO(VFIO_TYPE, VFIO_BASE + 24)
> +
> /* -------- Additional API for SPAPR TCE (Server POWERPC) IOMMU -------- */
>
> /*
On Sun, 17 Mar 2019 18:22:19 +0100
Eric Auger <[email protected]> wrote:
> This patch adds the VFIO_IOMMU_BIND/UNBIND_MSI ioctl which aim
> to pass/withdraw the guest MSI binding to/from the host.
>
> Signed-off-by: Eric Auger <[email protected]>
>
> ---
> v3 -> v4:
> - add UNBIND
> - unwind on BIND error
>
> v2 -> v3:
> - adapt to new proto of bind_guest_msi
> - directly use vfio_iommu_for_each_dev
>
> v1 -> v2:
> - s/vfio_iommu_type1_guest_msi_binding/vfio_iommu_type1_bind_guest_msi
> ---
> drivers/vfio/vfio_iommu_type1.c | 58 +++++++++++++++++++++++++++++++++
> include/uapi/linux/vfio.h | 29 +++++++++++++++++
> 2 files changed, 87 insertions(+)
>
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index 12a40b9db6aa..66513679081b 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -1710,6 +1710,25 @@ static int vfio_cache_inv_fn(struct device *dev, void *data)
> return iommu_cache_invalidate(d, dev, &ustruct->info);
> }
>
> +static int vfio_bind_msi_fn(struct device *dev, void *data)
> +{
> + struct vfio_iommu_type1_bind_msi *ustruct =
> + (struct vfio_iommu_type1_bind_msi *)data;
> + struct iommu_domain *d = iommu_get_domain_for_dev(dev);
> +
> + return iommu_bind_guest_msi(d, dev, ustruct->iova,
> + ustruct->gpa, ustruct->size);
> +}
> +
> +static int vfio_unbind_msi_fn(struct device *dev, void *data)
> +{
> + dma_addr_t *iova = (dma_addr_t *)data;
> + struct iommu_domain *d = iommu_get_domain_for_dev(dev);
Same as previous, we can encapsulate domain in our own struct to avoid
a lookup.
> +
> + iommu_unbind_guest_msi(d, dev, *iova);
Is it strange that iommu-core is exposing these interfaces at a device
level if every one of them requires us to walk all the devices? Thanks,
Alex
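Purely as an illustration of the question, a domain-level entry point
(not something this series provides) would let VFIO do one call per
domain instead of one per device:

	/* hypothetical iommu_domain_bind_guest_msi(), not part of this series */
	list_for_each_entry(d, &iommu->domain_list, next) {
		ret = iommu_domain_bind_guest_msi(d->domain, ustruct.iova,
						  ustruct.gpa, ustruct.size);
		if (ret)
			break;
	}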
> + return 0;
> +}
> +
> static long vfio_iommu_type1_ioctl(void *iommu_data,
> unsigned int cmd, unsigned long arg)
> {
> @@ -1814,6 +1833,45 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
> vfio_cache_inv_fn);
> mutex_unlock(&iommu->lock);
> return ret;
> + } else if (cmd == VFIO_IOMMU_BIND_MSI) {
> + struct vfio_iommu_type1_bind_msi ustruct;
> + int ret;
> +
> + minsz = offsetofend(struct vfio_iommu_type1_bind_msi,
> + size);
> +
> + if (copy_from_user(&ustruct, (void __user *)arg, minsz))
> + return -EFAULT;
> +
> + if (ustruct.argsz < minsz || ustruct.flags)
> + return -EINVAL;
> +
> + mutex_lock(&iommu->lock);
> + ret = vfio_iommu_for_each_dev(iommu, &ustruct,
> + vfio_bind_msi_fn);
> + if (ret)
> + vfio_iommu_for_each_dev(iommu, &ustruct.iova,
> + vfio_unbind_msi_fn);
> + mutex_unlock(&iommu->lock);
> + return ret;
> + } else if (cmd == VFIO_IOMMU_UNBIND_MSI) {
> + struct vfio_iommu_type1_unbind_msi ustruct;
> + int ret;
> +
> + minsz = offsetofend(struct vfio_iommu_type1_unbind_msi,
> + iova);
> +
> + if (copy_from_user(&ustruct, (void __user *)arg, minsz))
> + return -EFAULT;
> +
> + if (ustruct.argsz < minsz || ustruct.flags)
> + return -EINVAL;
> +
> + mutex_lock(&iommu->lock);
> + ret = vfio_iommu_for_each_dev(iommu, &ustruct.iova,
> + vfio_unbind_msi_fn);
> + mutex_unlock(&iommu->lock);
> + return ret;
> }
>
> return -ENOTTY;
> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> index 29f0ef2d805d..6763389b6adc 100644
> --- a/include/uapi/linux/vfio.h
> +++ b/include/uapi/linux/vfio.h
> @@ -789,6 +789,35 @@ struct vfio_iommu_type1_cache_invalidate {
> };
> #define VFIO_IOMMU_CACHE_INVALIDATE _IO(VFIO_TYPE, VFIO_BASE + 24)
>
> +/**
> + * VFIO_IOMMU_BIND_MSI - _IOWR(VFIO_TYPE, VFIO_BASE + 25,
> + * struct vfio_iommu_type1_bind_msi)
> + *
> + * Pass a stage 1 MSI doorbell mapping to the host so that this
> + * latter can build a nested stage2 mapping
> + */
> +struct vfio_iommu_type1_bind_msi {
> + __u32 argsz;
> + __u32 flags;
> + __u64 iova;
> + __u64 gpa;
> + __u64 size;
> +};
> +#define VFIO_IOMMU_BIND_MSI _IO(VFIO_TYPE, VFIO_BASE + 25)
> +
> +/**
> + * VFIO_IOMMU_UNBIND_MSI - _IOWR(VFIO_TYPE, VFIO_BASE + 26,
> + * struct vfio_iommu_type1_unbind_msi)
> + *
> + * Unregister an MSI mapping
> + */
> +struct vfio_iommu_type1_unbind_msi {
> + __u32 argsz;
> + __u32 flags;
> + __u64 iova;
> +};
> +#define VFIO_IOMMU_UNBIND_MSI _IO(VFIO_TYPE, VFIO_BASE + 26)
> +
> /* -------- Additional API for SPAPR TCE (Server POWERPC) IOMMU -------- */
>
> /*
Hi Alex,
On 3/21/19 11:19 PM, Alex Williamson wrote:
> On Sun, 17 Mar 2019 18:22:17 +0100
> Eric Auger <[email protected]> wrote:
>
>> From: "Liu, Yi L" <[email protected]>
>>
>> This patch adds VFIO_IOMMU_ATTACH/DETACH_PASID_TABLE ioctl
>> which aims to pass/withdraw the virtual iommu guest configuration
>> to/from the VFIO driver downto to the iommu subsystem.
>>
>> Signed-off-by: Jacob Pan <[email protected]>
>> Signed-off-by: Liu, Yi L <[email protected]>
>> Signed-off-by: Eric Auger <[email protected]>
>>
>> ---
>> v3 -> v4:
>> - restore ATTACH/DETACH
>> - add unwind on failure
>>
>> v2 -> v3:
>> - s/BIND_PASID_TABLE/SET_PASID_TABLE
>>
>> v1 -> v2:
>> - s/BIND_GUEST_STAGE/BIND_PASID_TABLE
>> - remove the struct device arg
>> ---
>> drivers/vfio/vfio_iommu_type1.c | 53 +++++++++++++++++++++++++++++++++
>> include/uapi/linux/vfio.h | 17 +++++++++++
>> 2 files changed, 70 insertions(+)
>>
>> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
>> index 73652e21efec..222e9199edbf 100644
>> --- a/drivers/vfio/vfio_iommu_type1.c
>> +++ b/drivers/vfio/vfio_iommu_type1.c
>> @@ -1644,6 +1644,43 @@ static int vfio_domains_have_iommu_cache(struct vfio_iommu *iommu)
>> return ret;
>> }
>>
>> +static void
>> +vfio_detach_pasid_table(struct vfio_iommu *iommu)
>> +{
>> + struct vfio_domain *d;
>> +
>> + mutex_lock(&iommu->lock);
>> +
>> + list_for_each_entry(d, &iommu->domain_list, next) {
>> + iommu_detach_pasid_table(d->domain);
>> + }
>> + mutex_unlock(&iommu->lock);
>> +}
>> +
>> +static int
>> +vfio_attach_pasid_table(struct vfio_iommu *iommu,
>> + struct vfio_iommu_type1_attach_pasid_table *ustruct)
>> +{
>> + struct vfio_domain *d;
>> + int ret = 0;
>> +
>> + mutex_lock(&iommu->lock);
>> +
>> + list_for_each_entry(d, &iommu->domain_list, next) {
>> + ret = iommu_attach_pasid_table(d->domain, &ustruct->config);
>> + if (ret)
>> + goto unwind;
>> + }
>> + goto unlock;
>> +unwind:
>> + list_for_each_entry_continue_reverse(d, &iommu->domain_list, next) {
>> + iommu_detach_pasid_table(d->domain);
>> + }
>> +unlock:
>> + mutex_unlock(&iommu->lock);
>> + return ret;
>> +}
>> +
>> static long vfio_iommu_type1_ioctl(void *iommu_data,
>> unsigned int cmd, unsigned long arg)
>> {
>> @@ -1714,6 +1751,22 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
>>
>> return copy_to_user((void __user *)arg, &unmap, minsz) ?
>> -EFAULT : 0;
>> + } else if (cmd == VFIO_IOMMU_ATTACH_PASID_TABLE) {
>> + struct vfio_iommu_type1_attach_pasid_table ustruct;
>> +
>> + minsz = offsetofend(struct vfio_iommu_type1_attach_pasid_table,
>> + config);
>> +
>> + if (copy_from_user(&ustruct, (void __user *)arg, minsz))
>> + return -EFAULT;
>> +
>> + if (ustruct.argsz < minsz || ustruct.flags)
>> + return -EINVAL;
>
> Who is responsible for validating the ustruct.config?
> arm_smmu_attach_pasid_table() only looks at the format, ignoring the
> version field :-\ In fact, where is struct iommu_pasid_smmuv3 ever used
> from the config?
This check is indeed missing and needs to be fixed in the SMMUv3
arm_smmu_attach_pasid_table(). At the moment the virtual SMMUv3 only
supports a single context descriptor, hence the shortcut.
> Should the padding fields be forced to zero? We
> don't have flags to incorporate use of them with future extensions, so
> maybe we don't care?
My guess is that if we were to add new fields in iommu_pasid_smmuv3, we would
increment both iommu_pasid_smmuv3.version and
iommu_pasid_table_config.version. I don't think the padding fields are meant
to be filled here (i.e. no flag is needed).
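
To illustrate the kind of check being discussed, here is a minimal sketch of
the sanity check the attach path could perform before dereferencing the
per-format data. The constant names below are assumptions for illustration
only, not the actual uapi of this series:

static int sanity_check_pasid_table_cfg(struct iommu_pasid_table_config *cfg)
{
	/* reject unknown versions so future extensions remain detectable */
	if (cfg->version != PASID_TABLE_CFG_VERSION_1)
		return -EINVAL;

	/* the SMMUv3 backend only understands its own format */
	if (cfg->format != IOMMU_PASID_FORMAT_SMMUV3)
		return -EINVAL;

	return 0;
}
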
>
>> +
>> + return vfio_attach_pasid_table(iommu, &ustruct);
>> + } else if (cmd == VFIO_IOMMU_DETACH_PASID_TABLE) {
>> + vfio_detach_pasid_table(iommu);
>> + return 0;
>> }
>>
>> return -ENOTTY;
>> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
>> index 02bb7ad6e986..329d378565d9 100644
>> --- a/include/uapi/linux/vfio.h
>> +++ b/include/uapi/linux/vfio.h
>> @@ -14,6 +14,7 @@
>>
>> #include <linux/types.h>
>> #include <linux/ioctl.h>
>> +#include <linux/iommu.h>
>>
>> #define VFIO_API_VERSION 0
>>
>> @@ -759,6 +760,22 @@ struct vfio_iommu_type1_dma_unmap {
>> #define VFIO_IOMMU_ENABLE _IO(VFIO_TYPE, VFIO_BASE + 15)
>> #define VFIO_IOMMU_DISABLE _IO(VFIO_TYPE, VFIO_BASE + 16)
>>
>> +/**
>> + * VFIO_IOMMU_ATTACH_PASID_TABLE - _IOWR(VFIO_TYPE, VFIO_BASE + 22,
>> + * struct vfio_iommu_type1_attach_pasid_table)
>> + *
>> + * Passes the PASID table to the host. Calling ATTACH_PASID_TABLE
>> + * while a table is already installed is allowed: it replaces the old
>> + * table. DETACH does a comprehensive tear down of the nested mode.
>> + */
>> +struct vfio_iommu_type1_attach_pasid_table {
>> + __u32 argsz;
>> + __u32 flags;
>> + struct iommu_pasid_table_config config;
>> +};
>> +#define VFIO_IOMMU_ATTACH_PASID_TABLE _IO(VFIO_TYPE, VFIO_BASE + 22)
>> +#define VFIO_IOMMU_DETACH_PASID_TABLE _IO(VFIO_TYPE, VFIO_BASE + 23)
>> +
>
> DETACH should also be documented, it's not clear from the uapi that it
> requires no parameters. Thanks,
sure
Thanks
Eric
>
> Alex
>
Hi Jacob,
On 3/21/19 11:04 PM, Jacob Pan wrote:
> On Sun, 17 Mar 2019 18:22:12 +0100
> Eric Auger <[email protected]> wrote:
>
>> From: Jacob Pan <[email protected]>
>>
>> Device faults detected by IOMMU can be reported outside the IOMMU
>> subsystem for further processing. This patch introduces
>> a generic device fault data structure.
>>
>> The fault can be either an unrecoverable fault or a page request,
>> also referred to as a recoverable fault.
>>
>> We only care about non internal faults that are likely to be reported
>> to an external subsystem.
>>
>> Signed-off-by: Jacob Pan <[email protected]>
>> Signed-off-by: Jean-Philippe Brucker <[email protected]>
>> Signed-off-by: Liu, Yi L <[email protected]>
>> Signed-off-by: Ashok Raj <[email protected]>
>> Signed-off-by: Eric Auger <[email protected]>
>>
>> ---
>> v4 -> v5:
>> - simplified struct iommu_fault_event comment
>> - Moved IOMMU_FAULT_PERM outside of the struct
>> - Removed IOMMU_FAULT_PERM_INST
>> - s/IOMMU_FAULT_PAGE_REQUEST_PASID_PRESENT/
>> IOMMU_FAULT_PAGE_REQUEST_PASID_VALID
>>
>> v3 -> v4:
>> - use a union containing aither an unrecoverable fault or a page
>> request message. Move the device private data in the page request
>> structure. Reshuffle the fields and use flags.
>> - move fault perm attributes to the uapi
>> - remove a bunch of iommu_fault_reason enum values that were related
>> to internal errors
>> ---
>> include/linux/iommu.h | 44 ++++++++++++++
>> include/uapi/linux/iommu.h | 115
>> +++++++++++++++++++++++++++++++++++++ 2 files changed, 159
>> insertions(+) create mode 100644 include/uapi/linux/iommu.h
>>
>> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
>> index ffbbc7e39cee..c6f398f7e6e0 100644
>> --- a/include/linux/iommu.h
>> +++ b/include/linux/iommu.h
>> @@ -25,6 +25,7 @@
>> #include <linux/errno.h>
>> #include <linux/err.h>
>> #include <linux/of.h>
>> +#include <uapi/linux/iommu.h>
>>
>> #define IOMMU_READ (1 << 0)
>> #define IOMMU_WRITE (1 << 1)
>> @@ -48,6 +49,7 @@ struct bus_type;
>> struct device;
>> struct iommu_domain;
>> struct notifier_block;
>> +struct iommu_fault_event;
>>
>> /* iommu fault flags */
>> #define IOMMU_FAULT_READ 0x0
>> @@ -55,6 +57,7 @@ struct notifier_block;
>>
>> typedef int (*iommu_fault_handler_t)(struct iommu_domain *,
>> struct device *, unsigned long, int, void *);
>> +typedef int (*iommu_dev_fault_handler_t)(struct iommu_fault_event *,
>> void *);
>> struct iommu_domain_geometry {
>> dma_addr_t aperture_start; /* First address that can be
>> mapped */ @@ -247,6 +250,46 @@ struct iommu_device {
>> struct device *dev;
>> };
>>
>> +/**
>> + * struct iommu_fault_event - Generic fault event
>> + *
>> + * Can represent recoverable faults such as a page requests or
>> + * unrecoverable faults such as DMA or IRQ remapping faults.
>> + *
>> + * @fault: fault descriptor
>> + * @iommu_private: used by the IOMMU driver for storing
>> fault-specific
>> + * data. Users should not modify this field before
>> + * sending the fault response.
>> + */
>> +struct iommu_fault_event {
>> + struct iommu_fault fault;
>> + u64 iommu_private;
>> +};
>> +
>> +/**
>> + * struct iommu_fault_param - per-device IOMMU fault data
>> + * @dev_fault_handler: Callback function to handle IOMMU faults at
>> device level
>> + * @data: handler private data
>> + *
>> + */
>> +struct iommu_fault_param {
>> + iommu_dev_fault_handler_t handler;
>> + void *data;
>> +};
>> +
>> +/**
>> + * struct iommu_param - collection of per-device IOMMU data
>> + *
>> + * @fault_param: IOMMU detected device fault reporting data
>> + *
>> + * TODO: migrate other per device data pointers under
>> iommu_dev_data, e.g.
>> + * struct iommu_group *iommu_group;
>> + * struct iommu_fwspec *iommu_fwspec;
>> + */
>> +struct iommu_param {
>> + struct iommu_fault_param *fault_param;
>> +};
>> +
>> int iommu_device_register(struct iommu_device *iommu);
>> void iommu_device_unregister(struct iommu_device *iommu);
>> int iommu_device_sysfs_add(struct iommu_device *iommu,
>> @@ -422,6 +465,7 @@ struct iommu_ops {};
>> struct iommu_group {};
>> struct iommu_fwspec {};
>> struct iommu_device {};
>> +struct iommu_fault_param {};
>>
>> static inline bool iommu_present(struct bus_type *bus)
>> {
>> diff --git a/include/uapi/linux/iommu.h b/include/uapi/linux/iommu.h
>> new file mode 100644
>> index 000000000000..edcc0dda7993
>> --- /dev/null
>> +++ b/include/uapi/linux/iommu.h
>> @@ -0,0 +1,115 @@
>> +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
>> +/*
>> + * IOMMU user API definitions
>> + */
>> +
>> +#ifndef _UAPI_IOMMU_H
>> +#define _UAPI_IOMMU_H
>> +
>> +#include <linux/types.h>
>> +
>> +#define IOMMU_FAULT_PERM_WRITE (1 << 0) /* write */
>> +#define IOMMU_FAULT_PERM_EXEC (1 << 1) /* exec */
>> +#define IOMMU_FAULT_PERM_PRIV (1 << 2) /* privileged */
>> +
>> +/* Generic fault types, can be expanded IRQ remapping fault */
>> +enum iommu_fault_type {
>> + IOMMU_FAULT_DMA_UNRECOV = 1, /* unrecoverable fault */
>> + IOMMU_FAULT_PAGE_REQ, /* page request fault */
>> +};
>> +
>> +enum iommu_fault_reason {
>> + IOMMU_FAULT_REASON_UNKNOWN = 0,
>> +
>> + /* Could not access the PASID table (fetch caused external
>> abort) */
>> + IOMMU_FAULT_REASON_PASID_FETCH,
>> +
>> + /* pasid entry is invalid or has configuration errors */
>> + IOMMU_FAULT_REASON_BAD_PASID_ENTRY,
>> +
>> + /*
>> + * PASID is out of range (e.g. exceeds the maximum PASID
>> + * supported by the IOMMU) or disabled.
>> + */
>> + IOMMU_FAULT_REASON_PASID_INVALID,
>> +
>> + /*
>> + * An external abort occurred fetching (or updating) a
>> translation
>> + * table descriptor
>> + */
>> + IOMMU_FAULT_REASON_WALK_EABT,
>> +
>> + /*
>> + * Could not access the page table entry (Bad address),
>> + * actual translation fault
>> + */
>> + IOMMU_FAULT_REASON_PTE_FETCH,
>> +
>> + /* Protection flag check failed */
>> + IOMMU_FAULT_REASON_PERMISSION,
>> +
>> + /* access flag check failed */
>> + IOMMU_FAULT_REASON_ACCESS,
>> +
>> + /* Output address of a translation stage caused Address Size
>> fault */
>> + IOMMU_FAULT_REASON_OOR_ADDRESS,
>> +};
>> +
> For VT-d scalable mode, fault reason can be further split into first
> level faults and second level faults. But since we pin the second level
> today and guest owns the first level, we only need to inject faults of
> the FL to vIOMMU. So I think this is fine today, I think this enum can
> be extended w/o a new version of the structure.
I think the same actually holds for SMMUv3. Here we only kept stage 1
related fault reasons, as stage 2 faults should rather be kept internal to
the driver.
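
As a purely illustrative aside, a consumer of the generic fault record (for
instance the code filling the VFIO fault region) could decode it along these
lines; only fields from the uapi quoted in this patch are used and the print
helpers are mere placeholders:

static void dump_iommu_fault(const struct iommu_fault *fault)
{
	switch (fault->type) {
	case IOMMU_FAULT_DMA_UNRECOV:
		/* unrecoverable fault: reason plus the offending address */
		pr_err("unrecoverable fault: reason %u flags 0x%x addr 0x%llx\n",
		       fault->event.reason, fault->event.flags,
		       (unsigned long long)fault->event.addr);
		break;
	case IOMMU_FAULT_PAGE_REQ:
		/* recoverable page request: pasid is only meaningful if flagged valid */
		pr_info("page request: grpid %u addr 0x%llx pasid %u (%svalid)\n",
			fault->prm.grpid,
			(unsigned long long)fault->prm.addr,
			fault->prm.pasid,
			(fault->prm.flags & IOMMU_FAULT_PAGE_REQUEST_PASID_VALID) ?
			"" : "in");
		break;
	default:
		break;
	}
}
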
Thanks
Eric
>
>> +/**
>> + * Unrecoverable fault data
>> + * @reason: reason of the fault
>> + * @addr: offending page address
>> + * @fetch_addr: address that caused a fetch abort, if any
>> + * @pasid: contains process address space ID, used in shared virtual
>> memory
>> + * @perm: Requested permission access using by the incoming
>> transaction
>> + * (IOMMU_FAULT_PERM_* values)
>> + */
>> +struct iommu_fault_unrecoverable {
>> + __u32 reason; /* enum iommu_fault_reason */
>> +#define IOMMU_FAULT_UNRECOV_PASID_VALID (1 << 0)
>> +#define IOMMU_FAULT_UNRECOV_PERM_VALID (1 << 1)
>> +#define IOMMU_FAULT_UNRECOV_ADDR_VALID (1 << 2)
>> +#define IOMMU_FAULT_UNRECOV_FETCH_ADDR_VALID (1 << 3)
>> + __u32 flags;
>> + __u32 pasid;
>> + __u32 perm;
>> + __u64 addr;
>> + __u64 fetch_addr;
>> +};
>> +
>> +/*
>> + * Page Request data (aka. recoverable fault data)
>> + * @flags : encodes whether the pasid is valid and whether this
>> + * is the last page in group
>> + * @pasid: pasid
>> + * @grpid: page request group index
>> + * @perm: requested page permissions (IOMMU_FAULT_PERM_* values)
>> + * @addr: page address
>> + */
>> +struct iommu_fault_page_request {
>> +#define IOMMU_FAULT_PAGE_REQUEST_PASID_VALID (1 << 0)
>> +#define IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE (1 << 1)
>> +#define IOMMU_FAULT_PAGE_REQUEST_PRIV_DATA (1 << 2)
>> + __u32 flags;
>> + __u32 pasid;
>> + __u32 grpid;
>> + __u32 perm;
>> + __u64 addr;
>> + __u64 private_data[2];
>> +};
>> +
>> +/**
>> + * struct iommu_fault - Generic fault data
>> + *
>> + * @type contains fault type
>> + */
>> +
>> +struct iommu_fault {
>> + __u32 type; /* enum iommu_fault_type */
>> + __u32 reserved;
>> + union {
>> + struct iommu_fault_unrecoverable event;
>> + struct iommu_fault_page_request prm;
>> + };
>> +};
>> +#endif /* _UAPI_IOMMU_H */
>
> [Jacob Pan]
>
Hi Jacob,
On 3/21/19 11:10 PM, Jacob Pan wrote:
> On Thu, 21 Mar 2019 15:32:45 +0100
> Auger Eric <[email protected]> wrote:
>
>> Hi jean, Jacob,
>>
>> On 3/21/19 3:13 PM, Jean-Philippe Brucker wrote:
>>> On 21/03/2019 13:54, Auger Eric wrote:
>>>> Hi Jacob, Jean-Philippe,
>>>>
>>>> On 3/20/19 5:50 PM, Jean-Philippe Brucker wrote:
>>>>> On 20/03/2019 16:37, Jacob Pan wrote:
>>>>> [...]
>>>>>>> +struct iommu_inv_addr_info {
>>>>>>> +#define IOMMU_INV_ADDR_FLAGS_PASID (1 << 0)
>>>>>>> +#define IOMMU_INV_ADDR_FLAGS_ARCHID (1 << 1)
>>>>>>> +#define IOMMU_INV_ADDR_FLAGS_LEAF (1 << 2)
>>>>>>> + __u32 flags;
>>>>>>> + __u32 archid;
>>>>>>> + __u64 pasid;
>>>>>>> + __u64 addr;
>>>>>>> + __u64 granule_size;
>>>>>>> + __u64 nb_granules;
>>>>>>> +};
>>>>>>> +
>>>>>>> +/**
>>>>>>> + * First level/stage invalidation information
>>>>>>> + * @cache: bitfield that allows to select which caches to
>>>>>>> invalidate
>>>>>>> + * @granularity: defines the lowest granularity used for the
>>>>>>> invalidation:
>>>>>>> + * domain > pasid > addr
>>>>>>> + *
>>>>>>> + * Not all the combinations of cache/granularity make sense:
>>>>>>> + *
>>>>>>> + * type | DEV_IOTLB | IOTLB |
>>>>>>> PASID |
>>>>>>> + * granularity | | |
>>>>>>> cache |
>>>>>>> + *
>>>>>>> -------------+---------------+---------------+---------------+
>>>>>>> + * DOMAIN | N/A | Y |
>>>>>>> Y |
>>>>>>> + * PASID | Y | Y |
>>>>>>> Y |
>>>>>>> + * ADDR | Y | Y |
>>>>>>> N/A |
>>>>>>> + */
>>>>>>> +struct iommu_cache_invalidate_info {
>>>>>>> +#define IOMMU_CACHE_INVALIDATE_INFO_VERSION_1 1
>>>>>>> + __u32 version;
>>>>>>> +/* IOMMU paging structure cache */
>>>>>>> +#define IOMMU_CACHE_INV_TYPE_IOTLB (1 << 0) /* IOMMU
>>>>>>> IOTLB */ +#define IOMMU_CACHE_INV_TYPE_DEV_IOTLB (1 <<
>>>>>>> 1) /* Device IOTLB */ +#define
>>>>>>> IOMMU_CACHE_INV_TYPE_PASID (1 << 2) /* PASID cache */
>>>>>> Just a clarification, this used to be an enum. You do intend to
>>>>>> issue a single invalidation request on multiple cache types?
>>>>>> Perhaps for virtio-IOMMU? I only see a single cache type in your
>>>>>> patch #14. For VT-d we plan to issue one cache type at a time
>>>>>> for now. So this format works for us.
>>>>>
>>>>> Yes for virtio-iommu I'd like as little overhead as possible,
>>>>> which means a single invalidation message to hit both IOTLB and
>>>>> ATC at once, and the ability to specify multiple pages with
>>>>> @nb_granules.
>>>> The original request/explanation from Jean-Philippe can be found
>>>> here: https://lkml.org/lkml/2019/1/28/1497
>>>>
>>>>>
>>>>>> However, if multiple cache types are issued in a single
>>>>>> invalidation. They must share a single granularity, not all
>>>>>> combinations are valid. e.g. dev IOTLB does not support domain
>>>>>> granularity. Just a reminder, not an issue. Driver could filter
>>>>>> out invalid combinations.
>>>> Sure I will add a comment about this restriction.
>>>>>
>>>>> Agreed. Even the core could filter out invalid combinations based
>>>>> on the table above: IOTLB and domain granularity are N/A.
>>>> I don't get this sentence. What about vtd IOTLB domain-selective
>>>> invalidation:
>>>
>>> My mistake: I meant dev-IOTLB and domain granularity are N/A
>>
>> Ah OK, no worries.
>>
>> How do we proceed further with those user APIs? Besides the comment to
>> be added above and previous suggestion from Jean ("Invalidations by
>> @granularity use field ...), have we reached a consensus now on:
>>
>> - attach/detach_pasid_table
>> - cache_invalidate
>> - fault data and fault report API?
>>
> These APIs are sufficient for today's VT-d use and leave enough space
> for extension. E.g. new fault reasons.
>
> I have cherry picked the above APIs from your patchset to enable VT-d
> nested translation. Going forward, I will reuse these until they get
> merged.
OK thanks!
Eric
>
>> If not, please let me know.
>>
>> Thanks
>>
>> Eric
>>
>>
>>>
>>> Thanks,
>>> Jean
>>>
>>>> "
>>>> • IOTLB entries caching mappings associated with the specified
>>>> domain-id are invalidated.
>>>> • Paging-structure-cache entries caching mappings associated with
>>>> the specified domain-id are invalidated.
>>>> "
>>>>
>>>> Thanks
>>>>
>>>> Eric
>>>>
>>>>>
>>>>> Thanks,
>>>>> Jean
>>>>>
>>>>>>
>>>>>>> + __u8 cache;
>>>>>>> + __u8 granularity;
>>>>>>> + __u8 padding[2];
>>>>>>> + union {
>>>>>>> + __u64 pasid;
>>>>>>> + struct iommu_inv_addr_info addr_info;
>>>>>>> + };
>>>>>>> +};
>>>>>>> +
>>>>>>> +
>>>>>>> #endif /* _UAPI_IOMMU_H */
>>>>>>
>>>>>> [Jacob Pan]
>>>>>> _______________________________________________
>>>>>> iommu mailing list
>>>>>> [email protected]
>>>>>> https://lists.linuxfoundation.org/mailman/listinfo/iommu
>>>>>>
>>>>>
>>>> _______________________________________________
>>>> iommu mailing list
>>>> [email protected]
>>>> https://lists.linuxfoundation.org/mailman/listinfo/iommu
>>>>
>>>
>
> [Jacob Pan]
>
Hi Alex,
On 3/22/19 12:01 AM, Alex Williamson wrote:
> On Sun, 17 Mar 2019 18:22:19 +0100
> Eric Auger <[email protected]> wrote:
>
>> This patch adds the VFIO_IOMMU_BIND/UNBIND_MSI ioctl which aim
>> to pass/withdraw the guest MSI binding to/from the host.
>>
>> Signed-off-by: Eric Auger <[email protected]>
>>
>> ---
>> v3 -> v4:
>> - add UNBIND
>> - unwind on BIND error
>>
>> v2 -> v3:
>> - adapt to new proto of bind_guest_msi
>> - directly use vfio_iommu_for_each_dev
>>
>> v1 -> v2:
>> - s/vfio_iommu_type1_guest_msi_binding/vfio_iommu_type1_bind_guest_msi
>> ---
>> drivers/vfio/vfio_iommu_type1.c | 58 +++++++++++++++++++++++++++++++++
>> include/uapi/linux/vfio.h | 29 +++++++++++++++++
>> 2 files changed, 87 insertions(+)
>>
>> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
>> index 12a40b9db6aa..66513679081b 100644
>> --- a/drivers/vfio/vfio_iommu_type1.c
>> +++ b/drivers/vfio/vfio_iommu_type1.c
>> @@ -1710,6 +1710,25 @@ static int vfio_cache_inv_fn(struct device *dev, void *data)
>> return iommu_cache_invalidate(d, dev, &ustruct->info);
>> }
>>
>> +static int vfio_bind_msi_fn(struct device *dev, void *data)
>> +{
>> + struct vfio_iommu_type1_bind_msi *ustruct =
>> + (struct vfio_iommu_type1_bind_msi *)data;
>> + struct iommu_domain *d = iommu_get_domain_for_dev(dev);
>> +
>> + return iommu_bind_guest_msi(d, dev, ustruct->iova,
>> + ustruct->gpa, ustruct->size);
>> +}
>> +
>> +static int vfio_unbind_msi_fn(struct device *dev, void *data)
>> +{
>> + dma_addr_t *iova = (dma_addr_t *)data;
>> + struct iommu_domain *d = iommu_get_domain_for_dev(dev);
>
> Same as previous, we can encapsulate domain in our own struct to avoid
> a lookup.
>
>> +
>> + iommu_unbind_guest_msi(d, dev, *iova);
>
> Is it strange that iommu-core is exposing these interfaces at a device
> level if every one of them requires us to walk all the devices? Thanks,
Hmm, this per-device API was devised in response to Robin's comments on
[RFC v2 12/20] dma-iommu: Implement NESTED_MSI cookie.
"
But that then seems to reveal a somewhat bigger problem - if the callers
are simply registering IPAs, and relying on the ITS driver to grab an
entry and fill in a PA later, then how does either one know *which* PA
is supposed to belong to a given IPA in the case where you have multiple
devices with different ITS targets assigned to the same guest? (and if
it's possible to assume a guest will use per-device stage 1 mappings and
present it with a single vITS backed by multiple pITSes, I think things
start breaking even harder.)
"
However, looking back at the problem, I wonder whether there was an issue
with the iommu_domain based API.
If my understanding is correct, when assigned devices are protected by a
vIOMMU then they necessarily end up in separate host iommu domains even
if they belong to the same iommu_domain on the guest. And there can only
be a single device in this iommu_domain.
If this is confirmed, there is a non ambiguous association between 1
physical iommu_domain, 1 device, 1 S1 mapping and 1 physical MSI
controller.
I added the device handle to disambiguate those associations. The
gIOVA -> gDB mapping is associated with a device handle. Then, when the
host needs the stage 1 mapping for this device to build the nested
mapping towards the physical doorbell, it can easily grab the gIOVA -> gDB
stage 1 mapping registered for this device.
The correctness looks more obvious to me, at least.
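
For reference, here is a minimal userspace sketch of how a VMM could register
one gIOVA -> gDB doorbell binding through the ioctl discussed in this patch.
The helper name and values are illustrative and linux/vfio.h must of course
carry the proposed definitions:

#include <sys/ioctl.h>
#include <linux/vfio.h>

/* container: VFIO container fd; giova: guest MSI IOVA; gdb: guest doorbell GPA */
static int bind_guest_msi(int container, __u64 giova, __u64 gdb, __u64 size)
{
	struct vfio_iommu_type1_bind_msi bind = {
		.argsz = sizeof(bind),
		.flags = 0,
		.iova  = giova,
		.gpa   = gdb,
		.size  = size,
	};

	return ioctl(container, VFIO_IOMMU_BIND_MSI, &bind);
}
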
Thanks
Eric
>
> Alex
>
>> + return 0;
>> +}
>> +
>> static long vfio_iommu_type1_ioctl(void *iommu_data,
>> unsigned int cmd, unsigned long arg)
>> {
>> @@ -1814,6 +1833,45 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
>> vfio_cache_inv_fn);
>> mutex_unlock(&iommu->lock);
>> return ret;
>> + } else if (cmd == VFIO_IOMMU_BIND_MSI) {
>> + struct vfio_iommu_type1_bind_msi ustruct;
>> + int ret;
>> +
>> + minsz = offsetofend(struct vfio_iommu_type1_bind_msi,
>> + size);
>> +
>> + if (copy_from_user(&ustruct, (void __user *)arg, minsz))
>> + return -EFAULT;
>> +
>> + if (ustruct.argsz < minsz || ustruct.flags)
>> + return -EINVAL;
>> +
>> + mutex_lock(&iommu->lock);
>> + ret = vfio_iommu_for_each_dev(iommu, &ustruct,
>> + vfio_bind_msi_fn);
>> + if (ret)
>> + vfio_iommu_for_each_dev(iommu, &ustruct.iova,
>> + vfio_unbind_msi_fn);
>> + mutex_unlock(&iommu->lock);
>> + return ret;
>> + } else if (cmd == VFIO_IOMMU_UNBIND_MSI) {
>> + struct vfio_iommu_type1_unbind_msi ustruct;
>> + int ret;
>> +
>> + minsz = offsetofend(struct vfio_iommu_type1_unbind_msi,
>> + iova);
>> +
>> + if (copy_from_user(&ustruct, (void __user *)arg, minsz))
>> + return -EFAULT;
>> +
>> + if (ustruct.argsz < minsz || ustruct.flags)
>> + return -EINVAL;
>> +
>> + mutex_lock(&iommu->lock);
>> + ret = vfio_iommu_for_each_dev(iommu, &ustruct.iova,
>> + vfio_unbind_msi_fn);
>> + mutex_unlock(&iommu->lock);
>> + return ret;
>> }
>>
>> return -ENOTTY;
>> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
>> index 29f0ef2d805d..6763389b6adc 100644
>> --- a/include/uapi/linux/vfio.h
>> +++ b/include/uapi/linux/vfio.h
>> @@ -789,6 +789,35 @@ struct vfio_iommu_type1_cache_invalidate {
>> };
>> #define VFIO_IOMMU_CACHE_INVALIDATE _IO(VFIO_TYPE, VFIO_BASE + 24)
>>
>> +/**
>> + * VFIO_IOMMU_BIND_MSI - _IOWR(VFIO_TYPE, VFIO_BASE + 25,
>> + * struct vfio_iommu_type1_bind_msi)
>> + *
>> + * Pass a stage 1 MSI doorbell mapping to the host so that this
>> + * latter can build a nested stage2 mapping
>> + */
>> +struct vfio_iommu_type1_bind_msi {
>> + __u32 argsz;
>> + __u32 flags;
>> + __u64 iova;
>> + __u64 gpa;
>> + __u64 size;
>> +};
>> +#define VFIO_IOMMU_BIND_MSI _IO(VFIO_TYPE, VFIO_BASE + 25)
>> +
>> +/**
>> + * VFIO_IOMMU_UNBIND_MSI - _IOWR(VFIO_TYPE, VFIO_BASE + 26,
>> + * struct vfio_iommu_type1_unbind_msi)
>> + *
>> + * Unregister an MSI mapping
>> + */
>> +struct vfio_iommu_type1_unbind_msi {
>> + __u32 argsz;
>> + __u32 flags;
>> + __u64 iova;
>> +};
>> +#define VFIO_IOMMU_UNBIND_MSI _IO(VFIO_TYPE, VFIO_BASE + 26)
>> +
>> /* -------- Additional API for SPAPR TCE (Server POWERPC) IOMMU -------- */
>>
>> /*
>
Hi,
On 3/17/19 6:22 PM, Eric Auger wrote:
> This series allows a virtualizer to program the nested stage mode.
> This is useful when both the host and the guest are exposed with
> an SMMUv3 and a PCI device is assigned to the guest using VFIO.
>
> In this mode, the physical IOMMU must be programmed to translate
> the two stages: the one set up by the guest (IOVA -> GPA) and the
> one set up by the host VFIO driver as part of the assignment process
> (GPA -> HPA).
>
> On Intel, this is traditionnaly achieved by combining the 2 stages
> into a single physical stage. However this relies on the capability
> to trap on each guest translation structure update. This is possible
> by using the VTD Caching Mode. Unfortunately the ARM SMMUv3 does
> not offer a similar mechanism.
>
> However, the ARM SMMUv3 architecture supports 2 physical stages! Those
> were devised exactly with that use case in mind. Assuming the HW
> implements both stages (optional), the guest now can use stage 1
> while the host uses stage 2.
>
> This assumes the virtualizer has means to propagate guest settings
> to the host SMMUv3 driver. This series brings this VFIO/IOMMU
> infrastructure. Those services are:
> - bind the guest stage 1 configuration to the stream table entry
> - propagate guest TLB invalidations
> - bind MSI IOVAs
> - propagate faults collected at physical level up to the virtualizer
>
> This series largely reuses the user API and infrastructure originally
> devised for SVA/SVM and patches submitted by Jacob, Yi Liu, Tianyu in
> [1-2] and Jean-Philippe [3-4].
>
> Best Regards
>
> Eric
>
> This series can be found at:
> https://github.com/eauger/linux/tree/v5.0-2stage-v6
For those who would like to test this series with qemu, please use the
following qemu branch:
https://github.com/eauger/qemu/tree/v4.0-2stage-unformal-for-v6-testing
until I release a formal respin.
Thanks
Eric
>
> References:
> [1] [PATCH v5 00/23] IOMMU and VT-d driver support for Shared Virtual
> Address (SVA)
> https://lwn.net/Articles/754331/
> [2] [RFC PATCH 0/8] Shared Virtual Memory virtualization for VT-d
> (VFIO part)
> https://lists.linuxfoundation.org/pipermail/iommu/2017-April/021475.html
> [3] [v2,00/40] Shared Virtual Addressing for the IOMMU
> https://patchwork.ozlabs.org/cover/912129/
> [4] [PATCH v3 00/10] Shared Virtual Addressing for the IOMMU
> https://patchwork.kernel.org/cover/10608299/
>
> History:
> v5 -> v6:
> - Fix compilation issue when CONFIG_IOMMU_API is unset
>
> v4 -> v5:
> - fix bug reported by Vincent: fault handler unregistration now happens in
> vfio_pci_release
> - IOMMU_FAULT_PERM_* moved outside of struct definition + small
> uapi changes suggested by Kean-Philippe (except fetch_addr)
> - iommu: introduce device fault report API: removed the PRI part.
> - see individual logs for more details
> - reset the ste abort flag on detach
>
> v3 -> v4:
> - took into account Alex, jean-Philippe and Robin's comments on v3
> - rework of the smmuv3 driver integration
> - add tear down ops for msi binding and PASID table binding
> - fix S1 fault propagation
> - put fault reporting patches at the beginning of the series following
> Jean-Philippe's request
> - update of the cache invalidate and fault API uapis
> - VFIO fault reporting rework with 2 separate regions and one mmappable
> segment for the fault queue
> - moved to PATCH
>
> v2 -> v3:
> - When registering the S1 MSI binding we now store the device handle. This
> addresses Robin's comment about discimination of devices beonging to
> different S1 groups and using different physical MSI doorbells.
> - Change the fault reporting API: use VFIO_PCI_DMA_FAULT_IRQ_INDEX to
> set the eventfd and expose the faults through an mmappable fault region
>
> v1 -> v2:
> - Added the fault reporting capability
> - asid properly passed on invalidation (fix assignment of multiple
> devices)
> - see individual change logs for more info
>
>
> Eric Auger (13):
> iommu: Introduce bind/unbind_guest_msi
> vfio: VFIO_IOMMU_BIND/UNBIND_MSI
> iommu/smmuv3: Get prepared for nested stage support
> iommu/smmuv3: Implement attach/detach_pasid_table
> iommu/smmuv3: Implement cache_invalidate
> dma-iommu: Implement NESTED_MSI cookie
> iommu/smmuv3: Implement bind/unbind_guest_msi
> iommu/smmuv3: Report non recoverable faults
> vfio-pci: Add a new VFIO_REGION_TYPE_NESTED region type
> vfio-pci: Register an iommu fault handler
> vfio_pci: Allow to mmap the fault queue
> vfio-pci: Add VFIO_PCI_DMA_FAULT_IRQ_INDEX
> vfio: Document nested stage control
>
> Jacob Pan (4):
> driver core: add per device iommu param
> iommu: introduce device fault data
> iommu: introduce device fault report API
> iommu: Introduce attach/detach_pasid_table API
>
> Jean-Philippe Brucker (2):
> iommu/arm-smmu-v3: Link domains and devices
> iommu/arm-smmu-v3: Maintain a SID->device structure
>
> Liu, Yi L (3):
> iommu: Introduce cache_invalidate API
> vfio: VFIO_IOMMU_ATTACH/DETACH_PASID_TABLE
> vfio: VFIO_IOMMU_CACHE_INVALIDATE
>
> Documentation/vfio.txt | 83 ++++
> drivers/iommu/arm-smmu-v3.c | 581 ++++++++++++++++++++++++++--
> drivers/iommu/dma-iommu.c | 145 ++++++-
> drivers/iommu/iommu.c | 201 +++++++++-
> drivers/vfio/pci/vfio_pci.c | 214 ++++++++++
> drivers/vfio/pci/vfio_pci_intrs.c | 19 +
> drivers/vfio/pci/vfio_pci_private.h | 18 +
> drivers/vfio/pci/vfio_pci_rdwr.c | 73 ++++
> drivers/vfio/vfio_iommu_type1.c | 158 ++++++++
> include/linux/device.h | 3 +
> include/linux/dma-iommu.h | 18 +
> include/linux/iommu.h | 137 +++++++
> include/uapi/linux/iommu.h | 233 +++++++++++
> include/uapi/linux/vfio.h | 102 +++++
> 14 files changed, 1951 insertions(+), 34 deletions(-)
> create mode 100644 include/uapi/linux/iommu.h
>
On Fri, 22 Mar 2019 10:30:02 +0100
Auger Eric <[email protected]> wrote:
> Hi Alex,
> On 3/22/19 12:01 AM, Alex Williamson wrote:
> > On Sun, 17 Mar 2019 18:22:19 +0100
> > Eric Auger <[email protected]> wrote:
> >
> >> This patch adds the VFIO_IOMMU_BIND/UNBIND_MSI ioctl which aim
> >> to pass/withdraw the guest MSI binding to/from the host.
> >>
> >> Signed-off-by: Eric Auger <[email protected]>
> >>
> >> ---
> >> v3 -> v4:
> >> - add UNBIND
> >> - unwind on BIND error
> >>
> >> v2 -> v3:
> >> - adapt to new proto of bind_guest_msi
> >> - directly use vfio_iommu_for_each_dev
> >>
> >> v1 -> v2:
> >> - s/vfio_iommu_type1_guest_msi_binding/vfio_iommu_type1_bind_guest_msi
> >> ---
> >> drivers/vfio/vfio_iommu_type1.c | 58 +++++++++++++++++++++++++++++++++
> >> include/uapi/linux/vfio.h | 29 +++++++++++++++++
> >> 2 files changed, 87 insertions(+)
> >>
> >> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> >> index 12a40b9db6aa..66513679081b 100644
> >> --- a/drivers/vfio/vfio_iommu_type1.c
> >> +++ b/drivers/vfio/vfio_iommu_type1.c
> >> @@ -1710,6 +1710,25 @@ static int vfio_cache_inv_fn(struct device *dev, void *data)
> >> return iommu_cache_invalidate(d, dev, &ustruct->info);
> >> }
> >>
> >> +static int vfio_bind_msi_fn(struct device *dev, void *data)
> >> +{
> >> + struct vfio_iommu_type1_bind_msi *ustruct =
> >> + (struct vfio_iommu_type1_bind_msi *)data;
> >> + struct iommu_domain *d = iommu_get_domain_for_dev(dev);
> >> +
> >> + return iommu_bind_guest_msi(d, dev, ustruct->iova,
> >> + ustruct->gpa, ustruct->size);
> >> +}
> >> +
> >> +static int vfio_unbind_msi_fn(struct device *dev, void *data)
> >> +{
> >> + dma_addr_t *iova = (dma_addr_t *)data;
> >> + struct iommu_domain *d = iommu_get_domain_for_dev(dev);
> >
> > Same as previous, we can encapsulate domain in our own struct to avoid
> > a lookup.
> >
> >> +
> >> + iommu_unbind_guest_msi(d, dev, *iova);
> >
> > Is it strange that iommu-core is exposing these interfaces at a device
> > level if every one of them requires us to walk all the devices? Thanks,
>
> Hum this per device API was devised in response of Robin's comments on
>
> [RFC v2 12/20] dma-iommu: Implement NESTED_MSI cookie.
>
> "
> But that then seems to reveal a somewhat bigger problem - if the callers
> are simply registering IPAs, and relying on the ITS driver to grab an
> entry and fill in a PA later, then how does either one know *which* PA
> is supposed to belong to a given IPA in the case where you have multiple
> devices with different ITS targets assigned to the same guest? (and if
> it's possible to assume a guest will use per-device stage 1 mappings and
> present it with a single vITS backed by multiple pITSes, I think things
> start breaking even harder.)
> "
>
> However looking back into the problem I wonder if there was an issue
> with the iommu_domain based API.
>
> If my understanding is correct, when assigned devices are protected by a
> vIOMMU then they necessarily end up in separate host iommu domains even
> if they belong to the same iommu_domain on the guest. And there can only
> be a single device in this iommu_domain.
Don't forget that a container represents the IOMMU context in a vfio
environment; groups are associated with containers, and a group may
contain one or more devices. When a vIOMMU comes into play, we still
only have an IOMMU context per container. If we have multiple devices
in a group, we run into problems with vIOMMU. We can resolve this by
requiring that the user ignore all but one device in the group,
or making sure that the devices in the group have the same IOMMU
context. The latter we could do in QEMU if PCIe-to-PCI bridges there
masked the per-device address space as they do on real hardware (i.e.
there is no requester ID on conventional PCI, so all transactions appear to
the IOMMU with the bridge requester ID). So I raise this question
because vfio's minimum domain granularity is a group.
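
As an illustrative aside on that object model, a minimal sequence using the
existing uapi looks roughly as follows; the group number and BDF are
placeholders and error handling is omitted:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

static int open_assigned_device(void)
{
	int container = open("/dev/vfio/vfio", O_RDWR);
	int group = open("/dev/vfio/26", O_RDWR);

	/* a group attaches to a container; the container holds the IOMMU context */
	ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
	ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1v2_IOMMU);

	/* individual devices are then obtained from the group */
	return ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:01:00.0");
}
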
> If this is confirmed, there is a non ambiguous association between 1
> physical iommu_domain, 1 device, 1 S1 mapping and 1 physical MSI
> controller.
>
> I added the device handle handle to disambiguate those associations. The
> gIOVA ->gDB mapping is associated with a device handle. Then when the
> host needs a stage 1 mapping for this device, to build the nested
> mapping towards the physical DB it can easily grab the gIOVA->gDB stage
> 1 mapping registered for this device.
>
> The correctness looks more obvious to me, at least.
Except all devices within all groups within the same container
necessarily share the same IOMMU context, so from that perspective, it
appears to impose non-trivial redundancy on the caller. Thanks,
Alex
Hi Alex,
On 3/22/19 11:09 PM, Alex Williamson wrote:
> On Fri, 22 Mar 2019 10:30:02 +0100
> Auger Eric <[email protected]> wrote:
>
>> Hi Alex,
>> On 3/22/19 12:01 AM, Alex Williamson wrote:
>>> On Sun, 17 Mar 2019 18:22:19 +0100
>>> Eric Auger <[email protected]> wrote:
>>>
>>>> This patch adds the VFIO_IOMMU_BIND/UNBIND_MSI ioctl which aim
>>>> to pass/withdraw the guest MSI binding to/from the host.
>>>>
>>>> Signed-off-by: Eric Auger <[email protected]>
>>>>
>>>> ---
>>>> v3 -> v4:
>>>> - add UNBIND
>>>> - unwind on BIND error
>>>>
>>>> v2 -> v3:
>>>> - adapt to new proto of bind_guest_msi
>>>> - directly use vfio_iommu_for_each_dev
>>>>
>>>> v1 -> v2:
>>>> - s/vfio_iommu_type1_guest_msi_binding/vfio_iommu_type1_bind_guest_msi
>>>> ---
>>>> drivers/vfio/vfio_iommu_type1.c | 58 +++++++++++++++++++++++++++++++++
>>>> include/uapi/linux/vfio.h | 29 +++++++++++++++++
>>>> 2 files changed, 87 insertions(+)
>>>>
>>>> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
>>>> index 12a40b9db6aa..66513679081b 100644
>>>> --- a/drivers/vfio/vfio_iommu_type1.c
>>>> +++ b/drivers/vfio/vfio_iommu_type1.c
>>>> @@ -1710,6 +1710,25 @@ static int vfio_cache_inv_fn(struct device *dev, void *data)
>>>> return iommu_cache_invalidate(d, dev, &ustruct->info);
>>>> }
>>>>
>>>> +static int vfio_bind_msi_fn(struct device *dev, void *data)
>>>> +{
>>>> + struct vfio_iommu_type1_bind_msi *ustruct =
>>>> + (struct vfio_iommu_type1_bind_msi *)data;
>>>> + struct iommu_domain *d = iommu_get_domain_for_dev(dev);
>>>> +
>>>> + return iommu_bind_guest_msi(d, dev, ustruct->iova,
>>>> + ustruct->gpa, ustruct->size);
>>>> +}
>>>> +
>>>> +static int vfio_unbind_msi_fn(struct device *dev, void *data)
>>>> +{
>>>> + dma_addr_t *iova = (dma_addr_t *)data;
>>>> + struct iommu_domain *d = iommu_get_domain_for_dev(dev);
>>>
>>> Same as previous, we can encapsulate domain in our own struct to avoid
>>> a lookup.
>>>
>>>> +
>>>> + iommu_unbind_guest_msi(d, dev, *iova);
>>>
>>> Is it strange that iommu-core is exposing these interfaces at a device
>>> level if every one of them requires us to walk all the devices? Thanks,
>>
>> Hum this per device API was devised in response of Robin's comments on
>>
>> [RFC v2 12/20] dma-iommu: Implement NESTED_MSI cookie.
>>
>> "
>> But that then seems to reveal a somewhat bigger problem - if the callers
>> are simply registering IPAs, and relying on the ITS driver to grab an
>> entry and fill in a PA later, then how does either one know *which* PA
>> is supposed to belong to a given IPA in the case where you have multiple
>> devices with different ITS targets assigned to the same guest? (and if
>> it's possible to assume a guest will use per-device stage 1 mappings and
>> present it with a single vITS backed by multiple pITSes, I think things
>> start breaking even harder.)
>> "
>>
>> However looking back into the problem I wonder if there was an issue
>> with the iommu_domain based API.
>>
>> If my understanding is correct, when assigned devices are protected by a
>> vIOMMU then they necessarily end up in separate host iommu domains even
>> if they belong to the same iommu_domain on the guest. And there can only
>> be a single device in this iommu_domain.
>
> Don't forget that a container represents the IOMMU context in a vfio
> environment, groups are associated with containers and a group may
> contain one or more devices. When a vIOMMU comes into play, we still
> only have an IOMMU context per container. If we have multiple devices
> in a group, we run into problems with vIOMMU. We can resolve this by
> requiring that the user ignore all but one device in the group,
> or making sure that the devices in the group have the same IOMMU
> context. The latter we could do in QEMU if PCIe-to-PCI bridges there
> masked the per-device address space as it does on real hardware (ie.
> there is no requester ID on conventional PCI, all transactions appear to
> the IOMMU with the bridge requester ID). So I raise this question
> because vfio's minimum domain granularity is a group.
>
>> If this is confirmed, there is a non ambiguous association between 1
>> physical iommu_domain, 1 device, 1 S1 mapping and 1 physical MSI
>> controller.
>>
>> I added the device handle handle to disambiguate those associations. The
>> gIOVA ->gDB mapping is associated with a device handle. Then when the
>> host needs a stage 1 mapping for this device, to build the nested
>> mapping towards the physical DB it can easily grab the gIOVA->gDB stage
>> 1 mapping registered for this device.
>>
>> The correctness looks more obvious to me, at least.
>
> Except all devices within all groups within the same container
> necessarily share the same IOMMU context, so from that perspective, it
> appears to impose non-trivial redundancy on the caller. Thanks,
Taking into consideration the case where we could have several devices
attached to the same host iommu group, each of them possibly using
different host MSI doorbells, I think I am in trouble.
Let's assume that, using the pcie-to-pci bridge trick on the guest side, they
end up in the same container and in the same guest iommu group.
At the moment there is a single MSI controller on the guest, so the same
gIOVA/gDB S1 mapping is going to be created by the guest iommu domain
and both devices are programmed with gIOVA. If dev0 and dev1 are
attached to different host MSI controllers, I would need to build the 2
nested bindings:
dev0: MSI nested binding: gIOVA -> gDB -> hDB0
dev1: MSI nested binding: gIOVA -> gDB -> hDB1
(on guest there is a single MSI controller at the moment)
which is not possible as the devices belong to the same host iommu group
and share the same mapping.
The solution would be to instantiate 2 MSI controllers on guest side, in
which case we would end up with
dev0: gIOVA0 -> gDB0 -> hDB0
dev1: gIOVA1 -> gDB1 -> hDB1
Isn't this somehow what we do with the IOMMU RID topology? We need to take
into account the host topology (2 devices belonging to the same group)
and force the same on the guest by introducing a PCIe-to-PCI bridge. Here we
would need to say: those assigned devices are attached to different MSI
domains on the host, so we need the same on the guest.
Anyway, the current container based IOCTL would fail to implement that,
because I would register gIOVA0 -> gDB0 and gIOVA1 -> gDB1 for each
device within the container, which would definitely fail to build the
correct association. So I think I would anyway need a device based IOCTL
that would say: this assigned device uses this S1 MSI binding.
All the notification mechanisms we have in QEMU are container based, so
this would require a device based notification mechanism.
So I wonder whether it wouldn't be sensible to restrict this use case
and say we support nested mode only if we have a single assigned device
within the container?
Thoughts?
Eric
>
> Alex
>
On Wed, 3 Apr 2019 16:30:15 +0200
Auger Eric <[email protected]> wrote:
> Hi Alex,
>
> On 3/22/19 11:09 PM, Alex Williamson wrote:
> > On Fri, 22 Mar 2019 10:30:02 +0100
> > Auger Eric <[email protected]> wrote:
> >
> >> Hi Alex,
> >> On 3/22/19 12:01 AM, Alex Williamson wrote:
> >>> On Sun, 17 Mar 2019 18:22:19 +0100
> >>> Eric Auger <[email protected]> wrote:
> >>>
> >>>> This patch adds the VFIO_IOMMU_BIND/UNBIND_MSI ioctl which aim
> >>>> to pass/withdraw the guest MSI binding to/from the host.
> >>>>
> >>>> Signed-off-by: Eric Auger <[email protected]>
> >>>>
> >>>> ---
> >>>> v3 -> v4:
> >>>> - add UNBIND
> >>>> - unwind on BIND error
> >>>>
> >>>> v2 -> v3:
> >>>> - adapt to new proto of bind_guest_msi
> >>>> - directly use vfio_iommu_for_each_dev
> >>>>
> >>>> v1 -> v2:
> >>>> - s/vfio_iommu_type1_guest_msi_binding/vfio_iommu_type1_bind_guest_msi
> >>>> ---
> >>>> drivers/vfio/vfio_iommu_type1.c | 58 +++++++++++++++++++++++++++++++++
> >>>> include/uapi/linux/vfio.h | 29 +++++++++++++++++
> >>>> 2 files changed, 87 insertions(+)
> >>>>
> >>>> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> >>>> index 12a40b9db6aa..66513679081b 100644
> >>>> --- a/drivers/vfio/vfio_iommu_type1.c
> >>>> +++ b/drivers/vfio/vfio_iommu_type1.c
> >>>> @@ -1710,6 +1710,25 @@ static int vfio_cache_inv_fn(struct device *dev, void *data)
> >>>> return iommu_cache_invalidate(d, dev, &ustruct->info);
> >>>> }
> >>>>
> >>>> +static int vfio_bind_msi_fn(struct device *dev, void *data)
> >>>> +{
> >>>> + struct vfio_iommu_type1_bind_msi *ustruct =
> >>>> + (struct vfio_iommu_type1_bind_msi *)data;
> >>>> + struct iommu_domain *d = iommu_get_domain_for_dev(dev);
> >>>> +
> >>>> + return iommu_bind_guest_msi(d, dev, ustruct->iova,
> >>>> + ustruct->gpa, ustruct->size);
> >>>> +}
> >>>> +
> >>>> +static int vfio_unbind_msi_fn(struct device *dev, void *data)
> >>>> +{
> >>>> + dma_addr_t *iova = (dma_addr_t *)data;
> >>>> + struct iommu_domain *d = iommu_get_domain_for_dev(dev);
> >>>
> >>> Same as previous, we can encapsulate domain in our own struct to avoid
> >>> a lookup.
> >>>
> >>>> +
> >>>> + iommu_unbind_guest_msi(d, dev, *iova);
> >>>
> >>> Is it strange that iommu-core is exposing these interfaces at a device
> >>> level if every one of them requires us to walk all the devices? Thanks,
> >>
> >> Hum this per device API was devised in response of Robin's comments on
> >>
> >> [RFC v2 12/20] dma-iommu: Implement NESTED_MSI cookie.
> >>
> >> "
> >> But that then seems to reveal a somewhat bigger problem - if the callers
> >> are simply registering IPAs, and relying on the ITS driver to grab an
> >> entry and fill in a PA later, then how does either one know *which* PA
> >> is supposed to belong to a given IPA in the case where you have multiple
> >> devices with different ITS targets assigned to the same guest? (and if
> >> it's possible to assume a guest will use per-device stage 1 mappings and
> >> present it with a single vITS backed by multiple pITSes, I think things
> >> start breaking even harder.)
> >> "
> >>
> >> However looking back into the problem I wonder if there was an issue
> >> with the iommu_domain based API.
> >>
> >> If my understanding is correct, when assigned devices are protected by a
> >> vIOMMU then they necessarily end up in separate host iommu domains even
> >> if they belong to the same iommu_domain on the guest. And there can only
> >> be a single device in this iommu_domain.
> >
> > Don't forget that a container represents the IOMMU context in a vfio
> > environment, groups are associated with containers and a group may
> > contain one or more devices. When a vIOMMU comes into play, we still
> > only have an IOMMU context per container. If we have multiple devices
> > in a group, we run into problems with vIOMMU. We can resolve this by
> > requiring that the user ignore all but one device in the group,
> > or making sure that the devices in the group have the same IOMMU
> > context. The latter we could do in QEMU if PCIe-to-PCI bridges there
> > masked the per-device address space as it does on real hardware (ie.
> > there is no requester ID on conventional PCI, all transactions appear to
> > the IOMMU with the bridge requester ID). So I raise this question
> > because vfio's minimum domain granularity is a group.
> >
> >> If this is confirmed, there is a non ambiguous association between 1
> >> physical iommu_domain, 1 device, 1 S1 mapping and 1 physical MSI
> >> controller.
> >>
> >> I added the device handle handle to disambiguate those associations. The
> >> gIOVA ->gDB mapping is associated with a device handle. Then when the
> >> host needs a stage 1 mapping for this device, to build the nested
> >> mapping towards the physical DB it can easily grab the gIOVA->gDB stage
> >> 1 mapping registered for this device.
> >>
> >> The correctness looks more obvious to me, at least.
> >
> > Except all devices within all groups within the same container
> > necessarily share the same IOMMU context, so from that perspective, it
> > appears to impose non-trivial redundancy on the caller. Thanks,
>
> Taking into consideration the case where we could have several devices
> attached to the same host iommu group, each of them possibly using
> different host MSI doorbells, I think I am in trouble.
>
> Let's assume that using the pcie-to-pci bridge trick on guest side they
> end up in the same container and in the same guest iommu group.
>
> At the moment there is a single MSI controller on guest, so the same
> gIOVA/gDB S1 mapping is going to be created by the guest iommu dommain
> and both devices are programmed with gIOVA. If dev0 and dev1 are
> attached to different host MSI controllers, I would need to build the 2
> nested bindings:
> dev0: MSI nested binding: gIOVA -> gDB -> hDB0
> dev1: MSI nested binding: gIOVA -> gDB -> hDB1
> (on guest there is a single MSI controller at the moment)
>
> which is not possible as the devices belong to the same host iommu group
> and share the same mapping.
>
> The solution would be to instantiate 2 MSI controllers on guest side, in
> which case we would end up with
> dev0: gIOVA0 -> gDB0 -> hDB0
> dev1: gIOVA1 -> gDB1 -> hDB1
>
> Isn't it somehow what we do with the IOMMU RID topology. We need to take
> into account the host topology (2 devices belonging to the same group)
> to force the same on guest by introducing a PCIe-to-PCI bridge. Here we
> would need to say, those assigned devices are attached to different MSI
> domains on host, so we need the same on guest.
>
> Anyway, the current container based IOCTL would fail to implement that
> because I would register gIOVA0 -> gDB0 and gIOVA1 -> gDB1 for each
> device within the container which would definitively fail to build the
> correct association. So I think I would need anyway a device based IOTCL
> that would aim to tell: this assigned device uses this S1 MSI binding.
> All the notification mechanism we have in qemu is based on container, so
> this would obliged to have device based notification mechanism.
>
> So I wonder whether it wouldn't be sensible to restrict this use case
> and say we support nested mode only if we have a single assigned device
> within the container?
>
> Thoughts?
We've essentially done that with vIOMMU up to this point already; it has
not been possible to assign multiple devices from the same group to a
VM with intel-iommu, amd-iommu, or smmu due to the requirement of
separate address spaces per device. It's only when we introduce
address space aliasing with bridges that we can even consider this
possibility, and it's a configuration which smmu doesn't properly
support even on bare metal. I hope we can consider that to be simply a
gap in the implementation that will get fixed and not an architectural
problem.
As we discussed offline though, I wonder if we're attempting to support
more than necessary with your scenarios above. If devices within the
same group can be verified to share a host MSI controller, do we still
have an issue mapping them to a single guest MSI controller? When we
talked we were headed down a path that if a group is necessarily
associated to a single IOMMU, perhaps that necessarily means that a
group is also associated to a single MSI controller. I've since
thought of a configuration where a group could span physical IOMMU
devices, NVLink. As essentially a secondary bus interface for a
device, NVLink can cause devices with arbitrary PCI hierarchy
connections to be non-isolated, and ideally our grouping would
understand to account for that. However, if it could be determined
that a group associates to a single MSI controller, do we still have an
issue with multiple devices within the group?
My issue with the per device interface for what is fundamentally an
IOVA mapping is that vfio does not support per device mappings. We
support mappings at the container level, where the minimum set of
devices we can attach to a container is a group. Therefore to create
an interface that purports to support device level mappings is not
accurate. Maybe MSI controllers will restrict our configuration but
I'd rather not design the interface around the wrong level of mapping
granularity. Thanks,
Alex
Hi Marc, Robin, Alex,
On 4/3/19 7:38 PM, Alex Williamson wrote:
> On Wed, 3 Apr 2019 16:30:15 +0200
> Auger Eric <[email protected]> wrote:
>
>> Hi Alex,
>>
>> On 3/22/19 11:09 PM, Alex Williamson wrote:
>>> On Fri, 22 Mar 2019 10:30:02 +0100
>>> Auger Eric <[email protected]> wrote:
>>>
>>>> Hi Alex,
>>>> On 3/22/19 12:01 AM, Alex Williamson wrote:
>>>>> On Sun, 17 Mar 2019 18:22:19 +0100
>>>>> Eric Auger <[email protected]> wrote:
>>>>>
>>>>>> This patch adds the VFIO_IOMMU_BIND/UNBIND_MSI ioctl which aim
>>>>>> to pass/withdraw the guest MSI binding to/from the host.
>>>>>>
>>>>>> Signed-off-by: Eric Auger <[email protected]>
>>>>>>
>>>>>> ---
>>>>>> v3 -> v4:
>>>>>> - add UNBIND
>>>>>> - unwind on BIND error
>>>>>>
>>>>>> v2 -> v3:
>>>>>> - adapt to new proto of bind_guest_msi
>>>>>> - directly use vfio_iommu_for_each_dev
>>>>>>
>>>>>> v1 -> v2:
>>>>>> - s/vfio_iommu_type1_guest_msi_binding/vfio_iommu_type1_bind_guest_msi
>>>>>> ---
>>>>>> drivers/vfio/vfio_iommu_type1.c | 58 +++++++++++++++++++++++++++++++++
>>>>>> include/uapi/linux/vfio.h | 29 +++++++++++++++++
>>>>>> 2 files changed, 87 insertions(+)
>>>>>>
>>>>>> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
>>>>>> index 12a40b9db6aa..66513679081b 100644
>>>>>> --- a/drivers/vfio/vfio_iommu_type1.c
>>>>>> +++ b/drivers/vfio/vfio_iommu_type1.c
>>>>>> @@ -1710,6 +1710,25 @@ static int vfio_cache_inv_fn(struct device *dev, void *data)
>>>>>> return iommu_cache_invalidate(d, dev, &ustruct->info);
>>>>>> }
>>>>>>
>>>>>> +static int vfio_bind_msi_fn(struct device *dev, void *data)
>>>>>> +{
>>>>>> + struct vfio_iommu_type1_bind_msi *ustruct =
>>>>>> + (struct vfio_iommu_type1_bind_msi *)data;
>>>>>> + struct iommu_domain *d = iommu_get_domain_for_dev(dev);
>>>>>> +
>>>>>> + return iommu_bind_guest_msi(d, dev, ustruct->iova,
>>>>>> + ustruct->gpa, ustruct->size);
>>>>>> +}
>>>>>> +
>>>>>> +static int vfio_unbind_msi_fn(struct device *dev, void *data)
>>>>>> +{
>>>>>> + dma_addr_t *iova = (dma_addr_t *)data;
>>>>>> + struct iommu_domain *d = iommu_get_domain_for_dev(dev);
>>>>>
>>>>> Same as previous, we can encapsulate domain in our own struct to avoid
>>>>> a lookup.
>>>>>
>>>>>> +
>>>>>> + iommu_unbind_guest_msi(d, dev, *iova);
>>>>>
>>>>> Is it strange that iommu-core is exposing these interfaces at a device
>>>>> level if every one of them requires us to walk all the devices? Thanks,
>>>>
>>>> Hum this per device API was devised in response of Robin's comments on
>>>>
>>>> [RFC v2 12/20] dma-iommu: Implement NESTED_MSI cookie.
>>>>
>>>> "
>>>> But that then seems to reveal a somewhat bigger problem - if the callers
>>>> are simply registering IPAs, and relying on the ITS driver to grab an
>>>> entry and fill in a PA later, then how does either one know *which* PA
>>>> is supposed to belong to a given IPA in the case where you have multiple
>>>> devices with different ITS targets assigned to the same guest? (and if
>>>> it's possible to assume a guest will use per-device stage 1 mappings and
>>>> present it with a single vITS backed by multiple pITSes, I think things
>>>> start breaking even harder.)
>>>> "
>>>>
>>>> However looking back into the problem I wonder if there was an issue
>>>> with the iommu_domain based API.
>>>>
>>>> If my understanding is correct, when assigned devices are protected by a
>>>> vIOMMU then they necessarily end up in separate host iommu domains even
>>>> if they belong to the same iommu_domain on the guest. And there can only
>>>> be a single device in this iommu_domain.
>>>
>>> Don't forget that a container represents the IOMMU context in a vfio
>>> environment, groups are associated with containers and a group may
>>> contain one or more devices. When a vIOMMU comes into play, we still
>>> only have an IOMMU context per container. If we have multiple devices
>>> in a group, we run into problems with vIOMMU. We can resolve this by
>>> requiring that the user ignore all but one device in the group,
>>> or making sure that the devices in the group have the same IOMMU
>>> context. The latter we could do in QEMU if PCIe-to-PCI bridges there
>>> masked the per-device address space as it does on real hardware (ie.
>>> there is no requester ID on conventional PCI, all transactions appear to
>>> the IOMMU with the bridge requester ID). So I raise this question
>>> because vfio's minimum domain granularity is a group.
>>>
>>>> If this is confirmed, there is a non ambiguous association between 1
>>>> physical iommu_domain, 1 device, 1 S1 mapping and 1 physical MSI
>>>> controller.
>>>>
>>>> I added the device handle handle to disambiguate those associations. The
>>>> gIOVA ->gDB mapping is associated with a device handle. Then when the
>>>> host needs a stage 1 mapping for this device, to build the nested
>>>> mapping towards the physical DB it can easily grab the gIOVA->gDB stage
>>>> 1 mapping registered for this device.
>>>>
>>>> The correctness looks more obvious to me, at least.
>>>
>>> Except all devices within all groups within the same container
>>> necessarily share the same IOMMU context, so from that perspective, it
>>> appears to impose non-trivial redundancy on the caller. Thanks,
>>
>> Taking into consideration the case where we could have several devices
>> attached to the same host iommu group, each of them possibly using
>> different host MSI doorbells, I think I am in trouble.
>>
>> Let's assume that using the pcie-to-pci bridge trick on guest side they
>> end up in the same container and in the same guest iommu group.
>>
>> At the moment there is a single MSI controller on guest, so the same
>> gIOVA/gDB S1 mapping is going to be created by the guest iommu dommain
>> and both devices are programmed with gIOVA. If dev0 and dev1 are
>> attached to different host MSI controllers, I would need to build the 2
>> nested bindings:
>> dev0: MSI nested binding: gIOVA -> gDB -> hDB0
>> dev1: MSI nested binding: gIOVA -> gDB -> hDB1
>> (on guest there is a single MSI controller at the moment)
>>
>> which is not possible as the devices belong to the same host iommu group
>> and share the same mapping.
>>
>> The solution would be to instantiate 2 MSI controllers on guest side, in
>> which case we would end up with
>> dev0: gIOVA0 -> gDB0 -> hDB0
>> dev1: gIOVA1 -> gDB1 -> hDB1
>>
>> Isn't it somehow what we do with the IOMMU RID topology. We need to take
>> into account the host topology (2 devices belonging to the same group)
>> to force the same on guest by introducing a PCIe-to-PCI bridge. Here we
>> would need to say, those assigned devices are attached to different MSI
>> domains on host, so we need the same on guest.
>>
>> Anyway, the current container based IOCTL would fail to implement that,
>> because I would register gIOVA0 -> gDB0 and gIOVA1 -> gDB1 for each
>> device within the container, which would definitely fail to build the
>> correct association. So I think I would need a device based IOCTL anyway,
>> one that would say: this assigned device uses this S1 MSI binding.
>> All the notification mechanisms we have in QEMU are container based, so
>> this would oblige us to add a device based notification mechanism.
>>
>> So I wonder whether it wouldn't be sensible to restrict this use case
>> and say we support nested mode only if we have a single assigned device
>> within the container?
>>
>> Thoughts?
>
> We've essentially done that with vIOMMU up to this point already; it has
> not been possible to assign multiple devices from the same group to a
> VM with intel-iommu, amd-iommu, or smmu due to the requirement of
> separate address spaces per device. It's only when we introduce
> address space aliasing with bridges that we can even consider this
> possibility, and it's a configuration which smmu doesn't properly
> support even on bare metal. I hope we can consider that to be simply a
> gap in the implementation that will get fixed and not an architectural
> problem.
>
> As we discussed offline though, I wonder if we're attempting to support
> more than necessary with your scenarios above. If devices within the
> same group can be verified to share a host MSI controller, do we still
> have an issue mapping them to a single guest MSI controller? When we
> talked we were headed down a path that if a group is necessarily
> associated to a single IOMMU, perhaps that necessarily means that a
> group is also associated to a single MSI controller. I've since
> thought of a configuration where a group could span physical IOMMU
> devices: NVLink. As essentially a secondary bus interface for a
> device, NVLink can cause devices with arbitrary PCI hierarchy
> connections to be non-isolated, and ideally our grouping would
> understand to account for that. However, if it could be determined
> that a group associates to a single MSI controller, do we still have an
> issue with multiple devices within the group?
Marc, Robin,
Do you think it is a reasonable assumption that devices within
the same host iommu group share the same MSI doorbell?
>
> My issue with the per device interface for what is fundamentally an
> IOVA mapping is that vfio does not support per device mappings. We
> support mappings at the container level, where the minimum set of
> devices we can attach to a container is a group. Therefore to create
> an interface that purports to support device level mappings is not
> accurate. Maybe MSI controllers will restrict our configuration but
> I'd rather not design the interface around the wrong level of mapping
> granularity. Thanks,
Alex,
In the nested case, the vfio container is used to set up stage 2 only,
whereas stage 1 is owned by the guest. Here the mapping we pass to
the host is a stage 1 mapping; this information is used to build the
correct S2 mapping. The mapping then remains the same for the whole
container: if dev1 decides to use gIOVA0 it will reach hDB0. Anyway the
granularity of the mapping cannot change.
So this rather tells that this assigned device uses this given gIOVA to
reach a guest MSI doorbell.
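For illustration only, a minimal sketch of what a device-scoped binding
payload could look like at the uapi level; the struct, field and ioctl
names below are hypothetical and do not correspond to the container
based uapi proposed in this series:

/*
 * Hypothetical sketch: a device-scoped MSI binding expressing
 * "this assigned device uses this gIOVA to reach this guest MSI
 * doorbell". All names and the ioctl number are illustrative.
 */
struct vfio_device_msi_binding {
	__u32	argsz;
	__u32	flags;
	__u64	iova;	/* gIOVA the guest programmed into the device */
	__u64	gpa;	/* gDB: guest doorbell address the gIOVA maps to */
	__u64	size;	/* size of the doorbell window */
};

#define VFIO_DEVICE_BIND_MSI	_IO(VFIO_TYPE, VFIO_BASE + 22)	/* illustrative */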
Thanks
Eric
>
> Alex
>
Hi Vincent,
On 10/04/2019 13:35, Vincent Stehlé wrote:
> On Thu, Apr 04, 2019 at 08:55:25AM +0200, Auger Eric wrote:
>> Hi Marc, Robin, Alex,
> (..)
>> Do you think it is a reasonable assumption that devices within
>> the same host iommu group share the same MSI doorbell?
>
> Hi Eric,
>
> I am not sure this assumption always holds.
>
> Marc, Robin and Alex can correct me, but for example I think the following
> topology is valid for Arm systems:
>
> +------------+ +------------+
> | Endpoint A | | Endpoint B |
> +------------+ +------------+
> v v
> /---------\
> | Non-ACS |
> | Switch |
> \---------/
> v
> +---------------+
> | PCIe |
> | Root Complex |
> +---------------+
> v
> +-----------+
> | SMMU |
> +-----------+
> v
> +--------------------------+
> | System interconnect |
> +--------------------------+
> v v
> +-----------+ +-----------+
> | ITS A | | ITS B |
> +-----------+ +-----------+
>
> All PCIe Endpoints and ITS could be in the same ITS Group 0, meaning
> devices could send their MSIs to any ITS in hardware.
>
> For Linux the two PCIe Endpoints would be in the same iommu group, because
> the switch in this example does not support ACS.
>
> I think the devicetree msi-map property could be used to "map" the RID of
> Endpoint A to ITS A and the RID of Endpoint B to ITS B, which would violate
> the assumption.
>
> See the monolithic example in [1], the example system in [2], appendices
> D, E and F in [3] and the msi-map property in [4].
I think we are all in agreement that this is a possible topology. It is
just that it doesn't exist in any real-life implementation we know of
(the ITS tends to be close to the RC and not downstream of the
interconnect).
Given the complexity of what we're trying to put together, I'd rather
start with a small step which supports the commonly implemented topologies,
and later address the odd ones if they actually crop up.
Thanks,
M.
--
Jazz is not dead. It just smells funny...
Hi Vincent,
On 4/10/19 2:35 PM, Vincent Stehlé wrote:
> On Thu, Apr 04, 2019 at 08:55:25AM +0200, Auger Eric wrote:
>> Hi Marc, Robin, Alex,
> (..)
>> Do you think it is a reasonable assumption that devices within
>> the same host iommu group share the same MSI doorbell?
>
> Hi Eric,
>
> I am not sure this assumption always holds.
>
> Marc, Robin and Alex can correct me, but for example I think the following
> topology is valid for Arm systems:
>
> +------------+ +------------+
> | Endpoint A | | Endpoint B |
> +------------+ +------------+
> v v
> /---------\
> | Non-ACS |
> | Switch |
> \---------/
> v
> +---------------+
> | PCIe |
> | Root Complex |
> +---------------+
> v
> +-----------+
> | SMMU |
> +-----------+
> v
> +--------------------------+
> | System interconnect |
> +--------------------------+
> v v
> +-----------+ +-----------+
> | ITS A | | ITS B |
> +-----------+ +-----------+
>
> All PCIe Endpoints and ITS could be in the same ITS Group 0, meaning
> devices could send their MSIs to any ITS in hardware.
>
> For Linux the two PCIe Endpoints would be in the same iommu group, because
> the switch in this example does not support ACS.
>
> I think the devicetree msi-map property could be used to "map" the RID of
> Endpoint A to ITS A and the RID of Endpoint B to ITS B, which would violate
> the assumption.
>
> See the monolithic example in [1], the example system in [2], appendices
> D, E and F in [3] and the msi-map property in [4].
Thank you for the review & links.
I understand the above topology is perfectly valid. Now the question is:
is it sufficiently common to care about it?
At the moment VFIO/vIOMMU assignment of devices belonging to the same
group isn't upstream yet; work is ongoing by Alex to support it. It uses
a PCIe-to-PCI bridge on the guest side, and it looks like this topology
is not supported by the SMMUv3 driver. Then comes the trouble of using
several ITSes in nested mode.
If this topology is sufficiently rare, I propose we do not support it
in this VFIO/vIOMMU use case. In v7 I introduced a check that verifies
that devices attached to the same nested iommu_domain share the same
msi_domain.
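For reference, a minimal sketch of what such a check could look like,
under the assumption that it runs once per group in the nested attach
path; the helper names and placement are illustrative, only
iommu_group_for_each_dev() and dev_get_msi_domain() are existing kernel
APIs:

#include <linux/device.h>
#include <linux/iommu.h>
#include <linux/irqdomain.h>
#include <linux/msi.h>

/* Illustrative sketch: fail the nested attach if the devices of the
 * group do not all resolve to the same MSI irq domain (e.g. ITS).
 */
static int check_msi_domain(struct device *dev, void *data)
{
	struct irq_domain **ref = data;
	struct irq_domain *d = dev_get_msi_domain(dev);

	if (!*ref)
		*ref = d;

	return d == *ref ? 0 : -EINVAL;
}

static int nested_domain_check_msi_domain(struct iommu_group *group)
{
	struct irq_domain *ref = NULL;

	/* propagates the first non-zero value returned by the callback */
	return iommu_group_for_each_dev(group, &ref, check_msi_domain);
}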
Thanks
Eric
>
> Best regards,
> Vincent.
>
> [1] https://static.docs.arm.com/100336/0102/corelink_gic600_generic_interrupt_controller_technical_reference_manual_100336_0102_00_en.pdf
> [2] http://infocenter.arm.com/help/topic/com.arm.doc.den0049d/DEN0049D_IO_Remapping_Table.pdf
> [3] https://static.docs.arm.com/den0029/50/Q1-DEN0029B_SBSA_5.0.pdf
> [4] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/devicetree/bindings/pci/pci-msi.txt
>
On Thu, Apr 04, 2019 at 08:55:25AM +0200, Auger Eric wrote:
> Hi Marc, Robin, Alex,
(..)
> Do you think it is a reasonable assumption that devices within
> the same host iommu group share the same MSI doorbell?
Hi Eric,
I am not sure this assumption always holds.
Marc, Robin and Alex can correct me, but for example I think the following
topology is valid for Arm systems:
+------------+ +------------+
| Endpoint A | | Endpoint B |
+------------+ +------------+
v v
/---------\
| Non-ACS |
| Switch |
\---------/
v
+---------------+
| PCIe |
| Root Complex |
+---------------+
v
+-----------+
| SMMU |
+-----------+
v
+--------------------------+
| System interconnect |
+--------------------------+
v v
+-----------+ +-----------+
| ITS A | | ITS B |
+-----------+ +-----------+
All PCIe Endpoints and ITS could be in the same ITS Group 0, meaning
devices could send their MSIs to any ITS in hardware.
For Linux the two PCIe Endpoints would be in the same iommu group, because
the switch in this example does not support ACS.
I think the devicetree msi-map property could be used to "map" the RID of
Endpoint A to ITS A and the RID of Endpoint B to ITS B, which would violate
the assumption.
See the monolithic example in [1], the example system in [2], appendices
D, E and F in [3] and the msi-map property in [4].
Best regards,
Vincent.
[1] https://static.docs.arm.com/100336/0102/corelink_gic600_generic_interrupt_controller_technical_reference_manual_100336_0102_00_en.pdf
[2] http://infocenter.arm.com/help/topic/com.arm.doc.den0049d/DEN0049D_IO_Remapping_Table.pdf
[3] https://static.docs.arm.com/den0029/50/Q1-DEN0029B_SBSA_5.0.pdf
[4] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/devicetree/bindings/pci/pci-msi.txt