2020-03-20 16:20:38

by Eric Auger

Subject: [PATCH v10 00/11] SMMUv3 Nested Stage Setup (VFIO part)

This series brings the VFIO part of HW nested paging support
in the SMMUv3.

This is a rebase on top of Will's arm-smmu-updates branch
(git://git.kernel.org/pub/scm/linux/kernel/git/will/linux.git tags/arm-smmu-updates)

This is basically a resend of v9, as the patches still applied cleanly.

This work has been stalled since Linux Plumbers 2019. Since then,
some users have expressed interest in it and tested v9:
- https://patchwork.kernel.org/cover/11039995/#23012381
- https://patchwork.kernel.org/cover/11039995/#23197235

The series depends on:
[PATCH v10 00/13] SMMUv3 Nested Stage Setup (IOMMU part)

3 new ioctls are introduced that allow userspace to:
1) pass the guest stage 1 configuration
2) pass stage 1 MSI bindings
3) invalidate stage 1 related caches

They map onto the related new IOMMU API functions.

We introduce the capability to register specific interrupt
indexes (see [1]). A new DMA_FAULT interrupt index allows userspace
to register an eventfd which gets signaled whenever a stage 1 related
fault is detected at the physical level. A dedicated region also
exposes the fault records to userspace.
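
A rough sketch of the resulting userspace call flow (QEMU-like VMM;
error handling and argument setup are omitted and the argument
variables are only illustrative):

    ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_NESTING_IOMMU);

    /* 1) pass the guest stage 1 configuration (PASID/CD table) */
    ioctl(container, VFIO_IOMMU_SET_PASID_TABLE, &pasid_table);

    /* 2) pass the guest IOVA -> doorbell GPA MSI bindings */
    ioctl(container, VFIO_IOMMU_SET_MSI_BINDING, &msi_binding);

    /* 3) at runtime, propagate guest stage 1 cache invalidations */
    ioctl(container, VFIO_IOMMU_CACHE_INVALIDATE, &inv_info);

    /* fault reporting: attach an eventfd to the new DMA_FAULT IRQ
     * index and consume records from the DMA FAULT region */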

Best Regards

Eric

This series can be found at:
https://github.com/eauger/linux/tree/will-arm-smmu-updates-2stage-v10

The series includes Tina's patch stemming from
[1] "[RFC PATCH v2 1/3] vfio: Use capability chains to handle device
specific irq" plus patches originally contributed by Yi.

History:

v9 -> v10
- rebase on top of 5.6.0-rc3 (no change versus v9)

v8 -> v9:
- introduce specific irq framework
- single fault region
- iommu_unregister_device_fault_handler failure case not handled
yet.

v7 -> v8:
- rebase on top of v5.2-rc1 and especially
8be39a1a04c1 iommu/arm-smmu-v3: Add a master->domain pointer
- dynamic alloc of s1_cfg/s2_cfg
- __arm_smmu_tlb_inv_asid/s1_range_nosync
- check there are no HW MSI regions
- asid invalidation using pasid extended struct (change in the uapi)
- add s1_live/s2_live checks
- move check about support of nested stages in domain finalise
- fixes in error reporting according to the discussion with Robin
- reordered the patches to have first iommu/smmuv3 patches and then
VFIO patches

v6 -> v7:
- removed device handle from bind/unbind_guest_msi
- added "iommu/smmuv3: Nested mode single MSI doorbell per domain
enforcement"
- added a few uapi comments as suggested by Jean, Jacob and Alex

v5 -> v6:
- Fix compilation issue when CONFIG_IOMMU_API is unset

v4 -> v5:
- fix bug reported by Vincent: fault handler unregistration now happens in
vfio_pci_release
- IOMMU_FAULT_PERM_* moved outside of struct definition + small
uapi changes suggested by Jean-Philippe (except fetch_addr)
- iommu: introduce device fault report API: removed the PRI part.
- see individual logs for more details
- reset the ste abort flag on detach

v3 -> v4:
- took into account Alex, Jean-Philippe and Robin's comments on v3
- rework of the smmuv3 driver integration
- add tear down ops for msi binding and PASID table binding
- fix S1 fault propagation
- put fault reporting patches at the beginning of the series following
Jean-Philippe's request
- update of the cache invalidate and fault API uapis
- VFIO fault reporting rework with 2 separate regions and one mmappable
segment for the fault queue
- moved to PATCH

v2 -> v3:
- When registering the S1 MSI binding we now store the device handle. This
addresses Robin's comment about discrimination of devices belonging to
different S1 groups and using different physical MSI doorbells.
- Change the fault reporting API: use VFIO_PCI_DMA_FAULT_IRQ_INDEX to
set the eventfd and expose the faults through an mmappable fault region

v1 -> v2:
- Added the fault reporting capability
- asid properly passed on invalidation (fix assignment of multiple
devices)
- see individual change logs for more info


Eric Auger (8):
vfio: VFIO_IOMMU_SET_MSI_BINDING
vfio/pci: Add VFIO_REGION_TYPE_NESTED region type
vfio/pci: Register an iommu fault handler
vfio/pci: Allow to mmap the fault queue
vfio: Add new IRQ for DMA fault reporting
vfio/pci: Add framework for custom interrupt indices
vfio/pci: Register and allow DMA FAULT IRQ signaling
vfio: Document nested stage control

Liu, Yi L (2):
vfio: VFIO_IOMMU_SET_PASID_TABLE
vfio: VFIO_IOMMU_CACHE_INVALIDATE

Tina Zhang (1):
vfio: Use capability chains to handle device specific irq

Documentation/driver-api/vfio.rst | 77 ++++++++
drivers/vfio/pci/vfio_pci.c | 283 ++++++++++++++++++++++++++--
drivers/vfio/pci/vfio_pci_intrs.c | 62 ++++++
drivers/vfio/pci/vfio_pci_private.h | 24 +++
drivers/vfio/pci/vfio_pci_rdwr.c | 45 +++++
drivers/vfio/vfio_iommu_type1.c | 166 ++++++++++++++++
include/uapi/linux/vfio.h | 109 ++++++++++-
7 files changed, 747 insertions(+), 19 deletions(-)

--
2.20.1


2020-03-20 16:20:50

by Eric Auger

Subject: [PATCH v10 03/11] vfio: VFIO_IOMMU_SET_MSI_BINDING

This patch adds the VFIO_IOMMU_SET_MSI_BINDING ioctl, which aims
to (un)register the guest MSI binding to the host. The host can
then use those stage 1 bindings to build a nested stage
binding targeting the physical MSIs.
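
A minimal userspace sketch of the intended usage (the container fd
and the guest addresses below are only illustrative):

    struct vfio_iommu_type1_set_msi_binding msi = {
        .argsz = sizeof(msi),
        .flags = VFIO_IOMMU_BIND_MSI,
        .iova  = msi_iova,       /* MSI IOVA programmed by the guest */
        .gpa   = doorbell_gpa,   /* guest PA of the vITS/vGIC doorbell */
        .size  = 0x1000,
    };

    ioctl(container, VFIO_IOMMU_SET_MSI_BINDING, &msi);
    /* ... on tear down, only .iova is relevant: */
    msi.flags = VFIO_IOMMU_UNBIND_MSI;
    ioctl(container, VFIO_IOMMU_SET_MSI_BINDING, &msi);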

Signed-off-by: Eric Auger <[email protected]>

---

v8 -> v9:
- merge VFIO_IOMMU_BIND_MSI/VFIO_IOMMU_UNBIND_MSI into a single
VFIO_IOMMU_SET_MSI_BINDING ioctl
- ioctl id changed

v6 -> v7:
- removed the dev arg

v3 -> v4:
- add UNBIND
- unwind on BIND error

v2 -> v3:
- adapt to new proto of bind_guest_msi
- directly use vfio_iommu_for_each_dev

v1 -> v2:
- s/vfio_iommu_type1_guest_msi_binding/vfio_iommu_type1_bind_guest_msi
---
drivers/vfio/vfio_iommu_type1.c | 55 +++++++++++++++++++++++++++++++++
include/uapi/linux/vfio.h | 20 ++++++++++++
2 files changed, 75 insertions(+)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 04c6625098bb..17ba63de1494 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -2246,6 +2246,42 @@ static int vfio_cache_inv_fn(struct device *dev, void *data)
return iommu_cache_invalidate(dc->domain, dev, &ustruct->info);
}

+static int
+vfio_bind_msi(struct vfio_iommu *iommu,
+ dma_addr_t giova, phys_addr_t gpa, size_t size)
+{
+ struct vfio_domain *d;
+ int ret = 0;
+
+ mutex_lock(&iommu->lock);
+
+ list_for_each_entry(d, &iommu->domain_list, next) {
+ ret = iommu_bind_guest_msi(d->domain, giova, gpa, size);
+ if (ret)
+ goto unwind;
+ }
+ goto unlock;
+unwind:
+ list_for_each_entry_continue_reverse(d, &iommu->domain_list, next) {
+ iommu_unbind_guest_msi(d->domain, giova);
+ }
+unlock:
+ mutex_unlock(&iommu->lock);
+ return ret;
+}
+
+static void
+vfio_unbind_msi(struct vfio_iommu *iommu, dma_addr_t giova)
+{
+ struct vfio_domain *d;
+
+ mutex_lock(&iommu->lock);
+ list_for_each_entry(d, &iommu->domain_list, next) {
+ iommu_unbind_guest_msi(d->domain, giova);
+ }
+ mutex_unlock(&iommu->lock);
+}
+
static long vfio_iommu_type1_ioctl(void *iommu_data,
unsigned int cmd, unsigned long arg)
{
@@ -2387,6 +2423,25 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
&ustruct);
mutex_unlock(&iommu->lock);
return ret;
+ } else if (cmd == VFIO_IOMMU_SET_MSI_BINDING) {
+ struct vfio_iommu_type1_set_msi_binding ustruct;
+
+ minsz = offsetofend(struct vfio_iommu_type1_set_msi_binding,
+ size);
+
+ if (copy_from_user(&ustruct, (void __user *)arg, minsz))
+ return -EFAULT;
+
+ if (ustruct.argsz < minsz)
+ return -EINVAL;
+
+ if (ustruct.flags == VFIO_IOMMU_UNBIND_MSI)
+ vfio_unbind_msi(iommu, ustruct.iova);
+ else if (ustruct.flags == VFIO_IOMMU_BIND_MSI)
+ return vfio_bind_msi(iommu, ustruct.iova, ustruct.gpa,
+ ustruct.size);
+ else
+ return -EINVAL;
}

return -ENOTTY;
diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 98212c1711e7..9f2429eb1958 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -826,6 +826,26 @@ struct vfio_iommu_type1_cache_invalidate {
};
#define VFIO_IOMMU_CACHE_INVALIDATE _IO(VFIO_TYPE, VFIO_BASE + 23)

+/**
+ * VFIO_IOMMU_SET_MSI_BINDING - _IOWR(VFIO_TYPE, VFIO_BASE + 24,
+ * struct vfio_iommu_type1_set_msi_binding)
+ *
+ * Pass a stage 1 MSI doorbell mapping to the host so that this
+ * latter can build a nested stage2 mapping. Or conversely tear
+ * down a previously bound stage 1 MSI binding.
+ */
+struct vfio_iommu_type1_set_msi_binding {
+ __u32 argsz;
+ __u32 flags;
+#define VFIO_IOMMU_BIND_MSI (1 << 0)
+#define VFIO_IOMMU_UNBIND_MSI (1 << 1)
+ __u64 iova; /* MSI guest IOVA */
+ /* Fields below are used on BIND */
+ __u64 gpa; /* MSI guest physical address */
+ __u64 size; /* size of stage1 mapping (bytes) */
+};
+#define VFIO_IOMMU_SET_MSI_BINDING _IO(VFIO_TYPE, VFIO_BASE + 24)
+
/* -------- Additional API for SPAPR TCE (Server POWERPC) IOMMU -------- */

/*
--
2.20.1

2020-03-20 16:21:02

by Eric Auger

Subject: [PATCH v10 01/11] vfio: VFIO_IOMMU_SET_PASID_TABLE

From: "Liu, Yi L" <[email protected]>

This patch adds a VFIO_IOMMU_SET_PASID_TABLE ioctl
which aims to pass the virtual IOMMU guest configuration
to the host. The latter takes the form of the so-called
PASID table.
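
A minimal userspace sketch (the container fd is assumed to use the
VFIO_TYPE1_NESTING_IOMMU type; struct iommu_pasid_table_config comes
from the companion IOMMU series and its contents are elided here):

    struct vfio_iommu_type1_set_pasid_table arg = {
        .argsz = sizeof(arg),
        .flags = VFIO_PASID_TABLE_FLAG_SET,
        /* .config = guest PASID/CD table description (format, base, ...) */
    };

    ioctl(container, VFIO_IOMMU_SET_PASID_TABLE, &arg);
    /* on vIOMMU disable/reset: */
    arg.flags = VFIO_PASID_TABLE_FLAG_UNSET;
    ioctl(container, VFIO_IOMMU_SET_PASID_TABLE, &arg);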

Signed-off-by: Jacob Pan <[email protected]>
Signed-off-by: Liu, Yi L <[email protected]>
Signed-off-by: Eric Auger <[email protected]>

---
v8 -> v9:
- Merge VFIO_IOMMU_ATTACH/DETACH_PASID_TABLE into a single
VFIO_IOMMU_SET_PASID_TABLE ioctl.

v6 -> v7:
- add a comment related to VFIO_IOMMU_DETACH_PASID_TABLE

v3 -> v4:
- restore ATTACH/DETACH
- add unwind on failure

v2 -> v3:
- s/BIND_PASID_TABLE/SET_PASID_TABLE

v1 -> v2:
- s/BIND_GUEST_STAGE/BIND_PASID_TABLE
- remove the struct device arg
---
drivers/vfio/vfio_iommu_type1.c | 56 +++++++++++++++++++++++++++++++++
include/uapi/linux/vfio.h | 19 +++++++++++
2 files changed, 75 insertions(+)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index a177bf2c6683..bfacbd876ee1 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -2172,6 +2172,43 @@ static int vfio_iommu_iova_build_caps(struct vfio_iommu *iommu,
return ret;
}

+static void
+vfio_detach_pasid_table(struct vfio_iommu *iommu)
+{
+ struct vfio_domain *d;
+
+ mutex_lock(&iommu->lock);
+
+ list_for_each_entry(d, &iommu->domain_list, next) {
+ iommu_detach_pasid_table(d->domain);
+ }
+ mutex_unlock(&iommu->lock);
+}
+
+static int
+vfio_attach_pasid_table(struct vfio_iommu *iommu,
+ struct vfio_iommu_type1_set_pasid_table *ustruct)
+{
+ struct vfio_domain *d;
+ int ret = 0;
+
+ mutex_lock(&iommu->lock);
+
+ list_for_each_entry(d, &iommu->domain_list, next) {
+ ret = iommu_attach_pasid_table(d->domain, &ustruct->config);
+ if (ret)
+ goto unwind;
+ }
+ goto unlock;
+unwind:
+ list_for_each_entry_continue_reverse(d, &iommu->domain_list, next) {
+ iommu_detach_pasid_table(d->domain);
+ }
+unlock:
+ mutex_unlock(&iommu->lock);
+ return ret;
+}
+
static long vfio_iommu_type1_ioctl(void *iommu_data,
unsigned int cmd, unsigned long arg)
{
@@ -2276,6 +2313,25 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,

return copy_to_user((void __user *)arg, &unmap, minsz) ?
-EFAULT : 0;
+ } else if (cmd == VFIO_IOMMU_SET_PASID_TABLE) {
+ struct vfio_iommu_type1_set_pasid_table ustruct;
+
+ minsz = offsetofend(struct vfio_iommu_type1_set_pasid_table,
+ config);
+
+ if (copy_from_user(&ustruct, (void __user *)arg, minsz))
+ return -EFAULT;
+
+ if (ustruct.argsz < minsz)
+ return -EINVAL;
+
+ if (ustruct.flags & VFIO_PASID_TABLE_FLAG_SET)
+ return vfio_attach_pasid_table(iommu, &ustruct);
+ else if (ustruct.flags & VFIO_PASID_TABLE_FLAG_UNSET) {
+ vfio_detach_pasid_table(iommu);
+ return 0;
+ } else
+ return -EINVAL;
}

return -ENOTTY;
diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 9e843a147ead..e032a1fe6ed9 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -14,6 +14,7 @@

#include <linux/types.h>
#include <linux/ioctl.h>
+#include <linux/iommu.h>

#define VFIO_API_VERSION 0

@@ -794,6 +795,24 @@ struct vfio_iommu_type1_dma_unmap {
#define VFIO_IOMMU_ENABLE _IO(VFIO_TYPE, VFIO_BASE + 15)
#define VFIO_IOMMU_DISABLE _IO(VFIO_TYPE, VFIO_BASE + 16)

+/**
+ * VFIO_IOMMU_SET_PASID_TABLE - _IOWR(VFIO_TYPE, VFIO_BASE + 22,
+ * struct vfio_iommu_type1_set_pasid_table)
+ *
+ * The SET operation passes a PASID table to the host while the
+ * UNSET operation detaches the one currently programmed. Setting
+ * a table while another is already programmed replaces the old table.
+ */
+struct vfio_iommu_type1_set_pasid_table {
+ __u32 argsz;
+ __u32 flags;
+#define VFIO_PASID_TABLE_FLAG_SET (1 << 0)
+#define VFIO_PASID_TABLE_FLAG_UNSET (1 << 1)
+ struct iommu_pasid_table_config config; /* used on SET */
+};
+
+#define VFIO_IOMMU_SET_PASID_TABLE _IO(VFIO_TYPE, VFIO_BASE + 22)
+
/* -------- Additional API for SPAPR TCE (Server POWERPC) IOMMU -------- */

/*
--
2.20.1

2020-03-20 16:21:15

by Eric Auger

Subject: [PATCH v10 05/11] vfio/pci: Register an iommu fault handler

Register an IOMMU fault handler which records faults in
the DMA FAULT region ring buffer. In a subsequent patch, we
will add the signaling of a specific eventfd to allow
userspace to be notified whenever a new fault shows up.

Signed-off-by: Eric Auger <[email protected]>

---

v8 -> v9:
- handler now takes an iommu_fault handle
- eventfd signaling moved to a subsequent patch
- check the fault type and return an error if != UNRECOV
- still the fault handler registration can fail. We need to
reach an agreement about how to deal with the situation

v3 -> v4:
- move iommu_unregister_device_fault_handler to vfio_pci_release
---
drivers/vfio/pci/vfio_pci.c | 42 +++++++++++++++++++++++++++++++++++++
1 file changed, 42 insertions(+)

diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
index 586b89debed5..69595c240baf 100644
--- a/drivers/vfio/pci/vfio_pci.c
+++ b/drivers/vfio/pci/vfio_pci.c
@@ -27,6 +27,7 @@
#include <linux/vfio.h>
#include <linux/vgaarb.h>
#include <linux/nospec.h>
+#include <linux/circ_buf.h>

#include "vfio_pci_private.h"

@@ -283,6 +284,38 @@ static const struct vfio_pci_regops vfio_pci_dma_fault_regops = {
.add_capability = vfio_pci_dma_fault_add_capability,
};

+int vfio_pci_iommu_dev_fault_handler(struct iommu_fault *fault, void *data)
+{
+ struct vfio_pci_device *vdev = (struct vfio_pci_device *)data;
+ struct vfio_region_dma_fault *reg =
+ (struct vfio_region_dma_fault *)vdev->fault_pages;
+ struct iommu_fault *new =
+ (struct iommu_fault *)(vdev->fault_pages + reg->offset +
+ reg->head * reg->entry_size);
+ int head, tail, size;
+ int ret = 0;
+
+ if (fault->type != IOMMU_FAULT_DMA_UNRECOV)
+ return -ENOENT;
+
+ mutex_lock(&vdev->fault_queue_lock);
+
+ head = reg->head;
+ tail = reg->tail;
+ size = reg->nb_entries;
+
+ if (CIRC_SPACE(head, tail, size) < 1) {
+ ret = -ENOSPC;
+ goto unlock;
+ }
+
+ *new = *fault;
+ reg->head = (head + 1) % size;
+unlock:
+ mutex_unlock(&vdev->fault_queue_lock);
+ return ret;
+}
+
#define DMA_FAULT_RING_LENGTH 512

static int vfio_pci_init_dma_fault_region(struct vfio_pci_device *vdev)
@@ -317,6 +350,13 @@ static int vfio_pci_init_dma_fault_region(struct vfio_pci_device *vdev)
header->entry_size = sizeof(struct iommu_fault);
header->nb_entries = DMA_FAULT_RING_LENGTH;
header->offset = sizeof(struct vfio_region_dma_fault);
+
+ ret = iommu_register_device_fault_handler(&vdev->pdev->dev,
+ vfio_pci_iommu_dev_fault_handler,
+ vdev);
+ if (ret)
+ goto out;
+
return 0;
out:
kfree(vdev->fault_pages);
@@ -542,6 +582,8 @@ static void vfio_pci_release(void *device_data)
if (!(--vdev->refcnt)) {
vfio_spapr_pci_eeh_release(vdev->pdev);
vfio_pci_disable(vdev);
+ /* TODO: Failure problematics */
+ iommu_unregister_device_fault_handler(&vdev->pdev->dev);
}

mutex_unlock(&vdev->reflck->lock);
--
2.20.1

2020-03-20 16:21:38

by Eric Auger

Subject: [PATCH v10 04/11] vfio/pci: Add VFIO_REGION_TYPE_NESTED region type

Add a new specific DMA_FAULT region aiming to expose nested mode
translation faults.

The region has a ring buffer that contains the actual fault
records plus a header allowing to handle it (tail/head indices,
max capacity, entry size). At the moment the region is dimensioned
for 512 fault records.
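
A minimal sketch of how userspace could drain the ring through
read/write accesses (the region becomes mmappable in a later patch;
'device' is the VFIO device fd and 'reg' the vfio_region_info of the
DMA FAULT region, both illustrative):

    struct vfio_region_dma_fault header;
    struct iommu_fault evt;
    __u32 tail;

    pread(device, &header, sizeof(header), reg.offset);
    tail = header.tail;
    while (tail != header.head) {
        pread(device, &evt, sizeof(evt),
              reg.offset + header.offset + tail * header.entry_size);
        /* ... forward the fault to the vIOMMU ... */
        tail = (tail + 1) % header.nb_entries;
    }
    /* only the 4-byte tail index is writable, at region offset 0 */
    pwrite(device, &tail, sizeof(tail), reg.offset);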

Signed-off-by: Eric Auger <[email protected]>

---

v8 -> v9:
- Use a single region instead of a prod/cons region

v4 -> v5
- check cons is not null in vfio_pci_check_cons_fault

v3 -> v4:
- use 2 separate regions, respectively in read and write modes
- add the version capability
---
drivers/vfio/pci/vfio_pci.c | 68 +++++++++++++++++++++++++++++
drivers/vfio/pci/vfio_pci_private.h | 10 +++++
drivers/vfio/pci/vfio_pci_rdwr.c | 45 +++++++++++++++++++
include/uapi/linux/vfio.h | 35 +++++++++++++++
4 files changed, 158 insertions(+)

diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
index 379a02c36e37..586b89debed5 100644
--- a/drivers/vfio/pci/vfio_pci.c
+++ b/drivers/vfio/pci/vfio_pci.c
@@ -260,6 +260,69 @@ int vfio_pci_set_power_state(struct vfio_pci_device *vdev, pci_power_t state)
return ret;
}

+static void vfio_pci_dma_fault_release(struct vfio_pci_device *vdev,
+ struct vfio_pci_region *region)
+{
+}
+
+static int vfio_pci_dma_fault_add_capability(struct vfio_pci_device *vdev,
+ struct vfio_pci_region *region,
+ struct vfio_info_cap *caps)
+{
+ struct vfio_region_info_cap_fault cap = {
+ .header.id = VFIO_REGION_INFO_CAP_DMA_FAULT,
+ .header.version = 1,
+ .version = 1,
+ };
+ return vfio_info_add_capability(caps, &cap.header, sizeof(cap));
+}
+
+static const struct vfio_pci_regops vfio_pci_dma_fault_regops = {
+ .rw = vfio_pci_dma_fault_rw,
+ .release = vfio_pci_dma_fault_release,
+ .add_capability = vfio_pci_dma_fault_add_capability,
+};
+
+#define DMA_FAULT_RING_LENGTH 512
+
+static int vfio_pci_init_dma_fault_region(struct vfio_pci_device *vdev)
+{
+ struct vfio_region_dma_fault *header;
+ size_t size;
+ int ret;
+
+ mutex_init(&vdev->fault_queue_lock);
+
+ /*
+ * We provision 1 page for the header and space for
+ * DMA_FAULT_RING_LENGTH fault records in the ring buffer.
+ */
+ size = ALIGN(sizeof(struct iommu_fault) *
+ DMA_FAULT_RING_LENGTH, PAGE_SIZE) + PAGE_SIZE;
+
+ vdev->fault_pages = kzalloc(size, GFP_KERNEL);
+ if (!vdev->fault_pages)
+ return -ENOMEM;
+
+ ret = vfio_pci_register_dev_region(vdev,
+ VFIO_REGION_TYPE_NESTED,
+ VFIO_REGION_SUBTYPE_NESTED_DMA_FAULT,
+ &vfio_pci_dma_fault_regops, size,
+ VFIO_REGION_INFO_FLAG_READ | VFIO_REGION_INFO_FLAG_WRITE,
+ vdev->fault_pages);
+ if (ret)
+ goto out;
+
+ header = (struct vfio_region_dma_fault *)vdev->fault_pages;
+ header->entry_size = sizeof(struct iommu_fault);
+ header->nb_entries = DMA_FAULT_RING_LENGTH;
+ header->offset = sizeof(struct vfio_region_dma_fault);
+ return 0;
+out:
+ kfree(vdev->fault_pages);
+ return ret;
+}
+
static int vfio_pci_enable(struct vfio_pci_device *vdev)
{
struct pci_dev *pdev = vdev->pdev;
@@ -358,6 +421,10 @@ static int vfio_pci_enable(struct vfio_pci_device *vdev)
}
}

+ ret = vfio_pci_init_dma_fault_region(vdev);
+ if (ret)
+ goto disable_exit;
+
vfio_pci_probe_mmaps(vdev);

return 0;
@@ -1383,6 +1450,7 @@ static void vfio_pci_remove(struct pci_dev *pdev)

vfio_iommu_group_put(pdev->dev.iommu_group, &pdev->dev);
kfree(vdev->region);
+ kfree(vdev->fault_pages);
mutex_destroy(&vdev->ioeventfds_lock);

if (!disable_idle_d3)
diff --git a/drivers/vfio/pci/vfio_pci_private.h b/drivers/vfio/pci/vfio_pci_private.h
index 8a2c7607d513..a392f50e3a99 100644
--- a/drivers/vfio/pci/vfio_pci_private.h
+++ b/drivers/vfio/pci/vfio_pci_private.h
@@ -119,6 +119,8 @@ struct vfio_pci_device {
int ioeventfds_nr;
struct eventfd_ctx *err_trigger;
struct eventfd_ctx *req_trigger;
+ u8 *fault_pages;
+ struct mutex fault_queue_lock;
struct list_head dummy_resources_list;
struct mutex ioeventfds_lock;
struct list_head ioeventfds_list;
@@ -150,6 +152,14 @@ extern ssize_t vfio_pci_vga_rw(struct vfio_pci_device *vdev, char __user *buf,
extern long vfio_pci_ioeventfd(struct vfio_pci_device *vdev, loff_t offset,
uint64_t data, int count, int fd);

+struct vfio_pci_fault_abi {
+ u32 entry_size;
+};
+
+extern size_t vfio_pci_dma_fault_rw(struct vfio_pci_device *vdev,
+ char __user *buf, size_t count,
+ loff_t *ppos, bool iswrite);
+
extern int vfio_pci_init_perm_bits(void);
extern void vfio_pci_uninit_perm_bits(void);

diff --git a/drivers/vfio/pci/vfio_pci_rdwr.c b/drivers/vfio/pci/vfio_pci_rdwr.c
index a87992892a9f..4004ab8cad0e 100644
--- a/drivers/vfio/pci/vfio_pci_rdwr.c
+++ b/drivers/vfio/pci/vfio_pci_rdwr.c
@@ -274,6 +274,51 @@ ssize_t vfio_pci_vga_rw(struct vfio_pci_device *vdev, char __user *buf,
return done;
}

+size_t vfio_pci_dma_fault_rw(struct vfio_pci_device *vdev, char __user *buf,
+ size_t count, loff_t *ppos, bool iswrite)
+{
+ unsigned int i = VFIO_PCI_OFFSET_TO_INDEX(*ppos) - VFIO_PCI_NUM_REGIONS;
+ loff_t pos = *ppos & VFIO_PCI_OFFSET_MASK;
+ void *base = vdev->region[i].data;
+ int ret = -EFAULT;
+
+ if (pos >= vdev->region[i].size)
+ return -EINVAL;
+
+ count = min(count, (size_t)(vdev->region[i].size - pos));
+
+ mutex_lock(&vdev->fault_queue_lock);
+
+ if (iswrite) {
+ struct vfio_region_dma_fault *header =
+ (struct vfio_region_dma_fault *)base;
+ u32 new_tail;
+
+ if (pos != 0 || count != 4) {
+ ret = -EINVAL;
+ goto unlock;
+ }
+
+ if (copy_from_user((void *)&new_tail, buf, count))
+ goto unlock;
+
+ if (new_tail > header->nb_entries) {
+ ret = -EINVAL;
+ goto unlock;
+ }
+ header->tail = new_tail;
+ } else {
+ if (copy_to_user(buf, base + pos, count))
+ goto unlock;
+ }
+ *ppos += count;
+ ret = count;
+unlock:
+ mutex_unlock(&vdev->fault_queue_lock);
+ return ret;
+}
+
+
static int vfio_pci_ioeventfd_handler(void *opaque, void *unused)
{
struct vfio_pci_ioeventfd *ioeventfd = opaque;
diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 9f2429eb1958..40d770f80e3d 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -330,6 +330,9 @@ struct vfio_region_info_cap_type {
/* sub-types for VFIO_REGION_TYPE_GFX */
#define VFIO_REGION_SUBTYPE_GFX_EDID (1)

+#define VFIO_REGION_TYPE_NESTED (2)
+#define VFIO_REGION_SUBTYPE_NESTED_DMA_FAULT (1)
+
/**
* struct vfio_region_gfx_edid - EDID region layout.
*
@@ -708,6 +711,38 @@ struct vfio_device_ioeventfd {

#define VFIO_DEVICE_IOEVENTFD _IO(VFIO_TYPE, VFIO_BASE + 16)

+
+/*
+ * Capability exposed by the DMA fault region
+ * @version: ABI version
+ */
+#define VFIO_REGION_INFO_CAP_DMA_FAULT 6
+
+struct vfio_region_info_cap_fault {
+ struct vfio_info_cap_header header;
+ __u32 version;
+};
+
+/*
+ * DMA Fault Region Layout
+ * @tail: index relative to the start of the ring buffer at which the
+ * consumer finds the next item in the buffer
+ * @entry_size: fault ring buffer entry size in bytes
+ * @nb_entries: max capacity of the fault ring buffer
+ * @offset: ring buffer offset relative to the start of the region
+ * @head: index relative to the start of the ring buffer at which the
+ * producer (kernel) inserts items into the buffers
+ */
+struct vfio_region_dma_fault {
+ /* Write-Only */
+ __u32 tail;
+ /* Read-Only */
+ __u32 entry_size;
+ __u32 nb_entries;
+ __u32 offset;
+ __u32 head;
+};
+
/* -------- API for Type1 VFIO IOMMU -------- */

/**
--
2.20.1

2020-03-20 16:21:48

by Eric Auger

Subject: [PATCH v10 08/11] vfio: Add new IRQ for DMA fault reporting

Add a new IRQ type/subtype to get notification on nested
stage DMA faults.

Signed-off-by: Eric Auger <[email protected]>
---
include/uapi/linux/vfio.h | 3 +++
1 file changed, 3 insertions(+)

diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index f0fd26d058c9..73a6bf072b10 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -480,6 +480,9 @@ struct vfio_irq_info_cap_type {
__u32 subtype; /* type specific */
};

+#define VFIO_IRQ_TYPE_NESTED (1)
+#define VFIO_IRQ_SUBTYPE_DMA_FAULT (1)
+
/**
* VFIO_DEVICE_SET_IRQS - _IOW(VFIO_TYPE, VFIO_BASE + 10, struct vfio_irq_set)
*
--
2.20.1

2020-03-20 16:22:03

by Eric Auger

Subject: [PATCH v10 09/11] vfio/pci: Add framework for custom interrupt indices

Implement IRQ capability chain infrastructure. All interrupt
indexes beyond VFIO_PCI_NUM_IRQS are handled as extended
interrupts. They are registered with a specific type/subtype
and supported flags.
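
A sketch of how userspace could discover such an extended index (the
helper name is hypothetical; it relies on the capability chain support
added to VFIO_DEVICE_GET_IRQ_INFO by this series):

    /* return the IRQ index matching (type, subtype), or -1 */
    static int vfio_find_ext_irq(int device, __u32 type, __u32 subtype)
    {
        struct vfio_device_info dev = { .argsz = sizeof(dev) };
        int i;

        if (ioctl(device, VFIO_DEVICE_GET_DEVICE_INFO, &dev))
            return -1;

        for (i = VFIO_PCI_NUM_IRQS; i < dev.num_irqs; i++) {
            char buf[256] = { 0 };  /* vfio_irq_info + capability chain */
            struct vfio_irq_info *info = (struct vfio_irq_info *)buf;
            struct vfio_info_cap_header *hdr;

            info->argsz = sizeof(buf);
            info->index = i;
            if (ioctl(device, VFIO_DEVICE_GET_IRQ_INFO, info) ||
                !(info->flags & VFIO_IRQ_INFO_FLAG_CAPS) || !info->cap_offset)
                continue;

            for (hdr = (void *)(buf + info->cap_offset); hdr;
                 hdr = hdr->next ? (void *)(buf + hdr->next) : NULL) {
                struct vfio_irq_info_cap_type *cap = (void *)hdr;

                if (hdr->id == VFIO_IRQ_INFO_CAP_TYPE &&
                    cap->type == type && cap->subtype == subtype)
                    return i;
            }
        }
        return -1;
    }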

Signed-off-by: Eric Auger <[email protected]>
---
drivers/vfio/pci/vfio_pci.c | 100 +++++++++++++++++++++++-----
drivers/vfio/pci/vfio_pci_intrs.c | 62 +++++++++++++++++
drivers/vfio/pci/vfio_pci_private.h | 14 ++++
3 files changed, 158 insertions(+), 18 deletions(-)

diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
index 3c99f6f3825b..ca13067e4718 100644
--- a/drivers/vfio/pci/vfio_pci.c
+++ b/drivers/vfio/pci/vfio_pci.c
@@ -543,6 +543,14 @@ static void vfio_pci_disable(struct vfio_pci_device *vdev)
VFIO_IRQ_SET_ACTION_TRIGGER,
vdev->irq_type, 0, 0, NULL);

+ for (i = 0; i < vdev->num_ext_irqs; i++)
+ vfio_pci_set_irqs_ioctl(vdev, VFIO_IRQ_SET_DATA_NONE |
+ VFIO_IRQ_SET_ACTION_TRIGGER,
+ VFIO_PCI_NUM_IRQS + i, 0, 0, NULL);
+ vdev->num_ext_irqs = 0;
+ kfree(vdev->ext_irqs);
+ vdev->ext_irqs = NULL;
+
/* Device closed, don't need mutex here */
list_for_each_entry_safe(ioeventfd, ioeventfd_tmp,
&vdev->ioeventfds_list, next) {
@@ -709,6 +717,9 @@ static int vfio_pci_get_irq_count(struct vfio_pci_device *vdev, int irq_type)
return 1;
} else if (irq_type == VFIO_PCI_REQ_IRQ_INDEX) {
return 1;
+ } else if (irq_type >= VFIO_PCI_NUM_IRQS &&
+ irq_type < VFIO_PCI_NUM_IRQS + vdev->num_ext_irqs) {
+ return 1;
}

return 0;
@@ -878,7 +889,7 @@ static long vfio_pci_ioctl(void *device_data,
info.flags |= VFIO_DEVICE_FLAGS_RESET;

info.num_regions = VFIO_PCI_NUM_REGIONS + vdev->num_regions;
- info.num_irqs = VFIO_PCI_NUM_IRQS;
+ info.num_irqs = VFIO_PCI_NUM_IRQS + vdev->num_ext_irqs;

return copy_to_user((void __user *)arg, &info, minsz) ?
-EFAULT : 0;
@@ -1033,36 +1044,88 @@ static long vfio_pci_ioctl(void *device_data,

} else if (cmd == VFIO_DEVICE_GET_IRQ_INFO) {
struct vfio_irq_info info;
+ struct vfio_info_cap caps = { .buf = NULL, .size = 0 };
+ unsigned long capsz;

minsz = offsetofend(struct vfio_irq_info, count);

+ /* For backward compatibility, cannot require this */
+ capsz = offsetofend(struct vfio_irq_info, cap_offset);
+
if (copy_from_user(&info, (void __user *)arg, minsz))
return -EFAULT;

- if (info.argsz < minsz || info.index >= VFIO_PCI_NUM_IRQS)
+ if (info.argsz < minsz ||
+ info.index >= VFIO_PCI_NUM_IRQS + vdev->num_ext_irqs)
return -EINVAL;

- switch (info.index) {
- case VFIO_PCI_INTX_IRQ_INDEX ... VFIO_PCI_MSIX_IRQ_INDEX:
- case VFIO_PCI_REQ_IRQ_INDEX:
- break;
- case VFIO_PCI_ERR_IRQ_INDEX:
- if (pci_is_pcie(vdev->pdev))
- break;
- /* fall through */
- default:
- return -EINVAL;
- }
+ if (info.argsz >= capsz)
+ minsz = capsz;

info.flags = VFIO_IRQ_INFO_EVENTFD;

- info.count = vfio_pci_get_irq_count(vdev, info.index);
-
- if (info.index == VFIO_PCI_INTX_IRQ_INDEX)
+ switch (info.index) {
+ case VFIO_PCI_INTX_IRQ_INDEX:
info.flags |= (VFIO_IRQ_INFO_MASKABLE |
VFIO_IRQ_INFO_AUTOMASKED);
- else
+ break;
+ case VFIO_PCI_MSI_IRQ_INDEX ... VFIO_PCI_MSIX_IRQ_INDEX:
+ case VFIO_PCI_REQ_IRQ_INDEX:
info.flags |= VFIO_IRQ_INFO_NORESIZE;
+ break;
+ case VFIO_PCI_ERR_IRQ_INDEX:
+ info.flags |= VFIO_IRQ_INFO_NORESIZE;
+ if (!pci_is_pcie(vdev->pdev))
+ return -EINVAL;
+ break;
+ /* fall through */
+ default:
+ {
+ struct vfio_irq_info_cap_type cap_type = {
+ .header.id = VFIO_IRQ_INFO_CAP_TYPE,
+ .header.version = 1 };
+ int ret, i;
+
+ if (info.index >= VFIO_PCI_NUM_IRQS +
+ vdev->num_ext_irqs)
+ return -EINVAL;
+ info.index = array_index_nospec(info.index,
+ VFIO_PCI_NUM_IRQS +
+ vdev->num_ext_irqs);
+ i = info.index - VFIO_PCI_NUM_IRQS;
+
+ info.flags = vdev->ext_irqs[i].flags;
+ cap_type.type = vdev->ext_irqs[i].type;
+ cap_type.subtype = vdev->ext_irqs[i].subtype;
+
+ ret = vfio_info_add_capability(&caps,
+ &cap_type.header,
+ sizeof(cap_type));
+ if (ret)
+ return ret;
+ }
+ }
+
+ info.count = vfio_pci_get_irq_count(vdev, info.index);
+
+ if (caps.size) {
+ info.flags |= VFIO_IRQ_INFO_FLAG_CAPS;
+ if (info.argsz < sizeof(info) + caps.size) {
+ info.argsz = sizeof(info) + caps.size;
+ info.cap_offset = 0;
+ } else {
+ vfio_info_cap_shift(&caps, sizeof(info));
+ if (copy_to_user((void __user *)arg +
+ sizeof(info), caps.buf,
+ caps.size)) {
+ kfree(caps.buf);
+ return -EFAULT;
+ }
+ info.cap_offset = sizeof(info);
+ }
+
+ kfree(caps.buf);
+ }

return copy_to_user((void __user *)arg, &info, minsz) ?
-EFAULT : 0;
@@ -1081,7 +1144,8 @@ static long vfio_pci_ioctl(void *device_data,
max = vfio_pci_get_irq_count(vdev, hdr.index);

ret = vfio_set_irqs_validate_and_prepare(&hdr, max,
- VFIO_PCI_NUM_IRQS, &data_size);
+ VFIO_PCI_NUM_IRQS + vdev->num_ext_irqs,
+ &data_size);
if (ret)
return ret;

diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c
index 2056f3f85f59..f15e2cd7ae64 100644
--- a/drivers/vfio/pci/vfio_pci_intrs.c
+++ b/drivers/vfio/pci/vfio_pci_intrs.c
@@ -19,6 +19,7 @@
#include <linux/vfio.h>
#include <linux/wait.h>
#include <linux/slab.h>
+#include <linux/nospec.h>

#include "vfio_pci_private.h"

@@ -619,6 +620,24 @@ static int vfio_pci_set_req_trigger(struct vfio_pci_device *vdev,
count, flags, data);
}

+static int vfio_pci_set_ext_irq_trigger(struct vfio_pci_device *vdev,
+ unsigned int index, unsigned int start,
+ unsigned int count, uint32_t flags,
+ void *data)
+{
+ int i;
+
+ if (start != 0 || count > 1)
+ return -EINVAL;
+
+ index = array_index_nospec(index,
+ VFIO_PCI_NUM_IRQS + vdev->num_ext_irqs);
+ i = index - VFIO_PCI_NUM_IRQS;
+
+ return vfio_pci_set_ctx_trigger_single(&vdev->ext_irqs[i].trigger,
+ count, flags, data);
+}
+
int vfio_pci_set_irqs_ioctl(struct vfio_pci_device *vdev, uint32_t flags,
unsigned index, unsigned start, unsigned count,
void *data)
@@ -668,6 +687,13 @@ int vfio_pci_set_irqs_ioctl(struct vfio_pci_device *vdev, uint32_t flags,
break;
}
break;
+ default:
+ switch (flags & VFIO_IRQ_SET_ACTION_TYPE_MASK) {
+ case VFIO_IRQ_SET_ACTION_TRIGGER:
+ func = vfio_pci_set_ext_irq_trigger;
+ break;
+ }
+ break;
}

if (!func)
@@ -675,3 +701,39 @@ int vfio_pci_set_irqs_ioctl(struct vfio_pci_device *vdev, uint32_t flags,

return func(vdev, index, start, count, flags, data);
}
+
+int vfio_pci_get_ext_irq_index(struct vfio_pci_device *vdev,
+ unsigned int type, unsigned int subtype)
+{
+ int i;
+
+ for (i = 0; i < vdev->num_ext_irqs; i++) {
+ if (vdev->ext_irqs[i].type == type &&
+ vdev->ext_irqs[i].subtype == subtype) {
+ return i;
+ }
+ }
+ return -EINVAL;
+}
+
+int vfio_pci_register_irq(struct vfio_pci_device *vdev,
+ unsigned int type, unsigned int subtype,
+ u32 flags)
+{
+ struct vfio_ext_irq *ext_irqs;
+
+ ext_irqs = krealloc(vdev->ext_irqs,
+ (vdev->num_ext_irqs + 1) * sizeof(*ext_irqs),
+ GFP_KERNEL);
+ if (!ext_irqs)
+ return -ENOMEM;
+
+ vdev->ext_irqs = ext_irqs;
+
+ vdev->ext_irqs[vdev->num_ext_irqs].type = type;
+ vdev->ext_irqs[vdev->num_ext_irqs].subtype = subtype;
+ vdev->ext_irqs[vdev->num_ext_irqs].flags = flags;
+ vdev->ext_irqs[vdev->num_ext_irqs].trigger = NULL;
+ vdev->num_ext_irqs++;
+ return 0;
+}
diff --git a/drivers/vfio/pci/vfio_pci_private.h b/drivers/vfio/pci/vfio_pci_private.h
index a392f50e3a99..e296a45c136e 100644
--- a/drivers/vfio/pci/vfio_pci_private.h
+++ b/drivers/vfio/pci/vfio_pci_private.h
@@ -73,6 +73,13 @@ struct vfio_pci_region {
u32 flags;
};

+struct vfio_ext_irq {
+ u32 type;
+ u32 subtype;
+ u32 flags;
+ struct eventfd_ctx *trigger;
+};
+
struct vfio_pci_dummy_resource {
struct resource resource;
int index;
@@ -96,6 +103,8 @@ struct vfio_pci_device {
struct vfio_pci_irq_ctx *ctx;
int num_ctx;
int irq_type;
+ struct vfio_ext_irq *ext_irqs;
+ int num_ext_irqs;
int num_regions;
struct vfio_pci_region *region;
u8 msi_qmax;
@@ -134,6 +143,11 @@ struct vfio_pci_device {

extern void vfio_pci_intx_mask(struct vfio_pci_device *vdev);
extern void vfio_pci_intx_unmask(struct vfio_pci_device *vdev);
+extern int vfio_pci_register_irq(struct vfio_pci_device *vdev,
+ unsigned int type, unsigned int subtype,
+ u32 flags);
+extern int vfio_pci_get_ext_irq_index(struct vfio_pci_device *vdev,
+ unsigned int type, unsigned int subtype);

extern int vfio_pci_set_irqs_ioctl(struct vfio_pci_device *vdev,
uint32_t flags, unsigned index,
--
2.20.1

2020-03-20 16:22:07

by Eric Auger

Subject: [PATCH v10 10/11] vfio/pci: Register and allow DMA FAULT IRQ signaling

Register the VFIO_IRQ_TYPE_NESTED/VFIO_IRQ_SUBTYPE_DMA_FAULT
IRQ that allows a nested mode DMA fault to be signaled.
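
A minimal sketch of how userspace could attach an eventfd to this IRQ
(the index is the extended one discovered through the IRQ capability
chain; variable names and includes are omitted/illustrative):

    int efd = eventfd(0, EFD_CLOEXEC);
    char buf[sizeof(struct vfio_irq_set) + sizeof(int)];
    struct vfio_irq_set *irq_set = (struct vfio_irq_set *)buf;

    irq_set->argsz = sizeof(buf);
    irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
    irq_set->index = dma_fault_irq_index;  /* >= VFIO_PCI_NUM_IRQS */
    irq_set->start = 0;
    irq_set->count = 1;
    memcpy(&irq_set->data, &efd, sizeof(int));

    ioctl(device, VFIO_DEVICE_SET_IRQS, irq_set);
    /* poll()/read() on efd then tells when new fault records are queued */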

Signed-off-by: Eric Auger <[email protected]>
---
drivers/vfio/pci/vfio_pci.c | 22 ++++++++++++++++++++--
1 file changed, 20 insertions(+), 2 deletions(-)

diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
index ca13067e4718..70e3a31da9f0 100644
--- a/drivers/vfio/pci/vfio_pci.c
+++ b/drivers/vfio/pci/vfio_pci.c
@@ -346,7 +346,7 @@ int vfio_pci_iommu_dev_fault_handler(struct iommu_fault *fault, void *data)
struct iommu_fault *new =
(struct iommu_fault *)(vdev->fault_pages + reg->offset +
reg->head * reg->entry_size);
- int head, tail, size;
+ int head, tail, size, ext_irq_index;
int ret = 0;

if (fault->type != IOMMU_FAULT_DMA_UNRECOV)
@@ -367,7 +367,19 @@ int vfio_pci_iommu_dev_fault_handler(struct iommu_fault *fault, void *data)
reg->head = (head + 1) % size;
unlock:
mutex_unlock(&vdev->fault_queue_lock);
- return ret;
+ if (ret)
+ return ret;
+
+ ext_irq_index = vfio_pci_get_ext_irq_index(vdev, VFIO_IRQ_TYPE_NESTED,
+ VFIO_IRQ_SUBTYPE_DMA_FAULT);
+ if (ext_irq_index < 0)
+ return -EINVAL;
+
+ mutex_lock(&vdev->igate);
+ if (vdev->ext_irqs[ext_irq_index].trigger)
+ eventfd_signal(vdev->ext_irqs[ext_irq_index].trigger, 1);
+ mutex_unlock(&vdev->igate);
+ return 0;
}

#define DMA_FAULT_RING_LENGTH 512
@@ -520,6 +532,12 @@ static int vfio_pci_enable(struct vfio_pci_device *vdev)
if (ret)
goto disable_exit;

+ ret = vfio_pci_register_irq(vdev, VFIO_IRQ_TYPE_NESTED,
+ VFIO_IRQ_SUBTYPE_DMA_FAULT,
+ VFIO_IRQ_INFO_EVENTFD);
+ if (ret)
+ goto disable_exit;
+
vfio_pci_probe_mmaps(vdev);

return 0;
--
2.20.1

2020-03-20 16:22:19

by Eric Auger

Subject: [PATCH v10 11/11] vfio: Document nested stage control

The VFIO API was enhanced to support nested stage control: a bunch of
new ioctls, one DMA FAULT region and an associated specific IRQ.

Let's document the process to follow to set up nested mode.

Signed-off-by: Eric Auger <[email protected]>

---

v8 -> v9:
- new names for SET_MSI_BINDING and SET_PASID_TABLE
- new layout for the DMA FAULT memory region and specific IRQ

v2 -> v3:
- document the new fault API

v1 -> v2:
- use the new ioctl names
- add doc related to fault handling
---
Documentation/driver-api/vfio.rst | 77 +++++++++++++++++++++++++++++++
1 file changed, 77 insertions(+)

diff --git a/Documentation/driver-api/vfio.rst b/Documentation/driver-api/vfio.rst
index f1a4d3c3ba0b..563ebcec9224 100644
--- a/Documentation/driver-api/vfio.rst
+++ b/Documentation/driver-api/vfio.rst
@@ -239,6 +239,83 @@ group and can access them as follows::
/* Gratuitous device reset and go... */
ioctl(device, VFIO_DEVICE_RESET);

+IOMMU Dual Stage Control
+------------------------
+
+Some IOMMUs support 2 stages/levels of translation. "Stage" corresponds to
+the ARM terminology while "level" corresponds to Intel's VTD terminology. In
+the following text we use either without distinction.
+
+This is useful when the guest is exposed with a virtual IOMMU and some
+devices are assigned to the guest through VFIO. Then the guest OS can use
+stage 1 (IOVA -> GPA), while the hypervisor uses stage 2 for VM isolation
+(GPA -> HPA).
+
+The guest gets ownership of the stage 1 page tables and also owns stage 1
+configuration structures. The hypervisor owns the root configuration structure
+(for security reason), including stage 2 configuration. This works as long
+configuration structures and page table format are compatible between the
+virtual IOMMU and the physical IOMMU.
+
+Assuming the HW supports it, this nested mode is selected by choosing the
+VFIO_TYPE1_NESTING_IOMMU type through:
+
+ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_NESTING_IOMMU);
+
+This forces the hypervisor to use the stage 2, leaving stage 1 available for
+guest usage.
+
+Once groups are attached to the container, the guest stage 1 translation
+configuration data can be passed to VFIO by using
+
+ioctl(container, VFIO_IOMMU_SET_PASID_TABLE, &pasid_table_info);
+
+This allows to combine the guest stage 1 configuration structure along with
+the hypervisor stage 2 configuration structure. Stage 1 configuration
+structures are dependent on the IOMMU type.
+
+As the stage 1 translation is fully delegated to the HW, translation faults
+encountered during the translation process need to be propagated up to
+the virtualizer and re-injected into the guest.
+
+The userspace must be prepared to receive faults. The VFIO-PCI device
+exposes one dedicated DMA FAULT region: it contains a ring buffer and
+its header that allows to manage the head/tail indices. The region is
+identified by the following index/subindex:
+- VFIO_REGION_TYPE_NESTED/VFIO_REGION_SUBTYPE_NESTED_DMA_FAULT
+
+The DMA FAULT region exposes a VFIO_REGION_INFO_CAP_PRODUCER_FAULT
+region capability that allows the userspace to retrieve the ABI version
+of the fault records filled by the host.
+
+On top of that region, the userspace can be notified whenever a fault
+occurs at the physical level. It can use the VFIO_IRQ_TYPE_NESTED/
+VFIO_IRQ_SUBTYPE_DMA_FAULT specific IRQ to attach the eventfd to be
+signalled.
+
+The ring buffer containing the fault records can be mmapped. When
+the userspace consumes a fault in the queue, it should increment
+the consumer index to allow new fault records to replace the used ones.
+
+The queue size and the entry size can be retrieved in the header.
+The tail index should never overshoot the producer index as in any
+other circular buffer scheme. Also it must be less than the queue size
+otherwise the change fails.
+
+When the guest invalidates stage 1 related caches, invalidations must be
+forwarded to the host through
+ioctl(container, VFIO_IOMMU_CACHE_INVALIDATE, &inv_data);
+Those invalidations can happen at various granularity levels, page, context, ...
+
+The ARM SMMU specification introduces another challenge: MSIs are translated by
+both the virtual SMMU and the physical SMMU. To build a nested mapping for the
+IOVA programmed into the assigned device, the guest needs to pass its IOVA/MSI
+doorbell GPA binding to the host. Then the hypervisor can build a nested stage 2
+binding eventually translating into the physical MSI doorbell.
+
+This is achieved by calling
+ioctl(container, VFIO_IOMMU_SET_MSI_BINDING, &guest_binding);
+
VFIO User API
-------------------------------------------------------------------------------

--
2.20.1

2020-03-20 16:22:44

by Eric Auger

Subject: [PATCH v10 02/11] vfio: VFIO_IOMMU_CACHE_INVALIDATE

From: "Liu, Yi L" <[email protected]>

When the guest "owns" the stage 1 translation structures, the host
IOMMU driver has no knowledge of caching structure updates unless
the guest invalidation requests are trapped and passed down to the
host.

This patch adds the VFIO_IOMMU_CACHE_INVALIDATE ioctl, which aims
at propagating guest stage 1 IOMMU cache invalidations to the host.
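
A minimal userspace sketch (the container fd is illustrative; the
layout of struct iommu_cache_invalidate_info is defined by the
companion IOMMU series and is only hinted at here):

    struct vfio_iommu_type1_cache_invalidate inv = {
        .argsz = sizeof(inv),
        .flags = 0,   /* no flags defined, must be zero */
        /* .info = decoded guest invalidation (granularity, ASID, range, ...) */
    };

    if (ioctl(container, VFIO_IOMMU_CACHE_INVALIDATE, &inv))
        perror("VFIO_IOMMU_CACHE_INVALIDATE");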

Signed-off-by: Liu, Yi L <[email protected]>
Signed-off-by: Eric Auger <[email protected]>

---

v8 -> v9:
- change the ioctl ID

v6 -> v7:
- Use iommu_capsule struct
- renamed vfio_iommu_for_each_dev into vfio_iommu_lookup_dev
due to checkpatch error related to for_each_dev suffix

v2 -> v3:
- introduce vfio_iommu_for_each_dev back in this patch

v1 -> v2:
- s/TLB/CACHE
- remove vfio_iommu_task usage
- commit message rewording
---
drivers/vfio/vfio_iommu_type1.c | 55 +++++++++++++++++++++++++++++++++
include/uapi/linux/vfio.h | 13 ++++++++
2 files changed, 68 insertions(+)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index bfacbd876ee1..04c6625098bb 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -124,6 +124,34 @@ struct vfio_regions {
#define IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu) \
(!list_empty(&iommu->domain_list))

+struct domain_capsule {
+ struct iommu_domain *domain;
+ void *data;
+};
+
+/* iommu->lock must be held */
+static int
+vfio_iommu_lookup_dev(struct vfio_iommu *iommu,
+ int (*fn)(struct device *dev, void *data),
+ void *data)
+{
+ struct domain_capsule dc = {.data = data};
+ struct vfio_domain *d;
+ struct vfio_group *g;
+ int ret = 0;
+
+ list_for_each_entry(d, &iommu->domain_list, next) {
+ dc.domain = d->domain;
+ list_for_each_entry(g, &d->group_list, next) {
+ ret = iommu_group_for_each_dev(g->iommu_group,
+ &dc, fn);
+ if (ret)
+ break;
+ }
+ }
+ return ret;
+}
+
static int put_pfn(unsigned long pfn, int prot);

/*
@@ -2209,6 +2237,15 @@ vfio_attach_pasid_table(struct vfio_iommu *iommu,
return ret;
}

+static int vfio_cache_inv_fn(struct device *dev, void *data)
+{
+ struct domain_capsule *dc = (struct domain_capsule *)data;
+ struct vfio_iommu_type1_cache_invalidate *ustruct =
+ (struct vfio_iommu_type1_cache_invalidate *)dc->data;
+
+ return iommu_cache_invalidate(dc->domain, dev, &ustruct->info);
+}
+
static long vfio_iommu_type1_ioctl(void *iommu_data,
unsigned int cmd, unsigned long arg)
{
@@ -2332,6 +2369,24 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
return 0;
} else
return -EINVAL;
+ } else if (cmd == VFIO_IOMMU_CACHE_INVALIDATE) {
+ struct vfio_iommu_type1_cache_invalidate ustruct;
+ int ret;
+
+ minsz = offsetofend(struct vfio_iommu_type1_cache_invalidate,
+ info);
+
+ if (copy_from_user(&ustruct, (void __user *)arg, minsz))
+ return -EFAULT;
+
+ if (ustruct.argsz < minsz || ustruct.flags)
+ return -EINVAL;
+
+ mutex_lock(&iommu->lock);
+ ret = vfio_iommu_lookup_dev(iommu, vfio_cache_inv_fn,
+ &ustruct);
+ mutex_unlock(&iommu->lock);
+ return ret;
}

return -ENOTTY;
diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index e032a1fe6ed9..98212c1711e7 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -813,6 +813,19 @@ struct vfio_iommu_type1_set_pasid_table {

#define VFIO_IOMMU_SET_PASID_TABLE _IO(VFIO_TYPE, VFIO_BASE + 22)

+/**
+ * VFIO_IOMMU_CACHE_INVALIDATE - _IOWR(VFIO_TYPE, VFIO_BASE + 23,
+ * struct vfio_iommu_type1_cache_invalidate)
+ *
+ * Propagate guest IOMMU cache invalidation to the host.
+ */
+struct vfio_iommu_type1_cache_invalidate {
+ __u32 argsz;
+ __u32 flags;
+ struct iommu_cache_invalidate_info info;
+};
+#define VFIO_IOMMU_CACHE_INVALIDATE _IO(VFIO_TYPE, VFIO_BASE + 23)
+
/* -------- Additional API for SPAPR TCE (Server POWERPC) IOMMU -------- */

/*
--
2.20.1

2020-03-20 16:22:46

by Eric Auger

Subject: [PATCH v10 06/11] vfio/pci: Allow to mmap the fault queue

The DMA FAULT region contains the fault ring buffer.
There is a benefit in letting userspace mmap this area.
Expose this mmappable area through a sparse mmap entry
and implement the mmap operation.
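
A minimal sketch of the resulting userspace mapping (assuming 'device'
is the VFIO device fd and 'reg' the vfio_region_info of the DMA FAULT
region, both illustrative):

    struct vfio_region_dma_fault header;
    void *ring;

    /* the header (head/tail indices, sizes) is still accessed by pread/pwrite */
    pread(device, &header, sizeof(header), reg.offset);

    /* only the ring itself, which now starts at PAGE_SIZE, is mmappable */
    ring = mmap(NULL, reg.size - PAGE_SIZE, PROT_READ, MAP_SHARED,
                device, reg.offset + PAGE_SIZE);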

Signed-off-by: Eric Auger <[email protected]>

---

v8 -> v9:
- remove unused index local variable in vfio_pci_fault_mmap
---
drivers/vfio/pci/vfio_pci.c | 61 +++++++++++++++++++++++++++++++++++--
1 file changed, 58 insertions(+), 3 deletions(-)

diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
index 69595c240baf..3c99f6f3825b 100644
--- a/drivers/vfio/pci/vfio_pci.c
+++ b/drivers/vfio/pci/vfio_pci.c
@@ -266,21 +266,75 @@ static void vfio_pci_dma_fault_release(struct vfio_pci_device *vdev,
{
}

+static int vfio_pci_dma_fault_mmap(struct vfio_pci_device *vdev,
+ struct vfio_pci_region *region,
+ struct vm_area_struct *vma)
+{
+ u64 phys_len, req_len, pgoff, req_start;
+ unsigned long long addr;
+ unsigned int ret;
+
+ phys_len = region->size;
+
+ req_len = vma->vm_end - vma->vm_start;
+ pgoff = vma->vm_pgoff &
+ ((1U << (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT)) - 1);
+ req_start = pgoff << PAGE_SHIFT;
+
+ /* only the second page of the producer fault region is mmappable */
+ if (req_start < PAGE_SIZE)
+ return -EINVAL;
+
+ if (req_start + req_len > phys_len)
+ return -EINVAL;
+
+ addr = virt_to_phys(vdev->fault_pages);
+ vma->vm_private_data = vdev;
+ vma->vm_pgoff = (addr >> PAGE_SHIFT) + pgoff;
+
+ ret = remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
+ req_len, vma->vm_page_prot);
+ return ret;
+}
+
static int vfio_pci_dma_fault_add_capability(struct vfio_pci_device *vdev,
struct vfio_pci_region *region,
struct vfio_info_cap *caps)
{
+ struct vfio_region_info_cap_sparse_mmap *sparse = NULL;
struct vfio_region_info_cap_fault cap = {
.header.id = VFIO_REGION_INFO_CAP_DMA_FAULT,
.header.version = 1,
.version = 1,
};
- return vfio_info_add_capability(caps, &cap.header, sizeof(cap));
+ size_t size = sizeof(*sparse) + sizeof(*sparse->areas);
+ int ret;
+
+ ret = vfio_info_add_capability(caps, &cap.header, sizeof(cap));
+ if (ret)
+ return ret;
+
+ sparse = kzalloc(size, GFP_KERNEL);
+ if (!sparse)
+ return -ENOMEM;
+
+ sparse->header.id = VFIO_REGION_INFO_CAP_SPARSE_MMAP;
+ sparse->header.version = 1;
+ sparse->nr_areas = 1;
+ sparse->areas[0].offset = PAGE_SIZE;
+ sparse->areas[0].size = region->size - PAGE_SIZE;
+
+ ret = vfio_info_add_capability(caps, &sparse->header, size);
+ if (ret)
+ kfree(sparse);
+
+ return ret;
}

static const struct vfio_pci_regops vfio_pci_dma_fault_regops = {
.rw = vfio_pci_dma_fault_rw,
.release = vfio_pci_dma_fault_release,
+ .mmap = vfio_pci_dma_fault_mmap,
.add_capability = vfio_pci_dma_fault_add_capability,
};

@@ -341,7 +395,8 @@ static int vfio_pci_init_dma_fault_region(struct vfio_pci_device *vdev)
VFIO_REGION_TYPE_NESTED,
VFIO_REGION_SUBTYPE_NESTED_DMA_FAULT,
&vfio_pci_dma_fault_regops, size,
- VFIO_REGION_INFO_FLAG_READ | VFIO_REGION_INFO_FLAG_WRITE,
+ VFIO_REGION_INFO_FLAG_READ | VFIO_REGION_INFO_FLAG_WRITE |
+ VFIO_REGION_INFO_FLAG_MMAP,
vdev->fault_pages);
if (ret)
goto out;
@@ -349,7 +404,7 @@ static int vfio_pci_init_dma_fault_region(struct vfio_pci_device *vdev)
header = (struct vfio_region_dma_fault *)vdev->fault_pages;
header->entry_size = sizeof(struct iommu_fault);
header->nb_entries = DMA_FAULT_RING_LENGTH;
- header->offset = sizeof(struct vfio_region_dma_fault);
+ header->offset = PAGE_SIZE;

ret = iommu_register_device_fault_handler(&vdev->pdev->dev,
vfio_pci_iommu_dev_fault_handler,
--
2.20.1

2020-03-20 16:23:21

by Eric Auger

Subject: [PATCH v10 07/11] vfio: Use capability chains to handle device specific irq

From: Tina Zhang <[email protected]>

Caps the number of irqs with fixed indexes and uses capability chains
to chain device specific irqs.

Signed-off-by: Tina Zhang <[email protected]>
Signed-off-by: Eric Auger <[email protected]>
[Eric: Put cap_offset at the end of the vfio_irq_info struct,
remove GFX IRQ at the moment and remove any reference to this latter
in the commit message]

---
---
include/uapi/linux/vfio.h | 19 ++++++++++++++++++-
1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 40d770f80e3d..f0fd26d058c9 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -459,11 +459,27 @@ struct vfio_irq_info {
#define VFIO_IRQ_INFO_MASKABLE (1 << 1)
#define VFIO_IRQ_INFO_AUTOMASKED (1 << 2)
#define VFIO_IRQ_INFO_NORESIZE (1 << 3)
+#define VFIO_IRQ_INFO_FLAG_CAPS (1 << 4) /* Info supports caps */
__u32 index; /* IRQ index */
__u32 count; /* Number of IRQs within this index */
+ __u32 cap_offset; /* Offset within info struct of first cap */
};
#define VFIO_DEVICE_GET_IRQ_INFO _IO(VFIO_TYPE, VFIO_BASE + 9)

+/*
+ * The irq type capability allows IRQs unique to a specific device or
+ * class of devices to be exposed.
+ *
+ * The structures below define version 1 of this capability.
+ */
+#define VFIO_IRQ_INFO_CAP_TYPE 3
+
+struct vfio_irq_info_cap_type {
+ struct vfio_info_cap_header header;
+ __u32 type; /* global per bus driver */
+ __u32 subtype; /* type specific */
+};
+
/**
* VFIO_DEVICE_SET_IRQS - _IOW(VFIO_TYPE, VFIO_BASE + 10, struct vfio_irq_set)
*
@@ -565,7 +581,8 @@ enum {
VFIO_PCI_MSIX_IRQ_INDEX,
VFIO_PCI_ERR_IRQ_INDEX,
VFIO_PCI_REQ_IRQ_INDEX,
- VFIO_PCI_NUM_IRQS
+ VFIO_PCI_NUM_IRQS = 5 /* Fixed user ABI, IRQ indexes >=5 use */
+ /* device specific cap to define content */
};

/*
--
2.20.1

2020-03-21 05:20:23

by kernel test robot

Subject: Re: [PATCH v10 01/11] vfio: VFIO_IOMMU_SET_PASID_TABLE

Hi Eric,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on vfio/next]
[also build test ERROR on v5.6-rc6 next-20200320]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url: https://github.com/0day-ci/linux/commits/Eric-Auger/SMMUv3-Nested-Stage-Setup-VFIO-part/20200321-040935
base: https://github.com/awilliam/linux-vfio.git next
config: arm64-randconfig-a001-20200321 (attached as .config)
compiler: aarch64-linux-gcc (GCC) 9.2.0
reproduce:
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# save the attached .config to linux build tree
GCC_VERSION=9.2.0 make.cross ARCH=arm64

If you fix the issue, kindly add following tag
Reported-by: kbuild test robot <[email protected]>

All errors (new ones prefixed by >>):

In file included from include/linux/vfio.h:16,
from drivers/vfio/vfio.c:32:
>> include/uapi/linux/vfio.h:811:34: error: field 'config' has incomplete type
811 | struct iommu_pasid_table_config config; /* used on SET */
| ^~~~~~
--
In file included from include/linux/vfio.h:16,
from drivers/vfio/vfio_iommu_type1.c:35:
>> include/uapi/linux/vfio.h:811:34: error: field 'config' has incomplete type
811 | struct iommu_pasid_table_config config; /* used on SET */
| ^~~~~~
drivers/vfio/vfio_iommu_type1.c: In function 'vfio_detach_pasid_table':
>> drivers/vfio/vfio_iommu_type1.c:2204:3: error: implicit declaration of function 'iommu_detach_pasid_table'; did you mean 'vfio_detach_pasid_table'? [-Werror=implicit-function-declaration]
2204 | iommu_detach_pasid_table(d->domain);
| ^~~~~~~~~~~~~~~~~~~~~~~~
| vfio_detach_pasid_table
drivers/vfio/vfio_iommu_type1.c: In function 'vfio_attach_pasid_table':
>> drivers/vfio/vfio_iommu_type1.c:2219:9: error: implicit declaration of function 'iommu_attach_pasid_table'; did you mean 'vfio_attach_pasid_table'? [-Werror=implicit-function-declaration]
2219 | ret = iommu_attach_pasid_table(d->domain, &ustruct->config);
| ^~~~~~~~~~~~~~~~~~~~~~~~
| vfio_attach_pasid_table
cc1: some warnings being treated as errors

vim +/config +811 include/uapi/linux/vfio.h

797
798 /**
799 * VFIO_IOMMU_SET_PASID_TABLE - _IOWR(VFIO_TYPE, VFIO_BASE + 22,
800 * struct vfio_iommu_type1_set_pasid_table)
801 *
802 * The SET operation passes a PASID table to the host while the
803 * UNSET operation detaches the one currently programmed. Setting
804 * a table while another is already programmed replaces the old table.
805 */
806 struct vfio_iommu_type1_set_pasid_table {
807 __u32 argsz;
808 __u32 flags;
809 #define VFIO_PASID_TABLE_FLAG_SET (1 << 0)
810 #define VFIO_PASID_TABLE_FLAG_UNSET (1 << 1)
> 811 struct iommu_pasid_table_config config; /* used on SET */
812 };
813

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/[email protected]



2020-04-01 13:20:04

by Yi Liu

Subject: RE: [PATCH v10 04/11] vfio/pci: Add VFIO_REGION_TYPE_NESTED region type

Hi Eric,

Just curious about your plan on this patch, I just heard my colleague would like
to reference the functions from this patch in his dsa driver work.

Regards,
Yi Liu

> From: Eric Auger <[email protected]>
> Sent: Saturday, March 21, 2020 12:19 AM
> To: [email protected]; [email protected]; [email protected]
> foundation.org; [email protected]; [email protected];
> [email protected]; [email protected]; [email protected];
> [email protected]; Liu, Yi L <[email protected]>; jean-
> [email protected]; [email protected]; [email protected]
> Cc: [email protected]; [email protected]; [email protected]
> Subject: [PATCH v10 04/11] vfio/pci: Add VFIO_REGION_TYPE_NESTED region type
>
> Add a new specific DMA_FAULT region aiming to exposed nested mode
> translation faults.
>
> The region has a ring buffer that contains the actual fault
> records plus a header allowing to handle it (tail/head indices,
> max capacity, entry size). At the moment the region is dimensionned
> for 512 fault records.
>
> Signed-off-by: Eric Auger <[email protected]>
>
> ---
>
> v8 -> v9:
> - Use a single region instead of a prod/cons region
>
> v4 -> v5
> - check cons is not null in vfio_pci_check_cons_fault
>
> v3 -> v4:
> - use 2 separate regions, respectively in read and write modes
> - add the version capability
> ---
> drivers/vfio/pci/vfio_pci.c | 68 +++++++++++++++++++++++++++++
> drivers/vfio/pci/vfio_pci_private.h | 10 +++++
> drivers/vfio/pci/vfio_pci_rdwr.c | 45 +++++++++++++++++++
> include/uapi/linux/vfio.h | 35 +++++++++++++++
> 4 files changed, 158 insertions(+)
>
> diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
> index 379a02c36e37..586b89debed5 100644
> --- a/drivers/vfio/pci/vfio_pci.c
> +++ b/drivers/vfio/pci/vfio_pci.c
> @@ -260,6 +260,69 @@ int vfio_pci_set_power_state(struct vfio_pci_device *vdev,
> pci_power_t state)
> return ret;
> }
>
> +static void vfio_pci_dma_fault_release(struct vfio_pci_device *vdev,
> + struct vfio_pci_region *region)
> +{
> +}
> +
> +static int vfio_pci_dma_fault_add_capability(struct vfio_pci_device *vdev,
> + struct vfio_pci_region *region,
> + struct vfio_info_cap *caps)
> +{
> + struct vfio_region_info_cap_fault cap = {
> + .header.id = VFIO_REGION_INFO_CAP_DMA_FAULT,
> + .header.version = 1,
> + .version = 1,
> + };
> + return vfio_info_add_capability(caps, &cap.header, sizeof(cap));
> +}
> +
> +static const struct vfio_pci_regops vfio_pci_dma_fault_regops = {
> + .rw = vfio_pci_dma_fault_rw,
> + .release = vfio_pci_dma_fault_release,
> + .add_capability = vfio_pci_dma_fault_add_capability,
> +};
> +
> +#define DMA_FAULT_RING_LENGTH 512
> +
> +static int vfio_pci_init_dma_fault_region(struct vfio_pci_device *vdev)
> +{
> + struct vfio_region_dma_fault *header;
> + size_t size;
> + int ret;
> +
> + mutex_init(&vdev->fault_queue_lock);
> +
> + /*
> + * We provision 1 page for the header and space for
> + * DMA_FAULT_RING_LENGTH fault records in the ring buffer.
> + */
> + size = ALIGN(sizeof(struct iommu_fault) *
> + DMA_FAULT_RING_LENGTH, PAGE_SIZE) + PAGE_SIZE;
> +
> + vdev->fault_pages = kzalloc(size, GFP_KERNEL);
> + if (!vdev->fault_pages)
> + return -ENOMEM;
> +
> + ret = vfio_pci_register_dev_region(vdev,
> + VFIO_REGION_TYPE_NESTED,
> + VFIO_REGION_SUBTYPE_NESTED_DMA_FAULT,
> + &vfio_pci_dma_fault_regops, size,
> + VFIO_REGION_INFO_FLAG_READ |
> VFIO_REGION_INFO_FLAG_WRITE,
> + vdev->fault_pages);
> + if (ret)
> + goto out;
> +
> + header = (struct vfio_region_dma_fault *)vdev->fault_pages;
> + header->entry_size = sizeof(struct iommu_fault);
> + header->nb_entries = DMA_FAULT_RING_LENGTH;
> + header->offset = sizeof(struct vfio_region_dma_fault);
> + return 0;
> +out:
> + kfree(vdev->fault_pages);
> + return ret;
> +}
> +
> static int vfio_pci_enable(struct vfio_pci_device *vdev)
> {
> struct pci_dev *pdev = vdev->pdev;
> @@ -358,6 +421,10 @@ static int vfio_pci_enable(struct vfio_pci_device *vdev)
> }
> }
>
> + ret = vfio_pci_init_dma_fault_region(vdev);
> + if (ret)
> + goto disable_exit;
> +
> vfio_pci_probe_mmaps(vdev);
>
> return 0;
> @@ -1383,6 +1450,7 @@ static void vfio_pci_remove(struct pci_dev *pdev)
>
> vfio_iommu_group_put(pdev->dev.iommu_group, &pdev->dev);
> kfree(vdev->region);
> + kfree(vdev->fault_pages);
> mutex_destroy(&vdev->ioeventfds_lock);
>
> if (!disable_idle_d3)
> diff --git a/drivers/vfio/pci/vfio_pci_private.h b/drivers/vfio/pci/vfio_pci_private.h
> index 8a2c7607d513..a392f50e3a99 100644
> --- a/drivers/vfio/pci/vfio_pci_private.h
> +++ b/drivers/vfio/pci/vfio_pci_private.h
> @@ -119,6 +119,8 @@ struct vfio_pci_device {
> int ioeventfds_nr;
> struct eventfd_ctx *err_trigger;
> struct eventfd_ctx *req_trigger;
> + u8 *fault_pages;
> + struct mutex fault_queue_lock;
> struct list_head dummy_resources_list;
> struct mutex ioeventfds_lock;
> struct list_head ioeventfds_list;
> @@ -150,6 +152,14 @@ extern ssize_t vfio_pci_vga_rw(struct vfio_pci_device
> *vdev, char __user *buf,
> extern long vfio_pci_ioeventfd(struct vfio_pci_device *vdev, loff_t offset,
> uint64_t data, int count, int fd);
>
> +struct vfio_pci_fault_abi {
> + u32 entry_size;
> +};
> +
> +extern size_t vfio_pci_dma_fault_rw(struct vfio_pci_device *vdev,
> + char __user *buf, size_t count,
> + loff_t *ppos, bool iswrite);
> +
> extern int vfio_pci_init_perm_bits(void);
> extern void vfio_pci_uninit_perm_bits(void);
>
> diff --git a/drivers/vfio/pci/vfio_pci_rdwr.c b/drivers/vfio/pci/vfio_pci_rdwr.c
> index a87992892a9f..4004ab8cad0e 100644
> --- a/drivers/vfio/pci/vfio_pci_rdwr.c
> +++ b/drivers/vfio/pci/vfio_pci_rdwr.c
> @@ -274,6 +274,51 @@ ssize_t vfio_pci_vga_rw(struct vfio_pci_device *vdev, char __user *buf,
> return done;
> }
>
> +size_t vfio_pci_dma_fault_rw(struct vfio_pci_device *vdev, char __user *buf,
> + size_t count, loff_t *ppos, bool iswrite)
> +{
> + unsigned int i = VFIO_PCI_OFFSET_TO_INDEX(*ppos) - VFIO_PCI_NUM_REGIONS;
> + loff_t pos = *ppos & VFIO_PCI_OFFSET_MASK;
> + void *base = vdev->region[i].data;
> + int ret = -EFAULT;
> +
> + if (pos >= vdev->region[i].size)
> + return -EINVAL;
> +
> + count = min(count, (size_t)(vdev->region[i].size - pos));
> +
> + mutex_lock(&vdev->fault_queue_lock);
> +
> + if (iswrite) {
> + struct vfio_region_dma_fault *header =
> + (struct vfio_region_dma_fault *)base;
> + u32 new_tail;
> +
> + if (pos != 0 || count != 4) {
> + ret = -EINVAL;
> + goto unlock;
> + }
> +
> + if (copy_from_user((void *)&new_tail, buf, count))
> + goto unlock;
> +
> + if (new_tail > header->nb_entries) {
> + ret = -EINVAL;
> + goto unlock;
> + }
> + header->tail = new_tail;
> + } else {
> + if (copy_to_user(buf, base + pos, count))
> + goto unlock;
> + }
> + *ppos += count;
> + ret = count;
> +unlock:
> + mutex_unlock(&vdev->fault_queue_lock);
> + return ret;
> +}
> +
> +
> static int vfio_pci_ioeventfd_handler(void *opaque, void *unused)
> {
> struct vfio_pci_ioeventfd *ioeventfd = opaque;
> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> index 9f2429eb1958..40d770f80e3d 100644
> --- a/include/uapi/linux/vfio.h
> +++ b/include/uapi/linux/vfio.h
> @@ -330,6 +330,9 @@ struct vfio_region_info_cap_type {
> /* sub-types for VFIO_REGION_TYPE_GFX */
> #define VFIO_REGION_SUBTYPE_GFX_EDID (1)
>
> +#define VFIO_REGION_TYPE_NESTED (2)
> +#define VFIO_REGION_SUBTYPE_NESTED_DMA_FAULT (1)
> +
> /**
> * struct vfio_region_gfx_edid - EDID region layout.
> *
> @@ -708,6 +711,38 @@ struct vfio_device_ioeventfd {
>
> #define VFIO_DEVICE_IOEVENTFD _IO(VFIO_TYPE, VFIO_BASE + 16)
>
> +
> +/*
> + * Capability exposed by the DMA fault region
> + * @version: ABI version
> + */
> +#define VFIO_REGION_INFO_CAP_DMA_FAULT 6
> +
> +struct vfio_region_info_cap_fault {
> + struct vfio_info_cap_header header;
> + __u32 version;
> +};
> +
> +/*
> + * DMA Fault Region Layout
> + * @tail: index relative to the start of the ring buffer at which the
> + * consumer finds the next item in the buffer
> + * @entry_size: fault ring buffer entry size in bytes
> + * @nb_entries: max capacity of the fault ring buffer
> + * @offset: ring buffer offset relative to the start of the region
> + * @head: index relative to the start of the ring buffer at which the
> + * producer (kernel) inserts items into the buffers
> + */
> +struct vfio_region_dma_fault {
> + /* Write-Only */
> + __u32 tail;
> + /* Read-Only */
> + __u32 entry_size;
> + __u32 nb_entries;
> + __u32 offset;
> + __u32 head;
> +};
> +
> /* -------- API for Type1 VFIO IOMMU -------- */
>
> /**
> --
> 2.20.1
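
For readers working against the ABI above, a minimal userspace consumer could
look roughly like the sketch below. This is illustrative only and not part of
the patch: drain_dma_faults(), device_fd and region_offset are made-up names,
and the region offset is assumed to have been obtained beforehand through
VFIO_DEVICE_GET_REGION_INFO on the NESTED/DMA_FAULT region.

/*
 * Hypothetical consumer of the DMA fault region: read the header, walk
 * the ring from tail to head, then publish the new tail with a 4-byte
 * write at offset 0 (the only write the kernel accepts).
 */
#include <unistd.h>
#include <linux/vfio.h>
#include <linux/iommu.h>

static int drain_dma_faults(int device_fd, off_t region_offset)
{
	struct vfio_region_dma_fault header;
	struct iommu_fault record;
	__u32 tail;

	/* Fetch head/tail indices, entry size, ring capacity and offset */
	if (pread(device_fd, &header, sizeof(header), region_offset) !=
	    sizeof(header))
		return -1;

	tail = header.tail;
	while (tail != header.head) {
		off_t rec = region_offset + header.offset +
			    (off_t)tail * header.entry_size;

		if (pread(device_fd, &record, sizeof(record), rec) !=
		    sizeof(record))
			return -1;
		/* ... decode the fault record and report it to the guest ... */
		tail = (tail + 1) % header.nb_entries;
	}

	/* Publish the new tail so the kernel can reuse the consumed slots */
	if (pwrite(device_fd, &tail, sizeof(tail), region_offset) !=
	    sizeof(tail))
		return -1;
	return 0;
}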

2020-04-01 13:31:51

by Eric Auger

[permalink] [raw]
Subject: Re: [PATCH v10 04/11] vfio/pci: Add VFIO_REGION_TYPE_NESTED region type

Hi Yi,

On 4/1/20 3:18 PM, Liu, Yi L wrote:
> Hi Eric,
>
> Just curious about your plan on this patch, I just heard my colleague would like
> to reference the functions from this patch in his dsa driver work.

Well I intend to respin until somebody tells me it is completely vain or
death follows. Joking aside, feel free to embed it in any series it would
be beneficial to, just please cc me in case the code diverges.

Thanks

Eric
>
> Regards,
> Yi Liu
>
>> From: Eric Auger <[email protected]>
>> Sent: Saturday, March 21, 2020 12:19 AM
>> To: [email protected]; [email protected]; [email protected]
>> foundation.org; [email protected]; [email protected];
>> [email protected]; [email protected]; [email protected];
>> [email protected]; Liu, Yi L <[email protected]>; jean-
>> [email protected]; [email protected]; [email protected]
>> Cc: [email protected]; [email protected]; [email protected]
>> Subject: [PATCH v10 04/11] vfio/pci: Add VFIO_REGION_TYPE_NESTED region type
>>
>> Add a new specific DMA_FAULT region aiming to expose nested mode
>> translation faults.
>>
>> The region has a ring buffer that contains the actual fault
>> records plus a header allowing it to be handled (tail/head indices,
>> max capacity, entry size). At the moment the region is dimensioned
>> for 512 fault records.
>>
>> Signed-off-by: Eric Auger <[email protected]>
>>
>> ---
>>
>> v8 -> v9:
>> - Use a single region instead of a prod/cons region
>>
>> v4 -> v5
>> - check cons is not null in vfio_pci_check_cons_fault
>>
>> v3 -> v4:
>> - use 2 separate regions, respectively in read and write modes
>> - add the version capability
>> ---
>> drivers/vfio/pci/vfio_pci.c | 68 +++++++++++++++++++++++++++++
>> drivers/vfio/pci/vfio_pci_private.h | 10 +++++
>> drivers/vfio/pci/vfio_pci_rdwr.c | 45 +++++++++++++++++++
>> include/uapi/linux/vfio.h | 35 +++++++++++++++
>> 4 files changed, 158 insertions(+)
>>
>> diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
>> index 379a02c36e37..586b89debed5 100644
>> --- a/drivers/vfio/pci/vfio_pci.c
>> +++ b/drivers/vfio/pci/vfio_pci.c
>> @@ -260,6 +260,69 @@ int vfio_pci_set_power_state(struct vfio_pci_device *vdev, pci_power_t state)
>> return ret;
>> }
>>
>> +static void vfio_pci_dma_fault_release(struct vfio_pci_device *vdev,
>> + struct vfio_pci_region *region)
>> +{
>> +}
>> +
>> +static int vfio_pci_dma_fault_add_capability(struct vfio_pci_device *vdev,
>> + struct vfio_pci_region *region,
>> + struct vfio_info_cap *caps)
>> +{
>> + struct vfio_region_info_cap_fault cap = {
>> + .header.id = VFIO_REGION_INFO_CAP_DMA_FAULT,
>> + .header.version = 1,
>> + .version = 1,
>> + };
>> + return vfio_info_add_capability(caps, &cap.header, sizeof(cap));
>> +}
>> +
>> +static const struct vfio_pci_regops vfio_pci_dma_fault_regops = {
>> + .rw = vfio_pci_dma_fault_rw,
>> + .release = vfio_pci_dma_fault_release,
>> + .add_capability = vfio_pci_dma_fault_add_capability,
>> +};
>> +
>> +#define DMA_FAULT_RING_LENGTH 512
>> +
>> +static int vfio_pci_init_dma_fault_region(struct vfio_pci_device *vdev)
>> +{
>> + struct vfio_region_dma_fault *header;
>> + size_t size;
>> + int ret;
>> +
>> + mutex_init(&vdev->fault_queue_lock);
>> +
>> + /*
>> + * We provision 1 page for the header and space for
>> + * DMA_FAULT_RING_LENGTH fault records in the ring buffer.
>> + */
>> + size = ALIGN(sizeof(struct iommu_fault) *
>> + DMA_FAULT_RING_LENGTH, PAGE_SIZE) + PAGE_SIZE;
>> +
>> + vdev->fault_pages = kzalloc(size, GFP_KERNEL);
>> + if (!vdev->fault_pages)
>> + return -ENOMEM;
>> +
>> + ret = vfio_pci_register_dev_region(vdev,
>> + VFIO_REGION_TYPE_NESTED,
>> + VFIO_REGION_SUBTYPE_NESTED_DMA_FAULT,
>> + &vfio_pci_dma_fault_regops, size,
>> + VFIO_REGION_INFO_FLAG_READ | VFIO_REGION_INFO_FLAG_WRITE,
>> + vdev->fault_pages);
>> + if (ret)
>> + goto out;
>> +
>> + header = (struct vfio_region_dma_fault *)vdev->fault_pages;
>> + header->entry_size = sizeof(struct iommu_fault);
>> + header->nb_entries = DMA_FAULT_RING_LENGTH;
>> + header->offset = sizeof(struct vfio_region_dma_fault);
>> + return 0;
>> +out:
>> + kfree(vdev->fault_pages);
>> + return ret;
>> +}
>> +
>> static int vfio_pci_enable(struct vfio_pci_device *vdev)
>> {
>> struct pci_dev *pdev = vdev->pdev;
>> @@ -358,6 +421,10 @@ static int vfio_pci_enable(struct vfio_pci_device *vdev)
>> }
>> }
>>
>> + ret = vfio_pci_init_dma_fault_region(vdev);
>> + if (ret)
>> + goto disable_exit;
>> +
>> vfio_pci_probe_mmaps(vdev);
>>
>> return 0;
>> @@ -1383,6 +1450,7 @@ static void vfio_pci_remove(struct pci_dev *pdev)
>>
>> vfio_iommu_group_put(pdev->dev.iommu_group, &pdev->dev);
>> kfree(vdev->region);
>> + kfree(vdev->fault_pages);
>> mutex_destroy(&vdev->ioeventfds_lock);
>>
>> if (!disable_idle_d3)
>> diff --git a/drivers/vfio/pci/vfio_pci_private.h b/drivers/vfio/pci/vfio_pci_private.h
>> index 8a2c7607d513..a392f50e3a99 100644
>> --- a/drivers/vfio/pci/vfio_pci_private.h
>> +++ b/drivers/vfio/pci/vfio_pci_private.h
>> @@ -119,6 +119,8 @@ struct vfio_pci_device {
>> int ioeventfds_nr;
>> struct eventfd_ctx *err_trigger;
>> struct eventfd_ctx *req_trigger;
>> + u8 *fault_pages;
>> + struct mutex fault_queue_lock;
>> struct list_head dummy_resources_list;
>> struct mutex ioeventfds_lock;
>> struct list_head ioeventfds_list;
>> @@ -150,6 +152,14 @@ extern ssize_t vfio_pci_vga_rw(struct vfio_pci_device *vdev, char __user *buf,
>> extern long vfio_pci_ioeventfd(struct vfio_pci_device *vdev, loff_t offset,
>> uint64_t data, int count, int fd);
>>
>> +struct vfio_pci_fault_abi {
>> + u32 entry_size;
>> +};
>> +
>> +extern size_t vfio_pci_dma_fault_rw(struct vfio_pci_device *vdev,
>> + char __user *buf, size_t count,
>> + loff_t *ppos, bool iswrite);
>> +
>> extern int vfio_pci_init_perm_bits(void);
>> extern void vfio_pci_uninit_perm_bits(void);
>>
>> diff --git a/drivers/vfio/pci/vfio_pci_rdwr.c b/drivers/vfio/pci/vfio_pci_rdwr.c
>> index a87992892a9f..4004ab8cad0e 100644
>> --- a/drivers/vfio/pci/vfio_pci_rdwr.c
>> +++ b/drivers/vfio/pci/vfio_pci_rdwr.c
>> @@ -274,6 +274,51 @@ ssize_t vfio_pci_vga_rw(struct vfio_pci_device *vdev, char __user *buf,
>> return done;
>> }
>>
>> +size_t vfio_pci_dma_fault_rw(struct vfio_pci_device *vdev, char __user *buf,
>> + size_t count, loff_t *ppos, bool iswrite)
>> +{
>> + unsigned int i = VFIO_PCI_OFFSET_TO_INDEX(*ppos) - VFIO_PCI_NUM_REGIONS;
>> + loff_t pos = *ppos & VFIO_PCI_OFFSET_MASK;
>> + void *base = vdev->region[i].data;
>> + int ret = -EFAULT;
>> +
>> + if (pos >= vdev->region[i].size)
>> + return -EINVAL;
>> +
>> + count = min(count, (size_t)(vdev->region[i].size - pos));
>> +
>> + mutex_lock(&vdev->fault_queue_lock);
>> +
>> + if (iswrite) {
>> + struct vfio_region_dma_fault *header =
>> + (struct vfio_region_dma_fault *)base;
>> + u32 new_tail;
>> +
>> + if (pos != 0 || count != 4) {
>> + ret = -EINVAL;
>> + goto unlock;
>> + }
>> +
>> + if (copy_from_user((void *)&new_tail, buf, count))
>> + goto unlock;
>> +
>> + if (new_tail > header->nb_entries) {
>> + ret = -EINVAL;
>> + goto unlock;
>> + }
>> + header->tail = new_tail;
>> + } else {
>> + if (copy_to_user(buf, base + pos, count))
>> + goto unlock;
>> + }
>> + *ppos += count;
>> + ret = count;
>> +unlock:
>> + mutex_unlock(&vdev->fault_queue_lock);
>> + return ret;
>> +}
>> +
>> +
>> static int vfio_pci_ioeventfd_handler(void *opaque, void *unused)
>> {
>> struct vfio_pci_ioeventfd *ioeventfd = opaque;
>> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
>> index 9f2429eb1958..40d770f80e3d 100644
>> --- a/include/uapi/linux/vfio.h
>> +++ b/include/uapi/linux/vfio.h
>> @@ -330,6 +330,9 @@ struct vfio_region_info_cap_type {
>> /* sub-types for VFIO_REGION_TYPE_GFX */
>> #define VFIO_REGION_SUBTYPE_GFX_EDID (1)
>>
>> +#define VFIO_REGION_TYPE_NESTED (2)
>> +#define VFIO_REGION_SUBTYPE_NESTED_DMA_FAULT (1)
>> +
>> /**
>> * struct vfio_region_gfx_edid - EDID region layout.
>> *
>> @@ -708,6 +711,38 @@ struct vfio_device_ioeventfd {
>>
>> #define VFIO_DEVICE_IOEVENTFD _IO(VFIO_TYPE, VFIO_BASE + 16)
>>
>> +
>> +/*
>> + * Capability exposed by the DMA fault region
>> + * @version: ABI version
>> + */
>> +#define VFIO_REGION_INFO_CAP_DMA_FAULT 6
>> +
>> +struct vfio_region_info_cap_fault {
>> + struct vfio_info_cap_header header;
>> + __u32 version;
>> +};
>> +
>> +/*
>> + * DMA Fault Region Layout
>> + * @tail: index relative to the start of the ring buffer at which the
>> + * consumer finds the next item in the buffer
>> + * @entry_size: fault ring buffer entry size in bytes
>> + * @nb_entries: max capacity of the fault ring buffer
>> + * @offset: ring buffer offset relative to the start of the region
>> + * @head: index relative to the start of the ring buffer at which the
>> + * producer (kernel) inserts items into the buffers
>> + */
>> +struct vfio_region_dma_fault {
>> + /* Write-Only */
>> + __u32 tail;
>> + /* Read-Only */
>> + __u32 entry_size;
>> + __u32 nb_entries;
>> + __u32 offset;
>> + __u32 head;
>> +};
>> +
>> /* -------- API for Type1 VFIO IOMMU -------- */
>>
>> /**
>> --
>> 2.20.1
>

2020-04-06 06:30:13

by Yi Liu

[permalink] [raw]
Subject: RE: [PATCH v10 04/11] vfio/pci: Add VFIO_REGION_TYPE_NESTED region type

Hi Eric,
> From: Auger Eric <[email protected]>
> Sent: Wednesday, April 1, 2020 9:31 PM
> To: Liu, Yi L <[email protected]>; [email protected]; [email protected]
> Subject: Re: [PATCH v10 04/11] vfio/pci: Add VFIO_REGION_TYPE_NESTED region
> type
>
> Hi Yi,
>
> On 4/1/20 3:18 PM, Liu, Yi L wrote:
> > Hi Eric,
> >
> > Just curious about your plan on this patch, I just heard my colleague
> > would like to reference the functions from this patch in his dsa driver work.
>
> Well I intend to respin until somebody tells me it is completely vain or death follows.
> Joking aside, feel free to embed it in any series it would be beneficial to, just please
> cc me in case the code diverges.

Got it. Please also cc me on future versions. :-)

Regards,
Yi Liu

2020-09-23 11:29:44

by Zenghui Yu

[permalink] [raw]
Subject: Re: [PATCH v10 01/11] vfio: VFIO_IOMMU_SET_PASID_TABLE

Hi Eric,

On 2020/3/21 0:19, Eric Auger wrote:
> From: "Liu, Yi L" <[email protected]>
>
> This patch adds a VFIO_IOMMU_SET_PASID_TABLE ioctl
> which aims to pass the virtual iommu guest configuration
> to the host. The latter takes the form of the so-called
> PASID table.
>
> Signed-off-by: Jacob Pan <[email protected]>
> Signed-off-by: Liu, Yi L <[email protected]>
> Signed-off-by: Eric Auger <[email protected]>

[...]

> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index a177bf2c6683..bfacbd876ee1 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -2172,6 +2172,43 @@ static int vfio_iommu_iova_build_caps(struct vfio_iommu *iommu,
> return ret;
> }
>
> +static void
> +vfio_detach_pasid_table(struct vfio_iommu *iommu)
> +{
> + struct vfio_domain *d;
> +
> + mutex_lock(&iommu->lock);
> +
> + list_for_each_entry(d, &iommu->domain_list, next) {
> + iommu_detach_pasid_table(d->domain);
> + }
> + mutex_unlock(&iommu->lock);
> +}
> +
> +static int
> +vfio_attach_pasid_table(struct vfio_iommu *iommu,
> + struct vfio_iommu_type1_set_pasid_table *ustruct)
> +{
> + struct vfio_domain *d;
> + int ret = 0;
> +
> + mutex_lock(&iommu->lock);
> +
> + list_for_each_entry(d, &iommu->domain_list, next) {
> + ret = iommu_attach_pasid_table(d->domain, &ustruct->config);
> + if (ret)
> + goto unwind;
> + }
> + goto unlock;
> +unwind:
> + list_for_each_entry_continue_reverse(d, &iommu->domain_list, next) {
> + iommu_detach_pasid_table(d->domain);
> + }
> +unlock:
> + mutex_unlock(&iommu->lock);
> + return ret;
> +}
> +
> static long vfio_iommu_type1_ioctl(void *iommu_data,
> unsigned int cmd, unsigned long arg)
> {
> @@ -2276,6 +2313,25 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
>
> return copy_to_user((void __user *)arg, &unmap, minsz) ?
> -EFAULT : 0;
> + } else if (cmd == VFIO_IOMMU_SET_PASID_TABLE) {
> + struct vfio_iommu_type1_set_pasid_table ustruct;
> +
> + minsz = offsetofend(struct vfio_iommu_type1_set_pasid_table,
> + config);
> +
> + if (copy_from_user(&ustruct, (void __user *)arg, minsz))
> + return -EFAULT;
> +
> + if (ustruct.argsz < minsz)
> + return -EINVAL;
> +
> + if (ustruct.flags & VFIO_PASID_TABLE_FLAG_SET)
> + return vfio_attach_pasid_table(iommu, &ustruct);
> + else if (ustruct.flags & VFIO_PASID_TABLE_FLAG_UNSET) {
> + vfio_detach_pasid_table(iommu);
> + return 0;
> + } else
> + return -EINVAL;

Nit:

What if user-space blindly sets both flags? Should we check that only one
flag is allowed to be set at this stage, and return an error otherwise?

Besides, before going through the whole series [1][2], I'd like to know
if this is the latest version of your Nested-Stage-Setup work in case I
had missed something.

[1] https://lore.kernel.org/r/[email protected]
[2] https://lore.kernel.org/r/[email protected]


Thanks,
Zenghui
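
As an illustration of the stricter flag handling suggested above, the
VFIO_IOMMU_SET_PASID_TABLE branch of vfio_iommu_type1_ioctl() could be
reworked along the following lines. This is a sketch only, not the actual
respin:

	switch (ustruct.flags & (VFIO_PASID_TABLE_FLAG_SET |
				 VFIO_PASID_TABLE_FLAG_UNSET)) {
	case VFIO_PASID_TABLE_FLAG_SET:
		return vfio_attach_pasid_table(iommu, &ustruct);
	case VFIO_PASID_TABLE_FLAG_UNSET:
		vfio_detach_pasid_table(iommu);
		return 0;
	default:
		/* neither flag, or both flags, set by userspace */
		return -EINVAL;
	}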

2020-09-23 11:49:28

by Eric Auger

[permalink] [raw]
Subject: Re: [PATCH v10 01/11] vfio: VFIO_IOMMU_SET_PASID_TABLE

Hi Zenghui,

On 9/23/20 1:27 PM, Zenghui Yu wrote:
> Hi Eric,
>
> On 2020/3/21 0:19, Eric Auger wrote:
>> From: "Liu, Yi L" <[email protected]>
>>
>> This patch adds a VFIO_IOMMU_SET_PASID_TABLE ioctl
>> which aims to pass the virtual iommu guest configuration
>> to the host. The latter takes the form of the so-called
>> PASID table.
>>
>> Signed-off-by: Jacob Pan <[email protected]>
>> Signed-off-by: Liu, Yi L <[email protected]>
>> Signed-off-by: Eric Auger <[email protected]>
>
> [...]
>
>> diff --git a/drivers/vfio/vfio_iommu_type1.c
>> b/drivers/vfio/vfio_iommu_type1.c
>> index a177bf2c6683..bfacbd876ee1 100644
>> --- a/drivers/vfio/vfio_iommu_type1.c
>> +++ b/drivers/vfio/vfio_iommu_type1.c
>> @@ -2172,6 +2172,43 @@ static int vfio_iommu_iova_build_caps(struct
>> vfio_iommu *iommu,
>>       return ret;
>>   }
>>   +static void
>> +vfio_detach_pasid_table(struct vfio_iommu *iommu)
>> +{
>> +    struct vfio_domain *d;
>> +
>> +    mutex_lock(&iommu->lock);
>> +
>> +    list_for_each_entry(d, &iommu->domain_list, next) {
>> +        iommu_detach_pasid_table(d->domain);
>> +    }
>> +    mutex_unlock(&iommu->lock);
>> +}
>> +
>> +static int
>> +vfio_attach_pasid_table(struct vfio_iommu *iommu,
>> +            struct vfio_iommu_type1_set_pasid_table *ustruct)
>> +{
>> +    struct vfio_domain *d;
>> +    int ret = 0;
>> +
>> +    mutex_lock(&iommu->lock);
>> +
>> +    list_for_each_entry(d, &iommu->domain_list, next) {
>> +        ret = iommu_attach_pasid_table(d->domain, &ustruct->config);
>> +        if (ret)
>> +            goto unwind;
>> +    }
>> +    goto unlock;
>> +unwind:
>> +    list_for_each_entry_continue_reverse(d, &iommu->domain_list, next) {
>> +        iommu_detach_pasid_table(d->domain);
>> +    }
>> +unlock:
>> +    mutex_unlock(&iommu->lock);
>> +    return ret;
>> +}
>> +
>>   static long vfio_iommu_type1_ioctl(void *iommu_data,
>>                      unsigned int cmd, unsigned long arg)
>>   {
>> @@ -2276,6 +2313,25 @@ static long vfio_iommu_type1_ioctl(void
>> *iommu_data,
>>             return copy_to_user((void __user *)arg, &unmap, minsz) ?
>>               -EFAULT : 0;
>> +    } else if (cmd == VFIO_IOMMU_SET_PASID_TABLE) {
>> +        struct vfio_iommu_type1_set_pasid_table ustruct;
>> +
>> +        minsz = offsetofend(struct vfio_iommu_type1_set_pasid_table,
>> +                    config);
>> +
>> +        if (copy_from_user(&ustruct, (void __user *)arg, minsz))
>> +            return -EFAULT;
>> +
>> +        if (ustruct.argsz < minsz)
>> +            return -EINVAL;
>> +
>> +        if (ustruct.flags & VFIO_PASID_TABLE_FLAG_SET)
>> +            return vfio_attach_pasid_table(iommu, &ustruct);
>> +        else if (ustruct.flags & VFIO_PASID_TABLE_FLAG_UNSET) {
>> +            vfio_detach_pasid_table(iommu);
>> +            return 0;
>> +        } else
>> +            return -EINVAL;
>
> Nit:
>
> What if user-space blindly sets both flags? Should we check that only one
> flag is allowed to be set at this stage, and return an error otherwise?
Indeed I can check that.
>
> Besides, before going through the whole series [1][2], I'd like to know
> if this is the latest version of your Nested-Stage-Setup work in case I
> had missed something.
>
> [1] https://lore.kernel.org/r/[email protected]
> [2] https://lore.kernel.org/r/[email protected]

yes those 2 series are the last ones. Thank you for reviewing.

FYI, I intend to respin within a week or 2 on top of Jacob's [PATCH v10
0/7] IOMMU user API enhancement. But functionally there won't be a lot of
changes.

Thanks

Eric
>
>
> Thanks,
> Zenghui
>

2020-09-24 08:25:05

by Zenghui Yu

[permalink] [raw]
Subject: Re: [PATCH v10 04/11] vfio/pci: Add VFIO_REGION_TYPE_NESTED region type

Hi Eric,

On 2020/3/21 0:19, Eric Auger wrote:
> Add a new specific DMA_FAULT region aiming to expose nested mode
> translation faults.
>
> The region has a ring buffer that contains the actual fault
> records plus a header allowing it to be handled (tail/head indices,
> max capacity, entry size). At the moment the region is dimensioned
> for 512 fault records.
>
> Signed-off-by: Eric Auger <[email protected]>

[...]

> diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
> index 379a02c36e37..586b89debed5 100644
> --- a/drivers/vfio/pci/vfio_pci.c
> +++ b/drivers/vfio/pci/vfio_pci.c
> @@ -260,6 +260,69 @@ int vfio_pci_set_power_state(struct vfio_pci_device *vdev, pci_power_t state)
> return ret;
> }
>
> +static void vfio_pci_dma_fault_release(struct vfio_pci_device *vdev,
> + struct vfio_pci_region *region)
> +{
> +}
> +
> +static int vfio_pci_dma_fault_add_capability(struct vfio_pci_device *vdev,
> + struct vfio_pci_region *region,
> + struct vfio_info_cap *caps)
> +{
> + struct vfio_region_info_cap_fault cap = {
> + .header.id = VFIO_REGION_INFO_CAP_DMA_FAULT,
> + .header.version = 1,
> + .version = 1,
> + };
> + return vfio_info_add_capability(caps, &cap.header, sizeof(cap));
> +}
> +
> +static const struct vfio_pci_regops vfio_pci_dma_fault_regops = {
> + .rw = vfio_pci_dma_fault_rw,
> + .release = vfio_pci_dma_fault_release,
> + .add_capability = vfio_pci_dma_fault_add_capability,
> +};
> +
> +#define DMA_FAULT_RING_LENGTH 512
> +
> +static int vfio_pci_init_dma_fault_region(struct vfio_pci_device *vdev)
> +{
> + struct vfio_region_dma_fault *header;
> + size_t size;
> + int ret;
> +
> + mutex_init(&vdev->fault_queue_lock);
> +
> + /*
> + * We provision 1 page for the header and space for
> + * DMA_FAULT_RING_LENGTH fault records in the ring buffer.
> + */
> + size = ALIGN(sizeof(struct iommu_fault) *
> + DMA_FAULT_RING_LENGTH, PAGE_SIZE) + PAGE_SIZE;
> +
> + vdev->fault_pages = kzalloc(size, GFP_KERNEL);
> + if (!vdev->fault_pages)
> + return -ENOMEM;
> +
> + ret = vfio_pci_register_dev_region(vdev,
> + VFIO_REGION_TYPE_NESTED,
> + VFIO_REGION_SUBTYPE_NESTED_DMA_FAULT,
> + &vfio_pci_dma_fault_regops, size,
> + VFIO_REGION_INFO_FLAG_READ | VFIO_REGION_INFO_FLAG_WRITE,
> + vdev->fault_pages);
> + if (ret)
> + goto out;
> +
> + header = (struct vfio_region_dma_fault *)vdev->fault_pages;
> + header->entry_size = sizeof(struct iommu_fault);
> + header->nb_entries = DMA_FAULT_RING_LENGTH;
> + header->offset = sizeof(struct vfio_region_dma_fault);
> + return 0;
> +out:
> + kfree(vdev->fault_pages);
> + return ret;
> +}
> +
> static int vfio_pci_enable(struct vfio_pci_device *vdev)
> {
> struct pci_dev *pdev = vdev->pdev;
> @@ -358,6 +421,10 @@ static int vfio_pci_enable(struct vfio_pci_device *vdev)
> }
> }
>
> + ret = vfio_pci_init_dma_fault_region(vdev);
> + if (ret)
> + goto disable_exit;
> +
> vfio_pci_probe_mmaps(vdev);
>
> return 0;
> @@ -1383,6 +1450,7 @@ static void vfio_pci_remove(struct pci_dev *pdev)
>
> vfio_iommu_group_put(pdev->dev.iommu_group, &pdev->dev);
> kfree(vdev->region);
> + kfree(vdev->fault_pages);
> mutex_destroy(&vdev->ioeventfds_lock);
>
> if (!disable_idle_d3)
> diff --git a/drivers/vfio/pci/vfio_pci_private.h b/drivers/vfio/pci/vfio_pci_private.h
> index 8a2c7607d513..a392f50e3a99 100644
> --- a/drivers/vfio/pci/vfio_pci_private.h
> +++ b/drivers/vfio/pci/vfio_pci_private.h
> @@ -119,6 +119,8 @@ struct vfio_pci_device {
> int ioeventfds_nr;
> struct eventfd_ctx *err_trigger;
> struct eventfd_ctx *req_trigger;
> + u8 *fault_pages;

What's the reason to use 'u8'? It doesn't match the type of header, nor
the type of ring buffer.

> + struct mutex fault_queue_lock;
> struct list_head dummy_resources_list;
> struct mutex ioeventfds_lock;
> struct list_head ioeventfds_list;
> @@ -150,6 +152,14 @@ extern ssize_t vfio_pci_vga_rw(struct vfio_pci_device *vdev, char __user *buf,
> extern long vfio_pci_ioeventfd(struct vfio_pci_device *vdev, loff_t offset,
> uint64_t data, int count, int fd);
>
> +struct vfio_pci_fault_abi {
> + u32 entry_size;
> +};

This is not used by this patch (and the whole series).

> +
> +extern size_t vfio_pci_dma_fault_rw(struct vfio_pci_device *vdev,
> + char __user *buf, size_t count,
> + loff_t *ppos, bool iswrite);
> +
> extern int vfio_pci_init_perm_bits(void);
> extern void vfio_pci_uninit_perm_bits(void);
>
> diff --git a/drivers/vfio/pci/vfio_pci_rdwr.c b/drivers/vfio/pci/vfio_pci_rdwr.c
> index a87992892a9f..4004ab8cad0e 100644
> --- a/drivers/vfio/pci/vfio_pci_rdwr.c
> +++ b/drivers/vfio/pci/vfio_pci_rdwr.c
> @@ -274,6 +274,51 @@ ssize_t vfio_pci_vga_rw(struct vfio_pci_device *vdev, char __user *buf,
> return done;
> }
>
> +size_t vfio_pci_dma_fault_rw(struct vfio_pci_device *vdev, char __user *buf,
> + size_t count, loff_t *ppos, bool iswrite)
> +{
> + unsigned int i = VFIO_PCI_OFFSET_TO_INDEX(*ppos) - VFIO_PCI_NUM_REGIONS;
> + loff_t pos = *ppos & VFIO_PCI_OFFSET_MASK;
> + void *base = vdev->region[i].data;
> + int ret = -EFAULT;
> +
> + if (pos >= vdev->region[i].size)
> + return -EINVAL;
> +
> + count = min(count, (size_t)(vdev->region[i].size - pos));
> +
> + mutex_lock(&vdev->fault_queue_lock);
> +
> + if (iswrite) {
> + struct vfio_region_dma_fault *header =
> + (struct vfio_region_dma_fault *)base;
> + u32 new_tail;
> +
> + if (pos != 0 || count != 4) {
> + ret = -EINVAL;
> + goto unlock;
> + }
> +
> + if (copy_from_user((void *)&new_tail, buf, count))
> + goto unlock;
> +
> + if (new_tail > header->nb_entries) {

Maybe

new_tail >= header->nb_entries ?

> + ret = -EINVAL;
> + goto unlock;
> + }
> + header->tail = new_tail;
> + } else {
> + if (copy_to_user(buf, base + pos, count))
> + goto unlock;
> + }
> + *ppos += count;
> + ret = count;
> +unlock:
> + mutex_unlock(&vdev->fault_queue_lock);
> + return ret;
> +}
> +
> +
> static int vfio_pci_ioeventfd_handler(void *opaque, void *unused)
> {
> struct vfio_pci_ioeventfd *ioeventfd = opaque;


Thanks,
Zenghui
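
For clarity, the off-by-one noted above stems from the fact that with
nb_entries slots the valid tail values are 0 .. nb_entries - 1. A sketch of
the suggested fix in the write path of vfio_pci_dma_fault_rw() would read:

	if (new_tail >= header->nb_entries) {
		ret = -EINVAL;
		goto unlock;
	}
	header->tail = new_tail;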

2020-09-24 08:53:50

by Zenghui Yu

[permalink] [raw]
Subject: Re: [PATCH v10 05/11] vfio/pci: Register an iommu fault handler

Hi Eric,

On 2020/3/21 0:19, Eric Auger wrote:
> Register an IOMMU fault handler which records faults in
> the DMA FAULT region ring buffer. In a subsequent patch, we
> will add the signaling of a specific eventfd to allow the
> userspace to be notified whenever a new fault has shown up.
>
> Signed-off-by: Eric Auger <[email protected]>

[...]

> diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
> index 586b89debed5..69595c240baf 100644
> --- a/drivers/vfio/pci/vfio_pci.c
> +++ b/drivers/vfio/pci/vfio_pci.c
> @@ -27,6 +27,7 @@
> #include <linux/vfio.h>
> #include <linux/vgaarb.h>
> #include <linux/nospec.h>
> +#include <linux/circ_buf.h>
>
> #include "vfio_pci_private.h"
>
> @@ -283,6 +284,38 @@ static const struct vfio_pci_regops vfio_pci_dma_fault_regops = {
> .add_capability = vfio_pci_dma_fault_add_capability,
> };
>
> +int vfio_pci_iommu_dev_fault_handler(struct iommu_fault *fault, void *data)
> +{
> + struct vfio_pci_device *vdev = (struct vfio_pci_device *)data;
> + struct vfio_region_dma_fault *reg =
> + (struct vfio_region_dma_fault *)vdev->fault_pages;
> + struct iommu_fault *new =
> + (struct iommu_fault *)(vdev->fault_pages + reg->offset +
> + reg->head * reg->entry_size);

Shouldn't 'reg->head' be protected under the fault_queue_lock? Otherwise
things may change behind our backs...

We shouldn't make any assumption about how the IOMMU driver would report the
fault (serially or in parallel), I think.

> + int head, tail, size;
> + int ret = 0;
> +
> + if (fault->type != IOMMU_FAULT_DMA_UNRECOV)
> + return -ENOENT;
> +
> + mutex_lock(&vdev->fault_queue_lock);
> +
> + head = reg->head;
> + tail = reg->tail;
> + size = reg->nb_entries;
> +
> + if (CIRC_SPACE(head, tail, size) < 1) {
> + ret = -ENOSPC;
> + goto unlock;
> + }
> +
> + *new = *fault;
> + reg->head = (head + 1) % size;
> +unlock:
> + mutex_unlock(&vdev->fault_queue_lock);
> + return ret;
> +}
> +
> #define DMA_FAULT_RING_LENGTH 512
>
> static int vfio_pci_init_dma_fault_region(struct vfio_pci_device *vdev)
> @@ -317,6 +350,13 @@ static int vfio_pci_init_dma_fault_region(struct vfio_pci_device *vdev)
> header->entry_size = sizeof(struct iommu_fault);
> header->nb_entries = DMA_FAULT_RING_LENGTH;
> header->offset = sizeof(struct vfio_region_dma_fault);
> +
> + ret = iommu_register_device_fault_handler(&vdev->pdev->dev,
> + vfio_pci_iommu_dev_fault_handler,
> + vdev);
> + if (ret)
> + goto out;
> +
> return 0;
> out:
> kfree(vdev->fault_pages);


Thanks,
Zenghui
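
To make the locking concern concrete, a rework of
vfio_pci_iommu_dev_fault_handler() along the lines requested above could look
like the sketch below, where the slot pointer is only computed once
fault_queue_lock is held and head can no longer move underneath us. This is
an illustration, not the actual respin:

int vfio_pci_iommu_dev_fault_handler(struct iommu_fault *fault, void *data)
{
	struct vfio_pci_device *vdev = data;
	struct vfio_region_dma_fault *reg =
		(struct vfio_region_dma_fault *)vdev->fault_pages;
	struct iommu_fault *new;
	int head, tail, size;
	int ret = 0;

	if (fault->type != IOMMU_FAULT_DMA_UNRECOV)
		return -ENOENT;

	mutex_lock(&vdev->fault_queue_lock);

	head = reg->head;
	tail = reg->tail;
	size = reg->nb_entries;

	if (CIRC_SPACE(head, tail, size) < 1) {
		ret = -ENOSPC;
		goto unlock;
	}

	/* Slot computed under the lock, from the snapshotted head */
	new = (struct iommu_fault *)(vdev->fault_pages + reg->offset +
				     head * reg->entry_size);
	*new = *fault;
	reg->head = (head + 1) % size;
unlock:
	mutex_unlock(&vdev->fault_queue_lock);
	return ret;
}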

2020-09-24 13:43:50

by Zenghui Yu

[permalink] [raw]
Subject: Re: [PATCH v10 11/11] vfio: Document nested stage control

Hi Eric,

On 2020/3/21 0:19, Eric Auger wrote:
> The VFIO API was enhanced to support nested stage control: a bunch of
> new ioctls, one DMA FAULT region and an associated specific IRQ.
>
> Let's document the process to follow to set up nested mode.
>
> Signed-off-by: Eric Auger <[email protected]>

[...]

> +The userspace must be prepared to receive faults. The VFIO-PCI device
> +exposes one dedicated DMA FAULT region: it contains a ring buffer and
> +its header that allows managing the head/tail indices. The region is
> +identified by the following index/subindex:
> +- VFIO_REGION_TYPE_NESTED/VFIO_REGION_SUBTYPE_NESTED_DMA_FAULT
> +
> +The DMA FAULT region exposes a VFIO_REGION_INFO_CAP_PRODUCER_FAULT
> +region capability that allows the userspace to retrieve the ABI version
> +of the fault records filled by the host.

Nit: I don't see this capability in the code.


Thanks,
Zenghui

2020-10-06 15:31:09

by Eric Auger

[permalink] [raw]
Subject: Re: [PATCH v10 11/11] vfio: Document nested stage control

Hi Zenghui,

On 9/24/20 3:42 PM, Zenghui Yu wrote:
> Hi Eric,
>
> On 2020/3/21 0:19, Eric Auger wrote:
>> The VFIO API was enhanced to support nested stage control: a bunch of
>> new ioctls, one DMA FAULT region and an associated specific IRQ.
>>
>> Let's document the process to follow to set up nested mode.
>>
>> Signed-off-by: Eric Auger <[email protected]>
>
> [...]
>
>> +The userspace must be prepared to receive faults. The VFIO-PCI device
>> +exposes one dedicated DMA FAULT region: it contains a ring buffer and
>> +its header that allows managing the head/tail indices. The region is
>> +identified by the following index/subindex:
>> +- VFIO_REGION_TYPE_NESTED/VFIO_REGION_SUBTYPE_NESTED_DMA_FAULT
>> +
>> +The DMA FAULT region exposes a VFIO_REGION_INFO_CAP_PRODUCER_FAULT
>> +region capability that allows the userspace to retrieve the ABI version
>> +of the fault records filled by the host.
>
> Nit: I don't see this capability in the code.

Thank you very much for the review. I am late doing the respin but I
will take into account all your comments.

Thanks!

Eric
>
>
> Thanks,
> Zenghui
>

Subject: RE: [PATCH v10 01/11] vfio: VFIO_IOMMU_SET_PASID_TABLE

Hi Eric,

> -----Original Message-----
> From: iommu [mailto:[email protected]] On Behalf Of
> Auger Eric
> Sent: 23 September 2020 12:47
> To: yuzenghui <[email protected]>; [email protected];
> [email protected]; [email protected];
> [email protected]; [email protected]; [email protected];
> [email protected]; [email protected];
> [email protected]; [email protected]
> Subject: Re: [PATCH v10 01/11] vfio: VFIO_IOMMU_SET_PASID_TABLE

...

> > Besides, before going through the whole series [1][2], I'd like to
> > know if this is the latest version of your Nested-Stage-Setup work in
> > case I had missed something.
> >
> > [1]
> > https://lore.kernel.org/r/[email protected]
> > [2]
> > https://lore.kernel.org/r/[email protected]
>
> yes those 2 series are the last ones. Thank you for reviewing.
>
> FYI, I intend to respin within a week or 2 on top of Jacob's [PATCH v10 0/7]
> IOMMU user API enhancement.

Thanks for that. Also is there any plan to respin the related Qemu series as well?
I know dual stage interface proposals are still under discussion, but it would be
nice to have a testable solution based on new interfaces for ARM64 as well.
Happy to help with any tests or verifications.

Please let me know.

Thanks,
Shameer


2020-10-28 07:43:06

by Eric Auger

[permalink] [raw]
Subject: Re: [PATCH v10 01/11] vfio: VFIO_IOMMU_SET_PASID_TABLE

Hi Shameer,

On 10/27/20 1:20 PM, Shameerali Kolothum Thodi wrote:
> Hi Eric,
>
>> -----Original Message-----
>> From: iommu [mailto:[email protected]] On Behalf Of
>> Auger Eric
>> Sent: 23 September 2020 12:47
>> To: yuzenghui <[email protected]>; [email protected];
>> [email protected]; [email protected];
>> [email protected]; [email protected]; [email protected];
>> [email protected]; [email protected];
>> [email protected]; [email protected]
>> Subject: Re: [PATCH v10 01/11] vfio: VFIO_IOMMU_SET_PASID_TABLE
>
> ...
>
>>> Besides, before going through the whole series [1][2], I'd like to
>>> know if this is the latest version of your Nested-Stage-Setup work in
>>> case I had missed something.
>>>
>>> [1]
>>> https://lore.kernel.org/r/[email protected]
>>> [2]
>>> https://lore.kernel.org/r/[email protected]
>>
>> yes those 2 series are the last ones. Thank you for reviewing.
>>
>> FYI, I intend to respin within a week or 2 on top of Jacob's [PATCH v10 0/7]
>> IOMMU user API enhancement.
>
> Thanks for that. Also is there any plan to respin the related Qemu series as well?
> I know dual stage interface proposals are still under discussion, but it would be
> nice to have a testable solution based on new interfaces for ARM64 as well.
> Happy to help with any tests or verifications.

Yes the QEMU series will be respinned as well. That's on the top of my
todo list right now.

Thanks

Eric
>
> Please let me know.
>
> Thanks,
> Shameer
>
>

2020-11-13 16:14:25

by Eric Auger

[permalink] [raw]
Subject: Re: [PATCH v10 05/11] vfio/pci: Register an iommu fault handler

Hi Zenghui,

On 9/24/20 10:49 AM, Zenghui Yu wrote:
> Hi Eric,
>
> On 2020/3/21 0:19, Eric Auger wrote:
>> Register an IOMMU fault handler which records faults in
>> the DMA FAULT region ring buffer. In a subsequent patch, we
>> will add the signaling of a specific eventfd to allow the
>> userspace to be notified whenever a new fault has shown up.
>>
>> Signed-off-by: Eric Auger <[email protected]>
>
> [...]
>
>> diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
>> index 586b89debed5..69595c240baf 100644
>> --- a/drivers/vfio/pci/vfio_pci.c
>> +++ b/drivers/vfio/pci/vfio_pci.c
>> @@ -27,6 +27,7 @@
>>   #include <linux/vfio.h>
>>   #include <linux/vgaarb.h>
>>   #include <linux/nospec.h>
>> +#include <linux/circ_buf.h>
>>     #include "vfio_pci_private.h"
>>   @@ -283,6 +284,38 @@ static const struct vfio_pci_regops
>> vfio_pci_dma_fault_regops = {
>>       .add_capability = vfio_pci_dma_fault_add_capability,
>>   };
>>   +int vfio_pci_iommu_dev_fault_handler(struct iommu_fault *fault,
>> void *data)
>> +{
>> +    struct vfio_pci_device *vdev = (struct vfio_pci_device *)data;
>> +    struct vfio_region_dma_fault *reg =
>> +        (struct vfio_region_dma_fault *)vdev->fault_pages;
>> +    struct iommu_fault *new =
>> +        (struct iommu_fault *)(vdev->fault_pages + reg->offset +
>> +            reg->head * reg->entry_size);
>
> Shouldn't 'reg->head' be protected under the fault_queue_lock? Otherwise
> things may change behind our backs...
> We shouldn't make any assumption about how the IOMMU driver would report the
> fault (serially or in parallel), I think.

Yes I modified the locking

Thanks

Eric
>
>> +    int head, tail, size;
>> +    int ret = 0;
>> +
>> +    if (fault->type != IOMMU_FAULT_DMA_UNRECOV)
>> +        return -ENOENT;
>> +
>> +    mutex_lock(&vdev->fault_queue_lock);
>> +
>> +    head = reg->head;
>> +    tail = reg->tail;
>> +    size = reg->nb_entries;
>> +
>> +    if (CIRC_SPACE(head, tail, size) < 1) {
>> +        ret = -ENOSPC;
>> +        goto unlock;
>> +    }
>> +
>> +    *new = *fault;
>> +    reg->head = (head + 1) % size;
>> +unlock:
>> +    mutex_unlock(&vdev->fault_queue_lock);
>> +    return ret;
>> +}
>> +
>>   #define DMA_FAULT_RING_LENGTH 512
>>     static int vfio_pci_init_dma_fault_region(struct vfio_pci_device
>> *vdev)
>> @@ -317,6 +350,13 @@ static int vfio_pci_init_dma_fault_region(struct
>> vfio_pci_device *vdev)
>>       header->entry_size = sizeof(struct iommu_fault);
>>       header->nb_entries = DMA_FAULT_RING_LENGTH;
>>       header->offset = sizeof(struct vfio_region_dma_fault);
>> +
>> +    ret = iommu_register_device_fault_handler(&vdev->pdev->dev,
>> +                    vfio_pci_iommu_dev_fault_handler,
>> +                    vdev);
>> +    if (ret)
>> +        goto out;
>> +
>>       return 0;
>>   out:
>>       kfree(vdev->fault_pages);
>
>
> Thanks,
> Zenghui
>

2020-11-13 16:16:37

by Eric Auger

[permalink] [raw]
Subject: Re: [PATCH v10 04/11] vfio/pci: Add VFIO_REGION_TYPE_NESTED region type

Hi Zenghui,
On 9/24/20 10:23 AM, Zenghui Yu wrote:
> Hi Eric,
>
> On 2020/3/21 0:19, Eric Auger wrote:
>> Add a new specific DMA_FAULT region aiming to expose nested mode
>> translation faults.
>>
>> The region has a ring buffer that contains the actual fault
>> records plus a header allowing it to be handled (tail/head indices,
>> max capacity, entry size). At the moment the region is dimensioned
>> for 512 fault records.
>>
>> Signed-off-by: Eric Auger <[email protected]>
>
> [...]
>
>> diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
>> index 379a02c36e37..586b89debed5 100644
>> --- a/drivers/vfio/pci/vfio_pci.c
>> +++ b/drivers/vfio/pci/vfio_pci.c
>> @@ -260,6 +260,69 @@ int vfio_pci_set_power_state(struct
>> vfio_pci_device *vdev, pci_power_t state)
>>       return ret;
>>   }
>>   +static void vfio_pci_dma_fault_release(struct vfio_pci_device *vdev,
>> +                       struct vfio_pci_region *region)
>> +{
>> +}
>> +
>> +static int vfio_pci_dma_fault_add_capability(struct vfio_pci_device
>> *vdev,
>> +                         struct vfio_pci_region *region,
>> +                         struct vfio_info_cap *caps)
>> +{
>> +    struct vfio_region_info_cap_fault cap = {
>> +        .header.id = VFIO_REGION_INFO_CAP_DMA_FAULT,
>> +        .header.version = 1,
>> +        .version = 1,
>> +    };
>> +    return vfio_info_add_capability(caps, &cap.header, sizeof(cap));
>> +}
>> +
>> +static const struct vfio_pci_regops vfio_pci_dma_fault_regops = {
>> +    .rw        = vfio_pci_dma_fault_rw,
>> +    .release    = vfio_pci_dma_fault_release,
>> +    .add_capability = vfio_pci_dma_fault_add_capability,
>> +};
>> +
>> +#define DMA_FAULT_RING_LENGTH 512
>> +
>> +static int vfio_pci_init_dma_fault_region(struct vfio_pci_device *vdev)
>> +{
>> +    struct vfio_region_dma_fault *header;
>> +    size_t size;
>> +    int ret;
>> +
>> +    mutex_init(&vdev->fault_queue_lock);
>> +
>> +    /*
>> +     * We provision 1 page for the header and space for
>> +     * DMA_FAULT_RING_LENGTH fault records in the ring buffer.
>> +     */
>> +    size = ALIGN(sizeof(struct iommu_fault) *
>> +             DMA_FAULT_RING_LENGTH, PAGE_SIZE) + PAGE_SIZE;
>> +
>> +    vdev->fault_pages = kzalloc(size, GFP_KERNEL);
>> +    if (!vdev->fault_pages)
>> +        return -ENOMEM;
>> +
>> +    ret = vfio_pci_register_dev_region(vdev,
>> +        VFIO_REGION_TYPE_NESTED,
>> +        VFIO_REGION_SUBTYPE_NESTED_DMA_FAULT,
>> +        &vfio_pci_dma_fault_regops, size,
>> +        VFIO_REGION_INFO_FLAG_READ | VFIO_REGION_INFO_FLAG_WRITE,
>> +        vdev->fault_pages);
>> +    if (ret)
>> +        goto out;
>> +
>> +    header = (struct vfio_region_dma_fault *)vdev->fault_pages;
>> +    header->entry_size = sizeof(struct iommu_fault);
>> +    header->nb_entries = DMA_FAULT_RING_LENGTH;
>> +    header->offset = sizeof(struct vfio_region_dma_fault);
>> +    return 0;
>> +out:
>> +    kfree(vdev->fault_pages);
>> +    return ret;
>> +}
>> +
>>   static int vfio_pci_enable(struct vfio_pci_device *vdev)
>>   {
>>       struct pci_dev *pdev = vdev->pdev;
>> @@ -358,6 +421,10 @@ static int vfio_pci_enable(struct vfio_pci_device
>> *vdev)
>>           }
>>       }
>>   +    ret = vfio_pci_init_dma_fault_region(vdev);
>> +    if (ret)
>> +        goto disable_exit;
>> +
>>       vfio_pci_probe_mmaps(vdev);
>>         return 0;
>> @@ -1383,6 +1450,7 @@ static void vfio_pci_remove(struct pci_dev *pdev)
>>         vfio_iommu_group_put(pdev->dev.iommu_group, &pdev->dev);
>>       kfree(vdev->region);
>> +    kfree(vdev->fault_pages);
>>       mutex_destroy(&vdev->ioeventfds_lock);
>>         if (!disable_idle_d3)
>> diff --git a/drivers/vfio/pci/vfio_pci_private.h
>> b/drivers/vfio/pci/vfio_pci_private.h
>> index 8a2c7607d513..a392f50e3a99 100644
>> --- a/drivers/vfio/pci/vfio_pci_private.h
>> +++ b/drivers/vfio/pci/vfio_pci_private.h
>> @@ -119,6 +119,8 @@ struct vfio_pci_device {
>>       int            ioeventfds_nr;
>>       struct eventfd_ctx    *err_trigger;
>>       struct eventfd_ctx    *req_trigger;
>> +    u8            *fault_pages;
>
> What's the reason to use 'u8'? It doesn't match the type of header, nor
> the type of ring buffer.
actually it matches
u8 *pci_config_map;
u8 *vconfig;

fault_pages is the VA of the ring buffer. In the header, offset is the
offset of the ring with respect to the start of the region.

>
>> +    struct mutex        fault_queue_lock;
>>       struct list_head    dummy_resources_list;
>>       struct mutex        ioeventfds_lock;
>>       struct list_head    ioeventfds_list;
>> @@ -150,6 +152,14 @@ extern ssize_t vfio_pci_vga_rw(struct
>> vfio_pci_device *vdev, char __user *buf,
>>   extern long vfio_pci_ioeventfd(struct vfio_pci_device *vdev, loff_t
>> offset,
>>                      uint64_t data, int count, int fd);
>>   +struct vfio_pci_fault_abi {
>> +    u32 entry_size;
>> +};
>
> This is not used by this patch (and the whole series).
removed
>
>> +
>> +extern size_t vfio_pci_dma_fault_rw(struct vfio_pci_device *vdev,
>> +                    char __user *buf, size_t count,
>> +                    loff_t *ppos, bool iswrite);
>> +
>>   extern int vfio_pci_init_perm_bits(void);
>>   extern void vfio_pci_uninit_perm_bits(void);
>>   diff --git a/drivers/vfio/pci/vfio_pci_rdwr.c
>> b/drivers/vfio/pci/vfio_pci_rdwr.c
>> index a87992892a9f..4004ab8cad0e 100644
>> --- a/drivers/vfio/pci/vfio_pci_rdwr.c
>> +++ b/drivers/vfio/pci/vfio_pci_rdwr.c
>> @@ -274,6 +274,51 @@ ssize_t vfio_pci_vga_rw(struct vfio_pci_device
>> *vdev, char __user *buf,
>>       return done;
>>   }
>>   +size_t vfio_pci_dma_fault_rw(struct vfio_pci_device *vdev, char
>> __user *buf,
>> +                 size_t count, loff_t *ppos, bool iswrite)
>> +{
>> +    unsigned int i = VFIO_PCI_OFFSET_TO_INDEX(*ppos) -
>> VFIO_PCI_NUM_REGIONS;
>> +    loff_t pos = *ppos & VFIO_PCI_OFFSET_MASK;
>> +    void *base = vdev->region[i].data;
>> +    int ret = -EFAULT;
>> +
>> +    if (pos >= vdev->region[i].size)
>> +        return -EINVAL;
>> +
>> +    count = min(count, (size_t)(vdev->region[i].size - pos));
>> +
>> +    mutex_lock(&vdev->fault_queue_lock);
>> +
>> +    if (iswrite) {
>> +        struct vfio_region_dma_fault *header =
>> +            (struct vfio_region_dma_fault *)base;
>> +        u32 new_tail;
>> +
>> +        if (pos != 0 || count != 4) {
>> +            ret = -EINVAL;
>> +            goto unlock;
>> +        }
>> +
>> +        if (copy_from_user((void *)&new_tail, buf, count))
>> +            goto unlock;
>> +
>> +        if (new_tail > header->nb_entries) {
>
> Maybe
>
> new_tail >= header->nb_entries ?
sure

Thanks

Eric
>
>> +            ret = -EINVAL;
>> +            goto unlock;
>> +        }
>> +        header->tail = new_tail;
>> +    } else {
>> +        if (copy_to_user(buf, base + pos, count))
>> +            goto unlock;
>> +    }
>> +    *ppos += count;
>> +    ret = count;
>> +unlock:
>> +    mutex_unlock(&vdev->fault_queue_lock);
>> +    return ret;
>> +}
>> +
>> +
>>   static int vfio_pci_ioeventfd_handler(void *opaque, void *unused)
>>   {
>>       struct vfio_pci_ioeventfd *ioeventfd = opaque;
>
>
> Thanks,
> Zenghui
>