2021-02-23 21:23:08

by Eric Auger

Subject: [PATCH v12 00/13] SMMUv3 Nested Stage Setup (VFIO part)

This series brings the VFIO part of HW nested paging support
in the SMMUv3.

This is a rebase on top of v5.11

The series depends on:
[PATCH v14 00/13] SMMUv3 Nested Stage Setup (IOMMU part)

Three new ioctls are introduced that allow userspace to:
1) pass the guest stage 1 configuration
2) pass stage 1 MSI bindings
3) invalidate stage 1 related caches

They map onto the related new IOMMU API functions.
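
For illustration, here is a minimal userspace sketch of the resulting
call sequence. The ioctl names come from this series; the struct type
names and the argsz convention are assumptions in the usual VFIO uapi
style and must be checked against the series' actual uapi headers:

    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    /* container: open fd on /dev/vfio/vfio, set up for nesting and
     * with the groups already attached */
    static int setup_nested(int container)
    {
            /* hypothetical struct names, see the series' uapi patches */
            struct vfio_iommu_type1_set_pasid_table pt = { .argsz = sizeof(pt) };
            struct vfio_iommu_type1_set_msi_binding msi = { .argsz = sizeof(msi) };
            struct vfio_iommu_type1_cache_invalidate inv = { .argsz = sizeof(inv) };

            /* 1) pass the guest stage 1 (PASID table) configuration */
            if (ioctl(container, VFIO_IOMMU_SET_PASID_TABLE, &pt))
                    return -1;
            /* 2) pass a stage 1 MSI binding (guest IOVA -> doorbell GPA) */
            if (ioctl(container, VFIO_IOMMU_SET_MSI_BINDING, &msi))
                    return -1;
            /* 3) propagate a guest stage 1 cache invalidation */
            return ioctl(container, VFIO_IOMMU_CACHE_INVALIDATE, &inv);
    }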

We introduce the capability to register specific interrupt
indexes (see [1]). A new DMA_FAULT interrupt index allows userspace
to register an eventfd that is signaled whenever a stage 1 related
fault is detected at the physical level. Two specific regions are
also added to:
- expose the fault records to userspace and
- inject page responses.

The latter functionality is not exercised in this series
but is provided as a proof of concept for further vSVA activities (Shameer's input).
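
As a sketch of the userspace side, the eventfd for the DMA_FAULT
interrupt index can be attached with the standard VFIO_DEVICE_SET_IRQS
ioctl, assuming the extended index has already been discovered through
the IRQ capability chain (see the sketch after patch 07/13 below):

    #include <string.h>
    #include <sys/eventfd.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    static int attach_fault_eventfd(int device, __u32 dma_fault_index)
    {
            char buf[sizeof(struct vfio_irq_set) + sizeof(int)];
            struct vfio_irq_set *set = (struct vfio_irq_set *)buf;
            int efd = eventfd(0, 0);

            if (efd < 0)
                    return -1;
            set->argsz = sizeof(buf);
            set->flags = VFIO_IRQ_SET_DATA_EVENTFD |
                         VFIO_IRQ_SET_ACTION_TRIGGER;
            set->index = dma_fault_index;
            set->start = 0;
            set->count = 1;
            memcpy(set->data, &efd, sizeof(int));
            return ioctl(device, VFIO_DEVICE_SET_IRQS, set);
    }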

Best Regards

Eric

This series can be found at:
https://github.com/eauger/linux/tree/v5.11-stallv12-2stage-v14

The series includes Tina's patch stemming from
[1] "[RFC PATCH v2 1/3] vfio: Use capability chains to handle device
specific irq" plus patches originally contributed by Yi.

History:

v11 -> v12:
- numerous fixes following reviewer feedback. Many thanks to all of you
- See individual history log.

v10 -> v11:
- rebase on top of v5.10-rc4
- adapt to changes in the IOMMU API (compliant with the doc
written by Jacob/Yi)
- addition of the page response region
- Took into account Zenghui's comments
- In this version I have kept the ioctls separate. Since
Yi's series [2] is currently stalled, I've just rebased here.

[2] [PATCH v7 00/16] vfio: expose virtual Shared Virtual Addressing
to VMs

v9 -> v10:
- rebase on top of 5.6.0-rc3 (no change versus v9)

v8 -> v9:
- introduce specific irq framework
- single fault region
- iommu_unregister_device_fault_handler failure case not handled
yet.

v7 -> v8:
- rebase on top of v5.2-rc1 and especially
8be39a1a04c1 iommu/arm-smmu-v3: Add a master->domain pointer
- dynamic alloc of s1_cfg/s2_cfg
- __arm_smmu_tlb_inv_asid/s1_range_nosync
- check there is no HW MSI regions
- asid invalidation using pasid extended struct (change in the uapi)
- add s1_live/s2_live checks
- move check about support of nested stages in domain finalise
- fixes in error reporting according to the discussion with Robin
- reordered the patches to have first iommu/smmuv3 patches and then
VFIO patches

v6 -> v7:
- removed device handle from bind/unbind_guest_msi
- added "iommu/smmuv3: Nested mode single MSI doorbell per domain
enforcement"
- added a few uapi comments as suggested by Jean, Jacob and Alex

v5 -> v6:
- Fix compilation issue when CONFIG_IOMMU_API is unset

v4 -> v5:
- fix bug reported by Vincent: fault handler unregistration now happens in
vfio_pci_release
- IOMMU_FAULT_PERM_* moved outside of struct definition + small
uapi changes suggested by Jean-Philippe (except fetch_addr)
- iommu: introduce device fault report API: removed the PRI part.
- see individual logs for more details
- reset the ste abort flag on detach

v3 -> v4:
- took into account Alex, Jean-Philippe and Robin's comments on v3
- rework of the smmuv3 driver integration
- add tear down ops for msi binding and PASID table binding
- fix S1 fault propagation
- put fault reporting patches at the beginning of the series following
Jean-Philippe's request
- update of the cache invalidate and fault API uapis
- VFIO fault reporting rework with 2 separate regions and one mmappable
segment for the fault queue
- moved to PATCH

v2 -> v3:
- When registering the S1 MSI binding we now store the device handle. This
addresses Robin's comment about discrimination of devices belonging to
different S1 groups and using different physical MSI doorbells.
- Change the fault reporting API: use VFIO_PCI_DMA_FAULT_IRQ_INDEX to
set the eventfd and expose the faults through an mmappable fault region

v1 -> v2:
- Added the fault reporting capability
- asid properly passed on invalidation (fix assignment of multiple
devices)
- see individual change logs for more info


Eric Auger (10):
vfio: VFIO_IOMMU_SET_MSI_BINDING
vfio/pci: Add VFIO_REGION_TYPE_NESTED region type
vfio/pci: Register an iommu fault handler
vfio/pci: Allow to mmap the fault queue
vfio/pci: Add framework for custom interrupt indices
vfio: Add new IRQ for DMA fault reporting
vfio/pci: Register and allow DMA FAULT IRQ signaling
vfio: Document nested stage control
vfio/pci: Register a DMA fault response region
vfio/pci: Inject page response upon response region fill

Liu, Yi L (2):
vfio: VFIO_IOMMU_SET_PASID_TABLE
vfio: VFIO_IOMMU_CACHE_INVALIDATE

Tina Zhang (1):
vfio: Use capability chains to handle device specific irq

Documentation/driver-api/vfio.rst | 77 +++++
drivers/vfio/pci/vfio_pci.c | 447 ++++++++++++++++++++++++++--
drivers/vfio/pci/vfio_pci_intrs.c | 62 ++++
drivers/vfio/pci/vfio_pci_private.h | 33 ++
drivers/vfio/pci/vfio_pci_rdwr.c | 84 ++++++
drivers/vfio/vfio_iommu_type1.c | 178 +++++++++++
include/uapi/linux/vfio.h | 141 ++++++++-
7 files changed, 1002 insertions(+), 20 deletions(-)

--
2.26.2


2021-02-23 21:23:08

by Eric Auger

Subject: [PATCH v12 11/13] vfio: Document nested stage control

The VFIO API was enhanced to support nested stage control: a bunch of
new ioctls, one DMA FAULT region and an associated specific IRQ.

Let's document the process to follow to set up nested mode.

Signed-off-by: Eric Auger <[email protected]>

---

v11 -> v12:
s/VFIO_REGION_INFO_CAP_PRODUCER_FAULT/VFIO_REGION_INFO_CAP_DMA_FAULT

v8 -> v9:
- new names for SET_MSI_BINDING and SET_PASID_TABLE
- new layout for the DMA FAULT memory region and specific IRQ

v2 -> v3:
- document the new fault API

v1 -> v2:
- use the new ioctl names
- add doc related to fault handling
---
Documentation/driver-api/vfio.rst | 77 +++++++++++++++++++++++++++++++
1 file changed, 77 insertions(+)

diff --git a/Documentation/driver-api/vfio.rst b/Documentation/driver-api/vfio.rst
index f1a4d3c3ba0b..14e41324237d 100644
--- a/Documentation/driver-api/vfio.rst
+++ b/Documentation/driver-api/vfio.rst
@@ -239,6 +239,83 @@ group and can access them as follows::
/* Gratuitous device reset and go... */
ioctl(device, VFIO_DEVICE_RESET);

+IOMMU Dual Stage Control
+------------------------
+
+Some IOMMUs support 2 stages/levels of translation. "Stage" corresponds to
+the ARM terminology while "level" corresponds to Intel's VT-d terminology.
+In the following text we use either term without distinction.
+
+This is useful when the guest is exposed to a virtual IOMMU and some
+devices are assigned to the guest through VFIO. Then the guest OS can use
+stage 1 (IOVA -> GPA), while the hypervisor uses stage 2 for VM isolation
+(GPA -> HPA).
+
+The guest gets ownership of the stage 1 page tables and also owns stage 1
+configuration structures. The hypervisor owns the root configuration structure
+(for security reasons), including the stage 2 configuration. This works as
+long as the configuration structures and page table formats are compatible
+between the virtual IOMMU and the physical IOMMU.
+
+Assuming the HW supports it, this nested mode is selected by choosing the
+VFIO_TYPE1_NESTING_IOMMU type through:
+
+ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_NESTING_IOMMU);
+
+This forces the hypervisor to use stage 2, leaving stage 1 available for
+guest usage.
+
+Once groups are attached to the container, the guest stage 1 translation
+configuration data can be passed to VFIO by using
+
+ioctl(container, VFIO_IOMMU_SET_PASID_TABLE, &pasid_table_info);
+
+This combines the guest stage 1 configuration structure with the hypervisor
+stage 2 configuration structure. Stage 1 configuration structures are
+dependent on the IOMMU type.
+
+As the stage 1 translation is fully delegated to the HW, translation faults
+encountered during the translation process need to be propagated up to
+the virtualizer and re-injected into the guest.
+
+Userspace must be prepared to receive faults. The VFIO-PCI device
+exposes one dedicated DMA FAULT region: it contains a ring buffer and
+a header that allows the head/tail indices to be managed. The region is
+identified by the following type/subtype:
+- VFIO_REGION_TYPE_NESTED/VFIO_REGION_SUBTYPE_NESTED_DMA_FAULT
+
+The DMA FAULT region exposes a VFIO_REGION_INFO_CAP_DMA_FAULT
+region capability that allows userspace to retrieve the ABI version
+of the fault records filled by the host.
+
+On top of that region, userspace can be notified whenever a fault
+occurs at the physical level. It can use the VFIO_IRQ_TYPE_NESTED/
+VFIO_IRQ_SUBTYPE_DMA_FAULT specific IRQ to attach the eventfd to be
+signalled.
+
+The ring buffer containing the fault records can be mmapped. When
+the userspace consumes a fault in the queue, it should increment
+the consumer index to allow new fault records to replace the used ones.
+
+The queue size and the entry size can be retrieved from the header.
+As in any other circular buffer scheme, the tail index must never
+overtake the producer index, and it must be less than the queue size,
+otherwise the update fails.
+
+When the guest invalidates stage 1 related caches, invalidations must be
+forwarded to the host through
+ioctl(container, VFIO_IOMMU_CACHE_INVALIDATE, &inv_data);
+Those invalidations can happen at various granularities: page, context, ...
+
+The ARM SMMU specification introduces another challenge: MSIs are translated by
+both the virtual SMMU and the physical SMMU. To build a nested mapping for the
+IOVA programmed into the assigned device, the guest needs to pass its IOVA/MSI
+doorbell GPA binding to the host. Then the hypervisor can build a nested stage 2
+binding eventually translating into the physical MSI doorbell.
+
+This is achieved by calling
+ioctl(container, VFIO_IOMMU_SET_MSI_BINDING, &guest_binding);
+
VFIO User API
-------------------------------------------------------------------------------

--
2.26.2
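
To make the fault handling flow above concrete, here is a minimal
consumer sketch. It assumes struct vfio_region_dma_fault from this
series' uapi, struct iommu_fault from the uapi <linux/iommu.h>, and a
region info previously retrieved with VFIO_DEVICE_GET_REGION_INFO; the
write of the tail index through the trapped header page is assumed to
be accepted at the tail field's offset, and error handling is elided:

    #include <stddef.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <linux/iommu.h>
    #include <linux/vfio.h>

    static void drain_faults(int device, struct vfio_region_info *info)
    {
            struct vfio_region_dma_fault header;
            struct iommu_fault *f;
            char *ring;

            /* the header lives in the first (trapped) page of the region */
            pread(device, &header, sizeof(header), info->offset);

            /* the ring itself is mmappable, starting at header.offset */
            ring = mmap(NULL, info->size - header.offset, PROT_READ,
                        MAP_SHARED, device, info->offset + header.offset);

            while (header.tail != header.head) {
                    f = (struct iommu_fault *)
                            (ring + header.tail * header.entry_size);
                    (void)f; /* decode and re-inject into the vIOMMU here */
                    header.tail = (header.tail + 1) % header.nb_entries;
            }
            /* publish the new consumer index back through the header */
            pwrite(device, &header.tail, sizeof(header.tail),
                   info->offset +
                   offsetof(struct vfio_region_dma_fault, tail));
    }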

2021-02-23 21:23:29

by Eric Auger

Subject: [PATCH v12 06/13] vfio/pci: Allow to mmap the fault queue

The DMA FAULT region contains the fault ring buffer.
There is benefit in letting userspace mmap this area.
Expose this mmappable area through a sparse mmap entry
and implement the mmap operation.

Signed-off-by: Eric Auger <[email protected]>

---

v8 -> v9:
- remove unused index local variable in vfio_pci_fault_mmap
---
drivers/vfio/pci/vfio_pci.c | 61 +++++++++++++++++++++++++++++++++++--
1 file changed, 58 insertions(+), 3 deletions(-)

diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
index ad3fe0ce2e64..a528b72b57d2 100644
--- a/drivers/vfio/pci/vfio_pci.c
+++ b/drivers/vfio/pci/vfio_pci.c
@@ -316,21 +316,75 @@ static void vfio_pci_dma_fault_release(struct vfio_pci_device *vdev,
kfree(vdev->fault_pages);
}

+static int vfio_pci_dma_fault_mmap(struct vfio_pci_device *vdev,
+ struct vfio_pci_region *region,
+ struct vm_area_struct *vma)
+{
+ u64 phys_len, req_len, pgoff, req_start;
+ unsigned long long addr;
+ int ret;
+
+ phys_len = region->size;
+
+ req_len = vma->vm_end - vma->vm_start;
+ pgoff = vma->vm_pgoff &
+ ((1U << (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT)) - 1);
+ req_start = pgoff << PAGE_SHIFT;
+
+ /* only the second page of the producer fault region is mmappable */
+ if (req_start < PAGE_SIZE)
+ return -EINVAL;
+
+ if (req_start + req_len > phys_len)
+ return -EINVAL;
+
+ addr = virt_to_phys(vdev->fault_pages);
+ vma->vm_private_data = vdev;
+ vma->vm_pgoff = (addr >> PAGE_SHIFT) + pgoff;
+
+ ret = remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
+ req_len, vma->vm_page_prot);
+ return ret;
+}
+
static int vfio_pci_dma_fault_add_capability(struct vfio_pci_device *vdev,
struct vfio_pci_region *region,
struct vfio_info_cap *caps)
{
+ struct vfio_region_info_cap_sparse_mmap *sparse = NULL;
struct vfio_region_info_cap_fault cap = {
.header.id = VFIO_REGION_INFO_CAP_DMA_FAULT,
.header.version = 1,
.version = 1,
};
- return vfio_info_add_capability(caps, &cap.header, sizeof(cap));
+ size_t size = sizeof(*sparse) + sizeof(*sparse->areas);
+ int ret;
+
+ ret = vfio_info_add_capability(caps, &cap.header, sizeof(cap));
+ if (ret)
+ return ret;
+
+ sparse = kzalloc(size, GFP_KERNEL);
+ if (!sparse)
+ return -ENOMEM;
+
+ sparse->header.id = VFIO_REGION_INFO_CAP_SPARSE_MMAP;
+ sparse->header.version = 1;
+ sparse->nr_areas = 1;
+ sparse->areas[0].offset = PAGE_SIZE;
+ sparse->areas[0].size = region->size - PAGE_SIZE;
+
+ ret = vfio_info_add_capability(caps, &sparse->header, size);
+ if (ret)
+ kfree(sparse);
+
+ return ret;
}

static const struct vfio_pci_regops vfio_pci_dma_fault_regops = {
.rw = vfio_pci_dma_fault_rw,
.release = vfio_pci_dma_fault_release,
+ .mmap = vfio_pci_dma_fault_mmap,
.add_capability = vfio_pci_dma_fault_add_capability,
};

@@ -404,7 +458,8 @@ static int vfio_pci_dma_fault_init(struct vfio_pci_device *vdev)
VFIO_REGION_TYPE_NESTED,
VFIO_REGION_SUBTYPE_NESTED_DMA_FAULT,
&vfio_pci_dma_fault_regops, size,
- VFIO_REGION_INFO_FLAG_READ | VFIO_REGION_INFO_FLAG_WRITE,
+ VFIO_REGION_INFO_FLAG_READ | VFIO_REGION_INFO_FLAG_WRITE |
+ VFIO_REGION_INFO_FLAG_MMAP,
vdev->fault_pages);
if (ret)
goto out;
@@ -412,7 +467,7 @@ static int vfio_pci_dma_fault_init(struct vfio_pci_device *vdev)
header = (struct vfio_region_dma_fault *)vdev->fault_pages;
header->entry_size = sizeof(struct iommu_fault);
header->nb_entries = DMA_FAULT_RING_LENGTH;
- header->offset = sizeof(struct vfio_region_dma_fault);
+ header->offset = PAGE_SIZE;

ret = iommu_register_device_fault_handler(&vdev->pdev->dev,
vfio_pci_iommu_dev_fault_handler,
--
2.26.2
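
For completeness, a sketch of how userspace can identify this region
and its mmappable area by walking the region info capability chain
(standard VFIO uapi; the caller must have re-read the info with argsz
large enough to include the capabilities; error handling elided):

    #include <linux/vfio.h>

    static struct vfio_info_cap_header *
    region_cap_find(struct vfio_region_info *info, __u16 id)
    {
            __u32 off = info->cap_offset;

            while (off) {
                    struct vfio_info_cap_header *hdr =
                            (struct vfio_info_cap_header *)((char *)info + off);

                    if (hdr->id == id)
                            return hdr;
                    off = hdr->next;
            }
            return NULL;
    }

    static int is_dma_fault_region(struct vfio_region_info *info)
    {
            struct vfio_region_info_cap_type *t;

            if (!(info->flags & VFIO_REGION_INFO_FLAG_CAPS))
                    return 0;
            t = (struct vfio_region_info_cap_type *)
                    region_cap_find(info, VFIO_REGION_INFO_CAP_TYPE);
            return t && t->type == VFIO_REGION_TYPE_NESTED &&
                   t->subtype == VFIO_REGION_SUBTYPE_NESTED_DMA_FAULT;
    }

The mmappable area (the ring, from the second page onward) can then be
read from the VFIO_REGION_INFO_CAP_SPARSE_MMAP capability in the same
chain.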

2021-02-23 21:23:30

by Eric Auger

Subject: [PATCH v12 05/13] vfio/pci: Register an iommu fault handler

Register an IOMMU fault handler which records faults in
the DMA FAULT region ring buffer. In a subsequent patch, we
will add the signaling of a specific eventfd to allow userspace
to be notified whenever a new fault shows up.

Signed-off-by: Eric Auger <[email protected]>

---
v11 -> v12:
- take the fault_queue_lock before reading header (Zenghui)
- also record recoverable errors
- only WARN_ON if the unregistration returns -EBUSY
- make vfio_pci_iommu_dev_fault_handler static

v10 -> v11:
- move iommu_unregister_device_fault_handler into
vfio_pci_disable
- check fault_pages != 0

v8 -> v9:
- handler now takes an iommu_fault handle
- eventfd signaling moved to a subsequent patch
- check the fault type and return an error if != UNRECOV
- still the fault handler registration can fail. We need to
reach an agreement about how to deal with the situation

v3 -> v4:
- move iommu_unregister_device_fault_handler to vfio_pci_release
---
drivers/vfio/pci/vfio_pci.c | 48 ++++++++++++++++++++++++++++++++++++-
1 file changed, 47 insertions(+), 1 deletion(-)

diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
index e9a4a1c502c7..ad3fe0ce2e64 100644
--- a/drivers/vfio/pci/vfio_pci.c
+++ b/drivers/vfio/pci/vfio_pci.c
@@ -27,6 +27,7 @@
#include <linux/vgaarb.h>
#include <linux/nospec.h>
#include <linux/sched/mm.h>
+#include <linux/circ_buf.h>

#include "vfio_pci_private.h"

@@ -333,6 +334,41 @@ static const struct vfio_pci_regops vfio_pci_dma_fault_regops = {
.add_capability = vfio_pci_dma_fault_add_capability,
};

+static int
+vfio_pci_iommu_dev_fault_handler(struct iommu_fault *fault, void *data)
+{
+ struct vfio_pci_device *vdev = (struct vfio_pci_device *)data;
+ struct vfio_region_dma_fault *reg =
+ (struct vfio_region_dma_fault *)vdev->fault_pages;
+ struct iommu_fault *new;
+ u32 head, tail, size;
+ int ret = -EINVAL;
+
+ if (WARN_ON(!reg))
+ return ret;
+
+ mutex_lock(&vdev->fault_queue_lock);
+
+ head = reg->head;
+ tail = reg->tail;
+ size = reg->nb_entries;
+
+ new = (struct iommu_fault *)(vdev->fault_pages + reg->offset +
+ head * reg->entry_size);
+
+ if (CIRC_SPACE(head, tail, size) < 1) {
+ ret = -ENOSPC;
+ goto unlock;
+ }
+
+ *new = *fault;
+ reg->head = (head + 1) % size;
+ ret = 0;
+unlock:
+ mutex_unlock(&vdev->fault_queue_lock);
+ return ret;
+}
+
#define DMA_FAULT_RING_LENGTH 512

static int vfio_pci_dma_fault_init(struct vfio_pci_device *vdev)
@@ -377,6 +413,13 @@ static int vfio_pci_dma_fault_init(struct vfio_pci_device *vdev)
header->entry_size = sizeof(struct iommu_fault);
header->nb_entries = DMA_FAULT_RING_LENGTH;
header->offset = sizeof(struct vfio_region_dma_fault);
+
+ ret = iommu_register_device_fault_handler(&vdev->pdev->dev,
+ vfio_pci_iommu_dev_fault_handler,
+ vdev);
+ if (ret) /* the dma fault region is freed in vfio_pci_disable() */
+ goto out;
+
return 0;
out:
kfree(vdev->fault_pages);
@@ -500,7 +543,7 @@ static void vfio_pci_disable(struct vfio_pci_device *vdev)
struct pci_dev *pdev = vdev->pdev;
struct vfio_pci_dummy_resource *dummy_res, *tmp;
struct vfio_pci_ioeventfd *ioeventfd, *ioeventfd_tmp;
- int i, bar;
+ int i, bar, ret;

/* Stop the device from further DMA */
pci_clear_master(pdev);
@@ -509,6 +552,9 @@ static void vfio_pci_disable(struct vfio_pci_device *vdev)
VFIO_IRQ_SET_ACTION_TRIGGER,
vdev->irq_type, 0, 0, NULL);

+ ret = iommu_unregister_device_fault_handler(&vdev->pdev->dev);
+ WARN_ON(ret == -EBUSY);
+
/* Device closed, don't need mutex here */
list_for_each_entry_safe(ioeventfd, ioeventfd_tmp,
&vdev->ioeventfds_list, next) {
--
2.26.2
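
For reference, the space check in the handler above relies on the
circular buffer helpers from include/linux/circ_buf.h. A sketch of
their definitions follows (check the header for the authoritative
versions; they assume a power-of-2 ring size, which
DMA_FAULT_RING_LENGTH = 512 satisfies):

    /* one slot is always kept free, so head == tail means "empty" */
    #define CIRC_CNT(head, tail, size)   (((head) - (tail)) & ((size) - 1))
    #define CIRC_SPACE(head, tail, size) CIRC_CNT((tail), ((head) + 1), (size))

With these semantics the handler refuses to overwrite unconsumed
records and returns -ENOSPC when the ring is full.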

2021-02-23 21:24:38

by Eric Auger

Subject: [PATCH v12 12/13] vfio/pci: Register a DMA fault response region

In preparation for vSVA, let's register a DMA fault response region,
where userspace will push the page responses and increment the
head of the buffer. The kernel will pop those responses and inject them
on the IOMMU side.

Signed-off-by: Eric Auger <[email protected]>

---

v11 -> v12:
- use DMA_FAULT_RESPONSE cap [Shameer]
- struct vfio_pci_device dma_fault_response_wq field introduced in
this patch
- return 0 if the domain is NULL
- pass an int pointer to iommu_domain_get_attr
---
drivers/vfio/pci/vfio_pci.c | 125 ++++++++++++++++++++++++++--
drivers/vfio/pci/vfio_pci_private.h | 6 ++
drivers/vfio/pci/vfio_pci_rdwr.c | 39 +++++++++
include/uapi/linux/vfio.h | 32 +++++++
4 files changed, 193 insertions(+), 9 deletions(-)

diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
index f3fa6d4318ae..9f1f5008e556 100644
--- a/drivers/vfio/pci/vfio_pci.c
+++ b/drivers/vfio/pci/vfio_pci.c
@@ -316,9 +316,20 @@ static void vfio_pci_dma_fault_release(struct vfio_pci_device *vdev,
kfree(vdev->fault_pages);
}

-static int vfio_pci_dma_fault_mmap(struct vfio_pci_device *vdev,
- struct vfio_pci_region *region,
- struct vm_area_struct *vma)
+static void
+vfio_pci_dma_fault_response_release(struct vfio_pci_device *vdev,
+ struct vfio_pci_region *region)
+{
+ if (vdev->dma_fault_response_wq)
+ destroy_workqueue(vdev->dma_fault_response_wq);
+ kfree(vdev->fault_response_pages);
+ vdev->fault_response_pages = NULL;
+}
+
+static int __vfio_pci_dma_fault_mmap(struct vfio_pci_device *vdev,
+ struct vfio_pci_region *region,
+ struct vm_area_struct *vma,
+ u8 *pages)
{
u64 phys_len, req_len, pgoff, req_start;
unsigned long long addr;
@@ -331,14 +342,14 @@ static int vfio_pci_dma_fault_mmap(struct vfio_pci_device *vdev,
((1U << (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT)) - 1);
req_start = pgoff << PAGE_SHIFT;

- /* only the second page of the producer fault region is mmappable */
+ /* only the second page of the fault region is mmappable */
if (req_start < PAGE_SIZE)
return -EINVAL;

if (req_start + req_len > phys_len)
return -EINVAL;

- addr = virt_to_phys(vdev->fault_pages);
+ addr = virt_to_phys(pages);
vma->vm_private_data = vdev;
vma->vm_pgoff = (addr >> PAGE_SHIFT) + pgoff;

@@ -347,13 +358,29 @@ static int vfio_pci_dma_fault_mmap(struct vfio_pci_device *vdev,
return ret;
}

-static int vfio_pci_dma_fault_add_capability(struct vfio_pci_device *vdev,
- struct vfio_pci_region *region,
- struct vfio_info_cap *caps)
+static int vfio_pci_dma_fault_mmap(struct vfio_pci_device *vdev,
+ struct vfio_pci_region *region,
+ struct vm_area_struct *vma)
+{
+ return __vfio_pci_dma_fault_mmap(vdev, region, vma, vdev->fault_pages);
+}
+
+static int
+vfio_pci_dma_fault_response_mmap(struct vfio_pci_device *vdev,
+ struct vfio_pci_region *region,
+ struct vm_area_struct *vma)
+{
+ return __vfio_pci_dma_fault_mmap(vdev, region, vma, vdev->fault_response_pages);
+}
+
+static int __vfio_pci_dma_fault_add_capability(struct vfio_pci_device *vdev,
+ struct vfio_pci_region *region,
+ struct vfio_info_cap *caps,
+ u32 cap_id)
{
struct vfio_region_info_cap_sparse_mmap *sparse = NULL;
struct vfio_region_info_cap_fault cap = {
- .header.id = VFIO_REGION_INFO_CAP_DMA_FAULT,
+ .header.id = cap_id,
.header.version = 1,
.version = 1,
};
@@ -381,6 +408,23 @@ static int vfio_pci_dma_fault_add_capability(struct vfio_pci_device *vdev,
return ret;
}

+static int vfio_pci_dma_fault_add_capability(struct vfio_pci_device *vdev,
+ struct vfio_pci_region *region,
+ struct vfio_info_cap *caps)
+{
+ return __vfio_pci_dma_fault_add_capability(vdev, region, caps,
+ VFIO_REGION_INFO_CAP_DMA_FAULT);
+}
+
+static int
+vfio_pci_dma_fault_response_add_capability(struct vfio_pci_device *vdev,
+ struct vfio_pci_region *region,
+ struct vfio_info_cap *caps)
+{
+ return __vfio_pci_dma_fault_add_capability(vdev, region, caps,
+ VFIO_REGION_INFO_CAP_DMA_FAULT_RESPONSE);
+}
+
static const struct vfio_pci_regops vfio_pci_dma_fault_regops = {
.rw = vfio_pci_dma_fault_rw,
.release = vfio_pci_dma_fault_release,
@@ -388,6 +432,13 @@ static const struct vfio_pci_regops vfio_pci_dma_fault_regops = {
.add_capability = vfio_pci_dma_fault_add_capability,
};

+static const struct vfio_pci_regops vfio_pci_dma_fault_response_regops = {
+ .rw = vfio_pci_dma_fault_response_rw,
+ .release = vfio_pci_dma_fault_response_release,
+ .mmap = vfio_pci_dma_fault_response_mmap,
+ .add_capability = vfio_pci_dma_fault_response_add_capability,
+};
+
static int
vfio_pci_iommu_dev_fault_handler(struct iommu_fault *fault, void *data)
{
@@ -501,6 +552,57 @@ static int vfio_pci_dma_fault_init(struct vfio_pci_device *vdev)
return ret;
}

+#define DMA_FAULT_RESPONSE_RING_LENGTH 512
+
+static int vfio_pci_dma_fault_response_init(struct vfio_pci_device *vdev)
+{
+ struct vfio_region_dma_fault_response *header;
+ struct iommu_domain *domain;
+ int nested, ret;
+ size_t size;
+
+ domain = iommu_get_domain_for_dev(&vdev->pdev->dev);
+ if (!domain)
+ return 0;
+
+ ret = iommu_domain_get_attr(domain, DOMAIN_ATTR_NESTING, &nested);
+ if (ret || !nested)
+ return ret;
+
+ mutex_init(&vdev->fault_response_queue_lock);
+
+ /*
+ * We provision 1 page for the header and space for
+ * DMA_FAULT_RESPONSE_RING_LENGTH response records in the ring buffer.
+ */
+ size = ALIGN(sizeof(struct iommu_page_response) *
+ DMA_FAULT_RESPONSE_RING_LENGTH, PAGE_SIZE) + PAGE_SIZE;
+
+ vdev->fault_response_pages = kzalloc(size, GFP_KERNEL);
+ if (!vdev->fault_response_pages)
+ return -ENOMEM;
+
+ ret = vfio_pci_register_dev_region(vdev,
+ VFIO_REGION_TYPE_NESTED,
+ VFIO_REGION_SUBTYPE_NESTED_DMA_FAULT_RESPONSE,
+ &vfio_pci_dma_fault_response_regops, size,
+ VFIO_REGION_INFO_FLAG_READ | VFIO_REGION_INFO_FLAG_WRITE |
+ VFIO_REGION_INFO_FLAG_MMAP,
+ vdev->fault_response_pages);
+ if (ret)
+ goto out;
+
+ header = (struct vfio_region_dma_fault_response *)vdev->fault_response_pages;
+ header->entry_size = sizeof(struct iommu_page_response);
+ header->nb_entries = DMA_FAULT_RESPONSE_RING_LENGTH;
+ header->offset = PAGE_SIZE;
+
+ return 0;
+out:
+ kfree(vdev->fault_response_pages);
+ vdev->fault_response_pages = NULL;
+ return ret;
+}
+
static int vfio_pci_enable(struct vfio_pci_device *vdev)
{
struct pci_dev *pdev = vdev->pdev;
@@ -603,6 +705,10 @@ static int vfio_pci_enable(struct vfio_pci_device *vdev)
if (ret)
goto disable_exit;

+ ret = vfio_pci_dma_fault_response_init(vdev);
+ if (ret)
+ goto disable_exit;
+
vfio_pci_probe_mmaps(vdev);

return 0;
@@ -2230,6 +2336,7 @@ static int vfio_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
INIT_LIST_HEAD(&vdev->ioeventfds_list);
mutex_init(&vdev->vma_lock);
INIT_LIST_HEAD(&vdev->vma_list);
+ INIT_LIST_HEAD(&vdev->dummy_resources_list);
init_rwsem(&vdev->memory_lock);

ret = vfio_add_group_dev(&pdev->dev, &vfio_pci_ops, vdev);
diff --git a/drivers/vfio/pci/vfio_pci_private.h b/drivers/vfio/pci/vfio_pci_private.h
index e180b5435c8f..82a883c101c9 100644
--- a/drivers/vfio/pci/vfio_pci_private.h
+++ b/drivers/vfio/pci/vfio_pci_private.h
@@ -144,7 +144,10 @@ struct vfio_pci_device {
struct eventfd_ctx *err_trigger;
struct eventfd_ctx *req_trigger;
u8 *fault_pages;
+ u8 *fault_response_pages;
+ struct workqueue_struct *dma_fault_response_wq;
struct mutex fault_queue_lock;
+ struct mutex fault_response_queue_lock;
struct list_head dummy_resources_list;
struct mutex ioeventfds_lock;
struct list_head ioeventfds_list;
@@ -189,6 +192,9 @@ extern long vfio_pci_ioeventfd(struct vfio_pci_device *vdev, loff_t offset,
extern size_t vfio_pci_dma_fault_rw(struct vfio_pci_device *vdev,
char __user *buf, size_t count,
loff_t *ppos, bool iswrite);
+extern size_t vfio_pci_dma_fault_response_rw(struct vfio_pci_device *vdev,
+ char __user *buf, size_t count,
+ loff_t *ppos, bool iswrite);

extern int vfio_pci_init_perm_bits(void);
extern void vfio_pci_uninit_perm_bits(void);
diff --git a/drivers/vfio/pci/vfio_pci_rdwr.c b/drivers/vfio/pci/vfio_pci_rdwr.c
index 164120607469..efde0793360b 100644
--- a/drivers/vfio/pci/vfio_pci_rdwr.c
+++ b/drivers/vfio/pci/vfio_pci_rdwr.c
@@ -400,6 +400,45 @@ size_t vfio_pci_dma_fault_rw(struct vfio_pci_device *vdev, char __user *buf,
return ret;
}

+size_t vfio_pci_dma_fault_response_rw(struct vfio_pci_device *vdev, char __user *buf,
+ size_t count, loff_t *ppos, bool iswrite)
+{
+ unsigned int i = VFIO_PCI_OFFSET_TO_INDEX(*ppos) - VFIO_PCI_NUM_REGIONS;
+ loff_t pos = *ppos & VFIO_PCI_OFFSET_MASK;
+ void *base = vdev->region[i].data;
+ int ret = -EFAULT;
+
+ if (pos >= vdev->region[i].size)
+ return -EINVAL;
+
+ count = min(count, (size_t)(vdev->region[i].size - pos));
+
+ if (iswrite) {
+ struct vfio_region_dma_fault_response *header =
+ (struct vfio_region_dma_fault_response *)base;
+ uint32_t new_head;
+
+ if (pos != 0 || count != 4)
+ return -EINVAL;
+
+ if (copy_from_user((void *)&new_head, buf, count))
+ return -EFAULT;
+
+ if (new_head >= header->nb_entries)
+ return -EINVAL;
+
+ mutex_lock(&vdev->fault_response_queue_lock);
+ header->head = new_head;
+ mutex_unlock(&vdev->fault_response_queue_lock);
+ } else {
+ if (copy_to_user(buf, base + pos, count))
+ return -EFAULT;
+ }
+ *ppos += count;
+ ret = count;
+ return ret;
+}
+
static void vfio_pci_ioeventfd_do_write(struct vfio_pci_ioeventfd *ioeventfd,
bool test_mem)
{
diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index f0eea0f9305a..30dfaa473495 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -346,6 +346,7 @@ struct vfio_region_info_cap_type {

/* sub-types for VFIO_REGION_TYPE_NESTED */
#define VFIO_REGION_SUBTYPE_NESTED_DMA_FAULT (1)
+#define VFIO_REGION_SUBTYPE_NESTED_DMA_FAULT_RESPONSE (2)

/**
* struct vfio_region_gfx_edid - EDID region layout.
@@ -1024,6 +1025,17 @@ struct vfio_region_info_cap_fault {
__u32 version;
};

+/*
+ * Capability exposed by the DMA fault response region
+ * @version: ABI version
+ */
+#define VFIO_REGION_INFO_CAP_DMA_FAULT_RESPONSE 7
+
+struct vfio_region_info_cap_fault_response {
+ struct vfio_info_cap_header header;
+ __u32 version;
+};
+
/*
* DMA Fault Region Layout
* @tail: index relative to the start of the ring buffer at which the
@@ -1044,6 +1056,26 @@ struct vfio_region_dma_fault {
__u32 head;
};

+/*
+ * DMA Fault Response Region Layout
+ * @head: index relative to the start of the ring buffer at which the
+ * producer (userspace) inserts responses into the buffer
+ * @entry_size: response ring buffer entry size in bytes
+ * @nb_entries: max capacity of the response ring buffer
+ * @offset: ring buffer offset relative to the start of the region
+ * @tail: index relative to the start of the ring buffer at which the
+ * consumer (kernel) finds the next item in the buffer
+ */
+struct vfio_region_dma_fault_response {
+ /* Write-Only */
+ __u32 head;
+ /* Read-Only */
+ __u32 entry_size;
+ __u32 nb_entries;
+ __u32 offset;
+ __u32 tail;
+};
+
/* -------- API for Type1 VFIO IOMMU -------- */

/**
--
2.26.2
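
A minimal sketch of the userspace producer side, assuming the response
ring has been mmapped writable at header->offset (as in the fault
region case) and that struct iommu_page_response comes from the uapi
<linux/iommu.h>; the kernel side above only accepts a 4-byte write of
the head index at offset 0 of the region:

    #include <string.h>
    #include <unistd.h>
    #include <linux/iommu.h>
    #include <linux/vfio.h>

    static int push_response(int device, struct vfio_region_info *info,
                             char *ring,
                             struct vfio_region_dma_fault_response *h,
                             struct iommu_page_response *resp)
    {
            __u32 new_head = (h->head + 1) % h->nb_entries;

            if (new_head == h->tail)
                    return -1; /* ring full: re-read tail from the header */

            memcpy(ring + h->head * h->entry_size, resp, sizeof(*resp));
            /* publish the new producer index; the kernel pops the
             * response and injects it on the IOMMU side */
            if (pwrite(device, &new_head, 4, info->offset) != 4)
                    return -1;
            h->head = new_head;
            return 0;
    }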

2021-02-23 21:24:51

by Eric Auger

Subject: [PATCH v12 07/13] vfio: Use capability chains to handle device specific irq

From: Tina Zhang <[email protected]>

Caps the number of IRQs with fixed indexes and uses capability chains
to describe device specific IRQs.

Signed-off-by: Tina Zhang <[email protected]>
Signed-off-by: Eric Auger <[email protected]>
[Eric: Put cap_offset at the end of the vfio_irq_info struct,
remove the GFX IRQ for the moment and remove any reference to it
in the commit message]

---
---
include/uapi/linux/vfio.h | 19 ++++++++++++++++++-
1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index bc46e5d6daa4..3688215843fe 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -702,11 +702,27 @@ struct vfio_irq_info {
#define VFIO_IRQ_INFO_MASKABLE (1 << 1)
#define VFIO_IRQ_INFO_AUTOMASKED (1 << 2)
#define VFIO_IRQ_INFO_NORESIZE (1 << 3)
+#define VFIO_IRQ_INFO_FLAG_CAPS (1 << 4) /* Info supports caps */
__u32 index; /* IRQ index */
__u32 count; /* Number of IRQs within this index */
+ __u32 cap_offset; /* Offset within info struct of first cap */
};
#define VFIO_DEVICE_GET_IRQ_INFO _IO(VFIO_TYPE, VFIO_BASE + 9)

+/*
+ * The irq type capability allows IRQs unique to a specific device or
+ * class of devices to be exposed.
+ *
+ * The structures below define version 1 of this capability.
+ */
+#define VFIO_IRQ_INFO_CAP_TYPE 3
+
+struct vfio_irq_info_cap_type {
+ struct vfio_info_cap_header header;
+ __u32 type; /* global per bus driver */
+ __u32 subtype; /* type specific */
+};
+
/**
* VFIO_DEVICE_SET_IRQS - _IOW(VFIO_TYPE, VFIO_BASE + 10, struct vfio_irq_set)
*
@@ -808,7 +824,8 @@ enum {
VFIO_PCI_MSIX_IRQ_INDEX,
VFIO_PCI_ERR_IRQ_INDEX,
VFIO_PCI_REQ_IRQ_INDEX,
- VFIO_PCI_NUM_IRQS
+ VFIO_PCI_NUM_IRQS = 5 /* Fixed user ABI, IRQ indexes >=5 use */
+ /* device specific cap to define content */
};

/*
--
2.26.2
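
A sketch of how userspace can locate such a device specific IRQ,
pairing this capability with the extended indices added in patch
08/13; num_irqs comes from VFIO_DEVICE_GET_INFO and error handling is
elided:

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    static int find_ext_irq(int device, __u32 num_irqs,
                            __u32 type, __u32 subtype)
    {
            char buf[256]; /* room for the info struct plus its caps */
            struct vfio_irq_info *info = (struct vfio_irq_info *)buf;
            __u32 i, off;

            for (i = VFIO_PCI_NUM_IRQS; i < num_irqs; i++) {
                    memset(buf, 0, sizeof(buf));
                    info->argsz = sizeof(buf);
                    info->index = i;
                    if (ioctl(device, VFIO_DEVICE_GET_IRQ_INFO, info) ||
                        !(info->flags & VFIO_IRQ_INFO_FLAG_CAPS))
                            continue;
                    for (off = info->cap_offset; off;) {
                            struct vfio_info_cap_header *hdr =
                                    (struct vfio_info_cap_header *)(buf + off);

                            if (hdr->id == VFIO_IRQ_INFO_CAP_TYPE) {
                                    struct vfio_irq_info_cap_type *t =
                                            (struct vfio_irq_info_cap_type *)hdr;

                                    if (t->type == type &&
                                        t->subtype == subtype)
                                            return i;
                            }
                            off = hdr->next;
                    }
            }
            return -1;
    }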

2021-02-23 21:24:51

by Eric Auger

Subject: [PATCH v12 10/13] vfio/pci: Register and allow DMA FAULT IRQ signaling

Register the VFIO_IRQ_TYPE_NESTED/VFIO_IRQ_SUBTYPE_DMA_FAULT
IRQ, which is used to signal a nested mode DMA fault.

Signed-off-by: Eric Auger <[email protected]>

---

v10 -> v11:
- the irq now is registered in vfio_pci_dma_fault_init()
in case the domain is nested
---
drivers/vfio/pci/vfio_pci.c | 21 ++++++++++++++++++++-
1 file changed, 20 insertions(+), 1 deletion(-)

diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
index e6b94bad77da..f3fa6d4318ae 100644
--- a/drivers/vfio/pci/vfio_pci.c
+++ b/drivers/vfio/pci/vfio_pci.c
@@ -396,6 +396,7 @@ vfio_pci_iommu_dev_fault_handler(struct iommu_fault *fault, void *data)
(struct vfio_region_dma_fault *)vdev->fault_pages;
struct iommu_fault *new;
u32 head, tail, size;
+ int ext_irq_index;
int ret = -EINVAL;

if (WARN_ON(!reg))
@@ -420,7 +421,19 @@ vfio_pci_iommu_dev_fault_handler(struct iommu_fault *fault, void *data)
ret = 0;
unlock:
mutex_unlock(&vdev->fault_queue_lock);
- return ret;
+ if (ret)
+ return ret;
+
+ ext_irq_index = vfio_pci_get_ext_irq_index(vdev, VFIO_IRQ_TYPE_NESTED,
+ VFIO_IRQ_SUBTYPE_DMA_FAULT);
+ if (ext_irq_index < 0)
+ return -EINVAL;
+
+ mutex_lock(&vdev->igate);
+ if (vdev->ext_irqs[ext_irq_index].trigger)
+ eventfd_signal(vdev->ext_irqs[ext_irq_index].trigger, 1);
+ mutex_unlock(&vdev->igate);
+ return 0;
}

#define DMA_FAULT_RING_LENGTH 512
@@ -475,6 +488,12 @@ static int vfio_pci_dma_fault_init(struct vfio_pci_device *vdev)
if (ret) /* the dma fault region is freed in vfio_pci_disable() */
goto out;

+ ret = vfio_pci_register_irq(vdev, VFIO_IRQ_TYPE_NESTED,
+ VFIO_IRQ_SUBTYPE_DMA_FAULT,
+ VFIO_IRQ_INFO_EVENTFD);
+ if (ret) /* the fault handler is also freed in vfio_pci_disable() */
+ goto out;
+
return 0;
out:
kfree(vdev->fault_pages);
--
2.26.2
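
Tying the pieces together, a sketch of the userspace wait loop once
the eventfd has been attached (see the earlier VFIO_DEVICE_SET_IRQS
sketch); reading the eventfd clears its counter:

    #include <poll.h>
    #include <stdint.h>
    #include <unistd.h>

    static void fault_wait_loop(int efd)
    {
            struct pollfd pfd = { .fd = efd, .events = POLLIN };
            uint64_t cnt;

            for (;;) {
                    poll(&pfd, 1, -1);
                    read(efd, &cnt, sizeof(cnt));
                    /* drain the fault ring here, as in the consumer
                     * sketch after patch 11/13 */
            }
    }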

2021-02-24 00:19:56

by Eric Auger

Subject: [PATCH v12 08/13] vfio/pci: Add framework for custom interrupt indices

Implement IRQ capability chain infrastructure. All interrupt
indexes beyond VFIO_PCI_NUM_IRQS are handled as extended
interrupts. They are registered with a specific type/subtype
and supported flags.

Signed-off-by: Eric Auger <[email protected]>

---

v11 -> v12:
- check !vdev->num_ext_irqs in vfio_pci_set_ext_irq_trigger()
[Shameer, Qubingbing]
---
drivers/vfio/pci/vfio_pci.c | 99 +++++++++++++++++++++++------
drivers/vfio/pci/vfio_pci_intrs.c | 62 ++++++++++++++++++
drivers/vfio/pci/vfio_pci_private.h | 14 ++++
3 files changed, 157 insertions(+), 18 deletions(-)

diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
index a528b72b57d2..e6b94bad77da 100644
--- a/drivers/vfio/pci/vfio_pci.c
+++ b/drivers/vfio/pci/vfio_pci.c
@@ -610,6 +610,14 @@ static void vfio_pci_disable(struct vfio_pci_device *vdev)
ret = iommu_unregister_device_fault_handler(&vdev->pdev->dev);
WARN_ON(ret == -EBUSY);

+ for (i = 0; i < vdev->num_ext_irqs; i++)
+ vfio_pci_set_irqs_ioctl(vdev, VFIO_IRQ_SET_DATA_NONE |
+ VFIO_IRQ_SET_ACTION_TRIGGER,
+ VFIO_PCI_NUM_IRQS + i, 0, 0, NULL);
+ vdev->num_ext_irqs = 0;
+ kfree(vdev->ext_irqs);
+ vdev->ext_irqs = NULL;
+
/* Device closed, don't need mutex here */
list_for_each_entry_safe(ioeventfd, ioeventfd_tmp,
&vdev->ioeventfds_list, next) {
@@ -825,6 +833,9 @@ static int vfio_pci_get_irq_count(struct vfio_pci_device *vdev, int irq_type)
return 1;
} else if (irq_type == VFIO_PCI_REQ_IRQ_INDEX) {
return 1;
+ } else if (irq_type >= VFIO_PCI_NUM_IRQS &&
+ irq_type < VFIO_PCI_NUM_IRQS + vdev->num_ext_irqs) {
+ return 1;
}

return 0;
@@ -1010,7 +1021,7 @@ static long vfio_pci_ioctl(void *device_data,
info.flags |= VFIO_DEVICE_FLAGS_RESET;

info.num_regions = VFIO_PCI_NUM_REGIONS + vdev->num_regions;
- info.num_irqs = VFIO_PCI_NUM_IRQS;
+ info.num_irqs = VFIO_PCI_NUM_IRQS + vdev->num_ext_irqs;

if (IS_ENABLED(CONFIG_VFIO_PCI_ZDEV)) {
int ret = vfio_pci_info_zdev_add_caps(vdev, &caps);
@@ -1189,36 +1200,87 @@ static long vfio_pci_ioctl(void *device_data,

} else if (cmd == VFIO_DEVICE_GET_IRQ_INFO) {
struct vfio_irq_info info;
+ struct vfio_info_cap caps = { .buf = NULL, .size = 0 };
+ unsigned long capsz;

minsz = offsetofend(struct vfio_irq_info, count);

+ /* For backward compatibility, cannot require this */
+ capsz = offsetofend(struct vfio_irq_info, cap_offset);
+
if (copy_from_user(&info, (void __user *)arg, minsz))
return -EFAULT;

- if (info.argsz < minsz || info.index >= VFIO_PCI_NUM_IRQS)
+ if (info.argsz < minsz ||
+ info.index >= VFIO_PCI_NUM_IRQS + vdev->num_ext_irqs)
return -EINVAL;

- switch (info.index) {
- case VFIO_PCI_INTX_IRQ_INDEX ... VFIO_PCI_MSIX_IRQ_INDEX:
- case VFIO_PCI_REQ_IRQ_INDEX:
- break;
- case VFIO_PCI_ERR_IRQ_INDEX:
- if (pci_is_pcie(vdev->pdev))
- break;
- fallthrough;
- default:
- return -EINVAL;
- }
+ if (info.argsz >= capsz)
+ minsz = capsz;

info.flags = VFIO_IRQ_INFO_EVENTFD;

- info.count = vfio_pci_get_irq_count(vdev, info.index);
-
- if (info.index == VFIO_PCI_INTX_IRQ_INDEX)
+ switch (info.index) {
+ case VFIO_PCI_INTX_IRQ_INDEX:
info.flags |= (VFIO_IRQ_INFO_MASKABLE |
VFIO_IRQ_INFO_AUTOMASKED);
- else
+ break;
+ case VFIO_PCI_MSI_IRQ_INDEX ... VFIO_PCI_MSIX_IRQ_INDEX:
+ case VFIO_PCI_REQ_IRQ_INDEX:
info.flags |= VFIO_IRQ_INFO_NORESIZE;
+ break;
+ case VFIO_PCI_ERR_IRQ_INDEX:
+ info.flags |= VFIO_IRQ_INFO_NORESIZE;
+ if (!pci_is_pcie(vdev->pdev))
+ return -EINVAL;
+ break;
+ default:
+ {
+ struct vfio_irq_info_cap_type cap_type = {
+ .header.id = VFIO_IRQ_INFO_CAP_TYPE,
+ .header.version = 1 };
+ int ret, i;
+
+ if (info.index >= VFIO_PCI_NUM_IRQS +
+ vdev->num_ext_irqs)
+ return -EINVAL;
+ info.index = array_index_nospec(info.index,
+ VFIO_PCI_NUM_IRQS +
+ vdev->num_ext_irqs);
+ i = info.index - VFIO_PCI_NUM_IRQS;
+
+ info.flags = vdev->ext_irqs[i].flags;
+ cap_type.type = vdev->ext_irqs[i].type;
+ cap_type.subtype = vdev->ext_irqs[i].subtype;
+
+ ret = vfio_info_add_capability(&caps,
+ &cap_type.header,
+ sizeof(cap_type));
+ if (ret)
+ return ret;
+ }
+ }
+
+ info.count = vfio_pci_get_irq_count(vdev, info.index);
+
+ if (caps.size) {
+ info.flags |= VFIO_IRQ_INFO_FLAG_CAPS;
+ if (info.argsz < sizeof(info) + caps.size) {
+ info.argsz = sizeof(info) + caps.size;
+ info.cap_offset = 0;
+ } else {
+ vfio_info_cap_shift(&caps, sizeof(info));
+ if (copy_to_user((void __user *)arg +
+ sizeof(info), caps.buf,
+ caps.size)) {
+ kfree(caps.buf);
+ return -EFAULT;
+ }
+ info.cap_offset = sizeof(info);
+ }
+
+ kfree(caps.buf);
+ }

return copy_to_user((void __user *)arg, &info, minsz) ?
-EFAULT : 0;
@@ -1237,7 +1299,8 @@ static long vfio_pci_ioctl(void *device_data,
max = vfio_pci_get_irq_count(vdev, hdr.index);

ret = vfio_set_irqs_validate_and_prepare(&hdr, max,
- VFIO_PCI_NUM_IRQS, &data_size);
+ VFIO_PCI_NUM_IRQS + vdev->num_ext_irqs,
+ &data_size);
if (ret)
return ret;

diff --git a/drivers/vfio/pci/vfio_pci_intrs.c b/drivers/vfio/pci/vfio_pci_intrs.c
index 869dce5f134d..d67995fe872f 100644
--- a/drivers/vfio/pci/vfio_pci_intrs.c
+++ b/drivers/vfio/pci/vfio_pci_intrs.c
@@ -19,6 +19,7 @@
#include <linux/vfio.h>
#include <linux/wait.h>
#include <linux/slab.h>
+#include <linux/nospec.h>

#include "vfio_pci_private.h"

@@ -635,6 +636,24 @@ static int vfio_pci_set_req_trigger(struct vfio_pci_device *vdev,
count, flags, data);
}

+static int vfio_pci_set_ext_irq_trigger(struct vfio_pci_device *vdev,
+ unsigned int index, unsigned int start,
+ unsigned int count, uint32_t flags,
+ void *data)
+{
+ int i;
+
+ if (start != 0 || count > 1 || !vdev->num_ext_irqs)
+ return -EINVAL;
+
+ index = array_index_nospec(index,
+ VFIO_PCI_NUM_IRQS + vdev->num_ext_irqs);
+ i = index - VFIO_PCI_NUM_IRQS;
+
+ return vfio_pci_set_ctx_trigger_single(&vdev->ext_irqs[i].trigger,
+ count, flags, data);
+}
+
int vfio_pci_set_irqs_ioctl(struct vfio_pci_device *vdev, uint32_t flags,
unsigned index, unsigned start, unsigned count,
void *data)
@@ -684,6 +703,13 @@ int vfio_pci_set_irqs_ioctl(struct vfio_pci_device *vdev, uint32_t flags,
break;
}
break;
+ default:
+ switch (flags & VFIO_IRQ_SET_ACTION_TYPE_MASK) {
+ case VFIO_IRQ_SET_ACTION_TRIGGER:
+ func = vfio_pci_set_ext_irq_trigger;
+ break;
+ }
+ break;
}

if (!func)
@@ -691,3 +717,39 @@ int vfio_pci_set_irqs_ioctl(struct vfio_pci_device *vdev, uint32_t flags,

return func(vdev, index, start, count, flags, data);
}
+
+int vfio_pci_get_ext_irq_index(struct vfio_pci_device *vdev,
+ unsigned int type, unsigned int subtype)
+{
+ int i;
+
+ for (i = 0; i < vdev->num_ext_irqs; i++) {
+ if (vdev->ext_irqs[i].type == type &&
+ vdev->ext_irqs[i].subtype == subtype) {
+ return i;
+ }
+ }
+ return -EINVAL;
+}
+
+int vfio_pci_register_irq(struct vfio_pci_device *vdev,
+ unsigned int type, unsigned int subtype,
+ u32 flags)
+{
+ struct vfio_ext_irq *ext_irqs;
+
+ ext_irqs = krealloc(vdev->ext_irqs,
+ (vdev->num_ext_irqs + 1) * sizeof(*ext_irqs),
+ GFP_KERNEL);
+ if (!ext_irqs)
+ return -ENOMEM;
+
+ vdev->ext_irqs = ext_irqs;
+
+ vdev->ext_irqs[vdev->num_ext_irqs].type = type;
+ vdev->ext_irqs[vdev->num_ext_irqs].subtype = subtype;
+ vdev->ext_irqs[vdev->num_ext_irqs].flags = flags;
+ vdev->ext_irqs[vdev->num_ext_irqs].trigger = NULL;
+ vdev->num_ext_irqs++;
+ return 0;
+}
diff --git a/drivers/vfio/pci/vfio_pci_private.h b/drivers/vfio/pci/vfio_pci_private.h
index 1d9b0f648133..e180b5435c8f 100644
--- a/drivers/vfio/pci/vfio_pci_private.h
+++ b/drivers/vfio/pci/vfio_pci_private.h
@@ -77,6 +77,13 @@ struct vfio_pci_region {
u32 flags;
};

+struct vfio_ext_irq {
+ u32 type;
+ u32 subtype;
+ u32 flags;
+ struct eventfd_ctx *trigger;
+};
+
struct vfio_pci_dummy_resource {
struct resource resource;
int index;
@@ -111,6 +118,8 @@ struct vfio_pci_device {
struct vfio_pci_irq_ctx *ctx;
int num_ctx;
int irq_type;
+ struct vfio_ext_irq *ext_irqs;
+ int num_ext_irqs;
int num_regions;
struct vfio_pci_region *region;
u8 msi_qmax;
@@ -154,6 +163,11 @@ struct vfio_pci_device {

extern void vfio_pci_intx_mask(struct vfio_pci_device *vdev);
extern void vfio_pci_intx_unmask(struct vfio_pci_device *vdev);
+extern int vfio_pci_register_irq(struct vfio_pci_device *vdev,
+ unsigned int type, unsigned int subtype,
+ u32 flags);
+extern int vfio_pci_get_ext_irq_index(struct vfio_pci_device *vdev,
+ unsigned int type, unsigned int subtype);

extern int vfio_pci_set_irqs_ioctl(struct vfio_pci_device *vdev,
uint32_t flags, unsigned index,
--
2.26.2

2021-03-18 00:03:40

by Krishna Reddy

Subject: RE: [PATCH v12 00/13] SMMUv3 Nested Stage Setup (VFIO part)

Tested-by: Krishna Reddy <[email protected]>

Validated nested SMMUv3 translations for an NVMe PCIe device from a guest VM; they are functional.

This patch series resolves the mismatch (seen with the v11 patches) for the VFIO_IOMMU_SET_PASID_TABLE and VFIO_IOMMU_CACHE_INVALIDATE ioctls between Linux and the QEMU patch series "vSMMUv3/pSMMUv3 2 stage VFIO integration" (v5.2.0-2stage-rfcv8).

-KR