Hi Alex,
Since commit cbf7827bc5dc ("iommu/s390: Fix potential s390_domain aperture
shrinking") the s390 IOMMU driver uses a reserved region instead of an
artificially shrunk aperture to restrict IOMMU use based on the
system-provided DMA ranges of devices. While this is more aligned with
the common code use of apertures and reserved regions, it turns out this
currently breaks vfio-pci
pass-through of our special virtual ISM PCI device. Investigation showed that
this is due to vfio_test_domain_fgsp() ignoring the IOMMU reserved regions and
mapping IOVA address 0 even if that falls within a reserved region. Thus
I propose the below patch to make vfio_test_domain_fgsp() find a region to do
its mapping test instead of blindly using IOVA 0.
I already posted the patch below independently on December 22, but the
subject wording didn't make it clear that this fixes a real problem, and
of course the holidays make things easier to miss. So I wanted to post
again with a bit more background and a clearer subject on the cover
letter.
As for testing, I tested this based on current master both on s390,
where it skips the reserved 0x0-0x100000000 range, and on an AMD Ryzen
3990X, where it continues to do the test at DMA address 0 and sets
domain->fgsp to true.
Thanks,
Niklas Schnelle
Niklas Schnelle (1):
vfio/type1: Respect IOMMU reserved regions in vfio_test_domain_fgsp()
drivers/vfio/vfio_iommu_type1.c | 29 ++++++++++++++++++-----------
1 file changed, 18 insertions(+), 11 deletions(-)
--
2.34.1
Since commit cbf7827bc5dc ("iommu/s390: Fix potential s390_domain
aperture shrinking") the s390 IOMMU driver uses a reserved region
instead of an artificially shrunk aperture to restrict IOMMU use based
on the system-provided DMA ranges of devices. In particular, on current
machines this prevents use of DMA addresses below 2^32 for all devices.
Usually, just IOMMU mapping below these addresses is harmless. However,
our virtual ISM PCI device looks at new mappings on IOTLB flush and
immediately goes into an error state if such a mapping violates its
allowed DMA ranges. This then breaks pass-through of the ISM device to a
KVM guest.
Analysing this we found that vfio_test_domain_fgsp() maps 2 pages at DMA
address 0 irrespective of the IOMMU's reserved regions. Even if usually
harmless, this seems wrong in the general case. So instead, go through
the freshly updated IOVA list, try to find a range that isn't reserved
and fits 2 pages, and use that to test for fine-grained super pages.
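
For reference, the ranges walked here are the struct vfio_iova elements
that vfio_iommu_type1.c already uses to track usable IOVA ranges:

	struct vfio_iova {
		struct list_head	list;
		dma_addr_t		start;
		dma_addr_t		end;
	};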
Fixes: 6fe1010d6d9c ("vfio/type1: DMA unmap chunking")
Reported-by: Matthew Rosato <[email protected]>
Signed-off-by: Niklas Schnelle <[email protected]>
---
drivers/vfio/vfio_iommu_type1.c | 29 ++++++++++++++++++-----------
1 file changed, 18 insertions(+), 11 deletions(-)
diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 23c24fe98c00..9395097897b8 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -1856,24 +1856,31 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu,
* significantly boosts non-hugetlbfs mappings and doesn't seem to hurt when
* hugetlbfs is in use.
*/
-static void vfio_test_domain_fgsp(struct vfio_domain *domain)
+static void vfio_test_domain_fgsp(struct vfio_domain *domain, struct list_head *regions)
{
- struct page *pages;
int ret, order = get_order(PAGE_SIZE * 2);
+ struct vfio_iova *region;
+ struct page *pages;
pages = alloc_pages(GFP_KERNEL | __GFP_ZERO, order);
if (!pages)
return;
- ret = iommu_map(domain->domain, 0, page_to_phys(pages), PAGE_SIZE * 2,
- IOMMU_READ | IOMMU_WRITE | IOMMU_CACHE);
- if (!ret) {
- size_t unmapped = iommu_unmap(domain->domain, 0, PAGE_SIZE);
+ list_for_each_entry(region, regions, list) {
+ if (region->end - region->start < PAGE_SIZE * 2)
+ continue;
- if (unmapped == PAGE_SIZE)
- iommu_unmap(domain->domain, PAGE_SIZE, PAGE_SIZE);
- else
- domain->fgsp = true;
+ ret = iommu_map(domain->domain, region->start, page_to_phys(pages), PAGE_SIZE * 2,
+ IOMMU_READ | IOMMU_WRITE | IOMMU_CACHE);
+ if (!ret) {
+ size_t unmapped = iommu_unmap(domain->domain, region->start, PAGE_SIZE);
+
+ if (unmapped == PAGE_SIZE)
+ iommu_unmap(domain->domain, region->start + PAGE_SIZE, PAGE_SIZE);
+ else
+ domain->fgsp = true;
+ }
+ break;
}
__free_pages(pages, order);
@@ -2326,7 +2333,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
}
}
- vfio_test_domain_fgsp(domain);
+ vfio_test_domain_fgsp(domain, &iova_copy);
/* replay mappings on new domains */
ret = vfio_iommu_replay(iommu, domain);
--
2.34.1
On Mon, Jan 02, 2023 at 10:34:52AM +0100, Niklas Schnelle wrote:
> Since commit cbf7827bc5dc ("iommu/s390: Fix potential s390_domain
> aperture shrinking") the s390 IOMMU driver uses a reserved region
> instead of an artificially shrunk aperture to restrict IOMMU use based
> on the system-provided DMA ranges of devices. In particular, on current
> machines this prevents use of DMA addresses below 2^32 for all devices.
> Usually, just IOMMU mapping below these addresses is harmless. However,
> our virtual ISM PCI device looks at new mappings on IOTLB flush and
> immediately goes into an error state if such a mapping violates its
> allowed DMA ranges. This then breaks pass-through of the ISM device to a
> KVM guest.
>
> Analysing this we found that vfio_test_domain_fgsp() maps 2 pages at DMA
> address 0 irrespective of the IOMMU's reserved regions. Even if usually
> harmless, this seems wrong in the general case. So instead, go through
> the freshly updated IOVA list, try to find a range that isn't reserved
> and fits 2 pages, and use that to test for fine-grained super pages.
Why does it matter? The s390 driver will not set fgsp=true, so whether
it fails because the map fails or it does a proper detection, it
shouldn't make a difference.

IOW how does this actually manifest into a failure?
> - if (!ret) {
> - size_t unmapped = iommu_unmap(domain->domain, 0, PAGE_SIZE);
> + list_for_each_entry(region, regions, list) {
> + if (region->end - region->start < PAGE_SIZE * 2)
> + continue;
>
> - if (unmapped == PAGE_SIZE)
> - iommu_unmap(domain->domain, PAGE_SIZE, PAGE_SIZE);
> - else
> - domain->fgsp = true;
> + ret = iommu_map(domain->domain, region->start, page_to_phys(pages), PAGE_SIZE * 2,
> + IOMMU_READ | IOMMU_WRITE | IOMMU_CACHE);
The region also needs to have 'region->start % (PAGE_SIZE*2) == 0' for
the test to work.
Jason
On Tue, 2023-01-03 at 19:39 -0400, Jason Gunthorpe wrote:
> On Mon, Jan 02, 2023 at 10:34:52AM +0100, Niklas Schnelle wrote:
> > Since commit cbf7827bc5dc ("iommu/s390: Fix potential s390_domain
> > aperture shrinking") the s390 IOMMU driver uses a reserved region
> > instead of an artificially shrunk aperture to restrict IOMMU use based
> > on the system-provided DMA ranges of devices. In particular, on current
> > machines this prevents use of DMA addresses below 2^32 for all devices.
> > Usually, just IOMMU mapping below these addresses is harmless. However,
> > our virtual ISM PCI device looks at new mappings on IOTLB flush and
> > immediately goes into an error state if such a mapping violates its
> > allowed DMA ranges. This then breaks pass-through of the ISM device to a
> > KVM guest.
> >
> > Analysing this we found that vfio_test_domain_fgsp() maps 2 pages at DMA
> > address 0 irrespective of the IOMMU's reserved regions. Even if usually
> > harmless, this seems wrong in the general case. So instead, go through
> > the freshly updated IOVA list, try to find a range that isn't reserved
> > and fits 2 pages, and use that to test for fine-grained super pages.
>
> Why does it matter? The s390 driver will not set fgsp=true, so whether
> it fails because the map fails or it does a proper detection, it
> shouldn't make a difference.
>
> IOW how does this actually manifest into a failure?
Oh yeah, I agree, that's what I meant by saying that just mapping
should usually be harmless. This is indeed the case for all normal PCI
devices on s390, where it doesn't matter.
The problem manifests only with ISM devices, which are a special s390
virtual PCI device implemented in the machine hypervisor. This device
is used for high-speed cross-LPAR (Logical Partition) communication;
basically, it allows two LPARs that previously exchanged an
authentication token to memcpy between their partitioned memory using
the virtual device. For copying, a receiving LPAR will IOMMU-map a
region of memory for the ISM device that it will allow DMAing into
(memcpy by the hypervisor). All other regions remain unmapped and thus
inaccessible. In preparation, the device emulation in the machine
hypervisor intercepts the IOTLB flush and looks at the IOMMU
translation tables, performing e.g. size and alignment checks, I
presume, one of them against the start/end DMA boundaries. This check
fails, which leads to the virtual ISM device being put into an error
state. Being in an error state, it then fails to be initialized by the
guest driver later on.
>
> > - if (!ret) {
> > - size_t unmapped = iommu_unmap(domain->domain, 0, PAGE_SIZE);
> > + list_for_each_entry(region, regions, list) {
> > + if (region->end - region->start < PAGE_SIZE * 2)
> > + continue;
> >
> > - if (unmapped == PAGE_SIZE)
> > - iommu_unmap(domain->domain, PAGE_SIZE, PAGE_SIZE);
> > - else
> > - domain->fgsp = true;
> > + ret = iommu_map(domain->domain, region->start, page_to_phys(pages), PAGE_SIZE * 2,
> > + IOMMU_READ | IOMMU_WRITE | IOMMU_CACHE);
>
> The region also needs to have 'region->start % (PAGE_SIZE*2) == 0' for
> the test to work.
>
> Jason
Ah okay, makes sense. I guess that check could easily be added.
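
For v2 that could look something like the following inside the loop (an
untested sketch; names taken from the posted patch, using the kernel's
ALIGN() macro to round the test IOVA up to a 2-page boundary):

	/* Round up so the 2-page test mapping is naturally aligned */
	dma_addr_t start = ALIGN(region->start, PAGE_SIZE * 2);

	/* Skip ranges that can no longer fit 2 pages once aligned */
	if (start >= region->end || region->end - start < PAGE_SIZE * 2)
		continue;

	ret = iommu_map(domain->domain, start, page_to_phys(pages),
			PAGE_SIZE * 2,
			IOMMU_READ | IOMMU_WRITE | IOMMU_CACHE);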
On Wed, Jan 04, 2023 at 10:52:55AM +0100, Niklas Schnelle wrote:
> The problem manifests only with ISM devices, which are a special s390
> virtual PCI device implemented in the machine hypervisor. This device
> is used for high-speed cross-LPAR (Logical Partition) communication;
> basically, it allows two LPARs that previously exchanged an
> authentication token to memcpy between their partitioned memory using
> the virtual device. For copying, a receiving LPAR will IOMMU-map a
> region of memory for the ISM device that it will allow DMAing into
> (memcpy by the hypervisor). All other regions remain unmapped and thus
> inaccessible. In preparation, the device emulation in the machine
> hypervisor intercepts the IOTLB flush and looks at the IOMMU
> translation tables, performing e.g. size and alignment checks, I
> presume, one of them against the start/end DMA boundaries. This check
> fails, which leads to the virtual ISM device being put into an error
> state. Being in an error state, it then fails to be initialized by the
> guest driver later on.
You could rephrase this as saying that the S390 map operation doesn't
check for bounds, so mapping in a reserved region doesn't fail but
errors the HW.
Which seems reasonable to me
Jason
On Wed, 2023-01-04 at 08:16 -0400, Jason Gunthorpe wrote:
> On Wed, Jan 04, 2023 at 10:52:55AM +0100, Niklas Schnelle wrote:
>
> > The problem manifests only with ISM devices, which are a special s390
> > virtual PCI device implemented in the machine hypervisor. This device
> > is used for high-speed cross-LPAR (Logical Partition) communication;
> > basically, it allows two LPARs that previously exchanged an
> > authentication token to memcpy between their partitioned memory using
> > the virtual device. For copying, a receiving LPAR will IOMMU-map a
> > region of memory for the ISM device that it will allow DMAing into
> > (memcpy by the hypervisor). All other regions remain unmapped and thus
> > inaccessible. In preparation, the device emulation in the machine
> > hypervisor intercepts the IOTLB flush and looks at the IOMMU
> > translation tables, performing e.g. size and alignment checks, I
> > presume, one of them against the start/end DMA boundaries. This check
> > fails, which leads to the virtual ISM device being put into an error
> > state. Being in an error state, it then fails to be initialized by the
> > guest driver later on.
>
> You could rephrase this as saying that the S390 map operation doesn't
> check for bounds, so mapping in a reserved region doesn't fail but
> errors the HW.
>
> Which seems reasonable to me
>
> Jason
Kind of, yes. Before the recent IOMMU changes the IOMMU code did check
on map and failed early, but it now handles the limits via reserved
regions. The IOMMU hardware would only check the limits once an actual
DMA uses them, but of course no DMA will be triggered for this test
mapping. For this specific virtual device, though, there is an extra
check as part of the intercepted IOTLB flush (the RPCIT instruction on
s390).