2019-12-21 15:20:12

by Tom Murphy

Subject: [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api

This patchset converts the intel iommu driver to the dma-iommu api.

While converting the driver I exposed a bug in the intel i915 driver which causes a huge amount of artifacts on the screen of my laptop. You can see a picture of it here:
https://github.com/pippy360/kernelPatches/blob/master/IMG_20191219_225922.jpg

This issue is most likely in the i915 driver and is most likely caused by the driver not respecting the return value of the dma_map_ops::map_sg function. You can see the driver ignoring the return value here:
https://github.com/torvalds/linux/blob/7e0165b2f1a912a06e381e91f0f4e495f4ac3736/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c#L51

Previously this didn’t cause issues because the intel map_sg always returned the same number of elements as the input scatter gather list but with the change to this dma-iommu api this is no longer the case. I wasn’t able to track the bug down to a specific line of code unfortunately.
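
For reference, the expected dma_map_sg() calling convention is to iterate only over the number of DMA segments the call returns. A minimal sketch of a correct caller (use_segment() is a made-up stand-in; this is not the actual i915 code):

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/* Hypothetical per-segment consumer, just for illustration. */
static void use_segment(dma_addr_t addr, unsigned int len) { }

static int map_and_walk(struct device *dev, struct sg_table *sgt)
{
	struct scatterlist *sg;
	int i, count;

	/* count may be smaller than orig_nents if segments were merged */
	count = dma_map_sg(dev, sgt->sgl, sgt->orig_nents,
			   DMA_BIDIRECTIONAL);
	if (!count)
		return -ENOMEM;

	/* walk only the 'count' DMA segments actually produced */
	for_each_sg(sgt->sgl, sg, count, i)
		use_segment(sg_dma_address(sg), sg_dma_len(sg));

	return 0;
}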

Could someone from the intel team look at this?


I have been testing on a lenovo x1 carbon 5th generation. Let me know if there’s any more information you need.

To allow my patch set to be tested I have added a patch (patch 8/8) in this series to disable combining sg segments in the dma-iommu api which fixes the bug but it doesn't fix the actual problem.

As part of this patch series I copied the intel bounce buffer code to the dma-iommu path. The addition of the bounce buffer code took me by surprise. I did most of my development on this patch series before the bounce buffer code was added and my reimplementation in the dma-iommu path is very rushed and not properly tested but I’m running out of time to work on this patch set.

On top of that I also didn’t port over the intel tracing code from this commit:
https://github.com/torvalds/linux/commit/3b53034c268d550d9e8522e613a14ab53b8840d8#diff-6b3e7c4993f05e76331e463ab1fc87e1
So all the work in that commit is now wasted. The code will need to be removed and reimplemented in the dma-iommu path. I would like to take the time to do this but I really don’t have the time at the moment and I want to get these changes out before the iommu code changes any more.


Tom Murphy (8):
iommu/vt-d: clean up 32bit si_domain assignment
iommu/vt-d: Use default dma_direct_* mapping functions for direct
mapped devices
iommu/vt-d: Remove IOVA handling code from non-dma_ops path
iommu: Handle freelists when using deferred flushing in iommu drivers
iommu: Add iommu_dma_free_cpu_cached_iovas function
iommu: allow the dma-iommu api to use bounce buffers
iommu/vt-d: Convert intel iommu driver to the iommu ops
DO NOT MERGE: iommu: disable list appending in dma-iommu

drivers/iommu/Kconfig | 1 +
drivers/iommu/amd_iommu.c | 14 +-
drivers/iommu/arm-smmu-v3.c | 3 +-
drivers/iommu/arm-smmu.c | 3 +-
drivers/iommu/dma-iommu.c | 183 +++++--
drivers/iommu/exynos-iommu.c | 3 +-
drivers/iommu/intel-iommu.c | 936 ++++----------------------------
drivers/iommu/iommu.c | 39 +-
drivers/iommu/ipmmu-vmsa.c | 3 +-
drivers/iommu/msm_iommu.c | 3 +-
drivers/iommu/mtk_iommu.c | 3 +-
drivers/iommu/mtk_iommu_v1.c | 3 +-
drivers/iommu/omap-iommu.c | 3 +-
drivers/iommu/qcom_iommu.c | 3 +-
drivers/iommu/rockchip-iommu.c | 3 +-
drivers/iommu/s390-iommu.c | 3 +-
drivers/iommu/tegra-gart.c | 3 +-
drivers/iommu/tegra-smmu.c | 3 +-
drivers/iommu/virtio-iommu.c | 3 +-
drivers/vfio/vfio_iommu_type1.c | 2 +-
include/linux/dma-iommu.h | 3 +
include/linux/intel-iommu.h | 1 -
include/linux/iommu.h | 32 +-
23 files changed, 345 insertions(+), 908 deletions(-)

--
2.20.1


2019-12-21 15:28:43

by Tom Murphy

Subject: [PATCH 3/8] iommu/vt-d: Remove IOVA handling code from non-dma_ops path

Remove all IOVA handling code from the non-dma_ops path in the intel
iommu driver.

There's no need for the non-dma_ops path to keep track of IOVAs. The
whole point of the non-dma_ops path is that it allows the IOVAs to be
handled separately. The IOVA handling code removed in this patch is
pointless.

Signed-off-by: Tom Murphy <[email protected]>
---
drivers/iommu/intel-iommu.c | 89 ++++++++++++++-----------------------
1 file changed, 33 insertions(+), 56 deletions(-)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 64b1a9793daa..8d72ea0fb843 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -1908,7 +1908,8 @@ static void domain_exit(struct dmar_domain *domain)
domain_remove_dev_info(domain);

/* destroy iovas */
- put_iova_domain(&domain->iovad);
+ if (domain->domain.type == IOMMU_DOMAIN_DMA)
+ put_iova_domain(&domain->iovad);

if (domain->pgd) {
struct page *freelist;
@@ -2671,19 +2672,9 @@ static struct dmar_domain *set_domain_for_dev(struct device *dev,
}

static int iommu_domain_identity_map(struct dmar_domain *domain,
- unsigned long long start,
- unsigned long long end)
+ unsigned long first_vpfn,
+ unsigned long last_vpfn)
{
- unsigned long first_vpfn = start >> VTD_PAGE_SHIFT;
- unsigned long last_vpfn = end >> VTD_PAGE_SHIFT;
-
- if (!reserve_iova(&domain->iovad, dma_to_mm_pfn(first_vpfn),
- dma_to_mm_pfn(last_vpfn))) {
- pr_err("Reserving iova failed\n");
- return -ENOMEM;
- }
-
- pr_debug("Mapping reserved region %llx-%llx\n", start, end);
/*
* RMRR range might have overlap with physical memory range,
* clear it first
@@ -2760,7 +2751,8 @@ static int __init si_domain_init(int hw)

for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
ret = iommu_domain_identity_map(si_domain,
- PFN_PHYS(start_pfn), PFN_PHYS(end_pfn));
+ mm_to_dma_pfn(start_pfn),
+ mm_to_dma_pfn(end_pfn));
if (ret)
return ret;
}
@@ -4593,58 +4585,37 @@ static int intel_iommu_memory_notifier(struct notifier_block *nb,
unsigned long val, void *v)
{
struct memory_notify *mhp = v;
- unsigned long long start, end;
- unsigned long start_vpfn, last_vpfn;
+ unsigned long start_vpfn = mm_to_dma_pfn(mhp->start_pfn);
+ unsigned long last_vpfn = mm_to_dma_pfn(mhp->start_pfn +
+ mhp->nr_pages - 1);

switch (val) {
case MEM_GOING_ONLINE:
- start = mhp->start_pfn << PAGE_SHIFT;
- end = ((mhp->start_pfn + mhp->nr_pages) << PAGE_SHIFT) - 1;
- if (iommu_domain_identity_map(si_domain, start, end)) {
- pr_warn("Failed to build identity map for [%llx-%llx]\n",
- start, end);
+ if (iommu_domain_identity_map(si_domain, start_vpfn,
+ last_vpfn)) {
+ pr_warn("Failed to build identity map for [%lx-%lx]\n",
+ start_vpfn, last_vpfn);
return NOTIFY_BAD;
}
break;

case MEM_OFFLINE:
case MEM_CANCEL_ONLINE:
- start_vpfn = mm_to_dma_pfn(mhp->start_pfn);
- last_vpfn = mm_to_dma_pfn(mhp->start_pfn + mhp->nr_pages - 1);
- while (start_vpfn <= last_vpfn) {
- struct iova *iova;
+ {
struct dmar_drhd_unit *drhd;
struct intel_iommu *iommu;
struct page *freelist;

- iova = find_iova(&si_domain->iovad, start_vpfn);
- if (iova == NULL) {
- pr_debug("Failed get IOVA for PFN %lx\n",
- start_vpfn);
- break;
- }
-
- iova = split_and_remove_iova(&si_domain->iovad, iova,
- start_vpfn, last_vpfn);
- if (iova == NULL) {
- pr_warn("Failed to split IOVA PFN [%lx-%lx]\n",
- start_vpfn, last_vpfn);
- return NOTIFY_BAD;
- }
-
- freelist = domain_unmap(si_domain, iova->pfn_lo,
- iova->pfn_hi);
+ freelist = domain_unmap(si_domain, start_vpfn,
+ last_vpfn);

rcu_read_lock();
for_each_active_iommu(iommu, drhd)
iommu_flush_iotlb_psi(iommu, si_domain,
- iova->pfn_lo, iova_size(iova),
+ start_vpfn, mhp->nr_pages,
!freelist, 0);
rcu_read_unlock();
dma_free_pagelist(freelist);
-
- start_vpfn = iova->pfn_hi + 1;
- free_iova_mem(iova);
}
break;
}
@@ -4672,8 +4643,9 @@ static void free_all_cpu_cached_iovas(unsigned int cpu)
for (did = 0; did < cap_ndoms(iommu->cap); did++) {
domain = get_iommu_domain(iommu, (u16)did);

- if (!domain)
+ if (!domain || domain->domain.type != IOMMU_DOMAIN_DMA)
continue;
+
free_cpu_cached_iovas(cpu, &domain->iovad);
}
}
@@ -5095,9 +5067,6 @@ static int md_domain_init(struct dmar_domain *domain, int guest_width)
{
int adjust_width;

- init_iova_domain(&domain->iovad, VTD_PAGE_SIZE, IOVA_START_PFN);
- domain_reserve_special_ranges(domain);
-
/* calculate AGAW */
domain->gaw = guest_width;
adjust_width = guestwidth_to_adjustwidth(guest_width);
@@ -5116,6 +5085,18 @@ static int md_domain_init(struct dmar_domain *domain, int guest_width)
return 0;
}

+static void intel_init_iova_domain(struct dmar_domain *dmar_domain)
+{
+ init_iova_domain(&dmar_domain->iovad, VTD_PAGE_SIZE, IOVA_START_PFN);
+ copy_reserved_iova(&reserved_iova_list, &dmar_domain->iovad);
+
+ if (init_iova_flush_queue(&dmar_domain->iovad, iommu_flush_iova,
+ iova_entry_free)) {
+ pr_warn("iova flush queue initialization failed\n");
+ intel_iommu_strict = 1;
+ }
+}
+
static struct iommu_domain *intel_iommu_domain_alloc(unsigned type)
{
struct dmar_domain *dmar_domain;
@@ -5136,12 +5117,8 @@ static struct iommu_domain *intel_iommu_domain_alloc(unsigned type)
return NULL;
}

- if (type == IOMMU_DOMAIN_DMA &&
- init_iova_flush_queue(&dmar_domain->iovad,
- iommu_flush_iova, iova_entry_free)) {
- pr_warn("iova flush queue initialization failed\n");
- intel_iommu_strict = 1;
- }
+ if (type == IOMMU_DOMAIN_DMA)
+ intel_init_iova_domain(dmar_domain);

domain_update_iommu_cap(dmar_domain);

--
2.20.1

2019-12-23 10:38:56

by Jani Nikula

Subject: Re: [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api

On Sat, 21 Dec 2019, Tom Murphy <[email protected]> wrote:
> This patchset converts the intel iommu driver to the dma-iommu api.
>
> While converting the driver I exposed a bug in the intel i915 driver
> which causes a huge amount of artifacts on the screen of my
> laptop. You can see a picture of it here:
> https://github.com/pippy360/kernelPatches/blob/master/IMG_20191219_225922.jpg
>
> This issue is most likely in the i915 driver and is most likely caused
> by the driver not respecting the return value of the
> dma_map_ops::map_sg function. You can see the driver ignoring the
> return value here:
> https://github.com/torvalds/linux/blob/7e0165b2f1a912a06e381e91f0f4e495f4ac3736/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c#L51
>
> Previously this didn’t cause issues because the intel map_sg always
> returned the same number of elements as the input scatter gather list
> but with the change to this dma-iommu api this is no longer the
> case. I wasn’t able to track the bug down to a specific line of code
> unfortunately.
>
> Could someone from the intel team look at this?

Let me get this straight. There is a current API that on success always
returns the same number of elements as the input scatter gather
list. You propose to change the API so that this is no longer the case?

A quick check of various dma_map_sg() calls in the kernel seems to
indicate checking for 0 for errors and then ignoring the non-zero return
is a common pattern. Are you sure it's okay to make the change you're
proposing?
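
That pattern looks roughly like this (a sketch, not any specific caller):

/*
 * Sketch of the widespread (fragile) pattern: only the error case of
 * dma_map_sg() is checked, the returned count is otherwise discarded,
 * and the caller later walks all sgt->orig_nents entries anyway.
 */
if (dma_map_sg(dev, sgt->sgl, sgt->orig_nents, DMA_BIDIRECTIONAL) == 0)
	return -ENOMEM;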

Anyway, due to the time of year and all, I'd like to ask you to file a
bug against i915 at [1] so this is not forgotten, and please let's not
merge the changes before this is resolved.


Thanks,
Jani.


[1] https://gitlab.freedesktop.org/drm/intel/issues/new


--
Jani Nikula, Intel Open Source Graphics Center

2019-12-23 11:30:27

by Robin Murphy

Subject: Re: [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api

On 2019-12-23 10:37 am, Jani Nikula wrote:
> On Sat, 21 Dec 2019, Tom Murphy <[email protected]> wrote:
>> This patchset converts the intel iommu driver to the dma-iommu api.
>>
>> While converting the driver I exposed a bug in the intel i915 driver
>> which causes a huge amount of artifacts on the screen of my
>> laptop. You can see a picture of it here:
>> https://github.com/pippy360/kernelPatches/blob/master/IMG_20191219_225922.jpg
>>
>> This issue is most likely in the i915 driver and is most likely caused
>> by the driver not respecting the return value of the
>> dma_map_ops::map_sg function. You can see the driver ignoring the
>> return value here:
>> https://github.com/torvalds/linux/blob/7e0165b2f1a912a06e381e91f0f4e495f4ac3736/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c#L51
>>
>> Previously this didn’t cause issues because the intel map_sg always
>> returned the same number of elements as the input scatter gather list
>> but with the change to this dma-iommu api this is no longer the
>> case. I wasn’t able to track the bug down to a specific line of code
>> unfortunately.
>>
>> Could someone from the intel team look at this?
>
> Let me get this straight. There is current API that on success always
> returns the same number of elements as the input scatter gather
> list. You propose to change the API so that this is no longer the case?

No, the API for dma_map_sg() has always been that it may return fewer
DMA segments than nents - see Documentation/DMA-API.txt (and otherwise,
the return value would surely be a simple success/fail condition).
Relying on a particular implementation behaviour has never been strictly
correct, even if it does happen to be a very common behaviour.

> A quick check of various dma_map_sg() calls in the kernel seems to
> indicate checking for 0 for errors and then ignoring the non-zero return
> is a common pattern. Are you sure it's okay to make the change you're
> proposing?

Various code uses tricks like just iterating the mapped list until the
first segment with zero sg_dma_len(). Others may well simply have bugs.
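
That trick looks roughly like this (a sketch; handle_segment() is a
made-up stand-in for whatever the driver does per segment):

/*
 * Sketch of the "stop at the first zero sg_dma_len()" idiom used
 * instead of honouring the count returned by dma_map_sg().
 */
static void walk_mapped(struct sg_table *sgt)
{
	struct scatterlist *sg;

	for (sg = sgt->sgl; sg; sg = sg_next(sg)) {
		if (!sg_dma_len(sg))
			break;	/* past the last mapped segment */
		handle_segment(sg_dma_address(sg), sg_dma_len(sg));
	}
}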

Robin.

> Anyway, due to the time of year and all, I'd like to ask you to file a
> bug against i915 at [1] so this is not forgotten, and please let's not
> merge the changes before this is resolved.
>
>
> Thanks,
> Jani.
>
>
> [1] https://gitlab.freedesktop.org/drm/intel/issues/new
>
>

2019-12-23 11:42:37

by Jani Nikula

Subject: Re: [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api

On Mon, 23 Dec 2019, Robin Murphy <[email protected]> wrote:
> On 2019-12-23 10:37 am, Jani Nikula wrote:
>> On Sat, 21 Dec 2019, Tom Murphy <[email protected]> wrote:
>>> This patchset converts the intel iommu driver to the dma-iommu api.
>>>
>>> While converting the driver I exposed a bug in the intel i915 driver
>>> which causes a huge amount of artifacts on the screen of my
>>> laptop. You can see a picture of it here:
>>> https://github.com/pippy360/kernelPatches/blob/master/IMG_20191219_225922.jpg
>>>
>>> This issue is most likely in the i915 driver and is most likely caused
>>> by the driver not respecting the return value of the
>>> dma_map_ops::map_sg function. You can see the driver ignoring the
>>> return value here:
>>> https://github.com/torvalds/linux/blob/7e0165b2f1a912a06e381e91f0f4e495f4ac3736/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c#L51
>>>
>>> Previously this didn’t cause issues because the intel map_sg always
>>> returned the same number of elements as the input scatter gather list
>>> but with the change to this dma-iommu api this is no longer the
>>> case. I wasn’t able to track the bug down to a specific line of code
>>> unfortunately.
>>>
>>> Could someone from the intel team look at this?
>>
>> Let me get this straight. There is current API that on success always
>> returns the same number of elements as the input scatter gather
>> list. You propose to change the API so that this is no longer the case?
>
> No, the API for dma_map_sg() has always been that it may return fewer
> DMA segments than nents - see Documentation/DMA-API.txt (and otherwise,
> the return value would surely be a simple success/fail condition).
> Relying on a particular implementation behaviour has never been strictly
> correct, even if it does happen to be a very common behaviour.
>
>> A quick check of various dma_map_sg() calls in the kernel seems to
>> indicate checking for 0 for errors and then ignoring the non-zero return
>> is a common pattern. Are you sure it's okay to make the change you're
>> proposing?
>
> Various code uses tricks like just iterating the mapped list until the
> first segment with zero sg_dma_len(). Others may well simply have bugs.

Thanks for the clarification.

BR,
Jani.

>
> Robin.
>
>> Anyway, due to the time of year and all, I'd like to ask you to file a
>> bug against i915 at [1] so this is not forgotten, and please let's not
>> merge the changes before this is resolved.
>>
>>
>> Thanks,
>> Jani.
>>
>>
>> [1] https://gitlab.freedesktop.org/drm/intel/issues/new
>>
>>

--
Jani Nikula, Intel Open Source Graphics Center

2020-03-20 06:29:20

by Tom Murphy

Subject: Re: [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api

Any news on this? Is there anyone who wants to try and fix this possible bug?

On Mon, 23 Dec 2019 at 03:41, Jani Nikula <[email protected]> wrote:
>
> On Mon, 23 Dec 2019, Robin Murphy <[email protected]> wrote:
> > On 2019-12-23 10:37 am, Jani Nikula wrote:
> >> On Sat, 21 Dec 2019, Tom Murphy <[email protected]> wrote:
> >>> This patchset converts the intel iommu driver to the dma-iommu api.
> >>>
> >>> While converting the driver I exposed a bug in the intel i915 driver
> >>> which causes a huge amount of artifacts on the screen of my
> >>> laptop. You can see a picture of it here:
> >>> https://github.com/pippy360/kernelPatches/blob/master/IMG_20191219_225922.jpg
> >>>
> >>> This issue is most likely in the i915 driver and is most likely caused
> >>> by the driver not respecting the return value of the
> >>> dma_map_ops::map_sg function. You can see the driver ignoring the
> >>> return value here:
> >>> https://github.com/torvalds/linux/blob/7e0165b2f1a912a06e381e91f0f4e495f4ac3736/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c#L51
> >>>
> >>> Previously this didn’t cause issues because the intel map_sg always
> >>> returned the same number of elements as the input scatter gather list
> >>> but with the change to this dma-iommu api this is no longer the
> >>> case. I wasn’t able to track the bug down to a specific line of code
> >>> unfortunately.
> >>>
> >>> Could someone from the intel team look at this?
> >>
> >> Let me get this straight. There is current API that on success always
> >> returns the same number of elements as the input scatter gather
> >> list. You propose to change the API so that this is no longer the case?
> >
> > No, the API for dma_map_sg() has always been that it may return fewer
> > DMA segments than nents - see Documentation/DMA-API.txt (and otherwise,
> > the return value would surely be a simple success/fail condition).
> > Relying on a particular implementation behaviour has never been strictly
> > correct, even if it does happen to be a very common behaviour.
> >
> >> A quick check of various dma_map_sg() calls in the kernel seems to
> >> indicate checking for 0 for errors and then ignoring the non-zero return
> >> is a common pattern. Are you sure it's okay to make the change you're
> >> proposing?
> >
> > Various code uses tricks like just iterating the mapped list until the
> > first segment with zero sg_dma_len(). Others may well simply have bugs.
>
> Thanks for the clarification.
>
> BR,
> Jani.
>
> >
> > Robin.
> >
> >> Anyway, due to the time of year and all, I'd like to ask you to file a
> >> bug against i915 at [1] so this is not forgotten, and please let's not
> >> merge the changes before this is resolved.
> >>
> >>
> >> Thanks,
> >> Jani.
> >>
> >>
> >> [1] https://gitlab.freedesktop.org/drm/intel/issues/new
> >>
> >>
>
> --
> Jani Nikula, Intel Open Source Graphics Center

2020-03-20 06:32:22

by Tom Murphy

Subject: Re: [PATCH 3/8] iommu/vt-d: Remove IOVA handling code from non-dma_ops path

Could we merge patches 1-3 from this series? They just clean up weird
code, and merging these patches will cover some of the work needed to
move the intel iommu driver to the dma-iommu api in the future.

On Sat, 21 Dec 2019 at 07:04, Tom Murphy <[email protected]> wrote:
>
> Remove all IOVA handling code from the non-dma_ops path in the intel
> iommu driver.
>
> There's no need for the non-dma_ops path to keep track of IOVAs. The
> whole point of the non-dma_ops path is that it allows the IOVAs to be
> handled separately. The IOVA handling code removed in this patch is
> pointless.
>
> Signed-off-by: Tom Murphy <[email protected]>
> ---
> drivers/iommu/intel-iommu.c | 89 ++++++++++++++-----------------------
> 1 file changed, 33 insertions(+), 56 deletions(-)
>
> diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
> index 64b1a9793daa..8d72ea0fb843 100644
> --- a/drivers/iommu/intel-iommu.c
> +++ b/drivers/iommu/intel-iommu.c
> @@ -1908,7 +1908,8 @@ static void domain_exit(struct dmar_domain *domain)
> domain_remove_dev_info(domain);
>
> /* destroy iovas */
> - put_iova_domain(&domain->iovad);
> + if (domain->domain.type == IOMMU_DOMAIN_DMA)
> + put_iova_domain(&domain->iovad);
>
> if (domain->pgd) {
> struct page *freelist;
> @@ -2671,19 +2672,9 @@ static struct dmar_domain *set_domain_for_dev(struct device *dev,
> }
>
> static int iommu_domain_identity_map(struct dmar_domain *domain,
> - unsigned long long start,
> - unsigned long long end)
> + unsigned long first_vpfn,
> + unsigned long last_vpfn)
> {
> - unsigned long first_vpfn = start >> VTD_PAGE_SHIFT;
> - unsigned long last_vpfn = end >> VTD_PAGE_SHIFT;
> -
> - if (!reserve_iova(&domain->iovad, dma_to_mm_pfn(first_vpfn),
> - dma_to_mm_pfn(last_vpfn))) {
> - pr_err("Reserving iova failed\n");
> - return -ENOMEM;
> - }
> -
> - pr_debug("Mapping reserved region %llx-%llx\n", start, end);
> /*
> * RMRR range might have overlap with physical memory range,
> * clear it first
> @@ -2760,7 +2751,8 @@ static int __init si_domain_init(int hw)
>
> for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
> ret = iommu_domain_identity_map(si_domain,
> - PFN_PHYS(start_pfn), PFN_PHYS(end_pfn));
> + mm_to_dma_pfn(start_pfn),
> + mm_to_dma_pfn(end_pfn));
> if (ret)
> return ret;
> }
> @@ -4593,58 +4585,37 @@ static int intel_iommu_memory_notifier(struct notifier_block *nb,
> unsigned long val, void *v)
> {
> struct memory_notify *mhp = v;
> - unsigned long long start, end;
> - unsigned long start_vpfn, last_vpfn;
> + unsigned long start_vpfn = mm_to_dma_pfn(mhp->start_pfn);
> + unsigned long last_vpfn = mm_to_dma_pfn(mhp->start_pfn +
> + mhp->nr_pages - 1);
>
> switch (val) {
> case MEM_GOING_ONLINE:
> - start = mhp->start_pfn << PAGE_SHIFT;
> - end = ((mhp->start_pfn + mhp->nr_pages) << PAGE_SHIFT) - 1;
> - if (iommu_domain_identity_map(si_domain, start, end)) {
> - pr_warn("Failed to build identity map for [%llx-%llx]\n",
> - start, end);
> + if (iommu_domain_identity_map(si_domain, start_vpfn,
> + last_vpfn)) {
> + pr_warn("Failed to build identity map for [%lx-%lx]\n",
> + start_vpfn, last_vpfn);
> return NOTIFY_BAD;
> }
> break;
>
> case MEM_OFFLINE:
> case MEM_CANCEL_ONLINE:
> - start_vpfn = mm_to_dma_pfn(mhp->start_pfn);
> - last_vpfn = mm_to_dma_pfn(mhp->start_pfn + mhp->nr_pages - 1);
> - while (start_vpfn <= last_vpfn) {
> - struct iova *iova;
> + {
> struct dmar_drhd_unit *drhd;
> struct intel_iommu *iommu;
> struct page *freelist;
>
> - iova = find_iova(&si_domain->iovad, start_vpfn);
> - if (iova == NULL) {
> - pr_debug("Failed get IOVA for PFN %lx\n",
> - start_vpfn);
> - break;
> - }
> -
> - iova = split_and_remove_iova(&si_domain->iovad, iova,
> - start_vpfn, last_vpfn);
> - if (iova == NULL) {
> - pr_warn("Failed to split IOVA PFN [%lx-%lx]\n",
> - start_vpfn, last_vpfn);
> - return NOTIFY_BAD;
> - }
> -
> - freelist = domain_unmap(si_domain, iova->pfn_lo,
> - iova->pfn_hi);
> + freelist = domain_unmap(si_domain, start_vpfn,
> + last_vpfn);
>
> rcu_read_lock();
> for_each_active_iommu(iommu, drhd)
> iommu_flush_iotlb_psi(iommu, si_domain,
> - iova->pfn_lo, iova_size(iova),
> + start_vpfn, mhp->nr_pages,
> !freelist, 0);
> rcu_read_unlock();
> dma_free_pagelist(freelist);
> -
> - start_vpfn = iova->pfn_hi + 1;
> - free_iova_mem(iova);
> }
> break;
> }
> @@ -4672,8 +4643,9 @@ static void free_all_cpu_cached_iovas(unsigned int cpu)
> for (did = 0; did < cap_ndoms(iommu->cap); did++) {
> domain = get_iommu_domain(iommu, (u16)did);
>
> - if (!domain)
> + if (!domain || domain->domain.type != IOMMU_DOMAIN_DMA)
> continue;
> +
> free_cpu_cached_iovas(cpu, &domain->iovad);
> }
> }
> @@ -5095,9 +5067,6 @@ static int md_domain_init(struct dmar_domain *domain, int guest_width)
> {
> int adjust_width;
>
> - init_iova_domain(&domain->iovad, VTD_PAGE_SIZE, IOVA_START_PFN);
> - domain_reserve_special_ranges(domain);
> -
> /* calculate AGAW */
> domain->gaw = guest_width;
> adjust_width = guestwidth_to_adjustwidth(guest_width);
> @@ -5116,6 +5085,18 @@ static int md_domain_init(struct dmar_domain *domain, int guest_width)
> return 0;
> }
>
> +static void intel_init_iova_domain(struct dmar_domain *dmar_domain)
> +{
> + init_iova_domain(&dmar_domain->iovad, VTD_PAGE_SIZE, IOVA_START_PFN);
> + copy_reserved_iova(&reserved_iova_list, &dmar_domain->iovad);
> +
> + if (init_iova_flush_queue(&dmar_domain->iovad, iommu_flush_iova,
> + iova_entry_free)) {
> + pr_warn("iova flush queue initialization failed\n");
> + intel_iommu_strict = 1;
> + }
> +}
> +
> static struct iommu_domain *intel_iommu_domain_alloc(unsigned type)
> {
> struct dmar_domain *dmar_domain;
> @@ -5136,12 +5117,8 @@ static struct iommu_domain *intel_iommu_domain_alloc(unsigned type)
> return NULL;
> }
>
> - if (type == IOMMU_DOMAIN_DMA &&
> - init_iova_flush_queue(&dmar_domain->iovad,
> - iommu_flush_iova, iova_entry_free)) {
> - pr_warn("iova flush queue initialization failed\n");
> - intel_iommu_strict = 1;
> - }
> + if (type == IOMMU_DOMAIN_DMA)
> + intel_init_iova_domain(dmar_domain);
>
> domain_update_iommu_cap(dmar_domain);
>
> --
> 2.20.1
>

2020-03-20 07:07:08

by Lu Baolu

Subject: Re: [PATCH 3/8] iommu/vt-d: Remove IOVA handling code from non-dma_ops path

On 2020/3/20 14:30, Tom Murphy wrote:
> Could we merge patch 1-3 from this series? it just cleans up weird
> code and merging these patches will cover some of the work needed to
> move the intel iommu driver to the dma-iommu api in the future.

Can you please take a look at this patch series?

https://lkml.org/lkml/2020/3/13/1162

It probably makes this series easier.

Best regards,
baolu

>
> On Sat, 21 Dec 2019 at 07:04, Tom Murphy<[email protected]> wrote:
>> Remove all IOVA handling code from the non-dma_ops path in the intel
>> iommu driver.
>>
>> There's no need for the non-dma_ops path to keep track of IOVAs. The
>> whole point of the non-dma_ops path is that it allows the IOVAs to be
>> handled separately. The IOVA handling code removed in this patch is
>> pointless.
>>
>> Signed-off-by: Tom Murphy<[email protected]>

2020-05-29 00:27:09

by Logan Gunthorpe

Subject: Re: [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api

Hi Tom,

On 2019-12-21 8:03 a.m., Tom Murphy wrote:
> This patchset converts the intel iommu driver to the dma-iommu api.

Just wanted to note that I've rebased your series on recent kernels and
have done some testing on my old Sandybridge machine (without the DO NOT
MERGE patch) and have found no issues. I hope this can make progress
and get merged soon. If you like you can add:

Tested-By: Logan Gunthorpe <[email protected]>

> While converting the driver I exposed a bug in the intel i915 driver which causes a huge amount of artifacts on the screen of my laptop. You can see a picture of it here:
> https://github.com/pippy360/kernelPatches/blob/master/IMG_20191219_225922.jpg
>
> This issue is most likely in the i915 driver and is most likely caused by the driver not respecting the return value of the dma_map_ops::map_sg function. You can see the driver ignoring the return value here:
> https://github.com/torvalds/linux/blob/7e0165b2f1a912a06e381e91f0f4e495f4ac3736/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c#L51
>
> Previously this didn’t cause issues because the intel map_sg always returned the same number of elements as the input scatter gather list but with the change to this dma-iommu api this is no longer the case. I wasn’t able to track the bug down to a specific line of code unfortunately.

I did some digging into this myself and while I don't have full patch, I
think I traced it closer to the problem.

Sadly, ignoring the number of nents returned by map_sg() is endemic to
dma-buf users, but AMD's GPU driver seems to do the same thing,
presumably without issues.

Digging a bit further, I found that the i915 driver has an "innovative"
way of iterating through SGLs, see [1]. I suspect that if __sgt_iter is changed to
increment with sg_dma_len() and return NULL when there is no length
left, it may fix the issue.

But, sorry, I don't really have the means or time to fix and test this
myself.

Thanks,

Logan

[1]
https://elixir.bootlin.com/linux/v5.7-rc7/source/drivers/gpu/drm/i915/i915_scatterlist.h#L76

2020-05-29 12:49:43

by Christoph Hellwig

Subject: Re: [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api

On Thu, May 28, 2020 at 06:00:44PM -0600, Logan Gunthorpe wrote:
> > This issue is most likely in the i915 driver and is most likely caused by the driver not respecting the return value of the dma_map_ops::map_sg function. You can see the driver ignoring the return value here:
> > https://github.com/torvalds/linux/blob/7e0165b2f1a912a06e381e91f0f4e495f4ac3736/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c#L51
> >
> > Previously this didn’t cause issues because the intel map_sg always returned the same number of elements as the input scatter gather list but with the change to this dma-iommu api this is no longer the case. I wasn’t able to track the bug down to a specific line of code unfortunately.

Mark did a big audit of the map_sg API abuse and initially had
some i915 patches, but then gave up on them with this comment:

"The biggest TODO is DRM/i915 driver and I don't feel brave enough to fix
it fully. The driver creatively uses sg_table->orig_nents to store the
size of the allocate scatterlist and ignores the number of the entries
returned by dma_map_sg function. In this patchset I only fixed the
sg_table objects exported by dmabuf related functions. I hope that I
didn't break anything there."

it would be really nice if the i915 maintainers could help with sorting
that API abuse out.

2020-05-29 19:10:36

by Logan Gunthorpe

Subject: Re: [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api



On 2020-05-29 6:45 a.m., Christoph Hellwig wrote:
> On Thu, May 28, 2020 at 06:00:44PM -0600, Logan Gunthorpe wrote:
>>> This issue is most likely in the i915 driver and is most likely caused by the driver not respecting the return value of the dma_map_ops::map_sg function. You can see the driver ignoring the return value here:
>>> https://github.com/torvalds/linux/blob/7e0165b2f1a912a06e381e91f0f4e495f4ac3736/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c#L51
>>>
>>> Previously this didn’t cause issues because the intel map_sg always returned the same number of elements as the input scatter gather list but with the change to this dma-iommu api this is no longer the case. I wasn’t able to track the bug down to a specific line of code unfortunately.
>
> Mark did a big audit into the map_sg API abuse and initially had
> some i915 patches, but then gave up on them with this comment:
>
> "The biggest TODO is DRM/i915 driver and I don't feel brave enough to fix
> it fully. The driver creatively uses sg_table->orig_nents to store the
> size of the allocate scatterlist and ignores the number of the entries
> returned by dma_map_sg function. In this patchset I only fixed the
> sg_table objects exported by dmabuf related functions. I hope that I
> didn't break anything there."
>
> it would be really nice if the i915 maintainers could help with sorting
> that API abuse out.
>

I agree completely that the API abuse should be sorted out, but I think
that's much larger than just the i915 driver. Pretty much every dma-buf
map_dma_buf implementation I looked at ignores the nents returned by
dma_map_sg_attrs(). This sucks, but I don't think it's the bug Tom ran into. See:

amdgpu_dma_buf_map
armada_gem_prime_map_dma_buf
drm_gem_map_dma_buf
i915_gem_map_dma_buf
tegra_gem_prime_map_dma_buf

So this should probably be addressed by the whole GPU community.

However, as Robin pointed out, there are other ugly tricks like stopping
iterating through the SGL when sg_dma_len() is zero. For example, the
AMD driver appears to use drm_prime_sg_to_page_addr_arrays() which does
this trick and thus likely isn't buggy (otherwise, I'd expect someone to
have complained by now, seeing as AMD has already switched to IOMMU-DMA).

As I tried to point out in my previous email, i915 does not do this
trick. In fact, it completely ignores sg_dma_len() and is thus
completely broken. See i915_scatterlist.h and the __sgt_iter() function.
So it doesn't sound to me like Mark's fix would address the issue at
all. Per my previous email, I'd guess that it can be fixed simply by
adjusting the __sgt_iter() function to do something more sensible.
(Better yet, if possible, ditch __sgt_iter() and use the common DRM
function that AMD uses).
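
For reference, usage of that common helper is roughly this (a sketch,
assuming the signature of that era; 'pages', 'addrs' and 'npages' are
hypothetical caller-owned arrays sized for sgt->orig_nents):

/*
 * drm_prime_sg_to_page_addr_arrays() fills page and DMA-address
 * arrays from an sg_table and, per the discussion above, copes with
 * mapped lists that are shorter than orig_nents.
 */
int ret = drm_prime_sg_to_page_addr_arrays(sgt, pages, addrs, npages);
if (ret)
	return ret;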

This would at least allow us to make progress with Tom's IOMMU-DMA patch
set and once that gets in, it will be harder for other drivers to make
the same mistake.

Logan


2020-05-29 21:14:50

by Marek Szyprowski

Subject: Re: [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api

Hi Logan,

On 29.05.2020 21:05, Logan Gunthorpe wrote:
> On 2020-05-29 6:45 a.m., Christoph Hellwig wrote:
>> On Thu, May 28, 2020 at 06:00:44PM -0600, Logan Gunthorpe wrote:
>>>> This issue is most likely in the i915 driver and is most likely caused by the driver not respecting the return value of the dma_map_ops::map_sg function. You can see the driver ignoring the return value here:
>>>> https://github.com/torvalds/linux/blob/7e0165b2f1a912a06e381e91f0f4e495f4ac3736/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c#L51
>>>>
>>>> Previously this didn’t cause issues because the intel map_sg always returned the same number of elements as the input scatter gather list but with the change to this dma-iommu api this is no longer the case. I wasn’t able to track the bug down to a specific line of code unfortunately.
>> Mark did a big audit into the map_sg API abuse and initially had
>> some i915 patches, but then gave up on them with this comment:
>>
>> "The biggest TODO is DRM/i915 driver and I don't feel brave enough to fix
>> it fully. The driver creatively uses sg_table->orig_nents to store the
>> size of the allocate scatterlist and ignores the number of the entries
>> returned by dma_map_sg function. In this patchset I only fixed the
>> sg_table objects exported by dmabuf related functions. I hope that I
>> didn't break anything there."
>>
>> it would be really nice if the i915 maintainers could help with sorting
>> that API abuse out.
>>
> I agree completely that the API abuse should be sorted out, but I think
> that's much larger than just the i915 driver. Pretty much every dma-buf
> map_dma_buf implementation I looked at ignores the returned nents of
> sg_attrs. This sucks, but I don't think it's the bug Tom ran into. See:
>
> amdgpu_dma_buf_map
> armada_gem_prime_map_dma_buf
> drm_gem_map_dma_buf
> i915_gem_map_dma_buf
> tegra_gem_prime_map_dma_buf
>
> So this should probably be addressed by the whole GPU community.

Patches are pending:
https://lore.kernel.org/linux-iommu/[email protected]/T/

> However, as Robin pointed out, there are other ugly tricks like stopping
> iterating through the SGL when sg_dma_len() is zero. For example, the
> AMD driver appears to use drm_prime_sg_to_page_addr_arrays() which does
> this trick and thus likely isn't buggy (otherwise, I'd expect someone to
> have complained by now seeing AMD has already switched to IOMMU-DMA.

I'm not sure that this is a trick. Stopping at zero sg_dma_len() was
documented somewhere.

> As I tried to point out in my previous email, i915 does not do this
> trick. In fact, it completely ignores sg_dma_len() and is thus
> completely broken. See i915_scatterlist.h and the __sgt_iter() function.
> So it doesn't sound to me like Mark's fix would address the issue at
> all. Per my previous email, I'd guess that it can be fixed simply by
> adjusting the __sgt_iter() function to do something more sensible.
> (Better yet, if possible, ditch __sgt_iter() and use the common DRM
> function that AMD uses).
>
> This would at least allow us to make progress with Tom's IOMMU-DMA patch
> set and once that gets in, it will be harder for other drivers to make
> the same mistake.

Best regards
--
Marek Szyprowski, PhD
Samsung R&D Institute Poland

2020-05-29 21:23:54

by Logan Gunthorpe

Subject: Re: [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api



On 2020-05-29 3:11 p.m., Marek Szyprowski wrote:
> Patches are pending:
> https://lore.kernel.org/linux-iommu/[email protected]/T/

Cool, nice! Though, I still don't think that fixes the issue in
i915_scatterlist.h given it still ignores sg_dma_len() and strictly
relies on sg_next()/sg_is_last() to stop iterating -- and I suspect this
is the bug that got in Tom's way.

>> However, as Robin pointed out, there are other ugly tricks like stopping
>> iterating through the SGL when sg_dma_len() is zero. For example, the
>> AMD driver appears to use drm_prime_sg_to_page_addr_arrays() which does
>> this trick and thus likely isn't buggy (otherwise, I'd expect someone to
>> have complained by now seeing AMD has already switched to IOMMU-DMA.
>
> I'm not sure that this is a trick. Stopping at zero sg_dma_len() was
> somewhere documented.

Well whatever you want to call it, it is ugly to have some drivers doing
one thing with the returned value and others assuming there's an extra
zero at the end. It just causes confusion for people reading/copying the
code. It would be better if they are all consistent. However, I concede
stopping at zero should not be broken, presently.

Logan

2020-08-24 00:16:34

by Tom Murphy

Subject: Re: [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api

Hi Logan/All,

I have added a check for the sg_dma_len == 0 :
"""
} __sgt_iter(struct scatterlist *sgl, bool dma) {
struct sgt_iter s = { .sgp = sgl };

+ if (sgl && sg_dma_len(sgl) == 0)
+ s.sgp = NULL;

if (s.sgp) {
.....
"""
at location [0],
but it doesn't fix the problem.

You're right though, this change does need to be made; this code
doesn't handle entries with sg_dma_len(sg) == 0 correctly.
So my guess is that we have more bugs in other parts of the i915
driver (or there is a problem with my "sg_dma_len == 0" fix above).
I have been trying to spot where else the code might be buggy but I
haven't had any luck so far.

I'm doing a microconference (at LPC 2020) this Wednesday [1] on this,
if you're interested in attending.
I'm hoping I can chat about it with a few people and find out how we
can reproduce and fix these issues. I don't have any more time I can give
to this unfortunately and it would be a shame for the work to go to
waste.

[0] https://github.com/torvalds/linux/blob/d012a7190fc1fd72ed48911e77ca97ba4521bccd/drivers/gpu/drm/i915/i915_scatterlist.h#L28
[1] https://linuxplumbersconf.org/event/7/contributions/846/

On Fri, 29 May 2020 at 22:21, Logan Gunthorpe <[email protected]> wrote:
>
>
>
> On 2020-05-29 3:11 p.m., Marek Szyprowski wrote:
> > Patches are pending:
> > https://lore.kernel.org/linux-iommu/[email protected]/T/
>
> Cool, nice! Though, I still don't think that fixes the issue in
> i915_scatterlist.h given it still ignores sg_dma_len() and strictly
> relies on sg_next()/sg_is_last() to stop iterating -- and I suspect this
> is the bug that got in Tom's way.
>
> >> However, as Robin pointed out, there are other ugly tricks like stopping
> >> iterating through the SGL when sg_dma_len() is zero. For example, the
> >> AMD driver appears to use drm_prime_sg_to_page_addr_arrays() which does
> >> this trick and thus likely isn't buggy (otherwise, I'd expect someone to
> >> have complained by now seeing AMD has already switched to IOMMU-DMA.
> >
> > I'm not sure that this is a trick. Stopping at zero sg_dma_len() was
> > somewhere documented.
>
> Well whatever you want to call it, it is ugly to have some drivers doing
> one thing with the returned value and others assuming there's an extra
> zero at the end. It just causes confusion for people reading/copying the
> code. It would be better if they are all consistent. However, I concede
> stopping at zero should not be broken, presently.
>
> Logan

2020-08-26 18:15:39

by Robin Murphy

Subject: Re: [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api

Hi Tom,

On 2019-12-21 15:03, Tom Murphy wrote:
> This patchset converts the intel iommu driver to the dma-iommu api.
>
> While converting the driver I exposed a bug in the intel i915 driver which causes a huge amount of artifacts on the screen of my laptop. You can see a picture of it here:
> https://github.com/pippy360/kernelPatches/blob/master/IMG_20191219_225922.jpg
>
> This issue is most likely in the i915 driver and is most likely caused by the driver not respecting the return value of the dma_map_ops::map_sg function. You can see the driver ignoring the return value here:
> https://github.com/torvalds/linux/blob/7e0165b2f1a912a06e381e91f0f4e495f4ac3736/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c#L51
>
> Previously this didn’t cause issues because the intel map_sg always returned the same number of elements as the input scatter gather list but with the change to this dma-iommu api this is no longer the case. I wasn’t able to track the bug down to a specific line of code unfortunately.
>
> Could someone from the intel team look at this?
>
>
> I have been testing on a lenovo x1 carbon 5th generation. Let me know if there’s any more information you need.
>
> To allow my patch set to be tested I have added a patch (patch 8/8) in this series to disable combining sg segments in the dma-iommu api which fixes the bug but it doesn't fix the actual problem.
>
> As part of this patch series I copied the intel bounce buffer code to the dma-iommu path. The addition of the bounce buffer code took me by surprise. I did most of my development on this patch series before the bounce buffer code was added and my reimplementation in the dma-iommu path is very rushed and not properly tested but I’m running out of time to work on this patch set.
>
> On top of that I also didn’t port over the intel tracing code from this commit:
> https://github.com/torvalds/linux/commit/3b53034c268d550d9e8522e613a14ab53b8840d8#diff-6b3e7c4993f05e76331e463ab1fc87e1
> So all the work in that commit is now wasted. The code will need to be removed and reimplemented in the dma-iommu path. I would like to take the time to do this but I really don’t have the time at the moment and I want to get these changes out before the iommu code changes any more.

Further to what we just discussed at LPC, I've realised that tracepoints
are actually something I could do with *right now* for debugging my Arm
DMA ops series, so if I'm going to hack something up anyway I may as
well take responsibility for polishing it into a proper patch as well :)
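
The rough shape of such a tracepoint, for anyone curious, is something
like this (a minimal sketch with made-up names, not the actual patch):

#undef TRACE_SYSTEM
#define TRACE_SYSTEM dma

#if !defined(_TRACE_DMA_SKETCH_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_DMA_SKETCH_H

#include <linux/device.h>
#include <linux/tracepoint.h>

/* Made-up event; would be emitted from the dma-iommu map path. */
TRACE_EVENT(dma_iommu_map_sketch,
	TP_PROTO(struct device *dev, dma_addr_t dma_addr, size_t size),
	TP_ARGS(dev, dma_addr, size),
	TP_STRUCT__entry(
		__string(device, dev_name(dev))
		__field(dma_addr_t, dma_addr)
		__field(size_t, size)
	),
	TP_fast_assign(
		__assign_str(device, dev_name(dev));
		__entry->dma_addr = dma_addr;
		__entry->size = size;
	),
	TP_printk("dev=%s dma_addr=%pad size=%zu",
		  __get_str(device), &__entry->dma_addr, __entry->size)
);

#endif /* _TRACE_DMA_SKETCH_H */

/* This must be outside the include guard. */
#include <trace/define_trace.h>

The map path would then call trace_dma_iommu_map_sketch(dev, addr, size)
after a successful mapping.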

Robin.

>
> Tom Murphy (8):
> iommu/vt-d: clean up 32bit si_domain assignment
> iommu/vt-d: Use default dma_direct_* mapping functions for direct
> mapped devices
> iommu/vt-d: Remove IOVA handling code from non-dma_ops path
> iommu: Handle freelists when using deferred flushing in iommu drivers
> iommu: Add iommu_dma_free_cpu_cached_iovas function
> iommu: allow the dma-iommu api to use bounce buffers
> iommu/vt-d: Convert intel iommu driver to the iommu ops
> DO NOT MERGE: iommu: disable list appending in dma-iommu
>
> drivers/iommu/Kconfig | 1 +
> drivers/iommu/amd_iommu.c | 14 +-
> drivers/iommu/arm-smmu-v3.c | 3 +-
> drivers/iommu/arm-smmu.c | 3 +-
> drivers/iommu/dma-iommu.c | 183 +++++--
> drivers/iommu/exynos-iommu.c | 3 +-
> drivers/iommu/intel-iommu.c | 936 ++++----------------------------
> drivers/iommu/iommu.c | 39 +-
> drivers/iommu/ipmmu-vmsa.c | 3 +-
> drivers/iommu/msm_iommu.c | 3 +-
> drivers/iommu/mtk_iommu.c | 3 +-
> drivers/iommu/mtk_iommu_v1.c | 3 +-
> drivers/iommu/omap-iommu.c | 3 +-
> drivers/iommu/qcom_iommu.c | 3 +-
> drivers/iommu/rockchip-iommu.c | 3 +-
> drivers/iommu/s390-iommu.c | 3 +-
> drivers/iommu/tegra-gart.c | 3 +-
> drivers/iommu/tegra-smmu.c | 3 +-
> drivers/iommu/virtio-iommu.c | 3 +-
> drivers/vfio/vfio_iommu_type1.c | 2 +-
> include/linux/dma-iommu.h | 3 +
> include/linux/intel-iommu.h | 1 -
> include/linux/iommu.h | 32 +-
> 23 files changed, 345 insertions(+), 908 deletions(-)
>

2020-08-26 18:29:40

by Alex Deucher

Subject: Re: [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api

On Mon, Aug 24, 2020 at 2:56 AM Tom Murphy <[email protected]> wrote:
>
> Hi Logan/All,
>
> I have added a check for the sg_dma_len == 0 :
> """
> } __sgt_iter(struct scatterlist *sgl, bool dma) {
> struct sgt_iter s = { .sgp = sgl };
>
> + if (sgl && sg_dma_len(sgl) == 0)
> + s.sgp = NULL;
>
> if (s.sgp) {
> .....
> """
> at location [0],
> but it doesn't fix the problem.
>
> You're right though, this change does need to be made; this code
> doesn't handle entries with sg_dma_len(sg) == 0 correctly.
> So my guess is that we have more bugs in other parts of the i915
> driver (or there is a problem with my "sg_dma_len == 0" fix above).
> I have been trying to spot where else the code might be buggy but I
> haven't had any luck so far.
>
> I'm doing a microconference (at LPC 2020) this Wednesday [1] on this,
> if you're interested in attending.
> I'm hoping I can chat about it with a few people and find out how we
> can reproduce and fix these issues. I don't have any more time I can give
> to this unfortunately and it would be a shame for the work to go to
> waste.
>
> [0] https://github.com/torvalds/linux/blob/d012a7190fc1fd72ed48911e77ca97ba4521bccd/drivers/gpu/drm/i915/i915_scatterlist.h#L28
> [1] https://linuxplumbersconf.org/event/7/contributions/846/
>
> On Fri, 29 May 2020 at 22:21, Logan Gunthorpe <[email protected]> wrote:
> >
> >
> >
> > On 2020-05-29 3:11 p.m., Marek Szyprowski wrote:
> > > Patches are pending:
> > > https://lore.kernel.org/linux-iommu/[email protected]/T/
> >
> > Cool, nice! Though, I still don't think that fixes the issue in
> > i915_scatterlist.h given it still ignores sg_dma_len() and strictly
> > relies on sg_next()/sg_is_last() to stop iterating -- and I suspect this
> > is the bug that got in Tom's way.
> >
> > >> However, as Robin pointed out, there are other ugly tricks like stopping
> > >> iterating through the SGL when sg_dma_len() is zero. For example, the
> > >> AMD driver appears to use drm_prime_sg_to_page_addr_arrays() which does
> > >> this trick and thus likely isn't buggy (otherwise, I'd expect someone to
> > >> have complained by now seeing AMD has already switched to IOMMU-DMA.

We ran into the same issue with amdgpu and radeon when the AMD IOMMU
driver was converted and had to fix it as well. The relevant fixes
were:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=42e67b479eab6d26459b80b4867298232b0435e7
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=0199172f933342d8b1011aae2054a695c25726f4
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=47f7826c520ecd92ffbffe59ecaa2fe61e42ec70
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=c0f83d164fb8f3a2b7bc379a6c1e27d1123a9eab

Alex

> > >
> > > I'm not sure that this is a trick. Stopping at zero sg_dma_len() was
> > > somewhere documented.
> >
> > Well whatever you want to call it, it is ugly to have some drivers doing
> > one thing with the returned value and others assuming there's an extra
> > zero at the end. It just causes confusion for people reading/copying the
> > code. It would be better if they are all consistent. However, I concede
> > stopping at zero should not be broken, presently.
> >
> > Logan

2020-08-27 21:38:39

by Logan Gunthorpe

Subject: Re: [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api



On 2020-08-23 6:04 p.m., Tom Murphy wrote:
> I have added a check for the sg_dma_len == 0 :
> """
> } __sgt_iter(struct scatterlist *sgl, bool dma) {
> struct sgt_iter s = { .sgp = sgl };
>
> + if (sgl && sg_dma_len(sgl) == 0)
> + s.sgp = NULL;
>
> if (s.sgp) {
> .....
> """
> at location [0],
> but it doesn't fix the problem.

Based on my read of the code, it looks like we also need to change usage
of sgl->length... Something like the rough patch below, maybe?

Also, Tom, do you have an updated version of the patchset to convert the
Intel IOMMU to dma-iommu available? The last one I've found doesn't
apply cleanly (I'm assuming parts of it have been merged in slightly
modified forms).

Thanks,

Logan

--

diff --git a/drivers/gpu/drm/i915/i915_scatterlist.h b/drivers/gpu/drm/i915/i915_scatterlist.h
index b7b59328cb76..9367ac801f0c 100644
--- a/drivers/gpu/drm/i915/i915_scatterlist.h
+++ b/drivers/gpu/drm/i915/i915_scatterlist.h
@@ -27,13 +27,19 @@ static __always_inline struct sgt_iter {
} __sgt_iter(struct scatterlist *sgl, bool dma) {
struct sgt_iter s = { .sgp = sgl };

+ if (sgl && !sg_dma_len(s.sgp))
+ s.sgp = NULL;
+
if (s.sgp) {
s.max = s.curr = s.sgp->offset;
- s.max += s.sgp->length;
- if (dma)
+
+ if (dma) {
+ s.max += sg_dma_len(s.sgp);
s.dma = sg_dma_address(s.sgp);
- else
+ } else {
+ s.max += s.sgp->length;
s.pfn = page_to_pfn(sg_page(s.sgp));
+ }
}

return s;

2020-08-27 23:38:16

by Tom Murphy

Subject: Re: [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api

On Thu, 27 Aug 2020 at 22:36, Logan Gunthorpe <[email protected]> wrote:
>
>
>
> On 2020-08-23 6:04 p.m., Tom Murphy wrote:
> > I have added a check for the sg_dma_len == 0 :
> > """
> > } __sgt_iter(struct scatterlist *sgl, bool dma) {
> > struct sgt_iter s = { .sgp = sgl };
> >
> > + if (sgl && sg_dma_len(sgl) == 0)
> > + s.sgp = NULL;
> >
> > if (s.sgp) {
> > .....
> > """
> > at location [0],
> > but it doesn't fix the problem.
>
> Based on my read of the code, it looks like we also need to change usage
> of sgl->length... Something like the rough patch below, maybe?
>
> Also, Tom, do you have an updated version of the patchset to convert the
> Intel IOMMU to dma-iommu available? The last one I've found doesn't
> apply cleanly (I'm assuming parts of it have been merged in slightly
> modified forms).
>

I'll try and post one in the next 24 hours.

> Thanks,
>
> Logan
>
> --
>
> diff --git a/drivers/gpu/drm/i915/i915_scatterlist.h b/drivers/gpu/drm/i915/i915_scatterlist.h
> index b7b59328cb76..9367ac801f0c 100644
> --- a/drivers/gpu/drm/i915/i915_scatterlist.h
> +++ b/drivers/gpu/drm/i915/i915_scatterlist.h
> @@ -27,13 +27,19 @@ static __always_inline struct sgt_iter {
> } __sgt_iter(struct scatterlist *sgl, bool dma) {
> struct sgt_iter s = { .sgp = sgl };
>
> + if (sgl && !sg_dma_len(s.sgp))
> + s.sgp = NULL;
> +
> if (s.sgp) {
> s.max = s.curr = s.sgp->offset;
> - s.max += s.sgp->length;
> - if (dma)
> +
> + if (dma) {
> + s.max += sg_dma_len(s.sgp);
> s.dma = sg_dma_address(s.sgp);
> - else
> + } else {
> + s.max += s.sgp->length;
> s.pfn = page_to_pfn(sg_page(s.sgp));
> + }
> }
>
> return s;

2020-09-03 20:28:22

by Tom Murphy

Subject: Re: [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api

On Fri, 28 Aug 2020 at 00:34, Tom Murphy <[email protected]> wrote:
>
> On Thu, 27 Aug 2020 at 22:36, Logan Gunthorpe <[email protected]> wrote:
> >
> >
> >
> > On 2020-08-23 6:04 p.m., Tom Murphy wrote:
> > > I have added a check for the sg_dma_len == 0 :
> > > """
> > > } __sgt_iter(struct scatterlist *sgl, bool dma) {
> > > struct sgt_iter s = { .sgp = sgl };
> > >
> > > + if (sgl && sg_dma_len(sgl) == 0)
> > > + s.sgp = NULL;
> > >
> > > if (s.sgp) {
> > > .....
> > > """
> > > at location [0],
> > > but it doesn't fix the problem.
> >
> > Based on my read of the code, it looks like we also need to change usage
> > of sgl->length... Something like the rough patch below, maybe?
> >
> > Also, Tom, do you have an updated version of the patchset to convert the
> > Intel IOMMU to dma-iommu available? The last one I've found doesn't
> > apply cleanly (I'm assuming parts of it have been merged in slightly
> > modified forms).
> >
>
> I'll try and post one in the next 24 hours.

I have just posted it now. The subject of the cover letter is:
"[PATCH V2 0/5] Convert the intel iommu driver to the dma-iommu api"

>
> > Thanks,
> >
> > Logan
> >
> > --
> >
> > diff --git a/drivers/gpu/drm/i915/i915_scatterlist.h b/drivers/gpu/drm/i915/i915_scatterlist.h
> > index b7b59328cb76..9367ac801f0c 100644
> > --- a/drivers/gpu/drm/i915/i915_scatterlist.h
> > +++ b/drivers/gpu/drm/i915/i915_scatterlist.h
> > @@ -27,13 +27,19 @@ static __always_inline struct sgt_iter {
> > } __sgt_iter(struct scatterlist *sgl, bool dma) {
> > struct sgt_iter s = { .sgp = sgl };
> >
> > + if (sgl && !sg_dma_len(s.sgp))
> > + s.sgp = NULL;
> > +
> > if (s.sgp) {
> > s.max = s.curr = s.sgp->offset;
> > - s.max += s.sgp->length;
> > - if (dma)
> > +
> > + if (dma) {
> > + s.max += sg_dma_len(s.sgp);
> > s.dma = sg_dma_address(s.sgp);
> > - else
> > + } else {
> > + s.max += s.sgp->length;
> > s.pfn = page_to_pfn(sg_page(s.sgp));
> > + }
> > }
> >
> > return s;

2020-09-08 19:56:59

by Tvrtko Ursulin

Subject: Re: [Intel-gfx] [PATCH 0/8] Convert the intel iommu driver to the dma-iommu api


Hi,

On 27/08/2020 22:36, Logan Gunthorpe wrote:
> On 2020-08-23 6:04 p.m., Tom Murphy wrote:
>> I have added a check for the sg_dma_len == 0 :
>> """
>> } __sgt_iter(struct scatterlist *sgl, bool dma) {
>> struct sgt_iter s = { .sgp = sgl };
>>
>> + if (sgl && sg_dma_len(sgl) == 0)
>> + s.sgp = NULL;
>>
>> if (s.sgp) {
>> .....
>> """
>> at location [0],
>> but it doesn't fix the problem.
>
> Based on my read of the code, it looks like we also need to change usage
> of sgl->length... Something like the rough patch below, maybe?

This thread was brought to my attention and I initially missed this
reply. Essentially I came to the same conclusion about the need to use
sg_dma_len. One small correction below:

> Also, Tom, do you have an updated version of the patchset to convert the
> Intel IOMMU to dma-iommu available? The last one I've found doesn't
> apply cleanly (I'm assuming parts of it have been merged in slightly
> modified forms).
>
> Thanks,
>
> Logan
>
> --
>
> diff --git a/drivers/gpu/drm/i915/i915_scatterlist.h b/drivers/gpu/drm/i915/i915_scatterlist.h
> index b7b59328cb76..9367ac801f0c 100644
> --- a/drivers/gpu/drm/i915/i915_scatterlist.h
> +++ b/drivers/gpu/drm/i915/i915_scatterlist.h
> @@ -27,13 +27,19 @@ static __always_inline struct sgt_iter {
> } __sgt_iter(struct scatterlist *sgl, bool dma) {
> struct sgt_iter s = { .sgp = sgl };
>
> + if (sgl && !sg_dma_len(s.sgp))

I'd extend the condition to be, just to be safe:

if (dma && sgl && !sg_dma_len(s.sgp))

> + s.sgp = NULL;
> +
> if (s.sgp) {
> s.max = s.curr = s.sgp->offset;
> - s.max += s.sgp->length;
> - if (dma)
> +
> + if (dma) {
> + s.max += sg_dma_len(s.sgp);
> s.dma = sg_dma_address(s.sgp);
> - else
> + } else {
> + s.max += s.sgp->length;
> s.pfn = page_to_pfn(sg_page(s.sgp));
> + }

Otherwise, has this been tested, or alternatively, how can it be tested?
(How to reproduce the issue.)

Regards,

Tvrtko