2020-05-21 11:33:24

by Prakash Gupta

[permalink] [raw]
Subject: [PATCH] iommu/dma: limit iova free size to unmapped iova

Limit the iova size used while freeing to the unmapped size. Without
this, even on unmap failure an invalid iova is pushed into the iova
rcache and can subsequently cause a panic when an rcache magazine is
freed.

Signed-off-by: Prakash Gupta <[email protected]>

:100644 100644 4959f5df21bd 098f7d377e04 M drivers/iommu/dma-iommu.c

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 4959f5df21bd..098f7d377e04 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -472,7 +472,8 @@ static void __iommu_dma_unmap(struct device *dev, dma_addr_t dma_addr,

 	if (!cookie->fq_domain)
 		iommu_tlb_sync(domain, &iotlb_gather);
-	iommu_dma_free_iova(cookie, dma_addr, size);
+	if (unmapped)
+		iommu_dma_free_iova(cookie, dma_addr, unmapped);
 }

static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
--
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a
member of the Code Aurora Forum, hosted by The Linux Foundation


2020-05-21 18:51:19

by Andrew Morton

[permalink] [raw]
Subject: Re: [PATCH] iommu/dma: limit iova free size to unmapped iova

On Thu, 21 May 2020 17:00:04 +0530 Prakash Gupta <[email protected]> wrote:

> Limit the iova size used while freeing to the unmapped size. Without
> this, even on unmap failure an invalid iova is pushed into the iova
> rcache and can subsequently cause a panic when an rcache magazine is
> freed.
>
> Signed-off-by: Prakash Gupta <[email protected]>
>

I think we need a cc:stable here?

> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -472,7 +472,8 @@ static void __iommu_dma_unmap(struct device *dev, dma_addr_t dma_addr,
>
>  	if (!cookie->fq_domain)
>  		iommu_tlb_sync(domain, &iotlb_gather);
> -	iommu_dma_free_iova(cookie, dma_addr, size);
> +	if (unmapped)
> +		iommu_dma_free_iova(cookie, dma_addr, unmapped);
> }
>
> static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,

I'll assume that Joerg will handle this fix?

2020-05-21 20:18:36

by Robin Murphy

[permalink] [raw]
Subject: Re: [PATCH] iommu/dma: limit iova free size to unmapped iova

On 2020-05-21 12:30, Prakash Gupta wrote:
> Limit the iova size used while freeing to the unmapped size. Without
> this, even on unmap failure an invalid iova is pushed into the iova
> rcache and can subsequently cause a panic when an rcache magazine is
> freed.

Can you elaborate on that panic?

> Signed-off-by: Prakash Gupta <[email protected]>
>
> :100644 100644 4959f5df21bd 098f7d377e04 M drivers/iommu/dma-iommu.c
>
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 4959f5df21bd..098f7d377e04 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -472,7 +472,8 @@ static void __iommu_dma_unmap(struct device *dev, dma_addr_t dma_addr,
>
>  	if (!cookie->fq_domain)
>  		iommu_tlb_sync(domain, &iotlb_gather);
> -	iommu_dma_free_iova(cookie, dma_addr, size);
> +	if (unmapped)
> +		iommu_dma_free_iova(cookie, dma_addr, unmapped);

Frankly, if any part of the unmap fails then things have gone
catastrophically wrong already, but either way this isn't right. The
IOVA API doesn't support partial freeing - an IOVA *must* be freed with
its original size, or not freed at all, otherwise it will corrupt the
state of the rcaches and risk a cascade of further misbehaviour for
future callers.
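
For reference, the free size directly selects the rcache bucket -
roughly, from drivers/iommu/iova.c of this era (a paraphrased sketch,
not the exact upstream source):

static bool iova_rcache_insert(struct iova_domain *iovad, unsigned long pfn,
			       unsigned long size)
{
	/* The bucket is chosen purely by the size the caller passes in,
	 * so freeing with a size different from the allocation corrupts
	 * the per-order caches. */
	unsigned int log_size = order_base_2(size);

	if (log_size >= IOVA_RANGE_CACHE_MAX_SIZE)
		return false;

	return __iova_rcache_insert(iovad, &iovad->rcaches[log_size], pfn);
}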

TBH my gut feeling here is that you're really just trying to treat a
symptom of another bug elsewhere, namely some driver calling dma_unmap_*
or dma_free_* with the wrong address or size in the first place.

Robin.

> }
>
> static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
>

2020-05-22 06:28:01

by Prakash Gupta

[permalink] [raw]
Subject: Re: [PATCH] iommu/dma: limit iova free size to unmapped iova

On 2020-05-22 01:46, Robin Murphy wrote:
> On 2020-05-21 12:30, Prakash Gupta wrote:
>> Limit the iova size used while freeing to the unmapped size. Without
>> this, even on unmap failure an invalid iova is pushed into the iova
>> rcache and can subsequently cause a panic when an rcache magazine is
>> freed.
>
> Can you elaborate on that panic?
>
We have seen a couple of stability issues around this.
Below is one such example:

kernel BUG at kernel/msm-4.19/drivers/iommu/iova.c:904!
iova_magazine_free_pfns
iova_rcache_insert
free_iova_fast
__iommu_unmap_page
iommu_dma_unmap_page


It turned out that an iova pfn 0 got into the iova_rcache. One
possibility I see is a client calling unmap with an invalid dma_addr.
The unmap call fails with a warning but still tries to free the iova,
which inserts the invalid pfn into the rcache. Later, when the magazine
holding the invalid pfn is freed, private_find_iova() returns NULL for
the invalid iova and hits the BUG_ON() condition.
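
For illustration, the unmap path at the time of this patch looks
roughly like this (a trimmed sketch of __iommu_dma_unmap() in
drivers/iommu/dma-iommu.c, not the exact upstream code):

	unmapped = iommu_unmap_fast(domain, dma_addr, size, &iotlb_gather);
	WARN_ON(unmapped != size);	/* a bad address only warns here... */

	if (!cookie->fq_domain)
		iommu_tlb_sync(domain, &iotlb_gather);
	/* ...but the bogus iova is still pushed into the rcache */
	iommu_dma_free_iova(cookie, dma_addr, size);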

>> Signed-off-by: Prakash Gupta <[email protected]>
>>
>> :100644 100644 4959f5df21bd 098f7d377e04 M drivers/iommu/dma-iommu.c
>>
>> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
>> index 4959f5df21bd..098f7d377e04 100644
>> --- a/drivers/iommu/dma-iommu.c
>> +++ b/drivers/iommu/dma-iommu.c
>> @@ -472,7 +472,8 @@ static void __iommu_dma_unmap(struct device *dev, dma_addr_t dma_addr,
>>
>>  	if (!cookie->fq_domain)
>>  		iommu_tlb_sync(domain, &iotlb_gather);
>> -	iommu_dma_free_iova(cookie, dma_addr, size);
>> +	if (unmapped)
>> +		iommu_dma_free_iova(cookie, dma_addr, unmapped);
>
> Frankly, if any part of the unmap fails then things have gone
> catastrophically wrong already, but either way this isn't right. The
> IOVA API doesn't support partial freeing - an IOVA *must* be freed
> with its original size, or not freed at all, otherwise it will corrupt
> the state of the rcaches and risk a cascade of further misbehaviour
> for future callers.
>
I agree, we shouldn't be freeing a partial iova. Instead, checking that
the unmap was successful before freeing the iova should be sufficient.
The change can instead be something like this:

-	iommu_dma_free_iova(cookie, dma_addr, size);
+	if (unmapped)
+		iommu_dma_free_iova(cookie, dma_addr, size);

> TBH my gut feeling here is that you're really just trying to treat a
> symptom of another bug elsewhere, namely some driver calling
> dma_unmap_* or dma_free_* with the wrong address or size in the first
> place.
>
This condition would arise only if a driver calls dma_unmap/free_* with
a 0 iova_pfn. That is flagged with a warning during unmap, but the panic
triggers later, during an unrelated dma_map/unmap_*. If unmap has
already failed for an invalid iova, there is no reason we should treat
it as a valid iova and free it. This part should be fixed.

On 2020-05-22 00:19, Andrew Morton wrote:
> I think we need a cc:stable here?
>
Added now.

Thanks,
Prakash

2020-05-22 09:28:36

by Robin Murphy

[permalink] [raw]
Subject: Re: [PATCH] iommu/dma: limit iova free size to unmapped iova

On 2020-05-22 07:25, [email protected] wrote:
> On 2020-05-22 01:46, Robin Murphy wrote:
>> On 2020-05-21 12:30, Prakash Gupta wrote:
>>> Limit the iova size used while freeing to the unmapped size. Without
>>> this, even on unmap failure an invalid iova is pushed into the iova
>>> rcache and can subsequently cause a panic when an rcache magazine is
>>> freed.
>>
>> Can you elaborate on that panic?
>>
> We have seen a couple of stability issues around this.
> Below is one such example:
>
> kernel BUG at kernel/msm-4.19/drivers/iommu/iova.c:904!
> iova_magazine_free_pfns
> iova_rcache_insert
> free_iova_fast
> __iommu_unmap_page
> iommu_dma_unmap_page

OK, so it's not some NULL dereference or anything completely unexpected,
that's good.

> It turned out that an iova pfn 0 got into the iova_rcache. One
> possibility I see is a client calling unmap with an invalid dma_addr.
> The unmap call fails with a warning but still tries to free the iova,
> which inserts the invalid pfn into the rcache. Later, when the magazine
> holding the invalid pfn is freed, private_find_iova() returns NULL for
> the invalid iova and hits the BUG_ON() condition.

That would indeed be a bug in whatever driver made the offending
dma_unmap call.

>>> Signed-off-by: Prakash Gupta <[email protected]>
>>>
>>> :100644 100644 4959f5df21bd 098f7d377e04 M    drivers/iommu/dma-iommu.c
>>>
>>> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
>>> index 4959f5df21bd..098f7d377e04 100644
>>> --- a/drivers/iommu/dma-iommu.c
>>> +++ b/drivers/iommu/dma-iommu.c
>>> @@ -472,7 +472,8 @@ static void __iommu_dma_unmap(struct device *dev, dma_addr_t dma_addr,
>>>
>>>  	if (!cookie->fq_domain)
>>>  		iommu_tlb_sync(domain, &iotlb_gather);
>>> -	iommu_dma_free_iova(cookie, dma_addr, size);
>>> +	if (unmapped)
>>> +		iommu_dma_free_iova(cookie, dma_addr, unmapped);
>>
>> Frankly, if any part of the unmap fails then things have gone
>> catastrophically wrong already, but either way this isn't right. The
>> IOVA API doesn't support partial freeing - an IOVA *must* be freed
>> with its original size, or not freed at all, otherwise it will corrupt
>> the state of the rcaches and risk a cascade of further misbehaviour
>> for future callers.
>>
> I agree, we shouldn't be freeing a partial iova. Instead, checking that
> the unmap was successful before freeing the iova should be sufficient.
> The change can instead be something like this:
>
> -	iommu_dma_free_iova(cookie, dma_addr, size);
> +	if (unmapped)
> +		iommu_dma_free_iova(cookie, dma_addr, size);
>
>> TBH my gut feeling here is that you're really just trying to treat a
>> symptom of another bug elsewhere, namely some driver calling
>> dma_unmap_* or dma_free_* with the wrong address or size in the first
>> place.
>>
> This condition would arise only if a driver calls dma_unmap/free_* with
> a 0 iova_pfn. That is flagged with a warning during unmap, but the panic
> triggers later, during an unrelated dma_map/unmap_*. If unmap has
> already failed for an invalid iova, there is no reason we should treat
> it as a valid iova and free it. This part should be fixed.

I disagree. In general, if drivers call the DMA API incorrectly it is
liable to lead to data loss, memory corruption, and various other
unpleasant misbehaviour - it is not the DMA layer's job to attempt to
paper over driver bugs.

There *is* an argument for downgrading the BUG_ON() in
iova_magazine_free_pfns() to a WARN_ON(), since frankly it isn't a
sufficiently serious condition to justify killing the whole machine
immediately, but NAK to bodging the iommu-dma mid-layer to "fix" that. A
serious bug already happened elsewhere, so trying to hide the fallout
really doesn't help anyone.

Robin.

> On 2020-05-22 00:19, Andrew Morton wrote:
>> I think we need a cc:stable here?
>>
> Added now.
>
> Thanks,
> Prakash

2020-05-25 20:03:12

by Joerg Roedel

[permalink] [raw]
Subject: Re: [PATCH] iommu/dma: limit iova free size to unmapped iova

On Thu, May 21, 2020 at 11:49:06AM -0700, Andrew Morton wrote:
> I'll assume that Joerg will handle this fix?

Yes, I will, if it turns out to be necessary.


Regards,

Joerg

2020-05-26 07:59:16

by Prakash Gupta

[permalink] [raw]
Subject: Re: [PATCH] iommu/dma: limit iova free size to unmapped iova

On 2020-05-22 14:54, Robin Murphy wrote:
> On 2020-05-22 07:25, [email protected] wrote:
>> On 2020-05-22 01:46, Robin Murphy wrote:
>>> On 2020-05-21 12:30, Prakash Gupta wrote:
>> I agree, we shouldn't be freeing a partial iova. Instead, checking that
>> the unmap was successful before freeing the iova should be sufficient.
>> The change can instead be something like this:
>>
>> -	iommu_dma_free_iova(cookie, dma_addr, size);
>> +	if (unmapped)
>> +		iommu_dma_free_iova(cookie, dma_addr, size);
>>
>>> TBH my gut feeling here is that you're really just trying to treat a
>>> symptom of another bug elsewhere, namely some driver calling
>>> dma_unmap_* or dma_free_* with the wrong address or size in the first
>>> place.
>>>
>> This condition would arise only if a driver calls dma_unmap/free_* with
>> a 0 iova_pfn. That is flagged with a warning during unmap, but the panic
>> triggers later, during an unrelated dma_map/unmap_*. If unmap has
>> already failed for an invalid iova, there is no reason we should treat
>> it as a valid iova and free it. This part should be fixed.
>
> I disagree. In general, if drivers call the DMA API incorrectly it is
> liable to lead to data loss, memory corruption, and various other
> unpleasant misbehaviour - it is not the DMA layer's job to attempt to
> paper over driver bugs.
>
> There *is* an argument for downgrading the BUG_ON() in
> iova_magazine_free_pfns() to a WARN_ON(), since frankly it isn't a
> sufficiently serious condition to justify killing the whole machine
> immediately, but NAK to bodging the iommu-dma mid-layer to "fix" that.
> A serious bug already happened elsewhere, so trying to hide the
> fallout really doesn't help anyone.
>
Sorry for the delayed response, it was a long weekend.
I agree that an invalid DMA API call can result in unexpected issues and
the client should fix it, but the present behavior makes it difficult to
catch cases where a driver is making wrong DMA API calls. When an
invalid iova pfn is passed, it doesn't fail then and there, even though
the DMA layer is aware that the iova is invalid. It fails much later, in
the context of a valid map/unmap, with a BUG_ON().

Downgrading the BUG_ON() to a WARN_ON() in iova_magazine_free_pfns()
will not help much, as the invalid iova will cause a NULL pointer
dereference.
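
The current loop in iova_magazine_free_pfns() (drivers/iommu/iova.c) is
roughly this - a paraphrased sketch, not the exact source:

	for (i = 0; i < mag->size; ++i) {
		struct iova *iova = private_find_iova(iovad, mag->pfns[i]);

		/* pfn 0 was never allocated, so iova is NULL here */
		BUG_ON(!iova);
		/* a bare WARN_ON() instead would dereference NULL below */
		private_free_iova(iovad, iova);
	}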

I see no reason why the DMA layer should free an iova for which unmap
failed. IMHO, queuing an invalid iova (which already failed unmap) to
the rcache, where it will eventually crash the system, looks like an
iommu-dma layer issue.

Thanks,
Prakash

2020-06-02 09:42:07

by Prakash Gupta

[permalink] [raw]
Subject: Re: [PATCH] iommu/dma: limit iova free size to unmapped iova

On 2020-05-26 12:49, [email protected] wrote:
> On 2020-05-22 14:54, Robin Murphy wrote:
>> On 2020-05-22 07:25, [email protected] wrote:
>>> On 2020-05-22 01:46, Robin Murphy wrote:
>>>> On 2020-05-21 12:30, Prakash Gupta wrote:
>>> I agree, we shouldn't be freeing a partial iova. Instead, checking
>>> that the unmap was successful before freeing the iova should be
>>> sufficient. The change can instead be something like this:
>>>
>>> -	iommu_dma_free_iova(cookie, dma_addr, size);
>>> +	if (unmapped)
>>> +		iommu_dma_free_iova(cookie, dma_addr, size);
>>>
>>>> TBH my gut feeling here is that you're really just trying to treat a
>>>> symptom of another bug elsewhere, namely some driver calling
>>>> dma_unmap_* or dma_free_* with the wrong address or size in the
>>>> first
>>>> place.
>>>>
>>> This condition would arise only if a driver calls dma_unmap/free_*
>>> with a 0 iova_pfn. That is flagged with a warning during unmap, but
>>> the panic triggers later, during an unrelated dma_map/unmap_*. If
>>> unmap has already failed for an invalid iova, there is no reason we
>>> should treat it as a valid iova and free it. This part should be
>>> fixed.
>>
>> I disagree. In general, if drivers call the DMA API incorrectly it is
>> liable to lead to data loss, memory corruption, and various other
>> unpleasant misbehaviour - it is not the DMA layer's job to attempt to
>> paper over driver bugs.
>>
>> There *is* an argument for downgrading the BUG_ON() in
>> iova_magazine_free_pfns() to a WARN_ON(), since frankly it isn't a
>> sufficiently serious condition to justify killing the whole machine
>> immediately, but NAK to bodging the iommu-dma mid-layer to "fix" that.
>> A serious bug already happened elsewhere, so trying to hide the
>> fallout really doesn't help anyone.
>>
> Sorry for the delayed response, it was a long weekend.
> I agree that an invalid DMA API call can result in unexpected issues and
> the client should fix it, but the present behavior makes it difficult to
> catch cases where a driver is making wrong DMA API calls. When an
> invalid iova pfn is passed, it doesn't fail then and there, even though
> the DMA layer is aware that the iova is invalid. It fails much later, in
> the context of a valid map/unmap, with a BUG_ON().
>
> Downgrading the BUG_ON() to a WARN_ON() in iova_magazine_free_pfns()
> will not help much, as the invalid iova will cause a NULL pointer
> dereference.
>
> I see no reason why the DMA layer should free an iova for which unmap
> failed. IMHO, queuing an invalid iova (which already failed unmap) to
> the rcache, where it will eventually crash the system, looks like an
> iommu-dma layer issue.
>
> Thanks,
> Prakash

ping?

2020-06-02 13:10:24

by Robin Murphy

[permalink] [raw]
Subject: Re: [PATCH] iommu/dma: limit iova free size to unmapped iova

On 2020-05-26 08:19, [email protected] wrote:
> On 2020-05-22 14:54, Robin Murphy wrote:
>> On 2020-05-22 07:25, [email protected] wrote:
>>> On 2020-05-22 01:46, Robin Murphy wrote:
>>>> On 2020-05-21 12:30, Prakash Gupta wrote:
>>> I agree, we shouldn't be freeing a partial iova. Instead, checking
>>> that the unmap was successful before freeing the iova should be
>>> sufficient. The change can instead be something like this:
>>>
>>> -	iommu_dma_free_iova(cookie, dma_addr, size);
>>> +	if (unmapped)
>>> +		iommu_dma_free_iova(cookie, dma_addr, size);
>>>
>>>> TBH my gut feeling here is that you're really just trying to treat a
>>>> symptom of another bug elsewhere, namely some driver calling
>>>> dma_unmap_* or dma_free_* with the wrong address or size in the first
>>>> place.
>>>>
>>> This condition would arise only if a driver calls dma_unmap/free_*
>>> with a 0 iova_pfn. That is flagged with a warning during unmap, but
>>> the panic triggers later, during an unrelated dma_map/unmap_*. If
>>> unmap has already failed for an invalid iova, there is no reason we
>>> should treat it as a valid iova and free it. This part should be
>>> fixed.
>>
>> I disagree. In general, if drivers call the DMA API incorrectly it is
>> liable to lead to data loss, memory corruption, and various other
>> unpleasant misbehaviour - it is not the DMA layer's job to attempt to
>> paper over driver bugs.
>>
>> There *is* an argument for downgrading the BUG_ON() in
>> iova_magazine_free_pfns() to a WARN_ON(), since frankly it isn't a
>> sufficiently serious condition to justify killing the whole machine
>> immediately, but NAK to bodging the iommu-dma mid-layer to "fix" that.
>> A serious bug already happened elsewhere, so trying to hide the
>> fallout really doesn't help anyone.
>>
> Sorry for the delayed response, it was a long weekend.
> I agree that an invalid DMA API call can result in unexpected issues and
> the client should fix it, but the present behavior makes it difficult to
> catch cases where a driver is making wrong DMA API calls. When an
> invalid iova pfn is passed, it doesn't fail then and there, even though
> the DMA layer is aware that the iova is invalid. It fails much later, in
> the context of a valid map/unmap, with a BUG_ON().
>
> Downgrading the BUG_ON() to a WARN_ON() in iova_magazine_free_pfns()
> will not help much, as the invalid iova will cause a NULL pointer
> dereference.

Obviously I didn't mean a literal s/BUG/WARN/ substitution - some
additional control flow to actually handle the error case was implied.

I'll write up the patch myself, since it's easier than further debating.
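
For illustration, the kind of shape I have in mind (a sketch only, not
the final patch):

	for (i = 0; i < mag->size; ++i) {
		struct iova *iova = private_find_iova(iovad, mag->pfns[i]);

		/* complain about the bogus PFN, but skip it rather than
		 * killing the machine or dereferencing NULL */
		if (WARN_ON(!iova))
			continue;

		private_free_iova(iovad, iova);
	}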

> I see no reason why the DMA layer should free an iova for which unmap
> failed. IMHO, queuing an invalid iova (which already failed unmap) to
> the rcache, where it will eventually crash the system, looks like an
> iommu-dma layer issue.

What if the unmap fails because the address range is already entirely
unmapped? Freeing the IOVA (or at least attempting to) would be
logically appropriate in that case. In fact some IOMMU drivers might not
even consider that a failure, so the DMA layer may not even be aware
that it's been handed a bogus unallocated address.

The point is that unmapping *doesn't* fail under normal and correct
operation, so the DMA layer should not expect to have to handle it. Even
if it does happen, that's a highly exceptional case that the DMA layer
cannot recover from by itself; at best it can just push the problem
elsewhere. It's pretty hard to justify doing extra work to simply move
an exceptional problem around without really addressing it.

And in this particular case, personally I would *much* rather see
warnings spewing from both the pagetable and IOVA code as early as
possible to clearly indicate that the DMA layer itself has been thrown
out of sync, than just have warnings that might represent some other
source of pagetable corruption (or at worst, depending on the pagetable
code, no warnings at all and only have dma_map_*() calls quietly start
failing much, much later due to all the IOVA space having been leaked by
bad unmaps).

Robin.

2020-06-03 07:39:23

by Prakash Gupta

[permalink] [raw]
Subject: Re: [PATCH] iommu/dma: limit iova free size to unmapped iova

On 2020-06-02 18:37, Robin Murphy wrote:
> On 2020-05-26 08:19, [email protected] wrote:
>> On 2020-05-22 14:54, Robin Murphy wrote:
>>> On 2020-05-22 07:25, [email protected] wrote:
>>>> On 2020-05-22 01:46, Robin Murphy wrote:
>>>>> On 2020-05-21 12:30, Prakash Gupta wrote:
>> Sorry for the delayed response, it was a long weekend.
>> I agree that an invalid DMA API call can result in unexpected issues
>> and the client should fix it, but the present behavior makes it
>> difficult to catch cases where a driver is making wrong DMA API calls.
>> When an invalid iova pfn is passed, it doesn't fail then and there,
>> even though the DMA layer is aware that the iova is invalid. It fails
>> much later, in the context of a valid map/unmap, with a BUG_ON().
>>
>> Downgrading the BUG_ON() to a WARN_ON() in iova_magazine_free_pfns()
>> will not help much, as the invalid iova will cause a NULL pointer
>> dereference.
>
> Obviously I didn't mean a literal s/BUG/WARN/ substitution - some
> additional control flow to actually handle the error case was implied.
>
> I'll write up the patch myself, since it's easier than further
> debating.
>

I think it will address the issue I am facing as well. I will wait for
your patch.

>> I see no reason why the DMA layer should free an iova for which unmap
>> failed. IMHO, queuing an invalid iova (which already failed unmap) to
>> the rcache, where it will eventually crash the system, looks like an
>> iommu-dma layer issue.
>
> What if the unmap fails because the address range is already entirely
> unmapped? Freeing the IOVA (or at least attempting to) would be
> logically appropriate in that case. In fact some IOMMU drivers might
> not even consider that a failure, so the DMA layer may not even be
> aware that it's been handed a bogus unallocated address.
>
> The point is that unmapping *doesn't* fail under normal and correct
> operation, so the DMA layer should not expect to have to handle it.
> Even if it does happen, that's a highly exceptional case that the DMA
> layer cannot recover from by itself; at best it can just push the
> problem elsewhere. It's pretty hard to justify doing extra work to
> simply move an exceptional problem around without really addressing
> it.
>

iommu_unmap() expects all areas within the unmap size to be mapped; in
fact, it abandons further unmapping if it finds any chunk that is not
mapped. So if an IOMMU implementation does not return failure for an
already-unmapped area, it is prone to the issue where the DMA API reuses
an IOVA which is already mapped.
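
For reference, the core loop of __iommu_unmap() in drivers/iommu/iommu.c
is roughly the following (a paraphrased sketch, not the exact source):

	while (unmapped < size) {
		size_t pgsize = iommu_pgsize(domain, iova, size - unmapped);

		unmapped_page = ops->unmap(domain, iova, pgsize, iotlb_gather);
		if (!unmapped_page)
			break;	/* the first unmapped chunk abandons the rest */

		iova += unmapped_page;
		unmapped += unmapped_page;
	}

	return unmapped;	/* callers see a short count as failure */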

In the case where the iommu implementation returns an error for an
already-unmapped area, there is currently no way to distinguish an unmap
failure caused by the range being already unmapped from, say, a failure
where a memory allocation fails while partially unmapping a section
mapping and unmap therefore returns failure.
In the second case, we would free the iova even with the mapping still
present, and subsequent dma mappings will keep failing if they use this
freed iova. For managed iovas, both map and unmap should be done via the
DMA APIs, so it should be safe to expect that when a range is unmapped
with the DMA APIs the corresponding iova is also freed; there shouldn't
be a real need to free an iova where unmap fails because the range was
already entirely unmapped.

> And in this particular case, personally I would *much* rather see
> warnings spewing from both the pagetable and IOVA code as early as
> possible to clearly indicate that the DMA layer itself has been thrown
> out of sync, than just have warnings that might represent some other
> source of pagetable corruption (or at worst, depending on the
> pagetable code, no warnings at all and only have dma_map_*() calls
> quietly start failing much, much later due to all the IOVA space
> having been leaked by bad unmaps).
>

I am not sure how useful this freed iova is if the corresponding
mappings are not unmapped. We won't be able to use those iovas; a
subsequent iommu_map() will fail if it uses this freed iova. So it's
important to ensure we only free an iova which was unmapped
successfully.

Thanks,
Prakash