2018-11-15 15:49:33

by Souptick Joarder

Subject: [PATCH 6/9] iommu/dma-iommu.c: Convert to use vm_insert_range

Convert to use vm_insert_range() to map range of kernel
memory to user vma.

Signed-off-by: Souptick Joarder <[email protected]>
Reviewed-by: Matthew Wilcox <[email protected]>
---
drivers/iommu/dma-iommu.c | 12 ++----------
1 file changed, 2 insertions(+), 10 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index d1b0475..69c66b1 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -622,17 +622,9 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,

int iommu_dma_mmap(struct page **pages, size_t size, struct vm_area_struct *vma)
{
- unsigned long uaddr = vma->vm_start;
- unsigned int i, count = PAGE_ALIGN(size) >> PAGE_SHIFT;
- int ret = -ENXIO;
+ unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;

- for (i = vma->vm_pgoff; i < count && uaddr < vma->vm_end; i++) {
- ret = vm_insert_page(vma, uaddr, pages[i]);
- if (ret)
- break;
- uaddr += PAGE_SIZE;
- }
- return ret;
+ return vm_insert_range(vma, vma->vm_start, pages, count);
}

static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
--
1.9.1



2018-11-24 08:45:26

by Robin Murphy

Subject: Re: [PATCH 6/9] iommu/dma-iommu.c: Convert to use vm_insert_range

On 15/11/2018 15:49, Souptick Joarder wrote:
> Convert to use vm_insert_range() to map range of kernel
> memory to user vma.
>
> Signed-off-by: Souptick Joarder <[email protected]>
> Reviewed-by: Matthew Wilcox <[email protected]>
> ---
> drivers/iommu/dma-iommu.c | 12 ++----------
> 1 file changed, 2 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index d1b0475..69c66b1 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -622,17 +622,9 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
>
> int iommu_dma_mmap(struct page **pages, size_t size, struct vm_area_struct *vma)
> {
> - unsigned long uaddr = vma->vm_start;
> - unsigned int i, count = PAGE_ALIGN(size) >> PAGE_SHIFT;
> - int ret = -ENXIO;
> + unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
>
> - for (i = vma->vm_pgoff; i < count && uaddr < vma->vm_end; i++) {
> - ret = vm_insert_page(vma, uaddr, pages[i]);
> - if (ret)
> - break;
> - uaddr += PAGE_SIZE;
> - }
> - return ret;
> + return vm_insert_range(vma, vma->vm_start, pages, count);

AFAICS, vm_insert_range() doesn't respect vma->vm_pgoff, so doesn't this
break partial mmap()s of a large buffer? (which I believe can be a thing)

Robin.

> }
>
> static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,
>

2018-11-24 08:53:53

by Matthew Wilcox

Subject: Re: [PATCH 6/9] iommu/dma-iommu.c: Convert to use vm_insert_range

On Fri, Nov 23, 2018 at 05:23:06PM +0000, Robin Murphy wrote:
> On 15/11/2018 15:49, Souptick Joarder wrote:
> > Convert to use vm_insert_range() to map range of kernel
> > memory to user vma.
> >
> > Signed-off-by: Souptick Joarder <[email protected]>
> > Reviewed-by: Matthew Wilcox <[email protected]>
> > ---
> > drivers/iommu/dma-iommu.c | 12 ++----------
> > 1 file changed, 2 insertions(+), 10 deletions(-)
> >
> > diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> > index d1b0475..69c66b1 100644
> > --- a/drivers/iommu/dma-iommu.c
> > +++ b/drivers/iommu/dma-iommu.c
> > @@ -622,17 +622,9 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
> > int iommu_dma_mmap(struct page **pages, size_t size, struct vm_area_struct *vma)
> > {
> > - unsigned long uaddr = vma->vm_start;
> > - unsigned int i, count = PAGE_ALIGN(size) >> PAGE_SHIFT;
> > - int ret = -ENXIO;
> > + unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
> > - for (i = vma->vm_pgoff; i < count && uaddr < vma->vm_end; i++) {
> > - ret = vm_insert_page(vma, uaddr, pages[i]);
> > - if (ret)
> > - break;
> > - uaddr += PAGE_SIZE;
> > - }
> > - return ret;
> > + return vm_insert_range(vma, vma->vm_start, pages, count);
>
> AFAICS, vm_insert_range() doesn't respect vma->vm_pgoff, so doesn't this
> break partial mmap()s of a large buffer? (which I believe can be a thing)

Whoops. That should have been:

return vm_insert_range(vma, vma->vm_start, pages + vma->vm_pgoff, count);

I suppose.

Although arguably we should respect vm_pgoff inside vm_insert_region()
and then callers automatically get support for vm_pgoff without having
to think about it ... although we should then also pass in the length
of the pages array to avoid pages being mapped in which aren't part of
the allocated array.

Hm. More thought required.

2018-11-24 09:03:11

by Souptick Joarder

Subject: Re: [PATCH 6/9] iommu/dma-iommu.c: Convert to use vm_insert_range

On Sat, Nov 24, 2018 at 3:04 AM Matthew Wilcox <[email protected]> wrote:
>
> On Fri, Nov 23, 2018 at 05:23:06PM +0000, Robin Murphy wrote:
> > On 15/11/2018 15:49, Souptick Joarder wrote:
> > > Convert to use vm_insert_range() to map range of kernel
> > > memory to user vma.
> > >
> > > Signed-off-by: Souptick Joarder <[email protected]>
> > > Reviewed-by: Matthew Wilcox <[email protected]>
> > > ---
> > > drivers/iommu/dma-iommu.c | 12 ++----------
> > > 1 file changed, 2 insertions(+), 10 deletions(-)
> > >
> > > diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> > > index d1b0475..69c66b1 100644
> > > --- a/drivers/iommu/dma-iommu.c
> > > +++ b/drivers/iommu/dma-iommu.c
> > > @@ -622,17 +622,9 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
> > > int iommu_dma_mmap(struct page **pages, size_t size, struct vm_area_struct *vma)
> > > {
> > > - unsigned long uaddr = vma->vm_start;
> > > - unsigned int i, count = PAGE_ALIGN(size) >> PAGE_SHIFT;
> > > - int ret = -ENXIO;
> > > + unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
> > > - for (i = vma->vm_pgoff; i < count && uaddr < vma->vm_end; i++) {
> > > - ret = vm_insert_page(vma, uaddr, pages[i]);
> > > - if (ret)
> > > - break;
> > > - uaddr += PAGE_SIZE;
> > > - }
> > > - return ret;
> > > + return vm_insert_range(vma, vma->vm_start, pages, count);
> >
> > AFAICS, vm_insert_range() doesn't respect vma->vm_pgoff, so doesn't this
> > break partial mmap()s of a large buffer? (which I believe can be a thing)
>
> Whoops. That should have been:
>
> return vm_insert_range(vma, vma->vm_start, pages + vma->vm_pgoff, count);
>
> I suppose.

Matthew, does the patch "drm/rockchip/rockchip_drm_gem.c: Convert to use
vm_insert_range" also need to address the same issue?

>
> Although arguably we should respect vm_pgoff inside vm_insert_region()
> and then callers automatically get support for vm_pgoff without having
> to think about it ... although we should then also pass in the length
> of the pages array to avoid pages being mapped in which aren't part of
> the allocated array.
>
> Hm. More thought required.

2018-11-26 06:45:39

by Souptick Joarder

Subject: Re: [PATCH 6/9] iommu/dma-iommu.c: Convert to use vm_insert_range

On Sat, Nov 24, 2018 at 3:04 AM Matthew Wilcox <[email protected]> wrote:
>
> On Fri, Nov 23, 2018 at 05:23:06PM +0000, Robin Murphy wrote:
> > On 15/11/2018 15:49, Souptick Joarder wrote:
> > > Convert to use vm_insert_range() to map range of kernel
> > > memory to user vma.
> > >
> > > Signed-off-by: Souptick Joarder <[email protected]>
> > > Reviewed-by: Matthew Wilcox <[email protected]>
> > > ---
> > > drivers/iommu/dma-iommu.c | 12 ++----------
> > > 1 file changed, 2 insertions(+), 10 deletions(-)
> > >
> > > diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> > > index d1b0475..69c66b1 100644
> > > --- a/drivers/iommu/dma-iommu.c
> > > +++ b/drivers/iommu/dma-iommu.c
> > > @@ -622,17 +622,9 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
> > > int iommu_dma_mmap(struct page **pages, size_t size, struct vm_area_struct *vma)
> > > {
> > > - unsigned long uaddr = vma->vm_start;
> > > - unsigned int i, count = PAGE_ALIGN(size) >> PAGE_SHIFT;
> > > - int ret = -ENXIO;
> > > + unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
> > > - for (i = vma->vm_pgoff; i < count && uaddr < vma->vm_end; i++) {
> > > - ret = vm_insert_page(vma, uaddr, pages[i]);
> > > - if (ret)
> > > - break;
> > > - uaddr += PAGE_SIZE;
> > > - }
> > > - return ret;
> > > + return vm_insert_range(vma, vma->vm_start, pages, count);
> >
> > AFAICS, vm_insert_range() doesn't respect vma->vm_pgoff, so doesn't this
> > break partial mmap()s of a large buffer? (which I believe can be a thing)
>
> Whoops. That should have been:
>
> return vm_insert_range(vma, vma->vm_start, pages + vma->vm_pgoff, count);

I am unable to trace back where vma->vm_pgoff is set for this driver, if it
is set at all. If it defaults to 0, then I think the existing code is correct.

>
> I suppose.
>

> Although arguably we should respect vm_pgoff inside vm_insert_region()
> and then callers automatically get support for vm_pgoff without having
> to think about it ...

I assume vm_insert_region() means vm_insert_range(). If we respect vm_pgoff
inside vm_insert_range(), then any uninitialized or erroneous vm_pgoff value
coming from a driver will introduce a bug inside core mm, which might be
difficult to trace back.
But when vm_pgoff is set and passed explicitly by the caller (the driver),
the problem is easier to figure out.

> although we should then also pass in the length
> of the pages array to avoid pages being mapped in which aren't part of
> the allocated array.

Mostly, partial mapping is done by starting from an index and mapping through
to the end of the pages array. Calculating the length of the pages array would
add a small overhead for each driver.

Please correct me if I am wrong.

2018-11-26 13:21:31

by Robin Murphy

Subject: Re: [PATCH 6/9] iommu/dma-iommu.c: Convert to use vm_insert_range

On 26/11/2018 06:44, Souptick Joarder wrote:
> On Sat, Nov 24, 2018 at 3:04 AM Matthew Wilcox <[email protected]> wrote:
>>
>> On Fri, Nov 23, 2018 at 05:23:06PM +0000, Robin Murphy wrote:
>>> On 15/11/2018 15:49, Souptick Joarder wrote:
>>>> Convert to use vm_insert_range() to map range of kernel
>>>> memory to user vma.
>>>>
>>>> Signed-off-by: Souptick Joarder <[email protected]>
>>>> Reviewed-by: Matthew Wilcox <[email protected]>
>>>> ---
>>>> drivers/iommu/dma-iommu.c | 12 ++----------
>>>> 1 file changed, 2 insertions(+), 10 deletions(-)
>>>>
>>>> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
>>>> index d1b0475..69c66b1 100644
>>>> --- a/drivers/iommu/dma-iommu.c
>>>> +++ b/drivers/iommu/dma-iommu.c
>>>> @@ -622,17 +622,9 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
>>>> int iommu_dma_mmap(struct page **pages, size_t size, struct vm_area_struct *vma)
>>>> {
>>>> - unsigned long uaddr = vma->vm_start;
>>>> - unsigned int i, count = PAGE_ALIGN(size) >> PAGE_SHIFT;
>>>> - int ret = -ENXIO;
>>>> + unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
>>>> - for (i = vma->vm_pgoff; i < count && uaddr < vma->vm_end; i++) {
>>>> - ret = vm_insert_page(vma, uaddr, pages[i]);
>>>> - if (ret)
>>>> - break;
>>>> - uaddr += PAGE_SIZE;
>>>> - }
>>>> - return ret;
>>>> + return vm_insert_range(vma, vma->vm_start, pages, count);
>>>
>>> AFAICS, vm_insert_range() doesn't respect vma->vm_pgoff, so doesn't this
>>> break partial mmap()s of a large buffer? (which I believe can be a thing)
>>
>> Whoops. That should have been:
>>
>> return vm_insert_range(vma, vma->vm_start, pages + vma->vm_pgoff, count);
>
> I am unable to trace back where vma->vm_pgoff is set for this driver, if it is set at all.

This isn't a driver - it's a DMA API backend, so any caller of
dma_mmap_*() could potentially end up here (similarly for patch 2/9).

Robin.

> If it defaults to 0, then I think the existing code is correct.
>
>>
>> I suppose.
>>
>
>> Although arguably we should respect vm_pgoff inside vm_insert_region()
>> and then callers automatically get support for vm_pgoff without having
>> to think about it ...
>
> I assume vm_insert_region() means vm_insert_range(). If we respect vm_pgoff
> inside vm_insert_range(), then any uninitialized or erroneous vm_pgoff value
> coming from a driver will introduce a bug inside core mm, which might be
> difficult to trace back.
> But when vm_pgoff is set and passed explicitly by the caller (the driver),
> the problem is easier to figure out.
>
>> although we should then also pass in the length
>> of the pages array to avoid pages being mapped in which aren't part of
>> the allocated array.
>
> Mostly, partial mapping is done by starting from an index and mapping
> through to the end of the pages array. Calculating the length of the pages
> array would add a small overhead for each driver.
>
> Please correct me if I am wrong.
>