2013-05-01 13:12:08

by Cliff Wickman

Subject: [PATCH] mm/pagewalk.c: walk_page_range should avoid VM_PFNMAP areas


This patch replaces "[PATCH] fs/proc: smaps should avoid VM_PFNMAP areas".
/proc/<pid>/smaps and similar walks through a user page table should not
be looking at VM_PFNMAP areas.

Certain tests in walk_page_range() (specifically split_huge_page_pmd())
assume that all the mapped PFNs are backed by page structures, which is
not usually true for VM_PFNMAP areas. This can result in panics on kernel
page faults when the walk tries to address those page structures.
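
To illustrate the failing assumption (a rough sketch, not the exact
split_huge_page_pmd() call chain; inspect_pmd() is just an illustrative
name):

	/* pfn_to_page() presumes a memmap (struct page) entry exists */
	static void inspect_pmd(pmd_t *pmd)
	{
		struct page *page = pfn_to_page(pmd_pfn(*pmd));

		/*
		 * For a VM_PFNMAP mapping there may be no memmap behind
		 * this PFN, so dereferencing *page faults in kernel space.
		 */
		if (PageHead(page))
			pr_info("compound page\n");
	}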

There are a half dozen callers of walk_page_range() that walk through
a task's entire page table (as N. Horiguchi pointed out). So rather than
change all of them, this patch changes just walk_page_range() to ignore
VM_PFNMAP areas.

The logic of hugetlb_vma() is moved back into walk_page_range(), as we
want to test any vma in the range.

VM_PFNMAP areas are used by:
- graphics memory manager gpu/drm/drm_gem.c
- global reference unit sgi-gru/grufile.c
- sgi special memory char/mspec.c
- and probably several out-of-tree modules

I'm copying everyone who has changed this file recently, in case
there is some reason that I am not aware of to provide
/proc/<pid>/smaps|clear_refs|maps|numa_maps for these VM_PFNMAP areas.
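
For reference, these areas usually come from a driver mmap handler that
maps device memory with remap_pfn_range(), which marks the vma VM_PFNMAP;
a minimal sketch (mydev_mmap() and MYDEV_PHYS_BASE are made-up names):

	#include <linux/fs.h>
	#include <linux/mm.h>

	/* made-up physical address of the device aperture */
	#define MYDEV_PHYS_BASE	0xfd000000UL

	static int mydev_mmap(struct file *file, struct vm_area_struct *vma)
	{
		unsigned long size = vma->vm_end - vma->vm_start;

		/* remap_pfn_range() sets VM_IO | VM_PFNMAP on the vma */
		return remap_pfn_range(vma, vma->vm_start,
				       MYDEV_PHYS_BASE >> PAGE_SHIFT,
				       size, vma->vm_page_prot);
	}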

Signed-off-by: Cliff Wickman <[email protected]>
---
mm/pagewalk.c | 60 +++++++++++++++++++++++++++++-----------------------------
1 file changed, 31 insertions(+), 29 deletions(-)

Index: linux/mm/pagewalk.c
===================================================================
--- linux.orig/mm/pagewalk.c
+++ linux/mm/pagewalk.c
@@ -127,22 +127,6 @@ static int walk_hugetlb_range(struct vm_
return 0;
}

-static struct vm_area_struct* hugetlb_vma(unsigned long addr, struct mm_walk *walk)
-{
- struct vm_area_struct *vma;
-
- /* We don't need vma lookup at all. */
- if (!walk->hugetlb_entry)
- return NULL;
-
- VM_BUG_ON(!rwsem_is_locked(&walk->mm->mmap_sem));
- vma = find_vma(walk->mm, addr);
- if (vma && vma->vm_start <= addr && is_vm_hugetlb_page(vma))
- return vma;
-
- return NULL;
-}
-
#else /* CONFIG_HUGETLB_PAGE */
static struct vm_area_struct* hugetlb_vma(unsigned long addr, struct mm_walk *walk)
{
@@ -200,28 +184,46 @@ int walk_page_range(unsigned long addr,

pgd = pgd_offset(walk->mm, addr);
do {
- struct vm_area_struct *vma;
+ struct vm_area_struct *vma = NULL;

next = pgd_addr_end(addr, end);

/*
- * handle hugetlb vma individually because pagetable walk for
- * the hugetlb page is dependent on the architecture and
- * we can't handled it in the same manner as non-huge pages.
+ * Check any special vma's within this range.
*/
- vma = hugetlb_vma(addr, walk);
+ VM_BUG_ON(!rwsem_is_locked(&walk->mm->mmap_sem));
+ vma = find_vma(walk->mm, addr);
if (vma) {
- if (vma->vm_end < next)
+ /*
+ * There are no page structures backing a VM_PFNMAP
+ * range, so do not allow split_huge_page_pmd().
+ */
+ if (vma->vm_flags & VM_PFNMAP) {
next = vma->vm_end;
+ pgd = pgd_offset(walk->mm, next);
+ continue;
+ }
/*
- * Hugepage is very tightly coupled with vma, so
- * walk through hugetlb entries within a given vma.
+ * Handle hugetlb vma individually because pagetable
+ * walk for the hugetlb page is dependent on the
+ * architecture and we can't handle it in the same
+ * manner as non-huge pages.
*/
- err = walk_hugetlb_range(vma, addr, next, walk);
- if (err)
- break;
- pgd = pgd_offset(walk->mm, next);
- continue;
+ if (walk->hugetlb_entry && (vma->vm_start <= addr) &&
+ is_vm_hugetlb_page(vma)) {
+ if (vma->vm_end < next)
+ next = vma->vm_end;
+ /*
+ * Hugepage is very tightly coupled with vma,
+ * so walk through hugetlb entries within a
+ * given vma.
+ */
+ err = walk_hugetlb_range(vma, addr, next, walk);
+ if (err)
+ break;
+ pgd = pgd_offset(walk->mm, next);
+ continue;
+ }
}

if (pgd_none_or_clear_bad(pgd)) {


2013-05-01 15:47:17

by David Rientjes

Subject: Re: [PATCH] mm/pagewalk.c: walk_page_range should avoid VM_PFNMAP areas

On Wed, 1 May 2013, Cliff Wickman wrote:

> Index: linux/mm/pagewalk.c
> ===================================================================
> --- linux.orig/mm/pagewalk.c
> +++ linux/mm/pagewalk.c
> @@ -127,22 +127,6 @@ static int walk_hugetlb_range(struct vm_
> return 0;
> }
>
> -static struct vm_area_struct* hugetlb_vma(unsigned long addr, struct mm_walk *walk)
> -{
> - struct vm_area_struct *vma;
> -
> - /* We don't need vma lookup at all. */
> - if (!walk->hugetlb_entry)
> - return NULL;
> -
> - VM_BUG_ON(!rwsem_is_locked(&walk->mm->mmap_sem));
> - vma = find_vma(walk->mm, addr);
> - if (vma && vma->vm_start <= addr && is_vm_hugetlb_page(vma))
> - return vma;
> -
> - return NULL;
> -}
> -
> #else /* CONFIG_HUGETLB_PAGE */
> static struct vm_area_struct* hugetlb_vma(unsigned long addr, struct mm_walk *walk)
> {
> @@ -200,28 +184,46 @@ int walk_page_range(unsigned long addr,
>
> pgd = pgd_offset(walk->mm, addr);
> do {
> - struct vm_area_struct *vma;
> + struct vm_area_struct *vma = NULL;
>
> next = pgd_addr_end(addr, end);
>
> /*
> - * handle hugetlb vma individually because pagetable walk for
> - * the hugetlb page is dependent on the architecture and
> - * we can't handled it in the same manner as non-huge pages.
> + * Check any special vma's within this range.
> */
> - vma = hugetlb_vma(addr, walk);
> + VM_BUG_ON(!rwsem_is_locked(&walk->mm->mmap_sem));

I think this should be moved out of the iteration. It's currently inside
it even before your patch, but I think it's pointless.

> + vma = find_vma(walk->mm, addr);
> if (vma) {
> - if (vma->vm_end < next)
> + /*
> + * There are no page structures backing a VM_PFNMAP
> + * range, so do not allow split_huge_page_pmd().
> + */
> + if (vma->vm_flags & VM_PFNMAP) {
> next = vma->vm_end;
> + pgd = pgd_offset(walk->mm, next);
> + continue;
> + }

What if end < vma->vm_end?

> /*
> - * Hugepage is very tightly coupled with vma, so
> - * walk through hugetlb entries within a given vma.
> + * Handle hugetlb vma individually because pagetable
> + * walk for the hugetlb page is dependent on the
> + * architecture and we can't handle it in the same
> + * manner as non-huge pages.
> */
> - err = walk_hugetlb_range(vma, addr, next, walk);
> - if (err)
> - break;
> - pgd = pgd_offset(walk->mm, next);
> - continue;
> + if (walk->hugetlb_entry && (vma->vm_start <= addr) &&
> + is_vm_hugetlb_page(vma)) {
> + if (vma->vm_end < next)
> + next = vma->vm_end;
> + /*
> + * Hugepage is very tightly coupled with vma,
> + * so walk through hugetlb entries within a
> + * given vma.
> + */
> + err = walk_hugetlb_range(vma, addr, next, walk);
> + if (err)
> + break;
> + pgd = pgd_offset(walk->mm, next);
> + continue;
> + }
> }
>
> if (pgd_none_or_clear_bad(pgd)) {

2013-05-01 18:39:20

by Cliff Wickman

Subject: Re: [PATCH] mm/pagewalk.c: walk_page_range should avoid VM_PFNMAP areas

On Wed, May 01, 2013 at 08:47:02AM -0700, David Rientjes wrote:
> On Wed, 1 May 2013, Cliff Wickman wrote:
>
> > Index: linux/mm/pagewalk.c
> > ===================================================================
> > --- linux.orig/mm/pagewalk.c
> > +++ linux/mm/pagewalk.c
> > @@ -127,22 +127,6 @@ static int walk_hugetlb_range(struct vm_
> > return 0;
> > }
> >
> > -static struct vm_area_struct* hugetlb_vma(unsigned long addr, struct mm_walk *walk)
> > -{
> > - struct vm_area_struct *vma;
> > -
> > - /* We don't need vma lookup at all. */
> > - if (!walk->hugetlb_entry)
> > - return NULL;
> > -
> > - VM_BUG_ON(!rwsem_is_locked(&walk->mm->mmap_sem));
> > - vma = find_vma(walk->mm, addr);
> > - if (vma && vma->vm_start <= addr && is_vm_hugetlb_page(vma))
> > - return vma;
> > -
> > - return NULL;
> > -}
> > -
> > #else /* CONFIG_HUGETLB_PAGE */
> > static struct vm_area_struct* hugetlb_vma(unsigned long addr, struct mm_walk *walk)
> > {
> > @@ -200,28 +184,46 @@ int walk_page_range(unsigned long addr,
> >
> > pgd = pgd_offset(walk->mm, addr);
> > do {
> > - struct vm_area_struct *vma;
> > + struct vm_area_struct *vma = NULL;
> >
> > next = pgd_addr_end(addr, end);
> >
> > /*
> > - * handle hugetlb vma individually because pagetable walk for
> > - * the hugetlb page is dependent on the architecture and
> > - * we can't handled it in the same manner as non-huge pages.
> > + * Check any special vma's within this range.
> > */
> > - vma = hugetlb_vma(addr, walk);
> > + VM_BUG_ON(!rwsem_is_locked(&walk->mm->mmap_sem));
>
> I think this should be moved out of the iteration. It's currently inside
> it even before your patch, but I think it's pointless.

I don't follow. We are iterating through a range of addresses. When
we come to a range that is VM_PFNMAP we skip it. How can we take that
out of the iteration?

> > + vma = find_vma(walk->mm, addr);
> > if (vma) {
> > - if (vma->vm_end < next)
> > + /*
> > + * There are no page structures backing a VM_PFNMAP
> > + * range, so do not allow split_huge_page_pmd().
> > + */
> > + if (vma->vm_flags & VM_PFNMAP) {
> > next = vma->vm_end;
> > + pgd = pgd_offset(walk->mm, next);
> > + continue;
> > + }
>
> What if end < vma->vm_end?

Yes, a bad omission. Thanks for pointing that out.
It should be if ((vma->vm_start <= addr) && (vma->vm_flags & VM_PFNMAP)),
since find_vma() can return a vma that starts above addr.
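
That is, roughly (untested sketch; your end < vma->vm_end case probably
also wants next clamped to end):

	/* skip only when addr actually lies inside the VM_PFNMAP vma */
	if ((vma->vm_start <= addr) && (vma->vm_flags & VM_PFNMAP)) {
		next = vma->vm_end;	/* perhaps min(vma->vm_end, end) */
		pgd = pgd_offset(walk->mm, next);
		continue;
	}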

-Cliff
> > /*
> > - * Hugepage is very tightly coupled with vma, so
> > - * walk through hugetlb entries within a given vma.
> > + * Handle hugetlb vma individually because pagetable
> > + * walk for the hugetlb page is dependent on the
> > + * architecture and we can't handle it in the same
> > + * manner as non-huge pages.
> > */
> > - err = walk_hugetlb_range(vma, addr, next, walk);
> > - if (err)
> > - break;
> > - pgd = pgd_offset(walk->mm, next);
> > - continue;
> > + if (walk->hugetlb_entry && (vma->vm_start <= addr) &&
> > + is_vm_hugetlb_page(vma)) {
> > + if (vma->vm_end < next)
> > + next = vma->vm_end;
> > + /*
> > + * Hugepage is very tightly coupled with vma,
> > + * so walk through hugetlb entries within a
> > + * given vma.
> > + */
> > + err = walk_hugetlb_range(vma, addr, next, walk);
> > + if (err)
> > + break;
> > + pgd = pgd_offset(walk->mm, next);
> > + continue;
> > + }
> > }
> >
> > if (pgd_none_or_clear_bad(pgd)) {

--
Cliff Wickman
SGI
[email protected]
(651) 683-3824

2013-05-01 18:45:10

by David Rientjes

Subject: Re: [PATCH] mm/pagewalk.c: walk_page_range should avoid VM_PFNMAP areas

On Wed, 1 May 2013, Cliff Wickman wrote:

> > > @@ -200,28 +184,46 @@ int walk_page_range(unsigned long addr,
> > >
> > > pgd = pgd_offset(walk->mm, addr);
> > > do {
> > > - struct vm_area_struct *vma;
> > > + struct vm_area_struct *vma = NULL;
> > >
> > > next = pgd_addr_end(addr, end);
> > >
> > > /*
> > > - * handle hugetlb vma individually because pagetable walk for
> > > - * the hugetlb page is dependent on the architecture and
> > > - * we can't handled it in the same manner as non-huge pages.
> > > + * Check any special vma's within this range.
> > > */
> > > - vma = hugetlb_vma(addr, walk);
> > > + VM_BUG_ON(!rwsem_is_locked(&walk->mm->mmap_sem));
> >
> > I think this should be moved out of the iteration. It's currently inside
> > it even before your patch, but I think it's pointless.
>
> I don't follow. We are iterating through a range of addresses. When
> we come to a range that is VM_PFNMAP we skip it. How can we take that
> out of the iteration?
>

I'm referring only to the VM_BUG_ON().
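
That check is loop-invariant, so it can be done once before the walk;
roughly (sketch only):

	VM_BUG_ON(!rwsem_is_locked(&walk->mm->mmap_sem));

	pgd = pgd_offset(walk->mm, addr);
	do {
		struct vm_area_struct *vma = find_vma(walk->mm, addr);

		next = pgd_addr_end(addr, end);
		/* ... VM_PFNMAP and hugetlb handling as in your patch ... */
	} while (addr = next, addr < end);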