2012-02-15 18:33:24

by Dave Jones

Subject: exit_mmap() BUG_ON triggering since 3.1

We've had three reports against the Fedora kernel recently where
a process exits, and we're tripping up the

BUG_ON(mm->nr_ptes > (FIRST_USER_ADDRESS+PMD_SIZE-1)>>PMD_SHIFT);

in exit_mmap().

It started happening with 3.1, but still occurs on 3.2
(no 3.3rc reports yet, but it's not getting much testing).
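
For reference, that check sits at the very end of exit_mmap(), after
everything should have been torn down and counted off; roughly (a
simplified sketch of mm/mmap.c from this era, details elided):

    void exit_mmap(struct mm_struct *mm)
    {
        struct mmu_gather tlb;
        struct vm_area_struct *vma = mm->mmap;
        unsigned long nr_accounted = 0;
        ...
        tlb_gather_mmu(&tlb, mm, 1);
        /* tear down every mapping, then free the pagetable pages */
        unmap_vmas(&tlb, vma, 0, -1, &nr_accounted, NULL);
        free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, 0);
        tlb_finish_mmu(&tlb, 0, -1);
        ...
        /* nothing should remain in nr_ptes at this point */
        BUG_ON(mm->nr_ptes > (FIRST_USER_ADDRESS+PMD_SIZE-1)>>PMD_SHIFT);
    }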

https://bugzilla.redhat.com/show_bug.cgi?id=786632
https://bugzilla.redhat.com/show_bug.cgi?id=787527
https://bugzilla.redhat.com/show_bug.cgi?id=790546

I don't see anything special in common between the loaded modules.

anyone?

Dave


2012-02-16 02:14:48

by Hugh Dickins

Subject: Re: exit_mmap() BUG_ON triggering since 3.1

On Wed, 15 Feb 2012, Dave Jones wrote:

> We've had three reports against the Fedora kernel recently where
> a process exits, and we're tripping up the
>
> BUG_ON(mm->nr_ptes > (FIRST_USER_ADDRESS+PMD_SIZE-1)>>PMD_SHIFT);
>
> in exit_mmap().
>
> It started happening with 3.1, but still occurs on 3.2
> (no 3.3rc reports yet, but it's not getting much testing).
>
> https://bugzilla.redhat.com/show_bug.cgi?id=786632
> https://bugzilla.redhat.com/show_bug.cgi?id=787527
> https://bugzilla.redhat.com/show_bug.cgi?id=790546
>
> I don't see anything special in common between the loaded modules.
>
> anyone?

My suspicion was that it would be related to Transparent HugePages:
they do complicate the pagetable story. And I think I have found a
potential culprit. I don't know if nr_ptes is the only loser from
these two split_huge_page() calls, but assuming it is...


[PATCH] mm: fix BUG on mm->nr_ptes

mm->nr_ptes had unusual locking: down_read mmap_sem plus page_table_lock
when incrementing, down_write mmap_sem (or mm_users 0) when decrementing;
whereas THP is careful to increment and decrement it under page_table_lock.

Now most of those paths in THP also hold mmap_sem for read or write (with
appropriate checks on mm_users), but two do not: when split_huge_page()
is called by hwpoison_user_mappings(), and when called by add_to_swap().

It's conceivable that the latter case is responsible for the exit_mmap()
BUG_ON mm->nr_ptes that has been reported on Fedora.

THP's understanding of the locking seems reasonable, so take that lock
to update it in free_pgd_range(): try to avoid retaking it repeatedly
by passing the count up from levels below - free_pgtables() already
does its best to combine calls across neighbouring vmas.

Or should we try harder to avoid the extra locking: test mm_users?
#ifdef on THP? Or consider the accuracy of this count not worth
extra locking, and just scrap the BUG_ON now?

Reported-by: Dave Jones <[email protected]>
Signed-off-by: Hugh Dickins <[email protected]>
---

mm/memory.c | 40 +++++++++++++++++++++++++++-------------
1 file changed, 27 insertions(+), 13 deletions(-)

--- 3.3-rc3/mm/memory.c 2012-01-31 14:51:15.100021868 -0800
+++ linux/mm/memory.c 2012-02-15 17:01:46.588649490 -0800
@@ -419,22 +419,23 @@ void pmd_clear_bad(pmd_t *pmd)
* Note: this doesn't free the actual pages themselves. That
* has been handled earlier when unmapping all the memory regions.
*/
-static void free_pte_range(struct mmu_gather *tlb, pmd_t *pmd,
+static long free_pte_range(struct mmu_gather *tlb, pmd_t *pmd,
unsigned long addr)
{
pgtable_t token = pmd_pgtable(*pmd);
pmd_clear(pmd);
pte_free_tlb(tlb, token, addr);
- tlb->mm->nr_ptes--;
+ return 1;
}

-static inline void free_pmd_range(struct mmu_gather *tlb, pud_t *pud,
+static inline long free_pmd_range(struct mmu_gather *tlb, pud_t *pud,
unsigned long addr, unsigned long end,
unsigned long floor, unsigned long ceiling)
{
pmd_t *pmd;
unsigned long next;
unsigned long start;
+ long nr_ptes = 0;

start = addr;
pmd = pmd_offset(pud, addr);
@@ -442,32 +443,35 @@ static inline void free_pmd_range(struct
next = pmd_addr_end(addr, end);
if (pmd_none_or_clear_bad(pmd))
continue;
- free_pte_range(tlb, pmd, addr);
+ nr_ptes += free_pte_range(tlb, pmd, addr);
} while (pmd++, addr = next, addr != end);

start &= PUD_MASK;
if (start < floor)
- return;
+ goto out;
if (ceiling) {
ceiling &= PUD_MASK;
if (!ceiling)
- return;
+ goto out;
}
if (end - 1 > ceiling - 1)
- return;
+ goto out;

pmd = pmd_offset(pud, start);
pud_clear(pud);
pmd_free_tlb(tlb, pmd, start);
+out:
+ return nr_ptes;
}

-static inline void free_pud_range(struct mmu_gather *tlb, pgd_t *pgd,
+static inline long free_pud_range(struct mmu_gather *tlb, pgd_t *pgd,
unsigned long addr, unsigned long end,
unsigned long floor, unsigned long ceiling)
{
pud_t *pud;
unsigned long next;
unsigned long start;
+ long nr_ptes = 0;

start = addr;
pud = pud_offset(pgd, addr);
@@ -475,23 +479,25 @@ static inline void free_pud_range(struct
next = pud_addr_end(addr, end);
if (pud_none_or_clear_bad(pud))
continue;
- free_pmd_range(tlb, pud, addr, next, floor, ceiling);
+ nr_ptes += free_pmd_range(tlb, pud, addr, next, floor, ceiling);
} while (pud++, addr = next, addr != end);

start &= PGDIR_MASK;
if (start < floor)
- return;
+ goto out;
if (ceiling) {
ceiling &= PGDIR_MASK;
if (!ceiling)
- return;
+ goto out;
}
if (end - 1 > ceiling - 1)
- return;
+ goto out;

pud = pud_offset(pgd, start);
pgd_clear(pgd);
pud_free_tlb(tlb, pud, start);
+out:
+ return nr_ptes;
}

/*
@@ -505,6 +511,7 @@ void free_pgd_range(struct mmu_gather *t
{
pgd_t *pgd;
unsigned long next;
+ long nr_ptes = 0;

/*
* The next few lines have given us lots of grief...
@@ -553,8 +560,15 @@ void free_pgd_range(struct mmu_gather *t
next = pgd_addr_end(addr, end);
if (pgd_none_or_clear_bad(pgd))
continue;
- free_pud_range(tlb, pgd, addr, next, floor, ceiling);
+ nr_ptes += free_pud_range(tlb, pgd, addr, next, floor, ceiling);
} while (pgd++, addr = next, addr != end);
+
+ if (nr_ptes) {
+ struct mm_struct *mm = tlb->mm;
+ spin_lock(&mm->page_table_lock);
+ mm->nr_ptes -= nr_ptes;
+ spin_unlock(&mm->page_table_lock);
+ }
}

void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *vma,

2012-02-16 02:22:49

by Roland Dreier

Subject: Re: exit_mmap() BUG_ON triggering since 3.1

On Wed, Feb 15, 2012 at 6:14 PM, Hugh Dickins <[email protected]> wrote:
> My suspicion was that it would be related to Transparent HugePages:
> they do complicate the pagetable story. And I think I have found a
> potential culprit. I don't know if nr_ptes is the only loser from
> these two split_huge_page() calls, but assuming it is...

Do you have an idea when this bug might have been introduced?
Presumably it's been there since THP came in?

The reason I ask is that I have one of these exit_mmap() BUG_ONs
in my pile of one-off unreproducible crashes, but in my case it
happened with 2.6.39 (with THP enabled). So I'm wondering if
I can cross it off my list and blame this bug, or if it remains one
of those inexplicable mysteries...

Thanks,
Roland

2012-02-16 02:49:12

by Hugh Dickins

Subject: Re: exit_mmap() BUG_ON triggering since 3.1

On Wed, 15 Feb 2012, Roland Dreier wrote:
> On Wed, Feb 15, 2012 at 6:14 PM, Hugh Dickins <[email protected]> wrote:
> > My suspicion was that it would be related to Transparent HugePages:
> > they do complicate the pagetable story. And I think I have found a
> > potential culprit. I don't know if nr_ptes is the only loser from
> > these two split_huge_page() calls, but assuming it is...
>
> Do you have an idea when this bug might have been introduced?
> Presumably it's been there since THP came in?

That's right, since THP came in (2.6.38 on mainline,
but IIRC Red Hat had THP applied to an earlier kernel).

>
> The reason I ask is that I have one of these exit_mmap() BUG_ONs
> in my pile of one-off unreproducible crashes, but in my case it
> happened with 2.6.39 (with THP enabled). So I'm wondering if
> I can cross it off my list and blame this bug, or if it remains one
> of those inexplicable mysteries...

If you think that system could have been using swap, yes, cross it
off (unless someone points out that I'm totally wrong, because....).

But if you know that system used no swap (and didn't get involved
in any memory-failure hwpoison business), then keep on worrying!

Hugh

2012-02-16 07:12:26

by Andrea Arcangeli

Subject: Re: exit_mmap() BUG_ON triggering since 3.1

On Wed, Feb 15, 2012 at 06:14:12PM -0800, Hugh Dickins wrote:
> Now most of those paths in THP also hold mmap_sem for read or write (with
> appropriate checks on mm_users), but two do not: when split_huge_page()
> is called by hwpoison_user_mappings(), and when called by add_to_swap().

So the race is __split_huge_page_map() (called via add_to_swap()) running
concurrently with free_pgtables(). Great catch!!
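
To make the interleaving concrete (an illustrative sketch, not verbatim
kernel code): the teardown side decrements nr_ptes with no
page_table_lock, so its non-atomic read-modify-write can race with the
locked increment and one update is lost:

    munmap()/exit teardown                reclaim (holds no mmap_sem)
    ----------------------                ---------------------------
    free_pgtables()
      free_pgd_range()                    add_to_swap(page)
        free_pte_range()                    /* huge page in another vma */
          mm->nr_ptes--;                    split_huge_page(page)
          /* unlocked RMW */                  __split_huge_page_map()
                                                spin_lock(page_table_lock);
                                                mm->nr_ptes++;
                                                spin_unlock(page_table_lock);

    /* the corrupted count later trips BUG_ON(mm->nr_ptes > ...)
       in exit_mmap() */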

> Or should we try harder to avoid the extra locking: test mm_users?
> #ifdef on THP? Or consider the accuracy of this count not worth
> extra locking, and just scrap the BUG_ON now?

It's probably also happening with a large munmap while add_to_swap()
runs on another vma. The process hasn't exited yet, but the actual
BUG_ON check only runs at exit. So I doubt aborting split_huge_page()
on zero mm_users could solve it.

The good part is that, this being a false positive, these oopses are
only a nuisance: they can't corrupt any memory or disk etc...

The simplest fix is probably to change nr_ptes to count THPs too. I
tried that, and no oopses so far, but it's not very well tested yet.

====
From: Andrea Arcangeli <[email protected]>
Subject: [PATCH] mm: thp: fix BUG on mm->nr_ptes

Quoting Hugh's discovery and explanation of the SMP race condition:

===
mm->nr_ptes had unusual locking: down_read mmap_sem plus
page_table_lock when incrementing, down_write mmap_sem (or mm_users 0)
when decrementing; whereas THP is careful to increment and decrement
it under page_table_lock.

Now most of those paths in THP also hold mmap_sem for read or write
(with appropriate checks on mm_users), but two do not: when
split_huge_page() is called by hwpoison_user_mappings(), and when
called by add_to_swap().

It's conceivable that the latter case is responsible for the
exit_mmap() BUG_ON mm->nr_ptes that has been reported on Fedora.
===

The simplest way to fix it without having to alter the locking is to
make split_huge_page() a noop in nr_ptes terms, by counting the
preallocated pagetables that exist for every mapped hugepage. It was
an arbitrary choice not to count them, and neither way is right or
wrong: the pagetables are not used, but they're still allocated.

Reported-by: Dave Jones <[email protected]>
Reported-by: Hugh Dickins <[email protected]>
Signed-off-by: Andrea Arcangeli <[email protected]>
---
mm/huge_memory.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 91d3efb..8f7fc39 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -671,6 +671,7 @@ static int __do_huge_pmd_anonymous_page(struct mm_struct *mm,
set_pmd_at(mm, haddr, pmd, entry);
prepare_pmd_huge_pte(pgtable, mm);
add_mm_counter(mm, MM_ANONPAGES, HPAGE_PMD_NR);
+ mm->nr_ptes++;
spin_unlock(&mm->page_table_lock);
}

@@ -789,6 +790,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
pmd = pmd_mkold(pmd_wrprotect(pmd));
set_pmd_at(dst_mm, addr, dst_pmd, pmd);
prepare_pmd_huge_pte(pgtable, dst_mm);
+ dst_mm->nr_ptes++;

ret = 0;
out_unlock:
@@ -887,7 +889,6 @@ static int do_huge_pmd_wp_page_fallback(struct mm_struct *mm,
}
kfree(pages);

- mm->nr_ptes++;
smp_wmb(); /* make pte visible before pmd */
pmd_populate(mm, pmd, pgtable);
page_remove_rmap(page);
@@ -1047,6 +1048,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
VM_BUG_ON(page_mapcount(page) < 0);
add_mm_counter(tlb->mm, MM_ANONPAGES, -HPAGE_PMD_NR);
VM_BUG_ON(!PageHead(page));
+ tlb->mm->nr_ptes--;
spin_unlock(&tlb->mm->page_table_lock);
tlb_remove_page(tlb, page);
pte_free(tlb->mm, pgtable);
@@ -1375,7 +1377,6 @@ static int __split_huge_page_map(struct page *page,
pte_unmap(pte);
}

- mm->nr_ptes++;
smp_wmb(); /* make pte visible before pmd */
/*
* Up to this point the pmd is present and huge and
@@ -1988,7 +1989,6 @@ static void collapse_huge_page(struct mm_struct *mm,
set_pmd_at(mm, address, pmd, _pmd);
update_mmu_cache(vma, address, _pmd);
prepare_pmd_huge_pte(pgtable, mm);
- mm->nr_ptes--;
spin_unlock(&mm->page_table_lock);

#ifndef CONFIG_NUMA
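
For reference, the preallocated pagetable the changelog refers to is the
one deposited at hugepage fault/copy time and withdrawn again at split
time; roughly (simplified from mm/huge_memory.c of the same era):

    /* deposit a preallocated pte pagetable for a huge pmd */
    static void prepare_pmd_huge_pte(pgtable_t pgtable,
                                     struct mm_struct *mm)
    {
        assert_spin_locked(&mm->page_table_lock);

        /* FIFO */
        if (!mm->pmd_huge_pte)
            INIT_LIST_HEAD(&pgtable->lru);
        else
            list_add(&pgtable->lru, &mm->pmd_huge_pte->lru);
        mm->pmd_huge_pte = pgtable;
    }

With the patch above, nr_ptes simply follows these deposits: it goes up
when a pagetable is deposited, down when zap_huge_pmd() frees one, and
split_huge_page()'s withdraw-and-populate becomes a no-op for the count.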

2012-02-16 09:53:36

by Hugh Dickins

Subject: Re: exit_mmap() BUG_ON triggering since 3.1

On Thu, 16 Feb 2012, Andrea Arcangeli wrote:
> On Wed, Feb 15, 2012 at 06:14:12PM -0800, Hugh Dickins wrote:
> > Now most of those paths in THP also hold mmap_sem for read or write (with
> > appropriate checks on mm_users), but two do not: when split_huge_page()
> > is called by hwpoison_user_mappings(), and when called by add_to_swap().
>
> So the race is __split_huge_page_map() (called via add_to_swap()) running
> concurrently with free_pgtables(). Great catch!!
>
> > Or should we try harder to avoid the extra locking: test mm_users?
> > #ifdef on THP? Or consider the accuracy of this count not worth
> > extra locking, and just scrap the BUG_ON now?
>
> It's probably also happening with a large munmap while add_to_swap()
> runs on another vma. The process hasn't exited yet, but the actual
> BUG_ON check only runs at exit. So I doubt aborting split_huge_page()
> on zero mm_users could solve it.

Indeed, what I meant was, I was wondering whether to make the
spin_lock and spin_unlock in my patch conditional on mm_users,
not to make split_huge_page conditional on it.
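
Concretely, something like this (a hypothetical sketch of that
alternative on top of my patch above; untested, and not what I'm
proposing):

    if (nr_ptes) {
        struct mm_struct *mm = tlb->mm;
        /* assumption: with mm_users 0 nobody is left to race us */
        bool locked = atomic_read(&mm->mm_users) != 0;

        if (locked)
            spin_lock(&mm->page_table_lock);
        mm->nr_ptes -= nr_ptes;
        if (locked)
            spin_unlock(&mm->page_table_lock);
    }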

>
> The good part is that, this being a false positive, these oopses are
> only a nuisance: they can't corrupt any memory or disk etc...

Yes (and I think less troublesome than most BUGs, coming at exit
while not holding locks; though we could well make it a WARN_ON now,
I don't think that existed back in the day).

>
> The simplest fix is probably to change nr_ptes to count THPs too. I
> tried that, and no oopses so far, but it's not very well tested yet.

Oh, I like that, that's a much nicer fix than mine. If you're happy
to change the THP end (which I could hardly blame for getting those odd
rules slightly wrong), and it passes your testing, then certainly add my

Acked-by: Hugh Dickins <[email protected]>

In looking into the bug, it had actually bothered me a little that you
were setting aside those pages, yet not counting them into nr_ptes;
though the only thing that cares is oom_kill.c, and the count of pages
in each hugepage can only dwarf the count in nr_ptes (whereas, without
hugepages, it's possible to populate very sparsely and nr_ptes becomes
significant).
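
(For reference, the consumer in question is the badness calculation,
roughly as in mm/oom_kill.c of this era:

    /*
     * The baseline for the badness score is the proportion of RAM
     * that the task's rss, pagetable and swap space use.
     */
    points = get_mm_rss(p->mm) + p->mm->nr_ptes;
    points += get_mm_counter(p->mm, MM_SWAPENTS);

so a hugepage's HPAGE_PMD_NR pages of rss dwarf the single extra count
its deposited pagetable now contributes to nr_ptes.)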

>
> ====
> From: Andrea Arcangeli <[email protected]>
> Subject: [PATCH] mm: thp: fix BUG on mm->nr_ptes
>
> Quoting Hugh's discovery and explanation of the SMP race condition:
>
> ===
> mm->nr_ptes had unusual locking: down_read mmap_sem plus
> page_table_lock when incrementing, down_write mmap_sem (or mm_users 0)
> when decrementing; whereas THP is careful to increment and decrement
> it under page_table_lock.
>
> Now most of those paths in THP also hold mmap_sem for read or write
> (with appropriate checks on mm_users), but two do not: when
> split_huge_page() is called by hwpoison_user_mappings(), and when
> called by add_to_swap().
>
> It's conceivable that the latter case is responsible for the
> exit_mmap() BUG_ON mm->nr_ptes that has been reported on Fedora.
> ===
>
> The simplest way to fix it without having to alter the locking is to
> make split_huge_page() a noop in nr_ptes terms, by counting the
> preallocated pagetables that exist for every mapped hugepage. It was
> an arbitrary choice not to count them, and neither way is right or
> wrong: the pagetables are not used, but they're still allocated.
>
> Reported-by: Dave Jones <[email protected]>
> Reported-by: Hugh Dickins <[email protected]>
> Signed-off-by: Andrea Arcangeli <[email protected]>
> ---
> mm/huge_memory.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 91d3efb..8f7fc39 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -671,6 +671,7 @@ static int __do_huge_pmd_anonymous_page(struct mm_struct *mm,
> set_pmd_at(mm, haddr, pmd, entry);
> prepare_pmd_huge_pte(pgtable, mm);
> add_mm_counter(mm, MM_ANONPAGES, HPAGE_PMD_NR);
> + mm->nr_ptes++;
> spin_unlock(&mm->page_table_lock);
> }
>
> @@ -789,6 +790,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> pmd = pmd_mkold(pmd_wrprotect(pmd));
> set_pmd_at(dst_mm, addr, dst_pmd, pmd);
> prepare_pmd_huge_pte(pgtable, dst_mm);
> + dst_mm->nr_ptes++;
>
> ret = 0;
> out_unlock:
> @@ -887,7 +889,6 @@ static int do_huge_pmd_wp_page_fallback(struct mm_struct *mm,
> }
> kfree(pages);
>
> - mm->nr_ptes++;
> smp_wmb(); /* make pte visible before pmd */
> pmd_populate(mm, pmd, pgtable);
> page_remove_rmap(page);
> @@ -1047,6 +1048,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
> VM_BUG_ON(page_mapcount(page) < 0);
> add_mm_counter(tlb->mm, MM_ANONPAGES, -HPAGE_PMD_NR);
> VM_BUG_ON(!PageHead(page));
> + tlb->mm->nr_ptes--;
> spin_unlock(&tlb->mm->page_table_lock);
> tlb_remove_page(tlb, page);
> pte_free(tlb->mm, pgtable);
> @@ -1375,7 +1377,6 @@ static int __split_huge_page_map(struct page *page,
> pte_unmap(pte);
> }
>
> - mm->nr_ptes++;
> smp_wmb(); /* make pte visible before pmd */
> /*
> * Up to this point the pmd is present and huge and
> @@ -1988,7 +1989,6 @@ static void collapse_huge_page(struct mm_struct *mm,
> set_pmd_at(mm, address, pmd, _pmd);
> update_mmu_cache(vma, address, _pmd);
> prepare_pmd_huge_pte(pgtable, mm);
> - mm->nr_ptes--;
> spin_unlock(&mm->page_table_lock);
>
> #ifndef CONFIG_NUMA

2012-02-16 21:42:51

by Andrea Arcangeli

Subject: Re: exit_mmap() BUG_ON triggering since 3.1

On Thu, Feb 16, 2012 at 01:53:04AM -0800, Hugh Dickins wrote:
> Yes (and I think less troublesome than most BUGs, coming at exit
> while not holding locks; though we could well make it a WARN_ON now,
> I don't think that existed back in the day).

A WARN_ON would be fine with me, go ahead if you prefer it... the only
risk would be that it goes unnoticed or is underestimated. I'm OK with
the BUG_ON too (even if this time it triggered false positives... sigh).

> Acked-by: Hugh Dickins <[email protected]>

Thanks for the quick review!

> In looking into the bug, it had actually bothered me a little that you
> were setting aside those pages, yet not counting them into nr_ptes;
> though the only thing that cares is oom_kill.c, and the count of pages
> in each hugepage can only dwarf the count in nr_ptes (whereas, without
> hugepages, it's possible to populate very sparsely and nr_ptes become
> significant).

Agreed, it's not significant either way.

I've been running my two primary systems with this applied for half a
day with no problems so far, so it should be good for -mm at least.