2016-04-05 19:06:46

by Sukadev Bhattiprolu

Subject: [PATCH 1/1] powerpc/mm: Add memory barrier in __hugepte_alloc()

From f7b73c6b4508fe9b141a43d92be2f9dd7d3c4a58 Mon Sep 17 00:00:00 2001
From: Sukadev Bhattiprolu <[email protected]>
Date: Thu, 24 Mar 2016 02:07:57 -0400
Subject: [PATCH 1/1] powerpc/mm: Add memory barrier in __hugepte_alloc()

__hugepte_alloc() uses kmem_cache_zalloc() to allocate a zeroed page-table
page and then installs it in the hugepd. Add a memory barrier to make sure
that other CPUs see the fully initialized page table before they see the
hugepd entry pointing to it.

Based on a fix suggested by James Dykman.

Reported-by: James Dykman <[email protected]>
Signed-off-by: Aneesh Kumar K.V <[email protected]>
Signed-off-by: Sukadev Bhattiprolu <[email protected]>
Tested-by: James Dykman <[email protected]>
---
Note:
The bug was encountered and the fix tested on an older kernel; this is a
forward port to mainline.
---
arch/powerpc/mm/hugetlbpage.c | 7 +++++++
1 file changed, 7 insertions(+)

diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index d991b9e..081f679 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -81,6 +81,13 @@ static int __hugepte_alloc(struct mm_struct *mm, hugepd_t *hpdp,
if (! new)
return -ENOMEM;

+ /*
+ * Make sure other cpus find the hugepd set only after a
+ * properly initialized page table is visible to them.
+ * For more details look for comment in __pte_alloc().
+ */
+ smp_wmb();
+
spin_lock(&mm->page_table_lock);
#ifdef CONFIG_PPC_FSL_BOOK3E
/*
--
1.8.3.1
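
For context, the ordering the patch relies on is the standard
initialize-then-publish pattern: fully initialize the new page-table page,
issue a write barrier, and only then make the hugepd entry point at it.
Below is a minimal userspace sketch of that pattern, using C11 atomics in
place of the kernel's smp_wmb(); the names (publish_page_table,
hugepd_entry, table_storage) are made up for illustration.

#include <stdatomic.h>
#include <string.h>

/* Stand-ins for the kernel objects; names are hypothetical. */
struct pte_page { unsigned long slots[512]; };        /* a page-table page */

static struct pte_page table_storage;
static _Atomic(struct pte_page *) hugepd_entry;       /* the hugepd slot */

/*
 * Writer: zero the page-table page first, then publish the pointer.
 * The release fence plays the role of smp_wmb() in __hugepte_alloc():
 * the zeroing must become visible to other CPUs no later than the
 * pointer that lets them reach it.
 */
void publish_page_table(void)
{
	memset(&table_storage, 0, sizeof(table_storage)); /* kmem_cache_zalloc() analogue */
	atomic_thread_fence(memory_order_release);        /* smp_wmb() analogue */
	atomic_store_explicit(&hugepd_entry, &table_storage,
			      memory_order_relaxed);
}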


2016-04-06 09:56:28

by Michal Hocko

Subject: Re: [PATCH 1/1] powerpc/mm: Add memory barrier in __hugepte_alloc()

On Tue 05-04-16 12:05:47, Sukadev Bhattiprolu wrote:
[...]
> diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
> index d991b9e..081f679 100644
> --- a/arch/powerpc/mm/hugetlbpage.c
> +++ b/arch/powerpc/mm/hugetlbpage.c
> @@ -81,6 +81,13 @@ static int __hugepte_alloc(struct mm_struct *mm, hugepd_t *hpdp,
> if (! new)
> return -ENOMEM;
>
> + /*
> + * Make sure other cpus find the hugepd set only after a
> + * properly initialized page table is visible to them.
> + * For more details look for comment in __pte_alloc().
> + */
> + smp_wmb();
> +

what is the pairing memory barrier?

> spin_lock(&mm->page_table_lock);
> #ifdef CONFIG_PPC_FSL_BOOK3E
> /*
> --
> 1.8.3.1

--
Michal Hocko
SUSE Labs

2016-04-06 10:21:43

by Aneesh Kumar K.V

Subject: Re: [PATCH 1/1] powerpc/mm: Add memory barrier in __hugepte_alloc()

Michal Hocko <[email protected]> writes:

> On Tue 05-04-16 12:05:47, Sukadev Bhattiprolu wrote:
> [...]
>> diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
>> index d991b9e..081f679 100644
>> --- a/arch/powerpc/mm/hugetlbpage.c
>> +++ b/arch/powerpc/mm/hugetlbpage.c
>> @@ -81,6 +81,13 @@ static int __hugepte_alloc(struct mm_struct *mm, hugepd_t *hpdp,
>> if (! new)
>> return -ENOMEM;
>>
>> + /*
>> + * Make sure other cpus find the hugepd set only after a
>> + * properly initialized page table is visible to them.
>> + * For more details look for comment in __pte_alloc().
>> + */
>> + smp_wmb();
>> +
>
> what is the pairing memory barrier?
>
>> spin_lock(&mm->page_table_lock);
>> #ifdef CONFIG_PPC_FSL_BOOK3E
>> /*

This is documented in __pte_alloc(). I didn't want to repeat the same
here.

/*
* Ensure all pte setup (eg. pte page lock and page clearing) are
* visible before the pte is made visible to other CPUs by being
* put into page tables.
*
* The other side of the story is the pointer chasing in the page
* table walking code (when walking the page table without locking;
* ie. most of the time). Fortunately, these data accesses consist
* of a chain of data-dependent loads, meaning most CPUs (alpha
* being the notable exception) will already guarantee loads are
* seen in-order. See the alpha page table accessors for the
* smp_read_barrier_depends() barriers in page table walking code.
*/
smp_wmb(); /* Could be smp_wmb__xxx(before|after)_spin_lock */


-aneesh
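
The pairing Michal asks about, and the "other side" the quoted comment
describes, is the lockless page-table walker: it loads the hugepd pointer
and then loads through it, and the address dependency between those two
loads (plus smp_read_barrier_depends() on Alpha) provides the read-side
ordering. Here is a minimal userspace analogue of that reader, pairing
with the writer sketch above; it again uses hypothetical names and C11
atomics rather than the kernel primitives, and substitutes an acquire load
as a conservative stand-in for the dependency ordering.

#include <stdatomic.h>
#include <stddef.h>

struct pte_page { unsigned long slots[512]; };
extern _Atomic(struct pte_page *) hugepd_entry;   /* published by the writer above */

/*
 * Reader: the lockless page-table-walk analogue.  An acquire load is used
 * here for simplicity; the kernel walker instead relies on the address
 * dependency between loading the pointer and loading through it
 * (smp_read_barrier_depends() on Alpha, a no-op elsewhere).
 */
unsigned long read_slot(size_t idx)
{
	struct pte_page *p =
		atomic_load_explicit(&hugepd_entry, memory_order_acquire);

	if (!p)
		return 0;             /* table not published yet */
	return p->slots[idx];         /* guaranteed to see the zeroed contents */
}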

2016-04-06 11:27:24

by Michal Hocko

Subject: Re: [PATCH 1/1] powerpc/mm: Add memory barrier in __hugepte_alloc()

On Wed 06-04-16 15:39:17, Aneesh Kumar K.V wrote:
> Michal Hocko <[email protected]> writes:
>
> > On Tue 05-04-16 12:05:47, Sukadev Bhattiprolu wrote:
> > [...]
> >> diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
> >> index d991b9e..081f679 100644
> >> --- a/arch/powerpc/mm/hugetlbpage.c
> >> +++ b/arch/powerpc/mm/hugetlbpage.c
> >> @@ -81,6 +81,13 @@ static int __hugepte_alloc(struct mm_struct *mm, hugepd_t *hpdp,
> >> if (! new)
> >> return -ENOMEM;
> >>
> >> + /*
> >> + * Make sure other cpus find the hugepd set only after a
> >> + * properly initialized page table is visible to them.
> >> + * For more details look for comment in __pte_alloc().
> >> + */
> >> + smp_wmb();
> >> +
> >
> > what is the pairing memory barrier?
> >
> >> spin_lock(&mm->page_table_lock);
> >> #ifdef CONFIG_PPC_FSL_BOOK3E
> >> /*
>
> This is documented in __pte_alloc(). I didn't want to repeat the same
> here.
>
> /*
> * Ensure all pte setup (eg. pte page lock and page clearing) are
> * visible before the pte is made visible to other CPUs by being
> * put into page tables.
> *
> * The other side of the story is the pointer chasing in the page
> * table walking code (when walking the page table without locking;
> * ie. most of the time). Fortunately, these data accesses consist
> * of a chain of data-dependent loads, meaning most CPUs (alpha
> * being the notable exception) will already guarantee loads are
> * seen in-order. See the alpha page table accessors for the
> * smp_read_barrier_depends() barriers in page table walking code.
> */
> smp_wmb(); /* Could be smp_wmb__xxx(before|after)_spin_lock */

OK, I have missed the reference to __pte_alloc. My bad!

--
Michal Hocko
SUSE Labs