2010-08-06 03:28:33

by Shaohua Li

Subject: [patch]x86: avoid unnecessary tlb flush

On x86, the accessed and dirty bits are set automatically by the CPU when it
accesses memory. By the time we reach the flush_tlb_nonprotect_page() call
below, the dirty bit has already been set in the pte, so there is no need to
flush the TLB. Some CPUs may still hold a TLB entry without the dirty bit set,
but that doesn't matter: when those CPUs write to the page, the hardware
checks and sets the bit itself, with no software involvement.

On the other hand, flushing the TLB at this point is harmful. A test creates
one thread per CPU; each thread writes to the same, randomly chosen address in
a shared vma range, and we measure the total time. On a 4-socket system the
original time is 1.96s, while with the patch it is 0.8s. On a 2-socket system
there is a 20% reduction as well. perf shows that much of the time is spent
sending and handling the IPIs for the TLB flush.

Signed-off-by: Shaohua Li <[email protected]>

---
arch/x86/include/asm/pgtable.h | 3 +++
include/asm-generic/pgtable.h | 4 ++++
mm/memory.c | 2 +-
3 files changed, 8 insertions(+), 1 deletion(-)

Index: linux/arch/x86/include/asm/pgtable.h
===================================================================
--- linux.orig/arch/x86/include/asm/pgtable.h 2010-07-29 13:25:12.000000000 +0800
+++ linux/arch/x86/include/asm/pgtable.h 2010-08-03 09:02:07.000000000 +0800
@@ -603,6 +603,9 @@ static inline void ptep_set_wrprotect(st
pte_update(mm, addr, ptep);
}

+#define __HAVE_ARCH_FLUSH_TLB_NONPROTECT_PAGE
+#define flush_tlb_nonprotect_page(vma, address)
+
/*
* clone_pgd_range(pgd_t *dst, pgd_t *src, int count);
*
Index: linux/include/asm-generic/pgtable.h
===================================================================
--- linux.orig/include/asm-generic/pgtable.h 2010-07-29 13:25:12.000000000 +0800
+++ linux/include/asm-generic/pgtable.h 2010-08-03 09:02:07.000000000 +0800
@@ -129,6 +129,10 @@ static inline void ptep_set_wrprotect(st
#define move_pte(pte, prot, old_addr, new_addr) (pte)
#endif

+#ifndef __HAVE_ARCH_FLUSH_TLB_NONPROTECT_PAGE
+#define flush_tlb_nonprotect_page(vma, address) flush_tlb_page(vma, address)
+#endif
+
#ifndef pgprot_noncached
#define pgprot_noncached(prot) (prot)
#endif
Index: linux/mm/memory.c
===================================================================
--- linux.orig/mm/memory.c 2010-08-02 08:50:05.000000000 +0800
+++ linux/mm/memory.c 2010-08-03 09:02:07.000000000 +0800
@@ -3116,7 +3116,7 @@ static inline int handle_pte_fault(struc
* with threads.
*/
if (flags & FAULT_FLAG_WRITE)
- flush_tlb_page(vma, address);
+ flush_tlb_nonprotect_page(vma, address);
}
unlock:
pte_unmap_unlock(pte, ptl);


2010-08-06 05:19:05

by Andrew Morton

Subject: Re: [patch]x86: avoid unnecessary tlb flush

On Fri, 06 Aug 2010 11:28:28 +0800 Shaohua Li <[email protected]> wrote:

> [...]
> Index: linux/arch/x86/include/asm/pgtable.h
> ===================================================================
> --- linux.orig/arch/x86/include/asm/pgtable.h 2010-07-29 13:25:12.000000000 +0800
> +++ linux/arch/x86/include/asm/pgtable.h 2010-08-03 09:02:07.000000000 +0800
> @@ -603,6 +603,9 @@ static inline void ptep_set_wrprotect(st
> pte_update(mm, addr, ptep);
> }
>
> +#define __HAVE_ARCH_FLUSH_TLB_NONPROTECT_PAGE
> +#define flush_tlb_nonprotect_page(vma, address)
> +
> /*
> * clone_pgd_range(pgd_t *dst, pgd_t *src, int count);
> *
> Index: linux/include/asm-generic/pgtable.h
> ===================================================================
> --- linux.orig/include/asm-generic/pgtable.h 2010-07-29 13:25:12.000000000 +0800
> +++ linux/include/asm-generic/pgtable.h 2010-08-03 09:02:07.000000000 +0800
> @@ -129,6 +129,10 @@ static inline void ptep_set_wrprotect(st
> #define move_pte(pte, prot, old_addr, new_addr) (pte)
> #endif
>
> +#ifndef __HAVE_ARCH_FLUSH_TLB_NONPROTECT_PAGE
> +#define flush_tlb_nonprotect_page(vma, address) flush_tlb_page(vma, address)
> +#endif

The preferred technique here is

#ifndef flush_tlb_nonprotect_page
#define flush_tlb_nonprotect_page(vma, address) flush_tlb_page(vma, address)
#endif

so no need for __HAVE_ARCH_FLUSH_TLB_NONPROTECT_PAGE.
include/asm-generic/pgtable.h uses a mix of the two techniques.

2010-08-13 00:48:13

by Shaohua Li

Subject: Re: [patch]x86: avoid unnecessary tlb flush

On Fri, 2010-08-06 at 13:19 +0800, Andrew Morton wrote:
> On Fri, 06 Aug 2010 11:28:28 +0800 Shaohua Li <[email protected]> wrote:
>
> > [...]
>
> The preferred technique here is
>
> #ifndef flush_tlb_nonprotect_page
> #define flush_tlb_nonprotect_page(vma, address) flush_tlb_page(vma, address)
> #endif
>
> so no need for __HAVE_ARCH_FLUSH_TLB_NONPROTECT_PAGE.
> include/asm-generic/pgtable.h uses a mix of the two techniques.
ok, updated the patch.


On x86, the accessed and dirty bits are set automatically by the CPU when it
accesses memory. By the time we reach the flush_tlb_nonprotect_page() call
below, the dirty bit has already been set in the pte, so there is no need to
flush the TLB. Some CPUs may still hold a TLB entry without the dirty bit set,
but that doesn't matter: when those CPUs write to the page, the hardware
checks and sets the bit itself, with no software involvement.

On the other hand, flushing the TLB at this point is harmful. A test creates
one thread per CPU; each thread writes to the same, randomly chosen address in
a shared vma range, and we measure the total time. On a 4-socket system the
original time is 1.96s, while with the patch it is 0.8s. On a 2-socket system
there is a 20% reduction as well. perf shows that much of the time is spent
sending and handling the IPIs for the TLB flush.

Signed-off-by: Shaohua Li <[email protected]>

---
arch/x86/include/asm/pgtable.h | 2 ++
include/asm-generic/pgtable.h | 4 ++++
mm/memory.c | 2 +-
3 files changed, 7 insertions(+), 1 deletion(-)

Index: linux/arch/x86/include/asm/pgtable.h
===================================================================
--- linux.orig/arch/x86/include/asm/pgtable.h 2010-08-13 08:23:13.000000000 +0800
+++ linux/arch/x86/include/asm/pgtable.h 2010-08-13 08:24:53.000000000 +0800
@@ -603,6 +603,8 @@ static inline void ptep_set_wrprotect(st
pte_update(mm, addr, ptep);
}

+#define flush_tlb_nonprotect_page(vma, address)
+
/*
* clone_pgd_range(pgd_t *dst, pgd_t *src, int count);
*
Index: linux/include/asm-generic/pgtable.h
===================================================================
--- linux.orig/include/asm-generic/pgtable.h 2010-08-13 08:23:13.000000000 +0800
+++ linux/include/asm-generic/pgtable.h 2010-08-13 08:24:53.000000000 +0800
@@ -129,6 +129,10 @@ static inline void ptep_set_wrprotect(st
#define move_pte(pte, prot, old_addr, new_addr) (pte)
#endif

+#ifndef flush_tlb_nonprotect_page
+#define flush_tlb_nonprotect_page(vma, address) flush_tlb_page(vma, address)
+#endif
+
#ifndef pgprot_noncached
#define pgprot_noncached(prot) (prot)
#endif
Index: linux/mm/memory.c
===================================================================
--- linux.orig/mm/memory.c 2010-08-13 08:23:13.000000000 +0800
+++ linux/mm/memory.c 2010-08-13 08:24:53.000000000 +0800
@@ -3116,7 +3116,7 @@ static inline int handle_pte_fault(struc
* with threads.
*/
if (flags & FAULT_FLAG_WRITE)
- flush_tlb_page(vma, address);
+ flush_tlb_nonprotect_page(vma, address);
}
unlock:
pte_unmap_unlock(pte, ptl);

2010-08-13 19:29:27

by Hugh Dickins

Subject: Re: [patch]x86: avoid unnecessary tlb flush

On Fri, 13 Aug 2010, Shaohua Li wrote:
>
> [...]
> Index: linux/mm/memory.c
> ===================================================================
> --- linux.orig/mm/memory.c 2010-08-13 08:23:13.000000000 +0800
> +++ linux/mm/memory.c 2010-08-13 08:24:53.000000000 +0800
> @@ -3116,7 +3116,7 @@ static inline int handle_pte_fault(struc
> * with threads.
> */
> if (flags & FAULT_FLAG_WRITE)
> - flush_tlb_page(vma, address);
> + flush_tlb_nonprotect_page(vma, address);
> }
> unlock:
> pte_unmap_unlock(pte, ptl);

Just added Andrea to the Cc list: he did that TLB flush in 1a44e149,
I'd feel more comfortable noop-ing it on x86 if you've convinced him.

Hugh

2010-08-13 21:09:06

by H. Peter Anvin

Subject: Re: [patch]x86: avoid unnecessary tlb flush

On 08/13/2010 12:29 PM, Hugh Dickins wrote:
>
> Just added Andrea to the Cc list: he did that TLB flush in 1a44e149,
> I'd feel more comfortable noop-ing it on x86 if you've convinced him.
>
> Hugh

Andrea is probably on his way back from LinuxCon, but looking at the
original patch it might be something that non-x86 architectures need,
but which can be optimized specifically on x86, since x86 has explicit
"no flush needed when going to more permissive" semantics.

-hpa

2010-08-13 23:01:07

by Suresh Siddha

Subject: Re: [patch]x86: avoid unnecessary tlb flush

On Fri, 2010-08-13 at 14:08 -0700, H. Peter Anvin wrote:
> On 08/13/2010 12:29 PM, Hugh Dickins wrote:
> >
> > Just added Andrea to the Cc list: he did that TLB flush in 1a44e149,
> > I'd feel more comfortable noop-ing it on x86 if you've convinced him.
> >
> > Hugh
>
> Andrea is probably on his way back from LinuxCon, but looking at the
> original patch it might be something that non-x86 architectures need,
> but which can be optimized specifically on x86, since x86 has explicit
> "no flush needed when going to more permissive" semantics.

Yes. I don't see a problem with the proposed patch. This is the case of
parallel thread execution getting spurious write protection faults for
the same page for which the pte entry is already up to date and the
fault has already flushed the existing spurious TLB entry in the case of
x86.

I'd prefer a better name for the new flush_tlb_nonprotect_page() to
reflect the above; something like tlb_fix_spurious_fault(), perhaps?

Also, for other architectures in this case, do we really need a global
TLB flush, or would a local TLB flush suffice?

Acked-by: Suresh Siddha <[email protected]>

2010-08-16 01:16:59

by Shaohua Li

Subject: Re: [patch]x86: avoid unnecessary tlb flush

On Sat, Aug 14, 2010 at 07:00:37AM +0800, Siddha, Suresh B wrote:
> [...]
>
> Yes. I don't see a problem with the proposed patch. This is the case of
> parallel thread execution getting spurious write protection faults for
> the same page for which the pte entry is already up to date and the
> fault has already flushed the existing spurious TLB entry in the case of
> x86.
>
> > I'd prefer a better name for the new flush_tlb_nonprotect_page() to
> > reflect the above; something like tlb_fix_spurious_fault(), perhaps?
this name is better.


On x86, the accessed and dirty bits are set automatically by the CPU when it
accesses memory. By the time we reach the flush_tlb_fix_spurious_fault() call
below, the dirty bit has already been set in the pte, so there is no need to
flush the TLB. Some CPUs may still hold a TLB entry without the dirty bit set,
but that doesn't matter: when those CPUs write to the page, the hardware
checks and sets the bit itself, with no software involvement.

On the other hand, flushing the TLB at this point is harmful. A test creates
one thread per CPU; each thread writes to the same, randomly chosen address in
a shared vma range, and we measure the total time. On a 4-socket system the
original time is 1.96s, while with the patch it is 0.8s. On a 2-socket system
there is a 20% reduction as well. perf shows that much of the time is spent
sending and handling the IPIs for the TLB flush.

Signed-off-by: Shaohua Li <[email protected]>
Acked-by: Suresh Siddha <[email protected]>

---
arch/x86/include/asm/pgtable.h | 2 ++
include/asm-generic/pgtable.h | 4 ++++
mm/memory.c | 2 +-
3 files changed, 7 insertions(+), 1 deletion(-)

Index: linux/arch/x86/include/asm/pgtable.h
===================================================================
--- linux.orig/arch/x86/include/asm/pgtable.h 2010-08-16 09:00:02.000000000 +0800
+++ linux/arch/x86/include/asm/pgtable.h 2010-08-16 09:03:41.000000000 +0800
@@ -603,6 +603,8 @@ static inline void ptep_set_wrprotect(st
pte_update(mm, addr, ptep);
}

+#define flush_tlb_fix_spurious_fault(vma, address)
+
/*
* clone_pgd_range(pgd_t *dst, pgd_t *src, int count);
*
Index: linux/include/asm-generic/pgtable.h
===================================================================
--- linux.orig/include/asm-generic/pgtable.h 2010-08-16 09:00:02.000000000 +0800
+++ linux/include/asm-generic/pgtable.h 2010-08-16 09:03:41.000000000 +0800
@@ -129,6 +129,10 @@ static inline void ptep_set_wrprotect(st
#define move_pte(pte, prot, old_addr, new_addr) (pte)
#endif

+#ifndef flush_tlb_fix_spurious_fault
+#define flush_tlb_fix_spurious_fault(vma, address) flush_tlb_page(vma, address)
+#endif
+
#ifndef pgprot_noncached
#define pgprot_noncached(prot) (prot)
#endif
Index: linux/mm/memory.c
===================================================================
--- linux.orig/mm/memory.c 2010-08-16 09:03:08.000000000 +0800
+++ linux/mm/memory.c 2010-08-16 09:03:41.000000000 +0800
@@ -3140,7 +3140,7 @@ static inline int handle_pte_fault(struc
* with threads.
*/
if (flags & FAULT_FLAG_WRITE)
- flush_tlb_page(vma, address);
+ flush_tlb_fix_spurious_fault(vma, address);
}
unlock:
pte_unmap_unlock(pte, ptl);

2010-08-23 00:43:33

by Shaohua Li

Subject: Re: [patch]x86: avoid unnecessary tlb flush

On Mon, Aug 16, 2010 at 09:16:55AM +0800, Shaohua Li wrote:
> On Sat, Aug 14, 2010 at 07:00:37AM +0800, Siddha, Suresh B wrote:
> > [...]
> > Yes. I don't see a problem with the proposed patch. This is the case of
> > parallel thread execution getting spurious write protection faults for
> > the same page for which the pte entry is already up to date and the
> > fault has already flushed the existing spurious TLB entry in the case of
> > x86.
> >
> > I'd prefer a better name for the new flush_tlb_nonprotect_page() to
> > reflect the above; something like tlb_fix_spurious_fault(), perhaps?
> this name is better.
Hi Andrea,
can you look at this patch?

Thanks,
Shaohua

> On x86, the accessed and dirty bits are set automatically by the CPU when it
> accesses memory. By the time we reach the flush_tlb_fix_spurious_fault() call
> below, the dirty bit has already been set in the pte, so there is no need to
> flush the TLB. Some CPUs may still hold a TLB entry without the dirty bit set,
> but that doesn't matter: when those CPUs write to the page, the hardware
> checks and sets the bit itself, with no software involvement.
>
> On the other hand, flushing the TLB at this point is harmful. A test creates
> one thread per CPU; each thread writes to the same, randomly chosen address in
> a shared vma range, and we measure the total time. On a 4-socket system the
> original time is 1.96s, while with the patch it is 0.8s. On a 2-socket system
> there is a 20% reduction as well. perf shows that much of the time is spent
> sending and handling the IPIs for the TLB flush.
>
> Signed-off-by: Shaohua Li <[email protected]>
> Acked-by: Suresh Siddha <[email protected]>
>
> ---
> arch/x86/include/asm/pgtable.h | 2 ++
> include/asm-generic/pgtable.h | 4 ++++
> mm/memory.c | 2 +-
> 3 files changed, 7 insertions(+), 1 deletion(-)
>
> Index: linux/arch/x86/include/asm/pgtable.h
> ===================================================================
> --- linux.orig/arch/x86/include/asm/pgtable.h 2010-08-16 09:00:02.000000000 +0800
> +++ linux/arch/x86/include/asm/pgtable.h 2010-08-16 09:03:41.000000000 +0800
> @@ -603,6 +603,8 @@ static inline void ptep_set_wrprotect(st
> pte_update(mm, addr, ptep);
> }
>
> +#define flush_tlb_fix_spurious_fault(vma, address)
> +
> /*
> * clone_pgd_range(pgd_t *dst, pgd_t *src, int count);
> *
> Index: linux/include/asm-generic/pgtable.h
> ===================================================================
> --- linux.orig/include/asm-generic/pgtable.h 2010-08-16 09:00:02.000000000 +0800
> +++ linux/include/asm-generic/pgtable.h 2010-08-16 09:03:41.000000000 +0800
> @@ -129,6 +129,10 @@ static inline void ptep_set_wrprotect(st
> #define move_pte(pte, prot, old_addr, new_addr) (pte)
> #endif
>
> +#ifndef flush_tlb_fix_spurious_fault
> +#define flush_tlb_fix_spurious_fault(vma, address) flush_tlb_page(vma, address)
> +#endif
> +
> #ifndef pgprot_noncached
> #define pgprot_noncached(prot) (prot)
> #endif
> Index: linux/mm/memory.c
> ===================================================================
> --- linux.orig/mm/memory.c 2010-08-16 09:03:08.000000000 +0800
> +++ linux/mm/memory.c 2010-08-16 09:03:41.000000000 +0800
> @@ -3140,7 +3140,7 @@ static inline int handle_pte_fault(struc
> * with threads.
> */
> if (flags & FAULT_FLAG_WRITE)
> - flush_tlb_page(vma, address);
> + flush_tlb_fix_spurious_fault(vma, address);
> }
> unlock:
> pte_unmap_unlock(pte, ptl);

2010-08-23 17:58:04

by Shaohua Li

Subject: [tip:x86/mm] x86, mm: Avoid unnecessary TLB flush

Commit-ID: 61c77326d1df079f202fa79403c3ccd8c5966a81
Gitweb: http://git.kernel.org/tip/61c77326d1df079f202fa79403c3ccd8c5966a81
Author: Shaohua Li <[email protected]>
AuthorDate: Mon, 16 Aug 2010 09:16:55 +0800
Committer: H. Peter Anvin <[email protected]>
CommitDate: Mon, 23 Aug 2010 10:04:57 -0700

x86, mm: Avoid unnecessary TLB flush

On x86, the accessed and dirty bits are set automatically by the CPU when it
accesses memory. By the time we reach the flush_tlb_fix_spurious_fault() call
below, the dirty bit has already been set in the pte, so there is no need to
flush the TLB. Some CPUs may still hold a TLB entry without the dirty bit set,
but that doesn't matter: when those CPUs write to the page, the hardware
checks and sets the bit itself, with no software involvement.

On the other hand, flushing the TLB at this point is harmful. A test creates
one thread per CPU; each thread writes to the same, randomly chosen address in
a shared vma range, and we measure the total time. On a 4-socket system the
original time is 1.96s, while with the patch it is 0.8s. On a 2-socket system
there is a 20% reduction as well. perf shows that much of the time is spent
sending and handling the IPIs for the TLB flush.

Signed-off-by: Shaohua Li <[email protected]>
LKML-Reference: <[email protected]>
Acked-by: Suresh Siddha <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Signed-off-by: H. Peter Anvin <[email protected]>
---
arch/x86/include/asm/pgtable.h | 2 ++
include/asm-generic/pgtable.h | 4 ++++
mm/memory.c | 2 +-
3 files changed, 7 insertions(+), 1 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index a34c785..2d0a33b 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -603,6 +603,8 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm,
pte_update(mm, addr, ptep);
}

+#define flush_tlb_fix_spurious_fault(vma, address)
+
/*
* clone_pgd_range(pgd_t *dst, pgd_t *src, int count);
*
diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index e2bd73e..f4d4120 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -129,6 +129,10 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addres
#define move_pte(pte, prot, old_addr, new_addr) (pte)
#endif

+#ifndef flush_tlb_fix_spurious_fault
+#define flush_tlb_fix_spurious_fault(vma, address) flush_tlb_page(vma, address)
+#endif
+
#ifndef pgprot_noncached
#define pgprot_noncached(prot) (prot)
#endif
diff --git a/mm/memory.c b/mm/memory.c
index 2ed2267..a40da69 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3147,7 +3147,7 @@ static inline int handle_pte_fault(struct mm_struct *mm,
* with threads.
*/
if (flags & FAULT_FLAG_WRITE)
- flush_tlb_page(vma, address);
+ flush_tlb_fix_spurious_fault(vma, address);
}
unlock:
pte_unmap_unlock(pte, ptl);