2022-07-11 04:47:53

by Barry Song

Subject: [PATCH v2 0/4] mm: arm64: bring up BATCHED_UNMAP_TLB_FLUSH

Though ARM64 has the hardware to do tlb shootdown, the hardware
broadcasting is not free.
A simple micro benchmark shows that even on a snapdragon 888 with only
8 cores, the overhead of ptep_clear_flush is huge even for paging
out one page mapped by only one process:
5.36% a.out [kernel.kallsyms] [k] ptep_clear_flush
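
(A minimal sketch of such a benchmark - assuming MADV_PAGEOUT is
available; the exact code used was not posted with the series - maps one
anonymous page and repeatedly pages it out:)

#include <string.h>
#include <sys/mman.h>

#ifndef MADV_PAGEOUT
#define MADV_PAGEOUT 21	/* reclaim these pages, available since Linux 5.4 */
#endif

int main(void)
{
	size_t len = 4096;
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);

	if (p == MAP_FAILED)
		return 1;

	for (int i = 0; i < 1000000; i++) {
		memset(p, 1, len);		/* fault the page back in */
		madvise(p, len, MADV_PAGEOUT);	/* page it out -> unmap + TLB flush */
	}
	return 0;
}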

When pages are mapped by multiple processes or the HW has more CPUs,
the cost becomes even higher due to the poor scalability of tlb
shootdown.

The same benchmark results in 16.99% CPU consumption on an ARM64
server with around 100 cores, according to Yicong's test on patch
4/4.

This patchset leverages the existing BATCHED_UNMAP_TLB_FLUSH by
1. only sending tlbi instructions in the first stage -
arch_tlbbatch_add_mm()
2. waiting for the completion of tlbi by dsb while doing the tlbbatch
sync in arch_tlbbatch_flush()
My testing on snapdragon shows the overhead of ptep_clear_flush
is removed by the patchset. The micro benchmark becomes 5% faster
even for one page mapped by a single process on snapdragon 888.
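
For reference, a rough sketch of what the two stages could look like on
the arm64 side (the real code is in patch 4/4; this sketch just mirrors
the existing flush_tlb_page_nosync() helper in
arch/arm64/include/asm/tlbflush.h and is illustrative only):

static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
					struct mm_struct *mm,
					unsigned long uaddr)
{
	unsigned long addr = __TLBI_VADDR(uaddr, ASID(mm));

	/* stage 1: issue the broadcast invalidate but do not wait for it */
	dsb(ishst);	/* make the earlier PTE change visible first */
	__tlbi(vale1is, addr);
	__tlbi_user(vale1is, addr);
}

static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
{
	/* stage 2: one dsb per batch waits for all the tlbi issued above */
	dsb(ish);
}

This way the expensive wait is paid once per batch in
arch_tlbbatch_flush() rather than once per page in ptep_clear_flush().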


-v2:
1. Collected Yicong's test result on a kunpeng920 ARM64 server;
2. Removed the redundant vma parameter in arch_tlbbatch_add_mm()
according to the comments of Peter Zijlstra and Dave Hansen;
3. Added ARCH_HAS_MM_CPUMASK rather than checking if mm_cpumask
is empty, according to the comments of Nadav Amit.

Thanks to Yicong, Peter, Dave and Nadav for your testing, reviewing
and comments.

-v1:
https://lore.kernel.org/lkml/[email protected]/

Barry Song (4):
Revert "Documentation/features: mark BATCHED_UNMAP_TLB_FLUSH doesn't
apply to ARM64"
mm: rmap: Allow platforms without mm_cpumask to defer TLB flush
mm: rmap: Extend tlbbatch APIs to fit new platforms
arm64: support batched/deferred tlb shootdown during page reclamation

Documentation/features/arch-support.txt | 1 -
.../features/vm/TLB/arch-support.txt | 2 +-
arch/arm/Kconfig | 1 +
arch/arm64/Kconfig | 1 +
arch/arm64/include/asm/tlbbatch.h | 12 ++++++++++
arch/arm64/include/asm/tlbflush.h | 23 +++++++++++++++++--
arch/loongarch/Kconfig | 1 +
arch/mips/Kconfig | 1 +
arch/openrisc/Kconfig | 1 +
arch/powerpc/Kconfig | 1 +
arch/riscv/Kconfig | 1 +
arch/s390/Kconfig | 1 +
arch/um/Kconfig | 1 +
arch/x86/Kconfig | 1 +
arch/x86/include/asm/tlbflush.h | 3 ++-
mm/Kconfig | 3 +++
mm/rmap.c | 14 +++++++----
17 files changed, 59 insertions(+), 9 deletions(-)
create mode 100644 arch/arm64/include/asm/tlbbatch.h

--
2.25.1


2022-07-11 04:50:30

by Barry Song

Subject: [PATCH v2 3/4] mm: rmap: Extend tlbbatch APIs to fit new platforms

From: Barry Song <[email protected]>

Add uaddr to the tlbbatch APIs so that platforms like ARM64 are
able to apply this to their specific hardware features. For
ARM64, this could be sending a tlbi into hardware queues for
the page with this particular uaddr.

Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Nadav Amit <[email protected]>
Cc: Mel Gorman <[email protected]>
Signed-off-by: Barry Song <[email protected]>
---
arch/x86/include/asm/tlbflush.h | 3 ++-
mm/rmap.c | 10 ++++++----
2 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 4af5579c7ef7..1b32f4b999c7 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -251,7 +251,8 @@ static inline u64 inc_mm_tlb_gen(struct mm_struct *mm)
}

static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
- struct mm_struct *mm)
+ struct mm_struct *mm,
+ unsigned long uaddr)
{
inc_mm_tlb_gen(mm);
cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
diff --git a/mm/rmap.c b/mm/rmap.c
index 13d4f9a1d4f1..a52381a680db 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -642,12 +642,13 @@ void try_to_unmap_flush_dirty(void)
#define TLB_FLUSH_BATCH_PENDING_LARGE \
(TLB_FLUSH_BATCH_PENDING_MASK / 2)

-static void set_tlb_ubc_flush_pending(struct mm_struct *mm, bool writable)
+static void set_tlb_ubc_flush_pending(struct mm_struct *mm, bool writable,
+ unsigned long uaddr)
{
struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
int batch, nbatch;

- arch_tlbbatch_add_mm(&tlb_ubc->arch, mm);
+ arch_tlbbatch_add_mm(&tlb_ubc->arch, mm, uaddr);
tlb_ubc->flush_required = true;

/*
@@ -736,7 +737,8 @@ void flush_tlb_batched_pending(struct mm_struct *mm)
}
}
#else
-static void set_tlb_ubc_flush_pending(struct mm_struct *mm, bool writable)
+static void set_tlb_ubc_flush_pending(struct mm_struct *mm, bool writable,
+ unsigned long uaddr)
{
}

@@ -1599,7 +1601,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
*/
pteval = ptep_get_and_clear(mm, address, pvmw.pte);

- set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
+ set_tlb_ubc_flush_pending(mm, pte_dirty(pteval), address);
} else {
pteval = ptep_clear_flush(vma, address, pvmw.pte);
}
--
2.25.1

2022-07-14 04:00:17

by haoxin

Subject: Re: [PATCH v2 0/4] mm: arm64: bring up BATCHED_UNMAP_TLB_FLUSH

Hi Barry,

I did some tests on a Kunpeng arm64 machine using UnixBench.

The test results are as below.

With one core, we can see a performance improvement of above +30%.
./Run -c 1 -i 1 shell1
w/o
System Benchmarks Partial Index              BASELINE RESULT INDEX
Shell Scripts (1 concurrent)                     42.4 5481.0 1292.7
========
System Benchmarks Index Score (Partial Only)                         1292.7

w/
System Benchmarks Partial Index              BASELINE RESULT INDEX
Shell Scripts (1 concurrent)                     42.4 6974.6 1645.0
========
System Benchmarks Index Score (Partial Only)                         1645.0


But with all cores, there is a slight performance degradation of about -5%.

./Run -c 96 -i 1 shell1
w/o
Shell Scripts (1 concurrent)                  80765.5 lpm   (60.0 s, 1 samples)
System Benchmarks Partial Index              BASELINE RESULT INDEX
Shell Scripts (1 concurrent)                     42.4 80765.5 19048.5
========
System Benchmarks Index Score (Partial Only)                        19048.5

w
Shell Scripts (1 concurrent)                  76333.6 lpm   (60.0 s, 1 samples)
System Benchmarks Partial Index              BASELINE RESULT INDEX
Shell Scripts (1 concurrent)                     42.4 76333.6 18003.2
========
System Benchmarks Index Score (Partial Only)                        18003.2

----------------------------------------------------------------------------------------------


After discussing with you, I made some changes to the patch.

index a52381a680db..1ecba81f1277 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -727,7 +727,11 @@ void flush_tlb_batched_pending(struct mm_struct *mm)
int flushed = batch >> TLB_FLUSH_BATCH_FLUSHED_SHIFT;

if (pending != flushed) {
+#ifdef CONFIG_ARCH_HAS_MM_CPUMASK
flush_tlb_mm(mm);
+#else
+ dsb(ish);
+#endif
/*
* If the new TLB flushing is pending during flushing, leave
* mm->tlb_flush_batched as is, to avoid losing flushing.

With all cores, there is a performance improvement of above +30%.

./Run -c 96 -i 1 shell1
96 CPUs in system; running 96 parallel copies of tests

Shell Scripts (1 concurrent)                 109229.0 lpm   (60.0 s, 1 samples)
System Benchmarks Partial Index              BASELINE       RESULT    INDEX
Shell Scripts (1 concurrent)                     42.4     109229.0  25761.6
                                                                   ========
System Benchmarks Index Score (Partial Only)                        25761.6


Tested-by: Xin Hao <[email protected]>

Looking forward to your next version of the patchset.

On 7/11/22 11:46 AM, Barry Song wrote:
> [...]
--
Best Regards!
Xin Hao

2022-07-14 05:03:24

by Barry Song

Subject: Re: [PATCH v2 0/4] mm: arm64: bring up BATCHED_UNMAP_TLB_FLUSH

On Thu, Jul 14, 2022 at 3:29 PM Xin Hao <[email protected]> wrote:
>
> Hi barry.
>
> I do some test on Kunpeng arm64 machine use Unixbench.
>
> The test result as below.
>
> One core, we can see the performance improvement above +30%.

I am really pleased to see the 30%+ improvement on unixbench on a single core.

> ./Run -c 1 -i 1 shell1
> w/o
> System Benchmarks Partial Index BASELINE RESULT INDEX
> Shell Scripts (1 concurrent) 42.4 5481.0 1292.7
> ========
> System Benchmarks Index Score (Partial Only) 1292.7
>
> w/
> System Benchmarks Partial Index BASELINE RESULT INDEX
> Shell Scripts (1 concurrent) 42.4 6974.6 1645.0
> ========
> System Benchmarks Index Score (Partial Only) 1645.0
>
>
> But with whole cores, there have little performance degradation above -5%

That is sad as we might get more concurrency between mprotect(), madvise(),
mremap(), zap_pte_range() and the deferred tlbi.

>
> ./Run -c 96 -i 1 shell1
> w/o
> Shell Scripts (1 concurrent) 80765.5 lpm (60.0 s, 1
> samples)
> System Benchmarks Partial Index BASELINE RESULT INDEX
> Shell Scripts (1 concurrent) 42.4 80765.5 19048.5
> ========
> System Benchmarks Index Score (Partial Only) 19048.5
>
> w
> Shell Scripts (1 concurrent) 76333.6 lpm (60.0 s, 1
> samples)
> System Benchmarks Partial Index BASELINE RESULT INDEX
> Shell Scripts (1 concurrent) 42.4 76333.6 18003.2
> ========
> System Benchmarks Index Score (Partial Only) 18003.2
>
> ----------------------------------------------------------------------------------------------
>
>
> After discuss with you, and do some changes in the patch.
>
> ndex a52381a680db..1ecba81f1277 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -727,7 +727,11 @@ void flush_tlb_batched_pending(struct mm_struct *mm)
> int flushed = batch >> TLB_FLUSH_BATCH_FLUSHED_SHIFT;
>
> if (pending != flushed) {
> +#ifdef CONFIG_ARCH_HAS_MM_CPUMASK
> flush_tlb_mm(mm);
> +#else
> + dsb(ish);
> +#endif
>

I was guessing the problem might be flush_tlb_batched_pending(),
so I asked you to change this to verify my guess.

> /*
> * If the new TLB flushing is pending during flushing, leave
> * mm->tlb_flush_batched as is, to avoid losing flushing.
>
> there have a performance improvement with whole cores, above +30%

But I don't think it is a proper patch. There is no guarantee that the cpu
calling flush_tlb_batched_pending() is exactly the cpu sending the deferred
tlbi, so the solution is unsafe. But since this temporary code can bring the
30%+ performance improvement back for high concurrency, we have huge
potential to finally make it.

Unfortunately I don't have an arm64 server to debug this on. I only have
8 cores, which are unlikely to reproduce the regression that happens under
high concurrency with 96 parallel tasks.

So I'd ask if @yicong or someone else working on kunpeng or other
arm64 servers is able to actually debug this and figure out a proper
patch, then add that patch as 5/5 into this series?

>
> ./Run -c 96 -i 1 shell1
> 96 CPUs in system; running 96 parallel copies of tests
>
> Shell Scripts (1 concurrent) 109229.0 lpm (60.0 s, 1 samples)
> System Benchmarks Partial Index BASELINE RESULT INDEX
> Shell Scripts (1 concurrent) 42.4 109229.0 25761.6
> ========
> System Benchmarks Index Score (Partial Only) 25761.6
>
>
> Tested-by: Xin Hao<[email protected]>

Thanks for your testing!

>
> Looking forward to your next version patch.
>
> On 7/11/22 11:46 AM, Barry Song wrote:
> > [...]
> --
> Best Regards!
> Xin Hao
>

Thanks
Barry

2022-07-20 11:30:40

by Barry Song

Subject: Re: [PATCH v2 0/4] mm: arm64: bring up BATCHED_UNMAP_TLB_FLUSH

On Tue, Jul 19, 2022 at 1:28 AM Yicong Yang <[email protected]> wrote:
>
> On 2022/7/14 12:51, Barry Song wrote:
> > On Thu, Jul 14, 2022 at 3:29 PM Xin Hao <[email protected]> wrote:
> >> [...]
> >> After discuss with you, and do some changes in the patch.
> >>
> >> ndex a52381a680db..1ecba81f1277 100644
> >> --- a/mm/rmap.c
> >> +++ b/mm/rmap.c
> >> @@ -727,7 +727,11 @@ void flush_tlb_batched_pending(struct mm_struct *mm)
> >> int flushed = batch >> TLB_FLUSH_BATCH_FLUSHED_SHIFT;
> >>
> >> if (pending != flushed) {
> >> +#ifdef CONFIG_ARCH_HAS_MM_CPUMASK
> >> flush_tlb_mm(mm);
> >> +#else
> >> + dsb(ish);
> >> +#endif
> >>
> >
> > i was guessing the problem might be flush_tlb_batched_pending()
> > so i asked you to change this to verify my guess.
> >
>
> flush_tlb_batched_pending() looks like the critical path for this issue then the code
> above can mitigate this.
>
> I cannot reproduce this on a 2P 128C Kunpeng920 server. The kernel is based on the
> v5.19-rc6 and unixbench of version 5.1.3. The result of `./Run -c 128 -i 1 shell1` is:
> iter-1 iter-2 iter-3
> w/o 17708.1 17637.1 17630.1
> w 17766.0 17752.3 17861.7
>
> And flush_tlb_batched_pending()isn't the hot spot with the patch:
> 7.00% sh [kernel.kallsyms] [k] ptep_clear_flush
> 4.17% sh [kernel.kallsyms] [k] ptep_set_access_flags
> 2.43% multi.sh [kernel.kallsyms] [k] ptep_clear_flush
> 1.98% sh [kernel.kallsyms] [k] _raw_spin_unlock_irqrestore
> 1.69% sh [kernel.kallsyms] [k] next_uptodate_page
> 1.66% sort [kernel.kallsyms] [k] ptep_clear_flush
> 1.56% multi.sh [kernel.kallsyms] [k] ptep_set_access_flags
> 1.27% sh [kernel.kallsyms] [k] page_counter_cancel
> 1.11% sh [kernel.kallsyms] [k] page_remove_rmap
> 1.06% sh [kernel.kallsyms] [k] perf_event_alloc
>
> Hi Xin Hao,
>
> I'm not sure the test setup as well as the config is same with yours. (96C vs 128C
> should not be the reason I think). Did you check that the 5% is a fluctuation or
> not? It'll be helpful if more information provided for reproducing this issue.
>
> Thanks.

I guess that is because "./Run -c 1 -i 1 shell1" isn't an application
that stresses memory. Hi Xin, in what kind of configuration can we
reproduce your test result?

I suppose tlbbatch will mainly affect the performance of user scenarios
which require memory page-out/page-in, like reclaiming file/anon pages.
"./Run -c 1 -i 1 shell1" on a system with sufficient free memory won't be
affected by tlbbatch at all, I believe.

Thanks
Barry

2022-07-23 09:18:31

by haoxin

Subject: Re: [PATCH v2 0/4] mm: arm64: bring up BATCHED_UNMAP_TLB_FLUSH


On 7/18/22 9:28 PM, Yicong Yang wrote:
> On 2022/7/14 12:51, Barry Song wrote:
>> On Thu, Jul 14, 2022 at 3:29 PM Xin Hao <[email protected]> wrote:
>>> [...]
>>> After discuss with you, and do some changes in the patch.
>>>
>>> ndex a52381a680db..1ecba81f1277 100644
>>> --- a/mm/rmap.c
>>> +++ b/mm/rmap.c
>>> @@ -727,7 +727,11 @@ void flush_tlb_batched_pending(struct mm_struct *mm)
>>> int flushed = batch >> TLB_FLUSH_BATCH_FLUSHED_SHIFT;
>>>
>>> if (pending != flushed) {
>>> +#ifdef CONFIG_ARCH_HAS_MM_CPUMASK
>>> flush_tlb_mm(mm);
>>> +#else
>>> + dsb(ish);
>>> +#endif
>>>
>> i was guessing the problem might be flush_tlb_batched_pending()
>> so i asked you to change this to verify my guess.
>>
> flush_tlb_batched_pending() looks like the critical path for this issue then the code
> above can mitigate this.
>
> I cannot reproduce this on a 2P 128C Kunpeng920 server. The kernel is based on the
> v5.19-rc6 and unixbench of version 5.1.3. The result of `./Run -c 128 -i 1 shell1` is:
> iter-1 iter-2 iter-3
> w/o 17708.1 17637.1 17630.1
> w 17766.0 17752.3 17861.7
>
> And flush_tlb_batched_pending()isn't the hot spot with the patch:
> 7.00% sh [kernel.kallsyms] [k] ptep_clear_flush
> 4.17% sh [kernel.kallsyms] [k] ptep_set_access_flags
> 2.43% multi.sh [kernel.kallsyms] [k] ptep_clear_flush
> 1.98% sh [kernel.kallsyms] [k] _raw_spin_unlock_irqrestore
> 1.69% sh [kernel.kallsyms] [k] next_uptodate_page
> 1.66% sort [kernel.kallsyms] [k] ptep_clear_flush
> 1.56% multi.sh [kernel.kallsyms] [k] ptep_set_access_flags
> 1.27% sh [kernel.kallsyms] [k] page_counter_cancel
> 1.11% sh [kernel.kallsyms] [k] page_remove_rmap
> 1.06% sh [kernel.kallsyms] [k] perf_event_alloc
>
> Hi Xin Hao,
>
> I'm not sure the test setup as well as the config is same with yours. (96C vs 128C
> should not be the reason I think). Did you check that the 5% is a fluctuation or
> not? It'll be helpful if more information provided for reproducing this issue.
Yes, it is not always a 5% reduction; there is some fluctuation.
> [...]
--
Best Regards!
Xin Hao

2022-07-23 09:43:37

by haoxin

Subject: Re: [PATCH v2 0/4] mm: arm64: bring up BATCHED_UNMAP_TLB_FLUSH


On 7/20/22 7:18 PM, Barry Song wrote:
> On Tue, Jul 19, 2022 at 1:28 AM Yicong Yang <[email protected]> wrote:
>> On 2022/7/14 12:51, Barry Song wrote:
>>> On Thu, Jul 14, 2022 at 3:29 PM Xin Hao <[email protected]> wrote:
>>>> [...]
>>>> After discuss with you, and do some changes in the patch.
>>>>
>>>> ndex a52381a680db..1ecba81f1277 100644
>>>> --- a/mm/rmap.c
>>>> +++ b/mm/rmap.c
>>>> @@ -727,7 +727,11 @@ void flush_tlb_batched_pending(struct mm_struct *mm)
>>>> int flushed = batch >> TLB_FLUSH_BATCH_FLUSHED_SHIFT;
>>>>
>>>> if (pending != flushed) {
>>>> +#ifdef CONFIG_ARCH_HAS_MM_CPUMASK
>>>> flush_tlb_mm(mm);
>>>> +#else
>>>> + dsb(ish);
>>>> +#endif
>>>>
>>> i was guessing the problem might be flush_tlb_batched_pending()
>>> so i asked you to change this to verify my guess.
>>>
>> flush_tlb_batched_pending() looks like the critical path for this issue then the code
>> above can mitigate this.
>>
>> I cannot reproduce this on a 2P 128C Kunpeng920 server. The kernel is based on the
>> v5.19-rc6 and unixbench of version 5.1.3. The result of `./Run -c 128 -i 1 shell1` is:
>> iter-1 iter-2 iter-3
>> w/o 17708.1 17637.1 17630.1
>> w 17766.0 17752.3 17861.7
>>
>> And flush_tlb_batched_pending()isn't the hot spot with the patch:
>> 7.00% sh [kernel.kallsyms] [k] ptep_clear_flush
>> 4.17% sh [kernel.kallsyms] [k] ptep_set_access_flags
>> 2.43% multi.sh [kernel.kallsyms] [k] ptep_clear_flush
>> 1.98% sh [kernel.kallsyms] [k] _raw_spin_unlock_irqrestore
>> 1.69% sh [kernel.kallsyms] [k] next_uptodate_page
>> 1.66% sort [kernel.kallsyms] [k] ptep_clear_flush
>> 1.56% multi.sh [kernel.kallsyms] [k] ptep_set_access_flags
>> 1.27% sh [kernel.kallsyms] [k] page_counter_cancel
>> 1.11% sh [kernel.kallsyms] [k] page_remove_rmap
>> 1.06% sh [kernel.kallsyms] [k] perf_event_alloc
>>
>> Hi Xin Hao,
>>
>> I'm not sure the test setup as well as the config is same with yours. (96C vs 128C
>> should not be the reason I think). Did you check that the 5% is a fluctuation or
>> not? It'll be helpful if more information provided for reproducing this issue.
>>
>> Thanks.
> I guess that is because "./Run -c 1 -i 1 shell1" isn't an application
> stressed on
> memory. Hi Xin, in what kinds of configurations can we reproduce your test
> result?

Oh, my fault, the test was not based on the latest upstream kernel, so
there may be some impact here. I will do a new test on the latest kernel.

> As I suppose tlbbatch will mainly affect the performance of user scenarios
> which require memory page-out/page-in like reclaiming file/anon pages.
> "./Run -c 1 -i 1 shell1" on a system with sufficient free memory won't be
> affected by tlbbatch at all, I believe.
>
> Thanks
> Barry

--
Best Regards!
Xin Hao