2022-06-07 08:31:08

by Nadav Amit

Subject: [PATCH v2] x86/mm/tlb: avoid reading mm_tlb_gen when possible

From: Nadav Amit <[email protected]>

On extreme TLB shootdown storms, the mm's tlb_gen cacheline is highly
contended and reading it should (arguably) be avoided as much as
possible.

Currently, flush_tlb_func() reads the mm's tlb_gen unconditionally,
even when it is not necessary (e.g., the mm was already switched).
This is wasteful.

Moreover, one of the existing optimizations is to read mm's tlb_gen to
see if there are additional in-flight TLB invalidations and flush the
entire TLB in such a case. However, if the request's tlb_gen was already
flushed, the benefit of checking the mm's tlb_gen is likely to be offset
by the overhead of the check itself.
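
For reference, a condensed view of the logic in question in
flush_tlb_func() (elided; the full-flush arm is where reading
mm_tlb_gen can pay off):

	if (f->end != TLB_FLUSH_ALL &&
	    f->new_tlb_gen == local_tlb_gen + 1 &&
	    f->new_tlb_gen == mm_tlb_gen) {
		/* Partial flush: invalidate only the requested range. */
		...
	} else {
		/* Full flush: catch up to mm_tlb_gen in one go. */
		flush_tlb_local();
		local_tlb_gen = mm_tlb_gen;
	}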

Running will-it-scale with tlb_flush1_threads shows a considerable
benefit on 56-core Skylake (up to +24%):

threads  Baseline (v5.17+)  +Patch
      1             159960  160202
      5             310808  308378  (-0.7%)
     10             479110  490728
     15             526771  562528
     20             534495  587316
     25             547462  628296
     30             579616  666313
     35             594134  701814
     40             612288  732967
     45             617517  749727
     50             637476  735497
     55             614363  778913  (+24%)

Acked-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Signed-off-by: Nadav Amit <[email protected]>

--

Note: The benchmarked kernels include Dave's revert of commit
6035152d8eeb ("x86/mm/tlb: Open-code on_each_cpu_cond_mask() for
tlb_is_not_lazy()").
---
arch/x86/mm/tlb.c | 18 +++++++++++++++++-
1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index d400b6d9d246..d9314cc8b81f 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -734,10 +734,10 @@ static void flush_tlb_func(void *info)
 	const struct flush_tlb_info *f = info;
 	struct mm_struct *loaded_mm = this_cpu_read(cpu_tlbstate.loaded_mm);
 	u32 loaded_mm_asid = this_cpu_read(cpu_tlbstate.loaded_mm_asid);
-	u64 mm_tlb_gen = atomic64_read(&loaded_mm->context.tlb_gen);
 	u64 local_tlb_gen = this_cpu_read(cpu_tlbstate.ctxs[loaded_mm_asid].tlb_gen);
 	bool local = smp_processor_id() == f->initiating_cpu;
 	unsigned long nr_invalidate = 0;
+	u64 mm_tlb_gen;
 
 	/* This code cannot presently handle being reentered. */
 	VM_WARN_ON(!irqs_disabled());
@@ -771,6 +771,22 @@ static void flush_tlb_func(void *info)
 		return;
 	}
 
+	if (f->new_tlb_gen <= local_tlb_gen) {
+		/*
+		 * The TLB is already up to date in respect to f->new_tlb_gen.
+		 * While the core might be still behind mm_tlb_gen, checking
+		 * mm_tlb_gen unnecessarily would have negative caching effects
+		 * so avoid it.
+		 */
+		return;
+	}
+
+	/*
+	 * Defer mm_tlb_gen reading as long as possible to avoid cache
+	 * contention.
+	 */
+	mm_tlb_gen = atomic64_read(&loaded_mm->context.tlb_gen);
+
 	if (unlikely(local_tlb_gen == mm_tlb_gen)) {
 		/*
 		 * There's nothing to do: we're already up to date. This can
--
2.25.1


2022-06-07 09:06:14

by Andy Lutomirski

Subject: Re: [PATCH v2] x86/mm/tlb: avoid reading mm_tlb_gen when possible

On Mon, Jun 6, 2022, at 11:01 AM, Nadav Amit wrote:
> From: Nadav Amit <[email protected]>
>
> On extreme TLB shootdown storms, the mm's tlb_gen cacheline is highly
> contended and reading it should (arguably) be avoided as much as
> possible.
>
> Currently, flush_tlb_func() reads the mm's tlb_gen unconditionally,
> even when it is not necessary (e.g., the mm was already switched).
> This is wasteful.
>
> Moreover, one of the existing optimizations is to read mm's tlb_gen to
> see if there are additional in-flight TLB invalidations and flush the
> entire TLB in such a case. However, if the request's tlb_gen was already
> flushed, the benefit of checking the mm's tlb_gen is likely to be offset
> by the overhead of the check itself.
>
> Running will-it-scale with tlb_flush1_threads shows a considerable
> benefit on 56-core Skylake (up to +24%):

Acked-by: Andy Lutomirski <[email protected]>

But...

I'm suspicious that the analysis is missing something. Under this kind of workload, there are a whole bunch of flushes being initiated, presumably in parallel. Each flush does an RMW on mm_tlb_gen (which will make the cacheline exclusive on the initiating CPU). And each flush sends out an IPI, and the IPI handler reads mm_tlb_gen (which makes the cacheline shared) when it updates the local tlb_gen. So you're doing (at least!) an E->S and S->E transition per flush. Your patch doesn't change this.
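
Roughly, per flush (inc_mm_tlb_gen() is the real helper; the E/S
annotations are mine):

	/* Initiator, e.g. flush_tlb_mm_range(): */
	new_tlb_gen = inc_mm_tlb_gen(mm);	/* RMW: cacheline -> E here */

	/* Each IPI handler, in flush_tlb_func() today: */
	mm_tlb_gen = atomic64_read(&loaded_mm->context.tlb_gen);
						/* read: cacheline -> S */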

But your patch does add a whole new case in which the IPI handler simply doesn't flush! I think it takes either quite a bit of racing or a well-timed context switch to hit that case, but, if you hit it, then you skip a flush and you skip the read of mm_tlb_gen.
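
For example (generation numbers made up for illustration):

	CPU A: inc_mm_tlb_gen() -> 2, sends IPIs with f->new_tlb_gen = 2
	CPU B: inc_mm_tlb_gen() -> 3, sends IPIs with f->new_tlb_gen = 3
	CPU C: the gen-3 IPI lands first, flushes up to mm_tlb_gen,
	       sets local_tlb_gen = 3
	CPU C: the gen-2 IPI lands: f->new_tlb_gen (2) <= local_tlb_gen (3),
	       so with your patch it returns without flushing or touching
	       mm_tlb_gen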

Have you tested what happens if you do something like your patch but you also make the mm_tlb_gen read unconditional? I'm curious if there's more to the story than you're seeing.

You could also contemplate a somewhat evil hack in which you don't read mm_tlb_gen even if you *do* flush and instead use f->new_tlb_gen. That would potentially do a bit of extra flushing but would avoid the flush path causing the E->S transition. (Which may be of dubious value for real workloads, since I don't think there's a credible way to avoid having context switches read mm_tlb_gen.)
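
Concretely, something like this (untested sketch on top of your patch;
the only change is trusting the request's generation instead of
re-reading the shared counter):

-	mm_tlb_gen = atomic64_read(&loaded_mm->context.tlb_gen);
+	/*
+	 * Hypothetical: use the request's generation as the flush target.
+	 * Keeps the flush path off the mm_tlb_gen cacheline, at the cost
+	 * of flushing for requests that a fresh read would have turned
+	 * into no-ops.
+	 */
+	mm_tlb_gen = f->new_tlb_gen;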

--Andy

Subject: [tip: x86/mm] x86/mm/tlb: Avoid reading mm_tlb_gen when possible

The following commit has been merged into the x86/mm branch of tip:

Commit-ID: aa44284960d550eb4d8614afdffebc68a432a9b4
Gitweb: https://git.kernel.org/tip/aa44284960d550eb4d8614afdffebc68a432a9b4
Author: Nadav Amit <[email protected]>
AuthorDate: Mon, 06 Jun 2022 11:01:23 -07:00
Committer: Dave Hansen <[email protected]>
CommitterDate: Tue, 07 Jun 2022 08:48:03 -07:00

x86/mm/tlb: Avoid reading mm_tlb_gen when possible

On extreme TLB shootdown storms, the mm's tlb_gen cacheline is highly
contended and reading it should (arguably) be avoided as much as
possible.

Currently, flush_tlb_func() reads the mm's tlb_gen unconditionally,
even when it is not necessary (e.g., the mm was already switched).
This is wasteful.

Moreover, one of the existing optimizations is to read mm's tlb_gen to
see if there are additional in-flight TLB invalidations and flush the
entire TLB in such a case. However, if the request's tlb_gen was already
flushed, the benefit of checking the mm's tlb_gen is likely to be offset
by the overhead of the check itself.

Running will-it-scale with tlb_flush1_threads shows a considerable
benefit on 56-core Skylake (up to +24%):

threads  Baseline (v5.17+)  +Patch
      1             159960  160202
      5             310808  308378  (-0.7%)
     10             479110  490728
     15             526771  562528
     20             534495  587316
     25             547462  628296
     30             579616  666313
     35             594134  701814
     40             612288  732967
     45             617517  749727
     50             637476  735497
     55             614363  778913  (+24%)

Signed-off-by: Nadav Amit <[email protected]>
Signed-off-by: Dave Hansen <[email protected]>
Acked-by: Peter Zijlstra (Intel) <[email protected]>
Acked-by: Andy Lutomirski <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
---
arch/x86/mm/tlb.c | 18 +++++++++++++++++-
1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index d400b6d..d9314cc 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -734,10 +734,10 @@ static void flush_tlb_func(void *info)
 	const struct flush_tlb_info *f = info;
 	struct mm_struct *loaded_mm = this_cpu_read(cpu_tlbstate.loaded_mm);
 	u32 loaded_mm_asid = this_cpu_read(cpu_tlbstate.loaded_mm_asid);
-	u64 mm_tlb_gen = atomic64_read(&loaded_mm->context.tlb_gen);
 	u64 local_tlb_gen = this_cpu_read(cpu_tlbstate.ctxs[loaded_mm_asid].tlb_gen);
 	bool local = smp_processor_id() == f->initiating_cpu;
 	unsigned long nr_invalidate = 0;
+	u64 mm_tlb_gen;
 
 	/* This code cannot presently handle being reentered. */
 	VM_WARN_ON(!irqs_disabled());
@@ -771,6 +771,22 @@ static void flush_tlb_func(void *info)
 		return;
 	}
 
+	if (f->new_tlb_gen <= local_tlb_gen) {
+		/*
+		 * The TLB is already up to date in respect to f->new_tlb_gen.
+		 * While the core might be still behind mm_tlb_gen, checking
+		 * mm_tlb_gen unnecessarily would have negative caching effects
+		 * so avoid it.
+		 */
+		return;
+	}
+
+	/*
+	 * Defer mm_tlb_gen reading as long as possible to avoid cache
+	 * contention.
+	 */
+	mm_tlb_gen = atomic64_read(&loaded_mm->context.tlb_gen);
+
 	if (unlikely(local_tlb_gen == mm_tlb_gen)) {
 		/*
 		 * There's nothing to do: we're already up to date. This can

2022-07-08 04:22:32

by Hugh Dickins

Subject: Re: [PATCH v2] x86/mm/tlb: avoid reading mm_tlb_gen when possible

On Mon, 6 Jun 2022, Nadav Amit wrote:

> From: Nadav Amit <[email protected]>
>
> On extreme TLB shootdown storms, the mm's tlb_gen cacheline is highly
> contended and reading it should (arguably) be avoided as much as
> possible.
>
> [...]

I'm sorry, but bisection and reversion show that this commit,
aa44284960d550eb4d8614afdffebc68a432a9b4 in current linux-next,
is responsible for the "internal compiler error: Segmentation fault"s
I get when running kernel builds on tmpfs in 1G memory, lots of swapping.

That tmpfs is using huge pages as much as it can, so splitting and
collapsing, compaction and page migration entailed, in case that's
relevant (maybe this commit is perfect, but there's a TLB flushing
bug over there in mm which this commit just exposes).

Whether those segfaults happen without the huge page element,
I have not done enough testing to tell - there are other bugs with
swapping in current linux-next, indeed, I wouldn't even have found
this one, if I hadn't already been on a bisection for another bug,
and got thrown off course by these segfaults.

I hope that you can work out what might be wrong with this,
but meantime I think it needs to be reverted.

Thanks,
Hugh

2022-07-08 04:33:14

by Nadav Amit

Subject: Re: [PATCH v2] x86/mm/tlb: avoid reading mm_tlb_gen when possible

On Jul 7, 2022, at 8:27 PM, Hugh Dickins <[email protected]> wrote:

> On Mon, 6 Jun 2022, Nadav Amit wrote:
>
>> [...]
>
> I'm sorry, but bisection and reversion show that this commit,
> aa44284960d550eb4d8614afdffebc68a432a9b4 in current linux-next,
> is responsible for the "internal compiler error: Segmentation fault"s
> I get when running kernel builds on tmpfs in 1G memory, lots of swapping.
>
> That tmpfs is using huge pages as much as it can, so splitting and
> collapsing, compaction and page migration entailed, in case that's
> relevant (maybe this commit is perfect, but there's a TLB flushing
> bug over there in mm which this commit just exposes).
>
> Whether those segfaults happen without the huge page element,
> I have not done enough testing to tell - there are other bugs with
> swapping in current linux-next, indeed, I wouldn't even have found
> this one, if I hadn't already been on a bisection for another bug,
> and got thrown off course by these segfaults.
>
> I hope that you can work out what might be wrong with this,
> but meantime I think it needs to be reverted.

I find it always surprising how trivial one-liners fail.

As you probably know, debugging these kinds of things is hard. I see two
possible cases:

1. The failure is directly related to this optimization. The immediate
suspect in my mind is something to do with PCID/ASID.

2. The failure is due to another bug that was papered over by “enough” TLB
flushes.

I will look into the code. But if it is possible, it would be helpful to
know whether you get the failure with the “nopcid” kernel parameter. If it
passes, it wouldn’t say much, but if it fails, I think (2) is more likely.

Not arguing about a revert, but, in some way, if the test fails, it can
indicate that the optimization “works”…

I’ll put some time to look deeper into the code, but it would be very
helpful if you can let me know what happens with nopcid.

2022-07-08 06:34:48

by Nadav Amit

Subject: Re: [PATCH v2] x86/mm/tlb: avoid reading mm_tlb_gen when possible

On Jul 7, 2022, at 9:23 PM, Nadav Amit <[email protected]> wrote:

> On Jul 7, 2022, at 8:27 PM, Hugh Dickins <[email protected]> wrote:
>
>> [...]
>>
>> I hope that you can work out what might be wrong with this,
>> but meantime I think it needs to be reverted.
>
> [...]
>
> I will look into the code. But if it is possible, it would be helpful to
> know whether you get the failure with the “nopcid” kernel parameter. If it
> passes, it wouldn’t say much, but if it fails, I think (2) is more likely.
>
> Not arguing about a revert, but, in some way, if the test fails, it can
> indicate that the optimization “works”…
>
> I’ll put some time to look deeper into the code, but it would be very
> helpful if you can let me know what happens with nopcid.

Actually, only using “nopcid” would most likely make it go away if we have
PTI enabled. So to get a good indication, we need to check whether it
reproduces with both “nopti” and “nopcid”.

I don’t have a better answer yet. Still trying to see what might have gone
wrong.
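
If it helps, this is what I have in mind (assuming the usual knobs; worth
double-checking against your config):

	# append to the kernel command line:
	nopti nopcid

	# after boot, verify that PCID is really off and check PTI state:
	grep -wc pcid /proc/cpuinfo		# expect 0
	cat /sys/devices/system/cpu/vulnerabilities/meltdown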

2022-07-08 07:21:01

by Nadav Amit

Subject: Re: [PATCH v2] x86/mm/tlb: avoid reading mm_tlb_gen when possible

On Jul 7, 2022, at 10:56 PM, Nadav Amit <[email protected]> wrote:

> On Jul 7, 2022, at 9:23 PM, Nadav Amit <[email protected]> wrote:
>
>> [...]
>>
>> I’ll put some time to look deeper into the code, but it would be very
>> helpful if you can let me know what happens with nopcid.
>
> Actually, only using “nopcid” would most likely make it go away if we have
> PTI enabled. So to get a good indication, we need to check whether it
> reproduces with both “nopti” and “nopcid”.
>
> I don’t have a better answer yet. Still trying to see what might have gone
> wrong.

Ok. My bad. Sorry. arch_tlbbatch_flush() does not set any generation in
flush_tlb_info. Bad.

Should be fixed by something like the following - I’ll send a patch tomorrow:

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index d9314cc8b81f..9f19894c322f 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -771,7 +771,7 @@ static void flush_tlb_func(void *info)
 		return;
 	}
 
-	if (f->new_tlb_gen <= local_tlb_gen) {
+	if (unlikely(f->mm && f->new_tlb_gen <= local_tlb_gen)) {
 		/*
 		 * The TLB is already up to date in respect to f->new_tlb_gen.
 		 * While the core might be still behind mm_tlb_gen, checking
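
For the record, my reading of how this bites (simplified; this is the
batched-reclaim path via try_to_unmap_flush()):

	/* arch/x86/mm/tlb.c */
	static const struct flush_tlb_info full_flush_tlb_info = {
		.mm	= NULL,
		.start	= 0,
		.end	= TLB_FLUSH_ALL,
		/* .new_tlb_gen is implicitly 0 */
	};

	/*
	 * arch_tlbbatch_flush() hands this to flush_tlb_func(), where
	 * "0 <= local_tlb_gen" always holds, so the new early return made
	 * remote CPUs skip the flush entirely. Gating on f->mm restores
	 * the old always-flush behavior for these generation-less flushes.
	 */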