2020-11-28 21:54:42

by Nicholas Piggin

Subject: [PATCH 6/8] lazy tlb: shoot lazies, a non-refcounting lazy tlb option

On big systems, the mm refcount can become highly contended when doing
a lot of context switching with threaded applications (particularly
switching between the idle thread and an application thread).

Abandoning lazy tlb slows switching down quite a bit in the important
user->idle->user cases, so instead implement a non-refcounted scheme
that causes __mmdrop() to IPI all CPUs in the mm_cpumask and shoot down
any remaining lazy ones.

Shootdown IPIs are of some concern, but they have not been observed to be
a big problem with this scheme (the powerpc implementation generated
314 additional interrupts on a 144 CPU system during a kernel compile).
There are a number of strategies that could be employed to reduce IPIs
if they turn out to be a problem for some workload.

Signed-off-by: Nicholas Piggin <[email protected]>
---
arch/Kconfig | 13 +++++++++++++
kernel/fork.c | 53 +++++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 66 insertions(+)

diff --git a/arch/Kconfig b/arch/Kconfig
index 596bf589d74b..540e43aeefa4 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -440,6 +440,19 @@ config MMU_LAZY_TLB
config MMU_LAZY_TLB_REFCOUNT
def_bool y
depends on MMU_LAZY_TLB
+ depends on !MMU_LAZY_TLB_SHOOTDOWN
+
+config MMU_LAZY_TLB_SHOOTDOWN
+ bool
+ depends on MMU_LAZY_TLB
+ help
+ Instead of refcounting the "lazy tlb" mm struct, which can cause
+ contention with multi-threaded apps on large multiprocessor systems,
+ this option causes __mmdrop to IPI all CPUs in the mm_cpumask and
+ switch to init_mm if they were using the to-be-freed mm as the lazy
+ tlb. To implement this, architectures must use _lazy_tlb variants of
+ mm refcounting, and mm_cpumask must include at least all possible
+ CPUs in which mm might be lazy.

config ARCH_HAVE_NMI_SAFE_CMPXCHG
bool
diff --git a/kernel/fork.c b/kernel/fork.c
index 6d266388d380..e47312c2b48b 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -669,6 +669,54 @@ static void check_mm(struct mm_struct *mm)
#define allocate_mm() (kmem_cache_alloc(mm_cachep, GFP_KERNEL))
#define free_mm(mm) (kmem_cache_free(mm_cachep, (mm)))

+static void do_shoot_lazy_tlb(void *arg)
+{
+ struct mm_struct *mm = arg;
+
+ if (current->active_mm == mm) {
+ WARN_ON_ONCE(current->mm);
+ current->active_mm = &init_mm;
+ switch_mm(mm, &init_mm, current);
+ exit_lazy_tlb(mm, current);
+ }
+}
+
+static void do_check_lazy_tlb(void *arg)
+{
+ struct mm_struct *mm = arg;
+
+ WARN_ON_ONCE(current->active_mm == mm);
+}
+
+static void shoot_lazy_tlbs(struct mm_struct *mm)
+{
+ if (IS_ENABLED(CONFIG_MMU_LAZY_TLB_SHOOTDOWN)) {
+ /*
+ * IPI overheads have not been found to be expensive, but they could
+ * be reduced in a number of possible ways, for example (in
+ * roughly increasing order of complexity):
+ * - A batch of mms requiring IPIs could be gathered and freed
+ * at once.
+ * - CPUs could store their active mm somewhere that can be
+ * remotely checked without a lock, to filter out
+ * false-positives in the cpumask.
+ * - After mm_users or mm_count reaches zero, switching away
+ * from the mm could clear mm_cpumask to reduce some IPIs
+ * (some batching or delaying would help).
+ * - A delayed freeing and RCU-like quiescing sequence based on
+ * mm switching to avoid IPIs completely.
+ */
+ on_each_cpu_mask(mm_cpumask(mm), do_shoot_lazy_tlb, (void *)mm, 1);
+ if (IS_ENABLED(CONFIG_DEBUG_VM))
+ on_each_cpu(do_check_lazy_tlb, (void *)mm, 1);
+ } else {
+ /*
+ * In this case, lazy tlb mms are refcounted and would not reach
+ * __mmdrop until all CPUs have switched away and mmdrop()ed.
+ */
+ }
+}
+
/*
* Called when the last reference to the mm
* is dropped: either by a lazy thread or by
@@ -678,7 +726,12 @@ void __mmdrop(struct mm_struct *mm)
{
BUG_ON(mm == &init_mm);
WARN_ON_ONCE(mm == current->mm);
+
+ /* Ensure no CPUs are using this as their lazy tlb mm */
+ shoot_lazy_tlbs(mm);
+
WARN_ON_ONCE(mm == current->active_mm);
+
mm_free_pgd(mm);
destroy_context(mm);
mmu_notifier_subscriptions_destroy(mm);
--
2.23.0


2020-11-29 03:59:52

by Andy Lutomirski

Subject: Re: [PATCH 6/8] lazy tlb: shoot lazies, a non-refcounting lazy tlb option

On Sat, Nov 28, 2020 at 8:02 AM Nicholas Piggin <[email protected]> wrote:
>
> On big systems, the mm refcount can become highly contented when doing
> a lot of context switching with threaded applications (particularly
> switching between the idle thread and an application thread).
>
> Abandoning lazy tlb slows switching down quite a bit in the important
> user->idle->user cases, so so instead implement a non-refcounted scheme
> that causes __mmdrop() to IPI all CPUs in the mm_cpumask and shoot down
> any remaining lazy ones.
>
> Shootdown IPIs are some concern, but they have not been observed to be
> a big problem with this scheme (the powerpc implementation generated
> 314 additional interrupts on a 144 CPU system during a kernel compile).
> There are a number of strategies that could be employed to reduce IPIs
> if they turn out to be a problem for some workload.

I'm still wondering whether we can do even better.

The IPIs you're doing aren't really necessary -- we don't
fundamentally need to free the pagetables immediately when all
non-lazy users are done with them (and current kernels don't) -- what
we need to do is to synchronize all the bookkeeping. So, with
adequate locking (famous last words), a couple of alternative schemes
ought to be possible.

a) Instead of sending an IPI, increment mm_count on behalf of the
remote CPU and do something to make sure that the remote CPU knows we
did this on its behalf. Then free the mm when mm_count hits zero.

b) Treat mm_cpumask as part of the refcount. Add one to mm_count when
an mm is created. Once mm_users hits zero, whoever clears the last
bit in mm_cpumask is responsible for decrementing a single reference
from mm_count, and whoever sets it to zero frees the mm.

Version (b) seems fairly straightforward to implement -- add RCU
protection and an atomic_t special_ref_cleared (initially 0) to struct
mm_struct itself. After anyone clears a bit in mm_cpumask (which is
already a barrier), they read mm_users. If it's zero, then they scan
mm_cpumask and see if it's empty. If it is, they atomically swap
special_ref_cleared to 1. If it was zero before the swap, they do
mmdrop(). I can imagine some tweaks that could make this a bit
faster, at least in the limit of a huge number of CPUs.
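
Roughly, in code (a completely untested sketch; the helper name is made
up, and special_ref_cleared would be the new field described above):

/* Hypothetical hook, run after this CPU clears its bit in mm_cpumask(mm). */
static void mm_maybe_drop_special_ref(struct mm_struct *mm)
{
        /*
         * Order the cpumask clear against the reads below (a no-op on
         * x86, where the clear is already a full barrier).
         */
        smp_mb__after_atomic();

        if (atomic_read(&mm->mm_users))
                return;

        if (!cpumask_empty(mm_cpumask(mm)))
                return;

        /* Whoever flips special_ref_cleared 0 -> 1 drops the final ref. */
        if (atomic_xchg(&mm->special_ref_cleared, 1) == 0)
                mmdrop(mm);
}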

Version (a) seems a bit harder to reason about. Maybe it could be
done like this. Add a percpu variable mm_with_extra_count. This
variable can be NULL, but it can also be an mm that has an extra
reference on behalf of the cpu in question.

__mmput scans mm_cpumask and, for each cpu in the mask, mmgrabs the mm
and cmpxchgs that cpu's mm_with_extra_count from NULL to mm. If it
succeeds, then we win. If it fails, further thought is required, and
maybe we have to send an IPI, although maybe some other cleverness is
possible. Any time a CPU switches mms, it atomically swaps
mm_with_extra_count to NULL and mmdrops whatever the mm was. (Maybe
it needs to check the mm isn't equal to the new mm, although it would
be quite bizarre for this to happen.) Other than these mmgrab and
mmdrop calls, the mm switching code doesn't mmgrab or mmdrop at all.


Version (a) seems like it could have excellent performance.
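
To sketch what I mean (untested; mm_with_extra_count and the helper
names are invented here, the hook points are only illustrative):

static DEFINE_PER_CPU(struct mm_struct *, mm_with_extra_count);

/* Called from __mmput() for each cpu in mm_cpumask(mm). */
static bool mm_donate_ref_to_cpu(struct mm_struct *mm, int cpu)
{
        mmgrab(mm);     /* extra reference held on behalf of @cpu */
        if (cmpxchg(per_cpu_ptr(&mm_with_extra_count, cpu), NULL, mm) == NULL)
                return true;
        mmdrop(mm);     /* slot busy: send an IPI or find other fallback */
        return false;
}

/* Called from the mm switching path on the owning CPU. */
static void mm_drop_donated_ref(void)
{
        struct mm_struct *mm = this_cpu_xchg(mm_with_extra_count, NULL);

        if (mm)
                mmdrop(mm);
}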


*However*, I think we should consider whether we want to do something
even bigger first. Even with any of these changes, we still need to
maintain mm_cpumask(), and that itself can be a scalability problem.
I wonder if we can solve this problem too. Perhaps the switch_mm()
paths could only ever set mm_cpumask bits, and anyone who would send
an IPI because a bit is set in mm_cpumask would first check some
percpu variable (cpu_rq(cpu)->something? an entirely new variable) to
see if the bit in mm_cpumask is spurious. Or perhaps mm_cpumask could
be split up across multiple cachelines, one per node.
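
For the "check some percpu variable" part, the IPI side could look
something like this (untested sketch; cpu_lazy_mm is an invented percpu
variable that switch_mm() would have to keep up to date):

static DEFINE_PER_CPU(struct mm_struct *, cpu_lazy_mm);

static void shoot_real_lazies_only(struct mm_struct *mm, struct cpumask *tmp)
{
        int cpu;

        cpumask_clear(tmp);
        for_each_cpu(cpu, mm_cpumask(mm)) {
                /* Filter out CPUs whose mm_cpumask bit is stale. */
                if (READ_ONCE(*per_cpu_ptr(&cpu_lazy_mm, cpu)) == mm)
                        cpumask_set_cpu(cpu, tmp);
        }
        on_each_cpu_mask(tmp, do_shoot_lazy_tlb, (void *)mm, 1);
}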

We should keep the recent lessons from Apple in mind, though: x86 is a
dinosaur. The future of atomics is going to look a lot more like
ARM's LSE than x86's rather anemic set. This means that mm_cpumask
operations won't need to be full barriers forever, and we might not
want to take the implied full barriers in set_bit() and clear_bit()
for granted.

--Andy

2020-11-29 20:20:38

by Andy Lutomirski

Subject: Re: [PATCH 6/8] lazy tlb: shoot lazies, a non-refcounting lazy tlb option

On Sat, Nov 28, 2020 at 7:54 PM Andy Lutomirski <[email protected]> wrote:
>
> On Sat, Nov 28, 2020 at 8:02 AM Nicholas Piggin <[email protected]> wrote:
> >
> > On big systems, the mm refcount can become highly contented when doing
> > a lot of context switching with threaded applications (particularly
> > switching between the idle thread and an application thread).
> >
> > Abandoning lazy tlb slows switching down quite a bit in the important
> > user->idle->user cases, so so instead implement a non-refcounted scheme
> > that causes __mmdrop() to IPI all CPUs in the mm_cpumask and shoot down
> > any remaining lazy ones.
> >
> > Shootdown IPIs are some concern, but they have not been observed to be
> > a big problem with this scheme (the powerpc implementation generated
> > 314 additional interrupts on a 144 CPU system during a kernel compile).
> > There are a number of strategies that could be employed to reduce IPIs
> > if they turn out to be a problem for some workload.
>
> I'm still wondering whether we can do even better.
>

Hold on a sec.. __mmput() unmaps VMAs, frees pagetables, and flushes
the TLB. On x86, this will shoot down all lazies as long as even a
single pagetable was freed. (Or at least it will if we don't have a
serious bug, but the code seems okay. We'll hit pmd_free_tlb, which
sets tlb->freed_tables, which will trigger the IPI.) So, on
architectures like x86, the shootdown approach should be free. The
only way it ought to have any excess IPIs is if we have CPUs in
mm_cpumask() that don't need IPI to free pagetables, which could
happen on paravirt.

Can you try to figure out why you saw any increase in IPIs? It would
be nice if we can make the new code unconditional.

2020-11-30 09:28:21

by Peter Zijlstra

Subject: Re: [PATCH 6/8] lazy tlb: shoot lazies, a non-refcounting lazy tlb option

On Sun, Nov 29, 2020 at 12:16:26PM -0800, Andy Lutomirski wrote:
> On Sat, Nov 28, 2020 at 7:54 PM Andy Lutomirski <[email protected]> wrote:
> >
> > On Sat, Nov 28, 2020 at 8:02 AM Nicholas Piggin <[email protected]> wrote:
> > >
> > > On big systems, the mm refcount can become highly contented when doing
> > > a lot of context switching with threaded applications (particularly
> > > switching between the idle thread and an application thread).
> > >
> > > Abandoning lazy tlb slows switching down quite a bit in the important
> > > user->idle->user cases, so so instead implement a non-refcounted scheme
> > > that causes __mmdrop() to IPI all CPUs in the mm_cpumask and shoot down
> > > any remaining lazy ones.
> > >
> > > Shootdown IPIs are some concern, but they have not been observed to be
> > > a big problem with this scheme (the powerpc implementation generated
> > > 314 additional interrupts on a 144 CPU system during a kernel compile).
> > > There are a number of strategies that could be employed to reduce IPIs
> > > if they turn out to be a problem for some workload.
> >
> > I'm still wondering whether we can do even better.
> >
>
> Hold on a sec.. __mmput() unmaps VMAs, frees pagetables, and flushes
> the TLB. On x86, this will shoot down all lazies as long as even a
> single pagetable was freed. (Or at least it will if we don't have a
> serious bug, but the code seems okay. We'll hit pmd_free_tlb, which
> sets tlb->freed_tables, which will trigger the IPI.) So, on
> architectures like x86, the shootdown approach should be free. The
> only way it ought to have any excess IPIs is if we have CPUs in
> mm_cpumask() that don't need IPI to free pagetables, which could
> happen on paravirt.
>
> Can you try to figure out why you saw any increase in IPIs? It would
> be nice if we can make the new code unconditional.

Power doesn't do IPI based TLBI.

2020-11-30 09:29:48

by Peter Zijlstra

Subject: Re: [PATCH 6/8] lazy tlb: shoot lazies, a non-refcounting lazy tlb option

On Sat, Nov 28, 2020 at 07:54:57PM -0800, Andy Lutomirski wrote:
> Version (b) seems fairly straightforward to implement -- add RCU
> protection and a atomic_t special_ref_cleared (initially 0) to struct
> mm_struct itself. After anyone clears a bit to mm_cpumask (which is
> already a barrier),

No it isn't. clear_bit() implies no barrier whatsoever. That's x86
you're thinking about.

> they read mm_users. If it's zero, then they scan
> mm_cpumask and see if it's empty. If it is, they atomically swap
> special_ref_cleared to 1. If it was zero before the swap, they do
> mmdrop(). I can imagine some tweaks that could make this a big
> faster, at least in the limit of a huge number of CPUs.

2020-11-30 09:33:51

by Peter Zijlstra

Subject: Re: [PATCH 6/8] lazy tlb: shoot lazies, a non-refcounting lazy tlb option

On Sat, Nov 28, 2020 at 07:54:57PM -0800, Andy Lutomirski wrote:
> This means that mm_cpumask operations won't need to be full barriers
> forever, and we might not want to take the implied full barriers in
> set_bit() and clear_bit() for granted.

There is no implied full barrier for those ops.

2020-11-30 09:39:49

by Peter Zijlstra

Subject: Re: [PATCH 6/8] lazy tlb: shoot lazies, a non-refcounting lazy tlb option

On Mon, Nov 30, 2020 at 10:30:00AM +0100, Peter Zijlstra wrote:
> On Sat, Nov 28, 2020 at 07:54:57PM -0800, Andy Lutomirski wrote:
> > This means that mm_cpumask operations won't need to be full barriers
> > forever, and we might not want to take the implied full barriers in
> > set_bit() and clear_bit() for granted.
>
> There is no implied full barrier for those ops.

Neither ARM nor Power implies any ordering on those ops. But Power has
some of the worst atomic performance in the world despite that.

IIRC they don't do their LL/SC in L1.

2020-11-30 18:35:17

by Andy Lutomirski

Subject: Re: [PATCH 6/8] lazy tlb: shoot lazies, a non-refcounting lazy tlb option

other arch folk: there's some background here:

https://lkml.kernel.org/r/CALCETrVXUbe8LfNn-Qs+DzrOQaiw+sFUg1J047yByV31SaTOZw@mail.gmail.com

On Sun, Nov 29, 2020 at 12:16 PM Andy Lutomirski <[email protected]> wrote:
>
> On Sat, Nov 28, 2020 at 7:54 PM Andy Lutomirski <[email protected]> wrote:
> >
> > On Sat, Nov 28, 2020 at 8:02 AM Nicholas Piggin <[email protected]> wrote:
> > >
> > > On big systems, the mm refcount can become highly contented when doing
> > > a lot of context switching with threaded applications (particularly
> > > switching between the idle thread and an application thread).
> > >
> > > Abandoning lazy tlb slows switching down quite a bit in the important
> > > user->idle->user cases, so so instead implement a non-refcounted scheme
> > > that causes __mmdrop() to IPI all CPUs in the mm_cpumask and shoot down
> > > any remaining lazy ones.
> > >
> > > Shootdown IPIs are some concern, but they have not been observed to be
> > > a big problem with this scheme (the powerpc implementation generated
> > > 314 additional interrupts on a 144 CPU system during a kernel compile).
> > > There are a number of strategies that could be employed to reduce IPIs
> > > if they turn out to be a problem for some workload.
> >
> > I'm still wondering whether we can do even better.
> >
>
> Hold on a sec.. __mmput() unmaps VMAs, frees pagetables, and flushes
> the TLB. On x86, this will shoot down all lazies as long as even a
> single pagetable was freed. (Or at least it will if we don't have a
> serious bug, but the code seems okay. We'll hit pmd_free_tlb, which
> sets tlb->freed_tables, which will trigger the IPI.) So, on
> architectures like x86, the shootdown approach should be free. The
> only way it ought to have any excess IPIs is if we have CPUs in
> mm_cpumask() that don't need IPI to free pagetables, which could
> happen on paravirt.

Indeed, on x86, we do this:

[ 11.558844] flush_tlb_mm_range.cold+0x18/0x1d
[ 11.559905] tlb_finish_mmu+0x10e/0x1a0
[ 11.561068] exit_mmap+0xc8/0x1a0
[ 11.561932] mmput+0x29/0xd0
[ 11.562688] do_exit+0x316/0xa90
[ 11.563588] do_group_exit+0x34/0xb0
[ 11.564476] __x64_sys_exit_group+0xf/0x10
[ 11.565512] do_syscall_64+0x34/0x50

and we have info->freed_tables set.

What are the architectures that have large systems like?

x86: we already zap lazies, so it should cost basically nothing to do
a little loop at the end of __mmput() to make sure that no lazies are
left. If we care about paravirt performance, we could implement one
of the optimizations I mentioned above to fix up the refcounts instead
of sending an IPI to any remaining lazies.
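
In fact the "little loop" could more or less just be the IPI from this
patch; on bare metal it should almost always find nothing left to do
(untested sketch, the helper name is made up):

/*
 * Hypothetical helper, called at the end of __mmput() after exit_mmap()
 * has flushed everything. On bare-metal x86 the flush already kicked any
 * lazy users over to init_mm, so this should normally be a no-op; only
 * paravirt-style flushes would leave stragglers to shoot.
 */
static void mmput_shoot_remaining_lazies(struct mm_struct *mm)
{
        on_each_cpu_mask(mm_cpumask(mm), do_shoot_lazy_tlb, (void *)mm, 1);
}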

arm64: AFAICT arm64's flush uses magic arm64 hardware support for
remote flushes, so any lazy mm references will still exist after
exit_mmap(). (arm64 uses lazy TLB, right?) So this is kind of like
the x86 paravirt case. Are there large enough arm64 systems that any
of this matters?

s390x: The code has too many acronyms for me to understand it fully,
but I think it's more or less the same situation as arm64. How big do
s390x systems come?

power: Ridiculously complicated, seems to vary by system and kernel config.

So, Nick, your unconditional IPI scheme is apparently a big
improvement for power, and it should be an improvement and have low
cost for x86. On arm64 and s390x it will add more IPIs on process
exit but reduce contention on context switching depending on how lazy
TLB works. I suppose we could try it for all architectures without
any further optimizations. Or we could try one of the perhaps
excessively clever improvements I linked above. arm64, s390x people,
what do you think?

2020-12-01 21:30:53

by Will Deacon

Subject: Re: [PATCH 6/8] lazy tlb: shoot lazies, a non-refcounting lazy tlb option

On Mon, Nov 30, 2020 at 10:31:51AM -0800, Andy Lutomirski wrote:
> other arch folk: there's some background here:
>
> https://lkml.kernel.org/r/CALCETrVXUbe8LfNn-Qs+DzrOQaiw+sFUg1J047yByV31SaTOZw@mail.gmail.com
>
> On Sun, Nov 29, 2020 at 12:16 PM Andy Lutomirski <[email protected]> wrote:
> >
> > On Sat, Nov 28, 2020 at 7:54 PM Andy Lutomirski <[email protected]> wrote:
> > >
> > > On Sat, Nov 28, 2020 at 8:02 AM Nicholas Piggin <[email protected]> wrote:
> > > >
> > > > On big systems, the mm refcount can become highly contented when doing
> > > > a lot of context switching with threaded applications (particularly
> > > > switching between the idle thread and an application thread).
> > > >
> > > > Abandoning lazy tlb slows switching down quite a bit in the important
> > > > user->idle->user cases, so so instead implement a non-refcounted scheme
> > > > that causes __mmdrop() to IPI all CPUs in the mm_cpumask and shoot down
> > > > any remaining lazy ones.
> > > >
> > > > Shootdown IPIs are some concern, but they have not been observed to be
> > > > a big problem with this scheme (the powerpc implementation generated
> > > > 314 additional interrupts on a 144 CPU system during a kernel compile).
> > > > There are a number of strategies that could be employed to reduce IPIs
> > > > if they turn out to be a problem for some workload.
> > >
> > > I'm still wondering whether we can do even better.
> > >
> >
> > Hold on a sec.. __mmput() unmaps VMAs, frees pagetables, and flushes
> > the TLB. On x86, this will shoot down all lazies as long as even a
> > single pagetable was freed. (Or at least it will if we don't have a
> > serious bug, but the code seems okay. We'll hit pmd_free_tlb, which
> > sets tlb->freed_tables, which will trigger the IPI.) So, on
> > architectures like x86, the shootdown approach should be free. The
> > only way it ought to have any excess IPIs is if we have CPUs in
> > mm_cpumask() that don't need IPI to free pagetables, which could
> > happen on paravirt.
>
> Indeed, on x86, we do this:
>
> [ 11.558844] flush_tlb_mm_range.cold+0x18/0x1d
> [ 11.559905] tlb_finish_mmu+0x10e/0x1a0
> [ 11.561068] exit_mmap+0xc8/0x1a0
> [ 11.561932] mmput+0x29/0xd0
> [ 11.562688] do_exit+0x316/0xa90
> [ 11.563588] do_group_exit+0x34/0xb0
> [ 11.564476] __x64_sys_exit_group+0xf/0x10
> [ 11.565512] do_syscall_64+0x34/0x50
>
> and we have info->freed_tables set.
>
> What are the architectures that have large systems like?
>
> x86: we already zap lazies, so it should cost basically nothing to do
> a little loop at the end of __mmput() to make sure that no lazies are
> left. If we care about paravirt performance, we could implement one
> of the optimizations I mentioned above to fix up the refcounts instead
> of sending an IPI to any remaining lazies.
>
> arm64: AFAICT arm64's flush uses magic arm64 hardware support for
> remote flushes, so any lazy mm references will still exist after
> exit_mmap(). (arm64 uses lazy TLB, right?) So this is kind of like
> the x86 paravirt case. Are there large enough arm64 systems that any
> of this matters?

Yes, there are large arm64 systems where performance of TLB invalidation
matters, but they're either niche (supercomputers) or not readily available
(NUMA boxes).

But anyway, we blow away the TLB for everybody in tlb_finish_mmu() after
freeing the page-tables. We have an optimisation to avoid flushing if
we're just unmapping leaf entries when the mm is going away, but we don't
have a choice once we get to actually reclaiming the page-tables.

One thing I probably should mention, though, is that we don't maintain
mm_cpumask() because we're not able to benefit from it and the atomic
update is a waste of time.

Will

2020-12-01 21:55:53

by Andy Lutomirski

Subject: Re: [PATCH 6/8] lazy tlb: shoot lazies, a non-refcounting lazy tlb option

On Tue, Dec 1, 2020 at 1:28 PM Will Deacon <[email protected]> wrote:
>
> On Mon, Nov 30, 2020 at 10:31:51AM -0800, Andy Lutomirski wrote:
> > other arch folk: there's some background here:
> >
> > https://lkml.kernel.org/r/CALCETrVXUbe8LfNn-Qs+DzrOQaiw+sFUg1J047yByV31SaTOZw@mail.gmail.com
> >
> > On Sun, Nov 29, 2020 at 12:16 PM Andy Lutomirski <[email protected]> wrote:
> > >
> > > On Sat, Nov 28, 2020 at 7:54 PM Andy Lutomirski <[email protected]> wrote:
> > > >
> > > > On Sat, Nov 28, 2020 at 8:02 AM Nicholas Piggin <[email protected]> wrote:
> > > > >
> > > > > On big systems, the mm refcount can become highly contented when doing
> > > > > a lot of context switching with threaded applications (particularly
> > > > > switching between the idle thread and an application thread).
> > > > >
> > > > > Abandoning lazy tlb slows switching down quite a bit in the important
> > > > > user->idle->user cases, so so instead implement a non-refcounted scheme
> > > > > that causes __mmdrop() to IPI all CPUs in the mm_cpumask and shoot down
> > > > > any remaining lazy ones.
> > > > >
> > > > > Shootdown IPIs are some concern, but they have not been observed to be
> > > > > a big problem with this scheme (the powerpc implementation generated
> > > > > 314 additional interrupts on a 144 CPU system during a kernel compile).
> > > > > There are a number of strategies that could be employed to reduce IPIs
> > > > > if they turn out to be a problem for some workload.
> > > >
> > > > I'm still wondering whether we can do even better.
> > > >
> > >
> > > Hold on a sec.. __mmput() unmaps VMAs, frees pagetables, and flushes
> > > the TLB. On x86, this will shoot down all lazies as long as even a
> > > single pagetable was freed. (Or at least it will if we don't have a
> > > serious bug, but the code seems okay. We'll hit pmd_free_tlb, which
> > > sets tlb->freed_tables, which will trigger the IPI.) So, on
> > > architectures like x86, the shootdown approach should be free. The
> > > only way it ought to have any excess IPIs is if we have CPUs in
> > > mm_cpumask() that don't need IPI to free pagetables, which could
> > > happen on paravirt.
> >
> > Indeed, on x86, we do this:
> >
> > [ 11.558844] flush_tlb_mm_range.cold+0x18/0x1d
> > [ 11.559905] tlb_finish_mmu+0x10e/0x1a0
> > [ 11.561068] exit_mmap+0xc8/0x1a0
> > [ 11.561932] mmput+0x29/0xd0
> > [ 11.562688] do_exit+0x316/0xa90
> > [ 11.563588] do_group_exit+0x34/0xb0
> > [ 11.564476] __x64_sys_exit_group+0xf/0x10
> > [ 11.565512] do_syscall_64+0x34/0x50
> >
> > and we have info->freed_tables set.
> >
> > What are the architectures that have large systems like?
> >
> > x86: we already zap lazies, so it should cost basically nothing to do
> > a little loop at the end of __mmput() to make sure that no lazies are
> > left. If we care about paravirt performance, we could implement one
> > of the optimizations I mentioned above to fix up the refcounts instead
> > of sending an IPI to any remaining lazies.
> >
> > arm64: AFAICT arm64's flush uses magic arm64 hardware support for
> > remote flushes, so any lazy mm references will still exist after
> > exit_mmap(). (arm64 uses lazy TLB, right?) So this is kind of like
> > the x86 paravirt case. Are there large enough arm64 systems that any
> > of this matters?
>
> Yes, there are large arm64 systems where performance of TLB invalidation
> matters, but they're either niche (supercomputers) or not readily available
> (NUMA boxes).
>
> But anyway, we blow away the TLB for everybody in tlb_finish_mmu() after
> freeing the page-tables. We have an optimisation to avoid flushing if
> we're just unmapping leaf entries when the mm is going away, but we don't
> have a choice once we get to actually reclaiming the page-tables.
>
> One thing I probably should mention, though, is that we don't maintain
> mm_cpumask() because we're not able to benefit from it and the atomic
> update is a waste of time.

Do you do anything special for lazy TLB or do you just use the generic
code? (i.e. where do your user pagetables point when you go from a
user task to idle or to a kernel thread?)

Do you end up with all cpus set in mm_cpumask or can you have the mm
loaded on a CPU that isn't in mm_cpumask?

--Andy

>
> Will

2020-12-01 23:08:35

by Will Deacon

Subject: Re: [PATCH 6/8] lazy tlb: shoot lazies, a non-refcounting lazy tlb option

On Tue, Dec 01, 2020 at 01:50:38PM -0800, Andy Lutomirski wrote:
> On Tue, Dec 1, 2020 at 1:28 PM Will Deacon <[email protected]> wrote:
> >
> > On Mon, Nov 30, 2020 at 10:31:51AM -0800, Andy Lutomirski wrote:
> > > other arch folk: there's some background here:
> > >
> > > https://lkml.kernel.org/r/CALCETrVXUbe8LfNn-Qs+DzrOQaiw+sFUg1J047yByV31SaTOZw@mail.gmail.com
> > >
> > > On Sun, Nov 29, 2020 at 12:16 PM Andy Lutomirski <[email protected]> wrote:
> > > >
> > > > On Sat, Nov 28, 2020 at 7:54 PM Andy Lutomirski <[email protected]> wrote:
> > > > >
> > > > > On Sat, Nov 28, 2020 at 8:02 AM Nicholas Piggin <[email protected]> wrote:
> > > > > >
> > > > > > On big systems, the mm refcount can become highly contented when doing
> > > > > > a lot of context switching with threaded applications (particularly
> > > > > > switching between the idle thread and an application thread).
> > > > > >
> > > > > > Abandoning lazy tlb slows switching down quite a bit in the important
> > > > > > user->idle->user cases, so so instead implement a non-refcounted scheme
> > > > > > that causes __mmdrop() to IPI all CPUs in the mm_cpumask and shoot down
> > > > > > any remaining lazy ones.
> > > > > >
> > > > > > Shootdown IPIs are some concern, but they have not been observed to be
> > > > > > a big problem with this scheme (the powerpc implementation generated
> > > > > > 314 additional interrupts on a 144 CPU system during a kernel compile).
> > > > > > There are a number of strategies that could be employed to reduce IPIs
> > > > > > if they turn out to be a problem for some workload.
> > > > >
> > > > > I'm still wondering whether we can do even better.
> > > > >
> > > >
> > > > Hold on a sec.. __mmput() unmaps VMAs, frees pagetables, and flushes
> > > > the TLB. On x86, this will shoot down all lazies as long as even a
> > > > single pagetable was freed. (Or at least it will if we don't have a
> > > > serious bug, but the code seems okay. We'll hit pmd_free_tlb, which
> > > > sets tlb->freed_tables, which will trigger the IPI.) So, on
> > > > architectures like x86, the shootdown approach should be free. The
> > > > only way it ought to have any excess IPIs is if we have CPUs in
> > > > mm_cpumask() that don't need IPI to free pagetables, which could
> > > > happen on paravirt.
> > >
> > > Indeed, on x86, we do this:
> > >
> > > [ 11.558844] flush_tlb_mm_range.cold+0x18/0x1d
> > > [ 11.559905] tlb_finish_mmu+0x10e/0x1a0
> > > [ 11.561068] exit_mmap+0xc8/0x1a0
> > > [ 11.561932] mmput+0x29/0xd0
> > > [ 11.562688] do_exit+0x316/0xa90
> > > [ 11.563588] do_group_exit+0x34/0xb0
> > > [ 11.564476] __x64_sys_exit_group+0xf/0x10
> > > [ 11.565512] do_syscall_64+0x34/0x50
> > >
> > > and we have info->freed_tables set.
> > >
> > > What are the architectures that have large systems like?
> > >
> > > x86: we already zap lazies, so it should cost basically nothing to do
> > > a little loop at the end of __mmput() to make sure that no lazies are
> > > left. If we care about paravirt performance, we could implement one
> > > of the optimizations I mentioned above to fix up the refcounts instead
> > > of sending an IPI to any remaining lazies.
> > >
> > > arm64: AFAICT arm64's flush uses magic arm64 hardware support for
> > > remote flushes, so any lazy mm references will still exist after
> > > exit_mmap(). (arm64 uses lazy TLB, right?) So this is kind of like
> > > the x86 paravirt case. Are there large enough arm64 systems that any
> > > of this matters?
> >
> > Yes, there are large arm64 systems where performance of TLB invalidation
> > matters, but they're either niche (supercomputers) or not readily available
> > (NUMA boxes).
> >
> > But anyway, we blow away the TLB for everybody in tlb_finish_mmu() after
> > freeing the page-tables. We have an optimisation to avoid flushing if
> > we're just unmapping leaf entries when the mm is going away, but we don't
> > have a choice once we get to actually reclaiming the page-tables.
> >
> > One thing I probably should mention, though, is that we don't maintain
> > mm_cpumask() because we're not able to benefit from it and the atomic
> > update is a waste of time.
>
> Do you do anything special for lazy TLB or do you just use the generic
> code? (i.e. where do your user pagetables point when you go from a
> user task to idle or to a kernel thread?)

We don't do anything special (there's something funny with the PAN emulation
but you can ignore that); the page-table just points wherever it did before
for userspace. Switching explicitly to the init_mm, however, causes us to
unmap userspace entirely.

Since we have ASIDs, switch_mm() generally doesn't have to care about the
TLBs at all.

> Do you end up with all cpus set in mm_cpumask or can you have the mm
> loaded on a CPU that isn't in mm_cpumask?

I think the mask is always zero (we never set anything in there).

Will

2020-12-02 03:12:32

by Nicholas Piggin

Subject: Re: [PATCH 6/8] lazy tlb: shoot lazies, a non-refcounting lazy tlb option

Excerpts from Andy Lutomirski's message of November 29, 2020 1:54 pm:
> On Sat, Nov 28, 2020 at 8:02 AM Nicholas Piggin <[email protected]> wrote:
>>
>> On big systems, the mm refcount can become highly contented when doing
>> a lot of context switching with threaded applications (particularly
>> switching between the idle thread and an application thread).
>>
>> Abandoning lazy tlb slows switching down quite a bit in the important
>> user->idle->user cases, so so instead implement a non-refcounted scheme
>> that causes __mmdrop() to IPI all CPUs in the mm_cpumask and shoot down
>> any remaining lazy ones.
>>
>> Shootdown IPIs are some concern, but they have not been observed to be
>> a big problem with this scheme (the powerpc implementation generated
>> 314 additional interrupts on a 144 CPU system during a kernel compile).
>> There are a number of strategies that could be employed to reduce IPIs
>> if they turn out to be a problem for some workload.
>
> I'm still wondering whether we can do even better.

We probably can, for some values of better / more complex. This came up
last time I posted, there was a big concern about IPIs etc, but it just
wasn't an issue at all even when I tried to coax them to happen a bit.

The thing is they are fairly self-limiting; it's not actually all that
frequent that you have an mm get taken for a lazy *and* move between
CPUs. Perhaps more often with threaded apps, but in that case you're
eating various IPI costs anyway (e.g., when moving the task to another
CPU, on TLB shootdowns, etc).

So from last time I did measure and I did document some possible
improvements that could be made in comments, but I decided to keep it
simple before adding complexity to it.

>
> The IPIs you're doing aren't really necessary -- we don't
> fundamentally need to free the pagetables immediately when all
> non-lazy users are done with them (and current kernels don't) -- what
> we need to do is to synchronize all the bookkeeping. So, with
> adequate locking (famous last words), a couple of alternative schemes
> ought to be possible.

It's not freeing the page tables, those are freed by this point already
I think (at least on powerpc they are). It's releasing the lazy mm.

>
> a) Instead of sending an IPI, increment mm_count on behalf of the
> remote CPU and do something to make sure that the remote CPU knows we
> did this on its behalf. Then free the mm when mm_count hits zero.
>
> b) Treat mm_cpumask as part of the refcount. Add one to mm_count when
> an mm is created. Once mm_users hits zero, whoever clears the last
> bit in mm_cpumask is responsible for decrementing a single reference
> from mm_count, and whoever sets it to zero frees the mm.

Right, these were some possible avenues to explore; the thing is it's
complexity and more synchronisation costs, and in the fast (context
switch) path too. The IPI actually avoids all fast path work, atomic
or not.

> Version (b) seems fairly straightforward to implement -- add RCU
> protection and a atomic_t special_ref_cleared (initially 0) to struct
> mm_struct itself. After anyone clears a bit to mm_cpumask (which is
> already a barrier), they read mm_users. If it's zero, then they scan
> mm_cpumask and see if it's empty. If it is, they atomically swap
> special_ref_cleared to 1. If it was zero before the swap, they do
> mmdrop(). I can imagine some tweaks that could make this a big
> faster, at least in the limit of a huge number of CPUs.
>
> Version (a) seems a bit harder to reason about. Maybe it could be
> done like this. Add a percpu variable mm_with_extra_count. This
> variable can be NULL, but it can also be an mm that has an extra
> reference on behalf of the cpu in question.
>
> __mmput scans mm_cpumask and, for each cpu in the mask, mmgrabs the mm
> and cmpxchgs that cpu's mm_with_extra_count from NULL to mm. If it
> succeeds, then we win. If it fails, further thought is required, and
> maybe we have to send an IPI, although maybe some other cleverness is
> possible. Any time a CPU switches mms, it does atomic swaps
> mm_with_extra_count to NULL and mmdrops whatever the mm was. (Maybe
> it needs to check the mm isn't equal to the new mm, although it would
> be quite bizarre for this to happen.) Other than these mmgrab and
> mmdrop calls, the mm switching code doesn't mmgrab or mmdrop at all.
>
>
> Version (a) seems like it could have excellent performance.

That said, if x86 wanted to explore something like this, the code to do
it is a bit modular (I don't think a proliferation of lazy refcounting
config options is a good idea of course, but 2 versions, one for powerpc
style set-and-forget mm_cpumask and one for x86 set-and-clear, would
be okay).

> *However*, I think we should consider whether we want to do something
> even bigger first. Even with any of these changes, we still need to
> maintain mm_cpumask(), and that itself can be a scalability problem.
> I wonder if we can solve this problem too. Perhaps the switch_mm()
> paths could only ever set mm_cpumask bits,

Powerpc does this.

> and anyone who would send
> an IPI because a bit is set in mm_cpumask would first check some
> percpu variable (cpu_rq(cpu)->something?

This is a suggested possible optimization to the IPI scheme (you would
check if it's active).

There's pros and cons to it. You get more IPIs and cross TLB shootdowns
and jitter cleaning up behind you rather than cleaning up as you go.

> an entirely new variable) to
> see if the bit in mm_cpumask is spurious. Or perhaps mm_cpumask could
> be split up across multiple cachelines, one per node.

IIRC Peter or someone mentioned this was something that was looked at
for x86.

> We should keep the recent lessons from Apple in mind, though: x86 is a
> dinosaur.

Wow. What is the recent lesson from Apple?? I'm completely out of the
loop here.

> The future of atomics is going to look a lot more like
> ARM's LSE than x86's rather anemic set. This means that mm_cpumask
> operations won't need to be full barriers forever, and we might not
> want to take the implied full barriers in set_bit() and clear_bit()
> for granted.

Sure, set_bit / clear_bit aren't full barriers in terms of Linux
semantics, so generic code doesn't assume that. What x86 does is
add the smp_mb__after_blah or before_blah to avoid an added barrier
because of its heavier-than-required set_bit.
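
i.e., generic code that wants ordering around the cpumask update has to
ask for it explicitly, along the lines of (illustrative only):

        cpumask_set_cpu(cpu, mm_cpumask(mm));
        smp_mb__after_atomic(); /* set_bit() alone isn't ordered in generic code */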

I'm not quite sure what you were getting at though. The atomic itself is
really quite a small cost of the exit() operation (or even a context
switch operation) _most_ of the time (x86 CPUs seem to have very fast
atomics, so it might be an even smaller cost for you). It's just when you
happen to bounce a cache line that it hurts, no matter what you do.

Thanks,
Nick


2020-12-02 03:52:08

by Nicholas Piggin

Subject: Re: [PATCH 6/8] lazy tlb: shoot lazies, a non-refcounting lazy tlb option

Excerpts from Andy Lutomirski's message of December 1, 2020 4:31 am:
> other arch folk: there's some background here:
>
> https://lkml.kernel.org/r/CALCETrVXUbe8LfNn-Qs+DzrOQaiw+sFUg1J047yByV31SaTOZw@mail.gmail.com
>
> On Sun, Nov 29, 2020 at 12:16 PM Andy Lutomirski <[email protected]> wrote:
>>
>> On Sat, Nov 28, 2020 at 7:54 PM Andy Lutomirski <[email protected]> wrote:
>> >
>> > On Sat, Nov 28, 2020 at 8:02 AM Nicholas Piggin <[email protected]> wrote:
>> > >
>> > > On big systems, the mm refcount can become highly contented when doing
>> > > a lot of context switching with threaded applications (particularly
>> > > switching between the idle thread and an application thread).
>> > >
>> > > Abandoning lazy tlb slows switching down quite a bit in the important
>> > > user->idle->user cases, so so instead implement a non-refcounted scheme
>> > > that causes __mmdrop() to IPI all CPUs in the mm_cpumask and shoot down
>> > > any remaining lazy ones.
>> > >
>> > > Shootdown IPIs are some concern, but they have not been observed to be
>> > > a big problem with this scheme (the powerpc implementation generated
>> > > 314 additional interrupts on a 144 CPU system during a kernel compile).
>> > > There are a number of strategies that could be employed to reduce IPIs
>> > > if they turn out to be a problem for some workload.
>> >
>> > I'm still wondering whether we can do even better.
>> >
>>
>> Hold on a sec.. __mmput() unmaps VMAs, frees pagetables, and flushes
>> the TLB. On x86, this will shoot down all lazies as long as even a
>> single pagetable was freed. (Or at least it will if we don't have a
>> serious bug, but the code seems okay. We'll hit pmd_free_tlb, which
>> sets tlb->freed_tables, which will trigger the IPI.) So, on
>> architectures like x86, the shootdown approach should be free. The
>> only way it ought to have any excess IPIs is if we have CPUs in
>> mm_cpumask() that don't need IPI to free pagetables, which could
>> happen on paravirt.
>
> Indeed, on x86, we do this:
>
> [ 11.558844] flush_tlb_mm_range.cold+0x18/0x1d
> [ 11.559905] tlb_finish_mmu+0x10e/0x1a0
> [ 11.561068] exit_mmap+0xc8/0x1a0
> [ 11.561932] mmput+0x29/0xd0
> [ 11.562688] do_exit+0x316/0xa90
> [ 11.563588] do_group_exit+0x34/0xb0
> [ 11.564476] __x64_sys_exit_group+0xf/0x10
> [ 11.565512] do_syscall_64+0x34/0x50
>
> and we have info->freed_tables set.
>
> What are the architectures that have large systems like?
>
> x86: we already zap lazies, so it should cost basically nothing to do

This is not zapping lazies, this is freeing the user page tables.

"lazy mm" is where a switch to a kernel thread takes on the
previous mm for its kernel mapping rather than switch to init_mm.

> a little loop at the end of __mmput() to make sure that no lazies are
> left. If we care about paravirt performance, we could implement one
> of the optimizations I mentioned above to fix up the refcounts instead
> of sending an IPI to any remaining lazies.

It might be possible that with x86's scheme you could scan mm_cpumask,
carefully synchronized or something, when the last user reference
gets dropped, and free the lazy at that point, but I don't know
what that would buy you because you're still having to maintain
the mm_cpumask on switches. powerpc's characteristics are just
different here, so it makes sense there, whereas I don't know if it
would on x86.

>
> arm64: AFAICT arm64's flush uses magic arm64 hardware support for
> remote flushes, so any lazy mm references will still exist after
> exit_mmap(). (arm64 uses lazy TLB, right?) So this is kind of like
> the x86 paravirt case. Are there large enough arm64 systems that any
> of this matters?
>
> s390x: The code has too many acronyms for me to understand it fully,
> but I think it's more or less the same situation as arm64. How big do
> s390x systems come?
>
> power: Ridiculously complicated, seems to vary by system and kernel config.
>
> So, Nick, your unconditional IPI scheme is apparently a big
> improvement for power, and it should be an improvement and have low
> cost for x86.

As said, the tradeoffs are different, I'm not so sure. It was a big
improvement on a very big system with the powerpc mm_cpumask switching
model on a microbenchmark designed to stress this, which is about all
I can say for it.

> On arm64 and s390x it will add more IPIs on process
> exit but reduce contention on context switching depending on how lazy
> TLB works. I suppose we could try it for all architectures without
> any further optimizations.

It will remain opt-in but certainly try it out and see. There are some
requirements as documented in the config option text.

> Or we could try one of the perhaps
> excessively clever improvements I linked above. arm64, s390x people,
> what do you think?
>

I'm not against improvements to the scheme. e.g., from the patch

+ /*
+ * IPI overheads have not found to be expensive, but they could
+ * be reduced in a number of possible ways, for example (in
+ * roughly increasing order of complexity):
+ * - A batch of mms requiring IPIs could be gathered and freed
+ * at once.
+ * - CPUs could store their active mm somewhere that can be
+ * remotely checked without a lock, to filter out
+ * false-positives in the cpumask.
+ * - After mm_users or mm_count reaches zero, switching away
+ * from the mm could clear mm_cpumask to reduce some IPIs
+ * (some batching or delaying would help).
+ * - A delayed freeing and RCU-like quiescing sequence based on
+ * mm switching to avoid IPIs completely.
+ */

But I would like to have numbers before being too clever.

Thanks,
Nick

2020-12-02 11:20:59

by Peter Zijlstra

Subject: Re: [PATCH 6/8] lazy tlb: shoot lazies, a non-refcounting lazy tlb option

On Sun, Nov 29, 2020 at 02:01:39AM +1000, Nicholas Piggin wrote:
> +static void shoot_lazy_tlbs(struct mm_struct *mm)
> +{
> + if (IS_ENABLED(CONFIG_MMU_LAZY_TLB_SHOOTDOWN)) {
> + /*
> + * IPI overheads have not found to be expensive, but they could
> + * be reduced in a number of possible ways, for example (in
> + * roughly increasing order of complexity):
> + * - A batch of mms requiring IPIs could be gathered and freed
> + * at once.
> + * - CPUs could store their active mm somewhere that can be
> + * remotely checked without a lock, to filter out
> + * false-positives in the cpumask.
> + * - After mm_users or mm_count reaches zero, switching away
> + * from the mm could clear mm_cpumask to reduce some IPIs
> + * (some batching or delaying would help).
> + * - A delayed freeing and RCU-like quiescing sequence based on
> + * mm switching to avoid IPIs completely.
> + */
> + on_each_cpu_mask(mm_cpumask(mm), do_shoot_lazy_tlb, (void *)mm, 1);
> + if (IS_ENABLED(CONFIG_DEBUG_VM))
> + on_each_cpu(do_check_lazy_tlb, (void *)mm, 1);

So the obvious 'improvement' here would be something like:

for_each_online_cpu(cpu) {
p = rcu_dereference(cpu_rq(cpu)->curr);
if (p->active_mm != mm)
continue;
__cpumask_set_cpu(cpu, tmpmask);
}
on_each_cpu_mask(tmpmask, ...);

The remote CPU will never switch _to_ @mm, on account of it being quite
dead, but it is quite prone to false negatives.

Consider that __schedule() sets rq->curr *before* context_switch(); this
means we'll see next->active_mm, even though prev->active_mm might still
be our @mm.

Now, because we'll be removing the atomic ops from context_switch()'s
active_mm swizzling, I think we can change this to something like the
below. The hope being that the cost of the new barrier can be offset by
the loss of the atomics.

Hmm ?

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 41404afb7f4c..2597c5c0ccb0 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4509,7 +4509,6 @@ context_switch(struct rq *rq, struct task_struct *prev,
if (!next->mm) { // to kernel
enter_lazy_tlb(prev->active_mm, next);

- next->active_mm = prev->active_mm;
if (prev->mm) // from user
mmgrab(prev->active_mm);
else
@@ -4524,6 +4523,7 @@ context_switch(struct rq *rq, struct task_struct *prev,
* case 'prev->active_mm == next->mm' through
* finish_task_switch()'s mmdrop().
*/
+ next->active_mm = next->mm;
switch_mm_irqs_off(prev->active_mm, next->mm, next);

if (!prev->mm) { // from kernel
@@ -5713,11 +5713,9 @@ static void __sched notrace __schedule(bool preempt)

if (likely(prev != next)) {
rq->nr_switches++;
- /*
- * RCU users of rcu_dereference(rq->curr) may not see
- * changes to task_struct made by pick_next_task().
- */
- RCU_INIT_POINTER(rq->curr, next);
+
+ next->active_mm = prev->active_mm;
+ rcu_assign_pointer(rq->curr, next);
/*
* The membarrier system call requires each architecture
* to have a full memory barrier after updating

2020-12-02 12:48:18

by Peter Zijlstra

Subject: Re: [PATCH 6/8] lazy tlb: shoot lazies, a non-refcounting lazy tlb option

On Wed, Dec 02, 2020 at 12:17:31PM +0100, Peter Zijlstra wrote:

> So the obvious 'improvement' here would be something like:
>
> for_each_online_cpu(cpu) {
> p = rcu_dereference(cpu_rq(cpu)->curr;
> if (p->active_mm != mm)
> continue;
> __cpumask_set_cpu(cpu, tmpmask);
> }
> on_each_cpu_mask(tmpmask, ...);
>
> The remote CPU will never switch _to_ @mm, on account of it being quite
> dead, but it is quite prone to false negatives.
>
> Consider that __schedule() sets rq->curr *before* context_switch(), this
> means we'll see next->active_mm, even though prev->active_mm might still
> be our @mm.
>
> Now, because we'll be removing the atomic ops from context_switch()'s
> active_mm swizzling, I think we can change this to something like the
> below. The hope being that the cost of the new barrier can be offset by
> the loss of the atomics.
>
> Hmm ?
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 41404afb7f4c..2597c5c0ccb0 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -4509,7 +4509,6 @@ context_switch(struct rq *rq, struct task_struct *prev,
> if (!next->mm) { // to kernel
> enter_lazy_tlb(prev->active_mm, next);
>
> - next->active_mm = prev->active_mm;
> if (prev->mm) // from user
> mmgrab(prev->active_mm);
> else
> @@ -4524,6 +4523,7 @@ context_switch(struct rq *rq, struct task_struct *prev,
> * case 'prev->active_mm == next->mm' through
> * finish_task_switch()'s mmdrop().
> */
> + next->active_mm = next->mm;
> switch_mm_irqs_off(prev->active_mm, next->mm, next);

I think that next->active_mm store should be after switch_mm(),
otherwise we still race.

>
> if (!prev->mm) { // from kernel
> @@ -5713,11 +5713,9 @@ static void __sched notrace __schedule(bool preempt)
>
> if (likely(prev != next)) {
> rq->nr_switches++;
> - /*
> - * RCU users of rcu_dereference(rq->curr) may not see
> - * changes to task_struct made by pick_next_task().
> - */
> - RCU_INIT_POINTER(rq->curr, next);
> +
> + next->active_mm = prev->active_mm;
> + rcu_assign_pointer(rq->curr, next);
> /*
> * The membarrier system call requires each architecture
> * to have a full memory barrier after updating

2020-12-02 14:22:56

by Peter Zijlstra

Subject: Re: [PATCH 6/8] lazy tlb: shoot lazies, a non-refcounting lazy tlb option

On Sun, Nov 29, 2020 at 02:01:39AM +1000, Nicholas Piggin wrote:
> + * - A delayed freeing and RCU-like quiescing sequence based on
> + * mm switching to avoid IPIs completely.

That one's interesting too. So basically you want to count switch_mm()
invocations on each CPU. Then, periodically snapshot the counter on each
CPU, and when they've all changed, increment a global counter.

Then, you snapshot the global counter and wait for it to increment
(twice I think, the first increment might already be in progress).

The only question here is what should drive this machinery.. the tick
probably.

This shouldn't be too hard to do I think.

Something a little like so perhaps?


diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 41404afb7f4c..27b64a60a468 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4525,6 +4525,7 @@ context_switch(struct rq *rq, struct task_struct *prev,
* finish_task_switch()'s mmdrop().
*/
switch_mm_irqs_off(prev->active_mm, next->mm, next);
+ rq->nr_mm_switches++;

if (!prev->mm) { // from kernel
/* will mmdrop() in finish_task_switch(). */
@@ -4739,6 +4740,80 @@ unsigned long long task_sched_runtime(struct task_struct *p)
return ns;
}

+static DEFINE_PER_CPU(unsigned long[2], mm_switches);
+
+static struct {
+ unsigned long __percpu *switches[2];
+ unsigned long generation;
+ atomic_t complete;
+ struct wait_queue_head wait;
+} mm_foo = {
+ .switches = &mm_switches,
+ .generation = 0,
+ .complete = -1, // XXX bootstrap, hotplug
+ .wait = __WAIT_QUEUE_HEAD_INITIALIZER(mm_foo.wait),
+};
+
+static bool mm_gen_tick(int cpu, struct rq *rq)
+{
+ unsigned long prev, curr, switches = rq->nr_mm_switches;
+ int idx = READ_ONCE(mm_foo.generation) & 1;
+
+ /* DATA-DEP on mm_foo.generation */
+
+ prev = __this_cpu_read(mm_foo.switches[idx^1]);
+ curr = __this_cpu_read(mm_foo.switches[idx]);
+
+ /* we haven't switched since the last generation */
+ if (prev == switches)
+ return false;
+
+ __this_cpu_write(mm_foo.switches[idx], switches);
+
+ /*
+ * If @curr is less than @prev, this is the first update of
+ * this generation, per the above, switches has also increased since,
+ * so mark our CPU complete.
+ */
+ if ((long)(curr - prev) < 0 && atomic_dec_and_test(&mm_foo.complete)) {
+ /*
+ * All CPUs are complete, IOW they all switched at least once
+ * since the last generation. Reset the completion counter and
+ * increment the generation.
+ */
+ atomic_set(&mm_foo.complete, num_online_cpus());
+ /*
+ * Matches the address dependency above:
+ *
+ * idx = gen & 1 complete = nr_cpus
+ * <DATA-DEP> <WMB>
+ * curr = sw[idx] generation++;
+ * prev = sw[idx^1]
+ * if (curr < prev)
+ * complete--
+ *
+ * If we don't observe the new generation; we'll not decrement. If we
+ * do see the new generation, we must also see the new completion count.
+ */
+ smp_wmb();
+ mm_foo.generation++;
+ return true;
+ }
+
+ return false;
+}
+
+static void mm_gen_wake(void)
+{
+ wake_up_all(&mm_foo.wait);
+}
+
+static void mm_gen_wait(void)
+{
+ unsigned int gen = READ_ONCE(mm_foo.generation);
+ wait_event(mm_foo.wait, READ_ONCE(mm_foo.generation) - gen > 1);
+}
+
/*
* This function gets called by the timer code, with HZ frequency.
* We call it with interrupts disabled.
@@ -4750,6 +4825,7 @@ void scheduler_tick(void)
struct task_struct *curr = rq->curr;
struct rq_flags rf;
unsigned long thermal_pressure;
+ bool wake_mm_gen;

arch_scale_freq_tick();
sched_clock_tick();
@@ -4763,8 +4839,13 @@ void scheduler_tick(void)
calc_global_load_tick(rq);
psi_task_tick(rq);

+ wake_mm_gen = mm_gen_tick(cpu, rq);
+
rq_unlock(rq, &rf);

+ if (wake_mm_gen)
+ mm_gen_wake();
+
perf_event_task_tick();

#ifdef CONFIG_SMP
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index bf9d8da7d35e..62fb685db8d0 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -927,6 +927,7 @@ struct rq {
unsigned int ttwu_pending;
#endif
u64 nr_switches;
+ u64 nr_mm_switches;

#ifdef CONFIG_UCLAMP_TASK
/* Utilization clamp values based on CPU's RUNNABLE tasks */

2020-12-02 14:42:56

by Andy Lutomirski

Subject: Re: [PATCH 6/8] lazy tlb: shoot lazies, a non-refcounting lazy tlb option


> On Dec 2, 2020, at 6:20 AM, Peter Zijlstra <[email protected]> wrote:
>
> On Sun, Nov 29, 2020 at 02:01:39AM +1000, Nicholas Piggin wrote:
>> + * - A delayed freeing and RCU-like quiescing sequence based on
>> + * mm switching to avoid IPIs completely.
>
> That one's interesting too. so basically you want to count switch_mm()
> invocations on each CPU. Then, periodically snapshot the counter on each
> CPU, and when they've all changed, increment a global counter.
>
> Then, you snapshot the global counter and wait for it to increment
> (twice I think, the first increment might already be in progress).
>
> The only question here is what should drive this machinery.. the tick
> probably.
>
> This shouldn't be too hard to do I think.
>
> Something a little like so perhaps?

I don’t think this will work. A CPU can go idle with lazy mm and nohz forever. This could lead to unbounded memory use on a lightly loaded system.

2020-12-02 16:34:26

by Peter Zijlstra

Subject: Re: [PATCH 6/8] lazy tlb: shoot lazies, a non-refcounting lazy tlb option

On Wed, Dec 02, 2020 at 06:38:12AM -0800, Andy Lutomirski wrote:
>
> > On Dec 2, 2020, at 6:20 AM, Peter Zijlstra <[email protected]> wrote:
> >
> > On Sun, Nov 29, 2020 at 02:01:39AM +1000, Nicholas Piggin wrote:
> >> + * - A delayed freeing and RCU-like quiescing sequence based on
> >> + * mm switching to avoid IPIs completely.
> >
> > That one's interesting too. so basically you want to count switch_mm()
> > invocations on each CPU. Then, periodically snapshot the counter on each
> > CPU, and when they've all changed, increment a global counter.
> >
> > Then, you snapshot the global counter and wait for it to increment
> > (twice I think, the first increment might already be in progress).
> >
> > The only question here is what should drive this machinery.. the tick
> > probably.
> >
> > This shouldn't be too hard to do I think.
> >
> > Something a little like so perhaps?
>
> I don’t think this will work. A CPU can go idle with lazy mm and nohz
> forever. This could lead to unbounded memory use on a lightly loaded
> system.

Hurm.. quite so indeed. Fixing that seems to end up requiring that
other proposal, such that we can tell which CPU has what active_mm
stuck.

Also, more complicated... :/

2020-12-03 05:09:07

by Andy Lutomirski

Subject: Re: [PATCH 6/8] lazy tlb: shoot lazies, a non-refcounting lazy tlb option

> On Dec 1, 2020, at 7:47 PM, Nicholas Piggin <[email protected]> wrote:
>
> Excerpts from Andy Lutomirski's message of December 1, 2020 4:31 am:
>> other arch folk: there's some background here:
>>
>> https://lkml.kernel.org/r/CALCETrVXUbe8LfNn-Qs+DzrOQaiw+sFUg1J047yByV31SaTOZw@mail.gmail.com
>>
>>> On Sun, Nov 29, 2020 at 12:16 PM Andy Lutomirski <[email protected]> wrote:
>>>
>>> On Sat, Nov 28, 2020 at 7:54 PM Andy Lutomirski <[email protected]> wrote:
>>>>
>>>> On Sat, Nov 28, 2020 at 8:02 AM Nicholas Piggin <[email protected]> wrote:
>>>>>
>>>>> On big systems, the mm refcount can become highly contented when doing
>>>>> a lot of context switching with threaded applications (particularly
>>>>> switching between the idle thread and an application thread).
>>>>>
>>>>> Abandoning lazy tlb slows switching down quite a bit in the important
>>>>> user->idle->user cases, so so instead implement a non-refcounted scheme
>>>>> that causes __mmdrop() to IPI all CPUs in the mm_cpumask and shoot down
>>>>> any remaining lazy ones.
>>>>>
>>>>> Shootdown IPIs are some concern, but they have not been observed to be
>>>>> a big problem with this scheme (the powerpc implementation generated
>>>>> 314 additional interrupts on a 144 CPU system during a kernel compile).
>>>>> There are a number of strategies that could be employed to reduce IPIs
>>>>> if they turn out to be a problem for some workload.
>>>>
>>>> I'm still wondering whether we can do even better.
>>>>
>>>
>>> Hold on a sec.. __mmput() unmaps VMAs, frees pagetables, and flushes
>>> the TLB. On x86, this will shoot down all lazies as long as even a
>>> single pagetable was freed. (Or at least it will if we don't have a
>>> serious bug, but the code seems okay. We'll hit pmd_free_tlb, which
>>> sets tlb->freed_tables, which will trigger the IPI.) So, on
>>> architectures like x86, the shootdown approach should be free. The
>>> only way it ought to have any excess IPIs is if we have CPUs in
>>> mm_cpumask() that don't need IPI to free pagetables, which could
>>> happen on paravirt.
>>
>> Indeed, on x86, we do this:
>>
>> [ 11.558844] flush_tlb_mm_range.cold+0x18/0x1d
>> [ 11.559905] tlb_finish_mmu+0x10e/0x1a0
>> [ 11.561068] exit_mmap+0xc8/0x1a0
>> [ 11.561932] mmput+0x29/0xd0
>> [ 11.562688] do_exit+0x316/0xa90
>> [ 11.563588] do_group_exit+0x34/0xb0
>> [ 11.564476] __x64_sys_exit_group+0xf/0x10
>> [ 11.565512] do_syscall_64+0x34/0x50
>>
>> and we have info->freed_tables set.
>>
>> What are the architectures that have large systems like?
>>
>> x86: we already zap lazies, so it should cost basically nothing to do
>
> This is not zapping lazies, this is freeing the user page tables.
>
> "lazy mm" is where a switch to a kernel thread takes on the
> previous mm for its kernel mapping rather than switch to init_mm.

The intent of the code is to flush the TLB after freeing user page
tables, but, on bare metal, lazies get zapped as a side effect.
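A heavily simplified sketch of that side effect (placeholder names throughout; the real logic lives in include/asm-generic/tlb.h, mm/mmu_gather.c and arch/x86/mm/tlb.c): once any page-table page has been freed the gather is marked, and the final flush then has to IPI every CPU in mm_cpumask(), lazy users included.

#include <linux/cpumask.h>
#include <linux/mm_types.h>
#include <linux/smp.h>

/* Placeholder gather: only the one field that matters here. */
struct gather_sketch {
        struct mm_struct *mm;
        bool freed_tables;
};

/* What pmd_free_tlb() and friends effectively record when a
 * page-table page goes away. */
static void note_pagetable_freed(struct gather_sketch *tlb)
{
        tlb->freed_tables = true;
}

/* Placeholder for the per-CPU flush work done in the IPI handler.
 * On bare-metal x86 a CPU that is only lazily using the mm reacts to
 * this IPI by switching to init_mm, which is the "zap" side effect. */
static void flush_one_cpu(void *info)
{
}

static void finish_flush(struct gather_sketch *tlb)
{
        /* With freed_tables set the flush cannot be local-only: every
         * CPU in mm_cpumask(tlb->mm) gets the IPI. */
        if (tlb->freed_tables)
                on_each_cpu_mask(mm_cpumask(tlb->mm), flush_one_cpu,
                                 tlb->mm, 1);
}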

Anyway, I'm going to send out a mockup of an alternative approach shortly.

2020-12-03 17:14:03

by Alexander Gordeev

[permalink] [raw]
Subject: Re: [PATCH 6/8] lazy tlb: shoot lazies, a non-refcounting lazy tlb option

On Mon, Nov 30, 2020 at 10:31:51AM -0800, Andy Lutomirski wrote:
> other arch folk: there's some background here:
>
> https://lkml.kernel.org/r/CALCETrVXUbe8LfNn-Qs+DzrOQaiw+sFUg1J047yByV31SaTOZw@mail.gmail.com
>
> On Sun, Nov 29, 2020 at 12:16 PM Andy Lutomirski <[email protected]> wrote:
> >
> > On Sat, Nov 28, 2020 at 7:54 PM Andy Lutomirski <[email protected]> wrote:
> > >
> > > On Sat, Nov 28, 2020 at 8:02 AM Nicholas Piggin <[email protected]> wrote:
> > > >
> > > > On big systems, the mm refcount can become highly contented when doing
> > > > a lot of context switching with threaded applications (particularly
> > > > switching between the idle thread and an application thread).
> > > >
> > > > Abandoning lazy tlb slows switching down quite a bit in the important
> > > > user->idle->user cases, so so instead implement a non-refcounted scheme
> > > > that causes __mmdrop() to IPI all CPUs in the mm_cpumask and shoot down
> > > > any remaining lazy ones.
> > > >
> > > > Shootdown IPIs are some concern, but they have not been observed to be
> > > > a big problem with this scheme (the powerpc implementation generated
> > > > 314 additional interrupts on a 144 CPU system during a kernel compile).
> > > > There are a number of strategies that could be employed to reduce IPIs
> > > > if they turn out to be a problem for some workload.
> > >
> > > I'm still wondering whether we can do even better.
> > >
> >
> > Hold on a sec.. __mmput() unmaps VMAs, frees pagetables, and flushes
> > the TLB. On x86, this will shoot down all lazies as long as even a
> > single pagetable was freed. (Or at least it will if we don't have a
> > serious bug, but the code seems okay. We'll hit pmd_free_tlb, which
> > sets tlb->freed_tables, which will trigger the IPI.) So, on
> > architectures like x86, the shootdown approach should be free. The
> > only way it ought to have any excess IPIs is if we have CPUs in
> > mm_cpumask() that don't need IPI to free pagetables, which could
> > happen on paravirt.
>
> Indeed, on x86, we do this:
>
> [ 11.558844] flush_tlb_mm_range.cold+0x18/0x1d
> [ 11.559905] tlb_finish_mmu+0x10e/0x1a0
> [ 11.561068] exit_mmap+0xc8/0x1a0
> [ 11.561932] mmput+0x29/0xd0
> [ 11.562688] do_exit+0x316/0xa90
> [ 11.563588] do_group_exit+0x34/0xb0
> [ 11.564476] __x64_sys_exit_group+0xf/0x10
> [ 11.565512] do_syscall_64+0x34/0x50
>
> and we have info->freed_tables set.
>
> What are the architectures that have large systems like?
>
> x86: we already zap lazies, so it should cost basically nothing to do
> a little loop at the end of __mmput() to make sure that no lazies are
> left. If we care about paravirt performance, we could implement one
> of the optimizations I mentioned above to fix up the refcounts instead
> of sending an IPI to any remaining lazies.
>
> arm64: AFAICT arm64's flush uses magic arm64 hardware support for
> remote flushes, so any lazy mm references will still exist after
> exit_mmap(). (arm64 uses lazy TLB, right?) So this is kind of like
> the x86 paravirt case. Are there large enough arm64 systems that any
> of this matters?
>
> s390x: The code has too many acronyms for me to understand it fully,
> but I think it's more or less the same situation as arm64. How big do
> s390x systems come?
>
> power: Ridiculously complicated, seems to vary by system and kernel config.
>
> So, Nick, your unconditional IPI scheme is apparently a big
> improvement for power, and it should be an improvement and have low
> cost for x86. On arm64 and s390x it will add more IPIs on process
> exit but reduce contention on context switching depending on how lazy

s390 does not invalidate TLBs per-CPU explicitly - we have special
instructions for that. Those in turn initiate signalling to other
CPUs, completely transparently to the OS.

Apart from mm_count, I am struggling to see how the suggested
scheme could change the contention on s390 in connection with the
TLB. Could you clarify a bit here, please?

> TLB works. I suppose we could try it for all architectures without
> any further optimizations. Or we could try one of the perhaps
> excessively clever improvements I linked above. arm64, s390x people,
> what do you think?

I do not immediately see anything in the series that would harm
performance on s390.

We however use mm_cpumask to distinguish between local and global TLB
flushes. With this series it looks like mm_cpumask is *required* to
be consistent with lazy users. And that is something quite difficult
for us to adhere to (at least in the foreseeable future).

But actually keeping track of lazy users in a cpumask is something
the generic code would rather do AFAICT.

Thanks!

2020-12-03 17:18:09

by Andy Lutomirski

[permalink] [raw]
Subject: Re: [PATCH 6/8] lazy tlb: shoot lazies, a non-refcounting lazy tlb option



> On Dec 3, 2020, at 9:09 AM, Alexander Gordeev <[email protected]> wrote:
>
> On Mon, Nov 30, 2020 at 10:31:51AM -0800, Andy Lutomirski wrote:
>> other arch folk: there's some background here:

>
>>
>> power: Ridiculously complicated, seems to vary by system and kernel config.
>>
>> So, Nick, your unconditional IPI scheme is apparently a big
>> improvement for power, and it should be an improvement and have low
>> cost for x86. On arm64 and s390x it will add more IPIs on process
>> exit but reduce contention on context switching depending on how lazy
>
> s390 does not invalidate TLBs per-CPU explicitly - we have special
> instructions for that. Those in turn initiate signalling to other
> CPUs, completely transparent to OS.

Just to make sure I understand: this means that you broadcast flushes to all CPUs, not just a subset?

>
> Apart from mm_count, I am struggling to realize how the suggested
> scheme could change the the contention on s390 in connection with
> TLB. Could you clarify a bit here, please?

I’m just talking about mm_count. Maintaining mm_count is quite expensive on some workloads.
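Concretely, the expense is the pair of atomic ops on mm->mm_count around every switch into and out of a kernel or idle thread. A loose schematic of where they sit (modelled on context_switch()/finish_task_switch() in kernel/sched/core.c, not a verbatim copy):

#include <linux/sched.h>
#include <linux/sched/mm.h>

static void lazy_refcount_cost(struct task_struct *prev,
                               struct task_struct *next)
{
        if (!next->mm) {                /* switching to a kernel/idle thread */
                next->active_mm = prev->active_mm;
                if (prev->mm)           /* coming from a user thread */
                        mmgrab(prev->active_mm);        /* atomic_inc(&mm->mm_count) */
        } else if (!prev->mm) {         /* lazy kernel thread -> user thread */
                /* atomic_dec_and_test(&mm->mm_count); with many CPUs doing
                 * user->idle->user switches on the same mm, this cacheline
                 * bounces constantly. */
                mmdrop(prev->active_mm);
        }
}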

>
>> TLB works. I suppose we could try it for all architectures without
>> any further optimizations. Or we could try one of the perhaps
>> excessively clever improvements I linked above. arm64, s390x people,
>> what do you think?
>
> I do not immediately see anything in the series that would harm
> performance on s390.
>
> We however use mm_cpumask to distinguish between local and global TLB
> flushes. With this series it looks like mm_cpumask is *required* to
> be consistent with lazy users. And that is something quite diffucult
> for us to adhere (at least in the foreseeable future).

You don’t actually need to maintain mm_cpumask — we could scan all CPUs instead.

>
> But actually keeping track of lazy users in a cpumask is something
> the generic code would rather do AFAICT.

The problem is that arches don’t agree on what the contents of mm_cpumask should be. Tracking a mask of exactly what the arch wants in generic code is a nontrivial operation.
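A minimal sketch of the scan-all-CPUs alternative mentioned above, assuming a per-CPU shadow of the lazily-borrowed mm that remote CPUs can read without a lock (lazy_active_mm and drop_lazy_on_cpu below are hypothetical; nothing like them exists today):

#include <linux/cpumask.h>
#include <linux/mm_types.h>
#include <linux/percpu.h>
#include <linux/smp.h>

static DEFINE_PER_CPU(struct mm_struct *, lazy_active_mm);

static void drop_lazy_on_cpu(void *arg)
{
        /* Same job as the shootdown IPI handler in the patch: if this
         * CPU is lazily using the dying mm, switch it to init_mm. */
}

static void shoot_lazies_by_scanning(struct mm_struct *mm)
{
        int cpu;

        for_each_possible_cpu(cpu)
                if (READ_ONCE(per_cpu(lazy_active_mm, cpu)) == mm)
                        smp_call_function_single(cpu, drop_lazy_on_cpu,
                                                 mm, 1);
}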

2020-12-03 18:38:59

by Alexander Gordeev

[permalink] [raw]
Subject: Re: [PATCH 6/8] lazy tlb: shoot lazies, a non-refcounting lazy tlb option

On Thu, Dec 03, 2020 at 09:14:22AM -0800, Andy Lutomirski wrote:
>
>
> > On Dec 3, 2020, at 9:09 AM, Alexander Gordeev <[email protected]> wrote:
> >
> > On Mon, Nov 30, 2020 at 10:31:51AM -0800, Andy Lutomirski wrote:
> >> other arch folk: there's some background here:
>
> >
> >>
> >> power: Ridiculously complicated, seems to vary by system and kernel config.
> >>
> >> So, Nick, your unconditional IPI scheme is apparently a big
> >> improvement for power, and it should be an improvement and have low
> >> cost for x86. On arm64 and s390x it will add more IPIs on process
> >> exit but reduce contention on context switching depending on how lazy
> >
> > s390 does not invalidate TLBs per-CPU explicitly - we have special
> > instructions for that. Those in turn initiate signalling to other
> > CPUs, completely transparent to OS.
>
> Just to make sure I understand: this means that you broadcast flushes to all CPUs, not just a subset?

Correct.
If the mm has one CPU attached we flush the TLB only for that CPU.
If the mm has more than one CPU attached we flush all CPUs' TLBs.

In fact, the details are a bit more complicated, since the hardware
is able to flush subsets of TLB entries depending on the provided
parameters (e.g. the page tables used to create those entries).
But we cannot select a CPU subset.
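In other words (a purely illustrative sketch, not the real arch/s390 code; both flush helpers are placeholders for the hardware instructions):

#include <linux/cpumask.h>
#include <linux/mm_types.h>
#include <linux/smp.h>

static void flush_tlb_local_placeholder(void) { }
static void flush_tlb_broadcast_placeholder(void) { }

static void s390_style_flush_mm(struct mm_struct *mm)
{
        /* Attached to this CPU only: a local flush is enough. */
        if (cpumask_equal(mm_cpumask(mm), cpumask_of(smp_processor_id())))
                flush_tlb_local_placeholder();
        else
                /* More than one CPU attached: the hardware broadcasts
                 * the flush to all CPUs; no subset can be selected. */
                flush_tlb_broadcast_placeholder();
}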

> > Apart from mm_count, I am struggling to realize how the suggested
> > scheme could change the the contention on s390 in connection with
> > TLB. Could you clarify a bit here, please?
>
> I’m just talking about mm_count. Maintaining mm_count is quite expensive on some workloads.
>
> >
> >> TLB works. I suppose we could try it for all architectures without
> >> any further optimizations. Or we could try one of the perhaps
> >> excessively clever improvements I linked above. arm64, s390x people,
> >> what do you think?
> >
> > I do not immediately see anything in the series that would harm
> > performance on s390.
> >
> > We however use mm_cpumask to distinguish between local and global TLB
> > flushes. With this series it looks like mm_cpumask is *required* to
> > be consistent with lazy users. And that is something quite diffucult
> > for us to adhere (at least in the foreseeable future).
>
> You don’t actually need to maintain mm_cpumask — we could scan all CPUs instead.
>
> >
> > But actually keeping track of lazy users in a cpumask is something
> > the generic code would rather do AFAICT.
>
> The problem is that arches don’t agree on what the contents of mm_cpumask should be. Tracking a mask of exactly what the arch wants in generic code is a nontrivial operation.

It could be yet another cpumask or the CPU scan you mentioned.
Just wanted to make sure there is no new requirement for an arch
to maintain mm_cpumask ;)

Thanks, Andy!