2024-03-06 15:25:18

by Mathieu Desnoyers

Subject: [RFC PATCH] sched: Add missing memory barrier in switch_mm_cid

Many architectures' switch_mm() implementations (e.g. arm64) do not include
the smp_mb() which the core scheduler code has depended upon since commit:

commit 223baf9d17f25 ("sched: Fix performance regression introduced by mm_cid")

If switch_mm() doesn't imply an smp_mb(), sched_mm_cid_remote_clear() can
unset an actively used cid when it fails to observe the active task after
it sets lazy_put.
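
To make this concrete, here is a simplified sketch of the problematic
interleaving (pseudo-code; names abridged from the mm_cid code paths):

  CPU0 (scheduler)                     CPU1 (sched_mm_cid_remote_clear)
  ----------------                     --------------------------------
  store rq->curr = next                cmpxchg pcpu_cid: set lazy-put flag
  [no smp_mb() on e.g. arm64]          smp_mb()
  switch_mm_cid():                     load rq->curr
    load pcpu_cid                        -> fails to observe next
      -> fails to observe lazy-put     clear the cid next is still using

This is a Dekker-style pattern: without a full barrier on CPU0 between
the store to rq->curr and the load of pcpu_cid, both CPUs can miss each
other's store, and CPU1 clears a cid that is actively in use.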

There *is* a memory barrier between storing to rq->curr and _return to
userspace_ (as required by membarrier), but the rseq mm_cid has stricter
requirements: the barrier needs to be issued between the store to rq->curr
and switch_mm_cid(), which happens earlier than:

- spin_unlock(),
- switch_to().

So it is fine when the architecture's switch_mm() happens to provide that
barrier already, but not when the architecture only provides the full
barrier in switch_to() or spin_unlock().
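
For reference, the relevant ordering in __schedule()/context_switch() is
roughly the following (simplified sketch, details elided):

  __schedule()
    rq->curr = next;           /* store to rq->curr */
    context_switch()
      switch_mm_irqs_off();    /* arch switch_mm(); barrier on some archs only */
      switch_mm_cid();         /* the barrier must be issued before this */
      switch_to();             /* a full barrier here is already too late */
    finish_task_switch()
      finish_lock_switch();    /* rq unlock: also too late */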

This is a bug in the rseq switch_mm_cid() implementation. All architectures
that don't have a memory barrier in switch_mm(), but instead provide the
full barrier in finish_lock_switch() or switch_to(), issue it too late for
the needs of switch_mm_cid().

Introduce a new smp_mb__after_switch_mm(), defined as smp_mb() in the
generic barrier.h header, and use it in switch_mm_cid() for scheduler
transitions where switch_mm() is expected to provide a memory barrier.

Architectures can override smp_mb__after_switch_mm() if their
switch_mm() implementation provides an implicit memory barrier.
Override it with a no-op on x86, which implicitly provides this memory
barrier by writing to CR3.

Reported-by: levi.yun <[email protected]>
Signed-off-by: Mathieu Desnoyers <[email protected]>
Fixes: 223baf9d17f2 ("sched: Fix performance regression introduced by mm_cid")
Cc: <[email protected]> # 6.4.x
Cc: Mathieu Desnoyers <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Aaron Lu <[email protected]>
---
arch/x86/include/asm/barrier.h | 3 +++
include/asm-generic/barrier.h | 8 ++++++++
kernel/sched/sched.h | 19 +++++++++++++------
3 files changed, 24 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
index 35389b2af88e..0d5e54201eb2 100644
--- a/arch/x86/include/asm/barrier.h
+++ b/arch/x86/include/asm/barrier.h
@@ -79,6 +79,9 @@ do { \
#define __smp_mb__before_atomic() do { } while (0)
#define __smp_mb__after_atomic() do { } while (0)

+/* Writing to CR3 provides a full memory barrier in switch_mm(). */
+#define smp_mb__after_switch_mm() do { } while (0)
+
#include <asm-generic/barrier.h>

/*
diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
index 961f4d88f9ef..5a6c94d7a598 100644
--- a/include/asm-generic/barrier.h
+++ b/include/asm-generic/barrier.h
@@ -296,5 +296,13 @@ do { \
#define io_stop_wc() do { } while (0)
#endif

+/*
+ * Architectures that guarantee an implicit smp_mb() in switch_mm()
+ * can override smp_mb__after_switch_mm.
+ */
+#ifndef smp_mb__after_switch_mm
+#define smp_mb__after_switch_mm() smp_mb()
+#endif
+
#endif /* !__ASSEMBLY__ */
#endif /* __ASM_GENERIC_BARRIER_H */
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 2e5a95486a42..638ebd355912 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -79,6 +79,8 @@
# include <asm/paravirt_api_clock.h>
#endif

+#include <asm/barrier.h>
+
#include "cpupri.h"
#include "cpudeadline.h"

@@ -3481,13 +3483,18 @@ static inline void switch_mm_cid(struct rq *rq,
* between rq->curr store and load of {prev,next}->mm->pcpu_cid[cpu].
* Provide it here.
*/
- if (!prev->mm) // from kernel
+ if (!prev->mm) { // from kernel
smp_mb();
- /*
- * user -> user transition guarantees a memory barrier through
- * switch_mm() when current->mm changes. If current->mm is
- * unchanged, no barrier is needed.
- */
+ } else { // from user
+ /*
+ * user -> user transition relies on the implicit
+ * memory barrier in switch_mm() when current->mm
+ * changes. If the architecture switch_mm() does not
+ * have an implicit memory barrier, it is emitted here.
+ * If current->mm is unchanged, no barrier is needed.
+ */
+ smp_mb__after_switch_mm();
+ }
}
if (prev->mm_cid_active) {
mm_cid_snapshot_time(rq, prev->mm);
--
2.39.2



2024-03-11 08:45:30

by Yeo Reum Yun

Subject: Re: [RFC PATCH] sched: Add missing memory barrier in switch_mm_cid

Hi Mathieu. Sorry for the late answer.

> diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
> index 35389b2af88e..0d5e54201eb2 100644
> --- a/arch/x86/include/asm/barrier.h
> +++ b/arch/x86/include/asm/barrier.h
> @@ -79,6 +79,9 @@ do { \
> #define __smp_mb__before_atomic() do { } while (0)
> #define __smp_mb__after_atomic() do { } while (0)

> +/* Writing to CR3 provides a full memory barrier in switch_mm(). */
> +#define smp_mb__after_switch_mm() do { } while (0)
> +
> #include <asm-generic/barrier.h>

IIUC, ppc already does smp_mb() in switch_mm().

Would it be better to add the same no-op macro for ppc?

Thanks!


2024-03-12 18:07:22

by Mathieu Desnoyers

Subject: Re: [RFC PATCH] sched: Add missing memory barrier in switch_mm_cid

On 2024-03-11 04:45, Yeo Reum Yun wrote:
> Hi Mathieu. Sorry for the late answer.
>
>> diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
>> index 35389b2af88e..0d5e54201eb2 100644
>> --- a/arch/x86/include/asm/barrier.h
>> +++ b/arch/x86/include/asm/barrier.h
>> @@ -79,6 +79,9 @@ do { \
>> #define __smp_mb__before_atomic() do { } while (0)
>> #define __smp_mb__after_atomic() do { } while (0)
>
>> +/* Writing to CR3 provides a full memory barrier in switch_mm(). */
>> +#define smp_mb__after_switch_mm() do { } while (0)
>> +
>> #include <asm-generic/barrier.h>
>
> IIUC, ppc already does smp_mb() in switch_mm().
>
> Would it be better to add the same no-op macro for ppc?

Does it?

Based on arch/powerpc/include/asm/membarrier.h, it appears that
powerpc does _not_ have a guaranteed barrier in switch_mm():

static inline void membarrier_arch_switch_mm(struct mm_struct *prev,
					     struct mm_struct *next,
					     struct task_struct *tsk)
{
	/*
	 * Only need the full barrier when switching between processes.
	 * Barrier when switching from kernel to userspace is not
	 * required here, given that it is implied by mmdrop(). Barrier
	 * when switching from userspace to kernel is not needed after
	 * store to rq->curr.
	 */
	if (IS_ENABLED(CONFIG_SMP) &&
	    likely(!(atomic_read(&next->membarrier_state) &
		     (MEMBARRIER_STATE_PRIVATE_EXPEDITED |
		      MEMBARRIER_STATE_GLOBAL_EXPEDITED)) || !prev))
		return;

	/*
	 * The membarrier system call requires a full memory barrier
	 * after storing to rq->curr, before going back to user-space.
	 */
	smp_mb();
}

AFAIU the barrier provided in powerpc switch_mm_irqs_off() is only in the
"new_on_cpu" case. Am I missing something?

Thanks,

Mathieu

>
> Thanks!
>
--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com


2024-03-13 08:38:26

by Yeo Reum Yun

Subject: Re: [RFC PATCH] sched: Add missing memory barrier in switch_mm_cid

>>> diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
>>> index 35389b2af88e..0d5e54201eb2 100644
>>> --- a/arch/x86/include/asm/barrier.h
>>> +++ b/arch/x86/include/asm/barrier.h
>>> @@ -79,6 +79,9 @@ do { \
>>> #define __smp_mb__before_atomic() do { } while (0)
>>> #define __smp_mb__after_atomic() do { } while (0)
>>
>>> +/* Writing to CR3 provides a full memory barrier in switch_mm(). */
>>> +#define smp_mb__after_switch_mm() do { } while (0)
>>> +
>>> #include <asm-generic/barrier.h>
>>
>> IIUC, ppc already does smp_mb() in switch_mm().
>>
>> Would it be better to add the same no-op macro for ppc?
>
> Does it ?
>
Sorry. I made a mistake.
