2021-02-16 21:39:06

by Nadav Amit

Subject: Local execution of ipi_sync_rq_state() on sync_runqueues_membarrier_state()

Hello Mathieu,

While trying to find some unrelated bug, something in
sync_runqueues_membarrier_state() caught my eye:


static int sync_runqueues_membarrier_state(struct mm_struct *mm)
{
	int membarrier_state = atomic_read(&mm->membarrier_state);

	if (atomic_read(&mm->mm_users) == 1 || num_online_cpus() == 1) {
		this_cpu_write(runqueues.membarrier_state, membarrier_state);

		/*
		 * For single mm user, we can simply issue a memory barrier
		 * after setting MEMBARRIER_STATE_GLOBAL_EXPEDITED in the
		 * mm and in the current runqueue to guarantee that no memory
		 * access following registration is reordered before
		 * registration.
		 */
		smp_mb();
		return 0;
	}

[ snip ]

	smp_call_function_many(tmpmask, ipi_sync_rq_state, mm, 1);


And ipi_sync_rq_state() does:

	this_cpu_write(runqueues.membarrier_state,
		       atomic_read(&mm->membarrier_state));


So my question: are you aware that smp_call_function_many() does not run
ipi_sync_rq_state() on the local CPU? Is that the intention of the code?

Thanks,
Nadav


2021-02-17 18:14:37

by Mathieu Desnoyers

Subject: Re: Local execution of ipi_sync_rq_state() on sync_runqueues_membarrier_state()

----- On Feb 16, 2021, at 4:35 PM, Nadav Amit [email protected] wrote:

> Hello Mathieu,
>
> While trying to find some unrelated bug, something in
> sync_runqueues_membarrier_state() caught my eye:
>
>
> static int sync_runqueues_membarrier_state(struct mm_struct *mm)
> {
> 	if (atomic_read(&mm->mm_users) == 1 || num_online_cpus() == 1) {
> 		this_cpu_write(runqueues.membarrier_state, membarrier_state);
>
> 		/*
> 		 * For single mm user, we can simply issue a memory barrier
> 		 * after setting MEMBARRIER_STATE_GLOBAL_EXPEDITED in the
> 		 * mm and in the current runqueue to guarantee that no memory
> 		 * access following registration is reordered before
> 		 * registration.
> 		 */
> 		smp_mb();
> 		return 0;
> 	}
>
> [ snip ]
>
> 	smp_call_function_many(tmpmask, ipi_sync_rq_state, mm, 1);
>
>
> And ipi_sync_rq_state() does:
>
> 	this_cpu_write(runqueues.membarrier_state,
> 		       atomic_read(&mm->membarrier_state));
>
>
> So my question: are you aware that smp_call_function_many() does not run
> ipi_sync_rq_state() on the local CPU?

Generally, yes, I am aware of it, but it appears that when I wrote that
code, I missed that important fact. See

commit 227a4aadc75b ("sched/membarrier: Fix p->mm->membarrier_state racy load")

> Is that the intention of the code?

Clearly not! If we look at sync_runqueues_membarrier_state(), there is even a
special case for mm_users == 1 || num_online_cpus() == 1 where it writes the
membarrier state into the current CPU's runqueue. I'll prepare a fix, thanks a
bunch for spotting this.

Mathieu

>
> Thanks,
> Nadav

--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com

Subject: [tip: sched/urgent] sched/membarrier: fix missing local execution of ipi_sync_rq_state()

The following commit has been merged into the sched/urgent branch of tip:

Commit-ID: fba111913e51a934eaad85734254eab801343836
Gitweb: https://git.kernel.org/tip/fba111913e51a934eaad85734254eab801343836
Author: Mathieu Desnoyers <[email protected]>
AuthorDate: Wed, 17 Feb 2021 11:56:51 -05:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Mon, 01 Mar 2021 11:02:15 +01:00

sched/membarrier: fix missing local execution of ipi_sync_rq_state()

The function sync_runqueues_membarrier_state() should copy the
membarrier state from the @mm received as parameter to each runqueue
currently running tasks using that mm.

However, the use of smp_call_function_many() skips the current runqueue,
which is unintended. Replace by a call to on_each_cpu_mask().

Fixes: 227a4aadc75b ("sched/membarrier: Fix p->mm->membarrier_state racy load")
Reported-by: Nadav Amit <[email protected]>
Signed-off-by: Mathieu Desnoyers <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: [email protected] # 5.4.x+
Link: https://lore.kernel.org/r/[email protected]
---
kernel/sched/membarrier.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/kernel/sched/membarrier.c b/kernel/sched/membarrier.c
index acdae62..b5add64 100644
--- a/kernel/sched/membarrier.c
+++ b/kernel/sched/membarrier.c
@@ -471,9 +471,7 @@ static int sync_runqueues_membarrier_state(struct mm_struct *mm)
}
rcu_read_unlock();

- preempt_disable();
- smp_call_function_many(tmpmask, ipi_sync_rq_state, mm, 1);
- preempt_enable();
+ on_each_cpu_mask(tmpmask, ipi_sync_rq_state, mm, true);

free_cpumask_var(tmpmask);
cpus_read_unlock();

Subject: [tip: sched/core] sched/membarrier: fix missing local execution of ipi_sync_rq_state()

The following commit has been merged into the sched/core branch of tip:

Commit-ID: ce29ddc47b91f97e7f69a0fb7cbb5845f52a9825
Gitweb: https://git.kernel.org/tip/ce29ddc47b91f97e7f69a0fb7cbb5845f52a9825
Author: Mathieu Desnoyers <[email protected]>
AuthorDate: Wed, 17 Feb 2021 11:56:51 -05:00
Committer: Ingo Molnar <[email protected]>
CommitterDate: Sat, 06 Mar 2021 12:40:21 +01:00

sched/membarrier: fix missing local execution of ipi_sync_rq_state()

The function sync_runqueues_membarrier_state() should copy the
membarrier state from the @mm received as parameter to each runqueue
currently running tasks using that mm.

However, the use of smp_call_function_many() skips the current runqueue,
which is unintended. Replace by a call to on_each_cpu_mask().

Fixes: 227a4aadc75b ("sched/membarrier: Fix p->mm->membarrier_state racy load")
Reported-by: Nadav Amit <[email protected]>
Signed-off-by: Mathieu Desnoyers <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Cc: [email protected] # 5.4.x+
Link: https://lore.kernel.org/r/[email protected]
---
kernel/sched/membarrier.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/kernel/sched/membarrier.c b/kernel/sched/membarrier.c
index acdae62..b5add64 100644
--- a/kernel/sched/membarrier.c
+++ b/kernel/sched/membarrier.c
@@ -471,9 +471,7 @@ static int sync_runqueues_membarrier_state(struct mm_struct *mm)
}
rcu_read_unlock();

- preempt_disable();
- smp_call_function_many(tmpmask, ipi_sync_rq_state, mm, 1);
- preempt_enable();
+ on_each_cpu_mask(tmpmask, ipi_sync_rq_state, mm, true);

free_cpumask_var(tmpmask);
cpus_read_unlock();