2020-12-03 17:19:11

by Alexey Klimov

Subject: [RFC][PATCH] cpu/hotplug: wait for cpuset_hotplug_work to finish on cpu online

When a CPU is offlined and onlined via device_offline() and device_online(),
userspace gets a uevent notification. If, after receiving the uevent,
userspace calls sched_setaffinity() to move some task to the recently
onlined CPU, the call fails with -EINVAL. Userspace has to wait around
5..30 ms after receiving the uevent before sched_setaffinity() succeeds
for the recently onlined CPU.

If the in_mask argument to sched_setaffinity() contains only the recently
onlined CPU, it quickly fails with the following flow:

sched_setaffinity()
  cpuset_cpus_allowed()
    guarantee_online_cpus()         <-- cs->effective_cpus mask does not
                                        contain recently onlined cpu
    cpumask_and()                   <-- final new_mask is empty
  __set_cpus_allowed_ptr()
    cpumask_any_and_distribute()    <-- returns dest_cpu equal to nr_cpu_ids
    returns -EINVAL

Cpusets are updated via a workqueue scheduled from cpuset_update_active_cpus(),
which in turn is called from the CPU hotplug callback sched_cpu_activate(),
hence the delay observed by sched_setaffinity().
The out-of-order uevent can be avoided by ensuring that cpuset_hotplug_work
has run to completion, using cpuset_wait_for_hotplug(), after onlining the
CPU in cpu_up(). Unfortunately, the execution time of
echo 1 > /sys/devices/system/cpu/cpuX/online roughly doubled with this
change (on my test machine).

Co-analyzed-by: Joshua Baker <[email protected]>
Signed-off-by: Alexey Klimov <[email protected]>
---

The commit "cpuset: Make cpuset hotplug synchronous" would also get rid of the
early uevent, but it was reverted.

The nature of this bug is also described here (with different consequences):
https://lore.kernel.org/lkml/[email protected]/

Reproducer: https://gitlab.com/0xeafffffe/xlam
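
For reference, the failing pattern boils down to roughly the following
userspace sketch (illustrative only, not the linked reproducer; the CPU
number is arbitrary and error handling is minimal):

#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Write a value to a sysfs file, e.g. cpuN/online. */
static void sysfs_write(const char *path, const char *val)
{
        int fd = open(path, O_WRONLY);

        if (fd < 0 || write(fd, val, strlen(val)) < 0)
                perror(path);
        if (fd >= 0)
                close(fd);
}

int main(void)
{
        const char *online = "/sys/devices/system/cpu/cpu1/online";
        cpu_set_t set;

        /* Bounce CPU1 offline and back online (requires root). */
        sysfs_write(online, "0");
        sysfs_write(online, "1");

        /* Immediately try to pin ourselves to the freshly onlined CPU. */
        CPU_ZERO(&set);
        CPU_SET(1, &set);
        if (sched_setaffinity(0, sizeof(set), &set))
                /* Intermittently fails with EINVAL on affected kernels. */
                perror("sched_setaffinity");

        return 0;
}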

It could be that I missed the correct place for the cpuset synchronisation and it
should be done in cpu_device_up() instead.
I am also unsure whether we need cpuset_wait_for_hotplug() in cpuhp_online_cpu_device(),
since an online uevent is sent there too.
With this change the reproducer currently runs without issues.
The idea is to avoid the situation where userspace receives a uevent about an
onlined CPU that is not yet ready to take tasks.
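
For context, the asynchrony being flushed boils down to roughly this pair of
helpers in kernel/cgroup/cpuset.c (paraphrased, comments mine):

/* Paraphrased from kernel/cgroup/cpuset.c */
void cpuset_update_active_cpus(void)
{
        /*
         * Called from the hotplug path; the actual cpuset update is
         * bounced to a workqueue to avoid lock-order problems, which is
         * why cs->effective_cpus can lag behind the online uevent.
         */
        schedule_work(&cpuset_hotplug_work);
}

void cpuset_wait_for_hotplug(void)
{
        /* Flush the pending update so cpusets see the new CPU. */
        flush_work(&cpuset_hotplug_work);
}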


kernel/cpu.c | 3 +++
1 file changed, 3 insertions(+)

diff --git a/kernel/cpu.c b/kernel/cpu.c
index 6ff2578ecf17..f39a27a7f24b 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -15,6 +15,7 @@
#include <linux/sched/smt.h>
#include <linux/unistd.h>
#include <linux/cpu.h>
+#include <linux/cpuset.h>
#include <linux/oom.h>
#include <linux/rcupdate.h>
#include <linux/export.h>
@@ -1275,6 +1276,8 @@ static int cpu_up(unsigned int cpu, enum cpuhp_state target)
}

err = _cpu_up(cpu, 0, target);
+ if (!err)
+ cpuset_wait_for_hotplug();
out:
cpu_maps_update_done();
return err;
--
2.26.2


2020-12-07 08:41:13

by Peter Zijlstra

Subject: Re: [RFC][PATCH] cpu/hotplug: wait for cpuset_hotplug_work to finish on cpu online

On Thu, Dec 03, 2020 at 05:14:31PM +0000, Alexey Klimov wrote:
> When a CPU is offlined and onlined via device_offline() and device_online(),
> userspace gets a uevent notification. If, after receiving the uevent,
> userspace calls sched_setaffinity() to move some task to the recently
> onlined CPU, the call fails with -EINVAL. Userspace has to wait around
> 5..30 ms after receiving the uevent before sched_setaffinity() succeeds
> for the recently onlined CPU.

Right.

> Unfortunately, the execution time of
> echo 1 > /sys/devices/system/cpu/cpuX/online roughly doubled with this
> change (on my test machine).

Nobody cares, it's hotplug, it's supposed to be slow :-) That is,
we fundamentally shift the work _to_ the hotplug path, so as to keep
everybody else fast.

> The nature of this bug is also described here (with different consequences):
> https://lore.kernel.org/lkml/[email protected]/

Yeah, pesky deadlocks.. someone was going to try again.


> kernel/cpu.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/kernel/cpu.c b/kernel/cpu.c
> index 6ff2578ecf17..f39a27a7f24b 100644
> --- a/kernel/cpu.c
> +++ b/kernel/cpu.c
> @@ -15,6 +15,7 @@
> #include <linux/sched/smt.h>
> #include <linux/unistd.h>
> #include <linux/cpu.h>
> +#include <linux/cpuset.h>
> #include <linux/oom.h>
> #include <linux/rcupdate.h>
> #include <linux/export.h>
> @@ -1275,6 +1276,8 @@ static int cpu_up(unsigned int cpu, enum cpuhp_state target)
> }
>
> err = _cpu_up(cpu, 0, target);
> + if (!err)
> + cpuset_wait_for_hotplug();
> out:
> cpu_maps_update_done();
> return err;

My only concern is whether doing that flush under cpu_add_remove_lock
is wise.
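
For reference, cpu_maps_update_begin()/done() are just a mutex_lock()/
mutex_unlock() pair on cpu_add_remove_lock (paraphrased from kernel/cpu.c),
so the flush in the hunk above runs with that mutex held:

/* Paraphrased from kernel/cpu.c */
void cpu_maps_update_begin(void)
{
        mutex_lock(&cpu_add_remove_lock);
}

void cpu_maps_update_done(void)
{
        mutex_unlock(&cpu_add_remove_lock);
}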

2020-12-09 02:15:32

by Daniel Jordan

Subject: Re: [RFC][PATCH] cpu/hotplug: wait for cpuset_hotplug_work to finish on cpu online

Alexey Klimov <[email protected]> writes:
> I am also unsure whether we need cpuset_wait_for_hotplug() in
> cpuhp_online_cpu_device(), since an online uevent is sent there too.

We do need it there if we go with this fix. Your reproducer hits the
same issue when it's changed to exercise smt/control instead of
cpuN/online.
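
A sketch of where the wait might go in that path (untested; assumes
cpuhp_online_cpu_device() is still what raises the KOBJ_ONLINE uevent
for smt/control):

static void cpuhp_online_cpu_device(unsigned int cpu)
{
        struct device *dev = get_cpu_device(cpu);

        dev->offline = false;
        /* new: let the cpuset update finish before telling userspace */
        cpuset_wait_for_hotplug();
        /* Tell user space about the state change */
        kobject_uevent(&dev->kobj, KOBJ_ONLINE);
}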

2020-12-09 02:46:39

by Daniel Jordan

Subject: Re: [RFC][PATCH] cpu/hotplug: wait for cpuset_hotplug_work to finish on cpu online

Peter Zijlstra <[email protected]> writes:
>> The nature of this bug is also described here (with different consequences):
>> https://lore.kernel.org/lkml/[email protected]/
>
> Yeah, pesky deadlocks.. someone was going to try again.

I dug up the synchronous patch

https://lore.kernel.org/lkml/[email protected]/

but surprisingly wasn't able to reproduce the lockdep splat from

https://lore.kernel.org/lkml/[email protected]/

even though I could hit it a few weeks ago. I'm going to try to mess
with it later, but don't let me hold this up.

2021-01-15 05:45:39

by Daniel Jordan

Subject: Re: [RFC][PATCH] cpu/hotplug: wait for cpuset_hotplug_work to finish on cpu online

Daniel Jordan <[email protected]> writes:
> Peter Zijlstra <[email protected]> writes:
>>> The nature of this bug is also described here (with different consequences):
>>> https://lore.kernel.org/lkml/[email protected]/
>>
>> Yeah, pesky deadlocks.. someone was going to try again.
>
> I dug up the synchronous patch
>
> https://lore.kernel.org/lkml/[email protected]/
>
> but surprisingly wasn't able to reproduce the lockdep splat from
>
> https://lore.kernel.org/lkml/[email protected]/
>
> even though I could hit it a few weeks ago.

oh okay, you need to mount a legacy cpuset hierarchy.

So as the above splat shows, making cpuset_hotplug_workfn() synchronous
means cpu_hotplug_lock (and "cpuhp_state-down") can be acquired before
cgroup_mutex.

But there are at least four cgroup paths that take the locks in the
opposite order. They're all the same: they take cgroup_mutex and then
cpu_hotplug_lock later on to modify one or more static keys.

cpu_hotplug_lock should probably be ahead of cgroup_mutex because the
latter is taken in a hotplug callback, and we should keep the static
branches in cgroup, so the only way out I can think of is moving
cpu_hotplug_lock to just before cgroup_mutex is taken and switching to
_cpuslocked flavors of the static key calls.
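
Concretely, the affected cgroup paths would end up looking something like
this (illustrative only; the function and key names are made up):

static DEFINE_STATIC_KEY_FALSE(example_cgroup_key);

static void example_cgroup_path(bool enable)
{
        /* Take cpu_hotplug_lock ahead of cgroup_mutex ... */
        cpus_read_lock();
        mutex_lock(&cgroup_mutex);

        /* ... and use the _cpuslocked flavors for the static keys. */
        if (enable)
                static_branch_enable_cpuslocked(&example_cgroup_key);
        else
                static_branch_disable_cpuslocked(&example_cgroup_key);

        mutex_unlock(&cgroup_mutex);
        cpus_read_unlock();
}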

lockdep quiets down with that change everywhere, but it puts another big
lock around a lot of cgroup paths. Seems less heavy-handed to go with
this RFC. What do you all think?

Absent further discussion, Alexey, do you plan to post another version?

2021-01-19 16:57:54

by Alexey Klimov

Subject: Re: [RFC][PATCH] cpu/hotplug: wait for cpuset_hotplug_work to finish on cpu online

On Fri, Jan 15, 2021 at 6:54 AM Daniel Jordan
<[email protected]> wrote:
>
> Daniel Jordan <[email protected]> writes:
> > Peter Zijlstra <[email protected]> writes:
> >>> The nature of this bug is also described here (with different consequences):
> >>> https://lore.kernel.org/lkml/[email protected]/
> >>
> >> Yeah, pesky deadlocks.. someone was going to try again.
> >
> > I dug up the synchronous patch
> >
> > https://lore.kernel.org/lkml/[email protected]/
> >
> > but surprisingly wasn't able to reproduce the lockdep splat from
> >
> > https://lore.kernel.org/lkml/[email protected]/
> >
> > even though I could hit it a few weeks ago.
>
> oh okay, you need to mount a legacy cpuset hierarchy.
>
> So as the above splat shows, making cpuset_hotplug_workfn() synchronous
> means cpu_hotplug_lock (and "cpuhp_state-down") can be acquired before
> cgroup_mutex.
>
> But there are at least four cgroup paths that take the locks in the
> opposite order. They're all the same: they take cgroup_mutex and then
> cpu_hotplug_lock later on to modify one or more static keys.
>
> cpu_hotplug_lock should probably be ahead of cgroup_mutex because the
> latter is taken in a hotplug callback, and we should keep the static
> branches in cgroup, so the only way out I can think of is moving
> cpu_hotplug_lock to just before cgroup_mutex is taken and switching to
> _cpuslocked flavors of the static key calls.
>
> lockdep quiets down with that change everywhere, but it puts another big
> lock around a lot of cgroup paths. Seems less heavy-handed to go with
> this RFC. What do you all think?

Daniel, thank you for taking a look. I don't mind reviewing+testing
the other approach you described.

> Absent further discussion, Alexey, do you plan to post another version?

I plan to update this patch and re-send in the next couple of days. It
looks like it might be a series of two patches. Sorry for the delay.

Best regards,
Alexey

2021-01-21 03:27:43

by Daniel Jordan

Subject: Re: [RFC][PATCH] cpu/hotplug: wait for cpuset_hotplug_work to finish on cpu online

Alexey Klimov <[email protected]> writes:
> Daniel, thank you for taking a look. I don't mind reviewing+testing
> the other approach you described.

Eh, I like yours better :)

>> Absent further discussion, Alexey, do you plan to post another version?
>
> I plan to update this patch and re-send in the next couple of days. It
> looks like it might be a series of two patches. Sorry for the delay.

Not at all, this is just something I'm messing with when I have the
time.