2020-09-23 23:40:18

by Prasad Sodagudi

Subject: [PATCH 0/2] measure latency of cpu hotplug path

These changes are all related to the cpu hotplug path, and I would like
to seek upstream review. These patches have been in the Qualcomm
downstream kernel for quite a long time. The first patch adds cpuhp
trace events and the second patch sets RT priority on the hotplugging
task.

1) cpu/hotplug: Add cpuhp_latency trace event
2) cpu-hotplug: Always use real time scheduling when hotplugging a CPU

Example logs:
cpu online -
cpuhp/4-200 [004] .... 223.891886: cpuhp_enter: cpu: 0004 target: 213 step: 212 (sched_cpu_activate)
cpuhp/4-200 [004] .... 223.891894: cpuhp_exit: cpu: 0004 state: 212 step: 212 ret: 0
sh-176 [000] .... 223.891912: cpuhp_exit: cpu: 0004 state: 213 step: 86 ret: 0
sh-176 [000] .... 223.891915: cpuhp_latency: cpu:4 state:online latency:3874 USEC ret: 0

cpu offline -
sh-176 [000] .... 265.193490: cpuhp_exit: cpu: 0004 state: 2 step: 2 ret: 0
sh-176 [000] .... 265.193494: cpuhp_latency: cpu:4 state:offline latency:57431 USEC ret: 0
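
Patch 1/2 itself is not reproduced in this thread. As a rough sketch
only, a trace event matching the log format above could be defined as
follows; the field names and types are assumptions inferred from the
example output, not the posted patch:

/* include/trace/events/cpuhp.h (hypothetical sketch) */
TRACE_EVENT(cpuhp_latency,

        TP_PROTO(unsigned int cpu, unsigned int state, u64 time, int ret),

        TP_ARGS(cpu, state, time, ret),

        TP_STRUCT__entry(
                __field(unsigned int, cpu)
                __field(unsigned int, state)
                __field(u64, time)
                __field(int, ret)
        ),

        TP_fast_assign(
                __entry->cpu = cpu;
                __entry->state = state;
                __entry->time = time;
                __entry->ret = ret;
        ),

        TP_printk("cpu:%u state:%s latency:%llu USEC ret: %d",
                  __entry->cpu, __entry->state ? "online" : "offline",
                  __entry->time, __entry->ret)
);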


Prasad Sodagudi (1):
cpu/hotplug: Add cpuhp_latency trace event

Syed Rameez Mustafa (1):
cpu-hotplug: Always use real time scheduling when hotplugging a CPU

include/trace/events/cpuhp.h | 29 +++++++++++++++++++++++++
kernel/cpu.c | 50 ++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 79 insertions(+)

--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project


2020-09-23 23:40:38

by Prasad Sodagudi

Subject: [PATCH 2/2] cpu-hotplug: Always use real time scheduling when hotplugging a CPU

From: Syed Rameez Mustafa <[email protected]>

CPU hotplug operations take place in preemptible context. This leaves
the hotplugging thread at the mercy of overall system load and CPU
availability. If the hotplugging thread does not get an opportunity
to execute after it has already begun a hotplug operation, CPUs can
end up being stuck in a quasi online state. In the worst case a CPU
can be stuck in a state where the migration thread is parked while
another task is executing and changing affinity in a loop. This
combination can result in unbounded execution time for the running
task until the hotplugging thread gets the chance to run to complete
the hotplug operation.

Fix this problem by ensuring that hotplug can only occur from threads
belonging to the RT sched class. This guarantees the hotplugging
thread gets CPU time regardless of system load or the number of
available CPUs. If a SCHED_NORMAL task attempts to hotplug a CPU, we
temporarily elevate its scheduling policy to RT. Furthermore, we
disallow hotplug operations from beginning if the calling task belongs
to the idle or deadline classes or uses the SCHED_BATCH policy.

Signed-off-by: Syed Rameez Mustafa <[email protected]>
Signed-off-by: Prasad Sodagudi <[email protected]>
---
kernel/cpu.c | 41 +++++++++++++++++++++++++++++++++++++++++
1 file changed, 41 insertions(+)

diff --git a/kernel/cpu.c b/kernel/cpu.c
index 68b3740..aea4ce2 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -32,6 +32,7 @@
 #include <linux/relay.h>
 #include <linux/slab.h>
 #include <linux/percpu-rwsem.h>
+#include <uapi/linux/sched/types.h>

 #include <trace/events/power.h>
 #define CREATE_TRACE_POINTS
@@ -1191,6 +1192,33 @@ void cpuhp_online_idle(enum cpuhp_state state)
         complete_ap_thread(st, true);
 }

+static int switch_to_rt_policy(void)
+{
+        struct sched_param param = { .sched_priority = MAX_RT_PRIO - 1 };
+        unsigned int policy = current->policy;
+        int err;
+
+        /* Nobody should be attempting hotplug from these policy contexts. */
+        if (policy == SCHED_BATCH || policy == SCHED_IDLE ||
+            policy == SCHED_DEADLINE)
+                return -EPERM;
+
+        if (policy == SCHED_FIFO || policy == SCHED_RR)
+                return 1;
+
+        /* Only SCHED_NORMAL left. */
+        err = sched_setscheduler_nocheck(current, SCHED_FIFO, &param);
+        return err;
+
+}
+
+static int switch_to_fair_policy(void)
+{
+        struct sched_param param = { .sched_priority = 0 };
+
+        return sched_setscheduler_nocheck(current, SCHED_NORMAL, &param);
+}
+
 /* Requires cpu_add_remove_lock to be held */
 static int _cpu_up(unsigned int cpu, int tasks_frozen, enum cpuhp_state target)
 {
@@ -1258,6 +1286,7 @@ static int _cpu_up(unsigned int cpu, int tasks_frozen, enum cpuhp_state target)
 static int cpu_up(unsigned int cpu, enum cpuhp_state target)
 {
         int err = 0;
+        int switch_err = 0;

         if (!cpu_possible(cpu)) {
                 pr_err("can't online cpu %d because it is not configured as may-hotadd at boot time\n",
@@ -1268,6 +1297,10 @@ static int cpu_up(unsigned int cpu, enum cpuhp_state target)
                 return -EINVAL;
         }

+        switch_err = switch_to_rt_policy();
+        if (switch_err < 0)
+                return switch_err;
+
         err = try_online_node(cpu_to_node(cpu));
         if (err)
                 return err;
@@ -1286,6 +1319,14 @@ static int cpu_up(unsigned int cpu, enum cpuhp_state target)
         err = _cpu_up(cpu, 0, target);
 out:
         cpu_maps_update_done();
+
+        if (!switch_err) {
+                switch_err = switch_to_fair_policy();
+                if (switch_err)
+                        pr_err("Hotplug policy switch err=%d Task %s pid=%d\n",
+                               switch_err, current->comm, current->pid);
+        }
+
         return err;
 }
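
Note that the hunk above only wraps the online path; the offline side
is not shown in this posting. If cpu_down() needed the same protection,
a symmetric wrapping might look like the sketch below. This is an
assumption, not part of the posted patch; cpu_maps_update_begin(),
cpu_maps_update_done() and cpu_down_maps_locked() are the existing
helpers in kernel/cpu.c of this era:

/* Hypothetical sketch of the offline-path equivalent. */
static int cpu_down(unsigned int cpu, enum cpuhp_state target)
{
        int err, switch_err;

        /* Elevate a SCHED_NORMAL caller to RT; refuse idle/batch/deadline. */
        switch_err = switch_to_rt_policy();
        if (switch_err < 0)
                return switch_err;

        cpu_maps_update_begin();
        err = cpu_down_maps_locked(cpu, target);
        cpu_maps_update_done();

        /* Restore the caller's policy only if it was actually changed. */
        if (!switch_err)
                switch_to_fair_policy();

        return err;
}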

--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project

2020-09-24 08:38:02

by Peter Zijlstra

Subject: Re: [PATCH 0/2] measure latency of cpu hotplug path

On Wed, Sep 23, 2020 at 04:37:44PM -0700, Prasad Sodagudi wrote:
> These changes are all related to the cpu hotplug path, and I would like
> to seek upstream review. These patches have been in the Qualcomm
> downstream kernel for quite a long time. The first patch adds cpuhp
> trace events and the second patch sets RT priority on the hotplugging
> task.
>
> 1) cpu/hotplug: Add cpuhp_latency trace event
> 2) cpu-hotplug: Always use real time scheduling when hotplugging a CPU

Why? Hotplug is a known super slow path. If you care about hotplug
latency you're doing it wrong.

2020-09-24 15:00:06

by Steven Rostedt

Subject: Re: [PATCH 0/2] measure latency of cpu hotplug path

On Thu, 24 Sep 2020 10:34:14 +0200
[email protected] wrote:

> On Wed, Sep 23, 2020 at 04:37:44PM -0700, Prasad Sodagudi wrote:
> > These changes are all related to the cpu hotplug path, and I would like
> > to seek upstream review. These patches have been in the Qualcomm
> > downstream kernel for quite a long time. The first patch adds cpuhp
> > trace events and the second patch sets RT priority on the hotplugging
> > task.
> >
> > 1) cpu/hotplug: Add cpuhp_latency trace event
> > 2) cpu-hotplug: Always use real time scheduling when hotplugging a CPU
>
> Why? Hotplug is a known super slow path. If you care about hotplug
> latency you're doing it wrong.

I'd like to know the answer to Peter's question too. Why?

-- Steve

2020-09-28 02:46:29

by Prasad Sodagudi

Subject: Re: [PATCH 0/2] measure latency of cpu hotplug path

On 2020-09-24 07:58, Steven Rostedt wrote:
> On Thu, 24 Sep 2020 10:34:14 +0200
> [email protected] wrote:
>
>> On Wed, Sep 23, 2020 at 04:37:44PM -0700, Prasad Sodagudi wrote:
>> > These changes are all related to the cpu hotplug path, and I would like
>> > to seek upstream review. These patches have been in the Qualcomm
>> > downstream kernel for quite a long time. The first patch adds cpuhp
>> > trace events and the second patch sets RT priority on the hotplugging
>> > task.
>> >
>> > 1) cpu/hotplug: Add cpuhp_latency trace event
>> > 2) cpu-hotplug: Always use real time scheduling when hotplugging a CPU
>>
>> Why? Hotplug is a known super slow path. If you care about hotplug
>> latency you're doing it wrong.
Hi Peter,

[PATCH 1/2] cpu/hotplug: Add cpuhp_latency trace event -
1) Tracing of the cpuhp operation is important to find whether upstream
changes or out-of-tree modules (or firmware changes) caused a latency
regression or not.
2) Secondary cpus are hotplugged out during device suspend and
hotplugged in during resume.
3) The impact of firmware changes (psci call handling in firmware)
needs to be tested, right?
4) cpu hotplug framework (CPUHP_AP_ONLINE_DYN) dynamic callbacks may
impact hotplug latency; see the sketch below.
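
For point 4, a minimal sketch of the kind of dynamic callback meant
here. The names are illustrative, but cpuhp_setup_state() and
CPUHP_AP_ONLINE_DYN are the real interface from
include/linux/cpuhotplug.h; every callback registered this way runs
inside the hotplug path and therefore adds to online/offline latency:

#include <linux/cpuhotplug.h>
#include <linux/init.h>
#include <linux/module.h>

/* Per-CPU setup work; executes while @cpu is being brought up. */
static int example_cpu_online(unsigned int cpu)
{
        return 0;
}

/* Per-CPU teardown work; executes while @cpu is being taken down. */
static int example_cpu_offline(unsigned int cpu)
{
        return 0;
}

static int __init example_init(void)
{
        int ret;

        /* CPUHP_AP_ONLINE_DYN allocates and returns a dynamic state number. */
        ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "example/cpuhp:online",
                                example_cpu_online, example_cpu_offline);
        return ret < 0 ? ret : 0;
}
module_init(example_init);

MODULE_LICENSE("GPL");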


[PATCH 2/2] cpu-hotplug: Always use real time scheduling when
hotplugging a CPU -

While stress testing CPU hotplug under full system load, the following
problem was observed.
CPU hotplug operations take place in preemptible context. This leaves
the hotplugging thread at the mercy of overall system load and CPU
availability. If the hotplugging thread does not get an opportunity to
execute after it has already begun a hotplug operation, CPUs can
end up being stuck in a quasi online state. In the worst case a CPU can
be stuck in a state where the migration thread is parked while
another task is executing and changing affinity in a loop. This
combination can result in unbounded execution time for the running
task until the hotplugging thread gets the chance to run to complete
the hotplug operation.

-Thanks, Prasad

>
> I'd like to know the answer to Peter's question too. Why?
>
> -- Steve

2020-09-28 07:44:06

by Peter Zijlstra

Subject: Re: [PATCH 0/2] measure latency of cpu hotplug path

On Sun, Sep 27, 2020 at 07:41:45PM -0700, [email protected] wrote:
> On 2020-09-24 07:58, Steven Rostedt wrote:
> > On Thu, 24 Sep 2020 10:34:14 +0200
> > [email protected] wrote:
> >
> > > On Wed, Sep 23, 2020 at 04:37:44PM -0700, Prasad Sodagudi wrote:
> > > > These changes are all related to the cpu hotplug path, and I would like
> > > > to seek upstream review. These patches have been in the Qualcomm
> > > > downstream kernel for quite a long time. The first patch adds cpuhp
> > > > trace events and the second patch sets RT priority on the hotplugging
> > > > task.
> > > >
> > > > 1) cpu/hotplug: Add cpuhp_latency trace event
> > > > 2) cpu-hotplug: Always use real time scheduling when hotplugging a CPU
> > >
> > > Why? Hotplug is a known super slow path. If you care about hotplug
> > > latency you're doing it wrong.
> Hi Peter,
>
> [PATCH 1/2] cpu/hotplug: Add cpuhp_latency trace event -
> 1) Tracing of the cpuhp operation is important to find whether upstream
> changes or out-of-tree modules (or firmware changes) caused a latency
> regression or not.

This is a contradiction in terms; it is impossible to have a latency
regression if you don't care about latency in this super slow path to
begin with.

> 2) Secondary cpus are hotplugged out during device suspend and
> hotplugged in during resume.

Indeed they are.

> 3) The impact of firmware changes (psci call handling in firmware)
> needs to be tested, right?

Firmware is firmware, it's broken by design and we can't fix it if it's
broken. The only sane solution is not having firmware :-)

> 4) cpu hotplug framework (CPUHP_AP_ONLINE_DYN) dynamic callbacks may
> impact hotplug latency.

Again, nobody cares.

> [PATCH 2/2] cpu-hotplug: Always use real time scheduling when
> hotplugging a CPU -
>
> While stress testing CPU hotplug under full system load, the following
> problem was observed.
> CPU hotplug operations take place in preemptible context. This leaves
> the hotplugging thread at the mercy of overall system load and CPU
> availability. If the hotplugging thread does not get an opportunity to
> execute after it has already begun a hotplug operation, CPUs can
> end up being stuck in a quasi online state. In the worst case a CPU can
> be stuck in a state where the migration thread is parked while
> another task is executing and changing affinity in a loop. This
> combination can result in unbounded execution time for the running
> task until the hotplugging thread gets the chance to run to complete
> the hotplug operation.

How is that not an administration problem?

Also, you shouldn't be able to change your affinity _to_ a CPU that's
going down. One of the very first steps in hotplug ensures that.