2019-07-04 09:22:49

by Abhishek Goel

Subject: [PATCH v3 0/3] Forced-wakeup for stop states on Powernv

Currently, the cpuidle governors determine which idle state an idling CPU
should enter based on heuristics that depend on that CPU's idle history.
Since no predictive heuristic is perfect, there are cases where the
governor predicts a shallow idle state, hoping that the CPU will be busy
soon. However, if no new workload is scheduled on that CPU in the near
future, the CPU will remain stuck in the shallow state.

Motivation
----------
On POWER, this is problematic when the predicted state in the
aforementioned scenario is a shallow stop state on a tickless system,
since the CPU can then remain stuck in a shallow state even for hours in
the absence of ticks or interrupts.

To address this, we forcefully wake up the CPU by setting the decrementer.
The decrementer is set to a value that corresponds to the residency of
the next available state, thus arming a timer that will forcefully wake
up the CPU. A few such iterations essentially train the governor to
select a deeper state for that CPU, since the observed idle duration now
corresponds to the next available cpuidle state's residency. The CPU will
thus eventually end up in the deepest possible state, and we won't get
stuck in a shallow state for a long duration.

Experiment
----------
For earlier versions, when this feature was meant only for shallow lite
states, I performed experiments for three scenarios to collect some data.

case 1 :
Without this patch and without the tick retained, i.e. in an upstream
kernel, it could take even more than a second to get out of stop0_lite.

case 2 : With the tick retained in an upstream kernel -

Generally, we have a sched tick at 4ms (CONFIG_HZ = 250). Ideally, I
expected it to take 8 sched ticks to get out of stop0_lite.
Experimentally, the observation was:

=========================================================
sample      min       max       99percentile
=========================================================
20          4ms       12ms      4ms
=========================================================

It would take at least one sched tick to get out of stop0_lite.

case 3 : With this patch (not stopping the tick, but explicitly queuing a
timer)

============================================================
sample      min       max       99percentile
============================================================
20          144us     192us     144us
============================================================


Description of current implementation
-------------------------------------

We calculate the timeout for the current idle state as the residency
value of the next available idle state. If the decrementer is set to a
value greater than this timeout, we update the decrementer with the
residency of the next available idle state, essentially training the
governor to select the next available deeper state until we reach the
deepest state. Hence, we won't get stuck unnecessarily in shallow states
for long durations.

--------------------------------
v1 of auto-promotion : https://lkml.org/lkml/2019/3/22/58
This patch was implemented only for shallow lite states in the generic
cpuidle driver.

v2 : Removed timeout_needed and rebased to current
upstream kernel

Then,
v1 of forced-wakeup : Moved the code to cpuidle powernv driver and started
as forced wakeup instead of auto-promotion

v2 : Extended the forced-wakeup logic to all states.
The decrementer is set instead of queuing up an hrtimer to implement the
logic.

v3 : 1) Cleanly handle setting the decrementer after exiting out of stop
states.
2) Added a disable_callback feature to compute the timeout whenever a
state is enabled or disabled, instead of computing it every time in the
fast idle path.
3) Use the disable callback to recompute the timeout whenever state usage
is changed for a state. Also, cleaned up the get_snooze_timeout
function.


Abhishek Goel (3):
cpuidle-powernv : forced wakeup for stop states
cpuidle : Add callback whenever a state usage is enabled/disabled
cpuidle-powernv : Recompute the idle-state timeouts when state usage
is enabled/disabled

arch/powerpc/include/asm/time.h | 2 ++
arch/powerpc/kernel/time.c | 40 ++++++++++++++++++++++++++
drivers/cpuidle/cpuidle-powernv.c | 47 ++++++++++++++++++++++---------
drivers/cpuidle/sysfs.c | 15 +++++++++-
include/linux/cpuidle.h | 5 ++++
5 files changed, 95 insertions(+), 14 deletions(-)

--
2.17.1


2019-07-04 09:23:23

by Abhishek Goel

Subject: [RFC v3 3/3] cpuidle-powernv : Recompute the idle-state timeouts when state usage is enabled/disabled

The disable callback can be used to compute the timeout for other states
whenever a state is enabled or disabled. We store the computed timeout
in the "timeout" field defined in the cpuidle state structure. Thus, we
compute the timeout only when some state is enabled or disabled, and not
every time in the fast idle path.
We also use the computed timeout for snooze, thus getting rid of
get_snooze_timeout in the snooze loop.

Signed-off-by: Abhishek Goel <[email protected]>
---
drivers/cpuidle/cpuidle-powernv.c | 35 +++++++++++--------------------
include/linux/cpuidle.h | 1 +
2 files changed, 13 insertions(+), 23 deletions(-)

diff --git a/drivers/cpuidle/cpuidle-powernv.c b/drivers/cpuidle/cpuidle-powernv.c
index f51478460..7350f404a 100644
--- a/drivers/cpuidle/cpuidle-powernv.c
+++ b/drivers/cpuidle/cpuidle-powernv.c
@@ -45,7 +45,6 @@ struct stop_psscr_table {
static struct stop_psscr_table stop_psscr_table[CPUIDLE_STATE_MAX] __read_mostly;

static u64 default_snooze_timeout __read_mostly;
-static bool snooze_timeout_en __read_mostly;

static u64 forced_wakeup_timeout(struct cpuidle_device *dev,
struct cpuidle_driver *drv,
@@ -67,26 +66,13 @@ static u64 forced_wakeup_timeout(struct cpuidle_device *dev,
return 0;
}

-static u64 get_snooze_timeout(struct cpuidle_device *dev,
- struct cpuidle_driver *drv,
- int index)
+static void pnv_disable_callback(struct cpuidle_device *dev,
+ struct cpuidle_driver *drv)
{
int i;

- if (unlikely(!snooze_timeout_en))
- return default_snooze_timeout;
-
- for (i = index + 1; i < drv->state_count; i++) {
- struct cpuidle_state *s = &drv->states[i];
- struct cpuidle_state_usage *su = &dev->states_usage[i];
-
- if (s->disabled || su->disable)
- continue;
-
- return s->target_residency * tb_ticks_per_usec;
- }
-
- return default_snooze_timeout;
+ for (i = 0; i < drv->state_count; i++)
+ drv->states[i].timeout = forced_wakeup_timeout(dev, drv, i);
}

static int snooze_loop(struct cpuidle_device *dev,
@@ -94,16 +80,20 @@ static int snooze_loop(struct cpuidle_device *dev,
int index)
{
u64 snooze_exit_time;
+ u64 snooze_timeout = drv->states[index].timeout;
+
+ if (!snooze_timeout)
+ snooze_timeout = default_snooze_timeout;

set_thread_flag(TIF_POLLING_NRFLAG);

local_irq_enable();

- snooze_exit_time = get_tb() + get_snooze_timeout(dev, drv, index);
+ snooze_exit_time = get_tb() + snooze_timeout;
ppc64_runlatch_off();
HMT_very_low();
while (!need_resched()) {
- if (likely(snooze_timeout_en) && get_tb() > snooze_exit_time) {
+ if (get_tb() > snooze_exit_time) {
/*
* Task has not woken up but we are exiting the polling
* loop anyway. Require a barrier after polling is
@@ -168,7 +158,7 @@ static int stop_loop(struct cpuidle_device *dev,
u64 timeout_tb;
int forced_wakeup = 0;

- timeout_tb = forced_wakeup_timeout(dev, drv, index);
+ timeout_tb = drv->states[index].timeout;
if (timeout_tb)
forced_wakeup = set_dec_before_idle(timeout_tb);

@@ -255,6 +245,7 @@ static int powernv_cpuidle_driver_init(void)
*/

drv->cpumask = (struct cpumask *)cpu_present_mask;
+ drv->disable_callback = pnv_disable_callback;

return 0;
}
@@ -414,8 +405,6 @@ static int powernv_idle_probe(void)
/* Device tree can indicate more idle states */
max_idle_state = powernv_add_idle_states();
default_snooze_timeout = TICK_USEC * tb_ticks_per_usec;
- if (max_idle_state > 1)
- snooze_timeout_en = true;
} else
return -ENODEV;

diff --git a/include/linux/cpuidle.h b/include/linux/cpuidle.h
index 8a0e54bd0..31662b657 100644
--- a/include/linux/cpuidle.h
+++ b/include/linux/cpuidle.h
@@ -50,6 +50,7 @@ struct cpuidle_state {
int power_usage; /* in mW */
unsigned int target_residency; /* in US */
bool disabled; /* disabled on all CPUs */
+ unsigned long long timeout; /* timeout for exiting out of a state */

int (*enter) (struct cpuidle_device *dev,
struct cpuidle_driver *drv,
--
2.17.1

2019-07-04 09:23:40

by Abhishek Goel

Subject: [RFC v3 2/3] cpuidle : Add callback whenever a state usage is enabled/disabled

To force wake up a CPU, we need to compute the timeout in the fast idle
path, and that timeout depends on which states are enabled. However,
there was no feedback to the driver when a state is enabled or disabled.
This patch adds a callback that is invoked whenever a state_usage records
a store to the disable attribute.

Signed-off-by: Abhishek Goel <[email protected]>
---
drivers/cpuidle/sysfs.c | 15 ++++++++++++++-
include/linux/cpuidle.h | 4 ++++
2 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/drivers/cpuidle/sysfs.c b/drivers/cpuidle/sysfs.c
index eb20adb5d..141671a53 100644
--- a/drivers/cpuidle/sysfs.c
+++ b/drivers/cpuidle/sysfs.c
@@ -415,8 +415,21 @@ static ssize_t cpuidle_state_store(struct kobject *kobj, struct attribute *attr,
struct cpuidle_state_usage *state_usage = kobj_to_state_usage(kobj);
struct cpuidle_state_attr *cattr = attr_to_stateattr(attr);

- if (cattr->store)
+ if (cattr->store) {
ret = cattr->store(state, state_usage, buf, size);
+ if (ret == size &&
+ !strncmp(cattr->attr.name, "disable",
+ strlen("disable"))) {
+ struct kobject *cpuidle_kobj = kobj->parent;
+ struct cpuidle_device *dev =
+ to_cpuidle_device(cpuidle_kobj);
+ struct cpuidle_driver *drv =
+ cpuidle_get_cpu_driver(dev);
+
+ if (drv->disable_callback)
+ drv->disable_callback(dev, drv);
+ }
+ }

return ret;
}
diff --git a/include/linux/cpuidle.h b/include/linux/cpuidle.h
index bb9a0db89..8a0e54bd0 100644
--- a/include/linux/cpuidle.h
+++ b/include/linux/cpuidle.h
@@ -119,6 +119,10 @@ struct cpuidle_driver {

/* the driver handles the cpus in cpumask */
struct cpumask *cpumask;
+
+ void (*disable_callback)(struct cpuidle_device *dev,
+ struct cpuidle_driver *drv);
+
};

#ifdef CONFIG_CPU_IDLE
--
2.17.1