2018-08-12 16:11:48

by Leo Yan

[permalink] [raw]
Subject: [PATCH v1 0/5] Improve stopping-tick decision making in 'menu' idle governor

We found that CPUs cannot stay in the deepest idle state as expected
when running synthetic workloads with the mainline kernel on an Arm
platform (96boards Hikey620 with eight Cortex-A53 CPUs).

The main issue is the criterion for deciding whether to stop the tick:
currently it checks whether the expected interval is less than
TICK_USEC, which ignores the fact that the delta to the next tick
varies because the CPU enters and exits idle states at random times;
furthermore, sticking to TICK_USEC as the boundary for the decision
leaves a hole where a shallow state can be selected with the tick
stopped, so the CPU stays in that shallow state for a long time.

This patch series explores more reasonable decision making for
stopping the tick; the most important fix is avoiding the
"powernightmares" issue once these criteria are applied. Patches
0001 ~ 0003 refactor the variables and structures for more readable
code and provide a function, menu_decide_stopping_tick(), which
encapsulates the decision logic. The last two patches carry the
actual improvements: patch 0004 'cpuidle: menu: Don't stay in shallow
state for a long time' introduces a new, stricter criterion for not
stopping the tick in shallow state cases; patch 0005 uses the dynamic
tick delta instead of the static value TICK_USEC to decide whether
the tick expires before or after the prediction, and from that
comparison concludes whether the tick needs to be stopped.

With a more accurate stopping-tick decision, one immediate benefit is
that the CPUs have more chance to stay in the deepest state; it also
avoids running the tick unnecessarily and thus avoids the shallower
state a tick event would otherwise introduce. The testing results in
the table below show the improvement from the better stopping-tick
decision making in this patch series. We run a workload generated by
rt-app (a single task with a 5ms period and a duty cycle of
1%/3%/5%/10%/20%/30%/40%) for a total running time of 60s, and gather
statistics of the idle state durations across all CPUs, in seconds
(s). For the dutycycle=1%/3%/5%/10%/20% cases, the shallow state
C0/C1 durations are reduced and that time is moved to the deepest
state, so the deepest state C2 duration improves by ~9s to ~21s. For
the dutycycle=30%/40% cases, the deepest state durations are at
parity with and without the patch series, but there is a minor
improvement in the C1 state duration at the expense of the C0 state
duration.

Some notations are used in the table:

state: C0: WFI; C1: CPU OFF; C2: Cluster OFF

All testing cases have single task with 5ms period:

            Without patches           With patches              Difference
            ----------------------    ----------------------    --------------------------
Duty cycle   C0     C1      C2         C0     C1      C2         C0      C1       C2
 1%         2.397  16.528  471.905    0.916   2.688  487.328    -1.481  -13.840  +15.422
 3%         3.957  20.541  464.434    1.510   2.398  485.914    -2.447  -18.143  +21.480
 5%         2.866   8.609  474.777    1.166   2.250  483.983    -1.699   -6.359   +9.205
10%         2.893  28.753  453.277    1.147  14.134  469.190    -1.745  -14.618  +15.913
20%         7.620  41.086  431.735    1.595  35.055  442.482    -6.024   -6.030  +10.747
30%         4.394  38.328  431.442    1.964  40.857  430.973    -2.430   +2.529   -0.468
40%         7.390  29.415  430.914    1.789  34.832  431.588    -5.600   +5.417   -0.673


P.S. For the testing, Rafael's patch 'cpuidle: menu: Handle stopped
tick more aggressively' [1] was applied to avoid selecting an
unexpected shallow state after the tick has been stopped.

[1] https://lkml.org/lkml/2018/8/10/259

Leo Yan (5):
cpuidle: menu: Clean up variables usage in menu_select()
cpuidle: menu: Record tick delta value in struct menu_device
cpuidle: menu: Provide menu_decide_stopping_tick()
cpuidle: menu: Don't stay in shallow state for a long time
cpuidle: menu: Change to compare prediction with tick delta

drivers/cpuidle/governors/menu.c | 104 ++++++++++++++++++++++++++++-----------
1 file changed, 76 insertions(+), 28 deletions(-)

--
2.7.4



2018-08-12 16:11:48

by Leo Yan

[permalink] [raw]
Subject: [PATCH v1 2/5] cpuidle: menu: Record tick delta value in struct menu_device

Since the tick delta is used in multiple places in menu_select(), it's
better to use a single variable to record this value; furthermore, for
readability we can refactor the code to split out a separate function
for making the stopping-tick decision, which also needs the tick delta
value as one metric for its consideration.

To achieve this, this patch adds a new field 'tick_delta_us' to struct
menu_device to record the tick delta value. It is also a preparation
for optimizing the stopping-tick decision in subsequent patches.

Signed-off-by: Leo Yan <[email protected]>
---
drivers/cpuidle/governors/menu.c | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/drivers/cpuidle/governors/menu.c b/drivers/cpuidle/governors/menu.c
index b972db1..83618ab 100644
--- a/drivers/cpuidle/governors/menu.c
+++ b/drivers/cpuidle/governors/menu.c
@@ -124,6 +124,7 @@ struct menu_device {
int tick_wakeup;

unsigned int next_timer_us;
+ unsigned int tick_delta_us;
unsigned int predicted_us;
unsigned int bucket;
unsigned int correction_factor[BUCKETS];
@@ -305,6 +306,7 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,

/* determine the expected residency time, round up */
data->next_timer_us = ktime_to_us(tick_nohz_get_sleep_length(&delta_next));
+ data->tick_delta_us = ktime_to_us(delta_next);

get_iowait_load(&nr_iowaiters, &cpu_load);
data->bucket = which_bucket(data->next_timer_us, nr_iowaiters);
@@ -317,7 +319,7 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
* timer event for the idle state selection.
*/
if (tick_nohz_tick_stopped()) {
- data->predicted_us = ktime_to_us(delta_next);
+ data->predicted_us = data->tick_delta_us;
goto select;
}

@@ -400,11 +402,11 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
*/
if (((drv->states[idx].flags & CPUIDLE_FLAG_POLLING) ||
data->predicted_us < TICK_USEC) && !tick_nohz_tick_stopped()) {
- unsigned int delta_next_us = ktime_to_us(delta_next);

*stop_tick = false;

- if (idx > 0 && drv->states[idx].target_residency > delta_next_us) {
+ if (idx > 0 &&
+ drv->states[idx].target_residency > data->tick_delta_us) {
/*
* The tick is not going to be stopped and the target
* residency of the state to be returned is not within
@@ -417,7 +419,8 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
continue;

idx = i;
- if (drv->states[i].target_residency <= delta_next_us)
+ if (drv->states[i].target_residency <=
+ data->tick_delta_us)
break;
}
}
--
2.7.4


2018-08-12 16:11:48

by Leo Yan

[permalink] [raw]
Subject: [PATCH v1 1/5] cpuidle: menu: Clean up variables usage in menu_select()

The usage of the two variables 'data->predicted_us' and
'expected_interval' in menu_select() is confusing, especially since
the two variables are assigned to each other: first,
'data->predicted_us' is assigned the minimum of 'data->predicted_us'
and 'expected_interval', so it represents the predicted period after
taking different factors into account, including the expected
interval; but later 'data->predicted_us' is assigned back to
'expected_interval', and from then on the function uses
'expected_interval' to select the idle state. As a result,
'expected_interval' has two different semantics in the top half and
the bottom half of the same function.

This patch cleans up the usage of these two variables: always use
'data->predicted_us' to represent the predicted idle duration, which
can be compared with the idle state target residency or the tick
boundary when choosing the idle state; use 'expected_interval' purely
to record the expected interval value, which is mainly used for
interval interrupt estimation.

Signed-off-by: Leo Yan <[email protected]>
---
drivers/cpuidle/governors/menu.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/cpuidle/governors/menu.c b/drivers/cpuidle/governors/menu.c
index 5eb7d6f..b972db1 100644
--- a/drivers/cpuidle/governors/menu.c
+++ b/drivers/cpuidle/governors/menu.c
@@ -363,7 +363,6 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
latency_req = interactivity_req;

select:
- expected_interval = data->predicted_us;
/*
* Find the idle state with the lowest power while satisfying
* our constraints.
@@ -386,7 +385,7 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
* expected idle duration so that the tick is retained
* as long as that target residency is low enough.
*/
- expected_interval = drv->states[idx].target_residency;
+ data->predicted_us = drv->states[idx].target_residency;
break;
}
idx = i;
@@ -400,7 +399,7 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
* expected idle duration is shorter than the tick period length.
*/
if (((drv->states[idx].flags & CPUIDLE_FLAG_POLLING) ||
- expected_interval < TICK_USEC) && !tick_nohz_tick_stopped()) {
+ data->predicted_us < TICK_USEC) && !tick_nohz_tick_stopped()) {
unsigned int delta_next_us = ktime_to_us(delta_next);

*stop_tick = false;
--
2.7.4


2018-08-12 16:11:48

by Leo Yan

[permalink] [raw]
Subject: [PATCH v1 3/5] cpuidle: menu: Provide menu_decide_stopping_tick()

This patch is only code refactoring, with no functional change. It
introduces a new function, menu_decide_stopping_tick(), which focuses
on making the stopping-tick decision. With the decision code moved
out, the code below it is simplified to handle only the idle state
calibration, and one indent level is saved in the loop.

Signed-off-by: Leo Yan <[email protected]>
---
drivers/cpuidle/governors/menu.c | 76 ++++++++++++++++++++++++++--------------
1 file changed, 50 insertions(+), 26 deletions(-)

diff --git a/drivers/cpuidle/governors/menu.c b/drivers/cpuidle/governors/menu.c
index 83618ab..4f02207 100644
--- a/drivers/cpuidle/governors/menu.c
+++ b/drivers/cpuidle/governors/menu.c
@@ -276,6 +276,37 @@ static unsigned int get_typical_interval(struct menu_device *data)
}

/**
+ * menu_decide_stopping_tick - decide whether the tick should be stopped
+ * @drv: cpuidle driver containing state data
+ * @data: menu_device structure pointer
+ * @idx: the candidate idle state index
+ */
+static bool menu_decide_stopping_tick(struct cpuidle_driver *drv,
+ struct menu_device *data, int idx)
+{
+ /*
+ * If the tick has already been stopped, keep it stopped and don't
+ * give any chance to set *stop_tick to false.
+ */
+ if (tick_nohz_tick_stopped())
+ return true;
+
+ /* Don't stop the tick if the selected state is a polling one */
+ if (drv->states[idx].flags & CPUIDLE_FLAG_POLLING)
+ return false;
+
+ /*
+ * Don't stop the tick if the prediction is shorter than the
+ * tick period length.
+ */
+ if (data->predicted_us < TICK_USEC)
+ return false;
+
+ /* Otherwise, let's stop the tick at this time. */
+ return true;
+}
+
+/**
* menu_select - selects the next idle state to enter
* @drv: cpuidle driver containing state data
* @dev: the CPU
@@ -396,33 +427,26 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
if (idx == -1)
idx = 0; /* No states enabled. Must use 0. */

- /*
- * Don't stop the tick if the selected state is a polling one or if the
- * expected idle duration is shorter than the tick period length.
- */
- if (((drv->states[idx].flags & CPUIDLE_FLAG_POLLING) ||
- data->predicted_us < TICK_USEC) && !tick_nohz_tick_stopped()) {
-
- *stop_tick = false;
+ *stop_tick = menu_decide_stopping_tick(drv, data, idx);

- if (idx > 0 &&
- drv->states[idx].target_residency > data->tick_delta_us) {
- /*
- * The tick is not going to be stopped and the target
- * residency of the state to be returned is not within
- * the time until the next timer event including the
- * tick, so try to correct that.
- */
- for (i = idx - 1; i >= 0; i--) {
- if (drv->states[i].disabled ||
- dev->states_usage[i].disable)
- continue;
-
- idx = i;
- if (drv->states[i].target_residency <=
- data->tick_delta_us)
- break;
- }
+ /* Calibrate the idle state according to the tick event. */
+ if (!*stop_tick && idx > 0 &&
+ drv->states[idx].target_residency > data->tick_delta_us) {
+ /*
+ * The tick is not going to be stopped and the target
+ * residency of the state to be returned is not within
+ * the time until the next timer event including the
+ * tick, so try to correct that.
+ */
+ for (i = idx - 1; i >= 0; i--) {
+ if (drv->states[i].disabled ||
+ dev->states_usage[i].disable)
+ continue;
+
+ idx = i;
+ if (drv->states[i].target_residency <=
+ data->tick_delta_us)
+ break;
}
}

--
2.7.4


2018-08-12 16:12:11

by Leo Yan

[permalink] [raw]
Subject: [PATCH v1 4/5] cpuidle: menu: Don't stay in shallow state for a long time

To avoid staying in a shallow state for a long time, the menu governor
relies on not stopping the tick when it detects that the prediction is
shorter than the tick event. This happens to cover most cases, but it
cannot be called completely safe. For example, if the prediction is
2000us and TICK_USEC=1000, it's impossible to meet the condition
'data->predicted_us < TICK_USEC', so the tick is stopped for a
shallow state; in the end the CPU may stay in this shallow state for
a very long time.

This patch checks whether the candidate idle state is not the deepest
one and whether the timer will fire after more than 2 times the
maximum target residency. Although the governor selects a shallow
state according to the prediction, the timer is the most reliable
wakeup source but will come very late, so the CPU risks staying in
the shallow state for a long time; the patch does not stop the tick
in this case and thus avoids the powernightmares issue.

Signed-off-by: Leo Yan <[email protected]>
---
drivers/cpuidle/governors/menu.c | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)

diff --git a/drivers/cpuidle/governors/menu.c b/drivers/cpuidle/governors/menu.c
index 4f02207..566c65c 100644
--- a/drivers/cpuidle/governors/menu.c
+++ b/drivers/cpuidle/governors/menu.c
@@ -284,6 +284,10 @@ static unsigned int get_typical_interval(struct menu_device *data)
static bool menu_decide_stopping_tick(struct cpuidle_driver *drv,
struct menu_device *data, int idx)
{
+ int max_target_residency;
+
+ max_target_residency = drv->states[drv->state_count-1].target_residency;
+
/*
* If the tick has been stopped yet, force to stop it afterwards and
* don't give chance to set *stop_tick to false.
@@ -302,6 +306,23 @@ static bool menu_decide_stopping_tick(struct cpuidle_driver *drv,
if (data->predicted_us < TICK_USEC)
return false;

+ /*
+ * The candidate idle state isn't the deepest one, while the timer,
+ * the most reliable wakeup source (compared with interrupts), will
+ * fire after more than 2 times the maximum target residency; this
+ * means the CPU risks staying in the shallow state for more than
+ * 2 times the maximum target residency.
+ *
+ * It's acceptable to stay in the shallow state for now, but make
+ * sure the tick wakes the CPU up to re-evaluate; it then has the
+ * chance to select a deeper state, so the CPU avoids staying in
+ * the shallow state for a very long time without any wakeup
+ * event.
+ */
+ if (idx < drv->state_count - 1 &&
+ data->next_timer_us > max_target_residency * 2)
+ return false;
+
/* Otherwise, let's stop the tick at this time. */
return true;
}
--
2.7.4


2018-08-12 16:12:20

by Leo Yan

[permalink] [raw]
Subject: [PATCH v1 5/5] cpuidle: menu: Change to compare prediction with tick delta

The stopping-tick decision is made by comparing the prediction with
TICK_USEC: if the prediction is shorter than TICK_USEC, the CPU is
likely to wake up before the tick event, so it's pointless to stop
the tick. In reality, though the tick period is fixed at TICK_USEC,
the CPU enters and exits idle states at random times, so the delta to
the next tick varies within the range [0, TICK_USEC]. This can lead
to a wrong stopping-tick decision: e.g. if the prediction is a 3ms
idle duration and we compare it with TICK_USEC=4000 (HZ=250), we
wrongly conclude that the tick event will come later than the
predicted duration, so the governor doesn't stop the tick; but if the
tick actually expires in 1ms, it wakes the CPU up early and the CPU
cannot stay idle for the expected 3ms.

Instead, 'data->tick_delta_us' holds the tick delta value and is an
accurate estimate of when the tick event will come. This patch
changes the code to compare the prediction with the tick delta rather
than with the static tick interval.

Signed-off-by: Leo Yan <[email protected]>
---
drivers/cpuidle/governors/menu.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/cpuidle/governors/menu.c b/drivers/cpuidle/governors/menu.c
index 566c65c..06d5942 100644
--- a/drivers/cpuidle/governors/menu.c
+++ b/drivers/cpuidle/governors/menu.c
@@ -300,10 +300,11 @@ static bool menu_decide_stopping_tick(struct cpuidle_driver *drv,
return false;

/*
- * Don't stop the tick if the prediction is shorter than the
- * tick period length.
+ * If the prediction is shorter than the next tick event, the CPU
+ * will likely be woken up before the tick event; don't stop
+ * the tick.
*/
- if (data->predicted_us < TICK_USEC)
+ if (data->predicted_us < data->tick_delta_us)
return false;

/*
--
2.7.4


2018-08-21 08:36:37

by Rafael J. Wysocki

[permalink] [raw]
Subject: Re: [PATCH v1 1/5] cpuidle: menu: Clean up variables usage in menu_select()

On Sunday, August 12, 2018 6:09:27 PM CEST Leo Yan wrote:
> The usage of the two variables 'data->predicted_us' and
> 'expected_interval' in menu_select() is confusing, especially since
> the two variables are assigned to each other: first,
> 'data->predicted_us' is assigned the minimum of 'data->predicted_us'
> and 'expected_interval', so it represents the predicted period after
> taking different factors into account, including the expected
> interval; but later 'data->predicted_us' is assigned back to
> 'expected_interval', and from then on the function uses
> 'expected_interval' to select the idle state. As a result,
> 'expected_interval' has two different semantics in the top half and
> the bottom half of the same function.
>
> This patch cleans up the usage of these two variables: always use
> 'data->predicted_us' to represent the predicted idle duration, which
> can be compared with the idle state target residency or the tick
> boundary when choosing the idle state; use 'expected_interval' purely
> to record the expected interval value, which is mainly used for
> interval interrupt estimation.
>
> Signed-off-by: Leo Yan <[email protected]>
> ---
> drivers/cpuidle/governors/menu.c | 5 ++---
> 1 file changed, 2 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/cpuidle/governors/menu.c b/drivers/cpuidle/governors/menu.c
> index 5eb7d6f..b972db1 100644
> --- a/drivers/cpuidle/governors/menu.c
> +++ b/drivers/cpuidle/governors/menu.c
> @@ -363,7 +363,6 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
> latency_req = interactivity_req;
>
> select:
> - expected_interval = data->predicted_us;
> /*
> * Find the idle state with the lowest power while satisfying
> * our constraints.
> @@ -386,7 +385,7 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
> * expected idle duration so that the tick is retained
> * as long as that target residency is low enough.
> */
> - expected_interval = drv->states[idx].target_residency;
> + data->predicted_us = drv->states[idx].target_residency;

This is not what is predicted though, so the name of the field isn't quite
adequate for this use IMO.

Besides, I'm not sure in what way using a structure field is simpler than
using a local variable.

> break;
> }
> idx = i;
> @@ -400,7 +399,7 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
> * expected idle duration is shorter than the tick period length.
> */
> if (((drv->states[idx].flags & CPUIDLE_FLAG_POLLING) ||
> - expected_interval < TICK_USEC) && !tick_nohz_tick_stopped()) {
> + data->predicted_us < TICK_USEC) && !tick_nohz_tick_stopped()) {
> unsigned int delta_next_us = ktime_to_us(delta_next);
>
> *stop_tick = false;
>

Yes, it can be simplified along these lines, but then please note that
data->predicted_us is only used in menu_select(), so it doesn't even need to
be there in struct menu_device. And if you make it a local variable and
call it something like duration_us, then yes, it will be fine to use it like
in this patch in my view.

Thanks,
Rafael


2018-08-21 08:38:14

by Rafael J. Wysocki

[permalink] [raw]
Subject: Re: [PATCH v1 2/5] cpuidle: menu: Record tick delta value in struct menu_device

On Sunday, August 12, 2018 6:09:28 PM CEST Leo Yan wrote:
> Since the tick delta is used in multiple places in menu_select(), it's
> better to use a single variable to record this value; furthermore, for
> readability we can refactor the code to split out a separate function
> for making the stopping-tick decision, which also needs the tick delta
> value as one metric for its consideration.

I don't quite agree that allocating more per-CPU memory for readability
sake is a good idea, sorry.

Thanks,
Rafael


2018-08-21 08:40:06

by Rafael J. Wysocki

[permalink] [raw]
Subject: Re: [PATCH v1 4/5] cpuidle: menu: Don't stay in shallow state for a long time

On Sunday, August 12, 2018 6:09:30 PM CEST Leo Yan wrote:
> To avoid staying in a shallow state for a long time, the menu governor
> relies on not stopping the tick when it detects that the prediction is
> shorter than the tick event. This happens to cover most cases, but it
> cannot be called completely safe. For example, if the prediction is
> 2000us and TICK_USEC=1000, it's impossible to meet the condition
> 'data->predicted_us < TICK_USEC', so the tick is stopped for a
> shallow state; in the end the CPU may stay in this shallow state for
> a very long time.
>
> This patch checks whether the candidate idle state is not the deepest
> one and whether the timer will fire after more than 2 times the
> maximum target residency. Although the governor selects a shallow
> state according to the prediction, the timer is the most reliable
> wakeup source but will come very late, so the CPU risks staying in
> the shallow state for a long time; the patch does not stop the tick
> in this case and thus avoids the powernightmares issue.
>
> Signed-off-by: Leo Yan <[email protected]>
> ---
> drivers/cpuidle/governors/menu.c | 21 +++++++++++++++++++++
> 1 file changed, 21 insertions(+)
>
> diff --git a/drivers/cpuidle/governors/menu.c b/drivers/cpuidle/governors/menu.c
> index 4f02207..566c65c 100644
> --- a/drivers/cpuidle/governors/menu.c
> +++ b/drivers/cpuidle/governors/menu.c
> @@ -284,6 +284,10 @@ static unsigned int get_typical_interval(struct menu_device *data)
> static bool menu_decide_stopping_tick(struct cpuidle_driver *drv,
> struct menu_device *data, int idx)
> {
> + int max_target_residency;
> +
> + max_target_residency = drv->states[drv->state_count-1].target_residency;

But this state may be disabled, may it not?

Thanks,
Rafael


2018-08-21 08:41:48

by Rafael J. Wysocki

[permalink] [raw]
Subject: Re: [PATCH v1 0/5] Improve stopping-tick decision making in 'menu' idle governor

On Sunday, August 12, 2018 6:09:26 PM CEST Leo Yan wrote:
> We found that CPUs cannot stay in the deepest idle state as expected
> when running synthetic workloads with the mainline kernel on an Arm
> platform (96boards Hikey620 with eight Cortex-A53 CPUs).
>
> The main issue is the criterion for deciding whether to stop the tick:
> currently it checks whether the expected interval is less than
> TICK_USEC, which ignores the fact that the delta to the next tick
> varies because the CPU enters and exits idle states at random times;
> furthermore, sticking to TICK_USEC as the boundary for the decision
> leaves a hole where a shallow state can be selected with the tick
> stopped, so the CPU stays in that shallow state for a long time.
>
> This patch series explores more reasonable decision making for
> stopping the tick; the most important fix is avoiding the
> "powernightmares" issue once these criteria are applied. Patches
> 0001 ~ 0003 refactor the variables and structures for more readable
> code and provide a function, menu_decide_stopping_tick(), which
> encapsulates the decision logic. The last two patches carry the
> actual improvements: patch 0004 'cpuidle: menu: Don't stay in shallow
> state for a long time' introduces a new, stricter criterion for not
> stopping the tick in shallow state cases; patch 0005 uses the dynamic
> tick delta instead of the static value TICK_USEC to decide whether
> the tick expires before or after the prediction, and from that
> comparison concludes whether the tick needs to be stopped.
>
> With a more accurate stopping-tick decision, one immediate benefit is
> that the CPUs have more chance to stay in the deepest state; it also
> avoids running the tick unnecessarily and thus avoids the shallower
> state a tick event would otherwise introduce. The testing results in
> the table below show the improvement from the better stopping-tick
> decision making in this patch series. We run a workload generated by
> rt-app (a single task with a 5ms period and a duty cycle of
> 1%/3%/5%/10%/20%/30%/40%) for a total running time of 60s, and gather
> statistics of the idle state durations across all CPUs, in seconds
> (s). For the dutycycle=1%/3%/5%/10%/20% cases, the shallow state
> C0/C1 durations are reduced and that time is moved to the deepest
> state, so the deepest state C2 duration improves by ~9s to ~21s. For
> the dutycycle=30%/40% cases, the deepest state durations are at
> parity with and without the patch series, but there is a minor
> improvement in the C1 state duration at the expense of the C0 state
> duration.
>
> Some notations are used in the table:
>
> state: C0: WFI; C1: CPU OFF; C2: Cluster OFF
>
> All testing cases have single task with 5ms period:
>
>             Without patches           With patches              Difference
>             ----------------------    ----------------------    --------------------------
> Duty cycle   C0     C1      C2         C0     C1      C2         C0      C1       C2
>  1%         2.397  16.528  471.905    0.916   2.688  487.328    -1.481  -13.840  +15.422
>  3%         3.957  20.541  464.434    1.510   2.398  485.914    -2.447  -18.143  +21.480
>  5%         2.866   8.609  474.777    1.166   2.250  483.983    -1.699   -6.359   +9.205
> 10%         2.893  28.753  453.277    1.147  14.134  469.190    -1.745  -14.618  +15.913
> 20%         7.620  41.086  431.735    1.595  35.055  442.482    -6.024   -6.030  +10.747
> 30%         4.394  38.328  431.442    1.964  40.857  430.973    -2.430   +2.529   -0.468
> 40%         7.390  29.415  430.914    1.789  34.832  431.588    -5.600   +5.417   -0.673
>
>
> P.S. For the testing, Rafael's patch 'cpuidle: menu: Handle stopped
> tick more aggressively' [1] was applied to avoid selecting an
> unexpected shallow state after the tick has been stopped.
>
> [1] https://lkml.org/lkml/2018/8/10/259
>
> Leo Yan (5):
> cpuidle: menu: Clean up variables usage in menu_select()
> cpuidle: menu: Record tick delta value in struct menu_device
> cpuidle: menu: Provide menu_decide_stopping_tick()
> cpuidle: menu: Don't stay in shallow state for a long time
> cpuidle: menu: Change to compare prediction with tick delta
>
> drivers/cpuidle/governors/menu.c | 104 ++++++++++++++++++++++++++++-----------
> 1 file changed, 76 insertions(+), 28 deletions(-)
>
>

Overall, I don't like this series, sorry about that.

The majority of changes in it are code reorganization, quite questionable
in a couple of cases, and a similar goal can be achieved with a very simple
patch that I'm going to post shortly.

Thanks,
Rafael