2019-04-22 06:36:34

by Abhishek Goel

Subject: [PATCH 0/1] Forced-wakeup for stop lite states on Powernv

Currently, the cpuidle governors determine which idle state an idling CPU
should enter based on heuristics that depend on the idle history of that
CPU. Given that no predictive heuristic is perfect, there are cases where
the governor predicts a shallow idle state, hoping that the CPU will be
busy soon. However, if no new workload is scheduled on that CPU in the
near future, the CPU will remain stuck in the shallow state.

Motivation
----------
In the case of POWER, this is problematic when the predicted state in the
aforementioned scenario is a lite stop state, as such lite states inhibit
SMT folding, thereby depriving the other threads in the core of the core
resources.

So we do not want to get stuck in such states for a long duration. To
address this, the cpuidle driver can queue a timer whose timeout
corresponds to the residency value of the next available state. This
timer will forcefully wake up the CPU. A few such iterations will
essentially train the governor to select a deeper state for that CPU,
since the timer here corresponds to the residency of the next available
cpuidle state. The CPU will be kicked out of the lite state and end up
in a non-lite state.

Experiment
----------
I performed experiments for three scenarios to collect some data.

case 1:
Without this patch and without the tick retained, i.e. in an upstream
kernel, the CPU could spend more than a second getting out of stop0_lite.

case 2: With the tick retained in an upstream kernel -

Generally, we have a sched tick every 4ms (CONFIG_HZ = 250). Ideally I
expected it to take 8 sched ticks to get out of stop0_lite.
Experimentally, the observation was:

=========================================================
samples      min       max       99th percentile
=========================================================
20           4ms       12ms      4ms
=========================================================

It takes at least one sched tick to get out of stop0_lite.

case 3: With this patch (not stopping the tick, but explicitly queuing a
timer)

============================================================
samples      min       max       99th percentile
============================================================
20           144us     192us     144us
============================================================

In this patch, we queue a timer just before entering stop0_lite. The
timer fires after (residency of next available state + 2 * exit latency
of next available state). Say the next state (stop0) is available, with
a residency of 20us and an exit latency of 2us: the CPU should get out
in as little as (20 + 2*2) * 8 = 192us [based on the formula
(residency + 2 * latency) * history length]. Ideally we would expect 8
iterations; it was observed to get out in 6-7 iterations. Even if, say,
stop2 is the next available state (stop0 and stop1 both unavailable), it
would take (100 + 2*10) * 8 = 960us to get into stop2.

So, we are able to get out of stop0_lite generally in 150us (with this
patch) as compared to 4ms (with the tick retained). As stated earlier,
we do not want to get stuck in stop0_lite, as it inhibits SMT folding
for the other sibling threads, depriving them of core resources. The
current patch uses forced wakeup only for stop0_lite, as it gives a
performance benefit (the primary reason) in addition to lowering power
consumption. We may extend this model to other states in the future.

Abhishek Goel (1):
cpuidle-powernv : forced wakeup for stop lite states

arch/powerpc/include/asm/opal-api.h | 1 +
drivers/cpuidle/cpuidle-powernv.c | 71 ++++++++++++++++++++++++++++-
2 files changed, 71 insertions(+), 1 deletion(-)

--
2.17.1


2019-04-22 06:37:25

by Abhishek Goel

Subject: [PATCH 1/1] cpuidle-powernv : forced wakeup for stop lite states

Currently, the cpuidle governors determine which idle state an idling CPU
should enter based on heuristics that depend on the idle history of that
CPU. Given that no predictive heuristic is perfect, there are cases where
the governor predicts a shallow idle state, hoping that the CPU will be
busy soon. However, if no new workload is scheduled on that CPU in the
near future, the CPU will remain stuck in the shallow state.

In the case of POWER, this is problematic when the predicted state in the
aforementioned scenario is a lite stop state, as such lite states inhibit
SMT folding, thereby depriving the other threads in the core of the core
resources.

So we do not want to get stuck in such states for a long duration. To
address this, the cpuidle driver can queue a timer whose timeout
corresponds to the residency value of the next available state. This
timer will forcefully wake up the CPU. A few such iterations will
essentially train the governor to select a deeper state for that CPU,
since the timer here corresponds to the residency of the next available
cpuidle state. The CPU will be kicked out of the lite state and end up
in a non-lite state.

Signed-off-by: Abhishek Goel <[email protected]>
---
arch/powerpc/include/asm/opal-api.h | 1 +
drivers/cpuidle/cpuidle-powernv.c | 71 ++++++++++++++++++++++++++++-
2 files changed, 71 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/opal-api.h b/arch/powerpc/include/asm/opal-api.h
index 870fb7b23..735dec731 100644
--- a/arch/powerpc/include/asm/opal-api.h
+++ b/arch/powerpc/include/asm/opal-api.h
@@ -226,6 +226,7 @@
  */
 
 #define OPAL_PM_TIMEBASE_STOP          0x00000002
+#define OPAL_PM_LOSE_USER_CONTEXT      0x00001000
 #define OPAL_PM_LOSE_HYP_CONTEXT       0x00002000
 #define OPAL_PM_LOSE_FULL_CONTEXT      0x00004000
 #define OPAL_PM_NAP_ENABLED            0x00010000
diff --git a/drivers/cpuidle/cpuidle-powernv.c b/drivers/cpuidle/cpuidle-powernv.c
index 84b1ebe21..30b877962 100644
--- a/drivers/cpuidle/cpuidle-powernv.c
+++ b/drivers/cpuidle/cpuidle-powernv.c
@@ -15,6 +15,7 @@
 #include <linux/clockchips.h>
 #include <linux/of.h>
 #include <linux/slab.h>
+#include <linux/hrtimer.h>
 
 #include <asm/machdep.h>
 #include <asm/firmware.h>
@@ -43,6 +44,40 @@ struct stop_psscr_table {
 
 static struct stop_psscr_table stop_psscr_table[CPUIDLE_STATE_MAX] __read_mostly;
 
+DEFINE_PER_CPU(struct hrtimer, forced_wakeup_timer);
+
+static int forced_wakeup_time_compute(struct cpuidle_device *dev,
+                                      struct cpuidle_driver *drv,
+                                      int index)
+{
+        int i, timeout_us = 0;
+
+        for (i = index + 1; i < drv->state_count; i++) {
+                if (drv->states[i].disabled || dev->states_usage[i].disable)
+                        continue;
+                timeout_us = drv->states[i].target_residency +
+                             2 * drv->states[i].exit_latency;
+                break;
+        }
+
+        return timeout_us;
+}
+
+enum hrtimer_restart forced_wakeup_hrtimer_callback(struct hrtimer *hrtimer)
+{
+        return HRTIMER_NORESTART;
+}
+
+static void forced_wakeup_timer_init(int cpu, struct cpuidle_driver *drv)
+{
+        struct hrtimer *cpu_forced_wakeup_timer = &per_cpu(forced_wakeup_timer,
+                                                           cpu);
+
+        hrtimer_init(cpu_forced_wakeup_timer, CLOCK_MONOTONIC,
+                     HRTIMER_MODE_REL);
+        cpu_forced_wakeup_timer->function = forced_wakeup_hrtimer_callback;
+}
+
 static u64 default_snooze_timeout __read_mostly;
 static bool snooze_timeout_en __read_mostly;
 
@@ -103,6 +138,28 @@ static int snooze_loop(struct cpuidle_device *dev,
         return index;
 }
 
+static int stop_lite_loop(struct cpuidle_device *dev,
+                          struct cpuidle_driver *drv,
+                          int index)
+{
+        int timeout_us;
+        struct hrtimer *this_timer = &per_cpu(forced_wakeup_timer, dev->cpu);
+
+        timeout_us = forced_wakeup_time_compute(dev, drv, index);
+
+        if (timeout_us > 0)
+                hrtimer_start(this_timer, ns_to_ktime(timeout_us * 1000),
+                              HRTIMER_MODE_REL_PINNED);
+
+        power9_idle_type(stop_psscr_table[index].val,
+                         stop_psscr_table[index].mask);
+
+        if (unlikely(hrtimer_is_queued(this_timer)))
+                hrtimer_cancel(this_timer);
+
+        return index;
+}
+
 static int nap_loop(struct cpuidle_device *dev,
                     struct cpuidle_driver *drv,
                     int index)
@@ -190,7 +247,7 @@ static int powernv_cpuidle_cpu_dead(unsigned int cpu)
  */
 static int powernv_cpuidle_driver_init(void)
 {
-        int idle_state;
+        int idle_state, cpu;
         struct cpuidle_driver *drv = &powernv_idle_driver;
 
         drv->state_count = 0;
@@ -224,6 +281,9 @@ static int powernv_cpuidle_driver_init(void)
 
         drv->cpumask = (struct cpumask *)cpu_present_mask;
 
+        for_each_cpu(cpu, drv->cpumask)
+                forced_wakeup_timer_init(cpu, drv);
+
         return 0;
 }
 
@@ -299,6 +359,7 @@ static int powernv_add_idle_states(void)
         for (i = 0; i < dt_idle_states; i++) {
                 unsigned int exit_latency, target_residency;
                 bool stops_timebase = false;
+                bool lose_user_context = false;
                 struct pnv_idle_states_t *state = &pnv_idle_states[i];
 
                 /*
@@ -324,6 +385,9 @@ static int powernv_add_idle_states(void)
                 if (has_stop_states && !(state->valid))
                         continue;
 
+                if (state->flags & OPAL_PM_LOSE_USER_CONTEXT)
+                        lose_user_context = true;
+
                 if (state->flags & OPAL_PM_TIMEBASE_STOP)
                         stops_timebase = true;
 
@@ -332,6 +396,11 @@ static int powernv_add_idle_states(void)
                         add_powernv_state(nr_idle_states, "Nap",
                                           CPUIDLE_FLAG_NONE, nap_loop,
                                           target_residency, exit_latency, 0, 0);
+                } else if (has_stop_states && !lose_user_context) {
+                        add_powernv_state(nr_idle_states, state->name,
+                                          CPUIDLE_FLAG_NONE, stop_lite_loop,
+                                          target_residency, exit_latency,
+                                          state->psscr_val, state->psscr_mask);
                 } else if (has_stop_states && !stops_timebase) {
                         add_powernv_state(nr_idle_states, state->name,
                                           CPUIDLE_FLAG_NONE, stop_loop,
--
2.17.1

2019-05-08 03:57:06

by Gautham R Shenoy

Subject: Re: [PATCH 1/1] cpuidle-powernv : forced wakeup for stop lite states

Hi Abhishek,

Apologies for this delayed review.

On Mon, Apr 22, 2019 at 01:32:31AM -0500, Abhishek Goel wrote:
> Currently, the cpuidle governors determine what idle state a idling CPU
> should enter into based on heuristics that depend on the idle history on
> that CPU. Given that no predictive heuristic is perfect, there are cases
> where the governor predicts a shallow idle state, hoping that the CPU will
> be busy soon. However, if no new workload is scheduled on that CPU in the
> near future, the CPU will end up in the shallow state.
>
> In case of POWER, this is problematic, when the predicted state in the
> aforementioned scenario is a lite stop state, as such lite states will
> inhibit SMT folding, thereby depriving the other threads in the core from
> using the core resources.
>
> So we do not want to get stucked in such states for longer duration. To

s/stucked/stuck

> address this, the cpuidle-core can queue timer to correspond with the

Actually we are queuing the timer in the driver and not the core!

> residency value of the next available state. This timer will forcefully
> wakeup the cpu. Few such iterations will essentially train the governor to
> select a deeper state for that cpu, as the timer here corresponds to the
> next available cpuidle state residency. Cpu will be kicked out of the lite
> state and end up in a non-lite state.


Come to think of it, this is also the problem that we have solved for
the snooze state. So perhaps we can reuse that code to determine what
the timeout value should be for these idle states in which the CPU
shouldn't remain for a long time.


>
> Signed-off-by: Abhishek Goel <[email protected]>
> ---
> arch/powerpc/include/asm/opal-api.h | 1 +
> drivers/cpuidle/cpuidle-powernv.c | 71 ++++++++++++++++++++++++++++-
> 2 files changed, 71 insertions(+), 1 deletion(-)
>
> diff --git a/arch/powerpc/include/asm/opal-api.h b/arch/powerpc/include/asm/opal-api.h
> index 870fb7b23..735dec731 100644
> --- a/arch/powerpc/include/asm/opal-api.h
> +++ b/arch/powerpc/include/asm/opal-api.h
> @@ -226,6 +226,7 @@
> */
>
> #define OPAL_PM_TIMEBASE_STOP 0x00000002
> +#define OPAL_PM_LOSE_USER_CONTEXT 0x00001000
> #define OPAL_PM_LOSE_HYP_CONTEXT 0x00002000
> #define OPAL_PM_LOSE_FULL_CONTEXT 0x00004000
> #define OPAL_PM_NAP_ENABLED 0x00010000
> diff --git a/drivers/cpuidle/cpuidle-powernv.c b/drivers/cpuidle/cpuidle-powernv.c
> index 84b1ebe21..30b877962 100644
> --- a/drivers/cpuidle/cpuidle-powernv.c
> +++ b/drivers/cpuidle/cpuidle-powernv.c
> @@ -15,6 +15,7 @@
> #include <linux/clockchips.h>
> #include <linux/of.h>
> #include <linux/slab.h>
> +#include <linux/hrtimer.h>
>
> #include <asm/machdep.h>
> #include <asm/firmware.h>
> @@ -43,6 +44,40 @@ struct stop_psscr_table {
>
> static struct stop_psscr_table stop_psscr_table[CPUIDLE_STATE_MAX] __read_mostly;
>
> +DEFINE_PER_CPU(struct hrtimer, forced_wakeup_timer);
> +
> +static int forced_wakeup_time_compute(struct cpuidle_device *dev,
> + struct cpuidle_driver *drv,
> + int index)
> +{
> + int i, timeout_us = 0;
> +
> + for (i = index + 1; i < drv->state_count; i++) {
> + if (drv->states[i].disabled || dev->states_usage[i].disable)
> + continue;
> + timeout_us = drv->states[i].target_residency +
> + 2 * drv->states[i].exit_latency;
> + break;
> + }
> +

This code is similar to the one in get_snooze_timeout(), except for
the inclusion of exit_latency in the timeout. What would we miss if we
didn't consider exit_latency?

Could you try to see if you can club the two?


> + return timeout_us;
> +}
> +
> +enum hrtimer_restart forced_wakeup_hrtimer_callback(struct hrtimer *hrtimer)
> +{
> + return HRTIMER_NORESTART;
> +}
> +
> +static void forced_wakeup_timer_init(int cpu, struct cpuidle_driver *drv)
> +{
> + struct hrtimer *cpu_forced_wakeup_timer = &per_cpu(forced_wakeup_timer,
> + cpu);
> +
> + hrtimer_init(cpu_forced_wakeup_timer, CLOCK_MONOTONIC,
> + HRTIMER_MODE_REL);
> + cpu_forced_wakeup_timer->function = forced_wakeup_hrtimer_callback;
> +}
> +
> static u64 default_snooze_timeout __read_mostly;
> static bool snooze_timeout_en __read_mostly;
>
> @@ -103,6 +138,28 @@ static int snooze_loop(struct cpuidle_device *dev,
> return index;
> }
>
> +static int stop_lite_loop(struct cpuidle_device *dev,
> + struct cpuidle_driver *drv,
> + int index)
> +{
> + int timeout_us;
> + struct hrtimer *this_timer = &per_cpu(forced_wakeup_timer, dev->cpu);
> +
> + timeout_us = forced_wakeup_time_compute(dev, drv, index);
> +
> + if (timeout_us > 0)
> + hrtimer_start(this_timer, ns_to_ktime(timeout_us * 1000),
> + HRTIMER_MODE_REL_PINNED);
> +
> + power9_idle_type(stop_psscr_table[index].val,
> + stop_psscr_table[index].mask);
> +
> + if (unlikely(hrtimer_is_queued(this_timer)))
> + hrtimer_cancel(this_timer);
> +
> + return index;
> +}
> +
> static int nap_loop(struct cpuidle_device *dev,
> struct cpuidle_driver *drv,
> int index)
> @@ -190,7 +247,7 @@ static int powernv_cpuidle_cpu_dead(unsigned int cpu)
> */
> static int powernv_cpuidle_driver_init(void)
> {
> - int idle_state;
> + int idle_state, cpu;
> struct cpuidle_driver *drv = &powernv_idle_driver;
>
> drv->state_count = 0;
> @@ -224,6 +281,9 @@ static int powernv_cpuidle_driver_init(void)
>
> drv->cpumask = (struct cpumask *)cpu_present_mask;
>
> + for_each_cpu(cpu, drv->cpumask)
> + forced_wakeup_timer_init(cpu, drv);
> +
> return 0;
> }
>
> @@ -299,6 +359,7 @@ static int powernv_add_idle_states(void)
> for (i = 0; i < dt_idle_states; i++) {
> unsigned int exit_latency, target_residency;
> bool stops_timebase = false;
> + bool lose_user_context = false;
> struct pnv_idle_states_t *state = &pnv_idle_states[i];
>
> /*
> @@ -324,6 +385,9 @@ static int powernv_add_idle_states(void)
> if (has_stop_states && !(state->valid))
> continue;
>
> + if (state->flags & OPAL_PM_LOSE_USER_CONTEXT)
> + lose_user_context = true;
> +
> if (state->flags & OPAL_PM_TIMEBASE_STOP)
> stops_timebase = true;
>
> @@ -332,6 +396,11 @@ static int powernv_add_idle_states(void)
> add_powernv_state(nr_idle_states, "Nap",
> CPUIDLE_FLAG_NONE, nap_loop,
> target_residency, exit_latency, 0, 0);
> + } else if (has_stop_states && !lose_user_context) {
> + add_powernv_state(nr_idle_states, state->name,
> + CPUIDLE_FLAG_NONE, stop_lite_loop,
> + target_residency, exit_latency,
> + state->psscr_val, state->psscr_mask);
> } else if (has_stop_states && !stops_timebase) {
> add_powernv_state(nr_idle_states, state->name,
> CPUIDLE_FLAG_NONE, stop_loop,
> --
> 2.17.1
>

2019-05-08 05:00:27

by Nicholas Piggin

Subject: Re: [PATCH 0/1] Forced-wakeup for stop lite states on Powernv

Abhishek Goel's on April 22, 2019 4:32 pm:
> Currently, the cpuidle governors determine what idle state a idling CPU
> should enter into based on heuristics that depend on the idle history on
> that CPU. Given that no predictive heuristic is perfect, there are cases
> where the governor predicts a shallow idle state, hoping that the CPU will
> be busy soon. However, if no new workload is scheduled on that CPU in the
> near future, the CPU will end up in the shallow state.
>
> Motivation
> ----------
> In case of POWER, this is problematic, when the predicted state in the
> aforementioned scenario is a lite stop state, as such lite states will
> inhibit SMT folding, thereby depriving the other threads in the core from
> using the core resources.
>
> So we do not want to get stucked in such states for longer duration. To
> address this, the cpuidle-core can queue timer to correspond with the
> residency value of the next available state. This timer will forcefully
> wakeup the cpu. Few such iterations will essentially train the governor to
> select a deeper state for that cpu, as the timer here corresponds to the
> next available cpuidle state residency. Cpu will be kicked out of the lite
> state and end up in a non-lite state.
>
> Experiment
> ----------
> I performed experiments for three scenarios to collect some data.
>
> case 1 :
> Without this patch and without tick retained, i.e. in a upstream kernel,
> It would spend more than even a second to get out of stop0_lite.
>
> case 2 : With tick retained in a upstream kernel -
>
> Generally, we have a sched tick at 4ms(CONF_HZ = 250). Ideally I expected
> it to take 8 sched tick to get out of stop0_lite. Experimentally,
> observation was
>
> =========================================================
> sample min max 99percentile
> 20 4ms 12ms 4ms
> =========================================================
>
> It would take atleast one sched tick to get out of stop0_lite.
>
> case 2 : With this patch (not stopping tick, but explicitly queuing a
> timer)
>
> ============================================================
> sample min max 99percentile
> ============================================================
> 20 144us 192us 144us
> ============================================================
>
> In this patch, we queue a timer just before entering into a stop0_lite
> state. The timer fires at (residency of next available state + exit latency
> of next available state * 2). Let's say if next state(stop0) is available
> which has residency of 20us, it should get out in as low as (20+2*2)*8
> [Based on the forumla (residency + 2xlatency)*history length] microseconds
> = 192us. Ideally we would expect 8 iterations, it was observed to get out
> in 6-7 iterations. Even if let's say stop2 is next available state(stop0
> and stop1 both are unavailable), it would take (100+2*10)*8 = 960us to get
> into stop2.
>
> So, We are able to get out of stop0_lite generally in 150us(with this
> patch) as compared to 4ms(with tick retained). As stated earlier, we do not
> want to get stuck into stop0_lite as it inhibits SMT folding for other
> sibling threads, depriving them of core resources. Current patch is using
> forced-wakeup only for stop0_lite, as it gives performance benefit(primary
> reason) along with lowering down power consumption. We may extend this
> model for other states in future.

I still have to wonder, between our snooze loop and stop0, what does
stop0_lite buy us.

That said, the problem you're solving here is a generic one that all
stop states have, I think. Doesn't the same thing apply going from
stop0 to stop5? You might underestimate the sleep time and lose power
savings and therefore performance there too. Shouldn't we make it
generic for all stop states?

Thanks,
Nick

2019-05-13 10:24:45

by Abhishek Goel

Subject: Re: [PATCH 0/1] Forced-wakeup for stop lite states on Powernv

On 05/08/2019 10:29 AM, Nicholas Piggin wrote:
> Abhishek Goel's on April 22, 2019 4:32 pm:
>> Currently, the cpuidle governors determine what idle state a idling CPU
>> should enter into based on heuristics that depend on the idle history on
>> that CPU. Given that no predictive heuristic is perfect, there are cases
>> where the governor predicts a shallow idle state, hoping that the CPU will
>> be busy soon. However, if no new workload is scheduled on that CPU in the
>> near future, the CPU will end up in the shallow state.
>>
>> Motivation
>> ----------
>> In case of POWER, this is problematic, when the predicted state in the
>> aforementioned scenario is a lite stop state, as such lite states will
>> inhibit SMT folding, thereby depriving the other threads in the core from
>> using the core resources.
>>
>> So we do not want to get stucked in such states for longer duration. To
>> address this, the cpuidle-core can queue timer to correspond with the
>> residency value of the next available state. This timer will forcefully
>> wakeup the cpu. Few such iterations will essentially train the governor to
>> select a deeper state for that cpu, as the timer here corresponds to the
>> next available cpuidle state residency. Cpu will be kicked out of the lite
>> state and end up in a non-lite state.
>>
>> Experiment
>> ----------
>> I performed experiments for three scenarios to collect some data.
>>
>> case 1 :
>> Without this patch and without tick retained, i.e. in a upstream kernel,
>> It would spend more than even a second to get out of stop0_lite.
>>
>> case 2 : With tick retained in a upstream kernel -
>>
>> Generally, we have a sched tick at 4ms(CONF_HZ = 250). Ideally I expected
>> it to take 8 sched tick to get out of stop0_lite. Experimentally,
>> observation was
>>
>> =========================================================
>> sample min max 99percentile
>> 20 4ms 12ms 4ms
>> =========================================================
>>
>> It would take atleast one sched tick to get out of stop0_lite.
>>
>> case 2 : With this patch (not stopping tick, but explicitly queuing a
>> timer)
>>
>> ============================================================
>> sample min max 99percentile
>> ============================================================
>> 20 144us 192us 144us
>> ============================================================
>>
>> In this patch, we queue a timer just before entering into a stop0_lite
>> state. The timer fires at (residency of next available state + exit latency
>> of next available state * 2). Let's say if next state(stop0) is available
>> which has residency of 20us, it should get out in as low as (20+2*2)*8
>> [Based on the forumla (residency + 2xlatency)*history length] microseconds
>> = 192us. Ideally we would expect 8 iterations, it was observed to get out
>> in 6-7 iterations. Even if let's say stop2 is next available state(stop0
>> and stop1 both are unavailable), it would take (100+2*10)*8 = 960us to get
>> into stop2.
>>
>> So, We are able to get out of stop0_lite generally in 150us(with this
>> patch) as compared to 4ms(with tick retained). As stated earlier, we do not
>> want to get stuck into stop0_lite as it inhibits SMT folding for other
>> sibling threads, depriving them of core resources. Current patch is using
>> forced-wakeup only for stop0_lite, as it gives performance benefit(primary
>> reason) along with lowering down power consumption. We may extend this
>> model for other states in future.
> I still have to wonder, between our snooze loop and stop0, what does
> stop0_lite buy us.
>
> That said, the problem you're solving here is a generic one that all
> stop states have, I think. Doesn't the same thing apply going from
> stop0 to stop5? You might under estimate the sleep time and lose power
> savings and therefore performance there too. Shouldn't we make it
> generic for all stop states?
>
> Thanks,
> Nick
>
>
When a CPU is in snooze, it takes up both space and time on the core.
When in stop0_lite, it frees up time but still takes up space. When it
is in stop0 or deeper, it frees up both space and its time slice on the
core.

In stop0_lite, the CPU doesn't free up the core resources and thus
inhibits thread folding. When a CPU goes to stop0, it frees up the core
resources, thus increasing the single-thread performance of the other
sibling threads. Hence, we do not want to get stuck in stop0_lite for a
long duration, and want to move on to the next state quickly.

If we get stuck in any other state we would possibly lose out on power
saving, but would still gain the performance benefits for the other
sibling threads.

Thanks,
Abhishek

2019-05-16 04:58:10

by Nicholas Piggin

Subject: Re: [PATCH 0/1] Forced-wakeup for stop lite states on Powernv

Abhishek's on May 13, 2019 7:49 pm:
> On 05/08/2019 10:29 AM, Nicholas Piggin wrote:
>> Abhishek Goel's on April 22, 2019 4:32 pm:
>>> Currently, the cpuidle governors determine what idle state a idling CPU
>>> should enter into based on heuristics that depend on the idle history on
>>> that CPU. Given that no predictive heuristic is perfect, there are cases
>>> where the governor predicts a shallow idle state, hoping that the CPU will
>>> be busy soon. However, if no new workload is scheduled on that CPU in the
>>> near future, the CPU will end up in the shallow state.
>>>
>>> Motivation
>>> ----------
>>> In case of POWER, this is problematic, when the predicted state in the
>>> aforementioned scenario is a lite stop state, as such lite states will
>>> inhibit SMT folding, thereby depriving the other threads in the core from
>>> using the core resources.
>>>
>>> So we do not want to get stucked in such states for longer duration. To
>>> address this, the cpuidle-core can queue timer to correspond with the
>>> residency value of the next available state. This timer will forcefully
>>> wakeup the cpu. Few such iterations will essentially train the governor to
>>> select a deeper state for that cpu, as the timer here corresponds to the
>>> next available cpuidle state residency. Cpu will be kicked out of the lite
>>> state and end up in a non-lite state.
>>>
>>> Experiment
>>> ----------
>>> I performed experiments for three scenarios to collect some data.
>>>
>>> case 1 :
>>> Without this patch and without tick retained, i.e. in a upstream kernel,
>>> It would spend more than even a second to get out of stop0_lite.
>>>
>>> case 2 : With tick retained in a upstream kernel -
>>>
>>> Generally, we have a sched tick at 4ms(CONF_HZ = 250). Ideally I expected
>>> it to take 8 sched tick to get out of stop0_lite. Experimentally,
>>> observation was
>>>
>>> =========================================================
>>> sample min max 99percentile
>>> 20 4ms 12ms 4ms
>>> =========================================================
>>>
>>> It would take atleast one sched tick to get out of stop0_lite.
>>>
>>> case 2 : With this patch (not stopping tick, but explicitly queuing a
>>> timer)
>>>
>>> ============================================================
>>> sample min max 99percentile
>>> ============================================================
>>> 20 144us 192us 144us
>>> ============================================================
>>>
>>> In this patch, we queue a timer just before entering into a stop0_lite
>>> state. The timer fires at (residency of next available state + exit latency
>>> of next available state * 2). Let's say if next state(stop0) is available
>>> which has residency of 20us, it should get out in as low as (20+2*2)*8
>>> [Based on the forumla (residency + 2xlatency)*history length] microseconds
>>> = 192us. Ideally we would expect 8 iterations, it was observed to get out
>>> in 6-7 iterations. Even if let's say stop2 is next available state(stop0
>>> and stop1 both are unavailable), it would take (100+2*10)*8 = 960us to get
>>> into stop2.
>>>
>>> So, We are able to get out of stop0_lite generally in 150us(with this
>>> patch) as compared to 4ms(with tick retained). As stated earlier, we do not
>>> want to get stuck into stop0_lite as it inhibits SMT folding for other
>>> sibling threads, depriving them of core resources. Current patch is using
>>> forced-wakeup only for stop0_lite, as it gives performance benefit(primary
>>> reason) along with lowering down power consumption. We may extend this
>>> model for other states in future.
>> I still have to wonder, between our snooze loop and stop0, what does
>> stop0_lite buy us.
>>
>> That said, the problem you're solving here is a generic one that all
>> stop states have, I think. Doesn't the same thing apply going from
>> stop0 to stop5? You might under estimate the sleep time and lose power
>> savings and therefore performance there too. Shouldn't we make it
>> generic for all stop states?
>>
>> Thanks,
>> Nick
>>
>>
> When a cpu is in snooze, it takes both space and time of core. When in
> stop0_lite,
> it free up time but it still takes space.

True, but snooze should only be taking less than 1% of front end
cycles. I appreciate there is some non-zero difference here, I just
wonder in practice what exactly we gain by it.

We should always have fewer states unless proven otherwise.

That said, we enable it today so I don't want to argue this point
here, because it is a different issue from your patch.

> When it is in stop0 or deeper,
> it free up both
> space and time slice of core.
> In stop0_lite, cpu doesn't free up the core resources and thus inhibits
> thread
> folding. When a cpu goes to stop0, it will free up the core resources
> thus increasing
> the single thread performance of other sibling thread.
> Hence, we do not want to get stuck in stop0_lite for long duration, and
> want to quickly
> move onto the next state.
> If we get stuck in any other state we would possibly be losing on to
> power saving,
> but will still be able to gain the performance benefits for other
> sibling threads.

That's true, but stop0 -> deeper stop is also a benefit (for
performance if we have some power/thermal constraints, and/or for power
usage).

Sure, it may not be as noticeable as the SMT switch, but I just wonder
if the infrastructure should be there for the same reason.

I was testing interrupt frequency on some tickless workload configs,
and without too much trouble you can get CPUs to sleep with no
interrupts for many minutes. Hours even. We wouldn't want the CPU to
stay in stop0 for that long.

Just thinking about the patch itself, I wonder do you need a full
kernel timer, or could we just set the decrementer? Is there much
performance cost here?

Thanks,
Nick

2019-05-16 05:38:38

by Gautham R Shenoy

Subject: Re: [PATCH 0/1] Forced-wakeup for stop lite states on Powernv

Hello Nicholas,


On Thu, May 16, 2019 at 02:55:42PM +1000, Nicholas Piggin wrote:
> Abhishek's on May 13, 2019 7:49 pm:
> > On 05/08/2019 10:29 AM, Nicholas Piggin wrote:
> >> Abhishek Goel's on April 22, 2019 4:32 pm:
> >>> Currently, the cpuidle governors determine what idle state a idling CPU
> >>> should enter into based on heuristics that depend on the idle history on
> >>> that CPU. Given that no predictive heuristic is perfect, there are cases
> >>> where the governor predicts a shallow idle state, hoping that the CPU will
> >>> be busy soon. However, if no new workload is scheduled on that CPU in the
> >>> near future, the CPU will end up in the shallow state.
> >>>
> >>> Motivation
> >>> ----------
> >>> In case of POWER, this is problematic, when the predicted state in the
> >>> aforementioned scenario is a lite stop state, as such lite states will
> >>> inhibit SMT folding, thereby depriving the other threads in the core from
> >>> using the core resources.
> >>>
> >>> So we do not want to get stuck in such states for a long duration. To
> >>> address this, the cpuidle core can queue a timer corresponding to the
> >>> residency value of the next available state. This timer will forcefully
> >>> wake up the CPU. A few such iterations will essentially train the
> >>> governor to select a deeper state for that CPU, as the timer here
> >>> corresponds to the next available cpuidle state's residency. The CPU
> >>> will be kicked out of the lite state and end up in a non-lite state.
> >>>
> >>> Experiment
> >>> ----------
> >>> I performed experiments for three scenarios to collect some data.
> >>>
> >>> case 1 :
> >>> Without this patch and without the tick retained, i.e. in an upstream
> >>> kernel, the CPU could take more than a second to get out of stop0_lite.
> >>>
> >>> case 2 : With the tick retained in an upstream kernel -
> >>>
> >>> Generally, we have a sched tick every 4ms (CONFIG_HZ = 250). Ideally I
> >>> expected it to take 8 sched ticks to get out of stop0_lite.
> >>> Experimentally, the observation was:
> >>>
> >>> =========================================================
> >>> sample      min       max       99th percentile
> >>> 20          4ms       12ms      4ms
> >>> =========================================================
> >>>
> >>> It would take at least one sched tick to get out of stop0_lite.
> >>>
> >>> case 3 : With this patch (not stopping the tick, but explicitly queuing a
> >>> timer)
> >>>
> >>> ============================================================
> >>> sample      min       max       99th percentile
> >>> ============================================================
> >>> 20          144us     192us     144us
> >>> ============================================================
> >>>
> >>> In this patch, we queue a timer just before entering the stop0_lite
> >>> state. The timer fires at (residency of next available state + 2 * exit
> >>> latency of next available state). Say the next state (stop0) is
> >>> available with a residency of 20us; the CPU should get out in as little
> >>> as (20 + 2*2) * 8 = 192us [based on the formula
> >>> (residency + 2 * latency) * history length]. Ideally we would expect 8
> >>> iterations; it was observed to get out in 6-7 iterations. Even if, say,
> >>> stop2 is the next available state (stop0 and stop1 both unavailable),
> >>> it would take (100 + 2*10) * 8 = 960us to get into stop2.
> >>>
> >>> So, we are able to get out of stop0_lite in about 150us (with this
> >>> patch) as compared to 4ms (with the tick retained). As stated earlier,
> >>> we do not want to get stuck in stop0_lite as it inhibits SMT folding
> >>> for the other sibling threads, depriving them of core resources. The
> >>> current patch uses forced wakeup only for stop0_lite, as it gives a
> >>> performance benefit (the primary reason) along with lowering power
> >>> consumption. We may extend this model to other states in the future.
> >> I still have to wonder, between our snooze loop and stop0, what does
> >> stop0_lite buy us.
> >>
> >> That said, the problem you're solving here is a generic one that all
> >> stop states have, I think. Doesn't the same thing apply going from
> >> stop0 to stop5? You might underestimate the sleep time and lose power
> >> savings and therefore performance there too. Shouldn't we make it
> >> generic for all stop states?
> >>
> >> Thanks,
> >> Nick
> >>
> >>
> > When a cpu is in snooze, it takes both space and time of the core. When
> > in stop0_lite, it frees up time but it still takes space.
>
> True, but snooze should only be taking less than 1% of front end
> cycles. I appreciate there is some non-zero difference here, I just
> wonder in practice what exactly we gain by it.

The idea behind implementing a lite-state was that on future platforms
it could be made to wait on a flag and hence act as a replacement for
snooze. On POWER9 we don't have this feature.

The motivation behind this patch was an HPC customer issue where they
were observing some CPUs in the core getting stuck in the stop0_lite
state, thereby lowering the performance of the other CPUs of the core
which were running the application.

Disabling stop0_lite via sysfs didn't help, since we would fall back to
snooze and that would make matters worse.

>
> We should always have fewer states unless proven otherwise.

I agree.

>
> That said, we enable it today so I don't want to argue this point
> here, because it is a different issue from your patch.
>
> > When it is in stop0 or deeper, it frees up both space and the time
> > slice of the core. In stop0_lite, the cpu doesn't free up the core
> > resources and thus inhibits thread folding. When a cpu goes to stop0,
> > it will free up the core resources, thus increasing the single-thread
> > performance of the other sibling threads. Hence, we do not want to get
> > stuck in stop0_lite for a long duration, and want to quickly move on
> > to the next state. If we get stuck in any other state we would
> > possibly lose out on power saving, but will still be able to gain the
> > performance benefits for the other sibling threads.
>
> That's true, but stop0 -> deeper stop is also a benefit (for
> performance if we have some power/thermal constraints, and/or for power
> usage).
>
> Sure, it may not be as noticeable as the SMT switch, but I just wonder
> if the infrastructure should be there for the same reason.
>
> I was testing interrupt frequency on some tickless workload configs,
> and without too much trouble you can get CPUs to sleep with no
> interrupts for many minutes. Hours even. We wouldn't want the CPU to
> stay in stop0 for that long.

If it stays in stop0 or even stop2 for that long, we would want to
"promote" it to a deeper state, such as, say, STOP5, which allows the
other cores to run at higher frequencies.

>
> Just thinking about the patch itself, I wonder: do you need a full
> kernel timer, or could we just set the decrementer? Is there much
> performance cost here?
>

Good point. A decrementer would do actually.

> Thanks,
> Nick

--
Thanks and Regards
gautham.

2019-05-16 06:16:53

by Nicholas Piggin

Subject: Re: [PATCH 0/1] Forced-wakeup for stop lite states on Powernv

Gautham R Shenoy's on May 16, 2019 3:36 pm:
> Hello Nicholas,
>
>
> On Thu, May 16, 2019 at 02:55:42PM +1000, Nicholas Piggin wrote:
>> Abhishek's on May 13, 2019 7:49 pm:
>> > On 05/08/2019 10:29 AM, Nicholas Piggin wrote:
>> >> Abhishek Goel's on April 22, 2019 4:32 pm:
>> >>> Currently, the cpuidle governors determine what idle state an idling CPU
>> >>> should enter into based on heuristics that depend on the idle history on
>> >>> that CPU. Given that no predictive heuristic is perfect, there are cases
>> >>> where the governor predicts a shallow idle state, hoping that the CPU will
>> >>> be busy soon. However, if no new workload is scheduled on that CPU in the
>> >>> near future, the CPU will end up in the shallow state.
>> >>>
>> >>> Motivation
>> >>> ----------
>> >>> In case of POWER, this is problematic, when the predicted state in the
>> >>> aforementioned scenario is a lite stop state, as such lite states will
>> >>> inhibit SMT folding, thereby depriving the other threads in the core from
>> >>> using the core resources.
>> >>>
>> >>> So we do not want to get stuck in such states for a long duration. To
>> >>> address this, the cpuidle core can queue a timer corresponding to the
>> >>> residency value of the next available state. This timer will forcefully
>> >>> wake up the CPU. A few such iterations will essentially train the
>> >>> governor to select a deeper state for that CPU, as the timer here
>> >>> corresponds to the next available cpuidle state's residency. The CPU
>> >>> will be kicked out of the lite state and end up in a non-lite state.
>> >>>
>> >>> Experiment
>> >>> ----------
>> >>> I performed experiments for three scenarios to collect some data.
>> >>>
>> >>> case 1 :
>> >>> Without this patch and without the tick retained, i.e. in an upstream
>> >>> kernel, the CPU could take more than a second to get out of stop0_lite.
>> >>>
>> >>> case 2 : With the tick retained in an upstream kernel -
>> >>>
>> >>> Generally, we have a sched tick every 4ms (CONFIG_HZ = 250). Ideally I
>> >>> expected it to take 8 sched ticks to get out of stop0_lite.
>> >>> Experimentally, the observation was:
>> >>>
>> >>> =========================================================
>> >>> sample      min       max       99th percentile
>> >>> 20          4ms       12ms      4ms
>> >>> =========================================================
>> >>>
>> >>> It would take at least one sched tick to get out of stop0_lite.
>> >>>
>> >>> case 3 : With this patch (not stopping the tick, but explicitly queuing a
>> >>> timer)
>> >>>
>> >>> ============================================================
>> >>> sample      min       max       99th percentile
>> >>> ============================================================
>> >>> 20          144us     192us     144us
>> >>> ============================================================
>> >>>
>> >>> In this patch, we queue a timer just before entering the stop0_lite
>> >>> state. The timer fires at (residency of next available state + 2 * exit
>> >>> latency of next available state). Say the next state (stop0) is
>> >>> available with a residency of 20us; the CPU should get out in as little
>> >>> as (20 + 2*2) * 8 = 192us [based on the formula
>> >>> (residency + 2 * latency) * history length]. Ideally we would expect 8
>> >>> iterations; it was observed to get out in 6-7 iterations. Even if, say,
>> >>> stop2 is the next available state (stop0 and stop1 both unavailable),
>> >>> it would take (100 + 2*10) * 8 = 960us to get into stop2.
>> >>>
>> >>> So, we are able to get out of stop0_lite in about 150us (with this
>> >>> patch) as compared to 4ms (with the tick retained). As stated earlier,
>> >>> we do not want to get stuck in stop0_lite as it inhibits SMT folding
>> >>> for the other sibling threads, depriving them of core resources. The
>> >>> current patch uses forced wakeup only for stop0_lite, as it gives a
>> >>> performance benefit (the primary reason) along with lowering power
>> >>> consumption. We may extend this model to other states in the future.
>> >> I still have to wonder, between our snooze loop and stop0, what does
>> >> stop0_lite buy us.
>> >>
>> >> That said, the problem you're solving here is a generic one that all
>> >> stop states have, I think. Doesn't the same thing apply going from
>> >> stop0 to stop5? You might underestimate the sleep time and lose power
>> >> savings and therefore performance there too. Shouldn't we make it
>> >> generic for all stop states?
>> >>
>> >> Thanks,
>> >> Nick
>> >>
>> >>
>> > When a cpu is in snooze, it takes both space and time of the core. When
>> > in stop0_lite, it frees up time but it still takes space.
>>
>> True, but snooze should only be taking less than 1% of front end
>> cycles. I appreciate there is some non-zero difference here, I just
>> wonder in practice what exactly we gain by it.
>
> The idea behind implementing a lite-state was that on future platforms
> it could be made to wait on a flag and hence act as a replacement for
> snooze. On POWER9 we don't have this feature.

Right. I mean for POWER9.

> The motivation behind this patch was an HPC customer issue where they
> were observing some CPUs in the core getting stuck in the stop0_lite
> state, thereby lowering the performance of the other CPUs of the core
> which were running the application.
>
> Disabling stop0_lite via sysfs didn't help, since we would fall back to
> snooze and that would make matters worse.

snooze has the timeout though, so it should kick into stop0 properly
(and if it doesn't that's another issue that should be fixed in this
series).

I'm not questioning the patch for stop0_lite, to be clear. I think
the logic is sound. I just raise one unrelated issue that happens to
be for stop0_lite as well (should we even enable it on P9?) and one
peripheral issue (should we make a similar fix for deeper stop states?).

>
>>
>> We should always have fewer states unless proven otherwise.
>
> I agree.
>
>>
>> That said, we enable it today so I don't want to argue this point
>> here, because it is a different issue from your patch.
>>
>> > When it is in stop0 or deeper, it frees up both space and the time
>> > slice of the core. In stop0_lite, the cpu doesn't free up the core
>> > resources and thus inhibits thread folding. When a cpu goes to stop0,
>> > it will free up the core resources, thus increasing the single-thread
>> > performance of the other sibling threads. Hence, we do not want to get
>> > stuck in stop0_lite for a long duration, and want to quickly move on
>> > to the next state. If we get stuck in any other state we would
>> > possibly lose out on power saving, but will still be able to gain the
>> > performance benefits for the other sibling threads.
>>
>> That's true, but stop0 -> deeper stop is also a benefit (for
>> performance if we have some power/thermal constraints, and/or for power
>> usage).
>>
>> Sure, it may not be as noticeable as the SMT switch, but I just wonder
>> if the infrastructure should be there for the same reason.
>>
>> I was testing interrupt frequency on some tickless workload configs,
>> and without too much trouble you can get CPUs to sleep with no
>> interrupts for many minutes. Hours even. We wouldn't want the CPU to
>> stay in stop0 for that long.
>
> If it stays in stop0 or even stop2 for that long, we would want to
> "promote" it to a deeper state, such as, say, STOP5, which allows the
> other cores to run at higher frequencies.

So we would want this same logic for all but the deepest runtime
stop state?

>> Just thinking about the patch itself, I wonder: do you need a full
>> kernel timer, or could we just set the decrementer? Is there much
>> performance cost here?
>>
>
> Good point. A decrementer would do actually.

That would be good if it does, might save a few cycles.

Thanks,
Nick

2019-05-16 08:24:12

by Gautham R Shenoy

Subject: Re: [PATCH 0/1] Forced-wakeup for stop lite states on Powernv

Hi Nicholas,

On Thu, May 16, 2019 at 04:13:17PM +1000, Nicholas Piggin wrote:

>
> > The motivation behind this patch was an HPC customer issue where they
> > were observing some CPUs in the core getting stuck in the stop0_lite
> > state, thereby lowering the performance of the other CPUs of the core
> > which were running the application.
> >
> > Disabling stop0_lite via sysfs didn't help, since we would fall back to
> > snooze and that would make matters worse.
>
> snooze has the timeout though, so it should kick into stop0 properly
> (and if it doesn't that's another issue that should be fixed in this
> series).
>
> I'm not questioning the patch for stop0_lite, to be clear. I think
> the logic is sound. I just raise one unrelated issue that happens to
> be for stop0_lite as well (should we even enable it on P9?) and one
> peripheral issue (should we make a similar fix for deeper stop states?).
>

I think it makes sense to generalize this from the point of view of
CPUs remaining in shallower idle states for long durations on tickless
kernels.

> >
> >>
> >> We should always have fewer states unless proven otherwise.
> >
> > I agree.
> >
> >>
> >> That said, we enable it today so I don't want to argue this point
> >> here, because it is a different issue from your patch.
> >>
> >> > When it is in stop0 or deeper, it frees up both space and the time
> >> > slice of the core. In stop0_lite, the cpu doesn't free up the core
> >> > resources and thus inhibits thread folding. When a cpu goes to stop0,
> >> > it will free up the core resources, thus increasing the single-thread
> >> > performance of the other sibling threads. Hence, we do not want to get
> >> > stuck in stop0_lite for a long duration, and want to quickly move on
> >> > to the next state. If we get stuck in any other state we would
> >> > possibly lose out on power saving, but will still be able to gain the
> >> > performance benefits for the other sibling threads.
> >>
> >> That's true, but stop0 -> deeper stop is also a benefit (for
> >> performance if we have some power/thermal constraints, and/or for power
> >> usage).
> >>
> >> Sure, it may not be as noticeable as the SMT switch, but I just wonder
> >> if the infrastructure should be there for the same reason.
> >>
> >> I was testing interrupt frequency on some tickless workload configs,
> >> and without too much trouble you can get CPUs to sleep with no
> >> interrupts for many minutes. Hours even. We wouldn't want the CPU to
> >> stay in stop0 for that long.
> >
> > If it stays in stop0 or even stop2 for that long, we would want to
> > "promote" it to a deeper state, such as, say, STOP5, which allows the
> > other cores to run at higher frequencies.
>
> So we would want this same logic for all but the deepest runtime
> stop state?

Yes. We can, in steps, promote individual threads of the core to
eventually request a deeper state such as stop4/5. On a completely
idle tickless system, eventually we should see the core go to the
deeper idle state.

>
> >> Just thinking about the patch itself, I wonder: do you need a full
> >> kernel timer, or could we just set the decrementer? Is there much
> >> performance cost here?
> >>
> >
> > Good point. A decrementer would do actually.
>
> That would be good if it does, might save a few cycles.
>
> Thanks,
> Nick
>

--
Thanks and Regards
gautham.