2020-07-07 11:14:15

by Gautham R Shenoy

Subject: [PATCH 0/5] cpuidle-pseries: Parse extended CEDE information for idle.

From: "Gautham R. Shenoy" <[email protected]>

Hi,

On pseries Dedicated Linux LPARs, apart from the polling snooze idle
state, we currently have the CEDE idle state which cedes the CPU to
the hypervisor with latency-hint = 0.

However, the PowerVM hypervisor supports additional extended CEDE
states, which can be queried through the "ibm,get-system-parameter"
rtas-call with the CEDE_LATENCY_TOKEN. The hypervisor maps these
extended CEDE states to appropriate platform idle-states in order to
provide energy savings as well as shift power to the active units. On
existing pseries LPARs today, extended CEDE with latency-hints {1,2}
is supported.
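
For reference, a rough sketch of how such a query could look in the
driver is below (the token value, buffer size and helper name are
illustrative assumptions, not necessarily what Patch 2 uses):

#include <linux/string.h>
#include <asm/page.h>
#include <asm/rtas.h>

#define CEDE_LATENCY_TOKEN	45	/* assumed token value */
#define CEDE_PARAM_MAX		10	/* illustrative buffer size */

static u8 cede_parameters[CEDE_PARAM_MAX];

/*
 * Ask the hypervisor for the CEDE latency settings information via the
 * ibm,get-system-parameter rtas-call. On success, cede_parameters[]
 * holds the extended CEDE latency records to be parsed.
 */
static int query_cede_parameters(void)
{
	memset(cede_parameters, 0, CEDE_PARAM_MAX);
	return rtas_call(rtas_token("ibm,get-system-parameter"), 3, 1,
			 NULL, CEDE_LATENCY_TOKEN, __pa(cede_parameters),
			 CEDE_PARAM_MAX);
}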

In Patches 1-3 of this patchset, we add the code to parse the CEDE
latency records provided by the hypervisor. We use this information to
determine the wakeup latency of the regular CEDE (which we have so far
been hardcoding to 10us, while experimentally it is much lower, around
1us), by looking at the wakeup latency provided by the hypervisor for
the Extended CEDE states. Since the platform currently advertises
Extended CEDE 1 to have a wakeup latency of 2us, we can be sure that
the wakeup latency of the regular CEDE is no more than this.
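
As an illustration of that fixup (a sketch only: it assumes the
dedicated_states[] table from cpuidle-pseries.c with CEDE(0) at index
1 after snooze, and the helper name is made up):

/*
 * Sketch: clamp the exit latency of plain CEDE(0) to the wakeup
 * latency advertised by the hypervisor for Extended CEDE(1), instead
 * of keeping the old hardcoded 10us value.
 */
static void fixup_cede0_latency(u64 xcede1_wakeup_latency_us)
{
	struct cpuidle_state *cede0 = &dedicated_states[1];

	if (xcede1_wakeup_latency_us &&
	    xcede1_wakeup_latency_us < cede0->exit_latency)
		cede0->exit_latency = xcede1_wakeup_latency_us;
}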

Patch 4 (currently marked as RFC) exposes the extended CEDE states
parsed above to the cpuidle framework, provided that they can wake up
on an interrupt. On current platforms only Extended CEDE 1 fits the
bill, but this is going to change on future platforms, where even
Extended CEDE 2 may be responsive to external interrupts.
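
A simplified sketch of what exposing such a state could look like, in
the context of cpuidle-pseries.c (the field values, the wake_on_irq
check and the helper name are assumptions based on this cover letter,
not the exact RFC patch; it reuses dedicated_cede_loop() and
cede_latency_hint[] from Patch 1):

/*
 * Sketch: append an extended CEDE state to the dedicated-LPAR cpuidle
 * table, but only if the hypervisor marks it as responsive to external
 * interrupts, since cpuidle cannot use a state that must be woken up
 * by other means.
 */
static void add_xcede_state(int index, u8 hint, u64 exit_latency_us,
			    bool wake_on_irq)
{
	struct cpuidle_state *state = &dedicated_states[index];

	if (!wake_on_irq)
		return;

	snprintf(state->name, CPUIDLE_NAME_LEN, "XCEDE%d", hint);
	snprintf(state->desc, CPUIDLE_DESC_LEN, "Extended CEDE(%d)", hint);
	state->exit_latency = exit_latency_us;
	state->target_residency = 10 * exit_latency_us; /* crude heuristic */
	state->enter = dedicated_cede_loop;
	cede_latency_hint[index] = hint;
}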

Patch 5 (currently marked as RFC) filters out Extended CEDE 1, since
it offers no added advantage over the normal CEDE.

With Patches 1-3, we see an improvement in the single-threaded
performance on ebizzy.

2 ebizzy threads bound to the same big-core. 25% improvement in the
avg records/s (higher is better) with Patches 1-3.
x without_patches
* with_patches
N Min Max Median Avg Stddev
x 10 2491089 5834307 5398375 4244335 1596244.9
* 10 2893813 5834474 5832448 5327281.3 1055941.4

We do not observe any major regression in either the context_switch2
benchmark or the schbench benchmark.

context_switch2 across CPU0 and CPU1 (both belonging to the same
big-core, but different small cores). We observe a minor 0.14%
regression in the number of context-switches (higher is better).
x without_patch
* with_patch
N Min Max Median Avg Stddev
x 500 348872 362236 354712 354745.69 2711.827
* 500 349422 361452 353942 354215.4 2576.9258

context_switch2 across CPU0 and CPU8 (different big-cores). We observe
a 0.37% improvement in the number of context-switches (higher is
better).
x without_patch
* with_patch
N Min Max Median Avg Stddev
x 500 287956 294940 288896 288977.23 646.59295
* 500 288300 294646 289582 290064.76 1161.9992

schbench:
No major difference could be seen until the 99.9th percentile.

Without-patch
Latency percentiles (usec)
50.0th: 29
75.0th: 39
90.0th: 49
95.0th: 59
*99.0th: 13104
99.5th: 14672
99.9th: 15824
min=0, max=17993

With-patch:
Latency percentiles (usec)
50.0th: 29
75.0th: 40
90.0th: 50
95.0th: 61
*99.0th: 13648
99.5th: 14768
99.9th: 15664
min=0, max=29812



Gautham R. Shenoy (5):
cpuidle-pseries: Set the latency-hint before entering CEDE
cpuidle-pseries: Add function to parse extended CEDE records
cpuidle-pseries : Fixup exit latency for CEDE(0)
cpuidle-pseries : Include extended CEDE states in cpuidle framework
cpuidle-pseries: Block Extended CEDE(1) which adds no additional
value.

drivers/cpuidle/cpuidle-pseries.c | 268 +++++++++++++++++++++++++++++++++++++-
1 file changed, 266 insertions(+), 2 deletions(-)

--
1.9.4


2020-07-07 11:15:46

by Gautham R Shenoy

Subject: [PATCH 1/5] cpuidle-pseries: Set the latency-hint before entering CEDE

From: "Gautham R. Shenoy" <[email protected]>

As per the PAPR, each H_CEDE call is associated with a latency-hint to
be passed in the VPA field "cede_latency_hint". The CEDE state that we
have been implicitly entering so far is CEDE with latency-hint = 0.

This patch explicitly sets the latency hint corresponding to the CEDE
state that we are currently entering. While at it, we save the
previous hint, to be restored once we wake up from CEDE. This will be
required in the future when we expose extended-cede states through the
cpuidle framework, where each of them will have a different
cede-latency hint.

Signed-off-by: Gautham R. Shenoy <[email protected]>
---
drivers/cpuidle/cpuidle-pseries.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/cpuidle/cpuidle-pseries.c b/drivers/cpuidle/cpuidle-pseries.c
index 4a37252..39d4bb6 100644
--- a/drivers/cpuidle/cpuidle-pseries.c
+++ b/drivers/cpuidle/cpuidle-pseries.c
@@ -105,19 +105,27 @@ static void check_and_cede_processor(void)
}
}

+#define NR_CEDE_STATES 1 /* CEDE with latency-hint 0 */
+#define NR_DEDICATED_STATES (NR_CEDE_STATES + 1) /* Includes snooze */
+
+u8 cede_latency_hint[NR_DEDICATED_STATES];
static int dedicated_cede_loop(struct cpuidle_device *dev,
struct cpuidle_driver *drv,
int index)
{
+ u8 old_latency_hint;

pseries_idle_prolog();
get_lppaca()->donate_dedicated_cpu = 1;
+ old_latency_hint = get_lppaca()->cede_latency_hint;
+ get_lppaca()->cede_latency_hint = cede_latency_hint[index];

HMT_medium();
check_and_cede_processor();

local_irq_disable();
get_lppaca()->donate_dedicated_cpu = 0;
+ get_lppaca()->cede_latency_hint = old_latency_hint;

pseries_idle_epilog();

@@ -149,7 +157,7 @@ static int shared_cede_loop(struct cpuidle_device *dev,
/*
* States for dedicated partition case.
*/
-static struct cpuidle_state dedicated_states[] = {
+static struct cpuidle_state dedicated_states[NR_DEDICATED_STATES] = {
{ /* Snooze */
.name = "snooze",
.desc = "snooze",
--
1.9.4

2020-07-07 11:35:38

by Gautham R Shenoy

Subject: Re: [PATCH 0/5] cpuidle-pseries: Parse extended CEDE information for idle.

Hi,

On Tue, Jul 07, 2020 at 04:41:34PM +0530, Gautham R. Shenoy wrote:
> From: "Gautham R. Shenoy" <[email protected]>
>
> Hi,
>
>
>
>
> Gautham R. Shenoy (5):
> cpuidle-pseries: Set the latency-hint before entering CEDE
> cpuidle-pseries: Add function to parse extended CEDE records
> cpuidle-pseries : Fixup exit latency for CEDE(0)
> cpuidle-pseries : Include extended CEDE states in cpuidle framework
> cpuidle-pseries: Block Extended CEDE(1) which adds no additional
> value.

Forgot to mention that these patches are on top of Nathan's series to
remove extended CEDE offline and bogus topology update code:
https://lore.kernel.org/linuxppc-dev/[email protected]/

>
> drivers/cpuidle/cpuidle-pseries.c | 268 +++++++++++++++++++++++++++++++++++++-
> 1 file changed, 266 insertions(+), 2 deletions(-)
>
> --
> 1.9.4
>

2020-07-20 05:49:52

by Vaidyanathan Srinivasan

Subject: Re: [PATCH 0/5] cpuidle-pseries: Parse extended CEDE information for idle.

* Gautham R Shenoy <[email protected]> [2020-07-07 16:41:34]:

> From: "Gautham R. Shenoy" <[email protected]>
>
> Hi,
>
> On pseries Dedicated Linux LPARs, apart from the polling snooze idle
> state, we currently have the CEDE idle state which cedes the CPU to
> the hypervisor with latency-hint = 0.
>
> However, the PowerVM hypervisor supports additional extended CEDE
> states, which can be queried through the "ibm,get-system-parameter"
> rtas-call with the CEDE_LATENCY_TOKEN. The hypervisor maps these
> extended CEDE states to appropriate platform idle-states in order to
> provide energy savings as well as shift power to the active units. On
> existing pseries LPARs today, extended CEDE with latency-hints {1,2}
> is supported.
>
> In Patches 1-3 of this patchset, we add the code to parse the CEDE
> latency records provided by the hypervisor. We use this information to
> determine the wakeup latency of the regular CEDE (which we have so far
> been hardcoding to 10us, while experimentally it is much lower, around
> 1us), by looking at the wakeup latency provided by the hypervisor for
> the Extended CEDE states. Since the platform currently advertises
> Extended CEDE 1 to have a wakeup latency of 2us, we can be sure that
> the wakeup latency of the regular CEDE is no more than this.
>
> Patch 4 (currently marked as RFC) exposes the extended CEDE states
> parsed above to the cpuidle framework, provided that they can wake up
> on an interrupt. On current platforms only Extended CEDE 1 fits the
> bill, but this is going to change on future platforms, where even
> Extended CEDE 2 may be responsive to external interrupts.
>
> Patch 5 (currently marked as RFC) filters out Extended CEDE 1, since
> it offers no added advantage over the normal CEDE.
>
> With Patches 1-3, we see an improvement in the single-threaded
> performance on ebizzy.
>
> 2 ebizzy threads bound to the same big-core. 25% improvement in the
> avg records/s (higher is better) with Patches 1-3.
> x without_patches
> * with_patches
> N Min Max Median Avg Stddev
> x 10 2491089 5834307 5398375 4244335 1596244.9
> * 10 2893813 5834474 5832448 5327281.3 1055941.4
>
> We do not observe any major regression in either the context_switch2
> benchmark or the schbench benchmark.
>
> context_switch2 across CPU0 and CPU1 (both belonging to the same
> big-core, but different small cores). We observe a minor 0.14%
> regression in the number of context-switches (higher is better).
> x without_patch
> * with_patch
> N Min Max Median Avg Stddev
> x 500 348872 362236 354712 354745.69 2711.827
> * 500 349422 361452 353942 354215.4 2576.9258
>
> context_switch2 across CPU0 and CPU8 (different big-cores). We
> observe a 0.37% improvement in the number of context-switches (higher
> is better).
> x without_patch
> * with_patch
> N Min Max Median Avg Stddev
> x 500 287956 294940 288896 288977.23 646.59295
> * 500 288300 294646 289582 290064.76 1161.9992
>
> schbench:
> No major difference could be seen until the 99.9th percentile.
>
> Without-patch
> Latency percentiles (usec)
> 50.0th: 29
> 75.0th: 39
> 90.0th: 49
> 95.0th: 59
> *99.0th: 13104
> 99.5th: 14672
> 99.9th: 15824
> min=0, max=17993
>
> With-patch:
> Latency percentiles (usec)
> 50.0th: 29
> 75.0th: 40
> 90.0th: 50
> 95.0th: 61
> *99.0th: 13648
> 99.5th: 14768
> 99.9th: 15664
> min=0, max=29812

This patch series mainly cleans up the CEDE latency discovery and
prepares to add different cpuidle states in a virtualised environment.
This helps improve SMT folding speed, and also enables power savings
and power shifting with newer platform firmware.

The current benefit is primarily from faster SMT folding and the
resulting single-thread performance, achieved by updating the cpuidle
state heuristics with the values provided by the platform firmware.

--Vaidy



2020-07-20 05:59:44

by Vaidyanathan Srinivasan

Subject: Re: [PATCH 1/5] cpuidle-pseries: Set the latency-hint before entering CEDE

* Gautham R Shenoy <[email protected]> [2020-07-07 16:41:35]:

> From: "Gautham R. Shenoy" <[email protected]>
>
> As per the PAPR, each H_CEDE call is associated with a latency-hint to
> be passed in the VPA field "cede_latency_hint". The CEDE state that we
> have been implicitly entering so far is CEDE with latency-hint = 0.
>
> This patch explicitly sets the latency hint corresponding to the CEDE
> state that we are currently entering. While at it, we save the
> previous hint, to be restored once we wake up from CEDE. This will be
> required in the future when we expose extended-cede states through the
> cpuidle framework, where each of them will have a different
> cede-latency hint.
>
> Signed-off-by: Gautham R. Shenoy <[email protected]>

Reviewed-by: Vaidyanathan Srinivasan <[email protected]>


> ---
> drivers/cpuidle/cpuidle-pseries.c | 10 +++++++++-
> 1 file changed, 9 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/cpuidle/cpuidle-pseries.c b/drivers/cpuidle/cpuidle-pseries.c
> index 4a37252..39d4bb6 100644
> --- a/drivers/cpuidle/cpuidle-pseries.c
> +++ b/drivers/cpuidle/cpuidle-pseries.c
> @@ -105,19 +105,27 @@ static void check_and_cede_processor(void)
> }
> }
>
> +#define NR_CEDE_STATES 1 /* CEDE with latency-hint 0 */
> +#define NR_DEDICATED_STATES (NR_CEDE_STATES + 1) /* Includes snooze */
> +
> +u8 cede_latency_hint[NR_DEDICATED_STATES];
> static int dedicated_cede_loop(struct cpuidle_device *dev,
> struct cpuidle_driver *drv,
> int index)
> {
> + u8 old_latency_hint;
>
> pseries_idle_prolog();
> get_lppaca()->donate_dedicated_cpu = 1;
> + old_latency_hint = get_lppaca()->cede_latency_hint;
> + get_lppaca()->cede_latency_hint = cede_latency_hint[index];
>
> HMT_medium();
> check_and_cede_processor();
>
> local_irq_disable();
> get_lppaca()->donate_dedicated_cpu = 0;
> + get_lppaca()->cede_latency_hint = old_latency_hint;
>
> pseries_idle_epilog();
>
> @@ -149,7 +157,7 @@ static int shared_cede_loop(struct cpuidle_device *dev,
> /*
> * States for dedicated partition case.
> */
> -static struct cpuidle_state dedicated_states[] = {
> +static struct cpuidle_state dedicated_states[NR_DEDICATED_STATES] = {
> { /* Snooze */
> .name = "snooze",
> .desc = "snooze",


Saving and restoring the current cede hint value helps in maintaining
compatibility with other parts of the kernel. Over the long term, we
can make the cpuidle driver deterministically set the CEDE hint at each
invocation of the H_CEDE call, so that we do not have to do multiple
redundant save-restores.
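
(Purely as an illustration of that longer-term direction, not a
proposal for this series: with the driver owning the VPA hint, each
idle entry would just write the hint for the state being entered,
reusing check_and_cede_processor() from the driver, with no
save/restore. The helper name here is made up.)

static void enter_cede_state(u8 hint)
{
	/* Sketch only: unconditionally own the VPA hint on every entry. */
	get_lppaca()->cede_latency_hint = hint;
	check_and_cede_processor();
}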

This is a reasonable start to cleaning up the cpuidle subsystem on
PAPR guests.

--Vaidy

2020-07-27 14:39:30

by Rafael J. Wysocki

Subject: Re: [PATCH 0/5] cpuidle-pseries: Parse extended CEDE information for idle.

On Tue, Jul 7, 2020 at 1:32 PM Gautham R Shenoy <[email protected]> wrote:
>
> Hi,
>
> On Tue, Jul 07, 2020 at 04:41:34PM +0530, Gautham R. Shenoy wrote:
> > From: "Gautham R. Shenoy" <[email protected]>
> >
> > Hi,
> >
> >
> >
> >
> > Gautham R. Shenoy (5):
> > cpuidle-pseries: Set the latency-hint before entering CEDE
> > cpuidle-pseries: Add function to parse extended CEDE records
> > cpuidle-pseries : Fixup exit latency for CEDE(0)
> > cpuidle-pseries : Include extended CEDE states in cpuidle framework
> > cpuidle-pseries: Block Extended CEDE(1) which adds no additional
> > value.
>
> Forgot to mention that these patches are on top of Nathan's series to
> remove extended CEDE offline and bogus topology update code:
> https://lore.kernel.org/linuxppc-dev/[email protected]/

OK, so this is targeted at the powerpc maintainers, isn't it?

2020-07-27 19:30:28

by Gautham R Shenoy

Subject: Re: [PATCH 0/5] cpuidle-pseries: Parse extended CEDE information for idle.

Hello Rafael,

On Mon, Jul 27, 2020 at 04:14:12PM +0200, Rafael J. Wysocki wrote:
> On Tue, Jul 7, 2020 at 1:32 PM Gautham R Shenoy <[email protected]> wrote:
> >
> > Hi,
> >
> > On Tue, Jul 07, 2020 at 04:41:34PM +0530, Gautham R. Shenoy wrote:
> > > From: "Gautham R. Shenoy" <[email protected]>
> > >
> > > Hi,
> > >
> > >
> > >
> > >
> > > Gautham R. Shenoy (5):
> > > cpuidle-pseries: Set the latency-hint before entering CEDE
> > > cpuidle-pseries: Add function to parse extended CEDE records
> > > cpuidle-pseries : Fixup exit latency for CEDE(0)
> > > cpuidle-pseries : Include extended CEDE states in cpuidle framework
> > > cpuidle-pseries: Block Extended CEDE(1) which adds no additional
> > > value.
> >
> > Forgot to mention that these patches are on top of Nathan's series to
> > remove extended CEDE offline and bogus topology update code:
> > https://lore.kernel.org/linuxppc-dev/[email protected]/
>
> OK, so this is targeted at the powerpc maintainers, isn't it?

Yes, the code is powerpc-specific.

Also, I noticed that Nathan's patches have been merged by Michael
Ellerman into the powerpc/merge tree. I will rebase and post a v2 of
this patch series.

--
Thanks and Regards
gautham.