2020-12-02 05:07:53

by Srikar Dronamraju

Subject: [PATCH v2 0/4] Powerpc: Better preemption for shared processor

Currently, vcpu_is_preempted() bases its result solely on the yield_count of
the target vCPU on a shared-processor LPAR. On a PowerVM LPAR, PHYP schedules
at the SMT8 core boundary, i.e. all CPUs belonging to a core are either group
scheduled in or group scheduled out. This can be used to better predict
non-preempted CPUs on PowerVM shared LPARs.
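
The heuristic, condensed here from patch 4/4 (CONFIG_PPC_SPLPAR ifdefs
omitted), is roughly: a CPU that shares a core with the currently running
CPU cannot be preempted, so only fall back to the yield_count check for
CPUs on other cores.

static inline bool vcpu_is_preempted(int cpu)
{
	if (!is_shared_processor())
		return false;

	/*
	 * On PowerVM (i.e. not a KVM guest), PHYP dispatches whole cores,
	 * so a CPU on the same core as the caller cannot be preempted.
	 */
	if (!is_kvm_guest() &&
	    cpu_first_thread_sibling(cpu) ==
			cpu_first_thread_sibling(smp_processor_id()))
		return false;

	/* Odd yield_count means the target vCPU is currently preempted. */
	return yield_count_of(cpu) & 1;
}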

perf stat -r 5 -a perf bench sched pipe -l 10000000 (lower time is better)

powerpc/next
35,107,951.20 msec cpu-clock # 255.898 CPUs utilized ( +- 0.31% )
23,655,348 context-switches # 0.674 K/sec ( +- 3.72% )
14,465 cpu-migrations # 0.000 K/sec ( +- 5.37% )
82,463 page-faults # 0.002 K/sec ( +- 8.40% )
1,127,182,328,206 cycles # 0.032 GHz ( +- 1.60% ) (66.67%)
78,587,300,622 stalled-cycles-frontend # 6.97% frontend cycles idle ( +- 0.08% ) (50.01%)
654,124,218,432 stalled-cycles-backend # 58.03% backend cycles idle ( +- 1.74% ) (50.01%)
834,013,059,242 instructions # 0.74 insn per cycle
# 0.78 stalled cycles per insn ( +- 0.73% ) (66.67%)
132,911,454,387 branches # 3.786 M/sec ( +- 0.59% ) (50.00%)
2,890,882,143 branch-misses # 2.18% of all branches ( +- 0.46% ) (50.00%)

137.195 +- 0.419 seconds time elapsed ( +- 0.31% )

powerpc/next + patchset
29,981,702.64 msec cpu-clock # 255.881 CPUs utilized ( +- 1.30% )
40,162,456 context-switches # 0.001 M/sec ( +- 0.01% )
1,110 cpu-migrations # 0.000 K/sec ( +- 5.20% )
62,616 page-faults # 0.002 K/sec ( +- 3.93% )
1,430,030,626,037 cycles # 0.048 GHz ( +- 1.41% ) (66.67%)
83,202,707,288 stalled-cycles-frontend # 5.82% frontend cycles idle ( +- 0.75% ) (50.01%)
744,556,088,520 stalled-cycles-backend # 52.07% backend cycles idle ( +- 1.39% ) (50.01%)
940,138,418,674 instructions # 0.66 insn per cycle
# 0.79 stalled cycles per insn ( +- 0.51% ) (66.67%)
146,452,852,283 branches # 4.885 M/sec ( +- 0.80% ) (50.00%)
3,237,743,996 branch-misses # 2.21% of all branches ( +- 1.18% ) (50.01%)

117.17 +- 1.52 seconds time elapsed ( +- 1.30% )

This is around a 14.6% improvement in elapsed time (137.195s -> 117.17s).

Changelog:
v1->v2:
v1: https://lore.kernel.org/linuxppc-dev/[email protected]/t/#u
- Rebased to 27th Nov linuxppc/merge tree.
- Moved a hunk to fix a 'no previous prototype' warning reported by [email protected]
https://lists.01.org/hyperkitty/list/[email protected]/thread/C6PTRPHWMC7VV4OTYN3ISYKDHTDQS6YI/

Cc: linuxppc-dev <[email protected]>
Cc: LKML <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Nathan Lynch <[email protected]>
Cc: Gautham R Shenoy <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Phil Auld <[email protected]>

Srikar Dronamraju (4):
powerpc: Refactor is_kvm_guest declaration to new header
powerpc: Rename is_kvm_guest to check_kvm_guest
powerpc: Reintroduce is_kvm_guest
powerpc/paravirt: Use is_kvm_guest in vcpu_is_preempted

arch/powerpc/include/asm/firmware.h | 6 ------
arch/powerpc/include/asm/kvm_guest.h | 25 +++++++++++++++++++++++++
arch/powerpc/include/asm/kvm_para.h | 2 +-
arch/powerpc/include/asm/paravirt.h | 18 ++++++++++++++++++
arch/powerpc/kernel/firmware.c | 5 ++++-
arch/powerpc/platforms/pseries/smp.c | 3 ++-
6 files changed, 50 insertions(+), 9 deletions(-)
create mode 100644 arch/powerpc/include/asm/kvm_guest.h

--
2.18.4


2020-12-02 05:08:07

by Srikar Dronamraju

Subject: [PATCH v2 1/4] powerpc: Refactor is_kvm_guest declaration to new header

Only code/declaration movement, in anticipation of making vcpu_is_preempted()
KVM-aware. No functional change.

Cc: linuxppc-dev <[email protected]>
Cc: LKML <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Nathan Lynch <[email protected]>
Cc: Gautham R Shenoy <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Phil Auld <[email protected]>
Acked-by: Waiman Long <[email protected]>
Signed-off-by: Srikar Dronamraju <[email protected]>
---
Changelog:
v1->v2:
v1: https://lore.kernel.org/linuxppc-dev/[email protected]/t/#u
- Moved a hunk to fix a 'no previous prototype' warning reported by [email protected]
https://lists.01.org/hyperkitty/list/[email protected]/thread/C6PTRPHWMC7VV4OTYN3ISYKDHTDQS6YI/

arch/powerpc/include/asm/firmware.h | 6 ------
arch/powerpc/include/asm/kvm_guest.h | 15 +++++++++++++++
arch/powerpc/include/asm/kvm_para.h | 2 +-
arch/powerpc/kernel/firmware.c | 1 +
arch/powerpc/platforms/pseries/smp.c | 1 +
5 files changed, 18 insertions(+), 7 deletions(-)
create mode 100644 arch/powerpc/include/asm/kvm_guest.h

diff --git a/arch/powerpc/include/asm/firmware.h b/arch/powerpc/include/asm/firmware.h
index 0b295bdb201e..aa6a5ef5d483 100644
--- a/arch/powerpc/include/asm/firmware.h
+++ b/arch/powerpc/include/asm/firmware.h
@@ -134,12 +134,6 @@ extern int ibm_nmi_interlock_token;

extern unsigned int __start___fw_ftr_fixup, __stop___fw_ftr_fixup;

-#if defined(CONFIG_PPC_PSERIES) || defined(CONFIG_KVM_GUEST)
-bool is_kvm_guest(void);
-#else
-static inline bool is_kvm_guest(void) { return false; }
-#endif
-
#ifdef CONFIG_PPC_PSERIES
void pseries_probe_fw_features(void);
#else
diff --git a/arch/powerpc/include/asm/kvm_guest.h b/arch/powerpc/include/asm/kvm_guest.h
new file mode 100644
index 000000000000..c0ace884a0e8
--- /dev/null
+++ b/arch/powerpc/include/asm/kvm_guest.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2020 IBM Corporation
+ */
+
+#ifndef __POWERPC_KVM_GUEST_H__
+#define __POWERPC_KVM_GUEST_H__
+
+#if defined(CONFIG_PPC_PSERIES) || defined(CONFIG_KVM_GUEST)
+bool is_kvm_guest(void);
+#else
+static inline bool is_kvm_guest(void) { return false; }
+#endif
+
+#endif /* __POWERPC_KVM_GUEST_H__ */
diff --git a/arch/powerpc/include/asm/kvm_para.h b/arch/powerpc/include/asm/kvm_para.h
index 744612054c94..abe1b5e82547 100644
--- a/arch/powerpc/include/asm/kvm_para.h
+++ b/arch/powerpc/include/asm/kvm_para.h
@@ -8,7 +8,7 @@
#ifndef __POWERPC_KVM_PARA_H__
#define __POWERPC_KVM_PARA_H__

-#include <asm/firmware.h>
+#include <asm/kvm_guest.h>

#include <uapi/asm/kvm_para.h>

diff --git a/arch/powerpc/kernel/firmware.c b/arch/powerpc/kernel/firmware.c
index fe48d319d490..5f48e5ad24cd 100644
--- a/arch/powerpc/kernel/firmware.c
+++ b/arch/powerpc/kernel/firmware.c
@@ -14,6 +14,7 @@
#include <linux/of.h>

#include <asm/firmware.h>
+#include <asm/kvm_guest.h>

#ifdef CONFIG_PPC64
unsigned long powerpc_firmware_features __read_mostly;
diff --git a/arch/powerpc/platforms/pseries/smp.c b/arch/powerpc/platforms/pseries/smp.c
index 92922491a81c..d578732c545d 100644
--- a/arch/powerpc/platforms/pseries/smp.c
+++ b/arch/powerpc/platforms/pseries/smp.c
@@ -42,6 +42,7 @@
#include <asm/plpar_wrappers.h>
#include <asm/code-patching.h>
#include <asm/svm.h>
+#include <asm/kvm_guest.h>

#include "pseries.h"

--
2.18.4

2020-12-02 05:08:26

by Srikar Dronamraju

Subject: [PATCH v2 2/4] powerpc: Rename is_kvm_guest to check_kvm_guest

is_kvm_guest() will be reused in a subsequent patch in a new avatar. Hence
rename is_kvm_guest() to check_kvm_guest(). No functional change.

Cc: linuxppc-dev <[email protected]>
Cc: LKML <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Nathan Lynch <[email protected]>
Cc: Gautham R Shenoy <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Phil Auld <[email protected]>
Acked-by: Waiman Long <[email protected]>
Signed-off-by: Srikar Dronamraju <[email protected]>
---
arch/powerpc/include/asm/kvm_guest.h | 4 ++--
arch/powerpc/include/asm/kvm_para.h | 2 +-
arch/powerpc/kernel/firmware.c | 2 +-
arch/powerpc/platforms/pseries/smp.c | 2 +-
4 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_guest.h b/arch/powerpc/include/asm/kvm_guest.h
index c0ace884a0e8..ba8291e02ba9 100644
--- a/arch/powerpc/include/asm/kvm_guest.h
+++ b/arch/powerpc/include/asm/kvm_guest.h
@@ -7,9 +7,9 @@
#define __POWERPC_KVM_GUEST_H__

#if defined(CONFIG_PPC_PSERIES) || defined(CONFIG_KVM_GUEST)
-bool is_kvm_guest(void);
+bool check_kvm_guest(void);
#else
-static inline bool is_kvm_guest(void) { return false; }
+static inline bool check_kvm_guest(void) { return false; }
#endif

#endif /* __POWERPC_KVM_GUEST_H__ */
diff --git a/arch/powerpc/include/asm/kvm_para.h b/arch/powerpc/include/asm/kvm_para.h
index abe1b5e82547..6fba06b6cfdb 100644
--- a/arch/powerpc/include/asm/kvm_para.h
+++ b/arch/powerpc/include/asm/kvm_para.h
@@ -14,7 +14,7 @@

static inline int kvm_para_available(void)
{
- return IS_ENABLED(CONFIG_KVM_GUEST) && is_kvm_guest();
+ return IS_ENABLED(CONFIG_KVM_GUEST) && check_kvm_guest();
}

static inline unsigned int kvm_arch_para_features(void)
diff --git a/arch/powerpc/kernel/firmware.c b/arch/powerpc/kernel/firmware.c
index 5f48e5ad24cd..0aeb6a5b1a9e 100644
--- a/arch/powerpc/kernel/firmware.c
+++ b/arch/powerpc/kernel/firmware.c
@@ -22,7 +22,7 @@ EXPORT_SYMBOL_GPL(powerpc_firmware_features);
#endif

#if defined(CONFIG_PPC_PSERIES) || defined(CONFIG_KVM_GUEST)
-bool is_kvm_guest(void)
+bool check_kvm_guest(void)
{
struct device_node *hyper_node;

diff --git a/arch/powerpc/platforms/pseries/smp.c b/arch/powerpc/platforms/pseries/smp.c
index d578732c545d..c70b4be9f0a5 100644
--- a/arch/powerpc/platforms/pseries/smp.c
+++ b/arch/powerpc/platforms/pseries/smp.c
@@ -211,7 +211,7 @@ static __init void pSeries_smp_probe(void)
if (!cpu_has_feature(CPU_FTR_SMT))
return;

- if (is_kvm_guest()) {
+ if (check_kvm_guest()) {
/*
* KVM emulates doorbells by disabling FSCR[MSGP] so msgsndp
* faults to the hypervisor which then reads the instruction
--
2.18.4

2020-12-02 05:09:59

by Srikar Dronamraju

Subject: [PATCH v2 4/4] powerpc/paravirt: Use is_kvm_guest in vcpu_is_preempted

If it's a shared LPAR but not a KVM guest, check whether the target vCPU
shares a core with the calling vCPU. On PowerVM, preemption happens only at
core granularity, so if one vCPU of a core is in a non-preempted state, we
can infer that all other vCPUs sharing the same core are also not preempted.
For example, with SMT8, CPUs 0-7 form one core: if the caller is running on
CPU 3 and queries CPU 5, both resolve to the same first thread sibling, so
CPU 5 is reported as not preempted.

Cc: linuxppc-dev <[email protected]>
Cc: LKML <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Nathan Lynch <[email protected]>
Cc: Gautham R Shenoy <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Phil Auld <[email protected]>
Acked-by: Waiman Long <[email protected]>
Signed-off-by: Srikar Dronamraju <[email protected]>
---
arch/powerpc/include/asm/paravirt.h | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)

diff --git a/arch/powerpc/include/asm/paravirt.h b/arch/powerpc/include/asm/paravirt.h
index 9362c94fe3aa..edc08f04aef7 100644
--- a/arch/powerpc/include/asm/paravirt.h
+++ b/arch/powerpc/include/asm/paravirt.h
@@ -10,6 +10,9 @@
#endif

#ifdef CONFIG_PPC_SPLPAR
+#include <asm/kvm_guest.h>
+#include <asm/cputhreads.h>
+
DECLARE_STATIC_KEY_FALSE(shared_processor);

static inline bool is_shared_processor(void)
@@ -74,6 +77,21 @@ static inline bool vcpu_is_preempted(int cpu)
{
if (!is_shared_processor())
return false;
+
+#ifdef CONFIG_PPC_SPLPAR
+ if (!is_kvm_guest()) {
+ int first_cpu = cpu_first_thread_sibling(smp_processor_id());
+
+ /*
+ * Preemption can only happen at core granularity. This CPU
+ * is not preempted if one of the CPUs of this core is not
+ * preempted.
+ */
+ if (cpu_first_thread_sibling(cpu) == first_cpu)
+ return false;
+ }
+#endif
+
if (yield_count_of(cpu) & 1)
return true;
return false;
--
2.18.4

2020-12-02 05:10:58

by Srikar Dronamraju

Subject: [PATCH v2 3/4] powerpc: Reintroduce is_kvm_guest in a new avatar

Introduce a static branch that is set during boot if the OS happens to be a
KVM guest. Subsequent checks to see if we are on KVM will rely on this static
branch. This static branch will be used in vcpu_is_preempted() in a
subsequent patch.
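
In other words, check_kvm_guest() remains the boot-time slow path that walks
the device tree, while is_kvm_guest() becomes a jump-label read that is
essentially free on hot paths such as vcpu_is_preempted(). Condensed from the
hunks below:

DEFINE_STATIC_KEY_FALSE(kvm_guest);		/* off by default */

static inline bool is_kvm_guest(void)
{
	/* Compiles down to a patched nop/branch, no memory load. */
	return static_branch_unlikely(&kvm_guest);
}

bool check_kvm_guest(void)
{
	/* ... existing device-tree check for "linux,kvm" ... */
	static_branch_enable(&kvm_guest);	/* flipped once at boot */
	return 1;
}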

Cc: linuxppc-dev <[email protected]>
Cc: LKML <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Nathan Lynch <[email protected]>
Cc: Gautham R Shenoy <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Waiman Long <[email protected]>
Cc: Phil Auld <[email protected]>
Acked-by: Waiman Long <[email protected]>
Signed-off-by: Srikar Dronamraju <[email protected]>
---
arch/powerpc/include/asm/kvm_guest.h | 10 ++++++++++
arch/powerpc/include/asm/kvm_para.h | 2 +-
arch/powerpc/kernel/firmware.c | 2 ++
3 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/kvm_guest.h b/arch/powerpc/include/asm/kvm_guest.h
index ba8291e02ba9..627ba272e781 100644
--- a/arch/powerpc/include/asm/kvm_guest.h
+++ b/arch/powerpc/include/asm/kvm_guest.h
@@ -7,8 +7,18 @@
#define __POWERPC_KVM_GUEST_H__

#if defined(CONFIG_PPC_PSERIES) || defined(CONFIG_KVM_GUEST)
+#include <linux/jump_label.h>
+
+DECLARE_STATIC_KEY_FALSE(kvm_guest);
+
+static inline bool is_kvm_guest(void)
+{
+ return static_branch_unlikely(&kvm_guest);
+}
+
bool check_kvm_guest(void);
#else
+static inline bool is_kvm_guest(void) { return false; }
static inline bool check_kvm_guest(void) { return false; }
#endif

diff --git a/arch/powerpc/include/asm/kvm_para.h b/arch/powerpc/include/asm/kvm_para.h
index 6fba06b6cfdb..abe1b5e82547 100644
--- a/arch/powerpc/include/asm/kvm_para.h
+++ b/arch/powerpc/include/asm/kvm_para.h
@@ -14,7 +14,7 @@

static inline int kvm_para_available(void)
{
- return IS_ENABLED(CONFIG_KVM_GUEST) && check_kvm_guest();
+ return IS_ENABLED(CONFIG_KVM_GUEST) && is_kvm_guest();
}

static inline unsigned int kvm_arch_para_features(void)
diff --git a/arch/powerpc/kernel/firmware.c b/arch/powerpc/kernel/firmware.c
index 0aeb6a5b1a9e..28498fc573f2 100644
--- a/arch/powerpc/kernel/firmware.c
+++ b/arch/powerpc/kernel/firmware.c
@@ -22,6 +22,7 @@ EXPORT_SYMBOL_GPL(powerpc_firmware_features);
#endif

#if defined(CONFIG_PPC_PSERIES) || defined(CONFIG_KVM_GUEST)
+DEFINE_STATIC_KEY_FALSE(kvm_guest);
bool check_kvm_guest(void)
{
struct device_node *hyper_node;
@@ -33,6 +34,7 @@ bool check_kvm_guest(void)
if (!of_device_is_compatible(hyper_node, "linux,kvm"))
return 0;

+ static_branch_enable(&kvm_guest);
return 1;
}
#endif
--
2.18.4

2020-12-10 14:34:37

by Michael Ellerman

Subject: Re: [PATCH v2 0/4] Powerpc: Better preemption for shared processor

On Wed, 2 Dec 2020 10:34:52 +0530, Srikar Dronamraju wrote:
> Currently, vcpu_is_preempted() bases its result solely on the yield_count
> of the target vCPU on a shared-processor LPAR. On a PowerVM LPAR, PHYP
> schedules at the SMT8 core boundary, i.e. all CPUs belonging to a core are
> either group scheduled in or group scheduled out. This can be used to
> better predict non-preempted CPUs on PowerVM shared LPARs.
>
> perf stat -r 5 -a perf bench sched pipe -l 10000000 (lower time is better)
>
> [...]

Applied to powerpc/next.

[1/4] powerpc: Refactor is_kvm_guest() declaration to new header
https://git.kernel.org/powerpc/c/92cc6bf01c7f4c5cfefd1963985c0064687ebeda
[2/4] powerpc: Rename is_kvm_guest() to check_kvm_guest()
https://git.kernel.org/powerpc/c/16520a858a995742c2d2248e86a6026bd0316562
[3/4] powerpc: Reintroduce is_kvm_guest() as a fast-path check
https://git.kernel.org/powerpc/c/a21d1becaa3f17a97b933ffa677b526afc514ec5
[4/4] powerpc/paravirt: Use is_kvm_guest() in vcpu_is_preempted()
https://git.kernel.org/powerpc/c/ca3f969dcb111d35674b66bdcb72beb2c426b9b5

cheers