2022-04-22 19:11:28

by Sandipan Das

Subject: [PATCH v2 0/7] perf/x86/amd/core: Add AMD PerfMonV2 support

Add support for using AMD Performance Monitoring Version 2
(PerfMonV2) features on upcoming processors. New CPU features
are introduced for PerfMonV2 detection. New MSR definitions
are added to make use of an alternative PMC management scheme
based on the new PMC global control and status registers.

The global control register provides the ability to start and
stop multiple PMCs at the same time. This makes it possible
to enable or disable all counters with a single MSR write
instead of writing to the individual PMC control registers
iteratively under x86_pmu_{enable,disable}(). The effects
can be seen when counting the same events across multiple
PMCs.

E.g.

$ sudo perf stat -e "{cycles,instructions,cycles,instructions}" sleep 1

Before:

Performance counter stats for 'sleep 1':

1013281 cycles
1452859 instructions # 1.43 insn per cycle
1023462 cycles
1461724 instructions # 1.43 insn per cycle

1.001644276 seconds time elapsed

0.001948000 seconds user
0.000000000 seconds sys

After:

Performance counter stats for 'sleep 1':

999165 cycles
1440456 instructions # 1.44 insn per cycle
999165 cycles
1440456 instructions # 1.44 insn per cycle

1.001879504 seconds time elapsed

0.001817000 seconds user
0.000000000 seconds sys
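
To illustrate, a minimal sketch of the two control schemes (simplified
pseudo-driver code, not taken from this series; NUM_PMCS and
per_pmc_config[] are placeholder names):

  /* Legacy: program and enable each PMC with its own MSR write */
  static u64 per_pmc_config[NUM_PMCS];    /* event selects, placeholder */

  static void pmu_enable_all_legacy(void)
  {
          int i;

          for (i = 0; i < NUM_PMCS; i++)
                  wrmsrl(MSR_F15H_PERF_CTL + 2 * i,
                         per_pmc_config[i] | ARCH_PERFMON_EVENTSEL_ENABLE);
  }

  /* PerfMonV2: one write to PerfCntrGlobalCtl starts all PMCs together */
  static void pmu_enable_all_v2(void)
  {
          wrmsrl(MSR_AMD64_PERF_CNTR_GLOBAL_CTL, amd_pmu_global_cntr_mask);
  }

Since all PMCs now start and stop at the same instant, counters
programmed with the same event read back identical values, as seen in
the "After" output above.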

No additional failures are seen upon running the following:
* perf built-in test suite
* perf_event_tests suite
* rr test suite

Previous versions can be found at:
v1: https://lore.kernel.org/all/[email protected]/

Changes in v2:
- Sort PerfCntrGlobal* register definitions based on MSR index.
- Use wrmsrl() in cpu_{starting,dead}().
- Add enum to extract bitfields from CPUID leaf 0x80000022.
- Remove static calls for counter management functions.
- Stop counters before inspecting overflow status in NMI handler.
- Save and restore PMU enabled state in NMI handler.
- Remove unused variable in NMI handler.
- Remove redundant write to APIC_LVTPC in NMI handler.
- Add comment on APIC_LVTPC mask bit behaviour during counter overflow.

Sandipan Das (7):
x86/cpufeatures: Add PerfMonV2 feature bit
x86/msr: Add PerfCntrGlobal* registers
perf/x86/amd/core: Detect PerfMonV2 support
perf/x86/amd/core: Detect available counters
perf/x86/amd/core: Add PerfMonV2 counter control
perf/x86/amd/core: Add PerfMonV2 overflow handling
kvm: x86/cpuid: Fix CPUID leaf 0xA

arch/x86/events/amd/core.c | 227 +++++++++++++++++++++++++++--
arch/x86/include/asm/cpufeatures.h | 2 +-
arch/x86/include/asm/msr-index.h | 5 +
arch/x86/include/asm/perf_event.h | 17 +++
arch/x86/kernel/cpu/scattered.c | 1 +
arch/x86/kvm/cpuid.c | 5 +
6 files changed, 240 insertions(+), 17 deletions(-)

--
2.32.0


2022-04-22 19:33:52

by Sandipan Das

Subject: [PATCH v2 2/7] x86/msr: Add PerfCntrGlobal* registers

Add MSR definitions that will be used to enable the new AMD
Performance Monitoring Version 2 (PerfMonV2) features. These
include:

* Performance Counter Global Control (PerfCntrGlobalCtl)
* Performance Counter Global Status (PerfCntrGlobalStatus)
* Performance Counter Global Status Clear (PerfCntrGlobalStatusClr)

The new Performance Counter Global Control and Status MSRs
provide an interface for enabling or disabling multiple
counters at the same time and for checking counter overflows
without probing each PMC's registers individually.

The availability of these registers is indicated through the
PerfMonV2 feature bit of CPUID leaf 0x80000022 EAX.
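
As a rough sketch of the intended usage (condensed from later patches
in this series; handle_pmc_overflow() and num_counters are placeholder
names):

  int idx;
  u64 status;

  /* Stop all PMCs with a single MSR write */
  wrmsrl(MSR_AMD64_PERF_CNTR_GLOBAL_CTL, 0);

  /* A single read reports the overflow state of every PMC */
  rdmsrl(MSR_AMD64_PERF_CNTR_GLOBAL_STATUS, status);

  for_each_set_bit(idx, (unsigned long *)&status, num_counters)
          handle_pmc_overflow(idx);

  /* Overflow bits are acknowledged through the Clr register */
  wrmsrl(MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR, status);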

Signed-off-by: Sandipan Das <[email protected]>
---
arch/x86/include/asm/msr-index.h | 5 +++++
1 file changed, 5 insertions(+)

diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 9e2e7185fc1d..a040f4af93c9 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -527,6 +527,11 @@
#define AMD_CPPC_DES_PERF(x) (((x) & 0xff) << 16)
#define AMD_CPPC_ENERGY_PERF_PREF(x) (((x) & 0xff) << 24)

+/* AMD Performance Counter Global Status and Control MSRs */
+#define MSR_AMD64_PERF_CNTR_GLOBAL_STATUS 0xc0000300
+#define MSR_AMD64_PERF_CNTR_GLOBAL_CTL 0xc0000301
+#define MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR 0xc0000302
+
/* Fam 17h MSRs */
#define MSR_F17H_IRPERF 0xc00000e9

--
2.32.0

2022-04-22 20:17:08

by Sandipan Das

Subject: [PATCH v2 3/7] perf/x86/amd/core: Detect PerfMonV2 support

AMD Performance Monitoring Version 2 (PerfMonV2) introduces
some new Core PMU features such as detection of the number
of available PMCs and management of PMCs using global
registers, namely PerfCntrGlobalCtl and PerfCntrGlobalStatus.

Clearing PerfCntrGlobalCtl and PerfCntrGlobalStatus ensures
that all PMCs are inactive and have no pending overflows
when CPUs are onlined or offlined.

The PMU version (x86_pmu.version) now indicates PerfMonV2
support and will be used to bypass the new features on
unsupported processors.
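
For example, assuming the six Core PMCs of current processors
(AMD64_NUM_COUNTERS_CORE), the enable/overflow mask computed below
works out to:

  amd_pmu_global_cntr_mask = (1ULL << 6) - 1;  /* 0x3f: bits 5:0, one bit per PMC */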

Signed-off-by: Sandipan Das <[email protected]>
---
arch/x86/events/amd/core.c | 27 +++++++++++++++++++++++++++
1 file changed, 27 insertions(+)

diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
index 8e1e818f8195..b70dfa028ba5 100644
--- a/arch/x86/events/amd/core.c
+++ b/arch/x86/events/amd/core.c
@@ -19,6 +19,9 @@ static unsigned long perf_nmi_window;
#define AMD_MERGE_EVENT ((0xFULL << 32) | 0xFFULL)
#define AMD_MERGE_EVENT_ENABLE (AMD_MERGE_EVENT | ARCH_PERFMON_EVENTSEL_ENABLE)

+/* PMC Enable and Overflow bits for PerfCntrGlobal* registers */
+static u64 amd_pmu_global_cntr_mask __read_mostly;
+
static __initconst const u64 amd_hw_cache_event_ids
[PERF_COUNT_HW_CACHE_MAX]
[PERF_COUNT_HW_CACHE_OP_MAX]
@@ -578,6 +581,18 @@ static struct amd_nb *amd_alloc_nb(int cpu)
return nb;
}

+static void amd_pmu_cpu_reset(int cpu)
+{
+ if (x86_pmu.version < 2)
+ return;
+
+ /* Clear enable bits i.e. PerfCntrGlobalCtl.PerfCntrEn */
+ wrmsrl(MSR_AMD64_PERF_CNTR_GLOBAL_CTL, 0);
+
+ /* Clear overflow bits i.e. PerfCntrGlobalStatus.PerfCntrOvfl */
+ wrmsrl(MSR_AMD64_PERF_CNTR_GLOBAL_STATUS_CLR, amd_pmu_global_cntr_mask);
+}
+
static int amd_pmu_cpu_prepare(int cpu)
{
struct cpu_hw_events *cpuc = &per_cpu(cpu_hw_events, cpu);
@@ -625,6 +640,7 @@ static void amd_pmu_cpu_starting(int cpu)
cpuc->amd_nb->refcnt++;

amd_brs_reset();
+ amd_pmu_cpu_reset(cpu);
}

static void amd_pmu_cpu_dead(int cpu)
@@ -644,6 +660,8 @@ static void amd_pmu_cpu_dead(int cpu)

cpuhw->amd_nb = NULL;
}
+
+ amd_pmu_cpu_reset(cpu);
}

/*
@@ -1185,6 +1203,15 @@ static int __init amd_core_pmu_init(void)
x86_pmu.eventsel = MSR_F15H_PERF_CTL;
x86_pmu.perfctr = MSR_F15H_PERF_CTR;
x86_pmu.num_counters = AMD64_NUM_COUNTERS_CORE;
+
+ /* Check for Performance Monitoring v2 support */
+ if (boot_cpu_has(X86_FEATURE_PERFMON_V2)) {
+ /* Update PMU version for later usage */
+ x86_pmu.version = 2;
+
+ amd_pmu_global_cntr_mask = (1ULL << x86_pmu.num_counters) - 1;
+ }
+
/*
* AMD Core perfctr has separate MSRs for the NB events, see
* the amd/uncore.c driver.
--
2.32.0

2022-04-22 20:18:11

by Sandipan Das

Subject: [PATCH v2 4/7] perf/x86/amd/core: Detect available counters

If AMD Performance Monitoring Version 2 (PerfMonV2) is
supported, use CPUID leaf 0x80000022 EBX to detect the
number of Core PMCs. This offers more flexibility if the
number of available counters changes in later processor
families.
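
For reference, the same field can be probed from userspace; a minimal
sketch using the compiler's <cpuid.h> helper (illustrative only, not
part of this patch):

  #include <cpuid.h>
  #include <stdio.h>

  int main(void)
  {
          unsigned int eax, ebx, ecx, edx;

          /* Leaf 0x80000022: Extended Performance Monitoring and Debug */
          if (!__get_cpuid_count(0x80000022, 0, &eax, &ebx, &ecx, &edx))
                  return 1;       /* leaf not supported */

          /* EBX bits 3:0 enumerate the number of Core PMCs */
          printf("Core PMCs: %u\n", ebx & 0xf);
          return 0;
  }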

Signed-off-by: Sandipan Das <[email protected]>
---
arch/x86/events/amd/core.c | 6 ++++++
arch/x86/include/asm/perf_event.h | 17 +++++++++++++++++
2 files changed, 23 insertions(+)

diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
index b70dfa028ba5..52fd7941a724 100644
--- a/arch/x86/events/amd/core.c
+++ b/arch/x86/events/amd/core.c
@@ -1186,6 +1186,7 @@ static const struct attribute_group *amd_attr_update[] = {

static int __init amd_core_pmu_init(void)
{
+ union cpuid_0x80000022_ebx ebx;
u64 even_ctr_mask = 0ULL;
int i;

@@ -1206,9 +1207,14 @@ static int __init amd_core_pmu_init(void)

/* Check for Performance Monitoring v2 support */
if (boot_cpu_has(X86_FEATURE_PERFMON_V2)) {
+ ebx.full = cpuid_ebx(EXT_PERFMON_DEBUG_FEATURES);
+
/* Update PMU version for later usage */
x86_pmu.version = 2;

+ /* Find the number of available Core PMCs */
+ x86_pmu.num_counters = ebx.split.num_core_pmc;
+
amd_pmu_global_cntr_mask = (1ULL << x86_pmu.num_counters) - 1;
}

diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
index a5dea5da1b52..7aa1d420c779 100644
--- a/arch/x86/include/asm/perf_event.h
+++ b/arch/x86/include/asm/perf_event.h
@@ -186,6 +186,18 @@ union cpuid28_ecx {
unsigned int full;
};

+/*
+ * AMD "Extended Performance Monitoring and Debug" CPUID
+ * detection/enumeration details:
+ */
+union cpuid_0x80000022_ebx {
+ struct {
+ /* Number of Core Performance Counters */
+ unsigned int num_core_pmc:4;
+ } split;
+ unsigned int full;
+};
+
struct x86_pmu_capability {
int version;
int num_counters_gp;
@@ -372,6 +384,11 @@ struct pebs_xmm {
u64 xmm[16*2]; /* two entries for each register */
};

+/*
+ * AMD Extended Performance Monitoring and Debug cpuid feature detection
+ */
+#define EXT_PERFMON_DEBUG_FEATURES 0x80000022
+
/*
* IBS cpuid feature detection
*/
--
2.32.0