2023-04-07 23:34:35

by Sean Christopherson

Subject: [PATCH v4 0/6] KVM: selftests: Add "instructions required" testcase

This is the selftests portion of Aaron's series[*] to fix incorrect
"instructions required" counting. The big change from v3 is to use a
common struct to copy counter values to/from the guest instead of smushing
the values into a single 64-bit value.

[*] https://lkml.kernel.org/r/20230307141400.1486314-1-aaronlewis%40google.com
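
For reference, v3 returned the measured counts to the host by packing
them into the single 64-bit GUEST_SYNC() payload via a bitfield union
(the "perf_counter" struct that patch 5 deletes). A minimal,
self-contained illustration of the truncation that packing causes; the
layout is copied from the old test, the example value is hypothetical:

	#include <stdint.h>
	#include <stdio.h>

	struct perf_counter {		/* layout from the old test */
		union {
			uint64_t raw;
			struct {
				uint64_t loads:22;
				uint64_t stores:22;
				uint64_t loads_stores:20;
			};
		};
	};

	int main(void)
	{
		struct perf_counter c = { .loads = 1ull << 22 };

		/* Prints 0: the 22-bit field dropped the count. */
		printf("loads = %llu\n", (unsigned long long)c.loads);
		return 0;
	}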

Aaron Lewis (4):
KVM: selftests: Add a common helper for the PMU event filter guest
code
KVM: selftests: Add helpers for PMC asserts in PMU event filter test
KVM: selftests: Print detailed info in PMU event filter asserts
KVM: selftests: Test the PMU event "Instructions retired"

Sean Christopherson (2):
KVM: selftests: Use error codes to signal errors in PMU event filter
test
KVM: selftests: Copy full counter values from guest in PMU event
filter test

.../kvm/x86_64/pmu_event_filter_test.c | 252 ++++++++++--------
1 file changed, 140 insertions(+), 112 deletions(-)


base-commit: b821bf96cf3bed0c1c8d34659299a3a1dc47218d
--
2.40.0.577.gac1e443424-goog


2023-04-07 23:35:05

by Sean Christopherson

Subject: [PATCH v4 2/6] KVM: selftests: Add helpers for PMC asserts in PMU event filter test

From: Aaron Lewis <[email protected]>

Add helper macros to consolidate the asserts that a PMC is/isn't counting
(branch) instructions retired. This will make it easier to add additional
asserts related to counting instructions later on.

No functional changes intended.
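
Note: these are macros rather than helper functions, presumably so that
__func__ in the printed messages resolves to the test case that invoked
the assert instead of to a common helper. A standalone sketch of the
difference (all names here are hypothetical except test_member_deny_list):

	#include <stdio.h>

	#define ASSERT_VIA_MACRO() printf("macro sees:    %s\n", __func__)

	static void assert_via_func(void)
	{
		printf("function sees: %s\n", __func__);
	}

	static void test_member_deny_list(void)
	{
		ASSERT_VIA_MACRO();	/* prints "test_member_deny_list" */
		assert_via_func();	/* prints "assert_via_func" */
	}

	int main(void)
	{
		test_member_deny_list();
		return 0;
	}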

Signed-off-by: Aaron Lewis <[email protected]>
[sean: add "INSTRUCTIONS", massage changelog]
Signed-off-by: Sean Christopherson <[email protected]>
---
.../kvm/x86_64/pmu_event_filter_test.c | 52 ++++++++++---------
1 file changed, 27 insertions(+), 25 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index a00a9d6ea41e..9b53e02a0565 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -244,14 +244,27 @@ static struct kvm_pmu_event_filter *remove_event(struct kvm_pmu_event_filter *f,
return f;
}

+#define ASSERT_PMC_COUNTING_INSTRUCTIONS(count) \
+do { \
+ if (count != NUM_BRANCHES) \
+ pr_info("%s: Branch instructions retired = %lu (expected %u)\n", \
+ __func__, count, NUM_BRANCHES); \
+ TEST_ASSERT(count, "Allowed PMU event is not counting."); \
+} while (0)
+
+#define ASSERT_PMC_NOT_COUNTING_INSTRUCTIONS(count) \
+do { \
+ if (count) \
+ pr_info("%s: Branch instructions retired = %lu (expected 0)\n", \
+ __func__, count); \
+ TEST_ASSERT(!count, "Disallowed PMU Event is counting"); \
+} while (0)
+
static void test_without_filter(struct kvm_vcpu *vcpu)
{
uint64_t count = run_vcpu_to_sync(vcpu);

- if (count != NUM_BRANCHES)
- pr_info("%s: Branch instructions retired = %lu (expected %u)\n",
- __func__, count, NUM_BRANCHES);
- TEST_ASSERT(count, "Allowed PMU event is not counting");
+ ASSERT_PMC_COUNTING_INSTRUCTIONS(count);
}

static uint64_t test_with_filter(struct kvm_vcpu *vcpu,
@@ -269,12 +282,9 @@ static void test_amd_deny_list(struct kvm_vcpu *vcpu)

f = create_pmu_event_filter(&event, 1, KVM_PMU_EVENT_DENY, 0);
count = test_with_filter(vcpu, f);
-
free(f);
- if (count != NUM_BRANCHES)
- pr_info("%s: Branch instructions retired = %lu (expected %u)\n",
- __func__, count, NUM_BRANCHES);
- TEST_ASSERT(count, "Allowed PMU event is not counting");
+
+ ASSERT_PMC_COUNTING_INSTRUCTIONS(count);
}

static void test_member_deny_list(struct kvm_vcpu *vcpu)
@@ -283,10 +293,8 @@ static void test_member_deny_list(struct kvm_vcpu *vcpu)
uint64_t count = test_with_filter(vcpu, f);

free(f);
- if (count)
- pr_info("%s: Branch instructions retired = %lu (expected 0)\n",
- __func__, count);
- TEST_ASSERT(!count, "Disallowed PMU Event is counting");
+
+ ASSERT_PMC_NOT_COUNTING_INSTRUCTIONS(count);
}

static void test_member_allow_list(struct kvm_vcpu *vcpu)
@@ -295,10 +303,8 @@ static void test_member_allow_list(struct kvm_vcpu *vcpu)
uint64_t count = test_with_filter(vcpu, f);

free(f);
- if (count != NUM_BRANCHES)
- pr_info("%s: Branch instructions retired = %lu (expected %u)\n",
- __func__, count, NUM_BRANCHES);
- TEST_ASSERT(count, "Allowed PMU event is not counting");
+
+ ASSERT_PMC_COUNTING_INSTRUCTIONS(count);
}

static void test_not_member_deny_list(struct kvm_vcpu *vcpu)
@@ -310,10 +316,8 @@ static void test_not_member_deny_list(struct kvm_vcpu *vcpu)
remove_event(f, AMD_ZEN_BR_RETIRED);
count = test_with_filter(vcpu, f);
free(f);
- if (count != NUM_BRANCHES)
- pr_info("%s: Branch instructions retired = %lu (expected %u)\n",
- __func__, count, NUM_BRANCHES);
- TEST_ASSERT(count, "Allowed PMU event is not counting");
+
+ ASSERT_PMC_COUNTING_INSTRUCTIONS(count);
}

static void test_not_member_allow_list(struct kvm_vcpu *vcpu)
@@ -325,10 +329,8 @@ static void test_not_member_allow_list(struct kvm_vcpu *vcpu)
remove_event(f, AMD_ZEN_BR_RETIRED);
count = test_with_filter(vcpu, f);
free(f);
- if (count)
- pr_info("%s: Branch instructions retired = %lu (expected 0)\n",
- __func__, count);
- TEST_ASSERT(!count, "Disallowed PMU Event is counting");
+
+ ASSERT_PMC_NOT_COUNTING_INSTRUCTIONS(count);
}

/*
--
2.40.0.577.gac1e443424-goog

2023-04-07 23:35:07

by Sean Christopherson

Subject: [PATCH v4 1/6] KVM: selftests: Add a common helper for the PMU event filter guest code

From: Aaron Lewis <[email protected]>

Split out the common parts of the Intel and AMD guest code in the PMU
event filter test into a helper function. This is in preparation for
adding additional counters to the test.

No functional changes intended.
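
For context, the workload being measured is the single "loop ." that the
new helper wraps: LOOP decrements (R|E)CX and branches back to itself
until the count hits zero, so seeding the register with NUM_BRANCHES
retires exactly NUM_BRANCHES branch instructions. An annotated copy of
the line (the comment is added here, it is not in the patch):

	/*
	 * "+c" pins the operand to (R|E)CX, the register LOOP implicitly
	 * decrements; the compound literal seeds it with NUM_BRANCHES.
	 * Roughly: for (i = NUM_BRANCHES; i; i--) ;
	 */
	__asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));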

Signed-off-by: Aaron Lewis <[email protected]>
Signed-off-by: Sean Christopherson <[email protected]>
---
.../kvm/x86_64/pmu_event_filter_test.c | 29 ++++++++++++-------
1 file changed, 18 insertions(+), 11 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 1f60dfae69e0..a00a9d6ea41e 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -100,6 +100,15 @@ static void check_msr(uint32_t msr, uint64_t bits_to_flip)
GUEST_SYNC(0);
}

+static uint64_t run_and_measure_loop(uint32_t msr_base)
+{
+ uint64_t branches_retired = rdmsr(msr_base + 0);
+
+ __asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
+
+ return rdmsr(msr_base + 0) - branches_retired;
+}
+
static void intel_guest_code(void)
{
check_msr(MSR_CORE_PERF_GLOBAL_CTRL, 1);
@@ -108,16 +117,15 @@ static void intel_guest_code(void)
GUEST_SYNC(1);

for (;;) {
- uint64_t br0, br1;
+ uint64_t count;

wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
wrmsr(MSR_P6_EVNTSEL0, ARCH_PERFMON_EVENTSEL_ENABLE |
ARCH_PERFMON_EVENTSEL_OS | INTEL_BR_RETIRED);
- wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 1);
- br0 = rdmsr(MSR_IA32_PMC0);
- __asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
- br1 = rdmsr(MSR_IA32_PMC0);
- GUEST_SYNC(br1 - br0);
+ wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0x1);
+
+ count = run_and_measure_loop(MSR_IA32_PMC0);
+ GUEST_SYNC(count);
}
}

@@ -133,15 +141,14 @@ static void amd_guest_code(void)
GUEST_SYNC(1);

for (;;) {
- uint64_t br0, br1;
+ uint64_t count;

wrmsr(MSR_K7_EVNTSEL0, 0);
wrmsr(MSR_K7_EVNTSEL0, ARCH_PERFMON_EVENTSEL_ENABLE |
ARCH_PERFMON_EVENTSEL_OS | AMD_ZEN_BR_RETIRED);
- br0 = rdmsr(MSR_K7_PERFCTR0);
- __asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));
- br1 = rdmsr(MSR_K7_PERFCTR0);
- GUEST_SYNC(br1 - br0);
+
+ count = run_and_measure_loop(MSR_K7_PERFCTR0);
+ GUEST_SYNC(count);
}
}

--
2.40.0.577.gac1e443424-goog

2023-04-07 23:35:47

by Sean Christopherson

Subject: [PATCH v4 3/6] KVM: selftests: Print detailed info in PMU event filter asserts

From: Aaron Lewis <[email protected]>

Provide the actual vs. expected count in the PMU event filter test's
asserts instead of relying on pr_info() to provide the context, so that
all information needed to triage a failure is readily available even if
the environment in which the test is run captures only the assert
itself.

Signed-off-by: Aaron Lewis <[email protected]>
[sean: rewrite changelog]
Signed-off-by: Sean Christopherson <[email protected]>
---
.../selftests/kvm/x86_64/pmu_event_filter_test.c | 11 +++++------
1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 9b53e02a0565..ef07aaca2168 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -246,18 +246,17 @@ static struct kvm_pmu_event_filter *remove_event(struct kvm_pmu_event_filter *f,

#define ASSERT_PMC_COUNTING_INSTRUCTIONS(count) \
do { \
- if (count != NUM_BRANCHES) \
+ if (count && count != NUM_BRANCHES) \
pr_info("%s: Branch instructions retired = %lu (expected %u)\n", \
__func__, count, NUM_BRANCHES); \
- TEST_ASSERT(count, "Allowed PMU event is not counting."); \
+ TEST_ASSERT(count, "%s: Branch instructions retired = %lu (expected > 0)", \
+ __func__, count); \
} while (0)

#define ASSERT_PMC_NOT_COUNTING_INSTRUCTIONS(count) \
do { \
- if (count) \
- pr_info("%s: Branch instructions retired = %lu (expected 0)\n", \
- __func__, count); \
- TEST_ASSERT(!count, "Disallowed PMU Event is counting"); \
+ TEST_ASSERT(!count, "%s: Branch instructions retired = %lu (expected 0)", \
+ __func__, count); \
} while (0)

static void test_without_filter(struct kvm_vcpu *vcpu)
--
2.40.0.577.gac1e443424-goog

2023-04-07 23:35:49

by Sean Christopherson

Subject: [PATCH v4 5/6] KVM: selftests: Copy full counter values from guest in PMU event filter test

Use a single struct to track all PMC event counts in the PMU filter test,
and copy the full struct to/from the guest when running and measuring each
guest workload. Using a common struct avoids naming conflicts, e.g. the
loads/stores testcase has claimed "perf_counter", and eliminates the
unnecessary truncation of the counter values when they are propagated from
the guest MSRs to the host structs.

Zero the struct before running the guest workload to ensure that the test
doesn't get a false pass due to consuming data from a previous run.
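
sync_global_to_guest() and sync_global_from_guest() are the selftests
framework's existing helpers for keeping a host global and the guest's
view of that symbol coherent. Conceptually (a rough sketch of what the
macros do, not their exact implementation):

	void *hva = addr_gva2hva(vcpu->vm, (vm_vaddr_t)&pmc_results);

	memcpy(hva, &pmc_results, sizeof(pmc_results));	/* host => guest */
	/* ... run the guest workload ... */
	memcpy(&pmc_results, hva, sizeof(pmc_results));	/* guest => host */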

Signed-off-by: Sean Christopherson <[email protected]>
---
.../kvm/x86_64/pmu_event_filter_test.c | 170 +++++++++---------
1 file changed, 80 insertions(+), 90 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 0432ba347b22..5112aece3f95 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -71,6 +71,13 @@ static const uint64_t event_list[] = {
AMD_ZEN_BR_RETIRED,
};

+struct {
+ uint64_t loads;
+ uint64_t stores;
+ uint64_t loads_stores;
+ uint64_t branches_retired;
+} pmc_results;
+
/*
* If we encounter a #GP during the guest PMU sanity check, then the guest
* PMU is not functional. Inform the hypervisor via GUEST_SYNC(0).
@@ -100,13 +107,13 @@ static void check_msr(uint32_t msr, uint64_t bits_to_flip)
GUEST_SYNC(-EIO);
}

-static uint64_t run_and_measure_loop(uint32_t msr_base)
+static void run_and_measure_loop(uint32_t msr_base)
{
- uint64_t branches_retired = rdmsr(msr_base + 0);
+ const uint64_t branches_retired = rdmsr(msr_base + 0);

__asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));

- return rdmsr(msr_base + 0) - branches_retired;
+ pmc_results.branches_retired = rdmsr(msr_base + 0) - branches_retired;
}

static void intel_guest_code(void)
@@ -117,15 +124,13 @@ static void intel_guest_code(void)
GUEST_SYNC(0);

for (;;) {
- uint64_t count;
-
wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
wrmsr(MSR_P6_EVNTSEL0, ARCH_PERFMON_EVENTSEL_ENABLE |
ARCH_PERFMON_EVENTSEL_OS | INTEL_BR_RETIRED);
wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0x1);

- count = run_and_measure_loop(MSR_IA32_PMC0);
- GUEST_SYNC(count);
+ run_and_measure_loop(MSR_IA32_PMC0);
+ GUEST_SYNC(0);
}
}

@@ -141,14 +146,12 @@ static void amd_guest_code(void)
GUEST_SYNC(0);

for (;;) {
- uint64_t count;
-
wrmsr(MSR_K7_EVNTSEL0, 0);
wrmsr(MSR_K7_EVNTSEL0, ARCH_PERFMON_EVENTSEL_ENABLE |
ARCH_PERFMON_EVENTSEL_OS | AMD_ZEN_BR_RETIRED);

- count = run_and_measure_loop(MSR_K7_PERFCTR0);
- GUEST_SYNC(count);
+ run_and_measure_loop(MSR_K7_PERFCTR0);
+ GUEST_SYNC(0);
}
}

@@ -168,6 +171,19 @@ static uint64_t run_vcpu_to_sync(struct kvm_vcpu *vcpu)
return uc.args[1];
}

+static void run_vcpu_and_sync_pmc_results(struct kvm_vcpu *vcpu)
+{
+ uint64_t r;
+
+ memset(&pmc_results, 0, sizeof(pmc_results));
+ sync_global_to_guest(vcpu->vm, pmc_results);
+
+ r = run_vcpu_to_sync(vcpu);
+ TEST_ASSERT(!r, "Unexpected sync value: 0x%lx", r);
+
+ sync_global_from_guest(vcpu->vm, pmc_results);
+}
+
/*
* In a nested environment or if the vPMU is disabled, the guest PMU
* might not work as architected (accessing the PMU MSRs may raise
@@ -244,92 +260,93 @@ static struct kvm_pmu_event_filter *remove_event(struct kvm_pmu_event_filter *f,
return f;
}

-#define ASSERT_PMC_COUNTING_INSTRUCTIONS(count) \
+#define ASSERT_PMC_COUNTING_INSTRUCTIONS() \
do { \
- if (count && count != NUM_BRANCHES) \
+ uint64_t br = pmc_results.branches_retired; \
+ \
+ if (br && br != NUM_BRANCHES) \
pr_info("%s: Branch instructions retired = %lu (expected %u)\n", \
- __func__, count, NUM_BRANCHES); \
- TEST_ASSERT(count, "%s: Branch instructions retired = %lu (expected > 0)", \
- __func__, count); \
+ __func__, br, NUM_BRANCHES); \
+ TEST_ASSERT(br, "%s: Branch instructions retired = %lu (expected > 0)", \
+ __func__, br); \
} while (0)

-#define ASSERT_PMC_NOT_COUNTING_INSTRUCTIONS(count) \
+#define ASSERT_PMC_NOT_COUNTING_INSTRUCTIONS() \
do { \
- TEST_ASSERT(!count, "%s: Branch instructions retired = %lu (expected 0)", \
- __func__, count); \
+ uint64_t br = pmc_results.branches_retired; \
+ \
+ TEST_ASSERT(!br, "%s: Branch instructions retired = %lu (expected 0)", \
+ __func__, br); \
} while (0)

static void test_without_filter(struct kvm_vcpu *vcpu)
{
- uint64_t count = run_vcpu_to_sync(vcpu);
+ run_vcpu_and_sync_pmc_results(vcpu);

- ASSERT_PMC_COUNTING_INSTRUCTIONS(count);
+ ASSERT_PMC_COUNTING_INSTRUCTIONS();
}

-static uint64_t test_with_filter(struct kvm_vcpu *vcpu,
- struct kvm_pmu_event_filter *f)
+static void test_with_filter(struct kvm_vcpu *vcpu,
+ struct kvm_pmu_event_filter *f)
{
vm_ioctl(vcpu->vm, KVM_SET_PMU_EVENT_FILTER, f);
- return run_vcpu_to_sync(vcpu);
+ run_vcpu_and_sync_pmc_results(vcpu);
}

static void test_amd_deny_list(struct kvm_vcpu *vcpu)
{
uint64_t event = EVENT(0x1C2, 0);
struct kvm_pmu_event_filter *f;
- uint64_t count;

f = create_pmu_event_filter(&event, 1, KVM_PMU_EVENT_DENY, 0);
- count = test_with_filter(vcpu, f);
+ test_with_filter(vcpu, f);
free(f);

- ASSERT_PMC_COUNTING_INSTRUCTIONS(count);
+ ASSERT_PMC_COUNTING_INSTRUCTIONS();
}

static void test_member_deny_list(struct kvm_vcpu *vcpu)
{
struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_DENY);
- uint64_t count = test_with_filter(vcpu, f);

+ test_with_filter(vcpu, f);
free(f);

- ASSERT_PMC_NOT_COUNTING_INSTRUCTIONS(count);
+ ASSERT_PMC_NOT_COUNTING_INSTRUCTIONS();
}

static void test_member_allow_list(struct kvm_vcpu *vcpu)
{
struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_ALLOW);
- uint64_t count = test_with_filter(vcpu, f);

+ test_with_filter(vcpu, f);
free(f);

- ASSERT_PMC_COUNTING_INSTRUCTIONS(count);
+ ASSERT_PMC_COUNTING_INSTRUCTIONS();
}

static void test_not_member_deny_list(struct kvm_vcpu *vcpu)
{
struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_DENY);
- uint64_t count;

remove_event(f, INTEL_BR_RETIRED);
remove_event(f, AMD_ZEN_BR_RETIRED);
- count = test_with_filter(vcpu, f);
+ test_with_filter(vcpu, f);
free(f);

- ASSERT_PMC_COUNTING_INSTRUCTIONS(count);
+ ASSERT_PMC_COUNTING_INSTRUCTIONS();
}

static void test_not_member_allow_list(struct kvm_vcpu *vcpu)
{
struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_ALLOW);
- uint64_t count;

remove_event(f, INTEL_BR_RETIRED);
remove_event(f, AMD_ZEN_BR_RETIRED);
- count = test_with_filter(vcpu, f);
+ test_with_filter(vcpu, f);
free(f);

- ASSERT_PMC_NOT_COUNTING_INSTRUCTIONS(count);
+ ASSERT_PMC_NOT_COUNTING_INSTRUCTIONS();
}

/*
@@ -458,51 +475,30 @@ static bool supports_event_mem_inst_retired(void)
#define EXCLUDE_MASKED_ENTRY(event_select, mask, match) \
KVM_PMU_ENCODE_MASKED_ENTRY(event_select, mask, match, true)

-struct perf_counter {
- union {
- uint64_t raw;
- struct {
- uint64_t loads:22;
- uint64_t stores:22;
- uint64_t loads_stores:20;
- };
- };
-};
-
-static uint64_t masked_events_guest_test(uint32_t msr_base)
+static void masked_events_guest_test(uint32_t msr_base)
{
- uint64_t ld0, ld1, st0, st1, ls0, ls1;
- struct perf_counter c;
- int val;
-
/*
- * The acutal value of the counters don't determine the outcome of
+ * The actual value of the counters don't determine the outcome of
* the test. Only that they are zero or non-zero.
*/
- ld0 = rdmsr(msr_base + 0);
- st0 = rdmsr(msr_base + 1);
- ls0 = rdmsr(msr_base + 2);
+ const uint64_t loads = rdmsr(msr_base + 0);
+ const uint64_t stores = rdmsr(msr_base + 1);
+ const uint64_t loads_stores = rdmsr(msr_base + 2);
+ int val;

__asm__ __volatile__("movl $0, %[v];"
"movl %[v], %%eax;"
"incl %[v];"
: [v]"+m"(val) :: "eax");

- ld1 = rdmsr(msr_base + 0);
- st1 = rdmsr(msr_base + 1);
- ls1 = rdmsr(msr_base + 2);
-
- c.loads = ld1 - ld0;
- c.stores = st1 - st0;
- c.loads_stores = ls1 - ls0;
-
- return c.raw;
+ pmc_results.loads = rdmsr(msr_base + 0) - loads;
+ pmc_results.stores = rdmsr(msr_base + 1) - stores;
+ pmc_results.loads_stores = rdmsr(msr_base + 2) - loads_stores;
}

static void intel_masked_events_guest_code(void)
{
- uint64_t r;
-
for (;;) {
wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);

@@ -515,16 +511,13 @@ static void intel_masked_events_guest_code(void)

wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0x7);

- r = masked_events_guest_test(MSR_IA32_PMC0);
-
- GUEST_SYNC(r);
+ masked_events_guest_test(MSR_IA32_PMC0);
+ GUEST_SYNC(0);
}
}

static void amd_masked_events_guest_code(void)
{
- uint64_t r;
-
for (;;) {
wrmsr(MSR_K7_EVNTSEL0, 0);
wrmsr(MSR_K7_EVNTSEL1, 0);
@@ -537,26 +530,22 @@ static void amd_masked_events_guest_code(void)
wrmsr(MSR_K7_EVNTSEL2, ARCH_PERFMON_EVENTSEL_ENABLE |
ARCH_PERFMON_EVENTSEL_OS | LS_DISPATCH_LOAD_STORE);

- r = masked_events_guest_test(MSR_K7_PERFCTR0);
-
- GUEST_SYNC(r);
+ masked_events_guest_test(MSR_K7_PERFCTR0);
+ GUEST_SYNC(0);
}
}

-static struct perf_counter run_masked_events_test(struct kvm_vcpu *vcpu,
- const uint64_t masked_events[],
- const int nmasked_events)
+static void run_masked_events_test(struct kvm_vcpu *vcpu,
+ const uint64_t masked_events[],
+ const int nmasked_events)
{
struct kvm_pmu_event_filter *f;
- struct perf_counter r;

f = create_pmu_event_filter(masked_events, nmasked_events,
KVM_PMU_EVENT_ALLOW,
KVM_PMU_EVENT_FLAG_MASKED_EVENTS);
- r.raw = test_with_filter(vcpu, f);
+ test_with_filter(vcpu, f);
free(f);
-
- return r;
}

/* Matches KVM_PMU_EVENT_FILTER_MAX_EVENTS in pmu.c */
@@ -681,7 +670,6 @@ static void run_masked_events_tests(struct kvm_vcpu *vcpu, uint64_t *events,
int nevents)
{
int ntests = ARRAY_SIZE(test_cases);
- struct perf_counter c;
int i, n;

for (i = 0; i < ntests; i++) {
@@ -693,13 +681,15 @@ static void run_masked_events_tests(struct kvm_vcpu *vcpu, uint64_t *events,

n = append_test_events(test, events, nevents);

- c = run_masked_events_test(vcpu, events, n);
- TEST_ASSERT(bool_eq(c.loads, test->flags & ALLOW_LOADS) &&
- bool_eq(c.stores, test->flags & ALLOW_STORES) &&
- bool_eq(c.loads_stores,
+ run_masked_events_test(vcpu, events, n);
+
+ TEST_ASSERT(bool_eq(pmc_results.loads, test->flags & ALLOW_LOADS) &&
+ bool_eq(pmc_results.stores, test->flags & ALLOW_STORES) &&
+ bool_eq(pmc_results.loads_stores,
test->flags & ALLOW_LOADS_STORES),
- "%s loads: %u, stores: %u, loads + stores: %u",
- test->msg, c.loads, c.stores, c.loads_stores);
+ "%s loads: %lu, stores: %lu, loads + stores: %lu",
+ test->msg, pmc_results.loads, pmc_results.stores,
+ pmc_results.loads_stores);
}
}

--
2.40.0.577.gac1e443424-goog

2023-04-07 23:36:07

by Sean Christopherson

Subject: [PATCH v4 6/6] KVM: selftests: Test the PMU event "Instructions retired"

From: Aaron Lewis <[email protected]>

Add testing for the event "Instructions retired" (0xc0) in the PMU
event filter on both Intel and AMD to ensure that the event doesn't
count when it is disallowed. Unlike most of the other events, the
event "Instructions retired" will be incremented by KVM when an
instruction is emulated. Test that this case is being properly handled
and that KVM doesn't increment the counter when that event is
disallowed.
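
The deny path is the interesting case: even when KVM emulates an
instruction on the guest's behalf, a disallowed INST_RETIRED event must
not advance the counter. Using this test's own helpers, the expectation
reduces to the following sketch:

	/* INST_RETIRED deliberately left in the deny filter. */
	test_with_filter(vcpu, f);

	TEST_ASSERT(!pmc_results.instructions_retired,
		    "Disallowed event must not count, even for emulated instructions");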

Signed-off-by: Aaron Lewis <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Sean Christopherson <[email protected]>
---
.../kvm/x86_64/pmu_event_filter_test.c | 34 +++++++++++++++++--
1 file changed, 32 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 5112aece3f95..40507ed9fe8a 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -54,6 +54,21 @@

#define AMD_ZEN_BR_RETIRED EVENT(0xc2, 0)

+
+/*
+ * "Retired instructions", from Processor Programming Reference
+ * (PPR) for AMD Family 17h Model 01h, Revision B1 Processors,
+ * Preliminary Processor Programming Reference (PPR) for AMD Family
+ * 17h Model 31h, Revision B0 Processors, and Preliminary Processor
+ * Programming Reference (PPR) for AMD Family 19h Model 01h, Revision
+ * B1 Processors Volume 1 of 2.
+ * --- and ---
+ * "Instructions retired", from the Intel SDM, volume 3,
+ * "Pre-defined Architectural Performance Events."
+ */
+
+#define INST_RETIRED EVENT(0xc0, 0)
+
/*
* This event list comprises Intel's eight architectural events plus
* AMD's "retired branch instructions" for Zen[123] (and possibly
@@ -61,7 +76,7 @@
*/
static const uint64_t event_list[] = {
EVENT(0x3c, 0),
- EVENT(0xc0, 0),
+ INST_RETIRED,
EVENT(0x3c, 1),
EVENT(0x2e, 0x4f),
EVENT(0x2e, 0x41),
@@ -76,6 +91,7 @@ struct {
uint64_t stores;
uint64_t loads_stores;
uint64_t branches_retired;
+ uint64_t instructions_retired;
} pmc_results;

/*
@@ -110,10 +126,12 @@ static void check_msr(uint32_t msr, uint64_t bits_to_flip)
static void run_and_measure_loop(uint32_t msr_base)
{
const uint64_t branches_retired = rdmsr(msr_base + 0);
+ const uint64_t insn_retired = rdmsr(msr_base + 1);

__asm__ __volatile__("loop ." : "+c"((int){NUM_BRANCHES}));

pmc_results.branches_retired = rdmsr(msr_base + 0) - branches_retired;
+ pmc_results.instructions_retired = rdmsr(msr_base + 1) - insn_retired;
}

static void intel_guest_code(void)
@@ -127,7 +145,9 @@ static void intel_guest_code(void)
wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
wrmsr(MSR_P6_EVNTSEL0, ARCH_PERFMON_EVENTSEL_ENABLE |
ARCH_PERFMON_EVENTSEL_OS | INTEL_BR_RETIRED);
- wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0x1);
+ wrmsr(MSR_P6_EVNTSEL1, ARCH_PERFMON_EVENTSEL_ENABLE |
+ ARCH_PERFMON_EVENTSEL_OS | INST_RETIRED);
+ wrmsr(MSR_CORE_PERF_GLOBAL_CTRL, 0x3);

run_and_measure_loop(MSR_IA32_PMC0);
GUEST_SYNC(0);
@@ -149,6 +169,8 @@ static void amd_guest_code(void)
wrmsr(MSR_K7_EVNTSEL0, 0);
wrmsr(MSR_K7_EVNTSEL0, ARCH_PERFMON_EVENTSEL_ENABLE |
ARCH_PERFMON_EVENTSEL_OS | AMD_ZEN_BR_RETIRED);
+ wrmsr(MSR_K7_EVNTSEL1, ARCH_PERFMON_EVENTSEL_ENABLE |
+ ARCH_PERFMON_EVENTSEL_OS | INST_RETIRED);

run_and_measure_loop(MSR_K7_PERFCTR0);
GUEST_SYNC(0);
@@ -263,20 +285,26 @@ static struct kvm_pmu_event_filter *remove_event(struct kvm_pmu_event_filter *f,
#define ASSERT_PMC_COUNTING_INSTRUCTIONS() \
do { \
uint64_t br = pmc_results.branches_retired; \
+ uint64_t ir = pmc_results.instructions_retired; \
\
if (br && br != NUM_BRANCHES) \
pr_info("%s: Branch instructions retired = %lu (expected %u)\n", \
__func__, br, NUM_BRANCHES); \
TEST_ASSERT(br, "%s: Branch instructions retired = %lu (expected > 0)", \
__func__, br); \
+ TEST_ASSERT(ir, "%s: Instructions retired = %lu (expected > 0)", \
+ __func__, ir); \
} while (0)

#define ASSERT_PMC_NOT_COUNTING_INSTRUCTIONS() \
do { \
uint64_t br = pmc_results.branches_retired; \
+ uint64_t ir = pmc_results.instructions_retired; \
\
TEST_ASSERT(!br, "%s: Branch instructions retired = %lu (expected 0)", \
__func__, br); \
+ TEST_ASSERT(!ir, "%s: Instructions retired = %lu (expected 0)", \
+ __func__, ir); \
} while (0)

static void test_without_filter(struct kvm_vcpu *vcpu)
@@ -329,6 +357,7 @@ static void test_not_member_deny_list(struct kvm_vcpu *vcpu)
{
struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_DENY);

+ remove_event(f, INST_RETIRED);
remove_event(f, INTEL_BR_RETIRED);
remove_event(f, AMD_ZEN_BR_RETIRED);
test_with_filter(vcpu, f);
@@ -341,6 +370,7 @@ static void test_not_member_allow_list(struct kvm_vcpu *vcpu)
{
struct kvm_pmu_event_filter *f = event_filter(KVM_PMU_EVENT_ALLOW);

+ remove_event(f, INST_RETIRED);
remove_event(f, INTEL_BR_RETIRED);
remove_event(f, AMD_ZEN_BR_RETIRED);
test_with_filter(vcpu, f);
--
2.40.0.577.gac1e443424-goog

2023-04-07 23:36:28

by Sean Christopherson

Subject: [PATCH v4 4/6] KVM: selftests: Use error codes to signal errors in PMU event filter test

Use '0' to signal success and '-errno' to signal failure in the PMU event
filter test so that the values are slightly less magical/arbitrary. Using
'0' in the error paths is especially confusing as understanding it's an
error value requires following the breadcrumbs to the host code that
ultimately consumes the value.

Arguably there should also be a #define for "success", but 0/-errno is a
common enough pattern that defining another macro on top would likely do
more harm than good.
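
With 0/-errno semantics, a failure reported by the guest can be decoded
directly on the host side. A sketch of a consumer (run_vcpu_to_sync()
and TEST_ASSERT() are this test's existing helpers; strerror() assumes
<string.h>):

	uint64_t r = run_vcpu_to_sync(vcpu);

	TEST_ASSERT(!r, "Guest failed with %ld (%s)",
		    (int64_t)r, strerror(-(int64_t)r));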

Signed-off-by: Sean Christopherson <[email protected]>
---
.../selftests/kvm/x86_64/pmu_event_filter_test.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index ef07aaca2168..0432ba347b22 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -77,7 +77,7 @@ static const uint64_t event_list[] = {
*/
static void guest_gp_handler(struct ex_regs *regs)
{
- GUEST_SYNC(0);
+ GUEST_SYNC(-EFAULT);
}

/*
@@ -92,12 +92,12 @@ static void check_msr(uint32_t msr, uint64_t bits_to_flip)

wrmsr(msr, v);
if (rdmsr(msr) != v)
- GUEST_SYNC(0);
+ GUEST_SYNC(-EIO);

v ^= bits_to_flip;
wrmsr(msr, v);
if (rdmsr(msr) != v)
- GUEST_SYNC(0);
+ GUEST_SYNC(-EIO);
}

static uint64_t run_and_measure_loop(uint32_t msr_base)
@@ -114,7 +114,7 @@ static void intel_guest_code(void)
check_msr(MSR_CORE_PERF_GLOBAL_CTRL, 1);
check_msr(MSR_P6_EVNTSEL0, 0xffff);
check_msr(MSR_IA32_PMC0, 0xffff);
- GUEST_SYNC(1);
+ GUEST_SYNC(0);

for (;;) {
uint64_t count;
@@ -138,7 +138,7 @@ static void amd_guest_code(void)
{
check_msr(MSR_K7_EVNTSEL0, 0xffff);
check_msr(MSR_K7_PERFCTR0, 0xffff);
- GUEST_SYNC(1);
+ GUEST_SYNC(0);

for (;;) {
uint64_t count;
@@ -178,13 +178,13 @@ static uint64_t run_vcpu_to_sync(struct kvm_vcpu *vcpu)
*/
static bool sanity_check_pmu(struct kvm_vcpu *vcpu)
{
- bool success;
+ uint64_t r;

vm_install_exception_handler(vcpu->vm, GP_VECTOR, guest_gp_handler);
- success = run_vcpu_to_sync(vcpu);
+ r = run_vcpu_to_sync(vcpu);
vm_install_exception_handler(vcpu->vm, GP_VECTOR, NULL);

- return success;
+ return !r;
}

static struct kvm_pmu_event_filter *alloc_pmu_event_filter(uint32_t nevents)
--
2.40.0.577.gac1e443424-goog

2023-04-12 23:09:27

by Aaron Lewis

Subject: Re: [PATCH v4 0/6] KVM: selftests: Add "instructions required" testcase

On Fri, Apr 7, 2023 at 11:33 PM Sean Christopherson <[email protected]> wrote:
>
> This is the selftests portion of Aaron's series[*] to fix incorrect
> "instructions required" counting. The big change from v3 is to use a

nit: here and in the subject, s/required/retired/.

> common struct to copy counter values to/from the guest instead of smushing
> the values into a single 64-bit value.
>
> [*] https://lkml.kernel.org/r/20230307141400.1486314-1-aaronlewis%40google.com
>
> Aaron Lewis (4):
> KVM: selftests: Add a common helper for the PMU event filter guest
> code
> KVM: selftests: Add helpers for PMC asserts in PMU event filter test
> KVM: selftests: Print detailed info in PMU event filter asserts
> KVM: selftests: Test the PMU event "Instructions retired"
>
> Sean Christopherson (2):
> KVM: selftests: Use error codes to signal errors in PMU event filter
> test
> KVM: selftests: Copy full counter values from guest in PMU event
> filter test
>
> .../kvm/x86_64/pmu_event_filter_test.c | 252 ++++++++++--------
> 1 file changed, 140 insertions(+), 112 deletions(-)
>
>
> base-commit: b821bf96cf3bed0c1c8d34659299a3a1dc47218d
> --
> 2.40.0.577.gac1e443424-goog
>

Fingers crossed that marking this as "plain text" works this time.

Reviewed-by: Aaron Lewis <[email protected]>

2023-04-14 20:36:18

by Sean Christopherson

Subject: Re: [PATCH v4 0/6] KVM: selftests: Add "instructions required" testcase

On Fri, 07 Apr 2023 16:32:48 -0700, Sean Christopherson wrote:
> This is the selftests portion of Aaron's series[*] to fix incorrect
> "instructions required" counting. The big change from v3 is to use a
> common struct to copy counter values to/from the guest instead of smushing
> the values into a single 64-bit value.
>
> [*] https://lkml.kernel.org/r/20230307141400.1486314-1-aaronlewis%40google.com
>
> [...]

Applied to kvm-x86 pmu, thanks!

[1/6] KVM: selftests: Add a common helper for the PMU event filter guest code
https://github.com/kvm-x86/linux/commit/33ef1411a36b
[2/6] KVM: selftests: Add helpers for PMC asserts in PMU event filter test
https://github.com/kvm-x86/linux/commit/fa32233d51b9
[3/6] KVM: selftests: Print detailed info in PMU event filter asserts
https://github.com/kvm-x86/linux/commit/c140e93a0c11
[4/6] KVM: selftests: Use error codes to signal errors in PMU event filter test
https://github.com/kvm-x86/linux/commit/c02c74428288
[5/6] KVM: selftests: Copy full counter values from guest in PMU event filter test
https://github.com/kvm-x86/linux/commit/e9f322bd2396
[6/6] KVM: selftests: Test the PMU event "Instructions retired"
https://github.com/kvm-x86/linux/commit/457bd7af1a17

--
https://github.com/kvm-x86/linux/tree/next
https://github.com/kvm-x86/linux/tree/fixes