2019-04-02 15:55:56

by Tom Lendacky

Subject: [PATCH v4 0/3] x86/perf/amd: AMD PMC counters and NMI latency

This patch series addresses issues with increased NMI latency in newer
AMD processors that can result in unknown NMI messages when PMC counters
are active.

The following fixes are included in this series:

- Resolve a race condition when disabling an overflowed PMC counter,
specifically when updating the PMC counter with a new value.
- Resolve handling of active PMC counter overflows in the perf NMI
handler and when to report that the NMI is not related to a PMC.
- Remove earlier workaround for spurious NMIs by re-ordering the
PMC stop sequence to disable the PMC first and then remove the PMC
bit from the active_mask bitmap. As part of disabling the PMC, the
code will wait for an overflow to be reset.

The last patch re-works the order of when the PMC is removed from the
active_mask. There was a comment from a long time ago about having
to clear the bit in active_mask before disabling the counter because
the perf NMI handler could re-enable the PMC again. Looking at the
handler today, I don't see that as possible, hence the reordering. The
question will be whether the Intel PMC support will now have issues.
There is still support for using x86_pmu_handle_irq() in the Intel
core.c file.

Also, I couldn't completely get rid of the "running" bit because it
is used by arch/x86/events/intel/p4.c. An old commit message seems to
indicate that the p4 code suffered from these spurious interrupts:
Commit 03e22198d237 ("perf, x86: Handle in flight NMIs on P4 platform").

---

Changes from v3:
- Changed nmi.h include from asm/nmi.h to linux/nmi.h.
- Left the information about the last patch in the cover letter, but
removed the questions that were previously answered.
- Removed the RFC tag.

Changes from v2 (based on feedback from Peter Z):
- Simplified the AMD-specific disable_all callback by calling the common
x86_pmu_disable_all() function and then checking and waiting for the
reset of any overflowed PMCs.
- Removed erroneous check for no active counters in the NMI latency
mitigation patch, which effectively nullified commit 63e6be6d98e1.
- Reworked x86_pmu_stop() in order to remove 63e6be6d98e1.

Changes from v1 (based on feedback from Peter Z):
- Created an AMD-specific disable_all callback function to handle the
disabling of the counters and resolve the race condition.
- Created an AMD-specific handle_irq callback function that invokes the
common x86_pmu_handle_irq() function and then performs the NMI latency
mitigation.
- Took into account the possibility of non-perf NMI sources when applying
the mitigation.

This patch series is based off of the perf/core branch of tip:
https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git perf/core

Commit 1a9df9e29c2a ("Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net")

Tom Lendacky (3):
x86/perf/amd: Resolve race condition when disabling PMC
x86/perf/amd: Resolve NMI latency issues for active PMCs
x86/perf/amd: Remove need to check "running" bit in NMI handler

arch/x86/events/amd/core.c | 139 +++++++++++++++++++++++++++++++++++--
arch/x86/events/core.c | 13 +---
2 files changed, 137 insertions(+), 15 deletions(-)

--
2.17.1


2019-04-02 17:11:35

by Tom Lendacky

Subject: [PATCH v4 1/3] x86/perf/amd: Resolve race condition when disabling PMC

On AMD processors, the detection of an overflowed counter in the NMI
handler relies on the current value of the counter. So, for example, to
check for overflow on a 48-bit counter, bit 47 is checked to see if it
is 1 (not overflowed) or 0 (overflowed).
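
As a rough illustration of that check (not part of the patch; the real
code derives the bit position from x86_pmu.cntval_bits):

	/*
	 * Hypothetical helper: returns true if a raw counter value read
	 * from the PMC MSR indicates an overflow, assuming counters that
	 * are 'cntval_bits' wide (48 bits on these processors).
	 */
	static inline bool pmc_overflowed(u64 counter, unsigned int cntval_bits)
	{
		return !(counter & (1ULL << (cntval_bits - 1)));
	}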

There is currently a race condition present when disabling and then
updating the PMC. Increased NMI latency in newer AMD processors makes this
race condition more pronounced. If the counter value has overflowed, it is
possible to update the PMC value before the NMI handler can run. The
updated PMC value is not an overflowed value, so when the perf NMI handler
does run, it will not find an overflowed counter. This may appear as an
unknown NMI resulting in either a panic or a series of messages, depending
on how the kernel is configured.
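
A hypothetical sequence showing the window (illustration only):

  1. A PMC overflows: bit 47 clears and an NMI is raised.
  2. Before the latent NMI is delivered, the PMC is disabled and then
     reprogrammed with a new, non-overflowed value.
  3. The NMI finally runs, reads the counter, sees bit 47 set, and so
     finds no overflowed counter to account for the NMI.
  4. The NMI is reported as unknown.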

To eliminate this race condition, the PMC value must be checked after
disabling the counter. Add an AMD function, amd_pmu_disable_all(), that
will wait for the NMI handler to reset any active and overflowed counter
after calling x86_pmu_disable_all().

Cc: <[email protected]> # 4.14.x-
Signed-off-by: Tom Lendacky <[email protected]>
---
arch/x86/events/amd/core.c | 64 ++++++++++++++++++++++++++++++++++++--
1 file changed, 61 insertions(+), 3 deletions(-)

diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
index 7d2d7c801dba..beb132593622 100644
--- a/arch/x86/events/amd/core.c
+++ b/arch/x86/events/amd/core.c
@@ -3,6 +3,7 @@
#include <linux/types.h>
#include <linux/init.h>
#include <linux/slab.h>
+#include <linux/delay.h>
#include <asm/apicdef.h>

#include "../perf_event.h"
@@ -429,6 +430,63 @@ static void amd_pmu_cpu_dead(int cpu)
}
}

+/*
+ * When a PMC counter overflows, an NMI is used to process the event and
+ * reset the counter. NMI latency can result in the counter being updated
+ * before the NMI can run, which can result in what appear to be spurious
+ * NMIs. This function is intended to wait for the NMI to run and reset
+ * the counter to avoid possible unhandled NMI messages.
+ */
+#define OVERFLOW_WAIT_COUNT 50
+static void amd_pmu_wait_on_overflow(int idx)
+{
+ unsigned int i;
+ u64 counter;
+
+ /*
+ * Wait for the counter to be reset if it has overflowed. This loop
+ * should exit very, very quickly, but just in case, don't wait
+ * forever...
+ */
+ for (i = 0; i < OVERFLOW_WAIT_COUNT; i++) {
+ rdmsrl(x86_pmu_event_addr(idx), counter);
+ if (counter & (1ULL << (x86_pmu.cntval_bits - 1)))
+ break;
+
+ /* Might be in IRQ context, so can't sleep */
+ udelay(1);
+ }
+}
+
+static void amd_pmu_disable_all(void)
+{
+ struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+ int idx;
+
+ x86_pmu_disable_all();
+
+ /*
+ * This shouldn't be called from NMI context, but add a safeguard here
+ * to return, since if we're in NMI context we can't wait for an NMI
+ * to reset an overflowed counter value.
+ */
+ if (in_nmi())
+ return;
+
+ /*
+ * Check each counter for overflow and wait for it to be reset by the
+ * NMI if it has overflowed. This relies on the fact that all active
+ * counters are always enabled when this function is called and
+ * ARCH_PERFMON_EVENTSEL_INT is always set.
+ */
+ for (idx = 0; idx < x86_pmu.num_counters; idx++) {
+ if (!test_bit(idx, cpuc->active_mask))
+ continue;
+
+ amd_pmu_wait_on_overflow(idx);
+ }
+}
+
static struct event_constraint *
amd_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
struct perf_event *event)
@@ -622,7 +680,7 @@ static ssize_t amd_event_sysfs_show(char *page, u64 config)
static __initconst const struct x86_pmu amd_pmu = {
.name = "AMD",
.handle_irq = x86_pmu_handle_irq,
- .disable_all = x86_pmu_disable_all,
+ .disable_all = amd_pmu_disable_all,
.enable_all = x86_pmu_enable_all,
.enable = x86_pmu_enable_event,
.disable = x86_pmu_disable_event,
@@ -732,7 +790,7 @@ void amd_pmu_enable_virt(void)
cpuc->perf_ctr_virt_mask = 0;

/* Reload all events */
- x86_pmu_disable_all();
+ amd_pmu_disable_all();
x86_pmu_enable_all(0);
}
EXPORT_SYMBOL_GPL(amd_pmu_enable_virt);
@@ -750,7 +808,7 @@ void amd_pmu_disable_virt(void)
cpuc->perf_ctr_virt_mask = AMD64_EVENTSEL_HOSTONLY;

/* Reload all events */
- x86_pmu_disable_all();
+ amd_pmu_disable_all();
x86_pmu_enable_all(0);
}
EXPORT_SYMBOL_GPL(amd_pmu_disable_virt);
--
2.17.1

2019-04-02 17:12:43

by Tom Lendacky

Subject: [PATCH v4 3/3] x86/perf/amd: Remove need to check "running" bit in NMI handler

Spurious interrupt support was added to perf in:
commit 63e6be6d98e1 ("perf, x86: Catch spurious interrupts after disabling counters")

The two previous patches (resolving the race condition when disabling a
PMC and NMI latency mitigation) allow for the removal of this older
spurious interrupt support.

Currently in x86_pmu_stop(), the bit for the PMC in the active_mask bitmap
is cleared before disabling the PMC, which sets up a race condition. This
race condition was mitigated by introducing the running bitmap. That race
condition can be eliminated by first disabling the PMC, waiting for PMC
reset on overflow and then clearing the bit for the PMC in the active_mask
bitmap. The NMI handler will not re-enable a disabled counter.

If x86_pmu_stop() is called from the perf NMI handler, the NMI latency
mitigation support will guard against any unhandled NMI messages.

Cc: <[email protected]> # 4.14.x-
Signed-off-by: Tom Lendacky <[email protected]>
---
arch/x86/events/amd/core.c | 19 ++++++++++++++++++-
arch/x86/events/core.c | 13 +++----------
2 files changed, 21 insertions(+), 11 deletions(-)

diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
index 8f06ec0e673b..22d6667c7063 100644
--- a/arch/x86/events/amd/core.c
+++ b/arch/x86/events/amd/core.c
@@ -490,6 +490,23 @@ static void amd_pmu_disable_all(void)
}
}

+static void amd_pmu_disable_event(struct perf_event *event)
+{
+ x86_pmu_disable_event(event);
+
+ /*
+ * This can be called from NMI context (via x86_pmu_stop). The counter
+ * may have overflowed, but either way, we'll never see it get reset
+ * by the NMI if we're already in the NMI. And the NMI latency support
+ * below will take care of any pending NMI that might have been
+ * generated by the overflow.
+ */
+ if (in_nmi())
+ return;
+
+ amd_pmu_wait_on_overflow(event->hw.idx);
+}
+
/*
* Because of NMI latency, if multiple PMC counters are active or other sources
* of NMIs are received, the perf NMI handler can handle one or more overflowed
@@ -737,7 +754,7 @@ static __initconst const struct x86_pmu amd_pmu = {
.disable_all = amd_pmu_disable_all,
.enable_all = x86_pmu_enable_all,
.enable = x86_pmu_enable_event,
- .disable = x86_pmu_disable_event,
+ .disable = amd_pmu_disable_event,
.hw_config = amd_pmu_hw_config,
.schedule_events = x86_schedule_events,
.eventsel = MSR_K7_EVNTSEL0,
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index e2b1447192a8..81911e11a15d 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -1349,8 +1349,9 @@ void x86_pmu_stop(struct perf_event *event, int flags)
struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
struct hw_perf_event *hwc = &event->hw;

- if (__test_and_clear_bit(hwc->idx, cpuc->active_mask)) {
+ if (test_bit(hwc->idx, cpuc->active_mask)) {
x86_pmu.disable(event);
+ __clear_bit(hwc->idx, cpuc->active_mask);
cpuc->events[hwc->idx] = NULL;
WARN_ON_ONCE(hwc->state & PERF_HES_STOPPED);
hwc->state |= PERF_HES_STOPPED;
@@ -1447,16 +1448,8 @@ int x86_pmu_handle_irq(struct pt_regs *regs)
apic_write(APIC_LVTPC, APIC_DM_NMI);

for (idx = 0; idx < x86_pmu.num_counters; idx++) {
- if (!test_bit(idx, cpuc->active_mask)) {
- /*
- * Though we deactivated the counter some cpus
- * might still deliver spurious interrupts still
- * in flight. Catch them:
- */
- if (__test_and_clear_bit(idx, cpuc->running))
- handled++;
+ if (!test_bit(idx, cpuc->active_mask))
continue;
- }

event = cpuc->events[idx];

--
2.17.1

2019-04-02 17:47:27

by Tom Lendacky

Subject: [PATCH v4 2/3] x86/perf/amd: Resolve NMI latency issues for active PMCs

On AMD processors, the detection of an overflowed PMC counter in the NMI
handler relies on the current value of the PMC. So, for example, to check
for overflow on a 48-bit counter, bit 47 is checked to see if it is 1 (not
overflowed) or 0 (overflowed).

When the perf NMI handler executes, it does not know in advance which PMC
counters have overflowed. As such, the NMI handler will process all active
PMC counters that have overflowed. NMI latency in newer AMD processors can
result in multiple overflowed PMC counters being processed in one NMI,
with a subsequent NMI, one that does not appear to be a back-to-back NMI,
then finding no PMC counters that have overflowed. For example, if two
active PMCs overflow at nearly the same time, the first NMI may process
both overflows; the second NMI then arrives too late to be treated as
back-to-back and finds nothing to handle. This may appear to be an
unhandled NMI, resulting in either a panic or a series of messages,
depending on how the kernel was configured.

To mitigate this issue, add an AMD handle_irq callback function,
amd_pmu_handle_irq(), that invokes the common x86_pmu_handle_irq()
function and, upon return, performs some additional processing that
indicates if the NMI has been handled or would have been handled had an
earlier NMI not handled the overflowed PMC. Using a per-CPU variable, the
minimum of the number of active PMCs and 2 is recorded whenever the
handler processes an overflowed PMC. This indicates the possible number
of NMIs that can still occur. The value of 2 accounts for the case where
an NMI does not arrive at the LAPIC in time to be collapsed into an
already pending NMI. Each time the function is called without having
handled an overflowed counter, the per-CPU value is checked. If the value
is non-zero, it is decremented and the handler reports that it handled
the NMI. If the value is zero, the handler reports that it did not handle
the NMI.
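
A rough sketch of that bookkeeping (illustration only; the actual,
per-CPU implementation is in the patch below):

	/*
	 * Simplified model of the NMI accounting described above. The real
	 * code uses a per-CPU variable and the kernel's NMI_HANDLED/NMI_DONE
	 * return values.
	 */
	static unsigned int perf_nmi_counter;

	static int pmc_nmi_accounting(int active, int handled)
	{
		if (handled) {
			/* Allow up to min(active, 2) later "empty" NMIs to be claimed. */
			perf_nmi_counter = active < 2 ? active : 2;
			return handled;
		}

		if (!perf_nmi_counter)
			return 0;	/* genuinely unknown NMI */

		perf_nmi_counter--;
		return 1;	/* claim an NMI left over from earlier handling */
	}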

Cc: <[email protected]> # 4.14.x-
Signed-off-by: Tom Lendacky <[email protected]>
---
arch/x86/events/amd/core.c | 56 +++++++++++++++++++++++++++++++++++++-
1 file changed, 55 insertions(+), 1 deletion(-)

diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
index beb132593622..8f06ec0e673b 100644
--- a/arch/x86/events/amd/core.c
+++ b/arch/x86/events/amd/core.c
@@ -4,10 +4,13 @@
#include <linux/init.h>
#include <linux/slab.h>
#include <linux/delay.h>
+#include <linux/nmi.h>
#include <asm/apicdef.h>

#include "../perf_event.h"

+static DEFINE_PER_CPU(unsigned int, perf_nmi_counter);
+
static __initconst const u64 amd_hw_cache_event_ids
[PERF_COUNT_HW_CACHE_MAX]
[PERF_COUNT_HW_CACHE_OP_MAX]
@@ -487,6 +490,57 @@ static void amd_pmu_disable_all(void)
}
}

+/*
+ * Because of NMI latency, if multiple PMC counters are active or other sources
+ * of NMIs are received, the perf NMI handler can handle one or more overflowed
+ * PMC counters outside of the NMI associated with the PMC overflow. If the NMI
+ * doesn't arrive at the LAPIC in time to become a pending NMI, then the kernel
+ * back-to-back NMI support won't be active. This PMC handler needs to take into
+ * account that this can occur, otherwise this could result in unknown NMI
+ * messages being issued. Examples of this are PMC overflow while in the NMI
+ * handler when multiple PMCs are active or PMC overflow while handling some
+ * other source of an NMI.
+ *
+ * Attempt to mitigate this by using the number of active PMCs to determine
+ * whether to return NMI_HANDLED if the perf NMI handler did not handle/reset
+ * any PMCs. The per-CPU perf_nmi_counter variable is set to a minimum of the
+ * number of active PMCs or 2. The value of 2 is used in case an NMI does not
+ * arrive at the LAPIC in time to be collapsed into an already pending NMI.
+ */
+static int amd_pmu_handle_irq(struct pt_regs *regs)
+{
+ struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+ int active, handled;
+
+ /*
+ * Obtain the active count before calling x86_pmu_handle_irq() since
+ * it is possible that x86_pmu_handle_irq() may make a counter
+ * inactive (through x86_pmu_stop).
+ */
+ active = __bitmap_weight(cpuc->active_mask, X86_PMC_IDX_MAX);
+
+ /* Process any counter overflows */
+ handled = x86_pmu_handle_irq(regs);
+
+ /*
+ * If a counter was handled, record the number of possible remaining
+ * NMIs that can occur.
+ */
+ if (handled) {
+ this_cpu_write(perf_nmi_counter,
+ min_t(unsigned int, 2, active));
+
+ return handled;
+ }
+
+ if (!this_cpu_read(perf_nmi_counter))
+ return NMI_DONE;
+
+ this_cpu_dec(perf_nmi_counter);
+
+ return NMI_HANDLED;
+}
+
static struct event_constraint *
amd_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
struct perf_event *event)
@@ -679,7 +733,7 @@ static ssize_t amd_event_sysfs_show(char *page, u64 config)

static __initconst const struct x86_pmu amd_pmu = {
.name = "AMD",
- .handle_irq = x86_pmu_handle_irq,
+ .handle_irq = amd_pmu_handle_irq,
.disable_all = amd_pmu_disable_all,
.enable_all = x86_pmu_enable_all,
.enable = x86_pmu_enable_event,
--
2.17.1

2019-04-03 07:57:16

by Peter Zijlstra

Subject: Re: [PATCH v4 0/3] x86/perf/amd: AMD PMC counters and NMI latency

On Tue, Apr 02, 2019 at 03:21:12PM +0000, Lendacky, Thomas wrote:
> Tom Lendacky (3):
> x86/perf/amd: Resolve race condition when disabling PMC
> x86/perf/amd: Resolve NMI latency issues for active PMCs
> x86/perf/amd: Remove need to check "running" bit in NMI handler
>
> arch/x86/events/amd/core.c | 139 +++++++++++++++++++++++++++++++++++--
> arch/x86/events/core.c | 13 +---
> 2 files changed, 137 insertions(+), 15 deletions(-)

no base64 encoding this time, applied without issue.

Thanks Tom!