From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Tom Lendacky,
 "Peter Zijlstra (Intel)", Alexander Shishkin, Arnaldo Carvalho de Melo,
 Arnaldo Carvalho de Melo, Borislav Petkov, Jiri Olsa, Linus Torvalds,
 Namhyung Kim, Stephane Eranian, Thomas Gleixner, Vince Weaver, Ingo Molnar
Subject: [PATCH 4.14 65/69] x86/perf/amd: Remove need to check
 "running" bit in NMI handler
Date: Mon, 15 Apr 2019 20:59:23 +0200
Message-Id: <20190415183736.330712794@linuxfoundation.org>
In-Reply-To: <20190415183726.036654568@linuxfoundation.org>
References: <20190415183726.036654568@linuxfoundation.org>

From: Lendacky, Thomas

commit 3966c3feca3fd10b2935caa0b4a08c7dd59469e5 upstream.

Spurious interrupt support was added to perf in the following commit,
almost a decade ago:

  63e6be6d98e1 ("perf, x86: Catch spurious interrupts after disabling counters")

The two previous patches (resolving the race condition when disabling a
PMC and NMI latency mitigation) allow for the removal of this older
spurious interrupt support.

Currently in x86_pmu_stop(), the bit for the PMC in the active_mask
bitmap is cleared before disabling the PMC, which sets up a race
condition. This race condition was mitigated by introducing the running
bitmap. That race condition can be eliminated by first disabling the
PMC, waiting for PMC reset on overflow and then clearing the bit for the
PMC in the active_mask bitmap. The NMI handler will not re-enable a
disabled counter.

If x86_pmu_stop() is called from the perf NMI handler, the NMI latency
mitigation support will guard against any unhandled NMI messages.
Signed-off-by: Tom Lendacky
Signed-off-by: Peter Zijlstra (Intel)
Cc: <stable@vger.kernel.org> # 4.14.x-
Cc: Alexander Shishkin
Cc: Arnaldo Carvalho de Melo
Cc: Arnaldo Carvalho de Melo
Cc: Borislav Petkov
Cc: Jiri Olsa
Cc: Linus Torvalds
Cc: Namhyung Kim
Cc: Peter Zijlstra
Cc: Stephane Eranian
Cc: Thomas Gleixner
Cc: Vince Weaver
Link: https://lkml.kernel.org/r/Message-ID:
Signed-off-by: Ingo Molnar
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/events/amd/core.c |   21 +++++++++++++++++++--
 arch/x86/events/core.c     |   13 +++----------
 2 files changed, 22 insertions(+), 12 deletions(-)

--- a/arch/x86/events/amd/core.c
+++ b/arch/x86/events/amd/core.c
@@ -4,8 +4,8 @@
 #include <linux/init.h>
 #include <linux/slab.h>
 #include <linux/delay.h>
-#include <asm/apicdef.h>
 #include <linux/nmi.h>
+#include <asm/apicdef.h>
 
 #include "../perf_event.h"
@@ -491,6 +491,23 @@ static void amd_pmu_disable_all(void)
 	}
 }
 
+static void amd_pmu_disable_event(struct perf_event *event)
+{
+	x86_pmu_disable_event(event);
+
+	/*
+	 * This can be called from NMI context (via x86_pmu_stop). The counter
+	 * may have overflowed, but either way, we'll never see it get reset
+	 * by the NMI if we're already in the NMI. And the NMI latency support
+	 * below will take care of any pending NMI that might have been
+	 * generated by the overflow.
+	 */
+	if (in_nmi())
+		return;
+
+	amd_pmu_wait_on_overflow(event->hw.idx);
+}
+
 /*
  * Because of NMI latency, if multiple PMC counters are active or other sources
  * of NMIs are received, the perf NMI handler can handle one or more overflowed
@@ -738,7 +755,7 @@ static __initconst const struct x86_pmu
 	.disable_all		= amd_pmu_disable_all,
 	.enable_all		= x86_pmu_enable_all,
 	.enable			= x86_pmu_enable_event,
-	.disable		= x86_pmu_disable_event,
+	.disable		= amd_pmu_disable_event,
 	.hw_config		= amd_pmu_hw_config,
 	.schedule_events	= x86_schedule_events,
 	.eventsel		= MSR_K7_EVNTSEL0,
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -1328,8 +1328,9 @@ void x86_pmu_stop(struct perf_event *event, int flags)
 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
 	struct hw_perf_event *hwc = &event->hw;
 
-	if (__test_and_clear_bit(hwc->idx, cpuc->active_mask)) {
+	if (test_bit(hwc->idx, cpuc->active_mask)) {
 		x86_pmu.disable(event);
+		__clear_bit(hwc->idx, cpuc->active_mask);
 		cpuc->events[hwc->idx] = NULL;
 		WARN_ON_ONCE(hwc->state & PERF_HES_STOPPED);
 		hwc->state |= PERF_HES_STOPPED;
@@ -1426,16 +1427,8 @@ int x86_pmu_handle_irq(struct pt_regs *regs)
 	apic_write(APIC_LVTPC, APIC_DM_NMI);
 
 	for (idx = 0; idx < x86_pmu.num_counters; idx++) {
-		if (!test_bit(idx, cpuc->active_mask)) {
-			/*
-			 * Though we deactivated the counter some cpus
-			 * might still deliver spurious interrupts still
-			 * in flight. Catch them:
-			 */
-			if (__test_and_clear_bit(idx, cpuc->running))
-				handled++;
+		if (!test_bit(idx, cpuc->active_mask))
 			continue;
-		}
 
 		event = cpuc->events[idx];