Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1754706AbdCFUMW (ORCPT); Mon, 6 Mar 2017 15:12:22 -0500
Received: from relay1.sgi.com ([192.48.180.66]:35385 "EHLO relay.sgi.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1754580AbdCFULV (ORCPT); Mon, 6 Mar 2017 15:11:21 -0500
Message-Id: <20170306181737.322206440@asylum.americas.sgi.com>
References: <20170306181737.059578494@asylum.americas.sgi.com>
User-Agent: quilt/0.46-1
Date: Mon, 06 Mar 2017 12:17:38 -0600
From: Mike Travis
To: Ingo Molnar, Thomas Gleixner, "H. Peter Anvin", Don Zickus,
	Peter Zijlstra
Cc: Dimitri Sivanich, Frank Ramsay, Russ Anderson, Tony Ernst,
	x86@kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 1/2] x86/platform: Add a low priority low frequency NMI call chain
Content-Disposition: inline; filename=x86_add_nmi_remote_call
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Length: 2295
Lines: 81

Add a new NMI call chain that is called last, after all other NMI
handlers have been checked and none of them has "handled" the NMI.
This mimics the current NMI_UNKNOWN call chain, except that it
eliminates the WARNING message issued when multiple NMI handlers
register on that chain.

This call chain dramatically lowers the NMI call frequency when
high frequency NMI tools are in use, notably the perf tools.  It is
required for NMI handlers that cannot sustain a high NMI call rate
without ramifications to system operability.

Signed-off-by: Mike Travis
Reviewed-by: Russ Anderson
---
 arch/x86/include/asm/nmi.h |    1 +
 arch/x86/kernel/nmi.c      |   21 ++++++++++++++++++++-
 2 files changed, 21 insertions(+), 1 deletion(-)

--- linux-4.4.orig/arch/x86/include/asm/nmi.h
+++ linux-4.4/arch/x86/include/asm/nmi.h
@@ -28,6 +28,7 @@ enum {
 	NMI_UNKNOWN,
 	NMI_SERR,
 	NMI_IO_CHECK,
+	NMI_LAST,
 	NMI_MAX
 };

--- linux-4.4.orig/arch/x86/kernel/nmi.c
+++ linux-4.4/arch/x86/kernel/nmi.c
@@ -57,6 +57,10 @@ static struct nmi_desc nmi_desc[NMI_MAX]
 		.lock = __SPIN_LOCK_UNLOCKED(&nmi_desc[3].lock),
 		.head = LIST_HEAD_INIT(nmi_desc[3].head),
 	},
+	{
+		.lock = __SPIN_LOCK_UNLOCKED(&nmi_desc[4].lock),
+		.head = LIST_HEAD_INIT(nmi_desc[4].head),
+	},
 };

@@ -65,6 +69,7 @@ struct nmi_stats {
 	unsigned int unknown;
 	unsigned int external;
 	unsigned int swallow;
+	unsigned int last;
 };

 static DEFINE_PER_CPU(struct nmi_stats, nmi_stats);
@@ -312,6 +317,20 @@ unknown_nmi_error(unsigned char reason,
 }
 NOKPROBE_SYMBOL(unknown_nmi_error);

+static void check_nmi_last(unsigned char reason, struct pt_regs *regs)
+{
+	int handled;
+
+	/* Check low frequency, multiple CPU NMI handlers */
+	handled = nmi_handle(NMI_LAST, regs);
+	__this_cpu_add(nmi_stats.last, handled);
+	if (handled)
+		return;
+
+	unknown_nmi_error(reason, regs);
+}
+NOKPROBE_SYMBOL(check_nmi_last);
+
 static DEFINE_PER_CPU(bool, swallow_nmi);
 static DEFINE_PER_CPU(unsigned long, last_nmi_rip);

@@ -423,7 +442,7 @@ static void default_do_nmi(struct pt_reg
 		if (b2b && __this_cpu_read(swallow_nmi))
 			__this_cpu_add(nmi_stats.swallow, 1);
 		else
-			unknown_nmi_error(reason, regs);
+			check_nmi_last(reason, regs);
 }
 NOKPROBE_SYMBOL(default_do_nmi);

--
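
For illustration, here is a minimal sketch (not part of this patch) of how a
platform driver might register a handler on the new NMI_LAST chain, assuming
the existing register_nmi_handler()/unregister_nmi_handler() interface from
<asm/nmi.h>; the module name, handler function, and "example_last" label
below are hypothetical:

#include <linux/module.h>
#include <asm/nmi.h>

/*
 * Hypothetical handler: the signature matches nmi_handler_t and must
 * return NMI_HANDLED if the NMI was ours, NMI_DONE otherwise.
 */
static int example_last_nmi(unsigned int cmd, struct pt_regs *regs)
{
	/*
	 * NMI_LAST is consulted only after every other chain has declined
	 * the NMI, so this path stays quiet even while perf is sampling.
	 * A real handler would poll its platform NMI source here.
	 */
	return NMI_DONE;	/* not ours; falls through to unknown_nmi_error() */
}

static int __init example_last_nmi_init(void)
{
	/* flags = 0, i.e. no NMI_FLAG_FIRST; "example_last" is just a label */
	return register_nmi_handler(NMI_LAST, example_last_nmi, 0,
				    "example_last");
}

static void __exit example_last_nmi_exit(void)
{
	unregister_nmi_handler(NMI_LAST, "example_last");
}

module_init(example_last_nmi_init);
module_exit(example_last_nmi_exit);
MODULE_LICENSE("GPL");

Because NMI_LAST sits behind every other chain, a handler registered this way
sees only the leftover NMIs that would otherwise land in unknown_nmi_error(),
which is what keeps its call rate low while high frequency perf NMIs are active.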