Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1758556AbcLAUGX (ORCPT );
	Thu, 1 Dec 2016 15:06:23 -0500
Received: from mx1.redhat.com ([209.132.183.28]:41416 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1757232AbcLAUGV (ORCPT );
	Thu, 1 Dec 2016 15:06:21 -0500
Date: Thu, 1 Dec 2016 15:06:13 -0500
From: Don Zickus
To: Prarit Bhargava
Cc: linux-kernel@vger.kernel.org, Borislav Petkov, Tejun Heo,
	Hidehiro Kawai, Thomas Gleixner, Andi Kleen, Joshua Hunt,
	Ingo Molnar, Babu Moger
Subject: Re: [PATCH] kernel/watchdog.c: Do not hardcode CPU 0 as the initial thread
Message-ID: <20161201200613.GG35881@redhat.com>
References: <1480425321-32296-1-git-send-email-prarit@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1480425321-32296-1-git-send-email-prarit@redhat.com>
User-Agent: Mutt/1.5.23.1 (2014-03-12)
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16
	(mx1.redhat.com [10.5.110.31]); Thu, 01 Dec 2016 20:06:21 +0000 (UTC)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Length: 4322
Lines: 117

On Tue, Nov 29, 2016 at 08:15:21AM -0500, Prarit Bhargava wrote:
> When CONFIG_BOOTPARAM_HOTPLUG_CPU0 is enabled, the socket containing the
> boot cpu can be replaced. During the hot add event, the message
>
> NMI watchdog: enabled on all CPUs, permanently consumes one hw-PMU counter.
>
> is output implying that the NMI watchdog was disabled at some point. This
> is not the case and the message has caused confusion for users of systems
> that support the removal of the boot cpu socket.
>
> The watchdog code is coded to assume that cpu 0 is always the first cpu to
> initialize the watchdog, and the last to stop its watchdog thread. That
> is not the case for initializing if cpu 0 has been removed and added. The
> removal case has never been correct because the smpboot code will remove
> the watchdog threads starting with the lowest cpu number.
>
> This patch adds watchdog_cpus to track the number of cpus with active NMI
> watchdog threads so that the first and last thread can be used to set and
> clear the value of firstcpu_err. firstcpu_err is set when the first
> watchdog thread is enabled, and cleared when the last watchdog thread is
> disabled.
>
> This patch is based on top of linux-next akpm-base.

It passed my tests.  Thanks!

Acked-by: Don Zickus

>
> Signed-off-by: Prarit Bhargava
> Cc: Borislav Petkov
> Cc: Tejun Heo
> Cc: Don Zickus
> Cc: Hidehiro Kawai
> Cc: Thomas Gleixner
> Cc: Andi Kleen
> Cc: Joshua Hunt
> Cc: Ingo Molnar
> Cc: Babu Moger
> ---
>  kernel/watchdog_hld.c | 25 +++++++++++++++----------
>  1 file changed, 15 insertions(+), 10 deletions(-)
>
> diff --git a/kernel/watchdog_hld.c b/kernel/watchdog_hld.c
> index 84016c8aee6b..30761f7504ef 100644
> --- a/kernel/watchdog_hld.c
> +++ b/kernel/watchdog_hld.c
> @@ -134,12 +134,14 @@ static void watchdog_overflow_callback(struct perf_event *event,
>   * Reduce the watchdog noise by only printing messages
>   * that are different from what cpu0 displayed.
>   */
> -static unsigned long cpu0_err;
> +static unsigned long firstcpu_err;
> +static atomic_t watchdog_cpus;
>
>  int watchdog_nmi_enable(unsigned int cpu)
>  {
>  	struct perf_event_attr *wd_attr;
>  	struct perf_event *event = per_cpu(watchdog_ev, cpu);
> +	int firstcpu = 0;
>
>  	/* nothing to do if the hard lockup detector is disabled */
>  	if (!(watchdog_enabled & NMI_WATCHDOG_ENABLED))
> @@ -153,19 +155,22 @@ int watchdog_nmi_enable(unsigned int cpu)
>  	if (event != NULL)
>  		goto out_enable;
>
> +	if (atomic_inc_return(&watchdog_cpus) == 1)
> +		firstcpu = 1;
> +
>  	wd_attr = &wd_hw_attr;
>  	wd_attr->sample_period = hw_nmi_get_sample_period(watchdog_thresh);
>
>  	/* Try to register using hardware perf events */
>  	event = perf_event_create_kernel_counter(wd_attr, cpu, NULL, watchdog_overflow_callback, NULL);
>
> -	/* save cpu0 error for future comparision */
> -	if (cpu == 0 && IS_ERR(event))
> -		cpu0_err = PTR_ERR(event);
> +	/* save the first cpu's error for future comparision */
> +	if (firstcpu && IS_ERR(event))
> +		firstcpu_err = PTR_ERR(event);
>
>  	if (!IS_ERR(event)) {
> -		/* only print for cpu0 or different than cpu0 */
> -		if (cpu == 0 || cpu0_err)
> +		/* only print for the first cpu initialized */
> +		if (firstcpu || firstcpu_err)
>  			pr_info("enabled on all CPUs, permanently consumes one hw-PMU counter.\n");
>  		goto out_save;
>  	}
> @@ -183,7 +188,7 @@ int watchdog_nmi_enable(unsigned int cpu)
>  	smp_mb__after_atomic();
>
>  	/* skip displaying the same error again */
> -	if (cpu > 0 && (PTR_ERR(event) == cpu0_err))
> +	if (!firstcpu && (PTR_ERR(event) == firstcpu_err))
>  		return PTR_ERR(event);
>
>  	/* vary the KERN level based on the returned errno */
> @@ -219,9 +224,9 @@ void watchdog_nmi_disable(unsigned int cpu)
>
>  		/* should be in cleanup, but blocks oprofile */
>  		perf_event_release_kernel(event);
> -	}
> -	if (cpu == 0) {
> +
>  		/* watchdog_nmi_enable() expects this to be zero initially. */
> -		cpu0_err = 0;
> +		if (atomic_dec_and_test(&watchdog_cpus))
> +			firstcpu_err = 0;
>  	}
>  }
> --
> 1.7.9.3
>
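
For anyone skimming the thread, the core of the change is the first/last
bookkeeping: an atomic count of CPUs with an active watchdog, where whichever
CPU takes the count from 0 to 1 acts as "first" and whichever takes it back
to 0 acts as "last", independent of CPU numbering. Below is a minimal
standalone userspace sketch of that pattern; the names (active_cpus,
first_err, watchdog_on, watchdog_off) are invented here for illustration and
are not the kernel's symbols.

/*
 * Standalone sketch of the first/last-CPU bookkeeping described above.
 * Illustration only: active_cpus, first_err, watchdog_on and watchdog_off
 * are invented names, not the kernel implementation.
 */
#include <stdatomic.h>
#include <stdio.h>

static atomic_int active_cpus;	/* CPUs with an active watchdog */
static long first_err;		/* error saved by the first CPU, if any */

static void watchdog_on(int cpu, long err)
{
	/* Whichever CPU moves the count from 0 to 1 is "first" -- it does
	 * not have to be cpu 0 (analogous to atomic_inc_return() == 1). */
	int firstcpu = (atomic_fetch_add(&active_cpus, 1) == 0);

	if (firstcpu && err)
		first_err = err;	/* remember the first CPU's failure */

	if (!err && (firstcpu || first_err))
		printf("cpu%d: print the enable message just once\n", cpu);
}

static void watchdog_off(int cpu)
{
	(void)cpu;	/* the CPU number is irrelevant for "last" detection */

	/* Whichever CPU moves the count back to 0 is "last" and clears the
	 * saved error (analogous to atomic_dec_and_test()). */
	if (atomic_fetch_sub(&active_cpus, 1) == 1)
		first_err = 0;
}

int main(void)
{
	watchdog_on(2, 0);	/* the first CPU to enable need not be cpu 0 */
	watchdog_on(0, 0);
	watchdog_off(2);
	watchdog_off(0);	/* the last CPU to disable resets first_err */
	return 0;
}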