Date: Fri, 23 Jun 2017 12:29:07 -0400
From: Don Zickus <dzickus@redhat.com>
To: Thomas Gleixner
Cc: Kan Liang <kan.liang@intel.com>, linux-kernel@vger.kernel.org,
	mingo@kernel.org, akpm@linux-foundation.org, babu.moger@oracle.com,
	atomlin@redhat.com, prarit@redhat.com, torvalds@linux-foundation.org,
	peterz@infradead.org, eranian@google.com, acme@redhat.com,
	ak@linux.intel.com, stable@vger.kernel.org
Subject: Re: [PATCH V2] kernel/watchdog: fix spurious hard lockups
Message-ID: <20170623162907.l6inpxgztwwkeaoi@redhat.com>
References: <20170621144118.5939-1-kan.liang@intel.com>
	<20170622154450.2lua7fdmigcixldw@redhat.com>

On Fri, Jun 23, 2017 at 10:01:55AM +0200, Thomas Gleixner wrote:
> On Thu, 22 Jun 2017, Don Zickus wrote:
> > On Wed, Jun 21, 2017 at 11:53:57PM +0200, Thomas Gleixner wrote:
> > > On Wed, 21 Jun 2017, kan.liang@intel.com wrote:
> > > > We now have more and more systems where the Turbo range is wide enough
> > > > that the NMI watchdog expires faster than the soft watchdog timer that
> > > > updates the interrupt tick the NMI watchdog relies on.
> > > >
> > > > This problem was originally added by commit 58687acba592
> > > > ("lockup_detector: Combine nmi_watchdog and softlockup detector").
> > > > Previously the NMI watchdog would always check jiffies, which were
> > > > ticking fast enough. But now the backing update is quite slow, so the
> > > > expiry time becomes more sensitive.
> > >
> > > And slapping a factor 3 on the NMI period is the wrong answer to the
> > > problem. The simple solution would be to increase the hrtimer frequency,
> > > but that's not really desired either.
> > >
> > > Find an untested patch below, which should cure the issue.
> >
> > A simple low pass filter. It compiles. :-)  I don't think I have the
> > knowledge to test it. Kan?
>
> Yes, and it has an interesting twist. It's only working once we have
> switched to TSC as clocksource.
>
> As long as jiffies are the clocksource, this will miserably fail, because
> when the hrtimer interrupt is not delivered, jiffies won't be incremented
> either and the NMI will say: Oh, not enough time elapsed. Lather, rinse and
> repeat.
>
> One simple way to fix this is with the delta patch below.

Hmm, all this work for a temp fix.  Kan, how much longer until the real
fix of having perf count the right cycles?
Cheers,
Don

>
> Thanks,
>
> 	tglx
>
> 8<--------------------------
> --- a/kernel/watchdog_hld.c
> +++ b/kernel/watchdog_hld.c
> @@ -72,6 +72,7 @@ EXPORT_SYMBOL(touch_nmi_watchdog);
>  
>  #ifdef CONFIG_HARDLOCKUP_CHECK_TIMESTAMP
>  static DEFINE_PER_CPU(ktime_t, last_timestamp);
> +static DEFINE_PER_CPU(unsigned int, nmi_rearmed);
>  static ktime_t watchdog_hrtimer_sample_threshold __read_mostly;
>  
>  void watchdog_update_hrtimer_threshold(u64 period)
> @@ -105,8 +106,11 @@ static bool watchdog_check_timestamp(voi
>  	ktime_t delta, now = ktime_get_mono_fast_ns();
>  
>  	delta = now - __this_cpu_read(last_timestamp);
> -	if (delta < watchdog_hrtimer_sample_threshold)
> -		return false;
> +	if (delta < watchdog_hrtimer_sample_threshold) {
> +		if (__this_cpu_inc_return(nmi_rearmed) < 10)
> +			return false;
> +	}
> +	__this_cpu_write(nmi_rearmed, 0);
>  	__this_cpu_write(last_timestamp, now);
>  	return true;
>  }
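
For readability, this is roughly what watchdog_check_timestamp() ends up
looking like with the delta folded into the earlier filter patch. It is a
sketch reconstructed from the diffs in this thread (the surrounding
CONFIG_HARDLOCKUP_CHECK_TIMESTAMP plumbing is omitted), not a quote of any
final commit:

static DEFINE_PER_CPU(ktime_t, last_timestamp);
static DEFINE_PER_CPU(unsigned int, nmi_rearmed);
static ktime_t watchdog_hrtimer_sample_threshold __read_mostly;

static bool watchdog_check_timestamp(void)
{
	ktime_t delta, now = ktime_get_mono_fast_ns();

	delta = now - __this_cpu_read(last_timestamp);
	if (delta < watchdog_hrtimer_sample_threshold) {
		/*
		 * The NMI fired sooner than the filter threshold after the
		 * last accepted sample, most likely because the perf period
		 * elapsed early (turbo). Treat it as spurious, but only up
		 * to a point: with a jiffies clocksource a genuinely stuck
		 * hrtimer also stops 'now' from advancing, so let a real
		 * check through after 10 consecutive early NMIs.
		 */
		if (__this_cpu_inc_return(nmi_rearmed) < 10)
			return false;
	}
	__this_cpu_write(nmi_rearmed, 0);
	__this_cpu_write(last_timestamp, now);
	return true;
}

The rearm counter is what addresses the jiffies concern above: after ten
consecutive "too early" NMIs the check is allowed through even though the
timestamp has not advanced, so the filter cannot wedge the hardlockup
detector permanently.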
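
As a rough illustration of the shrinking margin described in the quoted
changelog at the top, here is a small user-space model. The 2.0/3.0 GHz
figures are made-up example numbers; the model assumes the x86 hardlockup
perf period is derived from the nominal frequency (cpu_khz), as
hw_nmi_get_sample_period() does, and that the softlockup hrtimer fires
every watchdog_thresh * 2 / 5 seconds:

#include <stdio.h>

int main(void)
{
	double base_ghz = 2.0;		/* nominal frequency used to program the counter */
	double turbo_ghz = 3.0;		/* hypothetical sustained turbo frequency */
	int watchdog_thresh = 10;	/* hardlockup threshold in seconds (default) */

	/* NMI perf period in cycles, computed from the nominal frequency */
	double nmi_cycles = base_ghz * 1e9 * watchdog_thresh;

	/* Real time between NMIs when the CPU actually runs at turbo speed */
	double nmi_seconds = nmi_cycles / (turbo_ghz * 1e9);

	/* Softlockup hrtimer period that must tick between two NMIs */
	double hrtimer_seconds = watchdog_thresh * 2.0 / 5.0;

	printf("NMI fires every %.2fs, hrtimer ticks every %.2fs\n",
	       nmi_seconds, hrtimer_seconds);
	printf("margin before a false hard lockup: %.2fs\n",
	       nmi_seconds - hrtimer_seconds);
	return 0;
}

With these numbers the NMI arrives every ~6.7s against a 4s hrtimer tick.
The wider the turbo range, the smaller that margin, and any delay in
hrtimer delivery on top of it can turn into the spurious hard lockup this
thread is about.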