From: John Stultz
To: Thomas Gleixner
Cc: lkml, Shaohua Li, Prarit Bhargava, Richard Cochran, Daniel Lezcano, Ingo Molnar
Subject: Re: [PATCH 8/9] clocksource: Improve unstable clocksource detection
Date: Mon, 17 Aug 2015 15:17:28 -0700

On Mon, Aug 17, 2015 at 3:04 PM, Thomas Gleixner wrote:
> On Mon, 17 Aug 2015, John Stultz wrote:
>
>> From: Shaohua Li
>>
>> >From time to time we saw TSC is marked as unstable in our systems, while
>
> Stray '>'
>
>> the CPUs declare to have stable TSC. Looking at the clocksource unstable
>> detection, there are two problems:
>> - watchdog clock source wrap. HPET is the most common watchdog clock
>>   source. It's 32-bit and runs in 14.3Mhz. That means the hpet counter
>>   can wrap in about 5 minutes.
>> - threshold isn't scaled against interval. The threshold is 0.0625s in
>>   0.5s interval. What if the actual interval is bigger than 0.5s?
>>
>> The watchdog runs in a timer bh, so hard/soft irq can defer its running.
>> Heavy network stack softirq can hog a cpu. IPMI driver can disable
>> interrupt for a very long time.
>
> And they hold off the timer softirq for more than a second? Don't you
> think that's the problem which needs to be fixed?

Though this is an issue I've experienced (and tried, unsuccessfully, to
fix in a more complicated way) with the RT kernel, where high-priority
tasks blocked the watchdog long enough that we'd disqualify the TSC.

Ideally that sort of high-priority RT busyness would be avoided, but
it's also a pain to have false positives trigger when doing things like
stress testing.
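To put rough numbers on the wrap problem from the commit message above,
here's a back-of-the-envelope userspace sketch (not the kernel code; the
14.318180MHz rate is the usual HPET frequency and the 400s delay is a
made-up example):

/*
 * Back-of-the-envelope sketch (userspace, not kernel code) of how a
 * delayed watchdog plus a wrapping HPET produces a bogus deviation.
 */
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

#define HPET_FREQ_HZ        14318180ULL         /* typical HPET rate */
#define HPET_MASK           0xffffffffULL       /* 32-bit main counter */
#define NSEC_PER_SEC        1000000000ULL
#define WATCHDOG_THRESHOLD  (NSEC_PER_SEC >> 4) /* 0.0625s, as in the kernel */

int main(void)
{
        /* The counter wraps after 2^32 ticks: roughly 300s at 14.3MHz. */
        double wrap_sec = (double)(HPET_MASK + 1) / HPET_FREQ_HZ;

        /* Pretend the watchdog timer was held off for 400 seconds. */
        uint64_t real_interval_ns = 400ULL * NSEC_PER_SEC;

        /* A perfectly good TSC sees the full interval... */
        uint64_t cs_nsec = real_interval_ns;

        /* ...but the HPET delta is only visible modulo the wrap. */
        uint64_t hpet_ticks = real_interval_ns * HPET_FREQ_HZ / NSEC_PER_SEC;
        uint64_t wd_nsec = (hpet_ticks & HPET_MASK) * NSEC_PER_SEC / HPET_FREQ_HZ;

        printf("HPET wraps every %.1f s\n", wrap_sec);
        printf("cs_nsec = %llu ns, wd_nsec = %llu ns, deviation = %lld ns (threshold %llu ns)\n",
               (unsigned long long)cs_nsec, (unsigned long long)wd_nsec,
               llabs((long long)(cs_nsec - wd_nsec)),
               (unsigned long long)WATCHDOG_THRESHOLD);
        return 0;
}

With a 400s gap the HPET delta reads back as roughly 100s, so the
apparent deviation is around 300 seconds against a 62.5ms threshold,
even though the TSC itself is perfect.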
>> The first problem is mostly we are suffering I think.
>
> So you think that's the root cause and because your patch makes it go
> away it's not necessary to know for sure, right?
>
>> Here is a simple patch to fix the issues. If the waterdog doesn't run
>
> waterdog?

Allergen-free. :)

>> for a long time, we ignore the detection.
>
> What's 'long time'? Please explain the numbers chosen.
>
>> This should work for the two
>
> Emphasis on 'should'?
>
>> problems. For the second one, we probably doen't need to scale if the
>> interval isn't very long.
>
> -ENOPARSE
>
>> @@ -122,9 +122,10 @@ static int clocksource_watchdog_kthread(void *data);
>>  static void __clocksource_change_rating(struct clocksource *cs, int rating);
>>
>>  /*
>> - * Interval: 0.5sec Threshold: 0.0625s
>> + * Interval: 0.5sec MaxInterval: 1s Threshold: 0.0625s
>>   */
>>  #define WATCHDOG_INTERVAL (HZ >> 1)
>> +#define WATCHDOG_MAX_INTERVAL_NS (NSEC_PER_SEC)
>>  #define WATCHDOG_THRESHOLD (NSEC_PER_SEC >> 4)
>>
>>  static void clocksource_watchdog_work(struct work_struct *work)
>> @@ -217,7 +218,9 @@ static void clocksource_watchdog(unsigned long data)
>>  			continue;
>>
>>  		/* Check the deviation from the watchdog clocksource. */
>> -		if ((abs(cs_nsec - wd_nsec) > WATCHDOG_THRESHOLD)) {
>> +		if ((abs(cs_nsec - wd_nsec) > WATCHDOG_THRESHOLD) &&
>> +			cs_nsec < WATCHDOG_MAX_INTERVAL_NS &&
>> +			wd_nsec < WATCHDOG_MAX_INTERVAL_NS) {
>
> So that adds a new opportunity for undiscovered wreckage:
>
>     clocksource_watchdog();
>     ....				<--- SMI skews TSC
>     looong_irq_disabled_region();
>     ....
>     clocksource_watchdog();		<--- Does not detect skew
>
> and it will not detect it later on if that SMI was a one time event.
>
> So 'fixing' the watchdog is the wrong approach. Fixing the stuff which
> prevents the watchdog to run is the proper thing to do.

I'm not sure here. I feel like these delay-caused false positives (I've
seen similar reports with VMs being stalled) are more common than
one-off SMI TSC skews.

There are hard lines in the timekeeping code where we do say "don't
delay us past X or we can't really handle it", but in this case the
main clocksource is fine and the limit is being imposed by the
watchdog. So I think some sort of solution to remove this restriction
would be good. We don't want to needlessly punish fine hardware just
because our checks for bad hardware add extra restrictions.

That said, I agree that the "should"s and other vague qualifiers you
point out in the commit description need more specifics to back them
up. And I'm fine with delaying this (and the follow-on) patch until
those details are provided.

thanks
-john
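P.S. For anyone skimming the archive, here's the logic the patch boils
down to, restated as a standalone sketch (the helper name and test
values are mine, not the actual kernel function):

#include <stdio.h>
#include <stdbool.h>
#include <stdlib.h>

#define NSEC_PER_SEC             1000000000LL
#define WATCHDOG_THRESHOLD       (NSEC_PER_SEC >> 4)  /* 0.0625s */
#define WATCHDOG_MAX_INTERVAL_NS (NSEC_PER_SEC)       /* 1s, from the patch */

/*
 * cs_nsec: nanoseconds elapsed on the clocksource under test (e.g. TSC)
 * wd_nsec: nanoseconds elapsed on the watchdog clocksource (e.g. HPET)
 * Returns true if the clocksource would be marked unstable.
 */
static bool deviation_exceeded(long long cs_nsec, long long wd_nsec)
{
        /*
         * If either interval is far beyond the nominal 0.5s, the watchdog
         * ran late and its counter may have wrapped, so the comparison is
         * meaningless and is skipped.  (This is also the hole pointed out
         * above: a real skew that coincides with such a delay is missed.)
         */
        if (cs_nsec >= WATCHDOG_MAX_INTERVAL_NS ||
            wd_nsec >= WATCHDOG_MAX_INTERVAL_NS)
                return false;

        return llabs(cs_nsec - wd_nsec) > WATCHDOG_THRESHOLD;
}

int main(void)
{
        /* Normal 0.5s interval with 1ms of apparent skew: still "stable". */
        printf("%d\n", deviation_exceeded(500000000LL, 501000000LL));

        /*
         * Watchdog delayed to ~400s with the HPET delta wrapped down to
         * ~100s: the old check would flag the TSC, the guarded check
         * skips the comparison instead.
         */
        printf("%d\n", deviation_exceeded(400000000000LL, 100000000000LL));
        return 0;
}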