Date: Mon, 31 Aug 2015 14:12:33 -0700
From: Shaohua Li
To: Thomas Gleixner
CC: John Stultz, lkml, Prarit Bhargava, Richard Cochran, Daniel Lezcano,
 Ingo Molnar, Clark Williams, Steven Rostedt
Subject: Re: [PATCH 8/9] clocksource: Improve unstable clocksource detection
Message-ID: <20150831211233.GA1413758@devbig257.prn2.facebook.com>
References: <1439844063-7957-1-git-send-email-john.stultz@linaro.org>
 <1439844063-7957-9-git-send-email-john.stultz@linaro.org>
 <20150826171533.GA2189998@devbig257.prn2.facebook.com>
In-Reply-To: <20150826171533.GA2189998@devbig257.prn2.facebook.com>

On Wed, Aug 26, 2015 at 10:15:33AM -0700, Shaohua Li wrote:
> On Tue, Aug 18, 2015 at 10:18:09PM +0200, Thomas Gleixner wrote:
> > On Tue, 18 Aug 2015, John Stultz wrote:
> > > On Tue, Aug 18, 2015 at 12:28 PM, Thomas Gleixner wrote:
> > > > On Tue, 18 Aug 2015, John Stultz wrote:
> > > >> On Tue, Aug 18, 2015 at 1:38 AM, Thomas Gleixner wrote:
> > > >> > On Mon, 17 Aug 2015, John Stultz wrote:
> > > >> >> On Mon, Aug 17, 2015 at 3:04 PM, Thomas Gleixner wrote:
> > > >> >> > On Mon, 17 Aug 2015, John Stultz wrote:
> > > >> >> >
> > > >> >> >> From: Shaohua Li
> > > >> >> >>
> > > >> >> >> >From time to time we saw the TSC marked as unstable in our systems, while
> > > >> >> >
> > > >> >> > Stray '>'
> > > >> >> >
> > > >> >> >> the CPUs declare a stable TSC. Looking at the clocksource unstable
> > > >> >> >> detection, there are two problems:
> > > >> >> >> - watchdog clock source wrap. HPET is the most common watchdog clock
> > > >> >> >>   source. It's 32-bit and runs at 14.3 MHz. That means the hpet counter
> > > >> >> >>   can wrap in about 5 minutes.
> > > >> >> >> - the threshold isn't scaled against the interval. The threshold is
> > > >> >> >>   0.0625s for a 0.5s interval. What if the actual interval is bigger
> > > >> >> >>   than 0.5s?
> > > >> >> >>
> > > >> >> >> The watchdog runs in a timer bh, so hard/soft irqs can defer its running.
> > > >> >> >> A heavy network stack softirq can hog a CPU, and the IPMI driver can
> > > >> >> >> disable interrupts for a very long time.
> > > >> >> >
> > > >> >> > And they hold off the timer softirq for more than a second? Don't you
> > > >> >> > think that's the problem which needs to be fixed?
> > > >> >>
> > > >> >> Though this is an issue I've experienced (and tried unsuccessfully to
> > > >> >> fix in a more complicated way) with the RT kernel, where high-priority
> > > >> >> tasks blocked the watchdog long enough that we'd disqualify the TSC.
> > > >> >
> > > >> > Did it disqualify the watchdog due to HPET wraparounds (5 minutes) or
> > > >> > due to the fixed threshold being applied?
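As a quick sanity check on the "5 minutes" above: a 32-bit counter running at
14.3 MHz wraps after 2^32 / 14.3e6, roughly 300 seconds. Here is a trivial
standalone sketch of just that arithmetic (illustrative only, not the kernel's
watchdog code; the 100 MHz entry is the guest HPET discussed further down):

#include <stdio.h>

int main(void)
{
	/* Illustrative frequencies from this thread: 14.3 MHz HPET on bare
	 * metal, 100 MHz HPET as exposed to the KVM guest further down. */
	const double hpet_hz[] = { 14.3e6, 100e6 };
	const double hpet_range = 4294967296.0;		/* 2^32: 32-bit counter */

	for (int i = 0; i < 2; i++)
		printf("HPET @ %5.1f MHz wraps every %.1f s\n",
		       hpet_hz[i] / 1e6, hpet_range / hpet_hz[i]);
	/* Prints ~300.3 s (about 5 minutes) and ~42.9 s respectively. */
	return 0;
}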
> > > >>
> > > >> This was years ago, but in my experience, the watchdog false positives
> > > >> were due to HPET wraparounds.
> > > >
> > > > Blocking stuff for 5 minutes is insane ....
> > >
> > > Yea. It was usually due to -RT stress testing, which kept the
> > > machines busy for quite a while. But again, if you have machines being
> > > maxed out with networking load, etc, even for long amounts of time, we
> > > still want to avoid false positives. Because after the watchdog
> >
> > The networking softirq does not hog the other softirqs. It has a limit
> > on processing loops and then goes back to let the other softirqs be
> > handled. So no, I doubt that heavy networking can cause this. If it
> > does, then we have some other, way more serious problems.
> >
> > I can see the issue with RT stress testing, but not with networking in
> > mainline.
>
> OK, the issue is triggered in my KVM guest; I guess it's easier to
> trigger in KVM because the HPET there runs at 100 MHz.
>
> [ 135.930067] clocksource: timekeeping watchdog: Marking clocksource 'tsc' as unstable because the skew is too large:
> [ 135.930095] clocksource: 'hpet' wd_now: 2bc19ea0 wd_last: 6c4e5570 mask: ffffffff
> [ 135.930105] clocksource: 'tsc' cs_now: 481250b45b cs_last: 219e6efb50 mask: ffffffffffffffff
> [ 135.938750] clocksource: Switched to clocksource hpet
>
> The HPET clock is 100 MHz and the CPU speed is 2200 MHz; KVM is passed the
> correct CPU info, so the guest cpuinfo shows the TSC as stable.
>
> The hpet interval is ((0x2bc19ea0 - 0x6c4e5570) & 0xffffffff) / 100000000 = 32.1s.
>
> The HPET wrap interval is 0xffffffff / 100000000 = 42.9s.
>
> The tsc interval is (0x481250b45b - 0x219e6efb50) / 2200000000 = 75s.
>
> 32.1 + 42.9 = 75
>
> The example shows the hpet wrapped, while the tsc was marked unstable.

Thomas & John,

Is this data enough to prove that the TSC-unstable issue can be triggered
by an HPET wrap? I can resend the patch with the data included.

Thanks,
Shaohua
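P.S. In case anyone wants to recheck the arithmetic, here is the same
calculation as a tiny standalone program (illustrative only; the register
values and frequencies are the ones quoted from dmesg above, not anything
read at runtime):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* wd_now/wd_last/cs_now/cs_last copied from the dmesg lines above. */
	const uint64_t wd_mask = 0xffffffffULL;		/* 32-bit HPET */
	const uint64_t wd_now  = 0x2bc19ea0ULL, wd_last = 0x6c4e5570ULL;
	const uint64_t cs_now  = 0x481250b45bULL, cs_last = 0x219e6efb50ULL;

	const double hpet_hz = 100e6;	/* 100 MHz guest HPET */
	const double tsc_hz  = 2.2e9;	/* 2200 MHz TSC */

	double wd_sec   = (double)((wd_now - wd_last) & wd_mask) / hpet_hz;
	double wrap_sec = (double)(wd_mask + 1) / hpet_hz;
	double cs_sec   = (double)(cs_now - cs_last) / tsc_hz;

	/* The HPET saw ~32.1s, one full wrap hides another ~42.9s, and the
	 * TSC saw ~75s: the "skew" is exactly one watchdog wraparound. */
	printf("hpet delta %.1fs + one wrap %.1fs = %.1fs, tsc delta %.1fs\n",
	       wd_sec, wrap_sec, wd_sec + wrap_sec, cs_sec);
	return 0;
}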