Date: Wed, 13 Apr 2016 08:18:13 +0200
From: Ingo Molnar
To: Waiman Long
Cc: Thomas Gleixner, Ingo Molnar, "H. Peter Anvin",
	linux-kernel@vger.kernel.org, x86@kernel.org, Jiang Liu,
	Borislav Petkov, Andy Lutomirski, Scott J Norton, Douglas Hatch,
	Randy Wright, Peter Zijlstra
Subject: Re: [PATCH v4] x86/hpet: Reduce HPET counter read contention
Message-ID: <20160413061813.GB4705@gmail.com>
References: <1460486768-34024-1-git-send-email-Waiman.Long@hpe.com>
In-Reply-To: <1460486768-34024-1-git-send-email-Waiman.Long@hpe.com>

* Waiman Long wrote:

> On a large system with many CPUs, using HPET as the clock source can
> have a significant impact on overall system performance for the
> following reasons:
>  1) There is a single HPET counter shared by all the CPUs.
>  2) Reading the HPET counter is a very slow operation.
>
> HPET may end up as the default clock source when, for example, the TSC
> clock calibration exceeds the allowable tolerance. Sometimes the
> performance slowdown can be so severe that the system may crash
> because of an NMI watchdog soft lockup, for example.

>  /*
> + * Reading the HPET counter is a very slow operation. If a large number of
> + * CPUs are trying to access the HPET counter simultaneously, it can cause
> + * massive delay and slow down system performance dramatically.
This may
> + * happen when HPET is the default clock source instead of TSC. For a
> + * really large system with hundreds of CPUs, the slowdown may be so
> + * severe that it may actually crash the system because of an NMI
> + * watchdog soft lockup, for example.
> + *
> + * If multiple CPUs are trying to access the HPET counter at the same time,
> + * we don't actually need to read the counter multiple times. Instead, the
> + * other CPUs can use the counter value read by the first CPU in the group.

Hm, weird, so how can this:

	static cycle_t read_hpet(struct clocksource *cs)
	{
		return (cycle_t)hpet_readl(HPET_COUNTER);
	}

... cause an actual slowdown of that magnitude? This goes straight to
MMIO. So is the hardware so terminally broken?

How good is the TSC clocksource on the affected system? Could we simply
always use the TSC (and not use the HPET at all as a clocksource),
instead of trying to fix broken hardware?

Thanks,

	Ingo