From: Adrian Hunter
To: Thomas Gleixner
Cc: Michael Ellerman, Nicholas Piggin, Christophe Leroy, "Aneesh Kumar K.V", "Naveen N. Rao", Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Christian Borntraeger, Sven Schnelle, Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin", Andy Lutomirski, Vincenzo Frascino, John Stultz, Stephen Boyd, Peter Zijlstra, Randy Dunlap, Bjorn Helgaas, Arnd Bergmann, Anna-Maria Behnsen, linuxppc-dev@lists.ozlabs.org, linux-kernel@vger.kernel.org, linux-s390@vger.kernel.org
Subject: [PATCH V2 08/19] x86/vdso: Make delta calculation overflow safe
Date: Mon, 25 Mar 2024 08:40:12 +0200
Message-Id: <20240325064023.2997-9-adrian.hunter@intel.com>
In-Reply-To: <20240325064023.2997-1-adrian.hunter@intel.com>
References: <20240325064023.2997-1-adrian.hunter@intel.com>

Kernel timekeeping is designed to keep the change in cycles (since the last timer interrupt) below max_cycles, which prevents multiplication overflow when converting
cycles to nanoseconds. However, if timer interrupts stop, the calculation will eventually overflow.

Add protection against that. Select GENERIC_VDSO_OVERFLOW_PROTECT so that max_cycles is made available in the VDSO data page. Check against max_cycles, falling back to a slower, higher-precision calculation. Take the opportunity to move the masking and negative-motion check into the slow path. The result is a calculation with performance similar to before: newer machines showed a benefit, whereas older Skylake-based hardware such as Intel Kaby Lake was seen to be <1% worse.

Suggested-by: Thomas Gleixner
Signed-off-by: Adrian Hunter
---
 arch/x86/Kconfig                         |  1 +
 arch/x86/include/asm/vdso/gettimeofday.h | 29 +++++++++++++++++-------
 2 files changed, 22 insertions(+), 8 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 03483b23a009..3a70ebb558e7 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -168,6 +168,7 @@ config X86
 	select GENERIC_TIME_VSYSCALL
 	select GENERIC_GETTIMEOFDAY
 	select GENERIC_VDSO_TIME_NS
+	select GENERIC_VDSO_OVERFLOW_PROTECT
 	select GUP_GET_PXX_LOW_HIGH if X86_PAE
 	select HARDIRQS_SW_RESEND
 	select HARDLOCKUP_CHECK_TIMESTAMP if X86_64
diff --git a/arch/x86/include/asm/vdso/gettimeofday.h b/arch/x86/include/asm/vdso/gettimeofday.h
index 5727dedd3549..0ef36190abe6 100644
--- a/arch/x86/include/asm/vdso/gettimeofday.h
+++ b/arch/x86/include/asm/vdso/gettimeofday.h
@@ -319,18 +319,31 @@ static inline bool arch_vdso_cycles_ok(u64 cycles)
  */
 static __always_inline u64 vdso_calc_ns(const struct vdso_data *vd, u64 cycles, u64 base)
 {
+	u64 delta = cycles - vd->cycle_last;
+
 	/*
+	 * Negative motion and deltas which can cause multiplication
+	 * overflow require special treatment. This check covers both as
+	 * negative motion is guaranteed to be greater than @vd::max_cycles
+	 * due to unsigned comparison.
+	 *
 	 * Due to the MSB/Sign-bit being used as invalid marker (see
-	 * arch_vdso_cycles_valid() above), the effective mask is S64_MAX.
+	 * arch_vdso_cycles_valid() above), the effective mask is S64_MAX,
+	 * but that case is also unlikely and will also take the unlikely path
+	 * here.
 	 */
-	u64 delta = (cycles - vd->cycle_last) & S64_MAX;
+	if (unlikely(delta > vd->max_cycles)) {
+		/*
+		 * Due to the above mentioned TSC wobbles, filter out
+		 * negative motion. Per the above masking, the effective
+		 * sign bit is now bit 62.
+		 */
+		if (delta & (1ULL << 62))
+			return base >> vd->shift;
 
-	/*
-	 * Due to the above mentioned TSC wobbles, filter out negative motion.
-	 * Per the above masking, the effective sign bit is now bit 62.
-	 */
-	if (unlikely(delta & (1ULL << 62)))
-		return base >> vd->shift;
+		/* Handle multiplication overflow gracefully */
+		return mul_u64_u32_add_u64_shr(delta & S64_MAX, vd->mult, base, vd->shift);
+	}
 
 	return ((delta * vd->mult) + base) >> vd->shift;
 }
-- 
2.34.1