Date: Mon, 15 Aug 2022 01:13:20 -0600
In-Reply-To: <20220815071332.627393-1-yuzhao@google.com>
Message-Id: <20220815071332.627393-2-yuzhao@google.com>
Mime-Version: 1.0
References: <20220815071332.627393-1-yuzhao@google.com>
X-Mailer: git-send-email 2.37.1.595.g718a3a8f04-goog
Subject: [PATCH v14 01/14] mm: x86, arm64: add arch_has_hw_pte_young()
From: Yu Zhao
To: Andrew Morton
Cc: Andi Kleen, Aneesh Kumar, Catalin Marinas, Dave Hansen, Hillf Danton,
    Jens Axboe, Johannes Weiner, Jonathan Corbet, Linus Torvalds,
    Matthew Wilcox, Mel Gorman, Michael Larabel, Michal Hocko,
    Mike Rapoport, Peter Zijlstra, Tejun Heo, Vlastimil Babka, Will Deacon,
    linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org,
    page-reclaim@google.com, Yu Zhao, Barry Song, Brian Geffon,
    Jan Alexander Steffens, Oleksandr Natalenko, Steven Barrett,
    Suleiman Souhlal, Daniel Byrne, Donald Carr, Holger Hoffstätte,
    Konstantin Kharlamov, Shuang Zhai, Sofia Trinh, Vaibhav Jain

Some architectures automatically set the accessed bit in PTEs, e.g., x86
and arm64 v8.2. On architectures that do not have this capability,
clearing the accessed bit in a PTE usually triggers a page fault following
the TLB miss of this PTE (to emulate the accessed bit).

Being aware of this capability can help make better decisions, e.g.,
whether to spread the work out over a period of time to reduce bursty page
faults when trying to clear the accessed bit in many PTEs.

Note that theoretically this capability can be unreliable, e.g., hotplugged
CPUs might be different from builtin ones. Therefore it should not be used
in architecture-independent code that involves correctness, e.g., to
determine whether TLB flushes are required (in combination with the
accessed bit).

Signed-off-by: Yu Zhao
Reviewed-by: Barry Song
Acked-by: Brian Geffon
Acked-by: Jan Alexander Steffens (heftig)
Acked-by: Oleksandr Natalenko
Acked-by: Steven Barrett
Acked-by: Suleiman Souhlal
Acked-by: Will Deacon
Tested-by: Daniel Byrne
Tested-by: Donald Carr
Tested-by: Holger Hoffstätte
Tested-by: Konstantin Kharlamov
Tested-by: Shuang Zhai
Tested-by: Sofia Trinh
Tested-by: Vaibhav Jain
---
 arch/arm64/include/asm/pgtable.h | 15 ++-------------
 arch/x86/include/asm/pgtable.h   |  6 +++---
 include/linux/pgtable.h          | 13 +++++++++++++
 mm/memory.c                      | 14 +-------------
 4 files changed, 19 insertions(+), 29 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index b5df82aa99e6..71a1af42f0e8 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -1082,24 +1082,13 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
  * page after fork() + CoW for pfn mappings. We don't always have a
  * hardware-managed access flag on arm64.
  */
-static inline bool arch_faults_on_old_pte(void)
-{
-	/* The register read below requires a stable CPU to make any sense */
-	cant_migrate();
-
-	return !cpu_has_hw_af();
-}
-#define arch_faults_on_old_pte arch_faults_on_old_pte
+#define arch_has_hw_pte_young	cpu_has_hw_af
 
 /*
  * Experimentally, it's cheap to set the access flag in hardware and we
  * benefit from prefaulting mappings as 'old' to start with.
  */
-static inline bool arch_wants_old_prefaulted_pte(void)
-{
-	return !arch_faults_on_old_pte();
-}
-#define arch_wants_old_prefaulted_pte arch_wants_old_prefaulted_pte
+#define arch_wants_old_prefaulted_pte	cpu_has_hw_af
 
 static inline bool pud_sect_supported(void)
 {
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 44e2d6f1dbaa..dc5f7d8ef68a 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1431,10 +1431,10 @@ static inline bool arch_has_pfn_modify_check(void)
 	return boot_cpu_has_bug(X86_BUG_L1TF);
 }
 
-#define arch_faults_on_old_pte arch_faults_on_old_pte
-static inline bool arch_faults_on_old_pte(void)
+#define arch_has_hw_pte_young arch_has_hw_pte_young
+static inline bool arch_has_hw_pte_young(void)
 {
-	return false;
+	return true;
 }
 
 #ifdef CONFIG_PAGE_TABLE_CHECK
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 014ee8f0fbaa..95f408df4695 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -260,6 +260,19 @@ static inline int pmdp_clear_flush_young(struct vm_area_struct *vma,
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 #endif
 
+#ifndef arch_has_hw_pte_young
+/*
+ * Return whether the accessed bit is supported on the local CPU.
+ *
+ * This stub assumes accessing through an old PTE triggers a page fault.
+ * Architectures that automatically set the access bit should overwrite it.
+ */
+static inline bool arch_has_hw_pte_young(void)
+{
+	return false;
+}
+#endif
+
 #ifndef __HAVE_ARCH_PTEP_GET_AND_CLEAR
 static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 				       unsigned long address,
diff --git a/mm/memory.c b/mm/memory.c
index b994784158f5..46071cf00b47 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -126,18 +126,6 @@ int randomize_va_space __read_mostly =
 					2;
 #endif
 
-#ifndef arch_faults_on_old_pte
-static inline bool arch_faults_on_old_pte(void)
-{
-	/*
-	 * Those arches which don't have hw access flag feature need to
-	 * implement their own helper. By default, "true" means pagefault
-	 * will be hit on old pte.
-	 */
-	return true;
-}
-#endif
-
 #ifndef arch_wants_old_prefaulted_pte
 static inline bool arch_wants_old_prefaulted_pte(void)
 {
@@ -2871,7 +2859,7 @@ static inline bool __wp_page_copy_user(struct page *dst, struct page *src,
 	 * On architectures with software "accessed" bits, we would
 	 * take a double page fault, so mark it accessed here.
 	 */
-	if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte)) {
+	if (!arch_has_hw_pte_young() && !pte_young(vmf->orig_pte)) {
 		pte_t entry;
 
 		vmf->pte = pte_offset_map_lock(mm, vmf->pmd, addr, &vmf->ptl);
-- 
2.37.1.595.g718a3a8f04-goog
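
To illustrate the kind of caller-side decision the commit message describes, here is a
minimal sketch (not part of the patch) of using arch_has_hw_pte_young() to size the work
done per pass when clearing the accessed bit over a large range. The helper name
example_aging_batch() and the batch constant are made up for this example; only
arch_has_hw_pte_young() and min_t() are real kernel interfaces.

#include <linux/kernel.h>
#include <linux/pgtable.h>

/* Arbitrary per-pass cap for this sketch; a real caller would tune it. */
#define EXAMPLE_AGING_BATCH	512

/*
 * Decide how many PTEs to age (clear the accessed bit on) in one pass.
 * With a hardware-managed accessed bit, clearing it is cheap and the
 * whole range can be handled at once; without it, every cleared PTE
 * takes a minor fault on its next access, so cap the per-pass work to
 * avoid bursty page faults.
 */
static unsigned long example_aging_batch(unsigned long nr_ptes)
{
	if (arch_has_hw_pte_young())
		return nr_ptes;

	return min_t(unsigned long, nr_ptes, EXAMPLE_AGING_BATCH);
}

On x86, and on arm64 CPUs with the hardware access flag, this returns the full range;
architectures relying on the generic stub fall back to the capped batch, matching the
"spread the work out over a period of time" rationale above.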