From: Will Deacon
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, Will Deacon,
	Catalin Marinas, Jan Kara, Minchan Kim, Andrew Morton,
	Kirill A. Shutemov, Linus Torvalds, Vinayak Menon, kernel-team@android.com
Subject: [PATCH 1/2] mm: Allow architectures to request 'old' entries when prefaulting
Date: Wed, 9 Dec 2020 16:39:49 +0000
Message-Id: <20201209163950.8494-2-will@kernel.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201209163950.8494-1-will@kernel.org>
References: <20201209163950.8494-1-will@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Commit 5c0a85fad949 ("mm: make faultaround produce old ptes") changed
the "faultaround" behaviour to initialise prefaulted PTEs as 'old',
since this avoids vmscan wrongly assuming that they are hot, despite
having never been explicitly accessed by userspace.

The change has been shown to benefit numerous arm64 micro-architectures
(with a hardware access flag) running Android, where both application
launch latency and direct reclaim time are significantly reduced.

Unfortunately, commit 315d09bf30c2 ("Revert "mm: make faultaround
produce old ptes"") reverted the change due to it being identified as
the cause of a ~6% regression in unixbench on x86. Experiments on a
variety of recent arm64 micro-architectures indicate that unixbench is
not affected by the original commit, yielding a 0-1% performance
improvement.

Since one size does not fit all for the initial state of prefaulted
PTEs, introduce arch_wants_old_faultaround_pte(), which allows an
architecture to opt in to 'old' prefaulted PTEs at runtime based on
whatever criteria it may have.

Cc: Jan Kara
Cc: Minchan Kim
Cc: Andrew Morton
Cc: Kirill A. Shutemov
Cc: Linus Torvalds
Reported-by: Vinayak Menon
Signed-off-by: Will Deacon
---
 include/linux/mm.h |  5 ++++-
 mm/memory.c        | 31 ++++++++++++++++++++++++++++---
 2 files changed, 32 insertions(+), 4 deletions(-)
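
As a purely illustrative sketch (not part of this patch): an architecture
would opt in by defining the hook in its own asm/pgtable.h, which
mm/memory.c checks with #ifndef before falling back to the default added
below. The use of arm64's cpu_has_hw_af() as the deciding criterion is only
an assumption about one plausible policy, not something this patch mandates:

  /* Hypothetical arch override, e.g. in an arch's asm/pgtable.h */
  #define arch_wants_old_faultaround_pte arch_wants_old_faultaround_pte
  static inline bool arch_wants_old_faultaround_pte(void)
  {
  	/* Only ask for 'old' prefaulted entries if hardware manages the access flag. */
  	return cpu_has_hw_af();
  }
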
diff --git a/include/linux/mm.h b/include/linux/mm.h
index db6ae4d3fb4e..932886554586 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -426,6 +426,7 @@ extern pgprot_t protection_map[16];
  * @FAULT_FLAG_REMOTE: The fault is not for current task/mm.
  * @FAULT_FLAG_INSTRUCTION: The fault was during an instruction fetch.
  * @FAULT_FLAG_INTERRUPTIBLE: The fault can be interrupted by non-fatal signals.
+ * @FAULT_FLAG_PREFAULT_OLD: Initialise pre-faulted PTEs in the 'old' state.
  *
  * About @FAULT_FLAG_ALLOW_RETRY and @FAULT_FLAG_TRIED: we can specify
  * whether we would allow page faults to retry by specifying these two
@@ -456,6 +457,7 @@ extern pgprot_t protection_map[16];
 #define FAULT_FLAG_REMOTE		0x80
 #define FAULT_FLAG_INSTRUCTION		0x100
 #define FAULT_FLAG_INTERRUPTIBLE	0x200
+#define FAULT_FLAG_PREFAULT_OLD		0x400
 
 /*
  * The default fault flags that should be used by most of the
@@ -493,7 +495,8 @@ static inline bool fault_flag_allow_retry_first(unsigned int flags)
 	{ FAULT_FLAG_USER,		"USER" }, \
 	{ FAULT_FLAG_REMOTE,		"REMOTE" }, \
 	{ FAULT_FLAG_INSTRUCTION,	"INSTRUCTION" }, \
-	{ FAULT_FLAG_INTERRUPTIBLE,	"INTERRUPTIBLE" }
+	{ FAULT_FLAG_INTERRUPTIBLE,	"INTERRUPTIBLE" }, \
+	{ FAULT_FLAG_PREFAULT_OLD,	"PREFAULT_OLD" }
 
 /*
  * vm_fault is filled by the pagefault handler and passed to the vma's
diff --git a/mm/memory.c b/mm/memory.c
index c48f8df6e502..6b30c15120e7 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -134,6 +134,18 @@ static inline bool arch_faults_on_old_pte(void)
 }
 #endif
 
+#ifndef arch_wants_old_faultaround_pte
+static inline bool arch_wants_old_faultaround_pte(void)
+{
+	/*
+	 * Transitioning a PTE from 'old' to 'young' can be expensive on
+	 * some architectures, even if it's performed in hardware. By
+	 * default, "false" means prefaulted entries will be 'young'.
+	 */
+	return false;
+}
+#endif
+
 static int __init disable_randmaps(char *s)
 {
 	randomize_va_space = 0;
@@ -3788,6 +3800,7 @@ vm_fault_t alloc_set_pte(struct vm_fault *vmf, struct page *page)
 {
 	struct vm_area_struct *vma = vmf->vma;
 	bool write = vmf->flags & FAULT_FLAG_WRITE;
+	bool old = vmf->flags & FAULT_FLAG_PREFAULT_OLD;
 	pte_t entry;
 	vm_fault_t ret;
 
@@ -3811,7 +3824,7 @@ vm_fault_t alloc_set_pte(struct vm_fault *vmf, struct page *page)
 
 	flush_icache_page(vma, page);
 	entry = mk_pte(page, vma->vm_page_prot);
-	entry = pte_sw_mkyoung(entry);
+	entry = old ? pte_mkold(entry) : pte_sw_mkyoung(entry);
 	if (write)
 		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
 	/* copy-on-write page */
@@ -3964,6 +3977,9 @@ static vm_fault_t do_fault_around(struct vm_fault *vmf)
 		smp_wmb(); /* See comment in __pte_alloc() */
 	}
 
+	if (arch_wants_old_faultaround_pte())
+		vmf->flags |= FAULT_FLAG_PREFAULT_OLD;
+
 	vmf->vma->vm_ops->map_pages(vmf, start_pgoff, end_pgoff);
 
 	/* Huge page is mapped? Page fault is solved */
@@ -3978,8 +3994,17 @@ static vm_fault_t do_fault_around(struct vm_fault *vmf)
 
 	/* check if the page fault is solved */
 	vmf->pte -= (vmf->address >> PAGE_SHIFT) - (address >> PAGE_SHIFT);
-	if (!pte_none(*vmf->pte))
-		ret = VM_FAULT_NOPAGE;
+	if (pte_none(*vmf->pte))
+		goto out_unlock;
+
+	if (vmf->flags & FAULT_FLAG_PREFAULT_OLD) {
+		pte_t pte = pte_mkyoung(*vmf->pte);
+		if (ptep_set_access_flags(vmf->vma, address, vmf->pte, pte, 0))
+			update_mmu_cache(vmf->vma, address, vmf->pte);
+	}
+
+	ret = VM_FAULT_NOPAGE;
+out_unlock:
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 out:
 	vmf->address = address;
-- 
2.29.2.576.ga3fc446d84-goog
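
For reference, a minimal standalone model of the policy above (plain C with
hypothetical names, independent of the kernel): when the architecture asks
for it, the prefaulted neighbours start out 'old', and only the address that
actually faulted is made 'young' afterwards.

  #include <stdbool.h>
  #include <stdio.h>

  #define PREFAULT_OLD 0x1		/* stand-in for FAULT_FLAG_PREFAULT_OLD */

  struct pte_model { bool young; };

  /* Stand-in for arch_wants_old_faultaround_pte(): pretend HW manages the access flag. */
  static bool arch_wants_old(void) { return true; }

  /* Analogue of map_pages(): populate a window of entries around the fault. */
  static void map_range(struct pte_model *ptes, int n, unsigned int flags)
  {
  	for (int i = 0; i < n; i++)
  		ptes[i].young = !(flags & PREFAULT_OLD);
  }

  int main(void)
  {
  	struct pte_model ptes[16] = { { false } };
  	unsigned int flags = 0;
  	int faulting = 7;		/* index of the page that actually faulted */

  	if (arch_wants_old())		/* do_fault_around() analogue */
  		flags |= PREFAULT_OLD;
  	map_range(ptes, 16, flags);
  	ptes[faulting].young = true;	/* the faulting address is always made young */

  	for (int i = 0; i < 16; i++)
  		printf("pte[%2d]: %s\n", i, ptes[i].young ? "young" : "old");
  	return 0;
  }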