From: Ryan Roberts <ryan.roberts@arm.com>
To: David Hildenbrand, Mark Rutland, Catalin Marinas, Will Deacon,
	Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter,
	Andrew Morton, Muchun Song
Cc: Ryan Roberts, linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: [RFC PATCH v1 3/4] mm/memory: Use ptep_get_lockless_norecency() for orig_pte
Date: Thu, 15 Feb 2024 12:17:55 +0000
Message-Id: <20240215121756.2734131-4-ryan.roberts@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240215121756.2734131-1-ryan.roberts@arm.com>
References: <20240215121756.2734131-1-ryan.roberts@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Let's convert handle_pte_fault()'s use of ptep_get_lockless() to
ptep_get_lockless_norecency() to save orig_pte.

There are a number of places that follow this model:

    orig_pte = ptep_get_lockless(ptep)
    ...
    if (!pte_same(orig_pte, ptep_get(ptep)))
        // RACE!
    ...

So we need to be careful to convert all of those to use
pte_same_norecency() so that the access and dirty bits are excluded
from the comparison.

Additionally there are a couple of places that genuinely rely on the
access and dirty bits of orig_pte, but with some careful refactoring,
we can use ptep_get() once we are holding the lock to achieve
equivalent logic.

Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 mm/memory.c | 55 +++++++++++++++++++++++++++++++++--------------------
 1 file changed, 34 insertions(+), 21 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 8e65fa1884f1..075245314ec3 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3014,7 +3014,7 @@ static inline int pte_unmap_same(struct vm_fault *vmf)
 #if defined(CONFIG_SMP) || defined(CONFIG_PREEMPTION)
 	if (sizeof(pte_t) > sizeof(unsigned long)) {
 		spin_lock(vmf->ptl);
-		same = pte_same(ptep_get(vmf->pte), vmf->orig_pte);
+		same = pte_same_norecency(ptep_get(vmf->pte), vmf->orig_pte);
 		spin_unlock(vmf->ptl);
 	}
 #endif
@@ -3062,11 +3062,14 @@ static inline int __wp_page_copy_user(struct page *dst, struct page *src,
 	 * take a double page fault, so mark it accessed here.
 	 */
 	vmf->pte = NULL;
-	if (!arch_has_hw_pte_young() && !pte_young(vmf->orig_pte)) {
+	if (!arch_has_hw_pte_young()) {
 		pte_t entry;

 		vmf->pte = pte_offset_map_lock(mm, vmf->pmd, addr, &vmf->ptl);
-		if (unlikely(!vmf->pte || !pte_same(ptep_get(vmf->pte), vmf->orig_pte))) {
+		if (likely(vmf->pte))
+			entry = ptep_get(vmf->pte);
+		if (unlikely(!vmf->pte ||
+			     !pte_same_norecency(entry, vmf->orig_pte))) {
 			/*
 			 * Other thread has already handled the fault
 			 * and update local tlb only
@@ -3077,9 +3080,11 @@ static inline int __wp_page_copy_user(struct page *dst, struct page *src,
 			goto pte_unlock;
 		}

-		entry = pte_mkyoung(vmf->orig_pte);
-		if (ptep_set_access_flags(vma, addr, vmf->pte, entry, 0))
-			update_mmu_cache_range(vmf, vma, addr, vmf->pte, 1);
+		if (!pte_young(entry)) {
+			entry = pte_mkyoung(entry);
+			if (ptep_set_access_flags(vma, addr, vmf->pte, entry, 0))
+				update_mmu_cache_range(vmf, vma, addr, vmf->pte, 1);
+		}
 	}

 	/*
@@ -3094,7 +3099,8 @@ static inline int __wp_page_copy_user(struct page *dst, struct page *src,

 		/* Re-validate under PTL if the page is still mapped */
 		vmf->pte = pte_offset_map_lock(mm, vmf->pmd, addr, &vmf->ptl);
-		if (unlikely(!vmf->pte || !pte_same(ptep_get(vmf->pte), vmf->orig_pte))) {
+		if (unlikely(!vmf->pte ||
+			     !pte_same_norecency(ptep_get(vmf->pte), vmf->orig_pte))) {
 			/* The PTE changed under us, update local tlb */
 			if (vmf->pte)
 				update_mmu_tlb(vma, addr, vmf->pte);
@@ -3369,14 +3375,17 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 	 * Re-check the pte - we dropped the lock
 	 */
 	vmf->pte = pte_offset_map_lock(mm, vmf->pmd, vmf->address, &vmf->ptl);
-	if (likely(vmf->pte && pte_same(ptep_get(vmf->pte), vmf->orig_pte))) {
+	if (likely(vmf->pte))
+		entry = ptep_get(vmf->pte);
+	if (likely(vmf->pte && pte_same_norecency(entry, vmf->orig_pte))) {
 		if (old_folio) {
 			if (!folio_test_anon(old_folio)) {
 				dec_mm_counter(mm, mm_counter_file(old_folio));
 				inc_mm_counter(mm, MM_ANONPAGES);
 			}
 		} else {
-			ksm_might_unmap_zero_page(mm, vmf->orig_pte);
+			/* Needs dirty bit so can't use vmf->orig_pte. */
+			ksm_might_unmap_zero_page(mm, entry);
 			inc_mm_counter(mm, MM_ANONPAGES);
 		}
 		flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
@@ -3494,7 +3503,7 @@ static vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf, struct folio *folio
 	 * We might have raced with another page fault while we released the
 	 * pte_offset_map_lock.
 	 */
-	if (!pte_same(ptep_get(vmf->pte), vmf->orig_pte)) {
+	if (!pte_same_norecency(ptep_get(vmf->pte), vmf->orig_pte)) {
 		update_mmu_tlb(vmf->vma, vmf->address, vmf->pte);
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
 		return VM_FAULT_NOPAGE;
@@ -3883,7 +3892,8 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)

 	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
 				&vmf->ptl);
-	if (likely(vmf->pte && pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
+	if (likely(vmf->pte &&
+		   pte_same_norecency(ptep_get(vmf->pte), vmf->orig_pte)))
 		restore_exclusive_pte(vma, vmf->page, vmf->address, vmf->pte);

 	if (vmf->pte)
@@ -3928,7 +3938,7 @@ static vm_fault_t pte_marker_clear(struct vm_fault *vmf)
 	 * quickly from a PTE_MARKER_UFFD_WP into PTE_MARKER_POISONED.
 	 * So is_pte_marker() check is not enough to safely drop the pte.
 	 */
-	if (pte_same(vmf->orig_pte, ptep_get(vmf->pte)))
+	if (pte_same_norecency(vmf->orig_pte, ptep_get(vmf->pte)))
 		pte_clear(vmf->vma->vm_mm, vmf->address, vmf->pte);
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 	return 0;
@@ -4028,8 +4038,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
 					vmf->address, &vmf->ptl);
 			if (unlikely(!vmf->pte ||
-				     !pte_same(ptep_get(vmf->pte),
-							vmf->orig_pte)))
+				     !pte_same_norecency(ptep_get(vmf->pte),
+							 vmf->orig_pte)))
 				goto unlock;

 			/*
@@ -4117,7 +4127,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
 					vmf->address, &vmf->ptl);
 			if (likely(vmf->pte &&
-				   pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
+				   pte_same_norecency(ptep_get(vmf->pte),
+						      vmf->orig_pte)))
 				ret = VM_FAULT_OOM;
 			goto unlock;
 		}
@@ -4187,7 +4198,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	 */
 	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
 			&vmf->ptl);
-	if (unlikely(!vmf->pte || !pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
+	if (unlikely(!vmf->pte ||
+		     !pte_same_norecency(ptep_get(vmf->pte), vmf->orig_pte)))
 		goto out_nomap;

 	if (unlikely(!folio_test_uptodate(folio))) {
@@ -4747,7 +4759,7 @@ void set_pte_range(struct vm_fault *vmf, struct folio *folio,
 static bool vmf_pte_changed(struct vm_fault *vmf)
 {
 	if (vmf->flags & FAULT_FLAG_ORIG_PTE_VALID)
-		return !pte_same(ptep_get(vmf->pte), vmf->orig_pte);
+		return !pte_same_norecency(ptep_get(vmf->pte), vmf->orig_pte);

 	return !pte_none(ptep_get(vmf->pte));
 }
@@ -5125,7 +5137,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	 * the pfn may be screwed if the read is non atomic.
 	 */
 	spin_lock(vmf->ptl);
-	if (unlikely(!pte_same(ptep_get(vmf->pte), vmf->orig_pte))) {
+	if (unlikely(!pte_same_norecency(ptep_get(vmf->pte), vmf->orig_pte))) {
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
 		goto out;
 	}
@@ -5197,7 +5209,8 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 				       vmf->address, &vmf->ptl);
 	if (unlikely(!vmf->pte))
 		goto out;
-	if (unlikely(!pte_same(ptep_get(vmf->pte), vmf->orig_pte))) {
+	if (unlikely(!pte_same_norecency(ptep_get(vmf->pte),
+					 vmf->orig_pte))) {
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
 		goto out;
 	}
@@ -5343,7 +5356,7 @@ static vm_fault_t handle_pte_fault(struct vm_fault *vmf)
 						 vmf->address, &vmf->ptl);
 		if (unlikely(!vmf->pte))
 			return 0;
-		vmf->orig_pte = ptep_get_lockless(vmf->pte);
+		vmf->orig_pte = ptep_get_lockless_norecency(vmf->pte);
 		vmf->flags |= FAULT_FLAG_ORIG_PTE_VALID;

 		if (pte_none(vmf->orig_pte)) {
@@ -5363,7 +5376,7 @@ static vm_fault_t handle_pte_fault(struct vm_fault *vmf)

 	spin_lock(vmf->ptl);
 	entry = vmf->orig_pte;
-	if (unlikely(!pte_same(ptep_get(vmf->pte), entry))) {
+	if (unlikely(!pte_same_norecency(ptep_get(vmf->pte), entry))) {
 		update_mmu_tlb(vmf->vma, vmf->address, vmf->pte);
 		goto unlock;
 	}
--
2.25.1