From: Ryan Roberts
To: Andrew Morton, SeongJae Park, Christoph Hellwig
Cc: Ryan Roberts, "Matthew Wilcox (Oracle)",
    "Kirill A. Shutemov", Lorenzo Stoakes, Uladzislau Rezki, Zi Yan,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, damon@lists.linux.dev
Subject: [PATCH v2 2/5] mm: damon must atomically clear young on ptes and pmds
Date: Thu, 18 May 2023 12:07:24 +0100
Message-Id: <20230518110727.2106156-3-ryan.roberts@arm.com>
In-Reply-To: <20230518110727.2106156-1-ryan.roberts@arm.com>
References: <20230518110727.2106156-1-ryan.roberts@arm.com>

It is racy to non-atomically read a pte, then clear the young bit, then
write it back, as this could discard dirty information. Further, it is
bad practice to directly set a pte entry within a table. Instead,
clearing young must go through the arch-provided helper,
ptep_test_and_clear_young(), to ensure the entry is modified atomically
and to give the arch code visibility, allowing it to check (and
potentially modify) the operation.

Fixes: 46c3a0accdc4 ("mm/damon/vaddr: separate commonly usable functions")
Signed-off-by: Ryan Roberts
Reviewed-by: Zi Yan
---
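[Editorial note, not part of the original patch] The race described in the
commit message is easy to see in a standalone model. The sketch below is a
userspace illustration only: the PTE_YOUNG/PTE_DIRTY bit positions and the
model_test_and_clear_young() helper are invented for the example and are
not the kernel's definitions. It shows how the removed read/modify/write
pattern can throw away a dirty-bit update made between the read and the
write-back, while a single atomic fetch-and-clear (the behaviour the
arch-provided ptep_test_and_clear_young()/pmdp_test_and_clear_young()
helpers are relied on to provide) cannot.

/*
 * Userspace model of the race -- illustrative only, not kernel code.
 * Build with: cc -std=c11 model.c
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PTE_YOUNG (1ull << 5)	/* models the hardware "accessed" bit */
#define PTE_DIRTY (1ull << 6)	/* models the hardware "dirty" bit */

/* Atomically clear young and report whether it was set (the fixed pattern). */
static bool model_test_and_clear_young(_Atomic uint64_t *pte)
{
	uint64_t old = atomic_fetch_and(pte, ~PTE_YOUNG);

	return old & PTE_YOUNG;
}

int main(void)
{
	_Atomic uint64_t pte = PTE_YOUNG;
	uint64_t snapshot;

	/*
	 * The removed pattern: read the entry, then write back a modified
	 * copy. If hardware sets the dirty bit in between (simulated by the
	 * fetch_or), the stale write-back silently discards it.
	 */
	snapshot = atomic_load(&pte);			/* read */
	atomic_fetch_or(&pte, PTE_DIRTY);		/* concurrent HW update */
	atomic_store(&pte, snapshot & ~PTE_YOUNG);	/* stale write-back */
	printf("non-atomic mkold: young seen=%d, dirty %s\n",
	       snapshot & PTE_YOUNG ? 1 : 0,
	       atomic_load(&pte) & PTE_DIRTY ? "kept" : "LOST");

	/*
	 * The atomic variant: one fetch-and-clear cannot be interleaved with
	 * a concurrent bit set; the dirty update lands either before or
	 * after it and is preserved either way.
	 */
	atomic_store(&pte, PTE_YOUNG | PTE_DIRTY);
	printf("atomic mkold:     young seen=%d, dirty %s\n",
	       model_test_and_clear_young(&pte) ? 1 : 0,
	       atomic_load(&pte) & PTE_DIRTY ? "kept" : "LOST");

	return 0;
}

In the kernel the fallback for these helpers may not be a single atomic
instruction on every architecture; the point of going through them, rather
than writing the entry directly, is that the architecture gets to define
and serialise the operation.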
 mm/damon/ops-common.c | 16 ++++++----------
 mm/damon/ops-common.h |  4 ++--
 mm/damon/paddr.c      |  4 ++--
 mm/damon/vaddr.c      |  4 ++--
 4 files changed, 12 insertions(+), 16 deletions(-)

diff --git a/mm/damon/ops-common.c b/mm/damon/ops-common.c
index cc63cf953636..acc264b97903 100644
--- a/mm/damon/ops-common.c
+++ b/mm/damon/ops-common.c
@@ -37,7 +37,7 @@ struct folio *damon_get_folio(unsigned long pfn)
 	return folio;
 }
 
-void damon_ptep_mkold(pte_t *pte, struct mm_struct *mm, unsigned long addr)
+void damon_ptep_mkold(pte_t *pte, struct vm_area_struct *vma, unsigned long addr)
 {
 	bool referenced = false;
 	struct folio *folio = damon_get_folio(pte_pfn(*pte));
@@ -45,13 +45,11 @@ void damon_ptep_mkold(pte_t *pte, struct mm_struct *mm, unsigned long addr)
 	if (!folio)
 		return;
 
-	if (pte_young(*pte)) {
+	if (ptep_test_and_clear_young(vma, addr, pte))
 		referenced = true;
-		*pte = pte_mkold(*pte);
-	}
 
 #ifdef CONFIG_MMU_NOTIFIER
-	if (mmu_notifier_clear_young(mm, addr, addr + PAGE_SIZE))
+	if (mmu_notifier_clear_young(vma->vm_mm, addr, addr + PAGE_SIZE))
 		referenced = true;
 #endif /* CONFIG_MMU_NOTIFIER */
 
@@ -62,7 +60,7 @@ void damon_ptep_mkold(pte_t *pte, struct mm_struct *mm, unsigned long addr)
 	folio_put(folio);
 }
 
-void damon_pmdp_mkold(pmd_t *pmd, struct mm_struct *mm, unsigned long addr)
+void damon_pmdp_mkold(pmd_t *pmd, struct vm_area_struct *vma, unsigned long addr)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	bool referenced = false;
@@ -71,13 +69,11 @@ void damon_pmdp_mkold(pmd_t *pmd, struct mm_struct *mm, unsigned long addr)
 	if (!folio)
 		return;
 
-	if (pmd_young(*pmd)) {
+	if (pmdp_test_and_clear_young(vma, addr, pmd))
 		referenced = true;
-		*pmd = pmd_mkold(*pmd);
-	}
 
 #ifdef CONFIG_MMU_NOTIFIER
-	if (mmu_notifier_clear_young(mm, addr, addr + HPAGE_PMD_SIZE))
+	if (mmu_notifier_clear_young(vma->vm_mm, addr, addr + HPAGE_PMD_SIZE))
 		referenced = true;
 #endif /* CONFIG_MMU_NOTIFIER */
 
diff --git a/mm/damon/ops-common.h b/mm/damon/ops-common.h
index 14f4bc69f29b..18d837d11bce 100644
--- a/mm/damon/ops-common.h
+++ b/mm/damon/ops-common.h
@@ -9,8 +9,8 @@
 
 struct folio *damon_get_folio(unsigned long pfn);
 
-void damon_ptep_mkold(pte_t *pte, struct mm_struct *mm, unsigned long addr);
-void damon_pmdp_mkold(pmd_t *pmd, struct mm_struct *mm, unsigned long addr);
+void damon_ptep_mkold(pte_t *pte, struct vm_area_struct *vma, unsigned long addr);
+void damon_pmdp_mkold(pmd_t *pmd, struct vm_area_struct *vma, unsigned long addr);
 
 int damon_cold_score(struct damon_ctx *c, struct damon_region *r,
 			struct damos *s);
diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index 467b99166b43..5b3a3463d078 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -24,9 +24,9 @@ static bool __damon_pa_mkold(struct folio *folio, struct vm_area_struct *vma,
 	while (page_vma_mapped_walk(&pvmw)) {
 		addr = pvmw.address;
 		if (pvmw.pte)
-			damon_ptep_mkold(pvmw.pte, vma->vm_mm, addr);
+			damon_ptep_mkold(pvmw.pte, vma, addr);
 		else
-			damon_pmdp_mkold(pvmw.pmd, vma->vm_mm, addr);
+			damon_pmdp_mkold(pvmw.pmd, vma, addr);
 	}
 	return true;
 }
diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
index 1fec16d7263e..37994fb6120c 100644
--- a/mm/damon/vaddr.c
+++ b/mm/damon/vaddr.c
@@ -311,7 +311,7 @@ static int damon_mkold_pmd_entry(pmd_t *pmd, unsigned long addr,
 	}
 
 	if (pmd_trans_huge(*pmd)) {
-		damon_pmdp_mkold(pmd, walk->mm, addr);
+		damon_pmdp_mkold(pmd, walk->vma, addr);
 		spin_unlock(ptl);
 		return 0;
 	}
@@ -323,7 +323,7 @@ static int damon_mkold_pmd_entry(pmd_t *pmd, unsigned long addr,
 	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
 	if (!pte_present(*pte))
 		goto out;
-	damon_ptep_mkold(pte, walk->mm, addr);
+	damon_ptep_mkold(pte, walk->vma, addr);
 out:
 	pte_unmap_unlock(pte, ptl);
 	return 0;
-- 
2.25.1