Message-ID: <456de31b-221f-4aeb-a2d3-9bb318526417@arm.com>
Date: Tue, 16 Apr 2024 17:29:27 +0100
X-Mailing-List: linux-kernel@vger.kernel.org
Subject: Re: [PATCH v7 2/3] mm/arm64: override clear_young_dirty_ptes() batch helper
To: Lance Yang, akpm@linux-foundation.org
Cc: david@redhat.com, 21cnbao@gmail.com, mhocko@suse.com, fengwei.yin@intel.com,
 zokeefe@google.com, shy828301@gmail.com, xiehuan09@gmail.com,
 wangkefeng.wang@huawei.com, songmuchun@bytedance.com, peterx@redhat.com,
 minchan@kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
References: <20240416033457.32154-1-ioworker0@gmail.com>
 <20240416033457.32154-3-ioworker0@gmail.com>
From: Ryan Roberts
In-Reply-To: <20240416033457.32154-3-ioworker0@gmail.com>

On 16/04/2024 04:34, Lance Yang wrote:
> The per-pte get_and_clear/modify/set approach would result in
> unfolding/refolding for contpte mappings on arm64. So we need
> to override clear_young_dirty_ptes() for arm64 to avoid it.
>
> Suggested-by: David Hildenbrand
> Suggested-by: Barry Song <21cnbao@gmail.com>
> Suggested-by: Ryan Roberts
> Signed-off-by: Lance Yang

Reviewed-by: Ryan Roberts

> ---
>  arch/arm64/include/asm/pgtable.h | 55 ++++++++++++++++++++++++++++++++
>  arch/arm64/mm/contpte.c          | 29 +++++++++++++++++
>  2 files changed, 84 insertions(+)
>
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index 9fd8613b2db2..1303d30287dc 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -1223,6 +1223,46 @@ static inline void __wrprotect_ptes(struct mm_struct *mm, unsigned long address,
>  		__ptep_set_wrprotect(mm, address, ptep);
>  }
>
> +static inline void __clear_young_dirty_pte(struct vm_area_struct *vma,
> +					   unsigned long addr, pte_t *ptep,
> +					   pte_t pte, cydp_t flags)
> +{
> +	pte_t old_pte;
> +
> +	do {
> +		old_pte = pte;
> +
> +		if (flags & CYDP_CLEAR_YOUNG)
> +			pte = pte_mkold(pte);
> +		if (flags & CYDP_CLEAR_DIRTY)
> +			pte = pte_mkclean(pte);
> +
> +		pte_val(pte) = cmpxchg_relaxed(&pte_val(*ptep),
> +					       pte_val(old_pte), pte_val(pte));
> +	} while (pte_val(pte) != pte_val(old_pte));
> +}
> +
> +static inline void __clear_young_dirty_ptes(struct vm_area_struct *vma,
> +					    unsigned long addr, pte_t *ptep,
> +					    unsigned int nr, cydp_t flags)
> +{
> +	pte_t pte;
> +
> +	for (;;) {
> +		pte = __ptep_get(ptep);
> +
> +		if (flags == (CYDP_CLEAR_YOUNG | CYDP_CLEAR_DIRTY))
> +			__set_pte(ptep, pte_mkclean(pte_mkold(pte)));
> +		else
> +			__clear_young_dirty_pte(vma, addr, ptep, pte, flags);
> +
> +		if (--nr == 0)
> +			break;
> +		ptep++;
> +		addr += PAGE_SIZE;
> +	}
> +}
> +
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  #define __HAVE_ARCH_PMDP_SET_WRPROTECT
>  static inline void pmdp_set_wrprotect(struct mm_struct *mm,
> @@ -1379,6 +1419,9 @@ extern void contpte_wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
>  extern int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
>  					unsigned long addr, pte_t *ptep,
>  					pte_t entry, int dirty);
> +extern void contpte_clear_young_dirty_ptes(struct vm_area_struct *vma,
> +					unsigned long addr, pte_t *ptep,
> +					unsigned int nr, cydp_t flags);
>
>  static __always_inline void contpte_try_fold(struct mm_struct *mm,
>  					unsigned long addr, pte_t *ptep, pte_t pte)
> @@ -1603,6 +1646,17 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma,
>  	return contpte_ptep_set_access_flags(vma, addr, ptep, entry, dirty);
>  }
>
> +#define clear_young_dirty_ptes clear_young_dirty_ptes
> +static inline void clear_young_dirty_ptes(struct vm_area_struct *vma,
> +					  unsigned long addr, pte_t *ptep,
> +					  unsigned int nr, cydp_t flags)
> +{
> +	if (likely(nr == 1 && !pte_cont(__ptep_get(ptep))))
> +		__clear_young_dirty_ptes(vma, addr, ptep, nr, flags);
> +	else
> +		contpte_clear_young_dirty_ptes(vma, addr, ptep, nr, flags);
> +}
> +
>  #else /* CONFIG_ARM64_CONTPTE */
>
>  #define ptep_get				__ptep_get
> @@ -1622,6 +1676,7 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma,
>  #define wrprotect_ptes				__wrprotect_ptes
>  #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
>  #define ptep_set_access_flags			__ptep_set_access_flags
> +#define clear_young_dirty_ptes			__clear_young_dirty_ptes
>
>  #endif /* CONFIG_ARM64_CONTPTE */
>
> diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
> index 1b64b4c3f8bf..9f9486de0004 100644
> --- a/arch/arm64/mm/contpte.c
> +++ b/arch/arm64/mm/contpte.c
> @@ -361,6 +361,35 @@ void contpte_wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
>  }
>  EXPORT_SYMBOL_GPL(contpte_wrprotect_ptes);
>
> +void contpte_clear_young_dirty_ptes(struct vm_area_struct *vma,
> +				    unsigned long addr, pte_t *ptep,
> +				    unsigned int nr, cydp_t flags)
> +{
> +	/*
> +	 * We can safely clear access/dirty without needing to unfold from
> +	 * the architectures perspective, even when contpte is set. If the
> +	 * range starts or ends midway through a contpte block, we can just
> +	 * expand to include the full contpte block. While this is not
> +	 * exactly what the core-mm asked for, it tracks access/dirty per
> +	 * folio, not per page. And since we only create a contpte block
> +	 * when it is covered by a single folio, we can get away with
> +	 * clearing access/dirty for the whole block.
> +	 */
> +	unsigned long start = addr;
> +	unsigned long end = start + nr * PAGE_SIZE;
> +
> +	if (pte_cont(__ptep_get(ptep + nr - 1)))
> +		end = ALIGN(end, CONT_PTE_SIZE);
> +
> +	if (pte_cont(__ptep_get(ptep))) {
> +		start = ALIGN_DOWN(start, CONT_PTE_SIZE);
> +		ptep = contpte_align_down(ptep);
> +	}
> +
> +	__clear_young_dirty_ptes(vma, start, ptep, (end - start) / PAGE_SIZE, flags);
> +}
> +EXPORT_SYMBOL_GPL(contpte_clear_young_dirty_ptes);
> +
>  int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
>  				  unsigned long addr, pte_t *ptep,
>  				  pte_t entry, int dirty)