From: Ryan Roberts
To: Catalin Marinas, Will Deacon, Ard Biesheuvel, Marc Zyngier, Oliver Upton,
	James Morse, Suzuki K Poulose, Zenghui Yu, Andrey Ryabinin,
	Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
	Vincenzo Frascino, Andrew Morton, Anshuman Khandual, Matthew Wilcox,
	Yu Zhao, Mark Rutland, David Hildenbrand, Kefeng Wang, John Hubbard,
	Zi Yan, Barry Song <21cnbao@gmail.com>, Alistair Popple, Yang Shi
Cc: Ryan Roberts, linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH v4 16/16] arm64/mm: Implement clear_ptes() to optimize exit, munmap, dontneed
Date: Mon, 18 Dec 2023 10:51:00 +0000
Message-Id: <20231218105100.172635-17-ryan.roberts@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20231218105100.172635-1-ryan.roberts@arm.com>
References: <20231218105100.172635-1-ryan.roberts@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

With the core-mm changes in place to batch-clear ptes during
zap_pte_range(), we can take advantage of this in arm64 to greatly reduce
the number of tlbis we have to issue, and to recover the performance lost
in exit, munmap and madvise(DONTNEED) that was incurred when adding support
for transparent contiguous ptes.

If we are clearing a whole contpte range, we can elide first unfolding
that range and save the tlbis; we just clear the whole range directly.
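To make that win concrete, here is a minimal user-space sketch, not kernel
code: the block structure, the "folded" flag and the tlbi counter are
invented stand-ins for illustration only. It models why a batched clear
that fully covers a contpte block can skip the unfold, while clearing the
same block one pte at a time forces an unfold (and its block-wide tlbi) on
the first touch:

/*
 * Illustrative model only: "folded" and the tlbi counter are stand-ins,
 * not the kernel's definitions. CONT_PTES of 16 matches 16 x 4K = 64K.
 */
#include <stdbool.h>
#include <stdio.h>

#define CONT_PTES 16

struct block {
	bool folded;			/* mapped with the contiguous bit */
	bool present[CONT_PTES];
};

static int tlbis;

/* Unfolding a folded block requires invalidating the whole block. */
static void unfold(struct block *b)
{
	if (b->folded) {
		b->folded = false;
		tlbis++;
	}
}

/* Clear nr entries starting at start; a partial range must unfold first. */
static void clear_range(struct block *b, int start, int nr)
{
	if (nr < CONT_PTES)
		unfold(b);		/* keep the remaining entries valid */
	for (int i = start; i < start + nr; i++)
		b->present[i] = false;
}

int main(void)
{
	struct block a = { .folded = true };
	struct block c = { .folded = true };

	/* Per-pte clearing: the first touch unfolds the block. */
	for (int i = 0; i < CONT_PTES; i++)
		clear_range(&a, i, 1);
	printf("per-pte clear: %d tlbi(s)\n", tlbis);

	/* Batched clearing of the fully covered block: no unfold needed. */
	tlbis = 0;
	clear_range(&c, 0, CONT_PTES);
	printf("batched clear: %d tlbi(s)\n", tlbis);
	return 0;
}

The per-pte loop reports one tlbi for the unfold; the batched clear reports
none, which is the behaviour the contpte_clear_ptes() implementation below
relies on when a batch fully covers a block.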
The following microbenchmark results demonstrate the effect of this change
on madvise(DONTNEED) performance for large pte-mapped folios.
madvise(DONTNEED) is called for each page of a 1G populated mapping and the
total time is measured. 100 iterations per run, 8 runs performed on both
Apple M2 (VM) and Ampere Altra (bare metal). Tests were performed for the
case where the 1G of memory is comprised of pte-mapped order-9 folios.
Negative is faster, positive is slower, compared to the baseline upon which
the series is based:

| dontneed      |    Apple M2 VM    |   Ampere Altra    |
| order-9       |-------------------|-------------------|
| (pte-map)     |    mean |   stdev |    mean |   stdev |
|---------------|---------|---------|---------|---------|
| baseline      |    0.0% |    7.9% |    0.0% |    0.0% |
| before-change |   -1.3% |    7.0% |   13.0% |    0.0% |
| after-change  |   -9.9% |    0.9% |   14.1% |    0.0% |

The memory is initially all contpte-mapped and has to be unfolded (which
requires a tlbi for the whole block) when the first page is touched (since
the test madvises 1 page at a time). Ampere Altra has a high cost for
tlbi; this is why the cost increases there.

The following microbenchmark results demonstrate the recovery (and overall
improvement) of munmap performance for large pte-mapped folios. munmap is
called for a 1G populated mapping and the function runtime is measured. 100
iterations per run, 8 runs performed on both Apple M2 (VM) and Ampere Altra
(bare metal). Tests were performed for the case where the 1G of memory is
comprised of pte-mapped order-9 folios. Negative is faster, positive is
slower, compared to the baseline upon which the series is based:

| munmap        |    Apple M2 VM    |   Ampere Altra    |
| order-9       |-------------------|-------------------|
| (pte-map)     |    mean |   stdev |    mean |   stdev |
|---------------|---------|---------|---------|---------|
| baseline      |    0.0% |    6.4% |    0.0% |    0.1% |
| before-change |   43.3% |    1.9% |  375.2% |    0.0% |
| after-change  |   -6.0% |    1.4% |   -0.6% |    0.2% |

Tested-by: John Hubbard
Signed-off-by: Ryan Roberts
---
 arch/arm64/include/asm/pgtable.h | 42 +++++++++++++++++++++++++++++
 arch/arm64/mm/contpte.c          | 45 ++++++++++++++++++++++++++++++++
 2 files changed, 87 insertions(+)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index d4805f73b9db..f5bf059291c3 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -953,6 +953,29 @@ static inline pte_t __ptep_get_and_clear(struct mm_struct *mm,
 	return pte;
 }
 
+static inline pte_t __clear_ptes(struct mm_struct *mm,
+				unsigned long address, pte_t *ptep,
+				unsigned int nr, int full)
+{
+	pte_t orig_pte = __ptep_get_and_clear(mm, address, ptep);
+	unsigned int i;
+	pte_t pte;
+
+	for (i = 1; i < nr; i++) {
+		address += PAGE_SIZE;
+		ptep++;
+		pte = __ptep_get_and_clear(mm, address, ptep);
+
+		if (pte_dirty(pte))
+			orig_pte = pte_mkdirty(orig_pte);
+
+		if (pte_young(pte))
+			orig_pte = pte_mkyoung(orig_pte);
+	}
+
+	return orig_pte;
+}
+
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 #define __HAVE_ARCH_PMDP_HUGE_GET_AND_CLEAR
 static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
@@ -1151,6 +1174,8 @@ extern pte_t contpte_ptep_get(pte_t *ptep, pte_t orig_pte);
 extern pte_t contpte_ptep_get_lockless(pte_t *orig_ptep);
 extern void contpte_set_ptes(struct mm_struct *mm, unsigned long addr,
 				pte_t *ptep, pte_t pte, unsigned int nr);
+extern pte_t contpte_clear_ptes(struct mm_struct *mm, unsigned long addr,
+				pte_t *ptep, unsigned int nr, int full);
 extern int contpte_ptep_test_and_clear_young(struct vm_area_struct *vma,
 				unsigned long addr, pte_t *ptep);
 extern int contpte_ptep_clear_flush_young(struct vm_area_struct *vma,
@@ -1279,6 +1304,22 @@ static inline void pte_clear(struct mm_struct *mm,
 	__pte_clear(mm, addr, ptep);
 }
 
+#define clear_ptes clear_ptes
+static inline pte_t clear_ptes(struct mm_struct *mm,
+			unsigned long addr, pte_t *ptep,
+			unsigned int nr, int full)
+{
+	pte_t pte;
+
+	if (nr == 1) {
+		contpte_try_unfold(mm, addr, ptep, __ptep_get(ptep));
+		pte = __ptep_get_and_clear(mm, addr, ptep);
+	} else
+		pte = contpte_clear_ptes(mm, addr, ptep, nr, full);
+
+	return pte;
+}
+
 #define __HAVE_ARCH_PTEP_GET_AND_CLEAR
 static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 					unsigned long addr, pte_t *ptep)
@@ -1366,6 +1407,7 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma,
 #define set_pte				__set_pte
 #define set_ptes			__set_ptes
 #define pte_clear			__pte_clear
+#define clear_ptes			__clear_ptes
 #define __HAVE_ARCH_PTEP_GET_AND_CLEAR
 #define ptep_get_and_clear		__ptep_get_and_clear
 #define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
 
diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
index 72e672024785..6f2a15ac5163 100644
--- a/arch/arm64/mm/contpte.c
+++ b/arch/arm64/mm/contpte.c
@@ -293,6 +293,51 @@ void contpte_set_ptes(struct mm_struct *mm, unsigned long addr,
 }
 EXPORT_SYMBOL(contpte_set_ptes);
 
+pte_t contpte_clear_ptes(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
+			unsigned int nr, int full)
+{
+	/*
+	 * If we cover a partial contpte block at the beginning or end of the
+	 * batch, unfold if currently folded. This makes it safe to clear some
+	 * of the entries while keeping others. contpte blocks in the middle of
+	 * the range, which are fully covered don't need to be unfolded because
+	 * we will clear the full block.
+	 */
+
+	unsigned int i;
+	pte_t pte;
+	pte_t tail;
+
+	if (!mm_is_user(mm))
+		return __clear_ptes(mm, addr, ptep, nr, full);
+
+	if (ptep != contpte_align_down(ptep) || nr < CONT_PTES)
+		contpte_try_unfold(mm, addr, ptep, __ptep_get(ptep));
+
+	if (ptep + nr != contpte_align_down(ptep + nr))
+		contpte_try_unfold(mm, addr + PAGE_SIZE * (nr - 1),
+				   ptep + nr - 1,
+				   __ptep_get(ptep + nr - 1));
+
+	pte = __ptep_get_and_clear(mm, addr, ptep);
+
+	for (i = 1; i < nr; i++) {
+		addr += PAGE_SIZE;
+		ptep++;
+
+		tail = __ptep_get_and_clear(mm, addr, ptep);
+
+		if (pte_dirty(tail))
+			pte = pte_mkdirty(pte);
+
+		if (pte_young(tail))
+			pte = pte_mkyoung(pte);
+	}
+
+	return pte;
+}
+EXPORT_SYMBOL(contpte_clear_ptes);
+
 int contpte_ptep_test_and_clear_young(struct vm_area_struct *vma,
 					unsigned long addr, pte_t *ptep)
 {
-- 
2.25.1
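A note on the return value: both __clear_ptes() and contpte_clear_ptes()
collapse the hardware-managed dirty and young bits of every entry in the
batch into the single returned pte, so a caller that only inspects that one
value still learns whether any page in the range was written or accessed.
A minimal user-space model of that accumulation follows; the flat pte_t
encoding and the bit names are stand-ins for illustration, not the kernel's
definitions.

#include <stdint.h>
#include <stdio.h>

/* Stand-in pte encoding: bit 0 = young (accessed), bit 1 = dirty. */
typedef uint64_t pte_t;
#define PTE_YOUNG	(1ULL << 0)
#define PTE_DIRTY	(1ULL << 1)

/*
 * Mirrors the __clear_ptes() loop: clear nr entries and fold the
 * access/dirty state of the tail entries into the first entry's value.
 */
static pte_t clear_ptes_model(pte_t *ptep, unsigned int nr)
{
	pte_t orig = ptep[0];

	ptep[0] = 0;
	for (unsigned int i = 1; i < nr; i++) {
		pte_t tail = ptep[i];

		ptep[i] = 0;
		orig |= tail & (PTE_YOUNG | PTE_DIRTY);
	}
	return orig;
}

int main(void)
{
	/* Only the third entry was written; only the last was accessed. */
	pte_t ptes[4] = { 0, 0, PTE_DIRTY, PTE_YOUNG };
	pte_t ret = clear_ptes_model(ptes, 4);

	printf("dirty=%d young=%d\n", !!(ret & PTE_DIRTY), !!(ret & PTE_YOUNG));
	return 0;
}

Here only one entry is dirty and a different one is young, yet the returned
value reports both, mirroring the pte_mkdirty()/pte_mkyoung() folding in the
patch above.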