Message-ID: <58152ed9-5c8d-484f-8e11-23ee5879bc1e@arm.com>
Date: Tue, 12 Dec 2023 12:02:19 +0000
Subject: Re: [PATCH v3 15/15] arm64/mm: Implement clear_ptes() to optimize exit()
From: Ryan Roberts
To: Alistair Popple
Cc: Catalin Marinas, Will Deacon, Ard Biesheuvel, Marc Zyngier,
 Oliver Upton, James Morse, Suzuki K Poulose, Zenghui Yu,
 Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov,
 Dmitry Vyukov, Vincenzo Frascino, Andrew Morton,
 Anshuman Khandual, Matthew Wilcox, Yu Zhao, Mark Rutland,
 David Hildenbrand, Kefeng Wang, John Hubbard, Zi Yan,
 Barry Song <21cnbao@gmail.com>, Yang Shi,
 linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
References: <20231204105440.61448-1-ryan.roberts@arm.com>
 <20231204105440.61448-16-ryan.roberts@arm.com>
 <878r65a2m2.fsf@nvdebian.thelocal>
In-Reply-To: <878r65a2m2.fsf@nvdebian.thelocal>

On 08/12/2023 01:45, Alistair Popple wrote:
>
> Ryan Roberts writes:
>
>> With the core-mm changes in place to batch-clear ptes during
>> zap_pte_range(), we can take advantage of this in arm64 to greatly
>> reduce the number of tlbis we have to issue, and recover the lost exit
>> performance incurred when adding support for transparent contiguous ptes.
>>
>> If we are clearing a whole contpte range, we can elide first unfolding
>> that range and save the tlbis. We just clear the whole range.
>>
>> The following shows the results of running a kernel compilation workload
>> and measuring the cost of arm64_sys_exit_group() (which at ~1.5% is a
>> very small part of the overall workload).
>>
>> Benchmarks were run on Ampere Altra in 2 configs; single numa node and 2
>> numa nodes (tlbis are more expensive in 2 node config).
>>
>> - baseline: v6.7-rc1 + anonfolio-v7
>> - no-opt: contpte series without any attempt to optimize exit()
>> - simple-ptep_get_clear_full: simple optimization to exploit full=1.
>>   ptep_get_clear_full() does not fully conform to its intended semantic
>> - robust-ptep_get_clear_full: similar to previous but
>>   ptep_get_clear_full() fully conforms to its intended semantic
>> - clear_ptes: optimization implemented by this patch
>>
>> | config                     | numa=1 | numa=2 |
>> |----------------------------|--------|--------|
>> | baseline                   | 0%     | 0%     |
>> | no-opt                     | 190%   | 768%   |
>> | simple-ptep_get_clear_full | 8%     | 29%    |
>> | robust-ptep_get_clear_full | 21%    | 19%    |
>> | clear_ptes                 | 13%    | 9%     |
>>
>> In all cases, the cost of arm64_sys_exit_group() increases; this is
>> anticipated because there is more work to do to tear down the page
>> tables. But clear_ptes() gives the smallest increase overall.
>>
>> Signed-off-by: Ryan Roberts
>> ---
>>  arch/arm64/include/asm/pgtable.h | 32 ++++++++++++++++++++++++
>>  arch/arm64/mm/contpte.c          | 42 ++++++++++++++++++++++++++++++++
>>  2 files changed, 74 insertions(+)
>>
>> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
>> index 9bd2f57a9e11..ff6b3cc9e819 100644
>> --- a/arch/arm64/include/asm/pgtable.h
>> +++ b/arch/arm64/include/asm/pgtable.h
>> @@ -1145,6 +1145,8 @@ extern pte_t contpte_ptep_get(pte_t *ptep, pte_t orig_pte);
>>  extern pte_t contpte_ptep_get_lockless(pte_t *orig_ptep);
>>  extern void contpte_set_ptes(struct mm_struct *mm, unsigned long addr,
>>  				pte_t *ptep, pte_t pte, unsigned int nr);
>> +extern pte_t contpte_clear_ptes(struct mm_struct *mm, unsigned long addr,
>> +				pte_t *ptep, unsigned int nr);
>>  extern int contpte_ptep_test_and_clear_young(struct vm_area_struct *vma,
>>  				unsigned long addr, pte_t *ptep);
>>  extern int contpte_ptep_clear_flush_young(struct vm_area_struct *vma,
>> @@ -1270,6 +1272,36 @@ static inline void pte_clear(struct mm_struct *mm,
>>  	__pte_clear(mm, addr, ptep);
>>  }
>>
>> +#define clear_ptes clear_ptes
>> +static inline pte_t clear_ptes(struct mm_struct *mm,
>> +		unsigned long addr, pte_t *ptep, int full,
>> +		unsigned int nr)
>> +{
>> +	pte_t pte;
>> +
>> +	if (!contpte_is_enabled(mm)) {
>
> I think it would be better to call the generic definition of
> clear_ptes() here. Obviously that won't exist if clear_ptes is defined
> here, but you could call it __clear_ptes() and #define clear_ptes
> __clear_ptes when the arch specific helper isn't defined.

My thinking was that we wouldn't bother to expose clear_ptes() when
CONFIG_ARM64_CONTPTE is disabled, and just fall back to the core-mm
generic one. But I think your proposal is probably cleaner and more
consistent with everything else. So I'll do this for the next version.
>
>> +		unsigned int i;
>> +		pte_t tail;
>> +
>> +		pte = __ptep_get_and_clear(mm, addr, ptep);
>> +		for (i = 1; i < nr; i++) {
>> +			addr += PAGE_SIZE;
>> +			ptep++;
>> +			tail = __ptep_get_and_clear(mm, addr, ptep);
>> +			if (pte_dirty(tail))
>> +				pte = pte_mkdirty(pte);
>> +			if (pte_young(tail))
>> +				pte = pte_mkyoung(pte);
>> +		}
>> +	} else if (nr == 1) {
>> +		contpte_try_unfold(mm, addr, ptep, __ptep_get(ptep));
>> +		pte = __ptep_get_and_clear(mm, addr, ptep);
>> +	} else
>> +		pte = contpte_clear_ptes(mm, addr, ptep, nr);
>> +
>> +	return pte;
>> +}
>> +
>>  #define __HAVE_ARCH_PTEP_GET_AND_CLEAR
>>  static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
>>  				unsigned long addr, pte_t *ptep)
>> diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
>> index 2a57df16bf58..34b43bde3fcd 100644
>> --- a/arch/arm64/mm/contpte.c
>> +++ b/arch/arm64/mm/contpte.c
>> @@ -257,6 +257,48 @@ void contpte_set_ptes(struct mm_struct *mm, unsigned long addr,
>>  }
>>  EXPORT_SYMBOL(contpte_set_ptes);
>>
>> +pte_t contpte_clear_ptes(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
>> +			unsigned int nr)
>> +{
>> +	/*
>> +	 * If we cover a partial contpte block at the beginning or end of the
>> +	 * batch, unfold if currently folded. This makes it safe to clear some
>> +	 * of the entries while keeping others. contpte blocks in the middle of
>> +	 * the range, which are fully covered, don't need to be unfolded because
>> +	 * we will clear the full block.
>> +	 */
>> +
>> +	unsigned int i;
>> +	pte_t pte;
>> +	pte_t tail;
>> +
>> +	if (ptep != contpte_align_down(ptep) || nr < CONT_PTES)
>> +		contpte_try_unfold(mm, addr, ptep, __ptep_get(ptep));
>> +
>> +	if (ptep + nr != contpte_align_down(ptep + nr))
>> +		contpte_try_unfold(mm, addr + PAGE_SIZE * (nr - 1),
>> +				   ptep + nr - 1,
>> +				   __ptep_get(ptep + nr - 1));
>> +
>> +	pte = __ptep_get_and_clear(mm, addr, ptep);
>> +
>> +	for (i = 1; i < nr; i++) {
>> +		addr += PAGE_SIZE;
>> +		ptep++;
>> +
>> +		tail = __ptep_get_and_clear(mm, addr, ptep);
>> +
>> +		if (pte_dirty(tail))
>> +			pte = pte_mkdirty(pte);
>> +
>> +		if (pte_young(tail))
>> +			pte = pte_mkyoung(pte);
>> +	}
>> +
>> +	return pte;
>> +}
>> +EXPORT_SYMBOL(contpte_clear_ptes);
>> +
>>  int contpte_ptep_test_and_clear_young(struct vm_area_struct *vma,
>>  				unsigned long addr, pte_t *ptep)
>>  {
>