Subject: Re: [PATCH v4 02/16] mm: Batch-copy PTE ranges during fork()
Date: Tue, 19 Dec 2023 17:42:16 +0000
From: Ryan Roberts
To: David Hildenbrand, Catalin Marinas, Will Deacon, Ard Biesheuvel,
 Marc Zyngier, Oliver Upton, James Morse, Suzuki K Poulose, Zenghui Yu,
 Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
 Vincenzo Frascino, Andrew Morton, Anshuman Khandual, Matthew Wilcox,
 Yu Zhao, Mark Rutland, Kefeng Wang, John Hubbard, Zi Yan,
 Barry Song <21cnbao@gmail.com>, Alistair Popple, Yang Shi
Cc: linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
References: <20231218105100.172635-1-ryan.roberts@arm.com>
 <20231218105100.172635-3-ryan.roberts@arm.com>
 <0bef5423-6eea-446b-8854-980e9c23a948@redhat.com>

On 19/12/2023 17:22, David Hildenbrand wrote:
> On 19.12.23 09:30, Ryan Roberts wrote:
>> On 18/12/2023 17:47, David Hildenbrand wrote:
>>> On 18.12.23 11:50, Ryan Roberts wrote:
>>>> Convert copy_pte_range() to copy a batch of ptes in one go. A given
>>>> batch is determined by the architecture with the new helper,
>>>> pte_batch_remaining(), and maps a physically contiguous block of memory,
>>>> all belonging to the same folio.
>>>> A pte batch is then write-protected in
>>>> one go in the parent using the new helper, ptep_set_wrprotects(), and is
>>>> set in one go in the child using the new helper, set_ptes_full().
>>>>
>>>> The primary motivation for this change is to reduce the number of tlb
>>>> maintenance operations that the arm64 backend has to perform during
>>>> fork, as it is about to add transparent support for the "contiguous bit"
>>>> in its ptes. By write-protecting the parent using the new
>>>> ptep_set_wrprotects() (note the 's' at the end) function, the backend
>>>> can avoid having to unfold contig ranges of PTEs, which is expensive,
>>>> when all ptes in the range are being write-protected. Similarly, by
>>>> using set_ptes_full() rather than set_pte_at() to set up ptes in the
>>>> child, the backend does not need to fold a contiguous range once they
>>>> are all populated - they can be initially populated as a contiguous
>>>> range in the first place.
>>>>
>>>> This code is very performance sensitive, and a significant amount of
>>>> effort has been put into not regressing performance for the order-0
>>>> folio case. By default, pte_batch_remaining() is a compile-time
>>>> constant 1, which enables the compiler to simplify the extra loops that
>>>> are added for batching and produce code that is equivalent to (and as
>>>> performant as) the previous implementation.
>>>>
>>>> This change addresses the core-mm refactoring only; a separate change
>>>> will implement pte_batch_remaining(), ptep_set_wrprotects() and
>>>> set_ptes_full() in the arm64 backend to realize the performance
>>>> improvement as part of the work to enable contpte mappings.
>>>>
>>>> To ensure arm64 is performant once implemented, this change is very
>>>> careful to only call ptep_get() once per pte batch.
>>>>
>>>> The following microbenchmark results demonstrate that there is no
>>>> significant performance change after this patch. Fork is called in a
>>>> tight loop in a process with 1G of populated memory and the time for the
>>>> function to execute is measured. 100 iterations per run, 8 runs
>>>> performed on both Apple M2 (VM) and Ampere Altra (bare metal). Tests
>>>> were performed for the case where the 1G of memory is comprised of
>>>> order-0 folios and the case where it is comprised of pte-mapped order-9
>>>> folios.
>>>> Negative is faster, positive is slower, compared to the baseline upon
>>>> which the series is based:
>>>>
>>>> | Apple M2 VM   | order-0 (pte-map) | order-9 (pte-map) |
>>>> | fork          |-------------------|-------------------|
>>>> | microbench    |    mean |   stdev |    mean |   stdev |
>>>> |---------------|---------|---------|---------|---------|
>>>> | baseline      |    0.0% |    1.1% |    0.0% |    1.2% |
>>>> | after-change  |   -1.0% |    2.0% |   -0.1% |    1.1% |
>>>>
>>>> | Ampere Altra  | order-0 (pte-map) | order-9 (pte-map) |
>>>> | fork          |-------------------|-------------------|
>>>> | microbench    |    mean |   stdev |    mean |   stdev |
>>>> |---------------|---------|---------|---------|---------|
>>>> | baseline      |    0.0% |    1.0% |    0.0% |    0.1% |
>>>> | after-change  |   -0.1% |    1.2% |   -0.1% |    0.1% |
>>>>
>>>> Tested-by: John Hubbard
>>>> Reviewed-by: Alistair Popple
>>>> Signed-off-by: Ryan Roberts
>>>> ---
>>>>  include/linux/pgtable.h | 80 +++++++++++++++++++++++++++++++++++
>>>>  mm/memory.c             | 92 ++++++++++++++++++++++++++---------------
>>>>  2 files changed, 139 insertions(+), 33 deletions(-)
>>>>
>>>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
>>>> index af7639c3b0a3..db93fb81465a 100644
>>>> --- a/include/linux/pgtable.h
>>>> +++ b/include/linux/pgtable.h
>>>> @@ -205,6 +205,27 @@ static inline int pmd_young(pmd_t pmd)
>>>>  #define arch_flush_lazy_mmu_mode()	do {} while (0)
>>>>  #endif
>>>>
>>>> +#ifndef pte_batch_remaining
>>>> +/**
>>>> + * pte_batch_remaining - Number of pages from addr to next batch boundary.
>>>> + * @pte: Page table entry for the first page.
>>>> + * @addr: Address of the first page.
>>>> + * @end: Batch ceiling (e.g. end of vma).
>>>> + *
>>>> + * Some architectures (arm64) can efficiently modify a contiguous batch of
>>>> + * ptes. In such cases, this function returns the remaining number of pages
>>>> + * to the end of the current batch, as defined by addr. This can be useful
>>>> + * when iterating over ptes.
>>>> + *
>>>> + * May be overridden by the architecture, else batch size is always 1.
>>>> + */
>>>> +static inline unsigned int pte_batch_remaining(pte_t pte, unsigned long addr,
>>>> +						unsigned long end)
>>>> +{
>>>> +	return 1;
>>>> +}
>>>> +#endif
>>>
>>> It's a shame we now lose the optimization for all other architectures.
>>>
>>> Was there no way to have some basic batching mechanism that doesn't
>>> require arch specifics?
>>
>> I tried a bunch of things but ultimately the way I've done it was the
>> only way to reduce the order-0 fork regression to 0.
>>
>> My original v3 posting was costing 5% extra and even my first attempt at
>> an arch-specific version that didn't resolve to a compile-time constant 1
>> still cost an extra 3%.
>>
>>>
>>> I'd have thought that something very basic would have worked like:
>>>
>>> * Check if the PTE is the same when setting the PFN to 0.
>>> * Check that the PFN is consecutive.
>>> * Check that all PFNs belong to the same folio.
>>
>> I haven't tried this exact approach, but I'd be surprised if I could get
>> the regression under 4% with it. Further along the series I spent a lot
>> of time having to fiddle with the arm64 implementation; every conditional
>> and every memory read (even when in cache) was a problem. There is just
>> so little in the inner loop that every instruction matters. (At least on
>> Ampere Altra and Apple M2.)
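
For concreteness, my reading of that suggestion is something like the sketch
below (untested; helper names are invented for illustration and are not from
the patch). Comparing each pte against an expected value whose pfn has been
advanced by one covers both the "bits match" and "pfn is consecutive" checks
in a single pte_same(); capping max_nr at the folio boundary would cover the
same-folio check:

	/* Assumes the pfn field starts at PAGE_SHIFT in the pte value. */
	static inline pte_t __pte_advance_pfn(pte_t pte)
	{
		return __pte(pte_val(pte) + (1UL << PAGE_SHIFT));
	}

	/* Count ptes mapping consecutive pfns with otherwise identical bits. */
	static inline int pte_batch_count(pte_t *ptep, pte_t pte, int max_nr)
	{
		pte_t expected = __pte_advance_pfn(pte);
		int nr = 1;

		while (nr < max_nr) {
			if (!pte_same(ptep_get(ptep + nr), expected))
				break;
			expected = __pte_advance_pfn(expected);
			nr++;
		}

		return nr;
	}

Even in that shape, the extra ptep_get() per pte in the loop is exactly the
kind of memory read I'd expect to show up on Altra/M2.
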
>>
>> Of course if you're willing to pay that 4-5% for order-0 then the benefit
>> to order-9 is around 10% in my measurements. Personally though, I'd
>> prefer to play safe and ensure the common order-0 case doesn't regress,
>> as you previously suggested.
>>
>
> I just hacked something up, on top of my beloved rmap cleanup/batching
> series. I implemented very generic and simple batching for large folios
> (all PTE bits except the PFN have to match).
>
> Some very quick testing (don't trust each last %) on an Intel(R) Xeon(R)
> Silver 4210R CPU:
>
> order-0: 0.014210 -> 0.013969
>
> -> Around 1.7% faster
>
> order-9: 0.014373 -> 0.009149
>
> -> Around 36.3% faster

Well I guess that shows me :) I'll do a review and run the tests on my HW to
see if it concurs.

>
> But it's likely buggy, so don't trust the numbers just yet. If they
> actually hold up, we should probably do something like that ahead of time,
> before all the arm-specific cont-pte work.
>
> I suspect you can easily extend that with arch hooks where reasonable.
>
> The (3) patches on top of the rmap cleanups can be found at:
>
>     https://github.com/davidhildenbrand/linux/tree/fork-batching
>
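
For anyone wanting to reproduce the numbers, the microbenchmark described in
the commit message is shaped roughly like the sketch below (simplified; the
real harness also arranges for the 1G to be backed by the desired folio
order, and aggregates 100 iterations per run over 8 runs into mean/stdev
relative to baseline):

	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <sys/wait.h>
	#include <time.h>
	#include <unistd.h>

	#define MEM_SIZE	(1UL << 30)	/* 1G of populated memory */
	#define ITERS		100

	int main(void)
	{
		struct timespec t0, t1;
		double total = 0.0;
		char *mem;
		int i;

		mem = mmap(NULL, MEM_SIZE, PROT_READ | PROT_WRITE,
			   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (mem == MAP_FAILED)
			return 1;
		memset(mem, 1, MEM_SIZE);	/* fault in all 1G before timing */

		for (i = 0; i < ITERS; i++) {
			pid_t pid;

			/* Time the fork() call itself; the child exits at once. */
			clock_gettime(CLOCK_MONOTONIC, &t0);
			pid = fork();
			clock_gettime(CLOCK_MONOTONIC, &t1);
			if (pid == 0)
				_exit(0);
			waitpid(pid, NULL, 0);
			total += (t1.tv_sec - t0.tv_sec) +
				 (t1.tv_nsec - t0.tv_nsec) / 1e9;
		}

		printf("mean fork() time: %f s\n", total / ITERS);
		return 0;
	}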