From: Ryan Roberts
Date: Wed, 20 Dec 2023 09:51:01 +0000
Subject: Re: [PATCH v4 02/16] mm: Batch-copy PTE ranges during fork()
To: David Hildenbrand, Catalin Marinas, Will Deacon, Ard Biesheuvel, Marc Zyngier, Oliver Upton, James Morse, Suzuki K Poulose, Zenghui Yu, Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov, Vincenzo Frascino, Andrew Morton, Anshuman Khandual, Matthew Wilcox, Yu Zhao, Mark Rutland, Kefeng Wang, John Hubbard, Zi Yan, Barry Song <21cnbao@gmail.com>, Alistair Popple, Yang Shi
Cc: linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Message-ID: <9a58b1a2-2c13-4fa0-8ffa-2b3d9655f1b6@arm.com>
In-Reply-To: <7c0236ad-01f3-437f-8b04-125d69e90dc0@redhat.com>
References: <20231218105100.172635-1-ryan.roberts@arm.com> <20231218105100.172635-3-ryan.roberts@arm.com> <0bef5423-6eea-446b-8854-980e9c23a948@redhat.com> <7c0236ad-01f3-437f-8b04-125d69e90dc0@redhat.com>

On 20/12/2023 09:17, David Hildenbrand wrote:
> On 19.12.23 18:42, Ryan Roberts wrote:
>> On 19/12/2023 17:22, David Hildenbrand wrote:
>>> On 19.12.23 09:30, Ryan Roberts wrote:
>>>> On 18/12/2023 17:47, David Hildenbrand wrote:
>>>>> On 18.12.23 11:50, Ryan Roberts wrote:
>>>>>> Convert copy_pte_range() to copy a batch of ptes in one go.
>>>>>> A given batch is determined by the architecture with the new helper,
>>>>>> pte_batch_remaining(), and maps a physically contiguous block of memory,
>>>>>> all belonging to the same folio. A pte batch is then write-protected in
>>>>>> one go in the parent using the new helper, ptep_set_wrprotects(), and is
>>>>>> set in one go in the child using the new helper, set_ptes_full().
>>>>>>
>>>>>> The primary motivation for this change is to reduce the number of tlb
>>>>>> maintenance operations that the arm64 backend has to perform during
>>>>>> fork, as it is about to add transparent support for the "contiguous bit"
>>>>>> in its ptes. By write-protecting the parent using the new
>>>>>> ptep_set_wrprotects() (note the 's' at the end) function, the backend
>>>>>> can avoid having to unfold contig ranges of PTEs, which is expensive,
>>>>>> when all ptes in the range are being write-protected. Similarly, by
>>>>>> using set_ptes_full() rather than set_pte_at() to set up ptes in the
>>>>>> child, the backend does not need to fold a contiguous range once they
>>>>>> are all populated - they can be initially populated as a contiguous
>>>>>> range in the first place.
>>>>>>
>>>>>> This code is very performance sensitive, and a significant amount of
>>>>>> effort has been put into not regressing performance for the order-0
>>>>>> folio case. By default, pte_batch_remaining() is a compile-time constant
>>>>>> 1, which enables the compiler to simplify the extra loops that are added
>>>>>> for batching and produce code that is equivalent (and equally
>>>>>> performant) to the previous implementation.
>>>>>>
>>>>>> This change addresses the core-mm refactoring only and a separate change
>>>>>> will implement pte_batch_remaining(), ptep_set_wrprotects() and
>>>>>> set_ptes_full() in the arm64 backend to realize the performance
>>>>>> improvement as part of the work to enable contpte mappings.
>>>>>>
>>>>>> To ensure the arm64 backend is performant once implemented, this change
>>>>>> is very careful to only call ptep_get() once per pte batch.
>>>>>>
>>>>>> The following microbenchmark results demonstrate that there is no
>>>>>> significant performance change after this patch. Fork is called in a
>>>>>> tight loop in a process with 1G of populated memory and the time for the
>>>>>> function to execute is measured. 100 iterations per run, 8 runs
>>>>>> performed on both Apple M2 (VM) and Ampere Altra (bare metal). Tests
>>>>>> were performed for the case where the 1G of memory is comprised of
>>>>>> order-0 folios and the case where it is comprised of pte-mapped order-9
>>>>>> folios.
>>>>>> Negative is faster, positive is slower, compared to the baseline upon
>>>>>> which the series is based:
>>>>>>
>>>>>> | Apple M2 VM   | order-0 (pte-map) | order-9 (pte-map) |
>>>>>> | fork          |-------------------|-------------------|
>>>>>> | microbench    |    mean |   stdev |    mean |   stdev |
>>>>>> |---------------|---------|---------|---------|---------|
>>>>>> | baseline      |    0.0% |    1.1% |    0.0% |    1.2% |
>>>>>> | after-change  |   -1.0% |    2.0% |   -0.1% |    1.1% |
>>>>>>
>>>>>> | Ampere Altra  | order-0 (pte-map) | order-9 (pte-map) |
>>>>>> | fork          |-------------------|-------------------|
>>>>>> | microbench    |    mean |   stdev |    mean |   stdev |
>>>>>> |---------------|---------|---------|---------|---------|
>>>>>> | baseline      |    0.0% |    1.0% |    0.0% |    0.1% |
>>>>>> | after-change  |   -0.1% |    1.2% |   -0.1% |    0.1% |
>>>>>>
>>>>>> Tested-by: John Hubbard
>>>>>> Reviewed-by: Alistair Popple
>>>>>> Signed-off-by: Ryan Roberts
>>>>>> ---
>>>>>>  include/linux/pgtable.h | 80 +++++++++++++++++++++++++++++++++++
>>>>>>  mm/memory.c             | 92 ++++++++++++++++++++++++++---------------
>>>>>>  2 files changed, 139 insertions(+), 33 deletions(-)
>>>>>>
>>>>>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
>>>>>> index af7639c3b0a3..db93fb81465a 100644
>>>>>> --- a/include/linux/pgtable.h
>>>>>> +++ b/include/linux/pgtable.h
>>>>>> @@ -205,6 +205,27 @@ static inline int pmd_young(pmd_t pmd)
>>>>>>  #define arch_flush_lazy_mmu_mode()    do {} while (0)
>>>>>>  #endif
>>>>>> +#ifndef pte_batch_remaining
>>>>>> +/**
>>>>>> + * pte_batch_remaining - Number of pages from addr to next batch boundary.
>>>>>> + * @pte: Page table entry for the first page.
>>>>>> + * @addr: Address of the first page.
>>>>>> + * @end: Batch ceiling (e.g. end of vma).
>>>>>> + *
>>>>>> + * Some architectures (arm64) can efficiently modify a contiguous batch of ptes.
>>>>>> + * In such cases, this function returns the remaining number of pages to the end
>>>>>> + * of the current batch, as defined by addr. This can be useful when iterating
>>>>>> + * over ptes.
>>>>>> + *
>>>>>> + * May be overridden by the architecture, else batch size is always 1.
>>>>>> + */
>>>>>> +static inline unsigned int pte_batch_remaining(pte_t pte, unsigned long addr,
>>>>>> +                        unsigned long end)
>>>>>> +{
>>>>>> +    return 1;
>>>>>> +}
>>>>>> +#endif
>>>>>
>>>>> It's a shame we now lose the optimization for all other architectures.
>>>>>
>>>>> Was there no way to have some basic batching mechanism that doesn't
>>>>> require arch specifics?
>>>>
>>>> I tried a bunch of things but ultimately the way I've done it was the only
>>>> way to reduce the order-0 fork regression to 0.
>>>>
>>>> My original v3 posting was costing 5% extra and even my first attempt at an
>>>> arch-specific version that didn't resolve to a compile-time constant 1
>>>> still cost an extra 3%.
>>>>
>>>>> I'd have thought that something very basic would have worked like:
>>>>>
>>>>> * Check if PTE is the same when setting the PFN to 0.
>>>>> * Check that PFN is consecutive
>>>>> * Check that all PFNs belong to the same folio
>>>>
>>>> I haven't tried this exact approach, but I'd be surprised if I could get
>>>> the regression under 4% with this.
>>>> Further along the series I spent a lot of time having to fiddle with the
>>>> arm64 implementation; every conditional and every memory read (even when
>>>> in cache) was a problem. There is just so little in the inner loop that
>>>> every instruction matters. (At least on Ampere Altra and Apple M2.)
>>>>
>>>> Of course if you're willing to pay that 4-5% for order-0 then the benefit
>>>> to order-9 is around 10% in my measurements. Personally though, I'd prefer
>>>> to play safe and ensure the common order-0 case doesn't regress, as you
>>>> previously suggested.
>>>
>>> I just hacked something up, on top of my beloved rmap cleanup/batching
>>> series. I implemented very generic and simple batching for large folios
>>> (all PTE bits except the PFN have to match).
>>>
>>> Some very quick testing (don't trust each last %) on Intel(R) Xeon(R)
>>> Silver 4210R CPU.
>>>
>>> order-0: 0.014210 -> 0.013969
>>>
>>> -> Around 1.7 % faster
>>>
>>> order-9: 0.014373 -> 0.009149
>>>
>>> -> Around 36.3 % faster
>>
>> Well I guess that shows me :)
>>
>> I'll do a review and run the tests on my HW to see if it concurs.
>
> I pushed a simple compile fixup (we need pte_next_pfn()).

I've just been trying to compile and noticed this. Will take a look at your
update.

But upon review, I've noticed the part that I think makes this difficult for
arm64 with the contpte optimization: you are calling ptep_get() for every pte
in the batch. While this is functionally correct, once arm64 has the contpte
changes, its ptep_get() has to read every pte in the contpte block in order to
gather the access and dirty bits. So if your batching function ends up walking
a 16-entry contpte block, that will cause 16 x 16 reads, which kills
performance. That's why I added the arch-specific pte_batch_remaining()
function; this allows the core-mm to skip to the end of the contpte block and
avoid ptep_get() for the 15 tail ptes. So we end up with 16 READ_ONCE()s
instead of 256.

I considered making a ptep_get_noyoungdirty() variant, which would avoid the
bit gathering. But we have a similar problem in zap_pte_range() and that
function needs the dirty bit to update the folio, so it doesn't work there.
(See patch 3 in my series.)

I guess you are going to say that we should combine both approaches, so that
your batching loop can skip forward an arch-provided number of ptes? That
would certainly work, but it feels like an orthogonal change to what I'm
trying to achieve :). Anyway, I'll spend some time playing with it today.

> Note that we should probably handle "ptep_set_wrprotects" rather like
> set_ptes:
>
> #ifndef wrprotect_ptes
> static inline void wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
>				pte_t *ptep, unsigned int nr)
> {
>	for (;;) {
>		ptep_set_wrprotect(mm, addr, ptep);
>		if (--nr == 0)
>			break;
>		ptep++;
>		addr += PAGE_SIZE;
>	}
> }
> #endif
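For illustration only, a minimal sketch of the kind of arch-specific
pte_batch_remaining() discussed above, for an architecture with 16-entry
contpte blocks. This is not the arm64 implementation from this series;
CONT_PTES, CONT_PTE_SIZE, CONT_PTE_MASK and pte_cont() are assumed names for
the arch-provided block size and the "is this a contpte mapping" test:

#define CONT_PTES	16	/* assumed number of ptes per contpte block */
#define CONT_PTE_SIZE	(CONT_PTES * PAGE_SIZE)
#define CONT_PTE_MASK	(~(CONT_PTE_SIZE - 1))

static inline unsigned int pte_batch_remaining(pte_t pte, unsigned long addr,
					       unsigned long end)
{
	unsigned long next;

	/* Non-contiguous mappings are still processed one pte at a time. */
	if (!pte_cont(pte))
		return 1;

	/* Skip to the end of the current contpte block, capped at @end. */
	next = (addr + CONT_PTE_SIZE) & CONT_PTE_MASK;
	if (next > end)
		next = end;

	return (next - addr) >> PAGE_SHIFT;
}

Because the batch size is derived purely from the first pte and the address,
the core-mm loop never needs to ptep_get() the 15 tail entries of the block.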
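And, similarly hedged, a rough sketch of the generic large-folio batching
approach described earlier in the thread (all pte bits except the pfn must
match, pfns must be consecutive, and all pages must belong to the same folio).
This is not David's actual patch; folio_pte_batch() is a hypothetical name and
pte_next_pfn() is the helper mentioned above as needed:

static inline int folio_pte_batch(struct folio *folio, pte_t *ptep,
				  pte_t pte, int max_nr)
{
	unsigned long end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
	pte_t expected = pte_next_pfn(pte);
	int nr = 1;

	while (nr < max_nr) {
		/*
		 * This is the per-pte ptep_get() pattern discussed above,
		 * which a contpte-aware arm64 ptep_get() would make expensive.
		 */
		pte_t next = ptep_get(ptep + nr);

		/* Non-pfn bits must match and the pfn must be consecutive. */
		if (!pte_same(next, expected))
			break;
		/* Stay within the folio. */
		if (pte_pfn(next) >= end_pfn)
			break;

		expected = pte_next_pfn(expected);
		nr++;
	}

	return nr;
}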