Date: Wed, 31 Jan 2024 12:37:10 +0000
Subject: Re: [PATCH v3 00/15] mm/memory: optimize fork() with PTE-mapped THP
From: Ryan Roberts
To: David Hildenbrand, linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Andrew Morton, Matthew Wilcox, Russell King,
    Catalin Marinas, Will Deacon, Dinh Nguyen, Michael Ellerman,
    Nicholas Piggin, Christophe Leroy, "Aneesh Kumar K.V", "Naveen N. Rao",
    Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexander Gordeev,
    Gerald Schaefer, Heiko Carstens, Vasily Gorbik, Christian Borntraeger,
    Sven Schnelle, "David S. Miller", linux-arm-kernel@lists.infradead.org,
    linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
    linux-s390@vger.kernel.org, sparclinux@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org
References: <20240129124649.189745-1-david@redhat.com>
 <57eb82c7-4816-42a2-b5ab-cc221e289b21@arm.com>
 <714d0930-2202-48b6-9728-d248f820325e@arm.com>
In-Reply-To: <714d0930-2202-48b6-9728-d248f820325e@arm.com>

On 31/01/2024 11:49, Ryan Roberts wrote:
> On 31/01/2024 11:28, David Hildenbrand wrote:
>> On 31.01.24 12:16, Ryan Roberts wrote:
>>> On 31/01/2024 11:06, David Hildenbrand wrote:
>>>> On 31.01.24 11:43, Ryan Roberts wrote:
>>>>> On 29/01/2024 12:46, David Hildenbrand wrote:
>>>>>> Now that the rmap overhaul[1] is upstream and provides a clean interface
>>>>>> for rmap batching, let's implement PTE batching during fork when
>>>>>> processing PTE-mapped THPs.
>>>>>>
>>>>>> This series is partially based on Ryan's previous work[2] to implement
>>>>>> cont-pte support on arm64, but it's a complete rewrite based on [1] to
>>>>>> optimize all architectures independent of any such PTE bits, and to
>>>>>> use the new rmap batching functions that simplify the code and prepare
>>>>>> for further rmap accounting changes.
>>>>>>
>>>>>> We collect consecutive PTEs that map consecutive pages of the same large
>>>>>> folio, making sure that the other PTE bits are compatible, and (a) adjust
>>>>>> the refcount only once per batch, (b) call rmap handling functions only
>>>>>> once per batch, and (c) perform batch PTE setting/updates.
>>>>>>
>>>>>> While this series should be beneficial for adding cont-pte support on
>>>>>> ARM64[2], it's one of the requirements for maintaining a total mapcount[3]
>>>>>> for large folios with minimal added overhead and further changes[4] that
>>>>>> build up on top of the total mapcount.
>>>>>>
>>>>>> Independent of all that, this series results in a speedup during fork with
>>>>>> PTE-mapped THP, which is the default with THPs that are smaller than a PMD
>>>>>> (for example, 16KiB to 1024KiB mTHPs for anonymous memory[5]).
>>>>>>
>>>>>> On an Intel Xeon Silver 4210R CPU, fork'ing with 1GiB of PTE-mapped folios
>>>>>> of the same size (stddev < 1%) results in the following runtimes
>>>>>> for fork() (shorter is better):
>>>>>>
>>>>>> Folio Size | v6.8-rc1 |      New | Change
>>>>>> ------------------------------------------
>>>>>>       4KiB | 0.014328 | 0.014035 |   - 2%
>>>>>>      16KiB | 0.014263 | 0.01196  |   -16%
>>>>>>      32KiB | 0.014334 | 0.01094  |   -24%
>>>>>>      64KiB | 0.014046 | 0.010444 |   -26%
>>>>>>     128KiB | 0.014011 | 0.010063 |   -28%
>>>>>>     256KiB | 0.013993 | 0.009938 |   -29%
>>>>>>     512KiB | 0.013983 | 0.00985  |   -30%
>>>>>>    1024KiB | 0.013986 | 0.00982  |   -30%
>>>>>>    2048KiB | 0.014305 | 0.010076 |   -30%
>>>>>
>>>>> Just a heads up that I'm seeing some strange results on Apple M2. Fork for
>>>>> order-0 is seemingly costing ~17% more. I'm using GCC 13.2 and was pretty
>>>>> sure I didn't see this problem with version 1; although that was on a
>>>>> different baseline and I've thrown the numbers away so will rerun and try
>>>>> to debug this.
>
> Numbers for v1 of the series, both on top of 6.8-rc1 and rebased to the same
> mm-unstable base as v3 of the series (first 2 rows are from what I just posted
> for context):
>
> | kernel             |   mean_rel |   std_rel |
> |:-------------------|-----------:|----------:|
> | mm-unstable (base) |       0.0% |      1.1% |
> | mm-unstable + v3   |      16.7% |      0.8% |
> | mm-unstable + v1   |      -2.5% |      1.7% |
> | v6.8-rc1 + v1      |      -6.6% |      1.1% |
>
> So all looks good with v1. And it seems to suggest mm-unstable has regressed
> by ~4% vs v6.8-rc1. Is this really a useful benchmark? Does the raw
> performance of the fork() syscall really matter? Evidence suggests it's moving
> all over the place - breathe on the code and it changes - not a great place to
> be when using the test for gating purposes!
>
> Still with the old tests - I'll move to the new ones now.
>
>
>>>>>
>>>>
>>>> So far, on my x86 tests (Intel, AMD EPYC), I was not able to observe this.
>>>> fork() for order-0 was consistently effectively unchanged. Do you observe
>>>> that on other ARM systems as well?
>>>
>>> Nope; running the exact same kernel binary and user space on Altra, I see
>>> sensible numbers:
>>>
>>> fork order-0: -1.3%
>>> fork order-9: -7.6%
>>> dontneed order-0: -0.5%
>>> dontneed order-9: 0.1%
>>> munmap order-0: 0.0%
>>> munmap order-9: -67.9%
>>>
>>> So I guess some pipelining issue that causes the M2 to stall more?
>>
>> With one effective added folio_test_large(), it could only be a code layout
>> problem? Or the compiler does something stupid, but you say that you run the
>> exact same kernel binary, so that doesn't make sense.
>
> Yup, same binary. We know this code is very sensitive - 1 cycle makes a big
> difference. So it could easily be code layout, branch prediction, etc...
>
>>
>> I'm also surprised about the dontneed vs. munmap numbers.
>
> You mean the ones for Altra that I posted? (I didn't post any for M2).
> The Altra numbers look ok to me; dontneed has no change, and munmap has no
> change for order-0 and is massively improved for order-9. Doesn't make any
> sense.
>
>> (again, there was this VMA merging problem but it would still allow for
>> batching within a single VMA that spans exactly one large folio).
>>
>> What are you using as baseline? Really just mm-unstable vs.
>> mm-unstable+patches?
>
> Yes, except for "v6.8-rc1 + v1" above.
>
>>
>> Let's see if the new test changes the numbers you measure.

Nope: looks the same. I've taken my test harness out of the picture and done
everything manually from the ground up, with the old tests and the new. The
headline is that I see similar numbers from both. Some details:

- I'm running for 10 seconds then averaging the output.

- The test is bimodal; the first run (of 10 seconds) after boot is a bit faster
  on average (up to 10%) than the rest. I could guess this is due to the memory
  being allocated more contiguously the first few times through, so struct
  pages have better locality, but that's a guess.

- The test is 5-10% slower when output is printed to the terminal vs when
  redirected to a file. I've always effectively been redirecting. Not sure if
  this overhead could start to dominate the regression and that's why you don't
  see it?

I'm inclined to run this test for the last N kernel releases and, if the number
moves around significantly, conclude that these tests don't really matter.
Otherwise it's an exercise in randomly refactoring code until it works well,
but that's just overfitting to the compiler and hardware. What do you think?

Thanks,
Ryan