Date: Tue, 5 Dec 2023 11:30:56 +0000
From: Ryan Roberts
Subject: Re: [PATCH v3 01/15] mm: Batch-copy PTE ranges during fork()
To: David Hildenbrand, Catalin Marinas, Will Deacon, Ard Biesheuvel,
    Marc Zyngier, Oliver Upton, James Morse, Suzuki K Poulose, Zenghui Yu,
    Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
    Vincenzo Frascino, Andrew Morton, Anshuman Khandual, Matthew Wilcox,
    Yu Zhao, Mark Rutland, Kefeng Wang, John Hubbard, Zi Yan,
    Barry Song <21cnbao@gmail.com>, Alistair Popple, Yang Shi
Cc: linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
References: <20231204105440.61448-1-ryan.roberts@arm.com>
 <20231204105440.61448-2-ryan.roberts@arm.com>
 <104de2d6-ecf9-4b0c-a982-5bd8e1aea758@redhat.com>
 <5b8b9f8c-8e9b-42a5-b8b2-9b96903f3ada@redhat.com>
In-Reply-To: <5b8b9f8c-8e9b-42a5-b8b2-9b96903f3ada@redhat.com>

On 04/12/2023 17:27, David Hildenbrand wrote:
>>
>> With rmap batching from [1] -- rebased+changed on top of that -- we could turn
>> that into an effective (untested):
>>
>>           if (page && folio_test_anon(folio)) {
>> +               nr = folio_nr_pages_cont_mapped(folio, page, src_pte, addr, end,
>> +                                               pte, enforce_uffd_wp, &nr_dirty,
>> +                                               &nr_writable);
>>                   /*
>>                    * If this page may have been pinned by the parent process,
>>                    * copy the page immediately for the child so that we'll always
>>                    * guarantee the pinned page won't be randomly replaced in the
>>                    * future.
>>                    */
>> -               folio_get(folio);
>> -               if (unlikely(folio_try_dup_anon_rmap_pte(folio, page, src_vma))) {
>> +               folio_ref_add(folio, nr);
>> +               if (unlikely(folio_try_dup_anon_rmap_ptes(folio, page, nr, src_vma))) {
>>                           /* Page may be pinned, we have to copy. */
>> -                       folio_put(folio);
>> -                       return copy_present_page(dst_vma, src_vma, dst_pte, src_pte,
>> -                                                addr, rss, prealloc, page);
>> +                       folio_ref_sub(folio, nr);
>> +                       ret = copy_present_page(dst_vma, src_vma, dst_pte,
>> +                                               src_pte, addr, rss, prealloc,
>> +                                               page);
>> +                       return ret == 0 ? 1 : ret;
>>                   }
>> -               rss[MM_ANONPAGES]++;
>> +               rss[MM_ANONPAGES] += nr;
>>           } else if (page) {
>> -               folio_get(folio);
>> -               folio_dup_file_rmap_pte(folio, page);
>> -               rss[mm_counter_file(page)]++;
>> +               nr = folio_nr_pages_cont_mapped(folio, page, src_pte, addr, end,
>> +                                               pte, enforce_uffd_wp, &nr_dirty,
>> +                                               &nr_writable);
>> +               folio_ref_add(folio, nr);
>> +               folio_dup_file_rmap_ptes(folio, page, nr);
>> +               rss[mm_counter_file(page)] += nr;
>>           }
>>
>> We'll have to test performance, but it could be that we want to specialize
>> more on !folio_test_large(). That code is very performance-sensitive.
>>
>> [1] https://lkml.kernel.org/r/20231204142146.91437-1-david@redhat.com
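As an aside, for anyone reading along without the series applied:
folio_nr_pages_cont_mapped() is meant to count how many consecutive PTEs,
starting at the current one, map consecutive pages of the same folio.
Ignoring the uffd-wp/dirty/writable accounting that the extra parameters
feed back, it behaves conceptually like this untested, simplified sketch
(not the actual implementation; the real signature is as in the snippet
above):

static int folio_nr_pages_cont_mapped(struct folio *folio, struct page *page,
                                      pte_t *src_pte, unsigned long addr,
                                      unsigned long end)
{
        unsigned long pfn = page_to_pfn(page);
        int nr = 1, max;

        /* Never count past the end of the folio or of the PTE range. */
        max = min_t(int, folio_pfn(folio) + folio_nr_pages(folio) - pfn,
                    (end - addr) >> PAGE_SHIFT);

        while (nr < max) {
                pte_t pte = ptep_get(src_pte + nr);

                /* Stop at the first non-present or non-contiguous PTE. */
                if (!pte_present(pte) || pte_pfn(pte) != pfn + nr)
                        break;
                nr++;
        }
        return nr;
}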
> So, on top of [1] without rmap batching but with a slightly modified version of
> yours (that keeps the existing code structure as pointed out and, e.g., fixes
> up the counter updates), running my fork() microbenchmark with 1 GiB of memory:

Can you clarify what you mean by "without rmap batching"? I thought [1]
implicitly adds rmap batching? (e.g. folio_dup_file_rmap_ptes(), which you've
added in the code snippet above).

> Compared to [1], with all order-0 pages it gets 13--14% _slower_ and with all
> PTE-mapped THP (order-9) it gets ~29--30% _faster_.

What test are you running? I'd like to reproduce it if possible, since it
sounds like I've got some work to do to remove the order-0 regression.

> So it looks like we really want to have a completely separate code path for
> "!folio_test_large()" to keep that case as fast as possible. And "Likely" we
> want to use "likely(!folio_test_large())". ;)

Yuk, but fair enough. If I can repro the perf numbers, I'll have a go at
reworking this. I think you're also implicitly suggesting that this change
needs to depend on [1]? Which is a shame... I guess I should also go through a
similar exercise for patch 2 in this series.
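For concreteness, the shape I'd try first is roughly the below - an untested
sketch that just reuses the helpers from your snippet above, so the order-0
path stays byte-for-byte the current code and shouldn't regress:

        if (page && likely(!folio_test_large(folio))) {
                /* Small folio: existing path, one ref + one rmap update. */
                nr = 1;
                folio_get(folio);
                if (folio_test_anon(folio)) {
                        if (unlikely(folio_try_dup_anon_rmap_pte(folio, page,
                                                                 src_vma))) {
                                /* Page may be pinned, we have to copy. */
                                folio_put(folio);
                                return copy_present_page(dst_vma, src_vma,
                                                         dst_pte, src_pte,
                                                         addr, rss, prealloc,
                                                         page);
                        }
                        rss[MM_ANONPAGES]++;
                } else {
                        folio_dup_file_rmap_pte(folio, page);
                        rss[mm_counter_file(page)]++;
                }
        } else if (page) {
                /* Large folio: batch the refcount, rmap and rss updates. */
                nr = folio_nr_pages_cont_mapped(folio, page, src_pte, addr,
                                                end, pte, enforce_uffd_wp,
                                                &nr_dirty, &nr_writable);
                folio_ref_add(folio, nr);
                if (folio_test_anon(folio)) {
                        if (unlikely(folio_try_dup_anon_rmap_ptes(folio, page,
                                                                  nr, src_vma))) {
                                /* Pages may be pinned, we have to copy. */
                                folio_ref_sub(folio, nr);
                                ret = copy_present_page(dst_vma, src_vma,
                                                        dst_pte, src_pte,
                                                        addr, rss, prealloc,
                                                        page);
                                return ret == 0 ? 1 : ret;
                        }
                        rss[MM_ANONPAGES] += nr;
                } else {
                        folio_dup_file_rmap_ptes(folio, page, nr);
                        rss[mm_counter_file(page)] += nr;
                }
        }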
> Performing rmap batching on top of that code only slightly (another 1% or so)
> improves performance in the PTE-mapped THP (order-9) case right now, in
> contrast to other rmap batching. Reason is that all rmap code gets inlined
> here and we're only doing subpage mapcount updates + PAE handling.
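
And for completeness, the kind of harness I'd use when trying to reproduce
your numbers is something like the below - an illustrative userspace sketch
only, since I don't know the details of your benchmark. It faults in 1 GiB of
anonymous memory, then times fork() + child-exit + wait in a loop. (Arranging
for the memory to be PTE-mapped THP rather than PMD-mapped takes extra work,
e.g. mremap()ing to a non-PMD-aligned address, which is omitted here.)

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

#define MEM_SIZE        (1UL << 30)     /* 1 GiB */
#define ITERS           100

int main(void)
{
        struct timespec t0, t1;
        char *mem;
        int i;

        mem = mmap(NULL, MEM_SIZE, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (mem == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        /* Fault in every page so fork() has a full 1 GiB of PTEs to copy. */
        memset(mem, 1, MEM_SIZE);

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (i = 0; i < ITERS; i++) {
                pid_t pid = fork();

                if (pid < 0) {
                        perror("fork");
                        return 1;
                }
                if (pid == 0)
                        _exit(0);       /* child exits immediately */
                waitpid(pid, NULL, 0);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        printf("avg fork()+wait: %.3f ms\n",
               ((t1.tv_sec - t0.tv_sec) * 1e3 +
                (t1.tv_nsec - t0.tv_nsec) / 1e6) / ITERS);
        return 0;
}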