Message-ID: <56bee384-461e-4167-b7e9-4dd60666dd66@arm.com>
Date: Tue, 23 Jan 2024 19:15:39 +0000
Subject: Re: [PATCH v1 00/11] mm/memory: optimize fork() with PTE-mapped THP
From: Ryan Roberts
To: David Hildenbrand, linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Andrew Morton, Matthew Wilcox, Russell King,
    Catalin Marinas, Will Deacon, Dinh Nguyen, Michael Ellerman,
    Nicholas Piggin, Christophe Leroy, "Aneesh Kumar K.V",
Rao" , Paul Walmsley , Palmer Dabbelt , Albert Ou , Alexander Gordeev , Gerald Schaefer , Heiko Carstens , Vasily Gorbik , Christian Borntraeger , Sven Schnelle , "David S. Miller" , linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, sparclinux@vger.kernel.org References: <20240122194200.381241-1-david@redhat.com> From: Ryan Roberts In-Reply-To: <20240122194200.381241-1-david@redhat.com> Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 7bit On 22/01/2024 19:41, David Hildenbrand wrote: > Now that the rmap overhaul[1] is upstream that provides a clean interface > for rmap batching, let's implement PTE batching during fork when processing > PTE-mapped THPs. > > This series is partially based on Ryan's previous work[2] to implement > cont-pte support on arm64, but its a complete rewrite based on [1] to > optimize all architectures independent of any such PTE bits, and to > use the new rmap batching functions that simplify the code and prepare > for further rmap accounting changes. > > We collect consecutive PTEs that map consecutive pages of the same large > folio, making sure that the other PTE bits are compatible, and (a) adjust > the refcount only once per batch, (b) call rmap handling functions only > once per batch and (c) perform batch PTE setting/updates. > > While this series should be beneficial for adding cont-pte support on > ARM64[2], it's one of the requirements for maintaining a total mapcount[3] > for large folios with minimal added overhead and further changes[4] that > build up on top of the total mapcount. I'm currently rebasing my contpte work onto this series, and have hit a problem. I need to expose the "size" of a pte (pte_size()) and skip forward to the start of the next (cont)pte every time through the folio_pte_batch() loop. But pte_next_pfn() only allows advancing by 1 pfn; I need to advance by nr pfns: static inline int folio_pte_batch(struct folio *folio, unsigned long addr, pte_t *start_ptep, pte_t pte, int max_nr, bool *any_writable) { unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio); const pte_t *end_ptep = start_ptep + max_nr; pte_t expected_pte = __pte_batch_clear_ignored(pte_next_pfn(pte)); - pte_t *ptep = start_ptep + 1; + pte_t *ptep = start_ptep; + int vfn, nr, i; bool writable; if (any_writable) *any_writable = false; VM_WARN_ON_FOLIO(!pte_present(pte), folio); + vfn = addr >> PAGE_SIZE; + nr = pte_size(pte); + nr = ALIGN_DOWN(vfn + nr, nr) - vfn; + ptep += nr; + while (ptep != end_ptep) { + pte = ptep_get(ptep); nr = pte_size(pte); if (any_writable) writable = !!pte_write(pte); pte = __pte_batch_clear_ignored(pte); if (!pte_same(pte, expected_pte)) break; /* * Stop immediately once we reached the end of the folio. In * corner cases the next PFN might fall into a different * folio. */ - if (pte_pfn(pte) == folio_end_pfn) + if (pte_pfn(pte) >= folio_end_pfn) break; if (any_writable) *any_writable |= writable; - expected_pte = pte_next_pfn(expected_pte); - ptep++; + for (i = 0; i < nr; i++) + expected_pte = pte_next_pfn(expected_pte); + ptep += nr; } return ptep - start_ptep; } So I'm wondering if instead of enabling pte_next_pfn() for all the arches, perhaps its actually better to expose pte_pgprot() for all the arches. Then we can be much more flexible about generating ptes with pfn_pte(pfn, pgprot). What do you think?