Date: Tue, 23 Jan 2024 20:43:53 +0000
Subject: Re: [PATCH v1 00/11] mm/memory: optimize fork() with PTE-mapped THP
To: David Hildenbrand, linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Andrew Morton, Matthew Wilcox, Russell King,
 Catalin Marinas, Will Deacon, Dinh Nguyen, Michael Ellerman,
 Nicholas Piggin, Christophe Leroy, "Aneesh Kumar K.V", "Naveen N. Rao",
Rao" , Paul Walmsley , Palmer Dabbelt , Albert Ou , Alexander Gordeev , Gerald Schaefer , Heiko Carstens , Vasily Gorbik , Christian Borntraeger , Sven Schnelle , "David S. Miller" , linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, sparclinux@vger.kernel.org References: <20240122194200.381241-1-david@redhat.com> <56bee384-461e-4167-b7e9-4dd60666dd66@arm.com> <7d92d27a-44f6-47d0-8eab-3f80bd7bd75d@arm.com> <33cf54a9-b855-4d2d-9926-a4936fc9068b@redhat.com> From: Ryan Roberts In-Reply-To: <33cf54a9-b855-4d2d-9926-a4936fc9068b@redhat.com> Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit On 23/01/2024 20:14, David Hildenbrand wrote: > On 23.01.24 20:43, Ryan Roberts wrote: >> On 23/01/2024 19:33, David Hildenbrand wrote: >>> On 23.01.24 20:15, Ryan Roberts wrote: >>>> On 22/01/2024 19:41, David Hildenbrand wrote: >>>>> Now that the rmap overhaul[1] is upstream that provides a clean interface >>>>> for rmap batching, let's implement PTE batching during fork when processing >>>>> PTE-mapped THPs. >>>>> >>>>> This series is partially based on Ryan's previous work[2] to implement >>>>> cont-pte support on arm64, but its a complete rewrite based on [1] to >>>>> optimize all architectures independent of any such PTE bits, and to >>>>> use the new rmap batching functions that simplify the code and prepare >>>>> for further rmap accounting changes. >>>>> >>>>> We collect consecutive PTEs that map consecutive pages of the same large >>>>> folio, making sure that the other PTE bits are compatible, and (a) adjust >>>>> the refcount only once per batch, (b) call rmap handling functions only >>>>> once per batch and (c) perform batch PTE setting/updates. >>>>> >>>>> While this series should be beneficial for adding cont-pte support on >>>>> ARM64[2], it's one of the requirements for maintaining a total mapcount[3] >>>>> for large folios with minimal added overhead and further changes[4] that >>>>> build up on top of the total mapcount. >>>> >>>> I'm currently rebasing my contpte work onto this series, and have hit a >>>> problem. >>>> I need to expose the "size" of a pte (pte_size()) and skip forward to the start >>>> of the next (cont)pte every time through the folio_pte_batch() loop. 
>>>> But pte_next_pfn() only allows advancing by 1 pfn; I need to advance
>>>> by nr pfns:
>>>>
>>>>
>>>> static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
>>>>         pte_t *start_ptep, pte_t pte, int max_nr, bool *any_writable)
>>>> {
>>>>     unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
>>>>     const pte_t *end_ptep = start_ptep + max_nr;
>>>>     pte_t expected_pte = __pte_batch_clear_ignored(pte_next_pfn(pte));
>>>> -   pte_t *ptep = start_ptep + 1;
>>>> +   pte_t *ptep = start_ptep;
>>>> +   int vfn, nr, i;
>>>>     bool writable;
>>>>
>>>>     if (any_writable)
>>>>         *any_writable = false;
>>>>
>>>>     VM_WARN_ON_FOLIO(!pte_present(pte), folio);
>>>>
>>>> +   vfn = addr >> PAGE_SHIFT;
>>>> +   nr = pte_size(pte);
>>>> +   nr = ALIGN_DOWN(vfn + nr, nr) - vfn;
>>>> +   ptep += nr;
>>>> +
>>>>     while (ptep != end_ptep) {
>>>> +       pte = ptep_get(ptep);
>>>>         nr = pte_size(pte);
>>>>         if (any_writable)
>>>>             writable = !!pte_write(pte);
>>>>         pte = __pte_batch_clear_ignored(pte);
>>>>
>>>>         if (!pte_same(pte, expected_pte))
>>>>             break;
>>>>
>>>>         /*
>>>>          * Stop immediately once we reached the end of the folio. In
>>>>          * corner cases the next PFN might fall into a different
>>>>          * folio.
>>>>          */
>>>> -       if (pte_pfn(pte) == folio_end_pfn)
>>>> +       if (pte_pfn(pte) >= folio_end_pfn)
>>>>             break;
>>>>
>>>>         if (any_writable)
>>>>             *any_writable |= writable;
>>>>
>>>> -       expected_pte = pte_next_pfn(expected_pte);
>>>> -       ptep++;
>>>> +       for (i = 0; i < nr; i++)
>>>> +           expected_pte = pte_next_pfn(expected_pte);
>>>> +       ptep += nr;
>>>>     }
>>>>
>>>>     return ptep - start_ptep;
>>>> }
>>>>
>>>>
>>>> So I'm wondering if, instead of enabling pte_next_pfn() for all the
>>>> arches, perhaps it's actually better to expose pte_pgprot() for all
>>>> the arches. Then we can be much more flexible about generating ptes
>>>> with pfn_pte(pfn, pgprot).
>>>>
>>>> What do you think?
>>>
>>> The pte_pgprot() stuff is just nasty IMHO.
>>
>> I dunno; we have pfn_pte() which takes a pfn and a pgprot. It seems
>> reasonable that we should be able to do the reverse.
>
> But pte_pgprot() is only available on a handful of architectures, no? It
> would be nice to have a completely generic pte_next_pfn() /
> pte_advance_pfns(), though.
>
> Anyhow, this is all "easy" to rework later. Unless I am missing something,
> the low-hanging fruit is simply using PFN_PTE_SHIFT for now, which exists
> on most archs already.
>
>>
>>>
>>> Likely it's best to simply convert pte_next_pfn() to something like
>>> pte_advance_pfns(). Then we could just have
>>>
>>> #define pte_next_pfn(pte) pte_advance_pfns(pte, 1)
>>>
>>> That should be fairly easy to do on top (based on PFN_PTE_SHIFT). And
>>> only 3 archs (x86-64, arm64, and powerpc) need slight care to replace a
>>> hardcoded "1" by an integer we pass in.
>>
>> I thought we agreed powerpc was safe to just define PFN_PTE_SHIFT? But,
>> yeah, the principle works I guess. I guess I can do this change along
>> with my series.
>
> It is, if nobody insists on that micro-optimization on powerpc.
>
> If there is good reason to invest more time and effort right now on the
> pte_pgprot approach, then please let me know :)
>

No, I think you're right.
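For reference, the pte_pgprot() flavour I was imagining was essentially a
one-liner (just a sketch to show the shape of it; pte_advance_pfns() is the
proposed name rather than an existing helper, and this version only builds
on arches that actually provide pte_pgprot()):

static inline pte_t pte_advance_pfns(pte_t pte, unsigned long nr)
{
    /*
     * Decompose the pte into pfn and pgprot, then rebuild it at the
     * advanced pfn. pte_pfn() and pfn_pte() exist on every arch; it's
     * only pte_pgprot() that is missing on many of them.
     */
    return pfn_pte(pte_pfn(pte) + nr, pte_pgprot(pte));
}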
I thought pte_pgprot() was implemented by more arches, but there are 13
without it, so plugging that gap is clearly a lot of effort. I'll take the
approach you suggest with pte_advance_pfns(); it'll just require mods to
x86 and arm64, and possibly powerpc.
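Concretely, I'd expect the generic fallback to end up looking something
like this (untested sketch based on your PFN_PTE_SHIFT suggestion; none of
this exists upstream yet):

#ifndef pte_advance_pfns
/*
 * Generic fallback: assumes the pfn is a contiguous bitfield in the pte
 * starting at bit PFN_PTE_SHIFT, so advancing by nr pfns is a plain add
 * on the raw pte value.
 */
static inline pte_t pte_advance_pfns(pte_t pte, unsigned long nr)
{
    return __pte(pte_val(pte) + (nr << PFN_PTE_SHIFT));
}
#endif

#define pte_next_pfn(pte)    pte_advance_pfns(pte, 1)

Arches that encode the pfn differently, or that want to do something
smarter with the count (like arm64 with contpte), would override
pte_advance_pfns() and pick up the pte_next_pfn() define for free.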