Subject: Re: [PATCH v2 01/14] mm: Batch-copy PTE ranges during fork()
From: Ryan Roberts <ryan.roberts@arm.com>
Date: Thu, 16 Nov 2023 10:26:33 +0000
To: David Hildenbrand, Catalin Marinas, Will Deacon, Ard Biesheuvel,
    Marc Zyngier, Oliver Upton, James Morse, Suzuki K Poulose, Zenghui Yu,
    Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
    Vincenzo Frascino, Andrew Morton, Anshuman Khandual, Matthew Wilcox,
    Yu Zhao, Mark Rutland, Kefeng Wang, John Hubbard, Zi Yan
Cc: linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
References: <20231115163018.1303287-1-ryan.roberts@arm.com>
    <20231115163018.1303287-2-ryan.roberts@arm.com>
    <271f1e98-6217-4b40-bae0-0ac9fe5851cb@redhat.com>
In-Reply-To: <271f1e98-6217-4b40-bae0-0ac9fe5851cb@redhat.com>

On 16/11/2023 10:03, David Hildenbrand wrote:
> On 15.11.23 17:30, Ryan Roberts wrote:
>> Convert copy_pte_range() to copy a set of ptes in a batch. A given batch
>> maps a physically contiguous block of memory, all belonging to the same
>> folio, with the same permissions, and for shared mappings, the same
>> dirty state. This will likely improve performance by a tiny amount due
>> to batching the folio reference count management and calling set_ptes()
>> rather than making individual calls to set_pte_at().
>>
>> However, the primary motivation for this change is to reduce the number
>> of tlb maintenance operations that the arm64 backend has to perform
>> during fork, as it is about to add transparent support for the
>> "contiguous bit" in its ptes. By write-protecting the parent using the
>> new ptep_set_wrprotects() (note the 's' at the end) function, the
>> backend can avoid having to unfold contig ranges of PTEs, which is
>> expensive, when all ptes in the range are being write-protected.
>> Similarly, by using set_ptes() rather than set_pte_at() to set up ptes
>> in the child, the backend does not need to fold a contiguous range once
>> they are all populated - they can be initially populated as a contiguous
>> range in the first place.
>>
>> This change addresses the core-mm refactoring only, and introduces
>> ptep_set_wrprotects() with a default implementation that calls
>> ptep_set_wrprotect() for each pte in the range. A separate change will
>> implement ptep_set_wrprotects() in the arm64 backend to realize the
>> performance improvement as part of the work to enable contpte mappings.
>>
>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>> ---
>>   include/linux/pgtable.h |  13 +++
>>   mm/memory.c             | 175 +++++++++++++++++++++++++++++++---------
>>   2 files changed, 150 insertions(+), 38 deletions(-)
>>
>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
>> index af7639c3b0a3..1c50f8a0fdde 100644
>> --- a/include/linux/pgtable.h
>> +++ b/include/linux/pgtable.h
>> @@ -622,6 +622,19 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addres
>>   }
>>   #endif
>>
>> +#ifndef ptep_set_wrprotects
>> +struct mm_struct;
>> +static inline void ptep_set_wrprotects(struct mm_struct *mm,
>> +                unsigned long address, pte_t *ptep,
>> +                unsigned int nr)
>> +{
>> +    unsigned int i;
>> +
>> +    for (i = 0; i < nr; i++, address += PAGE_SIZE, ptep++)
>> +        ptep_set_wrprotect(mm, address, ptep);
>> +}
>> +#endif
>> +
>>   /*
>>    * On some architectures hardware does not set page access bit when accessing
>>    * memory page, it is responsibility of software setting this bit. It brings
It brings >> diff --git a/mm/memory.c b/mm/memory.c >> index 1f18ed4a5497..b7c8228883cf 100644 >> --- a/mm/memory.c >> +++ b/mm/memory.c >> @@ -921,46 +921,129 @@ copy_present_page(struct vm_area_struct *dst_vma, >> struct vm_area_struct *src_vma >>           /* Uffd-wp needs to be delivered to dest pte as well */ >>           pte = pte_mkuffd_wp(pte); >>       set_pte_at(dst_vma->vm_mm, addr, dst_pte, pte); >> -    return 0; >> +    return 1; >> +} >> + >> +static inline unsigned long page_cont_mapped_vaddr(struct page *page, >> +                struct page *anchor, unsigned long anchor_vaddr) >> +{ >> +    unsigned long offset; >> +    unsigned long vaddr; >> + >> +    offset = (page_to_pfn(page) - page_to_pfn(anchor)) << PAGE_SHIFT; >> +    vaddr = anchor_vaddr + offset; >> + >> +    if (anchor > page) { >> +        if (vaddr > anchor_vaddr) >> +            return 0; >> +    } else { >> +        if (vaddr < anchor_vaddr) >> +            return ULONG_MAX; >> +    } >> + >> +    return vaddr; >> +} >> + >> +static int folio_nr_pages_cont_mapped(struct folio *folio, >> +                      struct page *page, pte_t *pte, >> +                      unsigned long addr, unsigned long end, >> +                      pte_t ptent, bool *any_dirty) >> +{ >> +    int floops; >> +    int i; >> +    unsigned long pfn; >> +    pgprot_t prot; >> +    struct page *folio_end; >> + >> +    if (!folio_test_large(folio)) >> +        return 1; >> + >> +    folio_end = &folio->page + folio_nr_pages(folio); >> +    end = min(page_cont_mapped_vaddr(folio_end, page, addr), end); >> +    floops = (end - addr) >> PAGE_SHIFT; >> +    pfn = page_to_pfn(page); >> +    prot = pte_pgprot(pte_mkold(pte_mkclean(ptent))); >> + >> +    *any_dirty = pte_dirty(ptent); >> + >> +    pfn++; >> +    pte++; >> + >> +    for (i = 1; i < floops; i++) { >> +        ptent = ptep_get(pte); >> +        ptent = pte_mkold(pte_mkclean(ptent)); >> + >> +        if (!pte_present(ptent) || pte_pfn(ptent) != pfn || >> +            pgprot_val(pte_pgprot(ptent)) != pgprot_val(prot)) >> +            break; >> + >> +        if (pte_dirty(ptent)) >> +            *any_dirty = true; >> + >> +        pfn++; >> +        pte++; >> +    } >> + >> +    return i; >>   } >>     /* >> - * Copy one pte.  Returns 0 if succeeded, or -EAGAIN if one preallocated page >> - * is required to copy this pte. >> + * Copy set of contiguous ptes.  Returns number of ptes copied if succeeded >> + * (always gte 1), or -EAGAIN if one preallocated page is required to copy the >> + * first pte. 
>>    */
>>   static inline int
>> -copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
>> -         pte_t *dst_pte, pte_t *src_pte, unsigned long addr, int *rss,
>> -         struct folio **prealloc)
>> +copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
>> +          pte_t *dst_pte, pte_t *src_pte,
>> +          unsigned long addr, unsigned long end,
>> +          int *rss, struct folio **prealloc)
>>   {
>>       struct mm_struct *src_mm = src_vma->vm_mm;
>>       unsigned long vm_flags = src_vma->vm_flags;
>>       pte_t pte = ptep_get(src_pte);
>>       struct page *page;
>>       struct folio *folio;
>> +    int nr = 1;
>> +    bool anon;
>> +    bool any_dirty = pte_dirty(pte);
>> +    int i;
>>
>>       page = vm_normal_page(src_vma, addr, pte);
>> -    if (page)
>> +    if (page) {
>>           folio = page_folio(page);
>> -    if (page && folio_test_anon(folio)) {
>> -        /*
>> -         * If this page may have been pinned by the parent process,
>> -         * copy the page immediately for the child so that we'll always
>> -         * guarantee the pinned page won't be randomly replaced in the
>> -         * future.
>> -         */
>> -        folio_get(folio);
>> -        if (unlikely(page_try_dup_anon_rmap(page, false, src_vma))) {
>> -            /* Page may be pinned, we have to copy. */
>> -            folio_put(folio);
>> -            return copy_present_page(dst_vma, src_vma, dst_pte, src_pte,
>> -                         addr, rss, prealloc, page);
>> +        anon = folio_test_anon(folio);
>> +        nr = folio_nr_pages_cont_mapped(folio, page, src_pte, addr,
>> +                        end, pte, &any_dirty);
>> +
>> +        for (i = 0; i < nr; i++, page++) {
>> +            if (anon) {
>> +                /*
>> +                 * If this page may have been pinned by the
>> +                 * parent process, copy the page immediately for
>> +                 * the child so that we'll always guarantee the
>> +                 * pinned page won't be randomly replaced in the
>> +                 * future.
>> +                 */
>> +                if (unlikely(page_try_dup_anon_rmap(
>> +                        page, false, src_vma))) {
>> +                    if (i != 0)
>> +                        break;
>> +                    /* Page may be pinned, we have to copy. */
>> +                    return copy_present_page(
>> +                        dst_vma, src_vma, dst_pte,
>> +                        src_pte, addr, rss, prealloc,
>> +                        page);
>> +                }
>> +                rss[MM_ANONPAGES]++;
>> +                VM_BUG_ON(PageAnonExclusive(page));
>> +            } else {
>> +                page_dup_file_rmap(page, false);
>> +                rss[mm_counter_file(page)]++;
>> +            }
>>           }
>> -        rss[MM_ANONPAGES]++;
>> -    } else if (page) {
>> -        folio_get(folio);
>> -        page_dup_file_rmap(page, false);
>> -        rss[mm_counter_file(page)]++;
>> +
>> +        nr = i;
>> +        folio_ref_add(folio, nr);
>
> You're changing the order of mapcount vs. refcount increment. Don't. Make sure
> your refcount >= mapcount.

Ouch - good spot.

>
> You can do that easily by doing the folio_ref_add(folio, nr) first and then
> decrementing in case of error accordingly. Errors due to pinned pages are the
> corner case.
Yep, propose this for v3:

diff --git a/mm/memory.c b/mm/memory.c
index b7c8228883cf..98373349806e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1014,6 +1014,7 @@ copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
         anon = folio_test_anon(folio);
         nr = folio_nr_pages_cont_mapped(folio, page, src_pte, addr,
                         end, pte, &any_dirty);
+        folio_ref_add(folio, nr);
 
         for (i = 0; i < nr; i++, page++) {
             if (anon) {
@@ -1029,6 +1030,7 @@ copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
                     if (i != 0)
                         break;
                     /* Page may be pinned, we have to copy. */
+                    folio_ref_sub(folio, nr);
                     return copy_present_page(
                         dst_vma, src_vma, dst_pte,
                         src_pte, addr, rss, prealloc,
@@ -1042,8 +1044,10 @@ copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
             }
         }
 
-        nr = i;
-        folio_ref_add(folio, nr);
+        if (i < nr) {
+            folio_ref_sub(folio, nr - i);
+            nr = i;
+        }
     }

>
> I'll note that it will make a lot of sense to have batch variants of
> page_try_dup_anon_rmap() and page_dup_file_rmap().
>
> Especially, the batch variant of page_try_dup_anon_rmap() would only check once
> if the folio maybe pinned, and in that case, you can simply drop all references
> again. So you either have all or no ptes to process, which makes that code easier.
>
> But that can be added on top, and I'll happily do that.

That's very kind - thanks for the offer! I'll leave it to you then.
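
[Editor's note: for illustration, a rough sketch of the batched
page_try_dup_anon_rmap() variant David describes might look like the
following. The helper name page_try_dup_anon_rmap_range() and its exact
shape are hypothetical - no such API existed at the time of this thread -
while folio_needs_cow_for_dma() is the existing maybe-pinned test the
per-page function already uses. The key property is that the maybe-pinned
check runs once per batch, before any mapcount is taken, so on failure the
caller only has to drop the references it took with folio_ref_add() and
fall back to copying. It is also slightly more conservative than the
per-page version, since it bails even if no page in the batch is
PageAnonExclusive.]

    static inline int page_try_dup_anon_rmap_range(struct page *page, int nr,
                            struct vm_area_struct *src_vma)
    {
        struct folio *folio = page_folio(page);
        int i;

        VM_BUG_ON_PAGE(!PageAnon(page), page);

        /*
         * One maybe-pinned check for the whole batch, done before any
         * mapcount is taken. On failure the caller holds only the nr
         * references it batched via folio_ref_add(), which it can simply
         * drop again before falling back to copy_present_page().
         */
        if (unlikely(folio_needs_cow_for_dma(src_vma, folio)))
            return -EBUSY;

        for (i = 0; i < nr; i++, page++) {
            /* Now safe to map each page R/O into both processes. */
            if (PageAnonExclusive(page))
                ClearPageAnonExclusive(page);
            atomic_inc(&page->_mapcount);
        }

        return 0;
    }

[With something along these lines, copy_present_ptes() gets the
all-or-nothing behaviour mentioned above: either every pte in the batch is
duplicated, or none is and the whole range falls back to copying.]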