Message-ID: <2d027a8d-adfb-481d-89ea-c99139e669aa@arm.com>
Date: Thu, 16 Nov 2023 13:49:13 +0000
Subject: Re: [PATCH v2 01/14] mm: Batch-copy PTE ranges during fork()
To: David Hildenbrand, Catalin Marinas, Will Deacon, Ard Biesheuvel,
 Marc Zyngier, Oliver Upton, James Morse, Suzuki K Poulose, Zenghui Yu,
 Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
 Vincenzo Frascino, Andrew Morton, Anshuman Khandual, Matthew Wilcox,
 Yu Zhao, Mark Rutland, Kefeng Wang, John Hubbard, Zi Yan
Cc: linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
References: <20231115163018.1303287-1-ryan.roberts@arm.com>
 <20231115163018.1303287-2-ryan.roberts@arm.com>
 <89a9fe07-a5c5-4a99-b588-e6145053c58f@redhat.com>
 <1459f78b-e80c-4f21-bc65-f0ab259d348a@arm.com>
 <08ef2c36-2b9c-4b96-9d1d-68cca0f68ba5@redhat.com>
From: Ryan Roberts
In-Reply-To: <08ef2c36-2b9c-4b96-9d1d-68cca0f68ba5@redhat.com>

On 16/11/2023 13:20, David Hildenbrand wrote:
> On 16.11.23 12:20, Ryan Roberts wrote:
>> On 16/11/2023 11:03, David Hildenbrand wrote:
>>> On 15.11.23 17:30, Ryan Roberts wrote:
>>>> Convert copy_pte_range() to copy a set of ptes in a batch. A given batch
>>>> maps a physically contiguous block of memory, all belonging to the same
>>>> folio, with the same permissions, and for shared mappings, the same
>>>> dirty state. This will likely improve performance by a tiny amount due
>>>> to batching the folio reference count management and calling set_ptes()
>>>> rather than making individual calls to set_pte_at().
>>>>
>>>> However, the primary motivation for this change is to reduce the number
>>>> of tlb maintenance operations that the arm64 backend has to perform
>>>> during fork, as it is about to add transparent support for the
>>>> "contiguous bit" in its ptes. By write-protecting the parent using the
>>>> new ptep_set_wrprotects() (note the 's' at the end) function, the
>>>> backend can avoid having to unfold contig ranges of PTEs, which is
>>>> expensive, when all ptes in the range are being write-protected.
>>>> Similarly, by using set_ptes() rather than set_pte_at() to set up ptes
>>>> in the child, the backend does not need to fold a contiguous range once
>>>> they are all populated - they can be initially populated as a
>>>> contiguous range in the first place.
>>>>
>>>> This change addresses the core-mm refactoring only, and introduces
>>>> ptep_set_wrprotects() with a default implementation that calls
>>>> ptep_set_wrprotect() for each pte in the range. A separate change will
>>>> implement ptep_set_wrprotects() in the arm64 backend to realize the
>>>> performance improvement as part of the work to enable contpte mappings.
>>>>
>>>> Signed-off-by: Ryan Roberts
>>>> ---
>>>>  include/linux/pgtable.h |  13 +++
>>>>  mm/memory.c             | 175 +++++++++++++++++++++++++++++++---------
>>>>  2 files changed, 150 insertions(+), 38 deletions(-)
>>>>
>>>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
>>>> index af7639c3b0a3..1c50f8a0fdde 100644
>>>> --- a/include/linux/pgtable.h
>>>> +++ b/include/linux/pgtable.h
>>>> @@ -622,6 +622,19 @@ static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long address
>>>>  }
>>>>  #endif
>>>>
>>>> +#ifndef ptep_set_wrprotects
>>>> +struct mm_struct;
>>>> +static inline void ptep_set_wrprotects(struct mm_struct *mm,
>>>> +				unsigned long address, pte_t *ptep,
>>>> +				unsigned int nr)
>>>> +{
>>>> +	unsigned int i;
>>>> +
>>>> +	for (i = 0; i < nr; i++, address += PAGE_SIZE, ptep++)
>>>> +		ptep_set_wrprotect(mm, address, ptep);
>>>> +}
>>>> +#endif
>>>> +
>>>>  /*
>>>>   * On some architectures hardware does not set page access bit when accessing
>>>>   * memory page, it is responsibility of software setting this bit. It brings
>>>> diff --git a/mm/memory.c b/mm/memory.c
>>>> index 1f18ed4a5497..b7c8228883cf 100644
>>>> --- a/mm/memory.c
>>>> +++ b/mm/memory.c
>>>> @@ -921,46 +921,129 @@ copy_present_page(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
>>>>  		/* Uffd-wp needs to be delivered to dest pte as well */
>>>>  		pte = pte_mkuffd_wp(pte);
>>>>  	set_pte_at(dst_vma->vm_mm, addr, dst_pte, pte);
>>>> -	return 0;
>>>> +	return 1;
>>>> +}
>>>> +
>>>> +static inline unsigned long page_cont_mapped_vaddr(struct page *page,
>>>> +				struct page *anchor, unsigned long anchor_vaddr)
>>>> +{
>>>> +	unsigned long offset;
>>>> +	unsigned long vaddr;
>>>> +
>>>> +	offset = (page_to_pfn(page) - page_to_pfn(anchor)) << PAGE_SHIFT;
>>>> +	vaddr = anchor_vaddr + offset;
>>>> +
>>>> +	if (anchor > page) {
>>>> +		if (vaddr > anchor_vaddr)
>>>> +			return 0;
>>>> +	} else {
>>>> +		if (vaddr < anchor_vaddr)
>>>> +			return ULONG_MAX;
>>>> +	}
>>>> +
>>>> +	return vaddr;
>>>> +}
>>>> +
>>>> +static int folio_nr_pages_cont_mapped(struct folio *folio,
>>>> +				      struct page *page, pte_t *pte,
>>>> +				      unsigned long addr, unsigned long end,
>>>> +				      pte_t ptent, bool *any_dirty)
>>>> +{
>>>> +	int floops;
>>>> +	int i;
>>>> +	unsigned long pfn;
>>>> +	pgprot_t prot;
>>>> +	struct page *folio_end;
>>>> +
>>>> +	if (!folio_test_large(folio))
>>>> +		return 1;
>>>> +
>>>> +	folio_end = &folio->page + folio_nr_pages(folio);
>>>> +	end = min(page_cont_mapped_vaddr(folio_end, page, addr), end);
>>>> +	floops = (end - addr) >> PAGE_SHIFT;
>>>> +	pfn = page_to_pfn(page);
>>>> +	prot = pte_pgprot(pte_mkold(pte_mkclean(ptent)));
>>>> +
>>>> +	*any_dirty = pte_dirty(ptent);
>>>> +
>>>> +	pfn++;
>>>> +	pte++;
>>>> +
>>>> +	for (i = 1; i < floops; i++) {
>>>> +		ptent = ptep_get(pte);
>>>> +		ptent = pte_mkold(pte_mkclean(ptent));
>>>> +
>>>> +		if (!pte_present(ptent) || pte_pfn(ptent) != pfn ||
>>>> +		    pgprot_val(pte_pgprot(ptent)) != pgprot_val(prot))
>>>> +			break;
>>>> +
>>>> +		if (pte_dirty(ptent))
>>>> +			*any_dirty = true;
>>>> +
>>>> +		pfn++;
>>>> +		pte++;
>>>> +	}
>>>> +
>>>> +	return i;
>>>>  }
>>>>
>>>>  /*
>>>> - * Copy one pte.  Returns 0 if succeeded, or -EAGAIN if one preallocated page
>>>> - * is required to copy this pte.
>>>> + * Copy set of contiguous ptes.  Returns number of ptes copied if succeeded
>>>> + * (always gte 1), or -EAGAIN if one preallocated page is required to copy the
>>>> + * first pte.
>>>>   */
>>>>  static inline int
>>>> -copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
>>>> -	 pte_t *dst_pte, pte_t *src_pte, unsigned long addr, int *rss,
>>>> -	 struct folio **prealloc)
>>>> +copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
>>>> +	  pte_t *dst_pte, pte_t *src_pte,
>>>> +	  unsigned long addr, unsigned long end,
>>>> +	  int *rss, struct folio **prealloc)
>>>>  {
>>>>  	struct mm_struct *src_mm = src_vma->vm_mm;
>>>>  	unsigned long vm_flags = src_vma->vm_flags;
>>>>  	pte_t pte = ptep_get(src_pte);
>>>>  	struct page *page;
>>>>  	struct folio *folio;
>>>> +	int nr = 1;
>>>> +	bool anon;
>>>> +	bool any_dirty = pte_dirty(pte);
>>>> +	int i;
>>>>
>>>>  	page = vm_normal_page(src_vma, addr, pte);
>>>> -	if (page)
>>>> +	if (page) {
>>>>  		folio = page_folio(page);
>>>> -	if (page && folio_test_anon(folio)) {
>>>> -		/*
>>>> -		 * If this page may have been pinned by the parent process,
>>>> -		 * copy the page immediately for the child so that we'll always
>>>> -		 * guarantee the pinned page won't be randomly replaced in the
>>>> -		 * future.
>>>> -		 */
>>>> -		folio_get(folio);
>>>> -		if (unlikely(page_try_dup_anon_rmap(page, false, src_vma))) {
>>>> -			/* Page may be pinned, we have to copy. */
>>>> -			folio_put(folio);
>>>> -			return copy_present_page(dst_vma, src_vma, dst_pte, src_pte,
>>>> -						 addr, rss, prealloc, page);
>>>> +		anon = folio_test_anon(folio);
>>>> +		nr = folio_nr_pages_cont_mapped(folio, page, src_pte, addr,
>>>> +						end, pte, &any_dirty);
>>>> +
>>>> +		for (i = 0; i < nr; i++, page++) {
>>>> +			if (anon) {
>>>> +				/*
>>>> +				 * If this page may have been pinned by the
>>>> +				 * parent process, copy the page immediately for
>>>> +				 * the child so that we'll always guarantee the
>>>> +				 * pinned page won't be randomly replaced in the
>>>> +				 * future.
>>>> +				 */
>>>> +				if (unlikely(page_try_dup_anon_rmap(
>>>> +						page, false, src_vma))) {
>>>> +					if (i != 0)
>>>> +						break;
>>>> +					/* Page may be pinned, we have to copy. */
>>>> +					return copy_present_page(
>>>> +						dst_vma, src_vma, dst_pte,
>>>> +						src_pte, addr, rss, prealloc,
>>>> +						page);
>>>> +				}
>>>> +				rss[MM_ANONPAGES]++;
>>>> +				VM_BUG_ON(PageAnonExclusive(page));
>>>> +			} else {
>>>> +				page_dup_file_rmap(page, false);
>>>> +				rss[mm_counter_file(page)]++;
>>>> +			}
>>>>  		}
>>>> -		rss[MM_ANONPAGES]++;
>>>> -	} else if (page) {
>>>> -		folio_get(folio);
>>>> -		page_dup_file_rmap(page, false);
>>>> -		rss[mm_counter_file(page)]++;
>>>> +
>>>> +		nr = i;
>>>> +		folio_ref_add(folio, nr);
>>>>  	}
>>>>
>>>>  	/*
>>>> @@ -968,24 +1051,28 @@ copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
>>>>  	 * in the parent and the child
>>>>  	 */
>>>>  	if (is_cow_mapping(vm_flags) && pte_write(pte)) {
>>>> -		ptep_set_wrprotect(src_mm, addr, src_pte);
>>>> +		ptep_set_wrprotects(src_mm, addr, src_pte, nr);
>>>>  		pte = pte_wrprotect(pte);
>>>
>>> You likely want an "any_pte_writable" check here instead, no?
>>>
>>> Any operations that target a single individual PTE while multiple PTEs are
>>> adjusted are suspicious :)
>>
>> The idea is that I've already constrained the batch of pages such that the
>> permissions are all the same (see folio_nr_pages_cont_mapped()). So if the
>> first pte is writable, then they all are - something has gone badly wrong if
>> some are writable and others are not.
>
> I wonder if it would be cleaner and easier to not do that, though.
>
> Simply record if any pte is writable. Afterwards they will *all* be R/O and
> you can set the cont bit, correct?

Oh I see what you mean - that only works for cow mappings though. If you have a
shared mapping, you won't be making it read-only at fork. So if we ignore
pte_write() state when demarcating the batches, we will end up with a batch of
pages with a mix of RO and RW in the parent, but then we set_ptes() for the
child and those pages will all have the permissions of the first page of the
batch.

I guess we could special case and do it the way you suggested for cow mappings;
it might be faster, but certainly not cleaner and easier IMHO.
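
For concreteness, here is a minimal sketch of what that cow-only special case
might look like. It is purely illustrative and not part of the patch: the
helper name folio_nr_pages_cont_mapped_flexible() and its flexible_prot
parameter are hypothetical, and it assumes the same surroundings as
folio_nr_pages_cont_mapped() above (page_cont_mapped_vaddr() etc.).

/*
 * Hypothetical variant of folio_nr_pages_cont_mapped(): when flexible_prot
 * is true (i.e. for cow mappings, where the whole batch is about to be
 * write-protected anyway), ignore the write bit when comparing pte
 * permissions and instead report whether any pte in the batch was writable.
 * For shared mappings the caller would pass flexible_prot == false, so
 * batches still split on RO/RW boundaries and set_ptes() in the child cannot
 * apply the first pte's permissions to pages that had different ones.
 */
static int folio_nr_pages_cont_mapped_flexible(struct folio *folio,
		struct page *page, pte_t *pte, unsigned long addr,
		unsigned long end, pte_t ptent, bool flexible_prot,
		bool *any_dirty, bool *any_writable)
{
	struct page *folio_end;
	unsigned long pfn;
	pgprot_t prot;
	int floops;
	int i;

	if (!folio_test_large(folio))
		return 1;

	folio_end = &folio->page + folio_nr_pages(folio);
	end = min(page_cont_mapped_vaddr(folio_end, page, addr), end);
	floops = (end - addr) >> PAGE_SHIFT;
	pfn = page_to_pfn(page);

	/* Mask off the bits that are allowed to differ within a batch. */
	prot = pte_pgprot(pte_mkold(pte_mkclean(flexible_prot ?
						pte_wrprotect(ptent) : ptent)));
	*any_dirty = pte_dirty(ptent);
	*any_writable = pte_write(ptent);

	pfn++;
	pte++;

	for (i = 1; i < floops; i++) {
		pte_t orig = ptep_get(pte);
		pte_t cmp = pte_mkold(pte_mkclean(orig));

		if (flexible_prot)
			cmp = pte_wrprotect(cmp);

		if (!pte_present(cmp) || pte_pfn(cmp) != pfn ||
		    pgprot_val(pte_pgprot(cmp)) != pgprot_val(prot))
			break;

		if (pte_dirty(orig))
			*any_dirty = true;
		if (pte_write(orig))
			*any_writable = true;

		pfn++;
		pte++;
	}

	return i;
}

copy_present_ptes() would then pass is_cow_mapping(vm_flags) for
flexible_prot, and the decision to call ptep_set_wrprotects() on the batch
would key off any_writable rather than pte_write() of the first pte.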