From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: Yin Fengwei, linux-arch@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Matthew Wilcox
Subject: [PATCH v6 36/38] mm: Convert do_set_pte() to set_pte_range()
Date: Wed, 2 Aug 2023 16:14:04 +0100
Message-Id: <20230802151406.3735276-37-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20230802151406.3735276-1-willy@infradead.org>
References: <20230802151406.3735276-1-willy@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Yin Fengwei

set_pte_range() allows setting up page table entries for a specific
range.  It takes advantage of batched rmap updates for large folios,
and it now takes care of calling update_mmu_cache_range() itself.

Signed-off-by: Yin Fengwei
Signed-off-by: Matthew Wilcox (Oracle)
---
 Documentation/filesystems/locking.rst |  2 +-
 include/linux/mm.h                    |  3 ++-
 mm/filemap.c                          |  3 +--
 mm/memory.c                           | 37 +++++++++++++++++----------
 4 files changed, 28 insertions(+), 17 deletions(-)

diff --git a/Documentation/filesystems/locking.rst b/Documentation/filesystems/locking.rst
index 89c5ec9e3392..cd032f2324e8 100644
--- a/Documentation/filesystems/locking.rst
+++ b/Documentation/filesystems/locking.rst
@@ -670,7 +670,7 @@ locked. The VM will unlock the page.
 Filesystem should find and map pages associated with offsets from "start_pgoff"
 till "end_pgoff". ->map_pages() is called with the RCU lock held and must
 not block.  If it's not possible to reach a page without blocking,
-filesystem should skip it. Filesystem should use do_set_pte() to setup
+filesystem should skip it. Filesystem should use set_pte_range() to setup
 page table entry. Pointer to entry associated with the page is passed in
 "pte" field in vm_fault structure. Pointers to entries for other offsets
 should be calculated relative to "pte".

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 2fbc6c631764..19493d6a2bb8 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1346,7 +1346,8 @@ static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
 }
 
 vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page);
-void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr);
+void set_pte_range(struct vm_fault *vmf, struct folio *folio,
+		struct page *page, unsigned int nr, unsigned long addr);
 vm_fault_t finish_fault(struct vm_fault *vmf);
 vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf);
 
diff --git a/mm/filemap.c b/mm/filemap.c
index 9dc15af7ab5b..2e7050461a87 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3506,8 +3506,7 @@ static vm_fault_t filemap_map_folio_range(struct vm_fault *vmf,
 			ret = VM_FAULT_NOPAGE;
 
 		ref_count++;
-		do_set_pte(vmf, page, addr);
-		update_mmu_cache(vma, addr, vmf->pte);
+		set_pte_range(vmf, folio, page, 1, addr);
 	} while (vmf->pte++, page++, addr += PAGE_SIZE, ++count < nr_pages);
 
 	/* Restore the vmf->pte */
diff --git a/mm/memory.c b/mm/memory.c
index e25edd4c24b8..621716109627 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4465,15 +4465,24 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
 }
 #endif
 
-void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr)
+/**
+ * set_pte_range - Set a range of PTEs to point to pages in a folio.
+ * @vmf: Fault description.
+ * @folio: The folio that contains @page.
+ * @page: The first page to create a PTE for.
+ * @nr: The number of PTEs to create.
+ * @addr: The first address to create a PTE for.
+ */
+void set_pte_range(struct vm_fault *vmf, struct folio *folio,
+		struct page *page, unsigned int nr, unsigned long addr)
 {
 	struct vm_area_struct *vma = vmf->vma;
 	bool uffd_wp = vmf_orig_pte_uffd_wp(vmf);
 	bool write = vmf->flags & FAULT_FLAG_WRITE;
-	bool prefault = vmf->address != addr;
+	bool prefault = !in_range(vmf->address, addr, nr * PAGE_SIZE);
 	pte_t entry;
 
-	flush_icache_page(vma, page);
+	flush_icache_pages(vma, page, nr);
 	entry = mk_pte(page, vma->vm_page_prot);
 
 	if (prefault && arch_wants_old_prefaulted_pte())
@@ -4487,14 +4496,18 @@ void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr)
 		entry = pte_mkuffd_wp(entry);
 	/* copy-on-write page */
 	if (write && !(vma->vm_flags & VM_SHARED)) {
-		inc_mm_counter(vma->vm_mm, MM_ANONPAGES);
-		page_add_new_anon_rmap(page, vma, addr);
-		lru_cache_add_inactive_or_unevictable(page, vma);
+		add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr);
+		VM_BUG_ON_FOLIO(nr != 1, folio);
+		folio_add_new_anon_rmap(folio, vma, addr);
+		folio_add_lru_vma(folio, vma);
 	} else {
-		inc_mm_counter(vma->vm_mm, mm_counter_file(page));
-		page_add_file_rmap(page, vma, false);
+		add_mm_counter(vma->vm_mm, mm_counter_file(page), nr);
+		folio_add_file_rmap_range(folio, page, nr, vma, false);
 	}
-	set_pte_at(vma->vm_mm, addr, vmf->pte, entry);
+	set_ptes(vma->vm_mm, addr, vmf->pte, entry, nr);
+
+	/* no need to invalidate: a not-present page won't be cached */
+	update_mmu_cache_range(vmf, vma, addr, vmf->pte, nr);
 }
 
 static bool vmf_pte_changed(struct vm_fault *vmf)
@@ -4562,11 +4575,9 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
 
 	/* Re-check under ptl */
 	if (likely(!vmf_pte_changed(vmf))) {
-		do_set_pte(vmf, page, vmf->address);
-
-		/* no need to invalidate: a not-present page won't be cached */
-		update_mmu_cache(vma, vmf->address, vmf->pte);
+		struct folio *folio = page_folio(page);
+		set_pte_range(vmf, folio, page, 1, vmf->address);
 		ret = 0;
 	} else {
 		update_mmu_tlb(vma, vmf->address, vmf->pte);
 	}
-- 
2.40.1