From: Ryan Roberts <ryan.roberts@arm.com>
To: Catalin Marinas, Will Deacon, "James E.J. Bottomley", Helge Deller,
	Nicholas Piggin, Christophe Leroy, Paul Walmsley, Palmer Dabbelt,
	Albert Ou, Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
	Christian Borntraeger, Sven Schnelle, Gerald Schaefer,
	"David S. Miller", Arnd Bergmann, Mike Kravetz, Muchun Song,
	SeongJae Park, Andrew Morton, Uladzislau Rezki, Christoph Hellwig,
	Lorenzo Stoakes, Anshuman Khandual, Peter Xu, Axel Rasmussen,
	Qi Zheng
Cc: Ryan Roberts, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-parisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-mm@kvack.org, stable@vger.kernel.org
Subject: [PATCH v1 6/8] mm: hugetlb: Convert set_huge_pte_at() to take vma
Date: Thu, 21 Sep 2023 17:20:05 +0100
Message-Id: <20230921162007.1630149-7-ryan.roberts@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230921162007.1630149-1-ryan.roberts@arm.com>
References: <20230921162007.1630149-1-ryan.roberts@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In order to fix a bug, arm64 needs access to the vma inside its
implementation of set_huge_pte_at(). Provide this by converting the mm
parameter to a vma.
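In sketch form, the conversion looks like this (illustrative prototypes
only; the real hunks are below):

	/* Before: */
	void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
			     pte_t *ptep, pte_t pte);

	/* After: the vma travels with the call... */
	void set_huge_pte_at(struct vm_area_struct *vma, unsigned long addr,
			     pte_t *ptep, pte_t pte);

	/* ...and the asm-generic fallback simply dereferences it: */
	set_pte_at(vma->vm_mm, addr, ptep, pte);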
Any implementations that require the mm can access it via vma->vm_mm.

This commit makes the required modifications to the core mm. Separate
commits update the arches before the actual bug is fixed in arm64.

No behavioral changes intended.

Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 include/asm-generic/hugetlb.h |  6 +++---
 include/linux/hugetlb.h       |  6 +++---
 mm/damon/vaddr.c              |  2 +-
 mm/hugetlb.c                  | 30 +++++++++++++++---------------
 mm/migrate.c                  |  2 +-
 mm/rmap.c                     | 10 +++++-----
 mm/vmalloc.c                  |  5 ++++-
 7 files changed, 32 insertions(+), 29 deletions(-)

diff --git a/include/asm-generic/hugetlb.h b/include/asm-generic/hugetlb.h
index 4da02798a00b..515e4777fb65 100644
--- a/include/asm-generic/hugetlb.h
+++ b/include/asm-generic/hugetlb.h
@@ -75,10 +75,10 @@ static inline void hugetlb_free_pgd_range(struct mmu_gather *tlb,
 #endif

 #ifndef __HAVE_ARCH_HUGE_SET_HUGE_PTE_AT
-static inline void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
-		pte_t *ptep, pte_t pte)
+static inline void set_huge_pte_at(struct vm_area_struct *vma,
+		unsigned long addr, pte_t *ptep, pte_t pte)
 {
-	set_pte_at(mm, addr, ptep, pte);
+	set_pte_at(vma->vm_mm, addr, ptep, pte);
 }
 #endif

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 5b2626063f4f..08184f32430c 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -984,7 +984,7 @@ static inline void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
 						unsigned long addr, pte_t *ptep,
 						pte_t old_pte, pte_t pte)
 {
-	set_huge_pte_at(vma->vm_mm, addr, ptep, pte);
+	set_huge_pte_at(vma, addr, ptep, pte);
 }
 #endif

@@ -1172,8 +1172,8 @@ static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
 #endif
 }

-static inline void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
-				   pte_t *ptep, pte_t pte)
+static inline void set_huge_pte_at(struct vm_area_struct *vma,
+				   unsigned long addr, pte_t *ptep, pte_t pte)
 {
 }

diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
index 4c81a9dbd044..55da8cee8fbc 100644
--- a/mm/damon/vaddr.c
+++ b/mm/damon/vaddr.c
@@ -347,7 +347,7 @@ static void damon_hugetlb_mkold(pte_t *pte, struct mm_struct *mm,
 	if (pte_young(entry)) {
 		referenced = true;
 		entry = pte_mkold(entry);
-		set_huge_pte_at(mm, addr, pte, entry);
+		set_huge_pte_at(vma, addr, pte, entry);
 	}

 #ifdef CONFIG_MMU_NOTIFIER
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ba6d39b71cb1..bcc30cd62586 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4988,7 +4988,7 @@ hugetlb_install_folio(struct vm_area_struct *vma, pte_t *ptep, unsigned long add
 	hugepage_add_new_anon_rmap(new_folio, vma, addr);
 	if (userfaultfd_wp(vma) && huge_pte_uffd_wp(old))
 		newpte = huge_pte_mkuffd_wp(newpte);
-	set_huge_pte_at(vma->vm_mm, addr, ptep, newpte);
+	set_huge_pte_at(vma, addr, ptep, newpte);
 	hugetlb_count_add(pages_per_huge_page(hstate_vma(vma)), vma->vm_mm);
 	folio_set_hugetlb_migratable(new_folio);
 }
@@ -5065,7 +5065,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 		} else if (unlikely(is_hugetlb_entry_hwpoisoned(entry))) {
 			if (!userfaultfd_wp(dst_vma))
 				entry = huge_pte_clear_uffd_wp(entry);
-			set_huge_pte_at(dst, addr, dst_pte, entry);
+			set_huge_pte_at(dst_vma, addr, dst_pte, entry);
 		} else if (unlikely(is_hugetlb_entry_migration(entry))) {
 			swp_entry_t swp_entry = pte_to_swp_entry(entry);
 			bool uffd_wp = pte_swp_uffd_wp(entry);
@@ -5080,17 +5080,17 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 				entry = swp_entry_to_pte(swp_entry);
 				if (userfaultfd_wp(src_vma) && uffd_wp)
 					entry = pte_swp_mkuffd_wp(entry);
-				set_huge_pte_at(src, addr, src_pte, entry);
+				set_huge_pte_at(src_vma, addr, src_pte, entry);
 			}
 			if (!userfaultfd_wp(dst_vma))
 				entry = huge_pte_clear_uffd_wp(entry);
-			set_huge_pte_at(dst, addr, dst_pte, entry);
+			set_huge_pte_at(dst_vma, addr, dst_pte, entry);
 		} else if (unlikely(is_pte_marker(entry))) {
 			pte_marker marker = copy_pte_marker(
 				pte_to_swp_entry(entry), dst_vma);

 			if (marker)
-				set_huge_pte_at(dst, addr, dst_pte,
+				set_huge_pte_at(dst_vma, addr, dst_pte,
 						make_pte_marker(marker));
 		} else {
 			entry = huge_ptep_get(src_pte);
@@ -5166,7 +5166,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 			if (!userfaultfd_wp(dst_vma))
 				entry = huge_pte_clear_uffd_wp(entry);
-			set_huge_pte_at(dst, addr, dst_pte, entry);
+			set_huge_pte_at(dst_vma, addr, dst_pte, entry);
 			hugetlb_count_add(npages, dst);
 		}
 		spin_unlock(src_ptl);
@@ -5202,7 +5202,7 @@ static void move_huge_pte(struct vm_area_struct *vma, unsigned long old_addr,
 		spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);

 	pte = huge_ptep_get_and_clear(mm, old_addr, src_pte);
-	set_huge_pte_at(mm, new_addr, dst_pte, pte);
+	set_huge_pte_at(vma, new_addr, dst_pte, pte);

 	if (src_ptl != dst_ptl)
 		spin_unlock(src_ptl);
@@ -5336,7 +5336,7 @@ static void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct
 			 */
			if (pte_swp_uffd_wp_any(pte) &&
			    !(zap_flags & ZAP_FLAG_DROP_MARKER))
-				set_huge_pte_at(mm, address, ptep,
+				set_huge_pte_at(vma, address, ptep,
						make_pte_marker(PTE_MARKER_UFFD_WP));
			else
				huge_pte_clear(mm, address, ptep, sz);
@@ -5370,7 +5370,7 @@ static void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct
 		/* Leave a uffd-wp pte marker if needed */
		if (huge_pte_uffd_wp(pte) &&
		    !(zap_flags & ZAP_FLAG_DROP_MARKER))
-			set_huge_pte_at(mm, address, ptep,
+			set_huge_pte_at(vma, address, ptep,
					make_pte_marker(PTE_MARKER_UFFD_WP));
 		hugetlb_count_sub(pages_per_huge_page(h), mm);
 		page_remove_rmap(page, vma, true);
@@ -5676,7 +5676,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 		hugepage_add_new_anon_rmap(new_folio, vma, haddr);
 		if (huge_pte_uffd_wp(pte))
 			newpte = huge_pte_mkuffd_wp(newpte);
-		set_huge_pte_at(mm, haddr, ptep, newpte);
+		set_huge_pte_at(vma, haddr, ptep, newpte);
 		folio_set_hugetlb_migratable(new_folio);
 		/* Make the old page be freed below */
 		new_folio = old_folio;
@@ -5972,7 +5972,7 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
 	 */
 	if (unlikely(pte_marker_uffd_wp(old_pte)))
 		new_pte = huge_pte_mkuffd_wp(new_pte);
-	set_huge_pte_at(mm, haddr, ptep, new_pte);
+	set_huge_pte_at(vma, haddr, ptep, new_pte);

 	hugetlb_count_add(pages_per_huge_page(h), mm);
 	if ((flags & FAULT_FLAG_WRITE) && !(vma->vm_flags & VM_SHARED)) {
@@ -6261,7 +6261,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 		}

 		_dst_pte = make_pte_marker(PTE_MARKER_POISONED);
-		set_huge_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);
+		set_huge_pte_at(dst_vma, dst_addr, dst_pte, _dst_pte);

 		/* No need to invalidate - it was non-present before */
 		update_mmu_cache(dst_vma, dst_addr, dst_pte);
@@ -6412,7 +6412,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 	if (wp_enabled)
 		_dst_pte = huge_pte_mkuffd_wp(_dst_pte);

-	set_huge_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);
+	set_huge_pte_at(dst_vma, dst_addr, dst_pte, _dst_pte);

 	hugetlb_count_add(pages_per_huge_page(h), dst_mm);

@@ -6598,7 +6598,7 @@ long hugetlb_change_protection(struct vm_area_struct *vma,
 			else if (uffd_wp_resolve)
 				newpte = pte_swp_clear_uffd_wp(newpte);
 			if (!pte_same(pte, newpte))
-				set_huge_pte_at(mm, address, ptep, newpte);
+				set_huge_pte_at(vma, address, ptep, newpte);
 		} else if (unlikely(is_pte_marker(pte))) {
 			/* No other markers apply for now. */
 			WARN_ON_ONCE(!pte_marker_uffd_wp(pte));
@@ -6622,7 +6622,7 @@ long hugetlb_change_protection(struct vm_area_struct *vma,
 			/* None pte */
 			if (unlikely(uffd_wp))
 				/* Safe to modify directly (none->non-present). */
-				set_huge_pte_at(mm, address, ptep,
+				set_huge_pte_at(vma, address, ptep,
 						make_pte_marker(PTE_MARKER_UFFD_WP));
 		}
 		spin_unlock(ptl);
diff --git a/mm/migrate.c b/mm/migrate.c
index b7fa020003f3..6aa752984f32 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -251,7 +251,7 @@ static bool remove_migration_pte(struct folio *folio,
 							rmap_flags);
 			else
 				page_dup_file_rmap(new, true);
-			set_huge_pte_at(vma->vm_mm, pvmw.address, pvmw.pte, pte);
+			set_huge_pte_at(vma, pvmw.address, pvmw.pte, pte);
 		} else
 #endif
 		{
diff --git a/mm/rmap.c b/mm/rmap.c
index ec7f8e6c9e48..a6353a0c67e8 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1628,7 +1628,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			pteval = swp_entry_to_pte(make_hwpoison_entry(subpage));
 			if (folio_test_hugetlb(folio)) {
 				hugetlb_count_sub(folio_nr_pages(folio), mm);
-				set_huge_pte_at(mm, address, pvmw.pte, pteval);
+				set_huge_pte_at(vma, address, pvmw.pte, pteval);
 			} else {
 				dec_mm_counter(mm, mm_counter(&folio->page));
 				set_pte_at(mm, address, pvmw.pte, pteval);
@@ -2020,7 +2020,7 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 			pteval = swp_entry_to_pte(make_hwpoison_entry(subpage));
 			if (folio_test_hugetlb(folio)) {
 				hugetlb_count_sub(folio_nr_pages(folio), mm);
-				set_huge_pte_at(mm, address, pvmw.pte, pteval);
+				set_huge_pte_at(vma, address, pvmw.pte, pteval);
 			} else {
 				dec_mm_counter(mm, mm_counter(&folio->page));
 				set_pte_at(mm, address, pvmw.pte, pteval);
@@ -2044,7 +2044,7 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,

 			if (arch_unmap_one(mm, vma, address, pteval) < 0) {
 				if (folio_test_hugetlb(folio))
-					set_huge_pte_at(mm, address, pvmw.pte, pteval);
+					set_huge_pte_at(vma, address, pvmw.pte, pteval);
 				else
 					set_pte_at(mm, address, pvmw.pte, pteval);
 				ret = false;
@@ -2058,7 +2058,7 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 			if (anon_exclusive &&
 			    page_try_share_anon_rmap(subpage)) {
 				if (folio_test_hugetlb(folio))
-					set_huge_pte_at(mm, address, pvmw.pte, pteval);
+					set_huge_pte_at(vma, address, pvmw.pte, pteval);
 				else
 					set_pte_at(mm, address, pvmw.pte, pteval);
 				ret = false;
@@ -2090,7 +2090,7 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 			if (pte_uffd_wp(pteval))
 				swp_pte = pte_swp_mkuffd_wp(swp_pte);
 			if (folio_test_hugetlb(folio))
-				set_huge_pte_at(mm, address, pvmw.pte, swp_pte);
+				set_huge_pte_at(vma, address, pvmw.pte, swp_pte);
 			else
 				set_pte_at(mm, address, pvmw.pte, swp_pte);
 			trace_set_migration_pte(address, pte_val(swp_pte),
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index ef8599d394fd..10fa40222f30 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -94,6 +94,9 @@ static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 			phys_addr_t phys_addr, pgprot_t prot,
 			unsigned int max_page_shift, pgtbl_mod_mask *mask)
 {
+#ifdef CONFIG_HUGETLB_PAGE
+	struct vm_area_struct vma = TLB_FLUSH_VMA(&init_mm, 0);
+#endif
 	pte_t *pte;
 	u64 pfn;
 	unsigned long size = PAGE_SIZE;
@@ -111,7 +114,7 @@ static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 			pte_t entry = pfn_pte(pfn, prot);

 			entry = arch_make_huge_pte(entry, ilog2(size), 0);
-			set_huge_pte_at(&init_mm, addr, pte, entry);
+			set_huge_pte_at(&vma, addr, pte, entry);
 			pfn += PFN_DOWN(size);
 			continue;
 		}
-- 
2.25.1
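A note on the mm/vmalloc.c hunk: vmap_pte_range() operates on kernel
mappings and has no real vma, so it fabricates a stack-local one bound
to init_mm via TLB_FLUSH_VMA(&init_mm, 0). Assuming TLB_FLUSH_VMA()
keeps its usual designated-initializer form from <asm-generic/tlb.h>,
the pattern expands to roughly:

	/* Sketch of the assumed expansion: a dummy on-stack vma whose
	 * only populated fields are vm_mm and vm_flags. */
	struct vm_area_struct vma = { .vm_mm = &init_mm, .vm_flags = 0 };

	set_huge_pte_at(&vma, addr, pte, entry);	/* vma.vm_mm == &init_mm */

Any implementation that reads vma->vm_mm therefore still sees &init_mm,
preserving the old behavior.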