From: Barry Song <21cnbao@gmail.com>
Date: Tue, 5 Dec 2023 09:24:26 +0800
Subject: Re: [PATCH v8 04/10] mm: thp: Support allocation of anonymous multi-size THP
To: Ryan Roberts <ryan.roberts@arm.com>
Cc: Andrew Morton, Matthew Wilcox, Yin Fengwei, David Hildenbrand,
 Yu Zhao, Catalin Marinas, Anshuman Khandual, Yang Shi, "Huang, Ying",
 Zi Yan, Luis Chamberlain, Itaru Kitayama, "Kirill A. Shutemov",
 John Hubbard, David Rientjes, Vlastimil Babka, Hugh Dickins,
 Kefeng Wang, Alistair Popple, linux-mm@kvack.org,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
References: <20231204102027.57185-1-ryan.roberts@arm.com> <20231204102027.57185-5-ryan.roberts@arm.com>

On Tue, Dec 5, 2023 at 9:15 AM Barry Song <21cnbao@gmail.com> wrote:
>
> On Mon, Dec 4, 2023 at 6:21 PM Ryan Roberts wrote:
> >
> > Introduce the logic to allow THP to be configured (through the new
> > sysfs interface we just added) to allocate large folios to back
> > anonymous memory, which are larger than the base page size but smaller
> > than PMD-size. We call this new THP extension "multi-size THP" (mTHP).
> >
> > mTHP continues to be PTE-mapped, but in many cases can still provide
> > similar benefits to traditional PMD-sized THP: page faults are
> > significantly reduced (by a factor of e.g. 4, 8, 16, etc. depending on
> > the configured order), but latency spikes are much less prominent
> > because each page isn't as huge as the PMD-sized variant and there is
> > less memory to clear in each page fault. The number of per-page
> > operations (e.g. ref counting, rmap management, lru list management)
> > is also significantly reduced since those ops now become per-folio.
> >
> > Some architectures also employ TLB compression mechanisms to squeeze
> > more entries in when a set of PTEs are virtually and physically
> > contiguous and appropriately aligned. In this case, TLB misses will
> > occur less often.
> >
> > The new behaviour is disabled by default, but can be enabled at runtime
> > by writing to /sys/kernel/mm/transparent_hugepage/hugepage-XXkb/enabled
> > (see documentation in previous commit). The long term aim is to change
> > the default to include suitable lower orders, but there are some risks
> > around internal fragmentation that need to be better understood first.
> >
> > Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> > ---
> >  include/linux/huge_mm.h |   6 ++-
> >  mm/memory.c             | 106 ++++++++++++++++++++++++++++++++++++----
> >  2 files changed, 101 insertions(+), 11 deletions(-)
> >
> > diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> > index bd0eadd3befb..91a53b9835a4 100644
> > --- a/include/linux/huge_mm.h
> > +++ b/include/linux/huge_mm.h
> > @@ -68,9 +68,11 @@ extern struct kobj_attribute shmem_enabled_attr;
> >  #define HPAGE_PMD_NR (1<<HPAGE_PMD_ORDER)
> >
> >  /*
> > - * Mask of all large folio orders supported for anonymous THP.
> > + * Mask of all large folio orders supported for anonymous THP; all orders up to
> > + * and including PMD_ORDER, except order-0 (which is not "huge") and order-1
> > + * (which is a limitation of the THP implementation).
> >   */
> > -#define THP_ORDERS_ALL_ANON    BIT(PMD_ORDER)
> > +#define THP_ORDERS_ALL_ANON    ((BIT(PMD_ORDER + 1) - 1) & ~(BIT(0) | BIT(1)))
> >
> >  /*
> >   * Mask of all large folio orders supported for file THP.
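
To make the new mask concrete: with 4KiB base pages (so PMD_ORDER == 9),
THP_ORDERS_ALL_ANON enables orders 2 through 9, i.e. 16KiB through 2MiB
folios. A quick user-space sketch of this, where the two macros are copied
from the hunk above and everything else is just illustration, not kernel
code:

#include <stdio.h>

#define BIT(n)                  (1UL << (n))
#define PMD_ORDER               9       /* assumption: 4KiB base pages */
#define THP_ORDERS_ALL_ANON     ((BIT(PMD_ORDER + 1) - 1) & ~(BIT(0) | BIT(1)))

int main(void)
{
        int order;

        /* Walk the mask: prints "allowed" for orders 2..9 only. */
        for (order = 0; order <= PMD_ORDER; order++)
                printf("order %d (%4lu KiB folio): %s\n", order, 4UL << order,
                       (THP_ORDERS_ALL_ANON & BIT(order)) ? "allowed" : "excluded");
        return 0;
}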
> >
> > diff --git a/mm/memory.c b/mm/memory.c
> > index 3ceeb0f45bf5..bf7e93813018 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -4125,6 +4125,84 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >         return ret;
> >  }
> >
> > +static bool pte_range_none(pte_t *pte, int nr_pages)
> > +{
> > +       int i;
> > +
> > +       for (i = 0; i < nr_pages; i++) {
> > +               if (!pte_none(ptep_get_lockless(pte + i)))
> > +                       return false;
> > +       }
> > +
> > +       return true;
> > +}
> > +
> > +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> > +static struct folio *alloc_anon_folio(struct vm_fault *vmf)
> > +{
> > +       gfp_t gfp;
> > +       pte_t *pte;
> > +       unsigned long addr;
> > +       struct folio *folio;
> > +       struct vm_area_struct *vma = vmf->vma;
> > +       unsigned long orders;
> > +       int order;
> > +
> > +       /*
> > +        * If uffd is active for the vma we need per-page fault fidelity to
> > +        * maintain the uffd semantics.
> > +        */
> > +       if (userfaultfd_armed(vma))
> > +               goto fallback;
> > +
> > +       /*
> > +        * Get a list of all the (large) orders below PMD_ORDER that are enabled
> > +        * for this vma. Then filter out the orders that can't be allocated over
> > +        * the faulting address and still be fully contained in the vma.
> > +        */
> > +       orders = thp_vma_allowable_orders(vma, vma->vm_flags, false, true, true,
> > +                                         BIT(PMD_ORDER) - 1);
> > +       orders = thp_vma_suitable_orders(vma, vmf->address, orders);
> > +
> > +       if (!orders)
> > +               goto fallback;
> > +
> > +       pte = pte_offset_map(vmf->pmd, vmf->address & PMD_MASK);
> > +       if (!pte)
> > +               return ERR_PTR(-EAGAIN);
> > +
> > +       order = first_order(orders);
> > +       while (orders) {
> > +               addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
> > +               vmf->pte = pte + pte_index(addr);
> > +               if (pte_range_none(vmf->pte, 1 << order))
> > +                       break;
> > +               order = next_order(&orders, order);
> > +       }
> > +
> > +       vmf->pte = NULL;
> > +       pte_unmap(pte);
> > +
> > +       gfp = vma_thp_gfp_mask(vma);
> > +
> > +       while (orders) {
> > +               addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
> > +               folio = vma_alloc_folio(gfp, order, vma, addr, true);
> > +               if (folio) {
> > +                       clear_huge_page(&folio->page, addr, 1 << order);
>
> Minor.
>
> Do we have to constantly clear a huge page? Is it possible to let
> post_alloc_hook() finish this job by using __GFP_ZERO/__GFP_ZEROTAGS as
> vma_alloc_zeroed_movable_folio() is doing?
>
> struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
>                                              unsigned long vaddr)
> {
>         gfp_t flags = GFP_HIGHUSER_MOVABLE | __GFP_ZERO;
>
>         /*
>          * If the page is mapped with PROT_MTE, initialise the tags at the
>          * point of allocation and page zeroing as this is usually faster than
>          * separate DC ZVA and STGM.
>          */
>         if (vma->vm_flags & VM_MTE)
>                 flags |= __GFP_ZEROTAGS;
>
>         return vma_alloc_folio(flags, 0, vma, vaddr, false);
> }

I am asking because Android and some other kernels may set
CONFIG_INIT_ON_ALLOC_DEFAULT_ON, in which case the explicit
clear_huge_page() here duplicates zeroing the allocator has already done.

When want_init_on_alloc() below returns true, post_alloc_hook() has already
cleared the folio before vma_alloc_folio() returns it:

static inline bool want_init_on_alloc(gfp_t flags)
{
        if (static_branch_maybe(CONFIG_INIT_ON_ALLOC_DEFAULT_ON,
                                &init_on_alloc))
                return true;
        return flags & __GFP_ZERO;
}
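
To make the suggestion concrete, one possible shape of it (an untested
sketch against the hunk above, not a proposed patch): pass __GFP_ZERO so
post_alloc_hook() zeroes the folio, and drop the explicit clear. Note that
clear_huge_page() takes the faulting address as a hint so it can clear the
subpages in a cache-friendly order; that property would be lost here, which
is part of the trade-off being asked about.

        gfp = vma_thp_gfp_mask(vma) | __GFP_ZERO;

        while (orders) {
                addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
                folio = vma_alloc_folio(gfp, order, vma, addr, true);
                if (folio)
                        return folio;   /* already zeroed by post_alloc_hook() */
                order = next_order(&orders, order);
        }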
>
> > +                       return folio;
> > +               }
> > +               order = next_order(&orders, order);
> > +       }
> > +
> > +fallback:
> > +       return vma_alloc_zeroed_movable_folio(vma, vmf->address);
> > +}
> > +#else
> > +#define alloc_anon_folio(vmf) \
> > +               vma_alloc_zeroed_movable_folio((vmf)->vma, (vmf)->address)
> > +#endif
> > +
> >  /*
> >   * We enter with non-exclusive mmap_lock (to exclude vma changes,
> >   * but allow concurrent faults), and pte mapped but not yet locked.
> > @@ -4132,6 +4210,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >   */
> >  static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
> >  {
> > +       int i;
> > +       int nr_pages = 1;
> > +       unsigned long addr = vmf->address;
> >         bool uffd_wp = vmf_orig_pte_uffd_wp(vmf);
> >         struct vm_area_struct *vma = vmf->vma;
> >         struct folio *folio;
> > @@ -4176,10 +4257,15 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
> >         /* Allocate our own private page. */
> >         if (unlikely(anon_vma_prepare(vma)))
> >                 goto oom;
> > -       folio = vma_alloc_zeroed_movable_folio(vma, vmf->address);
> > +       folio = alloc_anon_folio(vmf);
> > +       if (IS_ERR(folio))
> > +               return 0;
> >         if (!folio)
> >                 goto oom;
> >
> > +       nr_pages = folio_nr_pages(folio);
> > +       addr = ALIGN_DOWN(vmf->address, nr_pages * PAGE_SIZE);
> > +
> >         if (mem_cgroup_charge(folio, vma->vm_mm, GFP_KERNEL))
> >                 goto oom_free_page;
> >         folio_throttle_swaprate(folio, GFP_KERNEL);
> > @@ -4196,12 +4282,13 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
> >         if (vma->vm_flags & VM_WRITE)
> >                 entry = pte_mkwrite(pte_mkdirty(entry), vma);
> >
> > -       vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
> > -                                      &vmf->ptl);
> > +       vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, addr, &vmf->ptl);
> >         if (!vmf->pte)
> >                 goto release;
> > -       if (vmf_pte_changed(vmf)) {
> > -               update_mmu_tlb(vma, vmf->address, vmf->pte);
> > +       if ((nr_pages == 1 && vmf_pte_changed(vmf)) ||
> > +           (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages))) {
> > +               for (i = 0; i < nr_pages; i++)
> > +                       update_mmu_tlb(vma, addr + PAGE_SIZE * i, vmf->pte + i);
> >                 goto release;
> >         }
> >
> > @@ -4216,16 +4303,17 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
> >                 return handle_userfault(vmf, VM_UFFD_MISSING);
> >         }
> >
> > -       inc_mm_counter(vma->vm_mm, MM_ANONPAGES);
> > -       folio_add_new_anon_rmap(folio, vma, vmf->address);
> > +       folio_ref_add(folio, nr_pages - 1);
> > +       add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr_pages);
> > +       folio_add_new_anon_rmap(folio, vma, addr);
> >         folio_add_lru_vma(folio, vma);
> >  setpte:
> >         if (uffd_wp)
> >                 entry = pte_mkuffd_wp(entry);
> > -       set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry);
> > +       set_ptes(vma->vm_mm, addr, vmf->pte, entry, nr_pages);
> >
> >         /* No need to invalidate - it was non-present before */
> > -       update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
> > +       update_mmu_cache_range(vmf, vma, addr, vmf->pte, nr_pages);
> >  unlock:
> >         if (vmf->pte)
> >                 pte_unmap_unlock(vmf->pte, vmf->ptl);
> > --
> > 2.25.1
> >
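
P.S. For readers skimming the do_anonymous_page() hunk above, a standalone
illustration of its alignment and batching arithmetic: the faulting address
is rounded down to the folio boundary, nr_pages PTEs are populated in one
set_ptes() call, and folio_ref_add() takes one extra reference per extra
page. The fault address below is made up and ALIGN_DOWN is redefined
locally; this is plain user-space C, not kernel code.

#include <stdio.h>

#define PAGE_SIZE        4096UL
#define ALIGN_DOWN(x, a) ((x) & ~((a) - 1))

int main(void)
{
        unsigned long fault = 0x7f3b2a941123UL;  /* hypothetical faulting VA */
        int order;

        for (order = 0; order <= 4; order += 2) {
                unsigned long nr_pages = 1UL << order;
                unsigned long addr = ALIGN_DOWN(fault, nr_pages * PAGE_SIZE);

                /* order 0: 1 PTE; order 2: 4 PTEs; order 4: 16 PTEs. */
                printf("order %d: base 0x%lx, set_ptes() maps %2lu PTEs, "
                       "folio_ref_add(folio, %lu)\n",
                       order, addr, nr_pages, nr_pages - 1);
        }
        return 0;
}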