Message-ID: <5216caaf-1fcf-4715-99c3-521e2a1cc756@arm.com>
Date: Tue, 5 Dec 2023 10:48:28 +0000
Subject: Re: [PATCH v8 04/10] mm: thp: Support allocation of anonymous multi-size THP
From: Ryan Roberts
To: Barry Song <21cnbao@gmail.com>
Cc: Andrew Morton, Matthew Wilcox, Yin Fengwei, David Hildenbrand, Yu Zhao,
 Catalin Marinas, Anshuman Khandual, Yang Shi, "Huang, Ying", Zi Yan,
 Luis Chamberlain, Itaru Kitayama, "Kirill A. Shutemov", John Hubbard,
 David Rientjes, Vlastimil Babka, Hugh Dickins, Kefeng Wang, Alistair Popple,
 linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org
References: <20231204102027.57185-1-ryan.roberts@arm.com>
 <20231204102027.57185-5-ryan.roberts@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 05/12/2023 01:24, Barry Song wrote:
> On Tue, Dec 5, 2023 at 9:15 AM Barry Song <21cnbao@gmail.com> wrote:
>>
>> On Mon, Dec 4, 2023 at 6:21 PM Ryan Roberts wrote:
>>>
>>> Introduce the logic to allow THP to be configured (through the new sysfs
>>> interface we just added) to allocate large folios to back anonymous
>>> memory, which are larger than the base page size but smaller than
>>> PMD-size. We call this new THP extension "multi-size THP" (mTHP).
>>>
>>> mTHP continues to be PTE-mapped, but in many cases can still provide
>>> similar benefits to traditional PMD-sized THP: Page faults are
>>> significantly reduced (by a factor of e.g. 4, 8, 16, etc. depending on
>>> the configured order), but latency spikes are much less prominent
>>> because the size of each page isn't as huge as the PMD-sized variant and
>>> there is less memory to clear in each page fault. The number of per-page
>>> operations (e.g. ref counting, rmap management, lru list management) is
>>> also significantly reduced since those ops now become per-folio.
>>>
>>> Some architectures also employ TLB compression mechanisms to squeeze
>>> more entries in when a set of PTEs are virtually and physically
>>> contiguous and appropriately aligned. In this case, TLB misses will
>>> occur less often.
>>>
>>> The new behaviour is disabled by default, but can be enabled at runtime
>>> by writing to /sys/kernel/mm/transparent_hugepage/hugepage-XXkb/enabled
>>> (see documentation in previous commit). The long term aim is to change
>>> the default to include suitable lower orders, but there are some risks
>>> around internal fragmentation that need to be better understood first.
>>>
>>> Signed-off-by: Ryan Roberts
>>> ---
>>>  include/linux/huge_mm.h |   6 ++-
>>>  mm/memory.c             | 106 ++++++++++++++++++++++++++++++++++++----
>>>  2 files changed, 101 insertions(+), 11 deletions(-)
>>>
>>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>>> index bd0eadd3befb..91a53b9835a4 100644
>>> --- a/include/linux/huge_mm.h
>>> +++ b/include/linux/huge_mm.h
>>> @@ -68,9 +68,11 @@ extern struct kobj_attribute shmem_enabled_attr;
>>>  #define HPAGE_PMD_NR (1<<HPAGE_PMD_ORDER)
>>>
>>>  /*
>>> - * Mask of all large folio orders supported for anonymous THP.
>>> + * Mask of all large folio orders supported for anonymous THP; all orders up to
>>> + * and including PMD_ORDER, except order-0 (which is not "huge") and order-1
>>> + * (which is a limitation of the THP implementation).
>>>   */
>>> -#define THP_ORDERS_ALL_ANON	BIT(PMD_ORDER)
>>> +#define THP_ORDERS_ALL_ANON	((BIT(PMD_ORDER + 1) - 1) & ~(BIT(0) | BIT(1)))
>>>
>>>  /*
>>>   * Mask of all large folio orders supported for file THP.
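
(Aside, to make the new mask concrete: assuming 4K base pages, where
PMD_ORDER is 9, the new definition works out to

    THP_ORDERS_ALL_ANON = (BIT(10) - 1) & ~(BIT(0) | BIT(1))
                        = 0x3ff & ~0x3
                        = 0x3fc

i.e. every order from 2 up to and including PMD_ORDER; other base page sizes
just move the top of that range.)
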
>>> diff --git a/mm/memory.c b/mm/memory.c
>>> index 3ceeb0f45bf5..bf7e93813018 100644
>>> --- a/mm/memory.c
>>> +++ b/mm/memory.c
>>> @@ -4125,6 +4125,84 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>>  	return ret;
>>>  }
>>>
>>> +static bool pte_range_none(pte_t *pte, int nr_pages)
>>> +{
>>> +	int i;
>>> +
>>> +	for (i = 0; i < nr_pages; i++) {
>>> +		if (!pte_none(ptep_get_lockless(pte + i)))
>>> +			return false;
>>> +	}
>>> +
>>> +	return true;
>>> +}
>>> +
>>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>> +static struct folio *alloc_anon_folio(struct vm_fault *vmf)
>>> +{
>>> +	gfp_t gfp;
>>> +	pte_t *pte;
>>> +	unsigned long addr;
>>> +	struct folio *folio;
>>> +	struct vm_area_struct *vma = vmf->vma;
>>> +	unsigned long orders;
>>> +	int order;
>>> +
>>> +	/*
>>> +	 * If uffd is active for the vma we need per-page fault fidelity to
>>> +	 * maintain the uffd semantics.
>>> +	 */
>>> +	if (userfaultfd_armed(vma))
>>> +		goto fallback;
>>> +
>>> +	/*
>>> +	 * Get a list of all the (large) orders below PMD_ORDER that are enabled
>>> +	 * for this vma. Then filter out the orders that can't be allocated over
>>> +	 * the faulting address and still be fully contained in the vma.
>>> +	 */
>>> +	orders = thp_vma_allowable_orders(vma, vma->vm_flags, false, true, true,
>>> +					  BIT(PMD_ORDER) - 1);
>>> +	orders = thp_vma_suitable_orders(vma, vmf->address, orders);
>>> +
>>> +	if (!orders)
>>> +		goto fallback;
>>> +
>>> +	pte = pte_offset_map(vmf->pmd, vmf->address & PMD_MASK);
>>> +	if (!pte)
>>> +		return ERR_PTR(-EAGAIN);
>>> +
>>> +	order = first_order(orders);
>>> +	while (orders) {
>>> +		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
>>> +		vmf->pte = pte + pte_index(addr);
>>> +		if (pte_range_none(vmf->pte, 1 << order))
>>> +			break;
>>> +		order = next_order(&orders, order);
>>> +	}
>>> +
>>> +	vmf->pte = NULL;
>>> +	pte_unmap(pte);
>>> +
>>> +	gfp = vma_thp_gfp_mask(vma);
>>> +
>>> +	while (orders) {
>>> +		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
>>> +		folio = vma_alloc_folio(gfp, order, vma, addr, true);
>>> +		if (folio) {
>>> +			clear_huge_page(&folio->page, addr, 1 << order);
>>
>> Minor.
>>
>> Do we have to constantly clear a huge page? Is it possible to let
>> post_alloc_hook() finish this job by using __GFP_ZERO/__GFP_ZEROTAGS as
>> vma_alloc_zeroed_movable_folio() is doing?

I'm currently following the same allocation pattern as is done for PMD-sized
THP. In earlier versions of this patch I was trying to be smarter and use the
__GFP_ZERO/__GFP_ZEROTAGS as you suggest, but I was advised to keep it simple
and follow the existing pattern.

I have a vague recollection __GFP_ZERO is not preferred for large folios
because of some issue with virtually indexed caches? (Matthew: did I see you
mention that in some other context?)

That said, I wasn't aware that Android ships with
CONFIG_INIT_ON_ALLOC_DEFAULT_ON (I thought it was only used as a debug
option), so I can see the potential for some overhead reduction here.

Options:

1) Leave it as is and accept the duplicated clearing.
2) Pass __GFP_ZERO and remove clear_huge_page().
3) Define __GFP_SKIP_ZERO even when kasan is not enabled and pass it down so
   clear_huge_page() is the only clear.
4) Make clear_huge_page() conditional on !want_init_on_alloc().

I prefer option 4. What do you think?

As an aside, I've also noticed that clear_huge_page() should take
vmf->address so that it clears the faulting page last to keep the cache hot.
If we decide on an option that keeps clear_huge_page(), I'll also make that
change.
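
To make option 4 concrete (and fold in the vmf->address hint mentioned just
above), a rough, completely untested sketch of that allocation loop might
look something like the below; want_init_on_alloc() already returns true for
both the CONFIG_INIT_ON_ALLOC_DEFAULT_ON case and __GFP_ZERO:

	while (orders) {
		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
		folio = vma_alloc_folio(gfp, order, vma, addr, true);
		if (folio) {
			/*
			 * Skip the explicit clear if the allocator has
			 * already zeroed the folio (init_on_alloc or
			 * __GFP_ZERO); pass vmf->address as the hint so
			 * the faulting page is cleared last.
			 */
			if (!want_init_on_alloc(gfp))
				clear_huge_page(&folio->page, vmf->address,
						1 << order);
			return folio;
		}
		order = next_order(&orders, order);
	}
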
Thanks,
Ryan

>>
>> struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
>> 						unsigned long vaddr)
>> {
>> 	gfp_t flags = GFP_HIGHUSER_MOVABLE | __GFP_ZERO;
>>
>> 	/*
>> 	 * If the page is mapped with PROT_MTE, initialise the tags at the
>> 	 * point of allocation and page zeroing as this is usually faster than
>> 	 * separate DC ZVA and STGM.
>> 	 */
>> 	if (vma->vm_flags & VM_MTE)
>> 		flags |= __GFP_ZEROTAGS;
>>
>> 	return vma_alloc_folio(flags, 0, vma, vaddr, false);
>> }
>
> I am asking this because Android and some other kernels might always set
> CONFIG_INIT_ON_ALLOC_DEFAULT_ON, which means one more explicit clear_page
> is doing a duplicated job.
>
> When the below is true, post_alloc_hook has already cleared the huge page
> before vma_alloc_folio() returns the folio:
>
> static inline bool want_init_on_alloc(gfp_t flags)
> {
> 	if (static_branch_maybe(CONFIG_INIT_ON_ALLOC_DEFAULT_ON,
> 				&init_on_alloc))
> 		return true;
> 	return flags & __GFP_ZERO;
> }
>
>>
>>> +			return folio;
>>> +		}
>>> +		order = next_order(&orders, order);
>>> +	}
>>> +
>>> +fallback:
>>> +	return vma_alloc_zeroed_movable_folio(vma, vmf->address);
>>> +}
>>> +#else
>>> +#define alloc_anon_folio(vmf) \
>>> +		vma_alloc_zeroed_movable_folio((vmf)->vma, (vmf)->address)
>>> +#endif
>>> +
>>>  /*
>>>   * We enter with non-exclusive mmap_lock (to exclude vma changes,
>>>   * but allow concurrent faults), and pte mapped but not yet locked.
>>> @@ -4132,6 +4210,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>>   */
>>>  static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
>>>  {
>>> +	int i;
>>> +	int nr_pages = 1;
>>> +	unsigned long addr = vmf->address;
>>>  	bool uffd_wp = vmf_orig_pte_uffd_wp(vmf);
>>>  	struct vm_area_struct *vma = vmf->vma;
>>>  	struct folio *folio;
>>> @@ -4176,10 +4257,15 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
>>>  	/* Allocate our own private page. */
>>>  	if (unlikely(anon_vma_prepare(vma)))
>>>  		goto oom;
>>> -	folio = vma_alloc_zeroed_movable_folio(vma, vmf->address);
>>> +	folio = alloc_anon_folio(vmf);
>>> +	if (IS_ERR(folio))
>>> +		return 0;
>>>  	if (!folio)
>>>  		goto oom;
>>>
>>> +	nr_pages = folio_nr_pages(folio);
>>> +	addr = ALIGN_DOWN(vmf->address, nr_pages * PAGE_SIZE);
>>> +
>>>  	if (mem_cgroup_charge(folio, vma->vm_mm, GFP_KERNEL))
>>>  		goto oom_free_page;
>>>  	folio_throttle_swaprate(folio, GFP_KERNEL);
>>> @@ -4196,12 +4282,13 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
>>>  	if (vma->vm_flags & VM_WRITE)
>>>  		entry = pte_mkwrite(pte_mkdirty(entry), vma);
>>>
>>> -	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
>>> -			&vmf->ptl);
>>> +	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, addr, &vmf->ptl);
>>>  	if (!vmf->pte)
>>>  		goto release;
>>> -	if (vmf_pte_changed(vmf)) {
>>> -		update_mmu_tlb(vma, vmf->address, vmf->pte);
>>> +	if ((nr_pages == 1 && vmf_pte_changed(vmf)) ||
>>> +	    (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages))) {
>>> +		for (i = 0; i < nr_pages; i++)
>>> +			update_mmu_tlb(vma, addr + PAGE_SIZE * i, vmf->pte + i);
>>>  		goto release;
>>>  	}
>>>
>>> @@ -4216,16 +4303,17 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
>>>  		return handle_userfault(vmf, VM_UFFD_MISSING);
>>>  	}
>>>
>>> -	inc_mm_counter(vma->vm_mm, MM_ANONPAGES);
>>> -	folio_add_new_anon_rmap(folio, vma, vmf->address);
>>> +	folio_ref_add(folio, nr_pages - 1);
>>> +	add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr_pages);
>>> +	folio_add_new_anon_rmap(folio, vma, addr);
>>>  	folio_add_lru_vma(folio, vma);
>>>  setpte:
>>>  	if (uffd_wp)
>>>  		entry = pte_mkuffd_wp(entry);
>>> -	set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry);
>>> +	set_ptes(vma->vm_mm, addr, vmf->pte, entry, nr_pages);
>>>
>>>  	/* No need to invalidate - it was non-present before */
>>> -	update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
>>> +	update_mmu_cache_range(vmf, vma, addr, vmf->pte, nr_pages);
>>>  unlock:
>>>  	if (vmf->pte)
>>>  		pte_unmap_unlock(vmf->pte, vmf->ptl);
>>> --
>>> 2.25.1
>>>