Date: Wed, 6 Dec 2023 10:15:56 +0000
Subject: Re: [PATCH v8 04/10] mm: thp: Support allocation of anonymous multi-size THP
To: Barry Song <21cnbao@gmail.com>
Cc: Andrew Morton, Matthew Wilcox, Yin Fengwei, David Hildenbrand, Yu Zhao,
 Catalin Marinas, Anshuman Khandual, Yang Shi, "Huang, Ying", Zi Yan,
 Luis Chamberlain, Itaru Kitayama, "Kirill A.
 Shutemov", John Hubbard, David Rientjes, Vlastimil Babka, Hugh Dickins,
 Kefeng Wang, Alistair Popple, linux-mm@kvack.org,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
References: <20231204102027.57185-1-ryan.roberts@arm.com>
 <20231204102027.57185-5-ryan.roberts@arm.com>
 <5216caaf-1fcf-4715-99c3-521e2a1cc756@arm.com>
From: Ryan Roberts

On 05/12/2023 20:16, Barry Song wrote:
> On Tue, Dec 5, 2023 at 11:48 PM Ryan Roberts wrote:
>>
>> On 05/12/2023 01:24, Barry Song wrote:
>>> On Tue, Dec 5, 2023 at 9:15 AM Barry Song <21cnbao@gmail.com> wrote:
>>>>
>>>> On Mon, Dec 4, 2023 at 6:21 PM Ryan Roberts wrote:
>>>>>
>>>>> Introduce the logic to allow THP to be configured (through the new sysfs
>>>>> interface we just added) to allocate large folios to back anonymous
>>>>> memory, which are larger than the base page size but smaller than
>>>>> PMD-size. We call this new THP extension "multi-size THP" (mTHP).
>>>>>
>>>>> mTHP continues to be PTE-mapped, but in many cases can still provide
>>>>> similar benefits to traditional PMD-sized THP: Page faults are
>>>>> significantly reduced (by a factor of e.g. 4, 8, 16, etc. depending on
>>>>> the configured order), but latency spikes are much less prominent
>>>>> because the size of each page isn't as huge as the PMD-sized variant and
>>>>> there is less memory to clear in each page fault. The number of per-page
>>>>> operations (e.g. ref counting, rmap management, lru list management) is
>>>>> also significantly reduced since those ops now become per-folio.
>>>>>
>>>>> Some architectures also employ TLB compression mechanisms to squeeze
>>>>> more entries in when a set of PTEs are virtually and physically
>>>>> contiguous and appropriately aligned. In this case, TLB misses will
>>>>> occur less often.
>>>>>
>>>>> The new behaviour is disabled by default, but can be enabled at runtime
>>>>> by writing to /sys/kernel/mm/transparent_hugepage/hugepage-XXkb/enabled
>>>>> (see documentation in previous commit). The long term aim is to change
>>>>> the default to include suitable lower orders, but there are some risks
>>>>> around internal fragmentation that need to be better understood first.
>>>>>
>>>>> Signed-off-by: Ryan Roberts
>>>>> ---
>>>>>  include/linux/huge_mm.h |   6 ++-
>>>>>  mm/memory.c             | 106 ++++++++++++++++++++++++++++++++++++----
>>>>>  2 files changed, 101 insertions(+), 11 deletions(-)
>>>>>
>>>>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>>>>> index bd0eadd3befb..91a53b9835a4 100644
>>>>> --- a/include/linux/huge_mm.h
>>>>> +++ b/include/linux/huge_mm.h
>>>>> @@ -68,9 +68,11 @@ extern struct kobj_attribute shmem_enabled_attr;
>>>>>  #define HPAGE_PMD_NR (1<<HPAGE_PMD_ORDER)
>>>>>
>>>>>  /*
>>>>> - * Mask of all large folio orders supported for anonymous THP.
>>>>> + * Mask of all large folio orders supported for anonymous THP; all orders up to
>>>>> + * and including PMD_ORDER, except order-0 (which is not "huge") and order-1
>>>>> + * (which is a limitation of the THP implementation).
>>>>>  */
>>>>> -#define THP_ORDERS_ALL_ANON    BIT(PMD_ORDER)
>>>>> +#define THP_ORDERS_ALL_ANON    ((BIT(PMD_ORDER + 1) - 1) & ~(BIT(0) | BIT(1)))
>>>>>
>>>>>  /*
>>>>>   * Mask of all large folio orders supported for file THP.
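[For context, a worked example of the new mask, assuming 4K base pages where
PMD_ORDER == 9:

	BIT(PMD_ORDER + 1) - 1       == 0x3ff   /* orders 0..9 */
	0x3ff & ~(BIT(0) | BIT(1))   == 0x3fc   /* orders 2..9 */

i.e. every order from 2 up to and including PMD_ORDER becomes eligible, where
the old definition permitted only BIT(PMD_ORDER).]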
>>>>> diff --git a/mm/memory.c b/mm/memory.c
>>>>> index 3ceeb0f45bf5..bf7e93813018 100644
>>>>> --- a/mm/memory.c
>>>>> +++ b/mm/memory.c
>>>>> @@ -4125,6 +4125,84 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>>>>         return ret;
>>>>>  }
>>>>>
>>>>> +static bool pte_range_none(pte_t *pte, int nr_pages)
>>>>> +{
>>>>> +       int i;
>>>>> +
>>>>> +       for (i = 0; i < nr_pages; i++) {
>>>>> +               if (!pte_none(ptep_get_lockless(pte + i)))
>>>>> +                       return false;
>>>>> +       }
>>>>> +
>>>>> +       return true;
>>>>> +}
>>>>> +
>>>>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>>>> +static struct folio *alloc_anon_folio(struct vm_fault *vmf)
>>>>> +{
>>>>> +       gfp_t gfp;
>>>>> +       pte_t *pte;
>>>>> +       unsigned long addr;
>>>>> +       struct folio *folio;
>>>>> +       struct vm_area_struct *vma = vmf->vma;
>>>>> +       unsigned long orders;
>>>>> +       int order;
>>>>> +
>>>>> +       /*
>>>>> +        * If uffd is active for the vma we need per-page fault fidelity to
>>>>> +        * maintain the uffd semantics.
>>>>> +        */
>>>>> +       if (userfaultfd_armed(vma))
>>>>> +               goto fallback;
>>>>> +
>>>>> +       /*
>>>>> +        * Get a list of all the (large) orders below PMD_ORDER that are enabled
>>>>> +        * for this vma. Then filter out the orders that can't be allocated over
>>>>> +        * the faulting address and still be fully contained in the vma.
>>>>> +        */
>>>>> +       orders = thp_vma_allowable_orders(vma, vma->vm_flags, false, true, true,
>>>>> +                                         BIT(PMD_ORDER) - 1);
>>>>> +       orders = thp_vma_suitable_orders(vma, vmf->address, orders);
>>>>> +
>>>>> +       if (!orders)
>>>>> +               goto fallback;
>>>>> +
>>>>> +       pte = pte_offset_map(vmf->pmd, vmf->address & PMD_MASK);
>>>>> +       if (!pte)
>>>>> +               return ERR_PTR(-EAGAIN);
>>>>> +
>>>>> +       order = first_order(orders);
>>>>> +       while (orders) {
>>>>> +               addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
>>>>> +               vmf->pte = pte + pte_index(addr);
>>>>> +               if (pte_range_none(vmf->pte, 1 << order))
>>>>> +                       break;
>>>>> +               order = next_order(&orders, order);
>>>>> +       }
>>>>> +
>>>>> +       vmf->pte = NULL;
>>>>> +       pte_unmap(pte);
>>>>> +
>>>>> +       gfp = vma_thp_gfp_mask(vma);
>>>>> +
>>>>> +       while (orders) {
>>>>> +               addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
>>>>> +               folio = vma_alloc_folio(gfp, order, vma, addr, true);
>>>>> +               if (folio) {
>>>>> +                       clear_huge_page(&folio->page, addr, 1 << order);
>>>>
>>>> Minor.
>>>>
>>>> Do we have to constantly clear a huge page? Is it possible to let
>>>> post_alloc_hook() finish this job by using __GFP_ZERO/__GFP_ZEROTAGS as
>>>> vma_alloc_zeroed_movable_folio() is doing?
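[For context, the zero-at-allocation pattern Barry refers to: the generic
fallback of vma_alloc_zeroed_movable_folio() is roughly the sketch below,
where __GFP_ZERO makes post_alloc_hook() clear the page at allocation time;
architectures may override it, e.g. arm64 adds __GFP_ZEROTAGS for MTE tag
zeroing.

	static inline struct folio *
	vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
				       unsigned long vaddr)
	{
		/* __GFP_ZERO: the page allocator zeroes the folio for us. */
		return vma_alloc_folio(GFP_HIGHUSER_MOVABLE | __GFP_ZERO,
				       0, vma, vaddr, false);
	}
]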
>>
>> I'm currently following the same allocation pattern as is done for
>> PMD-sized THP. In earlier versions of this patch I was trying to be smarter
>> and use __GFP_ZERO/__GFP_ZEROTAGS as you suggest, but I was advised to keep
>> it simple and follow the existing pattern.
>>
>> I have a vague recollection that __GFP_ZERO is not preferred for large
>> folios because of some issue with virtually indexed caches? (Matthew: did I
>> see you mention that in some other context?)
>>
>> That said, I wasn't aware that Android ships with
>> CONFIG_INIT_ON_ALLOC_DEFAULT_ON (I thought it was only used as a debug
>> option), so I can see the potential for some overhead reduction here.
>>
>> Options:
>>
>> 1) Leave it as is and accept the duplicated clearing.
>> 2) Pass __GFP_ZERO and remove clear_huge_page().
>> 3) Define __GFP_SKIP_ZERO even when kasan is not enabled and pass it down
>>    so clear_huge_page() is the only clear.
>> 4) Make clear_huge_page() conditional on !want_init_on_alloc().
>>
>> I prefer option 4. What do you think?
>
> Either 1 or 4 is OK to me if we will finally remove this duplicated
> clear_huge_page() on top. 4 is even better as it can at least temporarily
> resolve the problem.

I'm going to stick with option 1 for this series. Then we can fix it uniformly
here and for PMD-sized THP in a separate patch (possibly with the approach
suggested in 4).

> In the Android gki_defconfig,
> https://android.googlesource.com/kernel/common/+/refs/heads/android14-6.1-lts/arch/arm64/configs/gki_defconfig
> Android always has the below:
> CONFIG_INIT_ON_ALLOC_DEFAULT_ON=y
>
> Here is some explanation of the reason:
> https://source.android.com/docs/security/test/memory-safety/zero-initialized-memory

>> As an aside, I've also noticed that clear_huge_page() should take
>> vmf->address so that it clears the faulting page last to keep the cache
>> hot. If we decide on an option that keeps clear_huge_page(), I'll also make
>> that change.

I'll make this change for the next version.

>> Thanks,
>> Ryan
>
> Thanks
> Barry
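[For reference, a minimal sketch of what option 4 could look like at the
allocation site, purely illustrative: want_init_on_alloc() is the existing
helper that reports whether the allocator will already zero the memory
(init_on_alloc=1 or __GFP_ZERO in the gfp mask), so the folio would only be
cleared once.

	folio = vma_alloc_folio(gfp, order, vma, addr, true);
	if (folio) {
		/* Skip the explicit clear if post_alloc_hook() zeroed it. */
		if (!want_init_on_alloc(gfp))
			clear_huge_page(&folio->page, addr, 1 << order);
		...
	}
]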