From: Muchun Song
Date: Tue, 10 Oct 2023 14:58:56 +0800
Subject: Re: [PATCH 1/1] hugetlb_vmemmap: use folio argument for hugetlb_vmemmap_* functions
To: Usama Arif, linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, akpm@linux-foundation.org, mike.kravetz@oracle.com, songmuchun@bytedance.com, fam.zheng@bytedance.com, liangma@liangbit.com, punit.agrawal@bytedance.com
In-Reply-To: <20231009151830.2248885-2-usama.arif@bytedance.com>
References: <20231009151830.2248885-1-usama.arif@bytedance.com> <20231009151830.2248885-2-usama.arif@bytedance.com>

On 2023/10/9 23:18, Usama Arif wrote:
> Most function calls in hugetlb.c are made with folio arguments.
> This brings hugetlb_vmemmap calls inline with them by using folio
> instead of head struct page. Head struct page is still needed
> within these functions.
>
> The set/clear/test functions for hugepages are also changed to
> folio versions.
>
> Signed-off-by: Usama Arif
> ---
>  mm/hugetlb.c         | 10 +++++-----
>  mm/hugetlb_vmemmap.c | 42 ++++++++++++++++++++++--------------------
>  mm/hugetlb_vmemmap.h |  8 ++++----
>  3 files changed, 31 insertions(+), 29 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index b12f5fd295bb..73803d62066a 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1606,7 +1606,7 @@ static void __update_and_free_hugetlb_folio(struct hstate *h,
>           * is no longer identified as a hugetlb page. hugetlb_vmemmap_restore
>           * can only be passed hugetlb pages and will BUG otherwise.
>           */
> -        if (clear_dtor && hugetlb_vmemmap_restore(h, &folio->page)) {
> +        if (clear_dtor && hugetlb_vmemmap_restore(h, folio)) {
>                  spin_lock_irq(&hugetlb_lock);
>                  /*
>                   * If we cannot allocate vmemmap pages, just refuse to free the
> @@ -1749,7 +1749,7 @@ static void bulk_vmemmap_restore_error(struct hstate *h,
>           * quit processing the list to retry the bulk operation.
>           */
>          list_for_each_entry_safe(folio, t_folio, folio_list, lru)
> -                if (hugetlb_vmemmap_restore(h, &folio->page)) {
> +                if (hugetlb_vmemmap_restore(h, folio)) {
>                          list_del(&folio->lru);
>                          spin_lock_irq(&hugetlb_lock);
>                          add_hugetlb_folio(h, folio, true);
> @@ -1907,7 +1907,7 @@ static void init_new_hugetlb_folio(struct hstate *h, struct folio *folio)
>  static void __prep_new_hugetlb_folio(struct hstate *h, struct folio *folio)
>  {
>          init_new_hugetlb_folio(h, folio);
> -        hugetlb_vmemmap_optimize(h, &folio->page);
> +        hugetlb_vmemmap_optimize(h, folio);
>  }
>
>  static void prep_new_hugetlb_folio(struct hstate *h, struct folio *folio, int nid)
> @@ -2312,7 +2312,7 @@ int dissolve_free_huge_page(struct page *page)
>                   * Attempt to allocate vmemmmap here so that we can take
>                   * appropriate action on failure.
>                   */
> -                rc = hugetlb_vmemmap_restore(h, &folio->page);
> +                rc = hugetlb_vmemmap_restore(h, folio);
>                  if (!rc) {
>                          update_and_free_hugetlb_folio(h, folio, false);
>                  } else {
> @@ -3721,7 +3721,7 @@ static int demote_free_hugetlb_folio(struct hstate *h, struct folio *folio)
>           * passed hugetlb folios and will BUG otherwise.
>           */
>          if (folio_test_hugetlb(folio)) {
> -                rc = hugetlb_vmemmap_restore(h, &folio->page);
> +                rc = hugetlb_vmemmap_restore(h, folio);
>                  if (rc) {
>                          /* Allocation of vmemmmap failed, we can not demote folio */
>                          spin_lock_irq(&hugetlb_lock);
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index d2999c303031..84b5ac93b9e5 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -495,14 +495,15 @@ EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
>  static bool vmemmap_optimize_enabled = IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON);
>  core_param(hugetlb_free_vmemmap, vmemmap_optimize_enabled, bool, 0);
>
> -static int __hugetlb_vmemmap_restore(const struct hstate *h, struct page *head, unsigned long flags)
> +static int __hugetlb_vmemmap_restore(const struct hstate *h, struct folio *folio, unsigned long flags)
>  {
>          int ret;
> +        struct page *head = &folio->page;
>          unsigned long vmemmap_start = (unsigned long)head, vmemmap_end;
>          unsigned long vmemmap_reuse;
>
>          VM_WARN_ON_ONCE(!PageHuge(head));
> -        if (!HPageVmemmapOptimized(head))
> +        if (!folio_test_hugetlb_vmemmap_optimized(folio))
>                  return 0;
>
>          vmemmap_end = vmemmap_start + hugetlb_vmemmap_size(h);
> @@ -518,7 +519,7 @@ static int __hugetlb_vmemmap_restore(const struct hstate *h, struct page *head,
>           */
>          ret = vmemmap_remap_alloc(vmemmap_start, vmemmap_end, vmemmap_reuse, flags);
>          if (!ret) {
> -                ClearHPageVmemmapOptimized(head);
> +                folio_clear_hugetlb_vmemmap_optimized(folio);
>                  static_branch_dec(&hugetlb_optimize_vmemmap_key);
>          }
>
> @@ -530,14 +531,14 @@ static int __hugetlb_vmemmap_restore(const struct hstate *h, struct page *head,
>   *                      hugetlb_vmemmap_optimize()) vmemmap pages which
>   *                      will be reallocated and remapped.
>   * @h:          struct hstate.
> - * @head:       the head page whose vmemmap pages will be restored.
> + * @folio:      the folio whose vmemmap pages will be restored.
>   *
> - * Return: %0 if @head's vmemmap pages have been reallocated and remapped,
> + * Return: %0 if @folio's vmemmap pages have been reallocated and remapped,
>   * negative error code otherwise.
>   */
> -int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
> +int hugetlb_vmemmap_restore(const struct hstate *h, struct folio *folio)

I'd like to rename this to hugetlb_vmemmap_restore_folio to be consistent
with hugetlb_vmemmap_restore_folios.

>  {
> -        return __hugetlb_vmemmap_restore(h, head, 0);
> +        return __hugetlb_vmemmap_restore(h, folio, 0);
>  }
>
>  /**
> @@ -563,7 +564,7 @@ long hugetlb_vmemmap_restore_folios(const struct hstate *h,
>
>          list_for_each_entry_safe(folio, t_folio, folio_list, lru) {
>                  if (folio_test_hugetlb_vmemmap_optimized(folio)) {
> -                        ret = __hugetlb_vmemmap_restore(h, &folio->page,
> +                        ret = __hugetlb_vmemmap_restore(h, folio,
>                                                  VMEMMAP_REMAP_NO_TLB_FLUSH);
>                          if (ret)
>                                  break;
> @@ -641,11 +642,12 @@ static bool vmemmap_should_optimize(const struct hstate *h, const struct page *h
>  }
>
>  static int __hugetlb_vmemmap_optimize(const struct hstate *h,
> -                                        struct page *head,
> +                                        struct folio *folio,
>                                          struct list_head *vmemmap_pages,
>                                          unsigned long flags)
>  {
>          int ret = 0;
> +        struct page *head = &folio->page;
>          unsigned long vmemmap_start = (unsigned long)head, vmemmap_end;
>          unsigned long vmemmap_reuse;
>
> @@ -665,7 +667,7 @@ static int __hugetlb_vmemmap_optimize(const struct hstate *h,
>           * If there is an error during optimization, we will immediately FLUSH
>           * the TLB and clear the flag below.
>           */
> -        SetHPageVmemmapOptimized(head);
> +        folio_set_hugetlb_vmemmap_optimized(folio);
>
>          vmemmap_end = vmemmap_start + hugetlb_vmemmap_size(h);
>          vmemmap_reuse = vmemmap_start;
> @@ -681,27 +683,27 @@ static int __hugetlb_vmemmap_optimize(const struct hstate *h,
>                                  vmemmap_pages, flags);
>          if (ret) {
>                  static_branch_dec(&hugetlb_optimize_vmemmap_key);
> -                ClearHPageVmemmapOptimized(head);
> +                folio_clear_hugetlb_vmemmap_optimized(folio);
>          }
>
>          return ret;
>  }
>
>  /**
> - * hugetlb_vmemmap_optimize - optimize @head page's vmemmap pages.
> + * hugetlb_vmemmap_optimize - optimize @folio's vmemmap pages.
>   * @h:          struct hstate.
> - * @head:       the head page whose vmemmap pages will be optimized.
> + * @folio:      the folio whose vmemmap pages will be optimized.
>   *
> - * This function only tries to optimize @head's vmemmap pages and does not
> + * This function only tries to optimize @folio's vmemmap pages and does not
>   * guarantee that the optimization will succeed after it returns. The caller
> - * can use HPageVmemmapOptimized(@head) to detect if @head's vmemmap pages
> - * have been optimized.
> + * can use folio_test_hugetlb_vmemmap_optimized(@folio) to detect if @folio's
> + * vmemmap pages have been optimized.
>   */
> -void hugetlb_vmemmap_optimize(const struct hstate *h, struct page *head)
> +void hugetlb_vmemmap_optimize(const struct hstate *h, struct folio *folio)

The same applies here. Otherwise, LGTM. Please feel free to add:

Reviewed-by: Muchun Song

in your next version. Thanks.

>  {
>          LIST_HEAD(vmemmap_pages);
>
> -        __hugetlb_vmemmap_optimize(h, head, &vmemmap_pages, 0);
> +        __hugetlb_vmemmap_optimize(h, folio, &vmemmap_pages, 0);
>          free_vmemmap_page_list(&vmemmap_pages);
>  }
>
> @@ -745,7 +747,7 @@ void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_l
>                  flush_tlb_all();
>
>          list_for_each_entry(folio, folio_list, lru) {
> -                int ret = __hugetlb_vmemmap_optimize(h, &folio->page,
> +                int ret = __hugetlb_vmemmap_optimize(h, folio,
>                                                  &vmemmap_pages,
>                                                  VMEMMAP_REMAP_NO_TLB_FLUSH);
>
> @@ -761,7 +763,7 @@ void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_l
>                          flush_tlb_all();
>                          free_vmemmap_page_list(&vmemmap_pages);
>                          INIT_LIST_HEAD(&vmemmap_pages);
> -                        __hugetlb_vmemmap_optimize(h, &folio->page,
> +                        __hugetlb_vmemmap_optimize(h, folio,
>                                                  &vmemmap_pages,
>                                                  VMEMMAP_REMAP_NO_TLB_FLUSH);
>                  }
> diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
> index a0dcf49f46ba..6a06dccd7ffa 100644
> --- a/mm/hugetlb_vmemmap.h
> +++ b/mm/hugetlb_vmemmap.h
> @@ -18,11 +18,11 @@
>  #define HUGETLB_VMEMMAP_RESERVE_PAGES   (HUGETLB_VMEMMAP_RESERVE_SIZE / sizeof(struct page))
>
>  #ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
> -int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head);
> +int hugetlb_vmemmap_restore(const struct hstate *h, struct folio *folio);
>  long hugetlb_vmemmap_restore_folios(const struct hstate *h,
>                                          struct list_head *folio_list,
>                                          struct list_head *non_hvo_folios);
> -void hugetlb_vmemmap_optimize(const struct hstate *h, struct page *head);
> +void hugetlb_vmemmap_optimize(const struct hstate *h, struct folio *folio);
>  void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_list);
>
>  static inline unsigned int hugetlb_vmemmap_size(const struct hstate *h)
> @@ -43,7 +43,7 @@ static inline unsigned int hugetlb_vmemmap_optimizable_size(const struct hstate
>          return size > 0 ? size : 0;
>  }
>  #else
> -static inline int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
> +static inline int hugetlb_vmemmap_restore(const struct hstate *h, struct folio *folio)
>  {
>          return 0;
>  }
> @@ -56,7 +56,7 @@ static long hugetlb_vmemmap_restore_folios(const struct hstate *h,
>          return 0;
>  }
>
> -static inline void hugetlb_vmemmap_optimize(const struct hstate *h, struct page *head)
> +static inline void hugetlb_vmemmap_optimize(const struct hstate *h, struct folio *folio)
>  {
>  }
>
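[Editor's note: for readers skimming the thread, below is a minimal userspace
sketch of the calling convention this patch (and the rename suggested above)
moves toward. This is not kernel code: the struct layouts, the flag bit, and
the *_sketch function names are simplified, hypothetical stand-ins. It only
demonstrates the pattern of passing a struct folio and deriving the head
struct page inside the callee.]

#include <stdio.h>

/* Simplified stand-ins for the kernel types; the real layouts differ. */
struct page { unsigned long flags; };
struct folio { struct page page; };     /* a folio overlays its head page */

#define HPG_VMEMMAP_OPTIMIZED (1UL << 0) /* hypothetical flag bit for this sketch */

/* Models folio_test_hugetlb_vmemmap_optimized() from the patch. */
static int folio_test_vmemmap_optimized_sketch(const struct folio *folio)
{
        return !!(folio->page.flags & HPG_VMEMMAP_OPTIMIZED);
}

/*
 * Models the renamed hugetlb_vmemmap_restore_folio(): the folio is the
 * argument, the head page is recovered internally, and the optimized
 * flag is cleared once the vmemmap has been restored.
 */
static int hugetlb_vmemmap_restore_folio_sketch(struct folio *folio)
{
        struct page *head = &folio->page; /* head page is still needed internally */

        if (!folio_test_vmemmap_optimized_sketch(folio))
                return 0; /* nothing to restore */
        printf("restoring vmemmap for head page %p\n", (void *)head);
        folio->page.flags &= ~HPG_VMEMMAP_OPTIMIZED; /* like folio_clear_hugetlb_vmemmap_optimized() */
        return 0;
}

int main(void)
{
        struct folio f = { .page = { .flags = HPG_VMEMMAP_OPTIMIZED } };

        /* Callers now pass the folio itself, not &folio->page. */
        return hugetlb_vmemmap_restore_folio_sketch(&f);
}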