From: Muchun Song <songmuchun@bytedance.com>
To: mike.kravetz@oracle.com, muchun.song@linux.dev, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH 4/4] mm: hugetlb_vmemmap: convert page to folio
Date: Mon, 27 Nov 2023 16:46:45 +0800
Message-Id: <20231127084645.27017-5-songmuchun@bytedance.com>
X-Mailer: git-send-email 2.39.3 (Apple Git-145)
In-Reply-To: <20231127084645.27017-1-songmuchun@bytedance.com>
References: <20231127084645.27017-1-songmuchun@bytedance.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

There are still some places that have not been converted to folios; this
patch converts all of them. It also does some trivial cleanup to fix code
style problems.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
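For reviewers new to the folio work: every hunk below follows the same
idiom, replacing head-page based calls such as PageHuge(head) and
HPageVmemmapOptimized(head) with their folio counterparts and deriving the
vmemmap start address from &folio->page. Below is a minimal, self-contained
userspace sketch of that wrapper pattern; the toy page/folio types and the
folio_test_optimized() helper are illustrative stand-ins, not the kernel's
real definitions.

/*
 * Toy model of the page -> folio conversion idiom (userspace sketch, not
 * kernel code; all types, flags and helpers here are illustrative).
 */
#include <stdbool.h>
#include <stdio.h>

#define PG_OPTIMIZED (1UL << 0)	/* stand-in for a real page flag */

struct page {
	unsigned long flags;
};

/* A folio embeds its head page, as the kernel's struct folio does. */
struct folio {
	struct page page;
};

/* Old style: test a flag through a head-page pointer. */
static bool page_test_optimized(const struct page *head)
{
	return head->flags & PG_OPTIMIZED;
}

/* New style: the folio helper reads the same head-page flag. */
static bool folio_test_optimized(const struct folio *folio)
{
	return folio->page.flags & PG_OPTIMIZED;
}

int main(void)
{
	struct folio folio = { .page = { .flags = PG_OPTIMIZED } };

	/* Both forms observe the same state; the folio form is type-safe,
	 * since it cannot be handed a tail page by mistake. */
	printf("page helper:  %d\n", page_test_optimized(&folio.page));
	printf("folio helper: %d\n", folio_test_optimized(&folio));
	return 0;
}

That type safety is the point of the conversion: once a function takes a
struct folio *, callers can no longer pass a tail page, which is why the
open-coded VM_WARN_ON_ONCE(!PageHuge(head)) checks can become the
folio-aware VM_WARN_ON_ONCE_FOLIO().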
 mm/hugetlb_vmemmap.c | 51 ++++++++++++++++++++++----------------------
 1 file changed, 25 insertions(+), 26 deletions(-)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index ce920ca6c90ee..54f388aa361fb 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -280,7 +280,7 @@ static void vmemmap_restore_pte(pte_t *pte, unsigned long addr,
  * Return: %0 on success, negative error code otherwise.
  */
 static int vmemmap_remap_split(unsigned long start, unsigned long end,
-				unsigned long reuse)
+			       unsigned long reuse)
 {
 	int ret;
 	struct vmemmap_remap_walk walk = {
@@ -447,14 +447,14 @@ EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
 static bool vmemmap_optimize_enabled =
 	IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON);
 core_param(hugetlb_free_vmemmap, vmemmap_optimize_enabled, bool, 0);
 
-static int __hugetlb_vmemmap_restore_folio(const struct hstate *h, struct folio *folio, unsigned long flags)
+static int __hugetlb_vmemmap_restore_folio(const struct hstate *h,
+					   struct folio *folio, unsigned long flags)
 {
 	int ret;
-	struct page *head = &folio->page;
-	unsigned long vmemmap_start = (unsigned long)head, vmemmap_end;
+	unsigned long vmemmap_start = (unsigned long)&folio->page, vmemmap_end;
 	unsigned long vmemmap_reuse;
 
-	VM_WARN_ON_ONCE(!PageHuge(head));
+	VM_WARN_ON_ONCE_FOLIO(!folio_test_hugetlb(folio), folio);
 	if (!folio_test_hugetlb_vmemmap_optimized(folio))
 		return 0;
@@ -517,7 +517,7 @@ long hugetlb_vmemmap_restore_folios(const struct hstate *h,
 	list_for_each_entry_safe(folio, t_folio, folio_list, lru) {
 		if (folio_test_hugetlb_vmemmap_optimized(folio)) {
 			ret = __hugetlb_vmemmap_restore_folio(h, folio,
-							VMEMMAP_REMAP_NO_TLB_FLUSH);
+							      VMEMMAP_REMAP_NO_TLB_FLUSH);
 			if (ret)
 				break;
 			restored++;
@@ -535,9 +535,9 @@ long hugetlb_vmemmap_restore_folios(const struct hstate *h,
 }
 
 /* Return true iff a HugeTLB whose vmemmap should and can be optimized. */
-static bool vmemmap_should_optimize(const struct hstate *h, const struct page *head)
+static bool vmemmap_should_optimize_folio(const struct hstate *h, struct folio *folio)
 {
-	if (HPageVmemmapOptimized((struct page *)head))
+	if (folio_test_hugetlb_vmemmap_optimized(folio))
 		return false;
 
 	if (!READ_ONCE(vmemmap_optimize_enabled))
@@ -550,17 +550,16 @@ static bool vmemmap_should_optimize(const struct hstate *h, const struct page *h
 }
 
 static int __hugetlb_vmemmap_optimize_folio(const struct hstate *h,
-					     struct folio *folio,
-					     struct list_head *vmemmap_pages,
-					     unsigned long flags)
+					    struct folio *folio,
+					    struct list_head *vmemmap_pages,
+					    unsigned long flags)
 {
 	int ret = 0;
-	struct page *head = &folio->page;
-	unsigned long vmemmap_start = (unsigned long)head, vmemmap_end;
+	unsigned long vmemmap_start = (unsigned long)&folio->page, vmemmap_end;
 	unsigned long vmemmap_reuse;
 
-	VM_WARN_ON_ONCE(!PageHuge(head));
-	if (!vmemmap_should_optimize(h, head))
+	VM_WARN_ON_ONCE_FOLIO(!folio_test_hugetlb(folio), folio);
+	if (!vmemmap_should_optimize_folio(h, folio))
 		return ret;
 
 	static_branch_inc(&hugetlb_optimize_vmemmap_key);
@@ -588,7 +587,7 @@ static int __hugetlb_vmemmap_optimize_folio(const struct hstate *h,
 	 * the caller.
 	 */
 	ret = vmemmap_remap_free(vmemmap_start, vmemmap_end, vmemmap_reuse,
-				vmemmap_pages, flags);
+				 vmemmap_pages, flags);
 	if (ret) {
 		static_branch_dec(&hugetlb_optimize_vmemmap_key);
 		folio_clear_hugetlb_vmemmap_optimized(folio);
@@ -615,12 +614,12 @@ void hugetlb_vmemmap_optimize_folio(const struct hstate *h, struct folio *folio)
 	free_vmemmap_page_list(&vmemmap_pages);
 }
 
-static int hugetlb_vmemmap_split(const struct hstate *h, struct page *head)
+static int hugetlb_vmemmap_split_folio(const struct hstate *h, struct folio *folio)
 {
-	unsigned long vmemmap_start = (unsigned long)head, vmemmap_end;
+	unsigned long vmemmap_start = (unsigned long)&folio->page, vmemmap_end;
 	unsigned long vmemmap_reuse;
 
-	if (!vmemmap_should_optimize(h, head))
+	if (!vmemmap_should_optimize_folio(h, folio))
 		return 0;
 
 	vmemmap_end = vmemmap_start + hugetlb_vmemmap_size(h);
@@ -640,7 +639,7 @@ void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_l
 	LIST_HEAD(vmemmap_pages);
 
 	list_for_each_entry(folio, folio_list, lru) {
-		int ret = hugetlb_vmemmap_split(h, &folio->page);
+		int ret = hugetlb_vmemmap_split_folio(h, folio);
 
 		/*
 		 * Spliting the PMD requires allocating a page, thus lets fail
@@ -655,9 +654,10 @@ void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_l
 		flush_tlb_all();
 
 	list_for_each_entry(folio, folio_list, lru) {
-		int ret = __hugetlb_vmemmap_optimize_folio(h, folio,
-							&vmemmap_pages,
-							VMEMMAP_REMAP_NO_TLB_FLUSH);
+		int ret;
+
+		ret = __hugetlb_vmemmap_optimize_folio(h, folio, &vmemmap_pages,
+						       VMEMMAP_REMAP_NO_TLB_FLUSH);
 
 		/*
 		 * Pages to be freed may have been accumulated. If we
@@ -671,9 +671,8 @@ void hugetlb_vmemmap_optimize_folios(struct hstate *h, struct list_head *folio_l
 			flush_tlb_all();
 			free_vmemmap_page_list(&vmemmap_pages);
 			INIT_LIST_HEAD(&vmemmap_pages);
-			__hugetlb_vmemmap_optimize_folio(h, folio,
-							&vmemmap_pages,
-							VMEMMAP_REMAP_NO_TLB_FLUSH);
+			__hugetlb_vmemmap_optimize_folio(h, folio, &vmemmap_pages,
+							 VMEMMAP_REMAP_NO_TLB_FLUSH);
 		}
 	}
 
-- 
2.20.1