From: Kefeng Wang
To: Andrew Morton
CC: Mike Rapoport, Matthew Wilcox, David Hildenbrand, Zi Yan, Kefeng Wang
Subject: [PATCH -next 9/9] mm: convert page_cpupid_reset_last() to folio_cpupid_reset_last()
Date: Tue, 26 Sep 2023 08:52:54 +0800
Message-ID: <20230926005254.2861577-10-wangkefeng.wang@huawei.com>
In-Reply-To: <20230926005254.2861577-1-wangkefeng.wang@huawei.com>
References: <20230926005254.2861577-1-wangkefeng.wang@huawei.com>

There is no need to fill in the default cpupid value for every struct page at initialization time: cpupid is only used for NUMA balancing, the pages involved in NUMA balancing all come from the buddy allocator, and free_pages_prepare() already calls page_cpupid_reset_last() to initialize it. So drop the page_cpupid_reset_last() call in __init_single_page(), then make page_cpupid_reset_last() take a folio and rename it to folio_cpupid_reset_last().
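Not part of the patch, just a sketch for reviewers of what the conversion means at a call site: a caller that still holds a struct page would first resolve it to its folio via page_folio() and then use the renamed helper. folio_cpupid_reset_last() is the helper introduced below; the surrounding function here is hypothetical, for illustration only.

	/* Illustration only; example_reset_last_cpupid() is a made-up caller. */
	#include <linux/mm.h>

	static void example_reset_last_cpupid(struct page *page)
	{
		/* Resolve the page to its containing folio, then reset cpupid. */
		struct folio *folio = page_folio(page);

		folio_cpupid_reset_last(folio);
	}
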
Signed-off-by: Kefeng Wang
---
 include/linux/mm.h | 10 +++++-----
 mm/mm_init.c       |  1 -
 mm/page_alloc.c    |  2 +-
 3 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a6f4b55bf469..ca66a05eb2ed 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1692,9 +1692,9 @@ static inline int folio_cpupid_last(struct folio *folio)
 {
 	return folio->_last_cpupid;
 }
-static inline void page_cpupid_reset_last(struct page *page)
+static inline void folio_cpupid_reset_last(struct folio *folio)
 {
-	page->_last_cpupid = -1 & LAST_CPUPID_MASK;
+	folio->_last_cpupid = -1 & LAST_CPUPID_MASK;
 }
 #else
 static inline int folio_cpupid_last(struct folio *folio)
@@ -1704,9 +1704,9 @@ static inline int folio_cpupid_last(struct folio *folio)
 
 extern int folio_cpupid_xchg_last(struct folio *folio, int cpupid);
 
-static inline void page_cpupid_reset_last(struct page *page)
+static inline void folio_cpupid_reset_last(struct folio *folio)
 {
-	page->flags |= LAST_CPUPID_MASK << LAST_CPUPID_PGSHIFT;
+	folio->flags |= LAST_CPUPID_MASK << LAST_CPUPID_PGSHIFT;
 }
 #endif /* LAST_CPUPID_NOT_IN_PAGE_FLAGS */
 
@@ -1769,7 +1769,7 @@ static inline bool cpupid_pid_unset(int cpupid)
 	return true;
 }
 
-static inline void page_cpupid_reset_last(struct page *page)
+static inline void folio_cpupid_reset_last(struct folio *folio)
 {
 }
 
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 06a72c223bce..74c0dc27fbf1 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -563,7 +563,6 @@ void __meminit __init_single_page(struct page *page, unsigned long pfn,
 	set_page_links(page, zone, nid, pfn);
 	init_page_count(page);
 	page_mapcount_reset(page);
-	page_cpupid_reset_last(page);
 	page_kasan_tag_reset(page);
 
 	INIT_LIST_HEAD(&page->lru);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a888b9d57751..852fc78ddb34 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1126,7 +1126,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 		return false;
 	}
 
-	page_cpupid_reset_last(page);
+	folio_cpupid_reset_last(folio);
 	page->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
 	reset_page_owner(page, order);
 	page_table_check_free(page, order);
-- 
2.27.0