From: "Matthew Wilcox (Oracle)"
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox
(Oracle)", linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Vlastimil Babka, William Kucharski, Christoph Hellwig
Subject: [PATCH v13 056/137] mm: Add folio_young() and folio_idle()
Date: Mon, 12 Jul 2021 04:05:40 +0100
Message-Id: <20210712030701.4000097-57-willy@infradead.org>
In-Reply-To: <20210712030701.4000097-1-willy@infradead.org>
References: <20210712030701.4000097-1-willy@infradead.org>

Idle page tracking is handled through page_ext on 32-bit architectures.
Add folio equivalents for 32-bit and move all the page compatibility
parts to common code.

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Vlastimil Babka
Reviewed-by: William Kucharski
Reviewed-by: Christoph Hellwig
---
 include/linux/page_idle.h | 99 +++++++++++++++++++--------------------
 1 file changed, 49 insertions(+), 50 deletions(-)

diff --git a/include/linux/page_idle.h b/include/linux/page_idle.h
index 1e894d34bdce..bd957e818558 100644
--- a/include/linux/page_idle.h
+++ b/include/linux/page_idle.h
@@ -8,46 +8,16 @@
 
 #ifdef CONFIG_IDLE_PAGE_TRACKING
 
-#ifdef CONFIG_64BIT
-static inline bool page_is_young(struct page *page)
-{
-	return PageYoung(page);
-}
-
-static inline void set_page_young(struct page *page)
-{
-	SetPageYoung(page);
-}
-
-static inline bool test_and_clear_page_young(struct page *page)
-{
-	return TestClearPageYoung(page);
-}
-
-static inline bool page_is_idle(struct page *page)
-{
-	return PageIdle(page);
-}
-
-static inline void set_page_idle(struct page *page)
-{
-	SetPageIdle(page);
-}
-
-static inline void clear_page_idle(struct page *page)
-{
-	ClearPageIdle(page);
-}
-#else /* !CONFIG_64BIT */
+#ifndef CONFIG_64BIT
 /*
  * If there is not enough space to store Idle and Young bits in page flags, use
  * page ext flags instead.
  */
 extern struct page_ext_operations page_idle_ops;
 
-static inline bool page_is_young(struct page *page)
+static inline bool folio_young(struct folio *folio)
 {
-	struct page_ext *page_ext = lookup_page_ext(page);
+	struct page_ext *page_ext = lookup_page_ext(&folio->page);
 
 	if (unlikely(!page_ext))
 		return false;
 
@@ -55,9 +25,9 @@ static inline bool page_is_young(struct page *page)
 	return test_bit(PAGE_EXT_YOUNG, &page_ext->flags);
 }
 
-static inline void set_page_young(struct page *page)
+static inline void folio_set_young_flag(struct folio *folio)
 {
-	struct page_ext *page_ext = lookup_page_ext(page);
+	struct page_ext *page_ext = lookup_page_ext(&folio->page);
 
 	if (unlikely(!page_ext))
 		return;
 
@@ -65,9 +35,9 @@ static inline void set_page_young(struct page *page)
 	set_bit(PAGE_EXT_YOUNG, &page_ext->flags);
 }
 
-static inline bool test_and_clear_page_young(struct page *page)
+static inline bool folio_test_clear_young_flag(struct folio *folio)
 {
-	struct page_ext *page_ext = lookup_page_ext(page);
+	struct page_ext *page_ext = lookup_page_ext(&folio->page);
 
 	if (unlikely(!page_ext))
 		return false;
 
@@ -75,9 +45,9 @@ static inline bool test_and_clear_page_young(struct page *page)
 	return test_and_clear_bit(PAGE_EXT_YOUNG, &page_ext->flags);
 }
 
-static inline bool page_is_idle(struct page *page)
+static inline bool folio_idle(struct folio *folio)
 {
-	struct page_ext *page_ext = lookup_page_ext(page);
+	struct page_ext *page_ext = lookup_page_ext(&folio->page);
 
 	if (unlikely(!page_ext))
 		return false;
 
@@ -85,9 +55,9 @@ static inline bool page_is_idle(struct page *page)
 	return test_bit(PAGE_EXT_IDLE, &page_ext->flags);
 }
 
-static inline void set_page_idle(struct page *page)
+static inline void folio_set_idle_flag(struct folio *folio)
 {
-	struct page_ext *page_ext = lookup_page_ext(page);
+	struct page_ext *page_ext = lookup_page_ext(&folio->page);
 
 	if (unlikely(!page_ext))
 		return;
 
@@ -95,46 +65,75 @@ static inline void set_page_idle(struct page *page)
 	set_bit(PAGE_EXT_IDLE,
		&page_ext->flags);
 }
 
-static inline void clear_page_idle(struct page *page)
+static inline void folio_clear_idle_flag(struct folio *folio)
 {
-	struct page_ext *page_ext = lookup_page_ext(page);
+	struct page_ext *page_ext = lookup_page_ext(&folio->page);
 
 	if (unlikely(!page_ext))
 		return;
 
 	clear_bit(PAGE_EXT_IDLE, &page_ext->flags);
 }
-#endif /* CONFIG_64BIT */
+#endif /* !CONFIG_64BIT */
 
 #else /* !CONFIG_IDLE_PAGE_TRACKING */
 
-static inline bool page_is_young(struct page *page)
+static inline bool folio_young(struct folio *folio)
 {
 	return false;
 }
 
-static inline void set_page_young(struct page *page)
+static inline void folio_set_young_flag(struct folio *folio)
 {
 }
 
-static inline bool test_and_clear_page_young(struct page *page)
+static inline bool folio_test_clear_young_flag(struct folio *folio)
 {
 	return false;
 }
 
-static inline bool page_is_idle(struct page *page)
+static inline bool folio_idle(struct folio *folio)
 {
 	return false;
 }
 
-static inline void set_page_idle(struct page *page)
+static inline void folio_set_idle_flag(struct folio *folio)
 {
 }
 
-static inline void clear_page_idle(struct page *page)
+static inline void folio_clear_idle_flag(struct folio *folio)
 {
 }
 
 #endif /* CONFIG_IDLE_PAGE_TRACKING */
 
+static inline bool page_is_young(struct page *page)
+{
+	return folio_young(page_folio(page));
+}
+
+static inline void set_page_young(struct page *page)
+{
+	folio_set_young_flag(page_folio(page));
+}
+
+static inline bool test_and_clear_page_young(struct page *page)
+{
+	return folio_test_clear_young_flag(page_folio(page));
+}
+
+static inline bool page_is_idle(struct page *page)
+{
+	return folio_idle(page_folio(page));
+}
+
+static inline void set_page_idle(struct page *page)
+{
+	folio_set_idle_flag(page_folio(page));
+}
+
+static inline void clear_page_idle(struct page *page)
+{
+	folio_clear_idle_flag(page_folio(page));
+}
 #endif /* _LINUX_MM_PAGE_IDLE_H */
-- 
2.30.2
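
[Editor's note: the page_ext path in this patch stores the Young and Idle bits in a separate flags word and manipulates them with the kernel's bitops. The userspace sketch below models only that bit-flag logic; `struct page_ext`, the bitop helpers, and `model_test_clear_young` here are stand-ins written for illustration, not the kernel implementations, which are atomic and live in the page_ext machinery.]

/*
 * Compile with: cc -o model model.c && ./model
 */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for the kernel's PAGE_EXT_YOUNG / PAGE_EXT_IDLE bit numbers. */
#define PAGE_EXT_YOUNG 0
#define PAGE_EXT_IDLE  1

/* Minimal model of struct page_ext: just the flags word this patch touches. */
struct page_ext {
	unsigned long flags;
};

/* Non-atomic userspace stand-ins for the kernel bitops. */
static bool model_test_bit(int nr, const unsigned long *addr)
{
	return (*addr >> nr) & 1UL;
}

static void model_set_bit(int nr, unsigned long *addr)
{
	*addr |= 1UL << nr;
}

static void model_clear_bit(int nr, unsigned long *addr)
{
	*addr &= ~(1UL << nr);
}

/* Mirrors folio_test_clear_young_flag(): read Young, then clear it. */
static bool model_test_clear_young(struct page_ext *ext)
{
	bool old = model_test_bit(PAGE_EXT_YOUNG, &ext->flags);

	model_clear_bit(PAGE_EXT_YOUNG, &ext->flags);
	return old;
}

int main(void)
{
	struct page_ext ext = { .flags = 0 };

	/* An access marks the page young, as folio_set_young_flag() would. */
	model_set_bit(PAGE_EXT_YOUNG, &ext.flags);
	assert(model_test_bit(PAGE_EXT_YOUNG, &ext.flags));

	/* A scan consumes the Young bit exactly once. */
	assert(model_test_clear_young(&ext) == true);
	assert(model_test_clear_young(&ext) == false);

	/* Mark idle, then model an access clearing the Idle bit again. */
	model_set_bit(PAGE_EXT_IDLE, &ext.flags);
	model_clear_bit(PAGE_EXT_IDLE, &ext.flags);
	assert(!model_test_bit(PAGE_EXT_IDLE, &ext.flags));

	printf("ok\n");
	return 0;
}

With this patch applied, the old page_* entry points become thin wrappers that convert via page_folio() and call the folio versions, so both 32-bit (page_ext) and 64-bit (page flags) callers go through one compatibility layer.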