From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
 Vlastimil Babka, William Kucharski, Christoph Hellwig
Subject: [PATCH v14 056/138] mm: Add folio_young and folio_idle
Date: Thu, 15 Jul 2021 04:35:42 +0100
Message-Id: <20210715033704.692967-57-willy@infradead.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

Idle page tracking is handled through page_ext on 32-bit architectures.
Add folio equivalents for 32-bit and move all the page compatibility
parts to common code.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Vlastimil Babka
Reviewed-by: William Kucharski
Reviewed-by: Christoph Hellwig
---
 include/linux/page_idle.h | 99 +++++++++++++++++++--------------------
 1 file changed, 49 insertions(+), 50 deletions(-)

diff --git a/include/linux/page_idle.h b/include/linux/page_idle.h
index 1e894d34bdce..1bcb1365b1d0 100644
--- a/include/linux/page_idle.h
+++ b/include/linux/page_idle.h
@@ -8,46 +8,16 @@
 
 #ifdef CONFIG_IDLE_PAGE_TRACKING
 
-#ifdef CONFIG_64BIT
-static inline bool page_is_young(struct page *page)
-{
-	return PageYoung(page);
-}
-
-static inline void set_page_young(struct page *page)
-{
-	SetPageYoung(page);
-}
-
-static inline bool test_and_clear_page_young(struct page *page)
-{
-	return TestClearPageYoung(page);
-}
-
-static inline bool page_is_idle(struct page *page)
-{
-	return PageIdle(page);
-}
-
-static inline void set_page_idle(struct page *page)
-{
-	SetPageIdle(page);
-}
-
-static inline void clear_page_idle(struct page *page)
-{
-	ClearPageIdle(page);
-}
-#else /* !CONFIG_64BIT */
+#ifndef CONFIG_64BIT
 /*
  * If there is not enough space to store Idle and Young bits in page flags, use
  * page ext flags instead.
 */
 extern struct page_ext_operations page_idle_ops;
 
-static inline bool page_is_young(struct page *page)
+static inline bool folio_test_young(struct folio *folio)
 {
-	struct page_ext *page_ext = lookup_page_ext(page);
+	struct page_ext *page_ext = lookup_page_ext(&folio->page);
 
 	if (unlikely(!page_ext))
 		return false;
@@ -55,9 +25,9 @@ static inline bool page_is_young(struct page *page)
 	return test_bit(PAGE_EXT_YOUNG, &page_ext->flags);
 }
 
-static inline void set_page_young(struct page *page)
+static inline void folio_set_young(struct folio *folio)
 {
-	struct page_ext *page_ext = lookup_page_ext(page);
+	struct page_ext *page_ext = lookup_page_ext(&folio->page);
 
 	if (unlikely(!page_ext))
 		return;
@@ -65,9 +35,9 @@ static inline void set_page_young(struct page *page)
 	set_bit(PAGE_EXT_YOUNG, &page_ext->flags);
 }
 
-static inline bool test_and_clear_page_young(struct page *page)
+static inline bool folio_test_clear_young(struct folio *folio)
 {
-	struct page_ext *page_ext = lookup_page_ext(page);
+	struct page_ext *page_ext = lookup_page_ext(&folio->page);
 
 	if (unlikely(!page_ext))
 		return false;
@@ -75,9 +45,9 @@ static inline bool test_and_clear_page_young(struct page *page)
 	return test_and_clear_bit(PAGE_EXT_YOUNG, &page_ext->flags);
 }
 
-static inline bool page_is_idle(struct page *page)
+static inline bool folio_test_idle(struct folio *folio)
 {
-	struct page_ext *page_ext = lookup_page_ext(page);
+	struct page_ext *page_ext = lookup_page_ext(&folio->page);
 
 	if (unlikely(!page_ext))
 		return false;
@@ -85,9 +55,9 @@ static inline bool page_is_idle(struct page *page)
 	return test_bit(PAGE_EXT_IDLE, &page_ext->flags);
 }
 
-static inline void set_page_idle(struct page *page)
+static inline void folio_set_idle(struct folio *folio)
 {
-	struct page_ext *page_ext = lookup_page_ext(page);
+	struct page_ext *page_ext = lookup_page_ext(&folio->page);
 
 	if (unlikely(!page_ext))
 		return;
@@ -95,46 +65,75 @@ static inline void set_page_idle(struct page *page)
 	set_bit(PAGE_EXT_IDLE, &page_ext->flags);
 }
 
-static inline void clear_page_idle(struct page *page)
+static inline void folio_clear_idle(struct folio *folio)
 {
-	struct page_ext *page_ext = lookup_page_ext(page);
+	struct page_ext *page_ext = lookup_page_ext(&folio->page);
 
 	if (unlikely(!page_ext))
 		return;
 
 	clear_bit(PAGE_EXT_IDLE, &page_ext->flags);
 }
-#endif /* CONFIG_64BIT */
+#endif /* !CONFIG_64BIT */
 
 #else /* !CONFIG_IDLE_PAGE_TRACKING */
 
-static inline bool page_is_young(struct page *page)
+static inline bool folio_test_young(struct folio *folio)
 {
 	return false;
 }
 
-static inline void set_page_young(struct page *page)
+static inline void folio_set_young(struct folio *folio)
 {
 }
 
-static inline bool test_and_clear_page_young(struct page *page)
+static inline bool folio_test_clear_young(struct folio *folio)
 {
 	return false;
 }
 
-static inline bool page_is_idle(struct page *page)
+static inline bool folio_test_idle(struct folio *folio)
 {
 	return false;
 }
 
-static inline void set_page_idle(struct page *page)
+static inline void folio_set_idle(struct folio *folio)
 {
 }
 
-static inline void clear_page_idle(struct page *page)
+static inline void folio_clear_idle(struct folio *folio)
 {
 }
 
 #endif /* CONFIG_IDLE_PAGE_TRACKING */
 
+static inline bool page_is_young(struct page *page)
+{
+	return folio_test_young(page_folio(page));
+}
+
+static inline void set_page_young(struct page *page)
+{
+	folio_set_young(page_folio(page));
+}
+
+static inline bool test_and_clear_page_young(struct page *page)
+{
+	return folio_test_clear_young(page_folio(page));
+}
+
+static inline bool page_is_idle(struct page *page)
+{
+	return folio_test_idle(page_folio(page));
+}
+
+static inline void set_page_idle(struct page *page)
+{
+	folio_set_idle(page_folio(page));
+}
+
+static inline void clear_page_idle(struct page *page)
+{
+	folio_clear_idle(page_folio(page));
+}
 #endif /* _LINUX_MM_PAGE_IDLE_H */
-- 
2.30.2
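
As a rough illustration of the calling convention this patch establishes
(not part of the patch itself; scan_one_page() and its calling context
are hypothetical), new code operates on the folio directly, while
unconverted callers keep their old entry points:

	#include <linux/page_idle.h>

	/* Hypothetical helper: mark one page accessed during a scan. */
	static void scan_one_page(struct page *page)
	{
		struct folio *folio = page_folio(page);	/* head folio, even for a tail page */

		if (folio_test_idle(folio))	/* had userspace marked it idle? */
			folio_clear_idle(folio);
		folio_set_young(folio);		/* record the access */
	}

	/* Legacy callers still compile: after this patch the page_*()
	 * names are one-line wrappers that forward through page_folio()
	 * to the folio functions above. */
	static void scan_one_page_legacy(struct page *page)
	{
		if (page_is_idle(page))
			clear_page_idle(page);
		set_page_young(page);
	}

Because all six compatibility wrappers now sit in common code outside the
#ifdef blocks, the CONFIG_64BIT and CONFIG_IDLE_PAGE_TRACKING branches
only have to provide the folio implementations.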