2022-12-07 00:59:28

by Vishal Moola

Subject: [PATCH 0/3] Convert deactivate_page() to deactivate_folio()

deactivate_page() has already been converted to use folios internally.
This patch series converts its callers to use folios as well, so that
deactivate_page() can take a folio argument instead of a page.
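
The caller-side change is mechanical: as the diffs show, each caller
already holds a folio and simply stops digging the page back out of it,
roughly:

	/* before: caller extracts the page from the folio it already has */
	deactivate_page(&folio->page);

	/* after: caller hands the folio straight to the helper */
	deactivate_folio(folio);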

Vishal Moola (Oracle) (3):
madvise: Convert madvise_cold_or_pageout_pte_range() to use folios
damon: Convert damon_pa_mark_accessed_or_deactivate() to use folios
swap: Convert deactivate_page() to deactivate_folio()

 include/linux/swap.h |  2 +-
 mm/damon/paddr.c     | 11 ++++--
 mm/madvise.c         | 88 ++++++++++++++++++++++----------------------
 mm/swap.c            | 14 +++----
 4 files changed, 59 insertions(+), 56 deletions(-)

--
2.38.1


2022-12-07 01:14:37

by Vishal Moola

Subject: [PATCH 3/3] swap: Convert deactivate_page() to deactivate_folio()

deactivate_page() has already been converted to use folios internally;
this change converts it to take a folio argument directly instead of
calling page_folio().

Signed-off-by: Vishal Moola (Oracle) <[email protected]>
---
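(Illustrative aside, not part of the commit message: with the helper now
taking a folio, any caller that still starts from a struct page can do
the lookup itself before calling it, e.g.

	struct folio *folio = page_folio(page);

	deactivate_folio(folio);

which is the same page_folio() lookup the old deactivate_page() performed
internally.)
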
 include/linux/swap.h |  2 +-
 mm/damon/paddr.c     |  2 +-
 mm/madvise.c         |  4 ++--
 mm/swap.c            | 14 ++++++--------
 4 files changed, 10 insertions(+), 12 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index a18cf4b7c724..f404790222c0 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -409,7 +409,7 @@ extern void lru_add_drain(void);
extern void lru_add_drain_cpu(int cpu);
extern void lru_add_drain_cpu_zone(struct zone *zone);
extern void lru_add_drain_all(void);
-extern void deactivate_page(struct page *page);
+extern void deactivate_folio(struct folio *folio);
extern void mark_page_lazyfree(struct page *page);
extern void swap_setup(void);

diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index 73548bc82297..828961fc7899 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -247,7 +247,7 @@ static inline unsigned long damon_pa_mark_accessed_or_deactivate(
if (mark_accessed)
folio_mark_accessed(folio);
else
- deactivate_page(&folio->page);
+ deactivate_folio(folio);
folio_put(folio);
applied += folio_nr_pages(folio);
}
diff --git a/mm/madvise.c b/mm/madvise.c
index 59bfc6c9c548..afe957994317 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -397,7 +397,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
list_add(&folio->lru, &folio_list);
}
} else
- deactivate_page(&folio->page);
+ deactivate_folio(folio);
huge_unlock:
spin_unlock(ptl);
if (pageout)
@@ -487,7 +487,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
list_add(&folio->lru, &folio_list);
}
} else
- deactivate_page(&folio->page);
+ deactivate_folio(folio);
}

arch_leave_lazy_mmu_mode();
diff --git a/mm/swap.c b/mm/swap.c
index 955930f41d20..9982469e8da8 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -720,17 +720,15 @@ void deactivate_file_folio(struct folio *folio)
}

/*
- * deactivate_page - deactivate a page
- * @page: page to deactivate
+ * deactivate_folio - deactivate a folio
+ * @folio: folio to deactivate
*
- * deactivate_page() moves @page to the inactive list if @page was on the active
- * list and was not an unevictable page. This is done to accelerate the reclaim
- * of @page.
+ * deactivate_folio() moves @folio to the inactive list if @folio was on the
+ * active list and was not an unevictable page. This is done to accelerate
+ * the reclaim of @folio.
*/
-void deactivate_page(struct page *page)
+void deactivate_folio(struct folio *folio)
{
- struct folio *folio = page_folio(page);
-
if (folio_test_lru(folio) && !folio_test_unevictable(folio) &&
(folio_test_active(folio) || lru_gen_enabled())) {
struct folio_batch *fbatch;
--
2.38.1

2022-12-07 16:37:10

by Matthew Wilcox

Subject: Re: [PATCH 3/3] swap: Convert deactivate_page() to deactivate_folio()

On Tue, Dec 06, 2022 at 04:21:58PM -0800, Vishal Moola (Oracle) wrote:
> /*
> - * deactivate_page - deactivate a page
> - * @page: page to deactivate
> + * deactivate_folio - deactivate a folio
> + * @folio: folio to deactivate
> *
> - * deactivate_page() moves @page to the inactive list if @page was on the active
> - * list and was not an unevictable page. This is done to accelerate the reclaim
> - * of @page.
> + * deactivate_folio() moves @folio to the inactive list if @folio was on the
> + * active list and was not an unevictable page. This is done to accelerate

... and was not unevictable. This ...

> + * the reclaim of @folio.
> */
> -void deactivate_page(struct page *page)
> +void deactivate_folio(struct folio *folio)
> {
> - struct folio *folio = page_folio(page);
> -
> if (folio_test_lru(folio) && !folio_test_unevictable(folio) &&
> (folio_test_active(folio) || lru_gen_enabled())) {
> struct folio_batch *fbatch;
> --
> 2.38.1
>
>
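
For reference, with Matthew's suggested wording folded in, the comment
would read (a sketch of the follow-up, not a posted revision):

	/*
	 * deactivate_folio - deactivate a folio
	 * @folio: folio to deactivate
	 *
	 * deactivate_folio() moves @folio to the inactive list if @folio was
	 * on the active list and was not unevictable. This is done to
	 * accelerate the reclaim of @folio.
	 */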