Currently the page isolation functions do not return a boolean to indicate
success; instead they return a negative error when they fail to isolate
a page. So the code below, used in most places, looks like a boolean
success/failure check, which can confuse people about whether the
isolation succeeded.
if (folio_isolate_lru(folio))
continue;
Moreover, the page isolation functions only return 0 or -EBUSY, and apart
from a few callers most users do not care about the specific error value,
so we can convert all page isolation functions to return a boolean
value, which removes the confusion and makes the code clearer.
No functional changes intended in this patch series.
Changes from v1:
- Convert all isolation functions to return bool.
Baolin Wang (4):
mm: change to return bool for folio_isolate_lru()
mm: change to return bool for isolate_lru_page()
mm: hugetlb: change to return bool for isolate_hugetlb()
mm: change to return bool for isolate_movable_page()
include/linux/hugetlb.h | 6 +++---
include/linux/migrate.h | 6 +++---
mm/compaction.c | 2 +-
mm/damon/paddr.c | 2 +-
mm/folio-compat.c | 4 ++--
mm/gup.c | 2 +-
mm/hugetlb.c | 12 ++++++++----
mm/internal.h | 4 ++--
mm/khugepaged.c | 4 ++--
mm/madvise.c | 4 ++--
mm/memcontrol.c | 4 ++--
mm/memory-failure.c | 10 +++++-----
mm/memory_hotplug.c | 2 +-
mm/mempolicy.c | 4 ++--
mm/migrate.c | 17 ++++++++++-------
mm/migrate_device.c | 2 +-
mm/vmscan.c | 10 +++++-----
17 files changed, 51 insertions(+), 44 deletions(-)
--
2.27.0
Currently folio_isolate_lru() does not return a boolean value to indicate
isolation success, yet the code below checking its return value can make
people think it is a boolean success/failure check, which makes mistakes
easy (see the fix patch[1]).
if (folio_isolate_lru(folio))
continue;
Thus it is better to check the negative error value explicitly returned by
folio_isolate_lru(), which makes the code clearer per Linus's suggestion[2].
Moreover, Matthew suggested converting the isolation functions to return
a boolean[3], since most users do not care about the specific error value,
which also removes the confusion around checking the return value.
So this patch converts folio_isolate_lru() to return a boolean value:
'true' indicates the folio isolation succeeded, and 'false' indicates a
failure to isolate. It also updates all callers' logic for checking the
isolation state.
No functional changes intended.
[1] https://lore.kernel.org/all/[email protected]/T/#u
[2] https://lore.kernel.org/all/CAHk-=wiBrY+O-4=2mrbVyxR+hOqfdJ=Do6xoucfJ9_5az01L4Q@mail.gmail.com/
[3] https://lore.kernel.org/all/[email protected]/
Signed-off-by: Baolin Wang <[email protected]>
---
mm/damon/paddr.c | 2 +-
mm/folio-compat.c | 8 +++++++-
mm/gup.c | 2 +-
mm/internal.h | 2 +-
mm/khugepaged.c | 2 +-
mm/madvise.c | 4 ++--
mm/mempolicy.c | 2 +-
mm/vmscan.c | 10 +++++-----
8 files changed, 19 insertions(+), 13 deletions(-)
diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index b4df9b9bcc0a..607bb69e526c 100644
--- a/mm/damon/paddr.c
+++ b/mm/damon/paddr.c
@@ -246,7 +246,7 @@ static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s)
folio_clear_referenced(folio);
folio_test_clear_young(folio);
- if (folio_isolate_lru(folio)) {
+ if (!folio_isolate_lru(folio)) {
folio_put(folio);
continue;
}
diff --git a/mm/folio-compat.c b/mm/folio-compat.c
index 18c48b557926..540373cf904e 100644
--- a/mm/folio-compat.c
+++ b/mm/folio-compat.c
@@ -115,9 +115,15 @@ EXPORT_SYMBOL(grab_cache_page_write_begin);
int isolate_lru_page(struct page *page)
{
+ bool ret;
+
if (WARN_RATELIMIT(PageTail(page), "trying to isolate tail page"))
return -EBUSY;
- return folio_isolate_lru((struct folio *)page);
+ ret = folio_isolate_lru((struct folio *)page);
+ if (ret)
+ return 0;
+
+ return -EBUSY;
}
void putback_lru_page(struct page *page)
diff --git a/mm/gup.c b/mm/gup.c
index b0885f70579c..eab18ba045db 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1939,7 +1939,7 @@ static unsigned long collect_longterm_unpinnable_pages(
drain_allow = false;
}
- if (folio_isolate_lru(folio))
+ if (!folio_isolate_lru(folio))
continue;
list_add_tail(&folio->lru, movable_page_list);
diff --git a/mm/internal.h b/mm/internal.h
index dfb37e94e140..8645e8496537 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -188,7 +188,7 @@ pgprot_t __init early_memremap_pgprot_adjust(resource_size_t phys_addr,
* in mm/vmscan.c:
*/
int isolate_lru_page(struct page *page);
-int folio_isolate_lru(struct folio *folio);
+bool folio_isolate_lru(struct folio *folio);
void putback_lru_page(struct page *page);
void folio_putback_lru(struct folio *folio);
extern void reclaim_throttle(pg_data_t *pgdat, enum vmscan_throttle_state reason);
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index a5d32231bfad..cee659cfa3c1 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2047,7 +2047,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
goto out_unlock;
}
- if (folio_isolate_lru(folio)) {
+ if (!folio_isolate_lru(folio)) {
result = SCAN_DEL_PAGE_LRU;
goto out_unlock;
}
diff --git a/mm/madvise.c b/mm/madvise.c
index 5a5a687d03c2..c2202f51e9dd 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -406,7 +406,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
folio_clear_referenced(folio);
folio_test_clear_young(folio);
if (pageout) {
- if (!folio_isolate_lru(folio)) {
+ if (folio_isolate_lru(folio)) {
if (folio_test_unevictable(folio))
folio_putback_lru(folio);
else
@@ -500,7 +500,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
folio_clear_referenced(folio);
folio_test_clear_young(folio);
if (pageout) {
- if (!folio_isolate_lru(folio)) {
+ if (folio_isolate_lru(folio)) {
if (folio_test_unevictable(folio))
folio_putback_lru(folio);
else
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 0919c7a719d4..2751bc3310fd 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1033,7 +1033,7 @@ static int migrate_folio_add(struct folio *folio, struct list_head *foliolist,
* expensive, so check the estimated mapcount of the folio instead.
*/
if ((flags & MPOL_MF_MOVE_ALL) || folio_estimated_sharers(folio) == 1) {
- if (!folio_isolate_lru(folio)) {
+ if (folio_isolate_lru(folio)) {
list_add_tail(&folio->lru, foliolist);
node_stat_mod_folio(folio,
NR_ISOLATED_ANON + folio_is_file_lru(folio),
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 34535bbd4fe9..7658b40df947 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2337,12 +2337,12 @@ static unsigned long isolate_lru_folios(unsigned long nr_to_scan,
* (2) The lru_lock must not be held.
* (3) Interrupts must be enabled.
*
- * Return: 0 if the folio was removed from an LRU list.
- * -EBUSY if the folio was not on an LRU list.
+ * Return: true if the folio was removed from an LRU list.
+ * false if the folio was not on an LRU list.
*/
-int folio_isolate_lru(struct folio *folio)
+bool folio_isolate_lru(struct folio *folio)
{
- int ret = -EBUSY;
+ bool ret = false;
VM_BUG_ON_FOLIO(!folio_ref_count(folio), folio);
@@ -2353,7 +2353,7 @@ int folio_isolate_lru(struct folio *folio)
lruvec = folio_lruvec_lock_irq(folio);
lruvec_del_folio(lruvec, folio);
unlock_page_lruvec_irq(lruvec);
- ret = 0;
+ ret = true;
}
return ret;
--
2.27.0
The isolate_lru_page() function can only return 0 or -EBUSY, and no user
cares about the specific error value except add_page_for_migration(). So
we can convert isolate_lru_page() to return a boolean value, which makes
the code clearer when its return value is checked.
Also convert all callers' logic for checking the isolation state.
No functional changes intended.
Signed-off-by: Baolin Wang <[email protected]>
---
mm/folio-compat.c | 12 +++---------
mm/internal.h | 2 +-
mm/khugepaged.c | 2 +-
mm/memcontrol.c | 4 ++--
mm/memory-failure.c | 4 ++--
mm/memory_hotplug.c | 2 +-
mm/migrate.c | 9 ++++++---
mm/migrate_device.c | 2 +-
8 files changed, 17 insertions(+), 20 deletions(-)
diff --git a/mm/folio-compat.c b/mm/folio-compat.c
index 540373cf904e..cabcd1de9ecb 100644
--- a/mm/folio-compat.c
+++ b/mm/folio-compat.c
@@ -113,17 +113,11 @@ struct page *grab_cache_page_write_begin(struct address_space *mapping,
}
EXPORT_SYMBOL(grab_cache_page_write_begin);
-int isolate_lru_page(struct page *page)
+bool isolate_lru_page(struct page *page)
{
- bool ret;
-
if (WARN_RATELIMIT(PageTail(page), "trying to isolate tail page"))
- return -EBUSY;
- ret = folio_isolate_lru((struct folio *)page);
- if (ret)
- return 0;
-
- return -EBUSY;
+ return false;
+ return folio_isolate_lru((struct folio *)page);
}
void putback_lru_page(struct page *page)
diff --git a/mm/internal.h b/mm/internal.h
index 8645e8496537..fc01fd092ea5 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -187,7 +187,7 @@ pgprot_t __init early_memremap_pgprot_adjust(resource_size_t phys_addr,
/*
* in mm/vmscan.c:
*/
-int isolate_lru_page(struct page *page);
+bool isolate_lru_page(struct page *page);
bool folio_isolate_lru(struct folio *folio);
void putback_lru_page(struct page *page);
void folio_putback_lru(struct folio *folio);
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index cee659cfa3c1..8dbc39896811 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -659,7 +659,7 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
* Isolate the page to avoid collapsing an hugepage
* currently in use by the VM.
*/
- if (isolate_lru_page(page)) {
+ if (!isolate_lru_page(page)) {
unlock_page(page);
result = SCAN_DEL_PAGE_LRU;
goto out;
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 17335459d8dc..e8fd42be5fab 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6176,7 +6176,7 @@ static int mem_cgroup_move_charge_pte_range(pmd_t *pmd,
target_type = get_mctgt_type_thp(vma, addr, *pmd, &target);
if (target_type == MC_TARGET_PAGE) {
page = target.page;
- if (!isolate_lru_page(page)) {
+ if (isolate_lru_page(page)) {
if (!mem_cgroup_move_account(page, true,
mc.from, mc.to)) {
mc.precharge -= HPAGE_PMD_NR;
@@ -6226,7 +6226,7 @@ static int mem_cgroup_move_charge_pte_range(pmd_t *pmd,
*/
if (PageTransCompound(page))
goto put;
- if (!device && isolate_lru_page(page))
+ if (!device && !isolate_lru_page(page))
goto put;
if (!mem_cgroup_move_account(page, false,
mc.from, mc.to)) {
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index db85c2d37f70..e504362fdb23 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -846,7 +846,7 @@ static const char * const action_page_types[] = {
*/
static int delete_from_lru_cache(struct page *p)
{
- if (!isolate_lru_page(p)) {
+ if (isolate_lru_page(p)) {
/*
* Clear sensible page flags, so that the buddy system won't
* complain when the page is unpoison-and-freed.
@@ -2513,7 +2513,7 @@ static bool isolate_page(struct page *page, struct list_head *pagelist)
bool lru = !__PageMovable(page);
if (lru)
- isolated = !isolate_lru_page(page);
+ isolated = isolate_lru_page(page);
else
isolated = !isolate_movable_page(page,
ISOLATE_UNEVICTABLE);
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index a1e8c3e9ab08..17ed80707518 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1668,7 +1668,7 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
* LRU and non-lru movable pages.
*/
if (PageLRU(page))
- ret = isolate_lru_page(page);
+ ret = !isolate_lru_page(page);
else
ret = isolate_movable_page(page, ISOLATE_UNEVICTABLE);
if (!ret) { /* Success */
diff --git a/mm/migrate.c b/mm/migrate.c
index ef68a1aff35c..53010a142e7f 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2132,11 +2132,14 @@ static int add_page_for_migration(struct mm_struct *mm, unsigned long addr,
}
} else {
struct page *head;
+ bool isolated;
head = compound_head(page);
- err = isolate_lru_page(head);
- if (err)
+ isolated = isolate_lru_page(head);
+ if (!isolated) {
+ err = -EBUSY;
goto out_putpage;
+ }
err = 1;
list_add_tail(&head->lru, pagelist);
@@ -2541,7 +2544,7 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
return 0;
}
- if (isolate_lru_page(page))
+ if (!isolate_lru_page(page))
return 0;
mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON + page_is_file_lru(page),
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index 6c3740318a98..d30c9de60b0d 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -388,7 +388,7 @@ static unsigned long migrate_device_unmap(unsigned long *src_pfns,
allow_drain = false;
}
- if (isolate_lru_page(page)) {
+ if (!isolate_lru_page(page)) {
src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
restore++;
continue;
--
2.27.0
Currently isolate_hugetlb() only returns 0 or -EBUSY, and most users do
not care about the specific error value, so we can convert
isolate_hugetlb() to return a boolean value to make the code clearer when
the hugetlb isolation state is checked. This also converts the 2 callers
that do consider the error value returned by isolate_hugetlb().
No functional changes intended.
Signed-off-by: Baolin Wang <[email protected]>
---
include/linux/hugetlb.h | 6 +++---
mm/hugetlb.c | 12 ++++++++----
mm/memory-failure.c | 2 +-
mm/mempolicy.c | 2 +-
mm/migrate.c | 2 +-
5 files changed, 14 insertions(+), 10 deletions(-)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index df6dd624ccfe..5f5e4177b2e0 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -171,7 +171,7 @@ bool hugetlb_reserve_pages(struct inode *inode, long from, long to,
vm_flags_t vm_flags);
long hugetlb_unreserve_pages(struct inode *inode, long start, long end,
long freed);
-int isolate_hugetlb(struct folio *folio, struct list_head *list);
+bool isolate_hugetlb(struct folio *folio, struct list_head *list);
int get_hwpoison_hugetlb_folio(struct folio *folio, bool *hugetlb, bool unpoison);
int get_huge_page_for_hwpoison(unsigned long pfn, int flags,
bool *migratable_cleared);
@@ -413,9 +413,9 @@ static inline pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr,
return NULL;
}
-static inline int isolate_hugetlb(struct folio *folio, struct list_head *list)
+static inline bool isolate_hugetlb(struct folio *folio, struct list_head *list)
{
- return -EBUSY;
+ return false;
}
static inline int get_hwpoison_hugetlb_folio(struct folio *folio, bool *hugetlb, bool unpoison)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 3a01a9dbf445..75097e3abc18 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2932,6 +2932,10 @@ static int alloc_and_dissolve_hugetlb_folio(struct hstate *h,
spin_unlock_irq(&hugetlb_lock);
ret = isolate_hugetlb(old_folio, list);
spin_lock_irq(&hugetlb_lock);
+ if (!ret)
+ ret = -EBUSY;
+ else
+ ret = 0;
goto free_new;
} else if (!folio_test_hugetlb_freed(old_folio)) {
/*
@@ -3005,7 +3009,7 @@ int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list)
if (hstate_is_gigantic(h))
return -ENOMEM;
- if (folio_ref_count(folio) && !isolate_hugetlb(folio, list))
+ if (folio_ref_count(folio) && isolate_hugetlb(folio, list))
ret = 0;
else if (!folio_ref_count(folio))
ret = alloc_and_dissolve_hugetlb_folio(h, folio, list);
@@ -7251,15 +7255,15 @@ __weak unsigned long hugetlb_mask_last_page(struct hstate *h)
* These functions are overwritable if your architecture needs its own
* behavior.
*/
-int isolate_hugetlb(struct folio *folio, struct list_head *list)
+bool isolate_hugetlb(struct folio *folio, struct list_head *list)
{
- int ret = 0;
+ bool ret = true;
spin_lock_irq(&hugetlb_lock);
if (!folio_test_hugetlb(folio) ||
!folio_test_hugetlb_migratable(folio) ||
!folio_try_get(folio)) {
- ret = -EBUSY;
+ ret = false;
goto unlock;
}
folio_clear_hugetlb_migratable(folio);
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index e504362fdb23..8604753bc644 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -2508,7 +2508,7 @@ static bool isolate_page(struct page *page, struct list_head *pagelist)
bool isolated = false;
if (PageHuge(page)) {
- isolated = !isolate_hugetlb(page_folio(page), pagelist);
+ isolated = isolate_hugetlb(page_folio(page), pagelist);
} else {
bool lru = !__PageMovable(page);
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 2751bc3310fd..a256a241fd1d 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -609,7 +609,7 @@ static int queue_folios_hugetlb(pte_t *pte, unsigned long hmask,
if (flags & (MPOL_MF_MOVE_ALL) ||
(flags & MPOL_MF_MOVE && folio_estimated_sharers(folio) == 1 &&
!hugetlb_pmd_shared(pte))) {
- if (isolate_hugetlb(folio, qp->pagelist) &&
+ if (!isolate_hugetlb(folio, qp->pagelist) &&
(flags & MPOL_MF_STRICT))
/*
* Failed to isolate folio but allow migrating pages
diff --git a/mm/migrate.c b/mm/migrate.c
index 53010a142e7f..c5136fa48638 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2128,7 +2128,7 @@ static int add_page_for_migration(struct mm_struct *mm, unsigned long addr,
if (PageHead(page)) {
err = isolate_hugetlb(page_folio(page), pagelist);
if (!err)
- err = 1;
+ err = -EBUSY;
}
} else {
struct page *head;
--
2.27.0
Currently isolate_movable_page() can only return 0 or -EBUSY, and no user
cares about the specific error value, so we can convert
isolate_movable_page() to return a boolean value to make the code clearer
when the movable page isolation state is checked.
No functional changes intended.
Signed-off-by: Baolin Wang <[email protected]>
---
include/linux/migrate.h | 6 +++---
mm/compaction.c | 2 +-
mm/memory-failure.c | 4 ++--
mm/memory_hotplug.c | 4 ++--
mm/migrate.c | 6 +++---
5 files changed, 11 insertions(+), 11 deletions(-)
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index c88b96b48be7..6b252f519c86 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -71,7 +71,7 @@ extern int migrate_pages(struct list_head *l, new_page_t new, free_page_t free,
unsigned long private, enum migrate_mode mode, int reason,
unsigned int *ret_succeeded);
extern struct page *alloc_migration_target(struct page *page, unsigned long private);
-extern int isolate_movable_page(struct page *page, isolate_mode_t mode);
+extern bool isolate_movable_page(struct page *page, isolate_mode_t mode);
int migrate_huge_page_move_mapping(struct address_space *mapping,
struct folio *dst, struct folio *src);
@@ -92,8 +92,8 @@ static inline int migrate_pages(struct list_head *l, new_page_t new,
static inline struct page *alloc_migration_target(struct page *page,
unsigned long private)
{ return NULL; }
-static inline int isolate_movable_page(struct page *page, isolate_mode_t mode)
- { return -EBUSY; }
+static inline bool isolate_movable_page(struct page *page, isolate_mode_t mode)
+ { return false; }
static inline int migrate_huge_page_move_mapping(struct address_space *mapping,
struct folio *dst, struct folio *src)
diff --git a/mm/compaction.c b/mm/compaction.c
index d73578af44cc..ad7409f70519 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -976,7 +976,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
locked = NULL;
}
- if (!isolate_movable_page(page, mode))
+ if (isolate_movable_page(page, mode))
goto isolate_success;
}
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 8604753bc644..a1ede7bdce95 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -2515,8 +2515,8 @@ static bool isolate_page(struct page *page, struct list_head *pagelist)
if (lru)
isolated = isolate_lru_page(page);
else
- isolated = !isolate_movable_page(page,
- ISOLATE_UNEVICTABLE);
+ isolated = isolate_movable_page(page,
+ ISOLATE_UNEVICTABLE);
if (isolated) {
list_add(&page->lru, pagelist);
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 17ed80707518..e54c4aa69636 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1668,10 +1668,10 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
* LRU and non-lru movable pages.
*/
if (PageLRU(page))
- ret = !isolate_lru_page(page);
+ ret = isolate_lru_page(page);
else
ret = isolate_movable_page(page, ISOLATE_UNEVICTABLE);
- if (!ret) { /* Success */
+ if (ret) { /* Success */
list_add_tail(&page->lru, &source);
if (!__PageMovable(page))
inc_node_page_state(page, NR_ISOLATED_ANON +
diff --git a/mm/migrate.c b/mm/migrate.c
index c5136fa48638..b7982aa745b0 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -58,7 +58,7 @@
#include "internal.h"
-int isolate_movable_page(struct page *page, isolate_mode_t mode)
+bool isolate_movable_page(struct page *page, isolate_mode_t mode)
{
struct folio *folio = folio_get_nontail_page(page);
const struct movable_operations *mops;
@@ -119,14 +119,14 @@ int isolate_movable_page(struct page *page, isolate_mode_t mode)
folio_set_isolated(folio);
folio_unlock(folio);
- return 0;
+ return true;
out_no_isolated:
folio_unlock(folio);
out_putfolio:
folio_put(folio);
out:
- return -EBUSY;
+ return false;
}
static void putback_movable_folio(struct folio *folio)
--
2.27.0
On Tue, 14 Feb 2023 21:59:29 +0800 Baolin Wang <[email protected]> wrote:
> Now the folio_isolate_lru() did not return a boolean value to indicate
> isolation success or not, however below code checking the return value
> can make people think that it was a boolean success/failure thing, which
> makes people easy to make mistakes (see the fix patch[1]).
>
> if (folio_isolate_lru(folio))
> continue;
>
> Thus it's better to check the negative error value expilictly returned by
> folio_isolate_lru(), which makes code more clear per Linus's suggestion[2].
> Moreover Matthew suggested we can convert the isolation functions to return
> a boolean[3], since most users did not care about the negative error value,
> and can also remove the confusing of checking return value.
>
> So this patch converts the folio_isolate_lru() to return a boolean value,
> which means return 'true' to indicate the folio isolation is successful, and
> 'false' means a failure to isolation. Meanwhile changing all users' logic of
> checking the isolation state.
>
> No functional changes intended.
>
> [1] https://lore.kernel.org/all/[email protected]/T/#u
> [2] https://lore.kernel.org/all/CAHk-=wiBrY+O-4=2mrbVyxR+hOqfdJ=Do6xoucfJ9_5az01L4Q@mail.gmail.com/
> [3] https://lore.kernel.org/all/[email protected]/
>
> Signed-off-by: Baolin Wang <[email protected]>
> ---
> mm/damon/paddr.c | 2 +-
> mm/folio-compat.c | 8 +++++++-
> mm/gup.c | 2 +-
> mm/internal.h | 2 +-
> mm/khugepaged.c | 2 +-
> mm/madvise.c | 4 ++--
> mm/mempolicy.c | 2 +-
> mm/vmscan.c | 10 +++++-----
> 8 files changed, 19 insertions(+), 13 deletions(-)
>
> diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
> index b4df9b9bcc0a..607bb69e526c 100644
> --- a/mm/damon/paddr.c
> +++ b/mm/damon/paddr.c
> @@ -246,7 +246,7 @@ static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s)
>
> folio_clear_referenced(folio);
> folio_test_clear_young(folio);
> - if (folio_isolate_lru(folio)) {
> + if (!folio_isolate_lru(folio)) {
> folio_put(folio);
> continue;
> }
> diff --git a/mm/folio-compat.c b/mm/folio-compat.c
> index 18c48b557926..540373cf904e 100644
> --- a/mm/folio-compat.c
> +++ b/mm/folio-compat.c
> @@ -115,9 +115,15 @@ EXPORT_SYMBOL(grab_cache_page_write_begin);
>
> int isolate_lru_page(struct page *page)
> {
> + bool ret;
> +
> if (WARN_RATELIMIT(PageTail(page), "trying to isolate tail page"))
> return -EBUSY;
> - return folio_isolate_lru((struct folio *)page);
> + ret = folio_isolate_lru((struct folio *)page);
> + if (ret)
> + return 0;
> +
> + return -EBUSY;
> }
>
> void putback_lru_page(struct page *page)
> diff --git a/mm/gup.c b/mm/gup.c
> index b0885f70579c..eab18ba045db 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -1939,7 +1939,7 @@ static unsigned long collect_longterm_unpinnable_pages(
> drain_allow = false;
> }
>
> - if (folio_isolate_lru(folio))
> + if (!folio_isolate_lru(folio))
> continue;
>
> list_add_tail(&folio->lru, movable_page_list);
> diff --git a/mm/internal.h b/mm/internal.h
> index dfb37e94e140..8645e8496537 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -188,7 +188,7 @@ pgprot_t __init early_memremap_pgprot_adjust(resource_size_t phys_addr,
> * in mm/vmscan.c:
> */
> int isolate_lru_page(struct page *page);
> -int folio_isolate_lru(struct folio *folio);
> +bool folio_isolate_lru(struct folio *folio);
> void putback_lru_page(struct page *page);
> void folio_putback_lru(struct folio *folio);
> extern void reclaim_throttle(pg_data_t *pgdat, enum vmscan_throttle_state reason);
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index a5d32231bfad..cee659cfa3c1 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -2047,7 +2047,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
> goto out_unlock;
> }
>
> - if (folio_isolate_lru(folio)) {
> + if (!folio_isolate_lru(folio)) {
> result = SCAN_DEL_PAGE_LRU;
> goto out_unlock;
> }
> diff --git a/mm/madvise.c b/mm/madvise.c
> index 5a5a687d03c2..c2202f51e9dd 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -406,7 +406,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> folio_clear_referenced(folio);
> folio_test_clear_young(folio);
> if (pageout) {
> - if (!folio_isolate_lru(folio)) {
> + if (folio_isolate_lru(folio)) {
> if (folio_test_unevictable(folio))
> folio_putback_lru(folio);
> else
> @@ -500,7 +500,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> folio_clear_referenced(folio);
> folio_test_clear_young(folio);
> if (pageout) {
> - if (!folio_isolate_lru(folio)) {
> + if (folio_isolate_lru(folio)) {
> if (folio_test_unevictable(folio))
> folio_putback_lru(folio);
> else
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index 0919c7a719d4..2751bc3310fd 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -1033,7 +1033,7 @@ static int migrate_folio_add(struct folio *folio, struct list_head *foliolist,
> * expensive, so check the estimated mapcount of the folio instead.
> */
> if ((flags & MPOL_MF_MOVE_ALL) || folio_estimated_sharers(folio) == 1) {
> - if (!folio_isolate_lru(folio)) {
> + if (folio_isolate_lru(folio)) {
> list_add_tail(&folio->lru, foliolist);
> node_stat_mod_folio(folio,
> NR_ISOLATED_ANON + folio_is_file_lru(folio),
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 34535bbd4fe9..7658b40df947 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2337,12 +2337,12 @@ static unsigned long isolate_lru_folios(unsigned long nr_to_scan,
> * (2) The lru_lock must not be held.
> * (3) Interrupts must be enabled.
> *
> - * Return: 0 if the folio was removed from an LRU list.
> - * -EBUSY if the folio was not on an LRU list.
> + * Return: true if the folio was removed from an LRU list.
> + * false if the folio was not on an LRU list.
> */
> -int folio_isolate_lru(struct folio *folio)
> +bool folio_isolate_lru(struct folio *folio)
> {
> - int ret = -EBUSY;
> + bool ret = false;
>
> VM_BUG_ON_FOLIO(!folio_ref_count(folio), folio);
>
> @@ -2353,7 +2353,7 @@ int folio_isolate_lru(struct folio *folio)
> lruvec = folio_lruvec_lock_irq(folio);
> lruvec_del_folio(lruvec, folio);
> unlock_page_lruvec_irq(lruvec);
> - ret = 0;
> + ret = true;
> }
>
> return ret;
> --
> 2.27.0
Reviewed-by: SeongJae Park <[email protected]>
Thanks,
SJ
On 14.02.23 14:59, Baolin Wang wrote:
> Now the page isolation functions did not return a boolean to indicate
> success or not, instead it will return a negative error when failed
> to isolate a page. So below code used in most places seem a boolean
> success/failure thing, which can confuse people whether the isolation
> is successful.
>
> if (folio_isolate_lru(folio))
> continue;
>
> Moreover the page isolation functions only return 0 or -EBUSY, and
> most users did not care about the negative error except for few users,
> thus we can convert all page isolation functions to return a boolean
> value, which can remove the confusion to make code more clear.
>
> No functional changes intended in this patch series.
>
> Changes from v1:
> - Convert all isolation functions to return bool.
Acked-by: David Hildenbrand <[email protected]>
Although it's controversial if
if (!ret)
ret = -EBUSY;
else
ret = 0;
is really appealing to the reader's eye :)

	ret = ret ? 0 : -EBUSY;

is still confusing; it would be better as

	ret = isolated ? 0 : -EBUSY;

IOW, not reusing the "int ret" variable.
--
Thanks,
David / dhildenb
On Tue, 14 Feb 2023 21:59:31 +0800 Baolin Wang <[email protected]> wrote:
> Now the isolate_hugetlb() only returns 0 or -EBUSY, and most users did not
> care about the negative value, thus we can convert the isolate_hugetlb()
> to return a boolean value to make code more clear when checking the
> hugetlb isolation state. Moreover converts 2 users which will consider
> the negative value returned by isolate_hugetlb().
>
> No functional changes intended.
>
> Signed-off-by: Baolin Wang <[email protected]>
> ---
> include/linux/hugetlb.h | 6 +++---
> mm/hugetlb.c | 12 ++++++++----
> mm/memory-failure.c | 2 +-
> mm/mempolicy.c | 2 +-
> mm/migrate.c | 2 +-
> 5 files changed, 14 insertions(+), 10 deletions(-)
>
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index df6dd624ccfe..5f5e4177b2e0 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -171,7 +171,7 @@ bool hugetlb_reserve_pages(struct inode *inode, long from, long to,
> vm_flags_t vm_flags);
> long hugetlb_unreserve_pages(struct inode *inode, long start, long end,
> long freed);
> -int isolate_hugetlb(struct folio *folio, struct list_head *list);
> +bool isolate_hugetlb(struct folio *folio, struct list_head *list);
> int get_hwpoison_hugetlb_folio(struct folio *folio, bool *hugetlb, bool unpoison);
> int get_huge_page_for_hwpoison(unsigned long pfn, int flags,
> bool *migratable_cleared);
> @@ -413,9 +413,9 @@ static inline pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr,
> return NULL;
> }
>
> -static inline int isolate_hugetlb(struct folio *folio, struct list_head *list)
> +static inline bool isolate_hugetlb(struct folio *folio, struct list_head *list)
> {
> - return -EBUSY;
> + return false;
> }
>
> static inline int get_hwpoison_hugetlb_folio(struct folio *folio, bool *hugetlb, bool unpoison)
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 3a01a9dbf445..75097e3abc18 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -2932,6 +2932,10 @@ static int alloc_and_dissolve_hugetlb_folio(struct hstate *h,
> spin_unlock_irq(&hugetlb_lock);
> ret = isolate_hugetlb(old_folio, list);
> spin_lock_irq(&hugetlb_lock);
> + if (!ret)
> + ret = -EBUSY;
> + else
> + ret = 0;
This would work, but 'ret' is not 'bool' but 'int'. How about below?
ret = isolate_hugetlb(old_folio, list) ? 0 : -EBUSY;
> goto free_new;
> } else if (!folio_test_hugetlb_freed(old_folio)) {
> /*
> @@ -3005,7 +3009,7 @@ int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list)
> if (hstate_is_gigantic(h))
> return -ENOMEM;
>
> - if (folio_ref_count(folio) && !isolate_hugetlb(folio, list))
> + if (folio_ref_count(folio) && isolate_hugetlb(folio, list))
> ret = 0;
> else if (!folio_ref_count(folio))
> ret = alloc_and_dissolve_hugetlb_folio(h, folio, list);
> @@ -7251,15 +7255,15 @@ __weak unsigned long hugetlb_mask_last_page(struct hstate *h)
> * These functions are overwritable if your architecture needs its own
> * behavior.
> */
> -int isolate_hugetlb(struct folio *folio, struct list_head *list)
> +bool isolate_hugetlb(struct folio *folio, struct list_head *list)
> {
> - int ret = 0;
> + bool ret = true;
>
> spin_lock_irq(&hugetlb_lock);
> if (!folio_test_hugetlb(folio) ||
> !folio_test_hugetlb_migratable(folio) ||
> !folio_try_get(folio)) {
> - ret = -EBUSY;
> + ret = false;
> goto unlock;
> }
> folio_clear_hugetlb_migratable(folio);
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index e504362fdb23..8604753bc644 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -2508,7 +2508,7 @@ static bool isolate_page(struct page *page, struct list_head *pagelist)
> bool isolated = false;
>
> if (PageHuge(page)) {
> - isolated = !isolate_hugetlb(page_folio(page), pagelist);
> + isolated = isolate_hugetlb(page_folio(page), pagelist);
> } else {
> bool lru = !__PageMovable(page);
>
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index 2751bc3310fd..a256a241fd1d 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -609,7 +609,7 @@ static int queue_folios_hugetlb(pte_t *pte, unsigned long hmask,
> if (flags & (MPOL_MF_MOVE_ALL) ||
> (flags & MPOL_MF_MOVE && folio_estimated_sharers(folio) == 1 &&
> !hugetlb_pmd_shared(pte))) {
> - if (isolate_hugetlb(folio, qp->pagelist) &&
> + if (!isolate_hugetlb(folio, qp->pagelist) &&
> (flags & MPOL_MF_STRICT))
> /*
> * Failed to isolate folio but allow migrating pages
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 53010a142e7f..c5136fa48638 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -2128,7 +2128,7 @@ static int add_page_for_migration(struct mm_struct *mm, unsigned long addr,
> if (PageHead(page)) {
> err = isolate_hugetlb(page_folio(page), pagelist);
> if (!err)
> - err = 1;
> + err = -EBUSY;
Again, I think this is confusing. 'err' is 'bool', not 'int'.
Thanks,
SJ
> }
> } else {
> struct page *head;
> --
> 2.27.0
>
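[Editorial aside: SJ's single-expression suggestion above can be sketched in plain userspace C. The stub names and wrapper below are hypothetical stand-ins for illustration, not the kernel implementation.]

```c
#include <errno.h>
#include <stdbool.h>

/* Hypothetical stand-in for the converted isolate_hugetlb(), which
 * now reports isolation success as a bool. */
static bool isolate_hugetlb_stub(bool isolatable)
{
	return isolatable;
}

/* The suggested call-site pattern: fold the bool back into the
 * errno-style int the caller returns, in one expression. */
static int alloc_and_dissolve_stub(bool isolatable)
{
	return isolate_hugetlb_stub(isolatable) ? 0 : -EBUSY;
}
```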
On Tue, 14 Feb 2023 18:03:24 +0000 SeongJae Park <[email protected]> wrote:
> On Tue, 14 Feb 2023 21:59:31 +0800 Baolin Wang <[email protected]> wrote:
>
> > Now the isolate_hugetlb() only returns 0 or -EBUSY, and most users did not
> > care about the negative value, thus we can convert the isolate_hugetlb()
> > to return a boolean value to make code more clear when checking the
> > hugetlb isolation state. Moreover, convert the 2 users that do consider
> > the negative value returned by isolate_hugetlb().
> >
> > No functional changes intended.
> >
> > Signed-off-by: Baolin Wang <[email protected]>
> > ---
> > include/linux/hugetlb.h | 6 +++---
> > mm/hugetlb.c | 12 ++++++++----
> > mm/memory-failure.c | 2 +-
> > mm/mempolicy.c | 2 +-
> > mm/migrate.c | 2 +-
> > 5 files changed, 14 insertions(+), 10 deletions(-)
> >
> > diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> > index df6dd624ccfe..5f5e4177b2e0 100644
[...]
> > diff --git a/mm/migrate.c b/mm/migrate.c
> > index 53010a142e7f..c5136fa48638 100644
> > --- a/mm/migrate.c
> > +++ b/mm/migrate.c
> > @@ -2128,7 +2128,7 @@ static int add_page_for_migration(struct mm_struct *mm, unsigned long addr,
> > if (PageHead(page)) {
> > err = isolate_hugetlb(page_folio(page), pagelist);
> > if (!err)
> > - err = 1;
> > + err = -EBUSY;
>
> Again, I think this is confusing. 'err' is 'bool', not 'int'.
I mean, 'err' is not 'bool' but 'int', sorry. See? This confuses me ;)
Thanks,
SJ
>
>
> Thanks,
> SJ
>
> > }
> > } else {
> > struct page *head;
> > --
> > 2.27.0
> >
On 02/14/23 18:07, SeongJae Park wrote:
> On Tue, 14 Feb 2023 18:03:24 +0000 SeongJae Park <[email protected]> wrote:
>
> > On Tue, 14 Feb 2023 21:59:31 +0800 Baolin Wang <[email protected]> wrote:
> >
> > > Now the isolate_hugetlb() only returns 0 or -EBUSY, and most users did not
> > > care about the negative value, thus we can convert the isolate_hugetlb()
> > > to return a boolean value to make code more clear when checking the
> > > hugetlb isolation state. Moreover, convert the 2 users that do consider
> > > the negative value returned by isolate_hugetlb().
> > >
> > > No functional changes intended.
> > >
> > > Signed-off-by: Baolin Wang <[email protected]>
> > > ---
> > > include/linux/hugetlb.h | 6 +++---
> > > mm/hugetlb.c | 12 ++++++++----
> > > mm/memory-failure.c | 2 +-
> > > mm/mempolicy.c | 2 +-
> > > mm/migrate.c | 2 +-
> > > 5 files changed, 14 insertions(+), 10 deletions(-)
> > >
> > > diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> > > index df6dd624ccfe..5f5e4177b2e0 100644
> [...]
> > > diff --git a/mm/migrate.c b/mm/migrate.c
> > > index 53010a142e7f..c5136fa48638 100644
> > > --- a/mm/migrate.c
> > > +++ b/mm/migrate.c
> > > @@ -2128,7 +2128,7 @@ static int add_page_for_migration(struct mm_struct *mm, unsigned long addr,
> > > if (PageHead(page)) {
> > > err = isolate_hugetlb(page_folio(page), pagelist);
> > > if (!err)
> > > - err = 1;
> > > + err = -EBUSY;
> >
> > Again, I think this is confusing. 'err' is 'bool', not 'int'.
>
> I mean, 'err' is not 'bool' but 'int', sorry. See? This confuses me ;)
>
Yes,
in the case here (and elsewhere) I like David's suggestion of using a separate
bool such as 'isolated' to capture the return value of the isolate function.
Then, the statement:
err = isolated ? 0 : -EBUSY;
would be pretty clear.
--
Mike Kravetz
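[Editorial aside: Mike's separate-bool suggestion can be sketched as a hypothetical userspace stub of the add_page_for_migration() call site; the names here are illustrative, not the actual kernel code.]

```c
#include <errno.h>
#include <stdbool.h>

/* Hypothetical stub for the bool-returning isolate_hugetlb(). */
static bool isolate_hugetlb_stub(bool isolatable)
{
	return isolatable;
}

/* Capture the bool in a dedicated 'isolated' variable instead of
 * reusing the int 'err', then convert once to an errno value. */
static int add_page_for_migration_stub(bool isolatable)
{
	bool isolated = isolate_hugetlb_stub(isolatable);
	int err = isolated ? 0 : -EBUSY;

	return err;
}
```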
On Tue, 14 Feb 2023 21:59:30 +0800 Baolin Wang <[email protected]> wrote:
> The isolate_lru_page() can only return 0 or -EBUSY, and most users did
> not care about the negative error of isolate_lru_page(), except one user
> in add_page_for_migration(). So we can convert the isolate_lru_page() to
> return a boolean value, which can help to make the code more clear when
> checking the return value of isolate_lru_page().
>
> Also convert all users' logic of checking the isolation state.
>
> No functional changes intended.
>
> Signed-off-by: Baolin Wang <[email protected]>
> ---
> mm/folio-compat.c | 12 +++---------
> mm/internal.h | 2 +-
> mm/khugepaged.c | 2 +-
> mm/memcontrol.c | 4 ++--
> mm/memory-failure.c | 4 ++--
> mm/memory_hotplug.c | 2 +-
> mm/migrate.c | 9 ++++++---
> mm/migrate_device.c | 2 +-
> 8 files changed, 17 insertions(+), 20 deletions(-)
>
[...]
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index a1e8c3e9ab08..17ed80707518 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1668,7 +1668,7 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
> * LRU and non-lru movable pages.
> */
> if (PageLRU(page))
> - ret = isolate_lru_page(page);
> + ret = !isolate_lru_page(page);
This may change the return value of this function. That is, this function
will return 1 instead of -EBUSY after this change. It's not a real issue,
though, as no caller of this function cares about the return value.
Thanks,
SJ
> else
> ret = isolate_movable_page(page, ISOLATE_UNEVICTABLE);
> if (!ret) { /* Success */
[...]
On 2/15/2023 3:32 AM, SeongJae Park wrote:
> On Tue, 14 Feb 2023 21:59:30 +0800 Baolin Wang <[email protected]> wrote:
>
>> The isolate_lru_page() can only return 0 or -EBUSY, and most users did
>> not care about the negative error of isolate_lru_page(), except one user
>> in add_page_for_migration(). So we can convert the isolate_lru_page() to
>> return a boolean value, which can help to make the code more clear when
>> checking the return value of isolate_lru_page().
>>
>> Also convert all users' logic of checking the isolation state.
>>
>> No functional changes intended.
>>
>> Signed-off-by: Baolin Wang <[email protected]>
>> ---
>> mm/folio-compat.c | 12 +++---------
>> mm/internal.h | 2 +-
>> mm/khugepaged.c | 2 +-
>> mm/memcontrol.c | 4 ++--
>> mm/memory-failure.c | 4 ++--
>> mm/memory_hotplug.c | 2 +-
>> mm/migrate.c | 9 ++++++---
>> mm/migrate_device.c | 2 +-
>> 8 files changed, 17 insertions(+), 20 deletions(-)
>>
> [...]
>> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>> index a1e8c3e9ab08..17ed80707518 100644
>> --- a/mm/memory_hotplug.c
>> +++ b/mm/memory_hotplug.c
>> @@ -1668,7 +1668,7 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
>> * LRU and non-lru movable pages.
>> */
>> if (PageLRU(page))
>> - ret = isolate_lru_page(page);
>> + ret = !isolate_lru_page(page);
>
> This may change the return value of this function. That is, this function
> will return 1 instead of -EBUSY after this change. It's not a real issue,
> though, as no caller of this function cares about the return value.
Yes, I've also thought about this. OK, I can keep the original logic
here by adding a new variable. Thanks.
isolated = isolate_lru_page(page);
ret = isolated ? 0 : -EBUSY;
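[Editorial aside: the difference SJ points out can be seen in a small userspace sketch. The stub names are hypothetical: negating the bool makes a failed isolation return 1, while the 'isolated' variable keeps the historical 0 / -EBUSY values.]

```c
#include <errno.h>
#include <stdbool.h>

/* Hypothetical stub for the bool-returning isolate_lru_page(). */
static bool isolate_lru_page_stub(bool on_lru)
{
	return on_lru;
}

/* Naive conversion: '!' turns a failed isolation into 1, where the
 * caller previously returned -EBUSY. */
static int do_migrate_range_naive(bool on_lru)
{
	return !isolate_lru_page_stub(on_lru);
}

/* Conversion that preserves the original 0 / -EBUSY return values. */
static int do_migrate_range_preserved(bool on_lru)
{
	bool isolated = isolate_lru_page_stub(on_lru);

	return isolated ? 0 : -EBUSY;
}
```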
On 2/15/2023 2:21 AM, Mike Kravetz wrote:
> On 02/14/23 18:07, SeongJae Park wrote:
>> On Tue, 14 Feb 2023 18:03:24 +0000 SeongJae Park <[email protected]> wrote:
>>
>>> On Tue, 14 Feb 2023 21:59:31 +0800 Baolin Wang <[email protected]> wrote:
>>>
>>>> Now the isolate_hugetlb() only returns 0 or -EBUSY, and most users did not
>>>> care about the negative value, thus we can convert the isolate_hugetlb()
>>>> to return a boolean value to make code more clear when checking the
>>>> hugetlb isolation state. Moreover, convert the 2 users that do consider
>>>> the negative value returned by isolate_hugetlb().
>>>>
>>>> No functional changes intended.
>>>>
>>>> Signed-off-by: Baolin Wang <[email protected]>
>>>> ---
>>>> include/linux/hugetlb.h | 6 +++---
>>>> mm/hugetlb.c | 12 ++++++++----
>>>> mm/memory-failure.c | 2 +-
>>>> mm/mempolicy.c | 2 +-
>>>> mm/migrate.c | 2 +-
>>>> 5 files changed, 14 insertions(+), 10 deletions(-)
>>>>
>>>> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
>>>> index df6dd624ccfe..5f5e4177b2e0 100644
>> [...]
>>>> diff --git a/mm/migrate.c b/mm/migrate.c
>>>> index 53010a142e7f..c5136fa48638 100644
>>>> --- a/mm/migrate.c
>>>> +++ b/mm/migrate.c
>>>> @@ -2128,7 +2128,7 @@ static int add_page_for_migration(struct mm_struct *mm, unsigned long addr,
>>>> if (PageHead(page)) {
>>>> err = isolate_hugetlb(page_folio(page), pagelist);
>>>> if (!err)
>>>> - err = 1;
>>>> + err = -EBUSY;
>>>
>>> Again, I think this is confusing. 'err' is 'bool', not 'int'.
>>
>> I mean, 'err' is not 'bool' but 'int', sorry. See? This confuses me ;)
>>
>
> Yes,
> in the case here (and elsewhere) I like David's suggestion of using a separate
> bool such as 'isolated' to capture the return value of the isolate function.
> Then, the statement:
>
> err = isolated ? 0 : -EBUSY;
>
> would be pretty clear.
Yes, much better, will do. Thanks.
On 2/15/2023 1:52 AM, David Hildenbrand wrote:
> On 14.02.23 14:59, Baolin Wang wrote:
>> Now the page isolation functions did not return a boolean to indicate
>> success or not, instead it will return a negative error when failed
>> to isolate a page. So below code used in most places seem a boolean
>> success/failure thing, which can confuse people whether the isolation
>> is successful.
>>
>> if (folio_isolate_lru(folio))
>> continue;
>>
>> Moreover the page isolation functions only return 0 or -EBUSY, and
>> most users did not care about the negative error except for few users,
>> thus we can convert all page isolation functions to return a boolean
>> value, which can remove the confusion to make code more clear.
>>
>> No functional changes intended in this patch series.
>>
>> Changes from v1:
>> - Convert all isolation functions to return bool.
>
> Acked-by: David Hildenbrand <[email protected]>
Thanks.
>
> Although it's controversial if
>
> if (!ret)
> ret = -EBUSY;
> else
> ret = 0;
>
> is really appealing to the reader's eye :)
>
> ret = ret ? 0 : -EBUSY;
>
> is still confusing; it would be better as
>
> ret = isolated ? 0 : -EBUSY;
>
> IOW, not reusing the "int ret" variable.
Yes, pretty clear. Will do in next version.