2022-08-30 12:40:01

by Miaohe Lin

Subject: [PATCH 0/6] A few cleanup patches for memory-failure

Hi everyone,
This series contains a few cleanup patches: use __PageMovable() to detect
non-lru movable pages, use num_poisoned_pages_sub() to reduce the overhead
of multiple atomic ops, and so on. More details can be found in the
respective changelogs.
Thanks!

Miaohe Lin (6):
mm, hwpoison: use ClearPageHWPoison() in memory_failure()
mm, hwpoison: use __PageMovable() to detect non-lru movable pages
mm, hwpoison: use num_poisoned_pages_sub() to decrease
num_poisoned_pages
mm, hwpoison: avoid unneeded page_mapped_in_vma() overhead in
collect_procs_anon()
mm, hwpoison: check PageTable() explicitly in hwpoison_user_mappings()
mm, hwpoison: cleanup some obsolete comments

include/linux/swapops.h | 5 -----
mm/memory-failure.c | 28 +++++++++++++++-------------
2 files changed, 15 insertions(+), 18 deletions(-)

--
2.23.0


2022-08-30 12:40:08

by Miaohe Lin

Subject: [PATCH 1/6] mm, hwpoison: use ClearPageHWPoison() in memory_failure()

Use ClearPageHWPoison() instead of TestClearPageHWPoison() to clear the page
hwpoison flag, avoiding the unneeded full memory barrier overhead.

Signed-off-by: Miaohe Lin <[email protected]>
---
mm/memory-failure.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index ebf16d177ee5..a923a6dde871 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -2128,7 +2128,7 @@ int memory_failure(unsigned long pfn, int flags)
page_flags = p->flags;

if (hwpoison_filter(p)) {
- TestClearPageHWPoison(p);
+ ClearPageHWPoison(p);
unlock_page(p);
put_page(p);
res = -EOPNOTSUPP;
--
2.23.0
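
For context, the difference between the two helpers comes down to the bit
operations they wrap. The sketch below paraphrases the CLEARPAGEFLAG() and
TESTCLEARFLAG() macros (a simplified illustration, not the verbatim kernel
definitions): ClearPageHWPoison() boils down to clear_bit(), an atomic but
unordered RMW, while TestClearPageHWPoison() uses test_and_clear_bit(), whose
return value implies a full memory barrier. The hwpoison_filter() branch above
discards that return value, so the barrier is pure overhead.

/* Simplified sketch of the generated helpers; not the exact kernel macros. */
static inline void ClearPageHWPoison(struct page *page)
{
	/* Atomic RMW on PG_hwpoison; no memory barrier implied. */
	clear_bit(PG_hwpoison, &page->flags);
}

static inline int TestClearPageHWPoison(struct page *page)
{
	/* Atomic RMW whose returned old bit value implies a full barrier. */
	return test_and_clear_bit(PG_hwpoison, &page->flags);
}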

2022-08-30 12:40:21

by Miaohe Lin

Subject: [PATCH 3/6] mm, hwpoison: use num_poisoned_pages_sub() to decrease num_poisoned_pages

Use num_poisoned_pages_sub() to combine multiple atomic ops into one. Also
num_poisoned_pages_dec() can be killed as there's no caller now.

Signed-off-by: Miaohe Lin <[email protected]>
---
include/linux/swapops.h | 5 -----
mm/memory-failure.c | 6 ++++--
2 files changed, 4 insertions(+), 7 deletions(-)

diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index dbf9df854124..86b95ccb81bb 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -602,11 +602,6 @@ static inline void num_poisoned_pages_inc(void)
atomic_long_inc(&num_poisoned_pages);
}

-static inline void num_poisoned_pages_dec(void)
-{
- atomic_long_dec(&num_poisoned_pages);
-}
-
static inline void num_poisoned_pages_sub(long i)
{
atomic_long_sub(i, &num_poisoned_pages);
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 3966fa6abe03..69c4d1b48ad6 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -2596,7 +2596,7 @@ int soft_offline_page(unsigned long pfn, int flags)

void clear_hwpoisoned_pages(struct page *memmap, int nr_pages)
{
- int i;
+ int i, total = 0;

/*
* A further optimization is to have per section refcounted
@@ -2609,8 +2609,10 @@ void clear_hwpoisoned_pages(struct page *memmap, int nr_pages)

for (i = 0; i < nr_pages; i++) {
if (PageHWPoison(&memmap[i])) {
- num_poisoned_pages_dec();
+ total++;
ClearPageHWPoison(&memmap[i]);
}
}
+ if (total)
+ num_poisoned_pages_sub(total);
}
--
2.23.0
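
The same batching pattern, as a stand-alone sketch in plain C11 (hypothetical
names, stdatomic instead of the kernel's atomic_long_* API), to illustrate why
one subtraction at the end is cheaper than one atomic op per page:

#include <stdatomic.h>
#include <stdbool.h>

static atomic_long num_poisoned;	/* stand-in for num_poisoned_pages */

/* Before: one contended atomic RMW for every poisoned entry. */
static void clear_per_page(const bool *poisoned, long nr)
{
	for (long i = 0; i < nr; i++)
		if (poisoned[i])
			atomic_fetch_sub(&num_poisoned, 1);
}

/* After: count locally, then issue a single atomic subtraction. */
static void clear_batched(const bool *poisoned, long nr)
{
	long total = 0;

	for (long i = 0; i < nr; i++)
		if (poisoned[i])
			total++;
	if (total)
		atomic_fetch_sub(&num_poisoned, total);
}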

2022-08-30 12:40:33

by Miaohe Lin

Subject: [PATCH 6/6] mm, hwpoison: cleanup some obsolete comments

1. Remove the meaningless comment in kill_proc(); it doesn't convey anything
useful.
2. Fix the wrong function name get_hwpoison_unless_zero(); it should be
get_page_unless_zero().
3. The gate keeper for free hwpoison pages has moved to check_new_page().
Update the corresponding comment.

Signed-off-by: Miaohe Lin <[email protected]>
---
mm/memory-failure.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index fb6a10005109..df3bf266eebf 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -277,7 +277,7 @@ static int kill_proc(struct to_kill *tk, unsigned long pfn, int flags)
* to SIG_IGN, but hopefully no one will do that?
*/
ret = send_sig_mceerr(BUS_MCEERR_AO, (void __user *)tk->addr,
- addr_lsb, t); /* synchronous? */
+ addr_lsb, t);
if (ret < 0)
pr_info("Error sending signal to %s:%d: %d\n",
t->comm, t->pid, ret);
@@ -1246,9 +1246,9 @@ static int __get_hwpoison_page(struct page *page, unsigned long flags)
return ret;

/*
- * This check prevents from calling get_hwpoison_unless_zero()
- * for any unsupported type of page in order to reduce the risk of
- * unexpected races caused by taking a page refcount.
+ * This check prevents from calling get_page_unless_zero() for any
+ * unsupported type of page in order to reduce the risk of unexpected
+ * races caused by taking a page refcount.
*/
if (!HWPoisonHandlable(head, flags))
return -EBUSY;
@@ -2025,7 +2025,7 @@ int memory_failure(unsigned long pfn, int flags)
/*
* We need/can do nothing about count=0 pages.
* 1) it's a free page, and therefore in safe hand:
- * prep_new_page() will be the gate keeper.
+ * check_new_page() will be the gate keeper.
* 2) it's part of a non-compound high order page.
* Implies some kernel user: cannot stop them from
* R/W the page; let's pray that the page has been
--
2.23.0

2022-08-30 12:40:50

by Miaohe Lin

Subject: [PATCH 4/6] mm, hwpoison: avoid unneeded page_mapped_in_vma() overhead in collect_procs_anon()

If vma->vm_mm != t->mm, there's no need to call page_mapped_in_vma() as
add_to_kill() won't be called in this case. Move the mm check up to avoid
possibly unneeded calls to page_mapped_in_vma().

Signed-off-by: Miaohe Lin <[email protected]>
---
mm/memory-failure.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 69c4d1b48ad6..904c2b6284a4 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -521,11 +521,11 @@ static void collect_procs_anon(struct page *page, struct list_head *to_kill,
anon_vma_interval_tree_foreach(vmac, &av->rb_root,
pgoff, pgoff) {
vma = vmac->vma;
+ if (vma->vm_mm != t->mm)
+ continue;
if (!page_mapped_in_vma(page, vma))
continue;
- if (vma->vm_mm == t->mm)
- add_to_kill(t, page, FSDAX_INVALID_PGOFF, vma,
- to_kill);
+ add_to_kill(t, page, FSDAX_INVALID_PGOFF, vma, to_kill);
}
}
read_unlock(&tasklist_lock);
--
2.23.0

2022-08-30 12:40:58

by Miaohe Lin

Subject: [PATCH 2/6] mm, hwpoison: use __PageMovable() to detect non-lru movable pages

Using __PageMovable() to detect non-lru movable pages is preferred. It
avoids bumping the page refcount via isolate_movable_page() for LRU pages
that are currently isolated. Also, if a page becomes PageLRU just after
it's checked but before we try to isolate it, isolate_lru_page() will be
called to do the right work.

Signed-off-by: Miaohe Lin <[email protected]>
---
mm/memory-failure.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index a923a6dde871..3966fa6abe03 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -2404,7 +2404,7 @@ EXPORT_SYMBOL(unpoison_memory);
static bool isolate_page(struct page *page, struct list_head *pagelist)
{
bool isolated = false;
- bool lru = PageLRU(page);
+ bool lru = !__PageMovable(page);

if (PageHuge(page)) {
isolated = !isolate_hugetlb(page, pagelist);
--
2.23.0
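
For reference, __PageMovable() does not consult page->flags at all; it keys
off the low bits of page->mapping, which drivers of non-lru movable pages tag
with PAGE_MAPPING_MOVABLE. A simplified sketch (close to, but not verbatim,
the definitions in include/linux/page-flags.h at the time of this series):

#define PAGE_MAPPING_ANON	0x1
#define PAGE_MAPPING_MOVABLE	0x2
#define PAGE_MAPPING_FLAGS	(PAGE_MAPPING_ANON | PAGE_MAPPING_MOVABLE)

static inline bool __PageMovable(struct page *page)
{
	/* Movable, non-lru pages have the MOVABLE bit set in ->mapping. */
	return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) ==
				PAGE_MAPPING_MOVABLE;
}

An LRU page that is temporarily isolated (PG_lru cleared) is still not tagged
movable, so with the patch it keeps being routed to isolate_lru_page() rather
than taking a needless refcount bump in isolate_movable_page().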

2022-08-30 13:19:58

by Miaohe Lin

Subject: [PATCH 5/6] mm, hwpoison: check PageTable() explicitly in hwpoison_user_mappings()

PageTable pages can't be handled by memory_failure(). Filter them out
explicitly in hwpoison_user_mappings(). This also makes the code more
consistent with the relevant check in unpoison_memory().

Signed-off-by: Miaohe Lin <[email protected]>
---
mm/memory-failure.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 904c2b6284a4..fb6a10005109 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1406,7 +1406,7 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
* Here we are interested only in user-mapped pages, so skip any
* other types of pages.
*/
- if (PageReserved(p) || PageSlab(p))
+ if (PageReserved(p) || PageSlab(p) || PageTable(p))
return true;
if (!(PageLRU(hpage) || PageHuge(p)))
return true;
--
2.23.0

Subject: Re: [PATCH 1/6] mm, hwpoison: use ClearPageHWPoison() in memory_failure()

On Tue, Aug 30, 2022 at 08:35:59PM +0800, Miaohe Lin wrote:
> Use ClearPageHWPoison() instead of TestClearPageHWPoison() to clear the page
> hwpoison flag, avoiding the unneeded full memory barrier overhead.
>
> Signed-off-by: Miaohe Lin <[email protected]>

Acked-by: Naoya Horiguchi <[email protected]>

Subject: Re: [PATCH 3/6] mm, hwpoison: use num_poisoned_pages_sub() to decrease num_poisoned_pages

On Tue, Aug 30, 2022 at 08:36:01PM +0800, Miaohe Lin wrote:
> Use num_poisoned_pages_sub() to combine multiple atomic ops into one. Also
> num_poisoned_pages_dec() can be killed as there's no caller now.
>
> Signed-off-by: Miaohe Lin <[email protected]>

Acked-by: Naoya Horiguchi <[email protected]>

Subject: Re: [PATCH 2/6] mm, hwpoison: use __PageMovable() to detect non-lru movable pages

Hi Miaohe,

On Tue, Aug 30, 2022 at 08:36:00PM +0800, Miaohe Lin wrote:
> Using __PageMovable() to detect non-lru movable pages is preferred. It
> avoids bumping the page refcount via isolate_movable_page() for LRU pages
> that are currently isolated. Also, if a page becomes PageLRU just after
> it's checked but before we try to isolate it, isolate_lru_page() will be
> called to do the right work.

Good point. Non-lru movable pages are currently handled by isolate_lru_page(),
which always fails. This means that we lose the chance of soft-offlining
any non-lru movable page. So this patch improves the situation.

>
> Signed-off-by: Miaohe Lin <[email protected]>
> ---
> mm/memory-failure.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index a923a6dde871..3966fa6abe03 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -2404,7 +2404,7 @@ EXPORT_SYMBOL(unpoison_memory);
> static bool isolate_page(struct page *page, struct list_head *pagelist)
> {
> bool isolated = false;
> - bool lru = PageLRU(page);
> + bool lru = !__PageMovable(page);

It seems that PAGE_MAPPING_MOVABLE is not set for hugetlb pages, so
lru becomes true for them. Then, if isolate_hugetlb() succeeds,
inc_node_page_state() is called for hugetlb pages, maybe that's not expected.

>
> if (PageHuge(page)) {
> isolated = !isolate_hugetlb(page, pagelist);
} else {
if (lru)
isolated = !isolate_lru_page(page);
else
isolated = !isolate_movable_page(page, ISOLATE_UNEVICTABLE);

if (isolated)
list_add(&page->lru, pagelist);
}

if (isolated && lru)
inc_node_page_state(page, NR_ISOLATED_ANON +
page_is_file_lru(page));

so, how about moving this if block into the above else block?
Then, the automatic variable lru can be moved into the else block.

Thanks,
Naoya Horiguchi

Subject: Re: [PATCH 4/6] mm, hwpoison: avoid unneeded page_mapped_in_vma() overhead in collect_procs_anon()

On Tue, Aug 30, 2022 at 08:36:02PM +0800, Miaohe Lin wrote:
> If vma->vm_mm != t->mm, there's no need to call page_mapped_in_vma() as
> add_to_kill() won't be called in this case. Move the mm check up to avoid
> possibly unneeded calls to page_mapped_in_vma().
>
> Signed-off-by: Miaohe Lin <[email protected]>

Acked-by: Naoya Horiguchi <[email protected]>

Subject: Re: [PATCH 5/6] mm, hwpoison: check PageTable() explicitly in hwpoison_user_mappings()

On Tue, Aug 30, 2022 at 08:36:03PM +0800, Miaohe Lin wrote:
> PageTable pages can't be handled by memory_failure(). Filter them out
> explicitly in hwpoison_user_mappings(). This also makes the code more
> consistent with the relevant check in unpoison_memory().
>
> Signed-off-by: Miaohe Lin <[email protected]>

Acked-by: Naoya Horiguchi <[email protected]>

Subject: Re: [PATCH 6/6] mm, hwpoison: cleanup some obsolete comments

On Tue, Aug 30, 2022 at 08:36:04PM +0800, Miaohe Lin wrote:
> 1. Remove the meaningless comment in kill_proc(); it doesn't convey anything
> useful.
> 2. Fix the wrong function name get_hwpoison_unless_zero(); it should be
> get_page_unless_zero().
> 3. The gate keeper for free hwpoison pages has moved to check_new_page().
> Update the corresponding comment.
>
> Signed-off-by: Miaohe Lin <[email protected]>

Acked-by: Naoya Horiguchi <[email protected]>

2022-09-05 07:20:36

by Miaohe Lin

Subject: Re: [PATCH 2/6] mm, hwpoison: use __PageMovable() to detect non-lru movable pages

On 2022/9/5 13:22, HORIGUCHI NAOYA(堀口 直也) wrote:
> Hi Miaohe,
>
> On Tue, Aug 30, 2022 at 08:36:00PM +0800, Miaohe Lin wrote:
>> Using __PageMovable() to detect non-lru movable pages is preferred. It
>> avoids bumping the page refcount via isolate_movable_page() for LRU pages
>> that are currently isolated. Also, if a page becomes PageLRU just after
>> it's checked but before we try to isolate it, isolate_lru_page() will be
>> called to do the right work.
>
> Good point. Non-lru movable pages are currently handled by isolate_lru_page(),
> which always fails. This means that we lose the chance of soft-offlining
> any non-lru movable page. So this patch improves the situation.

Non-lru movable pages will still be handled by isolate_movable_page() before the code change,
as they don't have PageLRU set. The current situation is that isolated LRU pages are passed to
isolate_movable_page() incorrectly. This might not hurt, but with this patch the chance to
isolate pages that become un-isolated, and thus available again, right after the check is no
longer missed.

>
>>
>> Signed-off-by: Miaohe Lin <[email protected]>
>> ---
>> mm/memory-failure.c | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
>> index a923a6dde871..3966fa6abe03 100644
>> --- a/mm/memory-failure.c
>> +++ b/mm/memory-failure.c
>> @@ -2404,7 +2404,7 @@ EXPORT_SYMBOL(unpoison_memory);
>> static bool isolate_page(struct page *page, struct list_head *pagelist)
>> {
>> bool isolated = false;
>> - bool lru = PageLRU(page);
>> + bool lru = !__PageMovable(page);
>
> It seems that PAGE_MAPPING_MOVABLE is not set for hugetlb pages, so
> lru becomes true for them. Then, if isolate_hugetlb() succeeds,
> inc_node_page_state() is called for hugetlb pages, maybe that's not expected.

Yes, that's unexpected. Thanks for pointing this out.

>
>>
>> if (PageHuge(page)) {
>> isolated = !isolate_hugetlb(page, pagelist);
> } else {
> if (lru)
> isolated = !isolate_lru_page(page);
> else
> isolated = !isolate_movable_page(page, ISOLATE_UNEVICTABLE);
>
> if (isolated)
> list_add(&page->lru, pagelist);
> }
>
> if (isolated && lru)
> inc_node_page_state(page, NR_ISOLATED_ANON +
> page_is_file_lru(page));
>
> so, how about moving this if block into the above else block?
> Then, the automatic variable lru can be moved into the else block.

Do you mean something like below?

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index df3bf266eebf..48780f3a61d3 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -2404,24 +2404,25 @@ EXPORT_SYMBOL(unpoison_memory);
static bool isolate_page(struct page *page, struct list_head *pagelist)
{
bool isolated = false;
- bool lru = !__PageMovable(page);

if (PageHuge(page)) {
isolated = !isolate_hugetlb(page, pagelist);
} else {
+ bool lru = !__PageMovable(page);
+
if (lru)
isolated = !isolate_lru_page(page);
else
isolated = !isolate_movable_page(page, ISOLATE_UNEVICTABLE);

- if (isolated)
+ if (isolated) {
list_add(&page->lru, pagelist);
+ if (lru)
+ inc_node_page_state(page, NR_ISOLATED_ANON +
+ page_is_file_lru(page));
+ }
}

- if (isolated && lru)
- inc_node_page_state(page, NR_ISOLATED_ANON +
- page_is_file_lru(page));
-
/*
* If we succeed to isolate the page, we grabbed another refcount on
* the page, so we can safely drop the one we got from get_any_pages().

>
> Thanks,
> Naoya Horiguchi

Thanks a lot for your review and comments on this series, Naoya.

Thanks,
Miaohe Lin.

Subject: Re: [PATCH 2/6] mm, hwpoison: use __PageMovable() to detect non-lru movable pages

On Mon, Sep 05, 2022 at 02:53:41PM +0800, Miaohe Lin wrote:
> On 2022/9/5 13:22, HORIGUCHI NAOYA(堀口 直也) wrote:
> > Hi Miaohe,
> >
> > On Tue, Aug 30, 2022 at 08:36:00PM +0800, Miaohe Lin wrote:
> >> Using __PageMovable() to detect non-lru movable pages is preferred. It
> >> avoids bumping the page refcount via isolate_movable_page() for LRU pages
> >> that are currently isolated. Also, if a page becomes PageLRU just after
> >> it's checked but before we try to isolate it, isolate_lru_page() will be
> >> called to do the right work.
> >
> > Good point. Non-lru movable pages are currently handled by isolate_lru_page(),
> > which always fails. This means that we lose the chance of soft-offlining
> > any non-lru movable page. So this patch improves the situation.
>
> Non-lru movable pages will still be handled by isolate_movable_page() before the code change,
> as they don't have PageLRU set. The current situation is that isolated LRU pages are passed to
> isolate_movable_page() incorrectly. This might not hurt, but with this patch the chance to
> isolate pages that become un-isolated, and thus available again, right after the check is no
> longer missed.

OK, thank you for correcting me.

>
> >
> >>
> >> Signed-off-by: Miaohe Lin <[email protected]>
> >> ---
> >> mm/memory-failure.c | 2 +-
> >> 1 file changed, 1 insertion(+), 1 deletion(-)
> >>
> >> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> >> index a923a6dde871..3966fa6abe03 100644
> >> --- a/mm/memory-failure.c
> >> +++ b/mm/memory-failure.c
> >> @@ -2404,7 +2404,7 @@ EXPORT_SYMBOL(unpoison_memory);
> >> static bool isolate_page(struct page *page, struct list_head *pagelist)
> >> {
> >> bool isolated = false;
> >> - bool lru = PageLRU(page);
> >> + bool lru = !__PageMovable(page);
> >
> > It seems that PAGE_MAPPING_MOVABLE is not set for hugetlb pages, so
> > lru becomes true for them. Then, if isolate_hugetlb() succeeds,
> > inc_node_page_state() is called for hugetlb pages, maybe that's not expected.
>
> Yes, that's unexpected. Thanks for pointing this out.
>
> >
> >>
> >> if (PageHuge(page)) {
> >> isolated = !isolate_hugetlb(page, pagelist);
> > } else {
> > if (lru)
> > isolated = !isolate_lru_page(page);
> > else
> > isolated = !isolate_movable_page(page, ISOLATE_UNEVICTABLE);
> >
> > if (isolated)
> > list_add(&page->lru, pagelist);
> > }
> >
> > if (isolated && lru)
> > inc_node_page_state(page, NR_ISOLATED_ANON +
> > page_is_file_lru(page));
> >
> > so, how about moving this if block into the above else block?
> > Then, the automatic variable lru can be moved into the else block.
>
> Do you mean something like below?
>
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index df3bf266eebf..48780f3a61d3 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -2404,24 +2404,25 @@ EXPORT_SYMBOL(unpoison_memory);
> static bool isolate_page(struct page *page, struct list_head *pagelist)
> {
> bool isolated = false;
> - bool lru = !__PageMovable(page);
>
> if (PageHuge(page)) {
> isolated = !isolate_hugetlb(page, pagelist);
> } else {
> + bool lru = !__PageMovable(page);
> +
> if (lru)
> isolated = !isolate_lru_page(page);
> else
> isolated = !isolate_movable_page(page, ISOLATE_UNEVICTABLE);
>
> - if (isolated)
> + if (isolated) {
> list_add(&page->lru, pagelist);
> + if (lru)
> + inc_node_page_state(page, NR_ISOLATED_ANON +
> + page_is_file_lru(page));
> + }
> }
>
> - if (isolated && lru)
> - inc_node_page_state(page, NR_ISOLATED_ANON +
> - page_is_file_lru(page));
> -
> /*
> * If we succeed to isolate the page, we grabbed another refcount on
> * the page, so we can safely drop the one we got from get_any_pages().
>

Yes, that's exactly what I thought of.

Thanks,
Naoya Horiguchi

2022-09-05 07:33:29

by Miaohe Lin

Subject: Re: [PATCH 2/6] mm, hwpoison: use __PageMovable() to detect non-lru movable pages

On 2022/9/5 15:15, HORIGUCHI NAOYA(堀口 直也) wrote:
> On Mon, Sep 05, 2022 at 02:53:41PM +0800, Miaohe Lin wrote:
>> On 2022/9/5 13:22, HORIGUCHI NAOYA(堀口 直也) wrote:
>>> Hi Miaohe,
>>>
>>> On Tue, Aug 30, 2022 at 08:36:00PM +0800, Miaohe Lin wrote:
>>>> Using __PageMovable() to detect non-lru movable pages is preferred. It
>>>> avoids bumping the page refcount via isolate_movable_page() for LRU pages
>>>> that are currently isolated. Also, if a page becomes PageLRU just after
>>>> it's checked but before we try to isolate it, isolate_lru_page() will be
>>>> called to do the right work.
>>>
>>> Good point. Non-lru movable pages are currently handled by isolate_lru_page(),
>>> which always fails. This means that we lose the chance of soft-offlining
>>> any non-lru movable page. So this patch improves the situation.
>>
>> Non-lru movable pages will still be handled by isolate_movable_page() before the code change,
>> as they don't have PageLRU set. The current situation is that isolated LRU pages are passed to
>> isolate_movable_page() incorrectly. This might not hurt, but with this patch the chance to
>> isolate pages that become un-isolated, and thus available again, right after the check is no
>> longer missed.
>
> OK, thank you for correcting me.
>
>>
>>>
>>>>
>>>> Signed-off-by: Miaohe Lin <[email protected]>
>>>> ---
>>>> mm/memory-failure.c | 2 +-
>>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>>
>>>> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
>>>> index a923a6dde871..3966fa6abe03 100644
>>>> --- a/mm/memory-failure.c
>>>> +++ b/mm/memory-failure.c
>>>> @@ -2404,7 +2404,7 @@ EXPORT_SYMBOL(unpoison_memory);
>>>> static bool isolate_page(struct page *page, struct list_head *pagelist)
>>>> {
>>>> bool isolated = false;
>>>> - bool lru = PageLRU(page);
>>>> + bool lru = !__PageMovable(page);
>>>
>>> It seems that PAGE_MAPPING_MOVABLE is not set for hugetlb pages, so
>>> lru becomes true for them. Then, if isolate_hugetlb() succeeds,
>>> inc_node_page_state() is called for hugetlb pages, maybe that's not expected.
>>
>> Yes, that's unexpected. Thanks for pointing this out.
>>
>>>
>>>>
>>>> if (PageHuge(page)) {
>>>> isolated = !isolate_hugetlb(page, pagelist);
>>> } else {
>>> if (lru)
>>> isolated = !isolate_lru_page(page);
>>> else
>>> isolated = !isolate_movable_page(page, ISOLATE_UNEVICTABLE);
>>>
>>> if (isolated)
>>> list_add(&page->lru, pagelist);
>>> }
>>>
>>> if (isolated && lru)
>>> inc_node_page_state(page, NR_ISOLATED_ANON +
>>> page_is_file_lru(page));
>>>
>>> so, how about moving this if block into the above else block?
>>> Then, the automatic variable lru can be moved into the else block.
>>
>> Do you mean something like below?
>>
>> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
>> index df3bf266eebf..48780f3a61d3 100644
>> --- a/mm/memory-failure.c
>> +++ b/mm/memory-failure.c
>> @@ -2404,24 +2404,25 @@ EXPORT_SYMBOL(unpoison_memory);
>> static bool isolate_page(struct page *page, struct list_head *pagelist)
>> {
>> bool isolated = false;
>> - bool lru = !__PageMovable(page);
>>
>> if (PageHuge(page)) {
>> isolated = !isolate_hugetlb(page, pagelist);
>> } else {
>> + bool lru = !__PageMovable(page);
>> +
>> if (lru)
>> isolated = !isolate_lru_page(page);
>> else
>> isolated = !isolate_movable_page(page, ISOLATE_UNEVICTABLE);
>>
>> - if (isolated)
>> + if (isolated) {
>> list_add(&page->lru, pagelist);
>> + if (lru)
>> + inc_node_page_state(page, NR_ISOLATED_ANON +
>> + page_is_file_lru(page));
>> + }
>> }
>>
>> - if (isolated && lru)
>> - inc_node_page_state(page, NR_ISOLATED_ANON +
>> - page_is_file_lru(page));
>> -
>> /*
>> * If we succeed to isolate the page, we grabbed another refcount on
>> * the page, so we can safely drop the one we got from get_any_pages().
>>
>
> Yes, that's exactly what I thought of.

Hi Andrew:

The above code change could be applied to the mm-tree directly, or I can resend
a v2 series. Whichever is more convenient for you; either is fine with me. ;)

Many thanks both.

Thanks,
Miaohe Lin

2022-09-05 21:55:08

by Andrew Morton

Subject: Re: [PATCH 2/6] mm, hwpoison: use __PageMovable() to detect non-lru movable pages

On Mon, 5 Sep 2022 15:29:34 +0800 Miaohe Lin <[email protected]> wrote:

> The above code change could be applied to the mm-tree directly, or I can resend
> a v2 series. Whichever is more convenient for you; either is fine with me. ;)

I got it, thanks.

From: Miaohe Lin <[email protected]>
Subject: mm-hwpoison-use-__pagemovable-to-detect-non-lru-movable-pages-fix
Date: Mon, 5 Sep 2022 14:53:41 +0800

fixes per Naoya Horiguchi

Link: https://lkml.kernel.org/r/[email protected]
Cc: Naoya Horiguchi <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
---

mm/memory-failure.c | 16 +++++++++-------
1 file changed, 9 insertions(+), 7 deletions(-)

--- a/mm/memory-failure.c~mm-hwpoison-use-__pagemovable-to-detect-non-lru-movable-pages-fix
+++ a/mm/memory-failure.c
@@ -2404,24 +2404,26 @@ EXPORT_SYMBOL(unpoison_memory);
static bool isolate_page(struct page *page, struct list_head *pagelist)
{
bool isolated = false;
- bool lru = !__PageMovable(page);

if (PageHuge(page)) {
isolated = !isolate_hugetlb(page, pagelist);
} else {
+ bool lru = !__PageMovable(page);
+
if (lru)
isolated = !isolate_lru_page(page);
else
- isolated = !isolate_movable_page(page, ISOLATE_UNEVICTABLE);
+ isolated = !isolate_movable_page(page,
+ ISOLATE_UNEVICTABLE);

- if (isolated)
+ if (isolated) {
list_add(&page->lru, pagelist);
+ if (lru)
+ inc_node_page_state(page, NR_ISOLATED_ANON +
+ page_is_file_lru(page));
+ }
}

- if (isolated && lru)
- inc_node_page_state(page, NR_ISOLATED_ANON +
- page_is_file_lru(page));
-
/*
* If we succeed to isolate the page, we grabbed another refcount on
* the page, so we can safely drop the one we got from get_any_pages().
_

2022-09-06 01:40:50

by Miaohe Lin

Subject: Re: [PATCH 2/6] mm, hwpoison: use __PageMovable() to detect non-lru movable pages

On 2022/9/6 5:53, Andrew Morton wrote:
> On Mon, 5 Sep 2022 15:29:34 +0800 Miaohe Lin <[email protected]> wrote:
>
>> The above code change could be applied to the mm-tree directly, or I can resend
>> a v2 series. Whichever is more convenient for you; either is fine with me. ;)
>
> I got it, thanks.

Many thanks for doing this. That's very kind of you.

Thanks,
Miaohe Lin


>
> From: Miaohe Lin <[email protected]>
> Subject: mm-hwpoison-use-__pagemovable-to-detect-non-lru-movable-pages-fix
> Date: Mon, 5 Sep 2022 14:53:41 +0800
>
> fixes per Naoya Horiguchi
>
> Link: https://lkml.kernel.org/r/[email protected]
> Cc: Naoya Horiguchi <[email protected]>
> Signed-off-by: Andrew Morton <[email protected]>
> ---
>
> mm/memory-failure.c | 16 +++++++++-------
> 1 file changed, 9 insertions(+), 7 deletions(-)
>
> --- a/mm/memory-failure.c~mm-hwpoison-use-__pagemovable-to-detect-non-lru-movable-pages-fix
> +++ a/mm/memory-failure.c
> @@ -2404,24 +2404,26 @@ EXPORT_SYMBOL(unpoison_memory);
> static bool isolate_page(struct page *page, struct list_head *pagelist)
> {
> bool isolated = false;
> - bool lru = !__PageMovable(page);
>
> if (PageHuge(page)) {
> isolated = !isolate_hugetlb(page, pagelist);
> } else {
> + bool lru = !__PageMovable(page);
> +
> if (lru)
> isolated = !isolate_lru_page(page);
> else
> - isolated = !isolate_movable_page(page, ISOLATE_UNEVICTABLE);
> + isolated = !isolate_movable_page(page,
> + ISOLATE_UNEVICTABLE);
>
> - if (isolated)
> + if (isolated) {
> list_add(&page->lru, pagelist);
> + if (lru)
> + inc_node_page_state(page, NR_ISOLATED_ANON +
> + page_is_file_lru(page));
> + }
> }
>
> - if (isolated && lru)
> - inc_node_page_state(page, NR_ISOLATED_ANON +
> - page_is_file_lru(page));
> -
> /*
> * If we succeed to isolate the page, we grabbed another refcount on
> * the page, so we can safely drop the one we got from get_any_pages().
> _