Hi,

I updated the unpoison fix patchset (sorry for the long gap since v1).

The main purpose of this series is to sync the unpoison code with recent
changes in how the hwpoison code takes page refcounts. Unpoison should
either work or simply fail (without crashing) when it is impossible.

The recent work on keeping hwpoison pages in the shmem pagecache
introduces a new state of hwpoisoned pages, but unpoisoning such pages
is not yet supported by this series.

It seems that soft-offline and unpoison could be used as a general-purpose
page offline/online mechanism (outside the context of memory errors). I
think some additional work is needed to realize this, because currently
soft-offline and unpoison are assumed to happen only rarely (they print
too many messages for aggressive use cases). But anyway, this could be
another interesting topic for the future.

v1: https://lore.kernel.org/linux-mm/[email protected]/
Thanks,
Naoya Horiguchi
---
Summary:
Naoya Horiguchi (4):
mm/hwpoison: mf_mutex for soft offline and unpoison
mm/hwpoison: remove race consideration
mm/hwpoison: remove MF_MSG_BUDDY_2ND and MF_MSG_POISONED_HUGE
mm/hwpoison: fix unpoison_memory()
include/linux/mm.h | 3 +-
include/linux/page-flags.h | 4 ++
include/ras/ras_event.h | 2 -
mm/memory-failure.c | 166 ++++++++++++++++++++++++++++-----------------
mm/page_alloc.c | 23 +++++++
5 files changed, 130 insertions(+), 68 deletions(-)
From: Naoya Horiguchi <[email protected]>
After the recent soft-offline rework, error pages can be taken off the
buddy allocator, but the existing unpoison_memory() does not properly
undo that operation. Moreover, due to the recent change in
__get_hwpoison_page(), get_page_unless_zero() is rarely called for
hwpoisoned pages, so __get_hwpoison_page() most likely returns zero
(meaning it fails to grab the page refcount) and unpoison just clears
PG_hwpoison without releasing a refcount. That does not lead to a
critical issue like a kernel panic, but unpoisoned pages never get back
to the buddy allocator (they are leaked permanently), which is not good.

To (partially) fix this, we need to distinguish "taken off" pages from
other types of hwpoisoned pages. We can't use the refcount or page flags
for this purpose, so a pseudo flag is defined by hacking the ->private
field. Someone might think that put_page() is enough to cancel taken-off
pages, but the normal free path contains some operations not suitable
for the current purpose, and can fire VM_BUG_ON().

Note that unpoison_memory() is now supposed to cancel only hwpoison
events injected by madvise() or
/sys/devices/system/memory/{hard,soft}_offline_page, not by MCE
injection, so please don't try to use unpoison when testing with MCE
injection.
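
For reference, the following is a rough userspace sequence to exercise
this path (just a sketch, not part of this patch: it assumes root,
CONFIG_MEMORY_FAILURE, a mounted debugfs, and the hwpoison_inject module
providing /sys/kernel/debug/hwpoison/unpoison-pfn):

	/*
	 * Sketch: soft-offline one anonymous page via madvise(), then try
	 * to cancel it through the hwpoison_inject debugfs interface.
	 */
	#include <stdio.h>
	#include <stdlib.h>
	#include <stdint.h>
	#include <unistd.h>
	#include <fcntl.h>
	#include <sys/mman.h>

	#ifndef MADV_SOFT_OFFLINE
	#define MADV_SOFT_OFFLINE 101
	#endif

	static uint64_t vaddr_to_pfn(void *addr)
	{
		uint64_t ent = 0;
		int fd = open("/proc/self/pagemap", O_RDONLY);

		if (fd < 0)
			return 0;
		if (pread(fd, &ent, sizeof(ent),
			  ((uintptr_t)addr / getpagesize()) * sizeof(ent)) != sizeof(ent))
			ent = 0;
		close(fd);
		return ent & ((1ULL << 55) - 1);	/* bits 0-54: PFN if present */
	}

	int main(void)
	{
		long pagesize = getpagesize();
		char cmd[80];
		uint64_t pfn;
		char *p;

		p = mmap(NULL, pagesize, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (p == MAP_FAILED)
			return 1;
		p[0] = 1;			/* fault the page in */
		pfn = vaddr_to_pfn(p);
		printf("pfn: 0x%llx\n", (unsigned long long)pfn);

		/* migrate the data away and take the old page off buddy */
		if (madvise(p, pagesize, MADV_SOFT_OFFLINE))
			perror("madvise(MADV_SOFT_OFFLINE)");

		/* unpoison should now give the page back to buddy */
		snprintf(cmd, sizeof(cmd),
			 "echo 0x%llx > /sys/kernel/debug/hwpoison/unpoison-pfn",
			 (unsigned long long)pfn);
		if (system(cmd))
			perror("unpoison");

		return 0;
	}
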
Signed-off-by: Naoya Horiguchi <[email protected]>
---
ChangeLog v2:
- unpoison_memory() now returns values as documented in its comment
- explicitly avoids unpoisoning slab pages
- separates internal pinning function into __get_unpoison_page()
---
include/linux/mm.h | 1 +
include/linux/page-flags.h | 4 ++
mm/memory-failure.c | 104 ++++++++++++++++++++++++++++++-------
mm/page_alloc.c | 23 ++++++++
4 files changed, 113 insertions(+), 19 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 71d886470d71..c7ad3fdfee7c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3219,6 +3219,7 @@ enum mf_flags {
MF_ACTION_REQUIRED = 1 << 1,
MF_MUST_KILL = 1 << 2,
MF_SOFT_OFFLINE = 1 << 3,
+ MF_UNPOISON = 1 << 4,
};
extern int memory_failure(unsigned long pfn, int flags);
extern void memory_failure_queue(unsigned long pfn, int flags);
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index b78f137acc62..8add006535f6 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -522,7 +522,11 @@ PAGEFLAG_FALSE(Uncached, uncached)
PAGEFLAG(HWPoison, hwpoison, PF_ANY)
TESTSCFLAG(HWPoison, hwpoison, PF_ANY)
#define __PG_HWPOISON (1UL << PG_hwpoison)
+#define MAGIC_HWPOISON 0x4857504f49534f4e
+extern void SetPageHWPoisonTakenOff(struct page *page);
+extern void ClearPageHWPoisonTakenOff(struct page *page);
extern bool take_page_off_buddy(struct page *page);
+extern bool take_page_back_buddy(struct page *page);
#else
PAGEFLAG_FALSE(HWPoison, hwpoison)
#define __PG_HWPOISON 0
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 09f079987928..a6f80a670012 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1160,6 +1160,22 @@ static int page_action(struct page_state *ps, struct page *p,
return (result == MF_RECOVERED || result == MF_DELAYED) ? 0 : -EBUSY;
}
+static inline bool PageHWPoisonTakenOff(struct page *page)
+{
+ return PageHWPoison(page) && page_private(page) == MAGIC_HWPOISON;
+}
+
+void SetPageHWPoisonTakenOff(struct page *page)
+{
+ set_page_private(page, MAGIC_HWPOISON);
+}
+
+void ClearPageHWPoisonTakenOff(struct page *page)
+{
+ if (PageHWPoison(page))
+ set_page_private(page, 0);
+}
+
/*
* Return true if a page type of a given page is supported by hwpoison
* mechanism (while handling could fail), otherwise false. This function
@@ -1262,6 +1278,27 @@ static int get_any_page(struct page *p, unsigned long flags)
return ret;
}
+static int __get_unpoison_page(struct page *page)
+{
+ struct page *head = compound_head(page);
+ int ret = 0;
+ bool hugetlb = false;
+
+ ret = get_hwpoison_huge_page(head, &hugetlb);
+ if (hugetlb)
+ return ret;
+
+ /*
+ * PageHWPoisonTakenOff pages are not only marked as PG_hwpoison,
+ * but also isolated from buddy freelist, so need to identify the
+ * state and have to cancel both operations to unpoison.
+ */
+ if (PageHWPoisonTakenOff(head))
+ return -EHWPOISON;
+
+ return get_page_unless_zero(head) ? 1 : 0;
+}
+
/**
* get_hwpoison_page() - Get refcount for memory error handling
* @p: Raw error page (hit by memory error)
@@ -1278,18 +1315,26 @@ static int get_any_page(struct page *p, unsigned long flags)
* extra care for the error page's state (as done in __get_hwpoison_page()),
* and has some retry logic in get_any_page().
*
+ * When called from unpoison_memory(), the caller should already ensure that
+ * the given page has PG_hwpoison. So it's never reused for other page
+ * allocations, and __get_unpoison_page() never races with them.
+ *
* Return: 0 on failure,
* 1 on success for in-use pages in a well-defined state,
* -EIO for pages on which we can not handle memory errors,
* -EBUSY when get_hwpoison_page() has raced with page lifecycle
- * operations like allocation and free.
+ * operations like allocation and free,
+ * -EHWPOISON when the page is hwpoisoned and taken off from buddy.
*/
static int get_hwpoison_page(struct page *p, unsigned long flags)
{
int ret;
zone_pcp_disable(page_zone(p));
- ret = get_any_page(p, flags);
+ if (flags & MF_UNPOISON)
+ ret = __get_unpoison_page(p);
+ else
+ ret = get_any_page(p, flags);
zone_pcp_enable(page_zone(p));
return ret;
@@ -1942,6 +1987,26 @@ core_initcall(memory_failure_init);
pr_info(fmt, pfn); \
})
+static inline int clear_page_hwpoison(struct ratelimit_state *rs, struct page *p)
+{
+ if (TestClearPageHWPoison(p)) {
+ unpoison_pr_info("Unpoison: Software-unpoisoned page %#lx\n",
+ page_to_pfn(p), rs);
+ num_poisoned_pages_dec();
+ return 0;
+ }
+ return -EBUSY;
+}
+
+static inline int unpoison_taken_off_page(struct ratelimit_state *rs,
+ struct page *p)
+{
+ if (take_page_back_buddy(p) && !clear_page_hwpoison(rs, p))
+ return 0;
+ else
+ return -EBUSY;
+}
+
/**
* unpoison_memory - Unpoison a previously poisoned page
* @pfn: Page number of the to be unpoisoned page
@@ -1958,9 +2023,7 @@ int unpoison_memory(unsigned long pfn)
{
struct page *page;
struct page *p;
- int freeit = 0;
- int ret = 0;
- unsigned long flags = 0;
+ int ret = -EBUSY;
static DEFINE_RATELIMIT_STATE(unpoison_rs, DEFAULT_RATELIMIT_INTERVAL,
DEFAULT_RATELIMIT_BURST);
@@ -1996,24 +2059,27 @@ int unpoison_memory(unsigned long pfn)
goto unlock_mutex;
}
- if (!get_hwpoison_page(p, flags)) {
- if (TestClearPageHWPoison(p))
- num_poisoned_pages_dec();
- unpoison_pr_info("Unpoison: Software-unpoisoned free page %#lx\n",
- pfn, &unpoison_rs);
+ if (PageSlab(page))
goto unlock_mutex;
- }
- if (TestClearPageHWPoison(page)) {
- unpoison_pr_info("Unpoison: Software-unpoisoned page %#lx\n",
- pfn, &unpoison_rs);
- num_poisoned_pages_dec();
- freeit = 1;
- }
+ ret = get_hwpoison_page(p, MF_UNPOISON);
+ if (!ret) {
+ ret = clear_page_hwpoison(&unpoison_rs, p);
+ } else if (ret < 0) {
+ if (ret == -EHWPOISON) {
+ ret = unpoison_taken_off_page(&unpoison_rs, p);
+ } else
+ unpoison_pr_info("Unpoison: failed to grab page %#lx\n",
+ pfn, &unpoison_rs);
+ } else {
+ int freeit = clear_page_hwpoison(&unpoison_rs, p);
- put_page(page);
- if (freeit && !(pfn == my_zero_pfn(0) && page_count(p) == 1))
put_page(page);
+ if (freeit && !(pfn == my_zero_pfn(0) && page_count(p) == 1)) {
+ put_page(page);
+ ret = 0;
+ }
+ }
unlock_mutex:
mutex_unlock(&mf_mutex);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4ea590646f89..b6e4cbb44c54 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -9466,6 +9466,7 @@ bool take_page_off_buddy(struct page *page)
del_page_from_free_list(page_head, zone, page_order);
break_down_buddy_pages(zone, page_head, page, 0,
page_order, migratetype);
+ SetPageHWPoisonTakenOff(page);
if (!is_migrate_isolate(migratetype))
__mod_zone_freepage_state(zone, -1, migratetype);
ret = true;
@@ -9477,4 +9478,26 @@ bool take_page_off_buddy(struct page *page)
spin_unlock_irqrestore(&zone->lock, flags);
return ret;
}
+
+/*
+ * Cancel takeoff done by take_page_off_buddy().
+ */
+bool take_page_back_buddy(struct page *page)
+{
+ struct zone *zone = page_zone(page);
+ unsigned long pfn = page_to_pfn(page);
+ unsigned long flags;
+ int migratetype = get_pfnblock_migratetype(page, pfn);
+ bool ret = false;
+
+ spin_lock_irqsave(&zone->lock, flags);
+ if (put_page_testzero(page)) {
+ ClearPageHWPoisonTakenOff(page);
+ __free_one_page(page, pfn, zone, 0, migratetype, FPI_NONE);
+ ret = true;
+ }
+ spin_unlock_irqrestore(&zone->lock, flags);
+
+ return ret;
+}
#endif
--
2.25.1
From: Naoya Horiguchi <[email protected]>
Originally, mf_mutex was introduced to serialize multiple MCE events,
but it is also helpful for excluding races between soft_offline_page()
and unpoison_memory(). So apply mf_mutex to them as well.
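
After this patch, all three entry points serialize on the same lock;
roughly (just an illustration of the intent, see the diff below for the
actual changes):

	static DEFINE_MUTEX(mf_mutex);	/* moved from memory_failure() to file scope */

	/* memory_failure(), soft_offline_page() and unpoison_memory() all do: */
	mutex_lock(&mf_mutex);
	/* ... handle the page, with every early-return path dropping the lock first ... */
	mutex_unlock(&mf_mutex);
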
Signed-off-by: Naoya Horiguchi <[email protected]>
---
ChangeLog v2:
- add mutex_unlock() in "page already poisoned" path in soft_offline_page().
(Thanks to Ding Hui)
---
mm/memory-failure.c | 27 +++++++++++++++++++--------
1 file changed, 19 insertions(+), 8 deletions(-)
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index fa9dda95a2a2..97297edfbd8e 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1628,6 +1628,8 @@ static int memory_failure_dev_pagemap(unsigned long pfn, int flags,
return rc;
}
+static DEFINE_MUTEX(mf_mutex);
+
/**
* memory_failure - Handle memory failure of a page.
* @pfn: Page Number of the corrupted page
@@ -1654,7 +1656,6 @@ int memory_failure(unsigned long pfn, int flags)
int res = 0;
unsigned long page_flags;
bool retry = true;
- static DEFINE_MUTEX(mf_mutex);
if (!sysctl_memory_failure_recovery)
panic("Memory failure on page %lx", pfn);
@@ -1978,6 +1979,7 @@ int unpoison_memory(unsigned long pfn)
struct page *page;
struct page *p;
int freeit = 0;
+ int ret = 0;
unsigned long flags = 0;
static DEFINE_RATELIMIT_STATE(unpoison_rs, DEFAULT_RATELIMIT_INTERVAL,
DEFAULT_RATELIMIT_BURST);
@@ -1988,28 +1990,30 @@ int unpoison_memory(unsigned long pfn)
p = pfn_to_page(pfn);
page = compound_head(p);
+ mutex_lock(&mf_mutex);
+
if (!PageHWPoison(p)) {
unpoison_pr_info("Unpoison: Page was already unpoisoned %#lx\n",
pfn, &unpoison_rs);
- return 0;
+ goto unlock_mutex;
}
if (page_count(page) > 1) {
unpoison_pr_info("Unpoison: Someone grabs the hwpoison page %#lx\n",
pfn, &unpoison_rs);
- return 0;
+ goto unlock_mutex;
}
if (page_mapped(page)) {
unpoison_pr_info("Unpoison: Someone maps the hwpoison page %#lx\n",
pfn, &unpoison_rs);
- return 0;
+ goto unlock_mutex;
}
if (page_mapping(page)) {
unpoison_pr_info("Unpoison: the hwpoison page has non-NULL mapping %#lx\n",
pfn, &unpoison_rs);
- return 0;
+ goto unlock_mutex;
}
/*
@@ -2020,7 +2024,7 @@ int unpoison_memory(unsigned long pfn)
if (!PageHuge(page) && PageTransHuge(page)) {
unpoison_pr_info("Unpoison: Memory failure is now running on %#lx\n",
pfn, &unpoison_rs);
- return 0;
+ goto unlock_mutex;
}
if (!get_hwpoison_page(p, flags)) {
@@ -2028,7 +2032,7 @@ int unpoison_memory(unsigned long pfn)
num_poisoned_pages_dec();
unpoison_pr_info("Unpoison: Software-unpoisoned free page %#lx\n",
pfn, &unpoison_rs);
- return 0;
+ goto unlock_mutex;
}
lock_page(page);
@@ -2050,7 +2054,9 @@ int unpoison_memory(unsigned long pfn)
if (freeit && !(pfn == my_zero_pfn(0) && page_count(p) == 1))
put_page(page);
- return 0;
+unlock_mutex:
+ mutex_unlock(&mf_mutex);
+ return ret;
}
EXPORT_SYMBOL(unpoison_memory);
@@ -2231,9 +2237,12 @@ int soft_offline_page(unsigned long pfn, int flags)
return -EIO;
}
+ mutex_lock(&mf_mutex);
+
if (PageHWPoison(page)) {
pr_info("%s: %#lx page already poisoned\n", __func__, pfn);
put_ref_page(ref_page);
+ mutex_unlock(&mf_mutex);
return 0;
}
@@ -2251,5 +2260,7 @@ int soft_offline_page(unsigned long pfn, int flags)
}
}
+ mutex_unlock(&mf_mutex);
+
return ret;
}
--
2.25.1
On Mon, Oct 25, 2021 at 4:06 PM Naoya Horiguchi
<[email protected]> wrote:
>
> From: Naoya Horiguchi <[email protected]>
>
> Originally mf_mutex is introduced to serialize multiple MCE events, but
> it's also helpful to exclude races among soft_offline_page() and
> unpoison_memory(). So apply mf_mutex to them.
My understanding is that it is not that useful to make unpoison run in
parallel with memory_failure() and soft offline, so they can all be
serialized by mf_mutex, and we could make the memory failure handler
and soft offline simpler.
If the above statement is correct, could you please tweak this commit
log to reflect it with patch #2 squashed into this patch?
> Signed-off-by: Naoya Horiguchi <[email protected]>
> ---
> ChangeLog v2:
> - add mutex_unlock() in "page already poisoned" path in soft_offline_page().
> (Thanks to Ding Hui)
> ---
> mm/memory-failure.c | 27 +++++++++++++++++++--------
> 1 file changed, 19 insertions(+), 8 deletions(-)
>
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index fa9dda95a2a2..97297edfbd8e 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -1628,6 +1628,8 @@ static int memory_failure_dev_pagemap(unsigned long pfn, int flags,
> return rc;
> }
>
> +static DEFINE_MUTEX(mf_mutex);
> +
> /**
> * memory_failure - Handle memory failure of a page.
> * @pfn: Page Number of the corrupted page
> @@ -1654,7 +1656,6 @@ int memory_failure(unsigned long pfn, int flags)
> int res = 0;
> unsigned long page_flags;
> bool retry = true;
> - static DEFINE_MUTEX(mf_mutex);
>
> if (!sysctl_memory_failure_recovery)
> panic("Memory failure on page %lx", pfn);
> @@ -1978,6 +1979,7 @@ int unpoison_memory(unsigned long pfn)
> struct page *page;
> struct page *p;
> int freeit = 0;
> + int ret = 0;
> unsigned long flags = 0;
> static DEFINE_RATELIMIT_STATE(unpoison_rs, DEFAULT_RATELIMIT_INTERVAL,
> DEFAULT_RATELIMIT_BURST);
> @@ -1988,28 +1990,30 @@ int unpoison_memory(unsigned long pfn)
> p = pfn_to_page(pfn);
> page = compound_head(p);
>
> + mutex_lock(&mf_mutex);
> +
> if (!PageHWPoison(p)) {
> unpoison_pr_info("Unpoison: Page was already unpoisoned %#lx\n",
> pfn, &unpoison_rs);
> - return 0;
> + goto unlock_mutex;
> }
>
> if (page_count(page) > 1) {
> unpoison_pr_info("Unpoison: Someone grabs the hwpoison page %#lx\n",
> pfn, &unpoison_rs);
> - return 0;
> + goto unlock_mutex;
> }
>
> if (page_mapped(page)) {
> unpoison_pr_info("Unpoison: Someone maps the hwpoison page %#lx\n",
> pfn, &unpoison_rs);
> - return 0;
> + goto unlock_mutex;
> }
>
> if (page_mapping(page)) {
> unpoison_pr_info("Unpoison: the hwpoison page has non-NULL mapping %#lx\n",
> pfn, &unpoison_rs);
> - return 0;
> + goto unlock_mutex;
> }
>
> /*
> @@ -2020,7 +2024,7 @@ int unpoison_memory(unsigned long pfn)
> if (!PageHuge(page) && PageTransHuge(page)) {
> unpoison_pr_info("Unpoison: Memory failure is now running on %#lx\n",
> pfn, &unpoison_rs);
> - return 0;
> + goto unlock_mutex;
> }
>
> if (!get_hwpoison_page(p, flags)) {
> @@ -2028,7 +2032,7 @@ int unpoison_memory(unsigned long pfn)
> num_poisoned_pages_dec();
> unpoison_pr_info("Unpoison: Software-unpoisoned free page %#lx\n",
> pfn, &unpoison_rs);
> - return 0;
> + goto unlock_mutex;
> }
>
> lock_page(page);
> @@ -2050,7 +2054,9 @@ int unpoison_memory(unsigned long pfn)
> if (freeit && !(pfn == my_zero_pfn(0) && page_count(p) == 1))
> put_page(page);
>
> - return 0;
> +unlock_mutex:
> + mutex_unlock(&mf_mutex);
> + return ret;
> }
> EXPORT_SYMBOL(unpoison_memory);
>
> @@ -2231,9 +2237,12 @@ int soft_offline_page(unsigned long pfn, int flags)
> return -EIO;
> }
>
> + mutex_lock(&mf_mutex);
> +
> if (PageHWPoison(page)) {
> pr_info("%s: %#lx page already poisoned\n", __func__, pfn);
> put_ref_page(ref_page);
> + mutex_unlock(&mf_mutex);
> return 0;
> }
>
> @@ -2251,5 +2260,7 @@ int soft_offline_page(unsigned long pfn, int flags)
> }
> }
>
> + mutex_unlock(&mf_mutex);
> +
> return ret;
> }
> --
> 2.25.1
>
On Tue, Oct 26, 2021 at 06:32:36PM -0700, Yang Shi wrote:
> On Mon, Oct 25, 2021 at 4:06 PM Naoya Horiguchi
> <[email protected]> wrote:
> >
> > From: Naoya Horiguchi <[email protected]>
> >
> > Originally mf_mutex is introduced to serialize multiple MCE events, but
> > it's also helpful to exclude races among soft_offline_page() and
> > unpoison_memory(). So apply mf_mutex to them.
>
> My understanding is it is not that useful to make unpoison run
> parallel with memory_failure() and soft offline, so they can be
> serialized by mf_mutex and we could make the memory failure handler
> and soft offline simpler.
Thank you for the suggestion, this sounds correct and more specific.
>
> If the above statement is correct, could you please tweak this commit
> log to reflect it with patch #2 squashed into this patch?
Sure, I'm thinking of revising it like below:

    Originally, mf_mutex was introduced to serialize multiple MCE events,
    but it is not that useful to allow unpoison to run in parallel with
    memory_failure() and soft offline. So apply mf_mutex to soft offline
    and unpoison. The memory failure handler and soft offline handler get
    simpler with this.
Thanks,
Naoya Horiguchi
On Mon, Oct 25, 2021 at 4:16 PM Naoya Horiguchi <[email protected]> wrote:
>
> From: Naoya Horiguchi <[email protected]>
>
> After recent soft-offline rework, error pages can be taken off from
> buddy allocator, but the existing unpoison_memory() does not properly
> undo the operation. Moreover, due to the recent change on
> __get_hwpoison_page(), get_page_unless_zero() is hardly called for
> hwpoisoned pages. So __get_hwpoison_page() highly likely returns zero
> (meaning to fail to grab page refcount) and unpoison just clears
> PG_hwpoison without releasing a refcount. That does not lead to a
> critical issue like kernel panic, but unpoisoned pages never get back to
> buddy (leaked permanently), which is not good.
>
> To (partially) fix this, we need to identify "taken off" pages from
> other types of hwpoisoned pages. We can't use refcount or page flags
> for this purpose, so a pseudo flag is defined by hacking ->private
> field. Someone might think that put_page() is enough to cancel
> taken-off pages, but the normal free path contains some operations not
> suitable for the current purpose, and can fire VM_BUG_ON().
>
> Note that unpoison_memory() is now supposed to be cancel hwpoison events
> injected only by madvise() or /sys/devices/system/memory/{hard,soft}_offline_page,
> not by MCE injection, so please don't try to use unpoison when testing
> with MCE injection.
>
> Signed-off-by: Naoya Horiguchi <[email protected]>
> ---
> ChangeLog v2:
> - unpoison_memory() returns as commented
> - explicitly avoids unpoisoning slab pages
> - separates internal pinning function into __get_unpoison_page()
> ---
> include/linux/mm.h | 1 +
> include/linux/page-flags.h | 4 ++
> mm/memory-failure.c | 104 ++++++++++++++++++++++++++++++-------
> mm/page_alloc.c | 23 ++++++++
> 4 files changed, 113 insertions(+), 19 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 71d886470d71..c7ad3fdfee7c 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -3219,6 +3219,7 @@ enum mf_flags {
> MF_ACTION_REQUIRED = 1 << 1,
> MF_MUST_KILL = 1 << 2,
> MF_SOFT_OFFLINE = 1 << 3,
> + MF_UNPOISON = 1 << 4,
> };
> extern int memory_failure(unsigned long pfn, int flags);
> extern void memory_failure_queue(unsigned long pfn, int flags);
> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> index b78f137acc62..8add006535f6 100644
> --- a/include/linux/page-flags.h
> +++ b/include/linux/page-flags.h
> @@ -522,7 +522,11 @@ PAGEFLAG_FALSE(Uncached, uncached)
> PAGEFLAG(HWPoison, hwpoison, PF_ANY)
> TESTSCFLAG(HWPoison, hwpoison, PF_ANY)
> #define __PG_HWPOISON (1UL << PG_hwpoison)
> +#define MAGIC_HWPOISON 0x4857504f49534f4e
> +extern void SetPageHWPoisonTakenOff(struct page *page);
> +extern void ClearPageHWPoisonTakenOff(struct page *page);
> extern bool take_page_off_buddy(struct page *page);
> +extern bool take_page_back_buddy(struct page *page);
> #else
> PAGEFLAG_FALSE(HWPoison, hwpoison)
> #define __PG_HWPOISON 0
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index 09f079987928..a6f80a670012 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -1160,6 +1160,22 @@ static int page_action(struct page_state *ps, struct page *p,
> return (result == MF_RECOVERED || result == MF_DELAYED) ? 0 : -EBUSY;
> }
>
> +static inline bool PageHWPoisonTakenOff(struct page *page)
> +{
> + return PageHWPoison(page) && page_private(page) == MAGIC_HWPOISON;
> +}
> +
> +void SetPageHWPoisonTakenOff(struct page *page)
> +{
> + set_page_private(page, MAGIC_HWPOISON);
> +}
> +
> +void ClearPageHWPoisonTakenOff(struct page *page)
> +{
> + if (PageHWPoison(page))
> + set_page_private(page, 0);
> +}
> +
> /*
> * Return true if a page type of a given page is supported by hwpoison
> * mechanism (while handling could fail), otherwise false. This function
> @@ -1262,6 +1278,27 @@ static int get_any_page(struct page *p, unsigned long flags)
> return ret;
> }
>
> +static int __get_unpoison_page(struct page *page)
> +{
> + struct page *head = compound_head(page);
> + int ret = 0;
> + bool hugetlb = false;
> +
> + ret = get_hwpoison_huge_page(head, &hugetlb);
> + if (hugetlb)
> + return ret;
> +
> + /*
> + * PageHWPoisonTakenOff pages are not only marked as PG_hwpoison,
> + * but also isolated from buddy freelist, so need to identify the
> + * state and have to cancel both operations to unpoison.
> + */
> + if (PageHWPoisonTakenOff(head))
> + return -EHWPOISON;
I don't think we can see a compound page here, but checking the head
page might be confusing since ->private is a per-subpage field, so it
might be better to use page instead of head.
> +
> + return get_page_unless_zero(head) ? 1 : 0;
> +}
> +
> /**
> * get_hwpoison_page() - Get refcount for memory error handling
> * @p: Raw error page (hit by memory error)
> @@ -1278,18 +1315,26 @@ static int get_any_page(struct page *p, unsigned long flags)
> * extra care for the error page's state (as done in __get_hwpoison_page()),
> * and has some retry logic in get_any_page().
> *
> + * When called from unpoison_memory(), the caller should already ensure that
> + * the given page has PG_hwpoison. So it's never reused for other page
> + * allocations, and __get_unpoison_page() never races with them.
> + *
> * Return: 0 on failure,
> * 1 on success for in-use pages in a well-defined state,
> * -EIO for pages on which we can not handle memory errors,
> * -EBUSY when get_hwpoison_page() has raced with page lifecycle
> - * operations like allocation and free.
> + * operations like allocation and free,
> + * -EHWPOISON when the page is hwpoisoned and taken off from buddy.
> */
> static int get_hwpoison_page(struct page *p, unsigned long flags)
> {
> int ret;
>
> zone_pcp_disable(page_zone(p));
> - ret = get_any_page(p, flags);
> + if (flags & MF_UNPOISON)
> + ret = __get_unpoison_page(p);
> + else
> + ret = get_any_page(p, flags);
> zone_pcp_enable(page_zone(p));
>
> return ret;
> @@ -1942,6 +1987,26 @@ core_initcall(memory_failure_init);
> pr_info(fmt, pfn); \
> })
>
> +static inline int clear_page_hwpoison(struct ratelimit_state *rs, struct page *p)
> +{
> + if (TestClearPageHWPoison(p)) {
> + unpoison_pr_info("Unpoison: Software-unpoisoned page %#lx\n",
> + page_to_pfn(p), rs);
> + num_poisoned_pages_dec();
> + return 0;
> + }
> + return -EBUSY;
I don't quite get why -EBUSY is returned. TestClear returns the old
value, so returning 0 means the flag was cleared before. It is fine,
right? I don't see why we have to return different values.
> +}
> +
> +static inline int unpoison_taken_off_page(struct ratelimit_state *rs,
> + struct page *p)
> +{
> + if (take_page_back_buddy(p) && !clear_page_hwpoison(rs, p))
If clear_page_hwpoison() is void, it can be moved into take_page_back_buddy().
> + return 0;
> + else
> + return -EBUSY;
> +}
> +
> /**
> * unpoison_memory - Unpoison a previously poisoned page
> * @pfn: Page number of the to be unpoisoned page
> @@ -1958,9 +2023,7 @@ int unpoison_memory(unsigned long pfn)
> {
> struct page *page;
> struct page *p;
> - int freeit = 0;
> - int ret = 0;
> - unsigned long flags = 0;
> + int ret = -EBUSY;
> static DEFINE_RATELIMIT_STATE(unpoison_rs, DEFAULT_RATELIMIT_INTERVAL,
> DEFAULT_RATELIMIT_BURST);
>
> @@ -1996,24 +2059,27 @@ int unpoison_memory(unsigned long pfn)
> goto unlock_mutex;
> }
>
> - if (!get_hwpoison_page(p, flags)) {
> - if (TestClearPageHWPoison(p))
> - num_poisoned_pages_dec();
> - unpoison_pr_info("Unpoison: Software-unpoisoned free page %#lx\n",
> - pfn, &unpoison_rs);
> + if (PageSlab(page))
> goto unlock_mutex;
> - }
>
> - if (TestClearPageHWPoison(page)) {
> - unpoison_pr_info("Unpoison: Software-unpoisoned page %#lx\n",
> - pfn, &unpoison_rs);
> - num_poisoned_pages_dec();
> - freeit = 1;
> - }
> + ret = get_hwpoison_page(p, MF_UNPOISON);
> + if (!ret) {
> + ret = clear_page_hwpoison(&unpoison_rs, p);
> + } else if (ret < 0) {
> + if (ret == -EHWPOISON) {
> + ret = unpoison_taken_off_page(&unpoison_rs, p);
> + } else
> + unpoison_pr_info("Unpoison: failed to grab page %#lx\n",
> + pfn, &unpoison_rs);
> + } else {
> + int freeit = clear_page_hwpoison(&unpoison_rs, p);
>
> - put_page(page);
> - if (freeit && !(pfn == my_zero_pfn(0) && page_count(p) == 1))
> put_page(page);
> + if (freeit && !(pfn == my_zero_pfn(0) && page_count(p) == 1)) {
> + put_page(page);
> + ret = 0;
> + }
> + }
>
> unlock_mutex:
> mutex_unlock(&mf_mutex);
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 4ea590646f89..b6e4cbb44c54 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -9466,6 +9466,7 @@ bool take_page_off_buddy(struct page *page)
> del_page_from_free_list(page_head, zone, page_order);
> break_down_buddy_pages(zone, page_head, page, 0,
> page_order, migratetype);
> + SetPageHWPoisonTakenOff(page);
> if (!is_migrate_isolate(migratetype))
> __mod_zone_freepage_state(zone, -1, migratetype);
> ret = true;
> @@ -9477,4 +9478,26 @@ bool take_page_off_buddy(struct page *page)
> spin_unlock_irqrestore(&zone->lock, flags);
> return ret;
> }
> +
> +/*
> + * Cancel takeoff done by take_page_off_buddy().
> + */
> +bool take_page_back_buddy(struct page *page)
put_page_back_buddy() sounds more natural?
> +{
> + struct zone *zone = page_zone(page);
> + unsigned long pfn = page_to_pfn(page);
> + unsigned long flags;
> + int migratetype = get_pfnblock_migratetype(page, pfn);
> + bool ret = false;
> +
> + spin_lock_irqsave(&zone->lock, flags);
> + if (put_page_testzero(page)) {
> + ClearPageHWPoisonTakenOff(page);
> + __free_one_page(page, pfn, zone, 0, migratetype, FPI_NONE);
> + ret = true;
> + }
> + spin_unlock_irqrestore(&zone->lock, flags);
> +
> + return ret;
> +}
> #endif
> --
> 2.25.1
On Tue, Oct 26, 2021 at 09:00:37PM -0700, Yang Shi wrote:
> On Mon, Oct 25, 2021 at 4:16 PM Naoya Horiguchi <[email protected]> wrote:
> >
> > From: Naoya Horiguchi <[email protected]>
> >
> > After recent soft-offline rework, error pages can be taken off from
> > buddy allocator, but the existing unpoison_memory() does not properly
> > undo the operation. Moreover, due to the recent change on
> > __get_hwpoison_page(), get_page_unless_zero() is hardly called for
> > hwpoisoned pages. So __get_hwpoison_page() highly likely returns zero
> > (meaning to fail to grab page refcount) and unpoison just clears
> > PG_hwpoison without releasing a refcount. That does not lead to a
> > critical issue like kernel panic, but unpoisoned pages never get back to
> > buddy (leaked permanently), which is not good.
> >
> > To (partially) fix this, we need to identify "taken off" pages from
> > other types of hwpoisoned pages. We can't use refcount or page flags
> > for this purpose, so a pseudo flag is defined by hacking ->private
> > field. Someone might think that put_page() is enough to cancel
> > taken-off pages, but the normal free path contains some operations not
> > suitable for the current purpose, and can fire VM_BUG_ON().
> >
> > Note that unpoison_memory() is now supposed to be cancel hwpoison events
> > injected only by madvise() or /sys/devices/system/memory/{hard,soft}_offline_page,
> > not by MCE injection, so please don't try to use unpoison when testing
> > with MCE injection.
> >
> > Signed-off-by: Naoya Horiguchi <[email protected]>
> > ---
> > ChangeLog v2:
> > - unpoison_memory() returns as commented
> > - explicitly avoids unpoisoning slab pages
> > - separates internal pinning function into __get_unpoison_page()
> > ---
...
> > @@ -1262,6 +1278,27 @@ static int get_any_page(struct page *p, unsigned long flags)
> > return ret;
> > }
> >
> > +static int __get_unpoison_page(struct page *page)
> > +{
> > + struct page *head = compound_head(page);
> > + int ret = 0;
> > + bool hugetlb = false;
> > +
> > + ret = get_hwpoison_huge_page(head, &hugetlb);
> > + if (hugetlb)
> > + return ret;
> > +
> > + /*
> > + * PageHWPoisonTakenOff pages are not only marked as PG_hwpoison,
> > + * but also isolated from buddy freelist, so need to identify the
> > + * state and have to cancel both operations to unpoison.
> > + */
> > + if (PageHWPoisonTakenOff(head))
> > + return -EHWPOISON;
>
> I don't think we could see compound page here, but checking head page
> might be confusing since private is per subpage, so it might be better
> to use page instead of head.
OK, I'll do this (and pass page to get_page_unless_zero() too).
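
i.e. something like the below (untested sketch of what I'm planning for
the next version):

	static int __get_unpoison_page(struct page *page)
	{
		struct page *head = compound_head(page);
		int ret = 0;
		bool hugetlb = false;

		ret = get_hwpoison_huge_page(head, &hugetlb);
		if (hugetlb)
			return ret;

		/*
		 * ->private is a per-subpage field, so check the raw error
		 * page rather than the head page here.
		 */
		if (PageHWPoisonTakenOff(page))
			return -EHWPOISON;

		return get_page_unless_zero(page) ? 1 : 0;
	}
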
> > +
> > + return get_page_unless_zero(head) ? 1 : 0;
> > +}
> > +
> > /**
> > * get_hwpoison_page() - Get refcount for memory error handling
> > * @p: Raw error page (hit by memory error)
> > @@ -1278,18 +1315,26 @@ static int get_any_page(struct page *p, unsigned long flags)
> > * extra care for the error page's state (as done in __get_hwpoison_page()),
> > * and has some retry logic in get_any_page().
> > *
> > + * When called from unpoison_memory(), the caller should already ensure that
> > + * the given page has PG_hwpoison. So it's never reused for other page
> > + * allocations, and __get_unpoison_page() never races with them.
> > + *
> > * Return: 0 on failure,
> > * 1 on success for in-use pages in a well-defined state,
> > * -EIO for pages on which we can not handle memory errors,
> > * -EBUSY when get_hwpoison_page() has raced with page lifecycle
> > - * operations like allocation and free.
> > + * operations like allocation and free,
> > + * -EHWPOISON when the page is hwpoisoned and taken off from buddy.
> > */
> > static int get_hwpoison_page(struct page *p, unsigned long flags)
> > {
> > int ret;
> >
> > zone_pcp_disable(page_zone(p));
> > - ret = get_any_page(p, flags);
> > + if (flags & MF_UNPOISON)
> > + ret = __get_unpoison_page(p);
> > + else
> > + ret = get_any_page(p, flags);
> > zone_pcp_enable(page_zone(p));
> >
> > return ret;
> > @@ -1942,6 +1987,26 @@ core_initcall(memory_failure_init);
> > pr_info(fmt, pfn); \
> > })
> >
> > +static inline int clear_page_hwpoison(struct ratelimit_state *rs, struct page *p)
> > +{
> > + if (TestClearPageHWPoison(p)) {
> > + unpoison_pr_info("Unpoison: Software-unpoisoned page %#lx\n",
> > + page_to_pfn(p), rs);
> > + num_poisoned_pages_dec();
> > + return 0;
> > + }
> > + return -EBUSY;
>
> I don't quite get why -EBUSY is returned. TestClear returns the old
> value, so returning 0 means the flag was cleared before. It is fine,
> right? I don't see why we have to return different values.
I think clear_page_hwpoison()'s return value is used in the path where
get_hwpoison_page(MF_UNPOISON) returns 1. But as you mentioned, -EBUSY
might not be a good return value. And I noticed that freeit's semantics
are accidentally reversed; that's just a bug, so I'll fix it.
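
The last branch would then look something like this (untested):

	} else {
		/* ret == 1, get_hwpoison_page() took an extra refcount */
		int freeit = !clear_page_hwpoison(&unpoison_rs, p);

		put_page(page);
		if (freeit && !(pfn == my_zero_pfn(0) && page_count(p) == 1)) {
			put_page(page);
			ret = 0;
		}
	}
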
>
> > +}
> > +
> > +static inline int unpoison_taken_off_page(struct ratelimit_state *rs,
> > + struct page *p)
> > +{
> > + if (take_page_back_buddy(p) && !clear_page_hwpoison(rs, p))
>
> If clear_page_hwpoison() is void, it can be moved into take_page_back_buddy().
>
> > + return 0;
> > + else
> > + return -EBUSY;
> > +}
> > +
> > /**
> > * unpoison_memory - Unpoison a previously poisoned page
> > * @pfn: Page number of the to be unpoisoned page
> > @@ -1958,9 +2023,7 @@ int unpoison_memory(unsigned long pfn)
> > {
> > struct page *page;
> > struct page *p;
> > - int freeit = 0;
> > - int ret = 0;
> > - unsigned long flags = 0;
> > + int ret = -EBUSY;
> > static DEFINE_RATELIMIT_STATE(unpoison_rs, DEFAULT_RATELIMIT_INTERVAL,
> > DEFAULT_RATELIMIT_BURST);
> >
> > @@ -1996,24 +2059,27 @@ int unpoison_memory(unsigned long pfn)
> > goto unlock_mutex;
> > }
> >
> > - if (!get_hwpoison_page(p, flags)) {
> > - if (TestClearPageHWPoison(p))
> > - num_poisoned_pages_dec();
> > - unpoison_pr_info("Unpoison: Software-unpoisoned free page %#lx\n",
> > - pfn, &unpoison_rs);
> > + if (PageSlab(page))
> > goto unlock_mutex;
> > - }
> >
> > - if (TestClearPageHWPoison(page)) {
> > - unpoison_pr_info("Unpoison: Software-unpoisoned page %#lx\n",
> > - pfn, &unpoison_rs);
> > - num_poisoned_pages_dec();
> > - freeit = 1;
> > - }
> > + ret = get_hwpoison_page(p, MF_UNPOISON);
> > + if (!ret) {
> > + ret = clear_page_hwpoison(&unpoison_rs, p);
> > + } else if (ret < 0) {
> > + if (ret == -EHWPOISON) {
> > + ret = unpoison_taken_off_page(&unpoison_rs, p);
> > + } else
> > + unpoison_pr_info("Unpoison: failed to grab page %#lx\n",
> > + pfn, &unpoison_rs);
> > + } else {
> > + int freeit = clear_page_hwpoison(&unpoison_rs, p);
> >
> > - put_page(page);
> > - if (freeit && !(pfn == my_zero_pfn(0) && page_count(p) == 1))
> > put_page(page);
> > + if (freeit && !(pfn == my_zero_pfn(0) && page_count(p) == 1)) {
> > + put_page(page);
> > + ret = 0;
> > + }
> > + }
> >
> > unlock_mutex:
> > mutex_unlock(&mf_mutex);
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 4ea590646f89..b6e4cbb44c54 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -9466,6 +9466,7 @@ bool take_page_off_buddy(struct page *page)
> > del_page_from_free_list(page_head, zone, page_order);
> > break_down_buddy_pages(zone, page_head, page, 0,
> > page_order, migratetype);
> > + SetPageHWPoisonTakenOff(page);
> > if (!is_migrate_isolate(migratetype))
> > __mod_zone_freepage_state(zone, -1, migratetype);
> > ret = true;
> > @@ -9477,4 +9478,26 @@ bool take_page_off_buddy(struct page *page)
> > spin_unlock_irqrestore(&zone->lock, flags);
> > return ret;
> > }
> > +
> > +/*
> > + * Cancel takeoff done by take_page_off_buddy().
> > + */
> > +bool take_page_back_buddy(struct page *page)
>
> put_page_back_buddy() sounds more natural?
Thanks for the suggestion, sounds nice, so I'll rename it.
Thanks,
Naoya Horiguchi