2018-07-13 03:27:11

by Naoya Horiguchi

Subject: [PATCH v1 0/2] mm: soft-offline: fix race against page allocation

Xishi Qiu recently reported an issue about a race on reusing the target pages
of soft offlining.
Discussion and analysis showed that we need to make sure that setting PG_hwpoison
is done in the right place, under zone->lock, for soft offline.
1/2 handles the free hugepage case, and 2/2 handles the free buddy page case.

Thanks,
Naoya Horiguchi
---
Summary:

Naoya Horiguchi (2):
mm: fix race on soft-offlining free huge pages
mm: soft-offline: close the race against page allocation

include/linux/page-flags.h | 5 +++++
include/linux/swapops.h | 10 ----------
mm/hugetlb.c | 11 +++++------
mm/memory-failure.c | 44 +++++++++++++++++++++++++++++++++++---------
mm/migrate.c | 4 +---
mm/page_alloc.c | 29 +++++++++++++++++++++++++++++
6 files changed, 75 insertions(+), 28 deletions(-)


2018-07-13 03:27:13

by Naoya Horiguchi

Subject: [PATCH v1 1/2] mm: fix race on soft-offlining free huge pages

There's a race condition between soft offline and hugetlb_fault which
causes unexpected process killing and/or hugetlb allocation failure.

The process killing is caused by the following flow:

CPU 0                     CPU 1                     CPU 2

soft offline
  get_any_page
  // find the hugetlb is free
                          mmap a hugetlb file
                          page fault
                            ...
                            hugetlb_fault
                              hugetlb_no_page
                                alloc_huge_page
                                // succeed
soft_offline_free_page
// set hwpoison flag
                                                    mmap the hugetlb file
                                                    page fault
                                                      ...
                                                      hugetlb_fault
                                                        hugetlb_no_page
                                                          find_lock_page
                                                          return VM_FAULT_HWPOISON
                                                      mm_fault_error
                                                        do_sigbus
                                                        // kill the process


The hugetlb allocation failure comes from the following flow:

CPU 0                     CPU 1

mmap a hugetlb file
// reserve all free pages but don't fault-in
                          soft offline
                            get_any_page
                            // find the hugetlb is free
                            soft_offline_free_page
                            // set hwpoison flag
                            dissolve_free_huge_page
                            // fail because all free hugepages are reserved
page fault
  ...
  hugetlb_fault
    hugetlb_no_page
      alloc_huge_page
        ...
        dequeue_huge_page_node_exact
        // ignore hwpoisoned hugepage
        // and finally fail due to no-mem

The root cause of this is that the current soft-offline code is written
on the assumption that the PageHWPoison flag should be set first to
avoid accessing the corrupted data. This makes sense for memory_failure()
or hard offline, but not for soft offline, because soft offline is
about corrected (not uncorrected) errors and is safe from data loss.
This patch changes the soft offline semantics so that the PageHWPoison
flag is set only after containment of the error page completes successfully.

Reported-by: Xishi Qiu <[email protected]>
Suggested-by: Xishi Qiu <[email protected]>
Signed-off-by: Naoya Horiguchi <[email protected]>
---
mm/hugetlb.c | 11 +++++------
mm/memory-failure.c | 22 ++++++++++++++++------
mm/migrate.c | 2 --
3 files changed, 21 insertions(+), 14 deletions(-)

diff --git v4.18-rc4-mmotm-2018-07-10-16-50/mm/hugetlb.c v4.18-rc4-mmotm-2018-07-10-16-50_patched/mm/hugetlb.c
index 430be42..937c142 100644
--- v4.18-rc4-mmotm-2018-07-10-16-50/mm/hugetlb.c
+++ v4.18-rc4-mmotm-2018-07-10-16-50_patched/mm/hugetlb.c
@@ -1479,22 +1479,20 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
/*
* Dissolve a given free hugepage into free buddy pages. This function does
* nothing for in-use (including surplus) hugepages. Returns -EBUSY if the
- * number of free hugepages would be reduced below the number of reserved
- * hugepages.
+ * dissolution fails because a given page is not a free hugepage, or because
+ * free hugepages are fully reserved.
*/
int dissolve_free_huge_page(struct page *page)
{
- int rc = 0;
+ int rc = -EBUSY;

spin_lock(&hugetlb_lock);
if (PageHuge(page) && !page_count(page)) {
struct page *head = compound_head(page);
struct hstate *h = page_hstate(head);
int nid = page_to_nid(head);
- if (h->free_huge_pages - h->resv_huge_pages == 0) {
- rc = -EBUSY;
+ if (h->free_huge_pages - h->resv_huge_pages == 0)
goto out;
- }
/*
* Move PageHWPoison flag from head page to the raw error page,
* which makes any subpages rather than the error page reusable.
@@ -1508,6 +1506,7 @@ int dissolve_free_huge_page(struct page *page)
h->free_huge_pages_node[nid]--;
h->max_huge_pages--;
update_and_free_page(h, head);
+ rc = 0;
}
out:
spin_unlock(&hugetlb_lock);
diff --git v4.18-rc4-mmotm-2018-07-10-16-50/mm/memory-failure.c v4.18-rc4-mmotm-2018-07-10-16-50_patched/mm/memory-failure.c
index 9d142b9..c63d982 100644
--- v4.18-rc4-mmotm-2018-07-10-16-50/mm/memory-failure.c
+++ v4.18-rc4-mmotm-2018-07-10-16-50_patched/mm/memory-failure.c
@@ -1598,8 +1598,18 @@ static int soft_offline_huge_page(struct page *page, int flags)
if (ret > 0)
ret = -EIO;
} else {
- if (PageHuge(page))
- dissolve_free_huge_page(page);
+ /*
+ * We set PG_hwpoison only when the migration source hugepage
+ * was successfully dissolved, because otherwise hwpoisoned
+ * hugepage remains on free hugepage list, then userspace will
+ * find it as SIGBUS by allocation failure. That's not expected
+ * in soft-offlining.
+ */
+ ret = dissolve_free_huge_page(page);
+ if (!ret) {
+ if (set_hwpoison_free_buddy_page(page))
+ num_poisoned_pages_inc();
+ }
}
return ret;
}
@@ -1715,13 +1725,13 @@ static int soft_offline_in_use_page(struct page *page, int flags)

static void soft_offline_free_page(struct page *page)
{
+ int rc = 0;
struct page *head = compound_head(page);

- if (!TestSetPageHWPoison(head)) {
+ if (PageHuge(head))
+ rc = dissolve_free_huge_page(page);
+ if (!rc && !TestSetPageHWPoison(page))
num_poisoned_pages_inc();
- if (PageHuge(head))
- dissolve_free_huge_page(page);
- }
}

/**
diff --git v4.18-rc4-mmotm-2018-07-10-16-50/mm/migrate.c v4.18-rc4-mmotm-2018-07-10-16-50_patched/mm/migrate.c
index 198af42..3ae213b 100644
--- v4.18-rc4-mmotm-2018-07-10-16-50/mm/migrate.c
+++ v4.18-rc4-mmotm-2018-07-10-16-50_patched/mm/migrate.c
@@ -1318,8 +1318,6 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
out:
if (rc != -EAGAIN)
putback_active_hugepage(hpage);
- if (reason == MR_MEMORY_FAILURE && !test_set_page_hwpoison(hpage))
- num_poisoned_pages_inc();

/*
* If migration was not successful and there's a freeing callback, use
--
2.7.0


2018-07-13 03:27:28

by Naoya Horiguchi

Subject: [PATCH v1 2/2] mm: soft-offline: close the race against page allocation

A process can be killed with SIGBUS(BUS_MCEERR_AR) when it tries to
allocate a page that was just freed in the course of soft-offline.
This is undesirable because soft-offline (which is about corrected errors)
is less aggressive than hard-offline (which is about uncorrected errors),
and we can make soft-offline fail and keep using the page for a good
reason like "system is busy."

The two main changes of this patch are:

- setting the migrate type of the target page to MIGRATE_ISOLATE. As done
in free_unref_page_commit(), this makes the kernel bypass the pcplist when
freeing the page, so we can assume that the page is in the freelist just
after put_page() returns;

- setting PG_hwpoison on the free page under zone->lock, which protects
the freelists; this allows us to avoid setting PG_hwpoison on a page
that is about to be allocated.

Reported-by: Xishi Qiu <[email protected]>
Signed-off-by: Naoya Horiguchi <[email protected]>
---
include/linux/page-flags.h | 5 +++++
include/linux/swapops.h | 10 ----------
mm/memory-failure.c | 26 +++++++++++++++++++++-----
mm/migrate.c | 2 +-
mm/page_alloc.c | 29 +++++++++++++++++++++++++++++
5 files changed, 56 insertions(+), 16 deletions(-)

diff --git v4.18-rc4-mmotm-2018-07-10-16-50/include/linux/page-flags.h v4.18-rc4-mmotm-2018-07-10-16-50_patched/include/linux/page-flags.h
index 901943e..74bee8c 100644
--- v4.18-rc4-mmotm-2018-07-10-16-50/include/linux/page-flags.h
+++ v4.18-rc4-mmotm-2018-07-10-16-50_patched/include/linux/page-flags.h
@@ -369,8 +369,13 @@ PAGEFLAG_FALSE(Uncached)
PAGEFLAG(HWPoison, hwpoison, PF_ANY)
TESTSCFLAG(HWPoison, hwpoison, PF_ANY)
#define __PG_HWPOISON (1UL << PG_hwpoison)
+extern bool set_hwpoison_free_buddy_page(struct page *page);
#else
PAGEFLAG_FALSE(HWPoison)
+static inline bool set_hwpoison_free_buddy_page(struct page *page)
+{
+ return false;
+}
#define __PG_HWPOISON 0
#endif

diff --git v4.18-rc4-mmotm-2018-07-10-16-50/include/linux/swapops.h v4.18-rc4-mmotm-2018-07-10-16-50_patched/include/linux/swapops.h
index 9c0eb4d..fe8e08b 100644
--- v4.18-rc4-mmotm-2018-07-10-16-50/include/linux/swapops.h
+++ v4.18-rc4-mmotm-2018-07-10-16-50_patched/include/linux/swapops.h
@@ -335,11 +335,6 @@ static inline int is_hwpoison_entry(swp_entry_t entry)
return swp_type(entry) == SWP_HWPOISON;
}

-static inline bool test_set_page_hwpoison(struct page *page)
-{
- return TestSetPageHWPoison(page);
-}
-
static inline void num_poisoned_pages_inc(void)
{
atomic_long_inc(&num_poisoned_pages);
@@ -362,11 +357,6 @@ static inline int is_hwpoison_entry(swp_entry_t swp)
return 0;
}

-static inline bool test_set_page_hwpoison(struct page *page)
-{
- return false;
-}
-
static inline void num_poisoned_pages_inc(void)
{
}
diff --git v4.18-rc4-mmotm-2018-07-10-16-50/mm/memory-failure.c v4.18-rc4-mmotm-2018-07-10-16-50_patched/mm/memory-failure.c
index c63d982..794687a 100644
--- v4.18-rc4-mmotm-2018-07-10-16-50/mm/memory-failure.c
+++ v4.18-rc4-mmotm-2018-07-10-16-50_patched/mm/memory-failure.c
@@ -57,6 +57,7 @@
#include <linux/mm_inline.h>
#include <linux/kfifo.h>
#include <linux/ratelimit.h>
+#include <linux/page-isolation.h>
#include "internal.h"
#include "ras/ras_event.h"

@@ -1697,6 +1698,7 @@ static int __soft_offline_page(struct page *page, int flags)
static int soft_offline_in_use_page(struct page *page, int flags)
{
int ret;
+ int mt;
struct page *hpage = compound_head(page);

if (!PageHuge(page) && PageTransHuge(hpage)) {
@@ -1715,23 +1717,37 @@ static int soft_offline_in_use_page(struct page *page, int flags)
put_hwpoison_page(hpage);
}

+ /*
+ * Setting MIGRATE_ISOLATE here ensures that the page will be linked
+ * to free list immediately (not via pcplist) when released after
+ * successful page migration. Otherwise we can't guarantee that the
+ * page is really free after put_page() returns, so
+ * set_hwpoison_free_buddy_page() is highly likely to fail.
+ */
+ mt = get_pageblock_migratetype(page);
+ set_pageblock_migratetype(page, MIGRATE_ISOLATE);
if (PageHuge(page))
ret = soft_offline_huge_page(page, flags);
else
ret = __soft_offline_page(page, flags);
-
+ set_pageblock_migratetype(page, mt);
return ret;
}

-static void soft_offline_free_page(struct page *page)
+static int soft_offline_free_page(struct page *page)
{
int rc = 0;
struct page *head = compound_head(page);

if (PageHuge(head))
rc = dissolve_free_huge_page(page);
- if (!rc && !TestSetPageHWPoison(page))
- num_poisoned_pages_inc();
+ if (!rc) {
+ if (set_hwpoison_free_buddy_page(page))
+ num_poisoned_pages_inc();
+ else
+ rc = -EBUSY;
+ }
+ return rc;
}

/**
@@ -1775,7 +1791,7 @@ int soft_offline_page(struct page *page, int flags)
if (ret > 0)
ret = soft_offline_in_use_page(page, flags);
else if (ret == 0)
- soft_offline_free_page(page);
+ ret = soft_offline_free_page(page);

return ret;
}
diff --git v4.18-rc4-mmotm-2018-07-10-16-50/mm/migrate.c v4.18-rc4-mmotm-2018-07-10-16-50_patched/mm/migrate.c
index 3ae213b..e772323 100644
--- v4.18-rc4-mmotm-2018-07-10-16-50/mm/migrate.c
+++ v4.18-rc4-mmotm-2018-07-10-16-50_patched/mm/migrate.c
@@ -1199,7 +1199,7 @@ static ICE_noinline int unmap_and_move(new_page_t get_new_page,
* intentionally. Although it's rather weird,
* it's how HWPoison flag works at the moment.
*/
- if (!test_set_page_hwpoison(page))
+ if (set_hwpoison_free_buddy_page(page))
num_poisoned_pages_inc();
}
} else {
diff --git v4.18-rc4-mmotm-2018-07-10-16-50/mm/page_alloc.c v4.18-rc4-mmotm-2018-07-10-16-50_patched/mm/page_alloc.c
index 607deff..3c76d40 100644
--- v4.18-rc4-mmotm-2018-07-10-16-50/mm/page_alloc.c
+++ v4.18-rc4-mmotm-2018-07-10-16-50_patched/mm/page_alloc.c
@@ -8027,3 +8027,32 @@ bool is_free_buddy_page(struct page *page)

return order < MAX_ORDER;
}
+
+#ifdef CONFIG_MEMORY_FAILURE
+/*
+ * Set PG_hwpoison flag if a given page is confirmed to be a free page
+ * within zone lock, which prevents the race against page allocation.
+ */
+bool set_hwpoison_free_buddy_page(struct page *page)
+{
+ struct zone *zone = page_zone(page);
+ unsigned long pfn = page_to_pfn(page);
+ unsigned long flags;
+ unsigned int order;
+ bool hwpoisoned = false;
+
+ spin_lock_irqsave(&zone->lock, flags);
+ for (order = 0; order < MAX_ORDER; order++) {
+ struct page *page_head = page - (pfn & ((1 << order) - 1));
+
+ if (PageBuddy(page_head) && page_order(page_head) >= order) {
+ if (!TestSetPageHWPoison(page))
+ hwpoisoned = true;
+ break;
+ }
+ }
+ spin_unlock_irqrestore(&zone->lock, flags);
+
+ return hwpoisoned;
+}
+#endif
--
2.7.0


2018-07-13 20:36:36

by Andrew Morton

Subject: Re: [PATCH v1 1/2] mm: fix race on soft-offlining free huge pages

On Fri, 13 Jul 2018 12:26:05 +0900 Naoya Horiguchi <[email protected]> wrote:

> There's a race condition between soft offline and hugetlb_fault which
> causes unexpected process killing and/or hugetlb allocation failure.
>
> The process killing is caused by the following flow:
>
> CPU 0                     CPU 1                     CPU 2
>
> soft offline
>   get_any_page
>   // find the hugetlb is free
>                           mmap a hugetlb file
>                           page fault
>                             ...
>                             hugetlb_fault
>                               hugetlb_no_page
>                                 alloc_huge_page
>                                 // succeed
> soft_offline_free_page
> // set hwpoison flag
>                                                     mmap the hugetlb file
>                                                     page fault
>                                                       ...
>                                                       hugetlb_fault
>                                                         hugetlb_no_page
>                                                           find_lock_page
>                                                           return VM_FAULT_HWPOISON
>                                                       mm_fault_error
>                                                         do_sigbus
>                                                         // kill the process
>
>
> The hugetlb allocation failure comes from the following flow:
>
> CPU 0                     CPU 1
>
> mmap a hugetlb file
> // reserve all free pages but don't fault-in
>                           soft offline
>                             get_any_page
>                             // find the hugetlb is free
>                             soft_offline_free_page
>                             // set hwpoison flag
>                             dissolve_free_huge_page
>                             // fail because all free hugepages are reserved
> page fault
>   ...
>   hugetlb_fault
>     hugetlb_no_page
>       alloc_huge_page
>         ...
>         dequeue_huge_page_node_exact
>         // ignore hwpoisoned hugepage
>         // and finally fail due to no-mem
>
> The root cause of this is that the current soft-offline code is written
> on the assumption that the PageHWPoison flag should be set first to
> avoid accessing the corrupted data. This makes sense for memory_failure()
> or hard offline, but not for soft offline, because soft offline is
> about corrected (not uncorrected) errors and is safe from data loss.
> This patch changes the soft offline semantics so that the PageHWPoison
> flag is set only after containment of the error page completes successfully.
>
> ...
>
> --- v4.18-rc4-mmotm-2018-07-10-16-50/mm/memory-failure.c
> +++ v4.18-rc4-mmotm-2018-07-10-16-50_patched/mm/memory-failure.c
> @@ -1598,8 +1598,18 @@ static int soft_offline_huge_page(struct page *page, int flags)
> if (ret > 0)
> ret = -EIO;
> } else {
> - if (PageHuge(page))
> - dissolve_free_huge_page(page);
> + /*
> + * We set PG_hwpoison only when the migration source hugepage
> + * was successfully dissolved, because otherwise hwpoisoned
> + * hugepage remains on free hugepage list, then userspace will
> + * find it as SIGBUS by allocation failure. That's not expected
> + * in soft-offlining.
> + */

This comment is unclear. What happens if there's a hwpoisoned page on
the freelist? The allocator just skips it and looks for another page?
Or does the allocator return the poisoned page, it gets mapped and
userspace gets a SIGBUS when accessing it? If the latter (or the
former!), why does the comment mention allocation failure?




2018-07-13 20:41:12

by Andrew Morton

Subject: Re: [PATCH v1 2/2] mm: soft-offline: close the race against page allocation

On Fri, 13 Jul 2018 12:26:06 +0900 Naoya Horiguchi <[email protected]> wrote:

> A process can be killed with SIGBUS(BUS_MCEERR_AR) when it tries to
> allocate a page that was just freed in the course of soft-offline.
> This is undesirable because soft-offline (which is about corrected errors)
> is less aggressive than hard-offline (which is about uncorrected errors),
> and we can make soft-offline fail and keep using the page for a good
> reason like "system is busy."
>
> Two main changes of this patch are:
>
> - setting the migrate type of the target page to MIGRATE_ISOLATE. As done
> in free_unref_page_commit(), this makes the kernel bypass the pcplist when
> freeing the page, so we can assume that the page is in the freelist just
> after put_page() returns;
>
> - setting PG_hwpoison on the free page under zone->lock, which protects
> the freelists; this allows us to avoid setting PG_hwpoison on a page
> that is about to be allocated.
>
>
> ...
>
> +
> +#ifdef CONFIG_MEMORY_FAILURE
> +/*
> + * Set PG_hwpoison flag if a given page is confirmed to be a free page
> + * within zone lock, which prevents the race against page allocation.
> + */

I think this is clearer?

--- a/mm/page_alloc.c~mm-soft-offline-close-the-race-against-page-allocation-fix
+++ a/mm/page_alloc.c
@@ -8039,8 +8039,9 @@ bool is_free_buddy_page(struct page *pag

#ifdef CONFIG_MEMORY_FAILURE
/*
- * Set PG_hwpoison flag if a given page is confirmed to be a free page
- * within zone lock, which prevents the race against page allocation.
+ * Set PG_hwpoison flag if a given page is confirmed to be a free page. This
+ * test is performed under the zone lock to prevent a race against page
+ * allocation.
*/
bool set_hwpoison_free_buddy_page(struct page *page)
{

> +bool set_hwpoison_free_buddy_page(struct page *page)
> +{
> + struct zone *zone = page_zone(page);
> + unsigned long pfn = page_to_pfn(page);
> + unsigned long flags;
> + unsigned int order;
> + bool hwpoisoned = false;
> +
> + spin_lock_irqsave(&zone->lock, flags);
> + for (order = 0; order < MAX_ORDER; order++) {
> + struct page *page_head = page - (pfn & ((1 << order) - 1));
> +
> + if (PageBuddy(page_head) && page_order(page_head) >= order) {
> + if (!TestSetPageHWPoison(page))
> + hwpoisoned = true;
> + break;
> + }
> + }
> + spin_unlock_irqrestore(&zone->lock, flags);
> +
> + return hwpoisoned;
> +}
> +#endif


2018-07-17 00:29:54

by Naoya Horiguchi

Subject: Re: [PATCH v1 1/2] mm: fix race on soft-offlining free huge pages

On Fri, Jul 13, 2018 at 01:35:45PM -0700, Andrew Morton wrote:
> On Fri, 13 Jul 2018 12:26:05 +0900 Naoya Horiguchi <[email protected]> wrote:
>
> > There's a race condition between soft offline and hugetlb_fault which
> > causes unexpected process killing and/or hugetlb allocation failure.
> >
> > The process killing is caused by the following flow:
> >
> > CPU 0                     CPU 1                     CPU 2
> >
> > soft offline
> >   get_any_page
> >   // find the hugetlb is free
> >                           mmap a hugetlb file
> >                           page fault
> >                             ...
> >                             hugetlb_fault
> >                               hugetlb_no_page
> >                                 alloc_huge_page
> >                                 // succeed
> > soft_offline_free_page
> > // set hwpoison flag
> >                                                     mmap the hugetlb file
> >                                                     page fault
> >                                                       ...
> >                                                       hugetlb_fault
> >                                                         hugetlb_no_page
> >                                                           find_lock_page
> >                                                           return VM_FAULT_HWPOISON
> >                                                       mm_fault_error
> >                                                         do_sigbus
> >                                                         // kill the process
> >
> >
> > The hugetlb allocation failure comes from the following flow:
> >
> > CPU 0                     CPU 1
> >
> > mmap a hugetlb file
> > // reserve all free pages but don't fault-in
> >                           soft offline
> >                             get_any_page
> >                             // find the hugetlb is free
> >                             soft_offline_free_page
> >                             // set hwpoison flag
> >                             dissolve_free_huge_page
> >                             // fail because all free hugepages are reserved
> > page fault
> >   ...
> >   hugetlb_fault
> >     hugetlb_no_page
> >       alloc_huge_page
> >         ...
> >         dequeue_huge_page_node_exact
> >         // ignore hwpoisoned hugepage
> >         // and finally fail due to no-mem
> >
> > The root cause of this is that the current soft-offline code is written
> > on the assumption that the PageHWPoison flag should be set first to
> > avoid accessing the corrupted data. This makes sense for memory_failure()
> > or hard offline, but not for soft offline, because soft offline is
> > about corrected (not uncorrected) errors and is safe from data loss.
> > This patch changes the soft offline semantics so that the PageHWPoison
> > flag is set only after containment of the error page completes successfully.
> >
> > ...
> >
> > --- v4.18-rc4-mmotm-2018-07-10-16-50/mm/memory-failure.c
> > +++ v4.18-rc4-mmotm-2018-07-10-16-50_patched/mm/memory-failure.c
> > @@ -1598,8 +1598,18 @@ static int soft_offline_huge_page(struct page *page, int flags)
> > if (ret > 0)
> > ret = -EIO;
> > } else {
> > - if (PageHuge(page))
> > - dissolve_free_huge_page(page);
> > + /*
> > + * We set PG_hwpoison only when the migration source hugepage
> > + * was successfully dissolved, because otherwise hwpoisoned
> > + * hugepage remains on free hugepage list, then userspace will
> > + * find it as SIGBUS by allocation failure. That's not expected
> > + * in soft-offlining.
> > + */
>
> This comment is unclear. What happens if there's a hwpoisoned page on
> the freelist? The allocator just skips it and looks for another page?

Yes, this is what the allocator does.

> Or does the allocator return the poisoned page, it gets mapped and
> userspace gets a SIGBUS when accessing it? If the latter (or the
> former!), why does the comment mention allocation failure?

The mention of allocation failure was unclear; I'd like to replace it
with the text below. Is that clearer?

+ /*
+ * We set PG_hwpoison only when the migration source hugepage
+ * was successfully dissolved, because otherwise hwpoisoned
+ * hugepage remains on free hugepage list. The allocator ignores
+ * such a hwpoisoned page so it's never allocated, but it could
+ * kill a process because of no-memory rather than hwpoison.
+ * Soft-offline never impacts the userspace, so this is undesired.
+ */

Thanks,
Naoya Horiguchi

2018-07-17 00:35:08

by Naoya Horiguchi

Subject: Re: [PATCH v1 2/2] mm: soft-offline: close the race against page allocation

On Fri, Jul 13, 2018 at 01:40:02PM -0700, Andrew Morton wrote:
> On Fri, 13 Jul 2018 12:26:06 +0900 Naoya Horiguchi <[email protected]> wrote:
>
> > A process can be killed with SIGBUS(BUS_MCEERR_AR) when it tries to
> > allocate a page that was just freed in the course of soft-offline.
> > This is undesirable because soft-offline (which is about corrected errors)
> > is less aggressive than hard-offline (which is about uncorrected errors),
> > and we can make soft-offline fail and keep using the page for a good
> > reason like "system is busy."
> >
> > Two main changes of this patch are:
> >
> > - setting the migrate type of the target page to MIGRATE_ISOLATE. As done
> > in free_unref_page_commit(), this makes the kernel bypass the pcplist when
> > freeing the page, so we can assume that the page is in the freelist just
> > after put_page() returns;
> >
> > - setting PG_hwpoison on the free page under zone->lock, which protects
> > the freelists; this allows us to avoid setting PG_hwpoison on a page
> > that is about to be allocated.
> >
> >
> > ...
> >
> > +
> > +#ifdef CONFIG_MEMORY_FAILURE
> > +/*
> > + * Set PG_hwpoison flag if a given page is confirmed to be a free page
> > + * within zone lock, which prevents the race against page allocation.
> > + */
>
> I think this is clearer?
>
> --- a/mm/page_alloc.c~mm-soft-offline-close-the-race-against-page-allocation-fix
> +++ a/mm/page_alloc.c
> @@ -8039,8 +8039,9 @@ bool is_free_buddy_page(struct page *pag
>
> #ifdef CONFIG_MEMORY_FAILURE
> /*
> - * Set PG_hwpoison flag if a given page is confirmed to be a free page
> - * within zone lock, which prevents the race against page allocation.
> + * Set PG_hwpoison flag if a given page is confirmed to be a free page. This
> + * test is performed under the zone lock to prevent a race against page
> + * allocation.

Yes, I like it.

Thanks,
Naoya Horiguchi