2022-08-09 15:09:30

by Charan Teja Kalla

Subject: [PATCH V3] mm: fix use-after free of page_ext after race with memory-offline

The below is one path where a race between page_ext access and offlining
of the respective memory block will cause a use-after-free on the access
of the page_ext structure.

process1                              process2
---------                             ---------
a) doing /proc/page_owner             doing memory offline
                                      through offline_pages.

b) PageBuddy check is failed
   thus proceed to get the
   page_owner information
   through page_ext access.
page_ext = lookup_page_ext(page);

                                      migrate_pages();
                                      .................
                                      Since all pages are successfully
                                      migrated as part of the offline
                                      operation, send MEM_OFFLINE
                                      notification where for page_ext
                                      it calls:
                                      offline_page_ext()-->
                                      __free_page_ext()-->
                                      free_page_ext()-->
                                      vfree(ms->page_ext)
                                      mem_section->page_ext = NULL

c) Check for the PAGE_EXT flags
   in the page_ext->flags access;
   this results in the use-after-free
   (leading to translation faults).

As mentioned above, there is really no synchronization between page_ext
access and its freeing during memory offline.

The memory offline steps (roughly) on a memory block are as below:
1) Isolate all the pages.
2) while(1)
   try to free the pages to buddy (->free_list[MIGRATE_ISOLATE]).
3) Delete the pages from this buddy list.
4) Then free page_ext. (Note: the struct page is still alive, as it is
freed only during hot remove of the memory, which frees the memmap; a
step the user might not perform.)

This design leads to a state where the struct page is alive but the struct
page_ext is freed, although the latter is ideally part of the former,
merely extending the page flags (check [3] for why this design was
chosen).

The above mentioned race is just one example __but the problem persists
in the other paths too involving page_ext->flags access (eg:
page_is_idle())__. Note that offline waits till the last reference on the
page goes down, i.e. any path that has taken a refcount on the page makes
the memory offline operation wait. E.g. in the migrate_pages()
operation, an extra refcount is taken on the pages under migration before
page_owner is copied by accessing page_ext, so that path is not affected.

Fix those paths where offline races with page_ext access by synchronizing
them with the RCU lock. This is achieved in 3 steps (see the sketch below):
1) Invalidate all the page_ext's of the sections of a memory block by
storing a flag in the LSB of mem_section->page_ext.

2) Wait till all the existing readers finish working with the
->page_ext's, using synchronize_rcu(). Any parallel process that starts
after this call will not get a page_ext, through lookup_page_ext(), for
the block on which the offline operation is being performed.

3) Now safely free all sections' ->page_ext's of the block on which the
offline operation is being performed.

Note: If synchronize_rcu() takes time then optimizations can be done in
this path through call_rcu()[2].
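
In a nutshell, the offline side then looks as below (the wrapper
offline_block_page_ext() is only illustrative; the actual code is
offline_page_ext() in this patch):

static void offline_block_page_ext(unsigned long start, unsigned long end)
{
	unsigned long pfn;

	/* 1) Invalidate: lookups that start after this return NULL */
	for (pfn = start; pfn < end; pfn += PAGES_PER_SECTION)
		__invalidate_page_ext(pfn);

	/* 2) Wait for readers that already obtained a page_ext under RCU */
	synchronize_rcu();

	/* 3) No new reader can reach these page_ext arrays; free them */
	for (pfn = start; pfn < end; pfn += PAGES_PER_SECTION)
		__free_page_ext(pfn);
}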

Thanks to David Hildenbrand for his views/suggestions on the initial
discussion [1] and to Pavan Kondeti for various inputs on this patch.

[1] https://lore.kernel.org/linux-mm/[email protected]/
[2] https://lore.kernel.org/all/[email protected]/T/#u
[3] https://lore.kernel.org/all/[email protected]/

Suggested-by: David Hildenbrand <[email protected]>
Suggested-by: Michal Hocko <[email protected]>
Signed-off-by: Charan Teja Kalla <[email protected]>
---
Changes in V3:
o Exposed page_ext_get/put() and hid lookup_page_ext() behind them for getting page_ext information.
o Converted the call sites to use the single interface, i.e. page_ext_get/put().
o Placed rcu_read_lock() held checks where required.
o Improved the commit message.

Changes in V2:
o Use only page_ext_get/put() to get the page_ext in the
required paths. Add proper comments for them.
o Use synchronize_rcu() only once instead of calling it for
every mem_section::page_ext of a memory block.
o Freed page_ext in 3 steps: invalidate, wait till all the
users are finished, and then finally free page_ext.
o https://lore.kernel.org/all/[email protected]/

Changes in V1:
o Used the RCU lock while accessing the page_ext in the paths that
can race with the memory offline operation.
o Introduced (get|put)_page_ext() function to get the page_ext of page.
o https://lore.kernel.org/all/[email protected]/

include/linux/page_ext.h | 17 +++++----
include/linux/page_idle.h | 34 ++++++++++++------
mm/page_ext.c | 92 +++++++++++++++++++++++++++++++++++++++++++----
mm/page_owner.c | 74 ++++++++++++++++++++++++++++----------
mm/page_table_check.c | 10 ++++--
5 files changed, 184 insertions(+), 43 deletions(-)

diff --git a/include/linux/page_ext.h b/include/linux/page_ext.h
index fabb2e1..0e259da 100644
--- a/include/linux/page_ext.h
+++ b/include/linux/page_ext.h
@@ -55,7 +55,8 @@ static inline void page_ext_init(void)
}
#endif

-struct page_ext *lookup_page_ext(const struct page *page);
+extern struct page_ext *page_ext_get(struct page *page);
+extern void page_ext_put(void);

static inline struct page_ext *page_ext_next(struct page_ext *curr)
{
@@ -71,11 +72,6 @@ static inline void pgdat_page_ext_init(struct pglist_data *pgdat)
{
}

-static inline struct page_ext *lookup_page_ext(const struct page *page)
-{
- return NULL;
-}
-
static inline void page_ext_init(void)
{
}
@@ -87,5 +83,14 @@ static inline void page_ext_init_flatmem_late(void)
static inline void page_ext_init_flatmem(void)
{
}
+
+static inline struct page *page_ext_get(struct page *page)
+{
+ return NULL;
+}
+
+static inline void page_ext_put(void)
+{
+}
#endif /* CONFIG_PAGE_EXTENSION */
#endif /* __LINUX_PAGE_EXT_H */
diff --git a/include/linux/page_idle.h b/include/linux/page_idle.h
index 4663dfe..0001c3d 100644
--- a/include/linux/page_idle.h
+++ b/include/linux/page_idle.h
@@ -13,65 +13,79 @@
* If there is not enough space to store Idle and Young bits in page flags, use
* page ext flags instead.
*/
-
static inline bool folio_test_young(struct folio *folio)
{
- struct page_ext *page_ext = lookup_page_ext(&folio->page);
+ struct page_ext *page_ext = page_ext_get(&folio->page);
+ bool page_young;

if (unlikely(!page_ext))
return false;

- return test_bit(PAGE_EXT_YOUNG, &page_ext->flags);
+ page_young = test_bit(PAGE_EXT_YOUNG, &page_ext->flags);
+ page_ext_put();
+
+ return page_young;
}

static inline void folio_set_young(struct folio *folio)
{
- struct page_ext *page_ext = lookup_page_ext(&folio->page);
+ struct page_ext *page_ext = page_ext_get(&folio->page);

if (unlikely(!page_ext))
return;

set_bit(PAGE_EXT_YOUNG, &page_ext->flags);
+ page_ext_put();
}

static inline bool folio_test_clear_young(struct folio *folio)
{
- struct page_ext *page_ext = lookup_page_ext(&folio->page);
+ struct page_ext *page_ext = page_ext_get(&folio->page);
+ bool page_young;

if (unlikely(!page_ext))
return false;

- return test_and_clear_bit(PAGE_EXT_YOUNG, &page_ext->flags);
+ page_young = test_and_clear_bit(PAGE_EXT_YOUNG, &page_ext->flags);
+ page_ext_put();
+
+ return page_young;
}

static inline bool folio_test_idle(struct folio *folio)
{
- struct page_ext *page_ext = lookup_page_ext(&folio->page);
+ struct page_ext *page_ext = page_ext_get(&folio->page);
+ bool page_idle;

if (unlikely(!page_ext))
return false;

- return test_bit(PAGE_EXT_IDLE, &page_ext->flags);
+ page_idle = test_bit(PAGE_EXT_IDLE, &page_ext->flags);
+ page_ext_put();
+
+ return page_idle;
}

static inline void folio_set_idle(struct folio *folio)
{
- struct page_ext *page_ext = lookup_page_ext(&folio->page);
+ struct page_ext *page_ext = page_ext_get(&folio->page);

if (unlikely(!page_ext))
return;

set_bit(PAGE_EXT_IDLE, &page_ext->flags);
+ page_ext_put();
}

static inline void folio_clear_idle(struct folio *folio)
{
- struct page_ext *page_ext = lookup_page_ext(&folio->page);
+ struct page_ext *page_ext = page_ext_get(&folio->page);

if (unlikely(!page_ext))
return;

clear_bit(PAGE_EXT_IDLE, &page_ext->flags);
+ page_ext_put();
}
#endif /* !CONFIG_64BIT */

diff --git a/mm/page_ext.c b/mm/page_ext.c
index 3dc715d..91d7bd2 100644
--- a/mm/page_ext.c
+++ b/mm/page_ext.c
@@ -9,6 +9,7 @@
#include <linux/page_owner.h>
#include <linux/page_idle.h>
#include <linux/page_table_check.h>
+#include <linux/rcupdate.h>

/*
* struct page extension
@@ -59,6 +60,10 @@
* can utilize this callback to initialize the state of it correctly.
*/

+#ifdef CONFIG_SPARSEMEM
+#define PAGE_EXT_INVALID (0x1)
+#endif
+
#if defined(CONFIG_PAGE_IDLE_FLAG) && !defined(CONFIG_64BIT)
static bool need_page_idle(void)
{
@@ -84,6 +89,7 @@ static struct page_ext_operations *page_ext_ops[] __initdata = {
unsigned long page_ext_size = sizeof(struct page_ext);

static unsigned long total_usage;
+static struct page_ext *lookup_page_ext(const struct page *page);

static bool __init invoke_need_callbacks(void)
{
@@ -125,6 +131,37 @@ static inline struct page_ext *get_entry(void *base, unsigned long index)
return base + page_ext_size * index;
}

+/*
+ * This function gives proper page_ext of a memory section
+ * during race with the offline operation on a memory block
+ * this section falls into. Not using this function to get
+ * page_ext of a page, in code paths where extra refcount
+ * is not taken on that page eg: pfn walking, can lead to
+ * use-after-free access of page_ext.
+ */
+struct page_ext *page_ext_get(struct page *page)
+{
+ struct page_ext *page_ext;
+
+ rcu_read_lock();
+ page_ext = lookup_page_ext(page);
+ if (!page_ext) {
+ rcu_read_unlock();
+ return NULL;
+ }
+
+ return page_ext;
+}
+
+/*
+ * Must be called after work is done with the page_ext received
+ * with page_ext_get().
+ */
+
+void page_ext_put(void)
+{
+ rcu_read_unlock();
+}
#ifndef CONFIG_SPARSEMEM


@@ -133,12 +170,13 @@ void __meminit pgdat_page_ext_init(struct pglist_data *pgdat)
pgdat->node_page_ext = NULL;
}

-struct page_ext *lookup_page_ext(const struct page *page)
+static struct page_ext *lookup_page_ext(const struct page *page)
{
unsigned long pfn = page_to_pfn(page);
unsigned long index;
struct page_ext *base;

+ WARN_ON_ONCE(!rcu_read_lock_held());
base = NODE_DATA(page_to_nid(page))->node_page_ext;
/*
* The sanity checks the page allocator does upon freeing a
@@ -206,20 +244,27 @@ void __init page_ext_init_flatmem(void)
}

#else /* CONFIG_SPARSEMEM */
+static bool page_ext_invalid(struct page_ext *page_ext)
+{
+ return !page_ext || (((unsigned long)page_ext & PAGE_EXT_INVALID) == PAGE_EXT_INVALID);
+}

-struct page_ext *lookup_page_ext(const struct page *page)
+static struct page_ext *lookup_page_ext(const struct page *page)
{
unsigned long pfn = page_to_pfn(page);
struct mem_section *section = __pfn_to_section(pfn);
+ struct page_ext *page_ext = READ_ONCE(section->page_ext);
+
+ WARN_ON_ONCE(!rcu_read_lock_held());
/*
* The sanity checks the page allocator does upon freeing a
* page can reach here before the page_ext arrays are
* allocated when feeding a range of pages to the allocator
* for the first time during bootup or memory hotplug.
*/
- if (!section->page_ext)
+ if (page_ext_invalid(page_ext))
return NULL;
- return get_entry(section->page_ext, pfn);
+ return get_entry(page_ext, pfn);
}

static void *__meminit alloc_page_ext(size_t size, int nid)
@@ -298,9 +343,30 @@ static void __free_page_ext(unsigned long pfn)
ms = __pfn_to_section(pfn);
if (!ms || !ms->page_ext)
return;
- base = get_entry(ms->page_ext, pfn);
+
+ base = READ_ONCE(ms->page_ext);
+ /*
+ * page_ext here can be valid while doing the roll back
+ * operation in online_page_ext().
+ */
+ if (page_ext_invalid(base))
+ base = (void *)base - PAGE_EXT_INVALID;
+ WRITE_ONCE(ms->page_ext, NULL);
+
+ base = get_entry(base, pfn);
free_page_ext(base);
- ms->page_ext = NULL;
+}
+
+static void __invalidate_page_ext(unsigned long pfn)
+{
+ struct mem_section *ms;
+ void *val;
+
+ ms = __pfn_to_section(pfn);
+ if (!ms || !ms->page_ext)
+ return;
+ val = (void *)ms->page_ext + PAGE_EXT_INVALID;
+ WRITE_ONCE(ms->page_ext, val);
}

static int __meminit online_page_ext(unsigned long start_pfn,
@@ -343,6 +409,20 @@ static int __meminit offline_page_ext(unsigned long start_pfn,
start = SECTION_ALIGN_DOWN(start_pfn);
end = SECTION_ALIGN_UP(start_pfn + nr_pages);

+ /*
+ * Freeing of page_ext is done in 3 steps to avoid
+ * use-after-free of it:
+ * 1) Traverse all the sections and mark their page_ext
+ * as invalid.
+ * 2) Wait for all the existing users of page_ext who
+ * started before invalidation to finish.
+ * 3) Free the page_ext.
+ */
+ for (pfn = start; pfn < end; pfn += PAGES_PER_SECTION)
+ __invalidate_page_ext(pfn);
+
+ synchronize_rcu();
+
for (pfn = start; pfn < end; pfn += PAGES_PER_SECTION)
__free_page_ext(pfn);
return 0;
diff --git a/mm/page_owner.c b/mm/page_owner.c
index e4c6f3f..223bbf8 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -141,7 +141,7 @@ void __reset_page_owner(struct page *page, unsigned short order)
struct page_owner *page_owner;
u64 free_ts_nsec = local_clock();

- page_ext = lookup_page_ext(page);
+ page_ext = page_ext_get(page);
if (unlikely(!page_ext))
return;

@@ -153,6 +153,7 @@ void __reset_page_owner(struct page *page, unsigned short order)
page_owner->free_ts_nsec = free_ts_nsec;
page_ext = page_ext_next(page_ext);
}
+ page_ext_put();
}

static inline void __set_page_owner_handle(struct page_ext *page_ext,
@@ -183,19 +184,26 @@ static inline void __set_page_owner_handle(struct page_ext *page_ext,
noinline void __set_page_owner(struct page *page, unsigned short order,
gfp_t gfp_mask)
{
- struct page_ext *page_ext = lookup_page_ext(page);
+ struct page_ext *page_ext = page_ext_get(page);
depot_stack_handle_t handle;

if (unlikely(!page_ext))
return;
+ page_ext_put();

handle = save_stack(gfp_mask);
+
+ /* Ensure page_ext is valid after page_ext_put() above */
+ page_ext = page_ext_get(page);
+ if (unlikely(!page_ext))
+ return;
__set_page_owner_handle(page_ext, handle, order, gfp_mask);
+ page_ext_put();
}

void __set_page_owner_migrate_reason(struct page *page, int reason)
{
- struct page_ext *page_ext = lookup_page_ext(page);
+ struct page_ext *page_ext = page_ext_get(page);
struct page_owner *page_owner;

if (unlikely(!page_ext))
@@ -203,12 +211,13 @@ void __set_page_owner_migrate_reason(struct page *page, int reason)

page_owner = get_page_owner(page_ext);
page_owner->last_migrate_reason = reason;
+ page_ext_put();
}

void __split_page_owner(struct page *page, unsigned int nr)
{
int i;
- struct page_ext *page_ext = lookup_page_ext(page);
+ struct page_ext *page_ext = page_ext_get(page);
struct page_owner *page_owner;

if (unlikely(!page_ext))
@@ -219,16 +228,24 @@ void __split_page_owner(struct page *page, unsigned int nr)
page_owner->order = 0;
page_ext = page_ext_next(page_ext);
}
+ page_ext_put();
}

void __folio_copy_owner(struct folio *newfolio, struct folio *old)
{
- struct page_ext *old_ext = lookup_page_ext(&old->page);
- struct page_ext *new_ext = lookup_page_ext(&newfolio->page);
+ struct page_ext *old_ext;
+ struct page_ext *new_ext;
struct page_owner *old_page_owner, *new_page_owner;

- if (unlikely(!old_ext || !new_ext))
+ old_ext = page_ext_get(&old->page);
+ if (unlikely(!old_ext))
+ return;
+
+ new_ext = page_ext_get(&newfolio->page);
+ if (unlikely(!new_ext)) {
+ page_ext_put();
return;
+ }

old_page_owner = get_page_owner(old_ext);
new_page_owner = get_page_owner(new_ext);
@@ -254,6 +271,8 @@ void __folio_copy_owner(struct folio *newfolio, struct folio *old)
*/
__set_bit(PAGE_EXT_OWNER, &new_ext->flags);
__set_bit(PAGE_EXT_OWNER_ALLOCATED, &new_ext->flags);
+ page_ext_put();
+ page_ext_put();
}

void pagetypeinfo_showmixedcount_print(struct seq_file *m,
@@ -307,12 +326,12 @@ void pagetypeinfo_showmixedcount_print(struct seq_file *m,
if (PageReserved(page))
continue;

- page_ext = lookup_page_ext(page);
+ page_ext = page_ext_get(page);
if (unlikely(!page_ext))
continue;

if (!test_bit(PAGE_EXT_OWNER_ALLOCATED, &page_ext->flags))
- continue;
+ goto loop;

page_owner = get_page_owner(page_ext);
page_mt = gfp_migratetype(page_owner->gfp_mask);
@@ -323,9 +342,12 @@ void pagetypeinfo_showmixedcount_print(struct seq_file *m,
count[pageblock_mt]++;

pfn = block_end_pfn;
+ page_ext_put();
break;
}
pfn += (1UL << page_owner->order) - 1;
+loop:
+ page_ext_put();
}
}

@@ -435,7 +457,7 @@ print_page_owner(char __user *buf, size_t count, unsigned long pfn,

void __dump_page_owner(const struct page *page)
{
- struct page_ext *page_ext = lookup_page_ext(page);
+ struct page_ext *page_ext = page_ext_get((void *)page);
struct page_owner *page_owner;
depot_stack_handle_t handle;
gfp_t gfp_mask;
@@ -452,6 +474,7 @@ void __dump_page_owner(const struct page *page)

if (!test_bit(PAGE_EXT_OWNER, &page_ext->flags)) {
pr_alert("page_owner info is not present (never set?)\n");
+ page_ext_put();
return;
}

@@ -482,6 +505,7 @@ void __dump_page_owner(const struct page *page)
if (page_owner->last_migrate_reason != -1)
pr_alert("page has been migrated, last migrate reason: %s\n",
migrate_reason_names[page_owner->last_migrate_reason]);
+ page_ext_put();
}

static ssize_t
@@ -508,6 +532,14 @@ read_page_owner(struct file *file, char __user *buf, size_t count, loff_t *ppos)
/* Find an allocated page */
for (; pfn < max_pfn; pfn++) {
/*
+ * This temporary page_owner is required so
+ * that we can avoid the context switches while holding
+ * the rcu lock and copying the page owner information to
+ * user through copy_to_user() or GFP_KERNEL allocations.
+ */
+ struct page_owner page_owner_tmp;
+
+ /*
* If the new page is in a new MAX_ORDER_NR_PAGES area,
* validate the area as existing, skip it if not
*/
@@ -525,7 +557,7 @@ read_page_owner(struct file *file, char __user *buf, size_t count, loff_t *ppos)
continue;
}

- page_ext = lookup_page_ext(page);
+ page_ext = page_ext_get(page);
if (unlikely(!page_ext))
continue;

@@ -534,14 +566,14 @@ read_page_owner(struct file *file, char __user *buf, size_t count, loff_t *ppos)
* because we don't hold the zone lock.
*/
if (!test_bit(PAGE_EXT_OWNER, &page_ext->flags))
- continue;
+ goto loop;

/*
* Although we do have the info about past allocation of free
* pages, it's not relevant for current memory usage.
*/
if (!test_bit(PAGE_EXT_OWNER_ALLOCATED, &page_ext->flags))
- continue;
+ goto loop;

page_owner = get_page_owner(page_ext);

@@ -550,7 +582,7 @@ read_page_owner(struct file *file, char __user *buf, size_t count, loff_t *ppos)
* would inflate the stats.
*/
if (!IS_ALIGNED(pfn, 1 << page_owner->order))
- continue;
+ goto loop;

/*
* Access to page_ext->handle isn't synchronous so we should
@@ -558,13 +590,17 @@ read_page_owner(struct file *file, char __user *buf, size_t count, loff_t *ppos)
*/
handle = READ_ONCE(page_owner->handle);
if (!handle)
- continue;
+ goto loop;

/* Record the next PFN to read in the file offset */
*ppos = (pfn - min_low_pfn) + 1;

+ memcpy(&page_owner_tmp, page_owner, sizeof(struct page_owner));
+ page_ext_put();
return print_page_owner(buf, count, pfn, page,
- page_owner, handle);
+ &page_owner_tmp, handle);
+loop:
+ page_ext_put();
}

return 0;
@@ -617,18 +653,20 @@ static void init_pages_in_zone(pg_data_t *pgdat, struct zone *zone)
if (PageReserved(page))
continue;

- page_ext = lookup_page_ext(page);
+ page_ext = page_ext_get(page);
if (unlikely(!page_ext))
continue;

/* Maybe overlapping zone */
if (test_bit(PAGE_EXT_OWNER, &page_ext->flags))
- continue;
+ goto loop;

/* Found early allocated page */
__set_page_owner_handle(page_ext, early_handle,
0, 0);
count++;
+loop:
+ page_ext_put();
}
cond_resched();
}
diff --git a/mm/page_table_check.c b/mm/page_table_check.c
index e206274..ec371b9 100644
--- a/mm/page_table_check.c
+++ b/mm/page_table_check.c
@@ -68,7 +68,7 @@ static void page_table_check_clear(struct mm_struct *mm, unsigned long addr,
return;

page = pfn_to_page(pfn);
- page_ext = lookup_page_ext(page);
+ page_ext = page_ext_get(page);
anon = PageAnon(page);

for (i = 0; i < pgcnt; i++) {
@@ -83,6 +83,7 @@ static void page_table_check_clear(struct mm_struct *mm, unsigned long addr,
}
page_ext = page_ext_next(page_ext);
}
+ page_ext_put();
}

/*
@@ -103,7 +104,7 @@ static void page_table_check_set(struct mm_struct *mm, unsigned long addr,
return;

page = pfn_to_page(pfn);
- page_ext = lookup_page_ext(page);
+ page_ext = page_ext_get(page);
anon = PageAnon(page);

for (i = 0; i < pgcnt; i++) {
@@ -118,6 +119,7 @@ static void page_table_check_set(struct mm_struct *mm, unsigned long addr,
}
page_ext = page_ext_next(page_ext);
}
+ page_ext_put();
}

/*
@@ -126,9 +128,10 @@ static void page_table_check_set(struct mm_struct *mm, unsigned long addr,
*/
void __page_table_check_zero(struct page *page, unsigned int order)
{
- struct page_ext *page_ext = lookup_page_ext(page);
+ struct page_ext *page_ext;
unsigned long i;

+ page_ext = page_ext_get(page);
BUG_ON(!page_ext);
for (i = 0; i < (1ul << order); i++) {
struct page_table_check *ptc = get_page_table_check(page_ext);
@@ -137,6 +140,7 @@ void __page_table_check_zero(struct page *page, unsigned int order)
BUG_ON(atomic_read(&ptc->file_map_count));
page_ext = page_ext_next(page_ext);
}
+ page_ext_put();
}

void __page_table_check_pte_clear(struct mm_struct *mm, unsigned long addr,
--
2.7.4


2022-08-10 02:03:50

by Andrew Morton

Subject: Re: [PATCH V3] mm: fix use-after free of page_ext after race with memory-offline

On Tue, 9 Aug 2022 20:16:43 +0530 Charan Teja Kalla <[email protected]> wrote:

> The below is one path where race between page_ext and offline of the
> respective memory blocks will cause use-after-free on the access of
> page_ext structure.

Has this race ever been observed at runtime?

Given the size of the fix, I'm looking for excuses to not backport it
into -stable kernels!

2022-08-10 07:27:45

by Michal Hocko

Subject: Re: [PATCH V3] mm: fix use-after free of page_ext after race with memory-offline

On Tue 09-08-22 18:57:14, Andrew Morton wrote:
> On Tue, 9 Aug 2022 20:16:43 +0530 Charan Teja Kalla <[email protected]> wrote:
>
> > The below is one path where race between page_ext and offline of the
> > respective memory blocks will cause use-after-free on the access of
> > page_ext structure.
>
> Has this race ever been observed at runtime?
>
> Given the size of the fix, I'm looking for excuses to not backport it
> into -stable kernels!

I believe this is quite theoretical for two reasons:
1) memory hotplug (offlining) is quite a rare operation
2) with all the retries, the race window is quite hard to trigger

So this is good to have addressed long term, but nothing really for stable
until somebody actually hits it with a real-world workload.

Btw. I plan to have a look and review this but times are busy. Hopefully
soon.

Thanks!

--
Michal Hocko
SUSE Labs

2022-08-10 09:42:32

by Charan Teja Kalla

Subject: Re: [PATCH V3] mm: fix use-after free of page_ext after race with memory-offline

Thanks Andrew/Michal!!

On 8/10/2022 12:53 PM, Michal Hocko wrote:
> On Tue 09-08-22 18:57:14, Andrew Morton wrote:
>> On Tue, 9 Aug 2022 20:16:43 +0530 Charan Teja Kalla <[email protected]> wrote:
>>
>>> The below is one path where race between page_ext and offline of the
>>> respective memory blocks will cause use-after-free on the access of
>>> page_ext structure.
>>
>> Has this race ever been observed at runtime?
>>
>> Given the size of the fix, I'm looking for excuses to not backport it
>> into -stable kernels!
>
> I believe this is quite theoretical for two reasons
> 1) the memory hotplug (offlining) is quite rare operation
> 2) with all the retries the race window is quite hard to trigger
>
> So this is good to have address long term but nothing really for stable
> until somebody actually hits that with a real world workload.
>

Actually, in embedded systems the offline is not a rare operation,
especially in cases where one wants to save some power through PASR [1].

This issue is caught in the page_pinner [2] path (currently being used
in Android), where it accesses the page_ext of a page after it is
freed. This is again not with a real workload but with some stress
tests. So, I also agree with Michal here not to backport it.

[1]https://lwn.net/Articles/478049/
[2] https://lore.kernel.org/all/[email protected]/

> Btw. I plan to have a look and review this but times are busy. Hopefully
> soon.
>
> Thanks!
>

2022-08-10 11:58:59

by Vlastimil Babka

Subject: Re: [PATCH V3] mm: fix use-after free of page_ext after race with memory-offline

On 8/9/22 16:46, Charan Teja Kalla wrote:
> The below is one path where race between page_ext and offline of the
> respective memory blocks will cause use-after-free on the access of
> page_ext structure.
>
> process1 process2
> --------- ---------
> a)doing /proc/page_owner doing memory offline
> through offline_pages.
>
> b)PageBuddy check is failed
> thus proceed to get the
> page_owner information
> through page_ext access.
> page_ext = lookup_page_ext(page);
>
> migrate_pages();
> .................
> Since all pages are successfully
> migrated as part of the offline
> operation,send MEM_OFFLINE notification
> where for page_ext it calls:
> offline_page_ext()-->
> __free_page_ext()-->
> free_page_ext()-->
> vfree(ms->page_ext)
> mem_section->page_ext = NULL
>
> c) Check for the PAGE_EXT flags
> in the page_ext->flags access
> results into the use-after-free(leading
> to the translation faults).
>
> As mentioned above, there is really no synchronization between page_ext
> access and its freeing in the memory_offline.
>
> The memory offline steps(roughly) on a memory block is as below:
> 1) Isolate all the pages
> 2) while(1)
> try free the pages to buddy.(->free_list[MIGRATE_ISOLATE])
> 3) delete the pages from this buddy list.
> 4) Then free page_ext.(Note: The struct page is still alive as it is
> freed only during hot remove of the memory which frees the memmap, which
> steps the user might not perform).
>
> This design leads to the state where struct page is alive but the struct
> page_ext is freed, where the later is ideally part of the former which
> just representing the page_flags (check [3] for why this design is
> chosen).
>
> The above mentioned race is just one example __but the problem persists
> in the other paths too involving page_ext->flags access(eg:
> page_is_idle())__. Since offline waits till the last reference on the
> page goes down i.e. any path that took the refcount on the page can make
> the memory offline operation to wait. Eg: In the migrate_pages()
> operation, we do take the extra refcount on the pages that are under
> migration and then we do copy page_owner by accessing page_ext.
>
> Fix those paths where offline races with page_ext access by maintaining
> synchronization with rcu lock and is achieved in 3 steps:
> 1) Invalidate all the page_ext's of the sections of a memory block by
> storing a flag in the LSB of mem_section->page_ext.
>
> 2) Wait till all the existing readers to finish working with the
> ->page_ext's with synchronize_rcu(). Any parallel process that starts
> after this call will not get page_ext, through lookup_page_ext(), for
> the block parallel offline operation is being performed.
>
> 3) Now safely free all sections ->page_ext's of the block on which
> offline operation is being performed.
>
> Note: If synchronize_rcu() takes time then optimizations can be done in
> this path through call_rcu()[2].
>
> Thanks to David Hildenbrand for his views/suggestions on the initial
> discussion[1] and Pavan kondeti for various inputs on this patch.
>
> [1] https://lore.kernel.org/linux-mm/[email protected]/
> [2] https://lore.kernel.org/all/[email protected]/T/#u
> [3] https://lore.kernel.org/all/[email protected]/
>
> Suggested-by: David Hildenbrand <[email protected]>
> Suggested-by: Michal Hocko <[email protected]>
> Signed-off-by: Charan Teja Kalla <[email protected]>

<snip>

> --- a/mm/page_owner.c
> +++ b/mm/page_owner.c
> @@ -141,7 +141,7 @@ void __reset_page_owner(struct page *page, unsigned short order)
> struct page_owner *page_owner;
> u64 free_ts_nsec = local_clock();
>
> - page_ext = lookup_page_ext(page);
> + page_ext = page_ext_get(page);
> if (unlikely(!page_ext))
> return;
>
> @@ -153,6 +153,7 @@ void __reset_page_owner(struct page *page, unsigned short order)
> page_owner->free_ts_nsec = free_ts_nsec;
> page_ext = page_ext_next(page_ext);
> }
> + page_ext_put();
> }
>
> static inline void __set_page_owner_handle(struct page_ext *page_ext,
> @@ -183,19 +184,26 @@ static inline void __set_page_owner_handle(struct page_ext *page_ext,
> noinline void __set_page_owner(struct page *page, unsigned short order,
> gfp_t gfp_mask)
> {
> - struct page_ext *page_ext = lookup_page_ext(page);
> + struct page_ext *page_ext = page_ext_get(page);
> depot_stack_handle_t handle;
>
> if (unlikely(!page_ext))
> return;
> + page_ext_put();
>
> handle = save_stack(gfp_mask);
> +
> + /* Ensure page_ext is valid after page_ext_put() above */
> + page_ext = page_ext_get(page);

Why not simply do the save_stack() first and then page_ext_get() just
once? It should be really rare that it's NULL, so I don't think we save
much by avoiding an unnecessary save_stack(), while the overhead of
doing two get/put instead of one will affect every call.
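
Something like this (just an untested sketch of what I mean, using the
helpers this patch introduces):

noinline void __set_page_owner(struct page *page, unsigned short order,
			       gfp_t gfp_mask)
{
	struct page_ext *page_ext;
	depot_stack_handle_t handle;

	/* may allocate/sleep, so do it before taking the RCU-protected page_ext */
	handle = save_stack(gfp_mask);

	page_ext = page_ext_get(page);
	if (unlikely(!page_ext))
		return;

	__set_page_owner_handle(page_ext, handle, order, gfp_mask);
	page_ext_put();
}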

> + if (unlikely(!page_ext))
> + return;
> __set_page_owner_handle(page_ext, handle, order, gfp_mask);
> + page_ext_put();
> }
>
> void __set_page_owner_migrate_reason(struct page *page, int reason)
> {
> - struct page_ext *page_ext = lookup_page_ext(page);
> + struct page_ext *page_ext = page_ext_get(page);
> struct page_owner *page_owner;
>
> if (unlikely(!page_ext))
> @@ -203,12 +211,13 @@ void __set_page_owner_migrate_reason(struct page *page, int reason)
>
> page_owner = get_page_owner(page_ext);
> page_owner->last_migrate_reason = reason;
> + page_ext_put();
> }
>
> void __split_page_owner(struct page *page, unsigned int nr)
> {
> int i;
> - struct page_ext *page_ext = lookup_page_ext(page);
> + struct page_ext *page_ext = page_ext_get(page);
> struct page_owner *page_owner;
>
> if (unlikely(!page_ext))
> @@ -219,16 +228,24 @@ void __split_page_owner(struct page *page, unsigned int nr)
> page_owner->order = 0;
> page_ext = page_ext_next(page_ext);
> }
> + page_ext_put();
> }
>
> void __folio_copy_owner(struct folio *newfolio, struct folio *old)
> {
> - struct page_ext *old_ext = lookup_page_ext(&old->page);
> - struct page_ext *new_ext = lookup_page_ext(&newfolio->page);
> + struct page_ext *old_ext;
> + struct page_ext *new_ext;
> struct page_owner *old_page_owner, *new_page_owner;
>
> - if (unlikely(!old_ext || !new_ext))
> + old_ext = page_ext_get(&old->page);
> + if (unlikely(!old_ext))
> + return;
> +
> + new_ext = page_ext_get(&newfolio->page);

The second one can keep using just lookup_page_ext() and we can have a
single page_ext_put()? I don't think it would be dangerous in case the
internals change, as page_ext_put() doesn't have a page parameter anyway
so it can't be specific to a page.

> + if (unlikely(!new_ext)) {
> + page_ext_put();
> return;
> + }
>
> old_page_owner = get_page_owner(old_ext);
> new_page_owner = get_page_owner(new_ext);
> @@ -254,6 +271,8 @@ void __folio_copy_owner(struct folio *newfolio, struct folio *old)
> */
> __set_bit(PAGE_EXT_OWNER, &new_ext->flags);
> __set_bit(PAGE_EXT_OWNER_ALLOCATED, &new_ext->flags);
> + page_ext_put();
> + page_ext_put();
> }
>
> void pagetypeinfo_showmixedcount_print(struct seq_file *m,
> @@ -307,12 +326,12 @@ void pagetypeinfo_showmixedcount_print(struct seq_file *m,
> if (PageReserved(page))
> continue;
>
> - page_ext = lookup_page_ext(page);
> + page_ext = page_ext_get(page);
> if (unlikely(!page_ext))
> continue;
>
> if (!test_bit(PAGE_EXT_OWNER_ALLOCATED, &page_ext->flags))
> - continue;
> + goto loop;
>
> page_owner = get_page_owner(page_ext);
> page_mt = gfp_migratetype(page_owner->gfp_mask);
> @@ -323,9 +342,12 @@ void pagetypeinfo_showmixedcount_print(struct seq_file *m,
> count[pageblock_mt]++;
>
> pfn = block_end_pfn;
> + page_ext_put();
> break;
> }
> pfn += (1UL << page_owner->order) - 1;
> +loop:
> + page_ext_put();
> }
> }
>
> @@ -435,7 +457,7 @@ print_page_owner(char __user *buf, size_t count, unsigned long pfn,
>
> void __dump_page_owner(const struct page *page)
> {
> - struct page_ext *page_ext = lookup_page_ext(page);
> + struct page_ext *page_ext = page_ext_get((void *)page);
> struct page_owner *page_owner;
> depot_stack_handle_t handle;
> gfp_t gfp_mask;
> @@ -452,6 +474,7 @@ void __dump_page_owner(const struct page *page)
>
> if (!test_bit(PAGE_EXT_OWNER, &page_ext->flags)) {
> pr_alert("page_owner info is not present (never set?)\n");
> + page_ext_put();
> return;
> }
>
> @@ -482,6 +505,7 @@ void __dump_page_owner(const struct page *page)
> if (page_owner->last_migrate_reason != -1)
> pr_alert("page has been migrated, last migrate reason: %s\n",
> migrate_reason_names[page_owner->last_migrate_reason]);
> + page_ext_put();
> }
>
> static ssize_t
> @@ -508,6 +532,14 @@ read_page_owner(struct file *file, char __user *buf, size_t count, loff_t *ppos)
> /* Find an allocated page */
> for (; pfn < max_pfn; pfn++) {
> /*
> + * This temporary page_owner is required so
> + * that we can avoid the context switches while holding
> + * the rcu lock and copying the page owner information to
> + * user through copy_to_user() or GFP_KERNEL allocations.
> + */
> + struct page_owner page_owner_tmp;
> +
> + /*
> * If the new page is in a new MAX_ORDER_NR_PAGES area,
> * validate the area as existing, skip it if not
> */
> @@ -525,7 +557,7 @@ read_page_owner(struct file *file, char __user *buf, size_t count, loff_t *ppos)
> continue;
> }
>
> - page_ext = lookup_page_ext(page);
> + page_ext = page_ext_get(page);
> if (unlikely(!page_ext))
> continue;
>
> @@ -534,14 +566,14 @@ read_page_owner(struct file *file, char __user *buf, size_t count, loff_t *ppos)
> * because we don't hold the zone lock.
> */
> if (!test_bit(PAGE_EXT_OWNER, &page_ext->flags))
> - continue;
> + goto loop;
>
> /*
> * Although we do have the info about past allocation of free
> * pages, it's not relevant for current memory usage.
> */
> if (!test_bit(PAGE_EXT_OWNER_ALLOCATED, &page_ext->flags))
> - continue;
> + goto loop;
>
> page_owner = get_page_owner(page_ext);
>
> @@ -550,7 +582,7 @@ read_page_owner(struct file *file, char __user *buf, size_t count, loff_t *ppos)
> * would inflate the stats.
> */
> if (!IS_ALIGNED(pfn, 1 << page_owner->order))
> - continue;
> + goto loop;
>
> /*
> * Access to page_ext->handle isn't synchronous so we should
> @@ -558,13 +590,17 @@ read_page_owner(struct file *file, char __user *buf, size_t count, loff_t *ppos)
> */
> handle = READ_ONCE(page_owner->handle);
> if (!handle)
> - continue;
> + goto loop;
>
> /* Record the next PFN to read in the file offset */
> *ppos = (pfn - min_low_pfn) + 1;
>
> + memcpy(&page_owner_tmp, page_owner, sizeof(struct page_owner));
> + page_ext_put();
> return print_page_owner(buf, count, pfn, page,
> - page_owner, handle);
> + &page_owner_tmp, handle);
> +loop:
> + page_ext_put();
> }
>
> return 0;
> @@ -617,18 +653,20 @@ static void init_pages_in_zone(pg_data_t *pgdat, struct zone *zone)
> if (PageReserved(page))
> continue;
>
> - page_ext = lookup_page_ext(page);
> + page_ext = page_ext_get(page);
> if (unlikely(!page_ext))
> continue;
>
> /* Maybe overlapping zone */
> if (test_bit(PAGE_EXT_OWNER, &page_ext->flags))
> - continue;
> + goto loop;
>
> /* Found early allocated page */
> __set_page_owner_handle(page_ext, early_handle,
> 0, 0);
> count++;
> +loop:
> + page_ext_put();
> }
> cond_resched();

This is called from init_page_owner() where races with offline are
impossible, so it's unnecessary. Although it won't hurt.



2022-08-10 14:40:15

by Charan Teja Kalla

Subject: Re: [PATCH V3] mm: fix use-after free of page_ext after race with memory-offline

Thanks Vlastimil for the inputs!!

On 8/10/2022 5:10 PM, Vlastimil Babka wrote:
>> --- a/mm/page_owner.c
>> +++ b/mm/page_owner.c
>> @@ -141,7 +141,7 @@ void __reset_page_owner(struct page *page,
>> unsigned short order)
>>       struct page_owner *page_owner;
>>       u64 free_ts_nsec = local_clock();
>>   -    page_ext = lookup_page_ext(page);
>> +    page_ext = page_ext_get(page);
>>       if (unlikely(!page_ext))
>>           return;
>>   @@ -153,6 +153,7 @@ void __reset_page_owner(struct page *page,
>> unsigned short order)
>>           page_owner->free_ts_nsec = free_ts_nsec;
>>           page_ext = page_ext_next(page_ext);
>>       }
>> +    page_ext_put();
>>   }
>>     static inline void __set_page_owner_handle(struct page_ext *page_ext,
>> @@ -183,19 +184,26 @@ static inline void
>> __set_page_owner_handle(struct page_ext *page_ext,
>>   noinline void __set_page_owner(struct page *page, unsigned short order,
>>                       gfp_t gfp_mask)
>>   {
>> -    struct page_ext *page_ext = lookup_page_ext(page);
>> +    struct page_ext *page_ext = page_ext_get(page);
>>       depot_stack_handle_t handle;
>>         if (unlikely(!page_ext))
>>           return;
>> +    page_ext_put();
>>         handle = save_stack(gfp_mask);
>> +
>> +    /* Ensure page_ext is valid after page_ext_put() above */
>> +    page_ext = page_ext_get(page);
>
> Why not simply do the save_stack() first and then page_ext_get() just
> once? It should be really rare that it's NULL, so I don't think we save
> much by avoiding an unnecessary save_stack(), while the overhead of
> doing two get/put instead of one will affect every call.
>
I am under the assumption that save_stack() can take time when it goes for
GFP_KERNEL allocations, whereas page_ext_get() is merely rcu_read_lock()
plus a few arithmetic operations. Am I wrong here?

But yes, it is rare that page_ext can be NULL here, so I am fine with
following your suggestion, which at least improves the code readability, IMO.

>> +    if (unlikely(!page_ext))
>> +        return;
>>       __set_page_owner_handle(page_ext, handle, order, gfp_mask);
>> +    page_ext_put();
>>   }
>>     void __set_page_owner_migrate_reason(struct page *page, int reason)
>>   {
>> -    struct page_ext *page_ext = lookup_page_ext(page);
>> +    struct page_ext *page_ext = page_ext_get(page);
>>       struct page_owner *page_owner;
>>         if (unlikely(!page_ext))
>> @@ -203,12 +211,13 @@ void __set_page_owner_migrate_reason(struct page
>> *page, int reason)
>>         page_owner = get_page_owner(page_ext);
>>       page_owner->last_migrate_reason = reason;
>> +    page_ext_put();
>>   }
>>     void __split_page_owner(struct page *page, unsigned int nr)
>>   {
>>       int i;
>> -    struct page_ext *page_ext = lookup_page_ext(page);
>> +    struct page_ext *page_ext = page_ext_get(page);
>>       struct page_owner *page_owner;
>>         if (unlikely(!page_ext))
>> @@ -219,16 +228,24 @@ void __split_page_owner(struct page *page,
>> unsigned int nr)
>>           page_owner->order = 0;
>>           page_ext = page_ext_next(page_ext);
>>       }
>> +    page_ext_put();
>>   }
>>     void __folio_copy_owner(struct folio *newfolio, struct folio *old)
>>   {
>> -    struct page_ext *old_ext = lookup_page_ext(&old->page);
>> -    struct page_ext *new_ext = lookup_page_ext(&newfolio->page);
>> +    struct page_ext *old_ext;
>> +    struct page_ext *new_ext;
>>       struct page_owner *old_page_owner, *new_page_owner;
>>   -    if (unlikely(!old_ext || !new_ext))
>> +    old_ext = page_ext_get(&old->page);
>> +    if (unlikely(!old_ext))
>> +        return;
>> +
>> +    new_ext = page_ext_get(&newfolio->page);
>
> The second one can keep using just lookup_page_ext() and we can have a
> single page_ext_put()? I don't think it would be dangerous in case the
> internals change, as page_ext_put() doesn't have a page parameter anyway
> so it can't be specific to a page.

Actually, we hid lookup_page_ext() and exposed only a single
interface, i.e. page_ext_get/put(). This suggestion requires exposing
lookup_page_ext() as well, which leaves two interfaces for getting
the page_ext, and that does not look good, IMO. Please let me know if you
think otherwise here.

>
>> +    if (unlikely(!new_ext)) {
>> +        page_ext_put();
>>           return;
>> +    }
>>         old_page_owner = get_page_owner(old_ext);
>>       new_page_owner = get_page_owner(new_ext);
>> @@ -254,6 +271,8 @@ void __folio_copy_owner(struct folio *newfolio,
>> struct folio *old)
>>        */
>>       __set_bit(PAGE_EXT_OWNER, &new_ext->flags);
>>       __set_bit(PAGE_EXT_OWNER_ALLOCATED, &new_ext->flags);
>> +    page_ext_put();
>> +    page_ext_put();
>>   }
>>     void pagetypeinfo_showmixedcount_print(struct seq_file *m,
>> @@ -307,12 +326,12 @@ void pagetypeinfo_showmixedcount_print(struct
>> seq_file *m,
>>               if (PageReserved(page))
>>                   continue;
>>   -            page_ext = lookup_page_ext(page);
>> +            page_ext = page_ext_get(page);
>>               if (unlikely(!page_ext))
>>                   continue;
>>                 if (!test_bit(PAGE_EXT_OWNER_ALLOCATED,
>> &page_ext->flags))
>> -                continue;
>> +                goto loop;
>>                 page_owner = get_page_owner(page_ext);
>>               page_mt = gfp_migratetype(page_owner->gfp_mask);
>> @@ -323,9 +342,12 @@ void pagetypeinfo_showmixedcount_print(struct
>> seq_file *m,
>>                       count[pageblock_mt]++;
>>                     pfn = block_end_pfn;
>> +                page_ext_put();
>>                   break;
>>               }
>>               pfn += (1UL << page_owner->order) - 1;
>> +loop:
>> +            page_ext_put();
>>           }
>>       }
>>   @@ -435,7 +457,7 @@ print_page_owner(char __user *buf, size_t count,
>> unsigned long pfn,
>>     void __dump_page_owner(const struct page *page)
>>   {
>> -    struct page_ext *page_ext = lookup_page_ext(page);
>> +    struct page_ext *page_ext = page_ext_get((void *)page);
>>       struct page_owner *page_owner;
>>       depot_stack_handle_t handle;
>>       gfp_t gfp_mask;
>> @@ -452,6 +474,7 @@ void __dump_page_owner(const struct page *page)
>>         if (!test_bit(PAGE_EXT_OWNER, &page_ext->flags)) {
>>           pr_alert("page_owner info is not present (never set?)\n");
>> +        page_ext_put();
>>           return;
>>       }
>>   @@ -482,6 +505,7 @@ void __dump_page_owner(const struct page *page)
>>       if (page_owner->last_migrate_reason != -1)
>>           pr_alert("page has been migrated, last migrate reason: %s\n",
>>               migrate_reason_names[page_owner->last_migrate_reason]);
>> +    page_ext_put();
>>   }
>>     static ssize_t
>> @@ -508,6 +532,14 @@ read_page_owner(struct file *file, char __user
>> *buf, size_t count, loff_t *ppos)
>>       /* Find an allocated page */
>>       for (; pfn < max_pfn; pfn++) {
>>           /*
>> +         * This temporary page_owner is required so
>> +         * that we can avoid the context switches while holding
>> +         * the rcu lock and copying the page owner information to
>> +         * user through copy_to_user() or GFP_KERNEL allocations.
>> +         */
>> +        struct page_owner page_owner_tmp;
>> +
>> +        /*
>>            * If the new page is in a new MAX_ORDER_NR_PAGES area,
>>            * validate the area as existing, skip it if not
>>            */
>> @@ -525,7 +557,7 @@ read_page_owner(struct file *file, char __user
>> *buf, size_t count, loff_t *ppos)
>>               continue;
>>           }
>>   -        page_ext = lookup_page_ext(page);
>> +        page_ext = page_ext_get(page);
>>           if (unlikely(!page_ext))
>>               continue;
>>   @@ -534,14 +566,14 @@ read_page_owner(struct file *file, char __user
>> *buf, size_t count, loff_t *ppos)
>>            * because we don't hold the zone lock.
>>            */
>>           if (!test_bit(PAGE_EXT_OWNER, &page_ext->flags))
>> -            continue;
>> +            goto loop;
>>             /*
>>            * Although we do have the info about past allocation of free
>>            * pages, it's not relevant for current memory usage.
>>            */
>>           if (!test_bit(PAGE_EXT_OWNER_ALLOCATED, &page_ext->flags))
>> -            continue;
>> +            goto loop;
>>             page_owner = get_page_owner(page_ext);
>>   @@ -550,7 +582,7 @@ read_page_owner(struct file *file, char __user
>> *buf, size_t count, loff_t *ppos)
>>            * would inflate the stats.
>>            */
>>           if (!IS_ALIGNED(pfn, 1 << page_owner->order))
>> -            continue;
>> +            goto loop;
>>             /*
>>            * Access to page_ext->handle isn't synchronous so we should
>> @@ -558,13 +590,17 @@ read_page_owner(struct file *file, char __user
>> *buf, size_t count, loff_t *ppos)
>>            */
>>           handle = READ_ONCE(page_owner->handle);
>>           if (!handle)
>> -            continue;
>> +            goto loop;
>>             /* Record the next PFN to read in the file offset */
>>           *ppos = (pfn - min_low_pfn) + 1;
>>   +        memcpy(&page_owner_tmp, page_owner, sizeof(struct
>> page_owner));
>> +        page_ext_put();
>>           return print_page_owner(buf, count, pfn, page,
>> -                page_owner, handle);
>> +                &page_owner_tmp, handle);
>> +loop:
>> +        page_ext_put();
>>       }
>>         return 0;
>> @@ -617,18 +653,20 @@ static void init_pages_in_zone(pg_data_t *pgdat,
>> struct zone *zone)
>>               if (PageReserved(page))
>>                   continue;
>>   -            page_ext = lookup_page_ext(page);
>> +            page_ext = page_ext_get(page);
>>               if (unlikely(!page_ext))
>>                   continue;
>>                 /* Maybe overlapping zone */
>>               if (test_bit(PAGE_EXT_OWNER, &page_ext->flags))
>> -                continue;
>> +                goto loop;
>>                 /* Found early allocated page */
>>               __set_page_owner_handle(page_ext, early_handle,
>>                           0, 0);
>>               count++;
>> +loop:
>> +            page_ext_put();
>>           }
>>           cond_resched();
>
> This is called from init_page_owner() where races with offline are
> impossible, so it's unnecessary. Although it won't hurt.

Totally agree. In fact, in V2 this change was not there. And there are
some other places too where it is not required to go for
page_ext_get/put(), e.g. page_owner migration, but these changes are done
as we have exposed a single interface to get the page_ext.

>

Thanks,
Charan

2022-08-15 15:33:27

by Matthew Wilcox

Subject: Re: [PATCH V3] mm: fix use-after free of page_ext after race with memory-offline

On Mon, Aug 15, 2022 at 05:06:18PM +0200, Michal Hocko wrote:
> > + * This function gives proper page_ext of a memory section
> > + * during race with the offline operation on a memory block
> > + * this section falls into. Not using this function to get
> > + * page_ext of a page, in code paths where extra refcount
> > + * is not taken on that page eg: pfn walking, can lead to
> > + * use-after-free access of page_ext.
>
> I do not think this is really useful comment, it goes into way too much
> detail about memory hotplug yet not enough to actually understand the
> interaction because there are no references to the actual
> synchronization scheme. I would go with something like:
>
> /*
> * Get a page_ext associated with the given page. Returns NULL if
> * no such page_ext exists otherwise ensures that the page_ext will
> * stay alive until page_ext_put is called.
> * This implies a non-sleeping context.
> */

I'd go further and turn this into kernel-doc:

/**
* page_ext_get() - Get the extended information for a page.
* @page: The page we're interested in.
*
* Ensures that the page_ext will remain valid until page_ext_put()
* is called.
*
* Return: NULL if no page_ext exists for this page.
* Context: Any context. Caller may not sleep until they have called
* page_ext_put().
*/

2022-08-15 15:35:05

by Michal Hocko

Subject: Re: [PATCH V3] mm: fix use-after free of page_ext after race with memory-offline

On Tue 09-08-22 20:16:43, Charan Teja Kalla wrote:
[...]
> diff --git a/include/linux/page_ext.h b/include/linux/page_ext.h
> index fabb2e1..0e259da 100644
> --- a/include/linux/page_ext.h
> +++ b/include/linux/page_ext.h
[...]
> @@ -87,5 +83,14 @@ static inline void page_ext_init_flatmem_late(void)
> static inline void page_ext_init_flatmem(void)
> {
> }
> +
> +static inline struct page *page_ext_get(struct page *page)

struct page_ext *

> +{
> + return NULL;
> +}
> +
> +static inline void page_ext_put(void)
> +{
> +}
> #endif /* CONFIG_PAGE_EXTENSION */
> #endif /* __LINUX_PAGE_EXT_H */
[...]
> diff --git a/mm/page_ext.c b/mm/page_ext.c
> index 3dc715d..91d7bd2 100644
> --- a/mm/page_ext.c
> +++ b/mm/page_ext.c
> @@ -9,6 +9,7 @@
> #include <linux/page_owner.h>
> #include <linux/page_idle.h>
> #include <linux/page_table_check.h>
> +#include <linux/rcupdate.h>
>
> /*
> * struct page extension
> @@ -59,6 +60,10 @@
> * can utilize this callback to initialize the state of it correctly.
> */
>
> +#ifdef CONFIG_SPARSEMEM
> +#define PAGE_EXT_INVALID (0x1)
> +#endif
> +
> #if defined(CONFIG_PAGE_IDLE_FLAG) && !defined(CONFIG_64BIT)
> static bool need_page_idle(void)
> {
> @@ -84,6 +89,7 @@ static struct page_ext_operations *page_ext_ops[] __initdata = {
> unsigned long page_ext_size = sizeof(struct page_ext);
>
> static unsigned long total_usage;
> +static struct page_ext *lookup_page_ext(const struct page *page);
>
> static bool __init invoke_need_callbacks(void)
> {
> @@ -125,6 +131,37 @@ static inline struct page_ext *get_entry(void *base, unsigned long index)
> return base + page_ext_size * index;
> }
>
> +/*
> + * This function gives proper page_ext of a memory section
> + * during race with the offline operation on a memory block
> + * this section falls into. Not using this function to get
> + * page_ext of a page, in code paths where extra refcount
> + * is not taken on that page eg: pfn walking, can lead to
> + * use-after-free access of page_ext.

I do not think this is really useful comment, it goes into way too much
detail about memory hotplug yet not enough to actually understand the
interaction because there are no references to the actual
synchronization scheme. I would go with something like:

/*
* Get a page_ext associated with the given page. Returns NULL if
* no such page_ext exists otherwise ensures that the page_ext will
* stay alive until page_ext_put is called.
* This implies a non-sleeping context.
*/
> + */
> +struct page_ext *page_ext_get(struct page *page)
> +{
> + struct page_ext *page_ext;
> +
> + rcu_read_lock();
> + page_ext = lookup_page_ext(page);
> + if (!page_ext) {
> + rcu_read_unlock();
> + return NULL;
> + }
> +
> + return page_ext;
> +}
> +
> +/*
> + * Must be called after work is done with the page_ext received
> + * with page_ext_get().
> + */
> +
> +void page_ext_put(void)
> +{
> + rcu_read_unlock();
> +}

Thinking about this some more I am not sure this is a good interface. It
doesn't have any reference to the actual object this is called for. This
is nicely visible in __folio_copy_owner which just calls page_ext_put()
twice because there are 2 page_exts and I can already see how somebody
might get confused this is just an error and send a patch to drop one of
them.

I do understand why you went this way because having a parameter which
is not used will likely lead to the same situation. On the other hand it
could be annotated to not raise warnings. One potential way to
workaround that would be

void page_ext_put(struct page_ext *page_ext)
{
if (unlikely(!page_ext))
return;

rcu_read_unlock();
}

which would help to make the api slightly more robust in case somebody
does page_ext_put in a branch where page_ext_get returns NULL.
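
E.g. __folio_copy_owner could then look like this (untested, just to
illustrate the error handling):

void __folio_copy_owner(struct folio *newfolio, struct folio *old)
{
	struct page_ext *old_ext = page_ext_get(&old->page);
	struct page_ext *new_ext = page_ext_get(&newfolio->page);

	if (unlikely(!old_ext || !new_ext))
		goto out;

	/* ... copy the owner information as in the current code ... */
out:
	page_ext_put(new_ext);
	page_ext_put(old_ext);
}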

No strong opinion on that though. WDYT?

> #ifndef CONFIG_SPARSEMEM
>
>
[...]
> @@ -183,19 +184,26 @@ static inline void __set_page_owner_handle(struct page_ext *page_ext,
> noinline void __set_page_owner(struct page *page, unsigned short order,
> gfp_t gfp_mask)
> {
> - struct page_ext *page_ext = lookup_page_ext(page);
> + struct page_ext *page_ext = page_ext_get(page);
> depot_stack_handle_t handle;
>
> if (unlikely(!page_ext))
> return;

Either add a comment like this
/* save_stack can sleep in general so we have to page_ext_put */
> + page_ext_put();
>
> handle = save_stack(gfp_mask);

or just drop the initial page_ext_get altogether. This function is
called only when page_ext is supposed to be initialized and !page_ext
case above should be very unlikely. Or is there any reason to keep this?

> +
> + /* Ensure page_ext is valid after page_ext_put() above */
> + page_ext = page_ext_get(page);
> + if (unlikely(!page_ext))
> + return;
> __set_page_owner_handle(page_ext, handle, order, gfp_mask);
> + page_ext_put();
> }
>
[...]
> @@ -558,13 +590,17 @@ read_page_owner(struct file *file, char __user *buf, size_t count, loff_t *ppos)
> */
> handle = READ_ONCE(page_owner->handle);
> if (!handle)
> - continue;
> + goto loop;
>
> /* Record the next PFN to read in the file offset */
> *ppos = (pfn - min_low_pfn) + 1;
>
> + memcpy(&page_owner_tmp, page_owner, sizeof(struct page_owner));
> + page_ext_put();

why not
page_owner_tmp = *page_owner;

> return print_page_owner(buf, count, pfn, page,
> - page_owner, handle);
> + &page_owner_tmp, handle);
> +loop:
> + page_ext_put();
> }
>
> return 0;

Otherwise looks good to me.

Thanks!
--
Michal Hocko
SUSE Labs

2022-08-15 15:36:00

by Michal Hocko

Subject: Re: [PATCH V3] mm: fix use-after free of page_ext after race with memory-offline

On Mon 15-08-22 16:26:46, Matthew Wilcox wrote:
> On Mon, Aug 15, 2022 at 05:06:18PM +0200, Michal Hocko wrote:
> > > + * This function gives proper page_ext of a memory section
> > > + * during race with the offline operation on a memory block
> > > + * this section falls into. Not using this function to get
> > > + * page_ext of a page, in code paths where extra refcount
> > > + * is not taken on that page eg: pfn walking, can lead to
> > > + * use-after-free access of page_ext.
> >
> > I do not think this is really useful comment, it goes into way too much
> > detail about memory hotplug yet not enough to actually understand the
> > interaction because there are no references to the actual
> > synchronization scheme. I would go with something like:
> >
> > /*
> > * Get a page_ext associated with the given page. Returns NULL if
> > * no such page_ext exists otherwise ensures that the page_ext will
> > * stay alive until page_ext_put is called.
> > * This implies a non-sleeping context.
> > */
>
> I'd go further and turn this into kernel-doc:
>
> /**
> * page_ext_get() - Get the extended information for a page.
> * @page: The page we're interested in.
> *
> * Ensures that the page_ext will remain valid until page_ext_put()
> * is called.
> *
> * Return: NULL if no page_ext exists for this page.
> * Context: Any context. Caller may not sleep until they have called
> * page_ext_put().
> */

Yes, thanks!

--
Michal Hocko
SUSE Labs

2022-08-16 10:50:40

by Charan Teja Kalla

Subject: Re: [PATCH V3] mm: fix use-after free of page_ext after race with memory-offline

Thanks Michal!!

On 8/15/2022 8:36 PM, Michal Hocko wrote:
> On Tue 09-08-22 20:16:43, Charan Teja Kalla wrote:
> [...]
>> diff --git a/include/linux/page_ext.h b/include/linux/page_ext.h
>> index fabb2e1..0e259da 100644
>> --- a/include/linux/page_ext.h
>> +++ b/include/linux/page_ext.h
> [...]
>> @@ -87,5 +83,14 @@ static inline void page_ext_init_flatmem_late(void)
>> static inline void page_ext_init_flatmem(void)
>> {
>> }
>> +
>> +static inline struct page *page_ext_get(struct page *page)
> struct page_ext *
>
oops!! It didn't get caught as this is in !CONFIG_PAGE_EXTENSION.
>> +{
>> + return NULL;
>> +}
>> +
>> +static inline void page_ext_put(void)
>> +{
>> +}
>> #endif /* CONFIG_PAGE_EXTENSION */
>> #endif /* __LINUX_PAGE_EXT_H */
> [...]
>> diff --git a/mm/page_ext.c b/mm/page_ext.c
>> index 3dc715d..91d7bd2 100644
>> --- a/mm/page_ext.c
>> +++ b/mm/page_ext.c
>> @@ -9,6 +9,7 @@
>> #include <linux/page_owner.h>
>> #include <linux/page_idle.h>
>> #include <linux/page_table_check.h>
>> +#include <linux/rcupdate.h>
>>
>> /*
>> * struct page extension
>> @@ -59,6 +60,10 @@
>> * can utilize this callback to initialize the state of it correctly.
>> */
>>
>> +#ifdef CONFIG_SPARSEMEM
>> +#define PAGE_EXT_INVALID (0x1)
>> +#endif
>> +
>> #if defined(CONFIG_PAGE_IDLE_FLAG) && !defined(CONFIG_64BIT)
>> static bool need_page_idle(void)
>> {
>> @@ -84,6 +89,7 @@ static struct page_ext_operations *page_ext_ops[] __initdata = {
>> unsigned long page_ext_size = sizeof(struct page_ext);
>>
>> static unsigned long total_usage;
>> +static struct page_ext *lookup_page_ext(const struct page *page);
>>
>> static bool __init invoke_need_callbacks(void)
>> {
>> @@ -125,6 +131,37 @@ static inline struct page_ext *get_entry(void *base, unsigned long index)
>> return base + page_ext_size * index;
>> }
>>
>> +/*
>> + * This function gives proper page_ext of a memory section
>> + * during race with the offline operation on a memory block
>> + * this section falls into. Not using this function to get
>> + * page_ext of a page, in code paths where extra refcount
>> + * is not taken on that page eg: pfn walking, can lead to
>> + * use-after-free access of page_ext.
> I do not think this is really useful comment, it goes into way too much
> detail about memory hotplug yet not enough to actually understand the
> interaction because there are no references to the actual
> synchronization scheme. I would go with something like:
>
> /*
> * Get a page_ext associated with the given page. Returns NULL if
> * no such page_ext exists otherwise ensures that the page_ext will
> * stay alive until page_ext_put is called.
> * This implies a non-sleeping context.
> */

Will update as per Matthew's input at
https://lore.kernel.org/all/[email protected]/
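
For reference, the contract in that kernel-doc boils down to a caller
pattern roughly like the below (illustrative sketch only, using the V3
helper names; not part of the patch):

	struct page_ext *page_ext = page_ext_get(page);

	if (page_ext) {
		/* page_ext is guaranteed to stay alive in this window ... */
		/* ... but the caller must not sleep until the put below */
		page_ext_put();
	}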
>> + */
>> +struct page_ext *page_ext_get(struct page *page)
>> +{
>> + struct page_ext *page_ext;
>> +
>> + rcu_read_lock();
>> + page_ext = lookup_page_ext(page);
>> + if (!page_ext) {
>> + rcu_read_unlock();
>> + return NULL;
>> + }
>> +
>> + return page_ext;
>> +}
>> +
>> +/*
>> + * Must be called after work is done with the page_ext received
>> + * with page_ext_get().
>> + */
>> +
>> +void page_ext_put(void)
>> +{
>> + rcu_read_unlock();
>> +}
> Thinking about this some more I am not sure this is a good interface. It
> doesn't have any reference to the actual object this is called for. This
> is nicely visible in __folio_copy_owner which just calls page_ext_put()
> twice because there are 2 page_exts and I can already see how somebody
> might get confused this is just an error and send a patch to drop one of
> them.
>
> I do understand why you went this way because having a parameter which
> is not used will likely lead to the same situation. On the other hand it
> could be annotated to not raise warnings. One potential way to
> work around that would be
>
> void page_ext_put(struct page_ext *page_ext)
> {
> if (unlikely(!page_ext))
> return;
>
> rcu_read_unlock();
> }
>
> which would help to make the api slightly more robust in case somebody
> does page_ext_put in a branch where page_ext_get returns NULL.
>
Looks better. Will change this accordingly.
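
For illustration, with a parameterized put the two get/put pairs in
__folio_copy_owner() could then read something like the below (rough
sketch only, not the exact code):

void __folio_copy_owner(struct folio *newfolio, struct folio *old)
{
	struct page_ext *old_ext, *new_ext;

	old_ext = page_ext_get(&old->page);
	if (unlikely(!old_ext))
		return;

	new_ext = page_ext_get(&newfolio->page);
	if (unlikely(!new_ext)) {
		page_ext_put(old_ext);
		return;
	}

	/* ... copy the owner information from old_ext to new_ext ... */

	page_ext_put(new_ext);
	page_ext_put(old_ext);
}

so each put is visibly tied to the get it pairs with.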

> No strong opinion on that though. WDYT?
>
>> #ifndef CONFIG_SPARSEMEM
>>
>>
> [...]
>> @@ -183,19 +184,26 @@ static inline void __set_page_owner_handle(struct page_ext *page_ext,
>> noinline void __set_page_owner(struct page *page, unsigned short order,
>> gfp_t gfp_mask)
>> {
>> - struct page_ext *page_ext = lookup_page_ext(page);
>> + struct page_ext *page_ext = page_ext_get(page);
>> depot_stack_handle_t handle;
>>
>> if (unlikely(!page_ext))
>> return;
> Either add a comment like this
> /* save_stack can sleep in general so we have to page_ext_put */


Vlastimil suggested doing save_stack() first since !page_ext is mostly
unlikely. A snip from his comments:
Why not simply do the save_stack() first and then page_ext_get() just
once? It should be really rare that it's NULL, so I don't think we save
much by avoiding an unnecessary save_stack(), while the overhead of
doing two get/put instead of one will affect every call.

https://lore.kernel.org/all/[email protected]/
>> + page_ext_put();
>>
>> handle = save_stack(gfp_mask);
> or just drop the initial page_ext_get altogether. This function is
> called only when page_ext is supposed to be initialized and !page_ext
> case above should be very unlikely. Or is there any reason to keep this?
>


>> +
>> + /* Ensure page_ext is valid after page_ext_put() above */
>> + page_ext = page_ext_get(page);
>> + if (unlikely(!page_ext))
>> + return;
>> __set_page_owner_handle(page_ext, handle, order, gfp_mask);
>> + page_ext_put();
>> }
>>
> [...]
>> @@ -558,13 +590,17 @@ read_page_owner(struct file *file, char __user *buf, size_t count, loff_t *ppos)
>> */
>> handle = READ_ONCE(page_owner->handle);
>> if (!handle)
>> - continue;
>> + goto loop;
>>
>> /* Record the next PFN to read in the file offset */
>> *ppos = (pfn - min_low_pfn) + 1;
>>
>> + memcpy(&page_owner_tmp, page_owner, sizeof(struct page_owner));
>> + page_ext_put();
> why not
> page_owner_tmp = *page_owner;

Done!!
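
(For context, the copy is needed because page_owner points into the
page_ext memory, which is only guaranteed to stay alive until
page_ext_put(); roughly:

	page_owner_tmp = *page_owner;	/* copy to the stack while page_ext is held */
	page_ext_put();			/* page_ext, and thus page_owner, may now be freed */
	return print_page_owner(buf, count, pfn, page, &page_owner_tmp, handle);

print_page_owner() can end up sleeping when it copies to userspace, so
the reference has to be dropped before calling it.)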
>
>> return print_page_owner(buf, count, pfn, page,
>> - page_owner, handle);
>> + &page_owner_tmp, handle);
>> +loop:
>> + page_ext_put();
>> }
>>
>> return 0;

2022-08-16 17:28:29

by Michal Hocko

[permalink] [raw]
Subject: Re: [PATCH V3] mm: fix use-after free of page_ext after race with memory-offline

On Tue 16-08-22 15:04:01, Charan Teja Kalla wrote:
[...]
> >> @@ -183,19 +184,26 @@ static inline void __set_page_owner_handle(struct page_ext *page_ext,
> >> noinline void __set_page_owner(struct page *page, unsigned short order,
> >> gfp_t gfp_mask)
> >> {
> >> - struct page_ext *page_ext = lookup_page_ext(page);
> >> + struct page_ext *page_ext = page_ext_get(page);
> >> depot_stack_handle_t handle;
> >>
> >> if (unlikely(!page_ext))
> >> return;
> > Either add a comment like this
> > /* save_stack can sleep in general so we have to page_ext_put */
>
>
> Vlastimil suggested to go for save stack first since !page_ext is mostly
> unlikely. Snip from his comments:
> Why not simply do the save_stack() first and then page_ext_get() just
> once? It should be really rare that it's NULL, so I don't think we save
> much by avoiding an unnecessary save_stack(), while the overhead of
> doing two get/put instead of one will affect every call.

Right, see below.
>
> https://lore.kernel.org/all/[email protected]/
> >> + page_ext_put();
> >>
> >> handle = save_stack(gfp_mask);
> > or just drop the initial page_ext_get altogether. This function is
> > called only when page_ext is supposed to be initialized and !page_ext
> > case above should be very unlikely. Or is there any reason to keep this?

^^^^^
--
Michal Hocko
SUSE Labs

2022-08-18 14:30:15

by Charan Teja Kalla

[permalink] [raw]
Subject: Re: [PATCH V3] mm: fix use-after free of page_ext after race with memory-offline

Hi Michal,

On 8/16/2022 9:45 PM, Michal Hocko wrote:
>>>> @@ -183,19 +184,26 @@ static inline void __set_page_owner_handle(struct page_ext *page_ext,
>>>> noinline void __set_page_owner(struct page *page, unsigned short order,
>>>> gfp_t gfp_mask)
>>>> {
>>>> - struct page_ext *page_ext = lookup_page_ext(page);
>>>> + struct page_ext *page_ext = page_ext_get(page);
>>>> depot_stack_handle_t handle;
>>>>
>>>> if (unlikely(!page_ext))
>>>> return;
>>> Either add a comment like this
>>> /* save_stack can sleep in general so we have to page_ext_put */
>>
>> Vlastimil suggested to go for save stack first since !page_ext is mostly
>> unlikely. Snip from his comments:
>> Why not simply do the save_stack() first and then page_ext_get() just
>> once? It should be really rare that it's NULL, so I don't think we save
>> much by avoiding an unnecessary save_stack(), while the overhead of
>> doing two get/put instead of one will affect every call.
> right see below
>> https://lore.kernel.org/all/[email protected]/
>>>> + page_ext_put();
>>>>
>>>> handle = save_stack(gfp_mask);
>>> or just drop the initial page_ext_get altogether. This function is
>>> called only when page_ext is supposed to be initialized and !page_ext
>>> case above should be very unlikely. Or is there any reason to keep this?
I don't think that !page_ext check is really required, since
__set_page_owner() being called means page_ext should have been
initialized; will raise a separate change to drop it. For now, V4 is
raised with the earlier suggestion of dropping the initial
page_ext_get().
https://lore.kernel.org/all/[email protected]/.
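
Roughly, the reordering comes out as below (sketch only, assuming the
parameterized page_ext_put() agreed on earlier in the thread; the exact
diff is in the V4 link above):

noinline void __set_page_owner(struct page *page, unsigned short order,
			       gfp_t gfp_mask)
{
	struct page_ext *page_ext;
	depot_stack_handle_t handle;

	/* save_stack() can sleep, so take the stack before the RCU section */
	handle = save_stack(gfp_mask);

	page_ext = page_ext_get(page);
	if (unlikely(!page_ext))
		return;
	__set_page_owner_handle(page_ext, handle, order, gfp_mask);
	page_ext_put(page_ext);
}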

Thanks,
Charan