2020-01-07 21:13:36

by Ralph Campbell

Subject: [PATCH 3/3] mm/migrate: add stable check in migrate_vma_insert_page()

migrate_vma_insert_page() closely follows the code in:
__handle_mm_fault()
handle_pte_fault()
do_anonymous_page()

Add a call to check_stable_address_space() after locking the page table
entry and before inserting a ZONE_DEVICE private zero page mapping,
mirroring what the page fault path does for a new anonymous page.
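For illustration, the ordering this enforces can be modeled outside the kernel: once the OOM reaper has marked an address space unstable (MMF_UNSTABLE), no new mapping may be installed, and the check must happen under the page table lock so it cannot race with the reaper. The sketch below uses mock types (mock_mm, MOCK_MMF_UNSTABLE, mock_insert_page are illustrative stand-ins, not kernel API):

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Mock of the relevant mm_struct state: a flags word and a PTE lock. */
struct mock_mm {
	unsigned long flags;	/* bit 0 stands in for MMF_UNSTABLE */
	pthread_mutex_t ptl;	/* stands in for the page table lock */
};

#define MOCK_MMF_UNSTABLE (1UL << 0)

/* Mirrors check_stable_address_space(): nonzero means "abort the insert". */
static int mock_check_stable(struct mock_mm *mm)
{
	return (mm->flags & MOCK_MMF_UNSTABLE) ? -14 /* -EFAULT */ : 0;
}

/*
 * Mirrors the patched migrate_vma_insert_page() flow: take the lock,
 * check stability, and only then install the entry.
 */
static bool mock_insert_page(struct mock_mm *mm, bool *pte_present)
{
	bool inserted = false;

	pthread_mutex_lock(&mm->ptl);
	if (mock_check_stable(mm))
		goto unlock_abort;	/* same shape as the patch's goto */
	*pte_present = true;
	inserted = true;
unlock_abort:
	pthread_mutex_unlock(&mm->ptl);
	return inserted;
}
```

With a stable address space the insert succeeds; once the unstable bit is set, the insert aborts without touching the (mock) page table, which is the behavior the patch adds.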

Signed-off-by: Ralph Campbell <[email protected]>
---
mm/migrate.c | 12 ++++++++++++
1 file changed, 12 insertions(+)

diff --git a/mm/migrate.c b/mm/migrate.c
index 4b1a6d69afb5..403b82472d24 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -48,6 +48,7 @@
#include <linux/page_owner.h>
#include <linux/sched/mm.h>
#include <linux/ptrace.h>
+#include <linux/oom.h>

#include <asm/tlbflush.h>

@@ -2675,6 +2676,14 @@ int migrate_vma_setup(struct migrate_vma *args)
}
EXPORT_SYMBOL(migrate_vma_setup);

+/*
+ * This code closely matches the code in:
+ * __handle_mm_fault()
+ * handle_pte_fault()
+ * do_anonymous_page()
+ * to map in an anonymous zero page but the struct page will be a ZONE_DEVICE
+ * private page.
+ */
static void migrate_vma_insert_page(struct migrate_vma *migrate,
unsigned long addr,
struct page *page,
@@ -2755,6 +2764,9 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate,

ptep = pte_offset_map_lock(mm, pmdp, addr, &ptl);

+ if (check_stable_address_space(mm))
+ goto unlock_abort;
+
if (pte_present(*ptep)) {
unsigned long pfn = pte_pfn(*ptep);

--
2.20.1


2020-01-08 07:14:49

by Christoph Hellwig

Subject: Re: [PATCH 3/3] mm/migrate: add stable check in migrate_vma_insert_page()

On Tue, Jan 07, 2020 at 01:12:08PM -0800, Ralph Campbell wrote:
> migrate_vma_insert_page() closely follows the code in:
> __handle_mm_fault()
> handle_pte_fault()
> do_anonymous_page()

I wonder if we could share more code there?

Otherwise this looks good:

Reviewed-by: Christoph Hellwig <[email protected]>