From: Hugh Dickins
Date: 2023-10-03 09:13:22
Subject: [PATCH v2 00/12] mempolicy: cleanups leading to NUMA mpol without vma

Here is v2 of a series of mempolicy patches, this version based on
mm-everything-2023-10-02-21-43, applicable also to next-20231003:
mostly cleanups in mm/mempolicy.c, but finally removing the pseudo-vma
from shmem folio allocation, and removing the mmap_lock around folio
migration for mbind and migrate_pages syscalls.

This replaces the previous series of 12, "mempolicy: cleanups leading to
NUMA mpol without vma", based on 6.6-rc3 and posted on 2023-09-25:
https://lore.kernel.org/linux-mm/[email protected]/

01/12 hugetlbfs: drop shared NUMA mempolicy pretence
      v2: add reviewed-by Matthew
          hugetlb.h include pagemap.h for filemap_lock_folio()
02/12 kernfs: drop shared NUMA mempolicy hooks
      v2: add reviewed-by Matthew
03/12 mempolicy: fix migrate_pages(2) syscall return
      v2: add reviewed-by Matthew
          replace Yang Shi's qp->has_unmovable by qp->nr_failed
          remove ptl,addr,end args to queue_folios_pmd() per Huang,Ying
          reword comments above queue_folios_pte_range()
          fix incorrect migrate_folio_add()ing per Huang,Ying
          which also fixes qp->nr_failed as count of folios
04/12 mempolicy trivia: delete those ancient pr_debug()s
      v2: add reviewed-by Matthew
05/12 mempolicy trivia: slightly more consistent naming
      v2: add reviewed-by Matthew
06/12 mempolicy trivia: use pgoff_t in shared mempolicy tree
      v2: declare struct shared_policy (the root) before struct sp_node
          reformat sp_lookup, mpol_shared_policy_lookup decls per Matthew
07/12 mempolicy: mpol_shared_policy_init() without pseudo-vma
      v2: sn,npol instead of n,new (but no optimization) per Matthew
08/12 mempolicy: remove confusing MPOL_MF_LAZY dead code
      v2: add reviewed-by Matthew
09/12 mm: add page_rmappable_folio() wrapper
      v2: move page_rmappable_folio() to mm/internal.h per Matthew
10/12 mempolicy: alloc_pages_mpol() for NUMA policy without vma
      v2: adjust to fit on top of earlier mm-unstable mods
11/12 mempolicy: mmap_lock is not needed while migrating folios
      v2: remove HugeTLBfs special casing of src->index
12/12 mempolicy: migration attempt to match interleave nodes
      v2: remove HugeTLBfs special casing of page->index

 fs/hugetlbfs/inode.c           |  41 +-
 fs/kernfs/file.c               |  49 --
 fs/proc/task_mmu.c             |   5 +-
 include/linux/gfp.h            |  10 +-
 include/linux/hugetlb.h        |  12 +-
 include/linux/mempolicy.h      |  44 +-
 include/linux/mm.h             |   2 +-
 include/uapi/linux/mempolicy.h |   2 +-
 ipc/shm.c                      |  21 +-
 mm/hugetlb.c                   |  38 +-
 mm/internal.h                  |   9 +
 mm/mempolicy.c                 | 988 ++++++++++++++++-------------------
 mm/page_alloc.c                |   8 +-
 mm/shmem.c                     |  92 ++--
 mm/swap.h                      |   9 +-
 mm/swap_state.c                |  83 +--
 16 files changed, 630 insertions(+), 783 deletions(-)

Hugh


From: Hugh Dickins
Date: 2023-10-03 09:15:29
Subject: [PATCH v2 01/12] hugetlbfs: drop shared NUMA mempolicy pretence

hugetlbfs_fallocate() goes through the motions of pasting a shared NUMA
mempolicy onto its pseudo-vma, but how could there ever be a shared NUMA
mempolicy for this file? hugetlb_vm_ops has never offered a set_policy
method, and hugetlbfs_parse_param() has never supported any mpol options
for a mount-wide default policy.

It's just an illusion: clean it away so as not to confuse others, giving
us more freedom to adjust shmem's set_policy/get_policy implementation.
But hugetlbfs_inode_info is still required, just to accommodate seals.

Yes, shared NUMA mempolicy support could be added to hugetlbfs, with a
set_policy method and/or mpol mount option (Andi's first posting did
include an admitted-unsatisfactory hugetlb_set_policy()); but it seems
that nobody has bothered to add that in the nineteen years since v2.6.7
made it possible, and at least one company has invested enough into
hugetlbfs that, I guess, it has learnt well enough how to manage its
NUMA without needing shared mempolicy.

Remove linux/mempolicy.h from linux/hugetlb.h: include linux/pagemap.h in
its place, because hugetlb.h's recently added use of filemap_lock_folio()
requires that (although most .configs and .c's get it in some other way).

Signed-off-by: Hugh Dickins <[email protected]>
Reviewed-by: Matthew Wilcox (Oracle) <[email protected]>
---
 fs/hugetlbfs/inode.c    | 41 +----------------------------------------
 include/linux/hugetlb.h |  3 +--
 2 files changed, 2 insertions(+), 42 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 926d01c493fb..0586c90cb9a5 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -83,29 +83,6 @@ static const struct fs_parameter_spec hugetlb_fs_parameters[] = {
{}
};

-#ifdef CONFIG_NUMA
-static inline void hugetlb_set_vma_policy(struct vm_area_struct *vma,
- struct inode *inode, pgoff_t index)
-{
- vma->vm_policy = mpol_shared_policy_lookup(&HUGETLBFS_I(inode)->policy,
- index);
-}
-
-static inline void hugetlb_drop_vma_policy(struct vm_area_struct *vma)
-{
- mpol_cond_put(vma->vm_policy);
-}
-#else
-static inline void hugetlb_set_vma_policy(struct vm_area_struct *vma,
- struct inode *inode, pgoff_t index)
-{
-}
-
-static inline void hugetlb_drop_vma_policy(struct vm_area_struct *vma)
-{
-}
-#endif
-
/*
* Mask used when checking the page offset value passed in via system
* calls. This value will be converted to a loff_t which is signed.
@@ -853,8 +830,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,

/*
* Initialize a pseudo vma as this is required by the huge page
- * allocation routines. If NUMA is configured, use page index
- * as input to create an allocation policy.
+ * allocation routines.
*/
vma_init(&pseudo_vma, mm);
vm_flags_init(&pseudo_vma, VM_HUGETLB | VM_MAYSHARE | VM_SHARED);
@@ -902,9 +878,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
* folios in these areas, we need to consume the reserves
* to keep reservation accounting consistent.
*/
- hugetlb_set_vma_policy(&pseudo_vma, inode, index);
folio = alloc_hugetlb_folio(&pseudo_vma, addr, 0);
- hugetlb_drop_vma_policy(&pseudo_vma);
if (IS_ERR(folio)) {
mutex_unlock(&hugetlb_fault_mutex_table[hash]);
error = PTR_ERR(folio);
@@ -1283,18 +1257,6 @@ static struct inode *hugetlbfs_alloc_inode(struct super_block *sb)
hugetlbfs_inc_free_inodes(sbinfo);
return NULL;
}
-
- /*
- * Any time after allocation, hugetlbfs_destroy_inode can be called
- * for the inode. mpol_free_shared_policy is unconditionally called
- * as part of hugetlbfs_destroy_inode. So, initialize policy here
- * in case of a quick call to destroy.
- *
- * Note that the policy is initialized even if we are creating a
- * private inode. This simplifies hugetlbfs_destroy_inode.
- */
- mpol_shared_policy_init(&p->policy, NULL);
-
return &p->vfs_inode;
}

@@ -1306,7 +1268,6 @@ static void hugetlbfs_free_inode(struct inode *inode)
static void hugetlbfs_destroy_inode(struct inode *inode)
{
hugetlbfs_inc_free_inodes(HUGETLBFS_SB(inode->i_sb));
- mpol_free_shared_policy(&HUGETLBFS_I(inode)->policy);
}

static const struct address_space_operations hugetlbfs_aops = {
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 3c4427a2396d..a574e26e18a2 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -30,7 +30,7 @@ void free_huge_folio(struct folio *folio);

#ifdef CONFIG_HUGETLB_PAGE

-#include <linux/mempolicy.h>
+#include <linux/pagemap.h>
#include <linux/shm.h>
#include <asm/tlbflush.h>

@@ -513,7 +513,6 @@ static inline struct hugetlbfs_sb_info *HUGETLBFS_SB(struct super_block *sb)
}

struct hugetlbfs_inode_info {
- struct shared_policy policy;
struct inode vfs_inode;
unsigned int seals;
};
--
2.35.3

From: Hugh Dickins
Date: 2023-10-03 09:16:50
Subject: [PATCH v2 02/12] kernfs: drop shared NUMA mempolicy hooks

It seems strange that kernfs should be an outlier with a set_policy and
get_policy in its kernfs_vm_ops. Ah, it dates back to v2.6.30's commit
095160aee954 ("sysfs: fix some bin_vm_ops errors"), when I had crashed
on powerpc's pci_mmap_legacy_page_range() fallback to shmem_zero_setup().

Well, that was commendably thorough: giving sysfs-bin a set_policy and
get_policy, just to avoid the way it was coded resulting in EINVAL from
mmap when CONFIG_NUMA is enabled; but it somehow feels a bit over-the-top
to me now.

It's easier to say that nobody should expect to manage a shmem object's
shared NUMA mempolicy via some kernfs backdoor to that object: delete
that code (and there's no longer an EINVAL from mmap in the NUMA case).

This then leaves set_policy/get_policy as implemented only by shmem -
though importantly also by SysV SHM, which has to interface with shmem,
which implements them, and with SHM_HUGETLB, which does not.

Signed-off-by: Hugh Dickins <[email protected]>
Reviewed-by: Matthew Wilcox (Oracle) <[email protected]>
---
fs/kernfs/file.c | 49 -------------------------------------------------
1 file changed, 49 deletions(-)

diff --git a/fs/kernfs/file.c b/fs/kernfs/file.c
index 180906c36f51..aaa76410e550 100644
--- a/fs/kernfs/file.c
+++ b/fs/kernfs/file.c
@@ -429,60 +429,11 @@ static int kernfs_vma_access(struct vm_area_struct *vma, unsigned long addr,
return ret;
}

-#ifdef CONFIG_NUMA
-static int kernfs_vma_set_policy(struct vm_area_struct *vma,
- struct mempolicy *new)
-{
- struct file *file = vma->vm_file;
- struct kernfs_open_file *of = kernfs_of(file);
- int ret;
-
- if (!of->vm_ops)
- return 0;
-
- if (!kernfs_get_active(of->kn))
- return -EINVAL;
-
- ret = 0;
- if (of->vm_ops->set_policy)
- ret = of->vm_ops->set_policy(vma, new);
-
- kernfs_put_active(of->kn);
- return ret;
-}
-
-static struct mempolicy *kernfs_vma_get_policy(struct vm_area_struct *vma,
- unsigned long addr)
-{
- struct file *file = vma->vm_file;
- struct kernfs_open_file *of = kernfs_of(file);
- struct mempolicy *pol;
-
- if (!of->vm_ops)
- return vma->vm_policy;
-
- if (!kernfs_get_active(of->kn))
- return vma->vm_policy;
-
- pol = vma->vm_policy;
- if (of->vm_ops->get_policy)
- pol = of->vm_ops->get_policy(vma, addr);
-
- kernfs_put_active(of->kn);
- return pol;
-}
-
-#endif
-
static const struct vm_operations_struct kernfs_vm_ops = {
.open = kernfs_vma_open,
.fault = kernfs_vma_fault,
.page_mkwrite = kernfs_vma_page_mkwrite,
.access = kernfs_vma_access,
-#ifdef CONFIG_NUMA
- .set_policy = kernfs_vma_set_policy,
- .get_policy = kernfs_vma_get_policy,
-#endif
};

static int kernfs_fop_mmap(struct file *file, struct vm_area_struct *vma)
--
2.35.3

From: Hugh Dickins
Date: 2023-10-03 09:18:29
Subject: [PATCH v2 03/12] mempolicy: fix migrate_pages(2) syscall return nr_failed

"man 2 migrate_pages" says "On success migrate_pages() returns the number
of pages that could not be moved". Although 5.3 and 5.4 commits fixed
mbind(MPOL_MF_STRICT|MPOL_MF_MOVE*) to fail with EIO when not all pages
could be moved (because some could not be isolated for migration),
migrate_pages(2) was left still reporting only those pages failing at the
migration stage, forgetting those failing at the earlier isolation stage.
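
For illustration, a minimal userspace sketch (hypothetical, not part of
the patch; the node numbers are arbitrary) of the return convention being
fixed, relying only on the manpage semantics quoted above:

/*
 * Hypothetical illustration, not part of the patch: ask the kernel to
 * move this process's pages from node 0 to node 1, then report how
 * many pages could not be moved, per "man 2 migrate_pages".
 */
#include <numaif.h>
#include <stdio.h>

int main(void)
{
	unsigned long old_nodes = 1UL << 0;	/* source: node 0 */
	unsigned long new_nodes = 1UL << 1;	/* target: node 1 */
	long nr_failed;

	nr_failed = migrate_pages(0 /* self */, 8 * sizeof(unsigned long),
				  &old_nodes, &new_nodes);
	if (nr_failed < 0)
		perror("migrate_pages");
	else if (nr_failed > 0)
		printf("%ld pages could not be moved\n", nr_failed);
	return 0;
}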

Fix that by accumulating a long nr_failed count in struct queue_pages,
returned by queue_pages_range() when it's not returning an error, for
adding on to the nr_failed count from migrate_pages() in mm/migrate.c.
A count of pages? It's more a count of folios, but changing it to pages
would entail more work (also in mm/migrate.c): does not seem justified.

queue_pages_range() itself should only return -EIO in the "strictly
unmovable" case (STRICT without any MOVEs): in that case it's best to
break out as soon as nr_failed gets set; but otherwise it should continue
to isolate pages for MOVing even when nr_failed has been set - as the
mbind(2) manpage promises.
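
And a similarly hypothetical sketch (not part of the patch) of the
mbind(2) behaviour relied on here: with MPOL_MF_STRICT together with
MPOL_MF_MOVE, isolation continues as far as it can, and EIO reports
that not every page could be placed on the requested nodes:

/*
 * Hypothetical illustration, not part of the patch: bind a range to
 * node 0 and move its existing pages there; with MPOL_MF_STRICT, EIO
 * reports that some page could not be placed on the requested node.
 */
#include <errno.h>
#include <numaif.h>
#include <stdio.h>

static void bind_and_move_to_node0(void *addr, unsigned long len)
{
	unsigned long nodemask = 1UL << 0;

	if (mbind(addr, len, MPOL_BIND, &nodemask, 8 * sizeof(unsigned long),
		  MPOL_MF_MOVE | MPOL_MF_STRICT) == -1) {
		if (errno == EIO)
			fprintf(stderr, "some pages could not be moved\n");
		else
			perror("mbind");
	}
}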

There is a case where nr_failed should be incremented but was being
missed: queue_folios_pte_range() and queue_folios_hugetlb() now count the
transient migration entries, as queue_folios_pmd() already did. And there
is a case where nr_failed should not be incremented but would have been:
on meeting later PTEs of the same large folio, which can only be isolated
once: fixed by recording the current large folio in struct queue_pages.

Clean up the affected functions, fixing or updating many comments. Make
migrate_folio_add() return bool, without -EIO: true if adding, or if
skipping a shared folio (but its arguable folio_estimated_sharers()
heuristic is left unchanged). Pass an MPOL_MF_WRLOCK flag to
queue_pages_range(), instead of bool lock_vma. Use explicit STRICT|MOVE*
flags where queue_pages_test_walk() checks for skipping, instead of
hiding them behind MPOL_MF_VALID.

Signed-off-by: Hugh Dickins <[email protected]>
Reviewed-by: Matthew Wilcox (Oracle) <[email protected]>
---
mm/mempolicy.c | 342 ++++++++++++++++++++++++---------------------------
1 file changed, 161 insertions(+), 181 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 38a47fa33ef4..752d880dcdf8 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -111,7 +111,8 @@

/* Internal flags */
#define MPOL_MF_DISCONTIG_OK (MPOL_MF_INTERNAL << 0) /* Skip checks for continuous vmas */
-#define MPOL_MF_INVERT (MPOL_MF_INTERNAL << 1) /* Invert check for nodemask */
+#define MPOL_MF_INVERT (MPOL_MF_INTERNAL << 1) /* Invert check for nodemask */
+#define MPOL_MF_WRLOCK (MPOL_MF_INTERNAL << 2) /* Write-lock walked vmas */

static struct kmem_cache *policy_cache;
static struct kmem_cache *sn_cache;
@@ -416,9 +417,19 @@ static const struct mempolicy_operations mpol_ops[MPOL_MAX] = {
},
};

-static int migrate_folio_add(struct folio *folio, struct list_head *foliolist,
+static bool migrate_folio_add(struct folio *folio, struct list_head *foliolist,
unsigned long flags);

+static bool strictly_unmovable(unsigned long flags)
+{
+ /*
+ * STRICT without MOVE flags lets do_mbind() fail immediately with -EIO
+ * if any misplaced page is found.
+ */
+ return (flags & (MPOL_MF_STRICT | MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) ==
+ MPOL_MF_STRICT;
+}
+
struct queue_pages {
struct list_head *pagelist;
unsigned long flags;
@@ -426,7 +437,8 @@ struct queue_pages {
unsigned long start;
unsigned long end;
struct vm_area_struct *first;
- bool has_unmovable;
+ struct folio *large; /* note last large folio encountered */
+ long nr_failed; /* could not be isolated at this time */
};

/*
@@ -444,61 +456,37 @@ static inline bool queue_folio_required(struct folio *folio,
return node_isset(nid, *qp->nmask) == !(flags & MPOL_MF_INVERT);
}

-/*
- * queue_folios_pmd() has three possible return values:
- * 0 - folios are placed on the right node or queued successfully, or
- * special page is met, i.e. zero page, or unmovable page is found
- * but continue walking (indicated by queue_pages.has_unmovable).
- * -EIO - is migration entry or only MPOL_MF_STRICT was specified and an
- * existing folio was already on a node that does not follow the
- * policy.
- */
-static int queue_folios_pmd(pmd_t *pmd, spinlock_t *ptl, unsigned long addr,
- unsigned long end, struct mm_walk *walk)
- __releases(ptl)
+static void queue_folios_pmd(pmd_t *pmd, struct mm_walk *walk)
{
- int ret = 0;
struct folio *folio;
struct queue_pages *qp = walk->private;
- unsigned long flags;

if (unlikely(is_pmd_migration_entry(*pmd))) {
- ret = -EIO;
- goto unlock;
+ qp->nr_failed++;
+ return;
}
folio = pfn_folio(pmd_pfn(*pmd));
if (is_huge_zero_page(&folio->page)) {
walk->action = ACTION_CONTINUE;
- goto unlock;
+ return;
}
if (!queue_folio_required(folio, qp))
- goto unlock;
-
- flags = qp->flags;
- /* go to folio migration */
- if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
- if (!vma_migratable(walk->vma) ||
- migrate_folio_add(folio, qp->pagelist, flags)) {
- qp->has_unmovable = true;
- goto unlock;
- }
- } else
- ret = -EIO;
-unlock:
- spin_unlock(ptl);
- return ret;
+ return;
+ if (!(qp->flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) ||
+ !vma_migratable(walk->vma) ||
+ !migrate_folio_add(folio, qp->pagelist, qp->flags))
+ qp->nr_failed++;
}

/*
- * Scan through pages checking if pages follow certain conditions,
- * and move them to the pagelist if they do.
+ * Scan through folios, checking if they satisfy the required conditions,
+ * moving them from LRU to local pagelist for migration if they do (or not).
*
- * queue_folios_pte_range() has three possible return values:
- * 0 - folios are placed on the right node or queued successfully, or
- * special page is met, i.e. zero page, or unmovable page is found
- * but continue walking (indicated by queue_pages.has_unmovable).
- * -EIO - only MPOL_MF_STRICT was specified and an existing folio was already
- * on a node that does not follow the policy.
+ * queue_folios_pte_range() has two possible return values:
+ * 0 - continue walking to scan for more, even if an existing folio on the
+ * wrong node could not be isolated and queued for migration.
+ * -EIO - only MPOL_MF_STRICT was specified, without MPOL_MF_MOVE or ..._ALL,
+ * and an existing folio was on a node that does not follow the policy.
*/
static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
unsigned long end, struct mm_walk *walk)
@@ -512,8 +500,11 @@ static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
spinlock_t *ptl;

ptl = pmd_trans_huge_lock(pmd, vma);
- if (ptl)
- return queue_folios_pmd(pmd, ptl, addr, end, walk);
+ if (ptl) {
+ queue_folios_pmd(pmd, walk);
+ spin_unlock(ptl);
+ goto out;
+ }

mapped_pte = pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
if (!pte) {
@@ -522,8 +513,13 @@ static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
}
for (; addr != end; pte++, addr += PAGE_SIZE) {
ptent = ptep_get(pte);
- if (!pte_present(ptent))
+ if (pte_none(ptent))
continue;
+ if (!pte_present(ptent)) {
+ if (is_migration_entry(pte_to_swp_entry(ptent)))
+ qp->nr_failed++;
+ continue;
+ }
folio = vm_normal_folio(vma, addr, ptent);
if (!folio || folio_is_zone_device(folio))
continue;
@@ -535,95 +531,87 @@ static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
continue;
if (!queue_folio_required(folio, qp))
continue;
- if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
+ if (folio_test_large(folio)) {
/*
- * MPOL_MF_STRICT must be specified if we get here.
- * Continue walking vmas due to MPOL_MF_MOVE* flags.
+ * A large folio can only be isolated from LRU once,
+ * but may be mapped by many PTEs (and Copy-On-Write may
+ * intersperse PTEs of other, order 0, folios). This is
+ * a common case, so don't mistake it for failure (but
+ * there can be other cases of multi-mapped pages which
+ * this quick check does not help to filter out - and a
+ * search of the pagelist might grow to be prohibitive).
+ *
+ * migrate_pages(&pagelist) returns nr_failed folios, so
+ * check "large" now so that queue_pages_range() returns
+ * a comparable nr_failed folios. This does imply that
+ * if folio could not be isolated for some racy reason
+ * at its first PTE, later PTEs will not give it another
+ * chance of isolation; but keeps the accounting simple.
*/
- if (!vma_migratable(vma))
- qp->has_unmovable = true;
-
- /*
- * Do not abort immediately since there may be
- * temporary off LRU pages in the range. Still
- * need migrate other LRU pages.
- */
- if (migrate_folio_add(folio, qp->pagelist, flags))
- qp->has_unmovable = true;
- } else
- break;
+ if (folio == qp->large)
+ continue;
+ qp->large = folio;
+ }
+ if (!(flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) ||
+ !vma_migratable(vma) ||
+ !migrate_folio_add(folio, qp->pagelist, flags)) {
+ qp->nr_failed++;
+ if (strictly_unmovable(flags))
+ break;
+ }
}
pte_unmap_unlock(mapped_pte, ptl);
cond_resched();
-
- return addr != end ? -EIO : 0;
+out:
+ if (qp->nr_failed && strictly_unmovable(flags))
+ return -EIO;
+ return 0;
}

static int queue_folios_hugetlb(pte_t *pte, unsigned long hmask,
unsigned long addr, unsigned long end,
struct mm_walk *walk)
{
- int ret = 0;
#ifdef CONFIG_HUGETLB_PAGE
struct queue_pages *qp = walk->private;
- unsigned long flags = (qp->flags & MPOL_MF_VALID);
+ unsigned long flags = qp->flags;
struct folio *folio;
spinlock_t *ptl;
pte_t entry;

ptl = huge_pte_lock(hstate_vma(walk->vma), walk->mm, pte);
entry = huge_ptep_get(pte);
- if (!pte_present(entry))
+ if (!pte_present(entry)) {
+ if (unlikely(is_hugetlb_entry_migration(entry)))
+ qp->nr_failed++;
goto unlock;
+ }
folio = pfn_folio(pte_pfn(entry));
if (!queue_folio_required(folio, qp))
goto unlock;
-
- if (flags == MPOL_MF_STRICT) {
- /*
- * STRICT alone means only detecting misplaced folio and no
- * need to further check other vma.
- */
- ret = -EIO;
+ if (!(flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) ||
+ !vma_migratable(walk->vma)) {
+ qp->nr_failed++;
goto unlock;
}
-
- if (!vma_migratable(walk->vma)) {
- /*
- * Must be STRICT with MOVE*, otherwise .test_walk() have
- * stopped walking current vma.
- * Detecting misplaced folio but allow migrating folios which
- * have been queued.
- */
- qp->has_unmovable = true;
- goto unlock;
- }
-
/*
- * With MPOL_MF_MOVE, we try to migrate only unshared folios. If it
- * is shared it is likely not worth migrating.
+ * Unless MPOL_MF_MOVE_ALL, we try to avoid migrating a shared folio.
+ * Choosing not to migrate a shared folio is not counted as a failure.
*
* To check if the folio is shared, ideally we want to make sure
* every page is mapped to the same process. Doing that is very
- * expensive, so check the estimated mapcount of the folio instead.
+ * expensive, so check the estimated sharers of the folio instead.
*/
- if (flags & (MPOL_MF_MOVE_ALL) ||
- (flags & MPOL_MF_MOVE && folio_estimated_sharers(folio) == 1 &&
- !hugetlb_pmd_shared(pte))) {
- if (!isolate_hugetlb(folio, qp->pagelist) &&
- (flags & MPOL_MF_STRICT))
- /*
- * Failed to isolate folio but allow migrating pages
- * which have been queued.
- */
- qp->has_unmovable = true;
- }
+ if ((flags & MPOL_MF_MOVE_ALL) ||
+ (folio_estimated_sharers(folio) == 1 && !hugetlb_pmd_shared(pte)))
+ if (!isolate_hugetlb(folio, qp->pagelist))
+ qp->nr_failed++;
unlock:
spin_unlock(ptl);
-#else
- BUG();
+ if (qp->nr_failed && strictly_unmovable(flags))
+ return -EIO;
#endif
- return ret;
+ return 0;
}

#ifdef CONFIG_NUMA_BALANCING
@@ -704,8 +692,11 @@ static int queue_pages_test_walk(unsigned long start, unsigned long end,
return 1;
}

- /* queue pages from current vma */
- if (flags & MPOL_MF_VALID)
+ /*
+ * Check page nodes, and queue pages to move, in the current vma.
+ * But if no moving, and no strict checking, the scan can be skipped.
+ */
+ if (flags & (MPOL_MF_STRICT | MPOL_MF_MOVE | MPOL_MF_MOVE_ALL))
return 0;
return 1;
}
@@ -727,22 +718,21 @@ static const struct mm_walk_ops queue_pages_lock_vma_walk_ops = {
/*
* Walk through page tables and collect pages to be migrated.
*
- * If pages found in a given range are on a set of nodes (determined by
- * @nodes and @flags,) it's isolated and queued to the pagelist which is
- * passed via @private.
+ * If pages found in a given range are not on the required set of @nodes,
+ * and migration is allowed, they are isolated and queued to @pagelist.
*
- * queue_pages_range() has three possible return values:
- * 1 - there is unmovable page, but MPOL_MF_MOVE* & MPOL_MF_STRICT were
- * specified.
- * 0 - queue pages successfully or no misplaced page.
- * errno - i.e. misplaced pages with MPOL_MF_STRICT specified (-EIO) or
- * memory range specified by nodemask and maxnode points outside
- * your accessible address space (-EFAULT)
+ * queue_pages_range() may return:
+ * 0 - all pages already on the right node, or successfully queued for moving
+ * (or neither strict checking nor moving requested: only range checking).
+ * >0 - this number of misplaced folios could not be queued for moving
+ * (a hugetlbfs page or a transparent huge page being counted as 1).
+ * -EIO - a misplaced page found, when MPOL_MF_STRICT specified without MOVEs.
+ * -EFAULT - a hole in the memory range, when MPOL_MF_DISCONTIG_OK unspecified.
*/
-static int
+static long
queue_pages_range(struct mm_struct *mm, unsigned long start, unsigned long end,
nodemask_t *nodes, unsigned long flags,
- struct list_head *pagelist, bool lock_vma)
+ struct list_head *pagelist)
{
int err;
struct queue_pages qp = {
@@ -752,20 +742,17 @@ queue_pages_range(struct mm_struct *mm, unsigned long start, unsigned long end,
.start = start,
.end = end,
.first = NULL,
- .has_unmovable = false,
};
- const struct mm_walk_ops *ops = lock_vma ?
+ const struct mm_walk_ops *ops = (flags & MPOL_MF_WRLOCK) ?
&queue_pages_lock_vma_walk_ops : &queue_pages_walk_ops;

err = walk_page_range(mm, start, end, ops, &qp);

- if (qp.has_unmovable)
- err = 1;
if (!qp.first)
/* whole range in hole */
err = -EFAULT;

- return err;
+ return err ? : qp.nr_failed;
}

/*
@@ -1028,16 +1015,16 @@ static long do_get_mempolicy(int *policy, nodemask_t *nmask,
}

#ifdef CONFIG_MIGRATION
-static int migrate_folio_add(struct folio *folio, struct list_head *foliolist,
+static bool migrate_folio_add(struct folio *folio, struct list_head *foliolist,
unsigned long flags)
{
/*
- * We try to migrate only unshared folios. If it is shared it
- * is likely not worth migrating.
+ * Unless MPOL_MF_MOVE_ALL, we try to avoid migrating a shared folio.
+ * Choosing not to migrate a shared folio is not counted as a failure.
*
* To check if the folio is shared, ideally we want to make sure
* every page is mapped to the same process. Doing that is very
- * expensive, so check the estimated mapcount of the folio instead.
+ * expensive, so check the estimated sharers of the folio instead.
*/
if ((flags & MPOL_MF_MOVE_ALL) || folio_estimated_sharers(folio) == 1) {
if (folio_isolate_lru(folio)) {
@@ -1045,32 +1032,31 @@ static int migrate_folio_add(struct folio *folio, struct list_head *foliolist,
node_stat_mod_folio(folio,
NR_ISOLATED_ANON + folio_is_file_lru(folio),
folio_nr_pages(folio));
- } else if (flags & MPOL_MF_STRICT) {
+ } else {
/*
* Non-movable folio may reach here. And, there may be
* temporary off LRU folios or non-LRU movable folios.
* Treat them as unmovable folios since they can't be
- * isolated, so they can't be moved at the moment. It
- * should return -EIO for this case too.
+ * isolated, so they can't be moved at the moment.
*/
- return -EIO;
+ return false;
}
}
-
- return 0;
+ return true;
}

/*
* Migrate pages from one node to a target node.
* Returns error or the number of pages not migrated.
*/
-static int migrate_to_node(struct mm_struct *mm, int source, int dest,
- int flags)
+static long migrate_to_node(struct mm_struct *mm, int source, int dest,
+ int flags)
{
nodemask_t nmask;
struct vm_area_struct *vma;
LIST_HEAD(pagelist);
- int err = 0;
+ long nr_failed;
+ long err = 0;
struct migration_target_control mtc = {
.nid = dest,
.gfp_mask = GFP_HIGHUSER_MOVABLE | __GFP_THISNODE,
@@ -1079,23 +1065,27 @@ static int migrate_to_node(struct mm_struct *mm, int source, int dest,
nodes_clear(nmask);
node_set(source, nmask);

- /*
- * This does not "check" the range but isolates all pages that
- * need migration. Between passing in the full user address
- * space range and MPOL_MF_DISCONTIG_OK, this call can not fail.
- */
- vma = find_vma(mm, 0);
VM_BUG_ON(!(flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)));
- queue_pages_range(mm, vma->vm_start, mm->task_size, &nmask,
- flags | MPOL_MF_DISCONTIG_OK, &pagelist, false);
+ vma = find_vma(mm, 0);
+
+ /*
+ * This does not migrate the range, but isolates all pages that
+ * need migration. Between passing in the full user address
+ * space range and MPOL_MF_DISCONTIG_OK, this call cannot fail,
+ * but passes back the count of pages which could not be isolated.
+ */
+ nr_failed = queue_pages_range(mm, vma->vm_start, mm->task_size, &nmask,
+ flags | MPOL_MF_DISCONTIG_OK, &pagelist);

if (!list_empty(&pagelist)) {
err = migrate_pages(&pagelist, alloc_migration_target, NULL,
- (unsigned long)&mtc, MIGRATE_SYNC, MR_SYSCALL, NULL);
+ (unsigned long)&mtc, MIGRATE_SYNC, MR_SYSCALL, NULL);
if (err)
putback_movable_pages(&pagelist);
}

+ if (err >= 0)
+ err += nr_failed;
return err;
}

@@ -1108,8 +1098,8 @@ static int migrate_to_node(struct mm_struct *mm, int source, int dest,
int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from,
const nodemask_t *to, int flags)
{
- int busy = 0;
- int err = 0;
+ long nr_failed = 0;
+ long err = 0;
nodemask_t tmp;

lru_cache_disable();
@@ -1191,7 +1181,7 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from,
node_clear(source, tmp);
err = migrate_to_node(mm, source, dest, flags);
if (err > 0)
- busy += err;
+ nr_failed += err;
if (err < 0)
break;
}
@@ -1200,8 +1190,7 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from,
lru_cache_enable();
if (err < 0)
return err;
- return busy;
-
+ return (nr_failed < INT_MAX) ? nr_failed : INT_MAX;
}

/*
@@ -1240,10 +1229,10 @@ static struct folio *new_folio(struct folio *src, unsigned long start)
}
#else

-static int migrate_folio_add(struct folio *folio, struct list_head *foliolist,
+static bool migrate_folio_add(struct folio *folio, struct list_head *foliolist,
unsigned long flags)
{
- return -EIO;
+ return false;
}

int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from,
@@ -1267,8 +1256,8 @@ static long do_mbind(unsigned long start, unsigned long len,
struct vma_iterator vmi;
struct mempolicy *new;
unsigned long end;
- int err;
- int ret;
+ long err;
+ long nr_failed;
LIST_HEAD(pagelist);

if (flags & ~(unsigned long)MPOL_MF_VALID)
@@ -1308,10 +1297,8 @@ static long do_mbind(unsigned long start, unsigned long len,
start, start + len, mode, mode_flags,
nmask ? nodes_addr(*nmask)[0] : NUMA_NO_NODE);

- if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
-
+ if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL))
lru_cache_disable();
- }
{
NODEMASK_SCRATCH(scratch);
if (scratch) {
@@ -1327,44 +1314,37 @@ static long do_mbind(unsigned long start, unsigned long len,
goto mpol_out;

/*
- * Lock the VMAs before scanning for pages to migrate, to ensure we don't
- * miss a concurrently inserted page.
+ * Lock the VMAs before scanning for pages to migrate,
+ * to ensure we don't miss a concurrently inserted page.
*/
- ret = queue_pages_range(mm, start, end, nmask,
- flags | MPOL_MF_INVERT, &pagelist, true);
+ nr_failed = queue_pages_range(mm, start, end, nmask,
+ flags | MPOL_MF_INVERT | MPOL_MF_WRLOCK, &pagelist);

- if (ret < 0) {
- err = ret;
- goto up_out;
- }
-
- vma_iter_init(&vmi, mm, start);
- prev = vma_prev(&vmi);
- for_each_vma_range(vmi, vma, end) {
- err = mbind_range(&vmi, vma, &prev, start, end, new);
- if (err)
- break;
+ if (nr_failed < 0) {
+ err = nr_failed;
+ } else {
+ vma_iter_init(&vmi, mm, start);
+ prev = vma_prev(&vmi);
+ for_each_vma_range(vmi, vma, end) {
+ err = mbind_range(&vmi, vma, &prev, start, end, new);
+ if (err)
+ break;
+ }
}

if (!err) {
- int nr_failed = 0;
-
if (!list_empty(&pagelist)) {
WARN_ON_ONCE(flags & MPOL_MF_LAZY);
- nr_failed = migrate_pages(&pagelist, new_folio, NULL,
+ nr_failed |= migrate_pages(&pagelist, new_folio, NULL,
start, MIGRATE_SYNC, MR_MEMPOLICY_MBIND, NULL);
- if (nr_failed)
- putback_movable_pages(&pagelist);
}
-
- if (((ret > 0) || nr_failed) && (flags & MPOL_MF_STRICT))
+ if (nr_failed && (flags & MPOL_MF_STRICT))
err = -EIO;
- } else {
-up_out:
- if (!list_empty(&pagelist))
- putback_movable_pages(&pagelist);
}

+ if (!list_empty(&pagelist))
+ putback_movable_pages(&pagelist);
+
mmap_write_unlock(mm);
mpol_out:
mpol_put(new);
--
2.35.3

From: Hugh Dickins
Date: 2023-10-03 09:19:21
Subject: [PATCH v2 04/12] mempolicy trivia: delete those ancient pr_debug()s

Delete those ancient pr_debug()s - they were PDprintk()s in Andi Kleen's
original submission of the core NUMA API, and useful when debugging
shared mempolicy lifetime back then, but they have not been used recently.

Signed-off-by: Hugh Dickins <[email protected]>
Reviewed-by: Matthew Wilcox (Oracle) <[email protected]>
---
mm/mempolicy.c | 21 ---------------------
1 file changed, 21 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 752d880dcdf8..780498662b75 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -264,9 +264,6 @@ static struct mempolicy *mpol_new(unsigned short mode, unsigned short flags,
{
struct mempolicy *policy;

- pr_debug("setting mode %d flags %d nodes[0] %lx\n",
- mode, flags, nodes ? nodes_addr(*nodes)[0] : NUMA_NO_NODE);
-
if (mode == MPOL_DEFAULT) {
if (nodes && !nodes_empty(*nodes))
return ERR_PTR(-EINVAL);
@@ -768,11 +765,6 @@ static int vma_replace_policy(struct vm_area_struct *vma,

vma_assert_write_locked(vma);

- pr_debug("vma %lx-%lx/%lx vm_ops %p vm_file %p set_policy %p\n",
- vma->vm_start, vma->vm_end, vma->vm_pgoff,
- vma->vm_ops, vma->vm_file,
- vma->vm_ops ? vma->vm_ops->set_policy : NULL);
-
new = mpol_dup(pol);
if (IS_ERR(new))
return PTR_ERR(new);
@@ -1293,10 +1285,6 @@ static long do_mbind(unsigned long start, unsigned long len,
if (!new)
flags |= MPOL_MF_DISCONTIG_OK;

- pr_debug("mbind %lx-%lx mode:%d flags:%d nodes:%lx\n",
- start, start + len, mode, mode_flags,
- nmask ? nodes_addr(*nmask)[0] : NUMA_NO_NODE);
-
if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL))
lru_cache_disable();
{
@@ -2516,8 +2504,6 @@ static void sp_insert(struct shared_policy *sp, struct sp_node *new)
}
rb_link_node(&new->nd, parent, p);
rb_insert_color(&new->nd, &sp->root);
- pr_debug("inserting %lx-%lx: %d\n", new->start, new->end,
- new->policy ? new->policy->mode : 0);
}

/* Find shared policy intersecting idx */
@@ -2656,7 +2642,6 @@ void mpol_put_task_policy(struct task_struct *task)

static void sp_delete(struct shared_policy *sp, struct sp_node *n)
{
- pr_debug("deleting %lx-l%lx\n", n->start, n->end);
rb_erase(&n->nd, &sp->root);
sp_free(n);
}
@@ -2813,12 +2798,6 @@ int mpol_set_shared_policy(struct shared_policy *info,
struct sp_node *new = NULL;
unsigned long sz = vma_pages(vma);

- pr_debug("set_shared_policy %lx sz %lu %d %d %lx\n",
- vma->vm_pgoff,
- sz, npol ? npol->mode : -1,
- npol ? npol->flags : -1,
- npol ? nodes_addr(npol->nodes)[0] : NUMA_NO_NODE);
-
if (npol) {
new = sp_alloc(vma->vm_pgoff, vma->vm_pgoff + sz, npol);
if (!new)
--
2.35.3

From: Hugh Dickins
Date: 2023-10-03 09:21:04
Subject: [PATCH v2 05/12] mempolicy trivia: slightly more consistent naming

Before getting down to work, do a little cleanup, mainly of inconsistent
variable naming. I gave up trying to rationalize mpol versus pol versus
policy, and node versus nid, but let's avoid p and nd. Remove a few
superfluous blank lines, but add one; and here prefer vma->vm_policy to
vma_policy(vma) - the latter being appropriate in other sources, which
have to allow for !CONFIG_NUMA. That intriguing line about KERNEL_DS?
should have gone in v2.6.15, when numa_policy_init() stopped using
set_mempolicy(2)'s system call handler.

Signed-off-by: Hugh Dickins <[email protected]>
Reviewed-by: Matthew Wilcox (Oracle) <[email protected]>
---
 include/linux/mempolicy.h | 11 +++----
 mm/mempolicy.c            | 73 +++++++++++++++++++----------------------
 2 files changed, 38 insertions(+), 46 deletions(-)

diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index 6c2754d7bfed..325b7200c311 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -126,10 +126,9 @@ struct shared_policy {

int vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst);
void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol);
-int mpol_set_shared_policy(struct shared_policy *info,
- struct vm_area_struct *vma,
- struct mempolicy *new);
-void mpol_free_shared_policy(struct shared_policy *p);
+int mpol_set_shared_policy(struct shared_policy *sp,
+ struct vm_area_struct *vma, struct mempolicy *mpol);
+void mpol_free_shared_policy(struct shared_policy *sp);
struct mempolicy *mpol_shared_policy_lookup(struct shared_policy *sp,
unsigned long idx);

@@ -193,7 +192,7 @@ static inline bool mpol_equal(struct mempolicy *a, struct mempolicy *b)
return true;
}

-static inline void mpol_put(struct mempolicy *p)
+static inline void mpol_put(struct mempolicy *pol)
{
}

@@ -212,7 +211,7 @@ static inline void mpol_shared_policy_init(struct shared_policy *sp,
{
}

-static inline void mpol_free_shared_policy(struct shared_policy *p)
+static inline void mpol_free_shared_policy(struct shared_policy *sp)
{
}

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 780498662b75..c7906a034959 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -25,7 +25,7 @@
* to the last. It would be better if bind would truly restrict
* the allocation to memory nodes instead
*
- * preferred Try a specific node first before normal fallback.
+ * preferred Try a specific node first before normal fallback.
* As a special case NUMA_NO_NODE here means do the allocation
* on the local CPU. This is normally identical to default,
* but useful to set in a VMA when you have a non default
@@ -52,7 +52,7 @@
* on systems with highmem kernel lowmem allocation don't get policied.
* Same with GFP_DMA allocations.
*
- * For shmfs/tmpfs/hugetlbfs shared memory the policy is shared between
+ * For shmem/tmpfs shared memory the policy is shared between
* all users and remembered even when nobody has memory mapped.
*/

@@ -291,6 +291,7 @@ static struct mempolicy *mpol_new(unsigned short mode, unsigned short flags,
return ERR_PTR(-EINVAL);
} else if (nodes_empty(*nodes))
return ERR_PTR(-EINVAL);
+
policy = kmem_cache_alloc(policy_cache, GFP_KERNEL);
if (!policy)
return ERR_PTR(-ENOMEM);
@@ -303,11 +304,11 @@ static struct mempolicy *mpol_new(unsigned short mode, unsigned short flags,
}

/* Slow path of a mpol destructor. */
-void __mpol_put(struct mempolicy *p)
+void __mpol_put(struct mempolicy *pol)
{
- if (!atomic_dec_and_test(&p->refcnt))
+ if (!atomic_dec_and_test(&pol->refcnt))
return;
- kmem_cache_free(policy_cache, p);
+ kmem_cache_free(policy_cache, pol);
}

static void mpol_rebind_default(struct mempolicy *pol, const nodemask_t *nodes)
@@ -364,7 +365,6 @@ static void mpol_rebind_policy(struct mempolicy *pol, const nodemask_t *newmask)
*
* Called with task's alloc_lock held.
*/
-
void mpol_rebind_task(struct task_struct *tsk, const nodemask_t *new)
{
mpol_rebind_policy(tsk->mempolicy, new);
@@ -375,7 +375,6 @@ void mpol_rebind_task(struct task_struct *tsk, const nodemask_t *new)
*
* Call holding a reference to mm. Takes mm->mmap_lock during call.
*/
-
void mpol_rebind_mm(struct mm_struct *mm, nodemask_t *new)
{
struct vm_area_struct *vma;
@@ -757,7 +756,7 @@ queue_pages_range(struct mm_struct *mm, unsigned long start, unsigned long end,
* This must be called with the mmap_lock held for writing.
*/
static int vma_replace_policy(struct vm_area_struct *vma,
- struct mempolicy *pol)
+ struct mempolicy *pol)
{
int err;
struct mempolicy *old;
@@ -803,7 +802,7 @@ static int mbind_range(struct vma_iterator *vmi, struct vm_area_struct *vma,
vmstart = vma->vm_start;
}

- if (mpol_equal(vma_policy(vma), new_pol)) {
+ if (mpol_equal(vma->vm_policy, new_pol)) {
*prev = vma;
return 0;
}
@@ -875,18 +874,18 @@ static long do_set_mempolicy(unsigned short mode, unsigned short flags,
*
* Called with task's alloc_lock held
*/
-static void get_policy_nodemask(struct mempolicy *p, nodemask_t *nodes)
+static void get_policy_nodemask(struct mempolicy *pol, nodemask_t *nodes)
{
nodes_clear(*nodes);
- if (p == &default_policy)
+ if (pol == &default_policy)
return;

- switch (p->mode) {
+ switch (pol->mode) {
case MPOL_BIND:
case MPOL_INTERLEAVE:
case MPOL_PREFERRED:
case MPOL_PREFERRED_MANY:
- *nodes = p->nodes;
+ *nodes = pol->nodes;
break;
case MPOL_LOCAL:
/* return empty node mask for local allocation */
@@ -1654,7 +1653,6 @@ static int kernel_migrate_pages(pid_t pid, unsigned long maxnode,
out_put:
put_task_struct(task);
goto out;
-
}

SYSCALL_DEFINE4(migrate_pages, pid_t, pid, unsigned long, maxnode,
@@ -1664,7 +1662,6 @@ SYSCALL_DEFINE4(migrate_pages, pid_t, pid, unsigned long, maxnode,
return kernel_migrate_pages(pid, maxnode, old_nodes, new_nodes);
}

-
/* Retrieve NUMA policy */
static int kernel_get_mempolicy(int __user *policy,
unsigned long __user *nmask,
@@ -1847,10 +1844,10 @@ nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy)
* policy_node() is always coupled with policy_nodemask(), which
* secures the nodemask limit for 'bind' and 'prefer-many' policy.
*/
-static int policy_node(gfp_t gfp, struct mempolicy *policy, int nd)
+static int policy_node(gfp_t gfp, struct mempolicy *policy, int nid)
{
if (policy->mode == MPOL_PREFERRED) {
- nd = first_node(policy->nodes);
+ nid = first_node(policy->nodes);
} else {
/*
* __GFP_THISNODE shouldn't even be used with the bind policy
@@ -1865,19 +1862,18 @@ static int policy_node(gfp_t gfp, struct mempolicy *policy, int nd)
policy->home_node != NUMA_NO_NODE)
return policy->home_node;

- return nd;
+ return nid;
}

/* Do dynamic interleaving for a process */
-static unsigned interleave_nodes(struct mempolicy *policy)
+static unsigned int interleave_nodes(struct mempolicy *policy)
{
- unsigned next;
- struct task_struct *me = current;
+ unsigned int nid;

- next = next_node_in(me->il_prev, policy->nodes);
- if (next < MAX_NUMNODES)
- me->il_prev = next;
- return next;
+ nid = next_node_in(current->il_prev, policy->nodes);
+ if (nid < MAX_NUMNODES)
+ current->il_prev = nid;
+ return nid;
}

/*
@@ -2367,7 +2363,7 @@ unsigned long alloc_pages_bulk_array_mempolicy(gfp_t gfp,

int vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst)
{
- struct mempolicy *pol = mpol_dup(vma_policy(src));
+ struct mempolicy *pol = mpol_dup(src->vm_policy);

if (IS_ERR(pol))
return PTR_ERR(pol);
@@ -2791,40 +2787,40 @@ void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol)
}
}

-int mpol_set_shared_policy(struct shared_policy *info,
- struct vm_area_struct *vma, struct mempolicy *npol)
+int mpol_set_shared_policy(struct shared_policy *sp,
+ struct vm_area_struct *vma, struct mempolicy *pol)
{
int err;
struct sp_node *new = NULL;
unsigned long sz = vma_pages(vma);

- if (npol) {
- new = sp_alloc(vma->vm_pgoff, vma->vm_pgoff + sz, npol);
+ if (pol) {
+ new = sp_alloc(vma->vm_pgoff, vma->vm_pgoff + sz, pol);
if (!new)
return -ENOMEM;
}
- err = shared_policy_replace(info, vma->vm_pgoff, vma->vm_pgoff+sz, new);
+ err = shared_policy_replace(sp, vma->vm_pgoff, vma->vm_pgoff + sz, new);
if (err && new)
sp_free(new);
return err;
}

/* Free a backing policy store on inode delete. */
-void mpol_free_shared_policy(struct shared_policy *p)
+void mpol_free_shared_policy(struct shared_policy *sp)
{
struct sp_node *n;
struct rb_node *next;

- if (!p->root.rb_node)
+ if (!sp->root.rb_node)
return;
- write_lock(&p->lock);
- next = rb_first(&p->root);
+ write_lock(&sp->lock);
+ next = rb_first(&sp->root);
while (next) {
n = rb_entry(next, struct sp_node, nd);
next = rb_next(&n->nd);
- sp_delete(p, n);
+ sp_delete(sp, n);
}
- write_unlock(&p->lock);
+ write_unlock(&sp->lock);
}

#ifdef CONFIG_NUMA_BALANCING
@@ -2874,7 +2870,6 @@ static inline void __init check_numabalancing_enable(void)
}
#endif /* CONFIG_NUMA_BALANCING */

-/* assumes fs == KERNEL_DS */
void __init numa_policy_init(void)
{
nodemask_t interleave_nodes;
@@ -2937,7 +2932,6 @@ void numa_default_policy(void)
/*
* Parse and format mempolicy from/to strings
*/
-
static const char * const policy_modes[] =
{
[MPOL_DEFAULT] = "default",
@@ -2948,7 +2942,6 @@ static const char * const policy_modes[] =
[MPOL_PREFERRED_MANY] = "prefer (many)",
};

-
#ifdef CONFIG_TMPFS
/**
* mpol_parse_str - parse string to mempolicy, for tmpfs mpol mount option.
--
2.35.3

From: Hugh Dickins
Date: 2023-10-03 09:21:56
Subject: [PATCH v2 06/12] mempolicy trivia: use pgoff_t in shared mempolicy tree

Prefer the more explicit "pgoff_t" to "unsigned long" when dealing with
a shared mempolicy tree. Delete confusing comment about pseudo mm vmas.

Signed-off-by: Hugh Dickins <[email protected]>
---
 include/linux/mempolicy.h | 20 +++++++-------------
 mm/mempolicy.c            | 12 ++++++------
 2 files changed, 13 insertions(+), 19 deletions(-)

diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index 325b7200c311..c69f9480d5e4 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -107,22 +107,16 @@ static inline bool mpol_equal(struct mempolicy *a, struct mempolicy *b)

/*
* Tree of shared policies for a shared memory region.
- * Maintain the policies in a pseudo mm that contains vmas. The vmas
- * carry the policy. As a special twist the pseudo mm is indexed in pages, not
- * bytes, so that we can work with shared memory segments bigger than
- * unsigned long.
*/
-
-struct sp_node {
- struct rb_node nd;
- unsigned long start, end;
- struct mempolicy *policy;
-};
-
struct shared_policy {
struct rb_root root;
rwlock_t lock;
};
+struct sp_node {
+ struct rb_node nd;
+ pgoff_t start, end;
+ struct mempolicy *policy;
+};

int vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst);
void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol);
@@ -130,7 +124,7 @@ int mpol_set_shared_policy(struct shared_policy *sp,
struct vm_area_struct *vma, struct mempolicy *mpol);
void mpol_free_shared_policy(struct shared_policy *sp);
struct mempolicy *mpol_shared_policy_lookup(struct shared_policy *sp,
- unsigned long idx);
+ pgoff_t idx);

struct mempolicy *get_task_policy(struct task_struct *p);
struct mempolicy *__get_vma_policy(struct vm_area_struct *vma,
@@ -216,7 +210,7 @@ static inline void mpol_free_shared_policy(struct shared_policy *sp)
}

static inline struct mempolicy *
-mpol_shared_policy_lookup(struct shared_policy *sp, unsigned long idx)
+mpol_shared_policy_lookup(struct shared_policy *sp, pgoff_t idx)
{
return NULL;
}
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index c7906a034959..1d3f9e1ecbb8 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2448,8 +2448,8 @@ bool __mpol_equal(struct mempolicy *a, struct mempolicy *b)
* lookup first element intersecting start-end. Caller holds sp->lock for
* reading or for writing
*/
-static struct sp_node *
-sp_lookup(struct shared_policy *sp, unsigned long start, unsigned long end)
+static struct sp_node *sp_lookup(struct shared_policy *sp,
+ pgoff_t start, pgoff_t end)
{
struct rb_node *n = sp->root.rb_node;

@@ -2503,8 +2503,8 @@ static void sp_insert(struct shared_policy *sp, struct sp_node *new)
}

/* Find shared policy intersecting idx */
-struct mempolicy *
-mpol_shared_policy_lookup(struct shared_policy *sp, unsigned long idx)
+struct mempolicy *mpol_shared_policy_lookup(struct shared_policy *sp,
+ pgoff_t idx)
{
struct mempolicy *pol = NULL;
struct sp_node *sn;
@@ -2672,8 +2672,8 @@ static struct sp_node *sp_alloc(unsigned long start, unsigned long end,
}

/* Replace a policy range. */
-static int shared_policy_replace(struct shared_policy *sp, unsigned long start,
- unsigned long end, struct sp_node *new)
+static int shared_policy_replace(struct shared_policy *sp, pgoff_t start,
+ pgoff_t end, struct sp_node *new)
{
struct sp_node *n;
struct sp_node *n_new = NULL;
--
2.35.3

From: Hugh Dickins
Date: 2023-10-03 09:23:23
Subject: [PATCH v2 07/12] mempolicy: mpol_shared_policy_init() without pseudo-vma

mpol_shared_policy_init() does not need to use a pseudo-vma: it can use
sp_alloc() and sp_insert() directly, since the object's shared policy
tree is empty and inaccessible (needing no lock) at get_inode() time.

Signed-off-by: Hugh Dickins <[email protected]>
---
mm/mempolicy.c | 28 ++++++++++++++--------------
1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 1d3f9e1ecbb8..5d99fd5cd60b 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2756,30 +2756,30 @@ void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol)
rwlock_init(&sp->lock);

if (mpol) {
- struct vm_area_struct pvma;
- struct mempolicy *new;
+ struct sp_node *sn;
+ struct mempolicy *npol;
NODEMASK_SCRATCH(scratch);

if (!scratch)
goto put_mpol;
- /* contextualize the tmpfs mount point mempolicy */
- new = mpol_new(mpol->mode, mpol->flags, &mpol->w.user_nodemask);
- if (IS_ERR(new))
+
+ /* contextualize the tmpfs mount point mempolicy to this file */
+ npol = mpol_new(mpol->mode, mpol->flags, &mpol->w.user_nodemask);
+ if (IS_ERR(npol))
goto free_scratch; /* no valid nodemask intersection */

task_lock(current);
- ret = mpol_set_nodemask(new, &mpol->w.user_nodemask, scratch);
+ ret = mpol_set_nodemask(npol, &mpol->w.user_nodemask, scratch);
task_unlock(current);
if (ret)
- goto put_new;
+ goto put_npol;

- /* Create pseudo-vma that contains just the policy */
- vma_init(&pvma, NULL);
- pvma.vm_end = TASK_SIZE; /* policy covers entire file */
- mpol_set_shared_policy(sp, &pvma, new); /* adds ref */
-
-put_new:
- mpol_put(new); /* drop initial ref */
+ /* alloc node covering entire file; adds ref to file's npol */
+ sn = sp_alloc(0, MAX_LFS_FILESIZE >> PAGE_SHIFT, npol);
+ if (sn)
+ sp_insert(sp, sn);
+put_npol:
+ mpol_put(npol); /* drop initial ref on file's npol */
free_scratch:
NODEMASK_SCRATCH_FREE(scratch);
put_mpol:
--
2.35.3

From: Hugh Dickins
Date: 2023-10-03 09:24:28
Subject: [PATCH v2 08/12] mempolicy: remove confusing MPOL_MF_LAZY dead code

v3.8 commit b24f53a0bea3 ("mm: mempolicy: Add MPOL_MF_LAZY") introduced
MPOL_MF_LAZY, and included it in the MPOL_MF_VALID flags; but commit
a720094ded8 ("mm: mempolicy: Hide MPOL_NOOP and MPOL_MF_LAZY from
userspace for now") immediately removed it from MPOL_MF_VALID flags,
pending further review. "This will need to be revisited", but it has
not been reinstated.

The present state is confusing: there is dead code in mm/mempolicy.c to
handle MPOL_MF_LAZY cases which can never occur. Remove that: it can be
resurrected later if necessary. But keep the definition of MPOL_MF_LAZY,
which must remain in the UAPI, even though it always fails with EINVAL.
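
For illustration, a hypothetical userspace sketch (not part of the
patch): the #define guard is there only because libnuma's numaif.h may
not expose this flag, its value taken from the UAPI header as shown in
the hunk below:

/*
 * Hypothetical illustration, not part of the patch: MPOL_MF_LAZY stays
 * defined in the UAPI (1<<3), but mbind(2) rejects it with EINVAL.
 */
#include <errno.h>
#include <numaif.h>
#include <stdio.h>

#ifndef MPOL_MF_LAZY
#define MPOL_MF_LAZY	(1 << 3)	/* from include/uapi/linux/mempolicy.h */
#endif

int main(void)
{
	if (mbind(NULL, 0, MPOL_DEFAULT, NULL, 0, MPOL_MF_LAZY) == -1 &&
	    errno == EINVAL)
		printf("MPOL_MF_LAZY: rejected with EINVAL, as expected\n");
	return 0;
}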

https://lore.kernel.org/linux-mm/[email protected]/
links to a previous request to remove MPOL_MF_LAZY.

Signed-off-by: Hugh Dickins <[email protected]>
Reviewed-by: Matthew Wilcox (Oracle) <[email protected]>
---
 include/uapi/linux/mempolicy.h |  2 +-
 mm/mempolicy.c                 | 18 ------------------
 2 files changed, 1 insertion(+), 19 deletions(-)

diff --git a/include/uapi/linux/mempolicy.h b/include/uapi/linux/mempolicy.h
index 046d0ccba4cd..a8963f7ef4c2 100644
--- a/include/uapi/linux/mempolicy.h
+++ b/include/uapi/linux/mempolicy.h
@@ -48,7 +48,7 @@ enum {
#define MPOL_MF_MOVE (1<<1) /* Move pages owned by this process to conform
to policy */
#define MPOL_MF_MOVE_ALL (1<<2) /* Move every page to conform to policy */
-#define MPOL_MF_LAZY (1<<3) /* Modifies '_MOVE: lazy migrate on fault */
+#define MPOL_MF_LAZY (1<<3) /* UNSUPPORTED FLAG: Lazy migrate on fault */
#define MPOL_MF_INTERNAL (1<<4) /* Internal flags start here */

#define MPOL_MF_VALID (MPOL_MF_STRICT | \
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 5d99fd5cd60b..f3224a8b0f6c 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -636,12 +636,6 @@ unsigned long change_prot_numa(struct vm_area_struct *vma,

return nr_updated;
}
-#else
-static unsigned long change_prot_numa(struct vm_area_struct *vma,
- unsigned long addr, unsigned long end)
-{
- return 0;
-}
#endif /* CONFIG_NUMA_BALANCING */

static int queue_pages_test_walk(unsigned long start, unsigned long end,
@@ -680,14 +674,6 @@ static int queue_pages_test_walk(unsigned long start, unsigned long end,
if (endvma > end)
endvma = end;

- if (flags & MPOL_MF_LAZY) {
- /* Similar to task_numa_work, skip inaccessible VMAs */
- if (!is_vm_hugetlb_page(vma) && vma_is_accessible(vma) &&
- !(vma->vm_flags & VM_MIXEDMAP))
- change_prot_numa(vma, start, endvma);
- return 1;
- }
-
/*
* Check page nodes, and queue pages to move, in the current vma.
* But if no moving, and no strict checking, the scan can be skipped.
@@ -1274,9 +1260,6 @@ static long do_mbind(unsigned long start, unsigned long len,
if (IS_ERR(new))
return PTR_ERR(new);

- if (flags & MPOL_MF_LAZY)
- new->flags |= MPOL_F_MOF;
-
/*
* If we are using the default policy then operation
* on discontinuous address spaces is okay after all
@@ -1321,7 +1304,6 @@ static long do_mbind(unsigned long start, unsigned long len,

if (!err) {
if (!list_empty(&pagelist)) {
- WARN_ON_ONCE(flags & MPOL_MF_LAZY);
nr_failed |= migrate_pages(&pagelist, new_folio, NULL,
start, MIGRATE_SYNC, MR_MEMPOLICY_MBIND, NULL);
}
--
2.35.3

From: Hugh Dickins
Date: 2023-10-03 09:26:29
Subject: [PATCH v2 09/12] mm: add page_rmappable_folio() wrapper

folio_prep_large_rmappable() is being used repeatedly, together with a
conversion from page to folio, a check for non-NULL and a check for
order > 1: wrap it all up into struct folio *page_rmappable_folio(struct
page *).

Signed-off-by: Hugh Dickins <[email protected]>
---
 mm/internal.h   |  9 +++++++++
 mm/mempolicy.c  | 17 +++--------------
 mm/page_alloc.c |  8 ++------
 3 files changed, 14 insertions(+), 20 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index d7916f1e9e98..b2b3716d1df6 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -415,6 +415,15 @@ static inline void folio_set_order(struct folio *folio, unsigned int order)

void folio_undo_large_rmappable(struct folio *folio);

+static inline struct folio *page_rmappable_folio(struct page *page)
+{
+ struct folio *folio = (struct folio *)page;
+
+ if (folio && folio_order(folio) > 1)
+ folio_prep_large_rmappable(folio);
+ return folio;
+}
+
static inline void prep_compound_head(struct page *page, unsigned int order)
{
struct folio *folio = (struct folio *)page;
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index f3224a8b0f6c..bfcc523a2860 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2142,10 +2142,7 @@ struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
mpol_cond_put(pol);
gfp |= __GFP_COMP;
page = alloc_page_interleave(gfp, order, nid);
- folio = (struct folio *)page;
- if (folio && order > 1)
- folio_prep_large_rmappable(folio);
- goto out;
+ return page_rmappable_folio(page);
}

if (pol->mode == MPOL_PREFERRED_MANY) {
@@ -2155,10 +2152,7 @@ struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
gfp |= __GFP_COMP;
page = alloc_pages_preferred_many(gfp, order, node, pol);
mpol_cond_put(pol);
- folio = (struct folio *)page;
- if (folio && order > 1)
- folio_prep_large_rmappable(folio);
- goto out;
+ return page_rmappable_folio(page);
}

if (unlikely(IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && hugepage)) {
@@ -2252,12 +2246,7 @@ EXPORT_SYMBOL(alloc_pages);

struct folio *folio_alloc(gfp_t gfp, unsigned order)
{
- struct page *page = alloc_pages(gfp | __GFP_COMP, order);
- struct folio *folio = (struct folio *)page;
-
- if (folio && order > 1)
- folio_prep_large_rmappable(folio);
- return folio;
+ return page_rmappable_folio(alloc_pages(gfp | __GFP_COMP, order));
}
EXPORT_SYMBOL(folio_alloc);

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 7df77b58a961..00f94dd88355 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4619,12 +4619,8 @@ struct folio *__folio_alloc(gfp_t gfp, unsigned int order, int preferred_nid,
nodemask_t *nodemask)
{
struct page *page = __alloc_pages(gfp | __GFP_COMP, order,
- preferred_nid, nodemask);
- struct folio *folio = (struct folio *)page;
-
- if (folio && order > 1)
- folio_prep_large_rmappable(folio);
- return folio;
+ preferred_nid, nodemask);
+ return page_rmappable_folio(page);
}
EXPORT_SYMBOL(__folio_alloc);

--
2.35.3

From: Hugh Dickins
Date: 2023-10-03 09:27:51
Subject: [PATCH v2 10/12] mempolicy: alloc_pages_mpol() for NUMA policy without vma

Shrink shmem's stack usage by eliminating the pseudo-vma from its folio
allocation. alloc_pages_mpol(gfp, order, pol, ilx, nid) becomes the
principal actor for passing mempolicy choice down to __alloc_pages(),
rather than vma_alloc_folio(gfp, order, vma, addr, hugepage).

vma_alloc_folio() and alloc_pages() remain, but as wrappers around
alloc_pages_mpol(). alloc_pages_bulk_*() untouched, except to provide
the additional args to policy_nodemask(), which subsumes policy_node().
Cleanup throughout, cutting out some unhelpful "helpers".

It would all be much simpler without MPOL_INTERLEAVE, but that adds a
dynamic to the constant mpol: complicated by v3.6 commit 09c231cb8bfd
("tmpfs: distribute interleave better across nodes"), which added ino
bias to the interleave, hidden from mm/mempolicy.c until this commit.

Hence "ilx" throughout, the "interleave index". Originally I thought it
could be done just with nid, but that's wrong: the nodemask may come from
the shared policy layer below a shmem vma, or it may come from the task
layer above a shmem vma; and without the final nodemask then nodeid
cannot be decided. And how ilx is applied depends also on page order.

The interleave index is almost always irrelevant unless MPOL_INTERLEAVE:
with one exception in alloc_pages_mpol(), where the NO_INTERLEAVE_INDEX
passed down from vma-less alloc_pages() is also used as a hint not to use
THP-style hugepage allocation - to avoid the overhead of a hugepage arg
(though I don't understand why we never just added a GFP bit for THP -
if it actually needs a different allocation strategy from other pages of
the same order). vma_alloc_folio() still carries its hugepage arg here,
but it is not used, and should be removed when agreed.
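
Concretely, that hint amounts to this check in alloc_pages_mpol() (a
sketch of the hunk below, no logic beyond what the diff shows):

        if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
            order == HPAGE_PMD_ORDER && ilx != NO_INTERLEAVE_INDEX) {
                /* try the THP-style local-node-first allocation */
        }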

get_vma_policy() no longer allows a NULL vma: over time I believe we've
eradicated all the places which used to need it; e.g. swapoff and madvise
used to pass a NULL vma to read_swap_cache_async(), but they now know the vma.
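
For orientation, the vma-based calling convention now reduces to the
following (a minimal sketch mirroring what vma_alloc_folio() becomes in
the diff below; no names beyond those introduced by this patch):

        struct mempolicy *pol;
        pgoff_t ilx;
        struct page *page;

        pol = get_vma_policy(vma, addr, order, &ilx);
        page = alloc_pages_mpol(gfp | __GFP_COMP, order,
                                pol, ilx, numa_node_id());
        mpol_cond_put(pol);
        return page_rmappable_folio(page);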

Signed-off-by: Hugh Dickins <[email protected]>
---
fs/proc/task_mmu.c | 5 +-
include/linux/gfp.h | 10 +-
include/linux/mempolicy.h | 13 +-
include/linux/mm.h | 2 +-
ipc/shm.c | 21 +--
mm/mempolicy.c | 383 +++++++++++++++++-----------------------
mm/shmem.c | 92 +++++-----
mm/swap.h | 9 +-
mm/swap_state.c | 83 +++++----
9 files changed, 297 insertions(+), 321 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 5a302f1b8d68..6cc4720eabf0 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -2637,8 +2637,9 @@ static int show_numa_map(struct seq_file *m, void *v)
struct numa_maps *md = &numa_priv->md;
struct file *file = vma->vm_file;
struct mm_struct *mm = vma->vm_mm;
- struct mempolicy *pol;
char buffer[64];
+ struct mempolicy *pol;
+ pgoff_t ilx;
int nid;

if (!mm)
@@ -2647,7 +2648,7 @@ static int show_numa_map(struct seq_file *m, void *v)
/* Ensure we start with an empty set of numa_maps statistics. */
memset(md, 0, sizeof(*md));

- pol = __get_vma_policy(vma, vma->vm_start);
+ pol = __get_vma_policy(vma, vma->vm_start, &ilx);
if (pol) {
mpol_to_str(buffer, sizeof(buffer), pol);
mpol_cond_put(pol);
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 5b917e5b9350..de292a007138 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -8,6 +8,7 @@
#include <linux/topology.h>

struct vm_area_struct;
+struct mempolicy;

/* Convert GFP flags to their corresponding migrate type */
#define GFP_MOVABLE_MASK (__GFP_RECLAIMABLE|__GFP_MOVABLE)
@@ -262,7 +263,9 @@ static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask,

#ifdef CONFIG_NUMA
struct page *alloc_pages(gfp_t gfp, unsigned int order);
-struct folio *folio_alloc(gfp_t gfp, unsigned order);
+struct page *alloc_pages_mpol(gfp_t gfp, unsigned int order,
+ struct mempolicy *mpol, pgoff_t ilx, int nid);
+struct folio *folio_alloc(gfp_t gfp, unsigned int order);
struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
unsigned long addr, bool hugepage);
#else
@@ -270,6 +273,11 @@ static inline struct page *alloc_pages(gfp_t gfp_mask, unsigned int order)
{
return alloc_pages_node(numa_node_id(), gfp_mask, order);
}
+static inline struct page *alloc_pages_mpol(gfp_t gfp, unsigned int order,
+ struct mempolicy *mpol, pgoff_t ilx, int nid)
+{
+ return alloc_pages(gfp, order);
+}
static inline struct folio *folio_alloc(gfp_t gfp, unsigned int order)
{
return __folio_alloc_node(gfp, order, numa_node_id());
diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index c69f9480d5e4..3c208d4f0ee9 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -128,7 +128,9 @@ struct mempolicy *mpol_shared_policy_lookup(struct shared_policy *sp,

struct mempolicy *get_task_policy(struct task_struct *p);
struct mempolicy *__get_vma_policy(struct vm_area_struct *vma,
- unsigned long addr);
+ unsigned long addr, pgoff_t *ilx);
+struct mempolicy *get_vma_policy(struct vm_area_struct *vma,
+ unsigned long addr, int order, pgoff_t *ilx);
bool vma_policy_mof(struct vm_area_struct *vma);

extern void numa_default_policy(void);
@@ -142,8 +144,6 @@ extern int huge_node(struct vm_area_struct *vma,
extern bool init_nodemask_of_mempolicy(nodemask_t *mask);
extern bool mempolicy_in_oom_domain(struct task_struct *tsk,
const nodemask_t *mask);
-extern nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy);
-
extern unsigned int mempolicy_slab_node(void);

extern enum zone_type policy_zone;
@@ -215,6 +215,13 @@ mpol_shared_policy_lookup(struct shared_policy *sp, pgoff_t idx)
return NULL;
}

+static inline struct mempolicy *get_vma_policy(struct vm_area_struct *vma,
+ unsigned long addr, int order, pgoff_t *ilx)
+{
+ *ilx = 0;
+ return NULL;
+}
+
#define vma_policy(vma) NULL

static inline int
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 52c40b3d0813..9b86a9c35427 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -619,7 +619,7 @@ struct vm_operations_struct {
* policy.
*/
struct mempolicy *(*get_policy)(struct vm_area_struct *vma,
- unsigned long addr);
+ unsigned long addr, pgoff_t *ilx);
#endif
/*
* Called by vm_normal_page() for special PTEs to find the
diff --git a/ipc/shm.c b/ipc/shm.c
index 576a543b7cff..222aaf035afb 100644
--- a/ipc/shm.c
+++ b/ipc/shm.c
@@ -562,30 +562,25 @@ static unsigned long shm_pagesize(struct vm_area_struct *vma)
}

#ifdef CONFIG_NUMA
-static int shm_set_policy(struct vm_area_struct *vma, struct mempolicy *new)
+static int shm_set_policy(struct vm_area_struct *vma, struct mempolicy *mpol)
{
- struct file *file = vma->vm_file;
- struct shm_file_data *sfd = shm_file_data(file);
+ struct shm_file_data *sfd = shm_file_data(vma->vm_file);
int err = 0;

if (sfd->vm_ops->set_policy)
- err = sfd->vm_ops->set_policy(vma, new);
+ err = sfd->vm_ops->set_policy(vma, mpol);
return err;
}

static struct mempolicy *shm_get_policy(struct vm_area_struct *vma,
- unsigned long addr)
+ unsigned long addr, pgoff_t *ilx)
{
- struct file *file = vma->vm_file;
- struct shm_file_data *sfd = shm_file_data(file);
- struct mempolicy *pol = NULL;
+ struct shm_file_data *sfd = shm_file_data(vma->vm_file);
+ struct mempolicy *mpol = vma->vm_policy;

if (sfd->vm_ops->get_policy)
- pol = sfd->vm_ops->get_policy(vma, addr);
- else if (vma->vm_policy)
- pol = vma->vm_policy;
-
- return pol;
+ mpol = sfd->vm_ops->get_policy(vma, addr, ilx);
+ return mpol;
}
#endif

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index bfcc523a2860..8cf76de12acd 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -114,6 +114,8 @@
#define MPOL_MF_INVERT (MPOL_MF_INTERNAL << 1) /* Invert check for nodemask */
#define MPOL_MF_WRLOCK (MPOL_MF_INTERNAL << 2) /* Write-lock walked vmas */

+#define NO_INTERLEAVE_INDEX (-1UL)
+
static struct kmem_cache *policy_cache;
static struct kmem_cache *sn_cache;

@@ -918,6 +920,7 @@ static long do_get_mempolicy(int *policy, nodemask_t *nmask,
}

if (flags & MPOL_F_ADDR) {
+ pgoff_t ilx; /* ignored here */
/*
* Do NOT fall back to task policy if the
* vma/shared policy at addr is NULL. We
@@ -929,10 +932,7 @@ static long do_get_mempolicy(int *policy, nodemask_t *nmask,
mmap_read_unlock(mm);
return -EFAULT;
}
- if (vma->vm_ops && vma->vm_ops->get_policy)
- pol = vma->vm_ops->get_policy(vma, addr);
- else
- pol = vma->vm_policy;
+ pol = __get_vma_policy(vma, addr, &ilx);
} else if (addr)
return -EINVAL;

@@ -1190,6 +1190,15 @@ static struct folio *new_folio(struct folio *src, unsigned long start)
break;
}

+ /*
+ * __get_vma_policy() now expects a genuine non-NULL vma. Return NULL
+ * when the page can no longer be located in a vma: that is not ideal
+ * (migrate_pages() will give up early, presuming ENOMEM), but good
+ * enough to avoid a crash by syzkaller or concurrent holepunch.
+ */
+ if (!vma)
+ return NULL;
+
if (folio_test_hugetlb(src)) {
return alloc_hugetlb_folio_vma(folio_hstate(src),
vma, address);
@@ -1198,9 +1207,6 @@ static struct folio *new_folio(struct folio *src, unsigned long start)
if (folio_test_large(src))
gfp = GFP_TRANSHUGE;

- /*
- * if !vma, vma_alloc_folio() will use task or system default policy
- */
return vma_alloc_folio(gfp, folio_order(src), vma, address,
folio_test_large(src));
}
@@ -1710,34 +1716,19 @@ bool vma_migratable(struct vm_area_struct *vma)
}

struct mempolicy *__get_vma_policy(struct vm_area_struct *vma,
- unsigned long addr)
+ unsigned long addr, pgoff_t *ilx)
{
- struct mempolicy *pol = NULL;
-
- if (vma) {
- if (vma->vm_ops && vma->vm_ops->get_policy) {
- pol = vma->vm_ops->get_policy(vma, addr);
- } else if (vma->vm_policy) {
- pol = vma->vm_policy;
-
- /*
- * shmem_alloc_page() passes MPOL_F_SHARED policy with
- * a pseudo vma whose vma->vm_ops=NULL. Take a reference
- * count on these policies which will be dropped by
- * mpol_cond_put() later
- */
- if (mpol_needs_cond_ref(pol))
- mpol_get(pol);
- }
- }
-
- return pol;
+ *ilx = 0;
+ return (vma->vm_ops && vma->vm_ops->get_policy) ?
+ vma->vm_ops->get_policy(vma, addr, ilx) : vma->vm_policy;
}

/*
- * get_vma_policy(@vma, @addr)
+ * get_vma_policy(@vma, @addr, @order, @ilx)
* @vma: virtual memory area whose policy is sought
* @addr: address in @vma for shared policy lookup
+ * @order: 0, or appropriate huge_page_order for interleaving
+ * @ilx: interleave index (output), for use only when MPOL_INTERLEAVE
*
* Returns effective policy for a VMA at specified address.
* Falls back to current->mempolicy or system default policy, as necessary.
@@ -1746,14 +1737,18 @@ struct mempolicy *__get_vma_policy(struct vm_area_struct *vma,
* freeing by another task. It is the caller's responsibility to free the
* extra reference for shared policies.
*/
-static struct mempolicy *get_vma_policy(struct vm_area_struct *vma,
- unsigned long addr)
+struct mempolicy *get_vma_policy(struct vm_area_struct *vma,
+ unsigned long addr, int order, pgoff_t *ilx)
{
- struct mempolicy *pol = __get_vma_policy(vma, addr);
+ struct mempolicy *pol;

+ pol = __get_vma_policy(vma, addr, ilx);
if (!pol)
pol = get_task_policy(current);
-
+ if (pol->mode == MPOL_INTERLEAVE) {
+ *ilx += vma->vm_pgoff >> order;
+ *ilx += (addr - vma->vm_start) >> (PAGE_SHIFT + order);
+ }
return pol;
}

@@ -1763,8 +1758,9 @@ bool vma_policy_mof(struct vm_area_struct *vma)

if (vma->vm_ops && vma->vm_ops->get_policy) {
bool ret = false;
+ pgoff_t ilx; /* ignored here */

- pol = vma->vm_ops->get_policy(vma, vma->vm_start);
+ pol = vma->vm_ops->get_policy(vma, vma->vm_start, &ilx);
if (pol && (pol->flags & MPOL_F_MOF))
ret = true;
mpol_cond_put(pol);
@@ -1799,54 +1795,6 @@ bool apply_policy_zone(struct mempolicy *policy, enum zone_type zone)
return zone >= dynamic_policy_zone;
}

-/*
- * Return a nodemask representing a mempolicy for filtering nodes for
- * page allocation
- */
-nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy)
-{
- int mode = policy->mode;
-
- /* Lower zones don't get a nodemask applied for MPOL_BIND */
- if (unlikely(mode == MPOL_BIND) &&
- apply_policy_zone(policy, gfp_zone(gfp)) &&
- cpuset_nodemask_valid_mems_allowed(&policy->nodes))
- return &policy->nodes;
-
- if (mode == MPOL_PREFERRED_MANY)
- return &policy->nodes;
-
- return NULL;
-}
-
-/*
- * Return the preferred node id for 'prefer' mempolicy, and return
- * the given id for all other policies.
- *
- * policy_node() is always coupled with policy_nodemask(), which
- * secures the nodemask limit for 'bind' and 'prefer-many' policy.
- */
-static int policy_node(gfp_t gfp, struct mempolicy *policy, int nid)
-{
- if (policy->mode == MPOL_PREFERRED) {
- nid = first_node(policy->nodes);
- } else {
- /*
- * __GFP_THISNODE shouldn't even be used with the bind policy
- * because we might easily break the expectation to stay on the
- * requested node and not break the policy.
- */
- WARN_ON_ONCE(policy->mode == MPOL_BIND && (gfp & __GFP_THISNODE));
- }
-
- if ((policy->mode == MPOL_BIND ||
- policy->mode == MPOL_PREFERRED_MANY) &&
- policy->home_node != NUMA_NO_NODE)
- return policy->home_node;
-
- return nid;
-}
-
/* Do dynamic interleaving for a process */
static unsigned int interleave_nodes(struct mempolicy *policy)
{
@@ -1906,11 +1854,11 @@ unsigned int mempolicy_slab_node(void)
}

/*
- * Do static interleaving for a VMA with known offset @n. Returns the n'th
- * node in pol->nodes (starting from n=0), wrapping around if n exceeds the
- * number of present nodes.
+ * Do static interleaving for interleave index @ilx. Returns the ilx'th
+ * node in pol->nodes (starting from ilx=0), wrapping around if ilx
+ * exceeds the number of present nodes.
*/
-static unsigned offset_il_node(struct mempolicy *pol, unsigned long n)
+static unsigned int interleave_nid(struct mempolicy *pol, pgoff_t ilx)
{
nodemask_t nodemask = pol->nodes;
unsigned int target, nnodes;
@@ -1928,33 +1876,54 @@ static unsigned offset_il_node(struct mempolicy *pol, unsigned long n)
nnodes = nodes_weight(nodemask);
if (!nnodes)
return numa_node_id();
- target = (unsigned int)n % nnodes;
+ target = ilx % nnodes;
nid = first_node(nodemask);
for (i = 0; i < target; i++)
nid = next_node(nid, nodemask);
return nid;
}

-/* Determine a node number for interleave */
-static inline unsigned interleave_nid(struct mempolicy *pol,
- struct vm_area_struct *vma, unsigned long addr, int shift)
+/*
+ * Return a nodemask representing a mempolicy for filtering nodes for
+ * page allocation, together with preferred node id (or the input node id).
+ */
+static nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *pol,
+ pgoff_t ilx, int *nid)
{
- if (vma) {
- unsigned long off;
+ nodemask_t *nodemask = NULL;

+ switch (pol->mode) {
+ case MPOL_PREFERRED:
+ /* Override input node id */
+ *nid = first_node(pol->nodes);
+ break;
+ case MPOL_PREFERRED_MANY:
+ nodemask = &pol->nodes;
+ if (pol->home_node != NUMA_NO_NODE)
+ *nid = pol->home_node;
+ break;
+ case MPOL_BIND:
+ /* Restrict to nodemask (but not on lower zones) */
+ if (apply_policy_zone(pol, gfp_zone(gfp)) &&
+ cpuset_nodemask_valid_mems_allowed(&pol->nodes))
+ nodemask = &pol->nodes;
+ if (pol->home_node != NUMA_NO_NODE)
+ *nid = pol->home_node;
/*
- * for small pages, there is no difference between
- * shift and PAGE_SHIFT, so the bit-shift is safe.
- * for huge pages, since vm_pgoff is in units of small
- * pages, we need to shift off the always 0 bits to get
- * a useful offset.
+ * __GFP_THISNODE shouldn't even be used with the bind policy
+ * because we might easily break the expectation to stay on the
+ * requested node and not break the policy.
*/
- BUG_ON(shift < PAGE_SHIFT);
- off = vma->vm_pgoff >> (shift - PAGE_SHIFT);
- off += (addr - vma->vm_start) >> shift;
- return offset_il_node(pol, off);
- } else
- return interleave_nodes(pol);
+ WARN_ON_ONCE(gfp & __GFP_THISNODE);
+ break;
+ case MPOL_INTERLEAVE:
+ /* Override input node id */
+ *nid = (ilx == NO_INTERLEAVE_INDEX) ?
+ interleave_nodes(pol) : interleave_nid(pol, ilx);
+ break;
+ }
+
+ return nodemask;
}

#ifdef CONFIG_HUGETLBFS
@@ -1970,27 +1939,16 @@ static inline unsigned interleave_nid(struct mempolicy *pol,
* to the struct mempolicy for conditional unref after allocation.
* If the effective policy is 'bind' or 'prefer-many', returns a pointer
* to the mempolicy's @nodemask for filtering the zonelist.
- *
- * Must be protected by read_mems_allowed_begin()
*/
int huge_node(struct vm_area_struct *vma, unsigned long addr, gfp_t gfp_flags,
- struct mempolicy **mpol, nodemask_t **nodemask)
+ struct mempolicy **mpol, nodemask_t **nodemask)
{
+ pgoff_t ilx;
int nid;
- int mode;

- *mpol = get_vma_policy(vma, addr);
- *nodemask = NULL;
- mode = (*mpol)->mode;
-
- if (unlikely(mode == MPOL_INTERLEAVE)) {
- nid = interleave_nid(*mpol, vma, addr,
- huge_page_shift(hstate_vma(vma)));
- } else {
- nid = policy_node(gfp_flags, *mpol, numa_node_id());
- if (mode == MPOL_BIND || mode == MPOL_PREFERRED_MANY)
- *nodemask = &(*mpol)->nodes;
- }
+ nid = numa_node_id();
+ *mpol = get_vma_policy(vma, addr, hstate_vma(vma)->order, &ilx);
+ *nodemask = policy_nodemask(gfp_flags, *mpol, ilx, &nid);
return nid;
}

@@ -2068,27 +2026,8 @@ bool mempolicy_in_oom_domain(struct task_struct *tsk,
return ret;
}

-/* Allocate a page in interleaved policy.
- Own path because it needs to do special accounting. */
-static struct page *alloc_page_interleave(gfp_t gfp, unsigned order,
- unsigned nid)
-{
- struct page *page;
-
- page = __alloc_pages(gfp, order, nid, NULL);
- /* skip NUMA_INTERLEAVE_HIT counter update if numa stats is disabled */
- if (!static_branch_likely(&vm_numa_stat_key))
- return page;
- if (page && page_to_nid(page) == nid) {
- preempt_disable();
- __count_numa_event(page_zone(page), NUMA_INTERLEAVE_HIT);
- preempt_enable();
- }
- return page;
-}
-
static struct page *alloc_pages_preferred_many(gfp_t gfp, unsigned int order,
- int nid, struct mempolicy *pol)
+ int nid, nodemask_t *nodemask)
{
struct page *page;
gfp_t preferred_gfp;
@@ -2101,7 +2040,7 @@ static struct page *alloc_pages_preferred_many(gfp_t gfp, unsigned int order,
*/
preferred_gfp = gfp | __GFP_NOWARN;
preferred_gfp &= ~(__GFP_DIRECT_RECLAIM | __GFP_NOFAIL);
- page = __alloc_pages(preferred_gfp, order, nid, &pol->nodes);
+ page = __alloc_pages(preferred_gfp, order, nid, nodemask);
if (!page)
page = __alloc_pages(gfp, order, nid, NULL);

@@ -2109,55 +2048,29 @@ static struct page *alloc_pages_preferred_many(gfp_t gfp, unsigned int order,
}

/**
- * vma_alloc_folio - Allocate a folio for a VMA.
+ * alloc_pages_mpol - Allocate pages according to NUMA mempolicy.
* @gfp: GFP flags.
- * @order: Order of the folio.
- * @vma: Pointer to VMA or NULL if not available.
- * @addr: Virtual address of the allocation. Must be inside @vma.
- * @hugepage: For hugepages try only the preferred node if possible.
+ * @order: Order of the page allocation.
+ * @pol: Pointer to the NUMA mempolicy.
+ * @ilx: Index for interleave mempolicy (also distinguishes alloc_pages()).
+ * @nid: Preferred node (usually numa_node_id() but @mpol may override it).
*
- * Allocate a folio for a specific address in @vma, using the appropriate
- * NUMA policy. When @vma is not NULL the caller must hold the mmap_lock
- * of the mm_struct of the VMA to prevent it from going away. Should be
- * used for all allocations for folios that will be mapped into user space.
- *
- * Return: The folio on success or NULL if allocation fails.
+ * Return: The page on success or NULL if allocation fails.
*/
-struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
- unsigned long addr, bool hugepage)
+struct page *alloc_pages_mpol(gfp_t gfp, unsigned int order,
+ struct mempolicy *pol, pgoff_t ilx, int nid)
{
- struct mempolicy *pol;
- int node = numa_node_id();
- struct folio *folio;
- int preferred_nid;
- nodemask_t *nmask;
+ nodemask_t *nodemask;
+ struct page *page;

- pol = get_vma_policy(vma, addr);
+ nodemask = policy_nodemask(gfp, pol, ilx, &nid);

- if (pol->mode == MPOL_INTERLEAVE) {
- struct page *page;
- unsigned nid;
-
- nid = interleave_nid(pol, vma, addr, PAGE_SHIFT + order);
- mpol_cond_put(pol);
- gfp |= __GFP_COMP;
- page = alloc_page_interleave(gfp, order, nid);
- return page_rmappable_folio(page);
- }
-
- if (pol->mode == MPOL_PREFERRED_MANY) {
- struct page *page;
-
- node = policy_node(gfp, pol, node);
- gfp |= __GFP_COMP;
- page = alloc_pages_preferred_many(gfp, order, node, pol);
- mpol_cond_put(pol);
- return page_rmappable_folio(page);
- }
-
- if (unlikely(IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && hugepage)) {
- int hpage_node = node;
+ if (pol->mode == MPOL_PREFERRED_MANY)
+ return alloc_pages_preferred_many(gfp, order, nid, nodemask);

+ if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
+ /* filter "hugepage" allocation, unless from alloc_pages() */
+ order == HPAGE_PMD_ORDER && ilx != NO_INTERLEAVE_INDEX) {
/*
* For hugepage allocation and non-interleave policy which
* allows the current node (or other explicitly preferred
@@ -2168,39 +2081,68 @@ struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
* If the policy is interleave or does not allow the current
* node in its nodemask, we allocate the standard way.
*/
- if (pol->mode == MPOL_PREFERRED)
- hpage_node = first_node(pol->nodes);
-
- nmask = policy_nodemask(gfp, pol);
- if (!nmask || node_isset(hpage_node, *nmask)) {
- mpol_cond_put(pol);
+ if (pol->mode != MPOL_INTERLEAVE &&
+ (!nodemask || node_isset(nid, *nodemask))) {
/*
* First, try to allocate THP only on local node, but
* don't reclaim unnecessarily, just compact.
*/
- folio = __folio_alloc_node(gfp | __GFP_THISNODE |
- __GFP_NORETRY, order, hpage_node);
-
+ page = __alloc_pages_node(nid,
+ gfp | __GFP_THISNODE | __GFP_NORETRY, order);
+ if (page || !(gfp & __GFP_DIRECT_RECLAIM))
+ return page;
/*
* If hugepage allocations are configured to always
* synchronous compact or the vma has been madvised
* to prefer hugepage backing, retry allowing remote
* memory with both reclaim and compact as well.
*/
- if (!folio && (gfp & __GFP_DIRECT_RECLAIM))
- folio = __folio_alloc(gfp, order, hpage_node,
- nmask);
-
- goto out;
}
}

- nmask = policy_nodemask(gfp, pol);
- preferred_nid = policy_node(gfp, pol, node);
- folio = __folio_alloc(gfp, order, preferred_nid, nmask);
+ page = __alloc_pages(gfp, order, nid, nodemask);
+
+ if (unlikely(pol->mode == MPOL_INTERLEAVE) && page) {
+ /* skip NUMA_INTERLEAVE_HIT update if numa stats is disabled */
+ if (static_branch_likely(&vm_numa_stat_key) &&
+ page_to_nid(page) == nid) {
+ preempt_disable();
+ __count_numa_event(page_zone(page), NUMA_INTERLEAVE_HIT);
+ preempt_enable();
+ }
+ }
+
+ return page;
+}
+
+/**
+ * vma_alloc_folio - Allocate a folio for a VMA.
+ * @gfp: GFP flags.
+ * @order: Order of the folio.
+ * @vma: Pointer to VMA.
+ * @addr: Virtual address of the allocation. Must be inside @vma.
+ * @hugepage: Unused (was: For hugepages try only preferred node if possible).
+ *
+ * Allocate a folio for a specific address in @vma, using the appropriate
+ * NUMA policy. The caller must hold the mmap_lock of the mm_struct of the
+ * VMA to prevent it from going away. Should be used for all allocations
+ * for folios that will be mapped into user space, excepting hugetlbfs, and
+ * excepting where direct use of alloc_pages_mpol() is more appropriate.
+ *
+ * Return: The folio on success or NULL if allocation fails.
+ */
+struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
+ unsigned long addr, bool hugepage)
+{
+ struct mempolicy *pol;
+ pgoff_t ilx;
+ struct page *page;
+
+ pol = get_vma_policy(vma, addr, order, &ilx);
+ page = alloc_pages_mpol(gfp | __GFP_COMP, order,
+ pol, ilx, numa_node_id());
mpol_cond_put(pol);
-out:
- return folio;
+ return page_rmappable_folio(page);
}
EXPORT_SYMBOL(vma_alloc_folio);

@@ -2218,33 +2160,23 @@ EXPORT_SYMBOL(vma_alloc_folio);
* flags are used.
* Return: The page on success or NULL if allocation fails.
*/
-struct page *alloc_pages(gfp_t gfp, unsigned order)
+struct page *alloc_pages(gfp_t gfp, unsigned int order)
{
struct mempolicy *pol = &default_policy;
- struct page *page;
-
- if (!in_interrupt() && !(gfp & __GFP_THISNODE))
- pol = get_task_policy(current);

/*
* No reference counting needed for current->mempolicy
* nor system default_policy
*/
- if (pol->mode == MPOL_INTERLEAVE)
- page = alloc_page_interleave(gfp, order, interleave_nodes(pol));
- else if (pol->mode == MPOL_PREFERRED_MANY)
- page = alloc_pages_preferred_many(gfp, order,
- policy_node(gfp, pol, numa_node_id()), pol);
- else
- page = __alloc_pages(gfp, order,
- policy_node(gfp, pol, numa_node_id()),
- policy_nodemask(gfp, pol));
+ if (!in_interrupt() && !(gfp & __GFP_THISNODE))
+ pol = get_task_policy(current);

- return page;
+ return alloc_pages_mpol(gfp, order,
+ pol, NO_INTERLEAVE_INDEX, numa_node_id());
}
EXPORT_SYMBOL(alloc_pages);

-struct folio *folio_alloc(gfp_t gfp, unsigned order)
+struct folio *folio_alloc(gfp_t gfp, unsigned int order)
{
return page_rmappable_folio(alloc_pages(gfp | __GFP_COMP, order));
}
@@ -2315,6 +2247,8 @@ unsigned long alloc_pages_bulk_array_mempolicy(gfp_t gfp,
unsigned long nr_pages, struct page **page_array)
{
struct mempolicy *pol = &default_policy;
+ nodemask_t *nodemask;
+ int nid;

if (!in_interrupt() && !(gfp & __GFP_THISNODE))
pol = get_task_policy(current);
@@ -2327,9 +2261,10 @@ unsigned long alloc_pages_bulk_array_mempolicy(gfp_t gfp,
return alloc_pages_bulk_array_preferred_many(gfp,
numa_node_id(), pol, nr_pages, page_array);

- return __alloc_pages_bulk(gfp, policy_node(gfp, pol, numa_node_id()),
- policy_nodemask(gfp, pol), nr_pages, NULL,
- page_array);
+ nid = numa_node_id();
+ nodemask = policy_nodemask(gfp, pol, NO_INTERLEAVE_INDEX, &nid);
+ return __alloc_pages_bulk(gfp, nid, nodemask,
+ nr_pages, NULL, page_array);
}

int vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst)
@@ -2516,23 +2451,21 @@ int mpol_misplaced(struct folio *folio, struct vm_area_struct *vma,
unsigned long addr)
{
struct mempolicy *pol;
+ pgoff_t ilx;
struct zoneref *z;
int curnid = folio_nid(folio);
- unsigned long pgoff;
int thiscpu = raw_smp_processor_id();
int thisnid = cpu_to_node(thiscpu);
int polnid = NUMA_NO_NODE;
int ret = NUMA_NO_NODE;

- pol = get_vma_policy(vma, addr);
+ pol = get_vma_policy(vma, addr, folio_order(folio), &ilx);
if (!(pol->flags & MPOL_F_MOF))
goto out;

switch (pol->mode) {
case MPOL_INTERLEAVE:
- pgoff = vma->vm_pgoff;
- pgoff += (addr - vma->vm_start) >> PAGE_SHIFT;
- polnid = offset_il_node(pol, pgoff);
+ polnid = interleave_nid(pol, ilx);
break;

case MPOL_PREFERRED:
diff --git a/mm/shmem.c b/mm/shmem.c
index a3ec5d2dda9a..6503910b0f54 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1561,38 +1561,20 @@ static inline struct mempolicy *shmem_get_sbmpol(struct shmem_sb_info *sbinfo)
return NULL;
}
#endif /* CONFIG_NUMA && CONFIG_TMPFS */
-#ifndef CONFIG_NUMA
-#define vm_policy vm_private_data
-#endif

-static void shmem_pseudo_vma_init(struct vm_area_struct *vma,
- struct shmem_inode_info *info, pgoff_t index)
-{
- /* Create a pseudo vma that just contains the policy */
- vma_init(vma, NULL);
- /* Bias interleave by inode number to distribute better across nodes */
- vma->vm_pgoff = index + info->vfs_inode.i_ino;
- vma->vm_policy = mpol_shared_policy_lookup(&info->policy, index);
-}
+static struct mempolicy *shmem_get_pgoff_policy(struct shmem_inode_info *info,
+ pgoff_t index, unsigned int order, pgoff_t *ilx);

-static void shmem_pseudo_vma_destroy(struct vm_area_struct *vma)
-{
- /* Drop reference taken by mpol_shared_policy_lookup() */
- mpol_cond_put(vma->vm_policy);
-}
-
-static struct folio *shmem_swapin(swp_entry_t swap, gfp_t gfp,
+static struct folio *shmem_swapin_cluster(swp_entry_t swap, gfp_t gfp,
struct shmem_inode_info *info, pgoff_t index)
{
- struct vm_area_struct pvma;
+ struct mempolicy *mpol;
+ pgoff_t ilx;
struct page *page;
- struct vm_fault vmf = {
- .vma = &pvma,
- };

- shmem_pseudo_vma_init(&pvma, info, index);
- page = swap_cluster_readahead(swap, gfp, &vmf);
- shmem_pseudo_vma_destroy(&pvma);
+ mpol = shmem_get_pgoff_policy(info, index, 0, &ilx);
+ page = swap_cluster_readahead(swap, gfp, mpol, ilx);
+ mpol_cond_put(mpol);

if (!page)
return NULL;
@@ -1626,27 +1608,29 @@ static gfp_t limit_gfp_mask(gfp_t huge_gfp, gfp_t limit_gfp)
static struct folio *shmem_alloc_hugefolio(gfp_t gfp,
struct shmem_inode_info *info, pgoff_t index)
{
- struct vm_area_struct pvma;
- struct folio *folio;
+ struct mempolicy *mpol;
+ pgoff_t ilx;
+ struct page *page;

- shmem_pseudo_vma_init(&pvma, info, index);
- folio = vma_alloc_folio(gfp, HPAGE_PMD_ORDER, &pvma, 0, true);
- shmem_pseudo_vma_destroy(&pvma);
+ mpol = shmem_get_pgoff_policy(info, index, HPAGE_PMD_ORDER, &ilx);
+ page = alloc_pages_mpol(gfp, HPAGE_PMD_ORDER, mpol, ilx, numa_node_id());
+ mpol_cond_put(mpol);

- return folio;
+ return page_rmappable_folio(page);
}

static struct folio *shmem_alloc_folio(gfp_t gfp,
struct shmem_inode_info *info, pgoff_t index)
{
- struct vm_area_struct pvma;
- struct folio *folio;
+ struct mempolicy *mpol;
+ pgoff_t ilx;
+ struct page *page;

- shmem_pseudo_vma_init(&pvma, info, index);
- folio = vma_alloc_folio(gfp, 0, &pvma, 0, false);
- shmem_pseudo_vma_destroy(&pvma);
+ mpol = shmem_get_pgoff_policy(info, index, 0, &ilx);
+ page = alloc_pages_mpol(gfp, 0, mpol, ilx, numa_node_id());
+ mpol_cond_put(mpol);

- return folio;
+ return (struct folio *)page;
}

static struct folio *shmem_alloc_and_add_folio(gfp_t gfp,
@@ -1900,7 +1884,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
count_memcg_event_mm(fault_mm, PGMAJFAULT);
}
/* Here we actually start the io */
- folio = shmem_swapin(swap, gfp, info, index);
+ folio = shmem_swapin_cluster(swap, gfp, info, index);
if (!folio) {
error = -ENOMEM;
goto failed;
@@ -2351,15 +2335,41 @@ static int shmem_set_policy(struct vm_area_struct *vma, struct mempolicy *mpol)
}

static struct mempolicy *shmem_get_policy(struct vm_area_struct *vma,
- unsigned long addr)
+ unsigned long addr, pgoff_t *ilx)
{
struct inode *inode = file_inode(vma->vm_file);
pgoff_t index;

+ /*
+ * Bias interleave by inode number to distribute better across nodes;
+ * but this interface is independent of which page order is used, so
+ * supplies only that bias, letting caller apply the offset (adjusted
+ * by page order, as in shmem_get_pgoff_policy() and get_vma_policy()).
+ */
+ *ilx = inode->i_ino;
index = ((addr - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
return mpol_shared_policy_lookup(&SHMEM_I(inode)->policy, index);
}
-#endif
+
+static struct mempolicy *shmem_get_pgoff_policy(struct shmem_inode_info *info,
+ pgoff_t index, unsigned int order, pgoff_t *ilx)
+{
+ struct mempolicy *mpol;
+
+ /* Bias interleave by inode number to distribute better across nodes */
+ *ilx = info->vfs_inode.i_ino + (index >> order);
+
+ mpol = mpol_shared_policy_lookup(&info->policy, index);
+ return mpol ? mpol : get_task_policy(current);
+}
+#else
+static struct mempolicy *shmem_get_pgoff_policy(struct shmem_inode_info *info,
+ pgoff_t index, unsigned int order, pgoff_t *ilx)
+{
+ *ilx = 0;
+ return NULL;
+}
+#endif /* CONFIG_NUMA */

int shmem_lock(struct file *file, int lock, struct ucounts *ucounts)
{
diff --git a/mm/swap.h b/mm/swap.h
index 8a3c7a0ace4f..73c332ee4d91 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -2,6 +2,8 @@
#ifndef _MM_SWAP_H
#define _MM_SWAP_H

+struct mempolicy;
+
#ifdef CONFIG_SWAP
#include <linux/blk_types.h> /* for bio_end_io_t */

@@ -48,11 +50,10 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
unsigned long addr,
struct swap_iocb **plug);
struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
- struct vm_area_struct *vma,
- unsigned long addr,
+ struct mempolicy *mpol, pgoff_t ilx,
bool *new_page_allocated);
struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
- struct vm_fault *vmf);
+ struct mempolicy *mpol, pgoff_t ilx);
struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
struct vm_fault *vmf);

@@ -80,7 +81,7 @@ static inline void show_swap_cache_info(void)
}

static inline struct page *swap_cluster_readahead(swp_entry_t entry,
- gfp_t gfp_mask, struct vm_fault *vmf)
+ gfp_t gfp_mask, struct mempolicy *mpol, pgoff_t ilx)
{
return NULL;
}
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 788e36a06c34..4afa4ed464d2 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -10,6 +10,7 @@
#include <linux/mm.h>
#include <linux/gfp.h>
#include <linux/kernel_stat.h>
+#include <linux/mempolicy.h>
#include <linux/swap.h>
#include <linux/swapops.h>
#include <linux/init.h>
@@ -411,8 +412,8 @@ struct folio *filemap_get_incore_folio(struct address_space *mapping,
}

struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
- struct vm_area_struct *vma, unsigned long addr,
- bool *new_page_allocated)
+ struct mempolicy *mpol, pgoff_t ilx,
+ bool *new_page_allocated)
{
struct swap_info_struct *si;
struct folio *folio;
@@ -455,7 +456,8 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
* before marking swap_map SWAP_HAS_CACHE, when -EEXIST will
* cause any racers to loop around until we add it to cache.
*/
- folio = vma_alloc_folio(gfp_mask, 0, vma, addr, false);
+ folio = (struct folio *)alloc_pages_mpol(gfp_mask, 0,
+ mpol, ilx, numa_node_id());
if (!folio)
goto fail_put_swap;

@@ -547,14 +549,19 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
struct vm_area_struct *vma,
unsigned long addr, struct swap_iocb **plug)
{
- bool page_was_allocated;
- struct page *retpage = __read_swap_cache_async(entry, gfp_mask,
- vma, addr, &page_was_allocated);
+ bool page_allocated;
+ struct mempolicy *mpol;
+ pgoff_t ilx;
+ struct page *page;

- if (page_was_allocated)
- swap_readpage(retpage, false, plug);
+ mpol = get_vma_policy(vma, addr, 0, &ilx);
+ page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
+ &page_allocated);
+ mpol_cond_put(mpol);

- return retpage;
+ if (page_allocated)
+ swap_readpage(page, false, plug);
+ return page;
}

static unsigned int __swapin_nr_pages(unsigned long prev_offset,
@@ -638,7 +645,8 @@ static void inc_nr_protected(struct page *page)
* swap_cluster_readahead - swap in pages in hope we need them soon
* @entry: swap entry of this memory
* @gfp_mask: memory allocation flags
- * @vmf: fault information
+ * @mpol: NUMA memory allocation policy to be applied
+ * @ilx: NUMA interleave index, for use only when MPOL_INTERLEAVE
*
* Returns the struct page for entry and addr, after queueing swapin.
*
@@ -647,13 +655,12 @@ static void inc_nr_protected(struct page *page)
* because it doesn't cost us any seek time. We also make sure to queue
* the 'original' request together with the readahead ones...
*
- * This has been extended to use the NUMA policies from the mm triggering
- * the readahead.
- *
- * Caller must hold read mmap_lock if vmf->vma is not NULL.
+ * Note: it is intentional that the same NUMA policy and interleave index
+ * are used for every page of the readahead: neighbouring pages on swap
+ * are fairly likely to have been swapped out from the same node.
*/
struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
- struct vm_fault *vmf)
+ struct mempolicy *mpol, pgoff_t ilx)
{
struct page *page;
unsigned long entry_offset = swp_offset(entry);
@@ -664,8 +671,6 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
struct blk_plug plug;
struct swap_iocb *splug = NULL;
bool page_allocated;
- struct vm_area_struct *vma = vmf->vma;
- unsigned long addr = vmf->address;

mask = swapin_nr_pages(offset) - 1;
if (!mask)
@@ -683,8 +688,8 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
for (offset = start_offset; offset <= end_offset ; offset++) {
/* Ok, do the async read-ahead now */
page = __read_swap_cache_async(
- swp_entry(swp_type(entry), offset),
- gfp_mask, vma, addr, &page_allocated);
+ swp_entry(swp_type(entry), offset),
+ gfp_mask, mpol, ilx, &page_allocated);
if (!page)
continue;
if (page_allocated) {
@@ -698,11 +703,13 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
}
blk_finish_plug(&plug);
swap_read_unplug(splug);
-
lru_add_drain(); /* Push any new pages onto the LRU now */
skip:
/* The page was likely read above, so no need for plugging here */
- page = read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
+ page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
+ &page_allocated);
+ if (unlikely(page_allocated))
+ swap_readpage(page, false, NULL);
#ifdef CONFIG_ZSWAP
if (page)
inc_nr_protected(page);
@@ -805,8 +812,10 @@ static void swap_ra_info(struct vm_fault *vmf,

/**
* swap_vma_readahead - swap in pages in hope we need them soon
- * @fentry: swap entry of this memory
+ * @targ_entry: swap entry of the targeted memory
* @gfp_mask: memory allocation flags
+ * @mpol: NUMA memory allocation policy to be applied
+ * @targ_ilx: NUMA interleave index, for use only when MPOL_INTERLEAVE
* @vmf: fault information
*
* Returns the struct page for entry and addr, after queueing swapin.
@@ -817,16 +826,17 @@ static void swap_ra_info(struct vm_fault *vmf,
* Caller must hold read mmap_lock if vmf->vma is not NULL.
*
*/
-static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
+static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
+ struct mempolicy *mpol, pgoff_t targ_ilx,
struct vm_fault *vmf)
{
struct blk_plug plug;
struct swap_iocb *splug = NULL;
- struct vm_area_struct *vma = vmf->vma;
struct page *page;
pte_t *pte = NULL, pentry;
unsigned long addr;
swp_entry_t entry;
+ pgoff_t ilx;
unsigned int i;
bool page_allocated;
struct vma_swap_readahead ra_info = {
@@ -838,9 +848,10 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
goto skip;

addr = vmf->address - (ra_info.offset * PAGE_SIZE);
+ ilx = targ_ilx - ra_info.offset;

blk_start_plug(&plug);
- for (i = 0; i < ra_info.nr_pte; i++, addr += PAGE_SIZE) {
+ for (i = 0; i < ra_info.nr_pte; i++, ilx++, addr += PAGE_SIZE) {
if (!pte++) {
pte = pte_offset_map(vmf->pmd, addr);
if (!pte)
@@ -854,8 +865,8 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
continue;
pte_unmap(pte);
pte = NULL;
- page = __read_swap_cache_async(entry, gfp_mask, vma,
- addr, &page_allocated);
+ page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
+ &page_allocated);
if (!page)
continue;
if (page_allocated) {
@@ -874,7 +885,10 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
lru_add_drain();
skip:
/* The page was likely read above, so no need for plugging here */
- page = read_swap_cache_async(fentry, gfp_mask, vma, vmf->address, NULL);
+ page = __read_swap_cache_async(targ_entry, gfp_mask, mpol, targ_ilx,
+ &page_allocated);
+ if (unlikely(page_allocated))
+ swap_readpage(page, false, NULL);
#ifdef CONFIG_ZSWAP
if (page)
inc_nr_protected(page);
@@ -897,9 +911,16 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
struct vm_fault *vmf)
{
- return swap_use_vma_readahead() ?
- swap_vma_readahead(entry, gfp_mask, vmf) :
- swap_cluster_readahead(entry, gfp_mask, vmf);
+ struct mempolicy *mpol;
+ pgoff_t ilx;
+ struct page *page;
+
+ mpol = get_vma_policy(vmf->vma, vmf->address, 0, &ilx);
+ page = swap_use_vma_readahead() ?
+ swap_vma_readahead(entry, gfp_mask, mpol, ilx, vmf) :
+ swap_cluster_readahead(entry, gfp_mask, mpol, ilx);
+ mpol_cond_put(mpol);
+ return page;
}

#ifdef CONFIG_SYSFS
--
2.35.3

2023-10-03 09:27:58

by Hugh Dickins

[permalink] [raw]
Subject: [PATCH v2 11/12] mempolicy: mmap_lock is not needed while migrating folios

mbind(2) holds down_write of current task's mmap_lock throughout
(exclusive because it needs to set the new mempolicy on the vmas);
migrate_pages(2) holds down_read of pid's mmap_lock throughout.

They both hold mmap_lock across the internal migrate_pages(), under which
all new page allocations (huge or small) are made. I'm nervous about it;
and migrate_pages() certainly does not need mmap_lock itself. It's done
this way for mbind(2), because its page allocator is vma_alloc_folio() or
alloc_hugetlb_folio_vma(), both of which depend on vma and address.

Now that we have alloc_pages_mpol(), depending on (refcounted) memory
policy and interleave index, mbind(2) can be modified to use that or
alloc_hugetlb_folio_nodemask(), and then not need mmap_lock across the
internal migrate_pages() at all: add alloc_migration_target_by_mpol()
to replace mbind's new_page().

(After that change, alloc_hugetlb_folio_vma() is used by nothing but a
userfaultfd function: move it out of hugetlb.h and into the #ifdef.)

migrate_pages(2) has chosen its target node before migrating, so can
continue to use the standard alloc_migration_target(); but let it take
and drop mmap_lock just around migrate_to_node()'s queue_pages_range():
neither the node-to-node calculations nor the page migrations need it.
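
In outline, the reworked migrate_to_node() narrows the lock to just the
page walk (a sketch of the hunks below; the elided migrate_pages()
arguments are unchanged):

        mmap_read_lock(mm);
        vma = find_vma(mm, 0);
        nr_failed = queue_pages_range(mm, vma->vm_start, mm->task_size, &nmask,
                                      flags | MPOL_MF_DISCONTIG_OK, &pagelist);
        mmap_read_unlock(mm);

        /* allocation and migration now run without mmap_lock */
        if (!list_empty(&pagelist))
                err = migrate_pages(&pagelist, alloc_migration_target, ...);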

It seems unlikely, but it is conceivable that some userspace depends on
the kernel's mmap_lock exclusion here, instead of doing its own locking:
more likely in a testsuite than in real life. It is also possible, of
course, that some pages on the list will be munmapped by another thread
before they are migrated, or a newer memory policy applied to the range
by that time: but such races could happen before, as soon as mmap_lock
was dropped, so it does not appear to be a concern.

Signed-off-by: Hugh Dickins <[email protected]>
---
include/linux/hugetlb.h | 9 -----
mm/hugetlb.c | 38 ++++++++++----------
mm/mempolicy.c | 83 ++++++++++++++++++++++---------------------
3 files changed, 63 insertions(+), 67 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index a574e26e18a2..7c6faee07b42 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -716,8 +716,6 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
unsigned long addr, int avoid_reserve);
struct folio *alloc_hugetlb_folio_nodemask(struct hstate *h, int preferred_nid,
nodemask_t *nmask, gfp_t gfp_mask);
-struct folio *alloc_hugetlb_folio_vma(struct hstate *h, struct vm_area_struct *vma,
- unsigned long address);
int hugetlb_add_to_page_cache(struct folio *folio, struct address_space *mapping,
pgoff_t idx);
void restore_reserve_on_error(struct hstate *h, struct vm_area_struct *vma,
@@ -1040,13 +1038,6 @@ alloc_hugetlb_folio_nodemask(struct hstate *h, int preferred_nid,
return NULL;
}

-static inline struct folio *alloc_hugetlb_folio_vma(struct hstate *h,
- struct vm_area_struct *vma,
- unsigned long address)
-{
- return NULL;
-}
-
static inline int __alloc_bootmem_huge_page(struct hstate *h)
{
return 0;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 9d5b7f208dac..68ff79061f88 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2458,24 +2458,6 @@ struct folio *alloc_hugetlb_folio_nodemask(struct hstate *h, int preferred_nid,
return alloc_migrate_hugetlb_folio(h, gfp_mask, preferred_nid, nmask);
}

-/* mempolicy aware migration callback */
-struct folio *alloc_hugetlb_folio_vma(struct hstate *h, struct vm_area_struct *vma,
- unsigned long address)
-{
- struct mempolicy *mpol;
- nodemask_t *nodemask;
- struct folio *folio;
- gfp_t gfp_mask;
- int node;
-
- gfp_mask = htlb_alloc_mask(h);
- node = huge_node(vma, address, gfp_mask, &mpol, &nodemask);
- folio = alloc_hugetlb_folio_nodemask(h, node, nodemask, gfp_mask);
- mpol_cond_put(mpol);
-
- return folio;
-}
-
/*
* Increase the hugetlb pool such that it can accommodate a reservation
* of size 'delta'.
@@ -6279,6 +6261,26 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
}

#ifdef CONFIG_USERFAULTFD
+/*
+ * Can probably be eliminated, but still used by hugetlb_mfill_atomic_pte().
+ */
+static struct folio *alloc_hugetlb_folio_vma(struct hstate *h,
+ struct vm_area_struct *vma, unsigned long address)
+{
+ struct mempolicy *mpol;
+ nodemask_t *nodemask;
+ struct folio *folio;
+ gfp_t gfp_mask;
+ int node;
+
+ gfp_mask = htlb_alloc_mask(h);
+ node = huge_node(vma, address, gfp_mask, &mpol, &nodemask);
+ folio = alloc_hugetlb_folio_nodemask(h, node, nodemask, gfp_mask);
+ mpol_cond_put(mpol);
+
+ return folio;
+}
+
/*
* Used by userfaultfd UFFDIO_* ioctls. Based on userfaultfd's mfill_atomic_pte
* with modifications for hugetlb pages.
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 8cf76de12acd..a7b34b9c00ef 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -417,6 +417,8 @@ static const struct mempolicy_operations mpol_ops[MPOL_MAX] = {

static bool migrate_folio_add(struct folio *folio, struct list_head *foliolist,
unsigned long flags);
+static nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *pol,
+ pgoff_t ilx, int *nid);

static bool strictly_unmovable(unsigned long flags)
{
@@ -1043,6 +1045,8 @@ static long migrate_to_node(struct mm_struct *mm, int source, int dest,
node_set(source, nmask);

VM_BUG_ON(!(flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)));
+
+ mmap_read_lock(mm);
vma = find_vma(mm, 0);

/*
@@ -1053,6 +1057,7 @@ static long migrate_to_node(struct mm_struct *mm, int source, int dest,
*/
nr_failed = queue_pages_range(mm, vma->vm_start, mm->task_size, &nmask,
flags | MPOL_MF_DISCONTIG_OK, &pagelist);
+ mmap_read_unlock(mm);

if (!list_empty(&pagelist)) {
err = migrate_pages(&pagelist, alloc_migration_target, NULL,
@@ -1081,8 +1086,6 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from,

lru_cache_disable();

- mmap_read_lock(mm);
-
/*
* Find a 'source' bit set in 'tmp' whose corresponding 'dest'
* bit in 'to' is not also set in 'tmp'. Clear the found 'source'
@@ -1162,7 +1165,6 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from,
if (err < 0)
break;
}
- mmap_read_unlock(mm);

lru_cache_enable();
if (err < 0)
@@ -1171,44 +1173,38 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from,
}

/*
- * Allocate a new page for page migration based on vma policy.
- * Start by assuming the page is mapped by the same vma as contains @start.
- * Search forward from there, if not. N.B., this assumes that the
- * list of pages handed to migrate_pages()--which is how we get here--
- * is in virtual address order.
+ * Allocate a new folio for page migration, according to NUMA mempolicy.
*/
-static struct folio *new_folio(struct folio *src, unsigned long start)
+static struct folio *alloc_migration_target_by_mpol(struct folio *src,
+ unsigned long private)
{
- struct vm_area_struct *vma;
- unsigned long address;
- VMA_ITERATOR(vmi, current->mm, start);
- gfp_t gfp = GFP_HIGHUSER_MOVABLE | __GFP_RETRY_MAYFAIL;
+ struct mempolicy *pol = (struct mempolicy *)private;
+ pgoff_t ilx = 0; /* improve on this later */
+ struct page *page;
+ unsigned int order;
+ int nid = numa_node_id();
+ gfp_t gfp;

- for_each_vma(vmi, vma) {
- address = page_address_in_vma(&src->page, vma);
- if (address != -EFAULT)
- break;
- }
-
- /*
- * __get_vma_policy() now expects a genuine non-NULL vma. Return NULL
- * when the page can no longer be located in a vma: that is not ideal
- * (migrate_pages() will give up early, presuming ENOMEM), but good
- * enough to avoid a crash by syzkaller or concurrent holepunch.
- */
- if (!vma)
- return NULL;
+ order = folio_order(src);
+ ilx += src->index >> order;

if (folio_test_hugetlb(src)) {
- return alloc_hugetlb_folio_vma(folio_hstate(src),
- vma, address);
+ nodemask_t *nodemask;
+ struct hstate *h;
+
+ h = folio_hstate(src);
+ gfp = htlb_alloc_mask(h);
+ nodemask = policy_nodemask(gfp, pol, ilx, &nid);
+ return alloc_hugetlb_folio_nodemask(h, nid, nodemask, gfp);
}

if (folio_test_large(src))
gfp = GFP_TRANSHUGE;
+ else
+ gfp = GFP_HIGHUSER_MOVABLE | __GFP_RETRY_MAYFAIL | __GFP_COMP;

- return vma_alloc_folio(gfp, folio_order(src), vma, address,
- folio_test_large(src));
+ page = alloc_pages_mpol(gfp, order, pol, ilx, nid);
+ return page_rmappable_folio(page);
}
#else

@@ -1224,7 +1220,8 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from,
return -ENOSYS;
}

-static struct folio *new_folio(struct folio *src, unsigned long start)
+static struct folio *alloc_migration_target_by_mpol(struct folio *src,
+ unsigned long private)
{
return NULL;
}
@@ -1298,6 +1295,7 @@ static long do_mbind(unsigned long start, unsigned long len,

if (nr_failed < 0) {
err = nr_failed;
+ nr_failed = 0;
} else {
vma_iter_init(&vmi, mm, start);
prev = vma_prev(&vmi);
@@ -1308,19 +1306,24 @@ static long do_mbind(unsigned long start, unsigned long len,
}
}

- if (!err) {
- if (!list_empty(&pagelist)) {
- nr_failed |= migrate_pages(&pagelist, new_folio, NULL,
- start, MIGRATE_SYNC, MR_MEMPOLICY_MBIND, NULL);
+ mmap_write_unlock(mm);
+
+ if (!err && !list_empty(&pagelist)) {
+ /* Convert MPOL_DEFAULT's NULL to task or default policy */
+ if (!new) {
+ new = get_task_policy(current);
+ mpol_get(new);
}
- if (nr_failed && (flags & MPOL_MF_STRICT))
- err = -EIO;
+ nr_failed |= migrate_pages(&pagelist,
+ alloc_migration_target_by_mpol, NULL,
+ (unsigned long)new, MIGRATE_SYNC,
+ MR_MEMPOLICY_MBIND, NULL);
}

+ if (nr_failed && (flags & MPOL_MF_STRICT))
+ err = -EIO;
if (!list_empty(&pagelist))
putback_movable_pages(&pagelist);
-
- mmap_write_unlock(mm);
mpol_out:
mpol_put(new);
if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL))
--
2.35.3

2023-10-03 09:29:23

by Hugh Dickins

[permalink] [raw]
Subject: [PATCH v2 12/12] mempolicy: migration attempt to match interleave nodes

Improve alloc_migration_target_by_mpol()'s treatment of MPOL_INTERLEAVE.

Make an effort in do_mbind(), to identify the correct interleave index
for the first page to be migrated, so that it and all subsequent pages
from the same vma will be targeted to precisely their intended nodes.
Pages from following vmas will still be interleaved from the requested
nodemask, but perhaps starting from a different base.

Whether this is worth doing at all, or worth improving further, is
arguable: queue_folio_required() is right not to care about the precise
placement on interleaved nodes; but this little effort seems appropriate.
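
As a worked illustration (hypothetical numbers): if get_vma_policy()
reports ilx 47 for the first page on the list, and that page's
index >> order is 5, then mmpol.ilx is set to 47 - 5 = 42;
alloc_migration_target_by_mpol() later adds back src->index >> order for
each folio, so the first folio is again targeted at ilx 47, a neighbouring
folio at index 6 at ilx 48, and so on across the requested nodemask.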

Signed-off-by: Hugh Dickins <[email protected]>
---
mm/mempolicy.c | 49 ++++++++++++++++++++++++++++++++++++++++++++++---
1 file changed, 46 insertions(+), 3 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index a7b34b9c00ef..b01922e88548 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -430,6 +430,11 @@ static bool strictly_unmovable(unsigned long flags)
MPOL_MF_STRICT;
}

+struct migration_mpol { /* for alloc_migration_target_by_mpol() */
+ struct mempolicy *pol;
+ pgoff_t ilx;
+};
+
struct queue_pages {
struct list_head *pagelist;
unsigned long flags;
@@ -1178,8 +1183,9 @@ int do_migrate_pages(struct mm_struct *mm, const nodemask_t *from,
static struct folio *alloc_migration_target_by_mpol(struct folio *src,
unsigned long private)
{
- struct mempolicy *pol = (struct mempolicy *)private;
- pgoff_t ilx = 0; /* improve on this later */
+ struct migration_mpol *mmpol = (struct migration_mpol *)private;
+ struct mempolicy *pol = mmpol->pol;
+ pgoff_t ilx = mmpol->ilx;
struct page *page;
unsigned int order;
int nid = numa_node_id();
@@ -1234,6 +1240,7 @@ static long do_mbind(unsigned long start, unsigned long len,
struct mm_struct *mm = current->mm;
struct vm_area_struct *vma, *prev;
struct vma_iterator vmi;
+ struct migration_mpol mmpol;
struct mempolicy *new;
unsigned long end;
long err;
@@ -1314,9 +1321,45 @@ static long do_mbind(unsigned long start, unsigned long len,
new = get_task_policy(current);
mpol_get(new);
}
+ mmpol.pol = new;
+ mmpol.ilx = 0;
+
+ /*
+ * In the interleaved case, attempt to allocate on exactly the
+ * targeted nodes, for the first VMA to be migrated; for later
+ * VMAs, the nodes will still be interleaved from the targeted
+ * nodemask, but one by one may be selected differently.
+ */
+ if (new->mode == MPOL_INTERLEAVE) {
+ struct page *page;
+ unsigned int order;
+ unsigned long addr = -EFAULT;
+
+ list_for_each_entry(page, &pagelist, lru) {
+ if (!PageKsm(page))
+ break;
+ }
+ if (!list_entry_is_head(page, &pagelist, lru)) {
+ vma_iter_init(&vmi, mm, start);
+ for_each_vma_range(vmi, vma, end) {
+ addr = page_address_in_vma(page, vma);
+ if (addr != -EFAULT)
+ break;
+ }
+ }
+ if (addr != -EFAULT) {
+ order = compound_order(page);
+ /* We already know the pol, but not the ilx */
+ mpol_cond_put(get_vma_policy(vma, addr, order,
+ &mmpol.ilx));
+ /* Set base from which to increment by index */
+ mmpol.ilx -= page->index >> order;
+ }
+ }
+
nr_failed |= migrate_pages(&pagelist,
alloc_migration_target_by_mpol, NULL,
- (unsigned long)new, MIGRATE_SYNC,
+ (unsigned long)&mmpol, MIGRATE_SYNC,
MR_MEMPOLICY_MBIND, NULL);
}

--
2.35.3

2023-10-03 22:28:44

by Yang Shi

[permalink] [raw]
Subject: Re: [PATCH v2 08/12] mempolicy: remove confusing MPOL_MF_LAZY dead code

On Tue, Oct 3, 2023 at 2:24 AM Hugh Dickins <[email protected]> wrote:
>
> v3.8 commit b24f53a0bea3 ("mm: mempolicy: Add MPOL_MF_LAZY") introduced
> MPOL_MF_LAZY, and included it in the MPOL_MF_VALID flags; but a720094ded8
> ("mm: mempolicy: Hide MPOL_NOOP and MPOL_MF_LAZY from userspace for now")
> immediately removed it from MPOL_MF_VALID flags, pending further review.
> "This will need to be revisited", but it has not been reinstated.
>
> The present state is confusing: there is dead code in mm/mempolicy.c to
> handle MPOL_MF_LAZY cases which can never occur. Remove that: it can be
> resurrected later if necessary. But keep the definition of MPOL_MF_LAZY,
> which must remain in the UAPI, even though it always fails with EINVAL.
>
> https://lore.kernel.org/linux-mm/[email protected]/
> links to a previous request to remove MPOL_MF_LAZY.

Thanks for mentioning my work. I'm glad to see the dead code go away.

Reviewed-by: Yang Shi <[email protected]>

>
> Signed-off-by: Hugh Dickins <[email protected]>
> Reviewed-by: Matthew Wilcox (Oracle) <[email protected]>
> ---
> include/uapi/linux/mempolicy.h | 2 +-
> mm/mempolicy.c | 18 ------------------
> 2 files changed, 1 insertion(+), 19 deletions(-)
>
> diff --git a/include/uapi/linux/mempolicy.h b/include/uapi/linux/mempolicy.h
> index 046d0ccba4cd..a8963f7ef4c2 100644
> --- a/include/uapi/linux/mempolicy.h
> +++ b/include/uapi/linux/mempolicy.h
> @@ -48,7 +48,7 @@ enum {
> #define MPOL_MF_MOVE (1<<1) /* Move pages owned by this process to conform
> to policy */
> #define MPOL_MF_MOVE_ALL (1<<2) /* Move every page to conform to policy */
> -#define MPOL_MF_LAZY (1<<3) /* Modifies '_MOVE: lazy migrate on fault */
> +#define MPOL_MF_LAZY (1<<3) /* UNSUPPORTED FLAG: Lazy migrate on fault */
> #define MPOL_MF_INTERNAL (1<<4) /* Internal flags start here */
>
> #define MPOL_MF_VALID (MPOL_MF_STRICT | \
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index 5d99fd5cd60b..f3224a8b0f6c 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -636,12 +636,6 @@ unsigned long change_prot_numa(struct vm_area_struct *vma,
>
> return nr_updated;
> }
> -#else
> -static unsigned long change_prot_numa(struct vm_area_struct *vma,
> - unsigned long addr, unsigned long end)
> -{
> - return 0;
> -}
> #endif /* CONFIG_NUMA_BALANCING */
>
> static int queue_pages_test_walk(unsigned long start, unsigned long end,
> @@ -680,14 +674,6 @@ static int queue_pages_test_walk(unsigned long start, unsigned long end,
> if (endvma > end)
> endvma = end;
>
> - if (flags & MPOL_MF_LAZY) {
> - /* Similar to task_numa_work, skip inaccessible VMAs */
> - if (!is_vm_hugetlb_page(vma) && vma_is_accessible(vma) &&
> - !(vma->vm_flags & VM_MIXEDMAP))
> - change_prot_numa(vma, start, endvma);
> - return 1;
> - }
> -
> /*
> * Check page nodes, and queue pages to move, in the current vma.
> * But if no moving, and no strict checking, the scan can be skipped.
> @@ -1274,9 +1260,6 @@ static long do_mbind(unsigned long start, unsigned long len,
> if (IS_ERR(new))
> return PTR_ERR(new);
>
> - if (flags & MPOL_MF_LAZY)
> - new->flags |= MPOL_F_MOF;
> -
> /*
> * If we are using the default policy then operation
> * on discontinuous address spaces is okay after all
> @@ -1321,7 +1304,6 @@ static long do_mbind(unsigned long start, unsigned long len,
>
> if (!err) {
> if (!list_empty(&pagelist)) {
> - WARN_ON_ONCE(flags & MPOL_MF_LAZY);
> nr_failed |= migrate_pages(&pagelist, new_folio, NULL,
> start, MIGRATE_SYNC, MR_MEMPOLICY_MBIND, NULL);
> }
> --
> 2.35.3
>

2023-10-07 07:29:55

by Huang, Ying

[permalink] [raw]
Subject: Re: [PATCH v2 03/12] mempolicy: fix migrate_pages(2) syscall return nr_failed

Hugh Dickins <[email protected]> writes:

> "man 2 migrate_pages" says "On success migrate_pages() returns the number
> of pages that could not be moved". Although 5.3 and 5.4 commits fixed
> mbind(MPOL_MF_STRICT|MPOL_MF_MOVE*) to fail with EIO when not all pages
> could be moved (because some could not be isolated for migration),
> migrate_pages(2) was left still reporting only those pages failing at the
> migration stage, forgetting those failing at the earlier isolation stage.
>
> Fix that by accumulating a long nr_failed count in struct queue_pages,
> returned by queue_pages_range() when it's not returning an error, for
> adding on to the nr_failed count from migrate_pages() in mm/migrate.c.
> A count of pages? It's more a count of folios, but changing it to pages
> would entail more work (also in mm/migrate.c): does not seem justified.
>
> queue_pages_range() itself should only return -EIO in the "strictly
> unmovable" case (STRICT without any MOVEs): in that case it's best to
> break out as soon as nr_failed gets set; but otherwise it should continue
> to isolate pages for MOVing even when nr_failed - as the mbind(2) manpage
> promises.
>
> There's a case when nr_failed should be incremented when it was missed:
> queue_folios_pte_range() and queue_folios_hugetlb() count the transient
> migration entries, like queue_folios_pmd() already did. And there's a
> case when nr_failed should not be incremented when it would have been:
> in meeting later PTEs of the same large folio, which can only be isolated
> once: fixed by recording the current large folio in struct queue_pages.
>
> Clean up the affected functions, fixing or updating many comments. Bool
> migrate_folio_add(), without -EIO: true if adding, or if skipping shared
> (but its arguable folio_estimated_sharers() heuristic left unchanged).
> Use MPOL_MF_WRLOCK flag to queue_pages_range(), instead of bool lock_vma.
> Use explicit STRICT|MOVE* flags where queue_pages_test_walk() checks for
> skipping, instead of hiding them behind MPOL_MF_VALID.
>
> Signed-off-by: Hugh Dickins <[email protected]>
> Reviewed-by: Matthew Wilcox (Oracle) <[email protected]>

Thanks! Feel free to add

Reviewed-by: "Huang, Ying" <[email protected]>

--
Best Regards,
Huang, Ying

2023-10-19 20:41:41

by Hugh Dickins

[permalink] [raw]
Subject: [PATCH v3 10/12] mempolicy: alloc_pages_mpol() for NUMA policy without vma

Shrink shmem's stack usage by eliminating the pseudo-vma from its folio
allocation. alloc_pages_mpol(gfp, order, pol, ilx, nid) becomes the
principal actor for passing mempolicy choice down to __alloc_pages(),
rather than vma_alloc_folio(gfp, order, vma, addr, hugepage).

vma_alloc_folio() and alloc_pages() remain, but as wrappers around
alloc_pages_mpol(). alloc_pages_bulk_*() untouched, except to provide the
additional args to policy_nodemask(), which subsumes policy_node().
Cleanup throughout, cutting out some unhelpful "helpers".
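
For a feel of the new shape: instead of stuffing a mempolicy into a
pseudo-vma, a shmem-style caller now does roughly this (a simplified sketch
stitched together from the diff below, not a literal excerpt):

	/* shared policy lookup, falling back to task policy; ilx gets
	 * i_ino + (index >> order) as the interleave bias
	 */
	mpol = shmem_get_pgoff_policy(info, index, order, &ilx);
	page = alloc_pages_mpol(gfp, order, mpol, ilx, numa_node_id());
	mpol_cond_put(mpol);
	return page_rmappable_folio(page);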

It would all be much simpler without MPOL_INTERLEAVE, but that adds a
dynamic to the constant mpol: complicated by v3.6 commit 09c231cb8bfd
("tmpfs: distribute interleave better across nodes"), which added ino bias
to the interleave, hidden from mm/mempolicy.c until this commit.

Hence "ilx" throughout, the "interleave index". Originally I thought it
could be done just with nid, but that's wrong: the nodemask may come from
the shared policy layer below a shmem vma, or it may come from the task
layer above a shmem vma; and without the final nodemask then nodeid cannot
be decided. And how ilx is applied depends also on page order.
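
Concretely, the index is just an accumulation of offsets, later reduced
modulo the number of nodes (simplified from shmem_get_policy() and
get_vma_policy() in the diff below):

	*ilx = inode->i_ino;	/* shmem's ->get_policy() supplies the ino bias */
	if (pol->mode == MPOL_INTERLEAVE) {
		/* get_vma_policy() adds the vma offset, scaled by page order */
		*ilx += vma->vm_pgoff >> order;
		*ilx += (addr - vma->vm_start) >> (PAGE_SHIFT + order);
	}
	/* interleave_nid() then picks the (ilx % nodes_weight) node */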

The interleave index is almost always irrelevant unless MPOL_INTERLEAVE:
with one exception in alloc_pages_mpol(), where the NO_INTERLEAVE_INDEX
passed down from vma-less alloc_pages() is also used as hint not to use
THP-style hugepage allocation - to avoid the overhead of a hugepage arg
(though I don't understand why we never just added a GFP bit for THP - if
it actually needs a different allocation strategy from other pages of the
same order). vma_alloc_folio() still carries its hugepage arg here, but
it is not used, and should be removed when agreed.
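
In other words, the hugepage special case now keys off order and ilx rather
than a dedicated argument (condensed from alloc_pages_mpol() in the diff
below, with the nested ifs flattened):

	if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
	    order == HPAGE_PMD_ORDER && ilx != NO_INTERLEAVE_INDEX &&
	    pol->mode != MPOL_INTERLEAVE &&
	    (!nodemask || node_isset(nid, *nodemask))) {
		/* try only the local/preferred node: compact, don't reclaim */
		page = __alloc_pages_node(nid,
				gfp | __GFP_THISNODE | __GFP_NORETRY, order);
		if (page || !(gfp & __GFP_DIRECT_RECLAIM))
			return page;
	}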

get_vma_policy() no longer allows a NULL vma: over time I believe we've
eradicated all the places which used to need it e.g. swapoff and madvise
used to pass NULL vma to read_swap_cache_async(), but now know the vma.

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Hugh Dickins <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Huang Ying <[email protected]>
Cc: Kefeng Wang <[email protected]>
Cc: Matthew Wilcox (Oracle) <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Mike Kravetz <[email protected]>
Cc: Nhat Pham <[email protected]>
Cc: Sidhartha Kumar <[email protected]>
Cc: Suren Baghdasaryan <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Vishal Moola (Oracle) <[email protected]>
Cc: Yang Shi <[email protected]>
Cc: Yosry Ahmed <[email protected]>
---
Rebased to mm.git's current mm-stable, to resolve with removal of
vma_policy() from include/linux/mempolicy.h, and temporary omission
of Nhat's ZSWAP mods from mm/swap_state.c: no other changes.

git cherry-pick 800caf44af25^..237d4ce921f0 # applies mm-unstable's 01-09
then apply this "mempolicy: alloc_pages_mpol() for NUMA policy without vma"
git cherry-pick e4fb3362b782^..ec6412928b8e # applies mm-unstable's 11-12

fs/proc/task_mmu.c | 5 +-
include/linux/gfp.h | 10 +-
include/linux/mempolicy.h | 13 +-
include/linux/mm.h | 2 +-
ipc/shm.c | 21 +--
mm/mempolicy.c | 383 +++++++++++++++++++---------------------------
mm/shmem.c | 92 ++++++-----
mm/swap.h | 9 +-
mm/swap_state.c | 86 +++++++----
9 files changed, 299 insertions(+), 322 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 1d99450..66ae1c2 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -2673,8 +2673,9 @@ static int show_numa_map(struct seq_file *m, void *v)
struct numa_maps *md = &numa_priv->md;
struct file *file = vma->vm_file;
struct mm_struct *mm = vma->vm_mm;
- struct mempolicy *pol;
char buffer[64];
+ struct mempolicy *pol;
+ pgoff_t ilx;
int nid;

if (!mm)
@@ -2683,7 +2684,7 @@ static int show_numa_map(struct seq_file *m, void *v)
/* Ensure we start with an empty set of numa_maps statistics. */
memset(md, 0, sizeof(*md));

- pol = __get_vma_policy(vma, vma->vm_start);
+ pol = __get_vma_policy(vma, vma->vm_start, &ilx);
if (pol) {
mpol_to_str(buffer, sizeof(buffer), pol);
mpol_cond_put(pol);
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 665f066..f74f8d0 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -8,6 +8,7 @@
#include <linux/topology.h>

struct vm_area_struct;
+struct mempolicy;

/* Convert GFP flags to their corresponding migrate type */
#define GFP_MOVABLE_MASK (__GFP_RECLAIMABLE|__GFP_MOVABLE)
@@ -262,7 +263,9 @@ static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask,

#ifdef CONFIG_NUMA
struct page *alloc_pages(gfp_t gfp, unsigned int order);
-struct folio *folio_alloc(gfp_t gfp, unsigned order);
+struct page *alloc_pages_mpol(gfp_t gfp, unsigned int order,
+ struct mempolicy *mpol, pgoff_t ilx, int nid);
+struct folio *folio_alloc(gfp_t gfp, unsigned int order);
struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
unsigned long addr, bool hugepage);
#else
@@ -270,6 +273,11 @@ static inline struct page *alloc_pages(gfp_t gfp_mask, unsigned int order)
{
return alloc_pages_node(numa_node_id(), gfp_mask, order);
}
+static inline struct page *alloc_pages_mpol(gfp_t gfp, unsigned int order,
+ struct mempolicy *mpol, pgoff_t ilx, int nid)
+{
+ return alloc_pages(gfp, order);
+}
static inline struct folio *folio_alloc(gfp_t gfp, unsigned int order)
{
return __folio_alloc_node(gfp, order, numa_node_id());
diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index acdb12f..2801d5b 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -126,7 +126,9 @@ struct mempolicy *mpol_shared_policy_lookup(struct shared_policy *sp,

struct mempolicy *get_task_policy(struct task_struct *p);
struct mempolicy *__get_vma_policy(struct vm_area_struct *vma,
- unsigned long addr);
+ unsigned long addr, pgoff_t *ilx);
+struct mempolicy *get_vma_policy(struct vm_area_struct *vma,
+ unsigned long addr, int order, pgoff_t *ilx);
bool vma_policy_mof(struct vm_area_struct *vma);

extern void numa_default_policy(void);
@@ -140,8 +142,6 @@ extern int huge_node(struct vm_area_struct *vma,
extern bool init_nodemask_of_mempolicy(nodemask_t *mask);
extern bool mempolicy_in_oom_domain(struct task_struct *tsk,
const nodemask_t *mask);
-extern nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy);
-
extern unsigned int mempolicy_slab_node(void);

extern enum zone_type policy_zone;
@@ -213,6 +213,13 @@ static inline void mpol_free_shared_policy(struct shared_policy *sp)
return NULL;
}

+static inline struct mempolicy *get_vma_policy(struct vm_area_struct *vma,
+ unsigned long addr, int order, pgoff_t *ilx)
+{
+ *ilx = 0;
+ return NULL;
+}
+
static inline int
vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst)
{
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 86e040e..b4d67a8 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -619,7 +619,7 @@ struct vm_operations_struct {
* policy.
*/
struct mempolicy *(*get_policy)(struct vm_area_struct *vma,
- unsigned long addr);
+ unsigned long addr, pgoff_t *ilx);
#endif
/*
* Called by vm_normal_page() for special PTEs to find the
diff --git a/ipc/shm.c b/ipc/shm.c
index 576a543..222aaf0 100644
--- a/ipc/shm.c
+++ b/ipc/shm.c
@@ -562,30 +562,25 @@ static unsigned long shm_pagesize(struct vm_area_struct *vma)
}

#ifdef CONFIG_NUMA
-static int shm_set_policy(struct vm_area_struct *vma, struct mempolicy *new)
+static int shm_set_policy(struct vm_area_struct *vma, struct mempolicy *mpol)
{
- struct file *file = vma->vm_file;
- struct shm_file_data *sfd = shm_file_data(file);
+ struct shm_file_data *sfd = shm_file_data(vma->vm_file);
int err = 0;

if (sfd->vm_ops->set_policy)
- err = sfd->vm_ops->set_policy(vma, new);
+ err = sfd->vm_ops->set_policy(vma, mpol);
return err;
}

static struct mempolicy *shm_get_policy(struct vm_area_struct *vma,
- unsigned long addr)
+ unsigned long addr, pgoff_t *ilx)
{
- struct file *file = vma->vm_file;
- struct shm_file_data *sfd = shm_file_data(file);
- struct mempolicy *pol = NULL;
+ struct shm_file_data *sfd = shm_file_data(vma->vm_file);
+ struct mempolicy *mpol = vma->vm_policy;

if (sfd->vm_ops->get_policy)
- pol = sfd->vm_ops->get_policy(vma, addr);
- else if (vma->vm_policy)
- pol = vma->vm_policy;
-
- return pol;
+ mpol = sfd->vm_ops->get_policy(vma, addr, ilx);
+ return mpol;
}
#endif

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 596d580..8df0503 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -114,6 +114,8 @@
#define MPOL_MF_INVERT (MPOL_MF_INTERNAL << 1) /* Invert check for nodemask */
#define MPOL_MF_WRLOCK (MPOL_MF_INTERNAL << 2) /* Write-lock walked vmas */

+#define NO_INTERLEAVE_INDEX (-1UL)
+
static struct kmem_cache *policy_cache;
static struct kmem_cache *sn_cache;

@@ -898,6 +900,7 @@ static long do_get_mempolicy(int *policy, nodemask_t *nmask,
}

if (flags & MPOL_F_ADDR) {
+ pgoff_t ilx; /* ignored here */
/*
* Do NOT fall back to task policy if the
* vma/shared policy at addr is NULL. We
@@ -909,10 +912,7 @@ static long do_get_mempolicy(int *policy, nodemask_t *nmask,
mmap_read_unlock(mm);
return -EFAULT;
}
- if (vma->vm_ops && vma->vm_ops->get_policy)
- pol = vma->vm_ops->get_policy(vma, addr);
- else
- pol = vma->vm_policy;
+ pol = __get_vma_policy(vma, addr, &ilx);
} else if (addr)
return -EINVAL;

@@ -1170,6 +1170,15 @@ static struct folio *new_folio(struct folio *src, unsigned long start)
break;
}

+ /*
+ * __get_vma_policy() now expects a genuine non-NULL vma. Return NULL
+ * when the page can no longer be located in a vma: that is not ideal
+ * (migrate_pages() will give up early, presuming ENOMEM), but good
+ * enough to avoid a crash by syzkaller or concurrent holepunch.
+ */
+ if (!vma)
+ return NULL;
+
if (folio_test_hugetlb(src)) {
return alloc_hugetlb_folio_vma(folio_hstate(src),
vma, address);
@@ -1178,9 +1187,6 @@ static struct folio *new_folio(struct folio *src, unsigned long start)
if (folio_test_large(src))
gfp = GFP_TRANSHUGE;

- /*
- * if !vma, vma_alloc_folio() will use task or system default policy
- */
return vma_alloc_folio(gfp, folio_order(src), vma, address,
folio_test_large(src));
}
@@ -1690,34 +1696,19 @@ bool vma_migratable(struct vm_area_struct *vma)
}

struct mempolicy *__get_vma_policy(struct vm_area_struct *vma,
- unsigned long addr)
+ unsigned long addr, pgoff_t *ilx)
{
- struct mempolicy *pol = NULL;
-
- if (vma) {
- if (vma->vm_ops && vma->vm_ops->get_policy) {
- pol = vma->vm_ops->get_policy(vma, addr);
- } else if (vma->vm_policy) {
- pol = vma->vm_policy;
-
- /*
- * shmem_alloc_page() passes MPOL_F_SHARED policy with
- * a pseudo vma whose vma->vm_ops=NULL. Take a reference
- * count on these policies which will be dropped by
- * mpol_cond_put() later
- */
- if (mpol_needs_cond_ref(pol))
- mpol_get(pol);
- }
- }
-
- return pol;
+ *ilx = 0;
+ return (vma->vm_ops && vma->vm_ops->get_policy) ?
+ vma->vm_ops->get_policy(vma, addr, ilx) : vma->vm_policy;
}

/*
- * get_vma_policy(@vma, @addr)
+ * get_vma_policy(@vma, @addr, @order, @ilx)
* @vma: virtual memory area whose policy is sought
* @addr: address in @vma for shared policy lookup
+ * @order: 0, or appropriate huge_page_order for interleaving
+ * @ilx: interleave index (output), for use only when MPOL_INTERLEAVE
*
* Returns effective policy for a VMA at specified address.
* Falls back to current->mempolicy or system default policy, as necessary.
@@ -1726,14 +1717,18 @@ struct mempolicy *__get_vma_policy(struct vm_area_struct *vma,
* freeing by another task. It is the caller's responsibility to free the
* extra reference for shared policies.
*/
-static struct mempolicy *get_vma_policy(struct vm_area_struct *vma,
- unsigned long addr)
+struct mempolicy *get_vma_policy(struct vm_area_struct *vma,
+ unsigned long addr, int order, pgoff_t *ilx)
{
- struct mempolicy *pol = __get_vma_policy(vma, addr);
+ struct mempolicy *pol;

+ pol = __get_vma_policy(vma, addr, ilx);
if (!pol)
pol = get_task_policy(current);
-
+ if (pol->mode == MPOL_INTERLEAVE) {
+ *ilx += vma->vm_pgoff >> order;
+ *ilx += (addr - vma->vm_start) >> (PAGE_SHIFT + order);
+ }
return pol;
}

@@ -1743,8 +1738,9 @@ bool vma_policy_mof(struct vm_area_struct *vma)

if (vma->vm_ops && vma->vm_ops->get_policy) {
bool ret = false;
+ pgoff_t ilx; /* ignored here */

- pol = vma->vm_ops->get_policy(vma, vma->vm_start);
+ pol = vma->vm_ops->get_policy(vma, vma->vm_start, &ilx);
if (pol && (pol->flags & MPOL_F_MOF))
ret = true;
mpol_cond_put(pol);
@@ -1779,54 +1775,6 @@ bool apply_policy_zone(struct mempolicy *policy, enum zone_type zone)
return zone >= dynamic_policy_zone;
}

-/*
- * Return a nodemask representing a mempolicy for filtering nodes for
- * page allocation
- */
-nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy)
-{
- int mode = policy->mode;
-
- /* Lower zones don't get a nodemask applied for MPOL_BIND */
- if (unlikely(mode == MPOL_BIND) &&
- apply_policy_zone(policy, gfp_zone(gfp)) &&
- cpuset_nodemask_valid_mems_allowed(&policy->nodes))
- return &policy->nodes;
-
- if (mode == MPOL_PREFERRED_MANY)
- return &policy->nodes;
-
- return NULL;
-}
-
-/*
- * Return the preferred node id for 'prefer' mempolicy, and return
- * the given id for all other policies.
- *
- * policy_node() is always coupled with policy_nodemask(), which
- * secures the nodemask limit for 'bind' and 'prefer-many' policy.
- */
-static int policy_node(gfp_t gfp, struct mempolicy *policy, int nid)
-{
- if (policy->mode == MPOL_PREFERRED) {
- nid = first_node(policy->nodes);
- } else {
- /*
- * __GFP_THISNODE shouldn't even be used with the bind policy
- * because we might easily break the expectation to stay on the
- * requested node and not break the policy.
- */
- WARN_ON_ONCE(policy->mode == MPOL_BIND && (gfp & __GFP_THISNODE));
- }
-
- if ((policy->mode == MPOL_BIND ||
- policy->mode == MPOL_PREFERRED_MANY) &&
- policy->home_node != NUMA_NO_NODE)
- return policy->home_node;
-
- return nid;
-}
-
/* Do dynamic interleaving for a process */
static unsigned int interleave_nodes(struct mempolicy *policy)
{
@@ -1886,11 +1834,11 @@ unsigned int mempolicy_slab_node(void)
}

/*
- * Do static interleaving for a VMA with known offset @n. Returns the n'th
- * node in pol->nodes (starting from n=0), wrapping around if n exceeds the
- * number of present nodes.
+ * Do static interleaving for interleave index @ilx. Returns the ilx'th
+ * node in pol->nodes (starting from ilx=0), wrapping around if ilx
+ * exceeds the number of present nodes.
*/
-static unsigned offset_il_node(struct mempolicy *pol, unsigned long n)
+static unsigned int interleave_nid(struct mempolicy *pol, pgoff_t ilx)
{
nodemask_t nodemask = pol->nodes;
unsigned int target, nnodes;
@@ -1908,33 +1856,54 @@ static unsigned offset_il_node(struct mempolicy *pol, unsigned long n)
nnodes = nodes_weight(nodemask);
if (!nnodes)
return numa_node_id();
- target = (unsigned int)n % nnodes;
+ target = ilx % nnodes;
nid = first_node(nodemask);
for (i = 0; i < target; i++)
nid = next_node(nid, nodemask);
return nid;
}

-/* Determine a node number for interleave */
-static inline unsigned interleave_nid(struct mempolicy *pol,
- struct vm_area_struct *vma, unsigned long addr, int shift)
+/*
+ * Return a nodemask representing a mempolicy for filtering nodes for
+ * page allocation, together with preferred node id (or the input node id).
+ */
+static nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *pol,
+ pgoff_t ilx, int *nid)
{
- if (vma) {
- unsigned long off;
+ nodemask_t *nodemask = NULL;

+ switch (pol->mode) {
+ case MPOL_PREFERRED:
+ /* Override input node id */
+ *nid = first_node(pol->nodes);
+ break;
+ case MPOL_PREFERRED_MANY:
+ nodemask = &pol->nodes;
+ if (pol->home_node != NUMA_NO_NODE)
+ *nid = pol->home_node;
+ break;
+ case MPOL_BIND:
+ /* Restrict to nodemask (but not on lower zones) */
+ if (apply_policy_zone(pol, gfp_zone(gfp)) &&
+ cpuset_nodemask_valid_mems_allowed(&pol->nodes))
+ nodemask = &pol->nodes;
+ if (pol->home_node != NUMA_NO_NODE)
+ *nid = pol->home_node;
/*
- * for small pages, there is no difference between
- * shift and PAGE_SHIFT, so the bit-shift is safe.
- * for huge pages, since vm_pgoff is in units of small
- * pages, we need to shift off the always 0 bits to get
- * a useful offset.
+ * __GFP_THISNODE shouldn't even be used with the bind policy
+ * because we might easily break the expectation to stay on the
+ * requested node and not break the policy.
*/
- BUG_ON(shift < PAGE_SHIFT);
- off = vma->vm_pgoff >> (shift - PAGE_SHIFT);
- off += (addr - vma->vm_start) >> shift;
- return offset_il_node(pol, off);
- } else
- return interleave_nodes(pol);
+ WARN_ON_ONCE(gfp & __GFP_THISNODE);
+ break;
+ case MPOL_INTERLEAVE:
+ /* Override input node id */
+ *nid = (ilx == NO_INTERLEAVE_INDEX) ?
+ interleave_nodes(pol) : interleave_nid(pol, ilx);
+ break;
+ }
+
+ return nodemask;
}

#ifdef CONFIG_HUGETLBFS
@@ -1950,27 +1919,16 @@ static inline unsigned interleave_nid(struct mempolicy *pol,
* to the struct mempolicy for conditional unref after allocation.
* If the effective policy is 'bind' or 'prefer-many', returns a pointer
* to the mempolicy's @nodemask for filtering the zonelist.
- *
- * Must be protected by read_mems_allowed_begin()
*/
int huge_node(struct vm_area_struct *vma, unsigned long addr, gfp_t gfp_flags,
- struct mempolicy **mpol, nodemask_t **nodemask)
+ struct mempolicy **mpol, nodemask_t **nodemask)
{
+ pgoff_t ilx;
int nid;
- int mode;

- *mpol = get_vma_policy(vma, addr);
- *nodemask = NULL;
- mode = (*mpol)->mode;
-
- if (unlikely(mode == MPOL_INTERLEAVE)) {
- nid = interleave_nid(*mpol, vma, addr,
- huge_page_shift(hstate_vma(vma)));
- } else {
- nid = policy_node(gfp_flags, *mpol, numa_node_id());
- if (mode == MPOL_BIND || mode == MPOL_PREFERRED_MANY)
- *nodemask = &(*mpol)->nodes;
- }
+ nid = numa_node_id();
+ *mpol = get_vma_policy(vma, addr, hstate_vma(vma)->order, &ilx);
+ *nodemask = policy_nodemask(gfp_flags, *mpol, ilx, &nid);
return nid;
}

@@ -2048,27 +2006,8 @@ bool mempolicy_in_oom_domain(struct task_struct *tsk,
return ret;
}

-/* Allocate a page in interleaved policy.
- Own path because it needs to do special accounting. */
-static struct page *alloc_page_interleave(gfp_t gfp, unsigned order,
- unsigned nid)
-{
- struct page *page;
-
- page = __alloc_pages(gfp, order, nid, NULL);
- /* skip NUMA_INTERLEAVE_HIT counter update if numa stats is disabled */
- if (!static_branch_likely(&vm_numa_stat_key))
- return page;
- if (page && page_to_nid(page) == nid) {
- preempt_disable();
- __count_numa_event(page_zone(page), NUMA_INTERLEAVE_HIT);
- preempt_enable();
- }
- return page;
-}
-
static struct page *alloc_pages_preferred_many(gfp_t gfp, unsigned int order,
- int nid, struct mempolicy *pol)
+ int nid, nodemask_t *nodemask)
{
struct page *page;
gfp_t preferred_gfp;
@@ -2081,7 +2020,7 @@ static struct page *alloc_pages_preferred_many(gfp_t gfp, unsigned int order,
*/
preferred_gfp = gfp | __GFP_NOWARN;
preferred_gfp &= ~(__GFP_DIRECT_RECLAIM | __GFP_NOFAIL);
- page = __alloc_pages(preferred_gfp, order, nid, &pol->nodes);
+ page = __alloc_pages(preferred_gfp, order, nid, nodemask);
if (!page)
page = __alloc_pages(gfp, order, nid, NULL);

@@ -2089,55 +2028,29 @@ static struct page *alloc_pages_preferred_many(gfp_t gfp, unsigned int order,
}

/**
- * vma_alloc_folio - Allocate a folio for a VMA.
+ * alloc_pages_mpol - Allocate pages according to NUMA mempolicy.
* @gfp: GFP flags.
- * @order: Order of the folio.
- * @vma: Pointer to VMA or NULL if not available.
- * @addr: Virtual address of the allocation. Must be inside @vma.
- * @hugepage: For hugepages try only the preferred node if possible.
+ * @order: Order of the page allocation.
+ * @pol: Pointer to the NUMA mempolicy.
+ * @ilx: Index for interleave mempolicy (also distinguishes alloc_pages()).
+ * @nid: Preferred node (usually numa_node_id() but @mpol may override it).
*
- * Allocate a folio for a specific address in @vma, using the appropriate
- * NUMA policy. When @vma is not NULL the caller must hold the mmap_lock
- * of the mm_struct of the VMA to prevent it from going away. Should be
- * used for all allocations for folios that will be mapped into user space.
- *
- * Return: The folio on success or NULL if allocation fails.
+ * Return: The page on success or NULL if allocation fails.
*/
-struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
- unsigned long addr, bool hugepage)
+struct page *alloc_pages_mpol(gfp_t gfp, unsigned int order,
+ struct mempolicy *pol, pgoff_t ilx, int nid)
{
- struct mempolicy *pol;
- int node = numa_node_id();
- struct folio *folio;
- int preferred_nid;
- nodemask_t *nmask;
+ nodemask_t *nodemask;
+ struct page *page;

- pol = get_vma_policy(vma, addr);
+ nodemask = policy_nodemask(gfp, pol, ilx, &nid);

- if (pol->mode == MPOL_INTERLEAVE) {
- struct page *page;
- unsigned nid;
-
- nid = interleave_nid(pol, vma, addr, PAGE_SHIFT + order);
- mpol_cond_put(pol);
- gfp |= __GFP_COMP;
- page = alloc_page_interleave(gfp, order, nid);
- return page_rmappable_folio(page);
- }
-
- if (pol->mode == MPOL_PREFERRED_MANY) {
- struct page *page;
-
- node = policy_node(gfp, pol, node);
- gfp |= __GFP_COMP;
- page = alloc_pages_preferred_many(gfp, order, node, pol);
- mpol_cond_put(pol);
- return page_rmappable_folio(page);
- }
-
- if (unlikely(IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && hugepage)) {
- int hpage_node = node;
+ if (pol->mode == MPOL_PREFERRED_MANY)
+ return alloc_pages_preferred_many(gfp, order, nid, nodemask);

+ if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
+ /* filter "hugepage" allocation, unless from alloc_pages() */
+ order == HPAGE_PMD_ORDER && ilx != NO_INTERLEAVE_INDEX) {
/*
* For hugepage allocation and non-interleave policy which
* allows the current node (or other explicitly preferred
@@ -2148,39 +2061,68 @@ struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
* If the policy is interleave or does not allow the current
* node in its nodemask, we allocate the standard way.
*/
- if (pol->mode == MPOL_PREFERRED)
- hpage_node = first_node(pol->nodes);
-
- nmask = policy_nodemask(gfp, pol);
- if (!nmask || node_isset(hpage_node, *nmask)) {
- mpol_cond_put(pol);
+ if (pol->mode != MPOL_INTERLEAVE &&
+ (!nodemask || node_isset(nid, *nodemask))) {
/*
* First, try to allocate THP only on local node, but
* don't reclaim unnecessarily, just compact.
*/
- folio = __folio_alloc_node(gfp | __GFP_THISNODE |
- __GFP_NORETRY, order, hpage_node);
-
+ page = __alloc_pages_node(nid,
+ gfp | __GFP_THISNODE | __GFP_NORETRY, order);
+ if (page || !(gfp & __GFP_DIRECT_RECLAIM))
+ return page;
/*
* If hugepage allocations are configured to always
* synchronous compact or the vma has been madvised
* to prefer hugepage backing, retry allowing remote
* memory with both reclaim and compact as well.
*/
- if (!folio && (gfp & __GFP_DIRECT_RECLAIM))
- folio = __folio_alloc(gfp, order, hpage_node,
- nmask);
-
- goto out;
}
}

- nmask = policy_nodemask(gfp, pol);
- preferred_nid = policy_node(gfp, pol, node);
- folio = __folio_alloc(gfp, order, preferred_nid, nmask);
+ page = __alloc_pages(gfp, order, nid, nodemask);
+
+ if (unlikely(pol->mode == MPOL_INTERLEAVE) && page) {
+ /* skip NUMA_INTERLEAVE_HIT update if numa stats is disabled */
+ if (static_branch_likely(&vm_numa_stat_key) &&
+ page_to_nid(page) == nid) {
+ preempt_disable();
+ __count_numa_event(page_zone(page), NUMA_INTERLEAVE_HIT);
+ preempt_enable();
+ }
+ }
+
+ return page;
+}
+
+/**
+ * vma_alloc_folio - Allocate a folio for a VMA.
+ * @gfp: GFP flags.
+ * @order: Order of the folio.
+ * @vma: Pointer to VMA.
+ * @addr: Virtual address of the allocation. Must be inside @vma.
+ * @hugepage: Unused (was: For hugepages try only preferred node if possible).
+ *
+ * Allocate a folio for a specific address in @vma, using the appropriate
+ * NUMA policy. The caller must hold the mmap_lock of the mm_struct of the
+ * VMA to prevent it from going away. Should be used for all allocations
+ * for folios that will be mapped into user space, excepting hugetlbfs, and
+ * excepting where direct use of alloc_pages_mpol() is more appropriate.
+ *
+ * Return: The folio on success or NULL if allocation fails.
+ */
+struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
+ unsigned long addr, bool hugepage)
+{
+ struct mempolicy *pol;
+ pgoff_t ilx;
+ struct page *page;
+
+ pol = get_vma_policy(vma, addr, order, &ilx);
+ page = alloc_pages_mpol(gfp | __GFP_COMP, order,
+ pol, ilx, numa_node_id());
mpol_cond_put(pol);
-out:
- return folio;
+ return page_rmappable_folio(page);
}
EXPORT_SYMBOL(vma_alloc_folio);

@@ -2198,33 +2140,23 @@ struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
* flags are used.
* Return: The page on success or NULL if allocation fails.
*/
-struct page *alloc_pages(gfp_t gfp, unsigned order)
+struct page *alloc_pages(gfp_t gfp, unsigned int order)
{
struct mempolicy *pol = &default_policy;
- struct page *page;
-
- if (!in_interrupt() && !(gfp & __GFP_THISNODE))
- pol = get_task_policy(current);

/*
* No reference counting needed for current->mempolicy
* nor system default_policy
*/
- if (pol->mode == MPOL_INTERLEAVE)
- page = alloc_page_interleave(gfp, order, interleave_nodes(pol));
- else if (pol->mode == MPOL_PREFERRED_MANY)
- page = alloc_pages_preferred_many(gfp, order,
- policy_node(gfp, pol, numa_node_id()), pol);
- else
- page = __alloc_pages(gfp, order,
- policy_node(gfp, pol, numa_node_id()),
- policy_nodemask(gfp, pol));
+ if (!in_interrupt() && !(gfp & __GFP_THISNODE))
+ pol = get_task_policy(current);

- return page;
+ return alloc_pages_mpol(gfp, order,
+ pol, NO_INTERLEAVE_INDEX, numa_node_id());
}
EXPORT_SYMBOL(alloc_pages);

-struct folio *folio_alloc(gfp_t gfp, unsigned order)
+struct folio *folio_alloc(gfp_t gfp, unsigned int order)
{
return page_rmappable_folio(alloc_pages(gfp | __GFP_COMP, order));
}
@@ -2295,6 +2227,8 @@ unsigned long alloc_pages_bulk_array_mempolicy(gfp_t gfp,
unsigned long nr_pages, struct page **page_array)
{
struct mempolicy *pol = &default_policy;
+ nodemask_t *nodemask;
+ int nid;

if (!in_interrupt() && !(gfp & __GFP_THISNODE))
pol = get_task_policy(current);
@@ -2307,9 +2241,10 @@ unsigned long alloc_pages_bulk_array_mempolicy(gfp_t gfp,
return alloc_pages_bulk_array_preferred_many(gfp,
numa_node_id(), pol, nr_pages, page_array);

- return __alloc_pages_bulk(gfp, policy_node(gfp, pol, numa_node_id()),
- policy_nodemask(gfp, pol), nr_pages, NULL,
- page_array);
+ nid = numa_node_id();
+ nodemask = policy_nodemask(gfp, pol, NO_INTERLEAVE_INDEX, &nid);
+ return __alloc_pages_bulk(gfp, nid, nodemask,
+ nr_pages, NULL, page_array);
}

int vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst)
@@ -2496,23 +2431,21 @@ int mpol_misplaced(struct folio *folio, struct vm_area_struct *vma,
unsigned long addr)
{
struct mempolicy *pol;
+ pgoff_t ilx;
struct zoneref *z;
int curnid = folio_nid(folio);
- unsigned long pgoff;
int thiscpu = raw_smp_processor_id();
int thisnid = cpu_to_node(thiscpu);
int polnid = NUMA_NO_NODE;
int ret = NUMA_NO_NODE;

- pol = get_vma_policy(vma, addr);
+ pol = get_vma_policy(vma, addr, folio_order(folio), &ilx);
if (!(pol->flags & MPOL_F_MOF))
goto out;

switch (pol->mode) {
case MPOL_INTERLEAVE:
- pgoff = vma->vm_pgoff;
- pgoff += (addr - vma->vm_start) >> PAGE_SHIFT;
- polnid = offset_il_node(pol, pgoff);
+ polnid = interleave_nid(pol, ilx);
break;

case MPOL_PREFERRED:
diff --git a/mm/shmem.c b/mm/shmem.c
index bcbe9db..a314a25 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1544,38 +1544,20 @@ static inline struct mempolicy *shmem_get_sbmpol(struct shmem_sb_info *sbinfo)
return NULL;
}
#endif /* CONFIG_NUMA && CONFIG_TMPFS */
-#ifndef CONFIG_NUMA
-#define vm_policy vm_private_data
-#endif

-static void shmem_pseudo_vma_init(struct vm_area_struct *vma,
- struct shmem_inode_info *info, pgoff_t index)
-{
- /* Create a pseudo vma that just contains the policy */
- vma_init(vma, NULL);
- /* Bias interleave by inode number to distribute better across nodes */
- vma->vm_pgoff = index + info->vfs_inode.i_ino;
- vma->vm_policy = mpol_shared_policy_lookup(&info->policy, index);
-}
+static struct mempolicy *shmem_get_pgoff_policy(struct shmem_inode_info *info,
+ pgoff_t index, unsigned int order, pgoff_t *ilx);

-static void shmem_pseudo_vma_destroy(struct vm_area_struct *vma)
-{
- /* Drop reference taken by mpol_shared_policy_lookup() */
- mpol_cond_put(vma->vm_policy);
-}
-
-static struct folio *shmem_swapin(swp_entry_t swap, gfp_t gfp,
+static struct folio *shmem_swapin_cluster(swp_entry_t swap, gfp_t gfp,
struct shmem_inode_info *info, pgoff_t index)
{
- struct vm_area_struct pvma;
+ struct mempolicy *mpol;
+ pgoff_t ilx;
struct page *page;
- struct vm_fault vmf = {
- .vma = &pvma,
- };

- shmem_pseudo_vma_init(&pvma, info, index);
- page = swap_cluster_readahead(swap, gfp, &vmf);
- shmem_pseudo_vma_destroy(&pvma);
+ mpol = shmem_get_pgoff_policy(info, index, 0, &ilx);
+ page = swap_cluster_readahead(swap, gfp, mpol, ilx);
+ mpol_cond_put(mpol);

if (!page)
return NULL;
@@ -1609,27 +1591,29 @@ static gfp_t limit_gfp_mask(gfp_t huge_gfp, gfp_t limit_gfp)
static struct folio *shmem_alloc_hugefolio(gfp_t gfp,
struct shmem_inode_info *info, pgoff_t index)
{
- struct vm_area_struct pvma;
- struct folio *folio;
+ struct mempolicy *mpol;
+ pgoff_t ilx;
+ struct page *page;

- shmem_pseudo_vma_init(&pvma, info, index);
- folio = vma_alloc_folio(gfp, HPAGE_PMD_ORDER, &pvma, 0, true);
- shmem_pseudo_vma_destroy(&pvma);
+ mpol = shmem_get_pgoff_policy(info, index, HPAGE_PMD_ORDER, &ilx);
+ page = alloc_pages_mpol(gfp, HPAGE_PMD_ORDER, mpol, ilx, numa_node_id());
+ mpol_cond_put(mpol);

- return folio;
+ return page_rmappable_folio(page);
}

static struct folio *shmem_alloc_folio(gfp_t gfp,
struct shmem_inode_info *info, pgoff_t index)
{
- struct vm_area_struct pvma;
- struct folio *folio;
+ struct mempolicy *mpol;
+ pgoff_t ilx;
+ struct page *page;

- shmem_pseudo_vma_init(&pvma, info, index);
- folio = vma_alloc_folio(gfp, 0, &pvma, 0, false);
- shmem_pseudo_vma_destroy(&pvma);
+ mpol = shmem_get_pgoff_policy(info, index, 0, &ilx);
+ page = alloc_pages_mpol(gfp, 0, mpol, ilx, numa_node_id());
+ mpol_cond_put(mpol);

- return folio;
+ return (struct folio *)page;
}

static struct folio *shmem_alloc_and_add_folio(gfp_t gfp,
@@ -1883,7 +1867,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
count_memcg_event_mm(fault_mm, PGMAJFAULT);
}
/* Here we actually start the io */
- folio = shmem_swapin(swap, gfp, info, index);
+ folio = shmem_swapin_cluster(swap, gfp, info, index);
if (!folio) {
error = -ENOMEM;
goto failed;
@@ -2334,15 +2318,41 @@ static int shmem_set_policy(struct vm_area_struct *vma, struct mempolicy *mpol)
}

static struct mempolicy *shmem_get_policy(struct vm_area_struct *vma,
- unsigned long addr)
+ unsigned long addr, pgoff_t *ilx)
{
struct inode *inode = file_inode(vma->vm_file);
pgoff_t index;

+ /*
+ * Bias interleave by inode number to distribute better across nodes;
+ * but this interface is independent of which page order is used, so
+ * supplies only that bias, letting caller apply the offset (adjusted
+ * by page order, as in shmem_get_pgoff_policy() and get_vma_policy()).
+ */
+ *ilx = inode->i_ino;
index = ((addr - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
return mpol_shared_policy_lookup(&SHMEM_I(inode)->policy, index);
}
-#endif
+
+static struct mempolicy *shmem_get_pgoff_policy(struct shmem_inode_info *info,
+ pgoff_t index, unsigned int order, pgoff_t *ilx)
+{
+ struct mempolicy *mpol;
+
+ /* Bias interleave by inode number to distribute better across nodes */
+ *ilx = info->vfs_inode.i_ino + (index >> order);
+
+ mpol = mpol_shared_policy_lookup(&info->policy, index);
+ return mpol ? mpol : get_task_policy(current);
+}
+#else
+static struct mempolicy *shmem_get_pgoff_policy(struct shmem_inode_info *info,
+ pgoff_t index, unsigned int order, pgoff_t *ilx)
+{
+ *ilx = 0;
+ return NULL;
+}
+#endif /* CONFIG_NUMA */

int shmem_lock(struct file *file, int lock, struct ucounts *ucounts)
{
diff --git a/mm/swap.h b/mm/swap.h
index 8a3c7a0..73c332e 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -2,6 +2,8 @@
#ifndef _MM_SWAP_H
#define _MM_SWAP_H

+struct mempolicy;
+
#ifdef CONFIG_SWAP
#include <linux/blk_types.h> /* for bio_end_io_t */

@@ -48,11 +50,10 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
unsigned long addr,
struct swap_iocb **plug);
struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
- struct vm_area_struct *vma,
- unsigned long addr,
+ struct mempolicy *mpol, pgoff_t ilx,
bool *new_page_allocated);
struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
- struct vm_fault *vmf);
+ struct mempolicy *mpol, pgoff_t ilx);
struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
struct vm_fault *vmf);

@@ -80,7 +81,7 @@ static inline void show_swap_cache_info(void)
}

static inline struct page *swap_cluster_readahead(swp_entry_t entry,
- gfp_t gfp_mask, struct vm_fault *vmf)
+ gfp_t gfp_mask, struct mempolicy *mpol, pgoff_t ilx)
{
return NULL;
}
diff --git a/mm/swap_state.c b/mm/swap_state.c
index b3b14bd..a421f01 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -10,6 +10,7 @@
#include <linux/mm.h>
#include <linux/gfp.h>
#include <linux/kernel_stat.h>
+#include <linux/mempolicy.h>
#include <linux/swap.h>
#include <linux/swapops.h>
#include <linux/init.h>
@@ -410,8 +411,8 @@ struct folio *filemap_get_incore_folio(struct address_space *mapping,
}

struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
- struct vm_area_struct *vma, unsigned long addr,
- bool *new_page_allocated)
+ struct mempolicy *mpol, pgoff_t ilx,
+ bool *new_page_allocated)
{
struct swap_info_struct *si;
struct folio *folio;
@@ -453,7 +454,8 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
* before marking swap_map SWAP_HAS_CACHE, when -EEXIST will
* cause any racers to loop around until we add it to cache.
*/
- folio = vma_alloc_folio(gfp_mask, 0, vma, addr, false);
+ folio = (struct folio *)alloc_pages_mpol(gfp_mask, 0,
+ mpol, ilx, numa_node_id());
if (!folio)
goto fail_put_swap;

@@ -528,14 +530,19 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
struct vm_area_struct *vma,
unsigned long addr, struct swap_iocb **plug)
{
- bool page_was_allocated;
- struct page *retpage = __read_swap_cache_async(entry, gfp_mask,
- vma, addr, &page_was_allocated);
+ bool page_allocated;
+ struct mempolicy *mpol;
+ pgoff_t ilx;
+ struct page *page;

- if (page_was_allocated)
- swap_readpage(retpage, false, plug);
+ mpol = get_vma_policy(vma, addr, 0, &ilx);
+ page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
+ &page_allocated);
+ mpol_cond_put(mpol);

- return retpage;
+ if (page_allocated)
+ swap_readpage(page, false, plug);
+ return page;
}

static unsigned int __swapin_nr_pages(unsigned long prev_offset,
@@ -603,7 +610,8 @@ static unsigned long swapin_nr_pages(unsigned long offset)
* swap_cluster_readahead - swap in pages in hope we need them soon
* @entry: swap entry of this memory
* @gfp_mask: memory allocation flags
- * @vmf: fault information
+ * @mpol: NUMA memory allocation policy to be applied
+ * @ilx: NUMA interleave index, for use only when MPOL_INTERLEAVE
*
* Returns the struct page for entry and addr, after queueing swapin.
*
@@ -612,13 +620,12 @@ static unsigned long swapin_nr_pages(unsigned long offset)
* because it doesn't cost us any seek time. We also make sure to queue
* the 'original' request together with the readahead ones...
*
- * This has been extended to use the NUMA policies from the mm triggering
- * the readahead.
- *
- * Caller must hold read mmap_lock if vmf->vma is not NULL.
+ * Note: it is intentional that the same NUMA policy and interleave index
+ * are used for every page of the readahead: neighbouring pages on swap
+ * are fairly likely to have been swapped out from the same node.
*/
struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
- struct vm_fault *vmf)
+ struct mempolicy *mpol, pgoff_t ilx)
{
struct page *page;
unsigned long entry_offset = swp_offset(entry);
@@ -629,8 +636,6 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
struct blk_plug plug;
struct swap_iocb *splug = NULL;
bool page_allocated;
- struct vm_area_struct *vma = vmf->vma;
- unsigned long addr = vmf->address;

mask = swapin_nr_pages(offset) - 1;
if (!mask)
@@ -648,8 +653,8 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
for (offset = start_offset; offset <= end_offset ; offset++) {
/* Ok, do the async read-ahead now */
page = __read_swap_cache_async(
- swp_entry(swp_type(entry), offset),
- gfp_mask, vma, addr, &page_allocated);
+ swp_entry(swp_type(entry), offset),
+ gfp_mask, mpol, ilx, &page_allocated);
if (!page)
continue;
if (page_allocated) {
@@ -663,11 +668,14 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
}
blk_finish_plug(&plug);
swap_read_unplug(splug);
-
lru_add_drain(); /* Push any new pages onto the LRU now */
skip:
/* The page was likely read above, so no need for plugging here */
- return read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
+ page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
+ &page_allocated);
+ if (unlikely(page_allocated))
+ swap_readpage(page, false, NULL);
+ return page;
}

int init_swap_address_space(unsigned int type, unsigned long nr_pages)
@@ -765,8 +773,10 @@ static void swap_ra_info(struct vm_fault *vmf,

/**
* swap_vma_readahead - swap in pages in hope we need them soon
- * @fentry: swap entry of this memory
+ * @targ_entry: swap entry of the targeted memory
* @gfp_mask: memory allocation flags
+ * @mpol: NUMA memory allocation policy to be applied
+ * @targ_ilx: NUMA interleave index, for use only when MPOL_INTERLEAVE
* @vmf: fault information
*
* Returns the struct page for entry and addr, after queueing swapin.
@@ -777,16 +787,17 @@ static void swap_ra_info(struct vm_fault *vmf,
* Caller must hold read mmap_lock if vmf->vma is not NULL.
*
*/
-static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
+static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
+ struct mempolicy *mpol, pgoff_t targ_ilx,
struct vm_fault *vmf)
{
struct blk_plug plug;
struct swap_iocb *splug = NULL;
- struct vm_area_struct *vma = vmf->vma;
struct page *page;
pte_t *pte = NULL, pentry;
unsigned long addr;
swp_entry_t entry;
+ pgoff_t ilx;
unsigned int i;
bool page_allocated;
struct vma_swap_readahead ra_info = {
@@ -798,9 +809,10 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
goto skip;

addr = vmf->address - (ra_info.offset * PAGE_SIZE);
+ ilx = targ_ilx - ra_info.offset;

blk_start_plug(&plug);
- for (i = 0; i < ra_info.nr_pte; i++, addr += PAGE_SIZE) {
+ for (i = 0; i < ra_info.nr_pte; i++, ilx++, addr += PAGE_SIZE) {
if (!pte++) {
pte = pte_offset_map(vmf->pmd, addr);
if (!pte)
@@ -814,8 +826,8 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
continue;
pte_unmap(pte);
pte = NULL;
- page = __read_swap_cache_async(entry, gfp_mask, vma,
- addr, &page_allocated);
+ page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
+ &page_allocated);
if (!page)
continue;
if (page_allocated) {
@@ -834,8 +846,11 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
lru_add_drain();
skip:
/* The page was likely read above, so no need for plugging here */
- return read_swap_cache_async(fentry, gfp_mask, vma, vmf->address,
- NULL);
+ page = __read_swap_cache_async(targ_entry, gfp_mask, mpol, targ_ilx,
+ &page_allocated);
+ if (unlikely(page_allocated))
+ swap_readpage(page, false, NULL);
+ return page;
}

/**
@@ -853,9 +868,16 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
struct vm_fault *vmf)
{
- return swap_use_vma_readahead() ?
- swap_vma_readahead(entry, gfp_mask, vmf) :
- swap_cluster_readahead(entry, gfp_mask, vmf);
+ struct mempolicy *mpol;
+ pgoff_t ilx;
+ struct page *page;
+
+ mpol = get_vma_policy(vmf->vma, vmf->address, 0, &ilx);
+ page = swap_use_vma_readahead() ?
+ swap_vma_readahead(entry, gfp_mask, mpol, ilx, vmf) :
+ swap_cluster_readahead(entry, gfp_mask, mpol, ilx);
+ mpol_cond_put(mpol);
+ return page;
}

#ifdef CONFIG_SYSFS
--
1.8.4.5

2023-10-23 16:53:49

by domenico cerasuolo

[permalink] [raw]
Subject: Re: [PATCH v3 10/12] mempolicy: alloc_pages_mpol() for NUMA policy without vma

On Fri, 20 Oct 2023 at 00:05, Hugh Dickins
<[email protected]> wrote:
>
> Shrink shmem's stack usage by eliminating the pseudo-vma from its folio
> allocation. alloc_pages_mpol(gfp, order, pol, ilx, nid) becomes the
> principal actor for passing mempolicy choice down to __alloc_pages(),
> rather than vma_alloc_folio(gfp, order, vma, addr, hugepage).
>
> vma_alloc_folio() and alloc_pages() remain, but as wrappers around
> alloc_pages_mpol(). alloc_pages_bulk_*() untouched, except to provide the
> additional args to policy_nodemask(), which subsumes policy_node().
> Cleanup throughout, cutting out some unhelpful "helpers".
>
> It would all be much simpler without MPOL_INTERLEAVE, but that adds a
> dynamic to the constant mpol: complicated by v3.6 commit 09c231cb8bfd
> ("tmpfs: distribute interleave better across nodes"), which added ino bias
> to the interleave, hidden from mm/mempolicy.c until this commit.
>
> Hence "ilx" throughout, the "interleave index". Originally I thought it
> could be done just with nid, but that's wrong: the nodemask may come from
> the shared policy layer below a shmem vma, or it may come from the task
> layer above a shmem vma; and without the final nodemask then nodeid cannot
> be decided. And how ilx is applied depends also on page order.
>
> The interleave index is almost always irrelevant unless MPOL_INTERLEAVE:
> with one exception in alloc_pages_mpol(), where the NO_INTERLEAVE_INDEX
> passed down from vma-less alloc_pages() is also used as hint not to use
> THP-style hugepage allocation - to avoid the overhead of a hugepage arg
> (though I don't understand why we never just added a GFP bit for THP - if
> it actually needs a different allocation strategy from other pages of the
> same order). vma_alloc_folio() still carries its hugepage arg here, but
> it is not used, and should be removed when agreed.
>
> get_vma_policy() no longer allows a NULL vma: over time I believe we've
> eradicated all the places which used to need it e.g. swapoff and madvise
> used to pass NULL vma to read_swap_cache_async(), but now know the vma.
>
> Link: https://lkml.kernel.org/r/[email protected]
> Signed-off-by: Hugh Dickins <[email protected]>
> Cc: Andi Kleen <[email protected]>
> Cc: Christoph Lameter <[email protected]>
> Cc: David Hildenbrand <[email protected]>
> Cc: Greg Kroah-Hartman <[email protected]>
> Cc: Huang Ying <[email protected]>
> Cc: Kefeng Wang <[email protected]>
> Cc: Matthew Wilcox (Oracle) <[email protected]>
> Cc: Mel Gorman <[email protected]>
> Cc: Michal Hocko <[email protected]>
> Cc: Mike Kravetz <[email protected]>
> Cc: Nhat Pham <[email protected]>
> Cc: Sidhartha Kumar <[email protected]>
> Cc: Suren Baghdasaryan <[email protected]>
> Cc: Tejun Heo <[email protected]>
> Cc: Vishal Moola (Oracle) <[email protected]>
> Cc: Yang Shi <[email protected]>
> Cc: Yosry Ahmed <[email protected]>
> ---
> Rebased to mm.git's current mm-stable, to resolve with removal of
> vma_policy() from include/linux/mempolicy.h, and temporary omission
> of Nhat's ZSWAP mods from mm/swap_state.c: no other changes.

Hi Hugh,

not sure if it's the rebase, but I don't see an update to the
__read_swap_cache_async() invocation in zswap.c at line 1078. Shouldn't we
pass a mempolicy there too?
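
Presumably it would need something along these lines, since there is no vma
at hand in that path (a hypothetical sketch of the shape only, variable
names guessed from context, not a tested fix):

	/* hypothetical: task policy plus NO_INTERLEAVE_INDEX, analogous
	 * to the vma-less alloc_pages() case
	 */
	mpol = get_task_policy(current);
	page = __read_swap_cache_async(swpentry, GFP_KERNEL, mpol,
				NO_INTERLEAVE_INDEX, &page_was_allocated);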

Thanks,
Domenico

>
> git cherry-pick 800caf44af25^..237d4ce921f0 # applies mm-unstable's 01-09
> then apply this "mempolicy: alloc_pages_mpol() for NUMA policy without vma"
> git cherry-pick e4fb3362b782^..ec6412928b8e # applies mm-unstable's 11-12
>
> fs/proc/task_mmu.c | 5 +-
> include/linux/gfp.h | 10 +-
> include/linux/mempolicy.h | 13 +-
> include/linux/mm.h | 2 +-
> ipc/shm.c | 21 +--
> mm/mempolicy.c | 383 +++++++++++++++++++---------------------------
> mm/shmem.c | 92 ++++++-----
> mm/swap.h | 9 +-
> mm/swap_state.c | 86 +++++++----
> 9 files changed, 299 insertions(+), 322 deletions(-)
>
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index 1d99450..66ae1c2 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -2673,8 +2673,9 @@ static int show_numa_map(struct seq_file *m, void *v)
> struct numa_maps *md = &numa_priv->md;
> struct file *file = vma->vm_file;
> struct mm_struct *mm = vma->vm_mm;
> - struct mempolicy *pol;
> char buffer[64];
> + struct mempolicy *pol;
> + pgoff_t ilx;
> int nid;
>
> if (!mm)
> @@ -2683,7 +2684,7 @@ static int show_numa_map(struct seq_file *m, void *v)
> /* Ensure we start with an empty set of numa_maps statistics. */
> memset(md, 0, sizeof(*md));
>
> - pol = __get_vma_policy(vma, vma->vm_start);
> + pol = __get_vma_policy(vma, vma->vm_start, &ilx);
> if (pol) {
> mpol_to_str(buffer, sizeof(buffer), pol);
> mpol_cond_put(pol);
> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> index 665f066..f74f8d0 100644
> --- a/include/linux/gfp.h
> +++ b/include/linux/gfp.h
> @@ -8,6 +8,7 @@
> #include <linux/topology.h>
>
> struct vm_area_struct;
> +struct mempolicy;
>
> /* Convert GFP flags to their corresponding migrate type */
> #define GFP_MOVABLE_MASK (__GFP_RECLAIMABLE|__GFP_MOVABLE)
> @@ -262,7 +263,9 @@ static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask,
>
> #ifdef CONFIG_NUMA
> struct page *alloc_pages(gfp_t gfp, unsigned int order);
> -struct folio *folio_alloc(gfp_t gfp, unsigned order);
> +struct page *alloc_pages_mpol(gfp_t gfp, unsigned int order,
> + struct mempolicy *mpol, pgoff_t ilx, int nid);
> +struct folio *folio_alloc(gfp_t gfp, unsigned int order);
> struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
> unsigned long addr, bool hugepage);
> #else
> @@ -270,6 +273,11 @@ static inline struct page *alloc_pages(gfp_t gfp_mask, unsigned int order)
> {
> return alloc_pages_node(numa_node_id(), gfp_mask, order);
> }
> +static inline struct page *alloc_pages_mpol(gfp_t gfp, unsigned int order,
> + struct mempolicy *mpol, pgoff_t ilx, int nid)
> +{
> + return alloc_pages(gfp, order);
> +}
> static inline struct folio *folio_alloc(gfp_t gfp, unsigned int order)
> {
> return __folio_alloc_node(gfp, order, numa_node_id());
> diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
> index acdb12f..2801d5b 100644
> --- a/include/linux/mempolicy.h
> +++ b/include/linux/mempolicy.h
> @@ -126,7 +126,9 @@ struct mempolicy *mpol_shared_policy_lookup(struct shared_policy *sp,
>
> struct mempolicy *get_task_policy(struct task_struct *p);
> struct mempolicy *__get_vma_policy(struct vm_area_struct *vma,
> - unsigned long addr);
> + unsigned long addr, pgoff_t *ilx);
> +struct mempolicy *get_vma_policy(struct vm_area_struct *vma,
> + unsigned long addr, int order, pgoff_t *ilx);
> bool vma_policy_mof(struct vm_area_struct *vma);
>
> extern void numa_default_policy(void);
> @@ -140,8 +142,6 @@ extern int huge_node(struct vm_area_struct *vma,
> extern bool init_nodemask_of_mempolicy(nodemask_t *mask);
> extern bool mempolicy_in_oom_domain(struct task_struct *tsk,
> const nodemask_t *mask);
> -extern nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy);
> -
> extern unsigned int mempolicy_slab_node(void);
>
> extern enum zone_type policy_zone;
> @@ -213,6 +213,13 @@ static inline void mpol_free_shared_policy(struct shared_policy *sp)
> return NULL;
> }
>
> +static inline struct mempolicy *get_vma_policy(struct vm_area_struct *vma,
> + unsigned long addr, int order, pgoff_t *ilx)
> +{
> + *ilx = 0;
> + return NULL;
> +}
> +
> static inline int
> vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst)
> {
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 86e040e..b4d67a8 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -619,7 +619,7 @@ struct vm_operations_struct {
> * policy.
> */
> struct mempolicy *(*get_policy)(struct vm_area_struct *vma,
> - unsigned long addr);
> + unsigned long addr, pgoff_t *ilx);
> #endif
> /*
> * Called by vm_normal_page() for special PTEs to find the
> diff --git a/ipc/shm.c b/ipc/shm.c
> index 576a543..222aaf0 100644
> --- a/ipc/shm.c
> +++ b/ipc/shm.c
> @@ -562,30 +562,25 @@ static unsigned long shm_pagesize(struct vm_area_struct *vma)
> }
>
> #ifdef CONFIG_NUMA
> -static int shm_set_policy(struct vm_area_struct *vma, struct mempolicy *new)
> +static int shm_set_policy(struct vm_area_struct *vma, struct mempolicy *mpol)
> {
> - struct file *file = vma->vm_file;
> - struct shm_file_data *sfd = shm_file_data(file);
> + struct shm_file_data *sfd = shm_file_data(vma->vm_file);
> int err = 0;
>
> if (sfd->vm_ops->set_policy)
> - err = sfd->vm_ops->set_policy(vma, new);
> + err = sfd->vm_ops->set_policy(vma, mpol);
> return err;
> }
>
> static struct mempolicy *shm_get_policy(struct vm_area_struct *vma,
> - unsigned long addr)
> + unsigned long addr, pgoff_t *ilx)
> {
> - struct file *file = vma->vm_file;
> - struct shm_file_data *sfd = shm_file_data(file);
> - struct mempolicy *pol = NULL;
> + struct shm_file_data *sfd = shm_file_data(vma->vm_file);
> + struct mempolicy *mpol = vma->vm_policy;
>
> if (sfd->vm_ops->get_policy)
> - pol = sfd->vm_ops->get_policy(vma, addr);
> - else if (vma->vm_policy)
> - pol = vma->vm_policy;
> -
> - return pol;
> + mpol = sfd->vm_ops->get_policy(vma, addr, ilx);
> + return mpol;
> }
> #endif
>
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index 596d580..8df0503 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -114,6 +114,8 @@
> #define MPOL_MF_INVERT (MPOL_MF_INTERNAL << 1) /* Invert check for nodemask */
> #define MPOL_MF_WRLOCK (MPOL_MF_INTERNAL << 2) /* Write-lock walked vmas */
>
> +#define NO_INTERLEAVE_INDEX (-1UL)
> +
> static struct kmem_cache *policy_cache;
> static struct kmem_cache *sn_cache;
>
> @@ -898,6 +900,7 @@ static long do_get_mempolicy(int *policy, nodemask_t *nmask,
> }
>
> if (flags & MPOL_F_ADDR) {
> + pgoff_t ilx; /* ignored here */
> /*
> * Do NOT fall back to task policy if the
> * vma/shared policy at addr is NULL. We
> @@ -909,10 +912,7 @@ static long do_get_mempolicy(int *policy, nodemask_t *nmask,
> mmap_read_unlock(mm);
> return -EFAULT;
> }
> - if (vma->vm_ops && vma->vm_ops->get_policy)
> - pol = vma->vm_ops->get_policy(vma, addr);
> - else
> - pol = vma->vm_policy;
> + pol = __get_vma_policy(vma, addr, &ilx);
> } else if (addr)
> return -EINVAL;
>
> @@ -1170,6 +1170,15 @@ static struct folio *new_folio(struct folio *src, unsigned long start)
> break;
> }
>
> + /*
> + * __get_vma_policy() now expects a genuine non-NULL vma. Return NULL
> + * when the page can no longer be located in a vma: that is not ideal
> + * (migrate_pages() will give up early, presuming ENOMEM), but good
> + * enough to avoid a crash by syzkaller or concurrent holepunch.
> + */
> + if (!vma)
> + return NULL;
> +
> if (folio_test_hugetlb(src)) {
> return alloc_hugetlb_folio_vma(folio_hstate(src),
> vma, address);
> @@ -1178,9 +1187,6 @@ static struct folio *new_folio(struct folio *src, unsigned long start)
> if (folio_test_large(src))
> gfp = GFP_TRANSHUGE;
>
> - /*
> - * if !vma, vma_alloc_folio() will use task or system default policy
> - */
> return vma_alloc_folio(gfp, folio_order(src), vma, address,
> folio_test_large(src));
> }
> @@ -1690,34 +1696,19 @@ bool vma_migratable(struct vm_area_struct *vma)
> }
>
> struct mempolicy *__get_vma_policy(struct vm_area_struct *vma,
> - unsigned long addr)
> + unsigned long addr, pgoff_t *ilx)
> {
> - struct mempolicy *pol = NULL;
> -
> - if (vma) {
> - if (vma->vm_ops && vma->vm_ops->get_policy) {
> - pol = vma->vm_ops->get_policy(vma, addr);
> - } else if (vma->vm_policy) {
> - pol = vma->vm_policy;
> -
> - /*
> - * shmem_alloc_page() passes MPOL_F_SHARED policy with
> - * a pseudo vma whose vma->vm_ops=NULL. Take a reference
> - * count on these policies which will be dropped by
> - * mpol_cond_put() later
> - */
> - if (mpol_needs_cond_ref(pol))
> - mpol_get(pol);
> - }
> - }
> -
> - return pol;
> + *ilx = 0;
> + return (vma->vm_ops && vma->vm_ops->get_policy) ?
> + vma->vm_ops->get_policy(vma, addr, ilx) : vma->vm_policy;
> }
>
> /*
> - * get_vma_policy(@vma, @addr)
> + * get_vma_policy(@vma, @addr, @order, @ilx)
> * @vma: virtual memory area whose policy is sought
> * @addr: address in @vma for shared policy lookup
> + * @order: 0, or appropriate huge_page_order for interleaving
> + * @ilx: interleave index (output), for use only when MPOL_INTERLEAVE
> *
> * Returns effective policy for a VMA at specified address.
> * Falls back to current->mempolicy or system default policy, as necessary.
> @@ -1726,14 +1717,18 @@ struct mempolicy *__get_vma_policy(struct vm_area_struct *vma,
> * freeing by another task. It is the caller's responsibility to free the
> * extra reference for shared policies.
> */
> -static struct mempolicy *get_vma_policy(struct vm_area_struct *vma,
> - unsigned long addr)
> +struct mempolicy *get_vma_policy(struct vm_area_struct *vma,
> + unsigned long addr, int order, pgoff_t *ilx)
> {
> - struct mempolicy *pol = __get_vma_policy(vma, addr);
> + struct mempolicy *pol;
>
> + pol = __get_vma_policy(vma, addr, ilx);
> if (!pol)
> pol = get_task_policy(current);
> -
> + if (pol->mode == MPOL_INTERLEAVE) {
> + *ilx += vma->vm_pgoff >> order;
> + *ilx += (addr - vma->vm_start) >> (PAGE_SHIFT + order);
> + }
> return pol;
> }
>
> @@ -1743,8 +1738,9 @@ bool vma_policy_mof(struct vm_area_struct *vma)
>
> if (vma->vm_ops && vma->vm_ops->get_policy) {
> bool ret = false;
> + pgoff_t ilx; /* ignored here */
>
> - pol = vma->vm_ops->get_policy(vma, vma->vm_start);
> + pol = vma->vm_ops->get_policy(vma, vma->vm_start, &ilx);
> if (pol && (pol->flags & MPOL_F_MOF))
> ret = true;
> mpol_cond_put(pol);
> @@ -1779,54 +1775,6 @@ bool apply_policy_zone(struct mempolicy *policy, enum zone_type zone)
> return zone >= dynamic_policy_zone;
> }
>
> -/*
> - * Return a nodemask representing a mempolicy for filtering nodes for
> - * page allocation
> - */
> -nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy)
> -{
> - int mode = policy->mode;
> -
> - /* Lower zones don't get a nodemask applied for MPOL_BIND */
> - if (unlikely(mode == MPOL_BIND) &&
> - apply_policy_zone(policy, gfp_zone(gfp)) &&
> - cpuset_nodemask_valid_mems_allowed(&policy->nodes))
> - return &policy->nodes;
> -
> - if (mode == MPOL_PREFERRED_MANY)
> - return &policy->nodes;
> -
> - return NULL;
> -}
> -
> -/*
> - * Return the preferred node id for 'prefer' mempolicy, and return
> - * the given id for all other policies.
> - *
> - * policy_node() is always coupled with policy_nodemask(), which
> - * secures the nodemask limit for 'bind' and 'prefer-many' policy.
> - */
> -static int policy_node(gfp_t gfp, struct mempolicy *policy, int nid)
> -{
> - if (policy->mode == MPOL_PREFERRED) {
> - nid = first_node(policy->nodes);
> - } else {
> - /*
> - * __GFP_THISNODE shouldn't even be used with the bind policy
> - * because we might easily break the expectation to stay on the
> - * requested node and not break the policy.
> - */
> - WARN_ON_ONCE(policy->mode == MPOL_BIND && (gfp & __GFP_THISNODE));
> - }
> -
> - if ((policy->mode == MPOL_BIND ||
> - policy->mode == MPOL_PREFERRED_MANY) &&
> - policy->home_node != NUMA_NO_NODE)
> - return policy->home_node;
> -
> - return nid;
> -}
> -
> /* Do dynamic interleaving for a process */
> static unsigned int interleave_nodes(struct mempolicy *policy)
> {
> @@ -1886,11 +1834,11 @@ unsigned int mempolicy_slab_node(void)
> }
>
> /*
> - * Do static interleaving for a VMA with known offset @n. Returns the n'th
> - * node in pol->nodes (starting from n=0), wrapping around if n exceeds the
> - * number of present nodes.
> + * Do static interleaving for interleave index @ilx. Returns the ilx'th
> + * node in pol->nodes (starting from ilx=0), wrapping around if ilx
> + * exceeds the number of present nodes.
> */
> -static unsigned offset_il_node(struct mempolicy *pol, unsigned long n)
> +static unsigned int interleave_nid(struct mempolicy *pol, pgoff_t ilx)
> {
> nodemask_t nodemask = pol->nodes;
> unsigned int target, nnodes;
> @@ -1908,33 +1856,54 @@ static unsigned offset_il_node(struct mempolicy *pol, unsigned long n)
> nnodes = nodes_weight(nodemask);
> if (!nnodes)
> return numa_node_id();
> - target = (unsigned int)n % nnodes;
> + target = ilx % nnodes;
> nid = first_node(nodemask);
> for (i = 0; i < target; i++)
> nid = next_node(nid, nodemask);
> return nid;
> }
>
> -/* Determine a node number for interleave */
> -static inline unsigned interleave_nid(struct mempolicy *pol,
> - struct vm_area_struct *vma, unsigned long addr, int shift)
> +/*
> + * Return a nodemask representing a mempolicy for filtering nodes for
> + * page allocation, together with preferred node id (or the input node id).
> + */
> +static nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *pol,
> + pgoff_t ilx, int *nid)
> {
> - if (vma) {
> - unsigned long off;
> + nodemask_t *nodemask = NULL;
>
> + switch (pol->mode) {
> + case MPOL_PREFERRED:
> + /* Override input node id */
> + *nid = first_node(pol->nodes);
> + break;
> + case MPOL_PREFERRED_MANY:
> + nodemask = &pol->nodes;
> + if (pol->home_node != NUMA_NO_NODE)
> + *nid = pol->home_node;
> + break;
> + case MPOL_BIND:
> + /* Restrict to nodemask (but not on lower zones) */
> + if (apply_policy_zone(pol, gfp_zone(gfp)) &&
> + cpuset_nodemask_valid_mems_allowed(&pol->nodes))
> + nodemask = &pol->nodes;
> + if (pol->home_node != NUMA_NO_NODE)
> + *nid = pol->home_node;
> /*
> - * for small pages, there is no difference between
> - * shift and PAGE_SHIFT, so the bit-shift is safe.
> - * for huge pages, since vm_pgoff is in units of small
> - * pages, we need to shift off the always 0 bits to get
> - * a useful offset.
> + * __GFP_THISNODE shouldn't even be used with the bind policy
> + * because we might easily break the expectation to stay on the
> + * requested node and not break the policy.
> */
> - BUG_ON(shift < PAGE_SHIFT);
> - off = vma->vm_pgoff >> (shift - PAGE_SHIFT);
> - off += (addr - vma->vm_start) >> shift;
> - return offset_il_node(pol, off);
> - } else
> - return interleave_nodes(pol);
> + WARN_ON_ONCE(gfp & __GFP_THISNODE);
> + break;
> + case MPOL_INTERLEAVE:
> + /* Override input node id */
> + *nid = (ilx == NO_INTERLEAVE_INDEX) ?
> + interleave_nodes(pol) : interleave_nid(pol, ilx);
> + break;
> + }
> +
> + return nodemask;
> }
>
> #ifdef CONFIG_HUGETLBFS
> @@ -1950,27 +1919,16 @@ static inline unsigned interleave_nid(struct mempolicy *pol,
> * to the struct mempolicy for conditional unref after allocation.
> * If the effective policy is 'bind' or 'prefer-many', returns a pointer
> * to the mempolicy's @nodemask for filtering the zonelist.
> - *
> - * Must be protected by read_mems_allowed_begin()
> */
> int huge_node(struct vm_area_struct *vma, unsigned long addr, gfp_t gfp_flags,
> - struct mempolicy **mpol, nodemask_t **nodemask)
> + struct mempolicy **mpol, nodemask_t **nodemask)
> {
> + pgoff_t ilx;
> int nid;
> - int mode;
>
> - *mpol = get_vma_policy(vma, addr);
> - *nodemask = NULL;
> - mode = (*mpol)->mode;
> -
> - if (unlikely(mode == MPOL_INTERLEAVE)) {
> - nid = interleave_nid(*mpol, vma, addr,
> - huge_page_shift(hstate_vma(vma)));
> - } else {
> - nid = policy_node(gfp_flags, *mpol, numa_node_id());
> - if (mode == MPOL_BIND || mode == MPOL_PREFERRED_MANY)
> - *nodemask = &(*mpol)->nodes;
> - }
> + nid = numa_node_id();
> + *mpol = get_vma_policy(vma, addr, hstate_vma(vma)->order, &ilx);
> + *nodemask = policy_nodemask(gfp_flags, *mpol, ilx, &nid);
> return nid;
> }
>
> @@ -2048,27 +2006,8 @@ bool mempolicy_in_oom_domain(struct task_struct *tsk,
> return ret;
> }
>
> -/* Allocate a page in interleaved policy.
> - Own path because it needs to do special accounting. */
> -static struct page *alloc_page_interleave(gfp_t gfp, unsigned order,
> - unsigned nid)
> -{
> - struct page *page;
> -
> - page = __alloc_pages(gfp, order, nid, NULL);
> - /* skip NUMA_INTERLEAVE_HIT counter update if numa stats is disabled */
> - if (!static_branch_likely(&vm_numa_stat_key))
> - return page;
> - if (page && page_to_nid(page) == nid) {
> - preempt_disable();
> - __count_numa_event(page_zone(page), NUMA_INTERLEAVE_HIT);
> - preempt_enable();
> - }
> - return page;
> -}
> -
> static struct page *alloc_pages_preferred_many(gfp_t gfp, unsigned int order,
> - int nid, struct mempolicy *pol)
> + int nid, nodemask_t *nodemask)
> {
> struct page *page;
> gfp_t preferred_gfp;
> @@ -2081,7 +2020,7 @@ static struct page *alloc_pages_preferred_many(gfp_t gfp, unsigned int order,
> */
> preferred_gfp = gfp | __GFP_NOWARN;
> preferred_gfp &= ~(__GFP_DIRECT_RECLAIM | __GFP_NOFAIL);
> - page = __alloc_pages(preferred_gfp, order, nid, &pol->nodes);
> + page = __alloc_pages(preferred_gfp, order, nid, nodemask);
> if (!page)
> page = __alloc_pages(gfp, order, nid, NULL);
>
> @@ -2089,55 +2028,29 @@ static struct page *alloc_pages_preferred_many(gfp_t gfp, unsigned int order,
> }
>
> /**
> - * vma_alloc_folio - Allocate a folio for a VMA.
> + * alloc_pages_mpol - Allocate pages according to NUMA mempolicy.
> * @gfp: GFP flags.
> - * @order: Order of the folio.
> - * @vma: Pointer to VMA or NULL if not available.
> - * @addr: Virtual address of the allocation. Must be inside @vma.
> - * @hugepage: For hugepages try only the preferred node if possible.
> + * @order: Order of the page allocation.
> + * @pol: Pointer to the NUMA mempolicy.
> + * @ilx: Index for interleave mempolicy (also distinguishes alloc_pages()).
> + * @nid: Preferred node (usually numa_node_id() but @mpol may override it).
> *
> - * Allocate a folio for a specific address in @vma, using the appropriate
> - * NUMA policy. When @vma is not NULL the caller must hold the mmap_lock
> - * of the mm_struct of the VMA to prevent it from going away. Should be
> - * used for all allocations for folios that will be mapped into user space.
> - *
> - * Return: The folio on success or NULL if allocation fails.
> + * Return: The page on success or NULL if allocation fails.
> */
> -struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
> - unsigned long addr, bool hugepage)
> +struct page *alloc_pages_mpol(gfp_t gfp, unsigned int order,
> + struct mempolicy *pol, pgoff_t ilx, int nid)
> {
> - struct mempolicy *pol;
> - int node = numa_node_id();
> - struct folio *folio;
> - int preferred_nid;
> - nodemask_t *nmask;
> + nodemask_t *nodemask;
> + struct page *page;
>
> - pol = get_vma_policy(vma, addr);
> + nodemask = policy_nodemask(gfp, pol, ilx, &nid);
>
> - if (pol->mode == MPOL_INTERLEAVE) {
> - struct page *page;
> - unsigned nid;
> -
> - nid = interleave_nid(pol, vma, addr, PAGE_SHIFT + order);
> - mpol_cond_put(pol);
> - gfp |= __GFP_COMP;
> - page = alloc_page_interleave(gfp, order, nid);
> - return page_rmappable_folio(page);
> - }
> -
> - if (pol->mode == MPOL_PREFERRED_MANY) {
> - struct page *page;
> -
> - node = policy_node(gfp, pol, node);
> - gfp |= __GFP_COMP;
> - page = alloc_pages_preferred_many(gfp, order, node, pol);
> - mpol_cond_put(pol);
> - return page_rmappable_folio(page);
> - }
> -
> - if (unlikely(IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && hugepage)) {
> - int hpage_node = node;
> + if (pol->mode == MPOL_PREFERRED_MANY)
> + return alloc_pages_preferred_many(gfp, order, nid, nodemask);
>
> + if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
> + /* filter "hugepage" allocation, unless from alloc_pages() */
> + order == HPAGE_PMD_ORDER && ilx != NO_INTERLEAVE_INDEX) {
> /*
> * For hugepage allocation and non-interleave policy which
> * allows the current node (or other explicitly preferred
> @@ -2148,39 +2061,68 @@ struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
> * If the policy is interleave or does not allow the current
> * node in its nodemask, we allocate the standard way.
> */
> - if (pol->mode == MPOL_PREFERRED)
> - hpage_node = first_node(pol->nodes);
> -
> - nmask = policy_nodemask(gfp, pol);
> - if (!nmask || node_isset(hpage_node, *nmask)) {
> - mpol_cond_put(pol);
> + if (pol->mode != MPOL_INTERLEAVE &&
> + (!nodemask || node_isset(nid, *nodemask))) {
> /*
> * First, try to allocate THP only on local node, but
> * don't reclaim unnecessarily, just compact.
> */
> - folio = __folio_alloc_node(gfp | __GFP_THISNODE |
> - __GFP_NORETRY, order, hpage_node);
> -
> + page = __alloc_pages_node(nid,
> + gfp | __GFP_THISNODE | __GFP_NORETRY, order);
> + if (page || !(gfp & __GFP_DIRECT_RECLAIM))
> + return page;
> /*
> * If hugepage allocations are configured to always
> * synchronous compact or the vma has been madvised
> * to prefer hugepage backing, retry allowing remote
> * memory with both reclaim and compact as well.
> */
> - if (!folio && (gfp & __GFP_DIRECT_RECLAIM))
> - folio = __folio_alloc(gfp, order, hpage_node,
> - nmask);
> -
> - goto out;
> }
> }
>
> - nmask = policy_nodemask(gfp, pol);
> - preferred_nid = policy_node(gfp, pol, node);
> - folio = __folio_alloc(gfp, order, preferred_nid, nmask);
> + page = __alloc_pages(gfp, order, nid, nodemask);
> +
> + if (unlikely(pol->mode == MPOL_INTERLEAVE) && page) {
> + /* skip NUMA_INTERLEAVE_HIT update if numa stats is disabled */
> + if (static_branch_likely(&vm_numa_stat_key) &&
> + page_to_nid(page) == nid) {
> + preempt_disable();
> + __count_numa_event(page_zone(page), NUMA_INTERLEAVE_HIT);
> + preempt_enable();
> + }
> + }
> +
> + return page;
> +}
> +
> +/**
> + * vma_alloc_folio - Allocate a folio for a VMA.
> + * @gfp: GFP flags.
> + * @order: Order of the folio.
> + * @vma: Pointer to VMA.
> + * @addr: Virtual address of the allocation. Must be inside @vma.
> + * @hugepage: Unused (was: For hugepages try only preferred node if possible).
> + *
> + * Allocate a folio for a specific address in @vma, using the appropriate
> + * NUMA policy. The caller must hold the mmap_lock of the mm_struct of the
> + * VMA to prevent it from going away. Should be used for all allocations
> + * for folios that will be mapped into user space, excepting hugetlbfs, and
> + * excepting where direct use of alloc_pages_mpol() is more appropriate.
> + *
> + * Return: The folio on success or NULL if allocation fails.
> + */
> +struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
> + unsigned long addr, bool hugepage)
> +{
> + struct mempolicy *pol;
> + pgoff_t ilx;
> + struct page *page;
> +
> + pol = get_vma_policy(vma, addr, order, &ilx);
> + page = alloc_pages_mpol(gfp | __GFP_COMP, order,
> + pol, ilx, numa_node_id());
> mpol_cond_put(pol);
> -out:
> - return folio;
> + return page_rmappable_folio(page);
> }
> EXPORT_SYMBOL(vma_alloc_folio);
>
> @@ -2198,33 +2140,23 @@ struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
> * flags are used.
> * Return: The page on success or NULL if allocation fails.
> */
> -struct page *alloc_pages(gfp_t gfp, unsigned order)
> +struct page *alloc_pages(gfp_t gfp, unsigned int order)
> {
> struct mempolicy *pol = &default_policy;
> - struct page *page;
> -
> - if (!in_interrupt() && !(gfp & __GFP_THISNODE))
> - pol = get_task_policy(current);
>
> /*
> * No reference counting needed for current->mempolicy
> * nor system default_policy
> */
> - if (pol->mode == MPOL_INTERLEAVE)
> - page = alloc_page_interleave(gfp, order, interleave_nodes(pol));
> - else if (pol->mode == MPOL_PREFERRED_MANY)
> - page = alloc_pages_preferred_many(gfp, order,
> - policy_node(gfp, pol, numa_node_id()), pol);
> - else
> - page = __alloc_pages(gfp, order,
> - policy_node(gfp, pol, numa_node_id()),
> - policy_nodemask(gfp, pol));
> + if (!in_interrupt() && !(gfp & __GFP_THISNODE))
> + pol = get_task_policy(current);
>
> - return page;
> + return alloc_pages_mpol(gfp, order,
> + pol, NO_INTERLEAVE_INDEX, numa_node_id());
> }
> EXPORT_SYMBOL(alloc_pages);
>
> -struct folio *folio_alloc(gfp_t gfp, unsigned order)
> +struct folio *folio_alloc(gfp_t gfp, unsigned int order)
> {
> return page_rmappable_folio(alloc_pages(gfp | __GFP_COMP, order));
> }
> @@ -2295,6 +2227,8 @@ unsigned long alloc_pages_bulk_array_mempolicy(gfp_t gfp,
> unsigned long nr_pages, struct page **page_array)
> {
> struct mempolicy *pol = &default_policy;
> + nodemask_t *nodemask;
> + int nid;
>
> if (!in_interrupt() && !(gfp & __GFP_THISNODE))
> pol = get_task_policy(current);
> @@ -2307,9 +2241,10 @@ unsigned long alloc_pages_bulk_array_mempolicy(gfp_t gfp,
> return alloc_pages_bulk_array_preferred_many(gfp,
> numa_node_id(), pol, nr_pages, page_array);
>
> - return __alloc_pages_bulk(gfp, policy_node(gfp, pol, numa_node_id()),
> - policy_nodemask(gfp, pol), nr_pages, NULL,
> - page_array);
> + nid = numa_node_id();
> + nodemask = policy_nodemask(gfp, pol, NO_INTERLEAVE_INDEX, &nid);
> + return __alloc_pages_bulk(gfp, nid, nodemask,
> + nr_pages, NULL, page_array);
> }
>
> int vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst)
> @@ -2496,23 +2431,21 @@ int mpol_misplaced(struct folio *folio, struct vm_area_struct *vma,
> unsigned long addr)
> {
> struct mempolicy *pol;
> + pgoff_t ilx;
> struct zoneref *z;
> int curnid = folio_nid(folio);
> - unsigned long pgoff;
> int thiscpu = raw_smp_processor_id();
> int thisnid = cpu_to_node(thiscpu);
> int polnid = NUMA_NO_NODE;
> int ret = NUMA_NO_NODE;
>
> - pol = get_vma_policy(vma, addr);
> + pol = get_vma_policy(vma, addr, folio_order(folio), &ilx);
> if (!(pol->flags & MPOL_F_MOF))
> goto out;
>
> switch (pol->mode) {
> case MPOL_INTERLEAVE:
> - pgoff = vma->vm_pgoff;
> - pgoff += (addr - vma->vm_start) >> PAGE_SHIFT;
> - polnid = offset_il_node(pol, pgoff);
> + polnid = interleave_nid(pol, ilx);
> break;
>
> case MPOL_PREFERRED:
> diff --git a/mm/shmem.c b/mm/shmem.c
> index bcbe9db..a314a25 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1544,38 +1544,20 @@ static inline struct mempolicy *shmem_get_sbmpol(struct shmem_sb_info *sbinfo)
> return NULL;
> }
> #endif /* CONFIG_NUMA && CONFIG_TMPFS */
> -#ifndef CONFIG_NUMA
> -#define vm_policy vm_private_data
> -#endif
>
> -static void shmem_pseudo_vma_init(struct vm_area_struct *vma,
> - struct shmem_inode_info *info, pgoff_t index)
> -{
> - /* Create a pseudo vma that just contains the policy */
> - vma_init(vma, NULL);
> - /* Bias interleave by inode number to distribute better across nodes */
> - vma->vm_pgoff = index + info->vfs_inode.i_ino;
> - vma->vm_policy = mpol_shared_policy_lookup(&info->policy, index);
> -}
> +static struct mempolicy *shmem_get_pgoff_policy(struct shmem_inode_info *info,
> + pgoff_t index, unsigned int order, pgoff_t *ilx);
>
> -static void shmem_pseudo_vma_destroy(struct vm_area_struct *vma)
> -{
> - /* Drop reference taken by mpol_shared_policy_lookup() */
> - mpol_cond_put(vma->vm_policy);
> -}
> -
> -static struct folio *shmem_swapin(swp_entry_t swap, gfp_t gfp,
> +static struct folio *shmem_swapin_cluster(swp_entry_t swap, gfp_t gfp,
> struct shmem_inode_info *info, pgoff_t index)
> {
> - struct vm_area_struct pvma;
> + struct mempolicy *mpol;
> + pgoff_t ilx;
> struct page *page;
> - struct vm_fault vmf = {
> - .vma = &pvma,
> - };
>
> - shmem_pseudo_vma_init(&pvma, info, index);
> - page = swap_cluster_readahead(swap, gfp, &vmf);
> - shmem_pseudo_vma_destroy(&pvma);
> + mpol = shmem_get_pgoff_policy(info, index, 0, &ilx);
> + page = swap_cluster_readahead(swap, gfp, mpol, ilx);
> + mpol_cond_put(mpol);
>
> if (!page)
> return NULL;
> @@ -1609,27 +1591,29 @@ static gfp_t limit_gfp_mask(gfp_t huge_gfp, gfp_t limit_gfp)
> static struct folio *shmem_alloc_hugefolio(gfp_t gfp,
> struct shmem_inode_info *info, pgoff_t index)
> {
> - struct vm_area_struct pvma;
> - struct folio *folio;
> + struct mempolicy *mpol;
> + pgoff_t ilx;
> + struct page *page;
>
> - shmem_pseudo_vma_init(&pvma, info, index);
> - folio = vma_alloc_folio(gfp, HPAGE_PMD_ORDER, &pvma, 0, true);
> - shmem_pseudo_vma_destroy(&pvma);
> + mpol = shmem_get_pgoff_policy(info, index, HPAGE_PMD_ORDER, &ilx);
> + page = alloc_pages_mpol(gfp, HPAGE_PMD_ORDER, mpol, ilx, numa_node_id());
> + mpol_cond_put(mpol);
>
> - return folio;
> + return page_rmappable_folio(page);
> }
>
> static struct folio *shmem_alloc_folio(gfp_t gfp,
> struct shmem_inode_info *info, pgoff_t index)
> {
> - struct vm_area_struct pvma;
> - struct folio *folio;
> + struct mempolicy *mpol;
> + pgoff_t ilx;
> + struct page *page;
>
> - shmem_pseudo_vma_init(&pvma, info, index);
> - folio = vma_alloc_folio(gfp, 0, &pvma, 0, false);
> - shmem_pseudo_vma_destroy(&pvma);
> + mpol = shmem_get_pgoff_policy(info, index, 0, &ilx);
> + page = alloc_pages_mpol(gfp, 0, mpol, ilx, numa_node_id());
> + mpol_cond_put(mpol);
>
> - return folio;
> + return (struct folio *)page;
> }
>
> static struct folio *shmem_alloc_and_add_folio(gfp_t gfp,
> @@ -1883,7 +1867,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
> count_memcg_event_mm(fault_mm, PGMAJFAULT);
> }
> /* Here we actually start the io */
> - folio = shmem_swapin(swap, gfp, info, index);
> + folio = shmem_swapin_cluster(swap, gfp, info, index);
> if (!folio) {
> error = -ENOMEM;
> goto failed;
> @@ -2334,15 +2318,41 @@ static int shmem_set_policy(struct vm_area_struct *vma, struct mempolicy *mpol)
> }
>
> static struct mempolicy *shmem_get_policy(struct vm_area_struct *vma,
> - unsigned long addr)
> + unsigned long addr, pgoff_t *ilx)
> {
> struct inode *inode = file_inode(vma->vm_file);
> pgoff_t index;
>
> + /*
> + * Bias interleave by inode number to distribute better across nodes;
> + * but this interface is independent of which page order is used, so
> + * supplies only that bias, letting caller apply the offset (adjusted
> + * by page order, as in shmem_get_pgoff_policy() and get_vma_policy()).
> + */
> + *ilx = inode->i_ino;
> index = ((addr - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
> return mpol_shared_policy_lookup(&SHMEM_I(inode)->policy, index);
> }
> -#endif
> +
> +static struct mempolicy *shmem_get_pgoff_policy(struct shmem_inode_info *info,
> + pgoff_t index, unsigned int order, pgoff_t *ilx)
> +{
> + struct mempolicy *mpol;
> +
> + /* Bias interleave by inode number to distribute better across nodes */
> + *ilx = info->vfs_inode.i_ino + (index >> order);
> +
> + mpol = mpol_shared_policy_lookup(&info->policy, index);
> + return mpol ? mpol : get_task_policy(current);
> +}
> +#else
> +static struct mempolicy *shmem_get_pgoff_policy(struct shmem_inode_info *info,
> + pgoff_t index, unsigned int order, pgoff_t *ilx)
> +{
> + *ilx = 0;
> + return NULL;
> +}
> +#endif /* CONFIG_NUMA */
>
> int shmem_lock(struct file *file, int lock, struct ucounts *ucounts)
> {
> diff --git a/mm/swap.h b/mm/swap.h
> index 8a3c7a0..73c332e 100644
> --- a/mm/swap.h
> +++ b/mm/swap.h
> @@ -2,6 +2,8 @@
> #ifndef _MM_SWAP_H
> #define _MM_SWAP_H
>
> +struct mempolicy;
> +
> #ifdef CONFIG_SWAP
> #include <linux/blk_types.h> /* for bio_end_io_t */
>
> @@ -48,11 +50,10 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> unsigned long addr,
> struct swap_iocb **plug);
> struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> - struct vm_area_struct *vma,
> - unsigned long addr,
> + struct mempolicy *mpol, pgoff_t ilx,
> bool *new_page_allocated);
> struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
> - struct vm_fault *vmf);
> + struct mempolicy *mpol, pgoff_t ilx);
> struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
> struct vm_fault *vmf);
>
> @@ -80,7 +81,7 @@ static inline void show_swap_cache_info(void)
> }
>
> static inline struct page *swap_cluster_readahead(swp_entry_t entry,
> - gfp_t gfp_mask, struct vm_fault *vmf)
> + gfp_t gfp_mask, struct mempolicy *mpol, pgoff_t ilx)
> {
> return NULL;
> }
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index b3b14bd..a421f01 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -10,6 +10,7 @@
> #include <linux/mm.h>
> #include <linux/gfp.h>
> #include <linux/kernel_stat.h>
> +#include <linux/mempolicy.h>
> #include <linux/swap.h>
> #include <linux/swapops.h>
> #include <linux/init.h>
> @@ -410,8 +411,8 @@ struct folio *filemap_get_incore_folio(struct address_space *mapping,
> }
>
> struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> - struct vm_area_struct *vma, unsigned long addr,
> - bool *new_page_allocated)
> + struct mempolicy *mpol, pgoff_t ilx,
> + bool *new_page_allocated)
> {
> struct swap_info_struct *si;
> struct folio *folio;
> @@ -453,7 +454,8 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> * before marking swap_map SWAP_HAS_CACHE, when -EEXIST will
> * cause any racers to loop around until we add it to cache.
> */
> - folio = vma_alloc_folio(gfp_mask, 0, vma, addr, false);
> + folio = (struct folio *)alloc_pages_mpol(gfp_mask, 0,
> + mpol, ilx, numa_node_id());
> if (!folio)
> goto fail_put_swap;
>
> @@ -528,14 +530,19 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
> struct vm_area_struct *vma,
> unsigned long addr, struct swap_iocb **plug)
> {
> - bool page_was_allocated;
> - struct page *retpage = __read_swap_cache_async(entry, gfp_mask,
> - vma, addr, &page_was_allocated);
> + bool page_allocated;
> + struct mempolicy *mpol;
> + pgoff_t ilx;
> + struct page *page;
>
> - if (page_was_allocated)
> - swap_readpage(retpage, false, plug);
> + mpol = get_vma_policy(vma, addr, 0, &ilx);
> + page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
> + &page_allocated);
> + mpol_cond_put(mpol);
>
> - return retpage;
> + if (page_allocated)
> + swap_readpage(page, false, plug);
> + return page;
> }
>
> static unsigned int __swapin_nr_pages(unsigned long prev_offset,
> @@ -603,7 +610,8 @@ static unsigned long swapin_nr_pages(unsigned long offset)
> * swap_cluster_readahead - swap in pages in hope we need them soon
> * @entry: swap entry of this memory
> * @gfp_mask: memory allocation flags
> - * @vmf: fault information
> + * @mpol: NUMA memory allocation policy to be applied
> + * @ilx: NUMA interleave index, for use only when MPOL_INTERLEAVE
> *
> * Returns the struct page for entry and addr, after queueing swapin.
> *
> @@ -612,13 +620,12 @@ static unsigned long swapin_nr_pages(unsigned long offset)
> * because it doesn't cost us any seek time. We also make sure to queue
> * the 'original' request together with the readahead ones...
> *
> - * This has been extended to use the NUMA policies from the mm triggering
> - * the readahead.
> - *
> - * Caller must hold read mmap_lock if vmf->vma is not NULL.
> + * Note: it is intentional that the same NUMA policy and interleave index
> + * are used for every page of the readahead: neighbouring pages on swap
> + * are fairly likely to have been swapped out from the same node.
> */
> struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
> - struct vm_fault *vmf)
> + struct mempolicy *mpol, pgoff_t ilx)
> {
> struct page *page;
> unsigned long entry_offset = swp_offset(entry);
> @@ -629,8 +636,6 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
> struct blk_plug plug;
> struct swap_iocb *splug = NULL;
> bool page_allocated;
> - struct vm_area_struct *vma = vmf->vma;
> - unsigned long addr = vmf->address;
>
> mask = swapin_nr_pages(offset) - 1;
> if (!mask)
> @@ -648,8 +653,8 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
> for (offset = start_offset; offset <= end_offset ; offset++) {
> /* Ok, do the async read-ahead now */
> page = __read_swap_cache_async(
> - swp_entry(swp_type(entry), offset),
> - gfp_mask, vma, addr, &page_allocated);
> + swp_entry(swp_type(entry), offset),
> + gfp_mask, mpol, ilx, &page_allocated);
> if (!page)
> continue;
> if (page_allocated) {
> @@ -663,11 +668,14 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
> }
> blk_finish_plug(&plug);
> swap_read_unplug(splug);
> -
> lru_add_drain(); /* Push any new pages onto the LRU now */
> skip:
> /* The page was likely read above, so no need for plugging here */
> - return read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
> + page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
> + &page_allocated);
> + if (unlikely(page_allocated))
> + swap_readpage(page, false, NULL);
> + return page;
> }
>
> int init_swap_address_space(unsigned int type, unsigned long nr_pages)
> @@ -765,8 +773,10 @@ static void swap_ra_info(struct vm_fault *vmf,
>
> /**
> * swap_vma_readahead - swap in pages in hope we need them soon
> - * @fentry: swap entry of this memory
> + * @targ_entry: swap entry of the targeted memory
> * @gfp_mask: memory allocation flags
> + * @mpol: NUMA memory allocation policy to be applied
> + * @targ_ilx: NUMA interleave index, for use only when MPOL_INTERLEAVE
> * @vmf: fault information
> *
> * Returns the struct page for entry and addr, after queueing swapin.
> @@ -777,16 +787,17 @@ static void swap_ra_info(struct vm_fault *vmf,
> * Caller must hold read mmap_lock if vmf->vma is not NULL.
> *
> */
> -static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
> +static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
> + struct mempolicy *mpol, pgoff_t targ_ilx,
> struct vm_fault *vmf)
> {
> struct blk_plug plug;
> struct swap_iocb *splug = NULL;
> - struct vm_area_struct *vma = vmf->vma;
> struct page *page;
> pte_t *pte = NULL, pentry;
> unsigned long addr;
> swp_entry_t entry;
> + pgoff_t ilx;
> unsigned int i;
> bool page_allocated;
> struct vma_swap_readahead ra_info = {
> @@ -798,9 +809,10 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
> goto skip;
>
> addr = vmf->address - (ra_info.offset * PAGE_SIZE);
> + ilx = targ_ilx - ra_info.offset;
>
> blk_start_plug(&plug);
> - for (i = 0; i < ra_info.nr_pte; i++, addr += PAGE_SIZE) {
> + for (i = 0; i < ra_info.nr_pte; i++, ilx++, addr += PAGE_SIZE) {
> if (!pte++) {
> pte = pte_offset_map(vmf->pmd, addr);
> if (!pte)
> @@ -814,8 +826,8 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
> continue;
> pte_unmap(pte);
> pte = NULL;
> - page = __read_swap_cache_async(entry, gfp_mask, vma,
> - addr, &page_allocated);
> + page = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
> + &page_allocated);
> if (!page)
> continue;
> if (page_allocated) {
> @@ -834,8 +846,11 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
> lru_add_drain();
> skip:
> /* The page was likely read above, so no need for plugging here */
> - return read_swap_cache_async(fentry, gfp_mask, vma, vmf->address,
> - NULL);
> + page = __read_swap_cache_async(targ_entry, gfp_mask, mpol, targ_ilx,
> + &page_allocated);
> + if (unlikely(page_allocated))
> + swap_readpage(page, false, NULL);
> + return page;
> }
>
> /**
> @@ -853,9 +868,16 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
> struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
> struct vm_fault *vmf)
> {
> - return swap_use_vma_readahead() ?
> - swap_vma_readahead(entry, gfp_mask, vmf) :
> - swap_cluster_readahead(entry, gfp_mask, vmf);
> + struct mempolicy *mpol;
> + pgoff_t ilx;
> + struct page *page;
> +
> + mpol = get_vma_policy(vmf->vma, vmf->address, 0, &ilx);
> + page = swap_use_vma_readahead() ?
> + swap_vma_readahead(entry, gfp_mask, mpol, ilx, vmf) :
> + swap_cluster_readahead(entry, gfp_mask, mpol, ilx);
> + mpol_cond_put(mpol);
> + return page;
> }
>
> #ifdef CONFIG_SYSFS
> --
> 1.8.4.5
>

2023-10-23 17:53:56

by Andrew Morton

[permalink] [raw]
Subject: Re: [PATCH v3 10/12] mempolicy: alloc_pages_mpol() for NUMA policy without vma

On Mon, 23 Oct 2023 18:53:26 +0200 domenico cerasuolo <[email protected]> wrote:

> > Rebased to mm.git's current mm-stable, to resolve with removal of
> > vma_policy() from include/linux/mempolicy.h, and temporary omission
> > of Nhat's ZSWAP mods from mm/swap_state.c: no other changes.
>
> Hi Hugh,
>
> not sure if it's the rebase, but I don't see an update to
> __read_swap_cache_async invocation in zswap.c at line 1078. Shouldn't we pass a
> mempolicy there too?

No change needed. zswap_writeback_entry() was passing a NULL for arg
`vma' and it's now passing a NULL for arg `mpol'.

2023-10-23 18:11:01

by domenico cerasuolo

[permalink] [raw]
Subject: Re: [PATCH v3 10/12] mempolicy: alloc_pages_mpol() for NUMA policy without vma

Il giorno lun 23 ott 2023 alle ore 19:53 Andrew Morton
<[email protected]> ha scritto:
>
> On Mon, 23 Oct 2023 18:53:26 +0200 domenico cerasuolo <[email protected]> wrote:
>
> > > Rebased to mm.git's current mm-stable, to resolve with removal of
> > > vma_policy() from include/linux/mempolicy.h, and temporary omission
> > > of Nhat's ZSWAP mods from mm/swap_state.c: no other changes.
> >
> > Hi Hugh,
> >
> > not sure if it's the rebase, but I don't see an update to
> > __read_swap_cache_async invocation in zswap.c at line 1078. Shouldn't we pass a
> > mempolicy there too?
>
> No change needed. zswap_writeback_entry() was passing a NULL for arg
> `vma' and it's now passing a NULL for arg `mpol'.

Problem is that alloc_pages_mpol is dereferencing mpol, when I test the zswap
writeback at 397148729f21edcf700ecb2a01749dbce955d09e it crashes, not sure if
I'm missing something.

>

2023-10-23 18:35:01

by Zi Yan

[permalink] [raw]
Subject: Re: [PATCH v3 10/12] mempolicy: alloc_pages_mpol() for NUMA policy without vma

On 19 Oct 2023, at 16:39, Hugh Dickins wrote:

> Shrink shmem's stack usage by eliminating the pseudo-vma from its folio
> allocation. alloc_pages_mpol(gfp, order, pol, ilx, nid) becomes the
> principal actor for passing mempolicy choice down to __alloc_pages(),
> rather than vma_alloc_folio(gfp, order, vma, addr, hugepage).
>
> vma_alloc_folio() and alloc_pages() remain, but as wrappers around
> alloc_pages_mpol(). alloc_pages_bulk_*() untouched, except to provide the
> additional args to policy_nodemask(), which subsumes policy_node().
> Cleanup throughout, cutting out some unhelpful "helpers".
>
> It would all be much simpler without MPOL_INTERLEAVE, but that adds a
> dynamic to the constant mpol: complicated by v3.6 commit 09c231cb8bfd
> ("tmpfs: distribute interleave better across nodes"), which added ino bias
> to the interleave, hidden from mm/mempolicy.c until this commit.
>
> Hence "ilx" throughout, the "interleave index". Originally I thought it
> could be done just with nid, but that's wrong: the nodemask may come from
> the shared policy layer below a shmem vma, or it may come from the task
> layer above a shmem vma; and without the final nodemask then nodeid cannot
> be decided. And how ilx is applied depends also on page order.
>
> The interleave index is almost always irrelevant unless MPOL_INTERLEAVE:
> with one exception in alloc_pages_mpol(), where the NO_INTERLEAVE_INDEX
> passed down from vma-less alloc_pages() is also used as hint not to use
> THP-style hugepage allocation - to avoid the overhead of a hugepage arg
> (though I don't understand why we never just added a GFP bit for THP - if
> it actually needs a different allocation strategy from other pages of the
> same order). vma_alloc_folio() still carries its hugepage arg here, but
> it is not used, and should be removed when agreed.
>
> get_vma_policy() no longer allows a NULL vma: over time I believe we've
> eradicated all the places which used to need it e.g. swapoff and madvise
> used to pass NULL vma to read_swap_cache_async(), but now know the vma.
>
> Link: https://lkml.kernel.org/r/[email protected]
> Signed-off-by: Hugh Dickins <[email protected]>
> Cc: Andi Kleen <[email protected]>
> Cc: Christoph Lameter <[email protected]>
> Cc: David Hildenbrand <[email protected]>
> Cc: Greg Kroah-Hartman <[email protected]>
> Cc: Huang Ying <[email protected]>
> Cc: Kefeng Wang <[email protected]>
> Cc: Matthew Wilcox (Oracle) <[email protected]>
> Cc: Mel Gorman <[email protected]>
> Cc: Michal Hocko <[email protected]>
> Cc: Mike Kravetz <[email protected]>
> Cc: Nhat Pham <[email protected]>
> Cc: Sidhartha Kumar <[email protected]>
> Cc: Suren Baghdasaryan <[email protected]>
> Cc: Tejun Heo <[email protected]>
> Cc: Vishal Moola (Oracle) <[email protected]>
> Cc: Yang Shi <[email protected]>
> Cc: Yosry Ahmed <[email protected]>
> ---
> Rebased to mm.git's current mm-stable, to resolve with removal of
> vma_policy() from include/linux/mempolicy.h, and temporary omission
> of Nhat's ZSWAP mods from mm/swap_state.c: no other changes.
>
> git cherry-pick 800caf44af25^..237d4ce921f0 # applies mm-unstable's 01-09
> then apply this "mempolicy: alloc_pages_mpol() for NUMA policy without vma"
> git cherry-pick e4fb3362b782^..ec6412928b8e # applies mm-unstable's 11-12
>
> fs/proc/task_mmu.c | 5 +-
> include/linux/gfp.h | 10 +-
> include/linux/mempolicy.h | 13 +-
> include/linux/mm.h | 2 +-
> ipc/shm.c | 21 +--
> mm/mempolicy.c | 383 +++++++++++++++++++---------------------------
> mm/shmem.c | 92 ++++++-----
> mm/swap.h | 9 +-
> mm/swap_state.c | 86 +++++++----
> 9 files changed, 299 insertions(+), 322 deletions(-)
>
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index 1d99450..66ae1c2 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -2673,8 +2673,9 @@ static int show_numa_map(struct seq_file *m, void *v)
> struct numa_maps *md = &numa_priv->md;
> struct file *file = vma->vm_file;
> struct mm_struct *mm = vma->vm_mm;
> - struct mempolicy *pol;
> char buffer[64];
> + struct mempolicy *pol;
> + pgoff_t ilx;
> int nid;
>
> if (!mm)
> @@ -2683,7 +2684,7 @@ static int show_numa_map(struct seq_file *m, void *v)
> /* Ensure we start with an empty set of numa_maps statistics. */
> memset(md, 0, sizeof(*md));
>
> - pol = __get_vma_policy(vma, vma->vm_start);
> + pol = __get_vma_policy(vma, vma->vm_start, &ilx);
> if (pol) {
> mpol_to_str(buffer, sizeof(buffer), pol);
> mpol_cond_put(pol);
> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> index 665f066..f74f8d0 100644
> --- a/include/linux/gfp.h
> +++ b/include/linux/gfp.h
> @@ -8,6 +8,7 @@
> #include <linux/topology.h>
>
> struct vm_area_struct;
> +struct mempolicy;
>
> /* Convert GFP flags to their corresponding migrate type */
> #define GFP_MOVABLE_MASK (__GFP_RECLAIMABLE|__GFP_MOVABLE)
> @@ -262,7 +263,9 @@ static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask,
>
> #ifdef CONFIG_NUMA
> struct page *alloc_pages(gfp_t gfp, unsigned int order);
> -struct folio *folio_alloc(gfp_t gfp, unsigned order);
> +struct page *alloc_pages_mpol(gfp_t gfp, unsigned int order,
> + struct mempolicy *mpol, pgoff_t ilx, int nid);
> +struct folio *folio_alloc(gfp_t gfp, unsigned int order);
> struct folio *vma_alloc_folio(gfp_t gfp, int order, struct vm_area_struct *vma,
> unsigned long addr, bool hugepage);
> #else
> @@ -270,6 +273,11 @@ static inline struct page *alloc_pages(gfp_t gfp_mask, unsigned int order)
> {
> return alloc_pages_node(numa_node_id(), gfp_mask, order);
> }
> +static inline struct page *alloc_pages_mpol(gfp_t gfp, unsigned int order,
> + struct mempolicy *mpol, pgoff_t ilx, int nid)
> +{
> + return alloc_pages(gfp, order);
> +}
> static inline struct folio *folio_alloc(gfp_t gfp, unsigned int order)
> {
> return __folio_alloc_node(gfp, order, numa_node_id());
> diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
> index acdb12f..2801d5b 100644
> --- a/include/linux/mempolicy.h
> +++ b/include/linux/mempolicy.h
> @@ -126,7 +126,9 @@ struct mempolicy *mpol_shared_policy_lookup(struct shared_policy *sp,
>
> struct mempolicy *get_task_policy(struct task_struct *p);
> struct mempolicy *__get_vma_policy(struct vm_area_struct *vma,
> - unsigned long addr);
> + unsigned long addr, pgoff_t *ilx);
> +struct mempolicy *get_vma_policy(struct vm_area_struct *vma,
> + unsigned long addr, int order, pgoff_t *ilx);
> bool vma_policy_mof(struct vm_area_struct *vma);
>
> extern void numa_default_policy(void);
> @@ -140,8 +142,6 @@ extern int huge_node(struct vm_area_struct *vma,
> extern bool init_nodemask_of_mempolicy(nodemask_t *mask);
> extern bool mempolicy_in_oom_domain(struct task_struct *tsk,
> const nodemask_t *mask);
> -extern nodemask_t *policy_nodemask(gfp_t gfp, struct mempolicy *policy);
> -
> extern unsigned int mempolicy_slab_node(void);
>
> extern enum zone_type policy_zone;
> @@ -213,6 +213,13 @@ static inline void mpol_free_shared_policy(struct shared_policy *sp)
> return NULL;
> }
>
> +static inline struct mempolicy *get_vma_policy(struct vm_area_struct *vma,
> + unsigned long addr, int order, pgoff_t *ilx)
> +{
> + *ilx = 0;
> + return NULL;
> +}
> +
> static inline int
> vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst)
> {
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 86e040e..b4d67a8 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -619,7 +619,7 @@ struct vm_operations_struct {
> * policy.
> */
> struct mempolicy *(*get_policy)(struct vm_area_struct *vma,
> - unsigned long addr);
> + unsigned long addr, pgoff_t *ilx);
> #endif
> /*
> * Called by vm_normal_page() for special PTEs to find the
> diff --git a/ipc/shm.c b/ipc/shm.c
> index 576a543..222aaf0 100644
> --- a/ipc/shm.c
> +++ b/ipc/shm.c
> @@ -562,30 +562,25 @@ static unsigned long shm_pagesize(struct vm_area_struct *vma)
> }
>
> #ifdef CONFIG_NUMA
> -static int shm_set_policy(struct vm_area_struct *vma, struct mempolicy *new)
> +static int shm_set_policy(struct vm_area_struct *vma, struct mempolicy *mpol)
> {
> - struct file *file = vma->vm_file;
> - struct shm_file_data *sfd = shm_file_data(file);
> + struct shm_file_data *sfd = shm_file_data(vma->vm_file);
> int err = 0;
>
> if (sfd->vm_ops->set_policy)
> - err = sfd->vm_ops->set_policy(vma, new);
> + err = sfd->vm_ops->set_policy(vma, mpol);
> return err;
> }
>
> static struct mempolicy *shm_get_policy(struct vm_area_struct *vma,
> - unsigned long addr)
> + unsigned long addr, pgoff_t *ilx)
> {
> - struct file *file = vma->vm_file;
> - struct shm_file_data *sfd = shm_file_data(file);
> - struct mempolicy *pol = NULL;
> + struct shm_file_data *sfd = shm_file_data(vma->vm_file);
> + struct mempolicy *mpol = vma->vm_policy;
>
> if (sfd->vm_ops->get_policy)
> - pol = sfd->vm_ops->get_policy(vma, addr);
> - else if (vma->vm_policy)
> - pol = vma->vm_policy;
> -
> - return pol;
> + mpol = sfd->vm_ops->get_policy(vma, addr, ilx);
> + return mpol;
> }
> #endif
>
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index 596d580..8df0503 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -114,6 +114,8 @@
> #define MPOL_MF_INVERT (MPOL_MF_INTERNAL << 1) /* Invert check for nodemask */
> #define MPOL_MF_WRLOCK (MPOL_MF_INTERNAL << 2) /* Write-lock walked vmas */
>
> +#define NO_INTERLEAVE_INDEX (-1UL)
> +
> static struct kmem_cache *policy_cache;
> static struct kmem_cache *sn_cache;
>
> @@ -898,6 +900,7 @@ static long do_get_mempolicy(int *policy, nodemask_t *nmask,
> }
>
> if (flags & MPOL_F_ADDR) {
> + pgoff_t ilx; /* ignored here */
> /*
> * Do NOT fall back to task policy if the
> * vma/shared policy at addr is NULL. We
> @@ -909,10 +912,7 @@ static long do_get_mempolicy(int *policy, nodemask_t *nmask,
> mmap_read_unlock(mm);
> return -EFAULT;
> }
> - if (vma->vm_ops && vma->vm_ops->get_policy)
> - pol = vma->vm_ops->get_policy(vma, addr);
> - else
> - pol = vma->vm_policy;
> + pol = __get_vma_policy(vma, addr, &ilx);
> } else if (addr)
> return -EINVAL;
>
> @@ -1170,6 +1170,15 @@ static struct folio *new_folio(struct folio *src, unsigned long start)
> break;
> }
>
> + /*
> + * __get_vma_policy() now expects a genuine non-NULL vma. Return NULL
> + * when the page can no longer be located in a vma: that is not ideal
> + * (migrate_pages() will give up early, presuming ENOMEM), but good
> + * enough to avoid a crash by syzkaller or concurrent holepunch.
> + */
> + if (!vma)
> + return NULL;
> +

How often would this happen? I just want to point out that ENOMEM can cause
src THPs or large folios to be split by migrate_pages().

> if (folio_test_hugetlb(src)) {
> return alloc_hugetlb_folio_vma(folio_hstate(src),
> vma, address);
> @@ -1178,9 +1187,6 @@ static struct folio *new_folio(struct folio *src, unsigned long start)
> if (folio_test_large(src))
> gfp = GFP_TRANSHUGE;
>
> - /*
> - * if !vma, vma_alloc_folio() will use task or system default policy
> - */
> return vma_alloc_folio(gfp, folio_order(src), vma, address,
> folio_test_large(src));
> }


--
Best Regards,
Yan, Zi


2023-10-23 19:06:21

by Johannes Weiner

[permalink] [raw]
Subject: Re: [PATCH v3 10/12] mempolicy: alloc_pages_mpol() for NUMA policy without vma

On Mon, Oct 23, 2023 at 08:10:32PM +0200, domenico cerasuolo wrote:
> Il giorno lun 23 ott 2023 alle ore 19:53 Andrew Morton
> <[email protected]> ha scritto:
> >
> > On Mon, 23 Oct 2023 18:53:26 +0200 domenico cerasuolo <[email protected]> wrote:
> >
> > > > Rebased to mm.git's current mm-stable, to resolve with removal of
> > > > vma_policy() from include/linux/mempolicy.h, and temporary omission
> > > > of Nhat's ZSWAP mods from mm/swap_state.c: no other changes.
> > >
> > > Hi Hugh,
> > >
> > > not sure if it's the rebase, but I don't see an update to
> > > __read_swap_cache_async invocation in zswap.c at line 1078. Shouldn't we pass a
> > > mempolicy there too?
> >
> > No change needed. zswap_writeback_entry() was passing a NULL for arg
> > `vma' and it's now passing a NULL for arg `mpol'.
>
> Problem is that alloc_pages_mpol is dereferencing mpol, when I test the zswap
> writeback at 397148729f21edcf700ecb2a01749dbce955d09e it crashes, not sure if
> I'm missing something.

I don't think you are. The NULL vma used to go to get_vma_policy(),
which fell back to

pol = get_task_policy(current);

Now the NULL pol gets passed to alloc_pages_mpol() directly, which
dereferences it. Oops.

I think Hugh's patch needs zswap to pass get_task_policy(current)
instead of NULL.
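
Something like this in zswap_writeback_entry() - a sketch only, not
tested, just translating the suggestion into code (whether 0 is still
the right interleave index to pass there is a separate question):

	struct mempolicy *mpol;

	/* task policy is never NULL and needs no extra refcount here */
	mpol = get_task_policy(current);
	page = __read_swap_cache_async(swpentry, GFP_KERNEL, mpol, 0,
				       &page_was_allocated);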

2023-10-23 19:49:22

by Hugh Dickins

[permalink] [raw]
Subject: Re: [PATCH v3 10/12] mempolicy: alloc_pages_mpol() for NUMA policy without vma

On Mon, 23 Oct 2023, Johannes Weiner wrote:
> On Mon, Oct 23, 2023 at 08:10:32PM +0200, domenico cerasuolo wrote:
> > Il giorno lun 23 ott 2023 alle ore 19:53 Andrew Morton
> > <[email protected]> ha scritto:
> > >
> > > On Mon, 23 Oct 2023 18:53:26 +0200 domenico cerasuolo <[email protected]> wrote:
> > >
> > > > > Rebased to mm.git's current mm-stable, to resolve with removal of
> > > > > vma_policy() from include/linux/mempolicy.h, and temporary omission
> > > > > of Nhat's ZSWAP mods from mm/swap_state.c: no other changes.
> > > >
> > > > Hi Hugh,
> > > >
> > > > not sure if it's the rebase, but I don't see an update to
> > > > __read_swap_cache_async invocation in zswap.c at line 1078. Shouldn't we pass a
> > > > mempolicy there too?
> > >
> > > No change needed. zswap_writeback_entry() was passing a NULL for arg
> > > `vma' and it's now passing a NULL for arg `mpol'.

Andrew's answer was indeed my thinking, and why none of us got a build error.

> >
> > Problem is that alloc_pages_mpol is dereferencing mpol, when I test the zswap
> > writeback at 397148729f21edcf700ecb2a01749dbce955d09e it crashes, not sure if
> > I'm missing something.
>
> I don't think you are. The NULL vma used to go to get_vma_policy(),
> which fell back to
>
> pol = get_task_policy(current);
>
> Now the NULL pol gets passed to alloc_pages_mpol() directly, which
> dereferences it. Oops.

Yes, I failed to think it through that far.

>
> I think Hugh's patch needs zswap to pass get_task_policy(current)
> instead of NULL.

That sounds like the likely fix, thank you Domenico, Andrew, Johannes.

I'll check it out and send a fix patch later today.

I don't know who runs that zswap_writeback_entry() code, but I presume
that task's mempolicy is unlikely to be relevant to the swap cache page
in question: but a whole lot better than oopsing, and will reproduce
the previous behaviour (and the assumption at this writeback point would
be that the page is unlikely to be reused after writeback anyway, so its
node is unimportant).

Hugh

2023-10-23 21:11:11

by Hugh Dickins

[permalink] [raw]
Subject: Re: [PATCH v3 10/12] mempolicy: alloc_pages_mpol() for NUMA policy without vma

On Mon, 23 Oct 2023, Zi Yan wrote:
> On 19 Oct 2023, at 16:39, Hugh Dickins wrote:
> > @@ -1170,6 +1170,15 @@ static struct folio *new_folio(struct folio *src, unsigned long start)
> > break;
> > }
> >
> > + /*
> > + * __get_vma_policy() now expects a genuine non-NULL vma. Return NULL
> > + * when the page can no longer be located in a vma: that is not ideal
> > + * (migrate_pages() will give up early, presuming ENOMEM), but good
> > + * enough to avoid a crash by syzkaller or concurrent holepunch.
> > + */
> > + if (!vma)
> > + return NULL;
> > +
>
> How often would this happen? I just want to point out that ENOMEM can cause
> src THPs or large folios to be split by migrate_pages().

The only case I know of it happening was when a file was mapped, then that
file truncated (cutting out the source page) before migrate_pages(&pagelist)
reached it - rather a syzbotty thing to do, not of great real-life concern.

I won't assert that's the only way: I've a ghost of a memory of another way,
that I can't quite resurface, from a long-ago version of queue_pages_range().

But in the end just didn't care enough about it, because this is really just
to save a bisection point from crashing - the possibility goes away in the
11/12 commit which follows this one, which takes VMA out of it altogether.

Hugh

2023-10-23 21:14:05

by Zi Yan

[permalink] [raw]
Subject: Re: [PATCH v3 10/12] mempolicy: alloc_pages_mpol() for NUMA policy without vma

On 23 Oct 2023, at 17:10, Hugh Dickins wrote:

> On Mon, 23 Oct 2023, Zi Yan wrote:
>> On 19 Oct 2023, at 16:39, Hugh Dickins wrote:
>>> @@ -1170,6 +1170,15 @@ static struct folio *new_folio(struct folio *src, unsigned long start)
>>> break;
>>> }
>>>
>>> + /*
>>> + * __get_vma_policy() now expects a genuine non-NULL vma. Return NULL
>>> + * when the page can no longer be located in a vma: that is not ideal
>>> + * (migrate_pages() will give up early, presuming ENOMEM), but good
>>> + * enough to avoid a crash by syzkaller or concurrent holepunch.
>>> + */
>>> + if (!vma)
>>> + return NULL;
>>> +
>>
>> How often would this happen? I just want to point out that ENOMEM can cause
>> src THPs or large folios to be split by migrate_pages().
>
> The only case I know of it happening was when a file was mapped, then that
> file truncated (cutting out the source page) before migrate_pages(&pagelist)
> reached it - rather a syzbotty thing to do, not of great real-life concern.
>
> I won't assert that's the only way: I've a ghost of a memory of another way,
> that I can't quite resurface, from a long-ago version of queue_pages_range().
>
> But in the end just didn't care enough about it, because this is really just
> to save a bisection point from crashing - the possibility goes away in the
> 11/12 commit which follows this one, which takes VMA out of it altogether.

Got it. Thanks for the explanation. I should have finished the whole series.

--
Best Regards,
Yan, Zi


2023-10-24 06:44:33

by Hugh Dickins

[permalink] [raw]
Subject: [PATCH] mempolicy: alloc_pages_mpol() for NUMA policy without vma: fix

mm-unstable commit 48a7bd12d57f ("mempolicy: alloc_pages_mpol() for NUMA
policy without vma") ended read_swap_cache_async() supporting NULL vma -
okay; but missed the NULL mpol being passed to __read_swap_cache_async()
by zswap_writeback_entry() - oops!

Since its other callers all give good mpol, add get_task_policy(current)
there in mm/zswap.c, to produce the same good-enough behaviour as before
(and task policy, acted on in current task, does not require the refcount
to be dup'ed).

But if that policy is (quite reasonably) MPOL_INTERLEAVE, then ilx must
be NO_INTERLEAVE_INDEX rather than 0, to provide the same distribution
as before: move that definition from mempolicy.c to mempolicy.h.

Reported-by: Domenico Cerasuolo <[email protected]>
Closes: https://lore.kernel.org/linux-mm/[email protected]/T/#mf08c877d1884fc7867f9e328cdf02257ff3b3ae9
Suggested-by: Johannes Weiner <[email protected]>
Fixes: 48a7bd12d57f ("mempolicy: alloc_pages_mpol() for NUMA policy without vma")
Signed-off-by: Hugh Dickins <[email protected]>
---
include/linux/mempolicy.h | 2 ++
mm/mempolicy.c | 2 --
mm/zswap.c | 7 +++++--
3 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index 2801d5b0a4e9..dd9ed2ce7fd5 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -17,6 +17,8 @@

struct mm_struct;

+#define NO_INTERLEAVE_INDEX (-1UL) /* use task il_prev for interleaving */
+
#ifdef CONFIG_NUMA

/*
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 898ee2e3c85b..989293180eb6 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -114,8 +114,6 @@
#define MPOL_MF_INVERT (MPOL_MF_INTERNAL << 1) /* Invert check for nodemask */
#define MPOL_MF_WRLOCK (MPOL_MF_INTERNAL << 2) /* Write-lock walked vmas */

-#define NO_INTERLEAVE_INDEX (-1UL)
-
static struct kmem_cache *policy_cache;
static struct kmem_cache *sn_cache;

diff --git a/mm/zswap.c b/mm/zswap.c
index 37d2b1cb2ecb..060857adca76 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -24,6 +24,7 @@
#include <linux/swap.h>
#include <linux/crypto.h>
#include <linux/scatterlist.h>
+#include <linux/mempolicy.h>
#include <linux/mempool.h>
#include <linux/zpool.h>
#include <crypto/acompress.h>
@@ -1057,6 +1058,7 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
{
swp_entry_t swpentry = entry->swpentry;
struct page *page;
+ struct mempolicy *mpol;
struct scatterlist input, output;
struct crypto_acomp_ctx *acomp_ctx;
struct zpool *pool = zswap_find_zpool(entry);
@@ -1075,8 +1077,9 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
}

/* try to allocate swap cache page */
- page = __read_swap_cache_async(swpentry, GFP_KERNEL, NULL, 0,
- &page_was_allocated);
+ mpol = get_task_policy(current);
+ page = __read_swap_cache_async(swpentry, GFP_KERNEL, mpol,
+ NO_INTERLEAVE_INDEX, &page_was_allocated);
if (!page) {
ret = -ENOMEM;
goto fail;
--
2.35.3

2023-10-24 06:51:00

by Hugh Dickins

[permalink] [raw]
Subject: [PATCH] mempolicy: migration attempt to match interleave nodes: fix

mm-unstable commit edd33b8807a1 ("mempolicy: migration attempt to match
interleave nodes") added a second vma_iter search to do_mbind(), to
determine the interleave index to be used in the MPOL_INTERLEAVE case.

But sadly it added it just after the mmap_write_unlock(), leaving this
new VMA search unprotected: and so syzbot reports suspicious RCU usage
from lib/maple_tree.c:856.

This could be fixed with an rcu_read_lock/unlock() pair (per Liam);
but since we have been relying on the mmap_lock up to this point, it's
slightly better to extend it over the new search too, for a well-defined
result consistent with the policy this mbind() is establishing (rather
than whatever might follow once the mmap_lock is dropped).

Reported-by: [email protected]
Closes: https://lore.kernel.org/linux-mm/[email protected]/
Fixes: edd33b8807a1 ("mempolicy: migration attempt to match interleave nodes")
Signed-off-by: Hugh Dickins <[email protected]>
---
mm/mempolicy.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 989293180eb6..5e472e6e0507 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1291,8 +1291,6 @@ static long do_mbind(unsigned long start, unsigned long len,
}
}

- mmap_write_unlock(mm);
-
if (!err && !list_empty(&pagelist)) {
/* Convert MPOL_DEFAULT's NULL to task or default policy */
if (!new) {
@@ -1334,7 +1332,11 @@ static long do_mbind(unsigned long start, unsigned long len,
mmpol.ilx -= page->index >> order;
}
}
+ }

+ mmap_write_unlock(mm);
+
+ if (!err && !list_empty(&pagelist)) {
nr_failed |= migrate_pages(&pagelist,
alloc_migration_target_by_mpol, NULL,
(unsigned long)&mmpol, MIGRATE_SYNC,
--
2.35.3

2023-10-24 08:17:57

by kernel test robot

[permalink] [raw]
Subject: Re: [PATCH] mempolicy: alloc_pages_mpol() for NUMA policy without vma: fix

Hi Hugh,

kernel test robot noticed the following build warnings:



url: https://github.com/intel-lab-lkp/linux/commits/UPDATE-20231024-144517/Hugh-Dickins/hugetlbfs-drop-shared-NUMA-mempolicy-pretence/20231003-173301
base: the 10th patch of https://lore.kernel.org/r/74e34633-6060-f5e3-aee-7040d43f2e93%40google.com
patch link: https://lore.kernel.org/r/00dc4f56-e623-7c85-29ea-4211e93063f6%40google.com
patch subject: [PATCH] mempolicy: alloc_pages_mpol() for NUMA policy without vma: fix
config: m68k-allyesconfig (https://download.01.org/0day-ci/archive/20231024/[email protected]/config)
compiler: m68k-linux-gcc (GCC) 13.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231024/[email protected]/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/

All warnings (new ones prefixed by >>):

In file included from mm/zswap.c:41:
mm/internal.h: In function 'shrinker_debugfs_name_alloc':
mm/internal.h:1232:9: warning: function 'shrinker_debugfs_name_alloc' might be a candidate for 'gnu_printf' format attribute [-Wsuggest-attribute=format]
1232 | shrinker->name = kvasprintf_const(GFP_KERNEL, fmt, ap);
| ^~~~~~~~
mm/zswap.c: In function 'zswap_writeback_entry':
mm/zswap.c:1322:16: error: implicit declaration of function 'get_task_policy'; did you mean 'get_vma_policy'? [-Werror=implicit-function-declaration]
1322 | mpol = get_task_policy(current);
| ^~~~~~~~~~~~~~~
| get_vma_policy
>> mm/zswap.c:1322:14: warning: assignment to 'struct mempolicy *' from 'int' makes pointer from integer without a cast [-Wint-conversion]
1322 | mpol = get_task_policy(current);
| ^
cc1: some warnings being treated as errors


vim +1322 mm/zswap.c

1282
1283 /*********************************
1284 * writeback code
1285 **********************************/
1286 /*
1287 * Attempts to free an entry by adding a page to the swap cache,
1288 * decompressing the entry data into the page, and issuing a
1289 * bio write to write the page back to the swap device.
1290 *
1291 * This can be thought of as a "resumed writeback" of the page
1292 * to the swap device. We are basically resuming the same swap
1293 * writeback path that was intercepted with the zswap_store()
1294 * in the first place. After the page has been decompressed into
1295 * the swap cache, the compressed version stored by zswap can be
1296 * freed.
1297 */
1298 static int zswap_writeback_entry(struct zswap_entry *entry,
1299 struct zswap_tree *tree)
1300 {
1301 swp_entry_t swpentry = entry->swpentry;
1302 struct page *page;
1303 struct mempolicy *mpol;
1304 struct scatterlist input, output;
1305 struct crypto_acomp_ctx *acomp_ctx;
1306 struct zpool *pool = zswap_find_zpool(entry);
1307 bool page_was_allocated;
1308 u8 *src, *tmp = NULL;
1309 unsigned int dlen;
1310 int ret;
1311 struct writeback_control wbc = {
1312 .sync_mode = WB_SYNC_NONE,
1313 };
1314
1315 if (!zpool_can_sleep_mapped(pool)) {
1316 tmp = kmalloc(PAGE_SIZE, GFP_KERNEL);
1317 if (!tmp)
1318 return -ENOMEM;
1319 }
1320
1321 /* try to allocate swap cache page */
> 1322 mpol = get_task_policy(current);
1323 page = __read_swap_cache_async(swpentry, GFP_KERNEL, mpol,
1324 NO_INTERLEAVE_INDEX, &page_was_allocated);
1325 if (!page) {
1326 ret = -ENOMEM;
1327 goto fail;
1328 }
1329
1330 /* Found an existing page, we raced with load/swapin */
1331 if (!page_was_allocated) {
1332 put_page(page);
1333 ret = -EEXIST;
1334 goto fail;
1335 }
1336
1337 /*
1338 * Page is locked, and the swapcache is now secured against
1339 * concurrent swapping to and from the slot. Verify that the
1340 * swap entry hasn't been invalidated and recycled behind our
1341 * backs (our zswap_entry reference doesn't prevent that), to
1342 * avoid overwriting a new swap page with old compressed data.
1343 */
1344 spin_lock(&tree->lock);
1345 if (zswap_rb_search(&tree->rbroot, swp_offset(entry->swpentry)) != entry) {
1346 spin_unlock(&tree->lock);
1347 delete_from_swap_cache(page_folio(page));
1348 ret = -ENOMEM;
1349 goto fail;
1350 }
1351 spin_unlock(&tree->lock);
1352
1353 /* decompress */
1354 acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
1355 dlen = PAGE_SIZE;
1356
1357 src = zpool_map_handle(pool, entry->handle, ZPOOL_MM_RO);
1358 if (!zpool_can_sleep_mapped(pool)) {
1359 memcpy(tmp, src, entry->length);
1360 src = tmp;
1361 zpool_unmap_handle(pool, entry->handle);
1362 }
1363
1364 mutex_lock(acomp_ctx->mutex);
1365 sg_init_one(&input, src, entry->length);
1366 sg_init_table(&output, 1);
1367 sg_set_page(&output, page, PAGE_SIZE, 0);
1368 acomp_request_set_params(acomp_ctx->req, &input, &output, entry->length, dlen);
1369 ret = crypto_wait_req(crypto_acomp_decompress(acomp_ctx->req), &acomp_ctx->wait);
1370 dlen = acomp_ctx->req->dlen;
1371 mutex_unlock(acomp_ctx->mutex);
1372
1373 if (!zpool_can_sleep_mapped(pool))
1374 kfree(tmp);
1375 else
1376 zpool_unmap_handle(pool, entry->handle);
1377
1378 BUG_ON(ret);
1379 BUG_ON(dlen != PAGE_SIZE);
1380
1381 /* page is up to date */
1382 SetPageUptodate(page);
1383
1384 /* move it to the tail of the inactive list after end_writeback */
1385 SetPageReclaim(page);
1386
1387 /* start writeback */
1388 __swap_writepage(page, &wbc);
1389 put_page(page);
1390 zswap_written_back_pages++;
1391
1392 return ret;
1393
1394 fail:
1395 if (!zpool_can_sleep_mapped(pool))
1396 kfree(tmp);
1397
1398 /*
1399 * If we get here because the page is already in swapcache, a
1400 * load may be happening concurrently. It is safe and okay to
1401 * not free the entry. It is also okay to return !0.
1402 */
1403 return ret;
1404 }
1405

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

2023-10-24 15:23:38

by Liam R. Howlett

[permalink] [raw]
Subject: Re: [PATCH] mempolicy: migration attempt to match interleave nodes: fix

* Hugh Dickins <[email protected]> [231024 02:50]:
> mm-unstable commit edd33b8807a1 ("mempolicy: migration attempt to match
> interleave nodes") added a second vma_iter search to do_mbind(), to
> determine the interleave index to be used in the MPOL_INTERLEAVE case.
>
> But sadly it added it just after the mmap_write_unlock(), leaving this
> new VMA search unprotected: and so syzbot reports suspicious RCU usage
> from lib/maple_tree.c:856.
>
> This could be fixed with an rcu_read_lock/unlock() pair (per Liam);
> but since we have been relying on the mmap_lock up to this point, it's
> slightly better to extend it over the new search too, for a well-defined
> result consistent with the policy this mbind() is establishing (rather
> than whatever might follow once the mmap_lock is dropped).

Would downgrading the lock work? It would avoid the potential writing
issue and should still satisfy lockdep.
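
Roughly what I have in mind, assuming the usual mmap_lock helpers
(a sketch only, not tested against this patch):

	mmap_write_downgrade(mm);	/* write lock becomes read lock */
	/* ... second vma_iter search for the interleave index ... */
	mmap_read_unlock(mm);
	/* ... migrate_pages() then runs without mmap_lock held ... */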

>
> Reported-by: [email protected]
> Closes: https://lore.kernel.org/linux-mm/[email protected]/
> Fixes: edd33b8807a1 ("mempolicy: migration attempt to match interleave nodes")
> Signed-off-by: Hugh Dickins <[email protected]>
> ---
> mm/mempolicy.c | 6 ++++--
> 1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index 989293180eb6..5e472e6e0507 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -1291,8 +1291,6 @@ static long do_mbind(unsigned long start, unsigned long len,
> }
> }
>
> - mmap_write_unlock(mm);
> -
> if (!err && !list_empty(&pagelist)) {
> /* Convert MPOL_DEFAULT's NULL to task or default policy */
> if (!new) {
> @@ -1334,7 +1332,11 @@ static long do_mbind(unsigned long start, unsigned long len,
> mmpol.ilx -= page->index >> order;
> }
> }
> + }
>
> + mmap_write_unlock(mm);
> +
> + if (!err && !list_empty(&pagelist)) {
> nr_failed |= migrate_pages(&pagelist,
> alloc_migration_target_by_mpol, NULL,
> (unsigned long)&mmpol, MIGRATE_SYNC,
> --
> 2.35.3
>

2023-10-24 15:57:02

by Hugh Dickins

[permalink] [raw]
Subject: Re: [PATCH] mempolicy: alloc_pages_mpol() for NUMA policy without vma: fix

On Tue, 24 Oct 2023, kernel test robot wrote:

> Hi Hugh,
>
> kernel test robot noticed the following build warnings:
>
>
>
> url: https://github.com/intel-lab-lkp/linux/commits/UPDATE-20231024-144517/Hugh-Dickins/hugetlbfs-drop-shared-NUMA-mempolicy-pretence/20231003-173301
> base: the 10th patch of https://lore.kernel.org/r/74e34633-6060-f5e3-aee-7040d43f2e93%40google.com
> patch link: https://lore.kernel.org/r/00dc4f56-e623-7c85-29ea-4211e93063f6%40google.com
> patch subject: [PATCH] mempolicy: alloc_pages_mpol() for NUMA policy without vma: fix
> config: m68k-allyesconfig (https://download.01.org/0day-ci/archive/20231024/[email protected]/config)
> compiler: m68k-linux-gcc (GCC) 13.2.0
> reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20231024/[email protected]/reproduce)
>
> If you fix the issue in a separate patch/commit (i.e. not just a new version of
> the same patch/commit), kindly add following tags
> | Reported-by: kernel test robot <[email protected]>
> | Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/
>
> All warnings (new ones prefixed by >>):
>
> In file included from mm/zswap.c:41:
> mm/internal.h: In function 'shrinker_debugfs_name_alloc':
> mm/internal.h:1232:9: warning: function 'shrinker_debugfs_name_alloc' might be a candidate for 'gnu_printf' format attribute [-Wsuggest-attribute=format]
> 1232 | shrinker->name = kvasprintf_const(GFP_KERNEL, fmt, ap);
> | ^~~~~~~~
> mm/zswap.c: In function 'zswap_writeback_entry':
> mm/zswap.c:1322:16: error: implicit declaration of function 'get_task_policy'; did you mean 'get_vma_policy'? [-Werror=implicit-function-declaration]
> 1322 | mpol = get_task_policy(current);
> | ^~~~~~~~~~~~~~~
> | get_vma_policy
> >> mm/zswap.c:1322:14: warning: assignment to 'struct mempolicy *' from 'int' makes pointer from integer without a cast [-Wint-conversion]
> 1322 | mpol = get_task_policy(current);
> | ^
> cc1: some warnings being treated as errors

Gaah, thanks for that, I never built it without CONFIG_NUMA=y:
v2 patch with a get_task_policy() without CONFIG_NUMA coming up,
built this time.

Hugh

2023-10-24 16:10:06

by Hugh Dickins

[permalink] [raw]
Subject: [PATCH v2] mempolicy: alloc_pages_mpol() for NUMA policy without vma: fix

mm-unstable commit 48a7bd12d57f ("mempolicy: alloc_pages_mpol() for NUMA
policy without vma") ended read_swap_cache_async() supporting NULL vma -
okay; but missed the NULL mpol being passed to __read_swap_cache_async()
by zswap_writeback_entry() - oops!

Since its other callers all give good mpol, add get_task_policy(current)
there in mm/zswap.c, to produce the same good-enough behaviour as before
(and task policy, acted on in the current task, does not require the refcount
to be dup'ed).

But if that policy is (quite reasonably) MPOL_INTERLEAVE, then ilx must
be NO_INTERLEAVE_INDEX rather than 0, to provide the same distribution
as before: move that definition from mempolicy.c to mempolicy.h.

Reported-by: Domenico Cerasuolo <[email protected]>
Closes: https://lore.kernel.org/linux-mm/[email protected]/T/#mf08c877d1884fc7867f9e328cdf02257ff3b3ae9
Suggested-by: Johannes Weiner <[email protected]>
Fixes: 48a7bd12d57f ("mempolicy: alloc_pages_mpol() for NUMA policy without vma")
Signed-off-by: Hugh Dickins <[email protected]>
---
v2: !CONFIG_NUMA builds with a get_task_policy() added in mempolicy.h

include/linux/mempolicy.h | 7 +++++++
mm/mempolicy.c | 2 --
mm/zswap.c | 7 +++++--
3 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index 2801d5b0a4e9..931b118336f4 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -17,6 +17,8 @@

struct mm_struct;

+#define NO_INTERLEAVE_INDEX (-1UL) /* use task il_prev for interleaving */
+
#ifdef CONFIG_NUMA

/*
@@ -179,6 +181,11 @@ extern bool apply_policy_zone(struct mempolicy *policy, enum zone_type zone);

struct mempolicy {};

+static inline struct mempolicy *get_task_policy(struct task_struct *p)
+{
+ return NULL;
+}
+
static inline bool mpol_equal(struct mempolicy *a, struct mempolicy *b)
{
return true;
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 898ee2e3c85b..989293180eb6 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -114,8 +114,6 @@
#define MPOL_MF_INVERT (MPOL_MF_INTERNAL << 1) /* Invert check for nodemask */
#define MPOL_MF_WRLOCK (MPOL_MF_INTERNAL << 2) /* Write-lock walked vmas */

-#define NO_INTERLEAVE_INDEX (-1UL)
-
static struct kmem_cache *policy_cache;
static struct kmem_cache *sn_cache;

diff --git a/mm/zswap.c b/mm/zswap.c
index 37d2b1cb2ecb..060857adca76 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -24,6 +24,7 @@
#include <linux/swap.h>
#include <linux/crypto.h>
#include <linux/scatterlist.h>
+#include <linux/mempolicy.h>
#include <linux/mempool.h>
#include <linux/zpool.h>
#include <crypto/acompress.h>
@@ -1057,6 +1058,7 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
{
swp_entry_t swpentry = entry->swpentry;
struct page *page;
+ struct mempolicy *mpol;
struct scatterlist input, output;
struct crypto_acomp_ctx *acomp_ctx;
struct zpool *pool = zswap_find_zpool(entry);
@@ -1075,8 +1077,9 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
}

/* try to allocate swap cache page */
- page = __read_swap_cache_async(swpentry, GFP_KERNEL, NULL, 0,
- &page_was_allocated);
+ mpol = get_task_policy(current);
+ page = __read_swap_cache_async(swpentry, GFP_KERNEL, mpol,
+ NO_INTERLEAVE_INDEX, &page_was_allocated);
if (!page) {
ret = -ENOMEM;
goto fail;
--
2.35.3

2023-10-24 16:33:16

by Hugh Dickins

[permalink] [raw]
Subject: Re: [PATCH] mempolicy: migration attempt to match interleave nodes: fix

On Tue, 24 Oct 2023, Liam R. Howlett wrote:

> * Hugh Dickins <[email protected]> [231024 02:50]:
> > mm-unstable commit edd33b8807a1 ("mempolicy: migration attempt to match
> > interleave nodes") added a second vma_iter search to do_mbind(), to
> > determine the interleave index to be used in the MPOL_INTERLEAVE case.
> >
> > But sadly it added it just after the mmap_write_unlock(), leaving this
> > new VMA search unprotected: and so syzbot reports suspicious RCU usage
> > from lib/maple_tree.c:856.
> >
> > This could be fixed with an rcu_read_lock/unlock() pair (per Liam);
> > but since we have been relying on the mmap_lock up to this point, it's
> > slightly better to extend it over the new search too, for a well-defined
> > result consistent with the policy this mbind() is establishing (rather
> > than whatever might follow once the mmap_lock is dropped).
>
> Would downgrading the lock work? It would avoid the potential writing
> issue and should still satisfy lockdep.

Downgrading the lock would work, but it would be a pointless complication.

The "second vma_iter search" is not a lengthy operation (normally it just
checks pgoff,start,end of the first VMA and immediately breaks out; in
worst case it just makes that check on each VMA involved: it doesn't get
into splits or merges or pte scans), we already have mmap_lock, yes it's
only needed for read during that scani, but it's not worth playing with.
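
To illustrate, the shape of that search is roughly (a simplified sketch,
close to but not necessarily the exact mm-unstable code):

	vma_iter_init(&vmi, mm, start);
	for_each_vma_range(vmi, vma, end) {
		/* usually the first VMA already covers the chosen page */
		addr = page_address_in_vma(page, vma);
		if (addr != -EFAULT)
			break;
	}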

Whereas migrating an indefinite number of pages, with all the allocating
and unmapping and copying and remapping involved, really is something we
prefer not to hold mmap_lock across.

Hugh

2023-10-24 16:46:14

by Matthew Wilcox

[permalink] [raw]
Subject: Re: [PATCH] mempolicy: migration attempt to match interleave nodes: fix

On Tue, Oct 24, 2023 at 09:32:44AM -0700, Hugh Dickins wrote:
> On Tue, 24 Oct 2023, Liam R. Howlett wrote:
>
> > * Hugh Dickins <[email protected]> [231024 02:50]:
> > > mm-unstable commit edd33b8807a1 ("mempolicy: migration attempt to match
> > > interleave nodes") added a second vma_iter search to do_mbind(), to
> > > determine the interleave index to be used in the MPOL_INTERLEAVE case.
> > >
> > > But sadly it added it just after the mmap_write_unlock(), leaving this
> > > new VMA search unprotected: and so syzbot reports suspicious RCU usage
> > > from lib/maple_tree.c:856.
> > >
> > > This could be fixed with an rcu_read_lock/unlock() pair (per Liam);
> > > but since we have been relying on the mmap_lock up to this point, it's
> > > slightly better to extend it over the new search too, for a well-defined
> > > result consistent with the policy this mbind() is establishing (rather
> > > than whatever might follow once the mmap_lock is dropped).
> >
> > Would downgrading the lock work? It would avoid the potential writing
> > issue and should still satisfy lockdep.
>
> Downgrading the lock would work, but it would be a pointless complication.

I tend to agree. It's also becoming far less important these days
with the vast majority of page faults handled under the per-VMA lock.
We might be able to turn it into a mutex instead of an rwsem without
seeing a noticeable drop-off in performance. Not volunteering to try this.