2012-05-18 10:25:34

by Lee Schermerhorn

Subject: [tip:sched/numa] mm/mpol: Add MPOL_MF_NOOP

Commit-ID: 84f1e3478238c0c65711364e43d081ef32c068fb
Gitweb: http://git.kernel.org/tip/84f1e3478238c0c65711364e43d081ef32c068fb
Author: Lee Schermerhorn <[email protected]>
AuthorDate: Mon, 16 Jan 2012 14:43:29 +0100
Committer: Ingo Molnar <[email protected]>
CommitDate: Fri, 18 May 2012 08:16:17 +0200

mm/mpol: Add MPOL_MF_NOOP

This patch augments the MPOL_MF_LAZY feature by adding a "NOOP" policy
to mbind().  When the NOOP policy is used together with the
MPOL_MF_MOVE and MPOL_MF_LAZY flags, mbind() [check_range()] will walk
the specified range and unmap eligible pages so that they will be
migrated on next touch.

This allows an application to prepare for a new phase of operation
where different regions of shared storage will be assigned to worker
threads, without changing policy.  Note that we could just use
"default" policy in this case.  However, MPOL_NOOP also allows an
application to request that pages be migrated, only if necessary, to
follow whatever policy currently applies to a range of pages, without
knowing that policy and without issuing multiple mbind() calls for
ranges with different policies.
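
As a rough illustration (not part of this patch), a userspace call
along these lines might look like the sketch below.  It assumes the
MPOL_NOOP and MPOL_MF_LAZY definitions added by this series are
exported to userspace headers; the fallback numeric values are
placeholders so the sketch compiles and are not authoritative.

  #include <numaif.h>		/* mbind(), MPOL_MF_MOVE */
  #include <stddef.h>

  #ifndef MPOL_NOOP
  #define MPOL_NOOP	5		/* assumed: next value after MPOL_LOCAL */
  #endif
  #ifndef MPOL_MF_LAZY
  #define MPOL_MF_LAZY	(1 << 3)	/* assumed flag value from this series */
  #endif

  /*
   * Unmap the pages backing [addr, addr + len) so they are migrated
   * lazily on next touch, while leaving whatever memory policy
   * currently covers the range untouched.
   */
  static long prepare_lazy_migration(void *addr, size_t len)
  {
	return mbind(addr, len, MPOL_NOOP, NULL, 0,
		     MPOL_MF_MOVE | MPOL_MF_LAZY);
  }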

Signed-off-by: Lee Schermerhorn <[email protected]>
Signed-off-by: Peter Zijlstra <[email protected]>
Cc: Suresh Siddha <[email protected]>
Cc: Paul Turner <[email protected]>
Cc: Dan Smith <[email protected]>
Cc: Bharata B Rao <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Linus Torvalds <[email protected]>
Link: http://lkml.kernel.org/n/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
 include/linux/mempolicy.h |    1 +
 mm/mempolicy.c            |    8 ++++----
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index 801ad50..b484ae2 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -21,6 +21,7 @@ enum {
 	MPOL_BIND,
 	MPOL_INTERLEAVE,
 	MPOL_LOCAL,
+	MPOL_NOOP,		/* retain existing policy for range */
 	MPOL_MAX,	/* always last member of enum */
 };

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index f261c52..e972ba0 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -251,10 +251,10 @@ static struct mempolicy *mpol_new(unsigned short mode, unsigned short flags,
 	pr_debug("setting mode %d flags %d nodes[0] %lx\n",
 		 mode, flags, nodes ? nodes_addr(*nodes)[0] : -1);
 
-	if (mode == MPOL_DEFAULT) {
+	if (mode == MPOL_DEFAULT || mode == MPOL_NOOP) {
 		if (nodes && !nodes_empty(*nodes))
 			return ERR_PTR(-EINVAL);
-		return NULL;	/* simply delete any existing policy */
+		return NULL;
 	}
 	VM_BUG_ON(!nodes);

@@ -1056,7 +1056,7 @@ static long do_mbind(unsigned long start, unsigned long len,
 	if (start & ~PAGE_MASK)
 		return -EINVAL;
 
-	if (mode == MPOL_DEFAULT)
+	if (mode == MPOL_DEFAULT || mode == MPOL_NOOP)
 		flags &= ~MPOL_MF_STRICT;
 
 	len = (len + PAGE_SIZE - 1) & PAGE_MASK;
@@ -1108,7 +1108,7 @@ static long do_mbind(unsigned long start, unsigned long len,
 			  flags | MPOL_MF_INVERT, &pagelist);
 
 	err = PTR_ERR(vma);	/* maybe ... */
-	if (!IS_ERR(vma))
+	if (!IS_ERR(vma) && mode != MPOL_NOOP)
 		err = mbind_range(mm, start, end, new);
 
 	if (!err) {