2012-02-28 14:15:01

by Johannes Weiner

Subject: [patch 1/2] kernel: cgroup: push rcu read locking from css_is_ancestor() to callsite

Library functions should not grab locks when the callsites can do it,
even if the lock nests like the rcu read-side lock does.

Push the rcu_read_lock() from css_is_ancestor() to its single user,
mem_cgroup_same_or_subtree(), in preparation for another user that may
already hold the rcu read-side lock.
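
The callsite then takes the rcu read-side lock around the call itself;
a minimal sketch of the resulting pattern (simplified from the
mm/memcontrol.c hunk below):

	rcu_read_lock();
	ret = css_is_ancestor(&memcg->css, &root_memcg->css);
	rcu_read_unlock();
	return ret;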

Signed-off-by: Johannes Weiner <[email protected]>
---
kernel/cgroup.c | 20 ++++++++++----------
mm/memcontrol.c | 14 +++++++++-----
2 files changed, 19 insertions(+), 15 deletions(-)

diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index 4be474d..9003bd8 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -4841,7 +4841,7 @@ EXPORT_SYMBOL_GPL(css_depth);
* @root: the css supporsed to be an ancestor of the child.
*
* Returns true if "root" is an ancestor of "child" in its hierarchy. Because
- * this function reads css->id, this use rcu_dereference() and rcu_read_lock().
+ * this function reads css->id, the caller must hold rcu_read_lock().
* But, considering usual usage, the csses should be valid objects after test.
* Assuming that the caller will do some action to the child if this returns
* returns true, the caller must take "child";s reference count.
@@ -4853,18 +4853,18 @@ bool css_is_ancestor(struct cgroup_subsys_state *child,
{
struct css_id *child_id;
struct css_id *root_id;
- bool ret = true;

- rcu_read_lock();
child_id = rcu_dereference(child->id);
+ if (!child_id)
+ return false;
root_id = rcu_dereference(root->id);
- if (!child_id
- || !root_id
- || (child_id->depth < root_id->depth)
- || (child_id->stack[root_id->depth] != root_id->id))
- ret = false;
- rcu_read_unlock();
- return ret;
+ if (!root_id)
+ return false;
+ if (child_id->depth < root_id->depth)
+ return false;
+ if (child_id->stack[root_id->depth] != root_id->id)
+ return false;
+ return true;
}

void free_css_id(struct cgroup_subsys *ss, struct cgroup_subsys_state *css)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index e4be95a..b4622fb 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1047,12 +1047,16 @@ struct lruvec *mem_cgroup_lru_move_lists(struct zone *zone,
static bool mem_cgroup_same_or_subtree(const struct mem_cgroup *root_memcg,
struct mem_cgroup *memcg)
{
- if (root_memcg != memcg) {
- return (root_memcg->use_hierarchy &&
- css_is_ancestor(&memcg->css, &root_memcg->css));
- }
+ bool ret;

- return true;
+ if (root_memcg == memcg)
+ return true;
+ if (!root_memcg->use_hierarchy)
+ return false;
+ rcu_read_lock();
+ ret = css_is_ancestor(&memcg->css, &root_memcg->css);
+ rcu_read_unlock();
+ return ret;
}

int task_in_mem_cgroup(struct task_struct *task, const struct mem_cgroup *memcg)
--
1.7.7.6
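
The ancestor test above works because each css_id records, for every
depth, the id of its ancestor at that depth, with stack[depth] being its
own id. A sketch with made-up names and ids:

	/* root css:   depth 0, id 1, stack = { 1 }       */
	/* memcg A:    depth 1, id 5, stack = { 1, 5 }    */
	/* memcg A/B:  depth 2, id 9, stack = { 1, 5, 9 } */

	css_is_ancestor(&B_css, &A_css);  /* A depth 1, B->stack[1] == 5 == A's id: true */
	css_is_ancestor(&A_css, &B_css);  /* A's depth 1 < B's depth 2: false */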


2012-02-28 14:15:06

by Johannes Weiner

Subject: [patch 2/2] mm: memcg: count pte references from every member of the reclaimed hierarchy

The rmap walker checking page table references has historically
ignored references from VMAs that were not part of the memcg that was
being reclaimed during memcg hard limit reclaim.

When transitioning global reclaim to memcg hierarchy reclaim, I missed
that bit and now references from outside a memcg are ignored even
during global reclaim.

Reverting to the traditional behaviour - counting all references during
global reclaim and minding only the references of the memcg being
reclaimed during limit reclaim - would be one option.

However, the more generic idea is to ignore references exactly when
they come from outside the hierarchy that is currently under reclaim,
because only then will their reclamation be of any use to help the
pressure situation. It makes no sense to ignore references from a
sibling memcg and then evict a page that will be immediately refaulted
by that sibling, which contributes to the same usage of the common
ancestor under reclaim.

The solution: make the rmap walker ignore references from VMAs that
are not part of the hierarchy that is being reclaimed.

Flat limit reclaim will stay the same; hierarchical limit reclaim will
mind the references only to pages that the hierarchy owns. Global
reclaim, since it reclaims from all memcgs, will be fixed to regard
all references.
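
Concretely, the skip check in the page_referenced() walk stays untouched;
only the memcg handed down from vmscan and the meaning of
mm_match_cgroup() change. Roughly (names as in the hunks below):

	/* rmap walk, unchanged by this patch: */
	if (memcg && !mm_match_cgroup(vma->vm_mm, memcg))
		continue;	/* mm's memcg is outside the reclaimed hierarchy */

With memcg == sc->target_mem_cgroup, global reclaim passes NULL and never
skips a VMA, while limit reclaim skips only VMAs whose memcg is neither
the target nor one of its descendants.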

Reported-by: Konstantin Khlebnikov <[email protected]>
Signed-off-by: Johannes Weiner <[email protected]>
---
include/linux/memcontrol.h | 6 +++++-
mm/memcontrol.c | 16 +++++++++++-----
mm/vmscan.c | 6 ++++--
3 files changed, 20 insertions(+), 8 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 8537c5d..661b54a 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -78,6 +78,7 @@ extern void mem_cgroup_uncharge_page(struct page *page);
extern void mem_cgroup_uncharge_cache_page(struct page *page);

extern void mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask);
+bool __mem_cgroup_same_or_subtree(const struct mem_cgroup *, struct mem_cgroup *);
int task_in_mem_cgroup(struct task_struct *task, const struct mem_cgroup *memcg);

extern struct mem_cgroup *try_get_mem_cgroup_from_page(struct page *page);
@@ -88,10 +89,13 @@ static inline
int mm_match_cgroup(const struct mm_struct *mm, const struct mem_cgroup *cgroup)
{
struct mem_cgroup *memcg;
+ int match;
+
rcu_read_lock();
memcg = mem_cgroup_from_task(rcu_dereference((mm)->owner));
+ match = __mem_cgroup_same_or_subtree(cgroup, memcg);
rcu_read_unlock();
- return cgroup == memcg;
+ return match;
}

extern struct cgroup_subsys_state *mem_cgroup_css(struct mem_cgroup *memcg);
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index b4622fb..21004df 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1044,17 +1044,23 @@ struct lruvec *mem_cgroup_lru_move_lists(struct zone *zone,
* Checks whether given mem is same or in the root_mem_cgroup's
* hierarchy subtree
*/
-static bool mem_cgroup_same_or_subtree(const struct mem_cgroup *root_memcg,
- struct mem_cgroup *memcg)
+bool __mem_cgroup_same_or_subtree(const struct mem_cgroup *root_memcg,
+ struct mem_cgroup *memcg)
{
- bool ret;
-
if (root_memcg == memcg)
return true;
if (!root_memcg->use_hierarchy)
return false;
+ return css_is_ancestor(&memcg->css, &root_memcg->css);
+}
+
+static bool mem_cgroup_same_or_subtree(const struct mem_cgroup *root_memcg,
+ struct mem_cgroup *memcg)
+{
+ bool ret;
+
rcu_read_lock();
- ret = css_is_ancestor(&memcg->css, &root_memcg->css);
+ ret = __mem_cgroup_same_or_subtree(root_memcg, memcg);
rcu_read_unlock();
return ret;
}
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c631234..120646e 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -708,7 +708,8 @@ static enum page_references page_check_references(struct page *page,
int referenced_ptes, referenced_page;
unsigned long vm_flags;

- referenced_ptes = page_referenced(page, 1, mz->mem_cgroup, &vm_flags);
+ referenced_ptes = page_referenced(page, 1, sc->target_mem_cgroup,
+ &vm_flags);
referenced_page = TestClearPageReferenced(page);

/* Lumpy reclaim - ignore references */
@@ -1710,7 +1711,8 @@ static void shrink_active_list(unsigned long nr_pages,
continue;
}

- if (page_referenced(page, 0, mz->mem_cgroup, &vm_flags)) {
+ if (page_referenced(page, 0, sc->target_mem_cgroup,
+ &vm_flags)) {
nr_rotated += hpage_nr_pages(page);
/*
* Identify referenced, file-backed active pages and
--
1.7.7.6

2012-02-28 15:47:01

by Konstantin Khlebnikov

Subject: Re: [patch 2/2] mm: memcg: count pte references from every member of the reclaimed hierarchy

Johannes Weiner wrote:
> The rmap walker checking page table references has historically
> ignored references from VMAs that were not part of the memcg that was
> being reclaimed during memcg hard limit reclaim.
>
> When transitioning global reclaim to memcg hierarchy reclaim, I missed
> that bit and now references from outside a memcg are ignored even
> during global reclaim.
>
> Reverting to the traditional behaviour - counting all references during
> global reclaim and minding only the references of the memcg being
> reclaimed during limit reclaim - would be one option.
>
> However, the more generic idea is to ignore references exactly when
> they come from outside the hierarchy that is currently under reclaim,
> because only then will their reclamation be of any use to help the
> pressure situation. It makes no sense to ignore references from a
> sibling memcg and then evict a page that will be immediately refaulted
> by that sibling, which contributes to the same usage of the common
> ancestor under reclaim.
>
> The solution: make the rmap walker ignore references from VMAs that
> are not part of the hierarchy that is being reclaimed.
>
> Flat limit reclaim will stay the same; hierarchical limit reclaim will
> mind the references only to pages that the hierarchy owns. Global
> reclaim, since it reclaims from all memcgs, will be fixed to regard
> all references.
>
> Reported-by: Konstantin Khlebnikov <[email protected]>
> Signed-off-by: Johannes Weiner <[email protected]>

Thanks, it makes my patchset smaller.
One note below.

> ---
> include/linux/memcontrol.h | 6 +++++-
> mm/memcontrol.c | 16 +++++++++++-----
> mm/vmscan.c | 6 ++++--
> 3 files changed, 20 insertions(+), 8 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 8537c5d..661b54a 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -78,6 +78,7 @@ extern void mem_cgroup_uncharge_page(struct page *page);
> extern void mem_cgroup_uncharge_cache_page(struct page *page);
>
> extern void mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask);
> +bool __mem_cgroup_same_or_subtree(const struct mem_cgroup *, struct mem_cgroup *);
> int task_in_mem_cgroup(struct task_struct *task, const struct mem_cgroup *memcg);
>
> extern struct mem_cgroup *try_get_mem_cgroup_from_page(struct page *page);
> @@ -88,10 +89,13 @@ static inline
> int mm_match_cgroup(const struct mm_struct *mm, const struct mem_cgroup *cgroup)
> {
> struct mem_cgroup *memcg;
> + int match;
> +
> rcu_read_lock();
> memcg = mem_cgroup_from_task(rcu_dereference((mm)->owner));
> + match = __mem_cgroup_same_or_subtree(cgroup, memcg);

I'm afraid mm->owner and memcg can be NULL here,
for example if sys_exit() races with sys_swapoff(), so
match = memcg && __mem_cgroup_same_or_subtree(cgroup, memcg);
would be better.
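
In other words, a sketch of how the inline helper would read with that
guard (untested):

	static inline
	int mm_match_cgroup(const struct mm_struct *mm, const struct mem_cgroup *cgroup)
	{
		struct mem_cgroup *memcg;
		int match;

		rcu_read_lock();
		memcg = mem_cgroup_from_task(rcu_dereference((mm)->owner));
		/* mm->owner, and thus memcg, may be NULL here */
		match = memcg && __mem_cgroup_same_or_subtree(cgroup, memcg);
		rcu_read_unlock();
		return match;
	}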

> rcu_read_unlock();
> - return cgroup == memcg;
> + return match;
> }
>
> extern struct cgroup_subsys_state *mem_cgroup_css(struct mem_cgroup *memcg);
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index b4622fb..21004df 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -1044,17 +1044,23 @@ struct lruvec *mem_cgroup_lru_move_lists(struct zone *zone,
> * Checks whether given mem is same or in the root_mem_cgroup's
> * hierarchy subtree
> */
> -static bool mem_cgroup_same_or_subtree(const struct mem_cgroup *root_memcg,
> - struct mem_cgroup *memcg)
> +bool __mem_cgroup_same_or_subtree(const struct mem_cgroup *root_memcg,
> + struct mem_cgroup *memcg)
> {
> - bool ret;
> -
> if (root_memcg == memcg)
> return true;
> if (!root_memcg->use_hierarchy)
> return false;
> + return css_is_ancestor(&memcg->css, &root_memcg->css);
> +}
> +
> +static bool mem_cgroup_same_or_subtree(const struct mem_cgroup *root_memcg,
> + struct mem_cgroup *memcg)
> +{
> + bool ret;
> +
> rcu_read_lock();
> - ret = css_is_ancestor(&memcg->css, &root_memcg->css);
> + ret = __mem_cgroup_same_or_subtree(root_memcg, memcg);
> rcu_read_unlock();
> return ret;
> }
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index c631234..120646e 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -708,7 +708,8 @@ static enum page_references page_check_references(struct page *page,
> int referenced_ptes, referenced_page;
> unsigned long vm_flags;
>
> - referenced_ptes = page_referenced(page, 1, mz->mem_cgroup, &vm_flags);
> + referenced_ptes = page_referenced(page, 1, sc->target_mem_cgroup,
> + &vm_flags);
> referenced_page = TestClearPageReferenced(page);
>
> /* Lumpy reclaim - ignore references */
> @@ -1710,7 +1711,8 @@ static void shrink_active_list(unsigned long nr_pages,
> continue;
> }
>
> - if (page_referenced(page, 0, mz->mem_cgroup, &vm_flags)) {
> + if (page_referenced(page, 0, sc->target_mem_cgroup,
> + &vm_flags)) {
> nr_rotated += hpage_nr_pages(page);
> /*
> * Identify referenced, file-backed active pages and

2012-02-29 00:01:36

by Kamezawa Hiroyuki

Subject: Re: [patch 1/2] kernel: cgroup: push rcu read locking from css_is_ancestor() to callsite

On Tue, 28 Feb 2012 15:14:48 +0100
Johannes Weiner <[email protected]> wrote:

> Library functions should not grab locks when the callsites can do it,
> even if the lock nests like the rcu read-side lock does.
>
> Push the rcu_read_lock() from css_is_ancestor() to its single user,
> mem_cgroup_same_or_subtree(), in preparation for another user that may
> already hold the rcu read-side lock.
>
> Signed-off-by: Johannes Weiner <[email protected]>

Acked-by: KAMEZAWA Hiroyuki <[email protected]>

2012-02-29 00:41:16

by Kamezawa Hiroyuki

Subject: Re: [patch 2/2] mm: memcg: count pte references from every member of the reclaimed hierarchy

On Tue, 28 Feb 2012 15:14:49 +0100
Johannes Weiner <[email protected]> wrote:

> The rmap walker checking page table references has historically
> ignored references from VMAs that were not part of the memcg that was
> being reclaimed during memcg hard limit reclaim.
>
> When transitioning global reclaim to memcg hierarchy reclaim, I missed
> that bit and now references from outside a memcg are ignored even
> during global reclaim.
>
> Reverting to the traditional behaviour - counting all references during
> global reclaim and minding only the references of the memcg being
> reclaimed during limit reclaim - would be one option.
>
> However, the more generic idea is to ignore references exactly when
> they come from outside the hierarchy that is currently under reclaim,
> because only then will their reclamation be of any use to help the
> pressure situation. It makes no sense to ignore references from a
> sibling memcg and then evict a page that will be immediately refaulted
> by that sibling, which contributes to the same usage of the common
> ancestor under reclaim.
>
> The solution: make the rmap walker ignore references from VMAs that
> are not part of the hierarchy that is being reclaimed.
>
> Flat limit reclaim will stay the same; hierarchical limit reclaim will
> mind the references only to pages that the hierarchy owns. Global
> reclaim, since it reclaims from all memcgs, will be fixed to regard
> all references.
>
> Reported-by: Konstantin Khlebnikov <[email protected]>
> Signed-off-by: Johannes Weiner <[email protected]>
> ---
> include/linux/memcontrol.h | 6 +++++-
> mm/memcontrol.c | 16 +++++++++++-----
> mm/vmscan.c | 6 ++++--
> 3 files changed, 20 insertions(+), 8 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 8537c5d..661b54a 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -78,6 +78,7 @@ extern void mem_cgroup_uncharge_page(struct page *page);
> extern void mem_cgroup_uncharge_cache_page(struct page *page);
>
> extern void mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask);
> +bool __mem_cgroup_same_or_subtree(const struct mem_cgroup *, struct mem_cgroup *);
> int task_in_mem_cgroup(struct task_struct *task, const struct mem_cgroup *memcg);
>
> extern struct mem_cgroup *try_get_mem_cgroup_from_page(struct page *page);
> @@ -88,10 +89,13 @@ static inline
> int mm_match_cgroup(const struct mm_struct *mm, const struct mem_cgroup *cgroup)
> {
> struct mem_cgroup *memcg;
> + int match;
> +
> rcu_read_lock();
> memcg = mem_cgroup_from_task(rcu_dereference((mm)->owner));
> + match = __mem_cgroup_same_or_subtree(cgroup, memcg);
> rcu_read_unlock();
> - return cgroup == memcg;
> + return match;
> }
>
> extern struct cgroup_subsys_state *mem_cgroup_css(struct mem_cgroup *memcg);
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index b4622fb..21004df 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -1044,17 +1044,23 @@ struct lruvec *mem_cgroup_lru_move_lists(struct zone *zone,
> * Checks whether given mem is same or in the root_mem_cgroup's
> * hierarchy subtree
> */
> -static bool mem_cgroup_same_or_subtree(const struct mem_cgroup *root_memcg,
> - struct mem_cgroup *memcg)
> +bool __mem_cgroup_same_or_subtree(const struct mem_cgroup *root_memcg,
> + struct mem_cgroup *memcg)
> {
> - bool ret;
> -
> if (root_memcg == memcg)
> return true;
> if (!root_memcg->use_hierarchy)
> return false;
> + return css_is_ancestor(&memcg->css, &root_memcg->css);
> +}
> +
> +static bool mem_cgroup_same_or_subtree(const struct mem_cgroup *root_memcg,
> + struct mem_cgroup *memcg)
> +{
> + bool ret;
> +
> rcu_read_lock();
> - ret = css_is_ancestor(&memcg->css, &root_memcg->css);
> + ret = __mem_cgroup_same_or_subtree(root_memcg, memcg);
> rcu_read_unlock();
> return ret;
> }
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index c631234..120646e 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -708,7 +708,8 @@ static enum page_references page_check_references(struct page *page,
> int referenced_ptes, referenced_page;
> unsigned long vm_flags;
>
> - referenced_ptes = page_referenced(page, 1, mz->mem_cgroup, &vm_flags);
> + referenced_ptes = page_referenced(page, 1, sc->target_mem_cgroup,
> + &vm_flags);


I'm sorry if I don't understand the code... is the !sc->target_mem_cgroup case handled?

Thanks,
-Kame

> referenced_page = TestClearPageReferenced(page);
>
> /* Lumpy reclaim - ignore references */
> @@ -1710,7 +1711,8 @@ static void shrink_active_list(unsigned long nr_pages,
> continue;
> }
>
> - if (page_referenced(page, 0, mz->mem_cgroup, &vm_flags)) {
> + if (page_referenced(page, 0, sc->target_mem_cgroup,
> + &vm_flags)) {
> nr_rotated += hpage_nr_pages(page);
> /*
> * Identify referenced, file-backed active pages and
> --
> 1.7.7.6
>
>

2012-02-29 02:03:08

by Johannes Weiner

Subject: Re: [patch 2/2] mm: memcg: count pte references from every member of the reclaimed hierarchy

On Wed, Feb 29, 2012 at 09:39:46AM +0900, KAMEZAWA Hiroyuki wrote:
> On Tue, 28 Feb 2012 15:14:49 +0100
> Johannes Weiner <[email protected]> wrote:
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -708,7 +708,8 @@ static enum page_references page_check_references(struct page *page,
> > int referenced_ptes, referenced_page;
> > unsigned long vm_flags;
> >
> > - referenced_ptes = page_referenced(page, 1, mz->mem_cgroup, &vm_flags);
> > + referenced_ptes = page_referenced(page, 1, sc->target_mem_cgroup,
> > + &vm_flags);
>
>
> I'm sorry if I don't understand the code... is the !sc->target_mem_cgroup case handled?

Yes, but it's not obvious from the diff alone. page_referenced() does
this:

/*
* If we are reclaiming on behalf of a cgroup, skip
* counting on behalf of references from different
* cgroups
*/
if (memcg && !mm_match_cgroup(vma->vm_mm, memcg))
continue;

As a result, !sc->target_mem_cgroup -- global reclaim -- will never
ignore references or, put differently, will respect references from all
memcgs, which is what we want.
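
To spell out the two cases (values as per the vmscan.c hunks):

	/* global reclaim: sc->target_mem_cgroup is NULL */
	referenced_ptes = page_referenced(page, 1, NULL, &vm_flags);
	/* the "if (memcg && ...)" check above never fires, no VMA is skipped */

	/* limit reclaim: sc->target_mem_cgroup is the hierarchy being reclaimed */
	referenced_ptes = page_referenced(page, 1, sc->target_mem_cgroup, &vm_flags);
	/* VMAs whose memcg is outside that hierarchy are skipped */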

2012-02-29 03:10:44

by Kamezawa Hiroyuki

Subject: Re: [patch 2/2] mm: memcg: count pte references from every member of the reclaimed hierarchy

On Wed, 29 Feb 2012 03:02:46 +0100
Johannes Weiner <[email protected]> wrote:

> On Wed, Feb 29, 2012 at 09:39:46AM +0900, KAMEZAWA Hiroyuki wrote:
> > On Tue, 28 Feb 2012 15:14:49 +0100
> > Johannes Weiner <[email protected]> wrote:
> > > --- a/mm/vmscan.c
> > > +++ b/mm/vmscan.c
> > > @@ -708,7 +708,8 @@ static enum page_references page_check_references(struct page *page,
> > > int referenced_ptes, referenced_page;
> > > unsigned long vm_flags;
> > >
> > > - referenced_ptes = page_referenced(page, 1, mz->mem_cgroup, &vm_flags);
> > > + referenced_ptes = page_referenced(page, 1, sc->target_mem_cgroup,
> > > + &vm_flags);
> >
> >
> > I'm sorry if I don't understand the code... is the !sc->target_mem_cgroup case handled?
>
> Yes, but it's not obvious from the diff alone. page_referenced() does
> this:
>
> /*
> * If we are reclaiming on behalf of a cgroup, skip
> * counting on behalf of references from different
> * cgroups
> */
> if (memcg && !mm_match_cgroup(vma->vm_mm, memcg))
> continue;
>
> As a result, !sc->target_mem_cgroup -- global reclaim -- will never
> ignore references or, put differently, will respect references from all
> memcgs, which is what we want.
>
Ah, thank you.

Acked-by: KAMEZAWA Hiroyuki <[email protected]>

2012-03-01 01:35:39

by Li Zefan

Subject: Re: [patch 1/2] kernel: cgroup: push rcu read locking from css_is_ancestor() to callsite

Johannes Weiner wrote:
> Library functions should not grab locks when the callsites can do it,
> even if the lock nests like the rcu read-side lock does.
>
> Push the rcu_read_lock() from css_is_ancestor() to its single user,
> mem_cgroup_same_or_subtree(), in preparation for another user that may
> already hold the rcu read-side lock.
>
> Signed-off-by: Johannes Weiner <[email protected]>

Acked-by: Li Zefan <[email protected]>