2008-12-08 02:36:51

by Daisuke Nishimura

Subject: [PATCH -mmotm 0/4] cleanups/fixes for memory cgroup

Hi.

These are some cleanup/bug fix patches that I have now for memory cgroup.

Patches:
[1/4] memcg: don't trigger oom at page migration
[2/4] memcg: remove mem_cgroup_try_charge
[3/4] memcg: avoid deadlock caused by race between oom and cpuset_attach
[4/4] memcg: change try_to_free_pages to hierarchical_reclaim

There is no special meaning to the patch order, except that patch 2 depends on patch 1.


Thanks,
Daisuke Nishimura.


2008-12-08 02:33:59

by Daisuke Nishimura

Subject: [PATCH -mmotm 1/4] memcg: don't trigger oom at page migration

I think triggering OOM at mem_cgroup_prepare_migration() would be a bit of overkill.
Returning -ENOMEM is enough for mem_cgroup_prepare_migration();
the caller handles that case anyway.

Signed-off-by: Daisuke Nishimura <[email protected]>
Acked-by: KAMEZAWA Hiroyuki <[email protected]>
Acked-by: Balbir Singh <[email protected]>
---
mm/memcontrol.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index a4854a7..0683459 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1331,7 +1331,7 @@ int mem_cgroup_prepare_migration(struct page *page, struct mem_cgroup **ptr)
unlock_page_cgroup(pc);

if (mem) {
- ret = mem_cgroup_try_charge(NULL, GFP_KERNEL, &mem);
+ ret = __mem_cgroup_try_charge(NULL, GFP_KERNEL, &mem, false);
css_put(&mem->css);
}
*ptr = mem;

2008-12-08 02:34:40

by Daisuke Nishimura

Subject: [PATCH -mmotm 2/4] memcg: remove mem_cgroup_try_charge

After the previous patch, mem_cgroup_try_charge() is no longer used by anyone,
so we can remove it.

Signed-off-by: Daisuke Nishimura <[email protected]>
Acked-by: KAMEZAWA Hiroyuki <[email protected]>
---
include/linux/memcontrol.h | 8 --------
mm/memcontrol.c | 21 +--------------------
2 files changed, 1 insertions(+), 28 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 8752052..74c4009 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -40,8 +40,6 @@ struct mm_struct;
extern int mem_cgroup_newpage_charge(struct page *page, struct mm_struct *mm,
gfp_t gfp_mask);
/* for swap handling */
-extern int mem_cgroup_try_charge(struct mm_struct *mm,
- gfp_t gfp_mask, struct mem_cgroup **ptr);
extern int mem_cgroup_try_charge_swapin(struct mm_struct *mm,
struct page *page, gfp_t mask, struct mem_cgroup **ptr);
extern void mem_cgroup_commit_charge_swapin(struct page *page,
@@ -135,12 +133,6 @@ static inline int mem_cgroup_cache_charge(struct page *page,
return 0;
}

-static inline int mem_cgroup_try_charge(struct mm_struct *mm,
- gfp_t gfp_mask, struct mem_cgroup **ptr)
-{
- return 0;
-}
-
static inline int mem_cgroup_try_charge_swapin(struct mm_struct *mm,
struct page *page, gfp_t gfp_mask, struct mem_cgroup **ptr)
{
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 0683459..9877b03 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -809,27 +809,8 @@ nomem:
return -ENOMEM;
}

-/**
- * mem_cgroup_try_charge - get charge of PAGE_SIZE.
- * @mm: an mm_struct which is charged against. (when *memcg is NULL)
- * @gfp_mask: gfp_mask for reclaim.
- * @memcg: a pointer to memory cgroup which is charged against.
- *
- * charge against memory cgroup pointed by *memcg. if *memcg == NULL, estimated
- * memory cgroup from @mm is got and stored in *memcg.
- *
- * Returns 0 if success. -ENOMEM at failure.
- * This call can invoke OOM-Killer.
- */
-
-int mem_cgroup_try_charge(struct mm_struct *mm,
- gfp_t mask, struct mem_cgroup **memcg)
-{
- return __mem_cgroup_try_charge(mm, mask, memcg, true);
-}
-
/*
- * commit a charge got by mem_cgroup_try_charge() and makes page_cgroup to be
+ * commit a charge got by __mem_cgroup_try_charge() and makes page_cgroup to be
* USED state. If already USED, uncharge and return.
*/

2008-12-08 02:35:18

by Daisuke Nishimura

Subject: [PATCH -mmotm 3/4] memcg: avoid deadlock caused by race between oom and cpuset_attach

mpol_rebind_mm(), which can be called from cpuset_attach(), does down_write(mm->mmap_sem).
This means down_write(mm->mmap_sem) can be called under cgroup_mutex.

OTOH, the page fault path does down_read(mm->mmap_sem) and calls mem_cgroup_try_charge_xxx(),
which may eventually call mem_cgroup_out_of_memory(). And mem_cgroup_out_of_memory()
calls cgroup_lock().
This means cgroup_lock() can be called under down_read(mm->mmap_sem).

If those two paths race, a deadlock can happen.

This patch avoids the deadlock by:
- removing cgroup_lock() from mem_cgroup_out_of_memory().
- defining a new mutex (memcg_tasklist) and serializing mem_cgroup_move_task()
(the ->attach handler of the memory cgroup) and mem_cgroup_out_of_memory().

Signed-off-by: Daisuke Nishimura <[email protected]>
Reviewed-by: KAMEZAWA Hiroyuki <[email protected],com>
Acked-by: Balbir Singh <[email protected]>
---
mm/memcontrol.c | 5 +++++
mm/oom_kill.c | 2 --
2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 9877b03..fec4fc3 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -51,6 +51,7 @@ static int really_do_swap_account __initdata = 1; /* for remember boot option*/
#define do_swap_account (0)
#endif

+static DEFINE_MUTEX(memcg_tasklist); /* can be hold under cgroup_mutex */

/*
* Statistics for memory cgroup.
@@ -797,7 +798,9 @@ static int __mem_cgroup_try_charge(struct mm_struct *mm,

if (!nr_retries--) {
if (oom) {
+ mutex_lock(&memcg_tasklist);
mem_cgroup_out_of_memory(mem_over_limit, gfp_mask);
+ mutex_unlock(&memcg_tasklist);
mem_over_limit->last_oom_jiffies = jiffies;
}
goto nomem;
@@ -2173,10 +2176,12 @@ static void mem_cgroup_move_task(struct cgroup_subsys *ss,
struct cgroup *old_cont,
struct task_struct *p)
{
+ mutex_lock(&memcg_tasklist);
/*
* FIXME: It's better to move charges of this process from old
* memcg to new memcg. But it's just on TODO-List now.
*/
+ mutex_unlock(&memcg_tasklist);
}

struct cgroup_subsys mem_cgroup_subsys = {
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index fd150e3..40ba050 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -429,7 +429,6 @@ void mem_cgroup_out_of_memory(struct mem_cgroup *mem, gfp_t gfp_mask)
unsigned long points = 0;
struct task_struct *p;

- cgroup_lock();
read_lock(&tasklist_lock);
retry:
p = select_bad_process(&points, mem);
@@ -444,7 +443,6 @@ retry:
goto retry;
out:
read_unlock(&tasklist_lock);
- cgroup_unlock();
}
#endif

2008-12-08 02:35:40

by Daisuke Nishimura

Subject: [PATCH -mmotm 4/4] memcg: change try_to_free_pages to hierarchical_reclaim

mem_cgroup_hierarchical_reclaim() works properly even when !use_hierarchy now
(by memcg-hierarchy-avoid-unnecessary-reclaim.patch), so it should be used
instead of try_to_free_mem_cgroup_pages() in many cases.

The only exception is force_empty, since the group has no children in that case.


Signed-off-by: Daisuke Nishimura <[email protected]>
Acked-by: KAMEZAWA Hiroyuki <[email protected]>
Acked-by: Balbir Singh <[email protected]>
---
mm/memcontrol.c | 12 ++++--------
1 files changed, 4 insertions(+), 8 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index fec4fc3..b2b5c57 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1400,8 +1400,7 @@ int mem_cgroup_shrink_usage(struct mm_struct *mm, gfp_t gfp_mask)
rcu_read_unlock();

do {
- progress = try_to_free_mem_cgroup_pages(mem, gfp_mask, true,
- get_swappiness(mem));
+ progress = mem_cgroup_hierarchical_reclaim(mem, gfp_mask, true);
progress += mem_cgroup_check_under_limit(mem);
} while (!progress && --retry);

@@ -1468,10 +1467,8 @@ static int mem_cgroup_resize_limit(struct mem_cgroup *memcg,
if (!ret)
break;

- progress = try_to_free_mem_cgroup_pages(memcg,
- GFP_KERNEL,
- false,
- get_swappiness(memcg));
+ progress = mem_cgroup_hierarchical_reclaim(memcg, GFP_KERNEL,
+ false);
if (!progress) retry_count--;
}

@@ -1515,8 +1512,7 @@ int mem_cgroup_resize_memsw_limit(struct mem_cgroup *memcg,
break;

oldusage = res_counter_read_u64(&memcg->memsw, RES_USAGE);
- try_to_free_mem_cgroup_pages(memcg, GFP_KERNEL, true,
- get_swappiness(memcg));
+ mem_cgroup_hierarchical_reclaim(memcg, GFP_KERNEL, true);
curusage = res_counter_read_u64(&memcg->memsw, RES_USAGE);
if (curusage >= oldusage)
retry_count--;

2008-12-08 02:49:13

by Daisuke Nishimura

Subject: Re: [PATCH -mmotm 3/4] memcg: avoid deadlock caused by race between oom and cpuset_attach

On Mon, 8 Dec 2008 11:05:11 +0900, Daisuke Nishimura <[email protected]> wrote:
> mpol_rebind_mm(), which can be called from cpuset_attach(), does down_write(mm->mmap_sem).
> This means down_write(mm->mmap_sem) can be called under cgroup_mutex.
>
> OTOH, page fault path does down_read(mm->mmap_sem) and calls mem_cgroup_try_charge_xxx(),
> which may eventually calls mem_cgroup_out_of_memory(). And mem_cgroup_out_of_memory()
> calls cgroup_lock().
> This means cgroup_lock() can be called under down_read(mm->mmap_sem).
>
> If those two paths race, dead lock can happen.
>
> This patch avoid this dead lock by:
> - remove cgroup_lock() from mem_cgroup_out_of_memory().
> - define new mutex (memcg_tasklist) and serialize mem_cgroup_move_task()
> (->attach handler of memory cgroup) and mem_cgroup_out_of_memory.
>
> Signed-off-by: Daisuke Nishimura <[email protected]>
> Reviewed-by: KAMEZAWA Hiroyuki <[email protected],com>
Ooops, Kamezawa-san's address was invalid...

Reviewed-by: KAMEZAWA Hiroyuki <[email protected]>


Sorry.
Daisuke Nishimura.

> Acked-by: Balbir Singh <[email protected]>
> ---
> mm/memcontrol.c | 5 +++++
> mm/oom_kill.c | 2 --
> 2 files changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 9877b03..fec4fc3 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -51,6 +51,7 @@ static int really_do_swap_account __initdata = 1; /* for remember boot option*/
> #define do_swap_account (0)
> #endif
>
> +static DEFINE_MUTEX(memcg_tasklist); /* can be hold under cgroup_mutex */
>
> /*
> * Statistics for memory cgroup.
> @@ -797,7 +798,9 @@ static int __mem_cgroup_try_charge(struct mm_struct *mm,
>
> if (!nr_retries--) {
> if (oom) {
> + mutex_lock(&memcg_tasklist);
> mem_cgroup_out_of_memory(mem_over_limit, gfp_mask);
> + mutex_unlock(&memcg_tasklist);
> mem_over_limit->last_oom_jiffies = jiffies;
> }
> goto nomem;
> @@ -2173,10 +2176,12 @@ static void mem_cgroup_move_task(struct cgroup_subsys *ss,
> struct cgroup *old_cont,
> struct task_struct *p)
> {
> + mutex_lock(&memcg_tasklist);
> /*
> * FIXME: It's better to move charges of this process from old
> * memcg to new memcg. But it's just on TODO-List now.
> */
> + mutex_unlock(&memcg_tasklist);
> }
>
> struct cgroup_subsys mem_cgroup_subsys = {
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index fd150e3..40ba050 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -429,7 +429,6 @@ void mem_cgroup_out_of_memory(struct mem_cgroup *mem, gfp_t gfp_mask)
> unsigned long points = 0;
> struct task_struct *p;
>
> - cgroup_lock();
> read_lock(&tasklist_lock);
> retry:
> p = select_bad_process(&points, mem);
> @@ -444,7 +443,6 @@ retry:
> goto retry;
> out:
> read_unlock(&tasklist_lock);
> - cgroup_unlock();
> }
> #endif
>

2008-12-09 06:41:19

by Paul Menage

Subject: Re: [PATCH -mmotm 3/4] memcg: avoid deadlock caused by race between oom and cpuset_attach

On Sun, Dec 7, 2008 at 6:05 PM, Daisuke Nishimura
<[email protected]> wrote:
> mpol_rebind_mm(), which can be called from cpuset_attach(), does down_write(mm->mmap_sem).
> This means down_write(mm->mmap_sem) can be called under cgroup_mutex.
>
> OTOH, page fault path does down_read(mm->mmap_sem) and calls mem_cgroup_try_charge_xxx(),
> which may eventually calls mem_cgroup_out_of_memory(). And mem_cgroup_out_of_memory()
> calls cgroup_lock().
> This means cgroup_lock() can be called under down_read(mm->mmap_sem).

We should probably try to get cgroup_lock() out of the cpuset code
that calls mpol_rebind_mm() as well.

Paul

>
> If those two paths race, dead lock can happen.
>
> This patch avoid this dead lock by:
> - remove cgroup_lock() from mem_cgroup_out_of_memory().
> - define new mutex (memcg_tasklist) and serialize mem_cgroup_move_task()
> (->attach handler of memory cgroup) and mem_cgroup_out_of_memory.
>
> Signed-off-by: Daisuke Nishimura <[email protected]>
> Reviewed-by: KAMEZAWA Hiroyuki <[email protected],com>
> Acked-by: Balbir Singh <[email protected]>
> ---
> mm/memcontrol.c | 5 +++++
> mm/oom_kill.c | 2 --
> 2 files changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 9877b03..fec4fc3 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -51,6 +51,7 @@ static int really_do_swap_account __initdata = 1; /* for remember boot option*/
> #define do_swap_account (0)
> #endif
>
> +static DEFINE_MUTEX(memcg_tasklist); /* can be hold under cgroup_mutex */
>
> /*
> * Statistics for memory cgroup.
> @@ -797,7 +798,9 @@ static int __mem_cgroup_try_charge(struct mm_struct *mm,
>
> if (!nr_retries--) {
> if (oom) {
> + mutex_lock(&memcg_tasklist);
> mem_cgroup_out_of_memory(mem_over_limit, gfp_mask);
> + mutex_unlock(&memcg_tasklist);
> mem_over_limit->last_oom_jiffies = jiffies;
> }
> goto nomem;
> @@ -2173,10 +2176,12 @@ static void mem_cgroup_move_task(struct cgroup_subsys *ss,
> struct cgroup *old_cont,
> struct task_struct *p)
> {
> + mutex_lock(&memcg_tasklist);
> /*
> * FIXME: It's better to move charges of this process from old
> * memcg to new memcg. But it's just on TODO-List now.
> */
> + mutex_unlock(&memcg_tasklist);
> }
>
> struct cgroup_subsys mem_cgroup_subsys = {
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index fd150e3..40ba050 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -429,7 +429,6 @@ void mem_cgroup_out_of_memory(struct mem_cgroup *mem, gfp_t gfp_mask)
> unsigned long points = 0;
> struct task_struct *p;
>
> - cgroup_lock();
> read_lock(&tasklist_lock);
> retry:
> p = select_bad_process(&points, mem);
> @@ -444,7 +443,6 @@ retry:
> goto retry;
> out:
> read_unlock(&tasklist_lock);
> - cgroup_unlock();
> }
> #endif
>
>