2018-08-21 21:38:12

by Roman Gushchin

Subject: [PATCH v2 1/3] mm: rework memcg kernel stack accounting

If CONFIG_VMAP_STACK is set, kernel stacks are allocated
using __vmalloc_node_range() with __GFP_ACCOUNT. So kernel
stack pages are charged against corresponding memory cgroups
on allocation and uncharged on releasing them.

The problem is that we do cache kernel stacks in small
per-cpu caches and do reuse them for new tasks, which can
belong to different memory cgroups.

Each stack page still holds a reference to the original cgroup,
so the cgroup can't be released until the vmap area is released.

To make this happen we need more than two subsequent exits
without forks in between on the current cpu, which makes it
very unlikely to happen. As a result, I saw a significant number
of dying cgroups (in theory, up to 2 * number_of_cpu +
number_of_tasks), which can't be released even by significant
memory pressure.

As a cgroup structure can take a significant amount of memory
(first of all, per-cpu data like memcg statistics), it leads
to a noticeable waste of memory.
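
To illustrate the mechanism, here is a toy, single-cpu sketch of the
caching logic (illustrative only: the real code is the per-cpu
cached_stacks[] in kernel/fork.c, and the charge actually lives in
each page's page->mem_cgroup rather than in an explicit field; the
names stack_slot/cache_stack/reuse_stack are made up for the sketch):

#include <stddef.h>

#define NR_CACHED_STACKS 2

struct mem_cgroup;                      /* opaque in this sketch */

struct stack_slot {
        void *stack;                    /* vmap'ed stack, or NULL if empty */
        struct mem_cgroup *charged_to;  /* memcg its pages still pin */
};

static struct stack_slot cached_stacks[NR_CACHED_STACKS];

/* On task exit: stash the stack instead of freeing it. */
static int cache_stack(void *stack, struct mem_cgroup *memcg)
{
        for (int i = 0; i < NR_CACHED_STACKS; i++) {
                if (!cached_stacks[i].stack) {
                        cached_stacks[i].stack = stack;
                        cached_stacks[i].charged_to = memcg;
                        return 0;       /* memcg stays pinned by the cached stack */
                }
        }
        return -1;      /* cache full: only now is the stack freed and the charge dropped */
}

/* On fork: a new task, possibly from a different cgroup, picks up a
 * cached stack, and the stale charge silently follows it. */
static void *reuse_stack(void)
{
        for (int i = 0; i < NR_CACHED_STACKS; i++) {
                if (cached_stacks[i].stack) {
                        void *stack = cached_stacks[i].stack;

                        cached_stacks[i].stack = NULL;
                        return stack;
                }
        }
        return NULL;
}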

Signed-off-by: Roman Gushchin <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Konstantin Khlebnikov <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Shakeel Butt <[email protected]>
---
include/linux/memcontrol.h | 13 +++++++++-
kernel/fork.c | 51 +++++++++++++++++++++++++++++++++-----
2 files changed, 57 insertions(+), 7 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 0e6c515fb698..b12a553048e2 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1250,10 +1250,11 @@ struct kmem_cache *memcg_kmem_get_cache(struct kmem_cache *cachep);
void memcg_kmem_put_cache(struct kmem_cache *cachep);
int memcg_kmem_charge_memcg(struct page *page, gfp_t gfp, int order,
struct mem_cgroup *memcg);
+
+#ifdef CONFIG_MEMCG_KMEM
int memcg_kmem_charge(struct page *page, gfp_t gfp, int order);
void memcg_kmem_uncharge(struct page *page, int order);

-#ifdef CONFIG_MEMCG_KMEM
extern struct static_key_false memcg_kmem_enabled_key;
extern struct workqueue_struct *memcg_kmem_cache_wq;

@@ -1289,6 +1290,16 @@ extern int memcg_expand_shrinker_maps(int new_id);
extern void memcg_set_shrinker_bit(struct mem_cgroup *memcg,
int nid, int shrinker_id);
#else
+
+static inline int memcg_kmem_charge(struct page *page, gfp_t gfp, int order)
+{
+ return 0;
+}
+
+static inline void memcg_kmem_uncharge(struct page *page, int order)
+{
+}
+
#define for_each_memcg_cache_index(_idx) \
for (; NULL; )

diff --git a/kernel/fork.c b/kernel/fork.c
index 5ee74c113381..09b5b9a40166 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -223,9 +223,14 @@ static unsigned long *alloc_thread_stack_node(struct task_struct *tsk, int node)
return s->addr;
}

+ /*
+ * Allocated stacks are cached and later reused by new threads,
+ * so memcg accounting is performed manually on assigning/releasing
+ * stacks to tasks. Drop __GFP_ACCOUNT.
+ */
stack = __vmalloc_node_range(THREAD_SIZE, THREAD_ALIGN,
VMALLOC_START, VMALLOC_END,
- THREADINFO_GFP,
+ THREADINFO_GFP & ~__GFP_ACCOUNT,
PAGE_KERNEL,
0, node, __builtin_return_address(0));

@@ -248,9 +253,20 @@ static unsigned long *alloc_thread_stack_node(struct task_struct *tsk, int node)
static inline void free_thread_stack(struct task_struct *tsk)
{
#ifdef CONFIG_VMAP_STACK
- if (task_stack_vm_area(tsk)) {
+ struct vm_struct *vm = task_stack_vm_area(tsk);
+
+ if (vm) {
int i;

+ for (i = 0; i < THREAD_SIZE / PAGE_SIZE; i++) {
+ mod_memcg_page_state(vm->pages[i],
+ MEMCG_KERNEL_STACK_KB,
+ -(int)(PAGE_SIZE / 1024));
+
+ memcg_kmem_uncharge(vm->pages[i],
+ compound_order(vm->pages[i]));
+ }
+
for (i = 0; i < NR_CACHED_STACKS; i++) {
if (this_cpu_cmpxchg(cached_stacks[i],
NULL, tsk->stack_vm_area) != NULL)
@@ -350,10 +366,6 @@ static void account_kernel_stack(struct task_struct *tsk, int account)
NR_KERNEL_STACK_KB,
PAGE_SIZE / 1024 * account);
}
-
- /* All stack pages belong to the same memcg. */
- mod_memcg_page_state(vm->pages[0], MEMCG_KERNEL_STACK_KB,
- account * (THREAD_SIZE / 1024));
} else {
/*
* All stack pages are in the same zone and belong to the
@@ -369,6 +381,30 @@ static void account_kernel_stack(struct task_struct *tsk, int account)
}
}

+static int memcg_charge_kernel_stack(struct task_struct *tsk)
+{
+#ifdef CONFIG_VMAP_STACK
+ struct vm_struct *vm = task_stack_vm_area(tsk);
+ int ret;
+
+ if (vm) {
+ int i;
+
+ for (i = 0; i < THREAD_SIZE / PAGE_SIZE; i++) {
+ ret = memcg_kmem_charge(vm->pages[i], GFP_KERNEL,
+ compound_order(vm->pages[i]));
+ if (ret)
+ return ret;
+
+ mod_memcg_page_state(vm->pages[i],
+ MEMCG_KERNEL_STACK_KB,
+ PAGE_SIZE / 1024);
+ }
+ }
+#endif
+ return 0;
+}
+
static void release_task_stack(struct task_struct *tsk)
{
if (WARN_ON(tsk->state != TASK_DEAD))
@@ -807,6 +843,9 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node)
if (!stack)
goto free_tsk;

+ if (memcg_charge_kernel_stack(tsk))
+ goto free_stack;
+
stack_vm_area = task_stack_vm_area(tsk);

err = arch_dup_task_struct(tsk, orig);
--
2.17.1



2018-08-21 21:39:13

by Roman Gushchin

Subject: [PATCH v2 3/3] mm: don't miss the last page because of round-off error

I've noticed that dying memory cgroups are often pinned
in memory by a single pagecache page. Even under moderate
memory pressure they sometimes stayed in that state
for a long time, which looked strange.

My investigation showed that the problem is caused by
applying the LRU pressure balancing math:

scan = div64_u64(scan * fraction[lru], denominator),

where

denominator = fraction[anon] + fraction[file] + 1.

Because fraction[lru] is always less than denominator,
the result is always 0 if the initial scan size is 1.

This means the last page is never scanned and has
no chance of being reclaimed.

Fix this by rounding up the result of the division.

In practice this change significantly speeds up the
reclaim of dying cgroups.
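
To see the round-off concretely, here is a small standalone sketch
(the numbers and the ROUND_UP_DIV name are made up for illustration;
the kernel does this with div64_u64() on u64 values, emulated below
with unsigned long long):

#include <stdio.h>

/* Local stand-in mirroring the new DIV64_U64_ROUND_UP semantics. */
#define ROUND_UP_DIV(ll, d)     (((ll) + (d) - 1) / (d))

int main(void)
{
        /* One page left on the LRU; recent reclaim stats gave, say,
         * fraction[file] = 98 and fraction[anon] = 2, so
         * denominator = 98 + 2 + 1 = 101. */
        unsigned long long scan = 1, fraction_file = 98, denominator = 101;

        /* Old behaviour: with scan == 1, fraction[file] < denominator,
         * so truncating division yields 0 and the last page is never
         * scanned. */
        printf("truncated:  %llu\n", scan * fraction_file / denominator);

        /* New behaviour: rounding up scans that last page. */
        printf("rounded up: %llu\n",
               ROUND_UP_DIV(scan * fraction_file, denominator));

        return 0;
}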

Signed-off-by: Roman Gushchin <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Konstantin Khlebnikov <[email protected]>
Cc: Matthew Wilcox <[email protected]>
---
include/linux/math64.h | 2 ++
mm/vmscan.c | 6 ++++--
2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/include/linux/math64.h b/include/linux/math64.h
index 837f2f2d1d34..94af3d9c73e7 100644
--- a/include/linux/math64.h
+++ b/include/linux/math64.h
@@ -281,4 +281,6 @@ static inline u64 mul_u64_u32_div(u64 a, u32 mul, u32 divisor)
}
#endif /* mul_u64_u32_div */

+#define DIV64_U64_ROUND_UP(ll, d) div64_u64((ll) + (d) - 1, (d))
+
#endif /* _LINUX_MATH64_H */
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 4375b1e9bd56..2d78c5cff15e 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2445,9 +2445,11 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
/*
* Scan types proportional to swappiness and
* their relative recent reclaim efficiency.
+ * Make sure we don't miss the last page
+ * because of a round-off error.
*/
- scan = div64_u64(scan * fraction[file],
- denominator);
+ scan = DIV64_U64_ROUND_UP(scan * fraction[file],
+ denominator);
break;
case SCAN_FILE:
case SCAN_ANON:
--
2.17.1


2018-08-21 22:08:51

by Roman Gushchin

Subject: [PATCH v2 2/3] mm: drain memcg stocks on css offlining

Memcg charging is batched using per-cpu stocks, so an offline memcg
can be pinned by a cached charge until a process belonging to some
other cgroup charges memory on the same cpu. In other words, cached
charges can prevent a memory cgroup from being reclaimed for some
time, without any clear need.

Let's optimize this by explicitly draining all stocks on css offlining.
As draining is performed asynchronously, and is skipped if a parallel
drain is already in progress, it's cheap.
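
For readers unfamiliar with the stock mechanism, here is a toy,
single-cpu model (illustrative only: the real code is the per-cpu
memcg_stock handling in mm/memcontrol.c, which batches, drains and
synchronizes differently; consume_stock()/drain_stock() below only
mirror the idea, not the actual implementation):

#include <stdbool.h>
#include <stdio.h>

/* Toy stand-in for the kernel's struct mem_cgroup. */
struct mem_cgroup { const char *name; };

/*
 * A stock holds a pre-charged batch for exactly one memcg, and that
 * memcg stays pinned for as long as the batch sits in the stock.
 */
struct memcg_stock {
        struct mem_cgroup *cached;      /* memcg the batch is charged to */
        unsigned int nr_pages;          /* pre-charged pages not yet consumed */
};

static struct memcg_stock stock;        /* one per CPU in the kernel */

/* Fast path: satisfy a charge from the local stock if it matches. */
static bool consume_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
{
        if (stock.cached == memcg && stock.nr_pages >= nr_pages) {
                stock.nr_pages -= nr_pages;
                return true;
        }
        return false;
}

/* Return the cached batch to its memcg and drop the pin. */
static void drain_stock(void)
{
        if (stock.cached)
                printf("returning %u pages to %s, unpinning it\n",
                       stock.nr_pages, stock.cached->name);
        stock.cached = NULL;
        stock.nr_pages = 0;
}

int main(void)
{
        struct mem_cgroup dying = { "dying-memcg" };

        /* A past charge on this cpu left a batch behind... */
        stock.cached = &dying;
        stock.nr_pages = 31;

        /* ...so even after the cgroup is rmdir'ed it stays pinned until
         * some other cgroup charges on this cpu, or the stock is drained.
         * The patch triggers the drain from css offlining. */
        if (!consume_stock(&dying, 64)) /* request too large, not consumed */
                drain_stock();
        return 0;
}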

Signed-off-by: Roman Gushchin <[email protected]>
Reviewed-by: Shakeel Butt <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Konstantin Khlebnikov <[email protected]>
Cc: Tejun Heo <[email protected]>
---
mm/memcontrol.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 6a921890739f..c2a254f74f30 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -4518,6 +4518,8 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
memcg_offline_kmem(memcg);
wb_memcg_offline(memcg);

+ drain_all_stock(memcg);
+
mem_cgroup_id_put(memcg);
}

--
2.17.1


2018-08-21 22:16:42

by Shakeel Butt

Subject: Re: [PATCH v2 1/3] mm: rework memcg kernel stack accounting

On Tue, Aug 21, 2018 at 2:36 PM Roman Gushchin <[email protected]> wrote:
>
> If CONFIG_VMAP_STACK is set, kernel stacks are allocated
> using __vmalloc_node_range() with __GFP_ACCOUNT. So kernel
> stack pages are charged against corresponding memory cgroups
> on allocation and uncharged on releasing them.
>
> The problem is that we do cache kernel stacks in small
> per-cpu caches and do reuse them for new tasks, which can
> belong to different memory cgroups.
>
> Each stack page still holds a reference to the original cgroup,
> so the cgroup can't be released until the vmap area is released.
>
> To make this happen we need more than two subsequent exits
> without forks in between on the current cpu, which makes it
> very unlikely to happen. As a result, I saw a significant number
> of dying cgroups (in theory, up to 2 * number_of_cpu +
> number_of_tasks), which can't be released even by significant
> memory pressure.
>
> As a cgroup structure can take a significant amount of memory
> (first of all, per-cpu data like memcg statistics), it leads
> to a noticeable waste of memory.
>
> Signed-off-by: Roman Gushchin <[email protected]>

Reviewed-by: Shakeel Butt <[email protected]>

BTW this makes a very good use-case for optimizing kmem uncharging
similar to what you did for skmem uncharging.

> Cc: Johannes Weiner <[email protected]>
> Cc: Michal Hocko <[email protected]>
> Cc: Andy Lutomirski <[email protected]>
> Cc: Konstantin Khlebnikov <[email protected]>
> Cc: Tejun Heo <[email protected]>
> Cc: Shakeel Butt <[email protected]>
> ---
> include/linux/memcontrol.h | 13 +++++++++-
> kernel/fork.c | 51 +++++++++++++++++++++++++++++++++-----
> 2 files changed, 57 insertions(+), 7 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 0e6c515fb698..b12a553048e2 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -1250,10 +1250,11 @@ struct kmem_cache *memcg_kmem_get_cache(struct kmem_cache *cachep);
> void memcg_kmem_put_cache(struct kmem_cache *cachep);
> int memcg_kmem_charge_memcg(struct page *page, gfp_t gfp, int order,
> struct mem_cgroup *memcg);
> +
> +#ifdef CONFIG_MEMCG_KMEM
> int memcg_kmem_charge(struct page *page, gfp_t gfp, int order);
> void memcg_kmem_uncharge(struct page *page, int order);
>
> -#ifdef CONFIG_MEMCG_KMEM
> extern struct static_key_false memcg_kmem_enabled_key;
> extern struct workqueue_struct *memcg_kmem_cache_wq;
>
> @@ -1289,6 +1290,16 @@ extern int memcg_expand_shrinker_maps(int new_id);
> extern void memcg_set_shrinker_bit(struct mem_cgroup *memcg,
> int nid, int shrinker_id);
> #else
> +
> +static inline int memcg_kmem_charge(struct page *page, gfp_t gfp, int order)
> +{
> + return 0;
> +}
> +
> +static inline void memcg_kmem_uncharge(struct page *page, int order)
> +{
> +}
> +
> #define for_each_memcg_cache_index(_idx) \
> for (; NULL; )
>
> diff --git a/kernel/fork.c b/kernel/fork.c
> index 5ee74c113381..09b5b9a40166 100644
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -223,9 +223,14 @@ static unsigned long *alloc_thread_stack_node(struct task_struct *tsk, int node)
> return s->addr;
> }
>
> + /*
> + * Allocated stacks are cached and later reused by new threads,
> + * so memcg accounting is performed manually on assigning/releasing
> + * stacks to tasks. Drop __GFP_ACCOUNT.
> + */
> stack = __vmalloc_node_range(THREAD_SIZE, THREAD_ALIGN,
> VMALLOC_START, VMALLOC_END,
> - THREADINFO_GFP,
> + THREADINFO_GFP & ~__GFP_ACCOUNT,
> PAGE_KERNEL,
> 0, node, __builtin_return_address(0));
>
> @@ -248,9 +253,20 @@ static unsigned long *alloc_thread_stack_node(struct task_struct *tsk, int node)
> static inline void free_thread_stack(struct task_struct *tsk)
> {
> #ifdef CONFIG_VMAP_STACK
> - if (task_stack_vm_area(tsk)) {
> + struct vm_struct *vm = task_stack_vm_area(tsk);
> +
> + if (vm) {
> int i;
>
> + for (i = 0; i < THREAD_SIZE / PAGE_SIZE; i++) {
> + mod_memcg_page_state(vm->pages[i],
> + MEMCG_KERNEL_STACK_KB,
> + -(int)(PAGE_SIZE / 1024));
> +
> + memcg_kmem_uncharge(vm->pages[i],
> + compound_order(vm->pages[i]));
> + }
> +
> for (i = 0; i < NR_CACHED_STACKS; i++) {
> if (this_cpu_cmpxchg(cached_stacks[i],
> NULL, tsk->stack_vm_area) != NULL)
> @@ -350,10 +366,6 @@ static void account_kernel_stack(struct task_struct *tsk, int account)
> NR_KERNEL_STACK_KB,
> PAGE_SIZE / 1024 * account);
> }
> -
> - /* All stack pages belong to the same memcg. */
> - mod_memcg_page_state(vm->pages[0], MEMCG_KERNEL_STACK_KB,
> - account * (THREAD_SIZE / 1024));
> } else {
> /*
> * All stack pages are in the same zone and belong to the
> @@ -369,6 +381,30 @@ static void account_kernel_stack(struct task_struct *tsk, int account)
> }
> }
>
> +static int memcg_charge_kernel_stack(struct task_struct *tsk)
> +{
> +#ifdef CONFIG_VMAP_STACK
> + struct vm_struct *vm = task_stack_vm_area(tsk);
> + int ret;
> +
> + if (vm) {
> + int i;
> +
> + for (i = 0; i < THREAD_SIZE / PAGE_SIZE; i++) {
> + ret = memcg_kmem_charge(vm->pages[i], GFP_KERNEL,
> + compound_order(vm->pages[i]));
> + if (ret)
> + return ret;
> +
> + mod_memcg_page_state(vm->pages[i],
> + MEMCG_KERNEL_STACK_KB,
> + PAGE_SIZE / 1024);
> + }
> + }
> +#endif
> + return 0;
> +}
> +
> static void release_task_stack(struct task_struct *tsk)
> {
> if (WARN_ON(tsk->state != TASK_DEAD))
> @@ -807,6 +843,9 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node)
> if (!stack)
> goto free_tsk;
>
> + if (memcg_charge_kernel_stack(tsk))
> + goto free_stack;
> +
> stack_vm_area = task_stack_vm_area(tsk);
>
> err = arch_dup_task_struct(tsk, orig);
> --
> 2.17.1
>

2018-08-21 22:37:52

by Roman Gushchin

Subject: Re: [PATCH v2 1/3] mm: rework memcg kernel stack accounting

On Tue, Aug 21, 2018 at 03:10:52PM -0700, Shakeel Butt wrote:
> On Tue, Aug 21, 2018 at 2:36 PM Roman Gushchin <[email protected]> wrote:
> >
> > If CONFIG_VMAP_STACK is set, kernel stacks are allocated
> > using __vmalloc_node_range() with __GFP_ACCOUNT. So kernel
> > stack pages are charged against corresponding memory cgroups
> > on allocation and uncharged on releasing them.
> >
> > The problem is that we do cache kernel stacks in small
> > per-cpu caches and do reuse them for new tasks, which can
> > belong to different memory cgroups.
> >
> > Each stack page still holds a reference to the original cgroup,
> > so the cgroup can't be released until the vmap area is released.
> >
> > To make this happen we need more than two subsequent exits
> > without forks in between on the current cpu, which makes it
> > very unlikely to happen. As a result, I saw a significant number
> > of dying cgroups (in theory, up to 2 * number_of_cpu +
> > number_of_tasks), which can't be released even by significant
> > memory pressure.
> >
> > As a cgroup structure can take a significant amount of memory
> > (first of all, per-cpu data like memcg statistics), it leads
> > to a noticeable waste of memory.
> >
> > Signed-off-by: Roman Gushchin <[email protected]>
>
> Reviewed-by: Shakeel Butt <[email protected]>

Thanks!

>
> BTW this makes a very good use-case for optimizing kmem uncharging
> similar to what you did for skmem uncharging.

Good point!
Let me prepare the patch.

2018-08-22 14:21:57

by Michal Hocko

Subject: Re: [PATCH v2 1/3] mm: rework memcg kernel stack accounting

On Tue 21-08-18 14:35:57, Roman Gushchin wrote:
> If CONFIG_VMAP_STACK is set, kernel stacks are allocated
> using __vmalloc_node_range() with __GFP_ACCOUNT. So kernel
> stack pages are charged against corresponding memory cgroups
> on allocation and uncharged on releasing them.
>
> The problem is that we do cache kernel stacks in small
> per-cpu caches and do reuse them for new tasks, which can
> belong to different memory cgroups.
>
> Each stack page still holds a reference to the original cgroup,
> so the cgroup can't be released until the vmap area is released.
>
> To make this happen we need more than two subsequent exits
> without forks in between on the current cpu, which makes it
> very unlikely to happen. As a result, I saw a significant number
> of dying cgroups (in theory, up to 2 * number_of_cpu +
> number_of_tasks), which can't be released even by significant
> memory pressure.
>
> As a cgroup structure can take a significant amount of memory
> (first of all, per-cpu data like memcg statistics), it leads
> to a noticeable waste of memory.
>
> Signed-off-by: Roman Gushchin <[email protected]>
> Cc: Johannes Weiner <[email protected]>
> Cc: Michal Hocko <[email protected]>
> Cc: Andy Lutomirski <[email protected]>
> Cc: Konstantin Khlebnikov <[email protected]>
> Cc: Tejun Heo <[email protected]>
> Cc: Shakeel Butt <[email protected]>

Looks good to me. Two nits below.

I am not sure stable tree backport is really needed but it would be nice
to put
Fixes: ac496bf48d97 ("fork: Optimize task creation by caching two thread stacks per CPU if CONFIG_VMAP_STACK=y")

Acked-by: Michal Hocko <[email protected]>

> @@ -248,9 +253,20 @@ static unsigned long *alloc_thread_stack_node(struct task_struct *tsk, int node)
> static inline void free_thread_stack(struct task_struct *tsk)
> {
> #ifdef CONFIG_VMAP_STACK
> - if (task_stack_vm_area(tsk)) {
> + struct vm_struct *vm = task_stack_vm_area(tsk);
> +
> + if (vm) {
> int i;
>
> + for (i = 0; i < THREAD_SIZE / PAGE_SIZE; i++) {
> + mod_memcg_page_state(vm->pages[i],
> + MEMCG_KERNEL_STACK_KB,
> + -(int)(PAGE_SIZE / 1024));
> +
> + memcg_kmem_uncharge(vm->pages[i],
> + compound_order(vm->pages[i]));

when do we have order > 0 here? Also I was wondering how come this
doesn't blow up on partially charged stacks but both
mod_memcg_page_state and memcg_kmem_uncharge check for page->mem_cgroup
so this is safe. Maybe a comment would save people from scratching their
heads.
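
For reference, roughly what both helpers do with a page that was
never charged (paraphrased, not the exact code):

/* include/linux/memcontrol.h (paraphrased) */
static inline void mod_memcg_page_state(struct page *page,
                                        int idx, int val)
{
        if (page->mem_cgroup)           /* never charged: no-op */
                mod_memcg_state(page->mem_cgroup, idx, val);
}

/* mm/memcontrol.c (paraphrased) */
void memcg_kmem_uncharge(struct page *page, int order)
{
        struct mem_cgroup *memcg = page->mem_cgroup;

        if (!memcg)                     /* never charged: nothing to undo */
                return;
        /* ... uncharge 1 << order pages from memcg ... */
}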

--
Michal Hocko
SUSE Labs

2018-08-23 16:31:29

by Roman Gushchin

Subject: Re: [PATCH v2 1/3] mm: rework memcg kernel stack accounting

On Wed, Aug 22, 2018 at 04:12:13PM +0200, Michal Hocko wrote:
> On Tue 21-08-18 14:35:57, Roman Gushchin wrote:
> > If CONFIG_VMAP_STACK is set, kernel stacks are allocated
> > using __vmalloc_node_range() with __GFP_ACCOUNT. So kernel
> > stack pages are charged against corresponding memory cgroups
> > on allocation and uncharged on releasing them.
> >
> > The problem is that we do cache kernel stacks in small
> > per-cpu caches and do reuse them for new tasks, which can
> > belong to different memory cgroups.
> >
> > Each stack page still holds a reference to the original cgroup,
> > so the cgroup can't be released until the vmap area is released.
> >
> > To make this happen we need more than two subsequent exits
> > without forks in between on the current cpu, which makes it
> > very unlikely to happen. As a result, I saw a significant number
> > of dying cgroups (in theory, up to 2 * number_of_cpu +
> > number_of_tasks), which can't be released even by significant
> > memory pressure.
> >
> > As a cgroup structure can take a significant amount of memory
> > (first of all, per-cpu data like memcg statistics), it leads
> > to a noticeable waste of memory.
> >
> > Signed-off-by: Roman Gushchin <[email protected]>
> > Cc: Johannes Weiner <[email protected]>
> > Cc: Michal Hocko <[email protected]>
> > Cc: Andy Lutomirski <[email protected]>
> > Cc: Konstantin Khlebnikov <[email protected]>
> > Cc: Tejun Heo <[email protected]>
> > Cc: Shakeel Butt <[email protected]>
>
> Looks good to me. Two nits below.
>
> I am not sure stable tree backport is really needed but it would be nice
> to put
> Fixes: ac496bf48d97 ("fork: Optimize task creation by caching two thread stacks per CPU if CONFIG_VMAP_STACK=y")
>
> Acked-by: Michal Hocko <[email protected]>

Will add, thanks!

>
> > @@ -248,9 +253,20 @@ static unsigned long *alloc_thread_stack_node(struct task_struct *tsk, int node)
> > static inline void free_thread_stack(struct task_struct *tsk)
> > {
> > #ifdef CONFIG_VMAP_STACK
> > - if (task_stack_vm_area(tsk)) {
> > + struct vm_struct *vm = task_stack_vm_area(tsk);
> > +
> > + if (vm) {
> > int i;
> >
> > + for (i = 0; i < THREAD_SIZE / PAGE_SIZE; i++) {
> > + mod_memcg_page_state(vm->pages[i],
> > + MEMCG_KERNEL_STACK_KB,
> > + -(int)(PAGE_SIZE / 1024));
> > +
> > + memcg_kmem_uncharge(vm->pages[i],
> > + compound_order(vm->pages[i]));
>
> when do we have order > 0 here?

I guess, it's not possible, but hard-coded 1 looked a bit crappy.
Do you think it's better?

> Also I was wondering how come this
> doesn't blow up on partially charged stacks but both
> mod_memcg_page_state and memcg_kmem_uncharge check for page->mem_cgroup
> so this is safe. Maybe a comment would save people from scratching their
> heads.

Ok, will add.

Thank you!

2018-08-24 07:53:27

by Michal Hocko

Subject: Re: [PATCH v2 1/3] mm: rework memcg kernel stack accounting

On Thu 23-08-18 09:23:50, Roman Gushchin wrote:
> On Wed, Aug 22, 2018 at 04:12:13PM +0200, Michal Hocko wrote:
[...]
> > > @@ -248,9 +253,20 @@ static unsigned long *alloc_thread_stack_node(struct task_struct *tsk, int node)
> > > static inline void free_thread_stack(struct task_struct *tsk)
> > > {
> > > #ifdef CONFIG_VMAP_STACK
> > > - if (task_stack_vm_area(tsk)) {
> > > + struct vm_struct *vm = task_stack_vm_area(tsk);
> > > +
> > > + if (vm) {
> > > int i;
> > >
> > > + for (i = 0; i < THREAD_SIZE / PAGE_SIZE; i++) {
> > > + mod_memcg_page_state(vm->pages[i],
> > > + MEMCG_KERNEL_STACK_KB,
> > > + -(int)(PAGE_SIZE / 1024));
> > > +
> > > + memcg_kmem_uncharge(vm->pages[i],
> > > + compound_order(vm->pages[i]));
> >
> > when do we have order > 0 here?
>
> I guess, it's not possible, but hard-coded 1 looked a bit crappy.
> Do you think it's better?

I guess you meant 0 here. Well, I do not mind, I was just wondering
whether I am missing something.
--
Michal Hocko
SUSE Labs

2018-08-24 12:53:07

by Johannes Weiner

Subject: Re: [PATCH v2 1/3] mm: rework memcg kernel stack accounting

On Thu, Aug 23, 2018 at 09:23:50AM -0700, Roman Gushchin wrote:
> On Wed, Aug 22, 2018 at 04:12:13PM +0200, Michal Hocko wrote:
> > On Tue 21-08-18 14:35:57, Roman Gushchin wrote:
> > > @@ -248,9 +253,20 @@ static unsigned long *alloc_thread_stack_node(struct task_struct *tsk, int node)
> > > static inline void free_thread_stack(struct task_struct *tsk)
> > > {
> > > #ifdef CONFIG_VMAP_STACK
> > > - if (task_stack_vm_area(tsk)) {
> > > + struct vm_struct *vm = task_stack_vm_area(tsk);
> > > +
> > > + if (vm) {
> > > int i;
> > >
> > > + for (i = 0; i < THREAD_SIZE / PAGE_SIZE; i++) {
> > > + mod_memcg_page_state(vm->pages[i],
> > > + MEMCG_KERNEL_STACK_KB,
> > > + -(int)(PAGE_SIZE / 1024));
> > > +
> > > + memcg_kmem_uncharge(vm->pages[i],
> > > + compound_order(vm->pages[i]));
> >
> > when do we have order > 0 here?
>
> I guess, it's not possible, but hard-coded 1 looked a bit crappy.
> Do you think it's better?

Yes, specifying the known value (order 0) is much better. I asked
myself the same question as Michal: we're walking through THREAD_SIZE
in PAGE_SIZE steps, how could it possibly be a higher order page?

It adds an unnecessary branch to the code and the reader's brain.

2018-08-24 15:47:17

by Roman Gushchin

Subject: Re: [PATCH v2 1/3] mm: rework memcg kernel stack accounting

On Fri, Aug 24, 2018 at 08:50:52AM -0400, Johannes Weiner wrote:
> On Thu, Aug 23, 2018 at 09:23:50AM -0700, Roman Gushchin wrote:
> > On Wed, Aug 22, 2018 at 04:12:13PM +0200, Michal Hocko wrote:
> > > On Tue 21-08-18 14:35:57, Roman Gushchin wrote:
> > > > @@ -248,9 +253,20 @@ static unsigned long *alloc_thread_stack_node(struct task_struct *tsk, int node)
> > > > static inline void free_thread_stack(struct task_struct *tsk)
> > > > {
> > > > #ifdef CONFIG_VMAP_STACK
> > > > - if (task_stack_vm_area(tsk)) {
> > > > + struct vm_struct *vm = task_stack_vm_area(tsk);
> > > > +
> > > > + if (vm) {
> > > > int i;
> > > >
> > > > + for (i = 0; i < THREAD_SIZE / PAGE_SIZE; i++) {
> > > > + mod_memcg_page_state(vm->pages[i],
> > > > + MEMCG_KERNEL_STACK_KB,
> > > > + -(int)(PAGE_SIZE / 1024));
> > > > +
> > > > + memcg_kmem_uncharge(vm->pages[i],
> > > > + compound_order(vm->pages[i]));
> > >
> > > when do we have order > 0 here?
> >
> > I guess, it's not possible, but hard-coded 1 looked a bit crappy.
> > Do you think it's better?
>
> Yes, specifying the known value (order 0) is much better. I asked
> myself the same question as Michal: we're walking through THREAD_SIZE
> in PAGE_SIZE steps, how could it possibly be a higher order page?
>
> It adds an unnecessary branch to the code and the reader's brain.

Fair enough. Will switch over to a hard-coded order 0 in v3.

Thanks!

2018-08-29 21:26:07

by Roman Gushchin

Subject: Re: [PATCH v2 1/3] mm: rework memcg kernel stack accounting

On Tue, Aug 21, 2018 at 03:10:52PM -0700, Shakeel Butt wrote:
> On Tue, Aug 21, 2018 at 2:36 PM Roman Gushchin <[email protected]> wrote:
> >
> > If CONFIG_VMAP_STACK is set, kernel stacks are allocated
> > using __vmalloc_node_range() with __GFP_ACCOUNT. So kernel
> > stack pages are charged against corresponding memory cgroups
> > on allocation and uncharged on releasing them.
> >
> > The problem is that we do cache kernel stacks in small
> > per-cpu caches and do reuse them for new tasks, which can
> > belong to different memory cgroups.
> >
> > Each stack page still holds a reference to the original cgroup,
> > so the cgroup can't be released until the vmap area is released.
> >
> > To make this happen we need more than two subsequent exits
> > without forks in between on the current cpu, which makes it
> > very unlikely to happen. As a result, I saw a significant number
> > of dying cgroups (in theory, up to 2 * number_of_cpu +
> > number_of_tasks), which can't be released even by significant
> > memory pressure.
> >
> > As a cgroup structure can take a significant amount of memory
> > (first of all, per-cpu data like memcg statistics), it leads
> > to a noticeable waste of memory.
> >
> > Signed-off-by: Roman Gushchin <[email protected]>
>
> Reviewed-by: Shakeel Butt <[email protected]>
>
> BTW this makes a very good use-case for optimizing kmem uncharging
> similar to what you did for skmem uncharging.

The only thing I'm slightly worried about here is that it can make
reclaiming of memory cgroups harder. Probably, it's still ok,
but let me first finish the work I'm doing on optimizing the
whole memcg reclaim process, and then return to this case.

Thanks!

2018-08-29 21:34:13

by Shakeel Butt

Subject: Re: [PATCH v2 1/3] mm: rework memcg kernel stack accounting

On Wed, Aug 29, 2018 at 2:24 PM Roman Gushchin <[email protected]> wrote:
>
> On Tue, Aug 21, 2018 at 03:10:52PM -0700, Shakeel Butt wrote:
> > On Tue, Aug 21, 2018 at 2:36 PM Roman Gushchin <[email protected]> wrote:
> > >
> > > If CONFIG_VMAP_STACK is set, kernel stacks are allocated
> > > using __vmalloc_node_range() with __GFP_ACCOUNT. So kernel
> > > stack pages are charged against corresponding memory cgroups
> > > on allocation and uncharged on releasing them.
> > >
> > > The problem is that we do cache kernel stacks in small
> > > per-cpu caches and do reuse them for new tasks, which can
> > > belong to different memory cgroups.
> > >
> > > Each stack page still holds a reference to the original cgroup,
> > > so the cgroup can't be released until the vmap area is released.
> > >
> > > To make this happen we need more than two subsequent exits
> > > without forks in between on the current cpu, which makes it
> > > very unlikely to happen. As a result, I saw a significant number
> > > of dying cgroups (in theory, up to 2 * number_of_cpu +
> > > number_of_tasks), which can't be released even by significant
> > > memory pressure.
> > >
> > > As a cgroup structure can take a significant amount of memory
> > > (first of all, per-cpu data like memcg statistics), it leads
> > > to a noticeable waste of memory.
> > >
> > > Signed-off-by: Roman Gushchin <[email protected]>
> >
> > Reviewed-by: Shakeel Butt <[email protected]>
> >
> > BTW this makes a very good use-case for optimizing kmem uncharging
> > similar to what you did for skmem uncharging.
>
> The only thing I'm slightly worried about here is that it can make
> reclaiming of memory cgroups harder. Probably, it's still ok,
> but let me first finish the work I'm doing on optimizing the
> whole memcg reclaim process, and then return to this case.
>

Yes, maybe we can disable that optimization for offlined memcgs.
Anyways, we can discuss this later as you have suggested.

Shakeel