2020-11-10 01:10:36

by Roman Gushchin

Subject: [PATCH] mm: memcg/slab: enable slab memory accounting atomically

Many kernel memory accounting paths are guarded by the
memcg_kmem_enabled_key static key. It changes its state during
the onlining of the first non-root cgroup. However, this doesn't
happen atomically: before all call sites are patched, some
charges/uncharges can be skipped, resulting in an unbalanced
charge. The problem is mostly theoretical and unlikely to have
a noticeable impact in real life.

Before the rework of the slab controller we relied on setting
kmemcg_id after enabling the memcg_kmem_enabled_key static key.
Now we can use the setting of memcg->objcg to enable slab
memory accounting atomically.
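
Roughly, the guarded charge paths look like this (a simplified sketch,
not the exact mainline code; the call site is abbreviated):

static inline bool memcg_kmem_enabled(void)
{
	return static_branch_likely(&memcg_kmem_enabled_key);
}

/* a typical charge site, e.g. in the page allocator: the charge is
 * skipped entirely while the static key is off */
if (memcg_kmem_enabled() && (gfp & __GFP_ACCOUNT))
	__memcg_kmem_charge_page(page, gfp, order);

With memcg->objcg set only after static_branch_enable() has returned,
no objcg can be looked up (and thus no slab object can be charged)
until every call site has been patched.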

The patch also removes obsolete comments related to already deleted
members of kmem_cache->memcg_params.

Signed-off-by: Roman Gushchin <[email protected]>
Fixes: 10befea91b61 ("mm: memcg/slab: use a single set of kmem_caches for all allocations")
---
include/linux/memcontrol.h | 6 ++----
mm/memcontrol.c | 7 ++++---
2 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 20108e426f84..01099dfa839c 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -310,7 +310,6 @@ struct mem_cgroup {
 	int tcpmem_pressure;
 
 #ifdef CONFIG_MEMCG_KMEM
-	/* Index in the kmem_cache->memcg_params.memcg_caches array */
 	int kmemcg_id;
 	enum memcg_kmem_state kmem_state;
 	struct obj_cgroup __rcu *objcg;
@@ -1641,9 +1640,8 @@ static inline void memcg_kmem_uncharge_page(struct page *page, int order)
 }
 
 /*
- * helper for accessing a memcg's index. It will be used as an index in the
- * child cache array in kmem_cache, and also to derive its name. This function
- * will return -1 when this is not a kmem-limited memcg.
+ * A helper for accessing memcg's kmem_id, used for getting
+ * corresponding LRU lists.
  */
 static inline int memcg_cache_id(struct mem_cgroup *memcg)
 {
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 69a2893a6455..267cc68fba05 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3675,17 +3675,18 @@ static int memcg_online_kmem(struct mem_cgroup *memcg)
 		memcg_free_cache_id(memcg_id);
 		return -ENOMEM;
 	}
-	objcg->memcg = memcg;
-	rcu_assign_pointer(memcg->objcg, objcg);
 
 	static_branch_enable(&memcg_kmem_enabled_key);
 
 	/*
 	 * A memory cgroup is considered kmem-online as soon as it gets
-	 * kmemcg_id. Setting the id after enabling static branching will
+	 * objcg. Setting the objcg after enabling static branching will
 	 * guarantee no one starts accounting before all call sites are
 	 * patched.
 	 */
+	objcg->memcg = memcg;
+	rcu_assign_pointer(memcg->objcg, objcg);
+
 	memcg->kmemcg_id = memcg_id;
 	memcg->kmem_state = KMEM_ONLINE;
 
--
2.26.2


2020-11-10 15:26:13

by Johannes Weiner

Subject: Re: [PATCH] mm: memcg/slab: enable slab memory accounting atomically

On Mon, Nov 09, 2020 at 05:06:15PM -0800, Roman Gushchin wrote:
> Many kernel memory accounting paths are guarded by the
> memcg_kmem_enabled_key static key. It changes its state during
> the onlining of the first non-root cgroup. However, this doesn't
> happen atomically: before all call sites are patched, some
> charges/uncharges can be skipped, resulting in an unbalanced
> charge. The problem is mostly theoretical and unlikely to have
> a noticeable impact in real life.

memcg_online_kmem() is called from .css_alloc rather than .css_online,
at a time when the memcg is brand new and the css hasn't been set up and
published yet (the refcount isn't even initialized for css_tryget()).

I don't really see how charges could race with that.
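
Roughly, the ordering in the cgroup core is (a paraphrased sketch of
css_create(); function names taken from the cgroup core, not verbatim):

	css = ss->css_alloc(parent_css);   /* mem_cgroup_css_alloc(), which
					    * calls memcg_online_kmem() */
	init_and_link_css(css, ss, cgrp);
	percpu_ref_init(&css->refcnt, css_release, 0, GFP_KERNEL);
	online_css(css);                   /* ->css_online(); only now does
					    * the css become reachable */

So nothing else can reach the new memcg while memcg_online_kmem() runs.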

We may want to rename memcg_online_kmem() to memcg_alloc_kmem() or
something.

> Before the rework of the slab controller we relied on setting
> kmemcg_id after enabling the memcg_kmem_enabled_key static key.
> Now we can use the setting of memcg->objcg to enable slab
> memory accounting atomically.

I suspect that code comment has been out of date since commit
0b8f73e104285a4badf9d768d1c39b06d77d1f97.

2020-11-10 16:37:13

by Roman Gushchin

Subject: Re: [PATCH] mm: memcg/slab: enable slab memory accounting atomically

On Tue, Nov 10, 2020 at 10:21:43AM -0500, Johannes Weiner wrote:
> On Mon, Nov 09, 2020 at 05:06:15PM -0800, Roman Gushchin wrote:
> > Many kernel memory accounting paths are guarded by the
> > memcg_kmem_enabled_key static key. It changes its state during
> > the onlining of the first non-root cgroup. However, this doesn't
> > happen atomically: before all call sites are patched, some
> > charges/uncharges can be skipped, resulting in an unbalanced
> > charge. The problem is mostly theoretical and unlikely to have
> > a noticeable impact in real life.
>
> memcg_online_kmem() is called from .css_alloc rather than .css_online,
> at a time when the memcg is brand new and the css hasn't been set up and
> published yet (the refcount isn't even initialized for css_tryget()).
>
> I don't really see how charges could race with that.

You're right. I was thinking of it in the context of a patchset that
makes the root memory cgroup be charged like any other cgroup. In that
case, the root memory cgroup can be accessed and charged as soon as we
set root_mem_cgroup, which happens exactly in mem_cgroup_css_alloc().
In the current state it's indeed not a problem.
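
To sketch the window I had in mind (purely hypothetical, assuming the
root memcg would also go through memcg_online_kmem()):

	/*
	 * mem_cgroup_css_alloc()             // root cgroup
	 *   root_mem_cgroup = memcg;         // memcg is reachable, charges
	 *                                    // are possible from here on
	 *   memcg_online_kmem(memcg)
	 *     static_branch_enable(&memcg_kmem_enabled_key);
	 *     // call sites are patched one by one, so a charge and its
	 *     // matching uncharge can hit a mix of patched and unpatched
	 *     // sites -> unbalanced charge
	 *     rcu_assign_pointer(memcg->objcg, objcg);
	 *     // with this patch slab accounting starts only here
	 */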

>
> We may want to rename memcg_online_kmem() to memcg_alloc_kmem() or
> something.
>
> > Before the rework of the slab controller we relied on setting
> > kmemcg_id after enabling the memcg_kmem_enabled_key static key.
> > Now we can use the setting of memcg->objcg to enable slab
> > memory accounting atomically.
>
> I suspect that code comment has been out of date since commit
> 0b8f73e104285a4badf9d768d1c39b06d77d1f97.

I'll send a patch to cleanup these obsolete comments.

Thanks!