2021-03-30 10:24:17

by Muchun Song

Subject: [RFC PATCH 00/15] Use obj_cgroup APIs to charge the LRU pages

Since the following patch sets were applied, all kernel memory is charged
with the new obj_cgroup APIs:

[v17,00/19] The new cgroup slab memory controller
[v5,0/7] Use obj_cgroup APIs to charge kmem pages

But user memory allocations (LRU pages) can still pin memcgs for a long time.
This happens at a much larger scale and causes recurring problems in the real
world: page cache doesn't get reclaimed for a long time, or is reused by the
second, third, fourth, ... instance of the same job that was restarted into
a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
and make page reclaim very inefficient.

We can fix this by converting LRU pages and most other raw memcg pins to the
objcg direction, so that LRU pages no longer pin the memcgs.
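
As a rough illustration (not code from this series, and struct page_like is
made up for the sketch), the objcg indirection means a page points at an
obj_cgroup and only the obj_cgroup points at a memcg, so reparenting only has
to rewrite objcg->memcg and never has to touch the pages themselves:

```c
/*
 * Conceptual sketch only; the real structures live in
 * include/linux/memcontrol.h and carry many more fields.
 */
struct mem_cgroup;

struct obj_cgroup {
	struct mem_cgroup *memcg;	/* switched to the parent when the memcg dies */
};

struct page_like {
	/* stands in for page->memcg_data after this series */
	struct obj_cgroup *objcg;
};
```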

This patch set makes the LRU pages drop their reference to the memory cgroup
by using the obj_cgroup APIs. With it applied, the number of dying cgroups
no longer grows when running the following test script.

```bash
#!/bin/bash

cat /proc/cgroups | grep memory

cd /sys/fs/cgroup/memory

# Repeatedly create a memcg, run a short-lived task in it, move every
# task back out and remove the cgroup again. Without this series the
# dying memcgs pile up (compare the num_cgroups column printed above
# and below).
for i in {1..500}
do
	mkdir test
	echo $$ > test/cgroup.procs
	sleep 60 &
	echo $$ > cgroup.procs
	echo `cat test/cgroup.procs` > cgroup.procs
	rmdir test
done

cat /proc/cgroups | grep memory
```

Patch 1 fixes page charging in page replacement.
Patches 2-5 are code cleanups and simplifications.
Patches 6-15 convert the LRU page pins to the objcg direction.

Muchun Song (15):
mm: memcontrol: fix page charging in page replacement
mm: memcontrol: bail out early when !mm in get_mem_cgroup_from_mm
mm: memcontrol: remove the pgdata parameter of mem_cgroup_page_lruvec
mm: memcontrol: use lruvec_memcg in lruvec_holds_page_lru_lock
mm: memcontrol: simplify the logic of objcg pinning memcg
mm: memcontrol: move the objcg infrastructure out of CONFIG_MEMCG_KMEM
mm: memcontrol: introduce compact_lock_page_lruvec_irqsave
mm: memcontrol: make lruvec lock safe when the LRU pages reparented
mm: thp: introduce lock/unlock_split_queue{_irqsave}()
mm: thp: make deferred split queue lock safe when the LRU pages
reparented
mm: memcontrol: make all the callers of page_memcg() safe
mm: memcontrol: introduce memcg_reparent_ops
mm: memcontrol: use obj_cgroup APIs to charge the LRU pages
mm: memcontrol: rename {un}lock_page_memcg() to {un}lock_page_objcg()
mm: lru: add VM_BUG_ON_PAGE to lru maintenance function

Documentation/admin-guide/cgroup-v1/memory.rst | 2 +-
fs/buffer.c | 13 +-
fs/fs-writeback.c | 23 +-
fs/iomap/buffered-io.c | 4 +-
include/linux/memcontrol.h | 197 +++++----
include/linux/mm_inline.h | 6 +
mm/compaction.c | 42 +-
mm/filemap.c | 2 +-
mm/huge_memory.c | 175 +++++++-
mm/memcontrol.c | 578 +++++++++++++++++--------
mm/migrate.c | 4 +
mm/page-writeback.c | 28 +-
mm/page_io.c | 5 +-
mm/rmap.c | 14 +-
mm/swap.c | 7 +-
mm/vmscan.c | 3 +-
mm/workingset.c | 2 +-
17 files changed, 762 insertions(+), 343 deletions(-)

--
2.11.0


2021-03-30 10:24:22

by Muchun Song

Subject: [RFC PATCH 09/15] mm: thp: introduce lock/unlock_split_queue{_irqsave}()

The THP deferred split queue lock needs to be made safe against LRU pages
being reparented. Similar to lock_page_lruvec{_irqsave, _irq}(), introduce
lock/unlock_split_queue{_irqsave}() so that looking up the queue and taking
its lock are done in one step, which makes the lock easier to reparent.

In the next patch, the same approach as for the lruvec lock is used to make
the THP deferred split queue lock safe when the LRU pages are reparented.
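
For illustration, a minimal sketch of the intended calling pattern; it mirrors
the free_transhuge_page() hunk below, and the helper name
example_del_from_split_queue is made up for this sketch:

```c
static void example_del_from_split_queue(struct page *page)
{
	struct deferred_split *queue;
	unsigned long flags;

	/* Look up the queue and take its lock in a single step. */
	queue = lock_split_queue_irqsave(page, &flags);
	if (!list_empty(page_deferred_list(page))) {
		queue->split_queue_len--;
		list_del(page_deferred_list(page));
	}
	unlock_split_queue_irqrestore(queue, flags);
}
```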

Signed-off-by: Muchun Song <[email protected]>
---
mm/huge_memory.c | 97 +++++++++++++++++++++++++++++++++++++++++++-------------
1 file changed, 75 insertions(+), 22 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 395c75111d33..186dc11e8992 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -486,25 +486,76 @@ pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma)
}

#ifdef CONFIG_MEMCG
-static inline struct deferred_split *get_deferred_split_queue(struct page *page)
+static inline struct mem_cgroup *split_queue_to_memcg(struct deferred_split *queue)
{
- struct mem_cgroup *memcg = page_memcg(compound_head(page));
- struct pglist_data *pgdat = NODE_DATA(page_to_nid(page));
+ return container_of(queue, struct mem_cgroup, deferred_split_queue);
+}
+
+static struct deferred_split *lock_split_queue(struct page *page)
+{
+ struct deferred_split *queue;
+ struct mem_cgroup *memcg;

+ memcg = page_memcg(compound_head(page));
if (memcg)
- return &memcg->deferred_split_queue;
+ queue = &memcg->deferred_split_queue;
else
- return &pgdat->deferred_split_queue;
+ queue = &NODE_DATA(page_to_nid(page))->deferred_split_queue;
+ spin_lock(&queue->split_queue_lock);
+
+ return queue;
+}
+
+static struct deferred_split *lock_split_queue_irqsave(struct page *page,
+ unsigned long *flags)
+{
+ struct deferred_split *queue;
+ struct mem_cgroup *memcg;
+
+ memcg = page_memcg(compound_head(page));
+ if (memcg)
+ queue = &memcg->deferred_split_queue;
+ else
+ queue = &NODE_DATA(page_to_nid(page))->deferred_split_queue;
+ spin_lock_irqsave(&queue->split_queue_lock, *flags);
+
+ return queue;
}
#else
-static inline struct deferred_split *get_deferred_split_queue(struct page *page)
+static struct deferred_split *lock_split_queue(struct page *page)
{
- struct pglist_data *pgdat = NODE_DATA(page_to_nid(page));
+ struct deferred_split *queue;

- return &pgdat->deferred_split_queue;
+ queue = &NODE_DATA(page_to_nid(page))->deferred_split_queue;
+ spin_lock(&queue->split_queue_lock);
+
+ return queue;
+}
+
+static struct deferred_split *lock_split_queue_irqsave(struct page *page,
+ unsigned long *flags)
+
+{
+ struct deferred_split *queue;
+
+ queue = &NODE_DATA(page_to_nid(page))->deferred_split_queue;
+ spin_lock_irqsave(&queue->split_queue_lock, *flags);
+
+ return queue;
}
#endif

+static inline void unlock_split_queue(struct deferred_split *queue)
+{
+ spin_unlock(&queue->split_queue_lock);
+}
+
+static inline void unlock_split_queue_irqrestore(struct deferred_split *queue,
+ unsigned long flags)
+{
+ spin_unlock_irqrestore(&queue->split_queue_lock, flags);
+}
+
void prep_transhuge_page(struct page *page)
{
/*
@@ -2668,7 +2719,7 @@ bool can_split_huge_page(struct page *page, int *pextra_pins)
int split_huge_page_to_list(struct page *page, struct list_head *list)
{
struct page *head = compound_head(page);
- struct deferred_split *ds_queue = get_deferred_split_queue(head);
+ struct deferred_split *ds_queue;
struct anon_vma *anon_vma = NULL;
struct address_space *mapping = NULL;
int count, mapcount, extra_pins, ret;
@@ -2747,7 +2798,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
}

/* Prevent deferred_split_scan() touching ->_refcount */
- spin_lock(&ds_queue->split_queue_lock);
+ ds_queue = lock_split_queue(head);
count = page_count(head);
mapcount = total_mapcount(head);
if (!mapcount && page_ref_freeze(head, 1 + extra_pins)) {
@@ -2755,7 +2806,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
ds_queue->split_queue_len--;
list_del(page_deferred_list(head));
}
- spin_unlock(&ds_queue->split_queue_lock);
+ unlock_split_queue(ds_queue);
if (mapping) {
int nr = thp_nr_pages(head);

@@ -2778,7 +2829,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
dump_page(page, "total_mapcount(head) > 0");
BUG();
}
- spin_unlock(&ds_queue->split_queue_lock);
+ unlock_split_queue(ds_queue);
fail: if (mapping)
xa_unlock(&mapping->i_pages);
local_irq_enable();
@@ -2800,24 +2851,21 @@ fail: if (mapping)

void free_transhuge_page(struct page *page)
{
- struct deferred_split *ds_queue = get_deferred_split_queue(page);
+ struct deferred_split *ds_queue;
unsigned long flags;

- spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
+ ds_queue = lock_split_queue_irqsave(page, &flags);
if (!list_empty(page_deferred_list(page))) {
ds_queue->split_queue_len--;
list_del(page_deferred_list(page));
}
- spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
+ unlock_split_queue_irqrestore(ds_queue, flags);
free_compound_page(page);
}

void deferred_split_huge_page(struct page *page)
{
- struct deferred_split *ds_queue = get_deferred_split_queue(page);
-#ifdef CONFIG_MEMCG
- struct mem_cgroup *memcg = page_memcg(compound_head(page));
-#endif
+ struct deferred_split *ds_queue;
unsigned long flags;

VM_BUG_ON_PAGE(!PageTransHuge(page), page);
@@ -2835,18 +2883,23 @@ void deferred_split_huge_page(struct page *page)
if (PageSwapCache(page))
return;

- spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
+ ds_queue = lock_split_queue_irqsave(page, &flags);
if (list_empty(page_deferred_list(page))) {
count_vm_event(THP_DEFERRED_SPLIT_PAGE);
list_add_tail(page_deferred_list(page), &ds_queue->split_queue);
ds_queue->split_queue_len++;
+
#ifdef CONFIG_MEMCG
- if (memcg)
+ if (page_memcg(page)) {
+ struct mem_cgroup *memcg;
+
+ memcg = split_queue_to_memcg(ds_queue);
memcg_set_shrinker_bit(memcg, page_to_nid(page),
deferred_split_shrinker.id);
+ }
#endif
}
- spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
+ unlock_split_queue_irqrestore(ds_queue, flags);
}

static unsigned long deferred_split_count(struct shrinker *shrink,
--
2.11.0

2021-03-30 10:24:50

by Muchun Song

Subject: [RFC PATCH 02/15] mm: memcontrol: bail out early when !mm in get_mem_cgroup_from_mm

When mm is NULL, we do not need to hold the RCU lock or call css_tryget for
the root memcg, and there is no need to re-check !mm on every iteration of
the loop. So bail out early when !mm.

Signed-off-by: Muchun Song <[email protected]>
---
mm/memcontrol.c | 21 ++++++++++-----------
1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index d0c4f6e91e17..48e4c20bf115 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1029,20 +1029,19 @@ struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm)
if (mem_cgroup_disabled())
return NULL;

+ /*
+ * Page cache insertions can happen withou an
+ * actual mm context, e.g. during disk probing
+ * on boot, loopback IO, acct() writes etc.
+ */
+ if (unlikely(!mm))
+ return root_mem_cgroup;
+
rcu_read_lock();
do {
- /*
- * Page cache insertions can happen withou an
- * actual mm context, e.g. during disk probing
- * on boot, loopback IO, acct() writes etc.
- */
- if (unlikely(!mm))
+ memcg = mem_cgroup_from_task(rcu_dereference(mm->owner));
+ if (unlikely(!memcg))
memcg = root_mem_cgroup;
- else {
- memcg = mem_cgroup_from_task(rcu_dereference(mm->owner));
- if (unlikely(!memcg))
- memcg = root_mem_cgroup;
- }
} while (!css_tryget(&memcg->css));
rcu_read_unlock();
return memcg;
--
2.11.0

2021-03-30 10:25:06

by Muchun Song

Subject: [RFC PATCH 04/15] mm: memcontrol: use lruvec_memcg in lruvec_holds_page_lru_lock

We already have the helper lruvec_memcg() to get the memcg from a lruvec, so
there is no need to open-code it in lruvec_holds_page_lru_lock(). Use
lruvec_memcg() instead.

Signed-off-by: Muchun Song <[email protected]>
---
include/linux/memcontrol.h | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index a35a22994cf7..6e3283828391 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -744,20 +744,20 @@ static inline struct lruvec *mem_cgroup_page_lruvec(struct page *page)
return mem_cgroup_lruvec(memcg, pgdat);
}

+static inline struct mem_cgroup *lruvec_memcg(struct lruvec *lruvec);
+
static inline bool lruvec_holds_page_lru_lock(struct page *page,
struct lruvec *lruvec)
{
pg_data_t *pgdat = page_pgdat(page);
const struct mem_cgroup *memcg;
- struct mem_cgroup_per_node *mz;

if (mem_cgroup_disabled())
return lruvec == &pgdat->__lruvec;

- mz = container_of(lruvec, struct mem_cgroup_per_node, lruvec);
memcg = page_memcg(page) ? : root_mem_cgroup;

- return lruvec->pgdat == pgdat && mz->memcg == memcg;
+ return lruvec->pgdat == pgdat && lruvec_memcg(lruvec) == memcg;
}

struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p);
--
2.11.0

2021-03-30 10:25:21

by Muchun Song

Subject: [RFC PATCH 13/15] mm: memcontrol: use obj_cgroup APIs to charge the LRU pages

This patch reuses the obj_cgroup APIs to charge the LRU pages. After it,
page->memcg_data has two different meanings:

- For slab pages, page->memcg_data points to an object cgroups vector.

- For kmem pages (excluding slab pages) and LRU pages, page->memcg_data
  points to an object cgroup.

As a result, long-lived page cache pages can no longer pin the original
memory cgroup in memory.

At the same time, this changes the rules for page and objcg (or memcg)
binding stability. The new rules are as follows.

For a page any of the following ensures page and objcg binding stability:

- the page lock
- LRU isolation
- lock_page_memcg()
- exclusive reference

Based on the stable binding of page and objcg, for a page any of the
following ensures page and memcg binding stability:

- css_set_lock
- cgroup_mutex
- the lruvec lock
- the split queue lock (only THP page)

If the caller only wants to ensure that the memcg page counters are updated
correctly, ensuring the stability of the page and objcg binding is
sufficient.
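
As an illustrative, made-up example of these rules, a stat-update style path
could look like the sketch below: the page lock keeps the page and objcg
binding stable, while the RCU read lock keeps the memcg returned by
obj_cgroup_memcg() from being released even if the objcg is reparented
concurrently:

```c
static void example_update_page_state(struct page *page)
{
	struct obj_cgroup *objcg;
	struct mem_cgroup *memcg;

	lock_page(page);		/* stabilizes page <-> objcg */
	objcg = page_objcg(page);
	if (objcg) {
		rcu_read_lock();	/* stabilizes objcg <-> memcg */
		memcg = obj_cgroup_memcg(objcg);
		/* ... update the counters of @memcg here ... */
		rcu_read_unlock();
	}
	unlock_page(page);
}
```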

Signed-off-by: Muchun Song <[email protected]>
---
include/linux/memcontrol.h | 96 +++++++-----------
mm/huge_memory.c | 48 +++++++++
mm/memcontrol.c | 245 ++++++++++++++++++++++++++++++---------------
3 files changed, 251 insertions(+), 138 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index c79770ce3c81..cd9e9ff6c2bf 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -371,8 +371,6 @@ enum page_memcg_data_flags {

#define MEMCG_DATA_FLAGS_MASK (__NR_MEMCG_DATA_FLAGS - 1)

-static inline bool PageMemcgKmem(struct page *page);
-
/*
* After the initialization objcg->memcg is always pointing at
* a valid memcg, but can be atomically swapped to the parent memcg.
@@ -386,43 +384,19 @@ static inline struct mem_cgroup *obj_cgroup_memcg(struct obj_cgroup *objcg)
}

/*
- * __page_memcg - get the memory cgroup associated with a non-kmem page
- * @page: a pointer to the page struct
- *
- * Returns a pointer to the memory cgroup associated with the page,
- * or NULL. This function assumes that the page is known to have a
- * proper memory cgroup pointer. It's not safe to call this function
- * against some type of pages, e.g. slab pages or ex-slab pages or
- * kmem pages.
- */
-static inline struct mem_cgroup *__page_memcg(struct page *page)
-{
- unsigned long memcg_data = page->memcg_data;
-
- VM_BUG_ON_PAGE(PageSlab(page), page);
- VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_OBJCGS, page);
- VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_KMEM, page);
-
- return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
-}
-
-/*
- * __page_objcg - get the object cgroup associated with a kmem page
+ * page_objcg - get the object cgroup associated with page
* @page: a pointer to the page struct
*
* Returns a pointer to the object cgroup associated with the page,
* or NULL. This function assumes that the page is known to have a
- * proper object cgroup pointer. It's not safe to call this function
- * against some type of pages, e.g. slab pages or ex-slab pages or
- * LRU pages.
+ * proper object cgroup pointer.
*/
-static inline struct obj_cgroup *__page_objcg(struct page *page)
+static inline struct obj_cgroup *page_objcg(struct page *page)
{
unsigned long memcg_data = page->memcg_data;

VM_BUG_ON_PAGE(PageSlab(page), page);
VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_OBJCGS, page);
- VM_BUG_ON_PAGE(!(memcg_data & MEMCG_DATA_KMEM), page);

return (struct obj_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
}
@@ -436,23 +410,35 @@ static inline struct obj_cgroup *__page_objcg(struct page *page)
* proper memory cgroup pointer. It's not safe to call this function
* against some type of pages, e.g. slab pages or ex-slab pages.
*
- * For a non-kmem page any of the following ensures page and memcg binding
- * stability:
+ * For a page any of the following ensures page and objcg binding stability:
*
* - the page lock
* - LRU isolation
* - lock_page_memcg()
* - exclusive reference
*
- * For a kmem page a caller should hold an rcu read lock to protect memcg
- * associated with a kmem page from being released.
+ * Based on the stable binding of page and objcg, for a page any of the
+ * following ensures page and memcg binding stability:
+ *
+ * - css_set_lock
+ * - cgroup_mutex
+ * - the lruvec lock
+ * - the split queue lock (only THP page)
+ *
+ * If the caller only wants to ensure that the page counters of memcg are
+ * updated correctly, ensuring the stability of the page and objcg binding
+ * is sufficient.
+ *
+ * A caller should hold an rcu read lock (In addition, regions of code across
+ * which interrupts, preemption, or softirqs have been disabled also serve as
+ * RCU read-side critical sections) to protect memcg associated with a page
+ * from being released.
*/
static inline struct mem_cgroup *page_memcg(struct page *page)
{
- if (PageMemcgKmem(page))
- return obj_cgroup_memcg(__page_objcg(page));
- else
- return __page_memcg(page);
+ struct obj_cgroup *objcg = page_objcg(page);
+
+ return objcg ? obj_cgroup_memcg(objcg) : NULL;
}

/*
@@ -465,6 +451,8 @@ static inline struct mem_cgroup *page_memcg(struct page *page)
* is known to have a proper memory cgroup pointer. It's not safe to call
* this function against some type of pages, e.g. slab pages or ex-slab
* pages.
+ *
+ * The page and objcg or memcg binding rules can refer to page_memcg().
*/
static inline struct mem_cgroup *get_mem_cgroup_from_page(struct page *page)
{
@@ -488,22 +476,20 @@ static inline struct mem_cgroup *get_mem_cgroup_from_page(struct page *page)
* or NULL. This function assumes that the page is known to have a
* proper memory cgroup pointer. It's not safe to call this function
* against some type of pages, e.g. slab pages or ex-slab pages.
+ *
+ * The page and objcg or memcg binding rules can refer to page_memcg().
*/
static inline struct mem_cgroup *page_memcg_rcu(struct page *page)
{
unsigned long memcg_data = READ_ONCE(page->memcg_data);
+ struct obj_cgroup *objcg;

VM_BUG_ON_PAGE(PageSlab(page), page);
WARN_ON_ONCE(!rcu_read_lock_held());

- if (memcg_data & MEMCG_DATA_KMEM) {
- struct obj_cgroup *objcg;
-
- objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
- return obj_cgroup_memcg(objcg);
- }
+ objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);

- return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
+ return objcg ? obj_cgroup_memcg(objcg) : NULL;
}

/*
@@ -516,16 +502,10 @@ static inline struct mem_cgroup *page_memcg_rcu(struct page *page)
* has an associated memory cgroup pointer or an object cgroups vector or
* an object cgroup.
*
- * For a non-kmem page any of the following ensures page and memcg binding
- * stability:
- *
- * - the page lock
- * - LRU isolation
- * - lock_page_memcg()
- * - exclusive reference
+ * The page and objcg or memcg binding rules can refer to page_memcg().
*
- * For a kmem page a caller should hold an rcu read lock to protect memcg
- * associated with a kmem page from being released.
+ * A caller should hold an rcu read lock to protect memcg associated with a
+ * page from being released.
*/
static inline struct mem_cgroup *page_memcg_check(struct page *page)
{
@@ -534,18 +514,14 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page)
* for slab pages, READ_ONCE() should be used here.
*/
unsigned long memcg_data = READ_ONCE(page->memcg_data);
+ struct obj_cgroup *objcg;

if (memcg_data & MEMCG_DATA_OBJCGS)
return NULL;

- if (memcg_data & MEMCG_DATA_KMEM) {
- struct obj_cgroup *objcg;
-
- objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
- return obj_cgroup_memcg(objcg);
- }
+ objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);

- return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
+ return objcg ? obj_cgroup_memcg(objcg) : NULL;
}

#ifdef CONFIG_MEMCG_KMEM
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 434cc7283a64..a47c97a48951 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -486,6 +486,8 @@ pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma)
}

#ifdef CONFIG_MEMCG
+static struct shrinker deferred_split_shrinker;
+
static inline struct mem_cgroup *split_queue_to_memcg(struct deferred_split *queue)
{
return container_of(queue, struct mem_cgroup, deferred_split_queue);
@@ -547,6 +549,52 @@ static struct deferred_split *lock_split_queue_irqsave(struct page *page,

return queue;
}
+
+static void memcg_reparent_split_queue_lock(struct mem_cgroup *memcg,
+ struct mem_cgroup *parent)
+{
+ spin_lock(&memcg->deferred_split_queue.split_queue_lock);
+ spin_lock(&parent->deferred_split_queue.split_queue_lock);
+}
+
+static void memcg_reparent_split_queue_unlock(struct mem_cgroup *memcg,
+ struct mem_cgroup *parent)
+{
+ spin_unlock(&parent->deferred_split_queue.split_queue_lock);
+ spin_unlock(&memcg->deferred_split_queue.split_queue_lock);
+}
+
+static void memcg_reparent_split_queue(struct mem_cgroup *memcg,
+ struct mem_cgroup *parent)
+{
+ int nid;
+ struct deferred_split *src, *dst;
+
+ src = &memcg->deferred_split_queue;
+ dst = &parent->deferred_split_queue;
+
+ if (!src->split_queue_len)
+ return;
+
+ list_splice_tail_init(&src->split_queue, &dst->split_queue);
+ dst->split_queue_len += src->split_queue_len;
+ src->split_queue_len = 0;
+
+ for_each_node(nid)
+ memcg_set_shrinker_bit(parent, nid, deferred_split_shrinker.id);
+}
+
+static struct memcg_reparent_ops split_queue_reparent_ops = {
+ .lock = memcg_reparent_split_queue_lock,
+ .unlock = memcg_reparent_split_queue_unlock,
+ .reparent = memcg_reparent_split_queue,
+};
+
+static void __init split_queue_reparent_init(void)
+{
+ register_memcg_repatent(&split_queue_reparent_ops);
+}
+core_initcall(split_queue_reparent_init);
#else
static struct deferred_split *lock_split_queue(struct page *page)
{
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index d5701117794a..71689243242f 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -338,6 +338,77 @@ static struct obj_cgroup *obj_cgroup_alloc(void)
return objcg;
}

+static void memcg_reparent_lruvec_lock(struct mem_cgroup *memcg,
+ struct mem_cgroup *parent)
+{
+ int nid;
+
+ for_each_node(nid) {
+ spin_lock(&mem_cgroup_lruvec(memcg, NODE_DATA(nid))->lru_lock);
+ spin_lock(&mem_cgroup_lruvec(parent, NODE_DATA(nid))->lru_lock);
+ }
+}
+
+static void memcg_reparent_lruvec_unlock(struct mem_cgroup *memcg,
+ struct mem_cgroup *parent)
+{
+ int nid;
+
+ for_each_node(nid) {
+ spin_unlock(&mem_cgroup_lruvec(parent, NODE_DATA(nid))->lru_lock);
+ spin_unlock(&mem_cgroup_lruvec(memcg, NODE_DATA(nid))->lru_lock);
+ }
+}
+
+static void lruvec_reparent_lru(struct lruvec *src, struct lruvec *dst,
+ enum lru_list lru)
+{
+ int zid;
+ struct mem_cgroup_per_node *mz_src, *mz_dst;
+
+ mz_src = container_of(src, struct mem_cgroup_per_node, lruvec);
+ mz_dst = container_of(dst, struct mem_cgroup_per_node, lruvec);
+
+ list_splice_tail_init(&src->lists[lru], &dst->lists[lru]);
+
+ for (zid = 0; zid < MAX_NR_ZONES; zid++) {
+ mz_dst->lru_zone_size[zid][lru] += mz_src->lru_zone_size[zid][lru];
+ mz_src->lru_zone_size[zid][lru] = 0;
+ }
+}
+
+static void memcg_reparent_lruvec(struct mem_cgroup *memcg,
+ struct mem_cgroup *parent)
+{
+ int nid;
+
+ for_each_node(nid) {
+ enum lru_list lru;
+ struct lruvec *src, *dst;
+
+ src = mem_cgroup_lruvec(memcg, NODE_DATA(nid));
+ dst = mem_cgroup_lruvec(parent, NODE_DATA(nid));
+
+ dst->anon_cost += src->anon_cost;
+ dst->file_cost += src->file_cost;
+
+ for_each_lru(lru)
+ lruvec_reparent_lru(src, dst, lru);
+ }
+}
+
+static struct memcg_reparent_ops lruvec_reparent_ops = {
+ .lock = memcg_reparent_lruvec_lock,
+ .unlock = memcg_reparent_lruvec_unlock,
+ .reparent = memcg_reparent_lruvec,
+};
+
+static void __init lruvec_reparent_init(void)
+{
+ register_memcg_repatent(&lruvec_reparent_ops);
+}
+core_initcall(lruvec_reparent_init);
+
static LIST_HEAD(reparent_ops_head);

static void memcg_reparent_lock(struct mem_cgroup *memcg,
@@ -2963,18 +3034,18 @@ static void cancel_charge(struct mem_cgroup *memcg, unsigned int nr_pages)
}
#endif

-static void commit_charge(struct page *page, struct mem_cgroup *memcg)
+static void commit_charge(struct page *page, struct obj_cgroup *objcg)
{
- VM_BUG_ON_PAGE(page_memcg(page), page);
+ VM_BUG_ON_PAGE(page_objcg(page), page);
/*
- * Any of the following ensures page's memcg stability:
+ * Any of the following ensures page's objcg stability:
*
* - the page lock
* - LRU isolation
* - lock_page_memcg()
* - exclusive reference
*/
- page->memcg_data = (unsigned long)memcg;
+ page->memcg_data = (unsigned long)objcg;
}

static struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg)
@@ -2991,6 +3062,21 @@ static struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg)
return memcg;
}

+static struct obj_cgroup *get_obj_cgroup_from_memcg(struct mem_cgroup *memcg)
+{
+ struct obj_cgroup *objcg = NULL;
+
+ rcu_read_lock();
+ for (; memcg; memcg = parent_mem_cgroup(memcg)) {
+ objcg = rcu_dereference(memcg->objcg);
+ if (objcg && obj_cgroup_tryget(objcg))
+ break;
+ }
+ rcu_read_unlock();
+
+ return objcg;
+}
+
#ifdef CONFIG_MEMCG_KMEM
int memcg_alloc_page_obj_cgroups(struct page *page, struct kmem_cache *s,
gfp_t gfp, bool new_page)
@@ -3088,12 +3174,15 @@ __always_inline struct obj_cgroup *get_obj_cgroup_from_current(void)
else
memcg = mem_cgroup_from_task(current);

- for (; memcg != root_mem_cgroup; memcg = parent_mem_cgroup(memcg)) {
- objcg = rcu_dereference(memcg->objcg);
- if (objcg && obj_cgroup_tryget(objcg))
- break;
+ if (mem_cgroup_is_root(memcg))
+ goto out;
+
+ objcg = get_obj_cgroup_from_memcg(memcg);
+ if (obj_cgroup_is_root(objcg)) {
+ obj_cgroup_put(objcg);
objcg = NULL;
}
+out:
rcu_read_unlock();

return objcg;
@@ -3236,13 +3325,14 @@ int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order)
*/
void __memcg_kmem_uncharge_page(struct page *page, int order)
{
- struct obj_cgroup *objcg;
+ struct obj_cgroup *objcg = page_objcg(page);
unsigned int nr_pages = 1 << order;

- if (!PageMemcgKmem(page))
+ if (!objcg)
return;

- objcg = __page_objcg(page);
+ VM_BUG_ON_PAGE(!PageMemcgKmem(page), page);
+ objcg = page_objcg(page);
obj_cgroup_uncharge_pages(objcg, nr_pages);
page->memcg_data = 0;
obj_cgroup_put(objcg);
@@ -3379,17 +3469,16 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size)
*/
void mem_cgroup_split_huge_fixup(struct page *head)
{
- struct mem_cgroup *memcg = get_mem_cgroup_from_page(head);
+ struct obj_cgroup *objcg = page_objcg(head);
int i;

if (mem_cgroup_disabled())
return;

for (i = 1; i < HPAGE_PMD_NR; i++) {
- css_get(&memcg->css);
- head[i].memcg_data = (unsigned long)memcg;
+ obj_cgroup_get(objcg);
+ commit_charge(&head[i], objcg);
}
- css_put(&memcg->css);
}
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */

@@ -5755,10 +5844,10 @@ static int mem_cgroup_move_account(struct page *page,
*/
smp_mb();

- css_get(&to->css);
- css_put(&from->css);
+ obj_cgroup_get(to->objcg);
+ obj_cgroup_put(from->objcg);

- page->memcg_data = (unsigned long)to;
+ page->memcg_data = (unsigned long)to->objcg;

__unlock_page_memcg(from);

@@ -6796,6 +6885,7 @@ int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
{
unsigned int nr_pages = thp_nr_pages(page);
struct mem_cgroup *memcg = NULL;
+ struct obj_cgroup *objcg;
int ret = 0;

if (mem_cgroup_disabled())
@@ -6813,7 +6903,7 @@ int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
* removal, which in turn serializes uncharging.
*/
VM_BUG_ON_PAGE(!PageLocked(page), page);
- if (page_memcg(compound_head(page)))
+ if (page_objcg(compound_head(page)))
goto out;

id = lookup_swap_cgroup_id(ent);
@@ -6827,12 +6917,16 @@ int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
if (!memcg)
memcg = get_mem_cgroup_from_mm(mm);

- ret = try_charge(memcg, gfp_mask, nr_pages);
- if (ret)
- goto out_put;
+ objcg = get_obj_cgroup_from_memcg(memcg);
+ /* Do not account at the root objcg level. */
+ if (!obj_cgroup_is_root(objcg)) {
+ ret = try_charge(memcg, gfp_mask, nr_pages);
+ if (ret)
+ goto out_put;
+ }

- css_get(&memcg->css);
- commit_charge(page, memcg);
+ obj_cgroup_get(objcg);
+ commit_charge(page, objcg);

local_irq_disable();
mem_cgroup_charge_statistics(memcg, page, nr_pages);
@@ -6862,13 +6956,14 @@ int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
}

out_put:
+ obj_cgroup_put(objcg);
css_put(&memcg->css);
out:
return ret;
}

struct uncharge_gather {
- struct mem_cgroup *memcg;
+ struct obj_cgroup *objcg;
unsigned long nr_memory;
unsigned long pgpgout;
unsigned long nr_kmem;
@@ -6883,63 +6978,56 @@ static inline void uncharge_gather_clear(struct uncharge_gather *ug)
static void uncharge_batch(const struct uncharge_gather *ug)
{
unsigned long flags;
+ struct mem_cgroup *memcg;

+ rcu_read_lock();
+ memcg = obj_cgroup_memcg(ug->objcg);
if (ug->nr_memory) {
- page_counter_uncharge(&ug->memcg->memory, ug->nr_memory);
+ page_counter_uncharge(&memcg->memory, ug->nr_memory);
if (do_memsw_account())
- page_counter_uncharge(&ug->memcg->memsw, ug->nr_memory);
+ page_counter_uncharge(&memcg->memsw, ug->nr_memory);
if (!cgroup_subsys_on_dfl(memory_cgrp_subsys) && ug->nr_kmem)
- page_counter_uncharge(&ug->memcg->kmem, ug->nr_kmem);
- memcg_oom_recover(ug->memcg);
+ page_counter_uncharge(&memcg->kmem, ug->nr_kmem);
+ memcg_oom_recover(memcg);
}

local_irq_save(flags);
- __count_memcg_events(ug->memcg, PGPGOUT, ug->pgpgout);
- __this_cpu_add(ug->memcg->vmstats_percpu->nr_page_events, ug->nr_memory);
- memcg_check_events(ug->memcg, ug->dummy_page);
+ __count_memcg_events(memcg, PGPGOUT, ug->pgpgout);
+ __this_cpu_add(memcg->vmstats_percpu->nr_page_events, ug->nr_memory);
+ memcg_check_events(memcg, ug->dummy_page);
local_irq_restore(flags);
+ rcu_read_unlock();

/* drop reference from uncharge_page */
- css_put(&ug->memcg->css);
+ obj_cgroup_put(ug->objcg);
}

static void uncharge_page(struct page *page, struct uncharge_gather *ug)
{
unsigned long nr_pages;
- struct mem_cgroup *memcg;
struct obj_cgroup *objcg;

VM_BUG_ON_PAGE(PageLRU(page), page);

/*
* Nobody should be changing or seriously looking at
- * page memcg or objcg at this point, we have fully
- * exclusive access to the page.
+ * page objcg at this point, we have fully exclusive
+ * access to the page.
*/
- if (PageMemcgKmem(page)) {
- objcg = __page_objcg(page);
- /*
- * This get matches the put at the end of the function and
- * kmem pages do not hold memcg references anymore.
- */
- memcg = get_mem_cgroup_from_objcg(objcg);
- } else {
- memcg = __page_memcg(page);
- }
-
- if (!memcg)
+ objcg = page_objcg(page);
+ if (!objcg)
return;

- if (ug->memcg != memcg) {
- if (ug->memcg) {
+ if (ug->objcg != objcg) {
+ if (ug->objcg) {
uncharge_batch(ug);
uncharge_gather_clear(ug);
}
- ug->memcg = memcg;
+ ug->objcg = objcg;
ug->dummy_page = page;

- /* pairs with css_put in uncharge_batch */
- css_get(&memcg->css);
+ /* pairs with obj_cgroup_put in uncharge_batch */
+ obj_cgroup_get(objcg);
}

nr_pages = compound_nr(page);
@@ -6947,19 +7035,15 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
if (PageMemcgKmem(page)) {
ug->nr_memory += nr_pages;
ug->nr_kmem += nr_pages;
-
- page->memcg_data = 0;
- obj_cgroup_put(objcg);
} else {
/* LRU pages aren't accounted at the root level */
- if (!mem_cgroup_is_root(memcg))
+ if (!obj_cgroup_is_root(objcg))
ug->nr_memory += nr_pages;
ug->pgpgout++;
-
- page->memcg_data = 0;
}

- css_put(&memcg->css);
+ page->memcg_data = 0;
+ obj_cgroup_put(objcg);
}

/**
@@ -6976,7 +7060,7 @@ void mem_cgroup_uncharge(struct page *page)
return;

/* Don't touch page->lru of any random page, pre-check: */
- if (!page_memcg(page))
+ if (!page_objcg(page))
return;

uncharge_gather_clear(&ug);
@@ -7002,7 +7086,7 @@ void mem_cgroup_uncharge_list(struct list_head *page_list)
uncharge_gather_clear(&ug);
list_for_each_entry(page, page_list, lru)
uncharge_page(page, &ug);
- if (ug.memcg)
+ if (ug.objcg)
uncharge_batch(&ug);
}

@@ -7019,6 +7103,7 @@ void mem_cgroup_uncharge_list(struct list_head *page_list)
void mem_cgroup_migrate(struct page *oldpage, struct page *newpage)
{
struct mem_cgroup *memcg;
+ struct obj_cgroup *objcg;
unsigned int nr_pages;
unsigned long flags;

@@ -7032,32 +7117,34 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage)
return;

/* Page cache replacement: new page already charged? */
- if (page_memcg(newpage))
+ if (page_objcg(newpage))
return;

- memcg = get_mem_cgroup_from_page(oldpage);
- VM_WARN_ON_ONCE_PAGE(!memcg, oldpage);
- if (!memcg)
+ objcg = page_objcg(oldpage);
+ VM_WARN_ON_ONCE_PAGE(!objcg, oldpage);
+ if (!objcg)
return;

/* Force-charge the new page. The old one will be freed soon */
nr_pages = thp_nr_pages(newpage);

- if (!mem_cgroup_is_root(memcg)) {
+ rcu_read_lock();
+ memcg = obj_cgroup_memcg(objcg);
+
+ if (!obj_cgroup_is_root(objcg)) {
page_counter_charge(&memcg->memory, nr_pages);
if (do_memsw_account())
page_counter_charge(&memcg->memsw, nr_pages);
}

- css_get(&memcg->css);
- commit_charge(newpage, memcg);
+ obj_cgroup_get(objcg);
+ commit_charge(newpage, objcg);

local_irq_save(flags);
mem_cgroup_charge_statistics(memcg, newpage, nr_pages);
memcg_check_events(memcg, newpage);
local_irq_restore(flags);
-
- css_put(&memcg->css);
+ rcu_read_unlock();
}

DEFINE_STATIC_KEY_FALSE(memcg_sockets_enabled_key);
@@ -7234,6 +7321,7 @@ static struct mem_cgroup *mem_cgroup_id_get_online(struct mem_cgroup *memcg)
void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
{
struct mem_cgroup *memcg, *swap_memcg;
+ struct obj_cgroup *objcg;
unsigned int nr_entries;
unsigned short oldid;

@@ -7246,15 +7334,16 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
if (cgroup_subsys_on_dfl(memory_cgrp_subsys))
return;

+ objcg = page_objcg(page);
+ VM_WARN_ON_ONCE_PAGE(!objcg, page);
+ if (!objcg)
+ return;
+
/*
* Interrupts should be disabled by the caller (see the comments below),
* which can serve as RCU read-side critical sections.
*/
- memcg = page_memcg(page);
-
- VM_WARN_ON_ONCE_PAGE(!memcg, page);
- if (!memcg)
- return;
+ memcg = obj_cgroup_memcg(objcg);

/*
* In case the memcg owning these pages has been offlined and doesn't
@@ -7273,7 +7362,7 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)

page->memcg_data = 0;

- if (!mem_cgroup_is_root(memcg))
+ if (!obj_cgroup_is_root(objcg))
page_counter_uncharge(&memcg->memory, nr_entries);

if (!cgroup_memory_noswap && memcg != swap_memcg) {
@@ -7292,7 +7381,7 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
mem_cgroup_charge_statistics(memcg, page, -nr_entries);
memcg_check_events(memcg, page);

- css_put(&memcg->css);
+ obj_cgroup_put(objcg);
}

/**
--
2.11.0

2021-03-30 10:26:30

by Muchun Song

Subject: [RFC PATCH 11/15] mm: memcontrol: make all the callers of page_memcg() safe

Once the objcg APIs are used to charge the LRU pages, a page no longer holds
a reference to the memcg associated with it. Callers of page_memcg()
therefore need to hold an RCU read lock, or take a reference on the memcg,
to keep the memcg from being released. Introduce get_mem_cgroup_from_page()
to obtain such a reference to the memory cgroup associated with a page.

This patch makes all callers either hold an RCU read lock or take a
reference on the memcg, so the memcg cannot be released while the LRU pages
are being reparented.

The callers of page_memcg() in mem_cgroup_move_task() do not need to be
adjusted, because cgroup migration and memory cgroup offlining are
serialized by cgroup_mutex. In that path the LRU pages cannot be reparented
to the parent memory cgroup, so page_memcg(page) is stable and the memcg
cannot be released.

This is a preparation for reparenting the LRU pages.
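
A minimal usage sketch (the function name example_use_page_memcg is made up;
the pattern mirrors the fs/buffer.c hunk below):

```c
static void example_use_page_memcg(struct page *page)
{
	struct mem_cgroup *memcg;

	/* Takes a css reference; may return NULL for uncharged pages. */
	memcg = get_mem_cgroup_from_page(page);

	/* ... use @memcg, e.g. set_active_memcg(memcg) ... */

	mem_cgroup_put(memcg);	/* NULL-tolerant */
}
```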

Signed-off-by: Muchun Song <[email protected]>
---
fs/buffer.c | 3 ++-
fs/fs-writeback.c | 23 +++++++++++----------
include/linux/memcontrol.h | 34 ++++++++++++++++++++++++++++---
mm/memcontrol.c | 50 ++++++++++++++++++++++++++++++++++++----------
mm/migrate.c | 4 ++++
mm/page_io.c | 5 +++--
6 files changed, 91 insertions(+), 28 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index 591547779dbd..790ba6660d10 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -848,7 +848,7 @@ struct buffer_head *alloc_page_buffers(struct page *page, unsigned long size,
gfp |= __GFP_NOFAIL;

/* The page lock pins the memcg */
- memcg = page_memcg(page);
+ memcg = get_mem_cgroup_from_page(page);
old_memcg = set_active_memcg(memcg);

head = NULL;
@@ -868,6 +868,7 @@ struct buffer_head *alloc_page_buffers(struct page *page, unsigned long size,
set_bh_page(bh, page, offset);
}
out:
+ mem_cgroup_put(memcg);
set_active_memcg(old_memcg);
return head;
/*
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index e91980f49388..3ac002561327 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -255,15 +255,13 @@ void __inode_attach_wb(struct inode *inode, struct page *page)
if (inode_cgwb_enabled(inode)) {
struct cgroup_subsys_state *memcg_css;

- if (page) {
- memcg_css = mem_cgroup_css_from_page(page);
- wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC);
- } else {
- /* must pin memcg_css, see wb_get_create() */
+ /* must pin memcg_css, see wb_get_create() */
+ if (page)
+ memcg_css = get_mem_cgroup_css_from_page(page);
+ else
memcg_css = task_get_css(current, memory_cgrp_id);
- wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC);
- css_put(memcg_css);
- }
+ wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC);
+ css_put(memcg_css);
}

if (!wb)
@@ -736,16 +734,16 @@ void wbc_account_cgroup_owner(struct writeback_control *wbc, struct page *page,
if (!wbc->wb || wbc->no_cgroup_owner)
return;

- css = mem_cgroup_css_from_page(page);
+ css = get_mem_cgroup_css_from_page(page);
/* dead cgroups shouldn't contribute to inode ownership arbitration */
if (!(css->flags & CSS_ONLINE))
- return;
+ goto out;

id = css->id;

if (id == wbc->wb_id) {
wbc->wb_bytes += bytes;
- return;
+ goto out;
}

if (id == wbc->wb_lcand_id)
@@ -758,6 +756,9 @@ void wbc_account_cgroup_owner(struct writeback_control *wbc, struct page *page,
wbc->wb_tcand_bytes += bytes;
else
wbc->wb_tcand_bytes -= min(bytes, wbc->wb_tcand_bytes);
+
+out:
+ css_put(css);
}
EXPORT_SYMBOL_GPL(wbc_account_cgroup_owner);

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 5d7c8a060843..8944115ebf8e 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -367,7 +367,7 @@ static inline bool PageMemcgKmem(struct page *page);
* a valid memcg, but can be atomically swapped to the parent memcg.
*
* The caller must ensure that the returned memcg won't be released:
- * e.g. acquire the rcu_read_lock or css_set_lock.
+ * e.g. acquire the rcu_read_lock or css_set_lock or cgroup_mutex.
*/
static inline struct mem_cgroup *obj_cgroup_memcg(struct obj_cgroup *objcg)
{
@@ -445,6 +445,31 @@ static inline struct mem_cgroup *page_memcg(struct page *page)
}

/*
+ * get_mem_cgroup_from_page - Obtain a reference on the memory cgroup associated
+ * with a page
+ * @page: a pointer to the page struct
+ *
+ * Returns a pointer to the memory cgroup (and obtain a reference on it)
+ * associated with the page, or NULL. This function assumes that the page
+ * is known to have a proper memory cgroup pointer. It's not safe to call
+ * this function against some type of pages, e.g. slab pages or ex-slab
+ * pages.
+ */
+static inline struct mem_cgroup *get_mem_cgroup_from_page(struct page *page)
+{
+ struct mem_cgroup *memcg;
+
+ rcu_read_lock();
+retry:
+ memcg = page_memcg(page);
+ if (unlikely(memcg && !css_tryget(&memcg->css)))
+ goto retry;
+ rcu_read_unlock();
+
+ return memcg;
+}
+
+/*
* page_memcg_rcu - locklessly get the memory cgroup associated with a page
* @page: a pointer to the page struct
*
@@ -870,7 +895,7 @@ static inline bool mm_match_cgroup(struct mm_struct *mm,
return match;
}

-struct cgroup_subsys_state *mem_cgroup_css_from_page(struct page *page);
+struct cgroup_subsys_state *get_mem_cgroup_css_from_page(struct page *page);
ino_t page_cgroup_ino(struct page *page);

static inline bool mem_cgroup_online(struct mem_cgroup *memcg)
@@ -1068,10 +1093,13 @@ static inline void count_memcg_events(struct mem_cgroup *memcg,
static inline void count_memcg_page_event(struct page *page,
enum vm_event_item idx)
{
- struct mem_cgroup *memcg = page_memcg(page);
+ struct mem_cgroup *memcg;

+ rcu_read_lock();
+ memcg = page_memcg(page);
if (memcg)
count_memcg_events(memcg, idx, 1);
+ rcu_read_unlock();
}

static inline void count_memcg_event_mm(struct mm_struct *mm,
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 2592e2b072ef..cb650d089d9f 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -563,7 +563,7 @@ void memcg_set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id)
}

/**
- * mem_cgroup_css_from_page - css of the memcg associated with a page
+ * get_mem_cgroup_css_from_page - get css of the memcg associated with a page
* @page: page of interest
*
* If memcg is bound to the default hierarchy, css of the memcg associated
@@ -573,13 +573,15 @@ void memcg_set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id)
* If memcg is bound to a traditional hierarchy, the css of root_mem_cgroup
* is returned.
*/
-struct cgroup_subsys_state *mem_cgroup_css_from_page(struct page *page)
+struct cgroup_subsys_state *get_mem_cgroup_css_from_page(struct page *page)
{
struct mem_cgroup *memcg;

- memcg = page_memcg(page);
+ if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
+ return &root_mem_cgroup->css;

- if (!memcg || !cgroup_subsys_on_dfl(memory_cgrp_subsys))
+ memcg = get_mem_cgroup_from_page(page);
+ if (!memcg)
memcg = root_mem_cgroup;

return &memcg->css;
@@ -3332,7 +3334,7 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size)
*/
void mem_cgroup_split_huge_fixup(struct page *head)
{
- struct mem_cgroup *memcg = page_memcg(head);
+ struct mem_cgroup *memcg = get_mem_cgroup_from_page(head);
int i;

if (mem_cgroup_disabled())
@@ -3342,6 +3344,7 @@ void mem_cgroup_split_huge_fixup(struct page *head)
css_get(&memcg->css);
head[i].memcg_data = (unsigned long)memcg;
}
+ css_put(&memcg->css);
}
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */

@@ -4664,7 +4667,7 @@ void mem_cgroup_wb_stats(struct bdi_writeback *wb, unsigned long *pfilepages,
void mem_cgroup_track_foreign_dirty_slowpath(struct page *page,
struct bdi_writeback *wb)
{
- struct mem_cgroup *memcg = page_memcg(page);
+ struct mem_cgroup *memcg;
struct memcg_cgwb_frn *frn;
u64 now = get_jiffies_64();
u64 oldest_at = now;
@@ -4673,6 +4676,7 @@ void mem_cgroup_track_foreign_dirty_slowpath(struct page *page,

trace_track_foreign_dirty(page, wb);

+ memcg = get_mem_cgroup_from_page(page);
/*
* Pick the slot to use. If there is already a slot for @wb, keep
* using it. If not replace the oldest one which isn't being
@@ -4711,6 +4715,7 @@ void mem_cgroup_track_foreign_dirty_slowpath(struct page *page,
frn->memcg_id = wb->memcg_css->id;
frn->at = now;
}
+ css_put(&memcg->css);
}

/* issue foreign writeback flushes for recorded foreign dirtying events */
@@ -6182,6 +6187,14 @@ static void mem_cgroup_move_charge(void)
atomic_dec(&mc.from->moving_account);
}

+/*
+ * The cgroup migration and memory cgroup offlining are serialized by
+ * @cgroup_mutex. If we reach here, it means that the LRU pages cannot
+ * be reparented to its parent memory cgroup. So during the whole process
+ * of mem_cgroup_move_task(), page_memcg(page) is stable. So we do not
+ * need to worry about the memcg (returned from page_memcg()) being
+ * released even if we do not hold an rcu read lock.
+ */
static void mem_cgroup_move_task(void)
{
if (mc.to) {
@@ -6977,7 +6990,7 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage)
if (page_memcg(newpage))
return;

- memcg = page_memcg(oldpage);
+ memcg = get_mem_cgroup_from_page(oldpage);
VM_WARN_ON_ONCE_PAGE(!memcg, oldpage);
if (!memcg)
return;
@@ -6998,6 +7011,8 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage)
mem_cgroup_charge_statistics(memcg, newpage, nr_pages);
memcg_check_events(memcg, newpage);
local_irq_restore(flags);
+
+ css_put(&memcg->css);
}

DEFINE_STATIC_KEY_FALSE(memcg_sockets_enabled_key);
@@ -7186,6 +7201,10 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
if (cgroup_subsys_on_dfl(memory_cgrp_subsys))
return;

+ /*
+ * Interrupts should be disabled by the caller (see the comments below),
+ * which can serve as RCU read-side critical sections.
+ */
memcg = page_memcg(page);

VM_WARN_ON_ONCE_PAGE(!memcg, page);
@@ -7253,15 +7272,16 @@ int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
return 0;

+ rcu_read_lock();
memcg = page_memcg(page);

VM_WARN_ON_ONCE_PAGE(!memcg, page);
if (!memcg)
- return 0;
+ goto out;

if (!entry.val) {
memcg_memory_event(memcg, MEMCG_SWAP_FAIL);
- return 0;
+ goto out;
}

memcg = mem_cgroup_id_get_online(memcg);
@@ -7271,6 +7291,7 @@ int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
memcg_memory_event(memcg, MEMCG_SWAP_MAX);
memcg_memory_event(memcg, MEMCG_SWAP_FAIL);
mem_cgroup_id_put(memcg);
+ rcu_read_unlock();
return -ENOMEM;
}

@@ -7280,6 +7301,8 @@ int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
oldid = swap_cgroup_record(entry, mem_cgroup_id(memcg), nr_pages);
VM_BUG_ON_PAGE(oldid, page);
mod_memcg_state(memcg, MEMCG_SWAP, nr_pages);
+out:
+ rcu_read_unlock();

return 0;
}
@@ -7334,17 +7357,22 @@ bool mem_cgroup_swap_full(struct page *page)
if (cgroup_memory_noswap || !cgroup_subsys_on_dfl(memory_cgrp_subsys))
return false;

+ rcu_read_lock();
memcg = page_memcg(page);
if (!memcg)
- return false;
+ goto out;

for (; memcg != root_mem_cgroup; memcg = parent_mem_cgroup(memcg)) {
unsigned long usage = page_counter_read(&memcg->swap);

if (usage * 2 >= READ_ONCE(memcg->swap.high) ||
- usage * 2 >= READ_ONCE(memcg->swap.max))
+ usage * 2 >= READ_ONCE(memcg->swap.max)) {
+ rcu_read_unlock();
return true;
+ }
}
+out:
+ rcu_read_unlock();

return false;
}
diff --git a/mm/migrate.c b/mm/migrate.c
index 62b81d5257aa..6f5d445949c2 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -490,6 +490,10 @@ int migrate_page_move_mapping(struct address_space *mapping,
struct lruvec *old_lruvec, *new_lruvec;
struct mem_cgroup *memcg;

+ /*
+ * Irq is disabled, which can serve as RCU read-side critical
+ * sections.
+ */
memcg = page_memcg(page);
old_lruvec = mem_cgroup_lruvec(memcg, oldzone->zone_pgdat);
new_lruvec = mem_cgroup_lruvec(memcg, newzone->zone_pgdat);
diff --git a/mm/page_io.c b/mm/page_io.c
index c493ce9ebcf5..81744777ab76 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -269,13 +269,14 @@ static void bio_associate_blkg_from_page(struct bio *bio, struct page *page)
struct cgroup_subsys_state *css;
struct mem_cgroup *memcg;

+ rcu_read_lock();
memcg = page_memcg(page);
if (!memcg)
- return;
+ goto out;

- rcu_read_lock();
css = cgroup_e_css(memcg->css.cgroup, &io_cgrp_subsys);
bio_associate_blkg_from_css(bio, css);
+out:
rcu_read_unlock();
}
#else
--
2.11.0

2021-03-30 10:26:30

by Muchun Song

Subject: [RFC PATCH 14/15] mm: memcontrol: rename {un}lock_page_memcg() to {un}lock_page_objcg()

lock_page_memcg() no longer locks the page and memcg binding; it actually
locks the page and objcg binding. So rename lock_page_memcg() to
lock_page_objcg().

This is just a cleanup with no functional change.

Signed-off-by: Muchun Song <[email protected]>
---
Documentation/admin-guide/cgroup-v1/memory.rst | 2 +-
fs/buffer.c | 10 ++--
fs/iomap/buffered-io.c | 4 +-
include/linux/memcontrol.h | 22 +++++----
mm/filemap.c | 2 +-
mm/huge_memory.c | 4 +-
mm/memcontrol.c | 65 ++++++++++++++++----------
mm/page-writeback.c | 26 +++++------
mm/rmap.c | 14 +++---
9 files changed, 85 insertions(+), 64 deletions(-)

diff --git a/Documentation/admin-guide/cgroup-v1/memory.rst b/Documentation/admin-guide/cgroup-v1/memory.rst
index 0936412e044e..578823f2c764 100644
--- a/Documentation/admin-guide/cgroup-v1/memory.rst
+++ b/Documentation/admin-guide/cgroup-v1/memory.rst
@@ -291,7 +291,7 @@ Lock order is as follows:

Page lock (PG_locked bit of page->flags)
mm->page_table_lock or split pte_lock
- lock_page_memcg (memcg->move_lock)
+ lock_page_objcg (memcg->move_lock)
mapping->i_pages lock
lruvec->lru_lock.

diff --git a/fs/buffer.c b/fs/buffer.c
index 790ba6660d10..8b6d66511690 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -595,7 +595,7 @@ EXPORT_SYMBOL(mark_buffer_dirty_inode);
* If warn is true, then emit a warning if the page is not uptodate and has
* not been truncated.
*
- * The caller must hold lock_page_memcg().
+ * The caller must hold lock_page_objcg().
*/
void __set_page_dirty(struct page *page, struct address_space *mapping,
int warn)
@@ -660,14 +660,14 @@ int __set_page_dirty_buffers(struct page *page)
* Lock out page's memcg migration to keep PageDirty
* synchronized with per-memcg dirty page counters.
*/
- lock_page_memcg(page);
+ lock_page_objcg(page);
newly_dirty = !TestSetPageDirty(page);
spin_unlock(&mapping->private_lock);

if (newly_dirty)
__set_page_dirty(page, mapping, 1);

- unlock_page_memcg(page);
+ unlock_page_objcg(page);

if (newly_dirty)
__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
@@ -1168,13 +1168,13 @@ void mark_buffer_dirty(struct buffer_head *bh)
struct page *page = bh->b_page;
struct address_space *mapping = NULL;

- lock_page_memcg(page);
+ lock_page_objcg(page);
if (!TestSetPageDirty(page)) {
mapping = page_mapping(page);
if (mapping)
__set_page_dirty(page, mapping, 0);
}
- unlock_page_memcg(page);
+ unlock_page_objcg(page);
if (mapping)
__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
}
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 16a1e82e3aeb..8a3ffd38d9e0 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -653,11 +653,11 @@ iomap_set_page_dirty(struct page *page)
* Lock out page's memcg migration to keep PageDirty
* synchronized with per-memcg dirty page counters.
*/
- lock_page_memcg(page);
+ lock_page_objcg(page);
newly_dirty = !TestSetPageDirty(page);
if (newly_dirty)
__set_page_dirty(page, mapping, 0);
- unlock_page_memcg(page);
+ unlock_page_objcg(page);

if (newly_dirty)
__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index cd9e9ff6c2bf..688a8e1fa9b6 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -410,11 +410,12 @@ static inline struct obj_cgroup *page_objcg(struct page *page)
* proper memory cgroup pointer. It's not safe to call this function
* against some type of pages, e.g. slab pages or ex-slab pages.
*
- * For a page any of the following ensures page and objcg binding stability:
+ * For a page any of the following ensures page and objcg binding stability
+ * (But the page can be reparented to its parent memcg):
*
* - the page lock
* - LRU isolation
- * - lock_page_memcg()
+ * - lock_page_objcg()
* - exclusive reference
*
* Based on the stable binding of page and objcg, for a page any of the
@@ -947,9 +948,9 @@ void mem_cgroup_print_oom_group(struct mem_cgroup *memcg);
extern bool cgroup_memory_noswap;
#endif

-struct mem_cgroup *lock_page_memcg(struct page *page);
-void __unlock_page_memcg(struct mem_cgroup *memcg);
-void unlock_page_memcg(struct page *page);
+struct obj_cgroup *lock_page_objcg(struct page *page);
+void __unlock_page_objcg(struct obj_cgroup *objcg);
+void unlock_page_objcg(struct page *page);

/*
* idx can be of type enum memcg_stat_item or node_stat_item.
@@ -1155,6 +1156,11 @@ void mem_cgroup_split_huge_fixup(struct page *head);

struct mem_cgroup;

+static inline struct obj_cgroup *page_objcg(struct page *page)
+{
+ return NULL;
+}
+
static inline struct mem_cgroup *page_memcg(struct page *page)
{
return NULL;
@@ -1375,16 +1381,16 @@ mem_cgroup_print_oom_meminfo(struct mem_cgroup *memcg)
{
}

-static inline struct mem_cgroup *lock_page_memcg(struct page *page)
+static inline struct obj_cgroup *lock_page_objcg(struct page *page)
{
return NULL;
}

-static inline void __unlock_page_memcg(struct mem_cgroup *memcg)
+static inline void __unlock_page_objcg(struct obj_cgroup *objcg)
{
}

-static inline void unlock_page_memcg(struct page *page)
+static inline void unlock_page_objcg(struct page *page)
{
}

diff --git a/mm/filemap.c b/mm/filemap.c
index 925964b67583..c427de610860 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -110,7 +110,7 @@
* ->i_pages lock (page_remove_rmap->set_page_dirty)
* bdi.wb->list_lock (page_remove_rmap->set_page_dirty)
* ->inode->i_lock (page_remove_rmap->set_page_dirty)
- * ->memcg->move_lock (page_remove_rmap->lock_page_memcg)
+ * ->memcg->move_lock (page_remove_rmap->lock_page_objcg)
* bdi.wb->list_lock (zap_pte_range->set_page_dirty)
* ->inode->i_lock (zap_pte_range->set_page_dirty)
* ->private_lock (zap_pte_range->__set_page_dirty_buffers)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index a47c97a48951..088511eaa326 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2303,7 +2303,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
atomic_inc(&page[i]._mapcount);
}

- lock_page_memcg(page);
+ lock_page_objcg(page);
if (atomic_add_negative(-1, compound_mapcount_ptr(page))) {
/* Last compound_mapcount is gone. */
__mod_lruvec_page_state(page, NR_ANON_THPS,
@@ -2314,7 +2314,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
atomic_dec(&page[i]._mapcount);
}
}
- unlock_page_memcg(page);
+ unlock_page_objcg(page);
}

smp_wmb(); /* make pte visible before pmd */
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 71689243242f..442b846dc7bc 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1439,7 +1439,7 @@ int mem_cgroup_scan_tasks(struct mem_cgroup *memcg,
* These functions are safe to use under any of the following conditions:
* - page locked
* - PageLRU cleared
- * - lock_page_memcg()
+ * - lock_page_objcg()
* - page->_refcount is zero
*/
struct lruvec *lock_page_lruvec(struct page *page)
@@ -2255,20 +2255,22 @@ void mem_cgroup_print_oom_group(struct mem_cgroup *memcg)
}

/**
- * lock_page_memcg - lock a page and memcg binding
+ * lock_page_objcg - lock a page and objcg binding
* @page: the page
*
* This function protects unlocked LRU pages from being moved to
- * another cgroup.
+ * another object cgroup. But the page can be reparented to its
+ * parent memcg.
*
- * It ensures lifetime of the returned memcg. Caller is responsible
- * for the lifetime of the page; __unlock_page_memcg() is available
+ * It ensures lifetime of the returned objcg. Caller is responsible
+ * for the lifetime of the page; __unlock_page_objcg() is available
* when @page might get freed inside the locked section.
*/
-struct mem_cgroup *lock_page_memcg(struct page *page)
+struct obj_cgroup *lock_page_objcg(struct page *page)
{
struct page *head = compound_head(page); /* rmap on tail pages */
struct mem_cgroup *memcg;
+ struct obj_cgroup *objcg;
unsigned long flags;

/*
@@ -2287,10 +2289,12 @@ struct mem_cgroup *lock_page_memcg(struct page *page)
if (mem_cgroup_disabled())
return NULL;
again:
- memcg = page_memcg(head);
- if (unlikely(!memcg))
+ objcg = page_objcg(head);
+ if (unlikely(!objcg))
return NULL;

+ memcg = obj_cgroup_memcg(objcg);
+
#ifdef CONFIG_PROVE_LOCKING
local_irq_save(flags);
might_lock(&memcg->move_lock);
@@ -2298,7 +2302,7 @@ struct mem_cgroup *lock_page_memcg(struct page *page)
#endif

if (atomic_read(&memcg->moving_account) <= 0)
- return memcg;
+ return objcg;

spin_lock_irqsave(&memcg->move_lock, flags);
if (memcg != page_memcg(head)) {
@@ -2309,23 +2313,34 @@ struct mem_cgroup *lock_page_memcg(struct page *page)
/*
* When charge migration first begins, we can have locked and
* unlocked page stat updates happening concurrently. Track
- * the task who has the lock for unlock_page_memcg().
+ * the task who has the lock for unlock_page_objcg().
*/
memcg->move_lock_task = current;
memcg->move_lock_flags = flags;

- return memcg;
+ /*
+ * The cgroup migration and memory cgroup offlining are serialized by
+ * cgroup_mutex. If we reach here, it means that we are race with cgroup
+ * migration (or we are cgroup migration) and the @page cannot be
+ * reparented to its parent memory cgroup. So during the whole process
+ * from lock_page_objcg(page) to unlock_page_objcg(page), page_memcg(page)
+ * and obj_cgroup_memcg(objcg) are stable.
+ */
+
+ return objcg;
}
-EXPORT_SYMBOL(lock_page_memcg);
+EXPORT_SYMBOL(lock_page_objcg);

/**
- * __unlock_page_memcg - unlock and unpin a memcg
- * @memcg: the memcg
+ * __unlock_page_objcg - unlock and unpin an objcg
+ * @objcg: the objcg
*
- * Unlock and unpin a memcg returned by lock_page_memcg().
+ * Unlock and unpin an objcg returned by lock_page_objcg().
*/
-void __unlock_page_memcg(struct mem_cgroup *memcg)
+void __unlock_page_objcg(struct obj_cgroup *objcg)
{
+ struct mem_cgroup *memcg = objcg ? obj_cgroup_memcg(objcg) : NULL;
+
if (memcg && memcg->move_lock_task == current) {
unsigned long flags = memcg->move_lock_flags;

@@ -2339,16 +2354,16 @@ void __unlock_page_memcg(struct mem_cgroup *memcg)
}

/**
- * unlock_page_memcg - unlock a page and memcg binding
+ * unlock_page_objcg - unlock a page and objcg binding
* @page: the page
*/
-void unlock_page_memcg(struct page *page)
+void unlock_page_objcg(struct page *page)
{
struct page *head = compound_head(page);

- __unlock_page_memcg(page_memcg(head));
+ __unlock_page_objcg(page_objcg(head));
}
-EXPORT_SYMBOL(unlock_page_memcg);
+EXPORT_SYMBOL(unlock_page_objcg);

struct memcg_stock_pcp {
struct mem_cgroup *cached; /* this never be root cgroup */
@@ -3042,7 +3057,7 @@ static void commit_charge(struct page *page, struct obj_cgroup *objcg)
*
* - the page lock
* - LRU isolation
- * - lock_page_memcg()
+ * - lock_page_objcg()
* - exclusive reference
*/
page->memcg_data = (unsigned long)objcg;
@@ -5785,7 +5800,7 @@ static int mem_cgroup_move_account(struct page *page,
from_vec = mem_cgroup_lruvec(from, pgdat);
to_vec = mem_cgroup_lruvec(to, pgdat);

- lock_page_memcg(page);
+ lock_page_objcg(page);

if (PageAnon(page)) {
if (page_mapped(page)) {
@@ -5837,7 +5852,7 @@ static int mem_cgroup_move_account(struct page *page,
* with (un)charging, migration, LRU putback, or anything else
* that would rely on a stable page's memory cgroup.
*
- * Note that lock_page_memcg is a memcg lock, not a page lock,
+ * Note that lock_page_objcg is a memcg lock, not a page lock,
* to save space. As soon as we switch page's memory cgroup to a
* new memcg that isn't locked, the above state can change
* concurrently again. Make sure we're truly done with it.
@@ -5849,7 +5864,7 @@ static int mem_cgroup_move_account(struct page *page,

page->memcg_data = (unsigned long)to->objcg;

- __unlock_page_memcg(from);
+ __unlock_page_objcg(from->objcg);

ret = 0;

@@ -6291,7 +6306,7 @@ static void mem_cgroup_move_charge(void)
{
lru_add_drain_all();
/*
- * Signal lock_page_memcg() to take the memcg's move_lock
+ * Signal lock_page_objcg() to take the memcg's move_lock
* while we're moving its pages to another memcg. Then wait
* for already started RCU-only updates to finish.
*/
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index f517e0669924..2a119afbf7fa 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2413,7 +2413,7 @@ int __set_page_dirty_no_writeback(struct page *page)
/*
* Helper function for set_page_dirty family.
*
- * Caller must hold lock_page_memcg().
+ * Caller must hold lock_page_objcg().
*
* NOTE: This relies on being atomic wrt interrupts.
*/
@@ -2445,7 +2445,7 @@ void account_page_dirtied(struct page *page, struct address_space *mapping)
/*
* Helper function for deaccounting dirty page without writeback.
*
- * Caller must hold lock_page_memcg().
+ * Caller must hold lock_page_objcg().
*/
void account_page_cleaned(struct page *page, struct address_space *mapping,
struct bdi_writeback *wb)
@@ -2472,13 +2472,13 @@ void account_page_cleaned(struct page *page, struct address_space *mapping,
*/
int __set_page_dirty_nobuffers(struct page *page)
{
- lock_page_memcg(page);
+ lock_page_objcg(page);
if (!TestSetPageDirty(page)) {
struct address_space *mapping = page_mapping(page);
unsigned long flags;

if (!mapping) {
- unlock_page_memcg(page);
+ unlock_page_objcg(page);
return 1;
}

@@ -2489,7 +2489,7 @@ int __set_page_dirty_nobuffers(struct page *page)
__xa_set_mark(&mapping->i_pages, page_index(page),
PAGECACHE_TAG_DIRTY);
xa_unlock_irqrestore(&mapping->i_pages, flags);
- unlock_page_memcg(page);
+ unlock_page_objcg(page);

if (mapping->host) {
/* !PageAnon && !swapper_space */
@@ -2497,7 +2497,7 @@ int __set_page_dirty_nobuffers(struct page *page)
}
return 1;
}
- unlock_page_memcg(page);
+ unlock_page_objcg(page);
return 0;
}
EXPORT_SYMBOL(__set_page_dirty_nobuffers);
@@ -2630,14 +2630,14 @@ void __cancel_dirty_page(struct page *page)
struct bdi_writeback *wb;
struct wb_lock_cookie cookie = {};

- lock_page_memcg(page);
+ lock_page_objcg(page);
wb = unlocked_inode_to_wb_begin(inode, &cookie);

if (TestClearPageDirty(page))
account_page_cleaned(page, mapping, wb);

unlocked_inode_to_wb_end(inode, &cookie);
- unlock_page_memcg(page);
+ unlock_page_objcg(page);
} else {
ClearPageDirty(page);
}
@@ -2722,11 +2722,11 @@ EXPORT_SYMBOL(clear_page_dirty_for_io);
int test_clear_page_writeback(struct page *page)
{
struct address_space *mapping = page_mapping(page);
- struct mem_cgroup *memcg;
+ struct obj_cgroup *objcg;
struct lruvec *lruvec;
int ret;

- memcg = lock_page_memcg(page);
+ objcg = lock_page_objcg(page);
lruvec = mem_cgroup_page_lruvec(page);
if (mapping && mapping_use_writeback_tags(mapping)) {
struct inode *inode = mapping->host;
@@ -2759,7 +2759,7 @@ int test_clear_page_writeback(struct page *page)
dec_zone_page_state(page, NR_ZONE_WRITE_PENDING);
inc_node_page_state(page, NR_WRITTEN);
}
- __unlock_page_memcg(memcg);
+ __unlock_page_objcg(objcg);
return ret;
}

@@ -2768,7 +2768,7 @@ int __test_set_page_writeback(struct page *page, bool keep_write)
struct address_space *mapping = page_mapping(page);
int ret, access_ret;

- lock_page_memcg(page);
+ lock_page_objcg(page);
if (mapping && mapping_use_writeback_tags(mapping)) {
XA_STATE(xas, &mapping->i_pages, page_index(page));
struct inode *inode = mapping->host;
@@ -2808,7 +2808,7 @@ int __test_set_page_writeback(struct page *page, bool keep_write)
inc_lruvec_page_state(page, NR_WRITEBACK);
inc_zone_page_state(page, NR_ZONE_WRITE_PENDING);
}
- unlock_page_memcg(page);
+ unlock_page_objcg(page);
access_ret = arch_make_page_accessible(page);
/*
* If writeback has been triggered on a page that cannot be made
diff --git a/mm/rmap.c b/mm/rmap.c
index b0fc27e77d6d..3c2488e1081c 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -31,7 +31,7 @@
* swap_lock (in swap_duplicate, swap_info_get)
* mmlist_lock (in mmput, drain_mmlist and others)
* mapping->private_lock (in __set_page_dirty_buffers)
- * lock_page_memcg move_lock (in __set_page_dirty_buffers)
+ * lock_page_objcg move_lock (in __set_page_dirty_buffers)
* i_pages lock (widely used)
* lruvec->lru_lock (in lock_page_lruvec_irq)
* inode->i_lock (in set_page_dirty's __mark_inode_dirty)
@@ -1127,7 +1127,7 @@ void do_page_add_anon_rmap(struct page *page,
bool first;

if (unlikely(PageKsm(page)))
- lock_page_memcg(page);
+ lock_page_objcg(page);
else
VM_BUG_ON_PAGE(!PageLocked(page), page);

@@ -1155,7 +1155,7 @@ void do_page_add_anon_rmap(struct page *page,
}

if (unlikely(PageKsm(page))) {
- unlock_page_memcg(page);
+ unlock_page_objcg(page);
return;
}

@@ -1215,7 +1215,7 @@ void page_add_file_rmap(struct page *page, bool compound)
int i, nr = 1;

VM_BUG_ON_PAGE(compound && !PageTransHuge(page), page);
- lock_page_memcg(page);
+ lock_page_objcg(page);
if (compound && PageTransHuge(page)) {
int nr_pages = thp_nr_pages(page);

@@ -1244,7 +1244,7 @@ void page_add_file_rmap(struct page *page, bool compound)
}
__mod_lruvec_page_state(page, NR_FILE_MAPPED, nr);
out:
- unlock_page_memcg(page);
+ unlock_page_objcg(page);
}

static void page_remove_file_rmap(struct page *page, bool compound)
@@ -1345,7 +1345,7 @@ static void page_remove_anon_compound_rmap(struct page *page)
*/
void page_remove_rmap(struct page *page, bool compound)
{
- lock_page_memcg(page);
+ lock_page_objcg(page);

if (!PageAnon(page)) {
page_remove_file_rmap(page, compound);
@@ -1384,7 +1384,7 @@ void page_remove_rmap(struct page *page, bool compound)
* faster for those pages still in swapcache.
*/
out:
- unlock_page_memcg(page);
+ unlock_page_objcg(page);
}

/*
--
2.11.0

2021-03-30 10:26:54

by Muchun Song

[permalink] [raw]
Subject: [RFC PATCH 12/15] mm: memcontrol: introduce memcg_reparent_ops

From the previous patch, we know how to make the lruvec lock safe when the
LRU pages are reparented. We need to do something like the following.

memcg_reparent_objcgs(memcg)
1) lock
// lruvec belongs to memcg and lruvec_parent belongs to parent memcg.
spin_lock(&lruvec->lru_lock);
spin_lock(&lruvec_parent->lru_lock);

2) do reparent
// Move all the pages from the lruvec list to the parent lruvec list.

3) unlock
spin_unlock(&lruvec_parent->lru_lock);
spin_unlock(&lruvec->lru_lock);

Apart from the page lruvec lock, the deferred split queue lock (THP only)
also needs something similar. So the necessary 3 steps are extracted into
memcg_reparent_objcgs():

memcg_reparent_objcgs(memcg)
1) lock
memcg_reparent_ops->lock(memcg, parent);

2) reparent
memcg_reparent_ops->reparent(memcg, parent);

3) unlock
memcg_reparent_ops->unlock(memcg, parent);

Now there are two different locks (i.e. the lruvec lock and the deferred
split queue lock) that need to use this infrastructure. In the next patch,
we will use these APIs to make both locks safe when the LRU pages are
reparented.
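
For illustration, a user of this infrastructure is expected to look roughly
like the sketch below. The "foo" lock, list and callbacks are made up for
this example; the real lruvec and deferred split queue variants added in
the later patches differ in detail (e.g. the lruvec one has to cover every
node). Plain spin_lock() is enough because memcg_reparent_objcgs()
disables irqs around the callbacks.

	/* Illustrative only; not the code added by the later patches. */
	static void foo_reparent_lock(struct mem_cgroup *memcg,
				      struct mem_cgroup *parent)
	{
		spin_lock(&memcg->foo_lock);
		/* Same lock class on both sides, so tell lockdep. */
		spin_lock_nested(&parent->foo_lock, SINGLE_DEPTH_NESTING);
	}

	static void foo_reparent_relocate(struct mem_cgroup *memcg,
					  struct mem_cgroup *parent)
	{
		/* Move everything protected by foo_lock to the parent. */
		list_splice_init(&memcg->foo_list, &parent->foo_list);
	}

	static void foo_reparent_unlock(struct mem_cgroup *memcg,
					struct mem_cgroup *parent)
	{
		spin_unlock(&parent->foo_lock);
		spin_unlock(&memcg->foo_lock);
	}

	static struct memcg_reparent_ops foo_reparent_ops = {
		.lock		= foo_reparent_lock,
		.reparent	= foo_reparent_relocate,
		.unlock		= foo_reparent_unlock,
	};

	/* Registration would happen from an __init path, e.g. mem_cgroup_init(). */
	static int __init foo_reparent_init(void)
	{
		/* Name as spelled by the patch below. */
		register_memcg_repatent(&foo_reparent_ops);
		return 0;
	}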

Signed-off-by: Muchun Song <[email protected]>
---
include/linux/memcontrol.h | 11 +++++++++++
mm/memcontrol.c | 49 ++++++++++++++++++++++++++++++++++++++++++++--
2 files changed, 58 insertions(+), 2 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 8944115ebf8e..c79770ce3c81 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -341,6 +341,17 @@ struct mem_cgroup {
/* WARNING: nodeinfo must be the last member here */
};

+struct memcg_reparent_ops {
+ struct list_head list;
+
+ /* Irq is disabled before calling those functions. */
+ void (*lock)(struct mem_cgroup *memcg, struct mem_cgroup *parent);
+ void (*unlock)(struct mem_cgroup *memcg, struct mem_cgroup *parent);
+ void (*reparent)(struct mem_cgroup *memcg, struct mem_cgroup *parent);
+};
+
+void __init register_memcg_repatent(struct memcg_reparent_ops *ops);
+
/*
* size of first charge trial. "32" comes from vmscan.c's magic value.
* TODO: maybe necessary to use big numbers in big irons.
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index cb650d089d9f..d5701117794a 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -338,6 +338,41 @@ static struct obj_cgroup *obj_cgroup_alloc(void)
return objcg;
}

+static LIST_HEAD(reparent_ops_head);
+
+static void memcg_reparent_lock(struct mem_cgroup *memcg,
+ struct mem_cgroup *parent)
+{
+ struct memcg_reparent_ops *ops;
+
+ list_for_each_entry(ops, &reparent_ops_head, list)
+ ops->lock(memcg, parent);
+}
+
+static void memcg_reparent_unlock(struct mem_cgroup *memcg,
+ struct mem_cgroup *parent)
+{
+ struct memcg_reparent_ops *ops;
+
+ list_for_each_entry(ops, &reparent_ops_head, list)
+ ops->unlock(memcg, parent);
+}
+
+static void memcg_do_reparent(struct mem_cgroup *memcg,
+ struct mem_cgroup *parent)
+{
+ struct memcg_reparent_ops *ops;
+
+ list_for_each_entry(ops, &reparent_ops_head, list)
+ ops->reparent(memcg, parent);
+}
+
+void __init register_memcg_repatent(struct memcg_reparent_ops *ops)
+{
+ BUG_ON(!ops->lock || !ops->unlock || !ops->reparent);
+ list_add(&ops->list, &reparent_ops_head);
+}
+
static void memcg_reparent_objcgs(struct mem_cgroup *memcg)
{
struct obj_cgroup *objcg, *iter;
@@ -347,9 +382,13 @@ static void memcg_reparent_objcgs(struct mem_cgroup *memcg)
if (!parent)
parent = root_mem_cgroup;

+ local_irq_disable();
+
+ memcg_reparent_lock(memcg, parent);
+
objcg = rcu_replace_pointer(memcg->objcg, NULL, true);

- spin_lock_irq(&css_set_lock);
+ spin_lock(&css_set_lock);

/* 1) Ready to reparent active objcg. */
list_add(&objcg->list, &memcg->objcg_list);
@@ -361,7 +400,13 @@ static void memcg_reparent_objcgs(struct mem_cgroup *memcg)
/* 3) Move already reparented objcgs to the parent's list */
list_splice(&memcg->objcg_list, &parent->objcg_list);

- spin_unlock_irq(&css_set_lock);
+ spin_unlock(&css_set_lock);
+
+ memcg_do_reparent(memcg, parent);
+
+ memcg_reparent_unlock(memcg, parent);
+
+ local_irq_enable();

percpu_ref_kill(&objcg->refcnt);
}
--
2.11.0

2021-03-30 10:27:33

by Muchun Song

[permalink] [raw]
Subject: [RFC PATCH 15/15] mm: lru: add VM_BUG_ON_PAGE to lru maintenance function

We need to make sure that the page is deleted from or added to the
correct lruvec list. So add a VM_BUG_ON_PAGE() to catch invalid
users.
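
For reference, the calling convention that the new assertion enforces
looks roughly like the sketch below: the caller must pin and lock the
lruvec that the page actually belongs to before touching its LRU list
(a minimal sketch using the irq-disabling variant; error paths omitted):

	struct lruvec *lruvec;

	lruvec = lock_page_lruvec_irq(page);	/* the lruvec the page belongs to */
	del_page_from_lru_list(page, lruvec);	/* assertion holds */
	add_page_to_lru_list_tail(page, lruvec);
	unlock_page_lruvec_irq(lruvec);

A caller that re-adds the page under a stale or unrelated lruvec now
trips the VM_BUG_ON_PAGE() instead of silently corrupting the LRU
accounting.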

Signed-off-by: Muchun Song <[email protected]>
---
include/linux/mm_inline.h | 6 ++++++
mm/vmscan.c | 3 ++-
2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 355ea1ee32bd..d19870448287 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -84,6 +84,8 @@ static __always_inline void add_page_to_lru_list(struct page *page,
{
enum lru_list lru = page_lru(page);

+ VM_BUG_ON_PAGE(!lruvec_holds_page_lru_lock(page, lruvec), page);
+
update_lru_size(lruvec, lru, page_zonenum(page), thp_nr_pages(page));
list_add(&page->lru, &lruvec->lists[lru]);
}
@@ -93,6 +95,8 @@ static __always_inline void add_page_to_lru_list_tail(struct page *page,
{
enum lru_list lru = page_lru(page);

+ VM_BUG_ON_PAGE(!lruvec_holds_page_lru_lock(page, lruvec), page);
+
update_lru_size(lruvec, lru, page_zonenum(page), thp_nr_pages(page));
list_add_tail(&page->lru, &lruvec->lists[lru]);
}
@@ -100,6 +104,8 @@ static __always_inline void add_page_to_lru_list_tail(struct page *page,
static __always_inline void del_page_from_lru_list(struct page *page,
struct lruvec *lruvec)
{
+ VM_BUG_ON_PAGE(!lruvec_holds_page_lru_lock(page, lruvec), page);
+
list_del(&page->lru);
update_lru_size(lruvec, page_lru(page), page_zonenum(page),
-thp_nr_pages(page));
diff --git a/mm/vmscan.c b/mm/vmscan.c
index fea6b43bc1f9..0a4a3072d092 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1656,6 +1656,8 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
page = lru_to_page(src);
prefetchw_prev_lru_page(page, src, flags);

+ VM_BUG_ON_PAGE(!lruvec_holds_page_lru_lock(page, lruvec), page);
+
nr_pages = compound_nr(page);
total_scan += nr_pages;

@@ -1866,7 +1868,6 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
* All pages were isolated from the same lruvec (and isolation
* inhibits memcg migration).
*/
- VM_BUG_ON_PAGE(!lruvec_holds_page_lru_lock(page, lruvec), page);
add_page_to_lru_list(page, lruvec);
nr_pages = thp_nr_pages(page);
nr_moved += nr_pages;
--
2.11.0

2021-03-30 18:43:34

by Shakeel Butt

[permalink] [raw]
Subject: Re: [RFC PATCH 00/15] Use obj_cgroup APIs to charge the LRU pages

On Tue, Mar 30, 2021 at 3:20 AM Muchun Song <[email protected]> wrote:
>
> Since the following patchsets applied. All the kernel memory are charged
> with the new APIs of obj_cgroup.
>
> [v17,00/19] The new cgroup slab memory controller
> [v5,0/7] Use obj_cgroup APIs to charge kmem pages
>
> But user memory allocations (LRU pages) pinning memcgs for a long time -
> it exists at a larger scale and is causing recurring problems in the real
> world: page cache doesn't get reclaimed for a long time, or is used by the
> second, third, fourth, ... instance of the same job that was restarted into
> a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> and make page reclaim very inefficient.
>
> We can convert LRU pages and most other raw memcg pins to the objcg direction
> to fix this problem, and then the LRU pages will not pin the memcgs.
>
> This patchset aims to make the LRU pages to drop the reference to memory
> cgroup by using the APIs of obj_cgroup. Finally, we can see that the number
> of the dying cgroups will not increase if we run the following test script.
>
> ```bash
> #!/bin/bash
>
> cat /proc/cgroups | grep memory
>
> cd /sys/fs/cgroup/memory
>
> for i in range{1..500}
> do
> mkdir test
> echo $$ > test/cgroup.procs
> sleep 60 &
> echo $$ > cgroup.procs
> echo `cat test/cgroup.procs` > cgroup.procs
> rmdir test
> done
>
> cat /proc/cgroups | grep memory
> ```
>
> Patch 1 aims to fix page charging in page replacement.
> Patch 2-5 are code cleanup and simplification.
> Patch 6-15 convert LRU pages pin to the objcg direction.

The main concern I have with *just* reparenting LRU pages is that for
the long running systems, the root memcg will become a dumping ground.
In addition a job running multiple times on a machine will see
inconsistent memory usage if it re-accesses the file pages which were
reparented to the root memcg.

Please note that I do agree with the mentioned problem and we do see
this issue in our fleet. Internally we have a "memcg mount option"
feature which couples a file system with a memcg and all file pages
allocated on that file system will be charged to that memcg. Multiple
instances (concurrent or subsequent) of the job will use that file
system (with a dedicated memcg) without leaving the zombies behind. I
am not pushing for this solution as it comes with its own intricacies
(e.g. if memcg coupled with a file system has a limit, the oom
behavior would be awkward and therefore internally we don't put a
limit on such memcgs). Though I want this to be part of discussion.

I think the underlying reasons behind this issue are:

1) Filesystem shared by disjoint jobs.
2) For job dedicated filesystems, the lifetime of the filesystem is
different from the lifetime of the job.

For now, we have two potential solutions to the
zombies-due-to-offlined-LRU-pages issue, i.e. (1) reparenting and (2)
pairing memcg and filesystem. For reparenting, the cons are
inconsistent memory usage and the root memcg potentially becoming a
dumping ground. For pairing, the oom behavior is awkward, which is the
same for any type of remote charging.

I am wondering how we can resolve the cons for each. To resolve the
root-memcg-dump issue in the reparenting, maybe we uncharge the page
when it reaches root and the next accessor will be charged. For
inconsistent memory usage, either users accept the inconsistency or
force reclaim before offline, which will reduce the benefit of sharing
the filesystem with subsequent instances of the job. For the pairing,
maybe we can punt to the user/admin to not set a limit on such memcgs
to avoid awkward oom situations.

Thoughts?

2021-03-30 19:00:29

by Roman Gushchin

[permalink] [raw]
Subject: Re: [RFC PATCH 00/15] Use obj_cgroup APIs to charge the LRU pages

On Tue, Mar 30, 2021 at 11:34:11AM -0700, Shakeel Butt wrote:
> On Tue, Mar 30, 2021 at 3:20 AM Muchun Song <[email protected]> wrote:
> >
> > Since the following patchsets applied. All the kernel memory are charged
> > with the new APIs of obj_cgroup.
> >
> > [v17,00/19] The new cgroup slab memory controller
> > [v5,0/7] Use obj_cgroup APIs to charge kmem pages
> >
> > But user memory allocations (LRU pages) pinning memcgs for a long time -
> > it exists at a larger scale and is causing recurring problems in the real
> > world: page cache doesn't get reclaimed for a long time, or is used by the
> > second, third, fourth, ... instance of the same job that was restarted into
> > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> > and make page reclaim very inefficient.
> >
> > We can convert LRU pages and most other raw memcg pins to the objcg direction
> > to fix this problem, and then the LRU pages will not pin the memcgs.
> >
> > This patchset aims to make the LRU pages to drop the reference to memory
> > cgroup by using the APIs of obj_cgroup. Finally, we can see that the number
> > of the dying cgroups will not increase if we run the following test script.
> >
> > ```bash
> > #!/bin/bash
> >
> > cat /proc/cgroups | grep memory
> >
> > cd /sys/fs/cgroup/memory
> >
> > for i in range{1..500}
> > do
> > mkdir test
> > echo $$ > test/cgroup.procs
> > sleep 60 &
> > echo $$ > cgroup.procs
> > echo `cat test/cgroup.procs` > cgroup.procs
> > rmdir test
> > done
> >
> > cat /proc/cgroups | grep memory
> > ```
> >
> > Patch 1 aims to fix page charging in page replacement.
> > Patch 2-5 are code cleanup and simplification.
> > Patch 6-15 convert LRU pages pin to the objcg direction.
>
> The main concern I have with *just* reparenting LRU pages is that for
> the long running systems, the root memcg will become a dumping ground.
> In addition a job running multiple times on a machine will see
> inconsistent memory usage if it re-accesses the file pages which were
> reparented to the root memcg.

I agree, but also the reparenting is not the perfect thing in a combination
with any memory protections (e.g. memory.low).

Imagine the following configuration:
workload.slice
- workload_gen_1.service memory.min = 30G
- workload_gen_2.service memory.min = 30G
- workload_gen_3.service memory.min = 30G
...

Parent cgroup and several generations of the child cgroup, protected by a memory.low.
Once the memory is getting reparented, it's not protected anymore.

I guess we need something smarter: e.g. reassign a page to a different
cgroup if the page is activated/rotated and is currently on a dying lru.

Also, I'm somewhat concerned about the interaction of the reparenting
with the writeback and dirty throttling. How does it work together?

Thanks!

2021-03-30 21:11:50

by Johannes Weiner

[permalink] [raw]
Subject: Re: [RFC PATCH 00/15] Use obj_cgroup APIs to charge the LRU pages

On Tue, Mar 30, 2021 at 11:34:11AM -0700, Shakeel Butt wrote:
> On Tue, Mar 30, 2021 at 3:20 AM Muchun Song <[email protected]> wrote:
> >
> > Since the following patchsets applied. All the kernel memory are charged
> > with the new APIs of obj_cgroup.
> >
> > [v17,00/19] The new cgroup slab memory controller
> > [v5,0/7] Use obj_cgroup APIs to charge kmem pages
> >
> > But user memory allocations (LRU pages) pinning memcgs for a long time -
> > it exists at a larger scale and is causing recurring problems in the real
> > world: page cache doesn't get reclaimed for a long time, or is used by the
> > second, third, fourth, ... instance of the same job that was restarted into
> > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> > and make page reclaim very inefficient.
> >
> > We can convert LRU pages and most other raw memcg pins to the objcg direction
> > to fix this problem, and then the LRU pages will not pin the memcgs.
> >
> > This patchset aims to make the LRU pages to drop the reference to memory
> > cgroup by using the APIs of obj_cgroup. Finally, we can see that the number
> > of the dying cgroups will not increase if we run the following test script.
> >
> > ```bash
> > #!/bin/bash
> >
> > cat /proc/cgroups | grep memory
> >
> > cd /sys/fs/cgroup/memory
> >
> > for i in range{1..500}
> > do
> > mkdir test
> > echo $$ > test/cgroup.procs
> > sleep 60 &
> > echo $$ > cgroup.procs
> > echo `cat test/cgroup.procs` > cgroup.procs
> > rmdir test
> > done
> >
> > cat /proc/cgroups | grep memory
> > ```
> >
> > Patch 1 aims to fix page charging in page replacement.
> > Patch 2-5 are code cleanup and simplification.
> > Patch 6-15 convert LRU pages pin to the objcg direction.
>
> The main concern I have with *just* reparenting LRU pages is that for
> the long running systems, the root memcg will become a dumping ground.
> In addition a job running multiple times on a machine will see
> inconsistent memory usage if it re-accesses the file pages which were
> reparented to the root memcg.

I don't understand how Muchun's patches are supposed to *change* the
behavior the way you are describing it. This IS today's behavior.

We have hierarchical accounting, and a page that belongs to a leaf
cgroup will automatically belong to all its parents.

Further, if you delete a cgroup today, the abandoned cache will stay
physically linked to that cgroup, but that zombie group no longer acts
as a control structure: it imposes no limit and no protection; the
pages will be reclaimed as if it WERE linked to the parent.

For all intents and purposes, when you delete a cgroup today, its
remaining pages ARE dumped onto the parent.

The only difference is that today they pointlessly pin the leaf cgroup
as a holding vessel - which is then round-robin'd from the parent
during reclaim in order to pretend that all these child pages actually
ARE linked to the parent's LRU list.

Remember how we used to have every page physically linked to multiple
lrus? The leaf cgroup and the root?

All pages always belong to the (virtual) LRU list of all ancestor
cgroups. The only thing Muchun changes is that they no longer pin a
cgroup that has no semantical meaning anymore (because it's neither
visible to the user nor exerts any control over the pages anymore).

Maybe I'm missing something that either you or Roman can explain to
me. But this series looks like a (rare) pure win.

Whether you like the current semantics is a separate discussion IMO.

> Please note that I do agree with the mentioned problem and we do see
> this issue in our fleet. Internally we have a "memcg mount option"
> feature which couples a file system with a memcg and all file pages
> allocated on that file system will be charged to that memcg. Multiple
> instances (concurrent or subsequent) of the job will use that file
> system (with a dedicated memcg) without leaving the zombies behind. I
> am not pushing for this solution as it comes with its own intricacies
> (e.g. if memcg coupled with a file system has a limit, the oom
> behavior would be awkward and therefore internally we don't put a
> limit on such memcgs). Though I want this to be part of discussion.

Right, you disconnect memory from the tasks that are allocating it,
and so you can't assign culpability when you need to.

OOM is one thing, but there are also CPU cycles and IO bandwidth
consumed during reclaim.

> I think the underlying reasons behind this issue are:
>
> 1) Filesystem shared by disjoint jobs.
> 2) For job dedicated filesystems, the lifetime of the filesystem is
> different from the lifetime of the job.

There is also the case of deleting a cgroup just to recreate it right
after for the same job. Many job managers do this on restart right now
- like systemd, and what we're using in our fleet. This seems
avoidable by recycling a group for another instance of the same job.

Sharing is a more difficult discussion. If you access a page that you
share with another cgroup, it may or may not be subject to your own or
your buddy's memory limits. The only limit it is guaranteed to be
subjected to is that of your parent. So one thing I could imagine is,
instead of having a separate cgroup outside the hierarchy, we would
reparent live pages the second they are accessed from a foreign
cgroup. And reparent them until you reach the first common ancestor.

This way, when you mount a filesystem shared by two jobs, you can put
them into a joint subtree, and the root level of this subtree captures
all the memory (as well as the reclaim CPU and IO) used by the two
jobs - the private portions and the shared portions - and doesn't make
them the liability of jobs in the system that DON'T share the same fs.

But again, this is a useful discussion to have, but I don't quite see
why it's relevant to Muchun's patches. They're purely an optimization.

So I'd like to clear that up first before going further.

2021-03-30 21:32:34

by Johannes Weiner

[permalink] [raw]
Subject: Re: [RFC PATCH 00/15] Use obj_cgroup APIs to charge the LRU pages

On Tue, Mar 30, 2021 at 11:58:31AM -0700, Roman Gushchin wrote:
> On Tue, Mar 30, 2021 at 11:34:11AM -0700, Shakeel Butt wrote:
> > On Tue, Mar 30, 2021 at 3:20 AM Muchun Song <[email protected]> wrote:
> > >
> > > Since the following patchsets applied. All the kernel memory are charged
> > > with the new APIs of obj_cgroup.
> > >
> > > [v17,00/19] The new cgroup slab memory controller
> > > [v5,0/7] Use obj_cgroup APIs to charge kmem pages
> > >
> > > But user memory allocations (LRU pages) pinning memcgs for a long time -
> > > it exists at a larger scale and is causing recurring problems in the real
> > > world: page cache doesn't get reclaimed for a long time, or is used by the
> > > second, third, fourth, ... instance of the same job that was restarted into
> > > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> > > and make page reclaim very inefficient.
> > >
> > > We can convert LRU pages and most other raw memcg pins to the objcg direction
> > > to fix this problem, and then the LRU pages will not pin the memcgs.
> > >
> > > This patchset aims to make the LRU pages to drop the reference to memory
> > > cgroup by using the APIs of obj_cgroup. Finally, we can see that the number
> > > of the dying cgroups will not increase if we run the following test script.
> > >
> > > ```bash
> > > #!/bin/bash
> > >
> > > cat /proc/cgroups | grep memory
> > >
> > > cd /sys/fs/cgroup/memory
> > >
> > > for i in range{1..500}
> > > do
> > > mkdir test
> > > echo $$ > test/cgroup.procs
> > > sleep 60 &
> > > echo $$ > cgroup.procs
> > > echo `cat test/cgroup.procs` > cgroup.procs
> > > rmdir test
> > > done
> > >
> > > cat /proc/cgroups | grep memory
> > > ```
> > >
> > > Patch 1 aims to fix page charging in page replacement.
> > > Patch 2-5 are code cleanup and simplification.
> > > Patch 6-15 convert LRU pages pin to the objcg direction.
> >
> > The main concern I have with *just* reparenting LRU pages is that for
> > the long running systems, the root memcg will become a dumping ground.
> > In addition a job running multiple times on a machine will see
> > inconsistent memory usage if it re-accesses the file pages which were
> > reparented to the root memcg.
>
> I agree, but also the reparenting is not the perfect thing in a combination
> with any memory protections (e.g. memory.low).
>
> Imagine the following configuration:
> workload.slice
> - workload_gen_1.service memory.min = 30G
> - workload_gen_2.service memory.min = 30G
> - workload_gen_3.service memory.min = 30G
> ...
>
> Parent cgroup and several generations of the child cgroup, protected by a memory.low.
> Once the memory is getting reparented, it's not protected anymore.

That doesn't sound right.

A deleted cgroup today exerts no control over its abandoned
pages. css_reset() will blow out any control settings.

If you're talking about protection previously inherited by
workload.slice, that continues to apply as it always has.

None of this is really accidental. Per definition the workload.slice
control domain includes workload_gen_1.service. And per definition,
the workload_gen_1.service domain ceases to exist when you delete it.

There are no (or shouldn't be any!) semantic changes from the physical
unlinking from a dead control domain.

> Also, I'm somewhat concerned about the interaction of the reparenting
> with the writeback and dirty throttling. How does it work together?

What interaction specifically?

When you delete a cgroup that had both the block and the memory
controller enabled, the control domain of both goes away and it
becomes subject to whatever control domain is above it (if any).

A higher control domain in turn takes a recursive view of the subtree,
see mem_cgroup_wb_stats(), so when control is exerted, it applies
regardless of how and where pages are physically linked in children.

It's also already possible to enable e.g. block control only at a very
high level and memory control down to a lower level. Per design this
code can live with different domain sizes for memory and block.

2021-03-30 22:08:20

by Roman Gushchin

[permalink] [raw]
Subject: Re: [RFC PATCH 00/15] Use obj_cgroup APIs to charge the LRU pages

On Tue, Mar 30, 2021 at 05:30:10PM -0400, Johannes Weiner wrote:
> On Tue, Mar 30, 2021 at 11:58:31AM -0700, Roman Gushchin wrote:
> > On Tue, Mar 30, 2021 at 11:34:11AM -0700, Shakeel Butt wrote:
> > > On Tue, Mar 30, 2021 at 3:20 AM Muchun Song <[email protected]> wrote:
> > > >
> > > > Since the following patchsets applied. All the kernel memory are charged
> > > > with the new APIs of obj_cgroup.
> > > >
> > > > [v17,00/19] The new cgroup slab memory controller
> > > > [v5,0/7] Use obj_cgroup APIs to charge kmem pages
> > > >
> > > > But user memory allocations (LRU pages) pinning memcgs for a long time -
> > > > it exists at a larger scale and is causing recurring problems in the real
> > > > world: page cache doesn't get reclaimed for a long time, or is used by the
> > > > second, third, fourth, ... instance of the same job that was restarted into
> > > > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> > > > and make page reclaim very inefficient.
> > > >
> > > > We can convert LRU pages and most other raw memcg pins to the objcg direction
> > > > to fix this problem, and then the LRU pages will not pin the memcgs.
> > > >
> > > > This patchset aims to make the LRU pages to drop the reference to memory
> > > > cgroup by using the APIs of obj_cgroup. Finally, we can see that the number
> > > > of the dying cgroups will not increase if we run the following test script.
> > > >
> > > > ```bash
> > > > #!/bin/bash
> > > >
> > > > cat /proc/cgroups | grep memory
> > > >
> > > > cd /sys/fs/cgroup/memory
> > > >
> > > > for i in range{1..500}
> > > > do
> > > > mkdir test
> > > > echo $$ > test/cgroup.procs
> > > > sleep 60 &
> > > > echo $$ > cgroup.procs
> > > > echo `cat test/cgroup.procs` > cgroup.procs
> > > > rmdir test
> > > > done
> > > >
> > > > cat /proc/cgroups | grep memory
> > > > ```
> > > >
> > > > Patch 1 aims to fix page charging in page replacement.
> > > > Patch 2-5 are code cleanup and simplification.
> > > > Patch 6-15 convert LRU pages pin to the objcg direction.
> > >
> > > The main concern I have with *just* reparenting LRU pages is that for
> > > the long running systems, the root memcg will become a dumping ground.
> > > In addition a job running multiple times on a machine will see
> > > inconsistent memory usage if it re-accesses the file pages which were
> > > reparented to the root memcg.
> >
> > I agree, but also the reparenting is not the perfect thing in a combination
> > with any memory protections (e.g. memory.low).
> >
> > Imagine the following configuration:
> > workload.slice
> > - workload_gen_1.service memory.min = 30G
> > - workload_gen_2.service memory.min = 30G
> > - workload_gen_3.service memory.min = 30G
> > ...
> >
> > Parent cgroup and several generations of the child cgroup, protected by a memory.low.
> > Once the memory is getting reparented, it's not protected anymore.
>
> That doesn't sound right.
>
> A deleted cgroup today exerts no control over its abandoned
> pages. css_reset() will blow out any control settings.

I know. Currently it works in the following way: once cgroup gen_1 is deleted,
its memory is not protected anymore, so eventually it's getting evicted and
re-faulted as gen_2 (or gen_N) memory. Muchun's patchset doesn't change this,
of course. But long-term we likely wanna re-charge such pages to new cgroups
and avoid unnecessary evictions and re-faults. Switching to obj_cgroups doesn't
help and likely will complicate this change. So I'm a bit skeptical here.

Also, in my experience the pagecache is not the main/worst memcg reference
holder (writeback structures are way worse). Pages are relatively large
(in comparison to some slab objects), and rarely there is only one page pinning
a separate memcg. And switching to obj_cgroup doesn't completely eliminate
the problem: we just switch from accumulating larger mem_cgroups to accumulating
smaller obj_cgroups.

With all this said, I'm not necessarily opposing the patchset, but it's
necessary to discuss how it fits into the long-term picture.
E.g. if we're going to use obj_cgroup API for page-sized objects, shouldn't
we split it back into the reparenting and bytes-sized accounting parts,
as I initially suggested. And shouldn't we move the reparenting part to
the cgroup core level, so we could use it for other controllers
(e.g. to fix the writeback problem).

>
> If you're talking about protection previously inherited by
> workload.slice, that continues to apply as it always has.
>
> None of this is really accidental. Per definition the workload.slice
> control domain includes workload_gen_1.service. And per definition,
> the workload_gen_1.service domain ceases to exist when you delete it.
>
> There are no (or shouldn't be any!) semantic changes from the physical
> unlinking from a dead control domain.
>
> > Also, I'm somewhat concerned about the interaction of the reparenting
> > with the writeback and dirty throttling. How does it work together?
>
> What interaction specifically?
>
> When you delete a cgroup that had both the block and the memory
> controller enabled, the control domain of both goes away and it
> becomes subject to whatever control domain is above it (if any).
>
> A higher control domain in turn takes a recursive view of the subtree,
> see mem_cgroup_wb_stats(), so when control is exerted, it applies
> regardless of how and where pages are physically linked in children.
>
> It's also already possible to enable e.g. block control only at a very
> high level and memory control down to a lower level. Per design this
> code can live with different domain sizes for memory and block.

I'm totally happy if it's safe, I just don't know this code well enough
to be sure without taking a closer look.

2021-03-31 00:31:49

by Shakeel Butt

[permalink] [raw]
Subject: Re: [RFC PATCH 00/15] Use obj_cgroup APIs to charge the LRU pages

On Tue, Mar 30, 2021 at 2:10 PM Johannes Weiner <[email protected]> wrote:
>
[...]
> > The main concern I have with *just* reparenting LRU pages is that for
> > the long running systems, the root memcg will become a dumping ground.
> > In addition a job running multiple times on a machine will see
> > inconsistent memory usage if it re-accesses the file pages which were
> > reparented to the root memcg.
>
> I don't understand how Muchun's patches are supposed to *change* the
> behavior the way you are describing it. This IS today's behavior.
>
> We have hierarchical accounting, and a page that belongs to a leaf
> cgroup will automatically belong to all its parents.
>
> Further, if you delete a cgroup today, the abandoned cache will stay
> physically linked to that cgroup, but that zombie group no longer acts
> as a control structure: it imposes no limit and no protection; the
> pages will be reclaimed as if it WERE linked to the parent.
>
> For all intents and purposes, when you delete a cgroup today, its
> remaining pages ARE dumped onto the parent.
>
> The only difference is that today they pointlessly pin the leaf cgroup
> as a holding vessel - which is then round-robin'd from the parent
> during reclaim in order to pretend that all these child pages actually
> ARE linked to the parent's LRU list.
>
> Remember how we used to have every page physically linked to multiple
> lrus? The leaf cgroup and the root?
>
> All pages always belong to the (virtual) LRU list of all ancestor
> cgroups. The only thing Muchun changes is that they no longer pin a
> cgroup that has no semantical meaning anymore (because it's neither
> visible to the user nor exerts any control over the pages anymore).
>

Indeed you are right. Even if the physical representation of the tree
has changed, the logical picture remains the same.

[Subconsciously I was sad that we will lose the information about the
origin memcg of the page for debugging purposes but then I thought if
we really need it then we can just add that metadata in the obj_cgroup
object. So, never mind.]

> Maybe I'm missing something that either you or Roman can explain to
> me. But this series looks like a (rare) pure win.
>
> Whether you like the current semantics is a separate discussion IMO.
>
> > Please note that I do agree with the mentioned problem and we do see
> > this issue in our fleet. Internally we have a "memcg mount option"
> > feature which couples a file system with a memcg and all file pages
> > allocated on that file system will be charged to that memcg. Multiple
> > instances (concurrent or subsequent) of the job will use that file
> > system (with a dedicated memcg) without leaving the zombies behind. I
> > am not pushing for this solution as it comes with its own intricacies
> > (e.g. if memcg coupled with a file system has a limit, the oom
> > behavior would be awkward and therefore internally we don't put a
> > limit on such memcgs). Though I want this to be part of discussion.
>
> Right, you disconnect memory from the tasks that are allocating it,
> and so you can't assign culpability when you need to.
>
> OOM is one thing, but there are also CPU cycles and IO bandwidth
> consumed during reclaim.
>

We didn't really have any issue regarding CPU or IO but that might be
due to our unique setup (i.e. no local disk).

> > I think the underlying reasons behind this issue are:
> >
> > 1) Filesystem shared by disjoint jobs.
> > 2) For job dedicated filesystems, the lifetime of the filesystem is
> > different from the lifetime of the job.
>
> There is also the case of deleting a cgroup just to recreate it right
> after for the same job. Many job managers do this on restart right now
> - like systemd, and what we're using in our fleet. This seems
> avoidable by recycling a group for another instance of the same job.

I was bundling the scenario you mentioned with (2) i.e. the filesystem
persists across multiple subsequent instances of the same job.

>
> Sharing is a more difficult discussion. If you access a page that you
> share with another cgroup, it may or may not be subject to your own or
> your buddy's memory limits. The only limit it is guaranteed to be
> subjected to is that of your parent. So one thing I could imagine is,
> instead of having a separate cgroup outside the hierarchy, we would
> reparent live pages the second they are accessed from a foreign
> cgroup. And reparent them until you reach the first common ancestor.
>
> This way, when you mount a filesystem shared by two jobs, you can put
> them into a joint subtree, and the root level of this subtree captures
> all the memory (as well as the reclaim CPU and IO) used by the two
> jobs - the private portions and the shared portions - and doesn't make
> them the liability of jobs in the system that DON'T share the same fs.

I will give more thought on this idea and see where it goes.

>
> But again, this is a useful discussion to have, but I don't quite see
> why it's relevant to Muchun's patches. They're purely an optimization.
>
> So I'd like to clear that up first before going further.

I think we are on the same page i.e. these patches change the physical
representation of the memcg tree but logically it remains the same and
fixes the zombie memcg issue.

2021-03-31 03:29:57

by Balbir Singh

[permalink] [raw]
Subject: Re: [RFC PATCH 00/15] Use obj_cgroup APIs to charge the LRU pages

On Tue, Mar 30, 2021 at 05:10:04PM -0400, Johannes Weiner wrote:
> On Tue, Mar 30, 2021 at 11:34:11AM -0700, Shakeel Butt wrote:
> > On Tue, Mar 30, 2021 at 3:20 AM Muchun Song <[email protected]> wrote:
> > >
> > > Since the following patchsets applied. All the kernel memory are charged
> > > with the new APIs of obj_cgroup.
> > >
> > > [v17,00/19] The new cgroup slab memory controller
> > > [v5,0/7] Use obj_cgroup APIs to charge kmem pages
> > >
> > > But user memory allocations (LRU pages) pinning memcgs for a long time -
> > > it exists at a larger scale and is causing recurring problems in the real
> > > world: page cache doesn't get reclaimed for a long time, or is used by the
> > > second, third, fourth, ... instance of the same job that was restarted into
> > > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> > > and make page reclaim very inefficient.
> > >
> > > We can convert LRU pages and most other raw memcg pins to the objcg direction
> > > to fix this problem, and then the LRU pages will not pin the memcgs.
> > >
> > > This patchset aims to make the LRU pages to drop the reference to memory
> > > cgroup by using the APIs of obj_cgroup. Finally, we can see that the number
> > > of the dying cgroups will not increase if we run the following test script.
> > >
> > > ```bash
> > > #!/bin/bash
> > >
> > > cat /proc/cgroups | grep memory
> > >
> > > cd /sys/fs/cgroup/memory
> > >
> > > for i in range{1..500}
> > > do
> > > mkdir test
> > > echo $$ > test/cgroup.procs
> > > sleep 60 &
> > > echo $$ > cgroup.procs
> > > echo `cat test/cgroup.procs` > cgroup.procs
> > > rmdir test
> > > done
> > >
> > > cat /proc/cgroups | grep memory
> > > ```
> > >
> > > Patch 1 aims to fix page charging in page replacement.
> > > Patch 2-5 are code cleanup and simplification.
> > > Patch 6-15 convert LRU pages pin to the objcg direction.
> >
> > The main concern I have with *just* reparenting LRU pages is that for
> > the long running systems, the root memcg will become a dumping ground.
> > In addition a job running multiple times on a machine will see
> > inconsistent memory usage if it re-accesses the file pages which were
> > reparented to the root memcg.
>
> I don't understand how Muchun's patches are supposed to *change* the
> behavior the way you are describing it. This IS today's behavior.
>
> We have hierarchical accounting, and a page that belongs to a leaf
> cgroup will automatically belong to all its parents.
>
> Further, if you delete a cgroup today, the abandoned cache will stay
> physically linked to that cgroup, but that zombie group no longer acts
> as a control structure: it imposes no limit and no protection; the
> pages will be reclaimed as if it WERE linked to the parent.
>
> For all intents and purposes, when you delete a cgroup today, its
> remaining pages ARE dumped onto the parent.
>
> The only difference is that today they pointlessly pin the leaf cgroup
> as a holding vessel - which is then round-robin'd from the parent
> during reclaim in order to pretend that all these child pages actually
> ARE linked to the parent's LRU list.
>
> Remember how we used to have every page physically linked to multiple
> lrus? The leaf cgroup and the root?
>
> All pages always belong to the (virtual) LRU list of all ancestor
> cgroups. The only thing Muchun changes is that they no longer pin a
> cgroup that has no semantical meaning anymore (because it's neither
> visible to the user nor exerts any control over the pages anymore).
>
> Maybe I'm missing something that either you or Roman can explain to
> me. But this series looks like a (rare) pure win.
>
> Whether you like the current semantics is a separate discussion IMO.
>
> > Please note that I do agree with the mentioned problem and we do see
> > this issue in our fleet. Internally we have a "memcg mount option"
> > feature which couples a file system with a memcg and all file pages
> > allocated on that file system will be charged to that memcg. Multiple
> > instances (concurrent or subsequent) of the job will use that file
> > system (with a dedicated memcg) without leaving the zombies behind. I
> > am not pushing for this solution as it comes with its own intricacies
> > (e.g. if memcg coupled with a file system has a limit, the oom
> > behavior would be awkward and therefore internally we don't put a
> > limit on such memcgs). Though I want this to be part of discussion.
>
> Right, you disconnect memory from the tasks that are allocating it,
> and so you can't assign culpability when you need to.
>
> OOM is one thing, but there are also CPU cycles and IO bandwidth
> consumed during reclaim.
>
> > I think the underlying reasons behind this issue are:
> >
> > 1) Filesystem shared by disjoint jobs.
> > 2) For job dedicated filesystems, the lifetime of the filesystem is
> > different from the lifetime of the job.
>
> There is also the case of deleting a cgroup just to recreate it right
> after for the same job. Many job managers do this on restart right now
> - like systemd, and what we're using in our fleet. This seems
> avoidable by recycling a group for another instance of the same job.
>
> Sharing is a more difficult discussion. If you access a page that you
> share with another cgroup, it may or may not be subject to your own or
> your buddy's memory limits. The only limit it is guaranteed to be
> subjected to is that of your parent. So one thing I could imagine is,
> instead of having a separate cgroup outside the hierarchy, we would
> reparent live pages the second they are accessed from a foreign
> cgroup. And reparent them until you reach the first common ancestor.
>
> This way, when you mount a filesystem shared by two jobs, you can put
> them into a joint subtree, and the root level of this subtree captures
> all the memory (as well as the reclaim CPU and IO) used by the two
> jobs - the private portions and the shared portions - and doesn't make
> them the liability of jobs in the system that DON'T share the same fs.
>
> But again, this is a useful discussion to have, but I don't quite see
> why it's relevant to Muchun's patches. They're purely an optimization.
>
> So I'd like to clear that up first before going further.
>

I suspect a lot of the issue really is the lack of lockstepping
between a page (unmapped page cache) and the corresponding memcgroup's
lifecycle. When we delete a memcgroup, we sort of lose accounting
(depending on the inheriting parent) and ideally we want to bring back
the accounting when the page is reused in a different cgroup (almost
like first touch). I would like to look at the patches and see if they
do solve the issue that leads to zombie cgroups hanging around. In my
experience, the combination of namespaces and the number of cgroups
(several of which could be zombies) does not scale well.

Balbir Singh.

2021-03-31 15:20:54

by Johannes Weiner

[permalink] [raw]
Subject: Re: [RFC PATCH 00/15] Use obj_cgroup APIs to charge the LRU pages

On Tue, Mar 30, 2021 at 03:05:42PM -0700, Roman Gushchin wrote:
> On Tue, Mar 30, 2021 at 05:30:10PM -0400, Johannes Weiner wrote:
> > On Tue, Mar 30, 2021 at 11:58:31AM -0700, Roman Gushchin wrote:
> > > On Tue, Mar 30, 2021 at 11:34:11AM -0700, Shakeel Butt wrote:
> > > > On Tue, Mar 30, 2021 at 3:20 AM Muchun Song <[email protected]> wrote:
> > > > >
> > > > > Since the following patchsets applied. All the kernel memory are charged
> > > > > with the new APIs of obj_cgroup.
> > > > >
> > > > > [v17,00/19] The new cgroup slab memory controller
> > > > > [v5,0/7] Use obj_cgroup APIs to charge kmem pages
> > > > >
> > > > > But user memory allocations (LRU pages) pinning memcgs for a long time -
> > > > > it exists at a larger scale and is causing recurring problems in the real
> > > > > world: page cache doesn't get reclaimed for a long time, or is used by the
> > > > > second, third, fourth, ... instance of the same job that was restarted into
> > > > > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> > > > > and make page reclaim very inefficient.
> > > > >
> > > > > We can convert LRU pages and most other raw memcg pins to the objcg direction
> > > > > to fix this problem, and then the LRU pages will not pin the memcgs.
> > > > >
> > > > > This patchset aims to make the LRU pages to drop the reference to memory
> > > > > cgroup by using the APIs of obj_cgroup. Finally, we can see that the number
> > > > > of the dying cgroups will not increase if we run the following test script.
> > > > >
> > > > > ```bash
> > > > > #!/bin/bash
> > > > >
> > > > > cat /proc/cgroups | grep memory
> > > > >
> > > > > cd /sys/fs/cgroup/memory
> > > > >
> > > > > for i in range{1..500}
> > > > > do
> > > > > mkdir test
> > > > > echo $$ > test/cgroup.procs
> > > > > sleep 60 &
> > > > > echo $$ > cgroup.procs
> > > > > echo `cat test/cgroup.procs` > cgroup.procs
> > > > > rmdir test
> > > > > done
> > > > >
> > > > > cat /proc/cgroups | grep memory
> > > > > ```
> > > > >
> > > > > Patch 1 aims to fix page charging in page replacement.
> > > > > Patch 2-5 are code cleanup and simplification.
> > > > > Patch 6-15 convert LRU pages pin to the objcg direction.
> > > >
> > > > The main concern I have with *just* reparenting LRU pages is that for
> > > > the long running systems, the root memcg will become a dumping ground.
> > > > In addition a job running multiple times on a machine will see
> > > > inconsistent memory usage if it re-accesses the file pages which were
> > > > reparented to the root memcg.
> > >
> > > I agree, but also the reparenting is not the perfect thing in a combination
> > > with any memory protections (e.g. memory.low).
> > >
> > > Imagine the following configuration:
> > > workload.slice
> > > - workload_gen_1.service memory.min = 30G
> > > - workload_gen_2.service memory.min = 30G
> > > - workload_gen_3.service memory.min = 30G
> > > ...
> > >
> > > Parent cgroup and several generations of the child cgroup, protected by a memory.low.
> > > Once the memory is getting reparented, it's not protected anymore.
> >
> > That doesn't sound right.
> >
> > A deleted cgroup today exerts no control over its abandoned
> > pages. css_reset() will blow out any control settings.
>
> I know. Currently it works in the following way: once cgroup gen_1 is deleted,
> its memory is not protected anymore, so eventually it's getting evicted and
> re-faulted as gen_2 (or gen_N) memory. Muchun's patchset doesn't change this,
> of course. But long-term we likely wanna re-charge such pages to new cgroups
> and avoid unnecessary evictions and re-faults. Switching to obj_cgroups doesn't
> help and likely will complicate this change. So I'm a bit skeptical here.

We should be careful with the long-term plans.

The zombie issue is a pretty urgent concern that has caused several
production emergencies now. It needs a fix sooner rather than later.

The long-term plans of how to handle shared/reused data better will
require some time to work out. There are MANY open questions around
recharging to arbitrary foreign cgroup users. Like how to identify
accesses after the page's cgroup has been deleted: Do we need to
annotate every page cache lookup? Do we need to inject minor faults to
recharge mmapped pages? We can't wait for this discussion to complete.

I also think the objcg is helping with that direction rather than
getting in the way because:

- The old charge moving code we have for LRU pages isn't reusable
anyway. It relies on optimistic locking to work, and may leave
memory behind in arbitrary and unpredictable ways. After a few
cycles, objects tend to be spread all over the place.

The objcg provides a new synchronization scheme that will always
work because the readside (page/object to memcg lookup) needs to be
prepared for the memcg to change and/or die at any time (see the
sketch after this list).

- There isn't much point in recharging only some of the abandoned
memory. We've tried per-allocation class reparenting and it didn't
work out too well. Newly accounted allocations crop up faster than
we can conjure up tailor-made reparenting schemes for them.

The objcg provides a generic reference and reassignment scheme that
can be used by all tracked objects.
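
A sketch of the readside pattern referred to above, roughly what the
existing obj_cgroup code already does for kmem (simplified and
illustrative only):

	struct mem_cgroup *memcg;

	rcu_read_lock();
retry:
	memcg = obj_cgroup_memcg(objcg);
	if (unlikely(!css_tryget(&memcg->css)))
		/* memcg is being offlined, objcg was just reparented */
		goto retry;
	rcu_read_unlock();
	/* memcg is pinned from here on and can be used safely */

Because the lookup retries under RCU, it doesn't matter if the objcg is
reparented between the lookup and the tryget; the reader always ends up
with a live memcg.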

Importantly, once we have something as central as LRU pages converted, we
can require all new allocation tracking to use objcg from the start.

> Also, in my experience the pagecache is not the main/worst memcg reference
> holder (writeback structures are way worse). Pages are relatively large
> (in comparison to some slab objects), and rarely there is only one page pinning
> a separate memcg.

I've seen that exact thing cause zombies to pile up: one or two pages
in the old group, pinned by the next instance of a job. If the job has
a sufficiently large working set, this can pin a LOT of dead
cgroups. Is it the biggest or most problematic source of buildups?
Maybe not. But there is definitely cause to fix it.

LRU pages are also the most difficult to convert because of their rich
interactions. It's a good test of the api. If it works for those
pages, converting everybody else will be much simpler.

And again, as the core memory consumer it sets the tone of how memcg
rmapping is supposed to work for new and existing object types. This
helps align ongoing development.

> And switching to obj_cgroup doesn't completely eliminate the
> problem: we just switch from accumulating larger mem_cgroups to
> accumulating smaller obj_cgroups.

In your own words, the discrepancy between tiny objects pinning large
memcgs is a problem. objcgs are smaller than most objects, so it's not
much different than if an object were simply a few bytes bigger in size.
A memcg on the other hand is vastly bigger than most objects. It's
also composed of many allocations and so causes more fragmentation.

Another issue is that memcgs with abandoned objects need to be visited
by the reclaimer on every single reclaim walk, a hot path. The list of
objcgs on the other hand is only iterated when the cgroup is deleted,
which is not a fast path. It's also rare that parents with many dead
children are deleted at all (system.slice, workload.slice etc.)
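
To make that concrete, here is roughly what the deletion-time walk looks like (a simplified sketch of the memcg_reparent_objcgs() idea from the slab controller; locking, RCU annotations and refcount handling are omitted, so treat the body as illustrative):

```c
/*
 * Simplified sketch of objcg reparenting at memcg offline time.
 * Every stale objcg is repointed at the parent and the whole list is
 * spliced over in one go; reclaim never has to walk dead memcgs.
 */
static void memcg_reparent_objcgs(struct mem_cgroup *memcg,
				  struct mem_cgroup *parent)
{
	struct obj_cgroup *objcg;

	/* Repoint every stale objcg at the parent... */
	list_for_each_entry(objcg, &memcg->objcg_list, list)
		WRITE_ONCE(objcg->memcg, parent);

	/* ...and hand the whole list over in one splice. */
	list_splice(&memcg->objcg_list, &parent->objcg_list);
}
```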

So no, I would say for all intents and purposes, it fixes all the
problems we're having with zombie memcgs.

> With all this said, I'm not necessarily opposing the patchset, but it's
> necessary to discuss how it fits into the long-term picture.
> E.g. if we're going to use obj_cgroup API for page-sized objects, shouldn't
> we split it back into the reparenting and bytes-sized accounting parts,
> as I initially suggested. And shouldn't we move the reparenting part to
> the cgroup core level, so we could use it for other controllers
> (e.g. to fix the writeback problem).

Yes, I do think we want to generalize it. But I wouldn't say it's a
requirement for these patches, either:

- The byte-sized accounting part is one atomic_t. That's 4 bytes
pinned unnecessarily - compared to an entire struct memcg right now
which includes memory accounting, swap accounting, byte accounting,
and a whole lot of other things likely not used by the stale object.

- The cgroup generalization is a good idea too, but that doesn't
really change the callsites either. Unless you were thinking of
renaming, but objcg seems like a good, generic fit for a name to
describe the linkage between objects to a cgroup.

The memcg member will need to change into something generic (a
css_set type mapping), but that can likely be hidden behind
page_memcg(), objcg_memcg() and similar helpers.

Both of these aspects can be improved incrementally.
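
For reference, the objcg structure this hinges on is roughly the following (simplified; not the exact upstream layout):

```c
/*
 * Roughly the shape of obj_cgroup from the slab controller (simplified).
 * The whole byte-accounting state a stale object can pin is one
 * atomic_t; the memcg linkage sits behind obj_cgroup_memcg() and could
 * later be generalized without touching callsites.
 */
struct obj_cgroup {
	struct percpu_ref refcnt;
	struct mem_cgroup *memcg;	/* switched to the parent on offline */
	atomic_t nr_charged_bytes;	/* the byte-sized accounting part */
	union {
		struct list_head list;	/* entry on memcg->objcg_list */
		struct rcu_head rcu;
	};
};

static inline struct mem_cgroup *obj_cgroup_memcg(struct obj_cgroup *objcg)
{
	return READ_ONCE(objcg->memcg);	/* only stable under rcu_read_lock() */
}
```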

2021-04-01 18:32:46

by Muchun Song

[permalink] [raw]
Subject: Re: [External] Re: [RFC PATCH 00/15] Use obj_cgroup APIs to charge the LRU pages

On Wed, Mar 31, 2021 at 11:17 PM Johannes Weiner <[email protected]> wrote:
>
> On Tue, Mar 30, 2021 at 03:05:42PM -0700, Roman Gushchin wrote:
> > On Tue, Mar 30, 2021 at 05:30:10PM -0400, Johannes Weiner wrote:
> > > On Tue, Mar 30, 2021 at 11:58:31AM -0700, Roman Gushchin wrote:
> > > > On Tue, Mar 30, 2021 at 11:34:11AM -0700, Shakeel Butt wrote:
> > > > > On Tue, Mar 30, 2021 at 3:20 AM Muchun Song <[email protected]> wrote:
> > > > > >
> > > > > > Since the following patchsets applied. All the kernel memory are charged
> > > > > > with the new APIs of obj_cgroup.
> > > > > >
> > > > > > [v17,00/19] The new cgroup slab memory controller
> > > > > > [v5,0/7] Use obj_cgroup APIs to charge kmem pages
> > > > > >
> > > > > > But user memory allocations (LRU pages) pinning memcgs for a long time -
> > > > > > it exists at a larger scale and is causing recurring problems in the real
> > > > > > world: page cache doesn't get reclaimed for a long time, or is used by the
> > > > > > second, third, fourth, ... instance of the same job that was restarted into
> > > > > > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> > > > > > and make page reclaim very inefficient.
> > > > > >
> > > > > > We can convert LRU pages and most other raw memcg pins to the objcg direction
> > > > > > to fix this problem, and then the LRU pages will not pin the memcgs.
> > > > > >
> > > > > > This patchset aims to make the LRU pages to drop the reference to memory
> > > > > > cgroup by using the APIs of obj_cgroup. Finally, we can see that the number
> > > > > > of the dying cgroups will not increase if we run the following test script.
> > > > > >
> > > > > > ```bash
> > > > > > #!/bin/bash
> > > > > >
> > > > > > cat /proc/cgroups | grep memory
> > > > > >
> > > > > > cd /sys/fs/cgroup/memory
> > > > > >
> > > > > > for i in range{1..500}
> > > > > > do
> > > > > > mkdir test
> > > > > > echo $$ > test/cgroup.procs
> > > > > > sleep 60 &
> > > > > > echo $$ > cgroup.procs
> > > > > > echo `cat test/cgroup.procs` > cgroup.procs
> > > > > > rmdir test
> > > > > > done
> > > > > >
> > > > > > cat /proc/cgroups | grep memory
> > > > > > ```
> > > > > >
> > > > > > Patch 1 aims to fix page charging in page replacement.
> > > > > > Patch 2-5 are code cleanup and simplification.
> > > > > > Patch 6-15 convert LRU pages pin to the objcg direction.
> > > > >
> > > > > The main concern I have with *just* reparenting LRU pages is that for
> > > > > the long running systems, the root memcg will become a dumping ground.
> > > > > In addition a job running multiple times on a machine will see
> > > > > inconsistent memory usage if it re-accesses the file pages which were
> > > > > reparented to the root memcg.
> > > >
> > > > I agree, but also the reparenting is not the perfect thing in a combination
> > > > with any memory protections (e.g. memory.low).
> > > >
> > > > Imagine the following configuration:
> > > > workload.slice
> > > > - workload_gen_1.service memory.min = 30G
> > > > - workload_gen_2.service memory.min = 30G
> > > > - workload_gen_3.service memory.min = 30G
> > > > ...
> > > >
> > > > Parent cgroup and several generations of the child cgroup, protected by a memory.low.
> > > > Once the memory is getting reparented, it's not protected anymore.
> > >
> > > That doesn't sound right.
> > >
> > > A deleted cgroup today exerts no control over its abandoned
> > > pages. css_reset() will blow out any control settings.
> >
> > I know. Currently it works in the following way: once cgroup gen_1 is deleted,
> > it's memory is not protected anymore, so eventually it's getting evicted and
> > re-faulted as gen_2 (or gen_N) memory. Muchun's patchset doesn't change this,
> > of course. But long-term we likely wanna re-charge such pages to new cgroups
> > and avoid unnecessary evictions and re-faults. Switching to obj_cgroups doesn't
> > help and likely will complicate this change. So I'm a bit skeptical here.
>
> We should be careful with the long-term plans.
>
> The zombie issue is a pretty urgent concern that has caused several
> production emergencies now. It needs a fix sooner rather than later.

Thank you very much for clarifying the problem for me. I do agree
with you. This issue should be fixed ASAP, and using objcg is a good
choice. Dying objcgs should not be a problem, because an objcg is so
small compared to a memcg.

Thanks.

>
> The long-term plans of how to handle shared/reused data better will
> require some time to work out. There are MANY open questions around
> recharging to arbitrary foreign cgroup users. Like how to identify
> accesses after the page's cgroup has been deleted: Do we need to
> annotate every page cache lookup? Do we need to inject minor faults to
> recharge mmapped pages? We can't wait for this discussion to complete.
>
> I also think the objcg is helping with that direction rather than
> getting in the way because:
>
> - The old charge moving code we have for LRU pages isn't reusable
> anyway. It relies on optimistic locking to work, and may leave
> memory behind in arbitrary and unpredictable ways. After a few
> cycles, objects tend to be spread all over the place.
>
> The objcg provides a new synchronization scheme that will always
> work because the readside (page/object to memcg lookup) needs to be
> prepared for the memcg to change and/or die at any time.
>
> - There isn't much point in recharging only some of the abandoned
> memory. We've tried per-allocation class reparenting and it didn't
> work out too well. Newly accounted allocations crop up faster than
> we can conjure up tailor-made reparenting schemes for them.
>
> The objcg provides a generic reference and reassignment scheme that
> can be used by all tracked objects.
>
> Importantly, once we have a central thing as LRU pages converted, we
> can require all new allocation tracking to use objcg from the start.
>
> > Also, in my experience the pagecache is not the main/worst memcg reference
> > holder (writeback structures are way worse). Pages are relatively large
> > (in comparison to some slab objects), and rarely there is only one page pinning
> > a separate memcg.
>
> I've seen that exact thing cause zombies to pile up: one or two pages
> in the old group, pinned by the next instance of a job. If the job has
> a sufficiently large working set, this can pin a LOT of dead
> cgroups. Is it the biggest or most problematic source of buildups?
> Maybe not. But there is definitely cause to fix it.
>
> LRU pages are also the most difficult to convert because of their rich
> interactions. It's a good test of the api. If it works for those
> pages, converting everybody else will be much simpler.
>
> And again, as the core memory consumer it sets the tone of how memcg
> rmapping is supposed to work for new and existing object types. This
> helps align ongoing development.
>
> > And switching to obj_cgroup doesn't completely eliminate the
> > problem: we just switch from accumulating larger mem_cgroups to
> > accumulating smaller obj_cgroups.
>
> In your own words, the discrepancy between tiny objects pinning large
> memcgs is a problem. objcgs are smaller than most objects, so it's not
> much different as if an object were simply a few bytes bigger in size.
> A memcg on the other hand is vastly bigger than most objects. It's
> also composed of many allocations and so causes more fragmentation.
>
> Another issue is that memcgs with abandoned objects need to be visited
> by the reclaimer on every single reclaim walk, a hot path. The list of
> objcgs on the other hand is only iterated when the cgroup is deleted,
> which is not a fast path. It's also rare that parents with many dead
> children are deleted at all (system.slice, workload.slice etc.)
>
> So no, I would say for all intents and purposes, it fixes all the
> problems we're having with zombie memcgs.
>
> > With all this said, I'm not necessarily opposing the patchset, but it's
> > necessary to discuss how it fits into the long-term picture.
> > E.g. if we're going to use obj_cgroup API for page-sized objects, shouldn't
> > we split it back into the reparenting and bytes-sized accounting parts,
> > as I initially suggested. And shouldn't we move the reparenting part to
> > the cgroup core level, so we could use it for other controllers
> > (e.g. to fix the writeback problem).
>
> Yes, I do think we want to generalize it. But I wouldn't say it's a
> requirement for these patches, either:
>
> - The byte-sized accounting part is one atomic_t. That's 4 bytes
> pinned unnecessarily - compared to an entire struct memcg right now
> which includes memory accounting, swap accounting, byte accounting,
> and a whole lot of other things likely not used by the stale object.
>
> - The cgroup generalization is a good idea too, but that doesn't
> really change the callsites either. Unless you were thinking of
> renaming, but objcg seems like a good, generic fit for a name to
> describe the linkage between objects to a cgroup.
>
> The memcg member will need to change into something generic (a
> css_set type mapping), but that can likely be hidden behind
> page_memcg(), objcg_memcg() and similar helpers.
>
> Both of these aspects can be improved incrementally.

2021-04-01 21:37:18

by Roman Gushchin

[permalink] [raw]
Subject: Re: [External] Re: [RFC PATCH 00/15] Use obj_cgroup APIs to charge the LRU pages

On Fri, Apr 02, 2021 at 12:07:42AM +0800, Muchun Song wrote:
> On Wed, Mar 31, 2021 at 11:17 PM Johannes Weiner <[email protected]> wrote:
> >
> > On Tue, Mar 30, 2021 at 03:05:42PM -0700, Roman Gushchin wrote:
> > > On Tue, Mar 30, 2021 at 05:30:10PM -0400, Johannes Weiner wrote:
> > > > On Tue, Mar 30, 2021 at 11:58:31AM -0700, Roman Gushchin wrote:
> > > > > On Tue, Mar 30, 2021 at 11:34:11AM -0700, Shakeel Butt wrote:
> > > > > > On Tue, Mar 30, 2021 at 3:20 AM Muchun Song <[email protected]> wrote:
> > > > > > >
> > > > > > > Since the following patchsets applied. All the kernel memory are charged
> > > > > > > with the new APIs of obj_cgroup.
> > > > > > >
> > > > > > > [v17,00/19] The new cgroup slab memory controller
> > > > > > > [v5,0/7] Use obj_cgroup APIs to charge kmem pages
> > > > > > >
> > > > > > > But user memory allocations (LRU pages) pinning memcgs for a long time -
> > > > > > > it exists at a larger scale and is causing recurring problems in the real
> > > > > > > world: page cache doesn't get reclaimed for a long time, or is used by the
> > > > > > > second, third, fourth, ... instance of the same job that was restarted into
> > > > > > > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> > > > > > > and make page reclaim very inefficient.
> > > > > > >
> > > > > > > We can convert LRU pages and most other raw memcg pins to the objcg direction
> > > > > > > to fix this problem, and then the LRU pages will not pin the memcgs.
> > > > > > >
> > > > > > > This patchset aims to make the LRU pages to drop the reference to memory
> > > > > > > cgroup by using the APIs of obj_cgroup. Finally, we can see that the number
> > > > > > > of the dying cgroups will not increase if we run the following test script.
> > > > > > >
> > > > > > > ```bash
> > > > > > > #!/bin/bash
> > > > > > >
> > > > > > > cat /proc/cgroups | grep memory
> > > > > > >
> > > > > > > cd /sys/fs/cgroup/memory
> > > > > > >
> > > > > > > for i in range{1..500}
> > > > > > > do
> > > > > > > mkdir test
> > > > > > > echo $$ > test/cgroup.procs
> > > > > > > sleep 60 &
> > > > > > > echo $$ > cgroup.procs
> > > > > > > echo `cat test/cgroup.procs` > cgroup.procs
> > > > > > > rmdir test
> > > > > > > done
> > > > > > >
> > > > > > > cat /proc/cgroups | grep memory
> > > > > > > ```
> > > > > > >
> > > > > > > Patch 1 aims to fix page charging in page replacement.
> > > > > > > Patch 2-5 are code cleanup and simplification.
> > > > > > > Patch 6-15 convert LRU pages pin to the objcg direction.
> > > > > >
> > > > > > The main concern I have with *just* reparenting LRU pages is that for
> > > > > > the long running systems, the root memcg will become a dumping ground.
> > > > > > In addition a job running multiple times on a machine will see
> > > > > > inconsistent memory usage if it re-accesses the file pages which were
> > > > > > reparented to the root memcg.
> > > > >
> > > > > I agree, but also the reparenting is not the perfect thing in a combination
> > > > > with any memory protections (e.g. memory.low).
> > > > >
> > > > > Imagine the following configuration:
> > > > > workload.slice
> > > > > - workload_gen_1.service memory.min = 30G
> > > > > - workload_gen_2.service memory.min = 30G
> > > > > - workload_gen_3.service memory.min = 30G
> > > > > ...
> > > > >
> > > > > Parent cgroup and several generations of the child cgroup, protected by a memory.low.
> > > > > Once the memory is getting reparented, it's not protected anymore.
> > > >
> > > > That doesn't sound right.
> > > >
> > > > A deleted cgroup today exerts no control over its abandoned
> > > > pages. css_reset() will blow out any control settings.
> > >
> > > I know. Currently it works in the following way: once cgroup gen_1 is deleted,
> > > it's memory is not protected anymore, so eventually it's getting evicted and
> > > re-faulted as gen_2 (or gen_N) memory. Muchun's patchset doesn't change this,
> > > of course. But long-term we likely wanna re-charge such pages to new cgroups
> > > and avoid unnecessary evictions and re-faults. Switching to obj_cgroups doesn't
> > > help and likely will complicate this change. So I'm a bit skeptical here.
> >
> > We should be careful with the long-term plans.
> >
> > The zombie issue is a pretty urgent concern that has caused several
> > production emergencies now. It needs a fix sooner rather than later.
>
> Thank you very much for clarifying the problem for me. I do agree
> with you. This issue should be fixed ASAP. Using objcg is a good
> choice. Dying objcg should not be a problem. Because the size of
> objcg is so small compared to memcg.

It would be nice to see some data from real-life workloads showing
an improvement in the memory usage/reclaim efficiency and also not showing
any performance degradation.

I agree that it would be nice to fix this issue, but let's not exaggerate
its urgency: it has been here for years and nothing has really changed. I'm not
proposing to postpone it for years, but valid questions were raised about
how it fits into the bigger picture (e.g. other controllers).

Indeed, we added accounting for various objects, but it is not directly
connected to LRU pages.

Thanks!

2021-04-01 22:56:43

by Yang Shi

[permalink] [raw]
Subject: Re: [RFC PATCH 00/15] Use obj_cgroup APIs to charge the LRU pages

On Wed, Mar 31, 2021 at 8:17 AM Johannes Weiner <[email protected]> wrote:
>
> On Tue, Mar 30, 2021 at 03:05:42PM -0700, Roman Gushchin wrote:
> > On Tue, Mar 30, 2021 at 05:30:10PM -0400, Johannes Weiner wrote:
> > > On Tue, Mar 30, 2021 at 11:58:31AM -0700, Roman Gushchin wrote:
> > > > On Tue, Mar 30, 2021 at 11:34:11AM -0700, Shakeel Butt wrote:
> > > > > On Tue, Mar 30, 2021 at 3:20 AM Muchun Song <[email protected]> wrote:
> > > > > >
> > > > > > Since the following patchsets applied. All the kernel memory are charged
> > > > > > with the new APIs of obj_cgroup.
> > > > > >
> > > > > > [v17,00/19] The new cgroup slab memory controller
> > > > > > [v5,0/7] Use obj_cgroup APIs to charge kmem pages
> > > > > >
> > > > > > But user memory allocations (LRU pages) pinning memcgs for a long time -
> > > > > > it exists at a larger scale and is causing recurring problems in the real
> > > > > > world: page cache doesn't get reclaimed for a long time, or is used by the
> > > > > > second, third, fourth, ... instance of the same job that was restarted into
> > > > > > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> > > > > > and make page reclaim very inefficient.
> > > > > >
> > > > > > We can convert LRU pages and most other raw memcg pins to the objcg direction
> > > > > > to fix this problem, and then the LRU pages will not pin the memcgs.
> > > > > >
> > > > > > This patchset aims to make the LRU pages to drop the reference to memory
> > > > > > cgroup by using the APIs of obj_cgroup. Finally, we can see that the number
> > > > > > of the dying cgroups will not increase if we run the following test script.
> > > > > >
> > > > > > ```bash
> > > > > > #!/bin/bash
> > > > > >
> > > > > > cat /proc/cgroups | grep memory
> > > > > >
> > > > > > cd /sys/fs/cgroup/memory
> > > > > >
> > > > > > for i in range{1..500}
> > > > > > do
> > > > > > mkdir test
> > > > > > echo $$ > test/cgroup.procs
> > > > > > sleep 60 &
> > > > > > echo $$ > cgroup.procs
> > > > > > echo `cat test/cgroup.procs` > cgroup.procs
> > > > > > rmdir test
> > > > > > done
> > > > > >
> > > > > > cat /proc/cgroups | grep memory
> > > > > > ```
> > > > > >
> > > > > > Patch 1 aims to fix page charging in page replacement.
> > > > > > Patch 2-5 are code cleanup and simplification.
> > > > > > Patch 6-15 convert LRU pages pin to the objcg direction.
> > > > >
> > > > > The main concern I have with *just* reparenting LRU pages is that for
> > > > > the long running systems, the root memcg will become a dumping ground.
> > > > > In addition a job running multiple times on a machine will see
> > > > > inconsistent memory usage if it re-accesses the file pages which were
> > > > > reparented to the root memcg.
> > > >
> > > > I agree, but also the reparenting is not the perfect thing in a combination
> > > > with any memory protections (e.g. memory.low).
> > > >
> > > > Imagine the following configuration:
> > > > workload.slice
> > > > - workload_gen_1.service memory.min = 30G
> > > > - workload_gen_2.service memory.min = 30G
> > > > - workload_gen_3.service memory.min = 30G
> > > > ...
> > > >
> > > > Parent cgroup and several generations of the child cgroup, protected by a memory.low.
> > > > Once the memory is getting reparented, it's not protected anymore.
> > >
> > > That doesn't sound right.
> > >
> > > A deleted cgroup today exerts no control over its abandoned
> > > pages. css_reset() will blow out any control settings.
> >
> > I know. Currently it works in the following way: once cgroup gen_1 is deleted,
> > it's memory is not protected anymore, so eventually it's getting evicted and
> > re-faulted as gen_2 (or gen_N) memory. Muchun's patchset doesn't change this,
> > of course. But long-term we likely wanna re-charge such pages to new cgroups
> > and avoid unnecessary evictions and re-faults. Switching to obj_cgroups doesn't
> > help and likely will complicate this change. So I'm a bit skeptical here.
>
> We should be careful with the long-term plans.

Excuse me for a dumb question. I recall we did reparent LRU pages
before (prior to the 4.x kernels). I vaguely recall there were some tricky
race conditions during reparenting, so we stopped doing it once the
reclaimer could reclaim from offlined memcgs. My memory may be wrong;
if so, please feel free to correct me. If my memory is right, does that
mean the race conditions are gone? Or have the new obj_cgroup APIs made
life easier?

Thanks,
Yang

>
> The zombie issue is a pretty urgent concern that has caused several
> production emergencies now. It needs a fix sooner rather than later.
>
> The long-term plans of how to handle shared/reused data better will
> require some time to work out. There are MANY open questions around
> recharging to arbitrary foreign cgroup users. Like how to identify
> accesses after the page's cgroup has been deleted: Do we need to
> annotate every page cache lookup? Do we need to inject minor faults to
> recharge mmapped pages? We can't wait for this discussion to complete.
>
> I also think the objcg is helping with that direction rather than
> getting in the way because:
>
> - The old charge moving code we have for LRU pages isn't reusable
> anyway. It relies on optimistic locking to work, and may leave
> memory behind in arbitrary and unpredictable ways. After a few
> cycles, objects tend to be spread all over the place.
>
> The objcg provides a new synchronization scheme that will always
> work because the readside (page/object to memcg lookup) needs to be
> prepared for the memcg to change and/or die at any time.
>
> - There isn't much point in recharging only some of the abandoned
> memory. We've tried per-allocation class reparenting and it didn't
> work out too well. Newly accounted allocations crop up faster than
> we can conjure up tailor-made reparenting schemes for them.
>
> The objcg provides a generic reference and reassignment scheme that
> can be used by all tracked objects.
>
> Importantly, once we have a central thing as LRU pages converted, we
> can require all new allocation tracking to use objcg from the start.
>
> > Also, in my experience the pagecache is not the main/worst memcg reference
> > holder (writeback structures are way worse). Pages are relatively large
> > (in comparison to some slab objects), and rarely there is only one page pinning
> > a separate memcg.
>
> I've seen that exact thing cause zombies to pile up: one or two pages
> in the old group, pinned by the next instance of a job. If the job has
> a sufficiently large working set, this can pin a LOT of dead
> cgroups. Is it the biggest or most problematic source of buildups?
> Maybe not. But there is definitely cause to fix it.
>
> LRU pages are also the most difficult to convert because of their rich
> interactions. It's a good test of the api. If it works for those
> pages, converting everybody else will be much simpler.
>
> And again, as the core memory consumer it sets the tone of how memcg
> rmapping is supposed to work for new and existing object types. This
> helps align ongoing development.
>
> > And switching to obj_cgroup doesn't completely eliminate the
> > problem: we just switch from accumulating larger mem_cgroups to
> > accumulating smaller obj_cgroups.
>
> In your own words, the discrepancy between tiny objects pinning large
> memcgs is a problem. objcgs are smaller than most objects, so it's not
> much different as if an object were simply a few bytes bigger in size.
> A memcg on the other hand is vastly bigger than most objects. It's
> also composed of many allocations and so causes more fragmentation.
>
> Another issue is that memcgs with abandoned objects need to be visited
> by the reclaimer on every single reclaim walk, a hot path. The list of
> objcgs on the other hand is only iterated when the cgroup is deleted,
> which is not a fast path. It's also rare that parents with many dead
> children are deleted at all (system.slice, workload.slice etc.)
>
> So no, I would say for all intents and purposes, it fixes all the
> problems we're having with zombie memcgs.
>
> > With all this said, I'm not necessarily opposing the patchset, but it's
> > necessary to discuss how it fits into the long-term picture.
> > E.g. if we're going to use obj_cgroup API for page-sized objects, shouldn't
> > we split it back into the reparenting and bytes-sized accounting parts,
> > as I initially suggested. And shouldn't we move the reparenting part to
> > the cgroup core level, so we could use it for other controllers
> > (e.g. to fix the writeback problem).
>
> Yes, I do think we want to generalize it. But I wouldn't say it's a
> requirement for these patches, either:
>
> - The byte-sized accounting part is one atomic_t. That's 4 bytes
> pinned unnecessarily - compared to an entire struct memcg right now
> which includes memory accounting, swap accounting, byte accounting,
> and a whole lot of other things likely not used by the stale object.
>
> - The cgroup generalization is a good idea too, but that doesn't
> really change the callsites either. Unless you were thinking of
> renaming, but objcg seems like a good, generic fit for a name to
> describe the linkage between objects to a cgroup.
>
> The memcg member will need to change into something generic (a
> css_set type mapping), but that can likely be hidden behind
> page_memcg(), objcg_memcg() and similar helpers.
>
> Both of these aspects can be improved incrementally.
>

2021-04-02 04:04:24

by Muchun Song

[permalink] [raw]
Subject: Re: [External] Re: [RFC PATCH 00/15] Use obj_cgroup APIs to charge the LRU pages

On Fri, Apr 2, 2021 at 6:55 AM Yang Shi <[email protected]> wrote:
>
> On Wed, Mar 31, 2021 at 8:17 AM Johannes Weiner <[email protected]> wrote:
> >
> > On Tue, Mar 30, 2021 at 03:05:42PM -0700, Roman Gushchin wrote:
> > > On Tue, Mar 30, 2021 at 05:30:10PM -0400, Johannes Weiner wrote:
> > > > On Tue, Mar 30, 2021 at 11:58:31AM -0700, Roman Gushchin wrote:
> > > > > On Tue, Mar 30, 2021 at 11:34:11AM -0700, Shakeel Butt wrote:
> > > > > > On Tue, Mar 30, 2021 at 3:20 AM Muchun Song <[email protected]> wrote:
> > > > > > >
> > > > > > > Since the following patchsets applied. All the kernel memory are charged
> > > > > > > with the new APIs of obj_cgroup.
> > > > > > >
> > > > > > > [v17,00/19] The new cgroup slab memory controller
> > > > > > > [v5,0/7] Use obj_cgroup APIs to charge kmem pages
> > > > > > >
> > > > > > > But user memory allocations (LRU pages) pinning memcgs for a long time -
> > > > > > > it exists at a larger scale and is causing recurring problems in the real
> > > > > > > world: page cache doesn't get reclaimed for a long time, or is used by the
> > > > > > > second, third, fourth, ... instance of the same job that was restarted into
> > > > > > > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> > > > > > > and make page reclaim very inefficient.
> > > > > > >
> > > > > > > We can convert LRU pages and most other raw memcg pins to the objcg direction
> > > > > > > to fix this problem, and then the LRU pages will not pin the memcgs.
> > > > > > >
> > > > > > > This patchset aims to make the LRU pages to drop the reference to memory
> > > > > > > cgroup by using the APIs of obj_cgroup. Finally, we can see that the number
> > > > > > > of the dying cgroups will not increase if we run the following test script.
> > > > > > >
> > > > > > > ```bash
> > > > > > > #!/bin/bash
> > > > > > >
> > > > > > > cat /proc/cgroups | grep memory
> > > > > > >
> > > > > > > cd /sys/fs/cgroup/memory
> > > > > > >
> > > > > > > for i in range{1..500}
> > > > > > > do
> > > > > > > mkdir test
> > > > > > > echo $$ > test/cgroup.procs
> > > > > > > sleep 60 &
> > > > > > > echo $$ > cgroup.procs
> > > > > > > echo `cat test/cgroup.procs` > cgroup.procs
> > > > > > > rmdir test
> > > > > > > done
> > > > > > >
> > > > > > > cat /proc/cgroups | grep memory
> > > > > > > ```
> > > > > > >
> > > > > > > Patch 1 aims to fix page charging in page replacement.
> > > > > > > Patch 2-5 are code cleanup and simplification.
> > > > > > > Patch 6-15 convert LRU pages pin to the objcg direction.
> > > > > >
> > > > > > The main concern I have with *just* reparenting LRU pages is that for
> > > > > > the long running systems, the root memcg will become a dumping ground.
> > > > > > In addition a job running multiple times on a machine will see
> > > > > > inconsistent memory usage if it re-accesses the file pages which were
> > > > > > reparented to the root memcg.
> > > > >
> > > > > I agree, but also the reparenting is not the perfect thing in a combination
> > > > > with any memory protections (e.g. memory.low).
> > > > >
> > > > > Imagine the following configuration:
> > > > > workload.slice
> > > > > - workload_gen_1.service memory.min = 30G
> > > > > - workload_gen_2.service memory.min = 30G
> > > > > - workload_gen_3.service memory.min = 30G
> > > > > ...
> > > > >
> > > > > Parent cgroup and several generations of the child cgroup, protected by a memory.low.
> > > > > Once the memory is getting reparented, it's not protected anymore.
> > > >
> > > > That doesn't sound right.
> > > >
> > > > A deleted cgroup today exerts no control over its abandoned
> > > > pages. css_reset() will blow out any control settings.
> > >
> > > I know. Currently it works in the following way: once cgroup gen_1 is deleted,
> > > it's memory is not protected anymore, so eventually it's getting evicted and
> > > re-faulted as gen_2 (or gen_N) memory. Muchun's patchset doesn't change this,
> > > of course. But long-term we likely wanna re-charge such pages to new cgroups
> > > and avoid unnecessary evictions and re-faults. Switching to obj_cgroups doesn't
> > > help and likely will complicate this change. So I'm a bit skeptical here.
> >
> > We should be careful with the long-term plans.
>
> Excuse me for a dumb question. I recall we did reparent LRU pages
> before (before 4.x kernel). I vaguely recall there were some tricky
> race conditions during reparenting so we didn't do it anymore once
> reclaimer could reclaim from offlined memcgs. My memory may be wrong,
> if it is not so please feel free to correct me. If my memory is true,

I looked at the history in git. Your memory is right.

> it means the race conditions are gone? Or the new obj_cgroup APIs made
> life easier?

I do not know the full history of which race conditions there were;
Johannes should be very familiar with that. From my point of view,
the obj_cgroup APIs make life easier because we no longer need to
worry about the correctness of page counters and statistics when
the LRU pages are reparented. The reparenting operation itself
becomes simple: just splice each lruvec's LRU lists onto its
parent's lruvec lists, with no need to isolate pages one by one
(see the sketch below).
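
A minimal sketch of that splice, with a hypothetical helper name, assuming the caller already holds both lruvec locks and handles the statistics:

```c
/*
 * Hypothetical sketch: move all LRU pages of a dying memcg's lruvec to
 * its parent's lruvec with one splice per LRU list. Both lru locks are
 * assumed to be held by the caller; statistics updates are omitted.
 */
static void lruvec_reparent_lists(struct lruvec *src, struct lruvec *dst)
{
	enum lru_list lru;

	for_each_lru(lru)
		list_splice_tail_init(&src->lists[lru], &dst->lists[lru]);
}
```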

Thanks.

>
> Thanks,
> Yang
>
> >
> > The zombie issue is a pretty urgent concern that has caused several
> > production emergencies now. It needs a fix sooner rather than later.
> >
> > The long-term plans of how to handle shared/reused data better will
> > require some time to work out. There are MANY open questions around
> > recharging to arbitrary foreign cgroup users. Like how to identify
> > accesses after the page's cgroup has been deleted: Do we need to
> > annotate every page cache lookup? Do we need to inject minor faults to
> > recharge mmapped pages? We can't wait for this discussion to complete.
> >
> > I also think the objcg is helping with that direction rather than
> > getting in the way because:
> >
> > - The old charge moving code we have for LRU pages isn't reusable
> > anyway. It relies on optimistic locking to work, and may leave
> > memory behind in arbitrary and unpredictable ways. After a few
> > cycles, objects tend to be spread all over the place.
> >
> > The objcg provides a new synchronization scheme that will always
> > work because the readside (page/object to memcg lookup) needs to be
> > prepared for the memcg to change and/or die at any time.
> >
> > - There isn't much point in recharging only some of the abandoned
> > memory. We've tried per-allocation class reparenting and it didn't
> > work out too well. Newly accounted allocations crop up faster than
> > we can conjure up tailor-made reparenting schemes for them.
> >
> > The objcg provides a generic reference and reassignment scheme that
> > can be used by all tracked objects.
> >
> > Importantly, once we have a central thing as LRU pages converted, we
> > can require all new allocation tracking to use objcg from the start.
> >
> > > Also, in my experience the pagecache is not the main/worst memcg reference
> > > holder (writeback structures are way worse). Pages are relatively large
> > > (in comparison to some slab objects), and rarely there is only one page pinning
> > > a separate memcg.
> >
> > I've seen that exact thing cause zombies to pile up: one or two pages
> > in the old group, pinned by the next instance of a job. If the job has
> > a sufficiently large working set, this can pin a LOT of dead
> > cgroups. Is it the biggest or most problematic source of buildups?
> > Maybe not. But there is definitely cause to fix it.
> >
> > LRU pages are also the most difficult to convert because of their rich
> > interactions. It's a good test of the api. If it works for those
> > pages, converting everybody else will be much simpler.
> >
> > And again, as the core memory consumer it sets the tone of how memcg
> > rmapping is supposed to work for new and existing object types. This
> > helps align ongoing development.
> >
> > > And switching to obj_cgroup doesn't completely eliminate the
> > > problem: we just switch from accumulating larger mem_cgroups to
> > > accumulating smaller obj_cgroups.
> >
> > In your own words, the discrepancy between tiny objects pinning large
> > memcgs is a problem. objcgs are smaller than most objects, so it's not
> > much different as if an object were simply a few bytes bigger in size.
> > A memcg on the other hand is vastly bigger than most objects. It's
> > also composed of many allocations and so causes more fragmentation.
> >
> > Another issue is that memcgs with abandoned objects need to be visited
> > by the reclaimer on every single reclaim walk, a hot path. The list of
> > objcgs on the other hand is only iterated when the cgroup is deleted,
> > which is not a fast path. It's also rare that parents with many dead
> > children are deleted at all (system.slice, workload.slice etc.)
> >
> > So no, I would say for all intents and purposes, it fixes all the
> > problems we're having with zombie memcgs.
> >
> > > With all this said, I'm not necessarily opposing the patchset, but it's
> > > necessary to discuss how it fits into the long-term picture.
> > > E.g. if we're going to use obj_cgroup API for page-sized objects, shouldn't
> > > we split it back into the reparenting and bytes-sized accounting parts,
> > > as I initially suggested. And shouldn't we move the reparenting part to
> > > the cgroup core level, so we could use it for other controllers
> > > (e.g. to fix the writeback problem).
> >
> > Yes, I do think we want to generalize it. But I wouldn't say it's a
> > requirement for these patches, either:
> >
> > - The byte-sized accounting part is one atomic_t. That's 4 bytes
> > pinned unnecessarily - compared to an entire struct memcg right now
> > which includes memory accounting, swap accounting, byte accounting,
> > and a whole lot of other things likely not used by the stale object.
> >
> > - The cgroup generalization is a good idea too, but that doesn't
> > really change the callsites either. Unless you were thinking of
> > renaming, but objcg seems like a good, generic fit for a name to
> > describe the linkage between objects to a cgroup.
> >
> > The memcg member will need to change into something generic (a
> > css_set type mapping), but that can likely be hidden behind
> > page_memcg(), objcg_memcg() and similar helpers.
> >
> > Both of these aspects can be improved incrementally.
> >

2021-04-02 15:09:37

by Johannes Weiner

[permalink] [raw]
Subject: Re: [RFC PATCH 02/15] mm: memcontrol: bail out early when !mm in get_mem_cgroup_from_mm

On Tue, Mar 30, 2021 at 06:15:18PM +0800, Muchun Song wrote:
> When mm is NULL, we do not need to hold rcu lock and call css_tryget for
> the root memcg. And we also do not need to check !mm in every loop of
> while. So bail out early when !mm.
>
> Signed-off-by: Muchun Song <[email protected]>

Good observation.

Acked-by: Johannes Weiner <[email protected]>

2021-04-02 16:24:45

by Johannes Weiner

[permalink] [raw]
Subject: Re: [RFC PATCH 04/15] mm: memcontrol: use lruvec_memcg in lruvec_holds_page_lru_lock

On Tue, Mar 30, 2021 at 06:15:20PM +0800, Muchun Song wrote:
> We already have a helper lruvec_memcg() to get the memcg from lruvec, we
> do not need to do it ourselves in the lruvec_holds_page_lru_lock(). So use
> lruvec_memcg() instead.
>
> Signed-off-by: Muchun Song <[email protected]>
> ---
> include/linux/memcontrol.h | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index a35a22994cf7..6e3283828391 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -744,20 +744,20 @@ static inline struct lruvec *mem_cgroup_page_lruvec(struct page *page)
> return mem_cgroup_lruvec(memcg, pgdat);
> }
>
> +static inline struct mem_cgroup *lruvec_memcg(struct lruvec *lruvec);

Please reorder the functions instead to avoid forward decls.

> static inline bool lruvec_holds_page_lru_lock(struct page *page,
> struct lruvec *lruvec)
> {
> pg_data_t *pgdat = page_pgdat(page);
> const struct mem_cgroup *memcg;
> - struct mem_cgroup_per_node *mz;
>
> if (mem_cgroup_disabled())
> return lruvec == &pgdat->__lruvec;
>
> - mz = container_of(lruvec, struct mem_cgroup_per_node, lruvec);
> memcg = page_memcg(page) ? : root_mem_cgroup;
>
> - return lruvec->pgdat == pgdat && mz->memcg == memcg;
> + return lruvec->pgdat == pgdat && lruvec_memcg(lruvec) == memcg;

Looks reasonable to me, but I wonder if there is more we can do.

lruvec_memcg() already handles CONFIG_MEMCG and mem_cgroup_disabled()
combinations, and there is also a lruvec_pgdat() which does.

One thing that is odd is page_memcg(page) ? : root_mem_cgroup. How can
lruvec pages not have a page_memcg()? mem_cgroup_page_lruvec() has:

memcg = page_memcg(page);
VM_WARN_ON_ONCE_PAGE(!memcg && !mem_cgroup_disabled(), page);

Unless I'm missing something, we should be able to have a single
definition for this function that works for !CONFIG_MEMCG,
CONFIG_MEMCG + mem_cgroup_disabled() and CONFIG_MEMCG:

static inline bool lruvec_holds_page_lru_lock(struct page *page,
					      struct lruvec *lruvec)
{
	return lruvec_pgdat(lruvec) == page_pgdat(page) &&
	       lruvec_memcg(lruvec) == page_memcg(page);
}

2021-04-03 12:39:04

by Muchun Song

[permalink] [raw]
Subject: Re: [External] Re: [RFC PATCH 04/15] mm: memcontrol: use lruvec_memcg in lruvec_holds_page_lru_lock

On Sat, Apr 3, 2021 at 12:21 AM Johannes Weiner <[email protected]> wrote:
>
> On Tue, Mar 30, 2021 at 06:15:20PM +0800, Muchun Song wrote:
> > We already have a helper lruvec_memcg() to get the memcg from lruvec, we
> > do not need to do it ourselves in the lruvec_holds_page_lru_lock(). So use
> > lruvec_memcg() instead.
> >
> > Signed-off-by: Muchun Song <[email protected]>
> > ---
> > include/linux/memcontrol.h | 6 +++---
> > 1 file changed, 3 insertions(+), 3 deletions(-)
> >
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index a35a22994cf7..6e3283828391 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -744,20 +744,20 @@ static inline struct lruvec *mem_cgroup_page_lruvec(struct page *page)
> > return mem_cgroup_lruvec(memcg, pgdat);
> > }
> >
> > +static inline struct mem_cgroup *lruvec_memcg(struct lruvec *lruvec);
>
> Please reorder the functions instead to avoid forward decls.

OK. Will fix it in the next version.


>
> > static inline bool lruvec_holds_page_lru_lock(struct page *page,
> > struct lruvec *lruvec)
> > {
> > pg_data_t *pgdat = page_pgdat(page);
> > const struct mem_cgroup *memcg;
> > - struct mem_cgroup_per_node *mz;
> >
> > if (mem_cgroup_disabled())
> > return lruvec == &pgdat->__lruvec;
> >
> > - mz = container_of(lruvec, struct mem_cgroup_per_node, lruvec);
> > memcg = page_memcg(page) ? : root_mem_cgroup;
> >
> > - return lruvec->pgdat == pgdat && mz->memcg == memcg;
> > + return lruvec->pgdat == pgdat && lruvec_memcg(lruvec) == memcg;
>
> Looks reasonable to me, but I wonder if there is more we can do.
>
> lruvec_memcg() already handles CONFIG_MEMCG and mem_cgroup_disabled()
> combinations, and there is also a lruvec_pgdat() which does.
>
> One thing that is odd is page_memcg(page) ? : root_mem_cgroup. How can
> lruvec pages not have a page_memcg()? mem_cgroup_page_lruvec() has:

You are right. page_memcg() cannot be NULL.

>
> memcg = page_memcg(page);
> VM_WARN_ON_ONCE_PAGE(!memcg && !mem_cgroup_disabled(), page);
>
> Unless I'm missing something, we should be able to have a single
> definition for this function that works for !CONFIG_MEMCG,
> CONFIG_MEMCG + mem_cgroup_disabled() and CONFIG_MEMCG:
>
> lruvec_holds_page_lru_lock()
> {
> return lruvec_pgdat(lruvec) == page_pgdat(page) &&
> lruvec_memcg(lruvec) == page_memcg(page);
> }

Aha, how wonderful! It is simpler than mine. I will use this.
Thanks for your suggestions.