2022-05-30 13:43:59

by Muchun Song

Subject: [PATCH v5 00/11] Use obj_cgroup APIs to charge the LRU pages

This version is rebased on v5.18.

Since the following patchsets were applied, all kernel memory has been
charged with the new obj_cgroup APIs.

[v17,00/19] The new cgroup slab memory controller [1]
[v5,0/7] Use obj_cgroup APIs to charge kmem pages [2]

But user memory allocations (LRU pages) can pin memcgs for a long time -
this happens at a larger scale and is causing recurring problems in the real
world: page cache doesn't get reclaimed for a long time, or is used by the
second, third, fourth, ... instance of the same job that was restarted into
a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
and make page reclaim very inefficient.

We can convert LRU pages and most other raw memcg pins to the objcg direction
to fix this problem, so that the LRU pages no longer pin the memcgs.

This patchset aims to make LRU pages drop their reference to the memory
cgroup by using the obj_cgroup APIs. With it applied, the number of dying
cgroups does not keep increasing when we run the following test script.

```bash
#!/bin/bash

dd if=/dev/zero of=temp bs=4096 count=1
cat /proc/cgroups | grep memory

for i in {0..2000}
do
  mkdir /sys/fs/cgroup/memory/test$i
  echo $$ > /sys/fs/cgroup/memory/test$i/cgroup.procs
  cat temp >> log
  echo $$ > /sys/fs/cgroup/memory/cgroup.procs
  rmdir /sys/fs/cgroup/memory/test$i
done

cat /proc/cgroups | grep memory

rm -f temp log
```
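For reference, what the script compares is the num_cgroups column of
/proc/cgroups for the memory controller; that count also includes cgroups
that have been removed but are still pinned (dying). The helper below is a
hypothetical illustration of reading that column programmatically, not part
of this series:

```c
/*
 * Hypothetical helper (illustration only, not part of this series):
 * read the num_cgroups column for the memory controller from
 * /proc/cgroups.  Dying (removed but still pinned) cgroups are included
 * in this count, which is why it keeps growing without reparenting.
 */
#include <stdio.h>
#include <string.h>

static long memory_cgroup_count(void)
{
	char line[256], name[64];
	long hierarchy, num_cgroups, ret = -1;
	int enabled;
	FILE *f = fopen("/proc/cgroups", "r");

	if (!f)
		return -1;

	while (fgets(line, sizeof(line), f)) {
		if (line[0] == '#')	/* skip the header line */
			continue;
		if (sscanf(line, "%63s %ld %ld %d", name, &hierarchy,
			   &num_cgroups, &enabled) == 4 &&
		    !strcmp(name, "memory")) {
			ret = num_cgroups;
			break;
		}
	}
	fclose(f);
	return ret;
}

int main(void)
{
	printf("memory cgroups (including dying): %ld\n", memory_cgroup_count());
	return 0;
}
```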

[1] https://lore.kernel.org/linux-mm/[email protected]/
[2] https://lore.kernel.org/linux-mm/[email protected]/

v4: https://lore.kernel.org/all/[email protected]/
v3: https://lore.kernel.org/all/[email protected]/
v2: https://lore.kernel.org/all/[email protected]/
v1: https://lore.kernel.org/all/[email protected]/
RFC v4: https://lore.kernel.org/all/[email protected]/
RFC v3: https://lore.kernel.org/all/[email protected]/
RFC v2: https://lore.kernel.org/all/[email protected]/
RFC v1: https://lore.kernel.org/all/[email protected]/

v5:
- Lots of improvements from Johannes, Roman and Waiman.
- Fix lockdep warning reported by kernel test robot.
- Add two new patches to do code cleanup.
- Collect Acked-by and Reviewed-by from Johannes and Roman.
- I didn't replace local_irq_disable/enable() with local_lock/unlock_irq() since
local_lock/unlock_irq() takes a parameter; it needs more thought to convert
it to local_lock. It could be an improvement in the future.

v4:
- Resend and rebased on v5.18.

v3:
- Removed the Acked-by tags from Roman since this version is based on
the folio-related changes.

v2:
- Rename obj_cgroup_release_kmem() to obj_cgroup_release_bytes() and drop the
dependencies on CONFIG_MEMCG_KMEM (suggested by Roman, thanks).
- Rebase to linux 5.15-rc1.
- Add a new patch to clean up mem_cgroup_kmem_disabled().

v1:
- Drop RFC tag.
- Rebase to linux next-20210811.

RFC v4:
- Collect Acked-by from Roman.
- Rebase to linux next-20210525.
- Rename obj_cgroup_release_uncharge() to obj_cgroup_release_kmem().
- Change the patch 1 title to "prepare objcg API for non-kmem usage".
- Convert reparent_ops_head to an array in patch 8.

Thanks for Roman's review and suggestions.

RFC v3:
- Drop the code cleanup and simplification patches. Gather those patches
into a separate series[1].
- Rework patch #1 suggested by Johannes.

RFC v2:
- Collect Acked-by tags by Johannes. Thanks.
- Rework lruvec_holds_page_lru_lock() suggested by Johannes. Thanks.
- Fix move_pages_to_lru().

Muchun Song (11):
mm: memcontrol: remove dead code and comments
mm: rename unlock_page_lruvec{_irq, _irqrestore} to
lruvec_unlock{_irq, _irqrestore}
mm: memcontrol: prepare objcg API for non-kmem usage
mm: memcontrol: make lruvec lock safe when LRU pages are reparented
mm: vmscan: rework move_pages_to_lru()
mm: thp: make split queue lock safe when LRU pages are reparented
mm: memcontrol: make all the callers of {folio,page}_memcg() safe
mm: memcontrol: introduce memcg_reparent_ops
mm: memcontrol: use obj_cgroup APIs to charge the LRU pages
mm: lru: add VM_BUG_ON_FOLIO to lru maintenance function
mm: lru: use lruvec lock to serialize memcg changes

fs/buffer.c | 4 +-
fs/fs-writeback.c | 23 +-
include/linux/memcontrol.h | 213 +++++++++------
include/linux/mm_inline.h | 6 +
include/trace/events/writeback.h | 5 +
mm/compaction.c | 39 ++-
mm/huge_memory.c | 153 +++++++++--
mm/memcontrol.c | 560 +++++++++++++++++++++++++++------------
mm/migrate.c | 4 +
mm/mlock.c | 2 +-
mm/page_io.c | 5 +-
mm/swap.c | 62 ++---
mm/vmscan.c | 67 +++--
13 files changed, 767 insertions(+), 376 deletions(-)


base-commit: 4b0986a3613c92f4ec1bdc7f60ec66fea135991f
--
2.11.0



2022-05-30 14:04:05

by Muchun Song

Subject: [PATCH v5 04/11] mm: memcontrol: make lruvec lock safe when LRU pages are reparented

The diagram below shows how to make the folio lruvec lock safe when LRU
pages are reparented.

folio_lruvec_lock(folio)
        rcu_read_lock();
retry:
        lruvec = folio_lruvec(folio);

        // The folio is reparented at this time.
        spin_lock(&lruvec->lru_lock);

        if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio)))
                // Acquired the wrong lruvec lock and need to retry.
                // Because this folio is on the parent memcg lruvec list.
                spin_unlock(&lruvec->lru_lock);
                goto retry;

        // If we reach here, it means that folio_memcg(folio) is stable.

memcg_reparent_objcgs(memcg)
        // lruvec belongs to memcg and lruvec_parent belongs to parent memcg.
        spin_lock(&lruvec->lru_lock);
        spin_lock(&lruvec_parent->lru_lock);

        // Move all the pages from the lruvec list to the parent lruvec list.

        spin_unlock(&lruvec_parent->lru_lock);
        spin_unlock(&lruvec->lru_lock);

After we acquire the lruvec lock, we need to check whether the folio has
been reparented. If so, we need to reacquire the new lruvec lock. On the
LRU-pages reparenting path, we will also acquire the lruvec lock (this will
be implemented in a later patch), so folio_memcg() cannot change while we
hold the lruvec lock.

Since lruvec_memcg(lruvec) is always equal to folio_memcg(folio) after we
hold the lruvec lock, the lruvec_memcg_debug() check is pointless, so
remove it.

This is a preparation for reparenting the LRU pages.
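As a rough sketch (my simplification, not the code of the later reparenting
patch) of why the recheck above is sufficient: the reparenting side holds
both the child's and the parent's lru_lock while splicing folios over, so a
folio's memcg cannot change while either lruvec lock is held.

```c
/*
 * Simplified sketch of the reparenting side (assumption: the later patch
 * does something morally equivalent; stats updates are omitted here).
 * Holding both lru_locks while splicing is what makes the
 * lruvec_memcg() != folio_memcg() recheck in folio_lruvec_lock() enough.
 */
static void reparent_lru_sketch(struct lruvec *child, struct lruvec *parent,
				enum lru_list lru)
{
	spin_lock(&child->lru_lock);
	spin_lock_nested(&parent->lru_lock, SINGLE_DEPTH_NESTING);

	/* Move all folios from the child LRU list to the parent LRU list. */
	list_splice_tail_init(&child->lists[lru], &parent->lists[lru]);

	spin_unlock(&parent->lru_lock);
	spin_unlock(&child->lru_lock);
}
```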

Signed-off-by: Muchun Song <[email protected]>
---
include/linux/memcontrol.h | 18 +++-------------
mm/compaction.c | 27 +++++++++++++++++++----
mm/memcontrol.c | 53 ++++++++++++++++++++++++++--------------------
mm/swap.c | 5 +++++
4 files changed, 61 insertions(+), 42 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 27f3171f42a1..e390aaa46776 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -752,7 +752,9 @@ static inline struct lruvec *mem_cgroup_lruvec(struct mem_cgroup *memcg,
* folio_lruvec - return lruvec for isolating/putting an LRU folio
* @folio: Pointer to the folio.
*
- * This function relies on folio->mem_cgroup being stable.
+ * The lruvec can be changed to its parent lruvec when the page reparented.
+ * The caller need to recheck if it cares about this changes (just like
+ * folio_lruvec_lock() does).
*/
static inline struct lruvec *folio_lruvec(struct folio *folio)
{
@@ -771,15 +773,6 @@ struct lruvec *folio_lruvec_lock_irq(struct folio *folio);
struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio,
unsigned long *flags);

-#ifdef CONFIG_DEBUG_VM
-void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio);
-#else
-static inline
-void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
-{
-}
-#endif
-
static inline
struct mem_cgroup *mem_cgroup_from_css(struct cgroup_subsys_state *css){
return css ? container_of(css, struct mem_cgroup, css) : NULL;
@@ -1240,11 +1233,6 @@ static inline struct lruvec *folio_lruvec(struct folio *folio)
return &pgdat->__lruvec;
}

-static inline
-void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
-{
-}
-
static inline struct mem_cgroup *parent_mem_cgroup(struct mem_cgroup *memcg)
{
return NULL;
diff --git a/mm/compaction.c b/mm/compaction.c
index 4f155df6b39c..29ff111e5711 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -509,6 +509,25 @@ static bool compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
return true;
}

+static struct lruvec *
+compact_folio_lruvec_lock_irqsave(struct folio *folio, unsigned long *flags,
+ struct compact_control *cc)
+{
+ struct lruvec *lruvec;
+
+ rcu_read_lock();
+retry:
+ lruvec = folio_lruvec(folio);
+ compact_lock_irqsave(&lruvec->lru_lock, flags, cc);
+ if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
+ spin_unlock_irqrestore(&lruvec->lru_lock, *flags);
+ goto retry;
+ }
+ rcu_read_unlock();
+
+ return lruvec;
+}
+
/*
* Compaction requires the taking of some coarse locks that are potentially
* very heavily contended. The lock should be periodically unlocked to avoid
@@ -844,6 +863,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,

/* Time to isolate some pages for migration */
for (; low_pfn < end_pfn; low_pfn++) {
+ struct folio *folio;

if (skip_on_failure && low_pfn >= next_skip_pfn) {
/*
@@ -1065,18 +1085,17 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
if (!TestClearPageLRU(page))
goto isolate_fail_put;

- lruvec = folio_lruvec(page_folio(page));
+ folio = page_folio(page);
+ lruvec = folio_lruvec(folio);

/* If we already hold the lock, we can skip some rechecking */
if (lruvec != locked) {
if (locked)
lruvec_unlock_irqrestore(locked, flags);

- compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
+ lruvec = compact_folio_lruvec_lock_irqsave(folio, &flags, cc);
locked = lruvec;

- lruvec_memcg_debug(lruvec, page_folio(page));
-
/* Try get exclusive access under lock */
if (!skip_updated) {
skip_updated = true;
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 739a1d58ce97..9d98a791353c 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1199,23 +1199,6 @@ int mem_cgroup_scan_tasks(struct mem_cgroup *memcg,
return ret;
}

-#ifdef CONFIG_DEBUG_VM
-void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
-{
- struct mem_cgroup *memcg;
-
- if (mem_cgroup_disabled())
- return;
-
- memcg = folio_memcg(folio);
-
- if (!memcg)
- VM_BUG_ON_FOLIO(lruvec_memcg(lruvec) != root_mem_cgroup, folio);
- else
- VM_BUG_ON_FOLIO(lruvec_memcg(lruvec) != memcg, folio);
-}
-#endif
-
/**
* folio_lruvec_lock - Lock the lruvec for a folio.
* @folio: Pointer to the folio.
@@ -1230,10 +1213,18 @@ void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
*/
struct lruvec *folio_lruvec_lock(struct folio *folio)
{
- struct lruvec *lruvec = folio_lruvec(folio);
+ struct lruvec *lruvec;

+ rcu_read_lock();
+retry:
+ lruvec = folio_lruvec(folio);
spin_lock(&lruvec->lru_lock);
- lruvec_memcg_debug(lruvec, folio);
+
+ if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
+ spin_unlock(&lruvec->lru_lock);
+ goto retry;
+ }
+ rcu_read_unlock();

return lruvec;
}
@@ -1253,10 +1244,18 @@ struct lruvec *folio_lruvec_lock(struct folio *folio)
*/
struct lruvec *folio_lruvec_lock_irq(struct folio *folio)
{
- struct lruvec *lruvec = folio_lruvec(folio);
+ struct lruvec *lruvec;

+ rcu_read_lock();
+retry:
+ lruvec = folio_lruvec(folio);
spin_lock_irq(&lruvec->lru_lock);
- lruvec_memcg_debug(lruvec, folio);
+
+ if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
+ spin_unlock_irq(&lruvec->lru_lock);
+ goto retry;
+ }
+ rcu_read_unlock();

return lruvec;
}
@@ -1278,10 +1277,18 @@ struct lruvec *folio_lruvec_lock_irq(struct folio *folio)
struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio,
unsigned long *flags)
{
- struct lruvec *lruvec = folio_lruvec(folio);
+ struct lruvec *lruvec;

+ rcu_read_lock();
+retry:
+ lruvec = folio_lruvec(folio);
spin_lock_irqsave(&lruvec->lru_lock, *flags);
- lruvec_memcg_debug(lruvec, folio);
+
+ if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
+ spin_unlock_irqrestore(&lruvec->lru_lock, *flags);
+ goto retry;
+ }
+ rcu_read_unlock();

return lruvec;
}
diff --git a/mm/swap.c b/mm/swap.c
index 0a8ee33116c5..6cea469b6ff2 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -303,6 +303,11 @@ void lru_note_cost(struct lruvec *lruvec, bool file, unsigned int nr_pages)

void lru_note_cost_folio(struct folio *folio)
{
+ WARN_ON_ONCE(!rcu_read_lock_held());
+ /*
+ * The rcu read lock is held by the caller, so we do not need to
+ * care about the lruvec returned by folio_lruvec() being released.
+ */
lru_note_cost(folio_lruvec(folio), folio_is_file_lru(folio),
folio_nr_pages(folio));
}
--
2.11.0


2022-05-30 15:55:54

by Muchun Song

Subject: [PATCH v5 03/11] mm: memcontrol: prepare objcg API for non-kmem usage

Pagecache pages are charged at allocation time and hold a reference to
the original memory cgroup until they are reclaimed. Depending on memory
pressure, specific patterns of page sharing between different cgroups, and
the cgroup creation and destruction rates, a large number of dying memory
cgroups can be pinned by pagecache pages. This makes page reclaim less
efficient and wastes memory.

We can convert LRU pages and most other raw memcg pins to the objcg
direction to fix this problem, and then page->memcg will always point to
an object cgroup.

Therefore, the objcg infrastructure no longer serves only
CONFIG_MEMCG_KMEM. In this patch, we move the objcg infrastructure out of
the scope of CONFIG_MEMCG_KMEM so that LRU pages can reuse it for
charging.

We know that LRU pages are not accounted at the root level, but their
page->memcg_data points to the root_mem_cgroup, so page->memcg_data of an
LRU page always points to a valid pointer. However, the root_mem_cgroup
does not have an object cgroup. If we use the obj_cgroup APIs to charge
LRU pages, we have to set page->memcg_data to a root object cgroup, so we
also allocate an object cgroup for the root_mem_cgroup.
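For illustration only (the struct layouts below are simplified sketches of
mine, not the kernel's actual definitions), the pointer chain this series
moves LRU folios towards looks roughly like this: the folio stores an
obj_cgroup pointer, and only the objcg's memcg pointer is rewritten on
reparenting, so a folio never pins a dying memcg directly.

```c
/*
 * Simplified, illustrative structs (not the kernel's real definitions).
 * folio -> objcg -> memcg: one extra dereference, but the folio pins only
 * the objcg; the objcg's memcg pointer is switched to the parent when the
 * memcg dies, so the dying memcg itself is not pinned.
 */
#include <stddef.h>

struct mem_cgroup;

struct obj_cgroup {
	struct mem_cgroup *memcg;	/* re-pointed to the parent on reparenting */
};

struct folio_sketch {
	unsigned long memcg_data;	/* holds an obj_cgroup pointer for LRU folios */
};

static struct mem_cgroup *folio_memcg_sketch(struct folio_sketch *folio)
{
	struct obj_cgroup *objcg = (struct obj_cgroup *)folio->memcg_data;

	return objcg ? objcg->memcg : NULL;
}
```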

Signed-off-by: Muchun Song <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
---
include/linux/memcontrol.h | 2 +-
mm/memcontrol.c | 56 +++++++++++++++++++++++++++-------------------
2 files changed, 34 insertions(+), 24 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 6d7f97cc3fd4..27f3171f42a1 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -315,10 +315,10 @@ struct mem_cgroup {

#ifdef CONFIG_MEMCG_KMEM
int kmemcg_id;
+#endif
struct obj_cgroup __rcu *objcg;
/* list of inherited objcgs, protected by objcg_lock */
struct list_head objcg_list;
-#endif

MEMCG_PADDING(_pad2_);

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 13da256ff2e4..739a1d58ce97 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -254,9 +254,9 @@ struct mem_cgroup *vmpressure_to_memcg(struct vmpressure *vmpr)
return container_of(vmpr, struct mem_cgroup, vmpressure);
}

-#ifdef CONFIG_MEMCG_KMEM
static DEFINE_SPINLOCK(objcg_lock);

+#ifdef CONFIG_MEMCG_KMEM
bool mem_cgroup_kmem_disabled(void)
{
return cgroup_memory_nokmem;
@@ -265,12 +265,10 @@ bool mem_cgroup_kmem_disabled(void)
static void obj_cgroup_uncharge_pages(struct obj_cgroup *objcg,
unsigned int nr_pages);

-static void obj_cgroup_release(struct percpu_ref *ref)
+static void obj_cgroup_release_bytes(struct obj_cgroup *objcg)
{
- struct obj_cgroup *objcg = container_of(ref, struct obj_cgroup, refcnt);
unsigned int nr_bytes;
unsigned int nr_pages;
- unsigned long flags;

/*
* At this point all allocated objects are freed, and
@@ -284,9 +282,9 @@ static void obj_cgroup_release(struct percpu_ref *ref)
* 3) CPU1: a process from another memcg is allocating something,
* the stock if flushed,
* objcg->nr_charged_bytes = PAGE_SIZE - 92
- * 5) CPU0: we do release this object,
+ * 4) CPU0: we do release this object,
* 92 bytes are added to stock->nr_bytes
- * 6) CPU0: stock is flushed,
+ * 5) CPU0: stock is flushed,
* 92 bytes are added to objcg->nr_charged_bytes
*
* In the result, nr_charged_bytes == PAGE_SIZE.
@@ -298,6 +296,19 @@ static void obj_cgroup_release(struct percpu_ref *ref)

if (nr_pages)
obj_cgroup_uncharge_pages(objcg, nr_pages);
+}
+#else
+static inline void obj_cgroup_release_bytes(struct obj_cgroup *objcg)
+{
+}
+#endif
+
+static void obj_cgroup_release(struct percpu_ref *ref)
+{
+ struct obj_cgroup *objcg = container_of(ref, struct obj_cgroup, refcnt);
+ unsigned long flags;
+
+ obj_cgroup_release_bytes(objcg);

spin_lock_irqsave(&objcg_lock, flags);
list_del(&objcg->list);
@@ -326,10 +337,10 @@ static struct obj_cgroup *obj_cgroup_alloc(void)
return objcg;
}

-static void memcg_reparent_objcgs(struct mem_cgroup *memcg,
- struct mem_cgroup *parent)
+static void memcg_reparent_objcgs(struct mem_cgroup *memcg)
{
struct obj_cgroup *objcg, *iter;
+ struct mem_cgroup *parent = parent_mem_cgroup(memcg);

objcg = rcu_replace_pointer(memcg->objcg, NULL, true);

@@ -348,6 +359,7 @@ static void memcg_reparent_objcgs(struct mem_cgroup *memcg,
percpu_ref_kill(&objcg->refcnt);
}

+#ifdef CONFIG_MEMCG_KMEM
/*
* A lot of the calls to the cache allocation functions are expected to be
* inlined by the compiler. Since the calls to memcg_slab_pre_alloc_hook() are
@@ -3589,21 +3601,12 @@ static u64 mem_cgroup_read_u64(struct cgroup_subsys_state *css,
#ifdef CONFIG_MEMCG_KMEM
static int memcg_online_kmem(struct mem_cgroup *memcg)
{
- struct obj_cgroup *objcg;
-
if (cgroup_memory_nokmem)
return 0;

if (unlikely(mem_cgroup_is_root(memcg)))
return 0;

- objcg = obj_cgroup_alloc();
- if (!objcg)
- return -ENOMEM;
-
- objcg->memcg = memcg;
- rcu_assign_pointer(memcg->objcg, objcg);
-
static_branch_enable(&memcg_kmem_enabled_key);

memcg->kmemcg_id = memcg->id.id;
@@ -3613,17 +3616,13 @@ static int memcg_online_kmem(struct mem_cgroup *memcg)

static void memcg_offline_kmem(struct mem_cgroup *memcg)
{
- struct mem_cgroup *parent;
-
if (cgroup_memory_nokmem)
return;

if (unlikely(mem_cgroup_is_root(memcg)))
return;

- parent = parent_mem_cgroup(memcg);
- memcg_reparent_objcgs(memcg, parent);
- memcg_reparent_list_lrus(memcg, parent);
+ memcg_reparent_list_lrus(memcg, parent_mem_cgroup(memcg));
}
#else
static int memcg_online_kmem(struct mem_cgroup *memcg)
@@ -5106,8 +5105,8 @@ static struct mem_cgroup *mem_cgroup_alloc(void)
memcg->socket_pressure = jiffies;
#ifdef CONFIG_MEMCG_KMEM
memcg->kmemcg_id = -1;
- INIT_LIST_HEAD(&memcg->objcg_list);
#endif
+ INIT_LIST_HEAD(&memcg->objcg_list);
#ifdef CONFIG_CGROUP_WRITEBACK
INIT_LIST_HEAD(&memcg->cgwb_list);
for (i = 0; i < MEMCG_CGWB_FRN_CNT; i++)
@@ -5169,6 +5168,7 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
{
struct mem_cgroup *memcg = mem_cgroup_from_css(css);
+ struct obj_cgroup *objcg;

if (memcg_online_kmem(memcg))
goto remove_id;
@@ -5181,6 +5181,13 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
if (alloc_shrinker_info(memcg))
goto offline_kmem;

+ objcg = obj_cgroup_alloc();
+ if (!objcg)
+ goto free_shrinker;
+
+ objcg->memcg = memcg;
+ rcu_assign_pointer(memcg->objcg, objcg);
+
/* Online state pins memcg ID, memcg ID pins CSS */
refcount_set(&memcg->id.ref, 1);
css_get(css);
@@ -5189,6 +5196,8 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
queue_delayed_work(system_unbound_wq, &stats_flush_dwork,
2UL*HZ);
return 0;
+free_shrinker:
+ free_shrinker_info(memcg);
offline_kmem:
memcg_offline_kmem(memcg);
remove_id:
@@ -5216,6 +5225,7 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
page_counter_set_min(&memcg->memory, 0);
page_counter_set_low(&memcg->memory, 0);

+ memcg_reparent_objcgs(memcg);
memcg_offline_kmem(memcg);
reparent_shrinker_deferred(memcg);
wb_memcg_offline(memcg);
--
2.11.0


2022-05-31 07:15:45

by Roman Gushchin

Subject: Re: [PATCH v5 00/11] Use obj_cgroup APIs to charge the LRU pages

On Mon, May 30, 2022 at 02:17:11PM -0700, Andrew Morton wrote:
> On Mon, 30 May 2022 15:49:08 +0800 Muchun Song <[email protected]> wrote:
>
> > This version is rebased on v5.18.
>
> Not a great choice of base, really. mm-stable or mm-unstable or
> linux-next or even linus-of-the-day are all much more up to date.
>
> Although the memcg reviewer tags are pretty thin, I was going to give
> it a run. But after fixing a bunch of conflicts I got about halfway
> through then gave up on a big snarl in get_obj_cgroup_from_current().
>
> > RFC v1: https://lore.kernel.org/all/[email protected]/
>
> Surprising, that was over a year ago. Why has is taken so long?

It's partially my fault: I was thinking (and to some extent still am)
that using objcg is not the best long-term choice and was pushing the
idea of using per-memcg lru vectors as intermediate objects instead.
But it looks like I underestimated the complexity and the potential
overhead of that solution.

The objcg-based approach can solve the problem right now and it shouldn't
bring any long-term issues. So I asked Muchun to revive the patchset.

Thanks!

2022-05-31 18:16:07

by Muchun Song

Subject: [PATCH v5 05/11] mm: vmscan: rework move_pages_to_lru()

In a later patch, we will reparent the LRU pages. Pages being moved to
the appropriate LRU list can be reparented while move_pages_to_lru() is
running, so having the caller hold a single lruvec lock is wrong; we
should use the more general folio_lruvec_relock_irq() interface to acquire
the correct lruvec lock.
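The relock helper itself is defined elsewhere; as a rough sketch (my
paraphrase of folio_lruvec_relock_irq(), not its exact code), the pattern
the loop now relies on is: keep the currently held lruvec lock only if it
still matches the folio, otherwise drop it and lock the folio's lruvec,
which rechecks folio_memcg() after taking the lock.

```c
/*
 * Illustrative sketch of the relock pattern used by move_pages_to_lru();
 * a paraphrase of folio_lruvec_relock_irq(), not its exact implementation.
 */
static struct lruvec *relock_sketch(struct folio *folio, struct lruvec *locked)
{
	if (locked) {
		/* Still holding the lock of the folio's current lruvec? */
		if (folio_matches_lruvec(folio, locked))
			return locked;
		lruvec_unlock_irq(locked);	/* wrong lruvec, drop it */
	}

	/* Locks the folio's lruvec and rechecks folio_memcg() under the lock. */
	return folio_lruvec_lock_irq(folio);
}
```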

Signed-off-by: Muchun Song <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
Acked-by: Roman Gushchin <[email protected]>
---
mm/vmscan.c | 49 +++++++++++++++++++++++++------------------------
1 file changed, 25 insertions(+), 24 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index a611ccf03c9b..67f1462b150d 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2226,23 +2226,28 @@ static int too_many_isolated(struct pglist_data *pgdat, int file,
* move_pages_to_lru() moves pages from private @list to appropriate LRU list.
* On return, @list is reused as a list of pages to be freed by the caller.
*
- * Returns the number of pages moved to the given lruvec.
+ * Returns the number of pages moved to the appropriate LRU list.
+ *
+ * Note: The caller must not hold any lruvec lock.
*/
-static unsigned int move_pages_to_lru(struct lruvec *lruvec,
- struct list_head *list)
+static unsigned int move_pages_to_lru(struct list_head *list)
{
- int nr_pages, nr_moved = 0;
+ int nr_moved = 0;
+ struct lruvec *lruvec = NULL;
LIST_HEAD(pages_to_free);
- struct page *page;

while (!list_empty(list)) {
- page = lru_to_page(list);
+ int nr_pages;
+ struct folio *folio = lru_to_folio(list);
+ struct page *page = &folio->page;
+
+ lruvec = folio_lruvec_relock_irq(folio, lruvec);
VM_BUG_ON_PAGE(PageLRU(page), page);
list_del(&page->lru);
if (unlikely(!page_evictable(page))) {
- spin_unlock_irq(&lruvec->lru_lock);
+ lruvec_unlock_irq(lruvec);
putback_lru_page(page);
- spin_lock_irq(&lruvec->lru_lock);
+ lruvec = NULL;
continue;
}

@@ -2263,20 +2268,16 @@ static unsigned int move_pages_to_lru(struct lruvec *lruvec,
__clear_page_lru_flags(page);

if (unlikely(PageCompound(page))) {
- spin_unlock_irq(&lruvec->lru_lock);
+ lruvec_unlock_irq(lruvec);
destroy_compound_page(page);
- spin_lock_irq(&lruvec->lru_lock);
+ lruvec = NULL;
} else
list_add(&page->lru, &pages_to_free);

continue;
}

- /*
- * All pages were isolated from the same lruvec (and isolation
- * inhibits memcg migration).
- */
- VM_BUG_ON_PAGE(!folio_matches_lruvec(page_folio(page), lruvec), page);
+ VM_BUG_ON_PAGE(!folio_matches_lruvec(folio, lruvec), page);
add_page_to_lru_list(page, lruvec);
nr_pages = thp_nr_pages(page);
nr_moved += nr_pages;
@@ -2284,6 +2285,8 @@ static unsigned int move_pages_to_lru(struct lruvec *lruvec,
workingset_age_nonresident(lruvec, nr_pages);
}

+ if (lruvec)
+ lruvec_unlock_irq(lruvec);
/*
* To save our caller's stack, now use input list for pages to free.
*/
@@ -2355,16 +2358,16 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,

nr_reclaimed = shrink_page_list(&page_list, pgdat, sc, &stat, false);

- spin_lock_irq(&lruvec->lru_lock);
- move_pages_to_lru(lruvec, &page_list);
+ move_pages_to_lru(&page_list);

+ local_irq_disable();
__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
item = current_is_kswapd() ? PGSTEAL_KSWAPD : PGSTEAL_DIRECT;
if (!cgroup_reclaim(sc))
__count_vm_events(item, nr_reclaimed);
__count_memcg_events(lruvec_memcg(lruvec), item, nr_reclaimed);
__count_vm_events(PGSTEAL_ANON + file, nr_reclaimed);
- spin_unlock_irq(&lruvec->lru_lock);
+ local_irq_enable();

lru_note_cost(lruvec, file, stat.nr_pageout);
mem_cgroup_uncharge_list(&page_list);
@@ -2494,18 +2497,16 @@ static void shrink_active_list(unsigned long nr_to_scan,
/*
* Move pages back to the lru list.
*/
- spin_lock_irq(&lruvec->lru_lock);
-
- nr_activate = move_pages_to_lru(lruvec, &l_active);
- nr_deactivate = move_pages_to_lru(lruvec, &l_inactive);
+ nr_activate = move_pages_to_lru(&l_active);
+ nr_deactivate = move_pages_to_lru(&l_inactive);
/* Keep all free pages in l_active list */
list_splice(&l_inactive, &l_active);

+ local_irq_disable();
__count_vm_events(PGDEACTIVATE, nr_deactivate);
__count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_deactivate);
-
__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
- spin_unlock_irq(&lruvec->lru_lock);
+ local_irq_enable();

mem_cgroup_uncharge_list(&l_active);
free_unref_page_list(&l_active);
--
2.11.0


2022-05-31 23:28:41

by Andrew Morton

Subject: Re: [PATCH v5 00/11] Use obj_cgroup APIs to charge the LRU pages

On Mon, 30 May 2022 15:49:08 +0800 Muchun Song <[email protected]> wrote:

> This version is rebased on v5.18.

Not a great choice of base, really. mm-stable or mm-unstable or
linux-next or even linus-of-the-day are all much more up to date.

Although the memcg reviewer tags are pretty thin, I was going to give
it a run. But after fixing a bunch of conflicts I got about halfway
through then gave up on a big snarl in get_obj_cgroup_from_current().

> RFC v1: https://lore.kernel.org/all/[email protected]/

Surprising, that was over a year ago. Why has it taken so long?


2022-06-01 13:43:33

by Muchun Song

Subject: Re: [PATCH v5 00/11] Use obj_cgroup APIs to charge the LRU pages

On Mon, May 30, 2022 at 10:41:30PM -0400, Waiman Long wrote:
> On 5/30/22 03:49, Muchun Song wrote:
> > This version is rebased on v5.18.
> >
> > Since the following patchsets applied. All the kernel memory are charged
> > with the new APIs of obj_cgroup.
> >
> > [v17,00/19] The new cgroup slab memory controller [1]
> > [v5,0/7] Use obj_cgroup APIs to charge kmem pages [2]
> >
> > But user memory allocations (LRU pages) pinning memcgs for a long time -
> > it exists at a larger scale and is causing recurring problems in the real
> > world: page cache doesn't get reclaimed for a long time, or is used by the
> > second, third, fourth, ... instance of the same job that was restarted into
> > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> > and make page reclaim very inefficient.
> >
> > We can convert LRU pages and most other raw memcg pins to the objcg direction
> > to fix this problem, and then the LRU pages will not pin the memcgs.
> >
> > This patchset aims to make the LRU pages to drop the reference to memory
> > cgroup by using the APIs of obj_cgroup. Finally, we can see that the number
> > of the dying cgroups will not increase if we run the following test script.
> >
> > ```bash
> > #!/bin/bash
> >
> > dd if=/dev/zero of=temp bs=4096 count=1
> > cat /proc/cgroups | grep memory
> >
> > for i in {0..2000}
> > do
> > mkdir /sys/fs/cgroup/memory/test$i
> > echo $$ > /sys/fs/cgroup/memory/test$i/cgroup.procs
> > cat temp >> log
> > echo $$ > /sys/fs/cgroup/memory/cgroup.procs
> > rmdir /sys/fs/cgroup/memory/test$i
> > done
> >
> > cat /proc/cgroups | grep memory
> >
> > rm -f temp log
> > ```
> >
> > [1] https://lore.kernel.org/linux-mm/[email protected]/
> > [2] https://lore.kernel.org/linux-mm/[email protected]/
> >
> > v4: https://lore.kernel.org/all/[email protected]/
> > v3: https://lore.kernel.org/all/[email protected]/
> > v2: https://lore.kernel.org/all/[email protected]/
> > v1: https://lore.kernel.org/all/[email protected]/
> > RFC v4: https://lore.kernel.org/all/[email protected]/
> > RFC v3: https://lore.kernel.org/all/[email protected]/
> > RFC v2: https://lore.kernel.org/all/[email protected]/
> > RFC v1: https://lore.kernel.org/all/[email protected]/
> >
> > v5:
> > - Lots of improvements from Johannes, Roman and Waiman.
> > - Fix lockdep warning reported by kernel test robot.
> > - Add two new patches to do code cleanup.
> > - Collect Acked-by and Reviewed-by from Johannes and Roman.
> > - I didn't replace local_irq_disable/enable() to local_lock/unlock_irq() since
> > local_lock/unlock_irq() takes an parameter, it needs more thinking to transform
> > it to local_lock. It could be an improvement in the future.
>
> My comment about local_lock/unlock is just a note that
> local_irq_disable/enable() have to be eventually replaced. However, we need
> to think carefully where to put the newly added local_lock. It is perfectly
> fine to keep it as is and leave the conversion as a future follow-up.
>

Totally agree.

> Thank you very much for your work on this patchset.
>

Thanks.

2022-06-01 18:31:06

by Muchun Song

Subject: [PATCH v5 11/11] mm: lru: use lruvec lock to serialize memcg changes

As described by commit fc574c23558c ("mm/swap.c: serialize memcg
changes in pagevec_lru_move_fn"), TestClearPageLRU() aims to
serialize mem_cgroup_move_account() during pagevec_lru_move_fn().
Now folio_lruvec_lock*() can detect whether the page's memcg has
been changed, so we can use the lruvec lock to serialize
mem_cgroup_move_account() during pagevec_lru_move_fn(). This
change is a partial revert of commit fc574c23558c ("mm/swap.c:
serialize memcg changes in pagevec_lru_move_fn").

Since pagevec_lru_move_fn() is hotter than mem_cgroup_move_account(),
removing an atomic operation there is an optimization. This change also
avoids dirtying the cacheline of a page which isn't on the LRU.

Signed-off-by: Muchun Song <[email protected]>
---
mm/memcontrol.c | 34 ++++++++++++++++++++++++++++++++++
mm/swap.c | 45 ++++++++++++++-------------------------------
mm/vmscan.c | 9 ++++-----
3 files changed, 52 insertions(+), 36 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index f4db3cb2aedc..3a0f3838f02d 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1333,10 +1333,39 @@ struct lruvec *folio_lruvec_lock(struct folio *folio)
lruvec = folio_lruvec(folio);
spin_lock(&lruvec->lru_lock);

+ /*
+ * The memcg of the page can be changed by any the following routines:
+ *
+ * 1) mem_cgroup_move_account() or
+ * 2) memcg_reparent_objcgs()
+ *
+ * The possible bad scenario would like:
+ *
+ * CPU0: CPU1: CPU2:
+ * lruvec = folio_lruvec()
+ *
+ * if (!isolate_lru_page())
+ * mem_cgroup_move_account()
+ *
+ * memcg_reparent_objcgs()
+ *
+ * spin_lock(&lruvec->lru_lock)
+ * ^^^^^^
+ * wrong lock
+ *
+ * Either CPU1 or CPU2 can change page memcg, so we need to check
+ * whether page memcg is changed, if so, we should reacquire the
+ * new lruvec lock.
+ */
if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
spin_unlock(&lruvec->lru_lock);
goto retry;
}
+
+ /*
+ * When we reach here, it means that the folio_memcg(folio) is
+ * stable.
+ */
rcu_read_unlock();

return lruvec;
@@ -1364,6 +1393,7 @@ struct lruvec *folio_lruvec_lock_irq(struct folio *folio)
lruvec = folio_lruvec(folio);
spin_lock_irq(&lruvec->lru_lock);

+ /* See the comments in folio_lruvec_lock(). */
if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
spin_unlock_irq(&lruvec->lru_lock);
goto retry;
@@ -1397,6 +1427,7 @@ struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio,
lruvec = folio_lruvec(folio);
spin_lock_irqsave(&lruvec->lru_lock, *flags);

+ /* See the comments in folio_lruvec_lock(). */
if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
spin_unlock_irqrestore(&lruvec->lru_lock, *flags);
goto retry;
@@ -5738,7 +5769,10 @@ static int mem_cgroup_move_account(struct page *page,
obj_cgroup_put(rcu_dereference(from->objcg));
rcu_read_unlock();

+ /* See the comments in folio_lruvec_lock(). */
+ spin_lock(&from_vec->lru_lock);
folio->memcg_data = (unsigned long)rcu_access_pointer(to->objcg);
+ spin_unlock(&from_vec->lru_lock);

__folio_memcg_unlock(from);

diff --git a/mm/swap.c b/mm/swap.c
index 6cea469b6ff2..1b893c157bd1 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -199,14 +199,8 @@ static void pagevec_lru_move_fn(struct pagevec *pvec,
struct page *page = pvec->pages[i];
struct folio *folio = page_folio(page);

- /* block memcg migration during page moving between lru */
- if (!TestClearPageLRU(page))
- continue;
-
lruvec = folio_lruvec_relock_irqsave(folio, lruvec, &flags);
(*move_fn)(page, lruvec);
-
- SetPageLRU(page);
}
if (lruvec)
lruvec_unlock_irqrestore(lruvec, flags);
@@ -218,7 +212,7 @@ static void pagevec_move_tail_fn(struct page *page, struct lruvec *lruvec)
{
struct folio *folio = page_folio(page);

- if (!folio_test_unevictable(folio)) {
+ if (folio_test_lru(folio) && !folio_test_unevictable(folio)) {
lruvec_del_folio(lruvec, folio);
folio_clear_active(folio);
lruvec_add_folio_tail(lruvec, folio);
@@ -314,7 +308,8 @@ void lru_note_cost_folio(struct folio *folio)

static void __folio_activate(struct folio *folio, struct lruvec *lruvec)
{
- if (!folio_test_active(folio) && !folio_test_unevictable(folio)) {
+ if (folio_test_lru(folio) && !folio_test_active(folio) &&
+ !folio_test_unevictable(folio)) {
long nr_pages = folio_nr_pages(folio);

lruvec_del_folio(lruvec, folio);
@@ -371,12 +366,9 @@ static void folio_activate(struct folio *folio)
{
struct lruvec *lruvec;

- if (folio_test_clear_lru(folio)) {
- lruvec = folio_lruvec_lock_irq(folio);
- __folio_activate(folio, lruvec);
- lruvec_unlock_irq(lruvec);
- folio_set_lru(folio);
- }
+ lruvec = folio_lruvec_lock_irq(folio);
+ __folio_activate(folio, lruvec);
+ lruvec_unlock_irq(lruvec);
}
#endif

@@ -519,6 +511,9 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec)
bool active = PageActive(page);
int nr_pages = thp_nr_pages(page);

+ if (!PageLRU(page))
+ return;
+
if (PageUnevictable(page))
return;

@@ -556,7 +551,7 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec)

static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec)
{
- if (PageActive(page) && !PageUnevictable(page)) {
+ if (PageLRU(page) && PageActive(page) && !PageUnevictable(page)) {
int nr_pages = thp_nr_pages(page);

del_page_from_lru_list(page, lruvec);
@@ -572,7 +567,7 @@ static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec)

static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec)
{
- if (PageAnon(page) && PageSwapBacked(page) &&
+ if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) &&
!PageSwapCache(page) && !PageUnevictable(page)) {
int nr_pages = thp_nr_pages(page);

@@ -1007,8 +1002,9 @@ void __pagevec_release(struct pagevec *pvec)
}
EXPORT_SYMBOL(__pagevec_release);

-static void __pagevec_lru_add_fn(struct folio *folio, struct lruvec *lruvec)
+static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec)
{
+ struct folio *folio = page_folio(page);
int was_unevictable = folio_test_clear_unevictable(folio);
long nr_pages = folio_nr_pages(folio);

@@ -1054,20 +1050,7 @@ static void __pagevec_lru_add_fn(struct folio *folio, struct lruvec *lruvec)
*/
void __pagevec_lru_add(struct pagevec *pvec)
{
- int i;
- struct lruvec *lruvec = NULL;
- unsigned long flags = 0;
-
- for (i = 0; i < pagevec_count(pvec); i++) {
- struct folio *folio = page_folio(pvec->pages[i]);
-
- lruvec = folio_lruvec_relock_irqsave(folio, lruvec, &flags);
- __pagevec_lru_add_fn(folio, lruvec);
- }
- if (lruvec)
- lruvec_unlock_irqrestore(lruvec, flags);
- release_pages(pvec->pages, pvec->nr);
- pagevec_reinit(pvec);
+ pagevec_lru_move_fn(pvec, __pagevec_lru_add_fn);
}

/**
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 51853d6df7b4..c591d071a598 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4789,18 +4789,17 @@ void check_move_unevictable_pages(struct pagevec *pvec)
nr_pages = thp_nr_pages(page);
pgscanned += nr_pages;

- /* block memcg migration during page moving between lru */
- if (!TestClearPageLRU(page))
+ lruvec = folio_lruvec_relock_irq(folio, lruvec);
+
+ if (!PageLRU(page) || !PageUnevictable(page))
continue;

- lruvec = folio_lruvec_relock_irq(folio, lruvec);
- if (page_evictable(page) && PageUnevictable(page)) {
+ if (page_evictable(page)) {
del_page_from_lru_list(page, lruvec);
ClearPageUnevictable(page);
add_page_to_lru_list(page, lruvec);
pgrescued += nr_pages;
}
- SetPageLRU(page);
}

if (lruvec) {
--
2.11.0


2022-06-01 19:24:57

by Michal Koutný

Subject: Re: [PATCH v5 03/11] mm: memcontrol: prepare objcg API for non-kmem usage

Hello.

On Mon, May 30, 2022 at 03:49:11PM +0800, Muchun Song <[email protected]> wrote:
> So we also allocate an object cgroup for the root_mem_cgroup.

This change made me wary that this patch also enables kmem charging in the
root_mem_cgroup. Fortunately, get_obj_cgroup_from_current won't return
this objcg so all is fine.

> +}
> +#else
> +static inline void obj_cgroup_release_bytes(struct obj_cgroup *objcg)
> +{
> +}
> +#endif

This empty body is for !CONFIG_MEMCG_KMEM; however, the subsequent use for LRU
pages makes no use of it, so it's warranted.

Altogether, I find this correct, hence
Reviewed-by: Michal Koutný <[email protected]>

2022-06-01 20:33:58

by Waiman Long

Subject: Re: [PATCH v5 00/11] Use obj_cgroup APIs to charge the LRU pages

On 5/30/22 03:49, Muchun Song wrote:
> This version is rebased on v5.18.
>
> Since the following patchsets applied. All the kernel memory are charged
> with the new APIs of obj_cgroup.
>
> [v17,00/19] The new cgroup slab memory controller [1]
> [v5,0/7] Use obj_cgroup APIs to charge kmem pages [2]
>
> But user memory allocations (LRU pages) pinning memcgs for a long time -
> it exists at a larger scale and is causing recurring problems in the real
> world: page cache doesn't get reclaimed for a long time, or is used by the
> second, third, fourth, ... instance of the same job that was restarted into
> a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> and make page reclaim very inefficient.
>
> We can convert LRU pages and most other raw memcg pins to the objcg direction
> to fix this problem, and then the LRU pages will not pin the memcgs.
>
> This patchset aims to make the LRU pages to drop the reference to memory
> cgroup by using the APIs of obj_cgroup. Finally, we can see that the number
> of the dying cgroups will not increase if we run the following test script.
>
> ```bash
> #!/bin/bash
>
> dd if=/dev/zero of=temp bs=4096 count=1
> cat /proc/cgroups | grep memory
>
> for i in {0..2000}
> do
> mkdir /sys/fs/cgroup/memory/test$i
> echo $$ > /sys/fs/cgroup/memory/test$i/cgroup.procs
> cat temp >> log
> echo $$ > /sys/fs/cgroup/memory/cgroup.procs
> rmdir /sys/fs/cgroup/memory/test$i
> done
>
> cat /proc/cgroups | grep memory
>
> rm -f temp log
> ```
>
> [1] https://lore.kernel.org/linux-mm/[email protected]/
> [2] https://lore.kernel.org/linux-mm/[email protected]/
>
> v4: https://lore.kernel.org/all/[email protected]/
> v3: https://lore.kernel.org/all/[email protected]/
> v2: https://lore.kernel.org/all/[email protected]/
> v1: https://lore.kernel.org/all/[email protected]/
> RFC v4: https://lore.kernel.org/all/[email protected]/
> RFC v3: https://lore.kernel.org/all/[email protected]/
> RFC v2: https://lore.kernel.org/all/[email protected]/
> RFC v1: https://lore.kernel.org/all/[email protected]/
>
> v5:
> - Lots of improvements from Johannes, Roman and Waiman.
> - Fix lockdep warning reported by kernel test robot.
> - Add two new patches to do code cleanup.
> - Collect Acked-by and Reviewed-by from Johannes and Roman.
> - I didn't replace local_irq_disable/enable() to local_lock/unlock_irq() since
> local_lock/unlock_irq() takes an parameter, it needs more thinking to transform
> it to local_lock. It could be an improvement in the future.

My comment about local_lock/unlock is just a note that
local_irq_disable/enable() have to be eventually replaced. However, we
need to think carefully where to put the newly added local_lock. It is
perfectly fine to keep it as is and leave the conversion as a future
follow-up.

Thank you very much for your work on this patchset.

Cheers,
Longman



2022-06-01 21:20:08

by Muchun Song

Subject: Re: [PATCH v5 00/11] Use obj_cgroup APIs to charge the LRU pages

On Mon, May 30, 2022 at 02:17:11PM -0700, Andrew Morton wrote:
> On Mon, 30 May 2022 15:49:08 +0800 Muchun Song <[email protected]> wrote:
>
> > This version is rebased on v5.18.
>
> Not a great choice of base, really. mm-stable or mm-unstable or
> linux-next or even linus-of-the-day are all much more up to date.
>

I'll rebase it to linux-next in v6.

> Although the memcg reviewer tags are pretty thin, I was going to give
> it a run. But after fixing a bunch of conflicts I got about halfway
> through then gave up on a big snarl in get_obj_cgroup_from_current().
>

Got it. Will fix.

> > RFC v1: https://lore.kernel.org/all/[email protected]/
>
> Surprising, that was over a year ago. Why has is taken so long?
>

Yeah, a little long. This issue has been going on for years.
I proposed an objcg-based approach to solve this issue last year;
however, we were not sure it was the best choice, so the patchset
stalled for months. Recently, this issue was raised by Roman at
the LSFMM 2022 conference, and the consensus was that objcg-based
reparenting is fine as well. So this patchset has recently been
resumed.

Thanks.

2022-06-01 21:30:45

by Roman Gushchin

Subject: Re: [PATCH v5 03/11] mm: memcontrol: prepare objcg API for non-kmem usage

On Wed, Jun 01, 2022 at 07:34:34PM +0200, Michal Koutny wrote:
> Hello.
>
> On Mon, May 30, 2022 at 03:49:11PM +0800, Muchun Song <[email protected]> wrote:
> > So we also allocate an object cgroup for the root_mem_cgroup.
>
> This change made me wary that this patch also kmem charging in the
> root_mem_cgroup. Fortunately, get_obj_cgroup_from_current won't return
> this objcg so all is fine.

Yes, I had the same experience here :)

Overall I feel like the handling of the root memcg and objcg is begging
for a cleanup, but it's really far from being trivial.

Maybe starting with documenting how it works now is a good idea...

2022-06-02 03:10:13

by Muchun Song

Subject: Re: [PATCH v5 03/11] mm: memcontrol: prepare objcg API for non-kmem usage

On Wed, Jun 01, 2022 at 07:34:34PM +0200, Michal Koutný wrote:
> Hello.
>
> On Mon, May 30, 2022 at 03:49:11PM +0800, Muchun Song <[email protected]> wrote:
> > So we also allocate an object cgroup for the root_mem_cgroup.
>
> This change made me wary that this patch also kmem charging in the
> root_mem_cgroup. Fortunately, get_obj_cgroup_from_current won't return

Sorry for the confusion. Right, we don't charge kmem to the root objcg.

> this objcg so all is fine.
>
> > +}
> > +#else
> > +static inline void obj_cgroup_release_bytes(struct obj_cgroup *objcg)
> > +{
> > +}
> > +#endif
>
> This empty body is for !CONFIG_MEMCG_KMEM, however, the subsequent use for LRU
> pages makes no use of these, so it's warranted.
>
> Altogether, I find this correct, hence
> Reviewed-by: Michal Koutný <[email protected]>
>

Thanks Michal.


2022-06-02 15:58:57

by Muchun Song

Subject: Re: [PATCH v5 03/11] mm: memcontrol: prepare objcg API for non-kmem usage

On Wed, Jun 01, 2022 at 11:33:46AM -0700, Roman Gushchin wrote:
> On Wed, Jun 01, 2022 at 07:34:34PM +0200, Michal Koutny wrote:
> > Hello.
> >
> > On Mon, May 30, 2022 at 03:49:11PM +0800, Muchun Song <[email protected]> wrote:
> > > So we also allocate an object cgroup for the root_mem_cgroup.
> >
> > This change made me wary that this patch also kmem charging in the
> > root_mem_cgroup. Fortunately, get_obj_cgroup_from_current won't return
> > this objcg so all is fine.
>
> Yes, I had the same experience here :)
>

Sorry for the confusion.

> Overall I feel like the handling of the root memcg and objcg are begging
> for a cleanup, but it's really far from being trivial.
>
> Maybe starting with documenting how it works now is a good idea...
>

You mean adding more comments to the commit log to explain the
usage of the root memcg and root objcg?

Thanks.

2022-06-09 02:52:51

by Muchun Song

Subject: Re: [PATCH v5 00/11] Use obj_cgroup APIs to charge the LRU pages

Hi,

Friendly ping. Any comments or objections?

Thanks.

On Mon, May 30, 2022 at 3:50 PM Muchun Song <[email protected]> wrote:
>
> This version is rebased on v5.18.
>
> Since the following patchsets applied. All the kernel memory are charged
> with the new APIs of obj_cgroup.
>
> [v17,00/19] The new cgroup slab memory controller [1]
> [v5,0/7] Use obj_cgroup APIs to charge kmem pages [2]
>
> But user memory allocations (LRU pages) pinning memcgs for a long time -
> it exists at a larger scale and is causing recurring problems in the real
> world: page cache doesn't get reclaimed for a long time, or is used by the
> second, third, fourth, ... instance of the same job that was restarted into
> a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> and make page reclaim very inefficient.
>
> We can convert LRU pages and most other raw memcg pins to the objcg direction
> to fix this problem, and then the LRU pages will not pin the memcgs.
>
> This patchset aims to make the LRU pages to drop the reference to memory
> cgroup by using the APIs of obj_cgroup. Finally, we can see that the number
> of the dying cgroups will not increase if we run the following test script.
>
> ```bash
> #!/bin/bash
>
> dd if=/dev/zero of=temp bs=4096 count=1
> cat /proc/cgroups | grep memory
>
> for i in {0..2000}
> do
> mkdir /sys/fs/cgroup/memory/test$i
> echo $$ > /sys/fs/cgroup/memory/test$i/cgroup.procs
> cat temp >> log
> echo $$ > /sys/fs/cgroup/memory/cgroup.procs
> rmdir /sys/fs/cgroup/memory/test$i
> done
>
> cat /proc/cgroups | grep memory
>
> rm -f temp log
> ```
>
> [1] https://lore.kernel.org/linux-mm/[email protected]/
> [2] https://lore.kernel.org/linux-mm/[email protected]/
>
> v4: https://lore.kernel.org/all/[email protected]/
> v3: https://lore.kernel.org/all/[email protected]/
> v2: https://lore.kernel.org/all/[email protected]/
> v1: https://lore.kernel.org/all/[email protected]/
> RFC v4: https://lore.kernel.org/all/[email protected]/
> RFC v3: https://lore.kernel.org/all/[email protected]/
> RFC v2: https://lore.kernel.org/all/[email protected]/
> RFC v1: https://lore.kernel.org/all/[email protected]/
>
> v5:
> - Lots of improvements from Johannes, Roman and Waiman.
> - Fix lockdep warning reported by kernel test robot.
> - Add two new patches to do code cleanup.
> - Collect Acked-by and Reviewed-by from Johannes and Roman.
> - I didn't replace local_irq_disable/enable() to local_lock/unlock_irq() since
> local_lock/unlock_irq() takes an parameter, it needs more thinking to transform
> it to local_lock. It could be an improvement in the future.
>
> v4:
> - Resend and rebased on v5.18.
>
> v3:
> - Removed the Acked-by tags from Roman since this version is based on
> the folio relevant.
>
> v2:
> - Rename obj_cgroup_release_kmem() to obj_cgroup_release_bytes() and the
> dependencies of CONFIG_MEMCG_KMEM (suggested by Roman, Thanks).
> - Rebase to linux 5.15-rc1.
> - Add a new pacth to cleanup mem_cgroup_kmem_disabled().
>
> v1:
> - Drop RFC tag.
> - Rebase to linux next-20210811.
>
> RFC v4:
> - Collect Acked-by from Roman.
> - Rebase to linux next-20210525.
> - Rename obj_cgroup_release_uncharge() to obj_cgroup_release_kmem().
> - Change the patch 1 title to "prepare objcg API for non-kmem usage".
> - Convert reparent_ops_head to an array in patch 8.
>
> Thanks for Roman's review and suggestions.
>
> RFC v3:
> - Drop the code cleanup and simplification patches. Gather those patches
> into a separate series[1].
> - Rework patch #1 suggested by Johannes.
>
> RFC v2:
> - Collect Acked-by tags by Johannes. Thanks.
> - Rework lruvec_holds_page_lru_lock() suggested by Johannes. Thanks.
> - Fix move_pages_to_lru().
>
> Muchun Song (11):
> mm: memcontrol: remove dead code and comments
> mm: rename unlock_page_lruvec{_irq, _irqrestore} to
> lruvec_unlock{_irq, _irqrestore}
> mm: memcontrol: prepare objcg API for non-kmem usage
> mm: memcontrol: make lruvec lock safe when LRU pages are reparented
> mm: vmscan: rework move_pages_to_lru()
> mm: thp: make split queue lock safe when LRU pages are reparented
> mm: memcontrol: make all the callers of {folio,page}_memcg() safe
> mm: memcontrol: introduce memcg_reparent_ops
> mm: memcontrol: use obj_cgroup APIs to charge the LRU pages
> mm: lru: add VM_BUG_ON_FOLIO to lru maintenance function
> mm: lru: use lruvec lock to serialize memcg changes
>
> fs/buffer.c | 4 +-
> fs/fs-writeback.c | 23 +-
> include/linux/memcontrol.h | 213 +++++++++------
> include/linux/mm_inline.h | 6 +
> include/trace/events/writeback.h | 5 +
> mm/compaction.c | 39 ++-
> mm/huge_memory.c | 153 +++++++++--
> mm/memcontrol.c | 560 +++++++++++++++++++++++++++------------
> mm/migrate.c | 4 +
> mm/mlock.c | 2 +-
> mm/page_io.c | 5 +-
> mm/swap.c | 62 ++---
> mm/vmscan.c | 67 +++--
> 13 files changed, 767 insertions(+), 376 deletions(-)
>
>
> base-commit: 4b0986a3613c92f4ec1bdc7f60ec66fea135991f
> --
> 2.11.0
>

2022-06-09 03:48:31

by Muchun Song

Subject: Re: [PATCH v5 00/11] Use obj_cgroup APIs to charge the LRU pages

On Thu, Jun 9, 2022 at 10:53 AM Roman Gushchin <[email protected]> wrote:
>
> On Thu, Jun 09, 2022 at 10:43:24AM +0800, Muchun Song wrote:
> > Hi,
> >
> > Friendly ping. Any comments or objections?
>
> I'm sorry, I was recently busy with some other stuff, but it's on my todo list.
> I'll try to find some time by the end of the week.

Got it. Thanks Roman. Looking forward to your reviews.

>
> Thanks!
>
> >
> > Thanks.
> >
> > On Mon, May 30, 2022 at 3:50 PM Muchun Song <[email protected]> wrote:
> > >
> > > This version is rebased on v5.18.
> > >
> > > Since the following patchsets applied. All the kernel memory are charged
> > > with the new APIs of obj_cgroup.
> > >
> > > [v17,00/19] The new cgroup slab memory controller [1]
> > > [v5,0/7] Use obj_cgroup APIs to charge kmem pages [2]
>
> Btw, both these patchsets were merged a long time ago, so you can refer
> to upstream commits instead.

Will do.

Thanks.

2022-06-09 04:45:49

by Roman Gushchin

Subject: Re: [PATCH v5 00/11] Use obj_cgroup APIs to charge the LRU pages

On Thu, Jun 09, 2022 at 10:43:24AM +0800, Muchun Song wrote:
> Hi,
>
> Friendly ping. Any comments or objections?

I'm sorry, I was recently busy with some other stuff, but it's on my todo list.
I'll try to find some time by the end of the week.

Thanks!

>
> Thanks.
>
> On Mon, May 30, 2022 at 3:50 PM Muchun Song <[email protected]> wrote:
> >
> > This version is rebased on v5.18.
> >
> > Since the following patchsets applied. All the kernel memory are charged
> > with the new APIs of obj_cgroup.
> >
> > [v17,00/19] The new cgroup slab memory controller [1]
> > [v5,0/7] Use obj_cgroup APIs to charge kmem pages [2]

Btw, both these patchsets were merged a long time ago, so you can refer
to upstream commits instead.

2022-06-10 23:25:52

by Roman Gushchin

Subject: Re: [PATCH v5 03/11] mm: memcontrol: prepare objcg API for non-kmem usage

On Mon, May 30, 2022 at 03:49:11PM +0800, Muchun Song wrote:
> Pagecache pages are charged at the allocation time and holding a
> reference to the original memory cgroup until being reclaimed.
> Depending on the memory pressure, specific patterns of the page
> sharing between different cgroups and the cgroup creation and
> destruction rates, a large number of dying memory cgroups can be
> pinned by pagecache pages. It makes the page reclaim less efficient
> and wastes memory.
>
> We can convert LRU pages and most other raw memcg pins to the objcg
> direction to fix this problem, and then the page->memcg will always
> point to an object cgroup pointer.
>
> Therefore, the infrastructure of objcg no longer only serves
> CONFIG_MEMCG_KMEM. In this patch, we move the infrastructure of the
> objcg out of the scope of the CONFIG_MEMCG_KMEM so that the LRU pages
> can reuse it to charge pages.
>
> We know that the LRU pages are not accounted at the root level. But
> the page->memcg_data points to the root_mem_cgroup. So the
> page->memcg_data of the LRU pages always points to a valid pointer.
> But the root_mem_cgroup dose not have an object cgroup. If we use
> obj_cgroup APIs to charge the LRU pages, we should set the
> page->memcg_data to a root object cgroup. So we also allocate an
> object cgroup for the root_mem_cgroup.
>
> Signed-off-by: Muchun Song <[email protected]>
> Acked-by: Johannes Weiner <[email protected]>

LGTM

Acked-by: Roman Gushchin <[email protected]>