2022-05-24 07:56:49

by Muchun Song

Subject: [PATCH v4 00/11] Use obj_cgroup APIs to charge the LRU pages

This version is rebased on v5.18.

Since the following patchsets were applied, all kernel memory is
charged with the new obj_cgroup APIs:

[v17,00/19] The new cgroup slab memory controller [1]
[v5,0/7] Use obj_cgroup APIs to charge kmem pages [2]

But user memory allocations (LRU pages) can pin memcgs for a long time.
This happens at a larger scale and causes recurring problems in the real
world: the page cache doesn't get reclaimed for a long time, or it is
used by the second, third, fourth, ... instance of the same job that was
restarted into a new cgroup every time. Unreclaimable dying cgroups pile
up, waste memory, and make page reclaim very inefficient.

We can fix this by converting LRU pages and most other raw memcg pins to
the objcg direction; the LRU pages will then no longer pin the memcgs.

This patchset makes the LRU pages drop their reference to the memory
cgroup by using the obj_cgroup APIs. With it applied, the number of dying
cgroups does not keep growing when we run the following test script.

```bash
#!/bin/bash

dd if=/dev/zero of=temp bs=4096 count=1
cat /proc/cgroups | grep memory

for i in {0..2000}
do
	mkdir /sys/fs/cgroup/memory/test$i
	echo $$ > /sys/fs/cgroup/memory/test$i/cgroup.procs
	cat temp >> log
	echo $$ > /sys/fs/cgroup/memory/cgroup.procs
	rmdir /sys/fs/cgroup/memory/test$i
done

cat /proc/cgroups | grep memory

rm -f temp log
```

[1] https://lore.kernel.org/linux-mm/[email protected]/
[2] https://lore.kernel.org/linux-mm/[email protected]/

v3: https://lore.kernel.org/all/[email protected]/
v2: https://lore.kernel.org/all/[email protected]/
v1: https://lore.kernel.org/all/[email protected]/
RFC v4: https://lore.kernel.org/all/[email protected]/
RFC v3: https://lore.kernel.org/all/[email protected]/
RFC v2: https://lore.kernel.org/all/[email protected]/
RFC v1: https://lore.kernel.org/all/[email protected]/

v4:
- Resend and rebase onto v5.18.

v3:
- Removed the Acked-by tags from Roman since this version is rebased on
top of the folio changes.

v2:
- Rename obj_cgroup_release_kmem() to obj_cgroup_release_bytes() and adjust
the CONFIG_MEMCG_KMEM dependencies (suggested by Roman, thanks).
- Rebase to linux 5.15-rc1.
- Add a new patch to clean up mem_cgroup_kmem_disabled().

v1:
- Drop RFC tag.
- Rebase to linux next-20210811.

RFC v4:
- Collect Acked-by from Roman.
- Rebase to linux next-20210525.
- Rename obj_cgroup_release_uncharge() to obj_cgroup_release_kmem().
- Change the patch 1 title to "prepare objcg API for non-kmem usage".
- Convert reparent_ops_head to an array in patch 8.

Thanks for Roman's review and suggestions.

RFC v3:
- Drop the code cleanup and simplification patches. Gather those patches
into a separate series[1].
- Rework patch #1 suggested by Johannes.

RFC v2:
- Collect Acked-by tags from Johannes. Thanks.
- Rework lruvec_holds_page_lru_lock() as suggested by Johannes. Thanks.
- Fix move_pages_to_lru().

Muchun Song (11):
mm: memcontrol: prepare objcg API for non-kmem usage
mm: memcontrol: introduce compact_folio_lruvec_lock_irqsave
mm: memcontrol: make lruvec lock safe when LRU pages are reparented
mm: vmscan: rework move_pages_to_lru()
mm: thp: introduce folio_split_queue_lock{_irqsave}()
mm: thp: make split queue lock safe when LRU pages are reparented
mm: memcontrol: make all the callers of {folio,page}_memcg() safe
mm: memcontrol: introduce memcg_reparent_ops
mm: memcontrol: use obj_cgroup APIs to charge the LRU pages
mm: lru: add VM_BUG_ON_FOLIO to lru maintenance function
mm: lru: use lruvec lock to serialize memcg changes

fs/buffer.c | 4 +-
fs/fs-writeback.c | 23 +-
include/linux/memcontrol.h | 198 ++++++++------
include/linux/mm_inline.h | 6 +
include/trace/events/writeback.h | 5 +
mm/compaction.c | 39 ++-
mm/huge_memory.c | 157 ++++++++++--
mm/memcontrol.c | 542 ++++++++++++++++++++++++++++-----------
mm/migrate.c | 4 +
mm/page_io.c | 5 +-
mm/swap.c | 49 ++--
mm/vmscan.c | 57 ++--
12 files changed, 756 insertions(+), 333 deletions(-)


base-commit: 4b0986a3613c92f4ec1bdc7f60ec66fea135991f
--
2.11.0



2022-05-24 08:12:54

by Muchun Song

Subject: [PATCH v4 01/11] mm: memcontrol: prepare objcg API for non-kmem usage

Page cache pages are charged at allocation time and hold a reference
to the original memory cgroup until they are reclaimed. Depending on
the memory pressure, the specific patterns of page sharing between
different cgroups, and the cgroup creation and destruction rates, a
large number of dying memory cgroups can be pinned by page cache
pages. This makes page reclaim less efficient and wastes memory.

We can convert LRU pages and most other raw memcg pins to the objcg
direction to fix this problem; page->memcg_data will then always
point to an object cgroup.

Therefore, the objcg infrastructure no longer serves only
CONFIG_MEMCG_KMEM. In this patch, move the objcg infrastructure out
of the scope of CONFIG_MEMCG_KMEM so that the LRU pages can reuse it
for charging.

We know that LRU pages are not accounted at the root level, yet their
page->memcg_data points to root_mem_cgroup, so page->memcg_data of an
LRU page always points to a valid pointer. But root_mem_cgroup does
not have an object cgroup. If we use the obj_cgroup APIs to charge
the LRU pages, we have to set page->memcg_data to a root object
cgroup. So we also allocate an object cgroup for root_mem_cgroup.

Signed-off-by: Muchun Song <[email protected]>
---
include/linux/memcontrol.h | 5 ++--
mm/memcontrol.c | 60 +++++++++++++++++++++++++---------------------
2 files changed, 35 insertions(+), 30 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 89b14729d59f..ff1c1dd7e762 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -315,10 +315,10 @@ struct mem_cgroup {

#ifdef CONFIG_MEMCG_KMEM
int kmemcg_id;
+#endif
struct obj_cgroup __rcu *objcg;
/* list of inherited objcgs, protected by objcg_lock */
struct list_head objcg_list;
-#endif

MEMCG_PADDING(_pad2_);

@@ -851,8 +851,7 @@ static inline struct mem_cgroup *lruvec_memcg(struct lruvec *lruvec)
* parent_mem_cgroup - find the accounting parent of a memcg
* @memcg: memcg whose parent to find
*
- * Returns the parent memcg, or NULL if this is the root or the memory
- * controller is in legacy no-hierarchy mode.
+ * Returns the parent memcg, or NULL if this is the root.
*/
static inline struct mem_cgroup *parent_mem_cgroup(struct mem_cgroup *memcg)
{
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 598fece89e2b..6de0d3e53eb1 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -254,9 +254,9 @@ struct mem_cgroup *vmpressure_to_memcg(struct vmpressure *vmpr)
return container_of(vmpr, struct mem_cgroup, vmpressure);
}

-#ifdef CONFIG_MEMCG_KMEM
static DEFINE_SPINLOCK(objcg_lock);

+#ifdef CONFIG_MEMCG_KMEM
bool mem_cgroup_kmem_disabled(void)
{
return cgroup_memory_nokmem;
@@ -265,12 +265,10 @@ bool mem_cgroup_kmem_disabled(void)
static void obj_cgroup_uncharge_pages(struct obj_cgroup *objcg,
unsigned int nr_pages);

-static void obj_cgroup_release(struct percpu_ref *ref)
+static void obj_cgroup_release_bytes(struct obj_cgroup *objcg)
{
- struct obj_cgroup *objcg = container_of(ref, struct obj_cgroup, refcnt);
unsigned int nr_bytes;
unsigned int nr_pages;
- unsigned long flags;

/*
* At this point all allocated objects are freed, and
@@ -284,9 +282,9 @@ static void obj_cgroup_release(struct percpu_ref *ref)
* 3) CPU1: a process from another memcg is allocating something,
* the stock if flushed,
* objcg->nr_charged_bytes = PAGE_SIZE - 92
- * 5) CPU0: we do release this object,
+ * 4) CPU0: we do release this object,
* 92 bytes are added to stock->nr_bytes
- * 6) CPU0: stock is flushed,
+ * 5) CPU0: stock is flushed,
* 92 bytes are added to objcg->nr_charged_bytes
*
* In the result, nr_charged_bytes == PAGE_SIZE.
@@ -298,6 +296,19 @@ static void obj_cgroup_release(struct percpu_ref *ref)

if (nr_pages)
obj_cgroup_uncharge_pages(objcg, nr_pages);
+}
+#else
+static inline void obj_cgroup_release_bytes(struct obj_cgroup *objcg)
+{
+}
+#endif
+
+static void obj_cgroup_release(struct percpu_ref *ref)
+{
+ struct obj_cgroup *objcg = container_of(ref, struct obj_cgroup, refcnt);
+ unsigned long flags;
+
+ obj_cgroup_release_bytes(objcg);

spin_lock_irqsave(&objcg_lock, flags);
list_del(&objcg->list);
@@ -326,10 +337,10 @@ static struct obj_cgroup *obj_cgroup_alloc(void)
return objcg;
}

-static void memcg_reparent_objcgs(struct mem_cgroup *memcg,
- struct mem_cgroup *parent)
+static void memcg_reparent_objcgs(struct mem_cgroup *memcg)
{
struct obj_cgroup *objcg, *iter;
+ struct mem_cgroup *parent = parent_mem_cgroup(memcg);

objcg = rcu_replace_pointer(memcg->objcg, NULL, true);

@@ -348,6 +359,7 @@ static void memcg_reparent_objcgs(struct mem_cgroup *memcg,
percpu_ref_kill(&objcg->refcnt);
}

+#ifdef CONFIG_MEMCG_KMEM
/*
* A lot of the calls to the cache allocation functions are expected to be
* inlined by the compiler. Since the calls to memcg_slab_pre_alloc_hook() are
@@ -3589,21 +3601,12 @@ static u64 mem_cgroup_read_u64(struct cgroup_subsys_state *css,
#ifdef CONFIG_MEMCG_KMEM
static int memcg_online_kmem(struct mem_cgroup *memcg)
{
- struct obj_cgroup *objcg;
-
if (cgroup_memory_nokmem)
return 0;

if (unlikely(mem_cgroup_is_root(memcg)))
return 0;

- objcg = obj_cgroup_alloc();
- if (!objcg)
- return -ENOMEM;
-
- objcg->memcg = memcg;
- rcu_assign_pointer(memcg->objcg, objcg);
-
static_branch_enable(&memcg_kmem_enabled_key);

memcg->kmemcg_id = memcg->id.id;
@@ -3613,27 +3616,19 @@ static int memcg_online_kmem(struct mem_cgroup *memcg)

static void memcg_offline_kmem(struct mem_cgroup *memcg)
{
- struct mem_cgroup *parent;
-
if (cgroup_memory_nokmem)
return;

if (unlikely(mem_cgroup_is_root(memcg)))
return;

- parent = parent_mem_cgroup(memcg);
- if (!parent)
- parent = root_mem_cgroup;
-
- memcg_reparent_objcgs(memcg, parent);
-
/*
* After we have finished memcg_reparent_objcgs(), all list_lrus
* corresponding to this cgroup are guaranteed to remain empty.
* The ordering is imposed by list_lru_node->lock taken by
* memcg_reparent_list_lrus().
*/
- memcg_reparent_list_lrus(memcg, parent);
+ memcg_reparent_list_lrus(memcg, parent_mem_cgroup(memcg));
}
#else
static int memcg_online_kmem(struct mem_cgroup *memcg)
@@ -5116,8 +5111,8 @@ static struct mem_cgroup *mem_cgroup_alloc(void)
memcg->socket_pressure = jiffies;
#ifdef CONFIG_MEMCG_KMEM
memcg->kmemcg_id = -1;
- INIT_LIST_HEAD(&memcg->objcg_list);
#endif
+ INIT_LIST_HEAD(&memcg->objcg_list);
#ifdef CONFIG_CGROUP_WRITEBACK
INIT_LIST_HEAD(&memcg->cgwb_list);
for (i = 0; i < MEMCG_CGWB_FRN_CNT; i++)
@@ -5179,6 +5174,7 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
{
struct mem_cgroup *memcg = mem_cgroup_from_css(css);
+ struct obj_cgroup *objcg;

if (memcg_online_kmem(memcg))
goto remove_id;
@@ -5191,6 +5187,13 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
if (alloc_shrinker_info(memcg))
goto offline_kmem;

+ objcg = obj_cgroup_alloc();
+ if (!objcg)
+ goto free_shrinker;
+
+ objcg->memcg = memcg;
+ rcu_assign_pointer(memcg->objcg, objcg);
+
/* Online state pins memcg ID, memcg ID pins CSS */
refcount_set(&memcg->id.ref, 1);
css_get(css);
@@ -5199,6 +5202,8 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
queue_delayed_work(system_unbound_wq, &stats_flush_dwork,
2UL*HZ);
return 0;
+free_shrinker:
+ free_shrinker_info(memcg);
offline_kmem:
memcg_offline_kmem(memcg);
remove_id:
@@ -5226,6 +5231,7 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
page_counter_set_min(&memcg->memory, 0);
page_counter_set_low(&memcg->memory, 0);

+ memcg_reparent_objcgs(memcg);
memcg_offline_kmem(memcg);
reparent_shrinker_deferred(memcg);
wb_memcg_offline(memcg);
--
2.11.0


2022-05-24 09:51:44

by Muchun Song

Subject: [PATCH v4 11/11] mm: lru: use lruvec lock to serialize memcg changes

As described by commit fc574c23558c ("mm/swap.c: serialize memcg
changes in pagevec_lru_move_fn"), TestClearPageLRU() aims to
serialize mem_cgroup_move_account() during pagevec_lru_move_fn().
Now folio_lruvec_lock*() can detect whether the page memcg has been
changed, so we can use the lruvec lock to serialize
mem_cgroup_move_account() during pagevec_lru_move_fn(). This change
is a partial revert of commit fc574c23558c ("mm/swap.c: serialize
memcg changes in pagevec_lru_move_fn").

Also, pagevec_lru_move_fn() is hotter than mem_cgroup_move_account(),
so removing an atomic operation from it is an optimization. In
addition, this change no longer dirties the cacheline of a page which
isn't on the LRU.
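
For reference, a condensed sketch of the resulting locking scheme
(move_one() is a hypothetical move function, not part of the patch; the
real conversions are in the hunks below):

```c
/* Hypothetical move function relying on the new serialization. */
static void move_one(struct folio *folio)
{
	unsigned long flags;
	struct lruvec *lruvec;

	/*
	 * Retries internally until the locked lruvec matches
	 * folio_memcg(folio), see folio_lruvec_lock_irqsave().
	 */
	lruvec = folio_lruvec_lock_irqsave(folio, &flags);

	/*
	 * folio_memcg() cannot change here: mem_cgroup_move_account()
	 * now takes this lruvec lock before updating folio->memcg_data
	 * (see the memcontrol.c hunk below), so checking the LRU flag
	 * under the lock replaces the TestClearPageLRU() dance.
	 */
	if (folio_test_lru(folio)) {
		lruvec_del_folio(lruvec, folio);
		lruvec_add_folio_tail(lruvec, folio);
	}

	unlock_page_lruvec_irqrestore(lruvec, flags);
}
```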

Signed-off-by: Muchun Song <[email protected]>
---
mm/memcontrol.c | 31 +++++++++++++++++++++++++++++++
mm/swap.c | 45 ++++++++++++++-------------------------------
mm/vmscan.c | 9 ++++-----
3 files changed, 49 insertions(+), 36 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 1a35f7fde3ed..7b6d9c308d91 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1331,12 +1331,38 @@ struct lruvec *folio_lruvec_lock(struct folio *folio)
lruvec = folio_lruvec(folio);
spin_lock(&lruvec->lru_lock);

+ /*
+ * The memcg of the page can be changed by any the following routines:
+ *
+ * 1) mem_cgroup_move_account() or
+ * 2) memcg_reparent_objcgs()
+ *
+ * The possible bad scenario would like:
+ *
+ * CPU0: CPU1: CPU2:
+ * lruvec = folio_lruvec()
+ *
+ * if (!isolate_lru_page())
+ * mem_cgroup_move_account()
+ *
+ * memcg_reparent_objcgs()
+ *
+ * spin_lock(&lruvec->lru_lock)
+ * ^^^^^^
+ * wrong lock
+ *
+ * Either CPU1 or CPU2 can change page memcg, so we need to check
+ * whether page memcg is changed, if so, we should reacquire the
+ * new lruvec lock.
+ */
if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
spin_unlock(&lruvec->lru_lock);
goto retry;
}

/*
+ * When we reach here, it means that the folio_memcg(folio) is stable.
+ *
* Preemption is disabled in the internal of spin_lock, which can serve
* as RCU read-side critical sections.
*/
@@ -1367,6 +1393,7 @@ struct lruvec *folio_lruvec_lock_irq(struct folio *folio)
lruvec = folio_lruvec(folio);
spin_lock_irq(&lruvec->lru_lock);

+ /* See the comments in folio_lruvec_lock(). */
if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
spin_unlock_irq(&lruvec->lru_lock);
goto retry;
@@ -1402,6 +1429,7 @@ struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio,
lruvec = folio_lruvec(folio);
spin_lock_irqsave(&lruvec->lru_lock, *flags);

+ /* See the comments in folio_lruvec_lock(). */
if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
spin_unlock_irqrestore(&lruvec->lru_lock, *flags);
goto retry;
@@ -5751,7 +5779,10 @@ static int mem_cgroup_move_account(struct page *page,
obj_cgroup_put(rcu_dereference(from->objcg));
rcu_read_unlock();

+ /* See the comments in folio_lruvec_lock(). */
+ spin_lock(&from_vec->lru_lock);
folio->memcg_data = (unsigned long)rcu_access_pointer(to->objcg);
+ spin_unlock(&from_vec->lru_lock);

__folio_memcg_unlock(from);

diff --git a/mm/swap.c b/mm/swap.c
index 9680f2fc48b1..984b100e84e4 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -199,14 +199,8 @@ static void pagevec_lru_move_fn(struct pagevec *pvec,
struct page *page = pvec->pages[i];
struct folio *folio = page_folio(page);

- /* block memcg migration during page moving between lru */
- if (!TestClearPageLRU(page))
- continue;
-
lruvec = folio_lruvec_relock_irqsave(folio, lruvec, &flags);
(*move_fn)(page, lruvec);
-
- SetPageLRU(page);
}
if (lruvec)
unlock_page_lruvec_irqrestore(lruvec, flags);
@@ -218,7 +212,7 @@ static void pagevec_move_tail_fn(struct page *page, struct lruvec *lruvec)
{
struct folio *folio = page_folio(page);

- if (!folio_test_unevictable(folio)) {
+ if (folio_test_lru(folio) && !folio_test_unevictable(folio)) {
lruvec_del_folio(lruvec, folio);
folio_clear_active(folio);
lruvec_add_folio_tail(lruvec, folio);
@@ -313,7 +307,8 @@ void lru_note_cost_folio(struct folio *folio)

static void __folio_activate(struct folio *folio, struct lruvec *lruvec)
{
- if (!folio_test_active(folio) && !folio_test_unevictable(folio)) {
+ if (folio_test_lru(folio) && !folio_test_active(folio) &&
+ !folio_test_unevictable(folio)) {
long nr_pages = folio_nr_pages(folio);

lruvec_del_folio(lruvec, folio);
@@ -370,12 +365,9 @@ static void folio_activate(struct folio *folio)
{
struct lruvec *lruvec;

- if (folio_test_clear_lru(folio)) {
- lruvec = folio_lruvec_lock_irq(folio);
- __folio_activate(folio, lruvec);
- unlock_page_lruvec_irq(lruvec);
- folio_set_lru(folio);
- }
+ lruvec = folio_lruvec_lock_irq(folio);
+ __folio_activate(folio, lruvec);
+ unlock_page_lruvec_irq(lruvec);
}
#endif

@@ -518,6 +510,9 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec)
bool active = PageActive(page);
int nr_pages = thp_nr_pages(page);

+ if (!PageLRU(page))
+ return;
+
if (PageUnevictable(page))
return;

@@ -555,7 +550,7 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec)

static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec)
{
- if (PageActive(page) && !PageUnevictable(page)) {
+ if (PageLRU(page) && PageActive(page) && !PageUnevictable(page)) {
int nr_pages = thp_nr_pages(page);

del_page_from_lru_list(page, lruvec);
@@ -571,7 +566,7 @@ static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec)

static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec)
{
- if (PageAnon(page) && PageSwapBacked(page) &&
+ if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) &&
!PageSwapCache(page) && !PageUnevictable(page)) {
int nr_pages = thp_nr_pages(page);

@@ -1006,8 +1001,9 @@ void __pagevec_release(struct pagevec *pvec)
}
EXPORT_SYMBOL(__pagevec_release);

-static void __pagevec_lru_add_fn(struct folio *folio, struct lruvec *lruvec)
+static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec)
{
+ struct folio *folio = page_folio(page);
int was_unevictable = folio_test_clear_unevictable(folio);
long nr_pages = folio_nr_pages(folio);

@@ -1053,20 +1049,7 @@ static void __pagevec_lru_add_fn(struct folio *folio, struct lruvec *lruvec)
*/
void __pagevec_lru_add(struct pagevec *pvec)
{
- int i;
- struct lruvec *lruvec = NULL;
- unsigned long flags = 0;
-
- for (i = 0; i < pagevec_count(pvec); i++) {
- struct folio *folio = page_folio(pvec->pages[i]);
-
- lruvec = folio_lruvec_relock_irqsave(folio, lruvec, &flags);
- __pagevec_lru_add_fn(folio, lruvec);
- }
- if (lruvec)
- unlock_page_lruvec_irqrestore(lruvec, flags);
- release_pages(pvec->pages, pvec->nr);
- pagevec_reinit(pvec);
+ pagevec_lru_move_fn(pvec, __pagevec_lru_add_fn);
}

/**
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 6c9e2eafc8f9..ec1272ca5ead 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4793,18 +4793,17 @@ void check_move_unevictable_pages(struct pagevec *pvec)
nr_pages = thp_nr_pages(page);
pgscanned += nr_pages;

- /* block memcg migration during page moving between lru */
- if (!TestClearPageLRU(page))
+ lruvec = folio_lruvec_relock_irq(folio, lruvec);
+
+ if (!PageLRU(page) || !PageUnevictable(page))
continue;

- lruvec = folio_lruvec_relock_irq(folio, lruvec);
- if (page_evictable(page) && PageUnevictable(page)) {
+ if (page_evictable(page)) {
del_page_from_lru_list(page, lruvec);
ClearPageUnevictable(page);
add_page_to_lru_list(page, lruvec);
pgrescued += nr_pages;
}
- SetPageLRU(page);
}

if (lruvec) {
--
2.11.0


2022-05-24 10:15:08

by Muchun Song

Subject: [PATCH v4 02/11] mm: memcontrol: introduce compact_folio_lruvec_lock_irqsave

If we reuse the objcg APIs to charge LRU pages, folio_memcg() can be
changed when the LRU pages are reparented. In this case, we need to
acquire the new lruvec lock.

lruvec = folio_lruvec(folio);

// The page is reparented.

compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);

// Acquired the wrong lruvec lock and need to retry.

But compact_lock_irqsave() only takes the lruvec lock as a parameter,
so it cannot detect this change. If it took the folio as a parameter
to acquire the lruvec lock, then when the page memcg changes we could
use folio_memcg() to detect whether we need to reacquire the new
lruvec lock. So compact_lock_irqsave() is not suitable for us.
Similar to folio_lruvec_lock_irqsave(), introduce
compact_folio_lruvec_lock_irqsave() to acquire the lruvec lock in
the compaction routine.
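
A condensed caller-side sketch of the new helper (compact_relock() is a
hypothetical wrapper that mirrors the isolate_migratepages_block() hunk
below):

```c
/* Hypothetical wrapper mirroring the new caller pattern in compaction. */
static struct lruvec *compact_relock(struct folio *folio,
				     struct lruvec *locked,
				     unsigned long *flags,
				     struct compact_control *cc)
{
	struct lruvec *lruvec = folio_lruvec(folio);

	/* Already holding the right lock, nothing to do. */
	if (lruvec == locked)
		return locked;

	if (locked)
		unlock_page_lruvec_irqrestore(locked, *flags);

	/*
	 * Resolve and lock the lruvec through the folio (trylock first
	 * in MIGRATE_ASYNC mode), so a later patch can recheck
	 * folio_memcg() against the locked lruvec and retry.
	 */
	return compact_folio_lruvec_lock_irqsave(folio, flags, cc);
}
```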

Signed-off-by: Muchun Song <[email protected]>
---
mm/compaction.c | 31 +++++++++++++++++++++++++++----
1 file changed, 27 insertions(+), 4 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index fe915db6149b..817098817302 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -509,6 +509,29 @@ static bool compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
return true;
}

+static struct lruvec *
+compact_folio_lruvec_lock_irqsave(struct folio *folio, unsigned long *flags,
+ struct compact_control *cc)
+{
+ struct lruvec *lruvec;
+
+ lruvec = folio_lruvec(folio);
+
+ /* Track if the lock is contended in async mode */
+ if (cc->mode == MIGRATE_ASYNC && !cc->contended) {
+ if (spin_trylock_irqsave(&lruvec->lru_lock, *flags))
+ goto out;
+
+ cc->contended = true;
+ }
+
+ spin_lock_irqsave(&lruvec->lru_lock, *flags);
+out:
+ lruvec_memcg_debug(lruvec, folio);
+
+ return lruvec;
+}
+
/*
* Compaction requires the taking of some coarse locks that are potentially
* very heavily contended. The lock should be periodically unlocked to avoid
@@ -844,6 +867,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,

/* Time to isolate some pages for migration */
for (; low_pfn < end_pfn; low_pfn++) {
+ struct folio *folio;

if (skip_on_failure && low_pfn >= next_skip_pfn) {
/*
@@ -1065,18 +1089,17 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
if (!TestClearPageLRU(page))
goto isolate_fail_put;

- lruvec = folio_lruvec(page_folio(page));
+ folio = page_folio(page);
+ lruvec = folio_lruvec(folio);

/* If we already hold the lock, we can skip some rechecking */
if (lruvec != locked) {
if (locked)
unlock_page_lruvec_irqrestore(locked, flags);

- compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
+ lruvec = compact_folio_lruvec_lock_irqsave(folio, &flags, cc);
locked = lruvec;

- lruvec_memcg_debug(lruvec, page_folio(page));
-
/* Try get exclusive access under lock */
if (!skip_updated) {
skip_updated = true;
--
2.11.0


2022-05-24 11:41:02

by Muchun Song

Subject: [PATCH v4 10/11] mm: lru: add VM_BUG_ON_FOLIO to lru maintenance function

We need to make sure that the page is deleted from or added to the
correct lruvec list. So add a VM_BUG_ON_FOLIO() to catch invalid
users.

Signed-off-by: Muchun Song <[email protected]>
---
include/linux/mm_inline.h | 6 ++++++
mm/vmscan.c | 1 -
2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index ac32125745ab..30d2393da613 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -97,6 +97,8 @@ void lruvec_add_folio(struct lruvec *lruvec, struct folio *folio)
{
enum lru_list lru = folio_lru_list(folio);

+ VM_BUG_ON_FOLIO(!folio_matches_lruvec(folio, lruvec), folio);
+
update_lru_size(lruvec, lru, folio_zonenum(folio),
folio_nr_pages(folio));
if (lru != LRU_UNEVICTABLE)
@@ -114,6 +116,8 @@ void lruvec_add_folio_tail(struct lruvec *lruvec, struct folio *folio)
{
enum lru_list lru = folio_lru_list(folio);

+ VM_BUG_ON_FOLIO(!folio_matches_lruvec(folio, lruvec), folio);
+
update_lru_size(lruvec, lru, folio_zonenum(folio),
folio_nr_pages(folio));
/* This is not expected to be used on LRU_UNEVICTABLE */
@@ -131,6 +135,8 @@ void lruvec_del_folio(struct lruvec *lruvec, struct folio *folio)
{
enum lru_list lru = folio_lru_list(folio);

+ VM_BUG_ON_FOLIO(!folio_matches_lruvec(folio, lruvec), folio);
+
if (lru != LRU_UNEVICTABLE)
list_del(&folio->lru);
update_lru_size(lruvec, lru, folio_zonenum(folio),
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 761d5e0dd78d..6c9e2eafc8f9 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2281,7 +2281,6 @@ static unsigned int move_pages_to_lru(struct list_head *list)
continue;
}

- VM_BUG_ON_PAGE(!folio_matches_lruvec(folio, lruvec), page);
add_page_to_lru_list(page, lruvec);
nr_pages = thp_nr_pages(page);
nr_moved += nr_pages;
--
2.11.0


2022-05-24 13:05:50

by Muchun Song

Subject: [PATCH v4 05/11] mm: thp: introduce folio_split_queue_lock{_irqsave}()

We need to make the thp deferred split queue lock safe when LRU pages
are reparented. Similar to folio_lruvec_lock{_irqsave,_irq}(),
introduce folio_split_queue_lock{_irqsave}() so that the deferred
split queue lock is acquired through the folio, which makes the
reparenting easier to handle.

In the next patch, we can then use a similar approach (just like the
lruvec lock does) to make the thp deferred split queue lock safe when
the LRU pages are reparented.
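
A minimal usage sketch of the new helpers (remove_from_split_queue() is
a hypothetical caller mirroring the free_transhuge_page() conversion
below):

```c
/* Hypothetical caller: lock the deferred split queue via the folio. */
static void remove_from_split_queue(struct folio *folio)
{
	struct deferred_split *ds_queue;
	unsigned long flags;

	/* Resolves the queue (memcg or node) from the folio and locks it. */
	ds_queue = folio_split_queue_lock_irqsave(folio, &flags);
	if (!list_empty(page_deferred_list(&folio->page))) {
		ds_queue->split_queue_len--;
		list_del(page_deferred_list(&folio->page));
	}
	split_queue_unlock_irqrestore(ds_queue, flags);
}
```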

Signed-off-by: Muchun Song <[email protected]>
---
include/linux/memcontrol.h | 10 +++++
mm/huge_memory.c | 100 +++++++++++++++++++++++++++++++++------------
2 files changed, 84 insertions(+), 26 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 4042e4d21fe2..8c2f1ba2f471 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1650,6 +1650,11 @@ int alloc_shrinker_info(struct mem_cgroup *memcg);
void free_shrinker_info(struct mem_cgroup *memcg);
void set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id);
void reparent_shrinker_deferred(struct mem_cgroup *memcg);
+
+static inline int shrinker_id(struct shrinker *shrinker)
+{
+ return shrinker->id;
+}
#else
#define mem_cgroup_sockets_enabled 0
static inline void mem_cgroup_sk_alloc(struct sock *sk) { };
@@ -1663,6 +1668,11 @@ static inline void set_shrinker_bit(struct mem_cgroup *memcg,
int nid, int shrinker_id)
{
}
+
+static inline int shrinker_id(struct shrinker *shrinker)
+{
+ return -1;
+}
#endif

#ifdef CONFIG_MEMCG_KMEM
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 910a138e9859..ea152bde441e 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -503,25 +503,74 @@ pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma)
}

#ifdef CONFIG_MEMCG
-static inline struct deferred_split *get_deferred_split_queue(struct page *page)
+static inline struct mem_cgroup *folio_split_queue_memcg(struct folio *folio,
+ struct deferred_split *queue)
{
- struct mem_cgroup *memcg = page_memcg(compound_head(page));
- struct pglist_data *pgdat = NODE_DATA(page_to_nid(page));
+ if (mem_cgroup_disabled())
+ return NULL;
+ if (&NODE_DATA(folio_nid(folio))->deferred_split_queue == queue)
+ return NULL;
+ return container_of(queue, struct mem_cgroup, deferred_split_queue);
+}

- if (memcg)
- return &memcg->deferred_split_queue;
- else
- return &pgdat->deferred_split_queue;
+static inline struct deferred_split *folio_memcg_split_queue(struct folio *folio)
+{
+ struct mem_cgroup *memcg = folio_memcg(folio);
+
+ return memcg ? &memcg->deferred_split_queue : NULL;
}
#else
-static inline struct deferred_split *get_deferred_split_queue(struct page *page)
+static inline struct mem_cgroup *folio_split_queue_memcg(struct folio *folio,
+ struct deferred_split *queue)
{
- struct pglist_data *pgdat = NODE_DATA(page_to_nid(page));
+ return NULL;
+}

- return &pgdat->deferred_split_queue;
+static inline struct deferred_split *folio_memcg_split_queue(struct folio *folio)
+{
+ return NULL;
}
#endif

+static struct deferred_split *folio_split_queue(struct folio *folio)
+{
+ struct deferred_split *queue = folio_memcg_split_queue(folio);
+
+ return queue ? : &NODE_DATA(folio_nid(folio))->deferred_split_queue;
+}
+
+static struct deferred_split *folio_split_queue_lock(struct folio *folio)
+{
+ struct deferred_split *queue;
+
+ queue = folio_split_queue(folio);
+ spin_lock(&queue->split_queue_lock);
+
+ return queue;
+}
+
+static struct deferred_split *
+folio_split_queue_lock_irqsave(struct folio *folio, unsigned long *flags)
+{
+ struct deferred_split *queue;
+
+ queue = folio_split_queue(folio);
+ spin_lock_irqsave(&queue->split_queue_lock, *flags);
+
+ return queue;
+}
+
+static inline void split_queue_unlock(struct deferred_split *queue)
+{
+ spin_unlock(&queue->split_queue_lock);
+}
+
+static inline void split_queue_unlock_irqrestore(struct deferred_split *queue,
+ unsigned long flags)
+{
+ spin_unlock_irqrestore(&queue->split_queue_lock, flags);
+}
+
void prep_transhuge_page(struct page *page)
{
/*
@@ -2489,7 +2538,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
{
struct folio *folio = page_folio(page);
struct page *head = &folio->page;
- struct deferred_split *ds_queue = get_deferred_split_queue(head);
+ struct deferred_split *ds_queue;
XA_STATE(xas, &head->mapping->i_pages, head->index);
struct anon_vma *anon_vma = NULL;
struct address_space *mapping = NULL;
@@ -2581,13 +2630,13 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
}

/* Prevent deferred_split_scan() touching ->_refcount */
- spin_lock(&ds_queue->split_queue_lock);
+ ds_queue = folio_split_queue_lock(folio);
if (page_ref_freeze(head, 1 + extra_pins)) {
if (!list_empty(page_deferred_list(head))) {
ds_queue->split_queue_len--;
list_del(page_deferred_list(head));
}
- spin_unlock(&ds_queue->split_queue_lock);
+ split_queue_unlock(ds_queue);
if (mapping) {
int nr = thp_nr_pages(head);

@@ -2605,7 +2654,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
__split_huge_page(page, list, end);
ret = 0;
} else {
- spin_unlock(&ds_queue->split_queue_lock);
+ split_queue_unlock(ds_queue);
fail:
if (mapping)
xas_unlock(&xas);
@@ -2630,25 +2679,23 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)

void free_transhuge_page(struct page *page)
{
- struct deferred_split *ds_queue = get_deferred_split_queue(page);
+ struct deferred_split *ds_queue;
unsigned long flags;

- spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
+ ds_queue = folio_split_queue_lock_irqsave(page_folio(page), &flags);
if (!list_empty(page_deferred_list(page))) {
ds_queue->split_queue_len--;
list_del(page_deferred_list(page));
}
- spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
+ split_queue_unlock_irqrestore(ds_queue, flags);
free_compound_page(page);
}

void deferred_split_huge_page(struct page *page)
{
- struct deferred_split *ds_queue = get_deferred_split_queue(page);
-#ifdef CONFIG_MEMCG
- struct mem_cgroup *memcg = page_memcg(compound_head(page));
-#endif
+ struct deferred_split *ds_queue;
unsigned long flags;
+ struct folio *folio = page_folio(page);

VM_BUG_ON_PAGE(!PageTransHuge(page), page);

@@ -2665,18 +2712,19 @@ void deferred_split_huge_page(struct page *page)
if (PageSwapCache(page))
return;

- spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
+ ds_queue = folio_split_queue_lock_irqsave(folio, &flags);
if (list_empty(page_deferred_list(page))) {
+ struct mem_cgroup *memcg;
+
+ memcg = folio_split_queue_memcg(folio, ds_queue);
count_vm_event(THP_DEFERRED_SPLIT_PAGE);
list_add_tail(page_deferred_list(page), &ds_queue->split_queue);
ds_queue->split_queue_len++;
-#ifdef CONFIG_MEMCG
if (memcg)
set_shrinker_bit(memcg, page_to_nid(page),
- deferred_split_shrinker.id);
-#endif
+ shrinker_id(&deferred_split_shrinker));
}
- spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
+ split_queue_unlock_irqrestore(ds_queue, flags);
}

static unsigned long deferred_split_count(struct shrinker *shrink,
--
2.11.0


2022-05-24 14:32:11

by Muchun Song

Subject: [PATCH v4 06/11] mm: thp: make split queue lock safe when LRU pages are reparented

Similar to the lruvec lock, we use the same approach to make the split
queue lock safe when LRU pages are reparented.

Signed-off-by: Muchun Song <[email protected]>
---
mm/huge_memory.c | 23 +++++++++++++++++++++++
1 file changed, 23 insertions(+)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index ea152bde441e..cc596034c487 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -543,9 +543,22 @@ static struct deferred_split *folio_split_queue_lock(struct folio *folio)
{
struct deferred_split *queue;

+ rcu_read_lock();
+retry:
queue = folio_split_queue(folio);
spin_lock(&queue->split_queue_lock);

+ if (unlikely(folio_split_queue_memcg(folio, queue) != folio_memcg(folio))) {
+ spin_unlock(&queue->split_queue_lock);
+ goto retry;
+ }
+
+ /*
+ * Preemption is disabled in the internal of spin_lock, which can serve
+ * as RCU read-side critical sections.
+ */
+ rcu_read_unlock();
+
return queue;
}

@@ -554,9 +567,19 @@ folio_split_queue_lock_irqsave(struct folio *folio, unsigned long *flags)
{
struct deferred_split *queue;

+ rcu_read_lock();
+retry:
queue = folio_split_queue(folio);
spin_lock_irqsave(&queue->split_queue_lock, *flags);

+ if (unlikely(folio_split_queue_memcg(folio, queue) != folio_memcg(folio))) {
+ spin_unlock_irqrestore(&queue->split_queue_lock, *flags);
+ goto retry;
+ }
+
+ /* See the comments in folio_split_queue_lock(). */
+ rcu_read_unlock();
+
return queue;
}

--
2.11.0


2022-05-24 14:46:37

by Muchun Song

Subject: [PATCH v4 07/11] mm: memcontrol: make all the callers of {folio,page}_memcg() safe

When we use the objcg APIs to charge the LRU pages, the page will not
hold a reference to the memcg associated with the page. So the caller
of {folio,page}_memcg() should hold an rcu read lock or obtain a
reference to the memcg associated with the page to protect the memcg
from being released. So introduce get_mem_cgroup_from_{page,folio}()
to obtain a reference to the memory cgroup associated with the page.

In this patch, make all the callers hold an rcu read lock or obtain a
reference to the memcg to protect the memcg from being released when
the LRU pages are reparented.

We do not need to adjust the callers of {folio,page}_memcg() during
mem_cgroup_move_task(), because the cgroup migration and memory
cgroup offlining are serialized by cgroup_mutex. In this routine, the
LRU pages cannot be reparented to their parent memory cgroup, so
{folio,page}_memcg() is stable and the memcg cannot be released.

This is a preparation for reparenting the LRU pages.
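
Two minimal usage sketches of the resulting rules (the callers are
hypothetical; the helpers are the ones touched by this patch):

```c
/* Hypothetical caller 1: take a reference when the memcg is used for
 * longer than an RCU read-side critical section allows. */
static void use_memcg_longer(struct folio *folio)
{
	struct mem_cgroup *memcg = get_mem_cgroup_from_folio(folio);

	/* ... memcg can be used across sleeping operations here ... */

	mem_cgroup_put(memcg);
}

/* Hypothetical caller 2: a short, non-sleeping access only needs
 * rcu_read_lock() to keep the memcg alive. */
static bool folio_memcg_is_online(struct folio *folio)
{
	struct mem_cgroup *memcg;
	bool online;

	rcu_read_lock();
	memcg = folio_memcg(folio);
	online = memcg && mem_cgroup_online(memcg);
	rcu_read_unlock();

	return online;
}
```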

Signed-off-by: Muchun Song <[email protected]>
---
fs/buffer.c | 4 +--
fs/fs-writeback.c | 23 ++++++++--------
include/linux/memcontrol.h | 51 ++++++++++++++++++++++++++++++++---
include/trace/events/writeback.h | 5 ++++
mm/memcontrol.c | 58 ++++++++++++++++++++++++++++++----------
mm/migrate.c | 4 +++
mm/page_io.c | 5 ++--
7 files changed, 117 insertions(+), 33 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index 2b5561ae5d0b..80975a457670 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -819,8 +819,7 @@ struct buffer_head *alloc_page_buffers(struct page *page, unsigned long size,
if (retry)
gfp |= __GFP_NOFAIL;

- /* The page lock pins the memcg */
- memcg = page_memcg(page);
+ memcg = get_mem_cgroup_from_page(page);
old_memcg = set_active_memcg(memcg);

head = NULL;
@@ -840,6 +839,7 @@ struct buffer_head *alloc_page_buffers(struct page *page, unsigned long size,
set_bh_page(bh, page, offset);
}
out:
+ mem_cgroup_put(memcg);
set_active_memcg(old_memcg);
return head;
/*
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 1fae0196292a..56612ace8778 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -243,15 +243,13 @@ void __inode_attach_wb(struct inode *inode, struct page *page)
if (inode_cgwb_enabled(inode)) {
struct cgroup_subsys_state *memcg_css;

- if (page) {
- memcg_css = mem_cgroup_css_from_page(page);
- wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC);
- } else {
- /* must pin memcg_css, see wb_get_create() */
+ /* must pin memcg_css, see wb_get_create() */
+ if (page)
+ memcg_css = get_mem_cgroup_css_from_page(page);
+ else
memcg_css = task_get_css(current, memory_cgrp_id);
- wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC);
- css_put(memcg_css);
- }
+ wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC);
+ css_put(memcg_css);
}

if (!wb)
@@ -868,16 +866,16 @@ void wbc_account_cgroup_owner(struct writeback_control *wbc, struct page *page,
if (!wbc->wb || wbc->no_cgroup_owner)
return;

- css = mem_cgroup_css_from_page(page);
+ css = get_mem_cgroup_css_from_page(page);
/* dead cgroups shouldn't contribute to inode ownership arbitration */
if (!(css->flags & CSS_ONLINE))
- return;
+ goto out;

id = css->id;

if (id == wbc->wb_id) {
wbc->wb_bytes += bytes;
- return;
+ goto out;
}

if (id == wbc->wb_lcand_id)
@@ -890,6 +888,9 @@ void wbc_account_cgroup_owner(struct writeback_control *wbc, struct page *page,
wbc->wb_tcand_bytes += bytes;
else
wbc->wb_tcand_bytes -= min(bytes, wbc->wb_tcand_bytes);
+
+out:
+ css_put(css);
}
EXPORT_SYMBOL_GPL(wbc_account_cgroup_owner);

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 8c2f1ba2f471..3a0e2592434e 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -373,7 +373,7 @@ static inline bool folio_memcg_kmem(struct folio *folio);
* a valid memcg, but can be atomically swapped to the parent memcg.
*
* The caller must ensure that the returned memcg won't be released:
- * e.g. acquire the rcu_read_lock or css_set_lock.
+ * e.g. acquire the rcu_read_lock or objcg_lock or cgroup_mutex.
*/
static inline struct mem_cgroup *obj_cgroup_memcg(struct obj_cgroup *objcg)
{
@@ -454,7 +454,37 @@ static inline struct mem_cgroup *page_memcg(struct page *page)
return folio_memcg(page_folio(page));
}

-/**
+/*
+ * get_mem_cgroup_from_folio - Obtain a reference on the memory cgroup
+ * associated with a folio.
+ * @folio: Pointer to the folio.
+ *
+ * Returns a pointer to the memory cgroup (and obtain a reference on it)
+ * associated with the folio, or NULL. This function assumes that the
+ * folio is known to have a proper memory cgroup pointer. It's not safe
+ * to call this function against some type of pages, e.g. slab pages or
+ * ex-slab pages.
+ */
+static inline struct mem_cgroup *get_mem_cgroup_from_folio(struct folio *folio)
+{
+ struct mem_cgroup *memcg;
+
+ rcu_read_lock();
+retry:
+ memcg = folio_memcg(folio);
+ if (unlikely(memcg && !css_tryget(&memcg->css)))
+ goto retry;
+ rcu_read_unlock();
+
+ return memcg;
+}
+
+static inline struct mem_cgroup *get_mem_cgroup_from_page(struct page *page)
+{
+ return get_mem_cgroup_from_folio(page_folio(page));
+}
+
+/*
* folio_memcg_rcu - Locklessly get the memory cgroup associated with a folio.
* @folio: Pointer to the folio.
*
@@ -873,7 +903,7 @@ static inline bool mm_match_cgroup(struct mm_struct *mm,
return match;
}

-struct cgroup_subsys_state *mem_cgroup_css_from_page(struct page *page);
+struct cgroup_subsys_state *get_mem_cgroup_css_from_page(struct page *page);
ino_t page_cgroup_ino(struct page *page);

static inline bool mem_cgroup_online(struct mem_cgroup *memcg)
@@ -1047,10 +1077,13 @@ static inline void count_memcg_events(struct mem_cgroup *memcg,
static inline void count_memcg_page_event(struct page *page,
enum vm_event_item idx)
{
- struct mem_cgroup *memcg = page_memcg(page);
+ struct mem_cgroup *memcg;

+ rcu_read_lock();
+ memcg = page_memcg(page);
if (memcg)
count_memcg_events(memcg, idx, 1);
+ rcu_read_unlock();
}

static inline void count_memcg_event_mm(struct mm_struct *mm,
@@ -1129,6 +1162,16 @@ static inline struct mem_cgroup *page_memcg(struct page *page)
return NULL;
}

+static inline struct mem_cgroup *get_mem_cgroup_from_folio(struct folio *folio)
+{
+ return NULL;
+}
+
+static inline struct mem_cgroup *get_mem_cgroup_from_page(struct page *page)
+{
+ return NULL;
+}
+
static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio)
{
WARN_ON_ONCE(!rcu_read_lock_held());
diff --git a/include/trace/events/writeback.h b/include/trace/events/writeback.h
index 86b2a82da546..cdb822339f13 100644
--- a/include/trace/events/writeback.h
+++ b/include/trace/events/writeback.h
@@ -258,6 +258,11 @@ TRACE_EVENT(track_foreign_dirty,
__entry->ino = inode ? inode->i_ino : 0;
__entry->memcg_id = wb->memcg_css->id;
__entry->cgroup_ino = __trace_wb_assign_cgroup(wb);
+ /*
+ * TP_fast_assign() is under preemption disabled which can
+ * serve as an RCU read-side critical section so that the
+ * memcg returned by folio_memcg() cannot be freed.
+ */
__entry->page_cgroup_ino = cgroup_ino(folio_memcg(folio)->css.cgroup);
),

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index b38a77f6696f..dcaf6cf5dc74 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -371,7 +371,7 @@ EXPORT_SYMBOL(memcg_kmem_enabled_key);
#endif

/**
- * mem_cgroup_css_from_page - css of the memcg associated with a page
+ * get_mem_cgroup_css_from_page - get css of the memcg associated with a page
* @page: page of interest
*
* If memcg is bound to the default hierarchy, css of the memcg associated
@@ -381,13 +381,15 @@ EXPORT_SYMBOL(memcg_kmem_enabled_key);
* If memcg is bound to a traditional hierarchy, the css of root_mem_cgroup
* is returned.
*/
-struct cgroup_subsys_state *mem_cgroup_css_from_page(struct page *page)
+struct cgroup_subsys_state *get_mem_cgroup_css_from_page(struct page *page)
{
struct mem_cgroup *memcg;

- memcg = page_memcg(page);
+ if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
+ return &root_mem_cgroup->css;

- if (!memcg || !cgroup_subsys_on_dfl(memory_cgrp_subsys))
+ memcg = get_mem_cgroup_from_page(page);
+ if (!memcg)
memcg = root_mem_cgroup;

return &memcg->css;
@@ -770,13 +772,13 @@ void __mod_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
void __mod_lruvec_page_state(struct page *page, enum node_stat_item idx,
int val)
{
- struct page *head = compound_head(page); /* rmap on tail pages */
+ struct folio *folio = page_folio(page); /* rmap on tail pages */
struct mem_cgroup *memcg;
pg_data_t *pgdat = page_pgdat(page);
struct lruvec *lruvec;

rcu_read_lock();
- memcg = page_memcg(head);
+ memcg = folio_memcg(folio);
/* Untracked pages have no memcg, no lruvec. Update only the node */
if (!memcg) {
rcu_read_unlock();
@@ -2058,7 +2060,9 @@ void folio_memcg_lock(struct folio *folio)
* The RCU lock is held throughout the transaction. The fast
* path can get away without acquiring the memcg->move_lock
* because page moving starts with an RCU grace period.
- */
+ *
+ * The RCU lock also protects the memcg from being freed.
+ */
rcu_read_lock();

if (mem_cgroup_disabled())
@@ -3296,7 +3300,7 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size)
void split_page_memcg(struct page *head, unsigned int nr)
{
struct folio *folio = page_folio(head);
- struct mem_cgroup *memcg = folio_memcg(folio);
+ struct mem_cgroup *memcg = get_mem_cgroup_from_folio(folio);
int i;

if (mem_cgroup_disabled() || !memcg)
@@ -3309,6 +3313,8 @@ void split_page_memcg(struct page *head, unsigned int nr)
obj_cgroup_get_many(__folio_objcg(folio), nr - 1);
else
css_get_many(&memcg->css, nr - 1);
+
+ css_put(&memcg->css);
}

#ifdef CONFIG_MEMCG_SWAP
@@ -4511,7 +4517,7 @@ void mem_cgroup_wb_stats(struct bdi_writeback *wb, unsigned long *pfilepages,
void mem_cgroup_track_foreign_dirty_slowpath(struct folio *folio,
struct bdi_writeback *wb)
{
- struct mem_cgroup *memcg = folio_memcg(folio);
+ struct mem_cgroup *memcg = get_mem_cgroup_from_folio(folio);
struct memcg_cgwb_frn *frn;
u64 now = get_jiffies_64();
u64 oldest_at = now;
@@ -4558,6 +4564,7 @@ void mem_cgroup_track_foreign_dirty_slowpath(struct folio *folio,
frn->memcg_id = wb->memcg_css->id;
frn->at = now;
}
+ css_put(&memcg->css);
}

/* issue foreign writeback flushes for recorded foreign dirtying events */
@@ -6092,6 +6099,14 @@ static void mem_cgroup_move_charge(void)
atomic_dec(&mc.from->moving_account);
}

+/*
+ * The cgroup migration and memory cgroup offlining are serialized by
+ * @cgroup_mutex. If we reach here, it means that the LRU pages cannot
+ * be reparented to its parent memory cgroup. So during the whole process
+ * of mem_cgroup_move_task(), page_memcg(page) is stable. So we do not
+ * need to worry about the memcg (returned from page_memcg()) being
+ * released even if we do not hold an rcu read lock.
+ */
static void mem_cgroup_move_task(void)
{
if (mc.to) {
@@ -6895,7 +6910,7 @@ void mem_cgroup_migrate(struct folio *old, struct folio *new)
if (folio_memcg(new))
return;

- memcg = folio_memcg(old);
+ memcg = get_mem_cgroup_from_folio(old);
VM_WARN_ON_ONCE_FOLIO(!memcg, old);
if (!memcg)
return;
@@ -6914,6 +6929,8 @@ void mem_cgroup_migrate(struct folio *old, struct folio *new)
mem_cgroup_charge_statistics(memcg, nr_pages);
memcg_check_events(memcg, folio_nid(new));
local_irq_restore(flags);
+
+ css_put(&memcg->css);
}

DEFINE_STATIC_KEY_FALSE(memcg_sockets_enabled_key);
@@ -7100,6 +7117,10 @@ void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry)
if (cgroup_subsys_on_dfl(memory_cgrp_subsys))
return;

+ /*
+ * Interrupts should be disabled by the caller (see the comments below),
+ * which can serve as RCU read-side critical sections.
+ */
memcg = folio_memcg(folio);

VM_WARN_ON_ONCE_FOLIO(!memcg, folio);
@@ -7165,15 +7186,16 @@ int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
return 0;

+ rcu_read_lock();
memcg = page_memcg(page);

VM_WARN_ON_ONCE_PAGE(!memcg, page);
if (!memcg)
- return 0;
+ goto out;

if (!entry.val) {
memcg_memory_event(memcg, MEMCG_SWAP_FAIL);
- return 0;
+ goto out;
}

memcg = mem_cgroup_id_get_online(memcg);
@@ -7183,6 +7205,7 @@ int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
memcg_memory_event(memcg, MEMCG_SWAP_MAX);
memcg_memory_event(memcg, MEMCG_SWAP_FAIL);
mem_cgroup_id_put(memcg);
+ rcu_read_unlock();
return -ENOMEM;
}

@@ -7192,6 +7215,8 @@ int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
oldid = swap_cgroup_record(entry, mem_cgroup_id(memcg), nr_pages);
VM_BUG_ON_PAGE(oldid, page);
mod_memcg_state(memcg, MEMCG_SWAP, nr_pages);
+out:
+ rcu_read_unlock();

return 0;
}
@@ -7246,17 +7271,22 @@ bool mem_cgroup_swap_full(struct page *page)
if (cgroup_memory_noswap || !cgroup_subsys_on_dfl(memory_cgrp_subsys))
return false;

+ rcu_read_lock();
memcg = page_memcg(page);
if (!memcg)
- return false;
+ goto out;

for (; memcg != root_mem_cgroup; memcg = parent_mem_cgroup(memcg)) {
unsigned long usage = page_counter_read(&memcg->swap);

if (usage * 2 >= READ_ONCE(memcg->swap.high) ||
- usage * 2 >= READ_ONCE(memcg->swap.max))
+ usage * 2 >= READ_ONCE(memcg->swap.max)) {
+ rcu_read_unlock();
return true;
+ }
}
+out:
+ rcu_read_unlock();

return false;
}
diff --git a/mm/migrate.c b/mm/migrate.c
index 6c31ee1e1c9b..59e97a8a64a0 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -430,6 +430,10 @@ int folio_migrate_mapping(struct address_space *mapping,
struct lruvec *old_lruvec, *new_lruvec;
struct mem_cgroup *memcg;

+ /*
+ * Irq is disabled, which can serve as RCU read-side critical
+ * sections.
+ */
memcg = folio_memcg(folio);
old_lruvec = mem_cgroup_lruvec(memcg, oldzone->zone_pgdat);
new_lruvec = mem_cgroup_lruvec(memcg, newzone->zone_pgdat);
diff --git a/mm/page_io.c b/mm/page_io.c
index 89fbf3cae30f..a0d9cd68e87a 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -221,13 +221,14 @@ static void bio_associate_blkg_from_page(struct bio *bio, struct page *page)
struct cgroup_subsys_state *css;
struct mem_cgroup *memcg;

+ rcu_read_lock();
memcg = page_memcg(page);
if (!memcg)
- return;
+ goto out;

- rcu_read_lock();
css = cgroup_e_css(memcg->css.cgroup, &io_cgrp_subsys);
bio_associate_blkg_from_css(bio, css);
+out:
rcu_read_unlock();
}
#else
--
2.11.0


2022-05-24 15:03:34

by Muchun Song

Subject: [PATCH v4 08/11] mm: memcontrol: introduce memcg_reparent_ops

In the previous patches, we have seen how to make the lruvec lock safe
when LRU pages are reparented. We need to do something like the following:

memcg_reparent_objcgs(memcg)
1) lock
// lruvec belongs to memcg and lruvec_parent belongs to parent memcg.
spin_lock(&lruvec->lru_lock);
spin_lock(&lruvec_parent->lru_lock);

2) relocate from current memcg to its parent
// Move all the pages from the lruvec list to the parent lruvec list.

3) unlock
spin_unlock(&lruvec_parent->lru_lock);
spin_unlock(&lruvec->lru_lock);

Apart from the page lruvec lock, the deferred split queue lock (THP
only) also needs something similar. So extract the three necessary
steps out of memcg_reparent_objcgs():

memcg_reparent_objcgs(memcg)
1) lock
memcg_reparent_ops->lock(memcg, parent);

2) relocate
memcg_reparent_ops->relocate(memcg, parent);

3) unlock
memcg_reparent_ops->unlock(memcg, parent);

Now there are two different locks (the lruvec lock and the deferred
split queue lock) that need to use this infrastructure. In the next
patch, we will use these APIs to make those locks safe when the LRU
pages are reparented, roughly as sketched below.
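
For illustration only, this is roughly how another lock can be plugged
into the infrastructure (the lruvec callbacks below are empty
placeholders; the real implementation comes in the next patch):

```c
/* Placeholder callbacks, for illustration only. */
static void lruvec_reparent_lock(struct mem_cgroup *src, struct mem_cgroup *dst)
{
	/* Lock the relevant per-node lruvecs of src and dst. */
}

static void lruvec_reparent_relocate(struct mem_cgroup *src, struct mem_cgroup *dst)
{
	/* Splice src's LRU lists onto dst's while both locks are held. */
}

static void lruvec_reparent_unlock(struct mem_cgroup *src, struct mem_cgroup *dst)
{
	/* Unlock in the reverse order. */
}

static DEFINE_MEMCG_REPARENT_OPS(lruvec);

/* The registration array in memcontrol.c then grows to: */
static const struct memcg_reparent_ops *memcg_reparent_ops[] = {
	&memcg_objcg_reparent_ops,
	&memcg_lruvec_reparent_ops,
};
```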

Signed-off-by: Muchun Song <[email protected]>
---
include/linux/memcontrol.h | 20 +++++++++++++++
mm/memcontrol.c | 62 ++++++++++++++++++++++++++++++++++++----------
2 files changed, 69 insertions(+), 13 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 3a0e2592434e..e806e743a1fc 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -347,6 +347,26 @@ struct mem_cgroup {
struct mem_cgroup_per_node *nodeinfo[];
};

+struct memcg_reparent_ops {
+ /*
+ * Note that interrupt is disabled before calling those callbacks,
+ * so the interrupt should remain disabled when leaving those callbacks.
+ */
+ void (*lock)(struct mem_cgroup *src, struct mem_cgroup *dst);
+ void (*relocate)(struct mem_cgroup *src, struct mem_cgroup *dst);
+ void (*unlock)(struct mem_cgroup *src, struct mem_cgroup *dst);
+};
+
+#define DEFINE_MEMCG_REPARENT_OPS(name) \
+ const struct memcg_reparent_ops memcg_##name##_reparent_ops = { \
+ .lock = name##_reparent_lock, \
+ .relocate = name##_reparent_relocate, \
+ .unlock = name##_reparent_unlock, \
+ }
+
+#define DECLARE_MEMCG_REPARENT_OPS(name) \
+ extern const struct memcg_reparent_ops memcg_##name##_reparent_ops
+
/*
* size of first charge trial. "32" comes from vmscan.c's magic value.
* TODO: maybe necessary to use big numbers in big irons.
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index dcaf6cf5dc74..7d62764c6380 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -337,24 +337,60 @@ static struct obj_cgroup *obj_cgroup_alloc(void)
return objcg;
}

-static void memcg_reparent_objcgs(struct mem_cgroup *memcg)
+static void objcg_reparent_lock(struct mem_cgroup *src, struct mem_cgroup *dst)
+{
+ spin_lock(&objcg_lock);
+}
+
+static void objcg_reparent_relocate(struct mem_cgroup *src, struct mem_cgroup *dst)
{
struct obj_cgroup *objcg, *iter;
- struct mem_cgroup *parent = parent_mem_cgroup(memcg);

- objcg = rcu_replace_pointer(memcg->objcg, NULL, true);
+ objcg = rcu_replace_pointer(src->objcg, NULL, true);
+ /* 1) Ready to reparent active objcg. */
+ list_add(&objcg->list, &src->objcg_list);
+ /* 2) Reparent active objcg and already reparented objcgs to dst. */
+ list_for_each_entry(iter, &src->objcg_list, list)
+ WRITE_ONCE(iter->memcg, dst);
+ /* 3) Move already reparented objcgs to the dst's list */
+ list_splice(&src->objcg_list, &dst->objcg_list);
+}
+
+static void objcg_reparent_unlock(struct mem_cgroup *src, struct mem_cgroup *dst)
+{
+ spin_unlock(&objcg_lock);
+}

- spin_lock_irq(&objcg_lock);
+static DEFINE_MEMCG_REPARENT_OPS(objcg);

- /* 1) Ready to reparent active objcg. */
- list_add(&objcg->list, &memcg->objcg_list);
- /* 2) Reparent active objcg and already reparented objcgs to parent. */
- list_for_each_entry(iter, &memcg->objcg_list, list)
- WRITE_ONCE(iter->memcg, parent);
- /* 3) Move already reparented objcgs to the parent's list */
- list_splice(&memcg->objcg_list, &parent->objcg_list);
-
- spin_unlock_irq(&objcg_lock);
+static const struct memcg_reparent_ops *memcg_reparent_ops[] = {
+ &memcg_objcg_reparent_ops,
+};
+
+#define DEFINE_MEMCG_REPARENT_FUNC(phase) \
+ static void memcg_reparent_##phase(struct mem_cgroup *src, \
+ struct mem_cgroup *dst) \
+ { \
+ int i; \
+ \
+ for (i = 0; i < ARRAY_SIZE(memcg_reparent_ops); i++) \
+ memcg_reparent_ops[i]->phase(src, dst); \
+ }
+
+DEFINE_MEMCG_REPARENT_FUNC(lock)
+DEFINE_MEMCG_REPARENT_FUNC(relocate)
+DEFINE_MEMCG_REPARENT_FUNC(unlock)
+
+static void memcg_reparent_objcgs(struct mem_cgroup *src)
+{
+ struct mem_cgroup *dst = parent_mem_cgroup(src);
+ struct obj_cgroup *objcg = rcu_dereference_protected(src->objcg, true);
+
+ local_irq_disable();
+ memcg_reparent_lock(src, dst);
+ memcg_reparent_relocate(src, dst);
+ memcg_reparent_unlock(src, dst);
+ local_irq_enable();

percpu_ref_kill(&objcg->refcnt);
}
--
2.11.0


2022-05-24 16:33:56

by Muchun Song

Subject: [PATCH v4 04/11] mm: vmscan: rework move_pages_to_lru()

In a later patch, we will reparent the LRU pages. The pages that are
being moved to the appropriate LRU list can be reparented during
move_pages_to_lru(), so it is wrong for the caller to hold a lruvec
lock. Instead, use the more general folio_lruvec_relock_irq()
interface to acquire the correct lruvec lock, as sketched below.
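
A condensed sketch of the reworked loop (the unevictable putback,
compound page freeing and statistics of the real function are omitted
here):

```c
/* Condensed sketch of the new locking pattern in move_pages_to_lru(). */
static unsigned int move_pages_to_lru_sketch(struct list_head *list)
{
	unsigned int nr_moved = 0;
	struct lruvec *lruvec = NULL;

	while (!list_empty(list)) {
		struct folio *folio = lru_to_folio(list);

		/* Lock (or switch to) the lruvec this folio belongs to now. */
		lruvec = folio_lruvec_relock_irq(folio, lruvec);
		list_del(&folio->lru);
		folio_set_lru(folio);
		lruvec_add_folio(lruvec, folio);
		nr_moved += folio_nr_pages(folio);
	}
	if (lruvec)
		unlock_page_lruvec_irq(lruvec);

	return nr_moved;
}
```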

Signed-off-by: Muchun Song <[email protected]>
---
mm/vmscan.c | 49 +++++++++++++++++++++++++------------------------
1 file changed, 25 insertions(+), 24 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 1678802e03e7..761d5e0dd78d 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2230,23 +2230,28 @@ static int too_many_isolated(struct pglist_data *pgdat, int file,
* move_pages_to_lru() moves pages from private @list to appropriate LRU list.
* On return, @list is reused as a list of pages to be freed by the caller.
*
- * Returns the number of pages moved to the given lruvec.
+ * Returns the number of pages moved to the appropriate LRU list.
+ *
+ * Note: The caller must not hold any lruvec lock.
*/
-static unsigned int move_pages_to_lru(struct lruvec *lruvec,
- struct list_head *list)
+static unsigned int move_pages_to_lru(struct list_head *list)
{
- int nr_pages, nr_moved = 0;
+ int nr_moved = 0;
+ struct lruvec *lruvec = NULL;
LIST_HEAD(pages_to_free);
- struct page *page;

while (!list_empty(list)) {
- page = lru_to_page(list);
+ int nr_pages;
+ struct folio *folio = lru_to_folio(list);
+ struct page *page = &folio->page;
+
+ lruvec = folio_lruvec_relock_irq(folio, lruvec);
VM_BUG_ON_PAGE(PageLRU(page), page);
list_del(&page->lru);
if (unlikely(!page_evictable(page))) {
- spin_unlock_irq(&lruvec->lru_lock);
+ unlock_page_lruvec_irq(lruvec);
putback_lru_page(page);
- spin_lock_irq(&lruvec->lru_lock);
+ lruvec = NULL;
continue;
}

@@ -2267,20 +2272,16 @@ static unsigned int move_pages_to_lru(struct lruvec *lruvec,
__clear_page_lru_flags(page);

if (unlikely(PageCompound(page))) {
- spin_unlock_irq(&lruvec->lru_lock);
+ unlock_page_lruvec_irq(lruvec);
destroy_compound_page(page);
- spin_lock_irq(&lruvec->lru_lock);
+ lruvec = NULL;
} else
list_add(&page->lru, &pages_to_free);

continue;
}

- /*
- * All pages were isolated from the same lruvec (and isolation
- * inhibits memcg migration).
- */
- VM_BUG_ON_PAGE(!folio_matches_lruvec(page_folio(page), lruvec), page);
+ VM_BUG_ON_PAGE(!folio_matches_lruvec(folio, lruvec), page);
add_page_to_lru_list(page, lruvec);
nr_pages = thp_nr_pages(page);
nr_moved += nr_pages;
@@ -2288,6 +2289,8 @@ static unsigned int move_pages_to_lru(struct lruvec *lruvec,
workingset_age_nonresident(lruvec, nr_pages);
}

+ if (lruvec)
+ unlock_page_lruvec_irq(lruvec);
/*
* To save our caller's stack, now use input list for pages to free.
*/
@@ -2359,16 +2362,16 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,

nr_reclaimed = shrink_page_list(&page_list, pgdat, sc, &stat, false);

- spin_lock_irq(&lruvec->lru_lock);
- move_pages_to_lru(lruvec, &page_list);
+ move_pages_to_lru(&page_list);

+ local_irq_disable();
__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
item = current_is_kswapd() ? PGSTEAL_KSWAPD : PGSTEAL_DIRECT;
if (!cgroup_reclaim(sc))
__count_vm_events(item, nr_reclaimed);
__count_memcg_events(lruvec_memcg(lruvec), item, nr_reclaimed);
__count_vm_events(PGSTEAL_ANON + file, nr_reclaimed);
- spin_unlock_irq(&lruvec->lru_lock);
+ local_irq_enable();

lru_note_cost(lruvec, file, stat.nr_pageout);
mem_cgroup_uncharge_list(&page_list);
@@ -2498,18 +2501,16 @@ static void shrink_active_list(unsigned long nr_to_scan,
/*
* Move pages back to the lru list.
*/
- spin_lock_irq(&lruvec->lru_lock);
-
- nr_activate = move_pages_to_lru(lruvec, &l_active);
- nr_deactivate = move_pages_to_lru(lruvec, &l_inactive);
+ nr_activate = move_pages_to_lru(&l_active);
+ nr_deactivate = move_pages_to_lru(&l_inactive);
/* Keep all free pages in l_active list */
list_splice(&l_inactive, &l_active);

+ local_irq_disable();
__count_vm_events(PGDEACTIVATE, nr_deactivate);
__count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_deactivate);
-
__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
- spin_unlock_irq(&lruvec->lru_lock);
+ local_irq_enable();

mem_cgroup_uncharge_list(&l_active);
free_unref_page_list(&l_active);
--
2.11.0


2022-05-24 19:53:36

by Muchun Song

[permalink] [raw]
Subject: [PATCH v4 09/11] mm: memcontrol: use obj_cgroup APIs to charge the LRU pages

We will reuse the obj_cgroup APIs to charge the LRU pages. After this,
page->memcg_data will have two different meanings.

- For slab pages, page->memcg_data points to an object cgroups
  vector.

- For kmem pages (excluding slab pages) and LRU pages,
  page->memcg_data points to an object cgroup.

In this patch, we reuse the obj_cgroup APIs to charge LRU pages. As a
result, the page cache can no longer pin the original memory cgroup in
memory through long-living objects.

At the same time, we also changed the rules of page to objcg and page to
memcg binding stability. The new rules are as follows.

For a page, any of the following ensures page and objcg binding stability:

- the page lock
- LRU isolation
- lock_page_memcg()
- exclusive reference

Based on the stable binding of page and objcg, for a page, any of the
following ensures page and memcg binding stability:

- css_set_lock
- cgroup_mutex
- the lruvec lock
- the split queue lock (only THP page)

If the caller only wants to ensure that the page counters of the memcg
are updated correctly, the stability of the page and objcg binding is
sufficient.
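
For illustration only (a hedged sketch, not part of the diff below), a
caller that holds the lruvec lock can rely on these rules to read a
stable memcg without taking an extra reference:

	struct lruvec *lruvec;
	struct mem_cgroup *memcg;

	lruvec = folio_lruvec_lock_irq(folio);
	/*
	 * Reparenting also takes this lruvec lock, so the
	 * folio -> objcg -> memcg binding is stable here.
	 */
	memcg = folio_memcg(folio);
	/* ... use memcg for this folio ... */
	unlock_page_lruvec_irq(lruvec);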

Signed-off-by: Muchun Song <[email protected]>
---
include/linux/memcontrol.h | 94 ++++++---------
mm/huge_memory.c | 34 ++++++
mm/memcontrol.c | 287 ++++++++++++++++++++++++++++++++-------------
3 files changed, 277 insertions(+), 138 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index e806e743a1fc..237ae86f8d8e 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -386,8 +386,6 @@ enum page_memcg_data_flags {

#define MEMCG_DATA_FLAGS_MASK (__NR_MEMCG_DATA_FLAGS - 1)

-static inline bool folio_memcg_kmem(struct folio *folio);
-
/*
* After the initialization objcg->memcg is always pointing at
* a valid memcg, but can be atomically swapped to the parent memcg.
@@ -401,43 +399,19 @@ static inline struct mem_cgroup *obj_cgroup_memcg(struct obj_cgroup *objcg)
}

/*
- * __folio_memcg - Get the memory cgroup associated with a non-kmem folio
- * @folio: Pointer to the folio.
- *
- * Returns a pointer to the memory cgroup associated with the folio,
- * or NULL. This function assumes that the folio is known to have a
- * proper memory cgroup pointer. It's not safe to call this function
- * against some type of folios, e.g. slab folios or ex-slab folios or
- * kmem folios.
- */
-static inline struct mem_cgroup *__folio_memcg(struct folio *folio)
-{
- unsigned long memcg_data = folio->memcg_data;
-
- VM_BUG_ON_FOLIO(folio_test_slab(folio), folio);
- VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJCGS, folio);
- VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_KMEM, folio);
-
- return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
-}
-
-/*
- * __folio_objcg - get the object cgroup associated with a kmem folio.
+ * folio_objcg - get the object cgroup associated with a folio.
* @folio: Pointer to the folio.
*
* Returns a pointer to the object cgroup associated with the folio,
* or NULL. This function assumes that the folio is known to have a
- * proper object cgroup pointer. It's not safe to call this function
- * against some type of folios, e.g. slab folios or ex-slab folios or
- * LRU folios.
+ * proper object cgroup pointer.
*/
-static inline struct obj_cgroup *__folio_objcg(struct folio *folio)
+static inline struct obj_cgroup *folio_objcg(struct folio *folio)
{
unsigned long memcg_data = folio->memcg_data;

VM_BUG_ON_FOLIO(folio_test_slab(folio), folio);
VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJCGS, folio);
- VM_BUG_ON_FOLIO(!(memcg_data & MEMCG_DATA_KMEM), folio);

return (struct obj_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
}
@@ -451,7 +425,7 @@ static inline struct obj_cgroup *__folio_objcg(struct folio *folio)
* proper memory cgroup pointer. It's not safe to call this function
* against some type of folios, e.g. slab folios or ex-slab folios.
*
- * For a non-kmem folio any of the following ensures folio and memcg binding
+ * For a folio any of the following ensures folio and memcg binding
* stability:
*
* - the folio lock
@@ -459,14 +433,28 @@ static inline struct obj_cgroup *__folio_objcg(struct folio *folio)
* - lock_page_memcg()
* - exclusive reference
*
- * For a kmem folio a caller should hold an rcu read lock to protect memcg
- * associated with a kmem folio from being released.
+ * Based on the stable binding of folio and objcg, for a folio any of the
+ * following ensures folio and memcg binding stability:
+ *
+ * - css_set_lock
+ * - cgroup_mutex
+ * - the lruvec lock
+ * - the split queue lock (only THP page)
+ *
+ * If the caller only wants to ensure that the page counters of the memcg
+ * are updated correctly, the stability of the folio and objcg binding is
+ * sufficient.
+ *
+ * A caller should hold an rcu read lock (in addition, regions of code across
+ * which interrupts, preemption, or softirqs have been disabled also serve as
+ * RCU read-side critical sections) to protect the memcg associated with a
+ * folio from being released.
*/
static inline struct mem_cgroup *folio_memcg(struct folio *folio)
{
- if (folio_memcg_kmem(folio))
- return obj_cgroup_memcg(__folio_objcg(folio));
- return __folio_memcg(folio);
+ struct obj_cgroup *objcg = folio_objcg(folio);
+
+ return objcg ? obj_cgroup_memcg(objcg) : NULL;
}

static inline struct mem_cgroup *page_memcg(struct page *page)
@@ -484,6 +472,8 @@ static inline struct mem_cgroup *page_memcg(struct page *page)
* folio is known to have a proper memory cgroup pointer. It's not safe
* to call this function against some type of pages, e.g. slab pages or
* ex-slab pages.
+ *
+ * The page and objcg or memcg binding rules can refer to folio_memcg().
*/
static inline struct mem_cgroup *get_mem_cgroup_from_folio(struct folio *folio)
{
@@ -514,22 +504,20 @@ static inline struct mem_cgroup *get_mem_cgroup_from_page(struct page *page)
*
* Return: A pointer to the memory cgroup associated with the folio,
* or NULL.
+ *
+ * The folio and objcg or memcg binding rules can refer to folio_memcg().
*/
static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio)
{
unsigned long memcg_data = READ_ONCE(folio->memcg_data);
+ struct obj_cgroup *objcg;

VM_BUG_ON_FOLIO(folio_test_slab(folio), folio);
WARN_ON_ONCE(!rcu_read_lock_held());

- if (memcg_data & MEMCG_DATA_KMEM) {
- struct obj_cgroup *objcg;
-
- objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
- return obj_cgroup_memcg(objcg);
- }
+ objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);

- return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
+ return objcg ? obj_cgroup_memcg(objcg) : NULL;
}

/*
@@ -542,16 +530,10 @@ static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio)
* has an associated memory cgroup pointer or an object cgroups vector or
* an object cgroup.
*
- * For a non-kmem page any of the following ensures page and memcg binding
- * stability:
+ * The page and objcg or memcg binding rules can refer to page_memcg().
*
- * - the page lock
- * - LRU isolation
- * - lock_page_memcg()
- * - exclusive reference
- *
- * For a kmem page a caller should hold an rcu read lock to protect memcg
- * associated with a kmem page from being released.
+ * A caller should hold an rcu read lock to protect memcg associated with a
+ * page from being released.
*/
static inline struct mem_cgroup *page_memcg_check(struct page *page)
{
@@ -560,18 +542,14 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page)
* for slab pages, READ_ONCE() should be used here.
*/
unsigned long memcg_data = READ_ONCE(page->memcg_data);
+ struct obj_cgroup *objcg;

if (memcg_data & MEMCG_DATA_OBJCGS)
return NULL;

- if (memcg_data & MEMCG_DATA_KMEM) {
- struct obj_cgroup *objcg;
-
- objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
- return obj_cgroup_memcg(objcg);
- }
+ objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);

- return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
+ return objcg ? obj_cgroup_memcg(objcg) : NULL;
}

static inline struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index cc596034c487..ec98f346cae6 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -503,6 +503,8 @@ pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma)
}

#ifdef CONFIG_MEMCG
+static struct shrinker deferred_split_shrinker;
+
static inline struct mem_cgroup *folio_split_queue_memcg(struct folio *folio,
struct deferred_split *queue)
{
@@ -519,6 +521,38 @@ static inline struct deferred_split *folio_memcg_split_queue(struct folio *folio

return memcg ? &memcg->deferred_split_queue : NULL;
}
+
+static void thp_reparent_lock(struct mem_cgroup *src, struct mem_cgroup *dst)
+{
+ spin_lock(&src->deferred_split_queue.split_queue_lock);
+ spin_lock(&dst->deferred_split_queue.split_queue_lock);
+}
+
+static void thp_reparent_relocate(struct mem_cgroup *src, struct mem_cgroup *dst)
+{
+ int nid;
+ struct deferred_split *src_queue, *dst_queue;
+
+ src_queue = &src->deferred_split_queue;
+ dst_queue = &dst->deferred_split_queue;
+
+ if (!src_queue->split_queue_len)
+ return;
+
+ list_splice_tail_init(&src_queue->split_queue, &dst_queue->split_queue);
+ dst_queue->split_queue_len += src_queue->split_queue_len;
+ src_queue->split_queue_len = 0;
+
+ for_each_node(nid)
+ set_shrinker_bit(dst, nid, deferred_split_shrinker.id);
+}
+
+static void thp_reparent_unlock(struct mem_cgroup *src, struct mem_cgroup *dst)
+{
+ spin_unlock(&dst->deferred_split_queue.split_queue_lock);
+ spin_unlock(&src->deferred_split_queue.split_queue_lock);
+}
+DEFINE_MEMCG_REPARENT_OPS(thp);
#else
static inline struct mem_cgroup *folio_split_queue_memcg(struct folio *folio,
struct deferred_split *queue)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 7d62764c6380..1a35f7fde3ed 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -76,6 +76,7 @@ struct cgroup_subsys memory_cgrp_subsys __read_mostly;
EXPORT_SYMBOL(memory_cgrp_subsys);

struct mem_cgroup *root_mem_cgroup __read_mostly;
+static struct obj_cgroup *root_obj_cgroup __read_mostly;

/* Active memory cgroup to use from an interrupt context */
DEFINE_PER_CPU(struct mem_cgroup *, int_active_memcg);
@@ -256,6 +257,11 @@ struct mem_cgroup *vmpressure_to_memcg(struct vmpressure *vmpr)

static DEFINE_SPINLOCK(objcg_lock);

+static inline bool obj_cgroup_is_root(struct obj_cgroup *objcg)
+{
+ return objcg == root_obj_cgroup;
+}
+
#ifdef CONFIG_MEMCG_KMEM
bool mem_cgroup_kmem_disabled(void)
{
@@ -363,8 +369,75 @@ static void objcg_reparent_unlock(struct mem_cgroup *src, struct mem_cgroup *dst

static DEFINE_MEMCG_REPARENT_OPS(objcg);

+static void lruvec_reparent_lock(struct mem_cgroup *src, struct mem_cgroup *dst)
+{
+ int i;
+
+ for_each_node(i) {
+ spin_lock(&mem_cgroup_lruvec(src, NODE_DATA(i))->lru_lock);
+ spin_lock(&mem_cgroup_lruvec(dst, NODE_DATA(i))->lru_lock);
+ }
+}
+
+static void lruvec_reparent_lru(struct lruvec *src, struct lruvec *dst,
+ enum lru_list lru)
+{
+ int zid;
+ struct mem_cgroup_per_node *mz_src, *mz_dst;
+
+ mz_src = container_of(src, struct mem_cgroup_per_node, lruvec);
+ mz_dst = container_of(dst, struct mem_cgroup_per_node, lruvec);
+
+ if (lru != LRU_UNEVICTABLE)
+ list_splice_tail_init(&src->lists[lru], &dst->lists[lru]);
+
+ for (zid = 0; zid < MAX_NR_ZONES; zid++) {
+ mz_dst->lru_zone_size[zid][lru] += mz_src->lru_zone_size[zid][lru];
+ mz_src->lru_zone_size[zid][lru] = 0;
+ }
+}
+
+static void lruvec_reparent_relocate(struct mem_cgroup *src, struct mem_cgroup *dst)
+{
+ int i;
+
+ for_each_node(i) {
+ enum lru_list lru;
+ struct lruvec *src_lruvec, *dst_lruvec;
+
+ src_lruvec = mem_cgroup_lruvec(src, NODE_DATA(i));
+ dst_lruvec = mem_cgroup_lruvec(dst, NODE_DATA(i));
+
+ dst_lruvec->anon_cost += src_lruvec->anon_cost;
+ dst_lruvec->file_cost += src_lruvec->file_cost;
+
+ for_each_lru(lru)
+ lruvec_reparent_lru(src_lruvec, dst_lruvec, lru);
+ }
+}
+
+static void lruvec_reparent_unlock(struct mem_cgroup *src, struct mem_cgroup *dst)
+{
+ int i;
+
+ for_each_node(i) {
+ spin_unlock(&mem_cgroup_lruvec(dst, NODE_DATA(i))->lru_lock);
+ spin_unlock(&mem_cgroup_lruvec(src, NODE_DATA(i))->lru_lock);
+ }
+}
+
+static DEFINE_MEMCG_REPARENT_OPS(lruvec);
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+DECLARE_MEMCG_REPARENT_OPS(thp);
+#endif
+
static const struct memcg_reparent_ops *memcg_reparent_ops[] = {
&memcg_objcg_reparent_ops,
+ &memcg_lruvec_reparent_ops,
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ &memcg_thp_reparent_ops,
+#endif
};

#define DEFINE_MEMCG_REPARENT_FUNC(phase) \
@@ -2827,18 +2900,18 @@ static inline void cancel_charge(struct mem_cgroup *memcg, unsigned int nr_pages
page_counter_uncharge(&memcg->memsw, nr_pages);
}

-static void commit_charge(struct folio *folio, struct mem_cgroup *memcg)
+static void commit_charge(struct folio *folio, struct obj_cgroup *objcg)
{
- VM_BUG_ON_FOLIO(folio_memcg(folio), folio);
+ VM_BUG_ON_FOLIO(folio_objcg(folio), folio);
/*
- * Any of the following ensures page's memcg stability:
+ * Any of the following ensures page's objcg stability:
*
* - the page lock
* - LRU isolation
* - lock_page_memcg()
* - exclusive reference
*/
- folio->memcg_data = (unsigned long)memcg;
+ folio->memcg_data = (unsigned long)objcg;
}

#ifdef CONFIG_MEMCG_KMEM
@@ -2955,6 +3028,21 @@ struct mem_cgroup *mem_cgroup_from_obj(void *p)
return page_memcg_check(folio_page(folio, 0));
}

+static struct obj_cgroup *__get_obj_cgroup_from_memcg(struct mem_cgroup *memcg)
+{
+ struct obj_cgroup *objcg = NULL;
+
+ rcu_read_lock();
+ for (; memcg; memcg = parent_mem_cgroup(memcg)) {
+ objcg = rcu_dereference(memcg->objcg);
+ if (objcg && obj_cgroup_tryget(objcg))
+ break;
+ }
+ rcu_read_unlock();
+
+ return objcg;
+}
+
__always_inline struct obj_cgroup *get_obj_cgroup_from_current(void)
{
struct obj_cgroup *objcg = NULL;
@@ -2969,12 +3057,15 @@ __always_inline struct obj_cgroup *get_obj_cgroup_from_current(void)
else
memcg = mem_cgroup_from_task(current);

- for (; memcg != root_mem_cgroup; memcg = parent_mem_cgroup(memcg)) {
- objcg = rcu_dereference(memcg->objcg);
- if (objcg && obj_cgroup_tryget(objcg))
- break;
+ if (mem_cgroup_is_root(memcg))
+ goto out;
+
+ objcg = __get_obj_cgroup_from_memcg(memcg);
+ if (obj_cgroup_is_root(objcg)) {
+ obj_cgroup_put(objcg);
objcg = NULL;
}
+out:
rcu_read_unlock();

return objcg;
@@ -3071,13 +3162,13 @@ int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order)
void __memcg_kmem_uncharge_page(struct page *page, int order)
{
struct folio *folio = page_folio(page);
- struct obj_cgroup *objcg;
+ struct obj_cgroup *objcg = folio_objcg(folio);
unsigned int nr_pages = 1 << order;

- if (!folio_memcg_kmem(folio))
+ if (!objcg)
return;

- objcg = __folio_objcg(folio);
+ VM_BUG_ON_FOLIO(!folio_memcg_kmem(folio), folio);
obj_cgroup_uncharge_pages(objcg, nr_pages);
folio->memcg_data = 0;
obj_cgroup_put(objcg);
@@ -3331,26 +3422,21 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size)
#endif /* CONFIG_MEMCG_KMEM */

/*
- * Because page_memcg(head) is not set on tails, set it now.
+ * Because page_objcg(head) is not set on tails, set it now.
*/
void split_page_memcg(struct page *head, unsigned int nr)
{
struct folio *folio = page_folio(head);
- struct mem_cgroup *memcg = get_mem_cgroup_from_folio(folio);
+ struct obj_cgroup *objcg = folio_objcg(folio);
int i;

- if (mem_cgroup_disabled() || !memcg)
+ if (mem_cgroup_disabled() || !objcg)
return;

for (i = 1; i < nr; i++)
folio_page(folio, i)->memcg_data = folio->memcg_data;

- if (folio_memcg_kmem(folio))
- obj_cgroup_get_many(__folio_objcg(folio), nr - 1);
- else
- css_get_many(&memcg->css, nr - 1);
-
- css_put(&memcg->css);
+ obj_cgroup_get_many(objcg, nr - 1);
}

#ifdef CONFIG_MEMCG_SWAP
@@ -5253,6 +5339,9 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
objcg->memcg = memcg;
rcu_assign_pointer(memcg->objcg, objcg);

+ if (unlikely(mem_cgroup_is_root(memcg)))
+ root_obj_cgroup = objcg;
+
/* Online state pins memcg ID, memcg ID pins CSS */
refcount_set(&memcg->id.ref, 1);
css_get(css);
@@ -5657,10 +5746,12 @@ static int mem_cgroup_move_account(struct page *page,
*/
smp_mb();

- css_get(&to->css);
- css_put(&from->css);
+ rcu_read_lock();
+ obj_cgroup_get(rcu_dereference(to->objcg));
+ obj_cgroup_put(rcu_dereference(from->objcg));
+ rcu_read_unlock();

- folio->memcg_data = (unsigned long)to;
+ folio->memcg_data = (unsigned long)rcu_access_pointer(to->objcg);

__folio_memcg_unlock(from);

@@ -6133,6 +6224,42 @@ static void mem_cgroup_move_charge(void)

mmap_read_unlock(mc.mm);
atomic_dec(&mc.from->moving_account);
+
+ /*
+ * Moving its pages to another memcg is finished. Wait for already
+ * started RCU-only updates to finish to make sure that the caller
+ * of lock_page_memcg() can unlock the correct move_lock. The
+ * possible bad scenario would look like:
+ *
+ * CPU0: CPU1:
+ * mem_cgroup_move_charge()
+ * walk_page_range()
+ *
+ * lock_page_memcg(page)
+ * memcg = folio_memcg()
+ * spin_lock_irqsave(&memcg->move_lock)
+ * memcg->move_lock_task = current
+ *
+ * atomic_dec(&mc.from->moving_account)
+ *
+ * mem_cgroup_css_offline()
+ * memcg_offline_kmem()
+ * memcg_reparent_objcgs() <== reparented
+ *
+ * unlock_page_memcg(page)
+ * memcg = folio_memcg() <== memcg has been changed
+ * if (memcg->move_lock_task == current) <== false
+ * spin_unlock_irqrestore(&memcg->move_lock)
+ *
+ * Once mem_cgroup_move_charge() returns (meaning that the cgroup_mutex
+ * will be released soon), the page can be reparented to its parent
+ * memcg. When unlock_page_memcg() is then called for the page, we would
+ * fail to unlock the correct move_lock. So use synchronize_rcu() to wait
+ * for already started RCU-only updates to finish before this function
+ * returns (mem_cgroup_move_charge() and mem_cgroup_css_offline() are
+ * serialized by cgroup_mutex).
+ */
+ synchronize_rcu();
}

/*
@@ -6692,21 +6819,26 @@ void mem_cgroup_calculate_protection(struct mem_cgroup *root,
static int charge_memcg(struct folio *folio, struct mem_cgroup *memcg,
gfp_t gfp)
{
+ struct obj_cgroup *objcg;
long nr_pages = folio_nr_pages(folio);
- int ret;
+ int ret = 0;

- ret = try_charge(memcg, gfp, nr_pages);
+ objcg = __get_obj_cgroup_from_memcg(memcg);
+ /* Do not account at the root objcg level. */
+ if (!obj_cgroup_is_root(objcg))
+ ret = try_charge(memcg, gfp, nr_pages);
if (ret)
goto out;

- css_get(&memcg->css);
- commit_charge(folio, memcg);
+ obj_cgroup_get(objcg);
+ commit_charge(folio, objcg);

local_irq_disable();
mem_cgroup_charge_statistics(memcg, nr_pages);
memcg_check_events(memcg, folio_nid(folio));
local_irq_enable();
out:
+ obj_cgroup_put(objcg);
return ret;
}

@@ -6792,7 +6924,7 @@ void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry)
}

struct uncharge_gather {
- struct mem_cgroup *memcg;
+ struct obj_cgroup *objcg;
unsigned long nr_memory;
unsigned long pgpgout;
unsigned long nr_kmem;
@@ -6807,63 +6939,56 @@ static inline void uncharge_gather_clear(struct uncharge_gather *ug)
static void uncharge_batch(const struct uncharge_gather *ug)
{
unsigned long flags;
+ struct mem_cgroup *memcg;

+ rcu_read_lock();
+ memcg = obj_cgroup_memcg(ug->objcg);
if (ug->nr_memory) {
- page_counter_uncharge(&ug->memcg->memory, ug->nr_memory);
+ page_counter_uncharge(&memcg->memory, ug->nr_memory);
if (do_memsw_account())
- page_counter_uncharge(&ug->memcg->memsw, ug->nr_memory);
+ page_counter_uncharge(&memcg->memsw, ug->nr_memory);
if (ug->nr_kmem)
- memcg_account_kmem(ug->memcg, -ug->nr_kmem);
- memcg_oom_recover(ug->memcg);
+ memcg_account_kmem(memcg, -ug->nr_kmem);
+ memcg_oom_recover(memcg);
}

local_irq_save(flags);
- __count_memcg_events(ug->memcg, PGPGOUT, ug->pgpgout);
- __this_cpu_add(ug->memcg->vmstats_percpu->nr_page_events, ug->nr_memory);
- memcg_check_events(ug->memcg, ug->nid);
+ __count_memcg_events(memcg, PGPGOUT, ug->pgpgout);
+ __this_cpu_add(memcg->vmstats_percpu->nr_page_events, ug->nr_memory);
+ memcg_check_events(memcg, ug->nid);
local_irq_restore(flags);
+ rcu_read_unlock();

/* drop reference from uncharge_folio */
- css_put(&ug->memcg->css);
+ obj_cgroup_put(ug->objcg);
}

static void uncharge_folio(struct folio *folio, struct uncharge_gather *ug)
{
long nr_pages;
- struct mem_cgroup *memcg;
struct obj_cgroup *objcg;

VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);

/*
* Nobody should be changing or seriously looking at
- * folio memcg or objcg at this point, we have fully
- * exclusive access to the folio.
+ * folio objcg at this point, we have fully exclusive
+ * access to the folio.
*/
- if (folio_memcg_kmem(folio)) {
- objcg = __folio_objcg(folio);
- /*
- * This get matches the put at the end of the function and
- * kmem pages do not hold memcg references anymore.
- */
- memcg = get_mem_cgroup_from_objcg(objcg);
- } else {
- memcg = __folio_memcg(folio);
- }
-
- if (!memcg)
+ objcg = folio_objcg(folio);
+ if (!objcg)
return;

- if (ug->memcg != memcg) {
- if (ug->memcg) {
+ if (ug->objcg != objcg) {
+ if (ug->objcg) {
uncharge_batch(ug);
uncharge_gather_clear(ug);
}
- ug->memcg = memcg;
+ ug->objcg = objcg;
ug->nid = folio_nid(folio);

- /* pairs with css_put in uncharge_batch */
- css_get(&memcg->css);
+ /* pairs with obj_cgroup_put in uncharge_batch */
+ obj_cgroup_get(objcg);
}

nr_pages = folio_nr_pages(folio);
@@ -6871,19 +6996,15 @@ static void uncharge_folio(struct folio *folio, struct uncharge_gather *ug)
if (folio_memcg_kmem(folio)) {
ug->nr_memory += nr_pages;
ug->nr_kmem += nr_pages;
-
- folio->memcg_data = 0;
- obj_cgroup_put(objcg);
} else {
/* LRU pages aren't accounted at the root level */
- if (!mem_cgroup_is_root(memcg))
+ if (!obj_cgroup_is_root(objcg))
ug->nr_memory += nr_pages;
ug->pgpgout++;
-
- folio->memcg_data = 0;
}

- css_put(&memcg->css);
+ folio->memcg_data = 0;
+ obj_cgroup_put(objcg);
}

void __mem_cgroup_uncharge(struct folio *folio)
@@ -6891,7 +7012,7 @@ void __mem_cgroup_uncharge(struct folio *folio)
struct uncharge_gather ug;

/* Don't touch folio->lru of any random page, pre-check: */
- if (!folio_memcg(folio))
+ if (!folio_objcg(folio))
return;

uncharge_gather_clear(&ug);
@@ -6914,7 +7035,7 @@ void __mem_cgroup_uncharge_list(struct list_head *page_list)
uncharge_gather_clear(&ug);
list_for_each_entry(folio, page_list, lru)
uncharge_folio(folio, &ug);
- if (ug.memcg)
+ if (ug.objcg)
uncharge_batch(&ug);
}

@@ -6931,6 +7052,7 @@ void __mem_cgroup_uncharge_list(struct list_head *page_list)
void mem_cgroup_migrate(struct folio *old, struct folio *new)
{
struct mem_cgroup *memcg;
+ struct obj_cgroup *objcg;
long nr_pages = folio_nr_pages(new);
unsigned long flags;

@@ -6943,30 +7065,33 @@ void mem_cgroup_migrate(struct folio *old, struct folio *new)
return;

/* Page cache replacement: new folio already charged? */
- if (folio_memcg(new))
+ if (folio_objcg(new))
return;

- memcg = get_mem_cgroup_from_folio(old);
- VM_WARN_ON_ONCE_FOLIO(!memcg, old);
- if (!memcg)
+ objcg = folio_objcg(old);
+ VM_WARN_ON_ONCE_FOLIO(!objcg, old);
+ if (!objcg)
return;

+ rcu_read_lock();
+ memcg = obj_cgroup_memcg(objcg);
+
/* Force-charge the new page. The old one will be freed soon */
- if (!mem_cgroup_is_root(memcg)) {
+ if (!obj_cgroup_is_root(objcg)) {
page_counter_charge(&memcg->memory, nr_pages);
if (do_memsw_account())
page_counter_charge(&memcg->memsw, nr_pages);
}

- css_get(&memcg->css);
- commit_charge(new, memcg);
+ obj_cgroup_get(objcg);
+ commit_charge(new, objcg);

local_irq_save(flags);
mem_cgroup_charge_statistics(memcg, nr_pages);
memcg_check_events(memcg, folio_nid(new));
local_irq_restore(flags);

- css_put(&memcg->css);
+ rcu_read_unlock();
}

DEFINE_STATIC_KEY_FALSE(memcg_sockets_enabled_key);
@@ -7141,6 +7266,7 @@ static struct mem_cgroup *mem_cgroup_id_get_online(struct mem_cgroup *memcg)
void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry)
{
struct mem_cgroup *memcg, *swap_memcg;
+ struct obj_cgroup *objcg;
unsigned int nr_entries;
unsigned short oldid;

@@ -7153,15 +7279,16 @@ void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry)
if (cgroup_subsys_on_dfl(memory_cgrp_subsys))
return;

+ objcg = folio_objcg(folio);
+ VM_WARN_ON_ONCE_FOLIO(!objcg, folio);
+ if (!objcg)
+ return;
+
/*
* Interrupts should be disabled by the caller (see the comments below),
* which can serve as RCU read-side critical sections.
*/
- memcg = folio_memcg(folio);
-
- VM_WARN_ON_ONCE_FOLIO(!memcg, folio);
- if (!memcg)
- return;
+ memcg = obj_cgroup_memcg(objcg);

/*
* In case the memcg owning these pages has been offlined and doesn't
@@ -7180,7 +7307,7 @@ void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry)

folio->memcg_data = 0;

- if (!mem_cgroup_is_root(memcg))
+ if (!obj_cgroup_is_root(objcg))
page_counter_uncharge(&memcg->memory, nr_entries);

if (!cgroup_memory_noswap && memcg != swap_memcg) {
@@ -7200,7 +7327,7 @@ void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry)
memcg_stats_unlock();
memcg_check_events(memcg, folio_nid(folio));

- css_put(&memcg->css);
+ obj_cgroup_put(objcg);
}

/**
--
2.11.0


2022-05-24 22:27:45

by Johannes Weiner

[permalink] [raw]
Subject: Re: [PATCH v4 10/11] mm: lru: add VM_BUG_ON_FOLIO to lru maintenance function

On Tue, May 24, 2022 at 02:05:50PM +0800, Muchun Song wrote:
> We need to make sure that the page is deleted from or added to the
> correct lruvec list. So add a VM_BUG_ON_FOLIO() to catch invalid
> users.
>
> Signed-off-by: Muchun Song <[email protected]>

Makes sense, but please use VM_WARN_ON_ONCE_FOLIO() so the machine can
continue limping along for extracting debug information.
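
For instance (an illustrative rewrite, not from a posted patch), the new
assertions would become:

-	VM_BUG_ON_FOLIO(!folio_matches_lruvec(folio, lruvec), folio);
+	VM_WARN_ON_ONCE_FOLIO(!folio_matches_lruvec(folio, lruvec), folio);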

2022-05-24 23:44:24

by kernel test robot

[permalink] [raw]
Subject: Re: [PATCH v4 09/11] mm: memcontrol: use obj_cgroup APIs to charge the LRU pages

Hi Muchun,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on 4b0986a3613c92f4ec1bdc7f60ec66fea135991f]

url: https://github.com/intel-lab-lkp/linux/commits/Muchun-Song/Use-obj_cgroup-APIs-to-charge-the-LRU-pages/20220524-143056
base: 4b0986a3613c92f4ec1bdc7f60ec66fea135991f
config: arm64-buildonly-randconfig-r005-20220524 (https://download.01.org/0day-ci/archive/20220525/[email protected]/config)
compiler: clang version 15.0.0 (https://github.com/llvm/llvm-project 10c9ecce9f6096e18222a331c5e7d085bd813f75)
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# install arm64 cross compiling tool for clang build
# apt-get install binutils-aarch64-linux-gnu
# https://github.com/intel-lab-lkp/linux/commit/bec0ae12106e0cf12dd4e0e21eb0754b99be0ba2
git remote add linux-review https://github.com/intel-lab-lkp/linux
git fetch --no-tags linux-review Muchun-Song/Use-obj_cgroup-APIs-to-charge-the-LRU-pages/20220524-143056
git checkout bec0ae12106e0cf12dd4e0e21eb0754b99be0ba2
# save the config file
mkdir build_dir && cp config build_dir/.config
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=arm64 SHELL=/bin/bash

If you fix the issue, kindly add following tag where applicable
Reported-by: kernel test robot <[email protected]>

All error/warnings (new ones prefixed by >>):

>> mm/memcontrol.c:6826:10: error: call to undeclared function '__get_obj_cgroup_from_memcg'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
objcg = __get_obj_cgroup_from_memcg(memcg);
^
>> mm/memcontrol.c:6826:8: warning: incompatible integer to pointer conversion assigning to 'struct obj_cgroup *' from 'int' [-Wint-conversion]
objcg = __get_obj_cgroup_from_memcg(memcg);
^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1 warning and 1 error generated.


vim +/__get_obj_cgroup_from_memcg +6826 mm/memcontrol.c

6818
6819 static int charge_memcg(struct folio *folio, struct mem_cgroup *memcg,
6820 gfp_t gfp)
6821 {
6822 struct obj_cgroup *objcg;
6823 long nr_pages = folio_nr_pages(folio);
6824 int ret = 0;
6825
> 6826 objcg = __get_obj_cgroup_from_memcg(memcg);
6827 /* Do not account at the root objcg level. */
6828 if (!obj_cgroup_is_root(objcg))
6829 ret = try_charge(memcg, gfp, nr_pages);
6830 if (ret)
6831 goto out;
6832
6833 obj_cgroup_get(objcg);
6834 commit_charge(folio, objcg);
6835
6836 local_irq_disable();
6837 mem_cgroup_charge_statistics(memcg, nr_pages);
6838 memcg_check_events(memcg, folio_nid(folio));
6839 local_irq_enable();
6840 out:
6841 obj_cgroup_put(objcg);
6842 return ret;
6843 }
6844

--
0-DAY CI Kernel Test Service
https://01.org/lkp

2022-05-25 04:17:55

by Roman Gushchin

[permalink] [raw]
Subject: Re: [PATCH v4 01/11] mm: memcontrol: prepare objcg API for non-kmem usage

On Tue, May 24, 2022 at 02:05:41PM +0800, Muchun Song wrote:
> Pagecache pages are charged at the allocation time and holding a
> reference to the original memory cgroup until being reclaimed.
> Depending on the memory pressure, specific patterns of the page
> sharing between different cgroups and the cgroup creation and
> destruction rates, a large number of dying memory cgroups can be
> pinned by pagecache pages. It makes the page reclaim less efficient
> and wastes memory.
>
> We can convert LRU pages and most other raw memcg pins to the objcg
> direction to fix this problem, and then the page->memcg will always
> point to an object cgroup pointer.
>
> Therefore, the infrastructure of objcg no longer only serves
> CONFIG_MEMCG_KMEM. In this patch, we move the infrastructure of the
> objcg out of the scope of the CONFIG_MEMCG_KMEM so that the LRU pages
> can reuse it to charge pages.
>
> We know that the LRU pages are not accounted at the root level. But
> the page->memcg_data points to the root_mem_cgroup. So the
> page->memcg_data of the LRU pages always points to a valid pointer.
> But the root_mem_cgroup does not have an object cgroup. If we use
> obj_cgroup APIs to charge the LRU pages, we should set the
> page->memcg_data to a root object cgroup. So we also allocate an
> object cgroup for the root_mem_cgroup.
>
> Signed-off-by: Muchun Song <[email protected]>
> ---
> include/linux/memcontrol.h | 5 ++--
> mm/memcontrol.c | 60 +++++++++++++++++++++++++---------------------
> 2 files changed, 35 insertions(+), 30 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 89b14729d59f..ff1c1dd7e762 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -315,10 +315,10 @@ struct mem_cgroup {
>
> #ifdef CONFIG_MEMCG_KMEM
> int kmemcg_id;
> +#endif
> struct obj_cgroup __rcu *objcg;
> /* list of inherited objcgs, protected by objcg_lock */
> struct list_head objcg_list;
> -#endif
>
> MEMCG_PADDING(_pad2_);
>
> @@ -851,8 +851,7 @@ static inline struct mem_cgroup *lruvec_memcg(struct lruvec *lruvec)
> * parent_mem_cgroup - find the accounting parent of a memcg
> * @memcg: memcg whose parent to find
> *
> - * Returns the parent memcg, or NULL if this is the root or the memory
> - * controller is in legacy no-hierarchy mode.
> + * Returns the parent memcg, or NULL if this is the root.
> */
> static inline struct mem_cgroup *parent_mem_cgroup(struct mem_cgroup *memcg)
> {
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 598fece89e2b..6de0d3e53eb1 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -254,9 +254,9 @@ struct mem_cgroup *vmpressure_to_memcg(struct vmpressure *vmpr)
> return container_of(vmpr, struct mem_cgroup, vmpressure);
> }
>
> -#ifdef CONFIG_MEMCG_KMEM
> static DEFINE_SPINLOCK(objcg_lock);
>
> +#ifdef CONFIG_MEMCG_KMEM
> bool mem_cgroup_kmem_disabled(void)
> {
> return cgroup_memory_nokmem;
> @@ -265,12 +265,10 @@ bool mem_cgroup_kmem_disabled(void)
> static void obj_cgroup_uncharge_pages(struct obj_cgroup *objcg,
> unsigned int nr_pages);
>
> -static void obj_cgroup_release(struct percpu_ref *ref)
> +static void obj_cgroup_release_bytes(struct obj_cgroup *objcg)
> {
> - struct obj_cgroup *objcg = container_of(ref, struct obj_cgroup, refcnt);
> unsigned int nr_bytes;
> unsigned int nr_pages;
> - unsigned long flags;
>
> /*
> * At this point all allocated objects are freed, and
> @@ -284,9 +282,9 @@ static void obj_cgroup_release(struct percpu_ref *ref)
> * 3) CPU1: a process from another memcg is allocating something,
> * the stock if flushed,
> * objcg->nr_charged_bytes = PAGE_SIZE - 92
> - * 5) CPU0: we do release this object,
> + * 4) CPU0: we do release this object,
> * 92 bytes are added to stock->nr_bytes
> - * 6) CPU0: stock is flushed,
> + * 5) CPU0: stock is flushed,
> * 92 bytes are added to objcg->nr_charged_bytes
> *
> * In the result, nr_charged_bytes == PAGE_SIZE.
> @@ -298,6 +296,19 @@ static void obj_cgroup_release(struct percpu_ref *ref)
>
> if (nr_pages)
> obj_cgroup_uncharge_pages(objcg, nr_pages);
> +}
> +#else
> +static inline void obj_cgroup_release_bytes(struct obj_cgroup *objcg)
> +{
> +}
> +#endif
> +
> +static void obj_cgroup_release(struct percpu_ref *ref)
> +{
> + struct obj_cgroup *objcg = container_of(ref, struct obj_cgroup, refcnt);
> + unsigned long flags;
> +
> + obj_cgroup_release_bytes(objcg);
>
> spin_lock_irqsave(&objcg_lock, flags);
> list_del(&objcg->list);
> @@ -326,10 +337,10 @@ static struct obj_cgroup *obj_cgroup_alloc(void)
> return objcg;
> }
>
> -static void memcg_reparent_objcgs(struct mem_cgroup *memcg,
> - struct mem_cgroup *parent)
> +static void memcg_reparent_objcgs(struct mem_cgroup *memcg)
> {
> struct obj_cgroup *objcg, *iter;
> + struct mem_cgroup *parent = parent_mem_cgroup(memcg);
>
> objcg = rcu_replace_pointer(memcg->objcg, NULL, true);
>
> @@ -348,6 +359,7 @@ static void memcg_reparent_objcgs(struct mem_cgroup *memcg,
> percpu_ref_kill(&objcg->refcnt);
> }
>
> +#ifdef CONFIG_MEMCG_KMEM
> /*
> * A lot of the calls to the cache allocation functions are expected to be
> * inlined by the compiler. Since the calls to memcg_slab_pre_alloc_hook() are
> @@ -3589,21 +3601,12 @@ static u64 mem_cgroup_read_u64(struct cgroup_subsys_state *css,
> #ifdef CONFIG_MEMCG_KMEM
> static int memcg_online_kmem(struct mem_cgroup *memcg)
> {
> - struct obj_cgroup *objcg;
> -
> if (cgroup_memory_nokmem)
> return 0;
>
> if (unlikely(mem_cgroup_is_root(memcg)))
> return 0;
>
> - objcg = obj_cgroup_alloc();
> - if (!objcg)
> - return -ENOMEM;
> -
> - objcg->memcg = memcg;
> - rcu_assign_pointer(memcg->objcg, objcg);
> -
> static_branch_enable(&memcg_kmem_enabled_key);
>
> memcg->kmemcg_id = memcg->id.id;
> @@ -3613,27 +3616,19 @@ static int memcg_online_kmem(struct mem_cgroup *memcg)
>
> static void memcg_offline_kmem(struct mem_cgroup *memcg)
> {
> - struct mem_cgroup *parent;
> -
> if (cgroup_memory_nokmem)
> return;
>
> if (unlikely(mem_cgroup_is_root(memcg)))
> return;
>
> - parent = parent_mem_cgroup(memcg);
> - if (!parent)
> - parent = root_mem_cgroup;
> -
> - memcg_reparent_objcgs(memcg, parent);
> -
> /*
> * After we have finished memcg_reparent_objcgs(), all list_lrus
> * corresponding to this cgroup are guaranteed to remain empty.
> * The ordering is imposed by list_lru_node->lock taken by
> * memcg_reparent_list_lrus().
> */


This comment doesn't look correct after these changes. Should it be
fixed? Or should the ordering be fixed too?

> - memcg_reparent_list_lrus(memcg, parent);
> + memcg_reparent_list_lrus(memcg, parent_mem_cgroup(memcg));

We effectively dropped this:

	if (!parent)
		parent = root_mem_cgroup;

Is it safe? (assuming v1 non-hierarchical mode, which is usually when
things get complicated)

The rest of the patch looks good to me.

Thanks!

2022-05-25 06:55:36

by Johannes Weiner

[permalink] [raw]
Subject: Re: [PATCH v4 01/11] mm: memcontrol: prepare objcg API for non-kmem usage

On Tue, May 24, 2022 at 02:05:41PM +0800, Muchun Song wrote:
> Pagecache pages are charged at the allocation time and holding a
> reference to the original memory cgroup until being reclaimed.
> Depending on the memory pressure, specific patterns of the page
> sharing between different cgroups and the cgroup creation and
> destruction rates, a large number of dying memory cgroups can be
> pinned by pagecache pages. It makes the page reclaim less efficient
> and wastes memory.
>
> We can convert LRU pages and most other raw memcg pins to the objcg
> direction to fix this problem, and then the page->memcg will always
> point to an object cgroup pointer.
>
> Therefore, the infrastructure of objcg no longer only serves
> CONFIG_MEMCG_KMEM. In this patch, we move the infrastructure of the
> objcg out of the scope of the CONFIG_MEMCG_KMEM so that the LRU pages
> can reuse it to charge pages.
>
> We know that the LRU pages are not accounted at the root level. But
> the page->memcg_data points to the root_mem_cgroup. So the
> page->memcg_data of the LRU pages always points to a valid pointer.
> But the root_mem_cgroup does not have an object cgroup. If we use
> obj_cgroup APIs to charge the LRU pages, we should set the
> page->memcg_data to a root object cgroup. So we also allocate an
> object cgroup for the root_mem_cgroup.
>
> Signed-off-by: Muchun Song <[email protected]>

Acked-by: Johannes Weiner <[email protected]>

Looks good to me. Also gets rid of some use_hierarchy cruft.

2022-05-25 12:02:16

by Roman Gushchin

[permalink] [raw]
Subject: Re: [PATCH v4 10/11] mm: lru: add VM_BUG_ON_FOLIO to lru maintenance function

On Tue, May 24, 2022 at 02:05:50PM +0800, Muchun Song wrote:
> We need to make sure that the page is deleted from or added to the
> correct lruvec list. So add a VM_BUG_ON_FOLIO() to catch invalid
> users.
>
> Signed-off-by: Muchun Song <[email protected]>
> ---
> include/linux/mm_inline.h | 6 ++++++
> mm/vmscan.c | 1 -
> 2 files changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
> index ac32125745ab..30d2393da613 100644
> --- a/include/linux/mm_inline.h
> +++ b/include/linux/mm_inline.h
> @@ -97,6 +97,8 @@ void lruvec_add_folio(struct lruvec *lruvec, struct folio *folio)
> {
> enum lru_list lru = folio_lru_list(folio);
>
> + VM_BUG_ON_FOLIO(!folio_matches_lruvec(folio, lruvec), folio);
> +
> update_lru_size(lruvec, lru, folio_zonenum(folio),
> folio_nr_pages(folio));
> if (lru != LRU_UNEVICTABLE)
> @@ -114,6 +116,8 @@ void lruvec_add_folio_tail(struct lruvec *lruvec, struct folio *folio)
> {
> enum lru_list lru = folio_lru_list(folio);
>
> + VM_BUG_ON_FOLIO(!folio_matches_lruvec(folio, lruvec), folio);
> +
> update_lru_size(lruvec, lru, folio_zonenum(folio),
> folio_nr_pages(folio));
> /* This is not expected to be used on LRU_UNEVICTABLE */
> @@ -131,6 +135,8 @@ void lruvec_del_folio(struct lruvec *lruvec, struct folio *folio)
> {
> enum lru_list lru = folio_lru_list(folio);
>
> + VM_BUG_ON_FOLIO(!folio_matches_lruvec(folio, lruvec), folio);
> +
> if (lru != LRU_UNEVICTABLE)
> list_del(&folio->lru);
> update_lru_size(lruvec, lru, folio_zonenum(folio),
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 761d5e0dd78d..6c9e2eafc8f9 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2281,7 +2281,6 @@ static unsigned int move_pages_to_lru(struct list_head *list)
> continue;
> }
>
> - VM_BUG_ON_PAGE(!folio_matches_lruvec(folio, lruvec), page);

The commit log describes well why we need to add the new BUG_ONs. Please
also add something about why this one is removed.


Thanks!

2022-05-25 12:12:17

by Muchun Song

[permalink] [raw]
Subject: [PATCH v4 03/11] mm: memcontrol: make lruvec lock safe when LRU pages are reparented

The diagram below shows how to make the folio lruvec lock safe when LRU
pages are reparented.

folio_lruvec_lock(folio)
retry:
lruvec = folio_lruvec(folio);

// The folio is reparented at this time.
spin_lock(&lruvec->lru_lock);

if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio)))
// Acquired the wrong lruvec lock and need to retry.
// Because this folio is on the parent memcg lruvec list.
goto retry;

// If we reach here, it means that folio_memcg(folio) is stable.

memcg_reparent_objcgs(memcg)
// lruvec belongs to memcg and lruvec_parent belongs to parent memcg.
spin_lock(&lruvec->lru_lock);
spin_lock(&lruvec_parent->lru_lock);

// Move all the pages from the lruvec list to the parent lruvec list.

spin_unlock(&lruvec_parent->lru_lock);
spin_unlock(&lruvec->lru_lock);

After we acquire the lruvec lock, we need to check whether the folio has
been reparented. If so, we need to reacquire the new lruvec lock. On the
LRU page reparenting path, we will also acquire the lruvec lock (this is
implemented in a later patch), so folio_memcg() cannot change while we
hold the lruvec lock.

Since lruvec_memcg(lruvec) is always equal to folio_memcg(folio) once we
hold the lruvec lock, the lruvec_memcg_debug() check is pointless. So
remove it.

This is a preparation for reparenting the LRU pages.
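
Built on this retry pattern, a relock-style helper used by a later patch
could look roughly like the following (a hedged sketch for illustration;
the actual implementation may differ):

	static inline struct lruvec *folio_lruvec_relock_irq(struct folio *folio,
					struct lruvec *locked_lruvec)
	{
		if (locked_lruvec) {
			if (folio_matches_lruvec(folio, locked_lruvec))
				return locked_lruvec;
			/* The folio was reparented; drop the stale lock. */
			unlock_page_lruvec_irq(locked_lruvec);
		}
		return folio_lruvec_lock_irq(folio);
	}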

Signed-off-by: Muchun Song <[email protected]>
---
include/linux/memcontrol.h | 18 +++-----------
mm/compaction.c | 10 +++++++-
mm/memcontrol.c | 62 +++++++++++++++++++++++++++++-----------------
mm/swap.c | 4 +++
4 files changed, 55 insertions(+), 39 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index ff1c1dd7e762..4042e4d21fe2 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -752,7 +752,9 @@ static inline struct lruvec *mem_cgroup_lruvec(struct mem_cgroup *memcg,
* folio_lruvec - return lruvec for isolating/putting an LRU folio
* @folio: Pointer to the folio.
*
- * This function relies on folio->mem_cgroup being stable.
+ * The lruvec can change to its parent lruvec when the page is reparented.
+ * The caller needs to recheck if it cares about this change (just like
+ * folio_lruvec_lock() does).
*/
static inline struct lruvec *folio_lruvec(struct folio *folio)
{
@@ -771,15 +773,6 @@ struct lruvec *folio_lruvec_lock_irq(struct folio *folio);
struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio,
unsigned long *flags);

-#ifdef CONFIG_DEBUG_VM
-void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio);
-#else
-static inline
-void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
-{
-}
-#endif
-
static inline
struct mem_cgroup *mem_cgroup_from_css(struct cgroup_subsys_state *css){
return css ? container_of(css, struct mem_cgroup, css) : NULL;
@@ -1240,11 +1233,6 @@ static inline struct lruvec *folio_lruvec(struct folio *folio)
return &pgdat->__lruvec;
}

-static inline
-void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
-{
-}
-
static inline struct mem_cgroup *parent_mem_cgroup(struct mem_cgroup *memcg)
{
return NULL;
diff --git a/mm/compaction.c b/mm/compaction.c
index 817098817302..1692b17db781 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -515,6 +515,8 @@ compact_folio_lruvec_lock_irqsave(struct folio *folio, unsigned long *flags,
{
struct lruvec *lruvec;

+ rcu_read_lock();
+retry:
lruvec = folio_lruvec(folio);

/* Track if the lock is contended in async mode */
@@ -527,7 +529,13 @@ compact_folio_lruvec_lock_irqsave(struct folio *folio, unsigned long *flags,

spin_lock_irqsave(&lruvec->lru_lock, *flags);
out:
- lruvec_memcg_debug(lruvec, folio);
+ if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
+ spin_unlock_irqrestore(&lruvec->lru_lock, *flags);
+ goto retry;
+ }
+
+ /* See the comments in folio_lruvec_lock(). */
+ rcu_read_unlock();

return lruvec;
}
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 6de0d3e53eb1..b38a77f6696f 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1199,23 +1199,6 @@ int mem_cgroup_scan_tasks(struct mem_cgroup *memcg,
return ret;
}

-#ifdef CONFIG_DEBUG_VM
-void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
-{
- struct mem_cgroup *memcg;
-
- if (mem_cgroup_disabled())
- return;
-
- memcg = folio_memcg(folio);
-
- if (!memcg)
- VM_BUG_ON_FOLIO(lruvec_memcg(lruvec) != root_mem_cgroup, folio);
- else
- VM_BUG_ON_FOLIO(lruvec_memcg(lruvec) != memcg, folio);
-}
-#endif
-
/**
* folio_lruvec_lock - Lock the lruvec for a folio.
* @folio: Pointer to the folio.
@@ -1230,10 +1213,23 @@ void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
*/
struct lruvec *folio_lruvec_lock(struct folio *folio)
{
- struct lruvec *lruvec = folio_lruvec(folio);
+ struct lruvec *lruvec;

+ rcu_read_lock();
+retry:
+ lruvec = folio_lruvec(folio);
spin_lock(&lruvec->lru_lock);
- lruvec_memcg_debug(lruvec, folio);
+
+ if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
+ spin_unlock(&lruvec->lru_lock);
+ goto retry;
+ }
+
+ /*
+ * Preemption is disabled inside spin_lock(), so this region can also
+ * serve as an RCU read-side critical section.
+ */
+ rcu_read_unlock();

return lruvec;
}
@@ -1253,10 +1249,20 @@ struct lruvec *folio_lruvec_lock(struct folio *folio)
*/
struct lruvec *folio_lruvec_lock_irq(struct folio *folio)
{
- struct lruvec *lruvec = folio_lruvec(folio);
+ struct lruvec *lruvec;

+ rcu_read_lock();
+retry:
+ lruvec = folio_lruvec(folio);
spin_lock_irq(&lruvec->lru_lock);
- lruvec_memcg_debug(lruvec, folio);
+
+ if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
+ spin_unlock_irq(&lruvec->lru_lock);
+ goto retry;
+ }
+
+ /* See the comments in folio_lruvec_lock(). */
+ rcu_read_unlock();

return lruvec;
}
@@ -1278,10 +1284,20 @@ struct lruvec *folio_lruvec_lock_irq(struct folio *folio)
struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio,
unsigned long *flags)
{
- struct lruvec *lruvec = folio_lruvec(folio);
+ struct lruvec *lruvec;

+ rcu_read_lock();
+retry:
+ lruvec = folio_lruvec(folio);
spin_lock_irqsave(&lruvec->lru_lock, *flags);
- lruvec_memcg_debug(lruvec, folio);
+
+ if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
+ spin_unlock_irqrestore(&lruvec->lru_lock, *flags);
+ goto retry;
+ }
+
+ /* See the comments in folio_lruvec_lock(). */
+ rcu_read_unlock();

return lruvec;
}
diff --git a/mm/swap.c b/mm/swap.c
index 7e320ec08c6a..9680f2fc48b1 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -303,6 +303,10 @@ void lru_note_cost(struct lruvec *lruvec, bool file, unsigned int nr_pages)

void lru_note_cost_folio(struct folio *folio)
{
+ /*
+ * The rcu read lock is held by the caller, so we do not need to
+ * care about the lruvec returned by folio_lruvec() being released.
+ */
lru_note_cost(folio_lruvec(folio), folio_is_file_lru(folio),
folio_nr_pages(folio));
}
--
2.11.0


2022-05-25 12:50:44

by Roman Gushchin

[permalink] [raw]
Subject: Re: [PATCH v4 06/11] mm: thp: make split queue lock safe when LRU pages are reparented

On Tue, May 24, 2022 at 02:05:46PM +0800, Muchun Song wrote:
> Similar to the lruvec lock, we use the same approach to make the split
> queue lock safe when LRU pages are reparented.
>
> Signed-off-by: Muchun Song <[email protected]>

Please, merge this into the previous patch (like Johannes asked
for the lruvec counterpart).

And add:
Acked-by: Roman Gushchin <[email protected]> .

Thanks!

2022-05-25 13:27:59

by Waiman Long

[permalink] [raw]
Subject: Re: [PATCH v4 04/11] mm: vmscan: rework move_pages_to_lru()

On 5/24/22 02:05, Muchun Song wrote:
> In the later patch, we will reparent the LRU pages. The pages moved to
> appropriate LRU list can be reparented during the process of the
> move_pages_to_lru(). So holding a lruvec lock by the caller is wrong, we
> should use the more general interface of folio_lruvec_relock_irq() to
> acquire the correct lruvec lock.
>
> Signed-off-by: Muchun Song <[email protected]>
> ---
> mm/vmscan.c | 49 +++++++++++++++++++++++++------------------------
> 1 file changed, 25 insertions(+), 24 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 1678802e03e7..761d5e0dd78d 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2230,23 +2230,28 @@ static int too_many_isolated(struct pglist_data *pgdat, int file,
> * move_pages_to_lru() moves pages from private @list to appropriate LRU list.
> * On return, @list is reused as a list of pages to be freed by the caller.
> *
> - * Returns the number of pages moved to the given lruvec.
> + * Returns the number of pages moved to the appropriate LRU list.
> + *
> + * Note: The caller must not hold any lruvec lock.
> */
> -static unsigned int move_pages_to_lru(struct lruvec *lruvec,
> - struct list_head *list)
> +static unsigned int move_pages_to_lru(struct list_head *list)
> {
> - int nr_pages, nr_moved = 0;
> + int nr_moved = 0;
> + struct lruvec *lruvec = NULL;
> LIST_HEAD(pages_to_free);
> - struct page *page;
>
> while (!list_empty(list)) {
> - page = lru_to_page(list);
> + int nr_pages;
> + struct folio *folio = lru_to_folio(list);
> + struct page *page = &folio->page;
> +
> + lruvec = folio_lruvec_relock_irq(folio, lruvec);
> VM_BUG_ON_PAGE(PageLRU(page), page);
> list_del(&page->lru);
> if (unlikely(!page_evictable(page))) {
> - spin_unlock_irq(&lruvec->lru_lock);
> + unlock_page_lruvec_irq(lruvec);
> putback_lru_page(page);
> - spin_lock_irq(&lruvec->lru_lock);
> + lruvec = NULL;
> continue;
> }
>
> @@ -2267,20 +2272,16 @@ static unsigned int move_pages_to_lru(struct lruvec *lruvec,
> __clear_page_lru_flags(page);
>
> if (unlikely(PageCompound(page))) {
> - spin_unlock_irq(&lruvec->lru_lock);
> + unlock_page_lruvec_irq(lruvec);
> destroy_compound_page(page);
> - spin_lock_irq(&lruvec->lru_lock);
> + lruvec = NULL;
> } else
> list_add(&page->lru, &pages_to_free);
>
> continue;
> }
>
> - /*
> - * All pages were isolated from the same lruvec (and isolation
> - * inhibits memcg migration).
> - */
> - VM_BUG_ON_PAGE(!folio_matches_lruvec(page_folio(page), lruvec), page);
> + VM_BUG_ON_PAGE(!folio_matches_lruvec(folio, lruvec), page);
> add_page_to_lru_list(page, lruvec);
> nr_pages = thp_nr_pages(page);
> nr_moved += nr_pages;
> @@ -2288,6 +2289,8 @@ static unsigned int move_pages_to_lru(struct lruvec *lruvec,
> workingset_age_nonresident(lruvec, nr_pages);
> }
>
> + if (lruvec)
> + unlock_page_lruvec_irq(lruvec);
> /*
> * To save our caller's stack, now use input list for pages to free.
> */
> @@ -2359,16 +2362,16 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
>
> nr_reclaimed = shrink_page_list(&page_list, pgdat, sc, &stat, false);
>
> - spin_lock_irq(&lruvec->lru_lock);
> - move_pages_to_lru(lruvec, &page_list);
> + move_pages_to_lru(&page_list);
>
> + local_irq_disable();
> __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
> item = current_is_kswapd() ? PGSTEAL_KSWAPD : PGSTEAL_DIRECT;
> if (!cgroup_reclaim(sc))
> __count_vm_events(item, nr_reclaimed);
> __count_memcg_events(lruvec_memcg(lruvec), item, nr_reclaimed);
> __count_vm_events(PGSTEAL_ANON + file, nr_reclaimed);
> - spin_unlock_irq(&lruvec->lru_lock);
> + local_irq_enable();
>
> lru_note_cost(lruvec, file, stat.nr_pageout);
> mem_cgroup_uncharge_list(&page_list);
> @@ -2498,18 +2501,16 @@ static void shrink_active_list(unsigned long nr_to_scan,
> /*
> * Move pages back to the lru list.
> */
> - spin_lock_irq(&lruvec->lru_lock);
> -
> - nr_activate = move_pages_to_lru(lruvec, &l_active);
> - nr_deactivate = move_pages_to_lru(lruvec, &l_inactive);
> + nr_activate = move_pages_to_lru(&l_active);
> + nr_deactivate = move_pages_to_lru(&l_inactive);
> /* Keep all free pages in l_active list */
> list_splice(&l_inactive, &l_active);
>
> + local_irq_disable();
> __count_vm_events(PGDEACTIVATE, nr_deactivate);
> __count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_deactivate);
> -
> __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
> - spin_unlock_irq(&lruvec->lru_lock);
> + local_irq_enable();
>
> mem_cgroup_uncharge_list(&l_active);
> free_unref_page_list(&l_active);

Note that the RT engineers will likely change the
local_irq_disable()/local_irq_enable() to
local_lock_irq()/local_unlock_irq().
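
A hedged sketch of what such a conversion might look like (the lock name
below is hypothetical, not from any posted patch):

	#include <linux/local_lock.h>

	/* hypothetical per-CPU lock covering the vmstat/event updates */
	static DEFINE_PER_CPU(local_lock_t, lru_stat_lock) =
		INIT_LOCAL_LOCK(lru_stat_lock);

	local_lock_irq(&lru_stat_lock);
	__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
	__count_vm_events(PGDEACTIVATE, nr_deactivate);
	__count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_deactivate);
	local_unlock_irq(&lru_stat_lock);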

Cheers,
Longman


2022-05-25 13:35:07

by kernel test robot

[permalink] [raw]
Subject: [mm] bec0ae1210: WARNING:possible_recursive_locking_detected



Greeting,

FYI, we noticed the following commit (built with gcc-11):

commit: bec0ae12106e0cf12dd4e0e21eb0754b99be0ba2 ("[PATCH v4 09/11] mm: memcontrol: use obj_cgroup APIs to charge the LRU pages")
url: https://github.com/intel-lab-lkp/linux/commits/Muchun-Song/Use-obj_cgroup-APIs-to-charge-the-LRU-pages/20220524-143056
patch link: https://lore.kernel.org/linux-mm/[email protected]

in testcase: boot

on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 16G

caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):



If you fix the issue, kindly add following tag
Reported-by: kernel test robot <[email protected]>


[ 41.024908][ T135] WARNING: possible recursive locking detected
[ 41.025923][ T135] 5.18.0-00009-gbec0ae12106e #1 Not tainted
[ 41.026805][ T135] --------------------------------------------
[ 41.027780][ T135] kworker/1:2/135 is trying to acquire lock:
[ 41.028743][ T135] ffff88815b545068 (&lruvec->lru_lock){....}-{2:2}, at: lruvec_reparent_lock (include/linux/nodemask.h:271 mm/memcontrol.c:376)
[ 41.030324][ T135]
[ 41.030324][ T135] but task is already holding lock:
[ 41.031629][ T135] ffff8881a1c43068 (&lruvec->lru_lock){....}-{2:2}, at: lruvec_reparent_lock (mm/memcontrol.c:378)
[ 41.033231][ T135]
[ 41.033231][ T135] other info that might help us debug this:
[ 41.034551][ T135] Possible unsafe locking scenario:
[ 41.034551][ T135]
[ 41.035818][ T135] CPU0
[ 41.036409][ T135] ----
[ 41.037045][ T135] lock(&lruvec->lru_lock);
[ 41.037866][ T135] lock(&lruvec->lru_lock);
[ 41.039123][ T135]
[ 41.039123][ T135] *** DEADLOCK ***
[ 41.039123][ T135]
[ 41.040984][ T135] May be due to missing lock nesting notation
[ 41.040984][ T135]
[ 41.042567][ T135] 5 locks held by kworker/1:2/135:
[ 41.043472][ T135] #0: ffff88839d54b538 ((wq_completion)cgroup_destroy){+.+.}-{0:0}, at: process_one_work (arch/x86/include/asm/atomic64_64.h:34 include/linux/atomic/atomic-long.h:41 include/linux/atomic/atomic-instrumented.h:1280 kernel/workqueue.c:636 kernel/workqueue.c:663 kernel/workqueue.c:2260)
[ 41.045556][ T135] #1: ffffc90000e9fdb8 ((work_completion)(&css->destroy_work)){+.+.}-{0:0}, at: process_one_work (kernel/workqueue.c:2264)
[ 41.047649][ T135] #2: ffffffffa46931c8 (cgroup_mutex){+.+.}-{3:3}, at: css_killed_work_fn (kernel/cgroup/cgroup.c:5271 kernel/cgroup/cgroup.c:5554)
[ 41.049171][ T135] #3: ffffffffa47fe2d8 (objcg_lock){....}-{2:2}, at: mem_cgroup_css_offline (mm/memcontrol.c:453 mm/memcontrol.c:463 mm/memcontrol.c:5382)
[ 41.050617][ T135] #4: ffff8881a1c43068 (&lruvec->lru_lock){....}-{2:2}, at: lruvec_reparent_lock (mm/memcontrol.c:378)
[ 41.052031][ T135]
[ 41.052031][ T135] stack backtrace:
[ 41.052926][ T135] CPU: 1 PID: 135 Comm: kworker/1:2 Not tainted 5.18.0-00009-gbec0ae12106e #1
[ 41.054190][ T135] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.0-debian-1.16.0-4 04/01/2014
[ 41.055742][ T135] Workqueue: cgroup_destroy css_killed_work_fn
[ 41.056645][ T135] Call Trace:
[ 41.057138][ T135] <TASK>
[ 41.057628][ T135] dump_stack_lvl (lib/dump_stack.c:107 (discriminator 4))
[ 41.058392][ T135] validate_chain.cold (kernel/locking/lockdep.c:2958 kernel/locking/lockdep.c:3001 kernel/locking/lockdep.c:3790)
[ 41.059117][ T135] ? check_prev_add (kernel/locking/lockdep.c:3759)
[ 41.059888][ T135] __lock_acquire (kernel/locking/lockdep.c:5029)
[ 41.060579][ T135] lock_acquire (kernel/locking/lockdep.c:436 kernel/locking/lockdep.c:5643 kernel/locking/lockdep.c:5606)
[ 41.061280][ T135] ? lruvec_reparent_lock (include/linux/nodemask.h:271 mm/memcontrol.c:376)
[ 41.062081][ T135] ? rcu_read_unlock (include/linux/rcupdate.h:723 (discriminator 5))
[ 41.062915][ T135] ? lock_acquire (kernel/locking/lockdep.c:436 kernel/locking/lockdep.c:5643 kernel/locking/lockdep.c:5606)
[ 41.063653][ T135] ? mem_cgroup_css_offline (mm/memcontrol.c:453 mm/memcontrol.c:463 mm/memcontrol.c:5382)
[ 41.064504][ T135] ? do_raw_spin_lock (arch/x86/include/asm/atomic.h:202 include/linux/atomic/atomic-instrumented.h:543 include/asm-generic/qspinlock.h:82 kernel/locking/spinlock_debug.c:115)
[ 41.065190][ T135] ? rwlock_bug+0xc0/0xc0
[ 41.065923][ T135] _raw_spin_lock (include/linux/spinlock_api_smp.h:134 kernel/locking/spinlock.c:154)
[ 41.066676][ T135] ? lruvec_reparent_lock (include/linux/nodemask.h:271 mm/memcontrol.c:376)
[ 41.067455][ T135] lruvec_reparent_lock (include/linux/nodemask.h:271 mm/memcontrol.c:376)
[ 41.068227][ T135] mem_cgroup_css_offline (mm/memcontrol.c:453 mm/memcontrol.c:463 mm/memcontrol.c:5382)
[ 41.069103][ T135] ? lock_is_held_type (kernel/locking/lockdep.c:5382 kernel/locking/lockdep.c:5684)
[ 41.069858][ T135] css_killed_work_fn (kernel/cgroup/cgroup.c:5279 kernel/cgroup/cgroup.c:5554)
[ 41.070637][ T135] process_one_work (arch/x86/include/asm/jump_label.h:27 include/linux/jump_label.h:207 include/trace/events/workqueue.h:108 kernel/workqueue.c:2294)
[ 41.071459][ T135] ? rcu_read_unlock (include/linux/rcupdate.h:723 (discriminator 5))
[ 41.072308][ T135] ? pwq_dec_nr_in_flight (kernel/workqueue.c:2184)
[ 41.073231][ T135] ? rwlock_bug+0xc0/0xc0
[ 41.073922][ T135] worker_thread (include/linux/list.h:292 kernel/workqueue.c:2437)
[ 41.074572][ T135] ? __kthread_parkme (arch/x86/include/asm/bitops.h:207 (discriminator 4) include/asm-generic/bitops/instrumented-non-atomic.h:135 (discriminator 4) kernel/kthread.c:270 (discriminator 4))
[ 41.075220][ T135] ? schedule (arch/x86/include/asm/bitops.h:207 (discriminator 1) include/asm-generic/bitops/instrumented-non-atomic.h:135 (discriminator 1) include/linux/thread_info.h:118 (discriminator 1) include/linux/sched.h:2154 (discriminator 1) kernel/sched/core.c:6462 (discriminator 1))
[ 41.075942][ T135] ? process_one_work (kernel/workqueue.c:2379)
[ 41.076755][ T135] ? process_one_work (kernel/workqueue.c:2379)
[ 41.077600][ T135] kthread (kernel/kthread.c:376)
[ 41.078174][ T135] ? kthread_complete_and_exit (kernel/kthread.c:331)
[ 41.078951][ T135] ret_from_fork (arch/x86/entry/entry_64.S:304)
[ 41.079668][ T135] </TASK>
[ OK ] Started Load Kernel Modules.
[ OK ] Mounted RPC Pipe File System.
[ OK ] Started Remount Root and Kernel File Systems.
[ OK ] Mounted Kernel Debug File System.
[ OK ] Mounted Huge Pages File System.
Starting Load/Save Random Seed...
Starting Create System Users...
Starting Apply Kernel Variables...
Mounting Kernel Configuration File System...
[ OK ] Started Load/Save Random Seed.
[ OK ] Started Create System Users.
[ OK ] Started Apply Kernel Variables.
[ OK ] Mounted Kernel Configuration File System.
Starting Create Static Device Nodes in /dev...
[ OK ] Started Create Static Device Nodes in /dev.
[ OK ] Reached target Local File Systems (Pre).
[ OK ] Reached target Local File Systems.
Starting Preprocess NFS configuration...
Starting udev Kernel Device Manager...
[ OK ] Started Journal Service.
[ OK ] Started Preprocess NFS configuration.
[ OK ] Reached target NFS client services.
Starting Flush Journal to Persistent Storage...
[ OK ] Started udev Kernel Device Manager.
[ OK ] Started Flush Journal to Persistent Storage.
Starting Create Volatile Files and Directories...
[ OK ] Started Create Volatile Files and Directories.
Starting Network Time Synchronization...
Starting RPC bind portmap service...
Starting Update UTMP about System Boot/Shutdown...
[ OK ] Started RPC bind portmap service.
[ OK ] Reached target RPC Port Mapper.
[ OK ] Reached target Remote File Systems (Pre).
[ OK ] Reached target Remote File Systems.
[ OK ] Started Update UTMP about System Boot/Shutdown.
[ OK ] Started Network Time Synchronization.
[ OK ] Reached target System Time Synchronized.


To reproduce:

# build kernel
cd linux
cp config-5.18.0-00009-gbec0ae12106e .config
make HOSTCC=gcc-11 CC=gcc-11 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage modules
make HOSTCC=gcc-11 CC=gcc-11 ARCH=x86_64 INSTALL_MOD_PATH=<mod-install-dir> modules_install
cd <mod-install-dir>
find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz


git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email

# if come across any failure that blocks the test,
# please remove ~/.lkp and /lkp dir to run from a clean state.



--
0-DAY CI Kernel Test Service
https://01.org/lkp



Attachments:
(No filename) (8.56 kB)
config-5.18.0-00009-gbec0ae12106e (169.64 kB)
job-script (4.97 kB)
dmesg.xz (14.98 kB)

2022-05-25 14:46:03

by Johannes Weiner

[permalink] [raw]
Subject: Re: [PATCH v4 02/11] mm: memcontrol: introduce compact_folio_lruvec_lock_irqsave

On Tue, May 24, 2022 at 02:05:42PM +0800, Muchun Song wrote:
> If we reuse the objcg APIs to charge LRU pages, folio_memcg()
> can change when the LRU pages are reparented. In this case, we need
> to acquire the new lruvec lock.
>
> lruvec = folio_lruvec(folio);
>
> // The page is reparented.
>
> compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
>
> // Acquired the wrong lruvec lock and need to retry.
>
> But compact_lock_irqsave() only takes the lruvec lock as a parameter,
> so it cannot detect this change. If it took the folio as a parameter
> to acquire the lruvec lock, then when the page's memcg changes we could
> use folio_memcg() to detect whether we need to reacquire the new
> lruvec lock. So compact_lock_irqsave() is not suitable for us.
> Similar to folio_lruvec_lock_irqsave(), introduce
> compact_folio_lruvec_lock_irqsave() to acquire the lruvec lock in
> the compaction routine.
>
> Signed-off-by: Muchun Song <[email protected]>

This looks generally good to me.

It did raise the question of how dereferencing the lruvec is safe before
the lock is acquired when reparenting can race. The answer is in the next
patch, where you add the rcu_read_lock(). Since the patches aren't big,
it would probably be better to merge them.

> @@ -509,6 +509,29 @@ static bool compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
> return true;
> }
>
> +static struct lruvec *
> +compact_folio_lruvec_lock_irqsave(struct folio *folio, unsigned long *flags,
> + struct compact_control *cc)
> +{
> + struct lruvec *lruvec;
> +
> + lruvec = folio_lruvec(folio);
> +
> + /* Track if the lock is contended in async mode */
> + if (cc->mode == MIGRATE_ASYNC && !cc->contended) {
> + if (spin_trylock_irqsave(&lruvec->lru_lock, *flags))
> + goto out;
> +
> + cc->contended = true;
> + }
> +
> + spin_lock_irqsave(&lruvec->lru_lock, *flags);

Can you implement this on top of the existing one?

lruvec = folio_lruvec(folio);
compact_lock_irqsave(&lruvec->lru_lock, flags);
lruvec_memcg_debug(lruvec, folio);
return lruvec;

2022-05-25 14:58:49

by Muchun Song

[permalink] [raw]
Subject: Re: [PATCH v4 01/11] mm: memcontrol: prepare objcg API for non-kmem usage

On Tue, May 24, 2022 at 07:36:24PM -0700, Roman Gushchin wrote:
> On Tue, May 24, 2022 at 02:05:41PM +0800, Muchun Song wrote:
> > Pagecache pages are charged at allocation time and hold a
> > reference to the original memory cgroup until they are reclaimed.
> > Depending on the memory pressure, specific patterns of the page
> > sharing between different cgroups and the cgroup creation and
> > destruction rates, a large number of dying memory cgroups can be
> > pinned by pagecache pages. It makes the page reclaim less efficient
> > and wastes memory.
> >
> > We can convert LRU pages and most other raw memcg pins to the objcg
> > direction to fix this problem, and then the page->memcg will always
> > point to an object cgroup pointer.
> >
> > Therefore, the infrastructure of objcg no longer only serves
> > CONFIG_MEMCG_KMEM. In this patch, we move the infrastructure of the
> > objcg out of the scope of the CONFIG_MEMCG_KMEM so that the LRU pages
> > can reuse it to charge pages.
> >
> > We know that the LRU pages are not accounted at the root level. But
> > the page->memcg_data points to the root_mem_cgroup. So the
> > page->memcg_data of the LRU pages always points to a valid pointer.
> > But the root_mem_cgroup does not have an object cgroup. If we use
> > obj_cgroup APIs to charge the LRU pages, we should set the
> > page->memcg_data to a root object cgroup. So we also allocate an
> > object cgroup for the root_mem_cgroup.
> >
> > Signed-off-by: Muchun Song <[email protected]>
> > ---
> > include/linux/memcontrol.h | 5 ++--
> > mm/memcontrol.c | 60 +++++++++++++++++++++++++---------------------
> > 2 files changed, 35 insertions(+), 30 deletions(-)
> >
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index 89b14729d59f..ff1c1dd7e762 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -315,10 +315,10 @@ struct mem_cgroup {
> >
> > #ifdef CONFIG_MEMCG_KMEM
> > int kmemcg_id;
> > +#endif
> > struct obj_cgroup __rcu *objcg;
> > /* list of inherited objcgs, protected by objcg_lock */
> > struct list_head objcg_list;
> > -#endif
> >
> > MEMCG_PADDING(_pad2_);
> >
> > @@ -851,8 +851,7 @@ static inline struct mem_cgroup *lruvec_memcg(struct lruvec *lruvec)
> > * parent_mem_cgroup - find the accounting parent of a memcg
> > * @memcg: memcg whose parent to find
> > *
> > - * Returns the parent memcg, or NULL if this is the root or the memory
> > - * controller is in legacy no-hierarchy mode.
> > + * Returns the parent memcg, or NULL if this is the root.
> > */
> > static inline struct mem_cgroup *parent_mem_cgroup(struct mem_cgroup *memcg)
> > {
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index 598fece89e2b..6de0d3e53eb1 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -254,9 +254,9 @@ struct mem_cgroup *vmpressure_to_memcg(struct vmpressure *vmpr)
> > return container_of(vmpr, struct mem_cgroup, vmpressure);
> > }
> >
> > -#ifdef CONFIG_MEMCG_KMEM
> > static DEFINE_SPINLOCK(objcg_lock);
> >
> > +#ifdef CONFIG_MEMCG_KMEM
> > bool mem_cgroup_kmem_disabled(void)
> > {
> > return cgroup_memory_nokmem;
> > @@ -265,12 +265,10 @@ bool mem_cgroup_kmem_disabled(void)
> > static void obj_cgroup_uncharge_pages(struct obj_cgroup *objcg,
> > unsigned int nr_pages);
> >
> > -static void obj_cgroup_release(struct percpu_ref *ref)
> > +static void obj_cgroup_release_bytes(struct obj_cgroup *objcg)
> > {
> > - struct obj_cgroup *objcg = container_of(ref, struct obj_cgroup, refcnt);
> > unsigned int nr_bytes;
> > unsigned int nr_pages;
> > - unsigned long flags;
> >
> > /*
> > * At this point all allocated objects are freed, and
> > @@ -284,9 +282,9 @@ static void obj_cgroup_release(struct percpu_ref *ref)
> > * 3) CPU1: a process from another memcg is allocating something,
> > * the stock if flushed,
> > * objcg->nr_charged_bytes = PAGE_SIZE - 92
> > - * 5) CPU0: we do release this object,
> > + * 4) CPU0: we do release this object,
> > * 92 bytes are added to stock->nr_bytes
> > - * 6) CPU0: stock is flushed,
> > + * 5) CPU0: stock is flushed,
> > * 92 bytes are added to objcg->nr_charged_bytes
> > *
> > * In the result, nr_charged_bytes == PAGE_SIZE.
> > @@ -298,6 +296,19 @@ static void obj_cgroup_release(struct percpu_ref *ref)
> >
> > if (nr_pages)
> > obj_cgroup_uncharge_pages(objcg, nr_pages);
> > +}
> > +#else
> > +static inline void obj_cgroup_release_bytes(struct obj_cgroup *objcg)
> > +{
> > +}
> > +#endif
> > +
> > +static void obj_cgroup_release(struct percpu_ref *ref)
> > +{
> > + struct obj_cgroup *objcg = container_of(ref, struct obj_cgroup, refcnt);
> > + unsigned long flags;
> > +
> > + obj_cgroup_release_bytes(objcg);
> >
> > spin_lock_irqsave(&objcg_lock, flags);
> > list_del(&objcg->list);
> > @@ -326,10 +337,10 @@ static struct obj_cgroup *obj_cgroup_alloc(void)
> > return objcg;
> > }
> >
> > -static void memcg_reparent_objcgs(struct mem_cgroup *memcg,
> > - struct mem_cgroup *parent)
> > +static void memcg_reparent_objcgs(struct mem_cgroup *memcg)
> > {
> > struct obj_cgroup *objcg, *iter;
> > + struct mem_cgroup *parent = parent_mem_cgroup(memcg);
> >
> > objcg = rcu_replace_pointer(memcg->objcg, NULL, true);
> >
> > @@ -348,6 +359,7 @@ static void memcg_reparent_objcgs(struct mem_cgroup *memcg,
> > percpu_ref_kill(&objcg->refcnt);
> > }
> >
> > +#ifdef CONFIG_MEMCG_KMEM
> > /*
> > * A lot of the calls to the cache allocation functions are expected to be
> > * inlined by the compiler. Since the calls to memcg_slab_pre_alloc_hook() are
> > @@ -3589,21 +3601,12 @@ static u64 mem_cgroup_read_u64(struct cgroup_subsys_state *css,
> > #ifdef CONFIG_MEMCG_KMEM
> > static int memcg_online_kmem(struct mem_cgroup *memcg)
> > {
> > - struct obj_cgroup *objcg;
> > -
> > if (cgroup_memory_nokmem)
> > return 0;
> >
> > if (unlikely(mem_cgroup_is_root(memcg)))
> > return 0;
> >
> > - objcg = obj_cgroup_alloc();
> > - if (!objcg)
> > - return -ENOMEM;
> > -
> > - objcg->memcg = memcg;
> > - rcu_assign_pointer(memcg->objcg, objcg);
> > -
> > static_branch_enable(&memcg_kmem_enabled_key);
> >
> > memcg->kmemcg_id = memcg->id.id;
> > @@ -3613,27 +3616,19 @@ static int memcg_online_kmem(struct mem_cgroup *memcg)
> >
> > static void memcg_offline_kmem(struct mem_cgroup *memcg)
> > {
> > - struct mem_cgroup *parent;
> > -
> > if (cgroup_memory_nokmem)
> > return;
> >
> > if (unlikely(mem_cgroup_is_root(memcg)))
> > return;
> >
> > - parent = parent_mem_cgroup(memcg);
> > - if (!parent)
> > - parent = root_mem_cgroup;
> > -
> > - memcg_reparent_objcgs(memcg, parent);
> > -
> > /*
> > * After we have finished memcg_reparent_objcgs(), all list_lrus
> > * corresponding to this cgroup are guaranteed to remain empty.
> > * The ordering is imposed by list_lru_node->lock taken by
> > * memcg_reparent_list_lrus().
> > */
>
> This comment doesn't look to be correct after these changes. Should it
> be fixed? Or the ordering should be fixed too?
>

I think I could drop those comments since they are out-of-date. We no longer
need this ordering since

commit 5abc1e37afa0 ("mm: list_lru: allocate list_lru_one only when needed"),

which does the reparenting in memcg_reparent_list_lrus(), right?

> > - memcg_reparent_list_lrus(memcg, parent);
> > + memcg_reparent_list_lrus(memcg, parent_mem_cgroup(memcg));
> We effectively dropped this:
> if (!parent)
> parent = root_mem_cgroup;
> Is it safe? (assuming v1 non-hierarchical mode, it's usually when all
> is getting complicated)

Since the non-hierarchical mode was deprecated by commit bef8620cd8e0
("mm: memcg: deprecate the non-hierarchical mode"),
parent_mem_cgroup() can only return NULL for the root memcg.
However, the root memcg will never be offlined, so it is safe. Right?

Thanks.

2022-05-25 16:07:40

by Muchun Song

[permalink] [raw]
Subject: Re: [PATCH v4 03/11] mm: memcontrol: make lruvec lock safe when LRU pages are reparented

On Wed, May 25, 2022 at 08:30:15AM -0400, Johannes Weiner wrote:
> On Wed, May 25, 2022 at 05:53:30PM +0800, Muchun Song wrote:
> > On Tue, May 24, 2022 at 03:27:20PM -0400, Johannes Weiner wrote:
> > > On Tue, May 24, 2022 at 02:05:43PM +0800, Muchun Song wrote:
> > > > The diagram below shows how to make the folio lruvec lock safe when LRU
> > > > pages are reparented.
> > > >
> > > > folio_lruvec_lock(folio)
> > > > retry:
> > > > lruvec = folio_lruvec(folio);
> > > >
> > > > // The folio is reparented at this time.
> > > > spin_lock(&lruvec->lru_lock);
> > > >
> > > > if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio)))
> > > > // Acquired the wrong lruvec lock and need to retry.
> > > > // Because this folio is on the parent memcg lruvec list.
> > > > goto retry;
> > > >
> > > > // If we reach here, it means that folio_memcg(folio) is stable.
> > > >
> > > > memcg_reparent_objcgs(memcg)
> > > > // lruvec belongs to memcg and lruvec_parent belongs to parent memcg.
> > > > spin_lock(&lruvec->lru_lock);
> > > > spin_lock(&lruvec_parent->lru_lock);
> > > >
> > > > // Move all the pages from the lruvec list to the parent lruvec list.
> > > >
> > > > spin_unlock(&lruvec_parent->lru_lock);
> > > > spin_unlock(&lruvec->lru_lock);
> > > >
> > > > After we acquire the lruvec lock, we need to check whether the folio is
> > > > reparented. If so, we need to reacquire the new lruvec lock. The LRU
> > > > pages reparenting routine will also acquire the lruvec lock (this will
> > > > be implemented in a later patch), so folio_memcg() cannot change while
> > > > we hold the lruvec lock.
> > > >
> > > > Since lruvec_memcg(lruvec) is always equal to folio_memcg(folio) after
> > > > we hold the lruvec lock, the lruvec_memcg_debug() check is pointless,
> > > > so remove it.
> > > >
> > > > This is a preparation for reparenting the LRU pages.
> > > >
> > > > Signed-off-by: Muchun Song <[email protected]>
> > >
> > > This looks good to me. Just one question:
> > >
> > > > @@ -1230,10 +1213,23 @@ void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
> > > > */
> > > > struct lruvec *folio_lruvec_lock(struct folio *folio)
> > > > {
> > > > - struct lruvec *lruvec = folio_lruvec(folio);
> > > > + struct lruvec *lruvec;
> > > >
> > > > + rcu_read_lock();
> > > > +retry:
> > > > + lruvec = folio_lruvec(folio);
> > > > spin_lock(&lruvec->lru_lock);
> > > > - lruvec_memcg_debug(lruvec, folio);
> > > > +
> > > > + if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
> > > > + spin_unlock(&lruvec->lru_lock);
> > > > + goto retry;
> > > > + }
> > > > +
> > > > + /*
> > > > + * Preemption is disabled in the internal of spin_lock, which can serve
> > > > + * as RCU read-side critical sections.
> > > > + */
> > > > + rcu_read_unlock();
> > >
> > > The code looks right to me, but I don't understand the comment: why do
> > > we care that the rcu read-side continues? With the lru_lock held,
> > > reparenting is on hold and the lruvec cannot be rcu-freed anyway, no?
> > >
> >
> > Right. We could hold the rcu read lock until the end of reparenting. So you
> > mean we should do rcu_read_unlock() in folio_lruvec_lock()?
>
> The comment seems to suggest that disabling preemption is what keeps
> the lruvec alive. But it's the lru_lock that keeps it alive. The
> cgroup destruction path tries to take the lru_lock long before it even
> gets to synchronize_rcu(). Once you hold the lru_lock, having an
> implied read-side critical section as well doesn't seem to matter.
>

Well, I thought that spinlocks imply read-side critical sections
because they disable preemption (I learned from the comments above
synchronize_rcu() that regions where interrupts, preemption, or softirqs
are disabled also serve as RCU read-side critical sections). So I have a
question: is that still true on a PREEMPT_RT kernel (I am not familiar
with it)?

> Should the comment be deleted?
>

I think we could remove the comment. If the answer to the above question is
no, it seems like we should keep holding the rcu read lock instead.

Thanks.
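
For what it's worth, a sketch of how folio_lruvec_lock() might read if the
comment is dropped along the lines Johannes suggests. The retry loop is the
one from the hunk quoted above; the replacement comment wording is only an
assumption, not the final patch:

```c
struct lruvec *folio_lruvec_lock(struct folio *folio)
{
	struct lruvec *lruvec;

	rcu_read_lock();
retry:
	lruvec = folio_lruvec(folio);
	spin_lock(&lruvec->lru_lock);

	if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
		/* The folio was reparented; retry with the new lruvec. */
		spin_unlock(&lruvec->lru_lock);
		goto retry;
	}

	/*
	 * Reparenting itself takes lru_lock, so once we hold it the
	 * lruvec cannot go away; RCU was only needed to make the
	 * folio_lruvec() dereference above safe.
	 */
	rcu_read_unlock();

	return lruvec;
}
```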

2022-05-25 17:09:26

by Muchun Song

[permalink] [raw]
Subject: Re: [PATCH v4 02/11] mm: memcontrol: introduce compact_folio_lruvec_lock_irqsave

On Tue, May 24, 2022 at 03:22:55PM -0400, Johannes Weiner wrote:
> On Tue, May 24, 2022 at 02:05:42PM +0800, Muchun Song wrote:
> > If we reuse the objcg APIs to charge LRU pages, folio_memcg()
> > can change when the LRU pages are reparented. In this case, we need
> > to acquire the new lruvec lock.
> >
> > lruvec = folio_lruvec(folio);
> >
> > // The page is reparented.
> >
> > compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
> >
> > // Acquired the wrong lruvec lock and need to retry.
> >
> > But compact_lock_irqsave() only takes the lruvec lock as a parameter,
> > so it cannot detect this change. If it took the folio as a parameter
> > to acquire the lruvec lock, then when the page's memcg changes we could
> > use folio_memcg() to detect whether we need to reacquire the new
> > lruvec lock. So compact_lock_irqsave() is not suitable for us.
> > Similar to folio_lruvec_lock_irqsave(), introduce
> > compact_folio_lruvec_lock_irqsave() to acquire the lruvec lock in
> > the compaction routine.
> >
> > Signed-off-by: Muchun Song <[email protected]>
>
> This looks generally good to me.
>
> It did raise the question of how dereferencing the lruvec is safe before
> the lock is acquired when reparenting can race. The answer is in the next
> patch, where you add the rcu_read_lock(). Since the patches aren't big,
> it would probably be better to merge them.
>

Will do in v5.

> > @@ -509,6 +509,29 @@ static bool compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
> > return true;
> > }
> >
> > +static struct lruvec *
> > +compact_folio_lruvec_lock_irqsave(struct folio *folio, unsigned long *flags,
> > + struct compact_control *cc)
> > +{
> > + struct lruvec *lruvec;
> > +
> > + lruvec = folio_lruvec(folio);
> > +
> > + /* Track if the lock is contended in async mode */
> > + if (cc->mode == MIGRATE_ASYNC && !cc->contended) {
> > + if (spin_trylock_irqsave(&lruvec->lru_lock, *flags))
> > + goto out;
> > +
> > + cc->contended = true;
> > + }
> > +
> > + spin_lock_irqsave(&lruvec->lru_lock, *flags);
>
> Can you implement this on top of the existing one?
>
> lruvec = folio_lruvec(folio);
> compact_lock_irqsave(&lruvec->lru_lock, flags);
> lruvec_memcg_debug(lruvec, folio);
> return lruvec;
>

I'll do a try. Thanks for your suggestions.
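
For reference, a sketch of what that layering might look like, keeping the
existing three-argument compact_lock_irqsave() prototype shown in the hunk
above (illustrative only, not the v5 patch; patch 3 later removes
lruvec_memcg_debug(), so that call would go away once the patches are merged):

```c
static struct lruvec *
compact_folio_lruvec_lock_irqsave(struct folio *folio, unsigned long *flags,
				  struct compact_control *cc)
{
	struct lruvec *lruvec = folio_lruvec(folio);

	/* Reuses the MIGRATE_ASYNC trylock and cc->contended handling. */
	compact_lock_irqsave(&lruvec->lru_lock, flags, cc);
	lruvec_memcg_debug(lruvec, folio);
	return lruvec;
}
```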

2022-05-25 17:51:24

by Roman Gushchin

[permalink] [raw]
Subject: Re: [PATCH v4 04/11] mm: vmscan: rework move_pages_to_lru()

On Tue, May 24, 2022 at 02:05:44PM +0800, Muchun Song wrote:
> In the later patch, we will reparent the LRU pages. The pages moved to
> appropriate LRU list can be reparented during the process of the
> move_pages_to_lru(). So holding a lruvec lock by the caller is wrong, we
> should use the more general interface of folio_lruvec_relock_irq() to
> acquire the correct lruvec lock.
>
> Signed-off-by: Muchun Song <[email protected]>

With changes asked by Johannes :

Acked-by: Roman Gushchin <[email protected]>

Thanks!

2022-05-25 18:38:16

by Muchun Song

[permalink] [raw]
Subject: Re: [PATCH v4 10/11] mm: lru: add VM_BUG_ON_FOLIO to lru maintenance function

On Tue, May 24, 2022 at 03:44:02PM -0400, Johannes Weiner wrote:
> On Tue, May 24, 2022 at 02:05:50PM +0800, Muchun Song wrote:
> > We need to make sure that the page is deleted from or added to the
> > correct lruvec list. So add a VM_BUG_ON_FOLIO() to catch invalid
> > users.
> >
> > Signed-off-by: Muchun Song <[email protected]>
>
> Makes sense, but please use VM_WARN_ON_ONCE_FOLIO() so the machine can
> continue limping along for extracting debug information.
>

Make sense. Will do.

Thanks.
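
For example, the check in lruvec_add_folio() from the patch would then become
something like the following (a sketch; it assumes a VM_WARN_ON_ONCE_FOLIO()
macro is available alongside the other VM_*_FOLIO helpers):

```c
static __always_inline
void lruvec_add_folio(struct lruvec *lruvec, struct folio *folio)
{
	enum lru_list lru = folio_lru_list(folio);

	/* Warn once instead of crashing, so debug data can still be gathered. */
	VM_WARN_ON_ONCE_FOLIO(!folio_matches_lruvec(folio, lruvec), folio);

	update_lru_size(lruvec, lru, folio_zonenum(folio),
			folio_nr_pages(folio));
	if (lru != LRU_UNEVICTABLE)
		list_add(&folio->lru, &lruvec->lists[lru]);
}
```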

2022-05-25 19:06:31

by Muchun Song

[permalink] [raw]
Subject: Re: [PATCH v4 06/11] mm: thp: make split queue lock safe when LRU pages are reparented

On Tue, May 24, 2022 at 07:54:35PM -0700, Roman Gushchin wrote:
> On Tue, May 24, 2022 at 02:05:46PM +0800, Muchun Song wrote:
> > Similar to the lruvec lock, we use the same approach to make the split
> > queue lock safe when LRU pages are reparented.
> >
> > Signed-off-by: Muchun Song <[email protected]>
>
> Please, merge this into the previous patch (like Johannes asked
> for the lruvec counterpart).
>

Will do in v5.

> And add:
> Acked-by: Roman Gushchin <[email protected]> .
>

Thanks Roman.


2022-05-25 19:44:43

by Johannes Weiner

[permalink] [raw]
Subject: Re: [PATCH v4 01/11] mm: memcontrol: prepare objcg API for non-kmem usage

On Wed, May 25, 2022 at 03:57:17PM +0800, Muchun Song wrote:
> On Tue, May 24, 2022 at 07:36:24PM -0700, Roman Gushchin wrote:
> > On Tue, May 24, 2022 at 02:05:41PM +0800, Muchun Song wrote:
> > > - memcg_reparent_list_lrus(memcg, parent);
> > > + memcg_reparent_list_lrus(memcg, parent_mem_cgroup(memcg));
> > We effectively dropped this:
> > if (!parent)
> > parent = root_mem_cgroup;
> > Is it safe? (assuming v1 non-hierarchical mode, it's usually when all
> > is getting complicated)

Yes, it's correct. But it's a quiet, incidental cleanup, so I can see
why it's confusing. It might be better to split the dead code removal
into a separate patch - with the following in the changelog ;):

> Since the non-hierarchical mode was deprecated by commit bef8620cd8e0
> ("mm: memcg: deprecate the non-hierarchical mode"),
> parent_mem_cgroup() can only return NULL for the root memcg.
> However, the root memcg will never be offlined, so it is safe. Right?

2022-05-25 19:45:55

by Muchun Song

[permalink] [raw]
Subject: Re: [PATCH v4 10/11] mm: lru: add VM_BUG_ON_FOLIO to lru maintenance function

On Tue, May 24, 2022 at 07:40:05PM -0700, Roman Gushchin wrote:
> On Tue, May 24, 2022 at 02:05:50PM +0800, Muchun Song wrote:
> > We need to make sure that the page is deleted from or added to the
> > correct lruvec list. So add a VM_BUG_ON_FOLIO() to catch invalid
> > users.
> >
> > Signed-off-by: Muchun Song <[email protected]>
> > ---
> > include/linux/mm_inline.h | 6 ++++++
> > mm/vmscan.c | 1 -
> > 2 files changed, 6 insertions(+), 1 deletion(-)
> >
> > diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
> > index ac32125745ab..30d2393da613 100644
> > --- a/include/linux/mm_inline.h
> > +++ b/include/linux/mm_inline.h
> > @@ -97,6 +97,8 @@ void lruvec_add_folio(struct lruvec *lruvec, struct folio *folio)
> > {
> > enum lru_list lru = folio_lru_list(folio);
> >
> > + VM_BUG_ON_FOLIO(!folio_matches_lruvec(folio, lruvec), folio);
> > +
> > update_lru_size(lruvec, lru, folio_zonenum(folio),
> > folio_nr_pages(folio));
> > if (lru != LRU_UNEVICTABLE)
> > @@ -114,6 +116,8 @@ void lruvec_add_folio_tail(struct lruvec *lruvec, struct folio *folio)
> > {
> > enum lru_list lru = folio_lru_list(folio);
> >
> > + VM_BUG_ON_FOLIO(!folio_matches_lruvec(folio, lruvec), folio);
> > +
> > update_lru_size(lruvec, lru, folio_zonenum(folio),
> > folio_nr_pages(folio));
> > /* This is not expected to be used on LRU_UNEVICTABLE */
> > @@ -131,6 +135,8 @@ void lruvec_del_folio(struct lruvec *lruvec, struct folio *folio)
> > {
> > enum lru_list lru = folio_lru_list(folio);
> >
> > + VM_BUG_ON_FOLIO(!folio_matches_lruvec(folio, lruvec), folio);
> > +
> > if (lru != LRU_UNEVICTABLE)
> > list_del(&folio->lru);
> > update_lru_size(lruvec, lru, folio_zonenum(folio),
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 761d5e0dd78d..6c9e2eafc8f9 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -2281,7 +2281,6 @@ static unsigned int move_pages_to_lru(struct list_head *list)
> > continue;
> > }
> >
> > - VM_BUG_ON_PAGE(!folio_matches_lruvec(folio, lruvec), page);
>
> The commit log describes well why we need to add new BUG_ON's. Please, add
> something on why this is removed.
>

OK. Will do in v5.

Thanks.

2022-05-25 20:01:59

by Muchun Song

[permalink] [raw]
Subject: Re: [PATCH v4 01/11] mm: memcontrol: prepare objcg API for non-kmem usage

On Wed, May 25, 2022 at 08:37:58AM -0400, Johannes Weiner wrote:
> On Wed, May 25, 2022 at 03:57:17PM +0800, Muchun Song wrote:
> > On Tue, May 24, 2022 at 07:36:24PM -0700, Roman Gushchin wrote:
> > > On Tue, May 24, 2022 at 02:05:41PM +0800, Muchun Song wrote:
> > > > - memcg_reparent_list_lrus(memcg, parent);
> > > > + memcg_reparent_list_lrus(memcg, parent_mem_cgroup(memcg));
> > > We effectively dropped this:
> > > if (!parent)
> > > parent = root_mem_cgroup;
> > > Is it safe? (assuming v1 non-hierarchical mode, it's usually when all
> > > is getting complicated)
>
> Yes, it's correct. But it's a quiet, incidental cleanup, so I can see
> why it's confusing. It might be better to split the dead code removal
> into a separate patch - with the following in the changelog ;):
>

Well, I can split the dead code removal into a separate patch. :-)

Thanks.

> > Since the non-hierarchical mode was deprecated by commit bef8620cd8e0
> > ("mm: memcg: deprecate the non-hierarchical mode"),
> > parent_mem_cgroup() can only return NULL for the root memcg.
> > However, the root memcg will never be offlined, so it is safe. Right?
>

2022-05-25 20:40:22

by Roman Gushchin

[permalink] [raw]
Subject: Re: [PATCH v4 07/11] mm: memcontrol: make all the callers of {folio,page}_memcg() safe

On Tue, May 24, 2022 at 02:05:47PM +0800, Muchun Song wrote:
> When we use objcg APIs to charge the LRU pages, the page will not hold
> a reference to the memcg associated with the page. So the caller of the
> {folio,page}_memcg() should hold an rcu read lock or obtain a reference
> to the memcg associated with the page to protect memcg from being
> released. So introduce get_mem_cgroup_from_{page,folio}() to obtain a
> reference to the memory cgroup associated with the page.
>
> In this patch, make all the callers hold an rcu read lock or obtain a
> reference to the memcg to protect memcg from being released when the LRU
> pages reparented.
>
> We do not need to adjust the callers of {folio,page}_memcg() during
> the whole process of mem_cgroup_move_task(), because the cgroup migration
> and memory cgroup offlining are serialized by @cgroup_mutex. In this
> routine, the LRU pages cannot be reparented to their parent memory cgroup,
> so {folio,page}_memcg() is stable and cannot be released.
>
> This is a preparation for reparenting the LRU pages.
>
> Signed-off-by: Muchun Song <[email protected]>

Overall the patch looks good to me (some nits below).

> ---
> fs/buffer.c | 4 +--
> fs/fs-writeback.c | 23 ++++++++--------
> include/linux/memcontrol.h | 51 ++++++++++++++++++++++++++++++++---
> include/trace/events/writeback.h | 5 ++++
> mm/memcontrol.c | 58 ++++++++++++++++++++++++++++++----------
> mm/migrate.c | 4 +++
> mm/page_io.c | 5 ++--
> 7 files changed, 117 insertions(+), 33 deletions(-)
>
> diff --git a/fs/buffer.c b/fs/buffer.c
> index 2b5561ae5d0b..80975a457670 100644
> --- a/fs/buffer.c
> +++ b/fs/buffer.c
> @@ -819,8 +819,7 @@ struct buffer_head *alloc_page_buffers(struct page *page, unsigned long size,
> if (retry)
> gfp |= __GFP_NOFAIL;
>
> - /* The page lock pins the memcg */
> - memcg = page_memcg(page);
> + memcg = get_mem_cgroup_from_page(page);

Looking at these changes I wonder if we need to remove unsafe getters or
at least add a BOLD comment on how/when it's safe to use them.

> old_memcg = set_active_memcg(memcg);
>
> head = NULL;
> @@ -840,6 +839,7 @@ struct buffer_head *alloc_page_buffers(struct page *page, unsigned long size,
> set_bh_page(bh, page, offset);
> }
> out:
> + mem_cgroup_put(memcg);
> set_active_memcg(old_memcg);
> return head;
> /*
> diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
> index 1fae0196292a..56612ace8778 100644
> --- a/fs/fs-writeback.c
> +++ b/fs/fs-writeback.c
> @@ -243,15 +243,13 @@ void __inode_attach_wb(struct inode *inode, struct page *page)
> if (inode_cgwb_enabled(inode)) {
> struct cgroup_subsys_state *memcg_css;
>
> - if (page) {
> - memcg_css = mem_cgroup_css_from_page(page);
> - wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC);
> - } else {
> - /* must pin memcg_css, see wb_get_create() */
> + /* must pin memcg_css, see wb_get_create() */
> + if (page)
> + memcg_css = get_mem_cgroup_css_from_page(page);
> + else
> memcg_css = task_get_css(current, memory_cgrp_id);
> - wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC);
> - css_put(memcg_css);
> - }
> + wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC);
> + css_put(memcg_css);
> }
>
> if (!wb)
> @@ -868,16 +866,16 @@ void wbc_account_cgroup_owner(struct writeback_control *wbc, struct page *page,
> if (!wbc->wb || wbc->no_cgroup_owner)
> return;
>
> - css = mem_cgroup_css_from_page(page);
> + css = get_mem_cgroup_css_from_page(page);
> /* dead cgroups shouldn't contribute to inode ownership arbitration */
> if (!(css->flags & CSS_ONLINE))
> - return;
> + goto out;
>
> id = css->id;
>
> if (id == wbc->wb_id) {
> wbc->wb_bytes += bytes;
> - return;
> + goto out;
> }
>
> if (id == wbc->wb_lcand_id)
> @@ -890,6 +888,9 @@ void wbc_account_cgroup_owner(struct writeback_control *wbc, struct page *page,
> wbc->wb_tcand_bytes += bytes;
> else
> wbc->wb_tcand_bytes -= min(bytes, wbc->wb_tcand_bytes);
> +
> +out:
> + css_put(css);
> }
> EXPORT_SYMBOL_GPL(wbc_account_cgroup_owner);
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 8c2f1ba2f471..3a0e2592434e 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -373,7 +373,7 @@ static inline bool folio_memcg_kmem(struct folio *folio);
> * a valid memcg, but can be atomically swapped to the parent memcg.
> *
> * The caller must ensure that the returned memcg won't be released:
> - * e.g. acquire the rcu_read_lock or css_set_lock.
> + * e.g. acquire the rcu_read_lock or objcg_lock or cgroup_mutex.
> */
> static inline struct mem_cgroup *obj_cgroup_memcg(struct obj_cgroup *objcg)
> {
> @@ -454,7 +454,37 @@ static inline struct mem_cgroup *page_memcg(struct page *page)
> return folio_memcg(page_folio(page));
> }
>
> -/**
> +/*
> + * get_mem_cgroup_from_folio - Obtain a reference on the memory cgroup
> + * associated with a folio.
> + * @folio: Pointer to the folio.
> + *
> + * Returns a pointer to the memory cgroup (and obtain a reference on it)
> + * associated with the folio, or NULL. This function assumes that the
> + * folio is known to have a proper memory cgroup pointer. It's not safe
> + * to call this function against some type of pages, e.g. slab pages or
> + * ex-slab pages.
> + */
> +static inline struct mem_cgroup *get_mem_cgroup_from_folio(struct folio *folio)
> +{
> + struct mem_cgroup *memcg;
> +
> + rcu_read_lock();
> +retry:
> + memcg = folio_memcg(folio);
> + if (unlikely(memcg && !css_tryget(&memcg->css)))
> + goto retry;
> + rcu_read_unlock();
> +
> + return memcg;
> +}
> +
> +static inline struct mem_cgroup *get_mem_cgroup_from_page(struct page *page)
> +{
> + return get_mem_cgroup_from_folio(page_folio(page));
> +}
> +
> +/*
> * folio_memcg_rcu - Locklessly get the memory cgroup associated with a folio.
> * @folio: Pointer to the folio.
> *
> @@ -873,7 +903,7 @@ static inline bool mm_match_cgroup(struct mm_struct *mm,
> return match;
> }
>
> -struct cgroup_subsys_state *mem_cgroup_css_from_page(struct page *page);
> +struct cgroup_subsys_state *get_mem_cgroup_css_from_page(struct page *page);
> ino_t page_cgroup_ino(struct page *page);
>
> static inline bool mem_cgroup_online(struct mem_cgroup *memcg)
> @@ -1047,10 +1077,13 @@ static inline void count_memcg_events(struct mem_cgroup *memcg,
> static inline void count_memcg_page_event(struct page *page,
> enum vm_event_item idx)
> {
> - struct mem_cgroup *memcg = page_memcg(page);
> + struct mem_cgroup *memcg;
>
> + rcu_read_lock();
> + memcg = page_memcg(page);
> if (memcg)
> count_memcg_events(memcg, idx, 1);
> + rcu_read_unlock();
> }
>
> static inline void count_memcg_event_mm(struct mm_struct *mm,
> @@ -1129,6 +1162,16 @@ static inline struct mem_cgroup *page_memcg(struct page *page)
> return NULL;
> }
>
> +static inline struct mem_cgroup *get_mem_cgroup_from_folio(struct folio *folio)
> +{
> + return NULL;
> +}
> +
> +static inline struct mem_cgroup *get_mem_cgroup_from_page(struct page *page)
> +{
> + return NULL;
> +}
> +
> static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio)
> {
> WARN_ON_ONCE(!rcu_read_lock_held());
> diff --git a/include/trace/events/writeback.h b/include/trace/events/writeback.h
> index 86b2a82da546..cdb822339f13 100644
> --- a/include/trace/events/writeback.h
> +++ b/include/trace/events/writeback.h
> @@ -258,6 +258,11 @@ TRACE_EVENT(track_foreign_dirty,
> __entry->ino = inode ? inode->i_ino : 0;
> __entry->memcg_id = wb->memcg_css->id;
> __entry->cgroup_ino = __trace_wb_assign_cgroup(wb);
> + /*
> + * TP_fast_assign() is under preemption disabled which can
> + * serve as an RCU read-side critical section so that the
> + * memcg returned by folio_memcg() cannot be freed.
> + */
> __entry->page_cgroup_ino = cgroup_ino(folio_memcg(folio)->css.cgroup);
> ),
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index b38a77f6696f..dcaf6cf5dc74 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -371,7 +371,7 @@ EXPORT_SYMBOL(memcg_kmem_enabled_key);
> #endif
>
> /**
> - * mem_cgroup_css_from_page - css of the memcg associated with a page
> + * get_mem_cgroup_css_from_page - get css of the memcg associated with a page
> * @page: page of interest
> *
> * If memcg is bound to the default hierarchy, css of the memcg associated
> @@ -381,13 +381,15 @@ EXPORT_SYMBOL(memcg_kmem_enabled_key);
> * If memcg is bound to a traditional hierarchy, the css of root_mem_cgroup
> * is returned.
> */
> -struct cgroup_subsys_state *mem_cgroup_css_from_page(struct page *page)
> +struct cgroup_subsys_state *get_mem_cgroup_css_from_page(struct page *page)
> {
> struct mem_cgroup *memcg;
>
> - memcg = page_memcg(page);
> + if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
> + return &root_mem_cgroup->css;
>
> - if (!memcg || !cgroup_subsys_on_dfl(memory_cgrp_subsys))
> + memcg = get_mem_cgroup_from_page(page);
> + if (!memcg)
> memcg = root_mem_cgroup;
>
> return &memcg->css;
> @@ -770,13 +772,13 @@ void __mod_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
> void __mod_lruvec_page_state(struct page *page, enum node_stat_item idx,
> int val)
> {
> - struct page *head = compound_head(page); /* rmap on tail pages */
> + struct folio *folio = page_folio(page); /* rmap on tail pages */
> struct mem_cgroup *memcg;
> pg_data_t *pgdat = page_pgdat(page);
> struct lruvec *lruvec;
>
> rcu_read_lock();
> - memcg = page_memcg(head);
> + memcg = folio_memcg(folio);
> /* Untracked pages have no memcg, no lruvec. Update only the node */
> if (!memcg) {
> rcu_read_unlock();
> @@ -2058,7 +2060,9 @@ void folio_memcg_lock(struct folio *folio)
> * The RCU lock is held throughout the transaction. The fast
> * path can get away without acquiring the memcg->move_lock
> * because page moving starts with an RCU grace period.
> - */
> + *
> + * The RCU lock also protects the memcg from being freed.
> + */
> rcu_read_lock();
>
> if (mem_cgroup_disabled())
> @@ -3296,7 +3300,7 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size)
> void split_page_memcg(struct page *head, unsigned int nr)
> {
> struct folio *folio = page_folio(head);
> - struct mem_cgroup *memcg = folio_memcg(folio);
> + struct mem_cgroup *memcg = get_mem_cgroup_from_folio(folio);
> int i;
>
> if (mem_cgroup_disabled() || !memcg)
> @@ -3309,6 +3313,8 @@ void split_page_memcg(struct page *head, unsigned int nr)
> obj_cgroup_get_many(__folio_objcg(folio), nr - 1);
> else
> css_get_many(&memcg->css, nr - 1);
> +
> + css_put(&memcg->css);
> }
>
> #ifdef CONFIG_MEMCG_SWAP
> @@ -4511,7 +4517,7 @@ void mem_cgroup_wb_stats(struct bdi_writeback *wb, unsigned long *pfilepages,
> void mem_cgroup_track_foreign_dirty_slowpath(struct folio *folio,
> struct bdi_writeback *wb)
> {
> - struct mem_cgroup *memcg = folio_memcg(folio);
> + struct mem_cgroup *memcg = get_mem_cgroup_from_folio(folio);
> struct memcg_cgwb_frn *frn;
> u64 now = get_jiffies_64();
> u64 oldest_at = now;
> @@ -4558,6 +4564,7 @@ void mem_cgroup_track_foreign_dirty_slowpath(struct folio *folio,
> frn->memcg_id = wb->memcg_css->id;
> frn->at = now;
> }
> + css_put(&memcg->css);
> }
>
> /* issue foreign writeback flushes for recorded foreign dirtying events */
> @@ -6092,6 +6099,14 @@ static void mem_cgroup_move_charge(void)
> atomic_dec(&mc.from->moving_account);
> }
>
> +/*
> + * The cgroup migration and memory cgroup offlining are serialized by
> + * @cgroup_mutex. If we reach here, it means that the LRU pages cannot
> + * be reparented to its parent memory cgroup. So during the whole process
> + * of mem_cgroup_move_task(), page_memcg(page) is stable. So we do not
> + * need to worry about the memcg (returned from page_memcg()) being
> + * released even if we do not hold an rcu read lock.
> + */
> static void mem_cgroup_move_task(void)
> {
> if (mc.to) {
> @@ -6895,7 +6910,7 @@ void mem_cgroup_migrate(struct folio *old, struct folio *new)
> if (folio_memcg(new))
> return;
>
> - memcg = folio_memcg(old);
> + memcg = get_mem_cgroup_from_folio(old);
> VM_WARN_ON_ONCE_FOLIO(!memcg, old);
> if (!memcg)
> return;
> @@ -6914,6 +6929,8 @@ void mem_cgroup_migrate(struct folio *old, struct folio *new)
> mem_cgroup_charge_statistics(memcg, nr_pages);
> memcg_check_events(memcg, folio_nid(new));
> local_irq_restore(flags);
> +
> + css_put(&memcg->css);
> }
>
> DEFINE_STATIC_KEY_FALSE(memcg_sockets_enabled_key);
> @@ -7100,6 +7117,10 @@ void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry)
> if (cgroup_subsys_on_dfl(memory_cgrp_subsys))
> return;
>
> + /*
> + * Interrupts should be disabled by the caller (see the comments below),
> + * which can serve as RCU read-side critical sections.
> + */
> memcg = folio_memcg(folio);
>
> VM_WARN_ON_ONCE_FOLIO(!memcg, folio);
> @@ -7165,15 +7186,16 @@ int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
> if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
> return 0;
>
> + rcu_read_lock();
> memcg = page_memcg(page);
>
> VM_WARN_ON_ONCE_PAGE(!memcg, page);
> if (!memcg)
> - return 0;
> + goto out;
>
> if (!entry.val) {
> memcg_memory_event(memcg, MEMCG_SWAP_FAIL);
> - return 0;
> + goto out;
> }
>
> memcg = mem_cgroup_id_get_online(memcg);
> @@ -7183,6 +7205,7 @@ int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
> memcg_memory_event(memcg, MEMCG_SWAP_MAX);
> memcg_memory_event(memcg, MEMCG_SWAP_FAIL);
> mem_cgroup_id_put(memcg);
> + rcu_read_unlock();
> return -ENOMEM;

If you add the "out" label, please use it here too.

> }
>
> @@ -7192,6 +7215,8 @@ int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
> oldid = swap_cgroup_record(entry, mem_cgroup_id(memcg), nr_pages);
> VM_BUG_ON_PAGE(oldid, page);
> mod_memcg_state(memcg, MEMCG_SWAP, nr_pages);
> +out:
> + rcu_read_unlock();
>
> return 0;
> }
> @@ -7246,17 +7271,22 @@ bool mem_cgroup_swap_full(struct page *page)
> if (cgroup_memory_noswap || !cgroup_subsys_on_dfl(memory_cgrp_subsys))
> return false;
>
> + rcu_read_lock();
> memcg = page_memcg(page);
> if (!memcg)
> - return false;
> + goto out;
>
> for (; memcg != root_mem_cgroup; memcg = parent_mem_cgroup(memcg)) {
> unsigned long usage = page_counter_read(&memcg->swap);
>
> if (usage * 2 >= READ_ONCE(memcg->swap.high) ||
> - usage * 2 >= READ_ONCE(memcg->swap.max))
> + usage * 2 >= READ_ONCE(memcg->swap.max)) {
> + rcu_read_unlock();
> return true;

Please, make something like
ret = true;
goto out;
here. It will be more consistent.

> + }
> }
> +out:
> + rcu_read_unlock();
>
> return false;
> }
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 6c31ee1e1c9b..59e97a8a64a0 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -430,6 +430,10 @@ int folio_migrate_mapping(struct address_space *mapping,
> struct lruvec *old_lruvec, *new_lruvec;
> struct mem_cgroup *memcg;
>
> + /*
> + * Irq is disabled, which can serve as RCU read-side critical
> + * sections.
> + */
> memcg = folio_memcg(folio);
> old_lruvec = mem_cgroup_lruvec(memcg, oldzone->zone_pgdat);
> new_lruvec = mem_cgroup_lruvec(memcg, newzone->zone_pgdat);
> diff --git a/mm/page_io.c b/mm/page_io.c
> index 89fbf3cae30f..a0d9cd68e87a 100644
> --- a/mm/page_io.c
> +++ b/mm/page_io.c
> @@ -221,13 +221,14 @@ static void bio_associate_blkg_from_page(struct bio *bio, struct page *page)
> struct cgroup_subsys_state *css;
> struct mem_cgroup *memcg;
>
> + rcu_read_lock();
> memcg = page_memcg(page);
> if (!memcg)
> - return;
> + goto out;
>
> - rcu_read_lock();
> css = cgroup_e_css(memcg->css.cgroup, &io_cgrp_subsys);
> bio_associate_blkg_from_css(bio, css);
> +out:
> rcu_read_unlock();
> }
> #else
> --
> 2.11.0
>
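
For illustration, the exit-path rework asked for above might look roughly like
this (a sketch based on the quoted hunk, not an actual follow-up patch; the
elided preamble is assumed unchanged):

```c
bool mem_cgroup_swap_full(struct page *page)
{
	struct mem_cgroup *memcg;
	bool ret = false;

	/* ... earlier checks unchanged ... */
	if (cgroup_memory_noswap || !cgroup_subsys_on_dfl(memory_cgrp_subsys))
		return false;

	rcu_read_lock();
	memcg = page_memcg(page);
	if (!memcg)
		goto out;

	for (; memcg != root_mem_cgroup; memcg = parent_mem_cgroup(memcg)) {
		unsigned long usage = page_counter_read(&memcg->swap);

		if (usage * 2 >= READ_ONCE(memcg->swap.high) ||
		    usage * 2 >= READ_ONCE(memcg->swap.max)) {
			/* Funnel every exit through the single unlock below. */
			ret = true;
			goto out;
		}
	}
out:
	rcu_read_unlock();
	return ret;
}
```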

2022-05-25 20:50:48

by Johannes Weiner

[permalink] [raw]
Subject: Re: [PATCH v4 03/11] mm: memcontrol: make lruvec lock safe when LRU pages are reparented

On Wed, May 25, 2022 at 05:53:30PM +0800, Muchun Song wrote:
> On Tue, May 24, 2022 at 03:27:20PM -0400, Johannes Weiner wrote:
> > On Tue, May 24, 2022 at 02:05:43PM +0800, Muchun Song wrote:
> > > The diagram below shows how to make the folio lruvec lock safe when LRU
> > > pages are reparented.
> > >
> > > folio_lruvec_lock(folio)
> > > retry:
> > > lruvec = folio_lruvec(folio);
> > >
> > > // The folio is reparented at this time.
> > > spin_lock(&lruvec->lru_lock);
> > >
> > > if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio)))
> > > // Acquired the wrong lruvec lock and need to retry.
> > > // Because this folio is on the parent memcg lruvec list.
> > > goto retry;
> > >
> > > // If we reach here, it means that folio_memcg(folio) is stable.
> > >
> > > memcg_reparent_objcgs(memcg)
> > > // lruvec belongs to memcg and lruvec_parent belongs to parent memcg.
> > > spin_lock(&lruvec->lru_lock);
> > > spin_lock(&lruvec_parent->lru_lock);
> > >
> > > // Move all the pages from the lruvec list to the parent lruvec list.
> > >
> > > spin_unlock(&lruvec_parent->lru_lock);
> > > spin_unlock(&lruvec->lru_lock);
> > >
> > > After we acquire the lruvec lock, we need to check whether the folio is
> > > reparented. If so, we need to reacquire the new lruvec lock. The LRU
> > > pages reparenting routine will also acquire the lruvec lock (this will
> > > be implemented in a later patch), so folio_memcg() cannot change while
> > > we hold the lruvec lock.
> > >
> > > Since lruvec_memcg(lruvec) is always equal to folio_memcg(folio) after
> > > we hold the lruvec lock, the lruvec_memcg_debug() check is pointless,
> > > so remove it.
> > >
> > > This is a preparation for reparenting the LRU pages.
> > >
> > > Signed-off-by: Muchun Song <[email protected]>
> >
> > This looks good to me. Just one question:
> >
> > > @@ -1230,10 +1213,23 @@ void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
> > > */
> > > struct lruvec *folio_lruvec_lock(struct folio *folio)
> > > {
> > > - struct lruvec *lruvec = folio_lruvec(folio);
> > > + struct lruvec *lruvec;
> > >
> > > + rcu_read_lock();
> > > +retry:
> > > + lruvec = folio_lruvec(folio);
> > > spin_lock(&lruvec->lru_lock);
> > > - lruvec_memcg_debug(lruvec, folio);
> > > +
> > > + if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
> > > + spin_unlock(&lruvec->lru_lock);
> > > + goto retry;
> > > + }
> > > +
> > > + /*
> > > + * Preemption is disabled in the internal of spin_lock, which can serve
> > > + * as RCU read-side critical sections.
> > > + */
> > > + rcu_read_unlock();
> >
> > The code looks right to me, but I don't understand the comment: why do
> > we care that the rcu read-side continues? With the lru_lock held,
> > reparenting is on hold and the lruvec cannot be rcu-freed anyway, no?
> >
>
> Right. We could hold the rcu read lock until the end of reparenting. So you
> mean we should do rcu_read_unlock() in folio_lruvec_lock()?

The comment seems to suggest that disabling preemption is what keeps
the lruvec alive. But it's the lru_lock that keeps it alive. The
cgroup destruction path tries to take the lru_lock long before it even
gets to synchronize_rcu(). Once you hold the lru_lock, having an
implied read-side critical section as well doesn't seem to matter.

Should the comment be deleted?

2022-05-26 00:40:12

by Johannes Weiner

[permalink] [raw]
Subject: Re: [PATCH v4 04/11] mm: vmscan: rework move_pages_to_lru()

On Tue, May 24, 2022 at 02:05:44PM +0800, Muchun Song wrote:
> In the later patch, we will reparent the LRU pages. The pages moved to
> appropriate LRU list can be reparented during the process of the
> move_pages_to_lru(). So holding a lruvec lock by the caller is wrong, we
> should use the more general interface of folio_lruvec_relock_irq() to
> acquire the correct lruvec lock.
>
> Signed-off-by: Muchun Song <[email protected]>
> ---
> mm/vmscan.c | 49 +++++++++++++++++++++++++------------------------
> 1 file changed, 25 insertions(+), 24 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 1678802e03e7..761d5e0dd78d 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2230,23 +2230,28 @@ static int too_many_isolated(struct pglist_data *pgdat, int file,
> * move_pages_to_lru() moves pages from private @list to appropriate LRU list.
> * On return, @list is reused as a list of pages to be freed by the caller.
> *
> - * Returns the number of pages moved to the given lruvec.
> + * Returns the number of pages moved to the appropriate LRU list.
> + *
> + * Note: The caller must not hold any lruvec lock.
> */
> -static unsigned int move_pages_to_lru(struct lruvec *lruvec,
> - struct list_head *list)
> +static unsigned int move_pages_to_lru(struct list_head *list)
> {
> - int nr_pages, nr_moved = 0;
> + int nr_moved = 0;
> + struct lruvec *lruvec = NULL;
> LIST_HEAD(pages_to_free);
> - struct page *page;
>
> while (!list_empty(list)) {
> - page = lru_to_page(list);
> + int nr_pages;
> + struct folio *folio = lru_to_folio(list);
> + struct page *page = &folio->page;
> +
> + lruvec = folio_lruvec_relock_irq(folio, lruvec);
> VM_BUG_ON_PAGE(PageLRU(page), page);
> list_del(&page->lru);
> if (unlikely(!page_evictable(page))) {
> - spin_unlock_irq(&lruvec->lru_lock);
> + unlock_page_lruvec_irq(lruvec);

Better to stick with the opencoded unlock. It matches a bit better
with the locking function, compared to locking folio and unlocking
page...

Aside from that, this looks good:

Acked-by: Johannes Weiner <[email protected]>

2022-05-26 00:46:38

by Muchun Song

[permalink] [raw]
Subject: Re: [PATCH v4 01/11] mm: memcontrol: prepare objcg API for non-kmem usage

On Tue, May 24, 2022 at 03:01:25PM -0400, Johannes Weiner wrote:
> On Tue, May 24, 2022 at 02:05:41PM +0800, Muchun Song wrote:
> > Pagecache pages are charged at allocation time and hold a
> > reference to the original memory cgroup until they are reclaimed.
> > Depending on the memory pressure, specific patterns of the page
> > sharing between different cgroups and the cgroup creation and
> > destruction rates, a large number of dying memory cgroups can be
> > pinned by pagecache pages. It makes the page reclaim less efficient
> > and wastes memory.
> >
> > We can convert LRU pages and most other raw memcg pins to the objcg
> > direction to fix this problem, and then the page->memcg will always
> > point to an object cgroup pointer.
> >
> > Therefore, the infrastructure of objcg no longer only serves
> > CONFIG_MEMCG_KMEM. In this patch, we move the infrastructure of the
> > objcg out of the scope of the CONFIG_MEMCG_KMEM so that the LRU pages
> > can reuse it to charge pages.
> >
> > We know that the LRU pages are not accounted at the root level. But
> > the page->memcg_data points to the root_mem_cgroup. So the
> > page->memcg_data of the LRU pages always points to a valid pointer.
> > But the root_mem_cgroup does not have an object cgroup. If we use
> > obj_cgroup APIs to charge the LRU pages, we should set the
> > page->memcg_data to a root object cgroup. So we also allocate an
> > object cgroup for the root_mem_cgroup.
> >
> > Signed-off-by: Muchun Song <[email protected]>
>
> Acked-by: Johannes Weiner <[email protected]>
>
> Looks good to me. Also gets rid of some use_hierarchy cruft.
>

Thanks for taking a look.


2022-05-26 03:06:28

by Muchun Song

[permalink] [raw]
Subject: Re: [PATCH v4 04/11] mm: vmscan: rework move_pages_to_lru()

On Tue, May 24, 2022 at 03:52:22PM -0400, Waiman Long wrote:
> On 5/24/22 02:05, Muchun Song wrote:
> > In the later patch, we will reparent the LRU pages. The pages moved to
> > appropriate LRU list can be reparented during the process of the
> > move_pages_to_lru(). So holding a lruvec lock by the caller is wrong, we
> > should use the more general interface of folio_lruvec_relock_irq() to
> > acquire the correct lruvec lock.
> >
> > Signed-off-by: Muchun Song <[email protected]>
> > ---
> > mm/vmscan.c | 49 +++++++++++++++++++++++++------------------------
> > 1 file changed, 25 insertions(+), 24 deletions(-)
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 1678802e03e7..761d5e0dd78d 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -2230,23 +2230,28 @@ static int too_many_isolated(struct pglist_data *pgdat, int file,
> > * move_pages_to_lru() moves pages from private @list to appropriate LRU list.
> > * On return, @list is reused as a list of pages to be freed by the caller.
> > *
> > - * Returns the number of pages moved to the given lruvec.
> > + * Returns the number of pages moved to the appropriate LRU list.
> > + *
> > + * Note: The caller must not hold any lruvec lock.
> > */
> > -static unsigned int move_pages_to_lru(struct lruvec *lruvec,
> > - struct list_head *list)
> > +static unsigned int move_pages_to_lru(struct list_head *list)
> > {
> > - int nr_pages, nr_moved = 0;
> > + int nr_moved = 0;
> > + struct lruvec *lruvec = NULL;
> > LIST_HEAD(pages_to_free);
> > - struct page *page;
> > while (!list_empty(list)) {
> > - page = lru_to_page(list);
> > + int nr_pages;
> > + struct folio *folio = lru_to_folio(list);
> > + struct page *page = &folio->page;
> > +
> > + lruvec = folio_lruvec_relock_irq(folio, lruvec);
> > VM_BUG_ON_PAGE(PageLRU(page), page);
> > list_del(&page->lru);
> > if (unlikely(!page_evictable(page))) {
> > - spin_unlock_irq(&lruvec->lru_lock);
> > + unlock_page_lruvec_irq(lruvec);
> > putback_lru_page(page);
> > - spin_lock_irq(&lruvec->lru_lock);
> > + lruvec = NULL;
> > continue;
> > }
> > @@ -2267,20 +2272,16 @@ static unsigned int move_pages_to_lru(struct lruvec *lruvec,
> > __clear_page_lru_flags(page);
> > if (unlikely(PageCompound(page))) {
> > - spin_unlock_irq(&lruvec->lru_lock);
> > + unlock_page_lruvec_irq(lruvec);
> > destroy_compound_page(page);
> > - spin_lock_irq(&lruvec->lru_lock);
> > + lruvec = NULL;
> > } else
> > list_add(&page->lru, &pages_to_free);
> > continue;
> > }
> > - /*
> > - * All pages were isolated from the same lruvec (and isolation
> > - * inhibits memcg migration).
> > - */
> > - VM_BUG_ON_PAGE(!folio_matches_lruvec(page_folio(page), lruvec), page);
> > + VM_BUG_ON_PAGE(!folio_matches_lruvec(folio, lruvec), page);
> > add_page_to_lru_list(page, lruvec);
> > nr_pages = thp_nr_pages(page);
> > nr_moved += nr_pages;
> > @@ -2288,6 +2289,8 @@ static unsigned int move_pages_to_lru(struct lruvec *lruvec,
> > workingset_age_nonresident(lruvec, nr_pages);
> > }
> > + if (lruvec)
> > + unlock_page_lruvec_irq(lruvec);
> > /*
> > * To save our caller's stack, now use input list for pages to free.
> > */
> > @@ -2359,16 +2362,16 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
> > nr_reclaimed = shrink_page_list(&page_list, pgdat, sc, &stat, false);
> > - spin_lock_irq(&lruvec->lru_lock);
> > - move_pages_to_lru(lruvec, &page_list);
> > + move_pages_to_lru(&page_list);
> > + local_irq_disable();
> > __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
> > item = current_is_kswapd() ? PGSTEAL_KSWAPD : PGSTEAL_DIRECT;
> > if (!cgroup_reclaim(sc))
> > __count_vm_events(item, nr_reclaimed);
> > __count_memcg_events(lruvec_memcg(lruvec), item, nr_reclaimed);
> > __count_vm_events(PGSTEAL_ANON + file, nr_reclaimed);
> > - spin_unlock_irq(&lruvec->lru_lock);
> > + local_irq_enable();
> > lru_note_cost(lruvec, file, stat.nr_pageout);
> > mem_cgroup_uncharge_list(&page_list);
> > @@ -2498,18 +2501,16 @@ static void shrink_active_list(unsigned long nr_to_scan,
> > /*
> > * Move pages back to the lru list.
> > */
> > - spin_lock_irq(&lruvec->lru_lock);
> > -
> > - nr_activate = move_pages_to_lru(lruvec, &l_active);
> > - nr_deactivate = move_pages_to_lru(lruvec, &l_inactive);
> > + nr_activate = move_pages_to_lru(&l_active);
> > + nr_deactivate = move_pages_to_lru(&l_inactive);
> > /* Keep all free pages in l_active list */
> > list_splice(&l_inactive, &l_active);
> > + local_irq_disable();
> > __count_vm_events(PGDEACTIVATE, nr_deactivate);
> > __count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_deactivate);
> > -
> > __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
> > - spin_unlock_irq(&lruvec->lru_lock);
> > + local_irq_enable();
> > mem_cgroup_uncharge_list(&l_active);
> > free_unref_page_list(&l_active);
>
> Note that the RT engineers will likely change the
> local_irq_disable()/local_irq_enable() to
> local_lock_irq()/local_unlock_irq().
>

Thanks. I'll replace them with local_lock/unlock_irq().
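
For example, the counter-update section of shrink_inactive_list() from the
quoted hunk might become something like this (a sketch; the per-CPU lock name
and its placement are assumptions, not part of the posted series):

```c
#include <linux/local_lock.h>

/* Hypothetical per-CPU lock guarding the irq-off stat updates. */
struct lru_stat_lock {
	local_lock_t lock;
};
static DEFINE_PER_CPU(struct lru_stat_lock, lru_stat_lock) = {
	.lock = INIT_LOCAL_LOCK(lock),
};

	/* In shrink_inactive_list(), replacing local_irq_disable()/enable(): */
	move_pages_to_lru(&page_list);

	local_lock_irq(&lru_stat_lock.lock);
	__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
	item = current_is_kswapd() ? PGSTEAL_KSWAPD : PGSTEAL_DIRECT;
	if (!cgroup_reclaim(sc))
		__count_vm_events(item, nr_reclaimed);
	__count_memcg_events(lruvec_memcg(lruvec), item, nr_reclaimed);
	__count_vm_events(PGSTEAL_ANON + file, nr_reclaimed);
	local_unlock_irq(&lru_stat_lock.lock);
```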


2022-05-26 16:37:15

by Muchun Song

[permalink] [raw]
Subject: Re: [PATCH v4 04/11] mm: vmscan: rework move_pages_to_lru()

On Tue, May 24, 2022 at 03:38:50PM -0400, Johannes Weiner wrote:
> On Tue, May 24, 2022 at 02:05:44PM +0800, Muchun Song wrote:
> > In the later patch, we will reparent the LRU pages. The pages moved to
> > appropriate LRU list can be reparented during the process of the
> > move_pages_to_lru(). So holding a lruvec lock by the caller is wrong, we
> > should use the more general interface of folio_lruvec_relock_irq() to
> > acquire the correct lruvec lock.
> >
> > Signed-off-by: Muchun Song <[email protected]>
> > ---
> > mm/vmscan.c | 49 +++++++++++++++++++++++++------------------------
> > 1 file changed, 25 insertions(+), 24 deletions(-)
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 1678802e03e7..761d5e0dd78d 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -2230,23 +2230,28 @@ static int too_many_isolated(struct pglist_data *pgdat, int file,
> > * move_pages_to_lru() moves pages from private @list to appropriate LRU list.
> > * On return, @list is reused as a list of pages to be freed by the caller.
> > *
> > - * Returns the number of pages moved to the given lruvec.
> > + * Returns the number of pages moved to the appropriate LRU list.
> > + *
> > + * Note: The caller must not hold any lruvec lock.
> > */
> > -static unsigned int move_pages_to_lru(struct lruvec *lruvec,
> > - struct list_head *list)
> > +static unsigned int move_pages_to_lru(struct list_head *list)
> > {
> > - int nr_pages, nr_moved = 0;
> > + int nr_moved = 0;
> > + struct lruvec *lruvec = NULL;
> > LIST_HEAD(pages_to_free);
> > - struct page *page;
> >
> > while (!list_empty(list)) {
> > - page = lru_to_page(list);
> > + int nr_pages;
> > + struct folio *folio = lru_to_folio(list);
> > + struct page *page = &folio->page;
> > +
> > + lruvec = folio_lruvec_relock_irq(folio, lruvec);
> > VM_BUG_ON_PAGE(PageLRU(page), page);
> > list_del(&page->lru);
> > if (unlikely(!page_evictable(page))) {
> > - spin_unlock_irq(&lruvec->lru_lock);
> > + unlock_page_lruvec_irq(lruvec);
>
> Better to stick with the opencoded unlock. It matches a bit better
> with the locking function, compared to locking folio and unlocking
> page...
>

It seems like we are missing folio unlock variants.
How about introducing folio_lruvec_unlock() variants?
There are a lot of places where we lock via the folio
but unlock via the page.
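
Something like this rough sketch (the names are my assumption, simply
mirroring the existing unlock_page_lruvec*() wrappers around
lruvec->lru_lock):

```c
static inline void folio_lruvec_unlock(struct lruvec *lruvec)
{
        spin_unlock(&lruvec->lru_lock);
}

static inline void folio_lruvec_unlock_irq(struct lruvec *lruvec)
{
        spin_unlock_irq(&lruvec->lru_lock);
}

static inline void folio_lruvec_unlock_irqrestore(struct lruvec *lruvec,
                                                  unsigned long flags)
{
        spin_unlock_irqrestore(&lruvec->lru_lock, flags);
}
```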

Thanks.

> Aside from that, this looks good:
>
> Acked-by: Johannes Weiner <[email protected]>
>

Thanks.


2022-05-26 16:43:58

by Muchun Song

[permalink] [raw]
Subject: Re: [PATCH v4 07/11] mm: memcontrol: make all the callers of {folio,page}_memcg() safe

On Tue, May 24, 2022 at 08:03:41PM -0700, Roman Gushchin wrote:
> On Tue, May 24, 2022 at 02:05:47PM +0800, Muchun Song wrote:
> > When we use objcg APIs to charge the LRU pages, the page will not hold
> > a reference to the memcg associated with the page. So the caller of the
> > {folio,page}_memcg() should hold an rcu read lock or obtain a reference
> > to the memcg associated with the page to protect memcg from being
> > released. So introduce get_mem_cgroup_from_{page,folio}() to obtain a
> > reference to the memory cgroup associated with the page.
> >
> > In this patch, make all the callers hold an rcu read lock or obtain a
> > reference to the memcg to protect memcg from being released when the LRU
> > pages reparented.
> >
> > We do not need to adjust the callers of {folio,page}_memcg() during
> > the whole process of mem_cgroup_move_task(). Because the cgroup migration
> > and memory cgroup offlining are serialized by @cgroup_mutex. In this
> > routine, the LRU pages cannot be reparented to its parent memory cgroup.
> > So {folio,page}_memcg() is stable and cannot be released.
> >
> > This is a preparation for reparenting the LRU pages.
> >
> > Signed-off-by: Muchun Song <[email protected]>
>
> Overall the patch looks good to me (some nits below).
>
> > ---
> > fs/buffer.c | 4 +--
> > fs/fs-writeback.c | 23 ++++++++--------
> > include/linux/memcontrol.h | 51 ++++++++++++++++++++++++++++++++---
> > include/trace/events/writeback.h | 5 ++++
> > mm/memcontrol.c | 58 ++++++++++++++++++++++++++++++----------
> > mm/migrate.c | 4 +++
> > mm/page_io.c | 5 ++--
> > 7 files changed, 117 insertions(+), 33 deletions(-)
> >
> > diff --git a/fs/buffer.c b/fs/buffer.c
> > index 2b5561ae5d0b..80975a457670 100644
> > --- a/fs/buffer.c
> > +++ b/fs/buffer.c
> > @@ -819,8 +819,7 @@ struct buffer_head *alloc_page_buffers(struct page *page, unsigned long size,
> > if (retry)
> > gfp |= __GFP_NOFAIL;
> >
> > - /* The page lock pins the memcg */
> > - memcg = page_memcg(page);
> > + memcg = get_mem_cgroup_from_page(page);
>
> Looking at these changes I wonder if we need to remove unsafe getters or
> at least add a BOLD comment on how/when it's safe to use them.
>

I am not clear here. Do you mean we should add some comments above
page_memcg() or above get_mem_cgroup_from_page()?
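
To make it concrete, this is the kind of distinction such a comment
would have to spell out (the two callers below are hypothetical and
only for illustration, not from this patch):

```c
/* Unsafe getter: the caller must keep the memcg alive itself. */
static void count_event_unpinned(struct page *page)
{
        struct mem_cgroup *memcg;

        rcu_read_lock();        /* keeps the memcg from being freed */
        memcg = page_memcg(page);
        if (memcg)
                count_memcg_events(memcg, PGACTIVATE, 1);
        rcu_read_unlock();
}

/* Reference-taking getter: safe even if the caller sleeps afterwards. */
static void count_event_pinned(struct page *page)
{
        struct mem_cgroup *memcg = get_mem_cgroup_from_page(page);

        if (memcg)
                count_memcg_events(memcg, PGACTIVATE, 1);
        mem_cgroup_put(memcg);
}
```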

> > old_memcg = set_active_memcg(memcg);
> >
> > head = NULL;
> > @@ -840,6 +839,7 @@ struct buffer_head *alloc_page_buffers(struct page *page, unsigned long size,
> > set_bh_page(bh, page, offset);
> > }
> > out:
> > + mem_cgroup_put(memcg);
> > set_active_memcg(old_memcg);
> > return head;
> > /*
> > diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
> > index 1fae0196292a..56612ace8778 100644
> > --- a/fs/fs-writeback.c
> > +++ b/fs/fs-writeback.c
> > @@ -243,15 +243,13 @@ void __inode_attach_wb(struct inode *inode, struct page *page)
> > if (inode_cgwb_enabled(inode)) {
> > struct cgroup_subsys_state *memcg_css;
> >
> > - if (page) {
> > - memcg_css = mem_cgroup_css_from_page(page);
> > - wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC);
> > - } else {
> > - /* must pin memcg_css, see wb_get_create() */
> > + /* must pin memcg_css, see wb_get_create() */
> > + if (page)
> > + memcg_css = get_mem_cgroup_css_from_page(page);
> > + else
> > memcg_css = task_get_css(current, memory_cgrp_id);
> > - wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC);
> > - css_put(memcg_css);
> > - }
> > + wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC);
> > + css_put(memcg_css);
> > }
> >
> > if (!wb)
> > @@ -868,16 +866,16 @@ void wbc_account_cgroup_owner(struct writeback_control *wbc, struct page *page,
> > if (!wbc->wb || wbc->no_cgroup_owner)
> > return;
> >
> > - css = mem_cgroup_css_from_page(page);
> > + css = get_mem_cgroup_css_from_page(page);
> > /* dead cgroups shouldn't contribute to inode ownership arbitration */
> > if (!(css->flags & CSS_ONLINE))
> > - return;
> > + goto out;
> >
> > id = css->id;
> >
> > if (id == wbc->wb_id) {
> > wbc->wb_bytes += bytes;
> > - return;
> > + goto out;
> > }
> >
> > if (id == wbc->wb_lcand_id)
> > @@ -890,6 +888,9 @@ void wbc_account_cgroup_owner(struct writeback_control *wbc, struct page *page,
> > wbc->wb_tcand_bytes += bytes;
> > else
> > wbc->wb_tcand_bytes -= min(bytes, wbc->wb_tcand_bytes);
> > +
> > +out:
> > + css_put(css);
> > }
> > EXPORT_SYMBOL_GPL(wbc_account_cgroup_owner);
> >
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index 8c2f1ba2f471..3a0e2592434e 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -373,7 +373,7 @@ static inline bool folio_memcg_kmem(struct folio *folio);
> > * a valid memcg, but can be atomically swapped to the parent memcg.
> > *
> > * The caller must ensure that the returned memcg won't be released:
> > - * e.g. acquire the rcu_read_lock or css_set_lock.
> > + * e.g. acquire the rcu_read_lock or objcg_lock or cgroup_mutex.
> > */
> > static inline struct mem_cgroup *obj_cgroup_memcg(struct obj_cgroup *objcg)
> > {
> > @@ -454,7 +454,37 @@ static inline struct mem_cgroup *page_memcg(struct page *page)
> > return folio_memcg(page_folio(page));
> > }
> >
> > -/**
> > +/*
> > + * get_mem_cgroup_from_folio - Obtain a reference on the memory cgroup
> > + * associated with a folio.
> > + * @folio: Pointer to the folio.
> > + *
> > + * Returns a pointer to the memory cgroup (and obtain a reference on it)
> > + * associated with the folio, or NULL. This function assumes that the
> > + * folio is known to have a proper memory cgroup pointer. It's not safe
> > + * to call this function against some type of pages, e.g. slab pages or
> > + * ex-slab pages.
> > + */
> > +static inline struct mem_cgroup *get_mem_cgroup_from_folio(struct folio *folio)
> > +{
> > + struct mem_cgroup *memcg;
> > +
> > + rcu_read_lock();
> > +retry:
> > + memcg = folio_memcg(folio);
> > + if (unlikely(memcg && !css_tryget(&memcg->css)))
> > + goto retry;
> > + rcu_read_unlock();
> > +
> > + return memcg;
> > +}
> > +
> > +static inline struct mem_cgroup *get_mem_cgroup_from_page(struct page *page)
> > +{
> > + return get_mem_cgroup_from_folio(page_folio(page));
> > +}
> > +
> > +/*
> > * folio_memcg_rcu - Locklessly get the memory cgroup associated with a folio.
> > * @folio: Pointer to the folio.
> > *
> > @@ -873,7 +903,7 @@ static inline bool mm_match_cgroup(struct mm_struct *mm,
> > return match;
> > }
> >
> > -struct cgroup_subsys_state *mem_cgroup_css_from_page(struct page *page);
> > +struct cgroup_subsys_state *get_mem_cgroup_css_from_page(struct page *page);
> > ino_t page_cgroup_ino(struct page *page);
> >
> > static inline bool mem_cgroup_online(struct mem_cgroup *memcg)
> > @@ -1047,10 +1077,13 @@ static inline void count_memcg_events(struct mem_cgroup *memcg,
> > static inline void count_memcg_page_event(struct page *page,
> > enum vm_event_item idx)
> > {
> > - struct mem_cgroup *memcg = page_memcg(page);
> > + struct mem_cgroup *memcg;
> >
> > + rcu_read_lock();
> > + memcg = page_memcg(page);
> > if (memcg)
> > count_memcg_events(memcg, idx, 1);
> > + rcu_read_unlock();
> > }
> >
> > static inline void count_memcg_event_mm(struct mm_struct *mm,
> > @@ -1129,6 +1162,16 @@ static inline struct mem_cgroup *page_memcg(struct page *page)
> > return NULL;
> > }
> >
> > +static inline struct mem_cgroup *get_mem_cgroup_from_folio(struct folio *folio)
> > +{
> > + return NULL;
> > +}
> > +
> > +static inline struct mem_cgroup *get_mem_cgroup_from_page(struct page *page)
> > +{
> > + return NULL;
> > +}
> > +
> > static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio)
> > {
> > WARN_ON_ONCE(!rcu_read_lock_held());
> > diff --git a/include/trace/events/writeback.h b/include/trace/events/writeback.h
> > index 86b2a82da546..cdb822339f13 100644
> > --- a/include/trace/events/writeback.h
> > +++ b/include/trace/events/writeback.h
> > @@ -258,6 +258,11 @@ TRACE_EVENT(track_foreign_dirty,
> > __entry->ino = inode ? inode->i_ino : 0;
> > __entry->memcg_id = wb->memcg_css->id;
> > __entry->cgroup_ino = __trace_wb_assign_cgroup(wb);
> > + /*
> > + * TP_fast_assign() is under preemption disabled which can
> > + * serve as an RCU read-side critical section so that the
> > + * memcg returned by folio_memcg() cannot be freed.
> > + */
> > __entry->page_cgroup_ino = cgroup_ino(folio_memcg(folio)->css.cgroup);
> > ),
> >
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index b38a77f6696f..dcaf6cf5dc74 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -371,7 +371,7 @@ EXPORT_SYMBOL(memcg_kmem_enabled_key);
> > #endif
> >
> > /**
> > - * mem_cgroup_css_from_page - css of the memcg associated with a page
> > + * get_mem_cgroup_css_from_page - get css of the memcg associated with a page
> > * @page: page of interest
> > *
> > * If memcg is bound to the default hierarchy, css of the memcg associated
> > @@ -381,13 +381,15 @@ EXPORT_SYMBOL(memcg_kmem_enabled_key);
> > * If memcg is bound to a traditional hierarchy, the css of root_mem_cgroup
> > * is returned.
> > */
> > -struct cgroup_subsys_state *mem_cgroup_css_from_page(struct page *page)
> > +struct cgroup_subsys_state *get_mem_cgroup_css_from_page(struct page *page)
> > {
> > struct mem_cgroup *memcg;
> >
> > - memcg = page_memcg(page);
> > + if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
> > + return &root_mem_cgroup->css;
> >
> > - if (!memcg || !cgroup_subsys_on_dfl(memory_cgrp_subsys))
> > + memcg = get_mem_cgroup_from_page(page);
> > + if (!memcg)
> > memcg = root_mem_cgroup;
> >
> > return &memcg->css;
> > @@ -770,13 +772,13 @@ void __mod_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
> > void __mod_lruvec_page_state(struct page *page, enum node_stat_item idx,
> > int val)
> > {
> > - struct page *head = compound_head(page); /* rmap on tail pages */
> > + struct folio *folio = page_folio(page); /* rmap on tail pages */
> > struct mem_cgroup *memcg;
> > pg_data_t *pgdat = page_pgdat(page);
> > struct lruvec *lruvec;
> >
> > rcu_read_lock();
> > - memcg = page_memcg(head);
> > + memcg = folio_memcg(folio);
> > /* Untracked pages have no memcg, no lruvec. Update only the node */
> > if (!memcg) {
> > rcu_read_unlock();
> > @@ -2058,7 +2060,9 @@ void folio_memcg_lock(struct folio *folio)
> > * The RCU lock is held throughout the transaction. The fast
> > * path can get away without acquiring the memcg->move_lock
> > * because page moving starts with an RCU grace period.
> > - */
> > + *
> > + * The RCU lock also protects the memcg from being freed.
> > + */
> > rcu_read_lock();
> >
> > if (mem_cgroup_disabled())
> > @@ -3296,7 +3300,7 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size)
> > void split_page_memcg(struct page *head, unsigned int nr)
> > {
> > struct folio *folio = page_folio(head);
> > - struct mem_cgroup *memcg = folio_memcg(folio);
> > + struct mem_cgroup *memcg = get_mem_cgroup_from_folio(folio);
> > int i;
> >
> > if (mem_cgroup_disabled() || !memcg)
> > @@ -3309,6 +3313,8 @@ void split_page_memcg(struct page *head, unsigned int nr)
> > obj_cgroup_get_many(__folio_objcg(folio), nr - 1);
> > else
> > css_get_many(&memcg->css, nr - 1);
> > +
> > + css_put(&memcg->css);
> > }
> >
> > #ifdef CONFIG_MEMCG_SWAP
> > @@ -4511,7 +4517,7 @@ void mem_cgroup_wb_stats(struct bdi_writeback *wb, unsigned long *pfilepages,
> > void mem_cgroup_track_foreign_dirty_slowpath(struct folio *folio,
> > struct bdi_writeback *wb)
> > {
> > - struct mem_cgroup *memcg = folio_memcg(folio);
> > + struct mem_cgroup *memcg = get_mem_cgroup_from_folio(folio);
> > struct memcg_cgwb_frn *frn;
> > u64 now = get_jiffies_64();
> > u64 oldest_at = now;
> > @@ -4558,6 +4564,7 @@ void mem_cgroup_track_foreign_dirty_slowpath(struct folio *folio,
> > frn->memcg_id = wb->memcg_css->id;
> > frn->at = now;
> > }
> > + css_put(&memcg->css);
> > }
> >
> > /* issue foreign writeback flushes for recorded foreign dirtying events */
> > @@ -6092,6 +6099,14 @@ static void mem_cgroup_move_charge(void)
> > atomic_dec(&mc.from->moving_account);
> > }
> >
> > +/*
> > + * The cgroup migration and memory cgroup offlining are serialized by
> > + * @cgroup_mutex. If we reach here, it means that the LRU pages cannot
> > + * be reparented to its parent memory cgroup. So during the whole process
> > + * of mem_cgroup_move_task(), page_memcg(page) is stable. So we do not
> > + * need to worry about the memcg (returned from page_memcg()) being
> > + * released even if we do not hold an rcu read lock.
> > + */
> > static void mem_cgroup_move_task(void)
> > {
> > if (mc.to) {
> > @@ -6895,7 +6910,7 @@ void mem_cgroup_migrate(struct folio *old, struct folio *new)
> > if (folio_memcg(new))
> > return;
> >
> > - memcg = folio_memcg(old);
> > + memcg = get_mem_cgroup_from_folio(old);
> > VM_WARN_ON_ONCE_FOLIO(!memcg, old);
> > if (!memcg)
> > return;
> > @@ -6914,6 +6929,8 @@ void mem_cgroup_migrate(struct folio *old, struct folio *new)
> > mem_cgroup_charge_statistics(memcg, nr_pages);
> > memcg_check_events(memcg, folio_nid(new));
> > local_irq_restore(flags);
> > +
> > + css_put(&memcg->css);
> > }
> >
> > DEFINE_STATIC_KEY_FALSE(memcg_sockets_enabled_key);
> > @@ -7100,6 +7117,10 @@ void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry)
> > if (cgroup_subsys_on_dfl(memory_cgrp_subsys))
> > return;
> >
> > + /*
> > + * Interrupts should be disabled by the caller (see the comments below),
> > + * which can serve as RCU read-side critical sections.
> > + */
> > memcg = folio_memcg(folio);
> >
> > VM_WARN_ON_ONCE_FOLIO(!memcg, folio);
> > @@ -7165,15 +7186,16 @@ int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
> > if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
> > return 0;
> >
> > + rcu_read_lock();
> > memcg = page_memcg(page);
> >
> > VM_WARN_ON_ONCE_PAGE(!memcg, page);
> > if (!memcg)
> > - return 0;
> > + goto out;
> >
> > if (!entry.val) {
> > memcg_memory_event(memcg, MEMCG_SWAP_FAIL);
> > - return 0;
> > + goto out;
> > }
> >
> > memcg = mem_cgroup_id_get_online(memcg);
> > @@ -7183,6 +7205,7 @@ int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
> > memcg_memory_event(memcg, MEMCG_SWAP_MAX);
> > memcg_memory_event(memcg, MEMCG_SWAP_FAIL);
> > mem_cgroup_id_put(memcg);
> > + rcu_read_unlock();
> > return -ENOMEM;
>
> If you add the "out" label, please use it here too.
>

Good point. Will do.

> > }
> >
> > @@ -7192,6 +7215,8 @@ int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
> > oldid = swap_cgroup_record(entry, mem_cgroup_id(memcg), nr_pages);
> > VM_BUG_ON_PAGE(oldid, page);
> > mod_memcg_state(memcg, MEMCG_SWAP, nr_pages);
> > +out:
> > + rcu_read_unlock();
> >
> > return 0;
> > }
> > @@ -7246,17 +7271,22 @@ bool mem_cgroup_swap_full(struct page *page)
> > if (cgroup_memory_noswap || !cgroup_subsys_on_dfl(memory_cgrp_subsys))
> > return false;
> >
> > + rcu_read_lock();
> > memcg = page_memcg(page);
> > if (!memcg)
> > - return false;
> > + goto out;
> >
> > for (; memcg != root_mem_cgroup; memcg = parent_mem_cgroup(memcg)) {
> > unsigned long usage = page_counter_read(&memcg->swap);
> >
> > if (usage * 2 >= READ_ONCE(memcg->swap.high) ||
> > - usage * 2 >= READ_ONCE(memcg->swap.max))
> > + usage * 2 >= READ_ONCE(memcg->swap.max)) {
> > + rcu_read_unlock();
> > return true;
>
> Please, make something like
> ret = true;
> goto out;
> here. It will be more consistent.
>

Will do.

Thanks.
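
Something like this, I suppose (a rough sketch against the v5.18 code,
with the single-exit "out"/ret treatment; the same treatment applies to
the open-coded rcu_read_unlock() in __mem_cgroup_try_charge_swap()
above):

```c
bool mem_cgroup_swap_full(struct page *page)
{
        struct mem_cgroup *memcg;
        bool ret = false;

        VM_BUG_ON_PAGE(!PageLocked(page), page);

        if (vm_swap_full())
                return true;
        if (cgroup_memory_noswap || !cgroup_subsys_on_dfl(memory_cgrp_subsys))
                return false;

        rcu_read_lock();
        memcg = page_memcg(page);
        if (!memcg)
                goto out;

        for (; memcg != root_mem_cgroup; memcg = parent_mem_cgroup(memcg)) {
                unsigned long usage = page_counter_read(&memcg->swap);

                if (usage * 2 >= READ_ONCE(memcg->swap.high) ||
                    usage * 2 >= READ_ONCE(memcg->swap.max)) {
                        ret = true;
                        goto out;
                }
        }
out:
        rcu_read_unlock();
        return ret;
}
```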

> > + }
> > }
> > +out:
> > + rcu_read_unlock();
> >
> > return false;
> > }
> > diff --git a/mm/migrate.c b/mm/migrate.c
> > index 6c31ee1e1c9b..59e97a8a64a0 100644
> > --- a/mm/migrate.c
> > +++ b/mm/migrate.c
> > @@ -430,6 +430,10 @@ int folio_migrate_mapping(struct address_space *mapping,
> > struct lruvec *old_lruvec, *new_lruvec;
> > struct mem_cgroup *memcg;
> >
> > + /*
> > + * Irq is disabled, which can serve as RCU read-side critical
> > + * sections.
> > + */
> > memcg = folio_memcg(folio);
> > old_lruvec = mem_cgroup_lruvec(memcg, oldzone->zone_pgdat);
> > new_lruvec = mem_cgroup_lruvec(memcg, newzone->zone_pgdat);
> > diff --git a/mm/page_io.c b/mm/page_io.c
> > index 89fbf3cae30f..a0d9cd68e87a 100644
> > --- a/mm/page_io.c
> > +++ b/mm/page_io.c
> > @@ -221,13 +221,14 @@ static void bio_associate_blkg_from_page(struct bio *bio, struct page *page)
> > struct cgroup_subsys_state *css;
> > struct mem_cgroup *memcg;
> >
> > + rcu_read_lock();
> > memcg = page_memcg(page);
> > if (!memcg)
> > - return;
> > + goto out;
> >
> > - rcu_read_lock();
> > css = cgroup_e_css(memcg->css.cgroup, &io_cgrp_subsys);
> > bio_associate_blkg_from_css(bio, css);
> > +out:
> > rcu_read_unlock();
> > }
> > #else
> > --
> > 2.11.0
> >
>

2022-05-26 18:29:25

by Muchun Song

[permalink] [raw]
Subject: Re: [PATCH v4 03/11] mm: memcontrol: make lruvec lock safe when LRU pages are reparented

On Wed, May 25, 2022 at 10:48:54AM -0400, Johannes Weiner wrote:
> On Wed, May 25, 2022 at 09:03:59PM +0800, Muchun Song wrote:
> > On Wed, May 25, 2022 at 08:30:15AM -0400, Johannes Weiner wrote:
> > > On Wed, May 25, 2022 at 05:53:30PM +0800, Muchun Song wrote:
> > > > On Tue, May 24, 2022 at 03:27:20PM -0400, Johannes Weiner wrote:
> > > > > On Tue, May 24, 2022 at 02:05:43PM +0800, Muchun Song wrote:
> > > > > > @@ -1230,10 +1213,23 @@ void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
> > > > > > */
> > > > > > struct lruvec *folio_lruvec_lock(struct folio *folio)
> > > > > > {
> > > > > > - struct lruvec *lruvec = folio_lruvec(folio);
> > > > > > + struct lruvec *lruvec;
> > > > > >
> > > > > > + rcu_read_lock();
> > > > > > +retry:
> > > > > > + lruvec = folio_lruvec(folio);
> > > > > > spin_lock(&lruvec->lru_lock);
> > > > > > - lruvec_memcg_debug(lruvec, folio);
> > > > > > +
> > > > > > + if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
> > > > > > + spin_unlock(&lruvec->lru_lock);
> > > > > > + goto retry;
> > > > > > + }
> > > > > > +
> > > > > > + /*
> > > > > > + * Preemption is disabled in the internal of spin_lock, which can serve
> > > > > > + * as RCU read-side critical sections.
> > > > > > + */
> > > > > > + rcu_read_unlock();
> > > > >
> > > > > The code looks right to me, but I don't understand the comment: why do
> > > > > we care that the rcu read-side continues? With the lru_lock held,
> > > > > reparenting is on hold and the lruvec cannot be rcu-freed anyway, no?
> > > > >
> > > >
> > > > Right. We could hold rcu read lock until end of reparting. So you mean
> > > > we do rcu_read_unlock in folio_lruvec_lock()?
> > >
> > > The comment seems to suggest that disabling preemption is what keeps
> > > the lruvec alive. But it's the lru_lock that keeps it alive. The
> > > cgroup destruction path tries to take the lru_lock long before it even
> > > gets to synchronize_rcu(). Once you hold the lru_lock, having an
> > > implied read-side critical section as well doesn't seem to matter.
> > >
> >
> > Well, I thought that spinlocks have implicit read-side critical sections
> > because it disables preemption (I learned from the comments above
> > synchronize_rcu() that says interrupts, preemption, or softirqs have been
> > disabled also serve as RCU read-side critical sections). So I have a
> > question: is it still true in a PREEMPT_RT kernel (I am not familiar with
> > this)?
>
> Yes, but you're missing my point.
>
> > > Should the comment be deleted?
> >
> > I think we could remove the comments. If the above question is false, seems
> > like we should continue holding rcu read lock.
>
> It's true.
>

Thanks for your answer.

> But assume it's false for a second. Why would you need to continue
> holding it? What would it protect? The lruvec would be pinned by the
> spinlock even if it DIDN'T imply an RCU lock, right?
>
> So I don't understand the point of the comment. If the implied RCU
> lock is protecting something not covered by the bare spinlock itself,
> it should be added to the comment. Otherwise, the comment should go.
>

Got it. Thanks for your nice explanation. I'll remove
the comment here.
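
For completeness, roughly what remains once the comment is dropped (the
tail of the function is not in the quoted hunk, so treat this as a
sketch only):

```c
struct lruvec *folio_lruvec_lock(struct folio *folio)
{
        struct lruvec *lruvec;

        rcu_read_lock();
retry:
        lruvec = folio_lruvec(folio);
        spin_lock(&lruvec->lru_lock);

        if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
                spin_unlock(&lruvec->lru_lock);
                goto retry;
        }

        rcu_read_unlock();

        return lruvec;
}
```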

2022-05-26 21:31:05

by Johannes Weiner

[permalink] [raw]
Subject: Re: [PATCH v4 03/11] mm: memcontrol: make lruvec lock safe when LRU pages are reparented

On Wed, May 25, 2022 at 09:03:59PM +0800, Muchun Song wrote:
> On Wed, May 25, 2022 at 08:30:15AM -0400, Johannes Weiner wrote:
> > On Wed, May 25, 2022 at 05:53:30PM +0800, Muchun Song wrote:
> > > On Tue, May 24, 2022 at 03:27:20PM -0400, Johannes Weiner wrote:
> > > > On Tue, May 24, 2022 at 02:05:43PM +0800, Muchun Song wrote:
> > > > > @@ -1230,10 +1213,23 @@ void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
> > > > > */
> > > > > struct lruvec *folio_lruvec_lock(struct folio *folio)
> > > > > {
> > > > > - struct lruvec *lruvec = folio_lruvec(folio);
> > > > > + struct lruvec *lruvec;
> > > > >
> > > > > + rcu_read_lock();
> > > > > +retry:
> > > > > + lruvec = folio_lruvec(folio);
> > > > > spin_lock(&lruvec->lru_lock);
> > > > > - lruvec_memcg_debug(lruvec, folio);
> > > > > +
> > > > > + if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
> > > > > + spin_unlock(&lruvec->lru_lock);
> > > > > + goto retry;
> > > > > + }
> > > > > +
> > > > > + /*
> > > > > + * Preemption is disabled in the internal of spin_lock, which can serve
> > > > > + * as RCU read-side critical sections.
> > > > > + */
> > > > > + rcu_read_unlock();
> > > >
> > > > The code looks right to me, but I don't understand the comment: why do
> > > > we care that the rcu read-side continues? With the lru_lock held,
> > > > reparenting is on hold and the lruvec cannot be rcu-freed anyway, no?
> > > >
> > >
> > > Right. We could hold rcu read lock until end of reparting. So you mean
> > > we do rcu_read_unlock in folio_lruvec_lock()?
> >
> > The comment seems to suggest that disabling preemption is what keeps
> > the lruvec alive. But it's the lru_lock that keeps it alive. The
> > cgroup destruction path tries to take the lru_lock long before it even
> > gets to synchronize_rcu(). Once you hold the lru_lock, having an
> > implied read-side critical section as well doesn't seem to matter.
> >
>
> Well, I thought that spinlocks have implicit read-side critical sections
> because it disables preemption (I learned from the comments above
> synchronize_rcu() that says interrupts, preemption, or softirqs have been
> disabled also serve as RCU read-side critical sections). So I have a
> question: is it still true in a PREEMPT_RT kernel (I am not familiar with
> this)?

Yes, but you're missing my point.

> > Should the comment be deleted?
>
> I think we could remove the comments. If the above question is false, seems
> like we should continue holding rcu read lock.

It's true.

But assume it's false for a second. Why would you need to continue
holding it? What would it protect? The lruvec would be pinned by the
spinlock even if it DIDN'T imply an RCU lock, right?

So I don't understand the point of the comment. If the implied RCU
lock is protecting something not covered by the bare spinlock itself,
it should be added to the comment. Otherwise, the comment should go.

2022-05-27 12:03:39

by Waiman Long

[permalink] [raw]
Subject: Re: [PATCH v4 03/11] mm: memcontrol: make lruvec lock safe when LRU pages are reparented

On 5/25/22 11:38, Muchun Song wrote:
> On Wed, May 25, 2022 at 10:48:54AM -0400, Johannes Weiner wrote:
>> On Wed, May 25, 2022 at 09:03:59PM +0800, Muchun Song wrote:
>>> On Wed, May 25, 2022 at 08:30:15AM -0400, Johannes Weiner wrote:
>>>> On Wed, May 25, 2022 at 05:53:30PM +0800, Muchun Song wrote:
>>>>> On Tue, May 24, 2022 at 03:27:20PM -0400, Johannes Weiner wrote:
>>>>>> On Tue, May 24, 2022 at 02:05:43PM +0800, Muchun Song wrote:
>>>>>>> @@ -1230,10 +1213,23 @@ void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
>>>>>>> */
>>>>>>> struct lruvec *folio_lruvec_lock(struct folio *folio)
>>>>>>> {
>>>>>>> - struct lruvec *lruvec = folio_lruvec(folio);
>>>>>>> + struct lruvec *lruvec;
>>>>>>>
>>>>>>> + rcu_read_lock();
>>>>>>> +retry:
>>>>>>> + lruvec = folio_lruvec(folio);
>>>>>>> spin_lock(&lruvec->lru_lock);
>>>>>>> - lruvec_memcg_debug(lruvec, folio);
>>>>>>> +
>>>>>>> + if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
>>>>>>> + spin_unlock(&lruvec->lru_lock);
>>>>>>> + goto retry;
>>>>>>> + }
>>>>>>> +
>>>>>>> + /*
>>>>>>> + * Preemption is disabled in the internal of spin_lock, which can serve
>>>>>>> + * as RCU read-side critical sections.
>>>>>>> + */
>>>>>>> + rcu_read_unlock();
>>>>>> The code looks right to me, but I don't understand the comment: why do
>>>>>> we care that the rcu read-side continues? With the lru_lock held,
>>>>>> reparenting is on hold and the lruvec cannot be rcu-freed anyway, no?
>>>>>>
>>>>> Right. We could hold rcu read lock until end of reparting. So you mean
>>>>> we do rcu_read_unlock in folio_lruvec_lock()?
>>>> The comment seems to suggest that disabling preemption is what keeps
>>>> the lruvec alive. But it's the lru_lock that keeps it alive. The
>>>> cgroup destruction path tries to take the lru_lock long before it even
>>>> gets to synchronize_rcu(). Once you hold the lru_lock, having an
>>>> implied read-side critical section as well doesn't seem to matter.
>>>>
>>> Well, I thought that spinlocks have implicit read-side critical sections
>>> because it disables preemption (I learned from the comments above
>>> synchronize_rcu() that says interrupts, preemption, or softirqs have been
>>> disabled also serve as RCU read-side critical sections). So I have a
>>> question: is it still true in a PREEMPT_RT kernel (I am not familiar with
>>> this)?
>> Yes, but you're missing my point.
>>
>>>> Should the comment be deleted?
>>> I think we could remove the comments. If the above question is false, seems
>>> like we should continue holding rcu read lock.
>> It's true.
>>
> Thanks for your answer.
>
>> But assume it's false for a second. Why would you need to continue
>> holding it? What would it protect? The lruvec would be pinned by the
>> spinlock even if it DIDN'T imply an RCU lock, right?
>>
>> So I don't understand the point of the comment. If the implied RCU
>> lock is protecting something not covered by the bare spinlock itself,
>> it should be added to the comment. Otherwise, the comment should go.
>>
> Got it. Thanks for your nice explanation. I'll remove
> the comment here.

Note that there is a similar comment in patch 6 which may have to be
removed as well.

Cheers,
Longman


2022-05-27 18:57:38

by Muchun Song

[permalink] [raw]
Subject: Re: [PATCH v4 03/11] mm: memcontrol: make lruvec lock safe when LRU pages are reparented

On Thu, May 26, 2022 at 04:17:27PM -0400, Waiman Long wrote:
> On 5/25/22 11:38, Muchun Song wrote:
> > On Wed, May 25, 2022 at 10:48:54AM -0400, Johannes Weiner wrote:
> > > On Wed, May 25, 2022 at 09:03:59PM +0800, Muchun Song wrote:
> > > > On Wed, May 25, 2022 at 08:30:15AM -0400, Johannes Weiner wrote:
> > > > > On Wed, May 25, 2022 at 05:53:30PM +0800, Muchun Song wrote:
> > > > > > On Tue, May 24, 2022 at 03:27:20PM -0400, Johannes Weiner wrote:
> > > > > > > On Tue, May 24, 2022 at 02:05:43PM +0800, Muchun Song wrote:
> > > > > > > > @@ -1230,10 +1213,23 @@ void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
> > > > > > > > */
> > > > > > > > struct lruvec *folio_lruvec_lock(struct folio *folio)
> > > > > > > > {
> > > > > > > > - struct lruvec *lruvec = folio_lruvec(folio);
> > > > > > > > + struct lruvec *lruvec;
> > > > > > > > + rcu_read_lock();
> > > > > > > > +retry:
> > > > > > > > + lruvec = folio_lruvec(folio);
> > > > > > > > spin_lock(&lruvec->lru_lock);
> > > > > > > > - lruvec_memcg_debug(lruvec, folio);
> > > > > > > > +
> > > > > > > > + if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
> > > > > > > > + spin_unlock(&lruvec->lru_lock);
> > > > > > > > + goto retry;
> > > > > > > > + }
> > > > > > > > +
> > > > > > > > + /*
> > > > > > > > + * Preemption is disabled in the internal of spin_lock, which can serve
> > > > > > > > + * as RCU read-side critical sections.
> > > > > > > > + */
> > > > > > > > + rcu_read_unlock();
> > > > > > > The code looks right to me, but I don't understand the comment: why do
> > > > > > > we care that the rcu read-side continues? With the lru_lock held,
> > > > > > > reparenting is on hold and the lruvec cannot be rcu-freed anyway, no?
> > > > > > >
> > > > > > Right. We could hold rcu read lock until end of reparting. So you mean
> > > > > > we do rcu_read_unlock in folio_lruvec_lock()?
> > > > > The comment seems to suggest that disabling preemption is what keeps
> > > > > the lruvec alive. But it's the lru_lock that keeps it alive. The
> > > > > cgroup destruction path tries to take the lru_lock long before it even
> > > > > gets to synchronize_rcu(). Once you hold the lru_lock, having an
> > > > > implied read-side critical section as well doesn't seem to matter.
> > > > >
> > > > Well, I thought that spinlocks have implicit read-side critical sections
> > > > because it disables preemption (I learned from the comments above
> > > > synchronize_rcu() that says interrupts, preemption, or softirqs have been
> > > > disabled also serve as RCU read-side critical sections). So I have a
> > > > question: is it still true in a PREEMPT_RT kernel (I am not familiar with
> > > > this)?
> > > Yes, but you're missing my point.
> > >
> > > > > Should the comment be deleted?
> > > > I think we could remove the comments. If the above question is false, seems
> > > > like we should continue holding rcu read lock.
> > > It's true.
> > >
> > Thanks for your answer.
> >
> > > But assume it's false for a second. Why would you need to continue
> > > holding it? What would it protect? The lruvec would be pinned by the
> > > spinlock even if it DIDN'T imply an RCU lock, right?
> > >
> > > So I don't understand the point of the comment. If the implied RCU
> > > lock is protecting something not covered by the bare spinlock itself,
> > > it should be added to the comment. Otherwise, the comment should go.
> > >
> > Got it. Thanks for your nice explanation. I'll remove
> > the comment here.
>
> Note that there is a similar comment in patch 6 which may have to be removed
> as well.
>

I have noticed that. Thank you for reminding me as well.


2022-05-28 20:05:59

by Muchun Song

[permalink] [raw]
Subject: Re: [PATCH v4 04/11] mm: vmscan: rework move_pages_to_lru()

On Tue, May 24, 2022 at 07:43:03PM -0700, Roman Gushchin wrote:
> On Tue, May 24, 2022 at 02:05:44PM +0800, Muchun Song wrote:
> > In the later patch, we will reparent the LRU pages. The pages moved to
> > appropriate LRU list can be reparented during the process of the
> > move_pages_to_lru(). So holding a lruvec lock by the caller is wrong, we
> > should use the more general interface of folio_lruvec_relock_irq() to
> > acquire the correct lruvec lock.
> >
> > Signed-off-by: Muchun Song <[email protected]>
>
> With changes asked by Johannes :
>
> Acked-by: Roman Gushchin <[email protected]>
>

Thanks Roman.