2020-01-09 20:55:14

by Roman Gushchin

Subject: [PATCH v2 0/6] mm: memcg: kmem API cleanup

This patchset aims to clean up the kernel memory charging API.
It doesn't bring any functional changes; it just removes unused
arguments, renames some functions, and fixes some comments.

Currently it's not obvious which functions are the most basic
(memcg_kmem_(un)charge_memcg()) and which are built on top of them
(memcg_kmem_(un)charge()). The patchset renames these functions
and removes the unused arguments:

TL;DR:
was:
memcg_kmem_charge_memcg(page, gfp, order, memcg)
memcg_kmem_uncharge_memcg(memcg, nr_pages)
memcg_kmem_charge(page, gfp, order)
memcg_kmem_uncharge(page, order)

now:
memcg_kmem_charge(memcg, gfp, nr_pages)
memcg_kmem_uncharge(memcg, nr_pages)
memcg_kmem_charge_page(page, gfp, order)
memcg_kmem_uncharge_page(page, order)
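
For illustration only, a hypothetical call site using the new naming could
look like this (a sketch, not part of the series; the surrounding context
and error labels are made up):

    /* page-level helper: charges the current cgroup and sets PageKmemcg */
    if (memcg_kmem_charge_page(page, GFP_KERNEL, 0))
            goto fail;
    ...
    memcg_kmem_uncharge_page(page, 0);

    /* memcg-level helper: charges nr_pages to an already known memcg */
    if (memcg_kmem_charge(memcg, GFP_KERNEL, nr_pages))
            goto fail;
    ...
    memcg_kmem_uncharge(memcg, nr_pages);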


v2:
1) Dropped the first patch, which was incorrect. Thanks, Andrew!


Roman Gushchin (6):
mm: kmem: cleanup (__)memcg_kmem_charge_memcg() arguments
mm: kmem: cleanup memcg_kmem_uncharge_memcg() arguments
mm: kmem: rename memcg_kmem_(un)charge() into
memcg_kmem_(un)charge_page()
mm: kmem: switch to nr_pages in (__)memcg_kmem_charge_memcg()
mm: memcg/slab: cache page number in memcg_(un)charge_slab()
mm: kmem: rename (__)memcg_kmem_(un)charge_memcg() to
__memcg_kmem_(un)charge()

fs/pipe.c | 2 +-
include/linux/memcontrol.h | 42 +++++++++++++++--------------
kernel/fork.c | 9 ++++---
mm/memcontrol.c | 54 ++++++++++++++++++--------------------
mm/page_alloc.c | 4 +--
mm/slab.h | 22 +++++++++-------
6 files changed, 68 insertions(+), 65 deletions(-)

--
2.21.1


2020-01-09 20:55:54

by Roman Gushchin

Subject: [PATCH v2 2/6] mm: kmem: cleanup memcg_kmem_uncharge_memcg() arguments

Drop the unused page argument and put the memcg pointer first. This makes
the function consistent with its peers:
__memcg_kmem_uncharge_memcg(), memcg_kmem_charge_memcg(), etc.
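
An illustrative before/after of a call site (a sketch; the actual change is
the mm/slab.h hunk below):

    /* was */
    memcg_kmem_uncharge_memcg(page, order, memcg);

    /* now */
    memcg_kmem_uncharge_memcg(memcg, order);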

Signed-off-by: Roman Gushchin <[email protected]>
---
include/linux/memcontrol.h | 4 ++--
mm/slab.h | 2 +-
2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index c954209fd685..900a9f884260 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1409,8 +1409,8 @@ static inline int memcg_kmem_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp,
return 0;
}

-static inline void memcg_kmem_uncharge_memcg(struct page *page, int order,
- struct mem_cgroup *memcg)
+static inline void memcg_kmem_uncharge_memcg(struct mem_cgroup *memcg,
+ int order)
{
if (memcg_kmem_enabled())
__memcg_kmem_uncharge_memcg(memcg, 1 << order);
diff --git a/mm/slab.h b/mm/slab.h
index c4c93e991250..e7da63fb8211 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -395,7 +395,7 @@ static __always_inline void memcg_uncharge_slab(struct page *page, int order,
if (likely(!mem_cgroup_is_root(memcg))) {
lruvec = mem_cgroup_lruvec(memcg, page_pgdat(page));
mod_lruvec_state(lruvec, cache_vmstat_idx(s), -(1 << order));
- memcg_kmem_uncharge_memcg(page, order, memcg);
+ memcg_kmem_uncharge_memcg(memcg, order);
} else {
mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
-(1 << order));
--
2.21.1

2020-01-09 20:56:10

by Roman Gushchin

Subject: [PATCH v2 3/6] mm: kmem: rename memcg_kmem_(un)charge() into memcg_kmem_(un)charge_page()

Rename (__)memcg_kmem_(un)charge() into (__)memcg_kmem_(un)charge_page()
to better reflect what they are actually doing:
1) call __memcg_kmem_(un)charge_memcg() to actually charge or
uncharge the current memcg
2) set or clear the PageKmemcg flag
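
As a simplified sketch of the resulting layering (not the literal function
body; pieced together from the hunks below):

    int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order)
    {
            struct mem_cgroup *memcg = get_mem_cgroup_from_current();
            int ret = 0;

            if (!mem_cgroup_is_root(memcg)) {
                    /* 1) charge the current memcg */
                    ret = __memcg_kmem_charge_memcg(memcg, gfp, order);
                    if (!ret) {
                            /* 2) remember the memcg and mark the page */
                            page->mem_cgroup = memcg;
                            __SetPageKmemcg(page);
                    }
            }
            css_put(&memcg->css);
            return ret;
    }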

Signed-off-by: Roman Gushchin <[email protected]>
---
fs/pipe.c | 2 +-
include/linux/memcontrol.h | 23 +++++++++++++----------
kernel/fork.c | 9 +++++----
mm/memcontrol.c | 8 ++++----
mm/page_alloc.c | 4 ++--
5 files changed, 25 insertions(+), 21 deletions(-)

diff --git a/fs/pipe.c b/fs/pipe.c
index 57502c3c0fba..f1851f7ecbd6 100644
--- a/fs/pipe.c
+++ b/fs/pipe.c
@@ -143,7 +143,7 @@ static int anon_pipe_buf_steal(struct pipe_inode_info *pipe,
struct page *page = buf->page;

if (page_count(page) == 1) {
- memcg_kmem_uncharge(page, 0);
+ memcg_kmem_uncharge_page(page, 0);
__SetPageLocked(page);
return 0;
}
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 900a9f884260..4ee0c345e905 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1362,8 +1362,8 @@ struct kmem_cache *memcg_kmem_get_cache(struct kmem_cache *cachep);
void memcg_kmem_put_cache(struct kmem_cache *cachep);

#ifdef CONFIG_MEMCG_KMEM
-int __memcg_kmem_charge(struct page *page, gfp_t gfp, int order);
-void __memcg_kmem_uncharge(struct page *page, int order);
+int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order);
+void __memcg_kmem_uncharge_page(struct page *page, int order);
int __memcg_kmem_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp, int order);
void __memcg_kmem_uncharge_memcg(struct mem_cgroup *memcg,
unsigned int nr_pages);
@@ -1388,17 +1388,18 @@ static inline bool memcg_kmem_enabled(void)
return static_branch_unlikely(&memcg_kmem_enabled_key);
}

-static inline int memcg_kmem_charge(struct page *page, gfp_t gfp, int order)
+static inline int memcg_kmem_charge_page(struct page *page, gfp_t gfp,
+ int order)
{
if (memcg_kmem_enabled())
- return __memcg_kmem_charge(page, gfp, order);
+ return __memcg_kmem_charge_page(page, gfp, order);
return 0;
}

-static inline void memcg_kmem_uncharge(struct page *page, int order)
+static inline void memcg_kmem_uncharge_page(struct page *page, int order)
{
if (memcg_kmem_enabled())
- __memcg_kmem_uncharge(page, order);
+ __memcg_kmem_uncharge_page(page, order);
}

static inline int memcg_kmem_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp,
@@ -1428,21 +1429,23 @@ static inline int memcg_cache_id(struct mem_cgroup *memcg)

#else

-static inline int memcg_kmem_charge(struct page *page, gfp_t gfp, int order)
+static inline int memcg_kmem_charge_page(struct page *page, gfp_t gfp,
+ int order)
{
return 0;
}

-static inline void memcg_kmem_uncharge(struct page *page, int order)
+static inline void memcg_kmem_uncharge_page(struct page *page, int order)
{
}

-static inline int __memcg_kmem_charge(struct page *page, gfp_t gfp, int order)
+static inline int __memcg_kmem_charge_page(struct page *page, gfp_t gfp,
+ int order)
{
return 0;
}

-static inline void __memcg_kmem_uncharge(struct page *page, int order)
+static inline void __memcg_kmem_uncharge_page(struct page *page, int order)
{
}

diff --git a/kernel/fork.c b/kernel/fork.c
index 2508a4f238a3..6e410a189683 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -281,7 +281,7 @@ static inline void free_thread_stack(struct task_struct *tsk)
MEMCG_KERNEL_STACK_KB,
-(int)(PAGE_SIZE / 1024));

- memcg_kmem_uncharge(vm->pages[i], 0);
+ memcg_kmem_uncharge_page(vm->pages[i], 0);
}

for (i = 0; i < NR_CACHED_STACKS; i++) {
@@ -413,12 +413,13 @@ static int memcg_charge_kernel_stack(struct task_struct *tsk)

for (i = 0; i < THREAD_SIZE / PAGE_SIZE; i++) {
/*
- * If memcg_kmem_charge() fails, page->mem_cgroup
- * pointer is NULL, and both memcg_kmem_uncharge()
+ * If memcg_kmem_charge_page() fails, page->mem_cgroup
+ * pointer is NULL, and both memcg_kmem_uncharge_page()
* and mod_memcg_page_state() in free_thread_stack()
* will ignore this page. So it's safe.
*/
- ret = memcg_kmem_charge(vm->pages[i], GFP_KERNEL, 0);
+ ret = memcg_kmem_charge_page(vm->pages[i], GFP_KERNEL,
+ 0);
if (ret)
return ret;

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 6ef38f669923..6bdf040e4615 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2855,14 +2855,14 @@ int __memcg_kmem_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp, int order)
}

/**
- * __memcg_kmem_charge: charge a kmem page to the current memory cgroup
+ * __memcg_kmem_charge_page: charge a kmem page to the current memory cgroup
* @page: page to charge
* @gfp: reclaim mode
* @order: allocation order
*
* Returns 0 on success, an error code on failure.
*/
-int __memcg_kmem_charge(struct page *page, gfp_t gfp, int order)
+int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order)
{
struct mem_cgroup *memcg;
int ret = 0;
@@ -2898,11 +2898,11 @@ void __memcg_kmem_uncharge_memcg(struct mem_cgroup *memcg,
page_counter_uncharge(&memcg->memsw, nr_pages);
}
/**
- * __memcg_kmem_uncharge: uncharge a kmem page
+ * __memcg_kmem_uncharge_page: uncharge a kmem page
* @page: page to uncharge
* @order: allocation order
*/
-void __memcg_kmem_uncharge(struct page *page, int order)
+void __memcg_kmem_uncharge_page(struct page *page, int order)
{
struct mem_cgroup *memcg = page->mem_cgroup;
unsigned int nr_pages = 1 << order;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4785a8a2040e..8fca5b806139 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1159,7 +1159,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
if (PageMappingFlags(page))
page->mapping = NULL;
if (memcg_kmem_enabled() && PageKmemcg(page))
- __memcg_kmem_uncharge(page, order);
+ __memcg_kmem_uncharge_page(page, order);
if (check_free)
bad += free_pages_check(page);
if (bad)
@@ -4777,7 +4777,7 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,

out:
if (memcg_kmem_enabled() && (gfp_mask & __GFP_ACCOUNT) && page &&
- unlikely(__memcg_kmem_charge(page, gfp_mask, order) != 0)) {
+ unlikely(__memcg_kmem_charge_page(page, gfp_mask, order) != 0)) {
__free_pages(page, order);
page = NULL;
}
--
2.21.1

2020-01-09 20:56:20

by Roman Gushchin

Subject: [PATCH v2 5/6] mm: memcg/slab: cache page number in memcg_(un)charge_slab()

There are many places in memcg_charge_slab() and memcg_uncharge_slab()
which calculate the number of pages to charge, css references to grab,
etc., depending on the order of the slab page.

Let's simplify the code by calculating it once and caching it in a
local variable.
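
Roughly, the repeated pattern being replaced (illustrative; see the
mm/slab.h hunks below):

    /* was: "1 << order" recomputed at every use */
    mod_lruvec_state(lruvec, cache_vmstat_idx(s), 1 << order);
    percpu_ref_get_many(&s->memcg_params.refcnt, 1 << order);
    css_put_many(&memcg->css, 1 << order);

    /* now: computed once per function */
    unsigned int nr_pages = 1 << order;
    ...
    mod_lruvec_state(lruvec, cache_vmstat_idx(s), nr_pages);
    percpu_ref_get_many(&s->memcg_params.refcnt, nr_pages);
    css_put_many(&memcg->css, nr_pages);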

Signed-off-by: Roman Gushchin <[email protected]>
---
mm/slab.h | 22 ++++++++++++----------
1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index d96c87a30a9b..43f8ce4aa325 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -348,6 +348,7 @@ static __always_inline int memcg_charge_slab(struct page *page,
gfp_t gfp, int order,
struct kmem_cache *s)
{
+ unsigned int nr_pages = 1 << order;
struct mem_cgroup *memcg;
struct lruvec *lruvec;
int ret;
@@ -360,21 +361,21 @@ static __always_inline int memcg_charge_slab(struct page *page,

if (unlikely(!memcg || mem_cgroup_is_root(memcg))) {
mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
- (1 << order));
- percpu_ref_get_many(&s->memcg_params.refcnt, 1 << order);
+ nr_pages);
+ percpu_ref_get_many(&s->memcg_params.refcnt, nr_pages);
return 0;
}

- ret = memcg_kmem_charge_memcg(memcg, gfp, 1 << order);
+ ret = memcg_kmem_charge_memcg(memcg, gfp, nr_pages);
if (ret)
goto out;

lruvec = mem_cgroup_lruvec(memcg, page_pgdat(page));
- mod_lruvec_state(lruvec, cache_vmstat_idx(s), 1 << order);
+ mod_lruvec_state(lruvec, cache_vmstat_idx(s), nr_pages);

/* transer try_charge() page references to kmem_cache */
- percpu_ref_get_many(&s->memcg_params.refcnt, 1 << order);
- css_put_many(&memcg->css, 1 << order);
+ percpu_ref_get_many(&s->memcg_params.refcnt, nr_pages);
+ css_put_many(&memcg->css, nr_pages);
out:
css_put(&memcg->css);
return ret;
@@ -387,6 +388,7 @@ static __always_inline int memcg_charge_slab(struct page *page,
static __always_inline void memcg_uncharge_slab(struct page *page, int order,
struct kmem_cache *s)
{
+ unsigned int nr_pages = 1 << order;
struct mem_cgroup *memcg;
struct lruvec *lruvec;

@@ -394,15 +396,15 @@ static __always_inline void memcg_uncharge_slab(struct page *page, int order,
memcg = READ_ONCE(s->memcg_params.memcg);
if (likely(!mem_cgroup_is_root(memcg))) {
lruvec = mem_cgroup_lruvec(memcg, page_pgdat(page));
- mod_lruvec_state(lruvec, cache_vmstat_idx(s), -(1 << order));
- memcg_kmem_uncharge_memcg(memcg, order);
+ mod_lruvec_state(lruvec, cache_vmstat_idx(s), -nr_pages);
+ memcg_kmem_uncharge_memcg(memcg, nr_pages);
} else {
mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
- -(1 << order));
+ -nr_pages);
}
rcu_read_unlock();

- percpu_ref_put_many(&s->memcg_params.refcnt, 1 << order);
+ percpu_ref_put_many(&s->memcg_params.refcnt, nr_pages);
}

extern void slab_init_memcg_params(struct kmem_cache *);
--
2.21.1

2020-01-09 20:56:37

by Roman Gushchin

Subject: [PATCH v2 6/6] mm: kmem: rename (__)memcg_kmem_(un)charge_memcg() to __memcg_kmem_(un)charge()

Drop the _memcg suffix from the (__)memcg_kmem_(un)charge functions.
It's shorter and more obvious.

These are the most basic functions, which just (un)charge the given
cgroup with the given number of pages.

Also fix up the corresponding comments.
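
For reference, the resulting prototypes (mirroring the
include/linux/memcontrol.h hunk below):

    /* basic, memcg-level helpers */
    int __memcg_kmem_charge(struct mem_cgroup *memcg, gfp_t gfp,
                            unsigned int nr_pages);
    void __memcg_kmem_uncharge(struct mem_cgroup *memcg, unsigned int nr_pages);

    /* page-level helpers built on top of them */
    int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order);
    void __memcg_kmem_uncharge_page(struct page *page, int order);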

Signed-off-by: Roman Gushchin <[email protected]>
---
include/linux/memcontrol.h | 19 +++++++++---------
mm/memcontrol.c | 40 +++++++++++++++++++-------------------
mm/slab.h | 4 ++--
3 files changed, 31 insertions(+), 32 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 851c373edb74..c372bed6be80 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1362,12 +1362,11 @@ struct kmem_cache *memcg_kmem_get_cache(struct kmem_cache *cachep);
void memcg_kmem_put_cache(struct kmem_cache *cachep);

#ifdef CONFIG_MEMCG_KMEM
+int __memcg_kmem_charge(struct mem_cgroup *memcg, gfp_t gfp,
+ unsigned int nr_pages);
+void __memcg_kmem_uncharge(struct mem_cgroup *memcg, unsigned int nr_pages);
int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order);
void __memcg_kmem_uncharge_page(struct page *page, int order);
-int __memcg_kmem_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp,
- unsigned int nr_pages);
-void __memcg_kmem_uncharge_memcg(struct mem_cgroup *memcg,
- unsigned int nr_pages);

extern struct static_key_false memcg_kmem_enabled_key;
extern struct workqueue_struct *memcg_kmem_cache_wq;
@@ -1403,19 +1402,19 @@ static inline void memcg_kmem_uncharge_page(struct page *page, int order)
__memcg_kmem_uncharge_page(page, order);
}

-static inline int memcg_kmem_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp,
- unsigned int nr_pages)
+static inline int memcg_kmem_charge(struct mem_cgroup *memcg, gfp_t gfp,
+ unsigned int nr_pages)
{
if (memcg_kmem_enabled())
- return __memcg_kmem_charge_memcg(memcg, gfp, nr_pages);
+ return __memcg_kmem_charge(memcg, gfp, nr_pages);
return 0;
}

-static inline void memcg_kmem_uncharge_memcg(struct mem_cgroup *memcg,
- unsigned int nr_pages)
+static inline void memcg_kmem_uncharge(struct mem_cgroup *memcg,
+ unsigned int nr_pages)
{
if (memcg_kmem_enabled())
- __memcg_kmem_uncharge_memcg(memcg, nr_pages);
+ __memcg_kmem_uncharge(memcg, nr_pages);
}

/*
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 8dbfb9fed9d8..8bcd231e6e7b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2819,15 +2819,15 @@ void memcg_kmem_put_cache(struct kmem_cache *cachep)
}

/**
- * __memcg_kmem_charge_memcg: charge a kmem page
+ * __memcg_kmem_charge: charge a number of kernel pages to a memcg
* @memcg: memory cgroup to charge
* @gfp: reclaim mode
* @nr_pages: number of pages to charge
*
* Returns 0 on success, an error code on failure.
*/
-int __memcg_kmem_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp,
- unsigned int nr_pages)
+int __memcg_kmem_charge(struct mem_cgroup *memcg, gfp_t gfp,
+ unsigned int nr_pages)
{
struct page_counter *counter;
int ret;
@@ -2854,6 +2854,21 @@ int __memcg_kmem_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp,
return 0;
}

+/**
+ * __memcg_kmem_uncharge: uncharge a number of kernel pages from a memcg
+ * @memcg: memcg to uncharge
+ * @nr_pages: number of pages to uncharge
+ */
+void __memcg_kmem_uncharge(struct mem_cgroup *memcg, unsigned int nr_pages)
+{
+ if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
+ page_counter_uncharge(&memcg->kmem, nr_pages);
+
+ page_counter_uncharge(&memcg->memory, nr_pages);
+ if (do_memsw_account())
+ page_counter_uncharge(&memcg->memsw, nr_pages);
+}
+
/**
* __memcg_kmem_charge_page: charge a kmem page to the current memory cgroup
* @page: page to charge
@@ -2872,7 +2887,7 @@ int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order)

memcg = get_mem_cgroup_from_current();
if (!mem_cgroup_is_root(memcg)) {
- ret = __memcg_kmem_charge_memcg(memcg, gfp, 1 << order);
+ ret = __memcg_kmem_charge(memcg, gfp, 1 << order);
if (!ret) {
page->mem_cgroup = memcg;
__SetPageKmemcg(page);
@@ -2882,21 +2897,6 @@ int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order)
return ret;
}

-/**
- * __memcg_kmem_uncharge_memcg: uncharge a kmem page
- * @memcg: memcg to uncharge
- * @nr_pages: number of pages to uncharge
- */
-void __memcg_kmem_uncharge_memcg(struct mem_cgroup *memcg,
- unsigned int nr_pages)
-{
- if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
- page_counter_uncharge(&memcg->kmem, nr_pages);
-
- page_counter_uncharge(&memcg->memory, nr_pages);
- if (do_memsw_account())
- page_counter_uncharge(&memcg->memsw, nr_pages);
-}
/**
* __memcg_kmem_uncharge_page: uncharge a kmem page
* @page: page to uncharge
@@ -2911,7 +2911,7 @@ void __memcg_kmem_uncharge_page(struct page *page, int order)
return;

VM_BUG_ON_PAGE(mem_cgroup_is_root(memcg), page);
- __memcg_kmem_uncharge_memcg(memcg, nr_pages);
+ __memcg_kmem_uncharge(memcg, nr_pages);
page->mem_cgroup = NULL;

/* slab pages do not have PageKmemcg flag set */
diff --git a/mm/slab.h b/mm/slab.h
index 43f8ce4aa325..207c83ef6e06 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -366,7 +366,7 @@ static __always_inline int memcg_charge_slab(struct page *page,
return 0;
}

- ret = memcg_kmem_charge_memcg(memcg, gfp, nr_pages);
+ ret = memcg_kmem_charge(memcg, gfp, nr_pages);
if (ret)
goto out;

@@ -397,7 +397,7 @@ static __always_inline void memcg_uncharge_slab(struct page *page, int order,
if (likely(!mem_cgroup_is_root(memcg))) {
lruvec = mem_cgroup_lruvec(memcg, page_pgdat(page));
mod_lruvec_state(lruvec, cache_vmstat_idx(s), -nr_pages);
- memcg_kmem_uncharge_memcg(memcg, nr_pages);
+ memcg_kmem_uncharge(memcg, nr_pages);
} else {
mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
-nr_pages);
--
2.21.1

2020-01-09 20:56:55

by Roman Gushchin

Subject: [PATCH v2 4/6] mm: kmem: switch to nr_pages in (__)memcg_kmem_charge_memcg()

These functions charge the given number of kernel pages to the given
memory cgroup. The number doesn't have to be a power of two. Let's make
them take an unsigned int nr_pages argument instead of the page order.

It makes them consistent with the corresponding uncharge functions and
with functions like mem_cgroup_charge_skmem(memcg, nr_pages).
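
Callers that think in page orders simply pass the count explicitly, while
callers that already know a page count pass it as-is (sketch):

    /* order-based caller, e.g. memcg_charge_slab() */
    ret = memcg_kmem_charge_memcg(memcg, gfp, 1 << order);

    /* count-based caller, in the spirit of mem_cgroup_charge_skmem() */
    ret = memcg_kmem_charge_memcg(memcg, gfp, nr_pages);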

Signed-off-by: Roman Gushchin <[email protected]>
---
include/linux/memcontrol.h | 11 ++++++-----
mm/memcontrol.c | 8 ++++----
mm/slab.h | 2 +-
3 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 4ee0c345e905..851c373edb74 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1364,7 +1364,8 @@ void memcg_kmem_put_cache(struct kmem_cache *cachep);
#ifdef CONFIG_MEMCG_KMEM
int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order);
void __memcg_kmem_uncharge_page(struct page *page, int order);
-int __memcg_kmem_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp, int order);
+int __memcg_kmem_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp,
+ unsigned int nr_pages);
void __memcg_kmem_uncharge_memcg(struct mem_cgroup *memcg,
unsigned int nr_pages);

@@ -1403,18 +1404,18 @@ static inline void memcg_kmem_uncharge_page(struct page *page, int order)
}

static inline int memcg_kmem_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp,
- int order)
+ unsigned int nr_pages)
{
if (memcg_kmem_enabled())
- return __memcg_kmem_charge_memcg(memcg, gfp, order);
+ return __memcg_kmem_charge_memcg(memcg, gfp, nr_pages);
return 0;
}

static inline void memcg_kmem_uncharge_memcg(struct mem_cgroup *memcg,
- int order)
+ unsigned int nr_pages)
{
if (memcg_kmem_enabled())
- __memcg_kmem_uncharge_memcg(memcg, 1 << order);
+ __memcg_kmem_uncharge_memcg(memcg, nr_pages);
}

/*
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 6bdf040e4615..8dbfb9fed9d8 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2822,13 +2822,13 @@ void memcg_kmem_put_cache(struct kmem_cache *cachep)
* __memcg_kmem_charge_memcg: charge a kmem page
* @memcg: memory cgroup to charge
* @gfp: reclaim mode
- * @order: allocation order
+ * @nr_pages: number of pages to charge
*
* Returns 0 on success, an error code on failure.
*/
-int __memcg_kmem_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp, int order)
+int __memcg_kmem_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp,
+ unsigned int nr_pages)
{
- unsigned int nr_pages = 1 << order;
struct page_counter *counter;
int ret;

@@ -2872,7 +2872,7 @@ int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order)

memcg = get_mem_cgroup_from_current();
if (!mem_cgroup_is_root(memcg)) {
- ret = __memcg_kmem_charge_memcg(memcg, gfp, order);
+ ret = __memcg_kmem_charge_memcg(memcg, gfp, 1 << order);
if (!ret) {
page->mem_cgroup = memcg;
__SetPageKmemcg(page);
diff --git a/mm/slab.h b/mm/slab.h
index e7da63fb8211..d96c87a30a9b 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -365,7 +365,7 @@ static __always_inline int memcg_charge_slab(struct page *page,
return 0;
}

- ret = memcg_kmem_charge_memcg(memcg, gfp, order);
+ ret = memcg_kmem_charge_memcg(memcg, gfp, 1 << order);
if (ret)
goto out;

--
2.21.1

2020-01-09 20:57:27

by Roman Gushchin

Subject: [PATCH v2 1/6] mm: kmem: cleanup (__)memcg_kmem_charge_memcg() arguments

The first argument of memcg_kmem_charge_memcg() and
__memcg_kmem_charge_memcg() is the page pointer, which is not used.
Let's drop it.

The memcg pointer is currently passed as the last argument. Move it to
the first position for consistency with other memcg functions,
e.g. __memcg_kmem_uncharge_memcg() or try_charge().
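
Illustrative call-site change (a sketch; the real one is in the mm/slab.h
hunk below):

    /* was */
    ret = memcg_kmem_charge_memcg(page, gfp, order, memcg);

    /* now */
    ret = memcg_kmem_charge_memcg(memcg, gfp, order);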

Signed-off-by: Roman Gushchin <[email protected]>
---
include/linux/memcontrol.h | 9 ++++-----
mm/memcontrol.c | 8 +++-----
mm/slab.h | 2 +-
3 files changed, 8 insertions(+), 11 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index a7a0a1a5c8d5..c954209fd685 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1364,8 +1364,7 @@ void memcg_kmem_put_cache(struct kmem_cache *cachep);
#ifdef CONFIG_MEMCG_KMEM
int __memcg_kmem_charge(struct page *page, gfp_t gfp, int order);
void __memcg_kmem_uncharge(struct page *page, int order);
-int __memcg_kmem_charge_memcg(struct page *page, gfp_t gfp, int order,
- struct mem_cgroup *memcg);
+int __memcg_kmem_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp, int order);
void __memcg_kmem_uncharge_memcg(struct mem_cgroup *memcg,
unsigned int nr_pages);

@@ -1402,11 +1401,11 @@ static inline void memcg_kmem_uncharge(struct page *page, int order)
__memcg_kmem_uncharge(page, order);
}

-static inline int memcg_kmem_charge_memcg(struct page *page, gfp_t gfp,
- int order, struct mem_cgroup *memcg)
+static inline int memcg_kmem_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp,
+ int order)
{
if (memcg_kmem_enabled())
- return __memcg_kmem_charge_memcg(page, gfp, order, memcg);
+ return __memcg_kmem_charge_memcg(memcg, gfp, order);
return 0;
}

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index c5b5f74cfd4d..6ef38f669923 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2820,15 +2820,13 @@ void memcg_kmem_put_cache(struct kmem_cache *cachep)

/**
* __memcg_kmem_charge_memcg: charge a kmem page
- * @page: page to charge
+ * @memcg: memory cgroup to charge
* @gfp: reclaim mode
* @order: allocation order
- * @memcg: memory cgroup to charge
*
* Returns 0 on success, an error code on failure.
*/
-int __memcg_kmem_charge_memcg(struct page *page, gfp_t gfp, int order,
- struct mem_cgroup *memcg)
+int __memcg_kmem_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp, int order)
{
unsigned int nr_pages = 1 << order;
struct page_counter *counter;
@@ -2874,7 +2872,7 @@ int __memcg_kmem_charge(struct page *page, gfp_t gfp, int order)

memcg = get_mem_cgroup_from_current();
if (!mem_cgroup_is_root(memcg)) {
- ret = __memcg_kmem_charge_memcg(page, gfp, order, memcg);
+ ret = __memcg_kmem_charge_memcg(memcg, gfp, order);
if (!ret) {
page->mem_cgroup = memcg;
__SetPageKmemcg(page);
diff --git a/mm/slab.h b/mm/slab.h
index 7e94700aa78c..c4c93e991250 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -365,7 +365,7 @@ static __always_inline int memcg_charge_slab(struct page *page,
return 0;
}

- ret = memcg_kmem_charge_memcg(page, gfp, order, memcg);
+ ret = memcg_kmem_charge_memcg(memcg, gfp, order);
if (ret)
goto out;

--
2.21.1

2020-01-11 00:24:44

by Shakeel Butt

Subject: Re: [PATCH v2 1/6] mm: kmem: cleanup (__)memcg_kmem_charge_memcg() arguments

On Thu, Jan 9, 2020 at 12:27 PM Roman Gushchin <[email protected]> wrote:
>
> The first argument of memcg_kmem_charge_memcg() and
> __memcg_kmem_charge_memcg() is the page pointer and it's not used.
> Let's drop it.
>
> Memcg pointer is passed as the last argument. Move it to
> the first place for consistency with other memcg functions,
> e.g. __memcg_kmem_uncharge_memcg() or try_charge().
>
> Signed-off-by: Roman Gushchin <[email protected]>

Reviewed-by: Shakeel Butt <[email protected]>

2020-01-11 00:26:19

by Shakeel Butt

Subject: Re: [PATCH v2 2/6] mm: kmem: cleanup memcg_kmem_uncharge_memcg() arguments

On Thu, Jan 9, 2020 at 12:27 PM Roman Gushchin <[email protected]> wrote:
>
> Drop the unused page argument and put the memcg pointer at the first
> place. This make the function consistent with its peers:
> __memcg_kmem_uncharge_memcg(), memcg_kmem_charge_memcg(), etc.
>
> Signed-off-by: Roman Gushchin <[email protected]>

Why not squash this patch in the previous one?

Reviewed-by: Shakeel Butt <[email protected]>

2020-01-11 00:28:01

by Shakeel Butt

Subject: Re: [PATCH v2 3/6] mm: kmem: rename memcg_kmem_(un)charge() into memcg_kmem_(un)charge_page()

On Thu, Jan 9, 2020 at 12:27 PM Roman Gushchin <[email protected]> wrote:
>
> Rename (__)memcg_kmem_(un)charge() into (__)memcg_kmem_(un)charge_page()
> to better reflect what they are actually doing:
> 1) call __memcg_kmem_(un)charge_memcg() to actually charge or
> uncharge the current memcg
> 2) set or clear the PageKmemcg flag
>
> Signed-off-by: Roman Gushchin <[email protected]>

Reviewed-by: Shakeel Butt <[email protected]>

2020-01-11 00:29:36

by Shakeel Butt

Subject: Re: [PATCH v2 4/6] mm: kmem: switch to nr_pages in (__)memcg_kmem_charge_memcg()

On Thu, Jan 9, 2020 at 12:27 PM Roman Gushchin <[email protected]> wrote:
>
> These functions are charging the given number of kernel pages to the
> given memory cgroup. The number doesn't have to be a power of two.
> Let's make them to take the unsigned int nr_pages as an argument
> instead of the page order.
>
> It makes them look consistent with the corresponding uncharge
> functions and functions like: mem_cgroup_charge_skmem(memcg, nr_pages).
>
> Signed-off-by: Roman Gushchin <[email protected]>

Reviewed-by: Shakeel Butt <[email protected]>

2020-01-11 00:30:42

by Shakeel Butt

Subject: Re: [PATCH v2 5/6] mm: memcg/slab: cache page number in memcg_(un)charge_slab()

On Thu, Jan 9, 2020 at 12:27 PM Roman Gushchin <[email protected]> wrote:
>
> There are many places in memcg_charge_slab() and memcg_uncharge_slab()
> which are calculating the number of pages to charge, css references to
> grab etc depending on the order of the slab page.
>
> Let's simplify the code by calculating it once and caching in the
> local variable.
>
> Signed-off-by: Roman Gushchin <[email protected]>

Reviewed-by: Shakeel Butt <[email protected]>

2020-01-11 00:34:04

by Shakeel Butt

Subject: Re: [PATCH v2 6/6] mm: kmem: rename (__)memcg_kmem_(un)charge_memcg() to __memcg_kmem_(un)charge()

On Thu, Jan 9, 2020 at 12:27 PM Roman Gushchin <[email protected]> wrote:
>
> Drop the _memcg suffix from (__)memcg_kmem_(un)charge functions.
> It's shorter and more obvious.
>
> These are the most basic functions which are just (un)charging the
> given cgroup with the given amount of pages.
>
> Also fix up the corresponding comments.
>
> Signed-off-by: Roman Gushchin <[email protected]>

Reviewed-by: Shakeel Butt <[email protected]>

2020-01-13 15:13:11

by Roman Gushchin

Subject: Re: [PATCH v2 2/6] mm: kmem: cleanup memcg_kmem_uncharge_memcg() arguments

On Fri, Jan 10, 2020 at 04:23:52PM -0800, Shakeel Butt wrote:
> On Thu, Jan 9, 2020 at 12:27 PM Roman Gushchin <[email protected]> wrote:
> >
> > Drop the unused page argument and put the memcg pointer at the first
> > place. This make the function consistent with its peers:
> > __memcg_kmem_uncharge_memcg(), memcg_kmem_charge_memcg(), etc.
> >
> > Signed-off-by: Roman Gushchin <[email protected]>
>
> Why not squash this patch in the previous one?

No particular reason; I just made these changes one by one.
It could perfectly well be squashed, but it's probably not worth another version.

>
> Reviewed-by: Shakeel Butt <[email protected]>

Thank you for the review!

2020-01-16 16:44:04

by Johannes Weiner

Subject: Re: [PATCH v2 1/6] mm: kmem: cleanup (__)memcg_kmem_charge_memcg() arguments

On Thu, Jan 09, 2020 at 12:26:54PM -0800, Roman Gushchin wrote:
> The first argument of memcg_kmem_charge_memcg() and
> __memcg_kmem_charge_memcg() is the page pointer and it's not used.
> Let's drop it.
>
> Memcg pointer is passed as the last argument. Move it to
> the first place for consistency with other memcg functions,
> e.g. __memcg_kmem_uncharge_memcg() or try_charge().
>
> Signed-off-by: Roman Gushchin <[email protected]>

Looks good to me.

Acked-by: Johannes Weiner <[email protected]>

2020-01-16 16:46:08

by Johannes Weiner

Subject: Re: [PATCH v2 2/6] mm: kmem: cleanup memcg_kmem_uncharge_memcg() arguments

On Thu, Jan 09, 2020 at 12:26:55PM -0800, Roman Gushchin wrote:
> Drop the unused page argument and put the memcg pointer at the first
> place. This make the function consistent with its peers:
> __memcg_kmem_uncharge_memcg(), memcg_kmem_charge_memcg(), etc.
>
> Signed-off-by: Roman Gushchin <[email protected]>

Acked-by: Johannes Weiner <[email protected]>

2020-01-16 16:49:35

by Johannes Weiner

Subject: Re: [PATCH v2 3/6] mm: kmem: rename memcg_kmem_(un)charge() into memcg_kmem_(un)charge_page()

On Thu, Jan 09, 2020 at 12:26:56PM -0800, Roman Gushchin wrote:
> Rename (__)memcg_kmem_(un)charge() into (__)memcg_kmem_(un)charge_page()

I almost bluescreened trying to parse this!

> to better reflect what they are actually doing:
> 1) call __memcg_kmem_(un)charge_memcg() to actually charge or
> uncharge the current memcg
> 2) set or clear the PageKmemcg flag
>
> Signed-off-by: Roman Gushchin <[email protected]>

Agreed, this is better.

Acked-by: Johannes Weiner <[email protected]>

2020-01-16 17:36:11

by Johannes Weiner

Subject: Re: [PATCH v2 6/6] mm: kmem: rename (__)memcg_kmem_(un)charge_memcg() to __memcg_kmem_(un)charge()

On Thu, Jan 09, 2020 at 12:26:59PM -0800, Roman Gushchin wrote:
> Drop the _memcg suffix from (__)memcg_kmem_(un)charge functions.
> It's shorter and more obvious.
>
> These are the most basic functions which are just (un)charging the
> given cgroup with the given amount of pages.
>
> Also fix up the corresponding comments.
>
> Signed-off-by: Roman Gushchin <[email protected]>

This looks good to me. I also appreciate the grouping of the functions
by layer (first charge/uncharge, then charge_page/uncharge_page).

Acked-by: Johannes Weiner <[email protected]>

2020-01-16 18:34:31

by Johannes Weiner

Subject: Re: [PATCH v2 4/6] mm: kmem: switch to nr_pages in (__)memcg_kmem_charge_memcg()

On Thu, Jan 09, 2020 at 12:26:57PM -0800, Roman Gushchin wrote:
> These functions are charging the given number of kernel pages to the
> given memory cgroup. The number doesn't have to be a power of two.
> Let's make them to take the unsigned int nr_pages as an argument
> instead of the page order.
>
> It makes them look consistent with the corresponding uncharge
> functions and functions like: mem_cgroup_charge_skmem(memcg, nr_pages).
>
> Signed-off-by: Roman Gushchin <[email protected]>

Acked-by: Johannes Weiner <[email protected]>

2020-01-16 20:26:05

by Johannes Weiner

Subject: Re: [PATCH v2 5/6] mm: memcg/slab: cache page number in memcg_(un)charge_slab()

On Thu, Jan 09, 2020 at 12:26:58PM -0800, Roman Gushchin wrote:
> There are many places in memcg_charge_slab() and memcg_uncharge_slab()
> which are calculating the number of pages to charge, css references to
> grab etc depending on the order of the slab page.
>
> Let's simplify the code by calculating it once and caching in the
> local variable.
>
> Signed-off-by: Roman Gushchin <[email protected]>

Acked-by: Johannes Weiner <[email protected]>

2020-01-16 21:35:30

by Roman Gushchin

Subject: Re: [PATCH v2 6/6] mm: kmem: rename (__)memcg_kmem_(un)charge_memcg() to __memcg_kmem_(un)charge()

On Thu, Jan 16, 2020 at 12:30:02PM -0500, Johannes Weiner wrote:
> On Thu, Jan 09, 2020 at 12:26:59PM -0800, Roman Gushchin wrote:
> > Drop the _memcg suffix from (__)memcg_kmem_(un)charge functions.
> > It's shorter and more obvious.
> >
> > These are the most basic functions which are just (un)charging the
> > given cgroup with the given amount of pages.
> >
> > Also fix up the corresponding comments.
> >
> > Signed-off-by: Roman Gushchin <[email protected]>
>
> This looks good to me. I also appreciate the grouping of the functions
> by layer (first charge/uncharge, then charge_page/uncharge_page).
>
> Acked-by: Johannes Weiner <[email protected]>

Thank you for the review!