2021-07-09 00:06:29

by Suren Baghdasaryan

Subject: [PATCH 0/3] mm, memcg: Optimizations to minimize overhead when memcgs are disabled

Disabling memcgs from the kernel command line has become important on Android
because of new requirements for all vendors to share the same kernel config,
while some vendors use memcgs and others don't. The vendors who don't have to
disable memcgs via the "cgroup_disable=memory" kernel command-line option, and
we would like the cost of disabling memcgs this way to be as close as possible
to building with CONFIG_MEMCG=n. This patchset focuses on minimizing the
performance cost of that option.
When running the pft test with memcgs disabled via the "cgroup_disable=memory"
command-line option, compared against CONFIG_MEMCG=n, we measured a ~6% drop
in the average pagefaults/sec rate with a stddev of ~2%. The results were
obtained by running the pft test 1500 times and averaging the results on an
8-core ARM64 Android device with system services stopped, the performance
governor selected, and only the big or only the little cores enabled in a
given run to minimize noise.
Using perf, a number of relatively high-cost areas were identified where extra
operations can be avoided. The patchset consists of several optimizations that
gradually reduce this regression. The patches were applied incrementally during
testing, recording the impact of each one:

6.01% with vanilla cgroup_disable vs CONFIG_MEMCG=n
3.87% after patch #1 adding mem_cgroup_disabled checks vs CONFIG_MEMCG=n
3.49% after patch #2 inlining mem_cgroup_{charge/uncharge} vs CONFIG_MEMCG=n
2.48% after patch #3 inlining swap-related functions vs CONFIG_MEMCG=n

I kept the patches separate because they vary in their "impact vs readability
cost" ratio and I'm not sure which ones pass the acceptable threshold.
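
All three patches follow the same pattern: move the mem_cgroup_disabled()
check (a static key test) out of mm/memcontrol.c into a static inline wrapper
in the header, so that with cgroup_disable=memory a call site only pays for a
patched-out branch instead of an out-of-line call. A minimal sketch of the
pattern (the function name here is made up for illustration; the real ones are
in the patches):

/* mm/memcontrol.c: the real work, reached only when memcgs are enabled */
void __mem_cgroup_frob_page(struct page *page);

/* include/linux/memcontrol.h: inlined into every caller */
static inline void mem_cgroup_frob_page(struct page *page)
{
	/*
	 * mem_cgroup_disabled() is backed by a static key, so when the
	 * memory controller is disabled on the kernel command line this
	 * test becomes a cheap patched branch at the call site and the
	 * out-of-line call below is skipped entirely.
	 */
	if (mem_cgroup_disabled())
		return;
	__mem_cgroup_frob_page(page);
}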

Suren Baghdasaryan (3):
mm, memcg: add mem_cgroup_disabled checks in vmpressure and
swap-related functions
mm, memcg: inline mem_cgroup_{charge/uncharge} to improve disabled
memcg config
mm, memcg: inline swap-related functions to improve disabled memcg
config

include/linux/memcontrol.h | 54 ++++++++++++++++++++++++++++++++++----
include/linux/swap.h | 26 +++++++++++++++---
mm/memcontrol.c | 52 +++++-------------------------------
mm/swapfile.c | 2 +-
mm/vmpressure.c | 7 ++++-
5 files changed, 86 insertions(+), 55 deletions(-)

--
2.32.0.93.g670b81a890-goog


2021-07-09 00:07:11

by Suren Baghdasaryan

Subject: [PATCH 2/3] mm, memcg: inline mem_cgroup_{charge/uncharge} to improve disabled memcg config

Inline mem_cgroup_{charge/uncharge} and mem_cgroup_uncharge_list functions
to perform mem_cgroup_disabled static key check inline before
calling the main body of the function. This minimizes the memcg overhead
in the pagefault and exit_mmap paths when memcgs are disabled using
cgroup_disable=memory command-line option.
This change results in ~0.4% overhead reduction when running PFT test
comparing {CONFIG_MEMCG=n} against {CONFIG_MEMCG=y, cgroup_disable=memory}
configuration on an 8-core ARM64 Android device.
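
For context (this is only an illustration, not part of the diff), a sketch of
a hot caller, loosely modeled on the anonymous page fault path, showing where
the now-inlined check short-circuits:

static vm_fault_t example_anon_fault(struct vm_area_struct *vma,
				     struct page *page)
{
	/*
	 * With this patch the mem_cgroup_disabled() test runs inline
	 * right here; when booted with cgroup_disable=memory the fault
	 * path no longer calls into memcontrol.c and no longer takes
	 * and drops a reference on the mm's memcg.
	 */
	if (mem_cgroup_charge(page, vma->vm_mm, GFP_KERNEL))
		return VM_FAULT_OOM;
	return 0;
}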

Signed-off-by: Suren Baghdasaryan <[email protected]>
---
include/linux/memcontrol.h | 54 ++++++++++++++++++++++++++++++++++----
mm/memcontrol.c | 43 +++---------------------------
2 files changed, 53 insertions(+), 44 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index bfe5c486f4ad..480815feb116 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -693,13 +693,59 @@ static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg)
page_counter_read(&memcg->memory);
}

-int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask);
+struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm);
+
+int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
+ gfp_t gfp);
+/**
+ * mem_cgroup_charge - charge a newly allocated page to a cgroup
+ * @page: page to charge
+ * @mm: mm context of the victim
+ * @gfp_mask: reclaim mode
+ *
+ * Try to charge @page to the memcg that @mm belongs to, reclaiming
+ * pages according to @gfp_mask if necessary. if @mm is NULL, try to
+ * charge to the active memcg.
+ *
+ * Do not use this for pages allocated for swapin.
+ *
+ * Returns 0 on success. Otherwise, an error code is returned.
+ */
+static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
+ gfp_t gfp_mask)
+{
+ struct mem_cgroup *memcg;
+ int ret;
+
+ if (mem_cgroup_disabled())
+ return 0;
+
+ memcg = get_mem_cgroup_from_mm(mm);
+ ret = __mem_cgroup_charge(page, memcg, gfp_mask);
+ css_put(&memcg->css);
+
+ return ret;
+}
+
int mem_cgroup_swapin_charge_page(struct page *page, struct mm_struct *mm,
gfp_t gfp, swp_entry_t entry);
void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry);

-void mem_cgroup_uncharge(struct page *page);
-void mem_cgroup_uncharge_list(struct list_head *page_list);
+void __mem_cgroup_uncharge(struct page *page);
+static inline void mem_cgroup_uncharge(struct page *page)
+{
+ if (mem_cgroup_disabled())
+ return;
+ __mem_cgroup_uncharge(page);
+}
+
+void __mem_cgroup_uncharge_list(struct list_head *page_list);
+static inline void mem_cgroup_uncharge_list(struct list_head *page_list)
+{
+ if (mem_cgroup_disabled())
+ return;
+ __mem_cgroup_uncharge_list(page_list);
+}

void mem_cgroup_migrate(struct page *oldpage, struct page *newpage);

@@ -756,8 +802,6 @@ static inline struct lruvec *mem_cgroup_page_lruvec(struct page *page)

struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p);

-struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm);
-
struct lruvec *lock_page_lruvec(struct page *page);
struct lruvec *lock_page_lruvec_irq(struct page *page);
struct lruvec *lock_page_lruvec_irqsave(struct page *page,
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index a228cd51c4bd..da677b55b2fe 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6701,8 +6701,8 @@ void mem_cgroup_calculate_protection(struct mem_cgroup *root,
atomic_long_read(&parent->memory.children_low_usage)));
}

-static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
- gfp_t gfp)
+int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
+ gfp_t gfp)
{
unsigned int nr_pages = thp_nr_pages(page);
int ret;
@@ -6722,35 +6722,6 @@ static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
return ret;
}

-/**
- * mem_cgroup_charge - charge a newly allocated page to a cgroup
- * @page: page to charge
- * @mm: mm context of the victim
- * @gfp_mask: reclaim mode
- *
- * Try to charge @page to the memcg that @mm belongs to, reclaiming
- * pages according to @gfp_mask if necessary. if @mm is NULL, try to
- * charge to the active memcg.
- *
- * Do not use this for pages allocated for swapin.
- *
- * Returns 0 on success. Otherwise, an error code is returned.
- */
-int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
-{
- struct mem_cgroup *memcg;
- int ret;
-
- if (mem_cgroup_disabled())
- return 0;
-
- memcg = get_mem_cgroup_from_mm(mm);
- ret = __mem_cgroup_charge(page, memcg, gfp_mask);
- css_put(&memcg->css);
-
- return ret;
-}
-
/**
* mem_cgroup_swapin_charge_page - charge a newly allocated page for swapin
* @page: page to charge
@@ -6921,13 +6892,10 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
*
* Uncharge a page previously charged with mem_cgroup_charge().
*/
-void mem_cgroup_uncharge(struct page *page)
+void __mem_cgroup_uncharge(struct page *page)
{
struct uncharge_gather ug;

- if (mem_cgroup_disabled())
- return;
-
/* Don't touch page->lru of any random page, pre-check: */
if (!page_memcg(page))
return;
@@ -6944,14 +6912,11 @@ void mem_cgroup_uncharge(struct page *page)
* Uncharge a list of pages previously charged with
* mem_cgroup_charge().
*/
-void mem_cgroup_uncharge_list(struct list_head *page_list)
+void __mem_cgroup_uncharge_list(struct list_head *page_list)
{
struct uncharge_gather ug;
struct page *page;

- if (mem_cgroup_disabled())
- return;
-
uncharge_gather_clear(&ug);
list_for_each_entry(page, page_list, lru)
uncharge_page(page, &ug);
--
2.32.0.93.g670b81a890-goog

2021-07-09 00:08:14

by Suren Baghdasaryan

Subject: [PATCH 3/3] mm, memcg: inline swap-related functions to improve disabled memcg config

Inline mem_cgroup_try_charge_swap, mem_cgroup_uncharge_swap and
cgroup_throttle_swaprate functions to perform mem_cgroup_disabled static
key check inline before calling the main body of the function. This
minimizes the memcg overhead in the pagefault and exit_mmap paths when
memcgs are disabled using cgroup_disable=memory command-line option.
This change results in ~1% overhead reduction when running PFT test
comparing {CONFIG_MEMCG=n} against {CONFIG_MEMCG=y, cgroup_disable=memory}
configuration on an 8-core ARM64 Android device.
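
The shape of the change is the same as in the previous patch; the only
difference is that these helpers are already config-gated in swap.h, so the
inline wrapper sits inside the existing #ifdef blocks and the no-op stubs keep
covering the configurations where the feature is compiled out. A sketch with
an illustrative name (the real helpers are in the diff below):

#ifdef CONFIG_MEMCG_SWAP
extern void __mem_cgroup_swap_helper(swp_entry_t entry);
static inline void mem_cgroup_swap_helper(swp_entry_t entry)
{
	if (mem_cgroup_disabled())	/* cgroup_disable=memory */
		return;
	__mem_cgroup_swap_helper(entry);
}
#else
static inline void mem_cgroup_swap_helper(swp_entry_t entry)
{
	/* compiled out: callers never need their own #ifdefs */
}
#endif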

Signed-off-by: Suren Baghdasaryan <[email protected]>
---
include/linux/swap.h | 26 +++++++++++++++++++++++---
mm/memcontrol.c | 12 +++---------
mm/swapfile.c | 5 +----
3 files changed, 27 insertions(+), 16 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 6f5a43251593..f30d26b0f71d 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -721,7 +721,13 @@ static inline int mem_cgroup_swappiness(struct mem_cgroup *mem)
#endif

#if defined(CONFIG_SWAP) && defined(CONFIG_MEMCG) && defined(CONFIG_BLK_CGROUP)
-extern void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask);
+extern void __cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask);
+static inline void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
+{
+ if (mem_cgroup_disabled())
+ return;
+ __cgroup_throttle_swaprate(page, gfp_mask);
+}
#else
static inline void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
{
@@ -730,8 +736,22 @@ static inline void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)

#ifdef CONFIG_MEMCG_SWAP
extern void mem_cgroup_swapout(struct page *page, swp_entry_t entry);
-extern int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry);
-extern void mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages);
+extern int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry);
+static inline int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
+{
+ if (mem_cgroup_disabled())
+ return 0;
+ return __mem_cgroup_try_charge_swap(page, entry);
+}
+
+extern void __mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages);
+static inline void mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
+{
+ if (mem_cgroup_disabled())
+ return;
+ __mem_cgroup_uncharge_swap(entry, nr_pages);
+}
+
extern long mem_cgroup_get_nr_swap_pages(struct mem_cgroup *memcg);
extern bool mem_cgroup_swap_full(struct page *page);
#else
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index da677b55b2fe..43f3f50a4751 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -7208,7 +7208,7 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
}

/**
- * mem_cgroup_try_charge_swap - try charging swap space for a page
+ * __mem_cgroup_try_charge_swap - try charging swap space for a page
* @page: page being added to swap
* @entry: swap entry to charge
*
@@ -7216,16 +7216,13 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
*
* Returns 0 on success, -ENOMEM on failure.
*/
-int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
+int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
{
unsigned int nr_pages = thp_nr_pages(page);
struct page_counter *counter;
struct mem_cgroup *memcg;
unsigned short oldid;

- if (mem_cgroup_disabled())
- return 0;
-
if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
return 0;

@@ -7265,14 +7262,11 @@ int mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
* @entry: swap entry to uncharge
* @nr_pages: the amount of swap space to uncharge
*/
-void mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
+void __mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
{
struct mem_cgroup *memcg;
unsigned short id;

- if (mem_cgroup_disabled())
- return;
-
id = swap_cgroup_record(entry, 0, nr_pages);
rcu_read_lock();
memcg = mem_cgroup_from_id(id);
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 707fa0481bb4..04a0c83f1313 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -3773,14 +3773,11 @@ static void free_swap_count_continuations(struct swap_info_struct *si)
}

#if defined(CONFIG_MEMCG) && defined(CONFIG_BLK_CGROUP)
-void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
+void __cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
{
struct swap_info_struct *si, *next;
int nid = page_to_nid(page);

- if (mem_cgroup_disabled())
- return;
-
if (!(gfp_mask & __GFP_IO))
return;

--
2.32.0.93.g670b81a890-goog

2021-07-09 14:50:19

by Johannes Weiner

Subject: Re: [PATCH 2/3] mm, memcg: inline mem_cgroup_{charge/uncharge} to improve disabled memcg config

On Thu, Jul 08, 2021 at 05:05:08PM -0700, Suren Baghdasaryan wrote:
> Inline mem_cgroup_{charge/uncharge} and mem_cgroup_uncharge_list functions
> to perform mem_cgroup_disabled static key check inline before
> calling the main body of the function. This minimizes the memcg overhead
> in the pagefault and exit_mmap paths when memcgs are disabled using
> cgroup_disable=memory command-line option.
> This change results in ~0.4% overhead reduction when running PFT test
> comparing {CONFIG_MEMCG=n} against {CONFIG_MEMCG=y, cgroup_disable=memory}
> configuration on an 8-core ARM64 Android device.
>
> Signed-off-by: Suren Baghdasaryan <[email protected]>

Sounds reasonable to me as well. One comment:

> @@ -693,13 +693,59 @@ static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg)
> page_counter_read(&memcg->memory);
> }
>
> -int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask);
> +struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm);
> +
> +int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
> + gfp_t gfp);
> +/**
> + * mem_cgroup_charge - charge a newly allocated page to a cgroup
> + * @page: page to charge
> + * @mm: mm context of the victim
> + * @gfp_mask: reclaim mode
> + *
> + * Try to charge @page to the memcg that @mm belongs to, reclaiming
> + * pages according to @gfp_mask if necessary. if @mm is NULL, try to
> + * charge to the active memcg.
> + *
> + * Do not use this for pages allocated for swapin.
> + *
> + * Returns 0 on success. Otherwise, an error code is returned.
> + */
> +static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> + gfp_t gfp_mask)
> +{
> + struct mem_cgroup *memcg;
> + int ret;
> +
> + if (mem_cgroup_disabled())
> + return 0;
> +
> + memcg = get_mem_cgroup_from_mm(mm);
> + ret = __mem_cgroup_charge(page, memcg, gfp_mask);
> + css_put(&memcg->css);
> +
> + return ret;

Why not do

int __mem_cgroup_charge(struct page *page, struct mm_struct *mm,
gfp_t gfp_mask);

static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
gfp_t gfp_mask)
{
if (mem_cgroup_disabled())
return 0;

return __mem_cgroup_charge(page, mm, gfp_mask);
}

like in the other cases as well?

That would avoid inlining two separate function calls into all the
callsites...

There is an (internal) __mem_cgroup_charge() already, but you can
rename it charge_memcg().

2021-07-09 14:53:30

by Johannes Weiner

Subject: Re: [PATCH 3/3] mm, memcg: inline swap-related functions to improve disabled memcg config

On Thu, Jul 08, 2021 at 05:05:09PM -0700, Suren Baghdasaryan wrote:
> Inline mem_cgroup_try_charge_swap, mem_cgroup_uncharge_swap and
> cgroup_throttle_swaprate functions to perform mem_cgroup_disabled static
> key check inline before calling the main body of the function. This
> minimizes the memcg overhead in the pagefault and exit_mmap paths when
> memcgs are disabled using cgroup_disable=memory command-line option.
> This change results in ~1% overhead reduction when running PFT test
> comparing {CONFIG_MEMCG=n} against {CONFIG_MEMCG=y, cgroup_disable=memory}
> configuration on an 8-core ARM64 Android device.
>
> Signed-off-by: Suren Baghdasaryan <[email protected]>

Looks reasonable to me as well.

Acked-by: Johannes Weiner <[email protected]>

2021-07-09 15:23:44

by Suren Baghdasaryan

Subject: Re: [PATCH 2/3] mm, memcg: inline mem_cgroup_{charge/uncharge} to improve disabled memcg config

On Fri, Jul 9, 2021 at 7:48 AM Johannes Weiner <[email protected]> wrote:
>
> On Thu, Jul 08, 2021 at 05:05:08PM -0700, Suren Baghdasaryan wrote:
> > Inline mem_cgroup_{charge/uncharge} and mem_cgroup_uncharge_list functions
> > to perform mem_cgroup_disabled static key check inline before
> > calling the main body of the function. This minimizes the memcg overhead
> > in the pagefault and exit_mmap paths when memcgs are disabled using
> > cgroup_disable=memory command-line option.
> > This change results in ~0.4% overhead reduction when running PFT test
> > comparing {CONFIG_MEMCG=n} against {CONFIG_MEMCG=y, cgroup_disable=memory}
> > configuration on an 8-core ARM64 Android device.
> >
> > Signed-off-by: Suren Baghdasaryan <[email protected]>
>
> Sounds reasonable to me as well. One comment:
>
> > @@ -693,13 +693,59 @@ static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg)
> > page_counter_read(&memcg->memory);
> > }
> >
> > -int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask);
> > +struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm);
> > +
> > +int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
> > + gfp_t gfp);
> > +/**
> > + * mem_cgroup_charge - charge a newly allocated page to a cgroup
> > + * @page: page to charge
> > + * @mm: mm context of the victim
> > + * @gfp_mask: reclaim mode
> > + *
> > + * Try to charge @page to the memcg that @mm belongs to, reclaiming
> > + * pages according to @gfp_mask if necessary. if @mm is NULL, try to
> > + * charge to the active memcg.
> > + *
> > + * Do not use this for pages allocated for swapin.
> > + *
> > + * Returns 0 on success. Otherwise, an error code is returned.
> > + */
> > +static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> > + gfp_t gfp_mask)
> > +{
> > + struct mem_cgroup *memcg;
> > + int ret;
> > +
> > + if (mem_cgroup_disabled())
> > + return 0;
> > +
> > + memcg = get_mem_cgroup_from_mm(mm);
> > + ret = __mem_cgroup_charge(page, memcg, gfp_mask);
> > + css_put(&memcg->css);
> > +
> > + return ret;
>
> Why not do
>
> int __mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> gfp_t gfp_mask);
>
> static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> gfp_t gfp_mask)
> {
> if (mem_cgroup_disabled())
> return 0;
>
> return __mem_cgroup_charge(page, mm, gfp_mask);
> }
>
> like in the other cases as well?
>
> That would avoid inlining two separate function calls into all the
> callsites...
>
> There is an (internal) __mem_cgroup_charge() already, but you can
> rename it charge_memcg().

Sounds good. I'll post an updated version with your suggestion.
Thanks for the review, Johannes!
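
Something like this for the memcontrol.c side then, if I understand the
suggestion correctly (rough sketch only, with the internal helper renamed to
charge_memcg() as you propose):

/* mm/memcontrol.c */
/* the old static __mem_cgroup_charge(), body unchanged, only renamed: */
static int charge_memcg(struct page *page, struct mem_cgroup *memcg,
			gfp_t gfp);

int __mem_cgroup_charge(struct page *page, struct mm_struct *mm,
			gfp_t gfp_mask)
{
	struct mem_cgroup *memcg;
	int ret;

	memcg = get_mem_cgroup_from_mm(mm);
	ret = charge_memcg(page, memcg, gfp_mask);
	css_put(&memcg->css);

	return ret;
}

with mem_cgroup_charge() in memcontrol.h reduced to the mem_cgroup_disabled()
check plus a single call to __mem_cgroup_charge().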

>

2021-07-09 16:07:01

by Shakeel Butt

Subject: Re: [PATCH 3/3] mm, memcg: inline swap-related functions to improve disabled memcg config

On Thu, Jul 8, 2021 at 5:05 PM Suren Baghdasaryan <[email protected]> wrote:
>
> Inline mem_cgroup_try_charge_swap, mem_cgroup_uncharge_swap and
> cgroup_throttle_swaprate functions to perform mem_cgroup_disabled static
> key check inline before calling the main body of the function. This
> minimizes the memcg overhead in the pagefault and exit_mmap paths when
> memcgs are disabled using cgroup_disable=memory command-line option.
> This change results in ~1% overhead reduction when running PFT test
> comparing {CONFIG_MEMCG=n} against {CONFIG_MEMCG=y, cgroup_disable=memory}
> configuration on an 8-core ARM64 Android device.
>
> Signed-off-by: Suren Baghdasaryan <[email protected]>

Reviewed-by: Shakeel Butt <[email protected]>

2021-07-09 17:18:46

by Suren Baghdasaryan

Subject: Re: [PATCH 2/3] mm, memcg: inline mem_cgroup_{charge/uncharge} to improve disabled memcg config

On Fri, Jul 9, 2021 at 8:18 AM Suren Baghdasaryan <[email protected]> wrote:
>
> On Fri, Jul 9, 2021 at 7:48 AM Johannes Weiner <[email protected]> wrote:
> >
> > On Thu, Jul 08, 2021 at 05:05:08PM -0700, Suren Baghdasaryan wrote:
> > > Inline mem_cgroup_{charge/uncharge} and mem_cgroup_uncharge_list functions
> > > to perform mem_cgroup_disabled static key check inline before
> > > calling the main body of the function. This minimizes the memcg overhead
> > > in the pagefault and exit_mmap paths when memcgs are disabled using
> > > cgroup_disable=memory command-line option.
> > > This change results in ~0.4% overhead reduction when running PFT test
> > > comparing {CONFIG_MEMCG=n} against {CONFIG_MEMCG=y, cgroup_disable=memory}
> > > configuration on an 8-core ARM64 Android device.
> > >
> > > Signed-off-by: Suren Baghdasaryan <[email protected]>
> >
> > Sounds reasonable to me as well. One comment:
> >
> > > @@ -693,13 +693,59 @@ static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg)
> > > page_counter_read(&memcg->memory);
> > > }
> > >
> > > -int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask);
> > > +struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm);
> > > +
> > > +int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
> > > + gfp_t gfp);
> > > +/**
> > > + * mem_cgroup_charge - charge a newly allocated page to a cgroup
> > > + * @page: page to charge
> > > + * @mm: mm context of the victim
> > > + * @gfp_mask: reclaim mode
> > > + *
> > > + * Try to charge @page to the memcg that @mm belongs to, reclaiming
> > > + * pages according to @gfp_mask if necessary. if @mm is NULL, try to
> > > + * charge to the active memcg.
> > > + *
> > > + * Do not use this for pages allocated for swapin.
> > > + *
> > > + * Returns 0 on success. Otherwise, an error code is returned.
> > > + */
> > > +static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> > > + gfp_t gfp_mask)
> > > +{
> > > + struct mem_cgroup *memcg;
> > > + int ret;
> > > +
> > > + if (mem_cgroup_disabled())
> > > + return 0;
> > > +
> > > + memcg = get_mem_cgroup_from_mm(mm);
> > > + ret = __mem_cgroup_charge(page, memcg, gfp_mask);
> > > + css_put(&memcg->css);
> > > +
> > > + return ret;
> >
> > Why not do
> >
> > int __mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> > gfp_t gfp_mask);
> >
> > static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> > gfp_t gfp_mask)
> > {
> > if (mem_cgroup_disabled())
> > return 0;
> >
> > return __mem_cgroup_charge(page, mm, gfp_mask);
> > }
> >
> > like in the other cases as well?
> >
> > That would avoid inlining two separate function calls into all the
> > callsites...
> >
> > There is an (internal) __mem_cgroup_charge() already, but you can
> > rename it charge_memcg().
>
> Sounds good. I'll post an updated version with your suggestion.
> Thanks for the review, Johannes!

Posted v2 just for this patch at
https://lore.kernel.org/patchwork/patch/1455550 . Please let me know
if you want me to resent the whole patchset instead of just this
patch.
>
> >