2022-02-01 08:44:35

by Waiman Long

Subject: [PATCH v2 0/3] mm/page_owner: Extend page_owner to show memcg information

v2:
- Remove the SNPRINTF() macro as suggested by Ira and use scnprintf()
instead to remove some buffer overrun checks.
- Add a patch to optimize vscnprintf with a size parameter of 0.

While debugging the constant increase in percpu memory consumption on
a system that spawned a large number of containers, it was found that a
lot of offlined mem_cgroup structures remained in place without being
freed. Further investigation indicated that those mem_cgroup structures
were pinned by some pages.

In order to find out what those pages are, the existing page_owner
debugging tool is extended to show memory cgroup information and whether
those memcgs are offlined or not. With the enhanced page_owner tool,
the following is a typical entry for a page that pinned a mem_cgroup
structure in my test case:

Page allocated via order 0, mask 0x1100cca(GFP_HIGHUSER_MOVABLE), pid 62760, ts 119274296592 ns, free_ts 118989764823 ns
PFN 1273412 type Movable Block 2487 type Movable Flags 0x17ffffc00c001c(uptodate|dirty|lru|reclaim|swapbacked|node=0|zone=2|lastcpupid=0x1fffff)
prep_new_page+0x8e/0xb0
get_page_from_freelist+0xc4d/0xe50
__alloc_pages+0x172/0x320
alloc_pages_vma+0x84/0x230
shmem_alloc_page+0x3f/0x90
shmem_alloc_and_acct_page+0x76/0x1c0
shmem_getpage_gfp+0x48d/0x890
shmem_write_begin+0x36/0xc0
generic_perform_write+0xed/0x1d0
__generic_file_write_iter+0xdc/0x1b0
generic_file_write_iter+0x5d/0xb0
new_sync_write+0x11f/0x1b0
vfs_write+0x1ba/0x2a0
ksys_write+0x59/0xd0
do_syscall_64+0x37/0x80
entry_SYSCALL_64_after_hwframe+0x44/0xae
Charged to offlined memcg libpod-conmon-e59cc83faf807bacc61223fec6a80c1540ebe8f83c802870c6af4708d58f77ea

So the page was not freed because it was part of a shmem segment. That
is useful information that can help users diagnose similar problems.

Waiman Long (3):
lib/vsprintf: Avoid redundant work with 0 size
mm/page_owner: Use scnprintf() to avoid excessive buffer overrun check
mm/page_owner: Dump memcg information

lib/vsprintf.c | 8 +++++---
mm/page_owner.c | 45 ++++++++++++++++++++++++++++++++++-----------
2 files changed, 39 insertions(+), 14 deletions(-)

--
2.27.0


2022-02-01 08:44:41

by Waiman Long

Subject: [PATCH v2 1/3] lib/vsprintf: Avoid redundant work with 0 size

For *scnprintf(), vsnprintf() is always called even if the input size is
0. That is a waste of time, so just return 0 in this case.
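
The effect of the patch can be modeled in userspace (vscnprintf() and
scnprintf() are kernel-internal, so the functions below are hypothetical
stand-ins built on the C library's vsnprintf()):

```c
#include <stdarg.h>
#include <stdio.h>

/* Userspace model of the patched vscnprintf(): bail out early for
 * size == 0 instead of doing all the formatting work in vsnprintf()
 * only to throw the result away. */
static int vscnprintf_model(char *buf, size_t size, const char *fmt,
			    va_list args)
{
	int i;

	if (!size)
		return 0;

	i = vsnprintf(buf, size, fmt, args);
	if (i < (int)size)
		return i;

	return (int)size - 1;
}

/* Convenience wrapper mirroring scnprintf(). */
static int scnprintf_model(char *buf, size_t size, const char *fmt, ...)
{
	va_list args;
	int i;

	va_start(args, fmt);
	i = vscnprintf_model(buf, size, fmt, args);
	va_end(args);
	return i;
}
```

With this change a size of 0 returns 0 immediately, matching the
documented behavior without paying for the formatting pass.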

Signed-off-by: Waiman Long <[email protected]>
---
lib/vsprintf.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/lib/vsprintf.c b/lib/vsprintf.c
index 3b8129dd374c..a65df546fb06 100644
--- a/lib/vsprintf.c
+++ b/lib/vsprintf.c
@@ -2895,13 +2895,15 @@ int vscnprintf(char *buf, size_t size, const char *fmt, va_list args)
{
int i;

+ if (!size)
+ return 0;
+
i = vsnprintf(buf, size, fmt, args);

if (likely(i < size))
return i;
- if (size != 0)
- return size - 1;
- return 0;
+
+ return size - 1;
}
EXPORT_SYMBOL(vscnprintf);

--
2.27.0

2022-02-01 08:44:45

by Waiman Long

Subject: [PATCH v2 3/3] mm/page_owner: Dump memcg information

It was found that a number of offlined memcgs were not freed because
they were pinned by some charged pages that were present. Even "echo
1 > /proc/sys/vm/drop_caches" wasn't able to free those pages. These
offlined but not freed memcgs tend to increase in number over time with
the side effect that percpu memory consumption as shown in /proc/meminfo
also increases over time.

In order to find out more information about those pages that pin
offlined memcgs, the page_owner feature is extended to dump memory
cgroup information especially whether the cgroup is offlined or not.

Signed-off-by: Waiman Long <[email protected]>
---
mm/page_owner.c | 31 +++++++++++++++++++++++++++++++
1 file changed, 31 insertions(+)

diff --git a/mm/page_owner.c b/mm/page_owner.c
index 28dac73e0542..8dc5cd0fa227 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -10,6 +10,7 @@
#include <linux/migrate.h>
#include <linux/stackdepot.h>
#include <linux/seq_file.h>
+#include <linux/memcontrol.h>
#include <linux/sched/clock.h>

#include "internal.h"
@@ -331,6 +332,7 @@ print_page_owner(char __user *buf, size_t count, unsigned long pfn,
depot_stack_handle_t handle)
{
int ret, pageblock_mt, page_mt;
+ unsigned long __maybe_unused memcg_data;
char *kbuf;

count = min_t(size_t, count, PAGE_SIZE);
@@ -365,6 +367,35 @@ print_page_owner(char __user *buf, size_t count, unsigned long pfn,
migrate_reason_names[page_owner->last_migrate_reason]);
}

+#ifdef CONFIG_MEMCG
+ /*
+ * Look for memcg information and print it out
+ */
+ memcg_data = READ_ONCE(page->memcg_data);
+ if (memcg_data) {
+ struct mem_cgroup *memcg = page_memcg_check(page);
+ bool onlined;
+ char name[80];
+
+ if (memcg_data & MEMCG_DATA_OBJCGS)
+ ret += scnprintf(kbuf + ret, count - ret,
+ "Slab cache page\n");
+
+ if (!memcg)
+ goto copy_out;
+
+ onlined = (memcg->css.flags & CSS_ONLINE);
+ cgroup_name(memcg->css.cgroup, name, sizeof(name));
+ ret += scnprintf(kbuf + ret, count - ret,
+ "Charged %sto %smemcg %s\n",
+ PageMemcgKmem(page) ? "(via objcg) " : "",
+ onlined ? "" : "offlined ",
+ name);
+ }
+
+copy_out:
+#endif
+
ret += snprintf(kbuf + ret, count - ret, "\n");
if (ret >= count)
goto err;
--
2.27.0

2022-02-01 08:44:49

by Waiman Long

Subject: [PATCH v2 2/3] mm/page_owner: Use scnprintf() to avoid excessive buffer overrun check

The snprintf() function can return a length greater than the given
input size. That will require a check for buffer overrun after each
invocation of snprintf(). scnprintf(), on the other hand, will never
return a greater length. By using scnprintf() in selected places, we
can avoid some buffer overrun checks except after stack_depot_snprint()
and after the last snprintf().
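
The difference in return semantics can be seen with a short userspace
comparison (scnprintf() is kernel-only, so a hypothetical model of it is
used here):

```c
#include <stdarg.h>
#include <stdio.h>

/* Hypothetical userspace model of the kernel's scnprintf(): returns the
 * number of characters actually stored in buf (excluding the NUL),
 * always strictly less than size when size is non-zero. */
static int scnprintf_model(char *buf, size_t size, const char *fmt, ...)
{
	va_list args;
	int i;

	va_start(args, fmt);
	i = vsnprintf(buf, size, fmt, args);
	va_end(args);

	if (i < (int)size)
		return i;
	return size ? (int)size - 1 : 0;
}

/* snprintf() returns the length the output *would* need, so after
 *
 *	ret += snprintf(kbuf + ret, count - ret, ...);
 *
 * the caller must check ret >= count before reusing ret as an offset.
 * With the scnprintf() semantics modeled above, ret can never pass
 * count, so those checks become unnecessary. */
```

For example, formatting a 10-character string into an 8-byte buffer
makes snprintf() return 10 while the scnprintf() model returns 7.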

Signed-off-by: Waiman Long <[email protected]>
---
mm/page_owner.c | 14 +++-----------
1 file changed, 3 insertions(+), 11 deletions(-)

diff --git a/mm/page_owner.c b/mm/page_owner.c
index 99e360df9465..28dac73e0542 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -338,19 +338,16 @@ print_page_owner(char __user *buf, size_t count, unsigned long pfn,
if (!kbuf)
return -ENOMEM;

- ret = snprintf(kbuf, count,
+ ret = scnprintf(kbuf, count,
"Page allocated via order %u, mask %#x(%pGg), pid %d, ts %llu ns, free_ts %llu ns\n",
page_owner->order, page_owner->gfp_mask,
&page_owner->gfp_mask, page_owner->pid,
page_owner->ts_nsec, page_owner->free_ts_nsec);

- if (ret >= count)
- goto err;
-
/* Print information relevant to grouping pages by mobility */
pageblock_mt = get_pageblock_migratetype(page);
page_mt = gfp_migratetype(page_owner->gfp_mask);
- ret += snprintf(kbuf + ret, count - ret,
+ ret += scnprintf(kbuf + ret, count - ret,
"PFN %lu type %s Block %lu type %s Flags %pGp\n",
pfn,
migratetype_names[page_mt],
@@ -358,19 +355,14 @@ print_page_owner(char __user *buf, size_t count, unsigned long pfn,
migratetype_names[pageblock_mt],
&page->flags);

- if (ret >= count)
- goto err;
-
ret += stack_depot_snprint(handle, kbuf + ret, count - ret, 0);
if (ret >= count)
goto err;

if (page_owner->last_migrate_reason != -1) {
- ret += snprintf(kbuf + ret, count - ret,
+ ret += scnprintf(kbuf + ret, count - ret,
"Page has been migrated, last migrate reason: %s\n",
migrate_reason_names[page_owner->last_migrate_reason]);
- if (ret >= count)
- goto err;
}

ret += snprintf(kbuf + ret, count - ret, "\n");
--
2.27.0

2022-02-01 09:51:28

by Mike Rapoport

Subject: Re: [PATCH v2 3/3] mm/page_owner: Dump memcg information

On Sat, Jan 29, 2022 at 03:53:15PM -0500, Waiman Long wrote:
> It was found that a number of offlined memcgs were not freed because
> they were pinned by some charged pages that were present. Even "echo
> 1 > /proc/sys/vm/drop_caches" wasn't able to free those pages. These
> offlined but not freed memcgs tend to increase in number over time with
> the side effect that percpu memory consumption as shown in /proc/meminfo
> also increases over time.
>
> In order to find out more information about those pages that pin
> offlined memcgs, the page_owner feature is extended to dump memory
> cgroup information especially whether the cgroup is offlined or not.
>
> Signed-off-by: Waiman Long <[email protected]>
> ---
> mm/page_owner.c | 31 +++++++++++++++++++++++++++++++
> 1 file changed, 31 insertions(+)
>
> diff --git a/mm/page_owner.c b/mm/page_owner.c
> index 28dac73e0542..8dc5cd0fa227 100644
> --- a/mm/page_owner.c
> +++ b/mm/page_owner.c
> @@ -10,6 +10,7 @@
> #include <linux/migrate.h>
> #include <linux/stackdepot.h>
> #include <linux/seq_file.h>
> +#include <linux/memcontrol.h>
> #include <linux/sched/clock.h>
>
> #include "internal.h"
> @@ -331,6 +332,7 @@ print_page_owner(char __user *buf, size_t count, unsigned long pfn,
> depot_stack_handle_t handle)
> {
> int ret, pageblock_mt, page_mt;
> + unsigned long __maybe_unused memcg_data;
> char *kbuf;
>
> count = min_t(size_t, count, PAGE_SIZE);
> @@ -365,6 +367,35 @@ print_page_owner(char __user *buf, size_t count, unsigned long pfn,
> migrate_reason_names[page_owner->last_migrate_reason]);
> }
>
> +#ifdef CONFIG_MEMCG

Can we put all this along with the declaration of memcg_data in a helper
function please?

> + /*
> + * Look for memcg information and print it out
> + */
> + memcg_data = READ_ONCE(page->memcg_data);
> + if (memcg_data) {
> + struct mem_cgroup *memcg = page_memcg_check(page);
> + bool onlined;
> + char name[80];
> +
> + if (memcg_data & MEMCG_DATA_OBJCGS)
> + ret += scnprintf(kbuf + ret, count - ret,
> + "Slab cache page\n");
> +
> + if (!memcg)
> + goto copy_out;
> +
> + onlined = (memcg->css.flags & CSS_ONLINE);
> + cgroup_name(memcg->css.cgroup, name, sizeof(name));
> + ret += scnprintf(kbuf + ret, count - ret,
> + "Charged %sto %smemcg %s\n",
> + PageMemcgKmem(page) ? "(via objcg) " : "",
> + onlined ? "" : "offlined ",
> + name);
> + }
> +
> +copy_out:
> +#endif
> +
> ret += snprintf(kbuf + ret, count - ret, "\n");
> if (ret >= count)
> goto err;
> --
> 2.27.0
>
>

--
Sincerely yours,
Mike.

2022-02-01 11:23:55

by Waiman Long

Subject: Re: [PATCH v2 3/3] mm/page_owner: Dump memcg information

On 1/30/22 01:33, Mike Rapoport wrote:
> On Sat, Jan 29, 2022 at 03:53:15PM -0500, Waiman Long wrote:
>> It was found that a number of offlined memcgs were not freed because
>> they were pinned by some charged pages that were present. Even "echo
>> 1 > /proc/sys/vm/drop_caches" wasn't able to free those pages. These
>> offlined but not freed memcgs tend to increase in number over time with
>> the side effect that percpu memory consumption as shown in /proc/meminfo
>> also increases over time.
>>
>> In order to find out more information about those pages that pin
>> offlined memcgs, the page_owner feature is extended to dump memory
>> cgroup information especially whether the cgroup is offlined or not.
>>
>> Signed-off-by: Waiman Long <[email protected]>
>> ---
>> mm/page_owner.c | 31 +++++++++++++++++++++++++++++++
>> 1 file changed, 31 insertions(+)
>>
>> diff --git a/mm/page_owner.c b/mm/page_owner.c
>> index 28dac73e0542..8dc5cd0fa227 100644
>> --- a/mm/page_owner.c
>> +++ b/mm/page_owner.c
>> @@ -10,6 +10,7 @@
>> #include <linux/migrate.h>
>> #include <linux/stackdepot.h>
>> #include <linux/seq_file.h>
>> +#include <linux/memcontrol.h>
>> #include <linux/sched/clock.h>
>>
>> #include "internal.h"
>> @@ -331,6 +332,7 @@ print_page_owner(char __user *buf, size_t count, unsigned long pfn,
>> depot_stack_handle_t handle)
>> {
>> int ret, pageblock_mt, page_mt;
>> + unsigned long __maybe_unused memcg_data;
>> char *kbuf;
>>
>> count = min_t(size_t, count, PAGE_SIZE);
>> @@ -365,6 +367,35 @@ print_page_owner(char __user *buf, size_t count, unsigned long pfn,
>> migrate_reason_names[page_owner->last_migrate_reason]);
>> }
>>
>> +#ifdef CONFIG_MEMCG
> Can we put all this along with the declaration of memcg_data in a helper
> function please?
>
Sure. Will post another version with that change.

Cheers,
Longman

2022-02-01 15:07:12

by David Rientjes

Subject: Re: [PATCH v2 1/3] lib/vsprintf: Avoid redundant work with 0 size

On Sat, 29 Jan 2022, Waiman Long wrote:

> For *scnprintf(), vsnprintf() is always called even if the input size is
> 0. That is a waste of time, so just return 0 in this case.
>
> Signed-off-by: Waiman Long <[email protected]>
> ---
> lib/vsprintf.c | 8 +++++---
> 1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/lib/vsprintf.c b/lib/vsprintf.c
> index 3b8129dd374c..a65df546fb06 100644
> --- a/lib/vsprintf.c
> +++ b/lib/vsprintf.c
> @@ -2895,13 +2895,15 @@ int vscnprintf(char *buf, size_t size, const char *fmt, va_list args)
> {
> int i;
>
> + if (!size)
> + return 0;

Nit: any reason this shouldn't be unlikely()? If the conditional for
i < size is likely(), this seems assumed already?

> +
> i = vsnprintf(buf, size, fmt, args);
>
> if (likely(i < size))
> return i;
> - if (size != 0)
> - return size - 1;
> - return 0;
> +
> + return size - 1;
> }
> EXPORT_SYMBOL(vscnprintf);
>
> --
> 2.27.0
>
>
>

2022-02-01 15:07:15

by David Rientjes

Subject: Re: [PATCH v2 3/3] mm/page_owner: Dump memcg information

On Sun, 30 Jan 2022, Waiman Long wrote:

> On 1/30/22 01:33, Mike Rapoport wrote:
> > On Sat, Jan 29, 2022 at 03:53:15PM -0500, Waiman Long wrote:
> > > It was found that a number of offlined memcgs were not freed because
> > > they were pinned by some charged pages that were present. Even "echo
> > > 1 > /proc/sys/vm/drop_caches" wasn't able to free those pages. These
> > > offlined but not freed memcgs tend to increase in number over time with
> > > the side effect that percpu memory consumption as shown in /proc/meminfo
> > > also increases over time.
> > >
> > > In order to find out more information about those pages that pin
> > > offlined memcgs, the page_owner feature is extended to dump memory
> > > cgroup information especially whether the cgroup is offlined or not.
> > >
> > > Signed-off-by: Waiman Long <[email protected]>
> > > ---
> > > mm/page_owner.c | 31 +++++++++++++++++++++++++++++++
> > > 1 file changed, 31 insertions(+)
> > >
> > > diff --git a/mm/page_owner.c b/mm/page_owner.c
> > > index 28dac73e0542..8dc5cd0fa227 100644
> > > --- a/mm/page_owner.c
> > > +++ b/mm/page_owner.c
> > > @@ -10,6 +10,7 @@
> > > #include <linux/migrate.h>
> > > #include <linux/stackdepot.h>
> > > #include <linux/seq_file.h>
> > > +#include <linux/memcontrol.h>
> > > #include <linux/sched/clock.h>
> > > #include "internal.h"
> > > @@ -331,6 +332,7 @@ print_page_owner(char __user *buf, size_t count,
> > > unsigned long pfn,
> > > depot_stack_handle_t handle)
> > > {
> > > int ret, pageblock_mt, page_mt;
> > > + unsigned long __maybe_unused memcg_data;
> > > char *kbuf;
> > > count = min_t(size_t, count, PAGE_SIZE);
> > > @@ -365,6 +367,35 @@ print_page_owner(char __user *buf, size_t count,
> > > unsigned long pfn,
> > > migrate_reason_names[page_owner->last_migrate_reason]);
> > > }
> > > +#ifdef CONFIG_MEMCG
> > Can we put all this along with the declaration of memcg_data in a helper
> > function please?
> >
> Sure. Will post another version with that change.
>

That would certainly make it much cleaner. After that's done (and perhaps
addressing my nit comment in the first patch), feel free to add

Acked-by: David Rientjes <[email protected]>

to all three patches.

Thanks!

2022-02-01 15:07:20

by Waiman Long

Subject: Re: [PATCH v2 1/3] lib/vsprintf: Avoid redundant work with 0 size

On 1/30/22 15:49, David Rientjes wrote:
> On Sat, 29 Jan 2022, Waiman Long wrote:
>
>> For *scnprintf(), vsnprintf() is always called even if the input size is
>> 0. That is a waste of time, so just return 0 in this case.
>>
>> Signed-off-by: Waiman Long <[email protected]>
>> ---
>> lib/vsprintf.c | 8 +++++---
>> 1 file changed, 5 insertions(+), 3 deletions(-)
>>
>> diff --git a/lib/vsprintf.c b/lib/vsprintf.c
>> index 3b8129dd374c..a65df546fb06 100644
>> --- a/lib/vsprintf.c
>> +++ b/lib/vsprintf.c
>> @@ -2895,13 +2895,15 @@ int vscnprintf(char *buf, size_t size, const char *fmt, va_list args)
>> {
>> int i;
>>
>> + if (!size)
>> + return 0;
> Nit: any reason this shouldn't be unlikely()? If the conditional for
> i < size is likely(), this seems assumed already?

Good suggestion. Will make the change in the next version.

Cheers,
Longman

2022-02-01 15:17:49

by Sergey Senozhatsky

Subject: Re: [PATCH v2 1/3] lib/vsprintf: Avoid redundant work with 0 size

On (22/01/29 15:53), Waiman Long wrote:
> For *scnprintf(), vsnprintf() is always called even if the input size is
> 0. That is a waste of time, so just return 0 in this case.
>
> Signed-off-by: Waiman Long <[email protected]>

Reviewed-by: Sergey Senozhatsky <[email protected]>


> +++ b/lib/vsprintf.c
> @@ -2895,13 +2895,15 @@ int vscnprintf(char *buf, size_t size, const char *fmt, va_list args)
> {
> int i;
>
> + if (!size)
> + return 0;
> +
> i = vsnprintf(buf, size, fmt, args);
>
> if (likely(i < size))
> return i;
> - if (size != 0)
> - return size - 1;
> - return 0;
> +
> + return size - 1;
> }
> EXPORT_SYMBOL(vscnprintf);

2022-02-01 15:20:12

by Sergey Senozhatsky

Subject: Re: [PATCH v2 2/3] mm/page_owner: Use scnprintf() to avoid excessive buffer overrun check

On (22/01/29 15:53), Waiman Long wrote:
> The snprintf() function can return a length greater than the given
> input size. That will require a check for buffer overrun after each
> invocation of snprintf(). scnprintf(), on the other hand, will never
> return a greater length. By using scnprintf() in selected places, we
> can avoid some buffer overrun checks except after stack_depot_snprint()
> and after the last snprintf().
>
> Signed-off-by: Waiman Long <[email protected]>

Reviewed-by: Sergey Senozhatsky <[email protected]>

2022-02-01 15:53:39

by Michal Hocko

Subject: Re: [PATCH v2 3/3] mm/page_owner: Dump memcg information

On Sat 29-01-22 15:53:15, Waiman Long wrote:
> It was found that a number of offlined memcgs were not freed because
> they were pinned by some charged pages that were present. Even "echo
> 1 > /proc/sys/vm/drop_caches" wasn't able to free those pages. These
> offlined but not freed memcgs tend to increase in number over time with
> the side effect that percpu memory consumption as shown in /proc/meminfo
> also increases over time.
>
> In order to find out more information about those pages that pin
> offlined memcgs, the page_owner feature is extended to dump memory
> cgroup information especially whether the cgroup is offlined or not.

It is not really clear to me how this is supposed to be used. Are you
really dumping all the pages in the system to find out offline memcgs?
That looks rather clumsy to me. I am not against adding memcg
information to the page owner output. That can be useful in other
contexts.

> Signed-off-by: Waiman Long <[email protected]>
> ---
> mm/page_owner.c | 31 +++++++++++++++++++++++++++++++
> 1 file changed, 31 insertions(+)
>
> diff --git a/mm/page_owner.c b/mm/page_owner.c
> index 28dac73e0542..8dc5cd0fa227 100644
> --- a/mm/page_owner.c
> +++ b/mm/page_owner.c
> @@ -10,6 +10,7 @@
> #include <linux/migrate.h>
> #include <linux/stackdepot.h>
> #include <linux/seq_file.h>
> +#include <linux/memcontrol.h>
> #include <linux/sched/clock.h>
>
> #include "internal.h"
> @@ -331,6 +332,7 @@ print_page_owner(char __user *buf, size_t count, unsigned long pfn,
> depot_stack_handle_t handle)
> {
> int ret, pageblock_mt, page_mt;
> + unsigned long __maybe_unused memcg_data;
> char *kbuf;
>
> count = min_t(size_t, count, PAGE_SIZE);
> @@ -365,6 +367,35 @@ print_page_owner(char __user *buf, size_t count, unsigned long pfn,
> migrate_reason_names[page_owner->last_migrate_reason]);
> }
>
> +#ifdef CONFIG_MEMCG

This really begs to be in a dedicated function. page_owner_print_memcg
or something like that.

> + /*
> + * Look for memcg information and print it out
> + */
> + memcg_data = READ_ONCE(page->memcg_data);
> + if (memcg_data) {
> + struct mem_cgroup *memcg = page_memcg_check(page);
> + bool onlined;
> + char name[80];

What prevents the memcg from going away and being reused for a different
purpose?

> +
> + if (memcg_data & MEMCG_DATA_OBJCGS)
> + ret += scnprintf(kbuf + ret, count - ret,
> + "Slab cache page\n");
> +
> + if (!memcg)
> + goto copy_out;
> +
> + onlined = (memcg->css.flags & CSS_ONLINE);
> + cgroup_name(memcg->css.cgroup, name, sizeof(name));
> + ret += scnprintf(kbuf + ret, count - ret,
> + "Charged %sto %smemcg %s\n",
> + PageMemcgKmem(page) ? "(via objcg) " : "",
> + onlined ? "" : "offlined ",
> + name);
> + }
> +
> +copy_out:
> +#endif
--
Michal Hocko
SUSE Labs

2022-02-01 16:16:29

by Andy Shevchenko

Subject: Re: [PATCH v2 1/3] lib/vsprintf: Avoid redundant work with 0 size

On Sun, Jan 30, 2022 at 12:49:37PM -0800, David Rientjes wrote:
> On Sat, 29 Jan 2022, Waiman Long wrote:
>
> > For *scnprintf(), vsnprintf() is always called even if the input size is
> > 0. That is a waste of time, so just return 0 in this case.

Why do you think it's not legit?

--
With Best Regards,
Andy Shevchenko


2022-02-01 16:19:09

by Andy Shevchenko

Subject: Re: [PATCH v2 1/3] lib/vsprintf: Avoid redundant work with 0 size

On Mon, Jan 31, 2022 at 12:25:09PM +0200, Andy Shevchenko wrote:
> On Sun, Jan 30, 2022 at 12:49:37PM -0800, David Rientjes wrote:
> > On Sat, 29 Jan 2022, Waiman Long wrote:
> >
> > > For *scnprintf(), vsnprintf() is always called even if the input size is
> > > 0. That is a waste of time, so just return 0 in this case.
>
> Why do you think it's not legit?

I have to elaborate.

For *nprintf() the size=0 is quite useful to have.
For *cnprintf() the size=0 makes less sense, but, if we read `man snprintf()`:

The functions snprintf() and vsnprintf() do not write more than size bytes
(including the terminating null byte ('\0')). If the output was truncated due
to this limit, then the return value is the number of characters (excluding
the terminating null byte) which would have been written to the final string
if enough space had been available. Thus, a return value of size or more
means that the output was truncated. (See also below under NOTES.)

If an output error is encountered, a negative value is returned.

Note the last sentence there. You need to answer to it in the commit message
why your change is okay and it will show that you thought through all possible
scenarios.

--
With Best Regards,
Andy Shevchenko


2022-02-01 16:23:02

by Andy Shevchenko

Subject: Re: [PATCH v2 1/3] lib/vsprintf: Avoid redundant work with 0 size

On Mon, Jan 31, 2022 at 12:30:33PM +0200, Andy Shevchenko wrote:
> On Mon, Jan 31, 2022 at 12:25:09PM +0200, Andy Shevchenko wrote:
> > On Sun, Jan 30, 2022 at 12:49:37PM -0800, David Rientjes wrote:
> > > On Sat, 29 Jan 2022, Waiman Long wrote:
> > >
> > > > For *scnprintf(), vsnprintf() is always called even if the input size is
> > > > 0. That is a waste of time, so just return 0 in this case.
> >
> > Why do you think it's not legit?
>
> I have to elaborate.
>
> For *nprintf() the size=0 is quite useful to have.
> For *cnprintf() the size=0 makes less sense, but, if we read `man snprintf()`:
>
> The functions snprintf() and vsnprintf() do not write more than size bytes
> (including the terminating null byte ('\0')). If the output was truncated due
> to this limit, then the return value is the number of characters (excluding
> the terminating null byte) which would have been written to the final string
> if enough space had been available. Thus, a return value of size or more
> means that the output was truncated. (See also below under NOTES.)
>
> If an output error is encountered, a negative value is returned.
>
> Note the last sentence there. You need to answer to it in the commit message
> why your change is okay and it will show that you thought through all possible
> scenarios.

Also it seems currently the kernel documentation is not aligned with the code

"If @size is == 0 the function returns 0."

It should mention the (theoretical?) possibility of getting negative value,
if vsnprintf() returns negative value.

--
With Best Regards,
Andy Shevchenko


2022-02-01 19:56:32

by Rasmus Villemoes

Subject: Re: [PATCH v2 1/3] lib/vsprintf: Avoid redundant work with 0 size

On 31/01/2022 11.34, Andy Shevchenko wrote:
> On Mon, Jan 31, 2022 at 12:30:33PM +0200, Andy Shevchenko wrote:
>> On Mon, Jan 31, 2022 at 12:25:09PM +0200, Andy Shevchenko wrote:
>>> On Sun, Jan 30, 2022 at 12:49:37PM -0800, David Rientjes wrote:
>>>> On Sat, 29 Jan 2022, Waiman Long wrote:
>>>>
>>>>> For *scnprintf(), vsnprintf() is always called even if the input size is
>>>>> 0. That is a waste of time, so just return 0 in this case.
>>>
>>> Why do you think it's not legit?
>>
>> I have to elaborate.
>>
>> For *nprintf() the size=0 is quite useful to have.
>> For *cnprintf() the size=0 makes less sense, but, if we read `man snprintf()`:
>>
>> The functions snprintf() and vsnprintf() do not write more than size bytes
>> (including the terminating null byte ('\0')). If the output was truncated due
>> to this limit, then the return value is the number of characters (excluding
>> the terminating null byte) which would have been written to the final string
>> if enough space had been available. Thus, a return value of size or more
>> means that the output was truncated. (See also below under NOTES.)
>>
>> If an output error is encountered, a negative value is returned.
>>
>> Note the last sentence there. You need to answer to it in the commit message
>> why your change is okay and it will show that you thought through all possible
>> scenarios.
>
> Also it seems currently the kernel documentation is not aligned with the code
>
> "If @size is == 0 the function returns 0."
>
> It should mention the (theoretical?) possibility of getting negative value,
> if vsnprintf() returns negative value.
>

The kernel's vsnprintf _will never_ return a negative value. There is
way too much code which relies on that. It also has to work from any
context, so we'll never do any memory allocation or anything else that
could possibly force us to error out, and even if we encounter some
impossible situation, we do not return a negative value, but just stop
the output where we are.

So yes, micro-optimizing [v]scnprintf() is completely valid, but I've
never bothered to send the patch because the use case for scnprintf() is
primarily the

ret += scnprintf(buf + ret, size - ret, ...);

pattern, with ret starting out at 0 and size being some non-zero number.
When given a non-zero size, scnprintf() is guaranteed to return
something _strictly less_ than that value; that invariant guarantees
that the size-ret expression never becomes 0. So if scnprintf() is
properly used, I can't think of any situation where size will be 0,
hence I see that patch as correct-but-mostly-pointless.
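
The invariant described above — with a non-zero size, the return value
is strictly less than size, so size - ret can never reach 0 — can be
checked with a small userspace sketch (again using a hypothetical model
of scnprintf()):

```c
#include <stdarg.h>
#include <stdio.h>

/* Hypothetical userspace stand-in for the kernel's scnprintf(). */
static int scnprintf_model(char *buf, size_t size, const char *fmt, ...)
{
	va_list args;
	int i;

	va_start(args, fmt);
	i = vsnprintf(buf, size, fmt, args);
	va_end(args);

	if (i < (int)size)
		return i;
	return size ? (int)size - 1 : 0;
}

/* Append n lines into a buffer of the given size, returning the final
 * offset.  Because each call returns strictly less than the remaining
 * space, ret stays below size, so size - ret never becomes 0 no matter
 * how many times the pattern is repeated. */
static size_t fill(char *buf, size_t size, int n)
{
	size_t ret = 0;
	int i;

	for (i = 0; i < n; i++)
		ret += scnprintf_model(buf + ret, size - ret,
				       "line %d\n", i);
	return ret;
}
```

Even with far more output than fits, the offset saturates at size - 1
and the size argument passed to each call stays positive.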

Rasmus

2022-02-01 20:13:44

by Andy Shevchenko

Subject: Re: [PATCH v2 1/3] lib/vsprintf: Avoid redundant work with 0 size

On Mon, Jan 31, 2022 at 12:02:29PM +0100, Rasmus Villemoes wrote:
> On 31/01/2022 11.34, Andy Shevchenko wrote:
> > On Mon, Jan 31, 2022 at 12:30:33PM +0200, Andy Shevchenko wrote:
> >> On Mon, Jan 31, 2022 at 12:25:09PM +0200, Andy Shevchenko wrote:
> >>> On Sun, Jan 30, 2022 at 12:49:37PM -0800, David Rientjes wrote:
> >>>> On Sat, 29 Jan 2022, Waiman Long wrote:
> >>>>
> >>>>> For *scnprintf(), vsnprintf() is always called even if the input size is
> >>>>> 0. That is a waste of time, so just return 0 in this case.
> >>>
> >>> Why do you think it's not legit?
> >>
> >> I have to elaborate.
> >>
> >> For *nprintf() the size=0 is quite useful to have.
> >> For *cnprintf() the size=0 makes less sense, but, if we read `man snprintf()`:
> >>
> >> The functions snprintf() and vsnprintf() do not write more than size bytes
> >> (including the terminating null byte ('\0')). If the output was truncated due
> >> to this limit, then the return value is the number of characters (excluding
> >> the terminating null byte) which would have been written to the final string
> >> if enough space had been available. Thus, a return value of size or more
> >> means that the output was truncated. (See also below under NOTES.)
> >>
> >> If an output error is encountered, a negative value is returned.
> >>
> >> Note the last sentence there. You need to answer to it in the commit message
> >> why your change is okay and it will show that you thought through all possible
> >> scenarios.
> >
> > Also it seems currently the kernel documentation is not aligned with the code
> >
> > "If @size is == 0 the function returns 0."
> >
> > It should mention the (theoretical?) possibility of getting negative value,
> > if vsnprintf() returns negative value.
> >
>
> The kernel's vsnprintf _will never_ return a negative value. There is
> way too much code which relies on that. It also has to work from any
> context, so we'll never do any memory allocation or anything else that
> could possibly force us to error out, and even if we encounter some
> impossible situation, we do not return a negative value, but just stop
> the output where we are.

Yep, I see the code. My comments are more or less about a (better)
commit message, which could include what you just said.

> So yes, micro-optimizing [v]scnprintf() is completely valid, but I've
> never bothered to send the patch because the use case for scnprintf() is
> primarily the
>
> ret += scnprintf(buf + ret, size - ret, ...);
>
> pattern, with ret starting out at 0 and size being some non-zero number.
> When given a non-zero size, scnprintf() is guaranteed to return
> something _strictly less_ than that value; that invariant guarantees
> that the size-ret expression never becomes 0. So if scnprintf() is
> properly used, I can't think of any situation where size will be 0,
> hence I see that patch as correct-but-mostly-pointless.

Good remark; again, the commit message should probably elaborate on this
as well.

--
With Best Regards,
Andy Shevchenko


2022-02-01 20:46:01

by Johannes Weiner

Subject: Re: [PATCH v2 3/3] mm/page_owner: Dump memcg information

On Mon, Jan 31, 2022 at 10:38:51AM +0100, Michal Hocko wrote:
> On Sat 29-01-22 15:53:15, Waiman Long wrote:
> > It was found that a number of offlined memcgs were not freed because
> > they were pinned by some charged pages that were present. Even "echo
> > 1 > /proc/sys/vm/drop_caches" wasn't able to free those pages. These
> > offlined but not freed memcgs tend to increase in number over time with
> > the side effect that percpu memory consumption as shown in /proc/meminfo
> > also increases over time.
> >
> > In order to find out more information about those pages that pin
> > offlined memcgs, the page_owner feature is extended to dump memory
> > cgroup information especially whether the cgroup is offlined or not.
>
> It is not really clear to me how this is supposed to be used. Are you
> really dumping all the pages in the system to find out offline memcgs?
> That looks rather clumsy to me. I am not against adding memcg
> information to the page owner output. That can be useful in other
> contexts.

We've sometimes done exactly that in production, but with drgn
scripts. It's not very common, so it doesn't need to be very efficient
either. Typically, we'd encounter a host with an unusual number of
dying cgroups, ssh in and poke around with drgn to figure out what
kind of objects are still pinning the cgroups in question.

This patch would make that process a little easier, I suppose.

2022-02-01 20:46:37

by Roman Gushchin

Subject: Re: [PATCH v2 3/3] mm/page_owner: Dump memcg information

On Mon, Jan 31, 2022 at 11:53:19AM -0500, Johannes Weiner wrote:
> On Mon, Jan 31, 2022 at 10:38:51AM +0100, Michal Hocko wrote:
> > On Sat 29-01-22 15:53:15, Waiman Long wrote:
> > > It was found that a number of offlined memcgs were not freed because
> > > they were pinned by some charged pages that were present. Even "echo
> > > 1 > /proc/sys/vm/drop_caches" wasn't able to free those pages. These
> > > offlined but not freed memcgs tend to increase in number over time with
> > > the side effect that percpu memory consumption as shown in /proc/meminfo
> > > also increases over time.
> > >
> > > In order to find out more information about those pages that pin
> > > offlined memcgs, the page_owner feature is extended to dump memory
> > > cgroup information especially whether the cgroup is offlined or not.
> >
> > It is not really clear to me how this is supposed to be used. Are you
> > really dumping all the pages in the system to find out offline memcgs?
> > That looks rather clumsy to me. I am not against adding memcg
> > information to the page owner output. That can be useful in other
> > contexts.
>
> We've sometimes done exactly that in production, but with drgn
> scripts. It's not very common, so it doesn't need to be very efficient
> either. Typically, we'd encounter a host with an unusual number of
> dying cgroups, ssh in and poke around with drgn to figure out what
> kind of objects are still pinning the cgroups in question.
>
> This patch would make that process a little easier, I suppose.

Right. Over the last few years I've spent an enormous amount of time digging
into various aspects of this problem, and in my experience the combination of
drgn for inspecting the current state and bpf for following the various
decisions on the reclaim path was the most useful one.

I really appreciate the effort to put useful tools for tracking memcg
references into the kernel tree; however, the page_owner infrastructure has
limited usefulness as it has to be enabled at boot. But because it doesn't add
any overhead otherwise, I also don't think there are any reasons not to add it.

Thanks!

2022-02-01 20:47:00

by Michal Hocko

[permalink] [raw]
Subject: Re: [PATCH v2 3/3] mm/page_owner: Dump memcg information

On Mon 31-01-22 10:15:45, Roman Gushchin wrote:
> On Mon, Jan 31, 2022 at 11:53:19AM -0500, Johannes Weiner wrote:
> > On Mon, Jan 31, 2022 at 10:38:51AM +0100, Michal Hocko wrote:
> > > On Sat 29-01-22 15:53:15, Waiman Long wrote:
> > > > It was found that a number of offlined memcgs were not freed because
> > > > they were pinned by some charged pages that were present. Even "echo
> > > > 1 > /proc/sys/vm/drop_caches" wasn't able to free those pages. These
> > > > offlined but not freed memcgs tend to increase in number over time with
> > > > the side effect that percpu memory consumption as shown in /proc/meminfo
> > > > also increases over time.
> > > >
> > > > In order to find out more information about those pages that pin
> > > > offlined memcgs, the page_owner feature is extended to dump memory
> > > > cgroup information especially whether the cgroup is offlined or not.
> > >
> > > It is not really clear to me how this is supposed to be used. Are you
> > > really dumping all the pages in the system to find out offline memcgs?
> > > That looks rather clumsy to me. I am not against adding memcg
> > > information to the page owner output. That can be useful in other
> > > contexts.
> >
> > We've sometimes done exactly that in production, but with drgn
> > scripts. It's not very common, so it doesn't need to be very efficient
> > either. Typically, we'd encounter a host with an unusual number of
> > dying cgroups, ssh in and poke around with drgn to figure out what
> > kind of objects are still pinning the cgroups in question.
> >
> > This patch would make that process a little easier, I suppose.
>
> Right. Over last few years I've spent enormous amount of time digging into
> various aspects of this problem and in my experience the combination of drgn
> for the inspection of the current state and bpf for following various decisions
> on the reclaim path was the most useful combination.
>
> I really appreciate an effort to put useful tools to track memcg references
> into the kernel tree, however the page_owner infra has a limited usefulness
> as it has to be enabled on the boot. But because it doesn't add any overhead,
> I also don't think there any reasons to not add it.

Would it be feasible to add a debugfs interface to display dead memcg
information?
--
Michal Hocko
SUSE Labs

2022-02-01 20:47:24

by Waiman Long

[permalink] [raw]
Subject: Re: [PATCH v2 3/3] mm/page_owner: Dump memcg information

On 1/31/22 13:25, Michal Hocko wrote:
> On Mon 31-01-22 10:15:45, Roman Gushchin wrote:
>> On Mon, Jan 31, 2022 at 11:53:19AM -0500, Johannes Weiner wrote:
>>> On Mon, Jan 31, 2022 at 10:38:51AM +0100, Michal Hocko wrote:
>>>> On Sat 29-01-22 15:53:15, Waiman Long wrote:
>>>>> It was found that a number of offlined memcgs were not freed because
>>>>> they were pinned by some charged pages that were present. Even "echo
>>>>> 1 > /proc/sys/vm/drop_caches" wasn't able to free those pages. These
>>>>> offlined but not freed memcgs tend to increase in number over time with
>>>>> the side effect that percpu memory consumption as shown in /proc/meminfo
>>>>> also increases over time.
>>>>>
>>>>> In order to find out more information about those pages that pin
>>>>> offlined memcgs, the page_owner feature is extended to dump memory
>>>>> cgroup information especially whether the cgroup is offlined or not.
>>>> It is not really clear to me how this is supposed to be used. Are you
>>>> really dumping all the pages in the system to find out offline memcgs?
>>>> That looks rather clumsy to me. I am not against adding memcg
>>>> information to the page owner output. That can be useful in other
>>>> contexts.
>>> We've sometimes done exactly that in production, but with drgn
>>> scripts. It's not very common, so it doesn't need to be very efficient
>>> either. Typically, we'd encounter a host with an unusual number of
>>> dying cgroups, ssh in and poke around with drgn to figure out what
>>> kind of objects are still pinning the cgroups in question.
>>>
>>> This patch would make that process a little easier, I suppose.
>> Right. Over last few years I've spent enormous amount of time digging into
>> various aspects of this problem and in my experience the combination of drgn
>> for the inspection of the current state and bpf for following various decisions
>> on the reclaim path was the most useful combination.
>>
>> I really appreciate an effort to put useful tools to track memcg references
>> into the kernel tree, however the page_owner infra has a limited usefulness
>> as it has to be enabled on the boot. But because it doesn't add any overhead,
>> I also don't think there any reasons to not add it.
> Would it be feasible to add a debugfs interface to display dead memcg
> information?

Originally, I added some debug code to keep track of the list of memcgs
that had been offlined but not yet freed. After some more testing, I
figured out that the memcgs were not freed because they were pinned by
references in the page structs. At that point, I realized that using the
existing page owner debugging tool would be useful for tracking this kind
of problem, since it already has all the infrastructure to list where the
pages were allocated as well as various fields in the page structures.

Of course, it is also possible to have a debugfs interface to list the
dead memcg information, but displaying more information about the pages
that pin the memcgs will be hard without using the page owner tool.
Keeping track of the list of dead memcgs may also have some runtime overhead.

Cheers,
Longman

2022-02-01 20:47:38

by Waiman Long

[permalink] [raw]
Subject: Re: [PATCH v2 1/3] lib/vsprintf: Avoid redundant work with 0 size

On 1/31/22 05:34, Andy Shevchenko wrote:
> On Mon, Jan 31, 2022 at 12:30:33PM +0200, Andy Shevchenko wrote:
>> On Mon, Jan 31, 2022 at 12:25:09PM +0200, Andy Shevchenko wrote:
>>> On Sun, Jan 30, 2022 at 12:49:37PM -0800, David Rientjes wrote:
>>>> On Sat, 29 Jan 2022, Waiman Long wrote:
>>>>
>>>>> For *scnprintf(), vsnprintf() is always called even if the input size is
>>>>> 0. That is a waste of time, so just return 0 in this case.
>>> Why do you think it's not legit?
>> I have to elaborate.
>>
>> For *nprintf() the size=0 is quite useful to have.
>> For *cnprintf() the size=0 makes less sense, but, if we read `man snprintf()`:
>>
>> The functions snprintf() and vsnprintf() do not write more than size bytes
>> (including the terminating null byte ('\0')). If the output was truncated due
>> to this limit, then the return value is the number of characters (excluding
>> the terminating null byte) which would have been written to the final string
>> if enough space had been available. Thus, a return value of size or more
>> means that the output was truncated. (See also below under NOTES.)
>>
>> If an output error is encountered, a negative value is returned.
>>
>> Note the last sentence there. You need to answer to it in the commit message
>> why your change is okay and it will show that you thought through all possible
>> scenarios.
> Also it seems currently the kernel documentation is not aligned with the code
>
> "If @size is == 0 the function returns 0."
>
> It should mention the (theoretical?) possibility of getting negative value,
> if vsnprintf() returns negative value.

AFAICS, the kernel's vsnprintf() function will not return -1. So in that
sense it is not fully POSIX compliant. Since the vscnprintf() function
always returns 0 when size is 0, there is no point in finding out
exactly how many bytes the buffer needs to hold the formatted text, as
this information will not be returned to the caller anyway. I will
update it to indicate that vsnprintf() does not return -1.

Thanks,
Longman

2022-02-01 20:49:21

by Roman Gushchin

[permalink] [raw]
Subject: Re: [PATCH v2 1/3] lib/vsprintf: Avoid redundant work with 0 size

On Sat, Jan 29, 2022 at 03:53:13PM -0500, Waiman Long wrote:
> For *scnprintf(), vsnprintf() is always called even if the input size is
> 0. That is a waste of time, so just return 0 in this case.
>
> Signed-off-by: Waiman Long <[email protected]>
> ---
> lib/vsprintf.c | 8 +++++---
> 1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/lib/vsprintf.c b/lib/vsprintf.c
> index 3b8129dd374c..a65df546fb06 100644
> --- a/lib/vsprintf.c
> +++ b/lib/vsprintf.c
> @@ -2895,13 +2895,15 @@ int vscnprintf(char *buf, size_t size, const char *fmt, va_list args)
> {
> int i;
>
> + if (!size)
> + return 0;
> +
> i = vsnprintf(buf, size, fmt, args);
>
> if (likely(i < size))
> return i;
> - if (size != 0)
> - return size - 1;
> - return 0;
> +
> + return size - 1;
> }
> EXPORT_SYMBOL(vscnprintf);

Acked-by: Roman Gushchin <[email protected]>

Thanks!

2022-02-01 20:50:04

by Waiman Long

[permalink] [raw]
Subject: Re: [PATCH v2 3/3] mm/page_owner: Dump memcg information

On 1/31/22 04:38, Michal Hocko wrote:
> On Sat 29-01-22 15:53:15, Waiman Long wrote:
>> It was found that a number of offlined memcgs were not freed because
>> they were pinned by some charged pages that were present. Even "echo
>> 1 > /proc/sys/vm/drop_caches" wasn't able to free those pages. These
>> offlined but not freed memcgs tend to increase in number over time with
>> the side effect that percpu memory consumption as shown in /proc/meminfo
>> also increases over time.
>>
>> In order to find out more information about those pages that pin
>> offlined memcgs, the page_owner feature is extended to dump memory
>> cgroup information especially whether the cgroup is offlined or not.
> It is not really clear to me how this is supposed to be used. Are you
> really dumping all the pages in the system to find out offline memcgs?
> That looks rather clumsy to me. I am not against adding memcg
> information to the page owner output. That can be useful in other
> contexts.

I am just piggybacking on top of the existing page_owner tool to provide
information for me to find out what pages are pinning the dead memcgs.
page_owner is a debugging tool that is not turned on by default. We do
have to add a kernel parameter and reboot the system to use it, but that
is pretty easy to do once we have a reproducer for the problem.

Cheers,
Longman

2022-02-02 10:17:41

by Michal Hocko

[permalink] [raw]
Subject: Re: [PATCH v2 3/3] mm/page_owner: Dump memcg information

On Mon 31-01-22 13:38:28, Waiman Long wrote:
[...]
> Of course, it is also possible to have a debugfs interface to list those
> dead memcg information, displaying more information about the page that pins
> the memcg will be hard without using the page owner tool.

Yes, you will need page owner or hook into the kernel by other means
(like the already-mentioned drgn). The question is whether scanning all
existing pages to get that information is the best we can offer.

> Keeping track of
> the list of dead memcg's may also have some runtime overhead.

Could you be more specific? Offlined memcgs are still part of the
hierarchy IIRC. So it shouldn't be much more than iterating the whole
cgroup tree and collecting interesting data about dead cgroups.

--
Michal Hocko
SUSE Labs

2022-02-02 13:11:42

by Waiman Long

[permalink] [raw]
Subject: Re: [PATCH v2 1/3] lib/vsprintf: Avoid redundant work with 0 size

On 2/1/22 02:12, Rasmus Villemoes wrote:
> On 31/01/2022 19.48, Waiman Long wrote:
>> On 1/31/22 05:34, Andy Shevchenko wrote:
>>> Also it seems currently the kernel documentation is not aligned with
>>> the code
>>>
>>>    "If @size is == 0 the function returns 0."
>>>
>>> It should mention the (theoretical?) possibility of getting negative
>>> value,
>>> if vsnprintf() returns negative value.
>> AFAICS, the kernel's vsnprintf() function will not return -1.
> Even if it did, the "i < size" comparison in vscnprintf() is "int v
> size_t", so integer promotion says that even if i were negative, that
> comparison would be false, so we wouldn't forward that negative value
> anyway.
>
>> So in that
>> sense it is not fully POSIX compliant.
> Of course it's not, but not because it doesn't return -1. POSIX just
> says to return that in case of an error, and as a matter of QoI, the
> kernel's implementation simply can't (and must not) fail. There are
> other cases where we don't follow POSIX/C, e.g. in some corner cases
> around field length and precision (documented in test_printf.c), and the
> non-support of %n (and floating point and handling of wchar_t*), and the
> whole %p<> extension etc.
>
> Rasmus
>
Thanks for the clarification.

Cheers,
Longman

2022-02-02 23:51:28

by Waiman Long

[permalink] [raw]
Subject: Re: [PATCH v2 3/3] mm/page_owner: Dump memcg information


On 2/1/22 05:49, Michal Hocko wrote:
> On Mon 31-01-22 13:38:28, Waiman Long wrote:
> [...]
>> Of course, it is also possible to have a debugfs interface to list those
>> dead memcg information, displaying more information about the page that pins
>> the memcg will be hard without using the page owner tool.
> Yes, you will need page owner or hook into the kernel by other means
> (like already mentioned drgn). The question is whether scanning all
> existing pages to get that information is the best we can offer.
The page_owner tool records the page information at allocation time.
There is some slight performance overhead, but it is the memory
overhead that is the major drawback of this approach, as we need one
page_owner structure for each physical page. Page scanning is only done
when users read the page_owner debugfs file. Yes, I agree that scanning
all the pages is not the most efficient way to get this dead memcg
information, but it is what the page_owner tool does. I would argue that
it is the most efficient way coding-wise to get this information.
>> Keeping track of
>> the list of dead memcg's may also have some runtime overhead.
> Could you be more specific? Offlined memcgs are still part of the
> hierarchy IIRC. So it shouldn't be much more than iterating the whole
> cgroup tree and collect interesting data about dead cgroups.

What I mean is that without piggybacking on top of page_owner, we will
have to add a lot more code to collect and display that information,
which may have some overhead of its own.

Cheers,
Longman

2022-02-03 00:07:52

by Roman Gushchin

[permalink] [raw]
Subject: Re: [PATCH v2 3/3] mm/page_owner: Dump memcg information

On Wed, Feb 02, 2022 at 09:57:18AM +0100, Michal Hocko wrote:
> On Tue 01-02-22 11:41:19, Waiman Long wrote:
> >
> > On 2/1/22 05:49, Michal Hocko wrote:
> [...]
> > > Could you be more specific? Offlined memcgs are still part of the
> > > hierarchy IIRC. So it shouldn't be much more than iterating the whole
> > > cgroup tree and collect interesting data about dead cgroups.
> >
> > What I mean is that without piggybacking on top of page_owner, we will to
> > add a lot more code to collect and display those information which may have
> > some overhead of its own.
>
> Yes, there is nothing like a free lunch. Page owner is certainly a tool
> that can be used. My main concern is that this tool doesn't really
> scale on large machines with a lots of memory. It will provide a very
> detailed information but I am not sure this is particularly helpful to
> most admins (why should people process tons of allocation backtraces in
> the first place). Wouldn't it be sufficient to have per dead memcg stats
> to see where the memory sits?
>
> Accumulated offline memcgs is something that bothers more people and I
> am really wondering whether we can do more for those people to evaluate
> the current state.

Cgroup v2 has had corresponding counters for years. Or do you mean something different?

2022-02-03 00:09:06

by Michal Hocko

[permalink] [raw]
Subject: Re: [PATCH v2 3/3] mm/page_owner: Dump memcg information

On Wed 02-02-22 09:51:32, Roman Gushchin wrote:
> On Wed, Feb 02, 2022 at 05:38:07PM +0100, Michal Hocko wrote:
> > On Wed 02-02-22 07:54:48, Roman Gushchin wrote:
> > > On Wed, Feb 02, 2022 at 09:57:18AM +0100, Michal Hocko wrote:
> > > > On Tue 01-02-22 11:41:19, Waiman Long wrote:
> > > > >
> > > > > On 2/1/22 05:49, Michal Hocko wrote:
> > > > [...]
> > > > > > Could you be more specific? Offlined memcgs are still part of the
> > > > > > hierarchy IIRC. So it shouldn't be much more than iterating the whole
> > > > > > cgroup tree and collect interesting data about dead cgroups.
> > > > >
> > > > > What I mean is that without piggybacking on top of page_owner, we will to
> > > > > add a lot more code to collect and display those information which may have
> > > > > some overhead of its own.
> > > >
> > > > Yes, there is nothing like a free lunch. Page owner is certainly a tool
> > > > that can be used. My main concern is that this tool doesn't really
> > > > scale on large machines with a lots of memory. It will provide a very
> > > > detailed information but I am not sure this is particularly helpful to
> > > > most admins (why should people process tons of allocation backtraces in
> > > > the first place). Wouldn't it be sufficient to have per dead memcg stats
> > > > to see where the memory sits?
> > > >
> > > > Accumulated offline memcgs is something that bothers more people and I
> > > > am really wondering whether we can do more for those people to evaluate
> > > > the current state.
> > >
> > > Cgroup v2 has corresponding counters for years. Or do you mean something different?
> >
> > Do we have anything more specific than nr_dying_descendants?
>
> No, just nr_dying_descendants.
>
> > I was thinking about an interface which would provide paths and stats for dead
> > memcgs. But I have to confess I haven't really spent much time thinking
> > about how much work that would be. I am by no means against adding memcg
> > information to the page owner. I just think there must be a better way
> > to present resource consumption by dead memcgs.
>
> I'd go with a drgn script. I wrote a bunch of them some time ago and
> can probably revive them and post them here (will take a few days).

That would be really awesome!

Thanks!

--
Michal Hocko
SUSE Labs

2022-02-03 00:12:16

by Michal Hocko

[permalink] [raw]
Subject: Re: [PATCH v2 3/3] mm/page_owner: Dump memcg information

On Wed 02-02-22 07:54:48, Roman Gushchin wrote:
> On Wed, Feb 02, 2022 at 09:57:18AM +0100, Michal Hocko wrote:
> > On Tue 01-02-22 11:41:19, Waiman Long wrote:
> > >
> > > On 2/1/22 05:49, Michal Hocko wrote:
> > [...]
> > > > Could you be more specific? Offlined memcgs are still part of the
> > > > hierarchy IIRC. So it shouldn't be much more than iterating the whole
> > > > cgroup tree and collect interesting data about dead cgroups.
> > >
> > > What I mean is that without piggybacking on top of page_owner, we will to
> > > add a lot more code to collect and display those information which may have
> > > some overhead of its own.
> >
> > Yes, there is nothing like a free lunch. Page owner is certainly a tool
> > that can be used. My main concern is that this tool doesn't really
> > scale on large machines with a lots of memory. It will provide a very
> > detailed information but I am not sure this is particularly helpful to
> > most admins (why should people process tons of allocation backtraces in
> > the first place). Wouldn't it be sufficient to have per dead memcg stats
> > to see where the memory sits?
> >
> > Accumulated offline memcgs is something that bothers more people and I
> > am really wondering whether we can do more for those people to evaluate
> > the current state.
>
> Cgroup v2 has corresponding counters for years. Or do you mean something different?

Do we have anything more specific than nr_dying_descendants? I was
thinking about an interface which would provide paths and stats for dead
memcgs. But I have to confess I haven't really spent much time thinking
about how much work that would be. I am by no means against adding memcg
information to the page owner. I just think there must be a better way
to present resource consumption by dead memcgs.

--
Michal Hocko
SUSE Labs

2022-02-03 09:17:23

by Waiman Long

[permalink] [raw]
Subject: Re: [PATCH v2 3/3] mm/page_owner: Dump memcg information

On 2/2/22 03:57, Michal Hocko wrote:
> On Tue 01-02-22 11:41:19, Waiman Long wrote:
>> On 2/1/22 05:49, Michal Hocko wrote:
> [...]
>>> Could you be more specific? Offlined memcgs are still part of the
>>> hierarchy IIRC. So it shouldn't be much more than iterating the whole
>>> cgroup tree and collect interesting data about dead cgroups.
>> What I mean is that without piggybacking on top of page_owner, we will to
>> add a lot more code to collect and display those information which may have
>> some overhead of its own.
> Yes, there is nothing like a free lunch. Page owner is certainly a tool
> that can be used. My main concern is that this tool doesn't really
> scale on large machines with a lots of memory. It will provide a very
> detailed information but I am not sure this is particularly helpful to
> most admins (why should people process tons of allocation backtraces in
> the first place). Wouldn't it be sufficient to have per dead memcg stats
> to see where the memory sits?
>
> Accumulated offline memcgs is something that bothers more people and I
> am really wondering whether we can do more for those people to evaluate
> the current state.

You won't get the stack backtrace information without page_owner
enabled. I believe that is a helpful piece of information. I don't
expect page_owner to be enabled by default on production systems because
of its memory overhead.

I believe you can actually see the number of memory cgroups present by
looking at the /proc/cgroups file, though you don't know how many of
them are offline memcgs. So if one suspects that there are a large number
of offline memcgs, one can set up a test environment with page_owner
enabled for further analysis.

Cheers,
Longman

2022-02-03 14:17:43

by Michal Hocko

[permalink] [raw]
Subject: Re: [PATCH v2 3/3] mm/page_owner: Dump memcg information

On Tue 01-02-22 11:41:19, Waiman Long wrote:
>
> On 2/1/22 05:49, Michal Hocko wrote:
[...]
> > Could you be more specific? Offlined memcgs are still part of the
> > hierarchy IIRC. So it shouldn't be much more than iterating the whole
> > cgroup tree and collect interesting data about dead cgroups.
>
> What I mean is that without piggybacking on top of page_owner, we will to
> add a lot more code to collect and display those information which may have
> some overhead of its own.

Yes, there is nothing like a free lunch. Page owner is certainly a tool
that can be used. My main concern is that this tool doesn't really
scale on large machines with lots of memory. It will provide very
detailed information, but I am not sure this is particularly helpful to
most admins (why should people process tons of allocation backtraces in
the first place). Wouldn't it be sufficient to have per-dead-memcg stats
to see where the memory sits?

Accumulated offline memcgs are something that bothers more people, and I
am really wondering whether we can do more for those people to evaluate
the current state.
--
Michal Hocko
SUSE Labs

2022-02-03 20:37:21

by Rasmus Villemoes

[permalink] [raw]
Subject: Re: [PATCH v2 1/3] lib/vsprintf: Avoid redundant work with 0 size

On 31/01/2022 19.48, Waiman Long wrote:
> On 1/31/22 05:34, Andy Shevchenko wrote:

>> Also it seems currently the kernel documentation is not aligned with
>> the code
>>
>>    "If @size is == 0 the function returns 0."
>>
>> It should mention the (theoretical?) possibility of getting negative
>> value,
>> if vsnprintf() returns negative value.
>
> AFAICS, the kernel's vsnprintf() function will not return -1.

Even if it did, the "i < size" comparison in vscnprintf() is "int v
size_t", so integer promotion says that even if i were negative, that
comparison would be false, so we wouldn't forward that negative value
anyway.

> So in that
> sense it is not fully POSIX compliant.

Of course it's not, but not because it doesn't return -1. POSIX just
says to return that in case of an error, and as a matter of QoI, the
kernel's implementation simply can't (and must not) fail. There are
other cases where we don't follow POSIX/C, e.g. in some corner cases
around field length and precision (documented in test_printf.c), and the
non-support of %n (and floating point and handling of wchar_t*), and the
whole %p<> extension etc.

Rasmus

2022-02-03 20:52:55

by Roman Gushchin

[permalink] [raw]
Subject: Re: [PATCH v2 3/3] mm/page_owner: Dump memcg information

On Wed, Feb 02, 2022 at 05:38:07PM +0100, Michal Hocko wrote:
> On Wed 02-02-22 07:54:48, Roman Gushchin wrote:
> > On Wed, Feb 02, 2022 at 09:57:18AM +0100, Michal Hocko wrote:
> > > On Tue 01-02-22 11:41:19, Waiman Long wrote:
> > > >
> > > > On 2/1/22 05:49, Michal Hocko wrote:
> > > [...]
> > > > > Could you be more specific? Offlined memcgs are still part of the
> > > > > hierarchy IIRC. So it shouldn't be much more than iterating the whole
> > > > > cgroup tree and collect interesting data about dead cgroups.
> > > >
> > > > What I mean is that without piggybacking on top of page_owner, we will to
> > > > add a lot more code to collect and display those information which may have
> > > > some overhead of its own.
> > >
> > > Yes, there is nothing like a free lunch. Page owner is certainly a tool
> > > that can be used. My main concern is that this tool doesn't really
> > > scale on large machines with a lots of memory. It will provide a very
> > > detailed information but I am not sure this is particularly helpful to
> > > most admins (why should people process tons of allocation backtraces in
> > > the first place). Wouldn't it be sufficient to have per dead memcg stats
> > > to see where the memory sits?
> > >
> > > Accumulated offline memcgs is something that bothers more people and I
> > > am really wondering whether we can do more for those people to evaluate
> > > the current state.
> >
> > Cgroup v2 has corresponding counters for years. Or do you mean something different?
>
> Do we have anything more specific than nr_dying_descendants?

No, just nr_dying_descendants.

> I was thinking about an interface which would provide paths and stats for dead
> memcgs. But I have to confess I haven't really spent much time thinking
> about how much work that would be. I am by no means against adding memcg
> information to the page owner. I just think there must be a better way
> to present resource consumption by dead memcgs.

I'd go with a drgn script. I wrote a bunch of them some time ago and
can probably revive them and post them here (will take a few days).

I agree that the problem still exists and providing some tooling around
it would be useful.

Thanks!