2019-04-29 08:17:44

by Jiri Slaby

Subject: memcg causes crashes in list_lru_add

Hi,

With a new enough systemd, one of our systems crashes 100% of the time
during boot. All kernels I tried are affected: 5.1-rc7, 5.0.10 stable,
4.12.14.

The 5.1-rc7 crash:
> [ 12.022637] systemd[1]: Starting Create list of required static device nodes for the current kernel...
> [ 12.023353] BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
> [ 12.041502] #PF error: [normal kernel read fault]
> [ 12.041502] PGD 0 P4D 0
> [ 12.041502] Oops: 0000 [#1] SMP NOPTI
> [ 12.041502] CPU: 0 PID: 208 Comm: (kmod) Not tainted 5.1.0-rc7-1.g04c1966-default #1 openSUSE Tumbleweed (unreleased)
> [ 12.041502] Hardware name: Supermicro H8DSP-8/H8DSP-8, BIOS 080011 06/30/2006
> [ 12.041502] RIP: 0010:list_lru_add+0x94/0x170
> [ 12.041502] Code: c6 07 00 66 66 66 90 31 c0 5b 5d 41 5c 41 5d 41 5e 41 5f c3 49 8b 7c 24 20 49 8d 54 24 08 48 85 ff 74 07 e9 46 00 00 00 31 ff <48> 8b 42 08 4c 89 6a 08 49 89 55 00 49 89 45 08 4c 89 28 48 8b 42
> [ 12.041502] RSP: 0018:ffffb11b8091be50 EFLAGS: 00010202
> [ 12.041502] RAX: 0000000000000001 RBX: ffff930b35705a40 RCX: ffff9309cf21ade0
> [ 12.041502] RDX: 0000000000000000 RSI: ffff930ab61bc587 RDI: ffff930a17711000
> [ 12.041502] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
> [ 12.041502] R10: 0000000000000000 R11: 0000000000000008 R12: ffff9309f5f86640
> [ 12.041502] R13: ffff930ab5705a40 R14: 0000000000000001 R15: ffff930a171dc4e0
> [ 12.041502] FS: 00007f42d6ea5940(0000) GS:ffff930ab7800000(0000) knlGS:0000000000000000
> [ 12.041502] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [ 12.041502] CR2: 0000000000000008 CR3: 0000000057dec000 CR4: 00000000000006f0
> [ 12.041502] Call Trace:
> [ 12.041502] d_lru_add+0x44/0x50
> [ 12.041502] dput.part.34+0xfc/0x110
> [ 12.041502] __fput+0x108/0x230
> [ 12.041502] task_work_run+0x9f/0xc0
> [ 12.041502] exit_to_usermode_loop+0xf5/0x100
> [ 12.041502] do_syscall_64+0xe2/0x110
> [ 12.041502] entry_SYSCALL_64_after_hwframe+0x49/0xbe
> [ 12.041502] RIP: 0033:0x7f42d77567b7
> [ 12.041502] Code: ff ff ff ff c3 48 8b 15 df 96 0c 00 f7 d8 64 89 02 b8 ff ff ff ff eb c0 66 2e 0f 1f 84 00 00 00 00 00 90 b8 03 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 01 c3 48 8b 15 b1 96 0c 00 f7 d8 64 89 02 b8
> [ 12.041502] RSP: 002b:00007fffeb85c2c8 EFLAGS: 00000202 ORIG_RAX: 0000000000000003
> [ 12.041502] RAX: 0000000000000000 RBX: 000055dfb6222fd0 RCX: 00007f42d77567b7
> [ 12.041502] RDX: 00007f42d78217c0 RSI: 000055dfb6223053 RDI: 0000000000000003
> [ 12.041502] RBP: 00007f42d78223c0 R08: 000055dfb62230b0 R09: 00007fffeb85c0f5
> [ 12.041502] R10: 0000000000000000 R11: 0000000000000202 R12: 0000000000000000
> [ 12.041502] R13: 000055dfb6225080 R14: 00007fffeb85c3aa R15: 0000000000000003
> [ 12.041502] Modules linked in:
> [ 12.041502] CR2: 0000000000000008
> [ 12.491424] ---[ end trace 574d0c998e97d864 ]---

Enabling KASAN reveals a bit more:
> Allocated by task 1:
> __kasan_kmalloc.constprop.13+0xc1/0xd0
> __list_lru_init+0x3cd/0x5e0

This is the kvmalloc in memcg_init_list_lru_node:
	memcg_lrus = kvmalloc(sizeof(*memcg_lrus) +
			      size * sizeof(void *), GFP_KERNEL);

> sget_userns+0x65c/0xba0
> kernfs_mount_ns+0x120/0x7f0
> cgroup_do_mount+0x93/0x2e0
> cgroup1_mount+0x335/0x925
> cgroup_mount+0x14a/0x7b0
> mount_fs+0xce/0x304
> vfs_kern_mount.part.33+0x58/0x370
> do_mount+0x390/0x2540
> ksys_mount+0xb6/0xd0
...
>
> Freed by task 1:
> __kasan_slab_free+0x125/0x170
> kfree+0x90/0x1a0
> acpi_ds_terminate_control_method+0x5a2/0x5c9

This is a different object (the address overflowed into ACPI-allocated
memory), so this part is irrelevant.

> The buggy address belongs to the object at ffff8880d69a2e68
> which belongs to the cache kmalloc-16 of size 16
> The buggy address is located 8 bytes to the right of
> 16-byte region [ffff8880d69a2e68, ffff8880d69a2e78)

Hmm, a 16-byte slab. 'memcg_lrus' allocated above is a 'struct
list_lru_memcg', defined as:
	struct list_lru_memcg {
		struct rcu_head rcu;
		/* array of per cgroup lists, indexed by memcg_cache_id */
		struct list_lru_one *lru[0];
	};

sizeof(struct rcu_head) is 16, so the 'size' used in the kvmalloc above
in 'memcg_init_list_lru_node' must be 0. That cannot be correct.
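
To double-check the arithmetic, here is a minimal userspace sketch (the
types are hand-copied stand-ins assuming x86_64 pointer sizes; this is
not the kernel code itself):

	#include <stdio.h>

	/* stand-in with the same layout as the kernel's rcu_head */
	struct rcu_head {
		void *next;
		void (*func)(struct rcu_head *);
	};

	struct list_lru_one;	/* incomplete type is enough for a pointer */

	struct list_lru_memcg {
		struct rcu_head rcu;		/* 16 bytes on x86_64 */
		struct list_lru_one *lru[0];	/* per-memcg lists */
	};

	int main(void)
	{
		int size = 0;	/* memcg_nr_cache_ids early during boot */

		/* with size == 0 this is just sizeof(struct rcu_head),
		 * i.e. 16 bytes -- exactly the kmalloc-16 slab KASAN
		 * complained about */
		printf("%zu\n",
		       sizeof(struct list_lru_memcg) + size * sizeof(void *));
		return 0;
	}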

This confirms the theory:
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -366,8 +366,14 @@ static int memcg_init_list_lru_node(stru
 	struct list_lru_memcg *memcg_lrus;
 	int size = memcg_nr_cache_ids;
 
+	if (!size) {
+		pr_err("%s: XXXXXXXXX size is zero yet!\n", __func__);
+		size = 256;
+	}
+
 	memcg_lrus = kvmalloc(sizeof(*memcg_lrus) +
 			      size * sizeof(void *), GFP_KERNEL);
+	printk(KERN_DEBUG "%s: a=%px\n", __func__, memcg_lrus);
 	if (!memcg_lrus)
 		return -ENOMEM;


and it even makes the beast boot. memcg makes a very wrong assumption
about 'memcg_nr_cache_ids': it assumes the value cannot change later,
but it does.

This is a dump_stack from 'memcg_alloc_cache_id', which changes
'memcg_nr_cache_ids' later during boot:
CPU: 1 PID: 1 Comm: systemd Tainted: G E
5.0.10-0.ge8fc1e9-default #1 openSUSE Tumbleweed (unreleased)
Hardware name: Supermicro H8DSP-8/H8DSP-8, BIOS 080011 06/30/2006
Call Trace:
dump_stack+0x9a/0xf0
mem_cgroup_css_alloc+0xb16/0x16a0
cgroup_apply_control_enable+0x2d7/0xb40
cgroup_mkdir+0x594/0xc50
kernfs_iop_mkdir+0x21a/0x2e0
vfs_mkdir+0x37a/0x5d0
do_mkdirat+0x1b1/0x200
do_syscall_64+0xa5/0x290
entry_SYSCALL_64_after_hwframe+0x49/0xbe
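
For reference, this is roughly how the id space grows (condensed from
the 5.1-era memcg_alloc_cache_id in mm/memcontrol.c; locking and error
handling are dropped here):

	static int memcg_alloc_cache_id(void)
	{
		int id, size;

		id = ida_simple_get(&memcg_cache_ida, 0,
				    MEMCG_CACHES_MAX_SIZE, GFP_KERNEL);
		if (id < memcg_nr_cache_ids)
			return id;

		/* no room for the new id: double all the arrays,
		 * including every list_lru's per-memcg arrays, then
		 * publish the new memcg_nr_cache_ids */
		size = min(2 * (id + 1), MEMCG_CACHES_MAX_SIZE);
		memcg_update_all_caches(size);
		memcg_update_all_list_lrus(size);
		memcg_nr_cache_ids = size;
		return id;
	}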

I am not sure why this is machine-dependent. I cannot reproduce on any
other box.

Any idea how to fix this mess?

The report is in our bugzilla:
https://bugzilla.suse.com/show_bug.cgi?id=1133616

thanks,
--
js
suse labs


2019-04-29 09:27:02

by Jiri Slaby

Subject: Re: memcg causes crashes in list_lru_add

On 29. 04. 19, 10:16, Jiri Slaby wrote:
> Hi,
>
> With a new enough systemd, one of our systems crashes 100% of the time
> during boot. All kernels I tried are affected: 5.1-rc7, 5.0.10 stable,
> 4.12.14.
>
> The 5.1-rc7 crash:
>> [ 12.022637] systemd[1]: Starting Create list of required static device nodes for the current kernel...
>> [ 12.023353] BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
>> [ 12.041502] #PF error: [normal kernel read fault]
>> [ 12.041502] PGD 0 P4D 0
>> [ 12.041502] Oops: 0000 [#1] SMP NOPTI
>> [ 12.041502] CPU: 0 PID: 208 Comm: (kmod) Not tainted 5.1.0-rc7-1.g04c1966-default #1 openSUSE Tumbleweed (unreleased)
>> [ 12.041502] Hardware name: Supermicro H8DSP-8/H8DSP-8, BIOS 080011 06/30/2006
>> [ 12.041502] RIP: 0010:list_lru_add+0x94/0x170
>> [ 12.041502] Code: c6 07 00 66 66 66 90 31 c0 5b 5d 41 5c 41 5d 41 5e 41 5f c3 49 8b 7c 24 20 49 8d 54 24 08 48 85 ff 74 07 e9 46 00 00 00 31 ff <48> 8b 42 08 4c 89 6a 08 49 89 55 00 49 89 45 08 4c 89 28 48 8b 42
>> [ 12.041502] RSP: 0018:ffffb11b8091be50 EFLAGS: 00010202
>> [ 12.041502] RAX: 0000000000000001 RBX: ffff930b35705a40 RCX: ffff9309cf21ade0
>> [ 12.041502] RDX: 0000000000000000 RSI: ffff930ab61bc587 RDI: ffff930a17711000
>> [ 12.041502] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
>> [ 12.041502] R10: 0000000000000000 R11: 0000000000000008 R12: ffff9309f5f86640
>> [ 12.041502] R13: ffff930ab5705a40 R14: 0000000000000001 R15: ffff930a171dc4e0
>> [ 12.041502] FS: 00007f42d6ea5940(0000) GS:ffff930ab7800000(0000) knlGS:0000000000000000
>> [ 12.041502] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>> [ 12.041502] CR2: 0000000000000008 CR3: 0000000057dec000 CR4: 00000000000006f0
>> [ 12.041502] Call Trace:
>> [ 12.041502] d_lru_add+0x44/0x50

...

> and even makes the beast booting. memcg has very wrong assumptions on
> 'memcg_nr_cache_ids'. It does not assume it can change later, despite it
> does.
...
> I am not sure why this is machine-dependent. I cannot reproduce on any
> other box.
>
> Any idea how to fix this mess?

memcg_update_all_list_lrus should take care of resizing the array. So
it looks like list_lru_from_memcg_idx returns a stale pointer to
list_lru_from_kmem and then to list_lru_add. Still investigating.
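
For reference, the lookup in question looks roughly like this (condensed
from the 5.1-era mm/list_lru.c; the RCU/lockdep annotations are dropped
here):

	static inline struct list_lru_one *
	list_lru_from_memcg_idx(struct list_lru_node *nlru, int idx)
	{
		struct list_lru_memcg *memcg_lrus = nlru->memcg_lrus;

		/* if the array was never grown past its initial size,
		 * lru[idx] reads past the end of the allocation */
		if (memcg_lrus && idx >= 0)
			return memcg_lrus->lru[idx];
		return &nlru->lru;
	}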

thanks,
--
js
suse labs

2019-04-29 10:11:05

by Jiri Slaby

Subject: Re: memcg causes crashes in list_lru_add

On 29. 04. 19, 11:25, Jiri Slaby wrote:
> memcg_update_all_list_lrus should take care of resizing the array.

It should, but:
[ 0.058362] Number of physical nodes 2
[ 0.058366] Skipping disabled node 0

So this should be the real fix:
--- linux-5.0-stable1.orig/mm/list_lru.c
+++ linux-5.0-stable1/mm/list_lru.c
@@ -37,11 +37,12 @@ static int lru_shrinker_id(struct list_l
 
 static inline bool list_lru_memcg_aware(struct list_lru *lru)
 {
-	/*
-	 * This needs node 0 to be always present, even
-	 * in the systems supporting sparse numa ids.
-	 */
-	return !!lru->node[0].memcg_lrus;
+	int i;
+
+	for_each_online_node(i)
+		return !!lru->node[i].memcg_lrus;
+
+	return false;
 }
 
 static inline struct list_lru_one *

Opinions?

thanks,
--
js
suse labs

2019-04-29 10:18:42

by Michal Hocko

Subject: Re: memcg causes crashes in list_lru_add

On Mon 29-04-19 11:25:48, Jiri Slaby wrote:
> On 29. 04. 19, 10:16, Jiri Slaby wrote:
[...]
> > Any idea how to fix this mess?
>
> memcg_update_all_list_lrus should take care of resizing the array. So
> it looks like list_lru_from_memcg_idx returns a stale pointer to
> list_lru_from_kmem and then to list_lru_add. Still investigating.

I am traveling and at a conference this week. Please open a bug, and if
this affects the upstream kernel then report it upstream as well. Cc
linux-mm and the memcg maintainers. This doesn't ring any bells
immediately; I do not remember any large changes recently.
--
Michal Hocko
SUSE Labs

2019-04-29 10:42:39

by Michal Hocko

Subject: Re: memcg causes crashes in list_lru_add

On Mon 29-04-19 12:09:53, Jiri Slaby wrote:
> On 29. 04. 19, 11:25, Jiri Slaby wrote:
> > memcg_update_all_list_lrus should take care of resizing the array.
>
> It should, but:
> [ 0.058362] Number of physical nodes 2
> [ 0.058366] Skipping disabled node 0
>
> So this should be the real fix:
> --- linux-5.0-stable1.orig/mm/list_lru.c
> +++ linux-5.0-stable1/mm/list_lru.c
> @@ -37,11 +37,12 @@ static int lru_shrinker_id(struct list_l
>
> static inline bool list_lru_memcg_aware(struct list_lru *lru)
> {
> - /*
> - * This needs node 0 to be always present, even
> - * in the systems supporting sparse numa ids.
> - */
> - return !!lru->node[0].memcg_lrus;
> + int i;
> +
> + for_each_online_node(i)
> + return !!lru->node[i].memcg_lrus;
> +
> + return false;
> }
>
> static inline struct list_lru_one *
>
> Opinions?

Please report upstream. This code has been here for quite some time.
I do not really remember why we have an assumption about node 0
or why it hasn't been a problem until now.

Thanks!
--
Michal Hocko
SUSE Labs

2019-04-29 10:46:23

by Michal Hocko

Subject: Re: memcg causes crashes in list_lru_add

On Mon 29-04-19 12:40:51, Michal Hocko wrote:
> On Mon 29-04-19 12:09:53, Jiri Slaby wrote:
> > On 29. 04. 19, 11:25, Jiri Slaby wrote:
> > > memcg_update_all_list_lrus should take care of resizing the array.
> >
> > It should, but:
> > [ 0.058362] Number of physical nodes 2
> > [ 0.058366] Skipping disabled node 0
> >
> > So this should be the real fix:
> > --- linux-5.0-stable1.orig/mm/list_lru.c
> > +++ linux-5.0-stable1/mm/list_lru.c
> > @@ -37,11 +37,12 @@ static int lru_shrinker_id(struct list_l
> >
> > static inline bool list_lru_memcg_aware(struct list_lru *lru)
> > {
> > - /*
> > - * This needs node 0 to be always present, even
> > - * in the systems supporting sparse numa ids.
> > - */
> > - return !!lru->node[0].memcg_lrus;
> > + int i;
> > +
> > + for_each_online_node(i)
> > + return !!lru->node[i].memcg_lrus;
> > +
> > + return false;
> > }
> >
> > static inline struct list_lru_one *
> >
> > Opinions?
>
> Please report upstream. This code has been here for quite some time.
> I do not really remember why we have an assumption about node 0
> or why it hasn't been a problem until now.

Hmm, I blame jet lag. I was convinced that this was an internal email.
Sorry about the confusion.

Anyway, time to revisit 145949a1387ba. CCed Raghavendra.
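
(For reference: 145949a1387ba is "mm/list_lru.c: replace nr_node_ids
for loop with for_each_node()", which introduced the node-0 assumption
spelled out in the comment being removed above.)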
--
Michal Hocko
SUSE Labs

2019-04-29 11:00:51

by Jiri Slaby

Subject: [PATCH] memcg: make it work on sparse non-0-node systems

We have a single node system with node 0 disabled:
Scanning NUMA topology in Northbridge 24
Number of physical nodes 2
Skipping disabled node 0
Node 1 MemBase 0000000000000000 Limit 00000000fbff0000
NODE_DATA(1) allocated [mem 0xfbfda000-0xfbfeffff]

This causes crashes in memcg when the system boots:
BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
#PF error: [normal kernel read fault]
...
RIP: 0010:list_lru_add+0x94/0x170
...
Call Trace:
d_lru_add+0x44/0x50
dput.part.34+0xfc/0x110
__fput+0x108/0x230
task_work_run+0x9f/0xc0
exit_to_usermode_loop+0xf5/0x100

It is reproducible as far back as 4.12. I did not try older kernels. You
have to have a new enough systemd, e.g. 241 (the reason is unknown -- it
was not investigated). It cannot be reproduced with systemd 234.

The system crashes because the size of the lru array is never updated in
memcg_update_all_list_lrus, so the reads go past the zero-sized array,
dereferencing random memory.

The root cause is the list_lru_memcg_aware check in the list_lru code:
it assumes node 0 is always present, which is not true on some systems,
as can be seen above.

So fix this by checking the first online node instead of node 0.

Signed-off-by: Jiri Slaby <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Vladimir Davydov <[email protected]>
Cc: <[email protected]>
Cc: <[email protected]>
Cc: Raghavendra K T <[email protected]>
---
mm/list_lru.c | 6 +-----
1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/mm/list_lru.c b/mm/list_lru.c
index 0730bf8ff39f..7689910f1a91 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -37,11 +37,7 @@ static int lru_shrinker_id(struct list_lru *lru)
 
 static inline bool list_lru_memcg_aware(struct list_lru *lru)
 {
-	/*
-	 * This needs node 0 to be always present, even
-	 * in the systems supporting sparse numa ids.
-	 */
-	return !!lru->node[0].memcg_lrus;
+	return !!lru->node[first_online_node].memcg_lrus;
 }
 
 static inline struct list_lru_one *
--
2.21.0

2019-04-29 11:31:53

by Michal Hocko

Subject: Re: [PATCH] memcg: make it work on sparse non-0-node systems

On Mon 29-04-19 12:59:39, Jiri Slaby wrote:
[...]
> static inline bool list_lru_memcg_aware(struct list_lru *lru)
> {
> - /*
> - * This needs node 0 to be always present, even
> - * in the systems supporting sparse numa ids.
> - */
> - return !!lru->node[0].memcg_lrus;
> + return !!lru->node[first_online_node].memcg_lrus;
> }
>
> static inline struct list_lru_one *

How come this doesn't blow up later, e.g. in the memcg_destroy_list_lru
path, which does iterate over all existing nodes, thus including
node 0?
--
Michal Hocko
SUSE Labs

2019-04-29 11:58:02

by Jiri Slaby

Subject: Re: [PATCH] memcg: make it work on sparse non-0-node systems

On 29. 04. 19, 13:30, Michal Hocko wrote:
> On Mon 29-04-19 12:59:39, Jiri Slaby wrote:
> [...]
>> static inline bool list_lru_memcg_aware(struct list_lru *lru)
>> {
>> - /*
>> - * This needs node 0 to be always present, even
>> - * in the systems supporting sparse numa ids.
>> - */
>> - return !!lru->node[0].memcg_lrus;
>> + return !!lru->node[first_online_node].memcg_lrus;
>> }
>>
>> static inline struct list_lru_one *
>
> How come this doesn't blow up later, e.g. in the memcg_destroy_list_lru
> path, which does iterate over all existing nodes, thus including
> node 0?

If the node is not disabled (i.e. is N_POSSIBLE), lru->node is allocated
for that node too. It will also have memcg_lrus properly set.

If it is disabled, it will never be iterated.
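
For reference, the init side (condensed from the 5.1-era
memcg_init_list_lru in mm/list_lru.c; the error unwinding is dropped
here) covers all possible nodes, not just online ones:

	static int memcg_init_list_lru(struct list_lru *lru, bool memcg_aware)
	{
		int i;

		if (!memcg_aware)
			return 0;

		/* every N_POSSIBLE node gets its memcg_lrus array, so
		 * the destroy path's for_each_node() walk only ever
		 * sees initialized slots */
		for_each_node(i) {
			if (memcg_init_list_lru_node(&lru->node[i]))
				return -ENOMEM;	/* real code unwinds */
		}
		return 0;
	}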

Well, I could have used first_node. But I am not sure whether the first
POSSIBLE node is also ONLINE during boot?

thanks,
--
js
suse labs

2019-04-29 12:13:24

by Jiri Slaby

Subject: Re: [PATCH] memcg: make it work on sparse non-0-node systems

On 29. 04. 19, 13:55, Jiri Slaby wrote:
> Well, I could have used first_node. But I am not sure whether the first
> POSSIBLE node is also ONLINE during boot?

Thinking about it, it does not matter, actually. Both the first possible
node and the first online node are allocated and set up, no matter which
one is the ONLINE one. So first_node should work as well as
first_online_node.
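
For reference, the include/linux/nodemask.h helpers involved, paraphrased
(not the verbatim kernel code):

	first_node(mask)	- index of the first set bit in a nodemask
	first_online_node	- first_node(node_states[N_ONLINE])

On the affected box node 0 is not even possible, so the first possible
node and the first online node are both node 1, and either one indexes a
fully initialized lru->node slot.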

thanks,
--
js
suse labs

2019-04-29 13:17:29

by Michal Hocko

Subject: Re: [PATCH] memcg: make it work on sparse non-0-node systems

On Mon 29-04-19 13:55:26, Jiri Slaby wrote:
> On 29. 04. 19, 13:30, Michal Hocko wrote:
> > On Mon 29-04-19 12:59:39, Jiri Slaby wrote:
> > [...]
> >> static inline bool list_lru_memcg_aware(struct list_lru *lru)
> >> {
> >> - /*
> >> - * This needs node 0 to be always present, even
> >> - * in the systems supporting sparse numa ids.
> >> - */
> >> - return !!lru->node[0].memcg_lrus;
> >> + return !!lru->node[first_online_node].memcg_lrus;
> >> }
> >>
> >> static inline struct list_lru_one *
> >
> > How come this doesn't blow up later, e.g. in the memcg_destroy_list_lru
> > path, which does iterate over all existing nodes, thus including
> > node 0?
>
> If the node is not disabled (i.e. is N_POSSIBLE), lru->node is allocated
> for that node too. It will also have memcg_lrus properly set.
>
> If it is disabled, it will never be iterated.
>
> Well, I could have used first_node. But I am not sure whether the first
> POSSIBLE node is also ONLINE during boot?

I dunno. I would have to think about this much more. The whole
expectation that node 0 is always around is simply broken. But also
list_lru_memcg_aware looks very suspicious. We should have a flag or
something rather than what we have now.

I am still not sure I have completely understood the problem though.
I will try to get to this during the week, but Vladimir should be a much
better fit to judge here.
--
Michal Hocko
SUSE Labs

2019-05-09 07:23:45

by Jiri Slaby

Subject: Re: [PATCH] memcg: make it work on sparse non-0-node systems

Vladimir,

as you are perhaps the one most familiar with the code, could you take a
look at this?

On 29. 04. 19, 12:59, Jiri Slaby wrote:
> We have a single node system with node 0 disabled:
> Scanning NUMA topology in Northbridge 24
> Number of physical nodes 2
> Skipping disabled node 0
> Node 1 MemBase 0000000000000000 Limit 00000000fbff0000
> NODE_DATA(1) allocated [mem 0xfbfda000-0xfbfeffff]
>
> This causes crashes in memcg when the system boots:
> BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
> #PF error: [normal kernel read fault]
> ...
> RIP: 0010:list_lru_add+0x94/0x170
> ...
> Call Trace:
> d_lru_add+0x44/0x50
> dput.part.34+0xfc/0x110
> __fput+0x108/0x230
> task_work_run+0x9f/0xc0
> exit_to_usermode_loop+0xf5/0x100
>
> It is reproducible as far back as 4.12. I did not try older kernels. You
> have to have a new enough systemd, e.g. 241 (the reason is unknown -- it
> was not investigated). It cannot be reproduced with systemd 234.
>
> The system crashes because the size of the lru array is never updated in
> memcg_update_all_list_lrus, so the reads go past the zero-sized array,
> dereferencing random memory.
>
> The root cause is the list_lru_memcg_aware check in the list_lru code:
> it assumes node 0 is always present, which is not true on some systems,
> as can be seen above.
>
> So fix this by checking the first online node instead of node 0.
>
> Signed-off-by: Jiri Slaby <[email protected]>
> Cc: Johannes Weiner <[email protected]>
> Cc: Michal Hocko <[email protected]>
> Cc: Vladimir Davydov <[email protected]>
> Cc: <[email protected]>
> Cc: <[email protected]>
> Cc: Raghavendra K T <[email protected]>
> ---
> mm/list_lru.c | 6 +-----
> 1 file changed, 1 insertion(+), 5 deletions(-)
>
> diff --git a/mm/list_lru.c b/mm/list_lru.c
> index 0730bf8ff39f..7689910f1a91 100644
> --- a/mm/list_lru.c
> +++ b/mm/list_lru.c
> @@ -37,11 +37,7 @@ static int lru_shrinker_id(struct list_lru *lru)
>
> static inline bool list_lru_memcg_aware(struct list_lru *lru)
> {
> - /*
> - * This needs node 0 to be always present, even
> - * in the systems supporting sparse numa ids.
> - */
> - return !!lru->node[0].memcg_lrus;
> + return !!lru->node[first_online_node].memcg_lrus;
> }
>
> static inline struct list_lru_one *
>


--
js
suse labs

2019-05-09 12:27:54

by Vladimir Davydov

Subject: Re: [PATCH] memcg: make it work on sparse non-0-node systems

On Mon, Apr 29, 2019 at 12:59:39PM +0200, Jiri Slaby wrote:
> We have a single node system with node 0 disabled:
> Scanning NUMA topology in Northbridge 24
> Number of physical nodes 2
> Skipping disabled node 0
> Node 1 MemBase 0000000000000000 Limit 00000000fbff0000
> NODE_DATA(1) allocated [mem 0xfbfda000-0xfbfeffff]
>
> This causes crashes in memcg when the system boots:
> BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
> #PF error: [normal kernel read fault]
> ...
> RIP: 0010:list_lru_add+0x94/0x170
> ...
> Call Trace:
> d_lru_add+0x44/0x50
> dput.part.34+0xfc/0x110
> __fput+0x108/0x230
> task_work_run+0x9f/0xc0
> exit_to_usermode_loop+0xf5/0x100
>
> It is reproducible as far back as 4.12. I did not try older kernels. You
> have to have a new enough systemd, e.g. 241 (the reason is unknown -- it
> was not investigated). It cannot be reproduced with systemd 234.
>
> The system crashes because the size of the lru array is never updated in
> memcg_update_all_list_lrus, so the reads go past the zero-sized array,
> dereferencing random memory.
>
> The root cause is the list_lru_memcg_aware check in the list_lru code:
> it assumes node 0 is always present, which is not true on some systems,
> as can be seen above.
>
> So fix this by checking the first online node instead of node 0.
>
> Signed-off-by: Jiri Slaby <[email protected]>
> Cc: Johannes Weiner <[email protected]>
> Cc: Michal Hocko <[email protected]>
> Cc: Vladimir Davydov <[email protected]>
> Cc: <[email protected]>
> Cc: <[email protected]>
> Cc: Raghavendra K T <[email protected]>
> ---
> mm/list_lru.c | 6 +-----
> 1 file changed, 1 insertion(+), 5 deletions(-)
>
> diff --git a/mm/list_lru.c b/mm/list_lru.c
> index 0730bf8ff39f..7689910f1a91 100644
> --- a/mm/list_lru.c
> +++ b/mm/list_lru.c
> @@ -37,11 +37,7 @@ static int lru_shrinker_id(struct list_lru *lru)
>
> static inline bool list_lru_memcg_aware(struct list_lru *lru)
> {
> - /*
> - * This needs node 0 to be always present, even
> - * in the systems supporting sparse numa ids.
> - */
> - return !!lru->node[0].memcg_lrus;
> + return !!lru->node[first_online_node].memcg_lrus;
> }
>
> static inline struct list_lru_one *

Yep, I didn't expect node 0 could ever be unavailable, my bad.
The patch looks fine to me:

Acked-by: Vladimir Davydov <[email protected]>

However, I tend to agree with Michal that (ab)using node[0].memcg_lrus
to check if a list_lru is memcg aware looks confusing. I guess we could
simply add a bool flag to list_lru instead. Something like this, maybe:

diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
index aa5efd9351eb..d5ceb2839a2d 100644
--- a/include/linux/list_lru.h
+++ b/include/linux/list_lru.h
@@ -54,6 +54,7 @@ struct list_lru {
 #ifdef CONFIG_MEMCG_KMEM
 	struct list_head list;
 	int shrinker_id;
+	bool memcg_aware;
 #endif
 };

diff --git a/mm/list_lru.c b/mm/list_lru.c
index 0730bf8ff39f..8e605e40a4c6 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -37,11 +37,7 @@ static int lru_shrinker_id(struct list_lru *lru)
 
 static inline bool list_lru_memcg_aware(struct list_lru *lru)
 {
-	/*
-	 * This needs node 0 to be always present, even
-	 * in the systems supporting sparse numa ids.
-	 */
-	return !!lru->node[0].memcg_lrus;
+	return lru->memcg_aware;
 }
 
 static inline struct list_lru_one *
@@ -451,6 +447,7 @@ static int memcg_init_list_lru(struct list_lru *lru, bool memcg_aware)
 {
 	int i;
 
+	lru->memcg_aware = memcg_aware;
 	if (!memcg_aware)
 		return 0;

2019-05-09 16:08:02

by Shakeel Butt

Subject: Re: [PATCH] memcg: make it work on sparse non-0-node systems

On Thu, May 9, 2019 at 5:25 AM Vladimir Davydov <[email protected]> wrote:
>
> On Mon, Apr 29, 2019 at 12:59:39PM +0200, Jiri Slaby wrote:
> > We have a single node system with node 0 disabled:
> > Scanning NUMA topology in Northbridge 24
> > Number of physical nodes 2
> > Skipping disabled node 0
> > Node 1 MemBase 0000000000000000 Limit 00000000fbff0000
> > NODE_DATA(1) allocated [mem 0xfbfda000-0xfbfeffff]
> >
> > This causes crashes in memcg when the system boots:
> > BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
> > #PF error: [normal kernel read fault]
> > ...
> > RIP: 0010:list_lru_add+0x94/0x170
> > ...
> > Call Trace:
> > d_lru_add+0x44/0x50
> > dput.part.34+0xfc/0x110
> > __fput+0x108/0x230
> > task_work_run+0x9f/0xc0
> > exit_to_usermode_loop+0xf5/0x100
> >
> > It is reproducible as far back as 4.12. I did not try older kernels. You
> > have to have a new enough systemd, e.g. 241 (the reason is unknown -- it
> > was not investigated). It cannot be reproduced with systemd 234.
> >
> > The system crashes because the size of the lru array is never updated in
> > memcg_update_all_list_lrus, so the reads go past the zero-sized array,
> > dereferencing random memory.
> >
> > The root cause is the list_lru_memcg_aware check in the list_lru code:
> > it assumes node 0 is always present, which is not true on some systems,
> > as can be seen above.
> >
> > So fix this by checking the first online node instead of node 0.
> >
> > Signed-off-by: Jiri Slaby <[email protected]>
> > Cc: Johannes Weiner <[email protected]>
> > Cc: Michal Hocko <[email protected]>
> > Cc: Vladimir Davydov <[email protected]>
> > Cc: <[email protected]>
> > Cc: <[email protected]>
> > Cc: Raghavendra K T <[email protected]>
> > ---
> > mm/list_lru.c | 6 +-----
> > 1 file changed, 1 insertion(+), 5 deletions(-)
> >
> > diff --git a/mm/list_lru.c b/mm/list_lru.c
> > index 0730bf8ff39f..7689910f1a91 100644
> > --- a/mm/list_lru.c
> > +++ b/mm/list_lru.c
> > @@ -37,11 +37,7 @@ static int lru_shrinker_id(struct list_lru *lru)
> >
> > static inline bool list_lru_memcg_aware(struct list_lru *lru)
> > {
> > - /*
> > - * This needs node 0 to be always present, even
> > - * in the systems supporting sparse numa ids.
> > - */
> > - return !!lru->node[0].memcg_lrus;
> > + return !!lru->node[first_online_node].memcg_lrus;
> > }
> >
> > static inline struct list_lru_one *
>
> Yep, I didn't expect node 0 could ever be unavailable, my bad.
> The patch looks fine to me:
>
> Acked-by: Vladimir Davydov <[email protected]>
>
> However, I tend to agree with Michal that (ab)using node[0].memcg_lrus
> to check if a list_lru is memcg aware looks confusing. I guess we could
> simply add a bool flag to list_lru instead. Something like this, maybe:
>

I think the bool flag approach is much better: it makes no assumptions
about node initialization.

If we go with the bool approach, then add

Reviewed-by: Shakeel Butt <[email protected]>

> diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
> index aa5efd9351eb..d5ceb2839a2d 100644
> --- a/include/linux/list_lru.h
> +++ b/include/linux/list_lru.h
> @@ -54,6 +54,7 @@ struct list_lru {
> #ifdef CONFIG_MEMCG_KMEM
> struct list_head list;
> int shrinker_id;
> + bool memcg_aware;
> #endif
> };
>
> diff --git a/mm/list_lru.c b/mm/list_lru.c
> index 0730bf8ff39f..8e605e40a4c6 100644
> --- a/mm/list_lru.c
> +++ b/mm/list_lru.c
> @@ -37,11 +37,7 @@ static int lru_shrinker_id(struct list_lru *lru)
>
> static inline bool list_lru_memcg_aware(struct list_lru *lru)
> {
> - /*
> - * This needs node 0 to be always present, even
> - * in the systems supporting sparse numa ids.
> - */
> - return !!lru->node[0].memcg_lrus;
> + return lru->memcg_aware;
> }
>
> static inline struct list_lru_one *
> @@ -451,6 +447,7 @@ static int memcg_init_list_lru(struct list_lru *lru, bool memcg_aware)
> {
> int i;
>
> + lru->memcg_aware = memcg_aware;
> if (!memcg_aware)
> return 0;
>

2019-05-16 16:54:49

by Michal Hocko

Subject: Re: [PATCH] memcg: make it work on sparse non-0-node systems

On Thu 09-05-19 15:25:26, Vladimir Davydov wrote:
> On Mon, Apr 29, 2019 at 12:59:39PM +0200, Jiri Slaby wrote:
> > We have a single node system with node 0 disabled:
> > Scanning NUMA topology in Northbridge 24
> > Number of physical nodes 2
> > Skipping disabled node 0
> > Node 1 MemBase 0000000000000000 Limit 00000000fbff0000
> > NODE_DATA(1) allocated [mem 0xfbfda000-0xfbfeffff]
> >
> > This causes crashes in memcg when the system boots:
> > BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
> > #PF error: [normal kernel read fault]
> > ...
> > RIP: 0010:list_lru_add+0x94/0x170
> > ...
> > Call Trace:
> > d_lru_add+0x44/0x50
> > dput.part.34+0xfc/0x110
> > __fput+0x108/0x230
> > task_work_run+0x9f/0xc0
> > exit_to_usermode_loop+0xf5/0x100
> >
> > It is reproducible as far back as 4.12. I did not try older kernels. You
> > have to have a new enough systemd, e.g. 241 (the reason is unknown -- it
> > was not investigated). It cannot be reproduced with systemd 234.
> >
> > The system crashes because the size of the lru array is never updated in
> > memcg_update_all_list_lrus, so the reads go past the zero-sized array,
> > dereferencing random memory.
> >
> > The root cause is the list_lru_memcg_aware check in the list_lru code:
> > it assumes node 0 is always present, which is not true on some systems,
> > as can be seen above.
> >
> > So fix this by checking the first online node instead of node 0.
> >
> > Signed-off-by: Jiri Slaby <[email protected]>
> > Cc: Johannes Weiner <[email protected]>
> > Cc: Michal Hocko <[email protected]>
> > Cc: Vladimir Davydov <[email protected]>
> > Cc: <[email protected]>
> > Cc: <[email protected]>
> > Cc: Raghavendra K T <[email protected]>
> > ---
> > mm/list_lru.c | 6 +-----
> > 1 file changed, 1 insertion(+), 5 deletions(-)
> >
> > diff --git a/mm/list_lru.c b/mm/list_lru.c
> > index 0730bf8ff39f..7689910f1a91 100644
> > --- a/mm/list_lru.c
> > +++ b/mm/list_lru.c
> > @@ -37,11 +37,7 @@ static int lru_shrinker_id(struct list_lru *lru)
> >
> > static inline bool list_lru_memcg_aware(struct list_lru *lru)
> > {
> > - /*
> > - * This needs node 0 to be always present, even
> > - * in the systems supporting sparse numa ids.
> > - */
> > - return !!lru->node[0].memcg_lrus;
> > + return !!lru->node[first_online_node].memcg_lrus;
> > }
> >
> > static inline struct list_lru_one *
>
> Yep, I didn't expect node 0 could ever be unavailable, my bad.
> The patch looks fine to me:
>
> Acked-by: Vladimir Davydov <[email protected]>
>
> However, I tend to agree with Michal that (ab)using node[0].memcg_lrus
> to check if a list_lru is memcg aware looks confusing. I guess we could
> simply add a bool flag to list_lru instead. Something like this, maybe:

Yes, this makes much more sense to me!

>
> diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
> index aa5efd9351eb..d5ceb2839a2d 100644
> --- a/include/linux/list_lru.h
> +++ b/include/linux/list_lru.h
> @@ -54,6 +54,7 @@ struct list_lru {
> #ifdef CONFIG_MEMCG_KMEM
> struct list_head list;
> int shrinker_id;
> + bool memcg_aware;
> #endif
> };
>
> diff --git a/mm/list_lru.c b/mm/list_lru.c
> index 0730bf8ff39f..8e605e40a4c6 100644
> --- a/mm/list_lru.c
> +++ b/mm/list_lru.c
> @@ -37,11 +37,7 @@ static int lru_shrinker_id(struct list_lru *lru)
>
> static inline bool list_lru_memcg_aware(struct list_lru *lru)
> {
> - /*
> - * This needs node 0 to be always present, even
> - * in the systems supporting sparse numa ids.
> - */
> - return !!lru->node[0].memcg_lrus;
> + return lru->memcg_aware;
> }
>
> static inline struct list_lru_one *
> @@ -451,6 +447,7 @@ static int memcg_init_list_lru(struct list_lru *lru, bool memcg_aware)
> {
> int i;
>
> + lru->memcg_aware = memcg_aware;
> if (!memcg_aware)
> return 0;
>

--
Michal Hocko
SUSE Labs

2019-05-17 06:21:03

by Jiri Slaby

Subject: Re: [PATCH] memcg: make it work on sparse non-0-node systems

On 16. 05. 19, 15:59, Michal Hocko wrote:
>> However, I tend to agree with Michal that (ab)using node[0].memcg_lrus
>> to check if a list_lru is memcg aware looks confusing. I guess we could
>> simply add a bool flag to list_lru instead. Something like this, maybe:
>
> Yes, this makes much more sense to me!

I am not sure whether I should send a patch with this solution or
whether Vladimir will (given he is the author and already has a diff)?

thanks,
--
js
suse labs

2019-05-17 08:37:31

by Vladimir Davydov

Subject: Re: [PATCH] memcg: make it work on sparse non-0-node systems

On Fri, May 17, 2019 at 06:48:37AM +0200, Jiri Slaby wrote:
> On 16. 05. 19, 15:59, Michal Hocko wrote:
> >> However, I tend to agree with Michal that (ab)using node[0].memcg_lrus
> >> to check if a list_lru is memcg aware looks confusing. I guess we could
> >> simply add a bool flag to list_lru instead. Something like this, maybe:
> >
> > Yes, this makes much more sense to me!
>
> I am not sure whether I should send a patch with this solution or
> whether Vladimir will (given he is the author and already has a diff)?

I didn't even try to compile it, let alone test it. I'd appreciate it
if you could wrap it up and send it out under your authorship. Feel
free to add my acked-by.

2019-05-17 08:45:03

by Jiri Slaby

Subject: Re: [PATCH] memcg: make it work on sparse non-0-node systems

On 17. 05. 19, 10:00, Vladimir Davydov wrote:
> On Fri, May 17, 2019 at 06:48:37AM +0200, Jiri Slaby wrote:
>> On 16. 05. 19, 15:59, Michal Hocko wrote:
>>>> However, I tend to agree with Michal that (ab)using node[0].memcg_lrus
>>>> to check if a list_lru is memcg aware looks confusing. I guess we could
>>>> simply add a bool flag to list_lru instead. Something like this, maybe:
>>>
>>> Yes, this makes much more sense to me!
>>
>> I am not sure whether I should send a patch with this solution or
>> whether Vladimir will (given he is the author and already has a diff)?
>
> I didn't even try to compile it, let alone test it. I'd appreciate it
> if you could wrap it up and send it out under your authorship. Feel
> free to add my acked-by.

OK, NP.

thanks,
--
js
suse labs

2019-05-17 12:51:43

by Jiri Slaby

Subject: [PATCH v2] memcg: make it work on sparse non-0-node systems

We have a single node system with node 0 disabled:
Scanning NUMA topology in Northbridge 24
Number of physical nodes 2
Skipping disabled node 0
Node 1 MemBase 0000000000000000 Limit 00000000fbff0000
NODE_DATA(1) allocated [mem 0xfbfda000-0xfbfeffff]

This causes crashes in memcg when the system boots:
BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
#PF error: [normal kernel read fault]
...
RIP: 0010:list_lru_add+0x94/0x170
...
Call Trace:
d_lru_add+0x44/0x50
dput.part.34+0xfc/0x110
__fput+0x108/0x230
task_work_run+0x9f/0xc0
exit_to_usermode_loop+0xf5/0x100

It is reproducible as far back as 4.12. I did not try older kernels. You
have to have a new enough systemd, e.g. 241 (the reason is unknown -- it
was not investigated). It cannot be reproduced with systemd 234.

The system crashes because the size of the lru array is never updated in
memcg_update_all_list_lrus, so the reads go past the zero-sized array,
dereferencing random memory.

The root cause is the list_lru_memcg_aware check in the list_lru code:
it assumes node 0 is always present, which is not true on some systems,
as can be seen above.

So fix this by avoiding the check on node 0 and instead remembering the
memcg-awareness in a bool flag in struct list_lru.

[v2] use the idea proposed by Vladimir -- the bool flag.

Signed-off-by: Jiri Slaby <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Michal Hocko <[email protected]>
Suggested-by: Vladimir Davydov <[email protected]>
Acked-by: Vladimir Davydov <[email protected]>
Cc: <[email protected]>
Cc: <[email protected]>
Cc: Raghavendra K T <[email protected]>
---
include/linux/list_lru.h | 1 +
mm/list_lru.c | 8 +++-----
2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
index aa5efd9351eb..d5ceb2839a2d 100644
--- a/include/linux/list_lru.h
+++ b/include/linux/list_lru.h
@@ -54,6 +54,7 @@ struct list_lru {
 #ifdef CONFIG_MEMCG_KMEM
 	struct list_head list;
 	int shrinker_id;
+	bool memcg_aware;
 #endif
 };

diff --git a/mm/list_lru.c b/mm/list_lru.c
index 0730bf8ff39f..d3b538146efd 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -37,11 +37,7 @@ static int lru_shrinker_id(struct list_lru *lru)
 
 static inline bool list_lru_memcg_aware(struct list_lru *lru)
 {
-	/*
-	 * This needs node 0 to be always present, even
-	 * in the systems supporting sparse numa ids.
-	 */
-	return !!lru->node[0].memcg_lrus;
+	return lru->memcg_aware;
 }
 
 static inline struct list_lru_one *
@@ -451,6 +447,8 @@ static int memcg_init_list_lru(struct list_lru *lru, bool memcg_aware)
 {
 	int i;
 
+	lru->memcg_aware = memcg_aware;
+
 	if (!memcg_aware)
 		return 0;

--
2.21.0

2019-05-22 09:21:21

by Jiri Slaby

Subject: [PATCH -resend v2] memcg: make it work on sparse non-0-node systems

We have a single node system with node 0 disabled:
Scanning NUMA topology in Northbridge 24
Number of physical nodes 2
Skipping disabled node 0
Node 1 MemBase 0000000000000000 Limit 00000000fbff0000
NODE_DATA(1) allocated [mem 0xfbfda000-0xfbfeffff]

This causes crashes in memcg when the system boots:
BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
#PF error: [normal kernel read fault]
...
RIP: 0010:list_lru_add+0x94/0x170
...
Call Trace:
d_lru_add+0x44/0x50
dput.part.34+0xfc/0x110
__fput+0x108/0x230
task_work_run+0x9f/0xc0
exit_to_usermode_loop+0xf5/0x100

It is reproducible as far back as 4.12. I did not try older kernels. You
have to have a new enough systemd, e.g. 241 (the reason is unknown -- it
was not investigated). It cannot be reproduced with systemd 234.

The system crashes because the size of the lru array is never updated in
memcg_update_all_list_lrus, so the reads go past the zero-sized array,
dereferencing random memory.

The root cause is the list_lru_memcg_aware check in the list_lru code:
it assumes node 0 is always present, which is not true on some systems,
as can be seen above.

So fix this by avoiding the check on node 0 and instead remembering the
memcg-awareness in a bool flag in struct list_lru.

[v2] use the idea proposed by Vladimir -- the bool flag.

Signed-off-by: Jiri Slaby <[email protected]>
Fixes: 60d3fd32a7a9 ("list_lru: introduce per-memcg lists")
Cc: Johannes Weiner <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Suggested-by: Vladimir Davydov <[email protected]>
Acked-by: Vladimir Davydov <[email protected]>
Reviewed-by: Shakeel Butt <[email protected]>
Cc: <[email protected]>
Cc: <[email protected]>
Cc: <[email protected]>
Cc: Raghavendra K T <[email protected]>
---

This is only a resend. I did not send it akpm's way previously.

include/linux/list_lru.h | 1 +
mm/list_lru.c | 8 +++-----
2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/include/linux/list_lru.h b/include/linux/list_lru.h
index aa5efd9351eb..d5ceb2839a2d 100644
--- a/include/linux/list_lru.h
+++ b/include/linux/list_lru.h
@@ -54,6 +54,7 @@ struct list_lru {
 #ifdef CONFIG_MEMCG_KMEM
 	struct list_head list;
 	int shrinker_id;
+	bool memcg_aware;
 #endif
 };

diff --git a/mm/list_lru.c b/mm/list_lru.c
index 0730bf8ff39f..d3b538146efd 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -37,11 +37,7 @@ static int lru_shrinker_id(struct list_lru *lru)
 
 static inline bool list_lru_memcg_aware(struct list_lru *lru)
 {
-	/*
-	 * This needs node 0 to be always present, even
-	 * in the systems supporting sparse numa ids.
-	 */
-	return !!lru->node[0].memcg_lrus;
+	return lru->memcg_aware;
 }
 
 static inline struct list_lru_one *
@@ -451,6 +447,8 @@ static int memcg_init_list_lru(struct list_lru *lru, bool memcg_aware)
 {
 	int i;
 
+	lru->memcg_aware = memcg_aware;
+
 	if (!memcg_aware)
 		return 0;

--
2.21.0