2019-01-25 16:56:49

by Johannes Weiner

Subject: Re: + memcg-do-not-report-racy-no-eligible-oom-tasks.patch added to -mm tree

On Wed, Jan 09, 2019 at 11:03:06AM -0800, [email protected] wrote:
>
> The patch titled
> Subject: memcg: do not report racy no-eligible OOM tasks
> has been added to the -mm tree. Its filename is
> memcg-do-not-report-racy-no-eligible-oom-tasks.patch
>
> This patch should soon appear at
> http://ozlabs.org/~akpm/mmots/broken-out/memcg-do-not-report-racy-no-eligible-oom-tasks.patch
> and later at
> http://ozlabs.org/~akpm/mmotm/broken-out/memcg-do-not-report-racy-no-eligible-oom-tasks.patch
>
> Before you just go and hit "reply", please:
> a) Consider who else should be cc'ed
> b) Prefer to cc a suitable mailing list as well
> c) Ideally: find the original patch on the mailing list and do a
> reply-to-all to that, adding suitable additional cc's
>
> *** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
>
> The -mm tree is included into linux-next and is updated
> there every 3-4 working days
>
> ------------------------------------------------------
> From: Michal Hocko <[email protected]>
> Subject: memcg: do not report racy no-eligible OOM tasks
>
> Tetsuo has reported [1] that a single process group memcg might easily
> swamp the log with no-eligible oom victim reports due to a race between
> the memcg charge path and the oom_reaper.
>
> Thread 1                    Thread 2                   oom_reaper
> try_charge                  try_charge
>   mem_cgroup_out_of_memory
>     mutex_lock(oom_lock)
>                               mem_cgroup_out_of_memory
>                                 mutex_lock(oom_lock)
>     out_of_memory
>       select_bad_process
>         oom_kill_process(current)
>         wake_oom_reaper
>                                                          oom_reap_task
>                                                          MMF_OOM_SKIP->victim
>     mutex_unlock(oom_lock)
>                                 out_of_memory
>                                   select_bad_process # no task
>
> If Thread1 hadn't raced it would have bailed out of try_charge and
> forced the charge. We can achieve the same by checking
> tsk_is_oom_victim inside the oom_lock, and thereby close the race.
>
> [1] http://lkml.kernel.org/r/[email protected]
> Link: http://lkml.kernel.org/r/[email protected]
> Signed-off-by: Michal Hocko <[email protected]>
> Cc: Tetsuo Handa <[email protected]>
> Cc: Johannes Weiner <[email protected]>
> Signed-off-by: Andrew Morton <[email protected]>

It looks like this problem is happening in production systems:

https://www.spinics.net/lists/cgroups/msg21268.html

where the threads don't exit because they are trapped writing out the
oom messages to a slow console (running the reproducer from this email
thread triggers the oom flooding).

So IMO we should put this into 5.0 and add:

Fixes: 29ef680ae7c2 ("memcg, oom: move out_of_memory back to the charge path")
Fixes: 3100dab2aa09 ("mm: memcontrol: print proper OOM header when no eligible victim left")
Cc: [email protected] # 4.19+

> --- a/mm/memcontrol.c~memcg-do-not-report-racy-no-eligible-oom-tasks
> +++ a/mm/memcontrol.c
> @@ -1387,10 +1387,22 @@ static bool mem_cgroup_out_of_memory(str
> 		.gfp_mask = gfp_mask,
> 		.order = order,
> 	};
> -	bool ret;
> +	bool ret = true;

Should this be false if we skip the oom kill, btw? Either will result
in a forced charge: false will do so right away, true will retry once
and then trigger the victim check in try_charge().

It's just weird to return true when we didn't do what the caller asked
us to do.

> 	mutex_lock(&oom_lock);
> +
> +	/*
> +	 * Multi-threaded tasks might race with the oom_reaper and gain
> +	 * MMF_OOM_SKIP before reaching out_of_memory, which can lead
> +	 * to an out_of_memory failure if the task is the last one in
> +	 * the memcg, which would be a false positive failure reported.
> +	 */
> +	if (tsk_is_oom_victim(current))
> +		goto unlock;
> +
> 	ret = out_of_memory(&oc);
> +
> +unlock:
> 	mutex_unlock(&oom_lock);
> 	return ret;


2019-01-25 17:26:49

by Michal Hocko

Subject: Re: + memcg-do-not-report-racy-no-eligible-oom-tasks.patch added to -mm tree

On Fri 25-01-19 11:56:24, Johannes Weiner wrote:
> On Wed, Jan 09, 2019 at 11:03:06AM -0800, [email protected] wrote:
> > [...]
>
> It looks like this problem is happening in production systems:
>
> https://www.spinics.net/lists/cgroups/msg21268.html
>
> where the threads don't exit because they are trapped writing out the
> oom messages to a slow console (running the reproducer from this email
> thread triggers the oom flooding).
>
> So IMO we should put this into 5.0 and add:

Please note that Tetsuo has found out that this will not work with the
CLONE_VM without CLONE_SIGHAND cases and his http://lkml.kernel.org/r/[email protected]
should handle this case as well. I've only had objections to the
changelog but other than that the patch looked sensible to me.
--
Michal Hocko
SUSE Labs

2019-01-25 18:34:07

by Johannes Weiner

Subject: Re: + memcg-do-not-report-racy-no-eligible-oom-tasks.patch added to -mm tree

On Fri, Jan 25, 2019 at 06:24:16PM +0100, Michal Hocko wrote:
> On Fri 25-01-19 11:56:24, Johannes Weiner wrote:
> > On Wed, Jan 09, 2019 at 11:03:06AM -0800, [email protected] wrote:
> > > [...]
> >
> > It looks like this problem is happening in production systems:
> >
> > https://www.spinics.net/lists/cgroups/msg21268.html
> >
> > where the threads don't exit because they are trapped writing out the
> > oom messages to a slow console (running the reproducer from this email
> > thread triggers the oom flooding).
> >
> > So IMO we should put this into 5.0 and add:
>
> Please note that Tetsuo has found out that this will not work with the
> CLONE_VM without CLONE_SIGHAND cases and his http://lkml.kernel.org/r/[email protected]
> should handle this case as well. I've only had objections to the
> changelog but other than that the patch looked sensible to me.

I see. Yeah that looks reasonable to me too.

Tetsuo, could you include the Fixes: and CC: stable in your patch?

2019-01-26 01:10:00

by Tetsuo Handa

Subject: Re: + memcg-do-not-report-racy-no-eligible-oom-tasks.patch added to -mm tree

On 2019/01/26 3:33, Johannes Weiner wrote:
> On Fri, Jan 25, 2019 at 06:24:16PM +0100, Michal Hocko wrote:
>> On Fri 25-01-19 11:56:24, Johannes Weiner wrote:
>>> It looks like this problem is happening in production systems:
>>>
>>> https://www.spinics.net/lists/cgroups/msg21268.html
>>>
>>> where the threads don't exit because they are trapped writing out the
>>> oom messages to a slow console (running the reproducer from this email
>>> thread triggers the oom flooding).
>>>
>>> So IMO we should put this into 5.0 and add:
>>
>> Please note that Tetsuo has found out that this will not work with the
>> CLONE_VM without CLONE_SIGHAND cases and his http://lkml.kernel.org/r/[email protected]
>> should handle this case as well. I've only had objections to the
>> changelog but other than that the patch looked sensible to me.
>
> I see. Yeah that looks reasonable to me too.
>
> Tetsuo, could you include the Fixes: and CC: stable in your patch?
>

Andrew Morton is still offline. Do we want to ask Linus Torvalds?

2019-01-28 18:27:58

by Andrew Morton

Subject: Re: + memcg-do-not-report-racy-no-eligible-oom-tasks.patch added to -mm tree

On Fri, 25 Jan 2019 18:24:16 +0100 Michal Hocko <[email protected]> wrote:

> > > [...]
> >
> > It looks like this problem is happening in production systems:
> >
> > https://www.spinics.net/lists/cgroups/msg21268.html
> >
> > where the threads don't exit because they are trapped writing out the
> > oom messages to a slow console (running the reproducer from this email
> > thread triggers the oom flooding).
> >
> > So IMO we should put this into 5.0 and add:
>
> Please note that Tetsuo has found out that this will not work with the
> CLONE_VM without CLONE_SIGHAND cases and his http://lkml.kernel.org/r/[email protected]
> should handle this case as well. I've only had objections to the
> changelog but other than that the patch looked sensible to me.

So I think you're saying that

mm-oom-marks-all-killed-tasks-as-oom-victims.patch
and
memcg-do-not-report-racy-no-eligible-oom-tasks.patch

should be dropped and that "[PATCH v2] memcg: killed threads should not
invoke memcg OOM killer" should be redone with some changelog
alterations and should be merged instead?


2019-01-28 18:44:25

by Michal Hocko

Subject: Re: + memcg-do-not-report-racy-no-eligible-oom-tasks.patch added to -mm tree

On Mon 28-01-19 10:26:16, Andrew Morton wrote:
> On Fri, 25 Jan 2019 18:24:16 +0100 Michal Hocko <[email protected]> wrote:
>
> > > [...]
> >
> > Please note that Tetsuo has found out that this will not work with the
> > CLONE_VM without CLONE_SIGHAND cases and his http://lkml.kernel.org/r/[email protected]
> > should handle this case as well. I've only had objections to the
> > changelog but other than that the patch looked sensible to me.
>
> So I think you're saying that
>
> mm-oom-marks-all-killed-tasks-as-oom-victims.patch
> and
> memcg-do-not-report-racy-no-eligible-oom-tasks.patch
>
> should be dropped and that "[PATCH v2] memcg: killed threads should not
> invoke memcg OOM killer" should be redone with some changelog
> alterations and should be merged instead?

Yup.

--
Michal Hocko
SUSE Labs