2024-01-09 09:15:56

by Jianfeng Wang

Subject: [PATCH] mm, oom: Add lru_add_drain() in __oom_reap_task_mm()

The oom_reaper tries to reclaim additional memory owned by the oom
victim. In __oom_reap_task_mm(), it uses mmu_gather for batched page
freeing. After oom_reaper was added, the mmu_gather feature gained
CONFIG_MMU_GATHER_NO_GATHER (see commit 952a31c9e6fa ("asm-generic/tlb:
Introduce CONFIG_HAVE_MMU_GATHER_NO_GATHER=y")), an option to skip
batched page freeing. If that option is set, tlb_batch_pages_flush(),
which is responsible for calling lru_add_drain(), is skipped during
tlb_finish_mmu(). Without it, pages could still be held by per-cpu
fbatches rather than being freed.
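
For reference, the simplified call chain with batching enabled (i.e.
CONFIG_MMU_GATHER_NO_GATHER unset) looks roughly like the sketch below;
with the option set, the tlb_batch_pages_flush() step, and with it the
lru_add_drain() call, is gone (abbreviated, details vary by kernel
version):

  tlb_finish_mmu()
    tlb_flush_mmu()
      tlb_flush_mmu_free()
        tlb_batch_pages_flush()     /* compiled out with CONFIG_MMU_GATHER_NO_GATHER */
          free_pages_and_swap_cache()
            lru_add_drain()         /* drains the local CPU's fbatches */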

This fix adds lru_add_drain() before the mmu_gather sequence starts.
This makes the code consistent with other places where mmu_gather is
used for freeing pages.

Signed-off-by: Jianfeng Wang <[email protected]>
---
mm/oom_kill.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 9e6071fde34a..e2fcf4f062ea 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -537,6 +537,7 @@ static bool __oom_reap_task_mm(struct mm_struct *mm)
struct mmu_notifier_range range;
struct mmu_gather tlb;

+ lru_add_drain();
mmu_notifier_range_init(&range, MMU_NOTIFY_UNMAP, 0,
mm, vma->vm_start,
vma->vm_end);
--
2.42.1



2024-01-10 08:47:06

by Michal Hocko

Subject: Re: [PATCH] mm, oom: Add lru_add_drain() in __oom_reap_task_mm()

On Tue 09-01-24 01:15:11, Jianfeng Wang wrote:
> The oom_reaper tries to reclaim additional memory owned by the oom
> victim. In __oom_reap_task_mm(), it uses mmu_gather for batched page
> free. After oom_reaper was added, mmu_gather feature introduced
> CONFIG_MMU_GATHER_NO_GATHER (in 'commit 952a31c9e6fa ("asm-generic/tlb:
> Introduce CONFIG_HAVE_MMU_GATHER_NO_GATHER=y")', an option to skip batched
> page free. If set, tlb_batch_pages_flush(), which is responsible for
> calling lru_add_drain(), is skipped during tlb_finish_mmu(). Without it,
> pages could still be held by per-cpu fbatches rather than be freed.
>
> This fix adds lru_add_drain() prior to mmu_gather. This makes the code
> consistent with other cases where mmu_gather is used for freeing pages.

Does this fix any actual problem, or is this purely a code consistency
thing? I am asking because it doesn't make much sense to me TBH. LRU
cache draining is usually important when we want to ensure that cached
pages are put onto the LRU to be dealt with, because otherwise the MM
code wouldn't be able to deal with them. The OOM reaper doesn't
necessarily run on the same CPU as the oom victim, so draining on the
local CPU doesn't necessarily do anything for the victim's pages.

While this patch is not harmful, I really do not see much point in
adding the local draining here. Could you clarify please?

> Signed-off-by: Jianfeng Wang <[email protected]>
> ---
> mm/oom_kill.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index 9e6071fde34a..e2fcf4f062ea 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -537,6 +537,7 @@ static bool __oom_reap_task_mm(struct mm_struct *mm)
> struct mmu_notifier_range range;
> struct mmu_gather tlb;
>
> + lru_add_drain();
> mmu_notifier_range_init(&range, MMU_NOTIFY_UNMAP, 0,
> mm, vma->vm_start,
> vma->vm_end);
> --
> 2.42.1
>

--
Michal Hocko
SUSE Labs

2024-01-10 19:02:27

by Jianfeng Wang

Subject: Re: [PATCH] mm, oom: Add lru_add_drain() in __oom_reap_task_mm()

On 1/10/24 12:46 AM, Michal Hocko wrote:
> On Tue 09-01-24 01:15:11, Jianfeng Wang wrote:
>> The oom_reaper tries to reclaim additional memory owned by the oom
>> victim. In __oom_reap_task_mm(), it uses mmu_gather for batched page
>> free. After oom_reaper was added, mmu_gather feature introduced
>> CONFIG_MMU_GATHER_NO_GATHER (in 'commit 952a31c9e6fa ("asm-generic/tlb:
>> Introduce CONFIG_HAVE_MMU_GATHER_NO_GATHER=y")', an option to skip batched
>> page free. If set, tlb_batch_pages_flush(), which is responsible for
>> calling lru_add_drain(), is skipped during tlb_finish_mmu(). Without it,
>> pages could still be held by per-cpu fbatches rather than be freed.
>>
>> This fix adds lru_add_drain() prior to mmu_gather. This makes the code
>> consistent with other cases where mmu_gather is used for freeing pages.
>
> Does this fix any actual problem or is this pure code consistency thing?
> I am asking because it doesn't make much sense to me TBH, LRU cache
> draining is usually important when we want to ensure that cached pages
> are put to LRU to be dealt with because otherwise the MM code wouldn't
> be able to deal with them. OOM reaper doesn't necessarily run on the
> same CPU as the oom victim so draining on a local CPU doesn't
> necessarily do anything for the victim's pages.
>
> While this patch is not harmful I really do not see much point in adding
> the local draining here. Could you clarify please?
>
It targets the case described in the patch's commit message: the OOM
killer thinks it has 'reclaimed' pages while those pages are still
held by per-cpu fbatches with an extra reference count.

I admit that the pages may sit on a different core (or cores). Given
that making remote calls to all CPUs with lru_add_drain_all() is
expensive, this line of code can be helpful if it happens to give back
a few pages to the system right away without that overhead, especially
when OOM is involved. Plus, it also makes the code consistent with
other places that use the mmu_gather feature to free pages in batches.
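
To spell out the "held by per-cpu fbatches with a ref count" part, this
is roughly what the add-to-LRU side in mm/swap.c does (abbreviated
sketch, not verbatim; the extra reference taken here is only dropped
once some path drains that CPU's fbatches):

void folio_add_lru(struct folio *folio)
{
	struct folio_batch *fbatch;

	folio_get(folio);	/* extra ref while the folio sits in the batch */
	local_lock(&cpu_fbatches.lock);
	fbatch = this_cpu_ptr(&cpu_fbatches.lru_add);
	/* only moved to the LRU (and the ref dropped) when the batch
	 * fills up or this CPU's fbatches get drained */
	folio_batch_add_and_move(fbatch, folio, lru_add_fn);
	local_unlock(&cpu_fbatches.lock);
}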

--JW

>> Signed-off-by: Jianfeng Wang <[email protected]>
>> ---
>> mm/oom_kill.c | 1 +
>> 1 file changed, 1 insertion(+)
>>
>> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
>> index 9e6071fde34a..e2fcf4f062ea 100644
>> --- a/mm/oom_kill.c
>> +++ b/mm/oom_kill.c
>> @@ -537,6 +537,7 @@ static bool __oom_reap_task_mm(struct mm_struct *mm)
>> struct mmu_notifier_range range;
>> struct mmu_gather tlb;
>>
>> + lru_add_drain();
>> mmu_notifier_range_init(&range, MMU_NOTIFY_UNMAP, 0,
>> mm, vma->vm_start,
>> vma->vm_end);
>> --
>> 2.42.1
>>
>

2024-01-11 08:46:37

by Michal Hocko

Subject: Re: [PATCH] mm, oom: Add lru_add_drain() in __oom_reap_task_mm()

On Wed 10-01-24 11:02:03, Jianfeng Wang wrote:
> On 1/10/24 12:46 AM, Michal Hocko wrote:
> > On Tue 09-01-24 01:15:11, Jianfeng Wang wrote:
> >> The oom_reaper tries to reclaim additional memory owned by the oom
> >> victim. In __oom_reap_task_mm(), it uses mmu_gather for batched page
> >> free. After oom_reaper was added, mmu_gather feature introduced
> >> CONFIG_MMU_GATHER_NO_GATHER (in 'commit 952a31c9e6fa ("asm-generic/tlb:
> >> Introduce CONFIG_HAVE_MMU_GATHER_NO_GATHER=y")', an option to skip batched
> >> page free. If set, tlb_batch_pages_flush(), which is responsible for
> >> calling lru_add_drain(), is skipped during tlb_finish_mmu(). Without it,
> >> pages could still be held by per-cpu fbatches rather than be freed.
> >>
> >> This fix adds lru_add_drain() prior to mmu_gather. This makes the code
> >> consistent with other cases where mmu_gather is used for freeing pages.
> >
> > Does this fix any actual problem or is this pure code consistency thing?
> > I am asking because it doesn't make much sense to me TBH, LRU cache
> > draining is usually important when we want to ensure that cached pages
> > are put to LRU to be dealt with because otherwise the MM code wouldn't
> > be able to deal with them. OOM reaper doesn't necessarily run on the
> > same CPU as the oom victim so draining on a local CPU doesn't
> > necessarily do anything for the victim's pages.
> >
> > While this patch is not harmful I really do not see much point in adding
> > the local draining here. Could you clarify please?
> >
> It targets the case described in the patch's commit message: oom_killer
> thinks that it 'reclaims' pages while pages are still held by per-cpu
> fbatches with a ref count.
>
> I admit that pages may sit on a different core(s). Given that
> doing remote calls to all CPUs with lru_add_drain_all() is expensive,
> this line of code can be helpful if it happens to give back a few pages
> to the system right away without the overhead, especially when oom is
> involved. Plus, it also makes the code consistent with other places
> using mmu_gather feature to free pages in batch.

I would argue that consistency is the biggest problem of this patch. It
tries to follow a pattern that is just not really correct. First, it
operates on a random CPU from the oom victim's perspective, and second,
it doesn't really block any unmapping operation, and unmapping is the
main purpose of the reaper. Sure, it frees a lot of unmapped memory,
but if there are a couple of pages that cannot be freed immediately
because they are sitting in per-cpu LRU caches, then this is not a deal
breaker. As you have noted, those pages might be sitting in any per-cpu
cache.

So I do not really see that as a good justification. People will follow
that pattern even more and spread lru_add_drain() to other random
places.

Unless you can show any actual runtime effect of this patch then I think
it shouldn't be merged.

--
Michal Hocko
SUSE Labs

2024-01-11 18:55:58

by Jianfeng Wang

Subject: Re: [PATCH] mm, oom: Add lru_add_drain() in __oom_reap_task_mm()


On 1/11/24 12:46 AM, Michal Hocko wrote:
> On Wed 10-01-24 11:02:03, Jianfeng Wang wrote:
>> On 1/10/24 12:46 AM, Michal Hocko wrote:
>>> On Tue 09-01-24 01:15:11, Jianfeng Wang wrote:
>>>> The oom_reaper tries to reclaim additional memory owned by the oom
>>>> victim. In __oom_reap_task_mm(), it uses mmu_gather for batched page
>>>> free. After oom_reaper was added, mmu_gather feature introduced
>>>> CONFIG_MMU_GATHER_NO_GATHER (in 'commit 952a31c9e6fa ("asm-generic/tlb:
>>>> Introduce CONFIG_HAVE_MMU_GATHER_NO_GATHER=y")', an option to skip batched
>>>> page free. If set, tlb_batch_pages_flush(), which is responsible for
>>>> calling lru_add_drain(), is skipped during tlb_finish_mmu(). Without it,
>>>> pages could still be held by per-cpu fbatches rather than be freed.
>>>>
>>>> This fix adds lru_add_drain() prior to mmu_gather. This makes the code
>>>> consistent with other cases where mmu_gather is used for freeing pages.
>>>
>>> Does this fix any actual problem or is this pure code consistency thing?
>>> I am asking because it doesn't make much sense to me TBH, LRU cache
>>> draining is usually important when we want to ensure that cached pages
>>> are put to LRU to be dealt with because otherwise the MM code wouldn't
>>> be able to deal with them. OOM reaper doesn't necessarily run on the
>>> same CPU as the oom victim so draining on a local CPU doesn't
>>> necessarily do anything for the victim's pages.
>>>
>>> While this patch is not harmful I really do not see much point in adding
>>> the local draining here. Could you clarify please?
>>>
>> It targets the case described in the patch's commit message: oom_killer
>> thinks that it 'reclaims' pages while pages are still held by per-cpu
>> fbatches with a ref count.
>>
>> I admit that pages may sit on a different core(s). Given that
>> doing remote calls to all CPUs with lru_add_drain_all() is expensive,
>> this line of code can be helpful if it happens to give back a few pages
>> to the system right away without the overhead, especially when oom is
>> involved. Plus, it also makes the code consistent with other places
>> using mmu_gather feature to free pages in batch.
>
> I would argue that consistency the biggest problem of this patch. It
> tries to follow a pattern that is just not really correct. First it
> operates on a random CPU from the oom victim perspective and second it
> doesn't really block any unmapping operation and that is the main
> purpose of the reaper. Sure it frees a lot of unmapped memory but if
> there are couple of pages that cannot be freed imeediately because they
> are sitting on a per-cpu LRU caches then this is not a deal breaker. As
> you have noted those pages might be sitting on any per-cpu cache.
>
> So I do not really see that as a good justification. People will follow
> that pattern even more and spread lru_add_drain to other random places.
>
> Unless you can show any actual runtime effect of this patch then I think
> it shouldn't be merged.
>

Thanks for raising your concern.
I'd call it a trade-off rather than "not really correct". Look at
unmap_region() / free_pages_and_swap_cache(), written by Linus. These
follow the same pattern, which indicates that this trade-off (i.e.
draining the local CPU vs. draining all CPUs vs. not draining at all)
has been made the same way in the past. I don't have a specific runtime
effect to provide, except that it will free tens of kB of pages
immediately during OOM.
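
For concreteness, the pattern I mean is the local drain done before the
references are dropped (simplified from mm/swap_state.c, not verbatim;
the exact signature has changed across versions):

void free_pages_and_swap_cache(struct page **pages, int nr)
{
	int i;

	lru_add_drain();		/* drain the local CPU first... */
	for (i = 0; i < nr; i++)
		free_swap_cache(pages[i]);
	release_pages(pages, nr);	/* ...then drop the references */
}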

2024-01-11 21:54:13

by Andrew Morton

Subject: Re: [PATCH] mm, oom: Add lru_add_drain() in __oom_reap_task_mm()

On Thu, 11 Jan 2024 10:54:45 -0800 Jianfeng Wang <[email protected]> wrote:

>
> > Unless you can show any actual runtime effect of this patch then I think
> > it shouldn't be merged.
> >
>
> Thanks for raising your concern.
> I'd call it a trade-off rather than "not really correct". Look at
> unmap_region() / free_pages_and_swap_cache() written by Linus. These are in
> favor of this pattern, which indicates that the trade-off (i.e. draining
> local CPU or draining all CPUs or no draining at all) had been made in the
> same way in the past. I don't have a specific runtime effect to provide,
> except that it will free 10s kB pages immediately during OOM.

I don't think it's necessary to run lru_add_drain() for each vma. Once
we've done it once, it can be skipped for additional vmas.

That's pretty minor because the second and successive calls will be
cheap. But it becomes much more significant if we switch to
lru_add_drain_all(), which sounds like what we should be doing here.
Is it possible?
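
For the first point, something like the below is what I mean (untested
sketch against the current mm/oom_kill.c; the per-vma body stays as it
is today):

static bool __oom_reap_task_mm(struct mm_struct *mm)
{
	struct vm_area_struct *vma;
	bool ret = true;
	VMA_ITERATOR(vmi, mm, 0);

	set_bit(MMF_UNSTABLE, &mm->flags);

	lru_add_drain();	/* once, before walking the vmas */

	for_each_vma(vmi, vma) {
		/* existing per-vma mmu_notifier/mmu_gather/unmap code,
		 * unchanged */
	}
	return ret;
}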


2024-01-12 00:09:53

by Jianfeng Wang

Subject: Re: [External] : Re: [PATCH] mm, oom: Add lru_add_drain() in __oom_reap_task_mm()



On 1/11/24 1:54 PM, Andrew Morton wrote:
> On Thu, 11 Jan 2024 10:54:45 -0800 Jianfeng Wang <[email protected]> wrote:
>
>>
>>> Unless you can show any actual runtime effect of this patch then I think
>>> it shouldn't be merged.
>>>
>>
>> Thanks for raising your concern.
>> I'd call it a trade-off rather than "not really correct". Look at
>> unmap_region() / free_pages_and_swap_cache() written by Linus. These are in
>> favor of this pattern, which indicates that the trade-off (i.e. draining
>> local CPU or draining all CPUs or no draining at all) had been made in the
>> same way in the past. I don't have a specific runtime effect to provide,
>> except that it will free 10s kB pages immediately during OOM.
>
> I don't think it's necessary to run lru_add_drain() for each vma. Once
> we've done it it once, it can be skipped for additional vmas.
>
Agreed.

> That's pretty minor because the second and successive calls will be
> cheap. But it becomes much more significant if we switch to
> lru_add_drain_all(), which sounds like what we should be doing here.
> Is it possible?
>
What do you both think of adding lru_add_drain_all() prior to the for loop?

2024-01-12 08:49:21

by Michal Hocko

Subject: Re: [External] : Re: [PATCH] mm, oom: Add lru_add_drain() in __oom_reap_task_mm()

On Thu 11-01-24 16:08:57, Jianfeng Wang wrote:
>
>
> On 1/11/24 1:54 PM, Andrew Morton wrote:
> > On Thu, 11 Jan 2024 10:54:45 -0800 Jianfeng Wang <[email protected]> wrote:
> >
> >>
> >>> Unless you can show any actual runtime effect of this patch then I think
> >>> it shouldn't be merged.
> >>>
> >>
> >> Thanks for raising your concern.
> >> I'd call it a trade-off rather than "not really correct". Look at
> >> unmap_region() / free_pages_and_swap_cache() written by Linus. These are in
> >> favor of this pattern, which indicates that the trade-off (i.e. draining
> >> local CPU or draining all CPUs or no draining at all) had been made in the
> >> same way in the past. I don't have a specific runtime effect to provide,
> >> except that it will free 10s kB pages immediately during OOM.

You are missing an important point. Those two calls are quite
different. oom_reaper unmaps memory after all the reclaim attempts have
failed, and that includes draining all sorts of caches along the way,
including the LRU pcp cache (look for lru_add_drain_all() in the
reclaim path).

> > I don't think it's necessary to run lru_add_drain() for each vma. Once
> > we've done it it once, it can be skipped for additional vmas.
> >
> Agreed.
>
> > That's pretty minor because the second and successive calls will be
> > cheap. But it becomes much more significant if we switch to
> > lru_add_drain_all(), which sounds like what we should be doing here.
> > Is it possible?
> >
> What do you both think of adding lru_add_drain_all() prior to the for loop?

lru_add_drain_all() relies on workqueues, and we absolutely do not want
to get the oom_reaper stuck just because the WQs are jammed. So no,
this is actually actively harmful!
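
To illustrate the dependency (heavily abbreviated from
__lru_add_drain_all() in mm/swap.c, not verbatim):

	for_each_online_cpu(cpu) {
		struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);

		if (cpu_needs_drain(cpu)) {
			INIT_WORK(work, lru_add_drain_per_cpu);
			/* needs a free worker on that CPU's mm_percpu_wq */
			queue_work_on(cpu, mm_percpu_wq, work);
			__cpumask_set_cpu(cpu, &has_work);
		}
	}

	for_each_cpu(cpu, &has_work)
		/* blocks until every scheduled drain has run */
		flush_work(&per_cpu(lru_add_drain_work, cpu));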

All that being said, I stand by my previous statement that this patch
is not doing anything measurably useful. Prove me wrong; otherwise I am
against merging a "just for consistency" patch. Really, we should go
and re-evaluate the existing local lru draining callers. I wouldn't be
surprised if we removed some of them.

--
Michal Hocko
SUSE Labs

2024-01-12 21:43:14

by Minchan Kim

Subject: Re: [External] : Re: [PATCH] mm, oom: Add lru_add_drain() in __oom_reap_task_mm()

On Fri, Jan 12, 2024 at 09:49:08AM +0100, Michal Hocko wrote:
> On Thu 11-01-24 16:08:57, Jianfeng Wang wrote:
> >
> >
> > On 1/11/24 1:54 PM, Andrew Morton wrote:
> > > On Thu, 11 Jan 2024 10:54:45 -0800 Jianfeng Wang <[email protected]> wrote:
> > >
> > >>
> > >>> Unless you can show any actual runtime effect of this patch then I think
> > >>> it shouldn't be merged.
> > >>>
> > >>
> > >> Thanks for raising your concern.
> > >> I'd call it a trade-off rather than "not really correct". Look at
> > >> unmap_region() / free_pages_and_swap_cache() written by Linus. These are in
> > >> favor of this pattern, which indicates that the trade-off (i.e. draining
> > >> local CPU or draining all CPUs or no draining at all) had been made in the
> > >> same way in the past. I don't have a specific runtime effect to provide,
> > >> except that it will free 10s kB pages immediately during OOM.
>
> You are missing an important point. Those two calls are quite different.
> oom_reaper unmaps memory after all the reclaim attempts have failed.
> That includes draining all sorts of caches on the way. Including
> draining LRU pcp cache (look for lru_add_drain_all in the reclaim path).
>
> > > I don't think it's necessary to run lru_add_drain() for each vma. Once
> > > we've done it it once, it can be skipped for additional vmas.
> > >
> > Agreed.
> >
> > > That's pretty minor because the second and successive calls will be
> > > cheap. But it becomes much more significant if we switch to
> > > lru_add_drain_all(), which sounds like what we should be doing here.
> > > Is it possible?
> > >
> > What do you both think of adding lru_add_drain_all() prior to the for loop?
>
> lru_add_drain_all relies on WQs. And we absolutely do not want to get
> oom_reaper stuck just because all the WQ is jammed. So no, this is
> actually actively harmful!

I completely agree. The oom_reap_task_mm() function is also used for
process_mrelease, which is a critical path for releasing memory on
Android and is typically used under system pressure (not only memory
pressure but also CPU pressure at the same time). The
lru_add_drain_all() function can take a long time to finish because
Android is susceptible to priority inversion among processes.

A better idea may be to enable remote draining for lru_add_drain_all(),
analogous to the recent PCP modifications.