2011-05-20 16:49:39

by Minchan Kim

Subject: Re: Unending loop in __alloc_pages_slowpath following OOM-kill; rfc: patch.

On Thu, May 19, 2011 at 07:29:01AM +0900, Minchan Kim wrote:
> Hi Andrew,
>
> On Wed, May 18, 2011 at 12:49 AM, Andrew Barry <[email protected]> wrote:
> > On 05/17/2011 05:34 AM, Minchan Kim wrote:
> >> On Sat, May 14, 2011 at 6:31 AM, Andrew Barry <[email protected]> wrote:
> >>> I believe I found a problem in __alloc_pages_slowpath, which allows a process to
> >>> get stuck endlessly looping, even when lots of memory is available.
> >>>
> >>> Running an I/O and memory intensive stress-test I see a 0-order page allocation
> >>> with __GFP_IO and __GFP_WAIT, running on a system with very little free memory.
> >>> Right about the same time that the stress-test gets killed by the OOM-killer,
> >>> the utility trying to allocate memory gets stuck in __alloc_pages_slowpath even
> >>> though most of the system's memory was freed by the oom-kill of the stress-test.
> >>>
> >>> The utility ends up looping from the rebalance label down through the
> >>> wait_iff_congested continuously. Because order=0, __alloc_pages_direct_compact
> >>> skips the call to get_page_from_freelist. Because all of the reclaimable memory
> >>> on the system has already been reclaimed, __alloc_pages_direct_reclaim skips the
> >>> call to get_page_from_freelist. Since there is no __GFP_FS flag, the block with
> >>> __alloc_pages_may_oom is skipped. The loop hits the wait_iff_congested, then
> >>> jumps back to rebalance without ever trying to get_page_from_freelist. This loop
> >>> repeats infinitely.
> >>>
> >>> Is there a reason that this loop is set up this way for 0 order allocations? I
> >>> applied the below patch, and the problem corrects itself. Does anyone have any
> >>> thoughts on the patch, or on a better way to address this situation?
> >>>
> >>> The test case is pretty pathological. Running a mix of I/O stress-tests that do
> >>> a lot of fork() and consume all of the system memory, I can pretty reliably hit
> >>> this on 600 nodes, in about 12 hours. 32GB/node.
> >>>
> >>
> >> It's amazing.
> >> I think it's _very_ rare, but it's possible if the test program killed by
> >> the OOM killer has only anonymous pages and the allocating task tries to
> >> allocate an order-0 page with GFP_NOFS.
> >
> > Unfortunately very rare is a subjective thing. We have been hitting it a couple
> > times a week in our test lab.
>
> Okay.
>
> >
> >> When the [in]active lists suddenly become empty (though I am not sure how
> >> that situation comes about) and we are reclaiming an order-0 page, neither
> >> compaction nor __alloc_pages_direct_reclaim works. Compaction doesn't work
> >> because this is order-0 reclaim. __alloc_pages_direct_reclaim would work
> >> only if we had LRU pages on the [in]active lists, but unfortunately we
> >> don't have any pages on the LRU lists.
> >> So, the last resort is the following code in do_try_to_free_pages:
> >>
> >>         /* top priority shrink_zones still had more to do? don't OOM, then */
> >>         if (scanning_global_lru(sc) && !all_unreclaimable(zonelist, sc))
> >>                 return 1;
> >>
> >> But it has a problem, too. all_unreclaimable() checks zone->all_unreclaimable,
> >> which is set based on the condition below:
> >>
> >> zone->pages_scanned < zone_reclaimable_pages(zone) * 6
> >>
> >> If the LRU lists are completely empty, shrink_zone() does nothing, so
> >> zone->pages_scanned stays zero. But as we know, zone_page_state() isn't
> >> exact because of per_cpu_pageset, so zone_reclaimable_pages() may return a
> >> positive value. As a result, zone_reclaimable() always returns true, which
> >> means kswapd never sets zone->all_unreclaimable. So the last resort above
> >> becomes a nop.
> >>
> >> In this case, the current allocation never gets a chance to call
> >> get_page_from_freelist(), as Andrew Barry said.
> >>
> >> Does it make sense?
> >> If so, how about this?
> >>
> >> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> >> index ebc7faa..4f64355 100644
> >> --- a/mm/page_alloc.c
> >> +++ b/mm/page_alloc.c
> >> @@ -2105,6 +2105,7 @@ restart:
> >>                 first_zones_zonelist(zonelist, high_zoneidx, NULL,
> >>                                         &preferred_zone);
> >>
> >> +rebalance:
> >>         /* This is the last chance, in general, before the goto nopage. */
> >>         page = get_page_from_freelist(gfp_mask, nodemask, order, zonelist,
> >>                         high_zoneidx, alloc_flags & ~ALLOC_NO_WATERMARKS,
> >> @@ -2112,7 +2113,6 @@ restart:
> >>         if (page)
> >>                 goto got_pg;
> >>
> >> -rebalance:
> >>         /* Allocate without watermarks if the context allows */
> >>         if (alloc_flags & ALLOC_NO_WATERMARKS) {
> >>                 page = __alloc_pages_high_priority(gfp_mask, order,
> >
> > I think your solution is simpler than my patch.
> > Thanks very much.
>
> You found the problem, and finding it is harder than fixing it, I think.
> So I think you should get the credit.
>
> Could you send the patch to akpm with Mel and me Cc'ed?
> (Maybe it's a candidate for -stable.)
> You can get my Reviewed-by.
>
> Thanks for the good bug reporting.

From 8bd3f16736548375238161d1bd85f7d7c381031f Mon Sep 17 00:00:00 2001
From: Minchan Kim <[email protected]>
Date: Sat, 21 May 2011 01:37:41 +0900
Subject: [PATCH] Prevent unending loop in __alloc_pages_slowpath

From: Andrew Barry <[email protected]>

I believe I found a problem in __alloc_pages_slowpath, which allows a process to
get stuck endlessly looping, even when lots of memory is available.

Running an I/O and memory intensive stress-test I see a 0-order page allocation
with __GFP_IO and __GFP_WAIT, running on a system with very little free memory.
Right about the same time that the stress-test gets killed by the OOM-killer,
the utility trying to allocate memory gets stuck in __alloc_pages_slowpath even
though most of the system's memory was freed by the oom-kill of the stress-test.

The utility ends up looping from the rebalance label down through the
wait_iff_congested continuously. Because order=0, __alloc_pages_direct_compact
skips the call to get_page_from_freelist. Because all of the reclaimable memory
on the system has already been reclaimed, __alloc_pages_direct_reclaim skips the
call to get_page_from_freelist. Since there is no __GFP_FS flag, the block with
__alloc_pages_may_oom is skipped. The loop hits the wait_iff_congested, then
jumps back to rebalance without ever trying to get_page_from_freelist. This loop
repeats infinitely.

The test case is pretty pathological. Running a mix of I/O stress-tests that do
a lot of fork() and consume all of the system memory, I can pretty reliably hit
this on 600 nodes, in about 12 hours. 32GB/node.

Signed-off-by: Andrew Barry <[email protected]>
Reviewed-by: Minchan Kim <[email protected]>
Cc: Mel Gorman <[email protected]>
---
mm/page_alloc.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3f8bce2..e78b324 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2064,6 +2064,7 @@ restart:
first_zones_zonelist(zonelist, high_zoneidx, NULL,
&preferred_zone);

+rebalance:
/* This is the last chance, in general, before the goto nopage. */
page = get_page_from_freelist(gfp_mask, nodemask, order, zonelist,
high_zoneidx, alloc_flags & ~ALLOC_NO_WATERMARKS,
@@ -2071,7 +2072,6 @@ restart:
if (page)
goto got_pg;

-rebalance:
/* Allocate without watermarks if the context allows */
if (alloc_flags & ALLOC_NO_WATERMARKS) {
page = __alloc_pages_high_priority(gfp_mask, order,
--
1.7.1
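
To make the failure mode concrete, below is a small user-space model of the
retry loop (hypothetical stub functions, not kernel code); it only encodes the
conditions described above: an order-0 allocation without __GFP_FS, empty LRU
lists, and freelists that were just refilled by the OOM kill.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-ins for the kernel steps; names and values are made up. */
static bool lru_is_empty = true;        /* everything reclaimable already reclaimed */
static bool freelist_has_pages = true;  /* the OOM kill refilled the freelists */

static bool get_page_from_freelist(void) { return freelist_has_pages; }
static bool direct_compact(int order)    { return order > 0 && get_page_from_freelist(); }
static bool direct_reclaim(void)         { return !lru_is_empty && get_page_from_freelist(); }
static bool may_oom(bool gfp_fs)         { return gfp_fs; }

/* Model of the slowpath retry loop; 'patched' retries the freelist at the
 * rebalance label, as the patch above does. */
static bool slowpath(int order, bool gfp_fs, bool patched, int max_iters)
{
	for (int i = 0; i < max_iters; i++) {
		if (patched && get_page_from_freelist())  /* rebalance: */
			return true;
		if (direct_compact(order))                /* skipped for order-0 */
			return true;
		if (direct_reclaim())                     /* no progress: LRU is empty */
			return true;
		if (may_oom(gfp_fs))                      /* skipped without __GFP_FS */
			return true;
		/* wait_iff_congested(); goto rebalance; */
	}
	return false;
}

int main(void)
{
	printf("unpatched: %s\n", slowpath(0, false, false, 1000) ? "allocated" : "still looping");
	printf("patched:   %s\n", slowpath(0, false, true, 1000)  ? "allocated" : "still looping");
	return 0;
}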


>
> > -Andrew
> >
> >
> >
> >
> >
> >
>
>
>
> --
> Kind regards,
> Minchan Kim

--
Kind regards,
Minchan Kim


2011-05-20 17:17:14

by Rik van Riel

Subject: Re: Unending loop in __alloc_pages_slowpath following OOM-kill; rfc: patch.

On 05/20/2011 12:49 PM, Minchan Kim wrote:

> From: Andrew Barry<[email protected]>
>
> I believe I found a problem in __alloc_pages_slowpath, which allows a process to
> get stuck endlessly looping, even when lots of memory is available.

> Signed-off-by: Andrew Barry<[email protected]>
> Reviewed-by: Minchan Kim<[email protected]>
> Cc: Mel Gorman<[email protected]>

Reviewed-by: Rik van Riel<[email protected]>

--
All rights reversed

2011-05-20 17:23:20

by Mel Gorman

Subject: Re: Unending loop in __alloc_pages_slowpath following OOM-kill; rfc: patch.

On Sat, May 21, 2011 at 01:49:24AM +0900, Minchan Kim wrote:
> <SNIP>
>
> From 8bd3f16736548375238161d1bd85f7d7c381031f Mon Sep 17 00:00:00 2001
> From: Minchan Kim <[email protected]>
> Date: Sat, 21 May 2011 01:37:41 +0900
> Subject: [PATCH] Prevent unending loop in __alloc_pages_slowpath
>
> From: Andrew Barry <[email protected]>
>
> I believe I found a problem in __alloc_pages_slowpath, which allows a process to
> get stuck endlessly looping, even when lots of memory is available.
>
> Running an I/O and memory intensive stress-test I see a 0-order page allocation
> with __GFP_IO and __GFP_WAIT, running on a system with very little free memory.
> Right about the same time that the stress-test gets killed by the OOM-killer,
> the utility trying to allocate memory gets stuck in __alloc_pages_slowpath even
> though most of the system's memory was freed by the oom-kill of the stress-test.
>
> The utility ends up looping from the rebalance label down through the
> wait_iff_congested continuously. Because order=0, __alloc_pages_direct_compact
> skips the call to get_page_from_freelist. Because all of the reclaimable memory
> on the system has already been reclaimed, __alloc_pages_direct_reclaim skips the
> call to get_page_from_freelist. Since there is no __GFP_FS flag, the block with
> __alloc_pages_may_oom is skipped. The loop hits the wait_iff_congested, then
> jumps back to rebalance without ever trying to get_page_from_freelist. This loop
> repeats infinitely.
>
> The test case is pretty pathological. Running a mix of I/O stress-tests that do
> a lot of fork() and consume all of the system memory, I can pretty reliably hit
> this on 600 nodes, in about 12 hours. 32GB/node.
>
> Signed-off-by: Andrew Barry <[email protected]>
> Reviewed-by: Minchan Kim <[email protected]>
> Cc: Mel Gorman <[email protected]>

Acked-by: Mel Gorman <[email protected]>

--
Mel Gorman
SUSE Labs

2011-05-24 04:55:10

by KOSAKI Motohiro

Subject: Re: Unending loop in __alloc_pages_slowpath following OOM-kill; rfc: patch.

>>From 8bd3f16736548375238161d1bd85f7d7c381031f Mon Sep 17 00:00:00 2001
> From: Minchan Kim <[email protected]>
> Date: Sat, 21 May 2011 01:37:41 +0900
> Subject: [PATCH] Prevent unending loop in __alloc_pages_slowpath
>
> From: Andrew Barry <[email protected]>
>
> I believe I found a problem in __alloc_pages_slowpath, which allows a process to
> get stuck endlessly looping, even when lots of memory is available.
>
> Running an I/O and memory intensive stress-test I see a 0-order page allocation
> with __GFP_IO and __GFP_WAIT, running on a system with very little free memory.
> Right about the same time that the stress-test gets killed by the OOM-killer,
> the utility trying to allocate memory gets stuck in __alloc_pages_slowpath even
> though most of the system's memory was freed by the oom-kill of the stress-test.
>
> The utility ends up looping from the rebalance label down through the
> wait_iff_congested continuously. Because order=0, __alloc_pages_direct_compact
> skips the call to get_page_from_freelist. Because all of the reclaimable memory
> on the system has already been reclaimed, __alloc_pages_direct_reclaim skips the
> call to get_page_from_freelist. Since there is no __GFP_FS flag, the block with
> __alloc_pages_may_oom is skipped. The loop hits the wait_iff_congested, then
> jumps back to rebalance without ever trying to get_page_from_freelist. This loop
> repeats infinitely.
>
> The test case is pretty pathological. Running a mix of I/O stress-tests that do
> a lot of fork() and consume all of the system memory, I can pretty reliably hit
> this on 600 nodes, in about 12 hours. 32GB/node.
>
> Signed-off-by: Andrew Barry <[email protected]>
> Reviewed-by: Minchan Kim <[email protected]>
> Cc: Mel Gorman <[email protected]>
> ---
> mm/page_alloc.c | 2 +-
> 1 files changed, 1 insertions(+), 1 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 3f8bce2..e78b324 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2064,6 +2064,7 @@ restart:
> first_zones_zonelist(zonelist, high_zoneidx, NULL,
> &preferred_zone);
>
> +rebalance:
> /* This is the last chance, in general, before the goto nopage. */
> page = get_page_from_freelist(gfp_mask, nodemask, order, zonelist,
> high_zoneidx, alloc_flags & ~ALLOC_NO_WATERMARKS,
> @@ -2071,7 +2072,6 @@ restart:
> if (page)
> goto got_pg;
>
> -rebalance:
> /* Allocate without watermarks if the context allows */
> if (alloc_flags & ALLOC_NO_WATERMARKS) {
> page = __alloc_pages_high_priority(gfp_mask, order,

I'm sorry I missed this thread long time.

In this case, I think we should call drain_all_pages(). then following
patch is better.
However I also think your patch is valuable. because while the task is
sleeping in wait_iff_congested(), an another task may free some pages.
thus, rebalance path should try to get free pages. iow, you makes sense.

So, I'd like to propose to merge both your and my patch.

Thanks.


From 2e77784668f6ca53d88ecb46aa6b99d9d0f33ffa Mon Sep 17 00:00:00 2001
From: KOSAKI Motohiro <[email protected]>
Date: Tue, 24 May 2011 13:41:57 +0900
Subject: [PATCH] vmscan: remove painful micro optimization

Currently, __alloc_pages_direct_reclaim() calls get_page_from_freelist()
only if try_to_free_pages() returns !0.

It's an unnecessary micro optimization because "return 0" means vmscan reached
priority 0 and didn't get any pages; iow, it's a really slow path. But it also
has a bad side effect: if we don't call drain_all_pages(), we have a chance of
getting stuck in an infinite loop.

This patch removes this bad and meaningless micro optimization.

Signed-off-by: KOSAKI Motohiro <[email protected]>
---
mm/page_alloc.c | 3 ---
1 files changed, 0 insertions(+), 3 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1572079..c41d488 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1950,9 +1950,6 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,

cond_resched();

- if (unlikely(!(*did_some_progress)))
- return NULL;
-
retry:
page = get_page_from_freelist(gfp_mask, nodemask, order,
zonelist, high_zoneidx,
--
1.7.3.1
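
For reference, drain_all_pages() in kernels of this vintage is roughly the
following (a trimmed sketch of mm/page_alloc.c); the on_each_cpu() call is
where the per-CPU IPI cost comes from:

/* Spill every CPU's pcp pages back into the buddy allocator. */
void drain_local_pages(void *arg)
{
	drain_pages(smp_processor_id());
}

void drain_all_pages(void)
{
	on_each_cpu(drain_local_pages, NULL, 1);	/* one IPI per online CPU */
}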



2011-05-24 05:45:30

by KOSAKI Motohiro

Subject: Re: Unending loop in __alloc_pages_slowpath following OOM-kill; rfc: patch.

(2011/05/24 13:54), KOSAKI Motohiro wrote:
>> >From 8bd3f16736548375238161d1bd85f7d7c381031f Mon Sep 17 00:00:00 2001
>> From: Minchan Kim <[email protected]>
>> Date: Sat, 21 May 2011 01:37:41 +0900
>> Subject: [PATCH] Prevent unending loop in __alloc_pages_slowpath
>>
>> From: Andrew Barry <[email protected]>
>>
>> I believe I found a problem in __alloc_pages_slowpath, which allows a process to
>> get stuck endlessly looping, even when lots of memory is available.
>>
>> Running an I/O and memory intensive stress-test I see a 0-order page allocation
>> with __GFP_IO and __GFP_WAIT, running on a system with very little free memory.
>> Right about the same time that the stress-test gets killed by the OOM-killer,
>> the utility trying to allocate memory gets stuck in __alloc_pages_slowpath even
>> though most of the system's memory was freed by the oom-kill of the stress-test.
>>
>> The utility ends up looping from the rebalance label down through the
>> wait_iff_congested continuously. Because order=0, __alloc_pages_direct_compact
>> skips the call to get_page_from_freelist. Because all of the reclaimable memory
>> on the system has already been reclaimed, __alloc_pages_direct_reclaim skips the
>> call to get_page_from_freelist. Since there is no __GFP_FS flag, the block with
>> __alloc_pages_may_oom is skipped. The loop hits the wait_iff_congested, then
>> jumps back to rebalance without ever trying to get_page_from_freelist. This loop
>> repeats infinitely.
>>
>> The test case is pretty pathological. Running a mix of I/O stress-tests that do
>> a lot of fork() and consume all of the system memory, I can pretty reliably hit
>> this on 600 nodes, in about 12 hours. 32GB/node.
>>
>> Signed-off-by: Andrew Barry <[email protected]>
>> Reviewed-by: Minchan Kim <[email protected]>
>> Cc: Mel Gorman <[email protected]>
>> ---
>> mm/page_alloc.c | 2 +-
>> 1 files changed, 1 insertions(+), 1 deletions(-)
>>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 3f8bce2..e78b324 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -2064,6 +2064,7 @@ restart:
>> first_zones_zonelist(zonelist, high_zoneidx, NULL,
>> &preferred_zone);
>>
>> +rebalance:
>> /* This is the last chance, in general, before the goto nopage. */
>> page = get_page_from_freelist(gfp_mask, nodemask, order, zonelist,
>> high_zoneidx, alloc_flags & ~ALLOC_NO_WATERMARKS,
>> @@ -2071,7 +2072,6 @@ restart:
>> if (page)
>> goto got_pg;
>>
>> -rebalance:
>> /* Allocate without watermarks if the context allows */
>> if (alloc_flags & ALLOC_NO_WATERMARKS) {
>> page = __alloc_pages_high_priority(gfp_mask, order,
>
> I'm sorry I missed this thread long time.
>
> In this case, I think we should call drain_all_pages(). then following
> patch is better.
> However I also think your patch is valuable. because while the task is
> sleeping in wait_iff_congested(), an another task may free some pages.
> thus, rebalance path should try to get free pages. iow, you makes sense.
>
> So, I'd like to propose to merge both your and my patch.

I forgot to write an important thing: your patch looks good to me.
Reviewed-by: KOSAKI Motohiro <[email protected]>



2011-05-24 08:30:20

by Mel Gorman

Subject: Re: Unending loop in __alloc_pages_slowpath following OOM-kill; rfc: patch.

On Tue, May 24, 2011 at 01:54:54PM +0900, KOSAKI Motohiro wrote:
> >>From 8bd3f16736548375238161d1bd85f7d7c381031f Mon Sep 17 00:00:00 2001
> > From: Minchan Kim <[email protected]>
> > Date: Sat, 21 May 2011 01:37:41 +0900
> > Subject: [PATCH] Prevent unending loop in __alloc_pages_slowpath
> >
> > From: Andrew Barry <[email protected]>
> >
> > I believe I found a problem in __alloc_pages_slowpath, which allows a process to
> > get stuck endlessly looping, even when lots of memory is available.
> >
> > Running an I/O and memory intensive stress-test I see a 0-order page allocation
> > with __GFP_IO and __GFP_WAIT, running on a system with very little free memory.
> > Right about the same time that the stress-test gets killed by the OOM-killer,
> > the utility trying to allocate memory gets stuck in __alloc_pages_slowpath even
> > though most of the system's memory was freed by the oom-kill of the stress-test.
> >
> > The utility ends up looping from the rebalance label down through the
> > wait_iff_congested continuously. Because order=0, __alloc_pages_direct_compact
> > skips the call to get_page_from_freelist. Because all of the reclaimable memory
> > on the system has already been reclaimed, __alloc_pages_direct_reclaim skips the
> > call to get_page_from_freelist. Since there is no __GFP_FS flag, the block with
> > __alloc_pages_may_oom is skipped. The loop hits the wait_iff_congested, then
> > jumps back to rebalance without ever trying to get_page_from_freelist. This loop
> > repeats infinitely.
> >
> > The test case is pretty pathological. Running a mix of I/O stress-tests that do
> > a lot of fork() and consume all of the system memory, I can pretty reliably hit
> > this on 600 nodes, in about 12 hours. 32GB/node.
> >
> > Signed-off-by: Andrew Barry <[email protected]>
> > Reviewed-by: Minchan Kim <[email protected]>
> > Cc: Mel Gorman <[email protected]>
> > ---
> > mm/page_alloc.c | 2 +-
> > 1 files changed, 1 insertions(+), 1 deletions(-)
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 3f8bce2..e78b324 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -2064,6 +2064,7 @@ restart:
> > first_zones_zonelist(zonelist, high_zoneidx, NULL,
> > &preferred_zone);
> >
> > +rebalance:
> > /* This is the last chance, in general, before the goto nopage. */
> > page = get_page_from_freelist(gfp_mask, nodemask, order, zonelist,
> > high_zoneidx, alloc_flags & ~ALLOC_NO_WATERMARKS,
> > @@ -2071,7 +2072,6 @@ restart:
> > if (page)
> > goto got_pg;
> >
> > -rebalance:
> > /* Allocate without watermarks if the context allows */
> > if (alloc_flags & ALLOC_NO_WATERMARKS) {
> > page = __alloc_pages_high_priority(gfp_mask, order,
>
> I'm sorry I missed this thread long time.
>
> In this case, I think we should call drain_all_pages().

Why?

If the direct reclaimer failed to reclaim any pages on its own, the call
to get_page_from_freelist() is going to be useless and there is
no guarantee that any other CPU managed to reclaim pages either. All
this ends up doing is sending an IPI which, if it's very lucky, will take
a page from another CPU's free list.

> then following
> patch is better.
> However I also think your patch is valuable. because while the task is
> sleeping in wait_iff_congested(), an another task may free some pages.
> thus, rebalance path should try to get free pages. iow, you makes sense.
>
> So, I'd like to propose to merge both your and my patch.
>
> Thanks.
>
>
> From 2e77784668f6ca53d88ecb46aa6b99d9d0f33ffa Mon Sep 17 00:00:00 2001
> From: KOSAKI Motohiro <[email protected]>
> Date: Tue, 24 May 2011 13:41:57 +0900
> Subject: [PATCH] vmscan: remove painful micro optimization
>
> Currently, __alloc_pages_direct_reclaim() calls get_page_from_freelist()
> only if try_to_free_pages() returns !0.
>
> It's an unnecessary micro optimization because "return 0" means vmscan reached
> priority 0 and didn't get any pages; iow, it's a really slow path. But it also
> has a bad side effect: if we don't call drain_all_pages(), we have a chance of
> getting stuck in an infinite loop.
>

With the "rebalance" patch, where is the infinite loop?

> This patch removes this bad and meaningless micro optimization.
>
> Signed-off-by: KOSAKI Motohiro <[email protected]>
> ---
> mm/page_alloc.c | 3 ---
> 1 files changed, 0 insertions(+), 3 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 1572079..c41d488 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1950,9 +1950,6 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
>
> cond_resched();
>
> - if (unlikely(!(*did_some_progress)))
> - return NULL;
> -
> retry:
> page = get_page_from_freelist(gfp_mask, nodemask, order,
> zonelist, high_zoneidx,
> --
> 1.7.3.1
>
>
>
>

--
Mel Gorman
SUSE Labs

2011-05-24 08:34:43

by Minchan Kim

Subject: Re: Unending loop in __alloc_pages_slowpath following OOM-kill; rfc: patch.

On Tue, May 24, 2011 at 1:54 PM, KOSAKI Motohiro
<[email protected]> wrote:
>>>From 8bd3f16736548375238161d1bd85f7d7c381031f Mon Sep 17 00:00:00 2001
>> From: Minchan Kim <[email protected]>
>> Date: Sat, 21 May 2011 01:37:41 +0900
>> Subject: [PATCH] Prevent unending loop in __alloc_pages_slowpath
>>
>> From: Andrew Barry <[email protected]>
>>
>> I believe I found a problem in __alloc_pages_slowpath, which allows a process to
>> get stuck endlessly looping, even when lots of memory is available.
>>
>> Running an I/O and memory intensive stress-test I see a 0-order page allocation
>> with __GFP_IO and __GFP_WAIT, running on a system with very little free memory.
>> Right about the same time that the stress-test gets killed by the OOM-killer,
>> the utility trying to allocate memory gets stuck in __alloc_pages_slowpath even
>> though most of the system's memory was freed by the oom-kill of the stress-test.
>>
>> The utility ends up looping from the rebalance label down through the
>> wait_iff_congested continuously. Because order=0, __alloc_pages_direct_compact
>> skips the call to get_page_from_freelist. Because all of the reclaimable memory
>> on the system has already been reclaimed, __alloc_pages_direct_reclaim skips the
>> call to get_page_from_freelist. Since there is no __GFP_FS flag, the block with
>> __alloc_pages_may_oom is skipped. The loop hits the wait_iff_congested, then
>> jumps back to rebalance without ever trying to get_page_from_freelist. This loop
>> repeats infinitely.
>>
>> The test case is pretty pathological. Running a mix of I/O stress-tests that do
>> a lot of fork() and consume all of the system memory, I can pretty reliably hit
>> this on 600 nodes, in about 12 hours. 32GB/node.
>>
>> Signed-off-by: Andrew Barry <[email protected]>
>> Reviewed-by: Minchan Kim <[email protected]>
>> Cc: Mel Gorman <[email protected]>
>> ---
>>  mm/page_alloc.c |    2 +-
>>  1 files changed, 1 insertions(+), 1 deletions(-)
>>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 3f8bce2..e78b324 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -2064,6 +2064,7 @@ restart:
>>               first_zones_zonelist(zonelist, high_zoneidx, NULL,
>>                                       &preferred_zone);
>>
>> +rebalance:
>>       /* This is the last chance, in general, before the goto nopage. */
>>       page = get_page_from_freelist(gfp_mask, nodemask, order, zonelist,
>>                       high_zoneidx, alloc_flags & ~ALLOC_NO_WATERMARKS,
>> @@ -2071,7 +2072,6 @@ restart:
>>       if (page)
>>               goto got_pg;
>>
>> -rebalance:
>>       /* Allocate without watermarks if the context allows */
>>       if (alloc_flags & ALLOC_NO_WATERMARKS) {
>>               page = __alloc_pages_high_priority(gfp_mask, order,
>
> I'm sorry I missed this thread long time.

No problem. It's better than no review at all.

>
> In this case, I think we should call drain_all_pages(). then following
> patch is better.

Strictly speaking, this problem isn't related to drain_all_pages.
This problem is caused by the LRU lists being empty, but I admit it could
work well if your patch is applied.
So yours could help, too.

> However I also think your patch is valuable. because while the task is
> sleeping in wait_iff_congested(), an another task may free some pages.
> thus, rebalance path should try to get free pages. iow, you makes sense.

Yes.
Off-topic.
I would like to move cond_resched below get_page_from_freelist in
__alloc_pages_direct_reclaim. Otherwise, it is likely our reclaimed pages
can be stolen by other processes.
One more benefit is that if it's apparently the OOM path (ie,
did_some_progress == 0), we can reduce OOM kill latency by removing the
unnecessary cond_resched.
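
(For context, the fragment in question looks roughly like the following in
kernels of this era, trimmed; the idea is to move the second cond_resched()
below the get_page_from_freelist() retry.)

	*did_some_progress = try_to_free_pages(zonelist, order, gfp_mask, nodemask);
	...
	cond_resched();		/* <-- proposed to move below the freelist retry */

	if (unlikely(!(*did_some_progress)))
		return NULL;
retry:
	page = get_page_from_freelist(gfp_mask, nodemask, order, zonelist,
					high_zoneidx, alloc_flags,
					preferred_zone, migratetype);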

>
> So, I'd like to propose to merge both your and my patch.

Recently, there was discussion on drain_all_pages with Wu.
He saw much overhead in 8-core system, AFAIR.
I Cced Wu.

How about checking the local per-cpu lists before calling drain_all_pages(),
instead of calling it unconditionally?

if (per_cpu_ptr(zone->pageset, smp_processor_id())->pcp.count)
	drain_all_pages();

Of course, it can miss free pages on other CPUs. But the routine above assumes
the case where the local CPU's direct reclaim succeeded yet the allocation
still failed because of the per-cpu lists. So I think it works.

Thanks for good suggestion and Reviewed-by, KOSAKI.
--
Kind regards,
Minchan Kim

2011-05-24 08:36:20

by KOSAKI Motohiro

Subject: Re: Unending loop in __alloc_pages_slowpath following OOM-kill; rfc: patch.

(2011/05/24 17:30), Mel Gorman wrote:
> On Tue, May 24, 2011 at 01:54:54PM +0900, KOSAKI Motohiro wrote:
>>> >From 8bd3f16736548375238161d1bd85f7d7c381031f Mon Sep 17 00:00:00 2001
>>> From: Minchan Kim <[email protected]>
>>> Date: Sat, 21 May 2011 01:37:41 +0900
>>> Subject: [PATCH] Prevent unending loop in __alloc_pages_slowpath
>>>
>>> From: Andrew Barry <[email protected]>
>>>
>>> I believe I found a problem in __alloc_pages_slowpath, which allows a process to
>>> get stuck endlessly looping, even when lots of memory is available.
>>>
>>> Running an I/O and memory intensive stress-test I see a 0-order page allocation
>>> with __GFP_IO and __GFP_WAIT, running on a system with very little free memory.
>>> Right about the same time that the stress-test gets killed by the OOM-killer,
>>> the utility trying to allocate memory gets stuck in __alloc_pages_slowpath even
>>> though most of the system's memory was freed by the oom-kill of the stress-test.
>>>
>>> The utility ends up looping from the rebalance label down through the
>>> wait_iff_congested continuously. Because order=0, __alloc_pages_direct_compact
>>> skips the call to get_page_from_freelist. Because all of the reclaimable memory
>>> on the system has already been reclaimed, __alloc_pages_direct_reclaim skips the
>>> call to get_page_from_freelist. Since there is no __GFP_FS flag, the block with
>>> __alloc_pages_may_oom is skipped. The loop hits the wait_iff_congested, then
>>> jumps back to rebalance without ever trying to get_page_from_freelist. This loop
>>> repeats infinitely.
>>>
>>> The test case is pretty pathological. Running a mix of I/O stress-tests that do
>>> a lot of fork() and consume all of the system memory, I can pretty reliably hit
>>> this on 600 nodes, in about 12 hours. 32GB/node.
>>>
>>> Signed-off-by: Andrew Barry <[email protected]>
>>> Reviewed-by: Minchan Kim <[email protected]>
>>> Cc: Mel Gorman <[email protected]>
>>> ---
>>> mm/page_alloc.c | 2 +-
>>> 1 files changed, 1 insertions(+), 1 deletions(-)
>>>
>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>> index 3f8bce2..e78b324 100644
>>> --- a/mm/page_alloc.c
>>> +++ b/mm/page_alloc.c
>>> @@ -2064,6 +2064,7 @@ restart:
>>> first_zones_zonelist(zonelist, high_zoneidx, NULL,
>>> &preferred_zone);
>>>
>>> +rebalance:
>>> /* This is the last chance, in general, before the goto nopage. */
>>> page = get_page_from_freelist(gfp_mask, nodemask, order, zonelist,
>>> high_zoneidx, alloc_flags & ~ALLOC_NO_WATERMARKS,
>>> @@ -2071,7 +2072,6 @@ restart:
>>> if (page)
>>> goto got_pg;
>>>
>>> -rebalance:
>>> /* Allocate without watermarks if the context allows */
>>> if (alloc_flags & ALLOC_NO_WATERMARKS) {
>>> page = __alloc_pages_high_priority(gfp_mask, order,
>>
>> I'm sorry I missed this thread long time.
>>
>> In this case, I think we should call drain_all_pages().
>
> Why?

Otherwise, we don't have good PCP dropping trigger. Big machine might have
big pcp cache.


> If the direct reclaimer failed to reclaim any pages on its own, the call
> to get_page_from_freelist() is going to be useless and there is
> no guarantee that any other CPU managed to reclaim pages either. All
> this ends up doing is sending an IPI which, if it's very lucky, will take
> a page from another CPU's free list.

It doesn't matter, because did_some_progress==0 means vmscan failed to reclaim
any pages even after reaching priority==0. Thus, it's obviously a slow path.


>
>> then following
>> patch is better.
>> However I also think your patch is valuable. because while the task is
>> sleeping in wait_iff_congested(), an another task may free some pages.
>> thus, rebalance path should try to get free pages. iow, you makes sense.
>>
>> So, I'd like to propose to merge both your and my patch.
>>
>> Thanks.
>>
>>
>> From 2e77784668f6ca53d88ecb46aa6b99d9d0f33ffa Mon Sep 17 00:00:00 2001
>> From: KOSAKI Motohiro <[email protected]>
>> Date: Tue, 24 May 2011 13:41:57 +0900
>> Subject: [PATCH] vmscan: remove painful micro optimization
>>
>> Currently, __alloc_pages_direct_reclaim() calls get_page_from_freelist()
>> only if try_to_free_pages() returns !0.
>>
>> It's an unnecessary micro optimization because "return 0" means vmscan reached
>> priority 0 and didn't get any pages; iow, it's a really slow path. But it also
>> has a bad side effect: if we don't call drain_all_pages(), we have a chance of
>> getting stuck in an infinite loop.
>>
>
> With the "rebalance" patch, where is the infinite loop?

I wrote the above.

2011-05-24 08:41:56

by KOSAKI Motohiro

Subject: Re: Unending loop in __alloc_pages_slowpath following OOM-kill; rfc: patch.

>> I'm sorry I missed this thread long time.
>
> No problem. It's better than no review at all.

thx.


>> In this case, I think we should call drain_all_pages(). then following
>> patch is better.
>
> Strictly speaking, this problem isn't related to drain_all_pages.
> This problem is caused by the LRU lists being empty, but I admit it could
> work well if your patch is applied.
> So yours could help, too.
>
>> However I also think your patch is valuable. because while the task is
>> sleeping in wait_iff_congested(), an another task may free some pages.
>> thus, rebalance path should try to get free pages. iow, you makes sense.
>
> Yes.
> Off-topic.
> I would like to move cond_resched below get_page_from_freelist in
> __alloc_pages_direct_reclaim. Otherwise, it is likely our reclaimed pages
> can be stolen by other processes.
> One more benefit is that if it's apparently the OOM path (ie,
> did_some_progress == 0), we can reduce OOM kill latency by removing the
> unnecessary cond_resched.

I agree. Would you mind sending a patch?


>> So, I'd like to propose to merge both your and my patch.
>
> Recently, there was discussion on drain_all_pages with Wu.
> He saw much overhead in 8-core system, AFAIR.
> I Cced Wu.
>
> How about checking the local per-cpu lists before calling drain_all_pages(),
> instead of calling it unconditionally?
>
> if (per_cpu_ptr(zone->pageset, smp_processor_id())->pcp.count)
>     drain_all_pages();
>
> Of course, it can miss free pages on other CPUs. But the routine above assumes
> the case where the local CPU's direct reclaim succeeded yet the allocation
> still failed because of the per-cpu lists. So I think it works.

Can you please tell me previous discussion url or mail subject?
I mean, if it is costly and a performance regression risk, we don't have to
take my idea.

Thanks.


>
> Thanks for good suggestion and Reviewed-by, KOSAKI.

2011-05-24 08:49:20

by Mel Gorman

Subject: Re: Unending loop in __alloc_pages_slowpath following OOM-kill; rfc: patch.

On Tue, May 24, 2011 at 05:36:06PM +0900, KOSAKI Motohiro wrote:
> (2011/05/24 17:30), Mel Gorman wrote:
> > On Tue, May 24, 2011 at 01:54:54PM +0900, KOSAKI Motohiro wrote:
> >>> >From 8bd3f16736548375238161d1bd85f7d7c381031f Mon Sep 17 00:00:00 2001
> >>> From: Minchan Kim <[email protected]>
> >>> Date: Sat, 21 May 2011 01:37:41 +0900
> >>> Subject: [PATCH] Prevent unending loop in __alloc_pages_slowpath
> >>>
> >>> From: Andrew Barry <[email protected]>
> >>>
> >>> I believe I found a problem in __alloc_pages_slowpath, which allows a process to
> >>> get stuck endlessly looping, even when lots of memory is available.
> >>>
> >>> Running an I/O and memory intensive stress-test I see a 0-order page allocation
> >>> with __GFP_IO and __GFP_WAIT, running on a system with very little free memory.
> >>> Right about the same time that the stress-test gets killed by the OOM-killer,
> >>> the utility trying to allocate memory gets stuck in __alloc_pages_slowpath even
> >>> though most of the system's memory was freed by the oom-kill of the stress-test.
> >>>
> >>> The utility ends up looping from the rebalance label down through the
> >>> wait_iff_congested continuously. Because order=0, __alloc_pages_direct_compact
> >>> skips the call to get_page_from_freelist. Because all of the reclaimable memory
> >>> on the system has already been reclaimed, __alloc_pages_direct_reclaim skips the
> >>> call to get_page_from_freelist. Since there is no __GFP_FS flag, the block with
> >>> __alloc_pages_may_oom is skipped. The loop hits the wait_iff_congested, then
> >>> jumps back to rebalance without ever trying to get_page_from_freelist. This loop
> >>> repeats infinitely.
> >>>
> >>> The test case is pretty pathological. Running a mix of I/O stress-tests that do
> >>> a lot of fork() and consume all of the system memory, I can pretty reliably hit
> >>> this on 600 nodes, in about 12 hours. 32GB/node.
> >>>
> >>> Signed-off-by: Andrew Barry <[email protected]>
> >>> Reviewed-by: Minchan Kim <[email protected]>
> >>> Cc: Mel Gorman <[email protected]>
> >>> ---
> >>> mm/page_alloc.c | 2 +-
> >>> 1 files changed, 1 insertions(+), 1 deletions(-)
> >>>
> >>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> >>> index 3f8bce2..e78b324 100644
> >>> --- a/mm/page_alloc.c
> >>> +++ b/mm/page_alloc.c
> >>> @@ -2064,6 +2064,7 @@ restart:
> >>> first_zones_zonelist(zonelist, high_zoneidx, NULL,
> >>> &preferred_zone);
> >>>
> >>> +rebalance:
> >>> /* This is the last chance, in general, before the goto nopage. */
> >>> page = get_page_from_freelist(gfp_mask, nodemask, order, zonelist,
> >>> high_zoneidx, alloc_flags & ~ALLOC_NO_WATERMARKS,
> >>> @@ -2071,7 +2072,6 @@ restart:
> >>> if (page)
> >>> goto got_pg;
> >>>
> >>> -rebalance:
> >>> /* Allocate without watermarks if the context allows */
> >>> if (alloc_flags & ALLOC_NO_WATERMARKS) {
> >>> page = __alloc_pages_high_priority(gfp_mask, order,
> >>
> >> I'm sorry I missed this thread long time.
> >>
> >> In this case, I think we should call drain_all_pages().
> >
> > Why?
>
> Otherwise, we don't have good PCP dropping trigger. Big machine might have
> big pcp cache.
>

Big machines also have a large cost for sending IPIs.

>
> > If the direct reclaimer failed to reclaim any pages on its own, the call
> > to get_page_from_freelist() is going to be useless and there is
> > no guarantee that any other CPU managed to reclaim pages either. All
> > this ends up doing is sending an IPI which, if it's very lucky, will take
> > a page from another CPU's free list.
>
> It doesn't matter, because did_some_progress==0 means vmscan failed to reclaim
> any pages even after reaching priority==0. Thus, it's obviously a slow path.
>

Maybe, but that still is no reason to send an IPI that probably isn't
going to help but incur a high cost on large machines (we've had bugs
related to excessive IPI usage before). As it is, a failure to reclaim
will fall through and assuming it has the right flags, it will wait
on congestion to clear before retrying direct reclaim. When it starts
to make progress, the pages will get drained at a time when it'll help.

> >> then following
> >> patch is better.
> >> However I also think your patch is valuable. because while the task is
> >> sleeping in wait_iff_congested(), an another task may free some pages.
> >> thus, rebalance path should try to get free pages. iow, you makes sense.
> >>
> >> So, I'd like to propose to merge both your and my patch.
> >>
> >> Thanks.
> >>
> >>
> >> From 2e77784668f6ca53d88ecb46aa6b99d9d0f33ffa Mon Sep 17 00:00:00 2001
> >> From: KOSAKI Motohiro <[email protected]>
> >> Date: Tue, 24 May 2011 13:41:57 +0900
> >> Subject: [PATCH] vmscan: remove painful micro optimization
> >>
> >> Currently, __alloc_pages_direct_reclaim() calls get_page_from_freelist()
> >> only if try_to_free_pages() returns !0.
> >>
> >> It's an unnecessary micro optimization because "return 0" means vmscan reached
> >> priority 0 and didn't get any pages; iow, it's a really slow path. But it also
> >> has a bad side effect: if we don't call drain_all_pages(), we have a chance of
> >> getting stuck in an infinite loop.
> >>
> >
> > With the "rebalance" patch, where is the infinite loop?
>
> I wrote the above.
>

Where? Failing to call drain_all_pages() if reclaim fails is not an
infinite loop. It'll wait on congestion and retry until some progress
is made on reclaim and even then, it'll only drain the pages if the
subsequent allocation failed. That is not an infinite loop unless the
machine is wedged so badly it cannot make any progress on reclaim in
which case the machine is in serious trouble and an IPI isn't going
to fix things.

Hence, I'm failing to see why avoiding expensive IPI calls is a painful
micro-optimisation.

--
Mel Gorman
SUSE Labs

2011-05-24 08:57:10

by Minchan Kim

Subject: Re: Unending loop in __alloc_pages_slowpath following OOM-kill; rfc: patch.

On Tue, May 24, 2011 at 5:41 PM, KOSAKI Motohiro
<[email protected]> wrote:
>>> I'm sorry I missed this thread long time.
>>
>> No problem. It's better than no review at all.
>
> thx.
>
>
>>> In this case, I think we should call drain_all_pages(). then following
>>> patch is better.
>>
>> Strictly speaking, this problem isn't related to drain_all_pages.
>> This problem is caused by the LRU lists being empty, but I admit it could
>> work well if your patch is applied.
>> So yours could help, too.
>>
>>> However I also think your patch is valuable. because while the task is
>>> sleeping in wait_iff_congested(), an another task may free some pages.
>>> thus, rebalance path should try to get free pages. iow, you makes sense.
>>
>> Yes.
>> Off-topic.
>> I would like to move cond_resched below get_page_from_freelist in
>> __alloc_pages_direct_reclaim. Otherwise, it is likely our reclaimed pages
>> can be stolen by other processes.
>> One more benefit is that if it's apparently the OOM path (ie,
>> did_some_progress == 0), we can reduce OOM kill latency by removing the
>> unnecessary cond_resched.
>
> I agree. Would you mind sending a patch?

I did, but at that time Andrew had a concern.
I will resend it when I have time. Let's discuss it again.

>
>
>>> So, I'd like to propose to merge both your and my patch.
>>
>> Recently, there was discussion on drain_all_pages with Wu.
>> He saw much overhead in 8-core system, AFAIR.
>> I Cced Wu.
>>
>> How about checking the local per-cpu lists before calling drain_all_pages(),
>> instead of calling it unconditionally?
>>
>> if (per_cpu_ptr(zone->pageset, smp_processor_id())->pcp.count)
>>     drain_all_pages();
>>
>> Of course, it can miss free pages on other CPUs. But the routine above assumes
>> the case where the local CPU's direct reclaim succeeded yet the allocation
>> still failed because of the per-cpu lists. So I think it works.
>
> Can you please tell me previous discussion url or mail subject?
> I mean, if it is costly and a performance regression risk, we don't have to
> take my idea.

Yes. You can see it at https://lkml.org/lkml/2011/4/30/81.

>
> Thanks.
>
>
>>
>> Thanks for good suggestion and Reviewed-by, KOSAKI.
>
>
>



--
Kind regards,
Minchan Kim

2011-05-24 09:06:13

by KOSAKI Motohiro

Subject: Re: Unending loop in __alloc_pages_slowpath following OOM-kill; rfc: patch.

>>> Why?
>>
>> Otherwise, we don't have good PCP dropping trigger. Big machine might have
>> big pcp cache.
>>
>
> Big machines also have a large cost for sending IPIs.

Yes. But it's only matter if IPIs are frequently happen.
But, drain_all_pages() is NOT only IPI source. some vmscan function (e.g.
try_to_unmap) makes a lot of IPIs.

Then, it's _relatively_ not costly. I have a question. Do you compare which
operation and drain_all_pages()? IOW, your "costly" mean which scenario suspect?



2011-05-24 09:16:17

by Mel Gorman

Subject: Re: Unending loop in __alloc_pages_slowpath following OOM-kill; rfc: patch.

On Tue, May 24, 2011 at 06:05:59PM +0900, KOSAKI Motohiro wrote:
> >>> Why?
> >>
> >> Otherwise, we don't have good PCP dropping trigger. Big machine might have
> >> big pcp cache.
> >>
> >
> > Big machines also have a large cost for sending IPIs.
>
> Yes. But it's only matter if IPIs are frequently happen.
> But, drain_all_pages() is NOT only IPI source. some vmscan function (e.g.
> try_to_unmap) makes a lot of IPIs.
>
> Then, it's _relatively_ not costly. I have a question. Do you compare which
> operation and drain_all_pages()? IOW, your "costly" mean which scenario suspect?
>

I am concerned that if the machine gets into trouble and we are failing
to reclaim that sending more IPIs is not going to help any. There is no
evidence at the moment that sending extra IPIs here will help anything.

--
Mel Gorman
SUSE Labs

2011-05-24 09:36:44

by KOSAKI Motohiro

Subject: Re: Unending loop in __alloc_pages_slowpath following OOM-kill; rfc: patch.

>> Can you please tell me previous discussion url or mail subject?
>> I mean, if it is costly and a performance regression risk, we don't have to
>> take my idea.
>
> Yes. You can see it at https://lkml.org/lkml/2011/4/30/81.

I think Wu pointed out the "lightweight vmscan could reclaim pages but they
were stolen by another task" case. It's very different from "even the most
heavyweight vmscan still failed to reclaim any pages". The point is, the IPI
cost depends on the frequency. Stealing occurs frequently with the current
logic, but how often does vmscan reach priority==0?


2011-05-24 09:40:41

by KOSAKI Motohiro

Subject: Re: Unending loop in __alloc_pages_slowpath following OOM-kill; rfc: patch.

(2011/05/24 18:16), Mel Gorman wrote:
> On Tue, May 24, 2011 at 06:05:59PM +0900, KOSAKI Motohiro wrote:
>>>>> Why?
>>>>
>>>> Otherwise, we don't have good PCP dropping trigger. Big machine might have
>>>> big pcp cache.
>>>>
>>>
>>> Big machines also have a large cost for sending IPIs.
>>
>> Yes. But it's only matter if IPIs are frequently happen.
>> But, drain_all_pages() is NOT only IPI source. some vmscan function (e.g.
>> try_to_unmap) makes a lot of IPIs.
>>
>> Then, it's _relatively_ not costly. I have a question. Do you compare which
>> operation and drain_all_pages()? IOW, your "costly" mean which scenario suspect?
>>
>
> I am concerned that if the machine gets into trouble and we are failing
> to reclaim that sending more IPIs is not going to help any. There is no
> evidence at the moment that sending extra IPIs here will help anything.

In old days, we always call drain_all_pages() if did_some_progress!=0. But
current kernel only call it when get_page_from_freelist() fail. So,
wait_iff_congested() may help but no guarantee to help us.

If you still strongly worry about IPI cost, I'm concern to move drain_all_pages()
to more unfrequently point. but to ignore pcp makes less sense, IMHO.


2011-05-24 10:57:52

by Mel Gorman

Subject: Re: Unending loop in __alloc_pages_slowpath following OOM-kill; rfc: patch.

On Tue, May 24, 2011 at 06:40:31PM +0900, KOSAKI Motohiro wrote:
> (2011/05/24 18:16), Mel Gorman wrote:
> > On Tue, May 24, 2011 at 06:05:59PM +0900, KOSAKI Motohiro wrote:
> >>>>> Why?
> >>>>
> >>>> Otherwise, we don't have good PCP dropping trigger. Big machine might have
> >>>> big pcp cache.
> >>>>
> >>>
> >>> Big machines also have a large cost for sending IPIs.
> >>
> >> Yes. But it's only matter if IPIs are frequently happen.
> >> But, drain_all_pages() is NOT only IPI source. some vmscan function (e.g.
> >> try_to_unmap) makes a lot of IPIs.
> >>
> >> Then, it's _relatively_ not costly. I have a question. Do you compare which
> >> operation and drain_all_pages()? IOW, your "costly" mean which scenario suspect?
> >>
> >
> > I am concerned that if the machine gets into trouble and we are failing
> > to reclaim that sending more IPIs is not going to help any. There is no
> > evidence at the moment that sending extra IPIs here will help anything.
>
> In old days, we always call drain_all_pages() if did_some_progress!=0. But
> current kernel only call it when get_page_from_freelist() fail. So,
> wait_iff_congested() may help but no guarantee to help us.
>
> If you still strongly worry about IPI cost, I'm concern to move drain_all_pages()
> to more unfrequently point. but to ignore pcp makes less sense, IMHO.
>

Yes, I'm worried about it because excessive time
spent in drain_all_pages() has come up in the past
http://lkml.org/lkml/2010/8/23/81 . The PCP lists are not being
ignored at the moment. They are drained when direct reclaim makes
forward progress but still fails to allocate a page.
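
For reference, that drain sits in __alloc_pages_direct_reclaim() roughly as
follows, i.e. it only runs once reclaim made progress and the freelist retry
still failed:

retry:
	page = get_page_from_freelist(gfp_mask, nodemask, order,
					zonelist, high_zoneidx,
					alloc_flags, preferred_zone,
					migratetype);

	/*
	 * If the allocation failed even though reclaim made progress, the
	 * wanted pages may be pinned on the per-cpu lists; drain them once
	 * and retry.
	 */
	if (!page && !drained) {
		drain_all_pages();
		drained = true;
		goto retry;
	}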

--
Mel Gorman
SUSE Labs

2011-05-24 23:53:41

by KOSAKI Motohiro

Subject: Re: Unending loop in __alloc_pages_slowpath following OOM-kill; rfc: patch.

>> In old days, we always call drain_all_pages() if did_some_progress!=0. But
>> current kernel only call it when get_page_from_freelist() fail. So,
>> wait_iff_congested() may help but no guarantee to help us.
>>
>> If you still strongly worry about IPI cost, I'm concern to move drain_all_pages()
>> to more unfrequently point. but to ignore pcp makes less sense, IMHO.
>>
>
> Yes, I'm worried about it because excessive time
> spent in drain_all_pages() has come up in the past
> http://lkml.org/lkml/2010/8/23/81 . The PCP lists are not being
> ignored at the moment. They are drained when direct reclaim makes
> forward progress but still fails to allocate a page.

Well, that's not a priority==0 case. That's my point.