2015-08-04 10:40:38

by Jaewon Kim

Subject: [PATCH v2] vmscan: fix increasing nr_isolated incurred by putback unevictable pages

reclaim_clean_pages_from_list() assumes that shrink_page_list() returns
the number of pages removed from the candidate list. But shrink_page_list()
puts back mlocked pages without passing them to the caller and without
counting them as nr_reclaimed. This causes nr_isolated to keep increasing.
To fix this, this patch changes shrink_page_list() to pass unevictable
pages back to the caller. The caller will take care of those pages.

Signed-off-by: Jaewon Kim <[email protected]>
---
Changes since v1

1/ changed subject from vmscan: reclaim_clean_pages_from_list() must count mlocked pages
2/ changed to return unevictable pages rather than returning the number of unevictable pages

mm/vmscan.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 5e8eadd..a4b2d07 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1157,7 +1157,7 @@ cull_mlocked:
 		if (PageSwapCache(page))
 			try_to_free_swap(page);
 		unlock_page(page);
-		putback_lru_page(page);
+		list_add(&page->lru, &ret_pages);
 		continue;
 
 activate_locked:
--
1.9.1
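
For context, the caller side that consumes this return value looks roughly
like the sketch below, paraphrased from reclaim_clean_pages_from_list() in
mm/vmscan.c of that era (simplified, not a verbatim copy). NR_ISOLATED_FILE
is decreased only by the count shrink_page_list() returns, and everything
still left on the list is passed back to the caller, so a page that
shrink_page_list() puts back on the LRU by itself vanishes from the
isolated accounting:

unsigned long reclaim_clean_pages_from_list(struct zone *zone,
					    struct list_head *page_list)
{
	struct scan_control sc = {
		.gfp_mask = GFP_KERNEL,
		.priority = DEF_PRIORITY,
		.may_unmap = 1,
	};
	unsigned long ret, dummy1, dummy2, dummy3, dummy4, dummy5;
	struct page *page, *next;
	LIST_HEAD(clean_pages);

	/* pick the clean file pages off the candidate list */
	list_for_each_entry_safe(page, next, page_list, lru) {
		if (page_is_file_cache(page) && !PageDirty(page)) {
			ClearPageActive(page);
			list_move(&page->lru, &clean_pages);
		}
	}

	ret = shrink_page_list(&clean_pages, zone, &sc,
			TTU_UNMAP|TTU_IGNORE_ACCESS,
			&dummy1, &dummy2, &dummy3, &dummy4, &dummy5, true);

	/* survivors go back on the caller's list ... */
	list_splice(&clean_pages, page_list);

	/*
	 * ... but NR_ISOLATED_FILE is adjusted only by 'ret': a page
	 * that is neither counted in 'ret' nor left on the list leaks
	 * from the isolated accounting.
	 */
	mod_zone_page_state(zone, NR_ISOLATED_FILE, -ret);
	return ret;
}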


2015-08-04 22:09:40

by Andrew Morton

Subject: Re: [PATCH v2] vmscan: fix increasing nr_isolated incurred by putback unevictable pages

On Tue, 04 Aug 2015 19:40:08 +0900 Jaewon Kim <[email protected]> wrote:

> reclaim_clean_pages_from_list() assumes that shrink_page_list() returns
> the number of pages removed from the candidate list. But shrink_page_list()
> puts back mlocked pages without passing them to the caller and without
> counting them as nr_reclaimed. This causes nr_isolated to keep increasing.
> To fix this, this patch changes shrink_page_list() to pass unevictable
> pages back to the caller. The caller will take care of those pages.
>
> ..
>
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1157,7 +1157,7 @@ cull_mlocked:
> if (PageSwapCache(page))
> try_to_free_swap(page);
> unlock_page(page);
> - putback_lru_page(page);
> + list_add(&page->lru, &ret_pages);
> continue;
>
> activate_locked:

Is this going to cause a whole bunch of mlocked pages to be migrated
whereas in current kernels they stay where they are?

2015-08-04 23:30:55

by Minchan Kim

Subject: Re: [PATCH v2] vmscan: fix increasing nr_isolated incurred by putback unevictable pages

Hello,

On Tue, Aug 04, 2015 at 03:09:37PM -0700, Andrew Morton wrote:
> On Tue, 04 Aug 2015 19:40:08 +0900 Jaewon Kim <[email protected]> wrote:
>
> > reclaim_clean_pages_from_list() assumes that shrink_page_list() returns
> > the number of pages removed from the candidate list. But shrink_page_list()
> > puts back mlocked pages without passing them to the caller and without
> > counting them as nr_reclaimed. This causes nr_isolated to keep increasing.
> > To fix this, this patch changes shrink_page_list() to pass unevictable
> > pages back to the caller. The caller will take care of those pages.
> >
> > ..
> >
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -1157,7 +1157,7 @@ cull_mlocked:
> > if (PageSwapCache(page))
> > try_to_free_swap(page);
> > unlock_page(page);
> > - putback_lru_page(page);
> > + list_add(&page->lru, &ret_pages);
> > continue;
> >
> > activate_locked:
>
> Is this going to cause a whole bunch of mlocked pages to be migrated
> whereas in current kernels they stay where they are?
>

It fixes two issues.

1. With unevictable pages, cma_alloc can still succeed.

Strictly speaking, cma_alloc in the current kernel can fail due to
unevictable pages.

2. It fixes the leak of the NR_ISOLATED vmstat counter.

With this fix, too_many_isolated() works as intended. Otherwise, the leaked
counter could cause a hang until the process gets SIGKILL.
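
For reference, the hang comes from the direct reclaim throttle. A sketch,
paraphrased from too_many_isolated() and its caller in mm/vmscan.c of that
era (simplified, some checks omitted; not a verbatim copy):

/*
 * Direct reclaimers are throttled when too many pages are isolated.
 * If NR_ISOLATED_* leaks upward, this check can end up returning
 * true forever, and shrink_inactive_list() then spins in
 * congestion_wait() until the task receives a fatal signal.
 */
static int too_many_isolated(struct zone *zone, int file,
			     struct scan_control *sc)
{
	unsigned long inactive, isolated;

	if (current_is_kswapd())
		return 0;

	if (file) {
		inactive = zone_page_state(zone, NR_INACTIVE_FILE);
		isolated = zone_page_state(zone, NR_ISOLATED_FILE);
	} else {
		inactive = zone_page_state(zone, NR_INACTIVE_ANON);
		isolated = zone_page_state(zone, NR_ISOLATED_ANON);
	}

	return isolated > inactive;
}

/* and the caller, in shrink_inactive_list(): */
while (unlikely(too_many_isolated(zone, file, sc))) {
	congestion_wait(BLK_RW_ASYNC, HZ/10);

	/* We are about to die and free our memory. Return now. */
	if (fatal_signal_pending(current))
		return SWAP_CLUSTER_MAX;
}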

So, I think it's stable material.

Acked-by: Minchan Kim <[email protected]>

2015-08-05 00:52:16

by Jaewon Kim

Subject: Re: [PATCH v2] vmscan: fix increasing nr_isolated incurred by putback unevictable pages



On 2015-08-05 08:31, Minchan Kim wrote:
> Hello,
>
> On Tue, Aug 04, 2015 at 03:09:37PM -0700, Andrew Morton wrote:
>> On Tue, 04 Aug 2015 19:40:08 +0900 Jaewon Kim <[email protected]> wrote:
>>
>>> reclaim_clean_pages_from_list() assumes that shrink_page_list() returns
>>> the number of pages removed from the candidate list. But shrink_page_list()
>>> puts back mlocked pages without passing them to the caller and without
>>> counting them as nr_reclaimed. This causes nr_isolated to keep increasing.
>>> To fix this, this patch changes shrink_page_list() to pass unevictable
>>> pages back to the caller. The caller will take care of those pages.
>>>
>>> ..
>>>
>>> --- a/mm/vmscan.c
>>> +++ b/mm/vmscan.c
>>> @@ -1157,7 +1157,7 @@ cull_mlocked:
>>> if (PageSwapCache(page))
>>> try_to_free_swap(page);
>>> unlock_page(page);
>>> - putback_lru_page(page);
>>> + list_add(&page->lru, &ret_pages);
>>> continue;
>>>
>>> activate_locked:
>>
>> Is this going to cause a whole bunch of mlocked pages to be migrated
>> whereas in current kernels they stay where they are?
>>
>
> It fixes two issues.
>
> 1. With unevictable pages, cma_alloc can still succeed.
>
> Strictly speaking, cma_alloc in the current kernel can fail due to
> unevictable pages.
>
> 2. It fixes the leak of the NR_ISOLATED vmstat counter.
>
> With this fix, too_many_isolated() works as intended. Otherwise, the leaked
> counter could cause a hang until the process gets SIGKILL.
>
> So, I think it's stable material.
>
> Acked-by: Minchan Kim <[email protected]>
>
>
>
Hello

The traditional shrink_inactive_list() path puts back the unevictable pages itself, through putback_inactive_pages().
However, as Minchan Kim said, cma_alloc will be more successful when it can migrate unevictable pages.
In the current kernel, I think, cma_alloc already tries to migrate unevictable pages, except for clean page cache.
This patch allows clean page cache to be migrated by cma_alloc as well.
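
To illustrate why returning unevictable pages is safe on the regular
reclaim path, here is a sketch paraphrased from putback_inactive_pages()
in mm/vmscan.c of that era (simplified, not a verbatim copy):

/*
 * Pages that shrink_page_list() leaves on the list come back here.
 * Anything that turns out to be unevictable is routed to
 * putback_lru_page(); the NR_ISOLATED accounting on this path is
 * adjusted separately against nr_taken, so nothing leaks.
 */
while (!list_empty(page_list)) {
	struct page *page = lru_to_page(page_list);

	VM_BUG_ON_PAGE(PageLRU(page), page);
	list_del(&page->lru);
	if (unlikely(!page_evictable(page))) {
		spin_unlock_irq(&zone->lru_lock);
		putback_lru_page(page);	/* goes to the unevictable LRU */
		spin_lock_irq(&zone->lru_lock);
		continue;
	}

	/* otherwise the page is re-linked onto its LRU list ... */
}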

Thank you

2015-08-06 12:21:43

by Vlastimil Babka

Subject: Re: [PATCH v2] vmscan: fix increasing nr_isolated incurred by putback unevictable pages

On 08/05/2015 02:52 AM, Jaewon Kim wrote:
>
>
> On 2015-08-05 08:31, Minchan Kim wrote:
>> Hello,
>>
>> On Tue, Aug 04, 2015 at 03:09:37PM -0700, Andrew Morton wrote:
>>> On Tue, 04 Aug 2015 19:40:08 +0900 Jaewon Kim <[email protected]> wrote:
>>>
>>>> reclaim_clean_pages_from_list() assumes that shrink_page_list() returns
>>>> the number of pages removed from the candidate list. But shrink_page_list()
>>>> puts back mlocked pages without passing them to the caller and without
>>>> counting them as nr_reclaimed. This causes nr_isolated to keep increasing.
>>>> To fix this, this patch changes shrink_page_list() to pass unevictable
>>>> pages back to the caller. The caller will take care of those pages.
>>>>
>>>> ..
>>>>
>>>> --- a/mm/vmscan.c
>>>> +++ b/mm/vmscan.c
>>>> @@ -1157,7 +1157,7 @@ cull_mlocked:
>>>> if (PageSwapCache(page))
>>>> try_to_free_swap(page);
>>>> unlock_page(page);
>>>> - putback_lru_page(page);
>>>> + list_add(&page->lru, &ret_pages);
>>>> continue;
>>>>
>>>> activate_locked:
>>>
>>> Is this going to cause a whole bunch of mlocked pages to be migrated
>>> whereas in current kernels they stay where they are?

The only user that will see the change wrt migration is
__alloc_contig_migrate_range(), which is explicit about isolating mlocked
pages for migration (isolate_migratepages_range() calls
isolate_migratepages_block() with ISOLATE_UNEVICTABLE). So this will
make the migration work for clean page cache too.
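
For reference, a sketch of that path, paraphrased from
__alloc_contig_migrate_range() in mm/page_alloc.c of that era (simplified,
error handling and retry logic omitted; not a verbatim copy):

/*
 * The CMA allocation path isolates pages with ISOLATE_UNEVICTABLE,
 * tries to reclaim the clean file pages first, and migrates whatever
 * remains on the list, so mlocked pages returned by shrink_page_list()
 * are migrated here instead of being put back early.
 */
static int __alloc_contig_migrate_range(struct compact_control *cc,
					unsigned long start, unsigned long end)
{
	unsigned long nr_reclaimed;
	unsigned long pfn = start;
	int ret = 0;

	migrate_prep();

	while (pfn < end || !list_empty(&cc->migratepages)) {
		if (list_empty(&cc->migratepages)) {
			cc->nr_migratepages = 0;
			/*
			 * passes ISOLATE_UNEVICTABLE down to
			 * isolate_migratepages_block()
			 */
			pfn = isolate_migratepages_range(cc, pfn, end);
		}

		nr_reclaimed = reclaim_clean_pages_from_list(cc->zone,
							&cc->migratepages);
		cc->nr_migratepages -= nr_reclaimed;

		ret = migrate_pages(&cc->migratepages, alloc_migrate_target,
				    NULL, 0, cc->mode, MR_CMA);
	}

	return ret;
}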

>>
>>
>> It fixes two issues.
>>
>> 1. With unevictable pages, cma_alloc can still succeed.
>>
>> Strictly speaking, cma_alloc in the current kernel can fail due to
>> unevictable pages.
>>
>> 2. It fixes the leak of the NR_ISOLATED vmstat counter.
>>
>> With this fix, too_many_isolated() works as intended. Otherwise, the leaked
>> counter could cause a hang until the process gets SIGKILL.

This should be more explicit in the changelog. The first issue is not
mentioned at all. The second is not clear from the description.

>>
>> So, I think it's stable material.
>>
>> Acked-by: Minchan Kim <[email protected]>

Acked-by: Vlastimil Babka <[email protected]>

>>
>>
> Hello
>
> The traditional shrink_inactive_list() path puts back the unevictable pages itself, through putback_inactive_pages().
> However, as Minchan Kim said, cma_alloc will be more successful when it can migrate unevictable pages.
> In the current kernel, I think, cma_alloc already tries to migrate unevictable pages, except for clean page cache.
> This patch allows clean page cache to be migrated by cma_alloc as well.
>
> Thank you
>