2020-07-11 01:00:31

by Alex Shi

Subject: [PATCH v16 20/22] mm/vmscan: use relock for move_pages_to_lru

From: Hugh Dickins <[email protected]>

Use the relock function to replace the open-coded unlock/lock sequences,
and skip relocking when the lruvec is unchanged, saving a few lock
acquisitions.

Signed-off-by: Hugh Dickins <[email protected]>
Signed-off-by: Alex Shi <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Andrey Ryabinin <[email protected]>
Cc: Jann Horn <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
---
mm/vmscan.c | 17 ++++++-----------
1 file changed, 6 insertions(+), 11 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index bdb53a678e7e..078a1640ec60 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1854,15 +1854,15 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
enum lru_list lru;

while (!list_empty(list)) {
- struct lruvec *new_lruvec = NULL;
-
page = lru_to_page(list);
VM_BUG_ON_PAGE(PageLRU(page), page);
list_del(&page->lru);
if (unlikely(!page_evictable(page))) {
- spin_unlock_irq(&lruvec->lru_lock);
+ if (lruvec) {
+ spin_unlock_irq(&lruvec->lru_lock);
+ lruvec = NULL;
+ }
putback_lru_page(page);
- spin_lock_irq(&lruvec->lru_lock);
continue;
}

@@ -1876,12 +1876,7 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
* list_add(&page->lru,)
* list_add(&page->lru,) //corrupt
*/
- new_lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
- if (new_lruvec != lruvec) {
- if (lruvec)
- spin_unlock_irq(&lruvec->lru_lock);
- lruvec = lock_page_lruvec_irq(page);
- }
+ lruvec = relock_page_lruvec_irq(page, lruvec);
SetPageLRU(page);

if (unlikely(put_page_testzero(page))) {
@@ -1890,8 +1885,8 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,

if (unlikely(PageCompound(page))) {
spin_unlock_irq(&lruvec->lru_lock);
+ lruvec = NULL;
destroy_compound_page(page);
- spin_lock_irq(&lruvec->lru_lock);
} else
list_add(&page->lru, &pages_to_free);

--
1.8.3.1


2020-07-17 21:44:58

by Alexander Duyck

Subject: Re: [PATCH v16 20/22] mm/vmscan: use relock for move_pages_to_lru

On Fri, Jul 10, 2020 at 5:59 PM Alex Shi <[email protected]> wrote:
>
> From: Hugh Dickins <[email protected]>
>
> Use the relock function to replace the open-coded unlock/lock sequences,
> and skip relocking when the lruvec is unchanged, saving a few lock
> acquisitions.
>
> Signed-off-by: Hugh Dickins <[email protected]>
> Signed-off-by: Alex Shi <[email protected]>
> Cc: Andrew Morton <[email protected]>
> Cc: Tejun Heo <[email protected]>
> Cc: Andrey Ryabinin <[email protected]>
> Cc: Jann Horn <[email protected]>
> Cc: Mel Gorman <[email protected]>
> Cc: Johannes Weiner <[email protected]>
> Cc: Matthew Wilcox <[email protected]>
> Cc: Hugh Dickins <[email protected]>
> Cc: [email protected]
> Cc: [email protected]
> Cc: [email protected]
> ---
> mm/vmscan.c | 17 ++++++-----------
> 1 file changed, 6 insertions(+), 11 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index bdb53a678e7e..078a1640ec60 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1854,15 +1854,15 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
> enum lru_list lru;
>
> while (!list_empty(list)) {
> - struct lruvec *new_lruvec = NULL;
> -
> page = lru_to_page(list);
> VM_BUG_ON_PAGE(PageLRU(page), page);
> list_del(&page->lru);
> if (unlikely(!page_evictable(page))) {
> - spin_unlock_irq(&lruvec->lru_lock);
> + if (lruvec) {
> + spin_unlock_irq(&lruvec->lru_lock);
> + lruvec = NULL;
> + }
> putback_lru_page(page);
> - spin_lock_irq(&lruvec->lru_lock);
> continue;
> }
>
> @@ -1876,12 +1876,7 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
> * list_add(&page->lru,)
> * list_add(&page->lru,) //corrupt
> */
> - new_lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
> - if (new_lruvec != lruvec) {
> - if (lruvec)
> - spin_unlock_irq(&lruvec->lru_lock);
> - lruvec = lock_page_lruvec_irq(page);
> - }
> + lruvec = relock_page_lruvec_irq(page, lruvec);
> SetPageLRU(page);
>
> if (unlikely(put_page_testzero(page))) {
> @@ -1890,8 +1885,8 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
>
> if (unlikely(PageCompound(page))) {
> spin_unlock_irq(&lruvec->lru_lock);
> + lruvec = NULL;
> destroy_compound_page(page);
> - spin_lock_irq(&lruvec->lru_lock);
> } else
> list_add(&page->lru, &pages_to_free);
>

It seems like this should just be rolled into patch 19. Otherwise, if
you want to treat it as a "further optimization" type patch, you might
pull some of the optimizations you were pushing in patch 18 into this
patch as well, and call it out as adding relocks where there previously
were none.

2020-07-18 14:18:47

by Alex Shi

Subject: Re: [PATCH v16 20/22] mm/vmscan: use relock for move_pages_to_lru



On 2020/7/18 5:44 AM, Alexander Duyck wrote:
>> if (unlikely(PageCompound(page))) {
>> spin_unlock_irq(&lruvec->lru_lock);
>> + lruvec = NULL;
>> destroy_compound_page(page);
>> - spin_lock_irq(&lruvec->lru_lock);
>> } else
>> list_add(&page->lru, &pages_to_free);
>>
> It seems like this should just be rolled into patch 19. Otherwise, if
> you want to treat it as a "further optimization" type patch, you might
> pull some of the optimizations you were pushing in patch 18 into this
> patch as well, and call it out as adding relocks where there previously
> were none.

This patch is picked from Hugh Dickins' version during my review. It
should be fine as an extra patch, since it does no harm to anyone. :)

Thanks
Alex