From: Alex Shi <alex.shi@linux.alibaba.com>
To: akpm@linux-foundation.org, mgorman@techsingularity.net, tj@kernel.org,
	hughd@google.com, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com,
	willy@infradead.org, hannes@cmpxchg.org, lkp@intel.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	cgroups@vger.kernel.org, shakeelb@google.com, iamjoonsoo.kim@lge.com,
	richard.weiyang@gmail.com, kirill@shutemov.name,
	alexander.duyck@gmail.com, rong.a.chen@intel.com, mhocko@suse.com,
	vdavydov.dev@gmail.com, shy828301@gmail.com
Cc: Alexander Duyck, Thomas Gleixner, Andrey Ryabinin
Subject: [PATCH v18 21/32] mm/lru: introduce the relock_page_lruvec function
Date: Mon, 24 Aug 2020 20:54:54 +0800
Message-Id: <1598273705-69124-22-git-send-email-alex.shi@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1598273705-69124-1-git-send-email-alex.shi@linux.alibaba.com>
References: <1598273705-69124-1-git-send-email-alex.shi@linux.alibaba.com>

From: Alexander Duyck

Use this new function to replace the same relock sequence that is
open-coded at several call sites; there is no functional change.

When testing for relock we can avoid the need for RCU locking if we
simply compare the page pgdat and memcg pointers versus those that the
lruvec is holding. By doing this we can avoid the extra pointer walks
and accesses of the memory cgroup. In addition we can avoid the checks
entirely if lruvec is currently NULL.

Signed-off-by: Alexander Duyck
Signed-off-by: Alex Shi
Cc: Johannes Weiner
Cc: Andrew Morton
Cc: Thomas Gleixner
Cc: Andrey Ryabinin
Cc: Matthew Wilcox
Cc: Mel Gorman
Cc: Konstantin Khlebnikov
Cc: Hugh Dickins
Cc: Tejun Heo
Cc: linux-kernel@vger.kernel.org
Cc: cgroups@vger.kernel.org
Cc: linux-mm@kvack.org
---
 include/linux/memcontrol.h | 52 ++++++++++++++++++++++++++++++++++++++++++++++
 mm/mlock.c                 |  9 +-------
 mm/swap.c                  | 33 +++++++----------------------
 mm/vmscan.c                |  8 +------
 4 files changed, 61 insertions(+), 41 deletions(-)
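
A note for reviewers, not part of the commit itself: every call site in
the diff below changes in the same way. The sketch that follows is
illustrative only (walk_pages_example() is a made-up caller and the
per-page work is elided), but the locking calls are the ones this patch
introduces:

static void walk_pages_example(struct page **pages, int nr)
{
	struct lruvec *lruvec = NULL;
	int i;

	for (i = 0; i < nr; i++) {
		struct page *page = pages[i];

		/*
		 * Old pattern, open-coded at each call site: re-derive
		 * the lruvec for every page and do the unlock/lock
		 * dance whenever it changes:
		 *
		 *	new_lruvec = mem_cgroup_page_lruvec(page,
		 *					page_pgdat(page));
		 *	if (lruvec != new_lruvec) {
		 *		if (lruvec)
		 *			unlock_page_lruvec_irq(lruvec);
		 *		lruvec = lock_page_lruvec_irq(page);
		 *	}
		 *
		 * New pattern: a single helper that keeps the lock
		 * held while the page is still covered by the lruvec
		 * we hold, without RCU or memcg pointer walks.
		 */
		lruvec = relock_page_lruvec_irq(page, lruvec);

		/* ... per-page work under lruvec->lru_lock ... */
	}
	if (lruvec)
		unlock_page_lruvec_irq(lruvec);
}

The _irqsave variant follows the same shape, threading the caller's
saved flags through unlock_page_lruvec_irqrestore() and
lock_page_lruvec_irqsave().
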
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 7b170e9028b5..ee6ef2d8ad52 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -488,6 +488,22 @@ static inline struct lruvec *mem_cgroup_lruvec(struct mem_cgroup *memcg,
 
 struct lruvec *mem_cgroup_page_lruvec(struct page *, struct pglist_data *);
 
+static inline bool lruvec_holds_page_lru_lock(struct page *page,
+					      struct lruvec *lruvec)
+{
+	pg_data_t *pgdat = page_pgdat(page);
+	const struct mem_cgroup *memcg;
+	struct mem_cgroup_per_node *mz;
+
+	if (mem_cgroup_disabled())
+		return lruvec == &pgdat->__lruvec;
+
+	mz = container_of(lruvec, struct mem_cgroup_per_node, lruvec);
+	memcg = page->mem_cgroup ? : root_mem_cgroup;
+
+	return lruvec->pgdat == pgdat && mz->memcg == memcg;
+}
+
 struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p);
 
 struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm);
@@ -1023,6 +1039,14 @@ static inline struct lruvec *mem_cgroup_page_lruvec(struct page *page,
 	return &pgdat->__lruvec;
 }
 
+static inline bool lruvec_holds_page_lru_lock(struct page *page,
+					      struct lruvec *lruvec)
+{
+	pg_data_t *pgdat = page_pgdat(page);
+
+	return lruvec == &pgdat->__lruvec;
+}
+
 static inline struct mem_cgroup *parent_mem_cgroup(struct mem_cgroup *memcg)
 {
 	return NULL;
@@ -1469,6 +1493,34 @@ static inline void unlock_page_lruvec_irqrestore(struct lruvec *lruvec,
 	spin_unlock_irqrestore(&lruvec->lru_lock, flags);
 }
 
+/* Don't take the lock again if the page's lruvec is already locked */
+static inline struct lruvec *relock_page_lruvec_irq(struct page *page,
+		struct lruvec *locked_lruvec)
+{
+	if (locked_lruvec) {
+		if (lruvec_holds_page_lru_lock(page, locked_lruvec))
+			return locked_lruvec;
+
+		unlock_page_lruvec_irq(locked_lruvec);
+	}
+
+	return lock_page_lruvec_irq(page);
+}
+
+/* Don't take the lock again if the page's lruvec is already locked */
+static inline struct lruvec *relock_page_lruvec_irqsave(struct page *page,
+		struct lruvec *locked_lruvec, unsigned long *flags)
+{
+	if (locked_lruvec) {
+		if (lruvec_holds_page_lru_lock(page, locked_lruvec))
+			return locked_lruvec;
+
+		unlock_page_lruvec_irqrestore(locked_lruvec, *flags);
+	}
+
+	return lock_page_lruvec_irqsave(page, flags);
+}
+
 #ifdef CONFIG_CGROUP_WRITEBACK
 
 struct wb_domain *mem_cgroup_wb_domain(struct bdi_writeback *wb);
diff --git a/mm/mlock.c b/mm/mlock.c
index 177d2588e863..0448409184e3 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -302,17 +302,10 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
 	/* Phase 1: page isolation */
 	for (i = 0; i < nr; i++) {
 		struct page *page = pvec->pages[i];
-		struct lruvec *new_lruvec;
 
 		/* block memcg change in mem_cgroup_move_account */
 		lock_page_memcg(page);
-		new_lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
-		if (new_lruvec != lruvec) {
-			if (lruvec)
-				unlock_page_lruvec_irq(lruvec);
-			lruvec = lock_page_lruvec_irq(page);
-		}
-
+		lruvec = relock_page_lruvec_irq(page, lruvec);
 		if (TestClearPageMlocked(page)) {
 			/*
 			 * We already have pin from follow_page_mask()
diff --git a/mm/swap.c b/mm/swap.c
index b67959b701c0..2ac78e8fab71 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -209,19 +209,12 @@ static void pagevec_lru_move_fn(struct pagevec *pvec,
 
 	for (i = 0; i < pagevec_count(pvec); i++) {
 		struct page *page = pvec->pages[i];
-		struct lruvec *new_lruvec;
 
 		/* block memcg migration during page moving between lru */
 		if (!TestClearPageLRU(page))
 			continue;
 
-		new_lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
-		if (lruvec != new_lruvec) {
-			if (lruvec)
-				unlock_page_lruvec_irqrestore(lruvec, flags);
-			lruvec = lock_page_lruvec_irqsave(page, &flags);
-		}
-
+		lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags);
 		(*move_fn)(page, lruvec);
 
 		SetPageLRU(page);
@@ -865,17 +858,12 @@ void release_pages(struct page **pages, int nr)
 		}
 
 		if (PageLRU(page)) {
-			struct lruvec *new_lruvec;
-
-			new_lruvec = mem_cgroup_page_lruvec(page,
-							page_pgdat(page));
-			if (new_lruvec != lruvec) {
-				if (lruvec)
-					unlock_page_lruvec_irqrestore(lruvec,
-									flags);
+			struct lruvec *prev_lruvec = lruvec;
+
+			lruvec = relock_page_lruvec_irqsave(page, lruvec,
+									&flags);
+			if (prev_lruvec != lruvec)
 				lock_batch = 0;
-				lruvec = lock_page_lruvec_irqsave(page, &flags);
-			}
 
 			VM_BUG_ON_PAGE(!PageLRU(page), page);
 			__ClearPageLRU(page);
@@ -982,15 +970,8 @@ void __pagevec_lru_add(struct pagevec *pvec)
 
 	for (i = 0; i < pagevec_count(pvec); i++) {
 		struct page *page = pvec->pages[i];
-		struct lruvec *new_lruvec;
-
-		new_lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
-		if (lruvec != new_lruvec) {
-			if (lruvec)
-				unlock_page_lruvec_irqrestore(lruvec, flags);
-			lruvec = lock_page_lruvec_irqsave(page, &flags);
-		}
 
+		lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags);
 		__pagevec_lru_add_fn(page, lruvec);
 	}
 	if (lruvec)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 789444ae4c88..2c94790d4cb1 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4280,15 +4280,9 @@ void check_move_unevictable_pages(struct pagevec *pvec)
 
 	for (i = 0; i < pvec->nr; i++) {
 		struct page *page = pvec->pages[i];
-		struct lruvec *new_lruvec;
 
 		pgscanned++;
-		new_lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
-		if (lruvec != new_lruvec) {
-			if (lruvec)
-				unlock_page_lruvec_irq(lruvec);
-			lruvec = lock_page_lruvec_irq(page);
-		}
+		lruvec = relock_page_lruvec_irq(page, lruvec);
 
 		if (!PageLRU(page) || !PageUnevictable(page))
 			continue;
-- 
1.8.3.1