From: Alex Shi <alex.shi@linux.alibaba.com>
To: akpm@linux-foundation.org, mgorman@techsingularity.net, tj@kernel.org,
	hughd@google.com, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com,
	willy@infradead.org, hannes@cmpxchg.org, lkp@intel.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	cgroups@vger.kernel.org, shakeelb@google.com, iamjoonsoo.kim@lge.com,
	richard.weiyang@gmail.com, kirill@shutemov.name,
	alexander.duyck@gmail.com, rong.a.chen@intel.com, mhocko@suse.com,
	vdavydov.dev@gmail.com, shy828301@gmail.com
Subject: [PATCH v18 03/32] mm/thp: move lru_add_page_tail func to huge_memory.c
Date: Mon, 24 Aug 2020 20:54:36 +0800
Message-Id: <1598273705-69124-4-git-send-email-alex.shi@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1598273705-69124-1-git-send-email-alex.shi@linux.alibaba.com>
References: <1598273705-69124-1-git-send-email-alex.shi@linux.alibaba.com>

The function is only used in huge_memory.c; defining it in another file
under a CONFIG_TRANSPARENT_HUGEPAGE guard just looks weird. Let's move
it into the THP code, and make it static as Hugh Dickins suggested.

Signed-off-by: Alex Shi
Reviewed-by: Kirill A. Shutemov
Cc: Andrew Morton
Cc: Johannes Weiner
Cc: Matthew Wilcox
Cc: Hugh Dickins
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 include/linux/swap.h |  2 --
 mm/huge_memory.c     | 30 ++++++++++++++++++++++++++++++
 mm/swap.c            | 33 ---------------------------------
 3 files changed, 30 insertions(+), 35 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 661046994db4..43e6b3458f58 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -338,8 +338,6 @@ extern void lru_note_cost(struct lruvec *lruvec, bool file,
 			  unsigned int nr_pages);
 extern void lru_note_cost_page(struct page *);
 extern void lru_cache_add(struct page *);
-extern void lru_add_page_tail(struct page *page, struct page *page_tail,
-			 struct lruvec *lruvec, struct list_head *head);
 extern void activate_page(struct page *);
 extern void mark_page_accessed(struct page *);
 extern void lru_add_drain(void);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2ccff8472cd4..84fb64e8faa1 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2313,6 +2313,36 @@ static void remap_page(struct page *page)
 	}
 }
 
+static void lru_add_page_tail(struct page *page, struct page *page_tail,
+				struct lruvec *lruvec, struct list_head *list)
+{
+	VM_BUG_ON_PAGE(!PageHead(page), page);
+	VM_BUG_ON_PAGE(PageCompound(page_tail), page);
+	VM_BUG_ON_PAGE(PageLRU(page_tail), page);
+	lockdep_assert_held(&lruvec_pgdat(lruvec)->lru_lock);
+
+	if (!list)
+		SetPageLRU(page_tail);
+
+	if (likely(PageLRU(page)))
+		list_add_tail(&page_tail->lru, &page->lru);
+	else if (list) {
+		/* page reclaim is reclaiming a huge page */
+		get_page(page_tail);
+		list_add_tail(&page_tail->lru, list);
+	} else {
+		/*
+		 * Head page has not yet been counted, as an hpage,
+		 * so we must account for each subpage individually.
+		 *
+		 * Put page_tail on the list at the correct position
+		 * so they all end up in order.
+		 */
+		add_page_to_lru_list_tail(page_tail, lruvec,
+					  page_lru(page_tail));
+	}
+}
+
 static void __split_huge_page_tail(struct page *head, int tail,
 		struct lruvec *lruvec, struct list_head *list)
 {
diff --git a/mm/swap.c b/mm/swap.c
index d16d65d9b4e0..c674fb441fe9 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -935,39 +935,6 @@ void __pagevec_release(struct pagevec *pvec)
 }
 EXPORT_SYMBOL(__pagevec_release);
 
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-/* used by __split_huge_page_refcount() */
-void lru_add_page_tail(struct page *page, struct page *page_tail,
-			struct lruvec *lruvec, struct list_head *list)
-{
-	VM_BUG_ON_PAGE(!PageHead(page), page);
-	VM_BUG_ON_PAGE(PageCompound(page_tail), page);
-	VM_BUG_ON_PAGE(PageLRU(page_tail), page);
-	lockdep_assert_held(&lruvec_pgdat(lruvec)->lru_lock);
-
-	if (!list)
-		SetPageLRU(page_tail);
-
-	if (likely(PageLRU(page)))
-		list_add_tail(&page_tail->lru, &page->lru);
-	else if (list) {
-		/* page reclaim is reclaiming a huge page */
-		get_page(page_tail);
-		list_add_tail(&page_tail->lru, list);
-	} else {
-		/*
-		 * Head page has not yet been counted, as an hpage,
-		 * so we must account for each subpage individually.
-		 *
-		 * Put page_tail on the list at the correct position
-		 * so they all end up in order.
-		 */
-		add_page_to_lru_list_tail(page_tail, lruvec,
-					  page_lru(page_tail));
-	}
-}
-#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
-
 static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
 			 void *arg)
 {
-- 
1.8.3.1