From: Alex Shi <alex.shi@linux.alibaba.com>
To: akpm@linux-foundation.org, mgorman@techsingularity.net, tj@kernel.org, hughd@google.com, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com, willy@infradead.org, hannes@cmpxchg.org, lkp@intel.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, shakeelb@google.com, iamjoonsoo.kim@lge.com, richard.weiyang@gmail.com, kirill@shutemov.name, alexander.duyck@gmail.com, rong.a.chen@intel.com, mhocko@suse.com, vdavydov.dev@gmail.com, shy828301@gmail.com
Subject: [PATCH v21 05/19] mm/vmscan: remove unnecessary lruvec adding
Date: Thu, 5 Nov 2020 16:55:35 +0800
Message-Id: <1604566549-62481-6-git-send-email-alex.shi@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1604566549-62481-1-git-send-email-alex.shi@linux.alibaba.com>
References: <1604566549-62481-1-git-send-email-alex.shi@linux.alibaba.com>
X-Mailing-List: linux-kernel@vger.kernel.org

We don't have to add a freeable page to the LRU and then remove it again.
This change saves a couple of actions and makes the movement clearer.

The SetPageLRU needs to be kept before put_page_testzero for list
integrity, otherwise:

  #0 move_pages_to_lru             #1 release_pages
  if !put_page_testzero
                                   if (put_page_testzero())
                                     !PageLRU //skip lru_lock
    SetPageLRU()
    list_add(&page->lru,)
                                   list_add(&page->lru,)

[akpm@linux-foundation.org: coding style fixes]
Signed-off-by: Alex Shi
Acked-by: Hugh Dickins
Acked-by: Johannes Weiner
Cc: Andrew Morton
Cc: Johannes Weiner
Cc: Tejun Heo
Cc: Matthew Wilcox
Cc: Hugh Dickins
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
 mm/vmscan.c | 38 +++++++++++++++++++++++++------------
 1 file changed, 25 insertions(+), 13 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 12a4873942e2..b9935668d121 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1852,26 +1852,30 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 	while (!list_empty(list)) {
 		page = lru_to_page(list);
 		VM_BUG_ON_PAGE(PageLRU(page), page);
+		list_del(&page->lru);
 		if (unlikely(!page_evictable(page))) {
-			list_del(&page->lru);
 			spin_unlock_irq(&pgdat->lru_lock);
 			putback_lru_page(page);
 			spin_lock_irq(&pgdat->lru_lock);
 			continue;
 		}
 
-		lruvec = mem_cgroup_page_lruvec(page, pgdat);
+		/*
+		 * The SetPageLRU needs to be kept here for list integrity.
+		 * Otherwise:
+		 *   #0 move_pages_to_lru             #1 release_pages
+		 *   if !put_page_testzero
+		 *                                    if (put_page_testzero())
+		 *                                      !PageLRU //skip lru_lock
+		 *     SetPageLRU()
+		 *     list_add(&page->lru,)
+		 *                                      list_add(&page->lru,)
+		 */
 		SetPageLRU(page);
-		lru = page_lru(page);
 
-		nr_pages = thp_nr_pages(page);
-		update_lru_size(lruvec, lru, page_zonenum(page), nr_pages);
-		list_move(&page->lru, &lruvec->lists[lru]);
-
-		if (put_page_testzero(page)) {
+		if (unlikely(put_page_testzero(page))) {
 			__ClearPageLRU(page);
 			__ClearPageActive(page);
-			del_page_from_lru_list(page, lruvec, lru);
 
 			if (unlikely(PageCompound(page))) {
 				spin_unlock_irq(&pgdat->lru_lock);
@@ -1879,11 +1883,19 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 				spin_lock_irq(&pgdat->lru_lock);
 			} else
 				list_add(&page->lru, &pages_to_free);
-		} else {
-			nr_moved += nr_pages;
-			if (PageActive(page))
-				workingset_age_nonresident(lruvec, nr_pages);
+
+			continue;
 		}
+
+		lruvec = mem_cgroup_page_lruvec(page, pgdat);
+		lru = page_lru(page);
+		nr_pages = thp_nr_pages(page);
+
+		update_lru_size(lruvec, lru, page_zonenum(page), nr_pages);
+		list_add(&page->lru, &lruvec->lists[lru]);
+		nr_moved += nr_pages;
+		if (PageActive(page))
+			workingset_age_nonresident(lruvec, nr_pages);
 	}
 
 	/*
-- 
1.8.3.1