From: Alex Shi <alex.shi@linux.alibaba.com>
To: akpm@linux-foundation.org, mgorman@techsingularity.net, tj@kernel.org,
 hughd@google.com, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com,
 willy@infradead.org, hannes@cmpxchg.org, lkp@intel.com, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, shakeelb@google.com,
 iamjoonsoo.kim@lge.com, richard.weiyang@gmail.com, kirill@shutemov.name,
 alexander.duyck@gmail.com, rong.a.chen@intel.com, mhocko@suse.com,
 vdavydov.dev@gmail.com, shy828301@gmail.com
Subject: [PATCH v18 08/32] mm/vmscan: remove unnecessary lruvec adding
Date: Mon, 24 Aug 2020 20:54:41 +0800
Message-Id: <1598273705-69124-9-git-send-email-alex.shi@linux.alibaba.com>
X-Mailer: git-send-email
 1.8.3.1
In-Reply-To: <1598273705-69124-1-git-send-email-alex.shi@linux.alibaba.com>
References: <1598273705-69124-1-git-send-email-alex.shi@linux.alibaba.com>
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

We don't have to add a freeable page into the lru and then remove it.
This change saves a couple of actions and makes the moving more clear.

The SetPageLRU needs to be kept before put_page_testzero for list
integrity, otherwise:

  #0 move_pages_to_lru              #1 release_pages
  if !put_page_testzero
                                    if (put_page_testzero())
                                      !PageLRU //skip lru_lock
  SetPageLRU()
  list_add(&page->lru,)
                                      list_add(&page->lru,)

[akpm@linux-foundation.org: coding style fixes]
Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
 mm/vmscan.c | 38 +++++++++++++++++++++++++-------------
 1 file changed, 25 insertions(+), 13 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 99e1796eb833..ffccb94defaf 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1850,26 +1850,30 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 	while (!list_empty(list)) {
 		page = lru_to_page(list);
 		VM_BUG_ON_PAGE(PageLRU(page), page);
+		list_del(&page->lru);
 		if (unlikely(!page_evictable(page))) {
-			list_del(&page->lru);
 			spin_unlock_irq(&pgdat->lru_lock);
 			putback_lru_page(page);
 			spin_lock_irq(&pgdat->lru_lock);
 			continue;
 		}
-		lruvec = mem_cgroup_page_lruvec(page, pgdat);
+		/*
+		 * The SetPageLRU needs to be kept here for list integrity.
+		 * Otherwise:
+		 *   #0 move_pages_to_lru          #1 release_pages
+		 *   if !put_page_testzero
+		 *                                 if (put_page_testzero())
+		 *                                   !PageLRU //skip lru_lock
+		 *   SetPageLRU()
+		 *   list_add(&page->lru,)
+		 *                                   list_add(&page->lru,)
+		 */
 		SetPageLRU(page);
-		lru = page_lru(page);
-		nr_pages = thp_nr_pages(page);
-		update_lru_size(lruvec, lru, page_zonenum(page), nr_pages);
-		list_move(&page->lru, &lruvec->lists[lru]);
-
-		if (put_page_testzero(page)) {
+		if (unlikely(put_page_testzero(page))) {
 			__ClearPageLRU(page);
 			__ClearPageActive(page);
-			del_page_from_lru_list(page, lruvec, lru);
 			if (unlikely(PageCompound(page))) {
 				spin_unlock_irq(&pgdat->lru_lock);
@@ -1877,11 +1881,19 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 				spin_lock_irq(&pgdat->lru_lock);
 			} else
 				list_add(&page->lru, &pages_to_free);
-		} else {
-			nr_moved += nr_pages;
-			if (PageActive(page))
-				workingset_age_nonresident(lruvec, nr_pages);
+
+			continue;
 		}
+
+		lruvec = mem_cgroup_page_lruvec(page, pgdat);
+		lru = page_lru(page);
+		nr_pages = thp_nr_pages(page);
+
+		update_lru_size(lruvec, lru, page_zonenum(page), nr_pages);
+		list_add(&page->lru, &lruvec->lists[lru]);
+		nr_moved += nr_pages;
+		if (PageActive(page))
+			workingset_age_nonresident(lruvec, nr_pages);
 	}
 
 	/*
-- 
1.8.3.1