From: Alex Shi <alex.shi@linux.alibaba.com>
To: akpm@linux-foundation.org, mgorman@techsingularity.net, tj@kernel.org,
    hughd@google.com, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com,
    willy@infradead.org, hannes@cmpxchg.org, lkp@intel.com, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, shakeelb@google.com,
    iamjoonsoo.kim@lge.com, richard.weiyang@gmail.com, kirill@shutemov.name,
    alexander.duyck@gmail.com, rong.a.chen@intel.com, mhocko@suse.com,
    vdavydov.dev@gmail.com, shy828301@gmail.com, aaron.lwe@gmail.com
Subject: [PATCH v19 07/20] mm/vmscan: remove unnecessary lruvec adding
Date: Thu, 24 Sep 2020 11:28:22 +0800
Message-Id: <1600918115-22007-8-git-send-email-alex.shi@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1600918115-22007-1-git-send-email-alex.shi@linux.alibaba.com>
References: <1600918115-22007-1-git-send-email-alex.shi@linux.alibaba.com>
X-Mailing-List: linux-kernel@vger.kernel.org

We don't have to add a freeable page into the lru and then remove it again.
This change saves a couple of actions and makes the page moving clearer.

The SetPageLRU needs to be kept before put_page_testzero for list integrity,
otherwise:

  #0 move_pages_to_lru             #1 release_pages
  if !put_page_testzero
                                   if (put_page_testzero())
                                     !PageLRU //skip lru_lock
    SetPageLRU()
    list_add(&page->lru,)
                                     list_add(&page->lru,)

[akpm@linux-foundation.org: coding style fixes]
Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com>
Acked-by: Hugh Dickins
Cc: Andrew Morton
Cc: Johannes Weiner
Cc: Tejun Heo
Cc: Matthew Wilcox
Cc: Hugh Dickins
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
 mm/vmscan.c | 38 +++++++++++++++++++++++++-------------
 1 file changed, 25 insertions(+), 13 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 466fc3144fff..32102e5d354d 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1850,26 +1850,30 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
         while (!list_empty(list)) {
                 page = lru_to_page(list);
                 VM_BUG_ON_PAGE(PageLRU(page), page);
+                list_del(&page->lru);
                 if (unlikely(!page_evictable(page))) {
-                        list_del(&page->lru);
                         spin_unlock_irq(&pgdat->lru_lock);
                         putback_lru_page(page);
                         spin_lock_irq(&pgdat->lru_lock);
                         continue;
                 }
-                lruvec = mem_cgroup_page_lruvec(page, pgdat);
 
+                /*
+                 * The SetPageLRU needs to be kept here for list integrity.
+                 * Otherwise:
+                 *   #0 move_pages_to_lru             #1 release_pages
+                 *   if !put_page_testzero
+                 *                                    if (put_page_testzero())
+                 *                                      !PageLRU //skip lru_lock
+                 *     SetPageLRU()
+                 *     list_add(&page->lru,)
+                 *                                      list_add(&page->lru,)
+                 */
                 SetPageLRU(page);
-                lru = page_lru(page);
 
-                nr_pages = thp_nr_pages(page);
-                update_lru_size(lruvec, lru, page_zonenum(page), nr_pages);
-                list_move(&page->lru, &lruvec->lists[lru]);
-
-                if (put_page_testzero(page)) {
+                if (unlikely(put_page_testzero(page))) {
                         __ClearPageLRU(page);
                         __ClearPageActive(page);
-                        del_page_from_lru_list(page, lruvec, lru);
 
                         if (unlikely(PageCompound(page))) {
                                 spin_unlock_irq(&pgdat->lru_lock);
@@ -1877,11 +1881,19 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
                                 spin_lock_irq(&pgdat->lru_lock);
                         } else
                                 list_add(&page->lru, &pages_to_free);
-                } else {
-                        nr_moved += nr_pages;
-                        if (PageActive(page))
-                                workingset_age_nonresident(lruvec, nr_pages);
+
+                        continue;
                 }
+
+                lruvec = mem_cgroup_page_lruvec(page, pgdat);
+                lru = page_lru(page);
+                nr_pages = thp_nr_pages(page);
+
+                update_lru_size(lruvec, lru, page_zonenum(page), nr_pages);
+                list_add(&page->lru, &lruvec->lists[lru]);
+                nr_moved += nr_pages;
+                if (PageActive(page))
+                        workingset_age_nonresident(lruvec, nr_pages);
         }
 
         /*
-- 
1.8.3.1
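
For readers who want to see the ordering argument outside the kernel, below is
a minimal userspace sketch of the same idea. It is not kernel code: struct
fake_page, fake_put_testzero(), fake_list_add()/fake_list_del(), fake_lru_lock
and the reclaim_side()/release_side() threads are all invented stand-ins for
struct page, put_page_testzero(), list_add()/list_del(), pgdat->lru_lock and
the move_pages_to_lru()/release_pages() paths; only the "set the LRU flag
before dropping the reference" ordering mirrors the patch. Build with:
cc -O2 -pthread sketch.c

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Invented stand-in for struct page: a refcount, the PageLRU bit and
 * whether the page->lru linkage is currently on some list. */
struct fake_page {
        atomic_int refcount;            /* page->_refcount           */
        atomic_bool lru;                /* PageLRU                   */
        bool linked;                    /* page->lru is on a list    */
};

static pthread_mutex_t fake_lru_lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_int double_adds;          /* list corruptions observed */

/* put_page_testzero(): drop one reference, true if it was the last. */
static bool fake_put_testzero(struct fake_page *p)
{
        return atomic_fetch_sub(&p->refcount, 1) == 1;
}

/* list_add(&page->lru, ...): linking twice is the corruption. */
static void fake_list_add(struct fake_page *p)
{
        if (p->linked)
                atomic_fetch_add(&double_adds, 1);
        p->linked = true;
}

static void fake_list_del(struct fake_page *p)
{
        p->linked = false;
}

/* Mirrors move_pages_to_lru() after this patch: flag first, then put. */
static void *reclaim_side(void *arg)
{
        struct fake_page *p = arg;

        pthread_mutex_lock(&fake_lru_lock);
        atomic_store(&p->lru, true);            /* SetPageLRU()        */
        if (fake_put_testzero(p)) {             /* we held the last ref */
                atomic_store(&p->lru, false);   /* __ClearPageLRU()    */
                fake_list_add(p);               /* queue for freeing   */
        } else {
                fake_list_add(p);               /* add to the LRU list */
        }
        pthread_mutex_unlock(&fake_lru_lock);
        return NULL;
}

/* Mirrors the release_pages() column of the comment: on the last
 * reference, only skip the lock if the LRU flag is still clear. */
static void *release_side(void *arg)
{
        struct fake_page *p = arg;

        if (fake_put_testzero(p)) {
                if (atomic_load(&p->lru)) {
                        pthread_mutex_lock(&fake_lru_lock);
                        atomic_store(&p->lru, false);
                        fake_list_del(p);       /* unlink from the LRU */
                        pthread_mutex_unlock(&fake_lru_lock);
                }
                fake_list_add(p);               /* queue for freeing   */
        }
        return NULL;
}

int main(void)
{
        for (int i = 0; i < 10000; i++) {
                /* one reference held by isolation, one by the releaser */
                struct fake_page page = { 2, false, false };
                pthread_t a, b;

                pthread_create(&a, NULL, reclaim_side, &page);
                pthread_create(&b, NULL, release_side, &page);
                pthread_join(a, NULL);
                pthread_join(b, NULL);
        }
        printf("double list_adds seen: %d\n", atomic_load(&double_adds));
        return 0;
}

Swapping the atomic_store(&p->lru, true) and fake_put_testzero() calls in
reclaim_side() reproduces the race in the comment: the release side sees the
flag still clear, skips the lock and links the page itself, and the double
list_add counter goes non-zero.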