Subject: Re: [PATCH v17 01/21] mm/vmscan: remove unnecessary lruvec adding
From: Alex Shi <alex.shi@linux.alibaba.com>
To: akpm@linux-foundation.org, mgorman@techsingularity.net, tj@kernel.org,
 hughd@google.com, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com,
 yang.shi@linux.alibaba.com, willy@infradead.org, hannes@cmpxchg.org,
 lkp@intel.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 cgroups@vger.kernel.org, shakeelb@google.com, iamjoonsoo.kim@lge.com,
 richard.weiyang@gmail.com, kirill@shutemov.name, alexander.duyck@gmail.com,
 rong.a.chen@intel.com
References: <1595681998-19193-1-git-send-email-alex.shi@linux.alibaba.com>
 <1595681998-19193-2-git-send-email-alex.shi@linux.alibaba.com>
In-Reply-To: <1595681998-19193-2-git-send-email-alex.shi@linux.alibaba.com>
Date: Thu, 6 Aug 2020 11:47:53 +0800

On 2020/7/25 8:59 PM, Alex Shi wrote:
> We don't have to add a freeable page to the LRU and then remove it.
> This change saves a couple of actions and makes the move clearer.
>
> The SetPageLRU needs to be kept here for list integrity.
> Otherwise:
>  #0 move_pages_to_lru             #1 release_pages
>                                   if (put_page_testzero())
>  if !put_page_testzero
>                                     !PageLRU //skip lru_lock
>                                     list_add(&page->lru,)
>  list_add(&page->lru,) //corrupt

The race comment should be corrected to this:
	/*
	 * The SetPageLRU needs to be kept here for list integrity.
	 * Otherwise:
	 *   #0 move_pages_to_lru         #1 release_pages
	 *   if !put_page_testzero
	 *                                if (put_page_testzero())
	 *                                  !PageLRU //skip lru_lock
	 *   SetPageLRU()
	 *   list_add(&page->lru,)
	 *                                    list_add(&page->lru,)
	 */

> [akpm@linux-foundation.org: coding style fixes]
> Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Tejun Heo <tj@kernel.org>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Hugh Dickins <hughd@google.com>
> Cc: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org
> ---
>  mm/vmscan.c | 37 ++++++++++++++++++++++++-------------
>  1 file changed, 24 insertions(+), 13 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 749d239c62b2..ddb29d813d77 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1856,26 +1856,29 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
>  	while (!list_empty(list)) {
>  		page = lru_to_page(list);
>  		VM_BUG_ON_PAGE(PageLRU(page), page);
> +		list_del(&page->lru);
>  		if (unlikely(!page_evictable(page))) {
> -			list_del(&page->lru);
>  			spin_unlock_irq(&pgdat->lru_lock);
>  			putback_lru_page(page);
>  			spin_lock_irq(&pgdat->lru_lock);
>  			continue;
>  		}
> -		lruvec = mem_cgroup_page_lruvec(page, pgdat);
>
> +		/*
> +		 * The SetPageLRU needs to be kept here for list integrity.
> +		 * Otherwise:
> +		 *   #0 move_pages_to_lru         #1 release_pages
> +		 *                                if (put_page_testzero())
> +		 *   if !put_page_testzero
> +		 *                                  !PageLRU //skip lru_lock
> +		 *                                  list_add(&page->lru,)
> +		 *   list_add(&page->lru,) //corrupt
> +		 */

	/*
	 * The SetPageLRU needs to be kept here for list integrity.
	 * Otherwise:
	 *   #0 move_pages_to_lru         #1 release_pages
	 *   if !put_page_testzero
	 *                                if (put_page_testzero())
	 *                                  !PageLRU //skip lru_lock
	 *   SetPageLRU()
	 *   list_add(&page->lru,)
	 *                                    list_add(&page->lru,)
	 */

>  		SetPageLRU(page);
> -		lru = page_lru(page);
>
> -		nr_pages = hpage_nr_pages(page);
> -		update_lru_size(lruvec, lru, page_zonenum(page), nr_pages);
> -		list_move(&page->lru, &lruvec->lists[lru]);
> -
> -		if (put_page_testzero(page)) {
> +		if (unlikely(put_page_testzero(page))) {
>  			__ClearPageLRU(page);
>  			__ClearPageActive(page);
> -			del_page_from_lru_list(page, lruvec, lru);
>
>  			if (unlikely(PageCompound(page))) {
>  				spin_unlock_irq(&pgdat->lru_lock);
> @@ -1883,11 +1886,19 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
>  				spin_lock_irq(&pgdat->lru_lock);
>  			} else
>  				list_add(&page->lru, &pages_to_free);
> -		} else {
> -			nr_moved += nr_pages;
> -			if (PageActive(page))
> -				workingset_age_nonresident(lruvec, nr_pages);
> +
> +			continue;
>  		}
> +
> +		lruvec = mem_cgroup_page_lruvec(page, pgdat);
> +		lru = page_lru(page);
> +		nr_pages = hpage_nr_pages(page);
> +
> +		update_lru_size(lruvec, lru, page_zonenum(page), nr_pages);
> +		list_add(&page->lru, &lruvec->lists[lru]);
> +		nr_moved += nr_pages;
> +		if (PageActive(page))
> +			workingset_age_nonresident(lruvec, nr_pages);
>  	}
>
>  	/*
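
To make the two-column diagram above concrete, here is a minimal user-space
sketch of the ordering argument. It is not kernel code: struct fake_page,
put_testzero(), cpu0_move() and cpu1_release() are made-up stand-ins for the
page refcount, put_page_testzero() and the two racing paths, and the locking
is reduced to printf()s so that only the flag/refcount ordering remains.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct fake_page {
	atomic_int refcount;	/* stands in for the page refcount */
	atomic_bool lru;	/* stands in for the PG_lru flag */
};

/* Stand-in for put_page_testzero(): drop one ref, true when it hits zero. */
static bool put_testzero(struct fake_page *page)
{
	return atomic_fetch_sub(&page->refcount, 1) == 1;
}

/* CPU #0 column: the move_pages_to_lru() side. */
static void cpu0_move(struct fake_page *page)
{
	atomic_store(&page->lru, true);	/* SetPageLRU() before the put */
	if (put_testzero(page))
		printf("cpu0: last ref, cpu0 alone frees the page\n");
	else
		printf("cpu0: page stays on the LRU list\n");
}

/* CPU #1 column: the release_pages() side. */
static void cpu1_release(struct fake_page *page)
{
	if (!put_testzero(page))
		return;
	if (atomic_load(&page->lru)) {
		printf("cpu1: PageLRU set, take lru_lock and unlink first\n");
	} else {
		/*
		 * !PageLRU: free without lru_lock. Because cpu0 sets the
		 * flag before dropping its reference, the count can only
		 * reach zero here after the flag is visible, so the two
		 * sides can never list_add() the same page concurrently.
		 */
		printf("cpu1: free without lru_lock\n");
	}
}

int main(void)
{
	struct fake_page page = { 2, false };

	cpu0_move(&page);	/* one interleaving; a real test would */
	cpu1_release(&page);	/* drive these from two threads */
	return 0;
}

With the corrected ordering, whichever side observes the refcount hit zero
is guaranteed a consistent view of PageLRU, which is the whole point of
keeping SetPageLRU() ahead of put_page_testzero() in the patch.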