Subject: Re: [PATCH next] mm/vmscan: __isolate_lru_page_prepare clean up
To: Alex Shi, Matthew Wilcox
Cc: Andrew Morton, Hugh Dickins, Yu Zhao, Michal Hocko,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <1605859413-53864-1-git-send-email-alex.shi@linux.alibaba.com>
 <20201120151307.4d9e3ef092ba01a325db7ce2@linux-foundation.org>
 <20201122123552.GF4327@casper.infradead.org>
 <728874d7-2d93-4049-68c1-dcc3b2d52ccd@linux.alibaba.com>
From: Vlastimil Babka
Message-ID: <46ad053f-1401-31e8-50cf-09acda588f6f@suse.cz>
Date: Tue, 24 Nov 2020 12:21:28 +0100
In-Reply-To: <728874d7-2d93-4049-68c1-dcc3b2d52ccd@linux.alibaba.com>

On 11/22/20 3:00 PM, Alex Shi wrote:
> Thanks a lot for all comments, I picked all up and here is the v3:
> 
> From 167131dd106a96fd08af725df850e0da6ec899af Mon Sep 17 00:00:00 2001
> From: Alex Shi
> Date: Fri, 20 Nov 2020 14:49:16 +0800
> Subject: [PATCH v3 next] mm/vmscan: __isolate_lru_page_prepare clean up
> 
> The function only returns two results, so using a 'switch' to handle its
> result is unnecessary; simplify it to a bool function, as Vlastimil
> suggested.
> 
> Also remove the 'goto' by reusing list_move(), and take Matthew Wilcox's
> suggestion to update the comments in the function.

I wouldn't mind if the goto stayed, but it's not repeating that much
without it (list_move() + continue, 3 times) so...

Acked-by: Vlastimil Babka

> Signed-off-by: Alex Shi
> Cc: Andrew Morton
> Cc: Matthew Wilcox
> Cc: Hugh Dickins
> Cc: Yu Zhao
> Cc: Vlastimil Babka
> Cc: Michal Hocko
> Cc: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org
> ---
>  include/linux/swap.h |  2 +-
>  mm/compaction.c      |  2 +-
>  mm/vmscan.c          | 68 ++++++++++++++++++++------------------
>  3 files changed, 33 insertions(+), 39 deletions(-)
> 
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index 596bc2f4d9b0..5bba15ac5a2e 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -356,7 +356,7 @@ extern void lru_cache_add_inactive_or_unevictable(struct page *page,
>  extern unsigned long zone_reclaimable_pages(struct zone *zone);
>  extern unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
>  					gfp_t gfp_mask, nodemask_t *mask);
> -extern int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode);
> +extern bool __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode);
>  extern unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
>  						  unsigned long nr_pages,
>  						  gfp_t gfp_mask,
> diff --git a/mm/compaction.c b/mm/compaction.c
> index b68931854253..8d71ffebe6cb 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -988,7 +988,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>  		if (unlikely(!get_page_unless_zero(page)))
>  			goto isolate_fail;
>  
> -		if (__isolate_lru_page_prepare(page, isolate_mode) != 0)
> +		if (!__isolate_lru_page_prepare(page, isolate_mode))
>  			goto isolate_fail_put;
>  
>  		/* Try isolate the page */
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index c6f94e55c3fe..4d2703c43310 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1536,19 +1536,17 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
>   * page:	page to consider
>   * mode:	one of the LRU isolation modes defined above
>   *
> - * returns 0 on success, -ve errno on failure.
> + * returns true on success, false on failure.
>   */
> -int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode)
> +bool __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode)
>  {
> -	int ret = -EBUSY;
> -
>  	/* Only take pages on the LRU. */
>  	if (!PageLRU(page))
> -		return ret;
> +		return false;
>  
>  	/* Compaction should not handle unevictable pages but CMA can do so */
>  	if (PageUnevictable(page) && !(mode & ISOLATE_UNEVICTABLE))
> -		return ret;
> +		return false;
>  
>  	/*
>  	 * To minimise LRU disruption, the caller can indicate that it only
> @@ -1561,7 +1559,7 @@ int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode)
>  	if (mode & ISOLATE_ASYNC_MIGRATE) {
>  		/* All the caller can do on PageWriteback is block */
>  		if (PageWriteback(page))
> -			return ret;
> +			return false;
>  
>  		if (PageDirty(page)) {
>  			struct address_space *mapping;
> @@ -1577,20 +1575,20 @@ int __isolate_lru_page_prepare(struct page *page, isolate_mode_t mode)
>  			 * from the page cache.
>  			 */
>  			if (!trylock_page(page))
> -				return ret;
> +				return false;
>  
>  			mapping = page_mapping(page);
>  			migrate_dirty = !mapping || mapping->a_ops->migratepage;
>  			unlock_page(page);
>  			if (!migrate_dirty)
> -				return ret;
> +				return false;
>  		}
>  	}
>  
>  	if ((mode & ISOLATE_UNMAPPED) && page_mapped(page))
> -		return ret;
> +		return false;
>  
> -	return 0;
> +	return true;
>  }
>  
>  /*
> @@ -1674,35 +1672,31 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
>  		 * only when the page is being freed somewhere else.
>  		 */
>  		scan += nr_pages;
> -		switch (__isolate_lru_page_prepare(page, mode)) {
> -		case 0:
> -			/*
> -			 * Be careful not to clear PageLRU until after we're
> -			 * sure the page is not being freed elsewhere -- the
> -			 * page release code relies on it.
> -			 */
> -			if (unlikely(!get_page_unless_zero(page)))
> -				goto busy;
> -
> -			if (!TestClearPageLRU(page)) {
> -				/*
> -				 * This page may in other isolation path,
> -				 * but we still hold lru_lock.
> -				 */
> -				put_page(page);
> -				goto busy;
> -			}
> -
> -			nr_taken += nr_pages;
> -			nr_zone_taken[page_zonenum(page)] += nr_pages;
> -			list_move(&page->lru, dst);
> -			break;
> +		if (!__isolate_lru_page_prepare(page, mode)) {
> +			/* It is being freed elsewhere */
> +			list_move(&page->lru, src);
> +			continue;
> +		}
> +		/*
> +		 * Be careful not to clear PageLRU until after we're
> +		 * sure the page is not being freed elsewhere -- the
> +		 * page release code relies on it.
> +		 */
> +		if (unlikely(!get_page_unless_zero(page))) {
> +			list_move(&page->lru, src);
> +			continue;
> +		}
>  
> -		default:
> -busy:
> -			/* else it is being freed elsewhere */
> +		if (!TestClearPageLRU(page)) {
> +			/* Another thread is already isolating this page */
> +			put_page(page);
>  			list_move(&page->lru, src);
> +			continue;
>  		}
> +
> +		nr_taken += nr_pages;
> +		nr_zone_taken[page_zonenum(page)] += nr_pages;
> +		list_move(&page->lru, dst);
>  	}
>  
>  	/*
> 
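
For readers less familiar with this code, here is a minimal, standalone
sketch of the pattern the cleanup moves to. All names below (page_stub,
prepare_ok, scan) are hypothetical stand-ins invented for the example, not
the kernel's: once the "prepare" check returns a bool, every failure case
in the scan loop collapses to a plain continue, so the switch/goto busy
construct is no longer needed.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for __isolate_lru_page_prepare(): true = may isolate. */
struct page_stub { bool on_lru; bool unevictable; };

static bool prepare_ok(const struct page_stub *p, bool allow_unevictable)
{
	if (!p->on_lru)
		return false;		/* not on an LRU list */
	if (p->unevictable && !allow_unevictable)
		return false;		/* mode does not allow unevictable pages */
	return true;
}

/* Scan loop in the new style: each failure is just "continue", no switch/goto. */
static int scan(struct page_stub *pages, int n, bool allow_unevictable)
{
	int taken = 0;

	for (int i = 0; i < n; i++) {
		if (!prepare_ok(&pages[i], allow_unevictable))
			continue;		/* leave the page where it is */
		pages[i].on_lru = false;	/* "isolate" it */
		taken++;
	}
	return taken;
}

int main(void)
{
	struct page_stub pages[] = {
		{ .on_lru = true,  .unevictable = false },
		{ .on_lru = false, .unevictable = false },
		{ .on_lru = true,  .unevictable = true  },
	};

	printf("isolated %d of 3\n", scan(pages, 3, false));	/* prints "isolated 1 of 3" */
	return 0;
}

The kernel version additionally has to move a failed page back to the
source list and drop the reference it took, but the control flow is the
same shape as above.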