Date: Fri, 25 Sep 2020 10:45:52 +0800
From: Wei Yang
To: Vlastimil Babka
Cc: David Hildenbrand, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-hyperv@vger.kernel.org, xen-devel@lists.xenproject.org,
	linux-acpi@vger.kernel.org, Andrew Morton, Alexander Duyck,
	Mel Gorman, Michal Hocko, Dave Hansen, Wei Yang, Oscar Salvador,
	Mike Rapoport, Scott Cheloha, Michael Ellerman
Subject: Re: [PATCH RFC 3/4] mm/page_alloc: always move pages to the tail of
	the freelist in unset_migratetype_isolate()
Message-ID: <20200925024552.GA13540@L-31X9LVDL-1304.local>
Reply-To: Wei Yang
References: <20200916183411.64756-1-david@redhat.com>
	<20200916183411.64756-4-david@redhat.com>
	<9c6cc094-b02a-ac6c-e1ca-370ce7257881@suse.cz>
In-Reply-To: <9c6cc094-b02a-ac6c-e1ca-370ce7257881@suse.cz>

On Thu, Sep 24, 2020 at 01:13:29PM +0200, Vlastimil Babka wrote:
>On 9/16/20 8:34 PM, David Hildenbrand wrote:
>> Page isolation doesn't actually touch the pages, it simply isolates
>> pageblocks and moves all free pages to the MIGRATE_ISOLATE freelist.
>>
>> We already place pages to the tail of the freelists when undoing
>> isolation via __putback_isolated_page(), let's do it in any case
>> (e.g., if order == pageblock_order) and document the behavior.
>>
>> This change results in all pages getting onlined via online_pages() to
>> be placed to the tail of the freelist.
>>
>> Cc: Andrew Morton
>> Cc: Alexander Duyck
>> Cc: Mel Gorman
>> Cc: Michal Hocko
>> Cc: Dave Hansen
>> Cc: Vlastimil Babka
>> Cc: Wei Yang
>> Cc: Oscar Salvador
>> Cc: Mike Rapoport
>> Cc: Scott Cheloha
>> Cc: Michael Ellerman
>> Signed-off-by: David Hildenbrand
>> ---
>>  include/linux/page-isolation.h |  2 ++
>>  mm/page_alloc.c                | 36 +++++++++++++++++++++++++++++-----
>>  mm/page_isolation.c            |  8 ++++++--
>>  3 files changed, 39 insertions(+), 7 deletions(-)
>>
>> diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
>> index 572458016331..a36be2cf4dbb 100644
>> --- a/include/linux/page-isolation.h
>> +++ b/include/linux/page-isolation.h
>> @@ -38,6 +38,8 @@ struct page *has_unmovable_pages(struct zone *zone, struct page *page,
>>  void set_pageblock_migratetype(struct page *page, int migratetype);
>>  int move_freepages_block(struct zone *zone, struct page *page,
>>  			 int migratetype, int *num_movable);
>> +int move_freepages_block_tail(struct zone *zone, struct page *page,
>> +			      int migratetype);
>>
>>  /*
>>   * Changes migrate type in [start_pfn, end_pfn) to be MIGRATE_ISOLATE.
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index bba9a0f60c70..75b0f49b4022 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -899,6 +899,15 @@ static inline void move_to_free_list(struct page *page, struct zone *zone,
>>  	list_move(&page->lru, &area->free_list[migratetype]);
>>  }
>>
>> +/* Used for pages which are on another list */
>> +static inline void move_to_free_list_tail(struct page *page, struct zone *zone,
>> +					  unsigned int order, int migratetype)
>> +{
>> +	struct free_area *area = &zone->free_area[order];
>> +
>> +	list_move_tail(&page->lru, &area->free_list[migratetype]);
>> +}
>
>There are just 3 callers of move_to_free_list() before this patch, I would
>just add the to_tail parameter there instead of a new wrapper. For callers
>with a constant parameter, the inline will eliminate it anyway.
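
If I understand correctly, the suggestion amounts to something like the rough
sketch below (hypothetical, not compile-tested, just to illustrate the shape;
to_tail is named as in the patch):

	/* Used for pages which are on another list; to_tail selects
	 * head vs. tail placement on the target freelist. */
	static inline void move_to_free_list(struct page *page, struct zone *zone,
					     unsigned int order, int migratetype,
					     bool to_tail)
	{
		struct free_area *area = &zone->free_area[order];

		if (to_tail)
			list_move_tail(&page->lru, &area->free_list[migratetype]);
		else
			list_move(&page->lru, &area->free_list[migratetype]);
	}

The three existing callers would then pass false (or true) explicitly, and
since the function is inline, the branch on a constant argument folds away at
compile time.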
Got the same feeling :-)

>
>> +
>>  static inline void del_page_from_free_list(struct page *page, struct zone *zone,
>>  					   unsigned int order)
>>  {
>> @@ -2323,7 +2332,7 @@ static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
>>   */
>>  static int move_freepages(struct zone *zone,
>>  			  struct page *start_page, struct page *end_page,
>> -			  int migratetype, int *num_movable)
>> +			  int migratetype, int *num_movable, bool to_tail)
>>  {
>>  	struct page *page;
>>  	unsigned int order;
>> @@ -2354,7 +2363,10 @@ static int move_freepages(struct zone *zone,
>>  		VM_BUG_ON_PAGE(page_zone(page) != zone, page);
>>
>>  		order = page_order(page);
>> -		move_to_free_list(page, zone, order, migratetype);
>> +		if (to_tail)
>> +			move_to_free_list_tail(page, zone, order, migratetype);
>> +		else
>> +			move_to_free_list(page, zone, order, migratetype);
>>  		page += 1 << order;
>>  		pages_moved += 1 << order;
>>  	}
>> @@ -2362,8 +2374,9 @@ static int move_freepages(struct zone *zone,
>>  	return pages_moved;
>>  }
>>
>> -int move_freepages_block(struct zone *zone, struct page *page,
>> -			 int migratetype, int *num_movable)
>> +static int __move_freepages_block(struct zone *zone, struct page *page,
>> +				  int migratetype, int *num_movable,
>> +				  bool to_tail)
>>  {
>>  	unsigned long start_pfn, end_pfn;
>>  	struct page *start_page, *end_page;
>> @@ -2384,7 +2397,20 @@ int move_freepages_block(struct zone *zone, struct page *page,
>>  		return 0;
>>
>>  	return move_freepages(zone, start_page, end_page, migratetype,
>> -			      num_movable);
>> +			      num_movable, to_tail);
>> +}
>> +
>> +int move_freepages_block(struct zone *zone, struct page *page,
>> +			 int migratetype, int *num_movable)
>> +{
>> +	return __move_freepages_block(zone, page, migratetype, num_movable,
>> +				      false);
>> +}
>> +
>> +int move_freepages_block_tail(struct zone *zone, struct page *page,
>> +			      int migratetype)
>> +{
>> +	return __move_freepages_block(zone, page, migratetype, NULL, true);
>> +}
>
>Likewise, just 5 callers of move_freepages_block(), all in the files you're
>already changing, so no need for these wrappers IMHO.
>
>Thanks,
>Vlastimil

(A rough sketch of this no-wrapper variant follows after the quoted patch
below.)

>
>>  static void change_pageblock_range(struct page *pageblock_page,
>>
>> diff --git a/mm/page_isolation.c b/mm/page_isolation.c
>> index abfe26ad59fd..84aa1d14751d 100644
>> --- a/mm/page_isolation.c
>> +++ b/mm/page_isolation.c
>> @@ -83,7 +83,7 @@ static void unset_migratetype_isolate(struct page *page, unsigned migratetype)
>>  	 * Because freepage with more than pageblock_order on isolated
>>  	 * pageblock is restricted to merge due to freepage counting problem,
>>  	 * it is possible that there is free buddy page.
>> -	 * move_freepages_block() doesn't care of merge so we need other
>> +	 * move_freepages_block*() don't care about merging, so we need another
>>  	 * approach in order to merge them. Isolation and free will make
>>  	 * these pages to be merged.
>>  	 */
>> @@ -106,9 +106,13 @@ static void unset_migratetype_isolate(struct page *page, unsigned migratetype)
>>  	 * If we isolate freepage with more than pageblock_order, there
>>  	 * should be no freepage in the range, so we could avoid costly
>>  	 * pageblock scanning for freepage moving.
>> +	 *
>> +	 * We didn't actually touch any of the isolated pages, so place them
>> +	 * to the tail of the freelists. This is especially relevant during
>> +	 * memory onlining.
>>  	 */
>>  	if (!isolated_page) {
>> -		nr_pages = move_freepages_block(zone, page, migratetype, NULL);
>> +		nr_pages = move_freepages_block_tail(zone, page, migratetype);
>>  		__mod_zone_freepage_state(zone, nr_pages, migratetype);
>>  	}
>>  	set_pageblock_migratetype(page, migratetype);
>>

-- 
Wei Yang
Help you, Help me
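
For reference, the no-wrapper variant of move_freepages_block() mentioned
above might look like this rough sketch (hypothetical, not compile-tested;
the pageblock-bounds computation in the body is unchanged and elided here):

	int move_freepages_block(struct zone *zone, struct page *page,
				 int migratetype, int *num_movable, bool to_tail)
	{
		unsigned long start_pfn, end_pfn;
		struct page *start_page, *end_page;

		/* ... unchanged pageblock-bounds computation ... */

		/* to_tail is passed straight through to move_freepages() */
		return move_freepages(zone, start_page, end_page, migratetype,
				      num_movable, to_tail);
	}

The call site in unset_migratetype_isolate() would then read:

	nr_pages = move_freepages_block(zone, page, migratetype, NULL, true);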