Date: Tue, 15 Sep 2020 14:15:41 +0200
From: Michal Hocko
To: Laurent Dufour
Cc: akpm@linux-foundation.org, David Hildenbrand, Oscar Salvador,
	Greg Kroah-Hartman, linux-mm@kvack.org, "Rafael J. Wysocki",
	nathanl@linux.ibm.com, cheloha@linux.ibm.com, Tony Luck,
	Fenghua Yu, linux-ia64@vger.kernel.org,
	linux-kernel@vger.kernel.org, stable@vger.kernel.org
Subject: Re: [PATCH v3 1/3] mm: replace memmap_context by memplug_context
Message-ID: <20200915121541.GD4649@dhcp22.suse.cz>
References: <20200915094143.79181-1-ldufour@linux.ibm.com>
	<20200915094143.79181-2-ldufour@linux.ibm.com>
In-Reply-To: <20200915094143.79181-2-ldufour@linux.ibm.com>

On Tue 15-09-20 11:41:41, Laurent Dufour wrote:
> The memmap_context is used to detect whether a memory operation is due to a
> hot-add operation or happening at boot time.
> 
> Make it general to the hotplug operation and rename it at memplug_context.

s@memplug_context@meminit_context@g

> There is no functional change introduced by this patch
> 
> Suggested-by: David Hildenbrand
> Signed-off-by: Laurent Dufour
> ---
>  arch/ia64/mm/init.c    |  6 +++---
>  include/linux/mm.h     |  2 +-
>  include/linux/mmzone.h | 11 ++++++++---
>  mm/memory_hotplug.c    |  2 +-
>  mm/page_alloc.c        | 10 +++++-----
>  5 files changed, 18 insertions(+), 13 deletions(-)
> 
> diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
> index 0b3fb4c7af29..8e7b8c6c576e 100644
> --- a/arch/ia64/mm/init.c
> +++ b/arch/ia64/mm/init.c
> @@ -538,7 +538,7 @@ virtual_memmap_init(u64 start, u64 end, void *arg)
>  	if (map_start < map_end)
>  		memmap_init_zone((unsigned long)(map_end - map_start),
>  				 args->nid, args->zone, page_to_pfn(map_start),
> -				 MEMMAP_EARLY, NULL);
> +				 MEMINIT_EARLY, NULL);
>  	return 0;
>  }
>  
> @@ -547,8 +547,8 @@ memmap_init (unsigned long size, int nid, unsigned long zone,
>  	     unsigned long start_pfn)
>  {
>  	if (!vmem_map) {
> -		memmap_init_zone(size, nid, zone, start_pfn, MEMMAP_EARLY,
> -				 NULL);
> +		memmap_init_zone(size, nid, zone, start_pfn,
> +				 MEMINIT_EARLY, NULL);
>  	} else {
>  		struct page *start;
>  		struct memmap_init_callback_data args;
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 1983e08f5906..e942f91ed155 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2409,7 +2409,7 @@ extern int __meminit __early_pfn_to_nid(unsigned long pfn,
>  
>  extern void set_dma_reserve(unsigned long new_dma_reserve);
>  extern void memmap_init_zone(unsigned long, int, unsigned long, unsigned long,
> -		enum memmap_context, struct vmem_altmap *);
> +		enum meminit_context, struct vmem_altmap *);
>  extern void setup_per_zone_wmarks(void);
>  extern int __meminit init_per_zone_wmark_min(void);
>  extern void mem_init(void);
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 8379432f4f2f..0f7a4ff4b059 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -824,10 +824,15 @@ bool zone_watermark_ok(struct zone *z, unsigned int order,
>  		unsigned int alloc_flags);
>  bool zone_watermark_ok_safe(struct zone *z, unsigned int order,
>  		unsigned long mark, int highest_zoneidx);
> -enum memmap_context {
> -	MEMMAP_EARLY,
> -	MEMMAP_HOTPLUG,
> +/*
> + * Memory initialization context, use to differentiate memory added by
> + * the platform statically or via memory hotplug interface.
> + */
> +enum meminit_context {
> +	MEMINIT_EARLY,
> +	MEMINIT_HOTPLUG,
>  };
> +
>  extern void init_currently_empty_zone(struct zone *zone, unsigned long start_pfn,
>  		unsigned long size);
>  
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index e9d5ab5d3ca0..fc25886ad719 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -729,7 +729,7 @@ void __ref move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
>  	 * are reserved so nobody should be touching them so we should be safe
>  	 */
>  	memmap_init_zone(nr_pages, nid, zone_idx(zone), start_pfn,
> -			 MEMMAP_HOTPLUG, altmap);
> +			 MEMINIT_HOTPLUG, altmap);
>  
>  	set_zone_contiguous(zone);
>  }
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index fab5e97dc9ca..5661fa164f13 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -5975,7 +5975,7 @@ overlap_memmap_init(unsigned long zone, unsigned long *pfn)
>   * done. Non-atomic initialization, single-pass.
>   */
>  void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
> -		unsigned long start_pfn, enum memmap_context context,
> +		unsigned long start_pfn, enum meminit_context context,
>  		struct vmem_altmap *altmap)
>  {
>  	unsigned long pfn, end_pfn = start_pfn + size;
> @@ -6007,7 +6007,7 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
>  		 * There can be holes in boot-time mem_map[]s handed to this
>  		 * function. They do not exist on hotplugged memory.
>  		 */
> -		if (context == MEMMAP_EARLY) {
> +		if (context == MEMINIT_EARLY) {
>  			if (overlap_memmap_init(zone, &pfn))
>  				continue;
>  			if (defer_init(nid, pfn, end_pfn))
> @@ -6016,7 +6016,7 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
>  
>  		page = pfn_to_page(pfn);
>  		__init_single_page(page, pfn, zone, nid);
> -		if (context == MEMMAP_HOTPLUG)
> +		if (context == MEMINIT_HOTPLUG)
>  			__SetPageReserved(page);
>  
>  		/*
> @@ -6099,7 +6099,7 @@ void __ref memmap_init_zone_device(struct zone *zone,
>  		 * check here not to call set_pageblock_migratetype() against
>  		 * pfn out of zone.
>  		 *
> -		 * Please note that MEMMAP_HOTPLUG path doesn't clear memmap
> +		 * Please note that MEMINIT_HOTPLUG path doesn't clear memmap
>  		 * because this is done early in section_activate()
>  		 */
>  		if (!(pfn & (pageblock_nr_pages - 1))) {
> @@ -6137,7 +6137,7 @@ void __meminit __weak memmap_init(unsigned long size, int nid,
>  		if (end_pfn > start_pfn) {
>  			size = end_pfn - start_pfn;
>  			memmap_init_zone(size, nid, zone, start_pfn,
> -					 MEMMAP_EARLY, NULL);
> +					 MEMINIT_EARLY, NULL);
>  		}
>  	}
>  }
> -- 
> 2.28.0

-- 
Michal Hocko
SUSE Labs