Date: Mon, 10 Feb 2020 07:23:19 +0800
From: Wei Yang
To: Baoquan He
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, akpm@linux-foundation.org,
    dan.j.williams@intel.com, richardw.yang@linux.intel.com, david@redhat.com
Subject: Re: [PATCH 3/7] mm/sparse.c: only use subsection map in VMEMMAP case
Message-ID: <20200209232319.GA8705@richard>
Reply-To: Wei Yang
References: <20200209104826.3385-1-bhe@redhat.com>
 <20200209104826.3385-4-bhe@redhat.com>
In-Reply-To: <20200209104826.3385-4-bhe@redhat.com>

On Sun, Feb 09, 2020 at 06:48:22PM +0800, Baoquan He wrote:
>Currently, the subsection map is used whenever SPARSEMEM is enabled,
>in both the VMEMMAP case and the !VMEMMAP case. However, subsection
>hotplug is not supported at all in the SPARSEMEM|!VMEMMAP case, so the
>subsection map is unnecessary and misleading there. Let's adjust the
>code to only allow the subsection map to be used in the
>SPARSEMEM|VMEMMAP case.
>
>Signed-off-by: Baoquan He

The change looks good to me.

Reviewed-by: Wei Yang

>---
> include/linux/mmzone.h |   2 +
> mm/sparse.c            | 231 ++++++++++++++++++++++-------------------
> 2 files changed, 124 insertions(+), 109 deletions(-)
>
>diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
>index 462f6873905a..fc0de3a9a51e 100644
>--- a/include/linux/mmzone.h
>+++ b/include/linux/mmzone.h
>@@ -1185,7 +1185,9 @@ static inline unsigned long section_nr_to_pfn(unsigned long sec)
> #define SUBSECTION_ALIGN_DOWN(pfn) ((pfn) & PAGE_SUBSECTION_MASK)
> 
> struct mem_section_usage {
>+#ifdef CONFIG_SPARSEMEM_VMEMMAP
> 	DECLARE_BITMAP(subsection_map, SUBSECTIONS_PER_SECTION);
>+#endif
> 	/* See declaration of similar field in struct zone */
> 	unsigned long pageblock_flags[0];
> };
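
(Side note for readers, not a comment on the patch itself: with the default
x86_64 geometry of 4K pages, 128M sections and 2M subsections, the bitmap
above is 64 bits per section, one bit per 2M subsection. The constants in
the sketch below are hard-coded under that assumption rather than taken from
the kernel headers, and the function name is made up; the index math is the
same as what subsection_mask_set()/subsection_map_init() use in the
mm/sparse.c changes further down.)

/* Stand-alone illustration, not kernel code. */
#include <stdio.h>

#define PAGE_SHIFT		12	/* assumed: 4K pages */
#define SECTION_SIZE_BITS	27	/* assumed: 128M sections (x86_64 default) */
#define SUBSECTION_SHIFT	21	/* assumed: 2M subsections */

#define PAGES_PER_SECTION	(1UL << (SECTION_SIZE_BITS - PAGE_SHIFT))	/* 32768 */
#define PAGES_PER_SUBSECTION	(1UL << (SUBSECTION_SHIFT - PAGE_SHIFT))	/* 512 */
#define SUBSECTIONS_PER_SECTION	(1UL << (SECTION_SIZE_BITS - SUBSECTION_SHIFT))	/* 64 */

/* Offset of the pfn inside its section, counted in subsections. */
static unsigned long subsection_idx(unsigned long pfn)
{
	return (pfn & (PAGES_PER_SECTION - 1)) / PAGES_PER_SUBSECTION;
}

int main(void)
{
	unsigned long pfn = 0x48200;	/* arbitrary example pfn */

	/* Prints: section 9, subsection bit 1 of 64 */
	printf("section %lu, subsection bit %lu of %lu\n",
	       pfn / PAGES_PER_SECTION, subsection_idx(pfn),
	       SUBSECTIONS_PER_SECTION);
	return 0;
}
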
>diff --git a/mm/sparse.c b/mm/sparse.c
>index 696f6b9f706e..cf55d272d0a9 100644
>--- a/mm/sparse.c
>+++ b/mm/sparse.c
>@@ -209,41 +209,6 @@ static inline unsigned long first_present_section_nr(void)
> 	return next_present_section_nr(-1);
> }
> 
>-static void subsection_mask_set(unsigned long *map, unsigned long pfn,
>-		unsigned long nr_pages)
>-{
>-	int idx = subsection_map_index(pfn);
>-	int end = subsection_map_index(pfn + nr_pages - 1);
>-
>-	bitmap_set(map, idx, end - idx + 1);
>-}
>-
>-void __init subsection_map_init(unsigned long pfn, unsigned long nr_pages)
>-{
>-	int end_sec = pfn_to_section_nr(pfn + nr_pages - 1);
>-	unsigned long nr, start_sec = pfn_to_section_nr(pfn);
>-
>-	if (!nr_pages)
>-		return;
>-
>-	for (nr = start_sec; nr <= end_sec; nr++) {
>-		struct mem_section *ms;
>-		unsigned long pfns;
>-
>-		pfns = min(nr_pages, PAGES_PER_SECTION
>-				- (pfn & ~PAGE_SECTION_MASK));
>-		ms = __nr_to_section(nr);
>-		subsection_mask_set(ms->usage->subsection_map, pfn, pfns);
>-
>-		pr_debug("%s: sec: %lu pfns: %lu set(%d, %d)\n", __func__, nr,
>-				pfns, subsection_map_index(pfn),
>-				subsection_map_index(pfn + pfns - 1));
>-
>-		pfn += pfns;
>-		nr_pages -= pfns;
>-	}
>-}
>-
> /* Record a memory area against a node. */
> void __init memory_present(int nid, unsigned long start, unsigned long end)
> {
>@@ -432,12 +397,134 @@ static unsigned long __init section_map_size(void)
> 	return ALIGN(sizeof(struct page) * PAGES_PER_SECTION, PMD_SIZE);
> }
> 
>+static void subsection_mask_set(unsigned long *map, unsigned long pfn,
>+		unsigned long nr_pages)
>+{
>+	int idx = subsection_map_index(pfn);
>+	int end = subsection_map_index(pfn + nr_pages - 1);
>+
>+	bitmap_set(map, idx, end - idx + 1);
>+}
>+
>+void __init subsection_map_init(unsigned long pfn, unsigned long nr_pages)
>+{
>+	int end_sec = pfn_to_section_nr(pfn + nr_pages - 1);
>+	unsigned long nr, start_sec = pfn_to_section_nr(pfn);
>+
>+	if (!nr_pages)
>+		return;
>+
>+	for (nr = start_sec; nr <= end_sec; nr++) {
>+		struct mem_section *ms;
>+		unsigned long pfns;
>+
>+		pfns = min(nr_pages, PAGES_PER_SECTION
>+				- (pfn & ~PAGE_SECTION_MASK));
>+		ms = __nr_to_section(nr);
>+		subsection_mask_set(ms->usage->subsection_map, pfn, pfns);
>+
>+		pr_debug("%s: sec: %lu pfns: %lu set(%d, %d)\n", __func__, nr,
>+				pfns, subsection_map_index(pfn),
>+				subsection_map_index(pfn + pfns - 1));
>+
>+		pfn += pfns;
>+		nr_pages -= pfns;
>+	}
>+}
>+
>+/**
>+ * clear_subsection_map - Clear subsection map of one memory region
>+ *
>+ * @pfn - start pfn of the memory range
>+ * @nr_pages - number of pfns to add in the region
>+ *
>+ * This is only intended for hotplug, and clear the related subsection
>+ * map inside one section.
>+ *
>+ * Return:
>+ * * -EINVAL - Section already deactived.
>+ * * 0 - Subsection map is emptied.
>+ * * 1 - Subsection map is not empty.
>+ */
>+static int clear_subsection_map(unsigned long pfn, unsigned long nr_pages)
>+{
>+	DECLARE_BITMAP(map, SUBSECTIONS_PER_SECTION) = { 0 };
>+	DECLARE_BITMAP(tmp, SUBSECTIONS_PER_SECTION) = { 0 };
>+	struct mem_section *ms = __pfn_to_section(pfn);
>+	unsigned long *subsection_map = ms->usage
>+		? &ms->usage->subsection_map[0] : NULL;
>+
>+	subsection_mask_set(map, pfn, nr_pages);
>+	if (subsection_map)
>+		bitmap_and(tmp, map, subsection_map, SUBSECTIONS_PER_SECTION);
>+
>+	if (WARN(!subsection_map || !bitmap_equal(tmp, map, SUBSECTIONS_PER_SECTION),
>+			"section already deactivated (%#lx + %ld)\n",
>+			pfn, nr_pages))
>+		return -EINVAL;
>+
>+	bitmap_xor(subsection_map, map, subsection_map, SUBSECTIONS_PER_SECTION);
>+
>+	if (bitmap_empty(subsection_map, SUBSECTIONS_PER_SECTION))
>+		return 0;
>+
>+	return 1;
>+}
>+
>+/**
>+ * fill_subsection_map - fill subsection map of a memory region
>+ * @pfn - start pfn of the memory range
>+ * @nr_pages - number of pfns to add in the region
>+ *
>+ * This clears the related subsection map inside one section, and only
>+ * intended for hotplug.
>+ *
>+ * Return:
>+ * * 0 - On success.
>+ * * -EINVAL - Invalid memory region.
>+ * * -EEXIST - Subsection map has been set.
>+ */
>+static int fill_subsection_map(unsigned long pfn, unsigned long nr_pages)
>+{
>+	struct mem_section *ms = __pfn_to_section(pfn);
>+	DECLARE_BITMAP(map, SUBSECTIONS_PER_SECTION) = { 0 };
>+	unsigned long *subsection_map;
>+	int rc = 0;
>+
>+	subsection_mask_set(map, pfn, nr_pages);
>+
>+	subsection_map = &ms->usage->subsection_map[0];
>+
>+	if (bitmap_empty(map, SUBSECTIONS_PER_SECTION))
>+		rc = -EINVAL;
>+	else if (bitmap_intersects(map, subsection_map, SUBSECTIONS_PER_SECTION))
>+		rc = -EEXIST;
>+	else
>+		bitmap_or(subsection_map, map, subsection_map,
>+				SUBSECTIONS_PER_SECTION);
>+
>+	return rc;
>+}
>+
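
Also not part of the patch: to double-check the return-value contract of the
two helpers above, here is the user-space model I used, with a single 64-bit
word standing in for the per-section bitmap (fine while
SUBSECTIONS_PER_SECTION <= 64, as in the default geometry) and -1/-2 standing
in for -EINVAL/-EEXIST. All names in it are made up for illustration.

/* User-space model of the fill/clear contract above -- not kernel code. */
#include <assert.h>
#include <stdint.h>

static uint64_t mask_of(int idx, int nr)	/* bits idx .. idx+nr-1 */
{
	return (nr >= 64 ? ~0ULL : (1ULL << nr) - 1) << idx;
}

/* Like fill_subsection_map(): reject an empty range, reject overlap with
 * already-present subsections, otherwise OR the new bits in. */
static int model_fill(uint64_t *map, int idx, int nr)
{
	uint64_t m = mask_of(idx, nr);

	if (!m)
		return -1;		/* stands in for -EINVAL */
	if (*map & m)
		return -2;		/* stands in for -EEXIST */
	*map |= m;
	return 0;
}

/* Like clear_subsection_map(): the range must be fully set; clear it,
 * return 0 once the map is empty, 1 while other subsections remain. */
static int model_clear(uint64_t *map, int idx, int nr)
{
	uint64_t m = mask_of(idx, nr);

	if ((*map & m) != m)
		return -1;		/* stands in for -EINVAL */
	*map ^= m;
	return *map ? 1 : 0;
}

int main(void)
{
	uint64_t map = 0;

	assert(model_fill(&map, 0, 16) == 0);	/* plug subsections 0-15 */
	assert(model_fill(&map, 8, 4) == -2);	/* overlap is rejected */
	assert(model_clear(&map, 0, 8) == 1);	/* map not yet empty */
	assert(model_clear(&map, 8, 8) == 0);	/* map now empty */
	return 0;
}

(The real helpers operate on DECLARE_BITMAP() arrays via bitmap_or(),
bitmap_xor() and friends, but for a single-word map they should behave the
same way as modelled here.)
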
> #else
> static unsigned long __init section_map_size(void)
> {
> 	return PAGE_ALIGN(sizeof(struct page) * PAGES_PER_SECTION);
> }
> 
>+void __init subsection_map_init(unsigned long pfn, unsigned long nr_pages)
>+{
>+}
>+
>+static int clear_subsection_map(unsigned long pfn, unsigned long nr_pages)
>+{
>+	return 0;
>+}
>+static int fill_subsection_map(unsigned long pfn, unsigned long nr_pages)
>+{
>+	return 0;
>+}
>+
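
One small observation on the !VMEMMAP stubs just above, phrased as a made-up
caller rather than the real section_activate()/section_deactivate() paths,
which are not visible in this hunk: both stubs returning 0 means a caller
never sees "subsections remain" or -EEXIST, so hotplug keeps operating on
whole sections in that configuration, which seems to be exactly the intent
described in the changelog.

/* Illustration only: stand-ins for the !VMEMMAP stubs, driven by a
 * hypothetical caller; the real mm/sparse.c callers are not shown here. */
#include <stdio.h>

static int stub_clear_subsection_map(unsigned long pfn, unsigned long nr_pages)
{
	return 0;	/* "map now empty": tear down the whole section */
}

static int stub_fill_subsection_map(unsigned long pfn, unsigned long nr_pages)
{
	return 0;	/* "success": never -EEXIST, the section is the unit */
}

int main(void)
{
	unsigned long pfn = 0, nr_pages = 32768;	/* one full section (assumed size) */

	if (!stub_fill_subsection_map(pfn, nr_pages) &&
	    !stub_clear_subsection_map(pfn, nr_pages))
		printf("!VMEMMAP: add/remove always covers the full section\n");
	return 0;
}
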
> struct page __init *__populate_section_memmap(unsigned long pfn,
> 		unsigned long nr_pages, int nid, struct vmem_altmap *altmap)
> {
>@@ -726,45 +813,6 @@ static void free_map_bootmem(struct page *memmap)
> }
> #endif /* CONFIG_SPARSEMEM_VMEMMAP */
> 
>-/**
>- * clear_subsection_map - Clear subsection map of one memory region
>- *
>- * @pfn - start pfn of the memory range
>- * @nr_pages - number of pfns to add in the region
>- *
>- * This is only intended for hotplug, and clear the related subsection
>- * map inside one section.
>- *
>- * Return:
>- * * -EINVAL - Section already deactived.
>- * * 0 - Subsection map is emptied.
>- * * 1 - Subsection map is not empty.
>- */
>-static int clear_subsection_map(unsigned long pfn, unsigned long nr_pages)
>-{
>-	DECLARE_BITMAP(map, SUBSECTIONS_PER_SECTION) = { 0 };
>-	DECLARE_BITMAP(tmp, SUBSECTIONS_PER_SECTION) = { 0 };
>-	struct mem_section *ms = __pfn_to_section(pfn);
>-	unsigned long *subsection_map = ms->usage
>-		? &ms->usage->subsection_map[0] : NULL;
>-
>-	subsection_mask_set(map, pfn, nr_pages);
>-	if (subsection_map)
>-		bitmap_and(tmp, map, subsection_map, SUBSECTIONS_PER_SECTION);
>-
>-	if (WARN(!subsection_map || !bitmap_equal(tmp, map, SUBSECTIONS_PER_SECTION),
>-			"section already deactivated (%#lx + %ld)\n",
>-			pfn, nr_pages))
>-		return -EINVAL;
>-
>-	bitmap_xor(subsection_map, map, subsection_map, SUBSECTIONS_PER_SECTION);
>-
>-	if (bitmap_empty(subsection_map, SUBSECTIONS_PER_SECTION))
>-		return 0;
>-
>-	return 1;
>-}
>-
> static void section_deactivate(unsigned long pfn, unsigned long nr_pages,
> 		struct vmem_altmap *altmap)
> {
>@@ -818,41 +866,6 @@ static void section_deactivate(unsigned long pfn, unsigned long nr_pages,
> 	depopulate_section_memmap(pfn, nr_pages, altmap);
> }
> 
>-/**
>- * fill_subsection_map - fill subsection map of a memory region
>- * @pfn - start pfn of the memory range
>- * @nr_pages - number of pfns to add in the region
>- *
>- * This clears the related subsection map inside one section, and only
>- * intended for hotplug.
>- *
>- * Return:
>- * * 0 - On success.
>- * * -EINVAL - Invalid memory region.
>- * * -EEXIST - Subsection map has been set.
>- */
>-static int fill_subsection_map(unsigned long pfn, unsigned long nr_pages)
>-{
>-	struct mem_section *ms = __pfn_to_section(pfn);
>-	DECLARE_BITMAP(map, SUBSECTIONS_PER_SECTION) = { 0 };
>-	unsigned long *subsection_map;
>-	int rc = 0;
>-
>-	subsection_mask_set(map, pfn, nr_pages);
>-
>-	subsection_map = &ms->usage->subsection_map[0];
>-
>-	if (bitmap_empty(map, SUBSECTIONS_PER_SECTION))
>-		rc = -EINVAL;
>-	else if (bitmap_intersects(map, subsection_map, SUBSECTIONS_PER_SECTION))
>-		rc = -EEXIST;
>-	else
>-		bitmap_or(subsection_map, map, subsection_map,
>-				SUBSECTIONS_PER_SECTION);
>-
>-	return rc;
>-}
>-
> static struct page * __meminit section_activate(int nid, unsigned long pfn,
> 		unsigned long nr_pages, struct vmem_altmap *altmap)
> {
>-- 
>2.17.2

-- 
Wei Yang
Help you, Help me