Date: Mon, 22 Feb 2021 13:12:20 +0200
From: Mike Rapoport
To: Andrew Morton
Cc: Andrea Arcangeli, Baoquan He, Borislav Petkov, Chris Wilson,
    David Hildenbrand,
Peter Anvin" , Ingo Molnar , Linus Torvalds , =?utf-8?Q?=C5=81ukasz?= Majczak , Mel Gorman , Michal Hocko , Mike Rapoport , Qian Cai , "Sarvela, Tomi P" , Thomas Gleixner , Vlastimil Babka , linux-kernel@vger.kernel.org, linux-mm@kvack.org, stable@vger.kernel.org, x86@kernel.org Subject: Re: [PATCH v6 1/1] mm/page_alloc.c: refactor initialization of struct page for holes in memory layout Message-ID: <20210222111220.GH1447004@kernel.org> References: <20210222105728.28636-1-rppt@kernel.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20210222105728.28636-1-rppt@kernel.org> Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Somehow I've managed to break the threading, the cover letter is here: https://lore.kernel.org/lkml/20210222105400.28583-1-rppt@kernel.org On Mon, Feb 22, 2021 at 12:57:28PM +0200, Mike Rapoport wrote: > From: Mike Rapoport > > There could be struct pages that are not backed by actual physical memory. > This can happen when the actual memory bank is not a multiple of > SECTION_SIZE or when an architecture does not register memory holes > reserved by the firmware as memblock.memory. > > Such pages are currently initialized using init_unavailable_mem() function > that iterates through PFNs in holes in memblock.memory and if there is a > struct page corresponding to a PFN, the fields of this page are set to > default values and it is marked as Reserved. > > init_unavailable_mem() does not take into account zone and node the page > belongs to and sets both zone and node links in struct page to zero. > > Before commit 73a6e474cb37 ("mm: memmap_init: iterate over memblock regions > rather that check each PFN") the holes inside a zone were re-initialized > during memmap_init() and got their zone/node links right. However, after > that commit nothing updates the struct pages representing such holes. > > On a system that has firmware reserved holes in a zone above ZONE_DMA, for > instance in a configuration below: > > # grep -A1 E820 /proc/iomem > 7a17b000-7a216fff : Unknown E820 type > 7a217000-7bffffff : System RAM > > unset zone link in struct page will trigger > > VM_BUG_ON_PAGE(!zone_spans_pfn(page_zone(page), pfn), page); > > because there are pages in both ZONE_DMA32 and ZONE_DMA (unset zone link > in struct page) in the same pageblock. > > Interleave initialization of the unavailable pages with the normal > initialization of memory map, so that zone and node information will be > properly set on struct pages that are not backed by the actual memory. > > With this change the pages for holes inside a zone will get proper > zone/node links and the pages that are not spanned by any node will get > links to the adjacent zone/node. 
> 
> Fixes: 73a6e474cb37 ("mm: memmap_init: iterate over memblock regions rather that check each PFN")
> Signed-off-by: Mike Rapoport
> Reported-by: Qian Cai
> Reported-by: Andrea Arcangeli
> Cc: Baoquan He
> Cc: David Hildenbrand
> Cc: Mel Gorman
> Cc: Michal Hocko
> Cc: Qian Cai
> Cc: Vlastimil Babka
> ---
>  mm/page_alloc.c | 144 ++++++++++++++++++++----------------------------
>  1 file changed, 61 insertions(+), 83 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 3e93f8b29bae..1f1db70b7789 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -6280,12 +6280,60 @@ static void __meminit zone_init_free_lists(struct zone *zone)
>  	}
>  }
>  
> +#if !defined(CONFIG_FLAT_NODE_MEM_MAP)
> +/*
> + * Only struct pages that correspond to ranges defined by memblock.memory
> + * are zeroed and initialized by going through __init_single_page() during
> + * memmap_init_zone().
> + *
> + * But, there could be struct pages that correspond to holes in
> + * memblock.memory. This can happen because of the following reasons:
> + *  - physical memory bank size is not necessarily the exact multiple of
> + *    the arbitrary section size
> + *  - early reserved memory may not be listed in memblock.memory
> + *  - memory layouts defined with memmap= kernel parameter may not align
> + *    nicely with memmap sections
> + *
> + * Explicitly initialize those struct pages so that:
> + *  - PG_reserved is set
> + *  - zone and node links point to zone and node that span the page
> + */
> +static u64 __meminit init_unavailable_range(unsigned long spfn,
> +					    unsigned long epfn,
> +					    int zone, int node)
> +{
> +	unsigned long pfn;
> +	u64 pgcnt = 0;
> +
> +	for (pfn = spfn; pfn < epfn; pfn++) {
> +		if (!pfn_valid(ALIGN_DOWN(pfn, pageblock_nr_pages))) {
> +			pfn = ALIGN_DOWN(pfn, pageblock_nr_pages)
> +			      + pageblock_nr_pages - 1;
> +			continue;
> +		}
> +		__init_single_page(pfn_to_page(pfn), pfn, zone, node);
> +		__SetPageReserved(pfn_to_page(pfn));
> +		pgcnt++;
> +	}
> +
> +	return pgcnt;
> +}
> +#else
> +static inline u64 init_unavailable_range(unsigned long spfn,
> +					 unsigned long epfn,
> +					 int zone, int node)
> +{
> +	return 0;
> +}
> +#endif
> +
>  void __meminit __weak memmap_init_zone(struct zone *zone)
>  {
>  	unsigned long zone_start_pfn = zone->zone_start_pfn;
>  	unsigned long zone_end_pfn = zone_start_pfn + zone->spanned_pages;
>  	int i, nid = zone_to_nid(zone), zone_id = zone_idx(zone);
> +	static unsigned long hole_pfn = 0;
>  	unsigned long start_pfn, end_pfn;
> +	u64 pgcnt = 0;
>  
>  	for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
>  		start_pfn = clamp(start_pfn, zone_start_pfn, zone_end_pfn);
> @@ -6295,7 +6343,20 @@ void __meminit __weak memmap_init_zone(struct zone *zone)
>  		memmap_init_range(end_pfn - start_pfn, nid,
>  				  zone_id, start_pfn, zone_end_pfn,
>  				  MEMINIT_EARLY, NULL, MIGRATE_MOVABLE);
> +
> +		if (hole_pfn < start_pfn)
> +			pgcnt += init_unavailable_range(hole_pfn, start_pfn,
> +							zone_id, nid);
> +		hole_pfn = end_pfn;
>  	}
> +
> +	if (hole_pfn < zone_end_pfn)
> +		pgcnt += init_unavailable_range(hole_pfn, zone_end_pfn,
> +						zone_id, nid);
> +
> +	if (pgcnt)
> +		pr_info("  %s zone: %lld pages in unavailable ranges\n",
> +			zone->name, pgcnt);
>  }
>  
>  static int zone_batchsize(struct zone *zone)
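As a side note, the pageblock-aligned pfn_valid() check in
init_unavailable_range() above skips a whole pageblock at once when its
memmap does not exist. Here is a standalone sketch of just that skip
logic, with pfn_valid() stubbed out and an arbitrary pageblock size:

#include <stdbool.h>
#include <stdio.h>

#define PAGEBLOCK_NR_PAGES 8UL			/* made-up example size */
#define ALIGN_DOWN(x, a)   ((x) & ~((a) - 1))

/* Stub: pretend the second pageblock (PFNs 8-15) has no memmap. */
static bool pfn_valid(unsigned long pfn)
{
	return ALIGN_DOWN(pfn, PAGEBLOCK_NR_PAGES) != 8;
}

int main(void)
{
	unsigned long pfn;

	for (pfn = 0; pfn < 24; pfn++) {
		if (!pfn_valid(ALIGN_DOWN(pfn, PAGEBLOCK_NR_PAGES))) {
			/*
			 * Jump to the last PFN of this pageblock; the
			 * loop increment then moves to the next block.
			 */
			pfn = ALIGN_DOWN(pfn, PAGEBLOCK_NR_PAGES)
			      + PAGEBLOCK_NR_PAGES - 1;
			continue;
		}
		printf("init pfn %lu\n", pfn);
	}
	return 0;
}

With the second pageblock marked invalid, the loop prints PFNs 0-7 and
16-23 and never touches 8-15.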
> @@ -7092,88 +7153,6 @@ void __init free_area_init_memoryless_node(int nid)
>  	free_area_init_node(nid);
>  }
>  
> -#if !defined(CONFIG_FLAT_NODE_MEM_MAP)
> -/*
> - * Initialize all valid struct pages in the range [spfn, epfn) and mark them
> - * PageReserved(). Return the number of struct pages that were initialized.
> - */
> -static u64 __init init_unavailable_range(unsigned long spfn, unsigned long epfn)
> -{
> -	unsigned long pfn;
> -	u64 pgcnt = 0;
> -
> -	for (pfn = spfn; pfn < epfn; pfn++) {
> -		if (!pfn_valid(ALIGN_DOWN(pfn, pageblock_nr_pages))) {
> -			pfn = ALIGN_DOWN(pfn, pageblock_nr_pages)
> -			      + pageblock_nr_pages - 1;
> -			continue;
> -		}
> -		/*
> -		 * Use a fake node/zone (0) for now. Some of these pages
> -		 * (in memblock.reserved but not in memblock.memory) will
> -		 * get re-initialized via reserve_bootmem_region() later.
> -		 */
> -		__init_single_page(pfn_to_page(pfn), pfn, 0, 0);
> -		__SetPageReserved(pfn_to_page(pfn));
> -		pgcnt++;
> -	}
> -
> -	return pgcnt;
> -}
> -
> -/*
> - * Only struct pages that are backed by physical memory are zeroed and
> - * initialized by going through __init_single_page(). But, there are some
> - * struct pages which are reserved in memblock allocator and their fields
> - * may be accessed (for example page_to_pfn() on some configuration accesses
> - * flags). We must explicitly initialize those struct pages.
> - *
> - * This function also addresses a similar issue where struct pages are left
> - * uninitialized because the physical address range is not covered by
> - * memblock.memory or memblock.reserved. That could happen when memblock
> - * layout is manually configured via memmap=, or when the highest physical
> - * address (max_pfn) does not end on a section boundary.
> - */
> -static void __init init_unavailable_mem(void)
> -{
> -	phys_addr_t start, end;
> -	u64 i, pgcnt;
> -	phys_addr_t next = 0;
> -
> -	/*
> -	 * Loop through unavailable ranges not covered by memblock.memory.
> -	 */
> -	pgcnt = 0;
> -	for_each_mem_range(i, &start, &end) {
> -		if (next < start)
> -			pgcnt += init_unavailable_range(PFN_DOWN(next),
> -							PFN_UP(start));
> -		next = end;
> -	}
> -
> -	/*
> -	 * Early sections always have a fully populated memmap for the whole
> -	 * section - see pfn_valid(). If the last section has holes at the
> -	 * end and that section is marked "online", the memmap will be
> -	 * considered initialized. Make sure that memmap has a well defined
> -	 * state.
> -	 */
> -	pgcnt += init_unavailable_range(PFN_DOWN(next),
> -					round_up(max_pfn, PAGES_PER_SECTION));
> -
> -	/*
> -	 * Struct pages that do not have backing memory. This could be because
> -	 * firmware is using some of this memory, or for some other reasons.
> -	 */
> -	if (pgcnt)
> -		pr_info("Zeroed struct page in unavailable ranges: %lld pages", pgcnt);
> -}
> -#else
> -static inline void __init init_unavailable_mem(void)
> -{
> -}
> -#endif /* !CONFIG_FLAT_NODE_MEM_MAP */
> -
>  #if MAX_NUMNODES > 1
>  /*
>   * Figure out the number of possible node ids.
> @@ -7597,7 +7576,6 @@ void __init free_area_init(unsigned long *max_zone_pfn)
>  	/* Initialise every node */
>  	mminit_verify_pageflags_layout();
>  	setup_nr_node_ids();
> -	init_unavailable_mem();
>  	for_each_online_node(nid) {
>  		pg_data_t *pgdat = NODE_DATA(nid);
>  		free_area_init_node(nid);
> -- 
> 2.28.0
> 

-- 
Sincerely yours,
Mike.