Date: Tue, 23 Feb 2021 11:48:02 +0200
From: Mike Rapoport
To: David Hildenbrand
Cc: Andrew Morton, Andrea Arcangeli, Baoquan He, Borislav Petkov,
	Chris Wilson,
Peter Anvin" , Ingo Molnar , Linus Torvalds , =?utf-8?Q?=C5=81ukasz?= Majczak , Mel Gorman , Michal Hocko , Mike Rapoport , Qian Cai , "Sarvela, Tomi P" , Thomas Gleixner , Vlastimil Babka , linux-kernel@vger.kernel.org, linux-mm@kvack.org, stable@vger.kernel.org, x86@kernel.org Subject: Re: [PATCH v6 1/1] mm/page_alloc.c: refactor initialization of struct page for holes in memory layout Message-ID: <20210223094802.GI1447004@kernel.org> References: <20210222105728.28636-1-rppt@kernel.org> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Tue, Feb 23, 2021 at 09:04:19AM +0100, David Hildenbrand wrote: > On 22.02.21 11:57, Mike Rapoport wrote: > > From: Mike Rapoport > > > > There could be struct pages that are not backed by actual physical memory. > > This can happen when the actual memory bank is not a multiple of > > SECTION_SIZE or when an architecture does not register memory holes > > reserved by the firmware as memblock.memory. > > > > Such pages are currently initialized using init_unavailable_mem() function > > that iterates through PFNs in holes in memblock.memory and if there is a > > struct page corresponding to a PFN, the fields of this page are set to > > default values and it is marked as Reserved. > > > > init_unavailable_mem() does not take into account zone and node the page > > belongs to and sets both zone and node links in struct page to zero. > > > > Before commit 73a6e474cb37 ("mm: memmap_init: iterate over memblock regions > > rather that check each PFN") the holes inside a zone were re-initialized > > during memmap_init() and got their zone/node links right. However, after > > that commit nothing updates the struct pages representing such holes. > > > > On a system that has firmware reserved holes in a zone above ZONE_DMA, for > > instance in a configuration below: > > > > # grep -A1 E820 /proc/iomem > > 7a17b000-7a216fff : Unknown E820 type > > 7a217000-7bffffff : System RAM > > > > unset zone link in struct page will trigger > > > > VM_BUG_ON_PAGE(!zone_spans_pfn(page_zone(page), pfn), page); > > > > because there are pages in both ZONE_DMA32 and ZONE_DMA (unset zone link > > in struct page) in the same pageblock. > > > > Interleave initialization of the unavailable pages with the normal > > initialization of memory map, so that zone and node information will be > > properly set on struct pages that are not backed by the actual memory. > > > > With this change the pages for holes inside a zone will get proper > > zone/node links and the pages that are not spanned by any node will get > > links to the adjacent zone/node. > > Does this include pages in the last section has handled by ... > ... > > - /* > > - * Early sections always have a fully populated memmap for the whole > > - * section - see pfn_valid(). If the last section has holes at the > > - * end and that section is marked "online", the memmap will be > > - * considered initialized. Make sure that memmap has a well defined > > - * state. > > - */ > > - pgcnt += init_unavailable_range(PFN_DOWN(next), > > - round_up(max_pfn, PAGES_PER_SECTION)); > > - > > ^ this code? > > Or how is that case handled now? 
Hmm, now it's clamped to node_end_pfn/zone_end_pfn, so in your funny example
with

	-object memory-backend-ram,id=bmem0,size=4160M \
	-object memory-backend-ram,id=bmem1,size=4032M \

this is not handled :(

But it will be handled with this on top:

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 29bbd08b8e63..6c9b490f5a8b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6350,9 +6350,12 @@ void __meminit __weak memmap_init_zone(struct zone *zone)
 		hole_pfn = end_pfn;
 	}
 
-	if (hole_pfn < zone_end_pfn)
-		pgcnt += init_unavailable_range(hole_pfn, zone_end_pfn,
+#ifdef CONFIG_SPARSEMEM
+	end_pfn = round_up(zone_end_pfn, PAGES_PER_SECTION);
+	if (hole_pfn < end_pfn)
+		pgcnt += init_unavailable_range(hole_pfn, end_pfn,
 						zone_id, nid);
+#endif
 
 	if (pgcnt)
 		pr_info("  %s zone: %lld pages in unavailable ranges\n",

-- 
Sincerely yours,
Mike.
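
For readers following the arithmetic, here is a small standalone C sketch
(not kernel code) of what the round_up() in the hunk above does. It assumes
the common x86-64 defaults of 4 KiB pages and 128 MiB sections (so
PAGES_PER_SECTION = 32768), uses a simplified round_up() macro as a stand-in
for the kernel's, and picks an illustrative zone end derived from the 4160M
backend in the QEMU example above; none of these values come from the patch
itself.

/*
 * Standalone sketch (not kernel code): rounding a zone's end PFN up to a
 * section boundary, as the hunk above does with
 * round_up(zone_end_pfn, PAGES_PER_SECTION).
 * Assumed defaults: 4 KiB pages, 128 MiB sections (x86-64).
 */
#include <stdio.h>

#define PAGE_SIZE		4096UL
#define PAGES_PER_SECTION	32768UL		/* 128 MiB / 4 KiB */
/* simplified stand-in for the kernel's round_up(); fine for power-of-2 y */
#define round_up(x, y)		((((x) + (y) - 1) / (y)) * (y))

int main(void)
{
	/* illustrative zone end: 4160 MiB of pages, not section-aligned */
	unsigned long zone_end_pfn = 4160UL * ((1024UL * 1024UL) / PAGE_SIZE);
	unsigned long end_pfn = round_up(zone_end_pfn, PAGES_PER_SECTION);

	printf("zone_end_pfn        = %lu\n", zone_end_pfn);
	printf("section-aligned end = %lu\n", end_pfn);
	printf("tail pages covered  = %lu\n", end_pfn - zone_end_pfn);
	return 0;
}

Compiled and run, this prints a zone end of 1064960 PFNs, a section-aligned
end of 1081344, and 16384 tail struct pages in the partially populated last
section that the rounded-up bound lets init_unavailable_range() cover, where
clamping to zone_end_pfn would have skipped them.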