Subject: Re: [PATCH v4 1/2] x86/setup: always add the beginning of RAM as memblock.memory
To: Mike Rapoport
Cc: Andrew Morton, Andrea Arcangeli, Baoquan He, Borislav Petkov, Chris Wilson,
    "H. Peter Anvin", Ingo Molnar, Linus Torvalds, Łukasz Majczak, Mel Gorman,
    Michal Hocko, Mike Rapoport, Qian Cai, "Sarvela, Tomi P", Thomas Gleixner,
    Vlastimil Babka, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    stable@vger.kernel.org, x86@kernel.org
References: <20210130221035.4169-1-rppt@kernel.org>
    <20210130221035.4169-2-rppt@kernel.org>
    <56e2c568-b121-8860-a6b0-274ace46d835@redhat.com>
    <20210201143014.GI242749@kernel.org>
From: David Hildenbrand
Organization: Red Hat GmbH
Message-ID: <759698b8-ac81-de31-4916-023d8dfa9fe5@redhat.com>
Date: Mon, 1 Feb 2021 15:32:33 +0100
In-Reply-To: <20210201143014.GI242749@kernel.org>

On 01.02.21 15:30, Mike Rapoport wrote:
> On Mon, Feb 01, 2021 at 10:32:44AM +0100, David Hildenbrand wrote:
>> On 30.01.21 23:10, Mike Rapoport wrote:
>>> From: Mike Rapoport
>>>
>>> The physical memory on an x86 system starts at address 0, but this is
>>> not always reflected in the e820 map. For example, the BIOS can have
>>> e820 entries like
>>>
>>> [    0.000000] BIOS-provided physical RAM map:
>>> [    0.000000] BIOS-e820: [mem 0x0000000000001000-0x000000000009ffff] usable
>>>
>>> or
>>>
>>> [    0.000000] BIOS-provided physical RAM map:
>>> [    0.000000] BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
>>> [    0.000000] BIOS-e820: [mem 0x0000000000001000-0x0000000000057fff] usable
>>>
>>> In either case, e820__memblock_setup() won't add the range 0x0000 - 0x1000
>>> to memblock.memory, and later, during memory map initialization, this
>>> range is left outside any zone.
>>>
>>> With SPARSEMEM=y there is always a struct page for pfn 0, and this
>>> struct page will have its zone link wrong no matter what value is set
>>> there.
>>>
>>> To avoid this inconsistency, add the beginning of RAM to memblock.memory.
>>> Limit the added chunk size to match the reserved memory to avoid
>>> registering memory that may be used by the firmware but is never
>>> reserved at e820__memblock_setup() time.
>>>
>>> Fixes: bde9cfa3afe4 ("x86/setup: don't remove E820_TYPE_RAM for pfn 0")
>>> Signed-off-by: Mike Rapoport
>>> Cc: stable@vger.kernel.org
>>> ---
>>>  arch/x86/kernel/setup.c | 8 ++++++++
>>>  1 file changed, 8 insertions(+)
>>>
>>> diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
>>> index 3412c4595efd..67c77ed6eef8 100644
>>> --- a/arch/x86/kernel/setup.c
>>> +++ b/arch/x86/kernel/setup.c
>>> @@ -727,6 +727,14 @@ static void __init trim_low_memory_range(void)
>>>  	 * Kconfig help text for X86_RESERVE_LOW.
>>>  	 */
>>>  	memblock_reserve(0, ALIGN(reserve_low, PAGE_SIZE));
>>> +
>>> +	/*
>>> +	 * Even if the firmware does not report the memory at address 0 as
>>> +	 * usable, inform the generic memory management about its existence
>>> +	 * to ensure it is a part of ZONE_DMA and the memory map for it is
>>> +	 * properly initialized.
>>> +	 */
>>> +	memblock_add(0, ALIGN(reserve_low, PAGE_SIZE));
>>>  }
>>>
>>>  /*
>>>
>>
>> I think, to make that code more robust, and to not rely on archs to do
>> the right thing, we should do something like
>>
>> 1) Make sure in free_area_init() that each PFN with a memmap (i.e., one
>> that falls into a partially present section) is spanned by a zone; that
>> would include PFN 0 in this case.
>>
>> 2) In init_zone_unavailable_mem(), similar to the round_up(max_pfn,
>> PAGES_PER_SECTION) handling, consider the range
>> [round_down(min_pfn, PAGES_PER_SECTION), min_pfn - 1],
>> which in the x86-64 case would cover [0..0] and, therefore, initialize
>> PFN 0.
>>
>> Also, I think the special-casing of PFN 0 is analogous to the
>> round_up(max_pfn, PAGES_PER_SECTION) handling in
>> init_zone_unavailable_mem(): who guarantees that the PFNs above the
>> highest present PFN are actually spanned by a zone?
>>
>> I'd suggest going through all zone ranges in free_area_init() first,
>> dealing with zones whose start/end is not section aligned, clamping them
>> up/down if required such that no holes within a section are left
>> uncovered by a zone.
>
> I thought about changing the way zone extents are calculated so that zone
> start/end will always be on a section boundary, but zone->zone_start_pfn
> depends on node->node_start_pfn, which is defined by hardware, and
> expanding a node to make its start pfn aligned at the section boundary
> might violate the HW addressing scheme.
>
> Maybe this could never happen, or maybe it's not really important as the
> pages there will be reserved anyway, but I'm not sure I can estimate all
> the implications.

I'm suggesting to let zone (+node?) ranges cover memory holes with a valid
memmap, not to move actual memory between nodes/zones.

--
Thanks,

David / dhildenb
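For illustration, a rough sketch of the direction described in point 2)
above: initialize the memmap of the partial section below the lowest
present PFN, mirroring the existing round_up(max_pfn, PAGES_PER_SECTION)
handling at the top end. This is not the patch under discussion; the
function name init_unavailable_section_head() and the two-argument helper
init_unavailable_range(spfn, epfn) are assumptions made for this sketch.

/*
 * Sketch only -- assumes a helper init_unavailable_range(spfn, epfn)
 * that initializes struct pages for PFNs which have a memmap but are
 * not covered by memblock.memory (marking them reserved and linking
 * them to a zone). Helper name and signature are assumptions.
 */
#include <linux/memblock.h>
#include <linux/mm.h>
#include <linux/pfn.h>

static void __init init_unavailable_section_head(void)
{
	/* Lowest PFN that memblock reports as present memory. */
	unsigned long min_pfn = PHYS_PFN(memblock_start_of_DRAM());
	/* Start of the section containing that PFN. */
	unsigned long spfn = round_down(min_pfn, PAGES_PER_SECTION);

	/*
	 * The partial section below the lowest present PFN (e.g. PFN 0
	 * when the first usable e820 entry starts at 0x1000) also has a
	 * memmap and must end up spanned by a zone, analogous to the
	 * handling above the highest present PFN.
	 */
	if (spfn < min_pfn)
		init_unavailable_range(spfn, min_pfn);
}

Where such a helper would be invoked (e.g. from free_area_init(), before
the per-zone memmap init) is left open here; the point is only that the
hole below min_pfn is handled generically instead of relying on each
arch to memblock_add() the start of RAM.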