Date: Thu, 28 May 2020 16:59:33 +0800
From: Baoquan He <bhe@redhat.com>
To: David Hildenbrand, rppt@linux.ibm.com
Cc: mgorman@suse.de, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    akpm@linux-foundation.org, cai@lca.pw, mhocko@kernel.org,
    steve.wahl@hpe.com
Subject: Re: [PATCH] mm/compaction: Fix the incorrect hole in fast_isolate_freepages()
Message-ID: <20200528085933.GH20045@MiWiFi-R3L-srv>
In-Reply-To: <01beec81-565f-d335-5eff-22693fc09c0e@redhat.com>

On 05/26/20 at 01:49pm, David Hildenbrand wrote:
> On 26.05.20 13:32, Mike Rapoport wrote:
> > Hello Baoquan,
> > 
> > On Tue, May 26, 2020 at 04:45:43PM +0800, Baoquan He wrote:
> >> On 05/22/20 at 05:20pm, Mike Rapoport wrote:
> >>> Hello Baoquan,
> >>> 
> >>> On Fri, May 22, 2020 at 03:25:24PM +0800, Baoquan He wrote:
> >>>> On 05/22/20 at 03:01pm, Baoquan He wrote:
> >>>>> 
> >>>>> So let's add these unavailable ranges into memblock and reserve them
> >>>>> in init_unavailable_range() instead. With this change, they will be added
> >>>>> into appropriate node and zone in memmap_init(), and initialized in
> >>>>> reserve_bootmem_region() just like any other memblock reserved regions.
> >>>>
> >>>> Seems this is not right. They can't get nid in init_unavailable_range().
> >>>> Adding e820 ranges may let them get nid. But the hole range won't be
> >>>> added to memblock, and still has the issue.
> >>>>
> >>>> Nack this one for now, still considering.
> >>> 
> >>> Why won't we add the e820 reserved ranges to memblock.memory during
> >>> early boot as I suggested?
> >>> 
> >>> diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c
> >>> index c5399e80c59c..b0940c618ed9 100644
> >>> --- a/arch/x86/kernel/e820.c
> >>> +++ b/arch/x86/kernel/e820.c
> >>> @@ -1301,8 +1301,11 @@ void __init e820__memblock_setup(void)
> >>>  		if (end != (resource_size_t)end)
> >>>  			continue;
> >>>  
> >>> -		if (entry->type == E820_TYPE_SOFT_RESERVED)
> >>> +		if (entry->type == E820_TYPE_SOFT_RESERVED ||
> >>> +		    entry->type == E820_TYPE_RESERVED) {
> >>> +			memblock_add(entry->addr, entry->size);
> >>>  			memblock_reserve(entry->addr, entry->size);
> >>> +		}
> >>>  
> >>>  		if (entry->type != E820_TYPE_RAM && entry->type != E820_TYPE_RESERVED_KERN)
> >>>  			continue;
> >>> 
> >>> The setting of node later in numa_init() will assign the proper node
> >>> for these regions as it does for the usable memory.
> >> 
> >> Yes, if it's only related to e820 reserved regions, this truly works.
> >> 
> >> However, it also has ACPI table regions. That's why I later changed to
> >> call the problematic areas firmware reserved ranges.
> >> 
> >> Besides, you can see in the line below that there's another reserved
> >> region which only occupies one page in one memory section. If it's
> >> added to memblock.memory, we will also build the struct mem_section and
> >> the relevant struct pages for the whole section. And then the holes
> >> around that page will be added and initialized in
> >> init_unavailable_mem(). numa_init() will assign the proper node for
> >> memblock.memory and memblock.reserved, but won't assign the proper node
> >> for the holes.
> >> 
> >> ~~~
> >> [    0.000000] BIOS-e820: [mem 0x00000000fed80000-0x00000000fed80fff] reserved
> >> ~~~
> >> 
> >> So I still think we should not add firmware reserved ranges into
> >> memblock for fixing this issue.
> >> 
> >> And, the fix in the original patch seems necessary.
> >> You can see in the
> >> compaction code that the migration source is chosen from LRU pages or
> >> movable pages, while the migration target has to be taken from Buddy.
> >> However, the min_pfn in fast_isolate_freepages() is calculated from the
> >> distance between cc->free_pfn and cc->migrate_pfn, so we can't
> >> guarantee it's safe to use as the target to handle.
> > 
> > I do not object to your original fix with careful check for pfn validity.
> > 
> > But I still think that the memory reserved by the firmware is still
> > memory and it should be added to memblock.memory. This way the memory
> 
> If it's really memory that could be read/written, I think I agree.

I would say some of them may not be allowed to be read/written, if I
understand it correctly. I roughly went through the x86 init code; there
are some places where a mem region is marked as E820_TYPE_RESERVED so
that it is not touched after initialization. E.g.:

1) pfn 0
In trim_bios_range(), we set pfn 0 as E820_TYPE_RESERVED. As the code
comment says, this is a BIOS-owned area, not kernel RAM.

2) GART reserved region
In early_gart_iommu_check(), the GART IOMMU firmware reserves a region,
and the firmware designer won't map system RAM into that area.

There are also intel_graphics_stolen() and arch_rmrr_sanity_check();
those regions are not backed by system RAM, and reading from or writing
into them may cause errors.

Furthermore, there's a KASLR bug found by HPE; its trigger and root
cause are written in the commit log below. You can see that accessing a
firmware reserved region caused the BIOS to halt the system when the CPU
was doing speculative accesses.

commit 2aa85f246c181b1fa89f27e8e20c5636426be624
Author: Steve Wahl
Date:   Tue Sep 24 16:03:55 2019 -0500

    x86/boot/64: Make level2_kernel_pgt pages invalid outside kernel area

    Our hardware (UV aka Superdome Flex) has address ranges marked
    reserved by the BIOS. Access to these ranges is caught as an error,
    causing the BIOS to halt the system.
> 
> > map will be properly initialized from the very beginning and we won't
> > need init_unavailable_mem() and alike workarounds and. Obviously, the patch
> 
> I remember init_unavailable_mem() is necessary for holes within
> sections, where we actually *don't* have memory, but we still have a
> valid memmap (full section) that we have to initialize.
> 
> See the example from 4b094b7851bf ("mm/page_alloc.c: initialize memmap
> of unavailable memory directly"). Our main memory ends within a section,
> so we have to initialize the remaining parts because the whole section
> will be marked valid/online.

Yes, memory holes need to be handled in init_unavailable_mem(). Since we
have created struct pages for them, we need to initialize them. We can't
discard init_unavailable_mem() for now.

> 
> Any way to improve this handling is appreciated. In that patch I also
> spelled out that we might want to mark such holes via a new page type,
> e.g., PageHole(). Such a page is a memory hole, but has a valid memmap.
> Any content in the memmap (zone/node) should be ignored.

As I said above, I am a little conservative about adding all those
E820_TYPE_RESERVED regions into memblock.memory and memblock.reserved,
because most of them are firmware reserved regions and may not be backed
by normal RAM.

I was thinking of stepping back to use mm_zero_struct_page() inside
init_unavailable_range() as below. But it doesn't differ much from
__init_single_page(), except for the _refcount and _mapcount. Zeroing a
struct page equals putting it into node 0, zone 0.

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3973b5fdfe3f..4e4b72cf5283 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6901,7 +6901,7 @@ static u64 __init init_unavailable_range(unsigned long spfn, unsigned long epfn)
 			 * (in memblock.reserved but not in memblock.memory) will
 			 * get re-initialized via reserve_bootmem_region() later.
 			 */
-			__init_single_page(pfn_to_page(pfn), pfn, 0, 0);
+			mm_zero_struct_page(pfn_to_page(pfn));
 			__SetPageReserved(pfn_to_page(pfn));
 			pgcnt++;
 		}

About adding these unavailable ranges into a node/zone: in the old code,
it just happened to add them into the expected node/zone. You can see in
early_pfn_in_nid() that if no nid is found from memblock, the returned
'-1' ironically makes the check return true. But that is not saying the
bad code always got a good result: if the last zone of node 0 is the
DMA32 zone, the deferred init will skip the only chance to add some of
the unavailable ranges into the expected node/zone. That means they were
not always added into the appropriate node/zone before, so the change of
iterating memblock.memory in memmap_init() doesn't introduce a
regression.

static inline bool __meminit early_pfn_in_nid(unsigned long pfn, int node)
{
	int nid;

	nid = __early_pfn_to_nid(pfn, &early_pfnnid_cache);
	if (nid >= 0 && nid != node)
		return false;
	return true;
}

So if nobody needs to access them after boot, not adding them into any
node/zone sounds better. Otherwise, we'd better add them into the
appropriate node/zone.

> 
> But it's all quite confusing, especially across architectures and ...
> 
> > above is not enough, but it's a small step in this direction.
> > 
> > I believe that improving the early memory initialization would make many
> > things simpler and more robust, but that's a different story :)
> 
> ... I second that.
> 
> -- 
> Thanks,
> 
> David / dhildenb