Date: Mon, 27 Jun 2022 19:49:30 +0300
From: Mike Rapoport <rppt@kernel.org>
To: "guanghui.fgh" <guanghuifeng@linux.alibaba.com>
Cc: baolin.wang@linux.alibaba.com, catalin.marinas@arm.com, will@kernel.org,
    akpm@linux-foundation.org, david@redhat.com, jianyong.wu@arm.com,
    james.morse@arm.com, quic_qiancai@quicinc.com, christophe.leroy@csgroup.eu,
    jonathan@marek.ca, mark.rutland@arm.com, thunder.leizhen@huawei.com,
    anshuman.khandual@arm.com, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org, geert+renesas@glider.be, ardb@kernel.org,
    linux-mm@kvack.org
Subject: Re: [PATCH] arm64: mm: fix linear mapping mem access performance degradation
References: <1656241815-28494-1-git-send-email-guanghuifeng@linux.alibaba.com>
 <4d18d303-aeed-0beb-a8a4-32893f2d438d@linux.alibaba.com>

On Mon, Jun 27, 2022 at 06:46:50PM +0800, guanghui.fgh wrote:
> On 2022/6/27 17:49, Mike Rapoport wrote:
> > On Mon, Jun 27, 2022 at 05:24:10PM +0800, guanghui.fgh wrote:
> > > On 2022/6/27 14:34, Mike Rapoport wrote:
> > >
> > > On Sun, Jun 26, 2022 at 07:10:15PM +0800, Guanghui Feng wrote:
> > >
> > > arm64 can build 2M/1G block/section mappings. When using a DMA/DMA32 zone
> > > (crashkernel enabled, rodata full disabled, kfence disabled), the mem_map
> > > will use non-block/section mappings (because crashkernel requires shrinking
> > > the region in page granularity). But this degrades performance for large
> > > contiguous memory accesses in the kernel (memcpy/memmove, etc.).
> > >
> > > There are many changes and discussions:
> > > commit 031495635b46
> > > commit 1a8e1cef7603
> > > commit 8424ecdde7df
> > > commit 0a30c53573b0
> > > commit 2687275a5843
> > >
> > > Please include a one-line summary of each commit. (See section "Describe
> > > your changes" in Documentation/process/submitting-patches.rst)
> > >
> > > OK, I will add one-line summaries to the git commit message.
> > >
> > > This patch changes mem_map to use block/section mappings with crashkernel.
> > > First, build block/section mappings (normally 2M or 1G) for all available
> > > memory in mem_map and reserve the crashkernel memory. Then walk the page
> > > table to split the block/section mappings into non-block/section mappings
> > > (normally 4K) [[[only]]] for the crashkernel memory.
> > >
> > > This already happens when ZONE_DMA/ZONE_DMA32 are disabled. Please explain
> > > why it is OK to change the way the memory is mapped with
> > > ZONE_DMA/ZONE_DMA32 enabled.
> > >
> > > In short:
> > > 1. Map all available memory with block/section mappings (normally 1G/2M)
> > >    without inspecting the crashkernel region.
> > > 2. Reserve the crashkernel memory the same way as before.
> > > 3. Change only the crashkernel memory mapping to normal mappings
> > >    (normally 4K).
> > > With this method, block/section mappings are used as much as possible.
> >
> > This does not answer the question why changing the way the memory is mapped
> > with ZONE_DMA/DMA32 and crashkernel enabled won't cause a regression.
>
> 1. Quoting arch/arm64/mm/init.c:
>
> "Memory reservation for crash kernel either done early or deferred
> depending on DMA memory zones configs (ZONE_DMA) --
>
> In absence of ZONE_DMA configs arm64_dma_phys_limit initialized
> here instead of max_zone_phys(). This lets early reservation of
> crash kernel memory which has a dependency on arm64_dma_phys_limit.
> Reserving memory early for crash kernel allows linear creation of block
> mappings (greater than page-granularity) for all the memory bank ranges.
> In this scheme a comparatively quicker boot is observed.
>
> If ZONE_DMA configs are defined, crash kernel memory reservation
> is delayed until DMA zone memory range size initialization performed in
> zone_sizes_init(). The defer is necessary to steer clear of DMA zone
> memory range to avoid overlap allocation. So crash kernel memory boundaries
> are not known when mapping all bank memory ranges, which otherwise means not
> possible to exclude crash kernel range from creating block mappings so
> page-granularity mappings are created for the entire memory range."
>
> Namely, the init order is: memblock init ---> linear memory mapping (4K
> mappings for the crashkernel, requiring page-granularity changes) --->
> ZONE_DMA limit ---> reserve crashkernel.
> So when ZONE_DMA is enabled and a crashkernel is used, the linear map uses
> 4K mappings.
>
> 2. As mentioned above, when the linear map simply uses 4K mappings, there
> are many dTLB misses (degraded performance).
> This patch uses block/section mappings as far as possible, improving
> performance.
>
> 3. This patch reserves the crashkernel the same way as before (the
> historical ZONE_DMA & crashkernel reservation order) and only changes the
> linear memory mapping to block/section mappings.

Thank you for the explanation. So if I understand correctly, you suggest to
reserve the crash kernel memory late regardless of the ZONE_DMA/32
configuration and then map that memory using page level mappings after the
memory is reserved.

This makes sense, but I think the patch has a few shortcomings that need to
be addressed.

> Signed-off-by: Guanghui Feng <guanghuifeng@linux.alibaba.com>
> ---
>  arch/arm64/include/asm/mmu.h |   1 +
>  arch/arm64/mm/init.c         |   8 +-
>  arch/arm64/mm/mmu.c          | 274 +++++++++++++++++++++++++++++++++++++++----
>  3 files changed, 253 insertions(+), 30 deletions(-)
>
> diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
> index 48f8466..df113cc 100644
> --- a/arch/arm64/include/asm/mmu.h
> +++ b/arch/arm64/include/asm/mmu.h
> @@ -63,6 +63,7 @@ static inline bool arm64_kernel_unmapped_at_el0(void)
>  extern void arm64_memblock_init(void);
>  extern void paging_init(void);
>  extern void bootmem_init(void);
> +extern void mapping_crashkernel(void);

I think map_crashkernel() would be a better name.

>  extern void __iomem *early_io_map(phys_addr_t phys, unsigned long virt);
>  extern void init_mem_pgprot(void);
>  extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 339ee84..0e7540b 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -388,10 +388,6 @@ void __init arm64_memblock_init(void)
>  	}
>
>  	early_init_fdt_scan_reserved_mem();
> -
> -	if (!IS_ENABLED(CONFIG_ZONE_DMA) && !IS_ENABLED(CONFIG_ZONE_DMA32))
> -		reserve_crashkernel();
> -
>  	high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
>  }
>
> @@ -438,8 +434,8 @@ void __init bootmem_init(void)
>  	 * request_standard_resources() depends on crashkernel's memory being
>  	 * reserved, so do it here.
>  	 */
> -	if (IS_ENABLED(CONFIG_ZONE_DMA) || IS_ENABLED(CONFIG_ZONE_DMA32))
> -		reserve_crashkernel();
> +	reserve_crashkernel();
> +	mapping_crashkernel();

This can be called from reserve_crashkernel() directly.
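E.g. something like this untested sketch, with map_crashkernel() being the
rename suggested above; the rest of reserve_crashkernel() is elided and the
crash_base/crash_size locals are assumed from the existing implementation:

	static void __init reserve_crashkernel(void)
	{
		...

		crashk_res.start = crash_base;
		crashk_res.end = crash_base + crash_size - 1;

		/* remap the just-reserved range with page granularity */
		map_crashkernel();
	}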
>  	memblock_dump_all();
>  }
>
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 626ec32..0672afd 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -498,6 +498,256 @@ static int __init enable_crash_mem_map(char *arg)
>  }
>  early_param("crashkernel", enable_crash_mem_map);
>
> +#ifdef CONFIG_KEXEC_CORE
> +static phys_addr_t __init early_crashkernel_pgtable_alloc(int shift)
> +{
> +	phys_addr_t phys;
> +	void *ptr;
> +
> +	phys = memblock_phys_alloc_range(PAGE_SIZE, PAGE_SIZE, 0,
> +					 MEMBLOCK_ALLOC_NOLEAKTRACE);
> +	if (!phys)
> +		panic("Failed to allocate page table page\n");
> +
> +	ptr = (void *)__phys_to_virt(phys);
> +	memset(ptr, 0, PAGE_SIZE);
> +	return phys;
> +}
> +
> +static void init_crashkernel_pte(pmd_t *pmdp, unsigned long addr,
> +				 unsigned long end,
> +				 phys_addr_t phys, pgprot_t prot)
> +{
> +	pte_t *ptep;
> +
> +	ptep = pte_offset_kernel(pmdp, addr);
> +	do {
> +		set_pte(ptep, pfn_pte(__phys_to_pfn(phys), prot));
> +		phys += PAGE_SIZE;
> +	} while (ptep++, addr += PAGE_SIZE, addr != end);
> +}
> +
> +static void alloc_crashkernel_cont_pte(pmd_t *pmdp, unsigned long addr,
> +				       unsigned long end, phys_addr_t phys,
> +				       pgprot_t prot,
> +				       phys_addr_t (*pgtable_alloc)(int),
> +				       int flags)
> +{
> +	unsigned long next;
> +	pmd_t pmd = READ_ONCE(*pmdp);
> +
> +	BUG_ON(pmd_sect(pmd));
> +	if (pmd_none(pmd)) {
> +		pmdval_t pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN;
> +		phys_addr_t pte_phys;
> +
> +		if (flags & NO_EXEC_MAPPINGS)
> +			pmdval |= PMD_TABLE_PXN;
> +		BUG_ON(!pgtable_alloc);
> +		pte_phys = pgtable_alloc(PAGE_SHIFT);
> +		__pmd_populate(pmdp, pte_phys, pmdval);
> +		pmd = READ_ONCE(*pmdp);
> +	}
> +	BUG_ON(pmd_bad(pmd));
> +
> +	do {
> +		pgprot_t __prot = prot;
> +
> +		next = pte_cont_addr_end(addr, end);
> +		init_crashkernel_pte(pmdp, addr, next, phys, __prot);
> +		phys += next - addr;
> +	} while (addr = next, addr != end);
> +}
> +
> +static void init_crashkernel_pmd(pud_t *pudp, unsigned long addr,
> +				 unsigned long end, phys_addr_t phys,
> +				 pgprot_t prot,
> +				 phys_addr_t (*pgtable_alloc)(int), int flags)
> +{
> +	phys_addr_t map_offset;
> +	unsigned long next;
> +	pmd_t *pmdp;
> +	pmdval_t pmdval;
> +
> +	pmdp = pmd_offset(pudp, addr);
> +	do {
> +		next = pmd_addr_end(addr, end);
> +		if (!pmd_none(*pmdp) && pmd_sect(*pmdp)) {
> +			phys_addr_t pte_phys = pgtable_alloc(PAGE_SHIFT);
> +
> +			pmd_clear(pmdp);
> +			pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN;
> +			if (flags & NO_EXEC_MAPPINGS)
> +				pmdval |= PMD_TABLE_PXN;
> +			__pmd_populate(pmdp, pte_phys, pmdval);
> +			flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
> +
> +			map_offset = addr - (addr & PMD_MASK);
> +			if (map_offset)
> +				alloc_init_cont_pte(pmdp, addr & PMD_MASK, addr,
> +						    phys - map_offset, prot,
> +						    pgtable_alloc, flags);
> +
> +			if (next < (addr & PMD_MASK) + PMD_SIZE)
> +				alloc_init_cont_pte(pmdp, next, (addr & PUD_MASK) +
> +						    PUD_SIZE, next - addr + phys,
> +						    prot, pgtable_alloc, flags);
> +		}
> +		alloc_crashkernel_cont_pte(pmdp, addr, next, phys, prot,
> +					   pgtable_alloc, flags);
> +		phys += next - addr;
> +	} while (pmdp++, addr = next, addr != end);
> +}
> +
> +static void alloc_crashkernel_cont_pmd(pud_t *pudp, unsigned long addr,
> +				       unsigned long end, phys_addr_t phys,
> +				       pgprot_t prot,
> +				       phys_addr_t (*pgtable_alloc)(int),
> +				       int flags)
> +{
> +	unsigned long next;
> +	pud_t pud = READ_ONCE(*pudp);
> +
> +	/*
> +	 * Check for initial section mappings in the pgd/pud.
> +	 */
> +	BUG_ON(pud_sect(pud));
> +	if (pud_none(pud)) {
> +		pudval_t pudval = PUD_TYPE_TABLE | PUD_TABLE_UXN;
> +		phys_addr_t pmd_phys;
> +
> +		if (flags & NO_EXEC_MAPPINGS)
> +			pudval |= PUD_TABLE_PXN;
> +		BUG_ON(!pgtable_alloc);
> +		pmd_phys = pgtable_alloc(PMD_SHIFT);
> +		__pud_populate(pudp, pmd_phys, pudval);
> +		pud = READ_ONCE(*pudp);
> +	}
> +	BUG_ON(pud_bad(pud));
> +
> +	do {
> +		pgprot_t __prot = prot;
> +
> +		next = pmd_cont_addr_end(addr, end);
> +		init_crashkernel_pmd(pudp, addr, next, phys, __prot,
> +				     pgtable_alloc, flags);
> +		phys += next - addr;
> +	} while (addr = next, addr != end);
> +}
> +
> +static void alloc_crashkernel_pud(pgd_t *pgdp, unsigned long addr,
> +				  unsigned long end, phys_addr_t phys,
> +				  pgprot_t prot,
> +				  phys_addr_t (*pgtable_alloc)(int),
> +				  int flags)
> +{
> +	phys_addr_t map_offset;
> +	unsigned long next;
> +	pud_t *pudp;
> +	p4d_t *p4dp = p4d_offset(pgdp, addr);
> +	p4d_t p4d = READ_ONCE(*p4dp);
> +	pudval_t pudval;
> +
> +	if (p4d_none(p4d)) {
> +		p4dval_t p4dval = P4D_TYPE_TABLE | P4D_TABLE_UXN;
> +		phys_addr_t pud_phys;
> +
> +		if (flags & NO_EXEC_MAPPINGS)
> +			p4dval |= P4D_TABLE_PXN;
> +		BUG_ON(!pgtable_alloc);
> +		pud_phys = pgtable_alloc(PUD_SHIFT);
> +		__p4d_populate(p4dp, pud_phys, p4dval);
> +		p4d = READ_ONCE(*p4dp);
> +	}
> +	BUG_ON(p4d_bad(p4d));
> +
> +	/*
> +	 * No need for locking during early boot. And it doesn't work as
> +	 * expected with KASLR enabled.
> +	 */
> +	if (system_state != SYSTEM_BOOTING)
> +		mutex_lock(&fixmap_lock);
> +	pudp = pud_offset(p4dp, addr);
> +	do {
> +		next = pud_addr_end(addr, end);
> +		if (!pud_none(*pudp) && pud_sect(*pudp)) {
> +			phys_addr_t pmd_phys = pgtable_alloc(PMD_SHIFT);
> +
> +			pud_clear(pudp);
> +			pudval = PUD_TYPE_TABLE | PUD_TABLE_UXN;
> +			if (flags & NO_EXEC_MAPPINGS)
> +				pudval |= PUD_TABLE_PXN;
> +			__pud_populate(pudp, pmd_phys, pudval);
> +			flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
> +
> +			map_offset = addr - (addr & PUD_MASK);
> +			if (map_offset)
> +				alloc_init_cont_pmd(pudp, addr & PUD_MASK, addr,
> +						    phys - map_offset, prot,
> +						    pgtable_alloc, flags);
> +
> +			if (next < (addr & PUD_MASK) + PUD_SIZE)
> +				alloc_init_cont_pmd(pudp, next, (addr & PUD_MASK) +
> +						    PUD_SIZE, next - addr + phys,
> +						    prot, pgtable_alloc, flags);
> +		}
> +		alloc_crashkernel_cont_pmd(pudp, addr, next, phys, prot,
> +					   pgtable_alloc, flags);
> +		phys += next - addr;
> +	} while (pudp++, addr = next, addr != end);
> +
> +	if (system_state != SYSTEM_BOOTING)
> +		mutex_unlock(&fixmap_lock);
> +}
> +
> +static void __create_crashkernel_mapping(pgd_t *pgdir, phys_addr_t phys,
> +					 unsigned long virt, phys_addr_t size,
> +					 pgprot_t prot,
> +					 phys_addr_t (*pgtable_alloc)(int),
> +					 int flags)
> +{
> +	unsigned long addr, end, next;
> +	pgd_t *pgdp = pgd_offset_pgd(pgdir, virt);
> +
> +	/*
> +	 * If the virtual and physical address don't have the same offset
> +	 * within a page, we cannot map the region as the caller expects.
> +	 */
> +	if (WARN_ON((phys ^ virt) & ~PAGE_MASK))
> +		return;
> +
> +	phys &= PAGE_MASK;
> +	addr = virt & PAGE_MASK;
> +	end = PAGE_ALIGN(virt + size);
> +
> +	do {
> +		next = pgd_addr_end(addr, end);
> +		alloc_crashkernel_pud(pgdp, addr, next, phys, prot,
> +				      pgtable_alloc, flags);
> +		phys += next - addr;
> +	} while (pgdp++, addr = next, addr != end);
> +}

__create_crashkernel_mapping() and the helpers it uses resemble very much
__create_pgd_mapping() and its helpers.

I think a cleaner way would be to clear the section mappings and then call
__create_pgd_mapping() for the crash kernel memory with block/section
mappings disabled.
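Roughly along these lines (untested sketch; clear_pgd_mapping() is a
hypothetical helper that would have to tear down the existing mappings for
the range, including splitting any section that only partially overlaps it):

	void __init map_crashkernel(void)
	{
		phys_addr_t start = crashk_res.start;
		phys_addr_t size = resource_size(&crashk_res);

		/*
		 * Hypothetical: remove the existing block/section mappings
		 * covering the crash kernel range.
		 */
		clear_pgd_mapping(start, size);

		/* recreate the range with page granularity */
		__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
				     size, PAGE_KERNEL, early_pgtable_alloc,
				     NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS);
	}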
> +static void __init map_crashkernel(pgd_t *pgdp, phys_addr_t start,
> +				   phys_addr_t end, pgprot_t prot, int flags)
> +{
> +	__create_crashkernel_mapping(pgdp, start, __phys_to_virt(start),
> +				     end - start, prot,
> +				     early_crashkernel_pgtable_alloc, flags);
> +}

No need for this, you can call __create_crashkernel_mapping() directly from
mapping_crashkernel().

> +
> +void __init mapping_crashkernel(void)
> +{
> +	if (can_set_direct_map() || IS_ENABLED(CONFIG_KFENCE))
> +		return;
> +
> +	if (!crash_mem_map || !crashk_res.end)
> +		return;
> +
> +	map_crashkernel(swapper_pg_dir, crashk_res.start,
> +			crashk_res.end + 1, PAGE_KERNEL,
> +			NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS);
> +}
> +#else
> +void __init mapping_crashkernel(void) {}
> +#endif
> +
>  static void __init map_mem(pgd_t *pgdp)
>  {
>  	static const u64 direct_map_end = _PAGE_END(VA_BITS_MIN);
> @@ -527,17 +777,6 @@ static void __init map_mem(pgd_t *pgdp)
>  	 */
>  	memblock_mark_nomap(kernel_start, kernel_end - kernel_start);
>
> -#ifdef CONFIG_KEXEC_CORE
> -	if (crash_mem_map) {
> -		if (IS_ENABLED(CONFIG_ZONE_DMA) ||
> -		    IS_ENABLED(CONFIG_ZONE_DMA32))
> -			flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
> -		else if (crashk_res.end)
> -			memblock_mark_nomap(crashk_res.start,
> -					    resource_size(&crashk_res));
> -	}
> -#endif
> -
>  	/* map all the memory banks */
>  	for_each_mem_range(i, &start, &end) {
>  		if (start >= end)
> @@ -570,19 +809,6 @@ static void __init map_mem(pgd_t *pgdp)
>  	 * in page granularity and put back unused memory to buddy system
>  	 * through /sys/kernel/kexec_crash_size interface.
>  	 */
> -#ifdef CONFIG_KEXEC_CORE
> -	if (crash_mem_map &&
> -	    !IS_ENABLED(CONFIG_ZONE_DMA) && !IS_ENABLED(CONFIG_ZONE_DMA32)) {
> -		if (crashk_res.end) {
> -			__map_memblock(pgdp, crashk_res.start,
> -				       crashk_res.end + 1,
> -				       PAGE_KERNEL,
> -				       NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS);
> -			memblock_clear_nomap(crashk_res.start,
> -					     resource_size(&crashk_res));
> -		}
> -	}
> -#endif
>  }
>
>  void mark_rodata_ro(void)
> --
> 1.8.3.1

--
Sincerely yours,
Mike.