From: Mike Rapoport
To: Andrew Morton
Cc: Alexey Dobriyan, Catalin Marinas, Geert Uytterhoeven, Greg Ungerer,
	John Paul Adrian Glaubitz, Jonathan Corbet, Matt Turner, Meelis Roos,
	Michael Schmitz, Mike Rapoport, Russell King, Tony Luck, Vineet Gupta,
	Will Deacon, linux-alpha@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org,
	linux-fsdevel@vger.kernel.org, linux-ia64@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	linux-mm@kvack.org, linux-snps-arc@lists.infradead.org
Subject: [PATCH v2 06/13] ia64: forbid using VIRTUAL_MEM_MAP with FLATMEM
Date: Sun, 1 Nov 2020 19:04:47 +0200
Message-Id: <20201101170454.9567-7-rppt@kernel.org>
In-Reply-To: <20201101170454.9567-1-rppt@kernel.org>
References: <20201101170454.9567-1-rppt@kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Mike Rapoport

The virtual memory map was intended to avoid wasting memory on the memory
map on systems with large holes in the physical memory layout. It was long
ago superseded, first by DISCONTIGMEM and then by SPARSEMEM. Moreover,
SPARSEMEM_VMEMMAP provides the same functionality in a much more portable
way.

As the first step towards removing VIRTUAL_MEM_MAP, forbid its use with
FLATMEM and panic on systems with large holes in the physical memory
layout that try to run FLATMEM kernels.

Signed-off-by: Mike Rapoport
---
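Not part of the patch: the hunks below add a single pass over the EFI memory
map that records the largest gap between consecutive ranges and panics when
that gap reaches 1G on a FLATMEM kernel. The standalone userspace C sketch
below only illustrates that arithmetic; its memory ranges are invented, and
it seeds last_end with the first range's start rather than PAGE_OFFSET.

/*
 * Illustration only (userspace, not kernel code): find the largest hole
 * in an ordered list of [start, end) ranges, the way find_largest_hole()
 * does, then apply the 1G FLATMEM limit used by verify_gap_absence().
 */
#include <stdio.h>
#include <stdint.h>

#define SZ_1G	(1ULL << 30)

struct range { uint64_t start, end; };

int main(void)
{
	/* Made-up, ordered "memory map"; the real code walks the EFI memmap. */
	struct range map[] = {
		{ 0x00000000ULL, 0x04000000ULL },	/*    0 ..   64M */
		{ 0x08000000ULL, 0x10000000ULL },	/* 128M ..  256M */
		{ 0x80000000ULL, 0xc0000000ULL },	/*   2G ..    3G */
	};
	uint64_t last_end = map[0].start;
	uint64_t max_gap = 0;

	for (size_t i = 0; i < sizeof(map) / sizeof(map[0]); i++) {
		if (map[i].start - last_end > max_gap)
			max_gap = map[i].start - last_end;
		last_end = map[i].end;
	}

	/* The 256M..2G hole is 1792M, so a FLATMEM kernel would panic here. */
	printf("largest hole: %llu MB -> %s\n",
	       (unsigned long long)(max_gap >> 20),
	       max_gap >= SZ_1G ? "FLATMEM would panic" : "FLATMEM is fine");
	return 0;
}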
 arch/ia64/Kconfig               |  2 +-
 arch/ia64/include/asm/meminit.h |  2 --
 arch/ia64/mm/contig.c           | 48 +++++++++++++++------------------
 arch/ia64/mm/init.c             | 14 ----------
 4 files changed, 22 insertions(+), 44 deletions(-)

diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
index 12aae706cb27..83de0273d474 100644
--- a/arch/ia64/Kconfig
+++ b/arch/ia64/Kconfig
@@ -329,7 +329,7 @@ config NODES_SHIFT
 # VIRTUAL_MEM_MAP has been retained for historical reasons.
 config VIRTUAL_MEM_MAP
 	bool "Virtual mem map"
-	depends on !SPARSEMEM
+	depends on !SPARSEMEM && !FLATMEM
 	default y
 	help
 	  Say Y to compile the kernel with support for a virtual mem map.
diff --git a/arch/ia64/include/asm/meminit.h b/arch/ia64/include/asm/meminit.h
index 092f1c91b36c..e789c0818edb 100644
--- a/arch/ia64/include/asm/meminit.h
+++ b/arch/ia64/include/asm/meminit.h
@@ -59,10 +59,8 @@ extern int reserve_elfcorehdr(u64 *start, u64 *end);
 extern int register_active_ranges(u64 start, u64 len, int nid);
 
 #ifdef CONFIG_VIRTUAL_MEM_MAP
-# define LARGE_GAP	0x40000000 /* Use virtual mem map if hole is > than this */
   extern unsigned long VMALLOC_END;
   extern struct page *vmem_map;
-  extern int find_largest_hole(u64 start, u64 end, void *arg);
   extern int create_mem_map_page_table(u64 start, u64 end, void *arg);
   extern int vmemmap_find_next_valid_pfn(int, int);
 #else
diff --git a/arch/ia64/mm/contig.c b/arch/ia64/mm/contig.c
index ba81d8cb0059..bfc4ecd0a2ab 100644
--- a/arch/ia64/mm/contig.c
+++ b/arch/ia64/mm/contig.c
@@ -19,15 +19,12 @@
 #include
 #include
 #include
+#include <linux/sizes.h>
 
 #include
 #include
 #include
 
-#ifdef CONFIG_VIRTUAL_MEM_MAP
-static unsigned long max_gap;
-#endif
-
 /* physical address where the bootmem map is located */
 unsigned long bootmap_start;
 
@@ -166,33 +163,30 @@ find_memory (void)
 	alloc_per_cpu_data();
 }
 
-static void __init virtual_map_init(void)
+static int __init find_largest_hole(u64 start, u64 end, void *arg)
 {
-#ifdef CONFIG_VIRTUAL_MEM_MAP
-	efi_memmap_walk(find_largest_hole, (u64 *)&max_gap);
-	if (max_gap < LARGE_GAP) {
-		vmem_map = (struct page *) 0;
-	} else {
-		unsigned long map_size;
+	u64 *max_gap = arg;
 
-		/* allocate virtual_mem_map */
+	static u64 last_end = PAGE_OFFSET;
 
-		map_size = PAGE_ALIGN(ALIGN(max_low_pfn, MAX_ORDER_NR_PAGES) *
-			sizeof(struct page));
-		VMALLOC_END -= map_size;
-		vmem_map = (struct page *) VMALLOC_END;
-		efi_memmap_walk(create_mem_map_page_table, NULL);
+	/* NOTE: this algorithm assumes efi memmap table is ordered */
 
-		/*
-		 * alloc_node_mem_map makes an adjustment for mem_map
-		 * which isn't compatible with vmem_map.
-		 */
-		NODE_DATA(0)->node_mem_map = vmem_map +
-			find_min_pfn_with_active_regions();
+	if (*max_gap < (start - last_end))
+		*max_gap = start - last_end;
+	last_end = end;
+	return 0;
+}
 
-		printk("Virtual mem_map starts at 0x%p\n", mem_map);
-	}
-#endif /* !CONFIG_VIRTUAL_MEM_MAP */
+static void __init verify_gap_absence(void)
+{
+	unsigned long max_gap;
+
+	/* Forbid FLATMEM if hole is > than 1G */
+	efi_memmap_walk(find_largest_hole, (u64 *)&max_gap);
+	if (max_gap >= SZ_1G)
+		panic("Cannot use FLATMEM with %ldMB hole\n"
+		      "Please switch over to SPARSEMEM\n",
+		      (max_gap >> 20));
 }
 
 /*
@@ -210,7 +204,7 @@ paging_init (void)
 	max_zone_pfns[ZONE_DMA32] = max_dma;
 	max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
 
-	virtual_map_init();
+	verify_gap_absence();
 
 	free_area_init(max_zone_pfns);
 	zero_page_memmap_ptr = virt_to_page(ia64_imva(empty_zero_page));
diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
index ef12e097f318..9b5acf8fb092 100644
--- a/arch/ia64/mm/init.c
+++ b/arch/ia64/mm/init.c
@@ -574,20 +574,6 @@ ia64_pfn_valid (unsigned long pfn)
 }
 EXPORT_SYMBOL(ia64_pfn_valid);
 
-int __init find_largest_hole(u64 start, u64 end, void *arg)
-{
-	u64 *max_gap = arg;
-
-	static u64 last_end = PAGE_OFFSET;
-
-	/* NOTE: this algorithm assumes efi memmap table is ordered */
-
-	if (*max_gap < (start - last_end))
-		*max_gap = start - last_end;
-	last_end = end;
-	return 0;
-}
-
 #endif /* CONFIG_VIRTUAL_MEM_MAP */
 
 int __init register_active_ranges(u64 start, u64 len, int nid)
-- 
2.28.0