From: Zhen Lei <thunder.leizhen@huawei.com>
To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin, Dave Young, Baoquan He, Vivek Goyal, Eric Biederman, Catalin Marinas, Will Deacon, Rob Herring, Frank Rowand, Jonathan Corbet
Cc: Zhen Lei, Randy Dunlap, Feng Zhou, Kefeng Wang, Chen Zhou, John Donnelly
Subject: [PATCH v18 03/17] x86/setup: Adjust the range of code separated by CONFIG_X86_64
Date: Wed, 22 Dec 2021 21:08:06 +0800
Message-ID: <20211222130820.1754-4-thunder.leizhen@huawei.com>
X-Mailer: git-send-email 2.26.0.windows.1
In-Reply-To: <20211222130820.1754-1-thunder.leizhen@huawei.com>
References: <20211222130820.1754-1-thunder.leizhen@huawei.com>

Currently, only X86_64 requires that at least 256M of low memory be
reserved; X86_32 does not have this requirement. So move all the code
related to reserve_crashkernel_low() under #ifdef CONFIG_X86_64.
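For readers following the series without the file open, the sketch below
illustrates how the guards end up arranged after this patch. It is a
stand-alone illustration with stub types and stub bodies, not kernel code,
and it builds with or without -DCONFIG_X86_64:

/*
 * sketch.c - stand-alone illustration of the guard layout after this
 * patch.  All names are stubs, not the real kernel definitions.
 *
 *   cc -DCONFIG_X86_64 -o sketch sketch.c    # "x86_64" build
 *   cc -o sketch sketch.c                    # "x86_32" build
 */
#include <stdint.h>
#include <stdio.h>

static uint64_t crash_base = 5ULL << 30;	/* pretend: reserved at 5G */
static uint64_t crash_size = 256ULL << 20;

static void memblock_phys_free(uint64_t base, uint64_t size)
{
	/* stub: the real helper gives the region back to memblock */
	printf("freeing 0x%llx (+0x%llx)\n",
	       (unsigned long long)base, (unsigned long long)size);
}

#ifdef CONFIG_X86_64
/* Only x86_64 needs the extra low (below 4G) reservation. */
static int reserve_crashkernel_low(void)
{
	/* stub: the real function reserves >= 256M of low memory */
	return 0;
}
#endif

static void reserve_crashkernel(void)
{
	/* ... main crashkernel region already reserved at crash_base ... */
#ifdef CONFIG_X86_64
	/*
	 * A reservation above 4G needs extra low memory (swiotlb/DMA
	 * buffers); if that fails, return the main region and bail out.
	 */
	if (crash_base >= (1ULL << 32) && reserve_crashkernel_low()) {
		memblock_phys_free(crash_base, crash_size);
		return;
	}
#endif
	printf("crashkernel reserved: %lluM at %lluM\n",
	       (unsigned long long)(crash_size >> 20),
	       (unsigned long long)(crash_base >> 20));
}

int main(void)
{
	reserve_crashkernel();
	return 0;
}

Before the patch, a 32-bit build still compiled reserve_crashkernel_low()
down to a bare "return 0;"; after it, both the function and its only call
site are compiled out entirely on X86_32.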
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
---
 arch/x86/kernel/setup.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index acf2f2eedfe3415..d9080bfa131a654 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -392,9 +392,9 @@ static void __init memblock_x86_reserve_range_setup_data(void)
 
 #ifdef CONFIG_KEXEC_CORE
 
+#ifdef CONFIG_X86_64
 static int __init reserve_crashkernel_low(void)
 {
-#ifdef CONFIG_X86_64
 	unsigned long long base, low_base = 0, low_size = 0;
 	unsigned long low_mem_limit;
 	int ret;
@@ -434,9 +434,10 @@ static int __init reserve_crashkernel_low(void)
 
 	crashk_low_res.start = low_base;
 	crashk_low_res.end = low_base + low_size - 1;
-#endif
+
 	return 0;
 }
+#endif
 
 static void __init reserve_crashkernel(void)
 {
@@ -490,10 +491,12 @@ static void __init reserve_crashkernel(void)
 		}
 	}
 
+#ifdef CONFIG_X86_64
 	if (crash_base >= (1ULL << 32) && reserve_crashkernel_low()) {
 		memblock_phys_free(crash_base, crash_size);
 		return;
 	}
+#endif
 
 	pr_info("Reserving %ldMB of memory at %ldMB for crashkernel (System RAM: %ldMB)\n",
 		(unsigned long)(crash_size >> 20),
-- 
2.25.1