From: Chen Zhou
Subject: [RESEND PATCH v5 2/4] arm64: kdump: support reserving crashkernel above 4G
Date: Tue, 16 Apr 2019 19:35:17 +0800
Message-ID: <20190416113519.90507-3-chenzhou10@huawei.com>
In-Reply-To: <20190416113519.90507-1-chenzhou10@huawei.com>
References: <20190416113519.90507-1-chenzhou10@huawei.com>
X-Mailer: git-send-email 2.20.1
X-Mailing-List: linux-kernel@vger.kernel.org
When crashkernel is reserved above 4G in memory, the kernel should
reserve some amount of low memory for swiotlb and DMA buffers. As on
x86_64, the kernel will try to allocate at least 256M below 4G
automatically if crashkernel is above 4G. Meanwhile, add support for
crashkernel=X,[high,low] on arm64.

Signed-off-by: Chen Zhou
---
 arch/arm64/include/asm/kexec.h |  3 +++
 arch/arm64/kernel/setup.c      |  3 +++
 arch/arm64/mm/init.c           | 25 ++++++++++++++++++++-----
 3 files changed, 26 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
index 67e4cb7..32949bf 100644
--- a/arch/arm64/include/asm/kexec.h
+++ b/arch/arm64/include/asm/kexec.h
@@ -28,6 +28,9 @@
 
 #define KEXEC_ARCH KEXEC_ARCH_AARCH64
 
+/* 2M alignment for crash kernel regions */
+#define CRASH_ALIGN	SZ_2M
+
 #ifndef __ASSEMBLY__
 
 /**
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 413d566..82cd9a0 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -243,6 +243,9 @@ static void __init request_standard_resources(void)
 			request_resource(res, &kernel_data);
 #ifdef CONFIG_KEXEC_CORE
 		/* Userspace will find "Crash kernel" region in /proc/iomem. */
+		if (crashk_low_res.end && crashk_low_res.start >= res->start &&
+		    crashk_low_res.end <= res->end)
+			request_resource(res, &crashk_low_res);
 		if (crashk_res.end && crashk_res.start >= res->start &&
 		    crashk_res.end <= res->end)
 			request_resource(res, &crashk_res);
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 972bf43..f5dde73 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -74,20 +74,30 @@ phys_addr_t arm64_dma_phys_limit __ro_after_init;
 static void __init reserve_crashkernel(void)
 {
 	unsigned long long crash_base, crash_size;
+	bool high = false;
 	int ret;
 
 	ret = parse_crashkernel(boot_command_line, memblock_phys_mem_size(),
 				&crash_size, &crash_base);
 	/* no crashkernel= or invalid value specified */
-	if (ret || !crash_size)
-		return;
+	if (ret || !crash_size) {
+		/* crashkernel=X,high */
+		ret = parse_crashkernel_high(boot_command_line,
+				memblock_phys_mem_size(),
+				&crash_size, &crash_base);
+		if (ret || !crash_size)
+			return;
+		high = true;
+	}
 
 	crash_size = PAGE_ALIGN(crash_size);
 
 	if (crash_base == 0) {
 		/* Current arm64 boot protocol requires 2MB alignment */
-		crash_base = memblock_find_in_range(0, ARCH_LOW_ADDRESS_LIMIT,
-				crash_size, SZ_2M);
+		crash_base = memblock_find_in_range(0,
+				high ? memblock_end_of_DRAM()
+				: ARCH_LOW_ADDRESS_LIMIT,
+				crash_size, CRASH_ALIGN);
 		if (crash_base == 0) {
 			pr_warn("cannot allocate crashkernel (size:0x%llx)\n",
 				crash_size);
@@ -105,13 +115,18 @@ static void __init reserve_crashkernel(void)
 			return;
 		}
 
-		if (!IS_ALIGNED(crash_base, SZ_2M)) {
+		if (!IS_ALIGNED(crash_base, CRASH_ALIGN)) {
 			pr_warn("cannot reserve crashkernel: base address is not 2MB aligned\n");
 			return;
 		}
 	}
 	memblock_reserve(crash_base, crash_size);
 
+	if (crash_base >= SZ_4G && reserve_crashkernel_low()) {
+		memblock_free(crash_base, crash_size);
+		return;
+	}
+
 	pr_info("crashkernel reserved: 0x%016llx - 0x%016llx (%lld MB)\n",
 		crash_base, crash_base + crash_size, crash_size >> 20);
-- 
2.7.4
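
For reference, below is a minimal userspace sketch of the placement policy
described in the changelog (not kernel code and not part of this patch):
the helper name plan_crashkernel(), the LOW_SIZE value and the use of a
fixed 4G boundary for the non-high case are illustrative assumptions only;
in the actual patch the low limit is ARCH_LOW_ADDRESS_LIMIT and the low
reservation size is chosen by reserve_crashkernel_low().

/*
 * Standalone sketch of the crashkernel placement policy, assuming a
 * fixed 4G low limit and a fixed 256M low reservation for illustration.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SZ_2M		(2ULL << 20)
#define SZ_4G		(4ULL << 30)
#define CRASH_ALIGN	SZ_2M		/* 2M alignment required by the arm64 boot protocol */
#define LOW_SIZE	(256ULL << 20)	/* default low reservation, mirroring x86_64 */

struct placement {
	uint64_t search_limit;	/* upper bound for the main crashkernel region */
	uint64_t align;		/* alignment of the region base */
	uint64_t low_size;	/* extra low memory for swiotlb/DMA, 0 if none */
};

/* Decide how a crashkernel request of 'size' bytes may be placed. */
static struct placement plan_crashkernel(uint64_t size, bool high, uint64_t end_of_dram)
{
	struct placement p = {
		/* crashkernel=X,high may use all of DRAM; plain crashkernel=X stays low */
		.search_limit	= high ? end_of_dram : SZ_4G,
		.align		= CRASH_ALIGN,
		/* if the region can end up above 4G, reserve some low memory as well */
		.low_size	= high ? LOW_SIZE : 0,
	};
	(void)size;
	return p;
}

int main(void)
{
	/* e.g. crashkernel=512M,high on a machine with 16G of DRAM */
	struct placement p = plan_crashkernel(512ULL << 20, true, 16ULL << 30);

	printf("limit=%#llx align=%#llx low=%#llx\n",
	       (unsigned long long)p.search_limit,
	       (unsigned long long)p.align,
	       (unsigned long long)p.low_size);
	return 0;
}

Booting with "crashkernel=512M,high" then corresponds to the example in
main(): the main region may sit anywhere in DRAM, possibly above 4G, and a
separate low region is reserved so swiotlb and 32-bit DMA still work in the
crash kernel.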