From: Chen Zhou
To: , , , , , , , ,
CC: , , , , , , , Chen Zhou
Subject: [PATCH v5 2/4] arm64: kdump: support reserving crashkernel above 4G
Date: Tue, 16 Apr 2019 15:43:27 +0800
Message-ID: <20190416074329.44928-3-chenzhou10@huawei.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190416074329.44928-1-chenzhou10@huawei.com>
References: <20190416074329.44928-1-chenzhou10@huawei.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 7BIT
Content-Type: text/plain; charset=US-ASCII
X-Mailing-List: linux-kernel@vger.kernel.org

When crashkernel is reserved above 4G in memory, the kernel should also
reserve some amount of low memory for swiotlb and DMA buffers. If
crashkernel is above 4G, the kernel tries to allocate at least 256M
below 4G automatically, as x86_64 does. Meanwhile, add support for the
crashkernel=X,high and crashkernel=X,low options on arm64.

Signed-off-by: Chen Zhou
---
 arch/arm64/include/asm/kexec.h |  3 +++
 arch/arm64/kernel/setup.c      |  3 +++
 arch/arm64/mm/init.c           | 25 ++++++++++++++++++++-----
 3 files changed, 26 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
index 67e4cb7..32949bf 100644
--- a/arch/arm64/include/asm/kexec.h
+++ b/arch/arm64/include/asm/kexec.h
@@ -28,6 +28,9 @@
 
 #define KEXEC_ARCH KEXEC_ARCH_AARCH64
 
+/* 2M alignment for crash kernel regions */
+#define CRASH_ALIGN	SZ_2M
+
 #ifndef __ASSEMBLY__
 
 /**
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 413d566..82cd9a0 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -243,6 +243,9 @@ static void __init request_standard_resources(void)
 			request_resource(res, &kernel_data);
 #ifdef CONFIG_KEXEC_CORE
 		/* Userspace will find "Crash kernel" region in /proc/iomem. */
+		if (crashk_low_res.end && crashk_low_res.start >= res->start &&
+		    crashk_low_res.end <= res->end)
+			request_resource(res, &crashk_low_res);
 		if (crashk_res.end && crashk_res.start >= res->start &&
 		    crashk_res.end <= res->end)
 			request_resource(res, &crashk_res);
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 972bf43..f5dde73 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -74,20 +74,30 @@ phys_addr_t arm64_dma_phys_limit __ro_after_init;
 static void __init reserve_crashkernel(void)
 {
 	unsigned long long crash_base, crash_size;
+	bool high = false;
 	int ret;
 
 	ret = parse_crashkernel(boot_command_line, memblock_phys_mem_size(),
 				&crash_size, &crash_base);
 	/* no crashkernel= or invalid value specified */
-	if (ret || !crash_size)
-		return;
+	if (ret || !crash_size) {
+		/* crashkernel=X,high */
+		ret = parse_crashkernel_high(boot_command_line,
+					memblock_phys_mem_size(),
+					&crash_size, &crash_base);
+		if (ret || !crash_size)
+			return;
+		high = true;
+	}
 
 	crash_size = PAGE_ALIGN(crash_size);
 
 	if (crash_base == 0) {
 		/* Current arm64 boot protocol requires 2MB alignment */
-		crash_base = memblock_find_in_range(0, ARCH_LOW_ADDRESS_LIMIT,
-				crash_size, SZ_2M);
+		crash_base = memblock_find_in_range(0,
+				high ? memblock_end_of_DRAM()
+				: ARCH_LOW_ADDRESS_LIMIT,
+				crash_size, CRASH_ALIGN);
 		if (crash_base == 0) {
 			pr_warn("cannot allocate crashkernel (size:0x%llx)\n",
 				crash_size);
@@ -105,13 +115,18 @@ static void __init reserve_crashkernel(void)
 			return;
 		}
 
-		if (!IS_ALIGNED(crash_base, SZ_2M)) {
+		if (!IS_ALIGNED(crash_base, CRASH_ALIGN)) {
 			pr_warn("cannot reserve crashkernel: base address is not 2MB aligned\n");
 			return;
 		}
 	}
 	memblock_reserve(crash_base, crash_size);
 
+	if (crash_base >= SZ_4G && reserve_crashkernel_low()) {
+		memblock_free(crash_base, crash_size);
+		return;
+	}
+
 	pr_info("crashkernel reserved: 0x%016llx - 0x%016llx (%lld MB)\n",
 		crash_base, crash_base + crash_size, crash_size >> 20);
 
-- 
2.7.4
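
Note on the last hunk: reserve_crashkernel_low() is not defined in this patch;
it is moved from x86 into common kexec code earlier in this series. As a rough
sketch only, assuming the 5.1-era memblock/crash_core APIs, a hard-coded 256M
default (the "at least 256M below 4G" from the commit message), and an
illustrative function name, the low reservation could look roughly like this:

/*
 * Illustrative sketch only -- not the implementation used by this series.
 * Assumptions: 5.1-era memblock APIs, CRASH_ALIGN from the kexec.h hunk
 * above, and a plain 256M default; the real helper additionally handles
 * an explicit crashkernel=0,low and swiotlb sizing.
 */
#include <linux/crash_core.h>
#include <linux/init.h>
#include <linux/kexec.h>
#include <linux/memblock.h>
#include <linux/printk.h>
#include <linux/sizes.h>

static int __init reserve_crashkernel_low_sketch(void)
{
	unsigned long long low_base = 0, low_size = 0;

	/* crashkernel=Y,low on the command line overrides the default */
	if (parse_crashkernel_low(boot_command_line, memblock_phys_mem_size(),
				  &low_size, &low_base) || !low_size)
		low_size = SZ_256M;

	/* room for swiotlb and DMA buffers must come from below 4G */
	low_base = memblock_find_in_range(0, SZ_4G, low_size, CRASH_ALIGN);
	if (!low_base) {
		pr_warn("cannot allocate crashkernel low memory (size:0x%llx)\n",
			low_size);
		return -ENOMEM;
	}

	memblock_reserve(low_base, low_size);
	pr_info("crashkernel low reserved: 0x%016llx - 0x%016llx (%lld MB)\n",
		low_base, low_base + low_size, low_size >> 20);

	/* exported via /proc/iomem by the setup.c hunk above */
	crashk_low_res.start = low_base;
	crashk_low_res.end = low_base + low_size - 1;

	return 0;
}

In the init.c hunk the real helper is only attempted when crash_base >= SZ_4G,
and a failure frees the high region again, so the reservation stays
all-or-nothing.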