From: Chen Zhou <chenzhou10@huawei.com>
Subject: [PATCH 2/4] arm64: kdump: support reserving crashkernel above 4G
Date: Tue, 7 May 2019 11:50:56 +0800
Message-ID: <20190507035058.63992-3-chenzhou10@huawei.com>
In-Reply-To: <20190507035058.63992-1-chenzhou10@huawei.com>
References: <20190507035058.63992-1-chenzhou10@huawei.com>
X-Mailer: git-send-email 2.20.1
X-Mailing-List: linux-kernel@vger.kernel.org

When crashkernel is reserved above 4G in memory, the kernel should also
reserve some amount of low memory for swiotlb and DMA buffers. This
patch adds support for the crashkernel=X,[high,low] parameters on
arm64. With a plain crashkernel=X, low memory is tried first, falling
back to high memory unless "crashkernel=X,high" is specified.
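For example (the sizes below are arbitrary and chosen only for
illustration; the amount of low memory actually reserved is handled by
reserve_crashkernel_low(), which this patch calls but does not define):

    crashkernel=512M,high crashkernel=128M,low

asks for 512M of crash kernel memory anywhere in DRAM (typically ending
up above 4G) plus 128M below 4G for swiotlb and DMA buffers, whereas a
plain

    crashkernel=512M

keeps the current behaviour: try below ARCH_LOW_ADDRESS_LIMIT first and
fall back to the rest of DRAM only if that fails.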
Signed-off-by: Chen Zhou <chenzhou10@huawei.com>
---
 arch/arm64/include/asm/kexec.h |  3 +++
 arch/arm64/kernel/setup.c      |  3 +++
 arch/arm64/mm/init.c           | 34 ++++++++++++++++++++++++++++------
 3 files changed, 34 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
index 67e4cb7..32949bf 100644
--- a/arch/arm64/include/asm/kexec.h
+++ b/arch/arm64/include/asm/kexec.h
@@ -28,6 +28,9 @@
 
 #define KEXEC_ARCH KEXEC_ARCH_AARCH64
 
+/* 2M alignment for crash kernel regions */
+#define CRASH_ALIGN		SZ_2M
+
 #ifndef __ASSEMBLY__
 
 /**
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 413d566..82cd9a0 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -243,6 +243,9 @@ static void __init request_standard_resources(void)
 			request_resource(res, &kernel_data);
 #ifdef CONFIG_KEXEC_CORE
 		/* Userspace will find "Crash kernel" region in /proc/iomem. */
+		if (crashk_low_res.end && crashk_low_res.start >= res->start &&
+		    crashk_low_res.end <= res->end)
+			request_resource(res, &crashk_low_res);
 		if (crashk_res.end && crashk_res.start >= res->start &&
 		    crashk_res.end <= res->end)
 			request_resource(res, &crashk_res);
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index d2adffb..3fcd739 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -74,20 +74,37 @@ phys_addr_t arm64_dma_phys_limit __ro_after_init;
 static void __init reserve_crashkernel(void)
 {
 	unsigned long long crash_base, crash_size;
+	bool high = false;
 	int ret;
 
 	ret = parse_crashkernel(boot_command_line, memblock_phys_mem_size(),
 				&crash_size, &crash_base);
 	/* no crashkernel= or invalid value specified */
-	if (ret || !crash_size)
-		return;
+	if (ret || !crash_size) {
+		/* crashkernel=X,high */
+		ret = parse_crashkernel_high(boot_command_line,
+				memblock_phys_mem_size(),
+				&crash_size, &crash_base);
+		if (ret || !crash_size)
+			return;
+		high = true;
+	}
 
 	crash_size = PAGE_ALIGN(crash_size);
 
 	if (crash_base == 0) {
-		/* Current arm64 boot protocol requires 2MB alignment */
-		crash_base = memblock_find_in_range(0, ARCH_LOW_ADDRESS_LIMIT,
-				crash_size, SZ_2M);
+		/*
+		 * Try low memory first and fall back to high memory
+		 * unless "crashkernel=size[KMG],high" is specified.
+		 */
+		if (!high)
+			crash_base = memblock_find_in_range(0,
+					ARCH_LOW_ADDRESS_LIMIT,
+					crash_size, CRASH_ALIGN);
+		if (!crash_base)
+			crash_base = memblock_find_in_range(0,
+					memblock_end_of_DRAM(),
+					crash_size, CRASH_ALIGN);
 		if (crash_base == 0) {
 			pr_warn("cannot allocate crashkernel (size:0x%llx)\n",
 				crash_size);
@@ -105,13 +122,18 @@ static void __init reserve_crashkernel(void)
 			return;
 		}
 
-		if (!IS_ALIGNED(crash_base, SZ_2M)) {
+		if (!IS_ALIGNED(crash_base, CRASH_ALIGN)) {
 			pr_warn("cannot reserve crashkernel: base address is not 2MB aligned\n");
 			return;
 		}
 	}
 	memblock_reserve(crash_base, crash_size);
 
+	if (crash_base >= SZ_4G && reserve_crashkernel_low()) {
+		memblock_free(crash_base, crash_size);
+		return;
+	}
+
 	pr_info("crashkernel reserved: 0x%016llx - 0x%016llx (%lld MB)\n",
 		crash_base, crash_base + crash_size, crash_size >> 20);
-- 
2.7.4
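The last hunk above calls reserve_crashkernel_low(), which is not
defined in this patch. A minimal sketch of what that helper is assumed
to do, modelled on the existing x86 reserve_crashkernel_low() (the
default size calculation and error handling below are assumptions for
illustration, not code from this series):

	/*
	 * Sketch only, not part of this patch: reserve memory below 4G
	 * for swiotlb and DMA buffers when the main crash kernel region
	 * lives above 4G.  Honours an explicit "crashkernel=Y,low" if
	 * given, otherwise falls back to a swiotlb-sized default.
	 */
	static int __init reserve_crashkernel_low(void)
	{
		unsigned long long low_base, low_size = 0;
		int ret;

		ret = parse_crashkernel_low(boot_command_line,
					    memblock_phys_mem_size(),
					    &low_size, &low_base);
		if (ret || !low_size)
			/* mirror the x86 default: room for swiotlb plus slack */
			low_size = max(swiotlb_size_or_default() + (8UL << 20),
				       256UL << 20);

		low_base = memblock_find_in_range(0, SZ_4G, low_size, CRASH_ALIGN);
		if (!low_base) {
			pr_err("cannot allocate crashkernel low memory (size:0x%llx)\n",
			       low_size);
			return -ENOMEM;
		}

		memblock_reserve(low_base, low_size);
		pr_info("crashkernel low memory reserved: 0x%llx - 0x%llx (%lld MB)\n",
			low_base, low_base + low_size, low_size >> 20);

		crashk_low_res.start = low_base;
		crashk_low_res.end   = low_base + low_size - 1;

		return 0;
	}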