Subject: Re: [PATCH 1/3] arm64: kdump: support reserving crashkernel above 4G
To: Mike Rapoport
References: <20190403030546.23718-1-chenzhou10@huawei.com>
 <20190403030546.23718-2-chenzhou10@huawei.com>
 <20190404144618.GB6433@rapoport-lnx>
From: Chen Zhou
Message-ID: <59ef4532-2402-3887-2794-b503827fac5a@huawei.com>
Date: Fri, 5 Apr 2019 11:03:39 +0800
In-Reply-To: <20190404144618.GB6433@rapoport-lnx>
X-Mailing-List: linux-kernel@vger.kernel.org

Hi Mike,

On
2019/4/4 22:46, Mike Rapoport wrote:
> Hi,
>
> On Wed, Apr 03, 2019 at 11:05:44AM +0800, Chen Zhou wrote:
>> When crashkernel is reserved above 4G in memory, the kernel should
>> reserve some amount of low memory for swiotlb and some DMA buffers.
>>
>> The kernel would try to allocate at least 256M below 4G automatically,
>> as x86_64 does, if crashkernel is above 4G. Meanwhile, support
>> crashkernel=X,[high,low] in arm64.
>>
>> Signed-off-by: Chen Zhou
>> ---
>>  arch/arm64/kernel/setup.c |  3 ++
>>  arch/arm64/mm/init.c      | 71 +++++++++++++++++++++++++++++++++++++++++++++--
>>  2 files changed, 71 insertions(+), 3 deletions(-)
>>
>> diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
>> index 413d566..82cd9a0 100644
>> --- a/arch/arm64/kernel/setup.c
>> +++ b/arch/arm64/kernel/setup.c
>> @@ -243,6 +243,9 @@ static void __init request_standard_resources(void)
>>  			request_resource(res, &kernel_data);
>>  #ifdef CONFIG_KEXEC_CORE
>>  		/* Userspace will find "Crash kernel" region in /proc/iomem. */
>> +		if (crashk_low_res.end && crashk_low_res.start >= res->start &&
>> +		    crashk_low_res.end <= res->end)
>> +			request_resource(res, &crashk_low_res);
>>  		if (crashk_res.end && crashk_res.start >= res->start &&
>>  		    crashk_res.end <= res->end)
>>  			request_resource(res, &crashk_res);
>> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
>> index 6bc1350..ceb2a25 100644
>> --- a/arch/arm64/mm/init.c
>> +++ b/arch/arm64/mm/init.c
>> @@ -64,6 +64,57 @@ EXPORT_SYMBOL(memstart_addr);
>>  phys_addr_t arm64_dma_phys_limit __ro_after_init;
>>
>>  #ifdef CONFIG_KEXEC_CORE
>> +static int __init reserve_crashkernel_low(void)
>> +{
>> +	unsigned long long base, low_base = 0, low_size = 0;
>> +	unsigned long total_low_mem;
>> +	int ret;
>> +
>> +	total_low_mem = memblock_mem_size(1UL << (32 - PAGE_SHIFT));
>> +
>> +	/* crashkernel=Y,low */
>> +	ret = parse_crashkernel_low(boot_command_line, total_low_mem,
>> +			&low_size, &base);
>> +	if (ret) {
>> +		/*
>> +		 * two parts from lib/swiotlb.c:
>> +		 * -swiotlb size: user-specified with swiotlb= or default.
>> +		 *
>> +		 * -swiotlb overflow buffer: now hardcoded to 32k. We round it
>> +		 * to 8M for other buffers that may need to stay low too. Also
>> +		 * make sure we allocate enough extra low memory so that we
>> +		 * don't run out of DMA buffers for 32-bit devices.
>> +		 */
>> +		low_size = max(swiotlb_size_or_default() + (8UL << 20), 256UL << 20);
>> +	} else {
>> +		/* passed with crashkernel=0,low ? */
>> +		if (!low_size)
>> +			return 0;
>> +	}
>> +
>> +	low_base = memblock_find_in_range(0, 1ULL << 32, low_size, SZ_2M);
>> +	if (!low_base) {
>> +		pr_err("Cannot reserve %ldMB crashkernel low memory, please try smaller size.\n",
>> +				(unsigned long)(low_size >> 20));
>> +		return -ENOMEM;
>> +	}
>> +
>> +	ret = memblock_reserve(low_base, low_size);
>> +	if (ret) {
>> +		pr_err("%s: Error reserving crashkernel low memblock.\n", __func__);
>> +		return ret;
>> +	}
>> +
>> +	pr_info("Reserving %ldMB of low memory at %ldMB for crashkernel (System RAM: %ldMB)\n",
>> +		(unsigned long)(low_size >> 20),
>> +		(unsigned long)(low_base >> 20),
>> +		(unsigned long)(total_low_mem >> 20));
>> +
>> +	crashk_low_res.start = low_base;
>> +	crashk_low_res.end = low_base + low_size - 1;
>> +
>> +	return 0;
>> +}
>> +
>>  /*
>>   * reserve_crashkernel() - reserves memory for crash kernel
>>   *
>> @@ -74,19 +125,28 @@ phys_addr_t arm64_dma_phys_limit __ro_after_init;
>>  static void __init reserve_crashkernel(void)
>>  {
>>  	unsigned long long crash_base, crash_size;
>> +	bool high = false;
>>  	int ret;
>>
>>  	ret = parse_crashkernel(boot_command_line, memblock_phys_mem_size(),
>>  				&crash_size, &crash_base);
>>  	/* no crashkernel= or invalid value specified */
>> -	if (ret || !crash_size)
>> -		return;
>> +	if (ret || !crash_size) {
>> +		/* crashkernel=X,high */
>> +		ret = parse_crashkernel_high(boot_command_line,
>> +				memblock_phys_mem_size(),
>> +				&crash_size, &crash_base);
>> +		if (ret || !crash_size)
>> +			return;
>> +		high = true;
>> +	}
>>
>>  	crash_size = PAGE_ALIGN(crash_size);
>>
>>  	if (crash_base == 0) {
>>  		/* Current arm64 boot protocol requires 2MB alignment */
>> -		crash_base = memblock_find_in_range(0, ARCH_LOW_ADDRESS_LIMIT,
>> +		crash_base = memblock_find_in_range(0,
>> +				high ? memblock_end_of_DRAM()
>> +				     : ARCH_LOW_ADDRESS_LIMIT,
>>  				crash_size, SZ_2M);
>>  		if (crash_base == 0) {
>>  			pr_warn("cannot allocate crashkernel (size:0x%llx)\n",
>> @@ -112,6 +172,11 @@ static void __init reserve_crashkernel(void)
>>  	}
>>  	memblock_reserve(crash_base, crash_size);
>>
>> +	if (crash_base >= SZ_4G && reserve_crashkernel_low()) {
>> +		memblock_free(crash_base, crash_size);
>> +		return;
>> +	}
>> +
>
> This is very reminiscent of what x86 does. Any chance some of the code
> can be reused rather than duplicated?

As I said in the comment, I ported reserve_crashkernel_low() from x86_64.
There are minor differences: in arm64 we don't need to do insert_resource(),
because we do request_resource() in request_standard_resources() later.

How about doing it like this: move the common reserve_crashkernel_low() code
into kernel/kexec_core.c, and change x86 as follows:

--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -573,9 +573,12 @@ static void __init reserve_crashkernel(void)
 		return;
 	}

-	if (crash_base >= (1ULL << 32) && reserve_crashkernel_low()) {
-		memblock_free(crash_base, crash_size);
-		return;
+	if (crash_base >= (1ULL << 32)) {
+		if (reserve_crashkernel_low()) {
+			memblock_free(crash_base, crash_size);
+			return;
+		} else
+			insert_resource(&iomem_resource, &crashk_low_res);
 	}

>
>>  	pr_info("crashkernel reserved: 0x%016llx - 0x%016llx (%lld MB)\n",
>>  		crash_base, crash_base + crash_size, crash_size >> 20);
>>
>> --
>> 2.7.4
>>
>