Date: Tue, 26 May 2020 08:56:15 +0800
From: Baoquan He
To: Chen Zhou
Cc: tglx@linutronix.de, mingo@redhat.com, catalin.marinas@arm.com,
    will@kernel.org, dyoung@redhat.com, robh+dt@kernel.org,
    John.p.donnelly@oracle.com, arnd@arndb.de, devicetree@vger.kernel.org,
    linux-doc@vger.kernel.org, kexec@lists.infradead.org,
    linux-kernel@vger.kernel.org, horms@verge.net.au, guohanjun@huawei.com,
    pkushwaha@marvell.com, linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH v8 1/5] x86: kdump: move reserve_crashkernel_low() into crash_core.c
Message-ID: <20200526005615.GD20045@MiWiFi-R3L-srv>
References: <20200521093805.64398-1-chenzhou10@huawei.com>
 <20200521093805.64398-2-chenzhou10@huawei.com>
In-Reply-To: <20200521093805.64398-2-chenzhou10@huawei.com>
User-Agent: Mutt/1.10.1 (2018-07-13)

On 05/21/20 at 05:38pm, Chen Zhou wrote:
> In preparation for supporting reserve_crashkernel_low in arm64 as
> x86_64 does, move reserve_crashkernel_low() into kernel/crash_core.c.
> BTW, move x86 CRASH_ALIGN to 2M.

The reason is?

> 
> Note, in arm64, we reserve low memory if and only if crashkernel=X,low
> is specified. Different from x86_64, don't set low memory automatically.
> 
> Reported-by: kbuild test robot
> Signed-off-by: Chen Zhou
> Tested-by: John Donnelly
> Tested-by: Prabhakar Kushwaha
> ---
>  arch/x86/kernel/setup.c    | 66 ++++-------------------------
>  include/linux/crash_core.h |  3 ++
>  include/linux/kexec.h      |  2 -
>  kernel/crash_core.c        | 85 ++++++++++++++++++++++++++++++++++++++
>  kernel/kexec_core.c        | 17 --------
>  5 files changed, 96 insertions(+), 77 deletions(-)
> 
> diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
> index 4b3fa6cd3106..de75fec73d47 100644
> --- a/arch/x86/kernel/setup.c
> +++ b/arch/x86/kernel/setup.c
> @@ -395,8 +395,8 @@ static void __init memblock_x86_reserve_range_setup_data(void)
>  
>  #ifdef CONFIG_KEXEC_CORE
>  
> -/* 16M alignment for crash kernel regions */
> -#define CRASH_ALIGN		SZ_16M
> +/* 2M alignment for crash kernel regions */
> +#define CRASH_ALIGN		SZ_2M
>  
>  /*
>   * Keep the crash kernel below this limit.
> @@ -419,59 +419,6 @@ static void __init memblock_x86_reserve_range_setup_data(void)
>  # define CRASH_ADDR_HIGH_MAX	SZ_64T
>  #endif
>  
> -static int __init reserve_crashkernel_low(void)
> -{
> -#ifdef CONFIG_X86_64
> -	unsigned long long base, low_base = 0, low_size = 0;
> -	unsigned long total_low_mem;
> -	int ret;
> -
> -	total_low_mem = memblock_mem_size(1UL << (32 - PAGE_SHIFT));
> -
> -	/* crashkernel=Y,low */
> -	ret = parse_crashkernel_low(boot_command_line, total_low_mem, &low_size, &base);
> -	if (ret) {
> -		/*
> -		 * two parts from kernel/dma/swiotlb.c:
> -		 * -swiotlb size: user-specified with swiotlb= or default.
> -		 *
> -		 * -swiotlb overflow buffer: now hardcoded to 32k. We round it
> -		 * to 8M for other buffers that may need to stay low too. Also
> -		 * make sure we allocate enough extra low memory so that we
> -		 * don't run out of DMA buffers for 32-bit devices.
> -		 */
> -		low_size = max(swiotlb_size_or_default() + (8UL << 20), 256UL << 20);
> -	} else {
> -		/* passed with crashkernel=0,low ? */
> -		if (!low_size)
> -			return 0;
> -	}
> -
> -	low_base = memblock_find_in_range(0, 1ULL << 32, low_size, CRASH_ALIGN);
> -	if (!low_base) {
> -		pr_err("Cannot reserve %ldMB crashkernel low memory, please try smaller size.\n",
> -		       (unsigned long)(low_size >> 20));
> -		return -ENOMEM;
> -	}
> -
> -	ret = memblock_reserve(low_base, low_size);
> -	if (ret) {
> -		pr_err("%s: Error reserving crashkernel low memblock.\n", __func__);
> -		return ret;
> -	}
> -
> -	pr_info("Reserving %ldMB of low memory at %ldMB for crashkernel (System low RAM: %ldMB)\n",
> -		(unsigned long)(low_size >> 20),
> -		(unsigned long)(low_base >> 20),
> -		(unsigned long)(total_low_mem >> 20));
> -
> -	crashk_low_res.start = low_base;
> -	crashk_low_res.end   = low_base + low_size - 1;
> -	insert_resource(&iomem_resource, &crashk_low_res);
> -#endif
> -	return 0;
> -}
> -
>  static void __init reserve_crashkernel(void)
>  {
>  	unsigned long long crash_size, crash_base, total_mem;
> @@ -535,9 +482,12 @@ static void __init reserve_crashkernel(void)
>  		return;
>  	}
>  
> -	if (crash_base >= (1ULL << 32) && reserve_crashkernel_low()) {
> -		memblock_free(crash_base, crash_size);
> -		return;
> +	if (crash_base >= (1ULL << 32)) {
> +		if (reserve_crashkernel_low()) {
> +			memblock_free(crash_base, crash_size);
> +			return;
> +		}
> +		insert_resource(&iomem_resource, &crashk_low_res);
>  	}
>  
>  	pr_info("Reserving %ldMB of memory at %ldMB for crashkernel (System RAM: %ldMB)\n",
> diff --git a/include/linux/crash_core.h b/include/linux/crash_core.h
> index 525510a9f965..4df8c0bff03e 100644
> --- a/include/linux/crash_core.h
> +++ b/include/linux/crash_core.h
> @@ -63,6 +63,8 @@ phys_addr_t paddr_vmcoreinfo_note(void);
>  extern unsigned char *vmcoreinfo_data;
>  extern size_t vmcoreinfo_size;
>  extern u32 *vmcoreinfo_note;
> +extern struct resource crashk_res;
> +extern struct resource crashk_low_res;
>  
>  Elf_Word *append_elf_note(Elf_Word *buf, char *name, unsigned int type,
>  			  void *data, size_t data_len);
> @@ -74,5 +76,6 @@ int parse_crashkernel_high(char *cmdline, unsigned long long system_ram,
>  		unsigned long long *crash_size, unsigned long long *crash_base);
>  int parse_crashkernel_low(char *cmdline, unsigned long long system_ram,
>  		unsigned long long *crash_size, unsigned long long *crash_base);
> +int __init reserve_crashkernel_low(void);
>  
>  #endif /* LINUX_CRASH_CORE_H */
> diff --git a/include/linux/kexec.h b/include/linux/kexec.h
> index 1776eb2e43a4..5d5d9635b18d 100644
> --- a/include/linux/kexec.h
> +++ b/include/linux/kexec.h
> @@ -330,8 +330,6 @@ extern int kexec_load_disabled;
>  
>  /* Location of a reserved region to hold the crash kernel.
>   */
> -extern struct resource crashk_res;
> -extern struct resource crashk_low_res;
>  extern note_buf_t __percpu *crash_notes;
>  
>  /* flag to track if kexec reboot is in progress */
> diff --git a/kernel/crash_core.c b/kernel/crash_core.c
> index 9f1557b98468..a7580d291c37 100644
> --- a/kernel/crash_core.c
> +++ b/kernel/crash_core.c
> @@ -7,6 +7,8 @@
>  #include
>  #include
>  #include
> +#include
> +#include
>  
>  #include
>  #include
> @@ -19,6 +21,22 @@ u32 *vmcoreinfo_note;
>  /* trusted vmcoreinfo, e.g. we can make a copy in the crash memory */
>  static unsigned char *vmcoreinfo_data_safecopy;
>  
> +/* Location of the reserved area for the crash kernel */
> +struct resource crashk_res = {
> +	.name  = "Crash kernel",
> +	.start = 0,
> +	.end   = 0,
> +	.flags = IORESOURCE_BUSY | IORESOURCE_SYSTEM_RAM,
> +	.desc  = IORES_DESC_CRASH_KERNEL
> +};
> +struct resource crashk_low_res = {
> +	.name  = "Crash kernel",
> +	.start = 0,
> +	.end   = 0,
> +	.flags = IORESOURCE_BUSY | IORESOURCE_SYSTEM_RAM,
> +	.desc  = IORES_DESC_CRASH_KERNEL
> +};
> +
>  /*
>   * parsing the "crashkernel" commandline
>   *
> @@ -292,6 +310,73 @@ int __init parse_crashkernel_low(char *cmdline,
>  			"crashkernel=", suffix_tbl[SUFFIX_LOW]);
>  }
>  
> +#if defined(CONFIG_X86_64) || defined(CONFIG_ARM64)
> +#define CRASH_ALIGN	SZ_2M
> +#endif
> +
> +int __init reserve_crashkernel_low(void)
> +{
> +#if defined(CONFIG_X86_64) || defined(CONFIG_ARM64)
> +	unsigned long long base, low_base = 0, low_size = 0;
> +	unsigned long total_low_mem;
> +	int ret;
> +
> +	total_low_mem = memblock_mem_size(1UL << (32 - PAGE_SHIFT));
> +
> +	/* crashkernel=Y,low */
> +	ret = parse_crashkernel_low(boot_command_line, total_low_mem, &low_size,
> +			&base);
> +	if (ret) {
> +#ifdef CONFIG_X86_64
> +		/*
> +		 * two parts from lib/swiotlb.c:
> +		 * -swiotlb size: user-specified with swiotlb= or default.
> +		 *
> +		 * -swiotlb overflow buffer: now hardcoded to 32k. We round it
> +		 * to 8M for other buffers that may need to stay low too. Also
> +		 * make sure we allocate enough extra low memory so that we
> +		 * don't run out of DMA buffers for 32-bit devices.
> +		 */
> +		low_size = max(swiotlb_size_or_default() + (8UL << 20),
> +				256UL << 20);
> +#else
> +		/*
> +		 * in arm64, reserve low memory if and only if crashkernel=X,low
> +		 * specified.
> +		 */
> +		return -EINVAL;
> +#endif
> +	} else {
> +		/* passed with crashkernel=0,low ? */
> +		if (!low_size)
> +			return 0;
> +	}
> +
> +	low_base = memblock_find_in_range(0, 1ULL << 32, low_size, CRASH_ALIGN);
> +	if (!low_base) {
> +		pr_err("Cannot reserve %ldMB crashkernel low memory, please try smaller size.\n",
> +			(unsigned long)(low_size >> 20));
> +		return -ENOMEM;
> +	}
> +
> +	ret = memblock_reserve(low_base, low_size);
> +	if (ret) {
> +		pr_err("%s: Error reserving crashkernel low memblock.\n",
> +			__func__);
> +		return ret;
> +	}
> +
> +	pr_info("Reserving %ldMB of low memory at %ldMB for crashkernel (System low RAM: %ldMB)\n",
> +		(unsigned long)(low_size >> 20),
> +		(unsigned long)(low_base >> 20),
> +		(unsigned long)(total_low_mem >> 20));
> +
> +	crashk_low_res.start = low_base;
> +	crashk_low_res.end = low_base + low_size - 1;
> +#endif
> +	return 0;
> +}
> +
>  Elf_Word *append_elf_note(Elf_Word *buf, char *name, unsigned int type,
>  			  void *data, size_t data_len)
>  {
> diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
> index c19c0dad1ebe..db66bbabfff3 100644
> --- a/kernel/kexec_core.c
> +++ b/kernel/kexec_core.c
> @@ -53,23 +53,6 @@ note_buf_t __percpu *crash_notes;
>  /* Flag to indicate we are going to kexec a new kernel */
>  bool kexec_in_progress = false;
>  
> -
> -/* Location of the reserved area for the crash kernel */
> -struct resource crashk_res = {
> -	.name  = "Crash kernel",
> -	.start = 0,
> -	.end   = 0,
> -	.flags = IORESOURCE_BUSY | IORESOURCE_SYSTEM_RAM,
> -	.desc  = IORES_DESC_CRASH_KERNEL
> -};
> -struct resource crashk_low_res = {
> -	.name  = "Crash kernel",
> -	.start = 0,
> -	.end   = 0,
> -	.flags = IORESOURCE_BUSY | IORESOURCE_SYSTEM_RAM,
> -	.desc  = IORES_DESC_CRASH_KERNEL
> -};
> -
>  int kexec_should_crash(struct task_struct *p)
>  {
>  	/*
> -- 
> 2.20.1
> 
> 
> _______________________________________________
> kexec mailing list
> kexec@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/kexec
> 
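
For anyone wanting to exercise this path, the reservation is requested on
the kernel command line through the crashkernel= variants that
parse_crashkernel_high()/parse_crashkernel_low() already accept; a rough
example, with the sizes purely illustrative:

    crashkernel=512M,high crashkernel=256M,low

When no ",low" option is given, the x86_64 branch above falls back to
max(swiotlb_size_or_default() + (8UL << 20), 256UL << 20), i.e. at least
256M of low memory, while the arm64 branch returns -EINVAL and reserves no
low memory at all.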