Date: Mon, 20 Mar 2023 15:36:52 +0100
From: Simon Horman
To: Chen Jiahao
Cc: paul.walmsley@sifive.com, palmer@dabbelt.com, conor.dooley@microchip.com,
	guoren@kernel.org, heiko@sntech.de, bjorn@rivosinc.com, alex@ghiti.fr,
	akpm@linux-foundation.org, atishp@rivosinc.com, bhe@redhat.com,
	thunder.leizhen@huawei.com, linux-kernel@vger.kernel.org,
	linux-riscv@lists.infradead.org, kexec@lists.infradead.org,
	linux-doc@vger.kernel.org
Subject: Re: [PATCH -next 1/2] riscv: kdump: Implement crashkernel=X,[high,low]
References: <20230320204244.1637821-1-chenjiahao16@huawei.com>
	<20230320204244.1637821-2-chenjiahao16@huawei.com>
In-Reply-To: <20230320204244.1637821-2-chenjiahao16@huawei.com>

On Tue, Mar 21, 2023 at 04:42:43AM +0800, Chen Jiahao wrote:
> On riscv, the current crash kernel allocation logic is trying to
> allocate within 32bit addressible memory region by default, if
> failed, try to allocate without 4G restriction.
>
> In need of saving DMA zone memory while allocating a relatively large
> crash kernel region, allocating the reserved memory top down in
> high memory, without overlapping the DMA zone, is a mature solution.
> Here introduce the parameter option crashkernel=X,[high,low].
>
> We can reserve the crash kernel from high memory above DMA zone range
> by explicitly passing "crashkernel=X,high"; or reserve a memory range
> below 4G with "crashkernel=X,low".
>
> Signed-off-by: Chen Jiahao

Some minor nits, but I don't think there is any need to respin for them.

Reviewed-by: Simon Horman

...
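As an aside, for anyone trying this out, the usage on the kernel command line
would look something like

    crashkernel=512M,high crashkernel=128M,low

where the sizes are only made-up examples, not values taken from this patch.
The ",low" part can be left out; the hunk below falls back to
DEFAULT_CRASH_KERNEL_LOW_SIZE when no low size was requested.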
> diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
> index 478d6763a01a..5def2174b243 100644
> --- a/arch/riscv/mm/init.c
> +++ b/arch/riscv/mm/init.c

...

> @@ -1201,16 +1242,25 @@ static void __init reserve_crashkernel(void)
>  	 */
>  	crash_base = memblock_phys_alloc_range(crash_size, PMD_SIZE,
>  					       search_start,
> -					       min(search_end, (unsigned long) SZ_4G));
> +					       min(search_end, (unsigned long) dma32_phys_limit));

nit: While here, you could drop the space before 'dma32_phys_limit'.
     Or perhaps use min_t, which seems appropriate here.

>  	if (crash_base == 0) {
>  		/* Try again without restricting region to 32bit addressible memory */
>  		crash_base = memblock_phys_alloc_range(crash_size, PMD_SIZE,
> -						       search_start, search_end);
> +						search_start, search_end);
>  		if (crash_base == 0) {
>  			pr_warn("crashkernel: couldn't allocate %lldKB\n",
>  				crash_size >> 10);
>  			return;
>  		}
> +
> +		if (!crash_low_size)
> +			crash_low_size = DEFAULT_CRASH_KERNEL_LOW_SIZE;
> +	}
> +
> +	if ((crash_base > dma32_phys_limit - crash_low_size) &&
> +	     crash_low_size && reserve_crashkernel_low(crash_low_size)) {

nit: The line above should be aligned one character to the left
     (remove one space in the indent).

> +		memblock_phys_free(crash_base, crash_size);
> +		return;
>  	}
> 
>  	pr_info("crashkernel: reserved 0x%016llx - 0x%016llx (%lld MB)\n",

...
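For illustration only, here is roughly how the two spots above would read with
both nits applied, i.e. min_t() instead of the open-coded cast, and the
continuation line aligned under the opening '('. Untested, just a sketch:

	/* min_t() avoids the explicit cast (and the stray space) */
	crash_base = memblock_phys_alloc_range(crash_size, PMD_SIZE,
					       search_start,
					       min_t(unsigned long, search_end,
						     dma32_phys_limit));

	/* continuation aligned one character further left */
	if ((crash_base > dma32_phys_limit - crash_low_size) &&
	    crash_low_size && reserve_crashkernel_low(crash_low_size)) {
		memblock_phys_free(crash_base, crash_size);
		return;
	}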