From: Hari Bathini
To: linuxppc-dev, Kexec-ml
Cc: lkml, Andrew Morton, Baoquan He, Sourabh Jain, Mahesh J Salgaonkar,
    "Naveen N. Rao", Nicholas Piggin, Michael Ellerman, Dave Young
Rao" , Nicholas Piggin , Michael Ellerman , Dave Young Subject: [PATCH linux-next v2 2/3] powerpc/kexec: split CONFIG_KEXEC_FILE and CONFIG_CRASH_DUMP Date: Mon, 26 Feb 2024 16:00:09 +0530 Message-ID: <20240226103010.589537-3-hbathini@linux.ibm.com> X-Mailer: git-send-email 2.43.2 In-Reply-To: <20240226103010.589537-1-hbathini@linux.ibm.com> References: <20240226103010.589537-1-hbathini@linux.ibm.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-TM-AS-GCONF: 00 X-Proofpoint-GUID: jK0-TGLB_rLaVm-3O0YtcIQVKqGv-Q6- X-Proofpoint-ORIG-GUID: cv236Gc8X3PNU7UQ0jBV20aPAm2IwaAd X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.272,Aquarius:18.0.1011,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2024-02-26_07,2024-02-26_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 impostorscore=0 mlxscore=0 bulkscore=0 mlxlogscore=999 phishscore=0 adultscore=0 lowpriorityscore=0 spamscore=0 malwarescore=0 clxscore=1015 priorityscore=1501 suspectscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2311290000 definitions=main-2402260079 CONFIG_KEXEC_FILE does not have to select CONFIG_CRASH_DUMP. Move some code under CONFIG_CRASH_DUMP to support CONFIG_KEXEC_FILE and !CONFIG_CRASH_DUMP case. Signed-off-by: Hari Bathini --- * No changes in v2. arch/powerpc/kexec/elf_64.c | 4 +- arch/powerpc/kexec/file_load_64.c | 269 ++++++++++++++++-------------- 2 files changed, 142 insertions(+), 131 deletions(-) diff --git a/arch/powerpc/kexec/elf_64.c b/arch/powerpc/kexec/elf_64.c index 904016cf89ea..6d8951e8e966 100644 --- a/arch/powerpc/kexec/elf_64.c +++ b/arch/powerpc/kexec/elf_64.c @@ -47,7 +47,7 @@ static void *elf64_load(struct kimage *image, char *kernel_buf, if (ret) return ERR_PTR(ret); - if (image->type == KEXEC_TYPE_CRASH) { + if (IS_ENABLED(CONFIG_CRASH_DUMP) && image->type == KEXEC_TYPE_CRASH) { /* min & max buffer values for kdump case */ kbuf.buf_min = pbuf.buf_min = crashk_res.start; kbuf.buf_max = pbuf.buf_max = @@ -70,7 +70,7 @@ static void *elf64_load(struct kimage *image, char *kernel_buf, kexec_dprintk("Loaded purgatory at 0x%lx\n", pbuf.mem); /* Load additional segments needed for panic kernel */ - if (image->type == KEXEC_TYPE_CRASH) { + if (IS_ENABLED(CONFIG_CRASH_DUMP) && image->type == KEXEC_TYPE_CRASH) { ret = load_crashdump_segments_ppc64(image, &kbuf); if (ret) { pr_err("Failed to load kdump kernel segments\n"); diff --git a/arch/powerpc/kexec/file_load_64.c b/arch/powerpc/kexec/file_load_64.c index 5b4c5cb23354..1bc65de6174f 100644 --- a/arch/powerpc/kexec/file_load_64.c +++ b/arch/powerpc/kexec/file_load_64.c @@ -96,119 +96,6 @@ static int get_exclude_memory_ranges(struct crash_mem **mem_ranges) return ret; } -/** - * get_usable_memory_ranges - Get usable memory ranges. This list includes - * regions like crashkernel, opal/rtas & tce-table, - * that kdump kernel could use. - * @mem_ranges: Range list to add the memory ranges to. - * - * Returns 0 on success, negative errno on error. - */ -static int get_usable_memory_ranges(struct crash_mem **mem_ranges) -{ - int ret; - - /* - * Early boot failure observed on guests when low memory (first memory - * block?) is not added to usable memory. So, add [0, crashk_res.end] - * instead of [crashk_res.start, crashk_res.end] to workaround it. - * Also, crashed kernel's memory must be added to reserve map to - * avoid kdump kernel from using it. 
-	 */
-	ret = add_mem_range(mem_ranges, 0, crashk_res.end + 1);
-	if (ret)
-		goto out;
-
-	ret = add_rtas_mem_range(mem_ranges);
-	if (ret)
-		goto out;
-
-	ret = add_opal_mem_range(mem_ranges);
-	if (ret)
-		goto out;
-
-	ret = add_tce_mem_ranges(mem_ranges);
-out:
-	if (ret)
-		pr_err("Failed to setup usable memory ranges\n");
-	return ret;
-}
-
-/**
- * get_crash_memory_ranges - Get crash memory ranges. This list includes
- *                           first/crashing kernel's memory regions that
- *                           would be exported via an elfcore.
- * @mem_ranges:              Range list to add the memory ranges to.
- *
- * Returns 0 on success, negative errno on error.
- */
-static int get_crash_memory_ranges(struct crash_mem **mem_ranges)
-{
-	phys_addr_t base, end;
-	struct crash_mem *tmem;
-	u64 i;
-	int ret;
-
-	for_each_mem_range(i, &base, &end) {
-		u64 size = end - base;
-
-		/* Skip backup memory region, which needs a separate entry */
-		if (base == BACKUP_SRC_START) {
-			if (size > BACKUP_SRC_SIZE) {
-				base = BACKUP_SRC_END + 1;
-				size -= BACKUP_SRC_SIZE;
-			} else
-				continue;
-		}
-
-		ret = add_mem_range(mem_ranges, base, size);
-		if (ret)
-			goto out;
-
-		/* Try merging adjacent ranges before reallocation attempt */
-		if ((*mem_ranges)->nr_ranges == (*mem_ranges)->max_nr_ranges)
-			sort_memory_ranges(*mem_ranges, true);
-	}
-
-	/* Reallocate memory ranges if there is no space to split ranges */
-	tmem = *mem_ranges;
-	if (tmem && (tmem->nr_ranges == tmem->max_nr_ranges)) {
-		tmem = realloc_mem_ranges(mem_ranges);
-		if (!tmem)
-			goto out;
-	}
-
-	/* Exclude crashkernel region */
-	ret = crash_exclude_mem_range(tmem, crashk_res.start, crashk_res.end);
-	if (ret)
-		goto out;
-
-	/*
-	 * FIXME: For now, stay in parity with kexec-tools but if RTAS/OPAL
-	 *        regions are exported to save their context at the time of
-	 *        crash, they should actually be backed up just like the
-	 *        first 64K bytes of memory.
-	 */
-	ret = add_rtas_mem_range(mem_ranges);
-	if (ret)
-		goto out;
-
-	ret = add_opal_mem_range(mem_ranges);
-	if (ret)
-		goto out;
-
-	/* create a separate program header for the backup region */
-	ret = add_mem_range(mem_ranges, BACKUP_SRC_START, BACKUP_SRC_SIZE);
-	if (ret)
-		goto out;
-
-	sort_memory_ranges(*mem_ranges, false);
-out:
-	if (ret)
-		pr_err("Failed to setup crash memory ranges\n");
-	return ret;
-}
-
 /**
  * get_reserved_memory_ranges - Get reserve memory ranges. This list includes
  *                              memory regions that should be added to the
@@ -434,6 +321,120 @@ static int locate_mem_hole_bottom_up_ppc64(struct kexec_buf *kbuf,
 	return ret;
 }
 
+#ifdef CONFIG_CRASH_DUMP
+/**
+ * get_usable_memory_ranges - Get usable memory ranges. This list includes
+ *                            regions like crashkernel, opal/rtas & tce-table,
+ *                            that kdump kernel could use.
+ * @mem_ranges:               Range list to add the memory ranges to.
+ *
+ * Returns 0 on success, negative errno on error.
+ */
+static int get_usable_memory_ranges(struct crash_mem **mem_ranges)
+{
+	int ret;
+
+	/*
+	 * Early boot failure observed on guests when low memory (first memory
+	 * block?) is not added to usable memory. So, add [0, crashk_res.end]
+	 * instead of [crashk_res.start, crashk_res.end] to workaround it.
+	 * Also, crashed kernel's memory must be added to reserve map to
+	 * avoid kdump kernel from using it.
+	 */
+	ret = add_mem_range(mem_ranges, 0, crashk_res.end + 1);
+	if (ret)
+		goto out;
+
+	ret = add_rtas_mem_range(mem_ranges);
+	if (ret)
+		goto out;
+
+	ret = add_opal_mem_range(mem_ranges);
+	if (ret)
+		goto out;
+
+	ret = add_tce_mem_ranges(mem_ranges);
+out:
+	if (ret)
+		pr_err("Failed to setup usable memory ranges\n");
+	return ret;
+}
+
+/**
+ * get_crash_memory_ranges - Get crash memory ranges. This list includes
+ *                           first/crashing kernel's memory regions that
+ *                           would be exported via an elfcore.
+ * @mem_ranges:              Range list to add the memory ranges to.
+ *
+ * Returns 0 on success, negative errno on error.
+ */
+static int get_crash_memory_ranges(struct crash_mem **mem_ranges)
+{
+	phys_addr_t base, end;
+	struct crash_mem *tmem;
+	u64 i;
+	int ret;
+
+	for_each_mem_range(i, &base, &end) {
+		u64 size = end - base;
+
+		/* Skip backup memory region, which needs a separate entry */
+		if (base == BACKUP_SRC_START) {
+			if (size > BACKUP_SRC_SIZE) {
+				base = BACKUP_SRC_END + 1;
+				size -= BACKUP_SRC_SIZE;
+			} else
+				continue;
+		}
+
+		ret = add_mem_range(mem_ranges, base, size);
+		if (ret)
+			goto out;
+
+		/* Try merging adjacent ranges before reallocation attempt */
+		if ((*mem_ranges)->nr_ranges == (*mem_ranges)->max_nr_ranges)
+			sort_memory_ranges(*mem_ranges, true);
+	}
+
+	/* Reallocate memory ranges if there is no space to split ranges */
+	tmem = *mem_ranges;
+	if (tmem && (tmem->nr_ranges == tmem->max_nr_ranges)) {
+		tmem = realloc_mem_ranges(mem_ranges);
+		if (!tmem)
+			goto out;
+	}
+
+	/* Exclude crashkernel region */
+	ret = crash_exclude_mem_range(tmem, crashk_res.start, crashk_res.end);
+	if (ret)
+		goto out;
+
+	/*
+	 * FIXME: For now, stay in parity with kexec-tools but if RTAS/OPAL
+	 *        regions are exported to save their context at the time of
+	 *        crash, they should actually be backed up just like the
+	 *        first 64K bytes of memory.
+	 */
+	ret = add_rtas_mem_range(mem_ranges);
+	if (ret)
+		goto out;
+
+	ret = add_opal_mem_range(mem_ranges);
+	if (ret)
+		goto out;
+
+	/* create a separate program header for the backup region */
+	ret = add_mem_range(mem_ranges, BACKUP_SRC_START, BACKUP_SRC_SIZE);
+	if (ret)
+		goto out;
+
+	sort_memory_ranges(*mem_ranges, false);
+out:
+	if (ret)
+		pr_err("Failed to setup crash memory ranges\n");
+	return ret;
+}
+
 /**
  * check_realloc_usable_mem - Reallocate buffer if it can't accommodate entries
  * @um_info: Usable memory buffer and ranges info.
@@ -863,6 +864,7 @@ int load_crashdump_segments_ppc64(struct kimage *image,
 
 	return 0;
 }
+#endif
 
 /**
  * setup_purgatory_ppc64 - initialize PPC64 specific purgatory's global
@@ -972,26 +974,14 @@ static unsigned int cpu_node_size(void)
 	return size;
 }
 
-/**
- * kexec_extra_fdt_size_ppc64 - Return the estimated additional size needed to
- *                              setup FDT for kexec/kdump kernel.
- * @image:                      kexec image being loaded.
- *
- * Returns the estimated extra size needed for kexec/kdump kernel FDT.
- */
-unsigned int kexec_extra_fdt_size_ppc64(struct kimage *image)
+static unsigned int kdump_extra_fdt_size_ppc64(struct kimage *image)
 {
 	unsigned int cpu_nodes, extra_size = 0;
 	struct device_node *dn;
 	u64 usm_entries;
 
-	// Budget some space for the password blob. There's already extra space
-	// for the key name
-	if (plpks_is_available())
-		extra_size += (unsigned int)plpks_get_passwordlen();
-
-	if (image->type != KEXEC_TYPE_CRASH)
-		return extra_size;
+	if (!IS_ENABLED(CONFIG_CRASH_DUMP) || image->type != KEXEC_TYPE_CRASH)
+		return 0;
 
 	/*
 	 * For kdump kernel, account for linux,usable-memory and
@@ -1019,6 +1009,25 @@ unsigned int kexec_extra_fdt_size_ppc64(struct kimage *image)
 	return extra_size;
 }
 
+/**
+ * kexec_extra_fdt_size_ppc64 - Return the estimated additional size needed to
+ *                              setup FDT for kexec/kdump kernel.
+ * @image:                      kexec image being loaded.
+ *
+ * Returns the estimated extra size needed for kexec/kdump kernel FDT.
+ */
+unsigned int kexec_extra_fdt_size_ppc64(struct kimage *image)
+{
+	unsigned int extra_size = 0;
+
+	// Budget some space for the password blob. There's already extra space
+	// for the key name
+	if (plpks_is_available())
+		extra_size += (unsigned int)plpks_get_passwordlen();
+
+	return extra_size + kdump_extra_fdt_size_ppc64(image);
+}
+
 /**
  * add_node_props - Reads node properties from device node structure and add
  *                  them to fdt.
@@ -1171,6 +1180,7 @@ int setup_new_fdt_ppc64(const struct kimage *image, void *fdt,
 	struct crash_mem *umem = NULL, *rmem = NULL;
 	int i, nr_ranges, ret;
 
+#ifdef CONFIG_CRASH_DUMP
 	/*
 	 * Restrict memory usage for kdump kernel by setting up
 	 * usable memory ranges and memory reserve map.
@@ -1207,6 +1217,7 @@ int setup_new_fdt_ppc64(const struct kimage *image, void *fdt,
 			goto out;
 		}
 	}
+#endif
 
 	/* Update cpus nodes information to account hotplug CPUs. */
 	ret = update_cpus_node(fdt);
@@ -1278,7 +1289,7 @@ int arch_kexec_locate_mem_hole(struct kexec_buf *kbuf)
 	buf_min = kbuf->buf_min;
 	buf_max = kbuf->buf_max;
 	/* Segments for kdump kernel should be within crashkernel region */
-	if (kbuf->image->type == KEXEC_TYPE_CRASH) {
+	if (IS_ENABLED(CONFIG_CRASH_DUMP) && kbuf->image->type == KEXEC_TYPE_CRASH) {
 		buf_min = (buf_min < crashk_res.start ?
 				crashk_res.start : buf_min);
 		buf_max = (buf_max > crashk_res.end ?
-- 
2.43.2
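
The patch above combines two standard kernel idioms: small branches in always-built code are guarded with IS_ENABLED(CONFIG_CRASH_DUMP), which reduces to a compile-time constant, while whole crash-dump-only functions move under #ifdef CONFIG_CRASH_DUMP so they are not compiled at all when the option is off. As an illustrative aside (not part of the patch), the minimal userspace sketch below shows why that combination works; the MY_IS_ENABLED_CRASH_DUMP macro and the load_crashdump_segments() helper are simplified stand-ins for the real IS_ENABLED() in include/linux/kconfig.h and the CONFIG_CRASH_DUMP-only helpers this patch touches.

/*
 * Illustrative sketch only, not part of the patch. The IS_ENABLED()-style
 * branch must still parse and type-check when the option is off, so the
 * declaration stays visible; constant folding then removes the call, so the
 * definition can live entirely under #ifdef. Build with optimization
 * (e.g. gcc -O2), as the kernel always does, so the dead call is dropped
 * before link time.
 */
#include <stdio.h>

/* Pretend Kconfig result; comment this out to emulate CONFIG_CRASH_DUMP=n. */
#define CONFIG_CRASH_DUMP 1

#ifdef CONFIG_CRASH_DUMP
#define MY_IS_ENABLED_CRASH_DUMP 1	/* simplified IS_ENABLED() stand-in */
#else
#define MY_IS_ENABLED_CRASH_DUMP 0
#endif

/* Declaration is always visible so the guarded call compiles either way. */
int load_crashdump_segments(void);

#ifdef CONFIG_CRASH_DUMP
/* Definition exists only when crash dump support is configured in. */
int load_crashdump_segments(void)
{
	printf("loading crash dump segments\n");
	return 0;
}
#endif

int main(void)
{
	/* Folds to "if (0 && ...)" when disabled; the call is eliminated. */
	if (MY_IS_ENABLED_CRASH_DUMP && load_crashdump_segments() != 0)
		return 1;

	printf("kexec file load continues\n");
	return 0;
}

With CONFIG_CRASH_DUMP undefined, the guarded call is folded away before link time and the missing definition is never referenced; that is, in sketch form, what lets elf64_load() and arch_kexec_locate_mem_hole() keep IS_ENABLED() checks while load_crashdump_segments_ppc64() and the range helpers move under #ifdef.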