2020-07-26 19:37:25

by Hari Bathini

Subject: [RESEND PATCH v5 00/11] ppc64: enable kdump support for kexec_file_load syscall

Sorry! There was a gateway issue on my system while posting v5, due to
which some patches did not make it through. Resending...

This patch series enables kdump support for the kexec_file_load system
call (kexec -s -p) on PPC64. The changes are inspired by kexec-tools
code but heavily modified for kernel consumption.

The first patch adds a weak arch_kexec_locate_mem_hole() function that
architectures can override to suit their memory hole location needs.
There are some special regions on ppc64 which should be avoided while
loading the buffer, and there are multiple callers of kexec_add_buffer(),
making it complicated to maintain range sanity while still using the
generic lookup.

The second patch marks ppc64 specific code within arch/powerpc/kexec
and arch/powerpc/purgatory to make the subsequent code changes easy
to understand.

The next patch adds helper functions to set up the different memory
ranges needed for loading the kdump kernel, booting into it and
exporting the crashing kernel's elfcore.

The fourth patch overrides the arch_kexec_locate_mem_hole() function to
locate a memory hole for the kdump segments by accounting for the
special memory regions, referred to as excluded memory ranges, and sets
kbuf->mem when a suitable memory region is found.

The fifth patch moves walk_drmem_lmbs() out of the .init section, with
a few changes to reuse it for setting up the kdump kernel's usable memory
ranges. The next patch uses walk_drmem_lmbs() to look up the LMBs
and set the linux,drconf-usable-memory & linux,usable-memory properties
in order to restrict the kdump kernel's memory usage.

The seventh patch updates purgatory to set up r8 & r9 with the opal base
and opal entry addresses respectively, to aid kernels built with
CONFIG_PPC_EARLY_DEBUG_OPAL enabled. The next patch sets up the backup
region as a kexec segment while loading the kdump kernel and teaches
purgatory to copy data from source to destination.

Patch 09 builds the elfcore header for the running kernel & passes
the info to the kdump kernel via the "elfcorehdr=" parameter to export
it as the /proc/vmcore file. The next patch sets up the memory reserve
map for the kexec kernel and also claims kdump support, as all the
necessary changes are now in place.

The last patch fixes a lookup issue for `kexec -l -s` case when
memory is reserved for crashkernel.

Tested the changes successfully on P8 and P9 LPARs, a couple of
OpenPOWER boxes (one with secure boot enabled), a KVM guest and a
simulator.

v4 -> v5:
* Dropped patches 07/12 & 08/12 and updated purgatory to do everything
in assembly.
* Added a new patch (which was part of patch 08/12 in v4) to update
r8 & r9 registers with opal base & opal entry addresses as it is
expected on kernels built with CONFIG_PPC_EARLY_DEBUG_OPAL enabled.
* Fixed kexec load issue on KVM guest.

v3 -> v4:
* Updated get_node_path() function to be iterative instead of a recursive one.
* Added comment explaining why low memory is added to kdump kernel's usable
memory ranges though it doesn't fall in crashkernel region.
* Fixed stack_buf to be quadword aligned in accordance with ABI.
* Added missing of_node_put() in setup_purgatory_ppc64().
* Added a FIXME tag to indicate issue in adding opal/rtas regions to
core image.

v2 -> v3:
* Fixed TOC pointer calculation for purgatory by using section info
that has relocations applied.
* Fixed arch_kexec_locate_mem_hole() function to fall back to the generic
kexec_locate_mem_hole() lookup if the exclude ranges list is empty.
* Dropped check for backup_start in trampoline_64.S as purgatory()
function takes care of it anyway.

v1 -> v2:
* Introduced arch_kexec_locate_mem_hole() for override and dropped
weak arch_kexec_add_buffer().
* Addressed warnings reported by lkp.
* Added patch to address kexec load issue when memory is reserved
for crashkernel.
* Used the appropriate license header for the new files added.
* Added an option to merge ranges to minimize reallocations while
adding memory ranges.
* Dropped within_crashkernel parameter for add_opal_mem_range() &
add_rtas_mem_range() functions as it is not really needed.

---

Hari Bathini (11):
kexec_file: allow archs to handle special regions while locating memory hole
powerpc/kexec_file: mark PPC64 specific code
powerpc/kexec_file: add helper functions for getting memory ranges
ppc64/kexec_file: avoid stomping memory used by special regions
powerpc/drmem: make lmb walk a bit more flexible
ppc64/kexec_file: restrict memory usage of kdump kernel
ppc64/kexec_file: enable early kernel's OPAL calls
ppc64/kexec_file: setup backup region for kdump kernel
ppc64/kexec_file: prepare elfcore header for crashing kernel
ppc64/kexec_file: add appropriate regions for memory reserve map
ppc64/kexec_file: fix kexec load failure with lack of memory hole


arch/powerpc/include/asm/crashdump-ppc64.h | 19
arch/powerpc/include/asm/drmem.h | 9
arch/powerpc/include/asm/kexec.h | 29 +
arch/powerpc/include/asm/kexec_ranges.h | 25 +
arch/powerpc/kernel/prom.c | 13
arch/powerpc/kexec/Makefile | 2
arch/powerpc/kexec/elf_64.c | 36 +
arch/powerpc/kexec/file_load.c | 60 +
arch/powerpc/kexec/file_load_64.c | 1209 ++++++++++++++++++++++++++++
arch/powerpc/kexec/ranges.c | 417 ++++++++++
arch/powerpc/mm/drmem.c | 87 +-
arch/powerpc/mm/numa.c | 13
arch/powerpc/purgatory/Makefile | 4
arch/powerpc/purgatory/trampoline.S | 117 ---
arch/powerpc/purgatory/trampoline_64.S | 162 ++++
include/linux/kexec.h | 29 -
kernel/kexec_file.c | 16
17 files changed, 2052 insertions(+), 195 deletions(-)
create mode 100644 arch/powerpc/include/asm/crashdump-ppc64.h
create mode 100644 arch/powerpc/include/asm/kexec_ranges.h
create mode 100644 arch/powerpc/kexec/file_load_64.c
create mode 100644 arch/powerpc/kexec/ranges.c
delete mode 100644 arch/powerpc/purgatory/trampoline.S
create mode 100644 arch/powerpc/purgatory/trampoline_64.S



2020-07-26 19:38:22

by Hari Bathini

Subject: [RESEND PATCH v5 01/11] kexec_file: allow archs to handle special regions while locating memory hole

Some architectures may have special memory regions, within the given
memory range, which can't be used for the buffer in a kexec segment.
Implement a weak arch_kexec_locate_mem_hole() definition which arch code
may override to take care of such special regions while trying to locate
a memory hole.

Also, add the missing declarations for arch-overridable functions and
drop the __weak identifiers in the declarations to prevent non-weak
definitions from becoming weak.

Reported-by: kernel test robot <[email protected]>
[lkp: In v1, arch_kimage_file_post_load_cleanup() declaration was missing]
Signed-off-by: Hari Bathini <[email protected]>
Tested-by: Pingfan Liu <[email protected]>
Acked-by: Dave Young <[email protected]>
Reviewed-by: Thiago Jung Bauermann <[email protected]>
---

v4 -> v5:
* Unchanged.

v3 -> v4:
* Unchanged. Added Reviewed-by tag from Thiago.

v2 -> v3:
* Unchanged. Added Acked-by & Tested-by tags from Dave & Pingfan.

v1 -> v2:
* Introduced arch_kexec_locate_mem_hole() for override and dropped
weak arch_kexec_add_buffer().
* Dropped __weak identifier for arch overridable functions.
* Fixed the missing declaration for arch_kimage_file_post_load_cleanup()
reported by lkp. lkp report for reference:
- https://lore.kernel.org/patchwork/patch/1264418/


include/linux/kexec.h | 29 ++++++++++++++++++-----------
kernel/kexec_file.c | 16 ++++++++++++++--
2 files changed, 32 insertions(+), 13 deletions(-)

diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index ea67910ae6b7..9e93bef52968 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -183,17 +183,24 @@ int kexec_purgatory_get_set_symbol(struct kimage *image, const char *name,
bool get_value);
void *kexec_purgatory_get_symbol_addr(struct kimage *image, const char *name);

-int __weak arch_kexec_kernel_image_probe(struct kimage *image, void *buf,
- unsigned long buf_len);
-void * __weak arch_kexec_kernel_image_load(struct kimage *image);
-int __weak arch_kexec_apply_relocations_add(struct purgatory_info *pi,
- Elf_Shdr *section,
- const Elf_Shdr *relsec,
- const Elf_Shdr *symtab);
-int __weak arch_kexec_apply_relocations(struct purgatory_info *pi,
- Elf_Shdr *section,
- const Elf_Shdr *relsec,
- const Elf_Shdr *symtab);
+/* Architectures may override the below functions */
+int arch_kexec_kernel_image_probe(struct kimage *image, void *buf,
+ unsigned long buf_len);
+void *arch_kexec_kernel_image_load(struct kimage *image);
+int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
+ Elf_Shdr *section,
+ const Elf_Shdr *relsec,
+ const Elf_Shdr *symtab);
+int arch_kexec_apply_relocations(struct purgatory_info *pi,
+ Elf_Shdr *section,
+ const Elf_Shdr *relsec,
+ const Elf_Shdr *symtab);
+int arch_kimage_file_post_load_cleanup(struct kimage *image);
+#ifdef CONFIG_KEXEC_SIG
+int arch_kexec_kernel_verify_sig(struct kimage *image, void *buf,
+ unsigned long buf_len);
+#endif
+int arch_kexec_locate_mem_hole(struct kexec_buf *kbuf);

extern int kexec_add_buffer(struct kexec_buf *kbuf);
int kexec_locate_mem_hole(struct kexec_buf *kbuf);
diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c
index 09cc78df53c6..e89912d33a27 100644
--- a/kernel/kexec_file.c
+++ b/kernel/kexec_file.c
@@ -635,6 +635,19 @@ int kexec_locate_mem_hole(struct kexec_buf *kbuf)
return ret == 1 ? 0 : -EADDRNOTAVAIL;
}

+/**
+ * arch_kexec_locate_mem_hole - Find free memory to place the segments.
+ * @kbuf: Parameters for the memory search.
+ *
+ * On success, kbuf->mem will have the start address of the memory region found.
+ *
+ * Return: 0 on success, negative errno on error.
+ */
+int __weak arch_kexec_locate_mem_hole(struct kexec_buf *kbuf)
+{
+ return kexec_locate_mem_hole(kbuf);
+}
+
/**
* kexec_add_buffer - place a buffer in a kexec segment
* @kbuf: Buffer contents and memory parameters.
@@ -647,7 +660,6 @@ int kexec_locate_mem_hole(struct kexec_buf *kbuf)
*/
int kexec_add_buffer(struct kexec_buf *kbuf)
{
-
struct kexec_segment *ksegment;
int ret;

@@ -675,7 +687,7 @@ int kexec_add_buffer(struct kexec_buf *kbuf)
kbuf->buf_align = max(kbuf->buf_align, PAGE_SIZE);

/* Walk the RAM ranges and allocate a suitable range for the buffer */
- ret = kexec_locate_mem_hole(kbuf);
+ ret = arch_kexec_locate_mem_hole(kbuf);
if (ret)
return ret;



2020-07-26 19:38:27

by Hari Bathini

Subject: [RESEND PATCH v5 02/11] powerpc/kexec_file: mark PPC64 specific code

Some of the kexec_file_load code isn't PPC64 specific. Move PPC64
specific code from kexec/file_load.c to kexec/file_load_64.c. Also,
rename purgatory/trampoline.S to purgatory/trampoline_64.S in the
same spirit. No functional changes.

Signed-off-by: Hari Bathini <[email protected]>
Tested-by: Pingfan Liu <[email protected]>
Reviewed-by: Laurent Dufour <[email protected]>
Reviewed-by: Thiago Jung Bauermann <[email protected]>
---

v4 -> v5:
* Unchanged.

v3 -> v4:
* Moved common code back to set_new_fdt() from setup_new_fdt_ppc64()
function. Added Reviewed-by tags from Laurent & Thiago.

v2 -> v3:
* Unchanged. Added Tested-by tag from Pingfan.

v1 -> v2:
* No changes.


arch/powerpc/include/asm/kexec.h | 9 ++
arch/powerpc/kexec/Makefile | 2 -
arch/powerpc/kexec/elf_64.c | 7 +-
arch/powerpc/kexec/file_load.c | 19 +----
arch/powerpc/kexec/file_load_64.c | 87 ++++++++++++++++++++++++
arch/powerpc/purgatory/Makefile | 4 +
arch/powerpc/purgatory/trampoline.S | 117 --------------------------------
arch/powerpc/purgatory/trampoline_64.S | 117 ++++++++++++++++++++++++++++++++
8 files changed, 222 insertions(+), 140 deletions(-)
create mode 100644 arch/powerpc/kexec/file_load_64.c
delete mode 100644 arch/powerpc/purgatory/trampoline.S
create mode 100644 arch/powerpc/purgatory/trampoline_64.S

diff --git a/arch/powerpc/include/asm/kexec.h b/arch/powerpc/include/asm/kexec.h
index c68476818753..ac8fd4839171 100644
--- a/arch/powerpc/include/asm/kexec.h
+++ b/arch/powerpc/include/asm/kexec.h
@@ -116,6 +116,15 @@ int setup_new_fdt(const struct kimage *image, void *fdt,
unsigned long initrd_load_addr, unsigned long initrd_len,
const char *cmdline);
int delete_fdt_mem_rsv(void *fdt, unsigned long start, unsigned long size);
+
+#ifdef CONFIG_PPC64
+int setup_purgatory_ppc64(struct kimage *image, const void *slave_code,
+ const void *fdt, unsigned long kernel_load_addr,
+ unsigned long fdt_load_addr);
+int setup_new_fdt_ppc64(const struct kimage *image, void *fdt,
+ unsigned long initrd_load_addr,
+ unsigned long initrd_len, const char *cmdline);
+#endif /* CONFIG_PPC64 */
#endif /* CONFIG_KEXEC_FILE */

#else /* !CONFIG_KEXEC_CORE */
diff --git a/arch/powerpc/kexec/Makefile b/arch/powerpc/kexec/Makefile
index 86380c69f5ce..67c355329457 100644
--- a/arch/powerpc/kexec/Makefile
+++ b/arch/powerpc/kexec/Makefile
@@ -7,7 +7,7 @@ obj-y += core.o crash.o core_$(BITS).o

obj-$(CONFIG_PPC32) += relocate_32.o

-obj-$(CONFIG_KEXEC_FILE) += file_load.o elf_$(BITS).o
+obj-$(CONFIG_KEXEC_FILE) += file_load.o file_load_$(BITS).o elf_$(BITS).o

ifdef CONFIG_HAVE_IMA_KEXEC
ifdef CONFIG_IMA
diff --git a/arch/powerpc/kexec/elf_64.c b/arch/powerpc/kexec/elf_64.c
index 3072fd6dbe94..23ad04ccaf8e 100644
--- a/arch/powerpc/kexec/elf_64.c
+++ b/arch/powerpc/kexec/elf_64.c
@@ -88,7 +88,8 @@ static void *elf64_load(struct kimage *image, char *kernel_buf,
goto out;
}

- ret = setup_new_fdt(image, fdt, initrd_load_addr, initrd_len, cmdline);
+ ret = setup_new_fdt_ppc64(image, fdt, initrd_load_addr,
+ initrd_len, cmdline);
if (ret)
goto out;

@@ -107,8 +108,8 @@ static void *elf64_load(struct kimage *image, char *kernel_buf,
pr_debug("Loaded device tree at 0x%lx\n", fdt_load_addr);

slave_code = elf_info.buffer + elf_info.proghdrs[0].p_offset;
- ret = setup_purgatory(image, slave_code, fdt, kernel_load_addr,
- fdt_load_addr);
+ ret = setup_purgatory_ppc64(image, slave_code, fdt, kernel_load_addr,
+ fdt_load_addr);
if (ret)
pr_err("Error setting up the purgatory.\n");

diff --git a/arch/powerpc/kexec/file_load.c b/arch/powerpc/kexec/file_load.c
index 143c91724617..38439aba27d7 100644
--- a/arch/powerpc/kexec/file_load.c
+++ b/arch/powerpc/kexec/file_load.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
- * ppc64 code to implement the kexec_file_load syscall
+ * powerpc code to implement the kexec_file_load syscall
*
* Copyright (C) 2004 Adam Litke ([email protected])
* Copyright (C) 2004 IBM Corp.
@@ -20,22 +20,7 @@
#include <linux/libfdt.h>
#include <asm/ima.h>

-#define SLAVE_CODE_SIZE 256
-
-const struct kexec_file_ops * const kexec_file_loaders[] = {
- &kexec_elf64_ops,
- NULL
-};
-
-int arch_kexec_kernel_image_probe(struct kimage *image, void *buf,
- unsigned long buf_len)
-{
- /* We don't support crash kernels yet. */
- if (image->type == KEXEC_TYPE_CRASH)
- return -EOPNOTSUPP;
-
- return kexec_image_probe_default(image, buf, buf_len);
-}
+#define SLAVE_CODE_SIZE 256 /* First 0x100 bytes */

/**
* setup_purgatory - initialize the purgatory's global variables
diff --git a/arch/powerpc/kexec/file_load_64.c b/arch/powerpc/kexec/file_load_64.c
new file mode 100644
index 000000000000..41fe8b6c72d6
--- /dev/null
+++ b/arch/powerpc/kexec/file_load_64.c
@@ -0,0 +1,87 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * ppc64 code to implement the kexec_file_load syscall
+ *
+ * Copyright (C) 2004 Adam Litke ([email protected])
+ * Copyright (C) 2004 IBM Corp.
+ * Copyright (C) 2004,2005 Milton D Miller II, IBM Corporation
+ * Copyright (C) 2005 R Sharada ([email protected])
+ * Copyright (C) 2006 Mohan Kumar M ([email protected])
+ * Copyright (C) 2020 IBM Corporation
+ *
+ * Based on kexec-tools' kexec-ppc64.c, kexec-elf-rel-ppc64.c, fs2dt.c.
+ * Heavily modified for the kernel by
+ * Hari Bathini <[email protected]>.
+ */
+
+#include <linux/kexec.h>
+#include <linux/of_fdt.h>
+#include <linux/libfdt.h>
+
+const struct kexec_file_ops * const kexec_file_loaders[] = {
+ &kexec_elf64_ops,
+ NULL
+};
+
+/**
+ * setup_purgatory_ppc64 - initialize PPC64 specific purgatory's global
+ * variables and call setup_purgatory() to initialize
+ * common global variable.
+ * @image: kexec image.
+ * @slave_code: Slave code for the purgatory.
+ * @fdt: Flattened device tree for the next kernel.
+ * @kernel_load_addr: Address where the kernel is loaded.
+ * @fdt_load_addr: Address where the flattened device tree is loaded.
+ *
+ * Returns 0 on success, negative errno on error.
+ */
+int setup_purgatory_ppc64(struct kimage *image, const void *slave_code,
+ const void *fdt, unsigned long kernel_load_addr,
+ unsigned long fdt_load_addr)
+{
+ int ret;
+
+ ret = setup_purgatory(image, slave_code, fdt, kernel_load_addr,
+ fdt_load_addr);
+ if (ret)
+ pr_err("Failed to setup purgatory symbols");
+ return ret;
+}
+
+/**
+ * setup_new_fdt_ppc64 - Update the flattend device-tree of the kernel
+ * being loaded.
+ * @image: kexec image being loaded.
+ * @fdt: Flattened device tree for the next kernel.
+ * @initrd_load_addr: Address where the next initrd will be loaded.
+ * @initrd_len: Size of the next initrd, or 0 if there will be none.
+ * @cmdline: Command line for the next kernel, or NULL if there will
+ * be none.
+ *
+ * Returns 0 on success, negative errno on error.
+ */
+int setup_new_fdt_ppc64(const struct kimage *image, void *fdt,
+ unsigned long initrd_load_addr,
+ unsigned long initrd_len, const char *cmdline)
+{
+ return setup_new_fdt(image, fdt, initrd_load_addr, initrd_len, cmdline);
+}
+
+/**
+ * arch_kexec_kernel_image_probe - Does additional handling needed to setup
+ * kexec segments.
+ * @image: kexec image being loaded.
+ * @buf: Buffer pointing to elf data.
+ * @buf_len: Length of the buffer.
+ *
+ * Returns 0 on success, negative errno on error.
+ */
+int arch_kexec_kernel_image_probe(struct kimage *image, void *buf,
+ unsigned long buf_len)
+{
+ /* We don't support crash kernels yet. */
+ if (image->type == KEXEC_TYPE_CRASH)
+ return -EOPNOTSUPP;
+
+ return kexec_image_probe_default(image, buf, buf_len);
+}
diff --git a/arch/powerpc/purgatory/Makefile b/arch/powerpc/purgatory/Makefile
index 7c6d8b14f440..348f59581052 100644
--- a/arch/powerpc/purgatory/Makefile
+++ b/arch/powerpc/purgatory/Makefile
@@ -2,11 +2,11 @@

KASAN_SANITIZE := n

-targets += trampoline.o purgatory.ro kexec-purgatory.c
+targets += trampoline_$(BITS).o purgatory.ro kexec-purgatory.c

LDFLAGS_purgatory.ro := -e purgatory_start -r --no-undefined

-$(obj)/purgatory.ro: $(obj)/trampoline.o FORCE
+$(obj)/purgatory.ro: $(obj)/trampoline_$(BITS).o FORCE
$(call if_changed,ld)

quiet_cmd_bin2c = BIN2C $@
diff --git a/arch/powerpc/purgatory/trampoline.S b/arch/powerpc/purgatory/trampoline.S
deleted file mode 100644
index a5a83c3f53e6..000000000000
--- a/arch/powerpc/purgatory/trampoline.S
+++ /dev/null
@@ -1,117 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * kexec trampoline
- *
- * Based on code taken from kexec-tools and kexec-lite.
- *
- * Copyright (C) 2004 - 2005, Milton D Miller II, IBM Corporation
- * Copyright (C) 2006, Mohan Kumar M, IBM Corporation
- * Copyright (C) 2013, Anton Blanchard, IBM Corporation
- */
-
-#include <asm/asm-compat.h>
-
- .machine ppc64
- .balign 256
- .globl purgatory_start
-purgatory_start:
- b master
-
- /* ABI: possible run_at_load flag at 0x5c */
- .org purgatory_start + 0x5c
- .globl run_at_load
-run_at_load:
- .long 0
- .size run_at_load, . - run_at_load
-
- /* ABI: slaves start at 60 with r3=phys */
- .org purgatory_start + 0x60
-slave:
- b .
- /* ABI: end of copied region */
- .org purgatory_start + 0x100
- .size purgatory_start, . - purgatory_start
-
-/*
- * The above 0x100 bytes at purgatory_start are replaced with the
- * code from the kernel (or next stage) by setup_purgatory().
- */
-
-master:
- or %r1,%r1,%r1 /* low priority to let other threads catchup */
- isync
- mr %r17,%r3 /* save cpu id to r17 */
- mr %r15,%r4 /* save physical address in reg15 */
-
- or %r3,%r3,%r3 /* ok now to high priority, lets boot */
- lis %r6,0x1
- mtctr %r6 /* delay a bit for slaves to catch up */
- bdnz . /* before we overwrite 0-100 again */
-
- bl 0f /* Work out where we're running */
-0: mflr %r18
-
- /* load device-tree address */
- ld %r3, (dt_offset - 0b)(%r18)
- mr %r16,%r3 /* save dt address in reg16 */
- li %r4,20
- LWZX_BE %r6,%r3,%r4 /* fetch __be32 version number at byte 20 */
- cmpwi %cr0,%r6,2 /* v2 or later? */
- blt 1f
- li %r4,28
- STWX_BE %r17,%r3,%r4 /* Store my cpu as __be32 at byte 28 */
-1:
- /* load the kernel address */
- ld %r4,(kernel - 0b)(%r18)
-
- /* load the run_at_load flag */
- /* possibly patched by kexec */
- ld %r6,(run_at_load - 0b)(%r18)
- /* and patch it into the kernel */
- stw %r6,(0x5c)(%r4)
-
- mr %r3,%r16 /* restore dt address */
-
- li %r5,0 /* r5 will be 0 for kernel */
-
- mfmsr %r11
- andi. %r10,%r11,1 /* test MSR_LE */
- bne .Little_endian
-
- mtctr %r4 /* prepare branch to */
- bctr /* start kernel */
-
-.Little_endian:
- mtsrr0 %r4 /* prepare branch to */
-
- clrrdi %r11,%r11,1 /* clear MSR_LE */
- mtsrr1 %r11
-
- rfid /* update MSR and start kernel */
-
-
- .balign 8
- .globl kernel
-kernel:
- .8byte 0x0
- .size kernel, . - kernel
-
- .balign 8
- .globl dt_offset
-dt_offset:
- .8byte 0x0
- .size dt_offset, . - dt_offset
-
-
- .data
- .balign 8
-.globl purgatory_sha256_digest
-purgatory_sha256_digest:
- .skip 32
- .size purgatory_sha256_digest, . - purgatory_sha256_digest
-
- .balign 8
-.globl purgatory_sha_regions
-purgatory_sha_regions:
- .skip 8 * 2 * 16
- .size purgatory_sha_regions, . - purgatory_sha_regions
diff --git a/arch/powerpc/purgatory/trampoline_64.S b/arch/powerpc/purgatory/trampoline_64.S
new file mode 100644
index 000000000000..a5a83c3f53e6
--- /dev/null
+++ b/arch/powerpc/purgatory/trampoline_64.S
@@ -0,0 +1,117 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * kexec trampoline
+ *
+ * Based on code taken from kexec-tools and kexec-lite.
+ *
+ * Copyright (C) 2004 - 2005, Milton D Miller II, IBM Corporation
+ * Copyright (C) 2006, Mohan Kumar M, IBM Corporation
+ * Copyright (C) 2013, Anton Blanchard, IBM Corporation
+ */
+
+#include <asm/asm-compat.h>
+
+ .machine ppc64
+ .balign 256
+ .globl purgatory_start
+purgatory_start:
+ b master
+
+ /* ABI: possible run_at_load flag at 0x5c */
+ .org purgatory_start + 0x5c
+ .globl run_at_load
+run_at_load:
+ .long 0
+ .size run_at_load, . - run_at_load
+
+ /* ABI: slaves start at 60 with r3=phys */
+ .org purgatory_start + 0x60
+slave:
+ b .
+ /* ABI: end of copied region */
+ .org purgatory_start + 0x100
+ .size purgatory_start, . - purgatory_start
+
+/*
+ * The above 0x100 bytes at purgatory_start are replaced with the
+ * code from the kernel (or next stage) by setup_purgatory().
+ */
+
+master:
+ or %r1,%r1,%r1 /* low priority to let other threads catchup */
+ isync
+ mr %r17,%r3 /* save cpu id to r17 */
+ mr %r15,%r4 /* save physical address in reg15 */
+
+ or %r3,%r3,%r3 /* ok now to high priority, lets boot */
+ lis %r6,0x1
+ mtctr %r6 /* delay a bit for slaves to catch up */
+ bdnz . /* before we overwrite 0-100 again */
+
+ bl 0f /* Work out where we're running */
+0: mflr %r18
+
+ /* load device-tree address */
+ ld %r3, (dt_offset - 0b)(%r18)
+ mr %r16,%r3 /* save dt address in reg16 */
+ li %r4,20
+ LWZX_BE %r6,%r3,%r4 /* fetch __be32 version number at byte 20 */
+ cmpwi %cr0,%r6,2 /* v2 or later? */
+ blt 1f
+ li %r4,28
+ STWX_BE %r17,%r3,%r4 /* Store my cpu as __be32 at byte 28 */
+1:
+ /* load the kernel address */
+ ld %r4,(kernel - 0b)(%r18)
+
+ /* load the run_at_load flag */
+ /* possibly patched by kexec */
+ ld %r6,(run_at_load - 0b)(%r18)
+ /* and patch it into the kernel */
+ stw %r6,(0x5c)(%r4)
+
+ mr %r3,%r16 /* restore dt address */
+
+ li %r5,0 /* r5 will be 0 for kernel */
+
+ mfmsr %r11
+ andi. %r10,%r11,1 /* test MSR_LE */
+ bne .Little_endian
+
+ mtctr %r4 /* prepare branch to */
+ bctr /* start kernel */
+
+.Little_endian:
+ mtsrr0 %r4 /* prepare branch to */
+
+ clrrdi %r11,%r11,1 /* clear MSR_LE */
+ mtsrr1 %r11
+
+ rfid /* update MSR and start kernel */
+
+
+ .balign 8
+ .globl kernel
+kernel:
+ .8byte 0x0
+ .size kernel, . - kernel
+
+ .balign 8
+ .globl dt_offset
+dt_offset:
+ .8byte 0x0
+ .size dt_offset, . - dt_offset
+
+
+ .data
+ .balign 8
+.globl purgatory_sha256_digest
+purgatory_sha256_digest:
+ .skip 32
+ .size purgatory_sha256_digest, . - purgatory_sha256_digest
+
+ .balign 8
+.globl purgatory_sha_regions
+purgatory_sha_regions:
+ .skip 8 * 2 * 16
+ .size purgatory_sha_regions, . - purgatory_sha_regions


2020-07-26 19:39:54

by Hari Bathini

Subject: [RESEND PATCH v5 04/11] ppc64/kexec_file: avoid stomping memory used by special regions

The crashkernel region could overlap with special memory regions like
opal, rtas, tce-table & such. These regions are referred to as exclude
memory ranges. Set up these ranges during image probe in order to
avoid them while finding the buffer for the different kdump segments.
Override arch_kexec_locate_mem_hole() to locate a memory hole taking
these ranges into account.

Signed-off-by: Hari Bathini <[email protected]>
Reviewed-by: Thiago Jung Bauermann <[email protected]>
---

v4 -> v5:
* Unchanged. Added Reviewed-by tag from Thiago.

v3 -> v4:
* Dropped KDUMP_BUF_MIN & KDUMP_BUF_MAX macros and fixed off-by-one error
in arch_locate_mem_hole() helper routines.

v2 -> v3:
* If there are no exclude ranges, the right thing to do is to fall
back to the default kexec_locate_mem_hole() implementation instead of
returning 0. Fixed that.

v1 -> v2:
* Did arch_kexec_locate_mem_hole() override to handle special regions.
* Ensured holes in the memory are accounted for while locating mem hole.
* Updated add_rtas_mem_range() & add_opal_mem_range() callsites based on
the new prototype for these functions.


arch/powerpc/include/asm/kexec.h | 7 +
arch/powerpc/kexec/elf_64.c | 8 +
arch/powerpc/kexec/file_load_64.c | 337 +++++++++++++++++++++++++++++++++++++
3 files changed, 348 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/include/asm/kexec.h b/arch/powerpc/include/asm/kexec.h
index ac8fd4839171..835dc92e091c 100644
--- a/arch/powerpc/include/asm/kexec.h
+++ b/arch/powerpc/include/asm/kexec.h
@@ -100,14 +100,16 @@ void relocate_new_kernel(unsigned long indirection_page, unsigned long reboot_co
#ifdef CONFIG_KEXEC_FILE
extern const struct kexec_file_ops kexec_elf64_ops;

-#ifdef CONFIG_IMA_KEXEC
#define ARCH_HAS_KIMAGE_ARCH

struct kimage_arch {
+ struct crash_mem *exclude_ranges;
+
+#ifdef CONFIG_IMA_KEXEC
phys_addr_t ima_buffer_addr;
size_t ima_buffer_size;
-};
#endif
+};

int setup_purgatory(struct kimage *image, const void *slave_code,
const void *fdt, unsigned long kernel_load_addr,
@@ -125,6 +127,7 @@ int setup_new_fdt_ppc64(const struct kimage *image, void *fdt,
unsigned long initrd_load_addr,
unsigned long initrd_len, const char *cmdline);
#endif /* CONFIG_PPC64 */
+
#endif /* CONFIG_KEXEC_FILE */

#else /* !CONFIG_KEXEC_CORE */
diff --git a/arch/powerpc/kexec/elf_64.c b/arch/powerpc/kexec/elf_64.c
index 23ad04ccaf8e..64c15a5a280b 100644
--- a/arch/powerpc/kexec/elf_64.c
+++ b/arch/powerpc/kexec/elf_64.c
@@ -46,6 +46,14 @@ static void *elf64_load(struct kimage *image, char *kernel_buf,
if (ret)
goto out;

+ if (image->type == KEXEC_TYPE_CRASH) {
+ /* min & max buffer values for kdump case */
+ kbuf.buf_min = pbuf.buf_min = crashk_res.start;
+ kbuf.buf_max = pbuf.buf_max =
+ ((crashk_res.end < ppc64_rma_size) ?
+ crashk_res.end : (ppc64_rma_size - 1));
+ }
+
ret = kexec_elf_load(image, &ehdr, &elf_info, &kbuf, &kernel_load_addr);
if (ret)
goto out;
diff --git a/arch/powerpc/kexec/file_load_64.c b/arch/powerpc/kexec/file_load_64.c
index 41fe8b6c72d6..2df6f4273ddd 100644
--- a/arch/powerpc/kexec/file_load_64.c
+++ b/arch/powerpc/kexec/file_load_64.c
@@ -17,12 +17,262 @@
#include <linux/kexec.h>
#include <linux/of_fdt.h>
#include <linux/libfdt.h>
+#include <linux/memblock.h>
+#include <asm/kexec_ranges.h>

const struct kexec_file_ops * const kexec_file_loaders[] = {
&kexec_elf64_ops,
NULL
};

+/**
+ * get_exclude_memory_ranges - Get exclude memory ranges. This list includes
+ * regions like opal/rtas, tce-table, initrd,
+ * kernel, htab which should be avoided while
+ * setting up kexec load segments.
+ * @mem_ranges: Range list to add the memory ranges to.
+ *
+ * Returns 0 on success, negative errno on error.
+ */
+static int get_exclude_memory_ranges(struct crash_mem **mem_ranges)
+{
+ int ret;
+
+ ret = add_tce_mem_ranges(mem_ranges);
+ if (ret)
+ goto out;
+
+ ret = add_initrd_mem_range(mem_ranges);
+ if (ret)
+ goto out;
+
+ ret = add_htab_mem_range(mem_ranges);
+ if (ret)
+ goto out;
+
+ ret = add_kernel_mem_range(mem_ranges);
+ if (ret)
+ goto out;
+
+ ret = add_rtas_mem_range(mem_ranges);
+ if (ret)
+ goto out;
+
+ ret = add_opal_mem_range(mem_ranges);
+ if (ret)
+ goto out;
+
+ ret = add_reserved_ranges(mem_ranges);
+ if (ret)
+ goto out;
+
+ /* exclude memory ranges should be sorted for easy lookup */
+ sort_memory_ranges(*mem_ranges, true);
+out:
+ if (ret)
+ pr_err("Failed to setup exclude memory ranges\n");
+ return ret;
+}
+
+/**
+ * __locate_mem_hole_top_down - Looks top down for a large enough memory hole
+ * in the memory regions between buf_min & buf_max
+ * for the buffer. If found, sets kbuf->mem.
+ * @kbuf: Buffer contents and memory parameters.
+ * @buf_min: Minimum address for the buffer.
+ * @buf_max: Maximum address for the buffer.
+ *
+ * Returns 0 on success, negative errno on error.
+ */
+static int __locate_mem_hole_top_down(struct kexec_buf *kbuf,
+ u64 buf_min, u64 buf_max)
+{
+ int ret = -EADDRNOTAVAIL;
+ phys_addr_t start, end;
+ u64 i;
+
+ for_each_mem_range_rev(i, &memblock.memory, NULL, NUMA_NO_NODE,
+ MEMBLOCK_NONE, &start, &end, NULL) {
+ /*
+ * memblock uses [start, end) convention while it is
+ * [start, end] here. Fix the off-by-one to have the
+ * same convention.
+ */
+ end -= 1;
+
+ if (start > buf_max)
+ continue;
+
+ /* Memory hole not found */
+ if (end < buf_min)
+ break;
+
+ /* Adjust memory region based on the given range */
+ if (start < buf_min)
+ start = buf_min;
+ if (end > buf_max)
+ end = buf_max;
+
+ start = ALIGN(start, kbuf->buf_align);
+ if (start < end && (end - start + 1) >= kbuf->memsz) {
+ /* Suitable memory range found. Set kbuf->mem */
+ kbuf->mem = ALIGN_DOWN(end - kbuf->memsz + 1,
+ kbuf->buf_align);
+ ret = 0;
+ break;
+ }
+ }
+
+ return ret;
+}
+
+/**
+ * locate_mem_hole_top_down_ppc64 - Skip special memory regions to find a
+ * suitable buffer with top down approach.
+ * @kbuf: Buffer contents and memory parameters.
+ * @buf_min: Minimum address for the buffer.
+ * @buf_max: Maximum address for the buffer.
+ * @emem: Exclude memory ranges.
+ *
+ * Returns 0 on success, negative errno on error.
+ */
+static int locate_mem_hole_top_down_ppc64(struct kexec_buf *kbuf,
+ u64 buf_min, u64 buf_max,
+ const struct crash_mem *emem)
+{
+ int i, ret = 0, err = -EADDRNOTAVAIL;
+ u64 start, end, tmin, tmax;
+
+ tmax = buf_max;
+ for (i = (emem->nr_ranges - 1); i >= 0; i--) {
+ start = emem->ranges[i].start;
+ end = emem->ranges[i].end;
+
+ if (start > tmax)
+ continue;
+
+ if (end < tmax) {
+ tmin = (end < buf_min ? buf_min : end + 1);
+ ret = __locate_mem_hole_top_down(kbuf, tmin, tmax);
+ if (!ret)
+ return 0;
+ }
+
+ tmax = start - 1;
+
+ if (tmax < buf_min) {
+ ret = err;
+ break;
+ }
+ ret = 0;
+ }
+
+ if (!ret) {
+ tmin = buf_min;
+ ret = __locate_mem_hole_top_down(kbuf, tmin, tmax);
+ }
+ return ret;
+}
+
+/**
+ * __locate_mem_hole_bottom_up - Looks bottom up for a large enough memory hole
+ * in the memory regions between buf_min & buf_max
+ * for the buffer. If found, sets kbuf->mem.
+ * @kbuf: Buffer contents and memory parameters.
+ * @buf_min: Minimum address for the buffer.
+ * @buf_max: Maximum address for the buffer.
+ *
+ * Returns 0 on success, negative errno on error.
+ */
+static int __locate_mem_hole_bottom_up(struct kexec_buf *kbuf,
+ u64 buf_min, u64 buf_max)
+{
+ int ret = -EADDRNOTAVAIL;
+ phys_addr_t start, end;
+ u64 i;
+
+ for_each_mem_range(i, &memblock.memory, NULL, NUMA_NO_NODE,
+ MEMBLOCK_NONE, &start, &end, NULL) {
+ /*
+ * memblock uses [start, end) convention while it is
+ * [start, end] here. Fix the off-by-one to have the
+ * same convention.
+ */
+ end -= 1;
+
+ if (end < buf_min)
+ continue;
+
+ /* Memory hole not found */
+ if (start > buf_max)
+ break;
+
+ /* Adjust memory region based on the given range */
+ if (start < buf_min)
+ start = buf_min;
+ if (end > buf_max)
+ end = buf_max;
+
+ start = ALIGN(start, kbuf->buf_align);
+ if (start < end && (end - start + 1) >= kbuf->memsz) {
+ /* Suitable memory range found. Set kbuf->mem */
+ kbuf->mem = start;
+ ret = 0;
+ break;
+ }
+ }
+
+ return ret;
+}
+
+/**
+ * locate_mem_hole_bottom_up_ppc64 - Skip special memory regions to find a
+ * suitable buffer with bottom up approach.
+ * @kbuf: Buffer contents and memory parameters.
+ * @buf_min: Minimum address for the buffer.
+ * @buf_max: Maximum address for the buffer.
+ * @emem: Exclude memory ranges.
+ *
+ * Returns 0 on success, negative errno on error.
+ */
+static int locate_mem_hole_bottom_up_ppc64(struct kexec_buf *kbuf,
+ u64 buf_min, u64 buf_max,
+ const struct crash_mem *emem)
+{
+ int i, ret = 0, err = -EADDRNOTAVAIL;
+ u64 start, end, tmin, tmax;
+
+ tmin = buf_min;
+ for (i = 0; i < emem->nr_ranges; i++) {
+ start = emem->ranges[i].start;
+ end = emem->ranges[i].end;
+
+ if (end < tmin)
+ continue;
+
+ if (start > tmin) {
+ tmax = (start > buf_max ? buf_max : start - 1);
+ ret = __locate_mem_hole_bottom_up(kbuf, tmin, tmax);
+ if (!ret)
+ return 0;
+ }
+
+ tmin = end + 1;
+
+ if (tmin > buf_max) {
+ ret = err;
+ break;
+ }
+ ret = 0;
+ }
+
+ if (!ret) {
+ tmax = buf_max;
+ ret = __locate_mem_hole_bottom_up(kbuf, tmin, tmax);
+ }
+ return ret;
+}
+
/**
* setup_purgatory_ppc64 - initialize PPC64 specific purgatory's global
* variables and call setup_purgatory() to initialize
@@ -67,6 +317,67 @@ int setup_new_fdt_ppc64(const struct kimage *image, void *fdt,
return setup_new_fdt(image, fdt, initrd_load_addr, initrd_len, cmdline);
}

+/**
+ * arch_kexec_locate_mem_hole - Skip special memory regions like rtas, opal,
+ * tce-table, reserved-ranges & such (exclude
+ * memory ranges) as they can't be used for kexec
+ * segment buffer. Sets kbuf->mem when a suitable
+ * memory hole is found.
+ * @kbuf: Buffer contents and memory parameters.
+ *
+ * Assumes minimum of PAGE_SIZE alignment for kbuf->memsz & kbuf->buf_align.
+ *
+ * Returns 0 on success, negative errno on error.
+ */
+int arch_kexec_locate_mem_hole(struct kexec_buf *kbuf)
+{
+ struct crash_mem **emem;
+ u64 buf_min, buf_max;
+ int ret;
+
+ /*
+ * Use the generic kexec_locate_mem_hole for regular
+ * kexec_file_load syscall
+ */
+ if (kbuf->image->type != KEXEC_TYPE_CRASH)
+ return kexec_locate_mem_hole(kbuf);
+
+ /* Look up the exclude ranges list while locating the memory hole */
+ emem = &(kbuf->image->arch.exclude_ranges);
+ if (!(*emem) || ((*emem)->nr_ranges == 0)) {
+ pr_warn("No exclude range list. Using the default locate mem hole method\n");
+ return kexec_locate_mem_hole(kbuf);
+ }
+
+ /* Segments for kdump kernel should be within crashkernel region */
+ buf_min = (kbuf->buf_min < crashk_res.start ?
+ crashk_res.start : kbuf->buf_min);
+ buf_max = (kbuf->buf_max > crashk_res.end ?
+ crashk_res.end : kbuf->buf_max);
+
+ if (buf_min > buf_max) {
+ pr_err("Invalid buffer min and/or max values\n");
+ return -EINVAL;
+ }
+
+ if (kbuf->top_down)
+ ret = locate_mem_hole_top_down_ppc64(kbuf, buf_min, buf_max,
+ *emem);
+ else
+ ret = locate_mem_hole_bottom_up_ppc64(kbuf, buf_min, buf_max,
+ *emem);
+
+ /* Add the buffer allocated to the exclude list for the next lookup */
+ if (!ret) {
+ add_mem_range(emem, kbuf->mem, kbuf->memsz);
+ sort_memory_ranges(*emem, true);
+ } else {
+ pr_err("Failed to locate memory buffer of size %lu\n",
+ kbuf->memsz);
+ }
+ return ret;
+}
+
/**
* arch_kexec_kernel_image_probe - Does additional handling needed to setup
* kexec segments.
@@ -79,9 +390,31 @@ int setup_new_fdt_ppc64(const struct kimage *image, void *fdt,
int arch_kexec_kernel_image_probe(struct kimage *image, void *buf,
unsigned long buf_len)
{
- /* We don't support crash kernels yet. */
- if (image->type == KEXEC_TYPE_CRASH)
+ if (image->type == KEXEC_TYPE_CRASH) {
+ int ret;
+
+ /* Get exclude memory ranges needed for setting up kdump segments */
+ ret = get_exclude_memory_ranges(&(image->arch.exclude_ranges));
+ if (ret)
+ pr_err("Failed to setup exclude memory ranges for buffer lookup\n");
+ /* Return this until all changes for panic kernel are in */
return -EOPNOTSUPP;
+ }

return kexec_image_probe_default(image, buf, buf_len);
}
+
+/**
+ * arch_kimage_file_post_load_cleanup - Frees up all the allocations done
+ * while loading the image.
+ * @image: kexec image being loaded.
+ *
+ * Returns 0 on success, negative errno on error.
+ */
+int arch_kimage_file_post_load_cleanup(struct kimage *image)
+{
+ kfree(image->arch.exclude_ranges);
+ image->arch.exclude_ranges = NULL;
+
+ return kexec_image_post_load_cleanup_default(image);
+}


2020-07-26 19:40:14

by Hari Bathini

Subject: [RESEND PATCH v5 05/11] powerpc/drmem: make lmb walk a bit more flexible

Currently, numa & prom are the users of the drmem LMB walk code. Loading
kdump with kexec_file also needs to walk the drmem LMBs to set up the
usable memory ranges for the kdump kernel. But there are a couple of
issues in using the code as is. One, the walk_drmem_lmbs() code is
currently built into the .init section, while kexec_file needs it later.
Two, there is no way to pass data to the callback function for
processing and/or erroring out on certain conditions.

Fix that by moving the drmem LMB walk code out of the .init section,
adding a way to pass data to the callback function, and bailing out
when the callback function returns an error.
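The reworked callback contract (thread a `void *` cookie through the walk, return nonzero to stop) can be illustrated with a minimal userspace model. Everything below — `walk_lmbs`, `count_assigned`, the stripped-down `struct drmem_lmb` — is hypothetical scaffolding for illustration only; the real walker additionally threads the `const __be32 **usm` pointer, which is omitted here:

```c
#include <assert.h>

/* Simplified stand-in for the kernel's struct drmem_lmb (illustration only). */
struct drmem_lmb {
	unsigned long long base_addr;
	unsigned int flags;
};

#define DRCONF_MEM_ASSIGNED 0x00000008

/*
 * Walk an array of LMBs, passing @data through to the callback and
 * bailing out on the first error, mirroring the reworked
 * walk_drmem_lmbs() contract described above.
 */
static int walk_lmbs(struct drmem_lmb *lmbs, int n, void *data,
		     int (*func)(struct drmem_lmb *, void *))
{
	int i, ret = 0;

	for (i = 0; i < n; i++) {
		ret = func(&lmbs[i], data);
		if (ret)
			break;	/* nonzero return stops the walk */
	}
	return ret;
}

/* Example callback: count assigned LMBs via the new @data argument. */
static int count_assigned(struct drmem_lmb *lmb, void *data)
{
	if (lmb->flags & DRCONF_MEM_ASSIGNED)
		(*(int *)data)++;
	return 0;
}
```

With the old `void (*func)(struct drmem_lmb *, const __be32 **)` signature, a caller had to smuggle state through globals and had no way to abort the walk; the `data` cookie and `int` return fix both.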

Signed-off-by: Hari Bathini <[email protected]>
Tested-by: Pingfan Liu <[email protected]>
Reviewed-by: Thiago Jung Bauermann <[email protected]>
---

v4 -> v5:
* Unchanged.

v3 -> v4:
* Unchanged. Added Reviewed-by tag from Thiago.

v2 -> v3:
* Unchanged. Added Tested-by tag from Pingfan.

v1 -> v2:
* No changes.


arch/powerpc/include/asm/drmem.h | 9 ++--
arch/powerpc/kernel/prom.c | 13 +++---
arch/powerpc/mm/drmem.c | 87 +++++++++++++++++++++++++-------------
arch/powerpc/mm/numa.c | 13 +++---
4 files changed, 78 insertions(+), 44 deletions(-)

diff --git a/arch/powerpc/include/asm/drmem.h b/arch/powerpc/include/asm/drmem.h
index 414d209f45bb..17ccc6474ab6 100644
--- a/arch/powerpc/include/asm/drmem.h
+++ b/arch/powerpc/include/asm/drmem.h
@@ -90,13 +90,14 @@ static inline bool drmem_lmb_reserved(struct drmem_lmb *lmb)
}

u64 drmem_lmb_memory_max(void);
-void __init walk_drmem_lmbs(struct device_node *dn,
- void (*func)(struct drmem_lmb *, const __be32 **));
+int walk_drmem_lmbs(struct device_node *dn, void *data,
+ int (*func)(struct drmem_lmb *, const __be32 **, void *));
int drmem_update_dt(void);

#ifdef CONFIG_PPC_PSERIES
-void __init walk_drmem_lmbs_early(unsigned long node,
- void (*func)(struct drmem_lmb *, const __be32 **));
+int __init
+walk_drmem_lmbs_early(unsigned long node, void *data,
+ int (*func)(struct drmem_lmb *, const __be32 **, void *));
#endif

static inline void invalidate_lmb_associativity_index(struct drmem_lmb *lmb)
diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
index 9cc49f265c86..7df78de378b0 100644
--- a/arch/powerpc/kernel/prom.c
+++ b/arch/powerpc/kernel/prom.c
@@ -468,8 +468,9 @@ static bool validate_mem_limit(u64 base, u64 *size)
* This contains a list of memory blocks along with NUMA affinity
* information.
*/
-static void __init early_init_drmem_lmb(struct drmem_lmb *lmb,
- const __be32 **usm)
+static int __init early_init_drmem_lmb(struct drmem_lmb *lmb,
+ const __be32 **usm,
+ void *data)
{
u64 base, size;
int is_kexec_kdump = 0, rngs;
@@ -484,7 +485,7 @@ static void __init early_init_drmem_lmb(struct drmem_lmb *lmb,
*/
if ((lmb->flags & DRCONF_MEM_RESERVED) ||
!(lmb->flags & DRCONF_MEM_ASSIGNED))
- return;
+ return 0;

if (*usm)
is_kexec_kdump = 1;
@@ -499,7 +500,7 @@ static void __init early_init_drmem_lmb(struct drmem_lmb *lmb,
*/
rngs = dt_mem_next_cell(dt_root_size_cells, usm);
if (!rngs) /* there are no (base, size) duple */
- return;
+ return 0;
}

do {
@@ -524,6 +525,8 @@ static void __init early_init_drmem_lmb(struct drmem_lmb *lmb,
if (lmb->flags & DRCONF_MEM_HOTREMOVABLE)
memblock_mark_hotplug(base, size);
} while (--rngs);
+
+ return 0;
}
#endif /* CONFIG_PPC_PSERIES */

@@ -534,7 +537,7 @@ static int __init early_init_dt_scan_memory_ppc(unsigned long node,
#ifdef CONFIG_PPC_PSERIES
if (depth == 1 &&
strcmp(uname, "ibm,dynamic-reconfiguration-memory") == 0) {
- walk_drmem_lmbs_early(node, early_init_drmem_lmb);
+ walk_drmem_lmbs_early(node, NULL, early_init_drmem_lmb);
return 0;
}
#endif
diff --git a/arch/powerpc/mm/drmem.c b/arch/powerpc/mm/drmem.c
index 59327cefbc6a..b2eeea39684c 100644
--- a/arch/powerpc/mm/drmem.c
+++ b/arch/powerpc/mm/drmem.c
@@ -14,6 +14,8 @@
#include <asm/prom.h>
#include <asm/drmem.h>

+static int n_root_addr_cells, n_root_size_cells;
+
static struct drmem_lmb_info __drmem_info;
struct drmem_lmb_info *drmem_info = &__drmem_info;

@@ -189,12 +191,13 @@ int drmem_update_dt(void)
return rc;
}

-static void __init read_drconf_v1_cell(struct drmem_lmb *lmb,
+static void read_drconf_v1_cell(struct drmem_lmb *lmb,
const __be32 **prop)
{
const __be32 *p = *prop;

- lmb->base_addr = dt_mem_next_cell(dt_root_addr_cells, &p);
+ lmb->base_addr = of_read_number(p, n_root_addr_cells);
+ p += n_root_addr_cells;
lmb->drc_index = of_read_number(p++, 1);

p++; /* skip reserved field */
@@ -205,29 +208,33 @@ static void __init read_drconf_v1_cell(struct drmem_lmb *lmb,
*prop = p;
}

-static void __init __walk_drmem_v1_lmbs(const __be32 *prop, const __be32 *usm,
- void (*func)(struct drmem_lmb *, const __be32 **))
+static int
+__walk_drmem_v1_lmbs(const __be32 *prop, const __be32 *usm, void *data,
+ int (*func)(struct drmem_lmb *, const __be32 **, void *))
{
struct drmem_lmb lmb;
u32 i, n_lmbs;
+ int ret = 0;

n_lmbs = of_read_number(prop++, 1);
- if (n_lmbs == 0)
- return;
-
for (i = 0; i < n_lmbs; i++) {
read_drconf_v1_cell(&lmb, &prop);
- func(&lmb, &usm);
+ ret = func(&lmb, &usm, data);
+ if (ret)
+ break;
}
+
+ return ret;
}

-static void __init read_drconf_v2_cell(struct of_drconf_cell_v2 *dr_cell,
+static void read_drconf_v2_cell(struct of_drconf_cell_v2 *dr_cell,
const __be32 **prop)
{
const __be32 *p = *prop;

dr_cell->seq_lmbs = of_read_number(p++, 1);
- dr_cell->base_addr = dt_mem_next_cell(dt_root_addr_cells, &p);
+ dr_cell->base_addr = of_read_number(p, n_root_addr_cells);
+ p += n_root_addr_cells;
dr_cell->drc_index = of_read_number(p++, 1);
dr_cell->aa_index = of_read_number(p++, 1);
dr_cell->flags = of_read_number(p++, 1);
@@ -235,17 +242,16 @@ static void __init read_drconf_v2_cell(struct of_drconf_cell_v2 *dr_cell,
*prop = p;
}

-static void __init __walk_drmem_v2_lmbs(const __be32 *prop, const __be32 *usm,
- void (*func)(struct drmem_lmb *, const __be32 **))
+static int
+__walk_drmem_v2_lmbs(const __be32 *prop, const __be32 *usm, void *data,
+ int (*func)(struct drmem_lmb *, const __be32 **, void *))
{
struct of_drconf_cell_v2 dr_cell;
struct drmem_lmb lmb;
u32 i, j, lmb_sets;
+ int ret = 0;

lmb_sets = of_read_number(prop++, 1);
- if (lmb_sets == 0)
- return;
-
for (i = 0; i < lmb_sets; i++) {
read_drconf_v2_cell(&dr_cell, &prop);

@@ -259,21 +265,29 @@ static void __init __walk_drmem_v2_lmbs(const __be32 *prop, const __be32 *usm,
lmb.aa_index = dr_cell.aa_index;
lmb.flags = dr_cell.flags;

- func(&lmb, &usm);
+ ret = func(&lmb, &usm, data);
+ if (ret)
+ break;
}
}
+
+ return ret;
}

#ifdef CONFIG_PPC_PSERIES
-void __init walk_drmem_lmbs_early(unsigned long node,
- void (*func)(struct drmem_lmb *, const __be32 **))
+int __init walk_drmem_lmbs_early(unsigned long node, void *data,
+ int (*func)(struct drmem_lmb *, const __be32 **, void *))
{
const __be32 *prop, *usm;
- int len;
+ int len, ret = -ENODEV;

prop = of_get_flat_dt_prop(node, "ibm,lmb-size", &len);
if (!prop || len < dt_root_size_cells * sizeof(__be32))
- return;
+ return ret;
+
+ /* Get the address & size cells */
+ n_root_addr_cells = dt_root_addr_cells;
+ n_root_size_cells = dt_root_size_cells;

drmem_info->lmb_size = dt_mem_next_cell(dt_root_size_cells, &prop);

@@ -281,20 +295,21 @@ void __init walk_drmem_lmbs_early(unsigned long node,

prop = of_get_flat_dt_prop(node, "ibm,dynamic-memory", &len);
if (prop) {
- __walk_drmem_v1_lmbs(prop, usm, func);
+ ret = __walk_drmem_v1_lmbs(prop, usm, data, func);
} else {
prop = of_get_flat_dt_prop(node, "ibm,dynamic-memory-v2",
&len);
if (prop)
- __walk_drmem_v2_lmbs(prop, usm, func);
+ ret = __walk_drmem_v2_lmbs(prop, usm, data, func);
}

memblock_dump_all();
+ return ret;
}

#endif

-static int __init init_drmem_lmb_size(struct device_node *dn)
+static int init_drmem_lmb_size(struct device_node *dn)
{
const __be32 *prop;
int len;
@@ -303,12 +318,12 @@ static int __init init_drmem_lmb_size(struct device_node *dn)
return 0;

prop = of_get_property(dn, "ibm,lmb-size", &len);
- if (!prop || len < dt_root_size_cells * sizeof(__be32)) {
+ if (!prop || len < n_root_size_cells * sizeof(__be32)) {
pr_info("Could not determine LMB size\n");
return -1;
}

- drmem_info->lmb_size = dt_mem_next_cell(dt_root_size_cells, &prop);
+ drmem_info->lmb_size = of_read_number(prop, n_root_size_cells);
return 0;
}

@@ -329,24 +344,36 @@ static const __be32 *of_get_usable_memory(struct device_node *dn)
return prop;
}

-void __init walk_drmem_lmbs(struct device_node *dn,
- void (*func)(struct drmem_lmb *, const __be32 **))
+int walk_drmem_lmbs(struct device_node *dn, void *data,
+ int (*func)(struct drmem_lmb *, const __be32 **, void *))
{
const __be32 *prop, *usm;
+ int ret = -ENODEV;
+
+ if (!of_root)
+ return ret;
+
+ /* Get the address & size cells */
+ of_node_get(of_root);
+ n_root_addr_cells = of_n_addr_cells(of_root);
+ n_root_size_cells = of_n_size_cells(of_root);
+ of_node_put(of_root);

if (init_drmem_lmb_size(dn))
- return;
+ return ret;

usm = of_get_usable_memory(dn);

prop = of_get_property(dn, "ibm,dynamic-memory", NULL);
if (prop) {
- __walk_drmem_v1_lmbs(prop, usm, func);
+ ret = __walk_drmem_v1_lmbs(prop, usm, data, func);
} else {
prop = of_get_property(dn, "ibm,dynamic-memory-v2", NULL);
if (prop)
- __walk_drmem_v2_lmbs(prop, usm, func);
+ ret = __walk_drmem_v2_lmbs(prop, usm, data, func);
}
+
+ return ret;
}

static void __init init_drmem_v1_lmbs(const __be32 *prop)
diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
index 9fcf2d195830..88eb6894418d 100644
--- a/arch/powerpc/mm/numa.c
+++ b/arch/powerpc/mm/numa.c
@@ -644,8 +644,9 @@ static inline int __init read_usm_ranges(const __be32 **usm)
* Extract NUMA information from the ibm,dynamic-reconfiguration-memory
* node. This assumes n_mem_{addr,size}_cells have been set.
*/
-static void __init numa_setup_drmem_lmb(struct drmem_lmb *lmb,
- const __be32 **usm)
+static int __init numa_setup_drmem_lmb(struct drmem_lmb *lmb,
+ const __be32 **usm,
+ void *data)
{
unsigned int ranges, is_kexec_kdump = 0;
unsigned long base, size, sz;
@@ -657,7 +658,7 @@ static void __init numa_setup_drmem_lmb(struct drmem_lmb *lmb,
*/
if ((lmb->flags & DRCONF_MEM_RESERVED)
|| !(lmb->flags & DRCONF_MEM_ASSIGNED))
- return;
+ return 0;

if (*usm)
is_kexec_kdump = 1;
@@ -669,7 +670,7 @@ static void __init numa_setup_drmem_lmb(struct drmem_lmb *lmb,
if (is_kexec_kdump) {
ranges = read_usm_ranges(usm);
if (!ranges) /* there are no (base, size) duple */
- return;
+ return 0;
}

do {
@@ -686,6 +687,8 @@ static void __init numa_setup_drmem_lmb(struct drmem_lmb *lmb,
if (sz)
memblock_set_node(base, sz, &memblock.memory, nid);
} while (--ranges);
+
+ return 0;
}

static int __init parse_numa_properties(void)
@@ -787,7 +790,7 @@ static int __init parse_numa_properties(void)
*/
memory = of_find_node_by_path("/ibm,dynamic-reconfiguration-memory");
if (memory) {
- walk_drmem_lmbs(memory, numa_setup_drmem_lmb);
+ walk_drmem_lmbs(memory, NULL, numa_setup_drmem_lmb);
of_node_put(memory);
}



2020-07-26 19:41:00

by Hari Bathini

Subject: [RESEND PATCH v5 03/11] powerpc/kexec_file: add helper functions for getting memory ranges

In the kexec case, the kernel to be loaded uses the same memory layout
as the running kernel. So, passing on the DT of the running kernel is
good enough.

But in the case of kdump, different memory ranges are needed to manage
loading the kdump kernel, booting into it and exporting the elfcore
of the crashing kernel. These ranges are: exclude memory ranges, usable
memory ranges, reserved memory ranges and crash memory ranges.

Exclude memory ranges specify the list of memory ranges to avoid while
loading kdump segments. Usable memory ranges list the memory ranges
that could be used for booting the kdump kernel. Reserved memory ranges
list the memory regions for the loading kernel's reserve map. Crash
memory ranges list the memory ranges to be exported as the crashing
kernel's elfcore.

Add helper functions for setting up the memory ranges mentioned above.
These helpers make the subsequent changes easier to understand and make
it easy to set up the different memory ranges listed above, as and when
appropriate.
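As a rough userspace sketch of what sort_memory_ranges(mrngs, true) does with the merge option, the code below sorts ranges by start address and then coalesces overlapping or adjacent ones in place. `struct range` and `sort_and_merge` are made up for this example and are not the kernel's types; the real helpers operate on `struct crash_mem` and keep inclusive `[start, end]` bounds, which this sketch mirrors:

```c
#include <assert.h>
#include <stdlib.h>

/* Inclusive [start, end] range, like crash_mem_range in the patch. */
struct range {
	unsigned long long start, end;
};

/* Comparator for qsort(), equivalent to the patch's rngcmp(). */
static int rngcmp(const void *a, const void *b)
{
	const struct range *x = a, *y = b;

	return (x->start > y->start) - (x->start < y->start);
}

/*
 * Sort ranges by start address, then merge overlapping or adjacent
 * ones in place; returns the new range count. Mirrors the effect of
 * sort_memory_ranges(mrngs, true) in the patch.
 */
static int sort_and_merge(struct range *r, int n)
{
	int i, idx = 0;

	if (n == 0)
		return 0;

	qsort(r, n, sizeof(*r), rngcmp);
	for (i = 1; i < n; i++) {
		if (r[i].start <= r[idx].end + 1) {
			/* Overlapping or adjacent: extend current range */
			if (r[i].end > r[idx].end)
				r[idx].end = r[i].end;
		} else {
			r[++idx] = r[i];
		}
	}
	return idx + 1;
}
```

Merging after sorting keeps the exclude list compact, which minimizes reallocations and the number of gaps the memory-hole lookup has to probe.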

Signed-off-by: Hari Bathini <[email protected]>
Tested-by: Pingfan Liu <[email protected]>
Reviewed-by: Thiago Jung Bauermann <[email protected]>
---

v4 -> v5:
* Added Reviewed-by tag from Thiago.
* Added the missing "#ifdef CONFIG_PPC_BOOK3S_64" around add_htab_mem_range()
function in arch/powerpc/kexec/ranges.c file.
* The add_tce_mem_ranges() function returned an error when a tce table was
not found in a pci node. This is wrong, as pci nodes may not always have
tce tables (KVM guests, for example). Fixed it by ignoring errors while
reading the tce table base/size and moving on to the next node.

v3 -> v4:
* Updated sort_memory_ranges() function to reuse sort() from lib/sort.c
and addressed other review comments from Thiago.

v2 -> v3:
* Unchanged. Added Tested-by tag from Pingfan.

v1 -> v2:
* Added an option to merge ranges while sorting to minimize reallocations
for memory ranges list.
* Dropped within_crashkernel option for add_opal_mem_range() &
add_rtas_mem_range() as it is not really needed.


arch/powerpc/include/asm/kexec_ranges.h | 25 ++
arch/powerpc/kexec/Makefile | 2
arch/powerpc/kexec/ranges.c | 417 +++++++++++++++++++++++++++++++
3 files changed, 443 insertions(+), 1 deletion(-)
create mode 100644 arch/powerpc/include/asm/kexec_ranges.h
create mode 100644 arch/powerpc/kexec/ranges.c

diff --git a/arch/powerpc/include/asm/kexec_ranges.h b/arch/powerpc/include/asm/kexec_ranges.h
new file mode 100644
index 000000000000..78f3111e4e74
--- /dev/null
+++ b/arch/powerpc/include/asm/kexec_ranges.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef _ASM_POWERPC_KEXEC_RANGES_H
+#define _ASM_POWERPC_KEXEC_RANGES_H
+
+#define MEM_RANGE_CHUNK_SZ 2048 /* Memory ranges size chunk */
+
+struct crash_mem *realloc_mem_ranges(struct crash_mem **mem_ranges);
+int add_mem_range(struct crash_mem **mem_ranges, u64 base, u64 size);
+int add_tce_mem_ranges(struct crash_mem **mem_ranges);
+int add_initrd_mem_range(struct crash_mem **mem_ranges);
+#ifdef CONFIG_PPC_BOOK3S_64
+int add_htab_mem_range(struct crash_mem **mem_ranges);
+#else
+static inline int add_htab_mem_range(struct crash_mem **mem_ranges)
+{
+ return 0;
+}
+#endif
+int add_kernel_mem_range(struct crash_mem **mem_ranges);
+int add_rtas_mem_range(struct crash_mem **mem_ranges);
+int add_opal_mem_range(struct crash_mem **mem_ranges);
+int add_reserved_ranges(struct crash_mem **mem_ranges);
+void sort_memory_ranges(struct crash_mem *mrngs, bool merge);
+
+#endif /* _ASM_POWERPC_KEXEC_RANGES_H */
diff --git a/arch/powerpc/kexec/Makefile b/arch/powerpc/kexec/Makefile
index 67c355329457..4aff6846c772 100644
--- a/arch/powerpc/kexec/Makefile
+++ b/arch/powerpc/kexec/Makefile
@@ -7,7 +7,7 @@ obj-y += core.o crash.o core_$(BITS).o

obj-$(CONFIG_PPC32) += relocate_32.o

-obj-$(CONFIG_KEXEC_FILE) += file_load.o file_load_$(BITS).o elf_$(BITS).o
+obj-$(CONFIG_KEXEC_FILE) += file_load.o ranges.o file_load_$(BITS).o elf_$(BITS).o

ifdef CONFIG_HAVE_IMA_KEXEC
ifdef CONFIG_IMA
diff --git a/arch/powerpc/kexec/ranges.c b/arch/powerpc/kexec/ranges.c
new file mode 100644
index 000000000000..21bea1b78443
--- /dev/null
+++ b/arch/powerpc/kexec/ranges.c
@@ -0,0 +1,417 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * powerpc code to implement the kexec_file_load syscall
+ *
+ * Copyright (C) 2004 Adam Litke ([email protected])
+ * Copyright (C) 2004 IBM Corp.
+ * Copyright (C) 2004,2005 Milton D Miller II, IBM Corporation
+ * Copyright (C) 2005 R Sharada ([email protected])
+ * Copyright (C) 2006 Mohan Kumar M ([email protected])
+ * Copyright (C) 2020 IBM Corporation
+ *
+ * Based on kexec-tools' kexec-ppc64.c, fs2dt.c.
+ * Heavily modified for the kernel by
+ * Hari Bathini <[email protected]>.
+ */
+
+#undef DEBUG
+#define pr_fmt(fmt) "kexec ranges: " fmt
+
+#include <linux/sort.h>
+#include <linux/kexec.h>
+#include <linux/of_device.h>
+#include <linux/slab.h>
+#include <asm/sections.h>
+#include <asm/kexec_ranges.h>
+
+/**
+ * get_max_nr_ranges - Get the max no. of ranges crash_mem structure
+ * could hold, given the size allocated for it.
+ * @size: Allocation size of crash_mem structure.
+ *
+ * Returns the maximum no. of ranges.
+ */
+static inline unsigned int get_max_nr_ranges(size_t size)
+{
+ return ((size - sizeof(struct crash_mem)) /
+ sizeof(struct crash_mem_range));
+}
+
+/**
+ * get_mem_rngs_size - Get the allocated size of mrngs based on
+ * max_nr_ranges and chunk size.
+ * @mrngs: Memory ranges.
+ *
+ * Returns the maximum size of @mrngs.
+ */
+static inline size_t get_mem_rngs_size(struct crash_mem *mrngs)
+{
+ size_t size;
+
+ if (!mrngs)
+ return 0;
+
+ size = (sizeof(struct crash_mem) +
+ (mrngs->max_nr_ranges * sizeof(struct crash_mem_range)));
+
+ /*
+ * Memory is allocated in size multiple of MEM_RANGE_CHUNK_SZ.
+ * So, align to get the actual length.
+ */
+ return ALIGN(size, MEM_RANGE_CHUNK_SZ);
+}
+
+/**
+ * __add_mem_range - add a memory range to memory ranges list.
+ * @mem_ranges: Range list to add the memory range to.
+ * @base: Base address of the range to add.
+ * @size: Size of the memory range to add.
+ *
+ * (Re)allocates memory, if needed.
+ *
+ * Returns 0 on success, negative errno on error.
+ */
+static int __add_mem_range(struct crash_mem **mem_ranges, u64 base, u64 size)
+{
+ struct crash_mem *mrngs = *mem_ranges;
+
+ if ((mrngs == NULL) || (mrngs->nr_ranges == mrngs->max_nr_ranges)) {
+ mrngs = realloc_mem_ranges(mem_ranges);
+ if (!mrngs)
+ return -ENOMEM;
+ }
+
+ mrngs->ranges[mrngs->nr_ranges].start = base;
+ mrngs->ranges[mrngs->nr_ranges].end = base + size - 1;
+ pr_debug("Added memory range [%#016llx - %#016llx] at index %d\n",
+ base, base + size - 1, mrngs->nr_ranges);
+ mrngs->nr_ranges++;
+ return 0;
+}
+
+/**
+ * __merge_memory_ranges - Merges the given memory ranges list.
+ * @mem_ranges: Range list to merge.
+ *
+ * Assumes a sorted range list.
+ *
+ * Returns nothing.
+ */
+static void __merge_memory_ranges(struct crash_mem *mrngs)
+{
+ struct crash_mem_range *rngs;
+ int i, idx;
+
+ if (!mrngs)
+ return;
+
+ idx = 0;
+ rngs = &mrngs->ranges[0];
+ for (i = 1; i < mrngs->nr_ranges; i++) {
+ if (rngs[i].start <= (rngs[i-1].end + 1))
+ rngs[idx].end = rngs[i].end;
+ else {
+ idx++;
+ if (i == idx)
+ continue;
+
+ rngs[idx] = rngs[i];
+ }
+ }
+ mrngs->nr_ranges = idx + 1;
+}
+
+/**
+ * realloc_mem_ranges - reallocate mem_ranges with size incremented
+ * by MEM_RANGE_CHUNK_SZ. Frees up the old memory,
+ * if memory allocation fails.
+ * @mem_ranges: Memory ranges to reallocate.
+ *
+ * Returns pointer to reallocated memory on success, NULL otherwise.
+ */
+struct crash_mem *realloc_mem_ranges(struct crash_mem **mem_ranges)
+{
+ struct crash_mem *mrngs = *mem_ranges;
+ unsigned int nr_ranges;
+ size_t size;
+
+ size = get_mem_rngs_size(mrngs);
+ nr_ranges = mrngs ? mrngs->nr_ranges : 0;
+
+ size += MEM_RANGE_CHUNK_SZ;
+ mrngs = krealloc(*mem_ranges, size, GFP_KERNEL);
+ if (!mrngs) {
+ kfree(*mem_ranges);
+ *mem_ranges = NULL;
+ return NULL;
+ }
+
+ mrngs->nr_ranges = nr_ranges;
+ mrngs->max_nr_ranges = get_max_nr_ranges(size);
+ *mem_ranges = mrngs;
+
+ return mrngs;
+}
+
+/**
+ * add_mem_range - Updates existing memory range, if there is an overlap.
+ * Else, adds a new memory range.
+ * @mem_ranges: Range list to add the memory range to.
+ * @base: Base address of the range to add.
+ * @size: Size of the memory range to add.
+ *
+ * (Re)allocates memory, if needed.
+ *
+ * Returns 0 on success, negative errno on error.
+ */
+int add_mem_range(struct crash_mem **mem_ranges, u64 base, u64 size)
+{
+ struct crash_mem *mrngs = *mem_ranges;
+ u64 mstart, mend, end;
+ unsigned int i;
+
+ if (!size)
+ return 0;
+
+ end = base + size - 1;
+
+ if ((mrngs == NULL) || (mrngs->nr_ranges == 0))
+ return __add_mem_range(mem_ranges, base, size);
+
+ for (i = 0; i < mrngs->nr_ranges; i++) {
+ mstart = mrngs->ranges[i].start;
+ mend = mrngs->ranges[i].end;
+ if (base < mend && end > mstart) {
+ if (base < mstart)
+ mrngs->ranges[i].start = base;
+ if (end > mend)
+ mrngs->ranges[i].end = end;
+ return 0;
+ }
+ }
+
+ return __add_mem_range(mem_ranges, base, size);
+}
+
+/**
+ * add_tce_mem_ranges - Adds tce-table range to the given memory ranges list.
+ * @mem_ranges: Range list to add the memory range(s) to.
+ *
+ * Returns 0 on success, negative errno on error.
+ */
+int add_tce_mem_ranges(struct crash_mem **mem_ranges)
+{
+ struct device_node *dn;
+ int ret = 0;
+
+ for_each_node_by_type(dn, "pci") {
+ u64 base;
+ u32 size;
+ int rc;
+
+ /*
+ * It is ok to have pci nodes without tce. So, ignore
+ * any read errors here.
+ */
+ rc = of_property_read_u64(dn, "linux,tce-base", &base);
+ rc |= of_property_read_u32(dn, "linux,tce-size", &size);
+ if (rc)
+ continue;
+
+ ret = add_mem_range(mem_ranges, base, size);
+ if (ret)
+ break;
+ }
+
+ return ret;
+}
+
+/**
+ * add_initrd_mem_range - Adds initrd range to the given memory ranges list,
+ * if the initrd was retained.
+ * @mem_ranges: Range list to add the memory range to.
+ *
+ * Returns 0 on success, negative errno on error.
+ */
+int add_initrd_mem_range(struct crash_mem **mem_ranges)
+{
+ u64 base, end;
+ char *str;
+ int ret;
+
+ /* This range means something only if initrd was retained */
+ str = strstr(saved_command_line, "retain_initrd");
+ if (!str)
+ return 0;
+
+ ret = of_property_read_u64(of_chosen, "linux,initrd-start", &base);
+ ret |= of_property_read_u64(of_chosen, "linux,initrd-end", &end);
+ if (!ret)
+ ret = add_mem_range(mem_ranges, base, end - base + 1);
+ return ret;
+}
+
+#ifdef CONFIG_PPC_BOOK3S_64
+/**
+ * add_htab_mem_range - Adds htab range to the given memory ranges list,
+ * if it exists
+ * @mem_ranges: Range list to add the memory range to.
+ *
+ * Returns 0 on success, negative errno on error.
+ */
+int add_htab_mem_range(struct crash_mem **mem_ranges)
+{
+ if (!htab_address)
+ return 0;
+
+ return add_mem_range(mem_ranges, __pa(htab_address), htab_size_bytes);
+}
+#endif
+
+/**
+ * add_kernel_mem_range - Adds kernel text region to the given
+ * memory ranges list.
+ * @mem_ranges: Range list to add the memory range to.
+ *
+ * Returns 0 on success, negative errno on error.
+ */
+int add_kernel_mem_range(struct crash_mem **mem_ranges)
+{
+ return add_mem_range(mem_ranges, 0, __pa(_end));
+}
+
+/**
+ * add_rtas_mem_range - Adds RTAS region to the given memory ranges list.
+ * @mem_ranges: Range list to add the memory range to.
+ *
+ * Returns 0 on success, negative errno on error.
+ */
+int add_rtas_mem_range(struct crash_mem **mem_ranges)
+{
+ struct device_node *dn;
+ int ret = 0;
+
+ dn = of_find_node_by_path("/rtas");
+ if (dn) {
+ u32 base, size;
+
+ ret = of_property_read_u32(dn, "linux,rtas-base", &base);
+ ret |= of_property_read_u32(dn, "rtas-size", &size);
+ if (ret)
+ goto out;
+
+ ret = add_mem_range(mem_ranges, base, size);
+ }
+
+out:
+ of_node_put(dn);
+ return ret;
+}
+
+/**
+ * add_opal_mem_range - Adds OPAL region to the given memory ranges list.
+ * @mem_ranges: Range list to add the memory range to.
+ *
+ * Returns 0 on success, negative errno on error.
+ */
+int add_opal_mem_range(struct crash_mem **mem_ranges)
+{
+ struct device_node *dn;
+ int ret = 0;
+
+ dn = of_find_node_by_path("/ibm,opal");
+ if (dn) {
+ u64 base, size;
+
+ ret = of_property_read_u64(dn, "opal-base-address", &base);
+ ret |= of_property_read_u64(dn, "opal-runtime-size", &size);
+ if (ret)
+ goto out;
+
+ ret = add_mem_range(mem_ranges, base, size);
+ }
+
+out:
+ of_node_put(dn);
+ return ret;
+}
+
+/**
+ * add_reserved_ranges - Adds "/reserved-ranges" regions exported by f/w
+ * to the given memory ranges list.
+ * @mem_ranges: Range list to add the memory ranges to.
+ *
+ * Returns 0 on success, negative errno on error.
+ */
+int add_reserved_ranges(struct crash_mem **mem_ranges)
+{
+ int n_mem_addr_cells, n_mem_size_cells, i, len, cells, ret = 0;
+ const __be32 *prop;
+
+ prop = of_get_property(of_root, "reserved-ranges", &len);
+ if (!prop)
+ return 0;
+
+ of_node_get(of_root);
+ n_mem_addr_cells = of_n_addr_cells(of_root);
+ n_mem_size_cells = of_n_size_cells(of_root);
+ cells = n_mem_addr_cells + n_mem_size_cells;
+
+ /* Each reserved range is an (address,size) pair */
+ for (i = 0; i < (len / (sizeof(*prop) * cells)); i++) {
+ u64 base, size;
+
+ base = of_read_number(prop + (i * cells), n_mem_addr_cells);
+ size = of_read_number(prop + (i * cells) + n_mem_addr_cells,
+ n_mem_size_cells);
+
+ ret = add_mem_range(mem_ranges, base, size);
+ if (ret)
+ break;
+ }
+
+ of_node_put(of_root);
+ return ret;
+}
+
+/* cmp_func_t callback to sort ranges with sort() */
+static int rngcmp(const void *_x, const void *_y)
+{
+ const struct crash_mem_range *x = _x, *y = _y;
+
+ if (x->start > y->start)
+ return 1;
+ if (x->start < y->start)
+ return -1;
+ return 0;
+}
+
+/**
+ * sort_memory_ranges - Sorts the given memory ranges list.
+ * @mem_ranges: Range list to sort.
+ * @merge: If true, merge the list after sorting.
+ *
+ * Returns nothing.
+ */
+void sort_memory_ranges(struct crash_mem *mrngs, bool merge)
+{
+ int i;
+
+ if (!mrngs)
+ return;
+
+ /* Sort the ranges in-place */
+ sort(&(mrngs->ranges[0]), mrngs->nr_ranges, sizeof(mrngs->ranges[0]),
+ rngcmp, NULL);
+
+ if (merge)
+ __merge_memory_ranges(mrngs);
+
+ /* For debugging purpose */
+ pr_debug("Memory ranges:\n");
+ for (i = 0; i < mrngs->nr_ranges; i++) {
+ pr_debug("\t[%03d][%#016llx - %#016llx]\n", i,
+ mrngs->ranges[i].start,
+ mrngs->ranges[i].end);
+ }
+}


2020-07-26 19:41:54

by Hari Bathini

Subject: [RESEND PATCH v5 08/11] ppc64/kexec_file: setup backup region for kdump kernel

Though the kdump kernel boots from the loaded address, the first 64KB
of it is copied down to real 0. So, set up a backup region and let
purgatory copy the first 64KB of the crashed kernel into this backup
region before booting into the kdump kernel. Update the reserve map
with the backup region and the crashed kernel's memory to prevent the
kdump kernel from accidentally using that memory.
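A minimal C model of what the purgatory copy amounts to: the actual patch implements this in assembler in trampoline_64.S, operating on physical addresses before the kdump kernel starts. `backup_first_64k` and its buffer arguments are illustrative only; the macro names match the crashdump-ppc64.h header added below:

```c
#include <assert.h>
#include <string.h>

/* Backup region bounds, as in asm/crashdump-ppc64.h below. */
#define BACKUP_SRC_START 0
#define BACKUP_SRC_END   0xffff
#define BACKUP_SRC_SIZE  (BACKUP_SRC_END - BACKUP_SRC_START + 1)

/*
 * Model of the purgatory copy: preserve the first 64KB of the crashed
 * kernel (real 0) in the backup region before the kdump kernel's own
 * first 64KB is copied down over it.
 */
static void backup_first_64k(const unsigned char *mem, unsigned char *backup)
{
	memcpy(backup, mem + BACKUP_SRC_START, BACKUP_SRC_SIZE);
}
```

The 8-byte alignment assumption noted in the header matters because the assembler version copies in doubleword units rather than byte by byte.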

Signed-off-by: Hari Bathini <[email protected]>
---

v4 -> v5:
* Did not add Reviewed-by tag from Thiago yet as he might want to reconsider
it with the changes in this patch.
* Wrote backup region copy code in assembler. Also, dropped the patch that
applies RELA relocations & the patch that sets up stack as they are no
longer needed.
* For correctness, updated fdt_add_mem_rsv() to take "BACKUP_SRC_END + 1"
as start address instead of BACKUP_SRC_SIZE.

v3 -> v4:
* Moved fdt_add_mem_rsv() for backup region under kdump flag, on Thiago's
suggestion, as it is only relevant for kdump.

v2 -> v3:
* Dropped check for backup_start in trampoline_64.S as purgatory() takes
care of it anyway.

v1 -> v2:
* Check if backup region is available before branching out. This is
to keep `kexec -l -s` flow as before as much as possible. This would
eventually change with more testing and addition of sha256 digest
verification support.
* Fixed missing prototype for purgatory() as reported by lkp.
lkp report for reference:
- https://lore.kernel.org/patchwork/patch/1264423/


arch/powerpc/include/asm/crashdump-ppc64.h | 19 ++++++
arch/powerpc/include/asm/kexec.h | 7 ++
arch/powerpc/kexec/elf_64.c | 9 +++
arch/powerpc/kexec/file_load_64.c | 95 +++++++++++++++++++++++++++-
arch/powerpc/purgatory/trampoline_64.S | 38 ++++++++++-
5 files changed, 161 insertions(+), 7 deletions(-)
create mode 100644 arch/powerpc/include/asm/crashdump-ppc64.h

diff --git a/arch/powerpc/include/asm/crashdump-ppc64.h b/arch/powerpc/include/asm/crashdump-ppc64.h
new file mode 100644
index 000000000000..68d9717cc5ee
--- /dev/null
+++ b/arch/powerpc/include/asm/crashdump-ppc64.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef _ASM_POWERPC_CRASHDUMP_PPC64_H
+#define _ASM_POWERPC_CRASHDUMP_PPC64_H
+
+/*
+ * Backup region - first 64KB of System RAM
+ *
+ * If ever the below macros are to be changed, please be judicious.
+ * The implicit assumptions are:
+ * - start, end & size are less than UINT32_MAX.
+ * - start & size are at least 8 byte aligned.
+ *
+ * For implementation details: arch/powerpc/purgatory/trampoline_64.S
+ */
+#define BACKUP_SRC_START 0
+#define BACKUP_SRC_END 0xffff
+#define BACKUP_SRC_SIZE (BACKUP_SRC_END - BACKUP_SRC_START + 1)
+
+#endif /* _ASM_POWERPC_CRASHDUMP_PPC64_H */
diff --git a/arch/powerpc/include/asm/kexec.h b/arch/powerpc/include/asm/kexec.h
index 835dc92e091c..f9514ebeffaa 100644
--- a/arch/powerpc/include/asm/kexec.h
+++ b/arch/powerpc/include/asm/kexec.h
@@ -105,6 +105,9 @@ extern const struct kexec_file_ops kexec_elf64_ops;
struct kimage_arch {
struct crash_mem *exclude_ranges;

+ unsigned long backup_start;
+ void *backup_buf;
+
#ifdef CONFIG_IMA_KEXEC
phys_addr_t ima_buffer_addr;
size_t ima_buffer_size;
@@ -120,6 +123,10 @@ int setup_new_fdt(const struct kimage *image, void *fdt,
int delete_fdt_mem_rsv(void *fdt, unsigned long start, unsigned long size);

#ifdef CONFIG_PPC64
+struct kexec_buf;
+
+int load_crashdump_segments_ppc64(struct kimage *image,
+ struct kexec_buf *kbuf);
int setup_purgatory_ppc64(struct kimage *image, const void *slave_code,
const void *fdt, unsigned long kernel_load_addr,
unsigned long fdt_load_addr);
diff --git a/arch/powerpc/kexec/elf_64.c b/arch/powerpc/kexec/elf_64.c
index 64c15a5a280b..76e2fc7e6dc3 100644
--- a/arch/powerpc/kexec/elf_64.c
+++ b/arch/powerpc/kexec/elf_64.c
@@ -68,6 +68,15 @@ static void *elf64_load(struct kimage *image, char *kernel_buf,

pr_debug("Loaded purgatory at 0x%lx\n", pbuf.mem);

+ /* Load additional segments needed for panic kernel */
+ if (image->type == KEXEC_TYPE_CRASH) {
+ ret = load_crashdump_segments_ppc64(image, &kbuf);
+ if (ret) {
+ pr_err("Failed to load kdump kernel segments\n");
+ goto out;
+ }
+ }
+
if (initrd != NULL) {
kbuf.buffer = initrd;
kbuf.bufsz = kbuf.memsz = initrd_len;
diff --git a/arch/powerpc/kexec/file_load_64.c b/arch/powerpc/kexec/file_load_64.c
index a5c1442590b2..88408b17a7f6 100644
--- a/arch/powerpc/kexec/file_load_64.c
+++ b/arch/powerpc/kexec/file_load_64.c
@@ -20,8 +20,10 @@
#include <linux/of_device.h>
#include <linux/memblock.h>
#include <linux/slab.h>
+#include <linux/vmalloc.h>
#include <asm/drmem.h>
#include <asm/kexec_ranges.h>
+#include <asm/crashdump-ppc64.h>

struct umem_info {
uint64_t *buf; /* data buffer for usable-memory property */
@@ -697,6 +699,69 @@ static int update_usable_mem_fdt(void *fdt, struct crash_mem *usable_mem)
return ret;
}

+/**
+ * load_backup_segment - Locate a memory hole to place the backup region.
+ * @image: Kexec image.
+ * @kbuf: Buffer contents and memory parameters.
+ *
+ * Returns 0 on success, negative errno on error.
+ */
+static int load_backup_segment(struct kimage *image, struct kexec_buf *kbuf)
+{
+ void *buf;
+ int ret;
+
+ /* Setup a segment for backup region */
+ buf = vzalloc(BACKUP_SRC_SIZE);
+ if (!buf)
+ return -ENOMEM;
+
+ /*
+ * A source buffer has no meaning for backup region as data will
+ * be copied from backup source, after crash, in the purgatory.
+ * But as the load segment code doesn't recognize such segments,
+ * set up a dummy source buffer to keep it happy for now.
+ */
+ kbuf->buffer = buf;
+ kbuf->mem = KEXEC_BUF_MEM_UNKNOWN;
+ kbuf->bufsz = kbuf->memsz = BACKUP_SRC_SIZE;
+ kbuf->top_down = false;
+
+ ret = kexec_add_buffer(kbuf);
+ if (ret) {
+ vfree(buf);
+ return ret;
+ }
+
+ image->arch.backup_buf = buf;
+ image->arch.backup_start = kbuf->mem;
+ return 0;
+}
+
+/**
+ * load_crashdump_segments_ppc64 - Initialize the additional segments needed
+ * to load kdump kernel.
+ * @image: Kexec image.
+ * @kbuf: Buffer contents and memory parameters.
+ *
+ * Returns 0 on success, negative errno on error.
+ */
+int load_crashdump_segments_ppc64(struct kimage *image,
+ struct kexec_buf *kbuf)
+{
+ int ret;
+
+ /* Load backup segment - first 64K bytes of the crashing kernel */
+ ret = load_backup_segment(image, kbuf);
+ if (ret) {
+ pr_err("Failed to load backup segment\n");
+ return ret;
+ }
+ pr_debug("Loaded the backup region at 0x%lx\n", kbuf->mem);
+
+ return 0;
+}
+
/**
* setup_purgatory_ppc64 - initialize PPC64 specific purgatory's global
* variables and call setup_purgatory() to initialize
@@ -737,6 +802,14 @@ int setup_purgatory_ppc64(struct kimage *image, const void *slave_code,
goto out;
}

+ /* Tell purgatory where to look for backup region */
+ ret = kexec_purgatory_get_set_symbol(image, "backup_start",
+ &image->arch.backup_start,
+ sizeof(image->arch.backup_start),
+ false);
+ if (ret)
+ goto out;
+
/* Setup OPAL base & entry values */
dn = of_find_node_by_path("/ibm,opal");
if (dn) {
@@ -782,7 +855,7 @@ int setup_new_fdt_ppc64(const struct kimage *image, void *fdt,

/*
* Restrict memory usage for kdump kernel by setting up
- * usable memory ranges.
+ * usable memory ranges and memory reserve map.
*/
if (image->type == KEXEC_TYPE_CRASH) {
ret = get_usable_memory_ranges(&umem);
@@ -795,13 +868,26 @@ int setup_new_fdt_ppc64(const struct kimage *image, void *fdt,
goto out;
}

- /* Ensure we don't touch crashed kernel's memory */
- ret = fdt_add_mem_rsv(fdt, 0, crashk_res.start);
+ /*
+ * Ensure we don't touch crashed kernel's memory except the
+ * first 64K of RAM, which will be backed up.
+ */
+ ret = fdt_add_mem_rsv(fdt, BACKUP_SRC_END + 1,
+ crashk_res.start - BACKUP_SRC_SIZE);
if (ret) {
pr_err("Error reserving crash memory: %s\n",
fdt_strerror(ret));
goto out;
}
+
+ /* Ensure backup region is not used by kdump/capture kernel */
+ ret = fdt_add_mem_rsv(fdt, image->arch.backup_start,
+ BACKUP_SRC_SIZE);
+ if (ret) {
+ pr_err("Error reserving memory for backup: %s\n",
+ fdt_strerror(ret));
+ goto out;
+ }
}

out:
@@ -908,5 +994,8 @@ int arch_kimage_file_post_load_cleanup(struct kimage *image)
kfree(image->arch.exclude_ranges);
image->arch.exclude_ranges = NULL;

+ vfree(image->arch.backup_buf);
+ image->arch.backup_buf = NULL;
+
return kexec_image_post_load_cleanup_default(image);
}
diff --git a/arch/powerpc/purgatory/trampoline_64.S b/arch/powerpc/purgatory/trampoline_64.S
index 464af8e8a4cb..d4b52961f592 100644
--- a/arch/powerpc/purgatory/trampoline_64.S
+++ b/arch/powerpc/purgatory/trampoline_64.S
@@ -10,6 +10,7 @@
*/

#include <asm/asm-compat.h>
+#include <asm/crashdump-ppc64.h>

.machine ppc64
.balign 256
@@ -43,14 +44,38 @@ master:
mr %r17,%r3 /* save cpu id to r17 */
mr %r15,%r4 /* save physical address in reg15 */

+ bl 0f /* Work out where we're running */
+0: mflr %r18
+
+ /*
+ * Copy BACKUP_SRC_SIZE bytes from BACKUP_SRC_START to
+ * backup_start 8 bytes at a time.
+ *
+ * Use r3 = dest, r4 = src, r5 = size, r6 = count
+ */
+ ld %r3,(backup_start - 0b)(%r18)
+ cmpdi %cr0,%r3,0
+ beq 80f /* skip if there is no backup region */
+ lis %r5,BACKUP_SRC_SIZE@h
+ ori %r5,%r5,BACKUP_SRC_SIZE@l
+ cmpdi %cr0,%r5,0
+ beq 80f /* skip if copy size is zero */
+ lis %r4,BACKUP_SRC_START@h
+ ori %r4,%r4,BACKUP_SRC_START@l
+ li %r6,0
+70:
+ ldx %r0,%r6,%r4
+ stdx %r0,%r6,%r3
+ addi %r6,%r6,8
+ cmpld %cr0,%r6,%r5
+ blt 70b
+
+80:
or %r3,%r3,%r3 /* ok now to high priority, lets boot */
lis %r6,0x1
mtctr %r6 /* delay a bit for slaves to catch up */
bdnz . /* before we overwrite 0-100 again */

- bl 0f /* Work out where we're running */
-0: mflr %r18
-
/* load device-tree address */
ld %r3, (dt_offset - 0b)(%r18)
mr %r16,%r3 /* save dt address in reg16 */
@@ -93,7 +118,6 @@ master:

rfid /* update MSR and start kernel */

-
.balign 8
.globl kernel
kernel:
@@ -106,6 +130,12 @@ dt_offset:
.8byte 0x0
.size dt_offset, . - dt_offset

+ .balign 8
+ .globl backup_start
+backup_start:
+ .8byte 0x0
+ .size backup_start, . - backup_start
+
.balign 8
.globl opal_base
opal_base:


2020-07-26 19:42:33

by Hari Bathini

[permalink] [raw]
Subject: [RESEND PATCH v5 06/11] ppc64/kexec_file: restrict memory usage of kdump kernel

The kdump kernel, used for capturing the kernel core image, is supposed
to use only specific memory regions to avoid corrupting the image to be
captured. Those regions are the crashkernel range - the memory reserved
explicitly for the kdump kernel, the memory used for the tce-table, the
OPAL region and the RTAS region as applicable. Restrict the kdump
kernel's memory to these regions by setting up the usable-memory DT
property. Also, tell the kdump kernel to run at the loaded address by
setting the magic word at 0x5c.

Signed-off-by: Hari Bathini <[email protected]>
Tested-by: Pingfan Liu <[email protected]>
---

v4 -> v5:
* Renamed get_node_pathlen() function to get_node_path_size() and
handled root node separately to avoid off-by-one error in
calculating string size.
* Updated get_node_path() in line with change in get_node_path_size().

v3 -> v4:
* Updated get_node_path() to be an iterative function instead of a
recursive one.
* Added comment explaining why low memory is added to kdump kernel's
usable memory ranges though it doesn't fall in crashkernel region.
* For correctness, added fdt_add_mem_rsv() for the low memory being
added to kdump kernel's usable memory ranges.
* Fixed prop pointer update in add_usable_mem_property() and changed
duple to tuple as suggested by Thiago.

v2 -> v3:
* Unchanged. Added Tested-by tag from Pingfan.

v1 -> v2:
* Fixed off-by-one error while setting up usable-memory properties.
* Updated add_rtas_mem_range() & add_opal_mem_range() callsites based on
the new prototype for these functions.


arch/powerpc/kexec/file_load_64.c | 478 +++++++++++++++++++++++++++++++++++++
1 file changed, 477 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kexec/file_load_64.c b/arch/powerpc/kexec/file_load_64.c
index 2df6f4273ddd..8df085a22fd7 100644
--- a/arch/powerpc/kexec/file_load_64.c
+++ b/arch/powerpc/kexec/file_load_64.c
@@ -17,9 +17,21 @@
#include <linux/kexec.h>
#include <linux/of_fdt.h>
#include <linux/libfdt.h>
+#include <linux/of_device.h>
#include <linux/memblock.h>
+#include <linux/slab.h>
+#include <asm/drmem.h>
#include <asm/kexec_ranges.h>

+struct umem_info {
+ uint64_t *buf; /* data buffer for usable-memory property */
+ uint32_t idx; /* current index */
+ uint32_t size; /* size allocated for the data buffer */
+
+ /* usable memory ranges to look up */
+ const struct crash_mem *umrngs;
+};
+
const struct kexec_file_ops * const kexec_file_loaders[] = {
&kexec_elf64_ops,
NULL
@@ -74,6 +86,42 @@ static int get_exclude_memory_ranges(struct crash_mem **mem_ranges)
return ret;
}

+/**
+ * get_usable_memory_ranges - Get usable memory ranges. This list includes
+ * regions like crashkernel, opal/rtas & tce-table
+ * that the kdump kernel could use.
+ * @mem_ranges: Range list to add the memory ranges to.
+ *
+ * Returns 0 on success, negative errno on error.
+ */
+static int get_usable_memory_ranges(struct crash_mem **mem_ranges)
+{
+ int ret;
+
+ /*
+ * prom code doesn't take kindly to missing low memory. So, add
+ * [0, crashk_res.end] instead of [crashk_res.start, crashk_res.end]
+ * to keep it happy.
+ */
+ ret = add_mem_range(mem_ranges, 0, crashk_res.end + 1);
+ if (ret)
+ goto out;
+
+ ret = add_rtas_mem_range(mem_ranges);
+ if (ret)
+ goto out;
+
+ ret = add_opal_mem_range(mem_ranges);
+ if (ret)
+ goto out;
+
+ ret = add_tce_mem_ranges(mem_ranges);
+out:
+ if (ret)
+ pr_err("Failed to setup usable memory ranges\n");
+ return ret;
+}
+
/**
* __locate_mem_hole_top_down - Looks top down for a large enough memory hole
* in the memory regions between buf_min & buf_max
@@ -273,6 +321,382 @@ static int locate_mem_hole_bottom_up_ppc64(struct kexec_buf *kbuf,
return ret;
}

+/**
+ * check_realloc_usable_mem - Reallocate buffer if it can't accommodate entries
+ * @um_info: Usable memory buffer and ranges info.
+ * @cnt: No. of entries to accommodate.
+ *
+ * Frees up the old buffer if memory reallocation fails.
+ *
+ * Returns buffer on success, NULL on error.
+ */
+static uint64_t *check_realloc_usable_mem(struct umem_info *um_info, int cnt)
+{
+ void *tbuf;
+
+ if (um_info->size >=
+ ((um_info->idx + cnt) * sizeof(*(um_info->buf))))
+ return um_info->buf;
+
+ um_info->size += MEM_RANGE_CHUNK_SZ;
+ tbuf = krealloc(um_info->buf, um_info->size, GFP_KERNEL);
+ if (!tbuf) {
+ um_info->size -= MEM_RANGE_CHUNK_SZ;
+ return NULL;
+ }
+
+ memset(tbuf + um_info->idx, 0, MEM_RANGE_CHUNK_SZ);
+ return tbuf;
+}
+
+/**
+ * add_usable_mem - Add the usable memory ranges within the given memory range
+ * to the buffer
+ * @um_info: Usable memory buffer and ranges info.
+ * @base: Base address of memory range to look for.
+ * @end: End address of memory range to look for.
+ * @cnt: No. of usable memory ranges added to buffer.
+ *
+ * Returns 0 on success, negative errno on error.
+ */
+static int add_usable_mem(struct umem_info *um_info, uint64_t base,
+ uint64_t end, int *cnt)
+{
+ uint64_t loc_base, loc_end, *buf;
+ const struct crash_mem *umrngs;
+ int i, add;
+
+ *cnt = 0;
+ umrngs = um_info->umrngs;
+ for (i = 0; i < umrngs->nr_ranges; i++) {
+ add = 0;
+ loc_base = umrngs->ranges[i].start;
+ loc_end = umrngs->ranges[i].end;
+ if (loc_base >= base && loc_end <= end)
+ add = 1;
+ else if (base < loc_end && end > loc_base) {
+ if (loc_base < base)
+ loc_base = base;
+ if (loc_end > end)
+ loc_end = end;
+ add = 1;
+ }
+
+ if (add) {
+ buf = check_realloc_usable_mem(um_info, 2);
+ if (!buf)
+ return -ENOMEM;
+
+ um_info->buf = buf;
+ buf[um_info->idx++] = cpu_to_be64(loc_base);
+ buf[um_info->idx++] =
+ cpu_to_be64(loc_end - loc_base + 1);
+ (*cnt)++;
+ }
+ }
+
+ return 0;
+}
+
+/**
+ * kdump_setup_usable_lmb - This is a callback function that gets called by
+ * walk_drmem_lmbs for every LMB to set its
+ * usable memory ranges.
+ * @lmb: LMB info.
+ * @usm: linux,drconf-usable-memory property value.
+ * @data: Pointer to usable memory buffer and ranges info.
+ *
+ * Returns 0 on success, negative errno on error.
+ */
+static int kdump_setup_usable_lmb(struct drmem_lmb *lmb, const __be32 **usm,
+ void *data)
+{
+ struct umem_info *um_info;
+ uint64_t base, end, *buf;
+ int cnt, tmp_idx, ret;
+
+ /*
+ * kdump load isn't supported on kernels already booted with
+ * linux,drconf-usable-memory property.
+ */
+ if (*usm) {
+ pr_err("linux,drconf-usable-memory property already exists!\n");
+ return -EINVAL;
+ }
+
+ um_info = data;
+ tmp_idx = um_info->idx;
+ buf = check_realloc_usable_mem(um_info, 1);
+ if (!buf)
+ return -ENOMEM;
+
+ um_info->idx++;
+ um_info->buf = buf;
+ base = lmb->base_addr;
+ end = base + drmem_lmb_size() - 1;
+ ret = add_usable_mem(um_info, base, end, &cnt);
+ if (!ret)
+ um_info->buf[tmp_idx] = cpu_to_be64(cnt);
+
+ return ret;
+}
+
+/**
+ * get_node_path_size - Get the full path length of the given node.
+ * @dn: Device Node.
+ *
+ * Also, counts '\0' at the end of the path.
+ * For example, /memory@0 will be "/memory@0\0" => 10 bytes.
+ *
+ * Returns the string size of the node's full path.
+ */
+static int get_node_path_size(struct device_node *dn)
+{
+ int len = 0;
+
+ if (!dn)
+ return 0;
+
+ /* Root node */
+ if (!(dn->parent))
+ return 2;
+
+ while (dn) {
+ len += strlen(dn->full_name) + 1;
+ dn = dn->parent;
+ }
+
+ return len;
+}
+
+/**
+ * get_node_path - Get the full path of the given node.
+ * @node: Device node.
+ *
+ * Allocates buffer for node path. The caller must free the buffer
+ * after use.
+ *
+ * Returns buffer with path on success, NULL otherwise.
+ */
+static char *get_node_path(struct device_node *node)
+{
+ struct device_node *dn;
+ int len, idx, nlen;
+ char *path = NULL;
+ bool end_char;
+
+ if (!node)
+ goto err;
+
+ /*
+ * Get the path size first and use it to iteratively build the path
+ * from node to root.
+ */
+ len = get_node_path_size(node);
+
+ /* Allocate memory for node path */
+ path = kzalloc(ALIGN(len, 8), GFP_KERNEL);
+ if (!path)
+ goto err;
+
+ /*
+ * Iteratively update path from "node" to root by decrementing
+ * index appropriately.
+ *
+ * Adds %NUL at the end of "node" & '/' at the end of all its
+ * parent nodes.
+ */
+ dn = node;
+ idx = len;
+ path[0] = '/';
+ end_char = true;
+ path[--idx] = '\0';
+ while (dn->parent) {
+ if (!end_char)
+ path[--idx] = '/';
+ end_char = false;
+
+ nlen = strlen(dn->full_name);
+ idx -= nlen;
+ memcpy(path + idx, dn->full_name, nlen);
+
+ dn = dn->parent;
+ }
+
+ return path;
+err:
+ kfree(path);
+ return NULL;
+}
+
+/**
+ * add_usable_mem_property - Add usable memory property for the given
+ * memory node.
+ * @fdt: Flattened device tree for the kdump kernel.
+ * @dn: Memory node.
+ * @um_info: Usable memory buffer and ranges info.
+ *
+ * Returns 0 on success, negative errno on error.
+ */
+static int add_usable_mem_property(void *fdt, struct device_node *dn,
+ struct umem_info *um_info)
+{
+ int n_mem_addr_cells, n_mem_size_cells, node;
+ int i, len, ranges, cnt, ret;
+ uint64_t base, end, *buf;
+ const __be32 *prop;
+ char *pathname;
+
+ of_node_get(dn);
+
+ /* Get the full path of the memory node */
+ pathname = get_node_path(dn);
+ if (!pathname) {
+ ret = -ENOMEM;
+ goto out;
+ }
+ pr_debug("Memory node path: %s\n", pathname);
+
+ /* Now that we know the path, find its offset in kdump kernel's fdt */
+ node = fdt_path_offset(fdt, pathname);
+ if (node < 0) {
+ pr_err("Malformed device tree: error reading %s\n",
+ pathname);
+ ret = -EINVAL;
+ goto out;
+ }
+
+ /* Get the address & size cells */
+ n_mem_addr_cells = of_n_addr_cells(dn);
+ n_mem_size_cells = of_n_size_cells(dn);
+ pr_debug("address cells: %d, size cells: %d\n", n_mem_addr_cells,
+ n_mem_size_cells);
+
+ um_info->idx = 0;
+ buf = check_realloc_usable_mem(um_info, 2);
+ if (!buf) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ um_info->buf = buf;
+
+ prop = of_get_property(dn, "reg", &len);
+ if (!prop || len <= 0) {
+ ret = 0;
+ goto out;
+ }
+
+ /*
+ * "reg" property represents sequence of (addr,size) tuples
+ * each representing a memory range.
+ */
+ ranges = (len >> 2) / (n_mem_addr_cells + n_mem_size_cells);
+
+ for (i = 0; i < ranges; i++) {
+ base = of_read_number(prop, n_mem_addr_cells);
+ prop += n_mem_addr_cells;
+ end = base + of_read_number(prop, n_mem_size_cells) - 1;
+ prop += n_mem_size_cells;
+
+ ret = add_usable_mem(um_info, base, end, &cnt);
+ if (ret)
+ goto out;
+ }
+
+ /*
+ * No kdump kernel usable memory found in this memory node.
+ * Write (0,0) tuple in linux,usable-memory property for
+ * this region to be ignored.
+ */
+ if (um_info->idx == 0) {
+ um_info->buf[0] = 0;
+ um_info->buf[1] = 0;
+ um_info->idx = 2;
+ }
+
+ ret = fdt_setprop(fdt, node, "linux,usable-memory", um_info->buf,
+ (um_info->idx * sizeof(*(um_info->buf))));
+
+out:
+ kfree(pathname);
+ of_node_put(dn);
+ return ret;
+}
+
+
+/**
+ * update_usable_mem_fdt - Updates kdump kernel's fdt with linux,usable-memory
+ * and linux,drconf-usable-memory DT properties as
+ * appropriate to restrict its memory usage.
+ * @fdt: Flattened device tree for the kdump kernel.
+ * @usable_mem: Usable memory ranges for kdump kernel.
+ *
+ * Returns 0 on success, negative errno on error.
+ */
+static int update_usable_mem_fdt(void *fdt, struct crash_mem *usable_mem)
+{
+ struct umem_info um_info;
+ struct device_node *dn;
+ int node, ret = 0;
+
+ if (!usable_mem) {
+ pr_err("Usable memory ranges for kdump kernel not found\n");
+ return -ENOENT;
+ }
+
+ node = fdt_path_offset(fdt, "/ibm,dynamic-reconfiguration-memory");
+ if (node == -FDT_ERR_NOTFOUND)
+ pr_debug("No dynamic reconfiguration memory found\n");
+ else if (node < 0) {
+ pr_err("Malformed device tree: error reading /ibm,dynamic-reconfiguration-memory.\n");
+ return -EINVAL;
+ }
+
+ um_info.size = 0;
+ um_info.idx = 0;
+ um_info.buf = NULL;
+ um_info.umrngs = usable_mem;
+
+ dn = of_find_node_by_path("/ibm,dynamic-reconfiguration-memory");
+ if (dn) {
+ ret = walk_drmem_lmbs(dn, &um_info, kdump_setup_usable_lmb);
+ of_node_put(dn);
+
+ if (ret) {
+ pr_err("Could not setup linux,drconf-usable-memory property for kdump\n");
+ goto out;
+ }
+
+ ret = fdt_setprop(fdt, node, "linux,drconf-usable-memory",
+ um_info.buf,
+ (um_info.idx * sizeof(*(um_info.buf))));
+ if (ret) {
+ pr_err("Failed to update fdt with linux,drconf-usable-memory property\n");
+ goto out;
+ }
+ }
+
+ /*
+ * Walk through each memory node and set linux,usable-memory property
+ * for the corresponding node in kdump kernel's fdt.
+ */
+ for_each_node_by_type(dn, "memory") {
+ ret = add_usable_mem_property(fdt, dn, &um_info);
+ if (ret) {
+ pr_err("Failed to set linux,usable-memory property for %s node\n",
+ dn->full_name);
+ goto out;
+ }
+ }
+
+out:
+ kfree(um_info.buf);
+ return ret;
+}
+
/**
* setup_purgatory_ppc64 - initialize PPC64 specific purgatory's global
* variables and call setup_purgatory() to initialize
@@ -293,6 +717,25 @@ int setup_purgatory_ppc64(struct kimage *image, const void *slave_code,

ret = setup_purgatory(image, slave_code, fdt, kernel_load_addr,
fdt_load_addr);
+ if (ret)
+ goto out;
+
+ if (image->type == KEXEC_TYPE_CRASH) {
+ uint32_t my_run_at_load = 1;
+
+ /*
+ * Tell the relocatable kernel to run at the load address
+ * via the word meant for that at offset 0x5c.
+ */
+ ret = kexec_purgatory_get_set_symbol(image, "run_at_load",
+ &my_run_at_load,
+ sizeof(my_run_at_load),
+ false);
+ if (ret)
+ goto out;
+ }
+
+out:
if (ret)
pr_err("Failed to setup purgatory symbols");
return ret;
@@ -314,7 +757,40 @@ int setup_new_fdt_ppc64(const struct kimage *image, void *fdt,
unsigned long initrd_load_addr,
unsigned long initrd_len, const char *cmdline)
{
- return setup_new_fdt(image, fdt, initrd_load_addr, initrd_len, cmdline);
+ struct crash_mem *umem = NULL;
+ int ret;
+
+ ret = setup_new_fdt(image, fdt, initrd_load_addr, initrd_len, cmdline);
+ if (ret)
+ goto out;
+
+ /*
+ * Restrict memory usage for kdump kernel by setting up
+ * usable memory ranges.
+ */
+ if (image->type == KEXEC_TYPE_CRASH) {
+ ret = get_usable_memory_ranges(&umem);
+ if (ret)
+ goto out;
+
+ ret = update_usable_mem_fdt(fdt, umem);
+ if (ret) {
+ pr_err("Error setting up usable-memory property for kdump kernel\n");
+ goto out;
+ }
+
+ /* Ensure we don't touch crashed kernel's memory */
+ ret = fdt_add_mem_rsv(fdt, 0, crashk_res.start);
+ if (ret) {
+ pr_err("Error reserving crash memory: %s\n",
+ fdt_strerror(ret));
+ goto out;
+ }
+ }
+
+out:
+ kfree(umem);
+ return ret;
}

/**


2020-07-26 19:42:53

by Hari Bathini

[permalink] [raw]
Subject: [RESEND PATCH v5 07/11] ppc64/kexec_file: enable early kernel's OPAL calls

A kernel built with CONFIG_PPC_EARLY_DEBUG_OPAL enabled expects r8 & r9
to hold the OPAL base & entry addresses respectively. Setting
these registers allows the kernel to perform OPAL calls before the
device tree is parsed.

Signed-off-by: Hari Bathini <[email protected]>
---

v4 -> v5:
* New patch. Updated opal_base & opal_entry values in r8 & r9 respectively.
This change was part of the below dropped patch in v4:
- https://lore.kernel.org/patchwork/patch/1275667/


arch/powerpc/kexec/file_load_64.c | 16 ++++++++++++++++
arch/powerpc/purgatory/trampoline_64.S | 15 +++++++++++++++
2 files changed, 31 insertions(+)

diff --git a/arch/powerpc/kexec/file_load_64.c b/arch/powerpc/kexec/file_load_64.c
index 8df085a22fd7..a5c1442590b2 100644
--- a/arch/powerpc/kexec/file_load_64.c
+++ b/arch/powerpc/kexec/file_load_64.c
@@ -713,6 +713,8 @@ int setup_purgatory_ppc64(struct kimage *image, const void *slave_code,
const void *fdt, unsigned long kernel_load_addr,
unsigned long fdt_load_addr)
{
+ struct device_node *dn = NULL;
+ uint64_t val;
int ret;

ret = setup_purgatory(image, slave_code, fdt, kernel_load_addr,
@@ -735,9 +737,23 @@ int setup_purgatory_ppc64(struct kimage *image, const void *slave_code,
goto out;
}

+ /* Setup OPAL base & entry values */
+ dn = of_find_node_by_path("/ibm,opal");
+ if (dn) {
+ of_property_read_u64(dn, "opal-base-address", &val);
+ ret = kexec_purgatory_get_set_symbol(image, "opal_base", &val,
+ sizeof(val), false);
+ if (ret)
+ goto out;
+
+ of_property_read_u64(dn, "opal-entry-address", &val);
+ ret = kexec_purgatory_get_set_symbol(image, "opal_entry", &val,
+ sizeof(val), false);
+ }
out:
if (ret)
pr_err("Failed to setup purgatory symbols");
+ of_node_put(dn);
return ret;
}

diff --git a/arch/powerpc/purgatory/trampoline_64.S b/arch/powerpc/purgatory/trampoline_64.S
index a5a83c3f53e6..464af8e8a4cb 100644
--- a/arch/powerpc/purgatory/trampoline_64.S
+++ b/arch/powerpc/purgatory/trampoline_64.S
@@ -61,6 +61,10 @@ master:
li %r4,28
STWX_BE %r17,%r3,%r4 /* Store my cpu as __be32 at byte 28 */
1:
+ /* Load opal base and entry values in r8 & r9 respectively */
+ ld %r8,(opal_base - 0b)(%r18)
+ ld %r9,(opal_entry - 0b)(%r18)
+
/* load the kernel address */
ld %r4,(kernel - 0b)(%r18)

@@ -102,6 +106,17 @@ dt_offset:
.8byte 0x0
.size dt_offset, . - dt_offset

+ .balign 8
+ .globl opal_base
+opal_base:
+ .8byte 0x0
+ .size opal_base, . - opal_base
+
+ .balign 8
+ .globl opal_entry
+opal_entry:
+ .8byte 0x0
+ .size opal_entry, . - opal_entry

.data
.balign 8


2020-07-26 19:44:17

by Hari Bathini

[permalink] [raw]
Subject: [RESEND PATCH v5 09/11] ppc64/kexec_file: prepare elfcore header for crashing kernel

Prepare ELF headers for the crashing kernel's core file using
crash_prepare_elf64_headers() and pass this info on to the kdump
kernel by updating its command line with the elfcorehdr parameter.
Also, add the elfcorehdr location to the reserve map to prevent it
from being stomped on while booting.

Signed-off-by: Hari Bathini <[email protected]>
Tested-by: Pingfan Liu <[email protected]>
Reviewed-by: Thiago Jung Bauermann <[email protected]>
---

v4 -> v5:
* Unchanged. Added Reviewed-by tag from Thiago.

v3 -> v4:
* Added a FIXME tag to indicate issue in adding opal/rtas regions to
core image.
* Folded prepare_elf_headers() function into load_elfcorehdr_segment().

v2 -> v3:
* Unchanged. Added Tested-by tag from Pingfan.

v1 -> v2:
* Tried merging adjacent memory ranges on hitting maximum ranges limit
to reduce reallocations for memory ranges and also, minimize PT_LOAD
segments for elfcore.
* Updated add_rtas_mem_range() & add_opal_mem_range() callsites based on
the new prototype for these functions.


arch/powerpc/include/asm/kexec.h | 6 +
arch/powerpc/kexec/elf_64.c | 12 +++
arch/powerpc/kexec/file_load.c | 49 +++++++++++
arch/powerpc/kexec/file_load_64.c | 165 +++++++++++++++++++++++++++++++++++++
4 files changed, 232 insertions(+)

diff --git a/arch/powerpc/include/asm/kexec.h b/arch/powerpc/include/asm/kexec.h
index f9514ebeffaa..fe885bc3127e 100644
--- a/arch/powerpc/include/asm/kexec.h
+++ b/arch/powerpc/include/asm/kexec.h
@@ -108,12 +108,18 @@ struct kimage_arch {
unsigned long backup_start;
void *backup_buf;

+ unsigned long elfcorehdr_addr;
+ unsigned long elf_headers_sz;
+ void *elf_headers;
+
#ifdef CONFIG_IMA_KEXEC
phys_addr_t ima_buffer_addr;
size_t ima_buffer_size;
#endif
};

+char *setup_kdump_cmdline(struct kimage *image, char *cmdline,
+ unsigned long cmdline_len);
int setup_purgatory(struct kimage *image, const void *slave_code,
const void *fdt, unsigned long kernel_load_addr,
unsigned long fdt_load_addr);
diff --git a/arch/powerpc/kexec/elf_64.c b/arch/powerpc/kexec/elf_64.c
index 76e2fc7e6dc3..d0e459bb2f05 100644
--- a/arch/powerpc/kexec/elf_64.c
+++ b/arch/powerpc/kexec/elf_64.c
@@ -35,6 +35,7 @@ static void *elf64_load(struct kimage *image, char *kernel_buf,
void *fdt;
const void *slave_code;
struct elfhdr ehdr;
+ char *modified_cmdline = NULL;
struct kexec_elf_info elf_info;
struct kexec_buf kbuf = { .image = image, .buf_min = 0,
.buf_max = ppc64_rma_size };
@@ -75,6 +76,16 @@ static void *elf64_load(struct kimage *image, char *kernel_buf,
pr_err("Failed to load kdump kernel segments\n");
goto out;
}
+
+ /* Setup cmdline for kdump kernel case */
+ modified_cmdline = setup_kdump_cmdline(image, cmdline,
+ cmdline_len);
+ if (!modified_cmdline) {
+ pr_err("Setting up cmdline for kdump kernel failed\n");
+ ret = -EINVAL;
+ goto out;
+ }
+ cmdline = modified_cmdline;
}

if (initrd != NULL) {
@@ -131,6 +142,7 @@ static void *elf64_load(struct kimage *image, char *kernel_buf,
pr_err("Error setting up the purgatory.\n");

out:
+ kfree(modified_cmdline);
kexec_free_elf_info(&elf_info);

/* Make kimage_file_post_load_cleanup free the fdt buffer for us. */
diff --git a/arch/powerpc/kexec/file_load.c b/arch/powerpc/kexec/file_load.c
index 38439aba27d7..d52c09729edd 100644
--- a/arch/powerpc/kexec/file_load.c
+++ b/arch/powerpc/kexec/file_load.c
@@ -18,10 +18,45 @@
#include <linux/kexec.h>
#include <linux/of_fdt.h>
#include <linux/libfdt.h>
+#include <asm/setup.h>
#include <asm/ima.h>

#define SLAVE_CODE_SIZE 256 /* First 0x100 bytes */

+/**
+ * setup_kdump_cmdline - Prepend "elfcorehdr=<addr> " to command line
+ * of kdump kernel for exporting the core.
+ * @image: Kexec image
+ * @cmdline: Command line parameters to update.
+ * @cmdline_len: Length of the cmdline parameters.
+ *
+ * kdump segment must be setup before calling this function.
+ *
+ * Returns new cmdline buffer for kdump kernel on success, NULL otherwise.
+ */
+char *setup_kdump_cmdline(struct kimage *image, char *cmdline,
+ unsigned long cmdline_len)
+{
+ int elfcorehdr_strlen;
+ char *cmdline_ptr;
+
+ cmdline_ptr = kzalloc(COMMAND_LINE_SIZE, GFP_KERNEL);
+ if (!cmdline_ptr)
+ return NULL;
+
+ elfcorehdr_strlen = sprintf(cmdline_ptr, "elfcorehdr=0x%lx ",
+ image->arch.elfcorehdr_addr);
+
+ if (elfcorehdr_strlen + cmdline_len > COMMAND_LINE_SIZE) {
+ pr_err("Appending elfcorehdr=<addr> exceeds cmdline size\n");
+ kfree(cmdline_ptr);
+ return NULL;
+ }
+
+ memcpy(cmdline_ptr + elfcorehdr_strlen, cmdline, cmdline_len);
+ return cmdline_ptr;
+}
+
/**
* setup_purgatory - initialize the purgatory's global variables
* @image: kexec image.
@@ -221,6 +256,20 @@ int setup_new_fdt(const struct kimage *image, void *fdt,
}
}

+ if (image->type == KEXEC_TYPE_CRASH) {
+ /*
+ * Avoid elfcorehdr from being stomped on in kdump kernel by
+ * setting up memory reserve map.
+ */
+ ret = fdt_add_mem_rsv(fdt, image->arch.elfcorehdr_addr,
+ image->arch.elf_headers_sz);
+ if (ret) {
+ pr_err("Error reserving elfcorehdr memory: %s\n",
+ fdt_strerror(ret));
+ goto err;
+ }
+ }
+
ret = setup_ima_buffer(image, fdt, chosen_node);
if (ret) {
pr_err("Error setting up the new device tree.\n");
diff --git a/arch/powerpc/kexec/file_load_64.c b/arch/powerpc/kexec/file_load_64.c
index 88408b17a7f6..7a52f0634ce6 100644
--- a/arch/powerpc/kexec/file_load_64.c
+++ b/arch/powerpc/kexec/file_load_64.c
@@ -124,6 +124,83 @@ static int get_usable_memory_ranges(struct crash_mem **mem_ranges)
return ret;
}

+/**
+ * get_crash_memory_ranges - Get crash memory ranges. This list includes
+ * first/crashing kernel's memory regions that
+ * would be exported via an elfcore.
+ * @mem_ranges: Range list to add the memory ranges to.
+ *
+ * Returns 0 on success, negative errno on error.
+ */
+static int get_crash_memory_ranges(struct crash_mem **mem_ranges)
+{
+ struct memblock_region *reg;
+ struct crash_mem *tmem;
+ int ret;
+
+ for_each_memblock(memory, reg) {
+ u64 base, size;
+
+ base = (u64)reg->base;
+ size = (u64)reg->size;
+
+ /* Skip backup memory region, which needs a separate entry */
+ if (base == BACKUP_SRC_START) {
+ if (size > BACKUP_SRC_SIZE) {
+ base = BACKUP_SRC_END + 1;
+ size -= BACKUP_SRC_SIZE;
+ } else
+ continue;
+ }
+
+ ret = add_mem_range(mem_ranges, base, size);
+ if (ret)
+ goto out;
+
+ /* Try merging adjacent ranges before reallocation attempt */
+ if ((*mem_ranges)->nr_ranges == (*mem_ranges)->max_nr_ranges)
+ sort_memory_ranges(*mem_ranges, true);
+ }
+
+ /* Reallocate memory ranges if there is no space to split ranges */
+ tmem = *mem_ranges;
+ if (tmem && (tmem->nr_ranges == tmem->max_nr_ranges)) {
+ tmem = realloc_mem_ranges(mem_ranges);
+ if (!tmem) {
+ ret = -ENOMEM;
+ goto out;
+ }
+ }
+
+ /* Exclude crashkernel region */
+ ret = crash_exclude_mem_range(tmem, crashk_res.start, crashk_res.end);
+ if (ret)
+ goto out;
+
+ /*
+ * FIXME: For now, stay in parity with kexec-tools but if RTAS/OPAL
+ * regions are exported to save their context at the time of
+ * crash, they should actually be backed up just like the
+ * first 64K bytes of memory.
+ */
+ ret = add_rtas_mem_range(mem_ranges);
+ if (ret)
+ goto out;
+
+ ret = add_opal_mem_range(mem_ranges);
+ if (ret)
+ goto out;
+
+ /* create a separate program header for the backup region */
+ ret = add_mem_range(mem_ranges, BACKUP_SRC_START, BACKUP_SRC_SIZE);
+ if (ret)
+ goto out;
+
+ sort_memory_ranges(*mem_ranges, false);
+out:
+ if (ret)
+ pr_err("Failed to setup crash memory ranges\n");
+ return ret;
+}
+
/**
* __locate_mem_hole_top_down - Looks top down for a large enough memory hole
* in the memory regions between buf_min & buf_max
@@ -738,6 +815,81 @@ static int load_backup_segment(struct kimage *image, struct kexec_buf *kbuf)
return 0;
}

+/**
+ * update_backup_region_phdr - Update backup region's offset for the core to
+ * export the region appropriately.
+ * @image: Kexec image.
+ * @ehdr: ELF core header.
+ *
+ * Assumes an exclusive program header is set up for the backup region
+ * in the ELF headers.
+ *
+ * Returns nothing.
+ */
+static void update_backup_region_phdr(struct kimage *image, Elf64_Ehdr *ehdr)
+{
+ Elf64_Phdr *phdr;
+ unsigned int i;
+
+ phdr = (Elf64_Phdr *)(ehdr + 1);
+ for (i = 0; i < ehdr->e_phnum; i++, phdr++) {
+ if (phdr->p_paddr == BACKUP_SRC_START) {
+ phdr->p_offset = image->arch.backup_start;
+ pr_debug("Backup region offset updated to 0x%lx\n",
+ image->arch.backup_start);
+ return;
+ }
+ }
+}
+
+/**
+ * load_elfcorehdr_segment - Setup crash memory ranges and initialize elfcorehdr
+ * segment needed to load kdump kernel.
+ * @image: Kexec image.
+ * @kbuf: Buffer contents and memory parameters.
+ *
+ * Returns 0 on success, negative errno on error.
+ */
+static int load_elfcorehdr_segment(struct kimage *image, struct kexec_buf *kbuf)
+{
+ struct crash_mem *cmem = NULL;
+ unsigned long headers_sz;
+ void *headers = NULL;
+ int ret;
+
+ ret = get_crash_memory_ranges(&cmem);
+ if (ret)
+ goto out;
+
+ /* Setup elfcorehdr segment */
+ ret = crash_prepare_elf64_headers(cmem, false, &headers, &headers_sz);
+ if (ret) {
+ pr_err("Failed to prepare elf headers for the core\n");
+ goto out;
+ }
+
+ /* Fix the offset for backup region in the ELF header */
+ update_backup_region_phdr(image, headers);
+
+ kbuf->buffer = headers;
+ kbuf->mem = KEXEC_BUF_MEM_UNKNOWN;
+ kbuf->bufsz = kbuf->memsz = headers_sz;
+ kbuf->top_down = false;
+
+ ret = kexec_add_buffer(kbuf);
+ if (ret) {
+ vfree(headers);
+ goto out;
+ }
+
+ image->arch.elfcorehdr_addr = kbuf->mem;
+ image->arch.elf_headers_sz = headers_sz;
+ image->arch.elf_headers = headers;
+out:
+ kfree(cmem);
+ return ret;
+}
+
/**
* load_crashdump_segments_ppc64 - Initialize the additional segments needed
* to load kdump kernel.
@@ -759,6 +911,15 @@ int load_crashdump_segments_ppc64(struct kimage *image,
}
pr_debug("Loaded the backup region at 0x%lx\n", kbuf->mem);

+ /* Load elfcorehdr segment - to export crashing kernel's vmcore */
+ ret = load_elfcorehdr_segment(image, kbuf);
+ if (ret) {
+ pr_err("Failed to load elfcorehdr segment\n");
+ return ret;
+ }
+ pr_debug("Loaded elf core header at 0x%lx, bufsz=0x%lx memsz=0x%lx\n",
+ image->arch.elfcorehdr_addr, kbuf->bufsz, kbuf->memsz);
+
return 0;
}

@@ -997,5 +1158,9 @@ int arch_kimage_file_post_load_cleanup(struct kimage *image)
vfree(image->arch.backup_buf);
image->arch.backup_buf = NULL;

+ vfree(image->arch.elf_headers);
+ image->arch.elf_headers = NULL;
+ image->arch.elf_headers_sz = 0;
+
return kexec_image_post_load_cleanup_default(image);
}


2020-07-26 19:45:01

by Hari Bathini

Subject: [RESEND PATCH v5 10/11] ppc64/kexec_file: add appropriate regions for memory reserve map

While the initrd, elfcorehdr and backup regions are already added to
the reserve map, a few regions are still missing from the memory
reserve map. Add them here. And now that all the changes needed to
load a panic kernel are in place, stop returning -EOPNOTSUPP and
advertise kdump support.

Signed-off-by: Hari Bathini <[email protected]>
Tested-by: Pingfan Liu <[email protected]>
Reviewed-by: Thiago Jung Bauermann <[email protected]>
---

v4 -> v5:
* Unchanged.

v3 -> v4:
* Fixed a spellcheck and added Reviewed-by tag from Thiago.

v2 -> v3:
* Unchanged. Added Tested-by tag from Pingfan.

v1 -> v2:
* Updated add_rtas_mem_range() & add_opal_mem_range() callsites based on
the new prototype for these functions.


arch/powerpc/kexec/file_load_64.c | 58 ++++++++++++++++++++++++++++++++++---
1 file changed, 53 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/kexec/file_load_64.c b/arch/powerpc/kexec/file_load_64.c
index 7a52f0634ce6..296be7fc6440 100644
--- a/arch/powerpc/kexec/file_load_64.c
+++ b/arch/powerpc/kexec/file_load_64.c
@@ -201,6 +201,34 @@ static int get_crash_memory_ranges(struct crash_mem **mem_ranges)
return ret;
}

+/**
+ * get_reserved_memory_ranges - Get reserved memory ranges. This list includes
+ * memory regions that should be added to the
+ * memory reserve map to ensure the region is
+ * protected from any mischief.
+ * @mem_ranges: Range list to add the memory ranges to.
+ *
+ * Returns 0 on success, negative errno on error.
+ */
+static int get_reserved_memory_ranges(struct crash_mem **mem_ranges)
+{
+ int ret;
+
+ ret = add_rtas_mem_range(mem_ranges);
+ if (ret)
+ goto out;
+
+ ret = add_tce_mem_ranges(mem_ranges);
+ if (ret)
+ goto out;
+
+ ret = add_reserved_ranges(mem_ranges);
+out:
+ if (ret)
+ pr_err("Failed to setup reserved memory ranges\n");
+ return ret;
+}
+
/**
* __locate_mem_hole_top_down - Looks top down for a large enough memory hole
* in the memory regions between buf_min & buf_max
@@ -1007,8 +1035,8 @@ int setup_new_fdt_ppc64(const struct kimage *image, void *fdt,
unsigned long initrd_load_addr,
unsigned long initrd_len, const char *cmdline)
{
- struct crash_mem *umem = NULL;
- int ret;
+ struct crash_mem *umem = NULL, *rmem = NULL;
+ int i, nr_ranges, ret;

ret = setup_new_fdt(image, fdt, initrd_load_addr, initrd_len, cmdline);
if (ret)
@@ -1051,7 +1079,27 @@ int setup_new_fdt_ppc64(const struct kimage *image, void *fdt,
}
}

+ /* Update memory reserve map */
+ ret = get_reserved_memory_ranges(&rmem);
+ if (ret)
+ goto out;
+
+ nr_ranges = rmem ? rmem->nr_ranges : 0;
+ for (i = 0; i < nr_ranges; i++) {
+ u64 base, size;
+
+ base = rmem->ranges[i].start;
+ size = rmem->ranges[i].end - base + 1;
+ ret = fdt_add_mem_rsv(fdt, base, size);
+ if (ret) {
+ pr_err("Error updating memory reserve map: %s\n",
+ fdt_strerror(ret));
+ goto out;
+ }
+ }
+
out:
+ kfree(rmem);
kfree(umem);
return ret;
}
@@ -1134,10 +1182,10 @@ int arch_kexec_kernel_image_probe(struct kimage *image, void *buf,

/* Get exclude memory ranges needed for setting up kdump segments */
ret = get_exclude_memory_ranges(&(image->arch.exclude_ranges));
- if (ret)
+ if (ret) {
pr_err("Failed to setup exclude memory ranges for buffer lookup\n");
- /* Return this until all changes for panic kernel are in */
- return -EOPNOTSUPP;
+ return ret;
+ }
}

return kexec_image_probe_default(image, buf, buf_len);


2020-07-26 19:45:17

by Hari Bathini

Subject: [RESEND PATCH v5 11/11] ppc64/kexec_file: fix kexec load failure with lack of memory hole

The kexec purgatory has to run in real mode. Only the first memory
block may be accessible in real mode. And, unlike the panic kernel
case, no memory is set aside for a regular kexec load. Another thing
to note is that the memory for the crashkernel is reserved at an
offset of 128MB. So, when crashkernel memory is reserved, the memory
ranges available for loading kexec segments shrink further, as the
generic code only looks for memblock free memory ranges and, in all
likelihood, only a tiny bit of memory from 0 to 128MB would be
available to load kexec segments.

With kdump being used by default in general, kexec file load is
likely to fail almost always. This can be fixed by changing the
memory hole lookup logic for regular kexec to use the same method as
kdump. This means most kexec segments will overlap with the
crashkernel memory region. That should still be ok, as the pages
whose destination address isn't available while loading are placed
in an intermediate location and copied to the actual destination
address during the kexec boot sequence.

Signed-off-by: Hari Bathini <[email protected]>
Tested-by: Pingfan Liu <[email protected]>
Reviewed-by: Thiago Jung Bauermann <[email protected]>
---

v4 -> v5:
* Unchanged.

v3 -> v4:
* Unchanged. Added Reviewed-by tag from Thiago.

v2 -> v3:
* Unchanged. Added Tested-by tag from Pingfan.

v1 -> v2:
* New patch to fix locating memory hole for kexec_file_load (kexec -s -l)
when memory is reserved for crashkernel.


arch/powerpc/kexec/file_load_64.c | 33 ++++++++++++++-------------------
1 file changed, 14 insertions(+), 19 deletions(-)

diff --git a/arch/powerpc/kexec/file_load_64.c b/arch/powerpc/kexec/file_load_64.c
index 296be7fc6440..7933b8990714 100644
--- a/arch/powerpc/kexec/file_load_64.c
+++ b/arch/powerpc/kexec/file_load_64.c
@@ -1122,13 +1122,6 @@ int arch_kexec_locate_mem_hole(struct kexec_buf *kbuf)
u64 buf_min, buf_max;
int ret;

- /*
- * Use the generic kexec_locate_mem_hole for regular
- * kexec_file_load syscall
- */
- if (kbuf->image->type != KEXEC_TYPE_CRASH)
- return kexec_locate_mem_hole(kbuf);
-
/* Look up the exclude ranges list while locating the memory hole */
emem = &(kbuf->image->arch.exclude_ranges);
if (!(*emem) || ((*emem)->nr_ranges == 0)) {
@@ -1136,11 +1129,15 @@ int arch_kexec_locate_mem_hole(struct kexec_buf *kbuf)
return kexec_locate_mem_hole(kbuf);
}

+ buf_min = kbuf->buf_min;
+ buf_max = kbuf->buf_max;
/* Segments for kdump kernel should be within crashkernel region */
- buf_min = (kbuf->buf_min < crashk_res.start ?
- crashk_res.start : kbuf->buf_min);
- buf_max = (kbuf->buf_max > crashk_res.end ?
- crashk_res.end : kbuf->buf_max);
+ if (kbuf->image->type == KEXEC_TYPE_CRASH) {
+ buf_min = (buf_min < crashk_res.start ?
+ crashk_res.start : buf_min);
+ buf_max = (buf_max > crashk_res.end ?
+ crashk_res.end : buf_max);
+ }

if (buf_min > buf_max) {
pr_err("Invalid buffer min and/or max values\n");
@@ -1177,15 +1174,13 @@ int arch_kexec_locate_mem_hole(struct kexec_buf *kbuf)
int arch_kexec_kernel_image_probe(struct kimage *image, void *buf,
unsigned long buf_len)
{
- if (image->type == KEXEC_TYPE_CRASH) {
- int ret;
+ int ret;

- /* Get exclude memory ranges needed for setting up kdump segments */
- ret = get_exclude_memory_ranges(&(image->arch.exclude_ranges));
- if (ret) {
- pr_err("Failed to setup exclude memory ranges for buffer lookup\n");
- return ret;
- }
+ /* Get exclude memory ranges needed for setting up kexec segments */
+ ret = get_exclude_memory_ranges(&(image->arch.exclude_ranges));
+ if (ret) {
+ pr_err("Failed to setup exclude memory ranges for buffer lookup\n");
+ return ret;
}

return kexec_image_probe_default(image, buf, buf_len);


2020-07-28 02:13:52

by Thiago Jung Bauermann

Subject: Re: [RESEND PATCH v5 06/11] ppc64/kexec_file: restrict memory usage of kdump kernel


Hari Bathini <[email protected]> writes:

> Kdump kernel, used for capturing the kernel core image, is supposed
> to use only specific memory regions to avoid corrupting the image to
> be captured. The regions are crashkernel range - the memory reserved
> explicitly for kdump kernel, memory used for the tce-table, the OPAL
> region and RTAS region as applicable. Restrict kdump kernel memory
> to use only these regions by setting up usable-memory DT property.
> Also, tell the kdump kernel to run at the loaded address by setting
> the magic word at 0x5c.
>
> Signed-off-by: Hari Bathini <[email protected]>
> Tested-by: Pingfan Liu <[email protected]>

I liked the new versions of get_node_path_size() and get_node_path().

Reviewed-by: Thiago Jung Bauermann <[email protected]>

--
Thiago Jung Bauermann
IBM Linux Technology Center

2020-07-28 02:20:40

by Thiago Jung Bauermann

Subject: Re: [RESEND PATCH v5 07/11] ppc64/kexec_file: enable early kernel's OPAL calls


Hari Bathini <[email protected]> writes:

> Kernel built with CONFIG_PPC_EARLY_DEBUG_OPAL enabled expects r8 & r9
> to be filled with OPAL base & entry addresses respectively. Setting
> these registers allows the kernel to perform OPAL calls before the
> device tree is parsed.
>
> Signed-off-by: Hari Bathini <[email protected]>

Reviewed-by: Thiago Jung Bauermann <[email protected]>

--
Thiago Jung Bauermann
IBM Linux Technology Center

2020-07-28 02:36:19

by Pingfan Liu

Subject: Re: [RESEND PATCH v5 00/11] ppc64: enable kdump support for kexec_file_load syscall



On 07/27/2020 03:36 AM, Hari Bathini wrote:
> Sorry! There was a gateway issue on my system while posting v5, due to
> which some patches did not make it through. Resending...
>
> This patch series enables kdump support for kexec_file_load system
> call (kexec -s -p) on PPC64. The changes are inspired from kexec-tools
> code but heavily modified for kernel consumption.
>
> The first patch adds a weak arch_kexec_locate_mem_hole() function to
> override locate memory hole logic suiting arch needs. There are some
> special regions in ppc64 which should be avoided while loading buffer
> & there are multiple callers to kexec_add_buffer making it complicated
> to maintain range sanity and using generic lookup at the same time.
>
> The second patch marks ppc64 specific code within arch/powerpc/kexec
> and arch/powerpc/purgatory to make the subsequent code changes easy
> to understand.
>
> The next patch adds helper function to setup different memory ranges
> needed for loading kdump kernel, booting into it and exporting the
> crashing kernel's elfcore.
>
> The fourth patch overrides arch_kexec_locate_mem_hole() function to
> locate memory hole for kdump segments by accounting for the special
> memory regions, referred to as excluded memory ranges, and sets
> kbuf->mem when a suitable memory region is found.
>
> The fifth patch moves walk_drmem_lmbs() out of .init section with
> a few changes to reuse it for setting up kdump kernel's usable memory
> ranges. The next patch uses walk_drmem_lmbs() to look up the LMBs
> and set linux,drconf-usable-memory & linux,usable-memory properties
> in order to restrict kdump kernel's memory usage.
>
> The seventh patch updates purgatory to setup r8 & r9 with opal base
> and opal entry addresses respectively to aid kernels built with
> CONFIG_PPC_EARLY_DEBUG_OPAL enabled. The next patch setups up backup
> region as a kexec segment while loading kdump kernel and teaches
> purgatory to copy data from source to destination.
>
> Patch 09 builds the elfcore header for the running kernel & passes
> the info to kdump kernel via "elfcorehdr=" parameter to export as
> /proc/vmcore file. The next patch sets up the memory reserve map
> for the kexec kernel and also claims kdump support for kdump as
> all the necessary changes are added.
>
> The last patch fixes a lookup issue for `kexec -l -s` case when
> memory is reserved for crashkernel.
>
> Tested the changes successfully on P8, P9 lpars, couple of OpenPOWER
> boxes, one with secureboot enabled, KVM guest and a simulator.
>
> v4 -> v5:
> * Dropped patches 07/12 & 08/12 and updated purgatory to do everything
> in assembly.
I guess you achieve this by carefully selecting instructions to avoid
relocation issues, right?

Thanks,
Pingfan

2020-07-28 02:38:24

by Thiago Jung Bauermann

Subject: Re: [RESEND PATCH v5 08/11] ppc64/kexec_file: setup backup region for kdump kernel


Hari Bathini <[email protected]> writes:

> Though kdump kernel boots from loaded address, the first 64KB of it is
> copied down to real 0. So, setup a backup region and let purgatory
> copy the first 64KB of crashed kernel into this backup region before
> booting into kdump kernel. Update reserve map with backup region and
> crashed kernel's memory to prevent the kdump kernel from accidentally using
> that memory.
>
> Signed-off-by: Hari Bathini <[email protected]>

Reviewed-by: Thiago Jung Bauermann <[email protected]>

--
Thiago Jung Bauermann
IBM Linux Technology Center

2020-07-28 13:02:28

by Michael Ellerman

Subject: Re: [RESEND PATCH v5 03/11] powerpc/kexec_file: add helper functions for getting memory ranges

Hi Hari,

Some comments inline ...

Hari Bathini <[email protected]> writes:
> diff --git a/arch/powerpc/kexec/ranges.c b/arch/powerpc/kexec/ranges.c
> new file mode 100644
> index 000000000000..21bea1b78443
> --- /dev/null
> +++ b/arch/powerpc/kexec/ranges.c
> @@ -0,0 +1,417 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * powerpc code to implement the kexec_file_load syscall
> + *
> + * Copyright (C) 2004 Adam Litke ([email protected])
> + * Copyright (C) 2004 IBM Corp.
> + * Copyright (C) 2004,2005 Milton D Miller II, IBM Corporation
> + * Copyright (C) 2005 R Sharada ([email protected])
> + * Copyright (C) 2006 Mohan Kumar M ([email protected])
> + * Copyright (C) 2020 IBM Corporation
> + *
> + * Based on kexec-tools' kexec-ppc64.c, fs2dt.c.
> + * Heavily modified for the kernel by
> + * Hari Bathini <[email protected]>.

Please just use your name, email addresses bit rot. It's in the commit
log anyway.

> + */
> +
> +#undef DEBUG
^
Don't do that in new code please.

> +#define pr_fmt(fmt) "kexec ranges: " fmt
> +
> +#include <linux/sort.h>
> +#include <linux/kexec.h>
> +#include <linux/of_device.h>
> +#include <linux/slab.h>
> +#include <asm/sections.h>
> +#include <asm/kexec_ranges.h>
> +
> +/**
> + * get_max_nr_ranges - Get the max no. of ranges crash_mem structure
> + * could hold, given the size allocated for it.
> + * @size: Allocation size of crash_mem structure.
> + *
> + * Returns the maximum no. of ranges.
> + */
> +static inline unsigned int get_max_nr_ranges(size_t size)
> +{
> + return ((size - sizeof(struct crash_mem)) /
> + sizeof(struct crash_mem_range));
> +}
> +
> +/**
> + * get_mem_rngs_size - Get the allocated size of mrngs based on
> + * max_nr_ranges and chunk size.
> + * @mrngs: Memory ranges.

mrngs is not a great name, what about memory_ranges or ranges?

Ditto everywhere else you use mrngs.

> + *
> + * Returns the maximum size of @mrngs.
> + */
> +static inline size_t get_mem_rngs_size(struct crash_mem *mrngs)
> +{
> + size_t size;
> +
> + if (!mrngs)
> + return 0;
> +
> + size = (sizeof(struct crash_mem) +
> + (mrngs->max_nr_ranges * sizeof(struct crash_mem_range)));
> +
> + /*
> + * Memory is allocated in size multiple of MEM_RANGE_CHUNK_SZ.
> + * So, align to get the actual length.
> + */
> + return ALIGN(size, MEM_RANGE_CHUNK_SZ);
> +}
> +
> +/**
> + * __add_mem_range - add a memory range to memory ranges list.
> + * @mem_ranges: Range list to add the memory range to.
> + * @base: Base address of the range to add.
> + * @size: Size of the memory range to add.
> + *
> + * (Re)allocates memory, if needed.
> + *
> + * Returns 0 on success, negative errno on error.
> + */
> +static int __add_mem_range(struct crash_mem **mem_ranges, u64 base, u64 size)
> +{
> + struct crash_mem *mrngs = *mem_ranges;
> +
> + if ((mrngs == NULL) || (mrngs->nr_ranges == mrngs->max_nr_ranges)) {

(mrngs == NULL) should just be !mrngs.

> + mrngs = realloc_mem_ranges(mem_ranges);
> + if (!mrngs)
> + return -ENOMEM;
> + }
> +
> + mrngs->ranges[mrngs->nr_ranges].start = base;
> + mrngs->ranges[mrngs->nr_ranges].end = base + size - 1;
> + pr_debug("Added memory range [%#016llx - %#016llx] at index %d\n",
> + base, base + size - 1, mrngs->nr_ranges);
> + mrngs->nr_ranges++;
> + return 0;
> +}
> +
> +/**
> + * __merge_memory_ranges - Merges the given memory ranges list.
> + * @mem_ranges: Range list to merge.
> + *
> + * Assumes a sorted range list.
> + *
> + * Returns nothing.
> + */

A lot of this code is annoyingly similar to the memblock code, though
the internals of that are all static these days.

I guess for now we'll just have to add all this. Maybe in future it can
be consolidated.

> +static void __merge_memory_ranges(struct crash_mem *mrngs)
> +{
> + struct crash_mem_range *rngs;
> + int i, idx;
> +
> + if (!mrngs)
> + return;
> +
> + idx = 0;
> + rngs = &mrngs->ranges[0];
> + for (i = 1; i < mrngs->nr_ranges; i++) {
> + if (rngs[i].start <= (rngs[i-1].end + 1))
> + rngs[idx].end = rngs[i].end;
> + else {
> + idx++;
> + if (i == idx)
> + continue;
> +
> + rngs[idx] = rngs[i];
> + }
> + }
> + mrngs->nr_ranges = idx + 1;
> +}
> +
> +/**
> + * realloc_mem_ranges - reallocate mem_ranges with size incremented
> + * by MEM_RANGE_CHUNK_SZ. Frees up the old memory,
> + * if memory allocation fails.
> + * @mem_ranges: Memory ranges to reallocate.
> + *
> + * Returns pointer to reallocated memory on success, NULL otherwise.
> + */
> +struct crash_mem *realloc_mem_ranges(struct crash_mem **mem_ranges)
> +{
> + struct crash_mem *mrngs = *mem_ranges;
> + unsigned int nr_ranges;
> + size_t size;
> +
> + size = get_mem_rngs_size(mrngs);
> + nr_ranges = mrngs ? mrngs->nr_ranges : 0;
> +
> + size += MEM_RANGE_CHUNK_SZ;
> + mrngs = krealloc(*mem_ranges, size, GFP_KERNEL);
> + if (!mrngs) {
> + kfree(*mem_ranges);
> + *mem_ranges = NULL;
> + return NULL;
> + }
> +
> + mrngs->nr_ranges = nr_ranges;
> + mrngs->max_nr_ranges = get_max_nr_ranges(size);
> + *mem_ranges = mrngs;
> +
> + return mrngs;
> +}
> +
> +/**
> + * add_mem_range - Updates existing memory range, if there is an overlap.
> + * Else, adds a new memory range.
> + * @mem_ranges: Range list to add the memory range to.
> + * @base: Base address of the range to add.
> + * @size: Size of the memory range to add.
> + *
> + * (Re)allocates memory, if needed.
> + *
> + * Returns 0 on success, negative errno on error.
> + */
> +int add_mem_range(struct crash_mem **mem_ranges, u64 base, u64 size)
> +{
> + struct crash_mem *mrngs = *mem_ranges;
> + u64 mstart, mend, end;
> + unsigned int i;
> +
> + if (!size)
> + return 0;
> +
> + end = base + size - 1;
> +
> + if ((mrngs == NULL) || (mrngs->nr_ranges == 0))
> + return __add_mem_range(mem_ranges, base, size);
> +
> + for (i = 0; i < mrngs->nr_ranges; i++) {
> + mstart = mrngs->ranges[i].start;
> + mend = mrngs->ranges[i].end;
> + if (base < mend && end > mstart) {
> + if (base < mstart)
> + mrngs->ranges[i].start = base;
> + if (end > mend)
> + mrngs->ranges[i].end = end;
> + return 0;
> + }
> + }
> +
> + return __add_mem_range(mem_ranges, base, size);
> +}
> +
> +/**
> + * add_tce_mem_ranges - Adds tce-table range to the given memory ranges list.
> + * @mem_ranges: Range list to add the memory range(s) to.
> + *
> + * Returns 0 on success, negative errno on error.
> + */
> +int add_tce_mem_ranges(struct crash_mem **mem_ranges)

Not sure this and the other add_foo_mem_ranges() really belong in this patch.

> +{
> + struct device_node *dn;
> + int ret = 0;
> +
> + for_each_node_by_type(dn, "pci") {
> + u64 base;
> + u32 size;
> + int rc;

Do you really need ret and rc?

> + /*
> + * It is ok to have pci nodes without tce. So, ignore
> + * any read errors here.
> + */
> + rc = of_property_read_u64(dn, "linux,tce-base", &base);
> + rc |= of_property_read_u32(dn, "linux,tce-size", &size);
> + if (rc)
> + continue;
> +
> + ret = add_mem_range(mem_ranges, base, size);
> + if (ret)
> + break;
^
dn leaked.
> + }
> +
> + return ret;
> +}
> +
> +/**
> + * add_initrd_mem_range - Adds initrd range to the given memory ranges list,
> + * if the initrd was retained.
> + * @mem_ranges: Range list to add the memory range to.
> + *
> + * Returns 0 on success, negative errno on error.
> + */
> +int add_initrd_mem_range(struct crash_mem **mem_ranges)
> +{
> + u64 base, end;
> + char *str;
> + int ret;
> +
> + /* This range means something only if initrd was retained */
> + str = strstr(saved_command_line, "retain_initrd");
> + if (!str)
> + return 0;

Unfortunate that we have to go and scan the command line again. But I
don't see a better way ATM.

Could be more concise:

if (!strstr(saved_command_line, "retain_initrd"))
return 0;

> +
> + ret = of_property_read_u64(of_chosen, "linux,initrd-start", &base);
> + ret |= of_property_read_u64(of_chosen, "linux,initrd-end", &end);
> + if (!ret)
> + ret = add_mem_range(mem_ranges, base, end - base + 1);
> + return ret;
> +}
> +
> +#ifdef CONFIG_PPC_BOOK3S_64
> +/**
> + * add_htab_mem_range - Adds htab range to the given memory ranges list,
> + * if it exists
> + * @mem_ranges: Range list to add the memory range to.
> + *
> + * Returns 0 on success, negative errno on error.
> + */
> +int add_htab_mem_range(struct crash_mem **mem_ranges)
> +{
> + if (!htab_address)
> + return 0;
> +
> + return add_mem_range(mem_ranges, __pa(htab_address), htab_size_bytes);
> +}
> +#endif
> +
> +/**
> + * add_kernel_mem_range - Adds kernel text region to the given
> + * memory ranges list.
> + * @mem_ranges: Range list to add the memory range to.
> + *
> + * Returns 0 on success, negative errno on error.
> + */
> +int add_kernel_mem_range(struct crash_mem **mem_ranges)
> +{
> + return add_mem_range(mem_ranges, 0, __pa(_end));
> +}
> +
> +/**
> + * add_rtas_mem_range - Adds RTAS region to the given memory ranges list.
> + * @mem_ranges: Range list to add the memory range to.
> + *
> + * Returns 0 on success, negative errno on error.
> + */
> +int add_rtas_mem_range(struct crash_mem **mem_ranges)
> +{
> + struct device_node *dn;
> + int ret = 0;
> +
> + dn = of_find_node_by_path("/rtas");
> + if (dn) {
> + u32 base, size;
> +
> + ret = of_property_read_u32(dn, "linux,rtas-base", &base);
> + ret |= of_property_read_u32(dn, "rtas-size", &size);
> + if (ret)
> + goto out;
> +
> + ret = add_mem_range(mem_ranges, base, size);
> + }
> +
> +out:
> + of_node_put(dn);
> + return ret;
> +}

Or:
struct device_node *dn;
u32 base, size;
int rc;

dn = of_find_node_by_path("/rtas");
if (!dn)
return 0;

rc = of_property_read_u32(dn, "linux,rtas-base", &base);
rc |= of_property_read_u32(dn, "rtas-size", &size);
if (rc == 0)
rc = add_mem_range(mem_ranges, base, size);

of_node_put(dn);
return rc;
}


> +
> +/**
> + * add_opal_mem_range - Adds OPAL region to the given memory ranges list.
> + * @mem_ranges: Range list to add the memory range to.
> + *
> + * Returns 0 on success, negative errno on error.
> + */
> +int add_opal_mem_range(struct crash_mem **mem_ranges)
> +{
> + struct device_node *dn;
> + int ret = 0;
> +
> + dn = of_find_node_by_path("/ibm,opal");
> + if (dn) {
> + u64 base, size;
> +
> + ret = of_property_read_u64(dn, "opal-base-address", &base);
> + ret |= of_property_read_u64(dn, "opal-runtime-size", &size);
> + if (ret)
> + goto out;
> +
> + ret = add_mem_range(mem_ranges, base, size);
> + }
> +
> +out:
> + of_node_put(dn);
> + return ret;
> +}
> +
> +/**
> + * add_reserved_ranges - Adds "/reserved-ranges" regions exported by f/w
> + * to the given memory ranges list.
> + * @mem_ranges: Range list to add the memory ranges to.
> + *
> + * Returns 0 on success, negative errno on error.
> + */
> +int add_reserved_ranges(struct crash_mem **mem_ranges)
> +{
> + int n_mem_addr_cells, n_mem_size_cells, i, len, cells, ret = 0;
> + const __be32 *prop;
> +
> + prop = of_get_property(of_root, "reserved-ranges", &len);
> + if (!prop)
> + return 0;
> +
> + of_node_get(of_root);

You shouldn't need to get the root node, you already used it above anyway.

> + n_mem_addr_cells = of_n_addr_cells(of_root);
> + n_mem_size_cells = of_n_size_cells(of_root);
> + cells = n_mem_addr_cells + n_mem_size_cells;
> +
> + /* Each reserved range is an (address,size) pair */
> + for (i = 0; i < (len / (sizeof(*prop) * cells)); i++) {
^
just u32 would be clearer I think.



cheers

2020-07-28 13:46:59

by Michael Ellerman

Subject: Re: [RESEND PATCH v5 06/11] ppc64/kexec_file: restrict memory usage of kdump kernel

Hari Bathini <[email protected]> writes:
> diff --git a/arch/powerpc/kexec/file_load_64.c b/arch/powerpc/kexec/file_load_64.c
> index 2df6f4273ddd..8df085a22fd7 100644
> --- a/arch/powerpc/kexec/file_load_64.c
> +++ b/arch/powerpc/kexec/file_load_64.c
> @@ -17,9 +17,21 @@
> #include <linux/kexec.h>
> #include <linux/of_fdt.h>
> #include <linux/libfdt.h>
> +#include <linux/of_device.h>
> #include <linux/memblock.h>
> +#include <linux/slab.h>
> +#include <asm/drmem.h>
> #include <asm/kexec_ranges.h>
>
> +struct umem_info {
> + uint64_t *buf; /* data buffer for usable-memory property */
> + uint32_t idx; /* current index */
> + uint32_t size; /* size allocated for the data buffer */

Use kernel types please, u64, u32.

> + /* usable memory ranges to look up */
> + const struct crash_mem *umrngs;

"umrngs".

Given it's part of the umem_info struct could it just be "ranges"?

> +};
> +
> const struct kexec_file_ops * const kexec_file_loaders[] = {
> &kexec_elf64_ops,
> NULL
> @@ -74,6 +86,42 @@ static int get_exclude_memory_ranges(struct crash_mem **mem_ranges)
> return ret;
> }
>
> +/**
> + * get_usable_memory_ranges - Get usable memory ranges. This list includes
> + * regions like crashkernel, opal/rtas & tce-table,
> + * that kdump kernel could use.
> + * @mem_ranges: Range list to add the memory ranges to.
> + *
> + * Returns 0 on success, negative errno on error.
> + */
> +static int get_usable_memory_ranges(struct crash_mem **mem_ranges)
> +{
> + int ret;
> +
> + /*
> + * prom code doesn't take kindly to missing low memory. So, add

I don't know what that's referring to, "prom code" is too vague.

> + * [0, crashk_res.end] instead of [crashk_res.start, crashk_res.end]
> + * to keep it happy.
> + */
> + ret = add_mem_range(mem_ranges, 0, crashk_res.end + 1);
> + if (ret)
> + goto out;
> +
> + ret = add_rtas_mem_range(mem_ranges);
> + if (ret)
> + goto out;
> +
> + ret = add_opal_mem_range(mem_ranges);
> + if (ret)
> + goto out;
> +
> + ret = add_tce_mem_ranges(mem_ranges);
> +out:
> + if (ret)
> + pr_err("Failed to setup usable memory ranges\n");
> + return ret;
> +}
> +
> /**
> * __locate_mem_hole_top_down - Looks top down for a large enough memory hole
> * in the memory regions between buf_min & buf_max
> @@ -273,6 +321,382 @@ static int locate_mem_hole_bottom_up_ppc64(struct kexec_buf *kbuf,
> return ret;
> }
>
> +/**
> + * check_realloc_usable_mem - Reallocate buffer if it can't accommodate entries
> + * @um_info: Usable memory buffer and ranges info.
> + * @cnt: No. of entries to accommodate.
> + *
> + * Frees up the old buffer if memory reallocation fails.
> + *
> + * Returns buffer on success, NULL on error.
> + */
> +static uint64_t *check_realloc_usable_mem(struct umem_info *um_info, int cnt)
> +{
> + void *tbuf;
> +
> + if (um_info->size >=
> + ((um_info->idx + cnt) * sizeof(*(um_info->buf))))
> + return um_info->buf;

This is awkward.

AFAICS you only use um_info->size here, so instead why not store the
number of u64s you have space for, as num for example.

Then the above comparison becomes:

if (um_info->num >= (um_info->idx + count))

Then you only have to calculate the size internally here for the
realloc.

> +
> + um_info->size += MEM_RANGE_CHUNK_SZ;

new_size = um_info->size + MEM_RANGE_CHUNK_SZ;
tbuf = krealloc(um_info->buf, new_size, GFP_KERNEL);

> + tbuf = krealloc(um_info->buf, um_info->size, GFP_KERNEL);
> + if (!tbuf) {
> + um_info->size -= MEM_RANGE_CHUNK_SZ;

Then you can drop this.

> + return NULL;
> + }

um_info->size = new_size;

> +
> + memset(tbuf + um_info->idx, 0, MEM_RANGE_CHUNK_SZ);

Just pass __GFP_ZERO to krealloc?

> + return tbuf;
> +}
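Putting those suggestions together, the helper could look something like the following. This is an untested sketch: realloc() stands in for krealloc(..., GFP_KERNEL) so it builds in user space, and `num`/`new_size` are names I'm making up.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define MEM_RANGE_CHUNK_SZ	2048	/* grow step in bytes, as in the patch */

struct umem_info {
	uint64_t *buf;	/* data buffer for the usable-memory property */
	uint32_t idx;	/* next free u64 slot */
	uint32_t num;	/* u64 slots allocated, rather than bytes */
};

/*
 * Sketch of the suggested shape: callers reason in u64 slots, and bytes
 * are computed only at the (re)allocation. On failure the old buffer is
 * left untouched, so there is no size to roll back.
 */
static uint64_t *check_realloc_usable_mem(struct umem_info *um_info, int cnt)
{
	uint32_t new_size;
	void *tbuf;

	if (um_info->num >= um_info->idx + cnt)
		return um_info->buf;

	new_size = (um_info->num * sizeof(*um_info->buf)) + MEM_RANGE_CHUNK_SZ;
	tbuf = realloc(um_info->buf, new_size);	/* krealloc() in-kernel */
	if (!tbuf)
		return NULL;

	um_info->num = new_size / sizeof(*um_info->buf);
	um_info->buf = tbuf;
	return tbuf;
}
```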
> +
> +/**
> + * add_usable_mem - Add the usable memory ranges within the given memory range
> + * to the buffer
> + * @um_info: Usable memory buffer and ranges info.
> + * @base: Base address of memory range to look for.
> + * @end: End address of memory range to look for.
> + * @cnt: No. of usable memory ranges added to buffer.

One caller never uses this AFAICS.

Couldn't the other caller just compare um_info->idx before and after
the call, and avoid another pass-by-reference parameter?
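i.e. something like the toy below. It's untested and deliberately stripped down (no cpu_to_be64, no reallocation, fixed-size buffer); the point is only that each matched range appends two u64s, so the count falls out of the index delta.

```c
#include <assert.h>
#include <stdint.h>

/* Toy stand-ins just to show the caller-side shape; sizes are made up. */
struct umem_info {
	uint64_t buf[16];
	uint32_t idx;
};

/* Pretend one usable range matched: append (base, size) as two u64s. */
static int add_usable_mem(struct umem_info *um, uint64_t base, uint64_t end)
{
	um->buf[um->idx++] = base;
	um->buf[um->idx++] = end - base + 1;
	return 0;
}

/*
 * The kdump_setup_usable_lmb() side: reserve a slot for the count, let
 * add_usable_mem() append pairs, then derive the count from the index
 * delta instead of a *cnt out-parameter.
 */
static int setup_lmb_counts(struct umem_info *um, uint64_t base, uint64_t end)
{
	uint32_t tmp_idx = um->idx;
	int ret;

	um->idx++;	/* count slot, filled in below */
	ret = add_usable_mem(um, base, end);
	if (!ret)
		um->buf[tmp_idx] = (um->idx - tmp_idx - 1) / 2;
	return ret;
}
```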

> + *
> + * Returns 0 on success, negative errno on error.
> + */
> +static int add_usable_mem(struct umem_info *um_info, uint64_t base,
> + uint64_t end, int *cnt)
> +{
> + uint64_t loc_base, loc_end, *buf;
> + const struct crash_mem *umrngs;
> + int i, add;

add should be bool.

> + *cnt = 0;
> + umrngs = um_info->umrngs;
> + for (i = 0; i < umrngs->nr_ranges; i++) {
> + add = 0;
> + loc_base = umrngs->ranges[i].start;
> + loc_end = umrngs->ranges[i].end;
> + if (loc_base >= base && loc_end <= end)
> + add = 1;
> + else if (base < loc_end && end > loc_base) {
> + if (loc_base < base)
> + loc_base = base;
> + if (loc_end > end)
> + loc_end = end;
> + add = 1;
> + }
> +
> + if (add) {
> + buf = check_realloc_usable_mem(um_info, 2);
> + if (!buf)
> + return -ENOMEM;
> +
> + um_info->buf = buf;
> + buf[um_info->idx++] = cpu_to_be64(loc_base);
> + buf[um_info->idx++] =
> + cpu_to_be64(loc_end - loc_base + 1);
> + (*cnt)++;
> + }
> + }
> +
> + return 0;
> +}
> +
> +/**
> + * kdump_setup_usable_lmb - This is a callback function that gets called by
> + * walk_drmem_lmbs for every LMB to set its
> + * usable memory ranges.
> + * @lmb: LMB info.
> + * @usm: linux,drconf-usable-memory property value.
> + * @data: Pointer to usable memory buffer and ranges info.
> + *
> + * Returns 0 on success, negative errno on error.
> + */
> +static int kdump_setup_usable_lmb(struct drmem_lmb *lmb, const __be32 **usm,
> + void *data)
> +{
> + struct umem_info *um_info;
> + uint64_t base, end, *buf;
> + int cnt, tmp_idx, ret;
> +
> + /*
> + * kdump load isn't supported on kernels already booted with
> + * linux,drconf-usable-memory property.
> + */
> + if (*usm) {
> + pr_err("linux,drconf-usable-memory property already exists!");
> + return -EINVAL;
> + }
> +
> + um_info = data;
> + tmp_idx = um_info->idx;
> + buf = check_realloc_usable_mem(um_info, 1);
> + if (!buf)
> + return -ENOMEM;
> +
> + um_info->idx++;
> + um_info->buf = buf;
> + base = lmb->base_addr;
> + end = base + drmem_lmb_size() - 1;
> + ret = add_usable_mem(um_info, base, end, &cnt);
> + if (!ret)
> + um_info->buf[tmp_idx] = cpu_to_be64(cnt);
> +
> + return ret;
> +}
> +
> +/**
> + * get_node_path_size - Get the full path length of the given node.
> + * @dn: Device Node.
> + *
> + * Also, counts '\0' at the end of the path.
> + * For example, /memory@0 will be "/memory@0\0" => 10 bytes.
> + *
> + * Returns the string size of the node's full path.
> + */
> +static int get_node_path_size(struct device_node *dn)
> +{
> + int len = 0;
> +
> + if (!dn)
> + return 0;
> +
> + /* Root node */
> + if (!(dn->parent))
> + return 2;
> +
> + while (dn) {
> + len += strlen(dn->full_name) + 1;
> + dn = dn->parent;
> + }
> +
> + return len;
> +}
> +
> +/**
> + * get_node_path - Get the full path of the given node.
> + * @node: Device node.
> + *
> + * Allocates buffer for node path. The caller must free the buffer
> + * after use.
> + *
> + * Returns buffer with path on success, NULL otherwise.
> + */
> +static char *get_node_path(struct device_node *node)
> +{


As discussed, this can probably be replaced with snprintf(buf, size, "%pOF", node)?
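For reference, the snprintf pattern that suggestion relies on (query the length with a NULL buffer first, then format once) looks like this in user space. Untested sketch: %s stands in for the kernel-only %pOF specifier, which prints a device node's full path, and malloc() stands in for kmalloc().

```c
#include <stdio.h>
#include <stdlib.h>

/*
 * Replaces both get_node_path_size() and get_node_path(): snprintf with
 * a NULL buffer returns the length that would have been written, so the
 * separate tree-walk to size the buffer goes away.
 */
static char *node_path(const char *full_name)
{
	int len = snprintf(NULL, 0, "/%s", full_name);	/* length query */
	char *buf = malloc(len + 1);

	if (!buf)
		return NULL;
	snprintf(buf, len + 1, "/%s", full_name);
	return buf;
}
```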


cheers

2020-07-28 13:47:21

by Michael Ellerman

Subject: Re: [RESEND PATCH v5 07/11] ppc64/kexec_file: enable early kernel's OPAL calls

Hari Bathini <[email protected]> writes:
> Kernel built with CONFIG_PPC_EARLY_DEBUG_OPAL enabled expects r8 & r9
> to be filled with OPAL base & entry addresses respectively. Setting
> these registers allows the kernel to perform OPAL calls before the
> device tree is parsed.

I'm not convinced we want to do this.

If we do it becomes part of the kexec ABI and we have to honour it into
the future.

And in practice there are no non-development kernels built with OPAL early
debugging enabled, so it's not clear it actually helps anyone other than
developers.

cheers

> v4 -> v5:
> * New patch. Updated opal_base & opal_entry values in r8 & r9 respectively.
> This change was part of the below dropped patch in v4:
> - https://lore.kernel.org/patchwork/patch/1275667/
>
>
> arch/powerpc/kexec/file_load_64.c | 16 ++++++++++++++++
> arch/powerpc/purgatory/trampoline_64.S | 15 +++++++++++++++
> 2 files changed, 31 insertions(+)
>
> diff --git a/arch/powerpc/kexec/file_load_64.c b/arch/powerpc/kexec/file_load_64.c
> index 8df085a22fd7..a5c1442590b2 100644
> --- a/arch/powerpc/kexec/file_load_64.c
> +++ b/arch/powerpc/kexec/file_load_64.c
> @@ -713,6 +713,8 @@ int setup_purgatory_ppc64(struct kimage *image, const void *slave_code,
> const void *fdt, unsigned long kernel_load_addr,
> unsigned long fdt_load_addr)
> {
> + struct device_node *dn = NULL;
> + uint64_t val;
> int ret;
>
> ret = setup_purgatory(image, slave_code, fdt, kernel_load_addr,
> @@ -735,9 +737,23 @@ int setup_purgatory_ppc64(struct kimage *image, const void *slave_code,
> goto out;
> }
>
> + /* Setup OPAL base & entry values */
> + dn = of_find_node_by_path("/ibm,opal");
> + if (dn) {
> + of_property_read_u64(dn, "opal-base-address", &val);
> + ret = kexec_purgatory_get_set_symbol(image, "opal_base", &val,
> + sizeof(val), false);
> + if (ret)
> + goto out;
> +
> + of_property_read_u64(dn, "opal-entry-address", &val);
> + ret = kexec_purgatory_get_set_symbol(image, "opal_entry", &val,
> + sizeof(val), false);
> + }
> out:
> if (ret)
> pr_err("Failed to setup purgatory symbols");
> + of_node_put(dn);
> return ret;
> }
>
> diff --git a/arch/powerpc/purgatory/trampoline_64.S b/arch/powerpc/purgatory/trampoline_64.S
> index a5a83c3f53e6..464af8e8a4cb 100644
> --- a/arch/powerpc/purgatory/trampoline_64.S
> +++ b/arch/powerpc/purgatory/trampoline_64.S
> @@ -61,6 +61,10 @@ master:
> li %r4,28
> STWX_BE %r17,%r3,%r4 /* Store my cpu as __be32 at byte 28 */
> 1:
> + /* Load opal base and entry values in r8 & r9 respectively */
> + ld %r8,(opal_base - 0b)(%r18)
> + ld %r9,(opal_entry - 0b)(%r18)
> +
> /* load the kernel address */
> ld %r4,(kernel - 0b)(%r18)
>
> @@ -102,6 +106,17 @@ dt_offset:
> .8byte 0x0
> .size dt_offset, . - dt_offset
>
> + .balign 8
> + .globl opal_base
> +opal_base:
> + .8byte 0x0
> + .size opal_base, . - opal_base
> +
> + .balign 8
> + .globl opal_entry
> +opal_entry:
> + .8byte 0x0
> + .size opal_entry, . - opal_entry
>
> .data
> .balign 8

2020-07-28 14:15:32

by Michael Ellerman

Subject: Re: [RESEND PATCH v5 08/11] ppc64/kexec_file: setup backup region for kdump kernel

Hari Bathini <[email protected]> writes:
> diff --git a/arch/powerpc/kexec/file_load_64.c b/arch/powerpc/kexec/file_load_64.c
> index a5c1442590b2..88408b17a7f6 100644
> --- a/arch/powerpc/kexec/file_load_64.c
> +++ b/arch/powerpc/kexec/file_load_64.c
> @@ -697,6 +699,69 @@ static int update_usable_mem_fdt(void *fdt, struct crash_mem *usable_mem)
> return ret;
> }
>
> +/**
> + * load_backup_segment - Locate a memory hole to place the backup region.
> + * @image: Kexec image.
> + * @kbuf: Buffer contents and memory parameters.
> + *
> + * Returns 0 on success, negative errno on error.
> + */
> +static int load_backup_segment(struct kimage *image, struct kexec_buf *kbuf)
> +{
> + void *buf;
> + int ret;
> +
> + /* Setup a segment for backup region */
> + buf = vzalloc(BACKUP_SRC_SIZE);

This worried me initially, because we can't copy from physically
discontiguous pages in real mode.

But as you explained this buffer is not used for copying.

I think if you move the large comment below up here, it would be
clearer.


> diff --git a/arch/powerpc/purgatory/trampoline_64.S b/arch/powerpc/purgatory/trampoline_64.S
> index 464af8e8a4cb..d4b52961f592 100644
> --- a/arch/powerpc/purgatory/trampoline_64.S
> +++ b/arch/powerpc/purgatory/trampoline_64.S
> @@ -43,14 +44,38 @@ master:
> mr %r17,%r3 /* save cpu id to r17 */
> mr %r15,%r4 /* save physical address in reg15 */
>
> + bl 0f /* Work out where we're running */
> +0: mflr %r18

I know you just moved it, but this should use:

bcl 20, 31, $+4
mflr %r18

Which is a special form of branch and link that doesn't unbalance the
link stack in the chip.

> + /*
> + * Copy BACKUP_SRC_SIZE bytes from BACKUP_SRC_START to
> + * backup_start 8 bytes at a time.
> + *
> + * Use r3 = dest, r4 = src, r5 = size, r6 = count
> + */
> + ld %r3,(backup_start - 0b)(%r18)
> + cmpdi %cr0,%r3,0

I prefer spaces or tabs between arguments, eg:

cmpdi %cr0, %r3, 0

> + beq 80f /* skip if there is no backup region */

Local labels will make this clearer I think. eg:

beq .Lskip_copy

> + lis %r5,BACKUP_SRC_SIZE@h
> + ori %r5,%r5,BACKUP_SRC_SIZE@l
> + cmpdi %cr0,%r5,0
> + beq 80f /* skip if copy size is zero */
> + lis %r4,BACKUP_SRC_START@h
> + ori %r4,%r4,BACKUP_SRC_START@l
> + li %r6,0
> +70:

.Lcopy_loop:

> + ldx %r0,%r6,%r4
> + stdx %r0,%r6,%r3
> + addi %r6,%r6,8
> + cmpld %cr0,%r6,%r5
> + blt 70b

blt .Lcopy_loop

> +

.Lskip_copy:

> +80:
> or %r3,%r3,%r3 /* ok now to high priority, lets boot */
> lis %r6,0x1
> mtctr %r6 /* delay a bit for slaves to catch up */
> bdnz . /* before we overwrite 0-100 again */


cheers

2020-07-28 20:04:06

by Hari Bathini

[permalink] [raw]
Subject: Re: [RESEND PATCH v5 06/11] ppc64/kexec_file: restrict memory usage of kdump kernel



On 28/07/20 7:14 pm, Michael Ellerman wrote:
> Hari Bathini <[email protected]> writes:
>> diff --git a/arch/powerpc/kexec/file_load_64.c b/arch/powerpc/kexec/file_load_64.c
>> index 2df6f4273ddd..8df085a22fd7 100644
>> --- a/arch/powerpc/kexec/file_load_64.c
>> +++ b/arch/powerpc/kexec/file_load_64.c
>> @@ -17,9 +17,21 @@
>> #include <linux/kexec.h>
>> #include <linux/of_fdt.h>
>> #include <linux/libfdt.h>
>> +#include <linux/of_device.h>
>> #include <linux/memblock.h>
>> +#include <linux/slab.h>
>> +#include <asm/drmem.h>
>> #include <asm/kexec_ranges.h>
>>
>> +struct umem_info {
>> + uint64_t *buf; /* data buffer for usable-memory property */
>> + uint32_t idx; /* current index */
>> + uint32_t size; /* size allocated for the data buffer */
>
> Use kernel types please, u64, u32.
>
>> + /* usable memory ranges to look up */
>> + const struct crash_mem *umrngs;
>
> "umrngs".
>
> Given it's part of the umem_info struct could it just be "ranges"?

True. Actually, having crash_mem_range *ranges + u32 nr_ranges and
populating them seems better. Will do that..

>> + return NULL;
>> + }
>
> um_info->size = new_size;
>
>> +
>> + memset(tbuf + um_info->idx, 0, MEM_RANGE_CHUNK_SZ);
>
> Just pass __GFP_ZERO to krealloc?

There are patches submitted to stable fixing a few modules that use
krealloc with __GFP_ZERO. Also, this zeroing is not really needed.
I will drop the memset instead..

Thanks
Hari

2020-07-28 20:05:55

by Hari Bathini

[permalink] [raw]
Subject: Re: [RESEND PATCH v5 07/11] ppc64/kexec_file: enable early kernel's OPAL calls



On 28/07/20 7:16 pm, Michael Ellerman wrote:
> Hari Bathini <[email protected]> writes:
>> Kernel built with CONFIG_PPC_EARLY_DEBUG_OPAL enabled expects r8 & r9
>> to be filled with OPAL base & entry addresses respectively. Setting
>> these registers allows the kernel to perform OPAL calls before the
>> device tree is parsed.
>
> I'm not convinced we want to do this.
>
> If we do it becomes part of the kexec ABI and we have to honour it into
> the future.
>
> And in practice there are no non-development kernels built with OPAL early
> debugging enabled, so it's not clear it actually helps anyone other than
> developers.
>

Hmmm.. kexec-tools does it since commit d58ad564852c ("kexec/ppc64
Enable early kernel's OPAL calls") for kexec_load syscall. So, we would
be breaking kexec ABI either way, I guess.

Let me put this patch at the end of the series in the respin to let you
decide whether to have it or not..

Thanks
Hari

2020-07-29 01:15:49

by Michael Ellerman

Subject: Re: [RESEND PATCH v5 07/11] ppc64/kexec_file: enable early kernel's OPAL calls

Hari Bathini <[email protected]> writes:
> On 28/07/20 7:16 pm, Michael Ellerman wrote:
>> Hari Bathini <[email protected]> writes:
>>> Kernel built with CONFIG_PPC_EARLY_DEBUG_OPAL enabled expects r8 & r9
>>> to be filled with OPAL base & entry addresses respectively. Setting
>>> these registers allows the kernel to perform OPAL calls before the
>>> device tree is parsed.
>>
>> I'm not convinced we want to do this.
>>
>> If we do it becomes part of the kexec ABI and we have to honour it into
>> the future.
>>
>> And in practice there are no non-development kernels built with OPAL early
>> debugging enabled, so it's not clear it actually helps anyone other than
>> developers.
>>
>
> Hmmm.. kexec-tools does it since commit d58ad564852c ("kexec/ppc64
> Enable early kernel's OPAL calls") for kexec_load syscall. So, we would
> be breaking kexec ABI either way, I guess.

Ugh, OK.

> Let me put this patch at the end of the series in the respin to let you
> decide whether to have it or not..

Thanks.

cheers

2020-07-30 06:06:06

by Hari Bathini

Subject: Re: [RESEND PATCH v5 00/11] ppc64: enable kdump support for kexec_file_load syscall



On 28/07/20 8:02 am, piliu wrote:
>
>
> On 07/27/2020 03:36 AM, Hari Bathini wrote:
>> Sorry! There was a gateway issue on my system while posting v5, due to
>> which some patches did not make it through. Resending...
>>
>> This patch series enables kdump support for kexec_file_load system
>> call (kexec -s -p) on PPC64. The changes are inspired from kexec-tools
>> code but heavily modified for kernel consumption.
>>
>> The first patch adds a weak arch_kexec_locate_mem_hole() function to
>> override locate memory hole logic suiting arch needs. There are some
>> special regions in ppc64 which should be avoided while loading buffer
>> & there are multiple callers to kexec_add_buffer making it complicated
>> to maintain range sanity and using generic lookup at the same time.
>>
>> The second patch marks ppc64 specific code within arch/powerpc/kexec
>> and arch/powerpc/purgatory to make the subsequent code changes easy
>> to understand.
>>
>> The next patch adds helper function to setup different memory ranges
>> needed for loading kdump kernel, booting into it and exporting the
>> crashing kernel's elfcore.
>>
>> The fourth patch overrides arch_kexec_locate_mem_hole() function to
>> locate memory hole for kdump segments by accounting for the special
>> memory regions, referred to as excluded memory ranges, and sets
>> kbuf->mem when a suitable memory region is found.
>>
>> The fifth patch moves walk_drmem_lmbs() out of .init section with
>> a few changes to reuse it for setting up kdump kernel's usable memory
>> ranges. The next patch uses walk_drmem_lmbs() to look up the LMBs
>> and set linux,drconf-usable-memory & linux,usable-memory properties
>> in order to restrict kdump kernel's memory usage.
>>
>> The seventh patch updates purgatory to setup r8 & r9 with opal base
>> and opal entry addresses respectively to aid kernels built with
>> CONFIG_PPC_EARLY_DEBUG_OPAL enabled. The next patch sets up backup
>> region as a kexec segment while loading kdump kernel and teaches
>> purgatory to copy data from source to destination.
>>
>> Patch 09 builds the elfcore header for the running kernel & passes
>> the info to kdump kernel via "elfcorehdr=" parameter to export as
>> /proc/vmcore file. The next patch sets up the memory reserve map
>> for the kexec kernel and also claims kdump support as
>> all the necessary changes are added.
>>
>> The last patch fixes a lookup issue for `kexec -l -s` case when
>> memory is reserved for crashkernel.
>>
>> Tested the changes successfully on P8, P9 lpars, couple of OpenPOWER
>> boxes, one with secureboot enabled, KVM guest and a simulator.
>>
>> v4 -> v5:
>> * Dropped patches 07/12 & 08/12 and updated purgatory to do everything
>> in assembly.

Hello Pingfan,

Sorry, I missed out on responding to this.


> I guess you achieve this by carefully selecting instructions to avoid
> relocation issues, right?

Yes. No far branching or reference to data from elsewhere.

Thanks
Hari