2023-12-22 19:36:33

by Alexander Graf

Subject: [PATCH v2 00/17] kexec: Allow preservation of ftrace buffers

Kexec today considers itself purely a boot loader: When we enter the new
kernel, any state the previous kernel left behind is irrelevant and the
new kernel reinitializes the system.

However, there are use cases where this mode of operation is not what we
actually want. In virtualization hosts for example, we want to use kexec
to update the host kernel while virtual machine memory stays untouched.
When we add device assignment to the mix, we also need to ensure that
IOMMU and VFIO states are untouched. If we add PCIe peer-to-peer DMA, we
need to do the same for the PCI subsystem. If we want to kexec while an
SEV-SNP-enabled virtual machine is running, we need to preserve the VM
context pages and physical memory. See James' and my Linux Plumbers
Conference 2023 presentation for details:

https://lpc.events/event/17/contributions/1485/

To start us on the journey to support all the use cases above, this
patch implements basic infrastructure to allow hand over of kernel state
across kexec (Kexec HandOver, aka KHO). As an example target, we use
ftrace: with this patch set applied, you can read ftrace records from the
pre-kexec environment in your post-kexec one. This creates a very powerful
debugging and performance analysis tool for kexec. It's also slightly
easier to reason about than full blown VFIO state preservation.

== Alternatives ==

There are alternative approaches to (parts of) the problems above:

* Memory Pools [1] - preallocated persistent memory region + allocator
* PRMEM [2] - resizable persistent memory regions with fixed metadata
pointer on the kernel command line + allocator
* Pkernfs [3] - preallocated file system for in-kernel data with fixed
address location on the kernel command line
* PKRAM [4] - handover of user space pages using a fixed metadata page
specified via command line

All of the approaches above fundamentally have the same problem: They
require the administrator to explicitly carve out a physical memory
location because they have no mechanism outside of the kernel command
line to pass data (including memory reservations) between kexec'ing
kernels.

KHO provides that base foundation. We will determine later whether we
still need any of the approaches above for fast bulk memory handover of,
for example, IOMMU page tables. But IMHO they would all be users of KHO,
with KHO providing the foundational primitive to pass metadata and bulk
memory reservations, as well as easy versioning for the data.

== Documentation ==

If people are happy with the approach in this patch set, I will write up
conclusive documentation, including schemas for the metadata, as part of
the next iteration. For now, here's a rudimentary overview:

We introduce a metadata file that the kernels pass between each other. How
they pass it is architecture specific. The file's format is a Flattened
Device Tree (fdt), for which Linux already includes a generator and a
parser. When the root user enables KHO through /sys/kernel/kho/active, the
kernel invokes callbacks to every driver that supports KHO to serialize
its state. When the actual kexec happens, the fdt is part of the image
set that we boot into. In addition, we keep a "scratch region" available
for kexec: a physically contiguous memory region that is guaranteed to
not contain any memory that KHO would preserve. The new kernel bootstraps
itself using the scratch region and marks all handed-over memory as in
use. When drivers that support KHO initialize, they introspect the fdt
and recover their state from it. This includes memory reservations, where
the driver can either discard or claim them.
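
For illustration, a driver-side serialization hook could look roughly
like the minimal sketch below, based on the notifier API this series
introduces. The "foo" node, its compatible string and the state
variables are hypothetical:

static phys_addr_t foo_buf_phys;
static u64 foo_buf_len;

static int foo_kho_notifier(struct notifier_block *self,
                            unsigned long cmd, void *v)
{
    void *fdt = v;
    struct kho_mem mem = {
        .addr = foo_buf_phys,
        .len = foo_buf_len,
    };
    int err = 0;

    switch (cmd) {
    case KEXEC_KHO_ABORT:
        /* Serialization was aborted, resume normal operation */
        return NOTIFY_DONE;
    case KEXEC_KHO_DUMP:
        /* Describe our state in the KHO DT and preserve our memory */
        err |= fdt_begin_node(fdt, "foo");
        err |= fdt_property_string(fdt, "compatible", "foo-v1");
        err |= fdt_property(fdt, "mem", &mem, sizeof(mem));
        err |= fdt_end_node(fdt);
        return err ? NOTIFY_BAD : NOTIFY_DONE;
    default:
        return NOTIFY_BAD;
    }
}

static struct notifier_block foo_kho_nb = {
    .notifier_call = foo_kho_notifier,
};

static int __init foo_init(void)
{
    return register_kho_notifier(&foo_kho_nb);
}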

== Limitations ==

I have currently only implemented file-based kexec. The kernel interfaces
in the patch set are already in place to support user-space kexec as well,
but I have not implemented it yet.

== How to Use ==

To use the code, boot the kernel with the "kho_scratch=" command line
parameter set, e.g. "kho_scratch=512M"; KHO requires a scratch region.

Make sure to fill ftrace with contents that you want to observe after
kexec. Then, before you invoke file-based "kexec -l", activate KHO:

# echo 1 > /sys/kernel/kho/active
# kexec -l Image --initrd=initrd -s
# kexec -e

The new kernel will boot up and contain the previous kernel's trace
buffers in /sys/kernel/debug/tracing/trace.

== Changelog ==

v1 -> v2:
- Removed: tracing: Introduce names for ring buffers
- Removed: tracing: Introduce names for events
- New: kexec: Add config option for KHO
- New: kexec: Add documentation for KHO
- New: tracing: Initialize fields before registering
- New: devicetree: Add bindings for ftrace KHO
- Test bot warning fixes
- Change kconfig option to ARCH_SUPPORTS_KEXEC_KHO
- s/kho_reserve_mem/kho_reserve_previous_mem/g
- s/kho_reserve/kho_reserve_scratch/g
- Remove / reduce ifdefs
- Select crc32
- Make kho_get_fdt() const
- Add stubs for return_mem and claim_mem
- Move kho enums out of ifdef
- Reduce footprint without CONFIG_FTRACE_KHO
- Leave anything that requires a name in trace.c so that ring buffers
can stay anonymous entities
- Put events as an array into a property; use a fingerprint instead of
names to identify them on both the serialization and recovery paths
- Move from names to fdt offsets: trace.c finds the trace array offset,
so the ring buffer code only needs to read out its per-CPU data and
can stay oblivious to its name


Alex

[1] https://lore.kernel.org/all/169645773092.11424.7258549771090599226.stgit@skinsburskii./
[2] https://lore.kernel.org/all/[email protected]/
[3] https://lpc.events/event/17/contributions/1485/attachments/1296/2650/jgowans-preserving-across-kexec.pdf
[4] https://lore.kernel.org/kexec/[email protected]/

Alexander Graf (17):
mm,memblock: Add support for scratch memory
memblock: Declare scratch memory as CMA
kexec: Add Kexec HandOver (KHO) generation helpers
kexec: Add KHO parsing support
kexec: Add KHO support to kexec file loads
kexec: Add config option for KHO
kexec: Add documentation for KHO
arm64: Add KHO support
x86: Add KHO support
tracing: Initialize fields before registering
tracing: Introduce kho serialization
tracing: Add kho serialization of trace buffers
tracing: Recover trace buffers from kexec handover
tracing: Add kho serialization of trace events
tracing: Recover trace events from kexec handover
tracing: Add config option for kexec handover
devicetree: Add bindings for ftrace KHO

Documentation/ABI/testing/sysfs-firmware-kho | 9 +
Documentation/ABI/testing/sysfs-kernel-kho | 53 ++
.../admin-guide/kernel-parameters.txt | 10 +
.../bindings/kho/ftrace/ftrace-array.yaml | 46 ++
.../bindings/kho/ftrace/ftrace-cpu.yaml | 56 ++
.../bindings/kho/ftrace/ftrace.yaml | 48 ++
Documentation/kho/concepts.rst | 88 +++
Documentation/kho/index.rst | 19 +
Documentation/kho/usage.rst | 57 ++
Documentation/subsystem-apis.rst | 1 +
MAINTAINERS | 2 +
arch/arm64/Kconfig | 3 +
arch/arm64/kernel/setup.c | 2 +
arch/arm64/mm/init.c | 8 +
arch/x86/Kconfig | 3 +
arch/x86/boot/compressed/kaslr.c | 55 ++
arch/x86/include/uapi/asm/bootparam.h | 15 +-
arch/x86/kernel/e820.c | 9 +
arch/x86/kernel/kexec-bzimage64.c | 39 ++
arch/x86/kernel/setup.c | 46 ++
arch/x86/mm/init_32.c | 7 +
arch/x86/mm/init_64.c | 7 +
drivers/of/fdt.c | 39 ++
drivers/of/kexec.c | 54 ++
include/linux/kexec.h | 58 ++
include/linux/memblock.h | 19 +
include/linux/ring_buffer.h | 17 +-
include/linux/trace_events.h | 1 +
include/uapi/linux/kexec.h | 6 +
kernel/Kconfig.kexec | 13 +
kernel/Makefile | 2 +
kernel/kexec_file.c | 41 ++
kernel/kexec_kho_in.c | 298 ++++++++++
kernel/kexec_kho_out.c | 526 ++++++++++++++++++
kernel/trace/Kconfig | 14 +
kernel/trace/ring_buffer.c | 243 +++++++-
kernel/trace/trace.c | 96 +++-
kernel/trace/trace_events.c | 14 +-
kernel/trace/trace_events_synth.c | 14 +-
kernel/trace/trace_events_user.c | 4 +
kernel/trace/trace_output.c | 246 +++++++-
kernel/trace/trace_output.h | 5 +
kernel/trace/trace_probe.c | 4 +
mm/Kconfig | 4 +
mm/memblock.c | 83 ++-
45 files changed, 2360 insertions(+), 24 deletions(-)
create mode 100644 Documentation/ABI/testing/sysfs-firmware-kho
create mode 100644 Documentation/ABI/testing/sysfs-kernel-kho
create mode 100644 Documentation/devicetree/bindings/kho/ftrace/ftrace-array.yaml
create mode 100644 Documentation/devicetree/bindings/kho/ftrace/ftrace-cpu.yaml
create mode 100644 Documentation/devicetree/bindings/kho/ftrace/ftrace.yaml
create mode 100644 Documentation/kho/concepts.rst
create mode 100644 Documentation/kho/index.rst
create mode 100644 Documentation/kho/usage.rst
create mode 100644 kernel/kexec_kho_in.c
create mode 100644 kernel/kexec_kho_out.c

--
2.40.1









2023-12-22 19:36:54

by Alexander Graf

Subject: [PATCH v2 02/17] memblock: Declare scratch memory as CMA

When we finish populating our memory, we don't want to lose the scratch
region as memory we can use for useful data. To do that, we mark it as
CMA memory. That means that any allocation within it is limited to
movable memory, which we can happily discard for the next kexec.

That way we no longer lose the scratch region's memory for allocations
after boot.

Signed-off-by: Alexander Graf <[email protected]>

---

v1 -> v2:

- test bot warning fix
---
mm/memblock.c | 30 ++++++++++++++++++++++++++----
1 file changed, 26 insertions(+), 4 deletions(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index e89e6c8f9d75..3700c2c1a96d 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -16,6 +16,7 @@
#include <linux/kmemleak.h>
#include <linux/seq_file.h>
#include <linux/memblock.h>
+#include <linux/page-isolation.h>

#include <asm/sections.h>
#include <linux/io.h>
@@ -1100,10 +1101,6 @@ static bool should_skip_region(struct memblock_type *type,
if ((flags & MEMBLOCK_SCRATCH) && !memblock_is_scratch(m))
return true;

- /* Leave scratch memory alone after scratch-only phase */
- if (!(flags & MEMBLOCK_SCRATCH) && memblock_is_scratch(m))
- return true;
-
return false;
}

@@ -2153,6 +2150,20 @@ static void __init __free_pages_memory(unsigned long start, unsigned long end)
}
}

+#ifdef CONFIG_MEMBLOCK_SCRATCH
+static void reserve_scratch_mem(phys_addr_t start, phys_addr_t end)
+{
+ ulong start_pfn = pageblock_start_pfn(PFN_DOWN(start));
+ ulong end_pfn = pageblock_align(PFN_UP(end));
+ ulong pfn;
+
+ for (pfn = start_pfn; pfn < end_pfn; pfn += pageblock_nr_pages) {
+ /* Mark as CMA to prevent kernel allocations in it */
+ set_pageblock_migratetype(pfn_to_page(pfn), MIGRATE_CMA);
+ }
+}
+#endif
+
static unsigned long __init __free_memory_core(phys_addr_t start,
phys_addr_t end)
{
@@ -2214,6 +2225,17 @@ static unsigned long __init free_low_memory_core_early(void)

memmap_init_reserved_pages();

+#ifdef CONFIG_MEMBLOCK_SCRATCH
+ /*
+ * Mark scratch mem as CMA before we return it. That way we ensure that
+ * no kernel allocations happen on it. That means we can reuse it as
+ * scratch memory again later.
+ */
+ __for_each_mem_range(i, &memblock.memory, NULL, NUMA_NO_NODE,
+ MEMBLOCK_SCRATCH, &start, &end, NULL)
+ reserve_scratch_mem(start, end);
+#endif
+
/*
* We need to use NUMA_NO_NODE instead of NODE_DATA(0)->node_id
* because in some case like Node0 doesn't have RAM installed
--
2.40.1








2023-12-22 19:36:56

by Alexander Graf

Subject: [PATCH v2 01/17] mm,memblock: Add support for scratch memory

With KHO (Kexec HandOver), we need a way to ensure that the new kernel
does not allocate memory on top of any memory regions that the previous
kernel was handing over. To know where those are, we need to include
them in the reserved memblocks array, which may not be big enough to hold
all of them. Resizing the array in turn requires allocating memory. That
brings us into a catch-22 situation.

The solution to that is the scratch region: a safe region to operate in.
KHO provides a "scratch region" as part of its metadata. This scratch
region is a single, contiguous memory block that we know does not
contain any KHO allocations. We can allocate exclusively from there until
kernel initialization reaches a point where it knows about all the
KHO memory reservations. We introduce a new memblock_set_scratch_only()
function that allows KHO to indicate that any memblock allocation must
happen from the scratch region.

Later, we may want to perform another KHO kexec. For that, we reuse the
same scratch region. To ensure that no data which will eventually be
handed over gets allocated inside the scratch region, we flip the
semantics of the scratch region with memblock_clear_scratch_only(): after
that call, no allocations may happen from scratch memblock regions. We
will lift that restriction in the next patch.
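
As a minimal sketch of the two phases, using the helpers this patch
introduces (the call site is illustrative; the real hooks follow in the
architecture patches of this series):

static void __init scratch_phases_example(phys_addr_t scratch_phys,
                                          phys_addr_t scratch_len)
{
    /* Early: the previous kernel guaranteed this range is safe to use */
    memblock_add(scratch_phys, scratch_len);
    memblock_mark_scratch(scratch_phys, scratch_len);
    memblock_set_scratch_only();

    /* ... resize the memblock array, ingest all handover reservations ... */

    /*
     * Late: leave the scratch-only phase. From here on, allocations
     * skip scratch regions again - the restriction that the next
     * patch lifts by handing the region over to CMA.
     */
    memblock_clear_scratch_only();
}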

Signed-off-by: Alexander Graf <[email protected]>
---
include/linux/memblock.h | 19 +++++++++++++
mm/Kconfig | 4 +++
mm/memblock.c | 61 +++++++++++++++++++++++++++++++++++++++-
3 files changed, 83 insertions(+), 1 deletion(-)

diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index ae3bde302f70..14043f5b696f 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -42,6 +42,10 @@ extern unsigned long long max_possible_pfn;
* kernel resource tree.
* @MEMBLOCK_RSRV_NOINIT: memory region for which struct pages are
* not initialized (only for reserved regions).
+ * @MEMBLOCK_SCRATCH: memory region that kexec can pass to the next kernel in
+ * handover mode. During early boot, we do not know about all memory reservations
+ * yet, so we get scratch memory from the previous kernel that we know is good
+ * to use. It is the only memory that allocations may happen from in this phase.
*/
enum memblock_flags {
MEMBLOCK_NONE = 0x0, /* No special request */
@@ -50,6 +54,7 @@ enum memblock_flags {
MEMBLOCK_NOMAP = 0x4, /* don't add to kernel direct mapping */
MEMBLOCK_DRIVER_MANAGED = 0x8, /* always detected via a driver */
MEMBLOCK_RSRV_NOINIT = 0x10, /* don't initialize struct pages */
+ MEMBLOCK_SCRATCH = 0x20, /* scratch memory for kexec handover */
};

/**
@@ -129,6 +134,8 @@ int memblock_mark_mirror(phys_addr_t base, phys_addr_t size);
int memblock_mark_nomap(phys_addr_t base, phys_addr_t size);
int memblock_clear_nomap(phys_addr_t base, phys_addr_t size);
int memblock_reserved_mark_noinit(phys_addr_t base, phys_addr_t size);
+int memblock_mark_scratch(phys_addr_t base, phys_addr_t size);
+int memblock_clear_scratch(phys_addr_t base, phys_addr_t size);

void memblock_free_all(void);
void memblock_free(void *ptr, size_t size);
@@ -273,6 +280,11 @@ static inline bool memblock_is_driver_managed(struct memblock_region *m)
return m->flags & MEMBLOCK_DRIVER_MANAGED;
}

+static inline bool memblock_is_scratch(struct memblock_region *m)
+{
+ return m->flags & MEMBLOCK_SCRATCH;
+}
+
int memblock_search_pfn_nid(unsigned long pfn, unsigned long *start_pfn,
unsigned long *end_pfn);
void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn,
@@ -610,5 +622,12 @@ static inline void early_memtest(phys_addr_t start, phys_addr_t end) { }
static inline void memtest_report_meminfo(struct seq_file *m) { }
#endif

+#ifdef CONFIG_MEMBLOCK_SCRATCH
+void memblock_set_scratch_only(void);
+void memblock_clear_scratch_only(void);
+#else
+static inline void memblock_set_scratch_only(void) { }
+static inline void memblock_clear_scratch_only(void) { }
+#endif

#endif /* _LINUX_MEMBLOCK_H */
diff --git a/mm/Kconfig b/mm/Kconfig
index 57cd378c73d6..384369e40f10 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -513,6 +513,10 @@ config ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP
config HAVE_MEMBLOCK_PHYS_MAP
bool

+# Enable memblock support for scratch memory which is needed for KHO
+config MEMBLOCK_SCRATCH
+ bool
+
config HAVE_FAST_GUP
depends on MMU
bool
diff --git a/mm/memblock.c b/mm/memblock.c
index 5a88d6d24d79..e89e6c8f9d75 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -106,6 +106,13 @@ unsigned long min_low_pfn;
unsigned long max_pfn;
unsigned long long max_possible_pfn;

+#ifdef CONFIG_MEMBLOCK_SCRATCH
+/* When set to true, only allocate from MEMBLOCK_SCRATCH ranges */
+static bool scratch_only;
+#else
+#define scratch_only false
+#endif
+
static struct memblock_region memblock_memory_init_regions[INIT_MEMBLOCK_MEMORY_REGIONS] __initdata_memblock;
static struct memblock_region memblock_reserved_init_regions[INIT_MEMBLOCK_RESERVED_REGIONS] __initdata_memblock;
#ifdef CONFIG_HAVE_MEMBLOCK_PHYS_MAP
@@ -168,6 +175,10 @@ bool __init_memblock memblock_has_mirror(void)

static enum memblock_flags __init_memblock choose_memblock_flags(void)
{
+ /* skip non-scratch memory for kho early boot allocations */
+ if (scratch_only)
+ return MEMBLOCK_SCRATCH;
+
return system_has_some_mirror ? MEMBLOCK_MIRROR : MEMBLOCK_NONE;
}

@@ -643,7 +654,7 @@ static int __init_memblock memblock_add_range(struct memblock_type *type,
#ifdef CONFIG_NUMA
WARN_ON(nid != memblock_get_region_node(rgn));
#endif
- WARN_ON(flags != rgn->flags);
+ WARN_ON(flags != (rgn->flags & ~MEMBLOCK_SCRATCH));
nr_new++;
if (insert) {
if (start_rgn == -1)
@@ -890,6 +901,18 @@ int __init_memblock memblock_physmem_add(phys_addr_t base, phys_addr_t size)
}
#endif

+#ifdef CONFIG_MEMBLOCK_SCRATCH
+__init_memblock void memblock_set_scratch_only(void)
+{
+ scratch_only = true;
+}
+
+__init_memblock void memblock_clear_scratch_only(void)
+{
+ scratch_only = false;
+}
+#endif
+
/**
* memblock_setclr_flag - set or clear flag for a memory region
* @type: memblock type to set/clear flag for
@@ -1015,6 +1038,33 @@ int __init_memblock memblock_reserved_mark_noinit(phys_addr_t base, phys_addr_t
MEMBLOCK_RSRV_NOINIT);
}

+/**
+ * memblock_mark_scratch - Mark a memory region with flag MEMBLOCK_SCRATCH.
+ * @base: the base phys addr of the region
+ * @size: the size of the region
+ *
+ * Only memory regions marked with %MEMBLOCK_SCRATCH will be considered for
+ * allocations during early boot with kexec handover.
+ *
+ * Return: 0 on success, -errno on failure.
+ */
+int __init_memblock memblock_mark_scratch(phys_addr_t base, phys_addr_t size)
+{
+ return memblock_setclr_flag(&memblock.memory, base, size, 1, MEMBLOCK_SCRATCH);
+}
+
+/**
+ * memblock_clear_scratch - Clear flag MEMBLOCK_SCRATCH for a specified region.
+ * @base: the base phys addr of the region
+ * @size: the size of the region
+ *
+ * Return: 0 on success, -errno on failure.
+ */
+int __init_memblock memblock_clear_scratch(phys_addr_t base, phys_addr_t size)
+{
+ return memblock_setclr_flag(&memblock.memory, base, size, 0, MEMBLOCK_SCRATCH);
+}
+
static bool should_skip_region(struct memblock_type *type,
struct memblock_region *m,
int nid, int flags)
@@ -1046,6 +1096,14 @@ static bool should_skip_region(struct memblock_type *type,
if (!(flags & MEMBLOCK_DRIVER_MANAGED) && memblock_is_driver_managed(m))
return true;

+ /* In early alloc during kho, we can only consider scratch allocations */
+ if ((flags & MEMBLOCK_SCRATCH) && !memblock_is_scratch(m))
+ return true;
+
+ /* Leave scratch memory alone after scratch-only phase */
+ if (!(flags & MEMBLOCK_SCRATCH) && memblock_is_scratch(m))
+ return true;
+
return false;
}

@@ -2211,6 +2269,7 @@ static const char * const flagname[] = {
[ilog2(MEMBLOCK_MIRROR)] = "MIRROR",
[ilog2(MEMBLOCK_NOMAP)] = "NOMAP",
[ilog2(MEMBLOCK_DRIVER_MANAGED)] = "DRV_MNG",
+ [ilog2(MEMBLOCK_SCRATCH)] = "SCRATCH",
};

static int memblock_debug_show(struct seq_file *m, void *private)
--
2.40.1








2023-12-22 19:37:26

by Alexander Graf

Subject: [PATCH v2 03/17] kexec: Add Kexec HandOver (KHO) generation helpers

This patch adds the core infrastructure to generate Kexec HandOver
metadata. Kexec HandOver is a mechanism that allows Linux to preserve
state - arbitrary properties as well as memory locations - across kexec.

It does so using three concepts:

1) Device Tree - Every KHO kexec carries a KHO-specific flattened
device tree blob that describes the state of the system. Device
drivers can register with KHO to serialize their state before kexec.

2) Mem cache - A memblock-like structure that contains full page
ranges of reservations. These cannot be part of the architectural
reservations, because they differ on every kexec.

3) Scratch Region - A CMA region that we allocate in the first kernel.
CMA gives us the guarantee that no handover pages land in that
region, because handover pages must stay at a fixed physical memory
location. We use this region as the place to load future kexec
images into, so they won't collide with any handover data.
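
As a rough illustration of concept 2, the mem cache is just a flat,
sorted array of physical ranges built from the struct kho_mem type this
patch adds to the uapi header. The addresses below are made up:

/* Two preserved ranges, as the next kernel would receive them */
static const struct kho_mem example_mem_cache[] = {
    { .addr = 0x80000000, .len = 0x200000 }, /* e.g. trace buffers */
    { .addr = 0x8a000000, .len = 0x10000 },  /* e.g. driver metadata */
};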

Signed-off-by: Alexander Graf <[email protected]>

---

v1 -> v2:

- s/kho_reserve/kho_reserve_scratch/g
- Move kho enums out of ifdef
---
Documentation/ABI/testing/sysfs-kernel-kho | 53 +++
.../admin-guide/kernel-parameters.txt | 10 +
MAINTAINERS | 1 +
include/linux/kexec.h | 24 ++
include/uapi/linux/kexec.h | 6 +
kernel/Makefile | 1 +
kernel/kexec_kho_out.c | 316 ++++++++++++++++++
7 files changed, 411 insertions(+)
create mode 100644 Documentation/ABI/testing/sysfs-kernel-kho
create mode 100644 kernel/kexec_kho_out.c

diff --git a/Documentation/ABI/testing/sysfs-kernel-kho b/Documentation/ABI/testing/sysfs-kernel-kho
new file mode 100644
index 000000000000..f69e7b81a337
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-kernel-kho
@@ -0,0 +1,53 @@
+What: /sys/kernel/kho/active
+Date: December 2023
+Contact: Alexander Graf <[email protected]>
+Description:
+ Kexec HandOver (KHO) allows Linux to transition the state of
+ compatible drivers into the next kexec'ed kernel. To do so,
+ device drivers will serialize their current state into a DT.
+ While the state is serialized, they are unable to perform
+ any modifications to state that was serialized, such as
+ handed over memory allocations.
+
+ When this file contains "1", the system is in the transition
+ state. When it contains "0", it is not. To switch between the
+ two states, echo the respective number into this file.
+
+What: /sys/kernel/kho/dt_max
+Date: December 2023
+Contact: Alexander Graf <[email protected]>
+Description:
+ KHO needs to allocate a buffer for the DT that gets
+ generated before it knows the final size. By default, it
+ will allocate 10 MiB for it. You can write to this file
+ to modify the size of that allocation.
+
+What: /sys/kernel/kho/scratch_len
+Date: December 2023
+Contact: Alexander Graf <[email protected]>
+Description:
+ To support continuous KHO kexecs, we need to reserve a
+ physically contiguous memory region that will always stay
+ available for future kexec allocations. This file describes
+ the length of that memory region. Kexec user space tooling
+ can use this to determine where it should place its payload
+ images.
+
+What: /sys/kernel/kho/scratch_phys
+Date: December 2023
+Contact: Alexander Graf <[email protected]>
+Description:
+ To support continuous KHO kexecs, we need to reserve a
+ physically contiguous memory region that will always stay
+ available for future kexec allocations. This file describes
+ the physical location of that memory region. Kexec user space
+ tooling can use this to determine where it should place its
+ payload images.
+
+What: /sys/kernel/kho/dt
+Date: December 2023
+Contact: Alexander Graf <[email protected]>
+Description:
+ When KHO is active, the kernel exposes the generated DT that
+ carries its current KHO state in this file. Kexec user space
+ tooling can use this as input file for the KHO payload image.
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 51575cd31741..efeef075617e 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2504,6 +2504,16 @@
kgdbwait [KGDB] Stop kernel execution and enter the
kernel debugger at the earliest opportunity.

+ kho_scratch=n[KMG] [KEXEC] Sets the size of the KHO scratch
+ region. The KHO scratch region is a physically
+ contiguous memory range that can only be used for
+ non-kernel allocations. That way, even when memory is heavily
+ fragmented with handed over memory, kexec will always
+ be able to find contiguous memory to place the next
+ kernel for kexec into.
+
+ The default is 0.
+
kmac= [MIPS] Korina ethernet MAC address.
Configure the RouterBoard 532 series on-chip
Ethernet adapter MAC address.
diff --git a/MAINTAINERS b/MAINTAINERS
index 9104430e148e..2a19bd282dd0 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -11713,6 +11713,7 @@ M: Eric Biederman <[email protected]>
L: [email protected]
S: Maintained
W: http://kernel.org/pub/linux/utils/kernel/kexec/
+F: Documentation/ABI/testing/sysfs-kernel-kho
F: include/linux/kexec.h
F: include/uapi/linux/kexec.h
F: kernel/kexec*
diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index 8227455192b7..5d3b6b015838 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -21,6 +21,8 @@

#include <uapi/linux/kexec.h>
#include <linux/verification.h>
+#include <linux/libfdt.h>
+#include <linux/notifier.h>

extern note_buf_t __percpu *crash_notes;

@@ -516,6 +518,28 @@ void set_kexec_sig_enforced(void);
static inline void set_kexec_sig_enforced(void) {}
#endif

+/* Notifier index */
+enum kho_event {
+ KEXEC_KHO_DUMP = 0,
+ KEXEC_KHO_ABORT = 1,
+};
+
+#ifdef CONFIG_KEXEC_KHO
+extern phys_addr_t kho_scratch_phys;
+extern phys_addr_t kho_scratch_len;
+
+/* egest handover metadata */
+void kho_reserve_scratch(void);
+int register_kho_notifier(struct notifier_block *nb);
+int unregister_kho_notifier(struct notifier_block *nb);
+bool kho_is_active(void);
+#else
+static inline void kho_reserve_scratch(void) {}
+static inline int register_kho_notifier(struct notifier_block *nb) { return -EINVAL; }
+static inline int unregister_kho_notifier(struct notifier_block *nb) { return -EINVAL; }
+static inline bool kho_is_active(void) { return false; }
+#endif
+
#endif /* !defined(__ASSEBMLY__) */

#endif /* LINUX_KEXEC_H */
diff --git a/include/uapi/linux/kexec.h b/include/uapi/linux/kexec.h
index 01766dd839b0..d02ffd5960d6 100644
--- a/include/uapi/linux/kexec.h
+++ b/include/uapi/linux/kexec.h
@@ -49,6 +49,12 @@
/* The artificial cap on the number of segments passed to kexec_load. */
#define KEXEC_SEGMENT_MAX 16

+/* KHO passes an array of kho_mem as "mem cache" to the new kernel */
+struct kho_mem {
+ __u64 addr;
+ __u64 len;
+};
+
#ifndef __KERNEL__
/*
* This structure is used to hold the arguments that are used when
diff --git a/kernel/Makefile b/kernel/Makefile
index 3947122d618b..a6bd31e22c09 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -73,6 +73,7 @@ obj-$(CONFIG_KEXEC_CORE) += kexec_core.o
obj-$(CONFIG_KEXEC) += kexec.o
obj-$(CONFIG_KEXEC_FILE) += kexec_file.o
obj-$(CONFIG_KEXEC_ELF) += kexec_elf.o
+obj-$(CONFIG_KEXEC_KHO) += kexec_kho_out.o
obj-$(CONFIG_BACKTRACE_SELF_TEST) += backtracetest.o
obj-$(CONFIG_COMPAT) += compat.o
obj-$(CONFIG_CGROUPS) += cgroup/
diff --git a/kernel/kexec_kho_out.c b/kernel/kexec_kho_out.c
new file mode 100644
index 000000000000..765cf6ba7a46
--- /dev/null
+++ b/kernel/kexec_kho_out.c
@@ -0,0 +1,316 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * kexec_kho_out.c - kexec handover code to egest metadata.
+ * Copyright (C) 2023 Alexander Graf <[email protected]>
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/cma.h>
+#include <linux/kexec.h>
+#include <linux/device.h>
+#include <linux/compiler.h>
+#include <linux/kmsg_dump.h>
+
+struct kho_out {
+ struct kobject *kobj;
+ bool active;
+ struct cma *cma;
+ struct blocking_notifier_head chain_head;
+ void *dt;
+ u64 dt_len;
+ u64 dt_max;
+ struct mutex lock;
+};
+
+static struct kho_out kho = {
+ .dt_max = (1024 * 1024 * 10),
+ .chain_head = BLOCKING_NOTIFIER_INIT(kho.chain_head),
+ .lock = __MUTEX_INITIALIZER(kho.lock),
+};
+
+/*
+ * Size for scratch (non-KHO) memory. With KHO enabled, memory can become
+ * fragmented because KHO regions may be anywhere in physical address
+ * space. The scratch region gives us a safe zone that we will never see
+ * KHO allocations from. This is where we can later safely load our new kexec
+ * images into.
+ */
+static phys_addr_t kho_scratch_size __initdata;
+
+int register_kho_notifier(struct notifier_block *nb)
+{
+ return blocking_notifier_chain_register(&kho.chain_head, nb);
+}
+EXPORT_SYMBOL_GPL(register_kho_notifier);
+
+int unregister_kho_notifier(struct notifier_block *nb)
+{
+ return blocking_notifier_chain_unregister(&kho.chain_head, nb);
+}
+EXPORT_SYMBOL_GPL(unregister_kho_notifier);
+
+bool kho_is_active(void)
+{
+ return kho.active;
+}
+EXPORT_SYMBOL_GPL(kho_is_active);
+
+static ssize_t raw_read(struct file *file, struct kobject *kobj,
+ struct bin_attribute *attr, char *buf,
+ loff_t pos, size_t count)
+{
+ mutex_lock(&kho.lock);
+ memcpy(buf, attr->private + pos, count);
+ mutex_unlock(&kho.lock);
+
+ return count;
+}
+
+static BIN_ATTR(dt, 0400, raw_read, NULL, 0);
+
+static int kho_expose_dt(void *fdt)
+{
+ long fdt_len = fdt_totalsize(fdt);
+ int err;
+
+ kho.dt = fdt;
+ kho.dt_len = fdt_len;
+
+ bin_attr_dt.size = fdt_totalsize(fdt);
+ bin_attr_dt.private = fdt;
+ err = sysfs_create_bin_file(kho.kobj, &bin_attr_dt);
+
+ return err;
+}
+
+static void kho_abort(void)
+{
+ if (!kho.active)
+ return;
+
+ sysfs_remove_bin_file(kho.kobj, &bin_attr_dt);
+
+ kvfree(kho.dt);
+ kho.dt = NULL;
+ kho.dt_len = 0;
+
+ blocking_notifier_call_chain(&kho.chain_head, KEXEC_KHO_ABORT, NULL);
+
+ kho.active = false;
+}
+
+static int kho_serialize(void)
+{
+ void *fdt = NULL;
+ int err;
+
+ kho.active = true;
+ err = -ENOMEM;
+
+ fdt = kvmalloc(kho.dt_max, GFP_KERNEL);
+ if (!fdt)
+ goto out;
+
+ if (fdt_create(fdt, kho.dt_max)) {
+ err = -EINVAL;
+ goto out;
+ }
+
+ err = fdt_finish_reservemap(fdt);
+ if (err)
+ goto out;
+
+ err = fdt_begin_node(fdt, "");
+ if (err)
+ goto out;
+
+ err = fdt_property_string(fdt, "compatible", "kho-v1");
+ if (err)
+ goto out;
+
+ /* Loop through all kho dump functions */
+ err = blocking_notifier_call_chain(&kho.chain_head, KEXEC_KHO_DUMP, fdt);
+ err = notifier_to_errno(err);
+ if (err)
+ goto out;
+
+ /* Close / */
+ err = fdt_end_node(fdt);
+ if (err)
+ goto out;
+
+ err = fdt_finish(fdt);
+ if (err)
+ goto out;
+
+ if (WARN_ON(fdt_check_header(fdt))) {
+ err = -EINVAL;
+ goto out;
+ }
+
+ err = kho_expose_dt(fdt);
+
+out:
+ if (err) {
+ pr_err("kho failed to serialize state: %d", err);
+ kho_abort();
+ }
+ return err;
+}
+
+/* Handling for /sys/kernel/kho */
+
+#define KHO_ATTR_RO(_name) static struct kobj_attribute _name##_attr = __ATTR_RO_MODE(_name, 0400)
+#define KHO_ATTR_RW(_name) static struct kobj_attribute _name##_attr = __ATTR_RW_MODE(_name, 0600)
+
+static ssize_t active_store(struct kobject *dev, struct kobj_attribute *attr,
+ const char *buf, size_t size)
+{
+ ssize_t retsize = size;
+ bool val = false;
+ int ret;
+
+ if (kstrtobool(buf, &val) < 0)
+ return -EINVAL;
+
+ if (!kho_scratch_len)
+ return -ENOMEM;
+
+ mutex_lock(&kho.lock);
+ if (val != kho.active) {
+ if (val) {
+ ret = kho_serialize();
+ if (ret) {
+ retsize = -EINVAL;
+ goto out;
+ }
+ } else {
+ kho_abort();
+ }
+ }
+
+out:
+ mutex_unlock(&kho.lock);
+ return retsize;
+}
+
+static ssize_t active_show(struct kobject *dev, struct kobj_attribute *attr,
+ char *buf)
+{
+ ssize_t ret;
+
+ mutex_lock(&kho.lock);
+ ret = sysfs_emit(buf, "%d\n", kho.active);
+ mutex_unlock(&kho.lock);
+
+ return ret;
+}
+KHO_ATTR_RW(active);
+
+static ssize_t dt_max_store(struct kobject *dev, struct kobj_attribute *attr,
+ const char *buf, size_t size)
+{
+ u64 val;
+
+ if (kstrtoull(buf, 0, &val))
+ return -EINVAL;
+
+ kho.dt_max = val;
+
+ return size;
+}
+
+static ssize_t dt_max_show(struct kobject *dev, struct kobj_attribute *attr,
+ char *buf)
+{
+ return sysfs_emit(buf, "0x%llx\n", kho.dt_max);
+}
+KHO_ATTR_RW(dt_max);
+
+static ssize_t scratch_len_show(struct kobject *dev, struct kobj_attribute *attr,
+ char *buf)
+{
+ return sysfs_emit(buf, "0x%llx\n", kho_scratch_len);
+}
+KHO_ATTR_RO(scratch_len);
+
+static ssize_t scratch_phys_show(struct kobject *dev, struct kobj_attribute *attr,
+ char *buf)
+{
+ return sysfs_emit(buf, "0x%llx\n", kho_scratch_phys);
+}
+KHO_ATTR_RO(scratch_phys);
+
+static __init int kho_out_init(void)
+{
+ int ret = 0;
+
+ kho.kobj = kobject_create_and_add("kho", kernel_kobj);
+ if (!kho.kobj) {
+ ret = -ENOMEM;
+ goto err;
+ }
+
+ ret = sysfs_create_file(kho.kobj, &active_attr.attr);
+ if (ret)
+ goto err;
+
+ ret = sysfs_create_file(kho.kobj, &dt_max_attr.attr);
+ if (ret)
+ goto err;
+
+ ret = sysfs_create_file(kho.kobj, &scratch_phys_attr.attr);
+ if (ret)
+ goto err;
+
+ ret = sysfs_create_file(kho.kobj, &scratch_len_attr.attr);
+ if (ret)
+ goto err;
+
+err:
+ return ret;
+}
+late_initcall(kho_out_init);
+
+static int __init early_kho_scratch(char *p)
+{
+ kho_scratch_size = memparse(p, &p);
+ return 0;
+}
+early_param("kho_scratch", early_kho_scratch);
+
+/**
+ * kho_reserve_scratch - Reserve a contiguous chunk of memory for kexec
+ *
+ * With KHO we can preserve arbitrary pages in the system. To ensure we still
+ * have a large contiguous region of memory when we search the physical address
+ * space for target memory, let's make sure we always have a large CMA region
+ * active. This CMA region will only be used for movable pages which are not a
+ * problem for us during KHO because we can just move them somewhere else.
+ */
+__init void kho_reserve_scratch(void)
+{
+ int r;
+
+ if (kho_get_fdt()) {
+ /*
+ * We came from a previous KHO handover, so we already have
+ * a known good scratch region that we preserve. No need to
+ * allocate another.
+ */
+ return;
+ }
+
+ /* Only allocate KHO scratch memory when we're asked to */
+ if (!kho_scratch_size)
+ return;
+
+ r = cma_declare_contiguous_nid(0, kho_scratch_size, 0, PAGE_SIZE, 0,
+ false, "kho", &kho.cma, NUMA_NO_NODE);
+ if (WARN_ON(r))
+ return;
+
+ kho_scratch_phys = cma_get_base(kho.cma);
+ kho_scratch_len = cma_get_size(kho.cma);
+}
--
2.40.1








2023-12-22 19:37:46

by Alexander Graf

Subject: [PATCH v2 04/17] kexec: Add KHO parsing support

On a KHO kexec, we receive a device tree, a mem cache and a scratch
region with which to populate the state of the system. Provide helper
functions that allow architecture code to easily handle memory
reservations based on them, and give device drivers visibility into the
KHO DT and memory reservations so they can recover their own state.
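
As a sketch of the consumer side, a driver could recover its state
roughly like this with the helpers from this patch. The "foo" node is
hypothetical and mirrors the illustrative producer example from the
cover letter:

static int __init foo_recover(void)
{
    const void *fdt = kho_get_fdt();
    const struct kho_mem *mem;
    int off, len;
    void *state;

    if (!fdt)
        return 0; /* not a KHO boot */

    off = fdt_path_offset(fdt, "/foo");
    if (off < 0)
        return 0; /* the previous kernel left no foo state behind */

    mem = fdt_getprop(fdt, off, "mem", &len);
    if (!mem || len != sizeof(*mem))
        return -EINVAL;

    /* Take ownership of the preserved pages and reparse them ... */
    state = kho_claim_mem(mem);
    pr_debug("foo state recovered at %p\n", state);

    /* ... or, if the state is unwanted, hand the pages back instead: */
    /* kho_return_mem(mem); */

    return 0;
}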

Signed-off-by: Alexander Graf <[email protected]>

---

v1 -> v2:

- s/kho_reserve_mem/kho_reserve_previous_mem/g
- make kho_get_fdt() const
- Add stubs for return_mem and claim_mem
---
Documentation/ABI/testing/sysfs-firmware-kho | 9 +
MAINTAINERS | 1 +
include/linux/kexec.h | 27 +-
kernel/Makefile | 1 +
kernel/kexec_kho_in.c | 298 +++++++++++++++++++
5 files changed, 335 insertions(+), 1 deletion(-)
create mode 100644 Documentation/ABI/testing/sysfs-firmware-kho
create mode 100644 kernel/kexec_kho_in.c

diff --git a/Documentation/ABI/testing/sysfs-firmware-kho b/Documentation/ABI/testing/sysfs-firmware-kho
new file mode 100644
index 000000000000..e4ed2cb7c810
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-firmware-kho
@@ -0,0 +1,9 @@
+What: /sys/firmware/kho/dt
+Date: December 2023
+Contact: Alexander Graf <[email protected]>
+Description:
+ When the kernel was booted with Kexec HandOver (KHO),
+ the device tree that carries metadata about the previous
+ kernel's state is in this file. This file may disappear
+ once all of its consumers have finished interpreting their
+ metadata.
diff --git a/MAINTAINERS b/MAINTAINERS
index 2a19bd282dd0..61bdfd47bb23 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -11713,6 +11713,7 @@ M: Eric Biederman <[email protected]>
L: [email protected]
S: Maintained
W: http://kernel.org/pub/linux/utils/kernel/kexec/
+F: Documentation/ABI/testing/sysfs-firmware-kho
F: Documentation/ABI/testing/sysfs-kernel-kho
F: include/linux/kexec.h
F: include/uapi/linux/kexec.h
diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index 5d3b6b015838..765f71976230 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -528,13 +528,38 @@ enum kho_event {
extern phys_addr_t kho_scratch_phys;
extern phys_addr_t kho_scratch_len;

+/* ingest handover metadata */
+void kho_reserve_previous_mem(void);
+void kho_populate(phys_addr_t dt_phys, phys_addr_t scratch_phys, u64 scratch_len,
+ phys_addr_t mem_phys, u64 mem_len);
+void kho_populate_refcount(void);
+const void *kho_get_fdt(void);
+void kho_return_mem(const struct kho_mem *mem);
+void *kho_claim_mem(const struct kho_mem *mem);
+static inline bool is_kho_boot(void)
+{
+ return !!kho_scratch_phys;
+}
+
/* egest handover metadata */
void kho_reserve_scratch(void);
int register_kho_notifier(struct notifier_block *nb);
int unregister_kho_notifier(struct notifier_block *nb);
bool kho_is_active(void);
#else
-static inline void kho_reserve_scratch(void) {}
+/* ingest handover metadata */
+static inline void kho_reserve_previous_mem(void) { }
+static inline void kho_populate(phys_addr_t dt_phys, phys_addr_t scratch_phys,
+ u64 scratch_len, phys_addr_t mem_phys,
+ u64 mem_len) { }
+static inline void kho_populate_refcount(void) { }
+static inline const void *kho_get_fdt(void) { return NULL; }
+static inline void kho_return_mem(const struct kho_mem *mem) { }
+static inline void *kho_claim_mem(const struct kho_mem *mem) { return NULL; }
+static inline bool is_kho_boot(void) { return false; }
+
+/* egest handover metadata */
+static inline void kho_reserve_scratch(void) { }
static inline int register_kho_notifier(struct notifier_block *nb) { return -EINVAL; }
static inline int unregister_kho_notifier(struct notifier_block *nb) { return -EINVAL; }
static inline bool kho_is_active(void) { return false; }
diff --git a/kernel/Makefile b/kernel/Makefile
index a6bd31e22c09..7c3065e40c75 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -73,6 +73,7 @@ obj-$(CONFIG_KEXEC_CORE) += kexec_core.o
obj-$(CONFIG_KEXEC) += kexec.o
obj-$(CONFIG_KEXEC_FILE) += kexec_file.o
obj-$(CONFIG_KEXEC_ELF) += kexec_elf.o
+obj-$(CONFIG_KEXEC_KHO) += kexec_kho_in.o
obj-$(CONFIG_KEXEC_KHO) += kexec_kho_out.o
obj-$(CONFIG_BACKTRACE_SELF_TEST) += backtracetest.o
obj-$(CONFIG_COMPAT) += compat.o
diff --git a/kernel/kexec_kho_in.c b/kernel/kexec_kho_in.c
new file mode 100644
index 000000000000..5f8e0d9f9e12
--- /dev/null
+++ b/kernel/kexec_kho_in.c
@@ -0,0 +1,298 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * kexec_kho_in.c - kexec handover code to ingest metadata.
+ * Copyright (C) 2023 Alexander Graf <[email protected]>
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/kexec.h>
+#include <linux/device.h>
+#include <linux/compiler.h>
+#include <linux/io.h>
+#include <linux/kmsg_dump.h>
+#include <linux/memblock.h>
+
+/* The kho dt during runtime */
+static void *fdt;
+
+/* Globals to hand over phys/len from early to runtime */
+static phys_addr_t handover_phys __initdata;
+static u32 handover_len __initdata;
+
+static phys_addr_t mem_phys __initdata;
+static u32 mem_len __initdata;
+
+phys_addr_t kho_scratch_phys;
+phys_addr_t kho_scratch_len;
+
+const void *kho_get_fdt(void)
+{
+ return fdt;
+}
+EXPORT_SYMBOL_GPL(kho_get_fdt);
+
+/**
+ * kho_populate_refcount - Scan the DT for any memory ranges. Increase the
+ * affected pages' refcount by 1 for each.
+ */
+__init void kho_populate_refcount(void)
+{
+ const void *fdt = kho_get_fdt();
+ void *mem_virt = __va(mem_phys);
+ int offset = 0, depth = 0, initial_depth = 0, len;
+
+ if (!fdt)
+ return;
+
+ /* Go through the mem list and add 1 for each reference */
+ for (offset = 0;
+ offset >= 0 && depth >= initial_depth;
+ offset = fdt_next_node(fdt, offset, &depth)) {
+ const struct kho_mem *mems;
+ u32 i;
+
+ mems = fdt_getprop(fdt, offset, "mem", &len);
+ if (!mems || len & (sizeof(*mems) - 1))
+ continue;
+
+ for (i = 0; i < len; i += sizeof(*mems)) {
+ const struct kho_mem *mem = ((void *)mems) + i;
+ u64 start_pfn = PFN_DOWN(mem->addr);
+ u64 end_pfn = PFN_UP(mem->addr + mem->len);
+ u64 pfn;
+
+ for (pfn = start_pfn; pfn < end_pfn; pfn++)
+ get_page(pfn_to_page(pfn));
+ }
+ }
+
+ /*
+ * Then reduce the reference count by 1 to offset the initial ref count
+ * of 1. In addition, unreserve the page. That way, we can free_page()
+ * it for every consumer and automatically free it to the global memory
+ * pool when everyone is done.
+ */
+ for (offset = 0; offset < mem_len; offset += sizeof(struct kho_mem)) {
+ struct kho_mem *mem = mem_virt + offset;
+ u64 start_pfn = PFN_DOWN(mem->addr);
+ u64 end_pfn = PFN_UP(mem->addr + mem->len);
+ u64 pfn;
+
+ for (pfn = start_pfn; pfn < end_pfn; pfn++) {
+ struct page *page = pfn_to_page(pfn);
+
+ /*
+ * This is similar to free_reserved_page(), but
+ * preserves the reference count
+ */
+ ClearPageReserved(page);
+ __free_page(page);
+ adjust_managed_page_count(page, 1);
+ }
+ }
+}
+
+static void kho_return_pfn(ulong pfn)
+{
+ struct page *page = pfn_to_page(pfn);
+
+ if (WARN_ON(!page))
+ return;
+ __free_page(page);
+}
+
+/**
+ * kho_return_mem - Notify the kernel that initially reserved memory is no
+ * longer needed. When the last consumer of a page returns their mem, kho
+ * returns the page to the buddy allocator as free page.
+ */
+void kho_return_mem(const struct kho_mem *mem)
+{
+ uint64_t start_pfn, end_pfn, pfn;
+
+ start_pfn = PFN_DOWN(mem->addr);
+ end_pfn = PFN_UP(mem->addr + mem->len);
+
+ for (pfn = start_pfn; pfn < end_pfn; pfn++)
+ kho_return_pfn(pfn);
+}
+EXPORT_SYMBOL_GPL(kho_return_mem);
+
+static void kho_claim_pfn(ulong pfn)
+{
+ struct page *page = pfn_to_page(pfn);
+
+ WARN_ON(!page);
+ if (WARN_ON(page_count(page) != 1))
+ pr_err("Claimed non kho pfn %lx", pfn);
+}
+
+/**
+ * kho_claim_mem - Notify the kernel that a handed over memory range is now in
+ * use by a kernel subsystem and considered an allocated page. This function
+ * removes the reserved state for all pages that the mem spans.
+ */
+void *kho_claim_mem(const struct kho_mem *mem)
+{
+ u64 start_pfn, end_pfn, pfn;
+ void *va = __va(mem->addr);
+
+ start_pfn = PFN_DOWN(mem->addr);
+ end_pfn = PFN_UP(mem->addr + mem->len);
+
+ for (pfn = start_pfn; pfn < end_pfn; pfn++)
+ kho_claim_pfn(pfn);
+
+ return va;
+}
+EXPORT_SYMBOL_GPL(kho_claim_mem);
+
+/**
+ * kho_reserve_previous_mem - Adds all memory reservations into memblocks
+ * and moves us out of the scratch only phase. Must be called after page tables
+ * are initialized and memblock_allow_resize().
+ */
+void __init kho_reserve_previous_mem(void)
+{
+ void *mem_virt = __va(mem_phys);
+ int off, err;
+
+ if (!handover_phys || !mem_phys)
+ return;
+
+ /*
+ * We reached here because we are running inside a working linear map
+ * that allows us to resize memblocks dynamically. Use the chance and
+ * populate the global fdt pointer
+ */
+ fdt = __va(handover_phys);
+
+ off = fdt_path_offset(fdt, "/");
+ if (off < 0) {
+ fdt = NULL;
+ return;
+ }
+
+ err = fdt_node_check_compatible(fdt, off, "kho-v1");
+ if (err) {
+ pr_warn("KHO has invalid compatible, disabling.");
+ return;
+ }
+
+ /* Then populate all preserved memory areas as reserved */
+ for (off = 0; off < mem_len; off += sizeof(struct kho_mem)) {
+ struct kho_mem *mem = mem_virt + off;
+
+ memblock_reserve(mem->addr, mem->len);
+ }
+
+ /* Unreserve the mem cache - we don't need it from here on */
+ memblock_phys_free(mem_phys, mem_len);
+
+ /*
+ * Now we know about all memory reservations, release the scratch only
+ * constraint and allow normal allocations from the scratch region.
+ */
+ memblock_clear_scratch_only();
+}
+
+/* Handling for /sys/firmware/kho */
+static struct kobject *kho_kobj;
+
+static ssize_t raw_read(struct file *file, struct kobject *kobj,
+ struct bin_attribute *attr, char *buf,
+ loff_t pos, size_t count)
+{
+ memcpy(buf, attr->private + pos, count);
+ return count;
+}
+
+static BIN_ATTR(dt, 0400, raw_read, NULL, 0);
+
+static __init int kho_in_init(void)
+{
+ int ret = 0;
+
+ if (!fdt)
+ return 0;
+
+ kho_kobj = kobject_create_and_add("kho", firmware_kobj);
+ if (!kho_kobj) {
+ ret = -ENOMEM;
+ goto err;
+ }
+
+ bin_attr_dt.size = fdt_totalsize(fdt);
+ bin_attr_dt.private = fdt;
+ ret = sysfs_create_bin_file(kho_kobj, &bin_attr_dt);
+ if (ret)
+ goto err;
+
+err:
+ return ret;
+}
+subsys_initcall(kho_in_init);
+
+void __init kho_populate(phys_addr_t handover_dt_phys, phys_addr_t scratch_phys,
+ u64 scratch_len, phys_addr_t mem_cache_phys,
+ u64 mem_cache_len)
+{
+ void *handover_dt;
+
+ /* Determine the real size of the DT */
+ handover_dt = early_memremap(handover_dt_phys, sizeof(struct fdt_header));
+ if (!handover_dt) {
+ pr_warn("setup: failed to memremap kexec FDT (0x%llx)\n", handover_dt_phys);
+ return;
+ }
+
+ if (fdt_check_header(handover_dt)) {
+ pr_warn("setup: kexec handover FDT is invalid (0x%llx)\n", handover_dt_phys);
+ early_memunmap(handover_dt, sizeof(struct fdt_header));
+ return;
+ }
+
+ handover_len = fdt_totalsize(handover_dt);
+ handover_phys = handover_dt_phys;
+
+ /* Reserve the DT so we can still access it in late boot */
+ memblock_reserve(handover_phys, handover_len);
+
+ /* Reserve the mem cache so we can still access it later */
+ memblock_reserve(mem_cache_phys, mem_cache_len);
+
+ /*
+ * We pass a safe contiguous block of memory to use for early boot purposes from
+ * the previous kernel so that we can resize the memblock array as needed.
+ */
+ memblock_add(scratch_phys, scratch_len);
+
+ if (WARN_ON(memblock_mark_scratch(scratch_phys, scratch_len))) {
+ pr_err("Kexec failed to mark the scratch region. Disabling KHO.");
+ handover_len = 0;
+ handover_phys = 0;
+ return;
+ }
+ pr_debug("Marked 0x%lx+0x%lx as scratch", (long)scratch_phys, (long)scratch_len);
+
+ /*
+ * Now that we have a viable region of scratch memory, let's tell the memblocks
+ * allocator to only use that for any allocations. That way we ensure that nothing
+ * scribbles over in use data while we initialize the page tables which we will need
+ * to ingest all memory reservations from the previous kernel.
+ */
+ memblock_set_scratch_only();
+
+ early_memunmap(handover_dt, sizeof(struct fdt_header));
+
+ /* Remember the mem cache location for kho_reserve_previous_mem() */
+ mem_len = mem_cache_len;
+ mem_phys = mem_cache_phys;
+
+ /* Remember the scratch block - we will reuse it again for the next kexec */
+ kho_scratch_phys = scratch_phys;
+ kho_scratch_len = scratch_len;
+
+ pr_info("setup: Found kexec handover data. Will skip init for some devices\n");
+}
--
2.40.1








2023-12-22 19:38:03

by Alexander Graf

Subject: [PATCH v2 05/17] kexec: Add KHO support to kexec file loads

Kexec has two modes: a user-space-driven mode and a kernel-driven mode.
In the kernel-driven mode, kernel code determines the physical
addresses of all target buffers that the payload gets copied into.

With KHO, we can only safely copy payloads into the "scratch area".
Teach the kexec file loader about it, so it only allocates within that
area. In addition, enlighten it with support to ask the KHO subsystem
for its respective payloads to copy into target memory. Also teach the
KHO subsystem how to fill the images for file loads.
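
For the user-space-driven mode, which is not implemented yet, tooling
would have to discover the scratch window itself, for example via the
sysfs files added earlier in this series. A hypothetical user-space
sketch:

#include <stdio.h>
#include <inttypes.h>

static int read_sysfs_u64(const char *path, uint64_t *val)
{
    FILE *f = fopen(path, "r");

    if (!f)
        return -1;
    if (fscanf(f, "%" SCNx64, val) != 1) {
        fclose(f);
        return -1;
    }
    fclose(f);
    return 0;
}

int main(void)
{
    uint64_t phys, len;

    if (read_sysfs_u64("/sys/kernel/kho/scratch_phys", &phys) ||
        read_sysfs_u64("/sys/kernel/kho/scratch_len", &len))
        return 1;

    /* A user-space loader would place all segments inside this window */
    printf("scratch: 0x%" PRIx64 " + 0x%" PRIx64 "\n", phys, len);
    return 0;
}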

Signed-off-by: Alexander Graf <[email protected]>
---
include/linux/kexec.h | 9 ++
kernel/kexec_file.c | 41 ++++++++
kernel/kexec_kho_out.c | 210 +++++++++++++++++++++++++++++++++++++++++
3 files changed, 260 insertions(+)

diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index 765f71976230..39a2990007a8 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -362,6 +362,13 @@ struct kimage {
size_t ima_buffer_size;
#endif

+#ifdef CONFIG_KEXEC_KHO
+ struct {
+ struct kexec_buf dt;
+ struct kexec_buf mem_cache;
+ } kho;
+#endif
+
/* Core ELF header buffer */
void *elf_headers;
unsigned long elf_headers_sz;
@@ -543,6 +550,7 @@ static inline bool is_kho_boot(void)

/* egest handover metadata */
void kho_reserve_scratch(void);
+int kho_fill_kimage(struct kimage *image);
int register_kho_notifier(struct notifier_block *nb);
int unregister_kho_notifier(struct notifier_block *nb);
bool kho_is_active(void);
@@ -560,6 +568,7 @@ static inline bool is_kho_boot(void) { return false; }

/* egest handover metadata */
static inline void kho_reserve_scratch(void) { }
+static inline int kho_fill_kimage(struct kimage *image) { return 0; }
static inline int register_kho_notifier(struct notifier_block *nb) { return -EINVAL; }
static inline int unregister_kho_notifier(struct notifier_block *nb) { return -EINVAL; }
static inline bool kho_is_active(void) { return false; }
diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c
index f9a419cd22d4..d895d0a49bd9 100644
--- a/kernel/kexec_file.c
+++ b/kernel/kexec_file.c
@@ -113,6 +113,13 @@ void kimage_file_post_load_cleanup(struct kimage *image)
image->ima_buffer = NULL;
#endif /* CONFIG_IMA_KEXEC */

+#ifdef CONFIG_KEXEC_KHO
+ kvfree(image->kho.mem_cache.buffer);
+ image->kho.mem_cache = (struct kexec_buf) {};
+ kvfree(image->kho.dt.buffer);
+ image->kho.dt = (struct kexec_buf) {};
+#endif
+
/* See if architecture has anything to cleanup post load */
arch_kimage_file_post_load_cleanup(image);

@@ -249,6 +256,11 @@ kimage_file_prepare_segments(struct kimage *image, int kernel_fd, int initrd_fd,
/* IMA needs to pass the measurement list to the next kernel. */
ima_add_kexec_buffer(image);

+ /* If KHO is active, add its images to the list */
+ ret = kho_fill_kimage(image);
+ if (ret)
+ goto out;
+
/* Call image load handler */
ldata = kexec_image_load_default(image);

@@ -518,6 +530,24 @@ static int locate_mem_hole_callback(struct resource *res, void *arg)
return locate_mem_hole_bottom_up(start, end, kbuf);
}

+#ifdef CONFIG_KEXEC_KHO
+static int kexec_walk_kho_scratch(struct kexec_buf *kbuf,
+ int (*func)(struct resource *, void *))
+{
+ int ret = 0;
+
+ struct resource res = {
+ .start = kho_scratch_phys,
+ .end = kho_scratch_phys + kho_scratch_len - 1,
+ };
+
+ /* Try to fit the kimage into our KHO scratch region */
+ ret = func(&res, kbuf);
+
+ return ret;
+}
+#endif
+
#ifdef CONFIG_ARCH_KEEP_MEMBLOCK
static int kexec_walk_memblock(struct kexec_buf *kbuf,
int (*func)(struct resource *, void *))
@@ -612,6 +642,17 @@ int kexec_locate_mem_hole(struct kexec_buf *kbuf)
if (kbuf->mem != KEXEC_BUF_MEM_UNKNOWN)
return 0;

+#ifdef CONFIG_KEXEC_KHO
+ /*
+ * If KHO is active, only use KHO scratch memory. All other memory
+ * could potentially be handed over.
+ */
+ if (kho_is_active() && kbuf->image->type != KEXEC_TYPE_CRASH) {
+ ret = kexec_walk_kho_scratch(kbuf, locate_mem_hole_callback);
+ return ret == 1 ? 0 : -EADDRNOTAVAIL;
+ }
+#endif
+
if (!IS_ENABLED(CONFIG_ARCH_KEEP_MEMBLOCK))
ret = kexec_walk_resources(kbuf, locate_mem_hole_callback);
else
diff --git a/kernel/kexec_kho_out.c b/kernel/kexec_kho_out.c
index 765cf6ba7a46..2cf5755f5e4a 100644
--- a/kernel/kexec_kho_out.c
+++ b/kernel/kexec_kho_out.c
@@ -50,6 +50,216 @@ int unregister_kho_notifier(struct notifier_block *nb)
}
EXPORT_SYMBOL_GPL(unregister_kho_notifier);

+static int kho_mem_cache_add(void *fdt, struct kho_mem *mem_cache, int size,
+ struct kho_mem *new_mem)
+{
+ int entries = size / sizeof(*mem_cache);
+ u64 new_start = new_mem->addr;
+ u64 new_end = new_mem->addr + new_mem->len;
+ u64 prev_start = 0;
+ u64 prev_end = 0;
+ int i;
+
+ if (WARN_ON((new_start < (kho_scratch_phys + kho_scratch_len)) &&
+ (new_end > kho_scratch_phys))) {
+ pr_err("KHO memory runs over scratch memory");
+ return -EINVAL;
+ }
+
+ /*
+ * We walk the existing sorted mem cache and find the spot where this
+ * new entry would start, so we can insert it right there.
+ */
+ for (i = 0; i < entries; i++) {
+ struct kho_mem *mem = &mem_cache[i];
+ u64 mem_end = (mem->addr + mem->len);
+
+ if (mem_end < new_start) {
+ /* No overlap */
+ prev_start = mem->addr;
+ prev_end = mem->addr + mem->len;
+ continue;
+ } else if ((new_start >= mem->addr) && (new_end <= mem_end)) {
+ /* new_mem fits into mem, skip */
+ return size;
+ } else if ((new_end >= mem->addr) && (new_start <= mem_end)) {
+ /* new_mem and mem overlap, fold them */
+ bool remove = false;
+
+ mem->addr = min(new_start, mem->addr);
+ mem->len = max(mem_end, new_end) - mem->addr;
+ mem_end = (mem->addr + mem->len);
+
+ if (i > 0 && prev_end >= mem->addr) {
+ /* We now overlap with the previous mem, fold */
+ struct kho_mem *prev = &mem_cache[i - 1];
+
+ prev->addr = min(prev->addr, mem->addr);
+ prev->len = max(mem_end, prev_end) - prev->addr;
+ remove = true;
+ } else if (i < (entries - 1) && mem_end >= mem_cache[i + 1].addr) {
+ /* We now overlap with the next mem, fold */
+ struct kho_mem *next = &mem_cache[i + 1];
+ u64 next_end = (next->addr + next->len);
+
+ next->addr = min(next->addr, mem->addr);
+ next->len = max(mem_end, next_end) - next->addr;
+ remove = true;
+ }
+
+ if (remove) {
+ /* We folded this mem into another, remove it */
+ memmove(mem, mem + 1, (entries - i - 1) * sizeof(*mem));
+ size -= sizeof(*new_mem);
+ }
+
+ return size;
+ } else if (mem->addr > new_end) {
+ /*
+ * The mem cache is sorted. If we find the current
+ * entry start after our new_mem's end, we shot over
+ * which means we need to add it by creating a new
+ * hole right after the current entry.
+ */
+ memmove(mem + 1, mem, (entries - i) * sizeof(*mem));
+ break;
+ }
+ }
+
+ mem_cache[i] = *new_mem;
+ size += sizeof(*new_mem);
+
+ return size;
+}
+
+/**
+ * kho_alloc_mem_cache - Allocate and initialize the mem cache kexec_buf
+ */
+static int kho_alloc_mem_cache(struct kimage *image, void *fdt)
+{
+ int offset, depth, initial_depth, len;
+ void *mem_cache;
+ int size;
+
+ /* Count the elements inside all "mem" properties in the DT */
+ size = offset = depth = initial_depth = 0;
+ for (offset = 0;
+ offset >= 0 && depth >= initial_depth;
+ offset = fdt_next_node(fdt, offset, &depth)) {
+ const struct kho_mem *mems;
+
+ mems = fdt_getprop(fdt, offset, "mem", &len);
+ if (!mems || len & (sizeof(*mems) - 1))
+ continue;
+ size += len;
+ }
+
+ /* Allocate based on the max size we determined */
+ mem_cache = kvmalloc(size, GFP_KERNEL);
+ if (!mem_cache)
+ return -ENOMEM;
+
+ /* And populate the array */
+ size = offset = depth = initial_depth = 0;
+ for (offset = 0;
+ offset >= 0 && depth >= initial_depth;
+ offset = fdt_next_node(fdt, offset, &depth)) {
+ const struct kho_mem *mems;
+ int nr_mems, i;
+
+ mems = fdt_getprop(fdt, offset, "mem", &len);
+ if (!mems || len & (sizeof(*mems) - 1))
+ continue;
+
+ for (i = 0, nr_mems = len / sizeof(*mems); i < nr_mems; i++) {
+ const struct kho_mem *mem = &mems[i];
+ ulong mstart = PAGE_ALIGN_DOWN(mem->addr);
+ ulong mend = PAGE_ALIGN(mem->addr + mem->len);
+ struct kho_mem cmem = {
+ .addr = mstart,
+ .len = (mend - mstart),
+ };
+
+ size = kho_mem_cache_add(fdt, mem_cache, size, &cmem);
+ if (size < 0)
+ return size;
+ }
+ }
+
+ image->kho.mem_cache.buffer = mem_cache;
+ image->kho.mem_cache.bufsz = size;
+ image->kho.mem_cache.memsz = size;
+
+ return 0;
+}
+
+int kho_fill_kimage(struct kimage *image)
+{
+ int err = 0;
+ void *dt;
+
+ mutex_lock(&kho.lock);
+
+ if (!kho.active)
+ goto out;
+
+ /* Initialize kexec_buf for mem_cache */
+ image->kho.mem_cache = (struct kexec_buf) {
+ .image = image,
+ .buffer = NULL,
+ .bufsz = 0,
+ .mem = KEXEC_BUF_MEM_UNKNOWN,
+ .memsz = 0,
+ .buf_align = SZ_64K, /* Makes it easier to map */
+ .buf_max = ULONG_MAX,
+ .top_down = true,
+ };
+
+ /*
+ * We need to make all allocations visible here via the mem_cache so that
+ * kho_is_destination_range() can identify overlapping regions and ensure
+ * that no kimage (including the DT one) lands on handed over memory.
+ *
+ * Since we conveniently already built an array of all allocations, let's
+ * pass it on to the target kernel as well so that it can reuse the array
+ * to initialize its memory blocks.
+ */
+ err = kho_alloc_mem_cache(image, kho.dt);
+ if (err)
+ goto out;
+
+ err = kexec_add_buffer(&image->kho.mem_cache);
+ if (err)
+ goto out;
+
+ /*
+ * Create a kexec copy of the DT here. We need this because the
+ * lifetimes of kho.dt and the kimage may differ.
+ */
+ dt = kvmemdup(kho.dt, kho.dt_len, GFP_KERNEL);
+ if (!dt) {
+ err = -ENOMEM;
+ goto out;
+ }
+
+ /* Allocate target memory for kho dt */
+ image->kho.dt = (struct kexec_buf) {
+ .image = image,
+ .buffer = dt,
+ .bufsz = kho.dt_len,
+ .mem = KEXEC_BUF_MEM_UNKNOWN,
+ .memsz = kho.dt_len,
+ .buf_align = SZ_64K, /* Makes it easier to map */
+ .buf_max = ULONG_MAX,
+ .top_down = true,
+ };
+ err = kexec_add_buffer(&image->kho.dt);
+
+out:
+ mutex_unlock(&kho.lock);
+ return err;
+}
+
bool kho_is_active(void)
{
return kho.active;
--
2.40.1








2023-12-22 19:38:20

by Alexander Graf

[permalink] [raw]
Subject: [PATCH v2 06/17] kexec: Add config option for KHO

We have all generic code in place now to support Kexec with KHO. This
patch adds a config option that depends on architecture support to
enable KHO support.

Signed-off-by: Alexander Graf <[email protected]>
---
kernel/Kconfig.kexec | 13 +++++++++++++
1 file changed, 13 insertions(+)

diff --git a/kernel/Kconfig.kexec b/kernel/Kconfig.kexec
index 2fd510256604..909ab28f1341 100644
--- a/kernel/Kconfig.kexec
+++ b/kernel/Kconfig.kexec
@@ -91,6 +91,19 @@ config KEXEC_JUMP
Jump between original kernel and kexeced kernel and invoke
code in physical address mode via KEXEC

+config KEXEC_KHO
+ bool "kexec handover"
+ depends on ARCH_SUPPORTS_KEXEC_KHO
+ depends on KEXEC
+ select MEMBLOCK_SCRATCH
+ select LIBFDT
+ select CMA
+ help
+ Allow kexec to hand over state across kernels by generating and
+ passing additional metadata to the target kernel. This is useful
+ to keep data or state alive across the kexec. For this to work,
+ both source and target kernels need to have this option enabled.
+
config CRASH_DUMP
bool "kernel crash dumps"
depends on ARCH_SUPPORTS_CRASH_DUMP
--
2.40.1








2023-12-22 19:38:37

by Alexander Graf

[permalink] [raw]
Subject: [PATCH v2 07/17] kexec: Add documentation for KHO

With KHO in place, let's add documentation that describes what it is and
how to use it.

Signed-off-by: Alexander Graf <[email protected]>
---
Documentation/kho/concepts.rst | 88 ++++++++++++++++++++++++++++++++
Documentation/kho/index.rst | 19 +++++++
Documentation/kho/usage.rst | 57 +++++++++++++++++++++
Documentation/subsystem-apis.rst | 1 +
4 files changed, 165 insertions(+)
create mode 100644 Documentation/kho/concepts.rst
create mode 100644 Documentation/kho/index.rst
create mode 100644 Documentation/kho/usage.rst

diff --git a/Documentation/kho/concepts.rst b/Documentation/kho/concepts.rst
new file mode 100644
index 000000000000..8e4fe8c57865
--- /dev/null
+++ b/Documentation/kho/concepts.rst
@@ -0,0 +1,88 @@
+.. SPDX-License-Identifier: GPL-2.0-or-later
+
+=======================
+Kexec Handover Concepts
+=======================
+
+Kexec HandOver (KHO) is a mechanism that allows Linux to preserve state -
+arbitrary properties as well as memory locations - across kexec.
+
+It introduces multiple concepts:
+
+KHO Device Tree
+---------------
+
+Every KHO kexec carries a KHO specific flattened device tree blob that
+describes the state of the system. Device drivers can register to KHO to
+serialize their state before kexec. After KHO, device drivers can read
+the device tree and extract previous state.
+
+KHO only uses the fdt container format and libfdt library, but does not
+adhere to the same property semantics that normal device trees do: Properties
+are passed in native endianness and standardized properties like ``regs`` and
+``ranges`` do not exist, hence there are no ``#...-cells`` properties.
+
+KHO introduces a new concept to its device tree: ``mem`` properties. A
+``mem`` property can appear inside any subnode of the device tree. When present,
+it contains an array of physical memory ranges that the new kernel must mark
+as reserved on boot. It is recommended, but not required, to make these ranges
+as physically contiguous as possible to reduce the number of array elements ::
+
+ struct kho_mem {
+ __u64 addr;
+ __u64 len;
+ };
+
+After boot, drivers can call the kho subsystem to transfer ownership of memory
+that was reserved via a ``mem`` property to themselves to continue using memory
+from the previous execution.
+
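+As a rough sketch, assuming a helper named ``kho_claim_mem()`` that transfers
+ownership of one such range to the caller (the name is illustrative, not a
+quote of this series' final API), a driver's restore path could look like ::
+
+ /* mem points at one entry of this driver's "mem" property */
+ void *state = kho_claim_mem(mem);
+
+ if (!state)
+ return -ENOMEM;
+
+ /* state now references the pre-kexec memory contents */
+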
+The KHO device tree follows the in-Linux schema requirements. Any element in
+the device tree is documented via device tree schema YAMLs that explain what
+data gets transferred.
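+
+To illustrate the producer side, a minimal serialization callback modeled on
+the ftrace patches later in this series could look like the sketch below.
+``my-driver``, ``my_state_phys`` and ``my_state_len`` are placeholders, and
+``KEXEC_KHO_ABORT`` handling is elided ::
+
+ static int my_kho_notifier(struct notifier_block *self,
+ unsigned long cmd, void *v)
+ {
+ const char compatible[] = "my-driver-v1";
+ struct kho_mem mem = {
+ .addr = my_state_phys,
+ .len = my_state_len,
+ };
+ void *fdt = v;
+ int err = 0;
+
+ if (cmd != KEXEC_KHO_DUMP)
+ return NOTIFY_DONE;
+
+ err |= fdt_begin_node(fdt, "my-driver");
+ err |= fdt_property(fdt, "compatible", compatible, sizeof(compatible));
+ err |= fdt_property(fdt, "mem", &mem, sizeof(mem));
+ err |= fdt_end_node(fdt);
+
+ return err ? NOTIFY_BAD : NOTIFY_DONE;
+ }
+
+Such a callback gets registered with ``register_kho_notifier()`` and runs
+whenever the kernel serializes its KHO device tree.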
+
+Mem cache
+---------
+
+The new kernel needs to know about all memory reservations, but cannot yet
+parse the device tree in its early bootup code because of memory limitations.
+To simplify the initial memory reservation flow, the old kernel passes a
+preprocessed array of physically contiguous reserved ranges to the new kernel.
+
+These reservations have to be separate from architectural memory maps and
+reservations because they differ on every kexec, while the architectural ones
+get passed directly between invocations.
+
+The fewer entries this cache contains, the faster the new kernel will boot.
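+
+For example, the two overlapping ranges ``{ .addr = 0x1000, .len = 0x1000 }``
+and ``{ .addr = 0x1800, .len = 0x1800 }`` get folded into the single cache
+entry ``{ .addr = 0x1000, .len = 0x2000 }`` when the cache is built.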
+
+Scratch Region
+--------------
+
+To kexec into a new kernel, we need a physically contiguous memory range that
+contains no handed over memory. Kexec then places the target kernel and initrd
+into that region. The new kernel exclusively uses this region for memory
+allocations before it ingests the mem cache.
+
+We guarantee that we always have such a region through the scratch region: On
+first boot, you can pass the ``kho_scratch`` kernel command line option. When
+it is set, Linux allocates a CMA region of the given size. CMA gives us the
+guarantee that no handover pages land in that region, because handover
+pages must be at a static physical memory location and CMA enforces that
+only movable pages can be located inside.
+
+After KHO kexec, we ignore the ``kho_scratch`` kernel command line option and
+instead reuse the exact same region that was originally allocated. This allows
+us to recursively execute any number of KHO kexecs. Because we used this region
+for boot memory allocations and as target memory for kexec blobs, some parts
+of that memory region may be reserved. These reservations are irrelevant for
+the next KHO, because kexec can overwrite even the original kernel.
+
+KHO active phase
+----------------
+
+To enable the user space based kexec file loader, the kernel needs to be able to
+provide the device tree that describes the previous kernel's state before
+performing the actual kexec. The process of generating that device tree is
+called serialization. When the device tree is generated, some properties
+of the system may become immutable because they are already written down
+in the device tree. That state is called the KHO active phase.
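+
+For example, once the ftrace patches later in this series have serialized the
+trace buffers, those buffers must no longer move or resize; the ftrace code
+enforces this by holding ``trace_types_lock`` from serialization until the
+active phase is left again.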
diff --git a/Documentation/kho/index.rst b/Documentation/kho/index.rst
new file mode 100644
index 000000000000..5e7eeeca8520
--- /dev/null
+++ b/Documentation/kho/index.rst
@@ -0,0 +1,19 @@
+.. SPDX-License-Identifier: GPL-2.0-or-later
+
+========================
+Kexec Handover Subsystem
+========================
+
+.. toctree::
+ :maxdepth: 1
+
+ concepts
+ usage
+
+.. only:: subproject and html
+
+
+ Indices
+ =======
+
+ * :ref:`genindex`
diff --git a/Documentation/kho/usage.rst b/Documentation/kho/usage.rst
new file mode 100644
index 000000000000..5efa2a58f9c3
--- /dev/null
+++ b/Documentation/kho/usage.rst
@@ -0,0 +1,57 @@
+.. SPDX-License-Identifier: GPL-2.0-or-later
+
+====================
+Kexec Handover Usage
+====================
+
+Kexec HandOver (KHO) is a mechanism that allows Linux to preserve state -
+arbitrary properties as well as memory locations - across kexec.
+
+This document expects that you are familiar with the base KHO
+:ref:`Documentation/kho/concepts.rst <concepts>`. If you have not read
+it yet, please do so now.
+
+Prerequisites
+-------------
+
+KHO is available when the ``CONFIG_KEXEC_KHO`` config option is set to y
+at compile time. Every KHO producer has its own config option that you
+need to enable if you would like to preserve its respective state across
+kexec.
+
+To use KHO, please boot the kernel with the ``kho_scratch`` command
+line parameter set to allocate a scratch region. For example
+``kho_scratch=512M`` will reserve a 512 MiB scratch region on boot.
+
+Perform a KHO kexec
+-------------------
+
+Before you can perform a KHO kexec, you need to move the system into the
+:ref:`Documentation/kho/concepts.rst <KHO active phase>` ::
+
+ $ echo 1 > /sys/kernel/kho/active
+
+After this command, the KHO device tree is available in ``/sys/kernel/kho/dt``.
+
+Next, load the target payload and kexec into it. It is important that you
+use the ``-s`` parameter to use the in-kernel kexec file loader, as user
+space kexec tooling currently has no support for KHO with the user space
+based file loader ::
+
+ # kexec -l Image --initrd=initrd -s
+ # kexec -e
+
+The new kernel will boot up and contain some of the previous kernel's state.
+
+For example, if you enabled ``CONFIG_FTRACE_KHO``, the new kernel will contain
+the old kernel's trace buffers in ``/sys/kernel/debug/tracing/trace``.
+
+Abort a KHO kexec
+-----------------
+
+You can move the system out of KHO active phase again by running ::
+
+ $ echo 0 > /sys/kernel/kho/active
+
+After this command, the KHO device tree is no longer available in
+``/sys/kernel/kho/dt``.
diff --git a/Documentation/subsystem-apis.rst b/Documentation/subsystem-apis.rst
index 930dc23998a0..8207b6514d87 100644
--- a/Documentation/subsystem-apis.rst
+++ b/Documentation/subsystem-apis.rst
@@ -86,3 +86,4 @@ Storage interfaces
misc-devices/index
peci/index
wmi/index
+ kho/index
--
2.40.1








2023-12-22 19:38:52

by Alexander Graf

[permalink] [raw]
Subject: [PATCH v2 08/17] arm64: Add KHO support

We now have all bits in place to support KHO kexecs. This patch adds
awareness of KHO in the kexec file load path as well as the boot path
for arm64 and adds the respective kconfig option to the architecture so
that it can use KHO successfully.

Signed-off-by: Alexander Graf <[email protected]>

---

v1 -> v2:

- test bot warning fix
- Change kconfig option to ARCH_SUPPORTS_KEXEC_KHO
- s/kho_reserve_mem/kho_reserve_previous_mem/g
- s/kho_reserve/kho_reserve_scratch/g
- Remove / reduce ifdefs for kho fdt code
---
arch/arm64/Kconfig | 3 +++
arch/arm64/kernel/setup.c | 2 ++
arch/arm64/mm/init.c | 8 ++++++
drivers/of/fdt.c | 39 ++++++++++++++++++++++++++++
drivers/of/kexec.c | 54 +++++++++++++++++++++++++++++++++++++++
5 files changed, 106 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 7b071a00425d..4a2fd3deaa16 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1495,6 +1495,9 @@ config ARCH_SUPPORTS_KEXEC_IMAGE_VERIFY_SIG
config ARCH_DEFAULT_KEXEC_IMAGE_VERIFY_SIG
def_bool y

+config ARCH_SUPPORTS_KEXEC_KHO
+ def_bool y
+
config ARCH_SUPPORTS_CRASH_DUMP
def_bool y

diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 417a8a86b2db..9aa05b84d202 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -346,6 +346,8 @@ void __init __no_sanitize_address setup_arch(char **cmdline_p)

paging_init();

+ kho_reserve_previous_mem();
+
acpi_table_upgrade();

/* Parse the ACPI tables for possible boot-time configuration */
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 74c1db8ce271..1a8fc91509af 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -358,6 +358,8 @@ void __init bootmem_init(void)
*/
arch_reserve_crashkernel();

+ kho_reserve_scratch();
+
memblock_dump_all();
}

@@ -386,6 +388,12 @@ void __init mem_init(void)
/* this will put all unused low memory onto the freelists */
memblock_free_all();

+ /*
+ * Now that all KHO pages are marked as reserved, let's flip them back
+ * to normal pages with accurate refcount.
+ */
+ kho_populate_refcount();
+
/*
* Check boundaries twice: Some fundamental inconsistencies can be
* detected at build time already.
diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
index bf502ba8da95..f9b9a36fb722 100644
--- a/drivers/of/fdt.c
+++ b/drivers/of/fdt.c
@@ -1006,6 +1006,42 @@ void __init early_init_dt_check_for_usable_mem_range(void)
memblock_add(rgn[i].base, rgn[i].size);
}

+/**
+ * early_init_dt_check_kho - Decode info required for kexec handover from DT
+ */
+static void __init early_init_dt_check_kho(void)
+{
+ unsigned long node = chosen_node_offset;
+ u64 kho_start, scratch_start, scratch_size, mem_start, mem_size;
+ const __be32 *p;
+ int l;
+
+ if (!IS_ENABLED(CONFIG_KEXEC_KHO) || (long)node < 0)
+ return;
+
+ p = of_get_flat_dt_prop(node, "linux,kho-dt", &l);
+ if (l != (dt_root_addr_cells + dt_root_size_cells) * sizeof(__be32))
+ return;
+
+ kho_start = dt_mem_next_cell(dt_root_addr_cells, &p);
+
+ p = of_get_flat_dt_prop(node, "linux,kho-scratch", &l);
+ if (l != (dt_root_addr_cells + dt_root_size_cells) * sizeof(__be32))
+ return;
+
+ scratch_start = dt_mem_next_cell(dt_root_addr_cells, &p);
+ scratch_size = dt_mem_next_cell(dt_root_addr_cells, &p);
+
+ p = of_get_flat_dt_prop(node, "linux,kho-mem", &l);
+ if (l != (dt_root_addr_cells + dt_root_size_cells) * sizeof(__be32))
+ return;
+
+ mem_start = dt_mem_next_cell(dt_root_addr_cells, &p);
+ mem_size = dt_mem_next_cell(dt_root_addr_cells, &p);
+
+ kho_populate(kho_start, scratch_start, scratch_size, mem_start, mem_size);
+}
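+
+/*
+ * Example of the /chosen properties consumed above, as written by
+ * kho_add_chosen() in drivers/of/kexec.c. The addresses are made up;
+ * each property is an <address size> pair encoded with the root node's
+ * address and size cell counts:
+ *
+ * chosen {
+ * linux,kho-dt = <0x0 0x81000000 0x0 0x10000>;
+ * linux,kho-scratch = <0x0 0xa0000000 0x0 0x20000000>;
+ * linux,kho-mem = <0x0 0x82000000 0x0 0x1000>;
+ * };
+ */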
+
#ifdef CONFIG_SERIAL_EARLYCON

int __init early_init_dt_scan_chosen_stdout(void)
@@ -1304,6 +1340,9 @@ void __init early_init_dt_scan_nodes(void)

/* Handle linux,usable-memory-range property */
early_init_dt_check_for_usable_mem_range();
+
+ /* Handle kexec handover */
+ early_init_dt_check_kho();
}

bool __init early_init_dt_scan(void *params)
diff --git a/drivers/of/kexec.c b/drivers/of/kexec.c
index 68278340cecf..59070b09ad45 100644
--- a/drivers/of/kexec.c
+++ b/drivers/of/kexec.c
@@ -264,6 +264,55 @@ static inline int setup_ima_buffer(const struct kimage *image, void *fdt,
}
#endif /* CONFIG_IMA_KEXEC */

+static int kho_add_chosen(const struct kimage *image, void *fdt, int chosen_node)
+{
+ void *dt = NULL;
+ phys_addr_t dt_mem = 0;
+ phys_addr_t dt_len = 0;
+ phys_addr_t scratch_mem = 0;
+ phys_addr_t scratch_len = 0;
+ void *mem_cache = NULL;
+ phys_addr_t mem_cache_mem = 0;
+ phys_addr_t mem_cache_len = 0;
+ int ret = 0;
+
+#ifdef CONFIG_KEXEC_KHO
+ dt = image->kho.dt.buffer;
+ dt_mem = image->kho.dt.mem;
+ dt_len = image->kho.dt.bufsz;
+
+ scratch_mem = kho_scratch_phys;
+ scratch_len = kho_scratch_len;
+
+ mem_cache = image->kho.mem_cache.buffer;
+ mem_cache_mem = image->kho.mem_cache.mem;
+ mem_cache_len = image->kho.mem_cache.bufsz;
+#endif
+
+ if (!dt || !mem_cache)
+ goto out;
+
+ pr_debug("Adding kho metadata to DT\n");
+
+ ret = fdt_appendprop_addrrange(fdt, 0, chosen_node, "linux,kho-dt",
+ dt_mem, dt_len);
+ if (ret)
+ goto out;
+
+ ret = fdt_appendprop_addrrange(fdt, 0, chosen_node, "linux,kho-scratch",
+ scratch_mem, scratch_len);
+ if (ret)
+ goto out;
+
+ ret = fdt_appendprop_addrrange(fdt, 0, chosen_node, "linux,kho-mem",
+ mem_cache_mem, mem_cache_len);
+ if (ret)
+ goto out;
+
+out:
+ return ret;
+}
+
/*
* of_kexec_alloc_and_setup_fdt - Alloc and setup a new Flattened Device Tree
*
@@ -412,6 +461,11 @@ void *of_kexec_alloc_and_setup_fdt(const struct kimage *image,
}
}

+ /* Add kho metadata if this is a KHO image */
+ ret = kho_add_chosen(image, fdt, chosen_node);
+ if (ret)
+ goto out;
+
/* add bootargs */
if (cmdline) {
ret = fdt_setprop_string(fdt, chosen_node, "bootargs", cmdline);
--
2.40.1








2023-12-22 19:39:20

by Alexander Graf

[permalink] [raw]
Subject: [PATCH v2 09/17] x86: Add KHO support

We now have all bits in place to support KHO kexecs. This patch adds
awareness of KHO in the kexec file load path as well as the boot path
for x86 and adds the respective kconfig option to the architecture so
that it can use KHO successfully.

In addition, it enlightens its decompression code with KHO so that its
KASLR location finder only considers memory regions that are not already
occupied by KHO memory.

Signed-off-by: Alexander Graf <[email protected]>

---

v1 -> v2:

- Change kconfig option to ARCH_SUPPORTS_KEXEC_KHO
- s/kho_reserve_mem/kho_reserve_previous_mem/g
- s/kho_reserve/kho_reserve_scratch/g
---
arch/x86/Kconfig | 3 ++
arch/x86/boot/compressed/kaslr.c | 55 +++++++++++++++++++++++++++
arch/x86/include/uapi/asm/bootparam.h | 15 +++++++-
arch/x86/kernel/e820.c | 9 +++++
arch/x86/kernel/kexec-bzimage64.c | 39 +++++++++++++++++++
arch/x86/kernel/setup.c | 46 ++++++++++++++++++++++
arch/x86/mm/init_32.c | 7 ++++
arch/x86/mm/init_64.c | 7 ++++
8 files changed, 180 insertions(+), 1 deletion(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 3762f41bb092..9aa31b3dcebc 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2094,6 +2094,9 @@ config ARCH_SUPPORTS_KEXEC_BZIMAGE_VERIFY_SIG
config ARCH_SUPPORTS_KEXEC_JUMP
def_bool y

+config ARCH_SUPPORTS_KEXEC_KHO
+ def_bool y
+
config ARCH_SUPPORTS_CRASH_DUMP
def_bool X86_64 || (X86_32 && HIGHMEM)

diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
index dec961c6d16a..93ea292e4c18 100644
--- a/arch/x86/boot/compressed/kaslr.c
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -29,6 +29,7 @@
#include <linux/uts.h>
#include <linux/utsname.h>
#include <linux/ctype.h>
+#include <uapi/linux/kexec.h>
#include <generated/utsversion.h>
#include <generated/utsrelease.h>

@@ -472,6 +473,60 @@ static bool mem_avoid_overlap(struct mem_vector *img,
}
}

+#ifdef CONFIG_KEXEC_KHO
+ if (ptr->type == SETUP_KEXEC_KHO) {
+ struct kho_data *kho = (struct kho_data *)ptr->data;
+ struct kho_mem *mems = (void *)kho->mem_cache_addr;
+ int nr_mems = kho->mem_cache_size / sizeof(*mems);
+ int i;
+
+ /* Avoid the mem cache */
+ avoid = (struct mem_vector) {
+ .start = kho->mem_cache_addr,
+ .size = kho->mem_cache_size,
+ };
+
+ if (mem_overlaps(img, &avoid) && (avoid.start < earliest)) {
+ *overlap = avoid;
+ earliest = overlap->start;
+ is_overlapping = true;
+ }
+
+ /* And the KHO DT */
+ avoid = (struct mem_vector) {
+ .start = kho->dt_addr,
+ .size = kho->dt_size,
+ };
+
+ if (mem_overlaps(img, &avoid) && (avoid.start < earliest)) {
+ *overlap = avoid;
+ earliest = overlap->start;
+ is_overlapping = true;
+ }
+
+ /* As well as any other KHO memory reservations */
+ for (i = 0; i < nr_mems; i++) {
+ avoid = (struct mem_vector) {
+ .start = mems[i].addr,
+ .size = mems[i].len,
+ };
+
+ /*
+ * This mem starts after our current break.
+ * The array is sorted, so we're done.
+ */
+ if (avoid.start >= earliest)
+ break;
+
+ if (mem_overlaps(img, &avoid)) {
+ *overlap = avoid;
+ earliest = overlap->start;
+ is_overlapping = true;
+ }
+ }
+ }
+#endif
+
ptr = (struct setup_data *)(unsigned long)ptr->next;
}

diff --git a/arch/x86/include/uapi/asm/bootparam.h b/arch/x86/include/uapi/asm/bootparam.h
index 01d19fc22346..013af38a9673 100644
--- a/arch/x86/include/uapi/asm/bootparam.h
+++ b/arch/x86/include/uapi/asm/bootparam.h
@@ -13,7 +13,8 @@
#define SETUP_CC_BLOB 7
#define SETUP_IMA 8
#define SETUP_RNG_SEED 9
-#define SETUP_ENUM_MAX SETUP_RNG_SEED
+#define SETUP_KEXEC_KHO 10
+#define SETUP_ENUM_MAX SETUP_KEXEC_KHO

#define SETUP_INDIRECT (1<<31)
#define SETUP_TYPE_MAX (SETUP_ENUM_MAX | SETUP_INDIRECT)
@@ -181,6 +182,18 @@ struct ima_setup_data {
__u64 size;
} __attribute__((packed));

+/*
+ * Locations of kexec handover metadata
+ */
+struct kho_data {
+ __u64 dt_addr;
+ __u64 dt_size;
+ __u64 scratch_addr;
+ __u64 scratch_size;
+ __u64 mem_cache_addr;
+ __u64 mem_cache_size;
+} __attribute__((packed));
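+
+/*
+ * The loading kernel appends a setup_data entry of type SETUP_KEXEC_KHO
+ * that carries this struct (see setup_kho() in kexec-bzimage64.c); the
+ * booting kernel picks it up again in parse_setup_data().
+ */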
+
/* The so-called "zeropage" */
struct boot_params {
struct screen_info screen_info; /* 0x000 */
diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c
index fb8cf953380d..c891b83f5b1c 100644
--- a/arch/x86/kernel/e820.c
+++ b/arch/x86/kernel/e820.c
@@ -1341,6 +1341,15 @@ void __init e820__memblock_setup(void)
continue;

memblock_add(entry->addr, entry->size);
+
+ /*
+ * At this point with KHO we only allocate from scratch memory
+ * and only from memory below ISA_END_ADDRESS. Make sure that
+ * when we add memory for the eligible range, we add it as
+ * scratch memory so that we can resize the memblocks array.
+ */
+ if (is_kho_boot() && (end <= ISA_END_ADDRESS))
+ memblock_mark_scratch(entry->addr, end);
}

/* Throw away partial pages: */
diff --git a/arch/x86/kernel/kexec-bzimage64.c b/arch/x86/kernel/kexec-bzimage64.c
index a61c12c01270..0cb8d0650a02 100644
--- a/arch/x86/kernel/kexec-bzimage64.c
+++ b/arch/x86/kernel/kexec-bzimage64.c
@@ -15,6 +15,7 @@
#include <linux/slab.h>
#include <linux/kexec.h>
#include <linux/kernel.h>
+#include <linux/libfdt.h>
#include <linux/mm.h>
#include <linux/efi.h>
#include <linux/random.h>
@@ -233,6 +234,33 @@ setup_ima_state(const struct kimage *image, struct boot_params *params,
#endif /* CONFIG_IMA_KEXEC */
}

+static void setup_kho(const struct kimage *image, struct boot_params *params,
+ unsigned long params_load_addr,
+ unsigned int setup_data_offset)
+{
+#ifdef CONFIG_KEXEC_KHO
+ struct setup_data *sd = (void *)params + setup_data_offset;
+ struct kho_data *kho = (void *)sd + sizeof(*sd);
+
+ sd->type = SETUP_KEXEC_KHO;
+ sd->len = sizeof(struct kho_data);
+
+ /* Only add if we have all KHO images in place */
+ if (!image->kho.dt.buffer || !image->kho.mem_cache.buffer)
+ return;
+
+ /* Add setup data */
+ kho->dt_addr = image->kho.dt.mem;
+ kho->dt_size = image->kho.dt.bufsz;
+ kho->scratch_addr = kho_scratch_phys;
+ kho->scratch_size = kho_scratch_len;
+ kho->mem_cache_addr = image->kho.mem_cache.mem;
+ kho->mem_cache_size = image->kho.mem_cache.bufsz;
+ sd->next = params->hdr.setup_data;
+ params->hdr.setup_data = params_load_addr + setup_data_offset;
+#endif /* CONFIG_KEXEC_KHO */
+}
+
static int
setup_boot_parameters(struct kimage *image, struct boot_params *params,
unsigned long params_load_addr,
@@ -305,6 +333,13 @@ setup_boot_parameters(struct kimage *image, struct boot_params *params,
sizeof(struct ima_setup_data);
}

+ if (IS_ENABLED(CONFIG_KEXEC_KHO)) {
+ /* Setup space to store preservation metadata */
+ setup_kho(image, params, params_load_addr, setup_data_offset);
+ setup_data_offset += sizeof(struct setup_data) +
+ sizeof(struct kho_data);
+ }
+
/* Setup RNG seed */
setup_rng_seed(params, params_load_addr, setup_data_offset);

@@ -470,6 +505,10 @@ static void *bzImage64_load(struct kimage *image, char *kernel,
kbuf.bufsz += sizeof(struct setup_data) +
sizeof(struct ima_setup_data);

+ if (IS_ENABLED(CONFIG_KEXEC_KHO))
+ kbuf.bufsz += sizeof(struct setup_data) +
+ sizeof(struct kho_data);
+
params = kzalloc(kbuf.bufsz, GFP_KERNEL);
if (!params)
return ERR_PTR(-ENOMEM);
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 1526747bedf2..bd21f9a601a2 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -382,6 +382,29 @@ int __init ima_get_kexec_buffer(void **addr, size_t *size)
}
#endif

+static void __init add_kho(u64 phys_addr, u32 data_len)
+{
+#ifdef CONFIG_KEXEC_KHO
+ struct kho_data *kho;
+ u64 addr = phys_addr + sizeof(struct setup_data);
+ u64 size = data_len - sizeof(struct setup_data);
+
+ kho = early_memremap(addr, size);
+ if (!kho) {
+ pr_warn("setup: failed to memremap kho data (0x%llx, 0x%llx)\n",
+ addr, size);
+ return;
+ }
+
+ kho_populate(kho->dt_addr, kho->scratch_addr, kho->scratch_size,
+ kho->mem_cache_addr, kho->mem_cache_size);
+
+ early_memunmap(kho, size);
+#else
+ pr_warn("Passed KHO data, but CONFIG_KEXEC_KHO not set. Ignoring.\n");
+#endif
+}
+
static void __init parse_setup_data(void)
{
struct setup_data *data;
@@ -410,6 +433,9 @@ static void __init parse_setup_data(void)
case SETUP_IMA:
add_early_ima_buffer(pa_data);
break;
+ case SETUP_KEXEC_KHO:
+ add_kho(pa_data, data_len);
+ break;
case SETUP_RNG_SEED:
data = early_memremap(pa_data, data_len);
add_bootloader_randomness(data->data, data->len);
@@ -989,8 +1015,26 @@ void __init setup_arch(char **cmdline_p)
cleanup_highmap();

memblock_set_current_limit(ISA_END_ADDRESS);
+
e820__memblock_setup();

+ /*
+ * We can resize memblocks at this point, so dump all KHO
+ * reservations in and switch from scratch-only to normal
+ * allocations.
+ */
+ kho_reserve_previous_mem();
+
+ /* Allocations now skip scratch mem, return low 1M to the pool */
+ if (is_kho_boot()) {
+ u64 i;
+ phys_addr_t base, end;
+
+ __for_each_mem_range(i, &memblock.memory, NULL, NUMA_NO_NODE,
+ MEMBLOCK_SCRATCH, &base, &end, NULL)
+ if (end <= ISA_END_ADDRESS)
+ memblock_clear_scratch(base, end - base);
+ }
+
/*
* Needs to run after memblock setup because it needs the physical
* memory size.
@@ -1106,6 +1150,8 @@ void __init setup_arch(char **cmdline_p)
*/
arch_reserve_crashkernel();

+ kho_reserve_scratch();
+
memblock_find_dma_reserve();

if (!early_xdbc_setup_hardware())
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index b63403d7179d..6c3810afed04 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -20,6 +20,7 @@
#include <linux/smp.h>
#include <linux/init.h>
#include <linux/highmem.h>
+#include <linux/kexec.h>
#include <linux/pagemap.h>
#include <linux/pci.h>
#include <linux/pfn.h>
@@ -738,6 +739,12 @@ void __init mem_init(void)
after_bootmem = 1;
x86_init.hyper.init_after_bootmem();

+ /*
+ * Now that all KHO pages are marked as reserved, let's flip them back
+ * to normal pages with accurate refcount.
+ */
+ kho_populate_refcount();
+
/*
* Check boundaries twice: Some fundamental inconsistencies can
* be detected at build time already.
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index a190aae8ceaf..3ce1a4767610 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -20,6 +20,7 @@
#include <linux/smp.h>
#include <linux/init.h>
#include <linux/initrd.h>
+#include <linux/kexec.h>
#include <linux/pagemap.h>
#include <linux/memblock.h>
#include <linux/proc_fs.h>
@@ -1339,6 +1340,12 @@ void __init mem_init(void)
after_bootmem = 1;
x86_init.hyper.init_after_bootmem();

+ /*
+ * Now that all KHO pages are marked as reserved, let's flip them back
+ * to normal pages with accurate refcount.
+ */
+ kho_populate_refcount();
+
/*
* Must be done after boot memory is put on freelist, because here we
* might set fields in deferred struct pages that have not yet been
--
2.40.1








2023-12-22 19:39:48

by Alexander Graf

[permalink] [raw]
Subject: [PATCH v2 11/17] tracing: Introduce kho serialization

We want to be able to transfer ftrace state from one kernel to the next.
To start off with, let's establish all the boilerplate needed to get a
write hook when KHO wants to serialize, and fill out basic data.

Follow-up patches will fill in serialization of ring buffers and events.

Signed-off-by: Alexander Graf <[email protected]>

---

v1 -> v2:

- Remove ifdefs
---
kernel/trace/trace.c | 47 ++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 47 insertions(+)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 199df497db07..6ec31879b4eb 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -32,6 +32,7 @@
#include <linux/percpu.h>
#include <linux/splice.h>
#include <linux/kdebug.h>
+#include <linux/kexec.h>
#include <linux/string.h>
#include <linux/mount.h>
#include <linux/rwsem.h>
@@ -866,6 +867,8 @@ static struct tracer *trace_types __read_mostly;
*/
DEFINE_MUTEX(trace_types_lock);

+static bool trace_in_kho;
+
/*
* serialize the access of the ring buffer
*
@@ -10560,12 +10563,56 @@ void __init early_trace_init(void)
init_events();
}

+static int trace_kho_notifier(struct notifier_block *self,
+ unsigned long cmd,
+ void *v)
+{
+ const char compatible[] = "ftrace-v1";
+ void *fdt = v;
+ int err = 0;
+
+ switch (cmd) {
+ case KEXEC_KHO_ABORT:
+ if (trace_in_kho)
+ mutex_unlock(&trace_types_lock);
+ trace_in_kho = false;
+ return NOTIFY_DONE;
+ case KEXEC_KHO_DUMP:
+ /* Handled below */
+ break;
+ default:
+ return NOTIFY_BAD;
+ }
+
+ if (unlikely(tracing_disabled))
+ return NOTIFY_DONE;
+
+ err |= fdt_begin_node(fdt, "ftrace");
+ err |= fdt_property(fdt, "compatible", compatible, sizeof(compatible));
+ err |= fdt_end_node(fdt);
+
+ if (!err) {
+ /* Hold all future allocations */
+ mutex_lock(&trace_types_lock);
+ trace_in_kho = true;
+ }
+
+ return err ? NOTIFY_BAD : NOTIFY_DONE;
+}
+
+static struct notifier_block trace_kho_nb = {
+ .notifier_call = trace_kho_notifier,
+};
+
void __init trace_init(void)
{
trace_event_init();

if (boot_instance_index)
enable_instances();
+
+ if (IS_ENABLED(CONFIG_FTRACE_KHO))
+ register_kho_notifier(&trace_kho_nb);
}

__init static void clear_boot_tracer(void)
--
2.40.1








2023-12-22 19:40:45

by Alexander Graf

[permalink] [raw]
Subject: [PATCH v2 10/17] tracing: Initialize fields before registering

With KHO, we need to know all event fields before we allocate an event
type for a trace event so that we can recover it based on a previous
execution context.

Before this patch, fields were only initialized after we allocated a
type id. After this patch, we initialize them early as well, before the
type id gets allocated.

This patch leaves the old late initialization logic in place. The field
init code already validates whether there are any fields present, which
means it's legal to call it multiple times. This way we're sure we don't
miss any call sites.

Signed-off-by: Alexander Graf <[email protected]>
---
include/linux/trace_events.h | 1 +
kernel/trace/trace_events.c | 14 +++++++++-----
kernel/trace/trace_events_synth.c | 14 +++++++++-----
kernel/trace/trace_events_user.c | 4 ++++
kernel/trace/trace_probe.c | 4 ++++
5 files changed, 27 insertions(+), 10 deletions(-)

diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
index d68ff9b1247f..8fe8970b48e3 100644
--- a/include/linux/trace_events.h
+++ b/include/linux/trace_events.h
@@ -842,6 +842,7 @@ extern int trace_define_field(struct trace_event_call *call, const char *type,
extern int trace_add_event_call(struct trace_event_call *call);
extern int trace_remove_event_call(struct trace_event_call *call);
extern int trace_event_get_offsets(struct trace_event_call *call);
+extern int trace_event_define_fields(struct trace_event_call *call);

int ftrace_set_clr_event(struct trace_array *tr, char *buf, int set);
int trace_set_clr_event(const char *system, const char *event, int set);
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index f29e815ca5b2..fbf8be1d2806 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -462,6 +462,11 @@ static void test_event_printk(struct trace_event_call *call)
int trace_event_raw_init(struct trace_event_call *call)
{
int id;
+ int ret;
+
+ ret = trace_event_define_fields(call);
+ if (ret)
+ return ret;

id = register_trace_event(&call->event);
if (!id)
@@ -2402,8 +2407,7 @@ event_subsystem_dir(struct trace_array *tr, const char *name,
return NULL;
}

-static int
-event_define_fields(struct trace_event_call *call)
+int trace_event_define_fields(struct trace_event_call *call)
{
struct list_head *head;
int ret = 0;
@@ -2592,7 +2596,7 @@ event_create_dir(struct eventfs_inode *parent, struct trace_event_file *file)

file->ei = ei;

- ret = event_define_fields(call);
+ ret = trace_event_define_fields(call);
if (ret < 0) {
pr_warn("Could not initialize trace point events/%s\n", name);
return ret;
@@ -2978,7 +2982,7 @@ __trace_add_new_event(struct trace_event_call *call, struct trace_array *tr)
if (eventdir_initialized)
return event_create_dir(tr->event_dir, file);
else
- return event_define_fields(call);
+ return trace_event_define_fields(call);
}

static void trace_early_triggers(struct trace_event_file *file, const char *name)
@@ -3015,7 +3019,7 @@ __trace_early_add_new_event(struct trace_event_call *call,
if (!file)
return -ENOMEM;

- ret = event_define_fields(call);
+ ret = trace_event_define_fields(call);
if (ret)
return ret;

diff --git a/kernel/trace/trace_events_synth.c b/kernel/trace/trace_events_synth.c
index 846e02c0fb59..4db41218ccf7 100644
--- a/kernel/trace/trace_events_synth.c
+++ b/kernel/trace/trace_events_synth.c
@@ -880,17 +880,21 @@ static int register_synth_event(struct synth_event *event)
INIT_LIST_HEAD(&call->class->fields);
call->event.funcs = &synth_event_funcs;
call->class->fields_array = synth_event_fields_array;
+ call->flags = TRACE_EVENT_FL_TRACEPOINT;
+ call->class->reg = trace_event_reg;
+ call->class->probe = trace_event_raw_event_synth;
+ call->data = event;
+ call->tp = event->tp;
+
+ ret = trace_event_define_fields(call);
+ if (ret)
+ goto out;

ret = register_trace_event(&call->event);
if (!ret) {
ret = -ENODEV;
goto out;
}
- call->flags = TRACE_EVENT_FL_TRACEPOINT;
- call->class->reg = trace_event_reg;
- call->class->probe = trace_event_raw_event_synth;
- call->data = event;
- call->tp = event->tp;

ret = trace_add_event_call(call);
if (ret) {
diff --git a/kernel/trace/trace_events_user.c b/kernel/trace/trace_events_user.c
index 9365ce407426..b9837e987525 100644
--- a/kernel/trace/trace_events_user.c
+++ b/kernel/trace/trace_events_user.c
@@ -1900,6 +1900,10 @@ static int user_event_trace_register(struct user_event *user)
{
int ret;

+ ret = trace_event_define_fields(&user->call);
+ if (ret)
+ return ret;
+
ret = register_trace_event(&user->call.event);

if (!ret)
diff --git a/kernel/trace/trace_probe.c b/kernel/trace/trace_probe.c
index 4dc74d73fc1d..da73a02246d8 100644
--- a/kernel/trace/trace_probe.c
+++ b/kernel/trace/trace_probe.c
@@ -1835,6 +1835,10 @@ int trace_probe_register_event_call(struct trace_probe *tp)
trace_probe_name(tp)))
return -EEXIST;

+ ret = trace_event_define_fields(call);
+ if (ret)
+ return ret;
+
ret = register_trace_event(&call->event);
if (!ret)
return -ENODEV;
--
2.40.1








+++ b/arch/x86/kernel/e820.c
@@ -1341,6 +1341,15 @@ void __init e820__memblock_setup(void)
continue;

memblock_add(entry->addr, entry->size);
+
+ /*
+ * At this point with KHO we only allocate from scratch memory
+ * and only from memory below ISA_END_ADDRESS. Make sure that
+ * when we add memory for the eligible range, we add it as
+ * scratch memory so that we can resize the memblocks array.
+ */
+ if (is_kho_boot() && (end <= ISA_END_ADDRESS))
+ memblock_mark_scratch(entry->addr, end);
}

/* Throw away partial pages: */
diff --git a/arch/x86/kernel/kexec-bzimage64.c b/arch/x86/kernel/kexec-bzimage64.c
index a61c12c01270..0cb8d0650a02 100644
--- a/arch/x86/kernel/kexec-bzimage64.c
+++ b/arch/x86/kernel/kexec-bzimage64.c
@@ -15,6 +15,7 @@
#include <linux/slab.h>
#include <linux/kexec.h>
#include <linux/kernel.h>
+#include <linux/libfdt.h>
#include <linux/mm.h>
#include <linux/efi.h>
#include <linux/random.h>
@@ -233,6 +234,33 @@ setup_ima_state(const struct kimage *image, struct boot_params *params,
#endif /* CONFIG_IMA_KEXEC */
}

+static void setup_kho(const struct kimage *image, struct boot_params *params,
+ unsigned long params_load_addr,
+ unsigned int setup_data_offset)
+{
+#ifdef CONFIG_KEXEC_KHO
+ struct setup_data *sd = (void *)params + setup_data_offset;
+ struct kho_data *kho = (void *)sd + sizeof(*sd);
+
+ sd->type = SETUP_KEXEC_KHO;
+ sd->len = sizeof(struct kho_data);
+
+ /* Only add if we have all KHO images in place */
+ if (!image->kho.dt.buffer || !image->kho.mem_cache.buffer)
+ return;
+
+ /* Add setup data */
+ kho->dt_addr = image->kho.dt.mem;
+ kho->dt_size = image->kho.dt.bufsz;
+ kho->scratch_addr = kho_scratch_phys;
+ kho->scratch_size = kho_scratch_len;
+ kho->mem_cache_addr = image->kho.mem_cache.mem;
+ kho->mem_cache_size = image->kho.mem_cache.bufsz;
+ sd->next = params->hdr.setup_data;
+ params->hdr.setup_data = params_load_addr + setup_data_offset;
+#endif /* CONFIG_KEXEC_KHO */
+}
+
static int
setup_boot_parameters(struct kimage *image, struct boot_params *params,
unsigned long params_load_addr,
@@ -305,6 +333,13 @@ setup_boot_parameters(struct kimage *image, struct boot_params *params,
sizeof(struct ima_setup_data);
}

+ if (IS_ENABLED(CONFIG_KEXEC_KHO)) {
+ /* Setup space to store preservation metadata */
+ setup_kho(image, params, params_load_addr, setup_data_offset);
+ setup_data_offset += sizeof(struct setup_data) +
+ sizeof(struct kho_data);
+ }
+
/* Setup RNG seed */
setup_rng_seed(params, params_load_addr, setup_data_offset);

@@ -470,6 +505,10 @@ static void *bzImage64_load(struct kimage *image, char *kernel,
kbuf.bufsz += sizeof(struct setup_data) +
sizeof(struct ima_setup_data);

+ if (IS_ENABLED(CONFIG_KEXEC_KHO))
+ kbuf.bufsz += sizeof(struct setup_data) +
+ sizeof(struct kho_data);
+
params = kzalloc(kbuf.bufsz, GFP_KERNEL);
if (!params)
return ERR_PTR(-ENOMEM);
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 1526747bedf2..bd21f9a601a2 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -382,6 +382,29 @@ int __init ima_get_kexec_buffer(void **addr, size_t *size)
}
#endif

+static void __init add_kho(u64 phys_addr, u32 data_len)
+{
+#ifdef CONFIG_KEXEC_KHO
+ struct kho_data *kho;
+ u64 addr = phys_addr + sizeof(struct setup_data);
+ u64 size = data_len - sizeof(struct setup_data);
+
+ kho = early_memremap(addr, size);
+ if (!kho) {
+ pr_warn("setup: failed to memremap kho data (0x%llx, 0x%llx)\n",
+ addr, size);
+ return;
+ }
+
+ kho_populate(kho->dt_addr, kho->scratch_addr, kho->scratch_size,
+ kho->mem_cache_addr, kho->mem_cache_size);
+
+ early_memunmap(kho, size);
+#else
+ pr_warn("Passed KHO data, but CONFIG_KEXEC_KHO not set. Ignoring.\n");
+#endif
+}
+
static void __init parse_setup_data(void)
{
struct setup_data *data;
@@ -410,6 +433,9 @@ static void __init parse_setup_data(void)
case SETUP_IMA:
add_early_ima_buffer(pa_data);
break;
+ case SETUP_KEXEC_KHO:
+ add_kho(pa_data, data_len);
+ break;
case SETUP_RNG_SEED:
data = early_memremap(pa_data, data_len);
add_bootloader_randomness(data->data, data->len);
@@ -989,8 +1015,26 @@ void __init setup_arch(char **cmdline_p)
cleanup_highmap();

memblock_set_current_limit(ISA_END_ADDRESS);
+
e820__memblock_setup();

+ /*
+ * We can resize memblocks at this point, let's dump all KHO
+ * reservations in and switch from scratch-only to normal allocations
+ */
+ kho_reserve_previous_mem();
+
+ /* Allocations now skip scratch mem, return low 1M to the pool */
+ if (is_kho_boot()) {
+ u64 i;
+ phys_addr_t base, end;
+
+ __for_each_mem_range(i, &memblock.memory, NULL, NUMA_NO_NODE,
+ MEMBLOCK_SCRATCH, &base, &end, NULL)
+ if (end <= ISA_END_ADDRESS)
+ memblock_clear_scratch(base, end - base);
+ }
+
/*
* Needs to run after memblock setup because it needs the physical
* memory size.
@@ -1106,6 +1150,8 @@ void __init setup_arch(char **cmdline_p)
*/
arch_reserve_crashkernel();

+ kho_reserve_scratch();
+
memblock_find_dma_reserve();

if (!early_xdbc_setup_hardware())
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index b63403d7179d..6c3810afed04 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -20,6 +20,7 @@
#include <linux/smp.h>
#include <linux/init.h>
#include <linux/highmem.h>
+#include <linux/kexec.h>
#include <linux/pagemap.h>
#include <linux/pci.h>
#include <linux/pfn.h>
@@ -738,6 +739,12 @@ void __init mem_init(void)
after_bootmem = 1;
x86_init.hyper.init_after_bootmem();

+ /*
+ * Now that all KHO pages are marked as reserved, let's flip them back
+ * to normal pages with accurate refcount.
+ */
+ kho_populate_refcount();
+
/*
* Check boundaries twice: Some fundamental inconsistencies can
* be detected at build time already.
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index a190aae8ceaf..3ce1a4767610 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -20,6 +20,7 @@
#include <linux/smp.h>
#include <linux/init.h>
#include <linux/initrd.h>
+#include <linux/kexec.h>
#include <linux/pagemap.h>
#include <linux/memblock.h>
#include <linux/proc_fs.h>
@@ -1339,6 +1340,12 @@ void __init mem_init(void)
after_bootmem = 1;
x86_init.hyper.init_after_bootmem();

+ /*
+ * Now that all KHO pages are marked as reserved, let's flip them back
+ * to normal pages with accurate refcount.
+ */
+ kho_populate_refcount();
+
/*
* Must be done after boot memory is put on freelist, because here we
* might set fields in deferred struct pages that have not yet been
--
2.40.1




2023-12-22 19:53:57

by Alexander Graf

[permalink] [raw]
Subject: [PATCH v2 11/17] tracing: Introduce kho serialization

We want to be able to transfer ftrace state from one kernel to the next.
To start off with, let's establish all the boilerplate to get a write
hook when KHO wants to serialize, and fill out basic data.

Follow-up patches will fill in serialization of ring buffers and events.
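
Condensed, the hook a subsystem installs has the following shape. This
is a sketch of the pattern only; the actual notifier below additionally
freezes buffer allocations via trace_types_lock and handles
KEXEC_KHO_ABORT:

    static int example_kho_notifier(struct notifier_block *self,
                                    unsigned long cmd, void *v)
    {
        const char compatible[] = "ftrace-v1";
        void *fdt = v;
        int err = 0;

        if (cmd != KEXEC_KHO_DUMP)
            return NOTIFY_DONE;

        /* Serialize this subsystem's state into the handover FDT */
        err |= fdt_begin_node(fdt, "ftrace");
        err |= fdt_property(fdt, "compatible", compatible, sizeof(compatible));
        err |= fdt_end_node(fdt);

        return err ? NOTIFY_BAD : NOTIFY_DONE;
    }

    static struct notifier_block example_kho_nb = {
        .notifier_call = example_kho_notifier,
    };

    /* During init: */
    register_kho_notifier(&example_kho_nb);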

Signed-off-by: Alexander Graf <[email protected]>

---

v1 -> v2:

- Remove ifdefs
---
kernel/trace/trace.c | 47 ++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 47 insertions(+)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 199df497db07..6ec31879b4eb 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -32,6 +32,7 @@
#include <linux/percpu.h>
#include <linux/splice.h>
#include <linux/kdebug.h>
+#include <linux/kexec.h>
#include <linux/string.h>
#include <linux/mount.h>
#include <linux/rwsem.h>
@@ -866,6 +867,8 @@ static struct tracer *trace_types __read_mostly;
*/
DEFINE_MUTEX(trace_types_lock);

+static bool trace_in_kho;
+
/*
* serialize the access of the ring buffer
*
@@ -10560,12 +10563,56 @@ void __init early_trace_init(void)
init_events();
}

+static int trace_kho_notifier(struct notifier_block *self,
+ unsigned long cmd,
+ void *v)
+{
+ const char compatible[] = "ftrace-v1";
+ void *fdt = v;
+ int err = 0;
+
+ switch (cmd) {
+ case KEXEC_KHO_ABORT:
+ if (trace_in_kho)
+ mutex_unlock(&trace_types_lock);
+ trace_in_kho = false;
+ return NOTIFY_DONE;
+ case KEXEC_KHO_DUMP:
+ /* Handled below */
+ break;
+ default:
+ return NOTIFY_BAD;
+ }
+
+ if (unlikely(tracing_disabled))
+ return NOTIFY_DONE;
+
+ err |= fdt_begin_node(fdt, "ftrace");
+ err |= fdt_property(fdt, "compatible", compatible, sizeof(compatible));
+ err |= fdt_end_node(fdt);
+
+ if (!err) {
+ /* Hold all future allocations */
+ mutex_lock(&trace_types_lock);
+ trace_in_kho = true;
+ }
+
+ return err ? NOTIFY_BAD : NOTIFY_DONE;
+}
+
+static struct notifier_block trace_kho_nb = {
+ .notifier_call = trace_kho_notifier,
+};
+
void __init trace_init(void)
{
trace_event_init();

if (boot_instance_index)
enable_instances();
+
+ if (IS_ENABLED(CONFIG_FTRACE_KHO))
+ register_kho_notifier(&trace_kho_nb);
}

__init static void clear_boot_tracer(void)
--
2.40.1




2023-12-22 19:54:43

by Alexander Graf

[permalink] [raw]
Subject: [PATCH v2 12/17] tracing: Add kho serialization of trace buffers

When we do a kexec handover, we want to carry the previous kernel's
ftrace data over into the new kernel. At the point when we write out the
handover data, ftrace may still be running and recording new events, and
we want to capture all of those too.

To allow the new kernel to revive all trace data up to the point of
reboot, we store the locations of all trace buffers as well as their
linked-list metadata. We can then later reuse the linked list to
reconstruct the head pointer.

This patch implements the write-out logic for trace buffers.
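
The serialized layout in the KHO device tree then looks roughly as
follows (shown in DTS form, mirroring the bindings added later in this
series; the values in "mem" are illustrative, and entries alternate
between a struct buffer_page and its data page):

    ftrace {
        compatible = "ftrace-v1";

        global_trace {
            compatible = "ftrace,array-v1";
            trace_flags = <0x3354601>;

            cpu0 {
                compatible = "ftrace,cpu-v1";
                cpu = <0x00>;
                /* { bpage addr, bpage len, page addr, page len } ... */
                mem = <...>;
            };
        };
    };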

Signed-off-by: Alexander Graf <[email protected]>

---

v1 -> v2:

- Leave the node generation code that needs to know the name in
trace.c so that ring buffers can stay anonymous
---
include/linux/ring_buffer.h | 2 +
kernel/trace/ring_buffer.c | 76 +++++++++++++++++++++++++++++++++++++
kernel/trace/trace.c | 16 ++++++++
3 files changed, 94 insertions(+)

diff --git a/include/linux/ring_buffer.h b/include/linux/ring_buffer.h
index 782e14f62201..1c5eb33f0cb5 100644
--- a/include/linux/ring_buffer.h
+++ b/include/linux/ring_buffer.h
@@ -211,4 +211,6 @@ int trace_rb_cpu_prepare(unsigned int cpu, struct hlist_node *node);
#define trace_rb_cpu_prepare NULL
#endif

+int ring_buffer_kho_write(void *fdt, struct trace_buffer *buffer);
+
#endif /* _LINUX_RING_BUFFER_H */
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 83eab547f1d1..971af7ee35da 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -20,6 +20,7 @@
#include <linux/percpu.h>
#include <linux/mutex.h>
#include <linux/delay.h>
+#include <linux/kexec.h>
#include <linux/slab.h>
#include <linux/init.h>
#include <linux/hash.h>
@@ -5853,6 +5854,81 @@ int trace_rb_cpu_prepare(unsigned int cpu, struct hlist_node *node)
return 0;
}

+#ifdef CONFIG_FTRACE_KHO
+static int rb_kho_write_cpu(void *fdt, struct trace_buffer *buffer, int cpu)
+{
+ int i = 0;
+ int err = 0;
+ struct list_head *tmp;
+ const char compatible[] = "ftrace,cpu-v1";
+ char name[] = "cpuffffffff";
+ int nr_pages;
+ struct ring_buffer_per_cpu *cpu_buffer;
+ bool first_loop = true;
+ struct kho_mem *mem;
+ uint64_t mem_len;
+
+ if (!cpumask_test_cpu(cpu, buffer->cpumask))
+ return 0;
+
+ cpu_buffer = buffer->buffers[cpu];
+
+ nr_pages = cpu_buffer->nr_pages;
+ mem_len = sizeof(*mem) * nr_pages * 2;
+ mem = vmalloc(mem_len);
+ if (!mem)
+ return -ENOMEM;
+
+ snprintf(name, sizeof(name), "cpu%x", cpu);
+
+ err |= fdt_begin_node(fdt, name);
+ err |= fdt_property(fdt, "compatible", compatible, sizeof(compatible));
+ err |= fdt_property(fdt, "cpu", &cpu, sizeof(cpu));
+
+ for (tmp = rb_list_head(cpu_buffer->pages);
+ tmp != rb_list_head(cpu_buffer->pages) || first_loop;
+ tmp = rb_list_head(tmp->next), first_loop = false) {
+ struct buffer_page *bpage = (struct buffer_page *)tmp;
+
+ /* Ring is larger than it should be? */
+ if (i >= (nr_pages * 2)) {
+ pr_err("ftrace ring has more pages than nr_pages (%d / %d)", i, nr_pages);
+ err = -EINVAL;
+ break;
+ }
+
+ /* First describe the bpage */
+ mem[i++] = (struct kho_mem) {
+ .addr = __pa(bpage),
+ .len = sizeof(*bpage)
+ };
+
+ /* Then the data page */
+ mem[i++] = (struct kho_mem) {
+ .addr = __pa(bpage->page),
+ .len = PAGE_SIZE
+ };
+ }
+
+ err |= fdt_property(fdt, "mem", mem, mem_len);
+ err |= fdt_end_node(fdt);
+
+ vfree(mem);
+ return err;
+}
+
+int ring_buffer_kho_write(void *fdt, struct trace_buffer *buffer)
+{
+ int err, i;
+
+ for (i = 0; i < buffer->cpus; i++) {
+ err = rb_kho_write_cpu(fdt, buffer, i);
+ if (err)
+ return err;
+ }
+
+ return 0;
+}
+#endif
+
#ifdef CONFIG_RING_BUFFER_STARTUP_TEST
/*
* This is a basic integrity check of the ring buffer.
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 6ec31879b4eb..2ccea4c1965b 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -10563,6 +10563,21 @@ void __init early_trace_init(void)
init_events();
}

+static int trace_kho_write_trace_array(void *fdt, struct trace_array *tr)
+{
+ const char *name = tr->name ? tr->name : "global_trace";
+ const char compatible[] = "ftrace,array-v1";
+ int err = 0;
+
+ err |= fdt_begin_node(fdt, name);
+ err |= fdt_property(fdt, "compatible", compatible, sizeof(compatible));
+ err |= fdt_property(fdt, "trace_flags", &tr->trace_flags, sizeof(tr->trace_flags));
+ err |= ring_buffer_kho_write(fdt, tr->array_buffer.buffer);
+ err |= fdt_end_node(fdt);
+
+ return err;
+}
+
static int trace_kho_notifier(struct notifier_block *self,
unsigned long cmd,
void *v)
@@ -10589,6 +10604,7 @@ static int trace_kho_notifier(struct notifier_block *self,

err |= fdt_begin_node(fdt, "ftrace");
err |= fdt_property(fdt, "compatible", compatible, sizeof(compatible));
+ err |= trace_kho_write_trace_array(fdt, &global_trace);
err |= fdt_end_node(fdt);

if (!err) {
--
2.40.1




2023-12-22 19:54:53

by Alexander Graf

[permalink] [raw]
Subject: [PATCH v2 13/17] tracing: Recover trace buffers from kexec handover

When kexec handover is in place, we know the location of all previous
ftrace ring buffers. With this patch applied, ftrace reassembles any new
trace buffer that carries the same name as a previous one from the same
data pages that the previous buffer used.

That way, a buffer that we had in place before kexec becomes readable
again after kexec as soon as it gets initialized under the same name.
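
The recovery is a two-step lookup, sketched below with a hypothetical
trace array name and CPU number: trace.c resolves the trace array node
by name and hands its offset to the ring buffer code, which then reads
the per-CPU "mem" pairs written out by the previous patch:

    /* In trace.c: find the node for this trace array by name */
    tr_off = fdt_path_offset(fdt, "/ftrace/global_trace");

    /* In ring_buffer.c: find this CPU's subnode and its page list */
    off = fdt_subnode_offset(fdt, tr_off, "cpu0");
    mem = fdt_getprop(fdt, off, "mem", &mem_len);

    /* mem[0]/mem[1]: first bpage and data page; mem[2]/mem[3]: next; ... */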

Signed-off-by: Alexander Graf <[email protected]>

---

v1 -> v2:

- Move from names to fdt offsets. That way, trace.c can find the trace
array offset and then the ring buffer code only needs to read out
 its per-CPU data. That way it can stay oblivious to its name.
- Make kho_get_fdt() const
- Remove ifdefs
---
include/linux/ring_buffer.h | 15 ++--
kernel/trace/ring_buffer.c | 171 ++++++++++++++++++++++++++++++++++--
kernel/trace/trace.c | 32 ++++++-
3 files changed, 206 insertions(+), 12 deletions(-)

diff --git a/include/linux/ring_buffer.h b/include/linux/ring_buffer.h
index 1c5eb33f0cb5..f6d6ce441890 100644
--- a/include/linux/ring_buffer.h
+++ b/include/linux/ring_buffer.h
@@ -84,20 +84,23 @@ void ring_buffer_discard_commit(struct trace_buffer *buffer,
/*
* size is in bytes for each per CPU buffer.
*/
-struct trace_buffer *
-__ring_buffer_alloc(unsigned long size, unsigned flags, struct lock_class_key *key);
+struct trace_buffer *__ring_buffer_alloc(unsigned long size, unsigned flags,
+ struct lock_class_key *key,
+ int tr_off);

/*
* Because the ring buffer is generic, if other users of the ring buffer get
* traced by ftrace, it can produce lockdep warnings. We need to keep each
* ring buffer's lock class separate.
*/
-#define ring_buffer_alloc(size, flags) \
-({ \
- static struct lock_class_key __key; \
- __ring_buffer_alloc((size), (flags), &__key); \
+#define ring_buffer_alloc_kho(size, flags, tr_off) \
+({ \
+ static struct lock_class_key __key; \
+ __ring_buffer_alloc((size), (flags), &__key, tr_off); \
})

+#define ring_buffer_alloc(size, flags) ring_buffer_alloc_kho(size, flags, 0)
+
int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full);
__poll_t ring_buffer_poll_wait(struct trace_buffer *buffer, int cpu,
struct file *filp, poll_table *poll_table, int full);
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 971af7ee35da..4c62a66068a7 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -558,6 +558,7 @@ struct trace_buffer {

struct rb_irq_work irq_work;
bool time_stamp_abs;
+ int tr_off;
};

struct ring_buffer_iter {
@@ -574,6 +575,15 @@ struct ring_buffer_iter {
int missed_events;
};

+struct rb_kho_cpu {
+ const struct kho_mem *mem;
+ uint32_t nr_mems;
+};
+
+static int rb_kho_replace_buffers(struct ring_buffer_per_cpu *cpu_buffer,
+ struct rb_kho_cpu *kho);
+static int rb_kho_read_cpu(int tr_off, int cpu, struct rb_kho_cpu *kho);
+
#ifdef RB_TIME_32

/*
@@ -1762,12 +1772,15 @@ static void rb_free_cpu_buffer(struct ring_buffer_per_cpu *cpu_buffer)
* drop data when the tail hits the head.
*/
struct trace_buffer *__ring_buffer_alloc(unsigned long size, unsigned flags,
- struct lock_class_key *key)
+ struct lock_class_key *key,
+ int tr_off)
{
+ int cpu = raw_smp_processor_id();
+ struct rb_kho_cpu kho = {};
struct trace_buffer *buffer;
+ bool use_kho = false;
long nr_pages;
int bsize;
- int cpu;
int ret;

/* keep it in its own cache line */
@@ -1780,9 +1793,16 @@ struct trace_buffer *__ring_buffer_alloc(unsigned long size, unsigned flags,
goto fail_free_buffer;

nr_pages = DIV_ROUND_UP(size, BUF_PAGE_SIZE);
+ if (!rb_kho_read_cpu(tr_off, cpu, &kho) && kho.nr_mems > 4) {
+ nr_pages = kho.nr_mems / 2;
+ use_kho = true;
+ pr_debug("Using kho on CPU [%03d]", cpu);
+ }
+
buffer->flags = flags;
buffer->clock = trace_clock_local;
buffer->reader_lock_key = key;
+ buffer->tr_off = tr_off;

init_irq_work(&buffer->irq_work.work, rb_wake_up_waiters);
init_waitqueue_head(&buffer->irq_work.waiters);
@@ -1799,12 +1819,14 @@ struct trace_buffer *__ring_buffer_alloc(unsigned long size, unsigned flags,
if (!buffer->buffers)
goto fail_free_cpumask;

- cpu = raw_smp_processor_id();
cpumask_set_cpu(cpu, buffer->cpumask);
buffer->buffers[cpu] = rb_allocate_cpu_buffer(buffer, nr_pages, cpu);
if (!buffer->buffers[cpu])
goto fail_free_buffers;

+ if (use_kho && rb_kho_replace_buffers(buffer->buffers[cpu], &kho))
+ pr_warn("Could not revive all previous trace data");
+
ret = cpuhp_state_add_instance(CPUHP_TRACE_RB_PREPARE, &buffer->node);
if (ret < 0)
goto fail_free_buffers;
@@ -5818,7 +5840,9 @@ EXPORT_SYMBOL_GPL(ring_buffer_read_page);
*/
int trace_rb_cpu_prepare(unsigned int cpu, struct hlist_node *node)
{
+ struct rb_kho_cpu kho = {};
struct trace_buffer *buffer;
+ bool use_kho = false;
long nr_pages_same;
int cpu_i;
unsigned long nr_pages;
@@ -5842,6 +5866,12 @@ int trace_rb_cpu_prepare(unsigned int cpu, struct hlist_node *node)
/* allocate minimum pages, user can later expand it */
if (!nr_pages_same)
nr_pages = 2;
+
+ if (!rb_kho_read_cpu(buffer->tr_off, cpu, &kho) && kho.nr_mems > 4) {
+ nr_pages = kho.nr_mems / 2;
+ use_kho = true;
+ }
+
buffer->buffers[cpu] =
rb_allocate_cpu_buffer(buffer, nr_pages, cpu);
if (!buffer->buffers[cpu]) {
@@ -5849,13 +5879,143 @@ int trace_rb_cpu_prepare(unsigned int cpu, struct hlist_node *node)
cpu);
return -ENOMEM;
}
+
+ if (use_kho && rb_kho_replace_buffers(buffer->buffers[cpu], &kho))
+ pr_warn("Could not revive all previous trace data");
+
smp_wmb();
cpumask_set_cpu(cpu, buffer->cpumask);
return 0;
}

-#ifdef CONFIG_FTRACE_KHO
-static int rb_kho_write_cpu(void *fdt, struct trace_buffer *buffer, int cpu)
+static int rb_kho_replace_buffers(struct ring_buffer_per_cpu *cpu_buffer,
+ struct rb_kho_cpu *kho)
+{
+ bool first_loop = true;
+ struct list_head *tmp;
+ int err = 0;
+ int i = 0;
+
+ if (!IS_ENABLED(CONFIG_FTRACE_KHO))
+ return -EINVAL;
+
+ if (kho->nr_mems != cpu_buffer->nr_pages * 2)
+ return -EINVAL;
+
+ for (tmp = rb_list_head(cpu_buffer->pages);
+ tmp != rb_list_head(cpu_buffer->pages) || first_loop;
+ tmp = rb_list_head(tmp->next), first_loop = false) {
+ struct buffer_page *bpage = (struct buffer_page *)tmp;
+ const struct kho_mem *mem_bpage = &kho->mem[i++];
+ const struct kho_mem *mem_page = &kho->mem[i++];
+ const uint64_t rb_page_head = 1;
+ struct buffer_page *old_bpage;
+ void *old_page;
+
+ old_bpage = __va(mem_bpage->addr);
+ if (!bpage)
+ goto out;
+
+ if ((ulong)old_bpage->list.next & rb_page_head) {
+ struct list_head *new_lhead;
+ struct buffer_page *new_head;
+
+ new_lhead = rb_list_head(bpage->list.next);
+ new_head = (struct buffer_page *)new_lhead;
+
+ /* Assume the buffer is completely full */
+ cpu_buffer->tail_page = bpage;
+ cpu_buffer->commit_page = bpage;
+ /* Set the head pointers to what they were before */
+ cpu_buffer->head_page->list.prev->next = (struct list_head *)
+ ((ulong)cpu_buffer->head_page->list.prev->next & ~rb_page_head);
+ cpu_buffer->head_page = new_head;
+ bpage->list.next = (struct list_head *)((ulong)new_lhead | rb_page_head);
+ }
+
+ if (rb_page_entries(old_bpage) || rb_page_write(old_bpage)) {
+ /*
+ * We want to recycle the pre-kho page, it contains
+ * trace data. To do so, we unreserve it and swap the
+ * current data page with the pre-kho one
+ */
+ old_page = kho_claim_mem(mem_page);
+
+ /* Recycle the old page, it contains data */
+ free_page((ulong)bpage->page);
+ bpage->page = old_page;
+
+ bpage->write = old_bpage->write;
+ bpage->entries = old_bpage->entries;
+ bpage->real_end = old_bpage->real_end;
+
+ local_inc(&cpu_buffer->pages_touched);
+ } else {
+ kho_return_mem(mem_page);
+ }
+
+ kho_return_mem(mem_bpage);
+ }
+
+out:
+ return err;
+}
+
+static int rb_kho_read_cpu(int tr_off, int cpu, struct rb_kho_cpu *kho)
+{
+ const void *fdt = kho_get_fdt();
+ int mem_len;
+ int err = 0;
+ char *path;
+ int off;
+
+ if (!IS_ENABLED(CONFIG_FTRACE_KHO))
+ return -EINVAL;
+
+ if (!tr_off || !fdt || !kho)
+ return -EINVAL;
+
+ path = kasprintf(GFP_KERNEL, "cpu%x", cpu);
+ if (!path)
+ return -ENOMEM;
+
+ pr_debug("Trying to revive trace cpu '%s'", path);
+
+ off = fdt_subnode_offset(fdt, tr_off, path);
+ if (off < 0) {
+ pr_debug("Could not find '%s' in DT", path);
+ err = -ENOENT;
+ goto out;
+ }
+
+ err = fdt_node_check_compatible(fdt, off, "ftrace,cpu-v1");
+ if (err) {
+ pr_warn("Node '%s' has invalid compatible", path);
+ err = -EINVAL;
+ goto out;
+ }
+
+ kho->mem = fdt_getprop(fdt, off, "mem", &mem_len);
+ if (!kho->mem) {
+ pr_warn("Node '%s' has invalid mem property", path);
+ err = -EINVAL;
+ goto out;
+ }
+
+ kho->nr_mems = mem_len / sizeof(*kho->mem);
+
+ /* Should follow "bpage 0, page 0, bpage 1, page 1, ..." pattern */
+ if ((kho->nr_mems & 1)) {
+ err = -EINVAL;
+ goto out;
+ }
+
+out:
+ kfree(path);
+ return err;
+}
+
+static int __maybe_unused rb_kho_write_cpu(void *fdt, struct trace_buffer *buffer, int cpu)
{
int i = 0;
int err = 0;
@@ -5915,6 +6075,7 @@ static int rb_kho_write_cpu(void *fdt, struct trace_buffer *buffer, int cpu)
return err;
}

+#ifdef CONFIG_FTRACE_KHO
int ring_buffer_kho_write(void *fdt, struct trace_buffer *buffer)
{
int err, i;
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 2ccea4c1965b..94e30dfacfd1 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -9348,16 +9348,46 @@ static struct dentry *trace_instance_dir;
static void
init_tracer_tracefs(struct trace_array *tr, struct dentry *d_tracer);

+static int trace_kho_off_tr(struct trace_array *tr)
+{
+ const char *name = tr->name ? tr->name : "global_trace";
+ const void *fdt = kho_get_fdt();
+ char *path;
+ int off;
+
+ if (!IS_ENABLED(CONFIG_FTRACE_KHO))
+ return 0;
+
+ if (!fdt)
+ return 0;
+
+ path = kasprintf(GFP_KERNEL, "/ftrace/%s", name);
+ if (!path)
+ return -ENOMEM;
+
+ pr_debug("Trying to revive trace buffer '%s'", path);
+
+ off = fdt_path_offset(fdt, path);
+ if (off < 0) {
+ pr_debug("Could not find '%s' in DT", path);
+ off = 0;
+ }
+
+ kfree(path);
+ return off;
+}
+
static int
allocate_trace_buffer(struct trace_array *tr, struct array_buffer *buf, int size)
{
+ int tr_off = trace_kho_off_tr(tr);
enum ring_buffer_flags rb_flags;

rb_flags = tr->trace_flags & TRACE_ITER_OVERWRITE ? RB_FL_OVERWRITE : 0;

buf->tr = tr;

- buf->buffer = ring_buffer_alloc(size, rb_flags);
+ buf->buffer = ring_buffer_alloc_kho(size, rb_flags, tr_off);
if (!buf->buffer)
return -ENOMEM;

--
2.40.1




2023-12-22 19:55:23

by Alexander Graf

[permalink] [raw]
Subject: [PATCH v2 14/17] tracing: Add kho serialization of trace events

Events, and thus their parsing handles in ftrace, have dynamic IDs that
get assigned whenever an event is added to the system. If we want to
parse trace events after kexec, we need to link event IDs back to the
original trace events that existed before we kexec'ed.

There are broadly 2 paths we could take for that:

1) Save full event description across KHO, restore after kexec,
merge identical trace events into a single identifier.
2) Recover the ID of post-kexec added events so they get the same
ID after kexec that they had before kexec

This patch implements the second option. It's simpler and thus less
intrusive. However, it means we cannot fully parse affected events when
the kernel removes or modifies trace events across a KHO kexec.
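
To illustrate the fingerprint idea, here is a rough user-space analogue
using zlib's crc32 (the kernel code below uses crc32_le with a different
initial value, so the two are not bit-identical; the event name and
field sizes are made up):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    /* Rough analogue of event2fp(): hash the event name and field sizes */
    static uint32_t event_fingerprint(const char *name,
                                      const int *field_sizes, int nr_fields)
    {
        uLong crc = crc32(0L, Z_NULL, 0);
        int i;

        crc = crc32(crc, (const Bytef *)name, strlen(name));
        for (i = 0; i < nr_fields; i++)
            crc = crc32(crc, (const Bytef *)&field_sizes[i],
                        sizeof(field_sizes[i]));

        return (uint32_t)crc;
    }

    int main(void)
    {
        int sizes[] = { 8, 4, 4 };

        printf("%08x\n", event_fingerprint("sched_switch", sizes, 3));
        return 0;
    }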

Signed-off-by: Alexander Graf <[email protected]>

---

v1 -> v2:

- Leave anything that requires a name in trace.c to keep buffers
unnamed entities
- Put events as array into a property, use fingerprint instead of
names to identify them
- Reduce footprint without CONFIG_FTRACE_KHO
---
kernel/trace/trace.c | 1 +
kernel/trace/trace_output.c | 89 +++++++++++++++++++++++++++++++++++++
kernel/trace/trace_output.h | 5 +++
3 files changed, 95 insertions(+)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 94e30dfacfd1..b9ce8cf24d02 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -10634,6 +10634,7 @@ static int trace_kho_notifier(struct notifier_block *self,

err |= fdt_begin_node(fdt, "ftrace");
err |= fdt_property(fdt, "compatible", compatible, sizeof(compatible));
+ err |= trace_kho_write_events(fdt);
err |= trace_kho_write_trace_array(fdt, &global_trace);
err |= fdt_end_node(fdt);

diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c
index 3e7fa44dc2b2..7d8815352e20 100644
--- a/kernel/trace/trace_output.c
+++ b/kernel/trace/trace_output.c
@@ -12,6 +12,8 @@
#include <linux/sched/clock.h>
#include <linux/sched/mm.h>
#include <linux/idr.h>
+#include <linux/kexec.h>
+#include <linux/crc32.h>

#include "trace_output.h"

@@ -669,6 +671,93 @@ int trace_print_lat_context(struct trace_iterator *iter)
return !trace_seq_has_overflowed(s);
}

+/**
+ * event2fp - Return fingerprint of an event
+ * @event: The event to fingerprint
+ *
+ * For KHO, we need to match events before and after kexec to recover their
+ * type ids. This function returns a hash that combines an event's name and
+ * all of its fields' lengths.
+ */
+static u32 event2fp(struct trace_event *event)
+{
+ struct ftrace_event_field *field;
+ struct trace_event_call *call;
+ struct list_head *head;
+ const char *name;
+ u32 crc32 = ~0;
+
+ /* Low type numbers are static, nothing to checksum */
+ if (event->type && event->type < __TRACE_LAST_TYPE)
+ return event->type;
+
+ call = container_of(event, struct trace_event_call, event);
+ name = trace_event_name(call);
+ if (name)
+ crc32 = crc32_le(crc32, name, strlen(name));
+
+ head = trace_get_fields(call);
+ list_for_each_entry(field, head, link)
+ crc32 = crc32_le(crc32, (char *)&field->size, sizeof(field->size));
+
+ return crc32;
+}
+
+struct trace_event_map {
+ u32 crc32;
+ u32 type;
+};
+
+static int __maybe_unused _trace_kho_write_events(void *fdt)
+{
+ struct trace_event_call *call;
+ int count = __TRACE_LAST_TYPE - 1;
+ struct trace_event_map *map;
+ int err = 0;
+ int i;
+
+ down_read(&trace_event_sem);
+ /* Allocate an array that we can place all maps into */
+ list_for_each_entry(call, &ftrace_events, list)
+ count++;
+
+ map = vmalloc(count * sizeof(*map));
+ if (!map) {
+ up_read(&trace_event_sem);
+ return -ENOMEM;
+ }
+
+ /* Then fill the array with all crc32 values */
+ count = 0;
+ for (i = 1; i < __TRACE_LAST_TYPE; i++)
+ map[count++] = (struct trace_event_map) {
+ .crc32 = i,
+ .type = i,
+ };
+
+ list_for_each_entry(call, &ftrace_events, list) {
+ struct trace_event *event = &call->event;
+
+ map[count++] = (struct trace_event_map) {
+ .crc32 = event2fp(event),
+ .type = event->type,
+ };
+ }
+ up_read(&trace_event_sem);
+
+ /* And finally write it into a DT variable */
+ err |= fdt_property(fdt, "events", map, count * sizeof(*map));
+
+ vfree(map);
+ return err;
+}
+
+#ifdef CONFIG_FTRACE_KHO
+int trace_kho_write_events(void *fdt)
+{
+ return _trace_kho_write_events(fdt);
+}
+#endif
+
+
/**
* ftrace_find_event - find a registered event
* @type: the type of event to look for
diff --git a/kernel/trace/trace_output.h b/kernel/trace/trace_output.h
index dca40f1f1da4..07481f295436 100644
--- a/kernel/trace/trace_output.h
+++ b/kernel/trace/trace_output.h
@@ -25,6 +25,11 @@ extern enum print_line_t print_event_fields(struct trace_iterator *iter,
extern void trace_event_read_lock(void);
extern void trace_event_read_unlock(void);
extern struct trace_event *ftrace_find_event(int type);
+#ifdef CONFIG_FTRACE_KHO
+extern int trace_kho_write_events(void *fdt);
+#else
+static inline int trace_kho_write_events(void *fdt) { return -EINVAL; }
+#endif

extern enum print_line_t trace_nop_print(struct trace_iterator *iter,
int flags, struct trace_event *event);
--
2.40.1




2023-12-22 19:55:27

by Alexander Graf

[permalink] [raw]
Subject: [PATCH v2 15/17] tracing: Recover trace events from kexec handover

This patch implements all logic necessary to match a new trace event
that we add against preserved trace events from KHO. If we find a match,
we give the new trace event the old event's identifier. That way, trace
read-outs are able to make sense of buffer contents again because the
parsing code for events looks at the same identifiers.
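
The match itself is a linear scan over the { crc32, type } map written
out by the previous patch; sketched below (trace_kho_fill_event_type in
the diff implements this, including events that already carry a static
type):

    u32 crc = event2fp(event);

    for (i = 0; i < nr_entries; i++) {
        if (map[i].crc32 == crc) {
            event->type = map[i].type;  /* reuse the old id */
            break;
        }
    }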

Signed-off-by: Alexander Graf <[email protected]>

---

v1 -> v2:

- make kho_get_fdt() const
- Get events as array from a property, use fingerprint instead of
names to identify events
- Remove ifdefs
---
kernel/trace/trace_output.c | 158 +++++++++++++++++++++++++++++++++++-
1 file changed, 156 insertions(+), 2 deletions(-)

diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c
index 7d8815352e20..937002a204e1 100644
--- a/kernel/trace/trace_output.c
+++ b/kernel/trace/trace_output.c
@@ -24,6 +24,8 @@ DECLARE_RWSEM(trace_event_sem);

static struct hlist_head event_hash[EVENT_HASHSIZE] __read_mostly;

+static bool trace_is_kho_event(int type);
+
enum print_line_t trace_print_bputs_msg_only(struct trace_iterator *iter)
{
struct trace_seq *s = &iter->seq;
@@ -784,7 +786,7 @@ static DEFINE_IDA(trace_event_ida);

static void free_trace_event_type(int type)
{
- if (type >= __TRACE_LAST_TYPE)
+ if (type >= __TRACE_LAST_TYPE && !trace_is_kho_event(type))
ida_free(&trace_event_ida, type);
}

@@ -810,6 +812,156 @@ void trace_event_read_unlock(void)
up_read(&trace_event_sem);
}

+
+/**
+ * trace_kho_get_map - Return the KHO event map
+ * @pmap: Pointer to a trace map array. Will be filled on success.
+ * @plen: Pointer to the length of the map. Will be filled on success.
+ * @unallocated: True if the event does not have an ID yet
+ *
+ * Event types are semi-dynamically generated. To ensure that
+ * their identifiers match before and after kexec with KHO,
+ * we store an event map in the KHO DT. Whenever we need the
+ * map, this function provides it.
+ *
+ * The first time we request a map, this function also walks through it and
+ * reserves all identifiers so that later event registrations find their
+ * identifiers already reserved.
+ */
+static int trace_kho_get_map(const struct trace_event_map **pmap, int *plen,
+ bool unallocated)
+{
+ static const struct trace_event_map *event_map;
+ static int event_map_len;
+ static bool event_map_reserved;
+ const struct trace_event_map *map = NULL;
+ const void *fdt = kho_get_fdt();
+ const char *path = "/ftrace";
+ int off, err, len = 0;
+ int i;
+
+ if (!IS_ENABLED(CONFIG_FTRACE_KHO) || !fdt)
+ return -EINVAL;
+
+ if (event_map) {
+ map = event_map;
+ len = event_map_len;
+ }
+
+ if (!map) {
+ off = fdt_path_offset(fdt, path);
+
+ if (off < 0) {
+ pr_debug("Could not find '%s' in DT", path);
+ return -EINVAL;
+ }
+
+ err = fdt_node_check_compatible(fdt, off, "ftrace-v1");
+ if (err) {
+ pr_warn("Node '%s' has invalid compatible", path);
+ return -EINVAL;
+ }
+
+ map = fdt_getprop(fdt, off, "events", &len);
+ if (!map)
+ return -EINVAL;
+
+ event_map = map;
+ event_map_len = len;
+ }
+
+ if (unallocated && !event_map_reserved) {
+ /*
+ * Reserve all IDs in our IDA. We only have a working IDA later
+ * in boot, so restrict it to when we allocate a dynamic type id
+ * for an event.
+ */
+ for (i = 0; i < len; i += sizeof(*map)) {
+ const struct trace_event_map *imap = (void *)map + i;
+
+ if (imap->type < __TRACE_LAST_TYPE)
+ continue;
+ if (ida_alloc_range(&trace_event_ida, imap->type, imap->type,
+ GFP_KERNEL) != imap->type) {
+ pr_warn("Unable to reserve id %d", imap->type);
+ return -EINVAL;
+ }
+ }
+
+ event_map_reserved = true;
+ }
+
+ *pmap = map;
+ *plen = len;
+
+ return 0;
+}
+
+/**
+ * trace_is_kho_event - returns true if the event type is KHO reserved
+ * @event: the event type to enumerate
+ *
+ * With KHO, we reserve all of the previous kernel's trace event types as
+ * listed in the KHO DT. Then, when we allocate a type, we just reuse the previous
+ * kernel's value. However, that means we have to keep these type identifiers
+ * reserved across the lifetime of the system, because we may get a new event
+ * that matches the old kernel's event fingerprint. This function is a small
+ * helper that allows us to check whether a type ID is in use by KHO.
+ */
+static bool trace_is_kho_event(int type)
+{
+ const struct trace_event_map *map = NULL;
+ int len, i;
+
+ if (trace_kho_get_map(&map, &len, false))
+ return false;
+
+ if (!map)
+ return false;
+
+ for (i = 0; i < len; i += sizeof(*map), map++)
+ if (map->type == type)
+ return true;
+
+ return false;
+}
+
+/**
+ * trace_kho_fill_event_type - restore event type info from KHO
+ * @event: the event to enumerate
+ *
+ * Event types are semi-dynamically generated. To ensure that
+ * their identifiers match before and after kexec with KHO,
+ * let's match up unique fingerprint - either their predetermined
+ * type or their crc32 value - and fill in the respective type
+ * information if we booted with KHO.
+ */
+static bool trace_kho_fill_event_type(struct trace_event *event)
+{
+ const struct trace_event_map *map = NULL;
+ int len = 0, i;
+ u32 crc32;
+
+ if (trace_kho_get_map(&map, &len, !event->type))
+ return false;
+
+ crc32 = event2fp(event);
+
+ for (i = 0; i < len; i += sizeof(*map), map++) {
+ if (map->crc32 == crc32) {
+ if (!map->type)
+ return false;
+
+ event->type = map->type;
+ return true;
+ }
+ }
+
+ pr_debug("Could not find event");
+
+ return false;
+}
+
/**
* register_trace_event - register output for an event type
* @event: the event type to register
@@ -838,7 +990,9 @@ int register_trace_event(struct trace_event *event)
if (WARN_ON(!event->funcs))
goto out;

- if (!event->type) {
+ if (trace_kho_fill_event_type(event)) {
+ pr_debug("Recovered id=%d", event->type);
+ } else if (!event->type) {
event->type = alloc_trace_event_type();
if (!event->type)
goto out;
--
2.40.1




2023-12-22 19:55:44

by Alexander Graf

[permalink] [raw]
Subject: [PATCH v2 16/17] tracing: Add config option for kexec handover

Now that all bits are in place to allow ftrace to pass its trace data
into the next kernel on kexec, let's give users a Kconfig option to
enable the functionality.
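
With the series applied and assuming a kexec-file-load capable kernel
configuration, enabling the feature amounts to the following .config
fragment (CRC32 gets selected automatically):

    CONFIG_KEXEC_KHO=y
    CONFIG_FTRACE_KHO=y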

Signed-off-by: Alexander Graf <[email protected]>

---

v1 -> v2:

- Select crc32
---
kernel/trace/Kconfig | 14 ++++++++++++++
1 file changed, 14 insertions(+)

diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
index 61c541c36596..418a5ae11aac 100644
--- a/kernel/trace/Kconfig
+++ b/kernel/trace/Kconfig
@@ -1169,6 +1169,20 @@ config HIST_TRIGGERS_DEBUG

If unsure, say N.

+config FTRACE_KHO
+ bool "Ftrace Kexec handover support"
+ depends on KEXEC_KHO
+ select CRC32
+ help
+ Enable support for ftrace to pass metadata across kexec so the new
+ kernel continues to use the previous kernel's trace buffers.
+
+ This can be useful when debugging kexec performance or correctness
+ issues: The new kernel can dump the old kernel's trace buffer which
+ contains all events until reboot.
+
+ If unsure, say N.
+
source "kernel/trace/rv/Kconfig"

endif # FTRACE
--
2.40.1




2023-12-22 19:56:00

by Alexander Graf

[permalink] [raw]
Subject: [PATCH v2 17/17] devicetree: Add bindings for ftrace KHO

With ftrace in KHO, we are creating an ABI between old kernel and new
kernel about the state that they transfer. To ensure that we document
that state and catch any breaking change, let's add its schema to the
common devicetree bindings. This way, we can quickly reason about the
state that gets passed.
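
The schemas can be validated locally before submission, e.g. with the
following invocation (as the DT bot's reply below also notes,
DT_SCHEMA_FILES may be pointed at a single schema to speed up the check):

    make dt_binding_check DT_SCHEMA_FILES=Documentation/devicetree/bindings/kho/ftrace/ftrace.yaml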

Signed-off-by: Alexander Graf <[email protected]>
---
.../bindings/kho/ftrace/ftrace-array.yaml | 46 +++++++++++++++
.../bindings/kho/ftrace/ftrace-cpu.yaml | 56 +++++++++++++++++++
.../bindings/kho/ftrace/ftrace.yaml | 48 ++++++++++++++++
3 files changed, 150 insertions(+)
create mode 100644 Documentation/devicetree/bindings/kho/ftrace/ftrace-array.yaml
create mode 100644 Documentation/devicetree/bindings/kho/ftrace/ftrace-cpu.yaml
create mode 100644 Documentation/devicetree/bindings/kho/ftrace/ftrace.yaml

diff --git a/Documentation/devicetree/bindings/kho/ftrace/ftrace-array.yaml b/Documentation/devicetree/bindings/kho/ftrace/ftrace-array.yaml
new file mode 100644
index 000000000000..9960fefc292d
--- /dev/null
+++ b/Documentation/devicetree/bindings/kho/ftrace/ftrace-array.yaml
@@ -0,0 +1,46 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/kho/ftrace/ftrace-array.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Ftrace trace array
+
+maintainers:
+ - Alexander Graf <[email protected]>
+
+properties:
+ compatible:
+ enum:
+ - ftrace,array-v1
+
+ trace_flags:
+ $ref: /schemas/types.yaml#/definitions/uint32
+ description:
+ Bitmap of all the trace flags that were enabled in the trace array at the
+ point of serialization.
+
+# Subnodes will be of type "ftrace,cpu-v1", one each per CPU
+additionalProperties: true
+
+required:
+ - compatible
+ - trace_flags
+
+examples:
+ - |
+ ftrace {
+ compatible = "ftrace-v1";
+ events = <1 1 2 2 3 3>;
+
+ global_trace {
+ compatible = "ftrace,array-v1";
+ trace_flags = < 0x3354601 >;
+
+ cpu0 {
+ compatible = "ftrace,cpu-v1";
+ cpu = < 0x00 >;
+ mem = < 0x101000000ULL 0x38ULL 0x101000100ULL 0x1000ULL 0x101000038ULL 0x38ULL 0x101002000ULL 0x1000ULL>;
+ };
+ };
+ };
diff --git a/Documentation/devicetree/bindings/kho/ftrace/ftrace-cpu.yaml b/Documentation/devicetree/bindings/kho/ftrace/ftrace-cpu.yaml
new file mode 100644
index 000000000000..58c715e93f37
--- /dev/null
+++ b/Documentation/devicetree/bindings/kho/ftrace/ftrace-cpu.yaml
@@ -0,0 +1,56 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/kho/ftrace/ftrace-cpu.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Ftrace per-CPU ring buffer contents
+
+maintainers:
+ - Alexander Graf <[email protected]>
+
+properties:
+ compatible:
+ enum:
+ - ftrace,cpu-v1
+
+ cpu:
+ $ref: /schemas/types.yaml#/definitions/uint32
+ description:
+ CPU number of the CPU that this ring buffer belonged to when it was
+ serialized.
+
+ mem:
+ $ref: /schemas/types.yaml#/definitions/uint32-array
+ description:
+ Array of { u64 phys_addr, u64 len } elements that describe a list of ring
+ buffer pages. Each page consists of two elements. The first element
+ describes the location of the struct buffer_page that contains metadata
+ for a given ring buffer page, such as the ring's head indicator. The
+ second element points to the ring buffer data page which contains the raw
+ trace data.
+
+additionalProperties: false
+
+required:
+ - compatible
+ - cpu
+ - mem
+
+examples:
+ - |
+ ftrace {
+ compatible = "ftrace-v1";
+ events = <1 1 2 2 3 3>;
+
+ global_trace {
+ compatible = "ftrace,array-v1";
+ trace_flags = < 0x3354601 >;
+
+ cpu0 {
+ compatible = "ftrace,cpu-v1";
+ cpu = < 0x00 >;
+ mem = < 0x101000000ULL 0x38ULL 0x101000100ULL 0x1000ULL 0x101000038ULL 0x38ULL 0x101002000ULL 0x1000ULL>;
+ };
+ };
+ };
diff --git a/Documentation/devicetree/bindings/kho/ftrace/ftrace.yaml b/Documentation/devicetree/bindings/kho/ftrace/ftrace.yaml
new file mode 100644
index 000000000000..b87a64843af3
--- /dev/null
+++ b/Documentation/devicetree/bindings/kho/ftrace/ftrace.yaml
@@ -0,0 +1,48 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/kho/ftrace/ftrace.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Ftrace core data
+
+maintainers:
+ - Alexander Graf <[email protected]>
+
+properties:
+ compatible:
+ enum:
+ - ftrace-v1
+
+ events:
+ $ref: /schemas/types.yaml#/definitions/uint32-array
+ description:
+ Array of { u32 crc, u32 type } elements. Each element contains a unique
+ identifier for an event, followed by the identifier that this event had
+ in the previous kernel's trace buffers.
+
+# Other child nodes will be of type "ftrace,array-v1". Each of which describe
+# a trace buffer
+additionalProperties: true
+
+required:
+ - compatible
+ - events
+
+examples:
+ - |
+ ftrace {
+ compatible = "ftrace-v1";
+ events = <1 1 2 2 3 3>;
+
+ global_trace {
+ compatible = "ftrace,array-v1";
+ trace_flags = < 0x3354601 >;
+
+ cpu0 {
+ compatible = "ftrace,cpu-v1";
+ cpu = < 0x00 >;
+ mem = < 0x101000000ULL 0x38ULL 0x101000100ULL 0x1000ULL 0x101000038ULL 0x38ULL 0x101002000ULL 0x1000ULL>;
+ };
+ };
+ };
--
2.40.1




2023-12-22 21:19:53

by Rob Herring (Arm)

[permalink] [raw]
Subject: Re: [PATCH v2 17/17] devicetree: Add bindings for ftrace KHO


On Fri, 22 Dec 2023 19:51:44 +0000, Alexander Graf wrote:
> With ftrace in KHO, we are creating an ABI between old kernel and new
> kernel about the state that they transfer. To ensure that we document
> that state and catch any breaking change, let's add its schema to the
> common devicetree bindings. This way, we can quickly reason about the
> state that gets passed.
>
> Signed-off-by: Alexander Graf <[email protected]>
> ---
> .../bindings/kho/ftrace/ftrace-array.yaml | 46 +++++++++++++++
> .../bindings/kho/ftrace/ftrace-cpu.yaml | 56 +++++++++++++++++++
> .../bindings/kho/ftrace/ftrace.yaml | 48 ++++++++++++++++
> 3 files changed, 150 insertions(+)
> create mode 100644 Documentation/devicetree/bindings/kho/ftrace/ftrace-array.yaml
> create mode 100644 Documentation/devicetree/bindings/kho/ftrace/ftrace-cpu.yaml
> create mode 100644 Documentation/devicetree/bindings/kho/ftrace/ftrace.yaml
>

My bot found errors running 'make DT_CHECKER_FLAGS=-m dt_binding_check'
on your patch (DT_CHECKER_FLAGS is new in v5.13):

yamllint warnings/errors:
./Documentation/devicetree/bindings/kho/ftrace/ftrace-array.yaml:43:111: [warning] line too long (117 > 110 characters) (line-length)
./Documentation/devicetree/bindings/kho/ftrace/ftrace-cpu.yaml:53:111: [warning] line too long (117 > 110 characters) (line-length)
./Documentation/devicetree/bindings/kho/ftrace/ftrace.yaml:45:111: [warning] line too long (117 > 110 characters) (line-length)

dtschema/dtc warnings/errors:
WARNING: Documentation/devicetree/bindings/kho/ftrace/ftrace-array.example.dts:29.25-39: Value 0x0000000101000000 truncated to 0x01000000
WARNING: Documentation/devicetree/bindings/kho/ftrace/ftrace-array.example.dts:29.48-62: Value 0x0000000101000100 truncated to 0x01000100
WARNING: Documentation/devicetree/bindings/kho/ftrace/ftrace-array.example.dts:29.73-87: Value 0x0000000101000038 truncated to 0x01000038
WARNING: Documentation/devicetree/bindings/kho/ftrace/ftrace-array.example.dts:29.96-110: Value 0x0000000101002000 truncated to 0x01002000
WARNING: Documentation/devicetree/bindings/kho/ftrace/ftrace-cpu.example.dts:29.25-39: Value 0x0000000101000000 truncated to 0x01000000
WARNING: Documentation/devicetree/bindings/kho/ftrace/ftrace-cpu.example.dts:29.48-62: Value 0x0000000101000100 truncated to 0x01000100
WARNING: Documentation/devicetree/bindings/kho/ftrace/ftrace-cpu.example.dts:29.73-87: Value 0x0000000101000038 truncated to 0x01000038
WARNING: Documentation/devicetree/bindings/kho/ftrace/ftrace-cpu.example.dts:29.96-110: Value 0x0000000101002000 truncated to 0x01002000
WARNING: Documentation/devicetree/bindings/kho/ftrace/ftrace.example.dts:29.25-39: Value 0x0000000101000000 truncated to 0x01000000
WARNING: Documentation/devicetree/bindings/kho/ftrace/ftrace.example.dts:29.48-62: Value 0x0000000101000100 truncated to 0x01000100
WARNING: Documentation/devicetree/bindings/kho/ftrace/ftrace.example.dts:29.73-87: Value 0x0000000101000038 truncated to 0x01000038
WARNING: Documentation/devicetree/bindings/kho/ftrace/ftrace.example.dts:29.96-110: Value 0x0000000101002000 truncated to 0x01002000

doc reference errors (make refcheckdocs):

See https://patchwork.ozlabs.org/project/devicetree-bindings/patch/[email protected]

The base for the series is generally the latest rc1. A different dependency
should be noted in *this* patch.

If you already ran 'make dt_binding_check' and didn't see the above
error(s), then make sure 'yamllint' is installed and dt-schema is up to
date:

pip3 install dtschema --upgrade

Please check and re-submit after running the above command yourself. Note
that DT_SCHEMA_FILES can be set to your schema file to speed up checking
your schema. However, it must be unset to test all examples with your schema.


2023-12-23 14:31:32

by Krzysztof Kozlowski

[permalink] [raw]
Subject: Re: [PATCH v2 17/17] devicetree: Add bindings for ftrace KHO

On 22/12/2023 20:51, Alexander Graf wrote:
> With ftrace in KHO, we are creating an ABI between old kernel and new
> kernel about the state that they transfer. To ensure that we document
> that state and catch any breaking change, let's add its schema to the
> common devicetree bindings. This way, we can quickly reason about the
> state that gets passed.

Please use scripts/get_maintainers.pl to get a list of necessary people
and lists to CC (and consider the --no-git-fallback argument). It might
happen that the command, when run on an older kernel, gives you outdated
entries. Therefore please be sure you base your patches on a recent Linux
kernel.

Please use subject prefixes matching the subsystem. You can get them for
example with `git log --oneline -- DIRECTORY_OR_FILE` on the directory
your patch is touching.

A nit, subject: drop second/last, redundant "bindings". The
"dt-bindings" prefix is already stating that these are bindings.

>
> Signed-off-by: Alexander Graf <[email protected]>
> ---
> .../bindings/kho/ftrace/ftrace-array.yaml | 46 +++++++++++++++
> .../bindings/kho/ftrace/ftrace-cpu.yaml | 56 +++++++++++++++++++
> .../bindings/kho/ftrace/ftrace.yaml | 48 ++++++++++++++++
> 3 files changed, 150 insertions(+)
> create mode 100644 Documentation/devicetree/bindings/kho/ftrace/ftrace-array.yaml
> create mode 100644 Documentation/devicetree/bindings/kho/ftrace/ftrace-cpu.yaml
> create mode 100644 Documentation/devicetree/bindings/kho/ftrace/ftrace.yaml
>
> diff --git a/Documentation/devicetree/bindings/kho/ftrace/ftrace-array.yaml b/Documentation/devicetree/bindings/kho/ftrace/ftrace-array.yaml
> new file mode 100644
> index 000000000000..9960fefc292d
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/kho/ftrace/ftrace-array.yaml
> @@ -0,0 +1,46 @@
> +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
> +%YAML 1.2
> +---
> +$id: http://devicetree.org/schemas/kho/ftrace/ftrace-array.yaml#
> +$schema: http://devicetree.org/meta-schemas/core.yaml#
> +
> +title: Ftrace trace array
> +

Missing description. Commit msg also does not tell me much. This must
stand on its own and must describe the hardware. Whatever you have in
cover letter, does not matter, especially that you did not Cc us on it.

> +maintainers:
> + - Alexander Graf <[email protected]>
> +
> +properties:
> + compatible:
> + enum:
> + - ftrace,array-v1
> +
> + trace_flags:

Underscores are not allowed. Does not look like generic property.


> + $ref: /schemas/types.yaml#/definitions/uint32
> + description:
> + Bitmap of all the trace flags that were enabled in the trace array at the
> + point of serialization.
> +
> +# Subnodes will be of type "ftrace,cpu-v1", one each per CPU

> +additionalProperties: true

No, this must be false. And it goes after required:


> +
> +required:
> + - compatible
> + - trace_flags
> +
> +examples:
> + - |
> + ftrace {
> + compatible = "ftrace-v1";
> + events = <1 1 2 2 3 3>;
> +
> + global_trace {

Again, no underscores.

> + compatible = "ftrace,array-v1";
> + trace_flags = < 0x3354601 >;
> +
> + cpu0 {
> + compatible = "ftrace,cpu-v1";
> + cpu = < 0x00 >;

Drop redundant spaces.

> + mem = < 0x101000000ULL 0x38ULL 0x101000100ULL 0x1000ULL 0x101000038ULL 0x38ULL 0x101002000ULL 0x1000ULL>;

? Do you see any of such syntax in DTS?

Best regards,
Krzysztof


2023-12-23 23:20:46

by Alexander Graf

[permalink] [raw]
Subject: Re: [PATCH v2 17/17] devicetree: Add bindings for ftrace KHO

Hi Krzysztof!

Thanks a lot for the fast review!

On 23.12.23 15:30, Krzysztof Kozlowski wrote:
> On 22/12/2023 20:51, Alexander Graf wrote:
>> With ftrace in KHO, we are creating an ABI between old kernel and new
>> kernel about the state that they transfer. To ensure that we document
>> that state and catch any breaking change, let's add its schema to the
>> common devicetree bindings. This way, we can quickly reason about the
>> state that gets passed.
> Please use scripts/get_maintainers.pl to get a list of necessary people
> and lists to CC (and consider --no-git-fallback argument). It might
> happen, that command when run on an older kernel, gives you outdated
> entries. Therefore please be sure you base your patches on recent Linux
> kernel.


Ah, this is about directly CC'ing maintainers? I was slightly picky on
CCs since the CC list is already a bit long for this patch set, so I
limited the CC list to mailing lists and people that I know were
directly interested. Happy to CC you next time.


>
> Please use subject prefixes matching the subsystem. You can get them for
> example with `git log --oneline -- DIRECTORY_OR_FILE` on the directory
> your patch is touching.
>
> A nit, subject: drop second/last, redundant "bindings". The
> "dt-bindings" prefix is already stating that these are bindings.


Happy to fix up for v3 :)


>
>> Signed-off-by: Alexander Graf <[email protected]>
>> ---
>> .../bindings/kho/ftrace/ftrace-array.yaml | 46 +++++++++++++++
>> .../bindings/kho/ftrace/ftrace-cpu.yaml | 56 +++++++++++++++++++
>> .../bindings/kho/ftrace/ftrace.yaml | 48 ++++++++++++++++
>> 3 files changed, 150 insertions(+)
>> create mode 100644 Documentation/devicetree/bindings/kho/ftrace/ftrace-array.yaml
>> create mode 100644 Documentation/devicetree/bindings/kho/ftrace/ftrace-cpu.yaml
>> create mode 100644 Documentation/devicetree/bindings/kho/ftrace/ftrace.yaml
>>
>> diff --git a/Documentation/devicetree/bindings/kho/ftrace/ftrace-array.yaml b/Documentation/devicetree/bindings/kho/ftrace/ftrace-array.yaml
>> new file mode 100644
>> index 000000000000..9960fefc292d
>> --- /dev/null
>> +++ b/Documentation/devicetree/bindings/kho/ftrace/ftrace-array.yaml
>> @@ -0,0 +1,46 @@
>> +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
>> +%YAML 1.2
>> +---
>> +$id: http://devicetree.org/schemas/kho/ftrace/ftrace-array.yaml#
>> +$schema: http://devicetree.org/meta-schemas/core.yaml#
>> +
>> +title: Ftrace trace array
>> +
> Missing description. Commit msg also does not tell me much. This must
> stand on its own and must describe the hardware. Whatever you have in
> cover letter, does not matter, especially that you did not Cc us on it.


Alrighty, I'll add descriptions and make the commit message stand on its
own.

For quick reference: KHO is a new mechanism this patch set introduces
which allows Linux to pass arbitrary memory and metadata between kernels
on kexec. I'm reusing FDTs to implement the handover protocol, as
Linux-to-Linux boot communication has very similar properties to
firmware-to-Linux boot communication. So this binding is not about
hardware; it's about preserving Linux subsystem state across kexec.

For more details, please refer to the KHO documentation which is part of
patch 7 of this patch set:
https://lore.kernel.org/lkml/[email protected]/


>
>> +maintainers:
>> + - Alexander Graf <[email protected]>
>> +
>> +properties:
>> + compatible:
>> + enum:
>> + - ftrace,array-v1
>> +
>> + trace_flags:
> Underscores are not allowed. Does not look like generic property.


Let me make it "trace-flags" to not have underscores. Could you please
elaborate on what you mean by generic property?


>
>
>> + $ref: /schemas/types.yaml#/definitions/uint32
>> + description:
>> + Bitmap of all the trace flags that were enabled in the trace array at the
>> + point of serialization.
>> +
>> +# Subnodes will be of type "ftrace,cpu-v1", one each per CPU
>> +additionalProperties: true
> No, this must be false. And it goes after required:


Ok, making it false and adding pattern matches instead for subnodes.
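
For v3, that could look something like this (a sketch; the exact node name
pattern is my assumption):

    # Subnodes are of type "ftrace,cpu-v1", one per CPU
    patternProperties:
      "^cpu[0-9]+$":
        $ref: ftrace-cpu.yaml#

    required:
      - compatible
      - trace-flags

    additionalProperties: false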


>
>
>> +
>> +required:
>> + - compatible
>> + - trace_flags
>> +
>> +examples:
>> + - |
>> + ftrace {
>> + compatible = "ftrace-v1";
>> + events = <1 1 2 2 3 3>;
>> +
>> + global_trace {
> Again, no underscores.


Ok :)


>
>> + compatible = "ftrace,array-v1";
>> + trace_flags = < 0x3354601 >;
>> +
>> + cpu0 {
>> + compatible = "ftrace,cpu-v1";
>> + cpu = < 0x00 >;
> Drop redundant spaces.


I don't understand what you're referring to as redundant spaces? Double
checking, I believe indentation is off for every line below "ftrace {".
Is that what you're referring to? Fixing :)


>
>> + mem = < 0x101000000ULL 0x38ULL 0x101000100ULL 0x1000ULL 0x101000038ULL 0x38ULL 0x101002000ULL 0x1000ULL>;
> ? Do you see any of such syntax in DTS?


I was trying to make it easy for readers to reason about 64bit numbers -
and then potentially extend dtc to consume that new syntax. KHO DTs are
native/little endian, so dtc already has some difficulties interpreting
them, which I'll need to fix up with patches to it eventually :). I'll
change it to something that looks more 32bit'y for now.


Alex





2023-12-24 08:59:21

by Krzysztof Kozlowski

[permalink] [raw]
Subject: Re: [PATCH v2 17/17] devicetree: Add bindings for ftrace KHO

On 24/12/2023 00:20, Alexander Graf wrote:
>>> new file mode 100644
>>> index 000000000000..9960fefc292d
>>> --- /dev/null
>>> +++ b/Documentation/devicetree/bindings/kho/ftrace/ftrace-array.yaml
>>> @@ -0,0 +1,46 @@
>>> +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
>>> +%YAML 1.2
>>> +---
>>> +$id: http://devicetree.org/schemas/kho/ftrace/ftrace-array.yaml#
>>> +$schema: http://devicetree.org/meta-schemas/core.yaml#
>>> +
>>> +title: Ftrace trace array
>>> +
>> Missing description. Commit msg also does not tell me much. This must
>> stand on its own and must describe the hardware. Whatever you have in
>> cover letter, does not matter, especially that you did not Cc us on it.
>
>
> Alrighty, I'll add descriptions and make the commit message stand on its
> own.
>
> For quick reference: KHO is a new mechanism this patch set introduces
> which allows Linux to pass arbitrary memory and metadata between kernels
> on kexec. I'm reusing FDTs to implement the handover protocol, as
> Linux-to-Linux boot communication has very similar properties to
> firmware-to-Linux boot communication. So this binding is not about
> hardware; it's about preserving Linux subsystem state across kexec.

Devicetree is for non-discoverable systems and their hardware, not for
passing arbitrary data between kernels. For me this does not suit DT at
all, please use other ways.


>
> For more details, please refer to the KHO documentation which is part of
> patch 7 of this patch set:
> https://lore.kernel.org/lkml/[email protected]/
>
>
>>
>>> +maintainers:
>>> + - Alexander Graf <[email protected]>
>>> +
>>> +properties:
>>> + compatible:
>>> + enum:
>>> + - ftrace,array-v1
>>> +
>>> + trace_flags:
>> Underscores are not allowed. Does not look like generic property.
>
>
> Let me make it "trace-flags" to not have underscores. Could you please
> elaborate on what you mean by generic property?

A generic property, i.e. one without a vendor prefix, is shared and common to
a group of devices.
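
For illustration (hypothetical property names, just to show the shape):

    trace-flags = <0x3354601>;          /* generic: no prefix, shared meaning */
    ftrace,trace-flags = <0x3354601>;   /* prefixed: specific to one binding */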


>
>
>>
>>
>>> + $ref: /schemas/types.yaml#/definitions/uint32
>>> + description:
>>> + Bitmap of all the trace flags that were enabled in the trace array at the
>>> + point of serialization.
>>> +
>>> +# Subnodes will be of type "ftrace,cpu-v1", one each per CPU
>>> +additionalProperties: true
>> No, this must be false. And it goes after required:
>
>
> Ok, making it false and adding pattern matches instead for subnodes.
>
>
>>
>>
>>> +
>>> +required:
>>> + - compatible
>>> + - trace_flags
>>> +
>>> +examples:
>>> + - |
>>> + ftrace {
>>> + compatible = "ftrace-v1";
>>> + events = <1 1 2 2 3 3>;
>>> +
>>> + global_trace {
>> Again, no underscores.
>
>
> Ok :)
>
>
>>
>>> + compatible = "ftrace,array-v1";
>>> + trace_flags = < 0x3354601 >;
>>> +
>>> + cpu0 {
>>> + compatible = "ftrace,cpu-v1";
>>> + cpu = < 0x00 >;
>> Drop redundant spaces.
>
>
> I don't understand what you're referring to as redundant spaces? Double
> checking, I believe indentation is off for every line below "ftrace {".
> Is that what you're referring to? Fixing :)

Open DTS, some recent, arm64 like Qualcomm. Do you see spaces around <>?
Or open the coding style document... Please do not introduce different
coding style.
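
For example:

    cpu = < 0x00 >;   /* extra spaces inside the brackets - not DTS style */
    cpu = <0x00>;     /* conventional */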

>
>
>>
>>> + mem = < 0x101000000ULL 0x38ULL 0x101000100ULL 0x1000ULL 0x101000038ULL 0x38ULL 0x101002000ULL 0x1000ULL>;
>> ? Do you see any of such syntax in DTS?
>
>
> I was trying to make it easy for readers to reason about 64bit numbers -

64bit numbers are not a problem for DTS reading. Above syntax is.

> and then potentially extend dtc to consume that new syntax. KHO DTs are
> native/little endian, so dtc already has some difficulties interpreting
> them, which I'll need to fix up with patches to it eventually :). I'll
> change it to something that looks more 32bit'y for now.
>


Best regards,
Krzysztof


2024-01-02 14:53:32

by Rob Herring (Arm)

[permalink] [raw]
Subject: Re: [PATCH v2 17/17] devicetree: Add bindings for ftrace KHO

On Sun, Dec 24, 2023 at 12:20:17AM +0100, Alexander Graf wrote:
> Hi Krzysztof!
>
> Thanks a lot for the fast review!
>
> On 23.12.23 15:30, Krzysztof Kozlowski wrote:
> > On 22/12/2023 20:51, Alexander Graf wrote:
> > > With ftrace in KHO, we are creating an ABI between old kernel and new
> > > kernel about the state that they transfer. To ensure that we document
> > > that state and catch any breaking change, let's add its schema to the
> > > common devicetree bindings. This way, we can quickly reason about the
> > > state that gets passed.
> > Please use scripts/get_maintainers.pl to get a list of necessary people
> > and lists to CC (and consider --no-git-fallback argument). It might
> > happen, that command when run on an older kernel, gives you outdated
> > entries. Therefore please be sure you base your patches on recent Linux
> > kernel.

[...]

> >
> > > + mem = < 0x101000000ULL 0x38ULL 0x101000100ULL 0x1000ULL 0x101000038ULL 0x38ULL 0x101002000ULL 0x1000ULL>;
> > ? Do you see any of such syntax in DTS?
>
>
> I was trying to make it easy for readers to reason about 64bit numbers - and
> then potentially extend dtc to consume that new syntax. KHO DTs are
> native/little endian, so dtc already has some difficulties interpreting them,
> which I'll need to fix up with patches to it eventually :). I'll change it
> to something that looks more 32bit'y for now.

"/bits/ 64 <0x0 ...>" is what you are looking for.

Rob

2024-01-02 15:20:36

by Rob Herring (Arm)

[permalink] [raw]
Subject: Re: [PATCH v2 17/17] devicetree: Add bindings for ftrace KHO

On Fri, Dec 22, 2023 at 07:51:44PM +0000, Alexander Graf wrote:
> With ftrace in KHO, we are creating an ABI between old kernel and new
> kernel about the state that they transfer. To ensure that we document
> that state and catch any breaking change, let's add its schema to the
> common devicetree bindings. This way, we can quickly reason about the
> state that gets passed.

Why so much data in DT rather than putting all this information into
memory in your own data structure and DT just has a single property
pointing to that? That's what is done with every other blob of data
passed by kexec.

>
> Signed-off-by: Alexander Graf <[email protected]>
> ---
> .../bindings/kho/ftrace/ftrace-array.yaml | 46 +++++++++++++++
> .../bindings/kho/ftrace/ftrace-cpu.yaml | 56 +++++++++++++++++++
> .../bindings/kho/ftrace/ftrace.yaml | 48 ++++++++++++++++
> 3 files changed, 150 insertions(+)
> create mode 100644 Documentation/devicetree/bindings/kho/ftrace/ftrace-array.yaml
> create mode 100644 Documentation/devicetree/bindings/kho/ftrace/ftrace-cpu.yaml
> create mode 100644 Documentation/devicetree/bindings/kho/ftrace/ftrace.yaml
>
> diff --git a/Documentation/devicetree/bindings/kho/ftrace/ftrace-array.yaml b/Documentation/devicetree/bindings/kho/ftrace/ftrace-array.yaml
> new file mode 100644
> index 000000000000..9960fefc292d
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/kho/ftrace/ftrace-array.yaml
> @@ -0,0 +1,46 @@
> +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
> +%YAML 1.2
> +---
> +$id: http://devicetree.org/schemas/kho/ftrace/ftrace-array.yaml#
> +$schema: http://devicetree.org/meta-schemas/core.yaml#
> +
> +title: Ftrace trace array
> +
> +maintainers:
> + - Alexander Graf <[email protected]>
> +
> +properties:
> + compatible:
> + enum:
> + - ftrace,array-v1
> +
> + trace_flags:
> + $ref: /schemas/types.yaml#/definitions/uint32
> + description:
> + Bitmap of all the trace flags that were enabled in the trace array at the
> + point of serialization.
> +
> +# Subnodes will be of type "ftrace,cpu-v1", one each per CPU

This can be expressed as a schema.

> +additionalProperties: true
> +
> +required:
> + - compatible
> + - trace_flags
> +
> +examples:
> + - |
> + ftrace {
> + compatible = "ftrace-v1";
> + events = <1 1 2 2 3 3>;
> +
> + global_trace {
> + compatible = "ftrace,array-v1";
> + trace_flags = < 0x3354601 >;
> +
> + cpu0 {
> + compatible = "ftrace,cpu-v1";
> + cpu = < 0x00 >;
> + mem = < 0x101000000ULL 0x38ULL 0x101000100ULL 0x1000ULL 0x101000038ULL 0x38ULL 0x101002000ULL 0x1000ULL>;
> + };
> + };
> + };
> diff --git a/Documentation/devicetree/bindings/kho/ftrace/ftrace-cpu.yaml b/Documentation/devicetree/bindings/kho/ftrace/ftrace-cpu.yaml
> new file mode 100644
> index 000000000000..58c715e93f37
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/kho/ftrace/ftrace-cpu.yaml
> @@ -0,0 +1,56 @@
> +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
> +%YAML 1.2
> +---
> +$id: http://devicetree.org/schemas/kho/ftrace/ftrace-cpu.yaml#
> +$schema: http://devicetree.org/meta-schemas/core.yaml#
> +
> +title: Ftrace per-CPU ring buffer contents
> +
> +maintainers:
> + - Alexander Graf <[email protected]>
> +
> +properties:
> + compatible:
> + enum:
> + - ftrace,cpu-v1
> +
> + cpu:
> + $ref: /schemas/types.yaml#/definitions/uint32

'cpu' is already a defined property of type 'phandle'. While we can have
multiple types for a given property name, best practice is to avoid
that. The normal way to refer to a CPU would be a phandle to the CPU
node, but I can see that might not make sense here.

"CPU numbers" on arm64 are 64-bit values as well as they are the
CPU's MPIDR value.
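
For reference, the conventional phandle form would look something like this
(label name assumed):

    cpu = <&cpu0>;   /* phandle to a node under /cpus */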

> + description:
> + CPU number of the CPU that this ring buffer belonged to when it was
> + serialized.
> +
> + mem:

Too vague. Make the property name indicate what's in the memory.

> + $ref: /schemas/types.yaml#/definitions/uint32-array
> + description:
> + Array of { u64 phys_addr, u64 len } elements that describe a list of ring
> + buffer pages. Each page consists of two elements. The first element
> + describes the location of the struct buffer_page that contains metadata
> + for a given ring buffer page, such as the ring's head indicator. The
> + second element points to the ring buffer data page which contains the raw
> + trace data.
> +
> +additionalProperties: false
> +
> +required:
> + - compatible
> + - cpu
> + - mem
> +
> +examples:
> + - |
> + ftrace {
> + compatible = "ftrace-v1";
> + events = <1 1 2 2 3 3>;
> +
> + global_trace {
> + compatible = "ftrace,array-v1";
> + trace_flags = < 0x3354601 >;
> +
> + cpu0 {
> + compatible = "ftrace,cpu-v1";
> + cpu = < 0x00 >;
> + mem = < 0x101000000ULL 0x38ULL 0x101000100ULL 0x1000ULL 0x101000038ULL 0x38ULL 0x101002000ULL 0x1000ULL>;
> + };
> + };
> + };
> diff --git a/Documentation/devicetree/bindings/kho/ftrace/ftrace.yaml b/Documentation/devicetree/bindings/kho/ftrace/ftrace.yaml
> new file mode 100644
> index 000000000000..b87a64843af3
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/kho/ftrace/ftrace.yaml
> @@ -0,0 +1,48 @@
> +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
> +%YAML 1.2
> +---
> +$id: http://devicetree.org/schemas/kho/ftrace/ftrace.yaml#
> +$schema: http://devicetree.org/meta-schemas/core.yaml#
> +
> +title: Ftrace core data
> +
> +maintainers:
> + - Alexander Graf <[email protected]>
> +
> +properties:
> + compatible:
> + enum:
> + - ftrace-v1
> +
> + events:

Again, too vague.

> + $ref: /schemas/types.yaml#/definitions/uint32-array
> + description:
> + Array of { u32 crc, u32 type } elements. Each element contains a unique
> + identifier for an event, followed by the identifier that this event had
> + in the previous kernel's trace buffers.
> +
> +# Other child nodes will be of type "ftrace,array-v1". Each of which describe
> +# a trace buffer
> +additionalProperties: true
> +
> +required:
> + - compatible
> + - events
> +
> +examples:
> + - |
> + ftrace {

This should go under /chosen. Show that here. Start the example with
'/{' to do that and not add the usual boilerplate we add when extracting
the examples.

Also, we don't need 3 examples. Just do 1 complete example here.
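
I.e., something like this sketch (assuming the node lands under /chosen as
suggested above):

    examples:
      - |
        /{
            chosen {
                ftrace {
                    compatible = "ftrace-v1";
                    events = <1 1 2 2 3 3>;
                };
            };
        };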


> + compatible = "ftrace-v1";
> + events = <1 1 2 2 3 3>;
> +
> + global_trace {
> + compatible = "ftrace,array-v1";
> + trace_flags = < 0x3354601 >;
> +
> + cpu0 {
> + compatible = "ftrace,cpu-v1";
> + cpu = < 0x00 >;
> + mem = < 0x101000000ULL 0x38ULL 0x101000100ULL 0x1000ULL 0x101000038ULL 0x38ULL 0x101002000ULL 0x1000ULL>;
> + };
> + };
> + };
> --
> 2.40.1

2024-01-02 19:29:29

by Stanislav Kinsburskii

[permalink] [raw]
Subject: Re: [PATCH v2 02/17] memblock: Declare scratch memory as CMA

On Fri, Dec 22, 2023 at 07:35:52PM +0000, Alexander Graf wrote:
> When we finish populating our memory, we don't want to lose the scratch
> region as memory we can use for useful data. To do that, we mark it as
> CMA memory. That means that any allocation within it only happens with
> movable memory which we can then happily discard for the next kexec.
>
> That way we don't lose the scratch region's memory anymore for
> allocations after boot.
>
> Signed-off-by: Alexander Graf <[email protected]>
>
> ---
>
> v1 -> v2:
>
> - test bot warning fix
> ---
> mm/memblock.c | 30 ++++++++++++++++++++++++++----
> 1 file changed, 26 insertions(+), 4 deletions(-)
>
> diff --git a/mm/memblock.c b/mm/memblock.c
> index e89e6c8f9d75..3700c2c1a96d 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -16,6 +16,7 @@
> #include <linux/kmemleak.h>
> #include <linux/seq_file.h>
> #include <linux/memblock.h>
> +#include <linux/page-isolation.h>
>
> #include <asm/sections.h>
> #include <linux/io.h>
> @@ -1100,10 +1101,6 @@ static bool should_skip_region(struct memblock_type *type,
> if ((flags & MEMBLOCK_SCRATCH) && !memblock_is_scratch(m))
> return true;
>
> - /* Leave scratch memory alone after scratch-only phase */
> - if (!(flags & MEMBLOCK_SCRATCH) && memblock_is_scratch(m))
> - return true;
> -
> return false;
> }
>
> @@ -2153,6 +2150,20 @@ static void __init __free_pages_memory(unsigned long start, unsigned long end)
> }
> }
>
> +#ifdef CONFIG_MEMBLOCK_SCRATCH
> +static void reserve_scratch_mem(phys_addr_t start, phys_addr_t end)

nit: the function name doesn't look reasonable, as nothing in the function
is specific to either reservation or scratch mem.
Perhaps something like "set_mem_cma_type" would be a better fit.

> +{
> + ulong start_pfn = pageblock_start_pfn(PFN_DOWN(start));
> + ulong end_pfn = pageblock_align(PFN_UP(end));
> + ulong pfn;
> +
> + for (pfn = start_pfn; pfn < end_pfn; pfn += pageblock_nr_pages) {
> + /* Mark as CMA to prevent kernel allocations in it */

nit: the comment above looks irrelevant/redundant.

> + set_pageblock_migratetype(pfn_to_page(pfn), MIGRATE_CMA);
> + }
> +}
> +#endif
> +
> static unsigned long __init __free_memory_core(phys_addr_t start,
> phys_addr_t end)
> {
> @@ -2214,6 +2225,17 @@ static unsigned long __init free_low_memory_core_early(void)
>
> memmap_init_reserved_pages();
>
> +#ifdef CONFIG_MEMBLOCK_SCRATCH
> + /*
> + * Mark scratch mem as CMA before we return it. That way we ensure that
> + * no kernel allocations happen on it. That means we can reuse it as
> + * scratch memory again later.
> + */
> + __for_each_mem_range(i, &memblock.memory, NULL, NUMA_NO_NODE,
> + MEMBLOCK_SCRATCH, &start, &end, NULL)
> + reserve_scratch_mem(start, end);
> +#endif
> +
> /*
> * We need to use NUMA_NO_NODE instead of NODE_DATA(0)->node_id
> * because in some case like Node0 doesn't have RAM installed
> --
> 2.40.1

2024-01-02 19:58:57

by Stanislav Kinsburskii

[permalink] [raw]
Subject: Re: [PATCH v2 04/17] kexec: Add KHO parsing support

On Fri, Dec 22, 2023 at 07:35:54PM +0000, Alexander Graf wrote:
> +/**
> + * kho_reserve_previous_mem - Adds all memory reservations into memblocks
> + * and moves us out of the scratch only phase. Must be called after page tables
> + * are initialized and memblock_allow_resize().
> + */
> +void __init kho_reserve_previous_mem(void)
> +{
> + void *mem_virt = __va(mem_phys);
> + int off, err;
> +
> + if (!handover_phys || !mem_phys)
> + return;
> +
> + /*
> + * We reached here because we are running inside a working linear map
> + * that allows us to resize memblocks dynamically. Use the chance and
> + * populate the global fdt pointer
> + */
> + fdt = __va(handover_phys);
> +
> + off = fdt_path_offset(fdt, "/");
> + if (off < 0) {
> + fdt = NULL;
> + return;
> + }
> +
> + err = fdt_node_check_compatible(fdt, off, "kho-v1");
> + if (err) {
> + pr_warn("KHO has invalid compatible, disabling.");

It looks like KHO preserved regions won't be reserved in this case.
Should KHO DT state be destroyed here to prevent reuse of KHO memory
regions upon rollback?

> +
> +void __init kho_populate(phys_addr_t handover_dt_phys, phys_addr_t scratch_phys,
> + u64 scratch_len, phys_addr_t mem_cache_phys,
> + u64 mem_cache_len)
> +{
> + void *handover_dt;
> +
> + /* Determine the real size of the DT */
> + handover_dt = early_memremap(handover_dt_phys, sizeof(struct fdt_header));
> + if (!handover_dt) {
> + pr_warn("setup: failed to memremap kexec FDT (0x%llx)\n", handover_dt_phys);
> + return;
> + }
> +
> + if (fdt_check_header(handover_dt)) {
> + pr_warn("setup: kexec handover FDT is invalid (0x%llx)\n", handover_dt_phys);
> + early_memunmap(handover_dt, PAGE_SIZE);
> + return;
> + }
> +
> + handover_len = fdt_totalsize(handover_dt);
> + handover_phys = handover_dt_phys;
> +
> + /* Reserve the DT so we can still access it in late boot */
> + memblock_reserve(handover_phys, handover_len);
> +
> + /* Reserve the mem cache so we can still access it later */
> + memblock_reserve(mem_cache_phys, mem_cache_len);
> +
> + /*
> + * We pass a safe contiguous block of memory to use for early boot purposes from
> + * the previous kernel so that we can resize the memblock array as needed.
> + */
> + memblock_add(scratch_phys, scratch_len);
> +
> + if (WARN_ON(memblock_mark_scratch(scratch_phys, scratch_len))) {
> + pr_err("Kexec failed to mark the scratch region. Disabling KHO.");
> + handover_len = 0;
> + handover_phys = 0;

Same question here: doesn't all the KHO state get invalidated in case of any
restoration error?


2024-01-02 20:19:04

by Stanislav Kinsburskii

[permalink] [raw]
Subject: Re: [PATCH v2 07/17] kexec: Add documentation for KHO

On Fri, Dec 22, 2023 at 07:35:57PM +0000, Alexander Graf wrote:
> diff --git a/Documentation/kho/concepts.rst b/Documentation/kho/concepts.rst
> new file mode 100644
> index 000000000000..8e4fe8c57865
> --- /dev/null
> +++ b/Documentation/kho/concepts.rst
> @@ -0,0 +1,88 @@
> +.. SPDX-License-Identifier: GPL-2.0-or-later
> +
> +=======================
> +Kexec Handover Concepts
> +=======================
> +
> +Kexec HandOver (KHO) is a mechanism that allows Linux to preserve state -
> +arbitrary properties as well as memory locations - across kexec.
> +
> +It introduces multiple concepts:
> +
> +KHO Device Tree
> +---------------
> +
> +Every KHO kexec carries a KHO specific flattened device tree blob that
> +describes the state of the system. Device drivers can register to KHO to
> +serialize their state before kexec. After KHO, device drivers can read
> +the device tree and extract previous state.
> +
> +KHO only uses the fdt container format and libfdt library, but does not
> +adhere to the same property semantics that normal device trees do: Properties
> +are passed in native endianness and standardized properties like ``regs`` and
> +``ranges`` do not exist, hence there are no ``#...-cells`` properties.
> +
> +KHO introduces a new concept to its device tree: ``mem`` properties. A
> +``mem`` property can inside any subnode in the device tree. When present,

Should it be "property can be" ?

...

> diff --git a/Documentation/kho/usage.rst b/Documentation/kho/usage.rst
> new file mode 100644
> index 000000000000..5efa2a58f9c3
> --- /dev/null
> +++ b/Documentation/kho/usage.rst
> @@ -0,0 +1,57 @@
> +.. SPDX-License-Identifier: GPL-2.0-or-later
> +
> +====================
> +Kexec Handover Usage
> +====================
> +
> +Kexec HandOver (KHO) is a mechanism that allows Linux to preserve state -
> +arbitrary properties as well as memory locations - across kexec.
> +
> +This document expects that you are familiar with the base KHO
> +:ref:`Documentation/kho/concepts.rst <concepts>`. If you have not read
> +them yet, please do so now.
> +
> +Prerequisites
> +-------------
> +
> +KHO is available when the ``CONFIG_KEXEC_KHO`` config option is set to y
> +at compile team. Every KHO producer has its own config option that you

Should it be "at compile time."?


2024-01-03 18:50:04

by Rob Herring

[permalink] [raw]
Subject: Re: [PATCH v2 07/17] kexec: Add documentation for KHO

On Fri, Dec 22, 2023 at 12:52 PM Alexander Graf <[email protected]> wrote:
>
> With KHO in place, let's add documentation that describes what it is and
> how to use it.
>
> Signed-off-by: Alexander Graf <[email protected]>
> ---
> Documentation/kho/concepts.rst | 88 ++++++++++++++++++++++++++++++++
> Documentation/kho/index.rst | 19 +++++++
> Documentation/kho/usage.rst | 57 +++++++++++++++++++++
> Documentation/subsystem-apis.rst | 1 +
> 4 files changed, 165 insertions(+)
> create mode 100644 Documentation/kho/concepts.rst
> create mode 100644 Documentation/kho/index.rst
> create mode 100644 Documentation/kho/usage.rst
>
> diff --git a/Documentation/kho/concepts.rst b/Documentation/kho/concepts.rst
> new file mode 100644
> index 000000000000..8e4fe8c57865
> --- /dev/null
> +++ b/Documentation/kho/concepts.rst
> @@ -0,0 +1,88 @@
> +.. SPDX-License-Identifier: GPL-2.0-or-later
> +
> +=======================
> +Kexec Handover Concepts
> +=======================
> +
> +Kexec HandOver (KHO) is a mechanism that allows Linux to preserve state -
> +arbitrary properties as well as memory locations - across kexec.
> +
> +It introduces multiple concepts:
> +
> +KHO Device Tree
> +---------------
> +
> +Every KHO kexec carries a KHO specific flattened device tree blob that
> +describes the state of the system. Device drivers can register to KHO to
> +serialize their state before kexec. After KHO, device drivers can read
> +the device tree and extract previous state.

How does this work with kexec when there is also the FDT for the h/w?
The h/w FDT has a /chosen property pointing to this FDT blob?

> +
> +KHO only uses the fdt container format and libfdt library, but does not
> +adhere to the same property semantics that normal device trees do: Properties
> +are passed in native endianness and standardized properties like ``regs`` and
> +``ranges`` do not exist, hence there are no ``#...-cells`` properties.

I think native endianness is asking for trouble. libfdt would need
different swap functions here than elsewhere in the kernel for example
which wouldn't even work. So you are just crossing your fingers that
you aren't using any libfdt functions that swap. And when I sync
dtc/libfdt and that changes, I might break you.

Also, if you want to dump the FDT and do a dtc DTB->DTS pass, it is
not going to be too readable given that it outputs swapped 32-bit values
for anything that's a 4-byte multiple.
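
To make the contrast concrete, a sketch: a compliant reader goes through
libfdt's byte-swapping accessors, while native-endian data would have to be
read raw:

    /* given a pointer p to a 4-byte property payload: */
    u32 std = fdt32_to_cpu(*(const fdt32_t *)p);  /* normal FDT: big-endian */
    u32 kho = *(const u32 *)p;                    /* native-endian KHO: no swap */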

> +
> +KHO introduces a new concept to its device tree: ``mem`` properties. A
> +``mem`` property can inside any subnode in the device tree. When present,
> +it contains an array of physical memory ranges that the new kernel must mark
> +as reserved on boot. It is recommended, but not required, to make these ranges
> +as physically contiguous as possible to reduce the number of array elements ::
> +
> + struct kho_mem {
> + __u64 addr;
> + __u64 len;
> + };
> +
> +After boot, drivers can call the kho subsystem to transfer ownership of memory
> +that was reserved via a ``mem`` property to themselves to continue using memory
> +from the previous execution.
> +
> +The KHO device tree follows the in-Linux schema requirements. Any element in
> +the device tree is documented via device tree schema yamls that explain what
> +data gets transferred.

If this is all separate, then I think the schemas should be too. And
then from my (DT maintainer) perspective, you can do whatever you want
here (like FIT images). The dtschema tools are pretty much only geared
for "normal" DTs. A couple of problems come to mind. You can't exclude
or change standard properties. The decoding of the DTB to run
validation assumes big endian. We could probably split things up a
bit, but you may be better off just using jsonschema directly. I'm not
even sure running validation here would be that valuable. You have 1
source of code generating the DT and 1 consumer. Yes, there's
different kernel versions to deal with, but it's not 100s of people
creating 1000s of DTs with 100s of nodes.

You might look at the netlink stuff which is using its own yaml syntax
to generate code and jsonschema is used to validate the yaml.

Rob

2024-01-15 13:28:13

by Alexander Graf

[permalink] [raw]
Subject: Re: [PATCH v2 04/17] kexec: Add KHO parsing support


On 01.01.24 04:33, Stanislav Kinsburskii wrote:
> On Fri, Dec 22, 2023 at 07:35:54PM +0000, Alexander Graf wrote:
>> +/**
>> + * kho_reserve_previous_mem - Adds all memory reservations into memblocks
>> + * and moves us out of the scratch only phase. Must be called after page tables
>> + * are initialized and memblock_allow_resize().
>> + */
>> +void __init kho_reserve_previous_mem(void)
>> +{
>> + void *mem_virt = __va(mem_phys);
>> + int off, err;
>> +
>> + if (!handover_phys || !mem_phys)
>> + return;
>> +
>> + /*
>> + * We reached here because we are running inside a working linear map
>> + * that allows us to resize memblocks dynamically. Use the chance and
>> + * populate the global fdt pointer
>> + */
>> + fdt = __va(handover_phys);
>> +
>> + off = fdt_path_offset(fdt, "/");
>> + if (off < 0) {
>> + fdt = NULL;
>> + return;
>> + }
>> +
>> + err = fdt_node_check_compatible(fdt, off, "kho-v1");
>> + if (err) {
>> + pr_warn("KHO has invalid compatible, disabling.");
> It looks like KHO preserved regions won't be reserved in this case.
> Should KHO DT state be destroyed here to prevent reuse of KHO memory
> regions upon rollback?


Good catch. I'll set fdt to NULL in that case in v3.
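
I.e., roughly (a sketch of that change):

    err = fdt_node_check_compatible(fdt, off, "kho-v1");
    if (err) {
        pr_warn("KHO has invalid compatible, disabling.");
        fdt = NULL;   /* drop the handover state so nothing reuses it */
        return;
    }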


>
>> +
>> +void __init kho_populate(phys_addr_t handover_dt_phys, phys_addr_t scratch_phys,
>> + u64 scratch_len, phys_addr_t mem_cache_phys,
>> + u64 mem_cache_len)
>> +{
>> + void *handover_dt;
>> +
>> + /* Determine the real size of the DT */
>> + handover_dt = early_memremap(handover_dt_phys, sizeof(struct fdt_header));
>> + if (!handover_dt) {
>> + pr_warn("setup: failed to memremap kexec FDT (0x%llx)\n", handover_dt_phys);
>> + return;
>> + }
>> +
>> + if (fdt_check_header(handover_dt)) {
>> + pr_warn("setup: kexec handover FDT is invalid (0x%llx)\n", handover_dt_phys);
>> + early_memunmap(handover_dt, PAGE_SIZE);
>> + return;
>> + }
>> +
>> + handover_len = fdt_totalsize(handover_dt);
>> + handover_phys = handover_dt_phys;
>> +
>> + /* Reserve the DT so we can still access it in late boot */
>> + memblock_reserve(handover_phys, handover_len);
>> +
>> + /* Reserve the mem cache so we can still access it later */
>> + memblock_reserve(mem_cache_phys, mem_cache_len);
>> +
>> + /*
>> + * We pass a safe contiguous block of memory to use for early boot purposes from
>> + * the previous kernel so that we can resize the memblock array as needed.
>> + */
>> + memblock_add(scratch_phys, scratch_len);
>> +
>> + if (WARN_ON(memblock_mark_scratch(scratch_phys, scratch_len))) {
>> + pr_err("Kexec failed to mark the scratch region. Disabling KHO.");
>> + handover_len = 0;
>> + handover_phys = 0;
> Same question here: doesn't all the KHO state get invalidated in case of any
> restoration error?


It does, which is what the error case here does, no? Or are you
referring to the fact that we're not unrolling the memblock
reservations? If we can't mark the scratch region, I'd rather leave
everything else alone. It means the scratch region is in a hole, which
should never happen.


Alex






2024-01-17 13:57:33

by Alexander Graf

[permalink] [raw]
Subject: Re: [PATCH v2 17/17] devicetree: Add bindings for ftrace KHO

Hey Rob,

Thanks a lot for taking the time to review!

On 02.01.24 16:20, Rob Herring wrote:
> On Fri, Dec 22, 2023 at 07:51:44PM +0000, Alexander Graf wrote:
>> With ftrace in KHO, we are creating an ABI between old kernel and new
>> kernel about the state that they transfer. To ensure that we document
>> that state and catch any breaking change, let's add its schema to the
>> common devicetree bindings. This way, we can quickly reason about the
>> state that gets passed.
> Why so much data in DT rather than putting all this information into
> memory in your own data structure and DT just has a single property
> pointing to that? That's what is done with every other blob of data
> passed by kexec.


This is our own data structure for KHO that just happens to again
contain a DT structure. The reason is simple: I want a unified,
versioned, introspectable data format that is cross-platform, so you
don't need to touch architecture-specific boot passing logic every
time you want to add a tiny piece of data.


>
>> Signed-off-by: Alexander Graf <[email protected]>
>> ---
>> .../bindings/kho/ftrace/ftrace-array.yaml | 46 +++++++++++++++
>> .../bindings/kho/ftrace/ftrace-cpu.yaml | 56 +++++++++++++++++++
>> .../bindings/kho/ftrace/ftrace.yaml | 48 ++++++++++++++++
>> 3 files changed, 150 insertions(+)
>> create mode 100644 Documentation/devicetree/bindings/kho/ftrace/ftrace-array.yaml
>> create mode 100644 Documentation/devicetree/bindings/kho/ftrace/ftrace-cpu.yaml
>> create mode 100644 Documentation/devicetree/bindings/kho/ftrace/ftrace.yaml
>>
>> diff --git a/Documentation/devicetree/bindings/kho/ftrace/ftrace-array.yaml b/Documentation/devicetree/bindings/kho/ftrace/ftrace-array.yaml
>> new file mode 100644
>> index 000000000000..9960fefc292d
>> --- /dev/null
>> +++ b/Documentation/devicetree/bindings/kho/ftrace/ftrace-array.yaml
>> @@ -0,0 +1,46 @@
>> +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
>> +%YAML 1.2
>> +---
>> +$id: http://devicetree.org/schemas/kho/ftrace/ftrace-array.yaml#
>> +$schema: http://devicetree.org/meta-schemas/core.yaml#
>> +
>> +title: Ftrace trace array
>> +
>> +maintainers:
>> + - Alexander Graf <[email protected]>
>> +
>> +properties:
>> + compatible:
>> + enum:
>> + - ftrace,array-v1
>> +
>> + trace_flags:
>> + $ref: /schemas/types.yaml#/definitions/uint32
>> + description:
>> + Bitmap of all the trace flags that were enabled in the trace array at the
>> + point of serialization.
>> +
>> +# Subnodes will be of type "ftrace,cpu-v1", one each per CPU
> This can be expressed as a schema.


Could you please give me a few more hints here? I'm not sure I understand how :)


>
>> +additionalProperties: true
>> +
>> +required:
>> + - compatible
>> + - trace_flags
>> +
>> +examples:
>> + - |
>> + ftrace {
>> + compatible = "ftrace-v1";
>> + events = <1 1 2 2 3 3>;
>> +
>> + global_trace {
>> + compatible = "ftrace,array-v1";
>> + trace_flags = < 0x3354601 >;
>> +
>> + cpu0 {
>> + compatible = "ftrace,cpu-v1";
>> + cpu = < 0x00 >;
>> + mem = < 0x101000000ULL 0x38ULL 0x101000100ULL 0x1000ULL 0x101000038ULL 0x38ULL 0x101002000ULL 0x1000ULL>;
>> + };
>> + };
>> + };
>> diff --git a/Documentation/devicetree/bindings/kho/ftrace/ftrace-cpu.yaml b/Documentation/devicetree/bindings/kho/ftrace/ftrace-cpu.yaml
>> new file mode 100644
>> index 000000000000..58c715e93f37
>> --- /dev/null
>> +++ b/Documentation/devicetree/bindings/kho/ftrace/ftrace-cpu.yaml
>> @@ -0,0 +1,56 @@
>> +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
>> +%YAML 1.2
>> +---
>> +$id: http://devicetree.org/schemas/kho/ftrace/ftrace-cpu.yaml#
>> +$schema: http://devicetree.org/meta-schemas/core.yaml#
>> +
>> +title: Ftrace per-CPU ring buffer contents
>> +
>> +maintainers:
>> + - Alexander Graf <[email protected]>
>> +
>> +properties:
>> + compatible:
>> + enum:
>> + - ftrace,cpu-v1
>> +
>> + cpu:
>> + $ref: /schemas/types.yaml#/definitions/uint32
> 'cpu' is already a defined property of type 'phandle'. While we can have
> multiple types for a given property name, best practice is to avoid
> that. The normal way to refer to a CPU would be a phandle to the CPU
> node, but I can see that might not make sense here.
>
> "CPU numbers" on arm64 are 64-bit values as well as they are the
> CPU's MPIDR value.


Here we're looking at the Linux internal CPU numbering, which I believe
does not have to use the MPIDR value?


>
>> + description:
>> + CPU number of the CPU that this ring buffer belonged to when it was
>> + serialized.
>> +
>> + mem:
> Too vague. Make the property name indicate what's in the memory.


"mem" is a generic property for every node in the KHO DT that contains a
<u64 phys_start, u64 size> array that describes memory to pass over. I
use it in generic code so that we don't need to do memory reservations
individually per node. That means I can't change the name here.


>
>> + $ref: /schemas/types.yaml#/definitions/uint32-array
>> + description:
>> + Array of { u64 phys_addr, u64 len } elements that describe a list of ring
>> + buffer pages. Each page consists of two elements. The first element
>> + describes the location of the struct buffer_page that contains metadata
>> + for a given ring buffer page, such as the ring's head indicator. The
>> + second element points to the ring buffer data page which contains the raw
>> + trace data.
>> +
>> +additionalProperties: false
>> +
>> +required:
>> + - compatible
>> + - cpu
>> + - mem
>> +
>> +examples:
>> + - |
>> + ftrace {
>> + compatible = "ftrace-v1";
>> + events = <1 1 2 2 3 3>;
>> +
>> + global_trace {
>> + compatible = "ftrace,array-v1";
>> + trace_flags = < 0x3354601 >;
>> +
>> + cpu0 {
>> + compatible = "ftrace,cpu-v1";
>> + cpu = < 0x00 >;
>> + mem = < 0x101000000ULL 0x38ULL 0x101000100ULL 0x1000ULL 0x101000038ULL 0x38ULL 0x101002000ULL 0x1000ULL>;
>> + };
>> + };
>> + };
>> diff --git a/Documentation/devicetree/bindings/kho/ftrace/ftrace.yaml b/Documentation/devicetree/bindings/kho/ftrace/ftrace.yaml
>> new file mode 100644
>> index 000000000000..b87a64843af3
>> --- /dev/null
>> +++ b/Documentation/devicetree/bindings/kho/ftrace/ftrace.yaml
>> @@ -0,0 +1,48 @@
>> +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
>> +%YAML 1.2
>> +---
>> +$id: http://devicetree.org/schemas/kho/ftrace/ftrace.yaml#
>> +$schema: http://devicetree.org/meta-schemas/core.yaml#
>> +
>> +title: Ftrace core data
>> +
>> +maintainers:
>> + - Alexander Graf <[email protected]>
>> +
>> +properties:
>> + compatible:
>> + enum:
>> + - ftrace-v1
>> +
>> + events:
> Again, too vague.
>
>> + $ref: /schemas/types.yaml#/definitions/uint32-array
>> + description:
>> + Array of { u32 crc, u32 type } elements. Each element contains a unique
>> + identifier for an event, followed by the identifier that this event had
>> + in the previous kernel's trace buffers.
>> +
>> +# Other child nodes will be of type "ftrace,array-v1". Each of which describe
>> +# a trace buffer
>> +additionalProperties: true
>> +
>> +required:
>> + - compatible
>> + - events
>> +
>> +examples:
>> + - |
>> + ftrace {
> This should go under /chosen. Show that here. Start the example with


It can't go under /chosen because x86 doesn't have /chosen :). This is
not a device DT, it's a KHO DT.


> '/{' to do that and not add the usual boilerplate we add when extracting
> the examples.


What exact difference do /{ and ftrace { make?


>
> Also, we don't need 3 examples. Just do 1 complete example here.


Great idea :)


>
>
>> + compatible = "ftrace-v1";
>> + events = <1 1 2 2 3 3>;
>> +
>> + global_trace {
>> + compatible = "ftrace,array-v1";
>> + trace_flags = < 0x3354601 >;
>> +
>> + cpu0 {
>> + compatible = "ftrace,cpu-v1";
>> + cpu = < 0x00 >;
>> + mem = < 0x101000000ULL 0x38ULL 0x101000100ULL 0x1000ULL 0x101000038ULL 0x38ULL 0x101002000ULL 0x1000ULL>;
>> + };
>> + };
>> + };
>> --
>> 2.40.1
>>






2024-01-17 14:02:20

by Alexander Graf

[permalink] [raw]
Subject: Re: [PATCH v2 07/17] kexec: Add documentation for KHO


On 03.01.24 19:48, Rob Herring wrote:
>
> On Fri, Dec 22, 2023 at 12:52 PM Alexander Graf <[email protected]> wrote:
>> With KHO in place, let's add documentation that describes what it is and
>> how to use it.
>>
>> Signed-off-by: Alexander Graf <[email protected]>
>> ---
>> Documentation/kho/concepts.rst | 88 ++++++++++++++++++++++++++++++++
>> Documentation/kho/index.rst | 19 +++++++
>> Documentation/kho/usage.rst | 57 +++++++++++++++++++++
>> Documentation/subsystem-apis.rst | 1 +
>> 4 files changed, 165 insertions(+)
>> create mode 100644 Documentation/kho/concepts.rst
>> create mode 100644 Documentation/kho/index.rst
>> create mode 100644 Documentation/kho/usage.rst
>>
>> diff --git a/Documentation/kho/concepts.rst b/Documentation/kho/concepts.rst
>> new file mode 100644
>> index 000000000000..8e4fe8c57865
>> --- /dev/null
>> +++ b/Documentation/kho/concepts.rst
>> @@ -0,0 +1,88 @@
>> +.. SPDX-License-Identifier: GPL-2.0-or-later
>> +
>> +=======================
>> +Kexec Handover Concepts
>> +=======================
>> +
>> +Kexec HandOver (KHO) is a mechanism that allows Linux to preserve state -
>> +arbitrary properties as well as memory locations - across kexec.
>> +
>> +It introduces multiple concepts:
>> +
>> +KHO Device Tree
>> +---------------
>> +
>> +Every KHO kexec carries a KHO specific flattened device tree blob that
>> +describes the state of the system. Device drivers can register to KHO to
>> +serialize their state before kexec. After KHO, device drivers can read
>> +the device tree and extract previous state.
> How does this work with kexec when there is also the FDT for the h/w?
> The h/w FDT has a /chosen property pointing to this FDT blob?


Yep, exactly.


>
>> +
>> +KHO only uses the fdt container format and libfdt library, but does not
>> +adhere to the same property semantics that normal device trees do: Properties
>> +are passed in native endianness and standardized properties like ``regs`` and
>> +``ranges`` do not exist, hence there are no ``#...-cells`` properties.
> I think native endianness is asking for trouble. libfdt would need
> different swap functions here than elsewhere in the kernel for example
> which wouldn't even work. So you are just crossing your fingers that
> you aren't using any libfdt functions that swap. And when I sync
> dtc/libfdt and that changes, I might break you.
>
> Also, if you want to dump the FDT and do a dtc DTB->DTS pass, it is
> not going to be too readable given that it outputs swapped 32-bit values
> for anything that's a 4-byte multiple.


Yeah, but big endian these days is just a complete waste of brain and
cpu cycles :). And yes, I don't really want to use any libfdt helper
functions to read data. I use it only to give me the raw data and take
it from there.


>
>> +
>> +KHO introduces a new concept to its device tree: ``mem`` properties. A
>> +``mem`` property can inside any subnode in the device tree. When present,
>> +it contains an array of physical memory ranges that the new kernel must mark
>> +as reserved on boot. It is recommended, but not required, to make these ranges
>> +as physically contiguous as possible to reduce the number of array elements ::
>> +
>> + struct kho_mem {
>> + __u64 addr;
>> + __u64 len;
>> + };
>> +
>> +After boot, drivers can call the kho subsystem to transfer ownership of memory
>> +that was reserved via a ``mem`` property to themselves to continue using memory
>> +from the previous execution.
>> +
>> +The KHO device tree follows the in-Linux schema requirements. Any element in
>> +the device tree is documented via device tree schema yamls that explain what
>> +data gets transferred.
> If this is all separate, then I think the schemas should be too. And
> then from my (DT maintainer) perspective, you can do whatever you want
> here (like FIT images). The dtschema tools are pretty much only geared
> for "normal" DTs. A couple of problems come to mind. You can't exclude
> or change standard properties. The decoding of the DTB to run
> validation assumes big endian. We could probably split things up a
> bit, but you may be better off just using jsonschema directly. I'm not
> even sure running validation here would be that valuable. You have 1
> source of code generating the DT and 1 consumer. Yes, there's
> different kernel versions to deal with, but it's not 100s of people
> creating 1000s of DTs with 100s of nodes.
>
> You might look at the netlink stuff which is using its own yaml syntax
> to generate code and jsonschema is used to validate the yaml.


I'm currently a lot more interested in the documentation aspect than in
the validation, yeah. So I think for v3, I'll just throw the schemas
into the Documentation/kho directory without any validation. We can
worry about that later :)

Thanks a lot again for the review!


Alex







2024-01-17 16:55:10

by Rob Herring

[permalink] [raw]
Subject: Re: [PATCH v2 07/17] kexec: Add documentation for KHO

On Wed, Jan 17, 2024 at 8:02 AM Alexander Graf <[email protected]> wrote:
>
>
> On 03.01.24 19:48, Rob Herring wrote:
> >
> > On Fri, Dec 22, 2023 at 12:52 PM Alexander Graf <[email protected]> wrote:
> >> With KHO in place, let's add documentation that describes what it is and
> >> how to use it.
> >>
> >> Signed-off-by: Alexander Graf <[email protected]>
> >> ---
> >> Documentation/kho/concepts.rst | 88 ++++++++++++++++++++++++++++++++
> >> Documentation/kho/index.rst | 19 +++++++
> >> Documentation/kho/usage.rst | 57 +++++++++++++++++++++
> >> Documentation/subsystem-apis.rst | 1 +
> >> 4 files changed, 165 insertions(+)
> >> create mode 100644 Documentation/kho/concepts.rst
> >> create mode 100644 Documentation/kho/index.rst
> >> create mode 100644 Documentation/kho/usage.rst
> >>
> >> diff --git a/Documentation/kho/concepts.rst b/Documentation/kho/concepts.rst
> >> new file mode 100644
> >> index 000000000000..8e4fe8c57865
> >> --- /dev/null
> >> +++ b/Documentation/kho/concepts.rst
> >> @@ -0,0 +1,88 @@
> >> +.. SPDX-License-Identifier: GPL-2.0-or-later
> >> +
> >> +=======================
> >> +Kexec Handover Concepts
> >> +=======================
> >> +
> >> +Kexec HandOver (KHO) is a mechanism that allows Linux to preserve state -
> >> +arbitrary properties as well as memory locations - across kexec.
> >> +
> >> +It introduces multiple concepts:
> >> +
> >> +KHO Device Tree
> >> +---------------
> >> +
> >> +Every KHO kexec carries a KHO specific flattened device tree blob that
> >> +describes the state of the system. Device drivers can register to KHO to
> >> +serialize their state before kexec. After KHO, device drivers can read
> >> +the device tree and extract previous state.

Can you avoid calling anything "device tree" as much as possible? We
can't avoid that the format is FDT/DTB, but otherwise none of this is
Devicetree as most folks know it. Sure, there can be trees of devices
which are not Devicetree, but this is neither. You could have used
BSON or any hierarchical key-value pair serialization format just as
easily (if we already had a parser in the kernel).

> > How does this work with kexec when there is also the FDT for the h/w?
> > The h/w FDT has a /chosen property pointing to this FDT blob?
>
>
> Yep, exactly.

Those properties need to be documented here[1].

[...]

> >> +KHO introduces a new concept to its device tree: ``mem`` properties. A
> >> +``mem`` property can inside any subnode in the device tree. When present,
> >> +it contains an array of physical memory ranges that the new kernel must mark
> >> +as reserved on boot. It is recommended, but not required, to make these ranges
> >> +as physically contiguous as possible to reduce the number of array elements ::
> >> +
> >> + struct kho_mem {
> >> + __u64 addr;
> >> + __u64 len;
> >> + };
> >> +
> >> +After boot, drivers can call the kho subsystem to transfer ownership of memory
> >> +that was reserved via a ``mem`` property to themselves to continue using memory
> >> +from the previous execution.
> >> +
> >> +The KHO device tree follows the in-Linux schema requirements. Any element in
> >> +the device tree is documented via device tree schema yamls that explain what
> >> +data gets transferred.
> > If this is all separate, then I think the schemas should be too. And
> > then from my (DT maintainer) perspective, you can do whatever you want
> > here (like FIT images). The dtschema tools are pretty much only geared
> > for "normal" DTs. A couple of problems come to mind. You can't exclude
> > or change standard properties. The decoding of the DTB to run
> > validation assumes big endian. We could probably split things up a
> > bit, but you may be better off just using jsonschema directly. I'm not
> > even sure running validation here would be that valuable. You have 1
> > source of code generating the DT and 1 consumer. Yes, there's
> > different kernel versions to deal with, but it's not 100s of people
> > creating 1000s of DTs with 100s of nodes.
> >
> > You might look at the netlink stuff which is using its own yaml syntax
> > to generate code and jsonschema is used to validate the yaml.
>
>
> I'm currently a lot more interested in the documentation aspect than in
> the validation, yeah. So I think for v3, I'll just throw the schemas
> into the Documentation/kho directory without any validation. We can
> worry about that later :)

I'll regret that when I get patches fixing them, but okay.

Rob

[1] https://github.com/devicetree-org/dt-schema/blob/main/dtschema/schemas/chosen.yaml

2024-01-17 17:00:43

by Alexander Graf

[permalink] [raw]
Subject: Re: [PATCH v2 07/17] kexec: Add documentation for KHO


On 17.01.24 17:54, Rob Herring wrote:
> On Wed, Jan 17, 2024 at 8:02 AM Alexander Graf <[email protected]> wrote:
>>
>> On 03.01.24 19:48, Rob Herring wrote:
>>> On Fri, Dec 22, 2023 at 12:52 PM Alexander Graf <[email protected]> wrote:
>>>> With KHO in place, let's add documentation that describes what it is and
>>>> how to use it.
>>>>
>>>> Signed-off-by: Alexander Graf <[email protected]>
>>>> ---
>>>> Documentation/kho/concepts.rst | 88 ++++++++++++++++++++++++++++++++
>>>> Documentation/kho/index.rst | 19 +++++++
>>>> Documentation/kho/usage.rst | 57 +++++++++++++++++++++
>>>> Documentation/subsystem-apis.rst | 1 +
>>>> 4 files changed, 165 insertions(+)
>>>> create mode 100644 Documentation/kho/concepts.rst
>>>> create mode 100644 Documentation/kho/index.rst
>>>> create mode 100644 Documentation/kho/usage.rst
>>>>
>>>> diff --git a/Documentation/kho/concepts.rst b/Documentation/kho/concepts.rst
>>>> new file mode 100644
>>>> index 000000000000..8e4fe8c57865
>>>> --- /dev/null
>>>> +++ b/Documentation/kho/concepts.rst
>>>> @@ -0,0 +1,88 @@
>>>> +.. SPDX-License-Identifier: GPL-2.0-or-later
>>>> +
>>>> +=======================
>>>> +Kexec Handover Concepts
>>>> +=======================
>>>> +
>>>> +Kexec HandOver (KHO) is a mechanism that allows Linux to preserve state -
>>>> +arbitrary properties as well as memory locations - across kexec.
>>>> +
>>>> +It introduces multiple concepts:
>>>> +
>>>> +KHO Device Tree
>>>> +---------------
>>>> +
>>>> +Every KHO kexec carries a KHO specific flattened device tree blob that
>>>> +describes the state of the system. Device drivers can register to KHO to
>>>> +serialize their state before kexec. After KHO, device drivers can read
>>>> +the device tree and extract previous state.
> Can you avoid calling anything "device tree" as much as possible. We
> can't avoid the format is FDT/DTB, but otherwise none of this is
> Devicetree as most folks know it. Sure, there can be trees of devices
> which are not Devicetree, but this is neither. You could have used
> BSON or any hierarchical key-value pair serialization format just as
> easily (if we already had a parser in the kernel).


I understand and agree - it's been confusing to pretty much everyone who
was looking at KHO so far. Unfortunately I'm terrible at naming. Do you
happen to have a good suggestion? :)


>
>>> How does this work with kexec when there is also the FDT for the h/w?
>>> The h/w FDT has a /chosen property pointing to this FDT blob?
>>
>> Yep, exactly.
> Those properties need to be documented here[1].


Oooh, thanks a lot for the pointer! I'll add them :)


Alex




