This is a preparatory patch set for adding kexec_file support on arm64.
It was originally included in an arm64 patch set[1], but Philipp is also
working on kexec_file support for s390[2] and some changes now conflict.
So these common parts were extracted and put into a separate patch set for
better integration. What's more, following Dave's comment, my original
patch #4 was split into a few small chunks for easier review.
As such, the resulting code is basically identical to my original, and
the only *visible* differences are:
* the renaming of _kexec_kernel_image_probe() and
  _kimage_file_post_load_cleanup()
* a change to the type of one of the arguments of prepare_elf64_headers()
Those, unfortunately, require a couple of trivial changes to the rest
(#1, #6 to #13) of my arm64 kexec_file patch set[1].
Patch #1 makes the use of purgatory optional, which is particularly useful
for arm64.
Patch#2 commonalizes arch_kexec_kernel_{image_probe, image_load,
verify_sig}() and arch_kimage_file_post_load_cleanup() across architectures.
Patches #3-#7 are intended to generalize prepare_elf64_headers(), along with
exclude_mem_range(), so that they can best be reused.
[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2018-February/561182.html
[2] http://lkml.iu.edu//hypermail/linux/kernel/1802.1/02596.html
Changes in v2 (Mar 6, 2018)
* use IS_ENABLED() instead of #ifdef's in kexec_calculate_store_digests()
* rename _kexec_kernel_*() and make them static for now, except for
kexec_image_probe_default(), which is actually used by arch (powerpc) code
(Note: kexec_image_post_load_cleanup_default() will also be exported by
arm64 patches.)
* change a type of kernel_map in prepare_elf64_headers() to "int"
* check the total number of system RAM regions in fill_up_crash_elf_data()
and use vzalloc() instead of vmalloc()
AKASHI Takahiro (7):
kexec_file: make use of purgatory optional
kexec_file,x86,powerpc: factor out kexec_file_ops functions
x86: kexec_file: purge system-ram walking from prepare_elf64_headers()
x86: kexec_file: remove X86_64 dependency from prepare_elf64_headers()
x86: kexec_file: lift CRASH_MAX_RANGES limit on crash_mem buffer
x86: kexec_file: clean up prepare_elf64_headers()
kexec_file, x86: move re-factored code to generic side
arch/powerpc/Kconfig | 3 +
arch/powerpc/include/asm/kexec.h | 2 +-
arch/powerpc/kernel/kexec_elf_64.c | 2 +-
arch/powerpc/kernel/machine_kexec_file_64.c | 39 +---
arch/x86/Kconfig | 3 +
arch/x86/include/asm/kexec-bzimage64.h | 2 +-
arch/x86/kernel/crash.c | 334 ++++++----------------------
arch/x86/kernel/kexec-bzimage64.c | 2 +-
arch/x86/kernel/machine_kexec_64.c | 45 +---
include/linux/kexec.h | 32 ++-
kernel/kexec_file.c | 240 +++++++++++++++++++-
11 files changed, 338 insertions(+), 366 deletions(-)
--
2.16.2
On arm64, the crash dump kernel's usable memory is protected by
*unmapping* it from the kernel virtual address space, unlike other
architectures where the region is just made read-only. It is highly
unlikely that the region will be accidentally corrupted, and this
observation justifies dropping the digest check code from the purgatory
as well.
The resulting code is very simple as it doesn't require the somewhat ugly
re-linking/relocation stuff, i.e. arch_kexec_apply_relocations_add().
Please see:
http://lists.infradead.org/pipermail/linux-arm-kernel/2017-December/545428.html
All the purgatory does is shuffle arguments and jump into the new kernel,
yet we would still need to reserve space for a hash value
(purgatory_sha256_digest) that is never checked.
As such, it doesn't make sense to have trampoline code between the old
kernel and the new kernel on arm64.
This patch introduces a new configuration, ARCH_HAS_KEXEC_PURGATORY, and
allows related code to be compiled in only if necessary.
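For illustration, this is a minimal sketch of the guard added to
kexec_calculate_store_digests() by the hunk below; the purgatory loading
and symbol helpers themselves are additionally wrapped in
#ifdef CONFIG_ARCH_HAS_KEXEC_PURGATORY:

    /*
     * Without a purgatory there is nothing that will ever verify the
     * digests, so skip calculating and storing them altogether.
     */
    if (!IS_ENABLED(CONFIG_ARCH_HAS_KEXEC_PURGATORY))
        return 0;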
Signed-off-by: AKASHI Takahiro <[email protected]>
Cc: Dave Young <[email protected]>
Cc: Vivek Goyal <[email protected]>
Cc: Baoquan He <[email protected]>
---
arch/powerpc/Kconfig | 3 +++
arch/x86/Kconfig | 3 +++
kernel/kexec_file.c | 5 +++++
3 files changed, 11 insertions(+)
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 73ce5dd07642..c32a181a7cbb 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -552,6 +552,9 @@ config KEXEC_FILE
for kernel and initramfs as opposed to a list of segments as is the
case for the older kexec call.
+config ARCH_HAS_KEXEC_PURGATORY
+ def_bool KEXEC_FILE
+
config RELOCATABLE
bool "Build a relocatable kernel"
depends on PPC64 || (FLATMEM && (44x || FSL_BOOKE))
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index c1236b187824..f031c3efe47e 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2019,6 +2019,9 @@ config KEXEC_FILE
for kernel and initramfs as opposed to list of segments as
accepted by previous system call.
+config ARCH_HAS_KEXEC_PURGATORY
+ def_bool KEXEC_FILE
+
config KEXEC_VERIFY_SIG
bool "Verify kernel signature during kexec_file_load() syscall"
depends on KEXEC_FILE
diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c
index e5bcd94c1efb..6dbbb89cbbac 100644
--- a/kernel/kexec_file.c
+++ b/kernel/kexec_file.c
@@ -532,6 +532,9 @@ static int kexec_calculate_store_digests(struct kimage *image)
struct kexec_sha_region *sha_regions;
struct purgatory_info *pi = &image->purgatory_info;
+ if (!IS_ENABLED(CONFIG_ARCH_HAS_KEXEC_PURGATORY))
+ return 0;
+
zero_buf = __va(page_to_pfn(ZERO_PAGE(0)) << PAGE_SHIFT);
zero_buf_sz = PAGE_SIZE;
@@ -633,6 +636,7 @@ static int kexec_calculate_store_digests(struct kimage *image)
return ret;
}
+#ifdef CONFIG_ARCH_HAS_KEXEC_PURGATORY
/* Actually load purgatory. Lot of code taken from kexec-tools */
static int __kexec_load_purgatory(struct kimage *image, unsigned long min,
unsigned long max, int top_down)
@@ -1022,3 +1026,4 @@ int kexec_purgatory_get_set_symbol(struct kimage *image, const char *name,
return 0;
}
+#endif /* CONFIG_ARCH_HAS_KEXEC_PURGATORY */
--
2.16.2
Removing the bufp variable in prepare_elf64_headers() makes the code simpler
and easier to understand.
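For reference, the change amounts to replacing the raw byte-pointer
bookkeeping with plain pointer arithmetic on a typed Elf64_Phdr pointer
(a sketch of the pattern in the hunks below):

    /* before: advance a byte pointer by hand for every header */
    phdr = (Elf64_Phdr *)bufp;
    bufp += sizeof(Elf64_Phdr);
    phdr->p_type = PT_NOTE;

    /* after: let the compiler do the arithmetic on a typed pointer */
    phdr->p_type = PT_NOTE;
    phdr++;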
Signed-off-by: AKASHI Takahiro <[email protected]>
Cc: Dave Young <[email protected]>
Cc: Vivek Goyal <[email protected]>
Cc: Baoquan He <[email protected]>
---
arch/x86/kernel/crash.c | 18 +++++++-----------
1 file changed, 7 insertions(+), 11 deletions(-)
diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c
index 0f20a6330bd6..24de3f13f40c 100644
--- a/arch/x86/kernel/crash.c
+++ b/arch/x86/kernel/crash.c
@@ -336,7 +336,7 @@ static int prepare_elf64_headers(struct crash_mem *cmem, bool kernel_map,
Elf64_Ehdr *ehdr;
Elf64_Phdr *phdr;
unsigned long nr_cpus = num_possible_cpus(), nr_phdr, elf_sz;
- unsigned char *buf, *bufp;
+ unsigned char *buf;
unsigned int cpu, i;
unsigned long long notes_addr;
unsigned long mstart, mend;
@@ -361,9 +361,8 @@ static int prepare_elf64_headers(struct crash_mem *cmem, bool kernel_map,
if (!buf)
return -ENOMEM;
- bufp = buf;
- ehdr = (Elf64_Ehdr *)bufp;
- bufp += sizeof(Elf64_Ehdr);
+ ehdr = (Elf64_Ehdr *)buf;
+ phdr = (Elf64_Phdr *)(ehdr + 1);
memcpy(ehdr->e_ident, ELFMAG, SELFMAG);
ehdr->e_ident[EI_CLASS] = ELFCLASS64;
ehdr->e_ident[EI_DATA] = ELFDATA2LSB;
@@ -379,33 +378,30 @@ static int prepare_elf64_headers(struct crash_mem *cmem, bool kernel_map,
/* Prepare one phdr of type PT_NOTE for each present cpu */
for_each_present_cpu(cpu) {
- phdr = (Elf64_Phdr *)bufp;
- bufp += sizeof(Elf64_Phdr);
phdr->p_type = PT_NOTE;
notes_addr = per_cpu_ptr_to_phys(per_cpu_ptr(crash_notes, cpu));
phdr->p_offset = phdr->p_paddr = notes_addr;
phdr->p_filesz = phdr->p_memsz = sizeof(note_buf_t);
(ehdr->e_phnum)++;
+ phdr++;
}
/* Prepare one PT_NOTE header for vmcoreinfo */
- phdr = (Elf64_Phdr *)bufp;
- bufp += sizeof(Elf64_Phdr);
phdr->p_type = PT_NOTE;
phdr->p_offset = phdr->p_paddr = paddr_vmcoreinfo_note();
phdr->p_filesz = phdr->p_memsz = VMCOREINFO_NOTE_SIZE;
(ehdr->e_phnum)++;
+ phdr++;
/* Prepare PT_LOAD type program header for kernel text region */
if (kernel_map) {
- phdr = (Elf64_Phdr *)bufp;
- bufp += sizeof(Elf64_Phdr);
phdr->p_type = PT_LOAD;
phdr->p_flags = PF_R|PF_W|PF_X;
phdr->p_vaddr = (Elf64_Addr)_text;
phdr->p_filesz = phdr->p_memsz = _end - _text;
phdr->p_offset = phdr->p_paddr = __pa_symbol(_text);
- (ehdr->e_phnum)++;
+ ehdr->e_phnum++;
+ phdr++;
}
/* Go through all the ranges in cmem->ranges[] and prepare phdr */
--
2.16.2
While CRASH_MAX_RANGES (== 16) seems to be good enough, a fixed-size
array is not a good idea in general.
In this patch, the size of the crash_mem buffer is calculated as before,
but the buffer is now dynamically allocated. This change also allows
removing the crash_elf_data structure.
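The new allocation scheme, in short (a sketch of fill_up_crash_elf_data()
after this patch; see the hunks below for the real change):

    struct crash_mem *cmem;
    unsigned int nr_ranges = 0;

    /* count System RAM regions first ... */
    walk_system_ram_res(0, -1, &nr_ranges, get_nr_ram_ranges_callback);
    if (!nr_ranges)
        return NULL;

    /* ... and leave room for splits caused by excluding crashk_res etc. */
    nr_ranges += 2;
    cmem = vzalloc(sizeof(struct crash_mem) +
                   sizeof(struct crash_mem_range) * nr_ranges);
    if (!cmem)
        return NULL;
    cmem->max_nr_ranges = nr_ranges;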
Signed-off-by: AKASHI Takahiro <[email protected]>
Cc: Dave Young <[email protected]>
Cc: Vivek Goyal <[email protected]>
Cc: Baoquan He <[email protected]>
---
arch/x86/kernel/crash.c | 82 +++++++++++++++++++------------------------------
1 file changed, 31 insertions(+), 51 deletions(-)
diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c
index b328153cee65..0f20a6330bd6 100644
--- a/arch/x86/kernel/crash.c
+++ b/arch/x86/kernel/crash.c
@@ -41,32 +41,14 @@
/* Alignment required for elf header segment */
#define ELF_CORE_HEADER_ALIGN 4096
-/* This primarily represents number of split ranges due to exclusion */
-#define CRASH_MAX_RANGES 16
-
struct crash_mem_range {
u64 start, end;
};
struct crash_mem {
- unsigned int nr_ranges;
- struct crash_mem_range ranges[CRASH_MAX_RANGES];
-};
-
-/* Misc data about ram ranges needed to prepare elf headers */
-struct crash_elf_data {
- struct kimage *image;
- /*
- * Total number of ram ranges we have after various adjustments for
- * crash reserved region, etc.
- */
unsigned int max_nr_ranges;
-
- /* Pointer to elf header */
- void *ehdr;
- /* Pointer to next phdr */
- void *bufp;
- struct crash_mem mem;
+ unsigned int nr_ranges;
+ struct crash_mem_range ranges[0];
};
/* Used while preparing memory map entries for second kernel */
@@ -217,26 +199,31 @@ static int get_nr_ram_ranges_callback(struct resource *res, void *arg)
return 0;
}
-
/* Gather all the required information to prepare elf headers for ram regions */
-static void fill_up_crash_elf_data(struct crash_elf_data *ced,
- struct kimage *image)
+static struct crash_mem *fill_up_crash_elf_data(void)
{
unsigned int nr_ranges = 0;
-
- ced->image = image;
+ struct crash_mem *cmem;
walk_system_ram_res(0, -1, &nr_ranges,
get_nr_ram_ranges_callback);
+ if (!nr_ranges)
+ return NULL;
- ced->max_nr_ranges = nr_ranges;
+ /*
+ * Exclusion of crash region and/or crashk_low_res may cause
+ * another range split. So add extra two slots here.
+ */
+ nr_ranges += 2;
+ cmem = vzalloc(sizeof(struct crash_mem) +
+ sizeof(struct crash_mem_range) * nr_ranges);
+ if (!cmem)
+ return NULL;
- /* Exclusion of crash region could split memory ranges */
- ced->max_nr_ranges++;
+ cmem->max_nr_ranges = nr_ranges;
+ cmem->nr_ranges = 0;
- /* If crashk_low_res is not 0, another range split possible */
- if (crashk_low_res.end)
- ced->max_nr_ranges++;
+ return cmem;
}
static int exclude_mem_range(struct crash_mem *mem,
@@ -293,10 +280,8 @@ static int exclude_mem_range(struct crash_mem *mem,
return 0;
/* Split happened */
- if (i == CRASH_MAX_RANGES - 1) {
- pr_err("Too many crash ranges after split\n");
+ if (i == mem->max_nr_ranges - 1)
return -ENOMEM;
- }
/* Location where new range should go */
j = i + 1;
@@ -314,11 +299,10 @@ static int exclude_mem_range(struct crash_mem *mem,
/*
* Look for any unwanted ranges between mstart, mend and remove them. This
- * might lead to split and split ranges are put in ced->mem.ranges[] array
+ * might lead to split and split ranges are put in cmem->ranges[] array
*/
-static int elf_header_exclude_ranges(struct crash_elf_data *ced)
+static int elf_header_exclude_ranges(struct crash_mem *cmem)
{
- struct crash_mem *cmem = &ced->mem;
int ret = 0;
/* Exclude crashkernel region */
@@ -337,8 +321,7 @@ static int elf_header_exclude_ranges(struct crash_elf_data *ced)
static int prepare_elf64_ram_headers_callback(struct resource *res, void *arg)
{
- struct crash_elf_data *ced = arg;
- struct crash_mem *cmem = &ced->mem;
+ struct crash_mem *cmem = arg;
cmem->ranges[cmem->nr_ranges].start = res->start;
cmem->ranges[cmem->nr_ranges].end = res->end;
@@ -347,7 +330,7 @@ static int prepare_elf64_ram_headers_callback(struct resource *res, void *arg)
return 0;
}
-static int prepare_elf64_headers(struct crash_elf_data *ced, bool kernel_map,
+static int prepare_elf64_headers(struct crash_mem *cmem, bool kernel_map,
void **addr, unsigned long *sz)
{
Elf64_Ehdr *ehdr;
@@ -356,12 +339,11 @@ static int prepare_elf64_headers(struct crash_elf_data *ced, bool kernel_map,
unsigned char *buf, *bufp;
unsigned int cpu, i;
unsigned long long notes_addr;
- struct crash_mem *cmem = &ced->mem;
unsigned long mstart, mend;
/* extra phdr for vmcoreinfo elf note */
nr_phdr = nr_cpus + 1;
- nr_phdr += ced->max_nr_ranges;
+ nr_phdr += cmem->nr_ranges;
/*
* kexec-tools creates an extra PT_LOAD phdr for kernel text mapping
@@ -455,29 +437,27 @@ static int prepare_elf64_headers(struct crash_elf_data *ced, bool kernel_map,
static int prepare_elf_headers(struct kimage *image, void **addr,
unsigned long *sz)
{
- struct crash_elf_data *ced;
+ struct crash_mem *cmem;
Elf64_Ehdr *ehdr;
Elf64_Phdr *phdr;
int ret, i;
- ced = kzalloc(sizeof(*ced), GFP_KERNEL);
- if (!ced)
+ cmem = fill_up_crash_elf_data();
+ if (!cmem)
return -ENOMEM;
- fill_up_crash_elf_data(ced, image);
-
- ret = walk_system_ram_res(0, -1, ced,
+ ret = walk_system_ram_res(0, -1, cmem,
prepare_elf64_ram_headers_callback);
if (ret)
goto out;
/* Exclude unwanted mem ranges */
- ret = elf_header_exclude_ranges(ced);
+ ret = elf_header_exclude_ranges(cmem);
if (ret)
goto out;
/* By default prepare 64bit headers */
- ret = prepare_elf64_headers(ced, IS_ENABLED(CONFIG_X86_64), addr, sz);
+ ret = prepare_elf64_headers(cmem, IS_ENABLED(CONFIG_X86_64), addr, sz);
if (ret)
goto out;
@@ -495,7 +475,7 @@ static int prepare_elf_headers(struct kimage *image, void **addr,
break;
}
out:
- kfree(ced);
+ vfree(cmem);
return ret;
}
--
2.16.2
The code guarded by CONFIG_X86_64 is necessary on some architectures
which have a dedicated kernel mapping outside of the linear memory
mapping. (arm64 is among those.)
In this patch, an additional argument, kernel_map, is added to enable or
disable that code at runtime, removing the #ifdef.
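The call site then selects the behaviour explicitly (sketch of the hunk
below):

    /* kernel_map is set only where a separate kernel text mapping exists */
    ret = prepare_elf64_headers(ced, IS_ENABLED(CONFIG_X86_64), addr, sz);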
Signed-off-by: AKASHI Takahiro <[email protected]>
Cc: Dave Young <[email protected]>
Cc: Vivek Goyal <[email protected]>
Cc: Baoquan He <[email protected]>
---
arch/x86/kernel/crash.c | 24 ++++++++++++------------
1 file changed, 12 insertions(+), 12 deletions(-)
diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c
index 2123fa0efc17..b328153cee65 100644
--- a/arch/x86/kernel/crash.c
+++ b/arch/x86/kernel/crash.c
@@ -347,7 +347,7 @@ static int prepare_elf64_ram_headers_callback(struct resource *res, void *arg)
return 0;
}
-static int prepare_elf64_headers(struct crash_elf_data *ced,
+static int prepare_elf64_headers(struct crash_elf_data *ced, bool kernel_map,
void **addr, unsigned long *sz)
{
Elf64_Ehdr *ehdr;
@@ -414,17 +414,17 @@ static int prepare_elf64_headers(struct crash_elf_data *ced,
phdr->p_filesz = phdr->p_memsz = VMCOREINFO_NOTE_SIZE;
(ehdr->e_phnum)++;
-#ifdef CONFIG_X86_64
/* Prepare PT_LOAD type program header for kernel text region */
- phdr = (Elf64_Phdr *)bufp;
- bufp += sizeof(Elf64_Phdr);
- phdr->p_type = PT_LOAD;
- phdr->p_flags = PF_R|PF_W|PF_X;
- phdr->p_vaddr = (Elf64_Addr)_text;
- phdr->p_filesz = phdr->p_memsz = _end - _text;
- phdr->p_offset = phdr->p_paddr = __pa_symbol(_text);
- (ehdr->e_phnum)++;
-#endif
+ if (kernel_map) {
+ phdr = (Elf64_Phdr *)bufp;
+ bufp += sizeof(Elf64_Phdr);
+ phdr->p_type = PT_LOAD;
+ phdr->p_flags = PF_R|PF_W|PF_X;
+ phdr->p_vaddr = (Elf64_Addr)_text;
+ phdr->p_filesz = phdr->p_memsz = _end - _text;
+ phdr->p_offset = phdr->p_paddr = __pa_symbol(_text);
+ (ehdr->e_phnum)++;
+ }
/* Go through all the ranges in cmem->ranges[] and prepare phdr */
for (i = 0; i < cmem->nr_ranges; i++) {
@@ -477,7 +477,7 @@ static int prepare_elf_headers(struct kimage *image, void **addr,
goto out;
/* By default prepare 64bit headers */
- ret = prepare_elf64_headers(ced, addr, sz);
+ ret = prepare_elf64_headers(ced, IS_ENABLED(CONFIG_X86_64), addr, sz);
if (ret)
goto out;
--
2.16.2
As arch_kexec_kernel_image_{probe,load}(),
arch_kimage_file_post_load_cleanup() and arch_kexec_kernel_verify_sig()
are almost duplicated among architectures, they can be commonalized with
an architecture-defined kexec_file_ops array. So let's factor them out.
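After this patch, the per-architecture boilerplate reduces to exporting a
NULL-terminated loader table, while the probing loop lives in generic code
(condensed from the x86 and kernel/kexec_file.c hunks below):

    const struct kexec_file_ops * const kexec_file_loaders[] = {
        &kexec_bzImage64_ops,
        NULL
    };

    int kexec_image_probe_default(struct kimage *image, void *buf,
                                  unsigned long buf_len)
    {
        const struct kexec_file_ops * const *fops;
        int ret = -ENOEXEC;

        for (fops = &kexec_file_loaders[0]; *fops && (*fops)->probe; ++fops) {
            ret = (*fops)->probe(buf, buf_len);
            if (!ret) {
                image->fops = *fops;
                return ret;
            }
        }
        return ret;
    }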
Signed-off-by: AKASHI Takahiro <[email protected]>
Cc: Dave Young <[email protected]>
Cc: Vivek Goyal <[email protected]>
Cc: Baoquan He <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Thiago Jung Bauermann <[email protected]>
---
arch/powerpc/include/asm/kexec.h | 2 +-
arch/powerpc/kernel/kexec_elf_64.c | 2 +-
arch/powerpc/kernel/machine_kexec_file_64.c | 39 ++-----------------
arch/x86/include/asm/kexec-bzimage64.h | 2 +-
arch/x86/kernel/kexec-bzimage64.c | 2 +-
arch/x86/kernel/machine_kexec_64.c | 45 +---------------------
include/linux/kexec.h | 13 +++----
kernel/kexec_file.c | 60 +++++++++++++++++++++++++++--
8 files changed, 71 insertions(+), 94 deletions(-)
diff --git a/arch/powerpc/include/asm/kexec.h b/arch/powerpc/include/asm/kexec.h
index d8b1e8e7e035..4a585cba1787 100644
--- a/arch/powerpc/include/asm/kexec.h
+++ b/arch/powerpc/include/asm/kexec.h
@@ -95,7 +95,7 @@ static inline bool kdump_in_progress(void)
}
#ifdef CONFIG_KEXEC_FILE
-extern struct kexec_file_ops kexec_elf64_ops;
+extern const struct kexec_file_ops kexec_elf64_ops;
#ifdef CONFIG_IMA_KEXEC
#define ARCH_HAS_KIMAGE_ARCH
diff --git a/arch/powerpc/kernel/kexec_elf_64.c b/arch/powerpc/kernel/kexec_elf_64.c
index 9a42309b091a..6c78c11c7faf 100644
--- a/arch/powerpc/kernel/kexec_elf_64.c
+++ b/arch/powerpc/kernel/kexec_elf_64.c
@@ -657,7 +657,7 @@ static void *elf64_load(struct kimage *image, char *kernel_buf,
return ret ? ERR_PTR(ret) : fdt;
}
-struct kexec_file_ops kexec_elf64_ops = {
+const struct kexec_file_ops kexec_elf64_ops = {
.probe = elf64_probe,
.load = elf64_load,
};
diff --git a/arch/powerpc/kernel/machine_kexec_file_64.c b/arch/powerpc/kernel/machine_kexec_file_64.c
index e4395f937d63..3a962c87149a 100644
--- a/arch/powerpc/kernel/machine_kexec_file_64.c
+++ b/arch/powerpc/kernel/machine_kexec_file_64.c
@@ -31,52 +31,19 @@
#define SLAVE_CODE_SIZE 256
-static struct kexec_file_ops *kexec_file_loaders[] = {
+const struct kexec_file_ops * const kexec_file_loaders[] = {
&kexec_elf64_ops,
+ NULL
};
int arch_kexec_kernel_image_probe(struct kimage *image, void *buf,
unsigned long buf_len)
{
- int i, ret = -ENOEXEC;
- struct kexec_file_ops *fops;
-
/* We don't support crash kernels yet. */
if (image->type == KEXEC_TYPE_CRASH)
return -ENOTSUPP;
- for (i = 0; i < ARRAY_SIZE(kexec_file_loaders); i++) {
- fops = kexec_file_loaders[i];
- if (!fops || !fops->probe)
- continue;
-
- ret = fops->probe(buf, buf_len);
- if (!ret) {
- image->fops = fops;
- return ret;
- }
- }
-
- return ret;
-}
-
-void *arch_kexec_kernel_image_load(struct kimage *image)
-{
- if (!image->fops || !image->fops->load)
- return ERR_PTR(-ENOEXEC);
-
- return image->fops->load(image, image->kernel_buf,
- image->kernel_buf_len, image->initrd_buf,
- image->initrd_buf_len, image->cmdline_buf,
- image->cmdline_buf_len);
-}
-
-int arch_kimage_file_post_load_cleanup(struct kimage *image)
-{
- if (!image->fops || !image->fops->cleanup)
- return 0;
-
- return image->fops->cleanup(image->image_loader_data);
+ return kexec_image_probe_default(image, buf, buf_len);
}
/**
diff --git a/arch/x86/include/asm/kexec-bzimage64.h b/arch/x86/include/asm/kexec-bzimage64.h
index 9f07cff43705..df89ee7d3e9e 100644
--- a/arch/x86/include/asm/kexec-bzimage64.h
+++ b/arch/x86/include/asm/kexec-bzimage64.h
@@ -2,6 +2,6 @@
#ifndef _ASM_KEXEC_BZIMAGE64_H
#define _ASM_KEXEC_BZIMAGE64_H
-extern struct kexec_file_ops kexec_bzImage64_ops;
+extern const struct kexec_file_ops kexec_bzImage64_ops;
#endif /* _ASM_KEXE_BZIMAGE64_H */
diff --git a/arch/x86/kernel/kexec-bzimage64.c b/arch/x86/kernel/kexec-bzimage64.c
index fb095ba0c02f..705654776c0c 100644
--- a/arch/x86/kernel/kexec-bzimage64.c
+++ b/arch/x86/kernel/kexec-bzimage64.c
@@ -538,7 +538,7 @@ static int bzImage64_verify_sig(const char *kernel, unsigned long kernel_len)
}
#endif
-struct kexec_file_ops kexec_bzImage64_ops = {
+const struct kexec_file_ops kexec_bzImage64_ops = {
.probe = bzImage64_probe,
.load = bzImage64_load,
.cleanup = bzImage64_cleanup,
diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
index 1f790cf9d38f..2cdd29d64181 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -30,8 +30,9 @@
#include <asm/set_memory.h>
#ifdef CONFIG_KEXEC_FILE
-static struct kexec_file_ops *kexec_file_loaders[] = {
+const struct kexec_file_ops * const kexec_file_loaders[] = {
&kexec_bzImage64_ops,
+ NULL
};
#endif
@@ -363,27 +364,6 @@ void arch_crash_save_vmcoreinfo(void)
/* arch-dependent functionality related to kexec file-based syscall */
#ifdef CONFIG_KEXEC_FILE
-int arch_kexec_kernel_image_probe(struct kimage *image, void *buf,
- unsigned long buf_len)
-{
- int i, ret = -ENOEXEC;
- struct kexec_file_ops *fops;
-
- for (i = 0; i < ARRAY_SIZE(kexec_file_loaders); i++) {
- fops = kexec_file_loaders[i];
- if (!fops || !fops->probe)
- continue;
-
- ret = fops->probe(buf, buf_len);
- if (!ret) {
- image->fops = fops;
- return ret;
- }
- }
-
- return ret;
-}
-
void *arch_kexec_kernel_image_load(struct kimage *image)
{
vfree(image->arch.elf_headers);
@@ -398,27 +378,6 @@ void *arch_kexec_kernel_image_load(struct kimage *image)
image->cmdline_buf_len);
}
-int arch_kimage_file_post_load_cleanup(struct kimage *image)
-{
- if (!image->fops || !image->fops->cleanup)
- return 0;
-
- return image->fops->cleanup(image->image_loader_data);
-}
-
-#ifdef CONFIG_KEXEC_VERIFY_SIG
-int arch_kexec_kernel_verify_sig(struct kimage *image, void *kernel,
- unsigned long kernel_len)
-{
- if (!image->fops || !image->fops->verify_sig) {
- pr_debug("kernel loader does not support signature verification.");
- return -EKEYREJECTED;
- }
-
- return image->fops->verify_sig(kernel, kernel_len);
-}
-#endif
-
/*
* Apply purgatory relocations.
*
diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index f16f6ceb3875..4ed18cc718fc 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -135,6 +135,11 @@ struct kexec_file_ops {
#endif
};
+extern const struct kexec_file_ops * const kexec_file_loaders[];
+
+int kexec_image_probe_default(struct kimage *image, void *buf,
+ unsigned long buf_len);
+
/**
* struct kexec_buf - parameters for finding a place for a buffer in memory
* @image: kexec image in which memory to search.
@@ -209,7 +214,7 @@ struct kimage {
unsigned long cmdline_buf_len;
/* File operations provided by image loader */
- struct kexec_file_ops *fops;
+ const struct kexec_file_ops *fops;
/* Image loader handling the kernel can store a pointer here */
void *image_loader_data;
@@ -277,12 +282,6 @@ int crash_shrink_memory(unsigned long new_size);
size_t crash_get_memory_size(void);
void crash_free_reserved_phys_range(unsigned long begin, unsigned long end);
-int __weak arch_kexec_kernel_image_probe(struct kimage *image, void *buf,
- unsigned long buf_len);
-void * __weak arch_kexec_kernel_image_load(struct kimage *image);
-int __weak arch_kimage_file_post_load_cleanup(struct kimage *image);
-int __weak arch_kexec_kernel_verify_sig(struct kimage *image, void *buf,
- unsigned long buf_len);
int __weak arch_kexec_apply_relocations_add(const Elf_Ehdr *ehdr,
Elf_Shdr *sechdrs, unsigned int relsec);
int __weak arch_kexec_apply_relocations(const Elf_Ehdr *ehdr, Elf_Shdr *sechdrs,
diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c
index 6dbbb89cbbac..6a62c5ef8a07 100644
--- a/kernel/kexec_file.c
+++ b/kernel/kexec_file.c
@@ -28,28 +28,80 @@
static int kexec_calculate_store_digests(struct kimage *image);
+/*
+ * Currently this is the only default function that is exported as some
+ * architectures need it to do additional handlings.
+ * In the future, other default functions may be exported too if required.
+ */
+int kexec_image_probe_default(struct kimage *image, void *buf,
+ unsigned long buf_len)
+{
+ const struct kexec_file_ops * const *fops;
+ int ret = -ENOEXEC;
+
+ for (fops = &kexec_file_loaders[0]; *fops && (*fops)->probe; ++fops) {
+ ret = (*fops)->probe(buf, buf_len);
+ if (!ret) {
+ image->fops = *fops;
+ return ret;
+ }
+ }
+
+ return ret;
+}
+
/* Architectures can provide this probe function */
int __weak arch_kexec_kernel_image_probe(struct kimage *image, void *buf,
unsigned long buf_len)
{
- return -ENOEXEC;
+ return kexec_image_probe_default(image, buf, buf_len);
+}
+
+static void *kexec_image_load_default(struct kimage *image)
+{
+ if (!image->fops || !image->fops->load)
+ return ERR_PTR(-ENOEXEC);
+
+ return image->fops->load(image, image->kernel_buf,
+ image->kernel_buf_len, image->initrd_buf,
+ image->initrd_buf_len, image->cmdline_buf,
+ image->cmdline_buf_len);
}
void * __weak arch_kexec_kernel_image_load(struct kimage *image)
{
- return ERR_PTR(-ENOEXEC);
+ return kexec_image_load_default(image);
+}
+
+static int kexec_image_post_load_cleanup_default(struct kimage *image)
+{
+ if (!image->fops || !image->fops->cleanup)
+ return 0;
+
+ return image->fops->cleanup(image->image_loader_data);
}
int __weak arch_kimage_file_post_load_cleanup(struct kimage *image)
{
- return -EINVAL;
+ return kexec_image_post_load_cleanup_default(image);
}
#ifdef CONFIG_KEXEC_VERIFY_SIG
+static int kexec_image_verify_sig_default(struct kimage *image, void *buf,
+ unsigned long buf_len)
+{
+ if (!image->fops || !image->fops->verify_sig) {
+ pr_debug("kernel loader does not support signature verification.\n");
+ return -EKEYREJECTED;
+ }
+
+ return image->fops->verify_sig(buf, buf_len);
+}
+
int __weak arch_kexec_kernel_verify_sig(struct kimage *image, void *buf,
unsigned long buf_len)
{
- return -EKEYREJECTED;
+ return kexec_image_verify_sig_default(image, buf, buf_len);
}
#endif
--
2.16.2
In the previous patches, commonly-used routines, exclude_mem_range() and
prepare_elf64_headers(), were carved out. Now place them in kexec common
code. A prefix "crash_" is given to each of their names to avoid possible
name collisions.
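As a purely hypothetical sketch of how another architecture (arm64, say)
could reuse the now-exported helpers; arch_collect_ram_ranges() below is
an illustrative placeholder, not an existing function:

    static int arch_prepare_elf_headers(struct kimage *image, void **addr,
                                        unsigned long *sz)
    {
        /* hypothetical arch helper that fills a vzalloc'ed crash_mem */
        struct crash_mem *cmem = arch_collect_ram_ranges();
        int ret;

        if (!cmem)
            return -ENOMEM;

        /* carve the crash kernel's own reservation out of the list */
        ret = crash_exclude_mem_range(cmem, crashk_res.start, crashk_res.end);
        if (!ret)
            /* arm64 has a separate kernel mapping: kernel_map = 1 */
            ret = crash_prepare_elf64_headers(cmem, 1, addr, sz);

        vfree(cmem);
        return ret;
    }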
Signed-off-by: AKASHI Takahiro <[email protected]>
Cc: Dave Young <[email protected]>
Cc: Vivek Goyal <[email protected]>
Cc: Baoquan He <[email protected]>
---
arch/x86/kernel/crash.c | 195 ++----------------------------------------------
include/linux/kexec.h | 19 +++++
kernel/kexec_file.c | 175 +++++++++++++++++++++++++++++++++++++++++++
3 files changed, 201 insertions(+), 188 deletions(-)
diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c
index 24de3f13f40c..a52671487ff2 100644
--- a/arch/x86/kernel/crash.c
+++ b/arch/x86/kernel/crash.c
@@ -38,19 +38,6 @@
#include <asm/virtext.h>
#include <asm/intel_pt.h>
-/* Alignment required for elf header segment */
-#define ELF_CORE_HEADER_ALIGN 4096
-
-struct crash_mem_range {
- u64 start, end;
-};
-
-struct crash_mem {
- unsigned int max_nr_ranges;
- unsigned int nr_ranges;
- struct crash_mem_range ranges[0];
-};
-
/* Used while preparing memory map entries for second kernel */
struct crash_memmap_data {
struct boot_params *params;
@@ -226,77 +213,6 @@ static struct crash_mem *fill_up_crash_elf_data(void)
return cmem;
}
-static int exclude_mem_range(struct crash_mem *mem,
- unsigned long long mstart, unsigned long long mend)
-{
- int i, j;
- unsigned long long start, end;
- struct crash_mem_range temp_range = {0, 0};
-
- for (i = 0; i < mem->nr_ranges; i++) {
- start = mem->ranges[i].start;
- end = mem->ranges[i].end;
-
- if (mstart > end || mend < start)
- continue;
-
- /* Truncate any area outside of range */
- if (mstart < start)
- mstart = start;
- if (mend > end)
- mend = end;
-
- /* Found completely overlapping range */
- if (mstart == start && mend == end) {
- mem->ranges[i].start = 0;
- mem->ranges[i].end = 0;
- if (i < mem->nr_ranges - 1) {
- /* Shift rest of the ranges to left */
- for (j = i; j < mem->nr_ranges - 1; j++) {
- mem->ranges[j].start =
- mem->ranges[j+1].start;
- mem->ranges[j].end =
- mem->ranges[j+1].end;
- }
- }
- mem->nr_ranges--;
- return 0;
- }
-
- if (mstart > start && mend < end) {
- /* Split original range */
- mem->ranges[i].end = mstart - 1;
- temp_range.start = mend + 1;
- temp_range.end = end;
- } else if (mstart != start)
- mem->ranges[i].end = mstart - 1;
- else
- mem->ranges[i].start = mend + 1;
- break;
- }
-
- /* If a split happend, add the split to array */
- if (!temp_range.end)
- return 0;
-
- /* Split happened */
- if (i == mem->max_nr_ranges - 1)
- return -ENOMEM;
-
- /* Location where new range should go */
- j = i + 1;
- if (j < mem->nr_ranges) {
- /* Move over all ranges one slot towards the end */
- for (i = mem->nr_ranges - 1; i >= j; i--)
- mem->ranges[i + 1] = mem->ranges[i];
- }
-
- mem->ranges[j].start = temp_range.start;
- mem->ranges[j].end = temp_range.end;
- mem->nr_ranges++;
- return 0;
-}
-
/*
* Look for any unwanted ranges between mstart, mend and remove them. This
* might lead to split and split ranges are put in cmem->ranges[] array
@@ -306,12 +222,13 @@ static int elf_header_exclude_ranges(struct crash_mem *cmem)
int ret = 0;
/* Exclude crashkernel region */
- ret = exclude_mem_range(cmem, crashk_res.start, crashk_res.end);
+ ret = crash_exclude_mem_range(cmem, crashk_res.start, crashk_res.end);
if (ret)
return ret;
if (crashk_low_res.end) {
- ret = exclude_mem_range(cmem, crashk_low_res.start, crashk_low_res.end);
+ ret = crash_exclude_mem_range(cmem, crashk_low_res.start,
+ crashk_low_res.end);
if (ret)
return ret;
}
@@ -330,105 +247,6 @@ static int prepare_elf64_ram_headers_callback(struct resource *res, void *arg)
return 0;
}
-static int prepare_elf64_headers(struct crash_mem *cmem, bool kernel_map,
- void **addr, unsigned long *sz)
-{
- Elf64_Ehdr *ehdr;
- Elf64_Phdr *phdr;
- unsigned long nr_cpus = num_possible_cpus(), nr_phdr, elf_sz;
- unsigned char *buf;
- unsigned int cpu, i;
- unsigned long long notes_addr;
- unsigned long mstart, mend;
-
- /* extra phdr for vmcoreinfo elf note */
- nr_phdr = nr_cpus + 1;
- nr_phdr += cmem->nr_ranges;
-
- /*
- * kexec-tools creates an extra PT_LOAD phdr for kernel text mapping
- * area on x86_64 (ffffffff80000000 - ffffffffa0000000).
- * I think this is required by tools like gdb. So same physical
- * memory will be mapped in two elf headers. One will contain kernel
- * text virtual addresses and other will have __va(physical) addresses.
- */
-
- nr_phdr++;
- elf_sz = sizeof(Elf64_Ehdr) + nr_phdr * sizeof(Elf64_Phdr);
- elf_sz = ALIGN(elf_sz, ELF_CORE_HEADER_ALIGN);
-
- buf = vzalloc(elf_sz);
- if (!buf)
- return -ENOMEM;
-
- ehdr = (Elf64_Ehdr *)buf;
- phdr = (Elf64_Phdr *)(ehdr + 1);
- memcpy(ehdr->e_ident, ELFMAG, SELFMAG);
- ehdr->e_ident[EI_CLASS] = ELFCLASS64;
- ehdr->e_ident[EI_DATA] = ELFDATA2LSB;
- ehdr->e_ident[EI_VERSION] = EV_CURRENT;
- ehdr->e_ident[EI_OSABI] = ELF_OSABI;
- memset(ehdr->e_ident + EI_PAD, 0, EI_NIDENT - EI_PAD);
- ehdr->e_type = ET_CORE;
- ehdr->e_machine = ELF_ARCH;
- ehdr->e_version = EV_CURRENT;
- ehdr->e_phoff = sizeof(Elf64_Ehdr);
- ehdr->e_ehsize = sizeof(Elf64_Ehdr);
- ehdr->e_phentsize = sizeof(Elf64_Phdr);
-
- /* Prepare one phdr of type PT_NOTE for each present cpu */
- for_each_present_cpu(cpu) {
- phdr->p_type = PT_NOTE;
- notes_addr = per_cpu_ptr_to_phys(per_cpu_ptr(crash_notes, cpu));
- phdr->p_offset = phdr->p_paddr = notes_addr;
- phdr->p_filesz = phdr->p_memsz = sizeof(note_buf_t);
- (ehdr->e_phnum)++;
- phdr++;
- }
-
- /* Prepare one PT_NOTE header for vmcoreinfo */
- phdr->p_type = PT_NOTE;
- phdr->p_offset = phdr->p_paddr = paddr_vmcoreinfo_note();
- phdr->p_filesz = phdr->p_memsz = VMCOREINFO_NOTE_SIZE;
- (ehdr->e_phnum)++;
- phdr++;
-
- /* Prepare PT_LOAD type program header for kernel text region */
- if (kernel_map) {
- phdr->p_type = PT_LOAD;
- phdr->p_flags = PF_R|PF_W|PF_X;
- phdr->p_vaddr = (Elf64_Addr)_text;
- phdr->p_filesz = phdr->p_memsz = _end - _text;
- phdr->p_offset = phdr->p_paddr = __pa_symbol(_text);
- ehdr->e_phnum++;
- phdr++;
- }
-
- /* Go through all the ranges in cmem->ranges[] and prepare phdr */
- for (i = 0; i < cmem->nr_ranges; i++) {
- mstart = cmem->ranges[i].start;
- mend = cmem->ranges[i].end;
-
- phdr->p_type = PT_LOAD;
- phdr->p_flags = PF_R|PF_W|PF_X;
- phdr->p_offset = mstart;
-
- phdr->p_paddr = mstart;
- phdr->p_vaddr = (unsigned long long) __va(mstart);
- phdr->p_filesz = phdr->p_memsz = mend - mstart + 1;
- phdr->p_align = 0;
- ehdr->e_phnum++;
- phdr++;
- pr_debug("Crash PT_LOAD elf header. phdr=%p vaddr=0x%llx, paddr=0x%llx, sz=0x%llx e_phnum=%d p_offset=0x%llx\n",
- phdr, phdr->p_vaddr, phdr->p_paddr, phdr->p_filesz,
- ehdr->e_phnum, phdr->p_offset);
- }
-
- *addr = buf;
- *sz = elf_sz;
- return 0;
-}
-
/* Prepare elf headers. Return addr and size */
static int prepare_elf_headers(struct kimage *image, void **addr,
unsigned long *sz)
@@ -453,7 +271,8 @@ static int prepare_elf_headers(struct kimage *image, void **addr,
goto out;
/* By default prepare 64bit headers */
- ret = prepare_elf64_headers(cmem, IS_ENABLED(CONFIG_X86_64), addr, sz);
+ ret = crash_prepare_elf64_headers(cmem,
+ IS_ENABLED(CONFIG_X86_64), addr, sz);
if (ret)
goto out;
@@ -517,14 +336,14 @@ static int memmap_exclude_ranges(struct kimage *image, struct crash_mem *cmem,
/* Exclude Backup region */
start = image->arch.backup_load_addr;
end = start + image->arch.backup_src_sz - 1;
- ret = exclude_mem_range(cmem, start, end);
+ ret = crash_exclude_mem_range(cmem, start, end);
if (ret)
return ret;
/* Exclude elf header region */
start = image->arch.elf_load_addr;
end = start + image->arch.elf_headers_sz - 1;
- return exclude_mem_range(cmem, start, end);
+ return crash_exclude_mem_range(cmem, start, end);
}
/* Prepare memory map for crash dump kernel */
diff --git a/include/linux/kexec.h b/include/linux/kexec.h
index 4ed18cc718fc..85609221cd80 100644
--- a/include/linux/kexec.h
+++ b/include/linux/kexec.h
@@ -168,6 +168,25 @@ int __weak arch_kexec_walk_mem(struct kexec_buf *kbuf,
int (*func)(struct resource *, void *));
extern int kexec_add_buffer(struct kexec_buf *kbuf);
int kexec_locate_mem_hole(struct kexec_buf *kbuf);
+
+/* Alignment required for elf header segment */
+#define ELF_CORE_HEADER_ALIGN 4096
+
+struct crash_mem_range {
+ u64 start, end;
+};
+
+struct crash_mem {
+ unsigned int max_nr_ranges;
+ unsigned int nr_ranges;
+ struct crash_mem_range ranges[0];
+};
+
+extern int crash_exclude_mem_range(struct crash_mem *mem,
+ unsigned long long mstart,
+ unsigned long long mend);
+extern int crash_prepare_elf64_headers(struct crash_mem *mem, int kernel_map,
+ void **addr, unsigned long *sz);
#endif /* CONFIG_KEXEC_FILE */
struct kimage {
diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c
index 6a62c5ef8a07..7d38eb578116 100644
--- a/kernel/kexec_file.c
+++ b/kernel/kexec_file.c
@@ -22,6 +22,11 @@
#include <linux/ima.h>
#include <crypto/hash.h>
#include <crypto/sha.h>
+#include <linux/elf.h>
+#include <linux/elfcore.h>
+#include <linux/kernel.h>
+#include <linux/kexec.h>
+#include <linux/slab.h>
#include <linux/syscalls.h>
#include <linux/vmalloc.h>
#include "kexec_internal.h"
@@ -1079,3 +1084,173 @@ int kexec_purgatory_get_set_symbol(struct kimage *image, const char *name,
return 0;
}
#endif /* CONFIG_ARCH_HAS_KEXEC_PURGATORY */
+
+int crash_exclude_mem_range(struct crash_mem *mem,
+ unsigned long long mstart, unsigned long long mend)
+{
+ int i, j;
+ unsigned long long start, end;
+ struct crash_mem_range temp_range = {0, 0};
+
+ for (i = 0; i < mem->nr_ranges; i++) {
+ start = mem->ranges[i].start;
+ end = mem->ranges[i].end;
+
+ if (mstart > end || mend < start)
+ continue;
+
+ /* Truncate any area outside of range */
+ if (mstart < start)
+ mstart = start;
+ if (mend > end)
+ mend = end;
+
+ /* Found completely overlapping range */
+ if (mstart == start && mend == end) {
+ mem->ranges[i].start = 0;
+ mem->ranges[i].end = 0;
+ if (i < mem->nr_ranges - 1) {
+ /* Shift rest of the ranges to left */
+ for (j = i; j < mem->nr_ranges - 1; j++) {
+ mem->ranges[j].start =
+ mem->ranges[j+1].start;
+ mem->ranges[j].end =
+ mem->ranges[j+1].end;
+ }
+ }
+ mem->nr_ranges--;
+ return 0;
+ }
+
+ if (mstart > start && mend < end) {
+ /* Split original range */
+ mem->ranges[i].end = mstart - 1;
+ temp_range.start = mend + 1;
+ temp_range.end = end;
+ } else if (mstart != start)
+ mem->ranges[i].end = mstart - 1;
+ else
+ mem->ranges[i].start = mend + 1;
+ break;
+ }
+
+ /* If a split happened, add the split to array */
+ if (!temp_range.end)
+ return 0;
+
+ /* Split happened */
+ if (i == mem->max_nr_ranges - 1)
+ return -ENOMEM;
+
+ /* Location where new range should go */
+ j = i + 1;
+ if (j < mem->nr_ranges) {
+ /* Move over all ranges one slot towards the end */
+ for (i = mem->nr_ranges - 1; i >= j; i--)
+ mem->ranges[i + 1] = mem->ranges[i];
+ }
+
+ mem->ranges[j].start = temp_range.start;
+ mem->ranges[j].end = temp_range.end;
+ mem->nr_ranges++;
+ return 0;
+}
+
+int crash_prepare_elf64_headers(struct crash_mem *mem, int kernel_map,
+ void **addr, unsigned long *sz)
+{
+ Elf64_Ehdr *ehdr;
+ Elf64_Phdr *phdr;
+ unsigned long nr_cpus = num_possible_cpus(), nr_phdr, elf_sz;
+ unsigned char *buf;
+ unsigned int cpu, i;
+ unsigned long long notes_addr;
+ unsigned long mstart, mend;
+
+ /* extra phdr for vmcoreinfo elf note */
+ nr_phdr = nr_cpus + 1;
+ nr_phdr += mem->nr_ranges;
+
+ /*
+ * kexec-tools creates an extra PT_LOAD phdr for kernel text mapping
+ * area (for example, ffffffff80000000 - ffffffffa0000000 on x86_64).
+ * I think this is required by tools like gdb. So same physical
+ * memory will be mapped in two elf headers. One will contain kernel
+ * text virtual addresses and other will have __va(physical) addresses.
+ */
+
+ nr_phdr++;
+ elf_sz = sizeof(Elf64_Ehdr) + nr_phdr * sizeof(Elf64_Phdr);
+ elf_sz = ALIGN(elf_sz, ELF_CORE_HEADER_ALIGN);
+
+ buf = vzalloc(elf_sz);
+ if (!buf)
+ return -ENOMEM;
+
+ ehdr = (Elf64_Ehdr *)buf;
+ phdr = (Elf64_Phdr *)(ehdr + 1);
+ memcpy(ehdr->e_ident, ELFMAG, SELFMAG);
+ ehdr->e_ident[EI_CLASS] = ELFCLASS64;
+ ehdr->e_ident[EI_DATA] = ELFDATA2LSB;
+ ehdr->e_ident[EI_VERSION] = EV_CURRENT;
+ ehdr->e_ident[EI_OSABI] = ELF_OSABI;
+ memset(ehdr->e_ident + EI_PAD, 0, EI_NIDENT - EI_PAD);
+ ehdr->e_type = ET_CORE;
+ ehdr->e_machine = ELF_ARCH;
+ ehdr->e_version = EV_CURRENT;
+ ehdr->e_phoff = sizeof(Elf64_Ehdr);
+ ehdr->e_ehsize = sizeof(Elf64_Ehdr);
+ ehdr->e_phentsize = sizeof(Elf64_Phdr);
+
+ /* Prepare one phdr of type PT_NOTE for each present cpu */
+ for_each_present_cpu(cpu) {
+ phdr->p_type = PT_NOTE;
+ notes_addr = per_cpu_ptr_to_phys(per_cpu_ptr(crash_notes, cpu));
+ phdr->p_offset = phdr->p_paddr = notes_addr;
+ phdr->p_filesz = phdr->p_memsz = sizeof(note_buf_t);
+ (ehdr->e_phnum)++;
+ phdr++;
+ }
+
+ /* Prepare one PT_NOTE header for vmcoreinfo */
+ phdr->p_type = PT_NOTE;
+ phdr->p_offset = phdr->p_paddr = paddr_vmcoreinfo_note();
+ phdr->p_filesz = phdr->p_memsz = VMCOREINFO_NOTE_SIZE;
+ (ehdr->e_phnum)++;
+ phdr++;
+
+ /* Prepare PT_LOAD type program header for kernel text region */
+ if (kernel_map) {
+ phdr->p_type = PT_LOAD;
+ phdr->p_flags = PF_R|PF_W|PF_X;
+ phdr->p_vaddr = (Elf64_Addr)_text;
+ phdr->p_filesz = phdr->p_memsz = _end - _text;
+ phdr->p_offset = phdr->p_paddr = __pa_symbol(_text);
+ ehdr->e_phnum++;
+ phdr++;
+ }
+
+ /* Go through all the ranges in mem->ranges[] and prepare phdr */
+ for (i = 0; i < mem->nr_ranges; i++) {
+ mstart = mem->ranges[i].start;
+ mend = mem->ranges[i].end;
+
+ phdr->p_type = PT_LOAD;
+ phdr->p_flags = PF_R|PF_W|PF_X;
+ phdr->p_offset = mstart;
+
+ phdr->p_paddr = mstart;
+ phdr->p_vaddr = (unsigned long long) __va(mstart);
+ phdr->p_filesz = phdr->p_memsz = mend - mstart + 1;
+ phdr->p_align = 0;
+ ehdr->e_phnum++;
+ phdr++;
+ pr_debug("Crash PT_LOAD elf header. phdr=%p vaddr=0x%llx, paddr=0x%llx, sz=0x%llx e_phnum=%d p_offset=0x%llx\n",
+ phdr, phdr->p_vaddr, phdr->p_paddr, phdr->p_filesz,
+ ehdr->e_phnum, phdr->p_offset);
+ }
+
+ *addr = buf;
+ *sz = elf_sz;
+ return 0;
+}
--
2.16.2
While prepare_elf64_headers() in x86 looks pretty generic for other
architectures' use, it contains some code which tries to list crash memory
regions by walking through system resources, which is not always
architecture agnostic.
To make this function more generic, the related code should be purged.
In this patch, prepare_elf64_headers() simply scans the crash_mem buffer
passed to it and adds all the listed regions to the elf header as PT_LOAD
segments.
Accordingly, the walk_system_ram_res(prepare_elf64_ram_headers_callback)
call has been moved ahead of prepare_elf64_headers(), and the callback,
prepare_elf64_ram_headers_callback(), is now responsible for filling up
the crash_mem buffer.
Meanwhile, elf_header_exclude_ranges() used to be called from within that
callback every time, which was rather redundant; it is now called only
once, from prepare_elf_headers().
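The resulting call flow in prepare_elf_headers() (a sketch of the hunks
below):

    ret = walk_system_ram_res(0, -1, ced,
                              prepare_elf64_ram_headers_callback);
    if (ret)
        goto out;

    /* exclude crashk_res/crashk_low_res, called only once now */
    ret = elf_header_exclude_ranges(ced);
    if (ret)
        goto out;

    /* no resource walking inside any more */
    ret = prepare_elf64_headers(ced, addr, sz);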
Signed-off-by: AKASHI Takahiro <[email protected]>
Cc: Dave Young <[email protected]>
Cc: Vivek Goyal <[email protected]>
Cc: Baoquan He <[email protected]>
---
arch/x86/kernel/crash.c | 121 +++++++++++++++++++++++-------------------------
1 file changed, 58 insertions(+), 63 deletions(-)
diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c
index 10e74d4778a1..2123fa0efc17 100644
--- a/arch/x86/kernel/crash.c
+++ b/arch/x86/kernel/crash.c
@@ -316,18 +316,11 @@ static int exclude_mem_range(struct crash_mem *mem,
* Look for any unwanted ranges between mstart, mend and remove them. This
* might lead to split and split ranges are put in ced->mem.ranges[] array
*/
-static int elf_header_exclude_ranges(struct crash_elf_data *ced,
- unsigned long long mstart, unsigned long long mend)
+static int elf_header_exclude_ranges(struct crash_elf_data *ced)
{
struct crash_mem *cmem = &ced->mem;
int ret = 0;
- memset(cmem->ranges, 0, sizeof(cmem->ranges));
-
- cmem->ranges[0].start = mstart;
- cmem->ranges[0].end = mend;
- cmem->nr_ranges = 1;
-
/* Exclude crashkernel region */
ret = exclude_mem_range(cmem, crashk_res.start, crashk_res.end);
if (ret)
@@ -345,53 +338,13 @@ static int elf_header_exclude_ranges(struct crash_elf_data *ced,
static int prepare_elf64_ram_headers_callback(struct resource *res, void *arg)
{
struct crash_elf_data *ced = arg;
- Elf64_Ehdr *ehdr;
- Elf64_Phdr *phdr;
- unsigned long mstart, mend;
- struct kimage *image = ced->image;
- struct crash_mem *cmem;
- int ret, i;
-
- ehdr = ced->ehdr;
-
- /* Exclude unwanted mem ranges */
- ret = elf_header_exclude_ranges(ced, res->start, res->end);
- if (ret)
- return ret;
-
- /* Go through all the ranges in ced->mem.ranges[] and prepare phdr */
- cmem = &ced->mem;
-
- for (i = 0; i < cmem->nr_ranges; i++) {
- mstart = cmem->ranges[i].start;
- mend = cmem->ranges[i].end;
-
- phdr = ced->bufp;
- ced->bufp += sizeof(Elf64_Phdr);
+ struct crash_mem *cmem = &ced->mem;
- phdr->p_type = PT_LOAD;
- phdr->p_flags = PF_R|PF_W|PF_X;
- phdr->p_offset = mstart;
+ cmem->ranges[cmem->nr_ranges].start = res->start;
+ cmem->ranges[cmem->nr_ranges].end = res->end;
+ cmem->nr_ranges++;
- /*
- * If a range matches backup region, adjust offset to backup
- * segment.
- */
- if (mstart == image->arch.backup_src_start &&
- (mend - mstart + 1) == image->arch.backup_src_sz)
- phdr->p_offset = image->arch.backup_load_addr;
-
- phdr->p_paddr = mstart;
- phdr->p_vaddr = (unsigned long long) __va(mstart);
- phdr->p_filesz = phdr->p_memsz = mend - mstart + 1;
- phdr->p_align = 0;
- ehdr->e_phnum++;
- pr_debug("Crash PT_LOAD elf header. phdr=%p vaddr=0x%llx, paddr=0x%llx, sz=0x%llx e_phnum=%d p_offset=0x%llx\n",
- phdr, phdr->p_vaddr, phdr->p_paddr, phdr->p_filesz,
- ehdr->e_phnum, phdr->p_offset);
- }
-
- return ret;
+ return 0;
}
static int prepare_elf64_headers(struct crash_elf_data *ced,
@@ -401,9 +354,10 @@ static int prepare_elf64_headers(struct crash_elf_data *ced,
Elf64_Phdr *phdr;
unsigned long nr_cpus = num_possible_cpus(), nr_phdr, elf_sz;
unsigned char *buf, *bufp;
- unsigned int cpu;
+ unsigned int cpu, i;
unsigned long long notes_addr;
- int ret;
+ struct crash_mem *cmem = &ced->mem;
+ unsigned long mstart, mend;
/* extra phdr for vmcoreinfo elf note */
nr_phdr = nr_cpus + 1;
@@ -472,13 +426,25 @@ static int prepare_elf64_headers(struct crash_elf_data *ced,
(ehdr->e_phnum)++;
#endif
- /* Prepare PT_LOAD headers for system ram chunks. */
- ced->ehdr = ehdr;
- ced->bufp = bufp;
- ret = walk_system_ram_res(0, -1, ced,
- prepare_elf64_ram_headers_callback);
- if (ret < 0)
- return ret;
+ /* Go through all the ranges in cmem->ranges[] and prepare phdr */
+ for (i = 0; i < cmem->nr_ranges; i++) {
+ mstart = cmem->ranges[i].start;
+ mend = cmem->ranges[i].end;
+
+ phdr->p_type = PT_LOAD;
+ phdr->p_flags = PF_R|PF_W|PF_X;
+ phdr->p_offset = mstart;
+
+ phdr->p_paddr = mstart;
+ phdr->p_vaddr = (unsigned long long) __va(mstart);
+ phdr->p_filesz = phdr->p_memsz = mend - mstart + 1;
+ phdr->p_align = 0;
+ ehdr->e_phnum++;
+ phdr++;
+ pr_debug("Crash PT_LOAD elf header. phdr=%p vaddr=0x%llx, paddr=0x%llx, sz=0x%llx e_phnum=%d p_offset=0x%llx\n",
+ phdr, phdr->p_vaddr, phdr->p_paddr, phdr->p_filesz,
+ ehdr->e_phnum, phdr->p_offset);
+ }
*addr = buf;
*sz = elf_sz;
@@ -490,7 +456,9 @@ static int prepare_elf_headers(struct kimage *image, void **addr,
unsigned long *sz)
{
struct crash_elf_data *ced;
- int ret;
+ Elf64_Ehdr *ehdr;
+ Elf64_Phdr *phdr;
+ int ret, i;
ced = kzalloc(sizeof(*ced), GFP_KERNEL);
if (!ced)
@@ -498,8 +466,35 @@ static int prepare_elf_headers(struct kimage *image, void **addr,
fill_up_crash_elf_data(ced, image);
+ ret = walk_system_ram_res(0, -1, ced,
+ prepare_elf64_ram_headers_callback);
+ if (ret)
+ goto out;
+
+ /* Exclude unwanted mem ranges */
+ ret = elf_header_exclude_ranges(ced);
+ if (ret)
+ goto out;
+
/* By default prepare 64bit headers */
ret = prepare_elf64_headers(ced, addr, sz);
+ if (ret)
+ goto out;
+
+ /*
+ * If a range matches backup region, adjust offset to backup
+ * segment.
+ */
+ ehdr = (Elf64_Ehdr *)*addr;
+ phdr = (Elf64_Phdr *)(ehdr + 1);
+ for (i = 0; i < ehdr->e_phnum; phdr++, i++)
+ if (phdr->p_type == PT_LOAD &&
+ phdr->p_paddr == image->arch.backup_src_start &&
+ phdr->p_memsz == image->arch.backup_src_sz) {
+ phdr->p_offset = image->arch.backup_load_addr;
+ break;
+ }
+out:
kfree(ced);
return ret;
}
--
2.16.2
[Cc Andrew]
On 03/06/18 at 07:22pm, AKASHI Takahiro wrote:
> This is a preparatory patch set for adding kexec_file support on arm64.
>
> It was originally included in a arm64 patch set[1], but Philipp is also
> working on their kexec_file support on s390[2] and some changes are now
> conflicting.
>
> So these common parts were extracted and put into a separate patch set for
> better integration. What's more, my original patch#4 was split into a few
> small chunks for easier review after Dave's comment.
>
> As such, the resulting code is basically identical with my original, and
> the only *visible* differences are:
> * renamings of _kexec_kernel_image_probe() and
> _kimage_file_post_load_cleanup()
> * change one of types of arguments at prepare_elf64_headers()
> Those, unfortunately, require a couple of trivial changes on the rest
> (#1, #6 to #13) of my arm64 kexec_file patch set[1].
>
> Patch#1 allows making a use of purgatory optional, particularly useful
> for arm64.
> Patch#2 commonalizes arch_kexec_kernel_{image_probe, image_load,
> verify_sig}() and arch_kimage_file_post_load_cleanup() across architectures.
> Patch#3-#7 is also intended to generalize parse_elf64_headers(), along with
> exclude_mem_range(), to be made best re-use of.
>
> [1] http://lists.infradead.org/pipermail/linux-arm-kernel/2018-February/561182.html
> [2] http://lkml.iu.edu//hypermail/linux/kernel/1802.1/02596.html
>
>
> Changes in v2 (Mar 6, 2018)
> * use IS_ENABLED() instead of #ifdef's in kexec_calculate_store_digests()
> * rename _kexec_kernel_*() and make them static for now except for
> kexec_image_probe_default() which is actually used by arch(powerpc) code
> (Note: kexec_image_post_load_cleanup_default() will also be exported by
> arm64 patches.)
> * change a type of kernel_map in prepare_elf64_headers() to "int"
> * check for a total number of system ram regions in full_up_crash_elf_data()
> and use vzalloc() instead of vmalloc()
>
> AKASHI Takahiro (7):
> kexec_file: make an use of purgatory optional
> kexec_file,x86,powerpc: factor out kexec_file_ops functions
> x86: kexec_file: purge system-ram walking from prepare_elf64_headers()
> x86: kexec_file: remove X86_64 dependency from prepare_elf64_headers()
> x86: kexec_file: lift CRASH_MAX_RANGES limit on crash_mem buffer
> x86: kexec_file: clean up prepare_elf64_headers()
> kexec_file, x86: move re-factored code to generic side
>
> arch/powerpc/Kconfig | 3 +
> arch/powerpc/include/asm/kexec.h | 2 +-
> arch/powerpc/kernel/kexec_elf_64.c | 2 +-
> arch/powerpc/kernel/machine_kexec_file_64.c | 39 +---
> arch/x86/Kconfig | 3 +
> arch/x86/include/asm/kexec-bzimage64.h | 2 +-
> arch/x86/kernel/crash.c | 334 ++++++----------------------
> arch/x86/kernel/kexec-bzimage64.c | 2 +-
> arch/x86/kernel/machine_kexec_64.c | 45 +---
> include/linux/kexec.h | 32 ++-
> kernel/kexec_file.c | 240 +++++++++++++++++++-
> 11 files changed, 338 insertions(+), 366 deletions(-)
>
> --
> 2.16.2
>
AKASHI, thanks for another update, looks good to me:
Acked-by: Dave Young <[email protected]>
Thanks
Dave
On 03/06/18 at 07:22pm, AKASHI Takahiro wrote:
> As arch_kexec_kernel_image_{probe,load}(),
> arch_kimage_file_post_load_cleanup() and arch_kexec_kernel_verify_sig()
> are almost duplicated among architectures, they can be commonalized with
> an architecture-defined kexec_file_ops array. So let's factor them out.
>
> Signed-off-by: AKASHI Takahiro <[email protected]>
> Cc: Dave Young <[email protected]>
> Cc: Vivek Goyal <[email protected]>
> Cc: Baoquan He <[email protected]>
> Cc: Michael Ellerman <[email protected]>
> Cc: Thiago Jung Bauermann <[email protected]>
> ---
> arch/powerpc/include/asm/kexec.h | 2 +-
> arch/powerpc/kernel/kexec_elf_64.c | 2 +-
> arch/powerpc/kernel/machine_kexec_file_64.c | 39 ++-----------------
> arch/x86/include/asm/kexec-bzimage64.h | 2 +-
> arch/x86/kernel/kexec-bzimage64.c | 2 +-
> arch/x86/kernel/machine_kexec_64.c | 45 +---------------------
> include/linux/kexec.h | 13 +++----
> kernel/kexec_file.c | 60 +++++++++++++++++++++++++++--
> 8 files changed, 71 insertions(+), 94 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/kexec.h b/arch/powerpc/include/asm/kexec.h
> index d8b1e8e7e035..4a585cba1787 100644
> --- a/arch/powerpc/include/asm/kexec.h
> +++ b/arch/powerpc/include/asm/kexec.h
> @@ -95,7 +95,7 @@ static inline bool kdump_in_progress(void)
> }
>
> #ifdef CONFIG_KEXEC_FILE
> -extern struct kexec_file_ops kexec_elf64_ops;
> +extern const struct kexec_file_ops kexec_elf64_ops;
>
> #ifdef CONFIG_IMA_KEXEC
> #define ARCH_HAS_KIMAGE_ARCH
> diff --git a/arch/powerpc/kernel/kexec_elf_64.c b/arch/powerpc/kernel/kexec_elf_64.c
> index 9a42309b091a..6c78c11c7faf 100644
> --- a/arch/powerpc/kernel/kexec_elf_64.c
> +++ b/arch/powerpc/kernel/kexec_elf_64.c
> @@ -657,7 +657,7 @@ static void *elf64_load(struct kimage *image, char *kernel_buf,
> return ret ? ERR_PTR(ret) : fdt;
> }
>
> -struct kexec_file_ops kexec_elf64_ops = {
> +const struct kexec_file_ops kexec_elf64_ops = {
> .probe = elf64_probe,
> .load = elf64_load,
> };
> diff --git a/arch/powerpc/kernel/machine_kexec_file_64.c b/arch/powerpc/kernel/machine_kexec_file_64.c
> index e4395f937d63..3a962c87149a 100644
> --- a/arch/powerpc/kernel/machine_kexec_file_64.c
> +++ b/arch/powerpc/kernel/machine_kexec_file_64.c
> @@ -31,52 +31,19 @@
>
> #define SLAVE_CODE_SIZE 256
>
> -static struct kexec_file_ops *kexec_file_loaders[] = {
> +const struct kexec_file_ops * const kexec_file_loaders[] = {
> &kexec_elf64_ops,
> + NULL
> };
>
> int arch_kexec_kernel_image_probe(struct kimage *image, void *buf,
> unsigned long buf_len)
> {
> - int i, ret = -ENOEXEC;
> - struct kexec_file_ops *fops;
> -
> /* We don't support crash kernels yet. */
> if (image->type == KEXEC_TYPE_CRASH)
> return -ENOTSUPP;
>
> - for (i = 0; i < ARRAY_SIZE(kexec_file_loaders); i++) {
> - fops = kexec_file_loaders[i];
> - if (!fops || !fops->probe)
> - continue;
> -
> - ret = fops->probe(buf, buf_len);
> - if (!ret) {
> - image->fops = fops;
> - return ret;
> - }
> - }
> -
> - return ret;
> -}
> -
> -void *arch_kexec_kernel_image_load(struct kimage *image)
> -{
> - if (!image->fops || !image->fops->load)
> - return ERR_PTR(-ENOEXEC);
> -
> - return image->fops->load(image, image->kernel_buf,
> - image->kernel_buf_len, image->initrd_buf,
> - image->initrd_buf_len, image->cmdline_buf,
> - image->cmdline_buf_len);
> -}
> -
> -int arch_kimage_file_post_load_cleanup(struct kimage *image)
> -{
> - if (!image->fops || !image->fops->cleanup)
> - return 0;
> -
> - return image->fops->cleanup(image->image_loader_data);
> + return kexec_image_probe_default(image, buf, buf_len);
> }
>
> /**
> diff --git a/arch/x86/include/asm/kexec-bzimage64.h b/arch/x86/include/asm/kexec-bzimage64.h
> index 9f07cff43705..df89ee7d3e9e 100644
> --- a/arch/x86/include/asm/kexec-bzimage64.h
> +++ b/arch/x86/include/asm/kexec-bzimage64.h
> @@ -2,6 +2,6 @@
> #ifndef _ASM_KEXEC_BZIMAGE64_H
> #define _ASM_KEXEC_BZIMAGE64_H
>
> -extern struct kexec_file_ops kexec_bzImage64_ops;
> +extern const struct kexec_file_ops kexec_bzImage64_ops;
>
> #endif /* _ASM_KEXE_BZIMAGE64_H */
> diff --git a/arch/x86/kernel/kexec-bzimage64.c b/arch/x86/kernel/kexec-bzimage64.c
> index fb095ba0c02f..705654776c0c 100644
> --- a/arch/x86/kernel/kexec-bzimage64.c
> +++ b/arch/x86/kernel/kexec-bzimage64.c
> @@ -538,7 +538,7 @@ static int bzImage64_verify_sig(const char *kernel, unsigned long kernel_len)
> }
> #endif
>
> -struct kexec_file_ops kexec_bzImage64_ops = {
> +const struct kexec_file_ops kexec_bzImage64_ops = {
> .probe = bzImage64_probe,
> .load = bzImage64_load,
> .cleanup = bzImage64_cleanup,
> diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
> index 1f790cf9d38f..2cdd29d64181 100644
> --- a/arch/x86/kernel/machine_kexec_64.c
> +++ b/arch/x86/kernel/machine_kexec_64.c
> @@ -30,8 +30,9 @@
> #include <asm/set_memory.h>
>
> #ifdef CONFIG_KEXEC_FILE
> -static struct kexec_file_ops *kexec_file_loaders[] = {
> +const struct kexec_file_ops * const kexec_file_loaders[] = {
> &kexec_bzImage64_ops,
> + NULL
> };
> #endif
>
> @@ -363,27 +364,6 @@ void arch_crash_save_vmcoreinfo(void)
> /* arch-dependent functionality related to kexec file-based syscall */
>
> #ifdef CONFIG_KEXEC_FILE
> -int arch_kexec_kernel_image_probe(struct kimage *image, void *buf,
> - unsigned long buf_len)
> -{
> - int i, ret = -ENOEXEC;
> - struct kexec_file_ops *fops;
> -
> - for (i = 0; i < ARRAY_SIZE(kexec_file_loaders); i++) {
> - fops = kexec_file_loaders[i];
> - if (!fops || !fops->probe)
> - continue;
> -
> - ret = fops->probe(buf, buf_len);
> - if (!ret) {
> - image->fops = fops;
> - return ret;
> - }
> - }
> -
> - return ret;
> -}
> -
> void *arch_kexec_kernel_image_load(struct kimage *image)
> {
> vfree(image->arch.elf_headers);
> @@ -398,27 +378,6 @@ void *arch_kexec_kernel_image_load(struct kimage *image)
> image->cmdline_buf_len);
> }
>
> -int arch_kimage_file_post_load_cleanup(struct kimage *image)
> -{
> - if (!image->fops || !image->fops->cleanup)
> - return 0;
> -
> - return image->fops->cleanup(image->image_loader_data);
> -}
> -
> -#ifdef CONFIG_KEXEC_VERIFY_SIG
> -int arch_kexec_kernel_verify_sig(struct kimage *image, void *kernel,
> - unsigned long kernel_len)
> -{
> - if (!image->fops || !image->fops->verify_sig) {
> - pr_debug("kernel loader does not support signature verification.");
> - return -EKEYREJECTED;
> - }
> -
> - return image->fops->verify_sig(kernel, kernel_len);
> -}
> -#endif
> -
> /*
> * Apply purgatory relocations.
> *
> diff --git a/include/linux/kexec.h b/include/linux/kexec.h
> index f16f6ceb3875..4ed18cc718fc 100644
> --- a/include/linux/kexec.h
> +++ b/include/linux/kexec.h
> @@ -135,6 +135,11 @@ struct kexec_file_ops {
> #endif
> };
>
> +extern const struct kexec_file_ops * const kexec_file_loaders[];
> +
> +int kexec_image_probe_default(struct kimage *image, void *buf,
> + unsigned long buf_len);
> +
> /**
> * struct kexec_buf - parameters for finding a place for a buffer in memory
> * @image: kexec image in which memory to search.
> @@ -209,7 +214,7 @@ struct kimage {
> unsigned long cmdline_buf_len;
>
> /* File operations provided by image loader */
> - struct kexec_file_ops *fops;
> + const struct kexec_file_ops *fops;
>
> /* Image loader handling the kernel can store a pointer here */
> void *image_loader_data;
> @@ -277,12 +282,6 @@ int crash_shrink_memory(unsigned long new_size);
> size_t crash_get_memory_size(void);
> void crash_free_reserved_phys_range(unsigned long begin, unsigned long end);
>
> -int __weak arch_kexec_kernel_image_probe(struct kimage *image, void *buf,
> - unsigned long buf_len);
> -void * __weak arch_kexec_kernel_image_load(struct kimage *image);
> -int __weak arch_kimage_file_post_load_cleanup(struct kimage *image);
> -int __weak arch_kexec_kernel_verify_sig(struct kimage *image, void *buf,
> - unsigned long buf_len);
> int __weak arch_kexec_apply_relocations_add(const Elf_Ehdr *ehdr,
> Elf_Shdr *sechdrs, unsigned int relsec);
> int __weak arch_kexec_apply_relocations(const Elf_Ehdr *ehdr, Elf_Shdr *sechdrs,
> diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c
> index 6dbbb89cbbac..6a62c5ef8a07 100644
> --- a/kernel/kexec_file.c
> +++ b/kernel/kexec_file.c
> @@ -28,28 +28,80 @@
>
> static int kexec_calculate_store_digests(struct kimage *image);
>
> +/*
> + * Currently this is the only default function that is exported as some
> + * architectures need it to do additional handlings.
> + * In the future, other default functions may be exported too if required.
> + */
> +int kexec_image_probe_default(struct kimage *image, void *buf,
> + unsigned long buf_len)
> +{
> + const struct kexec_file_ops * const *fops;
> + int ret = -ENOEXEC;
> +
> + for (fops = &kexec_file_loaders[0]; *fops && (*fops)->probe; ++fops) {
> + ret = (*fops)->probe(buf, buf_len);
> + if (!ret) {
> + image->fops = *fops;
> + return ret;
> + }
> + }
> +
> + return ret;
> +}
> +
> /* Architectures can provide this probe function */
> int __weak arch_kexec_kernel_image_probe(struct kimage *image, void *buf,
> unsigned long buf_len)
> {
> - return -ENOEXEC;
> + return kexec_image_probe_default(image, buf, buf_len);
> +}
> +
> +static void *kexec_image_load_default(struct kimage *image)
> +{
> + if (!image->fops || !image->fops->load)
> + return ERR_PTR(-ENOEXEC);
> +
> + return image->fops->load(image, image->kernel_buf,
> + image->kernel_buf_len, image->initrd_buf,
> + image->initrd_buf_len, image->cmdline_buf,
> + image->cmdline_buf_len);
> }
>
> void * __weak arch_kexec_kernel_image_load(struct kimage *image)
> {
> - return ERR_PTR(-ENOEXEC);
> + return kexec_image_load_default(image);
> +}
> +
> +static int kexec_image_post_load_cleanup_default(struct kimage *image)
> +{
> + if (!image->fops || !image->fops->cleanup)
> + return 0;
> +
> + return image->fops->cleanup(image->image_loader_data);
> }
>
> int __weak arch_kimage_file_post_load_cleanup(struct kimage *image)
> {
> - return -EINVAL;
> + return kexec_image_post_load_cleanup_default(image);
> }
>
> #ifdef CONFIG_KEXEC_VERIFY_SIG
> +static int kexec_image_verify_sig_default(struct kimage *image, void *buf,
> + unsigned long buf_len)
> +{
> + if (!image->fops || !image->fops->verify_sig) {
> + pr_debug("kernel loader does not support signature verification.\n");
> + return -EKEYREJECTED;
> + }
> +
> + return image->fops->verify_sig(buf, buf_len);
> +}
> +
> int __weak arch_kexec_kernel_verify_sig(struct kimage *image, void *buf,
> unsigned long buf_len)
> {
> - return -EKEYREJECTED;
> + return kexec_image_verify_sig_default(image, buf, buf_len);
> }
> #endif
>
> --
> 2.16.2
>
This patch also needs some review from powerpc people.
Thanks
Dave
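
[Editor's note] As an aside for readers following the refactoring, the pattern introduced by the patch quoted above is small enough to sketch in full. The following is a self-contained, user-space approximation rather than the kernel code itself; the toy loader and names are made up for illustration. Each architecture publishes a const, NULL-terminated kexec_file_loaders[] table, and the generic probe walks it until one loader accepts the image, the way kexec_image_probe_default() does in the patch.

#include <stdio.h>

/* Simplified stand-ins for the kernel structures; illustration only. */
struct kexec_file_ops {
	int (*probe)(const char *buf, unsigned long buf_len);
};

struct kimage {
	const struct kexec_file_ops *fops;
};

/* A toy loader that "recognizes" any non-empty buffer. */
static int toy_probe(const char *buf, unsigned long buf_len)
{
	return (buf && buf_len) ? 0 : -1;
}

static const struct kexec_file_ops toy_ops = { .probe = toy_probe };

/* What each architecture now provides: a const, NULL-terminated table. */
static const struct kexec_file_ops * const kexec_file_loaders[] = {
	&toy_ops,
	NULL,
};

/* Mirrors the kexec_image_probe_default() loop added by the patch. */
static int probe_default(struct kimage *image, const char *buf,
			 unsigned long buf_len)
{
	const struct kexec_file_ops * const *fops;
	int ret = -1;	/* stands in for -ENOEXEC */

	for (fops = &kexec_file_loaders[0]; *fops && (*fops)->probe; ++fops) {
		ret = (*fops)->probe(buf, buf_len);
		if (!ret) {
			image->fops = *fops;	/* remember the matching loader */
			return ret;
		}
	}
	return ret;
}

int main(void)
{
	struct kimage image = { NULL };
	char dummy[16] = { 0 };

	printf("probe: %d\n", probe_default(&image, dummy, sizeof(dummy)));
	return 0;
}

The NULL terminator (also added to the x86 table in the hunk above) is what lets the generic loop stop without needing ARRAY_SIZE() of an array defined in another translation unit, and the __weak arch_kexec_kernel_image_probe() wrapper still lets an architecture override the default entirely.
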
On 03/08/18 at 09:05am, Dave Young wrote:
> [Cc Andrew]
>
> On 03/06/18 at 07:22pm, AKASHI Takahiro wrote:
> > [snip]
>
> AKASHI, thanks for another update, looks good to me:
>
> Acked-by: Dave Young <[email protected]>
Hold on the ack, during today's tests kexec kernel can not boot, it
hangs early. Will see if it is caused by this series and come back
later.
>
> Thanks
> Dave
Thanks
Dave
On 03/09/18 at 02:02pm, Dave Young wrote:
> On 03/08/18 at 09:05am, Dave Young wrote:
> > [Cc Andrew]
> >
> > On 03/06/18 at 07:22pm, AKASHI Takahiro wrote:
> > > [snip]
> >
> > AKASHI, thanks for another update, looks good to me:
> >
> > Acked-by: Dave Young <[email protected]>
>
> Hold on the ack, during today's tests kexec kernel can not boot, it
> hangs early. Will see if it is caused by this series and come back
> later.
AKASHI, it seems the first patch causes the hang, can you take a look?
>
> >
> > Thanks
> > Dave
>
> Thanks
> Dave
Dave,
On Fri, Mar 09, 2018 at 02:44:12PM +0800, Dave Young wrote:
> On 03/09/18 at 02:02pm, Dave Young wrote:
> > On 03/08/18 at 09:05am, Dave Young wrote:
> > > [Cc Andrew]
> > >
> > > On 03/06/18 at 07:22pm, AKASHI Takahiro wrote:
> > > > [snip]
> > >
> > > AKASHI, thanks for another update, looks good to me:
> > >
> > > Acked-by: Dave Young <[email protected]>
> >
> > Hold on the ack, during today's tests kexec kernel can not boot, it
> > hangs early. Will see if it is caused by this series and come back
> > later.
>
> AKASHI, it seems the first patch causes the hang, can you take a look?
Sure, but can you first tell me specifics about what you have observed?
Thanks,
-Takahiro AKASHI
> >
> > >
> > > Thanks
> > > Dave
> >
> > Thanks
> > Dave
Hi AKASHI,
On 03/09/18 at 04:18pm, AKASHI Takahiro wrote:
> Dave,
>
> On Fri, Mar 09, 2018 at 02:44:12PM +0800, Dave Young wrote:
> > On 03/09/18 at 02:02pm, Dave Young wrote:
> > > On 03/08/18 at 09:05am, Dave Young wrote:
> > > > [Cc Andrew]
> > > >
> > > > On 03/06/18 at 07:22pm, AKASHI Takahiro wrote:
> > > > > [snip]
> > > >
> > > > AKASHI, thanks for another update, looks good to me:
> > > >
> > > > Acked-by: Dave Young <[email protected]>
> > >
> > > Hold on the ack, during today's tests kexec kernel can not boot, it
> > > hangs early. Will see if it is caused by this series and come back
> > > later.
> >
> > AKASHI, it seems the first patch causes the hang, can you take a look?
>
> Sure, but can you first tell me specifics about what you have observed?
It is a kexec load test like below:
kexec -s -l /boot/vmlinuz-`uname -r` --reuse-cmdline
sync
sync
kexec -e
And it hangs after the message "Starting new kernel". Now I'm not sure
how I tested the vmcore parsing before; maybe I forgot to add the "-s"
option in kexec-tools.
Thanks
Dave
>
> Thanks,
> -Takahiro AKASHI
>
>
> > >
> > > >
> > > > Thanks
> > > > Dave
> > >
> > > Thanks
> > > Dave
On 03/09/18 at 03:46pm, Dave Young wrote:
> Hi AKASHI,
>
> On 03/09/18 at 04:18pm, AKASHI Takahiro wrote:
> > Dave,
> >
> > On Fri, Mar 09, 2018 at 02:44:12PM +0800, Dave Young wrote:
> > > On 03/09/18 at 02:02pm, Dave Young wrote:
> > > > On 03/08/18 at 09:05am, Dave Young wrote:
> > > > > [Cc Andrew]
> > > > >
> > > > > On 03/06/18 at 07:22pm, AKASHI Takahiro wrote:
> > > > > > [snip]
> > > > >
> > > > > AKASHI, thanks for another update, looks good to me:
> > > > >
> > > > > Acked-by: Dave Young <[email protected]>
> > > >
> > > > Hold on the ack, during today's tests kexec kernel can not boot, it
> > > > hangs early. Will see if it is caused by this series and come back
> > > > later.
> > >
> > > AKASHI, it seems the first patch causes the hang, can you take a look?
> >
> > Sure, but can you first tell me specifics about what you have observed?
>
> It is a kexec load test like below:
> kexec -s -l /boot/vmlinuz-`uname -r` --reuse-cmdline
> sync
> sync
> kexec -e
>
> And it hangs after the message "Starting new kernel". Now I'm not sure
> how I tested the vmcore parsing before; maybe I forgot to add the "-s"
> option in kexec-tools.
BTW, the kdump kernel also failed to boot.
I am using Linus' tree; the git HEAD is
86f84779d8e9 (origin/master, origin/HEAD, master) Merge branch 'siginfo-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace
It is a little odd for my testing: for the kdump testing I did before, I did
not need to add "-s" explicitly, since Fedora adds it automatically when
Secure Boot is enabled, and I always have Secure Boot enabled.
>
> Thanks
> Dave
>
> >
> > Thanks,
> > -Takahiro AKASHI
> >
> >
> > > >
> > > > >
> > > > > Thanks
> > > > > Dave
> > > >
> > > > Thanks
> > > > Dave
Dave,
On Fri, Mar 09, 2018 at 03:55:47PM +0800, Dave Young wrote:
> On 03/09/18 at 03:46pm, Dave Young wrote:
> > Hi AKASHI,
> >
> > On 03/09/18 at 04:18pm, AKASHI Takahiro wrote:
> > > Dave,
> > >
> > > On Fri, Mar 09, 2018 at 02:44:12PM +0800, Dave Young wrote:
> > > > On 03/09/18 at 02:02pm, Dave Young wrote:
> > > > > On 03/08/18 at 09:05am, Dave Young wrote:
> > > > > > [Cc Andrew]
> > > > > >
> > > > > > On 03/06/18 at 07:22pm, AKASHI Takahiro wrote:
> > > > > > > [snip]
> > > > > >
> > > > > > AKASHI, thanks for another update, looks good to me:
> > > > > >
> > > > > > Acked-by: Dave Young <[email protected]>
> > > > >
> > > > > Hold on the ack, during today's tests kexec kernel can not boot, it
> > > > > hangs early. Will see if it is caused by this series and come back
> > > > > later.
> > > >
> > > > AKASHI, it seems the first patch causes the hang, can you take a look?
> > >
> > > Sure, but can you first tell me specifics about what you have observed?
> >
> > It is a kexec load test like below:
> > kexec -s -l /boot/vmlinuz-`uname -r` --reuse-cmdline
> > sync
> > sync
> > kexec -e
> >
> > And it hangs after the message "Starting new kernel". Now I'm not sure
> > how I tested the vmcore parsing before; maybe I forgot to add the "-s"
> > option in kexec-tools.
>
> BTW, the kdump kernel also failed to boot.
> I am using Linus' tree; the git HEAD is
> 86f84779d8e9 (origin/master, origin/HEAD, master) Merge branch 'siginfo-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace
>
> It is a little odd for my testing: for the kdump testing I did before, I did
> not need to add "-s" explicitly, since Fedora adds it automatically when
> Secure Boot is enabled, and I always have Secure Boot enabled.
Oops, please apply the fix below to my first patch ("kexec_file: make
an use of purgatory optional"). My screw-up is trivial here :)
Please let me know if you think that I should repost my patch set.
Thanks,
-Takahiro AKASHI
===8<===
diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c
index 6dbbb89cbbac..ab1dced677fd 100644
--- a/kernel/kexec_file.c
+++ b/kernel/kexec_file.c
@@ -532,7 +532,7 @@ static int kexec_calculate_store_digests(struct kimage *image)
struct kexec_sha_region *sha_regions;
struct purgatory_info *pi = &image->purgatory_info;

- if (!IS_ENABLED(ARCH_HAS_KEXEC_PURGATORY))
+ if (!IS_ENABLED(CONFIG_ARCH_HAS_KEXEC_PURGATORY))
return 0;

zero_buf = __va(page_to_pfn(ZERO_PAGE(0)) << PAGE_SHIFT);
===>8===
> >
> > Thanks
> > Dave
> >
> > >
> > > Thanks,
> > > -Takahiro AKASHI
> > >
> > >
> > > > >
> > > > > >
> > > > > > Thanks
> > > > > > Dave
> > > > >
> > > > > Thanks
> > > > > Dave
On 03/09/18 at 06:33pm, AKASHI Takahiro wrote:
[snip]
> Oops, please apply the fix below to my first patch ("kexec_file: make
> an use of purgatory optional"). My screw-up is trivial here :)
Wow, human eyes sometimes cannot catch such an issue; thanks for the fix.
It works for me.
>
> Please let me know if you think that I should repost my patch set.
>
Because akpm takes kexec patches, this question should be passed on to
Andrew. Andrew, do you need a repost?
> Thanks,
> -Takahiro AKASHI
>
> ===8<===
> diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c
> index 6dbbb89cbbac..ab1dced677fd 100644
> --- a/kernel/kexec_file.c
> +++ b/kernel/kexec_file.c
> @@ -532,7 +532,7 @@ static int kexec_calculate_store_digests(struct kimage *image)
> struct kexec_sha_region *sha_regions;
> struct purgatory_info *pi = &image->purgatory_info;
>
> - if (!IS_ENABLED(ARCH_HAS_KEXEC_PURGATORY))
> + if (!IS_ENABLED(CONFIG_ARCH_HAS_KEXEC_PURGATORY))
> return 0;
>
> zero_buf = __va(page_to_pfn(ZERO_PAGE(0)) << PAGE_SHIFT);
> ===>8===
>
Thanks
Dave
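
[Editor's note] For anyone who has not been bitten by this before, the failure mode behind the one-word fix is worth spelling out. IS_ENABLED() only evaluates to 1 for symbols that Kconfig actually defines, and Kconfig only defines CONFIG_-prefixed names; a bare option name is just an undefined identifier, so the test silently becomes 0 and kexec_calculate_store_digests() always bailed out early. The snippet below is a simplified stand-in for the macro from include/linux/kconfig.h (it ignores the CONFIG_FOO_MODULE case) and is for illustration only.

/*
 * Simplified stand-in for the kernel's IS_ENABLED(); the real macro in
 * include/linux/kconfig.h also handles the =m (CONFIG_FOO_MODULE) case.
 */
#define __ARG_PLACEHOLDER_1			0,
#define __take_second_arg(__ignored, val, ...)	val
#define ____is_defined(arg1_or_junk)	__take_second_arg(arg1_or_junk 1, 0)
#define ___is_defined(val)		____is_defined(__ARG_PLACEHOLDER_##val)
#define __is_defined(x)			___is_defined(x)
#define IS_ENABLED(option)		__is_defined(option)

/* What Kconfig emits into autoconf.h when the option is selected: */
#define CONFIG_ARCH_HAS_KEXEC_PURGATORY 1

/* Correct spelling: CONFIG_... expands to 1, so this evaluates to 1. */
static const int with_prefix = IS_ENABLED(CONFIG_ARCH_HAS_KEXEC_PURGATORY);

/*
 * Missing CONFIG_ prefix: the name is never defined, the macro yields 0,
 * and "if (!IS_ENABLED(...)) return 0;" returns before storing any digest.
 */
static const int without_prefix = IS_ENABLED(ARCH_HAS_KEXEC_PURGATORY);

Because the misspelled name is still a valid identifier, the compiler has nothing to warn about, which is why this kind of typo only shows up at run time, exactly as in the boot test above.
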
Dave Young <[email protected]> writes:
> On 03/06/18 at 07:22pm, AKASHI Takahiro wrote:
>> As arch_kexec_kernel_image_{probe,load}(),
>> arch_kimage_file_post_load_cleanup() and arch_kexec_kernel_verify_sig()
>> are almost duplicated among architectures, they can be commonalized with
>> an architecture-defined kexec_file_ops array. So let's factor them out.
>>
>> Signed-off-by: AKASHI Takahiro <[email protected]>
>> Cc: Dave Young <[email protected]>
>> Cc: Vivek Goyal <[email protected]>
>> Cc: Baoquan He <[email protected]>
>> Cc: Michael Ellerman <[email protected]>
>> Cc: Thiago Jung Bauermann <[email protected]>
>> ---
>> arch/powerpc/include/asm/kexec.h | 2 +-
>> arch/powerpc/kernel/kexec_elf_64.c | 2 +-
>> arch/powerpc/kernel/machine_kexec_file_64.c | 39 ++-----------------
>> arch/x86/include/asm/kexec-bzimage64.h | 2 +-
>> arch/x86/kernel/kexec-bzimage64.c | 2 +-
>> arch/x86/kernel/machine_kexec_64.c | 45 +---------------------
>> include/linux/kexec.h | 13 +++----
>> kernel/kexec_file.c | 60 +++++++++++++++++++++++++++--
>> 8 files changed, 71 insertions(+), 94 deletions(-)
>>
>
> This patch also needs some review from powerpc people.
FWIW:
Reviewed-by: Thiago Jung Bauermann <[email protected]>
Also, tested on a ppc64le KVM guest:
Tested-by: Thiago Jung Bauermann <[email protected]>
--
Thiago Jung Bauermann
IBM Linux Technology Center
Thiago Jung Bauermann <[email protected]> writes:
> Dave Young <[email protected]> writes:
>
>> On 03/06/18 at 07:22pm, AKASHI Takahiro wrote:
>>> [snip]
>>
>> This patch also needs some review from powerpc people.
>
> FWIW:
>
> Reviewed-by: Thiago Jung Bauermann <[email protected]>
>
> Also, tested on a ppc64le KVM guest:
>
> Tested-by: Thiago Jung Bauermann <[email protected]>
Thanks for testing. It looks OK to me.
Acked-by: Michael Ellerman <[email protected]> (powerpc)
cheers