2015-04-27 14:27:36

by Jiri Kosina

Subject: [PATCH 0/2] introduce kaslr_offset() and its users

There is already in-kernel code that computes the offset used for kASLR --
the dump_kernel_offset() notifier. As a potential second user is now
coming, it seems reasonable to provide a common helper that computes the
offset.

- Patch 1/2 introduces kaslr_offset(), which computes the kASLR offset,
  and converts dump_kernel_offset() to make use of it
- Patch 2/2 extends the currently limited functionality of livepatching
  when kASLR has been enabled and is active

----------------------------------------------------------------
Jiri Kosina (2):
x86: introduce kaslr_offset()
livepatch: x86: make kASLR logic more accurate

arch/x86/include/asm/livepatch.h | 1 +
arch/x86/include/asm/setup.h | 6 ++++++
arch/x86/kernel/setup.c | 2 +-
kernel/livepatch/core.c | 5 +++--
4 files changed, 11 insertions(+), 3 deletions(-)

--
Jiri Kosina
SUSE Labs


2015-04-27 14:28:13

by Jiri Kosina

Subject: [PATCH 1/2] x86: introduce kaslr_offset()

The offset that has been chosen for kASLR during kernel decompression can
be easily computed as the difference between _text and __START_KERNEL. We
are already making use of this in the dump_kernel_offset() notifier.

Introduce kaslr_offset(), which performs this computation instead of
hard-coding it, so that other kernel code (such as live patching) can make
use of it.

Signed-off-by: Jiri Kosina <[email protected]>
---
arch/x86/include/asm/setup.h | 6 ++++++
arch/x86/kernel/setup.c | 2 +-
2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/setup.h b/arch/x86/include/asm/setup.h
index f69e06b..785ac2f 100644
--- a/arch/x86/include/asm/setup.h
+++ b/arch/x86/include/asm/setup.h
@@ -65,12 +65,18 @@ static inline void x86_ce4100_early_setup(void) { }
* This is set up by the setup-routine at boot-time
*/
extern struct boot_params boot_params;
+extern char _text[];

static inline bool kaslr_enabled(void)
{
return !!(boot_params.hdr.loadflags & KASLR_FLAG);
}

+static inline unsigned long kaslr_offset(void)
+{
+ return (unsigned long)&_text - __START_KERNEL;
+}
+
/*
* Do NOT EVER look at the BIOS memory size location.
* It does not work on many machines.
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index d74ac33..5056d3c 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -834,7 +834,7 @@ dump_kernel_offset(struct notifier_block *self, unsigned long v, void *p)
{
if (kaslr_enabled()) {
pr_emerg("Kernel Offset: 0x%lx from 0x%lx (relocation range: 0x%lx-0x%lx)\n",
- (unsigned long)&_text - __START_KERNEL,
+ kaslr_offset(),
__START_KERNEL,
__START_KERNEL_map,
MODULES_VADDR-1);
--
Jiri Kosina
SUSE Labs

2015-04-27 14:29:01

by Jiri Kosina

Subject: [PATCH 2/2] livepatch: x86: make kASLR logic more accurate

We give up the old_addr hint from the incoming patch module in cases where
the kernel load base has been randomized (as in such a case, the incoming
module has no idea about the exact randomization offset).

We are currently too pessimistic and give up immediately as soon as
CONFIG_RANDOMIZE_BASE is set; this does not, however, directly imply that
the load base has actually been randomized. There are config options that
disable kASLR (such as hibernation), the user could have disabled kASLR on
the kernel command line, etc.

The loader propagates the information of whether the kernel has been
randomized through bootparams. This allows us to make the condition more
accurate.

On top of that, it seems unnecessary to give up old_addr hints even when
randomization is active. The relocation offset can be computed using
kaslr_offset(), and old_addr can therefore be adjusted accordingly.

Signed-off-by: Jiri Kosina <[email protected]>
---
arch/x86/include/asm/livepatch.h | 1 +
kernel/livepatch/core.c | 5 +++--
2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/livepatch.h b/arch/x86/include/asm/livepatch.h
index 2d29197..19c099a 100644
--- a/arch/x86/include/asm/livepatch.h
+++ b/arch/x86/include/asm/livepatch.h
@@ -21,6 +21,7 @@
#ifndef _ASM_X86_LIVEPATCH_H
#define _ASM_X86_LIVEPATCH_H

+#include <asm/setup.h>
#include <linux/module.h>
#include <linux/ftrace.h>

diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 284e269..0e7c23c 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -234,8 +234,9 @@ static int klp_find_verify_func_addr(struct klp_object *obj,
int ret;

#if defined(CONFIG_RANDOMIZE_BASE)
- /* KASLR is enabled, disregard old_addr from user */
- func->old_addr = 0;
+ /* If KASLR has been enabled, adjust old_addr accordingly */
+ if (kaslr_enabled() && func->old_addr)
+ func->old_addr += kaslr_offset();
#endif

if (!func->old_addr || klp_is_module(obj))
--
Jiri Kosina
SUSE Labs

2015-04-27 14:46:33

by Minfei Huang

Subject: Re: [PATCH 2/2] livepatch: x86: make kASLR logic more accurate

On 04/27/15 at 04:28pm, Jiri Kosina wrote:
> We give up old_addr hint from the coming patch module in cases when kernel
> load base has been randomized (as in such case, the coming module has no
> idea about the exact randomization offset).
>
> We are currently too pessimistic, and give up immediately as soon as
> CONFIG_RANDOMIZE_BASE is set; this doesn't however directly imply that the
> load base has actually been randomized. There are config options that
> disable kASLR (such as hibernation), user could have disabled kaslr on
> kernel command-line, etc.
>
> The loader propagates the information whether kernel has been randomized
> through bootparams. This allows us to have the condition more accurate.
>
> On top of that, it seems unnecessary to give up old_addr hints even if
> randomization is active. The relocation offset can be computed using
> kaslr_offset(), and therefore old_addr can be adjusted accordingly.
>
> Signed-off-by: Jiri Kosina <[email protected]>
> ---
> arch/x86/include/asm/livepatch.h | 1 +
> kernel/livepatch/core.c | 5 +++--
> 2 files changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/include/asm/livepatch.h b/arch/x86/include/asm/livepatch.h
> index 2d29197..19c099a 100644
> --- a/arch/x86/include/asm/livepatch.h
> +++ b/arch/x86/include/asm/livepatch.h
> @@ -21,6 +21,7 @@
> #ifndef _ASM_X86_LIVEPATCH_H
> #define _ASM_X86_LIVEPATCH_H
>
> +#include <asm/setup.h>
> #include <linux/module.h>
> #include <linux/ftrace.h>
>
> diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
> index 284e269..0e7c23c 100644
> --- a/kernel/livepatch/core.c
> +++ b/kernel/livepatch/core.c
> @@ -234,8 +234,9 @@ static int klp_find_verify_func_addr(struct klp_object *obj,
> int ret;
>
> #if defined(CONFIG_RANDOMIZE_BASE)
> - /* KASLR is enabled, disregard old_addr from user */
> - func->old_addr = 0;
> + /* If KASLR has been enabled, adjust old_addr accordingly */
> + if (kaslr_enabled() && func->old_addr)
> + func->old_addr += kaslr_offset();

Hi.

Removing the CONFIG_RANDOMIZE_BASE check would also be fine: if kASLR is
disabled, the offset will be 0.

I found that kaslr_enabled() exists only for x86. Maybe you could define a
weak function, klp_adjustment_function_addr, in generic code. Each arch
could then override it with its own implementation.

Thanks
Minfei

> #endif
>
> if (!func->old_addr || klp_is_module(obj))
> --
> Jiri Kosina
> SUSE Labs
> --
> To unsubscribe from this list: send the line "unsubscribe live-patching" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html

2015-04-27 23:29:46

by Jiri Kosina

Subject: Re: [PATCH 2/2] livepatch: x86: make kASLR logic more accurate

On Mon, 27 Apr 2015, Minfei Huang wrote:

> I found that kaslr_enabled() exists only for x86. Maybe you could define a
> weak function, klp_adjustment_function_addr, in generic code. Each arch
> could then override it with its own implementation.

It might start to make sense once there is at least one additional arch
that supports kaslr. Currently, I don't see a benefit.

Why are you so obstinate about this? I personally don't find that
important at all; it's something that can always be sorted out once more
archs start supporting kaslr.

Thanks,

--
Jiri Kosina
SUSE Labs

2015-04-28 00:08:09

by Minfei Huang

Subject: Re: [PATCH 2/2] livepatch: x86: make kASLR logic more accurate

On 04/28/15 at 01:29am, Jiri Kosina wrote:
> On Mon, 27 Apr 2015, Minfei Huang wrote:
>
> > I found that kaslr_enabled() exists only for x86. Maybe you could define a
> > weak function, klp_adjustment_function_addr, in generic code. Each arch
> > could then override it with its own implementation.
>
> It might start to make sense once there is at least one additional arch
> that supports kaslr. Currently, I don't see a benefit.
>
> Why are you so obstinate about this? I personally don't find that
> important at all; it's something that can always be sorted out once more
> archs start supporting kaslr.
>

Ohhh... Previously, IMO, putting the relevant function address adjustment
into arch-specific code seemed clearer to review and understand.

Now that I understand what you actually want from the commit above, I am
fine with it.

Thanks
Minfei

> Thanks,
>
> --
> Jiri Kosina
> SUSE Labs

2015-04-28 12:09:05

by Josh Poimboeuf

Subject: Re: [PATCH 1/2] x86: introduce kaslr_offset()

On Mon, Apr 27, 2015 at 04:28:07PM +0200, Jiri Kosina wrote:
> Offset that has been chosen for kaslr during kernel decompression can be
> easily computed as a difference between _text and __START_KERNEL. We are
> already making use of this in dump_kernel_offset() notifier.
>
> Introduce kaslr_offset() that makes this computation instead of
> hard-coding it, so that other kernel code (such as live patching) can make
> use of it.
>
> Signed-off-by: Jiri Kosina <[email protected]>
> ---
> arch/x86/include/asm/setup.h | 6 ++++++
> arch/x86/kernel/setup.c | 2 +-
> 2 files changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/include/asm/setup.h b/arch/x86/include/asm/setup.h
> index f69e06b..785ac2f 100644
> --- a/arch/x86/include/asm/setup.h
> +++ b/arch/x86/include/asm/setup.h
> @@ -65,12 +65,18 @@ static inline void x86_ce4100_early_setup(void) { }
> * This is set up by the setup-routine at boot-time
> */
> extern struct boot_params boot_params;
> +extern char _text[];
>
> static inline bool kaslr_enabled(void)
> {
> return !!(boot_params.hdr.loadflags & KASLR_FLAG);
> }
>
> +static inline unsigned long kaslr_offset(void)
> +{
> + return (unsigned long)&_text - __START_KERNEL;
> +}
> +
> /*
> * Do NOT EVER look at the BIOS memory size location.
> * It does not work on many machines.
> diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
> index d74ac33..5056d3c 100644
> --- a/arch/x86/kernel/setup.c
> +++ b/arch/x86/kernel/setup.c
> @@ -834,7 +834,7 @@ dump_kernel_offset(struct notifier_block *self, unsigned long v, void *p)
> {
> if (kaslr_enabled()) {
> pr_emerg("Kernel Offset: 0x%lx from 0x%lx (relocation range: 0x%lx-0x%lx)\n",
> - (unsigned long)&_text - __START_KERNEL,
> + kaslr_offset(),
> __START_KERNEL,
> __START_KERNEL_map,
> MODULES_VADDR-1);

It looks like kaslr_offset() can also be used by
arch_crash_save_vmcoreinfo() in machine_kexec_64.c.

--
Josh

2015-04-28 12:09:23

by Josh Poimboeuf

Subject: Re: [PATCH 2/2] livepatch: x86: make kASLR logic more accurate

On Mon, Apr 27, 2015 at 04:28:58PM +0200, Jiri Kosina wrote:
> We give up old_addr hint from the coming patch module in cases when kernel
> load base has been randomized (as in such case, the coming module has no
> idea about the exact randomization offset).
>
> We are currently too pessimistic, and give up immediately as soon as
> CONFIG_RANDOMIZE_BASE is set; this doesn't however directly imply that the
> load base has actually been randomized. There are config options that
> disable kASLR (such as hibernation), user could have disabled kaslr on
> kernel command-line, etc.
>
> The loader propagates the information whether kernel has been randomized
> through bootparams. This allows us to have the condition more accurate.
>
> On top of that, it seems unnecessary to give up old_addr hints even if
> randomization is active. The relocation offset can be computed using
> kaslr_offset(), and therefore old_addr can be adjusted accordingly.
>
> Signed-off-by: Jiri Kosina <[email protected]>

Acked-by: Josh Poimboeuf <[email protected]>

> ---
> arch/x86/include/asm/livepatch.h | 1 +
> kernel/livepatch/core.c | 5 +++--
> 2 files changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/include/asm/livepatch.h b/arch/x86/include/asm/livepatch.h
> index 2d29197..19c099a 100644
> --- a/arch/x86/include/asm/livepatch.h
> +++ b/arch/x86/include/asm/livepatch.h
> @@ -21,6 +21,7 @@
> #ifndef _ASM_X86_LIVEPATCH_H
> #define _ASM_X86_LIVEPATCH_H
>
> +#include <asm/setup.h>
> #include <linux/module.h>
> #include <linux/ftrace.h>
>
> diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
> index 284e269..0e7c23c 100644
> --- a/kernel/livepatch/core.c
> +++ b/kernel/livepatch/core.c
> @@ -234,8 +234,9 @@ static int klp_find_verify_func_addr(struct klp_object *obj,
> int ret;
>
> #if defined(CONFIG_RANDOMIZE_BASE)
> - /* KASLR is enabled, disregard old_addr from user */
> - func->old_addr = 0;
> + /* If KASLR has been enabled, adjust old_addr accordingly */
> + if (kaslr_enabled() && func->old_addr)
> + func->old_addr += kaslr_offset();
> #endif
>
> if (!func->old_addr || klp_is_module(obj))
> --
> Jiri Kosina
> SUSE Labs

--
Josh

2015-04-28 15:15:40

by Jiri Kosina

Subject: [PATCH v2 1/2] x86: introduce kaslr_offset()

The offset that has been chosen for kASLR during kernel decompression can
be easily computed as the difference between _text and __START_KERNEL. We
are already making use of this in the dump_kernel_offset() notifier and in
arch_crash_save_vmcoreinfo().

Introduce kaslr_offset(), which performs this computation instead of
hard-coding it, so that other kernel code (such as live patching) can make
use of it. Also convert the existing users to it.

Signed-off-by: Jiri Kosina <[email protected]>
---

It'd be great to potentially have Ack from x86 guys for this patch so that
I could take it through livepatching.git with the depending 2/2 patch.
Thanks.

v1 -> v2: convert arch_crash_save_vmcoreinfo(), as spotted by Josh
Poimboeuf.

arch/x86/include/asm/setup.h | 6 ++++++
arch/x86/kernel/machine_kexec_64.c | 3 ++-
arch/x86/kernel/setup.c | 2 +-
3 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/setup.h b/arch/x86/include/asm/setup.h
index f69e06b..785ac2f 100644
--- a/arch/x86/include/asm/setup.h
+++ b/arch/x86/include/asm/setup.h
@@ -65,12 +65,18 @@ static inline void x86_ce4100_early_setup(void) { }
* This is set up by the setup-routine at boot-time
*/
extern struct boot_params boot_params;
+extern char _text[];

static inline bool kaslr_enabled(void)
{
return !!(boot_params.hdr.loadflags & KASLR_FLAG);
}

+static inline unsigned long kaslr_offset(void)
+{
+ return (unsigned long)&_text - __START_KERNEL;
+}
+
/*
* Do NOT EVER look at the BIOS memory size location.
* It does not work on many machines.
diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
index 415480d..e102963 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -25,6 +25,7 @@
#include <asm/io_apic.h>
#include <asm/debugreg.h>
#include <asm/kexec-bzimage64.h>
+#include <asm/setup.h>

#ifdef CONFIG_KEXEC_FILE
static struct kexec_file_ops *kexec_file_loaders[] = {
@@ -334,7 +335,7 @@ void arch_crash_save_vmcoreinfo(void)
VMCOREINFO_LENGTH(node_data, MAX_NUMNODES);
#endif
vmcoreinfo_append_str("KERNELOFFSET=%lx\n",
- (unsigned long)&_text - __START_KERNEL);
+ kaslr_offset());
}

/* arch-dependent functionality related to kexec file-based syscall */
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index d74ac33..5056d3c 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -834,7 +834,7 @@ dump_kernel_offset(struct notifier_block *self, unsigned long v, void *p)
{
if (kaslr_enabled()) {
pr_emerg("Kernel Offset: 0x%lx from 0x%lx (relocation range: 0x%lx-0x%lx)\n",
- (unsigned long)&_text - __START_KERNEL,
+ kaslr_offset(),
__START_KERNEL,
__START_KERNEL_map,
MODULES_VADDR-1);

--
Jiri Kosina
SUSE Labs

2015-04-28 15:57:18

by Jiri Kosina

Subject: Re: [PATCH v2 1/2] x86: introduce kaslr_offset()

On Tue, 28 Apr 2015, Jiri Kosina wrote:

> Offset that has been chosen for kaslr during kernel decompression can be
> easily computed as a difference between _text and __START_KERNEL. We are
> already making use of this in dump_kernel_offset() notifier and in
> arch_crash_save_vmcoreinfo().
>
> Introduce kaslr_offset() that makes this computation instead of
> hard-coding it, so that other kernel code (such as live patching) can make
> use of it. Also convert existing users to make use of it.
>
> Signed-off-by: Jiri Kosina <[email protected]>
> ---
>
> It'd be great to potentially have Ack from x86 guys for this patch so that
> I could take it through livepatching.git with the depending 2/2 patch.
> Thanks.
>
> v1 -> v2: convert arch_crash_save_vmcoreinfo(), as spotted by Josh
> Poimboeuf.

FWIW this patch is an equivalent transformation without any effects on the
resulting code:

$ diff -u vmlinux.old.asm vmlinux.new.asm
--- vmlinux.old.asm 2015-04-28 17:55:19.520983368 +0200
+++ vmlinux.new.asm 2015-04-28 17:55:24.141206072 +0200
@@ -1,5 +1,5 @@

-vmlinux.old: file format elf64-x86-64
+vmlinux.new: file format elf64-x86-64


Disassembly of section .text:
$
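(The check above could be reproduced along these lines; the kernel build itself is out of reach here, so a miniature stand-in demonstrates the same technique: two objects differing only in a comment disassemble identically once the file-name header is dropped.)

```shell
# Two sources that differ only in a comment, standing in for the
# pre- and post-refactoring kernels.
printf 'int f(void){return 42;} /* old */\n' > old.c
printf 'int f(void){return 42;} /* new */\n' > new.c
gcc -O2 -c old.c -o old.o
gcc -O2 -c new.c -o new.o

# Disassemble both; tail -n +3 drops the "file format" header so only
# real code differences would survive the diff.
objdump -d old.o | tail -n +3 > old.asm
objdump -d new.o | tail -n +3 > new.asm

# An empty diff means the change was an equivalent transformation.
diff -u old.asm new.asm && echo "equivalent transformation"
```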

--
Jiri Kosina
SUSE Labs

2015-04-28 16:00:19

by Borislav Petkov

Subject: Re: [PATCH v2 1/2] x86: introduce kaslr_offset()

On Tue, Apr 28, 2015 at 05:57:14PM +0200, Jiri Kosina wrote:
> On Tue, 28 Apr 2015, Jiri Kosina wrote:
>
> > Offset that has been chosen for kaslr during kernel decompression can be
> > easily computed as a difference between _text and __START_KERNEL. We are
> > already making use of this in dump_kernel_offset() notifier and in
> > arch_crash_save_vmcoreinfo().
> >
> > Introduce kaslr_offset() that makes this computation instead of
> > hard-coding it, so that other kernel code (such as live patching) can make
> > use of it. Also convert existing users to make use of it.
> >
> > Signed-off-by: Jiri Kosina <[email protected]>
> > ---
> >
> > It'd be great to potentially have Ack from x86 guys for this patch so that
> > I could take it through livepatching.git with the depending 2/2 patch.
> > Thanks.
> >
> > v1 -> v2: convert arch_crash_save_vmcoreinfo(), as spotted by Josh
> > Poimboeuf.
>
> FWIW this patch is an equivalent transformation without any effects on the
> resulting code:
>
> $ diff -u vmlinux.old.asm vmlinux.new.asm
> --- vmlinux.old.asm 2015-04-28 17:55:19.520983368 +0200
> +++ vmlinux.new.asm 2015-04-28 17:55:24.141206072 +0200
> @@ -1,5 +1,5 @@
>
> -vmlinux.old: file format elf64-x86-64
> +vmlinux.new: file format elf64-x86-64
>
>
> Disassembly of section .text:
> $

Then those are easy. Please add that piece of information to the commit
message.

With that:

Acked-by: Borislav Petkov <[email protected]>

--
Regards/Gruss,
Boris.

ECO tip #101: Trim your mails when you reply.
--

2015-04-29 15:01:35

by Jiri Kosina

Subject: Re: [PATCH v2 1/2] x86: introduce kaslr_offset()

On Tue, 28 Apr 2015, Borislav Petkov wrote:

> > > Offset that has been chosen for kaslr during kernel decompression can be
> > > easily computed as a difference between _text and __START_KERNEL. We are
> > > already making use of this in dump_kernel_offset() notifier and in
> > > arch_crash_save_vmcoreinfo().
> > >
> > > Introduce kaslr_offset() that makes this computation instead of
> > > hard-coding it, so that other kernel code (such as live patching) can make
> > > use of it. Also convert existing users to make use of it.
> > >
> > > Signed-off-by: Jiri Kosina <[email protected]>
> > > ---
> > >
> > > It'd be great to potentially have Ack from x86 guys for this patch so that
> > > I could take it through livepatching.git with the depending 2/2 patch.
> > > Thanks.
> > >
> > > v1 -> v2: convert arch_crash_save_vmcoreinfo(), as spotted by Josh
> > > Poimboeuf.
> >
> > FWIW this patch is an equivalent transformation without any effects on the
> > resulting code:
> >
> > $ diff -u vmlinux.old.asm vmlinux.new.asm
> > --- vmlinux.old.asm 2015-04-28 17:55:19.520983368 +0200
> > +++ vmlinux.new.asm 2015-04-28 17:55:24.141206072 +0200
> > @@ -1,5 +1,5 @@
> >
> > -vmlinux.old: file format elf64-x86-64
> > +vmlinux.new: file format elf64-x86-64
> >
> >
> > Disassembly of section .text:
> > $
>
> Then those are easy. Please add that piece of information to the commit
> message.
>
> With that:
>
> Acked-by: Borislav Petkov <[email protected]>

Applied to livepatching.git#for-4.2/kaslr. Thanks,

--
Jiri Kosina
SUSE Labs

2015-04-29 14:57:03

by Jiri Kosina

Subject: Re: [PATCH 2/2] livepatch: x86: make kASLR logic more accurate

On Tue, 28 Apr 2015, Josh Poimboeuf wrote:

> On Mon, Apr 27, 2015 at 04:28:58PM +0200, Jiri Kosina wrote:
> > We give up old_addr hint from the coming patch module in cases when kernel
> > load base has been randomized (as in such case, the coming module has no
> > idea about the exact randomization offset).
> >
> > We are currently too pessimistic, and give up immediately as soon as
> > CONFIG_RANDOMIZE_BASE is set; this doesn't however directly imply that the
> > load base has actually been randomized. There are config options that
> > disable kASLR (such as hibernation), user could have disabled kaslr on
> > kernel command-line, etc.
> >
> > The loader propagates the information whether kernel has been randomized
> > through bootparams. This allows us to have the condition more accurate.
> >
> > On top of that, it seems unnecessary to give up old_addr hints even if
> > randomization is active. The relocation offset can be computed using
> > kaslr_offset(), and therefore old_addr can be adjusted accordingly.
> >
> > Signed-off-by: Jiri Kosina <[email protected]>
>
> Acked-by: Josh Poimboeuf <[email protected]>

Applied to for-4.2/kaslr.

Thanks,

--
Jiri Kosina
SUSE Labs

2015-04-29 16:16:26

by Jiri Kosina

Subject: Re: [PATCH v2 1/2] x86: introduce kaslr_offset()

On Wed, 29 Apr 2015, Jiri Kosina wrote:

> > Acked-by: Borislav Petkov <[email protected]>
>
> Applied to livepatching.git#for-4.2/kaslr. Thanks,

Fengguang's buildbot reported a randconfig build breakage caused by this
patch. The fix below is necessary on top.




From: Jiri Kosina <[email protected]>
Subject: [PATCH] x86: kaslr: fix build due to missing ALIGN definition

Fengguang's bot reported that 4545c898 ("x86: introduce kaslr_offset()")
broke the randconfig build:

In file included from arch/x86/xen/vga.c:5:0:
arch/x86/include/asm/setup.h: In function 'kaslr_offset':
>> arch/x86/include/asm/setup.h:77:2: error: implicit declaration of function 'ALIGN' [-Werror=implicit-function-declaration]
return (unsigned long)&_text - __START_KERNEL;
^
Fix that by making setup.h self-sufficient: explicitly include
linux/kernel.h, which provides ALIGN() (used in the expansion of
__START_KERNEL).

Reported-by: [email protected]
Signed-off-by: Jiri Kosina <[email protected]>
---
arch/x86/include/asm/setup.h | 1 +
1 file changed, 1 insertion(+)

diff --git a/arch/x86/include/asm/setup.h b/arch/x86/include/asm/setup.h
index 785ac2f..11af24e 100644
--- a/arch/x86/include/asm/setup.h
+++ b/arch/x86/include/asm/setup.h
@@ -60,6 +60,7 @@ static inline void x86_ce4100_early_setup(void) { }
#ifndef _SETUP

#include <asm/espfix.h>
+#include <linux/kernel.h>

/*
* This is set up by the setup-routine at boot-time

--
Jiri Kosina
SUSE Labs