2020-01-31 13:37:02

by Christophe Leroy

Subject: [PATCH v2 2/7] powerpc/kprobes: Mark newly allocated probes as RO

With CONFIG_STRICT_KERNEL_RWX=y and CONFIG_KPROBES=y, there will be one
W+X page at boot by default. This can be tested with
CONFIG_PPC_PTDUMP=y and CONFIG_PPC_DEBUG_WX=y set, and checking the
kernel log during boot.

powerpc doesn't implement its own alloc() for kprobes like other
architectures do, but we couldn't immediately mark the page RO anyway,
since we memcpy() into it later. After that, nothing should be allowed
to modify the page, and write permissions are removed well before the
kprobe is armed.

Once the page is marked RO, the memcpy() would fault when a second
probe is allocated in the same page, so use patch_instruction()
instead, which can safely write to RO memory.

Reviewed-by: Daniel Axtens <[email protected]>
Signed-off-by: Russell Currey <[email protected]>
Signed-off-by: Christophe Leroy <[email protected]>
---
v2: removed the redundant flush
---
arch/powerpc/kernel/kprobes.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index 2d27ec4feee4..d3e594e6094c 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ -24,6 +24,7 @@
#include <asm/sstep.h>
#include <asm/sections.h>
#include <linux/uaccess.h>
+#include <linux/set_memory.h>

DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
@@ -124,13 +125,12 @@ int arch_prepare_kprobe(struct kprobe *p)
}

if (!ret) {
- memcpy(p->ainsn.insn, p->addr,
- MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
+ patch_instruction(p->ainsn.insn, *p->addr);
p->opcode = *p->addr;
- flush_icache_range((unsigned long)p->ainsn.insn,
- (unsigned long)p->ainsn.insn + sizeof(kprobe_opcode_t));
}

+ set_memory_ro((unsigned long)p->ainsn.insn, 1);
+
p->ainsn.boostable = 0;
return ret;
}
--
2.25.0


2020-02-03 06:46:56

by Russell Currey

Subject: Re: [PATCH v2 2/7] powerpc/kprobes: Mark newly allocated probes as RO

On Fri, 2020-01-31 at 13:34 +0000, Christophe Leroy wrote:
> With CONFIG_STRICT_KERNEL_RWX=y and CONFIG_KPROBES=y, there will be
> one
> W+X page at boot by default. This can be tested with
> CONFIG_PPC_PTDUMP=y and CONFIG_PPC_DEBUG_WX=y set, and checking the
> kernel log during boot.
>
> powerpc doesn't implement its own alloc() for kprobes like other
> architectures do, but we couldn't immediately mark RO anyway since we
> do
> a memcpy to the page we allocate later. After that, nothing should
> be
> allowed to modify the page, and write permissions are removed well
> before the kprobe is armed.
>
> The memcpy() would fail if >1 probes were allocated, so use
> patch_instruction() instead which is safe for RO.
>
> Reviewed-by: Daniel Axtens <[email protected]>
> Signed-off-by: Russell Currey <[email protected]>
> Signed-off-by: Christophe Leroy <[email protected]>
> ---
> v2: removed the redundant flush
> ---
> arch/powerpc/kernel/kprobes.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
> index 2d27ec4feee4..d3e594e6094c 100644
> --- a/arch/powerpc/kernel/kprobes.c
> +++ b/arch/powerpc/kernel/kprobes.c
> @@ -24,6 +24,7 @@
> #include <asm/sstep.h>
> #include <asm/sections.h>
> #include <linux/uaccess.h>
> +#include <linux/set_memory.h>
>
> DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
> DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
> @@ -124,13 +125,12 @@ int arch_prepare_kprobe(struct kprobe *p)
> }
>
> if (!ret) {
> - memcpy(p->ainsn.insn, p->addr,
> - MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
> + patch_instruction(p->ainsn.insn, *p->addr);
> p->opcode = *p->addr;
> - flush_icache_range((unsigned long)p->ainsn.insn,
> - (unsigned long)p->ainsn.insn + sizeof(kprobe_opcode_t));
> }
>
> + set_memory_ro((unsigned long)p->ainsn.insn, 1);
> +


Since this can be called multiple times on the same page, we can avoid
that by implementing:

void *alloc_insn_page(void)
{
void *page;

page = vmalloc_exec(PAGE_SIZE);
if (page)
set_memory_ro((unsigned long)page, 1);

return page;
}

Which is pretty much the same as what's in arm64. Works for me and
passes ftracetest. I was originally doing this but cut it because it
broke with the memcpy; it works with patch_instruction().

> p->ainsn.boostable = 0;
> return ret;
> }

2020-02-03 08:36:10

by Christophe Leroy

Subject: Re: [PATCH v2 2/7] powerpc/kprobes: Mark newly allocated probes as RO



On 03/02/2020 at 05:50, Russell Currey wrote:
> On Fri, 2020-01-31 at 13:34 +0000, Christophe Leroy wrote:
>> With CONFIG_STRICT_KERNEL_RWX=y and CONFIG_KPROBES=y, there will be
>> one
>> W+X page at boot by default. This can be tested with
>> CONFIG_PPC_PTDUMP=y and CONFIG_PPC_DEBUG_WX=y set, and checking the
>> kernel log during boot.
>>
>> powerpc doesn't implement its own alloc() for kprobes like other
>> architectures do, but we couldn't immediately mark RO anyway since we
>> do
>> a memcpy to the page we allocate later. After that, nothing should
>> be
>> allowed to modify the page, and write permissions are removed well
>> before the kprobe is armed.
>>
>> The memcpy() would fail if >1 probes were allocated, so use
>> patch_instruction() instead which is safe for RO.
>>
>> Reviewed-by: Daniel Axtens <[email protected]>
>> Signed-off-by: Russell Currey <[email protected]>
>> Signed-off-by: Christophe Leroy <[email protected]>
>> ---
>> v2: removed the redundant flush
>> ---
>> arch/powerpc/kernel/kprobes.c | 8 ++++----
>> 1 file changed, 4 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
>> index 2d27ec4feee4..d3e594e6094c 100644
>> --- a/arch/powerpc/kernel/kprobes.c
>> +++ b/arch/powerpc/kernel/kprobes.c
>> @@ -24,6 +24,7 @@
>> #include <asm/sstep.h>
>> #include <asm/sections.h>
>> #include <linux/uaccess.h>
>> +#include <linux/set_memory.h>
>>
>> DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
>> DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
>> @@ -124,13 +125,12 @@ int arch_prepare_kprobe(struct kprobe *p)
>> }
>>
>> if (!ret) {
>> - memcpy(p->ainsn.insn, p->addr,
>> - MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
>> + patch_instruction(p->ainsn.insn, *p->addr);
>> p->opcode = *p->addr;
>> - flush_icache_range((unsigned long)p->ainsn.insn,
>> - (unsigned long)p->ainsn.insn + sizeof(kprobe_opcode_t));
>> }
>>
>> + set_memory_ro((unsigned long)p->ainsn.insn, 1);
>> +
>
>
> Since this can be called multiple times on the same page, we can
> avoid that by implementing:
>
> void *alloc_insn_page(void)
> {
> void *page;
>
> page = vmalloc_exec(PAGE_SIZE);
> if (page)
> set_memory_ro((unsigned long)page, 1);
>
> return page;
> }
>
> Which is pretty much the same as what's in arm64. Works for me and
> passes ftracetest. I was originally doing this but cut it because it
> broke with the memcpy; it works with patch_instruction().
>
>> p->ainsn.boostable = 0;
>> return ret;
>> }

Ok. I'll send out a v3, since patch 1 fails on PPC64, and I'll take
that in.

Christophe