2020-07-04 03:35:10

by Guo Ren

Subject: [PATCH V1 0/5] riscv: Add k/uprobe support

From: Guo Ren <[email protected]>

The patchset includes kprobe/uprobe support and some related fixups.
Patrick provided the HAVE_REGS_AND_STACK_ACCESS_API support and some
of the kprobe code. The k/uprobe framework is taken from csky but also
refers to other arches' implementations.

There is no single-step exception in the riscv ISA, so we use ebreak
to simulate one. Some pc-relative instructions can't be executed out
of line, and some system/fence instructions can't be a trace site at
all, so we provide a reject list and a simulate list in decode-insn.c.

You can use uprobes to test the simulation code like this:

echo 'p:enter_current_state_one /hello:0x6e4 a0=%a0 a1=%a1' >> /sys/kernel/debug/tracing/uprobe_events
echo 1 > /sys/kernel/debug/tracing/events/uprobes/enable
/hello
^C
cat /sys/kernel/debug/tracing/trace
tracer: nop

entries-in-buffer/entries-written: 1/1 #P:1

_-----=> irqs-off
/ _----=> need-resched
| / _---=> hardirq/softirq
|| / _--=> preempt-depth
||| / delay
TASK-PID CPU# |||| TIMESTAMP FUNCTION
| | | |||| | |
hello-94 [000] d... 55.404242: enter_current_state_one: (0x106e4) a0=0x1 a1=0x3fffa8ada8

Note that /hello:0x6e4 is the file offset in the ELF, which corresponds
to 0x106e4 in memory; hello is your target ELF program.
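One way to compute such an offset is from the readelf output: for a
simple non-PIE binary with a single LOAD segment, the uprobe offset is
the symbol's virtual address minus the segment's p_vaddr, plus its
p_offset. A sketch with illustrative values (the segment numbers below
are assumptions for this example, not taken from the hello binary):

```shell
vaddr=0x106e4      # symbol address in memory, as seen in the trace above
seg_vaddr=0x10000  # LOAD segment p_vaddr from 'readelf -l hello' (assumed)
seg_off=0x0        # LOAD segment p_offset (assumed)
# file offset = vaddr - p_vaddr + p_offset
printf '0x%x\n' $(( vaddr - seg_vaddr + seg_off ))
```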

Try kprobes like this:

echo 'p:myprobe _do_fork dfd=%a0 filename=%a1 flags=%a2 mode=+4($stack)' > /sys/kernel/debug/tracing/kprobe_events
echo 'r:myretprobe _do_fork $retval' >> /sys/kernel/debug/tracing/kprobe_events

echo 1 >/sys/kernel/debug/tracing/events/kprobes/enable
cat /sys/kernel/debug/tracing/trace
tracer: nop

entries-in-buffer/entries-written: 2/2 #P:1

_-----=> irqs-off
/ _----=> need-resched
| / _---=> hardirq/softirq
|| / _--=> preempt-depth
||| / delay
TASK-PID CPU# |||| TIMESTAMP FUNCTION
| | | |||| | |
sh-92 [000] .n.. 131.804230: myprobe: (_do_fork+0x0/0x2e6) dfd=0xffffffe03929fdf8 filename=0x0 flags=0x101000 mode=0x1200000ffffffe0
sh-92 [000] d... 131.806607: myretprobe: (__do_sys_clone+0x70/0x82 <- _do_fork) arg1=0x5f

Guo Ren (4):
riscv: Fixup __vdso_gettimeofday breaking dynamic ftrace
riscv: Fixup compile error BUILD_BUG_ON failed
riscv: Add kprobes support
riscv: Add uprobes support

Patrick Stählin (1):
RISC-V: Implement ptrace regs and stack API

arch/riscv/Kconfig | 6 +
arch/riscv/include/asm/kprobes.h | 40 +++
arch/riscv/include/asm/probes.h | 24 ++
arch/riscv/include/asm/processor.h | 1 +
arch/riscv/include/asm/ptrace.h | 29 ++
arch/riscv/include/asm/thread_info.h | 4 +-
arch/riscv/include/asm/uprobes.h | 40 +++
arch/riscv/kernel/Makefile | 1 +
arch/riscv/kernel/patch.c | 8 +-
arch/riscv/kernel/probes/Makefile | 5 +
arch/riscv/kernel/probes/decode-insn.c | 48 +++
arch/riscv/kernel/probes/decode-insn.h | 18 +
arch/riscv/kernel/probes/kprobes.c | 471 ++++++++++++++++++++++++++
arch/riscv/kernel/probes/kprobes_trampoline.S | 93 +++++
arch/riscv/kernel/probes/simulate-insn.c | 85 +++++
arch/riscv/kernel/probes/simulate-insn.h | 47 +++
arch/riscv/kernel/probes/uprobes.c | 186 ++++++++++
arch/riscv/kernel/ptrace.c | 99 ++++++
arch/riscv/kernel/signal.c | 3 +
arch/riscv/kernel/traps.c | 19 ++
arch/riscv/kernel/vdso/Makefile | 3 +
arch/riscv/mm/fault.c | 11 +
22 files changed, 1238 insertions(+), 3 deletions(-)
create mode 100644 arch/riscv/include/asm/probes.h
create mode 100644 arch/riscv/include/asm/uprobes.h
create mode 100644 arch/riscv/kernel/probes/Makefile
create mode 100644 arch/riscv/kernel/probes/decode-insn.c
create mode 100644 arch/riscv/kernel/probes/decode-insn.h
create mode 100644 arch/riscv/kernel/probes/kprobes.c
create mode 100644 arch/riscv/kernel/probes/kprobes_trampoline.S
create mode 100644 arch/riscv/kernel/probes/simulate-insn.c
create mode 100644 arch/riscv/kernel/probes/simulate-insn.h
create mode 100644 arch/riscv/kernel/probes/uprobes.c

--
2.7.4


2020-07-04 03:35:20

by Guo Ren

Subject: [PATCH V1 1/5] riscv: Fixup __vdso_gettimeofday breaking dynamic ftrace

From: Guo Ren <[email protected]>

For linux-5.8-rc1, enabling ftrace on riscv causes a boot panic:

[ 2.388980] Run /sbin/init as init process
[ 2.529938] init[39]: unhandled signal 4 code 0x1 at 0x0000003ff449e000
[ 2.531078] CPU: 0 PID: 39 Comm: init Not tainted 5.8.0-rc1-dirty #13
[ 2.532719] epc: 0000003ff449e000 ra : 0000003ff449e954 sp : 0000003fffedb900
[ 2.534005] gp : 00000000000e8528 tp : 0000003ff449d800 t0 : 000000000000001e
[ 2.534965] t1 : 000000000000000a t2 : 0000003fffedb89e s0 : 0000003fffedb920
[ 2.536279] s1 : 0000003fffedb940 a0 : 0000003ff43d4b2c a1 : 0000000000000000
[ 2.537334] a2 : 0000000000000001 a3 : 0000000000000000 a4 : fffffffffbad8000
[ 2.538466] a5 : 0000003ff449e93a a6 : 0000000000000000 a7 : 0000000000000000
[ 2.539511] s2 : 0000000000000000 s3 : 0000003ff448412c s4 : 0000000000000010
[ 2.541260] s5 : 0000000000000016 s6 : 00000000000d0a30 s7 : 0000003fffedba70
[ 2.542152] s8 : 0000000000000000 s9 : 0000000000000000 s10: 0000003fffedb960
[ 2.543335] s11: 0000000000000000 t3 : 0000000000000000 t4 : 0000003fffedb8a0
[ 2.544471] t5 : 0000000000000000 t6 : 0000000000000000
[ 2.545730] status: 0000000000004020 badaddr: 00000000464c457f cause: 0000000000000002
[ 2.549867] Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000004
[ 2.551267] CPU: 0 PID: 1 Comm: init Not tainted 5.8.0-rc1-dirty #13
[ 2.552061] Call Trace:
[ 2.552626] [<ffffffe00020374a>] walk_stackframe+0x0/0xc4
[ 2.553486] [<ffffffe0002039f4>] show_stack+0x40/0x4c
[ 2.553995] [<ffffffe00054a6ae>] dump_stack+0x7a/0x98
[ 2.554615] [<ffffffe00020b9b8>] panic+0x114/0x2f4
[ 2.555395] [<ffffffe00020ebd6>] do_exit+0x89c/0x8c2
[ 2.555949] [<ffffffe00020f930>] do_group_exit+0x3a/0x90
[ 2.556715] [<ffffffe000219e08>] get_signal+0xe2/0x6e6
[ 2.557388] [<ffffffe000202d72>] do_notify_resume+0x6a/0x37a
[ 2.558089] [<ffffffe000201c16>] ret_from_exception+0x0/0xc

"ra:0x3ff449e954" is the return address of the "call _mcount" in the
prologue of __vdso_gettimeofday(). Without proper relocation, the pc
jumps to 0x0000003ff449e000 (the vdso map base) and takes an illegal
instruction trap.

The solution comes from arch/arm64/kernel/vdso/Makefile:

CFLAGS_REMOVE_vgettimeofday.o = $(CC_FLAGS_FTRACE) -Os $(CC_FLAGS_SCS)

- CC_FLAGS_SCS is Clang's ShadowCallStack feature; it is only
implemented for arm64 and is not needed for riscv.

The bug comes from the following commit:

ad5d1122b82f ("riscv: use vDSO common flow to reduce the latency of the time-related functions")

Signed-off-by: Guo Ren <[email protected]>
Cc: Vincent Chen <[email protected]>
Cc: Atish Patra <[email protected]>
Cc: Palmer Dabbelt <[email protected]>
Cc: Alan Kao <[email protected]>
Cc: Greentime Hu <[email protected]>
---
arch/riscv/kernel/vdso/Makefile | 3 +++
1 file changed, 3 insertions(+)

diff --git a/arch/riscv/kernel/vdso/Makefile b/arch/riscv/kernel/vdso/Makefile
index 38ba55b..3079935 100644
--- a/arch/riscv/kernel/vdso/Makefile
+++ b/arch/riscv/kernel/vdso/Makefile
@@ -27,6 +27,9 @@ obj-vdso := $(addprefix $(obj)/, $(obj-vdso))
obj-y += vdso.o vdso-syms.o
CPPFLAGS_vdso.lds += -P -C -U$(ARCH)

+# Disable -pg to prevent ftrace from inserting call sites
+CFLAGS_REMOVE_vgettimeofday.o = $(CC_FLAGS_FTRACE) -Os
+
# Disable gcov profiling for VDSO code
GCOV_PROFILE := n

--
2.7.4

2020-07-04 03:35:37

by Guo Ren

Subject: [PATCH V1 2/5] RISC-V: Implement ptrace regs and stack API

From: Patrick Stählin <[email protected]>

Needed for kprobes support. Copied and adapted from the arm64 code.

Guo Ren fixed up the pt_regs type for linux-5.8-rc1.

Signed-off-by: Patrick Stählin <[email protected]>
Signed-off-by: Guo Ren <[email protected]>
---
arch/riscv/Kconfig | 1 +
arch/riscv/include/asm/ptrace.h | 29 ++++++++++++
arch/riscv/kernel/ptrace.c | 99 +++++++++++++++++++++++++++++++++++++++++
3 files changed, 129 insertions(+)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 128192e..58d6f66 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -76,6 +76,7 @@ config RISCV
select SPARSE_IRQ
select SYSCTL_EXCEPTION_TRACE
select THREAD_INFO_IN_TASK
+ select HAVE_REGS_AND_STACK_ACCESS_API

config ARCH_MMAP_RND_BITS_MIN
default 18 if 64BIT
diff --git a/arch/riscv/include/asm/ptrace.h b/arch/riscv/include/asm/ptrace.h
index ee49f80..23372bb 100644
--- a/arch/riscv/include/asm/ptrace.h
+++ b/arch/riscv/include/asm/ptrace.h
@@ -8,6 +8,7 @@

#include <uapi/asm/ptrace.h>
#include <asm/csr.h>
+#include <linux/compiler.h>

#ifndef __ASSEMBLY__

@@ -60,6 +61,7 @@ struct pt_regs {

#define user_mode(regs) (((regs)->status & SR_PP) == 0)

+#define MAX_REG_OFFSET offsetof(struct pt_regs, orig_a0)

/* Helpers for working with the instruction pointer */
static inline unsigned long instruction_pointer(struct pt_regs *regs)
@@ -85,6 +87,12 @@ static inline void user_stack_pointer_set(struct pt_regs *regs,
regs->sp = val;
}

+/* Valid only for Kernel mode traps. */
+static inline unsigned long kernel_stack_pointer(struct pt_regs *regs)
+{
+ return regs->sp;
+}
+
/* Helpers for working with the frame pointer */
static inline unsigned long frame_pointer(struct pt_regs *regs)
{
@@ -101,6 +109,27 @@ static inline unsigned long regs_return_value(struct pt_regs *regs)
return regs->a0;
}

+extern int regs_query_register_offset(const char *name);
+extern unsigned long regs_get_kernel_stack_nth(struct pt_regs *regs,
+ unsigned int n);
+
+/**
+ * regs_get_register() - get register value from its offset
+ * @regs: pt_regs from which register value is gotten
+ * @offset: offset of the register.
+ *
+ * regs_get_register() returns the value of the register at @offset from @regs.
+ * The @offset is the offset of the register in struct pt_regs.
+ * If @offset is bigger than MAX_REG_OFFSET, this returns 0.
+ */
+static inline unsigned long regs_get_register(struct pt_regs *regs,
+ unsigned int offset)
+{
+ if (unlikely(offset > MAX_REG_OFFSET))
+ return 0;
+
+ return *(unsigned long *)((unsigned long)regs + offset);
+}
#endif /* __ASSEMBLY__ */

#endif /* _ASM_RISCV_PTRACE_H */
diff --git a/arch/riscv/kernel/ptrace.c b/arch/riscv/kernel/ptrace.c
index 444dc7b..a11c692 100644
--- a/arch/riscv/kernel/ptrace.c
+++ b/arch/riscv/kernel/ptrace.c
@@ -125,6 +125,105 @@ const struct user_regset_view *task_user_regset_view(struct task_struct *task)
return &riscv_user_native_view;
}

+struct pt_regs_offset {
+ const char *name;
+ int offset;
+};
+
+#define REG_OFFSET_NAME(r) {.name = #r, .offset = offsetof(struct pt_regs, r)}
+#define REG_OFFSET_END {.name = NULL, .offset = 0}
+
+static const struct pt_regs_offset regoffset_table[] = {
+ REG_OFFSET_NAME(epc),
+ REG_OFFSET_NAME(ra),
+ REG_OFFSET_NAME(sp),
+ REG_OFFSET_NAME(gp),
+ REG_OFFSET_NAME(tp),
+ REG_OFFSET_NAME(t0),
+ REG_OFFSET_NAME(t1),
+ REG_OFFSET_NAME(t2),
+ REG_OFFSET_NAME(s0),
+ REG_OFFSET_NAME(s1),
+ REG_OFFSET_NAME(a0),
+ REG_OFFSET_NAME(a1),
+ REG_OFFSET_NAME(a2),
+ REG_OFFSET_NAME(a3),
+ REG_OFFSET_NAME(a4),
+ REG_OFFSET_NAME(a5),
+ REG_OFFSET_NAME(a6),
+ REG_OFFSET_NAME(a7),
+ REG_OFFSET_NAME(s2),
+ REG_OFFSET_NAME(s3),
+ REG_OFFSET_NAME(s4),
+ REG_OFFSET_NAME(s5),
+ REG_OFFSET_NAME(s6),
+ REG_OFFSET_NAME(s7),
+ REG_OFFSET_NAME(s8),
+ REG_OFFSET_NAME(s9),
+ REG_OFFSET_NAME(s10),
+ REG_OFFSET_NAME(s11),
+ REG_OFFSET_NAME(t3),
+ REG_OFFSET_NAME(t4),
+ REG_OFFSET_NAME(t5),
+ REG_OFFSET_NAME(t6),
+ REG_OFFSET_NAME(status),
+ REG_OFFSET_NAME(badaddr),
+ REG_OFFSET_NAME(cause),
+ REG_OFFSET_NAME(orig_a0),
+ REG_OFFSET_END,
+};
+
+/**
+ * regs_query_register_offset() - query register offset from its name
+ * @name: the name of a register
+ *
+ * regs_query_register_offset() returns the offset of a register in struct
+ * pt_regs from its name. If the name is invalid, this returns -EINVAL;
+ */
+int regs_query_register_offset(const char *name)
+{
+ const struct pt_regs_offset *roff;
+
+ for (roff = regoffset_table; roff->name != NULL; roff++)
+ if (!strcmp(roff->name, name))
+ return roff->offset;
+ return -EINVAL;
+}
+
+/**
+ * regs_within_kernel_stack() - check the address in the stack
+ * @regs: pt_regs which contains kernel stack pointer.
+ * @addr: address which is checked.
+ *
+ * regs_within_kernel_stack() checks @addr is within the kernel stack page(s).
+ * If @addr is within the kernel stack, it returns true. If not, returns false.
+ */
+static bool regs_within_kernel_stack(struct pt_regs *regs, unsigned long addr)
+{
+ return (addr & ~(THREAD_SIZE - 1)) ==
+ (kernel_stack_pointer(regs) & ~(THREAD_SIZE - 1));
+}
+
+/**
+ * regs_get_kernel_stack_nth() - get Nth entry of the stack
+ * @regs: pt_regs which contains kernel stack pointer.
+ * @n: stack entry number.
+ *
+ * regs_get_kernel_stack_nth() returns @n th entry of the kernel stack which
+ * is specified by @regs. If the @n th entry is NOT in the kernel stack,
+ * this returns 0.
+ */
+unsigned long regs_get_kernel_stack_nth(struct pt_regs *regs, unsigned int n)
+{
+ unsigned long *addr = (unsigned long *)kernel_stack_pointer(regs);
+
+ addr += n;
+ if (regs_within_kernel_stack(regs, (unsigned long)addr))
+ return *addr;
+ else
+ return 0;
+}
+
void ptrace_disable(struct task_struct *child)
{
clear_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
--
2.7.4

2020-07-04 03:36:21

by Guo Ren

Subject: [PATCH V1 4/5] riscv: Add kprobes support

From: Guo Ren <[email protected]>

This patch enables "kprobe & kretprobe" to work with the ftrace
interface. It uses a software breakpoint as the single-step
mechanism.

Some instructions which can't be single-stepped must be simulated in
a kernel execution slot, such as: branch, jal, auipc, la ...

Some instructions should be rejected for probing, and we use a
blacklist to filter them out, such as: ecall, ebreak, ...

We use ebreak & c.ebreak to replace the original instruction, and the
kprobe handler prepares an executable memory slot for out-of-line
execution with a copy of the original instruction being probed.
In the execution slot we place an ebreak behind the original
instruction to simulate a single-step mechanism.

The patch is based on packi's work [1] and csky's work [2].
- The kprobes_trampoline.S is entirely from packi's patch
- The single-step mechanism is newly designed for riscv, which has no
hw single-step trap
- The simulation code is from csky
- Frankly, all the code refers to other arches' implementations

[1] https://lore.kernel.org/linux-riscv/[email protected]/
[2] https://lore.kernel.org/linux-csky/[email protected]/

Signed-off-by: Guo Ren <[email protected]>
Co-developed-by: Patrick Stählin <[email protected]>
Cc: Patrick Stählin <[email protected]>
Cc: Masami Hiramatsu <[email protected]>
Cc: Palmer Dabbelt <[email protected]>
Cc: Björn Töpel <[email protected]>
---
arch/riscv/Kconfig | 2 +
arch/riscv/include/asm/kprobes.h | 40 +++
arch/riscv/include/asm/probes.h | 24 ++
arch/riscv/kernel/Makefile | 1 +
arch/riscv/kernel/probes/Makefile | 4 +
arch/riscv/kernel/probes/decode-insn.c | 48 +++
arch/riscv/kernel/probes/decode-insn.h | 18 +
arch/riscv/kernel/probes/kprobes.c | 471 ++++++++++++++++++++++++++
arch/riscv/kernel/probes/kprobes_trampoline.S | 93 +++++
arch/riscv/kernel/probes/simulate-insn.c | 85 +++++
arch/riscv/kernel/probes/simulate-insn.h | 47 +++
arch/riscv/kernel/traps.c | 9 +
arch/riscv/mm/fault.c | 4 +
13 files changed, 846 insertions(+)
create mode 100644 arch/riscv/include/asm/probes.h
create mode 100644 arch/riscv/kernel/probes/Makefile
create mode 100644 arch/riscv/kernel/probes/decode-insn.c
create mode 100644 arch/riscv/kernel/probes/decode-insn.h
create mode 100644 arch/riscv/kernel/probes/kprobes.c
create mode 100644 arch/riscv/kernel/probes/kprobes_trampoline.S
create mode 100644 arch/riscv/kernel/probes/simulate-insn.c
create mode 100644 arch/riscv/kernel/probes/simulate-insn.h

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 58d6f66..a295f0b 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -57,6 +57,8 @@ config RISCV
select HAVE_EBPF_JIT if MMU
select HAVE_FUTEX_CMPXCHG if FUTEX
select HAVE_GENERIC_VDSO if MMU && 64BIT
+ select HAVE_KPROBES
+ select HAVE_KRETPROBES
select HAVE_PCI
select HAVE_PERF_EVENTS
select HAVE_PERF_REGS
diff --git a/arch/riscv/include/asm/kprobes.h b/arch/riscv/include/asm/kprobes.h
index 56a98ea3..4647d38 100644
--- a/arch/riscv/include/asm/kprobes.h
+++ b/arch/riscv/include/asm/kprobes.h
@@ -11,4 +11,44 @@

#include <asm-generic/kprobes.h>

+#ifdef CONFIG_KPROBES
+#include <linux/types.h>
+#include <linux/ptrace.h>
+#include <linux/percpu.h>
+
+#define __ARCH_WANT_KPROBES_INSN_SLOT
+#define MAX_INSN_SIZE 2
+
+#define flush_insn_slot(p) do { } while (0)
+#define kretprobe_blacklist_size 0
+
+#include <asm/probes.h>
+
+struct prev_kprobe {
+ struct kprobe *kp;
+ unsigned int status;
+};
+
+/* Single step context for kprobe */
+struct kprobe_step_ctx {
+ unsigned long ss_pending;
+ unsigned long match_addr;
+};
+
+/* per-cpu kprobe control block */
+struct kprobe_ctlblk {
+ unsigned int kprobe_status;
+ unsigned long saved_status;
+ struct prev_kprobe prev_kprobe;
+ struct kprobe_step_ctx ss_ctx;
+};
+
+void arch_remove_kprobe(struct kprobe *p);
+int kprobe_fault_handler(struct pt_regs *regs, unsigned int trapnr);
+bool kprobe_breakpoint_handler(struct pt_regs *regs);
+bool kprobe_single_step_handler(struct pt_regs *regs);
+void kretprobe_trampoline(void);
+void __kprobes *trampoline_probe_handler(struct pt_regs *regs);
+
+#endif /* CONFIG_KPROBES */
#endif /* _ASM_RISCV_KPROBES_H */
diff --git a/arch/riscv/include/asm/probes.h b/arch/riscv/include/asm/probes.h
new file mode 100644
index 00000000..a787e6d
--- /dev/null
+++ b/arch/riscv/include/asm/probes.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _ASM_RISCV_PROBES_H
+#define _ASM_RISCV_PROBES_H
+
+typedef u32 probe_opcode_t;
+typedef bool (probes_handler_t) (u32 opcode, unsigned long addr, struct pt_regs *);
+
+/* architecture specific copy of original instruction */
+struct arch_probe_insn {
+ probe_opcode_t *insn;
+ probes_handler_t *handler;
+ /* restore address after simulation */
+ unsigned long restore;
+};
+
+#ifdef CONFIG_KPROBES
+typedef u32 kprobe_opcode_t;
+struct arch_specific_insn {
+ struct arch_probe_insn api;
+};
+#endif
+
+#endif /* _ASM_RISCV_PROBES_H */
diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
index b355cf4..c3fff3e 100644
--- a/arch/riscv/kernel/Makefile
+++ b/arch/riscv/kernel/Makefile
@@ -29,6 +29,7 @@ obj-y += riscv_ksyms.o
obj-y += stacktrace.o
obj-y += cacheinfo.o
obj-y += patch.o
+obj-y += probes/
obj-$(CONFIG_MMU) += vdso.o vdso/

obj-$(CONFIG_RISCV_M_MODE) += clint.o traps_misaligned.o
diff --git a/arch/riscv/kernel/probes/Makefile b/arch/riscv/kernel/probes/Makefile
new file mode 100644
index 00000000..8a39507
--- /dev/null
+++ b/arch/riscv/kernel/probes/Makefile
@@ -0,0 +1,4 @@
+# SPDX-License-Identifier: GPL-2.0
+obj-$(CONFIG_KPROBES) += kprobes.o decode-insn.o simulate-insn.o
+obj-$(CONFIG_KPROBES) += kprobes_trampoline.o
+CFLAGS_REMOVE_simulate-insn.o = $(CC_FLAGS_FTRACE)
diff --git a/arch/riscv/kernel/probes/decode-insn.c b/arch/riscv/kernel/probes/decode-insn.c
new file mode 100644
index 00000000..0876c30
--- /dev/null
+++ b/arch/riscv/kernel/probes/decode-insn.c
@@ -0,0 +1,48 @@
+// SPDX-License-Identifier: GPL-2.0+
+
+#include <linux/kernel.h>
+#include <linux/kprobes.h>
+#include <linux/module.h>
+#include <linux/kallsyms.h>
+#include <asm/sections.h>
+
+#include "decode-insn.h"
+#include "simulate-insn.h"
+
+/* Return:
+ * INSN_REJECTED If instruction is one not allowed to kprobe,
+ * INSN_GOOD_NO_SLOT If instruction is supported but doesn't use its slot.
+ */
+enum probe_insn __kprobes
+riscv_probe_decode_insn(probe_opcode_t *addr, struct arch_probe_insn *api)
+{
+ probe_opcode_t insn = le32_to_cpu(*addr);
+
+ /*
+ * Reject instructions list:
+ */
+ RISCV_INSN_REJECTED(system, insn);
+ RISCV_INSN_REJECTED(fence, insn);
+
+ /*
+ * Simulate instructions list:
+ * TODO: the REJECTED ones below need to be implemented
+ */
+#ifdef CONFIG_RISCV_ISA_C
+ RISCV_INSN_REJECTED(c_j, insn);
+ RISCV_INSN_REJECTED(c_jr, insn);
+ RISCV_INSN_REJECTED(c_jal, insn);
+ RISCV_INSN_REJECTED(c_jalr, insn);
+ RISCV_INSN_REJECTED(c_beqz, insn);
+ RISCV_INSN_REJECTED(c_bnez, insn);
+ RISCV_INSN_REJECTED(c_ebreak, insn);
+#endif
+
+ RISCV_INSN_REJECTED(auipc, insn);
+ RISCV_INSN_REJECTED(branch, insn);
+
+ RISCV_INSN_SET_SIMULATE(jal, insn);
+ RISCV_INSN_SET_SIMULATE(jalr, insn);
+
+ return INSN_GOOD;
+}
diff --git a/arch/riscv/kernel/probes/decode-insn.h b/arch/riscv/kernel/probes/decode-insn.h
new file mode 100644
index 00000000..42269a7
--- /dev/null
+++ b/arch/riscv/kernel/probes/decode-insn.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+
+#ifndef _RISCV_KERNEL_KPROBES_DECODE_INSN_H
+#define _RISCV_KERNEL_KPROBES_DECODE_INSN_H
+
+#include <asm/sections.h>
+#include <asm/kprobes.h>
+
+enum probe_insn {
+ INSN_REJECTED,
+ INSN_GOOD_NO_SLOT,
+ INSN_GOOD,
+};
+
+enum probe_insn __kprobes
+riscv_probe_decode_insn(probe_opcode_t *addr, struct arch_probe_insn *asi);
+
+#endif /* _RISCV_KERNEL_KPROBES_DECODE_INSN_H */
diff --git a/arch/riscv/kernel/probes/kprobes.c b/arch/riscv/kernel/probes/kprobes.c
new file mode 100644
index 00000000..31b6196
--- /dev/null
+++ b/arch/riscv/kernel/probes/kprobes.c
@@ -0,0 +1,471 @@
+// SPDX-License-Identifier: GPL-2.0+
+
+#include <linux/kprobes.h>
+#include <linux/extable.h>
+#include <linux/slab.h>
+#include <linux/stop_machine.h>
+#include <asm/ptrace.h>
+#include <linux/uaccess.h>
+#include <asm/sections.h>
+#include <asm/cacheflush.h>
+#include <asm/bug.h>
+#include <asm/patch.h>
+
+#include "decode-insn.h"
+
+DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
+DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
+
+static void __kprobes
+post_kprobe_handler(struct kprobe_ctlblk *, struct pt_regs *);
+
+static void __kprobes arch_prepare_ss_slot(struct kprobe *p)
+{
+ unsigned long offset = GET_INSN_LENGTH(p->opcode);
+
+ p->ainsn.api.restore = (unsigned long)p->addr + offset;
+
+ patch_text(p->ainsn.api.insn, p->opcode);
+ patch_text((void *)((unsigned long)(p->ainsn.api.insn) + offset),
+ __BUG_INSN_32);
+}
+
+static void __kprobes arch_prepare_simulate(struct kprobe *p)
+{
+ p->ainsn.api.restore = 0;
+}
+
+static void __kprobes arch_simulate_insn(struct kprobe *p, struct pt_regs *regs)
+{
+ struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+
+ if (p->ainsn.api.handler)
+ p->ainsn.api.handler((u32)p->opcode,
+ (unsigned long)p->addr, regs);
+
+ post_kprobe_handler(kcb, regs);
+}
+
+int __kprobes arch_prepare_kprobe(struct kprobe *p)
+{
+ unsigned long probe_addr = (unsigned long)p->addr;
+
+ if (probe_addr & 0x1) {
+ pr_warn("Address not aligned.\n");
+
+ return -EINVAL;
+ }
+
+ /* copy instruction */
+ p->opcode = le32_to_cpu(*p->addr);
+
+ /* decode instruction */
+ switch (riscv_probe_decode_insn(p->addr, &p->ainsn.api)) {
+ case INSN_REJECTED: /* insn not supported */
+ return -EINVAL;
+
+ case INSN_GOOD_NO_SLOT: /* insn need simulation */
+ p->ainsn.api.insn = NULL;
+ break;
+
+ case INSN_GOOD: /* instruction uses slot */
+ p->ainsn.api.insn = get_insn_slot();
+ if (!p->ainsn.api.insn)
+ return -ENOMEM;
+ break;
+ }
+
+ /* prepare the instruction */
+ if (p->ainsn.api.insn)
+ arch_prepare_ss_slot(p);
+ else
+ arch_prepare_simulate(p);
+
+ return 0;
+}
+
+/* install breakpoint in text */
+void __kprobes arch_arm_kprobe(struct kprobe *p)
+{
+ if ((p->opcode & __INSN_LENGTH_MASK) == __INSN_LENGTH_32)
+ patch_text(p->addr, __BUG_INSN_32);
+ else
+ patch_text(p->addr, __BUG_INSN_16);
+}
+
+/* remove breakpoint from text */
+void __kprobes arch_disarm_kprobe(struct kprobe *p)
+{
+ patch_text(p->addr, p->opcode);
+}
+
+void __kprobes arch_remove_kprobe(struct kprobe *p)
+{
+}
+
+static void __kprobes save_previous_kprobe(struct kprobe_ctlblk *kcb)
+{
+ kcb->prev_kprobe.kp = kprobe_running();
+ kcb->prev_kprobe.status = kcb->kprobe_status;
+}
+
+static void __kprobes restore_previous_kprobe(struct kprobe_ctlblk *kcb)
+{
+ __this_cpu_write(current_kprobe, kcb->prev_kprobe.kp);
+ kcb->kprobe_status = kcb->prev_kprobe.status;
+}
+
+static void __kprobes set_current_kprobe(struct kprobe *p)
+{
+ __this_cpu_write(current_kprobe, p);
+}
+
+/*
+ * Interrupts need to be disabled before single-step mode is set, and not
+ * reenabled until after single-step mode ends.
+ * Without disabling interrupt on local CPU, there is a chance of
+ * interrupt occurrence in the period of exception return and start of
+ * out-of-line single-step, that result in wrongly single stepping
+ * into the interrupt handler.
+ */
+static void __kprobes kprobes_save_local_irqflag(struct kprobe_ctlblk *kcb,
+ struct pt_regs *regs)
+{
+ kcb->saved_status = regs->status;
+ regs->status &= ~SR_SPIE;
+}
+
+static void __kprobes kprobes_restore_local_irqflag(struct kprobe_ctlblk *kcb,
+ struct pt_regs *regs)
+{
+ regs->status = kcb->saved_status;
+}
+
+static void __kprobes
+set_ss_context(struct kprobe_ctlblk *kcb, unsigned long addr, struct kprobe *p)
+{
+ unsigned long offset = GET_INSN_LENGTH(p->opcode);
+
+ kcb->ss_ctx.ss_pending = true;
+ kcb->ss_ctx.match_addr = addr + offset;
+}
+
+static void __kprobes clear_ss_context(struct kprobe_ctlblk *kcb)
+{
+ kcb->ss_ctx.ss_pending = false;
+ kcb->ss_ctx.match_addr = 0;
+}
+
+static void __kprobes setup_singlestep(struct kprobe *p,
+ struct pt_regs *regs,
+ struct kprobe_ctlblk *kcb, int reenter)
+{
+ unsigned long slot;
+
+ if (reenter) {
+ save_previous_kprobe(kcb);
+ set_current_kprobe(p);
+ kcb->kprobe_status = KPROBE_REENTER;
+ } else {
+ kcb->kprobe_status = KPROBE_HIT_SS;
+ }
+
+ if (p->ainsn.api.insn) {
+ /* prepare for single stepping */
+ slot = (unsigned long)p->ainsn.api.insn;
+
+ set_ss_context(kcb, slot, p); /* mark pending ss */
+
+ /* IRQs and single stepping do not mix well. */
+ kprobes_save_local_irqflag(kcb, regs);
+
+ instruction_pointer_set(regs, slot);
+ } else {
+ /* insn simulation */
+ arch_simulate_insn(p, regs);
+ }
+}
+
+static int __kprobes reenter_kprobe(struct kprobe *p,
+ struct pt_regs *regs,
+ struct kprobe_ctlblk *kcb)
+{
+ switch (kcb->kprobe_status) {
+ case KPROBE_HIT_SSDONE:
+ case KPROBE_HIT_ACTIVE:
+ kprobes_inc_nmissed_count(p);
+ setup_singlestep(p, regs, kcb, 1);
+ break;
+ case KPROBE_HIT_SS:
+ case KPROBE_REENTER:
+ pr_warn("Unrecoverable kprobe detected.\n");
+ dump_kprobe(p);
+ BUG();
+ break;
+ default:
+ WARN_ON(1);
+ return 0;
+ }
+
+ return 1;
+}
+
+static void __kprobes
+post_kprobe_handler(struct kprobe_ctlblk *kcb, struct pt_regs *regs)
+{
+ struct kprobe *cur = kprobe_running();
+
+ if (!cur)
+ return;
+
+ /* return addr restore if non-branching insn */
+ if (cur->ainsn.api.restore != 0)
+ regs->epc = cur->ainsn.api.restore;
+
+ /* restore back original saved kprobe variables and continue */
+ if (kcb->kprobe_status == KPROBE_REENTER) {
+ restore_previous_kprobe(kcb);
+ return;
+ }
+
+ /* call post handler */
+ kcb->kprobe_status = KPROBE_HIT_SSDONE;
+ if (cur->post_handler) {
+ /* post_handler can hit a breakpoint and single step
+ * again, so recursive exceptions must be handled here.
+ */
+ cur->post_handler(cur, regs, 0);
+ }
+
+ reset_current_kprobe();
+}
+
+int __kprobes kprobe_fault_handler(struct pt_regs *regs, unsigned int trapnr)
+{
+ struct kprobe *cur = kprobe_running();
+ struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+
+ switch (kcb->kprobe_status) {
+ case KPROBE_HIT_SS:
+ case KPROBE_REENTER:
+ /*
+ * We are here because the instruction being single
+ * stepped caused a page fault. We reset the current
+ * kprobe and the ip points back to the probe address
+ * and allow the page fault handler to continue as a
+ * normal page fault.
+ */
+ regs->epc = (unsigned long) cur->addr;
+ if (!instruction_pointer(regs))
+ BUG();
+
+ if (kcb->kprobe_status == KPROBE_REENTER)
+ restore_previous_kprobe(kcb);
+ else
+ reset_current_kprobe();
+
+ break;
+ case KPROBE_HIT_ACTIVE:
+ case KPROBE_HIT_SSDONE:
+ /*
+ * We increment the nmissed count for accounting,
+ * we can also use npre/npostfault count for accounting
+ * these specific fault cases.
+ */
+ kprobes_inc_nmissed_count(cur);
+
+ /*
+ * We come here because instructions in the pre/post
+ * handler caused the page_fault, this could happen
+ * if handler tries to access user space by
+ * copy_from_user(), get_user() etc. Let the
+ * user-specified handler try to fix it first.
+ */
+ if (cur->fault_handler && cur->fault_handler(cur, regs, trapnr))
+ return 1;
+
+ /*
+ * In case the user-specified fault handler returned
+ * zero, try to fix up.
+ */
+ if (fixup_exception(regs))
+ return 1;
+ }
+ return 0;
+}
+
+bool __kprobes
+kprobe_breakpoint_handler(struct pt_regs *regs)
+{
+ struct kprobe *p, *cur_kprobe;
+ struct kprobe_ctlblk *kcb;
+ unsigned long addr = instruction_pointer(regs);
+
+ kcb = get_kprobe_ctlblk();
+ cur_kprobe = kprobe_running();
+
+ p = get_kprobe((kprobe_opcode_t *) addr);
+
+ if (p) {
+ if (cur_kprobe) {
+ if (reenter_kprobe(p, regs, kcb))
+ return true;
+ } else {
+ /* Probe hit */
+ set_current_kprobe(p);
+ kcb->kprobe_status = KPROBE_HIT_ACTIVE;
+
+ /*
+ * If we have no pre-handler or it returned 0, we
+ * continue with normal processing. If we have a
+ * pre-handler and it returned non-zero, it will
+ * modify the execution path and no need to single
+ * stepping. Let's just reset current kprobe and exit.
+ *
+ * pre_handler can hit a breakpoint and can step thru
+ * before return.
+ */
+ if (!p->pre_handler || !p->pre_handler(p, regs))
+ setup_singlestep(p, regs, kcb, 0);
+ else
+ reset_current_kprobe();
+ }
+ return true;
+ }
+
+ /*
+ * The breakpoint instruction was removed right
+ * after we hit it. Another cpu has removed
+ * either a probepoint or a debugger breakpoint
+ * at this address. In either case, no further
+ * handling of this interrupt is appropriate.
+ * Return back to original instruction, and continue.
+ */
+ return false;
+}
+
+bool __kprobes
+kprobe_single_step_handler(struct pt_regs *regs)
+{
+ struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+
+ if ((kcb->ss_ctx.ss_pending)
+ && (kcb->ss_ctx.match_addr == instruction_pointer(regs))) {
+ clear_ss_context(kcb); /* clear pending ss */
+
+ kprobes_restore_local_irqflag(kcb, regs);
+
+ post_kprobe_handler(kcb, regs);
+ return true;
+ }
+ return false;
+}
+
+/*
+ * Provide a blacklist of symbols identifying ranges which cannot be kprobed.
+ * This blacklist is exposed to userspace via debugfs (kprobes/blacklist).
+ */
+int __init arch_populate_kprobe_blacklist(void)
+{
+ int ret;
+
+ ret = kprobe_add_area_blacklist((unsigned long)__irqentry_text_start,
+ (unsigned long)__irqentry_text_end);
+ return ret;
+}
+
+void __kprobes __used *trampoline_probe_handler(struct pt_regs *regs)
+{
+ struct kretprobe_instance *ri = NULL;
+ struct hlist_head *head, empty_rp;
+ struct hlist_node *tmp;
+ unsigned long flags, orig_ret_address = 0;
+ unsigned long trampoline_address =
+ (unsigned long)&kretprobe_trampoline;
+ kprobe_opcode_t *correct_ret_addr = NULL;
+
+ INIT_HLIST_HEAD(&empty_rp);
+ kretprobe_hash_lock(current, &head, &flags);
+
+ /*
+ * It is possible to have multiple instances associated with a given
+ * task either because multiple functions in the call path have
+ * return probes installed on them, and/or more than one
+ * return probe was registered for a target function.
+ *
+ * We can handle this because:
+ * - instances are always pushed into the head of the list
+ * - when multiple return probes are registered for the same
+ * function, the (chronologically) first instance's ret_addr
+ * will be the real return address, and all the rest will
+ * point to kretprobe_trampoline.
+ */
+ hlist_for_each_entry_safe(ri, tmp, head, hlist) {
+ if (ri->task != current)
+ /* another task is sharing our hash bucket */
+ continue;
+
+ orig_ret_address = (unsigned long)ri->ret_addr;
+
+ if (orig_ret_address != trampoline_address)
+ /*
+ * This is the real return address. Any other
+ * instances associated with this task are for
+ * other calls deeper on the call stack
+ */
+ break;
+ }
+
+ kretprobe_assert(ri, orig_ret_address, trampoline_address);
+
+ correct_ret_addr = ri->ret_addr;
+ hlist_for_each_entry_safe(ri, tmp, head, hlist) {
+ if (ri->task != current)
+ /* another task is sharing our hash bucket */
+ continue;
+
+ orig_ret_address = (unsigned long)ri->ret_addr;
+ if (ri->rp && ri->rp->handler) {
+ __this_cpu_write(current_kprobe, &ri->rp->kp);
+ get_kprobe_ctlblk()->kprobe_status = KPROBE_HIT_ACTIVE;
+ ri->ret_addr = correct_ret_addr;
+ ri->rp->handler(ri, regs);
+ __this_cpu_write(current_kprobe, NULL);
+ }
+
+ recycle_rp_inst(ri, &empty_rp);
+
+ if (orig_ret_address != trampoline_address)
+ /*
+ * This is the real return address. Any other
+ * instances associated with this task are for
+ * other calls deeper on the call stack
+ */
+ break;
+ }
+
+ kretprobe_hash_unlock(current, &flags);
+
+ hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) {
+ hlist_del(&ri->hlist);
+ kfree(ri);
+ }
+ return (void *)orig_ret_address;
+}
+
+void __kprobes arch_prepare_kretprobe(struct kretprobe_instance *ri,
+ struct pt_regs *regs)
+{
+ ri->ret_addr = (kprobe_opcode_t *)regs->ra;
+ regs->ra = (unsigned long) &kretprobe_trampoline;
+}
+
+int __kprobes arch_trampoline_kprobe(struct kprobe *p)
+{
+ return 0;
+}
+
+int __init arch_init_kprobes(void)
+{
+ return 0;
+}
diff --git a/arch/riscv/kernel/probes/kprobes_trampoline.S b/arch/riscv/kernel/probes/kprobes_trampoline.S
new file mode 100644
index 00000000..6e85d02
--- /dev/null
+++ b/arch/riscv/kernel/probes/kprobes_trampoline.S
@@ -0,0 +1,93 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * Author: Patrick Stählin <[email protected]>
+ */
+#include <linux/linkage.h>
+
+#include <asm/asm.h>
+#include <asm/asm-offsets.h>
+
+ .text
+ .altmacro
+
+ .macro save_all_base_regs
+ REG_S x1, PT_RA(sp)
+ REG_S x3, PT_GP(sp)
+ REG_S x4, PT_TP(sp)
+ REG_S x5, PT_T0(sp)
+ REG_S x6, PT_T1(sp)
+ REG_S x7, PT_T2(sp)
+ REG_S x8, PT_S0(sp)
+ REG_S x9, PT_S1(sp)
+ REG_S x10, PT_A0(sp)
+ REG_S x11, PT_A1(sp)
+ REG_S x12, PT_A2(sp)
+ REG_S x13, PT_A3(sp)
+ REG_S x14, PT_A4(sp)
+ REG_S x15, PT_A5(sp)
+ REG_S x16, PT_A6(sp)
+ REG_S x17, PT_A7(sp)
+ REG_S x18, PT_S2(sp)
+ REG_S x19, PT_S3(sp)
+ REG_S x20, PT_S4(sp)
+ REG_S x21, PT_S5(sp)
+ REG_S x22, PT_S6(sp)
+ REG_S x23, PT_S7(sp)
+ REG_S x24, PT_S8(sp)
+ REG_S x25, PT_S9(sp)
+ REG_S x26, PT_S10(sp)
+ REG_S x27, PT_S11(sp)
+ REG_S x28, PT_T3(sp)
+ REG_S x29, PT_T4(sp)
+ REG_S x30, PT_T5(sp)
+ REG_S x31, PT_T6(sp)
+ .endm
+
+ .macro restore_all_base_regs
+ REG_L x3, PT_GP(sp)
+ REG_L x4, PT_TP(sp)
+ REG_L x5, PT_T0(sp)
+ REG_L x6, PT_T1(sp)
+ REG_L x7, PT_T2(sp)
+ REG_L x8, PT_S0(sp)
+ REG_L x9, PT_S1(sp)
+ REG_L x10, PT_A0(sp)
+ REG_L x11, PT_A1(sp)
+ REG_L x12, PT_A2(sp)
+ REG_L x13, PT_A3(sp)
+ REG_L x14, PT_A4(sp)
+ REG_L x15, PT_A5(sp)
+ REG_L x16, PT_A6(sp)
+ REG_L x17, PT_A7(sp)
+ REG_L x18, PT_S2(sp)
+ REG_L x19, PT_S3(sp)
+ REG_L x20, PT_S4(sp)
+ REG_L x21, PT_S5(sp)
+ REG_L x22, PT_S6(sp)
+ REG_L x23, PT_S7(sp)
+ REG_L x24, PT_S8(sp)
+ REG_L x25, PT_S9(sp)
+ REG_L x26, PT_S10(sp)
+ REG_L x27, PT_S11(sp)
+ REG_L x28, PT_T3(sp)
+ REG_L x29, PT_T4(sp)
+ REG_L x30, PT_T5(sp)
+ REG_L x31, PT_T6(sp)
+ .endm
+
+ENTRY(kretprobe_trampoline)
+ addi sp, sp, -(PT_SIZE_ON_STACK)
+ save_all_base_regs
+
+ move a0, sp /* pt_regs */
+
+ call trampoline_probe_handler
+
+ /* use the result as the return-address */
+ move ra, a0
+
+ restore_all_base_regs
+ addi sp, sp, PT_SIZE_ON_STACK
+
+ ret
+ENDPROC(kretprobe_trampoline)
diff --git a/arch/riscv/kernel/probes/simulate-insn.c b/arch/riscv/kernel/probes/simulate-insn.c
new file mode 100644
index 00000000..2519ce2
--- /dev/null
+++ b/arch/riscv/kernel/probes/simulate-insn.c
@@ -0,0 +1,85 @@
+// SPDX-License-Identifier: GPL-2.0+
+
+#include <linux/bitops.h>
+#include <linux/kernel.h>
+#include <linux/kprobes.h>
+
+#include "decode-insn.h"
+#include "simulate-insn.h"
+
+static inline bool rv_insn_reg_get_val(struct pt_regs *regs, u32 index,
+ unsigned long *ptr)
+{
+ if (index == 0)
+ *ptr = 0;
+ else if (index <= 31)
+ *ptr = *((unsigned long *)regs + index);
+ else
+ return false;
+
+ return true;
+}
+
+static inline bool rv_insn_reg_set_val(struct pt_regs *regs, u32 index,
+ unsigned long val)
+{
+ if (index == 0)
+ return false;
+ else if (index <= 31)
+ *((unsigned long *)regs + index) = val;
+ else
+ return false;
+
+ return true;
+}
+
+bool __kprobes simulate_jal(u32 opcode, unsigned long addr, struct pt_regs *regs)
+{
+ /*
+ * 31 30 21 20 19 12 11 7 6 0
+ * imm [20] | imm[10:1] | imm[11] | imm[19:12] | rd | opcode
+ * 1 10 1 8 5 JAL/J
+ */
+ bool ret;
+ u32 imm;
+ u32 index = (opcode >> 7) & 0x1f;
+
+ ret = rv_insn_reg_set_val(regs, index, addr + 4);
+ if (!ret)
+ return ret;
+
+ imm = ((opcode >> 21) & 0x3ff) << 1;
+ imm |= ((opcode >> 20) & 0x1) << 11;
+ imm |= ((opcode >> 12) & 0xff) << 12;
+ imm |= ((opcode >> 31) & 0x1) << 20;
+
+ instruction_pointer_set(regs, addr + sign_extend32((imm), 20));
+
+ return ret;
+}
+
+bool __kprobes simulate_jalr(u32 opcode, unsigned long addr, struct pt_regs *regs)
+{
+ /*
+ * 31 20 19 15 14 12 11 7 6 0
+ * offset[11:0] | rs1 | 010 | rd | opcode
+ * 12 5 3 5 JALR/JR
+ */
+ bool ret;
+ unsigned long base_addr;
+ u32 imm = (opcode >> 20) & 0xfff;
+ u32 rd_index = (opcode >> 7) & 0x1f;
+ u32 rs1_index = (opcode >> 15) & 0x1f;
+
+ ret = rv_insn_reg_set_val(regs, rd_index, addr + 4);
+ if (!ret)
+ return ret;
+
+ ret = rv_insn_reg_get_val(regs, rs1_index, &base_addr);
+ if (!ret)
+ return ret;
+
+ instruction_pointer_set(regs, (base_addr + sign_extend32((imm), 11))&~1);
+
+ return ret;
+}
diff --git a/arch/riscv/kernel/probes/simulate-insn.h b/arch/riscv/kernel/probes/simulate-insn.h
new file mode 100644
index 00000000..a62d784
--- /dev/null
+++ b/arch/riscv/kernel/probes/simulate-insn.h
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+
+#ifndef _RISCV_KERNEL_PROBES_SIMULATE_INSN_H
+#define _RISCV_KERNEL_PROBES_SIMULATE_INSN_H
+
+#define __RISCV_INSN_FUNCS(name, mask, val) \
+static __always_inline bool riscv_insn_is_##name(probe_opcode_t code) \
+{ \
+ BUILD_BUG_ON(~(mask) & (val)); \
+ return (code & (mask)) == (val); \
+} \
+bool simulate_##name(u32 opcode, unsigned long addr, \
+ struct pt_regs *regs);
+
+#define RISCV_INSN_REJECTED(name, code) \
+ do { \
+ if (riscv_insn_is_##name(code)) { \
+ return INSN_REJECTED; \
+ } \
+ } while (0)
+
+__RISCV_INSN_FUNCS(system, 0x7f, 0x73)
+__RISCV_INSN_FUNCS(fence, 0x7f, 0x0f)
+
+#define RISCV_INSN_SET_SIMULATE(name, code) \
+ do { \
+ if (riscv_insn_is_##name(code)) { \
+ api->handler = simulate_##name; \
+ return INSN_GOOD_NO_SLOT; \
+ } \
+ } while (0)
+
+__RISCV_INSN_FUNCS(c_j, 0xe003, 0xa001)
+__RISCV_INSN_FUNCS(c_jr, 0xf007, 0x8002)
+__RISCV_INSN_FUNCS(c_jal, 0xe003, 0x2001)
+__RISCV_INSN_FUNCS(c_jalr, 0xf007, 0x9002)
+__RISCV_INSN_FUNCS(c_beqz, 0xe003, 0xc001)
+__RISCV_INSN_FUNCS(c_bnez, 0xe003, 0xe001)
+__RISCV_INSN_FUNCS(c_ebreak, 0xffff, 0x9002)
+
+__RISCV_INSN_FUNCS(auipc, 0x7f, 0x17)
+__RISCV_INSN_FUNCS(branch, 0x7f, 0x63)
+
+__RISCV_INSN_FUNCS(jal, 0x7f, 0x6f)
+__RISCV_INSN_FUNCS(jalr, 0x707f, 0x67)
+
+#endif /* _RISCV_KERNEL_PROBES_SIMULATE_INSN_H */
diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
index ecec177..ac2e786 100644
--- a/arch/riscv/kernel/traps.c
+++ b/arch/riscv/kernel/traps.c
@@ -12,6 +12,7 @@
#include <linux/signal.h>
#include <linux/kdebug.h>
#include <linux/uaccess.h>
+#include <linux/kprobes.h>
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/irq.h>
@@ -145,6 +146,14 @@ static inline unsigned long get_break_insn_length(unsigned long pc)

asmlinkage __visible void do_trap_break(struct pt_regs *regs)
{
+#ifdef CONFIG_KPROBES
+ if (kprobe_single_step_handler(regs))
+ return;
+
+ if (kprobe_breakpoint_handler(regs))
+ return;
+#endif
+
if (user_mode(regs))
force_sig_fault(SIGTRAP, TRAP_BRKPT, (void __user *)regs->epc);
#ifdef CONFIG_KGDB
diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
index ae7b7fe..da0c08c 100644
--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -13,6 +13,7 @@
#include <linux/perf_event.h>
#include <linux/signal.h>
#include <linux/uaccess.h>
+#include <linux/kprobes.h>

#include <asm/pgalloc.h>
#include <asm/ptrace.h>
@@ -40,6 +41,9 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
tsk = current;
mm = tsk->mm;

+ if (kprobe_page_fault(regs, cause))
+ return;
+
/*
* Fault-in kernel-space virtual memory on-demand.
* The 'reference' page table is init_mm.pgd.
--
2.7.4

2020-07-04 03:37:05

by Guo Ren

Subject: [PATCH V1 5/5] riscv: Add uprobes supported

From: Guo Ren <[email protected]>

This patch adds support for uprobes on riscv architecture.

Just like kprobe, it supports single-stepped and simulated instructions.

Signed-off-by: Guo Ren <[email protected]>
Cc: Patrick Stählin <[email protected]>
Cc: Masami Hiramatsu <[email protected]>
Cc: Palmer Dabbelt <[email protected]>
Cc: Björn Töpel <[email protected]>
---
arch/riscv/Kconfig | 3 +
arch/riscv/include/asm/processor.h | 1 +
arch/riscv/include/asm/thread_info.h | 4 +-
arch/riscv/include/asm/uprobes.h | 40 ++++++++
arch/riscv/kernel/probes/Makefile | 1 +
arch/riscv/kernel/probes/uprobes.c | 186 +++++++++++++++++++++++++++++++++++
arch/riscv/kernel/signal.c | 3 +
arch/riscv/kernel/traps.c | 10 ++
arch/riscv/mm/fault.c | 7 ++
9 files changed, 254 insertions(+), 1 deletion(-)
create mode 100644 arch/riscv/include/asm/uprobes.h
create mode 100644 arch/riscv/kernel/probes/uprobes.c

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index a295f0b..f927a91 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -146,6 +146,9 @@ config ARCH_WANT_GENERAL_HUGETLB
config ARCH_SUPPORTS_DEBUG_PAGEALLOC
def_bool y

+config ARCH_SUPPORTS_UPROBES
+ def_bool y
+
config SYS_SUPPORTS_HUGETLBFS
depends on MMU
def_bool y
diff --git a/arch/riscv/include/asm/processor.h b/arch/riscv/include/asm/processor.h
index bdddcd5..3a24003 100644
--- a/arch/riscv/include/asm/processor.h
+++ b/arch/riscv/include/asm/processor.h
@@ -34,6 +34,7 @@ struct thread_struct {
unsigned long sp; /* Kernel mode stack */
unsigned long s[12]; /* s[0]: frame pointer */
struct __riscv_d_ext_state fstate;
+ unsigned long bad_cause;
};

#define INIT_THREAD { \
diff --git a/arch/riscv/include/asm/thread_info.h b/arch/riscv/include/asm/thread_info.h
index 1dd12a0..b3a7eb6 100644
--- a/arch/riscv/include/asm/thread_info.h
+++ b/arch/riscv/include/asm/thread_info.h
@@ -76,6 +76,7 @@ struct thread_info {
#define TIF_SYSCALL_TRACEPOINT 6 /* syscall tracepoint instrumentation */
#define TIF_SYSCALL_AUDIT 7 /* syscall auditing */
#define TIF_SECCOMP 8 /* syscall secure computing */
+#define TIF_UPROBE 9 /* uprobe breakpoint or singlestep */

#define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE)
#define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME)
@@ -84,9 +85,10 @@ struct thread_info {
#define _TIF_SYSCALL_TRACEPOINT (1 << TIF_SYSCALL_TRACEPOINT)
#define _TIF_SYSCALL_AUDIT (1 << TIF_SYSCALL_AUDIT)
#define _TIF_SECCOMP (1 << TIF_SECCOMP)
+#define _TIF_UPROBE (1 << TIF_UPROBE)

#define _TIF_WORK_MASK \
- (_TIF_NOTIFY_RESUME | _TIF_SIGPENDING | _TIF_NEED_RESCHED)
+ (_TIF_NOTIFY_RESUME | _TIF_SIGPENDING | _TIF_NEED_RESCHED | _TIF_UPROBE)

#define _TIF_SYSCALL_WORK \
(_TIF_SYSCALL_TRACE | _TIF_SYSCALL_TRACEPOINT | _TIF_SYSCALL_AUDIT | \
diff --git a/arch/riscv/include/asm/uprobes.h b/arch/riscv/include/asm/uprobes.h
new file mode 100644
index 00000000..f2183e0
--- /dev/null
+++ b/arch/riscv/include/asm/uprobes.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#ifndef _ASM_RISCV_UPROBES_H
+#define _ASM_RISCV_UPROBES_H
+
+#include <asm/probes.h>
+#include <asm/patch.h>
+#include <asm/bug.h>
+
+#define MAX_UINSN_BYTES 8
+
+#ifdef CONFIG_RISCV_ISA_C
+#define UPROBE_SWBP_INSN __BUG_INSN_16
+#define UPROBE_SWBP_INSN_SIZE 2
+#else
+#define UPROBE_SWBP_INSN __BUG_INSN_32
+#define UPROBE_SWBP_INSN_SIZE 4
+#endif
+#define UPROBE_XOL_SLOT_BYTES MAX_UINSN_BYTES
+
+typedef u32 uprobe_opcode_t;
+
+struct arch_uprobe_task {
+ unsigned long saved_cause;
+};
+
+struct arch_uprobe {
+ union {
+ u8 insn[MAX_UINSN_BYTES];
+ u8 ixol[MAX_UINSN_BYTES];
+ };
+ struct arch_probe_insn api;
+ unsigned long insn_size;
+ bool simulate;
+};
+
+bool uprobe_breakpoint_handler(struct pt_regs *regs);
+bool uprobe_single_step_handler(struct pt_regs *regs);
+
+#endif /* _ASM_RISCV_UPROBES_H */
diff --git a/arch/riscv/kernel/probes/Makefile b/arch/riscv/kernel/probes/Makefile
index 8a39507..cb62991 100644
--- a/arch/riscv/kernel/probes/Makefile
+++ b/arch/riscv/kernel/probes/Makefile
@@ -1,4 +1,5 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_KPROBES) += kprobes.o decode-insn.o simulate-insn.o
obj-$(CONFIG_KPROBES) += kprobes_trampoline.o
+obj-$(CONFIG_UPROBES) += uprobes.o decode-insn.o simulate-insn.o
CFLAGS_REMOVE_simulate-insn.o = $(CC_FLAGS_FTRACE)
diff --git a/arch/riscv/kernel/probes/uprobes.c b/arch/riscv/kernel/probes/uprobes.c
new file mode 100644
index 00000000..7a057b5
--- /dev/null
+++ b/arch/riscv/kernel/probes/uprobes.c
@@ -0,0 +1,186 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#include <linux/highmem.h>
+#include <linux/ptrace.h>
+#include <linux/uprobes.h>
+
+#include "decode-insn.h"
+
+#define UPROBE_TRAP_NR UINT_MAX
+
+bool is_swbp_insn(uprobe_opcode_t *insn)
+{
+#ifdef CONFIG_RISCV_ISA_C
+ return (*insn & 0xffff) == UPROBE_SWBP_INSN;
+#else
+ return *insn == UPROBE_SWBP_INSN;
+#endif
+}
+
+unsigned long uprobe_get_swbp_addr(struct pt_regs *regs)
+{
+ return instruction_pointer(regs);
+}
+
+int arch_uprobe_analyze_insn(struct arch_uprobe *auprobe, struct mm_struct *mm,
+ unsigned long addr)
+{
+ probe_opcode_t opcode;
+
+ opcode = *(probe_opcode_t *)(&auprobe->insn[0]);
+
+ auprobe->insn_size = GET_INSN_LENGTH(opcode);
+
+ switch (riscv_probe_decode_insn(&opcode, &auprobe->api)) {
+ case INSN_REJECTED:
+ return -EINVAL;
+
+ case INSN_GOOD_NO_SLOT:
+ auprobe->simulate = true;
+ break;
+
+ case INSN_GOOD:
+ auprobe->simulate = false;
+ break;
+
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+int arch_uprobe_pre_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
+{
+ struct uprobe_task *utask = current->utask;
+
+ utask->autask.saved_cause = current->thread.bad_cause;
+ current->thread.bad_cause = UPROBE_TRAP_NR;
+
+ instruction_pointer_set(regs, utask->xol_vaddr);
+
+ regs->status &= ~SR_SPIE;
+
+ return 0;
+}
+
+int arch_uprobe_post_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
+{
+ struct uprobe_task *utask = current->utask;
+
+ WARN_ON_ONCE(current->thread.bad_cause != UPROBE_TRAP_NR);
+
+ instruction_pointer_set(regs, utask->vaddr + auprobe->insn_size);
+
+ regs->status |= SR_SPIE;
+
+ return 0;
+}
+
+bool arch_uprobe_xol_was_trapped(struct task_struct *t)
+{
+ if (t->thread.bad_cause != UPROBE_TRAP_NR)
+ return true;
+
+ return false;
+}
+
+bool arch_uprobe_skip_sstep(struct arch_uprobe *auprobe, struct pt_regs *regs)
+{
+ probe_opcode_t insn;
+ unsigned long addr;
+
+ if (!auprobe->simulate)
+ return false;
+
+ insn = *(probe_opcode_t *)(&auprobe->insn[0]);
+ addr = instruction_pointer(regs);
+
+ if (auprobe->api.handler)
+ auprobe->api.handler(insn, addr, regs);
+
+ return true;
+}
+
+void arch_uprobe_abort_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
+{
+ struct uprobe_task *utask = current->utask;
+
+ /*
+ * Task has received a fatal signal, so reset back to the probed
+ * address.
+ */
+ instruction_pointer_set(regs, utask->vaddr);
+
+ regs->status &= ~SR_SPIE;
+}
+
+bool arch_uretprobe_is_alive(struct return_instance *ret, enum rp_check ctx,
+ struct pt_regs *regs)
+{
+ if (ctx == RP_CHECK_CHAIN_CALL)
+ return regs->sp <= ret->stack;
+ else
+ return regs->sp < ret->stack;
+}
+
+unsigned long
+arch_uretprobe_hijack_return_addr(unsigned long trampoline_vaddr,
+ struct pt_regs *regs)
+{
+ unsigned long ra;
+
+ ra = regs->ra;
+
+ regs->ra = trampoline_vaddr;
+
+ return ra;
+}
+
+int arch_uprobe_exception_notify(struct notifier_block *self,
+ unsigned long val, void *data)
+{
+ return NOTIFY_DONE;
+}
+
+bool uprobe_breakpoint_handler(struct pt_regs *regs)
+{
+ if (uprobe_pre_sstep_notifier(regs))
+ return true;
+
+ return false;
+}
+
+bool uprobe_single_step_handler(struct pt_regs *regs)
+{
+ if (uprobe_post_sstep_notifier(regs))
+ return true;
+
+ return false;
+}
+
+void arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr,
+ void *src, unsigned long len)
+{
+ /* Initialize the slot */
+ void *kaddr = kmap_atomic(page);
+ void *dst = kaddr + (vaddr & ~PAGE_MASK);
+
+ memcpy(dst, src, len);
+
+ /* Add ebreak behind opcode to simulate singlestep */
+ if (vaddr) {
+ dst += GET_INSN_LENGTH(*(probe_opcode_t *)src);
+ *(uprobe_opcode_t *)dst = __BUG_INSN_32;
+ }
+
+ kunmap_atomic(kaddr);
+
+ /*
+ * We probably need flush_icache_user_page() but it needs vma.
+ * This should work on most of architectures by default. If
+ * architecture needs to do something different it can define
+ * its own version of the function.
+ */
+ flush_dcache_page(page);
+}
diff --git a/arch/riscv/kernel/signal.c b/arch/riscv/kernel/signal.c
index 17ba190..a96db83b 100644
--- a/arch/riscv/kernel/signal.c
+++ b/arch/riscv/kernel/signal.c
@@ -309,6 +309,9 @@ static void do_signal(struct pt_regs *regs)
asmlinkage __visible void do_notify_resume(struct pt_regs *regs,
unsigned long thread_info_flags)
{
+ if (thread_info_flags & _TIF_UPROBE)
+ uprobe_notify_resume(regs);
+
/* Handle pending signal delivery */
if (thread_info_flags & _TIF_SIGPENDING)
do_signal(regs);
diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
index ac2e786..6981276 100644
--- a/arch/riscv/kernel/traps.c
+++ b/arch/riscv/kernel/traps.c
@@ -76,6 +76,8 @@ void do_trap(struct pt_regs *regs, int signo, int code, unsigned long addr)
static void do_trap_error(struct pt_regs *regs, int signo, int code,
unsigned long addr, const char *str)
{
+ current->thread.bad_cause = regs->cause;
+
if (user_mode(regs)) {
do_trap(regs, signo, code, addr);
} else {
@@ -153,6 +155,14 @@ asmlinkage __visible void do_trap_break(struct pt_regs *regs)
if (kprobe_breakpoint_handler(regs))
return;
#endif
+#ifdef CONFIG_UPROBES
+ if (uprobe_single_step_handler(regs))
+ return;
+
+ if (uprobe_breakpoint_handler(regs))
+ return;
+#endif
+ current->thread.bad_cause = regs->cause;

if (user_mode(regs))
force_sig_fault(SIGTRAP, TRAP_BRKPT, (void __user *)regs->epc);
diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
index da0c08c..ac96d93 100644
--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -170,11 +170,14 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
mmap_read_unlock(mm);
/* User mode accesses just cause a SIGSEGV */
if (user_mode(regs)) {
+ tsk->thread.bad_cause = cause;
do_trap(regs, SIGSEGV, code, addr);
return;
}

no_context:
+ tsk->thread.bad_cause = cause;
+
/* Are we prepared to handle this kernel fault? */
if (fixup_exception(regs))
return;
@@ -195,6 +198,8 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
* (which will retry the fault, or kill us if we got oom-killed).
*/
out_of_memory:
+ tsk->thread.bad_cause = cause;
+
mmap_read_unlock(mm);
if (!user_mode(regs))
goto no_context;
@@ -202,6 +207,8 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
return;

do_sigbus:
+ tsk->thread.bad_cause = cause;
+
mmap_read_unlock(mm);
/* Kernel mode? Handle exceptions or die */
if (!user_mode(regs))
--
2.7.4

2020-07-04 03:37:41

by Guo Ren

Subject: [PATCH V1 3/5] riscv: Fixup compile error BUILD_BUG_ON failed

From: Guo Ren <[email protected]>

Unfortunately, the current code couldn't be compiled:

CC arch/riscv/kernel/patch.o
In file included from ./include/linux/kernel.h:11,
from ./include/linux/list.h:9,
from ./include/linux/preempt.h:11,
from ./include/linux/spinlock.h:51,
from arch/riscv/kernel/patch.c:6:
In function ‘fix_to_virt’,
inlined from ‘patch_map’ at arch/riscv/kernel/patch.c:37:17:
./include/linux/compiler.h:392:38: error: call to ‘__compiletime_assert_205’ declared with attribute error: BUILD_BUG_ON failed: idx >= __end_of_fixed_addresses
_compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
^
./include/linux/compiler.h:373:4: note: in definition of macro ‘__compiletime_assert’
prefix ## suffix(); \
^~~~~~
./include/linux/compiler.h:392:2: note: in expansion of macro ‘_compiletime_assert’
_compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
^~~~~~~~~~~~~~~~~~~
./include/linux/build_bug.h:39:37: note: in expansion of macro ‘compiletime_assert’
#define BUILD_BUG_ON_MSG(cond, msg) compiletime_assert(!(cond), msg)
^~~~~~~~~~~~~~~~~~
./include/linux/build_bug.h:50:2: note: in expansion of macro ‘BUILD_BUG_ON_MSG’
BUILD_BUG_ON_MSG(condition, "BUILD_BUG_ON failed: " #condition)
^~~~~~~~~~~~~~~~
./include/asm-generic/fixmap.h:32:2: note: in expansion of macro ‘BUILD_BUG_ON’
BUILD_BUG_ON(idx >= __end_of_fixed_addresses);
^~~~~~~~~~~~

Because fix_to_virt(, idx) needs a const value, not a dynamic variable (such
as one passed in reg-a0); otherwise BUILD_BUG_ON fails with "idx >=
__end_of_fixed_addresses".

Signed-off-by: Guo Ren <[email protected]>
Reviewed-by: Masami Hiramatsu <[email protected]>
Cc: Zong Li <[email protected]>
---
Changelog V2:
- Use __always_inline, the same as fix_to_virt
- Use 'const unsigned int' for the 2nd param

Signed-off-by: Guo Ren <[email protected]>
---
arch/riscv/kernel/patch.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/kernel/patch.c b/arch/riscv/kernel/patch.c
index d4a64df..3179a4e 100644
--- a/arch/riscv/kernel/patch.c
+++ b/arch/riscv/kernel/patch.c
@@ -20,7 +20,12 @@ struct patch_insn {
};

#ifdef CONFIG_MMU
-static void *patch_map(void *addr, int fixmap)
+/*
+ * The fix_to_virt(, idx) needs a const value (not a dynamic variable of
+ * reg-a0) or BUILD_BUG_ON failed with "idx >= __end_of_fixed_addresses".
+ * So use '__always_inline' and 'const unsigned int fixmap' here.
+ */
+static __always_inline void *patch_map(void *addr, const unsigned int fixmap)
{
uintptr_t uintaddr = (uintptr_t) addr;
struct page *page;
@@ -37,7 +42,6 @@ static void *patch_map(void *addr, int fixmap)
return (void *)set_fixmap_offset(fixmap, page_to_phys(page) +
(uintaddr & ~PAGE_MASK));
}
-NOKPROBE_SYMBOL(patch_map);

static void patch_unmap(int fixmap)
{
--
2.7.4

2020-07-04 06:41:48

by Pekka Enberg

Subject: Re: [PATCH V1 0/5] riscv: Add k/uprobe supported

On Sat, Jul 4, 2020 at 6:34 AM <[email protected]> wrote:
> The patchset includes kprobe/uprobe support and some related fixups.

Nice!

On Sat, Jul 4, 2020 at 6:34 AM <[email protected]> wrote:
> There is no single step exception in riscv ISA, so utilize ebreak to
> simulate. Some pc related instructions couldn't be executed out of line
> and some system/fence instructions couldn't be a trace site at all.
> So we give out a reject list and simulate list in decode-insn.c.

Can you elaborate on what you mean by this? Why would you need a
single-step facility for kprobes? Is it for executing the instruction
that was replaced with a probe breakpoint?

Also, the "Debug Specification" [1] specifies a single-step facility
for RISC-V -- why is that not useful for implementing kprobes?

1. https://riscv.org/specifications/debug-specification/

- Pekka

2020-07-04 14:57:01

by Guo Ren

Subject: Re: [PATCH V1 0/5] riscv: Add k/uprobe supported

Hi Pekka,

On Sat, Jul 4, 2020 at 2:40 PM Pekka Enberg <[email protected]> wrote:
>
> On Sat, Jul 4, 2020 at 6:34 AM <[email protected]> wrote:
> > The patchset includes kprobe/uprobe support and some related fixups.
>
> Nice!
>
> On Sat, Jul 4, 2020 at 6:34 AM <[email protected]> wrote:
> > There is no single step exception in riscv ISA, so utilize ebreak to
> > simulate. Some pc related instructions couldn't be executed out of line
> > and some system/fence instructions couldn't be a trace site at all.
> > So we give out a reject list and simulate list in decode-insn.c.
>
> Can you elaborate on what you mean by this? Why would you need a
> single-step facility for kprobes? Is it for executing the instruction
> that was replaced with a probe breakpoint?

It's the single-step exception, not single-step facility!

Other arches use hardware single-step exception for k/uprobe, eg:
- powerpc: regs->msr |= MSR_SINGLESTEP
- arm/arm64: PSTATE.D for enabling software step exceptions
- s390: Set PER control regs, turns on single step for the given address
- x86: regs->flags |= X86_EFLAGS_TF
- csky: of course use hw single step :)

Yes, All the above arches use a hardware single-step exception
mechanism to execute the instruction that was replaced with a probe
breakpoint.

>
> Also, the "Debug Specification" [1] specifies a single-step facility
> for RISC-V -- why is that not useful for implementing kprobes?
>
> 1. https://riscv.org/specifications/debug-specification/
We need a single-step exception, not single-stepping via JTAG, so the above
spec is not related to this patchset.

See riscv-Privileged spec:

Interrupt  Exception Code  Description
1          0               Reserved
1          1               Supervisor software interrupt
1          2-4             Reserved
1          5               Supervisor timer interrupt
1          6-8             Reserved
1          9               Supervisor external interrupt
1          10-15           Reserved
1          >=16            Available for platform use
0          0               Instruction address misaligned
0          1               Instruction access fault
0          2               Illegal instruction
0          3               Breakpoint
0          4               Load address misaligned
0          5               Load access fault
0          6               Store/AMO address misaligned
0          7               Store/AMO access fault
0          8               Environment call from U-mode
0          9               Environment call from S-mode
0          10-11           Reserved
0          12              Instruction page fault
0          13              Load page fault
0          14              Reserved
0          15              Store/AMO page fault
0          16-23           Reserved
0          24-31           Available for custom use
0          32-47           Reserved
0          48-63           Available for custom use
0          >=64            Reserved

No single step!

So I insert an "ebreak" instruction behind the target single-step
instruction to simulate the same mechanism.

--
Best Regards
Guo Ren

ML: https://lore.kernel.org/linux-csky/

2020-07-04 18:33:13

by Pekka Enberg

Subject: Re: [PATCH V1 0/5] riscv: Add k/uprobe supported

Hi Guo,

On Sat, Jul 4, 2020 at 6:34 AM <[email protected]> wrote:
> > > There is no single step exception in riscv ISA, so utilize ebreak to
> > > simulate. Some pc related instructions couldn't be executed out of line
> > > and some system/fence instructions couldn't be a trace site at all.
> > > So we give out a reject list and simulate list in decode-insn.c.

On Sat, Jul 4, 2020 at 2:40 PM Pekka Enberg <[email protected]> wrote:
> > Can you elaborate on what you mean by this? Why would you need a
> > single-step facility for kprobes? Is it for executing the instruction
> > that was replaced with a probe breakpoint?

On Sat, Jul 4, 2020 at 5:55 PM Guo Ren <[email protected]> wrote:
> It's the single-step exception, not single-step facility!

Aah, right, I didn't read the specification carefully enough. Thanks
for taking the time to clarify this!

FWIW, for the whole series:

Reviewed-by: Pekka Enberg <[email protected]>

- Pekka

2020-07-06 10:12:52

by Masami Hiramatsu

Subject: Re: [PATCH V1 4/5] riscv: Add kprobes supported

Hi Guo,

On Sat, 4 Jul 2020 03:34:18 +0000
[email protected] wrote:

> From: Guo Ren <[email protected]>
>
> This patch enables "kprobe & kretprobe" to work with ftrace
> interface. It utilizes a software breakpoint as the single-step
> mechanism.
>
> Some instructions which can't be single-step executed must be
> simulated in kernel execution slot, such as: branch, jal, auipc,
> la ...
>
> Some instructions should be rejected for probing and we use a
> blacklist to filter, such as: ecall, ebreak, ...
>
> We use ebreak & c.ebreak to replace origin instruction and the
> kprobe handler prepares an executable memory slot for out-of-line
> execution with a copy of the original instruction being probed.
> In the execution slot we add an ebreak behind the original instruction to
> simulate a single-step mechanism.
>
> The patch is based on packi's work [1] and csky's work [2].
> - The kprobes_trampoline.S is all from packi's patch
> - The single-step mechanism is newly designed for riscv, which has no hw
> single-step trap
> - The simulation codes are from csky
> - Frankly, all codes refer to other archs' implementation
>
> [1] https://lore.kernel.org/linux-riscv/[email protected]/
> [2] https://lore.kernel.org/linux-csky/[email protected]/
>

This looks good to me. Thanks for updating !

Acked-by: Masami Hiramatsu <[email protected]>

Thank you,


> Signed-off-by: Guo Ren <[email protected]>
> Co-Developed-by: Patrick Stählin <[email protected]>
> Cc: Patrick Stählin <[email protected]>
> Cc: Masami Hiramatsu <[email protected]>
> Cc: Palmer Dabbelt <[email protected]>
> Cc: Björn Töpel <[email protected]>
> ---
> arch/riscv/Kconfig | 2 +
> arch/riscv/include/asm/kprobes.h | 40 +++
> arch/riscv/include/asm/probes.h | 24 ++
> arch/riscv/kernel/Makefile | 1 +
> arch/riscv/kernel/probes/Makefile | 4 +
> arch/riscv/kernel/probes/decode-insn.c | 48 +++
> arch/riscv/kernel/probes/decode-insn.h | 18 +
> arch/riscv/kernel/probes/kprobes.c | 471 ++++++++++++++++++++++++++
> arch/riscv/kernel/probes/kprobes_trampoline.S | 93 +++++
> arch/riscv/kernel/probes/simulate-insn.c | 85 +++++
> arch/riscv/kernel/probes/simulate-insn.h | 47 +++
> arch/riscv/kernel/traps.c | 9 +
> arch/riscv/mm/fault.c | 4 +
> 13 files changed, 846 insertions(+)
> create mode 100644 arch/riscv/include/asm/probes.h
> create mode 100644 arch/riscv/kernel/probes/Makefile
> create mode 100644 arch/riscv/kernel/probes/decode-insn.c
> create mode 100644 arch/riscv/kernel/probes/decode-insn.h
> create mode 100644 arch/riscv/kernel/probes/kprobes.c
> create mode 100644 arch/riscv/kernel/probes/kprobes_trampoline.S
> create mode 100644 arch/riscv/kernel/probes/simulate-insn.c
> create mode 100644 arch/riscv/kernel/probes/simulate-insn.h
>
> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> index 58d6f66..a295f0b 100644
> --- a/arch/riscv/Kconfig
> +++ b/arch/riscv/Kconfig
> @@ -57,6 +57,8 @@ config RISCV
> select HAVE_EBPF_JIT if MMU
> select HAVE_FUTEX_CMPXCHG if FUTEX
> select HAVE_GENERIC_VDSO if MMU && 64BIT
> + select HAVE_KPROBES
> + select HAVE_KRETPROBES
> select HAVE_PCI
> select HAVE_PERF_EVENTS
> select HAVE_PERF_REGS
> diff --git a/arch/riscv/include/asm/kprobes.h b/arch/riscv/include/asm/kprobes.h
> index 56a98ea3..4647d38 100644
> --- a/arch/riscv/include/asm/kprobes.h
> +++ b/arch/riscv/include/asm/kprobes.h
> @@ -11,4 +11,44 @@
>
> #include <asm-generic/kprobes.h>
>
> +#ifdef CONFIG_KPROBES
> +#include <linux/types.h>
> +#include <linux/ptrace.h>
> +#include <linux/percpu.h>
> +
> +#define __ARCH_WANT_KPROBES_INSN_SLOT
> +#define MAX_INSN_SIZE 2
> +
> +#define flush_insn_slot(p) do { } while (0)
> +#define kretprobe_blacklist_size 0
> +
> +#include <asm/probes.h>
> +
> +struct prev_kprobe {
> + struct kprobe *kp;
> + unsigned int status;
> +};
> +
> +/* Single step context for kprobe */
> +struct kprobe_step_ctx {
> + unsigned long ss_pending;
> + unsigned long match_addr;
> +};
> +
> +/* per-cpu kprobe control block */
> +struct kprobe_ctlblk {
> + unsigned int kprobe_status;
> + unsigned long saved_status;
> + struct prev_kprobe prev_kprobe;
> + struct kprobe_step_ctx ss_ctx;
> +};
> +
> +void arch_remove_kprobe(struct kprobe *p);
> +int kprobe_fault_handler(struct pt_regs *regs, unsigned int trapnr);
> +bool kprobe_breakpoint_handler(struct pt_regs *regs);
> +bool kprobe_single_step_handler(struct pt_regs *regs);
> +void kretprobe_trampoline(void);
> +void __kprobes *trampoline_probe_handler(struct pt_regs *regs);
> +
> +#endif /* CONFIG_KPROBES */
> #endif /* _ASM_RISCV_KPROBES_H */
> diff --git a/arch/riscv/include/asm/probes.h b/arch/riscv/include/asm/probes.h
> new file mode 100644
> index 00000000..a787e6d
> --- /dev/null
> +++ b/arch/riscv/include/asm/probes.h
> @@ -0,0 +1,24 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#ifndef _ASM_RISCV_PROBES_H
> +#define _ASM_RISCV_PROBES_H
> +
> +typedef u32 probe_opcode_t;
> +typedef bool (probes_handler_t) (u32 opcode, unsigned long addr, struct pt_regs *);
> +
> +/* architecture specific copy of original instruction */
> +struct arch_probe_insn {
> + probe_opcode_t *insn;
> + probes_handler_t *handler;
> + /* restore address after simulation */
> + unsigned long restore;
> +};
> +
> +#ifdef CONFIG_KPROBES
> +typedef u32 kprobe_opcode_t;
> +struct arch_specific_insn {
> + struct arch_probe_insn api;
> +};
> +#endif
> +
> +#endif /* _ASM_RISCV_PROBES_H */
> diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
> index b355cf4..c3fff3e 100644
> --- a/arch/riscv/kernel/Makefile
> +++ b/arch/riscv/kernel/Makefile
> @@ -29,6 +29,7 @@ obj-y += riscv_ksyms.o
> obj-y += stacktrace.o
> obj-y += cacheinfo.o
> obj-y += patch.o
> +obj-y += probes/
> obj-$(CONFIG_MMU) += vdso.o vdso/
>
> obj-$(CONFIG_RISCV_M_MODE) += clint.o traps_misaligned.o
> diff --git a/arch/riscv/kernel/probes/Makefile b/arch/riscv/kernel/probes/Makefile
> new file mode 100644
> index 00000000..8a39507
> --- /dev/null
> +++ b/arch/riscv/kernel/probes/Makefile
> @@ -0,0 +1,4 @@
> +# SPDX-License-Identifier: GPL-2.0
> +obj-$(CONFIG_KPROBES) += kprobes.o decode-insn.o simulate-insn.o
> +obj-$(CONFIG_KPROBES) += kprobes_trampoline.o
> +CFLAGS_REMOVE_simulate-insn.o = $(CC_FLAGS_FTRACE)
> diff --git a/arch/riscv/kernel/probes/decode-insn.c b/arch/riscv/kernel/probes/decode-insn.c
> new file mode 100644
> index 00000000..0876c30
> --- /dev/null
> +++ b/arch/riscv/kernel/probes/decode-insn.c
> @@ -0,0 +1,48 @@
> +// SPDX-License-Identifier: GPL-2.0+
> +
> +#include <linux/kernel.h>
> +#include <linux/kprobes.h>
> +#include <linux/module.h>
> +#include <linux/kallsyms.h>
> +#include <asm/sections.h>
> +
> +#include "decode-insn.h"
> +#include "simulate-insn.h"
> +
> +/*
> + * Return:
> + * INSN_REJECTED if the instruction cannot be probed,
> + * INSN_GOOD_NO_SLOT if the instruction is simulated and needs no slot,
> + * INSN_GOOD if the instruction is single-stepped from an out-of-line slot.
> + */
> +enum probe_insn __kprobes
> +riscv_probe_decode_insn(probe_opcode_t *addr, struct arch_probe_insn *api)
> +{
> + probe_opcode_t insn = le32_to_cpu(*addr);
> +
> + /*
> + * Reject instructions list:
> + */
> + RISCV_INSN_REJECTED(system, insn);
> + RISCV_INSN_REJECTED(fence, insn);
> +
> + /*
> + * Simulate instructions list:
> + * TODO: the REJECTED ones below need to be implemented
> + */
> +#ifdef CONFIG_RISCV_ISA_C
> + RISCV_INSN_REJECTED(c_j, insn);
> + RISCV_INSN_REJECTED(c_jr, insn);
> + RISCV_INSN_REJECTED(c_jal, insn);
> + RISCV_INSN_REJECTED(c_jalr, insn);
> + RISCV_INSN_REJECTED(c_beqz, insn);
> + RISCV_INSN_REJECTED(c_bnez, insn);
> + RISCV_INSN_REJECTED(c_ebreak, insn);
> +#endif
> +
> + RISCV_INSN_REJECTED(auipc, insn);
> + RISCV_INSN_REJECTED(branch, insn);
> +
> + RISCV_INSN_SET_SIMULATE(jal, insn);
> + RISCV_INSN_SET_SIMULATE(jalr, insn);
> +
> + return INSN_GOOD;
> +}
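The decision flow above is just mask/value matching on fixed opcode fields. A standalone userspace sketch of the same classification (the masks and values are copied from simulate-insn.h; this is an illustration, not the kernel code itself):

```c
#include <assert.h>
#include <stdint.h>

enum probe_insn { INSN_REJECTED, INSN_GOOD_NO_SLOT, INSN_GOOD };

/* Classify one 32-bit instruction the way riscv_probe_decode_insn() does:
 * trap/fence and (for now) pc-relative auipc/branch are rejected, jal/jalr
 * are simulated, everything else can run from the out-of-line slot. */
static enum probe_insn classify(uint32_t insn)
{
	uint32_t op = insn & 0x7f;

	if (op == 0x73 || op == 0x0f)		/* system, fence */
		return INSN_REJECTED;
	if (op == 0x17 || op == 0x63)		/* auipc, branch: no simulation yet */
		return INSN_REJECTED;
	if (op == 0x6f || (insn & 0x707f) == 0x67)	/* jal, jalr */
		return INSN_GOOD_NO_SLOT;
	return INSN_GOOD;
}
```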
> diff --git a/arch/riscv/kernel/probes/decode-insn.h b/arch/riscv/kernel/probes/decode-insn.h
> new file mode 100644
> index 00000000..42269a7
> --- /dev/null
> +++ b/arch/riscv/kernel/probes/decode-insn.h
> @@ -0,0 +1,18 @@
> +/* SPDX-License-Identifier: GPL-2.0+ */
> +
> +#ifndef _RISCV_KERNEL_KPROBES_DECODE_INSN_H
> +#define _RISCV_KERNEL_KPROBES_DECODE_INSN_H
> +
> +#include <asm/sections.h>
> +#include <asm/kprobes.h>
> +
> +enum probe_insn {
> + INSN_REJECTED,
> + INSN_GOOD_NO_SLOT,
> + INSN_GOOD,
> +};
> +
> +enum probe_insn __kprobes
> +riscv_probe_decode_insn(probe_opcode_t *addr, struct arch_probe_insn *api);
> +
> +#endif /* _RISCV_KERNEL_KPROBES_DECODE_INSN_H */
> diff --git a/arch/riscv/kernel/probes/kprobes.c b/arch/riscv/kernel/probes/kprobes.c
> new file mode 100644
> index 00000000..31b6196
> --- /dev/null
> +++ b/arch/riscv/kernel/probes/kprobes.c
> @@ -0,0 +1,471 @@
> +// SPDX-License-Identifier: GPL-2.0+
> +
> +#include <linux/kprobes.h>
> +#include <linux/extable.h>
> +#include <linux/slab.h>
> +#include <linux/stop_machine.h>
> +#include <asm/ptrace.h>
> +#include <linux/uaccess.h>
> +#include <asm/sections.h>
> +#include <asm/cacheflush.h>
> +#include <asm/bug.h>
> +#include <asm/patch.h>
> +
> +#include "decode-insn.h"
> +
> +DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
> +DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
> +
> +static void __kprobes
> +post_kprobe_handler(struct kprobe_ctlblk *, struct pt_regs *);
> +
> +static void __kprobes arch_prepare_ss_slot(struct kprobe *p)
> +{
> + unsigned long offset = GET_INSN_LENGTH(p->opcode);
> +
> + p->ainsn.api.restore = (unsigned long)p->addr + offset;
> +
> + patch_text(p->ainsn.api.insn, p->opcode);
> + patch_text((void *)((unsigned long)(p->ainsn.api.insn) + offset),
> + __BUG_INSN_32);
> +}
> +
> +static void __kprobes arch_prepare_simulate(struct kprobe *p)
> +{
> + p->ainsn.api.restore = 0;
> +}
> +
> +static void __kprobes arch_simulate_insn(struct kprobe *p, struct pt_regs *regs)
> +{
> + struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
> +
> + if (p->ainsn.api.handler)
> + p->ainsn.api.handler((u32)p->opcode,
> + (unsigned long)p->addr, regs);
> +
> + post_kprobe_handler(kcb, regs);
> +}
> +
> +int __kprobes arch_prepare_kprobe(struct kprobe *p)
> +{
> + unsigned long probe_addr = (unsigned long)p->addr;
> +
> + if (probe_addr & 0x1) {
> + pr_warn("Address not aligned.\n");
> +
> + return -EINVAL;
> + }
> +
> + /* copy instruction */
> + p->opcode = le32_to_cpu(*p->addr);
> +
> + /* decode instruction */
> + switch (riscv_probe_decode_insn(p->addr, &p->ainsn.api)) {
> + case INSN_REJECTED: /* insn not supported */
> + return -EINVAL;
> +
> + case INSN_GOOD_NO_SLOT: /* insn needs simulation */
> + p->ainsn.api.insn = NULL;
> + break;
> +
> + case INSN_GOOD: /* instruction uses slot */
> + p->ainsn.api.insn = get_insn_slot();
> + if (!p->ainsn.api.insn)
> + return -ENOMEM;
> + break;
> + }
> +
> + /* prepare the instruction */
> + if (p->ainsn.api.insn)
> + arch_prepare_ss_slot(p);
> + else
> + arch_prepare_simulate(p);
> +
> + return 0;
> +}
> +
> +/* install breakpoint in text */
> +void __kprobes arch_arm_kprobe(struct kprobe *p)
> +{
> + if ((p->opcode & __INSN_LENGTH_MASK) == __INSN_LENGTH_32)
> + patch_text(p->addr, __BUG_INSN_32);
> + else
> + patch_text(p->addr, __BUG_INSN_16);
> +}
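The choice between ebreak and c.ebreak here rests on the RVC length rule that GET_INSN_LENGTH encodes. A minimal standalone equivalent (illustrative, not the kernel macro itself):

```c
#include <assert.h>
#include <stdint.h>

/* RISC-V length rule: an instruction whose two low opcode bits are 11 is
 * 32-bit wide; any other low-bit pattern is a 16-bit compressed (RVC)
 * instruction, so the narrower c.ebreak must be patched in its place. */
static unsigned int insn_length(uint32_t insn)
{
	return (insn & 0x3) == 0x3 ? 4 : 2;
}
```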
> +
> +/* remove breakpoint from text */
> +void __kprobes arch_disarm_kprobe(struct kprobe *p)
> +{
> + patch_text(p->addr, p->opcode);
> +}
> +
> +void __kprobes arch_remove_kprobe(struct kprobe *p)
> +{
> +}
> +
> +static void __kprobes save_previous_kprobe(struct kprobe_ctlblk *kcb)
> +{
> + kcb->prev_kprobe.kp = kprobe_running();
> + kcb->prev_kprobe.status = kcb->kprobe_status;
> +}
> +
> +static void __kprobes restore_previous_kprobe(struct kprobe_ctlblk *kcb)
> +{
> + __this_cpu_write(current_kprobe, kcb->prev_kprobe.kp);
> + kcb->kprobe_status = kcb->prev_kprobe.status;
> +}
> +
> +static void __kprobes set_current_kprobe(struct kprobe *p)
> +{
> + __this_cpu_write(current_kprobe, p);
> +}
> +
> +/*
> + * Interrupts need to be disabled before single-step mode is set, and not
> + * re-enabled until after single-step mode ends.
> + * Without disabling interrupts on the local CPU, there is a chance of an
> + * interrupt occurring between the exception return and the start of the
> + * out-of-line single-step, which would result in wrongly single-stepping
> + * into the interrupt handler.
> + */
> +static void __kprobes kprobes_save_local_irqflag(struct kprobe_ctlblk *kcb,
> + struct pt_regs *regs)
> +{
> + kcb->saved_status = regs->status;
> + regs->status &= ~SR_SPIE;
> +}
> +
> +static void __kprobes kprobes_restore_local_irqflag(struct kprobe_ctlblk *kcb,
> + struct pt_regs *regs)
> +{
> + regs->status = kcb->saved_status;
> +}
> +
> +static void __kprobes
> +set_ss_context(struct kprobe_ctlblk *kcb, unsigned long addr, struct kprobe *p)
> +{
> + unsigned long offset = GET_INSN_LENGTH(p->opcode);
> +
> + kcb->ss_ctx.ss_pending = true;
> + kcb->ss_ctx.match_addr = addr + offset;
> +}
> +
> +static void __kprobes clear_ss_context(struct kprobe_ctlblk *kcb)
> +{
> + kcb->ss_ctx.ss_pending = false;
> + kcb->ss_ctx.match_addr = 0;
> +}
> +
> +static void __kprobes setup_singlestep(struct kprobe *p,
> + struct pt_regs *regs,
> + struct kprobe_ctlblk *kcb, int reenter)
> +{
> + unsigned long slot;
> +
> + if (reenter) {
> + save_previous_kprobe(kcb);
> + set_current_kprobe(p);
> + kcb->kprobe_status = KPROBE_REENTER;
> + } else {
> + kcb->kprobe_status = KPROBE_HIT_SS;
> + }
> +
> + if (p->ainsn.api.insn) {
> + /* prepare for single stepping */
> + slot = (unsigned long)p->ainsn.api.insn;
> +
> + set_ss_context(kcb, slot, p); /* mark pending ss */
> +
> + /* IRQs and single stepping do not mix well. */
> + kprobes_save_local_irqflag(kcb, regs);
> +
> + instruction_pointer_set(regs, slot);
> + } else {
> + /* insn simulation */
> + arch_simulate_insn(p, regs);
> + }
> +}
> +
> +static int __kprobes reenter_kprobe(struct kprobe *p,
> + struct pt_regs *regs,
> + struct kprobe_ctlblk *kcb)
> +{
> + switch (kcb->kprobe_status) {
> + case KPROBE_HIT_SSDONE:
> + case KPROBE_HIT_ACTIVE:
> + kprobes_inc_nmissed_count(p);
> + setup_singlestep(p, regs, kcb, 1);
> + break;
> + case KPROBE_HIT_SS:
> + case KPROBE_REENTER:
> + pr_warn("Unrecoverable kprobe detected.\n");
> + dump_kprobe(p);
> + BUG();
> + break;
> + default:
> + WARN_ON(1);
> + return 0;
> + }
> +
> + return 1;
> +}
> +
> +static void __kprobes
> +post_kprobe_handler(struct kprobe_ctlblk *kcb, struct pt_regs *regs)
> +{
> + struct kprobe *cur = kprobe_running();
> +
> + if (!cur)
> + return;
> +
> + /* return addr restore if non-branching insn */
> + if (cur->ainsn.api.restore != 0)
> + regs->epc = cur->ainsn.api.restore;
> +
> + /* restore back original saved kprobe variables and continue */
> + if (kcb->kprobe_status == KPROBE_REENTER) {
> + restore_previous_kprobe(kcb);
> + return;
> + }
> +
> + /* call post handler */
> + kcb->kprobe_status = KPROBE_HIT_SSDONE;
> + if (cur->post_handler) {
> + /*
> + * post_handler may itself hit a breakpoint and be
> + * single-stepped, i.e. trigger a recursive exception.
> + */
> + cur->post_handler(cur, regs, 0);
> + }
> +
> + reset_current_kprobe();
> +}
> +
> +int __kprobes kprobe_fault_handler(struct pt_regs *regs, unsigned int trapnr)
> +{
> + struct kprobe *cur = kprobe_running();
> + struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
> +
> + switch (kcb->kprobe_status) {
> + case KPROBE_HIT_SS:
> + case KPROBE_REENTER:
> + /*
> + * We are here because the instruction being single
> + * stepped caused a page fault. We reset the current
> + * kprobe, point the ip back to the probe address,
> + * and allow the page fault handler to continue as a
> + * normal page fault.
> + */
> + regs->epc = (unsigned long) cur->addr;
> + if (!instruction_pointer(regs))
> + BUG();
> +
> + if (kcb->kprobe_status == KPROBE_REENTER)
> + restore_previous_kprobe(kcb);
> + else
> + reset_current_kprobe();
> +
> + break;
> + case KPROBE_HIT_ACTIVE:
> + case KPROBE_HIT_SSDONE:
> + /*
> + * We increment the nmissed count for accounting,
> + * we can also use npre/npostfault count for accounting
> + * these specific fault cases.
> + */
> + kprobes_inc_nmissed_count(cur);
> +
> + /*
> + * We come here because an instruction in the pre/post
> + * handler caused the page fault. This could happen if
> + * the handler tried to access user space via
> + * copy_from_user(), get_user(), etc. Let the
> + * user-specified handler try to fix it first.
> + */
> + if (cur->fault_handler && cur->fault_handler(cur, regs, trapnr))
> + return 1;
> +
> + /*
> + * In case the user-specified fault handler returned
> + * zero, try to fix up.
> + */
> + if (fixup_exception(regs))
> + return 1;
> + }
> + return 0;
> +}
> +
> +bool __kprobes
> +kprobe_breakpoint_handler(struct pt_regs *regs)
> +{
> + struct kprobe *p, *cur_kprobe;
> + struct kprobe_ctlblk *kcb;
> + unsigned long addr = instruction_pointer(regs);
> +
> + kcb = get_kprobe_ctlblk();
> + cur_kprobe = kprobe_running();
> +
> + p = get_kprobe((kprobe_opcode_t *) addr);
> +
> + if (p) {
> + if (cur_kprobe) {
> + if (reenter_kprobe(p, regs, kcb))
> + return true;
> + } else {
> + /* Probe hit */
> + set_current_kprobe(p);
> + kcb->kprobe_status = KPROBE_HIT_ACTIVE;
> +
> + /*
> + * If we have no pre-handler or it returned 0, we
> + * continue with normal processing. If we have a
> + * pre-handler and it returned non-zero, it will
> + * have modified the execution path, so there is no
> + * need to single-step. Just reset the current
> + * kprobe and exit.
> + *
> + * The pre-handler itself may hit a breakpoint and
> + * single-step through it before returning.
> + */
> + if (!p->pre_handler || !p->pre_handler(p, regs))
> + setup_singlestep(p, regs, kcb, 0);
> + else
> + reset_current_kprobe();
> + }
> + return true;
> + }
> +
> + /*
> + * The breakpoint instruction was removed right
> + * after we hit it. Another cpu has removed
> + * either a probepoint or a debugger breakpoint
> + * at this address. In either case, no further
> + * handling of this interrupt is appropriate.
> + * Return to the original instruction and continue.
> + */
> + return false;
> +}
> +
> +bool __kprobes
> +kprobe_single_step_handler(struct pt_regs *regs)
> +{
> + struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
> +
> + if (kcb->ss_ctx.ss_pending &&
> + kcb->ss_ctx.match_addr == instruction_pointer(regs)) {
> + clear_ss_context(kcb); /* clear pending ss */
> +
> + kprobes_restore_local_irqflag(kcb, regs);
> +
> + post_kprobe_handler(kcb, regs);
> + return true;
> + }
> + return false;
> +}
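The ss_pending/match_addr bookkeeping above can be modeled in isolation. A sketch of the arm/complete cycle — the field names mirror struct kprobe_step_ctx, everything else is illustrative: the breakpoint handler arms a match address just past the copied instruction in the slot, and the next ebreak completes the step only when the PC matches.

```c
#include <assert.h>
#include <stdbool.h>

/* Standalone model of the pending single-step state machine. */
struct ss_ctx {
	bool ss_pending;
	unsigned long match_addr;
};

/* Arm the step: expect the next ebreak right after the copied insn. */
static void set_ss(struct ss_ctx *c, unsigned long slot, unsigned int len)
{
	c->ss_pending = true;
	c->match_addr = slot + len;
}

/* Complete the step only if this trap is the one we armed. */
static bool ss_hit(struct ss_ctx *c, unsigned long pc)
{
	if (c->ss_pending && c->match_addr == pc) {
		c->ss_pending = false;
		c->match_addr = 0;
		return true;
	}
	return false;
}
```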
> +
> +/*
> + * Provide a blacklist of symbols identifying ranges which cannot be kprobed.
> + * This blacklist is exposed to userspace via debugfs (kprobes/blacklist).
> + */
> +int __init arch_populate_kprobe_blacklist(void)
> +{
> + int ret;
> +
> + ret = kprobe_add_area_blacklist((unsigned long)__irqentry_text_start,
> + (unsigned long)__irqentry_text_end);
> + return ret;
> +}
> +
> +void __kprobes __used *trampoline_probe_handler(struct pt_regs *regs)
> +{
> + struct kretprobe_instance *ri = NULL;
> + struct hlist_head *head, empty_rp;
> + struct hlist_node *tmp;
> + unsigned long flags, orig_ret_address = 0;
> + unsigned long trampoline_address =
> + (unsigned long)&kretprobe_trampoline;
> + kprobe_opcode_t *correct_ret_addr = NULL;
> +
> + INIT_HLIST_HEAD(&empty_rp);
> + kretprobe_hash_lock(current, &head, &flags);
> +
> + /*
> + * It is possible to have multiple instances associated with a given
> + * task either because multiple functions in the call path have
> + * return probes installed on them, and/or more than one
> + * return probe was registered for a target function.
> + *
> + * We can handle this because:
> + * - instances are always pushed into the head of the list
> + * - when multiple return probes are registered for the same
> + * function, the (chronologically) first instance's ret_addr
> + * will be the real return address, and all the rest will
> + * point to kretprobe_trampoline.
> + */
> + hlist_for_each_entry_safe(ri, tmp, head, hlist) {
> + if (ri->task != current)
> + /* another task is sharing our hash bucket */
> + continue;
> +
> + orig_ret_address = (unsigned long)ri->ret_addr;
> +
> + if (orig_ret_address != trampoline_address)
> + /*
> + * This is the real return address. Any other
> + * instances associated with this task are for
> + * other calls deeper on the call stack
> + */
> + break;
> + }
> +
> + kretprobe_assert(ri, orig_ret_address, trampoline_address);
> +
> + correct_ret_addr = ri->ret_addr;
> + hlist_for_each_entry_safe(ri, tmp, head, hlist) {
> + if (ri->task != current)
> + /* another task is sharing our hash bucket */
> + continue;
> +
> + orig_ret_address = (unsigned long)ri->ret_addr;
> + if (ri->rp && ri->rp->handler) {
> + __this_cpu_write(current_kprobe, &ri->rp->kp);
> + get_kprobe_ctlblk()->kprobe_status = KPROBE_HIT_ACTIVE;
> + ri->ret_addr = correct_ret_addr;
> + ri->rp->handler(ri, regs);
> + __this_cpu_write(current_kprobe, NULL);
> + }
> +
> + recycle_rp_inst(ri, &empty_rp);
> +
> + if (orig_ret_address != trampoline_address)
> + /*
> + * This is the real return address. Any other
> + * instances associated with this task are for
> + * other calls deeper on the call stack
> + */
> + break;
> + }
> +
> + kretprobe_hash_unlock(current, &flags);
> +
> + hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) {
> + hlist_del(&ri->hlist);
> + kfree(ri);
> + }
> + return (void *)orig_ret_address;
> +}
> +
> +void __kprobes arch_prepare_kretprobe(struct kretprobe_instance *ri,
> + struct pt_regs *regs)
> +{
> + ri->ret_addr = (kprobe_opcode_t *)regs->ra;
> + regs->ra = (unsigned long) &kretprobe_trampoline;
> +}
> +
> +int __kprobes arch_trampoline_kprobe(struct kprobe *p)
> +{
> + return 0;
> +}
> +
> +int __init arch_init_kprobes(void)
> +{
> + return 0;
> +}
> diff --git a/arch/riscv/kernel/probes/kprobes_trampoline.S b/arch/riscv/kernel/probes/kprobes_trampoline.S
> new file mode 100644
> index 00000000..6e85d02
> --- /dev/null
> +++ b/arch/riscv/kernel/probes/kprobes_trampoline.S
> @@ -0,0 +1,93 @@
> +/* SPDX-License-Identifier: GPL-2.0+ */
> +/*
> + * Author: Patrick Stählin <[email protected]>
> + */
> +#include <linux/linkage.h>
> +
> +#include <asm/asm.h>
> +#include <asm/asm-offsets.h>
> +
> + .text
> + .altmacro
> +
> + .macro save_all_base_regs
> + REG_S x1, PT_RA(sp)
> + REG_S x3, PT_GP(sp)
> + REG_S x4, PT_TP(sp)
> + REG_S x5, PT_T0(sp)
> + REG_S x6, PT_T1(sp)
> + REG_S x7, PT_T2(sp)
> + REG_S x8, PT_S0(sp)
> + REG_S x9, PT_S1(sp)
> + REG_S x10, PT_A0(sp)
> + REG_S x11, PT_A1(sp)
> + REG_S x12, PT_A2(sp)
> + REG_S x13, PT_A3(sp)
> + REG_S x14, PT_A4(sp)
> + REG_S x15, PT_A5(sp)
> + REG_S x16, PT_A6(sp)
> + REG_S x17, PT_A7(sp)
> + REG_S x18, PT_S2(sp)
> + REG_S x19, PT_S3(sp)
> + REG_S x20, PT_S4(sp)
> + REG_S x21, PT_S5(sp)
> + REG_S x22, PT_S6(sp)
> + REG_S x23, PT_S7(sp)
> + REG_S x24, PT_S8(sp)
> + REG_S x25, PT_S9(sp)
> + REG_S x26, PT_S10(sp)
> + REG_S x27, PT_S11(sp)
> + REG_S x28, PT_T3(sp)
> + REG_S x29, PT_T4(sp)
> + REG_S x30, PT_T5(sp)
> + REG_S x31, PT_T6(sp)
> + .endm
> +
> + .macro restore_all_base_regs
> + REG_L x3, PT_GP(sp)
> + REG_L x4, PT_TP(sp)
> + REG_L x5, PT_T0(sp)
> + REG_L x6, PT_T1(sp)
> + REG_L x7, PT_T2(sp)
> + REG_L x8, PT_S0(sp)
> + REG_L x9, PT_S1(sp)
> + REG_L x10, PT_A0(sp)
> + REG_L x11, PT_A1(sp)
> + REG_L x12, PT_A2(sp)
> + REG_L x13, PT_A3(sp)
> + REG_L x14, PT_A4(sp)
> + REG_L x15, PT_A5(sp)
> + REG_L x16, PT_A6(sp)
> + REG_L x17, PT_A7(sp)
> + REG_L x18, PT_S2(sp)
> + REG_L x19, PT_S3(sp)
> + REG_L x20, PT_S4(sp)
> + REG_L x21, PT_S5(sp)
> + REG_L x22, PT_S6(sp)
> + REG_L x23, PT_S7(sp)
> + REG_L x24, PT_S8(sp)
> + REG_L x25, PT_S9(sp)
> + REG_L x26, PT_S10(sp)
> + REG_L x27, PT_S11(sp)
> + REG_L x28, PT_T3(sp)
> + REG_L x29, PT_T4(sp)
> + REG_L x30, PT_T5(sp)
> + REG_L x31, PT_T6(sp)
> + .endm
> +
> +ENTRY(kretprobe_trampoline)
> + addi sp, sp, -(PT_SIZE_ON_STACK)
> + save_all_base_regs
> +
> + move a0, sp /* pt_regs */
> +
> + call trampoline_probe_handler
> +
> + /* use the result as the return-address */
> + move ra, a0
> +
> + restore_all_base_regs
> + addi sp, sp, PT_SIZE_ON_STACK
> +
> + ret
> +ENDPROC(kretprobe_trampoline)
> diff --git a/arch/riscv/kernel/probes/simulate-insn.c b/arch/riscv/kernel/probes/simulate-insn.c
> new file mode 100644
> index 00000000..2519ce2
> --- /dev/null
> +++ b/arch/riscv/kernel/probes/simulate-insn.c
> @@ -0,0 +1,85 @@
> +// SPDX-License-Identifier: GPL-2.0+
> +
> +#include <linux/bitops.h>
> +#include <linux/kernel.h>
> +#include <linux/kprobes.h>
> +
> +#include "decode-insn.h"
> +#include "simulate-insn.h"
> +
> +static inline bool rv_insn_reg_get_val(struct pt_regs *regs, u32 index,
> + unsigned long *ptr)
> +{
> + if (index == 0)
> + *ptr = 0;
> + else if (index <= 31)
> + *ptr = *((unsigned long *)regs + index);
> + else
> + return false;
> +
> + return true;
> +}
> +
> +static inline bool rv_insn_reg_set_val(struct pt_regs *regs, u32 index,
> + unsigned long val)
> +{
> + if (index == 0)
> + return true; /* writes to x0 are discarded, not an error */
> + else if (index <= 31)
> + *((unsigned long *)regs + index) = val;
> + else
> + return false;
> +
> + return true;
> +}
> +
> +bool __kprobes simulate_jal(u32 opcode, unsigned long addr, struct pt_regs *regs)
> +{
> + /*
> + * 31 30 21 20 19 12 11 7 6 0
> + * imm [20] | imm[10:1] | imm[11] | imm[19:12] | rd | opcode
> + * 1 10 1 8 5 JAL/J
> + */
> + bool ret;
> + u32 imm;
> + u32 index = (opcode >> 7) & 0x1f;
> +
> + ret = rv_insn_reg_set_val(regs, index, addr + 4);
> + if (!ret)
> + return ret;
> +
> + imm = ((opcode >> 21) & 0x3ff) << 1;
> + imm |= ((opcode >> 20) & 0x1) << 11;
> + imm |= ((opcode >> 12) & 0xff) << 12;
> + imm |= ((opcode >> 31) & 0x1) << 20;
> +
> + instruction_pointer_set(regs, addr + sign_extend32((imm), 20));
> +
> + return ret;
> +}
> +
> +bool __kprobes simulate_jalr(u32 opcode, unsigned long addr, struct pt_regs *regs)
> +{
> + /*
> + * 31 20 19 15 14 12 11 7 6 0
> + * offset[11:0] | rs1 | 000 | rd | opcode
> + * 12 5 3 5 JALR/JR
> + */
> + bool ret;
> + unsigned long base_addr;
> + u32 imm = (opcode >> 20) & 0xfff;
> + u32 rd_index = (opcode >> 7) & 0x1f;
> + u32 rs1_index = (opcode >> 15) & 0x1f;
> +
> + ret = rv_insn_reg_set_val(regs, rd_index, addr + 4);
> + if (!ret)
> + return ret;
> +
> + ret = rv_insn_reg_get_val(regs, rs1_index, &base_addr);
> + if (!ret)
> + return ret;
> +
> + instruction_pointer_set(regs, (base_addr + sign_extend32((imm), 11)) & ~1);
> +
> + return ret;
> +}
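The J-type immediate reassembly in simulate_jal() can be checked in isolation. The sketch below uses the same shifts; the sign extension is open-coded since sign_extend32() is kernel-internal:

```c
#include <assert.h>
#include <stdint.h>

/* Reassemble the scrambled J-type immediate of a JAL instruction,
 * using the same bit extraction as simulate_jal() above. */
static int32_t jal_offset(uint32_t opcode)
{
	uint32_t imm = ((opcode >> 21) & 0x3ff) << 1;	/* imm[10:1]  */

	imm |= ((opcode >> 20) & 0x1) << 11;		/* imm[11]    */
	imm |= ((opcode >> 12) & 0xff) << 12;		/* imm[19:12] */
	imm |= ((opcode >> 31) & 0x1) << 20;		/* imm[20]    */

	/* sign-extend from bit 20 */
	return (int32_t)(imm << 11) >> 11;
}
```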
> diff --git a/arch/riscv/kernel/probes/simulate-insn.h b/arch/riscv/kernel/probes/simulate-insn.h
> new file mode 100644
> index 00000000..a62d784
> --- /dev/null
> +++ b/arch/riscv/kernel/probes/simulate-insn.h
> @@ -0,0 +1,47 @@
> +/* SPDX-License-Identifier: GPL-2.0+ */
> +
> +#ifndef _RISCV_KERNEL_PROBES_SIMULATE_INSN_H
> +#define _RISCV_KERNEL_PROBES_SIMULATE_INSN_H
> +
> +#define __RISCV_INSN_FUNCS(name, mask, val) \
> +static __always_inline bool riscv_insn_is_##name(probe_opcode_t code) \
> +{ \
> + BUILD_BUG_ON(~(mask) & (val)); \
> + return (code & (mask)) == (val); \
> +} \
> +bool simulate_##name(u32 opcode, unsigned long addr, \
> + struct pt_regs *regs);
> +
> +#define RISCV_INSN_REJECTED(name, code) \
> + do { \
> + if (riscv_insn_is_##name(code)) { \
> + return INSN_REJECTED; \
> + } \
> + } while (0)
> +
> +__RISCV_INSN_FUNCS(system, 0x7f, 0x73)
> +__RISCV_INSN_FUNCS(fence, 0x7f, 0x0f)
> +
> +#define RISCV_INSN_SET_SIMULATE(name, code) \
> + do { \
> + if (riscv_insn_is_##name(code)) { \
> + api->handler = simulate_##name; \
> + return INSN_GOOD_NO_SLOT; \
> + } \
> + } while (0)
> +
> +__RISCV_INSN_FUNCS(c_j, 0xe003, 0xa001)
> +__RISCV_INSN_FUNCS(c_jr, 0xf007, 0x8002)
> +__RISCV_INSN_FUNCS(c_jal, 0xe003, 0x2001)
> +__RISCV_INSN_FUNCS(c_jalr, 0xf007, 0x9002)
> +__RISCV_INSN_FUNCS(c_beqz, 0xe003, 0xc001)
> +__RISCV_INSN_FUNCS(c_bnez, 0xe003, 0xe001)
> +__RISCV_INSN_FUNCS(c_ebreak, 0xffff, 0x9002)
> +
> +__RISCV_INSN_FUNCS(auipc, 0x7f, 0x17)
> +__RISCV_INSN_FUNCS(branch, 0x7f, 0x63)
> +
> +__RISCV_INSN_FUNCS(jal, 0x7f, 0x6f)
> +__RISCV_INSN_FUNCS(jalr, 0x707f, 0x67)
> +
> +#endif /* _RISCV_KERNEL_PROBES_SIMULATE_INSN_H */
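Note that these mask/value pairs overlap: c.ebreak (0x9002) also satisfies the looser c.jalr pattern, so the order of checks in decode-insn.c matters (here both are rejected anyway). Standalone copies of three of the matchers, to make the overlap concrete:

```c
#include <assert.h>
#include <stdint.h>

/* Compressed-opcode matchers copied from the header above.
 * c.ebreak's full-width pattern is a special case of the c.jalr
 * pattern (c.jalr with rs1 == 0), so more-specific masks must be
 * tested first when the distinction matters. */
static int is_c_jalr(uint16_t code)   { return (code & 0xf007) == 0x9002; }
static int is_c_ebreak(uint16_t code) { return (code & 0xffff) == 0x9002; }
static int is_c_j(uint16_t code)      { return (code & 0xe003) == 0xa001; }
```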
> diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
> index ecec177..ac2e786 100644
> --- a/arch/riscv/kernel/traps.c
> +++ b/arch/riscv/kernel/traps.c
> @@ -12,6 +12,7 @@
> #include <linux/signal.h>
> #include <linux/kdebug.h>
> #include <linux/uaccess.h>
> +#include <linux/kprobes.h>
> #include <linux/mm.h>
> #include <linux/module.h>
> #include <linux/irq.h>
> @@ -145,6 +146,14 @@ static inline unsigned long get_break_insn_length(unsigned long pc)
>
> asmlinkage __visible void do_trap_break(struct pt_regs *regs)
> {
> +#ifdef CONFIG_KPROBES
> + if (kprobe_single_step_handler(regs))
> + return;
> +
> + if (kprobe_breakpoint_handler(regs))
> + return;
> +#endif
> +
> if (user_mode(regs))
> force_sig_fault(SIGTRAP, TRAP_BRKPT, (void __user *)regs->epc);
> #ifdef CONFIG_KGDB
> diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
> index ae7b7fe..da0c08c 100644
> --- a/arch/riscv/mm/fault.c
> +++ b/arch/riscv/mm/fault.c
> @@ -13,6 +13,7 @@
> #include <linux/perf_event.h>
> #include <linux/signal.h>
> #include <linux/uaccess.h>
> +#include <linux/kprobes.h>
>
> #include <asm/pgalloc.h>
> #include <asm/ptrace.h>
> @@ -40,6 +41,9 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
> tsk = current;
> mm = tsk->mm;
>
> + if (kprobe_page_fault(regs, cause))
> + return;
> +
> /*
> * Fault-in kernel-space virtual memory on-demand.
> * The 'reference' page table is init_mm.pgd.
> --
> 2.7.4
>


--
Masami Hiramatsu <[email protected]>

2020-07-07 08:24:28

by Zong Li

Subject: Re: [PATCH V1 2/5] RISC-V: Implement ptrace regs and stack API

On Sat, Jul 4, 2020 at 11:34 AM <[email protected]> wrote:
>
> From: Patrick Stählin <[email protected]>
>
> Needed for kprobes support. Copied and adapted from arm64 code.
>
> Guo Ren fixup pt_regs type for linux-5.8-rc1.
>
> Signed-off-by: Patrick Stählin <[email protected]>
> Signed-off-by: Guo Ren <[email protected]>
> ---
> arch/riscv/Kconfig | 1 +
> arch/riscv/include/asm/ptrace.h | 29 ++++++++++++
> arch/riscv/kernel/ptrace.c | 99 +++++++++++++++++++++++++++++++++++++++++
> 3 files changed, 129 insertions(+)
>
> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> index 128192e..58d6f66 100644
> --- a/arch/riscv/Kconfig
> +++ b/arch/riscv/Kconfig
> @@ -76,6 +76,7 @@ config RISCV
> select SPARSE_IRQ
> select SYSCTL_EXCEPTION_TRACE
> select THREAD_INFO_IN_TASK
> + select HAVE_REGS_AND_STACK_ACCESS_API
>
> config ARCH_MMAP_RND_BITS_MIN
> default 18 if 64BIT
> diff --git a/arch/riscv/include/asm/ptrace.h b/arch/riscv/include/asm/ptrace.h
> index ee49f80..23372bb 100644
> --- a/arch/riscv/include/asm/ptrace.h
> +++ b/arch/riscv/include/asm/ptrace.h
> @@ -8,6 +8,7 @@
>
> #include <uapi/asm/ptrace.h>
> #include <asm/csr.h>
> +#include <linux/compiler.h>
>
> #ifndef __ASSEMBLY__
>
> @@ -60,6 +61,7 @@ struct pt_regs {
>
> #define user_mode(regs) (((regs)->status & SR_PP) == 0)
>
> +#define MAX_REG_OFFSET offsetof(struct pt_regs, orig_a0)
>
> /* Helpers for working with the instruction pointer */
> static inline unsigned long instruction_pointer(struct pt_regs *regs)
> @@ -85,6 +87,12 @@ static inline void user_stack_pointer_set(struct pt_regs *regs,
> regs->sp = val;
> }
>
> +/* Valid only for Kernel mode traps. */
> +static inline unsigned long kernel_stack_pointer(struct pt_regs *regs)
> +{
> + return regs->sp;
> +}
> +
> /* Helpers for working with the frame pointer */
> static inline unsigned long frame_pointer(struct pt_regs *regs)
> {
> @@ -101,6 +109,27 @@ static inline unsigned long regs_return_value(struct pt_regs *regs)
> return regs->a0;
> }
>
> +extern int regs_query_register_offset(const char *name);
> +extern unsigned long regs_get_kernel_stack_nth(struct pt_regs *regs,
> + unsigned int n);
> +
> +/**
> + * regs_get_register() - get a register value from its offset
> + * @regs: pt_regs from which the register value is read
> + * @offset: offset of the register in struct pt_regs.
> + *
> + * regs_get_register() returns the value of the register located at
> + * @offset in @regs. If @offset is bigger than MAX_REG_OFFSET, this
> + * returns 0.
> + */
> +static inline unsigned long regs_get_register(struct pt_regs *regs,
> + unsigned int offset)
> +{
> + if (unlikely(offset > MAX_REG_OFFSET))
> + return 0;
> +
> + return *(unsigned long *)((unsigned long)regs + offset);
> +}
> #endif /* __ASSEMBLY__ */
>
> #endif /* _ASM_RISCV_PTRACE_H */
> diff --git a/arch/riscv/kernel/ptrace.c b/arch/riscv/kernel/ptrace.c
> index 444dc7b..a11c692 100644
> --- a/arch/riscv/kernel/ptrace.c
> +++ b/arch/riscv/kernel/ptrace.c
> @@ -125,6 +125,105 @@ const struct user_regset_view *task_user_regset_view(struct task_struct *task)
> return &riscv_user_native_view;
> }
>
> +struct pt_regs_offset {
> + const char *name;
> + int offset;
> +};
> +
> +#define REG_OFFSET_NAME(r) {.name = #r, .offset = offsetof(struct pt_regs, r)}
> +#define REG_OFFSET_END {.name = NULL, .offset = 0}
> +
> +static const struct pt_regs_offset regoffset_table[] = {
> + REG_OFFSET_NAME(epc),
> + REG_OFFSET_NAME(ra),
> + REG_OFFSET_NAME(sp),
> + REG_OFFSET_NAME(gp),
> + REG_OFFSET_NAME(tp),
> + REG_OFFSET_NAME(t0),
> + REG_OFFSET_NAME(t1),
> + REG_OFFSET_NAME(t2),
> + REG_OFFSET_NAME(s0),
> + REG_OFFSET_NAME(s1),
> + REG_OFFSET_NAME(a0),
> + REG_OFFSET_NAME(a1),
> + REG_OFFSET_NAME(a2),
> + REG_OFFSET_NAME(a3),
> + REG_OFFSET_NAME(a4),
> + REG_OFFSET_NAME(a5),
> + REG_OFFSET_NAME(a6),
> + REG_OFFSET_NAME(a7),
> + REG_OFFSET_NAME(s2),
> + REG_OFFSET_NAME(s3),
> + REG_OFFSET_NAME(s4),
> + REG_OFFSET_NAME(s5),
> + REG_OFFSET_NAME(s6),
> + REG_OFFSET_NAME(s7),
> + REG_OFFSET_NAME(s8),
> + REG_OFFSET_NAME(s9),
> + REG_OFFSET_NAME(s10),
> + REG_OFFSET_NAME(s11),
> + REG_OFFSET_NAME(t3),
> + REG_OFFSET_NAME(t4),
> + REG_OFFSET_NAME(t5),
> + REG_OFFSET_NAME(t6),
> + REG_OFFSET_NAME(status),
> + REG_OFFSET_NAME(badaddr),
> + REG_OFFSET_NAME(cause),
> + REG_OFFSET_NAME(orig_a0),
> + REG_OFFSET_END,
> +};
> +
> +/**
> + * regs_query_register_offset() - query register offset from its name
> + * @name: the name of a register
> + *
> + * regs_query_register_offset() returns the offset of a register in struct
> + * pt_regs from its name. If the name is invalid, this returns -EINVAL.
> + */
> +int regs_query_register_offset(const char *name)
> +{
> + const struct pt_regs_offset *roff;
> +
> + for (roff = regoffset_table; roff->name != NULL; roff++)
> + if (!strcmp(roff->name, name))
> + return roff->offset;
> + return -EINVAL;
> +}
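The table-driven name lookup can be exercised in isolation. A toy model built on offsetof() — toy_regs is illustrative only, not the kernel's struct pt_regs:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical register file, standing in for struct pt_regs. */
struct toy_regs {
	unsigned long epc, ra, sp, a0;
};

struct reg_offset {
	const char *name;
	int offset;
};

#define REG_OFFSET_NAME(r) { .name = #r, .offset = offsetof(struct toy_regs, r) }

static const struct reg_offset table[] = {
	REG_OFFSET_NAME(epc), REG_OFFSET_NAME(ra),
	REG_OFFSET_NAME(sp), REG_OFFSET_NAME(a0),
	{ .name = NULL, .offset = 0 },	/* REG_OFFSET_END */
};

/* Map a register name to its byte offset; -1 plays -EINVAL's role. */
static int query_offset(const char *name)
{
	const struct reg_offset *roff;

	for (roff = table; roff->name; roff++)
		if (!strcmp(roff->name, name))
			return roff->offset;
	return -1;
}
```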
> +
> +/**
> + * regs_within_kernel_stack() - check the address in the stack
> + * @regs: pt_regs which contains kernel stack pointer.
> + * @addr: address which is checked.
> + *
> + * regs_within_kernel_stack() checks @addr is within the kernel stack page(s).
> + * If @addr is within the kernel stack, it returns true. If not, returns false.
> + */
> +static bool regs_within_kernel_stack(struct pt_regs *regs, unsigned long addr)
> +{
> + return (addr & ~(THREAD_SIZE - 1)) ==
> + (kernel_stack_pointer(regs) & ~(THREAD_SIZE - 1));
> +}
> +
> +/**
> + * regs_get_kernel_stack_nth() - get Nth entry of the stack
> + * @regs: pt_regs which contains kernel stack pointer.
> + * @n: stack entry number.
> + *
> + * regs_get_kernel_stack_nth() returns @n th entry of the kernel stack which
> + * is specified by @regs. If the @n th entry is NOT in the kernel stack,
> + * this returns 0.
> + */
> +unsigned long regs_get_kernel_stack_nth(struct pt_regs *regs, unsigned int n)
> +{
> + unsigned long *addr = (unsigned long *)kernel_stack_pointer(regs);
> +
> + addr += n;
> + if (regs_within_kernel_stack(regs, (unsigned long)addr))
> + return *addr;
> + else
> + return 0;
> +}
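The bounds check behind regs_get_kernel_stack_nth() reduces to comparing THREAD_SIZE-aligned bases. A standalone model — the THREAD_SIZE value is assumed for illustration:

```c
#include <assert.h>

#define THREAD_SIZE 8192UL	/* assumed kernel stack size for the sketch */

/* Model of regs_within_kernel_stack(): @addr is usable only if it lies
 * in the same THREAD_SIZE-aligned region as the saved kernel sp. */
static int within_kernel_stack(unsigned long sp, unsigned long addr)
{
	return (addr & ~(THREAD_SIZE - 1)) == (sp & ~(THREAD_SIZE - 1));
}
```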
> +
> void ptrace_disable(struct task_struct *child)
> {
> clear_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
> --
> 2.7.4
>

Looks good to me.

Reviewed-by: Zong Li <[email protected]>

2020-07-07 08:26:41

by Zong Li

Subject: Re: [PATCH V1 4/5] riscv: Add kprobes supported

On Mon, Jul 6, 2020 at 6:12 PM Masami Hiramatsu <[email protected]> wrote:
>
> Hi Guo,
>
> On Sat, 4 Jul 2020 03:34:18 +0000
> [email protected] wrote:
>
> > From: Guo Ren <[email protected]>
> >
> > This patch enables "kprobe & kretprobe" to work with ftrace
> > interface. It utilized software breakpoint as single-step
> > mechanism.
> >
> > Some instructions which can't be single-step executed must be
> > simulated in kernel execution slot, such as: branch, jal, auipc,
> > la ...
> >
> > Some instructions should be rejected for probing and we use a
> > blacklist to filter, such as: ecall, ebreak, ...
> >
> > We use ebreak & c.ebreak to replace origin instruction and the
> > kprobe handler prepares an executable memory slot for out-of-line
> > execution with a copy of the original instruction being probed.
> > In execution slot we add ebreak behind original instruction to
> > > simulate a single-step mechanism.
> >
> > The patch is based on packi's work [1] and csky's work [2].
> > - The kprobes_trampoline.S is all from packi's patch
> > - The single-step mechanism is new designed for riscv without hw
> > single-step trap
> > - The simulation codes are from csky
> > - Frankly, all codes refer to other archs' implementation
> >
> > [1] https://lore.kernel.org/linux-riscv/[email protected]/
> > [2] https://lore.kernel.org/linux-csky/[email protected]/
> >
>
> This looks good to me. Thanks for updating !
>
> Acked-by: Masami Hiramatsu <[email protected]>
>
> Thank you,
>

It works for me. Thanks!

Tested-by: Zong Li <[email protected]>

>
> > Signed-off-by: Guo Ren <[email protected]>
> > Co-Developed-by: Patrick Stählin <[email protected]>
> > Cc: Patrick Stählin <[email protected]>
> > Cc: Masami Hiramatsu <[email protected]>
> > Cc: Palmer Dabbelt <[email protected]>
> > Cc: Björn Töpel <[email protected]>
> > ---
> > arch/riscv/Kconfig | 2 +
> > arch/riscv/include/asm/kprobes.h | 40 +++
> > arch/riscv/include/asm/probes.h | 24 ++
> > arch/riscv/kernel/Makefile | 1 +
> > arch/riscv/kernel/probes/Makefile | 4 +
> > arch/riscv/kernel/probes/decode-insn.c | 48 +++
> > arch/riscv/kernel/probes/decode-insn.h | 18 +
> > arch/riscv/kernel/probes/kprobes.c | 471 ++++++++++++++++++++++++++
> > arch/riscv/kernel/probes/kprobes_trampoline.S | 93 +++++
> > arch/riscv/kernel/probes/simulate-insn.c | 85 +++++
> > arch/riscv/kernel/probes/simulate-insn.h | 47 +++
> > arch/riscv/kernel/traps.c | 9 +
> > arch/riscv/mm/fault.c | 4 +
> > 13 files changed, 846 insertions(+)
> > create mode 100644 arch/riscv/include/asm/probes.h
> > create mode 100644 arch/riscv/kernel/probes/Makefile
> > create mode 100644 arch/riscv/kernel/probes/decode-insn.c
> > create mode 100644 arch/riscv/kernel/probes/decode-insn.h
> > create mode 100644 arch/riscv/kernel/probes/kprobes.c
> > create mode 100644 arch/riscv/kernel/probes/kprobes_trampoline.S
> > create mode 100644 arch/riscv/kernel/probes/simulate-insn.c
> > create mode 100644 arch/riscv/kernel/probes/simulate-insn.h
> >
> > diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> > index 58d6f66..a295f0b 100644
> > --- a/arch/riscv/Kconfig
> > +++ b/arch/riscv/Kconfig
> > @@ -57,6 +57,8 @@ config RISCV
> > select HAVE_EBPF_JIT if MMU
> > select HAVE_FUTEX_CMPXCHG if FUTEX
> > select HAVE_GENERIC_VDSO if MMU && 64BIT
> > + select HAVE_KPROBES
> > + select HAVE_KRETPROBES
> > select HAVE_PCI
> > select HAVE_PERF_EVENTS
> > select HAVE_PERF_REGS
> > diff --git a/arch/riscv/include/asm/kprobes.h b/arch/riscv/include/asm/kprobes.h
> > index 56a98ea3..4647d38 100644
> > --- a/arch/riscv/include/asm/kprobes.h
> > +++ b/arch/riscv/include/asm/kprobes.h
> > @@ -11,4 +11,44 @@
> >
> > #include <asm-generic/kprobes.h>
> >
> > +#ifdef CONFIG_KPROBES
> > +#include <linux/types.h>
> > +#include <linux/ptrace.h>
> > +#include <linux/percpu.h>
> > +
> > +#define __ARCH_WANT_KPROBES_INSN_SLOT
> > +#define MAX_INSN_SIZE 2
> > +
> > +#define flush_insn_slot(p) do { } while (0)
> > +#define kretprobe_blacklist_size 0
> > +
> > +#include <asm/probes.h>
> > +
> > +struct prev_kprobe {
> > + struct kprobe *kp;
> > + unsigned int status;
> > +};
> > +
> > +/* Single step context for kprobe */
> > +struct kprobe_step_ctx {
> > + unsigned long ss_pending;
> > + unsigned long match_addr;
> > +};
> > +
> > +/* per-cpu kprobe control block */
> > +struct kprobe_ctlblk {
> > + unsigned int kprobe_status;
> > + unsigned long saved_status;
> > + struct prev_kprobe prev_kprobe;
> > + struct kprobe_step_ctx ss_ctx;
> > +};
> > +
> > +void arch_remove_kprobe(struct kprobe *p);
> > +int kprobe_fault_handler(struct pt_regs *regs, unsigned int trapnr);
> > +bool kprobe_breakpoint_handler(struct pt_regs *regs);
> > +bool kprobe_single_step_handler(struct pt_regs *regs);
> > +void kretprobe_trampoline(void);
> > +void __kprobes *trampoline_probe_handler(struct pt_regs *regs);
> > +
> > +#endif /* CONFIG_KPROBES */
> > #endif /* _ASM_RISCV_KPROBES_H */
> > diff --git a/arch/riscv/include/asm/probes.h b/arch/riscv/include/asm/probes.h
> > new file mode 100644
> > index 00000000..a787e6d
> > --- /dev/null
> > +++ b/arch/riscv/include/asm/probes.h
> > @@ -0,0 +1,24 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +
> > +#ifndef _ASM_RISCV_PROBES_H
> > +#define _ASM_RISCV_PROBES_H
> > +
> > +typedef u32 probe_opcode_t;
> > +typedef bool (probes_handler_t) (u32 opcode, unsigned long addr, struct pt_regs *);
> > +
> > +/* architecture specific copy of original instruction */
> > +struct arch_probe_insn {
> > + probe_opcode_t *insn;
> > + probes_handler_t *handler;
> > + /* restore address after simulation */
> > + unsigned long restore;
> > +};
> > +
> > +#ifdef CONFIG_KPROBES
> > +typedef u32 kprobe_opcode_t;
> > +struct arch_specific_insn {
> > + struct arch_probe_insn api;
> > +};
> > +#endif
> > +
> > +#endif /* _ASM_RISCV_PROBES_H */
> > diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
> > index b355cf4..c3fff3e 100644
> > --- a/arch/riscv/kernel/Makefile
> > +++ b/arch/riscv/kernel/Makefile
> > @@ -29,6 +29,7 @@ obj-y += riscv_ksyms.o
> > obj-y += stacktrace.o
> > obj-y += cacheinfo.o
> > obj-y += patch.o
> > +obj-y += probes/
> > obj-$(CONFIG_MMU) += vdso.o vdso/
> >
> > obj-$(CONFIG_RISCV_M_MODE) += clint.o traps_misaligned.o
> > diff --git a/arch/riscv/kernel/probes/Makefile b/arch/riscv/kernel/probes/Makefile
> > new file mode 100644
> > index 00000000..8a39507
> > --- /dev/null
> > +++ b/arch/riscv/kernel/probes/Makefile
> > @@ -0,0 +1,4 @@
> > +# SPDX-License-Identifier: GPL-2.0
> > +obj-$(CONFIG_KPROBES) += kprobes.o decode-insn.o simulate-insn.o
> > +obj-$(CONFIG_KPROBES) += kprobes_trampoline.o
> > +CFLAGS_REMOVE_simulate-insn.o = $(CC_FLAGS_FTRACE)
> > diff --git a/arch/riscv/kernel/probes/decode-insn.c b/arch/riscv/kernel/probes/decode-insn.c
> > new file mode 100644
> > index 00000000..0876c30
> > --- /dev/null
> > +++ b/arch/riscv/kernel/probes/decode-insn.c
> > @@ -0,0 +1,48 @@
> > +// SPDX-License-Identifier: GPL-2.0+
> > +
> > +#include <linux/kernel.h>
> > +#include <linux/kprobes.h>
> > +#include <linux/module.h>
> > +#include <linux/kallsyms.h>
> > +#include <asm/sections.h>
> > +
> > +#include "decode-insn.h"
> > +#include "simulate-insn.h"
> > +
> > +/* Return:
> > + * INSN_REJECTED If instruction is one not allowed to kprobe,
> > + * INSN_GOOD_NO_SLOT If instruction is supported but doesn't use its slot.
> > + */
> > +enum probe_insn __kprobes
> > +riscv_probe_decode_insn(probe_opcode_t *addr, struct arch_probe_insn *api)
> > +{
> > + probe_opcode_t insn = le32_to_cpu(*addr);
> > +
> > + /*
> > + * Reject instructions list:
> > + */
> > + RISCV_INSN_REJECTED(system, insn);
> > + RISCV_INSN_REJECTED(fence, insn);
> > +
> > + /*
> > + * Simulate instructions list:
> > + * TODO: the REJECTED ones below need to be implemented
> > + */
> > +#ifdef CONFIG_RISCV_ISA_C
> > + RISCV_INSN_REJECTED(c_j, insn);
> > + RISCV_INSN_REJECTED(c_jr, insn);
> > + RISCV_INSN_REJECTED(c_jal, insn);
> > + RISCV_INSN_REJECTED(c_jalr, insn);
> > + RISCV_INSN_REJECTED(c_beqz, insn);
> > + RISCV_INSN_REJECTED(c_bnez, insn);
> > + RISCV_INSN_REJECTED(c_ebreak, insn);
> > +#endif
> > +
> > + RISCV_INSN_REJECTED(auipc, insn);
> > + RISCV_INSN_REJECTED(branch, insn);
> > +
> > + RISCV_INSN_SET_SIMULATE(jal, insn);
> > + RISCV_INSN_SET_SIMULATE(jalr, insn);
> > +
> > + return INSN_GOOD;
> > +}
> > diff --git a/arch/riscv/kernel/probes/decode-insn.h b/arch/riscv/kernel/probes/decode-insn.h
> > new file mode 100644
> > index 00000000..42269a7
> > --- /dev/null
> > +++ b/arch/riscv/kernel/probes/decode-insn.h
> > @@ -0,0 +1,18 @@
> > +/* SPDX-License-Identifier: GPL-2.0+ */
> > +
> > +#ifndef _RISCV_KERNEL_KPROBES_DECODE_INSN_H
> > +#define _RISCV_KERNEL_KPROBES_DECODE_INSN_H
> > +
> > +#include <asm/sections.h>
> > +#include <asm/kprobes.h>
> > +
> > +enum probe_insn {
> > + INSN_REJECTED,
> > + INSN_GOOD_NO_SLOT,
> > + INSN_GOOD,
> > +};
> > +
> > +enum probe_insn __kprobes
> > +riscv_probe_decode_insn(probe_opcode_t *addr, struct arch_probe_insn *asi);
> > +
> > +#endif /* _RISCV_KERNEL_KPROBES_DECODE_INSN_H */
> > diff --git a/arch/riscv/kernel/probes/kprobes.c b/arch/riscv/kernel/probes/kprobes.c
> > new file mode 100644
> > index 00000000..31b6196
> > --- /dev/null
> > +++ b/arch/riscv/kernel/probes/kprobes.c
> > @@ -0,0 +1,471 @@
> > +// SPDX-License-Identifier: GPL-2.0+
> > +
> > +#include <linux/kprobes.h>
> > +#include <linux/extable.h>
> > +#include <linux/slab.h>
> > +#include <linux/stop_machine.h>
> > +#include <asm/ptrace.h>
> > +#include <linux/uaccess.h>
> > +#include <asm/sections.h>
> > +#include <asm/cacheflush.h>
> > +#include <asm/bug.h>
> > +#include <asm/patch.h>
> > +
> > +#include "decode-insn.h"
> > +
> > +DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
> > +DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
> > +
> > +static void __kprobes
> > +post_kprobe_handler(struct kprobe_ctlblk *, struct pt_regs *);
> > +
> > +static void __kprobes arch_prepare_ss_slot(struct kprobe *p)
> > +{
> > + unsigned long offset = GET_INSN_LENGTH(p->opcode);
> > +
> > + p->ainsn.api.restore = (unsigned long)p->addr + offset;
> > +
> > + patch_text(p->ainsn.api.insn, p->opcode);
> > + patch_text((void *)((unsigned long)(p->ainsn.api.insn) + offset),
> > + __BUG_INSN_32);
> > +}
> > +
> > +static void __kprobes arch_prepare_simulate(struct kprobe *p)
> > +{
> > + p->ainsn.api.restore = 0;
> > +}
> > +
> > +static void __kprobes arch_simulate_insn(struct kprobe *p, struct pt_regs *regs)
> > +{
> > + struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
> > +
> > + if (p->ainsn.api.handler)
> > + p->ainsn.api.handler((u32)p->opcode,
> > + (unsigned long)p->addr, regs);
> > +
> > + post_kprobe_handler(kcb, regs);
> > +}
> > +
> > +int __kprobes arch_prepare_kprobe(struct kprobe *p)
> > +{
> > + unsigned long probe_addr = (unsigned long)p->addr;
> > +
> > + if (probe_addr & 0x1) {
> > + pr_warn("Address not aligned.\n");
> > +
> > + return -EINVAL;
> > + }
> > +
> > + /* copy instruction */
> > + p->opcode = le32_to_cpu(*p->addr);
> > +
> > + /* decode instruction */
> > + switch (riscv_probe_decode_insn(p->addr, &p->ainsn.api)) {
> > + case INSN_REJECTED: /* insn not supported */
> > + return -EINVAL;
> > +
> > + case INSN_GOOD_NO_SLOT: /* insn needs simulation */
> > + p->ainsn.api.insn = NULL;
> > + break;
> > +
> > + case INSN_GOOD: /* instruction uses slot */
> > + p->ainsn.api.insn = get_insn_slot();
> > + if (!p->ainsn.api.insn)
> > + return -ENOMEM;
> > + break;
> > + }
> > +
> > + /* prepare the instruction */
> > + if (p->ainsn.api.insn)
> > + arch_prepare_ss_slot(p);
> > + else
> > + arch_prepare_simulate(p);
> > +
> > + return 0;
> > +}
> > +
> > +/* install breakpoint in text */
> > +void __kprobes arch_arm_kprobe(struct kprobe *p)
> > +{
> > + if ((p->opcode & __INSN_LENGTH_MASK) == __INSN_LENGTH_32)
> > + patch_text(p->addr, __BUG_INSN_32);
> > + else
> > + patch_text(p->addr, __BUG_INSN_16);
> > +}
> > +
> > +/* remove breakpoint from text */
> > +void __kprobes arch_disarm_kprobe(struct kprobe *p)
> > +{
> > + patch_text(p->addr, p->opcode);
> > +}
> > +
> > +void __kprobes arch_remove_kprobe(struct kprobe *p)
> > +{
> > +}
> > +
> > +static void __kprobes save_previous_kprobe(struct kprobe_ctlblk *kcb)
> > +{
> > + kcb->prev_kprobe.kp = kprobe_running();
> > + kcb->prev_kprobe.status = kcb->kprobe_status;
> > +}
> > +
> > +static void __kprobes restore_previous_kprobe(struct kprobe_ctlblk *kcb)
> > +{
> > + __this_cpu_write(current_kprobe, kcb->prev_kprobe.kp);
> > + kcb->kprobe_status = kcb->prev_kprobe.status;
> > +}
> > +
> > +static void __kprobes set_current_kprobe(struct kprobe *p)
> > +{
> > + __this_cpu_write(current_kprobe, p);
> > +}
> > +
> > +/*
> > + * Interrupts need to be disabled before single-step mode is set, and not
> > + * reenabled until after single-step mode ends.
> > + * Without disabling interrupts on the local CPU, there is a chance an
> > + * interrupt occurs between the exception return and the start of the
> > + * out-of-line single-step, which would result in wrongly single-stepping
> > + * into the interrupt handler.
> > + */
> > +static void __kprobes kprobes_save_local_irqflag(struct kprobe_ctlblk *kcb,
> > + struct pt_regs *regs)
> > +{
> > + kcb->saved_status = regs->status;
> > + regs->status &= ~SR_SPIE;
> > +}
> > +
> > +static void __kprobes kprobes_restore_local_irqflag(struct kprobe_ctlblk *kcb,
> > + struct pt_regs *regs)
> > +{
> > + regs->status = kcb->saved_status;
> > +}
> > +
> > +static void __kprobes
> > +set_ss_context(struct kprobe_ctlblk *kcb, unsigned long addr, struct kprobe *p)
> > +{
> > + unsigned long offset = GET_INSN_LENGTH(p->opcode);
> > +
> > + kcb->ss_ctx.ss_pending = true;
> > + kcb->ss_ctx.match_addr = addr + offset;
> > +}
> > +
> > +static void __kprobes clear_ss_context(struct kprobe_ctlblk *kcb)
> > +{
> > + kcb->ss_ctx.ss_pending = false;
> > + kcb->ss_ctx.match_addr = 0;
> > +}
> > +
> > +static void __kprobes setup_singlestep(struct kprobe *p,
> > + struct pt_regs *regs,
> > + struct kprobe_ctlblk *kcb, int reenter)
> > +{
> > + unsigned long slot;
> > +
> > + if (reenter) {
> > + save_previous_kprobe(kcb);
> > + set_current_kprobe(p);
> > + kcb->kprobe_status = KPROBE_REENTER;
> > + } else {
> > + kcb->kprobe_status = KPROBE_HIT_SS;
> > + }
> > +
> > + if (p->ainsn.api.insn) {
> > + /* prepare for single stepping */
> > + slot = (unsigned long)p->ainsn.api.insn;
> > +
> > + set_ss_context(kcb, slot, p); /* mark pending ss */
> > +
> > + /* IRQs and single stepping do not mix well. */
> > + kprobes_save_local_irqflag(kcb, regs);
> > +
> > + instruction_pointer_set(regs, slot);
> > + } else {
> > + /* insn simulation */
> > + arch_simulate_insn(p, regs);
> > + }
> > +}
> > +
> > +static int __kprobes reenter_kprobe(struct kprobe *p,
> > + struct pt_regs *regs,
> > + struct kprobe_ctlblk *kcb)
> > +{
> > + switch (kcb->kprobe_status) {
> > + case KPROBE_HIT_SSDONE:
> > + case KPROBE_HIT_ACTIVE:
> > + kprobes_inc_nmissed_count(p);
> > + setup_singlestep(p, regs, kcb, 1);
> > + break;
> > + case KPROBE_HIT_SS:
> > + case KPROBE_REENTER:
> > + pr_warn("Unrecoverable kprobe detected.\n");
> > + dump_kprobe(p);
> > + BUG();
> > + break;
> > + default:
> > + WARN_ON(1);
> > + return 0;
> > + }
> > +
> > + return 1;
> > +}
> > +
> > +static void __kprobes
> > +post_kprobe_handler(struct kprobe_ctlblk *kcb, struct pt_regs *regs)
> > +{
> > + struct kprobe *cur = kprobe_running();
> > +
> > + if (!cur)
> > + return;
> > +
> > + /* return addr restore if non-branching insn */
> > + if (cur->ainsn.api.restore != 0)
> > + regs->epc = cur->ainsn.api.restore;
> > +
> > + /* restore back original saved kprobe variables and continue */
> > + if (kcb->kprobe_status == KPROBE_REENTER) {
> > + restore_previous_kprobe(kcb);
> > + return;
> > + }
> > +
> > + /* call post handler */
> > + kcb->kprobe_status = KPROBE_HIT_SSDONE;
> > + if (cur->post_handler) {
> > + /* post_handler can hit breakpoint and single step
> > + * again, so we enable D-flag for recursive exception.
> > + */
> > + cur->post_handler(cur, regs, 0);
> > + }
> > +
> > + reset_current_kprobe();
> > +}
> > +
> > +int __kprobes kprobe_fault_handler(struct pt_regs *regs, unsigned int trapnr)
> > +{
> > + struct kprobe *cur = kprobe_running();
> > + struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
> > +
> > + switch (kcb->kprobe_status) {
> > + case KPROBE_HIT_SS:
> > + case KPROBE_REENTER:
> > + /*
> > + * We are here because the instruction being single
> > + * stepped caused a page fault. We reset the current
> > + * kprobe and the ip points back to the probe address
> > + * and allow the page fault handler to continue as a
> > + * normal page fault.
> > + */
> > + regs->epc = (unsigned long) cur->addr;
> > + if (!instruction_pointer(regs))
> > + BUG();
> > +
> > + if (kcb->kprobe_status == KPROBE_REENTER)
> > + restore_previous_kprobe(kcb);
> > + else
> > + reset_current_kprobe();
> > +
> > + break;
> > + case KPROBE_HIT_ACTIVE:
> > + case KPROBE_HIT_SSDONE:
> > + /*
> > + * We increment the nmissed count for accounting,
> > + * we can also use npre/npostfault count for accounting
> > + * these specific fault cases.
> > + */
> > + kprobes_inc_nmissed_count(cur);
> > +
> > + /*
> > + * We come here because instructions in the pre/post
> > + * handler caused the page_fault, this could happen
> > + * if handler tries to access user space by
> > + * copy_from_user(), get_user() etc. Let the
> > + * user-specified handler try to fix it first.
> > + */
> > + if (cur->fault_handler && cur->fault_handler(cur, regs, trapnr))
> > + return 1;
> > +
> > + /*
> > + * In case the user-specified fault handler returned
> > + * zero, try to fix up.
> > + */
> > + if (fixup_exception(regs))
> > + return 1;
> > + }
> > + return 0;
> > +}
> > +
> > +bool __kprobes
> > +kprobe_breakpoint_handler(struct pt_regs *regs)
> > +{
> > + struct kprobe *p, *cur_kprobe;
> > + struct kprobe_ctlblk *kcb;
> > + unsigned long addr = instruction_pointer(regs);
> > +
> > + kcb = get_kprobe_ctlblk();
> > + cur_kprobe = kprobe_running();
> > +
> > + p = get_kprobe((kprobe_opcode_t *) addr);
> > +
> > + if (p) {
> > + if (cur_kprobe) {
> > + if (reenter_kprobe(p, regs, kcb))
> > + return true;
> > + } else {
> > + /* Probe hit */
> > + set_current_kprobe(p);
> > + kcb->kprobe_status = KPROBE_HIT_ACTIVE;
> > +
> > + /*
> > + * If we have no pre-handler or it returned 0, we
> > + * continue with normal processing. If we have a
> > + * pre-handler and it returned non-zero, it will
> > + * modify the execution path, so there is no need for
> > + * single-stepping. Let's just reset the current kprobe and exit.
> > + *
> > + * pre_handler can hit a breakpoint and can step thru
> > + * before return.
> > + */
> > + if (!p->pre_handler || !p->pre_handler(p, regs))
> > + setup_singlestep(p, regs, kcb, 0);
> > + else
> > + reset_current_kprobe();
> > + }
> > + return true;
> > + }
> > +
> > + /*
> > + * The breakpoint instruction was removed right
> > + * after we hit it. Another cpu has removed
> > + * either a probepoint or a debugger breakpoint
> > + * at this address. In either case, no further
> > + * handling of this interrupt is appropriate.
> > + * Return back to original instruction, and continue.
> > + */
> > + return false;
> > +}
> > +
> > +bool __kprobes
> > +kprobe_single_step_handler(struct pt_regs *regs)
> > +{
> > + struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
> > +
> > + if ((kcb->ss_ctx.ss_pending)
> > + && (kcb->ss_ctx.match_addr == instruction_pointer(regs))) {
> > + clear_ss_context(kcb); /* clear pending ss */
> > +
> > + kprobes_restore_local_irqflag(kcb, regs);
> > +
> > + post_kprobe_handler(kcb, regs);
> > + return true;
> > + }
> > + return false;
> > +}
> > +
> > +/*
> > + * Provide a blacklist of symbols identifying ranges which cannot be kprobed.
> > + * This blacklist is exposed to userspace via debugfs (kprobes/blacklist).
> > + */
> > +int __init arch_populate_kprobe_blacklist(void)
> > +{
> > + int ret;
> > +
> > + ret = kprobe_add_area_blacklist((unsigned long)__irqentry_text_start,
> > + (unsigned long)__irqentry_text_end);
> > + return ret;
> > +}
> > +
> > +void __kprobes __used *trampoline_probe_handler(struct pt_regs *regs)
> > +{
> > + struct kretprobe_instance *ri = NULL;
> > + struct hlist_head *head, empty_rp;
> > + struct hlist_node *tmp;
> > + unsigned long flags, orig_ret_address = 0;
> > + unsigned long trampoline_address =
> > + (unsigned long)&kretprobe_trampoline;
> > + kprobe_opcode_t *correct_ret_addr = NULL;
> > +
> > + INIT_HLIST_HEAD(&empty_rp);
> > + kretprobe_hash_lock(current, &head, &flags);
> > +
> > + /*
> > + * It is possible to have multiple instances associated with a given
> > + * task either because multiple functions in the call path have
> > + * return probes installed on them, and/or more than one
> > + * return probe was registered for a target function.
> > + *
> > + * We can handle this because:
> > + * - instances are always pushed into the head of the list
> > + * - when multiple return probes are registered for the same
> > + * function, the (chronologically) first instance's ret_addr
> > + * will be the real return address, and all the rest will
> > + * point to kretprobe_trampoline.
> > + */
> > + hlist_for_each_entry_safe(ri, tmp, head, hlist) {
> > + if (ri->task != current)
> > + /* another task is sharing our hash bucket */
> > + continue;
> > +
> > + orig_ret_address = (unsigned long)ri->ret_addr;
> > +
> > + if (orig_ret_address != trampoline_address)
> > + /*
> > + * This is the real return address. Any other
> > + * instances associated with this task are for
> > + * other calls deeper on the call stack
> > + */
> > + break;
> > + }
> > +
> > + kretprobe_assert(ri, orig_ret_address, trampoline_address);
> > +
> > + correct_ret_addr = ri->ret_addr;
> > + hlist_for_each_entry_safe(ri, tmp, head, hlist) {
> > + if (ri->task != current)
> > + /* another task is sharing our hash bucket */
> > + continue;
> > +
> > + orig_ret_address = (unsigned long)ri->ret_addr;
> > + if (ri->rp && ri->rp->handler) {
> > + __this_cpu_write(current_kprobe, &ri->rp->kp);
> > + get_kprobe_ctlblk()->kprobe_status = KPROBE_HIT_ACTIVE;
> > + ri->ret_addr = correct_ret_addr;
> > + ri->rp->handler(ri, regs);
> > + __this_cpu_write(current_kprobe, NULL);
> > + }
> > +
> > + recycle_rp_inst(ri, &empty_rp);
> > +
> > + if (orig_ret_address != trampoline_address)
> > + /*
> > + * This is the real return address. Any other
> > + * instances associated with this task are for
> > + * other calls deeper on the call stack
> > + */
> > + break;
> > + }
> > +
> > + kretprobe_hash_unlock(current, &flags);
> > +
> > + hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) {
> > + hlist_del(&ri->hlist);
> > + kfree(ri);
> > + }
> > + return (void *)orig_ret_address;
> > +}
> > +
> > +void __kprobes arch_prepare_kretprobe(struct kretprobe_instance *ri,
> > + struct pt_regs *regs)
> > +{
> > + ri->ret_addr = (kprobe_opcode_t *)regs->ra;
> > + regs->ra = (unsigned long) &kretprobe_trampoline;
> > +}
> > +
> > +int __kprobes arch_trampoline_kprobe(struct kprobe *p)
> > +{
> > + return 0;
> > +}
> > +
> > +int __init arch_init_kprobes(void)
> > +{
> > + return 0;
> > +}
> > diff --git a/arch/riscv/kernel/probes/kprobes_trampoline.S b/arch/riscv/kernel/probes/kprobes_trampoline.S
> > new file mode 100644
> > index 00000000..6e85d02
> > --- /dev/null
> > +++ b/arch/riscv/kernel/probes/kprobes_trampoline.S
> > @@ -0,0 +1,93 @@
> > +/* SPDX-License-Identifier: GPL-2.0+ */
> > +/*
> > + * Author: Patrick Stählin <[email protected]>
> > + */
> > +#include <linux/linkage.h>
> > +
> > +#include <asm/asm.h>
> > +#include <asm/asm-offsets.h>
> > +
> > + .text
> > + .altmacro
> > +
> > + .macro save_all_base_regs
> > + REG_S x1, PT_RA(sp)
> > + REG_S x3, PT_GP(sp)
> > + REG_S x4, PT_TP(sp)
> > + REG_S x5, PT_T0(sp)
> > + REG_S x6, PT_T1(sp)
> > + REG_S x7, PT_T2(sp)
> > + REG_S x8, PT_S0(sp)
> > + REG_S x9, PT_S1(sp)
> > + REG_S x10, PT_A0(sp)
> > + REG_S x11, PT_A1(sp)
> > + REG_S x12, PT_A2(sp)
> > + REG_S x13, PT_A3(sp)
> > + REG_S x14, PT_A4(sp)
> > + REG_S x15, PT_A5(sp)
> > + REG_S x16, PT_A6(sp)
> > + REG_S x17, PT_A7(sp)
> > + REG_S x18, PT_S2(sp)
> > + REG_S x19, PT_S3(sp)
> > + REG_S x20, PT_S4(sp)
> > + REG_S x21, PT_S5(sp)
> > + REG_S x22, PT_S6(sp)
> > + REG_S x23, PT_S7(sp)
> > + REG_S x24, PT_S8(sp)
> > + REG_S x25, PT_S9(sp)
> > + REG_S x26, PT_S10(sp)
> > + REG_S x27, PT_S11(sp)
> > + REG_S x28, PT_T3(sp)
> > + REG_S x29, PT_T4(sp)
> > + REG_S x30, PT_T5(sp)
> > + REG_S x31, PT_T6(sp)
> > + .endm
> > +
> > + .macro restore_all_base_regs
> > + REG_L x3, PT_GP(sp)
> > + REG_L x4, PT_TP(sp)
> > + REG_L x5, PT_T0(sp)
> > + REG_L x6, PT_T1(sp)
> > + REG_L x7, PT_T2(sp)
> > + REG_L x8, PT_S0(sp)
> > + REG_L x9, PT_S1(sp)
> > + REG_L x10, PT_A0(sp)
> > + REG_L x11, PT_A1(sp)
> > + REG_L x12, PT_A2(sp)
> > + REG_L x13, PT_A3(sp)
> > + REG_L x14, PT_A4(sp)
> > + REG_L x15, PT_A5(sp)
> > + REG_L x16, PT_A6(sp)
> > + REG_L x17, PT_A7(sp)
> > + REG_L x18, PT_S2(sp)
> > + REG_L x19, PT_S3(sp)
> > + REG_L x20, PT_S4(sp)
> > + REG_L x21, PT_S5(sp)
> > + REG_L x22, PT_S6(sp)
> > + REG_L x23, PT_S7(sp)
> > + REG_L x24, PT_S8(sp)
> > + REG_L x25, PT_S9(sp)
> > + REG_L x26, PT_S10(sp)
> > + REG_L x27, PT_S11(sp)
> > + REG_L x28, PT_T3(sp)
> > + REG_L x29, PT_T4(sp)
> > + REG_L x30, PT_T5(sp)
> > + REG_L x31, PT_T6(sp)
> > + .endm
> > +
> > +ENTRY(kretprobe_trampoline)
> > + addi sp, sp, -(PT_SIZE_ON_STACK)
> > + save_all_base_regs
> > +
> > + move a0, sp /* pt_regs */
> > +
> > + call trampoline_probe_handler
> > +
> > + /* use the result as the return-address */
> > + move ra, a0
> > +
> > + restore_all_base_regs
> > + addi sp, sp, PT_SIZE_ON_STACK
> > +
> > + ret
> > +ENDPROC(kretprobe_trampoline)
> > diff --git a/arch/riscv/kernel/probes/simulate-insn.c b/arch/riscv/kernel/probes/simulate-insn.c
> > new file mode 100644
> > index 00000000..2519ce2
> > --- /dev/null
> > +++ b/arch/riscv/kernel/probes/simulate-insn.c
> > @@ -0,0 +1,85 @@
> > +// SPDX-License-Identifier: GPL-2.0+
> > +
> > +#include <linux/bitops.h>
> > +#include <linux/kernel.h>
> > +#include <linux/kprobes.h>
> > +
> > +#include "decode-insn.h"
> > +#include "simulate-insn.h"
> > +
> > +static inline bool rv_insn_reg_get_val(struct pt_regs *regs, u32 index,
> > + unsigned long *ptr)
> > +{
> > + if (index == 0)
> > + *ptr = 0;
> > + else if (index <= 31)
> > + *ptr = *((unsigned long *)regs + index);
> > + else
> > + return false;
> > +
> > + return true;
> > +}
> > +
> > +static inline bool rv_insn_reg_set_val(struct pt_regs *regs, u32 index,
> > + unsigned long val)
> > +{
> > + if (index == 0)
> > + return false;
> > + else if (index <= 31)
> > + *((unsigned long *)regs + index) = val;
> > + else
> > + return false;
> > +
> > + return true;
> > +}
> > +
> > +bool __kprobes simulate_jal(u32 opcode, unsigned long addr, struct pt_regs *regs)
> > +{
> > + /*
> > + * 31 30 21 20 19 12 11 7 6 0
> > + * imm [20] | imm[10:1] | imm[11] | imm[19:12] | rd | opcode
> > + * 1 10 1 8 5 JAL/J
> > + */
> > + bool ret;
> > + u32 imm;
> > + u32 index = (opcode >> 7) & 0x1f;
> > +
> > + ret = rv_insn_reg_set_val(regs, index, addr + 4);
> > + if (!ret)
> > + return ret;
> > +
> > + imm = ((opcode >> 21) & 0x3ff) << 1;
> > + imm |= ((opcode >> 20) & 0x1) << 11;
> > + imm |= ((opcode >> 12) & 0xff) << 12;
> > + imm |= ((opcode >> 31) & 0x1) << 20;
> > +
> > + instruction_pointer_set(regs, addr + sign_extend32((imm), 20));
> > +
> > + return ret;
> > +}
> > +
> > +bool __kprobes simulate_jalr(u32 opcode, unsigned long addr, struct pt_regs *regs)
> > +{
> > + /*
> > + * 31 20 19 15 14 12 11 7 6 0
> > + * offset[11:0] | rs1 | 010 | rd | opcode
> > + * 12 5 3 5 JALR/JR
> > + */
> > + bool ret;
> > + unsigned long base_addr;
> > + u32 imm = (opcode >> 20) & 0xfff;
> > + u32 rd_index = (opcode >> 7) & 0x1f;
> > + u32 rs1_index = (opcode >> 15) & 0x1f;
> > +
> > + ret = rv_insn_reg_set_val(regs, rd_index, addr + 4);
> > + if (!ret)
> > + return ret;
> > +
> > + ret = rv_insn_reg_get_val(regs, rs1_index, &base_addr);
> > + if (!ret)
> > + return ret;
> > +
> > + instruction_pointer_set(regs, (base_addr + sign_extend32((imm), 11))&~1);
> > +
> > + return ret;
> > +}
> > diff --git a/arch/riscv/kernel/probes/simulate-insn.h b/arch/riscv/kernel/probes/simulate-insn.h
> > new file mode 100644
> > index 00000000..a62d784
> > --- /dev/null
> > +++ b/arch/riscv/kernel/probes/simulate-insn.h
> > @@ -0,0 +1,47 @@
> > +/* SPDX-License-Identifier: GPL-2.0+ */
> > +
> > +#ifndef _RISCV_KERNEL_PROBES_SIMULATE_INSN_H
> > +#define _RISCV_KERNEL_PROBES_SIMULATE_INSN_H
> > +
> > +#define __RISCV_INSN_FUNCS(name, mask, val) \
> > +static __always_inline bool riscv_insn_is_##name(probe_opcode_t code) \
> > +{ \
> > + BUILD_BUG_ON(~(mask) & (val)); \
> > + return (code & (mask)) == (val); \
> > +} \
> > +bool simulate_##name(u32 opcode, unsigned long addr, \
> > + struct pt_regs *regs);
> > +
> > +#define RISCV_INSN_REJECTED(name, code) \
> > + do { \
> > + if (riscv_insn_is_##name(code)) { \
> > + return INSN_REJECTED; \
> > + } \
> > + } while (0)
> > +
> > +__RISCV_INSN_FUNCS(system, 0x7f, 0x73)
> > +__RISCV_INSN_FUNCS(fence, 0x7f, 0x0f)
> > +
> > +#define RISCV_INSN_SET_SIMULATE(name, code) \
> > + do { \
> > + if (riscv_insn_is_##name(code)) { \
> > + api->handler = simulate_##name; \
> > + return INSN_GOOD_NO_SLOT; \
> > + } \
> > + } while (0)
> > +
> > +__RISCV_INSN_FUNCS(c_j, 0xe003, 0xa001)
> > +__RISCV_INSN_FUNCS(c_jr, 0xf007, 0x8002)
> > +__RISCV_INSN_FUNCS(c_jal, 0xe003, 0x2001)
> > +__RISCV_INSN_FUNCS(c_jalr, 0xf007, 0x9002)
> > +__RISCV_INSN_FUNCS(c_beqz, 0xe003, 0xc001)
> > +__RISCV_INSN_FUNCS(c_bnez, 0xe003, 0xe001)
> > +__RISCV_INSN_FUNCS(c_ebreak, 0xffff, 0x9002)
> > +
> > +__RISCV_INSN_FUNCS(auipc, 0x7f, 0x17)
> > +__RISCV_INSN_FUNCS(branch, 0x7f, 0x63)
> > +
> > +__RISCV_INSN_FUNCS(jal, 0x7f, 0x6f)
> > +__RISCV_INSN_FUNCS(jalr, 0x707f, 0x67)
> > +
> > +#endif /* _RISCV_KERNEL_PROBES_SIMULATE_INSN_H */
> > diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
> > index ecec177..ac2e786 100644
> > --- a/arch/riscv/kernel/traps.c
> > +++ b/arch/riscv/kernel/traps.c
> > @@ -12,6 +12,7 @@
> > #include <linux/signal.h>
> > #include <linux/kdebug.h>
> > #include <linux/uaccess.h>
> > +#include <linux/kprobes.h>
> > #include <linux/mm.h>
> > #include <linux/module.h>
> > #include <linux/irq.h>
> > @@ -145,6 +146,14 @@ static inline unsigned long get_break_insn_length(unsigned long pc)
> >
> > asmlinkage __visible void do_trap_break(struct pt_regs *regs)
> > {
> > +#ifdef CONFIG_KPROBES
> > + if (kprobe_single_step_handler(regs))
> > + return;
> > +
> > + if (kprobe_breakpoint_handler(regs))
> > + return;
> > +#endif
> > +
> > if (user_mode(regs))
> > force_sig_fault(SIGTRAP, TRAP_BRKPT, (void __user *)regs->epc);
> > #ifdef CONFIG_KGDB
> > diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
> > index ae7b7fe..da0c08c 100644
> > --- a/arch/riscv/mm/fault.c
> > +++ b/arch/riscv/mm/fault.c
> > @@ -13,6 +13,7 @@
> > #include <linux/perf_event.h>
> > #include <linux/signal.h>
> > #include <linux/uaccess.h>
> > +#include <linux/kprobes.h>
> >
> > #include <asm/pgalloc.h>
> > #include <asm/ptrace.h>
> > @@ -40,6 +41,9 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
> > tsk = current;
> > mm = tsk->mm;
> >
> > + if (kprobe_page_fault(regs, cause))
> > + return;
> > +
> > /*
> > * Fault-in kernel-space virtual memory on-demand.
> > * The 'reference' page table is init_mm.pgd.
> > --
> > 2.7.4
> >
>
>
> --
> Masami Hiramatsu <[email protected]>
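
The J-type immediate reassembly in simulate_jal() above is easy to get
wrong because the bits are permuted (imm[20|10:1|11|19:12] live in
instruction bits 31..12). A standalone sketch of the same shifts, kept
separate from the kernel sources so it can be checked against known
encodings:

```c
#include <assert.h>
#include <stdint.h>

/* Reassemble the JAL immediate with the same shifts as simulate_jal(). */
static int32_t jal_imm(uint32_t opcode)
{
	uint32_t imm;

	imm  = ((opcode >> 21) & 0x3ff) << 1;  /* imm[10:1]  from bits 30:21 */
	imm |= ((opcode >> 20) & 0x1) << 11;   /* imm[11]    from bit  20    */
	imm |= ((opcode >> 12) & 0xff) << 12;  /* imm[19:12] from bits 19:12 */
	imm |= ((opcode >> 31) & 0x1) << 20;   /* imm[20]    from bit  31    */

	/* sign-extend from bit 20, as sign_extend32(imm, 20) does */
	return (int32_t)(imm << 11) >> 11;
}
```

`jal x0, 8` encodes as 0x0080006f and decodes back to +8; `jal x0, -4`
(0xffdff06f) decodes to -4, so the sign extension also covers backward
jumps.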

2020-07-07 08:54:53

by Guo Ren

[permalink] [raw]
Subject: Re: [PATCH V1 4/5] riscv: Add kprobes supported

On Tue, Jul 7, 2020 at 4:26 PM Zong Li <[email protected]> wrote:
>
> On Mon, Jul 6, 2020 at 6:12 PM Masami Hiramatsu <[email protected]> wrote:
> >
> > Hi Guo,
> >
> > On Sat, 4 Jul 2020 03:34:18 +0000
> > [email protected] wrote:
> >
> > > From: Guo Ren <[email protected]>
> > >
> > > This patch enables "kprobe & kretprobe" to work with the ftrace
> > > interface. It utilizes a software breakpoint as the single-step
> > > mechanism.
> > >
> > > Some instructions which can't be single-stepped must be simulated
> > > in a kernel execution slot, such as: branch, jal, auipc, la ...
> > >
> > > Some instructions should be rejected for probing, so we filter
> > > them with a blacklist, such as: ecall, ebreak, ...
> > >
> > > We use ebreak & c.ebreak to replace the original instruction, and
> > > the kprobe handler prepares an executable memory slot for
> > > out-of-line execution holding a copy of the original instruction
> > > being probed. In the execution slot we add an ebreak behind the
> > > original instruction to simulate a single-step mechanism.
> > >
> > > The patch is based on packi's work [1] and csky's work [2].
> > > - The kprobes_trampoline.S is all from packi's patch
> > > - The single-step mechanism is newly designed for riscv, which has
> > > no hw single-step trap
> > > - The simulation code is from csky
> > > - Frankly, all the code refers to other arches' implementations
> > >
> > > [1] https://lore.kernel.org/linux-riscv/[email protected]/
> > > [2] https://lore.kernel.org/linux-csky/[email protected]/
> > >
> >
> > This looks good to me. Thanks for updating !
> >
> > Acked-by: Masami Hiramatsu <[email protected]>
> >
> > Thank you,
> >
>
> It works for me. Thanks!
>
> Tested-by: Zong Li <[email protected]>

Thank you!

>
> >
> > > Signed-off-by: Guo Ren <[email protected]>
> > > Co-developed-by: Patrick Stählin <[email protected]>
> > > Cc: Patrick Stählin <[email protected]>
> > > Cc: Masami Hiramatsu <[email protected]>
> > > Cc: Palmer Dabbelt <[email protected]>
> > > Cc: Björn Töpel <[email protected]>
> > > ---
> > > arch/riscv/Kconfig | 2 +
> > > arch/riscv/include/asm/kprobes.h | 40 +++
> > > arch/riscv/include/asm/probes.h | 24 ++
> > > arch/riscv/kernel/Makefile | 1 +
> > > arch/riscv/kernel/probes/Makefile | 4 +
> > > arch/riscv/kernel/probes/decode-insn.c | 48 +++
> > > arch/riscv/kernel/probes/decode-insn.h | 18 +
> > > arch/riscv/kernel/probes/kprobes.c | 471 ++++++++++++++++++++++++++
> > > arch/riscv/kernel/probes/kprobes_trampoline.S | 93 +++++
> > > arch/riscv/kernel/probes/simulate-insn.c | 85 +++++
> > > arch/riscv/kernel/probes/simulate-insn.h | 47 +++
> > > arch/riscv/kernel/traps.c | 9 +
> > > arch/riscv/mm/fault.c | 4 +
> > > 13 files changed, 846 insertions(+)
> > > create mode 100644 arch/riscv/include/asm/probes.h
> > > create mode 100644 arch/riscv/kernel/probes/Makefile
> > > create mode 100644 arch/riscv/kernel/probes/decode-insn.c
> > > create mode 100644 arch/riscv/kernel/probes/decode-insn.h
> > > create mode 100644 arch/riscv/kernel/probes/kprobes.c
> > > create mode 100644 arch/riscv/kernel/probes/kprobes_trampoline.S
> > > create mode 100644 arch/riscv/kernel/probes/simulate-insn.c
> > > create mode 100644 arch/riscv/kernel/probes/simulate-insn.h
> > >
> > > diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> > > index 58d6f66..a295f0b 100644
> > > --- a/arch/riscv/Kconfig
> > > +++ b/arch/riscv/Kconfig
> > > @@ -57,6 +57,8 @@ config RISCV
> > > select HAVE_EBPF_JIT if MMU
> > > select HAVE_FUTEX_CMPXCHG if FUTEX
> > > select HAVE_GENERIC_VDSO if MMU && 64BIT
> > > + select HAVE_KPROBES
> > > + select HAVE_KRETPROBES
> > > select HAVE_PCI
> > > select HAVE_PERF_EVENTS
> > > select HAVE_PERF_REGS
> > > diff --git a/arch/riscv/include/asm/kprobes.h b/arch/riscv/include/asm/kprobes.h
> > > index 56a98ea3..4647d38 100644
> > > --- a/arch/riscv/include/asm/kprobes.h
> > > +++ b/arch/riscv/include/asm/kprobes.h
> > > @@ -11,4 +11,44 @@
> > >
> > > #include <asm-generic/kprobes.h>
> > >
> > > +#ifdef CONFIG_KPROBES
> > > +#include <linux/types.h>
> > > +#include <linux/ptrace.h>
> > > +#include <linux/percpu.h>
> > > +
> > > +#define __ARCH_WANT_KPROBES_INSN_SLOT
> > > +#define MAX_INSN_SIZE 2
> > > +
> > > +#define flush_insn_slot(p) do { } while (0)
> > > +#define kretprobe_blacklist_size 0
> > > +
> > > +#include <asm/probes.h>
> > > +
> > > +struct prev_kprobe {
> > > + struct kprobe *kp;
> > > + unsigned int status;
> > > +};
> > > +
> > > +/* Single step context for kprobe */
> > > +struct kprobe_step_ctx {
> > > + unsigned long ss_pending;
> > > + unsigned long match_addr;
> > > +};
> > > +
> > > +/* per-cpu kprobe control block */
> > > +struct kprobe_ctlblk {
> > > + unsigned int kprobe_status;
> > > + unsigned long saved_status;
> > > + struct prev_kprobe prev_kprobe;
> > > + struct kprobe_step_ctx ss_ctx;
> > > +};
> > > +
> > > +void arch_remove_kprobe(struct kprobe *p);
> > > +int kprobe_fault_handler(struct pt_regs *regs, unsigned int trapnr);
> > > +bool kprobe_breakpoint_handler(struct pt_regs *regs);
> > > +bool kprobe_single_step_handler(struct pt_regs *regs);
> > > +void kretprobe_trampoline(void);
> > > +void __kprobes *trampoline_probe_handler(struct pt_regs *regs);
> > > +
> > > +#endif /* CONFIG_KPROBES */
> > > #endif /* _ASM_RISCV_KPROBES_H */
> > > diff --git a/arch/riscv/include/asm/probes.h b/arch/riscv/include/asm/probes.h
> > > new file mode 100644
> > > index 00000000..a787e6d
> > > --- /dev/null
> > > +++ b/arch/riscv/include/asm/probes.h
> > > @@ -0,0 +1,24 @@
> > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > +
> > > +#ifndef _ASM_RISCV_PROBES_H
> > > +#define _ASM_RISCV_PROBES_H
> > > +
> > > +typedef u32 probe_opcode_t;
> > > +typedef bool (probes_handler_t) (u32 opcode, unsigned long addr, struct pt_regs *);
> > > +
> > > +/* architecture specific copy of original instruction */
> > > +struct arch_probe_insn {
> > > + probe_opcode_t *insn;
> > > + probes_handler_t *handler;
> > > + /* restore address after simulation */
> > > + unsigned long restore;
> > > +};
> > > +
> > > +#ifdef CONFIG_KPROBES
> > > +typedef u32 kprobe_opcode_t;
> > > +struct arch_specific_insn {
> > > + struct arch_probe_insn api;
> > > +};
> > > +#endif
> > > +
> > > +#endif /* _ASM_RISCV_PROBES_H */
> > > diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
> > > index b355cf4..c3fff3e 100644
> > > --- a/arch/riscv/kernel/Makefile
> > > +++ b/arch/riscv/kernel/Makefile
> > > @@ -29,6 +29,7 @@ obj-y += riscv_ksyms.o
> > > obj-y += stacktrace.o
> > > obj-y += cacheinfo.o
> > > obj-y += patch.o
> > > +obj-y += probes/
> > > obj-$(CONFIG_MMU) += vdso.o vdso/
> > >
> > > obj-$(CONFIG_RISCV_M_MODE) += clint.o traps_misaligned.o
> > > diff --git a/arch/riscv/kernel/probes/Makefile b/arch/riscv/kernel/probes/Makefile
> > > new file mode 100644
> > > index 00000000..8a39507
> > > --- /dev/null
> > > +++ b/arch/riscv/kernel/probes/Makefile
> > > @@ -0,0 +1,4 @@
> > > +# SPDX-License-Identifier: GPL-2.0
> > > +obj-$(CONFIG_KPROBES) += kprobes.o decode-insn.o simulate-insn.o
> > > +obj-$(CONFIG_KPROBES) += kprobes_trampoline.o
> > > +CFLAGS_REMOVE_simulate-insn.o = $(CC_FLAGS_FTRACE)
> > > diff --git a/arch/riscv/kernel/probes/decode-insn.c b/arch/riscv/kernel/probes/decode-insn.c
> > > new file mode 100644
> > > index 00000000..0876c30
> > > --- /dev/null
> > > +++ b/arch/riscv/kernel/probes/decode-insn.c
> > > @@ -0,0 +1,48 @@
> > > +// SPDX-License-Identifier: GPL-2.0+
> > > +
> > > +#include <linux/kernel.h>
> > > +#include <linux/kprobes.h>
> > > +#include <linux/module.h>
> > > +#include <linux/kallsyms.h>
> > > +#include <asm/sections.h>
> > > +
> > > +#include "decode-insn.h"
> > > +#include "simulate-insn.h"
> > > +
> > > +/* Return:
> > > + * INSN_REJECTED if the instruction is not allowed to be kprobed,
> > > + * INSN_GOOD_NO_SLOT if the instruction is supported but doesn't use its slot.
> > > + */
> > > +enum probe_insn __kprobes
> > > +riscv_probe_decode_insn(probe_opcode_t *addr, struct arch_probe_insn *api)
> > > +{
> > > + probe_opcode_t insn = le32_to_cpu(*addr);
> > > +
> > > + /*
> > > + * Reject instructions list:
> > > + */
> > > + RISCV_INSN_REJECTED(system, insn);
> > > + RISCV_INSN_REJECTED(fence, insn);
> > > +
> > > + /*
> > > + * Simulate instructions list:
> > > + * TODO: the REJECTED ones below need to be implemented
> > > + */
> > > +#ifdef CONFIG_RISCV_ISA_C
> > > + RISCV_INSN_REJECTED(c_j, insn);
> > > + RISCV_INSN_REJECTED(c_jr, insn);
> > > + RISCV_INSN_REJECTED(c_jal, insn);
> > > + RISCV_INSN_REJECTED(c_jalr, insn);
> > > + RISCV_INSN_REJECTED(c_beqz, insn);
> > > + RISCV_INSN_REJECTED(c_bnez, insn);
> > > + RISCV_INSN_REJECTED(c_ebreak, insn);
> > > +#endif
> > > +
> > > + RISCV_INSN_REJECTED(auipc, insn);
> > > + RISCV_INSN_REJECTED(branch, insn);
> > > +
> > > + RISCV_INSN_SET_SIMULATE(jal, insn);
> > > + RISCV_INSN_SET_SIMULATE(jalr, insn);
> > > +
> > > + return INSN_GOOD;
> > > +}
> > > diff --git a/arch/riscv/kernel/probes/decode-insn.h b/arch/riscv/kernel/probes/decode-insn.h
> > > new file mode 100644
> > > index 00000000..42269a7
> > > --- /dev/null
> > > +++ b/arch/riscv/kernel/probes/decode-insn.h
> > > @@ -0,0 +1,18 @@
> > > +/* SPDX-License-Identifier: GPL-2.0+ */
> > > +
> > > +#ifndef _RISCV_KERNEL_KPROBES_DECODE_INSN_H
> > > +#define _RISCV_KERNEL_KPROBES_DECODE_INSN_H
> > > +
> > > +#include <asm/sections.h>
> > > +#include <asm/kprobes.h>
> > > +
> > > +enum probe_insn {
> > > + INSN_REJECTED,
> > > + INSN_GOOD_NO_SLOT,
> > > + INSN_GOOD,
> > > +};
> > > +
> > > +enum probe_insn __kprobes
> > > +riscv_probe_decode_insn(probe_opcode_t *addr, struct arch_probe_insn *asi);
> > > +
> > > +#endif /* _RISCV_KERNEL_KPROBES_DECODE_INSN_H */
> > > diff --git a/arch/riscv/kernel/probes/kprobes.c b/arch/riscv/kernel/probes/kprobes.c
> > > new file mode 100644
> > > index 00000000..31b6196
> > > --- /dev/null
> > > +++ b/arch/riscv/kernel/probes/kprobes.c
> > > @@ -0,0 +1,471 @@
> > > +// SPDX-License-Identifier: GPL-2.0+
> > > +
> > > +#include <linux/kprobes.h>
> > > +#include <linux/extable.h>
> > > +#include <linux/slab.h>
> > > +#include <linux/stop_machine.h>
> > > +#include <asm/ptrace.h>
> > > +#include <linux/uaccess.h>
> > > +#include <asm/sections.h>
> > > +#include <asm/cacheflush.h>
> > > +#include <asm/bug.h>
> > > +#include <asm/patch.h>
> > > +
> > > +#include "decode-insn.h"
> > > +
> > > +DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
> > > +DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
> > > +
> > > +static void __kprobes
> > > +post_kprobe_handler(struct kprobe_ctlblk *, struct pt_regs *);
> > > +
> > > +static void __kprobes arch_prepare_ss_slot(struct kprobe *p)
> > > +{
> > > + unsigned long offset = GET_INSN_LENGTH(p->opcode);
> > > +
> > > + p->ainsn.api.restore = (unsigned long)p->addr + offset;
> > > +
> > > + patch_text(p->ainsn.api.insn, p->opcode);
> > > + patch_text((void *)((unsigned long)(p->ainsn.api.insn) + offset),
> > > + __BUG_INSN_32);
> > > +}
> > > +
> > > +static void __kprobes arch_prepare_simulate(struct kprobe *p)
> > > +{
> > > + p->ainsn.api.restore = 0;
> > > +}
> > > +
> > > +static void __kprobes arch_simulate_insn(struct kprobe *p, struct pt_regs *regs)
> > > +{
> > > + struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
> > > +
> > > + if (p->ainsn.api.handler)
> > > + p->ainsn.api.handler((u32)p->opcode,
> > > + (unsigned long)p->addr, regs);
> > > +
> > > + post_kprobe_handler(kcb, regs);
> > > +}
> > > +
> > > +int __kprobes arch_prepare_kprobe(struct kprobe *p)
> > > +{
> > > + unsigned long probe_addr = (unsigned long)p->addr;
> > > +
> > > + if (probe_addr & 0x1) {
> > > + pr_warn("Address not aligned.\n");
> > > +
> > > + return -EINVAL;
> > > + }
> > > +
> > > + /* copy instruction */
> > > + p->opcode = le32_to_cpu(*p->addr);
> > > +
> > > + /* decode instruction */
> > > + switch (riscv_probe_decode_insn(p->addr, &p->ainsn.api)) {
> > > + case INSN_REJECTED: /* insn not supported */
> > > + return -EINVAL;
> > > +
> > > + case INSN_GOOD_NO_SLOT: /* insn need simulation */
> > > + p->ainsn.api.insn = NULL;
> > > + break;
> > > +
> > > + case INSN_GOOD: /* instruction uses slot */
> > > + p->ainsn.api.insn = get_insn_slot();
> > > + if (!p->ainsn.api.insn)
> > > + return -ENOMEM;
> > > + break;
> > > + }
> > > +
> > > + /* prepare the instruction */
> > > + if (p->ainsn.api.insn)
> > > + arch_prepare_ss_slot(p);
> > > + else
> > > + arch_prepare_simulate(p);
> > > +
> > > + return 0;
> > > +}
> > > +
> > > +/* install breakpoint in text */
> > > +void __kprobes arch_arm_kprobe(struct kprobe *p)
> > > +{
> > > + if ((p->opcode & __INSN_LENGTH_MASK) == __INSN_LENGTH_32)
> > > + patch_text(p->addr, __BUG_INSN_32);
> > > + else
> > > + patch_text(p->addr, __BUG_INSN_16);
> > > +}
> > > +
> > > +/* remove breakpoint from text */
> > > +void __kprobes arch_disarm_kprobe(struct kprobe *p)
> > > +{
> > > + patch_text(p->addr, p->opcode);
> > > +}
> > > +
> > > +void __kprobes arch_remove_kprobe(struct kprobe *p)
> > > +{
> > > +}
> > > +
> > > +static void __kprobes save_previous_kprobe(struct kprobe_ctlblk *kcb)
> > > +{
> > > + kcb->prev_kprobe.kp = kprobe_running();
> > > + kcb->prev_kprobe.status = kcb->kprobe_status;
> > > +}
> > > +
> > > +static void __kprobes restore_previous_kprobe(struct kprobe_ctlblk *kcb)
> > > +{
> > > + __this_cpu_write(current_kprobe, kcb->prev_kprobe.kp);
> > > + kcb->kprobe_status = kcb->prev_kprobe.status;
> > > +}
> > > +
> > > +static void __kprobes set_current_kprobe(struct kprobe *p)
> > > +{
> > > + __this_cpu_write(current_kprobe, p);
> > > +}
> > > +
> > > +/*
> > > + * Interrupts need to be disabled before single-step mode is set, and not
> > > + * reenabled until after single-step mode ends.
> > > + * Without disabling interrupts on the local CPU, there is a chance of
> > > + * an interrupt occurring between the exception return and the start of
> > > + * the out-of-line single-step, which would result in wrongly single
> > > + * stepping into the interrupt handler.
> > > + */
> > > +static void __kprobes kprobes_save_local_irqflag(struct kprobe_ctlblk *kcb,
> > > + struct pt_regs *regs)
> > > +{
> > > + kcb->saved_status = regs->status;
> > > + regs->status &= ~SR_SPIE;
> > > +}
> > > +
> > > +static void __kprobes kprobes_restore_local_irqflag(struct kprobe_ctlblk *kcb,
> > > + struct pt_regs *regs)
> > > +{
> > > + regs->status = kcb->saved_status;
> > > +}
> > > +
> > > +static void __kprobes
> > > +set_ss_context(struct kprobe_ctlblk *kcb, unsigned long addr, struct kprobe *p)
> > > +{
> > > + unsigned long offset = GET_INSN_LENGTH(p->opcode);
> > > +
> > > + kcb->ss_ctx.ss_pending = true;
> > > + kcb->ss_ctx.match_addr = addr + offset;
> > > +}
> > > +
> > > +static void __kprobes clear_ss_context(struct kprobe_ctlblk *kcb)
> > > +{
> > > + kcb->ss_ctx.ss_pending = false;
> > > + kcb->ss_ctx.match_addr = 0;
> > > +}
> > > +
> > > +static void __kprobes setup_singlestep(struct kprobe *p,
> > > + struct pt_regs *regs,
> > > + struct kprobe_ctlblk *kcb, int reenter)
> > > +{
> > > + unsigned long slot;
> > > +
> > > + if (reenter) {
> > > + save_previous_kprobe(kcb);
> > > + set_current_kprobe(p);
> > > + kcb->kprobe_status = KPROBE_REENTER;
> > > + } else {
> > > + kcb->kprobe_status = KPROBE_HIT_SS;
> > > + }
> > > +
> > > + if (p->ainsn.api.insn) {
> > > + /* prepare for single stepping */
> > > + slot = (unsigned long)p->ainsn.api.insn;
> > > +
> > > + set_ss_context(kcb, slot, p); /* mark pending ss */
> > > +
> > > + /* IRQs and single stepping do not mix well. */
> > > + kprobes_save_local_irqflag(kcb, regs);
> > > +
> > > + instruction_pointer_set(regs, slot);
> > > + } else {
> > > + /* insn simulation */
> > > + arch_simulate_insn(p, regs);
> > > + }
> > > +}
> > > +
> > > +static int __kprobes reenter_kprobe(struct kprobe *p,
> > > + struct pt_regs *regs,
> > > + struct kprobe_ctlblk *kcb)
> > > +{
> > > + switch (kcb->kprobe_status) {
> > > + case KPROBE_HIT_SSDONE:
> > > + case KPROBE_HIT_ACTIVE:
> > > + kprobes_inc_nmissed_count(p);
> > > + setup_singlestep(p, regs, kcb, 1);
> > > + break;
> > > + case KPROBE_HIT_SS:
> > > + case KPROBE_REENTER:
> > > + pr_warn("Unrecoverable kprobe detected.\n");
> > > + dump_kprobe(p);
> > > + BUG();
> > > + break;
> > > + default:
> > > + WARN_ON(1);
> > > + return 0;
> > > + }
> > > +
> > > + return 1;
> > > +}
> > > +
> > > +static void __kprobes
> > > +post_kprobe_handler(struct kprobe_ctlblk *kcb, struct pt_regs *regs)
> > > +{
> > > + struct kprobe *cur = kprobe_running();
> > > +
> > > + if (!cur)
> > > + return;
> > > +
> > > + /* return addr restore if non-branching insn */
> > > + if (cur->ainsn.api.restore != 0)
> > > + regs->epc = cur->ainsn.api.restore;
> > > +
> > > + /* restore back original saved kprobe variables and continue */
> > > + if (kcb->kprobe_status == KPROBE_REENTER) {
> > > + restore_previous_kprobe(kcb);
> > > + return;
> > > + }
> > > +
> > > + /* call post handler */
> > > + kcb->kprobe_status = KPROBE_HIT_SSDONE;
> > > + if (cur->post_handler) {
> > > + /* post_handler can hit a breakpoint and single step
> > > + * again, so it must be able to handle a recursive
> > > + * exception.
> > > + */
> > > + cur->post_handler(cur, regs, 0);
> > > + }
> > > +
> > > + reset_current_kprobe();
> > > +}
> > > +
> > > +int __kprobes kprobe_fault_handler(struct pt_regs *regs, unsigned int trapnr)
> > > +{
> > > + struct kprobe *cur = kprobe_running();
> > > + struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
> > > +
> > > + switch (kcb->kprobe_status) {
> > > + case KPROBE_HIT_SS:
> > > + case KPROBE_REENTER:
> > > + /*
> > > + * We are here because the instruction being single
> > > + * stepped caused a page fault. We reset the current
> > > + * kprobe, point the ip back to the probe address,
> > > + * and allow the page fault handler to continue as a
> > > + * normal page fault.
> > > + */
> > > + regs->epc = (unsigned long) cur->addr;
> > > + if (!instruction_pointer(regs))
> > > + BUG();
> > > +
> > > + if (kcb->kprobe_status == KPROBE_REENTER)
> > > + restore_previous_kprobe(kcb);
> > > + else
> > > + reset_current_kprobe();
> > > +
> > > + break;
> > > + case KPROBE_HIT_ACTIVE:
> > > + case KPROBE_HIT_SSDONE:
> > > + /*
> > > + * We increment the nmissed count for accounting;
> > > + * we could also use the npre/npostfault counts for
> > > + * accounting these specific fault cases.
> > > + */
> > > + kprobes_inc_nmissed_count(cur);
> > > +
> > > + /*
> > > + * We come here because instructions in the pre/post
> > > + * handler caused the page fault; this could happen
> > > + * if the handler tries to access user space by
> > > + * copy_from_user(), get_user() etc. Let the
> > > + * user-specified handler try to fix it first.
> > > + */
> > > + if (cur->fault_handler && cur->fault_handler(cur, regs, trapnr))
> > > + return 1;
> > > +
> > > + /*
> > > + * In case the user-specified fault handler returned
> > > + * zero, try to fix up.
> > > + */
> > > + if (fixup_exception(regs))
> > > + return 1;
> > > + }
> > > + return 0;
> > > +}
> > > +
> > > +bool __kprobes
> > > +kprobe_breakpoint_handler(struct pt_regs *regs)
> > > +{
> > > + struct kprobe *p, *cur_kprobe;
> > > + struct kprobe_ctlblk *kcb;
> > > + unsigned long addr = instruction_pointer(regs);
> > > +
> > > + kcb = get_kprobe_ctlblk();
> > > + cur_kprobe = kprobe_running();
> > > +
> > > + p = get_kprobe((kprobe_opcode_t *) addr);
> > > +
> > > + if (p) {
> > > + if (cur_kprobe) {
> > > + if (reenter_kprobe(p, regs, kcb))
> > > + return true;
> > > + } else {
> > > + /* Probe hit */
> > > + set_current_kprobe(p);
> > > + kcb->kprobe_status = KPROBE_HIT_ACTIVE;
> > > +
> > > + /*
> > > + * If we have no pre-handler or it returned 0, we
> > > + * continue with normal processing. If we have a
> > > + * pre-handler and it returned non-zero, it will
> > > + * modify the execution path and there is no need for
> > > + * single stepping. Let's just reset the current kprobe and exit.
> > > + *
> > > + * pre_handler can hit a breakpoint and can step thru
> > > + * before return.
> > > + */
> > > + if (!p->pre_handler || !p->pre_handler(p, regs))
> > > + setup_singlestep(p, regs, kcb, 0);
> > > + else
> > > + reset_current_kprobe();
> > > + }
> > > + return true;
> > > + }
> > > +
> > > + /*
> > > + * The breakpoint instruction was removed right
> > > + * after we hit it. Another cpu has removed
> > > + * either a probepoint or a debugger breakpoint
> > > + * at this address. In either case, no further
> > > + * handling of this interrupt is appropriate.
> > > + * Return to the original instruction and continue.
> > > + */
> > > + return false;
> > > +}
> > > +
> > > +bool __kprobes
> > > +kprobe_single_step_handler(struct pt_regs *regs)
> > > +{
> > > + struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
> > > +
> > > + if ((kcb->ss_ctx.ss_pending)
> > > + && (kcb->ss_ctx.match_addr == instruction_pointer(regs))) {
> > > + clear_ss_context(kcb); /* clear pending ss */
> > > +
> > > + kprobes_restore_local_irqflag(kcb, regs);
> > > +
> > > + post_kprobe_handler(kcb, regs);
> > > + return true;
> > > + }
> > > + return false;
> > > +}
> > > +
> > > +/*
> > > + * Provide a blacklist of symbols identifying ranges which cannot be kprobed.
> > > + * This blacklist is exposed to userspace via debugfs (kprobes/blacklist).
> > > + */
> > > +int __init arch_populate_kprobe_blacklist(void)
> > > +{
> > > + int ret;
> > > +
> > > + ret = kprobe_add_area_blacklist((unsigned long)__irqentry_text_start,
> > > + (unsigned long)__irqentry_text_end);
> > > + return ret;
> > > +}
> > > +
> > > +void __kprobes __used *trampoline_probe_handler(struct pt_regs *regs)
> > > +{
> > > + struct kretprobe_instance *ri = NULL;
> > > + struct hlist_head *head, empty_rp;
> > > + struct hlist_node *tmp;
> > > + unsigned long flags, orig_ret_address = 0;
> > > + unsigned long trampoline_address =
> > > + (unsigned long)&kretprobe_trampoline;
> > > + kprobe_opcode_t *correct_ret_addr = NULL;
> > > +
> > > + INIT_HLIST_HEAD(&empty_rp);
> > > + kretprobe_hash_lock(current, &head, &flags);
> > > +
> > > + /*
> > > + * It is possible to have multiple instances associated with a given
> > > + * task either because multiple functions in the call path have
> > > + * return probes installed on them, and/or more than one
> > > + * return probe was registered for a target function.
> > > + *
> > > + * We can handle this because:
> > > + * - instances are always pushed into the head of the list
> > > + * - when multiple return probes are registered for the same
> > > + * function, the (chronologically) first instance's ret_addr
> > > + * will be the real return address, and all the rest will
> > > + * point to kretprobe_trampoline.
> > > + */
> > > + hlist_for_each_entry_safe(ri, tmp, head, hlist) {
> > > + if (ri->task != current)
> > > + /* another task is sharing our hash bucket */
> > > + continue;
> > > +
> > > + orig_ret_address = (unsigned long)ri->ret_addr;
> > > +
> > > + if (orig_ret_address != trampoline_address)
> > > + /*
> > > + * This is the real return address. Any other
> > > + * instances associated with this task are for
> > > + * other calls deeper on the call stack
> > > + */
> > > + break;
> > > + }
> > > +
> > > + kretprobe_assert(ri, orig_ret_address, trampoline_address);
> > > +
> > > + correct_ret_addr = ri->ret_addr;
> > > + hlist_for_each_entry_safe(ri, tmp, head, hlist) {
> > > + if (ri->task != current)
> > > + /* another task is sharing our hash bucket */
> > > + continue;
> > > +
> > > + orig_ret_address = (unsigned long)ri->ret_addr;
> > > + if (ri->rp && ri->rp->handler) {
> > > + __this_cpu_write(current_kprobe, &ri->rp->kp);
> > > + get_kprobe_ctlblk()->kprobe_status = KPROBE_HIT_ACTIVE;
> > > + ri->ret_addr = correct_ret_addr;
> > > + ri->rp->handler(ri, regs);
> > > + __this_cpu_write(current_kprobe, NULL);
> > > + }
> > > +
> > > + recycle_rp_inst(ri, &empty_rp);
> > > +
> > > + if (orig_ret_address != trampoline_address)
> > > + /*
> > > + * This is the real return address. Any other
> > > + * instances associated with this task are for
> > > + * other calls deeper on the call stack
> > > + */
> > > + break;
> > > + }
> > > +
> > > + kretprobe_hash_unlock(current, &flags);
> > > +
> > > + hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) {
> > > + hlist_del(&ri->hlist);
> > > + kfree(ri);
> > > + }
> > > + return (void *)orig_ret_address;
> > > +}
> > > +
> > > +void __kprobes arch_prepare_kretprobe(struct kretprobe_instance *ri,
> > > + struct pt_regs *regs)
> > > +{
> > > + ri->ret_addr = (kprobe_opcode_t *)regs->ra;
> > > + regs->ra = (unsigned long) &kretprobe_trampoline;
> > > +}
> > > +
> > > +int __kprobes arch_trampoline_kprobe(struct kprobe *p)
> > > +{
> > > + return 0;
> > > +}
> > > +
> > > +int __init arch_init_kprobes(void)
> > > +{
> > > + return 0;
> > > +}
> > > diff --git a/arch/riscv/kernel/probes/kprobes_trampoline.S b/arch/riscv/kernel/probes/kprobes_trampoline.S
> > > new file mode 100644
> > > index 00000000..6e85d02
> > > --- /dev/null
> > > +++ b/arch/riscv/kernel/probes/kprobes_trampoline.S
> > > @@ -0,0 +1,93 @@
> > > +/* SPDX-License-Identifier: GPL-2.0+ */
> > > +/*
> > > + * Author: Patrick Stählin <[email protected]>
> > > + */
> > > +#include <linux/linkage.h>
> > > +
> > > +#include <asm/asm.h>
> > > +#include <asm/asm-offsets.h>
> > > +
> > > + .text
> > > + .altmacro
> > > +
> > > + .macro save_all_base_regs
> > > + REG_S x1, PT_RA(sp)
> > > + REG_S x3, PT_GP(sp)
> > > + REG_S x4, PT_TP(sp)
> > > + REG_S x5, PT_T0(sp)
> > > + REG_S x6, PT_T1(sp)
> > > + REG_S x7, PT_T2(sp)
> > > + REG_S x8, PT_S0(sp)
> > > + REG_S x9, PT_S1(sp)
> > > + REG_S x10, PT_A0(sp)
> > > + REG_S x11, PT_A1(sp)
> > > + REG_S x12, PT_A2(sp)
> > > + REG_S x13, PT_A3(sp)
> > > + REG_S x14, PT_A4(sp)
> > > + REG_S x15, PT_A5(sp)
> > > + REG_S x16, PT_A6(sp)
> > > + REG_S x17, PT_A7(sp)
> > > + REG_S x18, PT_S2(sp)
> > > + REG_S x19, PT_S3(sp)
> > > + REG_S x20, PT_S4(sp)
> > > + REG_S x21, PT_S5(sp)
> > > + REG_S x22, PT_S6(sp)
> > > + REG_S x23, PT_S7(sp)
> > > + REG_S x24, PT_S8(sp)
> > > + REG_S x25, PT_S9(sp)
> > > + REG_S x26, PT_S10(sp)
> > > + REG_S x27, PT_S11(sp)
> > > + REG_S x28, PT_T3(sp)
> > > + REG_S x29, PT_T4(sp)
> > > + REG_S x30, PT_T5(sp)
> > > + REG_S x31, PT_T6(sp)
> > > + .endm
> > > +
> > > + .macro restore_all_base_regs
> > > + REG_L x3, PT_GP(sp)
> > > + REG_L x4, PT_TP(sp)
> > > + REG_L x5, PT_T0(sp)
> > > + REG_L x6, PT_T1(sp)
> > > + REG_L x7, PT_T2(sp)
> > > + REG_L x8, PT_S0(sp)
> > > + REG_L x9, PT_S1(sp)
> > > + REG_L x10, PT_A0(sp)
> > > + REG_L x11, PT_A1(sp)
> > > + REG_L x12, PT_A2(sp)
> > > + REG_L x13, PT_A3(sp)
> > > + REG_L x14, PT_A4(sp)
> > > + REG_L x15, PT_A5(sp)
> > > + REG_L x16, PT_A6(sp)
> > > + REG_L x17, PT_A7(sp)
> > > + REG_L x18, PT_S2(sp)
> > > + REG_L x19, PT_S3(sp)
> > > + REG_L x20, PT_S4(sp)
> > > + REG_L x21, PT_S5(sp)
> > > + REG_L x22, PT_S6(sp)
> > > + REG_L x23, PT_S7(sp)
> > > + REG_L x24, PT_S8(sp)
> > > + REG_L x25, PT_S9(sp)
> > > + REG_L x26, PT_S10(sp)
> > > + REG_L x27, PT_S11(sp)
> > > + REG_L x28, PT_T3(sp)
> > > + REG_L x29, PT_T4(sp)
> > > + REG_L x30, PT_T5(sp)
> > > + REG_L x31, PT_T6(sp)
> > > + .endm
> > > +
> > > +ENTRY(kretprobe_trampoline)
> > > + addi sp, sp, -(PT_SIZE_ON_STACK)
> > > + save_all_base_regs
> > > +
> > > + move a0, sp /* pt_regs */
> > > +
> > > + call trampoline_probe_handler
> > > +
> > > + /* use the result as the return-address */
> > > + move ra, a0
> > > +
> > > + restore_all_base_regs
> > > + addi sp, sp, PT_SIZE_ON_STACK
> > > +
> > > + ret
> > > +ENDPROC(kretprobe_trampoline)
> > > diff --git a/arch/riscv/kernel/probes/simulate-insn.c b/arch/riscv/kernel/probes/simulate-insn.c
> > > new file mode 100644
> > > index 00000000..2519ce2
> > > --- /dev/null
> > > +++ b/arch/riscv/kernel/probes/simulate-insn.c
> > > @@ -0,0 +1,85 @@
> > > +// SPDX-License-Identifier: GPL-2.0+
> > > +
> > > +#include <linux/bitops.h>
> > > +#include <linux/kernel.h>
> > > +#include <linux/kprobes.h>
> > > +
> > > +#include "decode-insn.h"
> > > +#include "simulate-insn.h"
> > > +
> > > +static inline bool rv_insn_reg_get_val(struct pt_regs *regs, u32 index,
> > > + unsigned long *ptr)
> > > +{
> > > + if (index == 0)
> > > + *ptr = 0;
> > > + else if (index <= 31)
> > > + *ptr = *((unsigned long *)regs + index);
> > > + else
> > > + return false;
> > > +
> > > + return true;
> > > +}
> > > +
> > > +static inline bool rv_insn_reg_set_val(struct pt_regs *regs, u32 index,
> > > + unsigned long val)
> > > +{
> > > + if (index == 0)
> > > + return false;
> > > + else if (index <= 31)
> > > + *((unsigned long *)regs + index) = val;
> > > + else
> > > + return false;
> > > +
> > > + return true;
> > > +}
> > > +
> > > +bool __kprobes simulate_jal(u32 opcode, unsigned long addr, struct pt_regs *regs)
> > > +{
> > > + /*
> > > + * 31 30 21 20 19 12 11 7 6 0
> > > + * imm [20] | imm[10:1] | imm[11] | imm[19:12] | rd | opcode
> > > + * 1 10 1 8 5 JAL/J
> > > + */
> > > + bool ret;
> > > + u32 imm;
> > > + u32 index = (opcode >> 7) & 0x1f;
> > > +
> > > + ret = rv_insn_reg_set_val(regs, index, addr + 4);
> > > + if (!ret)
> > > + return ret;
> > > +
> > > + imm = ((opcode >> 21) & 0x3ff) << 1;
> > > + imm |= ((opcode >> 20) & 0x1) << 11;
> > > + imm |= ((opcode >> 12) & 0xff) << 12;
> > > + imm |= ((opcode >> 31) & 0x1) << 20;
> > > +
> > > + instruction_pointer_set(regs, addr + sign_extend32((imm), 20));
> > > +
> > > + return ret;
> > > +}
> > > +
> > > +bool __kprobes simulate_jalr(u32 opcode, unsigned long addr, struct pt_regs *regs)
> > > +{
> > > + /*
> > > + * 31 20 19 15 14 12 11 7 6 0
> > > + * offset[11:0] | rs1 | 010 | rd | opcode
> > > + * 12 5 3 5 JALR/JR
> > > + */
> > > + bool ret;
> > > + unsigned long base_addr;
> > > + u32 imm = (opcode >> 20) & 0xfff;
> > > + u32 rd_index = (opcode >> 7) & 0x1f;
> > > + u32 rs1_index = (opcode >> 15) & 0x1f;
> > > +
> > > + ret = rv_insn_reg_set_val(regs, rd_index, addr + 4);
> > > + if (!ret)
> > > + return ret;
> > > +
> > > + ret = rv_insn_reg_get_val(regs, rs1_index, &base_addr);
> > > + if (!ret)
> > > + return ret;
> > > +
> > > + instruction_pointer_set(regs, (base_addr + sign_extend32((imm), 11)) & ~1);
> > > +
> > > + return ret;
> > > +}
> > > diff --git a/arch/riscv/kernel/probes/simulate-insn.h b/arch/riscv/kernel/probes/simulate-insn.h
> > > new file mode 100644
> > > index 00000000..a62d784
> > > --- /dev/null
> > > +++ b/arch/riscv/kernel/probes/simulate-insn.h
> > > @@ -0,0 +1,47 @@
> > > +/* SPDX-License-Identifier: GPL-2.0+ */
> > > +
> > > +#ifndef _RISCV_KERNEL_PROBES_SIMULATE_INSN_H
> > > +#define _RISCV_KERNEL_PROBES_SIMULATE_INSN_H
> > > +
> > > +#define __RISCV_INSN_FUNCS(name, mask, val) \
> > > +static __always_inline bool riscv_insn_is_##name(probe_opcode_t code) \
> > > +{ \
> > > + BUILD_BUG_ON(~(mask) & (val)); \
> > > + return (code & (mask)) == (val); \
> > > +} \
> > > +bool simulate_##name(u32 opcode, unsigned long addr, \
> > > + struct pt_regs *regs);
> > > +
> > > +#define RISCV_INSN_REJECTED(name, code) \
> > > + do { \
> > > + if (riscv_insn_is_##name(code)) { \
> > > + return INSN_REJECTED; \
> > > + } \
> > > + } while (0)
> > > +
> > > +__RISCV_INSN_FUNCS(system, 0x7f, 0x73)
> > > +__RISCV_INSN_FUNCS(fence, 0x7f, 0x0f)
> > > +
> > > +#define RISCV_INSN_SET_SIMULATE(name, code) \
> > > + do { \
> > > + if (riscv_insn_is_##name(code)) { \
> > > + api->handler = simulate_##name; \
> > > + return INSN_GOOD_NO_SLOT; \
> > > + } \
> > > + } while (0)
> > > +
> > > +__RISCV_INSN_FUNCS(c_j, 0xe003, 0xa001)
> > > +__RISCV_INSN_FUNCS(c_jr, 0xf007, 0x8002)
> > > +__RISCV_INSN_FUNCS(c_jal, 0xe003, 0x2001)
> > > +__RISCV_INSN_FUNCS(c_jalr, 0xf007, 0x9002)
> > > +__RISCV_INSN_FUNCS(c_beqz, 0xe003, 0xc001)
> > > +__RISCV_INSN_FUNCS(c_bnez, 0xe003, 0xe001)
> > > +__RISCV_INSN_FUNCS(c_ebreak, 0xffff, 0x9002)
> > > +
> > > +__RISCV_INSN_FUNCS(auipc, 0x7f, 0x17)
> > > +__RISCV_INSN_FUNCS(branch, 0x7f, 0x63)
> > > +
> > > +__RISCV_INSN_FUNCS(jal, 0x7f, 0x6f)
> > > +__RISCV_INSN_FUNCS(jalr, 0x707f, 0x67)
> > > +
> > > +#endif /* _RISCV_KERNEL_PROBES_SIMULATE_INSN_H */
> > > diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
> > > index ecec177..ac2e786 100644
> > > --- a/arch/riscv/kernel/traps.c
> > > +++ b/arch/riscv/kernel/traps.c
> > > @@ -12,6 +12,7 @@
> > > #include <linux/signal.h>
> > > #include <linux/kdebug.h>
> > > #include <linux/uaccess.h>
> > > +#include <linux/kprobes.h>
> > > #include <linux/mm.h>
> > > #include <linux/module.h>
> > > #include <linux/irq.h>
> > > @@ -145,6 +146,14 @@ static inline unsigned long get_break_insn_length(unsigned long pc)
> > >
> > > asmlinkage __visible void do_trap_break(struct pt_regs *regs)
> > > {
> > > +#ifdef CONFIG_KPROBES
> > > + if (kprobe_single_step_handler(regs))
> > > + return;
> > > +
> > > + if (kprobe_breakpoint_handler(regs))
> > > + return;
> > > +#endif
> > > +
> > > if (user_mode(regs))
> > > force_sig_fault(SIGTRAP, TRAP_BRKPT, (void __user *)regs->epc);
> > > #ifdef CONFIG_KGDB
> > > diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
> > > index ae7b7fe..da0c08c 100644
> > > --- a/arch/riscv/mm/fault.c
> > > +++ b/arch/riscv/mm/fault.c
> > > @@ -13,6 +13,7 @@
> > > #include <linux/perf_event.h>
> > > #include <linux/signal.h>
> > > #include <linux/uaccess.h>
> > > +#include <linux/kprobes.h>
> > >
> > > #include <asm/pgalloc.h>
> > > #include <asm/ptrace.h>
> > > @@ -40,6 +41,9 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
> > > tsk = current;
> > > mm = tsk->mm;
> > >
> > > + if (kprobe_page_fault(regs, cause))
> > > + return;
> > > +
> > > /*
> > > * Fault-in kernel-space virtual memory on-demand.
> > > * The 'reference' page table is init_mm.pgd.
> > > --
> > > 2.7.4
> > >
> >
> >
> > --
> > Masami Hiramatsu <[email protected]>



--
Best Regards
Guo Ren

ML: https://lore.kernel.org/linux-csky/

2020-07-07 15:34:02

by kernel test robot

[permalink] [raw]
Subject: Re: [PATCH V1 5/5] riscv: Add uprobes supported

Hi,

I love your patch! Perhaps something to improve:

[auto build test WARNING on v5.8-rc2]
[cannot apply to linus/master v5.8-rc3 next-20200707]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use as documented in
https://git-scm.com/docs/git-format-patch]

url: https://github.com/0day-ci/linux/commits/guoren-kernel-org/riscv-Add-k-uprobe-supported/20200704-113653
base: 48778464bb7d346b47157d21ffde2af6b2d39110
config: riscv-randconfig-s032-20200707 (attached as .config)
compiler: riscv32-linux-gcc (GCC) 9.3.0
reproduce:
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# apt-get install sparse
# sparse version: v0.6.2-31-gabbfd661-dirty
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' ARCH=riscv

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <[email protected]>


sparse warnings: (new ones prefixed by >>)

kernel/events/uprobes.c:1977:33: sparse: sparse: incorrect type in argument 1 (different address spaces) @@ expected struct spinlock [usertype] *lock @@ got struct spinlock [noderef] __rcu * @@
kernel/events/uprobes.c:1977:33: sparse: expected struct spinlock [usertype] *lock
kernel/events/uprobes.c:1977:33: sparse: got struct spinlock [noderef] __rcu *
kernel/events/uprobes.c:1979:35: sparse: sparse: incorrect type in argument 1 (different address spaces) @@ expected struct spinlock [usertype] *lock @@ got struct spinlock [noderef] __rcu * @@
kernel/events/uprobes.c:1979:35: sparse: expected struct spinlock [usertype] *lock
kernel/events/uprobes.c:1979:35: sparse: got struct spinlock [noderef] __rcu *
kernel/events/uprobes.c:2279:31: sparse: sparse: incorrect type in argument 1 (different address spaces) @@ expected struct spinlock [usertype] *lock @@ got struct spinlock [noderef] __rcu * @@
kernel/events/uprobes.c:2279:31: sparse: expected struct spinlock [usertype] *lock
kernel/events/uprobes.c:2279:31: sparse: got struct spinlock [noderef] __rcu *
kernel/events/uprobes.c:2281:33: sparse: sparse: incorrect type in argument 1 (different address spaces) @@ expected struct spinlock [usertype] *lock @@ got struct spinlock [noderef] __rcu * @@
kernel/events/uprobes.c:2281:33: sparse: expected struct spinlock [usertype] *lock
kernel/events/uprobes.c:2281:33: sparse: got struct spinlock [noderef] __rcu *
>> include/asm-generic/mmiowb.h:56:9: sparse: sparse: context imbalance in '__replace_page' - unexpected unlock

vim +/__replace_page +56 include/asm-generic/mmiowb.h

d1be6a28b13ce6 Will Deacon 2019-02-22 46
d1be6a28b13ce6 Will Deacon 2019-02-22 47 static inline void mmiowb_spin_unlock(void)
d1be6a28b13ce6 Will Deacon 2019-02-22 48 {
d1be6a28b13ce6 Will Deacon 2019-02-22 49 struct mmiowb_state *ms = __mmiowb_state();
d1be6a28b13ce6 Will Deacon 2019-02-22 50
d1be6a28b13ce6 Will Deacon 2019-02-22 51 if (unlikely(ms->mmiowb_pending)) {
d1be6a28b13ce6 Will Deacon 2019-02-22 52 ms->mmiowb_pending = 0;
d1be6a28b13ce6 Will Deacon 2019-02-22 53 mmiowb();
d1be6a28b13ce6 Will Deacon 2019-02-22 54 }
d1be6a28b13ce6 Will Deacon 2019-02-22 55
d1be6a28b13ce6 Will Deacon 2019-02-22 @56 ms->nesting_count--;

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/[email protected]



2020-07-14 18:44:36

by Palmer Dabbelt

[permalink] [raw]
Subject: Re: [PATCH V1 0/5] riscv: Add k/uprobe supported

On Sat, 04 Jul 2020 07:55:28 PDT (-0700), [email protected] wrote:
> Hi Pekka,
>
> On Sat, Jul 4, 2020 at 2:40 PM Pekka Enberg <[email protected]> wrote:
>>
>> On Sat, Jul 4, 2020 at 6:34 AM <[email protected]> wrote:
>> > The patchset includes kprobe/uprobe support and some related fixups.
>>
>> Nice!
>>
>> On Sat, Jul 4, 2020 at 6:34 AM <[email protected]> wrote:
>> > There is no single step exception in riscv ISA, so utilize ebreak to
>> > simulate. Some pc related instructions couldn't be executed out of line
>> > and some system/fence instructions couldn't be a trace site at all.
>> > So we give out a reject list and simulate list in decode-insn.c.
>>
>> Can you elaborate on what you mean by this? Why would you need a
>> single-step facility for kprobes? Is it for executing the instruction
>> that was replaced with a probe breakpoint?
>
> It's the single-step exception, not single-step facility!
>
> Other arches use hardware single-step exception for k/uprobe, eg:
> - powerpc: regs->msr |= MSR_SINGLESTEP
> - arm/arm64: PSTATE.D for enabling software step exceptions
> - s390: Set PER control regs, turns on single step for the given address
> - x86: regs->flags |= X86_EFLAGS_TF
> - csky: of course use hw single step :)
>
> Yes, All the above arches use a hardware single-step exception
> mechanism to execute the instruction that was replaced with a probe
> breakpoint.

I guess we could handle fences by just IPIing over there and executing the
fence? Probably not worth the effort, though, as if you have an issue that's
showing up close enough to a fence that you can't just probe somewhere nearby
then you're probably going to disrupt things too much to learn anything. I'd
assume that AMOs are also too much of a headache to emulate, as moving them to
a different hart would allow for different orderings that may break things.

I suppose the trickier issue is that inserting a probe in the middle of a LR/SC
sequence will result in a loss of forward progress (or maybe even incorrect
behavior, if you mess up a pairing), as there are fairly heavyweight
restrictions on what you're allowed to do inside there. I don't see any
mechanism for handling this, maybe we need to build up tables of restricted
regions? All the LR/SC sequences should be hidden behind macros already, so it
shouldn't be that hard to figure it out.

I only gave the code a quick look, but I don't see any references to LR/SC or
AMO so if you are handling these I guess we at least need a comment :)

>
>>
>> Also, the "Debug Specification" [1] specifies a single-step facility
>> for RISC-V -- why is that not useful for implementing kprobes?
>>
>> 1. https://riscv.org/specifications/debug-specification/
> We need single-step exception not single-step by jtag, so above spec
> is not related to the patchset.
>
> See riscv-Privileged spec:
>
> Interrupt | Exception Code | Description
> 1 0 Reserved
> 1 1 Supervisor software interrupt
> 1 2–4 Reserved
> 1 5 Supervisor timer interrupt
> 1 6–8 Reserved
> 1 9 Supervisor external interrupt
> 1 10–15 Reserved
> 1 ≥16 Available for platform use
> 0 0 Instruction address misaligned
> 0 1 Instruction access fault
> 0 2 Illegal instruction
> 0 3 Breakpoint
> 0 4 Load address misaligned
> 0 5 Load access fault
> 0 6 Store/AMO address misaligned
> 0 7 Store/AMO access fault
> 0 8 Environment call from U-mode
> 0 9 Environment call from S-mode
> 0 10–11 Reserved
> 0 12 Instruction page fault
> 0 13 Load page fault
> 0 14 Reserved
> 0 15 Store/AMO page fault
> 0 16–23 Reserved
> 0 24–31 Available for custom use
> 0 32–47 Reserved
> 0 48–63 Available for custom use
> 0 ≥64 Reserved
>
> No single step!
>
> So I insert a "ebreak" instruction behind the target single-step
> instruction to simulate the same mechanism.

Single step is part of the debug spec. That mostly discusses JTAG debugging,
but there's also some stuff in there related to in-band debugging (at least
watch points and single step, though there may be more). IIRC you get a
breakpoint exception and then chase around some CSRs to differentiate between
the various reasons, but it's been a while since I've looked at this stuff.

It's all kind of irrelevant, though, as there's no way to get at all this stuff
from supervisor mode. I don't see any reason we couldn't put together an SBI
extension to access this stuff, but I also don't know anyone who's looked into
doing so. There are some complexities involved because this state is all
shared between machine mode and debug mode that we'd need to deal with, but I
think we could put something together -- at least for single step those are
fairly straight-forward issues to handle.

> --
> Best Regards
> Guo Ren
>
> ML: https://lore.kernel.org/linux-csky/

2020-07-15 05:57:40

by Guo Ren

[permalink] [raw]
Subject: Re: [PATCH V1 0/5] riscv: Add k/uprobe supported

Hi Palmer,

On Wed, Jul 15, 2020 at 2:43 AM Palmer Dabbelt <[email protected]> wrote:
>
> On Sat, 04 Jul 2020 07:55:28 PDT (-0700), [email protected] wrote:
> > Hi Pekka,
> >
> > On Sat, Jul 4, 2020 at 2:40 PM Pekka Enberg <[email protected]> wrote:
> >>
> >> On Sat, Jul 4, 2020 at 6:34 AM <[email protected]> wrote:
> >> > The patchset includes kprobe/uprobe support and some related fixups.
> >>
> >> Nice!
> >>
> >> On Sat, Jul 4, 2020 at 6:34 AM <[email protected]> wrote:
> >> > There is no single step exception in riscv ISA, so utilize ebreak to
> >> > simulate. Some pc related instructions couldn't be executed out of line
> >> > and some system/fence instructions couldn't be a trace site at all.
> >> > So we give out a reject list and simulate list in decode-insn.c.
> >>
> >> Can you elaborate on what you mean by this? Why would you need a
> >> single-step facility for kprobes? Is it for executing the instruction
> >> that was replaced with a probe breakpoint?
> >
> > It's the single-step exception, not single-step facility!
> >
> > Other arches use hardware single-step exception for k/uprobe, eg:
> > - powerpc: regs->msr |= MSR_SINGLESTEP
> > - arm/arm64: PSTATE.D for enabling software step exceptions
> > - s390: Set PER control regs, turns on single step for the given address
> > - x86: regs->flags |= X86_EFLAGS_TF
> > - csky: of course use hw single step :)
> >
> > Yes, All the above arches use a hardware single-step exception
> > mechanism to execute the instruction that was replaced with a probe
> > breakpoint.
>
> I guess we could handle fences by just IPIing over there and executing the
> fence? Probably not worth the effort, though, as if you have an issue that's
> showing up close enough to a fence that you can't just probe somewhere nearby
> then you're probably going to disrupt things too much to learn anything.
All fence instructions are rejected from probing in the current patchset.
See arch/riscv/kernel/probes/decode-insn.c:
/*
 * Reject instructions list:
 */
RISCV_INSN_REJECTED(system, insn);
RISCV_INSN_REJECTED(fence, insn);

> I'd
> assume that AMOs are also too much of a headache to emulate, as moving them to
> a different hart would allow for different orderings that may break things.
All AMO instructions can be single-step emulated.

>
> I suppose the tricker issue is that inserting a probe in the middle of a LR/SC
> sequence will result in a loss of forward progress (or maybe even incorrect
> behavior, if you mess up a pairing), as there are fairly heavyweight
> restrictions on what you're allowed to do inside there. I don't see any
> mechanism for handling this, maybe we need to build up tables of restricted
> regions? All the LR/SC sequences should be hidden behind macros already, so it
> shouldn't be that hard to figure it out.
Yes, a probe placed between LR and SC risks an infinite loop; arm64 simply
rejects probing the exclusive instructions themselves, without detecting the
middle of a sequence.
The macro-wrapper idea seems good, but I prefer to leave that to the user's care.

>
> I only gave the code a quick look, but I don't see any references to LR/SC or
> AMO so if you are handling these I guess we at least need a comment :)
Yes, all AMO and LR/SC instructions can be executed out of line in
single-step style.
I'll add a comment about the LR/SC wrapper-macro idea you mentioned.

>
> >
> >>
> >> Also, the "Debug Specification" [1] specifies a single-step facility
> >> for RISC-V -- why is that not useful for implementing kprobes?
> >>
> >> 1. https://riscv.org/specifications/debug-specification/
> > We need single-step exception not single-step by jtag, so above spec
> > is not related to the patchset.
> >
> > See riscv-Privileged spec:
> >
> > Interrupt | Exception Code | Description
> > 1 0 Reserved
> > 1 1 Supervisor software interrupt
> > 1 2–4 Reserved
> > 1 5 Supervisor timer interrupt
> > 1 6–8 Reserved
> > 1 9 Supervisor external interrupt
> > 1 10–15 Reserved
> > 1 ≥16 Available for platform use
> > 0 0 Instruction address misaligned
> > 0 1 Instruction access fault
> > 0 2 Illegal instruction
> > 0 3 Breakpoint
> > 0 4 Load address misaligned
> > 0 5 Load access fault
> > 0 6 Store/AMO address misaligned
> > 0 7 Store/AMO access fault
> > 0 8 Environment call from U-mode
> > 0 9 Environment call from S-mode
> > 0 10–11 Reserved
> > 0 12 Instruction page fault
> > 0 13 Load page fault
> > 0 14 Reserved
> > 0 15 Store/AMO page fault
> > 0 16–23 Reserved
> > 0 24–31 Available for custom use
> > 0 32–47 Reserved
> > 0 48–63 Available for custom use
> > 0 ≥64 Reserved
> >
> > No single step!
> >
> > So I insert a "ebreak" instruction behind the target single-step
> > instruction to simulate the same mechanism.
>
> Single step is part of the debug spec. That mostly discusses JTAG debugging,
> but there's also some stuff in there related to in-band debugging (at least
> watch points and single step, though there may be more). IIRC you get a
What's the meaning of IIRC?

> breakpoint exception and then chase around some CSRs to differentiate between
> the various reasons, but it's been a while since I've looked at this stuff.
I just use the k/uprobe state-check functions in the ebreak handler; no
additional CSR is tested.
asmlinkage __visible void do_trap_break(struct pt_regs *regs)
{
#ifdef CONFIG_KPROBES
	if (kprobe_single_step_handler(regs))
		return;

	if (kprobe_breakpoint_handler(regs))
		return;
#endif
#ifdef CONFIG_UPROBES
	if (uprobe_single_step_handler(regs))
		return;

	if (uprobe_breakpoint_handler(regs))
		return;
#endif
	current->thread.bad_cause = regs->cause;
It seems to work well.

>
> It's all kind of irrelevant, though, as there's no way to get at all this stuff
> from supervisor mode. I don't see any reason we couldn't put together an SBI
> extension to access this stuff, but I also don't know anyone who's looked into
> doing so. There are some complexities involved because this state is all
> shared between machine mode and debug mode that we'd need to deal with, but I
> think we could put something together -- at least for single step those are
> fairly straight-forward issues to handle.
Do you prefer to add a single-step mechanism to the privileged spec?

--
Best Regards
Guo Ren

ML: https://lore.kernel.org/linux-csky/