2023-01-18 02:03:56

by Tiezhu Yang

Subject: [PATCH v12 0/5] Add kprobe and kretprobe support for LoongArch

v12:
-- Rebase on the latest code
-- Fix -Wmissing-prototypes warning when make W=1
-- Drop patch #6 "Use common function sign_extend64()"
since it has already been applied

v11:
-- Rebase on the latest code
-- Address all the review comments, thank you all
-- Modify arch_prepare_kprobe() and setup_singlestep()
to make the probe logic more clear
-- Mark some assembler symbols as non-kprobe-able
-- Use common function sign_extend64()
-- Test 20 times (about 36 hours in total) for all the 71 assembler
functions annotated with SYM_CODE_START and SYM_FUNC_START
under arch/loongarch; in particular, test memset alone for 10
hours, with no hang problems found

v10:
-- Remove sign_extend() based on the latest code
-- Rename insns_are_not_supported() to insns_not_supported()
-- Rename insns_are_not_simulated() to insns_not_simulated()
-- Set KPROBE_HIT_SSDONE if cur->post_handler is not NULL
-- Enable preemption for KPROBE_REENTER in kprobe_fault_handler()

v9:
-- Rename sign_extended() to sign_extend()
-- Modify kprobe_fault_handler() to handle all of kprobe_status

v8:
-- Put "regs->csr_prmd &= ~CSR_PRMD_PIE;" ahead to save one line
-- Add code comment of preempt_disable()
-- Put kprobe_page_fault() in __do_page_fault()
-- Modify the check condition of break insn in kprobe_breakpoint_handler()

v7:
-- Remove stop_machine_cpuslocked() related code

v6:
-- Add a new patch to redefine larch_insn_patch_text() with
stop_machine_cpuslocked()
-- Modify kprobe_breakpoint_handler() to consider the original
insn is break and return the correct value
-- Modify do_bp() to refresh bcode when original insn is break

v5:
-- Rebase on the latest code
-- Use stop_machine_cpuslocked() to modify insn to avoid CPU race

v4:
-- Remove kprobe_exceptions_notify() in kprobes.c
-- Call kprobe_breakpoint_handler() and kprobe_singlestep_handler()
in do_bp()

v3:
-- Rebase on the latest code
-- Check the alignment of PC in simu_branch() and simu_pc()
-- Add ibar in flush_insn_slot()
-- Rename kprobe_{pre,post}_handler() to {post_}kprobe_handler
-- Add preempt_disable() and preempt_enable_no_resched()
-- Remove r0 save/restore and do some minor changes
in kprobes_trampoline.S
-- Do not enable CONFIG_KPROBES by default

v2:
-- Split simu_branch() and simu_pc() into a single patch
-- Call kprobe_page_fault() in do_page_fault()
-- Add kprobes_trampoline.S for kretprobe

Tiezhu Yang (5):
LoongArch: Simulate branch and PC* instructions
LoongArch: Add kprobe support
LoongArch: Add kretprobe support
LoongArch: Mark some assembler symbols as non-kprobe-able
samples/kprobes: Add LoongArch support

arch/loongarch/Kconfig | 2 +
arch/loongarch/include/asm/asm.h | 10 +
arch/loongarch/include/asm/inst.h | 20 ++
arch/loongarch/include/asm/kprobes.h | 61 +++++
arch/loongarch/include/asm/ptrace.h | 1 +
arch/loongarch/kernel/Makefile | 2 +
arch/loongarch/kernel/entry.S | 1 +
arch/loongarch/kernel/inst.c | 123 +++++++++
arch/loongarch/kernel/kprobes.c | 405 +++++++++++++++++++++++++++++
arch/loongarch/kernel/kprobes_trampoline.S | 96 +++++++
arch/loongarch/kernel/traps.c | 11 +-
arch/loongarch/lib/memcpy.S | 3 +
arch/loongarch/mm/fault.c | 3 +
samples/kprobes/kprobe_example.c | 8 +
14 files changed, 741 insertions(+), 5 deletions(-)
create mode 100644 arch/loongarch/include/asm/kprobes.h
create mode 100644 arch/loongarch/kernel/kprobes.c
create mode 100644 arch/loongarch/kernel/kprobes_trampoline.S

--
2.1.0


2023-01-18 02:05:07

by Tiezhu Yang

Subject: [PATCH v12 5/5] samples/kprobes: Add LoongArch support

Add LoongArch-specific info in handler_pre() and handler_post().

Signed-off-by: Tiezhu Yang <[email protected]>
---
samples/kprobes/kprobe_example.c | 8 ++++++++
1 file changed, 8 insertions(+)

diff --git a/samples/kprobes/kprobe_example.c b/samples/kprobes/kprobe_example.c
index fd346f5..ef44c61 100644
--- a/samples/kprobes/kprobe_example.c
+++ b/samples/kprobes/kprobe_example.c
@@ -55,6 +55,10 @@ static int __kprobes handler_pre(struct kprobe *p, struct pt_regs *regs)
pr_info("<%s> p->addr, 0x%p, ip = 0x%lx, flags = 0x%lx\n",
p->symbol_name, p->addr, regs->psw.addr, regs->flags);
#endif
+#ifdef CONFIG_LOONGARCH
+ pr_info("<%s> p->addr = 0x%p, era = 0x%lx, estat = 0x%lx\n",
+ p->symbol_name, p->addr, regs->csr_era, regs->csr_estat);
+#endif

/* A dump_stack() here will give a stack backtrace */
return 0;
@@ -92,6 +96,10 @@ static void __kprobes handler_post(struct kprobe *p, struct pt_regs *regs,
pr_info("<%s> p->addr, 0x%p, flags = 0x%lx\n",
p->symbol_name, p->addr, regs->flags);
#endif
+#ifdef CONFIG_LOONGARCH
+ pr_info("<%s> p->addr = 0x%p, estat = 0x%lx\n",
+ p->symbol_name, p->addr, regs->csr_estat);
+#endif
}

static int __init kprobe_init(void)
--
2.1.0

2023-01-18 02:37:37

by Tiezhu Yang

Subject: [PATCH v12 1/5] LoongArch: Simulate branch and PC* instructions

According to the LoongArch Reference Manual, simulate the branch and
PC* instructions. This is a preparation for the later kprobe patches.

Link: https://loongson.github.io/LoongArch-Documentation/LoongArch-Vol1-EN.html#branch-instructions
Link: https://loongson.github.io/LoongArch-Documentation/LoongArch-Vol1-EN.html#_pcaddi_pcaddu121_pcaddu18l_pcalau12i

Co-developed-by: Jinyang He <[email protected]>
Signed-off-by: Jinyang He <[email protected]>
Signed-off-by: Tiezhu Yang <[email protected]>
---
arch/loongarch/include/asm/inst.h | 5 ++
arch/loongarch/include/asm/ptrace.h | 1 +
arch/loongarch/kernel/inst.c | 123 ++++++++++++++++++++++++++++++++++++
3 files changed, 129 insertions(+)

diff --git a/arch/loongarch/include/asm/inst.h b/arch/loongarch/include/asm/inst.h
index 7eedd83..a5dad39 100644
--- a/arch/loongarch/include/asm/inst.h
+++ b/arch/loongarch/include/asm/inst.h
@@ -7,6 +7,7 @@

#include <linux/types.h>
#include <asm/asm.h>
+#include <asm/ptrace.h>

#define INSN_NOP 0x03400000
#define INSN_BREAK 0x002a0000
@@ -32,6 +33,7 @@ enum reg1i20_op {
lu12iw_op = 0x0a,
lu32id_op = 0x0b,
pcaddi_op = 0x0c,
+ pcalau12i_op = 0x0d,
pcaddu12i_op = 0x0e,
pcaddu18i_op = 0x0f,
};
@@ -351,6 +353,9 @@ static inline bool is_stack_alloc_ins(union loongarch_instruction *ip)
is_imm12_negative(ip->reg2i12_format.immediate);
}

+void simu_pc(struct pt_regs *regs, union loongarch_instruction insn);
+void simu_branch(struct pt_regs *regs, union loongarch_instruction insn);
+
int larch_insn_read(void *addr, u32 *insnp);
int larch_insn_write(void *addr, u32 insn);
int larch_insn_patch_text(void *addr, u32 insn);
diff --git a/arch/loongarch/include/asm/ptrace.h b/arch/loongarch/include/asm/ptrace.h
index 59c4608..58596c4 100644
--- a/arch/loongarch/include/asm/ptrace.h
+++ b/arch/loongarch/include/asm/ptrace.h
@@ -6,6 +6,7 @@
#define _ASM_PTRACE_H

#include <asm/page.h>
+#include <asm/irqflags.h>
#include <asm/thread_info.h>
#include <uapi/asm/ptrace.h>

diff --git a/arch/loongarch/kernel/inst.c b/arch/loongarch/kernel/inst.c
index badc590..258ef26 100644
--- a/arch/loongarch/kernel/inst.c
+++ b/arch/loongarch/kernel/inst.c
@@ -10,6 +10,129 @@

static DEFINE_RAW_SPINLOCK(patch_lock);

+void simu_pc(struct pt_regs *regs, union loongarch_instruction insn)
+{
+ unsigned long pc = regs->csr_era;
+ unsigned int rd = insn.reg1i20_format.rd;
+ unsigned int imm = insn.reg1i20_format.immediate;
+
+ if (pc & 3) {
+ pr_warn("%s: invalid pc 0x%lx\n", __func__, pc);
+ return;
+ }
+
+ switch (insn.reg1i20_format.opcode) {
+ case pcaddi_op:
+ regs->regs[rd] = pc + sign_extend64(imm << 2, 21);
+ break;
+ case pcaddu12i_op:
+ regs->regs[rd] = pc + sign_extend64(imm << 12, 31);
+ break;
+ case pcaddu18i_op:
+ regs->regs[rd] = pc + sign_extend64(imm << 18, 37);
+ break;
+ case pcalau12i_op:
+ regs->regs[rd] = pc + sign_extend64(imm << 12, 31);
+ regs->regs[rd] &= ~((1 << 12) - 1);
+ break;
+ default:
+ pr_info("%s: unknown opcode\n", __func__);
+ return;
+ }
+
+ regs->csr_era += LOONGARCH_INSN_SIZE;
+}
+
+void simu_branch(struct pt_regs *regs, union loongarch_instruction insn)
+{
+ unsigned int imm, imm_l, imm_h, rd, rj;
+ unsigned long pc = regs->csr_era;
+
+ if (pc & 3) {
+ pr_warn("%s: invalid pc 0x%lx\n", __func__, pc);
+ return;
+ }
+
+ imm_l = insn.reg0i26_format.immediate_l;
+ imm_h = insn.reg0i26_format.immediate_h;
+ switch (insn.reg0i26_format.opcode) {
+ case b_op:
+ regs->csr_era = pc + sign_extend64((imm_h << 16 | imm_l) << 2, 27);
+ return;
+ case bl_op:
+ regs->csr_era = pc + sign_extend64((imm_h << 16 | imm_l) << 2, 27);
+ regs->regs[1] = pc + LOONGARCH_INSN_SIZE;
+ return;
+ }
+
+ imm_l = insn.reg1i21_format.immediate_l;
+ imm_h = insn.reg1i21_format.immediate_h;
+ rj = insn.reg1i21_format.rj;
+ switch (insn.reg1i21_format.opcode) {
+ case beqz_op:
+ if (regs->regs[rj] == 0)
+ regs->csr_era = pc + sign_extend64((imm_h << 16 | imm_l) << 2, 22);
+ else
+ regs->csr_era = pc + LOONGARCH_INSN_SIZE;
+ return;
+ case bnez_op:
+ if (regs->regs[rj] != 0)
+ regs->csr_era = pc + sign_extend64((imm_h << 16 | imm_l) << 2, 22);
+ else
+ regs->csr_era = pc + LOONGARCH_INSN_SIZE;
+ return;
+ }
+
+ imm = insn.reg2i16_format.immediate;
+ rj = insn.reg2i16_format.rj;
+ rd = insn.reg2i16_format.rd;
+ switch (insn.reg2i16_format.opcode) {
+ case beq_op:
+ if (regs->regs[rj] == regs->regs[rd])
+ regs->csr_era = pc + sign_extend64(imm << 2, 17);
+ else
+ regs->csr_era = pc + LOONGARCH_INSN_SIZE;
+ break;
+ case bne_op:
+ if (regs->regs[rj] != regs->regs[rd])
+ regs->csr_era = pc + sign_extend64(imm << 2, 17);
+ else
+ regs->csr_era = pc + LOONGARCH_INSN_SIZE;
+ break;
+ case blt_op:
+ if ((long)regs->regs[rj] < (long)regs->regs[rd])
+ regs->csr_era = pc + sign_extend64(imm << 2, 17);
+ else
+ regs->csr_era = pc + LOONGARCH_INSN_SIZE;
+ break;
+ case bge_op:
+ if ((long)regs->regs[rj] >= (long)regs->regs[rd])
+ regs->csr_era = pc + sign_extend64(imm << 2, 17);
+ else
+ regs->csr_era = pc + LOONGARCH_INSN_SIZE;
+ break;
+ case bltu_op:
+ if (regs->regs[rj] < regs->regs[rd])
+ regs->csr_era = pc + sign_extend64(imm << 2, 17);
+ else
+ regs->csr_era = pc + LOONGARCH_INSN_SIZE;
+ break;
+ case bgeu_op:
+ if (regs->regs[rj] >= regs->regs[rd])
+ regs->csr_era = pc + sign_extend64(imm << 2, 17);
+ else
+ regs->csr_era = pc + LOONGARCH_INSN_SIZE;
+ break;
+ case jirl_op:
+ regs->csr_era = regs->regs[rj] + sign_extend64(imm << 2, 17);
+ regs->regs[rd] = pc + LOONGARCH_INSN_SIZE;
+ break;
+ default:
+ pr_info("%s: unknown opcode\n", __func__);
+ return;
+ }
+}
+
int larch_insn_read(void *addr, u32 *insnp)
{
int ret;
--
2.1.0

2023-01-18 02:45:39

by Tiezhu Yang

Subject: [PATCH v12 3/5] LoongArch: Add kretprobe support

Use the generic kretprobe trampoline handler to add kretprobe support
for LoongArch.

Signed-off-by: Tiezhu Yang <[email protected]>
---
arch/loongarch/Kconfig | 1 +
arch/loongarch/include/asm/kprobes.h | 3 +
arch/loongarch/kernel/Makefile | 2 +-
arch/loongarch/kernel/kprobes.c | 27 +++++++++
arch/loongarch/kernel/kprobes_trampoline.S | 96 ++++++++++++++++++++++++++++++
5 files changed, 128 insertions(+), 1 deletion(-)
create mode 100644 arch/loongarch/kernel/kprobes_trampoline.S

diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
index ce930f2..134a2f8 100644
--- a/arch/loongarch/Kconfig
+++ b/arch/loongarch/Kconfig
@@ -104,6 +104,7 @@ config LOONGARCH
select HAVE_IRQ_EXIT_ON_IRQ_STACK
select HAVE_IRQ_TIME_ACCOUNTING
select HAVE_KPROBES
+ select HAVE_KRETPROBES
select HAVE_MOD_ARCH_SPECIFIC
select HAVE_NMI
select HAVE_PCI
diff --git a/arch/loongarch/include/asm/kprobes.h b/arch/loongarch/include/asm/kprobes.h
index 7b9fc3e..798020a 100644
--- a/arch/loongarch/include/asm/kprobes.h
+++ b/arch/loongarch/include/asm/kprobes.h
@@ -49,6 +49,9 @@ bool kprobe_fault_handler(struct pt_regs *regs, int trapnr);
bool kprobe_breakpoint_handler(struct pt_regs *regs);
bool kprobe_singlestep_handler(struct pt_regs *regs);

+void __kretprobe_trampoline(void);
+void *trampoline_probe_handler(struct pt_regs *regs);
+
#else /* !CONFIG_KPROBES */

static inline bool kprobe_breakpoint_handler(struct pt_regs *regs) { return false; }
diff --git a/arch/loongarch/kernel/Makefile b/arch/loongarch/kernel/Makefile
index 017ac59..45c78ae 100644
--- a/arch/loongarch/kernel/Makefile
+++ b/arch/loongarch/kernel/Makefile
@@ -47,6 +47,6 @@ obj-$(CONFIG_UNWINDER_PROLOGUE) += unwind_prologue.o

obj-$(CONFIG_PERF_EVENTS) += perf_event.o perf_regs.o

-obj-$(CONFIG_KPROBES) += kprobes.o
+obj-$(CONFIG_KPROBES) += kprobes.o kprobes_trampoline.o

CPPFLAGS_vmlinux.lds := $(KBUILD_CFLAGS)
diff --git a/arch/loongarch/kernel/kprobes.c b/arch/loongarch/kernel/kprobes.c
index a0c2f9d..ced9c4c 100644
--- a/arch/loongarch/kernel/kprobes.c
+++ b/arch/loongarch/kernel/kprobes.c
@@ -372,6 +372,33 @@ int __init arch_populate_kprobe_blacklist(void)
(unsigned long)__irqentry_text_end);
}

+/* Called from __kretprobe_trampoline */
+void __used *trampoline_probe_handler(struct pt_regs *regs)
+{
+ return (void *)kretprobe_trampoline_handler(regs, NULL);
+}
+NOKPROBE_SYMBOL(trampoline_probe_handler);
+
+/* The assembler function that handles kretprobes must not be probed itself */
+NOKPROBE_SYMBOL(__kretprobe_trampoline);
+
+void arch_prepare_kretprobe(struct kretprobe_instance *ri,
+ struct pt_regs *regs)
+{
+ ri->ret_addr = (kprobe_opcode_t *)regs->regs[1];
+ ri->fp = NULL;
+
+ /* Replace the return addr with trampoline addr */
+ regs->regs[1] = (unsigned long)&__kretprobe_trampoline;
+}
+NOKPROBE_SYMBOL(arch_prepare_kretprobe);
+
+int arch_trampoline_kprobe(struct kprobe *p)
+{
+ return 0;
+}
+NOKPROBE_SYMBOL(arch_trampoline_kprobe);
+
int __init arch_init_kprobes(void)
{
return 0;
diff --git a/arch/loongarch/kernel/kprobes_trampoline.S b/arch/loongarch/kernel/kprobes_trampoline.S
new file mode 100644
index 0000000..af94b0d
--- /dev/null
+++ b/arch/loongarch/kernel/kprobes_trampoline.S
@@ -0,0 +1,96 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+#include <linux/linkage.h>
+#include <asm/stackframe.h>
+
+ .text
+
+ .macro save_all_base_regs
+ cfi_st ra, PT_R1
+ cfi_st tp, PT_R2
+ cfi_st a0, PT_R4
+ cfi_st a1, PT_R5
+ cfi_st a2, PT_R6
+ cfi_st a3, PT_R7
+ cfi_st a4, PT_R8
+ cfi_st a5, PT_R9
+ cfi_st a6, PT_R10
+ cfi_st a7, PT_R11
+ cfi_st t0, PT_R12
+ cfi_st t1, PT_R13
+ cfi_st t2, PT_R14
+ cfi_st t3, PT_R15
+ cfi_st t4, PT_R16
+ cfi_st t5, PT_R17
+ cfi_st t6, PT_R18
+ cfi_st t7, PT_R19
+ cfi_st t8, PT_R20
+ cfi_st u0, PT_R21
+ cfi_st fp, PT_R22
+ cfi_st s0, PT_R23
+ cfi_st s1, PT_R24
+ cfi_st s2, PT_R25
+ cfi_st s3, PT_R26
+ cfi_st s4, PT_R27
+ cfi_st s5, PT_R28
+ cfi_st s6, PT_R29
+ cfi_st s7, PT_R30
+ cfi_st s8, PT_R31
+ csrrd t0, LOONGARCH_CSR_CRMD
+ andi t0, t0, 0x7 /* extract bit[1:0] PLV, bit[2] IE */
+ LONG_S t0, sp, PT_CRMD
+ .endm
+
+ .macro restore_all_base_regs
+ cfi_ld tp, PT_R2
+ cfi_ld a0, PT_R4
+ cfi_ld a1, PT_R5
+ cfi_ld a2, PT_R6
+ cfi_ld a3, PT_R7
+ cfi_ld a4, PT_R8
+ cfi_ld a5, PT_R9
+ cfi_ld a6, PT_R10
+ cfi_ld a7, PT_R11
+ cfi_ld t0, PT_R12
+ cfi_ld t1, PT_R13
+ cfi_ld t2, PT_R14
+ cfi_ld t3, PT_R15
+ cfi_ld t4, PT_R16
+ cfi_ld t5, PT_R17
+ cfi_ld t6, PT_R18
+ cfi_ld t7, PT_R19
+ cfi_ld t8, PT_R20
+ cfi_ld u0, PT_R21
+ cfi_ld fp, PT_R22
+ cfi_ld s0, PT_R23
+ cfi_ld s1, PT_R24
+ cfi_ld s2, PT_R25
+ cfi_ld s3, PT_R26
+ cfi_ld s4, PT_R27
+ cfi_ld s5, PT_R28
+ cfi_ld s6, PT_R29
+ cfi_ld s7, PT_R30
+ cfi_ld s8, PT_R31
+ LONG_L t0, sp, PT_CRMD
+ li.d t1, 0x7 /* mask bit[1:0] PLV, bit[2] IE */
+ csrxchg t0, t1, LOONGARCH_CSR_CRMD
+ .endm
+
+SYM_CODE_START(__kretprobe_trampoline)
+ addi.d sp, sp, -PT_SIZE
+ save_all_base_regs
+
+ addi.d t0, sp, PT_SIZE
+ LONG_S t0, sp, PT_R3
+
+ move a0, sp /* pt_regs */
+
+ bl trampoline_probe_handler
+
+ /* use the result as the return-address */
+ move ra, a0
+
+ restore_all_base_regs
+ addi.d sp, sp, PT_SIZE
+
+ jr ra
+SYM_CODE_END(__kretprobe_trampoline)
--
2.1.0

2023-01-18 02:55:10

by Tiezhu Yang

Subject: [PATCH v12 2/5] LoongArch: Add kprobe support

Kprobes allows you to trap at almost any kernel address and execute a
callback function. This commit adds kprobe support for LoongArch.

Signed-off-by: Tiezhu Yang <[email protected]>
---
arch/loongarch/Kconfig | 1 +
arch/loongarch/include/asm/inst.h | 15 ++
arch/loongarch/include/asm/kprobes.h | 58 ++++++
arch/loongarch/kernel/Makefile | 2 +
arch/loongarch/kernel/kprobes.c | 378 +++++++++++++++++++++++++++++++++++
arch/loongarch/kernel/traps.c | 11 +-
arch/loongarch/mm/fault.c | 3 +
7 files changed, 463 insertions(+), 5 deletions(-)
create mode 100644 arch/loongarch/include/asm/kprobes.h
create mode 100644 arch/loongarch/kernel/kprobes.c

diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
index 9be74ec..ce930f2 100644
--- a/arch/loongarch/Kconfig
+++ b/arch/loongarch/Kconfig
@@ -103,6 +103,7 @@ config LOONGARCH
select HAVE_IOREMAP_PROT
select HAVE_IRQ_EXIT_ON_IRQ_STACK
select HAVE_IRQ_TIME_ACCOUNTING
+ select HAVE_KPROBES
select HAVE_MOD_ARCH_SPECIFIC
select HAVE_NMI
select HAVE_PCI
diff --git a/arch/loongarch/include/asm/inst.h b/arch/loongarch/include/asm/inst.h
index a5dad39..ba18ce8 100644
--- a/arch/loongarch/include/asm/inst.h
+++ b/arch/loongarch/include/asm/inst.h
@@ -24,6 +24,10 @@

#define ADDR_IMM(addr, INSN) ((addr & ADDR_IMMMASK_##INSN) >> ADDR_IMMSHIFT_##INSN)

+enum reg0i15_op {
+ break_op = 0x54,
+};
+
enum reg0i26_op {
b_op = 0x14,
bl_op = 0x15,
@@ -180,6 +184,11 @@ enum reg3sa2_op {
alsld_op = 0x16,
};

+struct reg0i15_format {
+ unsigned int immediate : 15;
+ unsigned int opcode : 17;
+};
+
struct reg0i26_format {
unsigned int immediate_h : 10;
unsigned int immediate_l : 16;
@@ -265,6 +274,7 @@ struct reg3sa2_format {

union loongarch_instruction {
unsigned int word;
+ struct reg0i15_format reg0i15_format;
struct reg0i26_format reg0i26_format;
struct reg1i20_format reg1i20_format;
struct reg1i21_format reg1i21_format;
@@ -323,6 +333,11 @@ static inline bool is_imm_negative(unsigned long val, unsigned int bit)
return val & (1UL << (bit - 1));
}

+static inline bool is_break_ins(union loongarch_instruction *ip)
+{
+ return ip->reg0i15_format.opcode == break_op;
+}
+
static inline bool is_pc_ins(union loongarch_instruction *ip)
{
return ip->reg1i20_format.opcode >= pcaddi_op &&
diff --git a/arch/loongarch/include/asm/kprobes.h b/arch/loongarch/include/asm/kprobes.h
new file mode 100644
index 0000000..7b9fc3e
--- /dev/null
+++ b/arch/loongarch/include/asm/kprobes.h
@@ -0,0 +1,58 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef __ASM_LOONGARCH_KPROBES_H
+#define __ASM_LOONGARCH_KPROBES_H
+
+#include <asm-generic/kprobes.h>
+
+#ifdef CONFIG_KPROBES
+
+#include <asm/inst.h>
+#include <asm/cacheflush.h>
+
+#define __ARCH_WANT_KPROBES_INSN_SLOT
+#define MAX_INSN_SIZE 2
+
+#define flush_insn_slot(p) \
+do { \
+ if (p->addr) \
+ flush_icache_range((unsigned long)p->addr, \
+ (unsigned long)p->addr + \
+ (MAX_INSN_SIZE * sizeof(kprobe_opcode_t))); \
+} while (0)
+
+#define kretprobe_blacklist_size 0
+
+typedef union loongarch_instruction kprobe_opcode_t;
+
+/* Architecture specific copy of original instruction */
+struct arch_specific_insn {
+ /* copy of the original instruction */
+ kprobe_opcode_t *insn;
+ /* restore address after simulation */
+ unsigned long restore;
+};
+
+struct prev_kprobe {
+ struct kprobe *kp;
+ unsigned int status;
+};
+
+/* per-cpu kprobe control block */
+struct kprobe_ctlblk {
+ unsigned int kprobe_status;
+ unsigned long saved_status;
+ struct prev_kprobe prev_kprobe;
+};
+
+void arch_remove_kprobe(struct kprobe *p);
+bool kprobe_fault_handler(struct pt_regs *regs, int trapnr);
+bool kprobe_breakpoint_handler(struct pt_regs *regs);
+bool kprobe_singlestep_handler(struct pt_regs *regs);
+
+#else /* !CONFIG_KPROBES */
+
+static inline bool kprobe_breakpoint_handler(struct pt_regs *regs) { return false; }
+static inline bool kprobe_singlestep_handler(struct pt_regs *regs) { return false; }
+
+#endif /* CONFIG_KPROBES */
+#endif /* __ASM_LOONGARCH_KPROBES_H */
diff --git a/arch/loongarch/kernel/Makefile b/arch/loongarch/kernel/Makefile
index c8cfbd5..017ac59 100644
--- a/arch/loongarch/kernel/Makefile
+++ b/arch/loongarch/kernel/Makefile
@@ -47,4 +47,6 @@ obj-$(CONFIG_UNWINDER_PROLOGUE) += unwind_prologue.o

obj-$(CONFIG_PERF_EVENTS) += perf_event.o perf_regs.o

+obj-$(CONFIG_KPROBES) += kprobes.o
+
CPPFLAGS_vmlinux.lds := $(KBUILD_CFLAGS)
diff --git a/arch/loongarch/kernel/kprobes.c b/arch/loongarch/kernel/kprobes.c
new file mode 100644
index 0000000..a0c2f9d
--- /dev/null
+++ b/arch/loongarch/kernel/kprobes.c
@@ -0,0 +1,378 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#include <linux/kdebug.h>
+#include <linux/kprobes.h>
+#include <linux/preempt.h>
+#include <asm/break.h>
+
+static const union loongarch_instruction breakpoint_insn = {
+ .reg0i15_format = {
+ .opcode = break_op,
+ .immediate = BRK_KPROBE_BP,
+ }
+};
+
+static const union loongarch_instruction singlestep_insn = {
+ .reg0i15_format = {
+ .opcode = break_op,
+ .immediate = BRK_KPROBE_SSTEPBP,
+ }
+};
+
+DEFINE_PER_CPU(struct kprobe *, current_kprobe);
+DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
+
+static void post_kprobe_handler(struct kprobe *, struct kprobe_ctlblk *, struct pt_regs *);
+
+static bool insns_not_supported(union loongarch_instruction insn)
+{
+ switch (insn.reg2i14_format.opcode) {
+ case llw_op:
+ case lld_op:
+ case scw_op:
+ case scd_op:
+ pr_notice("kprobe: ll and sc instructions are not supported\n");
+ return true;
+ }
+
+ switch (insn.reg1i21_format.opcode) {
+ case bceqz_op:
+ pr_notice("kprobe: bceqz and bcnez instructions are not supported\n");
+ return true;
+ }
+
+ return false;
+}
+NOKPROBE_SYMBOL(insns_not_supported);
+
+static bool insn_need_simulation(struct kprobe *p)
+{
+ if (is_pc_ins(&p->opcode))
+ return true;
+
+ if (is_branch_ins(&p->opcode))
+ return true;
+
+ return false;
+}
+NOKPROBE_SYMBOL(insn_need_simulation);
+
+static void arch_simulate_insn(struct kprobe *p, struct pt_regs *regs)
+{
+ if (is_pc_ins(&p->opcode))
+ simu_pc(regs, p->opcode);
+ else if (is_branch_ins(&p->opcode))
+ simu_branch(regs, p->opcode);
+}
+NOKPROBE_SYMBOL(arch_simulate_insn);
+
+static void arch_prepare_ss_slot(struct kprobe *p)
+{
+ p->ainsn.insn[0] = *p->addr;
+ p->ainsn.insn[1] = singlestep_insn;
+ p->ainsn.restore = (unsigned long)p->addr + LOONGARCH_INSN_SIZE;
+}
+NOKPROBE_SYMBOL(arch_prepare_ss_slot);
+
+static void arch_prepare_simulate(struct kprobe *p)
+{
+ p->ainsn.restore = 0;
+}
+NOKPROBE_SYMBOL(arch_prepare_simulate);
+
+int arch_prepare_kprobe(struct kprobe *p)
+{
+ /* copy instruction */
+ p->opcode = *p->addr;
+
+ /* decode instruction */
+ if (insns_not_supported(p->opcode))
+ return -EINVAL;
+
+ if (insn_need_simulation(p)) {
+ p->ainsn.insn = NULL;
+ } else {
+ p->ainsn.insn = get_insn_slot();
+ if (!p->ainsn.insn)
+ return -ENOMEM;
+ }
+
+ /* prepare the instruction */
+ if (p->ainsn.insn)
+ arch_prepare_ss_slot(p);
+ else
+ arch_prepare_simulate(p);
+
+ return 0;
+}
+NOKPROBE_SYMBOL(arch_prepare_kprobe);
+
+/* Install breakpoint in text */
+void arch_arm_kprobe(struct kprobe *p)
+{
+ *p->addr = breakpoint_insn;
+ flush_insn_slot(p);
+}
+NOKPROBE_SYMBOL(arch_arm_kprobe);
+
+/* Remove breakpoint from text */
+void arch_disarm_kprobe(struct kprobe *p)
+{
+ *p->addr = p->opcode;
+ flush_insn_slot(p);
+}
+NOKPROBE_SYMBOL(arch_disarm_kprobe);
+
+void arch_remove_kprobe(struct kprobe *p)
+{
+ if (p->ainsn.insn) {
+ free_insn_slot(p->ainsn.insn, 0);
+ p->ainsn.insn = NULL;
+ }
+}
+NOKPROBE_SYMBOL(arch_remove_kprobe);
+
+static void save_previous_kprobe(struct kprobe_ctlblk *kcb)
+{
+ kcb->prev_kprobe.kp = kprobe_running();
+ kcb->prev_kprobe.status = kcb->kprobe_status;
+}
+NOKPROBE_SYMBOL(save_previous_kprobe);
+
+static void restore_previous_kprobe(struct kprobe_ctlblk *kcb)
+{
+ __this_cpu_write(current_kprobe, kcb->prev_kprobe.kp);
+ kcb->kprobe_status = kcb->prev_kprobe.status;
+}
+NOKPROBE_SYMBOL(restore_previous_kprobe);
+
+static void set_current_kprobe(struct kprobe *p)
+{
+ __this_cpu_write(current_kprobe, p);
+}
+NOKPROBE_SYMBOL(set_current_kprobe);
+
+/*
+ * Interrupts need to be disabled before single-step mode is set,
+ * and not reenabled until after single-step mode ends.
+ * Without disabling interrupts on the local CPU, there is a chance
+ * of an interrupt occurring between the exception return and the
+ * start of the out-of-line single-step, which would result in
+ * wrongly single-stepping into the interrupt handler.
+ */
+static void save_local_irqflag(struct kprobe_ctlblk *kcb,
+ struct pt_regs *regs)
+{
+ kcb->saved_status = regs->csr_prmd;
+ regs->csr_prmd &= ~CSR_PRMD_PIE;
+}
+NOKPROBE_SYMBOL(save_local_irqflag);
+
+static void restore_local_irqflag(struct kprobe_ctlblk *kcb,
+ struct pt_regs *regs)
+{
+ regs->csr_prmd = kcb->saved_status;
+}
+NOKPROBE_SYMBOL(restore_local_irqflag);
+
+static void setup_singlestep(struct kprobe *p, struct pt_regs *regs,
+ struct kprobe_ctlblk *kcb, int reenter)
+{
+ if (reenter) {
+ save_previous_kprobe(kcb);
+ set_current_kprobe(p);
+ kcb->kprobe_status = KPROBE_REENTER;
+ } else {
+ kcb->kprobe_status = KPROBE_HIT_SS;
+ }
+
+ if (p->ainsn.insn) {
+ /* IRQs and single stepping do not mix well */
+ save_local_irqflag(kcb, regs);
+ /* set ip register to prepare for single stepping */
+ regs->csr_era = (unsigned long)p->ainsn.insn;
+ } else {
+ /* simulate single stepping */
+ arch_simulate_insn(p, regs);
+ /* now go for post processing */
+ post_kprobe_handler(p, kcb, regs);
+ }
+}
+NOKPROBE_SYMBOL(setup_singlestep);
+
+static bool reenter_kprobe(struct kprobe *p, struct pt_regs *regs,
+ struct kprobe_ctlblk *kcb)
+{
+ switch (kcb->kprobe_status) {
+ case KPROBE_HIT_SSDONE:
+ case KPROBE_HIT_ACTIVE:
+ case KPROBE_HIT_SS:
+ kprobes_inc_nmissed_count(p);
+ setup_singlestep(p, regs, kcb, 1);
+ break;
+ case KPROBE_REENTER:
+ pr_warn("Failed to recover from reentered kprobes.\n");
+ dump_kprobe(p);
+ WARN_ON_ONCE(1);
+ break;
+ default:
+ WARN_ON(1);
+ return false;
+ }
+
+ return true;
+}
+NOKPROBE_SYMBOL(reenter_kprobe);
+
+static void post_kprobe_handler(struct kprobe *cur, struct kprobe_ctlblk *kcb,
+ struct pt_regs *regs)
+{
+ /* return addr restore if non-branching insn */
+ if (cur->ainsn.restore != 0)
+ instruction_pointer_set(regs, cur->ainsn.restore);
+
+ /* restore back original saved kprobe variables and continue */
+ if (kcb->kprobe_status == KPROBE_REENTER) {
+ restore_previous_kprobe(kcb);
+ preempt_enable_no_resched();
+ return;
+ }
+
+ /*
+ * update the kcb status even if the cur->post_handler is
+ * not set, because reset_current_kprobe() doesn't update kcb.
+ */
+ kcb->kprobe_status = KPROBE_HIT_SSDONE;
+ if (cur->post_handler)
+ cur->post_handler(cur, regs, 0);
+
+ reset_current_kprobe();
+ preempt_enable_no_resched();
+}
+NOKPROBE_SYMBOL(post_kprobe_handler);
+
+bool kprobe_breakpoint_handler(struct pt_regs *regs)
+{
+ struct kprobe_ctlblk *kcb;
+ struct kprobe *p, *cur_kprobe;
+ kprobe_opcode_t *addr = (kprobe_opcode_t *)regs->csr_era;
+
+ /*
+ * We don't want to be preempted for the entire
+ * duration of kprobe processing.
+ */
+ preempt_disable();
+ kcb = get_kprobe_ctlblk();
+ cur_kprobe = kprobe_running();
+
+ p = get_kprobe(addr);
+ if (p) {
+ if (cur_kprobe) {
+ if (reenter_kprobe(p, regs, kcb))
+ return true;
+ } else {
+ /* Probe hit */
+ set_current_kprobe(p);
+ kcb->kprobe_status = KPROBE_HIT_ACTIVE;
+
+ /*
+ * If we have no pre-handler or it returned 0, we
+ * continue with normal processing. If we have a
+ * pre-handler and it returned non-zero, it will
+ * modify the execution path and there is no need
+ * for single-stepping. Let's just reset the current
+ * kprobe and exit.
+ *
+ * A pre_handler can hit a breakpoint and single-step before returning.
+ */
+ if (!p->pre_handler || !p->pre_handler(p, regs)) {
+ setup_singlestep(p, regs, kcb, 0);
+ } else {
+ reset_current_kprobe();
+ preempt_enable_no_resched();
+ }
+ return true;
+ }
+ }
+
+ if (addr->word != breakpoint_insn.word) {
+ /*
+ * The breakpoint instruction was removed right
+ * after we hit it. Another cpu has removed
+ * either a probepoint or a debugger breakpoint
+ * at this address. In either case, no further
+ * handling of this interrupt is appropriate.
+ * Return to the original instruction and continue.
+ */
+ regs->csr_era = (unsigned long)addr;
+ preempt_enable_no_resched();
+ return true;
+ }
+
+ preempt_enable_no_resched();
+ return false;
+}
+NOKPROBE_SYMBOL(kprobe_breakpoint_handler);
+
+bool kprobe_singlestep_handler(struct pt_regs *regs)
+{
+ struct kprobe *cur = kprobe_running();
+ struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+ unsigned long addr = instruction_pointer(regs);
+
+ if (cur && (kcb->kprobe_status & (KPROBE_HIT_SS | KPROBE_REENTER)) &&
+ ((unsigned long)&cur->ainsn.insn[1] == addr)) {
+ restore_local_irqflag(kcb, regs);
+ post_kprobe_handler(cur, kcb, regs);
+ return true;
+ }
+
+ preempt_enable_no_resched();
+ return false;
+}
+NOKPROBE_SYMBOL(kprobe_singlestep_handler);
+
+bool kprobe_fault_handler(struct pt_regs *regs, int trapnr)
+{
+ struct kprobe *cur = kprobe_running();
+ struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+
+ switch (kcb->kprobe_status) {
+ case KPROBE_HIT_SS:
+ case KPROBE_REENTER:
+ /*
+ * We are here because the instruction being single
+ * stepped caused a page fault. We reset the current
+ * kprobe, point the ip back to the probe address,
+ * and allow the page fault handler to continue as
+ * a normal page fault.
+ */
+ regs->csr_era = (unsigned long)cur->addr;
+ WARN_ON_ONCE(!instruction_pointer(regs));
+
+ if (kcb->kprobe_status == KPROBE_REENTER) {
+ restore_previous_kprobe(kcb);
+ } else {
+ restore_local_irqflag(kcb, regs);
+ reset_current_kprobe();
+ }
+ preempt_enable_no_resched();
+ break;
+ }
+ return false;
+}
+NOKPROBE_SYMBOL(kprobe_fault_handler);
+
+/*
+ * Provide a blacklist of symbols identifying ranges which cannot be kprobed.
+ * This blacklist is exposed to userspace via debugfs (kprobes/blacklist).
+ */
+int __init arch_populate_kprobe_blacklist(void)
+{
+ return kprobe_add_area_blacklist((unsigned long)__irqentry_text_start,
+ (unsigned long)__irqentry_text_end);
+}
+
+int __init arch_init_kprobes(void)
+{
+ return 0;
+}
diff --git a/arch/loongarch/kernel/traps.c b/arch/loongarch/kernel/traps.c
index 8029757..18aa217 100644
--- a/arch/loongarch/kernel/traps.c
+++ b/arch/loongarch/kernel/traps.c
@@ -432,7 +432,10 @@ asmlinkage void noinstr do_bp(struct pt_regs *regs)
unsigned long era = exception_era(regs);
irqentry_state_t state = irqentry_enter(regs);

- local_irq_enable();
+ /* Enable interrupt if enabled in parent context */
+ if (likely(regs->csr_prmd & CSR_PRMD_PIE))
+ local_irq_enable();
+
current->thread.trap_nr = read_csr_excode();
if (__get_inst(&opcode, (u32 *)era, user))
goto out_sigsegv;
@@ -445,14 +448,12 @@ asmlinkage void noinstr do_bp(struct pt_regs *regs)
*/
switch (bcode) {
case BRK_KPROBE_BP:
- if (notify_die(DIE_BREAK, "Kprobe", regs, bcode,
- current->thread.trap_nr, SIGTRAP) == NOTIFY_STOP)
+ if (kprobe_breakpoint_handler(regs))
goto out;
else
break;
case BRK_KPROBE_SSTEPBP:
- if (notify_die(DIE_SSTEPBP, "Kprobe_SingleStep", regs, bcode,
- current->thread.trap_nr, SIGTRAP) == NOTIFY_STOP)
+ if (kprobe_singlestep_handler(regs))
goto out;
else
break;
diff --git a/arch/loongarch/mm/fault.c b/arch/loongarch/mm/fault.c
index 1ccd536..449087b 100644
--- a/arch/loongarch/mm/fault.c
+++ b/arch/loongarch/mm/fault.c
@@ -135,6 +135,9 @@ static void __kprobes __do_page_fault(struct pt_regs *regs,
struct vm_area_struct *vma = NULL;
vm_fault_t fault;

+ if (kprobe_page_fault(regs, current->thread.trap_nr))
+ return;
+
/*
* We fault-in kernel-space virtual memory on-demand. The
* 'reference' page table is init_mm.pgd.
--
2.1.0

2023-01-18 03:44:53

by Huacai Chen

Subject: Re: [PATCH v12 0/5] Add kprobe and kretprobe support for LoongArch

Hi, Masami,

Could you please take some time to review this series? Thank you.

Huacai

On Wed, Jan 18, 2023 at 10:01 AM Tiezhu Yang <[email protected]> wrote:
>
> v12:
> -- Rebase on the latest code
> -- Fix -Wmissing-prototypes warning when make W=1
> -- Drop patch #6 "Use common function sign_extend64()"
> since it has already been applied
>
> v11:
> -- Rebase on the latest code
> -- Address all the review comments, thank you all
> -- Modify arch_prepare_kprobe() and setup_singlestep()
> to make the probe logic more clear
> -- Mark some assembler symbols as non-kprobe-able
> -- Use common function sign_extend64()
> -- Tested 20 times (about 36 hours) for all the 71 assembler
> functions annotated with SYM_CODE_START and SYM_FUNC_START
> under arch/loongarch, and tested memset alone for 10
> hours; no hang problems
>
> v10:
> -- Remove sign_extend() based on the latest code
> -- Rename insns_are_not_supported() to insns_not_supported()
> -- Rename insns_are_not_simulated() to insns_not_simulated()
> -- Set KPROBE_HIT_SSDONE if cur->post_handler is not NULL
> -- Enable preemption for KPROBE_REENTER in kprobe_fault_handler()
>
> v9:
> -- Rename sign_extended() to sign_extend()
> -- Modify kprobe_fault_handler() to handle all of kprobe_status
>
> v8:
> -- Put "regs->csr_prmd &= ~CSR_PRMD_PIE;" ahead to save one line
> -- Add code comment of preempt_disable()
> -- Put kprobe_page_fault() in __do_page_fault()
> -- Modify the check condition of break insn in kprobe_breakpoint_handler()
>
> v7:
> -- Remove stop_machine_cpuslocked() related code
>
> v6:
> -- Add a new patch to redefine larch_insn_patch_text() with
> stop_machine_cpuslocked()
> -- Modify kprobe_breakpoint_handler() to consider the original
> insn is break and return the correct value
> -- Modify do_bp() to refresh bcode when original insn is break
>
> v5:
> -- Rebase on the latest code
> -- Use stop_machine_cpuslocked() to modify insn to avoid CPU race
>
> v4:
> -- Remove kprobe_exceptions_notify() in kprobes.c
> -- Call kprobe_breakpoint_handler() and kprobe_singlestep_handler()
> in do_bp()
>
> v3:
> -- Rebase on the latest code
> -- Check the alignment of PC in simu_branch() and simu_pc()
> -- Add ibar in flush_insn_slot()
> -- Rename kprobe_{pre,post}_handler() to {post_}kprobe_handler
> -- Add preempt_disable() and preempt_enable_no_resched()
> -- Remove r0 save/restore and do some minor changes
> in kprobes_trampoline.S
> -- Do not enable CONFIG_KPROBES by default
>
> v2:
> -- Split simu_branch() and simu_pc() into a single patch
> -- Call kprobe_page_fault() in do_page_fault()
> -- Add kprobes_trampoline.S for kretprobe
>
> Tiezhu Yang (5):
> LoongArch: Simulate branch and PC* instructions
> LoongArch: Add kprobe support
> LoongArch: Add kretprobe support
> LoongArch: Mark some assembler symbols as non-kprobe-able
> samples/kprobes: Add LoongArch support
>
> arch/loongarch/Kconfig | 2 +
> arch/loongarch/include/asm/asm.h | 10 +
> arch/loongarch/include/asm/inst.h | 20 ++
> arch/loongarch/include/asm/kprobes.h | 61 +++++
> arch/loongarch/include/asm/ptrace.h | 1 +
> arch/loongarch/kernel/Makefile | 2 +
> arch/loongarch/kernel/entry.S | 1 +
> arch/loongarch/kernel/inst.c | 123 +++++++++
> arch/loongarch/kernel/kprobes.c | 405 +++++++++++++++++++++++++++++
> arch/loongarch/kernel/kprobes_trampoline.S | 96 +++++++
> arch/loongarch/kernel/traps.c | 11 +-
> arch/loongarch/lib/memcpy.S | 3 +
> arch/loongarch/mm/fault.c | 3 +
> samples/kprobes/kprobe_example.c | 8 +
> 14 files changed, 741 insertions(+), 5 deletions(-)
> create mode 100644 arch/loongarch/include/asm/kprobes.h
> create mode 100644 arch/loongarch/kernel/kprobes.c
> create mode 100644 arch/loongarch/kernel/kprobes_trampoline.S
>
> --
> 2.1.0
>
>

2023-02-01 04:56:29

by Huacai Chen

[permalink] [raw]
Subject: Re: [PATCH v12 0/5] Add kprobe and kretprobe support for LoongArch

Hi, Jeff,

Could you please pay some time to test this series? Thank you.

Huacai

On Wed, Jan 18, 2023 at 11:30 AM Huacai Chen <[email protected]> wrote:
>
> Hi, Masami,
>
> Could you please pay some time to review this series? Thank you.
>
> Huacai
>
> On Wed, Jan 18, 2023 at 10:01 AM Tiezhu Yang <[email protected]> wrote:
> >
> > [v12..v2 changelog and diffstat snipped]
> >

2023-02-01 09:40:56

by Jeff Xie

[permalink] [raw]
Subject: Re: [PATCH v12 0/5] Add kprobe and kretprobe support for LoongArch

On Wed, Feb 1, 2023 at 12:56 PM Huacai Chen <[email protected]> wrote:
>
> Hi, Jeff,
>
> Could you please pay some time to test this series? Thank you.

Thanks for notifying me about the test.

I have tested the patchset (based on
https://github.com/loongson/linux/tree/loongarch-next), and I found
that some functions can't be probed, e.g. scheduler_tick() or
uart_write_wakeup(). The two functions have one thing in common: they
both run in hardirq context.

I don't know if it's related to the hardirq context; I haven't had
time to study this patchset carefully. Both functions can be probed
on x86_64.

[root@loongarch modules]# insmod ./kprobe_example.ko symbol=scheduler_tick
insmod: can't insert './kprobe_example.ko': invalid parameter

dmesg:
[ 39.806435] kprobe_init: register_kprobe failed, returned -22


>
> Huacai
>
> On Wed, Jan 18, 2023 at 11:30 AM Huacai Chen <[email protected]> wrote:
> >
> > Hi, Masami,
> >
> > Could you please pay some time to review this series? Thank you.
> >
> > Huacai
> >
> > On Wed, Jan 18, 2023 at 10:01 AM Tiezhu Yang <[email protected]> wrote:
> > >
> > > [v12..v2 changelog and diffstat snipped]
> > >



--
Thanks,
JeffXie

2023-02-02 02:23:23

by Tiezhu Yang

[permalink] [raw]
Subject: Re: [PATCH v12 0/5] Add kprobe and kretprobe support for LoongArch



On 02/01/2023 05:40 PM, Jeff Xie wrote:
> On Wed, Feb 1, 2023 at 12:56 PM Huacai Chen <[email protected]> wrote:
>>
>> Hi, Jeff,
>>
>> Could you please pay some time to test this series? Thank you.
>
> Thanks for notifying me about the test.
>
> I have tested the patchset(based on the
> https://github.com/loongson/linux/tree/loongarch-next),
> I found that some functions can't be probed e.g. scheduler_tick() or
> uart_write_wakeup()
> the two functions have the same point, they are all run in the hardirq context.
>
> I don't know if it's related to the hardirq context, I haven't had
> time to study this patchset carefully.
> and they can be probed in the x86_64 arch.
>
> root@loongarch modules]# insmod ./kprobe_example.ko symbol=scheduler_tick
> insmod: can't insert './kprobe_example.ko': invalid parameter
>
> dmesg:
> [ 39.806435] kprobe_init: register_kprobe failed, returned -22
>

Thanks for your test.

In my test environment, I cannot reproduce the above issue; here are
the test results, and they look fine.

[root@linux loongson]# dmesg -c
[root@linux loongson]# uname -m
loongarch64
[root@linux loongson]# modprobe kprobe_example symbol=scheduler_tick
[root@linux loongson]# rmmod kprobe_example
[root@linux loongson]# dmesg | tail -2
[ 3317.138086] handler_post: <scheduler_tick> p->addr =
0x0000000065d12f66, estat = 0xc0000
[ 3317.154086] kprobe_exit: kprobe at 0000000065d12f66 unregistered

[root@linux loongson]# dmesg -c
[root@linux loongson]# uname -m
loongarch64
[root@linux loongson]# modprobe kprobe_example symbol=uart_write_wakeup
[root@linux loongson]# rmmod kprobe_example
[root@linux loongson]# dmesg | tail -2
[ 3433.502092] handler_post: <uart_write_wakeup> p->addr =
0x0000000019718061, estat = 0xc0000
[ 3433.762085] kprobe_exit: kprobe at 0000000019718061 unregistered

Additionally, "register_kprobe failed, returned -22" means the symbol
cannot be probed; here is the related code:

register_kprobe()
check_kprobe_address_safe()

static int check_kprobe_address_safe(struct kprobe *p,
                                     struct module **probed_mod)
{
	int ret;

	ret = check_ftrace_location(p);
	if (ret)
		return ret;
	jump_label_lock();
	preempt_disable();

	/* Ensure it is not in reserved area nor out of text */
	if (!(core_kernel_text((unsigned long) p->addr) ||
	      is_module_text_address((unsigned long) p->addr)) ||
	    in_gate_area_no_mm((unsigned long) p->addr) ||
	    within_kprobe_blacklist((unsigned long) p->addr) ||
	    jump_label_text_reserved(p->addr, p->addr) ||
	    static_call_text_reserved(p->addr, p->addr) ||
	    find_bug((unsigned long)p->addr)) {
		ret = -EINVAL;
		goto out;
	}
	...
}

Thanks,
Tiezhu


2023-02-02 03:33:44

by Jeff Xie

[permalink] [raw]
Subject: Re: [PATCH v12 0/5] Add kprobe and kretprobe support for LoongArch

On Thu, Feb 2, 2023 at 10:23 AM Tiezhu Yang <[email protected]> wrote:
>
>
>
> On 02/01/2023 05:40 PM, Jeff Xie wrote:
> > On Wed, Feb 1, 2023 at 12:56 PM Huacai Chen <[email protected]> wrote:
> >>
> >> Hi, Jeff,
> >>
> >> Could you please pay some time to test this series? Thank you.
> >
> > Thanks for notifying me about the test.
> >
> > I have tested the patchset(based on the
> > https://github.com/loongson/linux/tree/loongarch-next),
> > I found that some functions can't be probed e.g. scheduler_tick() or
> > uart_write_wakeup()
> > the two functions have the same point, they are all run in the hardirq context.
> >
> > I don't know if it's related to the hardirq context, I haven't had
> > time to study this patchset carefully.
> > and they can be probed in the x86_64 arch.
> >
> > root@loongarch modules]# insmod ./kprobe_example.ko symbol=scheduler_tick
> > insmod: can't insert './kprobe_example.ko': invalid parameter
> >
> > dmesg:
> > [ 39.806435] kprobe_init: register_kprobe failed, returned -22
> >
>
> Thanks for your test.
>
> On my test environment, I can not reproduce the above issue,
> here are the test results, it seems no problem.
>
> [root@linux loongson]# dmesg -c
> [root@linux loongson]# uname -m
> loongarch64
> [root@linux loongson]# modprobe kprobe_example symbol=scheduler_tick
> [root@linux loongson]# rmmod kprobe_example
> [root@linux loongson]# dmesg | tail -2
> [ 3317.138086] handler_post: <scheduler_tick> p->addr =
> 0x0000000065d12f66, estat = 0xc0000
> [ 3317.154086] kprobe_exit: kprobe at 0000000065d12f66 unregistered
>
> [root@linux loongson]# dmesg -c
> [root@linux loongson]# uname -m
> loongarch64
> [root@linux loongson]# modprobe kprobe_example symbol=uart_write_wakeup
> [root@linux loongson]# rmmod kprobe_example
> [root@linux loongson]# dmesg | tail -2
> [ 3433.502092] handler_post: <uart_write_wakeup> p->addr =
> 0x0000000019718061, estat = 0xc0000
> [ 3433.762085] kprobe_exit: kprobe at 0000000019718061 unregistered
>
> Additionally, "register_kprobe failed, returned -22" means the symbol
> can not be probed, here is the related code:
>
> register_kprobe()
> check_kprobe_address_safe()
>
> static int check_kprobe_address_safe(struct kprobe *p,
> struct module **probed_mod)
> {
> int ret;
>
> ret = check_ftrace_location(p);
> if (ret)
> return ret;
> jump_label_lock();
> preempt_disable();
>
> /* Ensure it is not in reserved area nor out of text */
> if (!(core_kernel_text((unsigned long) p->addr) ||
> is_module_text_address((unsigned long) p->addr)) ||
> in_gate_area_no_mm((unsigned long) p->addr) ||
> within_kprobe_blacklist((unsigned long) p->addr) ||
> jump_label_text_reserved(p->addr, p->addr) ||
> static_call_text_reserved(p->addr, p->addr) ||
> find_bug((unsigned long)p->addr)) {
> ret = -EINVAL;
> goto out;
> }
> ...
> }

Today I looked at the code; this has nothing to do with hardirq :-)
It is because I enabled the kernel option CONFIG_DYNAMIC_FTRACE, which
LoongArch does not support yet.

#ifdef CONFIG_DYNAMIC_FTRACE
unsigned long ftrace_location(unsigned long ip);

#else /* CONFIG_DYNAMIC_FTRACE */

static inline unsigned long ftrace_location(unsigned long ip)
{
	return 0;
}

#endif


static int check_ftrace_location(struct kprobe *p)
{
	unsigned long addr = (unsigned long)p->addr;

	if (ftrace_location(addr) == addr) {
#ifdef CONFIG_KPROBES_ON_FTRACE
		p->flags |= KPROBE_FLAG_FTRACE;
#else /* !CONFIG_KPROBES_ON_FTRACE */
		return -EINVAL; // get the error from here
#endif
	}
	return 0;
}

static int check_kprobe_address_safe(struct kprobe *p,
                                     struct module **probed_mod)
{
	int ret;

	ret = check_ftrace_location(p);
	if (ret)
		return ret; // return -EINVAL
}


>
> Thanks,
> Tiezhu
>


--
Thanks,
JeffXie

2023-02-02 23:59:28

by Masami Hiramatsu

[permalink] [raw]
Subject: Re: [PATCH v12 0/5] Add kprobe and kretprobe support for LoongArch

On Thu, 2 Feb 2023 11:33:23 +0800
Jeff Xie <[email protected]> wrote:

> [earlier quoted discussion snipped]
>
> Today I looked at the code, this has nothing to do with hardirq :-)
> because I enabled this kernel option CONFIG_DYNAMIC_FTRACE, the
> loongarch should not support the option yet.

Until you implement the feature, "HAVE_<FEATURE>" should not be selected
in arch/*/Kconfig. So if DYNAMIC_FTRACE is not implemented, please
drop "select HAVE_DYNAMIC_FTRACE"...

Thank you,


> [quoted code snipped]


--
Masami Hiramatsu (Google) <[email protected]>

2023-02-06 12:13:24

by Huacai Chen

[permalink] [raw]
Subject: Re: [PATCH v12 0/5] Add kprobe and kretprobe support for LoongArch

Hi, Jeff,

Now I have added kprobes-on-ftrace support in
https://github.com/loongson/linux/commits/loongarch-next, please test
again. Thank you.

Huacai

On Thu, Feb 2, 2023 at 11:33 AM Jeff Xie <[email protected]> wrote:
> [earlier quoted discussion snipped]

2023-02-06 12:48:18

by Jeff Xie

[permalink] [raw]
Subject: Re: [PATCH v12 0/5] Add kprobe and kretprobe support for LoongArch

On Mon, Feb 6, 2023 at 8:13 PM Huacai Chen <[email protected]> wrote:
>
> Hi, Jeff,
>
> Now I add kprobes on ftrace support in
> https://github.com/loongson/linux/commits/loongarch-next, please test
> again. Thank you.
>

When using the kprobe example module kprobe_example.ko, I haven't seen
any errors.

But when using ftrace to probe a symbol + offset, the kernel can
panic: e.g. probing scheduler_tick+4 is fine, but probing
scheduler_tick+5 makes the kernel panic.

[root@loongarch tracing]# echo 'p scheduler_tick+4' > ./kprobe_events
[root@loongarch tracing]# echo 1 > ./events/kprobes/p_scheduler_tick_4/enable
[root@loongarch tracing]# cat /sys/kernel/debug/kprobes/list
900000000027b5f4 k scheduler_tick+0x4

[root@loongarch tracing]# echo 0 > ./events/kprobes/p_scheduler_tick_4/enable
[root@loongarch tracing]# > ./kprobe_events
[root@loongarch tracing]# echo 'p scheduler_tick+5' > ./kprobe_events
[root@loongarch tracing]# echo 1 > ./events/kprobes/p_scheduler_tick_5/enable

[The kernel will panic]

dmesg:

[ 69.138541] CPU 0 Unable to handle kernel paging request at virtual
address 00000000ffff1e8c, era == 900000000027b5f4, ra ==
90000000002ed69c
[ 69.139325] Oops[#1]:
[ 69.139399] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G W
6.2.0-rc7+ #28
[ 69.139422] Hardware name: QEMU QEMU Virtual Machine, BIOS 0.0.0 02/06/2015
[ 69.139446] $ 0 : 0000000000000000 90000000002ed69c
90000000019ec000 900000010020bd90
[ 69.139533] $ 4 : 9000000006025600 900000000182bd48
0000000000000000 9000000001d493c8
[ 69.139601] $ 8 : 0000000000000040 9000000001412900
0000000000000000 0000000000000049
[ 69.139667] $12 : 90000000002ed69c 0000000000000000
0000000000000001 00000000000f0000
[ 69.139733] $16 : 00000000ffff1e8a 00000000ffff1e89
ffffffffffffeb7f 00000000ffff330b
[ 69.139797] $20 : 0000000000000000 00000000000000b0
9000000006003600 90000000060010c0
[ 69.139862] $24 : 0000000000000000 90000000019efc80
9000000006003620 9000000006003ae8
[ 69.139927] $28 : 9000000001c7fe80 9000000001c7fec0
9000000001c7fe40 0000000000000001
[ 69.139993] era : 900000000027b5f4 scheduler_tick+0x4/0x124
[ 69.140025] ra : 90000000002ed69c update_process_times+0xac/0xc0
[ 69.140047] CSR crmd: 000000b0
[ 69.140058] CSR prmd: 00000000
[ 69.140069] CSR euen: 00000000
[ 69.140080] CSR ecfg: 00071c1c
[ 69.140090] CSR estat: 00010000
[ 69.140112] ExcCode : 1 (SubCode 0)
[ 69.140139] BadVA : 00000000ffff1e8c
[ 69.140153] PrId : 0014c010 (Loongson-64bit)
[ 69.140174] Modules linked in:
[ 69.140216] Process swapper/0 (pid: 0, threadinfo=(____ptrval____),
task=(____ptrval____))
[ 69.140499] Stack : 0000000000000000 0000000000000000
90000000019efc80 00000010185c7aa6
[ 69.140570] 9000000006003ae8 90000000002ff070
90000000060035c0 90000000002ff014
[ 69.140636] 90000000060035c0 90000000002ee7ec
00000000000000b0 00000000000000b0
[ 69.140701] 0000000000000000 00000010185c4c98
00000010185c4c98 7d6a3ef38af5219f
[ 69.140766] 9000000001ddaa60 90000000060036b8
9000000006003678 90000000060035cc
[ 69.140830] 0000000000000001 00000010185c4c98
0000000000000003 00000010185c4c98
[ 69.140894] 00000000000000b0 90000000060035c0
90000000060036f8 90000000002ef404
[ 69.140958] 9000000001b05880 9000000001a0c5e8
0000000000012400 9000000001c7ea40
[ 69.141023] 9000000001c7ea00 0000000000010000
0000000000000002 0000000000000012
[ 69.141087] 900000010017ec00 0000000000000000
900000010015cf80 90000000002265a4
[ 69.141151] ...
[ 69.141192] Call Trace:
[ 69.141278] [<900000000027b5f4>] scheduler_tick+0x4/0x124
[ 69.141339] [<90000000002ed69c>] update_process_times+0xac/0xc0
[ 69.141361] [<90000000002ff070>] tick_sched_timer+0x5c/0xe8
[ 69.141381] [<90000000002ee7ec>] __hrtimer_run_queues+0x1f0/0x3d0
[ 69.141400] [<90000000002ef404>] hrtimer_interrupt+0x108/0x28c
[ 69.141420] [<90000000002265a4>] constant_timer_interrupt+0x38/0x48
[ 69.141439] [<90000000002b358c>] __handle_irq_event_percpu+0xbc/0x290
[ 69.141458] [<90000000002b3780>] handle_irq_event_percpu+0x20/0x78
[ 69.141477] [<90000000002b9cac>] handle_percpu_irq+0x5c/0x90
[ 69.141497] [<90000000002b2740>] generic_handle_domain_irq+0x30/0x48
[ 69.141516] [<9000000000af906c>] handle_cpu_irq+0x70/0xac
[ 69.141538] [<90000000012bf8b8>] handle_loongarch_irq+0x34/0x4c
[ 69.141560] [<90000000012bf950>] do_vint+0x80/0xb4
[ 69.141672] [<9000000000222120>] __arch_cpu_idle+0x20/0x24
[ 69.141694] [<90000000012cf934>] default_idle_call+0x70/0x168
[ 69.141713] [<9000000000291df4>] do_idle+0xc8/0x144
[ 69.141732] [<9000000000292078>] cpu_startup_entry+0x28/0x2c
[ 69.141750] [<90000000012c20e4>] kernel_init+0x0/0x120
[ 69.142210] [<90000000012f09e8>] arch_post_acpi_subsys_init+0x0/0xc
[ 69.142268] Code: 53ffbbff 54442c41 0015002c <2a000a00> 02ff4000 29c08077 29c06078 29c04079 29c0207a
[ 69.142624] ---[ end trace 0000000000000000 ]---
[ 69.143284] Kernel panic - not syncing: Fatal exception in interrupt
[ 69.143469] ------------[ cut here ]------------
[ 69.143498] WARNING: CPU: 0 PID: 0 at kernel/smp.c:912 smp_call_function_many_cond+0x3dc/0x3fc
[ 69.143523] Modules linked in:
[ 69.143546] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G D W 6.2.0-rc7+ #28
[ 69.143565] Hardware name: QEMU QEMU Virtual Machine, BIOS 0.0.0 02/06/2015
[ 69.143577] $ 0 : 0000000000000000 9000000000305f50
90000000019ec000 900000010020ba40
[ 69.143644] $ 4 : 9000000001a02fc8 900000000022c534
0000000000000000 0000000000000000
[ 69.143709] $ 8 : 0000000000000000 900000010020bad8
900000010020b8f0 9000000001a02788
[ 69.143775] $12 : 0000000000010000 0000000000000000
0000000000000000 0000000000000000
[ 69.143839] $16 : 0000000000f00000 0000000000000000
0000000000000000 0000000000000000
[ 69.143904] $20 : 0000000000000001 9000000001a222c8
9000000006003600 9000000001a02fc8
[ 69.143968] $24 : 0000000000000000 0000000000000000
0000000000000000 0000000000000000
[ 69.144032] $28 : 0000000000000000 0000000000000000
9000000001c7fe40 0000000000000001
[ 69.144096] era : 9000000000305cb4 smp_call_function_many_cond+0x3dc/0x3fc
[ 69.144115] ra : 9000000000305f50 smp_call_function+0x4c/0x9c
[ 69.144133] CSR crmd: 000000b0
[ 69.144144] CSR prmd: 00000000
[ 69.144155] CSR euen: 00000000
[ 69.144166] CSR ecfg: 00071c1c
[ 69.144177] CSR estat: 000c0000
[ 69.144198] ExcCode : c (SubCode 0)
[ 69.144210] PrId : 0014c010 (Loongson-64bit)
[ 69.144254] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G D W 6.2.0-rc7+ #28
[ 69.144274] Hardware name: QEMU QEMU Virtual Machine, BIOS 0.0.0 02/06/2015
[ 69.144287] Stack : 00000000000002cb 0000000000000000
9000000000223a08 90000000019ec000
[ 69.144354] 900000010020b6a0 900000010020b6a8
0000000000000000 0000000000000000
[ 69.144419] 900000010020b6a8 0000000000000040
900000010020b778 900000010020b510
[ 69.144483] 9000000001a02788 900000010020b6a8
7d6a3ef38af5219f 9000000100330000
[ 69.144548] 0000000000000003 0000000000010005
0000000000000000 0000000000000000
[ 69.144612] 0000000000000000 900000000181da00
0000000004c1c000 9000000006003600
[ 69.144676] 0000000000000000 0000000000000000
900000000189cbc0 9000000001a02788
[ 69.144741] 0000000000000000 0000000000000390
0000000000000000 9000000001c7fe40
[ 69.144806] 0000000000000001 0000000000000000
9000000000223a20 9000000000305cb4
[ 69.144870] 00000000000000b0 0000000000000000
0000000000000000 0000000000071c1c
[ 69.144935] ...
[ 69.144963] Call Trace:
[ 69.144974] [<9000000000223a20>] show_stack+0x68/0x18c
[ 69.144996] [<90000000012bea54>] dump_stack_lvl+0x80/0xb8
[ 69.145016] [<900000000023c264>] __warn+0x8c/0x180
[ 69.145035] [<900000000127769c>] report_bug+0xac/0x178
[ 69.145056] [<90000000012bf140>] do_bp+0x350/0x3a4
[ 69.145075] [<9000000001cf1924>] exception_handlers+0x1924/0x10000
[ 69.145098] [<9000000000305cb4>] smp_call_function_many_cond+0x3dc/0x3fc
[ 69.145116] [<9000000000305f50>] smp_call_function+0x4c/0x9c
[ 69.145135] [<90000000012a3da8>] panic+0x17c/0x37c
[ 69.145155] [<9000000000223cb8>] die+0x138/0x144
[ 69.145173] [<90000000012d0ed8>] do_sigsegv+0x0/0x190
[ 69.145193] [<90000000012d1310>] do_page_fault+0x2a8/0x4a8
[ 69.145212] [<90000000002312c0>] tlb_do_page_fault_0+0x128/0x1c4
[ 69.145231] [<90000000002ed69c>] update_process_times+0xac/0xc0
[ 69.145250] [<90000000002ff070>] tick_sched_timer+0x5c/0xe8
[ 69.145270] [<90000000002ee7ec>] __hrtimer_run_queues+0x1f0/0x3d0
[ 69.145289] [<90000000002ef404>] hrtimer_interrupt+0x108/0x28c
[ 69.145307] [<90000000002265a4>] constant_timer_interrupt+0x38/0x48
[ 69.145326] [<90000000002b358c>] __handle_irq_event_percpu+0xbc/0x290
[ 69.145345] [<90000000002b3780>] handle_irq_event_percpu+0x20/0x78
[ 69.145364] [<90000000002b9cac>] handle_percpu_irq+0x5c/0x90
[ 69.145383] [<90000000002b2740>] generic_handle_domain_irq+0x30/0x48
[ 69.145402] [<9000000000af906c>] handle_cpu_irq+0x70/0xac
[ 69.145419] [<90000000012bf8b8>] handle_loongarch_irq+0x34/0x4c
[ 69.145439] [<90000000012bf950>] do_vint+0x80/0xb4
[ 69.145458] [<9000000000222120>] __arch_cpu_idle+0x20/0x24
[ 69.145476] [<90000000012cf934>] default_idle_call+0x70/0x168
[ 69.145495] [<9000000000291df4>] do_idle+0xc8/0x144
[ 69.145513] [<9000000000292078>] cpu_startup_entry+0x28/0x2c
[ 69.145532] [<90000000012c20e4>] kernel_init+0x0/0x120
[ 69.145554] [<90000000012f09e8>] arch_post_acpi_subsys_init+0x0/0xc
[ 69.145587] ---[ end trace 0000000000000000 ]---
[ 69.146277] ---[ end Kernel panic - not syncing: Fatal exception in interrupt ]---



> Huacai
>
> On Thu, Feb 2, 2023 at 11:33 AM Jeff Xie <[email protected]> wrote:
> >
> > On Thu, Feb 2, 2023 at 10:23 AM Tiezhu Yang <[email protected]> wrote:
> > >
> > >
> > >
> > > On 02/01/2023 05:40 PM, Jeff Xie wrote:
> > > > On Wed, Feb 1, 2023 at 12:56 PM Huacai Chen <[email protected]> wrote:
> > > >>
> > > >> Hi, Jeff,
> > > >>
> > > >> Could you please pay some time to test this series? Thank you.
> > > >
> > > > Thanks for notifying me about the test.
> > > >
> > > > I have tested the patchset (based on
> > > > https://github.com/loongson/linux/tree/loongarch-next),
> > > > and found that some functions can't be probed, e.g. scheduler_tick() or
> > > > uart_write_wakeup(). The two functions have one thing in common: they
> > > > both run in hardirq context.
> > > >
> > > > I don't know if it's related to the hardirq context; I haven't had
> > > > time to study this patchset carefully. Both functions can be probed
> > > > on x86_64.
> > > >
> > > > root@loongarch modules]# insmod ./kprobe_example.ko symbol=scheduler_tick
> > > > insmod: can't insert './kprobe_example.ko': invalid parameter
> > > >
> > > > dmesg:
> > > > [ 39.806435] kprobe_init: register_kprobe failed, returned -22
> > > >
> > >
> > > Thanks for your test.
> > >
> > > In my test environment, I cannot reproduce the above issue.
> > > Here are the test results; there seems to be no problem.
> > >
> > > [root@linux loongson]# dmesg -c
> > > [root@linux loongson]# uname -m
> > > loongarch64
> > > [root@linux loongson]# modprobe kprobe_example symbol=scheduler_tick
> > > [root@linux loongson]# rmmod kprobe_example
> > > [root@linux loongson]# dmesg | tail -2
> > > [ 3317.138086] handler_post: <scheduler_tick> p->addr =
> > > 0x0000000065d12f66, estat = 0xc0000
> > > [ 3317.154086] kprobe_exit: kprobe at 0000000065d12f66 unregistered
> > >
> > > [root@linux loongson]# dmesg -c
> > > [root@linux loongson]# uname -m
> > > loongarch64
> > > [root@linux loongson]# modprobe kprobe_example symbol=uart_write_wakeup
> > > [root@linux loongson]# rmmod kprobe_example
> > > [root@linux loongson]# dmesg | tail -2
> > > [ 3433.502092] handler_post: <uart_write_wakeup> p->addr =
> > > 0x0000000019718061, estat = 0xc0000
> > > [ 3433.762085] kprobe_exit: kprobe at 0000000019718061 unregistered
> > >
> > > Additionally, "register_kprobe failed, returned -22" means the symbol
> > > cannot be probed; here is the related code:
> > >
> > > register_kprobe()
> > > check_kprobe_address_safe()
> > >
> > > static int check_kprobe_address_safe(struct kprobe *p,
> > > struct module **probed_mod)
> > > {
> > > int ret;
> > >
> > > ret = check_ftrace_location(p);
> > > if (ret)
> > > return ret;
> > > jump_label_lock();
> > > preempt_disable();
> > >
> > > /* Ensure it is not in reserved area nor out of text */
> > > if (!(core_kernel_text((unsigned long) p->addr) ||
> > > is_module_text_address((unsigned long) p->addr)) ||
> > > in_gate_area_no_mm((unsigned long) p->addr) ||
> > > within_kprobe_blacklist((unsigned long) p->addr) ||
> > > jump_label_text_reserved(p->addr, p->addr) ||
> > > static_call_text_reserved(p->addr, p->addr) ||
> > > find_bug((unsigned long)p->addr)) {
> > > ret = -EINVAL;
> > > goto out;
> > > }
> > > ...
> > > }
> >
> > Today I looked at the code; this has nothing to do with hardirq :-)
> > It is because I enabled CONFIG_DYNAMIC_FTRACE, and LoongArch does not
> > support CONFIG_KPROBES_ON_FTRACE yet.
> >
> > #ifdef CONFIG_DYNAMIC_FTRACE
> > unsigned long ftrace_location(unsigned long ip);
> >
> > #else /* CONFIG_DYNAMIC_FTRACE */
> >
> > static inline unsigned long ftrace_location(unsigned long ip)
> > {
> > return 0;
> > }
> >
> > #endif
> >
> >
> > static int check_ftrace_location(struct kprobe *p)
> > {
> > unsigned long addr = (unsigned long)p->addr;
> >
> > if (ftrace_location(addr) == addr) {
> > #ifdef CONFIG_KPROBES_ON_FTRACE
> > p->flags |= KPROBE_FLAG_FTRACE;
> > #else /* !CONFIG_KPROBES_ON_FTRACE */
> > return -EINVAL; // the error comes from here
> > #endif
> > }
> > return 0;
> > }
> >
> > static int check_kprobe_address_safe(struct kprobe *p,
> > struct module **probed_mod)
> > {
> > int ret;
> >
> > ret = check_ftrace_location(p);
> > if (ret)
> > return ret; // return -EINVAL
> > }
> >
> >
> > >
> > > Thanks,
> > > Tiezhu
> > >
> >
> >
> > --
> > Thanks,
> > JeffXie



--
Thanks,
JeffXie

2023-02-07 03:14:44

by Tiezhu Yang

Subject: Re: [PATCH v12 0/5] Add kprobe and kretprobe support for LoongArch



On 02/06/2023 08:48 PM, Jeff Xie wrote:
> On Mon, Feb 6, 2023 at 8:13 PM Huacai Chen <[email protected]> wrote:
>>
>> Hi, Jeff,
>>
>> Now I add kprobes on ftrace support in
>> https://github.com/loongson/linux/commits/loongarch-next, please test
>> again. Thank you.
>>
>
> When using the kprobe example module kprobe_example.ko, I haven't seen
> any errors.
>
> But when using ftrace to probe a symbol + offset, the kernel can panic:
> e.g. probing scheduler_tick+4 is fine, but probing scheduler_tick+5
> panics the kernel.
>

Thanks for your test.

We can see that every instruction address is 4-byte aligned,
because instructions have a fixed 32-bit width on LoongArch.

$ objdump -d vmlinux > dump.txt
$ grep -A 20 scheduler_tick dump.txt | head -21
9000000000279fc8 <scheduler_tick>:
9000000000279fc8: 03400000 andi $zero, $zero, 0x0
9000000000279fcc: 03400000 andi $zero, $zero, 0x0
9000000000279fd0: 02ff4063 addi.d $sp, $sp, -48(0xfd0)
9000000000279fd4: 29c08077 st.d $s0, $sp, 32(0x20)
9000000000279fd8: 29c06078 st.d $s1, $sp, 24(0x18)
9000000000279fdc: 29c04079 st.d $s2, $sp, 16(0x10)
9000000000279fe0: 29c0207a st.d $s3, $sp, 8(0x8)
9000000000279fe4: 29c0a061 st.d $ra, $sp, 40(0x28)
9000000000279fe8: 2700007b stptr.d $s4, $sp, 0
9000000000279fec: 24001844 ldptr.w $a0, $tp, 24(0x18)
9000000000279ff0: 1a02edd9 pcalau12i $s2, 5998(0x176e)
9000000000279ff4: 1a034bac pcalau12i $t0, 6749(0x1a5d)
9000000000279ff8: 02f56339 addi.d $s2, $s2, -680(0xd58)
9000000000279ffc: 00410c9a slli.d $s3, $a0, 0x3
900000000027a000: 28aae18d ld.w $t1, $t0, -1352(0xab8)
900000000027a004: 380c6b2e ldx.d $t2, $s2, $s3
900000000027a008: 1a022fcc pcalau12i $t0, 4478(0x117e)
900000000027a00c: 02f20198 addi.d $s1, $t0, -896(0xc80)
900000000027a010: 00150317 move $s0, $s1
900000000027a014: 004081ac slli.w $t0, $t1, 0x0

So we should check the alignment of the probe address at the beginning
of arch_prepare_kprobe(); some other architectures do the same thing.

$ git diff
diff --git a/arch/loongarch/kernel/kprobes.c
b/arch/loongarch/kernel/kprobes.c
index bdab707b6edf..56c8c4b09a42 100644
--- a/arch/loongarch/kernel/kprobes.c
+++ b/arch/loongarch/kernel/kprobes.c
@@ -79,6 +79,9 @@ NOKPROBE_SYMBOL(arch_prepare_simulate);

int arch_prepare_kprobe(struct kprobe *p)
{
+ if ((unsigned long)p->addr & 0x3)
+ return -EILSEQ;
+
/* copy instruction */
p->opcode = *p->addr;


Thanks,
Tiezhu


2023-02-07 04:03:47

by Huacai Chen

Subject: Re: [PATCH v12 0/5] Add kprobe and kretprobe support for LoongArch

Hi, Jeff,

The code has been updated here,
https://github.com/loongson/linux/commits/loongarch-next, you can test
again.

Huacai

On Tue, Feb 7, 2023 at 11:14 AM Tiezhu Yang <[email protected]> wrote:
>
> [...]

2023-02-07 04:32:26

by Jeff Xie

Subject: Re: [PATCH v12 0/5] Add kprobe and kretprobe support for LoongArch

On Tue, Feb 7, 2023 at 12:03 PM Huacai Chen <[email protected]> wrote:
>
> Hi, Jeff,
>
> The code has been updated here,
> https://github.com/loongson/linux/commits/loongarch-next, you can test
> again.

It looks good to me.

Tested-by: Jeff Xie <[email protected]>


> Huacai
>
> On Tue, Feb 7, 2023 at 11:14 AM Tiezhu Yang <[email protected]> wrote:
> > [...]


--
Thanks,
JeffXie