2015-06-18 03:59:39

by Pratyush Anand

Subject: [RFC PATCH V2 00/10] ARM64: Uprobe support added

These patches have been prepared on top of the ARM64 kprobe v7 patches [1].
Keeping this as RFC, because kprobe v7 still needs to be ACKed.

Unit tests for the following have been done so far and have been found
working:
1. Step-able instructions, like sub, ldr, add, etc.
2. Simulation-able instructions, like ret.
3. uretprobe
4. Reject-able instructions, like sev, wfe, etc.
5. Trapped and aborted xol path
6. Probe at an unaligned user address

Currently it does not support aarch32 instruction probing.

RFC PATCH V1 is here [2].

Changes since V1:
===================
* Most of V1 patches 1-2 has been merged into the kprobe patches.
* V1 patch 3 has been removed.
* Other patches have also been re-arranged.
* Patch 1 in this series makes the 'prepare' and 'handler'
functions independent of 'struct kprobe', so that they can be reused for uprobes.
* Patch 2 fixes kgdb_step_brk_fn to ignore others' exceptions.
* Patches 3-8 are preparations for the uprobe patch to work.
* Patches 9-10 are the actual work for uprobe support.

Other significant changes:

* Now relying on uprobe_task->vaddr, and so removed saved_user_pc and
ss_ctx from struct arch_uprobe_task.
* Removed IRQ disabling around uprobe_pre/post_sstep_notifier.
* Now returning DBG_HOOK_HANDLED from the breakpoint and step handlers only
on success.
* Removed the step_ctx logic.
* Added a comment for not supporting compat.
* Added an unaligned address check in arch_uprobe_analyze_insn.
* Included asm-generic/ptrace.h in asm/ptrace.h.
* Renamed enum debug_el to enum debug_elx.

[1] http://marc.info/?l=linux-arm-kernel&m=143439540523827&w=2
[2] http://marc.info/?l=linux-arm-kernel&m=142003951103185&w=2

Pratyush Anand (9):
arm64: kprobe: Make prepare and handler function independent of
'struct kprobe'
arm64: fix kgdb_step_brk_fn to ignore other's exception
arm64: include asm-generic/ptrace.h in asm/ptrace.h
arm64: Add helper for link pointer
arm64: Re-factor flush_ptrace_access
arm64: Handle TRAP_HWBRKPT for user mode as well
arm64: Handle TRAP_BRKPT for user mode as well
arm64: rename enum debug_el to enum debug_elx to fix "wrong kind of
tag"
arm64: Add uprobe support

Steve Capper (1):
arm64: uprobes: check conditions before simulating instructions

arch/arm64/Kconfig | 3 +
arch/arm64/include/asm/debug-monitors.h | 9 +-
arch/arm64/include/asm/probes.h | 6 +-
arch/arm64/include/asm/ptrace.h | 39 +++++-
arch/arm64/include/asm/thread_info.h | 5 +-
arch/arm64/include/asm/uprobes.h | 37 ++++++
arch/arm64/kernel/Makefile | 3 +
arch/arm64/kernel/debug-monitors.c | 59 +++++----
arch/arm64/kernel/hw_breakpoint.c | 6 +-
arch/arm64/kernel/kgdb.c | 3 +
arch/arm64/kernel/kprobes-arm64.c | 33 ++---
arch/arm64/kernel/kprobes.c | 20 +--
arch/arm64/kernel/probes-simulate-insn.c | 16 +--
arch/arm64/kernel/signal.c | 4 +-
arch/arm64/kernel/uprobes.c | 220 +++++++++++++++++++++++++++++++
arch/arm64/mm/flush.c | 30 +++--
16 files changed, 401 insertions(+), 92 deletions(-)
create mode 100644 arch/arm64/include/asm/uprobes.h
create mode 100644 arch/arm64/kernel/uprobes.c

--
2.1.0


2015-06-18 03:59:52

by Pratyush Anand

Subject: [RFC PATCH V2 01/10] arm64: kprobe: Make prepare and handler function independent of 'struct kprobe'

The prepare and handler functions will also be used by uprobes. So, make
them independent of 'struct kprobe'.

Signed-off-by: Pratyush Anand <[email protected]>
---
arch/arm64/include/asm/probes.h | 5 +++--
arch/arm64/kernel/kprobes-arm64.c | 33 +++++++++++++--------------------
arch/arm64/kernel/kprobes.c | 7 ++++---
3 files changed, 20 insertions(+), 25 deletions(-)

diff --git a/arch/arm64/include/asm/probes.h b/arch/arm64/include/asm/probes.h
index 7f5a27fa071c..f07968f1335f 100644
--- a/arch/arm64/include/asm/probes.h
+++ b/arch/arm64/include/asm/probes.h
@@ -21,9 +21,10 @@ struct arch_specific_insn;
typedef u32 kprobe_opcode_t;
typedef unsigned long (kprobes_pstate_check_t)(unsigned long);
typedef unsigned long
-(probes_condition_check_t)(struct kprobe *p, struct pt_regs *);
+(probes_condition_check_t)(u32 opcode, struct arch_specific_insn *asi,
+ struct pt_regs *);
typedef void
-(probes_prepare_t)(struct kprobe *, struct arch_specific_insn *);
+(probes_prepare_t)(u32 insn, struct arch_specific_insn *);
typedef void (kprobes_handler_t) (u32 opcode, long addr, struct pt_regs *);

enum pc_restore_type {
diff --git a/arch/arm64/kernel/kprobes-arm64.c b/arch/arm64/kernel/kprobes-arm64.c
index 8a7e6b0290a7..d8f6e79b4de0 100644
--- a/arch/arm64/kernel/kprobes-arm64.c
+++ b/arch/arm64/kernel/kprobes-arm64.c
@@ -26,68 +26,61 @@
* condition check functions for kprobes simulation
*/
static unsigned long __kprobes
-__check_pstate(struct kprobe *p, struct pt_regs *regs)
+__check_pstate(u32 opcode, struct arch_specific_insn *asi, struct pt_regs *regs)
{
- struct arch_specific_insn *asi = &p->ainsn;
unsigned long pstate = regs->pstate & 0xffffffff;

return asi->pstate_cc(pstate);
}

static unsigned long __kprobes
-__check_cbz(struct kprobe *p, struct pt_regs *regs)
+__check_cbz(u32 opcode, struct arch_specific_insn *asi, struct pt_regs *regs)
{
- return check_cbz((u32)p->opcode, regs);
+ return check_cbz(opcode, regs);
}

static unsigned long __kprobes
-__check_cbnz(struct kprobe *p, struct pt_regs *regs)
+__check_cbnz(u32 opcode, struct arch_specific_insn *asi, struct pt_regs *regs)
{
- return check_cbnz((u32)p->opcode, regs);
+ return check_cbnz(opcode, regs);
}

static unsigned long __kprobes
-__check_tbz(struct kprobe *p, struct pt_regs *regs)
+__check_tbz(u32 opcode, struct arch_specific_insn *asi, struct pt_regs *regs)
{
- return check_tbz((u32)p->opcode, regs);
+ return check_tbz(opcode, regs);
}

static unsigned long __kprobes
-__check_tbnz(struct kprobe *p, struct pt_regs *regs)
+__check_tbnz(u32 opcode, struct arch_specific_insn *asi, struct pt_regs *regs)
{
- return check_tbnz((u32)p->opcode, regs);
+ return check_tbnz(opcode, regs);
}

/*
* prepare functions for instruction simulation
*/
static void __kprobes
-prepare_none(struct kprobe *p, struct arch_specific_insn *asi)
+prepare_none(u32 insn, struct arch_specific_insn *asi)
{
}

static void __kprobes
-prepare_bcond(struct kprobe *p, struct arch_specific_insn *asi)
+prepare_bcond(u32 insn, struct arch_specific_insn *asi)
{
- kprobe_opcode_t insn = p->opcode;
-
asi->check_condn = __check_pstate;
asi->pstate_cc = kprobe_condition_checks[insn & 0xf];
}

static void __kprobes
-prepare_cbz_cbnz(struct kprobe *p, struct arch_specific_insn *asi)
+prepare_cbz_cbnz(u32 insn, struct arch_specific_insn *asi)
{
- kprobe_opcode_t insn = p->opcode;
-
asi->check_condn = (insn & (1 << 24)) ? __check_cbnz : __check_cbz;
}

static void __kprobes
-prepare_tbz_tbnz(struct kprobe *p, struct arch_specific_insn *asi)
+prepare_tbz_tbnz(u32 insn, struct arch_specific_insn *asi)
{
- kprobe_opcode_t insn = p->opcode;
-
asi->check_condn = (insn & (1 << 24)) ? __check_tbnz : __check_tbz;
}

diff --git a/arch/arm64/kernel/kprobes.c b/arch/arm64/kernel/kprobes.c
index 7e34ef381055..740f71695b07 100644
--- a/arch/arm64/kernel/kprobes.c
+++ b/arch/arm64/kernel/kprobes.c
@@ -60,7 +60,7 @@ static void __kprobes arch_prepare_ss_slot(struct kprobe *p)
static void __kprobes arch_prepare_simulate(struct kprobe *p)
{
if (p->ainsn.prepare)
- p->ainsn.prepare(p, &p->ainsn);
+ p->ainsn.prepare(p->opcode, &p->ainsn);

/* This instructions is not executed xol. No need to adjust the PC */
p->ainsn.restore.addr = 0;
@@ -271,7 +271,8 @@ static int __kprobes reenter_kprobe(struct kprobe *p,
switch (kcb->kprobe_status) {
case KPROBE_HIT_SSDONE:
case KPROBE_HIT_ACTIVE:
- if (!p->ainsn.check_condn || p->ainsn.check_condn(p, regs)) {
+ if (!p->ainsn.check_condn ||
+ p->ainsn.check_condn((u32)p->opcode, &p->ainsn, regs)) {
kprobes_inc_nmissed_count(p);
setup_singlestep(p, regs, kcb, 1);
} else {
@@ -402,7 +403,7 @@ void __kprobes kprobe_handler(struct pt_regs *regs)
if (reenter_kprobe(p, regs, kcb))
return;
} else if (!p->ainsn.check_condn ||
- p->ainsn.check_condn(p, regs)) {
+ p->ainsn.check_condn((u32)p->opcode, &p->ainsn, regs)) {
/* Probe hit and conditional execution check ok. */
set_current_kprobe(p);
kcb->kprobe_status = KPROBE_HIT_ACTIVE;
--
2.1.0

2015-06-18 04:00:05

by Pratyush Anand

Subject: [RFC PATCH V2 02/10] arm64: fix kgdb_step_brk_fn to ignore other's exception

An ARM64 step exception does not carry any syndrome information. So, it is
the responsibility of each exception handler to take care that it handles
the exception only if the exception was raised for it.
After kprobe support, both kprobe and kgdb use the register_step_hook
mechanism to register their step handlers. So, if call_step_hook calls the
kgdb handler first, it always returns 0, and in that case, if an
exception was raised for kprobe, it would never be handled.

Signed-off-by: Pratyush Anand <[email protected]>
---
arch/arm64/kernel/kgdb.c | 3 +++
1 file changed, 3 insertions(+)

diff --git a/arch/arm64/kernel/kgdb.c b/arch/arm64/kernel/kgdb.c
index a0d10c55f307..9469465a5e03 100644
--- a/arch/arm64/kernel/kgdb.c
+++ b/arch/arm64/kernel/kgdb.c
@@ -229,6 +229,9 @@ static int kgdb_compiled_brk_fn(struct pt_regs *regs, unsigned int esr)

static int kgdb_step_brk_fn(struct pt_regs *regs, unsigned int esr)
{
+ if (!kgdb_single_step)
+ return DBG_HOOK_ERROR;
+
kgdb_handle_exception(1, SIGTRAP, 0, regs);
return 0;
}
--
2.1.0

2015-06-18 04:00:10

by Pratyush Anand

Subject: [RFC PATCH V2 03/10] arm64: include asm-generic/ptrace.h in asm/ptrace.h

instruction_pointer_set is needed for the uprobe implementation.
asm-generic/ptrace.h already defines it. So include it in asm/ptrace.h.

But inclusion of asm-generic/ptrace.h needs definitions of GET_USP,
SET_USP, GET_FP & SET_FP, as they differ from the generic
definitions. So, define them in asm/ptrace.h.

user_stack_pointer, instruction_pointer and profile_pc are already
defined by asm-generic/ptrace.h now, therefore remove them from asm/ptrace.h.

To modify the instruction pointer in kprobe, use
instruction_pointer_set(regs, val) instead of instruction_pointer(regs)
= val, which would otherwise cause an lvalue error.

Signed-off-by: Pratyush Anand <[email protected]>
---
arch/arm64/include/asm/ptrace.h | 32 +++++++++++++++++++++++++-------
arch/arm64/kernel/kprobes.c | 13 +++++++------
arch/arm64/kernel/probes-simulate-insn.c | 16 ++++++++--------
3 files changed, 40 insertions(+), 21 deletions(-)

diff --git a/arch/arm64/include/asm/ptrace.h b/arch/arm64/include/asm/ptrace.h
index aadf61a334eb..3ea7f5a04bfc 100644
--- a/arch/arm64/include/asm/ptrace.h
+++ b/arch/arm64/include/asm/ptrace.h
@@ -144,10 +144,6 @@ struct pt_regs {

#define fast_interrupts_enabled(regs) \
(!((regs)->pstate & PSR_F_BIT))
-
-#define user_stack_pointer(regs) \
- (!compat_user_mode(regs) ? (regs)->sp : (regs)->compat_sp)
-
/**
* regs_get_register() - get register value from its offset
* @regs: pt_regs from which register value is gotten
@@ -206,13 +202,35 @@ static inline int valid_user_regs(struct user_pt_regs *regs)
return 0;
}

-#define instruction_pointer(regs) ((regs)->pc)
+#define GET_USP(regs) \
+ (!compat_user_mode(regs) ? (regs)->sp : (regs)->compat_sp)
+
+#define SET_USP(regs, val) \
+ do { \
+ if (compat_user_mode(regs)) \
+ (regs)->compat_sp = val; \
+ else \
+ (regs)->sp = val; \
+ } while (0)
+
+#define GET_FP(regs) \
+ (!compat_user_mode(regs) ? (regs)->regs[29] : (regs)->compat_fp)
+
+#define SET_FP(regs, val) \
+ do { \
+ if (compat_user_mode(regs)) \
+ (regs)->compat_fp = val; \
+ else \
+ (regs)->regs[29] = val; \
+ } while (0)
+
+#include <asm-generic/ptrace.h>
+
#define stack_pointer(regs) ((regs)->sp)

#ifdef CONFIG_SMP
+#undef profile_pc
extern unsigned long profile_pc(struct pt_regs *regs);
-#else
-#define profile_pc(regs) instruction_pointer(regs)
#endif

#endif /* __ASSEMBLY__ */
diff --git a/arch/arm64/kernel/kprobes.c b/arch/arm64/kernel/kprobes.c
index 740f71695b07..6c9f8b5f04ce 100644
--- a/arch/arm64/kernel/kprobes.c
+++ b/arch/arm64/kernel/kprobes.c
@@ -228,7 +228,8 @@ static void __kprobes
skip_singlestep_missed(struct kprobe_ctlblk *kcb, struct pt_regs *regs)
{
/* set return addr to next pc to continue */
- instruction_pointer(regs) += sizeof(kprobe_opcode_t);
+ instruction_pointer_set(regs,
+ instruction_pointer(regs) + sizeof(kprobe_opcode_t));
}

static void __kprobes setup_singlestep(struct kprobe *p,
@@ -257,7 +258,7 @@ static void __kprobes setup_singlestep(struct kprobe *p,
/* IRQs and single stepping do not mix well. */
kprobes_save_local_irqflag(regs);
kernel_enable_single_step(regs);
- instruction_pointer(regs) = slot;
+ instruction_pointer_set(regs, slot);
} else {
/* insn simulation */
arch_simulate_insn(p, regs);
@@ -304,7 +305,7 @@ post_kprobe_handler(struct kprobe_ctlblk *kcb, struct pt_regs *regs)

/* return addr restore if non-branching insn */
if (cur->ainsn.restore.type == RESTORE_PC) {
- instruction_pointer(regs) = cur->ainsn.restore.addr;
+ instruction_pointer_set(regs, cur->ainsn.restore.addr);
if (!instruction_pointer(regs))
BUG();
}
@@ -341,7 +342,7 @@ int __kprobes kprobe_fault_handler(struct pt_regs *regs, unsigned int fsr)
* and allow the page fault handler to continue as a
* normal page fault.
*/
- instruction_pointer(regs) = (unsigned long)cur->addr;
+ instruction_pointer_set(regs, (unsigned long)cur->addr);
if (!instruction_pointer(regs))
BUG();
if (kcb->kprobe_status == KPROBE_REENTER)
@@ -507,7 +508,7 @@ int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
memcpy(kcb->jprobes_stack, (void *)stack_ptr,
MIN_STACK_SIZE(stack_ptr));

- instruction_pointer(regs) = (long)jp->entry;
+ instruction_pointer_set(regs, (long)jp->entry);
preempt_disable();
return 1;
}
@@ -633,7 +634,7 @@ static void __kprobes __used *trampoline_probe_handler(struct pt_regs *regs)

kretprobe_assert(ri, orig_ret_addr, trampoline_address);
/* restore the original return address */
- instruction_pointer(regs) = orig_ret_addr;
+ instruction_pointer_set(regs, orig_ret_addr);
reset_current_kprobe();
kretprobe_hash_unlock(current, &flags);

diff --git a/arch/arm64/kernel/probes-simulate-insn.c b/arch/arm64/kernel/probes-simulate-insn.c
index a224c91001d9..098b434ab6fc 100644
--- a/arch/arm64/kernel/probes-simulate-insn.c
+++ b/arch/arm64/kernel/probes-simulate-insn.c
@@ -92,7 +92,7 @@ simulate_adr_adrp(u32 opcode, long addr, struct pt_regs *regs)

regs->regs[xn] = val;

- instruction_pointer(regs) += 4;
+ instruction_pointer_set(regs, instruction_pointer(regs) + 4);
}

void __kprobes
@@ -104,7 +104,7 @@ simulate_b_bl(u32 opcode, long addr, struct pt_regs *regs)
if (opcode & (1 << 31))
regs->regs[30] = addr + 4;

- instruction_pointer(regs) = addr + disp;
+ instruction_pointer_set(regs, addr + disp);
}

void __kprobes
@@ -112,7 +112,7 @@ simulate_b_cond(u32 opcode, long addr, struct pt_regs *regs)
{
int disp = bcond_displacement(opcode);

- instruction_pointer(regs) = addr + disp;
+ instruction_pointer_set(regs, addr + disp);
}

void __kprobes
@@ -124,7 +124,7 @@ simulate_br_blr_ret(u32 opcode, long addr, struct pt_regs *regs)
if (((opcode >> 21) & 0x3) == 1)
regs->regs[30] = addr + 4;

- instruction_pointer(regs) = regs->regs[xn];
+ instruction_pointer_set(regs, regs->regs[xn]);
}

void __kprobes
@@ -132,7 +132,7 @@ simulate_cbz_cbnz(u32 opcode, long addr, struct pt_regs *regs)
{
int disp = cbz_displacement(opcode);

- instruction_pointer(regs) = addr + disp;
+ instruction_pointer_set(regs, addr + disp);
}

void __kprobes
@@ -140,7 +140,7 @@ simulate_tbz_tbnz(u32 opcode, long addr, struct pt_regs *regs)
{
int disp = tbz_displacement(opcode);

- instruction_pointer(regs) = addr + disp;
+ instruction_pointer_set(regs, addr + disp);
}

void __kprobes
@@ -157,7 +157,7 @@ simulate_ldr_literal(u32 opcode, long addr, struct pt_regs *regs)
else /* w0-w31 */
*(u32 *) (&regs->regs[xn]) = (*(u32 *) (load_addr));

- instruction_pointer(regs) += 4;
+ instruction_pointer_set(regs, instruction_pointer(regs) + 4);
}

void __kprobes
@@ -170,5 +170,5 @@ simulate_ldrsw_literal(u32 opcode, long addr, struct pt_regs *regs)
load_addr = (s32 *) (addr + disp);
regs->regs[xn] = *load_addr;

- instruction_pointer(regs) += 4;
+ instruction_pointer_set(regs, instruction_pointer(regs) + 4);
}
--
2.1.0

2015-06-18 04:00:19

by Pratyush Anand

Subject: [RFC PATCH V2 04/10] arm64: Add helper for link pointer

In many places we program the procedure link pointer, i.e. regs[30]. So
add a helper to do that.

Signed-off-by: Pratyush Anand <[email protected]>
---
arch/arm64/include/asm/ptrace.h | 7 +++++++
1 file changed, 7 insertions(+)

diff --git a/arch/arm64/include/asm/ptrace.h b/arch/arm64/include/asm/ptrace.h
index 3ea7f5a04bfc..fa2c122e5bd6 100644
--- a/arch/arm64/include/asm/ptrace.h
+++ b/arch/arm64/include/asm/ptrace.h
@@ -227,6 +227,13 @@ static inline int valid_user_regs(struct user_pt_regs *regs)
#include <asm-generic/ptrace.h>

#define stack_pointer(regs) ((regs)->sp)
+#define procedure_link_pointer(regs) ((regs)->regs[30])
+
+static inline void procedure_link_pointer_set(struct pt_regs *regs,
+ unsigned long val)
+{
+ procedure_link_pointer(regs) = val;
+}

#ifdef CONFIG_SMP
#undef profile_pc
--
2.1.0

2015-06-18 04:00:29

by Pratyush Anand

Subject: [RFC PATCH V2 05/10] arm64: Re-factor flush_ptrace_access

Re-factor flush_ptrace_access to allow reuse of its vma-independent part,
which is needed by functions like arch_uprobe_copy_ixol, where we do not
have a vma.

Signed-off-by: Pratyush Anand <[email protected]>
---
arch/arm64/mm/flush.c | 24 +++++++++++++++---------
1 file changed, 15 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index b6f14e8d2121..9a4dd6f39cfb 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -34,19 +34,25 @@ void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
__flush_icache_all();
}

+static void __flush_ptrace_access(struct page *page, unsigned long uaddr,
+ void *kaddr, unsigned long len)
+{
+ unsigned long addr = (unsigned long)kaddr;
+
+ if (icache_is_aliasing()) {
+ __flush_dcache_area(kaddr, len);
+ __flush_icache_all();
+ } else {
+ flush_icache_range(addr, addr + len);
+ }
+}
+
static void flush_ptrace_access(struct vm_area_struct *vma, struct page *page,
unsigned long uaddr, void *kaddr,
unsigned long len)
{
- if (vma->vm_flags & VM_EXEC) {
- unsigned long addr = (unsigned long)kaddr;
- if (icache_is_aliasing()) {
- __flush_dcache_area(kaddr, len);
- __flush_icache_all();
- } else {
- flush_icache_range(addr, addr + len);
- }
- }
+ if (vma->vm_flags & VM_EXEC)
+ __flush_ptrace_access(page, uaddr, kaddr, len);
}

/*
--
2.1.0

2015-06-18 04:00:23

by Pratyush Anand

Subject: [RFC PATCH V2 06/10] arm64: Handle TRAP_HWBRKPT for user mode as well

uprobe registers a handler at step_hook. So, single_step_handler now runs
the registered hooks for user mode as well; a SIGTRAP is sent to user
space only if no hook handles the exception.

Signed-off-by: Pratyush Anand <[email protected]>
---
arch/arm64/kernel/debug-monitors.c | 32 +++++++++++++++-----------------
1 file changed, 15 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/kernel/debug-monitors.c b/arch/arm64/kernel/debug-monitors.c
index 486ee94304a0..7eb13dcf09fa 100644
--- a/arch/arm64/kernel/debug-monitors.c
+++ b/arch/arm64/kernel/debug-monitors.c
@@ -238,7 +238,14 @@ static int single_step_handler(unsigned long addr, unsigned int esr,
if (!reinstall_suspended_bps(regs))
return 0;

- if (user_mode(regs)) {
+#ifdef CONFIG_KPROBES
+ if (kprobe_single_step_handler(regs, esr) == DBG_HOOK_HANDLED)
+ handler_found = true;
+#endif
+ if (!handler_found && call_step_hook(regs, esr) == DBG_HOOK_HANDLED)
+ handler_found = true;
+
+ if (!handler_found && user_mode(regs)) {
info.si_signo = SIGTRAP;
info.si_errno = 0;
info.si_code = TRAP_HWBKPT;
@@ -252,22 +259,13 @@ static int single_step_handler(unsigned long addr, unsigned int esr,
* to the active-not-pending state).
*/
user_rewind_single_step(current);
- } else {
-#ifdef CONFIG_KPROBES
- if (kprobe_single_step_handler(regs, esr) == DBG_HOOK_HANDLED)
- handler_found = true;
-#endif
- if (call_step_hook(regs, esr) == DBG_HOOK_HANDLED)
- handler_found = true;
-
- if (!handler_found) {
- pr_warn("Unexpected kernel single-step exception at EL1\n");
- /*
- * Re-enable stepping since we know that we will be
- * returning to regs.
- */
- set_regs_spsr_ss(regs);
- }
+ } else if (!handler_found) {
+ pr_warning("Unexpected kernel single-step exception at EL1\n");
+ /*
+ * Re-enable stepping since we know that we will be
+ * returning to regs.
+ */
+ set_regs_spsr_ss(regs);
}

return 0;
--
2.1.0

2015-06-18 04:01:50

by Pratyush Anand

Subject: [RFC PATCH V2 07/10] arm64: Handle TRAP_BRKPT for user mode as well

uprobe is registered at break_hook with a unique ESR code. So, when a
TRAP_BRKPT occurs, call_break_hook checks whether it was for uprobe. If
not, a SIGTRAP is sent to user space.

Signed-off-by: Pratyush Anand <[email protected]>
---
arch/arm64/kernel/debug-monitors.c | 23 +++++++++++++----------
1 file changed, 13 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/kernel/debug-monitors.c b/arch/arm64/kernel/debug-monitors.c
index 7eb13dcf09fa..1fe912e77f62 100644
--- a/arch/arm64/kernel/debug-monitors.c
+++ b/arch/arm64/kernel/debug-monitors.c
@@ -311,8 +311,18 @@ static int brk_handler(unsigned long addr, unsigned int esr,
struct pt_regs *regs)
{
siginfo_t info;
+ bool handler_found = false;
+
+#ifdef CONFIG_KPROBES
+ if ((esr & BRK64_ESR_MASK) == BRK64_ESR_KPROBES) {
+ if (kprobe_breakpoint_handler(regs, esr) == DBG_HOOK_HANDLED)
+ handler_found = true;
+ }
+#endif
+ if (!handler_found && call_break_hook(regs, esr) == DBG_HOOK_HANDLED)
+ handler_found = true;

- if (user_mode(regs)) {
+ if (!handler_found && user_mode(regs)) {
info = (siginfo_t) {
.si_signo = SIGTRAP,
.si_errno = 0,
@@ -321,15 +331,8 @@ static int brk_handler(unsigned long addr, unsigned int esr,
};

force_sig_info(SIGTRAP, &info, current);
- }
-#ifdef CONFIG_KPROBES
- else if ((esr & BRK64_ESR_MASK) == BRK64_ESR_KPROBES) {
- if (kprobe_breakpoint_handler(regs, esr) != DBG_HOOK_HANDLED)
- return -EFAULT;
- }
-#endif
- else if (call_break_hook(regs, esr) != DBG_HOOK_HANDLED) {
- pr_warn("Unexpected kernel BRK exception at EL1\n");
+ } else if (!handler_found) {
+ pr_warning("Unexpected kernel BRK exception at EL1\n");
return -EFAULT;
}

--
2.1.0

2015-06-18 04:01:43

by Pratyush Anand

Subject: [RFC PATCH V2 08/10] arm64: rename enum debug_el to enum debug_elx to fix "wrong kind of tag"

asm/debug-monitors.h contains the definitions for the debug opcodes. So,
it will be needed by asm/uprobes.h.

With enum debug_el, the following compilation error is generated once
asm/uprobes.h is included:

lib/list_sort.c:160:8: error: ‘debug_el’ defined as wrong kind of tag
struct debug_el {

Therefore, rename enum debug_el to enum debug_elx.

Signed-off-by: Pratyush Anand <[email protected]>
---
arch/arm64/include/asm/debug-monitors.h | 6 +++---
arch/arm64/kernel/debug-monitors.c | 4 ++--
arch/arm64/kernel/hw_breakpoint.c | 6 +++---
3 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/debug-monitors.h b/arch/arm64/include/asm/debug-monitors.h
index 92d7ceac9adf..d9e79b01d09e 100644
--- a/arch/arm64/include/asm/debug-monitors.h
+++ b/arch/arm64/include/asm/debug-monitors.h
@@ -132,13 +132,13 @@ void unregister_break_hook(struct break_hook *hook);

u8 debug_monitors_arch(void);

-enum debug_el {
+enum debug_elx {
DBG_ACTIVE_EL0 = 0,
DBG_ACTIVE_EL1,
};

-void enable_debug_monitors(enum debug_el el);
-void disable_debug_monitors(enum debug_el el);
+void enable_debug_monitors(enum debug_elx el);
+void disable_debug_monitors(enum debug_elx el);

void user_rewind_single_step(struct task_struct *task);
void user_fastforward_single_step(struct task_struct *task);
diff --git a/arch/arm64/kernel/debug-monitors.c b/arch/arm64/kernel/debug-monitors.c
index 1fe912e77f62..237a21f675fd 100644
--- a/arch/arm64/kernel/debug-monitors.c
+++ b/arch/arm64/kernel/debug-monitors.c
@@ -83,7 +83,7 @@ early_param("nodebugmon", early_debug_disable);
static DEFINE_PER_CPU(int, mde_ref_count);
static DEFINE_PER_CPU(int, kde_ref_count);

-void enable_debug_monitors(enum debug_el el)
+void enable_debug_monitors(enum debug_elx el)
{
u32 mdscr, enable = 0;

@@ -103,7 +103,7 @@ void enable_debug_monitors(enum debug_el el)
}
}

-void disable_debug_monitors(enum debug_el el)
+void disable_debug_monitors(enum debug_elx el)
{
u32 mdscr, disable = 0;

diff --git a/arch/arm64/kernel/hw_breakpoint.c b/arch/arm64/kernel/hw_breakpoint.c
index e7d934d3afe0..43b74a3ddaef 100644
--- a/arch/arm64/kernel/hw_breakpoint.c
+++ b/arch/arm64/kernel/hw_breakpoint.c
@@ -157,7 +157,7 @@ static void write_wb_reg(int reg, int n, u64 val)
* Convert a breakpoint privilege level to the corresponding exception
* level.
*/
-static enum debug_el debug_exception_level(int privilege)
+static enum debug_elx debug_exception_level(int privilege)
{
switch (privilege) {
case AARCH64_BREAKPOINT_EL0:
@@ -231,7 +231,7 @@ static int hw_breakpoint_control(struct perf_event *bp,
struct perf_event **slots;
struct debug_info *debug_info = &current->thread.debug;
int i, max_slots, ctrl_reg, val_reg, reg_enable;
- enum debug_el dbg_el = debug_exception_level(info->ctrl.privilege);
+ enum debug_elx dbg_el = debug_exception_level(info->ctrl.privilege);
u32 ctrl;

if (info->ctrl.type == ARM_BREAKPOINT_EXECUTE) {
@@ -538,7 +538,7 @@ int arch_validate_hwbkpt_settings(struct perf_event *bp)
* exception level at the register level.
* This is used when single-stepping after a breakpoint exception.
*/
-static void toggle_bp_registers(int reg, enum debug_el el, int enable)
+static void toggle_bp_registers(int reg, enum debug_elx el, int enable)
{
int i, max_slots, privilege;
u32 ctrl;
--
2.1.0

2015-06-18 04:00:34

by Pratyush Anand

Subject: [RFC PATCH V2 09/10] arm64: Add uprobe support

This patch adds support for uprobe on ARM64 architecture.

Unit tests for the following have been done so far and have been found
working:
1. Step-able instructions, like sub, ldr, add, etc.
2. Simulation-able instructions, like ret.
3. uretprobe
4. Reject-able instructions, like sev, wfe, etc.
5. Trapped and aborted xol path
6. Probe at an unaligned user address

Currently it does not support aarch32 instruction probing.

Signed-off-by: Pratyush Anand <[email protected]>
---
arch/arm64/Kconfig | 3 +
arch/arm64/include/asm/debug-monitors.h | 3 +
arch/arm64/include/asm/probes.h | 1 +
arch/arm64/include/asm/thread_info.h | 5 +-
arch/arm64/include/asm/uprobes.h | 37 ++++++
arch/arm64/kernel/Makefile | 3 +
arch/arm64/kernel/signal.c | 4 +-
arch/arm64/kernel/uprobes.c | 213 ++++++++++++++++++++++++++++++++
arch/arm64/mm/flush.c | 6 +
9 files changed, 273 insertions(+), 2 deletions(-)
create mode 100644 arch/arm64/include/asm/uprobes.h
create mode 100644 arch/arm64/kernel/uprobes.c

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 5312be5b40ad..3ff4e038a365 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -156,6 +156,9 @@ config PGTABLE_LEVELS
default 3 if ARM64_4K_PAGES && ARM64_VA_BITS_39
default 4 if ARM64_4K_PAGES && ARM64_VA_BITS_48

+config ARCH_SUPPORTS_UPROBES
+ def_bool y
+
source "init/Kconfig"

source "kernel/Kconfig.freezer"
diff --git a/arch/arm64/include/asm/debug-monitors.h b/arch/arm64/include/asm/debug-monitors.h
index d9e79b01d09e..1c3a4e635f1c 100644
--- a/arch/arm64/include/asm/debug-monitors.h
+++ b/arch/arm64/include/asm/debug-monitors.h
@@ -94,6 +94,9 @@
#define BRK64_ESR_MASK 0xFFFF
#define BRK64_ESR_KPROBES 0x0004
#define BRK64_OPCODE_KPROBES (AARCH64_BREAK_MON | (BRK64_ESR_KPROBES << 5))
+/* uprobes BRK opcodes with ESR encoding */
+#define BRK64_ESR_UPROBES 0x0008
+#define BRK64_OPCODE_UPROBES (AARCH64_BREAK_MON | (BRK64_ESR_UPROBES << 5))

/* AArch32 */
#define DBG_ESR_EVT_BKPT 0x4
diff --git a/arch/arm64/include/asm/probes.h b/arch/arm64/include/asm/probes.h
index f07968f1335f..52db4e4c47c7 100644
--- a/arch/arm64/include/asm/probes.h
+++ b/arch/arm64/include/asm/probes.h
@@ -19,6 +19,7 @@ struct kprobe;
struct arch_specific_insn;

typedef u32 kprobe_opcode_t;
+typedef u32 uprobe_opcode_t;
typedef unsigned long (kprobes_pstate_check_t)(unsigned long);
typedef unsigned long
(probes_condition_check_t)(u32 opcode, struct arch_specific_insn *asi,
diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index dcd06d18a42a..2e0644e0600e 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -101,6 +101,7 @@ static inline struct thread_info *current_thread_info(void)
#define TIF_NEED_RESCHED 1
#define TIF_NOTIFY_RESUME 2 /* callback before returning to user */
#define TIF_FOREIGN_FPSTATE 3 /* CPU's FP state is not current's */
+#define TIF_UPROBE 4 /* uprobe breakpoint or singlestep */
#define TIF_NOHZ 7
#define TIF_SYSCALL_TRACE 8
#define TIF_SYSCALL_AUDIT 9
@@ -122,10 +123,12 @@ static inline struct thread_info *current_thread_info(void)
#define _TIF_SYSCALL_AUDIT (1 << TIF_SYSCALL_AUDIT)
#define _TIF_SYSCALL_TRACEPOINT (1 << TIF_SYSCALL_TRACEPOINT)
#define _TIF_SECCOMP (1 << TIF_SECCOMP)
+#define _TIF_UPROBE (1 << TIF_UPROBE)
#define _TIF_32BIT (1 << TIF_32BIT)

#define _TIF_WORK_MASK (_TIF_NEED_RESCHED | _TIF_SIGPENDING | \
- _TIF_NOTIFY_RESUME | _TIF_FOREIGN_FPSTATE)
+ _TIF_NOTIFY_RESUME | _TIF_FOREIGN_FPSTATE | \
+ _TIF_UPROBE)

#define _TIF_SYSCALL_WORK (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
_TIF_SYSCALL_TRACEPOINT | _TIF_SECCOMP | \
diff --git a/arch/arm64/include/asm/uprobes.h b/arch/arm64/include/asm/uprobes.h
new file mode 100644
index 000000000000..9d64317d1e21
--- /dev/null
+++ b/arch/arm64/include/asm/uprobes.h
@@ -0,0 +1,37 @@
+/*
+ * Copyright (C) 2014-2015 Pratyush Anand <[email protected]>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef _ASM_UPROBES_H
+#define _ASM_UPROBES_H
+
+#include <asm/debug-monitors.h>
+#include <asm/insn.h>
+#include <asm/probes.h>
+
+#define MAX_UINSN_BYTES AARCH64_INSN_SIZE
+
+#define UPROBE_SWBP_INSN BRK64_OPCODE_UPROBES
+#define UPROBE_SWBP_INSN_SIZE 4
+#define UPROBE_XOL_SLOT_BYTES MAX_UINSN_BYTES
+
+struct arch_uprobe_task {
+ unsigned long saved_fault_code;
+};
+
+struct arch_uprobe {
+ union {
+ u8 insn[MAX_UINSN_BYTES];
+ u8 ixol[MAX_UINSN_BYTES];
+ };
+ struct arch_specific_insn ainsn;
+ bool simulate;
+};
+
+extern void flush_uprobe_xol_access(struct page *page, unsigned long uaddr,
+ void *kaddr, unsigned long len);
+#endif
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 5e9d54f2763c..181659e8c371 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -35,6 +35,9 @@ arm64-obj-$(CONFIG_KGDB) += kgdb.o
arm64-obj-$(CONFIG_KPROBES) += kprobes.o kprobes-arm64.o \
probes-simulate-insn.o \
probes-condn-check.o
+arm64-obj-$(CONFIG_UPROBES) += uprobes.o kprobes-arm64.o \
+ probes-simulate-insn.o \
+ probes-condn-check.o
arm64-obj-$(CONFIG_EFI) += efi.o efi-stub.o efi-entry.o
arm64-obj-$(CONFIG_PCI) += pci.o
arm64-obj-$(CONFIG_ARMV8_DEPRECATED) += armv8_deprecated.o
diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
index e18c48cb6db1..6da825923864 100644
--- a/arch/arm64/kernel/signal.c
+++ b/arch/arm64/kernel/signal.c
@@ -402,6 +402,9 @@ static void do_signal(struct pt_regs *regs)
asmlinkage void do_notify_resume(struct pt_regs *regs,
unsigned int thread_flags)
{
+ if (thread_flags & _TIF_UPROBE)
+ uprobe_notify_resume(regs);
+
if (thread_flags & _TIF_SIGPENDING)
do_signal(regs);

@@ -412,5 +415,4 @@ asmlinkage void do_notify_resume(struct pt_regs *regs,

if (thread_flags & _TIF_FOREIGN_FPSTATE)
fpsimd_restore_current_state();
-
}
diff --git a/arch/arm64/kernel/uprobes.c b/arch/arm64/kernel/uprobes.c
new file mode 100644
index 000000000000..2cc9114deac2
--- /dev/null
+++ b/arch/arm64/kernel/uprobes.c
@@ -0,0 +1,213 @@
+/*
+ * Copyright (C) 2014-2015 Pratyush Anand <[email protected]>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#include <linux/highmem.h>
+#include <linux/ptrace.h>
+#include <linux/uprobes.h>
+
+#include "kprobes-arm64.h"
+
+#define UPROBE_INV_FAULT_CODE UINT_MAX
+
+void arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr,
+ void *src, unsigned long len)
+{
+ void *xol_page_kaddr = kmap_atomic(page);
+ void *dst = xol_page_kaddr + (vaddr & ~PAGE_MASK);
+
+ preempt_disable();
+
+ /* Initialize the slot */
+ memcpy(dst, src, len);
+
+ /* flush caches (dcache/icache) */
+ flush_uprobe_xol_access(page, vaddr, dst, len);
+
+ preempt_enable();
+
+ kunmap_atomic(xol_page_kaddr);
+}
+
+unsigned long uprobe_get_swbp_addr(struct pt_regs *regs)
+{
+ return instruction_pointer(regs);
+}
+
+int arch_uprobe_analyze_insn(struct arch_uprobe *auprobe, struct mm_struct *mm,
+ unsigned long addr)
+{
+ kprobe_opcode_t insn;
+
+ /* TODO: Currently we do not support AARCH32 instruction probing */
+
+ if (!IS_ALIGNED(addr, AARCH64_INSN_SIZE))
+ return -EINVAL;
+
+ insn = *(kprobe_opcode_t *)(&auprobe->insn[0]);
+
+ switch (arm_kprobe_decode_insn(insn, &auprobe->ainsn)) {
+ case INSN_REJECTED:
+ return -EINVAL;
+
+ case INSN_GOOD_NO_SLOT:
+ auprobe->simulate = true;
+ if (auprobe->ainsn.prepare)
+ auprobe->ainsn.prepare(insn, &auprobe->ainsn);
+ break;
+
+ case INSN_GOOD:
+ default:
+ break;
+ }
+
+ return 0;
+}
+
+int arch_uprobe_pre_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
+{
+ struct uprobe_task *utask = current->utask;
+
+ /* saved fault code is restored in post_xol */
+ utask->autask.saved_fault_code = current->thread.fault_code;
+
+ /* An invalid fault code between pre/post xol events */
+ current->thread.fault_code = UPROBE_INV_FAULT_CODE;
+
+ /* Set instruction pointer to the out-of-line (xol) slot */
+ instruction_pointer_set(regs, utask->xol_vaddr);
+
+ user_enable_single_step(current);
+
+ return 0;
+}
+
+int arch_uprobe_post_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
+{
+ struct uprobe_task *utask = current->utask;
+
+ WARN_ON_ONCE(current->thread.fault_code != UPROBE_INV_FAULT_CODE);
+
+ /* restore fault code */
+ current->thread.fault_code = utask->autask.saved_fault_code;
+
+ /* Set instruction pointer to the instruction after the breakpoint */
+ instruction_pointer_set(regs, utask->vaddr + 4);
+
+ user_disable_single_step(current);
+
+ return 0;
+}
+bool arch_uprobe_xol_was_trapped(struct task_struct *t)
+{
+ /*
+ * Between arch_uprobe_pre_xol and arch_uprobe_post_xol, if the xol
+ * insn itself traps, detect the case with the help of the invalid
+ * fault code, which is set in arch_uprobe_pre_xol and restored in
+ * arch_uprobe_post_xol.
+ */
+ if (t->thread.fault_code != UPROBE_INV_FAULT_CODE)
+ return true;
+
+ return false;
+}
+
+bool arch_uprobe_skip_sstep(struct arch_uprobe *auprobe, struct pt_regs *regs)
+{
+ kprobe_opcode_t insn;
+ unsigned long addr;
+
+ if (!auprobe->simulate)
+ return false;
+
+ insn = *(kprobe_opcode_t *)(&auprobe->insn[0]);
+ addr = instruction_pointer(regs);
+
+ if (auprobe->ainsn.handler)
+ auprobe->ainsn.handler(insn, addr, regs);
+
+ return true;
+}
+
+void arch_uprobe_abort_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
+{
+ struct uprobe_task *utask = current->utask;
+
+ current->thread.fault_code = utask->autask.saved_fault_code;
+ /*
+ * Task has received a fatal signal, so reset back to probed
+ * address.
+ */
+ instruction_pointer_set(regs, utask->vaddr);
+
+ user_disable_single_step(current);
+}
+
+unsigned long
+arch_uretprobe_hijack_return_addr(unsigned long trampoline_vaddr,
+ struct pt_regs *regs)
+{
+ unsigned long orig_ret_vaddr;
+
+ orig_ret_vaddr = procedure_link_pointer(regs);
+ /* Replace the return addr with trampoline addr */
+ procedure_link_pointer_set(regs, trampoline_vaddr);
+
+ return orig_ret_vaddr;
+}
+
+int arch_uprobe_exception_notify(struct notifier_block *self,
+ unsigned long val, void *data)
+{
+ return NOTIFY_DONE;
+}
+
+static int __kprobes uprobe_breakpoint_handler(struct pt_regs *regs,
+ unsigned int esr)
+{
+ if (user_mode(regs) && uprobe_pre_sstep_notifier(regs))
+ return DBG_HOOK_HANDLED;
+
+ return DBG_HOOK_ERROR;
+}
+
+static int __kprobes uprobe_single_step_handler(struct pt_regs *regs,
+ unsigned int esr)
+{
+ struct uprobe_task *utask = current->utask;
+
+ if (user_mode(regs)) {
+ WARN_ON(utask &&
+ (instruction_pointer(regs) != utask->xol_vaddr + 4));
+
+ if (uprobe_post_sstep_notifier(regs))
+ return DBG_HOOK_HANDLED;
+ }
+
+ return DBG_HOOK_ERROR;
+}
+
+/* uprobe breakpoint handler hook */
+static struct break_hook uprobes_break_hook = {
+ .esr_mask = BRK64_ESR_MASK,
+ .esr_val = BRK64_ESR_UPROBES,
+ .fn = uprobe_breakpoint_handler,
+};
+
+/* uprobe single step handler hook */
+static struct step_hook uprobes_step_hook = {
+ .fn = uprobe_single_step_handler,
+};
+
+static int __init arch_init_uprobes(void)
+{
+ register_break_hook(&uprobes_break_hook);
+ register_step_hook(&uprobes_step_hook);
+
+ return 0;
+}
+
+device_initcall(arch_init_uprobes);
diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index 9a4dd6f39cfb..04fe6671907e 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -55,6 +55,12 @@ static void flush_ptrace_access(struct vm_area_struct *vma, struct page *page,
__flush_ptrace_access(page, uaddr, kaddr, len);
}

+void flush_uprobe_xol_access(struct page *page, unsigned long uaddr,
+ void *kaddr, unsigned long len)
+{
+ __flush_ptrace_access(page, uaddr, kaddr, len);
+}
+
/*
* Copy user data from/to a page which is mapped into a different processes
* address space. Really, we want to allow our "user space" model to handle
--
2.1.0

2015-06-18 04:00:40

by Pratyush Anand

[permalink] [raw]
Subject: [RFC PATCH V2 10/10] arm64: uprobes: check conditions before simulating instructions

From: Steve Capper <[email protected]>

Currently uprobes simply simulates any instruction that it cannot
execute in place. This can lead to unpredictable behaviour if the
execution condition fails and the instruction would not otherwise
have been executed.

This patch adds the condition check.

Signed-off-by: Steve Capper <[email protected]>
---
arch/arm64/kernel/uprobes.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/uprobes.c b/arch/arm64/kernel/uprobes.c
index 2cc9114deac2..a6d12b81e9ae 100644
--- a/arch/arm64/kernel/uprobes.c
+++ b/arch/arm64/kernel/uprobes.c
@@ -119,15 +119,22 @@ bool arch_uprobe_skip_sstep(struct arch_uprobe *auprobe, struct pt_regs *regs)
{
kprobe_opcode_t insn;
unsigned long addr;
+ struct arch_specific_insn *ainsn;

if (!auprobe->simulate)
return false;

insn = *(kprobe_opcode_t *)(&auprobe->insn[0]);
addr = instruction_pointer(regs);
+ ainsn = &auprobe->ainsn;
+
+ if (ainsn->handler) {
+ if (!ainsn->check_condn || ainsn->check_condn(insn, ainsn, regs))
+ ainsn->handler(insn, addr, regs);
+ else
+ instruction_pointer_set(regs, instruction_pointer(regs) + 4);
+ }

- if (auprobe->ainsn.handler)
- auprobe->ainsn.handler(insn, addr, regs);

return true;
}
--
2.1.0

2015-08-03 11:09:13

by Will Deacon

Subject: Re: [RFC PATCH V2 00/10] ARM64: Uprobe support added

On Thu, Jun 18, 2015 at 04:58:47AM +0100, Pratyush Anand wrote:
> These patches have been prepared on top of ARM64 kprobe v7 patches [1].
> Keeping as RFC, because kprobe-v7 still need to be ACKed.

Unfortunately, I've not seen any movement on the kprobes patches recently,
so this is out of the picture for 4.3.

Dave: did you plan to respin your series after Steve's comments on v7?

Will

2015-08-03 13:44:01

by David Long

Subject: Re: [RFC PATCH V2 00/10] ARM64: Uprobe support added

On 08/03/15 07:09, Will Deacon wrote:
> On Thu, Jun 18, 2015 at 04:58:47AM +0100, Pratyush Anand wrote:
>> These patches have been prepared on top of ARM64 kprobe v7 patches [1].
>> Keeping as RFC, because kprobe-v7 still need to be ACKed.
>
> Unfortunately, I've not seen any movement on the kprobes patches recently,
> so this is out of the picture for 4.3.
>
> Dave: did you plan to respin your series after Steve's comments on v7?
>
> Will
>

Catalin's comments last week helped me clarify in my own mind that we
should be able to simplify the register pushing somewhat. Will, does that
also make sense to you?

So, also taking into consideration Steve's suggestions about
reorganizing the asm code, I am now working on a new revision.

-dl

2015-08-03 13:45:38

by David Long

Subject: Re: [RFC PATCH V2 00/10] ARM64: Uprobe support added

On 08/03/15 09:43, David Long wrote:
> On 08/03/15 07:09, Will Deacon wrote:
>> On Thu, Jun 18, 2015 at 04:58:47AM +0100, Pratyush Anand wrote:
>>> These patches have been prepared on top of ARM64 kprobe v7 patches [1].
>>> Keeping as RFC, because kprobe-v7 still need to be ACKed.
>>
>> Unfortunately, I've not seen any movement on the kprobes patches
>> recently,
>> so this is out of the picture for 4.3.
>>
>> Dave: did you plan to respin your series after Steve's comments on v7?
>>
>> Will
>>
>
> Catalin's comments last week helped me clarify in my own mind that we
> should be able to simplfy the register pushing somewhat. Will, does that
> also make sense to you?
>

^^ That question was aimed at Will Cohen, just to clarify.

> So, also taking into consideration Steve's suggestions about
> reorganizing the asm code, I am now working on a new revision.
>
> -dl
>

2015-08-04 15:07:48

by William Cohen

Subject: Re: [RFC PATCH V2 00/10] ARM64: Uprobe support added

On 08/03/2015 09:45 AM, David Long wrote:
> On 08/03/15 09:43, David Long wrote:
>> On 08/03/15 07:09, Will Deacon wrote:
>>> On Thu, Jun 18, 2015 at 04:58:47AM +0100, Pratyush Anand wrote:
>>>> These patches have been prepared on top of ARM64 kprobe v7 patches [1].
>>>> Keeping as RFC, because kprobe-v7 still need to be ACKed.
>>>
>>> Unfortunately, I've not seen any movement on the kprobes patches
>>> recently,
>>> so this is out of the picture for 4.3.
>>>
>>> Dave: did you plan to respin your series after Steve's comments on v7?
>>>
>>> Will
>>>
>>
>> Catalin's comments last week helped me clarify in my own mind that we
>> should be able to simplfy the register pushing somewhat. Will, does that
>> also make sense to you?
>>
>
> ^^ That question was aimed at Will Cohen, just to clarify.

Hi Dave,

The suggestion to only save the caller-saved registers would make the save and restore code shorter and faster. There would be cases where the reduced save and restore could be visible to systemtap scripts using the print_reg() and register() functions, such as when a call to a return-probed function is followed by a call to a function with a kprobe on entry. I worry about cases where code is broken (not following the calling conventions, maybe assembly code functions) and using a return probe causes things to fail in a different way, making it more difficult to diagnose the problem. There are fewer than 50 "*.S" files for arm64, but I prefer to minimize the chances that the return probe changes something.

There is room for improvement in the patch. The magic offset numbers in the save and restore code could be eliminated.

-Will

>
>> So, also taking into consideration Steve's suggestions about
>> reorganizing the asm code, I am now working on a new revision.
>>
>> -dl
>>
>

2015-08-04 15:36:22

by David Long

Subject: Re: [RFC PATCH V2 00/10] ARM64: Uprobe support added

On 08/04/15 11:07, William Cohen wrote:
> On 08/03/2015 09:45 AM, David Long wrote:
>> On 08/03/15 09:43, David Long wrote:
>>> On 08/03/15 07:09, Will Deacon wrote:
>>>> On Thu, Jun 18, 2015 at 04:58:47AM +0100, Pratyush Anand wrote:
>>>>> These patches have been prepared on top of ARM64 kprobe v7 patches [1].
>>>>> Keeping as RFC, because kprobe-v7 still need to be ACKed.
>>>>
>>>> Unfortunately, I've not seen any movement on the kprobes patches
>>>> recently,
>>>> so this is out of the picture for 4.3.
>>>>
>>>> Dave: did you plan to respin your series after Steve's comments on v7?
>>>>
>>>> Will
>>>>
>>>
>>> Catalin's comments last week helped me clarify in my own mind that we
>>> should be able to simplfy the register pushing somewhat. Will, does that
>>> also make sense to you?
>>>
>>
>> ^^ That question was aimed at Will Cohen, just to clarify.
>
> Hi Dave,
>
> The suggestion to only save the caller saved registers would make the save and restore code shorter and faster. There would be cases reduced saved and restore could be visible to systemtap scripts using print_reg() and register() functions such when a call to a return probed function is followed by a call to a function with a kprobe on entry. I worry about cases where code is broken (not following the calling conventions maybe assembly code functions) and using a return probe causes things to fail in a different way making it more difficult to diagnose the problem. There are less than 50 "*.S" files for arm64, but I prefer to minimize the chances that the return probe changes something.
>

We will always have to save all the registers since a pointer to the
saved ptregs struct is passed down to the kprobes/events processing code
and used for reporting (or processing in user-written modules) the
register contents at the probe point. So we'd only be talking about not
*restoring* some of the registers. We're talking about a fairly small
amount of code now. It would be nice to get some kind of agreement on
this ASAP. In the end I'll have to defer to the subsystem maintainers.

> There is room for improvements for the patch. The magic offset numbers in the save restore code could be eliminated.
>
> -Will
>
>>
>>> So, also taking into consideration Steve's suggestions about
>>> reorganizing the asm code, I am now working on a new revision.
>>>
>>> -dl
>>>
>>
>

-dl

2015-08-04 15:43:04

by David Long

Subject: Re: [RFC PATCH V2 00/10] ARM64: Uprobe support added

On 08/04/15 11:07, William Cohen wrote:
> On 08/03/2015 09:45 AM, David Long wrote:
>> On 08/03/15 09:43, David Long wrote:
>>> On 08/03/15 07:09, Will Deacon wrote:
>>>> On Thu, Jun 18, 2015 at 04:58:47AM +0100, Pratyush Anand wrote:
>>>>> These patches have been prepared on top of ARM64 kprobe v7 patches [1].
>>>>> Keeping as RFC, because kprobe-v7 still need to be ACKed.
>>>>
>>>> Unfortunately, I've not seen any movement on the kprobes patches
>>>> recently,
>>>> so this is out of the picture for 4.3.
>>>>
>>>> Dave: did you plan to respin your series after Steve's comments on v7?
>>>>
>>>> Will
>>>>
>>>
>>> Catalin's comments last week helped me clarify in my own mind that we
>>> should be able to simplfy the register pushing somewhat. Will, does that
>>> also make sense to you?
>>>
>>
>> ^^ That question was aimed at Will Cohen, just to clarify.
>
> Hi Dave,
>
> The suggestion to only save the caller saved registers would make the save and restore code shorter and faster. There would be cases reduced saved and restore could be visible to systemtap scripts using print_reg() and register() functions such when a call to a return probed function is followed by a call to a function with a kprobe on entry. I worry about cases where code is broken (not following the calling conventions maybe assembly code functions) and using a return probe causes things to fail in a different way making it more difficult to diagnose the problem. There are less than 50 "*.S" files for arm64, but I prefer to minimize the chances that the return probe changes something.
>

Perhaps the best argument for leaving it as-is is so that user-written
kprobe modules can alter the values of these registers, although
specifically altering callee-saved registers on function return does
not sound as useful as, say, modifying x0.

> There is room for improvements for the patch. The magic offset numbers in the save restore code could be eliminated.
>
> -Will
>
>>
>>> So, also taking into consideration Steve's suggestions about
>>> reorganizing the asm code, I am now working on a new revision.
>>>
>>> -dl
>>>
>>
>

-dl