2017-04-19 12:52:53

by Naveen N. Rao

Subject: [PATCH v4 0/6] powerpc: add support for KPROBES_ON_FTRACE

v3:
https://www.mail-archive.com/[email protected]/msg116800.html

For v4, patch 5/6 has been newly moved into this series. It has also
been updated to use strlcat() instead of strncat(). There are no other
changes.
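
As an aside (an illustration only, not part of any patch): strncat()'s
size argument bounds the number of bytes appended from the source, not
the size of the destination buffer, so the strlcat() form in patch 5/6
is the safer idiom for building the dot-symbol name. A self-contained
sketch of that construction:

#include <linux/string.h>

/* mirrors the dot_name construction in patch 5/6 */
static void build_dot_name(const char *name, char *dot_name, size_t size)
{
	dot_name[0] = '.';
	dot_name[1] = '\0';
	/* the third argument is the total size of the destination buffer */
	strlcat(dot_name, name, size);
}

Callers pass a buffer of 1 + KSYM_NAME_LEN bytes, as the patch does;
strncat(dot_name, name, n) would instead treat n as the number of bytes
to append from name, irrespective of the room left in dot_name.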

Also, though patch 3/6 is generic, it needs to be carried in this
series as we crash on powerpc without that patch.


- Naveen


Masami Hiramatsu (1):
kprobes: Skip preparing optprobe if the probe is ftrace-based

Naveen N. Rao (5):
powerpc: ftrace: minor cleanup
powerpc: ftrace: restore LR from pt_regs
powerpc: kprobes: add support for KPROBES_ON_FTRACE
powerpc: introduce a new helper to obtain function entry points
powerpc: kprobes: prefer ftrace when probing function entry

.../debug/kprobes-on-ftrace/arch-support.txt | 2 +-
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/code-patching.h | 37 ++++++++
arch/powerpc/include/asm/kprobes.h | 10 ++
arch/powerpc/kernel/Makefile | 3 +
arch/powerpc/kernel/entry_64.S | 19 ++--
arch/powerpc/kernel/kprobes-ftrace.c | 104 +++++++++++++++++++++
arch/powerpc/kernel/kprobes.c | 25 ++++-
arch/powerpc/kernel/optprobes.c | 6 +-
kernel/kprobes.c | 11 ++-
10 files changed, 199 insertions(+), 19 deletions(-)
create mode 100644 arch/powerpc/kernel/kprobes-ftrace.c

--
2.12.1


2017-04-19 12:53:02

by Naveen N. Rao

Subject: [PATCH v4 4/6] powerpc: kprobes: add support for KPROBES_ON_FTRACE

Allow kprobes to be placed on ftrace _mcount() call sites. This
optimization avoids the use of a trap by riding on the ftrace
infrastructure.

This depends on HAVE_DYNAMIC_FTRACE_WITH_REGS, which in turn depends on
MPROFILE_KERNEL, which is currently only enabled on powerpc64le with
newer toolchains.

Based on the x86 code by Masami.
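
As an illustration only (not part of this patch), a probe placed at a
function entry is registered through the usual kprobes API; with
KPROBES_ON_FTRACE, a probe that lands on the ftrace call site is handled
through kprobe_ftrace_handler() rather than a trap. The probed symbol
here is just an example:

#include <linux/module.h>
#include <linux/kprobes.h>

static int sample_pre_handler(struct kprobe *p, struct pt_regs *regs)
{
	pr_info("hit %pS, nip=0x%lx\n", p->addr, regs->nip);
	return 0;	/* let the probed function continue */
}

static struct kprobe sample_kp = {
	.symbol_name	= "_do_fork",
	.pre_handler	= sample_pre_handler,
};

static int __init sample_init(void)
{
	return register_kprobe(&sample_kp);
}

static void __exit sample_exit(void)
{
	unregister_kprobe(&sample_kp);
}

module_init(sample_init);
module_exit(sample_exit);
MODULE_LICENSE("GPL");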

Signed-off-by: Naveen N. Rao <[email protected]>
---
.../debug/kprobes-on-ftrace/arch-support.txt | 2 +-
arch/powerpc/Kconfig | 1 +
arch/powerpc/include/asm/kprobes.h | 10 ++
arch/powerpc/kernel/Makefile | 3 +
arch/powerpc/kernel/kprobes-ftrace.c | 104 +++++++++++++++++++++
arch/powerpc/kernel/kprobes.c | 8 +-
6 files changed, 126 insertions(+), 2 deletions(-)
create mode 100644 arch/powerpc/kernel/kprobes-ftrace.c

diff --git a/Documentation/features/debug/kprobes-on-ftrace/arch-support.txt b/Documentation/features/debug/kprobes-on-ftrace/arch-support.txt
index 40f44d041fb4..930430c6aef6 100644
--- a/Documentation/features/debug/kprobes-on-ftrace/arch-support.txt
+++ b/Documentation/features/debug/kprobes-on-ftrace/arch-support.txt
@@ -27,7 +27,7 @@
| nios2: | TODO |
| openrisc: | TODO |
| parisc: | TODO |
- | powerpc: | TODO |
+ | powerpc: | ok |
| s390: | TODO |
| score: | TODO |
| sh: | TODO |
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 9ff731f50a29..a55a776a1a43 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -142,6 +142,7 @@ config PPC
select HAVE_IRQ_EXIT_ON_IRQ_STACK
select HAVE_KERNEL_GZIP
select HAVE_KPROBES
+ select HAVE_KPROBES_ON_FTRACE
select HAVE_KRETPROBES
select HAVE_LIVEPATCH if HAVE_DYNAMIC_FTRACE_WITH_REGS
select HAVE_MEMBLOCK
diff --git a/arch/powerpc/include/asm/kprobes.h b/arch/powerpc/include/asm/kprobes.h
index a843884aafaf..a83821f33ea3 100644
--- a/arch/powerpc/include/asm/kprobes.h
+++ b/arch/powerpc/include/asm/kprobes.h
@@ -103,6 +103,16 @@ extern int kprobe_exceptions_notify(struct notifier_block *self,
extern int kprobe_fault_handler(struct pt_regs *regs, int trapnr);
extern int kprobe_handler(struct pt_regs *regs);
extern int kprobe_post_handler(struct pt_regs *regs);
+#ifdef CONFIG_KPROBES_ON_FTRACE
+extern int skip_singlestep(struct kprobe *p, struct pt_regs *regs,
+ struct kprobe_ctlblk *kcb);
+#else
+static inline int skip_singlestep(struct kprobe *p, struct pt_regs *regs,
+ struct kprobe_ctlblk *kcb)
+{
+ return 0;
+}
+#endif
#else
static inline int kprobe_handler(struct pt_regs *regs) { return 0; }
static inline int kprobe_post_handler(struct pt_regs *regs) { return 0; }
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index 811f441a125f..3e461637b64d 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -97,6 +97,7 @@ obj-$(CONFIG_BOOTX_TEXT) += btext.o
obj-$(CONFIG_SMP) += smp.o
obj-$(CONFIG_KPROBES) += kprobes.o
obj-$(CONFIG_OPTPROBES) += optprobes.o optprobes_head.o
+obj-$(CONFIG_KPROBES_ON_FTRACE) += kprobes-ftrace.o
obj-$(CONFIG_UPROBES) += uprobes.o
obj-$(CONFIG_PPC_UDBG_16550) += legacy_serial.o udbg_16550.o
obj-$(CONFIG_STACKTRACE) += stacktrace.o
@@ -150,6 +151,8 @@ GCOV_PROFILE_machine_kexec_32.o := n
UBSAN_SANITIZE_machine_kexec_32.o := n
GCOV_PROFILE_kprobes.o := n
UBSAN_SANITIZE_kprobes.o := n
+GCOV_PROFILE_kprobes-ftrace.o := n
+UBSAN_SANITIZE_kprobes-ftrace.o := n
UBSAN_SANITIZE_vdso.o := n

extra-$(CONFIG_PPC_FPU) += fpu.o
diff --git a/arch/powerpc/kernel/kprobes-ftrace.c b/arch/powerpc/kernel/kprobes-ftrace.c
new file mode 100644
index 000000000000..6c089d9757c9
--- /dev/null
+++ b/arch/powerpc/kernel/kprobes-ftrace.c
@@ -0,0 +1,104 @@
+/*
+ * Dynamic Ftrace based Kprobes Optimization
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ *
+ * Copyright (C) Hitachi Ltd., 2012
+ * Copyright 2016 Naveen N. Rao <[email protected]>
+ * IBM Corporation
+ */
+#include <linux/kprobes.h>
+#include <linux/ptrace.h>
+#include <linux/hardirq.h>
+#include <linux/preempt.h>
+#include <linux/ftrace.h>
+
+static nokprobe_inline
+int __skip_singlestep(struct kprobe *p, struct pt_regs *regs,
+ struct kprobe_ctlblk *kcb, unsigned long orig_nip)
+{
+ /*
+ * Emulate singlestep (and also recover regs->nip)
+ * as if there is a nop
+ */
+ regs->nip = (unsigned long)p->addr + MCOUNT_INSN_SIZE;
+ if (unlikely(p->post_handler)) {
+ kcb->kprobe_status = KPROBE_HIT_SSDONE;
+ p->post_handler(p, regs, 0);
+ }
+ __this_cpu_write(current_kprobe, NULL);
+ if (orig_nip)
+ regs->nip = orig_nip;
+ return 1;
+}
+
+int skip_singlestep(struct kprobe *p, struct pt_regs *regs,
+ struct kprobe_ctlblk *kcb)
+{
+ if (kprobe_ftrace(p))
+ return __skip_singlestep(p, regs, kcb, 0);
+ else
+ return 0;
+}
+NOKPROBE_SYMBOL(skip_singlestep);
+
+/* Ftrace callback handler for kprobes */
+void kprobe_ftrace_handler(unsigned long nip, unsigned long parent_nip,
+ struct ftrace_ops *ops, struct pt_regs *regs)
+{
+ struct kprobe *p;
+ struct kprobe_ctlblk *kcb;
+ unsigned long flags;
+
+ /* Disable irq for emulating a breakpoint and avoiding preempt */
+ local_irq_save(flags);
+ hard_irq_disable();
+
+ p = get_kprobe((kprobe_opcode_t *)nip);
+ if (unlikely(!p) || kprobe_disabled(p))
+ goto end;
+
+ kcb = get_kprobe_ctlblk();
+ if (kprobe_running()) {
+ kprobes_inc_nmissed_count(p);
+ } else {
+ unsigned long orig_nip = regs->nip;
+
+ /*
+ * On powerpc, NIP is *before* this instruction for the
+ * pre handler
+ */
+ regs->nip -= MCOUNT_INSN_SIZE;
+
+ __this_cpu_write(current_kprobe, p);
+ kcb->kprobe_status = KPROBE_HIT_ACTIVE;
+ if (!p->pre_handler || !p->pre_handler(p, regs))
+ __skip_singlestep(p, regs, kcb, orig_nip);
+ /*
+ * If pre_handler returns !0, it sets regs->nip and
+ * resets current kprobe.
+ */
+ }
+end:
+ local_irq_restore(flags);
+}
+NOKPROBE_SYMBOL(kprobe_ftrace_handler);
+
+int arch_prepare_kprobe_ftrace(struct kprobe *p)
+{
+ p->ainsn.insn = NULL;
+ p->ainsn.boostable = -1;
+ return 0;
+}
diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index 5c0a1ffcbcf9..df963c6e47b1 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ -191,7 +191,11 @@ static void __kprobes set_current_kprobe(struct kprobe *p, struct pt_regs *regs,
bool arch_function_offset_within_entry(unsigned long offset)
{
#ifdef PPC64_ELF_ABI_v2
+#ifdef CONFIG_KPROBES_ON_FTRACE
+ return offset <= 16;
+#else
return offset <= 8;
+#endif
#else
return !offset;
#endif
@@ -299,7 +303,9 @@ int __kprobes kprobe_handler(struct pt_regs *regs)
}
p = __this_cpu_read(current_kprobe);
if (p->break_handler && p->break_handler(p, regs)) {
- goto ss_probe;
+ if (!skip_singlestep(p, regs, kcb))
+ goto ss_probe;
+ ret = 1;
}
}
goto no_kprobe;
--
2.12.1

2017-04-19 12:52:58

by Naveen N. Rao

Subject: [PATCH v4 6/6] powerpc: kprobes: prefer ftrace when probing function entry

KPROBES_ON_FTRACE avoids much of the overhead of regular kprobes, as it
eliminates the need for a trap as well as the need to emulate or
single-step instructions.

Though OPTPROBES provides similar performance, optprobe trampoline slots
are limited. As such, when asked to probe at a function entry, default
to using the ftrace infrastructure.

With:
# cd /sys/kernel/debug/tracing
# echo 'p _do_fork' > kprobe_events

before patch:
# cat ../kprobes/list
c0000000000daf08 k _do_fork+0x8 [DISABLED]
c000000000044fc0 k kretprobe_trampoline+0x0 [OPTIMIZED]

and after patch:
# cat ../kprobes/list
c0000000000d074c k _do_fork+0xc [DISABLED][FTRACE]
c0000000000412b0 k kretprobe_trampoline+0x0 [OPTIMIZED]

Signed-off-by: Naveen N. Rao <[email protected]>
---
arch/powerpc/kernel/kprobes.c | 17 +++++++++++++++--
1 file changed, 15 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index df963c6e47b1..d5d6fbd1d788 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ -49,8 +49,21 @@ kprobe_opcode_t *kprobe_lookup_name(const char *name, unsigned int offset)
#ifdef PPC64_ELF_ABI_v2
/* PPC64 ABIv2 needs local entry point */
addr = (kprobe_opcode_t *)kallsyms_lookup_name(name);
- if (addr && !offset)
- addr = (kprobe_opcode_t *)ppc_function_entry(addr);
+ if (addr && !offset) {
+#ifdef CONFIG_KPROBES_ON_FTRACE
+ unsigned long faddr;
+ /*
+ * Per livepatch.h, ftrace location is always within the first
+ * 16 bytes of a function on powerpc with -mprofile-kernel.
+ */
+ faddr = ftrace_location_range((unsigned long)addr,
+ (unsigned long)addr + 16);
+ if (faddr)
+ addr = (kprobe_opcode_t *)faddr;
+ else
+#endif
+ addr = (kprobe_opcode_t *)ppc_function_entry(addr);
+ }
#elif defined(PPC64_ELF_ABI_v1)
/*
* 64bit powerpc ABIv1 uses function descriptors:
--
2.12.1

2017-04-19 12:52:59

by Naveen N. Rao

Subject: [PATCH v4 5/6] powerpc: introduce a new helper to obtain function entry points

kprobe_lookup_name() is specific to the kprobe subsystem and may not
always return the function entry point (once a subsequent patch adds
KPROBES_ON_FTRACE support). For looking up function entry points,
introduce a separate helper and use it in optprobes.c.
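
To make the distinction concrete, here is an illustrative sketch (not
part of this patch): on ABIv2, kallsyms_lookup_name() returns a
function's global entry point, while the new helper returns the local
entry point, which is what kernel code with a valid TOC should branch
to:

#include <linux/kallsyms.h>
#include <linux/printk.h>
#include <asm/code-patching.h>

static void compare_lookups(void)
{
	unsigned long global = kallsyms_lookup_name("emulate_step");
	unsigned long local = ppc_kallsyms_lookup_name("emulate_step");

	/* on ABIv2, local is typically global + 8, past the TOC setup */
	pr_info("global entry: 0x%lx, local entry: 0x%lx\n", global, local);
}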

Signed-off-by: Naveen N. Rao <[email protected]>
---
arch/powerpc/include/asm/code-patching.h | 41 ++++++++++++++++++++++++++++++++
arch/powerpc/kernel/optprobes.c | 6 ++---
2 files changed, 44 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/code-patching.h b/arch/powerpc/include/asm/code-patching.h
index 8ab937771068..abef812de7f8 100644
--- a/arch/powerpc/include/asm/code-patching.h
+++ b/arch/powerpc/include/asm/code-patching.h
@@ -12,6 +12,8 @@

#include <asm/types.h>
#include <asm/ppc-opcode.h>
+#include <linux/string.h>
+#include <linux/kallsyms.h>

/* Flags for create_branch:
* "b" == create_branch(addr, target, 0);
@@ -99,6 +101,45 @@ static inline unsigned long ppc_global_function_entry(void *func)
#endif
}

+/*
+ * Wrapper around kallsyms_lookup() to return function entry address:
+ * - For ABIv1, we lookup the dot variant.
+ * - For ABIv2, we return the local entry point.
+ */
+static inline unsigned long ppc_kallsyms_lookup_name(const char *name)
+{
+ unsigned long addr;
+#ifdef PPC64_ELF_ABI_v1
+ /* check for dot variant */
+ char dot_name[1 + KSYM_NAME_LEN];
+ bool dot_appended = false;
+
+ if (strnlen(name, KSYM_NAME_LEN) >= KSYM_NAME_LEN)
+ return 0;
+
+ if (name[0] != '.') {
+ dot_name[0] = '.';
+ dot_name[1] = '\0';
+ strlcat(dot_name, name, sizeof(dot_name));
+ dot_appended = true;
+ } else {
+ dot_name[0] = '\0';
+ strlcat(dot_name, name, sizeof(dot_name));
+ }
+ addr = kallsyms_lookup_name(dot_name);
+ if (!addr && dot_appended)
+ /* Let's try the original non-dot symbol lookup */
+ addr = kallsyms_lookup_name(name);
+#elif defined(PPC64_ELF_ABI_v2)
+ addr = kallsyms_lookup_name(name);
+ if (addr)
+ addr = ppc_function_entry((void *)addr);
+#else
+ addr = kallsyms_lookup_name(name);
+#endif
+ return addr;
+}
+
#ifdef CONFIG_PPC64
/*
* Some instruction encodings commonly used in dynamic ftracing
diff --git a/arch/powerpc/kernel/optprobes.c b/arch/powerpc/kernel/optprobes.c
index ce81a322251c..ec60ed0d4aad 100644
--- a/arch/powerpc/kernel/optprobes.c
+++ b/arch/powerpc/kernel/optprobes.c
@@ -243,10 +243,10 @@ int arch_prepare_optimized_kprobe(struct optimized_kprobe *op, struct kprobe *p)
/*
* 2. branch to optimized_callback() and emulate_step()
*/
- op_callback_addr = kprobe_lookup_name("optimized_callback", 0);
- emulate_step_addr = kprobe_lookup_name("emulate_step", 0);
+ op_callback_addr = (kprobe_opcode_t *)ppc_kallsyms_lookup_name("optimized_callback");
+ emulate_step_addr = (kprobe_opcode_t *)ppc_kallsyms_lookup_name("emulate_step");
if (!op_callback_addr || !emulate_step_addr) {
- WARN(1, "kprobe_lookup_name() failed\n");
+ WARN(1, "Unable to lookup optimized_callback()/emulate_step()\n");
goto error;
}

--
2.12.1

2017-04-19 12:53:41

by Naveen N. Rao

Subject: [PATCH v4 3/6] kprobes: Skip preparing optprobe if the probe is ftrace-based

From: Masami Hiramatsu <[email protected]>

Skip preparing an optprobe if the probe is ftrace-based, since it must
not be optimized anyway (it is already optimized by ftrace).

Tested-by: Naveen N. Rao <[email protected]>
Signed-off-by: Masami Hiramatsu <[email protected]>
---
kernel/kprobes.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index bb86681c8a10..50c48c5b6490 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -743,13 +743,20 @@ static void kill_optimized_kprobe(struct kprobe *p)
arch_remove_optimized_kprobe(op);
}

+static inline
+void __prepare_optimized_kprobe(struct optimized_kprobe *op, struct kprobe *p)
+{
+ if (!kprobe_ftrace(p))
+ arch_prepare_optimized_kprobe(op, p);
+}
+
/* Try to prepare optimized instructions */
static void prepare_optimized_kprobe(struct kprobe *p)
{
struct optimized_kprobe *op;

op = container_of(p, struct optimized_kprobe, kp);
- arch_prepare_optimized_kprobe(op, p);
+ __prepare_optimized_kprobe(op, p);
}

/* Allocate new optimized_kprobe and try to prepare optimized instructions */
@@ -763,7 +770,7 @@ static struct kprobe *alloc_aggr_kprobe(struct kprobe *p)

INIT_LIST_HEAD(&op->list);
op->kp.addr = p->addr;
- arch_prepare_optimized_kprobe(op, p);
+ __prepare_optimized_kprobe(op, p);

return &op->kp;
}
--
2.12.1

2017-04-19 12:53:59

by Naveen N. Rao

Subject: [PATCH v4 2/6] powerpc: ftrace: restore LR from pt_regs

Pass the real LR to the ftrace handler. This is needed by the
KPROBES_ON_FTRACE pre handlers.

Also, with KPROBES_ON_FTRACE, the link register may be updated by the
pre handlers or by a registered kretprobe. Honor the updated LR by
restoring it from pt_regs, rather than from the stack save area.

Live patch and function graph continue to work fine with this change.
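
As an illustration only (not part of this patch), a pre handler on an
ftrace-based probe now sees the probed function's return address in
pt_regs->link. Any update to that field, whether by the handler or by
the kretprobe machinery (which substitutes its trampoline there), takes
effect because the ftrace caller reloads LR from pt_regs:

#include <linux/kprobes.h>
#include <linux/printk.h>

static int lr_pre_handler(struct kprobe *p, struct pt_regs *regs)
{
	/* regs->link is the address the probed function will return to */
	pr_info("probed %pS, returning to %pS\n",
		p->addr, (void *)regs->link);
	return 0;
}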

Signed-off-by: Naveen N. Rao <[email protected]>
---
arch/powerpc/kernel/entry_64.S | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 8fd8718722a1..744b2f91444a 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -1248,9 +1248,10 @@ _GLOBAL(ftrace_caller)

/* Get the _mcount() call site out of LR */
mflr r7
- /* Save it as pt_regs->nip & pt_regs->link */
+ /* Save it as pt_regs->nip */
std r7, _NIP(r1)
- std r7, _LINK(r1)
+ /* Save the read LR in pt_regs->link */
+ std r0, _LINK(r1)

/* Save callee's TOC in the ABI compliant location */
std r2, 24(r1)
@@ -1297,16 +1298,16 @@ ftrace_call:
REST_8GPRS(16,r1)
REST_8GPRS(24,r1)

+ /* Restore possibly modified LR */
+ ld r0, _LINK(r1)
+ mtlr r0
+
/* Restore callee's TOC */
ld r2, 24(r1)

/* Pop our stack frame */
addi r1, r1, SWITCH_FRAME_SIZE

- /* Restore original LR for return to B */
- ld r0, LRSAVE(r1)
- mtlr r0
-
#ifdef CONFIG_LIVEPATCH
/* Based on the cmpd above, if the NIP was altered handle livepatch */
bne- livepatch_handler
--
2.12.1

2017-04-19 12:54:16

by Naveen N. Rao

Subject: [PATCH v4 1/6] powerpc: ftrace: minor cleanup

Move the stack setup and teardown code into ftrace_graph_caller(). This
way, we don't incur the cost of setting it up unless the function graph
tracer is enabled for this function.

Also, remove the extraneous LR restore code after the function graph
stub. LR has already been restored, and neither livepatch_handler() nor
ftrace_graph_caller() returns here.

Signed-off-by: Naveen N. Rao <[email protected]>
---
arch/powerpc/kernel/entry_64.S | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 6432d4bf08c8..8fd8718722a1 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -1313,16 +1313,12 @@ ftrace_call:
#endif

#ifdef CONFIG_FUNCTION_GRAPH_TRACER
- stdu r1, -112(r1)
.globl ftrace_graph_call
ftrace_graph_call:
b ftrace_graph_stub
_GLOBAL(ftrace_graph_stub)
- addi r1, r1, 112
#endif

- ld r0,LRSAVE(r1) /* restore callee's lr at _mcount site */
- mtlr r0
bctr /* jump after _mcount site */
#endif /* CC_USING_MPROFILE_KERNEL */

@@ -1446,6 +1442,7 @@ _GLOBAL(ftrace_stub)
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
#ifndef CC_USING_MPROFILE_KERNEL
_GLOBAL(ftrace_graph_caller)
+ stdu r1, -112(r1)
/* load r4 with local address */
ld r4, 128(r1)
subi r4, r4, MCOUNT_INSN_SIZE
@@ -1471,6 +1468,7 @@ _GLOBAL(ftrace_graph_caller)

#else /* CC_USING_MPROFILE_KERNEL */
_GLOBAL(ftrace_graph_caller)
+ stdu r1, -112(r1)
/* with -mprofile-kernel, parameter regs are still alive at _mcount */
std r10, 104(r1)
std r9, 96(r1)
--
2.12.1

2017-04-23 11:53:37

by Michael Ellerman

Subject: Re: [v4,1/6] powerpc: ftrace: minor cleanup

On Wed, 2017-04-19 at 12:52:23 UTC, "Naveen N. Rao" wrote:
> Move the stack setup and teardown code to the ftrace_graph_caller().
> This way, we don't incur the cost of setting it up unless function graph
> is enabled for this function.
>
> Also, remove the extraneous LR restore code after the function graph
> stub. LR has previously been restored and neither livepatch_handler()
> nor ftrace_graph_caller() return back here.
>
> Signed-off-by: Naveen N. Rao <[email protected]>

Applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/700e64377c2c8e2406e9c4c1632e2e

cheers

2017-04-24 22:47:42

by Michael Ellerman

Subject: Re: [v4, 5/6] powerpc: introduce a new helper to obtain function entry points

On Wed, 2017-04-19 at 12:52:27 UTC, "Naveen N. Rao" wrote:
> kprobe_lookup_name() is specific to the kprobe subsystem and may not
> always return the function entry point (in a subsequent patch for
> KPROBES_ON_FTRACE). For looking up function entry points, introduce a
> separate helper and use the same in optprobes.c
>
> Signed-off-by: Naveen N. Rao <[email protected]>

Applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/1b32cd1715378c9a3856df4a80920f

cheers

2017-04-24 22:47:56

by Michael Ellerman

Subject: Re: [v4, 3/6] kprobes: Skip preparing optprobe if the probe is ftrace-based

On Wed, 2017-04-19 at 12:52:25 UTC, "Naveen N. Rao" wrote:
> From: Masami Hiramatsu <[email protected]>
>
> Skip preparing optprobe if the probe is ftrace-based, since anyway, it
> must not be optimized (or already optimized by ftrace).
>
> Tested-by: Naveen N. Rao <[email protected]>
> Signed-off-by: Masami Hiramatsu <[email protected]>

Applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/a460246c70d1ac25a0717112e7a167

cheers

2017-04-24 22:48:30

by Michael Ellerman

Subject: Re: [v4,2/6] powerpc: ftrace: restore LR from pt_regs

On Wed, 2017-04-19 at 12:52:24 UTC, "Naveen N. Rao" wrote:
> Pass the real LR to the ftrace handler. This is needed for
> KPROBES_ON_FTRACE for the pre handlers.
>
> Also, with KPROBES_ON_FTRACE, the link register may be updated by the
> pre handlers or by a registered kretprobe. Honor the updated LR by restoring
> it from pt_regs, rather than from the stack save area.
>
> Live patch and function graph continue to work fine with this change.
>
> Signed-off-by: Naveen N. Rao <[email protected]>

Applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/2f59be5b970b503ca8db1cb723b155

cheers

2017-04-24 22:48:40

by Michael Ellerman

Subject: Re: [v4,6/6] powerpc: kprobes: prefer ftrace when probing function entry

On Wed, 2017-04-19 at 12:52:28 UTC, "Naveen N. Rao" wrote:
> KPROBES_ON_FTRACE avoids much of the overhead with regular kprobes as it
> eliminates the need for a trap, as well as the need to emulate or
> single-step instructions.
>
> Though OPTPROBES provides us with similar performance, we have limited
> optprobes trampoline slots. As such, when asked to probe at a function
> entry, default to using the ftrace infrastructure.
>
> With:
> # cd /sys/kernel/debug/tracing
> # echo 'p _do_fork' > kprobe_events
>
> before patch:
> # cat ../kprobes/list
> c0000000000daf08 k _do_fork+0x8 [DISABLED]
> c000000000044fc0 k kretprobe_trampoline+0x0 [OPTIMIZED]
>
> and after patch:
> # cat ../kprobes/list
> c0000000000d074c k _do_fork+0xc [DISABLED][FTRACE]
> c0000000000412b0 k kretprobe_trampoline+0x0 [OPTIMIZED]
>
> Signed-off-by: Naveen N. Rao <[email protected]>

Applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/24bd909e94776ecce95291bff910f1

cheers

2017-04-24 22:48:21

by Michael Ellerman

Subject: Re: [v4,4/6] powerpc: kprobes: add support for KPROBES_ON_FTRACE

On Wed, 2017-04-19 at 12:52:26 UTC, "Naveen N. Rao" wrote:
> Allow kprobes to be placed on ftrace _mcount() call sites. This
> optimization avoids the use of a trap, by riding on ftrace
> infrastructure.
>
> This depends on HAVE_DYNAMIC_FTRACE_WITH_REGS which depends on
> MPROFILE_KERNEL, which is only currently enabled on powerpc64le with
> newer toolchains.
>
> Based on the x86 code by Masami.
>
> Signed-off-by: Naveen N. Rao <[email protected]>

Applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/ead514d5fb30a0889d51c0f0e35c3e

cheers