2019-08-20 03:51:47

by Jisheng Zhang

Subject: [PATCH v2 0/3] arm64: KPROBES_ON_FTRACE

KPROBES_ON_FTRACE avoids much of the overhead of regular kprobes as it
eliminates the need for a trap, as well as the need to emulate or
single-step instructions.
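
For context, registering a kprobe looks roughly like the sketch below
(standard kprobes module API; the demo_* names are illustrative). With
KPROBES_ON_FTRACE, a probe sitting on a function's ftrace call site is
dispatched from the ftrace trampoline instead of via a breakpoint:

#include <linux/kprobes.h>
#include <linux/module.h>

static int demo_pre(struct kprobe *p, struct pt_regs *regs)
{
	pr_info("probe hit at %pS\n", (void *)instruction_pointer(regs));
	return 0;	/* 0: continue execution normally */
}

static struct kprobe demo_kp = {
	.symbol_name	= "_do_fork",	/* any ftrace-able function */
	.pre_handler	= demo_pre,
};

static int __init demo_init(void)
{
	return register_kprobe(&demo_kp);
}

static void __exit demo_exit(void)
{
	unregister_kprobe(&demo_kp);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");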

Applied after arm64 FTRACE_WITH_REGS:
http://lists.infradead.org/pipermail/linux-arm-kernel/2019-August/674404.html

Changes since v1:
- split out kprobes/x86: use instruction_pointer and instruction_pointer_set
  as patch 1
- add Masami's ACK to patch 1
- add some description about KPROBES_ON_FTRACE and why we need it on
arm64
- correct the log before the patch
- remove the consolidation patch and leave it as a TODO
- only adjust kprobe's addr when KPROBE_FLAG_FTRACE is set
- if KPROBES_ON_FTRACE, ftrace_call_adjust() the kprobe's addr before
calling ftrace_location()
- update the kprobes-on-ftrace/arch-support.txt in doc

Jisheng Zhang (3):
kprobes/x86: use instruction_pointer and instruction_pointer_set
kprobes: adjust kprobe addr for KPROBES_ON_FTRACE
arm64: implement KPROBES_ON_FTRACE

.../debug/kprobes-on-ftrace/arch-support.txt | 2 +-
arch/arm64/Kconfig | 1 +
arch/arm64/kernel/probes/Makefile | 1 +
arch/arm64/kernel/probes/ftrace.c | 60 +++++++++++++++++++
arch/x86/kernel/kprobes/ftrace.c | 9 +--
kernel/kprobes.c | 10 +++-
6 files changed, 75 insertions(+), 8 deletions(-)
create mode 100644 arch/arm64/kernel/probes/ftrace.c

--
2.23.0.rc1


2019-08-20 03:54:16

by Jisheng Zhang

Subject: [PATCH v2 1/3] kprobes/x86: use instruction_pointer and instruction_pointer_set

This is to make the x86 kprobe_ftrace_handler() more common so that
the code could be reused in future.

Signed-off-by: Jisheng Zhang <[email protected]>
Acked-by: Masami Hiramatsu <[email protected]>
---
arch/x86/kernel/kprobes/ftrace.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/kprobes/ftrace.c b/arch/x86/kernel/kprobes/ftrace.c
index 681a4b36e9bb..c2ad0b9259ca 100644
--- a/arch/x86/kernel/kprobes/ftrace.c
+++ b/arch/x86/kernel/kprobes/ftrace.c
@@ -28,9 +28,9 @@ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
if (kprobe_running()) {
kprobes_inc_nmissed_count(p);
} else {
- unsigned long orig_ip = regs->ip;
+ unsigned long orig_ip = instruction_pointer(regs);
/* Kprobe handler expects regs->ip = ip + 1 as breakpoint hit */
- regs->ip = ip + sizeof(kprobe_opcode_t);
+ instruction_pointer_set(regs, ip + sizeof(kprobe_opcode_t));

__this_cpu_write(current_kprobe, p);
kcb->kprobe_status = KPROBE_HIT_ACTIVE;
@@ -39,12 +39,13 @@ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
* Emulate singlestep (and also recover regs->ip)
* as if there is a 5byte nop
*/
- regs->ip = (unsigned long)p->addr + MCOUNT_INSN_SIZE;
+ instruction_pointer_set(regs,
+ (unsigned long)p->addr + MCOUNT_INSN_SIZE);
if (unlikely(p->post_handler)) {
kcb->kprobe_status = KPROBE_HIT_SSDONE;
p->post_handler(p, regs, 0);
}
- regs->ip = orig_ip;
+ instruction_pointer_set(regs, orig_ip);
}
/*
* If pre_handler returns !0, it changes regs->ip. We have to
--
2.23.0.rc1
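
(For reference: the accessors this patch switches to are the generic
ptrace helpers each arch already provides; a sketch of the x86 versions,
see asm/ptrace.h for the real definitions:

static inline unsigned long instruction_pointer(struct pt_regs *regs)
{
	return regs->ip;	/* regs->pc on arm64 */
}

static inline void instruction_pointer_set(struct pt_regs *regs,
					   unsigned long val)
{
	regs->ip = val;		/* regs->pc on arm64 */
}

Since both arches implement the same interface, the handler body no
longer touches regs->ip directly and can be cloned for arm64.)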

2019-08-20 03:55:56

by Jisheng Zhang

Subject: [PATCH v2 2/3] kprobes: adjust kprobe addr for KPROBES_ON_FTRACE

For KPROBES_ON_FTRACE case, we need to adjust the kprobe's addr
correspondingly.

Signed-off-by: Jisheng Zhang <[email protected]>
---
kernel/kprobes.c | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 9873fc627d61..3fd2f68644da 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -1484,15 +1484,19 @@ static inline int check_kprobe_rereg(struct kprobe *p)

int __weak arch_check_ftrace_location(struct kprobe *p)
{
- unsigned long ftrace_addr;
+ unsigned long ftrace_addr, addr = (unsigned long)p->addr;

- ftrace_addr = ftrace_location((unsigned long)p->addr);
+#ifdef CONFIG_KPROBES_ON_FTRACE
+ addr = ftrace_call_adjust(addr);
+#endif
+ ftrace_addr = ftrace_location(addr);
if (ftrace_addr) {
#ifdef CONFIG_KPROBES_ON_FTRACE
/* Given address is not on the instruction boundary */
- if ((unsigned long)p->addr != ftrace_addr)
+ if (addr != ftrace_addr)
return -EILSEQ;
p->flags |= KPROBE_FLAG_FTRACE;
+ p->addr = (kprobe_opcode_t *)addr;
#else /* !CONFIG_KPROBES_ON_FTRACE */
return -EINVAL;
#endif
--
2.23.0.rc1

2019-08-20 03:57:20

by Jisheng Zhang

Subject: [PATCH v2 3/3] arm64: implement KPROBES_ON_FTRACE

KPROBES_ON_FTRACE avoids much of the overhead of regular kprobes as it
eliminates the need for a trap, as well as the need to emulate or
single-step instructions.

This patch implements KPROBES_ON_FTRACE for arm64.

Tested on the Berlin arm64 platform.

~ # mount -t debugfs debugfs /sys/kernel/debug/
~ # cd /sys/kernel/debug/
/sys/kernel/debug # echo 'p _do_fork' > tracing/kprobe_events

before the patch:

/sys/kernel/debug # cat kprobes/list
ffffff801009fe28 k _do_fork+0x0 [DISABLED]

after the patch:

/sys/kernel/debug # cat kprobes/list
ffffff801009ff54 k _do_fork+0x4 [DISABLED][FTRACE]
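
Note: the move from _do_fork+0x0 to _do_fork+0x4 is expected. With
-fpatchable-function-entry=2 every function begins with two NOPs, and
ftrace_call_adjust() points at the second one (illustrative sketch):

_do_fork:
	nop	// +0x0: becomes "mov x9, x30" when the site is enabled
	nop	// +0x4: becomes "bl ftrace_regs_caller" -- the kprobe
		//       now lands here
	...	// function body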

Signed-off-by: Jisheng Zhang <[email protected]>
---
.../debug/kprobes-on-ftrace/arch-support.txt | 2 +-
arch/arm64/Kconfig | 1 +
arch/arm64/kernel/probes/Makefile | 1 +
arch/arm64/kernel/probes/ftrace.c | 60 +++++++++++++++++++
4 files changed, 63 insertions(+), 1 deletion(-)
create mode 100644 arch/arm64/kernel/probes/ftrace.c

diff --git a/Documentation/features/debug/kprobes-on-ftrace/arch-support.txt b/Documentation/features/debug/kprobes-on-ftrace/arch-support.txt
index 68f266944d5f..e8358a38981c 100644
--- a/Documentation/features/debug/kprobes-on-ftrace/arch-support.txt
+++ b/Documentation/features/debug/kprobes-on-ftrace/arch-support.txt
@@ -9,7 +9,7 @@
| alpha: | TODO |
| arc: | TODO |
| arm: | TODO |
- | arm64: | TODO |
+ | arm64: | ok |
| c6x: | TODO |
| csky: | TODO |
| h8300: | TODO |
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 663392d1eae2..928700f15e23 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -167,6 +167,7 @@ config ARM64
select HAVE_STACKPROTECTOR
select HAVE_SYSCALL_TRACEPOINTS
select HAVE_KPROBES
+ select HAVE_KPROBES_ON_FTRACE
select HAVE_KRETPROBES
select HAVE_GENERIC_VDSO
select IOMMU_DMA if IOMMU_SUPPORT
diff --git a/arch/arm64/kernel/probes/Makefile b/arch/arm64/kernel/probes/Makefile
index 8e4be92e25b1..4020cfc66564 100644
--- a/arch/arm64/kernel/probes/Makefile
+++ b/arch/arm64/kernel/probes/Makefile
@@ -4,3 +4,4 @@ obj-$(CONFIG_KPROBES) += kprobes.o decode-insn.o \
simulate-insn.o
obj-$(CONFIG_UPROBES) += uprobes.o decode-insn.o \
simulate-insn.o
+obj-$(CONFIG_KPROBES_ON_FTRACE) += ftrace.o
diff --git a/arch/arm64/kernel/probes/ftrace.c b/arch/arm64/kernel/probes/ftrace.c
new file mode 100644
index 000000000000..52901ffff570
--- /dev/null
+++ b/arch/arm64/kernel/probes/ftrace.c
@@ -0,0 +1,60 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Dynamic Ftrace based Kprobes Optimization
+ *
+ * Copyright (C) Hitachi Ltd., 2012
+ * Copyright (C) 2019 Jisheng Zhang <[email protected]>
+ * Synaptics Incorporated
+ */
+
+#include <linux/kprobes.h>
+
+/* Ftrace callback handler for kprobes -- called with preemption disabled */
+void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
+ struct ftrace_ops *ops, struct pt_regs *regs)
+{
+ struct kprobe *p;
+ struct kprobe_ctlblk *kcb;
+
+ /* Preempt is disabled by ftrace */
+ p = get_kprobe((kprobe_opcode_t *)ip);
+ if (unlikely(!p) || kprobe_disabled(p))
+ return;
+
+ kcb = get_kprobe_ctlblk();
+ if (kprobe_running()) {
+ kprobes_inc_nmissed_count(p);
+ } else {
+ unsigned long orig_ip = instruction_pointer(regs);
+ /* Kprobe handler expects regs->pc = pc + 1 as breakpoint hit */
+ instruction_pointer_set(regs, ip + sizeof(kprobe_opcode_t));
+
+ __this_cpu_write(current_kprobe, p);
+ kcb->kprobe_status = KPROBE_HIT_ACTIVE;
+ if (!p->pre_handler || !p->pre_handler(p, regs)) {
+ /*
+ * Emulate singlestep (and also recover regs->pc)
+ * as if there is a nop
+ */
+ instruction_pointer_set(regs,
+ (unsigned long)p->addr + MCOUNT_INSN_SIZE);
+ if (unlikely(p->post_handler)) {
+ kcb->kprobe_status = KPROBE_HIT_SSDONE;
+ p->post_handler(p, regs, 0);
+ }
+ instruction_pointer_set(regs, orig_ip);
+ }
+ /*
+ * If pre_handler returns !0, it changes regs->pc. We have to
+ * skip emulating post_handler.
+ */
+ __this_cpu_write(current_kprobe, NULL);
+ }
+}
+NOKPROBE_SYMBOL(kprobe_ftrace_handler);
+
+int arch_prepare_kprobe_ftrace(struct kprobe *p)
+{
+ p->ainsn.api.insn = NULL;
+ return 0;
+}
--
2.23.0.rc1

2019-08-20 07:22:49

by Jisheng Zhang

Subject: Re: [PATCH v2 3/3] arm64: implement KPROBES_ON_FTRACE

On Tue, 20 Aug 2019 03:54:20 +0000 Jisheng Zhang wrote:

>
>
> KPROBES_ON_FTRACE avoids much of the overhead of regular kprobes as it
> eliminates the need for a trap, as well as the need to emulate or
> single-step instructions.
>
> This patch implements KPROBES_ON_FTRACE for arm64.
>
> Tested on the Berlin arm64 platform.

Some performance numbers may be interesting.

HW: Berlin arm64 platform, cpufreq forced to 800MHz
SW: getppid syscall micro-benchmark, source code is at the end of this email.

A. Not probed.
B. Probed at __arm64_sys_getppid w/ no-op probe handlers, w/o KPROBES_ON_FTRACE
C. Probed at __arm64_sys_getppid w/ no-op probe handlers, w/ KPROBES_ON_FTRACE

A: 1905 ns/call
B: 5833 ns/call
C: 2169 ns/call

The overhead of kprobes is 5833 - 1905 = 3928 ns/call
The overhead of kprobes w/ KPROBES_ON_FTRACE is 2169 - 1905 = 264 ns/call

As can be seen, KPROBES_ON_FTRACE significantly reduces the overhead of kprobes.

Thanks

---8<---
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
	struct timeval tv;
	unsigned long count;
	struct rusage usage;

	for (count = 0; count < 10000000; count++)
		getppid();

	/* total CPU time consumed: system + user */
	getrusage(RUSAGE_SELF, &usage);
	tv = usage.ru_stime;
	tv.tv_sec += usage.ru_utime.tv_sec;
	tv.tv_usec += usage.ru_utime.tv_usec;

	/* usec per 1000 calls == nsec per call */
	fprintf(stderr, "getppid was called %lu times: %lu nsec per call\n",
		count, (tv.tv_sec * 1000 * 1000 + tv.tv_usec) / (count / 1000));

	return 0;
}

>
> ~ # mount -t debugfs debugfs /sys/kernel/debug/
> ~ # cd /sys/kernel/debug/
> /sys/kernel/debug # echo 'p _do_fork' > tracing/kprobe_events
>
> before the patch:
>
> /sys/kernel/debug # cat kprobes/list
> ffffff801009fe28 k _do_fork+0x0 [DISABLED]
>
> after the patch:
>
> /sys/kernel/debug # cat kprobes/list
> ffffff801009ff54 k _do_fork+0x4 [DISABLED][FTRACE]
>
> Signed-off-by: Jisheng Zhang <[email protected]>
> ---
> .../debug/kprobes-on-ftrace/arch-support.txt | 2 +-
> arch/arm64/Kconfig | 1 +
> arch/arm64/kernel/probes/Makefile | 1 +
> arch/arm64/kernel/probes/ftrace.c | 60 +++++++++++++++++++
> 4 files changed, 63 insertions(+), 1 deletion(-)
> create mode 100644 arch/arm64/kernel/probes/ftrace.c
>
> diff --git a/Documentation/features/debug/kprobes-on-ftrace/arch-support.txt b/Documentation/features/debug/kprobes-on-ftrace/arch-support.txt
> index 68f266944d5f..e8358a38981c 100644
> --- a/Documentation/features/debug/kprobes-on-ftrace/arch-support.txt
> +++ b/Documentation/features/debug/kprobes-on-ftrace/arch-support.txt
> @@ -9,7 +9,7 @@
> | alpha: | TODO |
> | arc: | TODO |
> | arm: | TODO |
> - | arm64: | TODO |
> + | arm64: | ok |
> | c6x: | TODO |
> | csky: | TODO |
> | h8300: | TODO |
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 663392d1eae2..928700f15e23 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -167,6 +167,7 @@ config ARM64
> select HAVE_STACKPROTECTOR
> select HAVE_SYSCALL_TRACEPOINTS
> select HAVE_KPROBES
> + select HAVE_KPROBES_ON_FTRACE
> select HAVE_KRETPROBES
> select HAVE_GENERIC_VDSO
> select IOMMU_DMA if IOMMU_SUPPORT
> diff --git a/arch/arm64/kernel/probes/Makefile b/arch/arm64/kernel/probes/Makefile
> index 8e4be92e25b1..4020cfc66564 100644
> --- a/arch/arm64/kernel/probes/Makefile
> +++ b/arch/arm64/kernel/probes/Makefile
> @@ -4,3 +4,4 @@ obj-$(CONFIG_KPROBES) += kprobes.o decode-insn.o \
> simulate-insn.o
> obj-$(CONFIG_UPROBES) += uprobes.o decode-insn.o \
> simulate-insn.o
> +obj-$(CONFIG_KPROBES_ON_FTRACE) += ftrace.o
> diff --git a/arch/arm64/kernel/probes/ftrace.c b/arch/arm64/kernel/probes/ftrace.c
> new file mode 100644
> index 000000000000..52901ffff570
> --- /dev/null
> +++ b/arch/arm64/kernel/probes/ftrace.c
> @@ -0,0 +1,60 @@
> +// SPDX-License-Identifier: GPL-2.0-or-later
> +/*
> + * Dynamic Ftrace based Kprobes Optimization
> + *
> + * Copyright (C) Hitachi Ltd., 2012
> + * Copyright (C) 2019 Jisheng Zhang <[email protected]>
> + * Synaptics Incorporated
> + */
> +
> +#include <linux/kprobes.h>
> +
> +/* Ftrace callback handler for kprobes -- called with preemption disabled */
> +void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
> + struct ftrace_ops *ops, struct pt_regs *regs)
> +{
> + struct kprobe *p;
> + struct kprobe_ctlblk *kcb;
> +
> + /* Preempt is disabled by ftrace */
> + p = get_kprobe((kprobe_opcode_t *)ip);
> + if (unlikely(!p) || kprobe_disabled(p))
> + return;
> +
> + kcb = get_kprobe_ctlblk();
> + if (kprobe_running()) {
> + kprobes_inc_nmissed_count(p);
> + } else {
> + unsigned long orig_ip = instruction_pointer(regs);
> + /* Kprobe handler expects regs->pc = pc + 1 as breakpoint hit */
> + instruction_pointer_set(regs, ip + sizeof(kprobe_opcode_t));
> +
> + __this_cpu_write(current_kprobe, p);
> + kcb->kprobe_status = KPROBE_HIT_ACTIVE;
> + if (!p->pre_handler || !p->pre_handler(p, regs)) {
> + /*
> + * Emulate singlestep (and also recover regs->pc)
> + * as if there is a nop
> + */
> + instruction_pointer_set(regs,
> + (unsigned long)p->addr + MCOUNT_INSN_SIZE);
> + if (unlikely(p->post_handler)) {
> + kcb->kprobe_status = KPROBE_HIT_SSDONE;
> + p->post_handler(p, regs, 0);
> + }
> + instruction_pointer_set(regs, orig_ip);
> + }
> + /*
> + * If pre_handler returns !0, it changes regs->pc. We have to
> + * skip emulating post_handler.
> + */
> + __this_cpu_write(current_kprobe, NULL);
> + }
> +}
> +NOKPROBE_SYMBOL(kprobe_ftrace_handler);
> +
> +int arch_prepare_kprobe_ftrace(struct kprobe *p)
> +{
> + p->ainsn.api.insn = NULL;
> + return 0;
> +}
> --
> 2.23.0.rc1
>
>

2019-08-20 08:55:21

by Thomas Gleixner

Subject: Re: [PATCH v2 1/3] kprobes/x86: use instruction_pointer and instruction_pointer_set

On Tue, 20 Aug 2019, Jisheng Zhang wrote:

> This is to make the x86 kprobe_ftrace_handler() more common so that
> the code could be reused in future.

While I agree with the change in general, I can't find anything which
reuses that code. So the change log is pretty useless and I have no idea
how this is related to the rest of the series.

Thanks,

tglx

2019-08-20 08:56:45

by Thomas Gleixner

Subject: Re: [PATCH v2 3/3] arm64: implement KPROBES_ON_FTRACE

On Tue, 20 Aug 2019, Jisheng Zhang wrote:

> KPROBES_ON_FTRACE avoids much of the overhead of regular kprobes as it
> eliminates the need for a trap, as well as the need to emulate or
> single-step instructions.
>
> This patch implements KPROBES_ON_FTRACE for arm64.

git grep 'This patch' Documentation/process/submitting-patches.rst

Thanks,

tglx

2019-08-20 09:04:13

by Jisheng Zhang

Subject: Re: [PATCH v2 1/3] kprobes/x86: use instruction_pointer and instruction_pointer_set

Hi Thomas,

On Tue, 20 Aug 2019 10:53:58 +0200 (CEST) Thomas Gleixner wrote:

>
>
> On Tue, 20 Aug 2019, Jisheng Zhang wrote:
>
> > This is to make the x86 kprobe_ftrace_handler() more common so that
> > the code could be reused in future.
>
> While I agree with the change in general, I can't find anything which
> reuses that code. So the change log is pretty useless and I have no idea
> how this is related to the rest of the series.

In v1, this code is moved from x86 to common kprobes.c [1]
But I agree with Masami, consolidation could be done when arm64 kprobes
on ftrace is stable.

In v2, actually, the arm64 version's kprobe_ftrace_handler() is the same
as x86's, the only difference is comment, e.g

/* Kprobe handler expects regs->ip = ip + 1 as breakpoint hit */

while in arm64

/* Kprobe handler expects regs->pc = ip + 1 as breakpoint hit */


W/ above, any suggestion about the suitable change log?

Thanks

[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2019-August/674417.html

2019-08-20 09:21:59

by Jisheng Zhang

Subject: Re: [PATCH v2 1/3] kprobes/x86: use instruction_pointer and instruction_pointer_set

On Tue, 20 Aug 2019 09:02:59 +0000 Jisheng Zhang wrote:


>
>
> Hi Thomas,
>
> On Tue, 20 Aug 2019 10:53:58 +0200 (CEST) Thomas Gleixner wrote:
>
> >
> >
> > On Tue, 20 Aug 2019, Jisheng Zhang wrote:
> >
> > > This is to make the x86 kprobe_ftrace_handler() more common so that
> > > the code could be reused in future.
> >
> > While I agree with the change in general, I can't find anything which
> > reuses that code. So the change log is pretty useless and I have no idea
> > how this is related to the rest of the series.

Indeed, this isn't related to the rest of the series, so I will update the
change log and resend it alone.

>
> In v1, this code is moved from x86 to common kprobes.c [1]
> But I agree with Masami, consolidation could be done when arm64 kprobes
> on ftrace is stable.
>
> In v2, actually, the arm64 version's kprobe_ftrace_handler() is the same
> as x86's, the only difference is comment, e.g
>
> /* Kprobe handler expects regs->ip = ip + 1 as breakpoint hit */
>
> while in arm64
>
> /* Kprobe handler expects regs->pc = ip + 1 as breakpoint hit */
>
>
> W/ above, any suggestion about the suitable change log?
>
> Thanks
>

2019-08-20 10:16:41

by Naveen N. Rao

Subject: Re: [PATCH v2 2/3] kprobes: adjust kprobe addr for KPROBES_ON_FTRACE

Jisheng Zhang wrote:
> For KPROBES_ON_FTRACE case, we need to adjust the kprobe's addr
> correspondingly.
>
> Signed-off-by: Jisheng Zhang <[email protected]>
> ---
> kernel/kprobes.c | 10 +++++++---
> 1 file changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> index 9873fc627d61..3fd2f68644da 100644
> --- a/kernel/kprobes.c
> +++ b/kernel/kprobes.c
> @@ -1484,15 +1484,19 @@ static inline int check_kprobe_rereg(struct kprobe *p)
>
> int __weak arch_check_ftrace_location(struct kprobe *p)
> {
> - unsigned long ftrace_addr;
> + unsigned long ftrace_addr, addr = (unsigned long)p->addr;
>
> - ftrace_addr = ftrace_location((unsigned long)p->addr);
> +#ifdef CONFIG_KPROBES_ON_FTRACE
> + addr = ftrace_call_adjust(addr);
> +#endif

Looking at the commit message for patch 3/3, it looks like you want the
probe to be placed on ftrace entry by default, and this patch seems to
be aimed at that.

If so, this is not the right approach. As I mentioned previously, you
would want to override kprobe_lookup_name(). This ensures that the
address is changed only if the user provided a symbol, and not if the
user wanted to probe at a very specific address. See commit
24bd909e94776 ("powerpc/kprobes: Prefer ftrace when probing function
entry").

If this patch is for some other purpose, then it isn't clear from the
commit log. Please provide a better explanation.


- Naveen

2019-08-20 10:42:22

by Jisheng Zhang

Subject: Re: [PATCH v2 2/3] kprobes: adjust kprobe addr for KPROBES_ON_FTRACE

On Tue, 20 Aug 2019 15:45:24 +0530 "Naveen N. Rao" wrote:

>
>
> Jisheng Zhang wrote:
> > For KPROBES_ON_FTRACE case, we need to adjust the kprobe's addr
> > correspondingly.
> >
> > Signed-off-by: Jisheng Zhang <[email protected]>
> > ---
> > kernel/kprobes.c | 10 +++++++---
> > 1 file changed, 7 insertions(+), 3 deletions(-)
> >
> > diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> > index 9873fc627d61..3fd2f68644da 100644
> > --- a/kernel/kprobes.c
> > +++ b/kernel/kprobes.c
> > @@ -1484,15 +1484,19 @@ static inline int check_kprobe_rereg(struct kprobe *p)
> >
> > int __weak arch_check_ftrace_location(struct kprobe *p)
> > {
> > - unsigned long ftrace_addr;
> > + unsigned long ftrace_addr, addr = (unsigned long)p->addr;
> >
> > - ftrace_addr = ftrace_location((unsigned long)p->addr);
> > +#ifdef CONFIG_KPROBES_ON_FTRACE
> > + addr = ftrace_call_adjust(addr);
> > +#endif
>
> Looking at the commit message for patch 3/3, it looks like you want the
> probe to be placed on ftrace entry by default, and this patch seems to
> be aimed at that.

Yeah.

>
> If so, this is not the right approach. As I mentioned previously, you
> would want to override kprobe_lookup_name(). This ensures that the
> address is changed only if the user provided a symbol, and not if the
> user wanted to probe at a very specific address. See commit

Great! Now I understand the reason.

> 24bd909e94776 ("powerpc/kprobes: Prefer ftrace when probing function
> entry").

Now I get your meaning. You are right. I will update the patch in a
newer version.

Thanks a lot!

>
> If this patch is for some other purpose, then it isn't clear from the
> commit log. Please provide a better explanation.
>
>
> - Naveen
>

2019-08-20 13:23:00

by Peter Zijlstra

Subject: Re: [PATCH v2 1/3] kprobes/x86: use instruction_pointer and instruction_pointer_set

On Tue, Aug 20, 2019 at 09:02:59AM +0000, Jisheng Zhang wrote:
> In v2, actually, the arm64 version's kprobe_ftrace_handler() is the same
> as x86's, the only difference is comment, e.g
>
> /* Kprobe handler expects regs->ip = ip + 1 as breakpoint hit */
>
> while in arm64
>
> /* Kprobe handler expects regs->pc = ip + 1 as breakpoint hit */

That's weird; I thought ARM has fixed-size instructions and they are
all 4 bytes? So how does a single-byte offset make sense for ARM?

2019-08-21 02:11:25

by Masami Hiramatsu

Subject: Re: [PATCH v2 2/3] kprobes: adjust kprobe addr for KPROBES_ON_FTRACE

Hi Jisheng,

On Tue, 20 Aug 2019 03:53:31 +0000
Jisheng Zhang <[email protected]> wrote:

> For KPROBES_ON_FTRACE case, we need to adjust the kprobe's addr
> correspondingly.

Whether KPROBES_ON_FTRACE=y or not, the ftrace_location() check must be
done correctly. If it fails, kprobes can modify an instruction
which can also be modified by ftrace.

>
> Signed-off-by: Jisheng Zhang <[email protected]>
> ---
> kernel/kprobes.c | 10 +++++++---
> 1 file changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> index 9873fc627d61..3fd2f68644da 100644
> --- a/kernel/kprobes.c
> +++ b/kernel/kprobes.c
> @@ -1484,15 +1484,19 @@ static inline int check_kprobe_rereg(struct kprobe *p)
>
> int __weak arch_check_ftrace_location(struct kprobe *p)
> {
> - unsigned long ftrace_addr;
> + unsigned long ftrace_addr, addr = (unsigned long)p->addr;
>
> - ftrace_addr = ftrace_location((unsigned long)p->addr);
> +#ifdef CONFIG_KPROBES_ON_FTRACE
> + addr = ftrace_call_adjust(addr);
> +#endif
> + ftrace_addr = ftrace_location(addr);

No, this is not the right way to do it. If we always need to adjust the
address before calling ftrace_location(), something is wrong with the
ftrace_location() interface.
ftrace_location(addr) must check that the address is within the range
which can be changed by ftrace (dyn->ip <= addr <= dyn->ip + MCOUNT_INSN_SIZE).
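
In other words, something like the sketch below (illustrative only; it
reuses the existing ftrace_location_range() helper, and the name here is
made up):

static unsigned long ftrace_location_loose(unsigned long addr)
{
	/* match any dyn ftrace record satisfying
	 * rec->ip <= addr <= rec->ip + MCOUNT_INSN_SIZE */
	return ftrace_location_range(addr - MCOUNT_INSN_SIZE, addr);
}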


> if (ftrace_addr) {
> #ifdef CONFIG_KPROBES_ON_FTRACE
> /* Given address is not on the instruction boundary */
> - if ((unsigned long)p->addr != ftrace_addr)
> + if (addr != ftrace_addr)
> return -EILSEQ;
> p->flags |= KPROBE_FLAG_FTRACE;
> + p->addr = (kprobe_opcode_t *)addr;

And again, please don't change p->addr silently.

Thank you,

> #else /* !CONFIG_KPROBES_ON_FTRACE */
> return -EINVAL;
> #endif
> --
> 2.23.0.rc1
>


--
Masami Hiramatsu <[email protected]>

2019-08-21 02:12:54

by Jisheng Zhang

Subject: Re: [PATCH v2 1/3] kprobes/x86: use instruction_pointer and instruction_pointer_set

Hi,

On Wed, 21 Aug 2019 10:52:47 +0900 Masami Hiramatsu wrote:
>
>
> Hi Jisheng,
>
> On Tue, 20 Aug 2019 09:02:59 +0000
> Jisheng Zhang <[email protected]> wrote:
>
> > Hi Thomas,
> >
> > On Tue, 20 Aug 2019 10:53:58 +0200 (CEST) Thomas Gleixner wrote:
> >
> > >
> > >
> > > On Tue, 20 Aug 2019, Jisheng Zhang wrote:
> > >
> > > > This is to make the x86 kprobe_ftrace_handler() more common so that
> > > > the code could be reused in future.
> > >
> > > While I agree with the change in general, I can't find anything which
> > > reuses that code. So the change log is pretty useless and I have no idea
> > > how this is related to the rest of the series.
> >
> > In v1, this code is moved from x86 to common kprobes.c [1]
> > But I agree with Masami, consolidation could be done when arm64 kprobes
> > on ftrace is stable.
>
> We'll revisit consolidating the code after we get a 3rd or 4th clone.
>
> >
> > In v2, actually, the arm64 version's kprobe_ftrace_handler() is the same
> > as x86's, the only difference is comment, e.g
> >
> > /* Kprobe handler expects regs->ip = ip + 1 as breakpoint hit */
> >
> > while in arm64
> >
> > /* Kprobe handler expects regs->pc = ip + 1 as breakpoint hit */
>
> As Peter pointed out, on arm64, is that really 1 or 4 bytes?
> This part depends heavily on the processor's software-breakpoint
> implementation.

Per my understanding, the "+1" here means "+ one kprobe_opcode_t".

>
> >
> >
> > W/ above, any suggestion about the suitable change log?
>
> I think you just need to keep the first half of the description.
> Since this patch itself is not related to the series, could you update
> the description and resend it as a single cleanup patch out of the series?
>

Got it. Will do today.

Thanks a lot

2019-08-21 02:20:31

by Masami Hiramatsu

Subject: Re: [PATCH v2 1/3] kprobes/x86: use instruction_pointer and instruction_pointer_set

Hi Jisheng,

On Tue, 20 Aug 2019 09:02:59 +0000
Jisheng Zhang <[email protected]> wrote:

> Hi Thomas,
>
> On Tue, 20 Aug 2019 10:53:58 +0200 (CEST) Thomas Gleixner wrote:
>
> >
> >
> > On Tue, 20 Aug 2019, Jisheng Zhang wrote:
> >
> > > This is to make the x86 kprobe_ftrace_handler() more common so that
> > > the code could be reused in future.
> >
> > While I agree with the change in general, I can't find anything which
> > reuses that code. So the change log is pretty useless and I have no idea
> > how this is related to the rest of the series.
>
> In v1, this code is moved from x86 to common kprobes.c [1]
> But I agree with Masami, consolidation could be done when arm64 kprobes
> on ftrace is stable.

We'll revisit consolidating the code after we get a 3rd or 4th clone.

>
> In v2, actually, the arm64 version's kprobe_ftrace_handler() is the same
> as x86's, the only difference is comment, e.g
>
> /* Kprobe handler expects regs->ip = ip + 1 as breakpoint hit */
>
> while in arm64
>
> /* Kprobe handler expects regs->pc = ip + 1 as breakpoint hit */

As Peter pointed out, on arm64, is that really 1 or 4 bytes?
This part depends heavily on the processor's software-breakpoint
implementation.

>
>
> W/ above, any suggestion about the suitable change log?

I think you just need to keep the first half of the description.
Since this patch itself is not related to the series, could you update
the description and resend it as a single cleanup patch out of the series?

Thank you!

>
> Thanks
>
> [1] http://lists.infradead.org/pipermail/linux-arm-kernel/2019-August/674417.html


--
Masami Hiramatsu <[email protected]>

2019-08-21 02:20:31

by Jisheng Zhang

Subject: Re: [PATCH v2 1/3] kprobes/x86: use instruction_pointer and instruction_pointer_set

Hi Peter,

On Tue, 20 Aug 2019 15:21:10 +0200 Peter Zijlstra wrote:

>
>
> On Tue, Aug 20, 2019 at 09:02:59AM +0000, Jisheng Zhang wrote:
> > In v2, actually, the arm64 version's kprobe_ftrace_handler() is the same
> > as x86's, the only difference is comment, e.g
> >
> > /* Kprobe handler expects regs->ip = ip + 1 as breakpoint hit */
> >
> > while in arm64
> >
> > /* Kprobe handler expects regs->pc = ip + 1 as breakpoint hit */
>
> That's weird; I thought ARM has fixed-size instructions and they are
> all 4 bytes? So how does a single-byte offset make sense for ARM?

I believe the "+1" here means + one kprobe_opcode_t.

Thanks

2019-08-21 02:58:17

by Jisheng Zhang

Subject: Re: [PATCH v2 2/3] kprobes: adjust kprobe addr for KPROBES_ON_FTRACE

Hi,

On Wed, 21 Aug 2019 11:07:39 +0900 Masami Hiramatsu wrote:

>
>
> Hi Jisheng,
>
> On Tue, 20 Aug 2019 03:53:31 +0000
> Jisheng Zhang <[email protected]> wrote:
>
> > For KPROBES_ON_FTRACE case, we need to adjust the kprobe's addr
> > correspondingly.
>
> Whether KPROBES_ON_FTRACE=y or not, the ftrace_location() check must be
> done correctly. If it fails, kprobes can modify an instruction
> which can also be modified by ftrace.
>
> >
> > Signed-off-by: Jisheng Zhang <[email protected]>
> > ---
> > kernel/kprobes.c | 10 +++++++---
> > 1 file changed, 7 insertions(+), 3 deletions(-)
> >
> > diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> > index 9873fc627d61..3fd2f68644da 100644
> > --- a/kernel/kprobes.c
> > +++ b/kernel/kprobes.c
> > @@ -1484,15 +1484,19 @@ static inline int check_kprobe_rereg(struct kprobe *p)
> >
> > int __weak arch_check_ftrace_location(struct kprobe *p)
> > {
> > - unsigned long ftrace_addr;
> > + unsigned long ftrace_addr, addr = (unsigned long)p->addr;
> >
> > - ftrace_addr = ftrace_location((unsigned long)p->addr);
> > +#ifdef CONFIG_KPROBES_ON_FTRACE
> > + addr = ftrace_call_adjust(addr);
> > +#endif
> > + ftrace_addr = ftrace_location(addr);
>
> No, this is not the right way to do it. If we always need to adjust the
> address before calling ftrace_location(), something is wrong with the
> ftrace_location() interface.
> ftrace_location(addr) must check that the address is within the range
> which can be changed by ftrace (dyn->ip <= addr <= dyn->ip + MCOUNT_INSN_SIZE).

Yeah! I will try Naveen's suggestion, i.e. patch kprobe_lookup_name() instead.

Thanks

>
>
> > if (ftrace_addr) {
> > #ifdef CONFIG_KPROBES_ON_FTRACE
> > /* Given address is not on the instruction boundary */
> > - if ((unsigned long)p->addr != ftrace_addr)
> > + if (addr != ftrace_addr)
> > return -EILSEQ;
> > p->flags |= KPROBE_FLAG_FTRACE;
> > + p->addr = (kprobe_opcode_t *)addr;
>
> And again, please don't change p->addr silently.
>
> Thank you,
>
> > #else /* !CONFIG_KPROBES_ON_FTRACE */
> > return -EINVAL;
> > #endif
> > --
> > 2.23.0.rc1
> >
>
>
> --
> Masami Hiramatsu <[email protected]>

2019-08-23 23:28:25

by Masami Hiramatsu

Subject: Re: [PATCH v2 1/3] kprobes/x86: use instruction_pointer and instruction_pointer_set

Hi Jisheng,

On Wed, 21 Aug 2019 02:09:10 +0000
Jisheng Zhang <[email protected]> wrote:

> > > In v2, actually, the arm64 version's kprobe_ftrace_handler() is the same
> > > as x86's, the only difference is comment, e.g
> > >
> > > /* Kprobe handler expects regs->ip = ip + 1 as breakpoint hit */
> > >
> > > while in arm64
> > >
> > > /* Kprobe handler expects regs->pc = ip + 1 as breakpoint hit */
> >
> > As Peter pointed out, on arm64, is that really 1 or 4 bytes?
> > This part depends heavily on the processor's software-breakpoint
> > implementation.
>
> Per my understanding, the "+1" here means "+ one kprobe_opcode_t".

No, that is the size of INT3. It just emulates the software trap on x86.
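
For reference, the two readings coincide on x86 but not on arm64 because
kprobe_opcode_t differs per arch (sketch of the typedefs; see each
arch's asm/kprobes.h):

typedef u8  kprobe_opcode_t;	/* x86:   ip + 1 == size of INT3        */
typedef u32 kprobe_opcode_t;	/* arm64: pc + 4 == one A64 instruction */

So "ip + sizeof(kprobe_opcode_t)" means ip + 1 on x86 and ip + 4 on arm64.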

Thank you,
--
Masami Hiramatsu <[email protected]>