2017-11-09 23:03:59

by Yonghong Song

Subject: [PATCH] uprobes/x86: emulate push insns for uprobe on x86

A uprobe is a tracing mechanism for user-space programs.
A typical uprobe incurs the overhead of two traps: the
first trap comes from the breakpoint insn that replaces
the original insn, and the second trap is taken to execute
the displaced original insn out of line in user space.

To reduce this overhead, the kernel provides hooks for
architectures to emulate the original insn and skip the
second trap. On x86, emulation is currently done only for
certain branch insns.
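
For reference, a rough sketch of that hook (this mirrors the
existing uprobe_xol_ops/arch_uprobe_skip_sstep code in
arch/x86/kernel/uprobes.c; it is only an illustration, not part
of this patch):

	/*
	 * Per-insn handlers an architecture can install.  If ->emulate()
	 * returns true, the breakpoint handler skips the out-of-line
	 * single-step, so only the first trap is taken.
	 */
	struct uprobe_xol_ops {
		bool	(*emulate)(struct arch_uprobe *, struct pt_regs *);
		int	(*pre_xol)(struct arch_uprobe *, struct pt_regs *);
		int	(*post_xol)(struct arch_uprobe *, struct pt_regs *);
		void	(*abort)(struct arch_uprobe *, struct pt_regs *);
	};

	bool arch_uprobe_skip_sstep(struct arch_uprobe *auprobe, struct pt_regs *regs)
	{
		if (auprobe->ops->emulate)
			return auprobe->ops->emulate(auprobe, regs);
		return false;
	}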

This patch extends the emulation to "push <reg>" insns,
which typically appear at the beginning of a function.
For example, bcc (https://github.com/iovisor/bcc) provides
tools such as funclatency and memleak that place uprobes
at the beginning of a function and possibly uretprobes at
its end. With this patch, the trap overhead for such a
uprobe is reduced from 2 traps to 1.
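
The emulation itself is small.  Condensed from push_emulate_op()
and emulate_push_stack() in the diff below (a sketch, not the exact
patch code): copy the register value to the user stack, drop sp, and
advance ip past the 1- or 2-byte insn; if the copy faults, return
false and fall back to the usual out-of-line execution.

	static bool emulate_push_sketch(struct pt_regs *regs,
					unsigned long val, u8 ilen)
	{
		/* sizeof_long() is the existing uprobes.c helper: roughly
		 * 4 for a 32-bit task and 8 for a 64-bit one
		 */
		unsigned long new_sp = regs->sp - sizeof_long();

		if (copy_to_user((void __user *)new_sp, &val, sizeof_long()))
			return false;		/* fault: single-step out of line */

		regs->sp = new_sp;		/* commit the new stack pointer */
		regs->ip += ilen;		/* skip the emulated push insn */
		return true;
	}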

Without this patch, a uretprobe typically incurs three
traps: the breakpoint trap, the out-of-line single-step
trap, and the trap taken when the hijacked return address
hits the uretprobe trampoline. With this patch, if the
probed function starts with a "push" insn, the number of
traps is reduced from 3 to 2.

An experiment was conducted on two local VMs, a 64-bit
and a 32-bit Fedora 26 VM, each with 4 processors and 4GB
of memory, booted with the latest tip tree plus this
patch. The host is a MacBook with an Intel i7 processor.

The test program looks like this:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <sys/time.h>

static void test() __attribute__((noinline));
void test() {}
int main() {
	struct timeval start, end;

	gettimeofday(&start, NULL);
	for (int i = 0; i < 1000000; i++) {
		test();
	}
	gettimeofday(&end, NULL);

	printf("%ld\n", ((end.tv_sec * 1000000 + end.tv_usec)
			 - (start.tv_sec * 1000000 + start.tv_usec)));
	return 0;
}

The program is compiled without optimization, and the
first insn of function "test" is "push %rbp". The host is
relatively idle.

Before the test run, the probe is inserted as below.

For uprobe:
echo 'p <binary>:<test_func_offset>' > /sys/kernel/debug/tracing/uprobe_events
echo 1 > /sys/kernel/debug/tracing/events/uprobes/enable

For uretprobe:
echo 'r <binary>:<test_func_offset>' > /sys/kernel/debug/tracing/uprobe_events
echo 1 > /sys/kernel/debug/tracing/events/uprobes/enable

Unit: microseconds (usec) per loop iteration

x86_64        W/ this patch    W/O this patch
uprobe            1.55              3.1
uretprobe         2.0               3.6

x86_32        W/ this patch    W/O this patch
uprobe            1.41              3.5
uretprobe         1.75              4.0

As the numbers show, this patch significantly reduces the
overhead: 50% for uprobe and 44% for uretprobe on x86_64,
and even more on x86_32.

Signed-off-by: Yonghong Song <[email protected]>
---
 arch/x86/include/asm/uprobes.h |   4 ++
 arch/x86/kernel/uprobes.c      | 110 ++++++++++++++++++++++++++++++++++++++---
 2 files changed, 107 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/uprobes.h b/arch/x86/include/asm/uprobes.h
index 74f4c2f..a90090c 100644
--- a/arch/x86/include/asm/uprobes.h
+++ b/arch/x86/include/asm/uprobes.h
@@ -53,6 +53,10 @@ struct arch_uprobe {
u8 fixups;
u8 ilen;
} defparam;
+ struct {
+ u8 src_offset; /* to the start of pt_regs */
+ u8 ilen;
+ } push;
};
};

diff --git a/arch/x86/kernel/uprobes.c b/arch/x86/kernel/uprobes.c
index a3755d2..1ee8b59 100644
--- a/arch/x86/kernel/uprobes.c
+++ b/arch/x86/kernel/uprobes.c
@@ -528,11 +528,11 @@ static int default_pre_xol_op(struct arch_uprobe *auprobe, struct pt_regs *regs)
return 0;
}

-static int push_ret_address(struct pt_regs *regs, unsigned long ip)
+static int emulate_push_stack(struct pt_regs *regs, unsigned long val)
{
unsigned long new_sp = regs->sp - sizeof_long();

- if (copy_to_user((void __user *)new_sp, &ip, sizeof_long()))
+ if (copy_to_user((void __user *)new_sp, &val, sizeof_long()))
return -EFAULT;

regs->sp = new_sp;
@@ -566,7 +566,7 @@ static int default_post_xol_op(struct arch_uprobe *auprobe, struct pt_regs *regs
regs->ip += correction;
} else if (auprobe->defparam.fixups & UPROBE_FIX_CALL) {
regs->sp += sizeof_long(); /* Pop incorrect return address */
- if (push_ret_address(regs, utask->vaddr + auprobe->defparam.ilen))
+ if (emulate_push_stack(regs, utask->vaddr + auprobe->defparam.ilen))
return -ERESTART;
}
/* popf; tell the caller to not touch TF */
@@ -655,7 +655,7 @@ static bool branch_emulate_op(struct arch_uprobe *auprobe, struct pt_regs *regs)
*
* But there is corner case, see the comment in ->post_xol().
*/
- if (push_ret_address(regs, new_ip))
+ if (emulate_push_stack(regs, new_ip))
return false;
} else if (!check_jmp_cond(auprobe, regs)) {
offs = 0;
@@ -665,6 +665,16 @@ static bool branch_emulate_op(struct arch_uprobe *auprobe, struct pt_regs *regs)
return true;
}

+static bool push_emulate_op(struct arch_uprobe *auprobe, struct pt_regs *regs)
+{
+ void *src_ptr = (void *)regs + auprobe->push.src_offset;
+
+ if (emulate_push_stack(regs, *(unsigned long *)src_ptr))
+ return false;
+ regs->ip += auprobe->push.ilen;
+ return true;
+}
+
static int branch_post_xol_op(struct arch_uprobe *auprobe, struct pt_regs *regs)
{
BUG_ON(!branch_is_call(auprobe));
@@ -703,13 +713,99 @@ static const struct uprobe_xol_ops branch_xol_ops = {
.post_xol = branch_post_xol_op,
};

-/* Returns -ENOSYS if branch_xol_ops doesn't handle this insn */
-static int branch_setup_xol_ops(struct arch_uprobe *auprobe, struct insn *insn)
+static const struct uprobe_xol_ops push_xol_ops = {
+ .emulate = push_emulate_op,
+};
+
+static int uprobe_setup_push_ops(struct arch_uprobe *auprobe, struct insn *insn,
+ u8 opc1)
+{
+ u8 src_offset = 0;
+
+ if (insn->length > 2)
+ return -ENOSYS;
+ if (insn->length == 2) {
+ /* only support rex_prefix 0x41 (x64 only) */
+#ifdef CONFIG_X86_64
+ if (insn->rex_prefix.nbytes != 1 ||
+ insn->rex_prefix.bytes[0] != 0x41)
+ return -ENOSYS;
+
+ auprobe->push.ilen = 2;
+ switch (opc1) {
+ case 0x50:
+ src_offset = offsetof(struct pt_regs, r8);
+ break;
+ case 0x51:
+ src_offset = offsetof(struct pt_regs, r9);
+ break;
+ case 0x52:
+ src_offset = offsetof(struct pt_regs, r10);
+ break;
+ case 0x53:
+ src_offset = offsetof(struct pt_regs, r11);
+ break;
+ case 0x54:
+ src_offset = offsetof(struct pt_regs, r12);
+ break;
+ case 0x55:
+ src_offset = offsetof(struct pt_regs, r13);
+ break;
+ case 0x56:
+ src_offset = offsetof(struct pt_regs, r14);
+ break;
+ case 0x57:
+ src_offset = offsetof(struct pt_regs, r15);
+ break;
+ }
+#else
+ return -ENOSYS;
+#endif
+ } else {
+ auprobe->push.ilen = 1;
+ switch (opc1) {
+ case 0x50:
+ src_offset = offsetof(struct pt_regs, ax);
+ break;
+ case 0x51:
+ src_offset = offsetof(struct pt_regs, cx);
+ break;
+ case 0x52:
+ src_offset = offsetof(struct pt_regs, dx);
+ break;
+ case 0x53:
+ src_offset = offsetof(struct pt_regs, bx);
+ break;
+ case 0x54:
+ src_offset = offsetof(struct pt_regs, sp);
+ break;
+ case 0x55:
+ src_offset = offsetof(struct pt_regs, bp);
+ break;
+ case 0x56:
+ src_offset = offsetof(struct pt_regs, si);
+ break;
+ case 0x57:
+ src_offset = offsetof(struct pt_regs, di);
+ break;
+ }
+ }
+
+ auprobe->push.src_offset = src_offset;
+ auprobe->ops = &push_xol_ops;
+ return 0;
+}
+
+/* Returns -ENOSYS if {branch|push}_xol_ops doesn't handle this insn */
+static int uprobe_setup_xol_ops(struct arch_uprobe *auprobe, struct insn *insn)
{
u8 opc1 = OPCODE1(insn);
int i;

switch (opc1) {
+ case 0x50 ... 0x57:
+ return uprobe_setup_push_ops(auprobe, insn, opc1);
+
case 0xeb: /* jmp 8 */
case 0xe9: /* jmp 32 */
case 0x90: /* prefix* + nop; same as jmp with .offs = 0 */
@@ -767,7 +863,7 @@ int arch_uprobe_analyze_insn(struct arch_uprobe *auprobe, struct mm_struct *mm,
if (ret)
return ret;

- ret = branch_setup_xol_ops(auprobe, &insn);
+ ret = uprobe_setup_xol_ops(auprobe, &insn);
if (ret != -ENOSYS)
return ret;

--
2.9.5


2017-11-09 21:54:56

by Yonghong Song

Subject: Re: [PATCH] uprobes/x86: emulate push insns for uprobe on x86



On 11/9/17 5:44 AM, Oleg Nesterov wrote:
> On 11/09, Yonghong Song wrote:
>>
>> This patch extends the emulation to "push <reg>"
>> insns. These insns are typical in the beginning
>> of the function. For example, bcc
>> in https://github.com/iovisor/bcc repo provides
>> tools to measure funclatency, detect memleak, etc.
>> The tools will place uprobes in the beginning of
>> function and possibly uretprobes at the end of function.
>> This patch is able to reduce the trap overhead for
>> uprobe from 2 to 1.
>
> OK, but to be honest I do not like the implementation, please see below.
>
>> +enum uprobe_insn_t {
>> + UPROBE_BRANCH_INSN = 0,
>> + UPROBE_PUSH_INSN = 1,
>> +};
>> +
>> struct uprobe_xol_ops;
>>
>> struct arch_uprobe {
>> @@ -42,6 +47,7 @@ struct arch_uprobe {
>> };
>>
>> const struct uprobe_xol_ops *ops;
>> + enum uprobe_insn_t insn_class;
>
> Why?
>
> I'd suggest to leave branch_xol_ops alone and add the new push_xol_ops{},
> the code will look much simpler.
>
> The only thing they can share is branch_post_xol_op() which is just
>
> regs->sp += sizeof_long();
> return -ERESTART;
>
> I think a bit of code duplication would be fine in this case.

Just prototyped. Agreed, having a separate uprobe_xol_ops for "push"
emulation is cleaner and better.

>
> And. Do you really need ->post_xol() method to emulate "push"? Why can't we
> simply execute it out-of-line if copy_to_user() fails?

Thanks for pointing it out. Agreed, we do not really need a post_xol for
"push"; falling back to xol execution is just fine.

Will address your other comments as well in the next revision.

>
> branch_post_xol_op() is needed because we can't execute "call" out-of-line,
> we need to restart and try again if copy_to_user() fails, but I do not
> understand why it is needed to emulate "push".
>
> Oleg.
>


2017-11-09 14:50:19

by Oleg Nesterov

Subject: Re: [PATCH] uprobes/x86: emulate push insns for uprobe on x86

On 11/09, Yonghong Song wrote:
>
> + if (insn_class == UPROBE_PUSH_INSN) {
> + src_ptr = get_push_reg_ptr(auprobe, regs);
> + reg_width = sizeof_long();
> + sp = regs->sp;
> + if (copy_to_user((void __user *)(sp - reg_width), src_ptr, reg_width))
> + return false;
> +
> + regs->sp = sp - reg_width;
> + regs->ip += 1 + (auprobe->push.rex_prefix != 0);
> + return true;

Another nit... You can rename push_ret_address() and use it here

	src_ptr = ...;
	if (push_ret_address(regs, *src_ptr))
		return false;

	regs->ip += ...;
	return true;

and I think get_push_reg_ptr() should just return "unsigned long", not the
pointer.

And again, please make a separate method for this code. Let me repeat: the
main reason for branch_xol_ops etc. is that we simply cannot execute these
insns out-of-line, we have to emulate them. "push" differs: the only reason
why we may want to emulate it is optimization.

Oleg.


2017-11-09 14:05:11

by Oleg Nesterov

Subject: Re: [PATCH] uprobes/x86: emulate push insns for uprobe on x86

On 11/09, Oleg Nesterov wrote:
>
> And. Do you really need ->post_xol() method to emulate "push"? Why can't we
> simply execute it out-of-line if copy_to_user() fails?
>
> branch_post_xol_op() is needed because we can't execute "call" out-of-line,
> we need to restart and try again if copy_to_user() fails, but I do not
> understand why it is needed to emulate "push".

If I wasn't clear, please see the comment in branch_clear_offset().

Oleg.


2017-11-09 13:45:17

by Oleg Nesterov

Subject: Re: [PATCH] uprobes/x86: emulate push insns for uprobe on x86

On 11/09, Yonghong Song wrote:
>
> This patch extends the emulation to "push <reg>"
> insns. These insns are typical in the beginning
> of the function. For example, bcc
> in https://github.com/iovisor/bcc repo provides
> tools to measure funclatency, detect memleak, etc.
> The tools will place uprobes in the beginning of
> function and possibly uretprobes at the end of function.
> This patch is able to reduce the trap overhead for
> uprobe from 2 to 1.

OK, but to be honest I do not like the implementation, please see below.

> +enum uprobe_insn_t {
> + UPROBE_BRANCH_INSN = 0,
> + UPROBE_PUSH_INSN = 1,
> +};
> +
> struct uprobe_xol_ops;
>
> struct arch_uprobe {
> @@ -42,6 +47,7 @@ struct arch_uprobe {
> };
>
> const struct uprobe_xol_ops *ops;
> + enum uprobe_insn_t insn_class;

Why?

I'd suggest leaving branch_xol_ops alone and adding a new push_xol_ops{};
the code will look much simpler.

The only thing they can share is branch_post_xol_op() which is just

regs->sp += sizeof_long();
return -ERESTART;

I think a bit of code duplication would be fine in this case.

And. Do you really need ->post_xol() method to emulate "push"? Why can't we
simply execute it out-of-line if copy_to_user() fails?

branch_post_xol_op() is needed because we can't execute "call" out-of-line;
we need to restart and try again if copy_to_user() fails, but I do not
understand why it is needed to emulate "push".

Oleg.

