From: Yonghong Song <yhs@fb.com>
Subject: [PATCH][v4] uprobes/x86: emulate push insns for uprobe on x86
Date: Wed, 15 Nov 2017 09:59:28 -0800
Message-ID: <20171115175928.3821714-1-yhs@fb.com>
X-Mailer: git-send-email 2.9.5
X-Mailing-List: linux-kernel@vger.kernel.org

Uprobe is a tracing mechanism for userspace programs. A typical uprobe
incurs the overhead of two traps: the first trap is caused by the trap
(breakpoint) insn that replaces the original insn, and the second trap
is taken to execute the original, displaced insn in user space. To
reduce this overhead, the kernel provides hooks for architectures to
emulate the original insn and skip the second trap. On x86, emulation
is currently done for certain branch insns.

This patch extends the emulation to "push <reg>" insns, which are
typical at the beginning of a function. For example, bcc
(https://github.com/iovisor/bcc) provides tools to measure function
latency (funclatency), detect memory leaks (memleak), etc. These tools
place uprobes at the beginning of a function and possibly uretprobes at
the end of the function. This patch reduces the trap overhead for such
a uprobe from 2 to 1.
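To make the mechanism concrete, below is a minimal user-space model of
what emulating a "push <reg>" amounts to: write the register value just
below the stack pointer, move the stack pointer down, and advance the
instruction pointer past the push, so no second trap is needed. This is
an illustration only; every name in it is invented for the example, and
the real code in the diff operates on struct pt_regs and uses
copy_to_user().

  #include <stdio.h>
  #include <string.h>

  struct mock_regs {
          unsigned long sp;       /* user stack pointer */
          unsigned long ip;       /* user instruction pointer */
  };

  /* Model of the emulation: "push" the value, then skip the insn. */
  static void mock_emulate_push(struct mock_regs *regs, unsigned char *stack,
                                unsigned long reg_val, unsigned int insn_len)
  {
          unsigned long new_sp = regs->sp - sizeof(unsigned long);

          /* the kernel uses copy_to_user() on the real user stack here */
          memcpy(&stack[new_sp], &reg_val, sizeof(reg_val));
          regs->sp = new_sp;
          regs->ip += insn_len;   /* no single-step of the original insn */
  }

  int main(void)
  {
          unsigned char stack[64] = { 0 };
          struct mock_regs regs = { .sp = sizeof(stack), .ip = 0x1000 };
          unsigned long top;

          mock_emulate_push(&regs, stack, 0x42, 1); /* e.g. 1-byte push %rbp */
          memcpy(&top, &stack[regs.sp], sizeof(top));
          printf("sp=%lu ip=%#lx pushed=%#lx\n", regs.sp, regs.ip, top);
          return 0;
  }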
Without this patch, a uretprobe typically incurs three traps. With this
patch, if the function starts with a "push" insn, the number of traps
is reduced from 3 to 2.

An experiment was conducted on two local VMs, a Fedora 26 64-bit VM and
a 32-bit VM, each with 4 processors and 4GB of memory, booted with the
latest tip repo (plus this patch). The host is a MacBook with an Intel
i7 processor. The test program looks like:

  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>
  #include <sys/time.h>

  static void test() __attribute__((noinline));
  void test() {}

  int main()
  {
          struct timeval start, end;

          gettimeofday(&start, NULL);
          for (int i = 0; i < 1000000; i++) {
                  test();
          }
          gettimeofday(&end, NULL);

          printf("%ld\n", ((end.tv_sec * 1000000 + end.tv_usec)
                           - (start.tv_sec * 1000000 + start.tv_usec)));
          return 0;
  }

The program is compiled without optimization, and the first insn for
function "test" is "push %rbp". The host is relatively idle.

Before the test run, the uprobe is inserted as below ("<binary>:<offset>"
stands for the path of the test binary and the offset of function "test"
within it).

For uprobe:
  echo 'p <binary>:<offset>' > /sys/kernel/debug/tracing/uprobe_events
  echo 1 > /sys/kernel/debug/tracing/events/uprobes/enable

For uretprobe:
  echo 'r <binary>:<offset>' > /sys/kernel/debug/tracing/uprobe_events
  echo 1 > /sys/kernel/debug/tracing/events/uprobes/enable

Unit: microsecond (usec) per loop iteration

  x86_64            W/ this patch    W/O this patch
  uprobe                 1.55              3.1
  uretprobe              2.0               3.6

  x86_32            W/ this patch    W/O this patch
  uprobe                 1.41              3.5
  uretprobe              1.75              4.0

You can see that this patch significantly reduces the overhead: 50% for
uprobe and 44% for uretprobe on x86_64, and even more on x86_32.

Signed-off-by: Yonghong Song <yhs@fb.com>
---
 arch/x86/include/asm/uprobes.h |   4 ++
 arch/x86/kernel/uprobes.c      | 107 +++++++++++++++++++++++++++++++++++++++--
 2 files changed, 107 insertions(+), 4 deletions(-)

Changelogs:
v3 -> v4:
  . Revert most of the v3 change, as 32-bit emulation is not really
    working on the x86_64 platform: among other issues,
    emulate_push_stack() needs to account for a 32-bit app on a 64-bit
    platform. A separate effort is ongoing to address this issue.
v2 -> v3:
  . Do not emulate 32-bit applications on x86_64 platforms.
v1 -> v2:
  . Address Oleg's comments.
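For reference, the opcode-to-register switch statements in the diff
below simply follow the x86 encoding: a single-byte push is 0x50 plus
the register number, and a preceding REX.B prefix (0x41) selects r8-r15
instead of rax-rdi. A small stand-alone sketch of that mapping
(illustrative only, not kernel code; the names are made up here):

  #include <stdio.h>

  static const char * const gpr64[2][8] = {
          { "rax", "rcx", "rdx", "rbx", "rsp", "rbp", "rsi", "rdi" }, /* no REX.B */
          { "r8",  "r9",  "r10", "r11", "r12", "r13", "r14", "r15" }, /* REX.B set */
  };

  /* Decode a 1- or 2-byte "push <reg>"; returns the register name or NULL. */
  static const char *decode_push(const unsigned char *insn, unsigned int len)
  {
          int rexb = 0;

          if (len == 2 && insn[0] == 0x41) {      /* REX.B prefix */
                  rexb = 1;
                  insn++;
          } else if (len != 1) {
                  return NULL;
          }
          if (insn[0] < 0x50 || insn[0] > 0x57)
                  return NULL;
          return gpr64[rexb][insn[0] - 0x50];
  }

  int main(void)
  {
          const unsigned char push_rbp[] = { 0x55 };        /* push %rbp */
          const unsigned char push_r12[] = { 0x41, 0x54 };  /* push %r12 */

          printf("%s %s\n", decode_push(push_rbp, 1), decode_push(push_r12, 2));
          return 0;
  }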
diff --git a/arch/x86/include/asm/uprobes.h b/arch/x86/include/asm/uprobes.h
index 74f4c2f..d8bfa98 100644
--- a/arch/x86/include/asm/uprobes.h
+++ b/arch/x86/include/asm/uprobes.h
@@ -53,6 +53,10 @@ struct arch_uprobe {
 			u8	fixups;
 			u8	ilen;
 		}			defparam;
+		struct {
+			u8	reg_offset;	/* to the start of pt_regs */
+			u8	ilen;
+		}			push;
 	};
 };
diff --git a/arch/x86/kernel/uprobes.c b/arch/x86/kernel/uprobes.c
index a3755d2..85c7ef2 100644
--- a/arch/x86/kernel/uprobes.c
+++ b/arch/x86/kernel/uprobes.c
@@ -528,11 +528,11 @@ static int default_pre_xol_op(struct arch_uprobe *auprobe, struct pt_regs *regs)
 	return 0;
 }
 
-static int push_ret_address(struct pt_regs *regs, unsigned long ip)
+static int emulate_push_stack(struct pt_regs *regs, unsigned long val)
 {
 	unsigned long new_sp = regs->sp - sizeof_long();
 
-	if (copy_to_user((void __user *)new_sp, &ip, sizeof_long()))
+	if (copy_to_user((void __user *)new_sp, &val, sizeof_long()))
 		return -EFAULT;
 
 	regs->sp = new_sp;
@@ -566,7 +566,7 @@ static int default_post_xol_op(struct arch_uprobe *auprobe, struct pt_regs *regs
 		regs->ip += correction;
 	} else if (auprobe->defparam.fixups & UPROBE_FIX_CALL) {
 		regs->sp += sizeof_long(); /* Pop incorrect return address */
-		if (push_ret_address(regs, utask->vaddr + auprobe->defparam.ilen))
+		if (emulate_push_stack(regs, utask->vaddr + auprobe->defparam.ilen))
 			return -ERESTART;
 	}
 	/* popf; tell the caller to not touch TF */
@@ -655,7 +655,7 @@ static bool branch_emulate_op(struct arch_uprobe *auprobe, struct pt_regs *regs)
 	 *
 	 * But there is corner case, see the comment in ->post_xol().
 	 */
-		if (push_ret_address(regs, new_ip))
+		if (emulate_push_stack(regs, new_ip))
 			return false;
 	} else if (!check_jmp_cond(auprobe, regs)) {
 		offs = 0;
@@ -665,6 +665,16 @@ static bool branch_emulate_op(struct arch_uprobe *auprobe, struct pt_regs *regs)
 	return true;
 }
 
+static bool push_emulate_op(struct arch_uprobe *auprobe, struct pt_regs *regs)
+{
+	unsigned long *src_ptr = (void *)regs + auprobe->push.reg_offset;
+
+	if (emulate_push_stack(regs, *src_ptr))
+		return false;
+	regs->ip += auprobe->push.ilen;
+	return true;
+}
+
 static int branch_post_xol_op(struct arch_uprobe *auprobe, struct pt_regs *regs)
 {
 	BUG_ON(!branch_is_call(auprobe));
@@ -703,6 +713,10 @@ static const struct uprobe_xol_ops branch_xol_ops = {
 	.post_xol = branch_post_xol_op,
 };
 
+static const struct uprobe_xol_ops push_xol_ops = {
+	.emulate  = push_emulate_op,
+};
+
 /* Returns -ENOSYS if branch_xol_ops doesn't handle this insn */
 static int branch_setup_xol_ops(struct arch_uprobe *auprobe, struct insn *insn)
 {
@@ -750,6 +764,87 @@ static int branch_setup_xol_ops(struct arch_uprobe *auprobe, struct insn *insn)
 	return 0;
 }
 
+/* Returns -ENOSYS if push_xol_ops doesn't handle this insn */
+static int push_setup_xol_ops(struct arch_uprobe *auprobe, struct insn *insn)
+{
+	u8 opc1 = OPCODE1(insn), reg_offset = 0;
+
+	if (opc1 < 0x50 || opc1 > 0x57)
+		return -ENOSYS;
+
+	if (insn->length > 2)
+		return -ENOSYS;
+	if (insn->length == 2) {
+		/* only support rex_prefix 0x41 (x64 only) */
+#ifdef CONFIG_X86_64
+		if (insn->rex_prefix.nbytes != 1 ||
+		    insn->rex_prefix.bytes[0] != 0x41)
+			return -ENOSYS;
+
+		switch (opc1) {
+		case 0x50:
+			reg_offset = offsetof(struct pt_regs, r8);
+			break;
+		case 0x51:
+			reg_offset = offsetof(struct pt_regs, r9);
+			break;
+		case 0x52:
+			reg_offset = offsetof(struct pt_regs, r10);
+			break;
+		case 0x53:
+			reg_offset = offsetof(struct pt_regs, r11);
+			break;
+		case 0x54:
+			reg_offset = offsetof(struct pt_regs, r12);
+			break;
+		case 0x55:
+			reg_offset = offsetof(struct pt_regs, r13);
+			break;
+		case 0x56:
+			reg_offset = offsetof(struct pt_regs, r14);
+			break;
+		case 0x57:
+			reg_offset = offsetof(struct pt_regs, r15);
+			break;
+		}
+#else
+		return -ENOSYS;
+#endif
+	} else {
+		switch (opc1) {
+		case 0x50:
+			reg_offset = offsetof(struct pt_regs, ax);
+			break;
+		case 0x51:
+			reg_offset = offsetof(struct pt_regs, cx);
+			break;
+		case 0x52:
+			reg_offset = offsetof(struct pt_regs, dx);
+			break;
+		case 0x53:
+			reg_offset = offsetof(struct pt_regs, bx);
+			break;
+		case 0x54:
+			reg_offset = offsetof(struct pt_regs, sp);
+			break;
+		case 0x55:
+			reg_offset = offsetof(struct pt_regs, bp);
+			break;
+		case 0x56:
+			reg_offset = offsetof(struct pt_regs, si);
+			break;
+		case 0x57:
+			reg_offset = offsetof(struct pt_regs, di);
+			break;
+		}
+	}
+
+	auprobe->push.reg_offset = reg_offset;
+	auprobe->push.ilen = insn->length;
+	auprobe->ops = &push_xol_ops;
+	return 0;
+}
+
 /**
  * arch_uprobe_analyze_insn - instruction analysis including validity and fixups.
  * @mm: the probed address space.
@@ -771,6 +866,10 @@ int arch_uprobe_analyze_insn(struct arch_uprobe *auprobe, struct mm_struct *mm,
 	if (ret != -ENOSYS)
 		return ret;
 
+	ret = push_setup_xol_ops(auprobe, &insn);
+	if (ret != -ENOSYS)
+		return ret;
+
 	/*
 	 * Figure out which fixups default_post_xol_op() will need to perform,
 	 * and annotate defparam->fixups accordingly.
-- 
2.9.5
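Not part of the patch, but as a quick way to check whether a given
function would hit the new emulation path, one can inspect its first
byte(s) from user space. A rough sketch, reusing a no-op test() like
the one in the benchmark above, and assuming the text segment is mapped
readable (as it is on typical Linux/x86 builds):

  #include <stdio.h>

  static void test(void) __attribute__((noinline));
  void test(void) {}

  int main(void)
  {
          /* reading code bytes through a function pointer is a common
           * (non-portable) trick that works on typical Linux/x86 builds */
          const unsigned char *p = (const unsigned char *)test;

          if (p[0] == 0x41 && p[1] >= 0x50 && p[1] <= 0x57)
                  printf("starts with REX.B push (r8-r15): emulatable\n");
          else if (p[0] >= 0x50 && p[0] <= 0x57)
                  printf("starts with push 0x%02x: emulatable\n", p[0]);
          else
                  printf("first byte 0x%02x: not a single-byte push\n", p[0]);
          return 0;
  }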