Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753062AbdI1K6M (ORCPT ); Thu, 28 Sep 2017 06:58:12 -0400
Received: from terminus.zytor.com ([65.50.211.136]:49843 "EHLO terminus.zytor.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752847AbdI1K6J
	(ORCPT ); Thu, 28 Sep 2017 06:58:09 -0400
Date: Thu, 28 Sep 2017 03:54:33 -0700
From: tip-bot for Masami Hiramatsu
Message-ID:
Cc: ast@fb.com, mhiramat@kernel.org, linux-kernel@vger.kernel.org,
	rostedt@goodmis.org, peterz@infradead.org, ast@kernel.org,
	ananth@linux.vnet.ibm.com, hpa@zytor.com, mingo@kernel.org,
	paulmck@linux.vnet.ibm.com, torvalds@linux-foundation.org, tglx@linutronix.de
Reply-To: linux-kernel@vger.kernel.org, peterz@infradead.org, rostedt@goodmis.org,
	mhiramat@kernel.org, ast@fb.com, paulmck@linux.vnet.ibm.com, tglx@linutronix.de,
	torvalds@linux-foundation.org, ast@kernel.org, ananth@linux.vnet.ibm.com,
	hpa@zytor.com, mingo@kernel.org
In-Reply-To: <150581534039.32348.11331736206004264553.stgit@devbox>
References: <150581534039.32348.11331736206004264553.stgit@devbox>
To: linux-tip-commits@vger.kernel.org
Subject: [tip:perf/core] kprobes/x86: Remove IRQ disabling from ftrace-based/optimized kprobes
Git-Commit-ID: a19b2e3d783964d48d2b494439648e929bcdc976
X-Mailer: tip-git-log-daemon
Robot-ID:
Robot-Unsubscribe: Contact to get blacklisted from these emails
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset=UTF-8
Content-Disposition: inline
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Length: 3249
Lines: 93

Commit-ID:  a19b2e3d783964d48d2b494439648e929bcdc976
Gitweb:     https://git.kernel.org/tip/a19b2e3d783964d48d2b494439648e929bcdc976
Author:     Masami Hiramatsu
AuthorDate: Tue, 19 Sep 2017 19:02:20 +0900
Committer:  Ingo Molnar
CommitDate: Thu, 28 Sep 2017 09:25:50 +0200

kprobes/x86: Remove IRQ disabling from ftrace-based/optimized kprobes

Kprobes don't need to disable IRQs if they are called from the
ftrace/jump trampoline code, because Documentation/kprobes.txt says:

-----
Probe handlers are run with preemption disabled.  Depending on the
architecture and optimization state, handlers may also run with
interrupts disabled (e.g., kretprobe handlers and optimized kprobe
handlers run without interrupt disabled on x86/x86-64).
-----

So let's remove IRQ disabling from those handlers.

Signed-off-by: Masami Hiramatsu
Cc: Alexei Starovoitov
Cc: Alexei Starovoitov
Cc: Ananth N Mavinakayanahalli
Cc: Linus Torvalds
Cc: Paul E. McKenney
Cc: Peter Zijlstra
Cc: Steven Rostedt
Cc: Thomas Gleixner
Link: http://lkml.kernel.org/r/150581534039.32348.11331736206004264553.stgit@devbox
Signed-off-by: Ingo Molnar
---
 arch/x86/kernel/kprobes/ftrace.c | 9 ++-------
 arch/x86/kernel/kprobes/opt.c    | 4 ----
 2 files changed, 2 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kernel/kprobes/ftrace.c b/arch/x86/kernel/kprobes/ftrace.c
index bcfee4f..8dc0161 100644
--- a/arch/x86/kernel/kprobes/ftrace.c
+++ b/arch/x86/kernel/kprobes/ftrace.c
@@ -61,14 +61,11 @@ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
 {
 	struct kprobe *p;
 	struct kprobe_ctlblk *kcb;
-	unsigned long flags;
-
-	/* Disable irq for emulating a breakpoint and avoiding preempt */
-	local_irq_save(flags);
 
+	/* Preempt is disabled by ftrace */
 	p = get_kprobe((kprobe_opcode_t *)ip);
 	if (unlikely(!p) || kprobe_disabled(p))
-		goto end;
+		return;
 
 	kcb = get_kprobe_ctlblk();
 	if (kprobe_running()) {
@@ -91,8 +88,6 @@ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
 		 * resets current kprobe, and keep preempt count +1.
 		 */
 	}
-end:
-	local_irq_restore(flags);
 }
 NOKPROBE_SYMBOL(kprobe_ftrace_handler);

diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index 32c35cb..e941136 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -154,13 +154,10 @@ STACK_FRAME_NON_STANDARD(optprobe_template_func);
 static void
 optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
 {
-	unsigned long flags;
-
 	/* This is possible if op is under delayed unoptimizing */
 	if (kprobe_disabled(&op->kp))
 		return;
 
-	local_irq_save(flags);
 	preempt_disable();
 	if (kprobe_running()) {
 		kprobes_inc_nmissed_count(&op->kp);
@@ -182,7 +179,6 @@ optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
 		__this_cpu_write(current_kprobe, NULL);
 	}
 	preempt_enable_no_resched();
-	local_irq_restore(flags);
 }
 NOKPROBE_SYMBOL(optimized_callback);
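
For reference, the handler-side contract this change relies on can be seen in a
minimal kprobe module sketch like the one below. This is a hypothetical example,
not part of the patch; the probed symbol "_do_fork", the module and function
names are illustrative assumptions only. The point it shows: a pre-handler runs
with preemption disabled and, depending on how the probe was armed (int3 vs.
ftrace-based/optimized), possibly with IRQs still enabled, so the handler should
not assume or try to recreate an interrupts-off context itself.

/* Hypothetical example module (not part of this patch): a kprobe
 * pre-handler that relies on the documented execution context --
 * preemption is already disabled when the handler runs, so no
 * local_irq_save()/local_irq_restore() pair is used here.
 */
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/kprobes.h>

static struct kprobe kp = {
	.symbol_name = "_do_fork",	/* illustrative probe target */
};

static int handler_pre(struct kprobe *p, struct pt_regs *regs)
{
	/* Runs with preemption disabled; IRQs may or may not be disabled
	 * depending on the probe type, per Documentation/kprobes.txt.
	 */
	pr_info("pre_handler: addr = %p, ip = %lx\n", p->addr, regs->ip);
	return 0;
}

static int __init kprobe_example_init(void)
{
	int ret;

	kp.pre_handler = handler_pre;
	ret = register_kprobe(&kp);
	if (ret < 0) {
		pr_err("register_kprobe failed: %d\n", ret);
		return ret;
	}
	pr_info("kprobe planted at %p\n", kp.addr);
	return 0;
}

static void __exit kprobe_example_exit(void)
{
	unregister_kprobe(&kp);
	pr_info("kprobe at %p unregistered\n", kp.addr);
}

module_init(kprobe_example_init);
module_exit(kprobe_example_exit);
MODULE_LICENSE("GPL");

A handler that genuinely needs a short interrupts-off critical section would
have to disable IRQs explicitly around that section, rather than relying on
the probe infrastructure to have done it.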