Date: Tue, 30 Apr 2019 17:53:34 -0400
From: Steven Rostedt
To: Linus Torvalds
Cc: Andy Lutomirski, Peter Zijlstra, Nicolai Stange, Thomas Gleixner,
 Ingo Molnar, Borislav Petkov, "H. Peter Anvin", the arch/x86 maintainers,
 Josh Poimboeuf, Jiri Kosina, Miroslav Benes, Petr Mladek, Joe Lawrence,
 Shuah Khan, Konrad Rzeszutek Wilk, Tim Chen, Sebastian Andrzej Siewior,
 Mimi Zohar, Juergen Gross, Nick Desaulniers, Nayna Jain, Masahiro Yamada,
 Joerg Roedel, Linux List Kernel Mailing, live-patching@vger.kernel.org,
 "open list:KERNEL SELFTEST FRAMEWORK"
Subject: [RFC][PATCH v2] ftrace/x86: Emulate call function while updating in breakpoint handler
Message-ID: <20190430175334.423821c0@gandalf.local.home>
In-Reply-To: <20190430134913.4e29ce72@gandalf.local.home>
References: <20190428133826.3e142cfd@oasis.local.home>
 <20190430135602.GD2589@hirez.programming.kicks-ass.net>
 <20190430130359.330e895b@gandalf.local.home>
 <20190430132024.0f03f5b8@gandalf.local.home>
 <20190430134913.4e29ce72@gandalf.local.home>
X-Mailer: Claws Mail 3.17.3 (GTK+ 2.24.32; x86_64-pc-linux-gnu)

From: "Steven Rostedt (VMware)"

Nicolai Stange discovered[1] that if live kernel patching is enabled, and
the function tracer starts tracing the same function that was patched, then
the conversion of the fentry call site, during the transition from calling
the live kernel patch trampoline to calling the iterator trampoline, has a
slight window where it calls nothing at all. Live kernel patching depends
on ftrace always calling its code, because ftrace is what redirects
execution away from the function being patched. This small window allows
the old, buggy function to be called, which can cause undesirable results.

Nicolai submitted new patches[2], but these were controversial. As this is
similar to the static call emulation issues that came up a while ago[3],
Linus suggested using per-CPU data along with special trampolines[4] to
emulate the calls.
Linus's solution was aimed at text poke (which is mostly what the
static_call code did), but since ftrace has its own update mechanism, it
needs its own implementation. Having ftrace use its own per-CPU data and
its own set of specialized trampolines solves the missed-call problem that
live kernel patching suffers from.

[1] http://lkml.kernel.org/r/20180726104029.7736-1-nstange@suse.de
[2] http://lkml.kernel.org/r/20190427100639.15074-1-nstange@suse.de
[3] http://lkml.kernel.org/r/3cf04e113d71c9f8e4be95fb84a510f085aa4afa.1541711457.git.jpoimboe@redhat.com
[4] http://lkml.kernel.org/r/CAHk-=wh5OpheSU8Em_Q3Hg8qw_JtoijxOdPtHru6d+5K8TWM=A@mail.gmail.com

Inspired-by: Linus Torvalds
Cc: stable@vger.kernel.org
Fixes: b700e7f03df5 ("livepatch: kernel: add support for live patching")
Signed-off-by: Steven Rostedt (VMware)
---
Changes since v1:
 - Use "push push ret" instead of indirect jumps (Linus)
 - Handle 32 bit as well as non-SMP
 - Fool lockdep into thinking interrupts are enabled

 arch/x86/kernel/ftrace.c | 175 +++++++++++++++++++++++++++++++++++++--
 1 file changed, 170 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
index ef49517f6bb2..9160f5cc3b6d 100644
--- a/arch/x86/kernel/ftrace.c
+++ b/arch/x86/kernel/ftrace.c
@@ -17,6 +17,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -232,6 +233,9 @@ int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
 
 static unsigned long ftrace_update_func;
 
+/* Used within inline asm below */
+unsigned long ftrace_update_func_call;
+
 static int update_ftrace_func(unsigned long ip, void *new)
 {
 	unsigned char old[MCOUNT_INSN_SIZE];
@@ -259,6 +263,8 @@ int ftrace_update_ftrace_func(ftrace_func_t func)
 	unsigned char *new;
 	int ret;
 
+	ftrace_update_func_call = (unsigned long)func;
+
 	new = ftrace_call_replace(ip, (unsigned long)func);
 
 	ret = update_ftrace_func(ip, new);
@@ -280,6 +286,125 @@ static nokprobe_inline int is_ftrace_caller(unsigned long ip)
 	return 0;
 }
 
+/*
+ * We need to handle the "call func1" -> "call func2" case.
+ * Just skipping the call is not sufficient, as that would be like
+ * first turning the call into a nop and then updating it to the new
+ * call. But some users of ftrace require that calls never be missed.
+ *
+ * To emulate the call while converting the call site with a breakpoint,
+ * some trampolines are used along with per-CPU buffers.
+ * There are three trampolines for the call sites and three trampolines
+ * for updating the call in the ftrace trampoline, one for each of the
+ * following cases:
+ *
+ * 1) Interrupts were enabled when the breakpoint was hit
+ * 2) Interrupts were disabled when the breakpoint was hit
+ * 3) The breakpoint was hit in an NMI
+ *
+ * As per-CPU data is used, interrupts must be disabled to prevent them
+ * from corrupting the data. A separate NMI trampoline is used for the
+ * NMI case. Each trampoline pushes the return address of where the
+ * breakpoint was hit (saved in the per-CPU data), and then jumps to
+ * either ftrace_caller (which will loop through all registered
+ * ftrace_ops handlers depending on the ip address), or, if it is an
+ * ftrace trampoline call update, to the function stored in
+ * ftrace_update_func_call.
+ */
+extern asmlinkage void ftrace_emulate_call_irqon(void);
+extern asmlinkage void ftrace_emulate_call_irqoff(void);
+extern asmlinkage void ftrace_emulate_call_nmi(void);
+extern asmlinkage void ftrace_emulate_call_update_irqoff(void);
+extern asmlinkage void ftrace_emulate_call_update_irqon(void);
+extern asmlinkage void ftrace_emulate_call_update_nmi(void);
+
+static DEFINE_PER_CPU(void *, ftrace_bp_call_return);
+static DEFINE_PER_CPU(void *, ftrace_bp_call_nmi_return);
+
+#ifdef CONFIG_SMP
+#ifdef CONFIG_X86_64
+# define BP_CALL_RETURN		"%gs:ftrace_bp_call_return"
+# define BP_CALL_NMI_RETURN	"%gs:ftrace_bp_call_nmi_return"
+#else
+# define BP_CALL_RETURN		"%fs:ftrace_bp_call_return"
+# define BP_CALL_NMI_RETURN	"%fs:ftrace_bp_call_nmi_return"
+#endif
+#else /* SMP */
+# define BP_CALL_RETURN		"ftrace_bp_call_return"
+# define BP_CALL_NMI_RETURN	"ftrace_bp_call_nmi_return"
+#endif
+
+/* To hold the ftrace_caller address to push on the stack */
+void *ftrace_caller_func = (void *)ftrace_caller;
+
+asm(
+	".text\n"
+
+	/* Trampoline for call site update, interrupts enabled when hit */
+	".global ftrace_emulate_call_irqoff\n"
+	".type ftrace_emulate_call_irqoff, @function\n"
+	"ftrace_emulate_call_irqoff:\n\t"
+		"push "BP_CALL_RETURN"\n\t"
+		"push ftrace_caller_func\n\t"
+		"sti\n\t"
+		"ret\n\t"
+	".size ftrace_emulate_call_irqoff, .-ftrace_emulate_call_irqoff\n"
+
+	/* Trampoline for call site update, interrupts disabled when hit */
+	".global ftrace_emulate_call_irqon\n"
+	".type ftrace_emulate_call_irqon, @function\n"
+	"ftrace_emulate_call_irqon:\n\t"
+		"push "BP_CALL_RETURN"\n\t"
+		"push ftrace_caller_func\n\t"
+		"ret\n\t"
+	".size ftrace_emulate_call_irqon, .-ftrace_emulate_call_irqon\n"
+
+	/* Trampoline for call site update in an NMI */
+	".global ftrace_emulate_call_nmi\n"
+	".type ftrace_emulate_call_nmi, @function\n"
+	"ftrace_emulate_call_nmi:\n\t"
+		"push "BP_CALL_NMI_RETURN"\n\t"
+		"push ftrace_caller_func\n\t"
+		"ret\n\t"
+	".size ftrace_emulate_call_nmi, .-ftrace_emulate_call_nmi\n"
+
+	/* Trampoline for ftrace trampoline call update, interrupts enabled when hit */
+	".global ftrace_emulate_call_update_irqoff\n"
+	".type ftrace_emulate_call_update_irqoff, @function\n"
+	"ftrace_emulate_call_update_irqoff:\n\t"
+		"push "BP_CALL_RETURN"\n\t"
+		"push ftrace_update_func_call\n\t"
+		"sti\n\t"
+		"ret\n\t"
+	".size ftrace_emulate_call_update_irqoff, .-ftrace_emulate_call_update_irqoff\n"
+
+	/* Trampoline for ftrace trampoline call update, interrupts disabled when hit */
+	".global ftrace_emulate_call_update_irqon\n"
+	".type ftrace_emulate_call_update_irqon, @function\n"
+	"ftrace_emulate_call_update_irqon:\n\t"
+		"push "BP_CALL_RETURN"\n\t"
+		"push ftrace_update_func_call\n\t"
+		"ret\n\t"
+	".size ftrace_emulate_call_update_irqon, .-ftrace_emulate_call_update_irqon\n"
+
+	/* Trampoline for ftrace trampoline call update in an NMI */
+	".global ftrace_emulate_call_update_nmi\n"
+	".type ftrace_emulate_call_update_nmi, @function\n"
+	"ftrace_emulate_call_update_nmi:\n\t"
+		"push "BP_CALL_NMI_RETURN"\n\t"
+		"push ftrace_update_func_call\n\t"
+		"ret\n\t"
+	".size ftrace_emulate_call_update_nmi, .-ftrace_emulate_call_update_nmi\n"
+	".previous\n");
+
+STACK_FRAME_NON_STANDARD(ftrace_emulate_call_irqoff);
+STACK_FRAME_NON_STANDARD(ftrace_emulate_call_irqon);
+STACK_FRAME_NON_STANDARD(ftrace_emulate_call_nmi);
+STACK_FRAME_NON_STANDARD(ftrace_emulate_call_update_irqoff);
+STACK_FRAME_NON_STANDARD(ftrace_emulate_call_update_irqon);
+STACK_FRAME_NON_STANDARD(ftrace_emulate_call_update_nmi);
+
 /*
  * A breakpoint was added to the code address we are about to
  * modify, and this is the handle that will just skip over it.
@@ -295,12 +420,49 @@ int ftrace_int3_handler(struct pt_regs *regs)
 		return 0;
 
 	ip = regs->ip - 1;
-	if (!ftrace_location(ip) && !is_ftrace_caller(ip))
-		return 0;
-
-	regs->ip += MCOUNT_INSN_SIZE - 1;
+	if (ftrace_location(ip)) {
+		/* A breakpoint at the beginning of the function was hit */
+		if (in_nmi()) {
+			/* NMIs have their own trampoline */
+			this_cpu_write(ftrace_bp_call_nmi_return, (void *)ip + MCOUNT_INSN_SIZE);
+			regs->ip = (unsigned long) ftrace_emulate_call_nmi;
+			return 1;
+		}
+		this_cpu_write(ftrace_bp_call_return, (void *)ip + MCOUNT_INSN_SIZE);
+		if (regs->flags & X86_EFLAGS_IF) {
+			regs->flags &= ~X86_EFLAGS_IF;
+			regs->ip = (unsigned long) ftrace_emulate_call_irqoff;
+			/* Tell lockdep that we are enabling interrupts here */
+			trace_hardirqs_on();
+		} else {
+			regs->ip = (unsigned long) ftrace_emulate_call_irqon;
+		}
+		return 1;
+	} else if (is_ftrace_caller(ip)) {
+		/* An ftrace trampoline is being updated */
+		if (!ftrace_update_func_call) {
+			/* If it's a jump, just skip over it */
+			regs->ip += MCOUNT_INSN_SIZE - 1;
+			return 1;
+		}
+		if (in_nmi()) {
+			/* NMIs have their own trampoline */
+			this_cpu_write(ftrace_bp_call_nmi_return, (void *)ip + MCOUNT_INSN_SIZE);
+			regs->ip = (unsigned long) ftrace_emulate_call_update_nmi;
+			return 1;
+		}
+		this_cpu_write(ftrace_bp_call_return, (void *)ip + MCOUNT_INSN_SIZE);
+		if (regs->flags & X86_EFLAGS_IF) {
+			regs->flags &= ~X86_EFLAGS_IF;
+			regs->ip = (unsigned long) ftrace_emulate_call_update_irqoff;
+			trace_hardirqs_on();
+		} else {
+			regs->ip = (unsigned long) ftrace_emulate_call_update_irqon;
+		}
+		return 1;
+	}
 
-	return 1;
+	return 0;
 }
 NOKPROBE_SYMBOL(ftrace_int3_handler);
 
@@ -859,6 +1021,8 @@ void arch_ftrace_update_trampoline(struct ftrace_ops *ops)
 
 	func = ftrace_ops_get_func(ops);
 
+	ftrace_update_func_call = (unsigned long)func;
+
 	/* Do a safe modify in case the trampoline is executing */
 	new = ftrace_call_replace(ip, (unsigned long)func);
 
 	ret = update_ftrace_func(ip, new);
@@ -960,6 +1124,7 @@ static int ftrace_mod_jmp(unsigned long ip, void *func)
 {
 	unsigned char *new;
 
+	ftrace_update_func_call = 0;
 	new = ftrace_jmp_replace(ip, (unsigned long)func);
 
 	return update_ftrace_func(ip, new);
-- 
2.20.1