To: Mark Rutland, Will Deacon, Catalin Marinas, Julien Thierry,
	Steven Rostedt, Josh Poimboeuf, Ingo Molnar, Ard Biesheuvel,
	Arnd Bergmann, AKASHI Takahiro, Amit Daniel Kachhap
Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	live-patching@vger.kernel.org
Subject: [PATCH v7 2/3] arm64: implement ftrace with regs
In-Reply-To: <20190118163736.6A99268CEB@newverein.lst.de>
References: <20190118163736.6A99268CEB@newverein.lst.de>
Message-Id: <20190118163908.E338E68D93@newverein.lst.de>
Date: Fri, 18 Jan 2019 17:39:08 +0100 (CET)
From: duwe@lst.de (Torsten Duwe)

Once gcc 8 adds 2 NOPs at the beginning of each function, replace the
first NOP thus generated with a quick LR saver (move it to the scratch
register x9), so that the second replacement instruction, the call to
ftrace, does not clobber the value. Ftrace will then generate the
standard stack frames.
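To make the three states of a patch site concrete, here is a sketch
(illustration only, not part of the patch; "func" stands for any traced
function):

	func:				// as emitted by gcc 8 with
		nop			// -fpatchable-function-entry=2
		nop
		...

	func:				// after boot-time ftrace_make_nop():
		mov	x9, x30		// LR saver, installed ahead of time
		nop			// tracing disabled
		...

	func:				// after ftrace_make_call():
		mov	x9, x30		// LR saved to x9
		bl	ftrace_caller	// or ftrace_regs_caller; tracing on
		...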
Note that patchable-function-entry in GCC disables IPA-RA, which means
ABI register calling conventions are obeyed *and* scratch registers
such as x9 are available.

Introduce and handle an ftrace_regs_trampoline for module PLTs, right
after ftrace_trampoline, and double the size of this special section.

Signed-off-by: Torsten Duwe

---

Mark, if you see your ftrace entry macro code being represented correctly
here, please add your sign-off, as I've initially copied it from your mail.

---
 arch/arm64/include/asm/ftrace.h  |   17 +++-
 arch/arm64/include/asm/module.h  |    3 
 arch/arm64/kernel/entry-ftrace.S |  125 +++++++++++++++++++++++++++++++++++++--
 arch/arm64/kernel/ftrace.c       |  114 ++++++++++++++++++++++++++---------
 arch/arm64/kernel/module-plts.c  |    3 
 arch/arm64/kernel/module.c       |    2 
 6 files changed, 227 insertions(+), 37 deletions(-)

--- a/arch/arm64/include/asm/ftrace.h
+++ b/arch/arm64/include/asm/ftrace.h
@@ -14,9 +14,24 @@
 #include <asm/insn.h>
 
 #define HAVE_FUNCTION_GRAPH_FP_TEST
-#define MCOUNT_ADDR		((unsigned long)_mcount)
 #define MCOUNT_INSN_SIZE	AARCH64_INSN_SIZE
 
+/*
+ * DYNAMIC_FTRACE_WITH_REGS is implemented by adding 2 NOPs at the beginning
+ * of each function, with the second NOP actually calling ftrace. Contrary
+ * to a classic _mcount call, the call instruction to be modified is thus
+ * the second one, and not the only one.
+ */
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+#define ARCH_SUPPORTS_FTRACE_OPS 1
+#define REC_IP_BRANCH_OFFSET	AARCH64_INSN_SIZE
+/* All we need is some magic value. Simply use "_mCount:" */
+#define MCOUNT_ADDR		(0x5f6d436f756e743a)
+#else
+#define REC_IP_BRANCH_OFFSET	0
+#define MCOUNT_ADDR		((unsigned long)_mcount)
+#endif
+
 #ifndef __ASSEMBLY__
 #include <linux/compat.h>
 
--- a/arch/arm64/kernel/entry-ftrace.S
+++ b/arch/arm64/kernel/entry-ftrace.S
@@ -10,6 +10,7 @@
  */
 
 #include <linux/linkage.h>
+#include <asm/asm-offsets.h>
 #include <asm/assembler.h>
 #include <asm/ftrace.h>
 #include <asm/insn.h>
@@ -124,6 +125,7 @@ EXPORT_SYMBOL(_mcount)
 NOKPROBE(_mcount)
 
 #else /* CONFIG_DYNAMIC_FTRACE */
+#ifndef CONFIG_DYNAMIC_FTRACE_WITH_REGS
 /*
  * _mcount() is used to build the kernel with -pg option, but all the branch
  * instructions to _mcount() are replaced to NOP initially at kernel start up,
@@ -163,11 +165,6 @@ GLOBAL(ftrace_graph_call)		// ftrace_gra
 	mcount_exit
 ENDPROC(ftrace_caller)
 
-#endif /* CONFIG_DYNAMIC_FTRACE */
-
-ENTRY(ftrace_stub)
-	ret
-ENDPROC(ftrace_stub)
 
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 /*
@@ -187,7 +184,125 @@ ENTRY(ftrace_graph_caller)
 	mcount_exit
 ENDPROC(ftrace_graph_caller)
 
+#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
+
+#else /* CONFIG_DYNAMIC_FTRACE_WITH_REGS */
+
+	.macro	ftrace_regs_entry, allregs=0
+	/* make room for pt_regs, plus a callee frame */
+	sub	sp, sp, #(S_FRAME_SIZE + 16)
+
+	/* save function arguments */
+	stp	x0, x1, [sp, #S_X0]
+	stp	x2, x3, [sp, #S_X2]
+	stp	x4, x5, [sp, #S_X4]
+	stp	x6, x7, [sp, #S_X6]
+	stp	x8, x9, [sp, #S_X8]
+	.if \allregs == 1
+	stp	x10, x11, [sp, #S_X10]
+	stp	x12, x13, [sp, #S_X12]
+	stp	x14, x15, [sp, #S_X14]
+	stp	x16, x17, [sp, #S_X16]
+	stp	x18, x19, [sp, #S_X18]
+	stp	x20, x21, [sp, #S_X20]
+	stp	x22, x23, [sp, #S_X22]
+	stp	x24, x25, [sp, #S_X24]
+	stp	x26, x27, [sp, #S_X26]
+	.endif
+
+	/* Save fp and x28, which is used in this function. */
+	stp	x28, x29, [sp, #S_X28]
+
+	/* The stack pointer as it was on ftrace_caller entry...
+	 */
+	add	x28, sp, #(S_FRAME_SIZE + 16)
+	/* ...and the Link Register at callee entry */
+	stp	x9, x28, [sp, #S_LR]	/* to pt_regs.r[30] and .sp */
+
+	/* The program counter just after the ftrace call site */
+	str	lr, [sp, #S_PC]
+
+	/* Now fill in callee's preliminary stackframe. */
+	stp	x29, x9, [sp, #S_FRAME_SIZE]
+	/* Let FP point to it. */
+	add	x29, sp, #S_FRAME_SIZE
+
+	/* Our stackframe, stored inside pt_regs. */
+	stp	x29, x30, [sp, #S_STACKFRAME]
+	add	x29, sp, #S_STACKFRAME
+	.endm
+
+ENTRY(ftrace_regs_caller)
+	ftrace_regs_entry	1
+	b	ftrace_common
+ENDPROC(ftrace_regs_caller)
+
+ENTRY(ftrace_caller)
+	ftrace_regs_entry	0
+	b	ftrace_common
+ENDPROC(ftrace_caller)
+
+ENTRY(ftrace_common)
+
+	mov	x3, sp			/* pt_regs are @sp */
+	ldr_l	x2, function_trace_op, x0
+	mov	x1, x9			/* parent IP */
+	sub	x0, lr, #8		/* function entry == IP */
+
+GLOBAL(ftrace_call)
+	bl	ftrace_stub
+
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+GLOBAL(ftrace_graph_call)		// ftrace_graph_caller();
+	nop				// If enabled, this will be replaced
+					// with "b ftrace_graph_caller"
+#endif
+
+/*
+ * GCC's patchable-function-entry implicitly disables IPA-RA,
+ * so all non-argument registers are either scratch / dead
+ * or callee-saved (within the ftrace framework). Function
+ * arguments of the call we are intercepting right now however
+ * need to be preserved in any case.
+ */
+ftrace_common_return:
+	/* restore function args */
+	ldp	x0, x1, [sp]
+	ldp	x2, x3, [sp, #S_X2]
+	ldp	x4, x5, [sp, #S_X4]
+	ldp	x6, x7, [sp, #S_X6]
+	ldr	x8, [sp, #S_X8]
+
+	/* restore fp and x28 */
+	ldp	x28, x29, [sp, #S_X28]
+
+	ldr	lr, [sp, #S_LR]
+	ldr	x9, [sp, #S_PC]
+	/* clean up both frames, ours and callee preliminary */
+	add	sp, sp, #S_FRAME_SIZE + 16
+
+	ret	x9
+ENDPROC(ftrace_common)
+
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+ENTRY(ftrace_graph_caller)
+	ldr	x0, [sp, #S_PC]		/* pc */
+	sub	x0, x0, #8		/* start of the ftrace call site */
+	add	x1, sp, #S_LR		/* &lr */
+	ldr	x2, [sp, #S_FRAME_SIZE]	/* fp */
+	bl	prepare_ftrace_return
+	b	ftrace_common_return
+ENDPROC(ftrace_graph_caller)
+#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
+#endif /* CONFIG_DYNAMIC_FTRACE_WITH_REGS */
+#endif /* CONFIG_DYNAMIC_FTRACE */
+
+ENTRY(ftrace_stub)
+	ret
+ENDPROC(ftrace_stub)
+
+
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
 /*
  * void return_to_handler(void)
  *
--- a/arch/arm64/kernel/ftrace.c
+++ b/arch/arm64/kernel/ftrace.c
@@ -65,19 +65,67 @@ int ftrace_update_ftrace_func(ftrace_fun
 	return ftrace_modify_code(pc, 0, new, false);
 }
 
+#ifdef CONFIG_ARM64_MODULE_PLTS
+static int install_ftrace_trampoline(struct module *mod, unsigned long *addr)
+{
+	struct plt_entry trampoline, *mod_trampoline;
+
+	/*
+	 * Select the matching slot in
+	 * mod->arch.ftrace_trampolines[MOD_ARCH_NR_FTRACE_TRAMPOLINES];
+	 * the assignment to the various ftrace functions happens here.
+	 */
+	if (*addr == FTRACE_ADDR)
+		mod_trampoline = &mod->arch.ftrace_trampolines[0];
+	else if (*addr == FTRACE_REGS_ADDR)
+		mod_trampoline = &mod->arch.ftrace_trampolines[1];
+	else
+		return -EINVAL;
+
+	trampoline = get_plt_entry(*addr, mod_trampoline);
+
+	if (!plt_entries_equal(mod_trampoline, &trampoline)) {
+		/* point the trampoline at our ftrace entry point */
+		module_disable_ro(mod);
+		*mod_trampoline = trampoline;
+		module_enable_ro(mod, true);
+
+		/* update trampoline before patching in the branch */
+		smp_wmb();
+	}
+	*addr = (unsigned long)(void *)mod_trampoline;
+
+	return 0;
+}
+#endif
+
+/*
+ * Ftrace with regs generates the tracer calls as close as possible to
+ * the function entry; no stack frame has been set up at that point.
+ * In order to make another call, e.g. to ftrace_caller, the LR must be
+ * saved from being overwritten.
+ * Between two functions, and with IPA-RA turned off, the scratch registers
+ * are available, so move the LR to x9 before calling into ftrace.
+ * "mov x9, lr" is officially an alias of "orr x9, xzr, lr".
+ */
+#define MOV_X9_X30 aarch64_insn_gen_logical_shifted_reg( \
+		AARCH64_INSN_REG_9, AARCH64_INSN_REG_ZR, \
+		AARCH64_INSN_REG_LR, 0, AARCH64_INSN_VARIANT_64BIT, \
+		AARCH64_INSN_LOGIC_ORR)
+
 /*
  * Turn on the call to ftrace_caller() in instrumented function
  */
 int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
 {
-	unsigned long pc = rec->ip;
+	unsigned long pc = rec->ip + REC_IP_BRANCH_OFFSET;
 	u32 old, new;
 	long offset = (long)pc - (long)addr;
 
 	if (offset < -SZ_128M || offset >= SZ_128M) {
 #ifdef CONFIG_ARM64_MODULE_PLTS
-		struct plt_entry trampoline;
 		struct module *mod;
+		int ret;
 
 		/*
 		 * On kernels that support module PLTs, the offset between the
@@ -96,32 +144,14 @@ int ftrace_make_call(struct dyn_ftrace *
 		if (WARN_ON(!mod))
 			return -EINVAL;
 
-		/*
-		 * There is only one ftrace trampoline per module. For now,
-		 * this is not a problem since on arm64, all dynamic ftrace
-		 * invocations are routed via ftrace_caller(). This will need
-		 * to be revisited if support for multiple ftrace entry points
-		 * is added in the future, but for now, the pr_err() below
-		 * deals with a theoretical issue only.
-		 */
-		trampoline = get_plt_entry(addr, mod->arch.ftrace_trampoline);
-		if (!plt_entries_equal(mod->arch.ftrace_trampoline,
-				       &trampoline)) {
-			if (!plt_entries_equal(mod->arch.ftrace_trampoline,
-					       &(struct plt_entry){})) {
-				pr_err("ftrace: far branches to multiple entry points unsupported inside a single module\n");
-				return -EINVAL;
-			}
-
-			/* point the trampoline to our ftrace entry point */
-			module_disable_ro(mod);
-			*mod->arch.ftrace_trampoline = trampoline;
-			module_enable_ro(mod, true);
+		/* Check against our well-known list of ftrace entry points */
+		if (addr == FTRACE_ADDR || addr == FTRACE_REGS_ADDR) {
+			ret = install_ftrace_trampoline(mod, &addr);
+			if (ret < 0)
+				return ret;
+		} else
+			return -EINVAL;
 
-			/* update trampoline before patching in the branch */
-			smp_wmb();
-		}
-		addr = (unsigned long)(void *)mod->arch.ftrace_trampoline;
 #else /* CONFIG_ARM64_MODULE_PLTS */
 		return -EINVAL;
 #endif /* CONFIG_ARM64_MODULE_PLTS */
@@ -133,17 +163,45 @@ int ftrace_make_call(struct dyn_ftrace *
 	return ftrace_modify_code(pc, old, new, true);
 }
 
+#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
+int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
+			unsigned long addr)
+{
+	unsigned long pc = rec->ip + REC_IP_BRANCH_OFFSET;
+	u32 old, new;
+
+	old = aarch64_insn_gen_branch_imm(pc, old_addr, true);
+	new = aarch64_insn_gen_branch_imm(pc, addr, true);
+
+	return ftrace_modify_code(pc, old, new, true);
+}
+#endif
+
 /*
  * Turn off the call to ftrace_caller() in instrumented function
  */
 int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec,
 		    unsigned long addr)
 {
-	unsigned long pc = rec->ip;
+	unsigned long pc = rec->ip + REC_IP_BRANCH_OFFSET;
 	bool validate = true;
 	u32 old = 0, new;
 	long offset = (long)pc - (long)addr;
 
+	/*
+	 * -fpatchable-function-entry= does not generate a profiling call
+	 * initially; the NOPs are already there. So instead,
+	 * put the LR saver there ahead of time, in order to avoid
+	 * any race condition over patching 2 instructions.
+	 */
+	if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS) &&
+	    addr == MCOUNT_ADDR) {
+		old = aarch64_insn_gen_nop();
+		new = MOV_X9_X30;
+		pc -= REC_IP_BRANCH_OFFSET;
+		return ftrace_modify_code(pc, old, new, validate);
+	}
+
 	if (offset < -SZ_128M || offset >= SZ_128M) {
 #ifdef CONFIG_ARM64_MODULE_PLTS
 		u32 replaced;
--- a/arch/arm64/include/asm/module.h
+++ b/arch/arm64/include/asm/module.h
@@ -32,7 +32,8 @@ struct mod_arch_specific {
 	struct mod_plt_sec	init;
 
 	/* for CONFIG_DYNAMIC_FTRACE */
-	struct plt_entry	*ftrace_trampoline;
+	struct plt_entry	*ftrace_trampolines;
+#define MOD_ARCH_NR_FTRACE_TRAMPOLINES	2
 };
 #endif
 
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -452,7 +452,7 @@ int module_finalize(const Elf_Ehdr *hdr,
 #ifdef CONFIG_ARM64_MODULE_PLTS
 		if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE) &&
 		    !strcmp(".text.ftrace_trampoline", secstrs + s->sh_name))
-			me->arch.ftrace_trampoline = (void *)s->sh_addr;
+			me->arch.ftrace_trampolines = (void *)s->sh_addr;
 #endif
 	}
 
--- a/arch/arm64/kernel/module-plts.c
+++ b/arch/arm64/kernel/module-plts.c
@@ -333,7 +333,8 @@ int module_frob_arch_sections(Elf_Ehdr *
 		tramp->sh_type = SHT_NOBITS;
 		tramp->sh_flags = SHF_EXECINSTR | SHF_ALLOC;
 		tramp->sh_addralign = __alignof__(struct plt_entry);
-		tramp->sh_size = sizeof(struct plt_entry);
+		tramp->sh_size = MOD_ARCH_NR_FTRACE_TRAMPOLINES
+				 * sizeof(struct plt_entry);
 	}
 
 	return 0;
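
P.S.: for anyone wondering about the MCOUNT_ADDR hunk above: the magic
value 0x5f6d436f756e743a is simply the ASCII bytes of the string
"_mCount:", most significant byte first. A standalone sketch to verify
this (not part of the patch; the constant is repeated here purely for
illustration):

	/* prints "_mCount:" */
	#include <stdio.h>

	int main(void)
	{
		unsigned long long magic = 0x5f6d436f756e743aULL;
		int shift;

		for (shift = 56; shift >= 0; shift -= 8)
			putchar((int)((magic >> shift) & 0xff));
		putchar('\n');
		return 0;
	}

As the hunk's comment says, any unmistakable magic value would do;
ftrace_make_nop() only compares addr against it to recognize the
initial "install the LR saver" request.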