Date: Sat, 6 Jan 2018 01:30:59 +0100
From: Borislav Petkov
To: Josh Poimboeuf, "Woodhouse, David"
Cc: tglx@linutronix.de, linux-kernel@vger.kernel.org, tim.c.chen@linux.intel.com,
	peterz@infradead.org, torvalds@linux-foundation.org, ak@linux.intel.com,
	riel@redhat.com, keescook@google.com, gnomes@lxorguk.ukuu.org.uk,
	pjt@google.com, dave.hansen@intel.com, luto@amacapital.net,
	jikos@kernel.org, gregkh@linux-foundation.org
Subject: Re: [PATCH v3 01/13] x86/retpoline: Add initial retpoline support
Message-ID: <20180106003059.jhwx4ouc7xbt7yw6@pd.tnic>
References: <1515058213.12987.89.camel@amazon.co.uk>
	<20180104143710.8961-1-dwmw@amazon.co.uk>
	<1515160619.29312.126.camel@amazon.co.uk>
	<1515170506.29312.149.camel@amazon.co.uk>
	<20180105164505.xpw5pefxsyu3z56e@pd.tnic>
	<20180105170806.mtylu2zagfxyj3ry@treble>
In-Reply-To: <20180105170806.mtylu2zagfxyj3ry@treble>

On Fri, Jan 05, 2018 at 11:08:06AM -0600, Josh Poimboeuf wrote:
> I seem to recall that we also discussed the need for this for converting
> pvops to use alternatives, though the "why" is eluding me at the moment.

Ok, here's something which seems to work in my VM here. I'll continue
playing with it tomorrow. Josh, if you have some example sequences for me
to try, send them my way pls.

Anyway, here's an example:

	alternative("", "xor %%rdi, %%rdi; jmp startup_64", X86_FEATURE_K8);

which did this:

[    0.921013] apply_alternatives: feat: 3*32+4, old: (ffffffff81027429, len: 8), repl: (ffffffff824759d2, len: 8), pad: 8
[    0.924002] ffffffff81027429: old_insn: 90 90 90 90 90 90 90 90
[    0.928003] ffffffff824759d2: rpl_insn: 48 31 ff e9 26 a6 b8 fe
[    0.930212] process_jumps: repl[0]: 0x48
[    0.932002] process_jumps: insn len: 3
[    0.932814] process_jumps: repl[0]: 0xe9
[    0.934003] recompute_jump: o_dspl: 0xfeb8a626
[    0.934914] recompute_jump: target RIP: ffffffff81000000, new_displ: 0xfffd8bd7
[    0.936001] recompute_jump: final displ: 0xfffd8bd2, JMP 0xffffffff81000000
[    0.937240] process_jumps: insn len: 5
[    0.938053] ffffffff81027429: final_insn: e9 d2 8b fd ff a6 b8 fe

Apparently our insn decoder is smart enough to parse the insn and get its
length, so I can use that: it jumps over the first 3-byte XOR and then
massages the following 5-byte JMP.
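To make the dmesg numbers above easier to follow, here is a minimal user-space
sketch (not part of the patch, purely illustrative) of the displacement
arithmetic recompute_jump() performs, plugging in the addresses from the log;
the variable names merely mirror the kernel code:

/*
 * Standalone sketch: redo the displacement arithmetic from the dmesg
 * snippet above. Compile with any C compiler and run; it should print
 * the same target RIP (ffffffff81000000), new_displ (0xfffd8bd7) and
 * final displacement (0xfffd8bd2) as the log.
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t orig_insn = 0xffffffff81027429ULL;     /* patch site (old_insn above) */
	uint64_t repl_jmp  = 0xffffffff824759d2ULL + 3; /* the JMP follows the 3-byte XOR */
	int32_t  o_dspl    = (int32_t)0xfeb8a626;       /* rel32 of the replacement JMP */

	/* Where the replacement JMP points when decoded in place. */
	uint64_t next_rip = repl_jmp + 5;               /* a JMP rel32 is 5 bytes long */
	uint64_t tgt_rip  = next_rip + (int64_t)o_dspl;

	/* Displacement as seen from the start of the patch site... */
	int32_t n_dspl = (int32_t)(tgt_rip - orig_insn);
	/* ...and the rel32 actually encoded into the 5-byte JMP placed there. */
	int32_t final_displ = (int32_t)(tgt_rip - (orig_insn + 5));

	printf("target RIP: %llx, new_displ: 0x%x\n",
	       (unsigned long long)tgt_rip, (uint32_t)n_dspl);
	printf("final displ: 0x%x, JMP 0x%llx\n",
	       (uint32_t)final_displ, (unsigned long long)tgt_rip);
	return 0;
}

The extra -5 in the final displacement is simply because a rel32 JMP is
relative to the end of the 5-byte instruction at the patch site, not to its
first byte.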
---
From: Borislav Petkov
Date: Fri, 5 Jan 2018 20:32:58 +0100
Subject: [PATCH] WIP

Signed-off-by: Borislav Petkov
---
 arch/x86/kernel/alternative.c | 51 +++++++++++++++++++++++++++++++------------
 1 file changed, 37 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index dbaf14d69ebd..14a855789a50 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -21,6 +21,7 @@
 #include
 #include
 #include
+#include <asm/insn.h>
 
 int __read_mostly alternatives_patched;
 
@@ -281,24 +282,24 @@ static inline bool is_jmp(const u8 opcode)
 }
 
 static void __init_or_module
-recompute_jump(struct alt_instr *a, u8 *orig_insn, u8 *repl_insn, u8 *insnbuf)
+recompute_jump(struct alt_instr *a, u8 *orig_insn, u8 *repl_insn, u8 repl_len, u8 *insnbuf)
 {
 	u8 *next_rip, *tgt_rip;
 	s32 n_dspl, o_dspl;
-	int repl_len;
 
-	if (a->replacementlen != 5)
+	if (repl_len != 5)
 		return;
 
-	o_dspl = *(s32 *)(insnbuf + 1);
+	o_dspl = *(s32 *)(repl_insn + 1);
 
 	/* next_rip of the replacement JMP */
-	next_rip = repl_insn + a->replacementlen;
+	next_rip = repl_insn + repl_len;
+
 	/* target rip of the replacement JMP */
 	tgt_rip = next_rip + o_dspl;
 	n_dspl = tgt_rip - orig_insn;
 
-	DPRINTK("target RIP: %p, new_displ: 0x%x", tgt_rip, n_dspl);
+	DPRINTK("target RIP: %px, new_displ: 0x%x", tgt_rip, n_dspl);
 
 	if (tgt_rip - orig_insn >= 0) {
 		if (n_dspl - 2 <= 127)
@@ -337,6 +338,29 @@ recompute_jump(struct alt_instr *a, u8 *orig_insn, u8 *repl_insn, u8 *insnbuf)
 			   n_dspl, (unsigned long)orig_insn + n_dspl + repl_len);
 }
 
+static void __init_or_module process_jumps(struct alt_instr *a, u8 *insnbuf)
+{
+	u8 *repl = (u8 *)&a->repl_offset + a->repl_offset;
+	u8 *instr = (u8 *)&a->instr_offset + a->instr_offset;
+	struct insn insn;
+	int i = 0;
+
+	if (!a->replacementlen)
+		return;
+
+	while (i < a->replacementlen) {
+		kernel_insn_init(&insn, repl, a->replacementlen);
+
+		insn_get_length(&insn);
+
+		if (is_jmp(repl[0]))
+			recompute_jump(a, instr, repl, insn.length, insnbuf);
+
+		i += insn.length;
+		repl += insn.length;
+	}
+}
+
 /*
  * "noinline" to cause control flow change and thus invalidate I$ and
  * cause refetch after modification.
@@ -352,7 +376,7 @@ static void __init_or_module noinline optimize_nops(struct alt_instr *a, u8 *ins
 	add_nops(instr + (a->instrlen - a->padlen), a->padlen);
 	local_irq_restore(flags);
 
-	DUMP_BYTES(instr, a->instrlen, "%p: [%d:%d) optimized NOPs: ",
+	DUMP_BYTES(instr, a->instrlen, "%px: [%d:%d) optimized NOPs: ",
 		   instr, a->instrlen - a->padlen, a->padlen);
 }
 
@@ -373,7 +397,7 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start,
 	u8 *instr, *replacement;
 	u8 insnbuf[MAX_PATCH_LEN];
 
-	DPRINTK("alt table %p -> %p", start, end);
+	DPRINTK("alt table %px -> %px", start, end);
 	/*
 	 * The scan order should be from start to end. A later scanned
 	 * alternative code can overwrite previously scanned alternative code.
@@ -397,14 +421,14 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start,
 			continue;
 		}
 
-		DPRINTK("feat: %d*32+%d, old: (%p, len: %d), repl: (%p, len: %d), pad: %d",
+		DPRINTK("feat: %d*32+%d, old: (%px, len: %d), repl: (%px, len: %d), pad: %d",
 			a->cpuid >> 5,
 			a->cpuid & 0x1f,
 			instr, a->instrlen,
 			replacement, a->replacementlen, a->padlen);
 
-		DUMP_BYTES(instr, a->instrlen, "%p: old_insn: ", instr);
-		DUMP_BYTES(replacement, a->replacementlen, "%p: rpl_insn: ", replacement);
+		DUMP_BYTES(instr, a->instrlen, "%px: old_insn: ", instr);
+		DUMP_BYTES(replacement, a->replacementlen, "%px: rpl_insn: ", replacement);
 
 		memcpy(insnbuf, replacement, a->replacementlen);
 		insnbuf_sz = a->replacementlen;
@@ -422,15 +446,14 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start,
 			    (unsigned long)instr + *(s32 *)(insnbuf + 1) + 5);
 		}
 
-		if (a->replacementlen && is_jmp(replacement[0]))
-			recompute_jump(a, instr, replacement, insnbuf);
+		process_jumps(a, insnbuf);
 
 		if (a->instrlen > a->replacementlen) {
 			add_nops(insnbuf + a->replacementlen,
 				 a->instrlen - a->replacementlen);
			insnbuf_sz += a->instrlen - a->replacementlen;
 		}
-		DUMP_BYTES(insnbuf, insnbuf_sz, "%p: final_insn: ", instr);
+		DUMP_BYTES(insnbuf, insnbuf_sz, "%px: final_insn: ", instr);
 
 		text_poke_early(instr, insnbuf, insnbuf_sz);
 	}
-- 
2.13.0

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
--