From: Andy Lutomirski
Date: Fri, 5 Oct 2018 11:00:58 -0700
Subject: Re: [RFC PATCH 0/9] patchable function pointers for pluggable crypto routines
To: Ard Biesheuvel, Josh Poimboeuf
Cc: Andrew Lutomirski, "Jason A. Donenfeld", LKML, Eric Biggers,
 Samuel Neves, Arnd Bergmann, Herbert Xu, "David S. Miller",
 Catalin Marinas, Will Deacon, Benjamin Herrenschmidt, Paul Mackerras,
 Michael Ellerman, Thomas Gleixner, Ingo Molnar, Kees Cook,
 "Martin K. Petersen", Greg KH, Andrew Morton, Richard Weinberger,
 Peter Zijlstra, Linux Crypto Mailing List, linux-arm-kernel,
 linuxppc-dev
References: <20181005081333.15018-1-ard.biesheuvel@linaro.org>
 <20181005133705.GA4588@zx2c4.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Oct 5, 2018 at 10:28 AM Ard Biesheuvel wrote:
>
> On 5 October 2018 at 19:26, Andy Lutomirski wrote:
> > On Fri, Oct 5, 2018 at 10:15 AM Ard Biesheuvel wrote:
> >>
> >> On 5 October 2018 at 15:37, Jason A. Donenfeld wrote:
> >> ...
> >> > Therefore, I think this patch goes in exactly the wrong direction. I
> >> > mean, if you want to introduce dynamic patching as a means for making
> >> > the crypto API's dynamic dispatch stuff not as slow in a post-Spectre
> >> > world, sure, go for it; that may very well be a good idea. But
> >> > presenting it as an alternative to Zinc very widely misses the point and
> >> > serves to prolong a series of bad design choices, which are now able to
> >> > be rectified by putting energy into Zinc instead.
> >> >
> >>
> >> This series has nothing to do with dynamic dispatch: the call sites
> >> call crypto functions using ordinary function calls (although my
> >> example uses CRC-T10DIF), and these calls are redirected via what is
> >> essentially a PLT entry, so that we can supersede those routines at
> >> runtime.
> >
> > If you really want to do it PLT-style, then just do:
> >
> > extern void whatever_func(args);
> >
> > Call it like:
> > whatever_func(args here);
> >
> > And rig up something to emit asm like:
> >
> > GLOBAL(whatever_func)
> >   jmpq default_whatever_func
> > ENDPROC(whatever_func)
> >
> > Architectures without support can instead do:
> >
> > void whatever_func(args)
> > {
> >   READ_ONCE(patchable_function_struct_for_whatever_func->ptr)(args);
> > }
> >
> > and patch the asm function for basic support. It will be slower than
> > necessary, but maybe the relocation trick could be used on top of this
> > to redirect the call to whatever_func directly to the target for
> > architectures that want to squeeze out the last bit of performance.
> > This might actually be the best of all worlds: easy implementation on
> > all architectures, no inline asm, and the totally non-magical version
> > works with okay performance.
> >
> > (Is this what your code is doing? I admit I didn't follow all the way
> > through all the macros.)
>
> Basically

Adding Josh Poimboeuf.

Here's a sketch of how this could work for better performance.  For a
static call "foo" that returns void and takes no arguments, the generic
implementation does something like this:

extern void foo(void);

struct static_call {
  void (*target)(void);

  /* arch-specific part containing an array of struct static_call_site */
};

void foo(void)
{
  READ_ONCE(__static_call_foo->target)();
}

Arch code overrides it to:

GLOBAL(foo)
  jmpq *__static_call_foo(%rip)
ENDPROC(foo)

plus some extra asm to emit a static_call_site object saying that the
address "foo" is a jmp/call instruction where the operand is at offset
1 into the instruction.  (Or whatever the offset is.)

The patch code is like:

void set_static_call(struct static_call *call, void *target)
{
  /* take a spinlock? */
  WRITE_ONCE(call->target, target);
  arch_set_static_call(call, target);
}

and the arch code patches the call site if needed.

On x86, an even better implementation would have objtool make a bunch
of additional static_call_site objects, one for each call to foo, and
arch_set_static_call() would update all of them, too, using
text_poke_bp() if needed; "if needed" can maybe be clever and check the
alignment of the instruction.  I admit that I never actually remember
the full rules for atomically patching an instruction on x86 SMP.

(Hmm.  This will be really epically slow.  Maybe we don't care.  Or we
could finally optimize text_poke(), etc., to take a list of pokes to do
and do them as a batch.  But that's not a prerequisite for the rest of
this.)

What do you all think?
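
To make the x86 side of that a bit more concrete, here is a minimal
sketch of what arch_set_static_call() might look like.  To be clear,
this is an illustration, not part of the series under discussion: the
static_call_site layout, the sites/nr_sites fields, and CALL_INSN_SIZE
are assumptions, and the question of what a CPU that traps mid-patch
should do is exactly the hard part being punted on above.

#include <linux/string.h>
#include <linux/types.h>
#include <asm/text-patching.h>  /* text_poke_bp() */

#define CALL_INSN_SIZE 5        /* e8/e9 opcode + rel32 operand */

struct static_call_site {
        void *insn;             /* address of the call/jmp instruction */
};

struct static_call {
        void (*target)(void);
        struct static_call_site *sites; /* assumed: emitted by objtool */
        unsigned int nr_sites;
};

static void arch_set_static_call(struct static_call *call, void *target)
{
        unsigned int i;

        for (i = 0; i < call->nr_sites; i++) {
                void *insn = call->sites[i].insn;
                unsigned char buf[CALL_INSN_SIZE];
                /* rel32 is relative to the end of the instruction */
                s32 rel = (s32)((long)target - ((long)insn + CALL_INSN_SIZE));

                buf[0] = 0xe8;  /* call rel32; 0xe9 for the jmp variant */
                memcpy(buf + 1, &rel, sizeof(rel));

                /*
                 * text_poke_bp() implements the int3-based protocol for
                 * cross-modifying code on SMP.  Passing the new target
                 * as the resume address is only obviously right for a
                 * jmp; what to do for a CPU that hits the int3 in the
                 * middle of patching a *call* is the unresolved part.
                 */
                text_poke_bp(insn, buf, CALL_INSN_SIZE, target);
        }
}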
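
And purely to illustrate the "list of pokes" idea at the end, no such
interface exists today, text_poke_bp() takes a single location, but the
shape might be something like:

#include <linux/types.h>
#include <asm/text-patching.h>

struct text_poke_req {
        void *addr;
        const void *opcode;
        size_t len;
        void *handler;
};

/*
 * Placeholder body: this still pays the full synchronization cost per
 * poke.  A real batched version would install all the int3 bytes, do
 * one round of CPU synchronization for the whole list, write all the
 * instruction tails, sync again, then restore all the first bytes, so
 * the expensive IPIs are paid once per batch instead of once per call
 * site.
 */
static void text_poke_bp_batch(struct text_poke_req *reqs, unsigned int nr)
{
        unsigned int i;

        for (i = 0; i < nr; i++)
                text_poke_bp(reqs[i].addr, reqs[i].opcode, reqs[i].len,
                             reqs[i].handler);
}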