Date: Sat, 1 Dec 2018 08:25:38 -0600
From: Josh Poimboeuf
To: Nadav Amit
Cc: Andy Lutomirski, Ingo Molnar, Peter Zijlstra,
Peter Anvin" , Thomas Gleixner , LKML , X86 ML , Borislav Petkov , "Woodhouse, David" Subject: Re: [RFC PATCH 0/5] x86: dynamic indirect call promotion Message-ID: <20181201142538.tuxabm2sy2xtrfuq@treble> References: <20181018005420.82993-1-namit@vmware.com> <20181128160849.epmoto4o5jaxxxol@treble> <9EACED43-EC21-41FB-BFAC-4E98C3842FD9@vmware.com> <20181129003837.6lgxsnhoyipkebmz@treble> <0E75C656-18BF-4967-98A3-35E0BD83D603@vmware.com> <4CD1975E-3B15-4B9C-B2A9-2E5F72E1D95F@amacapital.net> <20181129151906.owxeef2e3cm4nn2y@treble> MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Disposition: inline Content-Transfer-Encoding: 8bit In-Reply-To: User-Agent: NeoMutt/20180716 X-Scanned-By: MIMEDefang 2.79 on 10.5.11.13 X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16 (mx1.redhat.com [10.5.110.29]); Sat, 01 Dec 2018 14:25:42 +0000 (UTC) Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Sat, Dec 01, 2018 at 06:52:45AM +0000, Nadav Amit wrote: > > On Nov 29, 2018, at 7:19 AM, Josh Poimboeuf wrote: > > > > On Wed, Nov 28, 2018 at 10:06:52PM -0800, Andy Lutomirski wrote: > >> On Wed, Nov 28, 2018 at 7:24 PM Andy Lutomirski wrote: > >>> On Nov 28, 2018, at 6:06 PM, Nadav Amit wrote: > >>> > >>>>> On Nov 28, 2018, at 5:40 PM, Andy Lutomirski wrote: > >>>>> > >>>>>> On Wed, Nov 28, 2018 at 4:38 PM Josh Poimboeuf wrote: > >>>>>> On Wed, Nov 28, 2018 at 07:34:52PM +0000, Nadav Amit wrote: > >>>>>>>> On Nov 28, 2018, at 8:08 AM, Josh Poimboeuf wrote: > >>>>>>>> > >>>>>>>>> On Wed, Oct 17, 2018 at 05:54:15PM -0700, Nadav Amit wrote: > >>>>>>>>> This RFC introduces indirect call promotion in runtime, which for the > >>>>>>>>> matter of simplification (and branding) will be called here "relpolines" > >>>>>>>>> (relative call + trampoline). Relpolines are mainly intended as a way > >>>>>>>>> of reducing retpoline overheads due to Spectre v2. > >>>>>>>>> > >>>>>>>>> Unlike indirect call promotion through profile guided optimization, the > >>>>>>>>> proposed approach does not require a profiling stage, works well with > >>>>>>>>> modules whose address is unknown and can adapt to changing workloads. > >>>>>>>>> > >>>>>>>>> The main idea is simple: for every indirect call, we inject a piece of > >>>>>>>>> code with fast- and slow-path calls. The fast path is used if the target > >>>>>>>>> matches the expected (hot) target. The slow-path uses a retpoline. > >>>>>>>>> During training, the slow-path is set to call a function that saves the > >>>>>>>>> call source and target in a hash-table and keep count for call > >>>>>>>>> frequency. The most common target is then patched into the hot path. > >>>>>>>>> > >>>>>>>>> The patching is done on-the-fly by patching the conditional branch > >>>>>>>>> (opcode and offset) that is used to compare the target to the hot > >>>>>>>>> target. This allows to direct all cores to the fast-path, while patching > >>>>>>>>> the slow-path and vice-versa. Patching follows 2 more rules: (1) Only > >>>>>>>>> patch a single byte when the code might be executed by any core. (2) > >>>>>>>>> When patching more than one byte, ensure that all cores do not run the > >>>>>>>>> to-be-patched-code by preventing this code from being preempted, and > >>>>>>>>> using synchronize_sched() after patching the branch that jumps over this > >>>>>>>>> code. > >>>>>>>>> > >>>>>>>>> Changing all the indirect calls to use relpolines is done using assembly > >>>>>>>>> macro magic. 
> >>>>>>>>>
> >>>>>>>>> In the end, the results are not bad (2-vCPU VM, throughput
> >>>>>>>>> reported):
> >>>>>>>>>
> >>>>>>>>>               base     relpoline
> >>>>>>>>>               ----     ---------
> >>>>>>>>>   nginx       22898    25178 (+10%)
> >>>>>>>>>   redis-ycsb  24523    25486 (+4%)
> >>>>>>>>>   dbench       2144     2103 (-2%)
> >>>>>>>>>
> >>>>>>>>> When retpolines are disabled and retraining is off, the performance
> >>>>>>>>> benefit is up to 2% (nginx), but much less impressive.
> >>>>>>>>
> >>>>>>>> Hi Nadav,
> >>>>>>>>
> >>>>>>>> Peter pointed me to these patches during a discussion about retpoline
> >>>>>>>> profiling.  Personally, I think this is brilliant.  This could help
> >>>>>>>> networking and filesystem intensive workloads a lot.
> >>>>>>>
> >>>>>>> Thanks! I was a bit held back by the relatively limited number of
> >>>>>>> responses.
> >>>>>>
> >>>>>> It is a rather, erm, ambitious idea, maybe they were speechless :-)
> >>>>>>
> >>>>>>> I finished another version two weeks ago, and every day I think:
> >>>>>>> “should it be RFCv2 or v1”, ending up not sending it…
> >>>>>>>
> >>>>>>> There is one issue that I realized while working on the new version:
> >>>>>>> I’m not sure it is well defined what an outline retpoline is allowed
> >>>>>>> to do.  The indirect branch promotion code can change rflags, which
> >>>>>>> might cause correctness issues.  In practice, using gcc, it is not a
> >>>>>>> problem.
> >>>>>>
> >>>>>> Callees can clobber flags, so it seems fine to me.
> >>>>>
> >>>>> Just to check I understand your approach right: you made a macro
> >>>>> called "call", and you're therefore causing all instances of "call" to
> >>>>> become magic?  This is... terrifying.  It's even plausibly worse than
> >>>>> "#define if" :)  The scariest bit is that it will impact inline asm as
> >>>>> well.  Maybe a gcc plugin would be less alarming?
> >>>>
> >>>> It is likely to look less alarming.  When I looked at the inline
> >>>> retpoline implementation of gcc, it didn’t look much better than what I
> >>>> did - it basically just emits assembly instructions.
> >>>
> >>> To be clear, that wasn’t a NAK.  It was merely a “this is alarming.”
> >>
> >> Although... how do you avoid matching on things that really don't want
> >> this treatment?  paravirt ops come to mind.
> >
> > Paravirt ops don't use retpolines because they're patched into direct
> > calls during boot.  So Nadav's patches won't touch them.
>
> Actually, the way it’s handled is slightly more complicated - yes, the
> CALL macro should not be applied, as Josh said, but the question is how
> it is achieved.
>
> The basic idea is that the CALL macro should only be applied to C source
> files, and not to assembly files or to macros.s, which holds the PV call
> macros.  I will recheck that it is done this way.

Even if the CALL macro were applied, it would get ignored by your code
because the PARAVIRT_CALL macro doesn't use retpolines.  So it would get
skipped by this check:

	.ifc "\v", "__x86_indirect_thunk_\reg_it"
		relpoline_call reg=\reg_it
		retpoline = 1
	.endif

--
Josh
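For readers who don't write GNU as macros every day: the quoted check
string-compares the textual call target against each register's retpoline
thunk name, and only a match gets rewritten.  A stand-alone toy version of
that dispatch, using made-up names (maybe_relpoline_call, example_thunk_*)
rather than the macro the patches actually install over "call", might look
roughly like this:

	/* Toy illustration only -- not the macro from the patches. */
	.macro maybe_relpoline_call v:vararg
		handled = 0
		.irp reg_it, rax, rcx, rdx, rsi, rdi, r8, r9, r10, r11
		.ifc "\v", "example_thunk_\reg_it"
			/* a retpolined call site: this is where the real
			   macro emits the fast/slow-path code */
			call	example_thunk_\reg_it
			handled = 1
		.endif
		.endr
		.if handled == 0
			call	\v	/* anything else is left untouched */
		.endif
	.endm

	/* e.g. "maybe_relpoline_call example_thunk_rax" would be rewritten,
	   while an operand that is not a thunk name is emitted unchanged */

A call that does not go through one of the __x86_indirect_thunk_<reg>
symbols, such as the call PARAVIRT_CALL emits, never matches the string
compare, so it falls out of the loop unhandled and becomes a plain call -
the "skipped by this check" behaviour described above.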