From: Andy Lutomirski
Date: Thu, 11 Oct 2018 13:47:49 -0700
Subject: Re: [PATCH] x86: entry: flush the cache if syscall error
To: One Thousand Gnomes
Cc: Andrew Lutomirski, Kristen Carlson Accardi, Kernel Hardening, Thomas Gleixner, Ingo Molnar, Borislav Petkov, "H. Peter Anvin", X86 ML, LKML
In-Reply-To: <20181011212504.012c3ece@alans-desktop>
References: <20181011185458.10186-1-kristen@linux.intel.com> <20181011212504.012c3ece@alans-desktop>
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Oct 11, 2018 at 1:25 PM Alan Cox wrote:
>
> > Ugh.
> >
> > What exactly is this trying to protect against? And how many cycles
>
> Most attacks by speculation rely upon leaving footprints in the L1
> cache. They also almost inevitably resolve non-speculatively to errors.
> If you look through all the 'yet another potential spectre case' patches
> people have found, they would have been rendered close to useless by
> this change.

Can you give an example? AFAIK the interesting Meltdown-like attacks
are unaffected, because Meltdown doesn't actually need the target data
to be in L1D. And most of the Spectre-style attacks would have been
blocked by doing LFENCE on the error cases (and somehow making sure
that the CPU doesn't speculate around the LFENCE without noticing it).
But this patch is doing an L1D flush, which, as far as I've heard,
isn't actually relevant.

> It's a way to deal with the ones we don't know about, all the ones the
> tools won't find, and it has pretty much zero cost.
>
> (If you are bored, strace an entire day's desktop session, bang it
> through a script or two to extract the number of triggering error
> returns and do the maths...)
>
> > should we expect L1D_FLUSH to take?
>
> More to the point, you pretty much never trigger it. Errors are not the
> normal path in real code. The original version of this code emptied the
> L1 the hard way - and even then it was in the noise for real workloads
> we tried.
>
> You can argue that the other thread could be some evil task that
> deliberately triggers flushes, but it can already thrash the L1 on
> processors that share L1 between threads using perfectly normal memory
> instructions.

That's not what I meant. I meant that, if an attacker can run code on
*both* logical threads on the same CPU, then they can run their attack
code on the other logical thread before the L1D_FLUSH command takes
effect.

I care about the performance of single-threaded workloads, though. How
slow is this thing? No one cares about syscall performance on regular
desktop sessions except for gaming. But people do care about syscall
performance on all kinds of crazy server, database, etc. workloads. And
compilation. And HPC stuff, although that mostly doesn't involve
syscalls. So: benchmarks, please. And estimated cycle counts, please,
on at least a couple of relevant CPU generations.

On Meltdown-affected CPUs, we're doing a CR3 write anyway, which is
fully serializing, so it's slow. But AFAIK that *already* blocks most
of these attacks except L1TF, and L1TF has (hopefully!) been fixed
anyway on Linux.