Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752038AbeADB7q (ORCPT + 1 other); Wed, 3 Jan 2018 20:59:46 -0500
Received: from www.llwyncelyn.cymru ([82.70.14.225]:42428 "EHLO fuzix.org"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751842AbeADB7p (ORCPT ); Wed, 3 Jan 2018 20:59:45 -0500
Date: Thu, 4 Jan 2018 01:59:20 +0000
From: Alan Cox
To: Paolo Bonzini
Cc: Linus Torvalds , Andi Kleen , tglx@linuxtronix.de,
	Greg Kroah-Hartman , dwmw@amazon.co.uk, Tim Chen ,
	Linux Kernel Mailing List , Dave Hansen
Subject: Re: Avoid speculative indirect calls in kernel
Message-ID: <20180104015920.1ad7b9d3@alans-desktop>
In-Reply-To:
References: <20180103230934.15788-1-andi@firstfloor.org>
Organization: Intel Corporation
X-Mailer: Claws Mail 3.15.1-dirty (GTK+ 2.24.31; x86_64-redhat-linux-gnu)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org
Return-Path:

> But then, exactly because the retpoline approach adds quite some cruft
> and leaves something to be desired, why even bother? Intel has also

Performance

> Also, according to Google the KVM PoC can be broken simply by clearing
> the registers on every exit to the hypervisor. Of course it's just
> mitigation, but perhaps _that_ is where we should start fixing the
> user/kernel boundary too.

The syscall boundary isn't quite that simple, and clearing registers makes
things harder but not impossible. It's a good hardening exercise, as are
things like ANDing the top bit off userspace addresses on 64-bit x86, so
that even if someone speculates through a user copy they only get to steal
their own data.
Other hardening possibilities include moving processes between cores;
yielding to another task for a bit, or clearing L1 data, if a syscall
returns an error; running only processes for the same uid on hyperthreaded
pairs/quads (easier to do with VMs, and something some cloud folk kind of
do anyway so that you more clearly get what you pay for in CPU time); etc.

Buffer overruns went from fly swatting, through organized analysis,
hardening, tools, better interfaces and language changes. History usually
repeats itself.

But absolutely - yes, we should be looking at effective hardening
mechanisms in the kernel just as people will be in the hardware.

Alan