From: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
To: linux-kernel@vger.kernel.org, x86@kernel.org
Cc: Sai Praneeth, Tim C Chen, Dave Hansen, Thomas Gleixner, Ravi Shankar, Ingo Molnar
Subject: [PATCH V2] x86/speculation: Support Enhanced IBRS on future CPUs
Date: Mon, 30 Jul 2018 12:19:56 -0700
Message-Id: <1532978396-2197-1-git-send-email-sai.praneeth.prakhya@intel.com>
X-Mailer: git-send-email 2.7.4
List-ID: <linux-kernel.vger.kernel.org>

From: Sai Praneeth

Some future Intel processors may support "Enhanced IBRS", which is an "always on" mode, i.e.
IBRS bit in SPEC_CTRL MSR is set once and never cleared.

[With enhanced IBRS, the predicted targets of indirect branches executed cannot be controlled by software that was executed in a less privileged predictor mode or on another logical processor. As a result, software operating on a processor with enhanced IBRS need not use WRMSR to set IA32_SPEC_CTRL.IBRS after every transition to a more privileged predictor mode. Software can isolate predictor modes effectively simply by setting the bit once. Software need not disable enhanced IBRS prior to entering a sleep state such as MWAIT or HLT.] - Specification [1]

Even with enhanced IBRS, we still need to make sure that the IBRS bit in SPEC_CTRL MSR is always set, i.e. while booting, if we detect support for Enhanced IBRS, we set the IBRS bit in SPEC_CTRL MSR and make sure that it remains set thereafter. In other words, if the guest has cleared the IBRS bit, the bit should be set again upon VMEXIT.

Fortunately, the kernel already has the infrastructure ready: kvm/vmx.c calls x86_spec_ctrl_set_guest() before entering the guest and x86_spec_ctrl_restore_host() after leaving it. So the guest view of SPEC_CTRL MSR is restored before entering the guest, the host view is restored before re-entering the host, and hence the IBRS bit is set again after VMEXIT.

Intel's white paper on Retpoline [2] says: "Retpoline is known to be an effective branch target injection (Spectre variant 2) mitigation on Intel processors belonging to family 6 (enumerated by the CPUID instruction) that do not have support for enhanced IBRS. On processors that support enhanced IBRS, it should be used for mitigation instead of retpoline."

This means Intel recommends using Enhanced IBRS over retpoline where available, and that retpoline provides less complete mitigation on processors with enhanced IBRS than on those without.
Hence, on processors that support Enhanced IBRS, this patch makes Enhanced IBRS the default Spectre V2 mitigation technique instead of retpoline. Note that IBPB is still needed even with enhanced IBRS.

[1] https://software.intel.com/sites/default/files/managed/c5/63/336996-Speculative-Execution-Side-Channel-Mitigations.pdf
[2] https://software.intel.com/sites/default/files/managed/1d/46/Retpoline-A-Branch-Target-Injection-Mitigation.pdf

Signed-off-by: Sai Praneeth Prakhya
Originally-by: David Woodhouse
Cc: Tim C Chen
Cc: Dave Hansen
Cc: Thomas Gleixner
Cc: Ravi Shankar
Cc: Ingo Molnar
---
 arch/x86/include/asm/cpufeatures.h   |  1 +
 arch/x86/include/asm/nospec-branch.h |  2 +-
 arch/x86/kernel/cpu/bugs.c           | 29 +++++++++++++++++++++++++++--
 arch/x86/kernel/cpu/common.c         |  3 +++
 4 files changed, 32 insertions(+), 3 deletions(-)

Changes from V1 to V2:
1. Explicitly spell out in the change log the reason for using Enhanced IBRS as the default Spectre V2 mitigation technique instead of Retpoline.

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 5701f5cecd31..f75815b1dbee 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -219,6 +219,7 @@
 #define X86_FEATURE_IBPB		( 7*32+26) /* Indirect Branch Prediction Barrier */
 #define X86_FEATURE_STIBP		( 7*32+27) /* Single Thread Indirect Branch Predictors */
 #define X86_FEATURE_ZEN			( 7*32+28) /* "" CPU is AMD family 0x17 (Zen) */
+#define X86_FEATURE_IBRS_ENHANCED	( 7*32+29) /* "ibrs_enhanced" Use Enhanced IBRS in kernel */
 
 /* Virtualization flags: Linux defined, word 8 */
 #define X86_FEATURE_TPR_SHADOW		( 8*32+ 0) /* Intel TPR Shadow */
diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
index f6f6c63da62f..fd2a8c1b88bc 100644
--- a/arch/x86/include/asm/nospec-branch.h
+++ b/arch/x86/include/asm/nospec-branch.h
@@ -214,7 +214,7 @@ enum spectre_v2_mitigation {
 	SPECTRE_V2_RETPOLINE_MINIMAL_AMD,
 	SPECTRE_V2_RETPOLINE_GENERIC,
 	SPECTRE_V2_RETPOLINE_AMD,
-	SPECTRE_V2_IBRS,
+	SPECTRE_V2_IBRS_ENHANCED,
 };
 
 /* The Speculative Store Bypass disable variants */
diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 5c0ea39311fe..a66517de1301 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -130,6 +130,7 @@ static const char *spectre_v2_strings[] = {
 	[SPECTRE_V2_RETPOLINE_MINIMAL_AMD]	= "Vulnerable: Minimal AMD ASM retpoline",
 	[SPECTRE_V2_RETPOLINE_GENERIC]		= "Mitigation: Full generic retpoline",
 	[SPECTRE_V2_RETPOLINE_AMD]		= "Mitigation: Full AMD retpoline",
+	[SPECTRE_V2_IBRS_ENHANCED]		= "Mitigation: Enhanced IBRS",
 };
 
 #undef pr_fmt
@@ -349,6 +350,8 @@ static void __init spectre_v2_select_mitigation(void)
 
 	case SPECTRE_V2_CMD_FORCE:
 	case SPECTRE_V2_CMD_AUTO:
+		if (boot_cpu_has(X86_FEATURE_IBRS_ENHANCED))
+			goto skip_retpoline_enable_ibrs;
 		if (IS_ENABLED(CONFIG_RETPOLINE))
 			goto retpoline_auto;
 		break;
@@ -385,7 +388,22 @@ static void __init spectre_v2_select_mitigation(void)
 					 SPECTRE_V2_RETPOLINE_MINIMAL;
 		setup_force_cpu_cap(X86_FEATURE_RETPOLINE);
 	}
+	goto enable_other_mitigations;
 
+skip_retpoline_enable_ibrs:
+	mode = SPECTRE_V2_IBRS_ENHANCED;
+
+	/*
+	 * As we don't use IBRS in kernel, nobody should have set
+	 * SPEC_CTRL_IBRS until now. Shout loud if somebody did enable
+	 * SPEC_CTRL_IBRS before us.
+	 */
+	WARN_ON_ONCE(x86_spec_ctrl_base & SPEC_CTRL_IBRS);
+
+	/* Ensure SPEC_CTRL_IBRS is set after VMEXIT from a guest */
+	x86_spec_ctrl_base |= SPEC_CTRL_IBRS;
+
+enable_other_mitigations:
 	spectre_v2_enabled = mode;
 	pr_info("%s\n", spectre_v2_strings[mode]);
 
@@ -415,9 +433,16 @@ static void __init spectre_v2_select_mitigation(void)
 
 	/*
 	 * Retpoline means the kernel is safe because it has no indirect
-	 * branches. But firmware isn't, so use IBRS to protect that.
+	 * branches. Enhanced IBRS protects firmware too, so, enable restricted
+	 * speculation around firmware calls only when Enhanced IBRS isn't
+	 * supported.
+	 *
+	 * Use "mode" to check Enhanced IBRS instead of boot_cpu_has(), because
+	 * user might select retpoline on command line and if CPU supports
+	 * Enhanced IBRS, we might un-intentionally not enable IBRS around
+	 * firmware calls.
 	 */
-	if (boot_cpu_has(X86_FEATURE_IBRS)) {
+	if (boot_cpu_has(X86_FEATURE_IBRS) && mode != SPECTRE_V2_IBRS_ENHANCED) {
 		setup_force_cpu_cap(X86_FEATURE_USE_IBRS_FW);
 		pr_info("Enabling Restricted Speculation for firmware calls\n");
 	}
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index eb4cb3efd20e..8ed73a46511f 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1005,6 +1005,9 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 	    !cpu_has(c, X86_FEATURE_AMD_SSB_NO))
 		setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);
 
+	if (ia32_cap & ARCH_CAP_IBRS_ALL)
+		setup_force_cpu_cap(X86_FEATURE_IBRS_ENHANCED);
+
 	if (x86_match_cpu(cpu_no_meltdown))
 		return;
 
--
2.7.4