From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Thomas Gleixner, Ingo Molnar, Konrad Rzeszutek Wilk
Subject: [PATCH 4.16 076/110] x86/process: Allow runtime control of Speculative Store Bypass
Date: Mon, 21 May 2018 23:12:13 +0200
Message-Id: <20180521210512.839843987@linuxfoundation.org>
X-Mailer: git-send-email 2.17.0
In-Reply-To: <20180521210503.823249477@linuxfoundation.org>
References: <20180521210503.823249477@linuxfoundation.org>
User-Agent: quilt/0.65
X-stable: review
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

4.16-stable review patch. If anyone has any objections, please let me know.

------------------

From: Thomas Gleixner

commit 885f82bfbc6fefb6664ea27965c3ab9ac4194b8c upstream

The Speculative Store Bypass vulnerability can be mitigated with the
Reduced Data Speculation (RDS) feature. To allow finer-grained control of
this eventually expensive mitigation, a per-task mitigation control is
required.

Add a new TIF_RDS flag and put it into the group of TIF flags which are
evaluated for mismatch in switch_to(). If these bits differ in the
previous and the next task, then the slow-path function __switch_to_xtra()
is invoked. Implement the TIF_RDS-dependent mitigation control in the
slow path.

If the prctl for controlling Speculative Store Bypass is disabled or no
task uses the prctl, then there is no overhead in the switch_to() fast
path.

Update the KVM-related speculation control functions to take TIF_RDS into
account as well.

Based on a patch from Tim Chen. Completely rewritten.

Signed-off-by: Thomas Gleixner
Reviewed-by: Ingo Molnar
Reviewed-by: Konrad Rzeszutek Wilk
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/include/asm/msr-index.h   |    3 ++-
 arch/x86/include/asm/spec-ctrl.h   |   17 +++++++++++++++++
 arch/x86/include/asm/thread_info.h |    4 +++-
 arch/x86/kernel/cpu/bugs.c         |   26 +++++++++++++++++++++-----
 arch/x86/kernel/process.c          |   22 ++++++++++++++++++++++
 5 files changed, 65 insertions(+), 7 deletions(-)

--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -42,7 +42,8 @@
 #define MSR_IA32_SPEC_CTRL		0x00000048 /* Speculation Control */
 #define SPEC_CTRL_IBRS			(1 << 0)   /* Indirect Branch Restricted Speculation */
 #define SPEC_CTRL_STIBP			(1 << 1)   /* Single Thread Indirect Branch Predictors */
-#define SPEC_CTRL_RDS			(1 << 2)   /* Reduced Data Speculation */
+#define SPEC_CTRL_RDS_SHIFT		2	   /* Reduced Data Speculation bit */
+#define SPEC_CTRL_RDS			(1 << SPEC_CTRL_RDS_SHIFT) /* Reduced Data Speculation */
 
 #define MSR_IA32_PRED_CMD		0x00000049 /* Prediction Command */
 #define PRED_CMD_IBPB			(1 << 0)   /* Indirect Branch Prediction Barrier */

--- a/arch/x86/include/asm/spec-ctrl.h
+++ b/arch/x86/include/asm/spec-ctrl.h
@@ -2,6 +2,7 @@
 #ifndef _ASM_X86_SPECCTRL_H_
 #define _ASM_X86_SPECCTRL_H_
 
+#include <linux/thread_info.h>
 #include <asm/msr-index.h>
 
 /*
@@ -18,4 +19,20 @@ extern void x86_spec_ctrl_restore_host(u
 extern u64 x86_amd_ls_cfg_base;
 extern u64 x86_amd_ls_cfg_rds_mask;
 
+/* The Intel SPEC CTRL MSR base value cache */
+extern u64 x86_spec_ctrl_base;
+
+static inline u64 rds_tif_to_spec_ctrl(u64 tifn)
+{
+	BUILD_BUG_ON(TIF_RDS < SPEC_CTRL_RDS_SHIFT);
+	return (tifn & _TIF_RDS) >> (TIF_RDS - SPEC_CTRL_RDS_SHIFT);
+}
+
+static inline u64 rds_tif_to_amd_ls_cfg(u64 tifn)
+{
+	return (tifn & _TIF_RDS) ? x86_amd_ls_cfg_rds_mask : 0ULL;
+}
+
+extern void speculative_store_bypass_update(void);
+
 #endif

--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -79,6 +79,7 @@ struct thread_info {
 #define TIF_SIGPENDING		2	/* signal pending */
 #define TIF_NEED_RESCHED	3	/* rescheduling necessary */
 #define TIF_SINGLESTEP		4	/* reenable singlestep on user return*/
+#define TIF_RDS			5	/* Reduced data speculation */
 #define TIF_SYSCALL_EMU		6	/* syscall emulation active */
 #define TIF_SYSCALL_AUDIT	7	/* syscall auditing active */
 #define TIF_SECCOMP		8	/* secure computing */
@@ -105,6 +106,7 @@ struct thread_info {
 #define _TIF_SIGPENDING		(1 << TIF_SIGPENDING)
 #define _TIF_NEED_RESCHED	(1 << TIF_NEED_RESCHED)
 #define _TIF_SINGLESTEP		(1 << TIF_SINGLESTEP)
+#define _TIF_RDS		(1 << TIF_RDS)
 #define _TIF_SYSCALL_EMU	(1 << TIF_SYSCALL_EMU)
 #define _TIF_SYSCALL_AUDIT	(1 << TIF_SYSCALL_AUDIT)
 #define _TIF_SECCOMP		(1 << TIF_SECCOMP)
@@ -144,7 +146,7 @@ struct thread_info {
 
 /* flags to check in __switch_to() */
 #define _TIF_WORK_CTXSW							\
-	(_TIF_IO_BITMAP|_TIF_NOCPUID|_TIF_NOTSC|_TIF_BLOCKSTEP)
+	(_TIF_IO_BITMAP|_TIF_NOCPUID|_TIF_NOTSC|_TIF_BLOCKSTEP|_TIF_RDS)
 
 #define _TIF_WORK_CTXSW_PREV	(_TIF_WORK_CTXSW|_TIF_USER_RETURN_NOTIFY)
 #define _TIF_WORK_CTXSW_NEXT	(_TIF_WORK_CTXSW)

--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -33,7 +33,7 @@ static void __init ssb_select_mitigation
  * Our boot-time value of the SPEC_CTRL MSR. We read it once so that any
  * writes to SPEC_CTRL contain whatever reserved bits have been set.
  */
-static u64 __ro_after_init x86_spec_ctrl_base;
+u64 __ro_after_init x86_spec_ctrl_base;
 
 /*
  * The vendor and possibly platform specific bits which can be modified in
@@ -140,25 +140,41 @@ EXPORT_SYMBOL_GPL(x86_spec_ctrl_set);
 
 u64 x86_spec_ctrl_get_default(void)
 {
-	return x86_spec_ctrl_base;
+	u64 msrval = x86_spec_ctrl_base;
+
+	if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)
+		msrval |= rds_tif_to_spec_ctrl(current_thread_info()->flags);
+	return msrval;
 }
 EXPORT_SYMBOL_GPL(x86_spec_ctrl_get_default);
 
 void x86_spec_ctrl_set_guest(u64 guest_spec_ctrl)
 {
+	u64 host = x86_spec_ctrl_base;
+
 	if (!boot_cpu_has(X86_FEATURE_IBRS))
 		return;
-	if (x86_spec_ctrl_base != guest_spec_ctrl)
+
+	if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)
+		host |= rds_tif_to_spec_ctrl(current_thread_info()->flags);
+
+	if (host != guest_spec_ctrl)
 		wrmsrl(MSR_IA32_SPEC_CTRL, guest_spec_ctrl);
 }
 EXPORT_SYMBOL_GPL(x86_spec_ctrl_set_guest);
 
 void x86_spec_ctrl_restore_host(u64 guest_spec_ctrl)
 {
+	u64 host = x86_spec_ctrl_base;
+
 	if (!boot_cpu_has(X86_FEATURE_IBRS))
 		return;
-	if (x86_spec_ctrl_base != guest_spec_ctrl)
-		wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
+
+	if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)
+		host |= rds_tif_to_spec_ctrl(current_thread_info()->flags);
+
+	if (host != guest_spec_ctrl)
+		wrmsrl(MSR_IA32_SPEC_CTRL, host);
 }
 EXPORT_SYMBOL_GPL(x86_spec_ctrl_restore_host);

--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -38,6 +38,7 @@
 #include <asm/switch_to.h>
 #include <asm/desc.h>
 #include <asm/prctl.h>
+#include <asm/spec-ctrl.h>
 
 /*
  * per-CPU TSS segments. Threads are completely 'soft' on Linux,
@@ -278,6 +279,24 @@ static inline void switch_to_bitmap(stru
 	}
 }
 
+static __always_inline void __speculative_store_bypass_update(unsigned long tifn)
+{
+	u64 msr;
+
+	if (static_cpu_has(X86_FEATURE_AMD_RDS)) {
+		msr = x86_amd_ls_cfg_base | rds_tif_to_amd_ls_cfg(tifn);
+		wrmsrl(MSR_AMD64_LS_CFG, msr);
+	} else {
+		msr = x86_spec_ctrl_base | rds_tif_to_spec_ctrl(tifn);
+		wrmsrl(MSR_IA32_SPEC_CTRL, msr);
+	}
+}
+
+void speculative_store_bypass_update(void)
+{
+	__speculative_store_bypass_update(current_thread_info()->flags);
+}
+
 void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
 		      struct tss_struct *tss)
 {
@@ -309,6 +328,9 @@ void __switch_to_xtra(struct task_struct
 
 	if ((tifp ^ tifn) & _TIF_NOCPUID)
 		set_cpuid_faulting(!!(tifn & _TIF_NOCPUID));
+
+	if ((tifp ^ tifn) & _TIF_RDS)
+		__speculative_store_bypass_update(tifn);
 }
 
 /*
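
For illustration only (not part of the patch): the translation done by
rds_tif_to_spec_ctrl() relies on TIF_RDS being bit 5 of the thread-info
flags while SPEC_CTRL_RDS is bit 2 of MSR_IA32_SPEC_CTRL, so masking out
_TIF_RDS and shifting right by TIF_RDS - SPEC_CTRL_RDS_SHIFT (= 3) yields
exactly the MSR bit. A minimal standalone C sketch of that arithmetic
follows; the constants are copied from the hunks above, but the userspace
harness around them is hypothetical and only demonstrates the mapping.

/* Standalone sketch of the rds_tif_to_spec_ctrl() bit translation.
 * TIF_RDS (bit 5 of the thread flags) is shifted down into the
 * SPEC_CTRL_RDS position (bit 2 of MSR_IA32_SPEC_CTRL). Not kernel code.
 */
#include <stdint.h>
#include <stdio.h>

#define TIF_RDS			5
#define _TIF_RDS		(1UL << TIF_RDS)
#define SPEC_CTRL_RDS_SHIFT	2
#define SPEC_CTRL_RDS		(1UL << SPEC_CTRL_RDS_SHIFT)

static uint64_t rds_tif_to_spec_ctrl(uint64_t tifn)
{
	/* Isolate the TIF_RDS bit and move it to the SPEC_CTRL_RDS position. */
	return (tifn & _TIF_RDS) >> (TIF_RDS - SPEC_CTRL_RDS_SHIFT);
}

int main(void)
{
	uint64_t flags_on  = _TIF_RDS;	/* task opted into the mitigation */
	uint64_t flags_off = 0;		/* task did not */

	/* Prints 0x4 (SPEC_CTRL_RDS) for the opted-in task and 0 otherwise. */
	printf("on:  %#llx\n", (unsigned long long)rds_tif_to_spec_ctrl(flags_on));
	printf("off: %#llx\n", (unsigned long long)rds_tif_to_spec_ctrl(flags_off));
	return 0;
}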