From: Tim Chen
To: Jiri Kosina, Thomas Gleixner
Cc: Thomas Lendacky, Tom Lendacky, Ingo Molnar, Peter Zijlstra,
	Josh Poimboeuf, Andrea Arcangeli, David Woodhouse, Andi Kleen,
	Dave Hansen, Casey Schaufler, Asit Mallick, Arjan van de Ven,
	Jon Masters, linux-kernel@vger.kernel.org, x86@kernel.org, Tim Chen
Subject: [Patch v2 3/4] x86/speculation: Extend per process STIBP to AMD cpus.
Date: Tue, 25 Sep 2018 17:43:58 -0700
Message-Id: <705b51cba5b5e7805aeb08af7f7d21e6ec897a17.1537920575.git.tim.c.chen@linux.intel.com>

From: Thomas Lendacky

Extend the app-to-app Spectre v2 mitigation using STIBP to AMD CPUs.

AMD CPUs may update SSBD through a mechanism other than the SPEC_CTRL
MSR. Handle those cases so that updating SSBD and STIBP does not write
the SPEC_CTRL MSR twice.

Originally-by: Thomas Lendacky
Signed-off-by: Tim Chen
---
 arch/x86/kernel/process.c | 48 +++++++++++++++++++++++++++++++++++++----------
 1 file changed, 38 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index cb24014..4a3a672 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -399,6 +399,10 @@ static __always_inline void set_spec_ctrl_state(unsigned long tifn)
 {
 	u64 msr = x86_spec_ctrl_base;
 
+	/*
+	 * AMD cpu may have used a different method to update SSBD, so
+	 * we need to be sure we are using the SPEC_CTRL MSR for SSBD.
+	 */
 	if (static_cpu_has(X86_FEATURE_SSBD))
 		msr |= ssbd_tif_to_spec_ctrl(tifn);
 
@@ -408,20 +412,45 @@ static __always_inline void set_spec_ctrl_state(unsigned long tifn)
 	wrmsrl(MSR_IA32_SPEC_CTRL, msr);
 }
 
-static __always_inline void __speculative_store_bypass_update(unsigned long tifn)
+static __always_inline void __speculative_store_bypass_update(unsigned long tifp,
+							       unsigned long tifn)
 {
-	if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
-		amd_set_ssb_virt_state(tifn);
-	else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
-		amd_set_core_ssb_state(tifn);
-	else
-		set_spec_ctrl_state(tifn);
+	bool stibp = !!((tifp ^ tifn) & _TIF_STIBP);
+	bool ssbd = !!((tifp ^ tifn) & _TIF_SSBD);
+
+	if (!ssbd && !stibp)
+		return;
+
+	if (ssbd) {
+		/*
+		 * For AMD, try these methods first. The ssbd variable will
+		 * reflect if the SPEC_CTRL MSR method is needed.
+		 */
+		ssbd = false;
+
+		if (static_cpu_has(X86_FEATURE_VIRT_SSBD))
+			amd_set_ssb_virt_state(tifn);
+		else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
+			amd_set_core_ssb_state(tifn);
+		else
+			ssbd = true;
+	}
+
+	/* Avoid a possible extra MSR write, recheck the flags */
+	if (!ssbd && !stibp)
+		return;
+
+	set_spec_ctrl_state(tifn);
 }
 
 void speculative_store_bypass_update(unsigned long tif)
 {
+	/*
+	 * On this path we're forcing the update, so use ~tif as the
+	 * previous flags.
+	 */
 	preempt_disable();
-	__speculative_store_bypass_update(tif);
+	__speculative_store_bypass_update(~tif, tif);
 	preempt_enable();
 }
 
@@ -457,8 +486,7 @@ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
 	if ((tifp ^ tifn) & _TIF_NOCPUID)
 		set_cpuid_faulting(!!(tifn & _TIF_NOCPUID));
 
-	if ((tifp ^ tifn) & (_TIF_SSBD | _TIF_STIBP))
-		__speculative_store_bypass_update(tifn);
+	__speculative_store_bypass_update(tifp, tifn);
 }
 
 /*
-- 
2.9.4
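
A minimal, self-contained user-space sketch of the flag-diff idea in this
patch, for illustration only: XOR the previous and next flag words and skip
the (simulated) SPEC_CTRL write when neither speculation bit changed. The
bit positions and the msr_write() helper below are made-up stand-ins, not
the kernel's definitions.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical bit positions, chosen only for this sketch. */
#define TIF_SSBD	(1UL << 0)
#define TIF_STIBP	(1UL << 1)

/* Stand-in for wrmsrl(MSR_IA32_SPEC_CTRL, ...). */
static void msr_write(unsigned long next)
{
	printf("SPEC_CTRL write for flags 0x%lx\n", next);
}

static void spec_update(unsigned long prev, unsigned long next)
{
	bool stibp = !!((prev ^ next) & TIF_STIBP);
	bool ssbd = !!((prev ^ next) & TIF_SSBD);

	/* Neither speculation bit changed: skip the MSR write entirely. */
	if (!ssbd && !stibp)
		return;

	msr_write(next);
}

int main(void)
{
	spec_update(0, TIF_STIBP);		/* STIBP toggled -> one write */
	spec_update(TIF_SSBD, TIF_SSBD);	/* no change -> no write */
	return 0;
}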