From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
 stable@vger.kernel.org,
 Thomas Gleixner <tglx@linutronix.de>,
 Borislav Petkov <bp@suse.de>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [PATCH 4.16 099/110] x86/speculation: Handle HT correctly on AMD
Date: Mon, 21 May 2018 23:12:36 +0200
Message-Id: <20180521210514.471629102@linuxfoundation.org>
X-Mailer: git-send-email 2.17.0
In-Reply-To: <20180521210503.823249477@linuxfoundation.org>
References: <20180521210503.823249477@linuxfoundation.org>
User-Agent: quilt/0.65
X-stable: review
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

4.16-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Thomas Gleixner <tglx@linutronix.de>

commit 1f50ddb4f4189243c05926b842dc1a0332195f31 upstream

The AMD64_LS_CFG MSR is a per core MSR on Family 17H CPUs. That means
when hyperthreading is enabled the SSBD bit toggle needs to take both
cores into account. Otherwise the following situation can happen:

CPU0            CPU1

disable SSB
                disable SSB
                enable  SSB <- Enables it for the Core, i.e. for CPU0 as well

So after the SSB enable on CPU1 the task on CPU0 runs with SSB enabled
again.

On Intel the SSBD control is per core as well, but the synchronization
logic is implemented behind the per thread SPEC_CTRL MSR. It works like
this:

  CORE_SPEC_CTRL = THREAD0_SPEC_CTRL | THREAD1_SPEC_CTRL

i.e. if one of the threads enables a mitigation then this affects both and
the mitigation is only disabled in the core when both threads disabled it.

Add the necessary synchronization logic for AMD family 17H. Unfortunately
that requires a spinlock to serialize the access to the MSR, but the locks
are only shared between siblings.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/x86/include/asm/spec-ctrl.h |    6 +
 arch/x86/kernel/process.c        |  125 +++++++++++++++++++++++++++++++++++++--
 arch/x86/kernel/smpboot.c        |    5 +
 3 files changed, 130 insertions(+), 6 deletions(-)
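
[Not part of the patch, only an aid for review: the first/last-sibling
refcounting that amd_set_core_ssb_state() implements below can be modelled
in a few lines of user-space C. This is a hypothetical sketch, not kernel
code; disable_state and msr_ssbd merely stand in for the shared per-core
state and the SSBD bit in MSR_AMD64_LS_CFG.]

/*
 * Illustrative user-space model of the per-core SSBD refcounting.
 * Two sibling "CPUs" share one counter behind a lock; the first one
 * to enable SSBD sets the core-wide bit, the last one to disable it
 * clears it again.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned int disable_state;      /* siblings that want SSBD on */
static bool msr_ssbd;                   /* models the core-wide SSBD bit */

static void ssbd_enable(void)           /* task disables SSB */
{
        pthread_mutex_lock(&lock);
        if (!disable_state)             /* first sibling sets the bit */
                msr_ssbd = true;
        disable_state++;
        pthread_mutex_unlock(&lock);
}

static void ssbd_disable(void)          /* task re-enables SSB */
{
        pthread_mutex_lock(&lock);
        disable_state--;
        if (!disable_state)             /* last sibling clears the bit */
                msr_ssbd = false;
        pthread_mutex_unlock(&lock);
}

int main(void)
{
        ssbd_enable();                  /* CPU0: disable SSB */
        ssbd_enable();                  /* CPU1: disable SSB */
        ssbd_disable();                 /* CPU1: enable SSB again */
        printf("core SSBD: %d\n", msr_ssbd);    /* still 1, CPU0 stays protected */
        ssbd_disable();                 /* CPU0: enable SSB */
        printf("core SSBD: %d\n", msr_ssbd);    /* now 0 */
        return 0;
}

[This is exactly the OR semantics from the changelog: the core-wide bit
only drops when the last sibling releases its reference, which avoids the
CPU0/CPU1 race shown above.]
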
--- a/arch/x86/include/asm/spec-ctrl.h
+++ b/arch/x86/include/asm/spec-ctrl.h
@@ -33,6 +33,12 @@ static inline u64 ssbd_tif_to_amd_ls_cfg
 	return (tifn & _TIF_SSBD) ? x86_amd_ls_cfg_ssbd_mask : 0ULL;
 }
 
+#ifdef CONFIG_SMP
+extern void speculative_store_bypass_ht_init(void);
+#else
+static inline void speculative_store_bypass_ht_init(void) { }
+#endif
+
 extern void speculative_store_bypass_update(void);
 
 #endif
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -279,22 +279,135 @@ static inline void switch_to_bitmap(stru
 	}
 }
 
-static __always_inline void __speculative_store_bypass_update(unsigned long tifn)
+#ifdef CONFIG_SMP
+
+struct ssb_state {
+	struct ssb_state	*shared_state;
+	raw_spinlock_t		lock;
+	unsigned int		disable_state;
+	unsigned long		local_state;
+};
+
+#define LSTATE_SSB	0
+
+static DEFINE_PER_CPU(struct ssb_state, ssb_state);
+
+void speculative_store_bypass_ht_init(void)
+{
+	struct ssb_state *st = this_cpu_ptr(&ssb_state);
+	unsigned int this_cpu = smp_processor_id();
+	unsigned int cpu;
+
+	st->local_state = 0;
+
+	/*
+	 * Shared state setup happens once on the first bringup
+	 * of the CPU. It's not destroyed on CPU hotunplug.
+	 */
+	if (st->shared_state)
+		return;
+
+	raw_spin_lock_init(&st->lock);
+
+	/*
+	 * Go over HT siblings and check whether one of them has set up the
+	 * shared state pointer already.
+	 */
+	for_each_cpu(cpu, topology_sibling_cpumask(this_cpu)) {
+		if (cpu == this_cpu)
+			continue;
+
+		if (!per_cpu(ssb_state, cpu).shared_state)
+			continue;
+
+		/* Link it to the state of the sibling: */
+		st->shared_state = per_cpu(ssb_state, cpu).shared_state;
+		return;
+	}
+
+	/*
+	 * First HT sibling to come up on the core. Link shared state of
+	 * the first HT sibling to itself. The siblings on the same core
+	 * which come up later will see the shared state pointer and link
+	 * themselves to the state of this CPU.
+	 */
+	st->shared_state = st;
+}
+
+/*
+ * Logic is: First HT sibling enables SSBD for both siblings in the core
+ * and last sibling to disable it, disables it for the whole core. This is
+ * how MSR_SPEC_CTRL works in "hardware":
+ *
+ *  CORE_SPEC_CTRL = THREAD0_SPEC_CTRL | THREAD1_SPEC_CTRL
+ */
+static __always_inline void amd_set_core_ssb_state(unsigned long tifn)
 {
-	u64 msr;
+	struct ssb_state *st = this_cpu_ptr(&ssb_state);
+	u64 msr = x86_amd_ls_cfg_base;
 
-	if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD)) {
-		msr = x86_amd_ls_cfg_base | ssbd_tif_to_amd_ls_cfg(tifn);
+	if (!static_cpu_has(X86_FEATURE_ZEN)) {
+		msr |= ssbd_tif_to_amd_ls_cfg(tifn);
 		wrmsrl(MSR_AMD64_LS_CFG, msr);
+		return;
+	}
+
+	if (tifn & _TIF_SSBD) {
+		/*
+		 * Since this can race with prctl(), block reentry on the
+		 * same CPU.
+		 */
+		if (__test_and_set_bit(LSTATE_SSB, &st->local_state))
+			return;
+
+		msr |= x86_amd_ls_cfg_ssbd_mask;
+
+		raw_spin_lock(&st->shared_state->lock);
+		/* First sibling enables SSBD: */
+		if (!st->shared_state->disable_state)
+			wrmsrl(MSR_AMD64_LS_CFG, msr);
+		st->shared_state->disable_state++;
+		raw_spin_unlock(&st->shared_state->lock);
 	} else {
-		msr = x86_spec_ctrl_base | ssbd_tif_to_spec_ctrl(tifn);
-		wrmsrl(MSR_IA32_SPEC_CTRL, msr);
+		if (!__test_and_clear_bit(LSTATE_SSB, &st->local_state))
+			return;
+
+		raw_spin_lock(&st->shared_state->lock);
+		st->shared_state->disable_state--;
+		if (!st->shared_state->disable_state)
+			wrmsrl(MSR_AMD64_LS_CFG, msr);
+		raw_spin_unlock(&st->shared_state->lock);
 	}
 }
+#else
+static __always_inline void amd_set_core_ssb_state(unsigned long tifn)
+{
+	u64 msr = x86_amd_ls_cfg_base | ssbd_tif_to_amd_ls_cfg(tifn);
+
+	wrmsrl(MSR_AMD64_LS_CFG, msr);
+}
+#endif
+
+static __always_inline void intel_set_ssb_state(unsigned long tifn)
+{
+	u64 msr = x86_spec_ctrl_base | ssbd_tif_to_spec_ctrl(tifn);
+
+	wrmsrl(MSR_IA32_SPEC_CTRL, msr);
+}
+
+static __always_inline void __speculative_store_bypass_update(unsigned long tifn)
+{
+	if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD))
+		amd_set_core_ssb_state(tifn);
+	else
+		intel_set_ssb_state(tifn);
+}
 
 void speculative_store_bypass_update(void)
 {
+	preempt_disable();
 	__speculative_store_bypass_update(current_thread_info()->flags);
+	preempt_enable();
 }
 
 void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -77,6 +77,7 @@
 #include <asm/i8259.h>
 #include <asm/misc.h>
 #include <asm/qspinlock.h>
+#include <asm/spec-ctrl.h>
 
 /* Number of siblings per CPU package */
 int smp_num_siblings = 1;
@@ -242,6 +243,8 @@ static void notrace start_secondary(void
 	 */
 	check_tsc_sync_target();
 
+	speculative_store_bypass_ht_init();
+
 	/*
 	 * Lock vector_lock, set CPU online and bring the vector
 	 * allocator online. Online must be set with vector_lock held
@@ -1257,6 +1260,8 @@ void __init native_smp_prepare_cpus(unsi
 	set_mtrr_aps_delayed_init();
 
 	smp_quirk_init_udelay();
+
+	speculative_store_bypass_ht_init();
 }
 
 void arch_enable_nonboot_cpus_begin(void)
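
[Also illustrative only: the shared_state discovery performed by
speculative_store_bypass_ht_init() amounts to "adopt a sibling's state if
one is already up, otherwise publish your own". NCPUS, core_of() and
ht_init() below are made-up stand-ins for the real topology helpers
(topology_sibling_cpumask() and friends), not kernel code.]

/*
 * Illustrative user-space model of the shared_state linking: the first
 * sibling of a core to come up points shared_state at itself; siblings
 * that come up later find that pointer and adopt it.
 */
#include <stdio.h>

#define NCPUS 4

struct ssb_state {
        struct ssb_state *shared_state;
};

static struct ssb_state state[NCPUS];

/* Assume two siblings per core: CPUs 0/1 form core 0, CPUs 2/3 core 1. */
static int core_of(int cpu) { return cpu / 2; }

static void ht_init(int this_cpu)
{
        for (int cpu = 0; cpu < NCPUS; cpu++) {
                if (cpu == this_cpu || core_of(cpu) != core_of(this_cpu))
                        continue;
                if (state[cpu].shared_state) {
                        /* A sibling came up first: link to its state. */
                        state[this_cpu].shared_state = state[cpu].shared_state;
                        return;
                }
        }
        /* First sibling on this core: the shared state is our own. */
        state[this_cpu].shared_state = &state[this_cpu];
}

int main(void)
{
        for (int cpu = 0; cpu < NCPUS; cpu++)
                ht_init(cpu);
        /* CPUs 0/1 share state[0]; CPUs 2/3 share state[2]. */
        printf("%d %d\n", state[1].shared_state == &state[0],
                          state[3].shared_state == &state[2]);
        return 0;
}

[Compiled and run, this prints "1 1": each CPU that came up second on its
core linked itself to the state of the first one, which is the invariant
the per-core lock and disable_state counter in amd_set_core_ssb_state()
rely on.]
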