From: Tim Chen
To: Thomas Gleixner, Andy Lutomirski, Linus Torvalds, Greg KH
Cc: Tim Chen, Dave Hansen, Andrea Arcangeli, Andi Kleen,
    Arjan Van De Ven, David Woodhouse, linux-kernel@vger.kernel.org
Subject: [PATCH v2 5/8] x86/idle: Disable IBRS entering idle and enable it on wakeup
Date: Fri, 5 Jan 2018 18:12:20 -0800
Message-Id: <70dff66b881b83beb0edd492a8accfec252561e2.1515204614.git.tim.c.chen@linux.intel.com>
X-Mailer: git-send-email 2.9.4
X-Mailing-List: linux-kernel@vger.kernel.org

Clear IBRS when the CPU enters mwait idle and set it again on exit.
While a CPU thread is in mwait it is not running, but if IBRS is left
set it degrades the performance of the sibling hardware thread on the
same core. So disable IBRS on idle entry and re-enable it on wakeup.
Signed-off-by: Tim Chen
---
 arch/x86/include/asm/mwait.h     | 13 +++++++++++++
 arch/x86/include/asm/spec_ctrl.h | 35 +++++++++++++++++++++++++++++++++++
 arch/x86/kernel/process.c        |  9 +++++++--
 3 files changed, 55 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/mwait.h b/arch/x86/include/asm/mwait.h
index 39a2fb2..f28f2ea 100644
--- a/arch/x86/include/asm/mwait.h
+++ b/arch/x86/include/asm/mwait.h
@@ -6,6 +6,7 @@
 #include <linux/sched.h>
 #include <linux/sched/idle.h>
+#include <asm/spec_ctrl.h>
 
 #define MWAIT_SUBSTATE_MASK		0xf
 #define MWAIT_CSTATE_MASK		0xf
@@ -106,9 +107,21 @@ static inline void mwait_idle_with_hints(unsigned long eax, unsigned long ecx)
 			mb();
 		}
 
+		/*
+		 * CPUs run faster with speculation protection
+		 * disabled. All CPU threads in a core must
+		 * disable speculation protection for it to be
+		 * disabled. Disable it while we are idle so the
+		 * other hyperthread can run fast.
+		 *
+		 * Interrupts have been disabled at this point.
+		 */
+
+		unprotected_speculation_begin();
 		__monitor((void *)&current_thread_info()->flags, 0, 0);
 		if (!need_resched())
 			__mwait(eax, ecx);
+		unprotected_speculation_end();
 	}
 	current_clr_polling();
 }
diff --git a/arch/x86/include/asm/spec_ctrl.h b/arch/x86/include/asm/spec_ctrl.h
index 4fda38b..62c5dc8 100644
--- a/arch/x86/include/asm/spec_ctrl.h
+++ b/arch/x86/include/asm/spec_ctrl.h
@@ -12,4 +12,39 @@
 bool ibrs_inuse(void);
 extern unsigned int dynamic_ibrs;
 
+static inline void __disable_indirect_speculation(void)
+{
+	native_wrmsrl(MSR_IA32_SPEC_CTRL, SPEC_CTRL_FEATURE_ENABLE_IBRS);
+}
+
+static inline void __enable_indirect_speculation(void)
+{
+	native_wrmsrl(MSR_IA32_SPEC_CTRL, SPEC_CTRL_FEATURE_DISABLE_IBRS);
+}
+
+/*
+ * Interrupts must be disabled to begin unprotected speculation.
+ * Otherwise interrupts could come in and start running in
+ * unprotected mode.
+ */
+
+static inline void unprotected_speculation_begin(void)
+{
+	lockdep_assert_irqs_disabled();
+	if (dynamic_ibrs)
+		__enable_indirect_speculation();
+}
+
+static inline void unprotected_speculation_end(void)
+{
+	if (dynamic_ibrs) {
+		__disable_indirect_speculation();
+	} else {
+		/*
+		 * rmb prevents unwanted speculation when we
+		 * are setting IBRS
+		 */
+		rmb();
+	}
+}
+
 #endif /* _ASM_X86_SPEC_CTRL_H */
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index aed9d94..4b1ac7c 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -39,6 +39,7 @@
 #include <asm/switch_to.h>
 #include <asm/desc.h>
 #include <asm/prctl.h>
+#include <asm/spec_ctrl.h>
 
 /*
  * per-CPU TSS segments. Threads are completely 'soft' on Linux,
@@ -461,11 +462,15 @@ static __cpuidle void mwait_idle(void)
 			mb(); /* quirk */
 		}
 
+		unprotected_speculation_begin();
 		__monitor((void *)&current_thread_info()->flags, 0, 0);
-		if (!need_resched())
+		if (!need_resched()) {
 			__sti_mwait(0, 0);
-		else
+			unprotected_speculation_end();
+		} else {
+			unprotected_speculation_end();
 			local_irq_enable();
+		}
 		trace_cpu_idle_rcuidle(PWR_EVENT_EXIT, smp_processor_id());
 	} else {
 		local_irq_enable();
-- 
2.9.4