From: Tim Chen <tim.c.chen@linux.intel.com>
To: Thomas Gleixner, Andy Lutomirski, Linus Torvalds, Greg KH
Cc: Tim Chen, Dave Hansen, Andrea Arcangeli, Andi Kleen,
	Arjan Van De Ven, David Woodhouse, Peter Zijlstra,
	Dan Williams, Paolo Bonzini, Ashok Raj, linux-kernel@vger.kernel.org
Subject: [PATCH v3 2/5] x86/enter: Create macros to set/clear IBRS
Date: Tue, 9 Jan 2018 18:26:46 -0800
Message-Id: <3aab341725ee6a9aafd3141387453b45d788d61a.1515542293.git.tim.c.chen@linux.intel.com>

Create macros to control IBRS.  Use these macros to enable IBRS on
kernel entry paths and disable IBRS on kernel exit paths.

Writing the SPEC_CTRL MSR touches rax, rcx and rdx, so those
registers have to be saved and restored around the write on paths
where they cannot be clobbered.

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
 arch/x86/entry/calling.h | 74 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 74 insertions(+)

diff --git a/arch/x86/entry/calling.h b/arch/x86/entry/calling.h
index 45a63e0..3b9b238 100644
--- a/arch/x86/entry/calling.h
+++ b/arch/x86/entry/calling.h
@@ -6,6 +6,8 @@
 #include <asm/percpu.h>
 #include <asm/asm-offsets.h>
 #include <asm/processor-flags.h>
+#include <asm/msr-index.h>
+#include <asm/spec_ctrl.h>
 
 /*
 
@@ -347,3 +349,75 @@ For 32-bit we have the following conventions - kernel is built with
 .Lafter_call_\@:
 #endif
 .endm
+
+/*
+ * IBRS related macros
+ */
+
+.macro PUSH_MSR_REGS
+	pushq	%rax
+	pushq	%rcx
+	pushq	%rdx
+.endm
+
+.macro POP_MSR_REGS
+	popq	%rdx
+	popq	%rcx
+	popq	%rax
+.endm
+
+.macro WRMSR_ASM msr_nr:req edx_val:req eax_val:req
+	movl	\msr_nr, %ecx
+	movl	\edx_val, %edx
+	movl	\eax_val, %eax
+	wrmsr
+.endm
+
+.macro ENABLE_IBRS
+	ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_SPEC_CTRL_IBRS
+	PUSH_MSR_REGS
+	WRMSR_ASM $MSR_IA32_SPEC_CTRL, $0, $SPEC_CTRL_ENABLE_IBRS
+	POP_MSR_REGS
+.Lskip_\@:
+.endm
+
+.macro DISABLE_IBRS
+	ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_SPEC_CTRL_IBRS
+	PUSH_MSR_REGS
+	WRMSR_ASM $MSR_IA32_SPEC_CTRL, $0, $SPEC_CTRL_DISABLE_IBRS
+	POP_MSR_REGS
+.Lskip_\@:
+.endm
+
+.macro ENABLE_IBRS_CLOBBER
+	ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_SPEC_CTRL_IBRS
+	WRMSR_ASM $MSR_IA32_SPEC_CTRL, $0, $SPEC_CTRL_ENABLE_IBRS
+.Lskip_\@:
+.endm
+
+.macro DISABLE_IBRS_CLOBBER
+	ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_SPEC_CTRL_IBRS
+	WRMSR_ASM $MSR_IA32_SPEC_CTRL, $0, $SPEC_CTRL_DISABLE_IBRS
+.Lskip_\@:
+.endm
+
+.macro ENABLE_IBRS_SAVE_AND_CLOBBER save_reg:req
+	ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_SPEC_CTRL_IBRS
+	movl	$MSR_IA32_SPEC_CTRL, %ecx
+	rdmsr
+	movl	%eax, \save_reg
+	movl	$0, %edx
+	movl	$SPEC_CTRL_ENABLE_IBRS, %eax
+	wrmsr
+.Lskip_\@:
+.endm
+
+.macro RESTORE_IBRS_CLOBBER save_reg:req
+	ALTERNATIVE "jmp .Lskip_\@", "", X86_FEATURE_SPEC_CTRL_IBRS
+	/* Set IBRS to the value saved in the save_reg */
+	movl	$MSR_IA32_SPEC_CTRL, %ecx
+	movl	$0, %edx
+	movl	\save_reg, %eax
+	wrmsr
+.Lskip_\@:
+.endm
-- 
2.9.4
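
(Illustrative note, not part of the patch.)  A minimal sketch of how a
call site might use these macros, assuming the label below is a
hypothetical stand-in; the real entry-code hunks presumably come
elsewhere in this series:

	hypothetical_entry_stub:
		/* pt_regs already saved here, so rax/rcx/rdx are dead
		 * and the clobbering variant is enough */
		ENABLE_IBRS_CLOBBER

		/* ... handle the syscall/exception in C ... */

		/* rax now holds the return value, so use the variant
		 * that preserves rax/rcx/rdx via PUSH/POP_MSR_REGS */
		DISABLE_IBRS

On a reentrant path (e.g. NMI) where the previous MSR value must be put
back rather than unconditionally cleared, the save/restore pair would be
used instead, with a 32-bit scratch register that survives the
intervening code (%r13d here is only an example):

		ENABLE_IBRS_SAVE_AND_CLOBBER save_reg=%r13d
		/* ... */
		RESTORE_IBRS_CLOBBER save_reg=%r13d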