Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1754853AbdIGKiH (ORCPT ); Thu, 7 Sep 2017 06:38:07 -0400
Received: from terminus.zytor.com ([65.50.211.136]:51565 "EHLO terminus.zytor.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1754633AbdIGKiF (ORCPT ); Thu, 7 Sep 2017 06:38:05 -0400
Date: Thu, 7 Sep 2017 03:34:21 -0700
From: tip-bot for Borislav Petkov
Message-ID:
Cc: tglx@linutronix.de, hpa@zytor.com, bp@suse.de, linux-kernel@vger.kernel.org,
	boris.ostrovsky@oracle.com, brijesh.singh@amd.com, peterz@infradead.org,
	mingo@kernel.org, Thomas.Lendacky@amd.com, torvalds@linux-foundation.org
Reply-To: mingo@kernel.org, Thomas.Lendacky@amd.com, torvalds@linux-foundation.org,
	peterz@infradead.org, brijesh.singh@amd.com, boris.ostrovsky@oracle.com,
	linux-kernel@vger.kernel.org, bp@suse.de, hpa@zytor.com, tglx@linutronix.de
In-Reply-To: <20170907093837.76zojtkgebwtqc74@pd.tnic>
References: <20170907093837.76zojtkgebwtqc74@pd.tnic>
To: linux-tip-commits@vger.kernel.org
Subject: [tip:x86/urgent] x86/mm: Make the SME mask a u64
Git-Commit-ID: 21d9bb4a05bac50fb4f850517af4030baecd00f6
X-Mailer: tip-git-log-daemon
Robot-ID:
Robot-Unsubscribe: Contact to get blacklisted from these emails
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset=UTF-8
Content-Disposition: inline
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Length: 3831
Lines: 110

Commit-ID:  21d9bb4a05bac50fb4f850517af4030baecd00f6
Gitweb:     http://git.kernel.org/tip/21d9bb4a05bac50fb4f850517af4030baecd00f6
Author:     Borislav Petkov
AuthorDate: Thu, 7 Sep 2017 11:38:37 +0200
Committer:  Ingo Molnar
CommitDate: Thu, 7 Sep 2017 11:53:11 +0200

x86/mm: Make the SME mask a u64

The SME encryption mask is for masking 64-bit pagetable entries. It being
an unsigned long works fine on X86_64, but on 32-bit builds it truncates
bits, leading to Xen guests crashing very early.

And regardless, the whole SME mask handling shouldn't have leaked into
32-bit because SME is an X86_64-only feature. So, first make the mask u64.
And then, add trivial 32-bit versions of the __sme_* macros so that
nothing happens there.
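To make the breakage concrete, here is a minimal userspace sketch (hypothetical
code, not from this patch; OLD_SME_SET/NEW_SME_SET and the uint32_t stand-in
for a 32-bit unsigned long are illustrative names). It mimics the old
__sme_set() on a 32-bit build: the (unsigned long) cast silently drops the
upper half of a 64-bit PAE pagetable entry, including the NX bit at bit 63.

#include <stdint.h>
#include <stdio.h>

/*
 * Sketch of the bug. On a 32-bit target, unsigned long is 32 bits wide,
 * so the cast in the old macro truncates 64-bit PAE pagetable entries.
 * uint32_t is used here so the truncation reproduces on any host.
 */
typedef uint32_t ulong32;		/* stand-in for 32-bit unsigned long */

static ulong32 sme_me_mask32;		/* old: unsigned long mask, 0 on 32-bit */

#define OLD_SME_SET(x)	((ulong32)(x) | sme_me_mask32)	/* truncates x */
#define NEW_SME_SET(x)	(x)		/* the fix: 32-bit no-op, no cast */

int main(void)
{
	uint64_t pte = 0x8000000123456067ULL;	/* PAE entry, NX bit set */

	printf("old: %#llx\n", (unsigned long long)OLD_SME_SET(pte));
	printf("new: %#llx\n", (unsigned long long)NEW_SME_SET(pte));
	return 0;
}

The old variant prints 0x23456067 -- the high address bits and the NX bit are
gone, which is the kind of corruption that crashed the Xen guests; the new
variant preserves the full 0x8000000123456067.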
Reported-and-tested-by: Boris Ostrovsky
Tested-by: Brijesh Singh
Signed-off-by: Borislav Petkov
Acked-by: Tom Lendacky
Acked-by: Thomas Gleixner
Cc: Linus Torvalds
Cc: Peter Zijlstra
Cc: Thomas
Fixes: 21729f81ce8a ("x86/mm: Provide general kernel support for memory encryption")
Link: http://lkml.kernel.org/r/20170907093837.76zojtkgebwtqc74@pd.tnic
Signed-off-by: Ingo Molnar
---
 arch/x86/include/asm/mem_encrypt.h |  4 ++--
 arch/x86/mm/mem_encrypt.c          |  2 +-
 include/linux/mem_encrypt.h        | 13 +++++++++----
 3 files changed, 12 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 8e618fc..6a77c63 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -21,7 +21,7 @@
 
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 
-extern unsigned long sme_me_mask;
+extern u64 sme_me_mask;
 
 void sme_encrypt_execute(unsigned long encrypted_kernel_vaddr,
 			 unsigned long decrypted_kernel_vaddr,
@@ -49,7 +49,7 @@ void swiotlb_set_mem_attributes(void *vaddr, unsigned long size);
 
 #else	/* !CONFIG_AMD_MEM_ENCRYPT */
 
-#define sme_me_mask	0UL
+#define sme_me_mask	0ULL
 
 static inline void __init sme_early_encrypt(resource_size_t paddr,
 					    unsigned long size) { }
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 0fbd092..3fcc8e0 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -37,7 +37,7 @@ static char sme_cmdline_off[] __initdata = "off";
  * reside in the .data section so as not to be zeroed out when the .bss
  * section is later cleared.
  */
-unsigned long sme_me_mask __section(.data) = 0;
+u64 sme_me_mask __section(.data) = 0;
 EXPORT_SYMBOL_GPL(sme_me_mask);
 
 /* Buffer used for early in-place encryption by BSP, no locking needed */
diff --git a/include/linux/mem_encrypt.h b/include/linux/mem_encrypt.h
index 1255f09..265a9cd 100644
--- a/include/linux/mem_encrypt.h
+++ b/include/linux/mem_encrypt.h
@@ -21,7 +21,7 @@
 
 #else	/* !CONFIG_ARCH_HAS_MEM_ENCRYPT */
 
-#define sme_me_mask	0UL
+#define sme_me_mask	0ULL
 
 #endif	/* CONFIG_ARCH_HAS_MEM_ENCRYPT */
 
@@ -30,18 +30,23 @@ static inline bool sme_active(void)
 	return !!sme_me_mask;
 }
 
-static inline unsigned long sme_get_me_mask(void)
+static inline u64 sme_get_me_mask(void)
 {
 	return sme_me_mask;
 }
 
+#ifdef CONFIG_AMD_MEM_ENCRYPT
 /*
  * The __sme_set() and __sme_clr() macros are useful for adding or removing
  * the encryption mask from a value (e.g. when dealing with pagetable
  * entries).
  */
-#define __sme_set(x)		((unsigned long)(x) | sme_me_mask)
-#define __sme_clr(x)		((unsigned long)(x) & ~sme_me_mask)
+#define __sme_set(x)		((x) | sme_me_mask)
+#define __sme_clr(x)		((x) & ~sme_me_mask)
+#else
+#define __sme_set(x)		(x)
+#define __sme_clr(x)		(x)
+#endif
 
 #endif	/* __ASSEMBLY__ */
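As a usage note, here is a small hypothetical sketch of how the fixed macros
behave with a u64 mask (userspace mock-up, not kernel code; the C-bit position
47 and the sample entry value are illustrative assumptions): __sme_set() ORs
the encryption mask into a pagetable-entry value, __sme_clr() removes it, and
with the mask a u64 the arithmetic stays 64-bit on every build.

#include <stdint.h>
#include <stdio.h>

typedef uint64_t u64;

static u64 sme_me_mask = 1ULL << 47;	/* assumed C-bit position, illustrative */

/* Mirrors the fixed CONFIG_AMD_MEM_ENCRYPT=y definitions: no truncating cast. */
#define __sme_set(x)	((x) | sme_me_mask)
#define __sme_clr(x)	((x) & ~sme_me_mask)

int main(void)
{
	u64 pte = 0x123456067ULL;	/* sample 64-bit pagetable entry */
	u64 enc = __sme_set(pte);	/* encryption bit set */

	printf("plain:     %#llx\n", (unsigned long long)pte);
	printf("encrypted: %#llx\n", (unsigned long long)enc);
	printf("cleared:   %#llx\n", (unsigned long long)__sme_clr(enc));
	return 0;
}

The round trip is lossless: clearing the mask recovers the original entry,
and on 32-bit builds the real kernel macros compile to plain (x), so such
code has no effect there at all.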