2021-09-08 23:02:18

by Tom Lendacky

Subject: [PATCH v3 0/8] Implement generic cc_platform_has() helper function

This patch series provides a generic helper function, cc_platform_has(),
to replace the sme_active(), sev_active(), sev_es_active() and
mem_encrypt_active() functions.

It is expected that, as new confidential computing technologies are
added to the kernel, they can all be covered by this single function
call instead of by a collection of technology-specific calls made from
the same locations.
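
As a rough sketch of the intended call-site change (do_encrypted_setup()
below is only a placeholder, not a function from this series):

  /* Before: one technology-specific check per memory encryption feature */
  if (sme_active() || sev_active())
          do_encrypted_setup();

  /* After: a single, attribute-based check */
  if (cc_platform_has(CC_ATTR_MEM_ENCRYPT))
          do_encrypted_setup();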

The powerpc and s390 patches have been compile-tested only. Could the
folks copied on this series verify that nothing breaks for them? Also,
a new file, arch/powerpc/platforms/pseries/cc_platform.c, has been
created for powerpc to hold the out-of-line function.

Cc: Andi Kleen <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Baoquan He <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Christian Borntraeger <[email protected]>
Cc: Daniel Vetter <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Dave Young <[email protected]>
Cc: David Airlie <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Joerg Roedel <[email protected]>
Cc: Maarten Lankhorst <[email protected]>
Cc: Maxime Ripard <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Thomas Zimmermann <[email protected]>
Cc: Vasily Gorbik <[email protected]>
Cc: VMware Graphics <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Christoph Hellwig <[email protected]>

---

Patches based on:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
4b93c544e90e ("thunderbolt: test: split up test cases in tb_test_credit_alloc_all")

Changes since v2:
- Changed the name from prot_guest_has() to cc_platform_has()
- Took the cc_platform_has() function out of line. Created two new files,
cc_platform.c, one each for x86 and powerpc, to implement the function. As
a result, also changed the attribute defines into enums.
- Removed any received Reviewed-by's and Acked-by's given changes in this
version.
- Added removal of new instances of mem_encrypt_active() usage in powerpc
arch.
- Based on latest Linux tree to pick up powerpc changes related to the
mem_encrypt_active() function.

Changes since v1:
- Moved some arch ioremap functions within #ifdef CONFIG_AMD_MEM_ENCRYPT
in prep for use of prot_guest_has() by TDX.
- Added type includes to the protected_guest.h header file to prevent
build errors outside of x86.
- Made amd_prot_guest_has() EXPORT_SYMBOL_GPL
- Used amd_prot_guest_has() in place of checking sme_me_mask in the
arch/x86/mm/mem_encrypt.c file.

Tom Lendacky (8):
x86/ioremap: Selectively build arch override encryption functions
mm: Introduce a function to check for confidential computing features
x86/sev: Add an x86 version of cc_platform_has()
powerpc/pseries/svm: Add a powerpc version of cc_platform_has()
x86/sme: Replace occurrences of sme_active() with cc_platform_has()
x86/sev: Replace occurrences of sev_active() with cc_platform_has()
x86/sev: Replace occurrences of sev_es_active() with cc_platform_has()
treewide: Replace the use of mem_encrypt_active() with
cc_platform_has()

arch/Kconfig | 3 +
arch/powerpc/include/asm/mem_encrypt.h | 5 --
arch/powerpc/platforms/pseries/Kconfig | 1 +
arch/powerpc/platforms/pseries/Makefile | 2 +
arch/powerpc/platforms/pseries/cc_platform.c | 26 ++++++
arch/powerpc/platforms/pseries/svm.c | 5 +-
arch/s390/include/asm/mem_encrypt.h | 2 -
arch/x86/Kconfig | 1 +
arch/x86/include/asm/io.h | 8 ++
arch/x86/include/asm/kexec.h | 2 +-
arch/x86/include/asm/mem_encrypt.h | 14 +---
arch/x86/kernel/Makefile | 3 +
arch/x86/kernel/cc_platform.c | 21 +++++
arch/x86/kernel/crash_dump_64.c | 4 +-
arch/x86/kernel/head64.c | 4 +-
arch/x86/kernel/kvm.c | 3 +-
arch/x86/kernel/kvmclock.c | 4 +-
arch/x86/kernel/machine_kexec_64.c | 19 +++--
arch/x86/kernel/pci-swiotlb.c | 9 +-
arch/x86/kernel/relocate_kernel_64.S | 2 +-
arch/x86/kernel/sev.c | 6 +-
arch/x86/kvm/svm/svm.c | 3 +-
arch/x86/mm/ioremap.c | 18 ++--
arch/x86/mm/mem_encrypt.c | 57 +++++++------
arch/x86/mm/mem_encrypt_identity.c | 3 +-
arch/x86/mm/pat/set_memory.c | 3 +-
arch/x86/platform/efi/efi_64.c | 9 +-
arch/x86/realmode/init.c | 8 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 4 +-
drivers/gpu/drm/drm_cache.c | 4 +-
drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 4 +-
drivers/gpu/drm/vmwgfx/vmwgfx_msg.c | 6 +-
drivers/iommu/amd/init.c | 7 +-
drivers/iommu/amd/iommu.c | 3 +-
drivers/iommu/amd/iommu_v2.c | 3 +-
drivers/iommu/iommu.c | 3 +-
fs/proc/vmcore.c | 6 +-
include/linux/cc_platform.h | 88 ++++++++++++++++++++
include/linux/mem_encrypt.h | 4 -
kernel/dma/swiotlb.c | 4 +-
40 files changed, 267 insertions(+), 114 deletions(-)
create mode 100644 arch/powerpc/platforms/pseries/cc_platform.c
create mode 100644 arch/x86/kernel/cc_platform.c
create mode 100644 include/linux/cc_platform.h


base-commit: 4b93c544e90e2b28326182d31ee008eb80e02074
--
2.33.0


2021-09-08 23:02:33

by Tom Lendacky

Subject: [PATCH v3 1/8] x86/ioremap: Selectively build arch override encryption functions

In preparation for other uses of the cc_platform_has() function besides AMD's
memory encryption support, selectively build the AMD memory encryption
architecture override functions only when CONFIG_AMD_MEM_ENCRYPT=y. These
functions are:
- early_memremap_pgprot_adjust()
- arch_memremap_can_ram_remap()

Additionally, routines that are only invoked by these architecture
override functions can also be conditionally built. These functions are:
- memremap_should_map_decrypted()
- memremap_is_efi_data()
- memremap_is_setup_data()
- early_memremap_is_setup_data()

And finally, phys_mem_access_encrypted() is conditionally built as well,
but requires a static inline version of it when CONFIG_AMD_MEM_ENCRYPT is
not set.

Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Signed-off-by: Tom Lendacky <[email protected]>
---
arch/x86/include/asm/io.h | 8 ++++++++
arch/x86/mm/ioremap.c | 2 +-
2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h
index 841a5d104afa..5c6a4af0b911 100644
--- a/arch/x86/include/asm/io.h
+++ b/arch/x86/include/asm/io.h
@@ -391,6 +391,7 @@ extern void arch_io_free_memtype_wc(resource_size_t start, resource_size_t size)
#define arch_io_reserve_memtype_wc arch_io_reserve_memtype_wc
#endif

+#ifdef CONFIG_AMD_MEM_ENCRYPT
extern bool arch_memremap_can_ram_remap(resource_size_t offset,
unsigned long size,
unsigned long flags);
@@ -398,6 +399,13 @@ extern bool arch_memremap_can_ram_remap(resource_size_t offset,

extern bool phys_mem_access_encrypted(unsigned long phys_addr,
unsigned long size);
+#else
+static inline bool phys_mem_access_encrypted(unsigned long phys_addr,
+ unsigned long size)
+{
+ return true;
+}
+#endif

/**
* iosubmit_cmds512 - copy data to single MMIO location, in 512-bit units
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 60ade7dd71bd..ccff76cedd8f 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -508,6 +508,7 @@ void unxlate_dev_mem_ptr(phys_addr_t phys, void *addr)
memunmap((void *)((unsigned long)addr & PAGE_MASK));
}

+#ifdef CONFIG_AMD_MEM_ENCRYPT
/*
* Examine the physical address to determine if it is an area of memory
* that should be mapped decrypted. If the memory is not part of the
@@ -746,7 +747,6 @@ bool phys_mem_access_encrypted(unsigned long phys_addr, unsigned long size)
return arch_memremap_can_ram_remap(phys_addr, size, 0);
}

-#ifdef CONFIG_AMD_MEM_ENCRYPT
/* Remap memory with encryption */
void __init *early_memremap_encrypted(resource_size_t phys_addr,
unsigned long size)
--
2.33.0

2021-09-08 23:04:39

by Tom Lendacky

Subject: [PATCH v3 7/8] x86/sev: Replace occurrences of sev_es_active() with cc_platform_has()

Replace uses of sev_es_active() with the more generic cc_platform_has()
using CC_ATTR_GUEST_STATE_ENCRYPT. If future support is added for other
memory encryption technologies, the use of CC_ATTR_GUEST_STATE_ENCRYPT
can be updated, as required.

Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Signed-off-by: Tom Lendacky <[email protected]>
---
arch/x86/include/asm/mem_encrypt.h | 2 --
arch/x86/kernel/sev.c | 6 +++---
arch/x86/mm/mem_encrypt.c | 14 ++++----------
arch/x86/realmode/init.c | 3 +--
4 files changed, 8 insertions(+), 17 deletions(-)

diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index f440eebeeb2c..499440781b39 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -51,7 +51,6 @@ void __init mem_encrypt_free_decrypted_mem(void);
void __init mem_encrypt_init(void);

void __init sev_es_init_vc_handling(void);
-bool sev_es_active(void);
bool amd_cc_platform_has(enum cc_attr attr);

#define __bss_decrypted __section(".bss..decrypted")
@@ -75,7 +74,6 @@ static inline void __init sme_encrypt_kernel(struct boot_params *bp) { }
static inline void __init sme_enable(struct boot_params *bp) { }

static inline void sev_es_init_vc_handling(void) { }
-static inline bool sev_es_active(void) { return false; }
static inline bool amd_cc_platform_has(enum cc_attr attr) { return false; }

static inline int __init
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index a6895e440bc3..53a6837d354b 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -11,7 +11,7 @@

#include <linux/sched/debug.h> /* For show_regs() */
#include <linux/percpu-defs.h>
-#include <linux/mem_encrypt.h>
+#include <linux/cc_platform.h>
#include <linux/printk.h>
#include <linux/mm_types.h>
#include <linux/set_memory.h>
@@ -615,7 +615,7 @@ int __init sev_es_efi_map_ghcbs(pgd_t *pgd)
int cpu;
u64 pfn;

- if (!sev_es_active())
+ if (!cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT))
return 0;

pflags = _PAGE_NX | _PAGE_RW;
@@ -774,7 +774,7 @@ void __init sev_es_init_vc_handling(void)

BUILD_BUG_ON(offsetof(struct sev_es_runtime_data, ghcb_page) % PAGE_SIZE);

- if (!sev_es_active())
+ if (!cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT))
return;

if (!sev_es_check_cpu_features())
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 22d4e152a6de..47d571a2cd28 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -373,13 +373,6 @@ int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size)
* up under SME the trampoline area cannot be encrypted, whereas under SEV
* the trampoline area must be encrypted.
*/
-
-/* Needs to be called from non-instrumentable code */
-bool noinstr sev_es_active(void)
-{
- return sev_status & MSR_AMD64_SEV_ES_ENABLED;
-}
-
bool amd_cc_platform_has(enum cc_attr attr)
{
switch (attr) {
@@ -393,7 +386,7 @@ bool amd_cc_platform_has(enum cc_attr attr)
return sev_status & MSR_AMD64_SEV_ENABLED;

case CC_ATTR_GUEST_STATE_ENCRYPT:
- return sev_es_active();
+ return sev_status & MSR_AMD64_SEV_ES_ENABLED;

default:
return false;
@@ -469,7 +462,7 @@ static void print_mem_encrypt_feature_info(void)
pr_cont(" SEV");

/* Encrypted Register State */
- if (sev_es_active())
+ if (cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT))
pr_cont(" SEV-ES");

pr_cont("\n");
@@ -488,7 +481,8 @@ void __init mem_encrypt_init(void)
* With SEV, we need to unroll the rep string I/O instructions,
* but SEV-ES supports them through the #VC handler.
*/
- if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT) && !sev_es_active())
+ if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT) &&
+ !cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT))
static_branch_enable(&sev_enable_key);

print_mem_encrypt_feature_info();
diff --git a/arch/x86/realmode/init.c b/arch/x86/realmode/init.c
index c878c5ee5a4c..4a3da7592b99 100644
--- a/arch/x86/realmode/init.c
+++ b/arch/x86/realmode/init.c
@@ -2,7 +2,6 @@
#include <linux/io.h>
#include <linux/slab.h>
#include <linux/memblock.h>
-#include <linux/mem_encrypt.h>
#include <linux/cc_platform.h>
#include <linux/pgtable.h>

@@ -48,7 +47,7 @@ static void sme_sev_setup_real_mode(struct trampoline_header *th)
if (cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
th->flags |= TH_FLAGS_SME_ACTIVE;

- if (sev_es_active()) {
+ if (cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT)) {
/*
* Skip the call to verify_cpu() in secondary_startup_64 as it
* will cause #VC exceptions when the AP can't handle them yet.
--
2.33.0

2021-09-08 23:04:46

by Tom Lendacky

Subject: [PATCH v3 8/8] treewide: Replace the use of mem_encrypt_active() with cc_platform_has()

Replace uses of mem_encrypt_active() with calls to cc_platform_has() with
the CC_ATTR_MEM_ENCRYPT attribute.

Remove the implementation of mem_encrypt_active() across all arches.

For s390, since the default implementation of cc_platform_has()
matches the s390 implementation of mem_encrypt_active(), cc_platform_has()
does not need to be implemented in s390 (the config option
ARCH_HAS_CC_PLATFORM is not set).
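
In other words, without ARCH_HAS_CC_PLATFORM an s390 caller now gets the
generic stub from <linux/cc_platform.h>, which returns false exactly as
the removed s390 helper did (both lines are taken from the diffs in this
series):

  static inline bool cc_platform_has(enum cc_attr attr) { return false; }

  static inline bool mem_encrypt_active(void) { return false; }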

Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: David Airlie <[email protected]>
Cc: Daniel Vetter <[email protected]>
Cc: Maarten Lankhorst <[email protected]>
Cc: Maxime Ripard <[email protected]>
Cc: Thomas Zimmermann <[email protected]>
Cc: VMware Graphics <[email protected]>
Cc: Joerg Roedel <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Dave Young <[email protected]>
Cc: Baoquan He <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Vasily Gorbik <[email protected]>
Cc: Christian Borntraeger <[email protected]>
Signed-off-by: Tom Lendacky <[email protected]>
---
arch/powerpc/include/asm/mem_encrypt.h | 5 -----
arch/powerpc/platforms/pseries/svm.c | 5 +++--
arch/s390/include/asm/mem_encrypt.h | 2 --
arch/x86/include/asm/mem_encrypt.h | 5 -----
arch/x86/kernel/head64.c | 4 ++--
arch/x86/mm/ioremap.c | 4 ++--
arch/x86/mm/mem_encrypt.c | 2 +-
arch/x86/mm/pat/set_memory.c | 3 ++-
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 4 +++-
drivers/gpu/drm/drm_cache.c | 4 ++--
drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 4 ++--
drivers/gpu/drm/vmwgfx/vmwgfx_msg.c | 6 +++---
drivers/iommu/amd/iommu.c | 3 ++-
drivers/iommu/amd/iommu_v2.c | 3 ++-
drivers/iommu/iommu.c | 3 ++-
fs/proc/vmcore.c | 6 +++---
include/linux/mem_encrypt.h | 4 ----
kernel/dma/swiotlb.c | 4 ++--
18 files changed, 31 insertions(+), 40 deletions(-)

diff --git a/arch/powerpc/include/asm/mem_encrypt.h b/arch/powerpc/include/asm/mem_encrypt.h
index ba9dab07c1be..2f26b8fc8d29 100644
--- a/arch/powerpc/include/asm/mem_encrypt.h
+++ b/arch/powerpc/include/asm/mem_encrypt.h
@@ -10,11 +10,6 @@

#include <asm/svm.h>

-static inline bool mem_encrypt_active(void)
-{
- return is_secure_guest();
-}
-
static inline bool force_dma_unencrypted(struct device *dev)
{
return is_secure_guest();
diff --git a/arch/powerpc/platforms/pseries/svm.c b/arch/powerpc/platforms/pseries/svm.c
index 87f001b4c4e4..c083ecbbae4d 100644
--- a/arch/powerpc/platforms/pseries/svm.c
+++ b/arch/powerpc/platforms/pseries/svm.c
@@ -8,6 +8,7 @@

#include <linux/mm.h>
#include <linux/memblock.h>
+#include <linux/cc_platform.h>
#include <asm/machdep.h>
#include <asm/svm.h>
#include <asm/swiotlb.h>
@@ -63,7 +64,7 @@ void __init svm_swiotlb_init(void)

int set_memory_encrypted(unsigned long addr, int numpages)
{
- if (!mem_encrypt_active())
+ if (!cc_platform_has(CC_ATTR_MEM_ENCRYPT))
return 0;

if (!PAGE_ALIGNED(addr))
@@ -76,7 +77,7 @@ int set_memory_encrypted(unsigned long addr, int numpages)

int set_memory_decrypted(unsigned long addr, int numpages)
{
- if (!mem_encrypt_active())
+ if (!cc_platform_has(CC_ATTR_MEM_ENCRYPT))
return 0;

if (!PAGE_ALIGNED(addr))
diff --git a/arch/s390/include/asm/mem_encrypt.h b/arch/s390/include/asm/mem_encrypt.h
index 2542cbf7e2d1..08a8b96606d7 100644
--- a/arch/s390/include/asm/mem_encrypt.h
+++ b/arch/s390/include/asm/mem_encrypt.h
@@ -4,8 +4,6 @@

#ifndef __ASSEMBLY__

-static inline bool mem_encrypt_active(void) { return false; }
-
int set_memory_encrypted(unsigned long addr, int numpages);
int set_memory_decrypted(unsigned long addr, int numpages);

diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 499440781b39..ed954aa5c448 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -98,11 +98,6 @@ static inline void mem_encrypt_free_decrypted_mem(void) { }

extern char __start_bss_decrypted[], __end_bss_decrypted[], __start_bss_decrypted_unused[];

-static inline bool mem_encrypt_active(void)
-{
- return sme_me_mask;
-}
-
static inline u64 sme_get_me_mask(void)
{
return sme_me_mask;
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index de01903c3735..f98c76a1d16c 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -19,7 +19,7 @@
#include <linux/start_kernel.h>
#include <linux/io.h>
#include <linux/memblock.h>
-#include <linux/mem_encrypt.h>
+#include <linux/cc_platform.h>
#include <linux/pgtable.h>

#include <asm/processor.h>
@@ -285,7 +285,7 @@ unsigned long __head __startup_64(unsigned long physaddr,
* there is no need to zero it after changing the memory encryption
* attribute.
*/
- if (mem_encrypt_active()) {
+ if (cc_platform_has(CC_ATTR_MEM_ENCRYPT)) {
vaddr = (unsigned long)__start_bss_decrypted;
vaddr_end = (unsigned long)__end_bss_decrypted;
for (; vaddr < vaddr_end; vaddr += PMD_SIZE) {
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index b59a5cbc6bc5..026031b3b782 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -694,7 +694,7 @@ static bool __init early_memremap_is_setup_data(resource_size_t phys_addr,
bool arch_memremap_can_ram_remap(resource_size_t phys_addr, unsigned long size,
unsigned long flags)
{
- if (!mem_encrypt_active())
+ if (!cc_platform_has(CC_ATTR_MEM_ENCRYPT))
return true;

if (flags & MEMREMAP_ENC)
@@ -724,7 +724,7 @@ pgprot_t __init early_memremap_pgprot_adjust(resource_size_t phys_addr,
{
bool encrypted_prot;

- if (!mem_encrypt_active())
+ if (!cc_platform_has(CC_ATTR_MEM_ENCRYPT))
return prot;

encrypted_prot = true;
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 47d571a2cd28..7f09b86d2467 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -432,7 +432,7 @@ void __init mem_encrypt_free_decrypted_mem(void)
* The unused memory range was mapped decrypted, change the encryption
* attribute from decrypted to encrypted before freeing it.
*/
- if (mem_encrypt_active()) {
+ if (cc_platform_has(CC_ATTR_MEM_ENCRYPT)) {
r = set_memory_encrypted(vaddr, npages);
if (r) {
pr_warn("failed to free unused decrypted pages\n");
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index ad8a5c586a35..527957586f3c 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -18,6 +18,7 @@
#include <linux/libnvdimm.h>
#include <linux/vmstat.h>
#include <linux/kernel.h>
+#include <linux/cc_platform.h>

#include <asm/e820/api.h>
#include <asm/processor.h>
@@ -1986,7 +1987,7 @@ static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
int ret;

/* Nothing to do if memory encryption is not active */
- if (!mem_encrypt_active())
+ if (!cc_platform_has(CC_ATTR_MEM_ENCRYPT))
return 0;

/* Should not be working on unaligned addresses */
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index b6640291f980..c8973bbb7d3f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -38,6 +38,7 @@
#include <drm/drm_probe_helper.h>
#include <linux/mmu_notifier.h>
#include <linux/suspend.h>
+#include <linux/cc_platform.h>

#include "amdgpu.h"
#include "amdgpu_irq.h"
@@ -1252,7 +1253,8 @@ static int amdgpu_pci_probe(struct pci_dev *pdev,
* however, SME requires an indirect IOMMU mapping because the encryption
* bit is beyond the DMA mask of the chip.
*/
- if (mem_encrypt_active() && ((flags & AMD_ASIC_MASK) == CHIP_RAVEN)) {
+ if (cc_platform_has(CC_ATTR_MEM_ENCRYPT) &&
+ ((flags & AMD_ASIC_MASK) == CHIP_RAVEN)) {
dev_info(&pdev->dev,
"SME is not compatible with RAVEN\n");
return -ENOTSUPP;
diff --git a/drivers/gpu/drm/drm_cache.c b/drivers/gpu/drm/drm_cache.c
index 30cc59fe6ef7..f19d9acbe959 100644
--- a/drivers/gpu/drm/drm_cache.c
+++ b/drivers/gpu/drm/drm_cache.c
@@ -31,7 +31,7 @@
#include <linux/dma-buf-map.h>
#include <linux/export.h>
#include <linux/highmem.h>
-#include <linux/mem_encrypt.h>
+#include <linux/cc_platform.h>
#include <xen/xen.h>

#include <drm/drm_cache.h>
@@ -204,7 +204,7 @@ bool drm_need_swiotlb(int dma_bits)
* Enforce dma_alloc_coherent when memory encryption is active as well
* for the same reasons as for Xen paravirtual hosts.
*/
- if (mem_encrypt_active())
+ if (cc_platform_has(CC_ATTR_MEM_ENCRYPT))
return true;

for (tmp = iomem_resource.child; tmp; tmp = tmp->sibling)
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
index ab9a1750e1df..bfd71c86faa5 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
@@ -29,7 +29,7 @@
#include <linux/dma-mapping.h>
#include <linux/module.h>
#include <linux/pci.h>
-#include <linux/mem_encrypt.h>
+#include <linux/cc_platform.h>

#include <drm/drm_aperture.h>
#include <drm/drm_drv.h>
@@ -666,7 +666,7 @@ static int vmw_dma_select_mode(struct vmw_private *dev_priv)
[vmw_dma_map_bind] = "Giving up DMA mappings early."};

/* TTM currently doesn't fully support SEV encryption. */
- if (mem_encrypt_active())
+ if (cc_platform_has(CC_ATTR_MEM_ENCRYPT))
return -EINVAL;

if (vmw_force_coherent)
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c b/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c
index e50fb82a3030..2aceac7856e2 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_msg.c
@@ -28,7 +28,7 @@
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/slab.h>
-#include <linux/mem_encrypt.h>
+#include <linux/cc_platform.h>

#include <asm/hypervisor.h>
#include <drm/drm_ioctl.h>
@@ -160,7 +160,7 @@ static unsigned long vmw_port_hb_out(struct rpc_channel *channel,
unsigned long msg_len = strlen(msg);

/* HB port can't access encrypted memory. */
- if (hb && !mem_encrypt_active()) {
+ if (hb && !cc_platform_has(CC_ATTR_MEM_ENCRYPT)) {
unsigned long bp = channel->cookie_high;
u32 channel_id = (channel->channel_id << 16);

@@ -216,7 +216,7 @@ static unsigned long vmw_port_hb_in(struct rpc_channel *channel, char *reply,
unsigned long si, di, eax, ebx, ecx, edx;

/* HB port can't access encrypted memory */
- if (hb && !mem_encrypt_active()) {
+ if (hb && !cc_platform_has(CC_ATTR_MEM_ENCRYPT)) {
unsigned long bp = channel->cookie_low;
u32 channel_id = (channel->channel_id << 16);

diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index 1722bb161841..9e5da037d949 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -31,6 +31,7 @@
#include <linux/irqdomain.h>
#include <linux/percpu.h>
#include <linux/io-pgtable.h>
+#include <linux/cc_platform.h>
#include <asm/irq_remapping.h>
#include <asm/io_apic.h>
#include <asm/apic.h>
@@ -2238,7 +2239,7 @@ static int amd_iommu_def_domain_type(struct device *dev)
* active, because some of those devices (AMD GPUs) don't have the
* encryption bit in their DMA-mask and require remapping.
*/
- if (!mem_encrypt_active() && dev_data->iommu_v2)
+ if (!cc_platform_has(CC_ATTR_MEM_ENCRYPT) && dev_data->iommu_v2)
return IOMMU_DOMAIN_IDENTITY;

return 0;
diff --git a/drivers/iommu/amd/iommu_v2.c b/drivers/iommu/amd/iommu_v2.c
index a9e568276c99..13cbeb997cc1 100644
--- a/drivers/iommu/amd/iommu_v2.c
+++ b/drivers/iommu/amd/iommu_v2.c
@@ -17,6 +17,7 @@
#include <linux/wait.h>
#include <linux/pci.h>
#include <linux/gfp.h>
+#include <linux/cc_platform.h>

#include "amd_iommu.h"

@@ -742,7 +743,7 @@ int amd_iommu_init_device(struct pci_dev *pdev, int pasids)
* When memory encryption is active the device is likely not in a
* direct-mapped domain. Forbid using IOMMUv2 functionality for now.
*/
- if (mem_encrypt_active())
+ if (cc_platform_has(CC_ATTR_MEM_ENCRYPT))
return -ENODEV;

if (!amd_iommu_v2_supported())
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 3303d707bab4..e80261d17a49 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -25,6 +25,7 @@
#include <linux/property.h>
#include <linux/fsl/mc.h>
#include <linux/module.h>
+#include <linux/cc_platform.h>
#include <trace/events/iommu.h>

static struct kset *iommu_group_kset;
@@ -130,7 +131,7 @@ static int __init iommu_subsys_init(void)
else
iommu_set_default_translated(false);

- if (iommu_default_passthrough() && mem_encrypt_active()) {
+ if (iommu_default_passthrough() && cc_platform_has(CC_ATTR_MEM_ENCRYPT)) {
pr_info("Memory encryption detected - Disabling default IOMMU Passthrough\n");
iommu_set_default_translated(false);
}
diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
index 9a15334da208..cdbbf819d2d6 100644
--- a/fs/proc/vmcore.c
+++ b/fs/proc/vmcore.c
@@ -26,7 +26,7 @@
#include <linux/vmalloc.h>
#include <linux/pagemap.h>
#include <linux/uaccess.h>
-#include <linux/mem_encrypt.h>
+#include <linux/cc_platform.h>
#include <asm/io.h>
#include "internal.h"

@@ -177,7 +177,7 @@ ssize_t __weak elfcorehdr_read(char *buf, size_t count, u64 *ppos)
*/
ssize_t __weak elfcorehdr_read_notes(char *buf, size_t count, u64 *ppos)
{
- return read_from_oldmem(buf, count, ppos, 0, mem_encrypt_active());
+ return read_from_oldmem(buf, count, ppos, 0, cc_platform_has(CC_ATTR_MEM_ENCRYPT));
}

/*
@@ -378,7 +378,7 @@ static ssize_t __read_vmcore(char *buffer, size_t buflen, loff_t *fpos,
buflen);
start = m->paddr + *fpos - m->offset;
tmp = read_from_oldmem(buffer, tsz, &start,
- userbuf, mem_encrypt_active());
+ userbuf, cc_platform_has(CC_ATTR_MEM_ENCRYPT));
if (tmp < 0)
return tmp;
buflen -= tsz;
diff --git a/include/linux/mem_encrypt.h b/include/linux/mem_encrypt.h
index 5c4a18a91f89..ae4526389261 100644
--- a/include/linux/mem_encrypt.h
+++ b/include/linux/mem_encrypt.h
@@ -16,10 +16,6 @@

#include <asm/mem_encrypt.h>

-#else /* !CONFIG_ARCH_HAS_MEM_ENCRYPT */
-
-static inline bool mem_encrypt_active(void) { return false; }
-
#endif /* CONFIG_ARCH_HAS_MEM_ENCRYPT */

#ifdef CONFIG_AMD_MEM_ENCRYPT
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 87c40517e822..c4ca040fdb05 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -34,7 +34,7 @@
#include <linux/highmem.h>
#include <linux/gfp.h>
#include <linux/scatterlist.h>
-#include <linux/mem_encrypt.h>
+#include <linux/cc_platform.h>
#include <linux/set_memory.h>
#ifdef CONFIG_DEBUG_FS
#include <linux/debugfs.h>
@@ -552,7 +552,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
if (!mem)
panic("Can not allocate SWIOTLB buffer earlier and can't now provide you with the DMA bounce buffer");

- if (mem_encrypt_active())
+ if (cc_platform_has(CC_ATTR_MEM_ENCRYPT))
pr_warn_once("Memory encryption is active and system is using DMA bounce buffers\n");

if (mapping_size > alloc_size) {
--
2.33.0

2021-09-08 23:05:11

by Tom Lendacky

Subject: [PATCH v3 2/8] mm: Introduce a function to check for confidential computing features

In preparation for other confidential computing technologies, introduce a generic
helper function, cc_platform_has(), that can be used to check for specific
active confidential computing attributes, like memory encryption. This is
intended to eliminate having to add multiple technology-specific checks to
the code (e.g. if (sev_active() || tdx_active())).
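
As a sketch of how a new technology would then plug in without touching
callers (tdx_guest_active() and tdx_cc_platform_has() are hypothetical
names used only for illustration; only the amd_cc_platform_has() hand-off
is part of this series):

  bool cc_platform_has(enum cc_attr attr)
  {
          if (tdx_guest_active())
                  return tdx_cc_platform_has(attr);

          if (sme_me_mask)
                  return amd_cc_platform_has(attr);

          return false;
  }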

Co-developed-by: Andi Kleen <[email protected]>
Signed-off-by: Andi Kleen <[email protected]>
Co-developed-by: Kuppuswamy Sathyanarayanan <[email protected]>
Signed-off-by: Kuppuswamy Sathyanarayanan <[email protected]>
Signed-off-by: Tom Lendacky <[email protected]>
---
arch/Kconfig | 3 ++
include/linux/cc_platform.h | 88 +++++++++++++++++++++++++++++++++++++
2 files changed, 91 insertions(+)
create mode 100644 include/linux/cc_platform.h

diff --git a/arch/Kconfig b/arch/Kconfig
index 3743174da870..ca7c359e5da8 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -1234,6 +1234,9 @@ config RELR
config ARCH_HAS_MEM_ENCRYPT
bool

+config ARCH_HAS_CC_PLATFORM
+ bool
+
config HAVE_SPARSE_SYSCALL_NR
bool
help
diff --git a/include/linux/cc_platform.h b/include/linux/cc_platform.h
new file mode 100644
index 000000000000..253f3ea66cd8
--- /dev/null
+++ b/include/linux/cc_platform.h
@@ -0,0 +1,88 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Confidential Computing Platform Capability checks
+ *
+ * Copyright (C) 2021 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky <[email protected]>
+ */
+
+#ifndef _CC_PLATFORM_H
+#define _CC_PLATFORM_H
+
+#include <linux/types.h>
+#include <linux/stddef.h>
+
+/**
+ * enum cc_attr - Confidential computing attributes
+ *
+ * These attributes represent confidential computing features that are
+ * currently active.
+ */
+enum cc_attr {
+ /**
+ * @CC_ATTR_MEM_ENCRYPT: Memory encryption is active
+ *
+ * The platform/OS is running with active memory encryption. This
+ * includes running either as a bare-metal system or a hypervisor
+ * and actively using memory encryption or as a guest/virtual machine
+ * and actively using memory encryption.
+ *
+ * Examples include SME, SEV and SEV-ES.
+ */
+ CC_ATTR_MEM_ENCRYPT,
+
+ /**
+ * @CC_ATTR_HOST_MEM_ENCRYPT: Host memory encryption is active
+ *
+ * The platform/OS is running as a bare-metal system or a hypervisor
+ * and actively using memory encryption.
+ *
+ * Examples include SME.
+ */
+ CC_ATTR_HOST_MEM_ENCRYPT,
+
+ /**
+ * @CC_ATTR_GUEST_MEM_ENCRYPT: Guest memory encryption is active
+ *
+ * The platform/OS is running as a guest/virtual machine and actively
+ * using memory encryption.
+ *
+ * Examples include SEV and SEV-ES.
+ */
+ CC_ATTR_GUEST_MEM_ENCRYPT,
+
+ /**
+ * @CC_ATTR_GUEST_STATE_ENCRYPT: Guest state encryption is active
+ *
+ * The platform/OS is running as a guest/virtual machine and actively
+ * using memory encryption and register state encryption.
+ *
+ * Examples include SEV-ES.
+ */
+ CC_ATTR_GUEST_STATE_ENCRYPT,
+};
+
+#ifdef CONFIG_ARCH_HAS_CC_PLATFORM
+
+/**
+ * cc_platform_has() - Checks if the specified cc_attr attribute is active
+ * @attr: Confidential computing attribute to check
+ *
+ * The cc_platform_has() function will return an indicator as to whether the
+ * specified Confidential Computing attribute is currently active.
+ *
+ * Context: Any context
+ * Return:
+ * * TRUE - Specified Confidential Computing attribute is active
+ * * FALSE - Specified Confidential Computing attribute is not active
+ */
+bool cc_platform_has(enum cc_attr attr);
+
+#else /* !CONFIG_ARCH_HAS_CC_PLATFORM */
+
+static inline bool cc_platform_has(enum cc_attr attr) { return false; }
+
+#endif /* CONFIG_ARCH_HAS_CC_PLATFORM */
+
+#endif /* _CC_PLATFORM_H */
--
2.33.0

2021-09-08 23:05:22

by Tom Lendacky

Subject: [PATCH v3 3/8] x86/sev: Add an x86 version of cc_platform_has()

Introduce an x86 version of the cc_platform_has() function. This will be
used to replace vendor-specific calls like sme_active(), sev_active(),
etc.
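
To make the dispatch concrete, on an SME-enabled host a check such as
cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT) resolves as follows (all names
are from the diffs below):

  cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT)
    -> sme_me_mask is non-zero, so amd_cc_platform_has(attr) is called
    -> amd_cc_platform_has() returns sme_active(), i.e. true when SME is active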

Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Co-developed-by: Andi Kleen <[email protected]>
Signed-off-by: Andi Kleen <[email protected]>
Co-developed-by: Kuppuswamy Sathyanarayanan <[email protected]>
Signed-off-by: Kuppuswamy Sathyanarayanan <[email protected]>
Signed-off-by: Tom Lendacky <[email protected]>
---
arch/x86/Kconfig | 1 +
arch/x86/include/asm/mem_encrypt.h | 3 +++
arch/x86/kernel/Makefile | 3 +++
arch/x86/kernel/cc_platform.c | 21 +++++++++++++++++++++
arch/x86/mm/mem_encrypt.c | 21 +++++++++++++++++++++
5 files changed, 49 insertions(+)
create mode 100644 arch/x86/kernel/cc_platform.c

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 4e001bbbb425..2b2a9639d8ae 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1513,6 +1513,7 @@ config AMD_MEM_ENCRYPT
select ARCH_HAS_FORCE_DMA_UNENCRYPTED
select INSTRUCTION_DECODER
select ARCH_HAS_RESTRICTED_VIRTIO_MEMORY_ACCESS
+ select ARCH_HAS_CC_PLATFORM
help
Say yes to enable support for the encryption of system memory.
This requires an AMD processor that supports Secure Memory
diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 9c80c68d75b5..3d8a5e8b2e3f 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -13,6 +13,7 @@
#ifndef __ASSEMBLY__

#include <linux/init.h>
+#include <linux/cc_platform.h>

#include <asm/bootparam.h>

@@ -53,6 +54,7 @@ void __init sev_es_init_vc_handling(void);
bool sme_active(void);
bool sev_active(void);
bool sev_es_active(void);
+bool amd_cc_platform_has(enum cc_attr attr);

#define __bss_decrypted __section(".bss..decrypted")

@@ -78,6 +80,7 @@ static inline void sev_es_init_vc_handling(void) { }
static inline bool sme_active(void) { return false; }
static inline bool sev_active(void) { return false; }
static inline bool sev_es_active(void) { return false; }
+static inline bool amd_cc_platform_has(enum cc_attr attr) { return false; }

static inline int __init
early_set_memory_decrypted(unsigned long vaddr, unsigned long size) { return 0; }
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 8f4e8fa6ed75..f91403a78594 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -147,6 +147,9 @@ obj-$(CONFIG_UNWINDER_FRAME_POINTER) += unwind_frame.o
obj-$(CONFIG_UNWINDER_GUESS) += unwind_guess.o

obj-$(CONFIG_AMD_MEM_ENCRYPT) += sev.o
+
+obj-$(CONFIG_ARCH_HAS_CC_PLATFORM) += cc_platform.o
+
###
# 64 bit specific files
ifeq ($(CONFIG_X86_64),y)
diff --git a/arch/x86/kernel/cc_platform.c b/arch/x86/kernel/cc_platform.c
new file mode 100644
index 000000000000..3c9bacd3c3f3
--- /dev/null
+++ b/arch/x86/kernel/cc_platform.c
@@ -0,0 +1,21 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Confidential Computing Platform Capability checks
+ *
+ * Copyright (C) 2021 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky <[email protected]>
+ */
+
+#include <linux/export.h>
+#include <linux/cc_platform.h>
+#include <linux/mem_encrypt.h>
+
+bool cc_platform_has(enum cc_attr attr)
+{
+ if (sme_me_mask)
+ return amd_cc_platform_has(attr);
+
+ return false;
+}
+EXPORT_SYMBOL_GPL(cc_platform_has);
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index ff08dc463634..18fe19916bc3 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -20,6 +20,7 @@
#include <linux/bitops.h>
#include <linux/dma-mapping.h>
#include <linux/virtio_config.h>
+#include <linux/cc_platform.h>

#include <asm/tlbflush.h>
#include <asm/fixmap.h>
@@ -389,6 +390,26 @@ bool noinstr sev_es_active(void)
return sev_status & MSR_AMD64_SEV_ES_ENABLED;
}

+bool amd_cc_platform_has(enum cc_attr attr)
+{
+ switch (attr) {
+ case CC_ATTR_MEM_ENCRYPT:
+ return sme_me_mask != 0;
+
+ case CC_ATTR_HOST_MEM_ENCRYPT:
+ return sme_active();
+
+ case CC_ATTR_GUEST_MEM_ENCRYPT:
+ return sev_active();
+
+ case CC_ATTR_GUEST_STATE_ENCRYPT:
+ return sev_es_active();
+
+ default:
+ return false;
+ }
+}
+
/* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
bool force_dma_unencrypted(struct device *dev)
{
--
2.33.0

2021-09-08 23:11:53

by Tom Lendacky

Subject: [PATCH v3 6/8] x86/sev: Replace occurrences of sev_active() with cc_platform_has()

Replace uses of sev_active() with the more generic cc_platform_has()
using CC_ATTR_GUEST_MEM_ENCRYPT. If future support is added for other
memory encryption technologies, the use of CC_ATTR_GUEST_MEM_ENCRYPT
can be updated, as required.

Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Signed-off-by: Tom Lendacky <[email protected]>
---
arch/x86/include/asm/mem_encrypt.h | 2 --
arch/x86/kernel/crash_dump_64.c | 4 +++-
arch/x86/kernel/kvm.c | 3 ++-
arch/x86/kernel/kvmclock.c | 4 ++--
arch/x86/kernel/machine_kexec_64.c | 4 ++--
arch/x86/kvm/svm/svm.c | 3 ++-
arch/x86/mm/ioremap.c | 6 +++---
arch/x86/mm/mem_encrypt.c | 25 ++++++++++---------------
arch/x86/platform/efi/efi_64.c | 9 +++++----
9 files changed, 29 insertions(+), 31 deletions(-)

diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 8c4f0dfe63f9..f440eebeeb2c 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -51,7 +51,6 @@ void __init mem_encrypt_free_decrypted_mem(void);
void __init mem_encrypt_init(void);

void __init sev_es_init_vc_handling(void);
-bool sev_active(void);
bool sev_es_active(void);
bool amd_cc_platform_has(enum cc_attr attr);

@@ -76,7 +75,6 @@ static inline void __init sme_encrypt_kernel(struct boot_params *bp) { }
static inline void __init sme_enable(struct boot_params *bp) { }

static inline void sev_es_init_vc_handling(void) { }
-static inline bool sev_active(void) { return false; }
static inline bool sev_es_active(void) { return false; }
static inline bool amd_cc_platform_has(enum cc_attr attr) { return false; }

diff --git a/arch/x86/kernel/crash_dump_64.c b/arch/x86/kernel/crash_dump_64.c
index 045e82e8945b..a7f617a3981d 100644
--- a/arch/x86/kernel/crash_dump_64.c
+++ b/arch/x86/kernel/crash_dump_64.c
@@ -10,6 +10,7 @@
#include <linux/crash_dump.h>
#include <linux/uaccess.h>
#include <linux/io.h>
+#include <linux/cc_platform.h>

static ssize_t __copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
unsigned long offset, int userbuf,
@@ -73,5 +74,6 @@ ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,

ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos)
{
- return read_from_oldmem(buf, count, ppos, 0, sev_active());
+ return read_from_oldmem(buf, count, ppos, 0,
+ cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT));
}
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index a26643dc6bd6..509a578f56a0 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -27,6 +27,7 @@
#include <linux/nmi.h>
#include <linux/swait.h>
#include <linux/syscore_ops.h>
+#include <linux/cc_platform.h>
#include <asm/timer.h>
#include <asm/cpu.h>
#include <asm/traps.h>
@@ -418,7 +419,7 @@ static void __init sev_map_percpu_data(void)
{
int cpu;

- if (!sev_active())
+ if (!cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
return;

for_each_possible_cpu(cpu) {
diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
index ad273e5861c1..fc3930c5db1b 100644
--- a/arch/x86/kernel/kvmclock.c
+++ b/arch/x86/kernel/kvmclock.c
@@ -16,9 +16,9 @@
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/set_memory.h>
+#include <linux/cc_platform.h>

#include <asm/hypervisor.h>
-#include <asm/mem_encrypt.h>
#include <asm/x86_init.h>
#include <asm/kvmclock.h>

@@ -232,7 +232,7 @@ static void __init kvmclock_init_mem(void)
* hvclock is shared between the guest and the hypervisor, must
* be mapped decrypted.
*/
- if (sev_active()) {
+ if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT)) {
r = set_memory_decrypted((unsigned long) hvclock_mem,
1UL << order);
if (r) {
diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
index 7040c0fa921c..f5da4a18070a 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -167,7 +167,7 @@ static int init_transition_pgtable(struct kimage *image, pgd_t *pgd)
}
pte = pte_offset_kernel(pmd, vaddr);

- if (sev_active())
+ if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
prot = PAGE_KERNEL_EXEC;

set_pte(pte, pfn_pte(paddr >> PAGE_SHIFT, prot));
@@ -207,7 +207,7 @@ static int init_pgtable(struct kimage *image, unsigned long start_pgtable)
level4p = (pgd_t *)__va(start_pgtable);
clear_page(level4p);

- if (sev_active()) {
+ if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT)) {
info.page_flag |= _PAGE_ENC;
info.kernpg_flag |= _PAGE_ENC;
}
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 69639f9624f5..eb3669154b48 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -25,6 +25,7 @@
#include <linux/pagemap.h>
#include <linux/swap.h>
#include <linux/rwsem.h>
+#include <linux/cc_platform.h>

#include <asm/apic.h>
#include <asm/perf_event.h>
@@ -457,7 +458,7 @@ static int has_svm(void)
return 0;
}

- if (sev_active()) {
+ if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT)) {
pr_info("KVM is unsupported when running as an SEV guest\n");
return 0;
}
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index a7250fa3d45f..b59a5cbc6bc5 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -92,7 +92,7 @@ static unsigned int __ioremap_check_ram(struct resource *res)
*/
static unsigned int __ioremap_check_encrypted(struct resource *res)
{
- if (!sev_active())
+ if (!cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
return 0;

switch (res->desc) {
@@ -112,7 +112,7 @@ static unsigned int __ioremap_check_encrypted(struct resource *res)
*/
static void __ioremap_check_other(resource_size_t addr, struct ioremap_desc *desc)
{
- if (!sev_active())
+ if (!cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
return;

if (!IS_ENABLED(CONFIG_EFI))
@@ -556,7 +556,7 @@ static bool memremap_should_map_decrypted(resource_size_t phys_addr,
case E820_TYPE_NVS:
case E820_TYPE_UNUSABLE:
/* For SEV, these areas are encrypted */
- if (sev_active())
+ if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
break;
fallthrough;

diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 4b54a2377821..22d4e152a6de 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -194,7 +194,7 @@ void __init sme_early_init(void)
for (i = 0; i < ARRAY_SIZE(protection_map); i++)
protection_map[i] = pgprot_encrypted(protection_map[i]);

- if (sev_active())
+ if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
swiotlb_force = SWIOTLB_FORCE;
}

@@ -203,7 +203,7 @@ void __init sev_setup_arch(void)
phys_addr_t total_mem = memblock_phys_mem_size();
unsigned long size;

- if (!sev_active())
+ if (!cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
return;

/*
@@ -364,8 +364,8 @@ int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size)
/*
* SME and SEV are very similar but they are not the same, so there are
* times that the kernel will need to distinguish between SME and SEV. The
- * sme_active() and sev_active() functions are used for this. When a
- * distinction isn't needed, the mem_encrypt_active() function can be used.
+ * cc_platform_has() function is used for this. When a distinction isn't
+ * needed, the CC_ATTR_MEM_ENCRYPT attribute can be used.
*
* The trampoline code is a good example for this requirement. Before
* paging is activated, SME will access all memory as decrypted, but SEV
@@ -373,11 +373,6 @@ int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size)
* up under SME the trampoline area cannot be encrypted, whereas under SEV
* the trampoline area must be encrypted.
*/
-bool sev_active(void)
-{
- return sev_status & MSR_AMD64_SEV_ENABLED;
-}
-EXPORT_SYMBOL_GPL(sev_active);

/* Needs to be called from non-instrumentable code */
bool noinstr sev_es_active(void)
@@ -392,10 +387,10 @@ bool amd_cc_platform_has(enum cc_attr attr)
return sme_me_mask != 0;

case CC_ATTR_HOST_MEM_ENCRYPT:
- return sme_me_mask && !sev_active();
+ return sme_me_mask && !(sev_status & MSR_AMD64_SEV_ENABLED);

case CC_ATTR_GUEST_MEM_ENCRYPT:
- return sev_active();
+ return sev_status & MSR_AMD64_SEV_ENABLED;

case CC_ATTR_GUEST_STATE_ENCRYPT:
return sev_es_active();
@@ -411,7 +406,7 @@ bool force_dma_unencrypted(struct device *dev)
/*
* For SEV, all DMA must be to unencrypted addresses.
*/
- if (sev_active())
+ if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
return true;

/*
@@ -470,7 +465,7 @@ static void print_mem_encrypt_feature_info(void)
}

/* Secure Encrypted Virtualization */
- if (sev_active())
+ if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
pr_cont(" SEV");

/* Encrypted Register State */
@@ -493,7 +488,7 @@ void __init mem_encrypt_init(void)
* With SEV, we need to unroll the rep string I/O instructions,
* but SEV-ES supports them through the #VC handler.
*/
- if (sev_active() && !sev_es_active())
+ if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT) && !sev_es_active())
static_branch_enable(&sev_enable_key);

print_mem_encrypt_feature_info();
@@ -501,6 +496,6 @@ void __init mem_encrypt_init(void)

int arch_has_restricted_virtio_memory_access(void)
{
- return sev_active();
+ return cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT);
}
EXPORT_SYMBOL_GPL(arch_has_restricted_virtio_memory_access);
diff --git a/arch/x86/platform/efi/efi_64.c b/arch/x86/platform/efi/efi_64.c
index 7515e78ef898..1f3675453a57 100644
--- a/arch/x86/platform/efi/efi_64.c
+++ b/arch/x86/platform/efi/efi_64.c
@@ -33,7 +33,7 @@
#include <linux/reboot.h>
#include <linux/slab.h>
#include <linux/ucs2_string.h>
-#include <linux/mem_encrypt.h>
+#include <linux/cc_platform.h>
#include <linux/sched/task.h>

#include <asm/setup.h>
@@ -284,7 +284,8 @@ static void __init __map_region(efi_memory_desc_t *md, u64 va)
if (!(md->attribute & EFI_MEMORY_WB))
flags |= _PAGE_PCD;

- if (sev_active() && md->type != EFI_MEMORY_MAPPED_IO)
+ if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT) &&
+ md->type != EFI_MEMORY_MAPPED_IO)
flags |= _PAGE_ENC;

pfn = md->phys_addr >> PAGE_SHIFT;
@@ -390,7 +391,7 @@ static int __init efi_update_mem_attr(struct mm_struct *mm, efi_memory_desc_t *m
if (!(md->attribute & EFI_MEMORY_RO))
pf |= _PAGE_RW;

- if (sev_active())
+ if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
pf |= _PAGE_ENC;

return efi_update_mappings(md, pf);
@@ -438,7 +439,7 @@ void __init efi_runtime_update_mappings(void)
(md->type != EFI_RUNTIME_SERVICES_CODE))
pf |= _PAGE_RW;

- if (sev_active())
+ if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
pf |= _PAGE_ENC;

efi_update_mappings(md, pf);
--
2.33.0

2021-09-08 23:12:14

by Tom Lendacky

Subject: [PATCH v3 5/8] x86/sme: Replace occurrences of sme_active() with cc_platform_has()

Replace uses of sme_active() with the more generic cc_platform_has()
using CC_ATTR_HOST_MEM_ENCRYPT. If future support is added for other
memory encryption technologies, the use of CC_ATTR_HOST_MEM_ENCRYPT
can be updated, as required.

This also replaces two usages of sev_active() that are really geared
towards detecting if SME is active.
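
One of those two cases is the kexec page handling converted below. The
old and new checks are not a literal inversion of each other, but the new
form matches the stated intent, since the decrypt/re-encrypt work is only
needed when host memory encryption (SME) is active:

  /* before: skip the work for SEV guests */
  if (sev_active())
          return 0;

  /* after: do the work only when SME is active */
  if (!cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
          return 0;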

Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Joerg Roedel <[email protected]>
Cc: Will Deacon <[email protected]>
Signed-off-by: Tom Lendacky <[email protected]>
---
arch/x86/include/asm/kexec.h | 2 +-
arch/x86/include/asm/mem_encrypt.h | 2 --
arch/x86/kernel/machine_kexec_64.c | 15 ++++++++-------
arch/x86/kernel/pci-swiotlb.c | 9 ++++-----
arch/x86/kernel/relocate_kernel_64.S | 2 +-
arch/x86/mm/ioremap.c | 6 +++---
arch/x86/mm/mem_encrypt.c | 15 +++++----------
arch/x86/mm/mem_encrypt_identity.c | 3 ++-
arch/x86/realmode/init.c | 5 +++--
drivers/iommu/amd/init.c | 7 ++++---
10 files changed, 31 insertions(+), 35 deletions(-)

diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
index 0a6e34b07017..11b7c06e2828 100644
--- a/arch/x86/include/asm/kexec.h
+++ b/arch/x86/include/asm/kexec.h
@@ -129,7 +129,7 @@ relocate_kernel(unsigned long indirection_page,
unsigned long page_list,
unsigned long start_address,
unsigned int preserve_context,
- unsigned int sme_active);
+ unsigned int host_mem_enc_active);
#endif

#define ARCH_HAS_KIMAGE_ARCH
diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 3d8a5e8b2e3f..8c4f0dfe63f9 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -51,7 +51,6 @@ void __init mem_encrypt_free_decrypted_mem(void);
void __init mem_encrypt_init(void);

void __init sev_es_init_vc_handling(void);
-bool sme_active(void);
bool sev_active(void);
bool sev_es_active(void);
bool amd_cc_platform_has(enum cc_attr attr);
@@ -77,7 +76,6 @@ static inline void __init sme_encrypt_kernel(struct boot_params *bp) { }
static inline void __init sme_enable(struct boot_params *bp) { }

static inline void sev_es_init_vc_handling(void) { }
-static inline bool sme_active(void) { return false; }
static inline bool sev_active(void) { return false; }
static inline bool sev_es_active(void) { return false; }
static inline bool amd_cc_platform_has(enum cc_attr attr) { return false; }
diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
index 131f30fdcfbd..7040c0fa921c 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -17,6 +17,7 @@
#include <linux/suspend.h>
#include <linux/vmalloc.h>
#include <linux/efi.h>
+#include <linux/cc_platform.h>

#include <asm/init.h>
#include <asm/tlbflush.h>
@@ -358,7 +359,7 @@ void machine_kexec(struct kimage *image)
(unsigned long)page_list,
image->start,
image->preserve_context,
- sme_active());
+ cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT));

#ifdef CONFIG_KEXEC_JUMP
if (image->preserve_context)
@@ -569,12 +570,12 @@ void arch_kexec_unprotect_crashkres(void)
*/
int arch_kexec_post_alloc_pages(void *vaddr, unsigned int pages, gfp_t gfp)
{
- if (sev_active())
+ if (!cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
return 0;

/*
- * If SME is active we need to be sure that kexec pages are
- * not encrypted because when we boot to the new kernel the
+ * If host memory encryption is active we need to be sure that kexec
+ * pages are not encrypted because when we boot to the new kernel the
* pages won't be accessed encrypted (initially).
*/
return set_memory_decrypted((unsigned long)vaddr, pages);
@@ -582,12 +583,12 @@ int arch_kexec_post_alloc_pages(void *vaddr, unsigned int pages, gfp_t gfp)

void arch_kexec_pre_free_pages(void *vaddr, unsigned int pages)
{
- if (sev_active())
+ if (!cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
return;

/*
- * If SME is active we need to reset the pages back to being
- * an encrypted mapping before freeing them.
+ * If host memory encryption is active we need to reset the pages back
+ * to being an encrypted mapping before freeing them.
*/
set_memory_encrypted((unsigned long)vaddr, pages);
}
diff --git a/arch/x86/kernel/pci-swiotlb.c b/arch/x86/kernel/pci-swiotlb.c
index c2cfa5e7c152..814ab46a0dad 100644
--- a/arch/x86/kernel/pci-swiotlb.c
+++ b/arch/x86/kernel/pci-swiotlb.c
@@ -6,7 +6,7 @@
#include <linux/swiotlb.h>
#include <linux/memblock.h>
#include <linux/dma-direct.h>
-#include <linux/mem_encrypt.h>
+#include <linux/cc_platform.h>

#include <asm/iommu.h>
#include <asm/swiotlb.h>
@@ -45,11 +45,10 @@ int __init pci_swiotlb_detect_4gb(void)
swiotlb = 1;

/*
- * If SME is active then swiotlb will be set to 1 so that bounce
- * buffers are allocated and used for devices that do not support
- * the addressing range required for the encryption mask.
+ * Set swiotlb to 1 so that bounce buffers are allocated and used for
+ * devices that can't support DMA to encrypted memory.
*/
- if (sme_active())
+ if (cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
swiotlb = 1;

return swiotlb;
diff --git a/arch/x86/kernel/relocate_kernel_64.S b/arch/x86/kernel/relocate_kernel_64.S
index c53271aebb64..c8fe74a28143 100644
--- a/arch/x86/kernel/relocate_kernel_64.S
+++ b/arch/x86/kernel/relocate_kernel_64.S
@@ -47,7 +47,7 @@ SYM_CODE_START_NOALIGN(relocate_kernel)
* %rsi page_list
* %rdx start address
* %rcx preserve_context
- * %r8 sme_active
+ * %r8 host_mem_enc_active
*/

/* Save the CPU context, used for jumping back */
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index ccff76cedd8f..a7250fa3d45f 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -14,7 +14,7 @@
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/mmiotrace.h>
-#include <linux/mem_encrypt.h>
+#include <linux/cc_platform.h>
#include <linux/efi.h>
#include <linux/pgtable.h>

@@ -703,7 +703,7 @@ bool arch_memremap_can_ram_remap(resource_size_t phys_addr, unsigned long size,
if (flags & MEMREMAP_DEC)
return false;

- if (sme_active()) {
+ if (cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT)) {
if (memremap_is_setup_data(phys_addr, size) ||
memremap_is_efi_data(phys_addr, size))
return false;
@@ -729,7 +729,7 @@ pgprot_t __init early_memremap_pgprot_adjust(resource_size_t phys_addr,

encrypted_prot = true;

- if (sme_active()) {
+ if (cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT)) {
if (early_memremap_is_setup_data(phys_addr, size) ||
memremap_is_efi_data(phys_addr, size))
encrypted_prot = false;
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 18fe19916bc3..4b54a2377821 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -144,7 +144,7 @@ void __init sme_unmap_bootdata(char *real_mode_data)
struct boot_params *boot_data;
unsigned long cmdline_paddr;

- if (!sme_active())
+ if (!cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
return;

/* Get the command line address before unmapping the real_mode_data */
@@ -164,7 +164,7 @@ void __init sme_map_bootdata(char *real_mode_data)
struct boot_params *boot_data;
unsigned long cmdline_paddr;

- if (!sme_active())
+ if (!cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
return;

__sme_early_map_unmap_mem(real_mode_data, sizeof(boot_params), true);
@@ -377,11 +377,6 @@ bool sev_active(void)
{
return sev_status & MSR_AMD64_SEV_ENABLED;
}
-
-bool sme_active(void)
-{
- return sme_me_mask && !sev_active();
-}
EXPORT_SYMBOL_GPL(sev_active);

/* Needs to be called from non-instrumentable code */
@@ -397,7 +392,7 @@ bool amd_cc_platform_has(enum cc_attr attr)
return sme_me_mask != 0;

case CC_ATTR_HOST_MEM_ENCRYPT:
- return sme_active();
+ return sme_me_mask && !sev_active();

case CC_ATTR_GUEST_MEM_ENCRYPT:
return sev_active();
@@ -424,7 +419,7 @@ bool force_dma_unencrypted(struct device *dev)
* device does not support DMA to addresses that include the
* encryption mask.
*/
- if (sme_active()) {
+ if (cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT)) {
u64 dma_enc_mask = DMA_BIT_MASK(__ffs64(sme_me_mask));
u64 dma_dev_mask = min_not_zero(dev->coherent_dma_mask,
dev->bus_dma_limit);
@@ -465,7 +460,7 @@ static void print_mem_encrypt_feature_info(void)
pr_info("AMD Memory Encryption Features active:");

/* Secure Memory Encryption */
- if (sme_active()) {
+ if (cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT)) {
/*
* SME is mutually exclusive with any of the SEV
* features below.
diff --git a/arch/x86/mm/mem_encrypt_identity.c b/arch/x86/mm/mem_encrypt_identity.c
index 470b20208430..eff4d19f9cb4 100644
--- a/arch/x86/mm/mem_encrypt_identity.c
+++ b/arch/x86/mm/mem_encrypt_identity.c
@@ -30,6 +30,7 @@
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/mem_encrypt.h>
+#include <linux/cc_platform.h>

#include <asm/setup.h>
#include <asm/sections.h>
@@ -287,7 +288,7 @@ void __init sme_encrypt_kernel(struct boot_params *bp)
unsigned long pgtable_area_len;
unsigned long decrypted_base;

- if (!sme_active())
+ if (!cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
return;

/*
diff --git a/arch/x86/realmode/init.c b/arch/x86/realmode/init.c
index 31b5856010cb..c878c5ee5a4c 100644
--- a/arch/x86/realmode/init.c
+++ b/arch/x86/realmode/init.c
@@ -3,6 +3,7 @@
#include <linux/slab.h>
#include <linux/memblock.h>
#include <linux/mem_encrypt.h>
+#include <linux/cc_platform.h>
#include <linux/pgtable.h>

#include <asm/set_memory.h>
@@ -44,7 +45,7 @@ void __init reserve_real_mode(void)
static void sme_sev_setup_real_mode(struct trampoline_header *th)
{
#ifdef CONFIG_AMD_MEM_ENCRYPT
- if (sme_active())
+ if (cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
th->flags |= TH_FLAGS_SME_ACTIVE;

if (sev_es_active()) {
@@ -81,7 +82,7 @@ static void __init setup_real_mode(void)
* decrypted memory in order to bring up other processors
* successfully. This is not needed for SEV.
*/
- if (sme_active())
+ if (cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
set_memory_decrypted((unsigned long)base, size >> PAGE_SHIFT);

memcpy(base, real_mode_blob, size);
diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
index bdcf167b4afe..07504f67ec9c 100644
--- a/drivers/iommu/amd/init.c
+++ b/drivers/iommu/amd/init.c
@@ -20,7 +20,7 @@
#include <linux/amd-iommu.h>
#include <linux/export.h>
#include <linux/kmemleak.h>
-#include <linux/mem_encrypt.h>
+#include <linux/cc_platform.h>
#include <asm/pci-direct.h>
#include <asm/iommu.h>
#include <asm/apic.h>
@@ -964,7 +964,7 @@ static bool copy_device_table(void)
pr_err("The address of old device table is above 4G, not trustworthy!\n");
return false;
}
- old_devtb = (sme_active() && is_kdump_kernel())
+ old_devtb = (cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT) && is_kdump_kernel())
? (__force void *)ioremap_encrypted(old_devtb_phys,
dev_table_size)
: memremap(old_devtb_phys, dev_table_size, MEMREMAP_WB);
@@ -3024,7 +3024,8 @@ static int __init amd_iommu_init(void)

static bool amd_iommu_sme_check(void)
{
- if (!sme_active() || (boot_cpu_data.x86 != 0x17))
+ if (!cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT) ||
+ (boot_cpu_data.x86 != 0x17))
return true;

/* For Fam17h, a specific level of support is required */
--
2.33.0

2021-09-08 23:12:41

by Tom Lendacky

[permalink] [raw]
Subject: [PATCH v3 4/8] powerpc/pseries/svm: Add a powerpc version of cc_platform_has()

Introduce a powerpc version of the cc_platform_has() function. This will
be used to replace the powerpc mem_encrypt_active() implementation, so
the implementation will initially only support the CC_ATTR_MEM_ENCRYPT
attribute.

Cc: Michael Ellerman <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Paul Mackerras <[email protected]>
Signed-off-by: Tom Lendacky <[email protected]>
---
arch/powerpc/platforms/pseries/Kconfig | 1 +
arch/powerpc/platforms/pseries/Makefile | 2 ++
arch/powerpc/platforms/pseries/cc_platform.c | 26 ++++++++++++++++++++
3 files changed, 29 insertions(+)
create mode 100644 arch/powerpc/platforms/pseries/cc_platform.c

diff --git a/arch/powerpc/platforms/pseries/Kconfig b/arch/powerpc/platforms/pseries/Kconfig
index 5e037df2a3a1..2e57391e0778 100644
--- a/arch/powerpc/platforms/pseries/Kconfig
+++ b/arch/powerpc/platforms/pseries/Kconfig
@@ -159,6 +159,7 @@ config PPC_SVM
select SWIOTLB
select ARCH_HAS_MEM_ENCRYPT
select ARCH_HAS_FORCE_DMA_UNENCRYPTED
+ select ARCH_HAS_CC_PLATFORM
help
There are certain POWER platforms which support secure guests using
the Protected Execution Facility, with the help of an Ultravisor
diff --git a/arch/powerpc/platforms/pseries/Makefile b/arch/powerpc/platforms/pseries/Makefile
index 4cda0ef87be0..41d8aee98da4 100644
--- a/arch/powerpc/platforms/pseries/Makefile
+++ b/arch/powerpc/platforms/pseries/Makefile
@@ -31,3 +31,5 @@ obj-$(CONFIG_FA_DUMP) += rtas-fadump.o

obj-$(CONFIG_SUSPEND) += suspend.o
obj-$(CONFIG_PPC_VAS) += vas.o
+
+obj-$(CONFIG_ARCH_HAS_CC_PLATFORM) += cc_platform.o
diff --git a/arch/powerpc/platforms/pseries/cc_platform.c b/arch/powerpc/platforms/pseries/cc_platform.c
new file mode 100644
index 000000000000..e8021af83a19
--- /dev/null
+++ b/arch/powerpc/platforms/pseries/cc_platform.c
@@ -0,0 +1,26 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Confidential Computing Platform Capability checks
+ *
+ * Copyright (C) 2021 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky <[email protected]>
+ */
+
+#include <linux/export.h>
+#include <linux/cc_platform.h>
+
+#include <asm/machdep.h>
+#include <asm/svm.h>
+
+bool cc_platform_has(enum cc_attr attr)
+{
+ switch (attr) {
+ case CC_ATTR_MEM_ENCRYPT:
+ return is_secure_guest();
+
+ default:
+ return false;
+ }
+}
+EXPORT_SYMBOL_GPL(cc_platform_has);
--
2.33.0

2021-09-09 07:31:28

by Christophe Leroy

[permalink] [raw]
Subject: Re: [PATCH v3 8/8] treewide: Replace the use of mem_encrypt_active() with cc_platform_has()



On 9/8/21 10:58 PM, Tom Lendacky wrote:
>
> diff --git a/arch/powerpc/include/asm/mem_encrypt.h b/arch/powerpc/include/asm/mem_encrypt.h
> index ba9dab07c1be..2f26b8fc8d29 100644
> --- a/arch/powerpc/include/asm/mem_encrypt.h
> +++ b/arch/powerpc/include/asm/mem_encrypt.h
> @@ -10,11 +10,6 @@
>
> #include <asm/svm.h>
>
> -static inline bool mem_encrypt_active(void)
> -{
> - return is_secure_guest();
> -}
> -
> static inline bool force_dma_unencrypted(struct device *dev)
> {
> return is_secure_guest();
> diff --git a/arch/powerpc/platforms/pseries/svm.c b/arch/powerpc/platforms/pseries/svm.c
> index 87f001b4c4e4..c083ecbbae4d 100644
> --- a/arch/powerpc/platforms/pseries/svm.c
> +++ b/arch/powerpc/platforms/pseries/svm.c
> @@ -8,6 +8,7 @@
>
> #include <linux/mm.h>
> #include <linux/memblock.h>
> +#include <linux/cc_platform.h>
> #include <asm/machdep.h>
> #include <asm/svm.h>
> #include <asm/swiotlb.h>
> @@ -63,7 +64,7 @@ void __init svm_swiotlb_init(void)
>
> int set_memory_encrypted(unsigned long addr, int numpages)
> {
> - if (!mem_encrypt_active())
> + if (!cc_platform_has(CC_ATTR_MEM_ENCRYPT))
> return 0;
>
> if (!PAGE_ALIGNED(addr))
> @@ -76,7 +77,7 @@ int set_memory_encrypted(unsigned long addr, int numpages)
>
> int set_memory_decrypted(unsigned long addr, int numpages)
> {
> - if (!mem_encrypt_active())
> + if (!cc_platform_has(CC_ATTR_MEM_ENCRYPT))
> return 0;
>
> if (!PAGE_ALIGNED(addr))

This change unnecessarily complicates the two functions. This is due to
cc_platform_has() being out of line. It should really remain inline.
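
For reference, the kind of minimal inline version I have in mind (a sketch
only, assuming the existing is_secure_guest() helper):

static inline bool cc_platform_has(enum cc_attr attr)
{
	/* Only memory encryption can ever be reported on pseries */
	return attr == CC_ATTR_MEM_ENCRYPT && is_secure_guest();
}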

Before the change we got:

0000000000000000 <.set_memory_encrypted>:
0: 7d 20 00 a6 mfmsr r9
4: 75 29 00 40 andis. r9,r9,64
8: 41 82 00 48 beq 50 <.set_memory_encrypted+0x50>
c: 78 69 04 20 clrldi r9,r3,48
10: 2c 29 00 00 cmpdi r9,0
14: 40 82 00 4c bne 60 <.set_memory_encrypted+0x60>
18: 7c 08 02 a6 mflr r0
1c: 7c 85 23 78 mr r5,r4
20: 78 64 85 02 rldicl r4,r3,48,20
24: 61 23 f1 34 ori r3,r9,61748
28: f8 01 00 10 std r0,16(r1)
2c: f8 21 ff 91 stdu r1,-112(r1)
30: 48 00 00 01 bl 30 <.set_memory_encrypted+0x30>
30: R_PPC64_REL24 .ucall_norets
34: 60 00 00 00 nop
38: 38 60 00 00 li r3,0
3c: 38 21 00 70 addi r1,r1,112
40: e8 01 00 10 ld r0,16(r1)
44: 7c 08 03 a6 mtlr r0
48: 4e 80 00 20 blr
50: 38 60 00 00 li r3,0
54: 4e 80 00 20 blr
60: 38 60 ff ea li r3,-22
64: 4e 80 00 20 blr

After the change we get:

0000000000000000 <.set_memory_encrypted>:
0: 7c 08 02 a6 mflr r0
4: fb c1 ff f0 std r30,-16(r1)
8: fb e1 ff f8 std r31,-8(r1)
c: 7c 7f 1b 78 mr r31,r3
10: 38 60 00 00 li r3,0
14: 7c 9e 23 78 mr r30,r4
18: f8 01 00 10 std r0,16(r1)
1c: f8 21 ff 81 stdu r1,-128(r1)
20: 48 00 00 01 bl 20 <.set_memory_encrypted+0x20>
20: R_PPC64_REL24 .cc_platform_has
24: 60 00 00 00 nop
28: 2c 23 00 00 cmpdi r3,0
2c: 41 82 00 44 beq 70 <.set_memory_encrypted+0x70>
30: 7b e9 04 20 clrldi r9,r31,48
34: 2c 29 00 00 cmpdi r9,0
38: 40 82 00 58 bne 90 <.set_memory_encrypted+0x90>
3c: 38 60 00 00 li r3,0
40: 7f c5 f3 78 mr r5,r30
44: 7b e4 85 02 rldicl r4,r31,48,20
48: 60 63 f1 34 ori r3,r3,61748
4c: 48 00 00 01 bl 4c <.set_memory_encrypted+0x4c>
4c: R_PPC64_REL24 .ucall_norets
50: 60 00 00 00 nop
54: 38 60 00 00 li r3,0
58: 38 21 00 80 addi r1,r1,128
5c: e8 01 00 10 ld r0,16(r1)
60: eb c1 ff f0 ld r30,-16(r1)
64: eb e1 ff f8 ld r31,-8(r1)
68: 7c 08 03 a6 mtlr r0
6c: 4e 80 00 20 blr
70: 38 21 00 80 addi r1,r1,128
74: 38 60 00 00 li r3,0
78: e8 01 00 10 ld r0,16(r1)
7c: eb c1 ff f0 ld r30,-16(r1)
80: eb e1 ff f8 ld r31,-8(r1)
84: 7c 08 03 a6 mtlr r0
88: 4e 80 00 20 blr
90: 38 60 ff ea li r3,-22
94: 4b ff ff c4 b 58 <.set_memory_encrypted+0x58>

2021-09-09 07:36:48

by Christophe Leroy

[permalink] [raw]
Subject: Re: [PATCH v3 2/8] mm: Introduce a function to check for confidential computing features



On 9/8/21 10:58 PM, Tom Lendacky wrote:
> In prep for other confidential computing technologies, introduce a generic
> helper function, cc_platform_has(), that can be used to check for specific

I have a little problem with that naming.

For me CC has always meant Compiler Collection.

> active confidential computing attributes, like memory encryption. This is
> intended to eliminate having to add multiple technology-specific checks to
> the code (e.g. if (sev_active() || tdx_active())).
>
> Co-developed-by: Andi Kleen <[email protected]>
> Signed-off-by: Andi Kleen <[email protected]>
> Co-developed-by: Kuppuswamy Sathyanarayanan <[email protected]>
> Signed-off-by: Kuppuswamy Sathyanarayanan <[email protected]>
> Signed-off-by: Tom Lendacky <[email protected]>
> ---
> arch/Kconfig | 3 ++
> include/linux/cc_platform.h | 88 +++++++++++++++++++++++++++++++++++++
> 2 files changed, 91 insertions(+)
> create mode 100644 include/linux/cc_platform.h
>
> diff --git a/arch/Kconfig b/arch/Kconfig
> index 3743174da870..ca7c359e5da8 100644
> --- a/arch/Kconfig
> +++ b/arch/Kconfig
> @@ -1234,6 +1234,9 @@ config RELR
> config ARCH_HAS_MEM_ENCRYPT
> bool
>
> +config ARCH_HAS_CC_PLATFORM
> + bool
> +
> config HAVE_SPARSE_SYSCALL_NR
> bool
> help
> diff --git a/include/linux/cc_platform.h b/include/linux/cc_platform.h
> new file mode 100644
> index 000000000000..253f3ea66cd8
> --- /dev/null
> +++ b/include/linux/cc_platform.h
> @@ -0,0 +1,88 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/*
> + * Confidential Computing Platform Capability checks
> + *
> + * Copyright (C) 2021 Advanced Micro Devices, Inc.
> + *
> + * Author: Tom Lendacky <[email protected]>
> + */
> +
> +#ifndef _CC_PLATFORM_H
> +#define _CC_PLATFORM_H
> +
> +#include <linux/types.h>
> +#include <linux/stddef.h>
> +
> +/**
> + * enum cc_attr - Confidential computing attributes
> + *
> + * These attributes represent confidential computing features that are
> + * currently active.
> + */
> +enum cc_attr {
> + /**
> + * @CC_ATTR_MEM_ENCRYPT: Memory encryption is active
> + *
> + * The platform/OS is running with active memory encryption. This
> + * includes running either as a bare-metal system or a hypervisor
> + * and actively using memory encryption or as a guest/virtual machine
> + * and actively using memory encryption.
> + *
> + * Examples include SME, SEV and SEV-ES.
> + */
> + CC_ATTR_MEM_ENCRYPT,
> +
> + /**
> + * @CC_ATTR_HOST_MEM_ENCRYPT: Host memory encryption is active
> + *
> + * The platform/OS is running as a bare-metal system or a hypervisor
> + * and actively using memory encryption.
> + *
> + * Examples include SME.
> + */
> + CC_ATTR_HOST_MEM_ENCRYPT,
> +
> + /**
> + * @CC_ATTR_GUEST_MEM_ENCRYPT: Guest memory encryption is active
> + *
> + * The platform/OS is running as a guest/virtual machine and actively
> + * using memory encryption.
> + *
> + * Examples include SEV and SEV-ES.
> + */
> + CC_ATTR_GUEST_MEM_ENCRYPT,
> +
> + /**
> + * @CC_ATTR_GUEST_STATE_ENCRYPT: Guest state encryption is active
> + *
> + * The platform/OS is running as a guest/virtual machine and actively
> + * using memory encryption and register state encryption.
> + *
> + * Examples include SEV-ES.
> + */
> + CC_ATTR_GUEST_STATE_ENCRYPT,
> +};
> +
> +#ifdef CONFIG_ARCH_HAS_CC_PLATFORM
> +
> +/**
> + * cc_platform_has() - Checks if the specified cc_attr attribute is active
> + * @attr: Confidential computing attribute to check
> + *
> + * The cc_platform_has() function will return an indicator as to whether the
> + * specified Confidential Computing attribute is currently active.
> + *
> + * Context: Any context
> + * Return:
> + * * TRUE - Specified Confidential Computing attribute is active
> + * * FALSE - Specified Confidential Computing attribute is not active
> + */
> +bool cc_platform_has(enum cc_attr attr);

This declaration makes it impossible for architectures to define this
function inline.

For such a function, having it inline would make more sense, as it would
allow GCC to perform constant folding and avoid the overhead of calling
a sub-function.
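
As an illustration of the kind of folding I mean (hypothetical caller and
helper names, sketch only):

	/*
	 * With an arch-provided inline that can only return true for
	 * CC_ATTR_MEM_ENCRYPT, GCC proves this test false at compile time
	 * and drops the whole block.
	 */
	if (cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT))
		setup_guest_state();	/* hypothetical helper, never emitted */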

> +
> +#else /* !CONFIG_ARCH_HAS_CC_PLATFORM */
> +
> +static inline bool cc_platform_has(enum cc_attr attr) { return false; }
> +
> +#endif /* CONFIG_ARCH_HAS_CC_PLATFORM */
> +
> +#endif /* _CC_PLATFORM_H */
>

2021-09-09 07:45:22

by Christophe Leroy

[permalink] [raw]
Subject: Re: [PATCH v3 4/8] powerpc/pseries/svm: Add a powerpc version of cc_platform_has()



On 9/8/21 10:58 PM, Tom Lendacky wrote:
> Introduce a powerpc version of the cc_platform_has() function. This will
> be used to replace the powerpc mem_encrypt_active() implementation, so
> the implementation will initially only support the CC_ATTR_MEM_ENCRYPT
> attribute.
>
> Cc: Michael Ellerman <[email protected]>
> Cc: Benjamin Herrenschmidt <[email protected]>
> Cc: Paul Mackerras <[email protected]>
> Signed-off-by: Tom Lendacky <[email protected]>
> ---
> arch/powerpc/platforms/pseries/Kconfig | 1 +
> arch/powerpc/platforms/pseries/Makefile | 2 ++
> arch/powerpc/platforms/pseries/cc_platform.c | 26 ++++++++++++++++++++
> 3 files changed, 29 insertions(+)
> create mode 100644 arch/powerpc/platforms/pseries/cc_platform.c
>
> diff --git a/arch/powerpc/platforms/pseries/Kconfig b/arch/powerpc/platforms/pseries/Kconfig
> index 5e037df2a3a1..2e57391e0778 100644
> --- a/arch/powerpc/platforms/pseries/Kconfig
> +++ b/arch/powerpc/platforms/pseries/Kconfig
> @@ -159,6 +159,7 @@ config PPC_SVM
> select SWIOTLB
> select ARCH_HAS_MEM_ENCRYPT
> select ARCH_HAS_FORCE_DMA_UNENCRYPTED
> + select ARCH_HAS_CC_PLATFORM
> help
> There are certain POWER platforms which support secure guests using
> the Protected Execution Facility, with the help of an Ultravisor
> diff --git a/arch/powerpc/platforms/pseries/Makefile b/arch/powerpc/platforms/pseries/Makefile
> index 4cda0ef87be0..41d8aee98da4 100644
> --- a/arch/powerpc/platforms/pseries/Makefile
> +++ b/arch/powerpc/platforms/pseries/Makefile
> @@ -31,3 +31,5 @@ obj-$(CONFIG_FA_DUMP) += rtas-fadump.o
>
> obj-$(CONFIG_SUSPEND) += suspend.o
> obj-$(CONFIG_PPC_VAS) += vas.o
> +
> +obj-$(CONFIG_ARCH_HAS_CC_PLATFORM) += cc_platform.o
> diff --git a/arch/powerpc/platforms/pseries/cc_platform.c b/arch/powerpc/platforms/pseries/cc_platform.c
> new file mode 100644
> index 000000000000..e8021af83a19
> --- /dev/null
> +++ b/arch/powerpc/platforms/pseries/cc_platform.c
> @@ -0,0 +1,26 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * Confidential Computing Platform Capability checks
> + *
> + * Copyright (C) 2021 Advanced Micro Devices, Inc.
> + *
> + * Author: Tom Lendacky <[email protected]>
> + */
> +
> +#include <linux/export.h>
> +#include <linux/cc_platform.h>
> +
> +#include <asm/machdep.h>
> +#include <asm/svm.h>
> +
> +bool cc_platform_has(enum cc_attr attr)
> +{

Please keep this function inline as mem_encrypt_active() is


> + switch (attr) {
> + case CC_ATTR_MEM_ENCRYPT:
> + return is_secure_guest();
> +
> + default:
> + return false;
> + }
> +}
> +EXPORT_SYMBOL_GPL(cc_platform_has);
>

2021-09-09 07:53:11

by Christian Borntraeger

[permalink] [raw]
Subject: Re: [PATCH v3 0/8] Implement generic cc_platform_has() helper function



On 09.09.21 00:58, Tom Lendacky wrote:
> This patch series provides a generic helper function, cc_platform_has(),
> to replace the sme_active(), sev_active(), sev_es_active() and
> mem_encrypt_active() functions.
>
> It is expected that as new confidential computing technologies are
> added to the kernel, they can all be covered by a single function call
> instead of a collection of specific function calls all called from the
> same locations.
>
> The powerpc and s390 patches have been compile tested only. Can the
> folks copied on this series verify that nothing breaks for them.

Is there a tree somewhere?

> Also,
> a new file, arch/powerpc/platforms/pseries/cc_platform.c, has been
> created for powerpc to hold the out of line function.
>
> Cc: Andi Kleen <[email protected]>
> Cc: Andy Lutomirski <[email protected]>
> Cc: Ard Biesheuvel <[email protected]>
> Cc: Baoquan He <[email protected]>
> Cc: Benjamin Herrenschmidt <[email protected]>
> Cc: Borislav Petkov <[email protected]>
> Cc: Christian Borntraeger <[email protected]>
> Cc: Daniel Vetter <[email protected]>
> Cc: Dave Hansen <[email protected]>
> Cc: Dave Young <[email protected]>
> Cc: David Airlie <[email protected]>
> Cc: Heiko Carstens <[email protected]>
> Cc: Ingo Molnar <[email protected]>
> Cc: Joerg Roedel <[email protected]>
> Cc: Maarten Lankhorst <[email protected]>
> Cc: Maxime Ripard <[email protected]>
> Cc: Michael Ellerman <[email protected]>
> Cc: Paul Mackerras <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> Cc: Thomas Gleixner <[email protected]>
> Cc: Thomas Zimmermann <[email protected]>
> Cc: Vasily Gorbik <[email protected]>
> Cc: VMware Graphics <[email protected]>
> Cc: Will Deacon <[email protected]>
> Cc: Christoph Hellwig <[email protected]>
>
> ---
>
> Patches based on:
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
> 4b93c544e90e ("thunderbolt: test: split up test cases in tb_test_credit_alloc_all")
>
> Changes since v2:
> - Changed the name from prot_guest_has() to cc_platform_has()
> - Took the cc_platform_has() function out of line. Created two new files,
> cc_platform.c, in both x86 and ppc to implment the function. As a
> result, also changed the attribute defines into enums.
> - Removed any received Reviewed-by's and Acked-by's given changes in this
> version.
> - Added removal of new instances of mem_encrypt_active() usage in powerpc
> arch.
> - Based on latest Linux tree to pick up powerpc changes related to the
> mem_encrypt_active() function.
>
> Changes since v1:
> - Moved some arch ioremap functions within #ifdef CONFIG_AMD_MEM_ENCRYPT
> in prep for use of prot_guest_has() by TDX.
> - Added type includes to the the protected_guest.h header file to prevent
> build errors outside of x86.
> - Made amd_prot_guest_has() EXPORT_SYMBOL_GPL
> - Used amd_prot_guest_has() in place of checking sme_me_mask in the
> arch/x86/mm/mem_encrypt.c file.
>
> Tom Lendacky (8):
> x86/ioremap: Selectively build arch override encryption functions
> mm: Introduce a function to check for confidential computing features
> x86/sev: Add an x86 version of cc_platform_has()
> powerpc/pseries/svm: Add a powerpc version of cc_platform_has()
> x86/sme: Replace occurrences of sme_active() with cc_platform_has()
> x86/sev: Replace occurrences of sev_active() with cc_platform_has()
> x86/sev: Replace occurrences of sev_es_active() with cc_platform_has()
> treewide: Replace the use of mem_encrypt_active() with
> cc_platform_has()
>
> arch/Kconfig | 3 +
> arch/powerpc/include/asm/mem_encrypt.h | 5 --
> arch/powerpc/platforms/pseries/Kconfig | 1 +
> arch/powerpc/platforms/pseries/Makefile | 2 +
> arch/powerpc/platforms/pseries/cc_platform.c | 26 ++++++
> arch/powerpc/platforms/pseries/svm.c | 5 +-
> arch/s390/include/asm/mem_encrypt.h | 2 -
> arch/x86/Kconfig | 1 +
> arch/x86/include/asm/io.h | 8 ++
> arch/x86/include/asm/kexec.h | 2 +-
> arch/x86/include/asm/mem_encrypt.h | 14 +---
> arch/x86/kernel/Makefile | 3 +
> arch/x86/kernel/cc_platform.c | 21 +++++
> arch/x86/kernel/crash_dump_64.c | 4 +-
> arch/x86/kernel/head64.c | 4 +-
> arch/x86/kernel/kvm.c | 3 +-
> arch/x86/kernel/kvmclock.c | 4 +-
> arch/x86/kernel/machine_kexec_64.c | 19 +++--
> arch/x86/kernel/pci-swiotlb.c | 9 +-
> arch/x86/kernel/relocate_kernel_64.S | 2 +-
> arch/x86/kernel/sev.c | 6 +-
> arch/x86/kvm/svm/svm.c | 3 +-
> arch/x86/mm/ioremap.c | 18 ++--
> arch/x86/mm/mem_encrypt.c | 57 +++++++------
> arch/x86/mm/mem_encrypt_identity.c | 3 +-
> arch/x86/mm/pat/set_memory.c | 3 +-
> arch/x86/platform/efi/efi_64.c | 9 +-
> arch/x86/realmode/init.c | 8 +-
> drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 4 +-
> drivers/gpu/drm/drm_cache.c | 4 +-
> drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 4 +-
> drivers/gpu/drm/vmwgfx/vmwgfx_msg.c | 6 +-
> drivers/iommu/amd/init.c | 7 +-
> drivers/iommu/amd/iommu.c | 3 +-
> drivers/iommu/amd/iommu_v2.c | 3 +-
> drivers/iommu/iommu.c | 3 +-
> fs/proc/vmcore.c | 6 +-
> include/linux/cc_platform.h | 88 ++++++++++++++++++++
> include/linux/mem_encrypt.h | 4 -
> kernel/dma/swiotlb.c | 4 +-
> 40 files changed, 267 insertions(+), 114 deletions(-)
> create mode 100644 arch/powerpc/platforms/pseries/cc_platform.c
> create mode 100644 arch/x86/kernel/cc_platform.c
> create mode 100644 include/linux/cc_platform.h
>
>
> base-commit: 4b93c544e90e2b28326182d31ee008eb80e02074
>

2021-09-09 13:21:32

by Tom Lendacky

[permalink] [raw]
Subject: Re: [PATCH v3 0/8] Implement generic cc_platform_has() helper function

On 9/9/21 2:32 AM, Christian Borntraeger wrote:
>
>
> On 09.09.21 00:58, Tom Lendacky wrote:
>> This patch series provides a generic helper function, cc_platform_has(),
>> to replace the sme_active(), sev_active(), sev_es_active() and
>> mem_encrypt_active() functions.
>>
>> It is expected that as new confidential computing technologies are
>> added to the kernel, they can all be covered by a single function call
>> instead of a collection of specific function calls all called from the
>> same locations.
>>
>> The powerpc and s390 patches have been compile tested only. Can the
>> folks copied on this series verify that nothing breaks for them.
>
> Is there a tree somewhere?

I pushed it up to github:

https://github.com/AMDESE/linux/tree/prot-guest-has-v3

Thanks,
Tom

>
>  Also,
>> a new file, arch/powerpc/platforms/pseries/cc_platform.c, has been
>> created for powerpc to hold the out of line function.
>>
>> Cc: Andi Kleen <[email protected]>
>> Cc: Andy Lutomirski <[email protected]>
>> Cc: Ard Biesheuvel <[email protected]>
>> Cc: Baoquan He <[email protected]>
>> Cc: Benjamin Herrenschmidt <[email protected]>
>> Cc: Borislav Petkov <[email protected]>
>> Cc: Christian Borntraeger <[email protected]>
>> Cc: Daniel Vetter <[email protected]>
>> Cc: Dave Hansen <[email protected]>
>> Cc: Dave Young <[email protected]>
>> Cc: David Airlie <[email protected]>
>> Cc: Heiko Carstens <[email protected]>
>> Cc: Ingo Molnar <[email protected]>
>> Cc: Joerg Roedel <[email protected]>
>> Cc: Maarten Lankhorst <[email protected]>
>> Cc: Maxime Ripard <[email protected]>
>> Cc: Michael Ellerman <[email protected]>
>> Cc: Paul Mackerras <[email protected]>
>> Cc: Peter Zijlstra <[email protected]>
>> Cc: Thomas Gleixner <[email protected]>
>> Cc: Thomas Zimmermann <[email protected]>
>> Cc: Vasily Gorbik <[email protected]>
>> Cc: VMware Graphics <[email protected]>
>> Cc: Will Deacon <[email protected]>
>> Cc: Christoph Hellwig <[email protected]>
>>
>> ---
>>
>> Patches based on:
>>
>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
>>    4b93c544e90e ("thunderbolt: test: split up test cases in
>> tb_test_credit_alloc_all")
>>
>> Changes since v2:
>> - Changed the name from prot_guest_has() to cc_platform_has()
>> - Took the cc_platform_has() function out of line. Created two new files,
>>    cc_platform.c, in both x86 and ppc to implment the function. As a
>>    result, also changed the attribute defines into enums.
>> - Removed any received Reviewed-by's and Acked-by's given changes in this
>>    version.
>> - Added removal of new instances of mem_encrypt_active() usage in powerpc
>>    arch.
>> - Based on latest Linux tree to pick up powerpc changes related to the
>>    mem_encrypt_active() function.
>>
>> Changes since v1:
>> - Moved some arch ioremap functions within #ifdef CONFIG_AMD_MEM_ENCRYPT
>>    in prep for use of prot_guest_has() by TDX.
>> - Added type includes to the the protected_guest.h header file to prevent
>>    build errors outside of x86.
>> - Made amd_prot_guest_has() EXPORT_SYMBOL_GPL
>> - Used amd_prot_guest_has() in place of checking sme_me_mask in the
>>    arch/x86/mm/mem_encrypt.c file.
>>
>> Tom Lendacky (8):
>>    x86/ioremap: Selectively build arch override encryption functions
>>    mm: Introduce a function to check for confidential computing features
>>    x86/sev: Add an x86 version of cc_platform_has()
>>    powerpc/pseries/svm: Add a powerpc version of cc_platform_has()
>>    x86/sme: Replace occurrences of sme_active() with cc_platform_has()
>>    x86/sev: Replace occurrences of sev_active() with cc_platform_has()
>>    x86/sev: Replace occurrences of sev_es_active() with cc_platform_has()
>>    treewide: Replace the use of mem_encrypt_active() with
>>      cc_platform_has()
>>
>>   arch/Kconfig                                 |  3 +
>>   arch/powerpc/include/asm/mem_encrypt.h       |  5 --
>>   arch/powerpc/platforms/pseries/Kconfig       |  1 +
>>   arch/powerpc/platforms/pseries/Makefile      |  2 +
>>   arch/powerpc/platforms/pseries/cc_platform.c | 26 ++++++
>>   arch/powerpc/platforms/pseries/svm.c         |  5 +-
>>   arch/s390/include/asm/mem_encrypt.h          |  2 -
>>   arch/x86/Kconfig                             |  1 +
>>   arch/x86/include/asm/io.h                    |  8 ++
>>   arch/x86/include/asm/kexec.h                 |  2 +-
>>   arch/x86/include/asm/mem_encrypt.h           | 14 +---
>>   arch/x86/kernel/Makefile                     |  3 +
>>   arch/x86/kernel/cc_platform.c                | 21 +++++
>>   arch/x86/kernel/crash_dump_64.c              |  4 +-
>>   arch/x86/kernel/head64.c                     |  4 +-
>>   arch/x86/kernel/kvm.c                        |  3 +-
>>   arch/x86/kernel/kvmclock.c                   |  4 +-
>>   arch/x86/kernel/machine_kexec_64.c           | 19 +++--
>>   arch/x86/kernel/pci-swiotlb.c                |  9 +-
>>   arch/x86/kernel/relocate_kernel_64.S         |  2 +-
>>   arch/x86/kernel/sev.c                        |  6 +-
>>   arch/x86/kvm/svm/svm.c                       |  3 +-
>>   arch/x86/mm/ioremap.c                        | 18 ++--
>>   arch/x86/mm/mem_encrypt.c                    | 57 +++++++------
>>   arch/x86/mm/mem_encrypt_identity.c           |  3 +-
>>   arch/x86/mm/pat/set_memory.c                 |  3 +-
>>   arch/x86/platform/efi/efi_64.c               |  9 +-
>>   arch/x86/realmode/init.c                     |  8 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c      |  4 +-
>>   drivers/gpu/drm/drm_cache.c                  |  4 +-
>>   drivers/gpu/drm/vmwgfx/vmwgfx_drv.c          |  4 +-
>>   drivers/gpu/drm/vmwgfx/vmwgfx_msg.c          |  6 +-
>>   drivers/iommu/amd/init.c                     |  7 +-
>>   drivers/iommu/amd/iommu.c                    |  3 +-
>>   drivers/iommu/amd/iommu_v2.c                 |  3 +-
>>   drivers/iommu/iommu.c                        |  3 +-
>>   fs/proc/vmcore.c                             |  6 +-
>>   include/linux/cc_platform.h                  | 88 ++++++++++++++++++++
>>   include/linux/mem_encrypt.h                  |  4 -
>>   kernel/dma/swiotlb.c                         |  4 +-
>>   40 files changed, 267 insertions(+), 114 deletions(-)
>>   create mode 100644 arch/powerpc/platforms/pseries/cc_platform.c
>>   create mode 100644 arch/x86/kernel/cc_platform.c
>>   create mode 100644 include/linux/cc_platform.h
>>
>>
>> base-commit: 4b93c544e90e2b28326182d31ee008eb80e02074
>>

2021-09-09 13:28:16

by Tom Lendacky

[permalink] [raw]
Subject: Re: [PATCH v3 8/8] treewide: Replace the use of mem_encrypt_active() with cc_platform_has()

On 9/9/21 2:25 AM, Christophe Leroy wrote:
>
>
> On 9/8/21 10:58 PM, Tom Lendacky wrote:
>>
>> diff --git a/arch/powerpc/include/asm/mem_encrypt.h
>> b/arch/powerpc/include/asm/mem_encrypt.h
>> index ba9dab07c1be..2f26b8fc8d29 100644
>> --- a/arch/powerpc/include/asm/mem_encrypt.h
>> +++ b/arch/powerpc/include/asm/mem_encrypt.h
>> @@ -10,11 +10,6 @@
>>   #include <asm/svm.h>
>> -static inline bool mem_encrypt_active(void)
>> -{
>> -    return is_secure_guest();
>> -}
>> -
>>   static inline bool force_dma_unencrypted(struct device *dev)
>>   {
>>       return is_secure_guest();
>> diff --git a/arch/powerpc/platforms/pseries/svm.c
>> b/arch/powerpc/platforms/pseries/svm.c
>> index 87f001b4c4e4..c083ecbbae4d 100644
>> --- a/arch/powerpc/platforms/pseries/svm.c
>> +++ b/arch/powerpc/platforms/pseries/svm.c
>> @@ -8,6 +8,7 @@
>>   #include <linux/mm.h>
>>   #include <linux/memblock.h>
>> +#include <linux/cc_platform.h>
>>   #include <asm/machdep.h>
>>   #include <asm/svm.h>
>>   #include <asm/swiotlb.h>
>> @@ -63,7 +64,7 @@ void __init svm_swiotlb_init(void)
>>   int set_memory_encrypted(unsigned long addr, int numpages)
>>   {
>> -    if (!mem_encrypt_active())
>> +    if (!cc_platform_has(CC_ATTR_MEM_ENCRYPT))
>>           return 0;
>>       if (!PAGE_ALIGNED(addr))
>> @@ -76,7 +77,7 @@ int set_memory_encrypted(unsigned long addr, int
>> numpages)
>>   int set_memory_decrypted(unsigned long addr, int numpages)
>>   {
>> -    if (!mem_encrypt_active())
>> +    if (!cc_platform_has(CC_ATTR_MEM_ENCRYPT))
>>           return 0;
>>       if (!PAGE_ALIGNED(addr))
>
> This change unnecessarily complicates the two functions. This is due to
> cc_platform_has() being out of line. It should really remain inline.

Please see previous discussion(s) on this series for why the function is
implemented out of line and for the naming:

V1: https://lore.kernel.org/lkml/[email protected]/

V2: https://lore.kernel.org/lkml/[email protected]/

Thanks,
Tom

>
> Before the change we got:
>
> 0000000000000000 <.set_memory_encrypted>:
>    0:    7d 20 00 a6     mfmsr   r9
>    4:    75 29 00 40     andis.  r9,r9,64
>    8:    41 82 00 48     beq     50 <.set_memory_encrypted+0x50>
>    c:    78 69 04 20     clrldi  r9,r3,48
>   10:    2c 29 00 00     cmpdi   r9,0
>   14:    40 82 00 4c     bne     60 <.set_memory_encrypted+0x60>
>   18:    7c 08 02 a6     mflr    r0
>   1c:    7c 85 23 78     mr      r5,r4
>   20:    78 64 85 02     rldicl  r4,r3,48,20
>   24:    61 23 f1 34     ori     r3,r9,61748
>   28:    f8 01 00 10     std     r0,16(r1)
>   2c:    f8 21 ff 91     stdu    r1,-112(r1)
>   30:    48 00 00 01     bl      30 <.set_memory_encrypted+0x30>
>             30: R_PPC64_REL24    .ucall_norets
>   34:    60 00 00 00     nop
>   38:    38 60 00 00     li      r3,0
>   3c:    38 21 00 70     addi    r1,r1,112
>   40:    e8 01 00 10     ld      r0,16(r1)
>   44:    7c 08 03 a6     mtlr    r0
>   48:    4e 80 00 20     blr
>   50:    38 60 00 00     li      r3,0
>   54:    4e 80 00 20     blr
>   60:    38 60 ff ea     li      r3,-22
>   64:    4e 80 00 20     blr
>
> After the change we get:
>
> 0000000000000000 <.set_memory_encrypted>:
>    0:    7c 08 02 a6     mflr    r0
>    4:    fb c1 ff f0     std     r30,-16(r1)
>    8:    fb e1 ff f8     std     r31,-8(r1)
>    c:    7c 7f 1b 78     mr      r31,r3
>   10:    38 60 00 00     li      r3,0
>   14:    7c 9e 23 78     mr      r30,r4
>   18:    f8 01 00 10     std     r0,16(r1)
>   1c:    f8 21 ff 81     stdu    r1,-128(r1)
>   20:    48 00 00 01     bl      20 <.set_memory_encrypted+0x20>
>             20: R_PPC64_REL24    .cc_platform_has
>   24:    60 00 00 00     nop
>   28:    2c 23 00 00     cmpdi   r3,0
>   2c:    41 82 00 44     beq     70 <.set_memory_encrypted+0x70>
>   30:    7b e9 04 20     clrldi  r9,r31,48
>   34:    2c 29 00 00     cmpdi   r9,0
>   38:    40 82 00 58     bne     90 <.set_memory_encrypted+0x90>
>   3c:    38 60 00 00     li      r3,0
>   40:    7f c5 f3 78     mr      r5,r30
>   44:    7b e4 85 02     rldicl  r4,r31,48,20
>   48:    60 63 f1 34     ori     r3,r3,61748
>   4c:    48 00 00 01     bl      4c <.set_memory_encrypted+0x4c>
>             4c: R_PPC64_REL24    .ucall_norets
>   50:    60 00 00 00     nop
>   54:    38 60 00 00     li      r3,0
>   58:    38 21 00 80     addi    r1,r1,128
>   5c:    e8 01 00 10     ld      r0,16(r1)
>   60:    eb c1 ff f0     ld      r30,-16(r1)
>   64:    eb e1 ff f8     ld      r31,-8(r1)
>   68:    7c 08 03 a6     mtlr    r0
>   6c:    4e 80 00 20     blr
>   70:    38 21 00 80     addi    r1,r1,128
>   74:    38 60 00 00     li      r3,0
>   78:    e8 01 00 10     ld      r0,16(r1)
>   7c:    eb c1 ff f0     ld      r30,-16(r1)
>   80:    eb e1 ff f8     ld      r31,-8(r1)
>   84:    7c 08 03 a6     mtlr    r0
>   88:    4e 80 00 20     blr
>   90:    38 60 ff ea     li      r3,-22
>   94:    4b ff ff c4     b       58 <.set_memory_encrypted+0x58>
>

2021-09-10 15:04:43

by Borislav Petkov

[permalink] [raw]
Subject: Re: [PATCH v3 2/8] mm: Introduce a function to check for confidential computing features

On Wed, Sep 08, 2021 at 05:58:33PM -0500, Tom Lendacky wrote:
> In prep for other confidential computing technologies, introduce a generic

preparation

> helper function, cc_platform_has(), that can be used to check for specific
> active confidential computing attributes, like memory encryption. This is
> intended to eliminate having to add multiple technology-specific checks to
> the code (e.g. if (sev_active() || tdx_active())).

...

> diff --git a/include/linux/cc_platform.h b/include/linux/cc_platform.h
> new file mode 100644
> index 000000000000..253f3ea66cd8
> --- /dev/null
> +++ b/include/linux/cc_platform.h
> @@ -0,0 +1,88 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/*
> + * Confidential Computing Platform Capability checks
> + *
> + * Copyright (C) 2021 Advanced Micro Devices, Inc.
> + *
> + * Author: Tom Lendacky <[email protected]>
> + */
> +
> +#ifndef _CC_PLATFORM_H

_LINUX_CC_PLATFORM_H

> +#define _CC_PLATFORM_H
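
I.e. something along these lines (sketch):

#ifndef _LINUX_CC_PLATFORM_H
#define _LINUX_CC_PLATFORM_H

/* header contents unchanged */

#endif /* _LINUX_CC_PLATFORM_H */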

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2021-09-11 10:15:02

by Borislav Petkov

[permalink] [raw]
Subject: Re: [PATCH v3 3/8] x86/sev: Add an x86 version of cc_platform_has()

On Wed, Sep 08, 2021 at 05:58:34PM -0500, Tom Lendacky wrote:
> diff --git a/arch/x86/kernel/cc_platform.c b/arch/x86/kernel/cc_platform.c
> new file mode 100644
> index 000000000000..3c9bacd3c3f3
> --- /dev/null
> +++ b/arch/x86/kernel/cc_platform.c
> @@ -0,0 +1,21 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * Confidential Computing Platform Capability checks
> + *
> + * Copyright (C) 2021 Advanced Micro Devices, Inc.
> + *
> + * Author: Tom Lendacky <[email protected]>
> + */
> +
> +#include <linux/export.h>
> +#include <linux/cc_platform.h>
> +#include <linux/mem_encrypt.h>
> +
> +bool cc_platform_has(enum cc_attr attr)
> +{
> + if (sme_me_mask)

Why are you still checking the sme_me_mask here? AFAIR, we said that
we'll do that only when the KVM folks come with a valid use case...

> + return amd_cc_platform_has(attr);
> +
> + return false;
> +}
> +EXPORT_SYMBOL_GPL(cc_platform_has);
> diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
> index ff08dc463634..18fe19916bc3 100644
> --- a/arch/x86/mm/mem_encrypt.c
> +++ b/arch/x86/mm/mem_encrypt.c
> @@ -20,6 +20,7 @@
> #include <linux/bitops.h>
> #include <linux/dma-mapping.h>
> #include <linux/virtio_config.h>
> +#include <linux/cc_platform.h>
>
> #include <asm/tlbflush.h>
> #include <asm/fixmap.h>
> @@ -389,6 +390,26 @@ bool noinstr sev_es_active(void)
> return sev_status & MSR_AMD64_SEV_ES_ENABLED;
> }
>
> +bool amd_cc_platform_has(enum cc_attr attr)
> +{
> + switch (attr) {
> + case CC_ATTR_MEM_ENCRYPT:
> + return sme_me_mask != 0;

No need for the "!= 0"

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2021-09-14 12:01:04

by Borislav Petkov

[permalink] [raw]
Subject: Re: [PATCH v3 4/8] powerpc/pseries/svm: Add a powerpc version of cc_platform_has()

On Wed, Sep 08, 2021 at 05:58:35PM -0500, Tom Lendacky wrote:
> Introduce a powerpc version of the cc_platform_has() function. This will
> be used to replace the powerpc mem_encrypt_active() implementation, so
> the implementation will initially only support the CC_ATTR_MEM_ENCRYPT
> attribute.
>
> Cc: Michael Ellerman <[email protected]>
> Cc: Benjamin Herrenschmidt <[email protected]>
> Cc: Paul Mackerras <[email protected]>
> Signed-off-by: Tom Lendacky <[email protected]>
> ---
> arch/powerpc/platforms/pseries/Kconfig | 1 +
> arch/powerpc/platforms/pseries/Makefile | 2 ++
> arch/powerpc/platforms/pseries/cc_platform.c | 26 ++++++++++++++++++++
> 3 files changed, 29 insertions(+)
> create mode 100644 arch/powerpc/platforms/pseries/cc_platform.c

Michael,

can I get an ACK for the ppc bits to carry them through the tip tree
pls?

Btw, on a related note, cross-compiling this throws the following error here:

$ make CROSS_COMPILE=/home/share/src/crosstool/gcc-9.4.0-nolibc/powerpc64-linux/bin/powerpc64-linux- V=1 ARCH=powerpc

...

/home/share/src/crosstool/gcc-9.4.0-nolibc/powerpc64-linux/bin/powerpc64-linux-gcc -Wp,-MD,arch/powerpc/boot/.crt0.o.d -D__ASSEMBLY__ -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs -fno-strict-aliasing -O2 -msoft-float -mno-altivec -mno-vsx -pipe -fomit-frame-pointer -fno-builtin -fPIC -nostdinc -include ./include/linux/compiler_attributes.h -I./arch/powerpc/include -I./arch/powerpc/include/generated -I./include -I./arch/powerpc/include/uapi -I./arch/powerpc/include/generated/uapi -I./include/uapi -I./include/generated/uapi -include ./include/linux/compiler-version.h -include ./include/linux/kconfig.h -m32 -isystem /home/share/src/crosstool/gcc-9.4.0-nolibc/powerpc64-linux/bin/../lib/gcc/powerpc64-linux/9.4.0/include -mbig-endian -nostdinc -c -o arch/powerpc/boot/crt0.o arch/powerpc/boot/crt0.S
In file included from <command-line>:
././include/linux/compiler_attributes.h:62:5: warning: "__has_attribute" is not defined, evaluates to 0 [-Wundef]
62 | #if __has_attribute(__assume_aligned__)
| ^~~~~~~~~~~~~~~
././include/linux/compiler_attributes.h:62:20: error: missing binary operator before token "("
62 | #if __has_attribute(__assume_aligned__)
| ^
././include/linux/compiler_attributes.h:88:5: warning: "__has_attribute" is not defined, evaluates to 0 [-Wundef]
88 | #if __has_attribute(__copy__)
| ^~~~~~~~~~~~~~~
...

Known issue?

This __has_attribute() thing is supposed to be supported
in gcc since 5.1 and I'm using the crosstool stuff from
https://www.kernel.org/pub/tools/crosstool/ and gcc-9.4 above is pretty
new so that should not happen actually.

But it does...

Hmmm.

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2021-09-14 14:50:15

by Christophe Leroy

[permalink] [raw]
Subject: Re: [PATCH v3 4/8] powerpc/pseries/svm: Add a powerpc version of cc_platform_has()



Le 14/09/2021 à 13:58, Borislav Petkov a écrit :
> On Wed, Sep 08, 2021 at 05:58:35PM -0500, Tom Lendacky wrote:
>> Introduce a powerpc version of the cc_platform_has() function. This will
>> be used to replace the powerpc mem_encrypt_active() implementation, so
>> the implementation will initially only support the CC_ATTR_MEM_ENCRYPT
>> attribute.
>>
>> Cc: Michael Ellerman <[email protected]>
>> Cc: Benjamin Herrenschmidt <[email protected]>
>> Cc: Paul Mackerras <[email protected]>
>> Signed-off-by: Tom Lendacky <[email protected]>
>> ---
>> arch/powerpc/platforms/pseries/Kconfig | 1 +
>> arch/powerpc/platforms/pseries/Makefile | 2 ++
>> arch/powerpc/platforms/pseries/cc_platform.c | 26 ++++++++++++++++++++
>> 3 files changed, 29 insertions(+)
>> create mode 100644 arch/powerpc/platforms/pseries/cc_platform.c
>
> Michael,
>
> can I get an ACK for the ppc bits to carry them through the tip tree
> pls?
>
> Btw, on a related note, cross-compiling this throws the following error here:
>
> $ make CROSS_COMPILE=/home/share/src/crosstool/gcc-9.4.0-nolibc/powerpc64-linux/bin/powerpc64-linux- V=1 ARCH=powerpc
>
> ...
>
> /home/share/src/crosstool/gcc-9.4.0-nolibc/powerpc64-linux/bin/powerpc64-linux-gcc -Wp,-MD,arch/powerpc/boot/.crt0.o.d -D__ASSEMBLY__ -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs -fno-strict-aliasing -O2 -msoft-float -mno-altivec -mno-vsx -pipe -fomit-frame-pointer -fno-builtin -fPIC -nostdinc -include ./include/linux/compiler_attributes.h -I./arch/powerpc/include -I./arch/powerpc/include/generated -I./include -I./arch/powerpc/include/uapi -I./arch/powerpc/include/generated/uapi -I./include/uapi -I./include/generated/uapi -include ./include/linux/compiler-version.h -include ./include/linux/kconfig.h -m32 -isystem /home/share/src/crosstool/gcc-9.4.0-nolibc/powerpc64-linux/bin/../lib/gcc/powerpc64-linux/9.4.0/include -mbig-endian -nostdinc -c -o arch/powerpc/boot/crt0.o arch/powerpc/boot/crt0.S
> In file included from <command-line>:
> ././include/linux/compiler_attributes.h:62:5: warning: "__has_attribute" is not defined, evaluates to 0 [-Wundef]
> 62 | #if __has_attribute(__assume_aligned__)
> | ^~~~~~~~~~~~~~~
> ././include/linux/compiler_attributes.h:62:20: error: missing binary operator before token "("
> 62 | #if __has_attribute(__assume_aligned__)
> | ^
> ././include/linux/compiler_attributes.h:88:5: warning: "__has_attribute" is not defined, evaluates to 0 [-Wundef]
> 88 | #if __has_attribute(__copy__)
> | ^~~~~~~~~~~~~~~
> ...
>
> Known issue?
>
> This __has_attribute() thing is supposed to be supported
> in gcc since 5.1 and I'm using the crosstool stuff from
> https://www.kernel.org/pub/tools/crosstool/ and gcc-9.4 above is pretty
> new so that should not happen actually.
>
> But it does...
>
> Hmmm.
>


Yes, see
https://lore.kernel.org/linuxppc-dev/[email protected]/T/#t

2021-09-14 14:58:59

by Borislav Petkov

[permalink] [raw]
Subject: Re: [PATCH v3 4/8] powerpc/pseries/svm: Add a powerpc version of cc_platform_has()

On Tue, Sep 14, 2021 at 04:47:41PM +0200, Christophe Leroy wrote:
> Yes, see https://lore.kernel.org/linuxppc-dev/[email protected]/T/#t

Aha, more compiler magic stuff ;-\

Oh well, I guess that fix will land upstream soon.

Thx.

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2021-09-14 18:27:31

by Borislav Petkov

[permalink] [raw]
Subject: Re: [PATCH v3 5/8] x86/sme: Replace occurrences of sme_active() with cc_platform_has()

On Wed, Sep 08, 2021 at 05:58:36PM -0500, Tom Lendacky wrote:
> diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
> index 18fe19916bc3..4b54a2377821 100644
> --- a/arch/x86/mm/mem_encrypt.c
> +++ b/arch/x86/mm/mem_encrypt.c
> @@ -144,7 +144,7 @@ void __init sme_unmap_bootdata(char *real_mode_data)
> struct boot_params *boot_data;
> unsigned long cmdline_paddr;
>
> - if (!sme_active())
> + if (!cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
> return;
>
> /* Get the command line address before unmapping the real_mode_data */
> @@ -164,7 +164,7 @@ void __init sme_map_bootdata(char *real_mode_data)
> struct boot_params *boot_data;
> unsigned long cmdline_paddr;
>
> - if (!sme_active())
> + if (!cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
> return;
>
> __sme_early_map_unmap_mem(real_mode_data, sizeof(boot_params), true);
> @@ -377,11 +377,6 @@ bool sev_active(void)
> {
> return sev_status & MSR_AMD64_SEV_ENABLED;
> }
> -
> -bool sme_active(void)
> -{
> - return sme_me_mask && !sev_active();
> -}
> EXPORT_SYMBOL_GPL(sev_active);
>
> /* Needs to be called from non-instrumentable code */

You forgot this hunk:

diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 5635ca9a1fbe..a3a2396362a5 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -364,8 +364,9 @@ int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size)
/*
* SME and SEV are very similar but they are not the same, so there are
* times that the kernel will need to distinguish between SME and SEV. The
- * sme_active() and sev_active() functions are used for this. When a
- * distinction isn't needed, the mem_encrypt_active() function can be used.
+ * PATTR_HOST_MEM_ENCRYPT and PATTR_GUEST_MEM_ENCRYPT flags to
+ * amd_prot_guest_has() are used for this. When a distinction isn't needed,
+ * the mem_encrypt_active() function can be used.
*
* The trampoline code is a good example for this requirement. Before
* paging is activated, SME will access all memory as decrypted, but SEV

because there's still a sme_active() mentioned there:

$ git grep sme_active
arch/x86/mm/mem_encrypt.c:367: * sme_active() and sev_active() functions are used for this. When a

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2021-09-15 00:31:54

by Michael Ellerman

[permalink] [raw]
Subject: Re: [PATCH v3 4/8] powerpc/pseries/svm: Add a powerpc version of cc_platform_has()

Borislav Petkov <[email protected]> writes:
> On Wed, Sep 08, 2021 at 05:58:35PM -0500, Tom Lendacky wrote:
>> Introduce a powerpc version of the cc_platform_has() function. This will
>> be used to replace the powerpc mem_encrypt_active() implementation, so
>> the implementation will initially only support the CC_ATTR_MEM_ENCRYPT
>> attribute.
>>
>> Cc: Michael Ellerman <[email protected]>
>> Cc: Benjamin Herrenschmidt <[email protected]>
>> Cc: Paul Mackerras <[email protected]>
>> Signed-off-by: Tom Lendacky <[email protected]>
>> ---
>> arch/powerpc/platforms/pseries/Kconfig | 1 +
>> arch/powerpc/platforms/pseries/Makefile | 2 ++
>> arch/powerpc/platforms/pseries/cc_platform.c | 26 ++++++++++++++++++++
>> 3 files changed, 29 insertions(+)
>> create mode 100644 arch/powerpc/platforms/pseries/cc_platform.c
>
> Michael,
>
> can I get an ACK for the ppc bits to carry them through the tip tree
> pls?

Yeah.

I don't love it, a new C file and an out-of-line call to then call back
to a static inline that for most configurations will return false ... but
whatever :)

Acked-by: Michael Ellerman <[email protected]> (powerpc)


> Btw, on a related note, cross-compiling this throws the following error here:
>
> $ make CROSS_COMPILE=/home/share/src/crosstool/gcc-9.4.0-nolibc/powerpc64-linux/bin/powerpc64-linux- V=1 ARCH=powerpc
>
> ...
>
> /home/share/src/crosstool/gcc-9.4.0-nolibc/powerpc64-linux/bin/powerpc64-linux-gcc -Wp,-MD,arch/powerpc/boot/.crt0.o.d -D__ASSEMBLY__ -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs -fno-strict-aliasing -O2 -msoft-float -mno-altivec -mno-vsx -pipe -fomit-frame-pointer -fno-builtin -fPIC -nostdinc -include ./include/linux/compiler_attributes.h -I./arch/powerpc/include -I./arch/powerpc/include/generated -I./include -I./arch/powerpc/include/uapi -I./arch/powerpc/include/generated/uapi -I./include/uapi -I./include/generated/uapi -include ./include/linux/compiler-version.h -include ./include/linux/kconfig.h -m32 -isystem /home/share/src/crosstool/gcc-9.4.0-nolibc/powerpc64-linux/bin/../lib/gcc/powerpc64-linux/9.4.0/include -mbig-endian -nostdinc -c -o arch/powerpc/boot/crt0.o arch/powerpc/boot/crt0.S
> In file included from <command-line>:
> ././include/linux/compiler_attributes.h:62:5: warning: "__has_attribute" is not defined, evaluates to 0 [-Wundef]
> 62 | #if __has_attribute(__assume_aligned__)
> | ^~~~~~~~~~~~~~~
> ././include/linux/compiler_attributes.h:62:20: error: missing binary operator before token "("
> 62 | #if __has_attribute(__assume_aligned__)
> | ^
> ././include/linux/compiler_attributes.h:88:5: warning: "__has_attribute" is not defined, evaluates to 0 [-Wundef]
> 88 | #if __has_attribute(__copy__)
> | ^~~~~~~~~~~~~~~
> ...
>
> Known issue?

Yeah, fixed in mainline today, thanks for trying to cross compile :)

cheers

2021-09-15 10:12:28

by Borislav Petkov

[permalink] [raw]
Subject: Re: [PATCH v3 4/8] powerpc/pseries/svm: Add a powerpc version of cc_platform_has()

On Wed, Sep 15, 2021 at 10:28:59AM +1000, Michael Ellerman wrote:
> I don't love it, a new C file and an out-of-line call to then call back
> to a static inline that for most configurations will return false ... but
> whatever :)

Yeah, hch thinks it'll cause a big mess otherwise:

https://lore.kernel.org/lkml/YSScWvpXeVXw%[email protected]/

I guess less ifdeffery is nice too.

> Acked-by: Michael Ellerman <[email protected]> (powerpc)

Thx.

> Yeah, fixed in mainline today, thanks for trying to cross compile :)

Always!

:-)

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2021-09-15 16:50:27

by Borislav Petkov

[permalink] [raw]
Subject: Re: [PATCH v3 0/8] Implement generic cc_platform_has() helper function

On Wed, Sep 08, 2021 at 05:58:31PM -0500, Tom Lendacky wrote:
> This patch series provides a generic helper function, cc_platform_has(),
> to replace the sme_active(), sev_active(), sev_es_active() and
> mem_encrypt_active() functions.
>
> It is expected that as new confidential computing technologies are
> added to the kernel, they can all be covered by a single function call
> instead of a collection of specific function calls all called from the
> same locations.
>
> The powerpc and s390 patches have been compile tested only. Can the
> folks copied on this series verify that nothing breaks for them. Also,
> a new file, arch/powerpc/platforms/pseries/cc_platform.c, has been
> created for powerpc to hold the out of line function.

...

>
> Tom Lendacky (8):
> x86/ioremap: Selectively build arch override encryption functions
> mm: Introduce a function to check for confidential computing features
> x86/sev: Add an x86 version of cc_platform_has()
> powerpc/pseries/svm: Add a powerpc version of cc_platform_has()
> x86/sme: Replace occurrences of sme_active() with cc_platform_has()
> x86/sev: Replace occurrences of sev_active() with cc_platform_has()
> x86/sev: Replace occurrences of sev_es_active() with cc_platform_has()
> treewide: Replace the use of mem_encrypt_active() with
> cc_platform_has()

Ok, modulo the minor things the plan is to take this through tip after
-rc2 releases in order to pick up the powerpc build fix and have a clean
base (-rc2) to base stuff on, at the same time.

Pls holler if something's still amiss.

Sathya,

if you want to prepare the Intel variant intel_cc_platform_has() on top
of those and send it to me, that would be good because then I can
integrate it all in one branch which future work can then be based on.

Thx.

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2021-09-15 17:21:01

by Christophe Leroy

[permalink] [raw]
Subject: Re: [PATCH v3 4/8] powerpc/pseries/svm: Add a powerpc version of cc_platform_has()



Le 15/09/2021 à 12:08, Borislav Petkov a écrit :
> On Wed, Sep 15, 2021 at 10:28:59AM +1000, Michael Ellerman wrote:
>> I don't love it, a new C file and an out-of-line call to then call back
>> to a static inline that for most configurations will return false ... but
>> whatever :)
>
> Yeah, hch thinks it'll cause a big mess otherwise:
>
> https://lore.kernel.org/lkml/YSScWvpXeVXw%[email protected]/

Could you please provide a more explicit explanation of why inlining such
a helper is considered bad practice and messy?

Because, as demonstrated in my previous response some days ago, taking it
out of line ends up with unnecessarily ugly generated code and we don't
benefit from GCC's ability to fold things in and optimize out unreachable
code.

As pointed out by Michael, in most cases the function will just return
false, so beyond the performance concern there is also the code size and
code coverage aspect to take into account. And even when the function
doesn't return false, the only thing it does folds into a single powerpc
instruction, so there is really no point in making a dedicated out-of-line
function for that, suffering the cost and size of a function call, and
justifying the addition of a dedicated C file.
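
For the record, the helper behind that single instruction is essentially
just an MSR bit test (sketch, going by arch/powerpc/include/asm/svm.h):

static inline bool is_secure_guest(void)
{
	return mfmsr() & MSR_S;	/* the mfmsr/andis. pair from my earlier disassembly */
}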


>
> I guess less ifdeffery is nice too.

I can't see your point here. Inlining the function wouldn't add any
ifdeffery as far as I can see.

So, would you mind reconsidering your approach and allowing architectures
to provide an inline implementation by just not enforcing a generic
prototype? Or otherwise provide more details and examples of why the cons
outweigh the pros?

Thanks
Christophe

Subject: Re: [PATCH v3 0/8] Implement generic cc_platform_has() helper function



On 9/15/21 9:46 AM, Borislav Petkov wrote:
> Sathya,
>
> if you want to prepare the Intel variant intel_cc_platform_has() ontop
> of those and send it to me, that would be good because then I can
> integrate it all in one branch which can be used to base future work
> ontop.

I have an Intel variant patch (please check the following patch). But it includes
TDX changes as well. Shall I move the TDX changes to a different patch and just
create a separate patch for adding intel_cc_platform_has()?


commit fc5f98a0ed94629d903827c5b44ee9295f835831
Author: Kuppuswamy Sathyanarayanan <[email protected]>
Date: Wed May 12 11:35:13 2021 -0700

x86/tdx: Add confidential guest support for TDX guest

The TDX architecture provides a way for VM guests to be highly secure and
isolated from an untrusted VMM. To meet this requirement, any data
coming from the VMM cannot be completely trusted. A TDX guest addresses
this by hardening the I/O drivers against attacks from the VMM.
So, when adding hardening fixes to the generic drivers, use the
cc_platform_has() API to protect these custom fixes.

Also add TDX guest support to the cc_platform_has() API so it can protect
the TDX-specific fixes.

Signed-off-by: Kuppuswamy Sathyanarayanan <[email protected]>

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index a5b14de03458..2e78358923a1 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -871,6 +871,7 @@ config INTEL_TDX_GUEST
depends on SECURITY
select X86_X2APIC
select SECURITY_LOCKDOWN_LSM
+ select ARCH_HAS_CC_PLATFORM
help
Provide support for running in a trusted domain on Intel processors
equipped with Trusted Domain eXtensions. TDX is a new Intel
diff --git a/arch/x86/include/asm/intel_cc_platform.h b/arch/x86/include/asm/intel_cc_platform.h
new file mode 100644
index 000000000000..472c3174beac
--- /dev/null
+++ b/arch/x86/include/asm/intel_cc_platform.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2021 Intel Corporation */
+#ifndef _ASM_X86_INTEL_CC_PLATFORM_H
+#define _ASM_X86_INTEL_CC_PLATFORM_H
+
+#if defined(CONFIG_CPU_SUP_INTEL) && defined(CONFIG_ARCH_HAS_CC_PLATFORM)
+bool intel_cc_platform_has(enum cc_attr attr);
+#else
+static inline bool intel_cc_platform_has(enum cc_attr attr) { return false; }
+#endif
+
+#endif /* _ASM_X86_INTEL_CC_PLATFORM_H */
+
diff --git a/arch/x86/kernel/cc_platform.c b/arch/x86/kernel/cc_platform.c
index 3c9bacd3c3f3..e83bc2f48efe 100644
--- a/arch/x86/kernel/cc_platform.c
+++ b/arch/x86/kernel/cc_platform.c
@@ -10,11 +10,16 @@
#include <linux/export.h>
#include <linux/cc_platform.h>
#include <linux/mem_encrypt.h>
+#include <linux/processor.h>
+
+#include <asm/intel_cc_platform.h>

bool cc_platform_has(enum cc_attr attr)
{
if (sme_me_mask)
return amd_cc_platform_has(attr);
+ else if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)
+ return intel_cc_platform_has(attr);

return false;
}
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index 8321c43554a1..ab486a3b1eb0 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -11,6 +11,7 @@
#include <linux/init.h>
#include <linux/uaccess.h>
#include <linux/delay.h>
+#include <linux/cc_platform.h>

#include <asm/cpufeature.h>
#include <asm/msr.h>
@@ -60,6 +61,21 @@ static u64 msr_test_ctrl_cache __ro_after_init;
*/
static bool cpu_model_supports_sld __ro_after_init;

+#ifdef CONFIG_ARCH_HAS_CC_PLATFORM
+bool intel_cc_platform_has(enum cc_attr attr)
+{
+ switch (attr) {
+ case CC_ATTR_GUEST_TDX:
+ return cpu_feature_enabled(X86_FEATURE_TDX_GUEST);
+ default:
+ return false;
+ }
+
+ return false;
+}
+EXPORT_SYMBOL_GPL(intel_cc_platform_has);
+#endif
+
/*
* Processors which have self-snooping capability can handle conflicting
* memory type across CPUs by snooping its own cache. However, there exists
diff --git a/include/linux/cc_platform.h b/include/linux/cc_platform.h
index 253f3ea66cd8..e38430e6e396 100644
--- a/include/linux/cc_platform.h
+++ b/include/linux/cc_platform.h
@@ -61,6 +61,15 @@ enum cc_attr {
* Examples include SEV-ES.
*/
CC_ATTR_GUEST_STATE_ENCRYPT,
+
+ /**
+ * @CC_ATTR_GUEST_TDX: Trusted Domain Extension Support
+ *
+ * The platform/OS is running as a TDX guest/virtual machine.
+ *
+ * Examples include TDX.
+ */
+ CC_ATTR_GUEST_TDX,
};

#ifdef CONFIG_ARCH_HAS_CC_PLATFORM
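
A driver-side use of the new attribute would then look something like this
(sketch, with a hypothetical hardening message purely for illustration):

	if (cc_platform_has(CC_ATTR_GUEST_TDX))
		pr_info("TDX guest: applying VMM hardening fixups\n");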


--
Sathyanarayanan Kuppuswamy
Linux Kernel Developer

2021-09-15 18:50:03

by Borislav Petkov

[permalink] [raw]
Subject: Re: [PATCH v3 4/8] powerpc/pseries/svm: Add a powerpc version of cc_platform_has()

On Wed, Sep 15, 2021 at 07:18:34PM +0200, Christophe Leroy wrote:
> Could you please provide a more explicit explanation of why inlining such
> a helper is considered bad practice and messy?

Tom already told you to look at the previous threads. Let's read them
together. This one, for example:

https://lore.kernel.org/lkml/YSScWvpXeVXw%[email protected]/

| > To take it out of line, I'm leaning towards the latter, creating a new
| > file that is built based on the ARCH_HAS_PROTECTED_GUEST setting.
|
| Yes. In general everytime architectures have to provide the prototype
| and not just the implementation of something we end up with a giant mess
| sooner or later. In a few cases that is still warranted due to
| performance concerns, but i don't think that is the case here.

So I think what Christoph means here is that you want to have the
generic prototype defined in a header and arches get to implement it
exactly to the letter so that there's no mess.

As to what mess exactly, I'd let him explain that.

> Because, as demonstrated in my previous response some days ago, taking that
> out of line ends up with unnecessarily ugly generated code and we don't
> benefit from GCC's capability to fold in and optimize out unreachable code.

And is this a real fast path where a couple of instructions matter, or what?

set_memory_encrypted/_decrypted doesn't look like one to me.

> I can't see your point here. Inlining the function wouldn't add any
> ifdeffery as far as I can see.

If the function is touching defines etc, they all need to be visible.
If that function needs to call other functions - which is the case on
x86, perhaps not so much on power - then you need to either ifdef around
them or provide stubs with ifdeffery in the headers. And you need to
make them global functions instead of keeping them static to the same
compilation unit, etc, etc.

With a separate compilation unit, you don't need any of that and it is
all kept in that single file.

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2021-09-16 07:38:49

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH v3 4/8] powerpc/pseries/svm: Add a powerpc version of cc_platform_has()

On Wed, Sep 15, 2021 at 07:18:34PM +0200, Christophe Leroy wrote:
> Could you please provide a more explicit explanation of why inlining such a
> helper is considered bad practice and messy?

Because now we get architectures that all subtly differ. Look at the mess
for ioremap and the ioremap* variants.

The only good reason to allow for inlines is if they are used in a hot
path, which cc_platform_has() is not, especially not on powerpc.

2021-09-16 11:52:41

by Michael Ellerman

[permalink] [raw]
Subject: Re: [PATCH v3 4/8] powerpc/pseries/svm: Add a powerpc version of cc_platform_has()

Christoph Hellwig <[email protected]> writes:
> On Wed, Sep 15, 2021 at 07:18:34PM +0200, Christophe Leroy wrote:
>> Could you please provide a more explicit explanation of why inlining such a
>> helper is considered bad practice and messy?
>
> Because now we get architectures that all subtly differ. Look at the mess
> for ioremap and the ioremap* variants.
>
> The only good reason to allow for inlines is if they are used in a hot
> path, which cc_platform_has() is not, especially not on powerpc.

Yes I agree, it's not a hot path so it doesn't really matter, which is
why I Acked it.

I think it is possible to do both: share the declaration across arches
but also give arches the flexibility to use an inline if they prefer; see the
patch below.

I'm not suggesting we actually do that for this series now, but I think
it would solve the problem if we ever needed to in future.

cheers


diff --git a/arch/powerpc/platforms/pseries/cc_platform.c b/arch/powerpc/include/asm/cc_platform.h
similarity index 74%
rename from arch/powerpc/platforms/pseries/cc_platform.c
rename to arch/powerpc/include/asm/cc_platform.h
index e8021af83a19..6285c3c385a6 100644
--- a/arch/powerpc/platforms/pseries/cc_platform.c
+++ b/arch/powerpc/include/asm/cc_platform.h
@@ -7,13 +7,10 @@
* Author: Tom Lendacky <[email protected]>
*/

-#include <linux/export.h>
#include <linux/cc_platform.h>
-
-#include <asm/machdep.h>
#include <asm/svm.h>

-bool cc_platform_has(enum cc_attr attr)
+static inline bool arch_cc_platform_has(enum cc_attr attr)
{
switch (attr) {
case CC_ATTR_MEM_ENCRYPT:
@@ -23,4 +20,3 @@ bool cc_platform_has(enum cc_attr attr)
return false;
}
}
-EXPORT_SYMBOL_GPL(cc_platform_has);
diff --git a/arch/x86/include/asm/cc_platform.h b/arch/x86/include/asm/cc_platform.h
new file mode 100644
index 000000000000..0a4220697043
--- /dev/null
+++ b/arch/x86/include/asm/cc_platform.h
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_X86_CC_PLATFORM_H_
+#define _ASM_X86_CC_PLATFORM_H_
+
+bool arch_cc_platform_has(enum cc_attr attr);
+
+#endif // _ASM_X86_CC_PLATFORM_H_
diff --git a/arch/x86/kernel/cc_platform.c b/arch/x86/kernel/cc_platform.c
index 3c9bacd3c3f3..77e8c3465979 100644
--- a/arch/x86/kernel/cc_platform.c
+++ b/arch/x86/kernel/cc_platform.c
@@ -11,11 +11,11 @@
#include <linux/cc_platform.h>
#include <linux/mem_encrypt.h>

-bool cc_platform_has(enum cc_attr attr)
+bool arch_cc_platform_has(enum cc_attr attr)
{
if (sme_me_mask)
return amd_cc_platform_has(attr);

return false;
}
-EXPORT_SYMBOL_GPL(cc_platform_has);
+EXPORT_SYMBOL_GPL(arch_cc_platform_has);
diff --git a/include/linux/cc_platform.h b/include/linux/cc_platform.h
index 253f3ea66cd8..f3306647c5d9 100644
--- a/include/linux/cc_platform.h
+++ b/include/linux/cc_platform.h
@@ -65,6 +65,8 @@ enum cc_attr {

#ifdef CONFIG_ARCH_HAS_CC_PLATFORM

+#include <asm/cc_platform.h>
+
/**
* cc_platform_has() - Checks if the specified cc_attr attribute is active
* @attr: Confidential computing attribute to check
@@ -77,7 +79,10 @@ enum cc_attr {
* * TRUE - Specified Confidential Computing attribute is active
* * FALSE - Specified Confidential Computing attribute is not active
*/
-bool cc_platform_has(enum cc_attr attr);
+static inline bool cc_platform_has(enum cc_attr attr)
+{
+ return arch_cc_platform_has(attr);
+}

#else /* !CONFIG_ARCH_HAS_CC_PLATFORM */



2021-09-16 18:57:02

by Borislav Petkov

[permalink] [raw]
Subject: Re: [PATCH v3 0/8] Implement generic cc_platform_has() helper function

On Wed, Sep 15, 2021 at 10:26:06AM -0700, Kuppuswamy, Sathyanarayanan wrote:
> I have an Intel variant patch (please check the following patch). But it includes
> TDX changes as well. Shall I move the TDX changes to a different patch and just
> create a separate patch for adding intel_cc_platform_has()?

Yes, please, so that I can expedite that stuff separately and so that it
can go in early in order for future work to be based on top.

Thx.

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

Subject: Re: [PATCH v3 0/8] Implement generic cc_platform_has() helper function



On 9/16/21 8:02 AM, Borislav Petkov wrote:
> On Wed, Sep 15, 2021 at 10:26:06AM -0700, Kuppuswamy, Sathyanarayanan wrote:
>> I have an Intel variant patch (please check the following patch). But it includes
>> TDX changes as well. Shall I move the TDX changes to a different patch and just
>> create a separate patch for adding intel_cc_platform_has()?
>
> Yes, please, so that I can expedite that stuff separately and so that it
> can go in early in order for future work to be based on top.

Sent it as part of the TDX patch series. Please check and cherry-pick it.

https://lore.kernel.org/lkml/20210916183550.15349-2-sathyanarayanan.kuppuswamy@linux.intel.com/

>
> Thx.
>

--
Sathyanarayanan Kuppuswamy
Linux Kernel Developer

2021-09-21 04:10:57

by Kirill A. Shutemov

[permalink] [raw]
Subject: Re: [PATCH v3 5/8] x86/sme: Replace occurrences of sme_active() with cc_platform_has()

On Wed, Sep 08, 2021 at 05:58:36PM -0500, Tom Lendacky wrote:
> diff --git a/arch/x86/mm/mem_encrypt_identity.c b/arch/x86/mm/mem_encrypt_identity.c
> index 470b20208430..eff4d19f9cb4 100644
> --- a/arch/x86/mm/mem_encrypt_identity.c
> +++ b/arch/x86/mm/mem_encrypt_identity.c
> @@ -30,6 +30,7 @@
> #include <linux/kernel.h>
> #include <linux/mm.h>
> #include <linux/mem_encrypt.h>
> +#include <linux/cc_platform.h>
>
> #include <asm/setup.h>
> #include <asm/sections.h>
> @@ -287,7 +288,7 @@ void __init sme_encrypt_kernel(struct boot_params *bp)
> unsigned long pgtable_area_len;
> unsigned long decrypted_base;
>
> - if (!sme_active())
> + if (!cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
> return;
>
> /*

This change breaks boot for me (in KVM on an Intel host). It only reproduces
with allyesconfig. A more reasonable config works fine, but I didn't try to
find the exact cause in the config.

Conversion to cc_platform_has() in __startup_64() in 8/8 has the same
effect.

I believe it is caused by the sme_me_mask access from __startup_64() without
the fixup_pointer() magic. I think __startup_64() requires special treatment
and we should avoid cc_platform_has() there (or have a special version of
the helper). Note that only AMD requires these cc_platform_has() calls to
return true.
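
For reference, the fixup_pointer() magic is roughly this pattern from
arch/x86/kernel/head64.c (sketch from memory; the sme_me_mask line below is
only an illustration of how a global would have to be read there):

/* Translate a kernel symbol address to where it actually lives before
 * the kernel has switched to its final virtual mapping.
 */
static void __head *fixup_pointer(void *ptr, unsigned long physaddr)
{
	return ptr - (void *)_text + (void *)physaddr;
}

/* e.g. reading sme_me_mask safely from inside __startup_64(): */
unsigned long mask = *(unsigned long *)fixup_pointer(&sme_me_mask, physaddr);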

--
Kirill A. Shutemov

2021-09-21 17:06:49

by Tom Lendacky

[permalink] [raw]
Subject: Re: [PATCH v3 5/8] x86/sme: Replace occurrences of sme_active() with cc_platform_has()

On 9/20/21 2:23 PM, Kirill A. Shutemov wrote:
> On Wed, Sep 08, 2021 at 05:58:36PM -0500, Tom Lendacky wrote:
>> diff --git a/arch/x86/mm/mem_encrypt_identity.c b/arch/x86/mm/mem_encrypt_identity.c
>> index 470b20208430..eff4d19f9cb4 100644
>> --- a/arch/x86/mm/mem_encrypt_identity.c
>> +++ b/arch/x86/mm/mem_encrypt_identity.c
>> @@ -30,6 +30,7 @@
>> #include <linux/kernel.h>
>> #include <linux/mm.h>
>> #include <linux/mem_encrypt.h>
>> +#include <linux/cc_platform.h>
>>
>> #include <asm/setup.h>
>> #include <asm/sections.h>
>> @@ -287,7 +288,7 @@ void __init sme_encrypt_kernel(struct boot_params *bp)
>> unsigned long pgtable_area_len;
>> unsigned long decrypted_base;
>>
>> - if (!sme_active())
>> + if (!cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
>> return;
>>
>> /*
>
> This change breaks boot for me (in KVM on an Intel host). It only reproduces
> with allyesconfig. A more reasonable config works fine, but I didn't try to
> find the exact cause in the config.

Looks like instrumentation during early boot. I worked with Boris offline
to exclude arch/x86/kernel/cc_platform.c from some of the instrumentation
and that allowed an allyesconfig to boot.
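
For reference, excluding a file from instrumentation is done in the Makefile;
something along these lines in arch/x86/kernel/Makefile (a sketch; the exact
set of knobs we ended up using for cc_platform.o may differ):

# Keep instrumentation out of code that can run before the kernel
# environment (e.g. %gs / per-CPU data) is set up.
CFLAGS_REMOVE_cc_platform.o	= -pg
KASAN_SANITIZE_cc_platform.o	:= n
KCSAN_SANITIZE_cc_platform.o	:= n
KCOV_INSTRUMENT_cc_platform.o	:= n
UBSAN_SANITIZE_cc_platform.o	:= n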

Thanks,
Tom

>
> Conversion to cc_platform_has() in __startup_64() in 8/8 has the same
> effect.
>
> I believe it is caused by the sme_me_mask access from __startup_64() without
> the fixup_pointer() magic. I think __startup_64() requires special treatment
> and we should avoid cc_platform_has() there (or have a special version of
> the helper). Note that only AMD requires these cc_platform_has() calls to
> return true.
>

2021-09-21 17:49:54

by Borislav Petkov

[permalink] [raw]
Subject: Re: [PATCH v3 5/8] x86/sme: Replace occurrences of sme_active() with cc_platform_has()

On Tue, Sep 21, 2021 at 12:04:58PM -0500, Tom Lendacky wrote:
> Looks like instrumentation during early boot. I worked with Boris offline to
> exclude arch/x86/kernel/cc_platform.c from some of the instrumentation and
> that allowed an allyesconfig to boot.

And here's the lineup I have so far, I'd appreciate it if ppc and s390 folks
could run it too:

https://git.kernel.org/pub/scm/linux/kernel/git/bp/bp.git/log/?h=rc2-cc

Thx.

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2021-09-21 21:40:41

by Borislav Petkov

[permalink] [raw]
Subject: Re: [PATCH v3 5/8] x86/sme: Replace occurrences of sme_active() with cc_platform_has()

On Wed, Sep 22, 2021 at 12:20:59AM +0300, Kirill A. Shutemov wrote:
> I still believe calling cc_platform_has() from __startup_64() is totally
> broken as it lacks proper wrapping while accessing global variables.

Well, one of the issues on the AMD side was using boot_cpu_data too
early and the Intel side uses it too. Can you replace those checks with
is_tdx_guest() or whatever was the helper's name which would check
whether the kernel is running as a TDX guest, and see if that helps?
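
I.e., something like the below in cc_platform.c (a sketch only; is_tdx_guest()
stands in for whatever that early TDX helper ends up being called):

bool cc_platform_has(enum cc_attr attr)
{
	if (sme_me_mask)
		return amd_cc_platform_has(attr);
	else if (is_tdx_guest())	/* placeholder for the early TDX check */
		return intel_cc_platform_has(attr);

	return false;
}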

Thx.

--
Regards/Gruss,
Boris.

2021-09-21 21:43:43

by Kirill A. Shutemov

[permalink] [raw]
Subject: Re: [PATCH v3 5/8] x86/sme: Replace occurrences of sme_active() with cc_platform_has()

On Tue, Sep 21, 2021 at 11:27:17PM +0200, Borislav Petkov wrote:
> On Wed, Sep 22, 2021 at 12:20:59AM +0300, Kirill A. Shutemov wrote:
> > I still believe calling cc_platform_has() from __startup_64() is totally
> > broken as it lacks proper wrapping while accessing global variables.
>
> Well, one of the issues on the AMD side was using boot_cpu_data too
> early and the Intel side uses it too. Can you replace those checks with
> is_tdx_guest() or whatever was the helper's name which would check
> whether the kernel is running as a TDX guest, and see if that helps?

There's no need for the Intel check this early. Only AMD needs it. Maybe just
opencode them?

--
Kirill A. Shutemov

2021-09-22 00:06:40

by Kirill A. Shutemov

[permalink] [raw]
Subject: Re: [PATCH v3 5/8] x86/sme: Replace occurrences of sme_active() with cc_platform_has()

On Tue, Sep 21, 2021 at 07:47:15PM +0200, Borislav Petkov wrote:
> On Tue, Sep 21, 2021 at 12:04:58PM -0500, Tom Lendacky wrote:
> > Looks like instrumentation during early boot. I worked with Boris offline to
> > exclude arch/x86/kernel/cc_platform.c from some of the instrumentation and
> > that allowed an allyesconfig to boot.
>
> And here's the lineup I have so far, I'd appreciate it if ppc and s390 folks
> could run it too:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/bp/bp.git/log/?h=rc2-cc

Still broken for me with allyesconfig.

gcc version 11.2.0 (Gentoo 11.2.0 p1)
GNU ld (Gentoo 2.37_p1 p0) 2.37

I still believe calling cc_platform_has() from __startup_64() is totally
broken as it lacks proper wrapping while accessing global variables.

I think sme_get_me_mask() has the same problem. I just happened to work
(until next compiler update).

This hack makes kernel boot again:

diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index f98c76a1d16c..e9110a44bf1b 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -285,7 +285,7 @@ unsigned long __head __startup_64(unsigned long physaddr,
* there is no need to zero it after changing the memory encryption
* attribute.
*/
- if (cc_platform_has(CC_ATTR_MEM_ENCRYPT)) {
+ if (0 && cc_platform_has(CC_ATTR_MEM_ENCRYPT)) {
vaddr = (unsigned long)__start_bss_decrypted;
vaddr_end = (unsigned long)__end_bss_decrypted;
for (; vaddr < vaddr_end; vaddr += PMD_SIZE) {
diff --git a/arch/x86/mm/mem_encrypt_identity.c b/arch/x86/mm/mem_encrypt_identity.c
index eff4d19f9cb4..91638ed0b1db 100644
--- a/arch/x86/mm/mem_encrypt_identity.c
+++ b/arch/x86/mm/mem_encrypt_identity.c
@@ -288,7 +288,7 @@ void __init sme_encrypt_kernel(struct boot_params *bp)
unsigned long pgtable_area_len;
unsigned long decrypted_base;

- if (!cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
+ if (1 || !cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
return;

/*
--
Kirill A. Shutemov

2021-09-22 00:08:14

by Tom Lendacky

[permalink] [raw]
Subject: Re: [PATCH v3 5/8] x86/sme: Replace occurrences of sme_active() with cc_platform_has()

On 9/21/21 4:34 PM, Kirill A. Shutemov wrote:
> On Tue, Sep 21, 2021 at 11:27:17PM +0200, Borislav Petkov wrote:
>> On Wed, Sep 22, 2021 at 12:20:59AM +0300, Kirill A. Shutemov wrote:
>>> I still believe calling cc_platform_has() from __startup_64() is totally
>>> broken as it lacks proper wrapping while accessing global variables.
>>
>> Well, one of the issues on the AMD side was using boot_cpu_data too
>> early and the Intel side uses it too. Can you replace those checks with
>> is_tdx_guest() or whatever was the helper's name which would check
>> whether the kernel is running as a TDX guest, and see if that helps?
>
> There's no need for the Intel check this early. Only AMD needs it. Maybe just
> opencode them?

Any way you can put a gzipped/bzipped copy of your vmlinux file somewhere
I can grab it from and take a look at it?

Thanks,
Tom

>

2021-09-22 00:09:12

by Kirill A. Shutemov

[permalink] [raw]
Subject: Re: [PATCH v3 5/8] x86/sme: Replace occurrences of sme_active() with cc_platform_has()

On Tue, Sep 21, 2021 at 04:43:59PM -0500, Tom Lendacky wrote:
> On 9/21/21 4:34 PM, Kirill A. Shutemov wrote:
> > On Tue, Sep 21, 2021 at 11:27:17PM +0200, Borislav Petkov wrote:
> > > On Wed, Sep 22, 2021 at 12:20:59AM +0300, Kirill A. Shutemov wrote:
> > > > I still believe calling cc_platform_has() from __startup_64() is totally
> > > > broken as it lacks proper wrapping while accessing global variables.
> > >
> > > Well, one of the issues on the AMD side was using boot_cpu_data too
> > > early and the Intel side uses it too. Can you replace those checks with
> > > is_tdx_guest() or whatever was the helper's name which would check
> > > whether the kernel is running as a TDX guest, and see if that helps?
> >
> > There's no need for the Intel check this early. Only AMD needs it. Maybe just
> > opencode them?
>
> Any way you can put a gzipped/bzipped copy of your vmlinux file somewhere I
> can grab it from and take a look at it?

You can find broken vmlinux and bzImage here:

https://drive.google.com/drive/folders/1n74vUQHOGebnF70Im32qLFY8iS3wvjIs?usp=sharing

Let me know when I can remove it.

--
Kirill A. Shutemov

2021-09-22 13:44:41

by Tom Lendacky

[permalink] [raw]
Subject: Re: [PATCH v3 5/8] x86/sme: Replace occurrences of sme_active() with cc_platform_has()

On 9/21/21 4:58 PM, Kirill A. Shutemov wrote:
> On Tue, Sep 21, 2021 at 04:43:59PM -0500, Tom Lendacky wrote:
>> On 9/21/21 4:34 PM, Kirill A. Shutemov wrote:
>>> On Tue, Sep 21, 2021 at 11:27:17PM +0200, Borislav Petkov wrote:
>>>> On Wed, Sep 22, 2021 at 12:20:59AM +0300, Kirill A. Shutemov wrote:
>>>>> I still believe calling cc_platform_has() from __startup_64() is totally
>>>>> broken as it lacks proper wrapping while accessing global variables.
>>>>
>>>> Well, one of the issues on the AMD side was using boot_cpu_data too
>>>> early and the Intel side uses it too. Can you replace those checks with
>>>> is_tdx_guest() or whatever was the helper's name which would check
>>>> whether the kernel is running as a TDX guest, and see if that helps?
>>>
>>> There's no need for the Intel check this early. Only AMD needs it. Maybe just
>>> opencode them?
>>
>> Any way you can put a gzipped/bzipped copy of your vmlinux file somewhere I
>> can grab it from and take a look at it?
>
> You can find broken vmlinux and bzImage here:
>
> https://drive.google.com/drive/folders/1n74vUQHOGebnF70Im32qLFY8iS3wvjIs?usp=sharing
>
> Let me know when I can remove it.

Looking at everything, it is all RIP-relative addressing, so those
accesses should be fine. Your image has the intel_cc_platform_has()
function; does it work if you remove that call? Because I think it may be
the early call into that function, which looks like it has instrumentation
that uses %gs in __sanitizer_cov_trace_pc, and %gs is not set up properly
yet. And since boot_cpu_data.x86_vendor will likely be zero this early, it
will match X86_VENDOR_INTEL and call into that function.

ffffffff8124f880 <intel_cc_platform_has>:
ffffffff8124f880: e8 bb 64 06 00 callq ffffffff812b5d40 <__fentry__>
ffffffff8124f885: e8 36 ca 42 00 callq ffffffff8167c2c0 <__sanitizer_cov_trace_pc>
ffffffff8124f88a: 31 c0 xor %eax,%eax
ffffffff8124f88c: c3 retq


ffffffff8167c2c0 <__sanitizer_cov_trace_pc>:
ffffffff8167c2c0: 65 8b 05 39 ad 9a 7e mov %gs:0x7e9aad39(%rip),%eax # 27000 <__preempt_count>
ffffffff8167c2c7: 89 c6 mov %eax,%esi
ffffffff8167c2c9: 48 8b 0c 24 mov (%rsp),%rcx
ffffffff8167c2cd: 81 e6 00 01 00 00 and $0x100,%esi
ffffffff8167c2d3: 65 48 8b 14 25 40 70 mov %gs:0x27040,%rdx

Thanks,
Tom

>

2021-09-22 14:33:57

by Kirill A. Shutemov

[permalink] [raw]
Subject: Re: [PATCH v3 5/8] x86/sme: Replace occurrences of sme_active() with cc_platform_has()

On Wed, Sep 22, 2021 at 08:40:43AM -0500, Tom Lendacky wrote:
> On 9/21/21 4:58 PM, Kirill A. Shutemov wrote:
> > On Tue, Sep 21, 2021 at 04:43:59PM -0500, Tom Lendacky wrote:
> > > On 9/21/21 4:34 PM, Kirill A. Shutemov wrote:
> > > > On Tue, Sep 21, 2021 at 11:27:17PM +0200, Borislav Petkov wrote:
> > > > > On Wed, Sep 22, 2021 at 12:20:59AM +0300, Kirill A. Shutemov wrote:
> > > > > > I still believe calling cc_platform_has() from __startup_64() is totally
> > > > > > broken as it lacks proper wrapping while accessing global variables.
> > > > >
> > > > > Well, one of the issues on the AMD side was using boot_cpu_data too
> > > > > early and the Intel side uses it too. Can you replace those checks with
> > > > > is_tdx_guest() or whatever was the helper's name which would check
> > > > > whether the kernel is running as a TDX guest, and see if that helps?
> > > >
> > > > There's no need for the Intel check this early. Only AMD needs it. Maybe just
> > > > opencode them?
> > >
> > > Any way you can put a gzipped/bzipped copy of your vmlinux file somewhere I
> > > can grab it from and take a look at it?
> >
> > You can find broken vmlinux and bzImage here:
> >
> > https://drive.google.com/drive/folders/1n74vUQHOGebnF70Im32qLFY8iS3wvjIs?usp=sharing
> >
> > Let me know when I can remove it.
>
> Looking at everything, it is all RIP-relative addressing, so those
> accesses should be fine.

Not fine, but waiting to blow up with a random build environment change.

> Your image has the intel_cc_platform_has()
> function; does it work if you remove that call? Because I think it may be
> the early call into that function, which looks like it has instrumentation
> that uses %gs in __sanitizer_cov_trace_pc, and %gs is not set up properly
> yet. And since boot_cpu_data.x86_vendor will likely be zero this early, it
> will match X86_VENDOR_INTEL and call into that function.

Right, removing the call to intel_cc_platform_has() or moving it to
cc_platform.c fixes the issue.

--
Kirill A. Shutemov

2021-09-22 19:56:18

by Borislav Petkov

[permalink] [raw]
Subject: Re: [PATCH v3 5/8] x86/sme: Replace occurrences of sme_active() with cc_platform_has()

On Wed, Sep 22, 2021 at 05:30:15PM +0300, Kirill A. Shutemov wrote:
> Not fine, but waiting to blow up with a random build environment change.

Why is it not fine?

Are you suspecting that the compiler might generate something else and
not a rip-relative access?

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2021-09-22 21:09:39

by Kirill A. Shutemov

[permalink] [raw]
Subject: Re: [PATCH v3 5/8] x86/sme: Replace occurrences of sme_active() with cc_platform_has()

On Wed, Sep 22, 2021 at 09:52:07PM +0200, Borislav Petkov wrote:
> On Wed, Sep 22, 2021 at 05:30:15PM +0300, Kirill A. Shutemov wrote:
> > Not fine, but waiting to blow up with a random build environment change.
>
> Why is it not fine?
>
> Are you suspecting that the compiler might generate something else and
> not a rip-relative access?

Yes. We had it before for __supported_pte_mask and other users of
fixup_pointer().

See for instance 4a09f0210c8b ("x86/boot/64/clang: Use fixup_pointer() to
access '__supported_pte_mask'")

Unless we find another way to guarantee RIP-relative access, we must use
fixup_pointer() to access any global variables.

--
Kirill A. Shutemov

2021-09-23 18:23:04

by Borislav Petkov

[permalink] [raw]
Subject: Re: [PATCH v3 5/8] x86/sme: Replace occurrences of sme_active() with cc_platform_has()

On Thu, Sep 23, 2021 at 12:05:58AM +0300, Kirill A. Shutemov wrote:
> Unless we find another way to guarantee RIP-relative access, we must use
> fixup_pointer() to access any global variables.

Yah, I've asked compiler folks about any guarantees we have wrt
rip-relative addresses but it doesn't look good. Worst case, we'd have
to do the fixup_pointer() thing.

In the meantime, Tom and I did some more poking at this and here's a
diff on top.

The direction being that we'll stick both the AMD and Intel
*cc_platform_has() calls into cc_platform.c for which instrumentation
will be disabled so no issues with that.

And that will keep all that querying all together in a single file.

---
diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index a73712b6ee0e..2d4f5c17d79c 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -51,7 +51,6 @@ void __init mem_encrypt_free_decrypted_mem(void);
void __init mem_encrypt_init(void);

void __init sev_es_init_vc_handling(void);
-bool amd_cc_platform_has(enum cc_attr attr);

#define __bss_decrypted __section(".bss..decrypted")

@@ -74,7 +73,6 @@ static inline void __init sme_encrypt_kernel(struct boot_params *bp) { }
static inline void __init sme_enable(struct boot_params *bp) { }

static inline void sev_es_init_vc_handling(void) { }
-static inline bool amd_cc_platform_has(enum cc_attr attr) { return false; }

static inline int __init
early_set_memory_decrypted(unsigned long vaddr, unsigned long size) { return 0; }
@@ -103,12 +101,6 @@ static inline u64 sme_get_me_mask(void)
return sme_me_mask;
}

-#if defined(CONFIG_CPU_SUP_INTEL) && defined(CONFIG_ARCH_HAS_CC_PLATFORM)
-bool intel_cc_platform_has(enum cc_attr attr);
-#else
-static inline bool intel_cc_platform_has(enum cc_attr attr) { return false; }
-#endif
-
#endif /* __ASSEMBLY__ */

#endif /* __X86_MEM_ENCRYPT_H__ */
diff --git a/arch/x86/kernel/cc_platform.c b/arch/x86/kernel/cc_platform.c
index da54a1805211..97ede7052f77 100644
--- a/arch/x86/kernel/cc_platform.c
+++ b/arch/x86/kernel/cc_platform.c
@@ -13,6 +13,52 @@

#include <asm/processor.h>

+static bool intel_cc_platform_has(enum cc_attr attr)
+{
+#ifdef CONFIG_INTEL_TDX_GUEST
+ return false;
+#else
+ return false;
+#endif
+}
+
+/*
+ * SME and SEV are very similar but they are not the same, so there are
+ * times that the kernel will need to distinguish between SME and SEV. The
+ * cc_platform_has() function is used for this. When a distinction isn't
+ * needed, the CC_ATTR_MEM_ENCRYPT attribute can be used.
+ *
+ * The trampoline code is a good example for this requirement. Before
+ * paging is activated, SME will access all memory as decrypted, but SEV
+ * will access all memory as encrypted. So, when APs are being brought
+ * up under SME the trampoline area cannot be encrypted, whereas under SEV
+ * the trampoline area must be encrypted.
+ */
+static bool amd_cc_platform_has(enum cc_attr attr)
+{
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+ switch (attr) {
+ case CC_ATTR_MEM_ENCRYPT:
+ return sme_me_mask;
+
+ case CC_ATTR_HOST_MEM_ENCRYPT:
+ return sme_me_mask && !(sev_status & MSR_AMD64_SEV_ENABLED);
+
+ case CC_ATTR_GUEST_MEM_ENCRYPT:
+ return sev_status & MSR_AMD64_SEV_ENABLED;
+
+ case CC_ATTR_GUEST_STATE_ENCRYPT:
+ return sev_status & MSR_AMD64_SEV_ES_ENABLED;
+
+ default:
+ return false;
+ }
+#else
+ return false;
+#endif
+}
+
+
bool cc_platform_has(enum cc_attr attr)
{
if (sme_me_mask)
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index 53756ff12295..8321c43554a1 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -60,13 +60,6 @@ static u64 msr_test_ctrl_cache __ro_after_init;
*/
static bool cpu_model_supports_sld __ro_after_init;

-#ifdef CONFIG_ARCH_HAS_CC_PLATFORM
-bool intel_cc_platform_has(enum cc_attr attr)
-{
- return false;
-}
-#endif
-
/*
* Processors which have self-snooping capability can handle conflicting
* memory type across CPUs by snooping its own cache. However, there exists
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 9417d404ea92..23d54b810f08 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -361,38 +361,6 @@ int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size)
return early_set_memory_enc_dec(vaddr, size, true);
}

-/*
- * SME and SEV are very similar but they are not the same, so there are
- * times that the kernel will need to distinguish between SME and SEV. The
- * cc_platform_has() function is used for this. When a distinction isn't
- * needed, the CC_ATTR_MEM_ENCRYPT attribute can be used.
- *
- * The trampoline code is a good example for this requirement. Before
- * paging is activated, SME will access all memory as decrypted, but SEV
- * will access all memory as encrypted. So, when APs are being brought
- * up under SME the trampoline area cannot be encrypted, whereas under SEV
- * the trampoline area must be encrypted.
- */
-bool amd_cc_platform_has(enum cc_attr attr)
-{
- switch (attr) {
- case CC_ATTR_MEM_ENCRYPT:
- return sme_me_mask;
-
- case CC_ATTR_HOST_MEM_ENCRYPT:
- return sme_me_mask && !(sev_status & MSR_AMD64_SEV_ENABLED);
-
- case CC_ATTR_GUEST_MEM_ENCRYPT:
- return sev_status & MSR_AMD64_SEV_ENABLED;
-
- case CC_ATTR_GUEST_STATE_ENCRYPT:
- return sev_status & MSR_AMD64_SEV_ES_ENABLED;
-
- default:
- return false;
- }
-}
-
/* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
bool force_dma_unencrypted(struct device *dev)
{

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2021-09-24 09:56:08

by Borislav Petkov

[permalink] [raw]
Subject: Re: [PATCH v3 5/8] x86/sme: Replace occurrences of sme_active() with cc_platform_has()

On Fri, Sep 24, 2021 at 12:41:32PM +0300, Kirill A. Shutemov wrote:
> On Thu, Sep 23, 2021 at 08:21:03PM +0200, Borislav Petkov wrote:
> > On Thu, Sep 23, 2021 at 12:05:58AM +0300, Kirill A. Shutemov wrote:
> > > Unless we find another way to guarantee RIP-relative access, we must use
> > > fixup_pointer() to access any global variables.
> >
> > Yah, I've asked compiler folks about any guarantees we have wrt
> > rip-relative addresses but it doesn't look good. Worst case, we'd have
> > to do the fixup_pointer() thing.
> >
> > In the meantime, Tom and I did some more poking at this and here's a
> > diff on top.
> >
> > The direction being that we'll stick both the AMD and Intel
> > *cc_platform_has() calls into cc_platform.c for which instrumentation
> > will be disabled so no issues with that.
> >
> > And that will keep all that querying all together in a single file.
>
> And still do cc_platform_has() calls in the __startup_64() code path?
>
> It's broken.
>
> Intel detection in cc_platform_has() relies on boot_cpu_data.x86_vendor,
> which is not initialized until early_cpu_init() in setup_arch(). Given
> that X86_VENDOR_INTEL is 0, it leads to a false positive.

Yeah, Tom, I had the same question yesterday.

/me looks in his direction.

:-)

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2021-09-24 11:44:43

by Kirill A. Shutemov

[permalink] [raw]
Subject: Re: [PATCH v3 5/8] x86/sme: Replace occurrences of sme_active() with cc_platform_has()

On Thu, Sep 23, 2021 at 08:21:03PM +0200, Borislav Petkov wrote:
> On Thu, Sep 23, 2021 at 12:05:58AM +0300, Kirill A. Shutemov wrote:
> > Unless we find another way to guarantee RIP-relative access, we must use
> > fixup_pointer() to access any global variables.
>
> Yah, I've asked compiler folks about any guarantees we have wrt
> rip-relative addresses but it doesn't look good. Worst case, we'd have
> to do the fixup_pointer() thing.
>
> In the meantime, Tom and I did some more poking at this and here's a
> diff on top.
>
> The direction being that we'll stick both the AMD and Intel
> *cc_platform_has() calls into cc_platform.c for which instrumentation
> will be disabled so no issues with that.
>
> And that will keep all that querying all together in a single file.

And still do cc_platform_has() calls in the __startup_64() code path?

It's broken.

Intel detection in cc_platform_has() relies on boot_cpu_data.x86_vendor,
which is not initialized until early_cpu_init() in setup_arch(). Given
that X86_VENDOR_INTEL is 0, it leads to a false positive.

I think open-coding these two calls is the way forward. Maybe also move the
check from sme_encrypt_kernel() to __startup_64().
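
As a sketch of what the open-coding could look like (not a patch, the helper
name is made up, and the globals still need the fixup_pointer() care discussed
above), the host-SME check boils down to the AMD state only:

/* Open-coded equivalent of cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT) */
static bool sme_host_active(void)
{
	return sme_me_mask && !(sev_status & MSR_AMD64_SEV_ENABLED);
}

sme_encrypt_kernel() could then simply start with:

	if (!sme_host_active())
		return;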

--
Kirill A. Shutemov

2021-09-24 20:50:10

by Tom Lendacky

[permalink] [raw]
Subject: Re: [PATCH v3 5/8] x86/sme: Replace occurrences of sme_active() with cc_platform_has()

On 9/24/21 4:51 AM, Borislav Petkov wrote:
> On Fri, Sep 24, 2021 at 12:41:32PM +0300, Kirill A. Shutemov wrote:
>> On Thu, Sep 23, 2021 at 08:21:03PM +0200, Borislav Petkov wrote:
>>> On Thu, Sep 23, 2021 at 12:05:58AM +0300, Kirill A. Shutemov wrote:
>>>> Unless we find another way to guarantee RIP-relative access, we must use
>>>> fixup_pointer() to access any global variables.
>>>
>>> Yah, I've asked compiler folks about any guarantees we have wrt
>>> rip-relative addresses but it doesn't look good. Worst case, we'd have
>>> to do the fixup_pointer() thing.
>>>
>>> In the meantime, Tom and I did some more poking at this and here's a
>>> diff on top.
>>>
>>> The direction being that we'll stick both the AMD and Intel
>>> *cc_platform_has() calls into cc_platform.c for which instrumentation
>>> will be disabled so no issues with that.
>>>
>>> And that will keep all that querying all together in a single file.
>>
>> And still do cc_platform_has() calls in the __startup_64() code path?
>>
>> It's broken.
>>
>> Intel detection in cc_platform_has() relies on boot_cpu_data.x86_vendor,
>> which is not initialized until early_cpu_init() in setup_arch(). Given
>> that X86_VENDOR_INTEL is 0, it leads to a false positive.
>
> Yeah, Tom, I had the same question yesterday.
>
> /me looks in his direction.
>

Yup, all the things we talked about.

But we also know that cc_platform_has() gets called a few other times
before boot_cpu_data is initialized - so more false positives. For
cc_platform_has() to work properly, given the desire to consolidate the
calls, IMO, Intel will have to come up with some early setting that can be
enabled and checked in place of boot_cpu_data, or else you live with
false positives.
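
One possible shape of such an early setting, purely as an illustration (the
names below are invented; nothing like this exists in the series yet):

/* Set from TDX-specific detection that runs before boot_cpu_data is
 * populated, e.g. off an early CPUID probe.
 */
static bool tdx_guest_detected __ro_after_init;

void __init tdx_early_detect(void)
{
	tdx_guest_detected = true;
}

cc_platform_has() could then test that flag instead of boot_cpu_data.x86_vendor.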

Thanks,
Tom

> :-)
>