From: Sasha Levin <sashal@kernel.org>
To: stable@vger.kernel.org, ardb@kernel.org
Cc: Kevin Loughlin <kevinloughlin@google.com>,
	Borislav Petkov <bp@alien8.de>,
	stable@kernel.org, linux-kernel@vger.kernel.org
Subject: FAILED: Patch "x86/sev: Fix position dependent variable
references in startup code" failed to apply to 6.8-stable tree Date: Wed, 27 Mar 2024 08:08:10 -0400 Message-ID: <20240327120810.2825990-1-sashal@kernel.org> X-Mailer: git-send-email 2.43.0 Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Patchwork-Hint: ignore X-stable: review Content-Transfer-Encoding: 8bit The patch below does not apply to the 6.8-stable tree. If someone wants it applied there, or to any other stable or longterm tree, then please email the backport, including the original git commit id to . Thanks, Sasha ------------------ original commit in Linus's tree ------------------ From 1c811d403afd73f04bde82b83b24c754011bd0e8 Mon Sep 17 00:00:00 2001 From: Ard Biesheuvel Date: Sat, 3 Feb 2024 13:53:06 +0100 Subject: [PATCH] x86/sev: Fix position dependent variable references in startup code The early startup code executes from a 1:1 mapping of memory, which differs from the mapping that the code was linked and/or relocated to run at. The latter mapping is not active yet at this point, and so symbol references that rely on it will fault. Given that the core kernel is built without -fPIC, symbol references are typically emitted as absolute, and so any such references occuring in the early startup code will therefore crash the kernel. While an attempt was made to work around this for the early SEV/SME startup code, by forcing RIP-relative addressing for certain global SEV/SME variables via inline assembly (see snp_cpuid_get_table() for example), RIP-relative addressing must be pervasively enforced for SEV/SME global variables when accessed prior to page table fixups. __startup_64() already handles this issue for select non-SEV/SME global variables using fixup_pointer(), which adjusts the pointer relative to a `physaddr` argument. To avoid having to pass around this `physaddr` argument across all functions needing to apply pointer fixups, introduce a macro RIP_RELATIVE_REF() which generates a RIP-relative reference to a given global variable. It is used where necessary to force RIP-relative accesses to global variables. For backporting purposes, this patch makes no attempt at cleaning up other occurrences of this pattern, involving either inline asm or fixup_pointer(). Those will be addressed later. [ bp: Call it "rip_rel_ref" everywhere like other code shortens "rIP-relative reference" and make the asm wrapper __always_inline. 
Co-developed-by: Kevin Loughlin <kevinloughlin@google.com>
Signed-off-by: Kevin Loughlin <kevinloughlin@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Cc: <stable@kernel.org>
Link: https://lore.kernel.org/all/20240130220845.1978329-1-kevinloughlin@google.com
---
 arch/x86/coco/core.c               |  7 +------
 arch/x86/include/asm/asm.h         | 14 ++++++++++++++
 arch/x86/include/asm/coco.h        |  8 +++++++-
 arch/x86/include/asm/mem_encrypt.h | 15 +++++++++------
 arch/x86/kernel/sev-shared.c       | 12 ++++++------
 arch/x86/kernel/sev.c              |  4 ++--
 arch/x86/mm/mem_encrypt_identity.c | 27 ++++++++++++---------------
 7 files changed, 51 insertions(+), 36 deletions(-)

diff --git a/arch/x86/coco/core.c b/arch/x86/coco/core.c
index eeec9986570ed..d07be9d05cd03 100644
--- a/arch/x86/coco/core.c
+++ b/arch/x86/coco/core.c
@@ -14,7 +14,7 @@
 #include <asm/processor.h>
 
 enum cc_vendor cc_vendor __ro_after_init = CC_VENDOR_NONE;
-static u64 cc_mask __ro_after_init;
+u64 cc_mask __ro_after_init;
 
 static bool noinstr intel_cc_platform_has(enum cc_attr attr)
 {
@@ -148,8 +148,3 @@ u64 cc_mkdec(u64 val)
 	}
 }
 EXPORT_SYMBOL_GPL(cc_mkdec);
-
-__init void cc_set_mask(u64 mask)
-{
-	cc_mask = mask;
-}
diff --git a/arch/x86/include/asm/asm.h b/arch/x86/include/asm/asm.h
index fbcfec4dc4ccd..ca8eed1d496ab 100644
--- a/arch/x86/include/asm/asm.h
+++ b/arch/x86/include/asm/asm.h
@@ -113,6 +113,20 @@
 
 #endif
 
+#ifndef __ASSEMBLY__
+#ifndef __pic__
+static __always_inline __pure void *rip_rel_ptr(void *p)
+{
+	asm("leaq %c1(%%rip), %0" : "=r"(p) : "i"(p));
+
+	return p;
+}
+#define RIP_REL_REF(var)	(*(typeof(&(var)))rip_rel_ptr(&(var)))
+#else
+#define RIP_REL_REF(var)	(var)
+#endif
+#endif
+
 /*
  * Macros to generate condition code outputs from inline assembly,
  * The output operand must be type "bool".
diff --git a/arch/x86/include/asm/coco.h b/arch/x86/include/asm/coco.h
index 6ae2d16a7613b..21940ef8d2904 100644
--- a/arch/x86/include/asm/coco.h
+++ b/arch/x86/include/asm/coco.h
@@ -2,6 +2,7 @@
 #ifndef _ASM_X86_COCO_H
 #define _ASM_X86_COCO_H
 
+#include <asm/asm.h>
 #include <asm/types.h>
 
 enum cc_vendor {
@@ -11,9 +12,14 @@ enum cc_vendor {
 };
 
 extern enum cc_vendor cc_vendor;
+extern u64 cc_mask;
 
 #ifdef CONFIG_ARCH_HAS_CC_PLATFORM
-void cc_set_mask(u64 mask);
+static inline void cc_set_mask(u64 mask)
+{
+	RIP_REL_REF(cc_mask) = mask;
+}
+
 u64 cc_mkenc(u64 val);
 u64 cc_mkdec(u64 val);
 #else
diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 359ada486fa92..b31eb9fd59544 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -15,7 +15,8 @@
 #include <linux/init.h>
 #include <linux/cc_platform.h>
 
-#include <asm/bootparam.h>
+#include <asm/asm.h>
+struct boot_params;
 
 #ifdef CONFIG_X86_MEM_ENCRYPT
 void __init mem_encrypt_init(void);
@@ -58,6 +59,11 @@ void __init mem_encrypt_free_decrypted_mem(void);
 
 void __init sev_es_init_vc_handling(void);
 
+static inline u64 sme_get_me_mask(void)
+{
+	return RIP_REL_REF(sme_me_mask);
+}
+
 #define __bss_decrypted __section(".bss..decrypted")
 
 #else	/* !CONFIG_AMD_MEM_ENCRYPT */
@@ -89,6 +95,8 @@ early_set_mem_enc_dec_hypercall(unsigned long vaddr, unsigned long size, bool en
 
 static inline void mem_encrypt_free_decrypted_mem(void) { }
 
+static inline u64 sme_get_me_mask(void) { return 0; }
+
 #define __bss_decrypted
 
 #endif	/* CONFIG_AMD_MEM_ENCRYPT */
@@ -106,11 +114,6 @@ void add_encrypt_protection_map(void);
 extern char __start_bss_decrypted[], __end_bss_decrypted[],
 	    __start_bss_decrypted_unused[];
 
-static inline u64 sme_get_me_mask(void)
-{
-	return sme_me_mask;
-}
-
 #endif	/* __ASSEMBLY__ */
 
 #endif	/* __X86_MEM_ENCRYPT_H__ */
diff --git a/arch/x86/kernel/sev-shared.c b/arch/x86/kernel/sev-shared.c
index 5db24d0fc557c..ae79f9505298d 100644
--- a/arch/x86/kernel/sev-shared.c
+++ b/arch/x86/kernel/sev-shared.c
@@ -560,9 +560,9 @@ static int snp_cpuid(struct ghcb *ghcb, struct es_em_ctxt *ctxt, struct cpuid_le
 		leaf->eax = leaf->ebx = leaf->ecx = leaf->edx = 0;
 
 		/* Skip post-processing for out-of-range zero leafs. */
-		if (!(leaf->fn <= cpuid_std_range_max ||
-		      (leaf->fn >= 0x40000000 && leaf->fn <= cpuid_hyp_range_max) ||
-		      (leaf->fn >= 0x80000000 && leaf->fn <= cpuid_ext_range_max)))
+		if (!(leaf->fn <= RIP_REL_REF(cpuid_std_range_max) ||
+		      (leaf->fn >= 0x40000000 && leaf->fn <= RIP_REL_REF(cpuid_hyp_range_max)) ||
+		      (leaf->fn >= 0x80000000 && leaf->fn <= RIP_REL_REF(cpuid_ext_range_max))))
 			return 0;
 	}
 
@@ -1072,11 +1072,11 @@ static void __init setup_cpuid_table(const struct cc_blob_sev_info *cc_info)
 		const struct snp_cpuid_fn *fn = &cpuid_table->fn[i];
 
 		if (fn->eax_in == 0x0)
-			cpuid_std_range_max = fn->eax;
+			RIP_REL_REF(cpuid_std_range_max) = fn->eax;
 		else if (fn->eax_in == 0x40000000)
-			cpuid_hyp_range_max = fn->eax;
+			RIP_REL_REF(cpuid_hyp_range_max) = fn->eax;
 		else if (fn->eax_in == 0x80000000)
-			cpuid_ext_range_max = fn->eax;
+			RIP_REL_REF(cpuid_ext_range_max) = fn->eax;
 	}
 }
 
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index 002af6c30601b..1ef7ae806a01b 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -748,7 +748,7 @@ void __init early_snp_set_memory_private(unsigned long vaddr, unsigned long padd
 	 * This eliminates worries about jump tables or checking boot_cpu_data
 	 * in the cc_platform_has() function.
 	 */
-	if (!(sev_status & MSR_AMD64_SEV_SNP_ENABLED))
+	if (!(RIP_REL_REF(sev_status) & MSR_AMD64_SEV_SNP_ENABLED))
 		return;
 
 	/*
@@ -767,7 +767,7 @@ void __init early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr
 	 * This eliminates worries about jump tables or checking boot_cpu_data
 	 * in the cc_platform_has() function.
 	 */
-	if (!(sev_status & MSR_AMD64_SEV_SNP_ENABLED))
+	if (!(RIP_REL_REF(sev_status) & MSR_AMD64_SEV_SNP_ENABLED))
 		return;
 
 	/* Ask hypervisor to mark the memory pages shared in the RMP table. */
diff --git a/arch/x86/mm/mem_encrypt_identity.c b/arch/x86/mm/mem_encrypt_identity.c
index efe9f217fcf99..0166ab1780ccb 100644
--- a/arch/x86/mm/mem_encrypt_identity.c
+++ b/arch/x86/mm/mem_encrypt_identity.c
@@ -304,7 +304,8 @@ void __init sme_encrypt_kernel(struct boot_params *bp)
 	 * instrumentation or checking boot_cpu_data in the cc_platform_has()
 	 * function.
 	 */
-	if (!sme_get_me_mask() || sev_status & MSR_AMD64_SEV_ENABLED)
+	if (!sme_get_me_mask() ||
+	    RIP_REL_REF(sev_status) & MSR_AMD64_SEV_ENABLED)
 		return;
 
 	/*
@@ -541,11 +542,11 @@ void __init sme_enable(struct boot_params *bp)
 	me_mask = 1UL << (ebx & 0x3f);
 
 	/* Check the SEV MSR whether SEV or SME is enabled */
-	sev_status   = __rdmsr(MSR_AMD64_SEV);
-	feature_mask = (sev_status & MSR_AMD64_SEV_ENABLED) ? AMD_SEV_BIT : AMD_SME_BIT;
+	RIP_REL_REF(sev_status) = msr = __rdmsr(MSR_AMD64_SEV);
+	feature_mask = (msr & MSR_AMD64_SEV_ENABLED) ? AMD_SEV_BIT : AMD_SME_BIT;
 
 	/* The SEV-SNP CC blob should never be present unless SEV-SNP is enabled. */
-	if (snp && !(sev_status & MSR_AMD64_SEV_SNP_ENABLED))
+	if (snp && !(msr & MSR_AMD64_SEV_SNP_ENABLED))
 		snp_abort();
 
 	/* Check if memory encryption is enabled */
@@ -571,7 +572,6 @@ void __init sme_enable(struct boot_params *bp)
 		return;
 	} else {
 		/* SEV state cannot be controlled by a command line option */
-		sme_me_mask = me_mask;
 		goto out;
 	}
 
@@ -590,16 +590,13 @@ void __init sme_enable(struct boot_params *bp)
 	cmdline_ptr = (const char *)((u64)bp->hdr.cmd_line_ptr |
 				     ((u64)bp->ext_cmd_line_ptr << 32));
 
-	if (cmdline_find_option(cmdline_ptr, cmdline_arg, buffer, sizeof(buffer)) < 0)
-		goto out;
-
-	if (!strncmp(buffer, cmdline_on, sizeof(buffer)))
-		sme_me_mask = me_mask;
+	if (cmdline_find_option(cmdline_ptr, cmdline_arg, buffer, sizeof(buffer)) < 0 ||
+	    strncmp(buffer, cmdline_on, sizeof(buffer)))
+		return;
 
 out:
-	if (sme_me_mask) {
-		physical_mask &= ~sme_me_mask;
-		cc_vendor = CC_VENDOR_AMD;
-		cc_set_mask(sme_me_mask);
-	}
+	RIP_REL_REF(sme_me_mask) = me_mask;
+	physical_mask &= ~me_mask;
+	cc_vendor = CC_VENDOR_AMD;
+	cc_set_mask(me_mask);
 }
-- 
2.43.0
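  [ editor's note: a quick user-space check of the earlier sketch;
    again hypothetical and not part of the commit. Build both snippets
    in one file, e.g. "gcc -O2 -fno-pic -fno-pie -no-pie demo.c", and
    compare the two accesses in "objdump -d". The kernel proper is
    built with -mcmodel=kernel, where data references may be emitted
    as sign-extended 32-bit absolute addresses, which is exactly the
    form that faults before the page table fixups. ]

  #include <stdio.h>
  #include <stdint.h>

  /* Requires sev_status_demo and RIP_REL_REF_DEMO() from the sketch
   * above in the same translation unit. */

  int main(void)
  {
  	sev_status_demo = 0x2;	/* pretend the SEV MSR reported "enabled" */

  	/* Both reads hit the same storage; the macro only changes how
  	 * the address is formed, which is what matters when code runs
  	 * at a different address than it was linked for. */
  	printf("direct access:       %#llx\n",
  	       (unsigned long long)sev_status_demo);
  	printf("RIP-relative access: %#llx\n",
  	       (unsigned long long)RIP_REL_REF_DEMO(sev_status_demo));
  	return 0;
  }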