Subject: [PATCH v5 00/12] Add TDX Guest Support (Initial support)

Hi All,

Intel's Trust Domain Extensions (TDX) protect guest VMs from malicious
hosts and some physical attacks. This series adds the basic TDX guest
infrastructure support (including #VE handler support, and #VE support
for halt and CPUID). This is just a subset of the bare-minimum TDX support
patch list required for a minimally functional TDX guest. Other basic
features like #VE support for I/O and MMIO, boot optimization fixes, and
shared-memory support will be submitted in a separate patch set. To make
reviewing easier we have split the work into smaller series; this series
alone is not necessarily fully functional.

The host-side support patches, and support for advanced TD guest
features like attestation or debug mode, will be submitted at a later
time. Note that at this point the guest is not secure: there are known
holes in drivers, and the code has not yet been fully audited or fuzzed.

TDX has a lot of similarities to SEV. It enhances the confidentiality
of guest memory and state (such as registers) and includes a new exception
(#VE) for the same basic reasons as SEV-ES. Like SEV-SNP (not merged
yet), TDX limits the host's ability to effect changes in the guest
physical address space. With TDX the host cannot access guest memory,
so various functionality that would normally be done in KVM has moved
into the (paravirtualized) guest. This is done partly using the
Virtualization Exception (#VE) and partly with direct paravirtual hooks.

The TDX architecture also includes a new CPU mode called
Secure-Arbitration Mode (SEAM). The software (TDX module) running in this
mode arbitrates interactions between host and guest and implements many of
the guarantees of the TDX architecture.

Some of the key differences between a TD and a regular VM are:

1. Multi-CPU bring-up is done using the ACPI MADT wake-up table.
2. A new #VE exception handler is added. The TDX module injects a #VE exception
   into the guest TD for instructions that need to be emulated, disallowed
   MSR accesses, etc.
3. By default memory is marked as private, and the TD will selectively share it
   with the VMM based on need.

Note that the kernel will also need to be hardened against low-level inputs from
the now-untrusted host. This will be done in follow-on patches.

You can find TDX-related documents at the following link:

https://software.intel.com/content/www/br/pt/develop/articles/intel-trust-domain-extensions.html

This patch set depends on the protected guest changes submitted by Tom Lendacky:

https://lore.kernel.org/patchwork/cover/1468760/

Changes since v4:
* Added a patch that adds TDX guest exception for CSTAR MSR.
* Rebased on top of Tom Lendacky's protected guest changes.
* Rest of the change log is added per patch.

Changes since v3:
* Moved generic protected guest changes from the patch titled "x86:
Introduce generic protected guest abstraction" into a separate
patch outside this patchset. TDX-specific changes are moved to the
patch titled "x86/tdx: Add protected guest support for TDX
guest".
* Rebased on top of v5.14-rc1.
* Rest of the change log is added per patch.

Changes since v1 (v2 is partial set submission):
* Patch titled "x86/x86: Add early_is_tdx_guest() interface" is moved
out of this series.
* Rest of the change log is added per patch.

Andi Kleen (1):
x86/tdx: Don't write CSTAR MSR on Intel

Kirill A. Shutemov (7):
x86/paravirt: Move halt paravirt calls under CONFIG_PARAVIRT
x86/tdx: Get TD execution environment information via TDINFO
x86/traps: Add #VE support for TDX guest
x86/tdx: Add HLT support for TDX guest
x86/tdx: Wire up KVM hypercalls
x86/tdx: Add MSR support for TDX guest
x86/tdx: Handle CPUID via #VE

Kuppuswamy Sathyanarayanan (4):
x86/tdx: Introduce INTEL_TDX_GUEST config option
x86/cpufeatures: Add TDX Guest CPU feature
x86/tdx: Add protected guest support for TDX guest
x86/tdx: Add __tdx_module_call() and __tdx_hypercall() helper
functions

arch/x86/Kconfig | 20 ++
arch/x86/include/asm/asm-prototypes.h | 4 +
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/include/asm/idtentry.h | 4 +
arch/x86/include/asm/irqflags.h | 40 ++--
arch/x86/include/asm/kvm_para.h | 22 ++
arch/x86/include/asm/paravirt.h | 20 +-
arch/x86/include/asm/paravirt_types.h | 3 +-
arch/x86/include/asm/protected_guest.h | 5 +
arch/x86/include/asm/tdx.h | 109 +++++++++
arch/x86/kernel/Makefile | 1 +
arch/x86/kernel/asm-offsets.c | 23 ++
arch/x86/kernel/cpu/common.c | 11 +-
arch/x86/kernel/head64.c | 3 +
arch/x86/kernel/idt.c | 6 +
arch/x86/kernel/paravirt.c | 4 +-
arch/x86/kernel/tdcall.S | 313 +++++++++++++++++++++++++
arch/x86/kernel/tdx.c | 242 +++++++++++++++++++
arch/x86/kernel/traps.c | 69 ++++++
include/linux/protected_guest.h | 3 +
20 files changed, 870 insertions(+), 33 deletions(-)
create mode 100644 arch/x86/include/asm/tdx.h
create mode 100644 arch/x86/kernel/tdcall.S
create mode 100644 arch/x86/kernel/tdx.c

--
2.25.1


Subject: [PATCH v5 06/12] x86/tdx: Get TD execution environment information via TDINFO

From: "Kirill A. Shutemov" <[email protected]>

Per Guest-Host-Communication Interface (GHCI) for Intel Trust
Domain Extensions (Intel TDX) specification, sec 2.4.2,
TDCALL[TDINFO] provides basic TD execution environment information, not
provided by CPUID.

Call TDINFO during early boot so the information can be used in
subsequent system initialization.

The call provides information on which bit in the PFN is used to indicate
that a page is shared with the host, as well as attributes of the TD,
such as debug.

Information about the number of CPUs is not saved because there are no
users for it so far.

Signed-off-by: Kirill A. Shutemov <[email protected]>
Reviewed-by: Andi Kleen <[email protected]>
Reviewed-by: Tony Luck <[email protected]>
Signed-off-by: Kuppuswamy Sathyanarayanan <[email protected]>
---

Changes since v4:
* None

Changes since v3:
* None

arch/x86/kernel/tdx.c | 23 +++++++++++++++++++++++
1 file changed, 23 insertions(+)

diff --git a/arch/x86/kernel/tdx.c b/arch/x86/kernel/tdx.c
index 287564990f21..3973e81751ba 100644
--- a/arch/x86/kernel/tdx.c
+++ b/arch/x86/kernel/tdx.c
@@ -8,6 +8,14 @@

#include <asm/tdx.h>

+/* TDX Module call Leaf IDs */
+#define TDINFO 1
+
+static struct {
+ unsigned int gpa_width;
+ unsigned long attributes;
+} td_info __ro_after_init;
+
/*
* Wrapper for standard use of __tdx_hypercall with BUG_ON() check
* for TDCALL error.
@@ -54,6 +62,19 @@ bool tdx_prot_guest_has(unsigned long flag)
}
EXPORT_SYMBOL_GPL(tdx_prot_guest_has);

+static void tdg_get_info(void)
+{
+ u64 ret;
+ struct tdx_module_output out = {0};
+
+ ret = __tdx_module_call(TDINFO, 0, 0, 0, 0, &out);
+
+ BUG_ON(ret);
+
+ td_info.gpa_width = out.rcx & GENMASK(5, 0);
+ td_info.attributes = out.rdx;
+}
+
void __init tdx_early_init(void)
{
if (!cpuid_has_tdx_guest())
@@ -61,5 +82,7 @@ void __init tdx_early_init(void)

setup_force_cpu_cap(X86_FEATURE_TDX_GUEST);

+ tdg_get_info();
+
pr_info("Guest initialized\n");
}
--
2.25.1

Subject: [PATCH v5 09/12] x86/tdx: Wire up KVM hypercalls

From: "Kirill A. Shutemov" <[email protected]>

KVM hypercalls use the "vmcall" or "vmmcall" instructions.
Although the ABI is similar, those instructions no longer
function for TDX guests. Make vendor-specific TDVMCALLs
instead of VMCALL. This enables TDX guests to run with KVM
acting as the hypervisor. TDX guests running under other
hypervisors will continue to use those hypervisors'
hypercalls.

Since the KVM driver can be built as a kernel module, export
tdx_kvm_hypercall*() to make the symbols visible to kvm.ko.
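
For reference (a decoding note, not text from the GHCI spec): the
TDX_HYPERCALL_VENDOR_KVM value used below is the ASCII string "TDX.KVM"
packed into a 64-bit little-endian integer:

	/* 0x4d564b2e584454 = 'T' 'D' 'X' '.' 'K' 'V' 'M', low byte first */
	#define TDX_HYPERCALL_VENDOR_KVM	0x4d564b2e584454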

[Isaku Yamahata: proposed KVM VENDOR string]
Signed-off-by: Kirill A. Shutemov <[email protected]>
Signed-off-by: Kuppuswamy Sathyanarayanan <[email protected]>
---

Changes since v4:
* No functional changes.

Changes since v3:
* Fixed ASM symbol generation issue in tdcall.S by including tdx.h
in asm-prototypes.h

Changes since v1:
* Replaced is_tdx_guest() with prot_guest_has(PR_GUEST_TDX).
* Replaced tdx_kvm_hypercall{1-4} with single generic
function tdx_kvm_hypercall().
* Removed __tdx_hypercall_vendor_kvm() and re-used __tdx_hypercall().

arch/x86/Kconfig | 5 +++++
arch/x86/include/asm/asm-prototypes.h | 4 ++++
arch/x86/include/asm/kvm_para.h | 22 ++++++++++++++++++++
arch/x86/include/asm/tdx.h | 30 +++++++++++++++++++++++++--
arch/x86/kernel/tdcall.S | 2 ++
5 files changed, 61 insertions(+), 2 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 10f2cb51a39d..b500f2afacce 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -880,6 +880,11 @@ config INTEL_TDX_GUEST
run in a CPU mode that protects the confidentiality of TD memory
contents and the TD’s CPU state from other software, including VMM.

+# This option enables KVM specific hypercalls in TDX guest.
+config INTEL_TDX_GUEST_KVM
+ def_bool y
+ depends on KVM_GUEST && INTEL_TDX_GUEST
+
endif #HYPERVISOR_GUEST

source "arch/x86/Kconfig.cpu"
diff --git a/arch/x86/include/asm/asm-prototypes.h b/arch/x86/include/asm/asm-prototypes.h
index 4cb726c71ed8..9855a9ff2924 100644
--- a/arch/x86/include/asm/asm-prototypes.h
+++ b/arch/x86/include/asm/asm-prototypes.h
@@ -17,6 +17,10 @@
extern void cmpxchg8b_emu(void);
#endif

+#ifdef CONFIG_INTEL_TDX_GUEST
+#include <asm/tdx.h>
+#endif
+
#ifdef CONFIG_RETPOLINE

#undef GEN
diff --git a/arch/x86/include/asm/kvm_para.h b/arch/x86/include/asm/kvm_para.h
index 69299878b200..bd0ab7c3ae25 100644
--- a/arch/x86/include/asm/kvm_para.h
+++ b/arch/x86/include/asm/kvm_para.h
@@ -4,7 +4,9 @@

#include <asm/processor.h>
#include <asm/alternative.h>
+#include <asm/tdx.h>
#include <linux/interrupt.h>
+#include <linux/protected_guest.h>
#include <uapi/asm/kvm_para.h>

#ifdef CONFIG_KVM_GUEST
@@ -32,6 +34,10 @@ static inline bool kvm_check_and_clear_guest_paused(void)
static inline long kvm_hypercall0(unsigned int nr)
{
long ret;
+
+ if (prot_guest_has(PATTR_GUEST_TDX))
+ return tdx_kvm_hypercall(nr, 0, 0, 0, 0);
+
asm volatile(KVM_HYPERCALL
: "=a"(ret)
: "a"(nr)
@@ -42,6 +48,10 @@ static inline long kvm_hypercall0(unsigned int nr)
static inline long kvm_hypercall1(unsigned int nr, unsigned long p1)
{
long ret;
+
+ if (prot_guest_has(PATTR_GUEST_TDX))
+ return tdx_kvm_hypercall(nr, p1, 0, 0, 0);
+
asm volatile(KVM_HYPERCALL
: "=a"(ret)
: "a"(nr), "b"(p1)
@@ -53,6 +63,10 @@ static inline long kvm_hypercall2(unsigned int nr, unsigned long p1,
unsigned long p2)
{
long ret;
+
+ if (prot_guest_has(PATTR_GUEST_TDX))
+ return tdx_kvm_hypercall(nr, p1, p2, 0, 0);
+
asm volatile(KVM_HYPERCALL
: "=a"(ret)
: "a"(nr), "b"(p1), "c"(p2)
@@ -64,6 +78,10 @@ static inline long kvm_hypercall3(unsigned int nr, unsigned long p1,
unsigned long p2, unsigned long p3)
{
long ret;
+
+ if (prot_guest_has(PATTR_GUEST_TDX))
+ return tdx_kvm_hypercall(nr, p1, p2, p3, 0);
+
asm volatile(KVM_HYPERCALL
: "=a"(ret)
: "a"(nr), "b"(p1), "c"(p2), "d"(p3)
@@ -76,6 +94,10 @@ static inline long kvm_hypercall4(unsigned int nr, unsigned long p1,
unsigned long p4)
{
long ret;
+
+ if (prot_guest_has(PATTR_GUEST_TDX))
+ return tdx_kvm_hypercall(nr, p1, p2, p3, p4);
+
asm volatile(KVM_HYPERCALL
: "=a"(ret)
: "a"(nr), "b"(p1), "c"(p2), "d"(p3), "S"(p4)
diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index 846fe58f0426..8fa33e2c98db 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -6,8 +6,9 @@
#include <linux/cpufeature.h>
#include <linux/types.h>

-#define TDX_CPUID_LEAF_ID 0x21
-#define TDX_HYPERCALL_STANDARD 0
+#define TDX_CPUID_LEAF_ID 0x21
+#define TDX_HYPERCALL_STANDARD 0
+#define TDX_HYPERCALL_VENDOR_KVM 0x4d564b2e584454

/*
* Used in __tdx_module_call() helper function to gather the
@@ -80,4 +81,29 @@ static inline bool tdx_prot_guest_has(unsigned long flag) { return false; }

#endif /* CONFIG_INTEL_TDX_GUEST */

+#ifdef CONFIG_INTEL_TDX_GUEST_KVM
+
+static inline long tdx_kvm_hypercall(unsigned int nr, unsigned long p1,
+ unsigned long p2, unsigned long p3,
+ unsigned long p4)
+{
+ struct tdx_hypercall_output out;
+ u64 err;
+
+ err = __tdx_hypercall(TDX_HYPERCALL_VENDOR_KVM, nr, p1, p2,
+ p3, p4, &out);
+
+ BUG_ON(err);
+
+ return out.r10;
+}
+#else
+static inline long tdx_kvm_hypercall(unsigned int nr, unsigned long p1,
+ unsigned long p2, unsigned long p3,
+ unsigned long p4)
+{
+ return -ENODEV;
+}
+#endif /* CONFIG_INTEL_TDX_GUEST_KVM */
+
#endif /* _ASM_X86_TDX_H */
diff --git a/arch/x86/kernel/tdcall.S b/arch/x86/kernel/tdcall.S
index 9df94f87465d..1823bac4542d 100644
--- a/arch/x86/kernel/tdcall.S
+++ b/arch/x86/kernel/tdcall.S
@@ -3,6 +3,7 @@
#include <asm/asm.h>
#include <asm/frame.h>
#include <asm/unwind_hints.h>
+#include <asm/export.h>

#include <linux/linkage.h>
#include <linux/bits.h>
@@ -309,3 +310,4 @@ skip_sti:

retq
SYM_FUNC_END(__tdx_hypercall)
+EXPORT_SYMBOL(__tdx_hypercall);
--
2.25.1

Subject: [PATCH v5 01/12] x86/paravirt: Move halt paravirt calls under CONFIG_PARAVIRT

From: "Kirill A. Shutemov" <[email protected]>

CONFIG_PARAVIRT_XXL is mainly defined/used by Xen PV guests. For
other VM guest types, the features supported under CONFIG_PARAVIRT
are self-sufficient. CONFIG_PARAVIRT mainly provides support for
TLB flush operations and time-related operations.

For a TDX guest as well, the paravirt calls under CONFIG_PARAVIRT
meet most of its requirements, except for the HLT and SAFE_HLT
paravirt calls, which are currently defined under
CONFIG_PARAVIRT_XXL.

Since enabling CONFIG_PARAVIRT_XXL is too bloated for TDX-guest-like
platforms, move the HLT and SAFE_HLT paravirt calls under
CONFIG_PARAVIRT.

Moving the HLT and SAFE_HLT paravirt calls is not fatal and should not
break any functionality for current users of CONFIG_PARAVIRT.
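
The payoff comes later in this series: with the hooks available under plain
CONFIG_PARAVIRT, a TDX guest only needs to override the two halt ops, as
done in the "x86/tdx: Add HLT support for TDX guest" patch:

	pv_ops.irq.safe_halt = tdg_safe_halt;
	pv_ops.irq.halt = tdg_halt;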

Co-developed-by: Kuppuswamy Sathyanarayanan <[email protected]>
Signed-off-by: Kuppuswamy Sathyanarayanan <[email protected]>
Signed-off-by: Kirill A. Shutemov <[email protected]>
Reviewed-by: Andi Kleen <[email protected]>
Reviewed-by: Tony Luck <[email protected]>
---

Changes since v4:
* None.

arch/x86/include/asm/irqflags.h | 40 +++++++++++++++------------
arch/x86/include/asm/paravirt.h | 20 +++++++-------
arch/x86/include/asm/paravirt_types.h | 3 +-
arch/x86/kernel/paravirt.c | 4 ++-
4 files changed, 36 insertions(+), 31 deletions(-)

diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
index c5ce9845c999..f3bb33b1715d 100644
--- a/arch/x86/include/asm/irqflags.h
+++ b/arch/x86/include/asm/irqflags.h
@@ -59,6 +59,28 @@ static inline __cpuidle void native_halt(void)

#endif

+#ifndef CONFIG_PARAVIRT
+#ifndef __ASSEMBLY__
+/*
+ * Used in the idle loop; sti takes one instruction cycle
+ * to complete:
+ */
+static inline __cpuidle void arch_safe_halt(void)
+{
+ native_safe_halt();
+}
+
+/*
+ * Used when interrupts are already enabled or to
+ * shutdown the processor:
+ */
+static inline __cpuidle void halt(void)
+{
+ native_halt();
+}
+#endif /* __ASSEMBLY__ */
+#endif /* CONFIG_PARAVIRT */
+
#ifdef CONFIG_PARAVIRT_XXL
#include <asm/paravirt.h>
#else
@@ -80,24 +102,6 @@ static __always_inline void arch_local_irq_enable(void)
native_irq_enable();
}

-/*
- * Used in the idle loop; sti takes one instruction cycle
- * to complete:
- */
-static inline __cpuidle void arch_safe_halt(void)
-{
- native_safe_halt();
-}
-
-/*
- * Used when interrupts are already enabled or to
- * shutdown the processor:
- */
-static inline __cpuidle void halt(void)
-{
- native_halt();
-}
-
/*
* For spinlocks, etc:
*/
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index da3a1ac82be5..d323a626c7a8 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -97,6 +97,16 @@ static inline void paravirt_arch_exit_mmap(struct mm_struct *mm)
PVOP_VCALL1(mmu.exit_mmap, mm);
}

+static inline void arch_safe_halt(void)
+{
+ PVOP_VCALL0(irq.safe_halt);
+}
+
+static inline void halt(void)
+{
+ PVOP_VCALL0(irq.halt);
+}
+
#ifdef CONFIG_PARAVIRT_XXL
static inline void load_sp0(unsigned long sp0)
{
@@ -162,16 +172,6 @@ static inline void __write_cr4(unsigned long x)
PVOP_VCALL1(cpu.write_cr4, x);
}

-static inline void arch_safe_halt(void)
-{
- PVOP_VCALL0(irq.safe_halt);
-}
-
-static inline void halt(void)
-{
- PVOP_VCALL0(irq.halt);
-}
-
static inline void wbinvd(void)
{
PVOP_ALT_VCALL0(cpu.wbinvd, "wbinvd", ALT_NOT(X86_FEATURE_XENPV));
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index d9d6b0203ec4..40082847f314 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -150,10 +150,9 @@ struct pv_irq_ops {
struct paravirt_callee_save save_fl;
struct paravirt_callee_save irq_disable;
struct paravirt_callee_save irq_enable;
-
+#endif
void (*safe_halt)(void);
void (*halt)(void);
-#endif
} __no_randomize_layout;

struct pv_mmu_ops {
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index 04cafc057bed..124e0f6c5d1c 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -283,9 +283,11 @@ struct paravirt_patch_template pv_ops = {
.irq.save_fl = __PV_IS_CALLEE_SAVE(native_save_fl),
.irq.irq_disable = __PV_IS_CALLEE_SAVE(native_irq_disable),
.irq.irq_enable = __PV_IS_CALLEE_SAVE(native_irq_enable),
+#endif /* CONFIG_PARAVIRT_XXL */
+
+ /* Irq HLT ops. */
.irq.safe_halt = native_safe_halt,
.irq.halt = native_halt,
-#endif /* CONFIG_PARAVIRT_XXL */

/* Mmu ops. */
.mmu.flush_tlb_user = native_flush_tlb_local,
--
2.25.1

Subject: [PATCH v5 08/12] x86/tdx: Add HLT support for TDX guest

From: "Kirill A. Shutemov" <[email protected]>

Per Guest-Host-Communication Interface (GHCI) for Intel Trust
Domain Extensions (Intel TDX) specification, sec 3.8,
TDVMCALL[Instruction.HLT] provides the HLT operation. Use it to implement
the halt() and safe_halt() paravirtualization calls.

The same TDX hypercall is used to handle a #VE exception raised for
EXIT_REASON_HLT.
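
To summarize how the pieces below fit together (a reading of the hunks and
the __tdx_hypercall() prototype, not normative ABI documentation): the
sub-function ID ends up in R11 and the next four arguments of
_tdx_hypercall() end up in R12-R15, so:

	/*
	 * tdg_halt():      R12 = irqs_disabled(), R15 = 0 -> no STI in the
	 *                  assembly stub; HLT with the current IRQ state.
	 * tdg_safe_halt(): R12 = 0 (IRQs enabled), R15 = 1 -> the stub
	 *                  issues STI immediately before TDCALL so no
	 *                  interrupt window is lost while entering idle.
	 */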

Signed-off-by: Kirill A. Shutemov <[email protected]>
Reviewed-by: Andi Kleen <[email protected]>
Reviewed-by: Tony Luck <[email protected]>
Signed-off-by: Kuppuswamy Sathyanarayanan <[email protected]>
---

Changes since v4:
* Added exception for EXIT_REASON_HLT in __tdx_hypercall() to
enable interrupts using sti.

Changes since v3:
* None

arch/x86/kernel/tdcall.S | 29 +++++++++++++++++++++++
arch/x86/kernel/tdx.c | 51 ++++++++++++++++++++++++++++++++++------
2 files changed, 73 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kernel/tdcall.S b/arch/x86/kernel/tdcall.S
index c9c91c1bf99d..9df94f87465d 100644
--- a/arch/x86/kernel/tdcall.S
+++ b/arch/x86/kernel/tdcall.S
@@ -40,6 +40,9 @@
*/
#define tdcall .byte 0x66,0x0f,0x01,0xcc

+/* HLT TDVMCALL sub-function ID */
+#define EXIT_REASON_HLT 12
+
/*
* __tdx_module_call() - Helper function used by TDX guests to request
* services from the TDX module (does not include VMM services).
@@ -240,6 +243,32 @@ SYM_FUNC_START(__tdx_hypercall)

movl $TDVMCALL_EXPOSE_REGS_MASK, %ecx

+ /*
+ * For the idle loop STI needs to be called directly before
+ * the TDCALL that enters idle (EXIT_REASON_HLT case). STI
+ * enables interrupts only one instruction later. If there
+ * are any instructions between the STI and the TDCALL for
+ * HLT then an interrupt could happen in that time, but the
+ * code would go back to sleep afterwards, which can cause
+ * longer delays. This leads to a significant difference in
+ * network performance benchmarks. So add a special case for
+ * EXIT_REASON_HLT to trigger STI before TDCALL. But this
+ * change is not required for all HLT cases, so use the R15
+ * register value to identify the case which needs STI: if
+ * R11 is EXIT_REASON_HLT and R15 is 1, then execute STI
+ * before the TDCALL instruction. Note that the R15 register
+ * is not required by the TDCALL ABI when triggering the
+ * hypercall for the EXIT_REASON_HLT case, so it can be used
+ * in software to select the STI case.
+ */
+ cmpl $EXIT_REASON_HLT, %r11d
+ jne skip_sti
+ cmpl $1, %r15d
+ jne skip_sti
+ /* Set R15 register to 0, it is unused in EXIT_REASON_HLT case */
+ xor %r15, %r15
+ sti
+skip_sti:
tdcall

/* Restore output pointer to R9 */
diff --git a/arch/x86/kernel/tdx.c b/arch/x86/kernel/tdx.c
index 6169f9c740b2..bdd041c4c509 100644
--- a/arch/x86/kernel/tdx.c
+++ b/arch/x86/kernel/tdx.c
@@ -7,6 +7,7 @@
#include <linux/protected_guest.h>

#include <asm/tdx.h>
+#include <asm/vmx.h>

/* TDX Module call Leaf IDs */
#define TDINFO 1
@@ -76,6 +77,33 @@ static void tdg_get_info(void)
td_info.attributes = out.rdx;
}

+static __cpuidle void tdg_halt(void)
+{
+ u64 ret;
+
+ ret = _tdx_hypercall(EXIT_REASON_HLT, irqs_disabled(), 0, 0, 0, NULL);
+
+ /* It should never fail */
+ BUG_ON(ret);
+}
+
+static __cpuidle void tdg_safe_halt(void)
+{
+ u64 ret;
+
+ /*
+ * Enable interrupts next to the TDVMCALL to avoid
+ * performance degradation.
+ */
+ local_irq_enable();
+
+ /* IRQs are enabled, so set R12 to 0 */
+ ret = _tdx_hypercall(EXIT_REASON_HLT, 0, 0, 0, 1, NULL);
+
+ /* It should never fail */
+ BUG_ON(ret);
+}
+
unsigned long tdg_get_ve_info(struct ve_info *ve)
{
u64 ret;
@@ -102,13 +130,19 @@ unsigned long tdg_get_ve_info(struct ve_info *ve)
int tdg_handle_virtualization_exception(struct pt_regs *regs,
struct ve_info *ve)
{
- /*
- * TODO: Add handler support for various #VE exit
- * reasons. It will be added by other patches in
- * the series.
- */
- pr_warn("Unexpected #VE: %lld\n", ve->exit_reason);
- return -EFAULT;
+ switch (ve->exit_reason) {
+ case EXIT_REASON_HLT:
+ tdg_halt();
+ break;
+ default:
+ pr_warn("Unexpected #VE: %lld\n", ve->exit_reason);
+ return -EFAULT;
+ }
+
+ /* After successful #VE handling, move the IP */
+ regs->ip += ve->instr_len;
+
+ return 0;
}

void __init tdx_early_init(void)
@@ -120,5 +154,8 @@ void __init tdx_early_init(void)

tdg_get_info();

+ pv_ops.irq.safe_halt = tdg_safe_halt;
+ pv_ops.irq.halt = tdg_halt;
+
pr_info("Guest initialized\n");
}
--
2.25.1

Subject: [PATCH v5 12/12] x86/tdx: Handle CPUID via #VE

From: "Kirill A. Shutemov" <[email protected]>

TDX has three classes of CPUID leaves: some CPUID leaves
are always handled by the CPU, others are handled by the TDX module,
and some others are handled by the VMM. Since the VMM cannot directly
intercept the instruction, these leaves are reflected with a #VE
exception to the guest, which then converts them into hypercalls to
the VMM, or handles them directly.

Section 16.2 of the TDX module EAS has the full list of CPUID leaves
which are handled natively or by the TDX module. Only unknown CPUID
leaves are handled by the #VE method. In practice this typically only
applies to the hypervisor-specific CPUID leaves unknown to the native CPU.

Therefore there is no risk of triggering this in early CPUID code which
runs before the #VE handler is set up, because that code will never
access those exotic CPUID leaves.

Signed-off-by: Kirill A. Shutemov <[email protected]>
Reviewed-by: Andi Kleen <[email protected]>
Reviewed-by: Tony Luck <[email protected]>
Signed-off-by: Kuppuswamy Sathyanarayanan <[email protected]>
---

Changes since v4:
* None

Changes since v3:
* None

arch/x86/kernel/tdx.c | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)

diff --git a/arch/x86/kernel/tdx.c b/arch/x86/kernel/tdx.c
index d16c7f8759ea..5d2fd6c8b01c 100644
--- a/arch/x86/kernel/tdx.c
+++ b/arch/x86/kernel/tdx.c
@@ -153,6 +153,21 @@ static int tdg_write_msr_safe(unsigned int msr, unsigned int low,
return ret ? -EIO : 0;
}

+static void tdg_handle_cpuid(struct pt_regs *regs)
+{
+ u64 ret;
+ struct tdx_hypercall_output out = {0};
+
+ ret = _tdx_hypercall(EXIT_REASON_CPUID, regs->ax, regs->cx, 0, 0, &out);
+
+ WARN_ON(ret);
+
+ regs->ax = out.r12;
+ regs->bx = out.r13;
+ regs->cx = out.r14;
+ regs->dx = out.r15;
+}
+
unsigned long tdg_get_ve_info(struct ve_info *ve)
{
u64 ret;
@@ -196,6 +211,9 @@ int tdg_handle_virtualization_exception(struct pt_regs *regs,
case EXIT_REASON_MSR_WRITE:
ret = tdg_write_msr_safe(regs->cx, regs->ax, regs->dx);
break;
+ case EXIT_REASON_CPUID:
+ tdg_handle_cpuid(regs);
+ break;
default:
pr_warn("Unexpected #VE: %lld\n", ve->exit_reason);
return -EFAULT;
--
2.25.1

Subject: [PATCH v5 10/12] x86/tdx: Add MSR support for TDX guest

From: "Kirill A. Shutemov" <[email protected]>

Operations on context-switched MSRs can be run natively. The rest of
the MSRs should be handled through TDVMCALLs.

TDVMCALL[Instruction.RDMSR] and TDVMCALL[Instruction.WRMSR] provide
MSR operations.

RDMSR and WRMSR specification details can be found in
Guest-Host-Communication Interface (GHCI) for Intel Trust Domain
Extensions (Intel TDX) specification, sec 3.10, 3.11.
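
An illustrative flow (not part of the patch) of how the two cases differ
once this handler is wired up:

	/*
	 * MSR not in the context-switched list below:
	 *     RDMSR/WRMSR in the guest
	 *       -> TDX module injects #VE (EXIT_REASON_MSR_READ/WRITE)
	 *       -> tdg_handle_virtualization_exception()
	 *       -> tdg_read_msr_safe()/tdg_write_msr_safe()
	 *       -> TDVMCALL to the VMM
	 *
	 * Context-switched MSR (e.g. MSR_FS_BASE):
	 *     RDMSR/WRMSR executes natively, no #VE, no hypercall.
	 */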

Signed-off-by: Kirill A. Shutemov <[email protected]>
Reviewed-by: Andi Kleen <[email protected]>
Reviewed-by: Tony Luck <[email protected]>
Signed-off-by: Kuppuswamy Sathyanarayanan <[email protected]>
---

Changes since v4:
* Removed "you" usage from the commit log.

Changes since v3:
* None

arch/x86/kernel/tdx.c | 67 +++++++++++++++++++++++++++++++++++++++++--
1 file changed, 65 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/tdx.c b/arch/x86/kernel/tdx.c
index bdd041c4c509..d16c7f8759ea 100644
--- a/arch/x86/kernel/tdx.c
+++ b/arch/x86/kernel/tdx.c
@@ -104,6 +104,55 @@ static __cpuidle void tdg_safe_halt(void)
BUG_ON(ret);
}

+static bool tdg_is_context_switched_msr(unsigned int msr)
+{
+ switch (msr) {
+ case MSR_EFER:
+ case MSR_IA32_CR_PAT:
+ case MSR_FS_BASE:
+ case MSR_GS_BASE:
+ case MSR_KERNEL_GS_BASE:
+ case MSR_IA32_SYSENTER_CS:
+ case MSR_IA32_SYSENTER_EIP:
+ case MSR_IA32_SYSENTER_ESP:
+ case MSR_STAR:
+ case MSR_LSTAR:
+ case MSR_SYSCALL_MASK:
+ case MSR_IA32_XSS:
+ case MSR_TSC_AUX:
+ case MSR_IA32_BNDCFGS:
+ return true;
+ }
+ return false;
+}
+
+static u64 tdg_read_msr_safe(unsigned int msr, int *err)
+{
+ u64 ret;
+ struct tdx_hypercall_output out = {0};
+
+ WARN_ON_ONCE(tdg_is_context_switched_msr(msr));
+
+ ret = _tdx_hypercall(EXIT_REASON_MSR_READ, msr, 0, 0, 0, &out);
+
+ *err = ret ? -EIO : 0;
+
+ return out.r11;
+}
+
+static int tdg_write_msr_safe(unsigned int msr, unsigned int low,
+ unsigned int high)
+{
+ u64 ret;
+
+ WARN_ON_ONCE(tdg_is_context_switched_msr(msr));
+
+ ret = _tdx_hypercall(EXIT_REASON_MSR_WRITE, msr, (u64)high << 32 | low,
+ 0, 0, NULL);
+
+ return ret ? -EIO : 0;
+}
+
unsigned long tdg_get_ve_info(struct ve_info *ve)
{
u64 ret;
@@ -130,19 +179,33 @@ unsigned long tdg_get_ve_info(struct ve_info *ve)
int tdg_handle_virtualization_exception(struct pt_regs *regs,
struct ve_info *ve)
{
+ unsigned long val;
+ int ret = 0;
+
switch (ve->exit_reason) {
case EXIT_REASON_HLT:
tdg_halt();
break;
+ case EXIT_REASON_MSR_READ:
+ val = tdg_read_msr_safe(regs->cx, (unsigned int *)&ret);
+ if (!ret) {
+ regs->ax = val & UINT_MAX;
+ regs->dx = val >> 32;
+ }
+ break;
+ case EXIT_REASON_MSR_WRITE:
+ ret = tdg_write_msr_safe(regs->cx, regs->ax, regs->dx);
+ break;
default:
pr_warn("Unexpected #VE: %lld\n", ve->exit_reason);
return -EFAULT;
}

/* After successful #VE handling, move the IP */
- regs->ip += ve->instr_len;
+ if (!ret)
+ regs->ip += ve->instr_len;

- return 0;
+ return ret;
}

void __init tdx_early_init(void)
--
2.25.1

Subject: [PATCH v5 11/12] x86/tdx: Don't write CSTAR MSR on Intel

From: Andi Kleen <[email protected]>

On Intel CPUs writing the CSTAR MSR is not really needed: syscalls
from 32-bit userspace use SYSENTER, and 32-bit SYSCALL is an illegal opcode.
But the kernel wrote it anyway, even though it was ignored by
the CPU. Inside a TDX guest this write actually leads to a #GP. While the #GP
is caught and recovered from, it prints an ugly message at boot.
Do not write the CSTAR MSR on Intel CPUs.

Signed-off-by: Andi Kleen <[email protected]>
---
arch/x86/kernel/cpu/common.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 64b805bd6a54..d936f0e4ec51 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1752,7 +1752,13 @@ void syscall_init(void)
wrmsrl(MSR_LSTAR, (unsigned long)entry_SYSCALL_64);

#ifdef CONFIG_IA32_EMULATION
- wrmsrl(MSR_CSTAR, (unsigned long)entry_SYSCALL_compat);
+ /*
+ * CSTAR is not needed on Intel because it doesn't support
+ * 32bit SYSCALL, but only SYSENTER. On a TDX guest
+ * it leads to a #GP.
+ */
+ if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL)
+ wrmsrl(MSR_CSTAR, (unsigned long)entry_SYSCALL_compat);
/*
* This only works on Intel CPUs.
* On AMD CPUs these MSRs are 32-bit, CPU truncates MSR_IA32_SYSENTER_EIP.
@@ -1764,7 +1770,8 @@ void syscall_init(void)
(unsigned long)(cpu_entry_stack(smp_processor_id()) + 1));
wrmsrl_safe(MSR_IA32_SYSENTER_EIP, (u64)entry_SYSENTER_compat);
#else
- wrmsrl(MSR_CSTAR, (unsigned long)ignore_sysret);
+ if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL)
+ wrmsrl(MSR_CSTAR, (unsigned long)ignore_sysret);
wrmsrl_safe(MSR_IA32_SYSENTER_CS, (u64)GDT_ENTRY_INVALID_SEG);
wrmsrl_safe(MSR_IA32_SYSENTER_ESP, 0ULL);
wrmsrl_safe(MSR_IA32_SYSENTER_EIP, 0ULL);
--
2.25.1

2021-08-04 20:52:45, by Sean Christopherson

Subject: Re: [PATCH v5 11/12] x86/tdx: Don't write CSTAR MSR on Intel

On Wed, Aug 04, 2021, Kuppuswamy Sathyanarayanan wrote:
> From: Andi Kleen <[email protected]>
>
> On Intel CPUs writing the CSTAR MSR is not really needed. Syscalls
> from 32bit work using SYSENTER and 32bit SYSCALL is an illegal opcode.
> But the kernel did write it anyways even though it was ignored by
> the CPU. Inside a TDX guest this actually leads to a #GP. While the #GP
> is caught and recovered from, it prints an ugly message at boot.
> Do not write the CSTAR MSR on Intel CPUs.

Not that it really matters, but...

Is #GP the actual TDX-Module behavior? If so, isn't that a contradiction with
respect to the TDX-Module architecture? It says:

guest TD access violations to MSRs can cause a #GP(0) in most cases where the
MSR is enumerated as inaccessible by the Intel TDX module via CPUID
virtualization. In other cases, guest TD access violations to MSRs can cause
a #VE.

Given that there is no dedicated CPUID flag for CSTAR and CSTAR obviously exists
on Intel CPUs, I don't see how the TDX-Module can possibly enumerate CSTAR as
being inaccessible.

Regardless of #GP versus #VE, "Table 16.2 MSR Virtualization" needs to state the
actual behavior.

Subject: Re: [PATCH v5 11/12] x86/tdx: Don't write CSTAR MSR on Intel



On 8/4/21 2:48 PM, Dave Hansen wrote:
>> No, #GP is triggered by guest.
> ...
>>> Regardless of #GP versus #VE, "Table 16.2 MSR Virtualization" needs
>>> to state the actual behavior.
>> Even in this case, it will trigger #VE. But since CSTAR MSR is not
>> supported, write to it will fail and leads to #VE fault.
> Sathya, I think there might be a mixup of terminology here that's
> confusing. I'm confused by this exchange.
>
> In general, we refer to hardware exceptions by their architecture names:
> #GP for general protection fault, #PF for page fault, #VE for
> Virtualization Exception.
>
> Those hardware exceptions are wired up to software handlers:
> #GP lands in asm_exc_general_protection
> #PF ends up in exc_page_fault
> #VE ends up in exc_virtualization_exception
> ... and more of course
>
> But, to add to the confusion, the #VE handler
> (exc_virtualization_exception()) itself calls (or did once upon a time
> call) do_general_protection() when it can't handle something.
> do_general_protection() is (was?)*ALSO* called by the #GP handler.
>
> So, is that what you meant? By "#GP is triggered by guest", you mean
> that a write to the CSTAR MSR and the resulting #VE will end up being
> handled in a way that is similar to how a #GP hardware exception would
> have been handled?
>
> If that's what you meant, I'm not _sure_ that's totally accurate. Could
> you elaborate on this a bit? It also would be really handy if you were
> able to adopt the terminology I talked about above. It will really make
> things less confusing.


In a TDX guest, an MSR write will trigger a #VE, which is handled by
exc_virtualization_exception()->tdg_handle_virtualization_exception().
Internally this exception handler emulates the MSR write using
hypercalls. If the hypercall returns failure, it means we failed to
handle the #VE exception. In that case, the exc_virtualization_exception()
handler triggers #GP-like behavior using ve_raise_fault(), which is a
customized version of do_general_protection(). That is what I meant by
"the guest triggers #GP(0)".

Since the CSTAR MSR is not supported/used on Intel platforms, instead of
going through this whole process before reporting the failure, we added
the special case for it before it is written.

Following are the implementation details:

static void ve_raise_fault(struct pt_regs *regs, long error_code)
{
	struct task_struct *tsk = current;

	if (user_mode(regs)) {
		tsk->thread.error_code = error_code;
		tsk->thread.trap_nr = X86_TRAP_VE;

		/*
		 * Not fixing up VDSO exceptions similar to #GP handler
		 * because we don't expect the VDSO to trigger #VE.
		 */
		show_signal(tsk, SIGSEGV, "", VEFSTR, regs, error_code);
		force_sig(SIGSEGV);
		return;
	}

	if (fixup_exception(regs, X86_TRAP_VE, error_code, 0))
		return;

	tsk->thread.error_code = error_code;
	tsk->thread.trap_nr = X86_TRAP_VE;

	/*
	 * To be potentially processing a kprobe fault and to trust the result
	 * from kprobe_running(), we have to be non-preemptible.
	 */
	if (!preemptible() &&
	    kprobe_running() &&
	    kprobe_fault_handler(regs, X86_TRAP_VE))
		return;

	notify_die(DIE_GPF, VEFSTR, regs, error_code, X86_TRAP_VE, SIGSEGV);

	die_addr(VEFSTR, regs, error_code, 0);
}


DEFINE_IDTENTRY(exc_virtualization_exception)
{
	struct ve_info ve;
	int ret;

	RCU_LOCKDEP_WARN(!rcu_is_watching(), "entry code didn't wake RCU");

	inc_irq_stat(tdg_ve_count);

	/*
	 * NMIs/Machine-checks/Interrupts will be in a disabled state
	 * till TDGETVEINFO TDCALL is executed. This prevents #VE
	 * nesting issue.
	 */
	ret = tdg_get_ve_info(&ve);

	cond_local_irq_enable(regs);

	if (!ret)
		ret = tdg_handle_virtualization_exception(regs, &ve);
	/*
	 * If tdg_handle_virtualization_exception() could not process
	 * it successfully, treat it as #GP(0) and handle it.
	 */
	if (ret)
		ve_raise_fault(regs, 0);

	cond_local_irq_disable(regs);
}
--
Sathyanarayanan Kuppuswamy
Linux Kernel Developer

2021-08-04 23:00:31, by Sean Christopherson

Subject: Re: [PATCH v5 06/12] x86/tdx: Get TD execution environment information via TDINFO

On Wed, Aug 04, 2021, Kuppuswamy Sathyanarayanan wrote:
> diff --git a/arch/x86/kernel/tdx.c b/arch/x86/kernel/tdx.c
> index 287564990f21..3973e81751ba 100644
> --- a/arch/x86/kernel/tdx.c
> +++ b/arch/x86/kernel/tdx.c
> @@ -8,6 +8,14 @@
>
> #include <asm/tdx.h>
>
> +/* TDX Module call Leaf IDs */
> +#define TDINFO 1
> +
> +static struct {
> + unsigned int gpa_width;
> + unsigned long attributes;
> +} td_info __ro_after_init;
> +
> /*
> * Wrapper for standard use of __tdx_hypercall with BUG_ON() check
> * for TDCALL error.
> @@ -54,6 +62,19 @@ bool tdx_prot_guest_has(unsigned long flag)
> }
> EXPORT_SYMBOL_GPL(tdx_prot_guest_has);
>
> +static void tdg_get_info(void)

I've already made my dislike of "tdg" abundantly clear, but I will keep on complaining
so long as y'all keep sending patches with "tdx" and "tdg" interspersed without any
obvious method to the madness.

Also, a function with "get" in the name that returns void and fills in a global
struct is a bit misleading.

> +{
> + u64 ret;
> + struct tdx_module_output out = {0};
> +
> + ret = __tdx_module_call(TDINFO, 0, 0, 0, 0, &out);
> +
> + BUG_ON(ret);
> +
> + td_info.gpa_width = out.rcx & GENMASK(5, 0);
> + td_info.attributes = out.rdx;
> +}
> +
> void __init tdx_early_init(void)
> {
> if (!cpuid_has_tdx_guest())
> @@ -61,5 +82,7 @@ void __init tdx_early_init(void)
>
> setup_force_cpu_cap(X86_FEATURE_TDX_GUEST);
>
> + tdg_get_info();
> +
> pr_info("Guest initialized\n");
> }
> --
> 2.25.1
>

Subject: [PATCH v5 07/12] x86/traps: Add #VE support for TDX guest

From: "Kirill A. Shutemov" <[email protected]>

Virtualization Exceptions (#VE) are delivered to TDX guests due to
specific guest actions which may happen in either user space or the kernel:

 * Specific instructions (WBINVD, for example)
 * Specific MSR accesses
 * Specific CPUID leaf accesses
 * Access to TD-shared memory, which includes MMIO

In the settings that Linux will run in, virtual exceptions are never
generated on accesses to normal, TD-private memory that has been
accepted.

The entry paths do not access TD-shared memory, MMIO regions or use
those specific MSRs, instructions, CPUID leaves that might generate #VE.
In addition, all interrupts including NMIs are blocked by the hardware
starting with #VE delivery until TDGETVEINFO is called.  This eliminates
the chance of a #VE during the syscall gap or paranoid entry paths and
simplifies #VE handling.

After TDGETVEINFO, #VE could happen in theory (e.g. through an NMI),
but it is expected not to happen because TDX expects NMIs not to
trigger #VEs. Another case where they could happen is if the #VE
exception handler panics, but in that case there are no guarantees on
anything anyway.

If a guest kernel action which would normally cause a #VE occurs in the
interrupt-disabled region before TDGETVEINFO, a #DF is delivered to the
guest which will result in an oops (and should eventually be a panic, as
we would like to set panic_on_oops to 1 for TDX guests).

Add basic infrastructure to handle any #VE which occurs in the kernel or
userspace.  Later patches will add handling for specific #VE scenarios.

Convert unhandled #VEs (everything, until later in this series) so that
they appear just like a #GP by calling ve_raise_fault() directly.
ve_raise_fault() is similar to the #GP handler and is responsible for
sending SIGSEGV to userspace, invoking the CPU die path, and notifying
debuggers and other die-chain users.

Co-developed-by: Sean Christopherson <[email protected]>
Signed-off-by: Sean Christopherson <[email protected]>
Signed-off-by: Kirill A. Shutemov <[email protected]>
Reviewed-by: Andi Kleen <[email protected]>
Reviewed-by: Tony Luck <[email protected]>
Signed-off-by: Kuppuswamy Sathyanarayanan <[email protected]>
---

Changes since v4:
* Since ve_raise_fault() is used only by TDX code, moved it
within #ifdef CONFIG_INTEL_TDX_GUEST.

Changes since v3:
* None

arch/x86/include/asm/idtentry.h | 4 ++
arch/x86/include/asm/tdx.h | 19 +++++++++
arch/x86/kernel/idt.c | 6 +++
arch/x86/kernel/tdx.c | 36 +++++++++++++++++
arch/x86/kernel/traps.c | 69 +++++++++++++++++++++++++++++++++
5 files changed, 134 insertions(+)

diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h
index 1345088e9902..8ccc81d653b3 100644
--- a/arch/x86/include/asm/idtentry.h
+++ b/arch/x86/include/asm/idtentry.h
@@ -625,6 +625,10 @@ DECLARE_IDTENTRY_XENCB(X86_TRAP_OTHER, exc_xen_hypervisor_callback);
DECLARE_IDTENTRY_RAW(X86_TRAP_OTHER, exc_xen_unknown_trap);
#endif

+#ifdef CONFIG_INTEL_TDX_GUEST
+DECLARE_IDTENTRY(X86_TRAP_VE, exc_virtualization_exception);
+#endif
+
/* Device interrupts common/spurious */
DECLARE_IDTENTRY_IRQ(X86_TRAP_OTHER, common_interrupt);
#ifdef CONFIG_X86_LOCAL_APIC
diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
index 72a6a719ce37..846fe58f0426 100644
--- a/arch/x86/include/asm/tdx.h
+++ b/arch/x86/include/asm/tdx.h
@@ -39,6 +39,20 @@ struct tdx_hypercall_output {
u64 r15;
};

+/*
+ * Used by #VE exception handler to gather the #VE exception
+ * info from the TDX module. This is software only structure
+ * and not related to TDX module/VMM.
+ */
+struct ve_info {
+ u64 exit_reason;
+ u64 exit_qual;
+ u64 gla; /* Guest Linear (virtual) Address */
+ u64 gpa; /* Guest Physical Address */
+ u32 instr_len;
+ u32 instr_info;
+};
+
#ifdef CONFIG_INTEL_TDX_GUEST

void __init tdx_early_init(void);
@@ -53,6 +67,11 @@ u64 __tdx_module_call(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9,
u64 __tdx_hypercall(u64 type, u64 fn, u64 r12, u64 r13, u64 r14,
u64 r15, struct tdx_hypercall_output *out);

+unsigned long tdg_get_ve_info(struct ve_info *ve);
+
+int tdg_handle_virtualization_exception(struct pt_regs *regs,
+ struct ve_info *ve);
+
#else

static inline void tdx_early_init(void) { };
diff --git a/arch/x86/kernel/idt.c b/arch/x86/kernel/idt.c
index df0fa695bb09..a5eaae8e6c44 100644
--- a/arch/x86/kernel/idt.c
+++ b/arch/x86/kernel/idt.c
@@ -68,6 +68,9 @@ static const __initconst struct idt_data early_idts[] = {
*/
INTG(X86_TRAP_PF, asm_exc_page_fault),
#endif
+#ifdef CONFIG_INTEL_TDX_GUEST
+ INTG(X86_TRAP_VE, asm_exc_virtualization_exception),
+#endif
};

/*
@@ -91,6 +94,9 @@ static const __initconst struct idt_data def_idts[] = {
INTG(X86_TRAP_MF, asm_exc_coprocessor_error),
INTG(X86_TRAP_AC, asm_exc_alignment_check),
INTG(X86_TRAP_XF, asm_exc_simd_coprocessor_error),
+#ifdef CONFIG_INTEL_TDX_GUEST
+ INTG(X86_TRAP_VE, asm_exc_virtualization_exception),
+#endif

#ifdef CONFIG_X86_32
TSKG(X86_TRAP_DF, GDT_ENTRY_DOUBLEFAULT_TSS),
diff --git a/arch/x86/kernel/tdx.c b/arch/x86/kernel/tdx.c
index 3973e81751ba..6169f9c740b2 100644
--- a/arch/x86/kernel/tdx.c
+++ b/arch/x86/kernel/tdx.c
@@ -10,6 +10,7 @@

/* TDX Module call Leaf IDs */
#define TDINFO 1
+#define TDGETVEINFO 3

static struct {
unsigned int gpa_width;
@@ -75,6 +76,41 @@ static void tdg_get_info(void)
td_info.attributes = out.rdx;
}

+unsigned long tdg_get_ve_info(struct ve_info *ve)
+{
+ u64 ret;
+ struct tdx_module_output out = {0};
+
+ /*
+ * NMIs and machine checks are suppressed. Before this point any
+ * #VE is fatal. After this point (TDGETVEINFO call), NMIs and
+ * additional #VEs are permitted (but we don't expect them to
+ * happen unless you panic).
+ */
+ ret = __tdx_module_call(TDGETVEINFO, 0, 0, 0, 0, &out);
+
+ ve->exit_reason = out.rcx;
+ ve->exit_qual = out.rdx;
+ ve->gla = out.r8;
+ ve->gpa = out.r9;
+ ve->instr_len = out.r10 & UINT_MAX;
+ ve->instr_info = out.r10 >> 32;
+
+ return ret;
+}
+
+int tdg_handle_virtualization_exception(struct pt_regs *regs,
+ struct ve_info *ve)
+{
+ /*
+ * TODO: Add handler support for various #VE exit
+ * reasons. It will be added by other patches in
+ * the series.
+ */
+ pr_warn("Unexpected #VE: %lld\n", ve->exit_reason);
+ return -EFAULT;
+}
+
void __init tdx_early_init(void)
{
if (!cpuid_has_tdx_guest())
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index a58800973aed..be56f0281cb5 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -61,6 +61,7 @@
#include <asm/insn.h>
#include <asm/insn-eval.h>
#include <asm/vdso.h>
+#include <asm/tdx.h>

#ifdef CONFIG_X86_64
#include <asm/x86_init.h>
@@ -1140,6 +1141,74 @@ DEFINE_IDTENTRY(exc_device_not_available)
}
}

+#ifdef CONFIG_INTEL_TDX_GUEST
+#define VEFSTR "VE fault"
+static void ve_raise_fault(struct pt_regs *regs, long error_code)
+{
+ struct task_struct *tsk = current;
+
+ if (user_mode(regs)) {
+ tsk->thread.error_code = error_code;
+ tsk->thread.trap_nr = X86_TRAP_VE;
+
+ /*
+ * Not fixing up VDSO exceptions similar to #GP handler
+ * because we don't expect the VDSO to trigger #VE.
+ */
+ show_signal(tsk, SIGSEGV, "", VEFSTR, regs, error_code);
+ force_sig(SIGSEGV);
+ return;
+ }
+
+ if (fixup_exception(regs, X86_TRAP_VE, error_code, 0))
+ return;
+
+ tsk->thread.error_code = error_code;
+ tsk->thread.trap_nr = X86_TRAP_VE;
+
+ /*
+ * To be potentially processing a kprobe fault and to trust the result
+ * from kprobe_running(), we have to be non-preemptible.
+ */
+ if (!preemptible() &&
+ kprobe_running() &&
+ kprobe_fault_handler(regs, X86_TRAP_VE))
+ return;
+
+ notify_die(DIE_GPF, VEFSTR, regs, error_code, X86_TRAP_VE, SIGSEGV);
+
+ die_addr(VEFSTR, regs, error_code, 0);
+}
+
+DEFINE_IDTENTRY(exc_virtualization_exception)
+{
+ struct ve_info ve;
+ int ret;
+
+ RCU_LOCKDEP_WARN(!rcu_is_watching(), "entry code didn't wake RCU");
+
+ /*
+ * NMIs/Machine-checks/Interrupts will be in a disabled state
+ * till TDGETVEINFO TDCALL is executed. This prevents #VE
+ * nesting issue.
+ */
+ ret = tdg_get_ve_info(&ve);
+
+ cond_local_irq_enable(regs);
+
+ if (!ret)
+ ret = tdg_handle_virtualization_exception(regs, &ve);
+ /*
+ * If tdg_handle_virtualization_exception() could not process
+ * it successfully, treat it as #GP(0) and handle it.
+ */
+ if (ret)
+ ve_raise_fault(regs, 0);
+
+ cond_local_irq_disable(regs);
+}
+#endif
+
#ifdef CONFIG_X86_32
DEFINE_IDTENTRY_SW(iret_error)
{
--
2.25.1

Subject: Re: [PATCH v5 11/12] x86/tdx: Don't write CSTAR MSR on Intel



On 8/4/21 11:31 AM, Sean Christopherson wrote:
>> On Intel CPUs writing the CSTAR MSR is not really needed. Syscalls
>> from 32bit work using SYSENTER and 32bit SYSCALL is an illegal opcode.
>> But the kernel did write it anyways even though it was ignored by
>> the CPU. Inside a TDX guest this actually leads to a #GP. While the #GP
>> is caught and recovered from, it prints an ugly message at boot.
>> Do not write the CSTAR MSR on Intel CPUs.
> Not that it really matters, but...
>
> Is #GP the actual TDX-Module behavior? If so, isn't that a contradiction with

No, #GP is triggered by guest.

> respect to the TDX-Module architecture? It says:
>
> guest TD access violations to MSRs can cause a #GP(0) in most cases where the
> MSR is enumerated as inaccessible by the Intel TDX module via CPUID
> virtualization. In other cases, guest TD access violations to MSRs can cause
> a #VE.
>
> Given that there is no dedicated CPUID flag for CSTAR and CSTAR obviously exists
> on Intel CPUs, I don't see how the TDX-Module can possible enumerate CSTAR as
> being inaccessible.
>
> Regardless of #GP versus #VE, "Table 16.2 MSR Virtualization" needs to state the
> actual behavior.

Even in this case, it will trigger a #VE. But since the CSTAR MSR is not supported,
the write to it will fail and lead to a #VE fault.

File: arch/x86/kernel/traps.c

1183 DEFINE_IDTENTRY(exc_virtualization_exception)
1201 if (!ret)
1202 ret = tdg_handle_virtualization_exception(regs, &ve);
1203 /*
1204 * If tdg_handle_virtualization_exception() could not process
1205 * it successfully, treat it as #GP(0) and handle it.
1206 */
1207 if (ret)
1208 ve_raise_fault(regs, 0);

--
Sathyanarayanan Kuppuswamy
Linux Kernel Developer

2021-08-05 02:30:04, by Sean Christopherson

Subject: Re: [PATCH v5 11/12] x86/tdx: Don't write CSTAR MSR on Intel

On Wed, Aug 04, 2021, Kuppuswamy, Sathyanarayanan wrote:
>
> On 8/4/21 11:31 AM, Sean Christopherson wrote:
> > > On Intel CPUs writing the CSTAR MSR is not really needed. Syscalls
> > > from 32bit work using SYSENTER and 32bit SYSCALL is an illegal opcode.
> > > But the kernel did write it anyways even though it was ignored by
> > > the CPU. Inside a TDX guest this actually leads to a #GP. While the #GP
> > > is caught and recovered from, it prints an ugly message at boot.
> > > Do not write the CSTAR MSR on Intel CPUs.
> > Not that it really matters, but...
> >
> > Is #GP the actual TDX-Module behavior? If so, isn't that a contradiction with
>
> No, #GP is triggered by guest.

#GP is not triggered by the guest, it's not even reported by the guest. From
patch 7, the #VE handler escalates unhandled #VEs "similar to #GP handler", but
it still reports #VE as the actual vector.

Now, that particular behavior could change, e.g. setting tsk->thread.trap_nr to
#VE might confuse userspace, but at no point does this "trigger" a #GP.

2021-08-05 02:31:22, by Dave Hansen

Subject: Re: [PATCH v5 11/12] x86/tdx: Don't write CSTAR MSR on Intel

On 8/4/21 2:03 PM, Kuppuswamy, Sathyanarayanan wrote:
>> Is #GP the actual TDX-Module behavior?  If so, isn't that a
>> contradiction with
>
> No, #GP is triggered by guest.
...
>> Regardless of #GP versus #VE, "Table 16.2 MSR Virtualization" needs
>> to state the actual behavior.
>
> Even in this case, it will trigger #VE. But since CSTAR MSR is not
> supported, write to it will fail and leads to #VE fault.

Sathya, I think there might be a mixup of terminology here that's
confusing. I'm confused by this exchange.

In general, we refer to hardware exceptions by their architecture names:
#GP for general protection fault, #PF for page fault, #VE for
Virtualization Exception.

Those hardware exceptions are wired up to software handlers:
#GP lands in asm_exc_general_protection
#PF ends up in exc_page_fault
#VE ends up in exc_virtualization_exception
... and more of course

But, to add to the confusion, the #VE handler
(exc_virtualization_exception()) itself calls (or did once upon a time
call) do_general_protection() when it can't handle something.
do_general_protection() is (was?) *ALSO* called by the #GP handler.

So, is that what you meant? By "#GP is triggered by guest", you mean
that a write to the CSTAR MSR and the resulting #VE will end up being
handled in a way that is similar to how a #GP hardware exception would
have been handled?

If that's what you meant, I'm not _sure_ that's totally accurate. Could
you elaborate on this a bit? It also would be really handy if you were
able to adopt the terminology I talked about above. It will really make
things less confusing.

2021-08-12 07:44:11, by Borislav Petkov

Subject: Re: [PATCH v5 01/12] x86/paravirt: Move halt paravirt calls under CONFIG_PARAVIRT

On Wed, Aug 04, 2021 at 11:13:18AM -0700, Kuppuswamy Sathyanarayanan wrote:
> From: "Kirill A. Shutemov" <[email protected]>
>
> CONFIG_PARAVIRT_XXL is mainly defined/used by XEN PV guests. For
> other VM guest types, features supported under CONFIG_PARAVIRT
> are self sufficient. CONFIG_PARAVIRT mainly provides support for
> TLB flush operations and time related operations.
>
> For TDX guest as well, paravirt calls under CONFIG_PARVIRT meets
> most of its requirement except the need of HLT and SAFE_HLT
> paravirt calls, which is currently defined under
> COFNIG_PARAVIRT_XXL.
>
> Since enabling CONFIG_PARAVIRT_XXL is too bloated for TDX guest
> like platforms, move HLT and SAFE_HLT paravirt calls under
> CONFIG_PARAVIRT.
>
> Moving HLT and SAFE_HLT paravirt calls are not fatal and should not
> break any functionality for current users of CONFIG_PARAVIRT.
>
> Co-developed-by: Kuppuswamy Sathyanarayanan <[email protected]>
> Signed-off-by: Kuppuswamy Sathyanarayanan <[email protected]>
> Signed-off-by: Kirill A. Shutemov <[email protected]>
> Reviewed-by: Andi Kleen <[email protected]>
> Reviewed-by: Tony Luck <[email protected]>
> ---

You need to do this before sending your patches:

./scripts/get_maintainer.pl /tmp/tdx.01
Thomas Gleixner <[email protected]> (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),commit_signer:1/6=17%)
Ingo Molnar <[email protected]> (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT))
Borislav Petkov <[email protected]> (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),commit_signer:6/6=100%)
[email protected] (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT))
"H. Peter Anvin" <[email protected]> (reviewer:X86 ARCHITECTURE (32-BIT AND 64-BIT))
Juergen Gross <[email protected]> (supporter:PARAVIRT_OPS INTERFACE,commit_signer:5/6=83%,authored:5/6=83%,added_lines:15/16=94%,removed_lines:38/39=97%)
Deep Shah <[email protected]> (supporter:PARAVIRT_OPS INTERFACE)
"VMware, Inc." <[email protected]> (supporter:PARAVIRT_OPS INTERFACE)
...

and CC also the supporters - I'm pretty sure at least Juergen would like
to be kept up-to-date on pv changes. I'll CC him and the others now and
leave in the whole diff but make sure you do that in the future please.

> arch/x86/include/asm/irqflags.h | 40 +++++++++++++++------------
> arch/x86/include/asm/paravirt.h | 20 +++++++-------
> arch/x86/include/asm/paravirt_types.h | 3 +-
> arch/x86/kernel/paravirt.c | 4 ++-
> 4 files changed, 36 insertions(+), 31 deletions(-)
>
> diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
> index c5ce9845c999..f3bb33b1715d 100644
> --- a/arch/x86/include/asm/irqflags.h
> +++ b/arch/x86/include/asm/irqflags.h
> @@ -59,6 +59,28 @@ static inline __cpuidle void native_halt(void)
>
> #endif
>
> +#ifndef CONFIG_PARAVIRT
> +#ifndef __ASSEMBLY__
> +/*
> + * Used in the idle loop; sti takes one instruction cycle
> + * to complete:
> + */
> +static inline __cpuidle void arch_safe_halt(void)
> +{
> + native_safe_halt();
> +}
> +
> +/*
> + * Used when interrupts are already enabled or to
> + * shutdown the processor:
> + */
> +static inline __cpuidle void halt(void)
> +{
> + native_halt();
> +}
> +#endif /* __ASSEMBLY__ */
> +#endif /* CONFIG_PARAVIRT */
> +
> #ifdef CONFIG_PARAVIRT_XXL
> #include <asm/paravirt.h>
> #else
> @@ -80,24 +102,6 @@ static __always_inline void arch_local_irq_enable(void)
> native_irq_enable();
> }
>
> -/*
> - * Used in the idle loop; sti takes one instruction cycle
> - * to complete:
> - */
> -static inline __cpuidle void arch_safe_halt(void)
> -{
> - native_safe_halt();
> -}
> -
> -/*
> - * Used when interrupts are already enabled or to
> - * shutdown the processor:
> - */
> -static inline __cpuidle void halt(void)
> -{
> - native_halt();
> -}
> -
> /*
> * For spinlocks, etc:
> */
> diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
> index da3a1ac82be5..d323a626c7a8 100644
> --- a/arch/x86/include/asm/paravirt.h
> +++ b/arch/x86/include/asm/paravirt.h
> @@ -97,6 +97,16 @@ static inline void paravirt_arch_exit_mmap(struct mm_struct *mm)
> PVOP_VCALL1(mmu.exit_mmap, mm);
> }
>
> +static inline void arch_safe_halt(void)
> +{
> + PVOP_VCALL0(irq.safe_halt);
> +}
> +
> +static inline void halt(void)
> +{
> + PVOP_VCALL0(irq.halt);
> +}
> +
> #ifdef CONFIG_PARAVIRT_XXL
> static inline void load_sp0(unsigned long sp0)
> {
> @@ -162,16 +172,6 @@ static inline void __write_cr4(unsigned long x)
> PVOP_VCALL1(cpu.write_cr4, x);
> }
>
> -static inline void arch_safe_halt(void)
> -{
> - PVOP_VCALL0(irq.safe_halt);
> -}
> -
> -static inline void halt(void)
> -{
> - PVOP_VCALL0(irq.halt);
> -}
> -
> static inline void wbinvd(void)
> {
> PVOP_ALT_VCALL0(cpu.wbinvd, "wbinvd", ALT_NOT(X86_FEATURE_XENPV));
> diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
> index d9d6b0203ec4..40082847f314 100644
> --- a/arch/x86/include/asm/paravirt_types.h
> +++ b/arch/x86/include/asm/paravirt_types.h
> @@ -150,10 +150,9 @@ struct pv_irq_ops {
> struct paravirt_callee_save save_fl;
> struct paravirt_callee_save irq_disable;
> struct paravirt_callee_save irq_enable;
> -
> +#endif
> void (*safe_halt)(void);
> void (*halt)(void);
> -#endif
> } __no_randomize_layout;
>
> struct pv_mmu_ops {
> diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
> index 04cafc057bed..124e0f6c5d1c 100644
> --- a/arch/x86/kernel/paravirt.c
> +++ b/arch/x86/kernel/paravirt.c
> @@ -283,9 +283,11 @@ struct paravirt_patch_template pv_ops = {
> .irq.save_fl = __PV_IS_CALLEE_SAVE(native_save_fl),
> .irq.irq_disable = __PV_IS_CALLEE_SAVE(native_irq_disable),
> .irq.irq_enable = __PV_IS_CALLEE_SAVE(native_irq_enable),
> +#endif /* CONFIG_PARAVIRT_XXL */
> +
> + /* Irq HLT ops. */

What's that comment for?

> .irq.safe_halt = native_safe_halt,
> .irq.halt = native_halt,
> -#endif /* CONFIG_PARAVIRT_XXL */
>
> /* Mmu ops. */
> .mmu.flush_tlb_user = native_flush_tlb_local,
> --
> 2.25.1
>

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

Subject: Re: [PATCH v5 01/12] x86/paravirt: Move halt paravirt calls under CONFIG_PARAVIRT



On 8/12/21 12:18 AM, Borislav Petkov wrote:
> and CC also the supporters - I'm pretty sure at least Juergen would like
> to be kept up-to-date on pv changes. I'll CC him and the others now and
> leave in the whole diff but make sure you do that in the future please.

Sure. I will do so in future submissions.

--
Sathyanarayanan Kuppuswamy
Linux Kernel Developer

2021-08-17 12:52:35, by Jürgen Groß

Subject: Re: [PATCH v5 01/12] x86/paravirt: Move halt paravirt calls under CONFIG_PARAVIRT

On 12.08.21 09:18, Borislav Petkov wrote:
> On Wed, Aug 04, 2021 at 11:13:18AM -0700, Kuppuswamy Sathyanarayanan wrote:
>> From: "Kirill A. Shutemov" <[email protected]>
>>
>> CONFIG_PARAVIRT_XXL is mainly defined/used by XEN PV guests. For
>> other VM guest types, features supported under CONFIG_PARAVIRT
>> are self sufficient. CONFIG_PARAVIRT mainly provides support for
>> TLB flush operations and time related operations.
>>
>> For TDX guest as well, paravirt calls under CONFIG_PARVIRT meets
>> most of its requirement except the need of HLT and SAFE_HLT
>> paravirt calls, which is currently defined under
>> COFNIG_PARAVIRT_XXL.
>>
>> Since enabling CONFIG_PARAVIRT_XXL is too bloated for TDX guest
>> like platforms, move HLT and SAFE_HLT paravirt calls under
>> CONFIG_PARAVIRT.
>>
>> Moving HLT and SAFE_HLT paravirt calls are not fatal and should not
>> break any functionality for current users of CONFIG_PARAVIRT.
>>
>> Co-developed-by: Kuppuswamy Sathyanarayanan <[email protected]>
>> Signed-off-by: Kuppuswamy Sathyanarayanan <[email protected]>
>> Signed-off-by: Kirill A. Shutemov <[email protected]>
>> Reviewed-by: Andi Kleen <[email protected]>
>> Reviewed-by: Tony Luck <[email protected]>
>> ---
>
> You need to do this before sending your patches:
>
> ./scripts/get_maintainer.pl /tmp/tdx.01
> Thomas Gleixner <[email protected]> (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),commit_signer:1/6=17%)
> Ingo Molnar <[email protected]> (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT))
> Borislav Petkov <[email protected]> (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT),commit_signer:6/6=100%)
> [email protected] (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT))
> "H. Peter Anvin" <[email protected]> (reviewer:X86 ARCHITECTURE (32-BIT AND 64-BIT))
> Juergen Gross <[email protected]> (supporter:PARAVIRT_OPS INTERFACE,commit_signer:5/6=83%,authored:5/6=83%,added_lines:15/16=94%,removed_lines:38/39=97%)
> Deep Shah <[email protected]> (supporter:PARAVIRT_OPS INTERFACE)
> "VMware, Inc." <[email protected]> (supporter:PARAVIRT_OPS INTERFACE)
> ...
>
> and CC also the supporters - I'm pretty sure at least Juergen would like
> to be kept up-to-date on pv changes. I'll CC him and the others now and
> leave in the whole diff but make sure you do that in the future please.
>
>> arch/x86/include/asm/irqflags.h | 40 +++++++++++++++------------
>> arch/x86/include/asm/paravirt.h | 20 +++++++-------
>> arch/x86/include/asm/paravirt_types.h | 3 +-
>> arch/x86/kernel/paravirt.c | 4 ++-
>> 4 files changed, 36 insertions(+), 31 deletions(-)
>>
>> diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
>> index c5ce9845c999..f3bb33b1715d 100644
>> --- a/arch/x86/include/asm/irqflags.h
>> +++ b/arch/x86/include/asm/irqflags.h
>> @@ -59,6 +59,28 @@ static inline __cpuidle void native_halt(void)
>>
>> #endif
>>
>> +#ifndef CONFIG_PARAVIRT
>> +#ifndef __ASSEMBLY__
>> +/*
>> + * Used in the idle loop; sti takes one instruction cycle
>> + * to complete:
>> + */
>> +static inline __cpuidle void arch_safe_halt(void)
>> +{
>> + native_safe_halt();
>> +}
>> +
>> +/*
>> + * Used when interrupts are already enabled or to
>> + * shutdown the processor:
>> + */
>> +static inline __cpuidle void halt(void)
>> +{
>> + native_halt();
>> +}
>> +#endif /* __ASSEMBLY__ */
>> +#endif /* CONFIG_PARAVIRT */
>> +
>> #ifdef CONFIG_PARAVIRT_XXL
>> #include <asm/paravirt.h>

Did you test this with CONFIG_PARAVIRT enabled and CONFIG_PARAVIRT_XXL
disabled?

I'm asking because in this case I don't see where halt() and
arch_safe_halt() would be defined in case someone is including
asm/irqflags.h and not asm/paravirt.h.

>> #else
>> @@ -80,24 +102,6 @@ static __always_inline void arch_local_irq_enable(void)
>> native_irq_enable();
>> }
>>
>> -/*
>> - * Used in the idle loop; sti takes one instruction cycle
>> - * to complete:
>> - */
>> -static inline __cpuidle void arch_safe_halt(void)
>> -{
>> - native_safe_halt();
>> -}
>> -
>> -/*
>> - * Used when interrupts are already enabled or to
>> - * shutdown the processor:
>> - */
>> -static inline __cpuidle void halt(void)
>> -{
>> - native_halt();
>> -}
>> -
>> /*
>> * For spinlocks, etc:
>> */
>> diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
>> index da3a1ac82be5..d323a626c7a8 100644
>> --- a/arch/x86/include/asm/paravirt.h
>> +++ b/arch/x86/include/asm/paravirt.h
>> @@ -97,6 +97,16 @@ static inline void paravirt_arch_exit_mmap(struct mm_struct *mm)
>> PVOP_VCALL1(mmu.exit_mmap, mm);
>> }
>>
>> +static inline void arch_safe_halt(void)
>> +{
>> + PVOP_VCALL0(irq.safe_halt);
>> +}
>> +
>> +static inline void halt(void)
>> +{
>> + PVOP_VCALL0(irq.halt);
>> +}
>> +
>> #ifdef CONFIG_PARAVIRT_XXL
>> static inline void load_sp0(unsigned long sp0)
>> {
>> @@ -162,16 +172,6 @@ static inline void __write_cr4(unsigned long x)
>> PVOP_VCALL1(cpu.write_cr4, x);
>> }
>>
>> -static inline void arch_safe_halt(void)
>> -{
>> - PVOP_VCALL0(irq.safe_halt);
>> -}
>> -
>> -static inline void halt(void)
>> -{
>> - PVOP_VCALL0(irq.halt);
>> -}
>> -
>> static inline void wbinvd(void)
>> {
>> PVOP_ALT_VCALL0(cpu.wbinvd, "wbinvd", ALT_NOT(X86_FEATURE_XENPV));
>> diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
>> index d9d6b0203ec4..40082847f314 100644
>> --- a/arch/x86/include/asm/paravirt_types.h
>> +++ b/arch/x86/include/asm/paravirt_types.h
>> @@ -150,10 +150,9 @@ struct pv_irq_ops {
>> struct paravirt_callee_save save_fl;
>> struct paravirt_callee_save irq_disable;
>> struct paravirt_callee_save irq_enable;
>> -
>> +#endif
>> void (*safe_halt)(void);
>> void (*halt)(void);
>> -#endif
>> } __no_randomize_layout;
>>
>> struct pv_mmu_ops {
>> diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
>> index 04cafc057bed..124e0f6c5d1c 100644
>> --- a/arch/x86/kernel/paravirt.c
>> +++ b/arch/x86/kernel/paravirt.c
>> @@ -283,9 +283,11 @@ struct paravirt_patch_template pv_ops = {
>> .irq.save_fl = __PV_IS_CALLEE_SAVE(native_save_fl),
>> .irq.irq_disable = __PV_IS_CALLEE_SAVE(native_irq_disable),
>> .irq.irq_enable = __PV_IS_CALLEE_SAVE(native_irq_enable),
>> +#endif /* CONFIG_PARAVIRT_XXL */
>> +
>> + /* Irq HLT ops. */
>
> What's that comment for?

I agree, please drop it.


Juergen


Subject: Re: [PATCH v5 01/12] x86/paravirt: Move halt paravirt calls under CONFIG_PARAVIRT



On 8/17/21 5:50 AM, Juergen Gross wrote:
> On 12.08.21 09:18, Borislav Petkov wrote:
>> On Wed, Aug 04, 2021 at 11:13:18AM -0700, Kuppuswamy Sathyanarayanan wrote:
>>> From: "Kirill A. Shutemov" <[email protected]>
>>>
>>> CONFIG_PARAVIRT_XXL is mainly defined/used by XEN PV guests. For
>>> other VM guest types, features supported under CONFIG_PARAVIRT
>>> are self sufficient. CONFIG_PARAVIRT mainly provides support for
>>> TLB flush operations and time related operations.
>>>
>>> For a TDX guest as well, the paravirt calls under CONFIG_PARAVIRT meet
>>> most of its requirements, except for the HLT and SAFE_HLT
>>> paravirt calls, which are currently defined under
>>> CONFIG_PARAVIRT_XXL.
>>>
>>> Since enabling CONFIG_PARAVIRT_XXL is too bloated for TDX-guest-like
>>> platforms, move the HLT and SAFE_HLT paravirt calls under
>>> CONFIG_PARAVIRT.
>>>
>>> Moving the HLT and SAFE_HLT paravirt calls is not fatal and should not
>>> break any functionality for current users of CONFIG_PARAVIRT.
>>>
>>> Co-developed-by: Kuppuswamy Sathyanarayanan <[email protected]>
>>> Signed-off-by: Kuppuswamy Sathyanarayanan <[email protected]>
>>> Signed-off-by: Kirill A. Shutemov <[email protected]>
>>> Reviewed-by: Andi Kleen <[email protected]>
>>> Reviewed-by: Tony Luck <[email protected]>
>>> ---
>>
>> You need to do this before sending your patches:
>>
>> ./scripts/get_maintainer.pl /tmp/tdx.01
>> Thomas Gleixner <[email protected]> (maintainer:X86 ARCHITECTURE (32-BIT AND
>> 64-BIT),commit_signer:1/6=17%)
>> Ingo Molnar <[email protected]> (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT))
>> Borislav Petkov <[email protected]> (maintainer:X86 ARCHITECTURE (32-BIT AND
>> 64-BIT),commit_signer:6/6=100%)
>> [email protected] (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT))
>> "H. Peter Anvin" <[email protected]> (reviewer:X86 ARCHITECTURE (32-BIT AND 64-BIT))
>> Juergen Gross <[email protected]> (supporter:PARAVIRT_OPS
>> INTERFACE,commit_signer:5/6=83%,authored:5/6=83%,added_lines:15/16=94%,removed_lines:38/39=97%)
>> Deep Shah <[email protected]> (supporter:PARAVIRT_OPS INTERFACE)
>> "VMware, Inc." <[email protected]> (supporter:PARAVIRT_OPS INTERFACE)
>> ...
>>
>> and CC also the supporters - I'm pretty sure at least Juergen would like
>> to be kept up-to-date on pv changes. I'll CC him and the others now and
>> leave in the whole diff but make sure you do that in the future please.
>>
>>>   arch/x86/include/asm/irqflags.h       | 40 +++++++++++++++------------
>>>   arch/x86/include/asm/paravirt.h       | 20 +++++++-------
>>>   arch/x86/include/asm/paravirt_types.h |  3 +-
>>>   arch/x86/kernel/paravirt.c            |  4 ++-
>>>   4 files changed, 36 insertions(+), 31 deletions(-)
>>>
>>> diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
>>> index c5ce9845c999..f3bb33b1715d 100644
>>> --- a/arch/x86/include/asm/irqflags.h
>>> +++ b/arch/x86/include/asm/irqflags.h
>>> @@ -59,6 +59,28 @@ static inline __cpuidle void native_halt(void)
>>>   #endif
>>> +#ifndef CONFIG_PARAVIRT
>>> +#ifndef __ASSEMBLY__
>>> +/*
>>> + * Used in the idle loop; sti takes one instruction cycle
>>> + * to complete:
>>> + */
>>> +static inline __cpuidle void arch_safe_halt(void)
>>> +{
>>> +    native_safe_halt();
>>> +}
>>> +
>>> +/*
>>> + * Used when interrupts are already enabled or to
>>> + * shutdown the processor:
>>> + */
>>> +static inline __cpuidle void halt(void)
>>> +{
>>> +    native_halt();
>>> +}
>>> +#endif /* __ASSEMBLY__ */
>>> +#endif /* CONFIG_PARAVIRT */
>>> +
>>>   #ifdef CONFIG_PARAVIRT_XXL
>>>   #include <asm/paravirt.h>
>
> Did you test this with CONFIG_PARAVIRT enabled and CONFIG_PARAVIRT_XXL
> disabled?
>
> I'm asking because in this case I don't see where halt() and
> arch_safe_halt() would be defined in case someone is including
> asm/irqflags.h and not asm/paravirt.h.

We have tested both cases and did not hit any issues.

1. CONFIG_PARAVIRT=y and CONFIG_PARAVIRT_XXL=y
2. CONFIG_PARAVIRT=y and CONFIG_PARAVIRT_XXL=n


>>> diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
>>> index 04cafc057bed..124e0f6c5d1c 100644
>>> --- a/arch/x86/kernel/paravirt.c
>>> +++ b/arch/x86/kernel/paravirt.c
>>> @@ -283,9 +283,11 @@ struct paravirt_patch_template pv_ops = {
>>>       .irq.save_fl        = __PV_IS_CALLEE_SAVE(native_save_fl),
>>>       .irq.irq_disable    = __PV_IS_CALLEE_SAVE(native_irq_disable),
>>>       .irq.irq_enable        = __PV_IS_CALLEE_SAVE(native_irq_enable),
>>> +#endif /* CONFIG_PARAVIRT_XXL */
>>> +
>>> +    /* Irq HLT ops. */
>>
>> What's that comment for?
>
> I agree, please drop it.

Yes. I will drop it.

>
>
> Juergen

--
Sathyanarayanan Kuppuswamy
Linux Kernel Developer

2021-08-17 13:33:50

by Jürgen Groß

Subject: Re: [PATCH v5 01/12] x86/paravirt: Move halt paravirt calls under CONFIG_PARAVIRT

On 17.08.21 15:16, Kuppuswamy, Sathyanarayanan wrote:
>
>
> On 8/17/21 5:50 AM, Juergen Gross wrote:
>> On 12.08.21 09:18, Borislav Petkov wrote:
>>> On Wed, Aug 04, 2021 at 11:13:18AM -0700, Kuppuswamy Sathyanarayanan
>>> wrote:
>>>> From: "Kirill A. Shutemov" <[email protected]>
>>>>
>>>> CONFIG_PARAVIRT_XXL is mainly defined/used by XEN PV guests. For
>>>> other VM guest types, features supported under CONFIG_PARAVIRT
>>>> are self sufficient. CONFIG_PARAVIRT mainly provides support for
>>>> TLB flush operations and time related operations.
>>>>
>>>> For a TDX guest as well, the paravirt calls under CONFIG_PARAVIRT meet
>>>> most of its requirements, except for the HLT and SAFE_HLT
>>>> paravirt calls, which are currently defined under
>>>> CONFIG_PARAVIRT_XXL.
>>>>
>>>> Since enabling CONFIG_PARAVIRT_XXL is too bloated for TDX-guest-like
>>>> platforms, move the HLT and SAFE_HLT paravirt calls under
>>>> CONFIG_PARAVIRT.
>>>>
>>>> Moving the HLT and SAFE_HLT paravirt calls is not fatal and should not
>>>> break any functionality for current users of CONFIG_PARAVIRT.
>>>>
>>>> Co-developed-by: Kuppuswamy Sathyanarayanan
>>>> <[email protected]>
>>>> Signed-off-by: Kuppuswamy Sathyanarayanan
>>>> <[email protected]>
>>>> Signed-off-by: Kirill A. Shutemov <[email protected]>
>>>> Reviewed-by: Andi Kleen <[email protected]>
>>>> Reviewed-by: Tony Luck <[email protected]>
>>>> ---
>>>
>>> You need to do this before sending your patches:
>>>
>>> ./scripts/get_maintainer.pl /tmp/tdx.01
>>> Thomas Gleixner <[email protected]> (maintainer:X86 ARCHITECTURE
>>> (32-BIT AND 64-BIT),commit_signer:1/6=17%)
>>> Ingo Molnar <[email protected]> (maintainer:X86 ARCHITECTURE (32-BIT
>>> AND 64-BIT))
>>> Borislav Petkov <[email protected]> (maintainer:X86 ARCHITECTURE (32-BIT
>>> AND 64-BIT),commit_signer:6/6=100%)
>>> [email protected] (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT))
>>> "H. Peter Anvin" <[email protected]> (reviewer:X86 ARCHITECTURE (32-BIT
>>> AND 64-BIT))
>>> Juergen Gross <[email protected]> (supporter:PARAVIRT_OPS
>>> INTERFACE,commit_signer:5/6=83%,authored:5/6=83%,added_lines:15/16=94%,removed_lines:38/39=97%)
>>>
>>> Deep Shah <[email protected]> (supporter:PARAVIRT_OPS INTERFACE)
>>> "VMware, Inc." <[email protected]> (supporter:PARAVIRT_OPS
>>> INTERFACE)
>>> ...
>>>
>>> and CC also the supporters - I'm pretty sure at least Juergen would like
>>> to be kept up-to-date on pv changes. I'll CC him and the others now and
>>> leave in the whole diff but make sure you do that in the future please.
>>>
>>>>   arch/x86/include/asm/irqflags.h       | 40
>>>> +++++++++++++++------------
>>>>   arch/x86/include/asm/paravirt.h       | 20 +++++++-------
>>>>   arch/x86/include/asm/paravirt_types.h |  3 +-
>>>>   arch/x86/kernel/paravirt.c            |  4 ++-
>>>>   4 files changed, 36 insertions(+), 31 deletions(-)
>>>>
>>>> diff --git a/arch/x86/include/asm/irqflags.h
>>>> b/arch/x86/include/asm/irqflags.h
>>>> index c5ce9845c999..f3bb33b1715d 100644
>>>> --- a/arch/x86/include/asm/irqflags.h
>>>> +++ b/arch/x86/include/asm/irqflags.h
>>>> @@ -59,6 +59,28 @@ static inline __cpuidle void native_halt(void)
>>>>   #endif
>>>> +#ifndef CONFIG_PARAVIRT
>>>> +#ifndef __ASSEMBLY__
>>>> +/*
>>>> + * Used in the idle loop; sti takes one instruction cycle
>>>> + * to complete:
>>>> + */
>>>> +static inline __cpuidle void arch_safe_halt(void)
>>>> +{
>>>> +    native_safe_halt();
>>>> +}
>>>> +
>>>> +/*
>>>> + * Used when interrupts are already enabled or to
>>>> + * shutdown the processor:
>>>> + */
>>>> +static inline __cpuidle void halt(void)
>>>> +{
>>>> +    native_halt();
>>>> +}
>>>> +#endif /* __ASSEMBLY__ */
>>>> +#endif /* CONFIG_PARAVIRT */
>>>> +
>>>>   #ifdef CONFIG_PARAVIRT_XXL
>>>>   #include <asm/paravirt.h>
>>
>> Did you test this with CONFIG_PARAVIRT enabled and CONFIG_PARAVIRT_XXL
>> disabled?
>>
>> I'm asking because in this case I don't see where halt() and
>> arch_safe_halt() would be defined in case someone is including
>> asm/irqflags.h and not asm/paravirt.h.
>
> We have tested both cases and did not hit any issues.
>
> 1. CONFIG_PARAVIRT=y and  CONFIG_PARAVIRT_XXL=y
> 2. CONFIG_PARAVIRT=y and  CONFIG_PARAVIRT_XXL=n

I guess you have been lucky and all users of arch_safe_halt() and halt()
are directly or indirectly including asm/paravirt.h by other means.

There might be configs where this is not true, though.

In any case I believe you should fix your patch to let asm/irqflags.h
include asm/paravirt.h for the CONFIG_PARAVIRT case.


Juergen


Subject: Re: [PATCH v5 01/12] x86/paravirt: Move halt paravirt calls under CONFIG_PARAVIRT



On 8/17/21 6:28 AM, Juergen Gross wrote:
> I guess you have been lucky and all users of arch_safe_halt() and halt()
> are directly or indirectly including asm/paravirt.h by other means.
>
> There might be configs where this is not true, though.
>
> In any case I believe you should fix your patch to let asm/irqflags.h
> include asm/paravirt.h for the CONFIG_PARAVIRT case.

Ok. I will include it.

#if defined(CONFIG_PARAVIRT) && !defined(CONFIG_PARAVIRT_XXL)
#include <asm/paravirt.h>
#endif

--
Sathyanarayanan Kuppuswamy
Linux Kernel Developer

2021-08-17 13:49:35

by Jürgen Groß

Subject: Re: [PATCH v5 01/12] x86/paravirt: Move halt paravirt calls under CONFIG_PARAVIRT

On 17.08.21 15:39, Kuppuswamy, Sathyanarayanan wrote:
>
>
> On 8/17/21 6:28 AM, Juergen Gross wrote:
>> I guess you have been lucky and all users of arch_safe_halt() and halt()
>> are directly or indirectly including asm/paravirt.h by other means.
>>
>> There might be configs where this is not true, though.
>>
>> In any case I believe you should fix your patch to let asm/irqflags.h
>> include asm/paravirt.h for the CONFIG_PARAVIRT case.
>
> Ok. I will include it.
>
> #if defined(CONFIG_PARAVIRT) && !defined(CONFIG_PARAVIRT_XXL)
> #include <asm/paravirt.h>
> #endif
>

I don't see a reason to have two "#include <asm/paravirt.h>" lines in
one file. Why don't you use:

#ifdef CONFIG_PARAVIRT
#include <asm/paravirt.h>
#else
#ifndef __ASSEMBLY__
...
#endif
#endif

#ifndef CONFIG_PARAVIRT_XXL
...
#endif


Juergen


Subject: Re: [PATCH v5 01/12] x86/paravirt: Move halt paravirt calls under CONFIG_PARAVIRT



On 8/17/21 6:47 AM, Juergen Gross wrote:
> I don't see a reason to have two "#include <asm/paravirt.h>" lines in
> one file. Why don't you use:
>
> #ifdef CONFIG_PARAVIRT
> #include <asm/paravirt.h>
> #else
> #ifndef __ASSEMBLY__
> ...
> #endif
> #endif
>
> #ifndef CONFIG_PARAVIRT_XXL
> ...
> #endif

Ok. I will use your format.
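
For reference, the reworked block at the top of arch/x86/include/asm/irqflags.h
could then end up looking roughly like the sketch below. This only follows the
layout suggested above; the exact guard placement is an assumption, not the
final patch:

#ifdef CONFIG_PARAVIRT
#include <asm/paravirt.h>
#else
#ifndef __ASSEMBLY__
/*
 * Used in the idle loop; sti takes one instruction cycle
 * to complete:
 */
static inline __cpuidle void arch_safe_halt(void)
{
        native_safe_halt();
}

/*
 * Used when interrupts are already enabled or to
 * shutdown the processor:
 */
static inline __cpuidle void halt(void)
{
        native_halt();
}
#endif /* __ASSEMBLY__ */
#endif /* CONFIG_PARAVIRT */

#ifndef CONFIG_PARAVIRT_XXL
#ifndef __ASSEMBLY__
/* arch_local_save_flags() and friends stay here, unchanged. */
#endif
#endif /* CONFIG_PARAVIRT_XXL */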

--
Sathyanarayanan Kuppuswamy
Linux Kernel Developer

2021-08-20 17:14:22

by Borislav Petkov

Subject: Re: [PATCH v5 06/12] x86/tdx: Get TD execution environment information via TDINFO

On Wed, Aug 04, 2021 at 11:13:23AM -0700, Kuppuswamy Sathyanarayanan wrote:
> From: "Kirill A. Shutemov" <[email protected]>
>
> Per Guest-Host-Communication Interface (GHCI) for Intel Trust
> Domain Extensions (Intel TDX) specification, sec 2.4.2,
> TDCALL[TDINFO] provides basic TD execution environment information, not
> provided by CPUID.
>
> Call TDINFO during early boot; its output is used for subsequent system
> initialization.
>
> The call provides info on which bit in the pfn is used to indicate that
> the page is shared with the host, and on attributes of the TD, such as debug.
>
> Information about the number of CPUs need not be saved because there are
> no users so far for it.
>
> Signed-off-by: Kirill A. Shutemov <[email protected]>
> Reviewed-by: Andi Kleen <[email protected]>
> Reviewed-by: Tony Luck <[email protected]>
> Signed-off-by: Kuppuswamy Sathyanarayanan <[email protected]>
> ---
>
> Changes since v4:
> * None
>
> Changes since v3:
> * None
>
> arch/x86/kernel/tdx.c | 23 +++++++++++++++++++++++
> 1 file changed, 23 insertions(+)
>
> diff --git a/arch/x86/kernel/tdx.c b/arch/x86/kernel/tdx.c
> index 287564990f21..3973e81751ba 100644
> --- a/arch/x86/kernel/tdx.c
> +++ b/arch/x86/kernel/tdx.c
> @@ -8,6 +8,14 @@
>
> #include <asm/tdx.h>
>
> +/* TDX Module call Leaf IDs */
> +#define TDINFO 1
> +
> +static struct {
> + unsigned int gpa_width;
> + unsigned long attributes;
> +} td_info __ro_after_init;

Where is that thing even used? I don't see it in the whole patchset.

> +
> /*
> * Wrapper for standard use of __tdx_hypercall with BUG_ON() check
> * for TDCALL error.
> @@ -54,6 +62,19 @@ bool tdx_prot_guest_has(unsigned long flag)
> }
> EXPORT_SYMBOL_GPL(tdx_prot_guest_has);
>
> +static void tdg_get_info(void)

Also, what Sean said: "tdx_" please. Unless there's a real reason to
have a different prefix - then state that reason.

> +{
> + u64 ret;
> + struct tdx_module_output out = {0};

The tip-tree preferred ordering of variable declarations at the
beginning of a function is reverse fir tree order::

struct long_struct_name *descriptive_name;
unsigned long foo, bar;
unsigned int tmp;
int ret;

The above is faster to parse than the reverse ordering::

int ret;
unsigned int tmp;
unsigned long foo, bar;
struct long_struct_name *descriptive_name;

And even more so than random ordering::

unsigned long foo, bar;
int ret;
struct long_struct_name *descriptive_name;
unsigned int tmp;

> +
> + ret = __tdx_module_call(TDINFO, 0, 0, 0, 0, &out);
> +
> + BUG_ON(ret);

WARNING: Avoid crashing the kernel - try using WARN_ON & recovery code rather than BUG() or BUG_ON()
#121: FILE: arch/x86/kernel/tdx.c:72:
+ BUG_ON(ret);

Have I already told you about checkpatch?

If not, here it is:

Please integrate scripts/checkpatch.pl into your patch creation
workflow. Some of the warnings/errors *actually* make sense.

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

Subject: Re: [PATCH v5 06/12] x86/tdx: Get TD execution environment information via TDINFO



On 8/20/21 10:13 AM, Borislav Petkov wrote:
> On Wed, Aug 04, 2021 at 11:13:23AM -0700, Kuppuswamy Sathyanarayanan wrote:
>> From: "Kirill A. Shutemov" <[email protected]>
>>
>> Per Guest-Host-Communication Interface (GHCI) for Intel Trust
>> Domain Extensions (Intel TDX) specification, sec 2.4.2,
>> TDCALL[TDINFO] provides basic TD execution environment information, not
>> provided by CPUID.
>>
>> Call TDINFO during early boot; its output is used for subsequent system
>> initialization.
>>
>> The call provides info on which bit in the pfn is used to indicate that
>> the page is shared with the host, and on attributes of the TD, such as debug.
>>
>> Information about the number of CPUs need not be saved because there are
>> no users so far for it.
>>
>> Signed-off-by: Kirill A. Shutemov <[email protected]>
>> Reviewed-by: Andi Kleen <[email protected]>
>> Reviewed-by: Tony Luck <[email protected]>
>> Signed-off-by: Kuppuswamy Sathyanarayanan <[email protected]>
>> ---
>>
>> Changes since v4:
>> * None
>>
>> Changes since v3:
>> * None
>>
>> arch/x86/kernel/tdx.c | 23 +++++++++++++++++++++++
>> 1 file changed, 23 insertions(+)
>>
>> diff --git a/arch/x86/kernel/tdx.c b/arch/x86/kernel/tdx.c
>> index 287564990f21..3973e81751ba 100644
>> --- a/arch/x86/kernel/tdx.c
>> +++ b/arch/x86/kernel/tdx.c
>> @@ -8,6 +8,14 @@
>>
>> #include <asm/tdx.h>
>>
>> +/* TDX Module call Leaf IDs */
>> +#define TDINFO 1
>> +
>> +static struct {
>> + unsigned int gpa_width;
>> + unsigned long attributes;
>> +} td_info __ro_after_init;
>
> Where is that thing even used? I don't see it in the whole patchset.

It is used in a different patch set. If you prefer, I can move it to that
patch set.

patch: https://lore.kernel.org/patchwork/patch/1472343/
series: https://lore.kernel.org/patchwork/project/lkml/list/?series=510836


>
>> +
>> /*
>> * Wrapper for standard use of __tdx_hypercall with BUG_ON() check
>> * for TDCALL error.
>> @@ -54,6 +62,19 @@ bool tdx_prot_guest_has(unsigned long flag)
>> }
>> EXPORT_SYMBOL_GPL(tdx_prot_guest_has);
>>
>> +static void tdg_get_info(void)
>
> Also, what Sean said: "tdx_" please. Unless there's a real reason to
> have a different prefix - then state that reason.
>
>> +{
>> + u64 ret;
>> + struct tdx_module_output out = {0};
>
> The tip-tree preferred ordering of variable declarations at the
> beginning of a function is reverse fir tree order::
>
> struct long_struct_name *descriptive_name;
> unsigned long foo, bar;
> unsigned int tmp;
> int ret;
>
> The above is faster to parse than the reverse ordering::
>
> int ret;
> unsigned int tmp;
> unsigned long foo, bar;
> struct long_struct_name *descriptive_name;
>
> And even more so than random ordering::
>
> unsigned long foo, bar;
> int ret;
> struct long_struct_name *descriptive_name;
> unsigned int tmp;

I will re-check the TDX patchset and fix the ordering issues.

>
>> +
>> + ret = __tdx_module_call(TDINFO, 0, 0, 0, 0, &out);
>> +
>> + BUG_ON(ret);
>
> WARNING: Avoid crashing the kernel - try using WARN_ON & recovery code rather than BUG() or BUG_ON()
> #121: FILE: arch/x86/kernel/tdx.c:72:
> + BUG_ON(ret);

I have already fixed the reasonable checkpatch issues. For this case, we
want to use BUG_ON(): a failure in __tdx_module_call() means a buggy TDX
module, so it is safer to crash the kernel.

>
> Have I already told you about checkpatch?
>
> If not, here it is:
>
> Please integrate scripts/checkpatch.pl into your patch creation
> workflow. Some of the warnings/errors *actually* make sense.
>

--
Sathyanarayanan Kuppuswamy
Linux Kernel Developer

2021-08-20 17:37:02

by Borislav Petkov

Subject: Re: [PATCH v5 06/12] x86/tdx: Get TD execution environment information via TDINFO

On Fri, Aug 20, 2021 at 10:31:10AM -0700, Kuppuswamy, Sathyanarayanan wrote:
> It is used in different patch set. If you prefer to move it there, I can
> move it to that patch set.

Yes please.

> I have already fixed reasonable check-patch issues. For this case, we
> want to use BUG_ON(). Failure in tdx_module_call means buggy TDX
> module. So it is safer to crash the kernel.

Ok, put that as a comment above it to explain why it cannot continue.
Also, make sure you issue an error message before it explodes so that
the user knows.

Thx.

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

Subject: Re: [PATCH v5 06/12] x86/tdx: Get TD execution environment information via TDINFO



On 8/20/21 10:35 AM, Borislav Petkov wrote:
> Ok, put that as a comment above it to explain why it cannot continue.
> Also, make sure you issue an error message before it explodes so that
> the user knows.

Ok. I will fix this in next version.

--
Sathyanarayanan Kuppuswamy
Linux Kernel Developer

2021-08-20 18:59:59

by Andi Kleen

Subject: Re: [PATCH v5 06/12] x86/tdx: Get TD execution environment information via TDINFO


On 8/20/2021 11:29 AM, Kuppuswamy, Sathyanarayanan wrote:
>
>
> On 8/20/21 10:35 AM, Borislav Petkov wrote:
>> Ok, put that as a comment above it to explain why it cannot continue.
>> Also, make sure you issue an error message before it explodes so that
>> the user knows.
>
> Ok. I will fix this in next version.


Without working TDCALLs the error message won't appear anywhere. The
only practical way to debug such a problem is a kernel debugger.

Also printing an error message might end up recursing because the
console write would trigger TDCALL again, or eventually stop because the
console lock is already taken. In any case it won't work.


-Andi

Subject: Re: [PATCH v5 06/12] x86/tdx: Get TD execution environment information via TDINFO



On 8/20/21 11:58 AM, Andi Kleen wrote:
>
>
> Without working TDCALLs the error message won't appear anywhere. The only practical way to debug
> such a problem is a kernel debugger.
>
> Also printing an error message might end up recursing because the console write would trigger TDCALL
> again, or eventually stop because the console lock is already taken. In any case it won't work.

Yes, good point. In this case, adding a debug print will not work, but I
can add more comments to the code.
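
For illustration, the additional commenting might look like the sketch below
(the GPA-width mask is an assumption about the GHCI layout, not something
quoted in this thread):

static void tdg_get_info(void)
{
        struct tdx_module_output out = {0};
        u64 ret;

        ret = __tdx_module_call(TDINFO, 0, 0, 0, 0, &out);

        /*
         * A non-zero return code means the TDX module itself is broken.
         * There is no sane way to continue, and an error message cannot be
         * relied on either, because console output would itself require
         * working TDCALLs. Terminate the guest instead.
         */
        BUG_ON(ret);

        td_info.gpa_width = out.rcx & GENMASK(5, 0);    /* mask is assumed */
        td_info.attributes = out.rdx;
}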

--
Sathyanarayanan Kuppuswamy
Linux Kernel Developer

2021-08-24 10:19:43

by Borislav Petkov

Subject: Re: [PATCH v5 07/12] x86/traps: Add #VE support for TDX guest

On Wed, Aug 04, 2021 at 11:13:24AM -0700, Kuppuswamy Sathyanarayanan wrote:
> If a guest kernel action which would normally cause a #VE occurs in the
> interrupt-disabled region before TDGETVEINFO, a #DF is delivered to the
> guest which will result in an oops (and should eventually be a panic, as
> we would like to set panic_on_oops to 1 for TDX guests).

Who's "we"?

Please use passive voice in your commit message and comments: no "we"
or "I", etc. Personal pronouns are ambiguous in text, especially with
so many parties/companies/etc developing the kernel so let's avoid them.

Audit all your patchsets pls.

> Add basic infrastructure to handle any #VE which occurs in the kernel or
> userspace.  Later patches will add handling for specific #VE scenarios.
>
> Convert unhandled #VE's (everything, until later in this series) so that
> they appear just like a #GP by calling ve_raise_fault() directly.
> ve_raise_fault() is similar to #GP handler and is responsible for
> sending SIGSEGV to userspace and cpu die and notifying debuggers and

In all your text:

s/cpu/CPU/g

Audit all your patchsets pls.

> @@ -53,6 +67,11 @@ u64 __tdx_module_call(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9,
> u64 __tdx_hypercall(u64 type, u64 fn, u64 r12, u64 r13, u64 r14,
> u64 r15, struct tdx_hypercall_output *out);
>
> +unsigned long tdg_get_ve_info(struct ve_info *ve);
> +
> +int tdg_handle_virtualization_exception(struct pt_regs *regs,

There's that "tdg" prefix again. Please fix all your patchsets.

> + struct ve_info *ve);
> +
> #else
>
> static inline void tdx_early_init(void) { };
> diff --git a/arch/x86/kernel/idt.c b/arch/x86/kernel/idt.c
> index df0fa695bb09..a5eaae8e6c44 100644
> --- a/arch/x86/kernel/idt.c
> +++ b/arch/x86/kernel/idt.c
> @@ -68,6 +68,9 @@ static const __initconst struct idt_data early_idts[] = {
> */
> INTG(X86_TRAP_PF, asm_exc_page_fault),
> #endif
> +#ifdef CONFIG_INTEL_TDX_GUEST
> + INTG(X86_TRAP_VE, asm_exc_virtualization_exception),
> +#endif
> };
>
> /*
> @@ -91,6 +94,9 @@ static const __initconst struct idt_data def_idts[] = {
> INTG(X86_TRAP_MF, asm_exc_coprocessor_error),
> INTG(X86_TRAP_AC, asm_exc_alignment_check),
> INTG(X86_TRAP_XF, asm_exc_simd_coprocessor_error),
> +#ifdef CONFIG_INTEL_TDX_GUEST
> + INTG(X86_TRAP_VE, asm_exc_virtualization_exception),
> +#endif
>
> #ifdef CONFIG_X86_32
> TSKG(X86_TRAP_DF, GDT_ENTRY_DOUBLEFAULT_TSS),
> diff --git a/arch/x86/kernel/tdx.c b/arch/x86/kernel/tdx.c
> index 3973e81751ba..6169f9c740b2 100644
> --- a/arch/x86/kernel/tdx.c
> +++ b/arch/x86/kernel/tdx.c
> @@ -10,6 +10,7 @@
>
> /* TDX Module call Leaf IDs */
> #define TDINFO 1
> +#define TDGETVEINFO 3
>
> static struct {
> unsigned int gpa_width;
> @@ -75,6 +76,41 @@ static void tdg_get_info(void)
> td_info.attributes = out.rdx;
> }
>
> +unsigned long tdg_get_ve_info(struct ve_info *ve)
> +{
> + u64 ret;
> + struct tdx_module_output out = {0};

The tip-tree preferred ordering of variable declarations at the
beginning of a function is reverse fir tree order::

struct long_struct_name *descriptive_name;
unsigned long foo, bar;
unsigned int tmp;
int ret;

The above is faster to parse than the reverse ordering::

int ret;
unsigned int tmp;
unsigned long foo, bar;
struct long_struct_name *descriptive_name;

And even more so than random ordering::

unsigned long foo, bar;
int ret;
struct long_struct_name *descriptive_name;
unsigned int tmp;

> +
> + /*
> + * NMIs and machine checks are suppressed. Before this point any
> + * #VE is fatal. After this point (TDGETVEINFO call), NMIs and
> + * additional #VEs are permitted (but we don't expect them to
> + * happen unless you panic).
> + */
> + ret = __tdx_module_call(TDGETVEINFO, 0, 0, 0, 0, &out);
> +
> + ve->exit_reason = out.rcx;
> + ve->exit_qual = out.rdx;
> + ve->gla = out.r8;
> + ve->gpa = out.r9;
> + ve->instr_len = out.r10 & UINT_MAX;
> + ve->instr_info = out.r10 >> 32;
> +
> + return ret;
> +}
> +
> +int tdg_handle_virtualization_exception(struct pt_regs *regs,
> + struct ve_info *ve)
> +{
> + /*
> + * TODO: Add handler support for various #VE exit
> + * reasons. It will be added by other patches in
> + * the series.
> + */

That comment needs to go.

> + pr_warn("Unexpected #VE: %lld\n", ve->exit_reason);
> + return -EFAULT;
> +}
> +
> void __init tdx_early_init(void)
> {
> if (!cpuid_has_tdx_guest())
> diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
> index a58800973aed..be56f0281cb5 100644
> --- a/arch/x86/kernel/traps.c
> +++ b/arch/x86/kernel/traps.c
> @@ -61,6 +61,7 @@
> #include <asm/insn.h>
> #include <asm/insn-eval.h>
> #include <asm/vdso.h>
> +#include <asm/tdx.h>
>
> #ifdef CONFIG_X86_64
> #include <asm/x86_init.h>
> @@ -1140,6 +1141,74 @@ DEFINE_IDTENTRY(exc_device_not_available)
> }
> }
>
> +#ifdef CONFIG_INTEL_TDX_GUEST
> +#define VEFSTR "VE fault"
> +static void ve_raise_fault(struct pt_regs *regs, long error_code)
> +{
> + struct task_struct *tsk = current;
> +
> + if (user_mode(regs)) {
> + tsk->thread.error_code = error_code;
> + tsk->thread.trap_nr = X86_TRAP_VE;
> +
> + /*
> + * Not fixing up VDSO exceptions similar to #GP handler
> + * because we don't expect the VDSO to trigger #VE.
> + */
> + show_signal(tsk, SIGSEGV, "", VEFSTR, regs, error_code);
> + force_sig(SIGSEGV);
> + return;
> + }
> +
> + if (fixup_exception(regs, X86_TRAP_VE, error_code, 0))

There are exception table entries which can trigger a #VE?

> + return;
> +
> + tsk->thread.error_code = error_code;
> + tsk->thread.trap_nr = X86_TRAP_VE;
> +
> + /*
> + * To be potentially processing a kprobe fault and to trust the result
> + * from kprobe_running(), we have to be non-preemptible.
> + */
> + if (!preemptible() &&
> + kprobe_running() &&
> + kprobe_fault_handler(regs, X86_TRAP_VE))
> + return;
> +
> + notify_die(DIE_GPF, VEFSTR, regs, error_code, X86_TRAP_VE, SIGSEGV);

Other handlers check that retval.

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2021-08-24 16:11:46

by Borislav Petkov

Subject: Re: [PATCH v5 08/12] x86/tdx: Add HLT support for TDX guest

On Wed, Aug 04, 2021 at 11:13:25AM -0700, Kuppuswamy Sathyanarayanan wrote:
> @@ -240,6 +243,32 @@ SYM_FUNC_START(__tdx_hypercall)
>
> movl $TDVMCALL_EXPOSE_REGS_MASK, %ecx
>
> + /*
> + * For the idle loop STI needs to be called directly before
> + * the TDCALL that enters idle (EXIT_REASON_HLT case). STI
> + * enables interrupts only one instruction later. If there
> + * are any instructions between the STI and the TDCALL for
> + * HLT then an interrupt could happen in that time, but the
> + * code would go back to sleep afterwards, which can cause
> + * longer delays.

<-- newline in the comment here for better readability.

> There leads to significant difference in

"There leads..." ?

> + * network performance benchmarks. So add a special case for
> + * EXIT_REASON_HLT to trigger sti before TDCALL. But this
> + * change is not required for all HLT cases. So use R15
> + * register value to identify the case which needs sti. So,

s/sti/STI/g

> + * if R11 is EXIT_REASON_HLT and R15 is 1, then call sti
> + * before TDCALL instruction. Note that R15 register is not
> + * required by TDCALL ABI when triggering the hypercall for
> + * EXIT_REASON_HLT case. So use it in software to select the
> + * sti case.
> + */
> + cmpl $EXIT_REASON_HLT, %r11d
> + jne skip_sti
> + cmpl $1, %r15d
> + jne skip_sti
> + /* Set R15 register to 0, it is unused in EXIT_REASON_HLT case */
> + xor %r15, %r15
> + sti
> +skip_sti:
> tdcall

...

> +static __cpuidle void tdg_safe_halt(void)
> +{
> + u64 ret;
> +
> + /*
> + * Enable interrupts next to the TDVMCALL to avoid
> + * performance degradation.

That comment needs some more love to say exactly what the problem is.

> + */
> + local_irq_enable();
> +
> + /* IRQ is enabled, So set R12 as 0 */
> + ret = _tdx_hypercall(EXIT_REASON_HLT, 0, 0, 0, 1, NULL);
> +
> + /* It should never fail */
> + BUG_ON(ret);
> +}

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2021-08-24 16:36:10

by Borislav Petkov

Subject: Re: [PATCH v5 09/12] x86/tdx: Wire up KVM hypercalls

On Wed, Aug 04, 2021 at 11:13:26AM -0700, Kuppuswamy Sathyanarayanan wrote:
> From: "Kirill A. Shutemov" <[email protected]>
>
> KVM hypercalls use the "vmcall" or "vmmcall" instructions.

Write instruction mnemonics in all caps pls.

> +# This option enables KVM specific hypercalls in TDX guest.
> +config INTEL_TDX_GUEST_KVM

What is that config option really for? IOW, can't you use
CONFIG_KVM_GUEST instead?

> + def_bool y
> + depends on KVM_GUEST && INTEL_TDX_GUEST
> +
> endif #HYPERVISOR_GUEST
>
> source "arch/x86/Kconfig.cpu"
> diff --git a/arch/x86/include/asm/asm-prototypes.h b/arch/x86/include/asm/asm-prototypes.h
> index 4cb726c71ed8..9855a9ff2924 100644
> --- a/arch/x86/include/asm/asm-prototypes.h
> +++ b/arch/x86/include/asm/asm-prototypes.h
> @@ -17,6 +17,10 @@
> extern void cmpxchg8b_emu(void);
> #endif
>
> +#ifdef CONFIG_INTEL_TDX_GUEST
> +#include <asm/tdx.h>
> +#endif

What "ASM sysmbol generation issue" forced this?

...

> diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
> index 846fe58f0426..8fa33e2c98db 100644
> --- a/arch/x86/include/asm/tdx.h
> +++ b/arch/x86/include/asm/tdx.h
> @@ -6,8 +6,9 @@
> #include <linux/cpufeature.h>
> #include <linux/types.h>
>
> -#define TDX_CPUID_LEAF_ID 0x21
> -#define TDX_HYPERCALL_STANDARD 0
> +#define TDX_CPUID_LEAF_ID 0x21
> +#define TDX_HYPERCALL_STANDARD 0
> +#define TDX_HYPERCALL_VENDOR_KVM 0x4d564b2e584454

"TDX.KVM"

Yeah, you can put it in a comment so that people don't have to do the
CTRL-V game in vim insert mode, i.e., ":help i_CTRL-V_digit" :-)

> /*
> * Used in __tdx_module_call() helper function to gather the
> @@ -80,4 +81,29 @@ static inline bool tdx_prot_guest_has(unsigned long flag) { return false; }
>
> #endif /* CONFIG_INTEL_TDX_GUEST */
>
> +#ifdef CONFIG_INTEL_TDX_GUEST_KVM

I don't think that even needs the ifdeffery. If it is not used, the
inline will simply get discarded so why bother?

> +
> +static inline long tdx_kvm_hypercall(unsigned int nr, unsigned long p1,
> + unsigned long p2, unsigned long p3,
> + unsigned long p4)
> +{
> + struct tdx_hypercall_output out;
> + u64 err;
> +
> + err = __tdx_hypercall(TDX_HYPERCALL_VENDOR_KVM, nr, p1, p2,
> + p3, p4, &out);
> +
> + BUG_ON(err);
> +
> + return out.r10;
> +}
> +#else
> +static inline long tdx_kvm_hypercall(unsigned int nr, unsigned long p1,
> + unsigned long p2, unsigned long p3,
> + unsigned long p4)
> +{
> + return -ENODEV;
> +}
> +#endif /* CONFIG_INTEL_TDX_GUEST_KVM */
> +
> #endif /* _ASM_X86_TDX_H */
> diff --git a/arch/x86/kernel/tdcall.S b/arch/x86/kernel/tdcall.S
> index 9df94f87465d..1823bac4542d 100644
> --- a/arch/x86/kernel/tdcall.S
> +++ b/arch/x86/kernel/tdcall.S
> @@ -3,6 +3,7 @@
> #include <asm/asm.h>
> #include <asm/frame.h>
> #include <asm/unwind_hints.h>
> +#include <asm/export.h>
>
> #include <linux/linkage.h>
> #include <linux/bits.h>
> @@ -309,3 +310,4 @@ skip_sti:
>
> retq
> SYM_FUNC_END(__tdx_hypercall)
> +EXPORT_SYMBOL(__tdx_hypercall);

EXPORT_SYMBOL_GPL, of course.

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2021-08-24 16:58:47

by Borislav Petkov

Subject: Re: [PATCH v5 10/12] x86/tdx: Add MSR support for TDX guest

On Wed, Aug 04, 2021 at 11:13:27AM -0700, Kuppuswamy Sathyanarayanan wrote:
> int tdg_handle_virtualization_exception(struct pt_regs *regs,
> struct ve_info *ve)
> {
> + unsigned long val;
> + int ret = 0;
> +
> switch (ve->exit_reason) {
> case EXIT_REASON_HLT:
> tdg_halt();
> break;
> + case EXIT_REASON_MSR_READ:
> + val = tdg_read_msr_safe(regs->cx, (unsigned int *)&ret);
> + if (!ret) {
> + regs->ax = val & UINT_MAX;

regs->ax = (u32)val;

> + regs->dx = val >> 32;
> + }
> + break;
> + case EXIT_REASON_MSR_WRITE:
> + ret = tdg_write_msr_safe(regs->cx, regs->ax, regs->dx);
> + break;
> default:
> pr_warn("Unexpected #VE: %lld\n", ve->exit_reason);
> return -EFAULT;
> }
>
> /* After successful #VE handling, move the IP */
> - regs->ip += ve->instr_len;
> + if (!ret)
> + regs->ip += ve->instr_len;
>
> - return 0;
> + return ret;
> }
>
> void __init tdx_early_init(void)
> --

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette
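
As an aside, the register split suggested above could also be kept out of the
switch statement with a tiny helper; the helper name here is purely
illustrative and not part of the series:

/* Store a 64-bit MSR value the way RDMSR would: low half in AX, high in DX. */
static void tdx_msr_to_regs(struct pt_regs *regs, u64 val)
{
        regs->ax = (u32)val;
        regs->dx = val >> 32;
}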

2021-08-24 17:33:16

by Andi Kleen

Subject: Re: [PATCH v5 08/12] x86/tdx: Add HLT support for TDX guest


>>> + */
>>> + local_irq_enable();
> ...because this is broken. It's also disturbing because it suggests that these
> patches are not being tested.

This is already fixed in the github tree. Yes, it took some time for the
fix to trickle out.


-Andi


Subject: Re: [PATCH v5 07/12] x86/traps: Add #VE support for TDX guest



On 8/24/21 3:17 AM, Borislav Petkov wrote:
> On Wed, Aug 04, 2021 at 11:13:24AM -0700, Kuppuswamy Sathyanarayanan wrote:
>> If a guest kernel action which would normally cause a #VE occurs in the
>> interrupt-disabled region before TDGETVEINFO, a #DF is delivered to the
>> guest which will result in an oops (and should eventually be a panic, as
>> we would like to set panic_on_oops to 1 for TDX guests).
>
> Who's "we"?
>
> Please use passive voice in your commit message and comments: no "we"
> or "I", etc. Personal pronouns are ambiguous in text, especially with
> so many parties/companies/etc developing the kernel so let's avoid them.
>
> Audit all your patchsets pls.

Sorry. I will fix this in next version.

>
>> Add basic infrastructure to handle any #VE which occurs in the kernel or
>> userspace.  Later patches will add handling for specific #VE scenarios.
>>
>> Convert unhandled #VE's (everything, until later in this series) so that
>> they appear just like a #GP by calling ve_raise_fault() directly.
>> ve_raise_fault() is similar to #GP handler and is responsible for
>> sending SIGSEGV to userspace and cpu die and notifying debuggers and
>
> In all your text:
>
> s/cpu/CPU/g
>
> Audit all your patchsets pls.

Yes. I will fix this in next version.

>
>> @@ -53,6 +67,11 @@ u64 __tdx_module_call(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9,
>> u64 __tdx_hypercall(u64 type, u64 fn, u64 r12, u64 r13, u64 r14,
>> u64 r15, struct tdx_hypercall_output *out);
>>
>> +unsigned long tdg_get_ve_info(struct ve_info *ve);
>> +
>> +int tdg_handle_virtualization_exception(struct pt_regs *regs,
>
> There's that "tdg" prefix again. Please fix all your patchsets.

I mainly chose it to avoid future name conflicts with KVM (tdx) calls. But
if you don't like "tdg", I can change it back to "tdx" and resolve any
naming issues when they occur.


>> static struct {
>> unsigned int gpa_width;
>> @@ -75,6 +76,41 @@ static void tdg_get_info(void)
>> td_info.attributes = out.rdx;
>> }
>>
>> +unsigned long tdg_get_ve_info(struct ve_info *ve)
>> +{
>> + u64 ret;
>> + struct tdx_module_output out = {0};
>
> The tip-tree preferred ordering of variable declarations at the
> beginning of a function is reverse fir tree order::
>
> struct long_struct_name *descriptive_name;
> unsigned long foo, bar;
> unsigned int tmp;
> int ret;
>
> The above is faster to parse than the reverse ordering::
>
> int ret;
> unsigned int tmp;
> unsigned long foo, bar;
> struct long_struct_name *descriptive_name;
>
> And even more so than random ordering::
>
> unsigned long foo, bar;
> int ret;
> struct long_struct_name *descriptive_name;
> unsigned int tmp;

Yes. I will fix this in next version.


>> +int tdg_handle_virtualization_exception(struct pt_regs *regs,
>> + struct ve_info *ve)
>> +{
>> + /*
>> + * TODO: Add handler support for various #VE exit
>> + * reasons. It will be added by other patches in
>> + * the series.
>> + */
>
> That comment needs to go.

Ok. I will remove it.

>> +#ifdef CONFIG_INTEL_TDX_GUEST
>> +#define VEFSTR "VE fault"
>> +static void ve_raise_fault(struct pt_regs *regs, long error_code)
>> +{
>> + struct task_struct *tsk = current;
>> +
>> + if (user_mode(regs)) {
>> + tsk->thread.error_code = error_code;
>> + tsk->thread.trap_nr = X86_TRAP_VE;
>> +
>> + /*
>> + * Not fixing up VDSO exceptions similar to #GP handler
>> + * because we don't expect the VDSO to trigger #VE.
>> + */
>> + show_signal(tsk, SIGSEGV, "", VEFSTR, regs, error_code);
>> + force_sig(SIGSEGV);
>> + return;
>> + }
>> +
>> + if (fixup_exception(regs, X86_TRAP_VE, error_code, 0))
>
> There are exception table entries which can trigger a #VE?

It is required to handle #VE exceptions raised by unhandled MSR
read/writes.
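
A minimal sketch of the mechanism being referred to, assuming the usual *_safe
MSR accessors with their exception table entries (the caller and the MSR chosen
here are illustrative only):

static void example_msr_probe(void)
{
        u64 val;

        /*
         * rdmsrl_safe() carries an exception table entry. If the MSR access
         * is refused, the #VE handler returns an error, ve_raise_fault()
         * runs, and fixup_exception() lands on that entry, so the read
         * simply reports failure instead of killing the kernel.
         */
        if (rdmsrl_safe(MSR_IA32_UCODE_REV, &val))
                pr_info("MSR read was refused\n");
}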

>
>> + return;
>> +
>> + tsk->thread.error_code = error_code;
>> + tsk->thread.trap_nr = X86_TRAP_VE;
>> +
>> + /*
>> + * To be potentially processing a kprobe fault and to trust the result
>> + * from kprobe_running(), we have to be non-preemptible.
>> + */
>> + if (!preemptible() &&
>> + kprobe_running() &&
>> + kprobe_fault_handler(regs, X86_TRAP_VE))
>> + return;
>> +
>> + notify_die(DIE_GPF, VEFSTR, regs, error_code, X86_TRAP_VE, SIGSEGV);
>
> Other handlers check that retval.

Ok. I can check it. But there is only one statement after this call. So it
may not be very helpful.

>

--
Sathyanarayanan Kuppuswamy
Linux Kernel Developer

Subject: Re: [PATCH v5 08/12] x86/tdx: Add HLT support for TDX guest



On 8/24/21 9:10 AM, Borislav Petkov wrote:
> On Wed, Aug 04, 2021 at 11:13:25AM -0700, Kuppuswamy Sathyanarayanan wrote:
>> @@ -240,6 +243,32 @@ SYM_FUNC_START(__tdx_hypercall)
>>
>> movl $TDVMCALL_EXPOSE_REGS_MASK, %ecx
>>
>> + /*
>> + * For the idle loop STI needs to be called directly before
>> + * the TDCALL that enters idle (EXIT_REASON_HLT case). STI
>> + * enables interrupts only one instruction later. If there
>> + * are any instructions between the STI and the TDCALL for
>> + * HLT then an interrupt could happen in that time, but the
>> + * code would go back to sleep afterwards, which can cause
>> + * longer delays.
>
> <-- newline in the comment here for better readability.

Ok. I will add it in next version.

>
>> There leads to significant difference in
>
> "There leads..." ?

Will fix this in next version. "This leads"

>
>> + * network performance benchmarks. So add a special case for
>> + * EXIT_REASON_HLT to trigger sti before TDCALL. But this
>> + * change is not required for all HLT cases. So use R15
>> + * register value to identify the case which needs sti. So,
>
> s/sti/STI/g

Will fix this in next version.

>
>> + * if R11 is EXIT_REASON_HLT and R15 is 1, then call sti
>> + * before TDCALL instruction. Note that R15 register is not
>> + * required by TDCALL ABI when triggering the hypercall for
>> + * EXIT_REASON_HLT case. So use it in software to select the
>> + * sti case.
>> + */
>> + cmpl $EXIT_REASON_HLT, %r11d
>> + jne skip_sti
>> + cmpl $1, %r15d
>> + jne skip_sti
>> + /* Set R15 register to 0, it is unused in EXIT_REASON_HLT case */
>> + xor %r15, %r15
>> + sti
>> +skip_sti:
>> tdcall
>
> ...
>
>> +static __cpuidle void tdg_safe_halt(void)
>> +{
>> + u64 ret;
>> +
>> + /*
>> + * Enable interrupts next to the TDVMCALL to avoid
>> + * performance degradation.
>
> That comment needs some more love to say exactly what the problem is.

It is a bug in this submission. After adding the STI fix, this local_irq_enable()
should have been removed; somehow I missed doing it. I will fix this
in the next version.
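
A sketch of what the corrected helper could look like with the stray
local_irq_enable() dropped, relying solely on the STI logic in
__tdx_hypercall() (the local variable names are illustrative; the R12
semantics follow the comment in the posted patch):

static __cpuidle void tdg_safe_halt(void)
{
        const u64 irq_disabled = 0;     /* R12 == 0: IRQs will be on when HLT runs */
        const u64 do_sti = 1;           /* R15 == 1: STI immediately before TDCALL */
        u64 ret;

        /*
         * No local_irq_enable() here: interrupts must stay off so that the
         * STI executed inside __tdx_hypercall() keeps any wake event in the
         * STI shadow until the HLT TDCALL has been issued.
         */
        ret = _tdx_hypercall(EXIT_REASON_HLT, irq_disabled, 0, 0, do_sti, NULL);

        /* The hypercall itself should never fail. */
        BUG_ON(ret);
}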

>
>> + */
>> + local_irq_enable();
>> +
>> + /* IRQ is enabled, So set R12 as 0 */
>> + ret = _tdx_hypercall(EXIT_REASON_HLT, 0, 0, 0, 1, NULL);
>> +
>> + /* It should never fail */
>> + BUG_ON(ret);
>> +}
>

--
Sathyanarayanan Kuppuswamy
Linux Kernel Developer

2021-08-24 17:48:59

by Dave Hansen

Subject: Re: [PATCH v5 07/12] x86/traps: Add #VE support for TDX guest

On 8/24/21 10:32 AM, Kuppuswamy, Sathyanarayanan wrote:
>>> +    if (fixup_exception(regs, X86_TRAP_VE, error_code, 0))
>>
>> There are exception table entries which can trigger a #VE?
>
> It is required to handle #VE exceptions raised by unhandled MSR
> read/writes.

Is that really the *complete* set of reasons that a #VE can be triggered
from the kernel?

Just off the top of my head, I could imagine the kernel doing a
copy_{to,from}_user() which touched user-mapped MMIO causing a #VE.

2021-08-24 17:50:24

by Sean Christopherson

Subject: Re: [PATCH v5 08/12] x86/tdx: Add HLT support for TDX guest

On Tue, Aug 24, 2021, Borislav Petkov wrote:
> On Wed, Aug 04, 2021 at 11:13:25AM -0700, Kuppuswamy Sathyanarayanan wrote:
> > +static __cpuidle void tdg_safe_halt(void)
> > +{
> > + u64 ret;
> > +
> > + /*
> > + * Enable interrupts next to the TDVMCALL to avoid
> > + * performance degradation.
>
> That comment needs some more love to say exactly what the problem is.

LOL, I guess hanging the vCPU counts as degraded performance. But this comment
can and should go away entirely...

> > + */
> > + local_irq_enable();

...because this is broken. It's also disturbing because it suggests that these
patches are not being tested.

The STI _must_ immediately precede TDCALL, and it _must_ execute with interrupts
disabled. The whole point of the STI blocking shadow is to ensure interrupts are
blocked until _after_ the HLT completes so that a wake event is not recognized
before the HLT, in which case the vCPU will get stuck in HLT because its wake
event already fired. Enabling IRQs well before the TDCALL defeats the purpose of
the STI dance in __tdx_hypercall().

There's even a massive comment in __tdx_hypercall() explaining all this...

> > +
> > + /* IRQ is enabled, So set R12 as 0 */

It would be helpful to use local variables to document what's up, e.g.

const bool irqs_enabled = true;
const bool do_sti = true;

ret = _tdx_hypercall(EXIT_REASON_HLT, irqs_enabled, 0, 0, do_sti, NULL);

> > + ret = _tdx_hypercall(EXIT_REASON_HLT, 0, 0, 0, 1, NULL);
> > +
> > + /* It should never fail */
> > + BUG_ON(ret);
> > +}
>
> --
> Regards/Gruss,
> Boris.
>
> https://people.kernel.org/tglx/notes-about-netiquette

2021-08-24 17:50:33

by Borislav Petkov

Subject: Re: [PATCH v5 08/12] x86/tdx: Add HLT support for TDX guest

On Tue, Aug 24, 2021 at 10:35:33AM -0700, Kuppuswamy, Sathyanarayanan wrote:
> It is a bug in this submission. After adding the STI fix, this local_irq_enable()
> should have been removed; somehow I missed doing it. I will fix this
> in the next version.

Thanks, and also do this pls:

"I think in the next version all those _tdx_hypercall() wrappers should
spell it out what the parameters they pass are used for."

See Sean's mail for more info.

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2021-08-24 17:51:07

by Borislav Petkov

Subject: Re: [PATCH v5 07/12] x86/traps: Add #VE support for TDX guest

On Tue, Aug 24, 2021 at 10:32:13AM -0700, Kuppuswamy, Sathyanarayanan wrote:
> I mainly chose it to avoid future name conflicts with KVM (tdx) calls. But

What name conflicts with KVM calls? Please explain.

> It is required to handle #VE exceptions raised by unhandled MSR
> read/writes.

Example? Please elaborate.

> Ok. I can check it. But there is only one statement after this call.
> So it may not be very helpful.

Looking at die_addr(), that calls the die notifier too. So do you
even *have* to call it here with VEFSTR? As you say, there's only one
statement after that call and the box is dead in the water after that, so why
even bother...

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2021-08-24 17:56:29

by Borislav Petkov

Subject: Re: [PATCH v5 08/12] x86/tdx: Add HLT support for TDX guest

On Tue, Aug 24, 2021 at 05:06:21PM +0000, Sean Christopherson wrote:
> On Tue, Aug 24, 2021, Borislav Petkov wrote:
> > On Wed, Aug 04, 2021 at 11:13:25AM -0700, Kuppuswamy Sathyanarayanan wrote:
> > > +static __cpuidle void tdg_safe_halt(void)
> > > +{
> > > + u64 ret;
> > > +
> > > + /*
> > > + * Enable interrupts next to the TDVMCALL to avoid
> > > + * performance degradation.
> >
> > That comment needs some more love to say exactly what the problem is.
>
> LOL, I guess hanging the vCPU counts as degraded performance. But this comment
> can and should go away entirely...
>
> > > + */
> > > + local_irq_enable();
>
> ...because this is broken. It's also disturbing because it suggests that these
> patches are not being tested.

My complaint since '88.

> The STI _must_ immediately precede TDCALL, and it _must_ execute with interrupts
> disabled. The whole point of the STI blocking shadow is to ensure interrupts are
> blocked until _after_ the HLT completes so that a wake event is not recognized
> before the HLT, in which case the vCPU will get stuck in HLT because its wake
> event already fired. Enabling IRQs well before the TDCALL defeats the purpose of
> the STI dance in __tdx_hypercall().

Wait, whaaaat?!

So tdg_halt() does that but tdg_safe_halt() goes to great lengths not to
do it. And it looks all legit and all, like it really wanted to do it
differently. WTF?

> There's even a massive comment in __tdx_hypercall() explaining all this...
>
> > > +
> > > + /* IRQ is enabled, So set R12 as 0 */
>
> It would be helpful to use local variables to document what's up, e.g.
>
> const bool irqs_enabled = true;
> const bool do_sti = true;
>
> ret = _tdx_hypercall(EXIT_REASON_HLT, irqs_enabled, 0, 0, do_sti, NULL);

Wait, is this do_sti thing supposed to be:

* ... But this
* change is not required for all HLT cases. So use R15
* register value to identify the case which needs sti. So,
* if R11 is EXIT_REASON_HLT and R15 is 1, then call sti
* before TDCALL instruction.

?


> > > + ret = _tdx_hypercall(EXIT_REASON_HLT, 0, 0, 0, 1, NULL);
^^^
Yeah, it must be it - the 1 there.

And what's with the irqs_enabled first parameter?

Is that used by the TDX module?

I think in the next version all those _tdx_hypercall() wrappers should
spell it out what the parameters they pass are used for.

Hohumm.

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

2021-08-24 18:02:49

by Borislav Petkov

Subject: Re: [PATCH v5 12/12] x86/tdx: Handle CPUID via #VE

On Wed, Aug 04, 2021 at 11:13:29AM -0700, Kuppuswamy Sathyanarayanan wrote:
> From: "Kirill A. Shutemov" <[email protected]>
>
> TDX has three classes of CPUID leaves: some CPUID leaves
> are always handled by the CPU, others are handled by the TDX module,
> and some others are handled by the VMM. Since the VMM cannot directly
> intercept the instruction these are reflected with a #VE exception
> to the guest, which then converts it into a hypercall to the VMM,
> or handled directly.
>
> The TDX module EAS has a full list of CPUID leaves which are handled

EAS?

> natively or by the TDX module in 16.2. Only unknown CPUIDs are handled by

16.2?

I believe someone forgot to convert that commit message to
outside-Intel speak. Please do so.

> the #VE method. In practice this typically only applies to the
> hypervisor specific CPUIDs unknown to the native CPU.

hypervisor-specific

> Therefore there is no risk of causing this in early CPUID code which
> runs before the #VE handler is set up because it will never access
> those exotic CPUID leaves.
>
> Signed-off-by: Kirill A. Shutemov <[email protected]>
> Reviewed-by: Andi Kleen <[email protected]>
> Reviewed-by: Tony Luck <[email protected]>
> Signed-off-by: Kuppuswamy Sathyanarayanan <[email protected]>
> ---
>
> Changes since v4:
> * None
>
> Changes since v3:
> * None
>
> arch/x86/kernel/tdx.c | 18 ++++++++++++++++++
> 1 file changed, 18 insertions(+)
>
> diff --git a/arch/x86/kernel/tdx.c b/arch/x86/kernel/tdx.c
> index d16c7f8759ea..5d2fd6c8b01c 100644
> --- a/arch/x86/kernel/tdx.c
> +++ b/arch/x86/kernel/tdx.c
> @@ -153,6 +153,21 @@ static int tdg_write_msr_safe(unsigned int msr, unsigned int low,
> return ret ? -EIO : 0;
> }
>
> +static void tdg_handle_cpuid(struct pt_regs *regs)
> +{
> + u64 ret;
> + struct tdx_hypercall_output out = {0};
> +
> + ret = _tdx_hypercall(EXIT_REASON_CPUID, regs->ax, regs->cx, 0, 0, &out);
> +
> + WARN_ON(ret);

Why warn and not forward the error, instead, so that it lands in
ve_raise_fault() ?

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette
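
One way to do that, sketched purely to illustrate the suggestion (the copying
of the CPUID result registers is elided because that part of the patch is not
quoted here):

static int tdg_handle_cpuid(struct pt_regs *regs)
{
        struct tdx_hypercall_output out = {0};
        u64 ret;

        ret = _tdx_hypercall(EXIT_REASON_CPUID, regs->ax, regs->cx, 0, 0, &out);
        if (ret)
                return -EIO;    /* let the #VE dispatcher raise the fault */

        /* ... copy the CPUID results from @out back into @regs ... */

        return 0;
}

The #VE dispatcher would then treat EXIT_REASON_CPUID like the MSR cases and
only advance regs->ip on success.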

2021-08-24 18:03:39

by Sean Christopherson

Subject: Re: [PATCH v5 08/12] x86/tdx: Add HLT support for TDX guest

On Tue, Aug 24, 2021, Borislav Petkov wrote:
> On Tue, Aug 24, 2021 at 05:06:21PM +0000, Sean Christopherson wrote:
> > It would be helpful to use local variables to document what's up, e.g.
> >
> > const bool irqs_enabled = true;
> > const bool do_sti = true;
> >
> > ret = _tdx_hypercall(EXIT_REASON_HLT, irqs_enabled, 0, 0, do_sti, NULL);
>
> Wait, is this do_sti thing supposed to be:
>
> * ... But this
> * change is not required for all HLT cases. So use R15
> * register value to identify the case which needs sti. So,
> * if R11 is EXIT_REASON_HLT and R15 is 1, then call sti
> * before TDCALL instruction.
>
> ?
>
>
> > > > + ret = _tdx_hypercall(EXIT_REASON_HLT, 0, 0, 0, 1, NULL);
> ^^^
> Yeah, it must be it - the 1 there.
>
> And what's with the irqs_enabled first parameter?
>
> Is that used by the TDX module?

It's passed to the (untrusted) VMM. The TDX Module has direct access to the guest's
entire FLAGS via the VMCS.

The VMM uses the "IRQs enabled" param to understand whether or not it should
schedule the halted vCPU if an IRQ becomes pending. E.g. if IRQs are disabled
the VMM can keep the vCPU in virtual HLT, even if an IRQ is pending, without
hanging/breaking the guest.
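
Put differently, the kind of comment that could sit above the HLT wrapper,
based on the explanation above (wording is illustrative):

/*
 * _tdx_hypercall(EXIT_REASON_HLT, irq_disabled, 0, 0, do_sti, out):
 *
 *  - irq_disabled (R12) is passed to the untrusted VMM so it knows whether a
 *    pending interrupt should wake the halted vCPU; with interrupts disabled
 *    the VMM may keep the vCPU in virtual HLT even if an IRQ is pending.
 *
 *  - do_sti (R15) is not part of the TDCALL ABI for EXIT_REASON_HLT; it is a
 *    software-only flag asking __tdx_hypercall() to execute STI immediately
 *    before TDCALL, so the wake event stays blocked by the STI shadow until
 *    the HLT has actually been requested.
 */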

2021-08-24 18:05:49

by Borislav Petkov

Subject: Re: [PATCH v5 08/12] x86/tdx: Add HLT support for TDX guest

On Tue, Aug 24, 2021 at 05:47:25PM +0000, Sean Christopherson wrote:
> It's passed to the (untrusted) VMM. The TDX Module has direct access to the guest's
> entire FLAGS via the VMCS.
>
> The VMM uses the "IRQs enabled" param to understand whether or not it should
> schedule the halted vCPU if an IRQ becomes pending. E.g. if IRQs are disabled
> the VMM can keep the vCPU in virtual HLT, even if an IRQ is pending, without
> hanging/breaking the guest.

See, explanations like that need to be over those _tdx_hypercall()
wrappers. Otherwise it is just random magic.

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

Subject: Re: [PATCH v5 09/12] x86/tdx: Wire up KVM hypercalls



On 8/24/21 9:34 AM, Borislav Petkov wrote:
> On Wed, Aug 04, 2021 at 11:13:26AM -0700, Kuppuswamy Sathyanarayanan wrote:
>> From: "Kirill A. Shutemov" <[email protected]>
>>
>> KVM hypercalls use the "vmcall" or "vmmcall" instructions.
>
> Write instruction mnemonics in all caps pls.

Ok. I will change it in next submission.

>
>> +# This option enables KVM specific hypercalls in TDX guest.
>> +config INTEL_TDX_GUEST_KVM
>
> What is that config option really for? IOW, can't you use
> CONFIG_KVM_GUEST instead?
Since TDX code can be used by other hypervisor (non KVM case) we
want to have a config to differentiate the KVM related calls.

>
>> + def_bool y
>> + depends on KVM_GUEST && INTEL_TDX_GUEST
>> +
>> endif #HYPERVISOR_GUEST
>>
>> source "arch/x86/Kconfig.cpu"
>> diff --git a/arch/x86/include/asm/asm-prototypes.h b/arch/x86/include/asm/asm-prototypes.h
>> index 4cb726c71ed8..9855a9ff2924 100644
>> --- a/arch/x86/include/asm/asm-prototypes.h
>> +++ b/arch/x86/include/asm/asm-prototypes.h
>> @@ -17,6 +17,10 @@
>> extern void cmpxchg8b_emu(void);
>> #endif
>>
>> +#ifdef CONFIG_INTEL_TDX_GUEST
>> +#include <asm/tdx.h>
>> +#endif
>
> What "ASM sysmbol generation issue" forced this?

Compiler raised version generation issue for __tdx_hypercall

>
> ...
>
>> diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
>> index 846fe58f0426..8fa33e2c98db 100644
>> --- a/arch/x86/include/asm/tdx.h
>> +++ b/arch/x86/include/asm/tdx.h
>> @@ -6,8 +6,9 @@
>> #include <linux/cpufeature.h>
>> #include <linux/types.h>
>>
>> -#define TDX_CPUID_LEAF_ID 0x21
>> -#define TDX_HYPERCALL_STANDARD 0
>> +#define TDX_CPUID_LEAF_ID 0x21
>> +#define TDX_HYPERCALL_STANDARD 0
>> +#define TDX_HYPERCALL_VENDOR_KVM 0x4d564b2e584454
>
> "TDX.KVM"
>
> Yeah, you can put it in a comment so that people don't have to do the
> CTRL-V game in vim insert mode, i.e., ":help i_CTRL-V_digit" :-)

Got it. I will add it in a comment.
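
For example, something along these lines (the value is taken from the quoted
hunk; the decoding is the one Boris gives):

#define TDX_HYPERCALL_VENDOR_KVM	0x4d564b2e584454 /* "TDX.KVM", little-endian */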

>
>> /*
>> * Used in __tdx_module_call() helper function to gather the
>> @@ -80,4 +81,29 @@ static inline bool tdx_prot_guest_has(unsigned long flag) { return false; }
>>
>> #endif /* CONFIG_INTEL_TDX_GUEST */
>>
>> +#ifdef CONFIG_INTEL_TDX_GUEST_KVM
>
> I don't think that even needs the ifdeffery. If it is not used, the
> inline will simply get discarded so why bother?

It makes sense. I can remove it.

>
>> +
>> +static inline long tdx_kvm_hypercall(unsigned int nr, unsigned long p1,
>> + unsigned long p2, unsigned long p3,
>> + unsigned long p4)
>> +{
>> + struct tdx_hypercall_output out;
>> + u64 err;
>> +
>> + err = __tdx_hypercall(TDX_HYPERCALL_VENDOR_KVM, nr, p1, p2,
>> + p3, p4, &out);
>> +
>> + BUG_ON(err);
>> +
>> + return out.r10;
>> +}
>> +#else
>> +static inline long tdx_kvm_hypercall(unsigned int nr, unsigned long p1,
>> + unsigned long p2, unsigned long p3,
>> + unsigned long p4)
>> +{
>> + return -ENODEV;
>> +}
>> +#endif /* CONFIG_INTEL_TDX_GUEST_KVM */
>> +
>> #endif /* _ASM_X86_TDX_H */
>> diff --git a/arch/x86/kernel/tdcall.S b/arch/x86/kernel/tdcall.S
>> index 9df94f87465d..1823bac4542d 100644
>> --- a/arch/x86/kernel/tdcall.S
>> +++ b/arch/x86/kernel/tdcall.S
>> @@ -3,6 +3,7 @@
>> #include <asm/asm.h>
>> #include <asm/frame.h>
>> #include <asm/unwind_hints.h>
>> +#include <asm/export.h>
>>
>> #include <linux/linkage.h>
>> #include <linux/bits.h>
>> @@ -309,3 +310,4 @@ skip_sti:
>>
>> retq
>> SYM_FUNC_END(__tdx_hypercall)
>> +EXPORT_SYMBOL(__tdx_hypercall);
>
> EXPORT_SYMBOL_GPL, of course.

Yes. I will fix this in next version.

>

--
Sathyanarayanan Kuppuswamy
Linux Kernel Developer

Subject: Re: [PATCH v5 10/12] x86/tdx: Add MSR support for TDX guest



On 8/24/21 9:55 AM, Borislav Petkov wrote:
>> + regs->ax = val & UINT_MAX;
> regs->ax = (u32)val;
>

Ok. I will use your version in next submission.

--
Sathyanarayanan Kuppuswamy
Linux Kernel Developer

Subject: Re: [PATCH v5 08/12] x86/tdx: Add HLT support for TDX guest



On 8/24/21 10:06 AM, Sean Christopherson wrote:
> On Tue, Aug 24, 2021, Borislav Petkov wrote:
>> On Wed, Aug 04, 2021 at 11:13:25AM -0700, Kuppuswamy Sathyanarayanan wrote:
>>> +static __cpuidle void tdg_safe_halt(void)
>>> +{
>>> + u64 ret;
>>> +
>>> + /*
>>> + * Enable interrupts next to the TDVMCALL to avoid
>>> + * performance degradation.
>>
>> That comment needs some more love to say exactly what the problem is.
>
> LOL, I guess hanging the vCPU counts as degraded performance. But this comment
> can and should go away entirely...
>
>>> + */
>>> + local_irq_enable();
>
> ...because this is broken. It's also disturbing because it suggests that these
> patches are not being tested.

Sorry, somehow we missed this issue before our submission.

We do the usual boot test before submission. Since this issue does not block the
boot process, it did not get caught. But we already found it in full functional
testing and have fixed it in the github tree.

I will remove this in the next submission.

>
> The STI _must_ immediately precede TDCALL, and it _must_ execute with interrupts
> disabled. The whole point of the STI blocking shadow is to ensure interrupts are
> blocked until _after_ the HLT completes so that a wake event is not recognized
> before the HLT, in which case the vCPU will get stuck in HLT because its wake
> event already fired. Enabling IRQs well before the TDCALL defeats the purpose of
> the STI dance in __tdx_hypercall().
>
> There's even a massive comment in __tdx_hypercall() explaining all this...
>
>>> +
>>> + /* IRQ is enabled, So set R12 as 0 */
>
> It would be helpful to use local variables to document what's up, e.g.
>
> const bool irqs_enabled = true;
> const bool do_sti = true;
>
> ret = _tdx_hypercall(EXIT_REASON_HLT, irqs_enabled, 0, 0, do_sti, NULL);

Ok. I can follow your suggestion in next submission.

>
>>> + ret = _tdx_hypercall(EXIT_REASON_HLT, 0, 0, 0, 1, NULL);
>>> +
>>> + /* It should never fail */
>>> + BUG_ON(ret);
>>> +}
>>
>> --
>> Regards/Gruss,
>> Boris.
>>
>> https://people.kernel.org/tglx/notes-about-netiquette

--
Sathyanarayanan Kuppuswamy
Linux Kernel Developer

2021-08-24 18:29:40

by Andi Kleen

[permalink] [raw]
Subject: Re: [PATCH v5 08/12] x86/tdx: Add HLT support for TDX guest


>> It would be helpful to use local variables to document what's up, e.g.
>>
>>       const bool irqs_enabled = true;
>>     const bool do_sti = true;
>>
>>     ret = _tdx_hypercall(EXIT_REASON_HLT, irqs_enabled, 0, 0,
>> do_sti, NULL);
>
> Ok. I can follow your suggestion in next submission.


I would use defines for the argument values; then it's self-documenting.


-Andi
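
A sketch of what that might look like for tdg_safe_halt(), with hypothetical
macro names; the values match the posted call
_tdx_hypercall(EXIT_REASON_HLT, 0, 0, 0, 1, NULL), and the broken
local_irq_enable() is dropped per Sean's comment (STI is issued by
__tdx_hypercall() right before TDCALL when R15 is set):

#define TDX_HLT_R12_IRQS_ENABLED  0  /* R12 == 0: IRQs are enabled at HLT time */
#define TDX_HLT_R15_DO_STI        1  /* R15 == 1: issue STI right before TDCALL */

static __cpuidle void tdg_safe_halt(void)
{
        u64 ret;

        /*
         * Do not enable interrupts here: __tdx_hypercall() executes STI
         * immediately before TDCALL when R15 is set, so a wake event
         * cannot be recognized (and lost) before the HLT is reached.
         */
        ret = _tdx_hypercall(EXIT_REASON_HLT, TDX_HLT_R12_IRQS_ENABLED,
                             0, 0, TDX_HLT_R15_DO_STI, NULL);

        /* HLT should never fail. */
        BUG_ON(ret);
}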


2021-08-24 18:33:07

by Borislav Petkov

[permalink] [raw]
Subject: Re: [PATCH v5 09/12] x86/tdx: Wire up KVM hypercalls

On Tue, Aug 24, 2021 at 11:11:43AM -0700, Kuppuswamy, Sathyanarayanan wrote:
> Since TDX code can be used by other hypervisor (non KVM case) we
> want to have a config to differentiate the KVM related calls.

You need to start explaining yourself better. WTH does "to differentiate
the KVM related calls" even mean? Differentiate for what?!

Our CONFIG space is a huuge mess. Adding another option better be
properly justified.

> Compiler raised version generation issue for __tdx_hypercall

-ENOTENOUGHINFO.

Try again.

> Yes. I will fix this in next version.

And here audit all your patchsets. All exports better be _GPL.

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

Subject: Re: [PATCH v5 09/12] x86/tdx: Wire up KVM hypercalls



On 8/24/21 11:29 AM, Borislav Petkov wrote:
> On Tue, Aug 24, 2021 at 11:11:43AM -0700, Kuppuswamy, Sathyanarayanan wrote:
>> Since TDX code can be used by other hypervisor (non KVM case) we
>> want to have a config to differentiate the KVM related calls.
>
> You need to start explaining yourself better. WTH does "to differentiate
> the KVM related calls" even mean? Differentiate for what?!

The tdx_kvm_hypercall() function and its usage in arch/x86/include/asm/kvm_para.h
are only required for the KVM hypervisor.

static inline long kvm_hypercall0(unsigned int nr)
{
        long ret;
+
+       if (prot_guest_has(PATTR_GUEST_TDX))
+               return tdx_kvm_hypercall(nr, 0, 0, 0, 0);

If the TDX code is compiled for another hypervisor, we need some config option to
disable the above code. CONFIG_INTEL_TDX_GUEST_KVM is added for this purpose.
If you think there is not sufficient reason for it, I can use
defined(CONFIG_KVM_GUEST) && defined(CONFIG_INTEL_TDX_GUEST) to protect
the implementation of tdx_kvm_hypercall(), as in the sketch below.
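
A sketch of that alternative guard, reusing the body of the posted hunk and
adding no new Kconfig symbol (assumes CONFIG_INTEL_TDX_GUEST_KVM is dropped as
Boris suggests):

#if defined(CONFIG_KVM_GUEST) && defined(CONFIG_INTEL_TDX_GUEST)
static inline long tdx_kvm_hypercall(unsigned int nr, unsigned long p1,
                                     unsigned long p2, unsigned long p3,
                                     unsigned long p4)
{
        struct tdx_hypercall_output out;
        u64 err;

        err = __tdx_hypercall(TDX_HYPERCALL_VENDOR_KVM, nr, p1, p2,
                              p3, p4, &out);
        BUG_ON(err);

        return out.r10;
}
#else
static inline long tdx_kvm_hypercall(unsigned int nr, unsigned long p1,
                                     unsigned long p2, unsigned long p3,
                                     unsigned long p4)
{
        return -ENODEV;
}
#endif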


>
> Our CONFIG space is a huuge mess. Adding another option better be
> properly justified.
>
>> Compiler raised version generation issue for __tdx_hypercall
>
> -ENOTENOUGHINFO.

Following is the error info.

WARNING: modpost: EXPORT symbol "__tdx_hypercall" [vmlinux] version generation failed, symbol will
not be versioned.

So to fix the above issue, I added tdx.h to arch/x86/include/asm/asm-prototypes.h.

>
> Try again.
>
>> Yes. I will fix this in next version.
>
> And here audit all your patchsets. All exports better be _GPL.

Ok. I will check it before my next submission.

>

--
Sathyanarayanan Kuppuswamy
Linux Kernel Developer

2021-08-24 19:39:51

by Borislav Petkov

[permalink] [raw]
Subject: Re: [PATCH v5 09/12] x86/tdx: Wire up KVM hypercalls

On Tue, Aug 24, 2021 at 12:11:00PM -0700, Kuppuswamy, Sathyanarayanan wrote:
> If the TDX code is compiled for another hypervisor, we need some config option to
> disable the above code.

Isn't that what CONFIG_KVM_GUEST is for?

Also, if they don't get used anywhere, the compiler will simply discard
them. I still don't see the need for the ifdeffery.

> Following is the error info.
>
> WARNING: modpost: EXPORT symbol "__tdx_hypercall" [vmlinux] version
> generation failed, symbol will not be versioned.
>
> So to fix the above issue, added tdx.h in arch/x86/include/asm/asm-prototypes.h

You need the C-style declaration of __tdx_hypercall, see

334bb7738764 ("x86/kbuild: enable modversions for symbols exported from asm")

and you can do the include without the ifdeffery.

And also state in the commit message why you're including it.

Thx.

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette
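
A sketch of what that could look like, assuming the __tdx_hypercall()
prototype implied by its use in the quoted hunk and that it is declared in
asm/tdx.h:

/* arch/x86/include/asm/tdx.h: the C-style declaration modversions needs. */
u64 __tdx_hypercall(u64 type, u64 fn, u64 r12, u64 r13, u64 r14, u64 r15,
                    struct tdx_hypercall_output *out);

/* arch/x86/include/asm/asm-prototypes.h: include it without any ifdeffery. */
#include <asm/tdx.h>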

Subject: Re: [PATCH v5 08/12] x86/tdx: Add HLT support for TDX guest

Hi Boris,

On 8/24/21 10:27 AM, Borislav Petkov wrote:
> I think in the next version all those _tdx_hypercall() wrappers should
> spell it out what the parameters they pass are used for.
>
> Hohumm.

Regarding details about _tdx_hypercall() wrapper usage, do you want me
to document the ABI details?

For example for MSR read,

static u64 tdx_read_msr_safe(unsigned int msr, int *err)
{
        struct tdx_hypercall_output out = {0};
        u64 ret;

        WARN_ON_ONCE(tdx_is_context_switched_msr(msr));

        /*
         * TDX MSR READ Hypercall ABI:
         *
         * Input Registers:
         *
         * R11(EXIT_REASON_MSR_READ) - hypercall sub function id
         * R12(msr)                  - MSR index
         *
         * Output Registers:
         *
         * R10(out.r10) - Hypercall return error code
         * R11(out.r11) - MSR read value
         * RAX(ret)     - TDCALL error code
         */
        ret = _tdx_hypercall(EXIT_REASON_MSR_READ, msr, 0, 0, 0, &out);

Let me know if you agree with above format?

--
Sathyanarayanan Kuppuswamy
Linux Kernel Developer

2021-09-01 18:49:26

by Borislav Petkov

[permalink] [raw]
Subject: Re: [PATCH v5 08/12] x86/tdx: Add HLT support for TDX guest

On Tue, Aug 31, 2021 at 01:49:54PM -0700, Kuppuswamy, Sathyanarayanan wrote:
> Regarding details about _tdx_hypercall() wrapper usage, do you want me
> to document the ABI details?
>
> For example for MSR read,
>
> static u64 tdx_read_msr_safe(unsigned int msr, int *err)
> {
> struct tdx_hypercall_output out = {0};
> u64 ret;
>
> WARN_ON_ONCE(tdx_is_context_switched_msr(msr));
>
> /*
> * TDX MSR READ Hypercall ABI:
> *
> * Input Registers:
> *
> * R11(EXIT_REASON_MSR_READ) - hypercall sub function id
> * R12(msr) - MSR index
> *
> * Output Registers:
> *
> * R10(out.r10) - Hypercall return error code
> * R11(out.r11) - MSR read value
> * RAX(ret) - TDCALL error code
> */
> ret = _tdx_hypercall(EXIT_REASON_MSR_READ, msr, 0, 0, 0, &out);
>
> Let me know if you agree with above format?

Yes, that's nice, thanks.

Alternatively, you could simply point at the relevant place in the TDX
documentation so that people can go look at it and find out how, for
example, an MSR read is done, ABI-wise.

With all those confidential computing solutions and the amount of specs
out there, one can get lost pretty easily so having doc references
should be very helpful.

Thx.

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette

Subject: Re: [PATCH v5 07/12] x86/traps: Add #VE support for TDX guest



On 8/24/21 10:46 AM, Borislav Petkov wrote:
> On Tue, Aug 24, 2021 at 10:32:13AM -0700, Kuppuswamy, Sathyanarayanan wrote:
>> Mainly chose it avoid future name conflicts with KVM (tdx) calls. But
>
> What name conflicts with KVM calls? Please explain.

Currently there are no name conflicts. But in our initial submissions (RFC v?)
we had some conflicts in functions like tdx_get_tdreport() and
tdx_get_quote().

Since that is no longer true and "tdg" is not a favorite prefix, I will
rename tdg -> tdx in the next submission.

>
>> It is required to handle #VE exceptions raised by unhandled MSR
>> read/writes.
>
> Example? Please elaborate.

If an MSR read/write fails in tdx_handle_virtualization_exception(), it will
return a non-zero value, which in turn will trigger ve_raise_fault().

If we don't call fixup_exception() in that case, it will trigger an oops
and eventually a panic in TDX. For MSR read/write failures we don't want
to panic.

#VE MSR read/write
  -> exc_virtualization_exception()
    -> tdx_handle_virtualization_exception()
      -> tdx_write_msr_safe()
        -> ve_raise_fault()
          -> fixup_exception()

>
>> Ok. I can check it. But there is only one statement after this call.
>> So it may not be very helpful.
>
> Looking at die_addr(), that calls the die notifier too. So do you
> even *have* to call it here with VEFSTR? As yo say, there's only one
> statement after that call and box is dead in the water after that so why
> even bother...

The reason for calling die_addr() is to trigger an oops for failed #VE handling, which
is desirable for TDX. Also, sending the die notification may be useful for debuggers.

This sequence of calls is similar to exc_general_protection().

>

--
Sathyanarayanan Kuppuswamy
Linux Kernel Developer

2021-09-03 11:45:01

by Borislav Petkov

[permalink] [raw]
Subject: Re: [PATCH v5 07/12] x86/traps: Add #VE support for TDX guest

On Thu, Sep 02, 2021 at 08:24:53AM -0700, Kuppuswamy, Sathyanarayanan wrote:
> If MSR read/write failed in tdx_handle_virtualization_exception(), it will
> return non zero return value which in turn will trigger ve_raise_fault().
>
> If we don't call fixup_exception() for such case, it will trigger oops
> and eventually panic in TDX. For MSR read/write failures we don't want
> to panic.
>
> #VE MSR read/write
> -> exc_virtualization_exception()
> -> tdx_handle_virtualization_exception()
> ->tdx_write_msr_safe()
> -> ve_raise_fault
> -> fixup_exception()

Lemme see if I understand this correctly: you're relying on the kernel
exception handling fixup to end up in

ex_handler_{rd,wr}msr_unsafe()

which would warn but succeed so that you return from ve_raise_fault()
before die()ing?

If so, I could use a comment in ve_raise_fault() in case we touch the
fixup exception machinery, like we're currently doing.
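
A sketch of such a comment and the fixup call it would sit above, assuming
ve_raise_fault() follows the exc_general_protection() pattern and that
X86_TRAP_VE is the #VE vector added by this series (placement and names are
illustrative):

        /*
         * Unhandled #VE reasons (e.g. a failed MSR read/write) end up here.
         * For kernel faults, try the exception fixup machinery first: the
         * {rd,wr}msr sites have extable entries, so fixup_exception() routes
         * them to ex_handler_{rd,wr}msr_unsafe(), which warns and continues
         * instead of die()ing.
         */
        if (!user_mode(regs) && fixup_exception(regs, X86_TRAP_VE, error_code, 0))
                return;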

> Reason for calling die_addr() is to trigger oops for failed #VE handling, which
> is desirable for TDX. Also sending die notification may be useful for debuggers.
>
> This sequence of calls are similar to exc_general_protection().

Ok.

--
Regards/Gruss,
Boris.

https://people.kernel.org/tglx/notes-about-netiquette