2022-08-30 22:22:42

by Sagi Shahar

Subject: [RFC PATCH v2 00/17] TDX KVM selftests

Hello,

This is v2 of the patch series for TDX selftests.

It is based on v5.19-rc8 and Intel's V8 of the TDX host patches which
was proposed in https://lkml.org/lkml/2022/8/8/877

The tree can be found at
https://github.com/googleprodkernel/linux-cc/tree/selftests

Major changes from v1:
- rebased to v5.19
- added helpers for success and failure reporting
- added additional test cases

---
TDX stands for Trust Domain Extensions which isolates VMs from the
virtual-machine manager (VMM)/hypervisor and any other software on the
platform.

Intel has recently submitted a set of RFC patches for KVM support for
TDX; more information can be found in the latest TDX support patches:
https://lkml.org/lkml/2022/8/8/877

Due to the nature of the confidential computing environment that TDX
provides, it is very difficult to verify/test the KVM support. TDX
requires UEFI and the guest kernel to be enlightened, both of which are
still under development.

We are working on a set of selftests to close this gap and be able to
verify the KVM functionality that supports the TDX lifecycle and the
GHCI [1] interface.

We are looking for any feedback on:
- Patch series itself
- Any suggestions on how we should approach testing TDX functionality.
Do selftests seem reasonable, or should we switch to using KVM
unit tests? I would be happy to get some perspective on how KVM unit
tests could help us more.
- Any test case or scenario that we should add.
- Anything else I have not thought of yet.

The current patch series provides the following capabilities (a short
usage sketch of how they fit together follows the list):

- Provide helper functions to create a TD (Trust Domain) using the KVM
ioctls
- Provide helper functions to create a guest image that can include any
testing code
- Provide helper functions and wrapper functions to write testing code
using GHCI interface
- Add a test case that verifies TDX life cycle
- Add a test case that verifies TDX GHCI port IO
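
To give a feel for how these helpers fit together, a typical test in this
series ends up looking roughly like the sketch below. This is a minimal
outline that closely mirrors the verify_td_lifecycle test added in patch 3;
guest_example/verify_example are placeholder names and error checking is
omitted:

    /* Guest side: runs inside the TD and reports completion over port IO. */
    TDX_GUEST_FUNCTION(guest_example)
    {
            tdvmcall_success();
    }

    /* Host side: build the TD, load the guest image, run the vCPU. */
    void verify_example(void)
    {
            struct kvm_vcpu *vcpu;
            struct kvm_vm *vm;

            vm = vm_create_tdx();                   /* TD VM with no memory yet */
            initialize_td(vm);                      /* KVM_TDX_INIT_VM + memslot */
            vcpu = vm_vcpu_add_tdx(vm, 0);          /* KVM_TDX_INIT_VCPU */
            prepare_source_image(vm, guest_example, /* boot + test code */
                                 TDX_FUNCTION_SIZE(guest_example), 0);
            finalize_td_memory(vm);                 /* KVM_TDX_FINALIZE_VM */

            vcpu_run(vcpu);                         /* expect the success IO exit */
            CHECK_GUEST_COMPLETION(vcpu);
            kvm_vm_free(vm);
    }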

TODOs:
- Use existing functions to create page tables dynamically
(i.e. __virt_pg_map())
- Remove arbitrarily defined magic numbers for data structure offsets
- Add TDVMCALL for error reporting
- Add additional test cases as some listed below
- Add #VE handlers to help testing more complicated test cases

---
Erdem Aktas (4):
KVM: selftests: Add support for creating non-default type VMs
KVM: selftest: Add helper functions to create TDX VMs
KVM: selftest: Adding TDX life cycle test.
KVM: selftest: Adding test case for TDX port IO

Roger Wang (1):
KVM: selftest: TDX: Add TDG.VP.INFO test

Ryan Afranji (2):
KVM: selftest: TDX: Verify the behavior when host consumes a TD
private memory
KVM: selftest: TDX: Add shared memory test

Sagi Shahar (10):
KVM: selftest: TDX: Add report_fatal_error test
KVM: selftest: TDX: Add basic TDX CPUID test
KVM: selftest: TDX: Add basic get_td_vmcall_info test
KVM: selftest: TDX: Add TDX IO writes test
KVM: selftest: TDX: Add TDX IO reads test
KVM: selftest: TDX: Add TDX MSR read/write tests
KVM: selftest: TDX: Add TDX HLT exit test
KVM: selftest: TDX: Add TDX MMIO reads test
KVM: selftest: TDX: Add TDX MMIO writes test
KVM: selftest: TDX: Add TDX CPUID TDVMCALL test

tools/testing/selftests/kvm/Makefile | 2 +
.../selftests/kvm/include/kvm_util_base.h | 12 +-
.../selftests/kvm/include/x86_64/processor.h | 1 +
tools/testing/selftests/kvm/lib/kvm_util.c | 6 +-
.../selftests/kvm/lib/x86_64/processor.c | 27 +
tools/testing/selftests/kvm/lib/x86_64/tdx.h | 495 +++++
.../selftests/kvm/lib/x86_64/tdx_lib.c | 373 ++++
.../selftests/kvm/x86_64/tdx_vm_tests.c | 1666 +++++++++++++++++
8 files changed, 2577 insertions(+), 5 deletions(-)
create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx.h
create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx_lib.c
create mode 100644 tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c

--
2.37.2.789.g6183377224-goog


2022-08-30 22:22:45

by Sagi Shahar

Subject: [RFC PATCH v2 02/17] KVM: selftest: Add helper functions to create TDX VMs

From: Erdem Aktas <[email protected]>

TDX requires additional IOCTLs to initialize the VM and vCPUs, to add
private memory and to finalize the VM memory. Additional utility
functions are also provided to create a guest image that will include the
test code.

A TDX enabled VM's memory is encrypted and cannot be modified or observed
by the VMM, so we need to create a guest image that includes the testing
code.

When TDX is enabled, vCPUs enter guest mode in 32-bit mode with paging
disabled, while TDX requires the CPU to run in long mode with paging
enabled. The guest image therefore has to contain transition code that
switches from 32-bit to 64-bit mode, enables paging and then runs the
testing code. There have to be predefined offset values for each data
structure that will be used by the guest code.
The guest image layout is as follows:

| Page Tables | GDTR | GDT | Stack | Testing Code | Transition Boot Code |

The guest image will be loaded at the end of the first 4GB of memory, just
below the 4GB boundary.
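
For reference, plugging in PAGE_SIZE = 4KB and the constants added to tdx.h
in this patch, the fixed addresses work out as follows (a worked example of
the macros, not additional requirements):

    TDX_GUEST_PT_FIXED_ADDR = 0xFFFFFFFF - 10000 * 0x1000 + 1 = 0xFD8F0000  (page tables)
    TDX_GUEST_GDTR_ADDR     = 0xFD8F0000 + 6 * 0x1000         = 0xFD8F6000  (GDTR)
    TDX_GUEST_GDTR_BASE     = 0xFD8F6000 + 0x1000             = 0xFD8F7000  (GDT)
    TDX_GUEST_STACK_BASE    = 0xFD8F7000 + 3 * 0x1000 - 1     = 0xFD8F9FFF  (stack top)
    TDX_GUEST_CODE_ENTRY    = 0xFD8F7000 + 3 * 0x1000         = 0xFD8FA000  (testing code)

The transition boot code is copied to the very end of the image so that, by
design, its reset vector lands at GPA 0xFFFFFFF0, which is where TDX vCPUs
start executing.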

Signed-off-by: Erdem Aktas <[email protected]>
Signed-off-by: Sagi Shahar <[email protected]>
Signed-off-by: Ryan Afranji <[email protected]>
---
tools/testing/selftests/kvm/Makefile | 1 +
.../selftests/kvm/include/kvm_util_base.h | 6 +
.../selftests/kvm/include/x86_64/processor.h | 1 +
.../selftests/kvm/lib/x86_64/processor.c | 27 ++
tools/testing/selftests/kvm/lib/x86_64/tdx.h | 86 +++++
.../selftests/kvm/lib/x86_64/tdx_lib.c | 338 ++++++++++++++++++
6 files changed, 459 insertions(+)
create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx.h
create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx_lib.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 690b499c3471..ad4d60dadc06 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -52,6 +52,7 @@ LIBKVM_x86_64 += lib/x86_64/handlers.S
LIBKVM_x86_64 += lib/x86_64/perf_test_util.c
LIBKVM_x86_64 += lib/x86_64/processor.c
LIBKVM_x86_64 += lib/x86_64/svm.c
+LIBKVM_x86_64 += lib/x86_64/tdx_lib.c
LIBKVM_x86_64 += lib/x86_64/ucall.c
LIBKVM_x86_64 += lib/x86_64/vmx.c

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 2de7a7a2e56b..8d3bcf1719c6 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -653,6 +653,12 @@ static inline struct kvm_vm *vm_create_barebones(void)
return ____vm_create(VM_MODE_DEFAULT, 0, KVM_VM_TYPE_DEFAULT);
}

+/* TDX VMs are always created with no memory and memory is added later */
+static inline struct kvm_vm *vm_create_tdx(void)
+{
+ return ____vm_create(VM_MODE_DEFAULT, 0, KVM_X86_TDX_VM);
+}
+
static inline struct kvm_vm *vm_create(uint32_t nr_runnable_vcpus)
{
return __vm_create(VM_MODE_DEFAULT, nr_runnable_vcpus, 0);
diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 45edf45821d0..57bbab7d025c 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -855,6 +855,7 @@ enum pg_level {
#define PG_SIZE_1G PG_LEVEL_SIZE(PG_LEVEL_1G)

void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level);
+struct kvm_vcpu *vm_vcpu_add_tdx(struct kvm_vm *vm, uint32_t vcpu_id);

/*
* Basic CPU control in CR0
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index f35626df1dea..2a6e28c769f2 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -8,6 +8,7 @@
#include "test_util.h"
#include "kvm_util.h"
#include "processor.h"
+#include "tdx.h"

#ifndef NUM_INTERRUPTS
#define NUM_INTERRUPTS 256
@@ -641,6 +642,32 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id,
return vcpu;
}

+/*
+ * Adds a vCPU to a TD (Trust Domain) with minimum defaults. It will not set
+ * up any general purpose registers as they will be initialized by the TDX
+ * module. In TDX, the vCPU's RIP is set to 0xFFFFFFF0. See the Intel TDX EAS
+ * section "Initial State of Guest GPRs" for more information on the vCPU's
+ * initial register values when entering the TD for the first time.
+ *
+ * Input Args:
+ * vm - Virtual Machine
+ * vcpu_id - The id of the vCPU to add to the VM.
+ */
+struct kvm_vcpu *vm_vcpu_add_tdx(struct kvm_vm *vm, uint32_t vcpu_id)
+{
+ struct kvm_mp_state mp_state;
+ struct kvm_vcpu *vcpu;
+
+ vcpu = __vm_vcpu_add(vm, vcpu_id);
+ initialize_td_vcpu(vcpu);
+
+ /* Setup the MP state */
+ mp_state.mp_state = 0;
+ vcpu_mp_state_set(vcpu, &mp_state);
+
+ return vcpu;
+}
+
struct kvm_vcpu *vm_arch_vcpu_recreate(struct kvm_vm *vm, uint32_t vcpu_id)
{
struct kvm_vcpu *vcpu = __vm_vcpu_add(vm, vcpu_id);
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx.h b/tools/testing/selftests/kvm/lib/x86_64/tdx.h
new file mode 100644
index 000000000000..61b997dfc420
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx.h
@@ -0,0 +1,86 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef KVM_LIB_TDX_H_
+#define KVM_LIB_TDX_H_
+
+#include <kvm_util.h>
+#include "processor.h"
+
+/*
+ * Maximum number of pages for the guest image.
+ */
+#define TDX_GUEST_MAX_NR_PAGES 10000
+
+/*
+ * Page Table Address used when paging is enabled.
+ */
+#define TDX_GUEST_PT_FIXED_ADDR (0xFFFFFFFF -\
+ (TDX_GUEST_MAX_NR_PAGES * PAGE_SIZE) + 1)
+
+/*
+ * Max Page Table Size
+ * To map 4GB memory region with 2MB pages, there needs to be 1 page for PML4,
+ * 1 Page for PDPT, 4 pages for PD. Reserving 6 pages for PT.
+ */
+#define TDX_GUEST_NR_PT_PAGES (1 + 1 + 4)
+
+/*
+ * Predefined GDTR values.
+ */
+#define TDX_GUEST_GDTR_ADDR (TDX_GUEST_PT_FIXED_ADDR + (TDX_GUEST_NR_PT_PAGES *\
+ PAGE_SIZE))
+#define TDX_GUEST_GDTR_BASE (TDX_GUEST_GDTR_ADDR + PAGE_SIZE)
+#define TDX_GUEST_LINEAR_CODE64_SEL 0x38
+
+#define TDX_GUEST_STACK_NR_PAGES (3)
+#define TDX_GUEST_STACK_BASE (TDX_GUEST_GDTR_BASE + (TDX_GUEST_STACK_NR_PAGES *\
+ PAGE_SIZE) - 1)
+/*
+ * Reserve some pages for copying the test code. This is an arbitrary number
+ * for now to simplify the guest image layout calculation.
+ * TODO: calculate the guest code size dynamically.
+ */
+#define TDX_GUEST_CODE_ENTRY (TDX_GUEST_GDTR_BASE + (TDX_GUEST_STACK_NR_PAGES *\
+ PAGE_SIZE))
+
+#define KVM_MAX_CPUID_ENTRIES 256
+
+/*
+ * TODO: Move page attributes to processor.h file.
+ */
+#define _PAGE_PRESENT (1UL<<0) /* is present */
+#define _PAGE_RW (1UL<<1) /* writeable */
+#define _PAGE_PS (1UL<<7) /* page size bit*/
+
+#define GDT_ENTRY(flags, base, limit) \
+ ((((base) & 0xff000000ULL) << (56-24)) | \
+ (((flags) & 0x0000f0ffULL) << 40) | \
+ (((limit) & 0x000f0000ULL) << (48-16)) | \
+ (((base) & 0x00ffffffULL) << 16) | \
+ (((limit) & 0x0000ffffULL)))
+
+struct tdx_cpuid_data {
+ struct kvm_cpuid2 cpuid;
+ struct kvm_cpuid_entry2 entries[KVM_MAX_CPUID_ENTRIES];
+};
+
+struct __packed tdx_gdtr {
+ uint16_t limit;
+ uint32_t base;
+};
+
+struct page_table {
+ uint64_t pml4[512];
+ uint64_t pdpt[512];
+ uint64_t pd[4][512];
+};
+
+void add_td_memory(struct kvm_vm *vm, void *source_page,
+ uint64_t gpa, int size);
+void finalize_td_memory(struct kvm_vm *vm);
+void initialize_td(struct kvm_vm *vm);
+void initialize_td_vcpu(struct kvm_vcpu *vcpu);
+void prepare_source_image(struct kvm_vm *vm, void *guest_code,
+ size_t guest_code_size,
+ uint64_t guest_code_signature);
+
+#endif // KVM_LIB_TDX_H_
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx_lib.c b/tools/testing/selftests/kvm/lib/x86_64/tdx_lib.c
new file mode 100644
index 000000000000..72bf2ff24a29
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx_lib.c
@@ -0,0 +1,338 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/stringify.h>
+#include "asm/kvm.h"
+#include "tdx.h"
+#include <stdlib.h>
+#include <malloc.h>
+#include "processor.h"
+#include <string.h>
+
+char *tdx_cmd_str[] = {
+ "KVM_TDX_CAPABILITIES",
+ "KVM_TDX_INIT_VM",
+ "KVM_TDX_INIT_VCPU",
+ "KVM_TDX_INIT_MEM_REGION",
+ "KVM_TDX_FINALIZE_VM"
+};
+
+#define TDX_MAX_CMD_STR (ARRAY_SIZE(tdx_cmd_str))
+#define EIGHT_INT3_INSTRUCTIONS 0xCCCCCCCCCCCCCCCC
+
+#define XFEATURE_LBR 15
+#define XFEATURE_XTILECFG 17
+#define XFEATURE_XTILEDATA 18
+#define XFEATURE_MASK_LBR (1 << XFEATURE_LBR)
+#define XFEATURE_MASK_XTILECFG (1 << XFEATURE_XTILECFG)
+#define XFEATURE_MASK_XTILEDATA (1 << XFEATURE_XTILEDATA)
+#define XFEATURE_MASK_XTILE (XFEATURE_MASK_XTILECFG | XFEATURE_MASK_XTILEDATA)
+
+
+static void tdx_ioctl(int fd, int ioctl_no, uint32_t flags, void *data)
+{
+ struct kvm_tdx_cmd tdx_cmd;
+ int r;
+
+ TEST_ASSERT(ioctl_no < TDX_MAX_CMD_STR, "Unknown TDX CMD : %d\n",
+ ioctl_no);
+
+ memset(&tdx_cmd, 0x0, sizeof(tdx_cmd));
+ tdx_cmd.id = ioctl_no;
+ tdx_cmd.flags = flags;
+ tdx_cmd.data = (uint64_t)data;
+ r = ioctl(fd, KVM_MEMORY_ENCRYPT_OP, &tdx_cmd);
+ TEST_ASSERT(r == 0, "%s failed: %d %d", tdx_cmd_str[ioctl_no], r,
+ errno);
+}
+
+static struct tdx_cpuid_data get_tdx_cpuid_data(struct kvm_vm *vm)
+{
+ static struct tdx_cpuid_data cpuid_data;
+ int ret, i;
+
+ if (cpuid_data.cpuid.nent)
+ return cpuid_data;
+
+ memset(&cpuid_data, 0, sizeof(cpuid_data));
+ cpuid_data.cpuid.nent = KVM_MAX_CPUID_ENTRIES;
+ ret = ioctl(vm->kvm_fd, KVM_GET_SUPPORTED_CPUID, &cpuid_data);
+ if (ret) {
+ TEST_FAIL("KVM_GET_SUPPORTED_CPUID failed %d %d\n",
+ ret, errno);
+ cpuid_data.cpuid.nent = 0;
+ return cpuid_data;
+ }
+
+ for (i = 0; i < KVM_MAX_CPUID_ENTRIES; i++) {
+ struct kvm_cpuid_entry2 *e = &cpuid_data.entries[i];
+
+ /* TDX doesn't support LBR and AMX features yet.
+ * Disable those bits from the XCR0 register.
+ */
+ if (e->function == 0xd && (e->index == 0)) {
+ e->eax &= ~XFEATURE_MASK_LBR;
+ e->eax &= ~XFEATURE_MASK_XTILE;
+ }
+ }
+
+ return cpuid_data;
+}
+
+/*
+ * Initialize a VM as a TD.
+ *
+ */
+void initialize_td(struct kvm_vm *vm)
+{
+ struct tdx_cpuid_data cpuid_data;
+ int rc;
+
+ /* No guest VMM controlled cpuid information yet. */
+ struct kvm_tdx_init_vm init_vm;
+
+ rc = kvm_check_cap(KVM_CAP_X2APIC_API);
+ TEST_ASSERT(rc, "TDX: KVM_CAP_X2APIC_API is not supported!");
+ rc = kvm_check_cap(KVM_CAP_SPLIT_IRQCHIP);
+ TEST_ASSERT(rc, "TDX: KVM_CAP_SPLIT_IRQCHIP is not supported!");
+
+ vm_enable_cap(vm, KVM_CAP_X2APIC_API,
+ KVM_X2APIC_API_USE_32BIT_IDS |
+ KVM_X2APIC_API_DISABLE_BROADCAST_QUIRK);
+ vm_enable_cap(vm, KVM_CAP_SPLIT_IRQCHIP, 24);
+
+ /* Allocate and set up memory for the TD guest. */
+ vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
+ TDX_GUEST_PT_FIXED_ADDR,
+ 0, TDX_GUEST_MAX_NR_PAGES, 0);
+
+ memset(&init_vm, 0, sizeof(init_vm));
+
+ cpuid_data = get_tdx_cpuid_data(vm);
+
+ init_vm.max_vcpus = 1;
+ init_vm.attributes = 0;
+ memcpy(&init_vm.cpuid, &cpuid_data, sizeof(cpuid_data));
+ tdx_ioctl(vm->fd, KVM_TDX_INIT_VM, 0, &init_vm);
+}
+
+
+void initialize_td_vcpu(struct kvm_vcpu *vcpu)
+{
+ struct tdx_cpuid_data cpuid_data;
+
+ cpuid_data = get_tdx_cpuid_data(vcpu->vm);
+ vcpu_init_cpuid(vcpu, (struct kvm_cpuid2 *) &cpuid_data);
+ tdx_ioctl(vcpu->fd, KVM_TDX_INIT_VCPU, 0, NULL);
+}
+
+void add_td_memory(struct kvm_vm *vm, void *source_pages,
+ uint64_t gpa, int size)
+{
+ struct kvm_tdx_init_mem_region mem_region = {
+ .source_addr = (uint64_t)source_pages,
+ .gpa = gpa,
+ .nr_pages = size / PAGE_SIZE,
+ };
+ uint32_t metadata = KVM_TDX_MEASURE_MEMORY_REGION;
+
+ TEST_ASSERT((mem_region.nr_pages > 0) &&
+ ((mem_region.nr_pages * PAGE_SIZE) == size),
+ "Cannot add partial pages to the guest memory.\n");
+ TEST_ASSERT(((uint64_t)source_pages & (PAGE_SIZE - 1)) == 0,
+ "Source memory buffer is not page aligned\n");
+ tdx_ioctl(vm->fd, KVM_TDX_INIT_MEM_REGION, metadata, &mem_region);
+}
+
+void finalize_td_memory(struct kvm_vm *vm)
+{
+ tdx_ioctl(vm->fd, KVM_TDX_FINALIZE_VM, 0, NULL);
+}
+
+void build_gdtr_table(void *gdtr_target, void *gdt_target)
+{
+ uint64_t gdt_table[] = {
+ GDT_ENTRY(0, 0, 0), // NULL_SEL
+ GDT_ENTRY(0xc093, 0, 0xfffff), // LINEAR_DATA32_SEL
+ GDT_ENTRY(0xc09b, 0, 0xfffff), // LINEAR_CODE32_SEL
+ GDT_ENTRY(0, 0, 0), // NULL_SEL
+ GDT_ENTRY(0, 0, 0), // NULL_SEL
+ GDT_ENTRY(0, 0, 0), // NULL_SEL
+ GDT_ENTRY(0, 0, 0), // NULL_SEL
+ GDT_ENTRY(0xa09b, 0, 0xfffff) // LINEAR_CODE64_SEL
+ };
+
+ struct tdx_gdtr gdtr;
+
+ gdtr.limit = sizeof(gdt_table) - 1;
+ gdtr.base = TDX_GUEST_GDTR_BASE;
+
+ memcpy(gdt_target, gdt_table, sizeof(gdt_table));
+ memcpy(gdtr_target, &gdtr, sizeof(gdtr));
+}
+
+
+/*
+ * Construct a 1:1 mapping for the lowest 4GB of the address space using 2MB
+ * pages; it will be used by the TDX guest when paging is enabled.
+ * TODO: use virt_pg_map() functions to dynamically allocate the page tables.
+ */
+void build_page_tables(void *pt_target, uint64_t pml4_base_address)
+{
+ uint64_t i;
+ struct page_table *pt;
+
+ pt = malloc(sizeof(struct page_table));
+ TEST_ASSERT(pt != NULL, "Could not allocate memory for page tables!\n");
+ memset((void *) &(pt->pml4[0]), 0, sizeof(pt->pml4));
+ memset((void *) &(pt->pdpt[0]), 0, sizeof(pt->pdpt));
+ for (i = 0; i < 4; i++)
+ memset((void *) &(pt->pd[i][0]), 0, sizeof(pt->pd[i]));
+
+ pt->pml4[0] = (pml4_base_address + PAGE_SIZE) |
+ _PAGE_PRESENT | _PAGE_RW;
+ for (i = 0; i < 4; i++)
+ pt->pdpt[i] = (pml4_base_address + (i + 2) * PAGE_SIZE) |
+ _PAGE_PRESENT | _PAGE_RW;
+
+ uint64_t *pde = &(pt->pd[0][0]);
+
+ for (i = 0; i < sizeof(pt->pd) / sizeof(pt->pd[0][0]); i++, pde++)
+ *pde = (i << 21) | _PAGE_PRESENT | _PAGE_RW | _PAGE_PS;
+ memcpy(pt_target, pt, 6 * PAGE_SIZE);
+}
+
+static void
+__attribute__((__flatten__, section("guest_boot_section"))) guest_boot(void)
+{
+ asm volatile(" .code32\n\t;"
+ "main_32:\n\t;"
+ " cli\n\t;"
+ " movl $" __stringify(TDX_GUEST_STACK_BASE) ", %%esp\n\t;"
+ " movl $" __stringify(TDX_GUEST_GDTR_ADDR) ", %%eax\n\t;"
+ " lgdt (%%eax)\n\t;"
+ " movl $0x660, %%eax\n\t;"
+ " movl %%eax, %%cr4\n\t;"
+ " movl $" __stringify(TDX_GUEST_PT_FIXED_ADDR) ", %%eax\n\t;"
+ " movl %%eax, %%cr3\n\t;"
+ " movl $0x80000023, %%eax\n\t;"
+ " movl %%eax, %%cr0\n\t;"
+ " ljmp $" __stringify(TDX_GUEST_LINEAR_CODE64_SEL)
+ ", $" __stringify(TDX_GUEST_CODE_ENTRY) "\n\t;"
+ /*
+ * This is where the CPU will start running.
+ * Do not remove any int3 instruction below.
+ */
+ "reset_vector:\n\t;"
+ " jmp main_32\n\t;"
+ " int3\n\t;"
+ " int3\n\t;"
+ " int3\n\t;"
+ " int3\n\t;"
+ " int3\n\t;"
+ " int3\n\t;"
+ " int3\n\t;"
+ " int3\n\t;"
+ " int3\n\t;"
+ " int3\n\t;"
+ " int3\n\t;"
+ " int3\n\t;"
+ " int3\n\t;"
+ " int3\n\t;"
+ ".code64\n\t"
+ :::"rax");
+}
+
+extern char *__start_guest_boot_section;
+extern char *__stop_guest_boot_section;
+#define GUEST_BOOT_SIZE ((uint64_t)&__stop_guest_boot_section -\
+ (uint64_t)&__start_guest_boot_section)
+
+/*
+ * Copies the guest code to the guest image. If the signature value is not 0,
+ * it will verify that the guest code ends with the provided signature. The
+ * signature check prevents the compiler from adding additional instructions
+ * to the end of the guest code, which could create problems in some cases,
+ * e.g. when copying the code for the reset vector.
+ */
+void copy_guest_code(void *target, void *guest_function, size_t code_size,
+ uint64_t signature)
+{
+ uint64_t *end;
+
+ TEST_ASSERT((target != NULL) && (guest_function != NULL) &&
+ (code_size > 0), "Invalid inputs to copy guest code\n");
+ if (signature) {
+ while (code_size >= sizeof(signature)) {
+ end = guest_function + code_size - sizeof(signature);
+ if (*end == signature)
+ break;
+ /* Trim the unwanted code the compiler added at the
+ * end. We need to add nop instructions to the
+ * beginning of the buffer to make sure that the guest
+ * code is aligned from the bottom and the top as
+ * expected based on the original code size. This is
+ * important for the reset vector, which is copied to
+ * the end of the first 4GB of memory.
+ */
+ code_size--;
+ *(unsigned char *)target = 0x90;
+ target++;
+ }
+ TEST_ASSERT(code_size >= sizeof(signature),
+ "Guest code does not end with the signature: %lx\n"
+ , signature);
+ }
+
+ memcpy(target, guest_function, code_size);
+}
+
+void prepare_source_image(struct kvm_vm *vm, void *guest_code,
+ size_t guest_code_size, uint64_t guest_code_signature)
+{
+ void *source_mem, *pt_address, *code_address, *gdtr_address,
+ *gdt_address, *guest_code_base;
+ int number_of_pages;
+
+ number_of_pages = (GUEST_BOOT_SIZE + guest_code_size) / PAGE_SIZE + 1 +
+ TDX_GUEST_NR_PT_PAGES + TDX_GUEST_STACK_NR_PAGES;
+ TEST_ASSERT(number_of_pages < TDX_GUEST_MAX_NR_PAGES,
+ "Initial image does not fit to the memory");
+
+ source_mem = memalign(PAGE_SIZE,
+ (TDX_GUEST_MAX_NR_PAGES * PAGE_SIZE));
+ TEST_ASSERT(source_mem != NULL,
+ "Could not allocate memory for guest image\n");
+
+ pt_address = source_mem;
+ gdtr_address = source_mem + (TDX_GUEST_NR_PT_PAGES * PAGE_SIZE);
+ gdt_address = gdtr_address + PAGE_SIZE;
+ code_address = source_mem + (TDX_GUEST_MAX_NR_PAGES * PAGE_SIZE) -
+ GUEST_BOOT_SIZE;
+ guest_code_base = gdt_address + (TDX_GUEST_STACK_NR_PAGES *
+ PAGE_SIZE);
+
+ build_page_tables(pt_address, TDX_GUEST_PT_FIXED_ADDR);
+ build_gdtr_table(gdtr_address, gdt_address);
+
+ /* The reset vector code should end with int3 instructions.
+ * The unused bytes at the reset vector are filled with int3 to trigger a
+ * triple fault shutdown if the guest manages to get into the unused code.
+ * The last 8 int3 instructions are used as a signature to find the end
+ * offset of the guest boot code, which includes the instructions for the
+ * reset vector.
+ * TODO: Using a signature to find the exact size is a little strange, but
+ * the compiler might add additional bytes to the end of the function, which
+ * makes it hard to calculate the offset addresses correctly.
+ * Alternatively, we could construct the jmp instruction for the reset
+ * vector manually to prevent any offset mismatch when copying the
+ * compiler-generated code.
+ */
+ copy_guest_code(code_address, guest_boot, GUEST_BOOT_SIZE,
+ EIGHT_INT3_INSTRUCTIONS);
+ if (guest_code)
+ copy_guest_code(guest_code_base, guest_code, guest_code_size,
+ guest_code_signature);
+
+ add_td_memory(vm, source_mem, TDX_GUEST_PT_FIXED_ADDR,
+ (TDX_GUEST_MAX_NR_PAGES * PAGE_SIZE));
+ free(source_mem);
+}
--
2.37.2.789.g6183377224-goog

2022-08-30 22:22:54

by Sagi Shahar

Subject: [RFC PATCH v2 03/17] KVM: selftest: Adding TDX life cycle test.

From: Erdem Aktas <[email protected]>

Adding a test to verify the TDX lifecycle by creating a TD and running a
dummy TDVMCALL<INSTRUCTION.IO> inside it.

Signed-off-by: Erdem Aktas <[email protected]>
Signed-off-by: Sagi Shahar <[email protected]>
Signed-off-by: Ryan Afranji <[email protected]>
---
tools/testing/selftests/kvm/Makefile | 1 +
tools/testing/selftests/kvm/lib/x86_64/tdx.h | 149 ++++++++++++++++++
.../selftests/kvm/x86_64/tdx_vm_tests.c | 104 ++++++++++++
3 files changed, 254 insertions(+)
create mode 100644 tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index ad4d60dadc06..208e0cc30048 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -140,6 +140,7 @@ TEST_GEN_PROGS_x86_64 += set_memory_region_test
TEST_GEN_PROGS_x86_64 += steal_time
TEST_GEN_PROGS_x86_64 += kvm_binary_stats_test
TEST_GEN_PROGS_x86_64 += system_counter_offset_test
+TEST_GEN_PROGS_x86_64 += x86_64/tdx_vm_tests

# Compiled outputs used by test targets
TEST_GEN_PROGS_EXTENDED_x86_64 += x86_64/nx_huge_pages_test
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx.h b/tools/testing/selftests/kvm/lib/x86_64/tdx.h
index 61b997dfc420..d5de52657112 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx.h
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx.h
@@ -51,6 +51,12 @@
#define _PAGE_RW (1UL<<1) /* writeable */
#define _PAGE_PS (1UL<<7) /* page size bit*/

+#define TDX_INSTRUCTION_IO 30
+
+#define TDX_SUCCESS_PORT 0x30
+#define TDX_IO_READ 0
+#define TDX_IO_WRITE 1
+
#define GDT_ENTRY(flags, base, limit) \
((((base) & 0xff000000ULL) << (56-24)) | \
(((flags) & 0x0000f0ffULL) << 40) | \
@@ -83,4 +89,147 @@ void prepare_source_image(struct kvm_vm *vm, void *guest_code,
size_t guest_code_size,
uint64_t guest_code_signature);

+/*
+ * Generic TDCALL function that can be used to communicate with the TDX module
+ * or the VMM.
+ * Input operands: rax, rbx, rcx, rdx, r8-r15, rbp, rsi, rdi
+ * Output operands: rax, r8-r15, rbx, rdx, rdi, rsi
+ * rcx is actually a bitmap that tells the TDX module which register values
+ * will be exposed to the VMM.
+ * XMM0-XMM15 registers can be used as input operands but the current
+ * implementation does not support them yet.
+ */
+static inline void tdcall(struct kvm_regs *regs)
+{
+ asm volatile (
+ "mov %13, %%rax;\n\t"
+ "mov %14, %%rbx;\n\t"
+ "mov %15, %%rcx;\n\t"
+ "mov %16, %%rdx;\n\t"
+ "mov %17, %%r8;\n\t"
+ "mov %18, %%r9;\n\t"
+ "mov %19, %%r10;\n\t"
+ "mov %20, %%r11;\n\t"
+ "mov %21, %%r12;\n\t"
+ "mov %22, %%r13;\n\t"
+ "mov %23, %%r14;\n\t"
+ "mov %24, %%r15;\n\t"
+ "mov %25, %%rbp;\n\t"
+ "mov %26, %%rsi;\n\t"
+ "mov %27, %%rdi;\n\t"
+ ".byte 0x66, 0x0F, 0x01, 0xCC;\n\t"
+ "mov %%rax, %0;\n\t"
+ "mov %%rbx, %1;\n\t"
+ "mov %%rdx, %2;\n\t"
+ "mov %%r8, %3;\n\t"
+ "mov %%r9, %4;\n\t"
+ "mov %%r10, %5;\n\t"
+ "mov %%r11, %6;\n\t"
+ "mov %%r12, %7;\n\t"
+ "mov %%r13, %8;\n\t"
+ "mov %%r14, %9;\n\t"
+ "mov %%r15, %10;\n\t"
+ "mov %%rsi, %11;\n\t"
+ "mov %%rdi, %12;\n\t"
+ : "=m" (regs->rax), "=m" (regs->rbx), "=m" (regs->rdx),
+ "=m" (regs->r8), "=m" (regs->r9), "=m" (regs->r10),
+ "=m" (regs->r11), "=m" (regs->r12), "=m" (regs->r13),
+ "=m" (regs->r14), "=m" (regs->r15), "=m" (regs->rsi),
+ "=m" (regs->rdi)
+ : "m" (regs->rax), "m" (regs->rbx), "m" (regs->rcx),
+ "m" (regs->rdx), "m" (regs->r8), "m" (regs->r9),
+ "m" (regs->r10), "m" (regs->r11), "m" (regs->r12),
+ "m" (regs->r13), "m" (regs->r14), "m" (regs->r15),
+ "m" (regs->rbp), "m" (regs->rsi), "m" (regs->rdi)
+ : "rax", "rbx", "rcx", "rdx", "r8", "r9", "r10", "r11",
+ "r12", "r13", "r14", "r15", "rbp", "rsi", "rdi");
+}
+
+
+/*
+ * Do a TDVMCALL IO request
+ *
+ * Input Args:
+ * port - IO port to do read/write
+ * size - Number of bytes to read/write. 1=1byte, 2=2bytes, 4=4bytes.
+ * write - 1=IO write 0=IO read
+ * data - pointer for the data to write
+ *
+ * Output Args:
+ * data - pointer for data to be read
+ *
+ * Return:
+ * Returns the TDG.VP.VMCALL return code: 0 on success, non-zero on failure.
+ *
+ * Does an IO operation using the following tdvmcall interface.
+ *
+ * TDG.VP.VMCALL<Instruction.IO>-Input Operands
+ * R11 30 for IO
+ *
+ * R12 Size of access. 1=1byte, 2=2bytes, 4=4bytes.
+ * R13 Direction. 0=Read, 1=Write.
+ * R14 Port number
+ * R15 Data to write, if R13 is 1.
+ *
+ * TDG.VP.VMCALL<Instruction.IO>-Output Operands
+ * R10 TDG.VP.VMCALL-return code.
+ * R11 Data to read, if R13 is 0.
+ *
+ * TDG.VP.VMCALL<Instruction.IO>-Status Codes
+ * Error Code Value Description
+ * TDG.VP.VMCALL_SUCCESS 0x0 TDG.VP.VMCALL is successful
+ * TDG.VP.VMCALL_INVALID_OPERAND 0x80000000 00000000 Invalid-IO-Port access
+ */
+static inline uint64_t tdvmcall_io(uint64_t port, uint64_t size,
+ uint64_t write, uint64_t *data)
+{
+ struct kvm_regs regs;
+
+ memset(&regs, 0, sizeof(regs));
+ regs.r11 = TDX_INSTRUCTION_IO;
+ regs.r12 = size;
+ regs.r13 = write;
+ regs.r14 = port;
+ if (write) {
+ regs.r15 = *data;
+ regs.rcx = 0xFC00;
+ } else {
+ regs.rcx = 0x7C00;
+ }
+ tdcall(&regs);
+ if (!write)
+ *data = regs.r11;
+ return regs.r10;
+}
+
+/*
+ * Report test success to user space.
+ */
+static inline void tdvmcall_success(void)
+{
+ uint64_t code = 0;
+
+ tdvmcall_io(TDX_SUCCESS_PORT, /*size=*/4, TDX_IO_WRITE, &code);
+}
+
+
+#define TDX_FUNCTION_SIZE(name) ((uint64_t)&__stop_sec_ ## name -\
+ (uint64_t)&__start_sec_ ## name) \
+
+#define TDX_GUEST_FUNCTION__(name, section_name) \
+extern char *__start_sec_ ## name ; \
+extern char *__stop_sec_ ## name ; \
+static void \
+__attribute__((__flatten__, section(section_name))) name(void *arg)
+
+
+#define STRINGIFY2(x) #x
+#define STRINGIFY(x) STRINGIFY2(x)
+#define CONCAT2(a, b) a##b
+#define CONCAT(a, b) CONCAT2(a, b)
+
+
+#define TDX_GUEST_FUNCTION(name) \
+TDX_GUEST_FUNCTION__(name, STRINGIFY(CONCAT(sec_, name)))
+
#endif // KVM_LIB_TDX_H_
diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
new file mode 100644
index 000000000000..590e45aa7570
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
@@ -0,0 +1,104 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#include <fcntl.h>
+#include <limits.h>
+#include <kvm_util.h>
+#include "../lib/x86_64/tdx.h"
+#include <linux/kvm.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/ioctl.h>
+#include <sys/types.h>
+#include <test_util.h>
+#include <unistd.h>
+#include <processor.h>
+#include <time.h>
+#include <sys/mman.h>
+#include <sys/wait.h>
+
+#define CHECK_GUEST_COMPLETION(VCPU) \
+ (TEST_ASSERT( \
+ ((VCPU)->run->exit_reason == KVM_EXIT_IO) && \
+ ((VCPU)->run->io.port == TDX_SUCCESS_PORT) && \
+ ((VCPU)->run->io.size == 4) && \
+ ((VCPU)->run->io.direction == TDX_IO_WRITE), \
+ "Unexpected exit values while waiting for test complition: %u (%s) %d %d %d\n", \
+ (VCPU)->run->exit_reason, exit_reason_str((VCPU)->run->exit_reason), \
+ (VCPU)->run->io.port, (VCPU)->run->io.size, (VCPU)->run->io.direction))
+
+/*
+ * There might be multiple tests running and if one test fails, it would
+ * prevent the subsequent tests from running because TEST_ASSERT aborts the
+ * process. The run_in_new_process function runs a test in a new process
+ * context and waits for it to finish or fail, which prevents TEST_ASSERT
+ * from killing the main testing process.
+ */
+void run_in_new_process(void (*func)(void))
+{
+ if (fork() == 0) {
+ func();
+ exit(0);
+ }
+ wait(NULL);
+}
+
+/*
+ * Verify that TDX is supported by KVM.
+ */
+bool is_tdx_enabled(void)
+{
+ return !!(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_TDX_VM));
+}
+
+/*
+ * Do a dummy IO exit to verify that the TD has been initialized correctly and
+ * the guest can run some code inside it.
+ */
+TDX_GUEST_FUNCTION(guest_dummy_exit)
+{
+ tdvmcall_success();
+}
+
+/*
+ * The TD lifecycle test creates a TD which runs a dummy IO exit to verify
+ * that the guest TD has been created correctly.
+ */
+void verify_td_lifecycle(void)
+{
+ struct kvm_vcpu *vcpu;
+ struct kvm_vm *vm;
+
+ printf("Verifying TD lifecycle:\n");
+ /* Create a TD VM with no memory.*/
+ vm = vm_create_tdx();
+
+ /* Allocate TD guest memory and initialize the TD.*/
+ initialize_td(vm);
+
+ /* Initialize the TD vcpu and copy the test code to the guest memory.*/
+ vcpu = vm_vcpu_add_tdx(vm, 0);
+
+ /* Setup and initialize VM memory */
+ prepare_source_image(vm, guest_dummy_exit,
+ TDX_FUNCTION_SIZE(guest_dummy_exit), 0);
+ finalize_td_memory(vm);
+
+ vcpu_run(vcpu);
+ CHECK_GUEST_COMPLETION(vcpu);
+
+ kvm_vm_free(vm);
+ printf("\t ... PASSED\n");
+}
+
+int main(int argc, char **argv)
+{
+ if (!is_tdx_enabled()) {
+ print_skip("TDX is not supported by the KVM");
+ exit(KSFT_SKIP);
+ }
+
+ run_in_new_process(&verify_td_lifecycle);
+
+ return 0;
+}
--
2.37.2.789.g6183377224-goog

2022-08-30 22:23:23

by Sagi Shahar

Subject: [RFC PATCH v2 09/17] KVM: selftest: TDX: Add TDX IO reads test

The test verifies IO reads of various sizes from the host to the guest.

Signed-off-by: Sagi Shahar <[email protected]>
---
.../selftests/kvm/x86_64/tdx_vm_tests.c | 84 +++++++++++++++++++
1 file changed, 84 insertions(+)

diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
index ee60f77fe38e..85b5ab99424e 100644
--- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
+++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
@@ -1,5 +1,7 @@
// SPDX-License-Identifier: GPL-2.0-only

+#include "asm/kvm.h"
+#include <stdint.h>
#include <fcntl.h>
#include <limits.h>
#include <kvm_util.h>
@@ -573,6 +575,87 @@ void verify_guest_writes(void)
printf("\t ... PASSED\n");
}

+/*
+ * Verifies IO functionality by reading values of different sizes
+ * from the host.
+ */
+TDX_GUEST_FUNCTION(guest_io_reads)
+{
+ uint64_t data;
+ uint64_t ret;
+
+ ret = tdvmcall_io(TDX_TEST_PORT, 1, TDX_IO_READ, &data);
+ if (ret)
+ tdvmcall_fatal(ret);
+ if (data != 0xAB)
+ tdvmcall_fatal(1);
+
+ ret = tdvmcall_io(TDX_TEST_PORT, 2, TDX_IO_READ, &data);
+ if (ret)
+ tdvmcall_fatal(ret);
+ if (data != 0xABCD)
+ tdvmcall_fatal(2);
+
+ ret = tdvmcall_io(TDX_TEST_PORT, 4, TDX_IO_READ, &data);
+ if (ret)
+ tdvmcall_fatal(ret);
+ if (data != 0xFFABCDEF)
+ tdvmcall_fatal(4);
+
+ // Read an invalid number of bytes.
+ ret = tdvmcall_io(TDX_TEST_PORT, 5, TDX_IO_READ, &data);
+ if (ret)
+ tdvmcall_fatal(ret);
+
+ tdvmcall_success();
+}
+
+void verify_guest_reads(void)
+{
+ struct kvm_vcpu *vcpu;
+ struct kvm_vm *vm;
+
+ printf("Verifying guest reads:\n");
+ /* Create a TD VM with no memory.*/
+ vm = vm_create_tdx();
+
+ /* Allocate TD guest memory and initialize the TD.*/
+ initialize_td(vm);
+
+ /* Initialize the TD vcpu and copy the test code to the guest memory.*/
+ vcpu = vm_vcpu_add_tdx(vm, 0);
+
+ /* Setup and initialize VM memory */
+ prepare_source_image(vm, guest_io_reads,
+ TDX_FUNCTION_SIZE(guest_io_reads), 0);
+ finalize_td_memory(vm);
+
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ CHECK_IO(vcpu, TDX_TEST_PORT, 1, TDX_IO_READ);
+ *(uint8_t *)((void *)vcpu->run + vcpu->run->io.data_offset) = 0xAB;
+
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ CHECK_IO(vcpu, TDX_TEST_PORT, 2, TDX_IO_READ);
+ *(uint16_t *)((void *)vcpu->run + vcpu->run->io.data_offset) = 0xABCD;
+
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ CHECK_IO(vcpu, TDX_TEST_PORT, 4, TDX_IO_READ);
+ *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset) = 0xFFABCDEF;
+
+ vcpu_run(vcpu);
+ ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_SYSTEM_EVENT);
+ ASSERT_EQ(vcpu->run->system_event.data[1], TDX_VMCALL_INVALID_OPERAND);
+
+ vcpu_run(vcpu);
+ CHECK_GUEST_COMPLETION(vcpu);
+
+ kvm_vm_free(vm);
+ printf("\t ... PASSED\n");
+}
+
int main(int argc, char **argv)
{
if (!is_tdx_enabled()) {
@@ -586,6 +669,7 @@ int main(int argc, char **argv)
run_in_new_process(&verify_td_cpuid);
run_in_new_process(&verify_get_td_vmcall_info);
run_in_new_process(&verify_guest_writes);
+ run_in_new_process(&verify_guest_reads);

return 0;
}
--
2.37.2.789.g6183377224-goog

2022-08-30 22:23:42

by Sagi Shahar

Subject: [RFC PATCH v2 04/17] KVM: selftest: TDX: Add report_fatal_error test

The test checks report_fatal_error functionality.

Signed-off-by: Sagi Shahar <[email protected]>
---
tools/testing/selftests/kvm/lib/x86_64/tdx.h | 17 ++++++
.../selftests/kvm/x86_64/tdx_vm_tests.c | 55 +++++++++++++++++++
2 files changed, 72 insertions(+)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx.h b/tools/testing/selftests/kvm/lib/x86_64/tdx.h
index d5de52657112..351ece3e80e2 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx.h
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx.h
@@ -51,6 +51,7 @@
#define _PAGE_RW (1UL<<1) /* writeable */
#define _PAGE_PS (1UL<<7) /* page size bit*/

+#define TDX_REPORT_FATAL_ERROR 0x10003
#define TDX_INSTRUCTION_IO 30

#define TDX_SUCCESS_PORT 0x30
@@ -212,6 +213,22 @@ static inline void tdvmcall_success(void)
tdvmcall_io(TDX_SUCCESS_PORT, /*size=*/4, TDX_IO_WRITE, &code);
}

+/*
+ * Report a fatal error to user space. An optional shared guest memory buffer
+ * holding an error string can also be passed, but this helper does not use it.
+ * The return value from the tdvmcall is ignored since execution is not
+ * expected to continue beyond this point.
+ */
+static inline void tdvmcall_fatal(uint64_t error_code)
+{
+ struct kvm_regs regs;
+
+ memset(&regs, 0, sizeof(regs));
+ regs.r11 = TDX_REPORT_FATAL_ERROR;
+ regs.r12 = error_code;
+ regs.rcx = 0x1C00;
+ tdcall(&regs);
+}

#define TDX_FUNCTION_SIZE(name) ((uint64_t)&__stop_sec_ ## name -\
(uint64_t)&__start_sec_ ## name) \
diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
index 590e45aa7570..1db5400ca5ef 100644
--- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
+++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
@@ -91,6 +91,60 @@ void verify_td_lifecycle(void)
printf("\t ... PASSED\n");
}

+/*
+ * Verifies TDX_REPORT_FATAL_ERROR functionality.
+ */
+TDX_GUEST_FUNCTION(guest_code_report_fatal_error)
+{
+ uint64_t err;
+ /* Note: err should follow the GHCI spec definition:
+ * bits 31:0 should be set to 0.
+ * bits 62:32 are used for TD-specific extended error code.
+ * bit 63 is used to mark additional information in shared memory.
+ */
+ err = 0x0BAAAAAD00000000;
+
+ if (err)
+ tdvmcall_fatal(err);
+
+ tdvmcall_success();
+}
+
+void verify_report_fatal_error(void)
+{
+ struct kvm_vcpu *vcpu;
+ struct kvm_vm *vm;
+
+ printf("Verifying report_fatal_error:\n");
+ /* Create a TD VM with no memory.*/
+ vm = vm_create_tdx();
+
+ /* Allocate TD guest memory and initialize the TD.*/
+ initialize_td(vm);
+
+ /* Initialize the TD vcpu and copy the test code to the guest memory.*/
+ vcpu = vm_vcpu_add_tdx(vm, 0);
+
+ /* Setup and initialize VM memory */
+ prepare_source_image(vm, guest_code_report_fatal_error,
+ TDX_FUNCTION_SIZE(guest_code_report_fatal_error),
+ 0);
+ finalize_td_memory(vm);
+
+ vcpu_run(vcpu);
+ ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_SYSTEM_EVENT);
+ ASSERT_EQ(vcpu->run->system_event.ndata, 3);
+ ASSERT_EQ(vcpu->run->system_event.data[0], TDX_REPORT_FATAL_ERROR);
+ ASSERT_EQ(vcpu->run->system_event.data[1], 0x0BAAAAAD00000000);
+ ASSERT_EQ(vcpu->run->system_event.data[2], 0);
+
+ vcpu_run(vcpu);
+ CHECK_GUEST_COMPLETION(vcpu);
+
+ kvm_vm_free(vm);
+ printf("\t ... PASSED\n");
+}
+
int main(int argc, char **argv)
{
if (!is_tdx_enabled()) {
@@ -99,6 +153,7 @@ int main(int argc, char **argv)
}

run_in_new_process(&verify_td_lifecycle);
+ run_in_new_process(&verify_report_fatal_error);

return 0;
}
--
2.37.2.789.g6183377224-goog

2022-08-30 22:23:43

by Sagi Shahar

Subject: [RFC PATCH v2 08/17] KVM: selftest: TDX: Add TDX IO writes test

The test verifies IO writes of various sizes from the guest to the host.

Google-Bug-Id: 235407183

Signed-off-by: Sagi Shahar <[email protected]>
---
tools/testing/selftests/kvm/lib/x86_64/tdx.h | 3 +
.../selftests/kvm/x86_64/tdx_vm_tests.c | 85 +++++++++++++++++++
2 files changed, 88 insertions(+)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx.h b/tools/testing/selftests/kvm/lib/x86_64/tdx.h
index 39b000118e26..f1f44c2ad40e 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx.h
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx.h
@@ -51,6 +51,9 @@
#define _PAGE_RW (1UL<<1) /* writeable */
#define _PAGE_PS (1UL<<7) /* page size bit*/

+#define TDX_VMCALL_SUCCESS 0x0000000000000000
+#define TDX_VMCALL_INVALID_OPERAND 0x8000000000000000
+
#define TDX_GET_TD_VM_CALL_INFO 0x10000
#define TDX_REPORT_FATAL_ERROR 0x10003
#define TDX_INSTRUCTION_IO 30
diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
index cf8260db1f5b..ee60f77fe38e 100644
--- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
+++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
@@ -489,6 +489,90 @@ void verify_get_td_vmcall_info(void)
printf("\t ... PASSED\n");
}

+/*
+ * Verifies IO functionality by writing values of different sizes
+ * to the host.
+ */
+TDX_GUEST_FUNCTION(guest_io_writes)
+{
+ uint64_t byte_1 = 0xAB;
+ uint64_t byte_2 = 0xABCD;
+ uint64_t byte_4 = 0xFFABCDEF;
+ uint64_t ret;
+
+ ret = tdvmcall_io(TDX_TEST_PORT, 1, TDX_IO_WRITE, &byte_1);
+ if (ret)
+ tdvmcall_fatal(ret);
+
+ ret = tdvmcall_io(TDX_TEST_PORT, 2, TDX_IO_WRITE, &byte_2);
+ if (ret)
+ tdvmcall_fatal(ret);
+
+ ret = tdvmcall_io(TDX_TEST_PORT, 4, TDX_IO_WRITE, &byte_4);
+ if (ret)
+ tdvmcall_fatal(ret);
+
+ // Write an invalid number of bytes.
+ ret = tdvmcall_io(TDX_TEST_PORT, 5, TDX_IO_WRITE, &byte_4);
+ if (ret)
+ tdvmcall_fatal(ret);
+
+ tdvmcall_success();
+}
+
+void verify_guest_writes(void)
+{
+ struct kvm_vcpu *vcpu;
+ struct kvm_vm *vm;
+ uint8_t byte_1;
+ uint16_t byte_2;
+ uint32_t byte_4;
+
+ printf("Verifying guest writes:\n");
+ /* Create a TD VM with no memory.*/
+ vm = vm_create_tdx();
+
+ /* Allocate TD guest memory and initialize the TD.*/
+ initialize_td(vm);
+
+ /* Initialize the TD vcpu and copy the test code to the guest memory.*/
+ vcpu = vm_vcpu_add_tdx(vm, 0);
+
+ /* Setup and initialize VM memory */
+ prepare_source_image(vm, guest_io_writes,
+ TDX_FUNCTION_SIZE(guest_io_writes), 0);
+ finalize_td_memory(vm);
+
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ CHECK_IO(vcpu, TDX_TEST_PORT, 1, TDX_IO_WRITE);
+ byte_1 = *(uint8_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
+
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ CHECK_IO(vcpu, TDX_TEST_PORT, 2, TDX_IO_WRITE);
+ byte_2 = *(uint16_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
+
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ CHECK_IO(vcpu, TDX_TEST_PORT, 4, TDX_IO_WRITE);
+ byte_4 = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
+
+ ASSERT_EQ(byte_1, 0xAB);
+ ASSERT_EQ(byte_2, 0xABCD);
+ ASSERT_EQ(byte_4, 0xFFABCDEF);
+
+ vcpu_run(vcpu);
+ ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_SYSTEM_EVENT);
+ ASSERT_EQ(vcpu->run->system_event.data[1], TDX_VMCALL_INVALID_OPERAND);
+
+ vcpu_run(vcpu);
+ CHECK_GUEST_COMPLETION(vcpu);
+
+ kvm_vm_free(vm);
+ printf("\t ... PASSED\n");
+}
+
int main(int argc, char **argv)
{
if (!is_tdx_enabled()) {
@@ -501,6 +585,7 @@ int main(int argc, char **argv)
run_in_new_process(&verify_td_ioexit);
run_in_new_process(&verify_td_cpuid);
run_in_new_process(&verify_get_td_vmcall_info);
+ run_in_new_process(&verify_guest_writes);

return 0;
}
--
2.37.2.789.g6183377224-goog

2022-08-30 22:23:45

by Sagi Shahar

Subject: [RFC PATCH v2 11/17] KVM: selftest: TDX: Add TDX HLT exit test

The test verifies that the guest runs TDVMCALL<INSTRUCTION.HLT> and that
the guest vCPU enters the halted state.

Signed-off-by: Erdem Aktas <[email protected]>
Signed-off-by: Sagi Shahar <[email protected]>
---
tools/testing/selftests/kvm/lib/x86_64/tdx.h | 16 ++++
.../selftests/kvm/x86_64/tdx_vm_tests.c | 88 +++++++++++++++++++
2 files changed, 104 insertions(+)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx.h b/tools/testing/selftests/kvm/lib/x86_64/tdx.h
index 263834979727..b11200028546 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx.h
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx.h
@@ -56,6 +56,7 @@

#define TDX_GET_TD_VM_CALL_INFO 0x10000
#define TDX_REPORT_FATAL_ERROR 0x10003
+#define TDX_INSTRUCTION_HLT 12
#define TDX_INSTRUCTION_IO 30
#define TDX_INSTRUCTION_RDMSR 31
#define TDX_INSTRUCTION_WRMSR 32
@@ -292,6 +293,21 @@ static inline uint64_t tdvmcall_wrmsr(uint64_t index, uint64_t value)
return regs.r10;
}

+/*
+ * Execute HLT instruction.
+ */
+static inline uint64_t tdvmcall_hlt(uint64_t interrupt_blocked_flag)
+{
+ struct kvm_regs regs;
+
+ memset(&regs, 0, sizeof(regs));
+ regs.r11 = TDX_INSTRUCTION_HLT;
+ regs.r12 = interrupt_blocked_flag;
+ regs.rcx = 0x1C00;
+ tdcall(&regs);
+ return regs.r10;
+}
+
/*
* Reports a 32 bit value from the guest to user space using a TDVM IO call.
* Data is reported on port TDX_DATA_REPORT_PORT.
diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
index fb3b8de7e5cd..39604aac54bd 100644
--- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
+++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
@@ -883,6 +883,93 @@ void verify_guest_msr_writes(void)
printf("\t ... PASSED\n");
}

+/*
+ * Verifies HLT functionality.
+ */
+TDX_GUEST_FUNCTION(guest_hlt)
+{
+ uint64_t ret;
+ uint64_t interrupt_blocked_flag;
+
+ interrupt_blocked_flag = 0;
+ ret = tdvmcall_hlt(interrupt_blocked_flag);
+ if (ret)
+ tdvmcall_fatal(ret);
+
+ tdvmcall_success();
+}
+
+void _verify_guest_hlt(int signum);
+
+void wake_me(int interval)
+{
+ struct sigaction action;
+
+ action.sa_handler = _verify_guest_hlt;
+ sigemptyset(&action.sa_mask);
+ action.sa_flags = 0;
+
+ TEST_ASSERT(sigaction(SIGALRM, &action, NULL) == 0,
+ "Could not set the alarm handler!");
+
+ alarm(interval);
+}
+
+void _verify_guest_hlt(int signum)
+{
+ static struct kvm_vcpu *vcpu;
+ struct kvm_vm *vm;
+
+ /*
+ * This function will also be called by the SIGALRM handler to check the
+ * vCPU MP state. If the vCPU has already been initialized, then we are in
+ * the signal handler. Check the MP state and let the guest run again.
+ */
+ if (vcpu != NULL) {
+ struct kvm_mp_state mp_state;
+
+ vcpu_mp_state_get(vcpu, &mp_state);
+ ASSERT_EQ(mp_state.mp_state, KVM_MP_STATE_HALTED);
+
+ /* Let the guest run and finish the test. */
+ mp_state.mp_state = KVM_MP_STATE_RUNNABLE;
+ vcpu_mp_state_set(vcpu, &mp_state);
+ return;
+ }
+
+ printf("Verifying HLT:\n");
+
+ /* Create a TD VM with no memory.*/
+ vm = vm_create_tdx();
+
+ /* Allocate TD guest memory and initialize the TD.*/
+ initialize_td(vm);
+
+ /* Initialize the TD vcpu and copy the test code to the guest memory.*/
+ vcpu = vm_vcpu_add_tdx(vm, 0);
+
+ /* Setup and initialize VM memory */
+ prepare_source_image(vm, guest_hlt,
+ TDX_FUNCTION_SIZE(guest_hlt), 0);
+ finalize_td_memory(vm);
+
+ printf("\t ... Running guest\n");
+
+ /* Wait 1 second for guest to execute HLT */
+ wake_me(1);
+ vcpu_run(vcpu);
+
+ CHECK_GUEST_COMPLETION(vcpu);
+
+ kvm_vm_free(vm);
+ printf("\t ... PASSED\n");
+}
+
+void verify_guest_hlt(void)
+{
+ _verify_guest_hlt(0);
+}
+
int main(int argc, char **argv)
{
if (!is_tdx_enabled()) {
@@ -899,6 +986,7 @@ int main(int argc, char **argv)
run_in_new_process(&verify_guest_reads);
run_in_new_process(&verify_guest_msr_reads);
run_in_new_process(&verify_guest_msr_writes);
+ run_in_new_process(&verify_guest_hlt);

return 0;
}
--
2.37.2.789.g6183377224-goog

2022-08-30 22:23:48

by Sagi Shahar

Subject: [RFC PATCH v2 10/17] KVM: selftest: TDX: Add TDX MSR read/write tests

The test verifies reads and writes of MSR registers with different access
levels.

Signed-off-by: Sagi Shahar <[email protected]>
---
tools/testing/selftests/kvm/lib/x86_64/tdx.h | 34 +++
.../selftests/kvm/x86_64/tdx_vm_tests.c | 229 ++++++++++++++++++
2 files changed, 263 insertions(+)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx.h b/tools/testing/selftests/kvm/lib/x86_64/tdx.h
index f1f44c2ad40e..263834979727 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx.h
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx.h
@@ -57,6 +57,8 @@
#define TDX_GET_TD_VM_CALL_INFO 0x10000
#define TDX_REPORT_FATAL_ERROR 0x10003
#define TDX_INSTRUCTION_IO 30
+#define TDX_INSTRUCTION_RDMSR 31
+#define TDX_INSTRUCTION_WRMSR 32

#define TDX_SUCCESS_PORT 0x30
#define TDX_TEST_PORT 0x31
@@ -258,6 +260,38 @@ static inline uint64_t tdvmcall_get_td_vmcall_info(uint64_t *r11, uint64_t *r12,
return regs.r10;
}

+/*
+ * Read MSR register.
+ */
+static inline uint64_t tdvmcall_rdmsr(uint64_t index, uint64_t *ret_value)
+{
+ struct kvm_regs regs;
+
+ memset(&regs, 0, sizeof(regs));
+ regs.r11 = TDX_INSTRUCTION_RDMSR;
+ regs.r12 = index;
+ regs.rcx = 0x1C00;
+ tdcall(&regs);
+ *ret_value = regs.r11;
+ return regs.r10;
+}
+
+/*
+ * Write MSR register.
+ */
+static inline uint64_t tdvmcall_wrmsr(uint64_t index, uint64_t value)
+{
+ struct kvm_regs regs;
+
+ memset(&regs, 0, sizeof(regs));
+ regs.r11 = TDX_INSTRUCTION_WRMSR;
+ regs.r12 = index;
+ regs.r13 = value;
+ regs.rcx = 0x3C00;
+ tdcall(&regs);
+ return regs.r10;
+}
+
/*
* Reports a 32 bit value from the guest to user space using a TDVM IO call.
* Data is reported on port TDX_DATA_REPORT_PORT.
diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
index 85b5ab99424e..fb3b8de7e5cd 100644
--- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
+++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
@@ -55,6 +55,40 @@
(VCPU)->run->system_event.data[1]); \
} while (0)

+
+/*
+ * Define a filter which denies all MSR access except the following:
+ * MTTR_BASE_0: Allow read/write access
+ * MTTR_BASE_1: Allow read access
+ * MTTR_BASE_2: Allow write access
+ */
+static u64 allow_bits = 0xFFFFFFFFFFFFFFFF;
+#define MTTR_BASE_0 (0x200)
+#define MTTR_BASE_1 (0x202)
+#define MTTR_BASE_2 (0x204)
+struct kvm_msr_filter test_filter = {
+ .flags = KVM_MSR_FILTER_DEFAULT_DENY,
+ .ranges = {
+ {
+ .flags = KVM_MSR_FILTER_READ |
+ KVM_MSR_FILTER_WRITE,
+ .nmsrs = 1,
+ .base = MTTR_BASE_0,
+ .bitmap = (uint8_t *)&allow_bits,
+ }, {
+ .flags = KVM_MSR_FILTER_READ,
+ .nmsrs = 1,
+ .base = MTTR_BASE_1,
+ .bitmap = (uint8_t *)&allow_bits,
+ }, {
+ .flags = KVM_MSR_FILTER_WRITE,
+ .nmsrs = 1,
+ .base = MTTR_BASE_2,
+ .bitmap = (uint8_t *)&allow_bits,
+ },
+ },
+};
+
static uint64_t read_64bit_from_guest(struct kvm_vcpu *vcpu, uint64_t port)
{
uint32_t lo, hi;
@@ -656,6 +690,199 @@ void verify_guest_reads(void)
printf("\t ... PASSED\n");
}

+/*
+ * Verifies MSR read functionality.
+ */
+TDX_GUEST_FUNCTION(guest_msr_read)
+{
+ uint64_t data;
+ uint64_t ret;
+
+ ret = tdvmcall_rdmsr(MTTR_BASE_0, &data);
+ if (ret)
+ tdvmcall_fatal(ret);
+
+ ret = tdvm_report_64bit_to_user_space(data);
+ if (ret)
+ tdvmcall_fatal(ret);
+
+ ret = tdvmcall_rdmsr(MTTR_BASE_1, &data);
+ if (ret)
+ tdvmcall_fatal(ret);
+
+ ret = tdvm_report_64bit_to_user_space(data);
+ if (ret)
+ tdvmcall_fatal(ret);
+
+ /* We expect this call to fail since MTTR_BASE_2 is write only */
+ ret = tdvmcall_rdmsr(MTTR_BASE_2, &data);
+ if (ret) {
+ ret = tdvm_report_64bit_to_user_space(ret);
+ if (ret)
+ tdvmcall_fatal(ret);
+ } else {
+ tdvmcall_fatal(-99);
+ }
+
+ tdvmcall_success();
+}
+
+void verify_guest_msr_reads(void)
+{
+ struct kvm_vcpu *vcpu;
+ struct kvm_vm *vm;
+ uint64_t data;
+ int ret;
+
+ printf("Verifying guest msr reads:\n");
+
+ /* Create a TD VM with no memory.*/
+ vm = vm_create_tdx();
+
+ /* Set explicit MSR filter map to control access to the MSR registers
+ * used in the test.
+ */
+ printf("\t ... Setting test MSR filter\n");
+ ret = kvm_check_cap(KVM_CAP_X86_USER_SPACE_MSR);
+ TEST_ASSERT(ret, "KVM_CAP_X86_USER_SPACE_MSR is unavailable");
+ vm_enable_cap(vm, KVM_CAP_X86_USER_SPACE_MSR, KVM_MSR_EXIT_REASON_FILTER);
+
+ ret = kvm_check_cap(KVM_CAP_X86_MSR_FILTER);
+ TEST_ASSERT(ret, "KVM_CAP_X86_MSR_FILTER is unavailable");
+
+ ret = ioctl(vm->fd, KVM_X86_SET_MSR_FILTER, &test_filter);
+ TEST_ASSERT(ret == 0,
+ "KVM_X86_SET_MSR_FILTER failed, ret: %i errno: %i (%s)",
+ ret, errno, strerror(errno));
+
+ /* Allocate TD guest memory and initialize the TD.*/
+ initialize_td(vm);
+
+ /* Initialize the TD vcpu and copy the test code to the guest memory.*/
+ vcpu = vm_vcpu_add_tdx(vm, 0);
+
+ /* Setup and initialize VM memory */
+ prepare_source_image(vm, guest_msr_read,
+ TDX_FUNCTION_SIZE(guest_msr_read), 0);
+ finalize_td_memory(vm);
+
+ printf("\t ... Setting test MTTR values\n");
+ /* valid values for mttr type are 0, 1, 4, 5, 6 */
+ vcpu_set_msr(vcpu, MTTR_BASE_0, 4);
+ vcpu_set_msr(vcpu, MTTR_BASE_1, 5);
+ vcpu_set_msr(vcpu, MTTR_BASE_2, 6);
+
+ printf("\t ... Running guest\n");
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ data = read_64bit_from_guest(vcpu, TDX_DATA_REPORT_PORT);
+ ASSERT_EQ(data, 4);
+
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ data = read_64bit_from_guest(vcpu, TDX_DATA_REPORT_PORT);
+ ASSERT_EQ(data, 5);
+
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ data = read_64bit_from_guest(vcpu, TDX_DATA_REPORT_PORT);
+ ASSERT_EQ(data, TDX_VMCALL_INVALID_OPERAND);
+
+ vcpu_run(vcpu);
+ CHECK_GUEST_COMPLETION(vcpu);
+
+ kvm_vm_free(vm);
+ printf("\t ... PASSED\n");
+}
+
+/*
+ * Verifies MSR write functionality.
+ */
+TDX_GUEST_FUNCTION(guest_msr_write)
+{
+ uint64_t ret;
+
+ ret = tdvmcall_wrmsr(MTTR_BASE_0, 4);
+ if (ret)
+ tdvmcall_fatal(ret);
+
+ /* We expect this call to fail since MTTR_BASE_1 is read only */
+ ret = tdvmcall_wrmsr(MTTR_BASE_1, 5);
+ if (ret) {
+ ret = tdvm_report_64bit_to_user_space(ret);
+ if (ret)
+ tdvmcall_fatal(ret);
+ } else {
+ tdvmcall_fatal(-99);
+ }
+
+
+ ret = tdvmcall_wrmsr(MTTR_BASE_2, 6);
+ if (ret)
+ tdvmcall_fatal(ret);
+
+ tdvmcall_success();
+}
+
+void verify_guest_msr_writes(void)
+{
+ struct kvm_vcpu *vcpu;
+ struct kvm_vm *vm;
+ uint64_t data;
+ int ret;
+
+ printf("Verifying guest msr writes:\n");
+
+ /* Create a TD VM with no memory.*/
+ vm = vm_create_tdx();
+
+ /* Set explicit MSR filter map to control access to the MSR registers
+ * used in the test.
+ */
+ printf("\t ... Setting test MSR filter\n");
+ ret = kvm_check_cap(KVM_CAP_X86_USER_SPACE_MSR);
+ TEST_ASSERT(ret, "KVM_CAP_X86_USER_SPACE_MSR is unavailable");
+ vm_enable_cap(vm, KVM_CAP_X86_USER_SPACE_MSR, KVM_MSR_EXIT_REASON_FILTER);
+
+ ret = kvm_check_cap(KVM_CAP_X86_MSR_FILTER);
+ TEST_ASSERT(ret, "KVM_CAP_X86_MSR_FILTER is unavailable");
+
+ ret = ioctl(vm->fd, KVM_X86_SET_MSR_FILTER, &test_filter);
+ TEST_ASSERT(ret == 0,
+ "KVM_X86_SET_MSR_FILTER failed, ret: %i errno: %i (%s)",
+ ret, errno, strerror(errno));
+
+ /* Allocate TD guest memory and initialize the TD.*/
+ initialize_td(vm);
+
+ /* Initialize the TD vcpu and copy the test code to the guest memory.*/
+ vcpu = vm_vcpu_add_tdx(vm, 0);
+
+ /* Setup and initialize VM memory */
+ prepare_source_image(vm, guest_msr_write,
+ TDX_FUNCTION_SIZE(guest_msr_write), 0);
+ finalize_td_memory(vm);
+
+ printf("\t ... Running guest\n");
+ /* Only the write to MTTR_BASE_1 should trigger an exit */
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ data = read_64bit_from_guest(vcpu, TDX_DATA_REPORT_PORT);
+ ASSERT_EQ(data, TDX_VMCALL_INVALID_OPERAND);
+
+ vcpu_run(vcpu);
+ CHECK_GUEST_COMPLETION(vcpu);
+
+ printf("\t ... Verifying MTTR values writen by guest\n");
+
+ ASSERT_EQ(vcpu_get_msr(vcpu, MTTR_BASE_0), 4);
+ ASSERT_EQ(vcpu_get_msr(vcpu, MTTR_BASE_1), 0);
+ ASSERT_EQ(vcpu_get_msr(vcpu, MTTR_BASE_2), 6);
+
+ kvm_vm_free(vm);
+ printf("\t ... PASSED\n");
+}
+
int main(int argc, char **argv)
{
if (!is_tdx_enabled()) {
@@ -670,6 +897,8 @@ int main(int argc, char **argv)
run_in_new_process(&verify_get_td_vmcall_info);
run_in_new_process(&verify_guest_writes);
run_in_new_process(&verify_guest_reads);
+ run_in_new_process(&verify_guest_msr_reads);
+ run_in_new_process(&verify_guest_msr_writes);

return 0;
}
--
2.37.2.789.g6183377224-goog

2022-08-30 22:23:49

by Sagi Shahar

Subject: [RFC PATCH v2 17/17] KVM: selftest: TDX: Add shared memory test

From: Ryan Afranji <[email protected]>

Adds a test that sets up shared memory between the host and guest.
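
As a rough sketch of how the mapping in this patch works, assuming the
constants added below (TDX_GUEST_VIRT_SHARED_BIT = 32 and a GPA shared bit
of vm->pa_bits - 1):

    shared GVA            = BIT_ULL(32) | offset      (a 4GB window above the 1:1 map)
    shared_pdpt_index     = 1 << (32 - 30)            = 4
    pdpt[0..3] -> pd[0..3]   private 1:1 mapping of the first 4GB
    pdpt[4..7] -> pd[4..7]   same 4GB range, with the GPA shared bit set in each PDE

so the shared window reuses the existing PML4 and PDPT pages and only adds
four extra page directories.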

Signed-off-by: Ryan Afranji <[email protected]>
Signed-off-by: Sagi Shahar <[email protected]>
---
tools/testing/selftests/kvm/lib/x86_64/tdx.h | 38 +++++-
.../selftests/kvm/lib/x86_64/tdx_lib.c | 41 +++++-
.../selftests/kvm/x86_64/tdx_vm_tests.c | 124 ++++++++++++++++++
3 files changed, 192 insertions(+), 11 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx.h b/tools/testing/selftests/kvm/lib/x86_64/tdx.h
index 7af2d189043f..be8564f4672d 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx.h
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx.h
@@ -23,10 +23,21 @@

/*
* Max Page Table Size
- * To map 4GB memory region with 2MB pages, there needs to be 1 page for PML4,
- * 1 Page for PDPT, 4 pages for PD. Reserving 6 pages for PT.
+ * To map 4GB memory regions for each private and shared memory with 2MB pages,
+ * there needs to be 1 page for PML4, 1 Page for PDPT, 8 pages for PD. Reserving
+ * 10 pages for PT.
*/
-#define TDX_GUEST_NR_PT_PAGES (1 + 1 + 4)
+#define TDX_GUEST_NR_PT_PAGES (1 + 1 + 8)
+
+/*
+ * Guest Virtual Address Shared Bit
+ * TDX's shared bit is defined as the highest order bit in the GPA. Since the
+ * highest order bit allowed in the GPA may exceed the GVA's, a 1:1 mapping
+ * cannot be applied for shared memory. This value is a bit within the range
+ * [32 - 38] (0-indexed) that will designate a 4 GB region of GVAs that map the
+ * shared GPAs. This approach does not increase number of PML4 and PDPT pages.
+ */
+#define TDX_GUEST_VIRT_SHARED_BIT 32

/*
* Predefined GDTR values.
@@ -60,6 +71,7 @@
#define TDX_VMCALL_INVALID_OPERAND 0x8000000000000000

#define TDX_GET_TD_VM_CALL_INFO 0x10000
+#define TDX_MAP_GPA 0x10001
#define TDX_REPORT_FATAL_ERROR 0x10003
#define TDX_INSTRUCTION_CPUID 10
#define TDX_INSTRUCTION_HLT 12
@@ -101,7 +113,7 @@ struct __packed tdx_gdtr {
struct page_table {
uint64_t pml4[512];
uint64_t pdpt[512];
- uint64_t pd[4][512];
+ uint64_t pd[8][512];
};

void add_td_memory(struct kvm_vm *vm, void *source_page,
@@ -411,6 +423,24 @@ static inline uint64_t tdcall_vp_info(uint64_t *rcx, uint64_t *rdx,
return regs.rax;
}

+/*
+ * Execute MapGPA instruction.
+ */
+static inline uint64_t tdvmcall_map_gpa(uint64_t address, uint64_t size,
+ uint64_t *data_out)
+{
+ struct kvm_regs regs;
+
+ memset(&regs, 0, sizeof(regs));
+ regs.r11 = TDX_MAP_GPA;
+ regs.r12 = address;
+ regs.r13 = size;
+ regs.rcx = 0x3C00;
+ tdcall(&regs);
+ *data_out = regs.r11;
+ return regs.r10;
+}
+
/*
* Reports a 32 bit value from the guest to user space using a TDVM IO call.
* Data is reported on port TDX_DATA_REPORT_PORT.
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx_lib.c b/tools/testing/selftests/kvm/lib/x86_64/tdx_lib.c
index dc9a44ae4064..23893949c3a1 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx_lib.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx_lib.c
@@ -183,29 +183,56 @@ void build_gdtr_table(void *gdtr_target, void *gdt_target)
* which will be used by the TDX guest when paging is enabled.
* TODO: use virt_pg_map() functions to dynamically allocate the page tables.
*/
-void build_page_tables(void *pt_target, uint64_t pml4_base_address)
+void build_page_tables(void *pt_target, uint64_t pml4_base_address,
+ uint64_t gpa_shared_bit)
{
uint64_t i;
+ uint64_t shared_pdpt_index;
+ uint64_t gpa_shared_mask;
+ uint64_t *pde;
struct page_table *pt;

pt = malloc(sizeof(struct page_table));
TEST_ASSERT(pt != NULL, "Could not allocate memory for page tables!\n");
memset((void *) &(pt->pml4[0]), 0, sizeof(pt->pml4));
memset((void *) &(pt->pdpt[0]), 0, sizeof(pt->pdpt));
- for (i = 0; i < 4; i++)
+ for (i = 0; i < 8; i++)
memset((void *) &(pt->pd[i][0]), 0, sizeof(pt->pd[i]));

+ /* Populate pml4 entry. */
pt->pml4[0] = (pml4_base_address + PAGE_SIZE) |
_PAGE_PRESENT | _PAGE_RW;
+
+ /* Populate pdpt entries for private memory region. */
for (i = 0; i < 4; i++)
pt->pdpt[i] = (pml4_base_address + (i + 2) * PAGE_SIZE) |
- _PAGE_PRESENT | _PAGE_RW;
+ _PAGE_PRESENT | _PAGE_RW;
+
+ /* Index used in pdpt #0 to map to pd with guest virt shared bit set. */
+ static_assert(TDX_GUEST_VIRT_SHARED_BIT >= 32 &&
+ TDX_GUEST_VIRT_SHARED_BIT <= 38,
+ "Guest virtual shared bit must be in the range [32 - 38].\n");
+ shared_pdpt_index = 1 << (TDX_GUEST_VIRT_SHARED_BIT - 30);

- uint64_t *pde = &(pt->pd[0][0]);
+ /* Populate pdpt entries for shared memory region. */
+ for (i = 0; i < 4; i++)
+ pt->pdpt[shared_pdpt_index + i] = (pml4_base_address + (i + 6) *
+ PAGE_SIZE) | _PAGE_PRESENT |
+ _PAGE_RW;

- for (i = 0; i < sizeof(pt->pd) / sizeof(pt->pd[0][0]); i++, pde++)
+ /* Populate pd entries for private memory region. */
+ pde = &(pt->pd[0][0]);
+ for (i = 0; i < (sizeof(pt->pd) / sizeof(pt->pd[0][0])) / 2; i++, pde++)
*pde = (i << 21) | _PAGE_PRESENT | _PAGE_RW | _PAGE_PS;
- memcpy(pt_target, pt, 6 * PAGE_SIZE);
+
+ /* Populate pd entries for shared memory region; set shared bit. */
+ pde = &(pt->pd[4][0]);
+ gpa_shared_mask = BIT_ULL(gpa_shared_bit);
+ for (i = 0; i < (sizeof(pt->pd) / sizeof(pt->pd[0][0])) / 2; i++, pde++)
+ *pde = gpa_shared_mask | (i << 21) | _PAGE_PRESENT | _PAGE_RW |
+ _PAGE_PS;
+
+ memcpy(pt_target, pt, 10 * PAGE_SIZE);
}

static void
@@ -318,7 +345,7 @@ void prepare_source_image(struct kvm_vm *vm, void *guest_code,
guest_code_base = gdt_address + (TDX_GUEST_STACK_NR_PAGES *
PAGE_SIZE);

- build_page_tables(pt_address, TDX_GUEST_PT_FIXED_ADDR);
+ build_page_tables(pt_address, TDX_GUEST_PT_FIXED_ADDR, vm->pa_bits - 1);
build_gdtr_table(gdtr_address, gdt_address);

/* reset vector code should end with int3 instructions.
diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
index 8d49099e1ed8..a96abada54b6 100644
--- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
+++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
@@ -1515,6 +1515,129 @@ void verify_tdcall_vp_info(void)
printf("\t ... PASSED\n");
}

+TDX_GUEST_FUNCTION(guest_shared_mem)
+{
+ uint64_t gpa_shared_mask;
+ uint64_t gva_shared_mask;
+ uint64_t shared_gpa;
+ uint64_t shared_gva;
+ uint64_t gpa_width;
+ uint64_t failed_gpa;
+ uint64_t ret;
+ uint64_t err;
+
+ gva_shared_mask = BIT_ULL(TDX_GUEST_VIRT_SHARED_BIT);
+ shared_gpa = 0x80000000;
+ shared_gva = gva_shared_mask | shared_gpa;
+
+ /* Read highest order physical bit to calculate shared mask. */
+ err = tdcall_vp_info(&gpa_width, 0, 0, 0, 0, 0);
+ if (err)
+ tdvmcall_fatal(err);
+
+ /* Map gpa as shared. */
+ gpa_shared_mask = BIT_ULL(gpa_width - 1);
+ ret = tdvmcall_map_gpa(shared_gpa | gpa_shared_mask, PAGE_SIZE,
+ &failed_gpa);
+ if (ret)
+ tdvmcall_fatal(ret);
+
+ /* Write to shared memory. */
+ *(uint16_t *)shared_gva = 0x1234;
+ tdvmcall_success();
+
+ /* Read from shared memory; report to host. */
+ ret = tdvmcall_io(TDX_TEST_PORT, 2, TDX_IO_WRITE,
+ (uint64_t *)shared_gva);
+ if (ret)
+ tdvmcall_fatal(ret);
+
+ tdvmcall_success();
+}
+
+void verify_shared_mem(void)
+{
+ struct kvm_vcpu *vcpu;
+ struct kvm_vm *vm;
+ struct userspace_mem_region *region;
+ uint16_t guest_read_val;
+ uint64_t shared_gpa;
+ uint64_t shared_hva;
+ uint64_t shared_pages_num;
+ int ctr;
+
+ printf("Verifying shared memory\n");
+
+ /* Create a TD VM with no memory. */
+ vm = vm_create_tdx();
+
+ /* Allocate TD guest memory and initialize the TD. */
+ initialize_td(vm);
+
+ /* Initialize the TD vcpu and copy the test code to the guest memory. */
+ vcpu = vm_vcpu_add_tdx(vm, 0);
+
+ /* Allocate shared memory. */
+ shared_gpa = 0x80000000;
+ shared_pages_num = 1;
+ vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
+ shared_gpa, 1,
+ shared_pages_num, 0);
+
+ /* Setup and initialize VM memory. */
+ prepare_source_image(vm, guest_shared_mem,
+ TDX_FUNCTION_SIZE(guest_shared_mem), 0);
+ finalize_td_memory(vm);
+
+ /* Begin guest execution; guest writes to shared memory. */
+ printf("\t ... Starting guest execution\n");
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+
+ /* Get the host's shared memory address. */
+ shared_hva = 0;
+ hash_for_each(vm->regions.slot_hash, ctr, region, slot_node) {
+ uint64_t region_guest_addr;
+
+ region_guest_addr = (uint64_t)region->region.guest_phys_addr;
+ if (region_guest_addr == (shared_gpa)) {
+ shared_hva = (uint64_t)region->host_mem;
+ break;
+ }
+ }
+ TEST_ASSERT(shared_hva != 0,
+ "Guest address not found in guest memory regions\n");
+
+ /* Verify guest write -> host read succeeds. */
+ printf("\t ... Guest wrote 0x1234 to shared memory\n");
+ if (*(uint16_t *)shared_hva != 0x1234) {
+ printf("\t ... FAILED: Host read 0x%x instead of 0x1234\n",
+ *(uint16_t *)shared_hva);
+ }
+ printf("\t ... Host read 0x%x from shared memory\n",
+ *(uint16_t *)shared_hva);
+
+ /* Verify host write -> guest read succeeds. */
+ *((uint16_t *)shared_hva) = 0xABCD;
+ printf("\t ... Host wrote 0xabcd to shared memory\n");
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ CHECK_IO(vcpu, TDX_TEST_PORT, 2, TDX_IO_WRITE);
+ guest_read_val = *(uint16_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
+
+ if (guest_read_val != 0xABCD) {
+ printf("\t ... FAILED: Guest read 0x%x instead of 0xABCD\n",
+ guest_read_val);
+ kvm_vm_free(vm);
+ return;
+ }
+ printf("\t ... Guest read 0x%x from shared memory\n",
+ guest_read_val);
+
+ kvm_vm_free(vm);
+ printf("\t ... PASSED\n");
+}
+
int main(int argc, char **argv)
{
if (!is_tdx_enabled()) {
@@ -1537,6 +1660,7 @@ int main(int argc, char **argv)
run_in_new_process(&verify_mmio_writes);
run_in_new_process(&verify_host_reading_private_mem);
run_in_new_process(&verify_tdcall_vp_info);
+ run_in_new_process(&verify_shared_mem);

return 0;
}
--
2.37.2.789.g6183377224-goog

2022-08-30 22:24:24

by Sagi Shahar

[permalink] [raw]
Subject: [RFC PATCH v2 12/17] KVM: selftest: TDX: Add TDX MMIO reads test

The test verifies MMIO reads of various sizes from the host to the guest.
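
The guest issues the access through a TDG.VP.VMCALL <#VE.RequestMMIO>,
which surfaces in the host as a regular KVM_EXIT_MMIO exit; the host
completes the read by filling run->mmio.data before re-entering the
vcpu. A minimal sketch of the two sides for a 4 byte read, using the
helpers and macros added in this patch:

	/* Guest: request a 4 byte MMIO read. */
	uint64_t data, ret;

	ret = tdvmcall_mmio_read(MMIO_VALID_ADDRESS, 4, &data);
	if (ret)
		tdvmcall_fatal(ret);

	/* Host: the vcpu exits with KVM_EXIT_MMIO, supply the value. */
	vcpu_run(vcpu);
	CHECK_MMIO(vcpu, MMIO_VALID_ADDRESS, 4, TDX_MMIO_READ);
	*(uint32_t *)vcpu->run->mmio.data = 0x12345678;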

Signed-off-by: Sagi Shahar <[email protected]>
---
tools/testing/selftests/kvm/lib/x86_64/tdx.h | 21 ++++
.../selftests/kvm/x86_64/tdx_vm_tests.c | 113 ++++++++++++++++++
2 files changed, 134 insertions(+)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx.h b/tools/testing/selftests/kvm/lib/x86_64/tdx.h
index b11200028546..7045d617dd78 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx.h
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx.h
@@ -60,12 +60,15 @@
#define TDX_INSTRUCTION_IO 30
#define TDX_INSTRUCTION_RDMSR 31
#define TDX_INSTRUCTION_WRMSR 32
+#define TDX_INSTRUCTION_VE_REQUEST_MMIO 48

#define TDX_SUCCESS_PORT 0x30
#define TDX_TEST_PORT 0x31
#define TDX_DATA_REPORT_PORT 0x32
#define TDX_IO_READ 0
#define TDX_IO_WRITE 1
+#define TDX_MMIO_READ 0
+#define TDX_MMIO_WRITE 1

#define GDT_ENTRY(flags, base, limit) \
((((base) & 0xff000000ULL) << (56-24)) | \
@@ -308,6 +311,24 @@ static inline uint64_t tdvmcall_hlt(uint64_t interrupt_blocked_flag)
return regs.r10;
}

+/*
+ * Execute MMIO request instruction for read.
+ */
+static inline uint64_t tdvmcall_mmio_read(uint64_t address, uint64_t size, uint64_t *data_out)
+{
+ struct kvm_regs regs;
+
+ memset(&regs, 0, sizeof(regs));
+ regs.r11 = TDX_INSTRUCTION_VE_REQUEST_MMIO;
+ regs.r12 = size;
+ regs.r13 = TDX_MMIO_READ;
+ regs.r14 = address;
+ regs.rcx = 0x7C00;
+ tdcall(&regs);
+ *data_out = regs.r11;
+ return regs.r10;
+}
+
/*
* Reports a 32 bit value from the guest to user space using a TDVM IO call.
* Data is reported on port TDX_DATA_REPORT_PORT.
diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
index 39604aac54bd..963e4feae31a 100644
--- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
+++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
@@ -1,6 +1,7 @@
// SPDX-License-Identifier: GPL-2.0-only

#include "asm/kvm.h"
+#include "linux/kernel.h"
#include <bits/stdint-uintn.h>
#include <fcntl.h>
#include <limits.h>
@@ -47,6 +48,24 @@
(VCPU)->run->io.direction); \
} while (0)

+#define CHECK_MMIO(VCPU, ADDR, SIZE, DIR) \
+ do { \
+ TEST_ASSERT((VCPU)->run->exit_reason == KVM_EXIT_MMIO, \
+ "Got exit_reason other than KVM_EXIT_MMIO: %u (%s)\n", \
+ (VCPU)->run->exit_reason, \
+ exit_reason_str((VCPU)->run->exit_reason)); \
+ \
+ TEST_ASSERT(((VCPU)->run->exit_reason == KVM_EXIT_MMIO) && \
+ ((VCPU)->run->mmio.phys_addr == (ADDR)) && \
+ ((VCPU)->run->mmio.len == (SIZE)) && \
+ ((VCPU)->run->mmio.is_write == (DIR)), \
+ "Got unexpected MMIO exit values: %u (%s) %llu %d %d\n", \
+ (VCPU)->run->exit_reason, \
+ exit_reason_str((VCPU)->run->exit_reason), \
+ (VCPU)->run->mmio.phys_addr, (VCPU)->run->mmio.len, \
+ (VCPU)->run->mmio.is_write); \
+ } while (0)
+
#define CHECK_GUEST_FAILURE(VCPU) \
do { \
if ((VCPU)->run->exit_reason == KVM_EXIT_SYSTEM_EVENT) \
@@ -89,6 +108,8 @@ struct kvm_msr_filter test_filter = {
},
};

+#define MMIO_VALID_ADDRESS (TDX_GUEST_MAX_NR_PAGES * PAGE_SIZE + 1)
+
static uint64_t read_64bit_from_guest(struct kvm_vcpu *vcpu, uint64_t port)
{
uint32_t lo, hi;
@@ -970,6 +991,97 @@ void verify_guest_hlt(void)
_verify_guest_hlt(0);
}

+TDX_GUEST_FUNCTION(guest_mmio_reads)
+{
+ uint64_t data;
+ uint64_t ret;
+
+ ret = tdvmcall_mmio_read(MMIO_VALID_ADDRESS, 1, &data);
+ if (ret)
+ tdvmcall_fatal(ret);
+ if (data != 0x12)
+ tdvmcall_fatal(1);
+
+ ret = tdvmcall_mmio_read(MMIO_VALID_ADDRESS, 2, &data);
+ if (ret)
+ tdvmcall_fatal(ret);
+ if (data != 0x1234)
+ tdvmcall_fatal(2);
+
+ ret = tdvmcall_mmio_read(MMIO_VALID_ADDRESS, 4, &data);
+ if (ret)
+ tdvmcall_fatal(ret);
+ if (data != 0x12345678)
+ tdvmcall_fatal(4);
+
+ ret = tdvmcall_mmio_read(MMIO_VALID_ADDRESS, 8, &data);
+ if (ret)
+ tdvmcall_fatal(ret);
+ if (data != 0x1234567890ABCDEF)
+ tdvmcall_fatal(8);
+
+ // Read an invalid number of bytes.
+ ret = tdvmcall_mmio_read(MMIO_VALID_ADDRESS, 10, &data);
+ if (ret)
+ tdvmcall_fatal(ret);
+
+ tdvmcall_success();
+}
+
+/*
+ * Verifies guest MMIO reads.
+ */
+void verify_mmio_reads(void)
+{
+ struct kvm_vcpu *vcpu;
+ struct kvm_vm *vm;
+
+ printf("Verifying TD MMIO reads:\n");
+ /* Create a TD VM with no memory.*/
+ vm = vm_create_tdx();
+
+ /* Allocate TD guest memory and initialize the TD.*/
+ initialize_td(vm);
+
+ /* Initialize the TD vcpu and copy the test code to the guest memory.*/
+ vcpu = vm_vcpu_add_tdx(vm, 0);
+
+ /* Setup and initialize VM memory */
+ prepare_source_image(vm, guest_mmio_reads,
+ TDX_FUNCTION_SIZE(guest_mmio_reads), 0);
+ finalize_td_memory(vm);
+
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ CHECK_MMIO(vcpu, MMIO_VALID_ADDRESS, 1, TDX_MMIO_READ);
+ *(uint8_t *)vcpu->run->mmio.data = 0x12;
+
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ CHECK_MMIO(vcpu, MMIO_VALID_ADDRESS, 2, TDX_MMIO_READ);
+ *(uint16_t *)vcpu->run->mmio.data = 0x1234;
+
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ CHECK_MMIO(vcpu, MMIO_VALID_ADDRESS, 4, TDX_MMIO_READ);
+ *(uint32_t *)vcpu->run->mmio.data = 0x12345678;
+
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ CHECK_MMIO(vcpu, MMIO_VALID_ADDRESS, 8, TDX_MMIO_READ);
+ *(uint64_t *)vcpu->run->mmio.data = 0x1234567890ABCDEF;
+
+ vcpu_run(vcpu);
+ ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_SYSTEM_EVENT);
+ ASSERT_EQ(vcpu->run->system_event.data[1], TDX_VMCALL_INVALID_OPERAND);
+
+ vcpu_run(vcpu);
+ CHECK_GUEST_COMPLETION(vcpu);
+
+ kvm_vm_free(vm);
+ printf("\t ... PASSED\n");
+}
+
int main(int argc, char **argv)
{
if (!is_tdx_enabled()) {
@@ -987,6 +1099,7 @@ int main(int argc, char **argv)
run_in_new_process(&verify_guest_msr_reads);
run_in_new_process(&verify_guest_msr_writes);
run_in_new_process(&verify_guest_hlt);
+ run_in_new_process(&verify_mmio_reads);

return 0;
}
--
2.37.2.789.g6183377224-goog

2022-08-30 22:42:22

by Sagi Shahar

[permalink] [raw]
Subject: [RFC PATCH v2 16/17] KVM: selftest: TDX: Add TDG.VP.INFO test

From: Roger Wang <[email protected]>

Adds a test for TDG.VP.INFO.
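
TDG.VP.INFO returns the TD's execution environment in registers; the
host side of the test decodes them as follows (a sketch mirroring the
checks in verify_tdcall_vp_info() below):

	/*
	 * rcx[5:0]   - GPAW (guest physical address width)
	 * rcx[63:6]  - reserved, must be 0
	 * rdx        - TD attributes passed to KVM_TDX_INIT_VM
	 * r8[31:0]   - NUM_VCPUS,  r8[63:32] - MAX_VCPUS
	 * r9         - VCPU_INDEX
	 * r10, r11   - reserved, must be 0
	 */
	ret_num_vcpus = r8 & 0xFFFFFFFF;
	ret_max_vcpus = (r8 >> 32) & 0xFFFFFFFF;
	ASSERT_EQ(rcx & 0x3F, max_pa);
	ASSERT_EQ(rdx, attributes);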

Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Sagi Shahar <[email protected]>
---
tools/testing/selftests/kvm/lib/x86_64/tdx.h | 103 ++++++++----
.../selftests/kvm/lib/x86_64/tdx_lib.c | 18 ++-
.../selftests/kvm/x86_64/tdx_vm_tests.c | 150 ++++++++++++++++++
3 files changed, 235 insertions(+), 36 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx.h b/tools/testing/selftests/kvm/lib/x86_64/tdx.h
index 3729543a05a3..7af2d189043f 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx.h
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx.h
@@ -10,6 +10,11 @@
*/
#define TDX_GUEST_MAX_NR_PAGES 10000

+/*
+ * Max number of vCPUs for the guest VM
+ */
+#define TDX_GUEST_MAX_NUM_VCPUS 3
+
/*
* Page Table Address used when paging is enabled.
*/
@@ -71,6 +76,11 @@
#define TDX_MMIO_READ 0
#define TDX_MMIO_WRITE 1

+#define TDX_TDCALL_INFO 1
+
+#define TDX_TDPARAM_ATTR_SEPT_VE_DISABLE_BIT (1UL << 28)
+#define TDX_TDPARAM_ATTR_PKS_BIT (1UL << 30)
+
#define GDT_ENTRY(flags, base, limit) \
((((base) & 0xff000000ULL) << (56-24)) | \
(((flags) & 0x0000f0ffULL) << 40) | \
@@ -98,6 +108,7 @@ void add_td_memory(struct kvm_vm *vm, void *source_page,
uint64_t gpa, int size);
void finalize_td_memory(struct kvm_vm *vm);
void initialize_td(struct kvm_vm *vm);
+void initialize_td_with_attributes(struct kvm_vm *vm, uint64_t attributes);
void initialize_td_vcpu(struct kvm_vcpu *vcpu);
void prepare_source_image(struct kvm_vm *vm, void *guest_code,
size_t guest_code_size,
@@ -116,40 +127,41 @@ void prepare_source_image(struct kvm_vm *vm, void *guest_code,
static inline void tdcall(struct kvm_regs *regs)
{
asm volatile (
- "mov %13, %%rax;\n\t"
- "mov %14, %%rbx;\n\t"
- "mov %15, %%rcx;\n\t"
- "mov %16, %%rdx;\n\t"
- "mov %17, %%r8;\n\t"
- "mov %18, %%r9;\n\t"
- "mov %19, %%r10;\n\t"
- "mov %20, %%r11;\n\t"
- "mov %21, %%r12;\n\t"
- "mov %22, %%r13;\n\t"
- "mov %23, %%r14;\n\t"
- "mov %24, %%r15;\n\t"
- "mov %25, %%rbp;\n\t"
- "mov %26, %%rsi;\n\t"
- "mov %27, %%rdi;\n\t"
+ "mov %14, %%rax;\n\t"
+ "mov %15, %%rbx;\n\t"
+ "mov %16, %%rcx;\n\t"
+ "mov %17, %%rdx;\n\t"
+ "mov %18, %%r8;\n\t"
+ "mov %19, %%r9;\n\t"
+ "mov %20, %%r10;\n\t"
+ "mov %21, %%r11;\n\t"
+ "mov %22, %%r12;\n\t"
+ "mov %23, %%r13;\n\t"
+ "mov %24, %%r14;\n\t"
+ "mov %25, %%r15;\n\t"
+ "mov %26, %%rbp;\n\t"
+ "mov %27, %%rsi;\n\t"
+ "mov %28, %%rdi;\n\t"
".byte 0x66, 0x0F, 0x01, 0xCC;\n\t"
"mov %%rax, %0;\n\t"
"mov %%rbx, %1;\n\t"
- "mov %%rdx, %2;\n\t"
- "mov %%r8, %3;\n\t"
- "mov %%r9, %4;\n\t"
- "mov %%r10, %5;\n\t"
- "mov %%r11, %6;\n\t"
- "mov %%r12, %7;\n\t"
- "mov %%r13, %8;\n\t"
- "mov %%r14, %9;\n\t"
- "mov %%r15, %10;\n\t"
- "mov %%rsi, %11;\n\t"
- "mov %%rdi, %12;\n\t"
- : "=m" (regs->rax), "=m" (regs->rbx), "=m" (regs->rdx),
- "=m" (regs->r8), "=m" (regs->r9), "=m" (regs->r10),
- "=m" (regs->r11), "=m" (regs->r12), "=m" (regs->r13),
- "=m" (regs->r14), "=m" (regs->r15), "=m" (regs->rsi),
- "=m" (regs->rdi)
+ "mov %%rcx, %2;\n\t"
+ "mov %%rdx, %3;\n\t"
+ "mov %%r8, %4;\n\t"
+ "mov %%r9, %5;\n\t"
+ "mov %%r10, %6;\n\t"
+ "mov %%r11, %7;\n\t"
+ "mov %%r12, %8;\n\t"
+ "mov %%r13, %9;\n\t"
+ "mov %%r14, %10;\n\t"
+ "mov %%r15, %11;\n\t"
+ "mov %%rsi, %12;\n\t"
+ "mov %%rdi, %13;\n\t"
+ : "=m" (regs->rax), "=m" (regs->rbx), "=m" (regs->rcx),
+ "=m" (regs->rdx), "=m" (regs->r8), "=m" (regs->r9),
+ "=m" (regs->r10), "=m" (regs->r11), "=m" (regs->r12),
+ "=m" (regs->r13), "=m" (regs->r14), "=m" (regs->r15),
+ "=m" (regs->rsi), "=m" (regs->rdi)
: "m" (regs->rax), "m" (regs->rbx), "m" (regs->rcx),
"m" (regs->rdx), "m" (regs->r8), "m" (regs->r9),
"m" (regs->r10), "m" (regs->r11), "m" (regs->r12),
@@ -370,6 +382,35 @@ static inline uint64_t tdvmcall_cpuid(uint32_t eax, uint32_t ecx,
return regs.r10;
}

+/*
+ * Execute TDG.VP.INFO instruction.
+ */
+static inline uint64_t tdcall_vp_info(uint64_t *rcx, uint64_t *rdx,
+ uint64_t *r8, uint64_t *r9,
+ uint64_t *r10, uint64_t *r11)
+{
+ struct kvm_regs regs;
+
+ memset(&regs, 0, sizeof(regs));
+ regs.rax = TDX_TDCALL_INFO;
+ tdcall(&regs);
+
+ if (rcx)
+ *rcx = regs.rcx;
+ if (rdx)
+ *rdx = regs.rdx;
+ if (r8)
+ *r8 = regs.r8;
+ if (r9)
+ *r9 = regs.r9;
+ if (r10)
+ *r10 = regs.r10;
+ if (r11)
+ *r11 = regs.r11;
+
+ return regs.rax;
+}
+
/*
* Reports a 32 bit value from the guest to user space using a TDVM IO call.
* Data is reported on port TDX_DATA_REPORT_PORT.
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx_lib.c b/tools/testing/selftests/kvm/lib/x86_64/tdx_lib.c
index 72bf2ff24a29..dc9a44ae4064 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx_lib.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx_lib.c
@@ -78,10 +78,10 @@ static struct tdx_cpuid_data get_tdx_cpuid_data(struct kvm_vm *vm)
}

/*
- * Initialize a VM as a TD.
+ * Initialize a VM as a TD with attributes.
*
*/
-void initialize_td(struct kvm_vm *vm)
+void initialize_td_with_attributes(struct kvm_vm *vm, uint64_t attributes)
{
struct tdx_cpuid_data cpuid_data;
int rc;
@@ -99,7 +99,7 @@ void initialize_td(struct kvm_vm *vm)
KVM_X2APIC_API_DISABLE_BROADCAST_QUIRK);
vm_enable_cap(vm, KVM_CAP_SPLIT_IRQCHIP, 24);

- /* Allocate and setup memoryfor the td guest. */
+ /* Allocate and setup memory for the td guest. */
vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
TDX_GUEST_PT_FIXED_ADDR,
0, TDX_GUEST_MAX_NR_PAGES, 0);
@@ -108,12 +108,20 @@ void initialize_td(struct kvm_vm *vm)

cpuid_data = get_tdx_cpuid_data(vm);

- init_vm.max_vcpus = 1;
- init_vm.attributes = 0;
+ init_vm.max_vcpus = TDX_GUEST_MAX_NUM_VCPUS;
+ init_vm.attributes = attributes;
memcpy(&init_vm.cpuid, &cpuid_data, sizeof(cpuid_data));
tdx_ioctl(vm->fd, KVM_TDX_INIT_VM, 0, &init_vm);
}

+/*
+ * Initialize a VM as a TD with no attributes.
+ *
+ */
+void initialize_td(struct kvm_vm *vm)
+{
+ initialize_td_with_attributes(vm, 0);
+}

void initialize_td_vcpu(struct kvm_vcpu *vcpu)
{
diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
index 1776b39b7d9e..8d49099e1ed8 100644
--- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
+++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
@@ -2,6 +2,7 @@

#include "asm/kvm.h"
#include "linux/kernel.h"
+#include <assert.h>
#include <bits/stdint-uintn.h>
#include <fcntl.h>
#include <limits.h>
@@ -1366,6 +1367,154 @@ void verify_host_reading_private_mem(void)
printf("\t ... PASSED\n");
}

+/*
+ * Do a TDG.VP.INFO call from the guest
+ */
+TDX_GUEST_FUNCTION(guest_tdcall_vp_info)
+{
+ uint64_t err;
+ uint64_t rcx, rdx, r8, r9, r10, r11;
+
+ err = tdcall_vp_info(&rcx, &rdx, &r8, &r9, &r10, &r11);
+ if (err)
+ tdvmcall_fatal(err);
+
+ /* return values to user space host */
+ err = tdvm_report_64bit_to_user_space(rcx);
+ if (err)
+ tdvmcall_fatal(err);
+
+ err = tdvm_report_64bit_to_user_space(rdx);
+ if (err)
+ tdvmcall_fatal(err);
+
+ err = tdvm_report_64bit_to_user_space(r8);
+ if (err)
+ tdvmcall_fatal(err);
+
+ err = tdvm_report_64bit_to_user_space(r9);
+ if (err)
+ tdvmcall_fatal(err);
+
+ err = tdvm_report_64bit_to_user_space(r10);
+ if (err)
+ tdvmcall_fatal(err);
+
+ err = tdvm_report_64bit_to_user_space(r11);
+ if (err)
+ tdvmcall_fatal(err);
+
+ tdvmcall_success();
+}
+
+/*
+ * TDG.VP.INFO call from the guest. Verify the right values are returned
+ */
+void verify_tdcall_vp_info(void)
+{
+ const int num_vcpus = 2;
+ struct kvm_vcpu *vcpus[num_vcpus];
+ struct kvm_vm *vm;
+ uint64_t rcx, rdx, r8, r9, r10, r11;
+ uint32_t ret_num_vcpus, ret_max_vcpus;
+ uint64_t attributes;
+ uint32_t i;
+ struct kvm_cpuid_entry2 *cpuid_entry;
+ struct tdx_cpuid_data cpuid_data;
+ int max_pa = -1;
+ int ret;
+
+ printf("Verifying TDG.VP.INFO call:\n");
+ /* Create a TD VM with no memory.*/
+ vm = vm_create_tdx();
+
+ /* Setting attributes parameter used by TDH.MNG.INIT to 0x50000000 */
+ attributes = TDX_TDPARAM_ATTR_SEPT_VE_DISABLE_BIT |
+ TDX_TDPARAM_ATTR_PKS_BIT;
+
+ /* Allocate TD guest memory and initialize the TD.*/
+ initialize_td_with_attributes(vm, attributes);
+
+ /* Create vCPUs*/
+ for (i = 0; i < num_vcpus; i++)
+ vcpus[i] = vm_vcpu_add_tdx(vm, i);
+
+ /* Setup and initialize VM memory */
+ prepare_source_image(vm, guest_tdcall_vp_info,
+ TDX_FUNCTION_SIZE(guest_tdcall_vp_info), 0);
+ finalize_td_memory(vm);
+
+ /* Get KVM CPUIDs for reference */
+ memset(&cpuid_data, 0, sizeof(cpuid_data));
+ cpuid_data.cpuid.nent = KVM_MAX_CPUID_ENTRIES;
+ ret = ioctl(vm->kvm_fd, KVM_GET_SUPPORTED_CPUID, &cpuid_data);
+ TEST_ASSERT(!ret, "KVM_GET_SUPPORTED_CPUID failed\n");
+ cpuid_entry = find_cpuid_entry(cpuid_data, 0x80000008, 0);
+ TEST_ASSERT(cpuid_entry, "CPUID entry missing\n");
+ max_pa = cpuid_entry->eax & 0xff;
+
+ for (i = 0; i < num_vcpus; i++) {
+ struct kvm_vcpu *vcpu = vcpus[i];
+
+ /* Wait for guest to report rcx value */
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ rcx = read_64bit_from_guest(vcpu, TDX_DATA_REPORT_PORT);
+
+ /* Wait for guest to report rdx value */
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ rdx = read_64bit_from_guest(vcpu, TDX_DATA_REPORT_PORT);
+
+ /* Wait for guest to report r8 value */
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ r8 = read_64bit_from_guest(vcpu, TDX_DATA_REPORT_PORT);
+
+ /* Wait for guest to report r9 value */
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ r9 = read_64bit_from_guest(vcpu, TDX_DATA_REPORT_PORT);
+
+ /* Wait for guest to report r10 value */
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ r10 = read_64bit_from_guest(vcpu, TDX_DATA_REPORT_PORT);
+
+ /* Wait for guest to report r11 value */
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ r11 = read_64bit_from_guest(vcpu, TDX_DATA_REPORT_PORT);
+
+ ret_num_vcpus = r8 & 0xFFFFFFFF;
+ ret_max_vcpus = (r8 >> 32) & 0xFFFFFFFF;
+
+ /* bits 5:0 of rcx represent the GPAW */
+ ASSERT_EQ(rcx & 0x3F, max_pa);
+ /* bits 63:6 of rcx are reserved and must be 0 */
+ ASSERT_EQ(rcx >> 6, 0);
+ ASSERT_EQ(rdx, attributes);
+ ASSERT_EQ(ret_num_vcpus, num_vcpus);
+ ASSERT_EQ(ret_max_vcpus, TDX_GUEST_MAX_NUM_VCPUS);
+ /* VCPU_INDEX = i */
+ ASSERT_EQ(r9, i);
+ /* verify reserved registers are 0 */
+ ASSERT_EQ(r10, 0);
+ ASSERT_EQ(r11, 0);
+
+ /* Wait for guest to complete execution */
+ vcpu_run(vcpu);
+
+ CHECK_GUEST_FAILURE(vcpu);
+ CHECK_GUEST_COMPLETION(vcpu);
+
+ printf("\t ... Guest completed run on VCPU=%u\n", i);
+ }
+
+ kvm_vm_free(vm);
+ printf("\t ... PASSED\n");
+}
+
int main(int argc, char **argv)
{
if (!is_tdx_enabled()) {
@@ -1387,6 +1536,7 @@ int main(int argc, char **argv)
run_in_new_process(&verify_mmio_reads);
run_in_new_process(&verify_mmio_writes);
run_in_new_process(&verify_host_reading_private_mem);
+ run_in_new_process(&verify_tdcall_vp_info);

return 0;
}
--
2.37.2.789.g6183377224-goog

2022-08-30 22:42:41

by Sagi Shahar

[permalink] [raw]
Subject: [RFC PATCH v2 01/17] KVM: selftests: Add support for creating non-default type VMs

From: Erdem Aktas <[email protected]>

Currently, the vm_create functions only create VMs of the default type
(KVM_VM_TYPE_DEFAULT). Add a type parameter to ____vm_create() so that
other VM types can be created.
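
With the extra parameter, existing callers keep the default type while
new callers can ask KVM for a different VM type. A minimal sketch (the
KVM_X86_TDX_VM type is what the TDX helpers later in this series are
expected to pass; the exact value comes from KVM_CAP_VM_TYPES):

	/* Existing behavior: default VM type. */
	vm = ____vm_create(VM_MODE_DEFAULT, 0, KVM_VM_TYPE_DEFAULT);

	/* New: create a TDX VM instead. */
	vm = ____vm_create(VM_MODE_DEFAULT, 0, KVM_X86_TDX_VM);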

Signed-off-by: Erdem Aktas <[email protected]>
Signed-off-by: Sagi Shahar <[email protected]>
Signed-off-by: Ryan Afranji <[email protected]>
---
tools/testing/selftests/kvm/include/kvm_util_base.h | 6 ++++--
tools/testing/selftests/kvm/lib/kvm_util.c | 6 +++---
2 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 24fde97f6121..2de7a7a2e56b 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -26,6 +26,8 @@

#define NSEC_PER_SEC 1000000000L

+#define KVM_VM_TYPE_DEFAULT 0
+
typedef uint64_t vm_paddr_t; /* Virtual Machine (Guest) physical address */
typedef uint64_t vm_vaddr_t; /* Virtual Machine (Guest) virtual address */

@@ -642,13 +644,13 @@ vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm);
* __vm_create() does NOT create vCPUs, @nr_runnable_vcpus is used purely to
* calculate the amount of memory needed for per-vCPU data, e.g. stacks.
*/
-struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages);
+struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages, int type);
struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus,
uint64_t nr_extra_pages);

static inline struct kvm_vm *vm_create_barebones(void)
{
- return ____vm_create(VM_MODE_DEFAULT, 0);
+ return ____vm_create(VM_MODE_DEFAULT, 0, KVM_VM_TYPE_DEFAULT);
}

static inline struct kvm_vm *vm_create(uint32_t nr_runnable_vcpus)
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 9889fe0d8919..ac10ad8919a6 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -143,7 +143,7 @@ const struct vm_guest_mode_params vm_guest_mode_params[] = {
_Static_assert(sizeof(vm_guest_mode_params)/sizeof(struct vm_guest_mode_params) == NUM_VM_MODES,
"Missing new mode params?");

-struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages)
+struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages, int type)
{
struct kvm_vm *vm;

@@ -159,7 +159,7 @@ struct kvm_vm *____vm_create(enum vm_guest_mode mode, uint64_t nr_pages)
hash_init(vm->regions.slot_hash);

vm->mode = mode;
- vm->type = 0;
+ vm->type = type;

vm->pa_bits = vm_guest_mode_params[mode].pa_bits;
vm->va_bits = vm_guest_mode_params[mode].va_bits;
@@ -294,7 +294,7 @@ struct kvm_vm *__vm_create(enum vm_guest_mode mode, uint32_t nr_runnable_vcpus,
nr_extra_pages);
struct kvm_vm *vm;

- vm = ____vm_create(mode, nr_pages);
+ vm = ____vm_create(mode, nr_pages, KVM_VM_TYPE_DEFAULT);

kvm_vm_elf_load(vm, program_invocation_name);

--
2.37.2.789.g6183377224-goog

2022-08-30 22:45:31

by Sagi Shahar

[permalink] [raw]
Subject: [RFC PATCH v2 15/17] KVM: selftest: TDX: Verify the behavior when host consumes a TD private memory

From: Ryan Afranji <[email protected]>

The test checks that the host can only read fixed values when trying to
access the guest's private memory.

Signed-off-by: Ryan Afranji <[email protected]>
Signed-off-by: Sagi Shahar <[email protected]>
---
.../selftests/kvm/x86_64/tdx_vm_tests.c | 93 +++++++++++++++++++
1 file changed, 93 insertions(+)

diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
index 934f2f7a5df9..1776b39b7d9e 100644
--- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
+++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
@@ -1274,6 +1274,98 @@ void verify_mmio_writes(void)
printf("\t ... PASSED\n");
}

+TDX_GUEST_FUNCTION(guest_host_read_priv_mem)
+{
+ uint64_t guest_var = 0xABCD;
+ uint64_t ret;
+
+ /* Sends address to host. */
+ ret = tdvm_report_64bit_to_user_space((uint64_t)&guest_var);
+ if (ret)
+ tdvmcall_fatal(ret);
+
+ /* Update guest_var's value and have host reread it. */
+ guest_var = 0xFEDC;
+
+ tdvmcall_success();
+}
+
+void verify_host_reading_private_mem(void)
+{
+ struct kvm_vcpu *vcpu;
+ struct kvm_vm *vm;
+ struct userspace_mem_region *region;
+ uint64_t guest_var_addr;
+ uint64_t host_virt;
+ uint64_t first_host_read;
+ uint64_t second_host_read;
+ int ctr;
+
+ printf("Verifying host's behavior when reading TD private memory:\n");
+ /* Create a TD VM with no memory. */
+ vm = vm_create_tdx();
+
+ /* Allocate TD guest memory and initialize the TD. */
+ initialize_td(vm);
+
+ /* Initialize the TD vcpu and copy the test code to the guest memory. */
+ vcpu = vm_vcpu_add_tdx(vm, 0);
+
+ /* Setup and initialize VM memory. */
+ prepare_source_image(vm, guest_host_read_priv_mem,
+ TDX_FUNCTION_SIZE(guest_host_read_priv_mem), 0);
+ finalize_td_memory(vm);
+
+ /* Get the address of the guest's variable. */
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ printf("\t ... Guest's variable contains 0xABCD\n");
+
+ /* Guest virtual and guest physical addresses have 1:1 mapping. */
+ guest_var_addr = read_64bit_from_guest(vcpu, TDX_DATA_REPORT_PORT);
+
+ /* Search for the guest's address in guest's memory regions. */
+ host_virt = 0;
+ hash_for_each(vm->regions.slot_hash, ctr, region, slot_node) {
+ uint64_t offset;
+ uint64_t host_virt_base;
+ uint64_t guest_base;
+
+ guest_base = (uint64_t)region->region.guest_phys_addr;
+ offset = guest_var_addr - guest_base;
+
+ if (guest_base <= guest_var_addr &&
+ offset < region->region.memory_size) {
+ host_virt_base = (uint64_t)region->host_mem;
+ host_virt = host_virt_base + offset;
+ break;
+ }
+ }
+ TEST_ASSERT(host_virt != 0,
+ "Guest address not found in guest memory regions\n");
+
+ /* Host reads guest's variable. */
+ first_host_read = *(uint64_t *)host_virt;
+ printf("\t ... Host's read attempt value: %lu\n", first_host_read);
+
+ /* Guest updates variable and host rereads it. */
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ printf("\t ... Guest's variable updated to 0xFEDC\n");
+
+ second_host_read = *(uint64_t *)host_virt;
+ printf("\t ... Host's second read attempt value: %lu\n",
+ second_host_read);
+
+ TEST_ASSERT(first_host_read == second_host_read,
+ "Host did not read a fixed pattern\n");
+
+ printf("\t ... Fixed pattern was returned to the host\n");
+
+ kvm_vm_free(vm);
+ printf("\t ... PASSED\n");
+}
+
int main(int argc, char **argv)
{
if (!is_tdx_enabled()) {
@@ -1294,6 +1386,7 @@ int main(int argc, char **argv)
run_in_new_process(&verify_guest_hlt);
run_in_new_process(&verify_mmio_reads);
run_in_new_process(&verify_mmio_writes);
+ run_in_new_process(&verify_host_reading_private_mem);

return 0;
}
--
2.37.2.789.g6183377224-goog

2022-08-30 22:45:43

by Sagi Shahar

[permalink] [raw]
Subject: [RFC PATCH v2 06/17] KVM: selftest: TDX: Add basic TDX CPUID test

The test reads CPUID values from inside a TD VM and compares them
to the expected values.

The test targets CPUID values which are virtualized as "As Configured",
"As Configured (if Native)", "Calculated", "Fixed" and "Native"
according to the TDX spec.
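
Each virtualization class maps to a concrete check on the leaf 0x1
values reported by the guest (a sketch of the decode done in
verify_td_cpuid() below):

	/*
	 * ECX[0]      SSE3                - "Native"
	 * EBX[15:8]   CLFLUSH line size   - "Fixed" (expected to be 8)
	 * EBX[23:16]  max addressable IDs - "As Configured"
	 * ECX[12]     FMA                 - "As Configured (if Native)"
	 * EBX[31:24]  initial APIC ID     - "Calculated"
	 */
	guest_clflush_line_size = (ebx >> 8) & 0xFF;
	ASSERT_EQ(guest_clflush_line_size, 8);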

Signed-off-by: Sagi Shahar <[email protected]>
---
tools/testing/selftests/kvm/lib/x86_64/tdx.h | 13 ++
.../selftests/kvm/x86_64/tdx_vm_tests.c | 132 ++++++++++++++++++
2 files changed, 145 insertions(+)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx.h b/tools/testing/selftests/kvm/lib/x86_64/tdx.h
index c78dba1af14f..a28d15417d3e 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx.h
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx.h
@@ -56,6 +56,7 @@

#define TDX_SUCCESS_PORT 0x30
#define TDX_TEST_PORT 0x31
+#define TDX_DATA_REPORT_PORT 0x32
#define TDX_IO_READ 0
#define TDX_IO_WRITE 1

@@ -231,6 +232,18 @@ static inline void tdvmcall_fatal(uint64_t error_code)
tdcall(&regs);
}

+/*
+ * Reports a 32 bit value from the guest to user space using a TDVM IO call.
+ * Data is reported on port TDX_DATA_REPORT_PORT.
+ */
+static inline uint64_t tdvm_report_to_user_space(uint32_t data)
+{
+ // Need to upcast data to match tdvmcall_io signature.
+ uint64_t data_64 = data;
+
+ return tdvmcall_io(TDX_DATA_REPORT_PORT, /*size=*/4, TDX_IO_WRITE, &data_64);
+}
+
#define TDX_FUNCTION_SIZE(name) ((uint64_t)&__stop_sec_ ## name -\
(uint64_t)&__start_sec_ ## name) \

diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
index a93629cfd13f..3f51f936ea5a 100644
--- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
+++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
@@ -77,6 +77,27 @@ bool is_tdx_enabled(void)
return !!(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_TDX_VM));
}

+/*
+ * Find a specific CPUID entry.
+ */
+static struct kvm_cpuid_entry2 *
+find_cpuid_entry(struct tdx_cpuid_data cpuid_data, uint32_t function,
+ uint32_t index)
+{
+ struct kvm_cpuid_entry2 *e;
+ int i;
+
+ for (i = 0; i < cpuid_data.cpuid.nent; i++) {
+ e = &cpuid_data.entries[i];
+
+ if (e->function == function &&
+ (e->index == index ||
+ !(e->flags & KVM_CPUID_FLAG_SIGNIFCANT_INDEX)))
+ return e;
+ }
+ return NULL;
+}
+
/*
* Do a dummy io exit to verify that the TD has been initialized correctly and
* guest can run some code inside.
@@ -251,6 +272,116 @@ void verify_td_ioexit(void)
printf("\t ... PASSED\n");
}

+/*
+ * Verifies CPUID functionality by reading CPUID values in guest. The guest
+ * will then send the values to userspace using an IO write to be checked
+ * against the expected values.
+ */
+TDX_GUEST_FUNCTION(guest_code_cpuid)
+{
+ uint64_t err;
+ uint32_t eax, ebx, edx, ecx;
+
+ // Read CPUID leaf 0x1.
+ cpuid(1, &eax, &ebx, &ecx, &edx);
+
+ err = tdvm_report_to_user_space(ebx);
+ if (err)
+ tdvmcall_fatal(err);
+
+ err = tdvm_report_to_user_space(ecx);
+ if (err)
+ tdvmcall_fatal(err);
+
+ tdvmcall_success();
+}
+
+void verify_td_cpuid(void)
+{
+ struct kvm_vcpu *vcpu;
+ struct kvm_vm *vm;
+ uint32_t ebx, ecx;
+ struct kvm_cpuid_entry2 *cpuid_entry;
+ struct tdx_cpuid_data cpuid_data;
+ uint32_t guest_clflush_line_size;
+ uint32_t guest_max_addressable_ids, host_max_addressable_ids;
+ uint32_t guest_sse3_enabled;
+ uint32_t guest_fma_enabled;
+ uint32_t guest_initial_apic_id;
+ int ret;
+
+ printf("Verifying TD CPUID:\n");
+ /* Create a TD VM with no memory.*/
+ vm = vm_create_tdx();
+
+ /* Allocate TD guest memory and initialize the TD.*/
+ initialize_td(vm);
+
+ /* Initialize the TD vcpu and copy the test code to the guest memory.*/
+ vcpu = vm_vcpu_add_tdx(vm, 0);
+
+ /* Setup and initialize VM memory */
+ prepare_source_image(vm, guest_code_cpuid,
+ TDX_FUNCTION_SIZE(guest_code_cpuid), 0);
+ finalize_td_memory(vm);
+
+ /* Wait for guest to report ebx value */
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ CHECK_IO(vcpu, TDX_DATA_REPORT_PORT, 4, TDX_IO_WRITE);
+ ebx = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
+
+ /* Wait for guest to report either ecx value or error */
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ CHECK_IO(vcpu, TDX_DATA_REPORT_PORT, 4, TDX_IO_WRITE);
+ ecx = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
+
+ /* Wait for guest to complete execution */
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ CHECK_GUEST_COMPLETION(vcpu);
+
+ /* Verify the CPUID values we got from the guest. */
+ printf("\t ... Verifying CPUID values from guest\n");
+
+ /* Get KVM CPUIDs for reference */
+ memset(&cpuid_data, 0, sizeof(cpuid_data));
+ cpuid_data.cpuid.nent = KVM_MAX_CPUID_ENTRIES;
+ ret = ioctl(vm->kvm_fd, KVM_GET_SUPPORTED_CPUID, &cpuid_data);
+ TEST_ASSERT(!ret, "KVM_GET_SUPPORTED_CPUID failed\n");
+ cpuid_entry = find_cpuid_entry(cpuid_data, 1, 0);
+ TEST_ASSERT(cpuid_entry, "CPUID entry missing\n");
+
+ host_max_addressable_ids = (cpuid_entry->ebx >> 16) & 0xFF;
+
+ guest_sse3_enabled = ecx & 0x1; // Native
+ guest_clflush_line_size = (ebx >> 8) & 0xFF; // Fixed
+ guest_max_addressable_ids = (ebx >> 16) & 0xFF; // As Configured
+ guest_fma_enabled = (ecx >> 12) & 0x1; // As Configured (if Native)
+ guest_initial_apic_id = (ebx >> 24) & 0xFF; // Calculated
+
+ ASSERT_EQ(guest_sse3_enabled, 1);
+ ASSERT_EQ(guest_clflush_line_size, 8);
+ ASSERT_EQ(guest_max_addressable_ids, host_max_addressable_ids);
+
+ /* TODO: This only tests the native value. To properly test
+ * "As Configured (if Native)" we need to override this value
+ * in the TD params
+ */
+ ASSERT_EQ(guest_fma_enabled, 1);
+
+ /* TODO: guest_initial_apic_id is calculated based on the number of
+ * VCPUs in the TD. From the spec: "Virtual CPU index, starting from 0
+ * and allocated sequentially on each successful TDH.VP.INIT"
+ * To test non-trivial values we either need a TD with multiple VCPUs
+ * or to pick a different calculated value.
+ */
+ ASSERT_EQ(guest_initial_apic_id, 0);
+
+ kvm_vm_free(vm);
+ printf("\t ... PASSED\n");
+}

int main(int argc, char **argv)
{
@@ -262,6 +393,7 @@ int main(int argc, char **argv)
run_in_new_process(&verify_td_lifecycle);
run_in_new_process(&verify_report_fatal_error);
run_in_new_process(&verify_td_ioexit);
+ run_in_new_process(&verify_td_cpuid);

return 0;
}
--
2.37.2.789.g6183377224-goog

2022-08-30 22:48:53

by Sagi Shahar

[permalink] [raw]
Subject: [RFC PATCH v2 14/17] KVM: selftest: TDX: Add TDX CPUID TDVMCALL test

This test issues a CPUID TDVMCALL from inside the guest to get the CPUID
values as seen by KVM.

Signed-off-by: Sagi Shahar <[email protected]>
---
tools/testing/selftests/kvm/lib/x86_64/tdx.h | 23 ++++
.../selftests/kvm/x86_64/tdx_vm_tests.c | 102 ++++++++++++++++++
2 files changed, 125 insertions(+)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx.h b/tools/testing/selftests/kvm/lib/x86_64/tdx.h
index 17e3649e5729..3729543a05a3 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx.h
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx.h
@@ -56,6 +56,7 @@

#define TDX_GET_TD_VM_CALL_INFO 0x10000
#define TDX_REPORT_FATAL_ERROR 0x10003
+#define TDX_INSTRUCTION_CPUID 10
#define TDX_INSTRUCTION_HLT 12
#define TDX_INSTRUCTION_IO 30
#define TDX_INSTRUCTION_RDMSR 31
@@ -347,6 +348,28 @@ static inline uint64_t tdvmcall_mmio_write(uint64_t address, uint64_t size, uint
return regs.r10;
}

+/*
+ * Execute CPUID instruction.
+ */
+static inline uint64_t tdvmcall_cpuid(uint32_t eax, uint32_t ecx,
+ uint32_t *ret_eax, uint32_t *ret_ebx,
+ uint32_t *ret_ecx, uint32_t *ret_edx)
+{
+ struct kvm_regs regs;
+
+ memset(&regs, 0, sizeof(regs));
+ regs.r11 = TDX_INSTRUCTION_CPUID;
+ regs.r12 = eax;
+ regs.r13 = ecx;
+ regs.rcx = 0xFC00;
+ tdcall(&regs);
+ *ret_eax = regs.r12;
+ *ret_ebx = regs.r13;
+ *ret_ecx = regs.r14;
+ *ret_edx = regs.r15;
+ return regs.r10;
+}
+
/*
* Reports a 32 bit value from the guest to user space using a TDVM IO call.
* Data is reported on port TDX_DATA_REPORT_PORT.
diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
index 382119bd444b..934f2f7a5df9 100644
--- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
+++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
@@ -459,6 +459,107 @@ void verify_td_cpuid(void)
printf("\t ... PASSED\n");
}

+/*
+ * Verifies CPUID TDVMCALL functionality.
+ * The guest will then send the values to userspace using an IO write to be
+ * checked against the expected values.
+ */
+TDX_GUEST_FUNCTION(guest_code_cpuid_tdcall)
+{
+ uint64_t err;
+ uint32_t eax, ebx, ecx, edx;
+
+ // Read CPUID leaf 0x1 from host.
+ err = tdvmcall_cpuid(/*eax=*/1, /*ecx=*/0, &eax, &ebx, &ecx, &edx);
+ if (err)
+ tdvmcall_fatal(err);
+
+ err = tdvm_report_to_user_space(eax);
+ if (err)
+ tdvmcall_fatal(err);
+
+ err = tdvm_report_to_user_space(ebx);
+ if (err)
+ tdvmcall_fatal(err);
+
+ err = tdvm_report_to_user_space(ecx);
+ if (err)
+ tdvmcall_fatal(err);
+
+ err = tdvm_report_to_user_space(edx);
+ if (err)
+ tdvmcall_fatal(err);
+
+ tdvmcall_success();
+}
+
+void verify_td_cpuid_tdcall(void)
+{
+ struct kvm_vcpu *vcpu;
+ struct kvm_vm *vm;
+ uint32_t eax, ebx, ecx, edx;
+ struct kvm_cpuid_entry2 *cpuid_entry;
+ struct tdx_cpuid_data cpuid_data;
+ int ret;
+
+ printf("Verifying TD CPUID TDVMCALL:\n");
+ /* Create a TD VM with no memory.*/
+ vm = vm_create_tdx();
+
+ /* Allocate TD guest memory and initialize the TD.*/
+ initialize_td(vm);
+
+ /* Initialize the TD vcpu and copy the test code to the guest memory.*/
+ vcpu = vm_vcpu_add_tdx(vm, 0);
+
+ /* Setup and initialize VM memory */
+ prepare_source_image(vm, guest_code_cpuid_tdcall,
+ TDX_FUNCTION_SIZE(guest_code_cpuid_tdcall), 0);
+ finalize_td_memory(vm);
+
+ /* Wait for guest to report CPUID values */
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ CHECK_IO(vcpu, TDX_DATA_REPORT_PORT, 4, TDX_IO_WRITE);
+ eax = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
+
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ CHECK_IO(vcpu, TDX_DATA_REPORT_PORT, 4, TDX_IO_WRITE);
+ ebx = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
+
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ CHECK_IO(vcpu, TDX_DATA_REPORT_PORT, 4, TDX_IO_WRITE);
+ ecx = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
+
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ CHECK_IO(vcpu, TDX_DATA_REPORT_PORT, 4, TDX_IO_WRITE);
+ edx = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
+
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ CHECK_GUEST_COMPLETION(vcpu);
+
+ /* Get KVM CPUIDs for reference */
+ memset(&cpuid_data, 0, sizeof(cpuid_data));
+ cpuid_data.cpuid.nent = KVM_MAX_CPUID_ENTRIES;
+ ret = ioctl(vm->kvm_fd, KVM_GET_SUPPORTED_CPUID, &cpuid_data);
+ TEST_ASSERT(!ret, "KVM_GET_SUPPORTED_CPUID failed\n");
+ cpuid_entry = find_cpuid_entry(cpuid_data, 1, 0);
+ TEST_ASSERT(cpuid_entry, "CPUID entry missing\n");
+
+ ASSERT_EQ(cpuid_entry->eax, eax);
+ // Mask lapic ID when comparing ebx.
+ ASSERT_EQ(cpuid_entry->ebx & ~0xFF000000, ebx & ~0xFF000000);
+ ASSERT_EQ(cpuid_entry->ecx, ecx);
+ ASSERT_EQ(cpuid_entry->edx, edx);
+
+ kvm_vm_free(vm);
+ printf("\t ... PASSED\n");
+}
+
/*
* Verifies get_td_vmcall_info functionality.
*/
@@ -1184,6 +1285,7 @@ int main(int argc, char **argv)
run_in_new_process(&verify_report_fatal_error);
run_in_new_process(&verify_td_ioexit);
run_in_new_process(&verify_td_cpuid);
+ run_in_new_process(&verify_td_cpuid_tdcall);
run_in_new_process(&verify_get_td_vmcall_info);
run_in_new_process(&verify_guest_writes);
run_in_new_process(&verify_guest_reads);
--
2.37.2.789.g6183377224-goog

2022-08-30 22:48:58

by Sagi Shahar

[permalink] [raw]
Subject: [RFC PATCH v2 13/17] KVM: selftest: TDX: Add TDX MMIO writes test

The test verifies MMIO writes of various sizes from the guest to the host.

Signed-off-by: Sagi Shahar <[email protected]>
---
tools/testing/selftests/kvm/lib/x86_64/tdx.h | 18 ++++
.../selftests/kvm/x86_64/tdx_vm_tests.c | 92 +++++++++++++++++++
2 files changed, 110 insertions(+)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx.h b/tools/testing/selftests/kvm/lib/x86_64/tdx.h
index 7045d617dd78..17e3649e5729 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx.h
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx.h
@@ -329,6 +329,24 @@ static inline uint64_t tdvmcall_mmio_read(uint64_t address, uint64_t size, uint6
return regs.r10;
}

+/*
+ * Execute MMIO request instruction for write.
+ */
+static inline uint64_t tdvmcall_mmio_write(uint64_t address, uint64_t size, uint64_t data_in)
+{
+ struct kvm_regs regs;
+
+ memset(&regs, 0, sizeof(regs));
+ regs.r11 = TDX_INSTRUCTION_VE_REQUEST_MMIO;
+ regs.r12 = size;
+ regs.r13 = TDX_MMIO_WRITE;
+ regs.r14 = address;
+ regs.r15 = data_in;
+ regs.rcx = 0xFC00;
+ tdcall(&regs);
+ return regs.r10;
+}
+
/*
* Reports a 32 bit value from the guest to user space using a TDVM IO call.
* Data is reported on port TDX_DATA_REPORT_PORT.
diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
index 963e4feae31a..382119bd444b 100644
--- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
+++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
@@ -1082,6 +1082,97 @@ void verify_mmio_reads(void)
printf("\t ... PASSED\n");
}

+TDX_GUEST_FUNCTION(guest_mmio_writes)
+{
+ uint64_t ret;
+
+ ret = tdvmcall_mmio_write(MMIO_VALID_ADDRESS, 1, 0x12);
+ if (ret)
+ tdvmcall_fatal(ret);
+
+ ret = tdvmcall_mmio_write(MMIO_VALID_ADDRESS, 2, 0x1234);
+ if (ret)
+ tdvmcall_fatal(ret);
+
+ ret = tdvmcall_mmio_write(MMIO_VALID_ADDRESS, 4, 0x12345678);
+ if (ret)
+ tdvmcall_fatal(ret);
+
+ ret = tdvmcall_mmio_write(MMIO_VALID_ADDRESS, 8, 0x1234567890ABCDEF);
+ if (ret)
+ tdvmcall_fatal(ret);
+
+ // Write across page boundary.
+ ret = tdvmcall_mmio_write(PAGE_SIZE - 1, 8, 0);
+ if (ret)
+ tdvmcall_fatal(ret);
+
+ tdvmcall_success();
+}
+
+/*
+ * Verifies guest MMIO writes.
+ */
+void verify_mmio_writes(void)
+{
+ struct kvm_vcpu *vcpu;
+ struct kvm_vm *vm;
+ uint8_t byte_1;
+ uint16_t byte_2;
+ uint32_t byte_4;
+ uint64_t byte_8;
+
+ printf("Verifying TD MMIO writes:\n");
+ /* Create a TD VM with no memory.*/
+ vm = vm_create_tdx();
+
+ /* Allocate TD guest memory and initialize the TD.*/
+ initialize_td(vm);
+
+ /* Initialize the TD vcpu and copy the test code to the guest memory.*/
+ vcpu = vm_vcpu_add_tdx(vm, 0);
+
+ /* Setup and initialize VM memory */
+ prepare_source_image(vm, guest_mmio_writes,
+ TDX_FUNCTION_SIZE(guest_mmio_writes), 0);
+ finalize_td_memory(vm);
+
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ CHECK_MMIO(vcpu, MMIO_VALID_ADDRESS, 1, TDX_MMIO_WRITE);
+ byte_1 = *(uint8_t *)(vcpu->run->mmio.data);
+
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ CHECK_MMIO(vcpu, MMIO_VALID_ADDRESS, 2, TDX_MMIO_WRITE);
+ byte_2 = *(uint16_t *)(vcpu->run->mmio.data);
+
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ CHECK_MMIO(vcpu, MMIO_VALID_ADDRESS, 4, TDX_MMIO_WRITE);
+ byte_4 = *(uint32_t *)(vcpu->run->mmio.data);
+
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ CHECK_MMIO(vcpu, MMIO_VALID_ADDRESS, 8, TDX_MMIO_WRITE);
+ byte_8 = *(uint64_t *)(vcpu->run->mmio.data);
+
+ ASSERT_EQ(byte_1, 0x12);
+ ASSERT_EQ(byte_2, 0x1234);
+ ASSERT_EQ(byte_4, 0x12345678);
+ ASSERT_EQ(byte_8, 0x1234567890ABCDEF);
+
+ vcpu_run(vcpu);
+ ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_SYSTEM_EVENT);
+ ASSERT_EQ(vcpu->run->system_event.data[1], TDX_VMCALL_INVALID_OPERAND);
+
+ vcpu_run(vcpu);
+ CHECK_GUEST_COMPLETION(vcpu);
+
+ kvm_vm_free(vm);
+ printf("\t ... PASSED\n");
+}
+
int main(int argc, char **argv)
{
if (!is_tdx_enabled()) {
@@ -1100,6 +1191,7 @@ int main(int argc, char **argv)
run_in_new_process(&verify_guest_msr_writes);
run_in_new_process(&verify_guest_hlt);
run_in_new_process(&verify_mmio_reads);
+ run_in_new_process(&verify_mmio_writes);

return 0;
}
--
2.37.2.789.g6183377224-goog

2022-08-30 23:01:03

by Sagi Shahar

[permalink] [raw]
Subject: [RFC PATCH v2 05/17] KVM: selftest: Adding test case for TDX port IO

From: Erdem Aktas <[email protected]>

Verifies TDVMCALL<INSTRUCTION.IO> READ and WRITE operations.
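
The guest and host run a small handshake over TDX_TEST_PORT: the guest
writes a byte, the host reads it on the KVM_EXIT_IO exit, feeds back the
value + 1 on the following read, and the guest checks the delta. A
sketch of the host side, taken from verify_td_ioexit() below:

	vcpu_run(vcpu);
	CHECK_IO(vcpu, TDX_TEST_PORT, 1, TDX_IO_WRITE);
	port_data = *(uint8_t *)((void *)vcpu->run + vcpu->run->io.data_offset);

	vcpu_run(vcpu);
	CHECK_IO(vcpu, TDX_TEST_PORT, 1, TDX_IO_READ);
	*(uint8_t *)((void *)vcpu->run + vcpu->run->io.data_offset) = port_data + 1;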

Signed-off-by: Erdem Aktas <[email protected]>
Signed-off-by: Sagi Shahar <[email protected]>
---
tools/testing/selftests/kvm/lib/x86_64/tdx.h | 1 +
.../selftests/kvm/x86_64/tdx_vm_tests.c | 108 ++++++++++++++++++
2 files changed, 109 insertions(+)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx.h b/tools/testing/selftests/kvm/lib/x86_64/tdx.h
index 351ece3e80e2..c78dba1af14f 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx.h
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx.h
@@ -55,6 +55,7 @@
#define TDX_INSTRUCTION_IO 30

#define TDX_SUCCESS_PORT 0x30
+#define TDX_TEST_PORT 0x31
#define TDX_IO_READ 0
#define TDX_IO_WRITE 1

diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
index 1db5400ca5ef..a93629cfd13f 100644
--- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
+++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
@@ -27,6 +27,32 @@
(VCPU)->run->exit_reason, exit_reason_str((VCPU)->run->exit_reason), \
(VCPU)->run->io.port, (VCPU)->run->io.size, (VCPU)->run->io.direction))

+#define CHECK_IO(VCPU, PORT, SIZE, DIR) \
+ do { \
+ TEST_ASSERT((VCPU)->run->exit_reason == KVM_EXIT_IO, \
+ "Got exit_reason other than KVM_EXIT_IO: %u (%s)\n", \
+ (VCPU)->run->exit_reason, \
+ exit_reason_str((VCPU)->run->exit_reason)); \
+ \
+ TEST_ASSERT(((VCPU)->run->exit_reason == KVM_EXIT_IO) && \
+ ((VCPU)->run->io.port == (PORT)) && \
+ ((VCPU)->run->io.size == (SIZE)) && \
+ ((VCPU)->run->io.direction == (DIR)), \
+ "Got unexpected IO exit values: %u (%s) %d %d %d\n", \
+ (VCPU)->run->exit_reason, \
+ exit_reason_str((VCPU)->run->exit_reason), \
+ (VCPU)->run->io.port, (VCPU)->run->io.size, \
+ (VCPU)->run->io.direction); \
+ } while (0)
+
+#define CHECK_GUEST_FAILURE(VCPU) \
+ do { \
+ if ((VCPU)->run->exit_reason == KVM_EXIT_SYSTEM_EVENT) \
+ TEST_FAIL("Guest reported error. error code: %lld (0x%llx)\n", \
+ (VCPU)->run->system_event.data[1], \
+ (VCPU)->run->system_event.data[1]); \
+ } while (0)
+
/*
* There might be multiple tests we are running and if one test fails, it will
* prevent the subsequent tests to run due to how tests are failing with
@@ -145,6 +171,87 @@ void verify_report_fatal_error(void)
printf("\t ... PASSED\n");
}

+/*
+ * Verifies IO functionality by writing a |value| to a predefined port.
+ * Verifies that the read value is |value| + 1 from the same port.
+ * If all the tests pass, write a value to port TDX_TEST_PORT.
+ */
+TDX_GUEST_FUNCTION(guest_io_exit)
+{
+ uint64_t data_out, data_in, delta;
+ uint64_t ret;
+
+ data_out = 0xAB;
+
+ ret = tdvmcall_io(TDX_TEST_PORT, 1, TDX_IO_WRITE, &data_out);
+ if (ret)
+ tdvmcall_fatal(ret);
+
+ ret = tdvmcall_io(TDX_TEST_PORT, 1, TDX_IO_READ, &data_in);
+ if (ret)
+ tdvmcall_fatal(ret);
+
+ delta = data_in - data_out;
+ if (delta != 1)
+ tdvmcall_fatal(ret);
+
+ tdvmcall_success();
+}
+
+void verify_td_ioexit(void)
+{
+ struct kvm_vcpu *vcpu;
+ uint32_t port_data;
+ struct kvm_vm *vm;
+
+ printf("Verifying TD IO Exit:\n");
+ /* Create a TD VM with no memory.*/
+ vm = vm_create_tdx();
+
+ /* Allocate TD guest memory and initialize the TD.*/
+ initialize_td(vm);
+
+ /* Initialize the TD vcpu and copy the test code to the guest memory.*/
+ vcpu = vm_vcpu_add_tdx(vm, 0);
+
+ /* Setup and initialize VM memory */
+ prepare_source_image(vm, guest_io_exit,
+ TDX_FUNCTION_SIZE(guest_io_exit), 0);
+ finalize_td_memory(vm);
+
+ /* Wait for guest to do an IO write */
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ CHECK_IO(vcpu, TDX_TEST_PORT, 1, TDX_IO_WRITE);
+ port_data = *(uint8_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
+
+ printf("\t ... IO WRITE: OK\n");
+
+ /*
+ * Wait for the guest to do an IO read. Provide the previously written
+ * data + 1 back to the guest.
+ */
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ CHECK_IO(vcpu, TDX_TEST_PORT, 1, TDX_IO_READ);
+ *(uint8_t *)((void *)vcpu->run + vcpu->run->io.data_offset) = port_data + 1;
+
+ printf("\t ... IO READ: OK\n");
+
+ /*
+ * Wait for the guest to complete execution successfully. The read
+ * value is checked within the guest.
+ */
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ CHECK_GUEST_COMPLETION(vcpu);
+
+ printf("\t ... IO verify read/write values: OK\n");
+ kvm_vm_free(vm);
+ printf("\t ... PASSED\n");
+}
+
+
int main(int argc, char **argv)
{
if (!is_tdx_enabled()) {
@@ -154,6 +261,7 @@ int main(int argc, char **argv)

run_in_new_process(&verify_td_lifecycle);
run_in_new_process(&verify_report_fatal_error);
+ run_in_new_process(&verify_td_ioexit);

return 0;
}
--
2.37.2.789.g6183377224-goog

2022-08-30 23:02:15

by Sagi Shahar

[permalink] [raw]
Subject: [RFC PATCH v2 07/17] KVM: selftest: TDX: Add basic get_td_vmcall_info test

The test calls get_td_vmcall_info from the guest and verifies that the
expected values are returned.
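
Since the TDVMCALL outputs are 64 bit wide but the IO based reporting
channel moves 4 bytes at a time, each register is reported to user space
as two 32 bit IO writes, LSB first, and reassembled on the host. A
sketch of the two halves of that protocol, using the helpers added in
this patch:

	/* Guest: report a 64 bit value, low half first. */
	err = tdvm_report_64bit_to_user_space(r11);

	/* Host: vcpu has exited with the first IO write; reassemble. */
	uint32_t lo, hi;

	lo = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
	vcpu_run(vcpu);
	hi = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
	r11 = ((uint64_t)hi << 32) | lo;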

Signed-off-by: Sagi Shahar <[email protected]>
---
tools/testing/selftests/kvm/lib/x86_64/tdx.h | 43 +++++++
.../selftests/kvm/x86_64/tdx_vm_tests.c | 107 ++++++++++++++++++
2 files changed, 150 insertions(+)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx.h b/tools/testing/selftests/kvm/lib/x86_64/tdx.h
index a28d15417d3e..39b000118e26 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx.h
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx.h
@@ -51,6 +51,7 @@
#define _PAGE_RW (1UL<<1) /* writeable */
#define _PAGE_PS (1UL<<7) /* page size bit*/

+#define TDX_GET_TD_VM_CALL_INFO 0x10000
#define TDX_REPORT_FATAL_ERROR 0x10003
#define TDX_INSTRUCTION_IO 30

@@ -232,6 +233,28 @@ static inline void tdvmcall_fatal(uint64_t error_code)
tdcall(&regs);
}

+/*
+ * Get td vmcall info.
+ * Used to request that the host VMM enumerate which TDG.VP.VMCALLs are supported.
+ * Returns the return code in r10 and leaf-specific output in r11-r14.
+ */
+static inline uint64_t tdvmcall_get_td_vmcall_info(uint64_t *r11, uint64_t *r12,
+ uint64_t *r13, uint64_t *r14)
+{
+ struct kvm_regs regs;
+
+ memset(&regs, 0, sizeof(regs));
+ regs.r11 = TDX_GET_TD_VM_CALL_INFO;
+ regs.r12 = 0;
+ regs.rcx = 0x1C00;
+ tdcall(&regs);
+ *r11 = regs.r11;
+ *r12 = regs.r12;
+ *r13 = regs.r13;
+ *r14 = regs.r14;
+ return regs.r10;
+}
+
/*
* Reports a 32 bit value from the guest to user space using a TDVM IO call.
* Data is reported on port TDX_DATA_REPORT_PORT.
@@ -244,6 +267,26 @@ static inline uint64_t tdvm_report_to_user_space(uint32_t data)
return tdvmcall_io(TDX_DATA_REPORT_PORT, /*size=*/4, TDX_IO_WRITE, &data_64);
}

+/*
+ * Reports a 64 bit value from the guest to user space using a TDVM IO call.
+ * Data is reported on port TDX_DATA_REPORT_PORT.
+ * Data is sent to host in 2 calls. LSB is sent (and needs to be read) first.
+ */
+static inline uint64_t tdvm_report_64bit_to_user_space(uint64_t data)
+{
+ uint64_t err;
+ uint64_t data_lo = data & 0xFFFFFFFF;
+ uint64_t data_hi = (data >> 32) & 0xFFFFFFFF;
+
+ err = tdvmcall_io(TDX_DATA_REPORT_PORT, /*size=*/4, TDX_IO_WRITE,
+ &data_lo);
+ if (err)
+ return err;
+
+ return tdvmcall_io(TDX_DATA_REPORT_PORT, /*size=*/4, TDX_IO_WRITE,
+ &data_hi);
+}
+
#define TDX_FUNCTION_SIZE(name) ((uint64_t)&__stop_sec_ ## name -\
(uint64_t)&__start_sec_ ## name) \

diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
index 3f51f936ea5a..cf8260db1f5b 100644
--- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
+++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
@@ -53,6 +53,25 @@
(VCPU)->run->system_event.data[1]); \
} while (0)

+static uint64_t read_64bit_from_guest(struct kvm_vcpu *vcpu, uint64_t port)
+{
+ uint32_t lo, hi;
+ uint64_t res;
+
+ CHECK_IO(vcpu, port, 4, TDX_IO_WRITE);
+ lo = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
+
+ vcpu_run(vcpu);
+
+ CHECK_IO(vcpu, port, 4, TDX_IO_WRITE);
+ hi = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
+
+ res = hi;
+ res = (res << 32) | lo;
+ return res;
+}
+
+
/*
* There might be multiple tests we are running and if one test fails, it will
* prevent the subsequent tests to run due to how tests are failing with
@@ -383,6 +402,93 @@ void verify_td_cpuid(void)
printf("\t ... PASSED\n");
}

+/*
+ * Verifies get_td_vmcall_info functionality.
+ */
+TDX_GUEST_FUNCTION(guest_code_get_td_vmcall_info)
+{
+ uint64_t err;
+ uint64_t r11, r12, r13, r14;
+
+ err = tdvmcall_get_td_vmcall_info(&r11, &r12, &r13, &r14);
+ if (err)
+ tdvmcall_fatal(err);
+
+ err = tdvm_report_64bit_to_user_space(r11);
+ if (err)
+ tdvmcall_fatal(err);
+
+ err = tdvm_report_64bit_to_user_space(r12);
+ if (err)
+ tdvmcall_fatal(err);
+
+ err = tdvm_report_64bit_to_user_space(r13);
+ if (err)
+ tdvmcall_fatal(err);
+
+ err = tdvm_report_64bit_to_user_space(r14);
+ if (err)
+ tdvmcall_fatal(err);
+
+ tdvmcall_success();
+}
+
+void verify_get_td_vmcall_info(void)
+{
+ struct kvm_vcpu *vcpu;
+ struct kvm_vm *vm;
+ uint64_t r11, r12, r13, r14;
+
+ printf("Verifying TD get vmcall info:\n");
+ /* Create a TD VM with no memory.*/
+ vm = vm_create_tdx();
+
+ /* Allocate TD guest memory and initialize the TD.*/
+ initialize_td(vm);
+
+ /* Initialize the TD vcpu and copy the test code to the guest memory.*/
+ vcpu = vm_vcpu_add_tdx(vm, 0);
+
+ /* Setup and initialize VM memory */
+ prepare_source_image(vm, guest_code_get_td_vmcall_info,
+ TDX_FUNCTION_SIZE(guest_code_get_td_vmcall_info),
+ 0);
+ finalize_td_memory(vm);
+
+ /* Wait for guest to report r11 value */
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ r11 = read_64bit_from_guest(vcpu, TDX_DATA_REPORT_PORT);
+
+ /* Wait for guest to report r12 value */
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ r12 = read_64bit_from_guest(vcpu, TDX_DATA_REPORT_PORT);
+
+ /* Wait for guest to report r13 value */
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ r13 = read_64bit_from_guest(vcpu, TDX_DATA_REPORT_PORT);
+
+ /* Wait for guest to report r14 value */
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ r14 = read_64bit_from_guest(vcpu, TDX_DATA_REPORT_PORT);
+
+ ASSERT_EQ(r11, 0);
+ ASSERT_EQ(r12, 0);
+ ASSERT_EQ(r13, 0);
+ ASSERT_EQ(r14, 0);
+
+ /* Wait for guest to complete execution */
+ vcpu_run(vcpu);
+ CHECK_GUEST_FAILURE(vcpu);
+ CHECK_GUEST_COMPLETION(vcpu);
+
+ kvm_vm_free(vm);
+ printf("\t ... PASSED\n");
+}
+
int main(int argc, char **argv)
{
if (!is_tdx_enabled()) {
@@ -394,6 +500,7 @@ int main(int argc, char **argv)
run_in_new_process(&verify_report_fatal_error);
run_in_new_process(&verify_td_ioexit);
run_in_new_process(&verify_td_cpuid);
+ run_in_new_process(&verify_get_td_vmcall_info);

return 0;
}
--
2.37.2.789.g6183377224-goog

2022-09-01 01:09:44

by Isaku Yamahata

[permalink] [raw]
Subject: Re: [RFC PATCH v2 03/17] KVM: selftest: Adding TDX life cycle test.

On Tue, Aug 30, 2022 at 10:19:46PM +0000,
Sagi Shahar <[email protected]> wrote:

> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx.h b/tools/testing/selftests/kvm/lib/x86_64/tdx.h
> index 61b997dfc420..d5de52657112 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/tdx.h
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx.h
> @@ -51,6 +51,12 @@
> #define _PAGE_RW (1UL<<1) /* writeable */
> #define _PAGE_PS (1UL<<7) /* page size bit*/
>
> +#define TDX_INSTRUCTION_IO 30
> +
> +#define TDX_SUCCESS_PORT 0x30
> +#define TDX_IO_READ 0
> +#define TDX_IO_WRITE 1
> +
> #define GDT_ENTRY(flags, base, limit) \
> ((((base) & 0xff000000ULL) << (56-24)) | \
> (((flags) & 0x0000f0ffULL) << 40) | \
> @@ -83,4 +89,147 @@ void prepare_source_image(struct kvm_vm *vm, void *guest_code,
> size_t guest_code_size,
> uint64_t guest_code_signature);
>
> +/*
> + * Generic TDCALL function that can be used to communicate with TDX module or
> + * VMM.
> + * Input operands: rax, rbx, rcx, rdx, r8-r15, rbp, rsi, rdi
> + * Output operands: rax, r8-r15, rbx, rdx, rdi, rsi
> + * rcx is actually a bitmap to tell TDX module which register values will be
> + * exposed to the VMM.
> + * XMM0-XMM15 registers can be used as input operands but the current
> + * implementation does not support it yet.
> + */
> +static inline void tdcall(struct kvm_regs *regs)
> +{
> + asm volatile (
> + "mov %13, %%rax;\n\t"
> + "mov %14, %%rbx;\n\t"
> + "mov %15, %%rcx;\n\t"
> + "mov %16, %%rdx;\n\t"
> + "mov %17, %%r8;\n\t"
> + "mov %18, %%r9;\n\t"
> + "mov %19, %%r10;\n\t"
> + "mov %20, %%r11;\n\t"
> + "mov %21, %%r12;\n\t"
> + "mov %22, %%r13;\n\t"
> + "mov %23, %%r14;\n\t"
> + "mov %24, %%r15;\n\t"
> + "mov %25, %%rbp;\n\t"
> + "mov %26, %%rsi;\n\t"
> + "mov %27, %%rdi;\n\t"
> + ".byte 0x66, 0x0F, 0x01, 0xCC;\n\t"
> + "mov %%rax, %0;\n\t"
> + "mov %%rbx, %1;\n\t"
> + "mov %%rdx, %2;\n\t"
> + "mov %%r8, %3;\n\t"
> + "mov %%r9, %4;\n\t"
> + "mov %%r10, %5;\n\t"
> + "mov %%r11, %6;\n\t"
> + "mov %%r12, %7;\n\t"
> + "mov %%r13, %8;\n\t"
> + "mov %%r14, %9;\n\t"
> + "mov %%r15, %10;\n\t"
> + "mov %%rsi, %11;\n\t"
> + "mov %%rdi, %12;\n\t"
> + : "=m" (regs->rax), "=m" (regs->rbx), "=m" (regs->rdx),
> + "=m" (regs->r8), "=m" (regs->r9), "=m" (regs->r10),
> + "=m" (regs->r11), "=m" (regs->r12), "=m" (regs->r13),
> + "=m" (regs->r14), "=m" (regs->r15), "=m" (regs->rsi),
> + "=m" (regs->rdi)
> + : "m" (regs->rax), "m" (regs->rbx), "m" (regs->rcx),
> + "m" (regs->rdx), "m" (regs->r8), "m" (regs->r9),
> + "m" (regs->r10), "m" (regs->r11), "m" (regs->r12),
> + "m" (regs->r13), "m" (regs->r14), "m" (regs->r15),
> + "m" (regs->rbp), "m" (regs->rsi), "m" (regs->rdi)
> + : "rax", "rbx", "rcx", "rdx", "r8", "r9", "r10", "r11",
> + "r12", "r13", "r14", "r15", "rbp", "rsi", "rdi");
> +}

Sometimes the compiler (my gcc is (Ubuntu 11.1.0-1ubuntu1~20.04) 11.1.0) doesn't like
clobbering the frame pointer, as follows. (I edited the caller site for another test.)

x86_64/tdx_vm_tests.c:343:1: error: bp cannot be used in ‘asm’ here

I ended up with the following workaround. I didn't use a pushq/popq pair because
I didn't want to play with the offsets in the caller's stack.


diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx.h b/tools/testing/selftests/kvm/lib/x86_64/tdx.h
index aa6961c6f304..8ddf3b64f003 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx.h
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx.h
@@ -122,7 +122,11 @@ void prepare_source_image(struct kvm_vm *vm, void *guest_code,
*/
static inline void tdcall(struct kvm_regs *regs)
{
+ unsigned long saved_rbp = 0;
+
asm volatile (
+ /* gcc complains that frame pointer %rbp can't be clobbered. */
+ "movq %%rbp, %28;\n\t"
"mov %13, %%rax;\n\t"
"mov %14, %%rbx;\n\t"
"mov %15, %%rcx;\n\t"
@@ -152,6 +156,8 @@ static inline void tdcall(struct kvm_regs *regs)
"mov %%r15, %10;\n\t"
"mov %%rsi, %11;\n\t"
"mov %%rdi, %12;\n\t"
+ "movq %28, %%rbp\n\t"
+ "movq $0, %28\n\t"
: "=m" (regs->rax), "=m" (regs->rbx), "=m" (regs->rdx),
"=m" (regs->r8), "=m" (regs->r9), "=m" (regs->r10),
"=m" (regs->r11), "=m" (regs->r12), "=m" (regs->r13),
@@ -161,9 +167,10 @@ static inline void tdcall(struct kvm_regs *regs)
"m" (regs->rdx), "m" (regs->r8), "m" (regs->r9),
"m" (regs->r10), "m" (regs->r11), "m" (regs->r12),
"m" (regs->r13), "m" (regs->r14), "m" (regs->r15),
- "m" (regs->rbp), "m" (regs->rsi), "m" (regs->rdi)
+ "m" (regs->rbp), "m" (regs->rsi), "m" (regs->rdi),
+ "m" (saved_rbp)
: "rax", "rbx", "rcx", "rdx", "r8", "r9", "r10", "r11",
- "r12", "r13", "r14", "r15", "rbp", "rsi", "rdi");
+ "r12", "r13", "r14", "r15", "rsi", "rdi");
}

--
Isaku Yamahata <[email protected]>

2022-09-01 01:35:22

by Isaku Yamahata

[permalink] [raw]
Subject: Re: [RFC PATCH v2 02/17] KVM: selftest: Add helper functions to create TDX VMs

On Tue, Aug 30, 2022 at 10:19:45PM +0000,
Sagi Shahar <[email protected]> wrote:
...
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx_lib.c b/tools/testing/selftests/kvm/lib/x86_64/tdx_lib.c
> new file mode 100644
> index 000000000000..72bf2ff24a29
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx_lib.c
> @@ -0,0 +1,338 @@
> +// SPDX-License-Identifier: GPL-2.0
> +#include <linux/stringify.h>
> +#include "asm/kvm.h"
> +#include "tdx.h"
> +#include <stdlib.h>
> +#include <malloc.h>
> +#include "processor.h"
> +#include <string.h>
> +
> +char *tdx_cmd_str[] = {
> + "KVM_TDX_CAPABILITIES",
> + "KVM_TDX_INIT_VM",
> + "KVM_TDX_INIT_VCPU",
> + "KVM_TDX_INIT_MEM_REGION",
> + "KVM_TDX_FINALIZE_VM"
> +};
> +
> +#define TDX_MAX_CMD_STR (ARRAY_SIZE(tdx_cmd_str))
> +#define EIGHT_INT3_INSTRUCTIONS 0xCCCCCCCCCCCCCCCC
> +
> +#define XFEATURE_LBR 15
> +#define XFEATURE_XTILECFG 17
> +#define XFEATURE_XTILEDATA 18
> +#define XFEATURE_MASK_LBR (1 << XFEATURE_LBR)
> +#define XFEATURE_MASK_XTILECFG (1 << XFEATURE_XTILECFG)
> +#define XFEATURE_MASK_XTILEDATA (1 << XFEATURE_XTILEDATA)
> +#define XFEATURE_MASK_XTILE (XFEATURE_MASK_XTILECFG | XFEATURE_MASK_XTILEDATA)
> +
> +
> +static void tdx_ioctl(int fd, int ioctl_no, uint32_t flags, void *data)
> +{
> + struct kvm_tdx_cmd tdx_cmd;
> + int r;
> +
> + TEST_ASSERT(ioctl_no < TDX_MAX_CMD_STR, "Unknown TDX CMD : %d\n",
> + ioctl_no);
> +
> + memset(&tdx_cmd, 0x0, sizeof(tdx_cmd));
> + tdx_cmd.id = ioctl_no;
> + tdx_cmd.flags = flags;
> + tdx_cmd.data = (uint64_t)data;
> + r = ioctl(fd, KVM_MEMORY_ENCRYPT_OP, &tdx_cmd);
> + TEST_ASSERT(r == 0, "%s failed: %d %d", tdx_cmd_str[ioctl_no], r,
> + errno);
> +}
> +
> +static struct tdx_cpuid_data get_tdx_cpuid_data(struct kvm_vm *vm)
> +{
> + static struct tdx_cpuid_data cpuid_data;
> + int ret, i;
> +
> + if (cpuid_data.cpuid.nent)
> + return cpuid_data;
> +
> + memset(&cpuid_data, 0, sizeof(cpuid_data));
> + cpuid_data.cpuid.nent = KVM_MAX_CPUID_ENTRIES;
> + ret = ioctl(vm->kvm_fd, KVM_GET_SUPPORTED_CPUID, &cpuid_data);
> + if (ret) {
> + TEST_FAIL("KVM_GET_SUPPORTED_CPUID failed %d %d\n",
> + ret, errno);
> + cpuid_data.cpuid.nent = 0;
> + return cpuid_data;
> + }
> +
> + for (i = 0; i < KVM_MAX_CPUID_ENTRIES; i++) {
> + struct kvm_cpuid_entry2 *e = &cpuid_data.entries[i];
> +
> + /* TDX doesn't support LBR and AMX features yet.
> + * Disable those bits from the XCR0 register.
> + */
> + if (e->function == 0xd && (e->index == 0)) {
> + e->eax &= ~XFEATURE_MASK_LBR;
> + e->eax &= ~XFEATURE_MASK_XTILE;
> + }
> + }
> +
> + return cpuid_data;
> +}

CET also needs adjustment. How about the following?

diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx_lib.c b/tools/testing/selftests/kvm/lib/x86_64/tdx_lib.c
index 1c3e47006cd2..123db9b76f82 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx_lib.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx_lib.c
@@ -25,7 +25,7 @@ char *tdx_cmd_str[] = {
#define XFEATURE_MASK_XTILECFG (1 << XFEATURE_XTILECFG)
#define XFEATURE_MASK_XTILEDATA (1 << XFEATURE_XTILEDATA)
#define XFEATURE_MASK_XTILE (XFEATURE_MASK_XTILECFG | XFEATURE_MASK_XTILEDATA)
-
+#define XFEATURE_MASK_CET ((1 << 11) | (1 << 12))

static int __tdx_ioctl(int fd, int ioctl_no, uint32_t flags, void *data)
{
@@ -72,12 +72,26 @@ static struct tdx_cpuid_data get_tdx_cpuid_data(struct kvm_vm *vm)
for (i = 0; i < KVM_MAX_CPUID_ENTRIES; i++) {
struct kvm_cpuid_entry2 *e = &cpuid_data.entries[i];

- /* TDX doesn't support LBR and AMX features yet.
+ /* TDX doesn't support LBR yet.
* Disable those bits from the XCR0 register.
*/
if (e->function == 0xd && (e->index == 0)) {
e->eax &= ~XFEATURE_MASK_LBR;
- e->eax &= ~XFEATURE_MASK_XTILE;
+
+ /*
+ * The TDX module requires both CET_{U, S} to be set even
+ * if only one is supported.
+ */
+ if (e->eax & XFEATURE_MASK_CET) {
+ e->eax |= XFEATURE_MASK_CET;
+ }
+ /*
+ * The TDX module requires both XTILE_{CFG, DATA} to be set.
+ * Both bits are required for AMX to be functional.
+ */
+ if ((e->eax & XFEATURE_MASK_XTILE) != XFEATURE_MASK_XTILE) {
+ e->eax &= ~XFEATURE_MASK_XTILE;
+ }
}
}
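
As an aside, the raw (1 << 11) | (1 << 12) above could also be spelled with
symbolic names. A minimal sketch, assuming the bit positions from the kernel's
enum xfeature (XFEATURE_CET_USER = 11, XFEATURE_CET_KERNEL = 12); only the
combined XFEATURE_MASK_CET name is taken from the diff above:

#define XFEATURE_CET_USER	11	/* CET_U: user-mode CET state */
#define XFEATURE_CET_KERNEL	12	/* CET_S: supervisor CET state */
#define XFEATURE_MASK_CET_USER		(1U << XFEATURE_CET_USER)
#define XFEATURE_MASK_CET_KERNEL	(1U << XFEATURE_CET_KERNEL)
#define XFEATURE_MASK_CET	(XFEATURE_MASK_CET_USER | XFEATURE_MASK_CET_KERNEL)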

--
Isaku Yamahata <[email protected]>

2022-09-01 01:59:35

by Isaku Yamahata

[permalink] [raw]
Subject: Re: [RFC PATCH v2 02/17] KVM: selftest: Add helper functions to create TDX VMs

On Tue, Aug 30, 2022 at 10:19:45PM +0000,
Sagi Shahar <[email protected]> wrote:

> diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> index f35626df1dea..2a6e28c769f2 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> @@ -8,6 +8,7 @@
> #include "test_util.h"
> #include "kvm_util.h"
> #include "processor.h"
> +#include "tdx.h"
>
> #ifndef NUM_INTERRUPTS
> #define NUM_INTERRUPTS 256
> @@ -641,6 +642,32 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id,
> return vcpu;
> }
>
> +/*
> + * Adds a vCPU to a TD (Trusted Domain) with minimum defaults. It will not set
> + * up any general purpose registers as they will be initialized by the TDX. In
> + * TDX, vCPUs RIP is set to 0xFFFFFFF0. See Intel TDX EAS Section "Initial State
> + * of Guest GPRs" for more information on vCPUs initial register values when
> + * entering the TD first time.
> + *
> + * Input Args:
> + * vm - Virtual Machine
> + * vcpuid - The id of the VCPU to add to the VM.
> + */
> +struct kvm_vcpu *vm_vcpu_add_tdx(struct kvm_vm *vm, uint32_t vcpu_id)
> +{
> + struct kvm_mp_state mp_state;
> + struct kvm_vcpu *vcpu;
> +
> + vcpu = __vm_vcpu_add(vm, vcpu_id);
> + initialize_td_vcpu(vcpu);
> +
> + /* Setup the MP state */
> + mp_state.mp_state = 0;
> + vcpu_mp_state_set(vcpu, &mp_state);
> +
> + return vcpu;
> +}
> +

It's better to use a symbolic value. I know this is copied from the vmx version, though.

diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 3bb7dc5a55ea..4009bc926e33 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -636,7 +636,7 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id,
vcpu_regs_set(vcpu, &regs);

/* Setup the MP state */
- mp_state.mp_state = 0;
+ mp_state.mp_state = KVM_MP_STATE_RUNNABLE;
vcpu_mp_state_set(vcpu, &mp_state);

return vcpu;
@@ -662,7 +662,7 @@ struct kvm_vcpu *vm_vcpu_add_tdx(struct kvm_vm *vm, uint32_t vcpu_id)
initialize_td_vcpu(vcpu);

/* Setup the MP state */
- mp_state.mp_state = 0;
+ mp_state.mp_state = KVM_MP_STATE_RUNNABLE;
vcpu_mp_state_set(vcpu, &mp_state);

return vcpu;


--
Isaku Yamahata <[email protected]>

2022-09-01 02:22:11

by Isaku Yamahata

[permalink] [raw]
Subject: Re: [RFC PATCH v2 00/17] TDX KVM selftests

Here is one more test to exercise KVM_TDX_CAPABILITIES on top of this patch
series.

From f9c4c9013040ce7dee84e1d3370875e5158900bf Mon Sep 17 00:00:00 2001
Message-Id: <f9c4c9013040ce7dee84e1d3370875e5158900bf.1661995648.git.isaku.yamahata@intel.com>
In-Reply-To: <6ce32225079b83991b9f170730a8810005a079b0.1661995647.git.isaku.yamahata@intel.com>
References: <6ce32225079b83991b9f170730a8810005a079b0.1661995647.git.isaku.yamahata@intel.com>
From: Isaku Yamahata <[email protected]>
Date: Wed, 16 Mar 2022 09:15:40 -0700
Subject: [PATCH] KVM: selftest: tdx: call KVM_TDX_CAPABILITIES for
test

Add exercise of KVM_TDX_CAPABILITIES. The result isn't used.

Signed-off-by: Isaku Yamahata <[email protected]>
---
tools/testing/selftests/kvm/lib/x86_64/tdx.h | 1 +
.../selftests/kvm/lib/x86_64/tdx_lib.c | 52 +++++++++++++++++--
.../selftests/kvm/x86_64/tdx_vm_tests.c | 3 ++
3 files changed, 53 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx.h b/tools/testing/selftests/kvm/lib/x86_64/tdx.h
index be8564f4672d..bfa3709a76e5 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx.h
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx.h
@@ -119,6 +119,7 @@ struct page_table {
void add_td_memory(struct kvm_vm *vm, void *source_page,
uint64_t gpa, int size);
void finalize_td_memory(struct kvm_vm *vm);
+void get_tdx_capabilities(struct kvm_vm *vm);
void initialize_td(struct kvm_vm *vm);
void initialize_td_with_attributes(struct kvm_vm *vm, uint64_t attributes);
void initialize_td_vcpu(struct kvm_vcpu *vcpu);
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx_lib.c b/tools/testing/selftests/kvm/lib/x86_64/tdx_lib.c
index 23893949c3a1..b07af314737a 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx_lib.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx_lib.c
@@ -27,10 +27,9 @@ char *tdx_cmd_str[] = {
#define XFEATURE_MASK_XTILE (XFEATURE_MASK_XTILECFG | XFEATURE_MASK_XTILEDATA)


-static void tdx_ioctl(int fd, int ioctl_no, uint32_t flags, void *data)
+static int __tdx_ioctl(int fd, int ioctl_no, uint32_t flags, void *data)
{
struct kvm_tdx_cmd tdx_cmd;
- int r;

TEST_ASSERT(ioctl_no < TDX_MAX_CMD_STR, "Unknown TDX CMD : %d\n",
ioctl_no);
@@ -39,7 +38,15 @@ static void tdx_ioctl(int fd, int ioctl_no, uint32_t flags, void *data)
tdx_cmd.id = ioctl_no;
tdx_cmd.flags = flags;
tdx_cmd.data = (uint64_t)data;
- r = ioctl(fd, KVM_MEMORY_ENCRYPT_OP, &tdx_cmd);
+ return ioctl(fd, KVM_MEMORY_ENCRYPT_OP, &tdx_cmd);
+}
+
+
+static void tdx_ioctl(int fd, int ioctl_no, uint32_t flags, void *data)
+{
+ int r;
+
+ r = __tdx_ioctl(fd, ioctl_no, flags, data);
TEST_ASSERT(r == 0, "%s failed: %d %d", tdx_cmd_str[ioctl_no], r,
errno);
}
@@ -77,6 +84,45 @@ static struct tdx_cpuid_data get_tdx_cpuid_data(struct kvm_vm *vm)
return cpuid_data;
}

+/* Call KVM_TDX_CAPABILITIES for API test. The result isn't used. */
+void get_tdx_capabilities(struct kvm_vm *vm)
+{
+ int i;
+ int rc;
+ int nr_cpuid_configs = 8;
+ struct kvm_tdx_capabilities *tdx_cap = NULL;
+
+ while (true) {
+ tdx_cap = realloc(
+ tdx_cap, sizeof(*tdx_cap) +
+ nr_cpuid_configs * sizeof(*tdx_cap->cpuid_configs));
+ TEST_ASSERT(tdx_cap != NULL,
+ "Could not allocate memory for tdx capability "
+ "nr_cpuid_configs %d\n", nr_cpuid_configs);
+ tdx_cap->nr_cpuid_configs = nr_cpuid_configs;
+ rc = __tdx_ioctl(vm->fd, KVM_TDX_CAPABILITIES, 0, tdx_cap);
+ if (rc < 0 && errno == E2BIG) {
+ nr_cpuid_configs *= 2;
+ continue;
+ }
+ TEST_ASSERT(rc == 0, "%s failed: %d %d",
+ tdx_cmd_str[KVM_TDX_CAPABILITIES], rc, errno);
+ break;
+ }
+ pr_debug("tdx_cap: attrs: fixed0 0x%016llx fixed1 0x%016llx\n"
+ "tdx_cap: xfam fixed0 0x%016llx fixed1 0x%016llx\n",
+ tdx_cap->attrs_fixed0, tdx_cap->attrs_fixed1,
+ tdx_cap->xfam_fixed0, tdx_cap->xfam_fixed1);
+ for (i = 0; i < tdx_cap->nr_cpuid_configs; i++) {
+ const struct kvm_tdx_cpuid_config *config =
+ &tdx_cap->cpuid_configs[i];
+ pr_debug("cpuid config[%d]: leaf 0x%x sub_leaf 0x%x "
+ "eax 0x%08x ebx 0x%08x ecx 0x%08x edx 0x%08x\n",
+ i, config->leaf, config->sub_leaf,
+ config->eax, config->ebx, config->ecx, config->edx);
+ }
+}
+
/*
* Initialize a VM as a TD with attributes.
*
diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
index a96abada54b6..b3f9e3fa41f4 100644
--- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
+++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
@@ -197,6 +197,9 @@ void verify_td_lifecycle(void)
/* Create a TD VM with no memory.*/
vm = vm_create_tdx();

+ /* Get TDX capabilities */
+ get_tdx_capabilities(vm);
+
/* Allocate TD guest memory and initialize the TD.*/
initialize_td(vm);

--
2.25.1

--
Isaku Yamahata <[email protected]>

2022-09-01 15:10:19

by Sean Christopherson

[permalink] [raw]
Subject: Re: [RFC PATCH v2 03/17] KVM: selftest: Adding TDX life cycle test.

On Wed, Aug 31, 2022, Isaku Yamahata wrote:
> Sometimes the compiler (my gcc is (Ubuntu 11.1.0-1ubuntu1~20.04) 11.1.0) doesn't like
> clobbering the frame pointer, as follows. (I edited the caller site for another test.)
>
> x86_64/tdx_vm_tests.c:343:1: error: bp cannot be used in ‘asm’ here
>
> I ended up with the following workaround. I didn't use a pushq/popq pair because
> I didn't want to play with the offsets in the caller's stack.
>
>
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx.h b/tools/testing/selftests/kvm/lib/x86_64/tdx.h
> index aa6961c6f304..8ddf3b64f003 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/tdx.h
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx.h
> @@ -122,7 +122,11 @@ void prepare_source_image(struct kvm_vm *vm, void *guest_code,
> */
> static inline void tdcall(struct kvm_regs *regs)
> {
> + unsigned long saved_rbp = 0;
> +
> asm volatile (
> + /* gcc complains that frame pointer %rbp can't be clobbered. */
> + "movq %%rbp, %28;\n\t"
> "mov %13, %%rax;\n\t"
> "mov %14, %%rbx;\n\t"
> "mov %15, %%rcx;\n\t"
> @@ -152,6 +156,8 @@ static inline void tdcall(struct kvm_regs *regs)
> "mov %%r15, %10;\n\t"
> "mov %%rsi, %11;\n\t"
> "mov %%rdi, %12;\n\t"
> + "movq %28, %%rbp\n\t"
> + "movq $0, %28\n\t"
> : "=m" (regs->rax), "=m" (regs->rbx), "=m" (regs->rdx),
> "=m" (regs->r8), "=m" (regs->r9), "=m" (regs->r10),
> "=m" (regs->r11), "=m" (regs->r12), "=m" (regs->r13),
> @@ -161,9 +167,10 @@ static inline void tdcall(struct kvm_regs *regs)
> "m" (regs->rdx), "m" (regs->r8), "m" (regs->r9),
> "m" (regs->r10), "m" (regs->r11), "m" (regs->r12),
> "m" (regs->r13), "m" (regs->r14), "m" (regs->r15),
> - "m" (regs->rbp), "m" (regs->rsi), "m" (regs->rdi)
> + "m" (regs->rbp), "m" (regs->rsi), "m" (regs->rdi),
> + "m" (saved_rbp)
> : "rax", "rbx", "rcx", "rdx", "r8", "r9", "r10", "r11",
> - "r12", "r13", "r14", "r15", "rbp", "rsi", "rdi");
> + "r12", "r13", "r14", "r15", "rsi", "rdi");
> }

Inline assembly for TDCALL is going to be a mess. Assuming proper assembly doesn't
Just Work for selftests, we should solve that problem and build this on top.
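
One possible direction (a sketch only, not code from this series): restrict the
guest-side wrapper to the registers the GHCI defines for TDG.VP.VMCALL, so %rbp
never has to be clobbered at all. The struct and function names below are made
up for illustration; 0xfc00 is the RCX bitmap that exposes R10-R15 to the VMM:

#include <stdint.h>

struct tdx_hypercall_args {
	uint64_t r10, r11, r12, r13, r14, r15;
};

/*
 * Sketch of a narrower TDG.VP.VMCALL wrapper: only the GHCI-defined hypercall
 * registers are touched, so the frame pointer is never clobbered.  Returns the
 * TDCALL status from RAX; the VMM's return code comes back in args->r10.
 */
static inline uint64_t tdx_hypercall_sketch(struct tdx_hypercall_args *args)
{
	register uint64_t r10 asm("r10") = args->r10;
	register uint64_t r11 asm("r11") = args->r11;
	register uint64_t r12 asm("r12") = args->r12;
	register uint64_t r13 asm("r13") = args->r13;
	register uint64_t r14 asm("r14") = args->r14;
	register uint64_t r15 asm("r15") = args->r15;
	uint64_t rax = 0;	/* TDCALL leaf 0: TDG.VP.VMCALL */
	uint64_t rcx = 0xfc00;	/* expose only R10-R15 to the VMM */

	asm volatile(".byte 0x66, 0x0F, 0x01, 0xCC;\n\t"	/* TDCALL */
		     : "+a"(rax), "+c"(rcx),
		       "+r"(r10), "+r"(r11), "+r"(r12),
		       "+r"(r13), "+r"(r14), "+r"(r15)
		     :
		     : "memory");

	args->r10 = r10;
	args->r11 = r11;
	args->r12 = r12;
	args->r13 = r13;
	args->r14 = r14;
	args->r15 = r15;

	return rax;
}

A full-width tdcall() that round-trips every GPR through struct kvm_regs would
still need the rbp save/restore dance (or a real .S helper), but most GHCI
hypercalls in these tests only need the registers above.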