2023-12-12 20:47:14

by Sagi Shahar

Subject: [RFC PATCH v5 00/29] TDX KVM selftests

Hello,

This is v5 of the patch series for TDX selftests.

It has been updated for Intel's v17 of the TDX host patches, which were
proposed here:
https://lore.kernel.org/all/[email protected]/

The tree can be found at:
https://github.com/googleprodkernel/linux-cc/tree/tdx-selftests-rfc-v5

Changes from RFC v4:

Added a patch to propagate KVM_EXIT_MEMORY_FAULT to userspace.

Minor tweaks to align the tests with the new TDX 1.5 spec, such as
changes to the expected values in TDG.VP.INFO.

In RFC v5, TDX selftest code is organized into:

+ headers in tools/testing/selftests/kvm/include/x86_64/tdx/
+ common code in tools/testing/selftests/kvm/lib/x86_64/tdx/
+ selftests in tools/testing/selftests/kvm/x86_64/tdx_*

Dependencies

+ Peter’s patches, which provide functions for the host to allocate
and track protected memory in the guest.
https://lore.kernel.org/all/[email protected]/

Further work for this patch series/TODOs

+ Sean’s comments for the non-confidential UPM selftests patch series
at https://lore.kernel.org/lkml/[email protected]/T/#u apply
here as well
+ Add ucall support for TDX selftests

I would also like to acknowledge the following people, who helped
review or test patches in previous versions:

+ Sean Christopherson <[email protected]>
+ Zhenzhong Duan <[email protected]>
+ Peter Gonda <[email protected]>
+ Andrew Jones <[email protected]>
+ Maxim Levitsky <[email protected]>
+ Xiaoyao Li <[email protected]>
+ David Matlack <[email protected]>
+ Marc Orr <[email protected]>
+ Isaku Yamahata <[email protected]>
+ Maciej S. Szmigiero <[email protected]>

Links to earlier patch series

+ RFC v1: https://lore.kernel.org/lkml/[email protected]/T/#u
+ RFC v2: https://lore.kernel.org/lkml/[email protected]/T/#u
+ RFC v3: https://lore.kernel.org/lkml/[email protected]/T/#u
+ RFC v4: https://lore.kernel.org/lkml/[email protected]/


Ackerley Tng (12):
KVM: selftests: Add function to allow one-to-one GVA to GPA mappings
KVM: selftests: Expose function that sets up sregs based on VM's mode
KVM: selftests: Store initial stack address in struct kvm_vcpu
KVM: selftests: Refactor steps in vCPU descriptor table initialization
KVM: selftests: TDX: Use KVM_TDX_CAPABILITIES to validate TDs'
attribute configuration
KVM: selftests: TDX: Update load_td_memory_region for VM memory backed
by guest memfd
KVM: selftests: Add functions to allow mapping as shared
KVM: selftests: Expose _vm_vaddr_alloc
KVM: selftests: TDX: Add support for TDG.MEM.PAGE.ACCEPT
KVM: selftests: TDX: Add support for TDG.VP.VEINFO.GET
KVM: selftests: TDX: Add TDX UPM selftest
KVM: selftests: TDX: Add TDX UPM selftests for implicit conversion

Erdem Aktas (3):
KVM: selftests: Add helper functions to create TDX VMs
KVM: selftests: TDX: Add TDX lifecycle test
KVM: selftests: TDX: Adding test case for TDX port IO

Roger Wang (1):
KVM: selftests: TDX: Add TDG.VP.INFO test

Ryan Afranji (2):
KVM: selftests: TDX: Verify the behavior when host consumes a TD
private memory
KVM: selftests: TDX: Add shared memory test

Sagi Shahar (11):
KVM: selftests: TDX: Add report_fatal_error test
KVM: selftests: TDX: Add basic TDX CPUID test
KVM: selftests: TDX: Add basic get_td_vmcall_info test
KVM: selftests: TDX: Add TDX IO writes test
KVM: selftests: TDX: Add TDX IO reads test
KVM: selftests: TDX: Add TDX MSR read/write tests
KVM: selftests: TDX: Add TDX HLT exit test
KVM: selftests: TDX: Add TDX MMIO reads test
KVM: selftests: TDX: Add TDX MMIO writes test
KVM: selftests: TDX: Add TDX CPUID TDVMCALL test
KVM: selftests: Propagate KVM_EXIT_MEMORY_FAULT to userspace

tools/testing/selftests/kvm/Makefile | 8 +
.../selftests/kvm/include/kvm_util_base.h | 30 +
.../selftests/kvm/include/x86_64/processor.h | 4 +
.../kvm/include/x86_64/tdx/td_boot.h | 82 +
.../kvm/include/x86_64/tdx/td_boot_asm.h | 16 +
.../selftests/kvm/include/x86_64/tdx/tdcall.h | 59 +
.../selftests/kvm/include/x86_64/tdx/tdx.h | 65 +
.../kvm/include/x86_64/tdx/tdx_util.h | 19 +
.../kvm/include/x86_64/tdx/test_util.h | 164 ++
tools/testing/selftests/kvm/lib/kvm_util.c | 101 +-
.../selftests/kvm/lib/x86_64/processor.c | 77 +-
.../selftests/kvm/lib/x86_64/tdx/td_boot.S | 101 ++
.../selftests/kvm/lib/x86_64/tdx/tdcall.S | 158 ++
.../selftests/kvm/lib/x86_64/tdx/tdx.c | 262 ++++
.../selftests/kvm/lib/x86_64/tdx/tdx_util.c | 558 +++++++
.../selftests/kvm/lib/x86_64/tdx/test_util.c | 101 ++
.../kvm/x86_64/tdx_shared_mem_test.c | 135 ++
.../selftests/kvm/x86_64/tdx_upm_test.c | 469 ++++++
.../selftests/kvm/x86_64/tdx_vm_tests.c | 1319 +++++++++++++++++
19 files changed, 3693 insertions(+), 35 deletions(-)
create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/td_boot.h
create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/td_boot_asm.h
create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h
create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/td_boot.S
create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/tdcall.S
create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c
create mode 100644 tools/testing/selftests/kvm/x86_64/tdx_shared_mem_test.c
create mode 100644 tools/testing/selftests/kvm/x86_64/tdx_upm_test.c
create mode 100644 tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c

--
2.43.0.472.g3155946c3a-goog


2023-12-12 20:47:14

by Sagi Shahar

Subject: [RFC PATCH v5 03/29] KVM: selftests: Store initial stack address in struct kvm_vcpu

From: Ackerley Tng <[email protected]>

TDX guests' registers cannot be initialized directly using
vcpu_regs_set(), hence the stack pointer needs to be initialized by
the guest itself, running boot code beginning at the reset vector.

We store the stack address as part of struct kvm_vcpu so that it can
be accessed later and passed to the boot code for rsp initialization.

Signed-off-by: Ackerley Tng <[email protected]>
Signed-off-by: Ryan Afranji <[email protected]>
Signed-off-by: Sagi Shahar <[email protected]>
---
tools/testing/selftests/kvm/include/kvm_util_base.h | 1 +
tools/testing/selftests/kvm/lib/x86_64/processor.c | 4 +++-
2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index c2e5c5f25dfc..b353617fcdd1 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -68,6 +68,7 @@ struct kvm_vcpu {
int fd;
struct kvm_vm *vm;
struct kvm_run *run;
+ vm_vaddr_t initial_stack_addr;
#ifdef __x86_64__
struct kvm_cpuid2 *cpuid;
#endif
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index f130f78a4974..b6b9438e0a33 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -621,10 +621,12 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id,
vcpu_init_cpuid(vcpu, kvm_get_supported_cpuid());
vcpu_setup(vm, vcpu);

+ vcpu->initial_stack_addr = stack_vaddr;
+
/* Setup guest general purpose registers */
vcpu_regs_get(vcpu, &regs);
regs.rflags = regs.rflags | 0x2;
- regs.rsp = stack_vaddr;
+ regs.rsp = vcpu->initial_stack_addr;
regs.rip = (unsigned long) guest_code;
vcpu_regs_set(vcpu, &regs);

--
2.43.0.472.g3155946c3a-goog

2023-12-12 20:47:23

by Sagi Shahar

Subject: [RFC PATCH v5 02/29] KVM: selftests: Expose function that sets up sregs based on VM's mode

From: Ackerley Tng <[email protected]>

This allows initializing sregs without setting vCPU registers in
KVM.

No functional change intended.

Signed-off-by: Ackerley Tng <[email protected]>
Signed-off-by: Ryan Afranji <[email protected]>
Signed-off-by: Sagi Shahar <[email protected]>
---
.../selftests/kvm/include/x86_64/processor.h | 2 +
.../selftests/kvm/lib/x86_64/processor.c | 39 ++++++++++---------
2 files changed, 23 insertions(+), 18 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 35fcf4d78dfa..0b8855d68744 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -958,6 +958,8 @@ static inline struct kvm_cpuid2 *allocate_kvm_cpuid2(int nr_entries)
void vcpu_init_cpuid(struct kvm_vcpu *vcpu, const struct kvm_cpuid2 *cpuid);
void vcpu_set_hv_cpuid(struct kvm_vcpu *vcpu);

+void vcpu_setup_mode_sregs(struct kvm_vm *vm, struct kvm_sregs *sregs);
+
static inline struct kvm_cpuid_entry2 *__vcpu_get_cpuid_entry(struct kvm_vcpu *vcpu,
uint32_t function,
uint32_t index)
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index aef1c021c4bb..f130f78a4974 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -543,36 +543,39 @@ static void kvm_setup_tss_64bit(struct kvm_vm *vm, struct kvm_segment *segp,
kvm_seg_fill_gdt_64bit(vm, segp);
}

-static void vcpu_setup(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
+void vcpu_setup_mode_sregs(struct kvm_vm *vm, struct kvm_sregs *sregs)
{
- struct kvm_sregs sregs;
-
- /* Set mode specific system register values. */
- vcpu_sregs_get(vcpu, &sregs);
-
- sregs.idt.limit = 0;
+ sregs->idt.limit = 0;

- kvm_setup_gdt(vm, &sregs.gdt);
+ kvm_setup_gdt(vm, &sregs->gdt);

switch (vm->mode) {
case VM_MODE_PXXV48_4K_SEV:
case VM_MODE_PXXV48_4K:
- sregs.cr0 = X86_CR0_PE | X86_CR0_NE | X86_CR0_PG;
- sregs.cr4 |= X86_CR4_PAE | X86_CR4_OSFXSR;
- sregs.efer |= (EFER_LME | EFER_LMA | EFER_NX);
-
- kvm_seg_set_unusable(&sregs.ldt);
- kvm_seg_set_kernel_code_64bit(vm, DEFAULT_CODE_SELECTOR, &sregs.cs);
- kvm_seg_set_kernel_data_64bit(vm, DEFAULT_DATA_SELECTOR, &sregs.ds);
- kvm_seg_set_kernel_data_64bit(vm, DEFAULT_DATA_SELECTOR, &sregs.es);
- kvm_setup_tss_64bit(vm, &sregs.tr, 0x18);
+ sregs->cr0 = X86_CR0_PE | X86_CR0_NE | X86_CR0_PG;
+ sregs->cr4 |= X86_CR4_PAE | X86_CR4_OSFXSR;
+ sregs->efer |= (EFER_LME | EFER_LMA | EFER_NX);
+
+ kvm_seg_set_unusable(&sregs->ldt);
+ kvm_seg_set_kernel_code_64bit(vm, DEFAULT_CODE_SELECTOR, &sregs->cs);
+ kvm_seg_set_kernel_data_64bit(vm, DEFAULT_DATA_SELECTOR, &sregs->ds);
+ kvm_seg_set_kernel_data_64bit(vm, DEFAULT_DATA_SELECTOR, &sregs->es);
+ kvm_setup_tss_64bit(vm, &sregs->tr, 0x18);
break;

default:
TEST_FAIL("Unknown guest mode, mode: 0x%x", vm->mode);
}

- sregs.cr3 = vm->pgd;
+ sregs->cr3 = vm->pgd;
+}
+
+static void vcpu_setup(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
+{
+ struct kvm_sregs sregs;
+
+ vcpu_sregs_get(vcpu, &sregs);
+ vcpu_setup_mode_sregs(vm, &sregs);
vcpu_sregs_set(vcpu, &sregs);
}

--
2.43.0.472.g3155946c3a-goog

2023-12-12 20:47:31

by Sagi Shahar

Subject: [RFC PATCH v5 06/29] KVM: selftests: TDX: Use KVM_TDX_CAPABILITIES to validate TDs' attribute configuration

From: Ackerley Tng <[email protected]>

This also exercises the KVM_TDX_CAPABILITIES ioctl.

Suggested-by: Isaku Yamahata <[email protected]>
Signed-off-by: Ackerley Tng <[email protected]>
Signed-off-by: Ryan Afranji <[email protected]>
Signed-off-by: Sagi Shahar <[email protected]>
---
.../selftests/kvm/lib/x86_64/tdx/tdx_util.c | 69 ++++++++++++++++++-
1 file changed, 66 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
index 9b69c733ce01..6b995c3f6153 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
@@ -27,10 +27,9 @@ static char *tdx_cmd_str[] = {
};
#define TDX_MAX_CMD_STR (ARRAY_SIZE(tdx_cmd_str))

-static void tdx_ioctl(int fd, int ioctl_no, uint32_t flags, void *data)
+static int _tdx_ioctl(int fd, int ioctl_no, uint32_t flags, void *data)
{
struct kvm_tdx_cmd tdx_cmd;
- int r;

TEST_ASSERT(ioctl_no < TDX_MAX_CMD_STR, "Unknown TDX CMD : %d\n",
ioctl_no);
@@ -40,11 +39,58 @@ static void tdx_ioctl(int fd, int ioctl_no, uint32_t flags, void *data)
tdx_cmd.flags = flags;
tdx_cmd.data = (uint64_t)data;

- r = ioctl(fd, KVM_MEMORY_ENCRYPT_OP, &tdx_cmd);
+ return ioctl(fd, KVM_MEMORY_ENCRYPT_OP, &tdx_cmd);
+}
+
+static void tdx_ioctl(int fd, int ioctl_no, uint32_t flags, void *data)
+{
+ int r;
+
+ r = _tdx_ioctl(fd, ioctl_no, flags, data);
TEST_ASSERT(r == 0, "%s failed: %d %d", tdx_cmd_str[ioctl_no], r,
errno);
}

+static struct kvm_tdx_capabilities *tdx_read_capabilities(struct kvm_vm *vm)
+{
+ int i;
+ int rc = -1;
+ int nr_cpuid_configs = 4;
+ struct kvm_tdx_capabilities *tdx_cap = NULL;
+
+ do {
+ nr_cpuid_configs *= 2;
+
+ tdx_cap = realloc(
+ tdx_cap, sizeof(*tdx_cap) +
+ nr_cpuid_configs * sizeof(*tdx_cap->cpuid_configs));
+ TEST_ASSERT(tdx_cap != NULL,
+ "Could not allocate memory for tdx capability nr_cpuid_configs %d\n",
+ nr_cpuid_configs);
+
+ tdx_cap->nr_cpuid_configs = nr_cpuid_configs;
+ rc = _tdx_ioctl(vm->fd, KVM_TDX_CAPABILITIES, 0, tdx_cap);
+ } while (rc < 0 && errno == E2BIG);
+
+ TEST_ASSERT(rc == 0, "KVM_TDX_CAPABILITIES failed: %d %d",
+ rc, errno);
+
+ pr_debug("tdx_cap: attrs: fixed0 0x%016llx fixed1 0x%016llx\n"
+ "tdx_cap: xfam fixed0 0x%016llx fixed1 0x%016llx\n",
+ tdx_cap->attrs_fixed0, tdx_cap->attrs_fixed1,
+ tdx_cap->xfam_fixed0, tdx_cap->xfam_fixed1);
+
+ for (i = 0; i < tdx_cap->nr_cpuid_configs; i++) {
+ const struct kvm_tdx_cpuid_config *config =
+ &tdx_cap->cpuid_configs[i];
+ pr_debug("cpuid config[%d]: leaf 0x%x sub_leaf 0x%x eax 0x%08x ebx 0x%08x ecx 0x%08x edx 0x%08x\n",
+ i, config->leaf, config->sub_leaf,
+ config->eax, config->ebx, config->ecx, config->edx);
+ }
+
+ return tdx_cap;
+}
+
#define XFEATURE_MASK_CET (XFEATURE_MASK_CET_USER | XFEATURE_MASK_CET_KERNEL)

static void tdx_apply_cpuid_restrictions(struct kvm_cpuid2 *cpuid_data)
@@ -78,6 +124,21 @@ static void tdx_apply_cpuid_restrictions(struct kvm_cpuid2 *cpuid_data)
}
}

+static void tdx_check_attributes(struct kvm_vm *vm, uint64_t attributes)
+{
+ struct kvm_tdx_capabilities *tdx_cap;
+
+ tdx_cap = tdx_read_capabilities(vm);
+
+	/* TDX spec: any bit that is 0 in attrs_fixed0 must be 0 in attributes */
+ TEST_ASSERT_EQ(attributes & ~tdx_cap->attrs_fixed0, 0);
+
+	/* TDX spec: any bit that is 1 in attrs_fixed1 must be 1 in attributes */
+ TEST_ASSERT_EQ(attributes & tdx_cap->attrs_fixed1, tdx_cap->attrs_fixed1);
+
+ free(tdx_cap);
+}
+
static void tdx_td_init(struct kvm_vm *vm, uint64_t attributes)
{
const struct kvm_cpuid2 *cpuid;
@@ -91,6 +152,8 @@ static void tdx_td_init(struct kvm_vm *vm, uint64_t attributes)
memset(init_vm, 0, sizeof(*init_vm));
memcpy(&init_vm->cpuid, cpuid, kvm_cpuid2_size(cpuid->nent));

+ tdx_check_attributes(vm, attributes);
+
init_vm->attributes = attributes;

tdx_apply_cpuid_restrictions(&init_vm->cpuid);
--
2.43.0.472.g3155946c3a-goog

2023-12-12 20:47:33

by Sagi Shahar

Subject: [RFC PATCH v5 04/29] KVM: selftests: Refactor steps in vCPU descriptor table initialization

From: Ackerley Tng <[email protected]>

Split the vCPU descriptor table initialization process into a few
steps and expose them:

+ Setting up the IDT
+ Syncing exception handlers into the guest

In kvm_setup_idt(), we conditionally allocate guest memory for vm->idt
to avoid double allocation when kvm_setup_idt() is used after
vm_init_descriptor_tables().

Signed-off-by: Ackerley Tng <[email protected]>
Signed-off-by: Ryan Afranji <[email protected]>
Signed-off-by: Sagi Shahar <[email protected]>
---
.../selftests/kvm/include/x86_64/processor.h | 2 ++
.../selftests/kvm/lib/x86_64/processor.c | 19 ++++++++++++++++---
2 files changed, 18 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 0b8855d68744..5c4e9a27d9e2 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -1089,6 +1089,8 @@ struct idt_entry {
uint32_t offset2; uint32_t reserved;
};

+void kvm_setup_idt(struct kvm_vm *vm, struct kvm_dtable *dt);
+void sync_exception_handlers_to_guest(struct kvm_vm *vm);
void vm_init_descriptor_tables(struct kvm_vm *vm);
void vcpu_init_descriptor_tables(struct kvm_vcpu *vcpu);
void vm_install_exception_handler(struct kvm_vm *vm, int vector,
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index b6b9438e0a33..566d82829da4 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -1155,19 +1155,32 @@ void vm_init_descriptor_tables(struct kvm_vm *vm)
DEFAULT_CODE_SELECTOR);
}

+void kvm_setup_idt(struct kvm_vm *vm, struct kvm_dtable *dt)
+{
+ if (!vm->idt)
+ vm->idt = vm_vaddr_alloc_page(vm);
+
+ dt->base = vm->idt;
+ dt->limit = NUM_INTERRUPTS * sizeof(struct idt_entry) - 1;
+}
+
+void sync_exception_handlers_to_guest(struct kvm_vm *vm)
+{
+ *(vm_vaddr_t *)addr_gva2hva(vm, (vm_vaddr_t)(&exception_handlers)) = vm->handlers;
+}
+
void vcpu_init_descriptor_tables(struct kvm_vcpu *vcpu)
{
struct kvm_vm *vm = vcpu->vm;
struct kvm_sregs sregs;

vcpu_sregs_get(vcpu, &sregs);
- sregs.idt.base = vm->idt;
- sregs.idt.limit = NUM_INTERRUPTS * sizeof(struct idt_entry) - 1;
+ kvm_setup_idt(vcpu->vm, &sregs.idt);
sregs.gdt.base = vm->gdt;
sregs.gdt.limit = getpagesize() - 1;
kvm_seg_set_kernel_data_64bit(NULL, DEFAULT_DATA_SELECTOR, &sregs.gs);
vcpu_sregs_set(vcpu, &sregs);
- *(vm_vaddr_t *)addr_gva2hva(vm, (vm_vaddr_t)(&exception_handlers)) = vm->handlers;
+ sync_exception_handlers_to_guest(vm);
}

void vm_install_exception_handler(struct kvm_vm *vm, int vector,
--
2.43.0.472.g3155946c3a-goog

2023-12-12 20:47:38

by Sagi Shahar

Subject: [RFC PATCH v5 05/29] KVM: selftests: Add helper functions to create TDX VMs

From: Erdem Aktas <[email protected]>

TDX requires additional ioctls to initialize the VM and vCPUs, to add
private memory, and to finalize the VM memory. Additional utility
functions are also provided to manipulate a TD, similar to those that
manipulate a VM in the existing selftest framework.

A TD's initial register state cannot be manipulated directly by
setting the VM's memory, hence boot code is provided at the TD's reset
vector. This boot code takes boot parameters loaded in the TD's memory
and sets up the TD for the selftest.

Signed-off-by: Erdem Aktas <[email protected]>
Signed-off-by: Ryan Afranji <[email protected]>
Signed-off-by: Sagi Shahar <[email protected]>
Co-developed-by: Ackerley Tng <[email protected]>
Signed-off-by: Ackerley Tng <[email protected]>
---
tools/testing/selftests/kvm/Makefile | 2 +
.../kvm/include/x86_64/tdx/td_boot.h | 82 ++++
.../kvm/include/x86_64/tdx/td_boot_asm.h | 16 +
.../kvm/include/x86_64/tdx/tdx_util.h | 16 +
.../selftests/kvm/lib/x86_64/tdx/td_boot.S | 101 ++++
.../selftests/kvm/lib/x86_64/tdx/tdx_util.c | 434 ++++++++++++++++++
6 files changed, 651 insertions(+)
create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/td_boot.h
create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/td_boot_asm.h
create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h
create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/td_boot.S
create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index b11ac221aba4..a35150ab855f 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -50,6 +50,8 @@ LIBKVM_x86_64 += lib/x86_64/svm.c
LIBKVM_x86_64 += lib/x86_64/ucall.c
LIBKVM_x86_64 += lib/x86_64/vmx.c
LIBKVM_x86_64 += lib/x86_64/sev.c
+LIBKVM_x86_64 += lib/x86_64/tdx/tdx_util.c
+LIBKVM_x86_64 += lib/x86_64/tdx/td_boot.S

LIBKVM_aarch64 += lib/aarch64/gic.c
LIBKVM_aarch64 += lib/aarch64/gic_v3.c
diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/td_boot.h b/tools/testing/selftests/kvm/include/x86_64/tdx/td_boot.h
new file mode 100644
index 000000000000..148057e569d6
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/td_boot.h
@@ -0,0 +1,82 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef SELFTEST_TDX_TD_BOOT_H
+#define SELFTEST_TDX_TD_BOOT_H
+
+#include <stdint.h>
+#include "tdx/td_boot_asm.h"
+
+/*
+ * Layout for boot section (not to scale)
+ *
+ * GPA
+ * ┌─────────────────────────────┬──0x1_0000_0000 (4GB)
+ * │ Boot code trampoline │
+ * ├─────────────────────────────┼──0x0_ffff_fff0: Reset vector (16B below 4GB)
+ * │ Boot code │
+ * ├─────────────────────────────┼──td_boot will be copied here, so that the
+ * │ │ jmp to td_boot is exactly at the reset vector
+ * │ Empty space │
+ * │ │
+ * ├─────────────────────────────┤
+ * │ │
+ * │ │
+ * │ Boot parameters │
+ * │ │
+ * │ │
+ * └─────────────────────────────┴──0x0_ffff_0000: TD_BOOT_PARAMETERS_GPA
+ */
+#define FOUR_GIGABYTES_GPA (4ULL << 30)
+
+/**
+ * The exact memory layout for LGDT or LIDT instructions.
+ */
+struct __packed td_boot_parameters_dtr {
+ uint16_t limit;
+ uint32_t base;
+};
+
+/**
+ * The exact layout in memory required for a ljmp, including the selector for
+ * changing code segment.
+ */
+struct __packed td_boot_parameters_ljmp_target {
+ uint32_t eip_gva;
+ uint16_t code64_sel;
+};
+
+/**
+ * Allows each vCPU to be initialized with different eip and esp.
+ */
+struct __packed td_per_vcpu_parameters {
+ uint32_t esp_gva;
+ struct td_boot_parameters_ljmp_target ljmp_target;
+};
+
+/**
+ * Boot parameters for the TD.
+ *
+ * Unlike a regular VM, we can't ask KVM to set registers such as esp, eip, etc
+ * before boot, so to run selftests, these registers' values have to be
+ * initialized by the TD.
+ *
+ * This struct is loaded in TD private memory at TD_BOOT_PARAMETERS_GPA.
+ *
+ * The TD boot code will read off parameters from this struct and set up the
+ * vcpu for executing selftests.
+ */
+struct __packed td_boot_parameters {
+ uint32_t cr0;
+ uint32_t cr3;
+ uint32_t cr4;
+ struct td_boot_parameters_dtr gdtr;
+ struct td_boot_parameters_dtr idtr;
+ struct td_per_vcpu_parameters per_vcpu[];
+};
+
+extern void td_boot(void);
+extern void reset_vector(void);
+extern void td_boot_code_end(void);
+
+#define TD_BOOT_CODE_SIZE (td_boot_code_end - td_boot)
+
+#endif /* SELFTEST_TDX_TD_BOOT_H */
diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/td_boot_asm.h b/tools/testing/selftests/kvm/include/x86_64/tdx/td_boot_asm.h
new file mode 100644
index 000000000000..0a07104f7deb
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/td_boot_asm.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef SELFTEST_TDX_TD_BOOT_ASM_H
+#define SELFTEST_TDX_TD_BOOT_ASM_H
+
+/*
+ * GPA where TD boot parameters will be loaded.
+ *
+ * TD_BOOT_PARAMETERS_GPA is arbitrarily chosen to
+ *
+ * + be within the 4GB address space
+ * + provide enough contiguous memory for the struct td_boot_parameters such
+ * that there is one struct td_per_vcpu_parameters for KVM_MAX_VCPUS
+ */
+#define TD_BOOT_PARAMETERS_GPA 0xffff0000
+
+#endif // SELFTEST_TDX_TD_BOOT_ASM_H
diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h
new file mode 100644
index 000000000000..274b245f200b
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef SELFTESTS_TDX_KVM_UTIL_H
+#define SELFTESTS_TDX_KVM_UTIL_H
+
+#include <stdint.h>
+
+#include "kvm_util_base.h"
+
+struct kvm_vcpu *td_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id, void *guest_code);
+
+struct kvm_vm *td_create(void);
+void td_initialize(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
+ uint64_t attributes);
+void td_finalize(struct kvm_vm *vm);
+
+#endif // SELFTESTS_TDX_KVM_UTIL_H
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/td_boot.S b/tools/testing/selftests/kvm/lib/x86_64/tdx/td_boot.S
new file mode 100644
index 000000000000..800e09264d4e
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/td_boot.S
@@ -0,0 +1,101 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#include "tdx/td_boot_asm.h"
+
+/* Offsets for reading struct td_boot_parameters */
+#define TD_BOOT_PARAMETERS_CR0 0
+#define TD_BOOT_PARAMETERS_CR3 4
+#define TD_BOOT_PARAMETERS_CR4 8
+#define TD_BOOT_PARAMETERS_GDT 12
+#define TD_BOOT_PARAMETERS_IDT 18
+#define TD_BOOT_PARAMETERS_PER_VCPU 24
+
+/* Offsets for reading struct td_per_vcpu_parameters */
+#define TD_PER_VCPU_PARAMETERS_ESP_GVA 0
+#define TD_PER_VCPU_PARAMETERS_LJMP_TARGET 4
+
+#define SIZEOF_TD_PER_VCPU_PARAMETERS 10
+
+.code32
+
+.globl td_boot
+td_boot:
+ /* In this procedure, edi is used as a temporary register */
+ cli
+
+ /* Paging is off */
+
+ movl $TD_BOOT_PARAMETERS_GPA, %ebx
+
+ /*
+ * Find the address of struct td_per_vcpu_parameters for this
+ * vCPU based on esi (TDX spec: initialized with vcpu id). Put
+ * struct address into register for indirect addressing
+ */
+ movl $SIZEOF_TD_PER_VCPU_PARAMETERS, %eax
+ mul %esi
+ leal TD_BOOT_PARAMETERS_PER_VCPU(%ebx), %edi
+ addl %edi, %eax
+
+ /* Setup stack */
+ movl TD_PER_VCPU_PARAMETERS_ESP_GVA(%eax), %esp
+
+ /* Setup GDT */
+ leal TD_BOOT_PARAMETERS_GDT(%ebx), %edi
+ lgdt (%edi)
+
+ /* Setup IDT */
+ leal TD_BOOT_PARAMETERS_IDT(%ebx), %edi
+ lidt (%edi)
+
+ /*
+ * Set up control registers (There are no instructions to
+ * mov from memory to control registers, hence we need to use ebx
+ * as a scratch register)
+ */
+ movl TD_BOOT_PARAMETERS_CR4(%ebx), %edi
+ movl %edi, %cr4
+ movl TD_BOOT_PARAMETERS_CR3(%ebx), %edi
+ movl %edi, %cr3
+ movl TD_BOOT_PARAMETERS_CR0(%ebx), %edi
+ movl %edi, %cr0
+
+ /* Paging is on after setting the most significant bit on cr0 */
+
+ /*
+ * Jump to selftest guest code. Far jumps read <segment
+ * selector:new eip> from <addr+4:addr>. This location has
+ * already been set up in boot parameters, and we can read boot
+ * parameters because boot code and boot parameters are loaded so
+ * that GVA and GPA are mapped 1:1.
+ */
+ ljmp *TD_PER_VCPU_PARAMETERS_LJMP_TARGET(%eax)
+
+.globl reset_vector
+reset_vector:
+ jmp td_boot
+ /*
+ * Pad reset_vector to its full size of 16 bytes so that this
+ * can be loaded with the end of reset_vector aligned to GPA=4G
+ */
+ int3
+ int3
+ int3
+ int3
+ int3
+ int3
+ int3
+ int3
+ int3
+ int3
+ int3
+ int3
+ int3
+ int3
+
+/* Leave marker so size of td_boot code can be computed */
+.globl td_boot_code_end
+td_boot_code_end:
+
+/* Disable executable stack */
+.section .note.GNU-stack,"",%progbits
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
new file mode 100644
index 000000000000..9b69c733ce01
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
@@ -0,0 +1,434 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#define _GNU_SOURCE
+#include <asm/kvm.h>
+#include <asm/kvm_host.h>
+#include <errno.h>
+#include <linux/kvm.h>
+#include <stdint.h>
+#include <sys/ioctl.h>
+
+#include "kvm_util.h"
+#include "test_util.h"
+#include "tdx/td_boot.h"
+#include "kvm_util_base.h"
+#include "processor.h"
+
+/*
+ * TDX ioctls
+ */
+
+static char *tdx_cmd_str[] = {
+ "KVM_TDX_CAPABILITIES",
+ "KVM_TDX_INIT_VM",
+ "KVM_TDX_INIT_VCPU",
+ "KVM_TDX_INIT_MEM_REGION",
+ "KVM_TDX_FINALIZE_VM"
+};
+#define TDX_MAX_CMD_STR (ARRAY_SIZE(tdx_cmd_str))
+
+static void tdx_ioctl(int fd, int ioctl_no, uint32_t flags, void *data)
+{
+ struct kvm_tdx_cmd tdx_cmd;
+ int r;
+
+ TEST_ASSERT(ioctl_no < TDX_MAX_CMD_STR, "Unknown TDX CMD : %d\n",
+ ioctl_no);
+
+ memset(&tdx_cmd, 0x0, sizeof(tdx_cmd));
+ tdx_cmd.id = ioctl_no;
+ tdx_cmd.flags = flags;
+ tdx_cmd.data = (uint64_t)data;
+
+ r = ioctl(fd, KVM_MEMORY_ENCRYPT_OP, &tdx_cmd);
+ TEST_ASSERT(r == 0, "%s failed: %d %d", tdx_cmd_str[ioctl_no], r,
+ errno);
+}
+
+#define XFEATURE_MASK_CET (XFEATURE_MASK_CET_USER | XFEATURE_MASK_CET_KERNEL)
+
+static void tdx_apply_cpuid_restrictions(struct kvm_cpuid2 *cpuid_data)
+{
+ for (int i = 0; i < cpuid_data->nent; i++) {
+ struct kvm_cpuid_entry2 *e = &cpuid_data->entries[i];
+
+ if (e->function == 0xd && e->index == 0) {
+ /*
+ * TDX module requires both XTILE_{CFG, DATA} to be set.
+ * Both bits are required for AMX to be functional.
+ */
+ if ((e->eax & XFEATURE_MASK_XTILE) !=
+ XFEATURE_MASK_XTILE) {
+ e->eax &= ~XFEATURE_MASK_XTILE;
+ }
+ }
+ if (e->function == 0xd && e->index == 1) {
+ /*
+ * TDX doesn't support LBR yet.
+ * Disable bits from the XCR0 register.
+ */
+ e->ecx &= ~XFEATURE_MASK_LBR;
+ /*
+			 * The TDX module requires both CET_{U, S} to be set even
+ * if only one is supported.
+ */
+ if (e->ecx & XFEATURE_MASK_CET)
+ e->ecx |= XFEATURE_MASK_CET;
+ }
+ }
+}
+
+static void tdx_td_init(struct kvm_vm *vm, uint64_t attributes)
+{
+ const struct kvm_cpuid2 *cpuid;
+ struct kvm_tdx_init_vm *init_vm;
+
+ cpuid = kvm_get_supported_cpuid();
+
+ init_vm = malloc(sizeof(*init_vm) +
+ sizeof(init_vm->cpuid.entries[0]) * cpuid->nent);
+
+ memset(init_vm, 0, sizeof(*init_vm));
+ memcpy(&init_vm->cpuid, cpuid, kvm_cpuid2_size(cpuid->nent));
+
+ init_vm->attributes = attributes;
+
+ tdx_apply_cpuid_restrictions(&init_vm->cpuid);
+
+ tdx_ioctl(vm->fd, KVM_TDX_INIT_VM, 0, init_vm);
+}
+
+static void tdx_td_vcpu_init(struct kvm_vcpu *vcpu)
+{
+ const struct kvm_cpuid2 *cpuid = kvm_get_supported_cpuid();
+
+ vcpu_init_cpuid(vcpu, cpuid);
+ tdx_ioctl(vcpu->fd, KVM_TDX_INIT_VCPU, 0, NULL);
+}
+
+static void tdx_init_mem_region(struct kvm_vm *vm, void *source_pages,
+ uint64_t gpa, uint64_t size)
+{
+ struct kvm_tdx_init_mem_region mem_region = {
+ .source_addr = (uint64_t)source_pages,
+ .gpa = gpa,
+ .nr_pages = size / PAGE_SIZE,
+ };
+ uint32_t metadata = KVM_TDX_MEASURE_MEMORY_REGION;
+
+ TEST_ASSERT((mem_region.nr_pages > 0) &&
+ ((mem_region.nr_pages * PAGE_SIZE) == size),
+ "Cannot add partial pages to the guest memory.\n");
+ TEST_ASSERT(((uint64_t)source_pages & (PAGE_SIZE - 1)) == 0,
+ "Source memory buffer is not page aligned\n");
+ tdx_ioctl(vm->fd, KVM_TDX_INIT_MEM_REGION, metadata, &mem_region);
+}
+
+static void tdx_td_finalizemr(struct kvm_vm *vm)
+{
+ tdx_ioctl(vm->fd, KVM_TDX_FINALIZE_VM, 0, NULL);
+}
+
+/*
+ * TD creation/setup/finalization
+ */
+
+static void tdx_enable_capabilities(struct kvm_vm *vm)
+{
+ int rc;
+
+ rc = kvm_check_cap(KVM_CAP_X2APIC_API);
+ TEST_ASSERT(rc, "TDX: KVM_CAP_X2APIC_API is not supported!");
+ rc = kvm_check_cap(KVM_CAP_SPLIT_IRQCHIP);
+ TEST_ASSERT(rc, "TDX: KVM_CAP_SPLIT_IRQCHIP is not supported!");
+
+ vm_enable_cap(vm, KVM_CAP_X2APIC_API,
+ KVM_X2APIC_API_USE_32BIT_IDS |
+ KVM_X2APIC_API_DISABLE_BROADCAST_QUIRK);
+ vm_enable_cap(vm, KVM_CAP_SPLIT_IRQCHIP, 24);
+}
+
+static void tdx_configure_memory_encryption(struct kvm_vm *vm)
+{
+	/* Configure shared/encrypted bit for this VM according to TDX spec */
+ vm->arch.s_bit = 1ULL << (vm->pa_bits - 1);
+ vm->arch.c_bit = 0;
+ /* Set gpa_protected_mask so that tagging/untagging of GPAs works */
+ vm->gpa_protected_mask = vm->arch.s_bit;
+ /* This VM is protected (has memory encryption) */
+ vm->protected = true;
+}
+
+static void tdx_apply_cr4_restrictions(struct kvm_sregs *sregs)
+{
+ /* TDX spec 11.6.2: CR4 bit MCE is fixed to 1 */
+ sregs->cr4 |= X86_CR4_MCE;
+
+ /* Set this because UEFI also sets this up, to handle XMM exceptions */
+ sregs->cr4 |= X86_CR4_OSXMMEXCPT;
+
+ /* TDX spec 11.6.2: CR4 bit VMXE and SMXE are fixed to 0 */
+ sregs->cr4 &= ~(X86_CR4_VMXE | X86_CR4_SMXE);
+}
+
+static void load_td_boot_code(struct kvm_vm *vm)
+{
+ void *boot_code_hva = addr_gpa2hva(vm, FOUR_GIGABYTES_GPA - TD_BOOT_CODE_SIZE);
+
+ TEST_ASSERT(td_boot_code_end - reset_vector == 16,
+ "The reset vector must be 16 bytes in size.");
+ memcpy(boot_code_hva, td_boot, TD_BOOT_CODE_SIZE);
+}
+
+static void load_td_per_vcpu_parameters(struct td_boot_parameters *params,
+ struct kvm_sregs *sregs,
+ struct kvm_vcpu *vcpu,
+ void *guest_code)
+{
+ /* Store vcpu_index to match what the TDX module would store internally */
+ static uint32_t vcpu_index;
+
+ struct td_per_vcpu_parameters *vcpu_params = &params->per_vcpu[vcpu_index];
+
+ TEST_ASSERT(vcpu->initial_stack_addr != 0,
+ "initial stack address should not be 0");
+ TEST_ASSERT(vcpu->initial_stack_addr <= 0xffffffff,
+ "initial stack address must fit in 32 bits");
+ TEST_ASSERT((uint64_t)guest_code <= 0xffffffff,
+ "guest_code must fit in 32 bits");
+ TEST_ASSERT(sregs->cs.selector != 0, "cs.selector should not be 0");
+
+ vcpu_params->esp_gva = (uint32_t)(uint64_t)vcpu->initial_stack_addr;
+ vcpu_params->ljmp_target.eip_gva = (uint32_t)(uint64_t)guest_code;
+ vcpu_params->ljmp_target.code64_sel = sregs->cs.selector;
+
+ vcpu_index++;
+}
+
+static void load_td_common_parameters(struct td_boot_parameters *params,
+ struct kvm_sregs *sregs)
+{
+ /* Set parameters! */
+ params->cr0 = sregs->cr0;
+ params->cr3 = sregs->cr3;
+ params->cr4 = sregs->cr4;
+ params->gdtr.limit = sregs->gdt.limit;
+ params->gdtr.base = sregs->gdt.base;
+ params->idtr.limit = sregs->idt.limit;
+ params->idtr.base = sregs->idt.base;
+
+ TEST_ASSERT(params->cr0 != 0, "cr0 should not be 0");
+ TEST_ASSERT(params->cr3 != 0, "cr3 should not be 0");
+ TEST_ASSERT(params->cr4 != 0, "cr4 should not be 0");
+ TEST_ASSERT(params->gdtr.base != 0, "gdt base address should not be 0");
+}
+
+static void load_td_boot_parameters(struct td_boot_parameters *params,
+ struct kvm_vcpu *vcpu, void *guest_code)
+{
+ struct kvm_sregs sregs;
+
+ /* Assemble parameters in sregs */
+ memset(&sregs, 0, sizeof(struct kvm_sregs));
+ vcpu_setup_mode_sregs(vcpu->vm, &sregs);
+ tdx_apply_cr4_restrictions(&sregs);
+ kvm_setup_idt(vcpu->vm, &sregs.idt);
+
+ if (!params->cr0)
+ load_td_common_parameters(params, &sregs);
+
+ load_td_per_vcpu_parameters(params, &sregs, vcpu, guest_code);
+}
+
+/**
+ * Adds a vCPU to a TD (Trust Domain) with minimal defaults. It will not set
+ * up any general-purpose registers, as they will be initialized by the TDX
+ * module. In TDX, a vCPU's RIP is set to 0xFFFFFFF0. See Intel TDX EAS
+ * section "Initial State of Guest GPRs" for more information on a vCPU's
+ * initial register values when entering the TD for the first time.
+ *
+ * Input Args:
+ *   vm - Virtual Machine
+ *   vcpu_id - The id of the vCPU to add to the VM.
+ *   guest_code - Guest entry point for this vCPU.
+ */
+struct kvm_vcpu *td_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id, void *guest_code)
+{
+ struct kvm_vcpu *vcpu;
+
+ /*
+ * TD setup will not use the value of rip set in vm_vcpu_add anyway, so
+ * NULL can be used for guest_code.
+ */
+ vcpu = vm_vcpu_add(vm, vcpu_id, NULL);
+
+ tdx_td_vcpu_init(vcpu);
+
+ load_td_boot_parameters(addr_gpa2hva(vm, TD_BOOT_PARAMETERS_GPA),
+ vcpu, guest_code);
+
+ return vcpu;
+}
+
+/**
+ * Iterate over set ranges within sparsebit @s. In each iteration,
+ * @range_begin and @range_end will take the beginning and end of the set range,
+ * which are of type sparsebit_idx_t.
+ *
+ * For example, if the range [3, 7] (inclusive) is set, within the iteration,
+ * @range_begin will take the value 3 and @range_end will take the value 7.
+ *
+ * Ensure that there is at least one bit set before using this macro with
+ * sparsebit_any_set(), because sparsebit_first_set() will abort if none are
+ * set.
+ */
+#define sparsebit_for_each_set_range(s, range_begin, range_end) \
+ for (range_begin = sparsebit_first_set(s), \
+ range_end = sparsebit_next_clear(s, range_begin) - 1; \
+ range_begin && range_end; \
+ range_begin = sparsebit_next_set(s, range_end), \
+ range_end = sparsebit_next_clear(s, range_begin) - 1)
+/*
+ * sparsebit_next_clear() can return 0 if [x, 2**64-1] are all set, and the -1
+ * would then cause an underflow back to 2**64 - 1. This is expected and
+ * correct.
+ *
+ * If the last range in the sparsebit is [x, y] and we try to iterate,
+ * sparsebit_next_set() will return 0, and sparsebit_next_clear() will try and
+ * find the first range, but that's correct because the condition expression
+ * would cause us to quit the loop.
+ */
+
+static void load_td_memory_region(struct kvm_vm *vm,
+ struct userspace_mem_region *region)
+{
+ const struct sparsebit *pages = region->protected_phy_pages;
+ const uint64_t hva_base = region->region.userspace_addr;
+ const vm_paddr_t gpa_base = region->region.guest_phys_addr;
+ const sparsebit_idx_t lowest_page_in_region = gpa_base >>
+ vm->page_shift;
+
+ sparsebit_idx_t i;
+ sparsebit_idx_t j;
+
+ if (!sparsebit_any_set(pages))
+ return;
+
+ sparsebit_for_each_set_range(pages, i, j) {
+ const uint64_t size_to_load = (j - i + 1) * vm->page_size;
+ const uint64_t offset =
+ (i - lowest_page_in_region) * vm->page_size;
+ const uint64_t hva = hva_base + offset;
+ const uint64_t gpa = gpa_base + offset;
+ void *source_addr;
+
+ /*
+ * KVM_TDX_INIT_MEM_REGION ioctl cannot encrypt memory in place,
+ * hence we have to make a copy if there's only one backing
+ * memory source
+ */
+ source_addr = mmap(NULL, size_to_load, PROT_READ | PROT_WRITE,
+ MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+		TEST_ASSERT(
+			source_addr != MAP_FAILED,
+			"Could not allocate memory for loading memory region");
+
+ memcpy(source_addr, (void *)hva, size_to_load);
+
+ tdx_init_mem_region(vm, source_addr, gpa, size_to_load);
+
+ munmap(source_addr, size_to_load);
+ }
+}
+
+static void load_td_private_memory(struct kvm_vm *vm)
+{
+ int ctr;
+ struct userspace_mem_region *region;
+
+ hash_for_each(vm->regions.slot_hash, ctr, region, slot_node) {
+ load_td_memory_region(vm, region);
+ }
+}
+
+struct kvm_vm *td_create(void)
+{
+ struct vm_shape shape;
+
+ shape.mode = VM_MODE_DEFAULT;
+ shape.type = KVM_X86_TDX_VM;
+ return ____vm_create(shape);
+}
+
+static void td_setup_boot_code(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type)
+{
+ vm_vaddr_t addr;
+ size_t boot_code_allocation = round_up(TD_BOOT_CODE_SIZE, PAGE_SIZE);
+ vm_paddr_t boot_code_base_gpa = FOUR_GIGABYTES_GPA - boot_code_allocation;
+ size_t npages = DIV_ROUND_UP(boot_code_allocation, PAGE_SIZE);
+
+ vm_userspace_mem_region_add(vm, src_type, boot_code_base_gpa, 1, npages,
+ KVM_MEM_PRIVATE);
+ addr = vm_vaddr_alloc_1to1(vm, boot_code_allocation, boot_code_base_gpa, 1);
+ TEST_ASSERT_EQ(addr, boot_code_base_gpa);
+
+ load_td_boot_code(vm);
+}
+
+static size_t td_boot_parameters_size(void)
+{
+ int max_vcpus = kvm_check_cap(KVM_CAP_MAX_VCPUS);
+ size_t total_per_vcpu_parameters_size =
+ max_vcpus * sizeof(struct td_per_vcpu_parameters);
+
+ return sizeof(struct td_boot_parameters) + total_per_vcpu_parameters_size;
+}
+
+static void td_setup_boot_parameters(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type)
+{
+ vm_vaddr_t addr;
+ size_t boot_params_size = td_boot_parameters_size();
+ int npages = DIV_ROUND_UP(boot_params_size, PAGE_SIZE);
+ size_t total_size = npages * PAGE_SIZE;
+
+ vm_userspace_mem_region_add(vm, src_type, TD_BOOT_PARAMETERS_GPA, 2,
+ npages, KVM_MEM_PRIVATE);
+ addr = vm_vaddr_alloc_1to1(vm, total_size, TD_BOOT_PARAMETERS_GPA, 2);
+ TEST_ASSERT_EQ(addr, TD_BOOT_PARAMETERS_GPA);
+}
+
+void td_initialize(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
+ uint64_t attributes)
+{
+ uint64_t nr_pages_required;
+
+ tdx_enable_capabilities(vm);
+
+ tdx_configure_memory_encryption(vm);
+
+ tdx_td_init(vm, attributes);
+
+ nr_pages_required = vm_nr_pages_required(VM_MODE_DEFAULT, 1, 0);
+
+ /*
+	 * Add memory (add 0th memslot) for TD. This will be used to set up the
+	 * CPU (provide stack space for the CPU) and to load the ELF file.
+ */
+ vm_userspace_mem_region_add(vm, src_type, 0, 0, nr_pages_required,
+ KVM_MEM_PRIVATE);
+
+ kvm_vm_elf_load(vm, program_invocation_name);
+
+ vm_init_descriptor_tables(vm);
+
+ td_setup_boot_code(vm, src_type);
+ td_setup_boot_parameters(vm, src_type);
+}
+
+void td_finalize(struct kvm_vm *vm)
+{
+ sync_exception_handlers_to_guest(vm);
+
+ load_td_private_memory(vm);
+
+ tdx_td_finalizemr(vm);
+}
--
2.43.0.472.g3155946c3a-goog

2023-12-12 20:47:48

by Sagi Shahar

[permalink] [raw]
Subject: [RFC PATCH v5 09/29] KVM: selftests: TDX: Add report_fatal_error test

The test checks the report_fatal_error functionality, i.e. the guest
reporting a fatal error to the host via TDG.VP.VMCALL<ReportFatalError>.

Signed-off-by: Sagi Shahar <[email protected]>
Signed-off-by: Ackerley Tng <[email protected]>
Signed-off-by: Ryan Afranji <[email protected]>
---
.../selftests/kvm/include/x86_64/tdx/tdx.h | 6 ++-
.../kvm/include/x86_64/tdx/tdx_util.h | 1 +
.../kvm/include/x86_64/tdx/test_util.h | 19 ++++++++
.../selftests/kvm/lib/x86_64/tdx/tdx.c | 39 ++++++++++++++++
.../selftests/kvm/lib/x86_64/tdx/tdx_util.c | 12 +++++
.../selftests/kvm/lib/x86_64/tdx/test_util.c | 10 +++++
.../selftests/kvm/x86_64/tdx_vm_tests.c | 45 +++++++++++++++++++
7 files changed, 131 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
index a7161efe4ee2..1340c1070002 100644
--- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
@@ -3,10 +3,14 @@
#define SELFTEST_TDX_TDX_H

#include <stdint.h>
+#include "kvm_util_base.h"

-#define TDG_VP_VMCALL_INSTRUCTION_IO 30
+#define TDG_VP_VMCALL_REPORT_FATAL_ERROR 0x10003

+#define TDG_VP_VMCALL_INSTRUCTION_IO 30
+void handle_userspace_tdg_vp_vmcall_exit(struct kvm_vcpu *vcpu);
uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
uint64_t write, uint64_t *data);
+void tdg_vp_vmcall_report_fatal_error(uint64_t error_code, uint64_t data_gpa);

#endif // SELFTEST_TDX_TDX_H
diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h
index 274b245f200b..32dd6b8fda46 100644
--- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h
@@ -12,5 +12,6 @@ struct kvm_vm *td_create(void);
void td_initialize(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
uint64_t attributes);
void td_finalize(struct kvm_vm *vm);
+void td_vcpu_run(struct kvm_vcpu *vcpu);

#endif // SELFTESTS_TDX_KVM_UTIL_H
diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
index b570b6d978ff..6d69921136bd 100644
--- a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
@@ -49,4 +49,23 @@ bool is_tdx_enabled(void);
*/
void tdx_test_success(void);

+/**
+ * Report an error with @error_code to userspace.
+ *
+ * Return value from tdg_vp_vmcall_report_fatal_error is ignored since execution
+ * is not expected to continue beyond this point.
+ */
+void tdx_test_fatal(uint64_t error_code);
+
+/**
+ * Report an error with @error_code to userspace.
+ *
+ * @data_gpa may point to an optional shared guest memory holding the error
+ * string.
+ *
+ * Return value from tdg_vp_vmcall_report_fatal_error is ignored since execution
+ * is not expected to continue beyond this point.
+ */
+void tdx_test_fatal_with_data(uint64_t error_code, uint64_t data_gpa);
+
#endif // SELFTEST_TDX_TEST_UTIL_H
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
index c2414523487a..b854c3aa34ff 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
@@ -1,8 +1,31 @@
// SPDX-License-Identifier: GPL-2.0-only

+#include <string.h>
+
#include "tdx/tdcall.h"
#include "tdx/tdx.h"

+void handle_userspace_tdg_vp_vmcall_exit(struct kvm_vcpu *vcpu)
+{
+ struct kvm_tdx_vmcall *vmcall_info = &vcpu->run->tdx.u.vmcall;
+ uint64_t vmcall_subfunction = vmcall_info->subfunction;
+
+ switch (vmcall_subfunction) {
+ case TDG_VP_VMCALL_REPORT_FATAL_ERROR:
+ vcpu->run->exit_reason = KVM_EXIT_SYSTEM_EVENT;
+ vcpu->run->system_event.ndata = 3;
+ vcpu->run->system_event.data[0] =
+ TDG_VP_VMCALL_REPORT_FATAL_ERROR;
+ vcpu->run->system_event.data[1] = vmcall_info->in_r12;
+ vcpu->run->system_event.data[2] = vmcall_info->in_r13;
+ vmcall_info->status_code = 0;
+ break;
+ default:
+		TEST_FAIL("TD VMCALL subfunction %lu is unsupported.",
+			  vmcall_subfunction);
+ }
+}
+
uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
uint64_t write, uint64_t *data)
{
@@ -25,3 +48,19 @@ uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,

return ret;
}
+
+void tdg_vp_vmcall_report_fatal_error(uint64_t error_code, uint64_t data_gpa)
+{
+ struct tdx_hypercall_args args;
+
+ memset(&args, 0, sizeof(struct tdx_hypercall_args));
+
+ if (data_gpa)
+ error_code |= 0x8000000000000000;
+
+ args.r11 = TDG_VP_VMCALL_REPORT_FATAL_ERROR;
+ args.r12 = error_code;
+ args.r13 = data_gpa;
+
+ __tdx_hypercall(&args, 0);
+}
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
index b302060049d5..d745bb6287c1 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
@@ -10,6 +10,7 @@

#include "kvm_util.h"
#include "test_util.h"
+#include "tdx/tdx.h"
#include "tdx/td_boot.h"
#include "kvm_util_base.h"
#include "processor.h"
@@ -519,3 +520,14 @@ void td_finalize(struct kvm_vm *vm)

tdx_td_finalizemr(vm);
}
+
+void td_vcpu_run(struct kvm_vcpu *vcpu)
+{
+ vcpu_run(vcpu);
+
+ /* Handle TD VMCALLs that require userspace handling. */
+ if (vcpu->run->exit_reason == KVM_EXIT_TDX &&
+ vcpu->run->tdx.type == KVM_EXIT_TDX_VMCALL) {
+ handle_userspace_tdg_vp_vmcall_exit(vcpu);
+ }
+}
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c
index 6905d0ca3877..7f3cd8089cea 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c
@@ -32,3 +32,13 @@ void tdx_test_success(void)
TDX_TEST_SUCCESS_SIZE,
TDG_VP_VMCALL_INSTRUCTION_IO_WRITE, &code);
}
+
+void tdx_test_fatal_with_data(uint64_t error_code, uint64_t data_gpa)
+{
+ tdg_vp_vmcall_report_fatal_error(error_code, data_gpa);
+}
+
+void tdx_test_fatal(uint64_t error_code)
+{
+ tdx_test_fatal_with_data(error_code, 0);
+}
diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
index a18d1c9d6026..8638c7bbedaa 100644
--- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
+++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
@@ -2,6 +2,7 @@

#include <signal.h>
#include "kvm_util_base.h"
+#include "tdx/tdx.h"
#include "tdx/tdx_util.h"
#include "tdx/test_util.h"
#include "test_util.h"
@@ -30,6 +31,49 @@ void verify_td_lifecycle(void)
printf("\t ... PASSED\n");
}

+void guest_code_report_fatal_error(void)
+{
+ uint64_t err;
+
+ /*
+ * Note: err should follow the GHCI spec definition:
+ * bits 31:0 should be set to 0.
+ * bits 62:32 are used for TD-specific extended error code.
+ * bit 63 is used to mark additional information in shared memory.
+ */
+ err = 0x0BAAAAAD00000000;
+ if (err)
+ tdx_test_fatal(err);
+
+ tdx_test_success();
+}
+void verify_report_fatal_error(void)
+{
+ struct kvm_vm *vm;
+ struct kvm_vcpu *vcpu;
+
+ vm = td_create();
+ td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
+ vcpu = td_vcpu_add(vm, 0, guest_code_report_fatal_error);
+ td_finalize(vm);
+
+ printf("Verifying report_fatal_error:\n");
+
+ td_vcpu_run(vcpu);
+
+ TEST_ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_SYSTEM_EVENT);
+ TEST_ASSERT_EQ(vcpu->run->system_event.ndata, 3);
+ TEST_ASSERT_EQ(vcpu->run->system_event.data[0], TDG_VP_VMCALL_REPORT_FATAL_ERROR);
+ TEST_ASSERT_EQ(vcpu->run->system_event.data[1], 0x0BAAAAAD00000000);
+ TEST_ASSERT_EQ(vcpu->run->system_event.data[2], 0);
+
+ vcpu_run(vcpu);
+ TDX_TEST_ASSERT_SUCCESS(vcpu);
+
+ kvm_vm_free(vm);
+ printf("\t ... PASSED\n");
+}
+
int main(int argc, char **argv)
{
setbuf(stdout, NULL);
@@ -40,6 +84,7 @@ int main(int argc, char **argv)
}

run_in_new_process(&verify_td_lifecycle);
+ run_in_new_process(&verify_report_fatal_error);

return 0;
}
--
2.43.0.472.g3155946c3a-goog

2023-12-12 20:47:54

by Sagi Shahar

[permalink] [raw]
Subject: [RFC PATCH v5 10/29] KVM: selftests: TDX: Adding test case for TDX port IO

From: Erdem Aktas <[email protected]>

Verifies TDVMCALL<INSTRUCTION.IO> READ and WRITE operations.

Signed-off-by: Erdem Aktas <[email protected]>
Signed-off-by: Sagi Shahar <[email protected]>
Signed-off-by: Ackerley Tng <[email protected]>
Signed-off-by: Ryan Afranji <[email protected]>
---
.../kvm/include/x86_64/tdx/test_util.h | 34 ++++++++
.../selftests/kvm/x86_64/tdx_vm_tests.c | 82 +++++++++++++++++++
2 files changed, 116 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
index 6d69921136bd..95a5d5be7f0b 100644
--- a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
@@ -9,6 +9,40 @@
#define TDX_TEST_SUCCESS_PORT 0x30
#define TDX_TEST_SUCCESS_SIZE 4

+/**
+ * Assert that some IO operation involving tdg_vp_vmcall_instruction_io() was
+ * called in the guest.
+ */
+#define TDX_TEST_ASSERT_IO(VCPU, PORT, SIZE, DIR) \
+ do { \
+ TEST_ASSERT((VCPU)->run->exit_reason == KVM_EXIT_IO, \
+ "Got exit_reason other than KVM_EXIT_IO: %u (%s)\n", \
+ (VCPU)->run->exit_reason, \
+ exit_reason_str((VCPU)->run->exit_reason)); \
+ \
+ TEST_ASSERT(((VCPU)->run->exit_reason == KVM_EXIT_IO) && \
+ ((VCPU)->run->io.port == (PORT)) && \
+ ((VCPU)->run->io.size == (SIZE)) && \
+ ((VCPU)->run->io.direction == (DIR)), \
+ "Got unexpected IO exit values: %u (%s) %d %d %d\n", \
+ (VCPU)->run->exit_reason, \
+ exit_reason_str((VCPU)->run->exit_reason), \
+ (VCPU)->run->io.port, (VCPU)->run->io.size, \
+ (VCPU)->run->io.direction); \
+ } while (0)
+
+/**
+ * Check and report if there was some failure in the guest, either an exception
+ * like a triple fault, or if a tdx_test_fatal() was hit.
+ */
+#define TDX_TEST_CHECK_GUEST_FAILURE(VCPU) \
+ do { \
+ if ((VCPU)->run->exit_reason == KVM_EXIT_SYSTEM_EVENT) \
+ TEST_FAIL("Guest reported error. error code: %lld (0x%llx)\n", \
+ (VCPU)->run->system_event.data[1], \
+ (VCPU)->run->system_event.data[1]); \
+ } while (0)
+
/**
* Assert that tdx_test_success() was called in the guest.
*/
diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
index 8638c7bbedaa..75467c407ca7 100644
--- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
+++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
@@ -2,6 +2,7 @@

#include <signal.h>
#include "kvm_util_base.h"
+#include "tdx/tdcall.h"
#include "tdx/tdx.h"
#include "tdx/tdx_util.h"
#include "tdx/test_util.h"
@@ -74,6 +75,86 @@ void verify_report_fatal_error(void)
printf("\t ... PASSED\n");
}

+#define TDX_IOEXIT_TEST_PORT 0x50
+
+/*
+ * Verifies IO functionality by writing a |value| to a predefined port and
+ * verifying that the value read back from the same port is |value| + 1.
+ * If all checks pass, success is reported via tdx_test_success().
+ */
+void guest_ioexit(void)
+{
+ uint64_t data_out, data_in, delta;
+ uint64_t ret;
+
+ data_out = 0xAB;
+ ret = tdg_vp_vmcall_instruction_io(TDX_IOEXIT_TEST_PORT, 1,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE,
+ &data_out);
+ if (ret)
+ tdx_test_fatal(ret);
+
+ ret = tdg_vp_vmcall_instruction_io(TDX_IOEXIT_TEST_PORT, 1,
+ TDG_VP_VMCALL_INSTRUCTION_IO_READ,
+ &data_in);
+ if (ret)
+ tdx_test_fatal(ret);
+
+	delta = data_in - data_out;
+	if (delta != 1)
+		tdx_test_fatal(delta);
+
+ tdx_test_success();
+}
+
+void verify_td_ioexit(void)
+{
+ struct kvm_vm *vm;
+ struct kvm_vcpu *vcpu;
+
+ uint32_t port_data;
+
+ vm = td_create();
+ td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
+ vcpu = td_vcpu_add(vm, 0, guest_ioexit);
+ td_finalize(vm);
+
+ printf("Verifying TD IO Exit:\n");
+
+	/* Wait for the guest to do an IO write */
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_IO(vcpu, TDX_IOEXIT_TEST_PORT, 1,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
+ port_data = *(uint8_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
+
+ printf("\t ... IO WRITE: OK\n");
+
+	/*
+	 * Wait for the guest to do an IO read. Provide the previously
+	 * written data + 1 back to the guest.
+	 */
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_IO(vcpu, TDX_IOEXIT_TEST_PORT, 1,
+ TDG_VP_VMCALL_INSTRUCTION_IO_READ);
+ *(uint8_t *)((void *)vcpu->run + vcpu->run->io.data_offset) = port_data + 1;
+
+ printf("\t ... IO READ: OK\n");
+
+ /*
+ * Wait for the guest to complete execution successfully. The read
+ * value is checked within the guest.
+ */
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_SUCCESS(vcpu);
+
+ printf("\t ... IO verify read/write values: OK\n");
+ kvm_vm_free(vm);
+ printf("\t ... PASSED\n");
+}
+
int main(int argc, char **argv)
{
setbuf(stdout, NULL);
@@ -85,6 +166,7 @@ int main(int argc, char **argv)

run_in_new_process(&verify_td_lifecycle);
run_in_new_process(&verify_report_fatal_error);
+ run_in_new_process(&verify_td_ioexit);

return 0;
}
--
2.43.0.472.g3155946c3a-goog

2023-12-12 20:47:55

by Sagi Shahar

[permalink] [raw]
Subject: [RFC PATCH v5 07/29] KVM: selftests: TDX: Update load_td_memory_region for VM memory backed by guest memfd

From: Ackerley Tng <[email protected]>

If guest memory is backed by restricted memfd:

+ UPM is being used, hence the encrypted memory region has to be
registered with KVM
+ A copy of guest memory before TDX initializes the memory region can
be avoided

Signed-off-by: Ackerley Tng <[email protected]>
Signed-off-by: Ryan Afranji <[email protected]>
Signed-off-by: Sagi Shahar <[email protected]>
---
.../selftests/kvm/lib/x86_64/tdx/tdx_util.c | 41 +++++++++++++++----
1 file changed, 32 insertions(+), 9 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
index 6b995c3f6153..063ff486fb86 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
@@ -192,6 +192,21 @@ static void tdx_td_finalizemr(struct kvm_vm *vm)
tdx_ioctl(vm->fd, KVM_TDX_FINALIZE_VM, 0, NULL);
}

+/*
+ * Other ioctls
+ */
+
+/**
+ * Register a memory region that may contain encrypted data in KVM.
+ */
+static void register_encrypted_memory_region(
+ struct kvm_vm *vm, struct userspace_mem_region *region)
+{
+ vm_set_memory_attributes(vm, region->region.guest_phys_addr,
+ region->region.memory_size,
+ KVM_MEMORY_ATTRIBUTE_PRIVATE);
+}
+
/*
* TD creation/setup/finalization
*/
@@ -376,30 +391,38 @@ static void load_td_memory_region(struct kvm_vm *vm,
if (!sparsebit_any_set(pages))
return;

+
+ if (region->region.guest_memfd != -1)
+ register_encrypted_memory_region(vm, region);
+
sparsebit_for_each_set_range(pages, i, j) {
const uint64_t size_to_load = (j - i + 1) * vm->page_size;
const uint64_t offset =
(i - lowest_page_in_region) * vm->page_size;
const uint64_t hva = hva_base + offset;
const uint64_t gpa = gpa_base + offset;
- void *source_addr;
+ void *source_addr = (void *)hva;

/*
* KVM_TDX_INIT_MEM_REGION ioctl cannot encrypt memory in place,
* hence we have to make a copy if there's only one backing
* memory source
*/
- source_addr = mmap(NULL, size_to_load, PROT_READ | PROT_WRITE,
- MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
- TEST_ASSERT(
- source_addr,
- "Could not allocate memory for loading memory region");
-
- memcpy(source_addr, (void *)hva, size_to_load);
+ if (region->region.guest_memfd == -1) {
+ source_addr = mmap(NULL, size_to_load, PROT_READ | PROT_WRITE,
+ MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+			TEST_ASSERT(
+				source_addr != MAP_FAILED,
+				"Could not allocate memory for loading memory region");
+
+ memcpy(source_addr, (void *)hva, size_to_load);
+ memset((void *)hva, 0, size_to_load);
+ }

tdx_init_mem_region(vm, source_addr, gpa, size_to_load);

- munmap(source_addr, size_to_load);
+ if (region->region.guest_memfd == -1)
+ munmap(source_addr, size_to_load);
}
}

--
2.43.0.472.g3155946c3a-goog

2023-12-12 20:48:07

by Sagi Shahar

[permalink] [raw]
Subject: [RFC PATCH v5 08/29] KVM: selftests: TDX: Add TDX lifecycle test

From: Erdem Aktas <[email protected]>

Add a test to verify the TDX lifecycle by creating a TD and running a
dummy TDG.VP.VMCALL <Instruction.IO> inside it.

Signed-off-by: Erdem Aktas <[email protected]>
Signed-off-by: Ryan Afranji <[email protected]>
Signed-off-by: Sagi Shahar <[email protected]>
Co-developed-by: Ackerley Tng <[email protected]>
Signed-off-by: Ackerley Tng <[email protected]>
---
tools/testing/selftests/kvm/Makefile | 4 +
.../selftests/kvm/include/x86_64/tdx/tdcall.h | 35 ++++++++
.../selftests/kvm/include/x86_64/tdx/tdx.h | 12 +++
.../kvm/include/x86_64/tdx/test_util.h | 52 +++++++++++
.../selftests/kvm/lib/x86_64/tdx/tdcall.S | 90 +++++++++++++++++++
.../selftests/kvm/lib/x86_64/tdx/tdx.c | 27 ++++++
.../selftests/kvm/lib/x86_64/tdx/tdx_util.c | 1 +
.../selftests/kvm/lib/x86_64/tdx/test_util.c | 34 +++++++
.../selftests/kvm/x86_64/tdx_vm_tests.c | 45 ++++++++++
9 files changed, 300 insertions(+)
create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/tdcall.S
create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c
create mode 100644 tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index a35150ab855f..80d4a50eeb9f 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -52,6 +52,9 @@ LIBKVM_x86_64 += lib/x86_64/vmx.c
LIBKVM_x86_64 += lib/x86_64/sev.c
LIBKVM_x86_64 += lib/x86_64/tdx/tdx_util.c
LIBKVM_x86_64 += lib/x86_64/tdx/td_boot.S
+LIBKVM_x86_64 += lib/x86_64/tdx/tdcall.S
+LIBKVM_x86_64 += lib/x86_64/tdx/tdx.c
+LIBKVM_x86_64 += lib/x86_64/tdx/test_util.c

LIBKVM_aarch64 += lib/aarch64/gic.c
LIBKVM_aarch64 += lib/aarch64/gic_v3.c
@@ -152,6 +155,7 @@ TEST_GEN_PROGS_x86_64 += set_memory_region_test
TEST_GEN_PROGS_x86_64 += steal_time
TEST_GEN_PROGS_x86_64 += kvm_binary_stats_test
TEST_GEN_PROGS_x86_64 += system_counter_offset_test
+TEST_GEN_PROGS_x86_64 += x86_64/tdx_vm_tests

# Compiled outputs used by test targets
TEST_GEN_PROGS_EXTENDED_x86_64 += x86_64/nx_huge_pages_test
diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
new file mode 100644
index 000000000000..78001bfec9c8
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Adapted from arch/x86/include/asm/shared/tdx.h */
+
+#ifndef SELFTESTS_TDX_TDCALL_H
+#define SELFTESTS_TDX_TDCALL_H
+
+#include <linux/bits.h>
+#include <linux/types.h>
+
+#define TDG_VP_VMCALL_INSTRUCTION_IO_READ 0
+#define TDG_VP_VMCALL_INSTRUCTION_IO_WRITE 1
+
+#define TDX_HCALL_HAS_OUTPUT BIT(0)
+
+#define TDX_HYPERCALL_STANDARD 0
+
+/*
+ * Used in __tdx_hypercall() to pass register values to the TDCALL
+ * instruction and to get them back when requesting services from the VMM.
+ *
+ * This is a software only structure and not part of the TDX module/VMM ABI.
+ */
+struct tdx_hypercall_args {
+ u64 r10;
+ u64 r11;
+ u64 r12;
+ u64 r13;
+ u64 r14;
+ u64 r15;
+};
+
+/* Used to request services from the VMM */
+u64 __tdx_hypercall(struct tdx_hypercall_args *args, unsigned long flags);
+
+#endif // SELFTESTS_TDX_TDCALL_H
diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
new file mode 100644
index 000000000000..a7161efe4ee2
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef SELFTEST_TDX_TDX_H
+#define SELFTEST_TDX_TDX_H
+
+#include <stdint.h>
+
+#define TDG_VP_VMCALL_INSTRUCTION_IO 30
+
+uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
+ uint64_t write, uint64_t *data);
+
+#endif // SELFTEST_TDX_TDX_H
diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
new file mode 100644
index 000000000000..b570b6d978ff
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
@@ -0,0 +1,52 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef SELFTEST_TDX_TEST_UTIL_H
+#define SELFTEST_TDX_TEST_UTIL_H
+
+#include <stdbool.h>
+
+#include "tdcall.h"
+
+#define TDX_TEST_SUCCESS_PORT 0x30
+#define TDX_TEST_SUCCESS_SIZE 4
+
+/**
+ * Assert that tdx_test_success() was called in the guest.
+ */
+#define TDX_TEST_ASSERT_SUCCESS(VCPU) \
+ (TEST_ASSERT( \
+ ((VCPU)->run->exit_reason == KVM_EXIT_IO) && \
+ ((VCPU)->run->io.port == TDX_TEST_SUCCESS_PORT) && \
+ ((VCPU)->run->io.size == TDX_TEST_SUCCESS_SIZE) && \
+ ((VCPU)->run->io.direction == \
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE), \
+ "Unexpected exit values while waiting for test completion: %u (%s) %d %d %d\n", \
+ (VCPU)->run->exit_reason, \
+ exit_reason_str((VCPU)->run->exit_reason), \
+ (VCPU)->run->io.port, (VCPU)->run->io.size, \
+ (VCPU)->run->io.direction))
+
+/**
+ * Run a test in a new process.
+ *
+ * Multiple tests may run from the same binary, and a failing TEST_ASSERT
+ * kills the whole process, which would prevent subsequent tests from
+ * running. run_in_new_process() runs a test in a new process context and
+ * waits for it to finish or fail, so a failing TEST_ASSERT cannot kill
+ * the main testing process.
+ */
+void run_in_new_process(void (*func)(void));
+
+/**
+ * Check whether TDX is supported by KVM.
+ */
+bool is_tdx_enabled(void);
+
+/**
+ * Report test success to userspace.
+ *
+ * Use TDX_TEST_ASSERT_SUCCESS() to assert that this function was called in the
+ * guest.
+ */
+void tdx_test_success(void);
+
+#endif // SELFTEST_TDX_TEST_UTIL_H
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdcall.S b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdcall.S
new file mode 100644
index 000000000000..df9c1ed4bb2d
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdcall.S
@@ -0,0 +1,90 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Adapted from arch/x86/coco/tdx/tdcall.S */
+
+#define TDX_HYPERCALL_r10 0 /* offsetof(struct tdx_hypercall_args, r10) */
+#define TDX_HYPERCALL_r11 8 /* offsetof(struct tdx_hypercall_args, r11) */
+#define TDX_HYPERCALL_r12 16 /* offsetof(struct tdx_hypercall_args, r12) */
+#define TDX_HYPERCALL_r13 24 /* offsetof(struct tdx_hypercall_args, r13) */
+#define TDX_HYPERCALL_r14 32 /* offsetof(struct tdx_hypercall_args, r14) */
+#define TDX_HYPERCALL_r15 40 /* offsetof(struct tdx_hypercall_args, r15) */
+
+/*
+ * Bitmasks of exposed registers (with VMM).
+ */
+#define TDX_R10 0x400
+#define TDX_R11 0x800
+#define TDX_R12 0x1000
+#define TDX_R13 0x2000
+#define TDX_R14 0x4000
+#define TDX_R15 0x8000
+
+#define TDX_HCALL_HAS_OUTPUT 0x1
+
+/*
+ * These registers are clobbered to hold arguments for each
+ * TDVMCALL. They are safe to expose to the VMM.
+ * Each bit in this mask represents a register ID. Bit field
+ * details can be found in TDX GHCI specification, section
+ * titled "TDCALL [TDG.VP.VMCALL] leaf".
+ */
+#define TDVMCALL_EXPOSE_REGS_MASK ( TDX_R10 | TDX_R11 | \
+ TDX_R12 | TDX_R13 | \
+ TDX_R14 | TDX_R15 )
+
+.code64
+.section .text
+
+.globl __tdx_hypercall
+.type __tdx_hypercall, @function
+__tdx_hypercall:
+ /* Set up stack frame */
+ push %rbp
+ movq %rsp, %rbp
+
+ /* Save callee-saved GPRs as mandated by the x86_64 ABI */
+ push %r15
+ push %r14
+ push %r13
+ push %r12
+
+ /* Mangle function call ABI into TDCALL ABI: */
+ /* Set TDCALL leaf ID (TDVMCALL (0)) in RAX */
+ xor %eax, %eax
+
+ /* Copy hypercall registers from arg struct: */
+ movq TDX_HYPERCALL_r10(%rdi), %r10
+ movq TDX_HYPERCALL_r11(%rdi), %r11
+ movq TDX_HYPERCALL_r12(%rdi), %r12
+ movq TDX_HYPERCALL_r13(%rdi), %r13
+ movq TDX_HYPERCALL_r14(%rdi), %r14
+ movq TDX_HYPERCALL_r15(%rdi), %r15
+
+ movl $TDVMCALL_EXPOSE_REGS_MASK, %ecx
+
+ tdcall
+
+ /* TDVMCALL leaf return code is in R10 */
+ movq %r10, %rax
+
+ /* Copy hypercall result registers to arg struct if needed */
+ testq $TDX_HCALL_HAS_OUTPUT, %rsi
+ jz .Lout
+
+ movq %r10, TDX_HYPERCALL_r10(%rdi)
+ movq %r11, TDX_HYPERCALL_r11(%rdi)
+ movq %r12, TDX_HYPERCALL_r12(%rdi)
+ movq %r13, TDX_HYPERCALL_r13(%rdi)
+ movq %r14, TDX_HYPERCALL_r14(%rdi)
+ movq %r15, TDX_HYPERCALL_r15(%rdi)
+.Lout:
+ /* Restore callee-saved GPRs as mandated by the x86_64 ABI */
+ pop %r12
+ pop %r13
+ pop %r14
+ pop %r15
+
+ pop %rbp
+ ret
+
+/* Disable executable stack */
+.section .note.GNU-stack,"",%progbits
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
new file mode 100644
index 000000000000..c2414523487a
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
@@ -0,0 +1,27 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#include "tdx/tdcall.h"
+#include "tdx/tdx.h"
+
+uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
+ uint64_t write, uint64_t *data)
+{
+ uint64_t ret;
+ struct tdx_hypercall_args args = {
+ .r10 = TDX_HYPERCALL_STANDARD,
+ .r11 = TDG_VP_VMCALL_INSTRUCTION_IO,
+ .r12 = size,
+ .r13 = write,
+ .r14 = port,
+ };
+
+ if (write)
+ args.r15 = *data;
+
+ ret = __tdx_hypercall(&args, write ? 0 : TDX_HCALL_HAS_OUTPUT);
+
+ if (!write)
+ *data = args.r11;
+
+ return ret;
+}
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
index 063ff486fb86..b302060049d5 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
@@ -224,6 +224,7 @@ static void tdx_enable_capabilities(struct kvm_vm *vm)
KVM_X2APIC_API_USE_32BIT_IDS |
KVM_X2APIC_API_DISABLE_BROADCAST_QUIRK);
vm_enable_cap(vm, KVM_CAP_SPLIT_IRQCHIP, 24);
+ vm_enable_cap(vm, KVM_CAP_MAX_VCPUS, 512);
}

static void tdx_configure_memory_encryption(struct kvm_vm *vm)
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c
new file mode 100644
index 000000000000..6905d0ca3877
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c
@@ -0,0 +1,34 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#include <stdbool.h>
+#include <stdint.h>
+#include <stdlib.h>
+#include <sys/wait.h>
+#include <unistd.h>
+
+#include "kvm_util_base.h"
+#include "tdx/tdx.h"
+#include "tdx/test_util.h"
+
+void run_in_new_process(void (*func)(void))
+{
+ if (fork() == 0) {
+ func();
+ exit(0);
+ }
+ wait(NULL);
+}
+
+bool is_tdx_enabled(void)
+{
+ return !!(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_TDX_VM));
+}
+
+void tdx_test_success(void)
+{
+ uint64_t code = 0;
+
+ tdg_vp_vmcall_instruction_io(TDX_TEST_SUCCESS_PORT,
+ TDX_TEST_SUCCESS_SIZE,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE, &code);
+}
diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
new file mode 100644
index 000000000000..a18d1c9d6026
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
@@ -0,0 +1,45 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#include <signal.h>
+#include "kvm_util_base.h"
+#include "tdx/tdx_util.h"
+#include "tdx/test_util.h"
+#include "test_util.h"
+
+void guest_code_lifecycle(void)
+{
+ tdx_test_success();
+}
+
+void verify_td_lifecycle(void)
+{
+ struct kvm_vm *vm;
+ struct kvm_vcpu *vcpu;
+
+ vm = td_create();
+ td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
+ vcpu = td_vcpu_add(vm, 0, guest_code_lifecycle);
+ td_finalize(vm);
+
+ printf("Verifying TD lifecycle:\n");
+
+ vcpu_run(vcpu);
+ TDX_TEST_ASSERT_SUCCESS(vcpu);
+
+ kvm_vm_free(vm);
+ printf("\t ... PASSED\n");
+}
+
+int main(int argc, char **argv)
+{
+ setbuf(stdout, NULL);
+
+ if (!is_tdx_enabled()) {
+ print_skip("TDX is not supported by KVM");
+ exit(KSFT_SKIP);
+ }
+
+ run_in_new_process(&verify_td_lifecycle);
+
+ return 0;
+}
--
2.43.0.472.g3155946c3a-goog

2023-12-12 20:48:12

by Sagi Shahar

Subject: [RFC PATCH v5 12/29] KVM: selftests: TDX: Add basic get_td_vmcall_info test

The test calls get_td_vmcall_info from the guest and verifies the
expected return values.

Signed-off-by: Sagi Shahar <[email protected]>
Signed-off-by: Ackerley Tng <[email protected]>
Signed-off-by: Ryan Afranji <[email protected]>
---
.../selftests/kvm/include/x86_64/tdx/tdx.h | 3 +
.../kvm/include/x86_64/tdx/test_util.h | 27 +++++++
.../selftests/kvm/lib/x86_64/tdx/tdx.c | 23 ++++++
.../selftests/kvm/lib/x86_64/tdx/test_util.c | 46 +++++++++++
.../selftests/kvm/x86_64/tdx_vm_tests.c | 80 +++++++++++++++++++
5 files changed, 179 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
index 1340c1070002..63788012bf94 100644
--- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
@@ -5,6 +5,7 @@
#include <stdint.h>
#include "kvm_util_base.h"

+#define TDG_VP_VMCALL_GET_TD_VM_CALL_INFO 0x10000
#define TDG_VP_VMCALL_REPORT_FATAL_ERROR 0x10003

#define TDG_VP_VMCALL_INSTRUCTION_IO 30
@@ -12,5 +13,7 @@ void handle_userspace_tdg_vp_vmcall_exit(struct kvm_vcpu *vcpu);
uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
uint64_t write, uint64_t *data);
void tdg_vp_vmcall_report_fatal_error(uint64_t error_code, uint64_t data_gpa);
+uint64_t tdg_vp_vmcall_get_td_vmcall_info(uint64_t *r11, uint64_t *r12,
+ uint64_t *r13, uint64_t *r14);

#endif // SELFTEST_TDX_TDX_H
diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
index af0ddbfe8d71..8a9b6a1bec3e 100644
--- a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
@@ -4,6 +4,7 @@

#include <stdbool.h>

+#include "kvm_util_base.h"
#include "tdcall.h"

#define TDX_TEST_SUCCESS_PORT 0x30
@@ -111,4 +112,30 @@ void tdx_test_fatal_with_data(uint64_t error_code, uint64_t data_gpa);
*/
uint64_t tdx_test_report_to_user_space(uint32_t data);

+/**
+ * Report a 64-bit value from the guest to user space using a TDG.VP.VMCALL
+ * <Instruction.IO> call.
+ *
+ * Data is sent to the host in two calls. The LSB is sent (and must be read) first.
+ */
+uint64_t tdx_test_send_64bit(uint64_t port, uint64_t data);
+
+/**
+ * Report a 64-bit value from the guest to user space using a TDG.VP.VMCALL
+ * <Instruction.IO> call. Data is reported on port TDX_TEST_REPORT_PORT.
+ */
+uint64_t tdx_test_report_64bit_to_user_space(uint64_t data);
+
+/**
+ * Read a 64-bit value from the guest in user space, sent using
+ * tdx_test_send_64bit().
+ */
+uint64_t tdx_test_read_64bit(struct kvm_vcpu *vcpu, uint64_t port);
+
+/**
+ * Read a 64-bit value from the guest in user space, sent using
+ * tdx_test_report_64bit_to_user_space().
+ */
+uint64_t tdx_test_read_64bit_report_from_guest(struct kvm_vcpu *vcpu);
+
#endif // SELFTEST_TDX_TEST_UTIL_H
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
index b854c3aa34ff..e5a9e13c62e2 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
@@ -64,3 +64,26 @@ void tdg_vp_vmcall_report_fatal_error(uint64_t error_code, uint64_t data_gpa)

__tdx_hypercall(&args, 0);
}
+
+uint64_t tdg_vp_vmcall_get_td_vmcall_info(uint64_t *r11, uint64_t *r12,
+ uint64_t *r13, uint64_t *r14)
+{
+ uint64_t ret;
+ struct tdx_hypercall_args args = {
+ .r11 = TDG_VP_VMCALL_GET_TD_VM_CALL_INFO,
+ .r12 = 0,
+ };
+
+ ret = __tdx_hypercall(&args, TDX_HCALL_HAS_OUTPUT);
+
+ if (r11)
+ *r11 = args.r11;
+ if (r12)
+ *r12 = args.r12;
+ if (r13)
+ *r13 = args.r13;
+ if (r14)
+ *r14 = args.r14;
+
+ return ret;
+}
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c
index 55c5a1e634df..3ae651cd5fac 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c
@@ -7,6 +7,7 @@
#include <unistd.h>

#include "kvm_util_base.h"
+#include "tdx/tdcall.h"
#include "tdx/tdx.h"
#include "tdx/test_util.h"

@@ -53,3 +54,48 @@ uint64_t tdx_test_report_to_user_space(uint32_t data)
TDG_VP_VMCALL_INSTRUCTION_IO_WRITE,
&data_64);
}
+
+uint64_t tdx_test_send_64bit(uint64_t port, uint64_t data)
+{
+ uint64_t err;
+ uint64_t data_lo = data & 0xFFFFFFFF;
+ uint64_t data_hi = (data >> 32) & 0xFFFFFFFF;
+
+ err = tdg_vp_vmcall_instruction_io(port, 4,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE,
+ &data_lo);
+ if (err)
+ return err;
+
+ return tdg_vp_vmcall_instruction_io(port, 4,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE,
+ &data_hi);
+}
+
+uint64_t tdx_test_report_64bit_to_user_space(uint64_t data)
+{
+ return tdx_test_send_64bit(TDX_TEST_REPORT_PORT, data);
+}
+
+uint64_t tdx_test_read_64bit(struct kvm_vcpu *vcpu, uint64_t port)
+{
+ uint32_t lo, hi;
+ uint64_t res;
+
+ TDX_TEST_ASSERT_IO(vcpu, port, 4, TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
+ lo = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
+
+ vcpu_run(vcpu);
+
+ TDX_TEST_ASSERT_IO(vcpu, port, 4, TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
+ hi = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
+
+ res = hi;
+ res = (res << 32) | lo;
+ return res;
+}
+
+uint64_t tdx_test_read_64bit_report_from_guest(struct kvm_vcpu *vcpu)
+{
+ return tdx_test_read_64bit(vcpu, TDX_TEST_REPORT_PORT);
+}
diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
index 1b30e6f5a569..569c8fb0a59f 100644
--- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
+++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
@@ -260,6 +260,85 @@ void verify_td_cpuid(void)
printf("\t ... PASSED\n");
}

+/*
+ * Verifies get_td_vmcall_info functionality.
+ */
+void guest_code_get_td_vmcall_info(void)
+{
+ uint64_t err;
+ uint64_t r11, r12, r13, r14;
+
+ err = tdg_vp_vmcall_get_td_vmcall_info(&r11, &r12, &r13, &r14);
+ if (err)
+ tdx_test_fatal(err);
+
+ err = tdx_test_report_64bit_to_user_space(r11);
+ if (err)
+ tdx_test_fatal(err);
+
+ err = tdx_test_report_64bit_to_user_space(r12);
+ if (err)
+ tdx_test_fatal(err);
+
+ err = tdx_test_report_64bit_to_user_space(r13);
+ if (err)
+ tdx_test_fatal(err);
+
+ err = tdx_test_report_64bit_to_user_space(r14);
+ if (err)
+ tdx_test_fatal(err);
+
+ tdx_test_success();
+}
+
+void verify_get_td_vmcall_info(void)
+{
+ struct kvm_vm *vm;
+ struct kvm_vcpu *vcpu;
+
+ uint64_t r11, r12, r13, r14;
+
+ vm = td_create();
+ td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
+ vcpu = td_vcpu_add(vm, 0, guest_code_get_td_vmcall_info);
+ td_finalize(vm);
+
+ printf("Verifying TD get vmcall info:\n");
+
+ /* Wait for guest to report r11 value */
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ r11 = tdx_test_read_64bit_report_from_guest(vcpu);
+
+ /* Wait for guest to report r12 value */
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ r12 = tdx_test_read_64bit_report_from_guest(vcpu);
+
+ /* Wait for guest to report r13 value */
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ r13 = tdx_test_read_64bit_report_from_guest(vcpu);
+
+ /* Wait for guest to report r14 value */
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ r14 = tdx_test_read_64bit_report_from_guest(vcpu);
+
+ TEST_ASSERT_EQ(r11, 0);
+ TEST_ASSERT_EQ(r12, 0);
+ TEST_ASSERT_EQ(r13, 0);
+ TEST_ASSERT_EQ(r14, 0);
+
+ /* Wait for guest to complete execution */
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_SUCCESS(vcpu);
+
+ kvm_vm_free(vm);
+ printf("\t ... PASSED\n");
+}
+
int main(int argc, char **argv)
{
setbuf(stdout, NULL);
@@ -273,6 +352,7 @@ int main(int argc, char **argv)
run_in_new_process(&verify_report_fatal_error);
run_in_new_process(&verify_td_ioexit);
run_in_new_process(&verify_td_cpuid);
+ run_in_new_process(&verify_get_td_vmcall_info);

return 0;
}
--
2.43.0.472.g3155946c3a-goog

2023-12-12 20:48:30

by Sagi Shahar

Subject: [RFC PATCH v5 13/29] KVM: selftests: TDX: Add TDX IO writes test

The test verifies IO writes of various sizes from the guest to the host.

Signed-off-by: Sagi Shahar <[email protected]>
Signed-off-by: Ackerley Tng <[email protected]>
Signed-off-by: Ryan Afranji <[email protected]>
---
.../selftests/kvm/include/x86_64/tdx/tdcall.h | 3 +
.../selftests/kvm/x86_64/tdx_vm_tests.c | 91 +++++++++++++++++++
2 files changed, 94 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
index 78001bfec9c8..b5e94b7c48fa 100644
--- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
@@ -10,6 +10,9 @@
#define TDG_VP_VMCALL_INSTRUCTION_IO_READ 0
#define TDG_VP_VMCALL_INSTRUCTION_IO_WRITE 1

+#define TDG_VP_VMCALL_SUCCESS 0x0000000000000000
+#define TDG_VP_VMCALL_INVALID_OPERAND 0x8000000000000000
+
#define TDX_HCALL_HAS_OUTPUT BIT(0)

#define TDX_HYPERCALL_STANDARD 0
diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
index 569c8fb0a59f..a2b3e1aef151 100644
--- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
+++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
@@ -339,6 +339,96 @@ void verify_get_td_vmcall_info(void)
printf("\t ... PASSED\n");
}

+#define TDX_IO_WRITES_TEST_PORT 0x51
+
+/*
+ * Verifies IO functionality by writing values of different sizes
+ * to the host.
+ */
+void guest_io_writes(void)
+{
+ uint64_t byte_1 = 0xAB;
+ uint64_t byte_2 = 0xABCD;
+ uint64_t byte_4 = 0xFFABCDEF;
+ uint64_t ret;
+
+ ret = tdg_vp_vmcall_instruction_io(TDX_IO_WRITES_TEST_PORT, 1,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE,
+ &byte_1);
+ if (ret)
+ tdx_test_fatal(ret);
+
+ ret = tdg_vp_vmcall_instruction_io(TDX_IO_WRITES_TEST_PORT, 2,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE,
+ &byte_2);
+ if (ret)
+ tdx_test_fatal(ret);
+
+ ret = tdg_vp_vmcall_instruction_io(TDX_IO_WRITES_TEST_PORT, 4,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE,
+ &byte_4);
+ if (ret)
+ tdx_test_fatal(ret);
+
+ // Write an invalid number of bytes.
+ ret = tdg_vp_vmcall_instruction_io(TDX_IO_WRITES_TEST_PORT, 5,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE,
+ &byte_4);
+ if (ret)
+ tdx_test_fatal(ret);
+
+ tdx_test_success();
+}
+
+void verify_guest_writes(void)
+{
+ struct kvm_vm *vm;
+ struct kvm_vcpu *vcpu;
+
+ uint8_t byte_1;
+ uint16_t byte_2;
+ uint32_t byte_4;
+
+ vm = td_create();
+ td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
+ vcpu = td_vcpu_add(vm, 0, guest_io_writes);
+ td_finalize(vm);
+
+ printf("Verifying guest writes:\n");
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_IO(vcpu, TDX_IO_WRITES_TEST_PORT, 1,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
+ byte_1 = *(uint8_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_IO(vcpu, TDX_IO_WRITES_TEST_PORT, 2,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
+ byte_2 = *(uint16_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_IO(vcpu, TDX_IO_WRITES_TEST_PORT, 4,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
+ byte_4 = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
+
+ TEST_ASSERT_EQ(byte_1, 0xAB);
+ TEST_ASSERT_EQ(byte_2, 0xABCD);
+ TEST_ASSERT_EQ(byte_4, 0xFFABCDEF);
+
+ td_vcpu_run(vcpu);
+ TEST_ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_SYSTEM_EVENT);
+ TEST_ASSERT_EQ(vcpu->run->system_event.data[1], TDG_VP_VMCALL_INVALID_OPERAND);
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_ASSERT_SUCCESS(vcpu);
+
+ kvm_vm_free(vm);
+ printf("\t ... PASSED\n");
+}
+
int main(int argc, char **argv)
{
setbuf(stdout, NULL);
@@ -353,6 +443,7 @@ int main(int argc, char **argv)
run_in_new_process(&verify_td_ioexit);
run_in_new_process(&verify_td_cpuid);
run_in_new_process(&verify_get_td_vmcall_info);
+ run_in_new_process(&verify_guest_writes);

return 0;
}
--
2.43.0.472.g3155946c3a-goog

2023-12-12 20:48:32

by Sagi Shahar

Subject: [RFC PATCH v5 11/29] KVM: selftests: TDX: Add basic TDX CPUID test

The test reads CPUID values from inside a TD VM and compares them
to the expected values.

The test targets CPUID values which are virtualized as "As Configured",
"As Configured (if Native)", "Calculated", "Fixed" and "Native"
according to the TDX spec.

Signed-off-by: Sagi Shahar <[email protected]>
Signed-off-by: Ackerley Tng <[email protected]>
Signed-off-by: Ryan Afranji <[email protected]>
---
.../kvm/include/x86_64/tdx/test_util.h | 9 ++
.../selftests/kvm/lib/x86_64/tdx/test_util.c | 11 ++
.../selftests/kvm/x86_64/tdx_vm_tests.c | 106 ++++++++++++++++++
3 files changed, 126 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
index 95a5d5be7f0b..af0ddbfe8d71 100644
--- a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
@@ -9,6 +9,9 @@
#define TDX_TEST_SUCCESS_PORT 0x30
#define TDX_TEST_SUCCESS_SIZE 4

+#define TDX_TEST_REPORT_PORT 0x31
+#define TDX_TEST_REPORT_SIZE 4
+
/**
* Assert that some IO operation involving tdg_vp_vmcall_instruction_io() was
* called in the guest.
@@ -102,4 +105,10 @@ void tdx_test_fatal(uint64_t error_code);
*/
void tdx_test_fatal_with_data(uint64_t error_code, uint64_t data_gpa);

+/**
+ * Report a 32-bit value from the guest to user space using a TDG.VP.VMCALL
+ * <Instruction.IO> call. Data is reported on port TDX_TEST_REPORT_PORT.
+ */
+uint64_t tdx_test_report_to_user_space(uint32_t data);
+
#endif // SELFTEST_TDX_TEST_UTIL_H
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c
index 7f3cd8089cea..55c5a1e634df 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c
@@ -42,3 +42,14 @@ void tdx_test_fatal(uint64_t error_code)
{
tdx_test_fatal_with_data(error_code, 0);
}
+
+uint64_t tdx_test_report_to_user_space(uint32_t data)
+{
+ /* Upcast data to match tdg_vp_vmcall_instruction_io signature */
+ uint64_t data_64 = data;
+
+ return tdg_vp_vmcall_instruction_io(TDX_TEST_REPORT_PORT,
+ TDX_TEST_REPORT_SIZE,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE,
+ &data_64);
+}
diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
index 75467c407ca7..1b30e6f5a569 100644
--- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
+++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
@@ -2,6 +2,7 @@

#include <signal.h>
#include "kvm_util_base.h"
+#include "processor.h"
#include "tdx/tdcall.h"
#include "tdx/tdx.h"
#include "tdx/tdx_util.h"
@@ -155,6 +156,110 @@ void verify_td_ioexit(void)
printf("\t ... PASSED\n");
}

+/*
+ * Verifies CPUID functionality by reading CPUID values in guest. The guest
+ * will then send the values to userspace using an IO write to be checked
+ * against the expected values.
+ */
+void guest_code_cpuid(void)
+{
+ uint64_t err;
+ uint32_t ebx, ecx;
+
+ /* Read CPUID leaf 0x1 */
+ asm volatile (
+ "cpuid"
+ : "=b" (ebx), "=c" (ecx)
+ : "a" (0x1)
+ : "edx");
+
+ err = tdx_test_report_to_user_space(ebx);
+ if (err)
+ tdx_test_fatal(err);
+
+ err = tdx_test_report_to_user_space(ecx);
+ if (err)
+ tdx_test_fatal(err);
+
+ tdx_test_success();
+}
+
+void verify_td_cpuid(void)
+{
+ struct kvm_vm *vm;
+ struct kvm_vcpu *vcpu;
+
+ uint32_t ebx, ecx;
+ const struct kvm_cpuid_entry2 *cpuid_entry;
+ uint32_t guest_clflush_line_size;
+ uint32_t guest_max_addressable_ids, host_max_addressable_ids;
+ uint32_t guest_sse3_enabled;
+ uint32_t guest_fma_enabled;
+ uint32_t guest_initial_apic_id;
+
+ vm = td_create();
+ td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
+ vcpu = td_vcpu_add(vm, 0, guest_code_cpuid);
+ td_finalize(vm);
+
+ printf("Verifying TD CPUID:\n");
+
+ /* Wait for guest to report ebx value */
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_IO(vcpu, TDX_TEST_REPORT_PORT, 4,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
+ ebx = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
+
+ /* Wait for guest to report either ecx value or error */
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_IO(vcpu, TDX_TEST_REPORT_PORT, 4,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
+ ecx = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
+
+ /* Wait for guest to complete execution */
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_SUCCESS(vcpu);
+
+ /* Verify the CPUID values we got from the guest. */
+ printf("\t ... Verifying CPUID values from guest\n");
+
+ /* Get KVM CPUIDs for reference */
+ cpuid_entry = get_cpuid_entry(kvm_get_supported_cpuid(), 1, 0);
+ TEST_ASSERT(cpuid_entry, "CPUID entry missing");
+
+ host_max_addressable_ids = (cpuid_entry->ebx >> 16) & 0xFF;
+
+ guest_sse3_enabled = ecx & 0x1; // Native
+ guest_clflush_line_size = (ebx >> 8) & 0xFF; // Fixed
+ guest_max_addressable_ids = (ebx >> 16) & 0xFF; // As Configured
+ guest_fma_enabled = (ecx >> 12) & 0x1; // As Configured (if Native)
+ guest_initial_apic_id = (ebx >> 24) & 0xFF; // Calculated
+
+ TEST_ASSERT_EQ(guest_sse3_enabled, 1);
+ TEST_ASSERT_EQ(guest_clflush_line_size, 8);
+ TEST_ASSERT_EQ(guest_max_addressable_ids, host_max_addressable_ids);
+
+ /* TODO: This only tests the native value. To properly test
+ * "As Configured (if Native)" we need to override this value
+ * in the TD params
+ */
+ TEST_ASSERT_EQ(guest_fma_enabled, 1);
+
+ /* TODO: guest_initial_apic_id is calculated based on the number of
+ * VCPUs in the TD. From the spec: "Virtual CPU index, starting from 0
+ * and allocated sequentially on each successful TDH.VP.INIT"
+ * To test non-trivial values we either need a TD with multiple VCPUs
+ * or to pick a different calculated value.
+ */
+ TEST_ASSERT_EQ(guest_initial_apic_id, 0);
+
+ kvm_vm_free(vm);
+ printf("\t ... PASSED\n");
+}
+
int main(int argc, char **argv)
{
setbuf(stdout, NULL);
@@ -167,6 +272,7 @@ int main(int argc, char **argv)
run_in_new_process(&verify_td_lifecycle);
run_in_new_process(&verify_report_fatal_error);
run_in_new_process(&verify_td_ioexit);
+ run_in_new_process(&verify_td_cpuid);

return 0;
}
--
2.43.0.472.g3155946c3a-goog

2023-12-12 20:48:38

by Sagi Shahar

Subject: [RFC PATCH v5 15/29] KVM: selftests: TDX: Add TDX MSR read/write tests

The test verifies reads and writes of MSRs with different access
levels.

Signed-off-by: Sagi Shahar <[email protected]>
Signed-off-by: Ackerley Tng <[email protected]>
Signed-off-by: Ryan Afranji <[email protected]>
---
.../selftests/kvm/include/x86_64/tdx/tdx.h | 5 +
.../selftests/kvm/lib/x86_64/tdx/tdx.c | 27 +++
.../selftests/kvm/x86_64/tdx_vm_tests.c | 209 ++++++++++++++++++
3 files changed, 241 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
index 63788012bf94..85ba6aab79a7 100644
--- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
@@ -9,11 +9,16 @@
#define TDG_VP_VMCALL_REPORT_FATAL_ERROR 0x10003

#define TDG_VP_VMCALL_INSTRUCTION_IO 30
+#define TDG_VP_VMCALL_INSTRUCTION_RDMSR 31
+#define TDG_VP_VMCALL_INSTRUCTION_WRMSR 32
+
void handle_userspace_tdg_vp_vmcall_exit(struct kvm_vcpu *vcpu);
uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
uint64_t write, uint64_t *data);
void tdg_vp_vmcall_report_fatal_error(uint64_t error_code, uint64_t data_gpa);
uint64_t tdg_vp_vmcall_get_td_vmcall_info(uint64_t *r11, uint64_t *r12,
uint64_t *r13, uint64_t *r14);
+uint64_t tdg_vp_vmcall_instruction_rdmsr(uint64_t index, uint64_t *ret_value);
+uint64_t tdg_vp_vmcall_instruction_wrmsr(uint64_t index, uint64_t value);

#endif // SELFTEST_TDX_TDX_H
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
index e5a9e13c62e2..88ea6f2a6469 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
@@ -87,3 +87,30 @@ uint64_t tdg_vp_vmcall_get_td_vmcall_info(uint64_t *r11, uint64_t *r12,

return ret;
}
+
+uint64_t tdg_vp_vmcall_instruction_rdmsr(uint64_t index, uint64_t *ret_value)
+{
+ uint64_t ret;
+ struct tdx_hypercall_args args = {
+ .r11 = TDG_VP_VMCALL_INSTRUCTION_RDMSR,
+ .r12 = index,
+ };
+
+ ret = __tdx_hypercall(&args, TDX_HCALL_HAS_OUTPUT);
+
+ if (ret_value)
+ *ret_value = args.r11;
+
+ return ret;
+}
+
+uint64_t tdg_vp_vmcall_instruction_wrmsr(uint64_t index, uint64_t value)
+{
+ struct tdx_hypercall_args args = {
+ .r11 = TDG_VP_VMCALL_INSTRUCTION_WRMSR,
+ .r12 = index,
+ .r13 = value,
+ };
+
+ return __tdx_hypercall(&args, 0);
+}
diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
index 699cba36e9ce..5db3701cc6d9 100644
--- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
+++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
@@ -515,6 +515,213 @@ void verify_guest_reads(void)
printf("\t ... PASSED\n");
}

+/*
+ * Define a filter which denies all MSR access except the following:
+ * MSR_X2APIC_APIC_ICR: Allow read/write access (allowed by default)
+ * MSR_IA32_MISC_ENABLE: Allow read access
+ * MSR_IA32_POWER_CTL: Allow write access
+ */
+#define MSR_X2APIC_APIC_ICR 0x830
+static u64 tdx_msr_test_allow_bits = 0xFFFFFFFFFFFFFFFF;
+struct kvm_msr_filter tdx_msr_test_filter = {
+ .flags = KVM_MSR_FILTER_DEFAULT_DENY,
+ .ranges = {
+ {
+ .flags = KVM_MSR_FILTER_READ,
+ .nmsrs = 1,
+ .base = MSR_IA32_MISC_ENABLE,
+ .bitmap = (uint8_t *)&tdx_msr_test_allow_bits,
+ }, {
+ .flags = KVM_MSR_FILTER_WRITE,
+ .nmsrs = 1,
+ .base = MSR_IA32_POWER_CTL,
+ .bitmap = (uint8_t *)&tdx_msr_test_allow_bits,
+ },
+ },
+};
+
+/*
+ * Verifies MSR read functionality.
+ */
+void guest_msr_read(void)
+{
+ uint64_t data;
+ uint64_t ret;
+
+ ret = tdg_vp_vmcall_instruction_rdmsr(MSR_X2APIC_APIC_ICR, &data);
+ if (ret)
+ tdx_test_fatal(ret);
+
+ ret = tdx_test_report_64bit_to_user_space(data);
+ if (ret)
+ tdx_test_fatal(ret);
+
+ ret = tdg_vp_vmcall_instruction_rdmsr(MSR_IA32_MISC_ENABLE, &data);
+ if (ret)
+ tdx_test_fatal(ret);
+
+ ret = tdx_test_report_64bit_to_user_space(data);
+ if (ret)
+ tdx_test_fatal(ret);
+
+ /* We expect this call to fail since MSR_IA32_POWER_CTL is write-only */
+ ret = tdg_vp_vmcall_instruction_rdmsr(MSR_IA32_POWER_CTL, &data);
+ if (ret) {
+ ret = tdx_test_report_64bit_to_user_space(ret);
+ if (ret)
+ tdx_test_fatal(ret);
+ } else {
+ tdx_test_fatal(-99);
+ }
+
+ tdx_test_success();
+}
+
+void verify_guest_msr_reads(void)
+{
+ struct kvm_vm *vm;
+ struct kvm_vcpu *vcpu;
+
+ uint64_t data;
+ int ret;
+
+ vm = td_create();
+ td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
+
+ /*
+ * Set explicit MSR filter map to control access to the MSR registers
+ * used in the test.
+ */
+ printf("\t ... Setting test MSR filter\n");
+ ret = kvm_check_cap(KVM_CAP_X86_USER_SPACE_MSR);
+ TEST_ASSERT(ret, "KVM_CAP_X86_USER_SPACE_MSR is unavailable");
+ vm_enable_cap(vm, KVM_CAP_X86_USER_SPACE_MSR, KVM_MSR_EXIT_REASON_FILTER);
+
+ ret = kvm_check_cap(KVM_CAP_X86_MSR_FILTER);
+ TEST_ASSERT(ret, "KVM_CAP_X86_MSR_FILTER is unavailable");
+
+ ret = ioctl(vm->fd, KVM_X86_SET_MSR_FILTER, &tdx_msr_test_filter);
+ TEST_ASSERT(ret == 0,
+ "KVM_X86_SET_MSR_FILTER failed, ret: %i errno: %i (%s)",
+ ret, errno, strerror(errno));
+
+ vcpu = td_vcpu_add(vm, 0, guest_msr_read);
+ td_finalize(vm);
+
+ printf("Verifying guest msr reads:\n");
+
+ printf("\t ... Setting test MSR values\n");
+ /* Write arbitrary to the MSRs. */
+ vcpu_set_msr(vcpu, MSR_X2APIC_APIC_ICR, 4);
+ vcpu_set_msr(vcpu, MSR_IA32_MISC_ENABLE, 5);
+ vcpu_set_msr(vcpu, MSR_IA32_POWER_CTL, 6);
+
+ printf("\t ... Running guest\n");
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ data = tdx_test_read_64bit_report_from_guest(vcpu);
+ TEST_ASSERT_EQ(data, 4);
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ data = tdx_test_read_64bit_report_from_guest(vcpu);
+ TEST_ASSERT_EQ(data, 5);
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ data = tdx_test_read_64bit_report_from_guest(vcpu);
+ TEST_ASSERT_EQ(data, TDG_VP_VMCALL_INVALID_OPERAND);
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_ASSERT_SUCCESS(vcpu);
+
+ kvm_vm_free(vm);
+ printf("\t ... PASSED\n");
+}
+
+/*
+ * Verifies MSR write functionality.
+ */
+void guest_msr_write(void)
+{
+ uint64_t ret;
+
+ ret = tdg_vp_vmcall_instruction_wrmsr(MSR_X2APIC_APIC_ICR, 4);
+ if (ret)
+ tdx_test_fatal(ret);
+
+ /* We expect this call to fail since MSR_IA32_MISC_ENABLE is read-only */
+ ret = tdg_vp_vmcall_instruction_wrmsr(MSR_IA32_MISC_ENABLE, 5);
+ if (ret) {
+ ret = tdx_test_report_64bit_to_user_space(ret);
+ if (ret)
+ tdx_test_fatal(ret);
+ } else {
+ tdx_test_fatal(-99);
+ }
+
+
+ ret = tdg_vp_vmcall_instruction_wrmsr(MSR_IA32_POWER_CTL, 6);
+ if (ret)
+ tdx_test_fatal(ret);
+
+ tdx_test_success();
+}
+
+void verify_guest_msr_writes(void)
+{
+ struct kvm_vcpu *vcpu;
+ struct kvm_vm *vm;
+
+ uint64_t data;
+ int ret;
+
+ vm = td_create();
+ td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
+
+ /*
+ * Set explicit MSR filter map to control access to the MSR registers
+ * used in the test.
+ */
+ printf("\t ... Setting test MSR filter\n");
+ ret = kvm_check_cap(KVM_CAP_X86_USER_SPACE_MSR);
+ TEST_ASSERT(ret, "KVM_CAP_X86_USER_SPACE_MSR is unavailable");
+ vm_enable_cap(vm, KVM_CAP_X86_USER_SPACE_MSR, KVM_MSR_EXIT_REASON_FILTER);
+
+ ret = kvm_check_cap(KVM_CAP_X86_MSR_FILTER);
+ TEST_ASSERT(ret, "KVM_CAP_X86_MSR_FILTER is unavailable");
+
+ ret = ioctl(vm->fd, KVM_X86_SET_MSR_FILTER, &tdx_msr_test_filter);
+ TEST_ASSERT(ret == 0,
+ "KVM_X86_SET_MSR_FILTER failed, ret: %i errno: %i (%s)",
+ ret, errno, strerror(errno));
+
+ vcpu = td_vcpu_add(vm, 0, guest_msr_write);
+ td_finalize(vm);
+
+ printf("Verifying guest msr writes:\n");
+
+ printf("\t ... Running guest\n");
+ /* Only the write to MSR_IA32_MISC_ENABLE should trigger an exit */
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ data = tdx_test_read_64bit_report_from_guest(vcpu);
+ TEST_ASSERT_EQ(data, TDG_VP_VMCALL_INVALID_OPERAND);
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_ASSERT_SUCCESS(vcpu);
+
+ printf("\t ... Verifying MSR values written by guest\n");
+
+ TEST_ASSERT_EQ(vcpu_get_msr(vcpu, MSR_X2APIC_APIC_ICR), 4);
+ TEST_ASSERT_EQ(vcpu_get_msr(vcpu, MSR_IA32_MISC_ENABLE), 0x1800);
+ TEST_ASSERT_EQ(vcpu_get_msr(vcpu, MSR_IA32_POWER_CTL), 6);
+
+ kvm_vm_free(vm);
+ printf("\t ... PASSED\n");
+}
+
+
int main(int argc, char **argv)
{
setbuf(stdout, NULL);
@@ -531,6 +738,8 @@ int main(int argc, char **argv)
run_in_new_process(&verify_get_td_vmcall_info);
run_in_new_process(&verify_guest_writes);
run_in_new_process(&verify_guest_reads);
+ run_in_new_process(&verify_guest_msr_writes);
+ run_in_new_process(&verify_guest_msr_reads);

return 0;
}
--
2.43.0.472.g3155946c3a-goog

2023-12-12 20:48:42

by Sagi Shahar

Subject: [RFC PATCH v5 14/29] KVM: selftests: TDX: Add TDX IO reads test

The test verifies IO reads of various sizes from the host to the guest.

Signed-off-by: Sagi Shahar <[email protected]>
Signed-off-by: Ackerley Tng <[email protected]>
Signed-off-by: Ryan Afranji <[email protected]>
---
.../selftests/kvm/x86_64/tdx_vm_tests.c | 87 +++++++++++++++++++
1 file changed, 87 insertions(+)

diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
index a2b3e1aef151..699cba36e9ce 100644
--- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
+++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
@@ -429,6 +429,92 @@ void verify_guest_writes(void)
printf("\t ... PASSED\n");
}

+#define TDX_IO_READS_TEST_PORT 0x52
+
+/*
+ * Verifies IO functionality by reading values of different sizes
+ * from the host.
+ */
+void guest_io_reads(void)
+{
+ uint64_t data;
+ uint64_t ret;
+
+ ret = tdg_vp_vmcall_instruction_io(TDX_IO_READS_TEST_PORT, 1,
+ TDG_VP_VMCALL_INSTRUCTION_IO_READ,
+ &data);
+ if (ret)
+ tdx_test_fatal(ret);
+ if (data != 0xAB)
+ tdx_test_fatal(1);
+
+ ret = tdg_vp_vmcall_instruction_io(TDX_IO_READS_TEST_PORT, 2,
+ TDG_VP_VMCALL_INSTRUCTION_IO_READ,
+ &data);
+ if (ret)
+ tdx_test_fatal(ret);
+ if (data != 0xABCD)
+ tdx_test_fatal(2);
+
+ ret = tdg_vp_vmcall_instruction_io(TDX_IO_READS_TEST_PORT, 4,
+ TDG_VP_VMCALL_INSTRUCTION_IO_READ,
+ &data);
+ if (ret)
+ tdx_test_fatal(ret);
+ if (data != 0xFFABCDEF)
+ tdx_test_fatal(4);
+
+ // Read an invalid number of bytes.
+ ret = tdg_vp_vmcall_instruction_io(TDX_IO_READS_TEST_PORT, 5,
+ TDG_VP_VMCALL_INSTRUCTION_IO_READ,
+ &data);
+ if (ret)
+ tdx_test_fatal(ret);
+
+ tdx_test_success();
+}
+
+void verify_guest_reads(void)
+{
+ struct kvm_vm *vm;
+ struct kvm_vcpu *vcpu;
+
+ vm = td_create();
+ td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
+ vcpu = td_vcpu_add(vm, 0, guest_io_reads);
+ td_finalize(vm);
+
+ printf("Verifying guest reads:\n");
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_IO(vcpu, TDX_IO_READS_TEST_PORT, 1,
+ TDG_VP_VMCALL_INSTRUCTION_IO_READ);
+ *(uint8_t *)((void *)vcpu->run + vcpu->run->io.data_offset) = 0xAB;
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_IO(vcpu, TDX_IO_READS_TEST_PORT, 2,
+ TDG_VP_VMCALL_INSTRUCTION_IO_READ);
+ *(uint16_t *)((void *)vcpu->run + vcpu->run->io.data_offset) = 0xABCD;
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_IO(vcpu, TDX_IO_READS_TEST_PORT, 4,
+ TDG_VP_VMCALL_INSTRUCTION_IO_READ);
+ *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset) = 0xFFABCDEF;
+
+ td_vcpu_run(vcpu);
+ TEST_ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_SYSTEM_EVENT);
+ TEST_ASSERT_EQ(vcpu->run->system_event.data[1], TDG_VP_VMCALL_INVALID_OPERAND);
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_ASSERT_SUCCESS(vcpu);
+
+ kvm_vm_free(vm);
+ printf("\t ... PASSED\n");
+}
+
int main(int argc, char **argv)
{
setbuf(stdout, NULL);
@@ -444,6 +530,7 @@ int main(int argc, char **argv)
run_in_new_process(&verify_td_cpuid);
run_in_new_process(&verify_get_td_vmcall_info);
run_in_new_process(&verify_guest_writes);
+ run_in_new_process(&verify_guest_reads);

return 0;
}
--
2.43.0.472.g3155946c3a-goog

2023-12-12 20:48:56

by Sagi Shahar

[permalink] [raw]
Subject: [RFC PATCH v5 16/29] KVM: selftests: TDX: Add TDX HLT exit test

The test verifies that the guest runs TDVMCALL<INSTRUCTION.HLT> and that
the guest vCPU enters the halted state.

Signed-off-by: Erdem Aktas <[email protected]>
Signed-off-by: Sagi Shahar <[email protected]>
Signed-off-by: Ackerley Tng <[email protected]>
Signed-off-by: Ryan Afranji <[email protected]>
---
.../selftests/kvm/include/x86_64/tdx/tdx.h | 2 +
.../selftests/kvm/lib/x86_64/tdx/tdx.c | 10 +++
.../selftests/kvm/x86_64/tdx_vm_tests.c | 78 +++++++++++++++++++
3 files changed, 90 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
index 85ba6aab79a7..b18e39d20498 100644
--- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
@@ -8,6 +8,7 @@
#define TDG_VP_VMCALL_GET_TD_VM_CALL_INFO 0x10000
#define TDG_VP_VMCALL_REPORT_FATAL_ERROR 0x10003

+#define TDG_VP_VMCALL_INSTRUCTION_HLT 12
#define TDG_VP_VMCALL_INSTRUCTION_IO 30
#define TDG_VP_VMCALL_INSTRUCTION_RDMSR 31
#define TDG_VP_VMCALL_INSTRUCTION_WRMSR 32
@@ -20,5 +21,6 @@ uint64_t tdg_vp_vmcall_get_td_vmcall_info(uint64_t *r11, uint64_t *r12,
uint64_t *r13, uint64_t *r14);
uint64_t tdg_vp_vmcall_instruction_rdmsr(uint64_t index, uint64_t *ret_value);
uint64_t tdg_vp_vmcall_instruction_wrmsr(uint64_t index, uint64_t value);
+uint64_t tdg_vp_vmcall_instruction_hlt(uint64_t interrupt_blocked_flag);

#endif // SELFTEST_TDX_TDX_H
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
index 88ea6f2a6469..9485bafedc38 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
@@ -114,3 +114,13 @@ uint64_t tdg_vp_vmcall_instruction_wrmsr(uint64_t index, uint64_t value)

return __tdx_hypercall(&args, 0);
}
+
+uint64_t tdg_vp_vmcall_instruction_hlt(uint64_t interrupt_blocked_flag)
+{
+ struct tdx_hypercall_args args = {
+ .r11 = TDG_VP_VMCALL_INSTRUCTION_HLT,
+ .r12 = interrupt_blocked_flag,
+ };
+
+ return __tdx_hypercall(&args, 0);
+}
diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
index 5db3701cc6d9..5fae4c6e5f95 100644
--- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
+++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
@@ -721,6 +721,83 @@ void verify_guest_msr_writes(void)
printf("\t ... PASSED\n");
}

+/*
+ * Verifies HLT functionality.
+ */
+void guest_hlt(void)
+{
+ uint64_t ret;
+ uint64_t interrupt_blocked_flag;
+
+ interrupt_blocked_flag = 0;
+ ret = tdg_vp_vmcall_instruction_hlt(interrupt_blocked_flag);
+ if (ret)
+ tdx_test_fatal(ret);
+
+ tdx_test_success();
+}
+
+void _verify_guest_hlt(int signum);
+
+void wake_me(int interval)
+{
+ struct sigaction action;
+
+ action.sa_handler = _verify_guest_hlt;
+ sigemptyset(&action.sa_mask);
+ action.sa_flags = 0;
+
+ TEST_ASSERT(sigaction(SIGALRM, &action, NULL) == 0,
+ "Could not set the alarm handler!");
+
+ alarm(interval);
+}
+
+void _verify_guest_hlt(int signum)
+{
+ struct kvm_vm *vm;
+ static struct kvm_vcpu *vcpu;
+
+ /*
+ * This function is also called by the SIGALRM handler to check the
+ * vCPU MP state. If the vCPU has already been initialized, we are in
+ * the signal handler: check the MP state and let the guest run again.
+ */
+ if (vcpu != NULL) {
+ struct kvm_mp_state mp_state;
+
+ vcpu_mp_state_get(vcpu, &mp_state);
+ TEST_ASSERT_EQ(mp_state.mp_state, KVM_MP_STATE_HALTED);
+
+ /* Let the guest run and finish the test. */
+ mp_state.mp_state = KVM_MP_STATE_RUNNABLE;
+ vcpu_mp_state_set(vcpu, &mp_state);
+ return;
+ }
+
+ vm = td_create();
+ td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
+ vcpu = td_vcpu_add(vm, 0, guest_hlt);
+ td_finalize(vm);
+
+ printf("Verifying HLT:\n");
+
+ printf("\t ... Running guest\n");
+
+ /* Wait 1 second for guest to execute HLT */
+ wake_me(1);
+ td_vcpu_run(vcpu);
+
+ TDX_TEST_ASSERT_SUCCESS(vcpu);
+
+ kvm_vm_free(vm);
+ printf("\t ... PASSED\n");
+}
+
+void verify_guest_hlt(void)
+{
+ _verify_guest_hlt(0);
+}

int main(int argc, char **argv)
{
@@ -740,6 +817,7 @@ int main(int argc, char **argv)
run_in_new_process(&verify_guest_reads);
run_in_new_process(&verify_guest_msr_writes);
run_in_new_process(&verify_guest_msr_reads);
+ run_in_new_process(&verify_guest_hlt);

return 0;
}
--
2.43.0.472.g3155946c3a-goog

2023-12-12 20:49:05

by Sagi Shahar

[permalink] [raw]
Subject: [RFC PATCH v5 18/29] KVM: selftests: TDX: Add TDX MMIO writes test

The test verifies MMIO writes of various sizes from the guest to the host.

Signed-off-by: Sagi Shahar <[email protected]>
Signed-off-by: Ackerley Tng <[email protected]>
Signed-off-by: Ryan Afranji <[email protected]>
---
.../selftests/kvm/include/x86_64/tdx/tdx.h | 2 +
.../selftests/kvm/lib/x86_64/tdx/tdx.c | 14 +++
.../selftests/kvm/x86_64/tdx_vm_tests.c | 85 +++++++++++++++++++
3 files changed, 101 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
index 13ce60df5684..502b670ea699 100644
--- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
@@ -25,5 +25,7 @@ uint64_t tdg_vp_vmcall_instruction_wrmsr(uint64_t index, uint64_t value);
uint64_t tdg_vp_vmcall_instruction_hlt(uint64_t interrupt_blocked_flag);
uint64_t tdg_vp_vmcall_ve_request_mmio_read(uint64_t address, uint64_t size,
uint64_t *data_out);
+uint64_t tdg_vp_vmcall_ve_request_mmio_write(uint64_t address, uint64_t size,
+ uint64_t data_in);

#endif // SELFTEST_TDX_TDX_H
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
index b19f07ebc0e7..f4afa09f7e3d 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
@@ -143,3 +143,17 @@ uint64_t tdg_vp_vmcall_ve_request_mmio_read(uint64_t address, uint64_t size,

return ret;
}
+
+uint64_t tdg_vp_vmcall_ve_request_mmio_write(uint64_t address, uint64_t size,
+ uint64_t data_in)
+{
+ struct tdx_hypercall_args args = {
+ .r11 = TDG_VP_VMCALL_VE_REQUEST_MMIO,
+ .r12 = size,
+ .r13 = TDG_VP_VMCALL_VE_REQUEST_MMIO_WRITE,
+ .r14 = address,
+ .r15 = data_in,
+ };
+
+ return __tdx_hypercall(&args, 0);
+}
diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
index 48902b69d13e..5e28ba828a92 100644
--- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
+++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
@@ -885,6 +885,90 @@ void verify_mmio_reads(void)
printf("\t ... PASSED\n");
}

+void guest_mmio_writes(void)
+{
+ uint64_t ret;
+
+ ret = tdg_vp_vmcall_ve_request_mmio_write(TDX_MMIO_TEST_ADDR, 1, 0x12);
+ if (ret)
+ tdx_test_fatal(ret);
+
+ ret = tdg_vp_vmcall_ve_request_mmio_write(TDX_MMIO_TEST_ADDR, 2, 0x1234);
+ if (ret)
+ tdx_test_fatal(ret);
+
+ ret = tdg_vp_vmcall_ve_request_mmio_write(TDX_MMIO_TEST_ADDR, 4, 0x12345678);
+ if (ret)
+ tdx_test_fatal(ret);
+
+ ret = tdg_vp_vmcall_ve_request_mmio_write(TDX_MMIO_TEST_ADDR, 8, 0x1234567890ABCDEF);
+ if (ret)
+ tdx_test_fatal(ret);
+
+ // Write across page boundary.
+ ret = tdg_vp_vmcall_ve_request_mmio_write(PAGE_SIZE - 1, 8, 0);
+ if (ret)
+ tdx_test_fatal(ret);
+
+ tdx_test_success();
+}
+
+/*
+ * Verifies guest MMIO writes.
+ */
+void verify_mmio_writes(void)
+{
+ struct kvm_vm *vm;
+ struct kvm_vcpu *vcpu;
+
+ uint8_t byte_1;
+ uint16_t byte_2;
+ uint32_t byte_4;
+ uint64_t byte_8;
+
+ vm = td_create();
+ td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
+ vcpu = td_vcpu_add(vm, 0, guest_mmio_writes);
+ td_finalize(vm);
+
+ printf("Verifying TD MMIO writes:\n");
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_MMIO(vcpu, TDX_MMIO_TEST_ADDR, 1, TDG_VP_VMCALL_VE_REQUEST_MMIO_WRITE);
+ byte_1 = *(uint8_t *)(vcpu->run->mmio.data);
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_MMIO(vcpu, TDX_MMIO_TEST_ADDR, 2, TDG_VP_VMCALL_VE_REQUEST_MMIO_WRITE);
+ byte_2 = *(uint16_t *)(vcpu->run->mmio.data);
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_MMIO(vcpu, TDX_MMIO_TEST_ADDR, 4, TDG_VP_VMCALL_VE_REQUEST_MMIO_WRITE);
+ byte_4 = *(uint32_t *)(vcpu->run->mmio.data);
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_MMIO(vcpu, TDX_MMIO_TEST_ADDR, 8, TDG_VP_VMCALL_VE_REQUEST_MMIO_WRITE);
+ byte_8 = *(uint64_t *)(vcpu->run->mmio.data);
+
+ TEST_ASSERT_EQ(byte_1, 0x12);
+ TEST_ASSERT_EQ(byte_2, 0x1234);
+ TEST_ASSERT_EQ(byte_4, 0x12345678);
+ TEST_ASSERT_EQ(byte_8, 0x1234567890ABCDEF);
+
+ td_vcpu_run(vcpu);
+ TEST_ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_SYSTEM_EVENT);
+ TEST_ASSERT_EQ(vcpu->run->system_event.data[1], TDG_VP_VMCALL_INVALID_OPERAND);
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_ASSERT_SUCCESS(vcpu);
+
+ kvm_vm_free(vm);
+ printf("\t ... PASSED\n");
+}
+
int main(int argc, char **argv)
{
setbuf(stdout, NULL);
@@ -905,6 +989,7 @@ int main(int argc, char **argv)
run_in_new_process(&verify_guest_msr_reads);
run_in_new_process(&verify_guest_hlt);
run_in_new_process(&verify_mmio_reads);
+ run_in_new_process(&verify_mmio_writes);

return 0;
}
--
2.43.0.472.g3155946c3a-goog

2023-12-12 20:49:16

by Sagi Shahar

[permalink] [raw]
Subject: [RFC PATCH v5 19/29] KVM: selftests: TDX: Add TDX CPUID TDVMCALL test

This test issues a CPUID TDVMCALL from inside the guest to get the CPUID
values as seen by KVM.

Signed-off-by: Sagi Shahar <[email protected]>
Signed-off-by: Ackerley Tng <[email protected]>
Signed-off-by: Ryan Afranji <[email protected]>
---
.../selftests/kvm/include/x86_64/tdx/tdx.h | 4 +
.../selftests/kvm/lib/x86_64/tdx/tdx.c | 26 +++++
.../selftests/kvm/x86_64/tdx_vm_tests.c | 94 +++++++++++++++++++
3 files changed, 124 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
index 502b670ea699..b13a533234fd 100644
--- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
@@ -8,6 +8,7 @@
#define TDG_VP_VMCALL_GET_TD_VM_CALL_INFO 0x10000
#define TDG_VP_VMCALL_REPORT_FATAL_ERROR 0x10003

+#define TDG_VP_VMCALL_INSTRUCTION_CPUID 10
#define TDG_VP_VMCALL_INSTRUCTION_HLT 12
#define TDG_VP_VMCALL_INSTRUCTION_IO 30
#define TDG_VP_VMCALL_INSTRUCTION_RDMSR 31
@@ -27,5 +28,8 @@ uint64_t tdg_vp_vmcall_ve_request_mmio_read(uint64_t address, uint64_t size,
uint64_t *data_out);
uint64_t tdg_vp_vmcall_ve_request_mmio_write(uint64_t address, uint64_t size,
uint64_t data_in);
+uint64_t tdg_vp_vmcall_instruction_cpuid(uint32_t eax, uint32_t ecx,
+ uint32_t *ret_eax, uint32_t *ret_ebx,
+ uint32_t *ret_ecx, uint32_t *ret_edx);

#endif // SELFTEST_TDX_TDX_H
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
index f4afa09f7e3d..a45e2ceb6eda 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
@@ -157,3 +157,29 @@ uint64_t tdg_vp_vmcall_ve_request_mmio_write(uint64_t address, uint64_t size,

return __tdx_hypercall(&args, 0);
}
+
+uint64_t tdg_vp_vmcall_instruction_cpuid(uint32_t eax, uint32_t ecx,
+ uint32_t *ret_eax, uint32_t *ret_ebx,
+ uint32_t *ret_ecx, uint32_t *ret_edx)
+{
+ uint64_t ret;
+ struct tdx_hypercall_args args = {
+ .r11 = TDG_VP_VMCALL_INSTRUCTION_CPUID,
+ .r12 = eax,
+ .r13 = ecx,
+ };
+
+
+ ret = __tdx_hypercall(&args, TDX_HCALL_HAS_OUTPUT);
+
+ if (ret_eax)
+ *ret_eax = args.r12;
+ if (ret_ebx)
+ *ret_ebx = args.r13;
+ if (ret_ecx)
+ *ret_ecx = args.r14;
+ if (ret_edx)
+ *ret_edx = args.r15;
+
+ return ret;
+}
diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
index 5e28ba828a92..6935604d768b 100644
--- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
+++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
@@ -969,6 +969,99 @@ void verify_mmio_writes(void)
printf("\t ... PASSED\n");
}

+/*
+ * Verifies CPUID TDVMCALL functionality.
+ * The guest reads CPUID values via a TDVMCALL and sends them to userspace
+ * using IO writes so they can be checked against the expected values.
+ */
+void guest_code_cpuid_tdcall(void)
+{
+ uint64_t err;
+ uint32_t eax, ebx, ecx, edx;
+
+ // Read CPUID leaf 0x1 from host.
+ err = tdg_vp_vmcall_instruction_cpuid(/*eax=*/1, /*ecx=*/0,
+ &eax, &ebx, &ecx, &edx);
+ if (err)
+ tdx_test_fatal(err);
+
+ err = tdx_test_report_to_user_space(eax);
+ if (err)
+ tdx_test_fatal(err);
+
+ err = tdx_test_report_to_user_space(ebx);
+ if (err)
+ tdx_test_fatal(err);
+
+ err = tdx_test_report_to_user_space(ecx);
+ if (err)
+ tdx_test_fatal(err);
+
+ err = tdx_test_report_to_user_space(edx);
+ if (err)
+ tdx_test_fatal(err);
+
+ tdx_test_success();
+}
+
+void verify_td_cpuid_tdcall(void)
+{
+ struct kvm_vm *vm;
+ struct kvm_vcpu *vcpu;
+
+ uint32_t eax, ebx, ecx, edx;
+ const struct kvm_cpuid_entry2 *cpuid_entry;
+
+ vm = td_create();
+ td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
+ vcpu = td_vcpu_add(vm, 0, guest_code_cpuid_tdcall);
+ td_finalize(vm);
+
+ printf("Verifying TD CPUID TDVMCALL:\n");
+
+ /* Wait for guest to report CPUID values */
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_IO(vcpu, TDX_TEST_REPORT_PORT, 4,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
+ eax = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_IO(vcpu, TDX_TEST_REPORT_PORT, 4,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
+ ebx = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_IO(vcpu, TDX_TEST_REPORT_PORT, 4,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
+ ecx = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_IO(vcpu, TDX_TEST_REPORT_PORT, 4,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
+ edx = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_SUCCESS(vcpu);
+
+ /* Get KVM CPUIDs for reference */
+ cpuid_entry = get_cpuid_entry(kvm_get_supported_cpuid(), 1, 0);
+ TEST_ASSERT(cpuid_entry, "CPUID entry missing\n");
+
+ TEST_ASSERT_EQ(cpuid_entry->eax, eax);
+ // Mask lapic ID when comparing ebx.
+ TEST_ASSERT_EQ(cpuid_entry->ebx & ~0xFF000000, ebx & ~0xFF000000);
+ TEST_ASSERT_EQ(cpuid_entry->ecx, ecx);
+ TEST_ASSERT_EQ(cpuid_entry->edx, edx);
+
+ kvm_vm_free(vm);
+ printf("\t ... PASSED\n");
+}
+
int main(int argc, char **argv)
{
setbuf(stdout, NULL);
@@ -990,6 +1083,7 @@ int main(int argc, char **argv)
run_in_new_process(&verify_guest_hlt);
run_in_new_process(&verify_mmio_reads);
run_in_new_process(&verify_mmio_writes);
+ run_in_new_process(&verify_td_cpuid_tdcall);

return 0;
}
--
2.43.0.472.g3155946c3a-goog

2023-12-12 20:49:21

by Sagi Shahar

[permalink] [raw]
Subject: [RFC PATCH v5 17/29] KVM: selftests: TDX: Add TDX MMIO reads test

The test verifies MMIO reads of various sizes from the host to the guest.

Signed-off-by: Sagi Shahar <[email protected]>
Signed-off-by: Ackerley Tng <[email protected]>
Signed-off-by: Ryan Afranji <[email protected]>
---
.../selftests/kvm/include/x86_64/tdx/tdcall.h | 2 +
.../selftests/kvm/include/x86_64/tdx/tdx.h | 3 +
.../kvm/include/x86_64/tdx/test_util.h | 23 +++++
.../selftests/kvm/lib/x86_64/tdx/tdx.c | 19 ++++
.../selftests/kvm/x86_64/tdx_vm_tests.c | 87 +++++++++++++++++++
5 files changed, 134 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
index b5e94b7c48fa..95fcdbd8404e 100644
--- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
@@ -9,6 +9,8 @@

#define TDG_VP_VMCALL_INSTRUCTION_IO_READ 0
#define TDG_VP_VMCALL_INSTRUCTION_IO_WRITE 1
+#define TDG_VP_VMCALL_VE_REQUEST_MMIO_READ 0
+#define TDG_VP_VMCALL_VE_REQUEST_MMIO_WRITE 1

#define TDG_VP_VMCALL_SUCCESS 0x0000000000000000
#define TDG_VP_VMCALL_INVALID_OPERAND 0x8000000000000000
diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
index b18e39d20498..13ce60df5684 100644
--- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
@@ -12,6 +12,7 @@
#define TDG_VP_VMCALL_INSTRUCTION_IO 30
#define TDG_VP_VMCALL_INSTRUCTION_RDMSR 31
#define TDG_VP_VMCALL_INSTRUCTION_WRMSR 32
+#define TDG_VP_VMCALL_VE_REQUEST_MMIO 48

void handle_userspace_tdg_vp_vmcall_exit(struct kvm_vcpu *vcpu);
uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
@@ -22,5 +23,7 @@ uint64_t tdg_vp_vmcall_get_td_vmcall_info(uint64_t *r11, uint64_t *r12,
uint64_t tdg_vp_vmcall_instruction_rdmsr(uint64_t index, uint64_t *ret_value);
uint64_t tdg_vp_vmcall_instruction_wrmsr(uint64_t index, uint64_t value);
uint64_t tdg_vp_vmcall_instruction_hlt(uint64_t interrupt_blocked_flag);
+uint64_t tdg_vp_vmcall_ve_request_mmio_read(uint64_t address, uint64_t size,
+ uint64_t *data_out);

#endif // SELFTEST_TDX_TDX_H
diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
index 8a9b6a1bec3e..af412b764604 100644
--- a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
@@ -35,6 +35,29 @@
(VCPU)->run->io.direction); \
} while (0)

+
+/**
+ * Assert that some MMIO operation involving TDG.VP.VMCALL <#VERequestMMIO> was
+ * called in the guest.
+ */
+#define TDX_TEST_ASSERT_MMIO(VCPU, ADDR, SIZE, DIR) \
+ do { \
+ TEST_ASSERT((VCPU)->run->exit_reason == KVM_EXIT_MMIO, \
+ "Got exit_reason other than KVM_EXIT_MMIO: %u (%s)\n", \
+ (VCPU)->run->exit_reason, \
+ exit_reason_str((VCPU)->run->exit_reason)); \
+ \
+ TEST_ASSERT(((VCPU)->run->exit_reason == KVM_EXIT_MMIO) && \
+ ((VCPU)->run->mmio.phys_addr == (ADDR)) && \
+ ((VCPU)->run->mmio.len == (SIZE)) && \
+ ((VCPU)->run->mmio.is_write == (DIR)), \
+ "Got unexpected MMIO exit values: %u (%s) %llu %d %d\n", \
+ (VCPU)->run->exit_reason, \
+ exit_reason_str((VCPU)->run->exit_reason), \
+ (VCPU)->run->mmio.phys_addr, (VCPU)->run->mmio.len, \
+ (VCPU)->run->mmio.is_write); \
+ } while (0)
+
/**
* Check and report if there was some failure in the guest, either an exception
* like a triple fault, or if a tdx_test_fatal() was hit.
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
index 9485bafedc38..b19f07ebc0e7 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
@@ -124,3 +124,22 @@ uint64_t tdg_vp_vmcall_instruction_hlt(uint64_t interrupt_blocked_flag)

return __tdx_hypercall(&args, 0);
}
+
+uint64_t tdg_vp_vmcall_ve_request_mmio_read(uint64_t address, uint64_t size,
+ uint64_t *data_out)
+{
+ uint64_t ret;
+ struct tdx_hypercall_args args = {
+ .r11 = TDG_VP_VMCALL_VE_REQUEST_MMIO,
+ .r12 = size,
+ .r13 = TDG_VP_VMCALL_VE_REQUEST_MMIO_READ,
+ .r14 = address,
+ };
+
+ ret = __tdx_hypercall(&args, TDX_HCALL_HAS_OUTPUT);
+
+ if (data_out)
+ *data_out = args.r11;
+
+ return ret;
+}
diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
index 5fae4c6e5f95..48902b69d13e 100644
--- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
+++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
@@ -799,6 +799,92 @@ void verify_guest_hlt(void)
_verify_guest_hlt(0);
}

+/* Pick any address that was not mapped into the guest to test MMIO */
+#define TDX_MMIO_TEST_ADDR 0x200000000
+
+void guest_mmio_reads(void)
+{
+ uint64_t data;
+ uint64_t ret;
+
+ ret = tdg_vp_vmcall_ve_request_mmio_read(TDX_MMIO_TEST_ADDR, 1, &data);
+ if (ret)
+ tdx_test_fatal(ret);
+ if (data != 0x12)
+ tdx_test_fatal(1);
+
+ ret = tdg_vp_vmcall_ve_request_mmio_read(TDX_MMIO_TEST_ADDR, 2, &data);
+ if (ret)
+ tdx_test_fatal(ret);
+ if (data != 0x1234)
+ tdx_test_fatal(2);
+
+ ret = tdg_vp_vmcall_ve_request_mmio_read(TDX_MMIO_TEST_ADDR, 4, &data);
+ if (ret)
+ tdx_test_fatal(ret);
+ if (data != 0x12345678)
+ tdx_test_fatal(4);
+
+ ret = tdg_vp_vmcall_ve_request_mmio_read(TDX_MMIO_TEST_ADDR, 8, &data);
+ if (ret)
+ tdx_test_fatal(ret);
+ if (data != 0x1234567890ABCDEF)
+ tdx_test_fatal(8);
+
+ // Read an invalid number of bytes.
+ ret = tdg_vp_vmcall_ve_request_mmio_read(TDX_MMIO_TEST_ADDR, 10, &data);
+ if (ret)
+ tdx_test_fatal(ret);
+
+ tdx_test_success();
+}
+
+/*
+ * Verifies guest MMIO reads.
+ */
+void verify_mmio_reads(void)
+{
+ struct kvm_vm *vm;
+ struct kvm_vcpu *vcpu;
+
+ vm = td_create();
+ td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
+ vcpu = td_vcpu_add(vm, 0, guest_mmio_reads);
+ td_finalize(vm);
+
+ printf("Verifying TD MMIO reads:\n");
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_MMIO(vcpu, TDX_MMIO_TEST_ADDR, 1, TDG_VP_VMCALL_VE_REQUEST_MMIO_READ);
+ *(uint8_t *)vcpu->run->mmio.data = 0x12;
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_MMIO(vcpu, TDX_MMIO_TEST_ADDR, 2, TDG_VP_VMCALL_VE_REQUEST_MMIO_READ);
+ *(uint16_t *)vcpu->run->mmio.data = 0x1234;
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_MMIO(vcpu, TDX_MMIO_TEST_ADDR, 4, TDG_VP_VMCALL_VE_REQUEST_MMIO_READ);
+ *(uint32_t *)vcpu->run->mmio.data = 0x12345678;
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_MMIO(vcpu, TDX_MMIO_TEST_ADDR, 8, TDG_VP_VMCALL_VE_REQUEST_MMIO_READ);
+ *(uint64_t *)vcpu->run->mmio.data = 0x1234567890ABCDEF;
+
+ td_vcpu_run(vcpu);
+ TEST_ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_SYSTEM_EVENT);
+ TEST_ASSERT_EQ(vcpu->run->system_event.data[1], TDG_VP_VMCALL_INVALID_OPERAND);
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_ASSERT_SUCCESS(vcpu);
+
+ kvm_vm_free(vm);
+ printf("\t ... PASSED\n");
+}
+
int main(int argc, char **argv)
{
setbuf(stdout, NULL);
@@ -818,6 +904,7 @@ int main(int argc, char **argv)
run_in_new_process(&verify_guest_msr_writes);
run_in_new_process(&verify_guest_msr_reads);
run_in_new_process(&verify_guest_hlt);
+ run_in_new_process(&verify_mmio_reads);

return 0;
}
--
2.43.0.472.g3155946c3a-goog

2023-12-12 20:49:24

by Sagi Shahar

[permalink] [raw]
Subject: [RFC PATCH v5 22/29] KVM: selftests: Add functions to allow mapping as shared

From: Ackerley Tng <[email protected]>

Signed-off-by: Ackerley Tng <[email protected]>
Signed-off-by: Ryan Afranji <[email protected]>
Signed-off-by: Sagi Shahar <[email protected]>
---
.../selftests/kvm/include/kvm_util_base.h | 24 ++++++++++++++
tools/testing/selftests/kvm/lib/kvm_util.c | 32 +++++++++++++++++++
.../selftests/kvm/lib/x86_64/processor.c | 15 +++++++--
3 files changed, 69 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index b353617fcdd1..efd7ae8abb20 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -574,6 +574,8 @@ vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm);

void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
unsigned int npages);
+void virt_map_shared(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
+ unsigned int npages);
void *addr_gpa2hva(struct kvm_vm *vm, vm_paddr_t gpa);
void *addr_gva2hva(struct kvm_vm *vm, vm_vaddr_t gva);
vm_paddr_t addr_hva2gpa(struct kvm_vm *vm, void *hva);
@@ -1034,6 +1036,28 @@ static inline void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr
virt_arch_pg_map(vm, vaddr, paddr);
}

+/*
+ * VM Virtual Page Map as Shared
+ *
+ * Input Args:
+ * vm - Virtual Machine
+ * vaddr - VM Virtual Address
+ * paddr - VM Physical Address
+ *
+ * Output Args: None
+ *
+ * Return: None
+ *
+ * Within @vm, creates a virtual translation for the page starting
+ * at @vaddr to the page starting at @paddr.
+ */
+void virt_arch_pg_map_shared(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr);
+
+static inline void virt_pg_map_shared(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
+{
+ virt_arch_pg_map_shared(vm, vaddr, paddr);
+}

/*
* Address Guest Virtual to Guest Physical
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 4f1ae0f1eef0..28780fa1f0f2 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1573,6 +1573,38 @@ void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
}
}

+/*
+ * Map a range of VM virtual address to the VM's physical address as shared
+ *
+ * Input Args:
+ * vm - Virtual Machine
+ * vaddr - Virtual address to map
+ * paddr - VM Physical Address
+ * npages - The number of pages to map
+ *
+ * Output Args: None
+ *
+ * Return: None
+ *
+ * Within the VM given by @vm, creates a virtual translation for
+ * @npages starting at @vaddr to the page range starting at @paddr.
+ */
+void virt_map_shared(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
+ unsigned int npages)
+{
+ size_t page_size = vm->page_size;
+ size_t size = npages * page_size;
+
+ TEST_ASSERT(vaddr + size > vaddr, "Vaddr overflow");
+ TEST_ASSERT(paddr + size > paddr, "Paddr overflow");
+
+ while (npages--) {
+ virt_pg_map_shared(vm, vaddr, paddr);
+ vaddr += page_size;
+ paddr += page_size;
+ }
+}
+
/*
* Address VM Physical to Host Virtual
*
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 566d82829da4..aa2a57ddb8d3 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -190,7 +190,8 @@ static uint64_t *virt_create_upper_pte(struct kvm_vm *vm,
return pte;
}

-void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level)
+static void ___virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
+ int level, bool protected)
{
const uint64_t pg_size = PG_LEVEL_SIZE(level);
uint64_t *pml4e, *pdpe, *pde;
@@ -235,17 +236,27 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level)
"PTE already present for 4k page at vaddr: 0x%lx\n", vaddr);
*pte = PTE_PRESENT_MASK | PTE_WRITABLE_MASK | (paddr & PHYSICAL_PAGE_MASK);

- if (vm_is_gpa_protected(vm, paddr))
+ if (protected)
*pte |= vm->arch.c_bit;
else
*pte |= vm->arch.s_bit;
}

+void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level)
+{
+ ___virt_pg_map(vm, vaddr, paddr, level, vm_is_gpa_protected(vm, paddr));
+}
+
void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
{
__virt_pg_map(vm, vaddr, paddr, PG_LEVEL_4K);
}

+void virt_arch_pg_map_shared(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
+{
+ ___virt_pg_map(vm, vaddr, paddr, PG_LEVEL_4K, false);
+}
+
void virt_map_level(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
uint64_t nr_bytes, int level)
{
--
2.43.0.472.g3155946c3a-goog

2023-12-12 20:49:33

by Sagi Shahar

[permalink] [raw]
Subject: [RFC PATCH v5 20/29] KVM: selftests: TDX: Verify the behavior when host consumes a TD private memory

From: Ryan Afranji <[email protected]>

The test checks that host can only read fixed values when trying to
access the guest's private memory.

Signed-off-by: Ryan Afranji <[email protected]>
Signed-off-by: Sagi Shahar <[email protected]>
Signed-off-by: Ackerley Tng <[email protected]>
---
.../selftests/kvm/x86_64/tdx_vm_tests.c | 85 +++++++++++++++++++
1 file changed, 85 insertions(+)

diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
index 6935604d768b..c977223ff871 100644
--- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
+++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
@@ -1062,6 +1062,90 @@ void verify_td_cpuid_tdcall(void)
printf("\t ... PASSED\n");
}

+/*
+ * Shared variables between guest and host for host reading private mem test
+ */
+static uint64_t tdx_test_host_read_private_mem_addr;
+#define TDX_HOST_READ_PRIVATE_MEM_PORT_TEST 0x53
+
+void guest_host_read_priv_mem(void)
+{
+ uint64_t ret;
+ uint64_t placeholder = 0;
+
+ /* Set value */
+ *((uint32_t *) tdx_test_host_read_private_mem_addr) = 0xABCD;
+
+ /* Exit so host can read value */
+ ret = tdg_vp_vmcall_instruction_io(
+ TDX_HOST_READ_PRIVATE_MEM_PORT_TEST, 4,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE, &placeholder);
+ if (ret)
+ tdx_test_fatal(ret);
+
+ /* Update the guest's variable and have host reread it. */
+ *((uint32_t *) tdx_test_host_read_private_mem_addr) = 0xFEDC;
+
+ tdx_test_success();
+}
+
+void verify_host_reading_private_mem(void)
+{
+ struct kvm_vm *vm;
+ struct kvm_vcpu *vcpu;
+
+ vm_vaddr_t test_page;
+ uint64_t *host_virt;
+ uint64_t first_host_read;
+ uint64_t second_host_read;
+
+ vm = td_create();
+ td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
+ vcpu = td_vcpu_add(vm, 0, guest_host_read_priv_mem);
+
+ test_page = vm_vaddr_alloc_page(vm);
+ TEST_ASSERT(test_page < BIT_ULL(32),
+ "Test address should fit in 32 bits so it can be sent to the guest");
+
+ host_virt = addr_gva2hva(vm, test_page);
+ TEST_ASSERT(host_virt != NULL,
+ "Guest address not found in guest memory regions\n");
+
+ tdx_test_host_read_private_mem_addr = test_page;
+ sync_global_to_guest(vm, tdx_test_host_read_private_mem_addr);
+
+ td_finalize(vm);
+
+ printf("Verifying host's behavior when reading TD private memory:\n");
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_IO(vcpu, TDX_HOST_READ_PRIVATE_MEM_PORT_TEST,
+ 4, TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
+ printf("\t ... Guest's variable contains 0xABCD\n");
+
+ /* Host reads guest's variable. */
+ first_host_read = *host_virt;
+ printf("\t ... Host's read attempt value: %lu\n", first_host_read);
+
+ /* Guest updates variable and host rereads it. */
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ printf("\t ... Guest's variable updated to 0xFEDC\n");
+
+ second_host_read = *host_virt;
+ printf("\t ... Host's second read attempt value: %lu\n",
+ second_host_read);
+
+ TEST_ASSERT(first_host_read == second_host_read,
+ "Host did not read a fixed pattern\n");
+
+ printf("\t ... Fixed pattern was returned to the host\n");
+
+ kvm_vm_free(vm);
+ printf("\t ... PASSED\n");
+}
+
int main(int argc, char **argv)
{
setbuf(stdout, NULL);
@@ -1084,6 +1168,7 @@ int main(int argc, char **argv)
run_in_new_process(&verify_mmio_reads);
run_in_new_process(&verify_mmio_writes);
run_in_new_process(&verify_td_cpuid_tdcall);
+ run_in_new_process(&verify_host_reading_private_mem);

return 0;
}
--
2.43.0.472.g3155946c3a-goog

2023-12-12 20:49:41

by Sagi Shahar

[permalink] [raw]
Subject: [RFC PATCH v5 21/29] KVM: selftests: TDX: Add TDG.VP.INFO test

From: Roger Wang <[email protected]>

Adds a test for TDG.VP.INFO.

Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Sagi Shahar <[email protected]>
Signed-off-by: Ackerley Tng <[email protected]>
Signed-off-by: Ryan Afranji <[email protected]>
---
.../selftests/kvm/include/x86_64/tdx/tdcall.h | 19 +++
.../selftests/kvm/include/x86_64/tdx/tdx.h | 5 +
.../selftests/kvm/lib/x86_64/tdx/tdcall.S | 68 ++++++++
.../selftests/kvm/lib/x86_64/tdx/tdx.c | 27 ++++
.../selftests/kvm/x86_64/tdx_vm_tests.c | 145 ++++++++++++++++++
5 files changed, 264 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
index 95fcdbd8404e..a65ce8f3c109 100644
--- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
@@ -37,4 +37,23 @@ struct tdx_hypercall_args {
/* Used to request services from the VMM */
u64 __tdx_hypercall(struct tdx_hypercall_args *args, unsigned long flags);

+/*
+ * Used to gather the output registers values of the TDCALL and SEAMCALL
+ * instructions when requesting services from the TDX module.
+ *
+ * This is a software only structure and not part of the TDX module/VMM ABI.
+ */
+struct tdx_module_output {
+ u64 rcx;
+ u64 rdx;
+ u64 r8;
+ u64 r9;
+ u64 r10;
+ u64 r11;
+};
+
+/* Used to communicate with the TDX module */
+u64 __tdx_module_call(u64 fn, u64 rcx, u64 rdx, u64 r8, u64 r9,
+ struct tdx_module_output *out);
+
#endif // SELFTESTS_TDX_TDCALL_H
diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
index b13a533234fd..6b176de1e795 100644
--- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
@@ -5,6 +5,8 @@
#include <stdint.h>
#include "kvm_util_base.h"

+#define TDG_VP_INFO 1
+
#define TDG_VP_VMCALL_GET_TD_VM_CALL_INFO 0x10000
#define TDG_VP_VMCALL_REPORT_FATAL_ERROR 0x10003

@@ -31,5 +33,8 @@ uint64_t tdg_vp_vmcall_ve_request_mmio_write(uint64_t address, uint64_t size,
uint64_t tdg_vp_vmcall_instruction_cpuid(uint32_t eax, uint32_t ecx,
uint32_t *ret_eax, uint32_t *ret_ebx,
uint32_t *ret_ecx, uint32_t *ret_edx);
+uint64_t tdg_vp_info(uint64_t *rcx, uint64_t *rdx,
+ uint64_t *r8, uint64_t *r9,
+ uint64_t *r10, uint64_t *r11);

#endif // SELFTEST_TDX_TDX_H
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdcall.S b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdcall.S
index df9c1ed4bb2d..601d71531443 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdcall.S
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdcall.S
@@ -86,5 +86,73 @@ __tdx_hypercall:
pop %rbp
ret

+#define TDX_MODULE_rcx 0 /* offsetof(struct tdx_module_output, rcx) */
+#define TDX_MODULE_rdx 8 /* offsetof(struct tdx_module_output, rdx) */
+#define TDX_MODULE_r8 16 /* offsetof(struct tdx_module_output, r8) */
+#define TDX_MODULE_r9 24 /* offsetof(struct tdx_module_output, r9) */
+#define TDX_MODULE_r10 32 /* offsetof(struct tdx_module_output, r10) */
+#define TDX_MODULE_r11 40 /* offsetof(struct tdx_module_output, r11) */
+
+.globl __tdx_module_call
+.type __tdx_module_call, @function
+__tdx_module_call:
+ /* Set up stack frame */
+ push %rbp
+ movq %rsp, %rbp
+
+ /* Callee-saved, so preserve it */
+ push %r12
+
+ /*
+ * Push the output pointer to the stack.
+ * After the TDCALL, it will be popped back into the R12 register.
+ */
+ push %r9
+
+ /* Mangle function call ABI into TDCALL/SEAMCALL ABI: */
+ /* Move Leaf ID to RAX */
+ mov %rdi, %rax
+ /* Move input 4 to R9 */
+ mov %r8, %r9
+ /* Move input 3 to R8 */
+ mov %rcx, %r8
+ /* Move input 1 to RCX */
+ mov %rsi, %rcx
+ /* Leave input param 2 in RDX */
+
+ tdcall
+
+ /*
+ * Fetch output pointer from stack to R12 (It is used
+ * as temporary storage)
+ */
+ pop %r12
+
+ /*
+ * Since this function can be invoked with NULL as an output pointer,
+ * check if caller provided an output struct before storing output
+ * registers.
+ *
+ * Update output registers, even if the call failed (RAX != 0).
+ * Other registers may contain details of the failure.
+ */
+ test %r12, %r12
+ jz .Lno_output_struct
+
+ /* Copy result registers to output struct: */
+ movq %rcx, TDX_MODULE_rcx(%r12)
+ movq %rdx, TDX_MODULE_rdx(%r12)
+ movq %r8, TDX_MODULE_r8(%r12)
+ movq %r9, TDX_MODULE_r9(%r12)
+ movq %r10, TDX_MODULE_r10(%r12)
+ movq %r11, TDX_MODULE_r11(%r12)
+
+.Lno_output_struct:
+ /* Restore the state of R12 register */
+ pop %r12
+
+ pop %rbp
+ ret
+
/* Disable executable stack */
.section .note.GNU-stack,"",%progbits
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
index a45e2ceb6eda..bcd9cceb3372 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
@@ -183,3 +183,30 @@ uint64_t tdg_vp_vmcall_instruction_cpuid(uint32_t eax, uint32_t ecx,

return ret;
}
+
+uint64_t tdg_vp_info(uint64_t *rcx, uint64_t *rdx,
+ uint64_t *r8, uint64_t *r9,
+ uint64_t *r10, uint64_t *r11)
+{
+ uint64_t ret;
+ struct tdx_module_output out;
+
+ memset(&out, 0, sizeof(struct tdx_module_output));
+
+ ret = __tdx_module_call(TDG_VP_INFO, 0, 0, 0, 0, &out);
+
+ if (rcx)
+ *rcx = out.rcx;
+ if (rdx)
+ *rdx = out.rdx;
+ if (r8)
+ *r8 = out.r8;
+ if (r9)
+ *r9 = out.r9;
+ if (r10)
+ *r10 = out.r10;
+ if (r11)
+ *r11 = out.r11;
+
+ return ret;
+}
diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
index c977223ff871..60b4504d1245 100644
--- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
+++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
@@ -1146,6 +1146,150 @@ void verify_host_reading_private_mem(void)
printf("\t ... PASSED\n");
}

+/*
+ * Do a TDG.VP.INFO call from the guest
+ */
+void guest_tdcall_vp_info(void)
+{
+ uint64_t err;
+ uint64_t rcx, rdx, r8, r9, r10, r11;
+
+ err = tdg_vp_info(&rcx, &rdx, &r8, &r9, &r10, &r11);
+ if (err)
+ tdx_test_fatal(err);
+
+ /* return values to user space host */
+ err = tdx_test_report_64bit_to_user_space(rcx);
+ if (err)
+ tdx_test_fatal(err);
+
+ err = tdx_test_report_64bit_to_user_space(rdx);
+ if (err)
+ tdx_test_fatal(err);
+
+ err = tdx_test_report_64bit_to_user_space(r8);
+ if (err)
+ tdx_test_fatal(err);
+
+ err = tdx_test_report_64bit_to_user_space(r9);
+ if (err)
+ tdx_test_fatal(err);
+
+ err = tdx_test_report_64bit_to_user_space(r10);
+ if (err)
+ tdx_test_fatal(err);
+
+ err = tdx_test_report_64bit_to_user_space(r11);
+ if (err)
+ tdx_test_fatal(err);
+
+ tdx_test_success();
+}
+
+/*
+ * Do a TDG.VP.INFO call from the guest and verify the expected values are
+ * returned.
+ */
+void verify_tdcall_vp_info(void)
+{
+ const int num_vcpus = 2;
+ struct kvm_vcpu *vcpus[num_vcpus];
+ struct kvm_vm *vm;
+
+ uint64_t rcx, rdx, r8, r9, r10, r11;
+ uint32_t ret_num_vcpus, ret_max_vcpus;
+ uint64_t attributes;
+ uint32_t i;
+ const struct kvm_cpuid_entry2 *cpuid_entry;
+ int max_pa = -1;
+
+ vm = td_create();
+
+#define TDX_TDPARAM_ATTR_SEPT_VE_DISABLE_BIT (1UL << 28)
+#define TDX_TDPARAM_ATTR_PKS_BIT (1UL << 30)
+ /* Setting attributes parameter used by TDH.MNG.INIT to 0x50000000 */
+ attributes = TDX_TDPARAM_ATTR_SEPT_VE_DISABLE_BIT |
+ TDX_TDPARAM_ATTR_PKS_BIT;
+
+ td_initialize(vm, VM_MEM_SRC_ANONYMOUS, attributes);
+
+ for (i = 0; i < num_vcpus; i++)
+ vcpus[i] = td_vcpu_add(vm, i, guest_tdcall_vp_info);
+
+ td_finalize(vm);
+
+ printf("Verifying TDG.VP.INFO call:\n");
+
+ /* Get KVM CPUIDs for reference */
+ cpuid_entry = get_cpuid_entry(kvm_get_supported_cpuid(), 0x80000008, 0);
+ TEST_ASSERT(cpuid_entry, "CPUID entry missing\n");
+ max_pa = cpuid_entry->eax & 0xff;
+
+ for (i = 0; i < num_vcpus; i++) {
+ struct kvm_vcpu *vcpu = vcpus[i];
+
+ /* Wait for guest to report rcx value */
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ rcx = tdx_test_read_64bit_report_from_guest(vcpu);
+
+ /* Wait for guest to report rdx value */
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ rdx = tdx_test_read_64bit_report_from_guest(vcpu);
+
+ /* Wait for guest to report r8 value */
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ r8 = tdx_test_read_64bit_report_from_guest(vcpu);
+
+ /* Wait for guest to report r9 value */
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ r9 = tdx_test_read_64bit_report_from_guest(vcpu);
+
+ /* Wait for guest to report r10 value */
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ r10 = tdx_test_read_64bit_report_from_guest(vcpu);
+
+ /* Wait for guest to report r11 value */
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ r11 = tdx_test_read_64bit_report_from_guest(vcpu);
+
+ ret_num_vcpus = r8 & 0xFFFFFFFF;
+ ret_max_vcpus = (r8 >> 32) & 0xFFFFFFFF;
+
+ /* Bits 5:0 of RCX report the GPAW */
+ TEST_ASSERT_EQ(rcx & 0x3F, max_pa);
+ /* Bits 63:6 of RCX are reserved and must be 0 */
+ TEST_ASSERT_EQ(rcx >> 6, 0);
+ TEST_ASSERT_EQ(rdx, attributes);
+ TEST_ASSERT_EQ(ret_num_vcpus, num_vcpus);
+ TEST_ASSERT_EQ(ret_max_vcpus, 512);
+ /* VCPU_INDEX = i */
+ TEST_ASSERT_EQ(r9, i);
+ /*
+ * verify reserved bits are 0
+ * r10 bit 0 (SYS_RD) indicates that the TDG.SYS.RD/RDM/RDALL
+ * functions are available and can be either 0 or 1.
+ */
+ TEST_ASSERT_EQ(r10 & ~1, 0);
+ TEST_ASSERT_EQ(r11, 0);
+
+ /* Wait for guest to complete execution */
+ td_vcpu_run(vcpu);
+
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_SUCCESS(vcpu);
+
+ printf("\t ... Guest completed run on VCPU=%u\n", i);
+ }
+
+ kvm_vm_free(vm);
+ printf("\t ... PASSED\n");
+}
+
int main(int argc, char **argv)
{
setbuf(stdout, NULL);
@@ -1169,6 +1313,7 @@ int main(int argc, char **argv)
run_in_new_process(&verify_mmio_writes);
run_in_new_process(&verify_td_cpuid_tdcall);
run_in_new_process(&verify_host_reading_private_mem);
+ run_in_new_process(&verify_tdcall_vp_info);

return 0;
}
--
2.43.0.472.g3155946c3a-goog

2023-12-12 20:49:52

by Sagi Shahar

[permalink] [raw]
Subject: [RFC PATCH v5 27/29] KVM: selftests: Propagate KVM_EXIT_MEMORY_FAULT to userspace

Allow userspace to handle KVM_EXIT_MEMORY_FAULT instead of triggering
TEST_ASSERT.

From the KVM_EXIT_MEMORY_FAULT documentation:
Note! KVM_EXIT_MEMORY_FAULT is unique among all KVM exit reasons in that it
accompanies a return code of '-1', not '0'! errno will always be set to EFAULT
or EHWPOISON when KVM exits with KVM_EXIT_MEMORY_FAULT, userspace should assume
kvm_run.exit_reason is stale/undefined for all other error numbers.

Signed-off-by: Sagi Shahar <[email protected]>
---
tools/testing/selftests/kvm/lib/kvm_util.c | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index d024abc5379c..8fb041e51484 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1742,6 +1742,10 @@ void vcpu_run(struct kvm_vcpu *vcpu)
{
int ret = _vcpu_run(vcpu);

+ /* Allow the caller to handle this scenario. */
+ if (ret == -1 && errno == EFAULT)
+ return;
+
TEST_ASSERT(!ret, KVM_IOCTL_ERROR(KVM_RUN, ret));
}

--
2.43.0.472.g3155946c3a-goog

2023-12-12 20:49:56

by Sagi Shahar

[permalink] [raw]
Subject: [RFC PATCH v5 23/29] KVM: selftests: TDX: Add shared memory test

From: Ryan Afranji <[email protected]>

Adds a test that sets up shared memory between the host and guest.

Signed-off-by: Ryan Afranji <[email protected]>
Signed-off-by: Sagi Shahar <[email protected]>
Signed-off-by: Ackerley Tng <[email protected]>
---
tools/testing/selftests/kvm/Makefile | 1 +
.../selftests/kvm/include/x86_64/tdx/tdx.h | 2 +
.../kvm/include/x86_64/tdx/tdx_util.h | 2 +
.../selftests/kvm/lib/x86_64/tdx/tdx.c | 26 ++++
.../selftests/kvm/lib/x86_64/tdx/tdx_util.c | 25 ++++
.../kvm/x86_64/tdx_shared_mem_test.c | 135 ++++++++++++++++++
6 files changed, 191 insertions(+)
create mode 100644 tools/testing/selftests/kvm/x86_64/tdx_shared_mem_test.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 80d4a50eeb9f..8c0a6b395ee5 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -156,6 +156,7 @@ TEST_GEN_PROGS_x86_64 += steal_time
TEST_GEN_PROGS_x86_64 += kvm_binary_stats_test
TEST_GEN_PROGS_x86_64 += system_counter_offset_test
TEST_GEN_PROGS_x86_64 += x86_64/tdx_vm_tests
+TEST_GEN_PROGS_x86_64 += x86_64/tdx_shared_mem_test

# Compiled outputs used by test targets
TEST_GEN_PROGS_EXTENDED_x86_64 += x86_64/nx_huge_pages_test
diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
index 6b176de1e795..db4cc62abb5d 100644
--- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
@@ -8,6 +8,7 @@
#define TDG_VP_INFO 1

#define TDG_VP_VMCALL_GET_TD_VM_CALL_INFO 0x10000
+#define TDG_VP_VMCALL_MAP_GPA 0x10001
#define TDG_VP_VMCALL_REPORT_FATAL_ERROR 0x10003

#define TDG_VP_VMCALL_INSTRUCTION_CPUID 10
@@ -36,5 +37,6 @@ uint64_t tdg_vp_vmcall_instruction_cpuid(uint32_t eax, uint32_t ecx,
uint64_t tdg_vp_info(uint64_t *rcx, uint64_t *rdx,
uint64_t *r8, uint64_t *r9,
uint64_t *r10, uint64_t *r11);
+uint64_t tdg_vp_vmcall_map_gpa(uint64_t address, uint64_t size, uint64_t *data_out);

#endif // SELFTEST_TDX_TDX_H
diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h
index 32dd6b8fda46..3e850ecb85a6 100644
--- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h
@@ -13,5 +13,7 @@ void td_initialize(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
uint64_t attributes);
void td_finalize(struct kvm_vm *vm);
void td_vcpu_run(struct kvm_vcpu *vcpu);
+void handle_memory_conversion(struct kvm_vm *vm, uint64_t gpa, uint64_t size,
+ bool shared_to_private);

#endif // SELFTESTS_TDX_KVM_UTIL_H
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
index bcd9cceb3372..061a5c0bef34 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
@@ -4,9 +4,11 @@

#include "tdx/tdcall.h"
#include "tdx/tdx.h"
+#include "tdx/tdx_util.h"

void handle_userspace_tdg_vp_vmcall_exit(struct kvm_vcpu *vcpu)
{
+ struct kvm_vm *vm = vcpu->vm;
struct kvm_tdx_vmcall *vmcall_info = &vcpu->run->tdx.u.vmcall;
uint64_t vmcall_subfunction = vmcall_info->subfunction;

@@ -20,6 +22,15 @@ void handle_userspace_tdg_vp_vmcall_exit(struct kvm_vcpu *vcpu)
vcpu->run->system_event.data[2] = vmcall_info->in_r13;
vmcall_info->status_code = 0;
break;
+ case TDG_VP_VMCALL_MAP_GPA: {
+ uint64_t gpa = vmcall_info->in_r12 & ~vm->arch.s_bit;
+ bool shared_to_private = !(vm->arch.s_bit &
+ vmcall_info->in_r12);
+ handle_memory_conversion(vm, gpa, vmcall_info->in_r13,
+ shared_to_private);
+ vmcall_info->status_code = 0;
+ break;
+ }
default:
TEST_FAIL("TD VMCALL subfunction %lu is unsupported.\n",
vmcall_subfunction);
@@ -210,3 +220,19 @@ uint64_t tdg_vp_info(uint64_t *rcx, uint64_t *rdx,

return ret;
}
+
+uint64_t tdg_vp_vmcall_map_gpa(uint64_t address, uint64_t size, uint64_t *data_out)
+{
+ uint64_t ret;
+ struct tdx_hypercall_args args = {
+ .r11 = TDG_VP_VMCALL_MAP_GPA,
+ .r12 = address,
+ .r13 = size
+ };
+
+ ret = __tdx_hypercall(&args, TDX_HCALL_HAS_OUTPUT);
+
+ if (data_out)
+ *data_out = args.r11;
+ return ret;
+}
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
index d745bb6287c1..92fa6bd13229 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
@@ -531,3 +531,28 @@ void td_vcpu_run(struct kvm_vcpu *vcpu)
handle_userspace_tdg_vp_vmcall_exit(vcpu);
}
}
+
+/**
+ * Handle conversion of memory with @size beginning at @gpa for @vm. Set
+ * @shared_to_private to true for shared-to-private conversions and false
+ * otherwise.
+ *
+ * Since this is just for selftests, we keep both pieces of backing memory
+ * allocated rather than deallocating/allocating; we do the minimum of
+ * calling KVM_SET_MEMORY_ATTRIBUTES to flip the attributes on the range.
+ */
+void handle_memory_conversion(struct kvm_vm *vm, uint64_t gpa, uint64_t size,
+ bool shared_to_private)
+{
+ struct kvm_memory_attributes range;
+
+ range.address = gpa;
+ range.size = size;
+ range.attributes = shared_to_private ? KVM_MEMORY_ATTRIBUTE_PRIVATE : 0;
+ range.flags = 0;
+
+ printf("\t ... calling KVM_SET_MEMORY_ATTRIBUTES ioctl with gpa=%#lx, size=%#lx, attributes=%#llx\n", gpa, size, range.attributes);
+
+ vm_ioctl(vm, KVM_SET_MEMORY_ATTRIBUTES, &range);
+}
diff --git a/tools/testing/selftests/kvm/x86_64/tdx_shared_mem_test.c b/tools/testing/selftests/kvm/x86_64/tdx_shared_mem_test.c
new file mode 100644
index 000000000000..ba6bdc470270
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/tdx_shared_mem_test.c
@@ -0,0 +1,135 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#include <linux/kvm.h>
+#include <stdint.h>
+
+#include "kvm_util_base.h"
+#include "processor.h"
+#include "tdx/tdcall.h"
+#include "tdx/tdx.h"
+#include "tdx/tdx_util.h"
+#include "tdx/test_util.h"
+#include "test_util.h"
+
+#define TDX_SHARED_MEM_TEST_PRIVATE_GVA (0x80000000)
+#define TDX_SHARED_MEM_TEST_VADDR_SHARED_MASK BIT_ULL(30)
+#define TDX_SHARED_MEM_TEST_SHARED_GVA \
+ (TDX_SHARED_MEM_TEST_PRIVATE_GVA | \
+ TDX_SHARED_MEM_TEST_VADDR_SHARED_MASK)
+
+#define TDX_SHARED_MEM_TEST_GUEST_WRITE_VALUE (0xcafecafe)
+#define TDX_SHARED_MEM_TEST_HOST_WRITE_VALUE (0xabcdabcd)
+
+#define TDX_SHARED_MEM_TEST_INFO_PORT 0x87
+
+/*
+ * Shared variables between guest and host
+ */
+static uint64_t test_mem_private_gpa;
+static uint64_t test_mem_shared_gpa;
+
+void guest_shared_mem(void)
+{
+ uint32_t *test_mem_shared_gva =
+ (uint32_t *)TDX_SHARED_MEM_TEST_SHARED_GVA;
+
+ uint64_t placeholder;
+ uint64_t ret;
+
+ /* Map gpa as shared */
+ ret = tdg_vp_vmcall_map_gpa(test_mem_shared_gpa, PAGE_SIZE,
+ &placeholder);
+ if (ret)
+ tdx_test_fatal_with_data(ret, __LINE__);
+
+ *test_mem_shared_gva = TDX_SHARED_MEM_TEST_GUEST_WRITE_VALUE;
+
+ /* Exit so host can read shared value */
+ ret = tdg_vp_vmcall_instruction_io(TDX_SHARED_MEM_TEST_INFO_PORT, 4,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE,
+ &placeholder);
+ if (ret)
+ tdx_test_fatal_with_data(ret, __LINE__);
+
+ /* Read value written by host and send it back out for verification */
+ ret = tdg_vp_vmcall_instruction_io(TDX_SHARED_MEM_TEST_INFO_PORT, 4,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE,
+ (uint64_t *)test_mem_shared_gva);
+ if (ret)
+ tdx_test_fatal_with_data(ret, __LINE__);
+}
+
+int verify_shared_mem(void)
+{
+ struct kvm_vm *vm;
+ struct kvm_vcpu *vcpu;
+
+ vm_vaddr_t test_mem_private_gva;
+ uint32_t *test_mem_hva;
+
+ vm = td_create();
+ td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
+ vcpu = td_vcpu_add(vm, 0, guest_shared_mem);
+
+ /*
+ * Set up shared memory page for testing by first allocating as private
+ * and then mapping the same GPA again as shared. This way, the TD does
+ * not have to remap its page tables at runtime.
+ */
+ test_mem_private_gva = vm_vaddr_alloc(vm, vm->page_size,
+ TDX_SHARED_MEM_TEST_PRIVATE_GVA);
+ TEST_ASSERT_EQ(test_mem_private_gva, TDX_SHARED_MEM_TEST_PRIVATE_GVA);
+
+ test_mem_hva = addr_gva2hva(vm, test_mem_private_gva);
+ TEST_ASSERT(test_mem_hva != NULL,
+ "Guest address not found in guest memory regions\n");
+
+ test_mem_private_gpa = addr_gva2gpa(vm, test_mem_private_gva);
+ virt_pg_map_shared(vm, TDX_SHARED_MEM_TEST_SHARED_GVA,
+ test_mem_private_gpa);
+
+ test_mem_shared_gpa = test_mem_private_gpa | BIT_ULL(vm->pa_bits - 1);
+ sync_global_to_guest(vm, test_mem_private_gpa);
+ sync_global_to_guest(vm, test_mem_shared_gpa);
+
+ td_finalize(vm);
+
+ printf("Verifying shared memory accesses for TDX\n");
+
+ /* Begin guest execution; guest writes to shared memory. */
+ printf("\t ... Starting guest execution\n");
+
+ /* Handle map gpa as shared */
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_ASSERT_IO(vcpu, TDX_SHARED_MEM_TEST_INFO_PORT, 4,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
+ TEST_ASSERT_EQ(*test_mem_hva, TDX_SHARED_MEM_TEST_GUEST_WRITE_VALUE);
+
+ *test_mem_hva = TDX_SHARED_MEM_TEST_HOST_WRITE_VALUE;
+ td_vcpu_run(vcpu);
+ TDX_TEST_ASSERT_IO(vcpu, TDX_SHARED_MEM_TEST_INFO_PORT, 4,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
+ TEST_ASSERT_EQ(
+ *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset),
+ TDX_SHARED_MEM_TEST_HOST_WRITE_VALUE);
+
+ printf("\t ... PASSED\n");
+
+ kvm_vm_free(vm);
+
+ return 0;
+}
+
+int main(int argc, char **argv)
+{
+ if (!is_tdx_enabled()) {
+ printf("TDX is not supported by KVM\n"
+ "Skipping the TDX tests.\n");
+ return 0;
+ }
+
+ return verify_shared_mem();
+}
--
2.43.0.472.g3155946c3a-goog

2023-12-12 20:49:58

by Sagi Shahar

[permalink] [raw]
Subject: [RFC PATCH v5 25/29] KVM: selftests: TDX: Add support for TDG.MEM.PAGE.ACCEPT

From: Ackerley Tng <[email protected]>

Add a wrapper for the TDG.MEM.PAGE.ACCEPT TDCALL, which a TD guest uses
to accept a pending, dynamically added private page.

Signed-off-by: Ackerley Tng <[email protected]>
Signed-off-by: Ryan Afranji <[email protected]>
Signed-off-by: Sagi Shahar <[email protected]>
---
tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h | 2 ++
tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c | 5 +++++
2 files changed, 7 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
index db4cc62abb5d..b71bcea40b5c 100644
--- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
@@ -6,6 +6,7 @@
#include "kvm_util_base.h"

#define TDG_VP_INFO 1
+#define TDG_MEM_PAGE_ACCEPT 6

#define TDG_VP_VMCALL_GET_TD_VM_CALL_INFO 0x10000
#define TDG_VP_VMCALL_MAP_GPA 0x10001
@@ -38,5 +39,6 @@ uint64_t tdg_vp_info(uint64_t *rcx, uint64_t *rdx,
uint64_t *r8, uint64_t *r9,
uint64_t *r10, uint64_t *r11);
uint64_t tdg_vp_vmcall_map_gpa(uint64_t address, uint64_t size, uint64_t *data_out);
+uint64_t tdg_mem_page_accept(uint64_t gpa, uint8_t level);

#endif // SELFTEST_TDX_TDX_H
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
index 061a5c0bef34..d8c4ab635c06 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
@@ -236,3 +236,8 @@ uint64_t tdg_vp_vmcall_map_gpa(uint64_t address, uint64_t size, uint64_t *data_o
*data_out = args.r11;
return ret;
}
+
+uint64_t tdg_mem_page_accept(uint64_t gpa, uint8_t level)
+{
+ return __tdx_module_call(TDG_MEM_PAGE_ACCEPT, gpa | level, 0, 0, 0, NULL);
+}
--
2.43.0.472.g3155946c3a-goog

2023-12-12 20:49:58

by Sagi Shahar

[permalink] [raw]
Subject: [RFC PATCH v5 24/29] KVM: selftests: Expose _vm_vaddr_alloc

From: Ackerley Tng <[email protected]>

vm_vaddr_alloc always allocates memory in memslot 0. Exposing
____vm_vaddr_alloc lets users choose which memslot to allocate virtual
memory in.

Signed-off-by: Ackerley Tng <[email protected]>
Signed-off-by: Ryan Afranji <[email protected]>
Signed-off-by: Sagi Shahar <[email protected]>
---
tools/testing/selftests/kvm/include/kvm_util_base.h | 3 +++
tools/testing/selftests/kvm/lib/kvm_util.c | 6 +++---
2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index efd7ae8abb20..5dbebf5cfd07 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -561,6 +561,9 @@ void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot);
struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id);
void vm_populate_vaddr_bitmap(struct kvm_vm *vm);
vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
+vm_vaddr_t ____vm_vaddr_alloc(struct kvm_vm *vm, size_t sz,
+ vm_vaddr_t vaddr_min, vm_paddr_t paddr_min,
+ uint32_t data_memslot, bool encrypt);
vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
vm_vaddr_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
enum kvm_mem_region_type type);
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 28780fa1f0f2..d024abc5379c 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1410,9 +1410,9 @@ vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz,
* a unique set of pages, with the minimum real allocation being at least
* a page.
*/
-static vm_vaddr_t ____vm_vaddr_alloc(struct kvm_vm *vm, size_t sz,
- vm_vaddr_t vaddr_min, vm_paddr_t paddr_min,
- uint32_t data_memslot, bool encrypt)
+vm_vaddr_t ____vm_vaddr_alloc(struct kvm_vm *vm, size_t sz,
+ vm_vaddr_t vaddr_min, vm_paddr_t paddr_min,
+ uint32_t data_memslot, bool encrypt)
{
uint64_t pages = (sz >> vm->page_shift) + ((sz % vm->page_size) != 0);

--
2.43.0.472.g3155946c3a-goog

2023-12-12 20:50:49

by Sagi Shahar

[permalink] [raw]
Subject: [RFC PATCH v5 29/29] KVM: selftests: TDX: Add TDX UPM selftests for implicit conversion

From: Ackerley Tng <[email protected]>

This tests the use of guest memory without explicit MapGPA calls.

Signed-off-by: Ackerley Tng <[email protected]>
Signed-off-by: Ryan Afranji <[email protected]>
Signed-off-by: Sagi Shahar <[email protected]>
---
.../selftests/kvm/x86_64/tdx_upm_test.c | 86 +++++++++++++++++--
1 file changed, 77 insertions(+), 9 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/tdx_upm_test.c b/tools/testing/selftests/kvm/x86_64/tdx_upm_test.c
index 44671874a4f1..bfa921f125a0 100644
--- a/tools/testing/selftests/kvm/x86_64/tdx_upm_test.c
+++ b/tools/testing/selftests/kvm/x86_64/tdx_upm_test.c
@@ -149,7 +149,7 @@ enum {
* Does vcpu_run, and also manages memory conversions if requested by the TD.
*/
void vcpu_run_and_manage_memory_conversions(struct kvm_vm *vm,
- struct kvm_vcpu *vcpu)
+ struct kvm_vcpu *vcpu, bool handle_conversions)
{
for (;;) {
vcpu_run(vcpu);
@@ -163,6 +163,13 @@ void vcpu_run_and_manage_memory_conversions(struct kvm_vm *vm,
!(vm->arch.s_bit & vmcall_info->in_r12));
vmcall_info->status_code = 0;
continue;
+ } else if (handle_conversions &&
+ vcpu->run->exit_reason == KVM_EXIT_MEMORY_FAULT) {
+ handle_memory_conversion(
+ vm, vcpu->run->memory_fault.gpa,
+ vcpu->run->memory_fault.size,
+ vcpu->run->memory_fault.flags == KVM_MEMORY_EXIT_FLAG_PRIVATE);
+ continue;
} else if (
vcpu->run->exit_reason == KVM_EXIT_IO &&
vcpu->run->io.port == TDX_UPM_TEST_ACCEPT_PRINT_PORT) {
@@ -243,8 +250,53 @@ static void guest_upm_explicit(void)
tdx_test_success();
}

+static void guest_upm_implicit(void)
+{
+ struct tdx_upm_test_area *test_area_gva_private =
+ (struct tdx_upm_test_area *)TDX_UPM_TEST_AREA_GVA_PRIVATE;
+ struct tdx_upm_test_area *test_area_gva_shared =
+ (struct tdx_upm_test_area *)TDX_UPM_TEST_AREA_GVA_SHARED;
+
+ /* Check: host reading private memory does not modify guest's view */
+ fill_test_area(test_area_gva_private, PATTERN_GUEST_GENERAL);
+
+ tdx_test_report_to_user_space(SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST);
+
+ TDX_UPM_TEST_ASSERT(
+ check_test_area(test_area_gva_private, PATTERN_GUEST_GENERAL));
+
+ /* Use focus area as shared */
+ fill_focus_area(test_area_gva_shared, PATTERN_GUEST_FOCUS);
+
+ /* General areas should not be affected */
+ TDX_UPM_TEST_ASSERT(
+ check_general_areas(test_area_gva_private, PATTERN_GUEST_GENERAL));
+
+ tdx_test_report_to_user_space(SYNC_CHECK_READ_SHARED_MEMORY_FROM_HOST);
+
+ /* Check that guest has the same view of shared memory */
+ TDX_UPM_TEST_ASSERT(
+ check_focus_area(test_area_gva_shared, PATTERN_HOST_FOCUS));
+
+ /* Use focus area as private */
+ fill_focus_area(test_area_gva_private, PATTERN_GUEST_FOCUS);
+
+ /* General areas should be unaffected by remapping */
+ TDX_UPM_TEST_ASSERT(
+ check_general_areas(test_area_gva_private, PATTERN_GUEST_GENERAL));
+
+ tdx_test_report_to_user_space(SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST_AGAIN);
+
+ /* Check that guest can use private memory after focus area is remapped as private */
+ TDX_UPM_TEST_ASSERT(
+ fill_and_check(test_area_gva_private, PATTERN_GUEST_GENERAL));
+
+ tdx_test_success();
+}
+
static void run_selftest(struct kvm_vm *vm, struct kvm_vcpu *vcpu,
- struct tdx_upm_test_area *test_area_base_hva)
+ struct tdx_upm_test_area *test_area_base_hva,
+ bool implicit)
{
vcpu_run(vcpu);
TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
@@ -263,7 +315,7 @@ static void run_selftest(struct kvm_vm *vm, struct kvm_vcpu *vcpu,
TEST_ASSERT(check_test_area(test_area_base_hva, PATTERN_CONFIDENCE_CHECK),
"Host should read PATTERN_CONFIDENCE_CHECK from guest's private memory.");

- vcpu_run_and_manage_memory_conversions(vm, vcpu);
+ vcpu_run_and_manage_memory_conversions(vm, vcpu, implicit);
TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
TDX_TEST_ASSERT_IO(vcpu, TDX_TEST_REPORT_PORT, TDX_TEST_REPORT_SIZE,
TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
@@ -280,7 +332,7 @@ static void run_selftest(struct kvm_vm *vm, struct kvm_vcpu *vcpu,
TEST_ASSERT(check_focus_area(test_area_base_hva, PATTERN_HOST_FOCUS),
"Host should be able to use shared memory.");

- vcpu_run_and_manage_memory_conversions(vm, vcpu);
+ vcpu_run_and_manage_memory_conversions(vm, vcpu, implicit);
TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
TDX_TEST_ASSERT_IO(vcpu, TDX_TEST_REPORT_PORT, TDX_TEST_REPORT_SIZE,
TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
@@ -329,18 +381,20 @@ static void guest_ve_handler(struct ex_regs *regs)
TDX_UPM_TEST_ASSERT(!ret);
}

-static void verify_upm_test(void)
+static void verify_upm_test(bool implicit)
{
struct kvm_vm *vm;
struct kvm_vcpu *vcpu;

+ void *guest_code;
vm_vaddr_t test_area_gva_private;
struct tdx_upm_test_area *test_area_base_hva;
uint64_t test_area_npages;

vm = td_create();
td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
- vcpu = td_vcpu_add(vm, 0, guest_upm_explicit);
+ guest_code = implicit ? guest_upm_implicit : guest_upm_explicit;
+ vcpu = td_vcpu_add(vm, 0, guest_code);

vm_install_exception_handler(vm, VE_VECTOR, guest_ve_handler);

@@ -379,13 +433,26 @@ static void verify_upm_test(void)

td_finalize(vm);

- printf("Verifying UPM functionality: explicit MapGPA\n");
+ if (implicit)
+ printf("Verifying UPM functionality: implicit conversion\n");
+ else
+ printf("Verifying UPM functionality: explicit MapGPA\n");

- run_selftest(vm, vcpu, test_area_base_hva);
+ run_selftest(vm, vcpu, test_area_base_hva, implicit);

kvm_vm_free(vm);
}

+void verify_upm_test_explicit(void)
+{
+ verify_upm_test(false);
+}
+
+void verify_upm_test_implicit(void)
+{
+ verify_upm_test(true);
+}
+
int main(int argc, char **argv)
{
/* Disable stdout buffering */
@@ -397,5 +464,6 @@ int main(int argc, char **argv)
return 0;
}

- run_in_new_process(&verify_upm_test);
+ run_in_new_process(&verify_upm_test_explicit);
+ run_in_new_process(&verify_upm_test_implicit);
}
--
2.43.0.472.g3155946c3a-goog

2023-12-12 20:52:01

by Sagi Shahar

Subject: [RFC PATCH v5 26/29] KVM: selftests: TDX: Add support for TDG.VP.VEINFO.GET

From: Ackerley Tng <[email protected]>

Signed-off-by: Ackerley Tng <[email protected]>
Signed-off-by: Ryan Afranji <[email protected]>
Signed-off-by: Sagi Shahar <[email protected]>
---
.../selftests/kvm/include/x86_64/tdx/tdx.h | 21 +++++++++++++++++++
.../selftests/kvm/lib/x86_64/tdx/tdx.c | 19 +++++++++++++++++
2 files changed, 40 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
index b71bcea40b5c..12863a8beaae 100644
--- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
@@ -6,6 +6,7 @@
#include "kvm_util_base.h"

#define TDG_VP_INFO 1
+#define TDG_VP_VEINFO_GET 3
#define TDG_MEM_PAGE_ACCEPT 6

#define TDG_VP_VMCALL_GET_TD_VM_CALL_INFO 0x10000
@@ -41,4 +42,24 @@ uint64_t tdg_vp_info(uint64_t *rcx, uint64_t *rdx,
uint64_t tdg_vp_vmcall_map_gpa(uint64_t address, uint64_t size, uint64_t *data_out);
uint64_t tdg_mem_page_accept(uint64_t gpa, uint8_t level);

+/*
+ * Used by the #VE exception handler to gather the #VE exception
+ * info from the TDX module. This is a software only structure
+ * and not part of the TDX module/VMM ABI.
+ *
+ * Adapted from arch/x86/include/asm/tdx.h
+ */
+struct ve_info {
+ uint64_t exit_reason;
+ uint64_t exit_qual;
+ /* Guest Linear (virtual) Address */
+ uint64_t gla;
+ /* Guest Physical Address */
+ uint64_t gpa;
+ uint32_t instr_len;
+ uint32_t instr_info;
+};
+
+uint64_t tdg_vp_veinfo_get(struct ve_info *ve);
+
#endif // SELFTEST_TDX_TDX_H
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
index d8c4ab635c06..71d9f55007f7 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
@@ -241,3 +241,22 @@ uint64_t tdg_mem_page_accept(uint64_t gpa, uint8_t level)
{
return __tdx_module_call(TDG_MEM_PAGE_ACCEPT, gpa | level, 0, 0, 0, NULL);
}
+
+uint64_t tdg_vp_veinfo_get(struct ve_info *ve)
+{
+ uint64_t ret;
+ struct tdx_module_output out;
+
+ memset(&out, 0, sizeof(struct tdx_module_output));
+
+ ret = __tdx_module_call(TDG_VP_VEINFO_GET, 0, 0, 0, 0, &out);
+
+ ve->exit_reason = out.rcx;
+ ve->exit_qual = out.rdx;
+ ve->gla = out.r8;
+ ve->gpa = out.r9;
+ ve->instr_len = out.r10 & 0xffffffff;
+ ve->instr_info = out.r10 >> 32;
+
+ return ret;
+}
--
2.43.0.472.g3155946c3a-goog

2023-12-12 20:52:23

by Sagi Shahar

Subject: [RFC PATCH v5 28/29] KVM: selftests: TDX: Add TDX UPM selftest

From: Ackerley Tng <[email protected]>

This tests the use of guest memory with explicit MapGPA calls.

Signed-off-by: Ackerley Tng <[email protected]>
Signed-off-by: Ryan Afranji <[email protected]>
Signed-off-by: Sagi Shahar <[email protected]>
---
tools/testing/selftests/kvm/Makefile | 1 +
.../selftests/kvm/x86_64/tdx_upm_test.c | 401 ++++++++++++++++++
2 files changed, 402 insertions(+)
create mode 100644 tools/testing/selftests/kvm/x86_64/tdx_upm_test.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 8c0a6b395ee5..2f2669af15d6 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -157,6 +157,7 @@ TEST_GEN_PROGS_x86_64 += kvm_binary_stats_test
TEST_GEN_PROGS_x86_64 += system_counter_offset_test
TEST_GEN_PROGS_x86_64 += x86_64/tdx_vm_tests
TEST_GEN_PROGS_x86_64 += x86_64/tdx_shared_mem_test
+TEST_GEN_PROGS_x86_64 += x86_64/tdx_upm_test

# Compiled outputs used by test targets
TEST_GEN_PROGS_EXTENDED_x86_64 += x86_64/nx_huge_pages_test
diff --git a/tools/testing/selftests/kvm/x86_64/tdx_upm_test.c b/tools/testing/selftests/kvm/x86_64/tdx_upm_test.c
new file mode 100644
index 000000000000..44671874a4f1
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/tdx_upm_test.c
@@ -0,0 +1,401 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#include <asm/kvm.h>
+#include <asm/vmx.h>
+#include <linux/kvm.h>
+#include <stdbool.h>
+#include <stdint.h>
+
+#include "kvm_util_base.h"
+#include "processor.h"
+#include "tdx/tdcall.h"
+#include "tdx/tdx.h"
+#include "tdx/tdx_util.h"
+#include "tdx/test_util.h"
+#include "test_util.h"
+
+/* TDX UPM test patterns */
+#define PATTERN_CONFIDENCE_CHECK (0x11)
+#define PATTERN_HOST_FOCUS (0x22)
+#define PATTERN_GUEST_GENERAL (0x33)
+#define PATTERN_GUEST_FOCUS (0x44)
+
+/*
+ * 0x80000000 is arbitrarily selected. The selected address need not be the same
+ * as TDX_UPM_TEST_AREA_GVA_PRIVATE, but it should not overlap with selftest
+ * code or boot page.
+ */
+#define TDX_UPM_TEST_AREA_GPA (0x80000000)
+/* Test area GVA is arbitrarily selected */
+#define TDX_UPM_TEST_AREA_GVA_PRIVATE (0x90000000)
+/* Select any bit that can be used as a flag */
+#define TDX_UPM_TEST_AREA_GVA_SHARED_BIT (32)
+/*
+ * TDX_UPM_TEST_AREA_GVA_SHARED is used to map the same GPA twice into the
+ * guest, once as shared and once as private
+ */
+#define TDX_UPM_TEST_AREA_GVA_SHARED \
+ (TDX_UPM_TEST_AREA_GVA_PRIVATE | \
+ BIT_ULL(TDX_UPM_TEST_AREA_GVA_SHARED_BIT))
+
+/* The test area is 2MB in size */
+#define TDX_UPM_TEST_AREA_SIZE (2 << 20)
+/* 0th general area is 1MB in size */
+#define TDX_UPM_GENERAL_AREA_0_SIZE (1 << 20)
+/* Focus area is 40KB in size */
+#define TDX_UPM_FOCUS_AREA_SIZE (40 << 10)
+/* 1st general area is the rest of the space in the test area */
+#define TDX_UPM_GENERAL_AREA_1_SIZE \
+ (TDX_UPM_TEST_AREA_SIZE - TDX_UPM_GENERAL_AREA_0_SIZE - \
+ TDX_UPM_FOCUS_AREA_SIZE)
+
+/*
+ * The test memory area is set up as two general areas, sandwiching a focus
+ * area. The general areas act as control areas. After they are filled, they
+ * are not expected to change throughout the tests. The focus area is memory
+ * whose permissions change from private to shared and vice versa.
+ *
+ * The focus area is intentionally small, and sandwiched to test that when the
+ * focus area's permissions change, the other areas' permissions are not
+ * affected.
+ */
+struct __packed tdx_upm_test_area {
+ uint8_t general_area_0[TDX_UPM_GENERAL_AREA_0_SIZE];
+ uint8_t focus_area[TDX_UPM_FOCUS_AREA_SIZE];
+ uint8_t general_area_1[TDX_UPM_GENERAL_AREA_1_SIZE];
+};
+
+static void fill_test_area(struct tdx_upm_test_area *test_area_base,
+ uint8_t pattern)
+{
+ memset(test_area_base, pattern, sizeof(*test_area_base));
+}
+
+static void fill_focus_area(struct tdx_upm_test_area *test_area_base,
+ uint8_t pattern)
+{
+ memset(test_area_base->focus_area, pattern,
+ sizeof(test_area_base->focus_area));
+}
+
+static bool check_area(uint8_t *base, uint64_t size, uint8_t expected_pattern)
+{
+ size_t i;
+
+ for (i = 0; i < size; i++) {
+ if (base[i] != expected_pattern)
+ return false;
+ }
+
+ return true;
+}
+
+static bool check_general_areas(struct tdx_upm_test_area *test_area_base,
+ uint8_t expected_pattern)
+{
+ return (check_area(test_area_base->general_area_0,
+ sizeof(test_area_base->general_area_0),
+ expected_pattern) &&
+ check_area(test_area_base->general_area_1,
+ sizeof(test_area_base->general_area_1),
+ expected_pattern));
+}
+
+static bool check_focus_area(struct tdx_upm_test_area *test_area_base,
+ uint8_t expected_pattern)
+{
+ return check_area(test_area_base->focus_area,
+ sizeof(test_area_base->focus_area), expected_pattern);
+}
+
+static bool check_test_area(struct tdx_upm_test_area *test_area_base,
+ uint8_t expected_pattern)
+{
+ return (check_general_areas(test_area_base, expected_pattern) &&
+ check_focus_area(test_area_base, expected_pattern));
+}
+
+static bool fill_and_check(struct tdx_upm_test_area *test_area_base, uint8_t pattern)
+{
+ fill_test_area(test_area_base, pattern);
+
+ return check_test_area(test_area_base, pattern);
+}
+
+#define TDX_UPM_TEST_ASSERT(x) \
+ do { \
+ if (!(x)) \
+ tdx_test_fatal(__LINE__); \
+ } while (0)
+
+/*
+ * Shared variables between guest and host
+ */
+static struct tdx_upm_test_area *test_area_gpa_private;
+static struct tdx_upm_test_area *test_area_gpa_shared;
+
+/*
+ * Test stages for syncing with host
+ */
+enum {
+ SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST = 1,
+ SYNC_CHECK_READ_SHARED_MEMORY_FROM_HOST,
+ SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST_AGAIN,
+};
+
+#define TDX_UPM_TEST_ACCEPT_PRINT_PORT 0x87
+
+/*
+ * Runs the vCPU, and also manages memory conversions if requested by the TD.
+ */
+void vcpu_run_and_manage_memory_conversions(struct kvm_vm *vm,
+ struct kvm_vcpu *vcpu)
+{
+ for (;;) {
+ vcpu_run(vcpu);
+ if (vcpu->run->exit_reason == KVM_EXIT_TDX &&
+ vcpu->run->tdx.type == KVM_EXIT_TDX_VMCALL &&
+ vcpu->run->tdx.u.vmcall.subfunction == TDG_VP_VMCALL_MAP_GPA) {
+ struct kvm_tdx_vmcall *vmcall_info = &vcpu->run->tdx.u.vmcall;
+ uint64_t gpa = vmcall_info->in_r12 & ~vm->arch.s_bit;
+
+ handle_memory_conversion(vm, gpa, vmcall_info->in_r13,
+ !(vm->arch.s_bit & vmcall_info->in_r12));
+ vmcall_info->status_code = 0;
+ continue;
+ } else if (
+ vcpu->run->exit_reason == KVM_EXIT_IO &&
+ vcpu->run->io.port == TDX_UPM_TEST_ACCEPT_PRINT_PORT) {
+ uint64_t gpa = tdx_test_read_64bit(
+ vcpu, TDX_UPM_TEST_ACCEPT_PRINT_PORT);
+ printf("\t ... guest accepting 1 page at GPA: 0x%lx\n", gpa);
+ continue;
+ }
+
+ break;
+ }
+}
+
+static void guest_upm_explicit(void)
+{
+ uint64_t ret = 0;
+ uint64_t failed_gpa;
+
+ struct tdx_upm_test_area *test_area_gva_private =
+ (struct tdx_upm_test_area *)TDX_UPM_TEST_AREA_GVA_PRIVATE;
+ struct tdx_upm_test_area *test_area_gva_shared =
+ (struct tdx_upm_test_area *)TDX_UPM_TEST_AREA_GVA_SHARED;
+
+ /* Check: host reading private memory does not modify guest's view */
+ fill_test_area(test_area_gva_private, PATTERN_GUEST_GENERAL);
+
+ tdx_test_report_to_user_space(SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST);
+
+ TDX_UPM_TEST_ASSERT(
+ check_test_area(test_area_gva_private, PATTERN_GUEST_GENERAL));
+
+ /* Remap focus area as shared */
+ ret = tdg_vp_vmcall_map_gpa((uint64_t)test_area_gpa_shared->focus_area,
+ sizeof(test_area_gpa_shared->focus_area),
+ &failed_gpa);
+ TDX_UPM_TEST_ASSERT(!ret);
+
+ /* General areas should be unaffected by remapping */
+ TDX_UPM_TEST_ASSERT(
+ check_general_areas(test_area_gva_private, PATTERN_GUEST_GENERAL));
+
+ /*
+ * Use memory contents to confirm that the memory allocated using mmap
+ * is used as backing memory for shared memory - PATTERN_CONFIDENCE_CHECK
+ * was written by the VMM at the beginning of this test.
+ */
+ TDX_UPM_TEST_ASSERT(
+ check_focus_area(test_area_gva_shared, PATTERN_CONFIDENCE_CHECK));
+
+ /* Guest can use focus area after remapping as shared */
+ fill_focus_area(test_area_gva_shared, PATTERN_GUEST_FOCUS);
+
+ tdx_test_report_to_user_space(SYNC_CHECK_READ_SHARED_MEMORY_FROM_HOST);
+
+ /* Check that guest has the same view of shared memory */
+ TDX_UPM_TEST_ASSERT(
+ check_focus_area(test_area_gva_shared, PATTERN_HOST_FOCUS));
+
+ /* Remap focus area back to private */
+ ret = tdg_vp_vmcall_map_gpa((uint64_t)test_area_gpa_private->focus_area,
+ sizeof(test_area_gpa_private->focus_area),
+ &failed_gpa);
+ TDX_UPM_TEST_ASSERT(!ret);
+
+ /* General areas should be unaffected by remapping */
+ TDX_UPM_TEST_ASSERT(
+ check_general_areas(test_area_gva_private, PATTERN_GUEST_GENERAL));
+
+ /* Focus area should be zeroed after remapping */
+ TDX_UPM_TEST_ASSERT(check_focus_area(test_area_gva_private, 0));
+
+ tdx_test_report_to_user_space(SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST_AGAIN);
+
+ /* Check that guest can use private memory after focus area is remapped as private */
+ TDX_UPM_TEST_ASSERT(
+ fill_and_check(test_area_gva_private, PATTERN_GUEST_GENERAL));
+
+ tdx_test_success();
+}
+
+static void run_selftest(struct kvm_vm *vm, struct kvm_vcpu *vcpu,
+ struct tdx_upm_test_area *test_area_base_hva)
+{
+ vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_IO(vcpu, TDX_TEST_REPORT_PORT, TDX_TEST_REPORT_SIZE,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
+ TEST_ASSERT_EQ(*(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset),
+ SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST);
+
+ /*
+ * Check that the host reads PATTERN_CONFIDENCE_CHECK from the guest's
+ * private memory. This confirms that regular memory (userspace_addr in
+ * struct kvm_userspace_memory_region) is used to back the host's view
+ * of private memory, since PATTERN_CONFIDENCE_CHECK was written to that
+ * memory before starting the guest.
+ */
+ TEST_ASSERT(check_test_area(test_area_base_hva, PATTERN_CONFIDENCE_CHECK),
+ "Host should read PATTERN_CONFIDENCE_CHECK from guest's private memory.");
+
+ vcpu_run_and_manage_memory_conversions(vm, vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_IO(vcpu, TDX_TEST_REPORT_PORT, TDX_TEST_REPORT_SIZE,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
+ TEST_ASSERT_EQ(*(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset),
+ SYNC_CHECK_READ_SHARED_MEMORY_FROM_HOST);
+
+ TEST_ASSERT(check_focus_area(test_area_base_hva, PATTERN_GUEST_FOCUS),
+ "Host should have the same view of shared memory as guest.");
+ TEST_ASSERT(check_general_areas(test_area_base_hva, PATTERN_CONFIDENCE_CHECK),
+ "Host's view of private memory should still be backed by regular memory.");
+
+ /* Check that host can use shared memory */
+ fill_focus_area(test_area_base_hva, PATTERN_HOST_FOCUS);
+ TEST_ASSERT(check_focus_area(test_area_base_hva, PATTERN_HOST_FOCUS),
+ "Host should be able to use shared memory.");
+
+ vcpu_run_and_manage_memory_conversions(vm, vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_IO(vcpu, TDX_TEST_REPORT_PORT, TDX_TEST_REPORT_SIZE,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
+ TEST_ASSERT_EQ(*(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset),
+ SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST_AGAIN);
+
+ TEST_ASSERT(check_general_areas(test_area_base_hva, PATTERN_CONFIDENCE_CHECK),
+ "Host's view of private memory should be backed by regular memory.");
+ TEST_ASSERT(check_focus_area(test_area_base_hva, PATTERN_HOST_FOCUS),
+ "Host's view of private memory should be backed by regular memory.");
+
+ vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_SUCCESS(vcpu);
+
+ printf("\t ... PASSED\n");
+}
+
+static bool address_between(uint64_t addr, void *lo, void *hi)
+{
+ return (uint64_t)lo <= addr && addr < (uint64_t)hi;
+}
+
+static void guest_ve_handler(struct ex_regs *regs)
+{
+ uint64_t ret;
+ struct ve_info ve;
+
+ ret = tdg_vp_veinfo_get(&ve);
+ TDX_UPM_TEST_ASSERT(!ret);
+
+ /* For this test, we will only handle EXIT_REASON_EPT_VIOLATION */
+ TDX_UPM_TEST_ASSERT(ve.exit_reason == EXIT_REASON_EPT_VIOLATION);
+
+ /* Validate GPA in fault */
+ TDX_UPM_TEST_ASSERT(
+ address_between(ve.gpa,
+ test_area_gpa_private->focus_area,
+ test_area_gpa_private->general_area_1));
+
+ tdx_test_send_64bit(TDX_UPM_TEST_ACCEPT_PRINT_PORT, ve.gpa);
+
+#define MEM_PAGE_ACCEPT_LEVEL_4K 0
+#define MEM_PAGE_ACCEPT_LEVEL_2M 1
+ ret = tdg_mem_page_accept(ve.gpa, MEM_PAGE_ACCEPT_LEVEL_4K);
+ TDX_UPM_TEST_ASSERT(!ret);
+}
+
+static void verify_upm_test(void)
+{
+ struct kvm_vm *vm;
+ struct kvm_vcpu *vcpu;
+
+ vm_vaddr_t test_area_gva_private;
+ struct tdx_upm_test_area *test_area_base_hva;
+ uint64_t test_area_npages;
+
+ vm = td_create();
+ td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
+ vcpu = td_vcpu_add(vm, 0, guest_upm_explicit);
+
+ vm_install_exception_handler(vm, VE_VECTOR, guest_ve_handler);
+
+ /*
+ * Set up shared memory page for testing by first allocating as private
+ * and then mapping the same GPA again as shared. This way, the TD does
+ * not have to remap its page tables at runtime.
+ */
+ test_area_npages = TDX_UPM_TEST_AREA_SIZE / vm->page_size;
+ vm_userspace_mem_region_add(vm,
+ VM_MEM_SRC_ANONYMOUS, TDX_UPM_TEST_AREA_GPA,
+ 3, test_area_npages, KVM_MEM_PRIVATE);
+
+ test_area_gva_private = ____vm_vaddr_alloc(
+ vm, TDX_UPM_TEST_AREA_SIZE, TDX_UPM_TEST_AREA_GVA_PRIVATE,
+ TDX_UPM_TEST_AREA_GPA, 3, true);
+ TEST_ASSERT_EQ(test_area_gva_private, TDX_UPM_TEST_AREA_GVA_PRIVATE);
+
+ test_area_gpa_private = (struct tdx_upm_test_area *)
+ addr_gva2gpa(vm, test_area_gva_private);
+ virt_map_shared(vm, TDX_UPM_TEST_AREA_GVA_SHARED,
+ (uint64_t)test_area_gpa_private,
+ test_area_npages);
+ TEST_ASSERT_EQ(addr_gva2gpa(vm, TDX_UPM_TEST_AREA_GVA_SHARED),
+ (vm_paddr_t)test_area_gpa_private);
+
+ test_area_base_hva = addr_gva2hva(vm, TDX_UPM_TEST_AREA_GVA_PRIVATE);
+
+ TEST_ASSERT(fill_and_check(test_area_base_hva, PATTERN_CONFIDENCE_CHECK),
+ "Failed to mark memory intended as backing memory for TD shared memory");
+
+ sync_global_to_guest(vm, test_area_gpa_private);
+ test_area_gpa_shared = (struct tdx_upm_test_area *)
+ ((uint64_t)test_area_gpa_private | BIT_ULL(vm->pa_bits - 1));
+ sync_global_to_guest(vm, test_area_gpa_shared);
+
+ td_finalize(vm);
+
+ printf("Verifying UPM functionality: explicit MapGPA\n");
+
+ run_selftest(vm, vcpu, test_area_base_hva);
+
+ kvm_vm_free(vm);
+}
+
+int main(int argc, char **argv)
+{
+ /* Disable stdout buffering */
+ setbuf(stdout, NULL);
+
+ if (!is_tdx_enabled()) {
+ printf("TDX is not supported by KVM\n"
+ "Skipping the TDX tests.\n");
+ return 0;
+ }
+
+ run_in_new_process(&verify_upm_test);
+}
--
2.43.0.472.g3155946c3a-goog

2024-02-21 04:46:30

by Binbin Wu

Subject: Re: [RFC PATCH v5 02/29] KVM: selftests: Expose function that sets up sregs based on VM's mode



On 12/13/2023 4:46 AM, Sagi Shahar wrote:
> From: Ackerley Tng <[email protected]>
>
> This allows initializing sregs without setting vCPU registers in
> KVM.
>
> No functional change intended.

Reviewed-by: Binbin Wu <[email protected]>

>
> Signed-off-by: Ackerley Tng <[email protected]>
> Signed-off-by: Ryan Afranji <[email protected]>
> Signed-off-by: Sagi Shahar <[email protected]>
> ---
> .../selftests/kvm/include/x86_64/processor.h | 2 +
> .../selftests/kvm/lib/x86_64/processor.c | 39 ++++++++++---------
> 2 files changed, 23 insertions(+), 18 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
> index 35fcf4d78dfa..0b8855d68744 100644
> --- a/tools/testing/selftests/kvm/include/x86_64/processor.h
> +++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
> @@ -958,6 +958,8 @@ static inline struct kvm_cpuid2 *allocate_kvm_cpuid2(int nr_entries)
> void vcpu_init_cpuid(struct kvm_vcpu *vcpu, const struct kvm_cpuid2 *cpuid);
> void vcpu_set_hv_cpuid(struct kvm_vcpu *vcpu);
>
> +void vcpu_setup_mode_sregs(struct kvm_vm *vm, struct kvm_sregs *sregs);
> +
> static inline struct kvm_cpuid_entry2 *__vcpu_get_cpuid_entry(struct kvm_vcpu *vcpu,
> uint32_t function,
> uint32_t index)
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> index aef1c021c4bb..f130f78a4974 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> @@ -543,36 +543,39 @@ static void kvm_setup_tss_64bit(struct kvm_vm *vm, struct kvm_segment *segp,
> kvm_seg_fill_gdt_64bit(vm, segp);
> }
>
> -static void vcpu_setup(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
> +void vcpu_setup_mode_sregs(struct kvm_vm *vm, struct kvm_sregs *sregs)
> {
> - struct kvm_sregs sregs;
> -
> - /* Set mode specific system register values. */
> - vcpu_sregs_get(vcpu, &sregs);
> -
> - sregs.idt.limit = 0;
> + sregs->idt.limit = 0;
>
> - kvm_setup_gdt(vm, &sregs.gdt);
> + kvm_setup_gdt(vm, &sregs->gdt);
>
> switch (vm->mode) {
> case VM_MODE_PXXV48_4K_SEV:
> case VM_MODE_PXXV48_4K:
> - sregs.cr0 = X86_CR0_PE | X86_CR0_NE | X86_CR0_PG;
> - sregs.cr4 |= X86_CR4_PAE | X86_CR4_OSFXSR;
> - sregs.efer |= (EFER_LME | EFER_LMA | EFER_NX);
> -
> - kvm_seg_set_unusable(&sregs.ldt);
> - kvm_seg_set_kernel_code_64bit(vm, DEFAULT_CODE_SELECTOR, &sregs.cs);
> - kvm_seg_set_kernel_data_64bit(vm, DEFAULT_DATA_SELECTOR, &sregs.ds);
> - kvm_seg_set_kernel_data_64bit(vm, DEFAULT_DATA_SELECTOR, &sregs.es);
> - kvm_setup_tss_64bit(vm, &sregs.tr, 0x18);
> + sregs->cr0 = X86_CR0_PE | X86_CR0_NE | X86_CR0_PG;
> + sregs->cr4 |= X86_CR4_PAE | X86_CR4_OSFXSR;
> + sregs->efer |= (EFER_LME | EFER_LMA | EFER_NX);
> +
> + kvm_seg_set_unusable(&sregs->ldt);
> + kvm_seg_set_kernel_code_64bit(vm, DEFAULT_CODE_SELECTOR, &sregs->cs);
> + kvm_seg_set_kernel_data_64bit(vm, DEFAULT_DATA_SELECTOR, &sregs->ds);
> + kvm_seg_set_kernel_data_64bit(vm, DEFAULT_DATA_SELECTOR, &sregs->es);
> + kvm_setup_tss_64bit(vm, &sregs->tr, 0x18);
> break;
>
> default:
> TEST_FAIL("Unknown guest mode, mode: 0x%x", vm->mode);
> }
>
> - sregs.cr3 = vm->pgd;
> + sregs->cr3 = vm->pgd;
> +}
> +
> +static void vcpu_setup(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
> +{
> + struct kvm_sregs sregs;
> +
> + vcpu_sregs_get(vcpu, &sregs);
> + vcpu_setup_mode_sregs(vm, &sregs);
> vcpu_sregs_set(vcpu, &sregs);
> }
>


2024-02-21 08:25:10

by Binbin Wu

Subject: Re: [RFC PATCH v5 03/29] KVM: selftests: Store initial stack address in struct kvm_vcpu



On 12/13/2023 4:46 AM, Sagi Shahar wrote:
> From: Ackerley Tng <[email protected]>
>
> TDX guests' registers cannot be initialized directly using
> vcpu_regs_set(), hence the stack pointer needs to be initialized by
> the guest itself, running boot code beginning at the reset vector.
>
> We store the stack address as part of struct kvm_vcpu so that it can
> be accessed later and passed to the boot code for rsp initialization.
>
> Signed-off-by: Ackerley Tng <[email protected]>
> Signed-off-by: Ryan Afranji <[email protected]>
> Signed-off-by: Sagi Shahar <[email protected]>
> ---
> tools/testing/selftests/kvm/include/kvm_util_base.h | 1 +
> tools/testing/selftests/kvm/lib/x86_64/processor.c | 4 +++-
> 2 files changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
> index c2e5c5f25dfc..b353617fcdd1 100644
> --- a/tools/testing/selftests/kvm/include/kvm_util_base.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
> @@ -68,6 +68,7 @@ struct kvm_vcpu {
> int fd;
> struct kvm_vm *vm;
> struct kvm_run *run;
> + vm_vaddr_t initial_stack_addr;
> #ifdef __x86_64__
> struct kvm_cpuid2 *cpuid;
> #endif
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> index f130f78a4974..b6b9438e0a33 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> @@ -621,10 +621,12 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id,
> vcpu_init_cpuid(vcpu, kvm_get_supported_cpuid());
> vcpu_setup(vm, vcpu);
>
> + vcpu->initial_stack_addr = stack_vaddr;
> +
> /* Setup guest general purpose registers */
> vcpu_regs_get(vcpu, &regs);
> regs.rflags = regs.rflags | 0x2;
> - regs.rsp = stack_vaddr;
> + regs.rsp = vcpu->initial_stack_addr;

Nit: No need for this change.

Reviewed-by: Binbin Wu <[email protected]>

> regs.rip = (unsigned long) guest_code;
> vcpu_regs_set(vcpu, &regs);
>


2024-02-21 12:51:33

by Binbin Wu

Subject: Re: [RFC PATCH v5 04/29] KVM: selftests: Refactor steps in vCPU descriptor table initialization



On 12/13/2023 4:46 AM, Sagi Shahar wrote:
> From: Ackerley Tng <[email protected]>
>
> Split the vCPU descriptor table initialization process into a few
> steps and expose them:
>
> + Setting up the IDT
> + Syncing exception handlers into the guest
>
> In kvm_setup_idt(), we conditionally allocate guest memory for vm->idt
> to avoid double allocation when kvm_setup_idt() is used after
> vm_init_descriptor_tables().
>
> Signed-off-by: Ackerley Tng <[email protected]>
> Signed-off-by: Ryan Afranji <[email protected]>
> Signed-off-by: Sagi Shahar <[email protected]>
> ---
> .../selftests/kvm/include/x86_64/processor.h | 2 ++
> .../selftests/kvm/lib/x86_64/processor.c | 19 ++++++++++++++++---
> 2 files changed, 18 insertions(+), 3 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
> index 0b8855d68744..5c4e9a27d9e2 100644
> --- a/tools/testing/selftests/kvm/include/x86_64/processor.h
> +++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
> @@ -1089,6 +1089,8 @@ struct idt_entry {
> uint32_t offset2; uint32_t reserved;
> };
>
> +void kvm_setup_idt(struct kvm_vm *vm, struct kvm_dtable *dt);
> +void sync_exception_handlers_to_guest(struct kvm_vm *vm);
> void vm_init_descriptor_tables(struct kvm_vm *vm);
> void vcpu_init_descriptor_tables(struct kvm_vcpu *vcpu);
> void vm_install_exception_handler(struct kvm_vm *vm, int vector,
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> index b6b9438e0a33..566d82829da4 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> @@ -1155,19 +1155,32 @@ void vm_init_descriptor_tables(struct kvm_vm *vm)
> DEFAULT_CODE_SELECTOR);
> }
>
> +void kvm_setup_idt(struct kvm_vm *vm, struct kvm_dtable *dt)
> +{
> + if (!vm->idt)
> + vm->idt = vm_vaddr_alloc_page(vm);

In the current code, the IDT is allocated in the DATA memslot, but here,
when using vm_vaddr_alloc_page(), it will be allocated in the TEST_DATA
memslot.

Should we follow the current code and use
__vm_vaddr_alloc_page(vm, MEM_REGION_DATA) instead?

> +
> + dt->base = vm->idt;
> + dt->limit = NUM_INTERRUPTS * sizeof(struct idt_entry) - 1;
> +}
> +
> +void sync_exception_handlers_to_guest(struct kvm_vm *vm)
> +{
> + *(vm_vaddr_t *)addr_gva2hva(vm, (vm_vaddr_t)(&exception_handlers)) = vm->handlers;
> +}
> +
> void vcpu_init_descriptor_tables(struct kvm_vcpu *vcpu)
> {
> struct kvm_vm *vm = vcpu->vm;
> struct kvm_sregs sregs;
>
> vcpu_sregs_get(vcpu, &sregs);
> - sregs.idt.base = vm->idt;
> - sregs.idt.limit = NUM_INTERRUPTS * sizeof(struct idt_entry) - 1;
> + kvm_setup_idt(vcpu->vm, &sregs.idt);
> sregs.gdt.base = vm->gdt;
> sregs.gdt.limit = getpagesize() - 1;
> kvm_seg_set_kernel_data_64bit(NULL, DEFAULT_DATA_SELECTOR, &sregs.gs);
> vcpu_sregs_set(vcpu, &sregs);
> - *(vm_vaddr_t *)addr_gva2hva(vm, (vm_vaddr_t)(&exception_handlers)) = vm->handlers;
> + sync_exception_handlers_to_guest(vm);
> }
>
> void vm_install_exception_handler(struct kvm_vm *vm, int vector,


2024-02-22 09:54:42

by Yan Zhao

Subject: Re: [RFC PATCH v5 05/29] KVM: selftests: Add helper functions to create TDX VMs

On Tue, Dec 12, 2023 at 12:46:20PM -0800, Sagi Shahar wrote:
> From: Erdem Aktas <[email protected]>
> +/**
> + * Adds a vCPU to a TD (Trusted Domain) with minimum defaults. It will not set
> + * up any general purpose registers as they will be initialized by the TDX. In
> + * TDX, vCPUs RIP is set to 0xFFFFFFF0. See Intel TDX EAS Section "Initial State
> + * of Guest GPRs" for more information on vCPUs initial register values when
> + * entering the TD first time.
> + *
> + * Input Args:
> + * vm - Virtual Machine
> + * vcpuid - The id of the VCPU to add to the VM.
> + */
> +struct kvm_vcpu *td_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id, void *guest_code)
> +{
> + struct kvm_vcpu *vcpu;
> +
> + /*
> + * TD setup will not use the value of rip set in vm_vcpu_add anyway, so
> + * NULL can be used for guest_code.
> + */
> + vcpu = vm_vcpu_add(vm, vcpu_id, NULL);
Rather than calling vm_vcpu_add(), is it better to call __vm_vcpu_add(),
__vm_vaddr_alloc() for vcpu->initial_stack_addr, and vcpu_mp_state_set() only?

> + tdx_td_vcpu_init(vcpu);
> +
> + load_td_boot_parameters(addr_gpa2hva(vm, TD_BOOT_PARAMETERS_GPA),
> + vcpu, guest_code);
> +
> + return vcpu;
> +}
> +
..

> +static void td_setup_boot_code(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type)
> +{
> + vm_vaddr_t addr;
> + size_t boot_code_allocation = round_up(TD_BOOT_CODE_SIZE, PAGE_SIZE);
> + vm_paddr_t boot_code_base_gpa = FOUR_GIGABYTES_GPA - boot_code_allocation;
> + size_t npages = DIV_ROUND_UP(boot_code_allocation, PAGE_SIZE);
> +
> + vm_userspace_mem_region_add(vm, src_type, boot_code_base_gpa, 1, npages,
> + KVM_MEM_PRIVATE);
> + addr = vm_vaddr_alloc_1to1(vm, boot_code_allocation, boot_code_base_gpa, 1);
> + TEST_ASSERT_EQ(addr, boot_code_base_gpa);
> +
> + load_td_boot_code(vm);
> +}
> +
> +static size_t td_boot_parameters_size(void)
> +{
> + int max_vcpus = kvm_check_cap(KVM_CAP_MAX_VCPUS);
> + size_t total_per_vcpu_parameters_size =
> + max_vcpus * sizeof(struct td_per_vcpu_parameters);
> +
> + return sizeof(struct td_boot_parameters) + total_per_vcpu_parameters_size;
> +}
> +
> +static void td_setup_boot_parameters(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type)
> +{
> + vm_vaddr_t addr;
> + size_t boot_params_size = td_boot_parameters_size();
> + int npages = DIV_ROUND_UP(boot_params_size, PAGE_SIZE);
> + size_t total_size = npages * PAGE_SIZE;
> +
> + vm_userspace_mem_region_add(vm, src_type, TD_BOOT_PARAMETERS_GPA, 2,
> + npages, KVM_MEM_PRIVATE);
> + addr = vm_vaddr_alloc_1to1(vm, total_size, TD_BOOT_PARAMETERS_GPA, 2);
> + TEST_ASSERT_EQ(addr, TD_BOOT_PARAMETERS_GPA);
> +}
> +
> +void td_initialize(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
> + uint64_t attributes)
> +{
> + uint64_t nr_pages_required;
> +
> + tdx_enable_capabilities(vm);
> +
> + tdx_configure_memory_encryption(vm);
> +
> + tdx_td_init(vm, attributes);
> +
> + nr_pages_required = vm_nr_pages_required(VM_MODE_DEFAULT, 1, 0);
> +
> + /*
> + * Add memory (add 0th memslot) for TD. This will be used to setup the
> + * CPU (provide stack space for the CPU) and to load the elf file.
> + */
> + vm_userspace_mem_region_add(vm, src_type, 0, 0, nr_pages_required,
> + KVM_MEM_PRIVATE);
> +
> + kvm_vm_elf_load(vm, program_invocation_name);
> +
> + vm_init_descriptor_tables(vm);
> +
> + td_setup_boot_code(vm, src_type);
> + td_setup_boot_parameters(vm, src_type);
> +}
Could we define slot ID macros for slot 0, 1, 2?
e.g. BOOT_SLOT_ID_0, BOOT_SLOT_ID_1, BOOT_SLOT_ID_2.

2024-02-22 10:06:39

by Yan Zhao

Subject: Re: [RFC PATCH v5 07/29] KVM: selftests: TDX: Update load_td_memory_region for VM memory backed by guest memfd

On Tue, Dec 12, 2023 at 12:46:22PM -0800, Sagi Shahar wrote:
> From: Ackerley Tng <[email protected]>
>
> If guest memory is backed by a restricted memfd:
>
> + UPM is being used, hence the encrypted memory region has to be
> registered
> + We can avoid making a copy of guest memory before getting TDX to
> initialize the memory region
>
> Signed-off-by: Ackerley Tng <[email protected]>
> Signed-off-by: Ryan Afranji <[email protected]>
> Signed-off-by: Sagi Shahar <[email protected]>
> ---
> .../selftests/kvm/lib/x86_64/tdx/tdx_util.c | 41 +++++++++++++++----
> 1 file changed, 32 insertions(+), 9 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
> index 6b995c3f6153..063ff486fb86 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
> @@ -192,6 +192,21 @@ static void tdx_td_finalizemr(struct kvm_vm *vm)
> tdx_ioctl(vm->fd, KVM_TDX_FINALIZE_VM, 0, NULL);
> }
>
> +/*
> + * Other ioctls
> + */
> +
> +/**
> + * Register with KVM a memory region that may contain encrypted data.
> + */
> +static void register_encrypted_memory_region(
> + struct kvm_vm *vm, struct userspace_mem_region *region)
> +{
> + vm_set_memory_attributes(vm, region->region.guest_phys_addr,
> + region->region.memory_size,
> + KVM_MEMORY_ATTRIBUTE_PRIVATE);
> +}
> +
> /*
> * TD creation/setup/finalization
> */
> @@ -376,30 +391,38 @@ static void load_td_memory_region(struct kvm_vm *vm,
> if (!sparsebit_any_set(pages))
> return;
>
> +
> + if (region->region.guest_memfd != -1)
> + register_encrypted_memory_region(vm, region);
> +
> sparsebit_for_each_set_range(pages, i, j) {
> const uint64_t size_to_load = (j - i + 1) * vm->page_size;
> const uint64_t offset =
> (i - lowest_page_in_region) * vm->page_size;
> const uint64_t hva = hva_base + offset;
> const uint64_t gpa = gpa_base + offset;
> - void *source_addr;
> + void *source_addr = (void *)hva;
>
> /*
> * KVM_TDX_INIT_MEM_REGION ioctl cannot encrypt memory in place,
> * hence we have to make a copy if there's only one backing
> * memory source
> */
> - source_addr = mmap(NULL, size_to_load, PROT_READ | PROT_WRITE,
> - MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
> - TEST_ASSERT(
> - source_addr,
> - "Could not allocate memory for loading memory region");
> -
> - memcpy(source_addr, (void *)hva, size_to_load);
> + if (region->region.guest_memfd == -1) {
> + source_addr = mmap(NULL, size_to_load, PROT_READ | PROT_WRITE,
> + MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
> + TEST_ASSERT(
> + source_addr,
> + "Could not allocate memory for loading memory region");
> +
> + memcpy(source_addr, (void *)hva, size_to_load);
> + memset((void *)hva, 0, size_to_load);
> + }
>
> tdx_init_mem_region(vm, source_addr, gpa, size_to_load);
>
> - munmap(source_addr, size_to_load);
> + if (region->region.guest_memfd == -1)
> + munmap(source_addr, size_to_load);
> }

For memslots 0, 1 and 2, when guest_memfd != -1,
is it possible to also munmap(mmap_start, mmap_size) after loading finishes?

> }
>
> --
> 2.43.0.472.g3155946c3a-goog
>
>

2024-02-23 01:56:03

by Chen Yu

[permalink] [raw]
Subject: Re: [RFC PATCH v5 08/29] KVM: selftests: TDX: Add TDX lifecycle test

Hi Sagi,

On 2023-12-12 at 12:46:23 -0800, Sagi Shahar wrote:
> From: Erdem Aktas <[email protected]>
>
> Adding a test to verify TDX lifecycle by creating a TD and running a
> dummy TDG.VP.VMCALL <Instruction.IO> inside it.
>
> Signed-off-by: Erdem Aktas <[email protected]>
> Signed-off-by: Ryan Afranji <[email protected]>
> Signed-off-by: Sagi Shahar <[email protected]>
> Co-developed-by: Ackerley Tng <[email protected]>
> Signed-off-by: Ackerley Tng <[email protected]>
> ---
> tools/testing/selftests/kvm/Makefile | 4 +
> .../selftests/kvm/include/x86_64/tdx/tdcall.h | 35 ++++++++
> .../selftests/kvm/include/x86_64/tdx/tdx.h | 12 +++
> .../kvm/include/x86_64/tdx/test_util.h | 52 +++++++++++
> .../selftests/kvm/lib/x86_64/tdx/tdcall.S | 90 +++++++++++++++++++
> .../selftests/kvm/lib/x86_64/tdx/tdx.c | 27 ++++++
> .../selftests/kvm/lib/x86_64/tdx/tdx_util.c | 1 +
> .../selftests/kvm/lib/x86_64/tdx/test_util.c | 34 +++++++
> .../selftests/kvm/x86_64/tdx_vm_tests.c | 45 ++++++++++
> 9 files changed, 300 insertions(+)
> create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
> create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
> create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/tdcall.S
> create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c
> create mode 100644 tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
>
> diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> index a35150ab855f..80d4a50eeb9f 100644
> --- a/tools/testing/selftests/kvm/Makefile
> +++ b/tools/testing/selftests/kvm/Makefile
> @@ -52,6 +52,9 @@ LIBKVM_x86_64 += lib/x86_64/vmx.c
> LIBKVM_x86_64 += lib/x86_64/sev.c
> LIBKVM_x86_64 += lib/x86_64/tdx/tdx_util.c
> LIBKVM_x86_64 += lib/x86_64/tdx/td_boot.S
> +LIBKVM_x86_64 += lib/x86_64/tdx/tdcall.S
> +LIBKVM_x86_64 += lib/x86_64/tdx/tdx.c
> +LIBKVM_x86_64 += lib/x86_64/tdx/test_util.c
>
> LIBKVM_aarch64 += lib/aarch64/gic.c
> LIBKVM_aarch64 += lib/aarch64/gic_v3.c
> @@ -152,6 +155,7 @@ TEST_GEN_PROGS_x86_64 += set_memory_region_test
> TEST_GEN_PROGS_x86_64 += steal_time
> TEST_GEN_PROGS_x86_64 += kvm_binary_stats_test
> TEST_GEN_PROGS_x86_64 += system_counter_offset_test
> +TEST_GEN_PROGS_x86_64 += x86_64/tdx_vm_tests
>
> # Compiled outputs used by test targets
> TEST_GEN_PROGS_EXTENDED_x86_64 += x86_64/nx_huge_pages_test
> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
> new file mode 100644
> index 000000000000..78001bfec9c8
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
> @@ -0,0 +1,35 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/* Adapted from arch/x86/include/asm/shared/tdx.h */
> +
> +#ifndef SELFTESTS_TDX_TDCALL_H
> +#define SELFTESTS_TDX_TDCALL_H
> +
> +#include <linux/bits.h>
> +#include <linux/types.h>
> +
> +#define TDG_VP_VMCALL_INSTRUCTION_IO_READ 0
> +#define TDG_VP_VMCALL_INSTRUCTION_IO_WRITE 1
> +
> +#define TDX_HCALL_HAS_OUTPUT BIT(0)
> +
> +#define TDX_HYPERCALL_STANDARD 0
> +
> +/*
> + * Used in __tdx_hypercall() to pass down and get back registers' values of
> + * the TDCALL instruction when requesting services from the VMM.
> + *
> + * This is a software only structure and not part of the TDX module/VMM ABI.
> + */
> +struct tdx_hypercall_args {
> + u64 r10;
> + u64 r11;
> + u64 r12;
> + u64 r13;
> + u64 r14;
> + u64 r15;
> +};
> +
> +/* Used to request services from the VMM */
> +u64 __tdx_hypercall(struct tdx_hypercall_args *args, unsigned long flags);
> +
> +#endif // SELFTESTS_TDX_TDCALL_H
> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> new file mode 100644
> index 000000000000..a7161efe4ee2
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> @@ -0,0 +1,12 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +#ifndef SELFTEST_TDX_TDX_H
> +#define SELFTEST_TDX_TDX_H
> +
> +#include <stdint.h>
> +
> +#define TDG_VP_VMCALL_INSTRUCTION_IO 30
> +
> +uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
> + uint64_t write, uint64_t *data);
> +
> +#endif // SELFTEST_TDX_TDX_H
> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
> new file mode 100644
> index 000000000000..b570b6d978ff
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
> @@ -0,0 +1,52 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +#ifndef SELFTEST_TDX_TEST_UTIL_H
> +#define SELFTEST_TDX_TEST_UTIL_H
> +
> +#include <stdbool.h>
> +
> +#include "tdcall.h"
> +
> +#define TDX_TEST_SUCCESS_PORT 0x30
> +#define TDX_TEST_SUCCESS_SIZE 4
> +
> +/**
> + * Assert that tdx_test_success() was called in the guest.
> + */
> +#define TDX_TEST_ASSERT_SUCCESS(VCPU) \
> + (TEST_ASSERT( \
> + ((VCPU)->run->exit_reason == KVM_EXIT_IO) && \
> + ((VCPU)->run->io.port == TDX_TEST_SUCCESS_PORT) && \
> + ((VCPU)->run->io.size == TDX_TEST_SUCCESS_SIZE) && \
> + ((VCPU)->run->io.direction == \
> + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE), \
> + "Unexpected exit values while waiting for test completion: %u (%s) %d %d %d\n", \
> + (VCPU)->run->exit_reason, \
> + exit_reason_str((VCPU)->run->exit_reason), \
> + (VCPU)->run->io.port, (VCPU)->run->io.size, \
> + (VCPU)->run->io.direction))
> +
> +/**
> + * Run a test in a new process.
> + *
> + * There might be multiple tests running, and if one test fails, it would
> + * prevent subsequent tests from running, because TEST_ASSERT kills the
> + * calling process on failure. run_in_new_process() runs a test in a new
> + * process context and waits for it to finish or fail, so that TEST_ASSERT
> + * cannot kill the main testing process.
> + */
> +void run_in_new_process(void (*func)(void));
> +
> +/**
> + * Check whether TDX is supported by KVM.
> + */
> +bool is_tdx_enabled(void);
> +
> +/**
> + * Report test success to userspace.
> + *
> + * Use TDX_TEST_ASSERT_SUCCESS() to assert that this function was called in the
> + * guest.
> + */
> +void tdx_test_success(void);
> +
> +#endif // SELFTEST_TDX_TEST_UTIL_H
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdcall.S b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdcall.S
> new file mode 100644
> index 000000000000..df9c1ed4bb2d
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdcall.S
> @@ -0,0 +1,90 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/* Adapted from arch/x86/coco/tdx/tdcall.S */
> +
> +#define TDX_HYPERCALL_r10 0 /* offsetof(struct tdx_hypercall_args, r10) */
> +#define TDX_HYPERCALL_r11 8 /* offsetof(struct tdx_hypercall_args, r11) */
> +#define TDX_HYPERCALL_r12 16 /* offsetof(struct tdx_hypercall_args, r12) */
> +#define TDX_HYPERCALL_r13 24 /* offsetof(struct tdx_hypercall_args, r13) */
> +#define TDX_HYPERCALL_r14 32 /* offsetof(struct tdx_hypercall_args, r14) */
> +#define TDX_HYPERCALL_r15 40 /* offsetof(struct tdx_hypercall_args, r15) */
> +
> +/*
> + * Bitmasks of exposed registers (with VMM).
> + */
> +#define TDX_R10 0x400
> +#define TDX_R11 0x800
> +#define TDX_R12 0x1000
> +#define TDX_R13 0x2000
> +#define TDX_R14 0x4000
> +#define TDX_R15 0x8000
> +
> +#define TDX_HCALL_HAS_OUTPUT 0x1
> +
> +/*
> + * These registers are clobbered to hold arguments for each
> + * TDVMCALL. They are safe to expose to the VMM.
> + * Each bit in this mask represents a register ID. Bit field
> + * details can be found in TDX GHCI specification, section
> + * titled "TDCALL [TDG.VP.VMCALL] leaf".
> + */
> +#define TDVMCALL_EXPOSE_REGS_MASK ( TDX_R10 | TDX_R11 | \
> + TDX_R12 | TDX_R13 | \
> + TDX_R14 | TDX_R15 )
> +
> +.code64
> +.section .text
> +
> +.globl __tdx_hypercall
> +.type __tdx_hypercall, @function
> +__tdx_hypercall:
> + /* Set up stack frame */
> + push %rbp
> + movq %rsp, %rbp
> +
> + /* Save callee-saved GPRs as mandated by the x86_64 ABI */
> + push %r15
> + push %r14
> + push %r13
> + push %r12
> +
> + /* Mangle function call ABI into TDCALL ABI: */
> + /* Set TDCALL leaf ID (TDVMCALL (0)) in RAX */
> + xor %eax, %eax
> +
> + /* Copy hypercall registers from arg struct: */
> + movq TDX_HYPERCALL_r10(%rdi), %r10
> + movq TDX_HYPERCALL_r11(%rdi), %r11
> + movq TDX_HYPERCALL_r12(%rdi), %r12
> + movq TDX_HYPERCALL_r13(%rdi), %r13
> + movq TDX_HYPERCALL_r14(%rdi), %r14
> + movq TDX_HYPERCALL_r15(%rdi), %r15
> +
> + movl $TDVMCALL_EXPOSE_REGS_MASK, %ecx
> +
> + tdcall
> +
> + /* TDVMCALL leaf return code is in R10 */
> + movq %r10, %rax
> +
> + /* Copy hypercall result registers to arg struct if needed */
> + testq $TDX_HCALL_HAS_OUTPUT, %rsi
> + jz .Lout
> +
> + movq %r10, TDX_HYPERCALL_r10(%rdi)
> + movq %r11, TDX_HYPERCALL_r11(%rdi)
> + movq %r12, TDX_HYPERCALL_r12(%rdi)
> + movq %r13, TDX_HYPERCALL_r13(%rdi)
> + movq %r14, TDX_HYPERCALL_r14(%rdi)
> + movq %r15, TDX_HYPERCALL_r15(%rdi)
> +.Lout:
> + /* Restore callee-saved GPRs as mandated by the x86_64 ABI */
> + pop %r12
> + pop %r13
> + pop %r14
> + pop %r15
> +
> + pop %rbp
> + ret
> +
> +/* Disable executable stack */
> +.section .note.GNU-stack,"",%progbits
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> new file mode 100644
> index 000000000000..c2414523487a
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> @@ -0,0 +1,27 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +
> +#include "tdx/tdcall.h"
> +#include "tdx/tdx.h"
> +
> +uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
> + uint64_t write, uint64_t *data)
> +{
> + uint64_t ret;
> + struct tdx_hypercall_args args = {
> + .r10 = TDX_HYPERCALL_STANDARD,
> + .r11 = TDG_VP_VMCALL_INSTRUCTION_IO,
> + .r12 = size,
> + .r13 = write,
> + .r14 = port,
> + };
> +
> + if (write)
> + args.r15 = *data;
> +
> + ret = __tdx_hypercall(&args, write ? 0 : TDX_HCALL_HAS_OUTPUT);
> +
> + if (!write)
> + *data = args.r11;
> +
> + return ret;
> +}
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
> index 063ff486fb86..b302060049d5 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
> @@ -224,6 +224,7 @@ static void tdx_enable_capabilities(struct kvm_vm *vm)
> KVM_X2APIC_API_USE_32BIT_IDS |
> KVM_X2APIC_API_DISABLE_BROADCAST_QUIRK);
> vm_enable_cap(vm, KVM_CAP_SPLIT_IRQCHIP, 24);
> + vm_enable_cap(vm, KVM_CAP_MAX_VCPUS, 512);
> }
>
> static void tdx_configure_memory_encryption(struct kvm_vm *vm)
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c
> new file mode 100644
> index 000000000000..6905d0ca3877
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c
> @@ -0,0 +1,34 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +
> +#include <stdbool.h>
> +#include <stdint.h>
> +#include <stdlib.h>
> +#include <sys/wait.h>
> +#include <unistd.h>
> +
> +#include "kvm_util_base.h"
> +#include "tdx/tdx.h"
> +#include "tdx/test_util.h"
> +
> +void run_in_new_process(void (*func)(void))
> +{
> + if (fork() == 0) {
> + func();
> + exit(0);
> + }
> + wait(NULL);
> +}
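As an aside, the fork() above is unchecked and wait(NULL) reaps any child; a stricter variant (a sketch only, not what the series proposes) could look like:

```c
#include <assert.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/*
 * Stricter sketch of run_in_new_process(): checks fork() for failure,
 * waits for the specific child with waitpid(), and reports whether the
 * child exited cleanly. Returns 0 if the test passed, nonzero otherwise.
 */
static int run_in_new_process_checked(void (*func)(void))
{
	int status;
	pid_t pid = fork();

	if (pid < 0)
		return -1;
	if (pid == 0) {
		func();
		exit(0);
	}
	if (waitpid(pid, &status, 0) != pid)
		return -1;
	return (WIFEXITED(status) && WEXITSTATUS(status) == 0) ? 0 : 1;
}

static void dummy_test(void)
{
	/* A real test would run its TEST_ASSERT()s here. */
}
```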
> +
> +bool is_tdx_enabled(void)
> +{
> + return !!(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_TDX_VM));
> +}
> +
> +void tdx_test_success(void)
> +{
> + uint64_t code = 0;
> +
> + tdg_vp_vmcall_instruction_io(TDX_TEST_SUCCESS_PORT,
> + TDX_TEST_SUCCESS_SIZE,
> + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE, &code);
> +}
> diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> new file mode 100644
> index 000000000000..a18d1c9d6026
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> @@ -0,0 +1,45 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +
> +#include <signal.h>
> +#include "kvm_util_base.h"
> +#include "tdx/tdx_util.h"
> +#include "tdx/test_util.h"
> +#include "test_util.h"
> +
> +void guest_code_lifecycle(void)
> +{
> + tdx_test_success();
> +}
> +
> +void verify_td_lifecycle(void)
> +{
> + struct kvm_vm *vm;
> + struct kvm_vcpu *vcpu;
> +
> + vm = td_create();
> + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);

From a user's point of view, this lifecycle test is very useful
for measuring VM creation/destruction time, and I believe you
have an enhanced version that can specify the memory size. May I know
if there is any plan to add that part too? Maybe also allow the user
to customize the memory size rather than using a fixed size.
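For illustration, the sizing part could be as simple as turning a user-supplied byte count into the page count passed to vm_userspace_mem_region_add() in td_initialize() (a hypothetical helper, not part of this series):

```c
#include <assert.h>
#include <stdint.h>

#define TD_PAGE_SIZE		4096ULL
#define TD_DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

/*
 * Hypothetical helper: convert a requested TD memory size in bytes into
 * the whole number of pages td_initialize() would need to add.
 */
static uint64_t td_memory_pages(uint64_t bytes)
{
	return TD_DIV_ROUND_UP(bytes, TD_PAGE_SIZE);
}
```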

thanks,
Chenyu

2024-02-28 20:24:49

by Binbin Wu

[permalink] [raw]
Subject: Re: [RFC PATCH v5 05/29] KVM: selftests: Add helper functions to create TDX VMs



On 12/13/2023 4:46 AM, Sagi Shahar wrote:
> From: Erdem Aktas <[email protected]>
>
> TDX requires additional IOCTLs to initialize VM and vCPUs to add
> private memory and to finalize the VM memory. Also additional utility
> functions are provided to manipulate a TD, similar to those that
> manipulate a VM in the current selftest framework.
>
> A TD's initial register state cannot be manipulated directly by
> setting the VM's memory, hence boot code is provided at the TD's reset
> vector. This boot code takes boot parameters loaded in the TD's memory
> and sets up the TD for the selftest.
>
> Signed-off-by: Erdem Aktas <[email protected]>
> Signed-off-by: Ryan Afranji <[email protected]>
> Signed-off-by: Sagi Shahar <[email protected]>
> Co-developed-by: Ackerley Tng <[email protected]>
> Signed-off-by: Ackerley Tng <[email protected]>
> ---
> tools/testing/selftests/kvm/Makefile | 2 +
> .../kvm/include/x86_64/tdx/td_boot.h | 82 ++++
> .../kvm/include/x86_64/tdx/td_boot_asm.h | 16 +
> .../kvm/include/x86_64/tdx/tdx_util.h | 16 +
> .../selftests/kvm/lib/x86_64/tdx/td_boot.S | 101 ++++
> .../selftests/kvm/lib/x86_64/tdx/tdx_util.c | 434 ++++++++++++++++++
> 6 files changed, 651 insertions(+)
> create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/td_boot.h
> create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/td_boot_asm.h
> create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h
> create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/td_boot.S
> create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
>
> diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> index b11ac221aba4..a35150ab855f 100644
> --- a/tools/testing/selftests/kvm/Makefile
> +++ b/tools/testing/selftests/kvm/Makefile
> @@ -50,6 +50,8 @@ LIBKVM_x86_64 += lib/x86_64/svm.c
> LIBKVM_x86_64 += lib/x86_64/ucall.c
> LIBKVM_x86_64 += lib/x86_64/vmx.c
> LIBKVM_x86_64 += lib/x86_64/sev.c
> +LIBKVM_x86_64 += lib/x86_64/tdx/tdx_util.c
> +LIBKVM_x86_64 += lib/x86_64/tdx/td_boot.S
>
> LIBKVM_aarch64 += lib/aarch64/gic.c
> LIBKVM_aarch64 += lib/aarch64/gic_v3.c
> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/td_boot.h b/tools/testing/selftests/kvm/include/x86_64/tdx/td_boot.h
> new file mode 100644
> index 000000000000..148057e569d6
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/td_boot.h
> @@ -0,0 +1,82 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +#ifndef SELFTEST_TDX_TD_BOOT_H
> +#define SELFTEST_TDX_TD_BOOT_H
> +
> +#include <stdint.h>
> +#include "tdx/td_boot_asm.h"
> +
> +/*
> + * Layout for boot section (not to scale)
> + *
> + * GPA
> + * ┌─────────────────────────────┬──0x1_0000_0000 (4GB)
> + * │ Boot code trampoline │
> + * ├─────────────────────────────┼──0x0_ffff_fff0: Reset vector (16B below 4GB)
> + * │ Boot code │
> + * ├─────────────────────────────┼──td_boot will be copied here, so that the
> + * │ │ jmp to td_boot is exactly at the reset vector
> + * │ Empty space │
> + * │ │
> + * ├─────────────────────────────┤
> + * │ │
> + * │ │
> + * │ Boot parameters │
> + * │ │
> + * │ │
> + * └─────────────────────────────┴──0x0_ffff_0000: TD_BOOT_PARAMETERS_GPA
> + */
> +#define FOUR_GIGABYTES_GPA (4ULL << 30)
> +
> +/**
> + * The exact memory layout for LGDT or LIDT instructions.
> + */
> +struct __packed td_boot_parameters_dtr {
> + uint16_t limit;
> + uint32_t base;
> +};
> +
> +/**
> + * The exact layout in memory required for a ljmp, including the selector for
> + * changing code segment.
> + */
> +struct __packed td_boot_parameters_ljmp_target {
> + uint32_t eip_gva;
> + uint16_t code64_sel;
> +};
> +
> +/**
> + * Allows each vCPU to be initialized with different eip and esp.
> + */
> +struct __packed td_per_vcpu_parameters {
> + uint32_t esp_gva;
> + struct td_boot_parameters_ljmp_target ljmp_target;
> +};
> +
> +/**
> + * Boot parameters for the TD.
> + *
> + * Unlike a regular VM, we can't ask KVM to set registers such as esp, eip, etc
> + * before boot, so to run selftests, these registers' values have to be
> + * initialized by the TD.
> + *
> + * This struct is loaded in TD private memory at TD_BOOT_PARAMETERS_GPA.
> + *
> + * The TD boot code will read off parameters from this struct and set up the
> + * vcpu for executing selftests.
> + */
> +struct __packed td_boot_parameters {
> + uint32_t cr0;
> + uint32_t cr3;
> + uint32_t cr4;
> + struct td_boot_parameters_dtr gdtr;
> + struct td_boot_parameters_dtr idtr;
> + struct td_per_vcpu_parameters per_vcpu[];
> +};
> +
> +extern void td_boot(void);
> +extern void reset_vector(void);
> +extern void td_boot_code_end(void);
> +
> +#define TD_BOOT_CODE_SIZE (td_boot_code_end - td_boot)
> +
> +#endif /* SELFTEST_TDX_TD_BOOT_H */
> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/td_boot_asm.h b/tools/testing/selftests/kvm/include/x86_64/tdx/td_boot_asm.h
> new file mode 100644
> index 000000000000..0a07104f7deb
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/td_boot_asm.h
> @@ -0,0 +1,16 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +#ifndef SELFTEST_TDX_TD_BOOT_ASM_H
> +#define SELFTEST_TDX_TD_BOOT_ASM_H
> +
> +/*
> + * GPA where TD boot parameters will be loaded.
> + *
> + * TD_BOOT_PARAMETERS_GPA is arbitrarily chosen to
> + *
> + * + be within the 4GB address space
> + * + provide enough contiguous memory for the struct td_boot_parameters such
> + * that there is one struct td_per_vcpu_parameters for KVM_MAX_VCPUS
> + */
> +#define TD_BOOT_PARAMETERS_GPA 0xffff0000
> +
> +#endif // SELFTEST_TDX_TD_BOOT_ASM_H
> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h
> new file mode 100644
> index 000000000000..274b245f200b
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h
> @@ -0,0 +1,16 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +#ifndef SELFTESTS_TDX_KVM_UTIL_H
> +#define SELFTESTS_TDX_KVM_UTIL_H
> +
> +#include <stdint.h>
> +
> +#include "kvm_util_base.h"
> +
> +struct kvm_vcpu *td_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id, void *guest_code);
> +
> +struct kvm_vm *td_create(void);
> +void td_initialize(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
> + uint64_t attributes);
> +void td_finalize(struct kvm_vm *vm);
> +
> +#endif // SELFTESTS_TDX_KVM_UTIL_H
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/td_boot.S b/tools/testing/selftests/kvm/lib/x86_64/tdx/td_boot.S
> new file mode 100644
> index 000000000000..800e09264d4e
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/td_boot.S
> @@ -0,0 +1,101 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +
> +#include "tdx/td_boot_asm.h"
> +
> +/* Offsets for reading struct td_boot_parameters */
> +#define TD_BOOT_PARAMETERS_CR0 0
> +#define TD_BOOT_PARAMETERS_CR3 4
> +#define TD_BOOT_PARAMETERS_CR4 8
> +#define TD_BOOT_PARAMETERS_GDT 12
> +#define TD_BOOT_PARAMETERS_IDT 18
> +#define TD_BOOT_PARAMETERS_PER_VCPU 24
> +
> +/* Offsets for reading struct td_per_vcpu_parameters */
> +#define TD_PER_VCPU_PARAMETERS_ESP_GVA 0
> +#define TD_PER_VCPU_PARAMETERS_LJMP_TARGET 4
> +
> +#define SIZEOF_TD_PER_VCPU_PARAMETERS 10
> +
> +.code32
> +
> +.globl td_boot
> +td_boot:
> + /* In this procedure, edi is used as a temporary register */
> + cli
> +
> + /* Paging is off */
> +
> + movl $TD_BOOT_PARAMETERS_GPA, %ebx
> +
> + /*
> + * Find the address of struct td_per_vcpu_parameters for this
> + * vCPU based on esi (TDX spec: initialized with vcpu id). Put
> + * struct address into register for indirect addressing
> + */
> + movl $SIZEOF_TD_PER_VCPU_PARAMETERS, %eax
> + mul %esi
> + leal TD_BOOT_PARAMETERS_PER_VCPU(%ebx), %edi
> + addl %edi, %eax
> +
> + /* Setup stack */
> + movl TD_PER_VCPU_PARAMETERS_ESP_GVA(%eax), %esp
> +
> + /* Setup GDT */
> + leal TD_BOOT_PARAMETERS_GDT(%ebx), %edi
> + lgdt (%edi)
> +
> + /* Setup IDT */
> + leal TD_BOOT_PARAMETERS_IDT(%ebx), %edi
> + lidt (%edi)
> +
> + /*
> +	 * Set up control registers. (There is no instruction to
> +	 * mov from memory to a control register, hence we use edi
> +	 * as a scratch register.)
> + */
> + movl TD_BOOT_PARAMETERS_CR4(%ebx), %edi
> + movl %edi, %cr4
> + movl TD_BOOT_PARAMETERS_CR3(%ebx), %edi
> + movl %edi, %cr3
> + movl TD_BOOT_PARAMETERS_CR0(%ebx), %edi
> + movl %edi, %cr0
> +
> + /* Paging is on after setting the most significant bit on cr0 */
> +
> + /*
> + * Jump to selftest guest code. Far jumps read <segment
> + * selector:new eip> from <addr+4:addr>. This location has
> + * already been set up in boot parameters, and we can read boot
> + * parameters because boot code and boot parameters are loaded so
> + * that GVA and GPA are mapped 1:1.
> + */
> + ljmp *TD_PER_VCPU_PARAMETERS_LJMP_TARGET(%eax)
> +
> +.globl reset_vector
> +reset_vector:
> + jmp td_boot
> + /*
> + * Pad reset_vector to its full size of 16 bytes so that this
> + * can be loaded with the end of reset_vector aligned to GPA=4G
> + */
> + int3
> + int3
> + int3
> + int3
> + int3
> + int3
> + int3
> + int3
> + int3
> + int3
> + int3
> + int3
> + int3
> + int3
> +
> +/* Leave marker so size of td_boot code can be computed */
> +.globl td_boot_code_end
> +td_boot_code_end:
> +
> +/* Disable executable stack */
> +.section .note.GNU-stack,"",%progbits
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
> new file mode 100644
> index 000000000000..9b69c733ce01
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
> @@ -0,0 +1,434 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +
> +#define _GNU_SOURCE
> +#include <asm/kvm.h>
> +#include <asm/kvm_host.h>
> +#include <errno.h>
> +#include <linux/kvm.h>
> +#include <stdint.h>
> +#include <sys/ioctl.h>
> +
> +#include "kvm_util.h"
> +#include "test_util.h"
> +#include "tdx/td_boot.h"
> +#include "kvm_util_base.h"
> +#include "processor.h"
> +
> +/*
> + * TDX ioctls
> + */
> +
> +static char *tdx_cmd_str[] = {
> + "KVM_TDX_CAPABILITIES",
> + "KVM_TDX_INIT_VM",
> + "KVM_TDX_INIT_VCPU",
> + "KVM_TDX_INIT_MEM_REGION",
> + "KVM_TDX_FINALIZE_VM"
> +};
> +#define TDX_MAX_CMD_STR (ARRAY_SIZE(tdx_cmd_str))
> +
> +static void tdx_ioctl(int fd, int ioctl_no, uint32_t flags, void *data)
> +{
> + struct kvm_tdx_cmd tdx_cmd;
> + int r;
> +
> + TEST_ASSERT(ioctl_no < TDX_MAX_CMD_STR, "Unknown TDX CMD : %d\n",
> + ioctl_no);
> +
> + memset(&tdx_cmd, 0x0, sizeof(tdx_cmd));
> + tdx_cmd.id = ioctl_no;
> + tdx_cmd.flags = flags;
> + tdx_cmd.data = (uint64_t)data;
> +
> + r = ioctl(fd, KVM_MEMORY_ENCRYPT_OP, &tdx_cmd);
> + TEST_ASSERT(r == 0, "%s failed: %d %d", tdx_cmd_str[ioctl_no], r,
> + errno);
> +}
> +
> +#define XFEATURE_MASK_CET (XFEATURE_MASK_CET_USER | XFEATURE_MASK_CET_KERNEL)
> +
> +static void tdx_apply_cpuid_restrictions(struct kvm_cpuid2 *cpuid_data)
> +{
> + for (int i = 0; i < cpuid_data->nent; i++) {
> + struct kvm_cpuid_entry2 *e = &cpuid_data->entries[i];
> +
> + if (e->function == 0xd && e->index == 0) {
> + /*
> + * TDX module requires both XTILE_{CFG, DATA} to be set.
> + * Both bits are required for AMX to be functional.
> + */
> + if ((e->eax & XFEATURE_MASK_XTILE) !=
> + XFEATURE_MASK_XTILE) {
> + e->eax &= ~XFEATURE_MASK_XTILE;
> + }
> + }
> + if (e->function == 0xd && e->index == 1) {
> + /*
> + * TDX doesn't support LBR yet.
> + * Disable bits from the XCR0 register.
> + */
> + e->ecx &= ~XFEATURE_MASK_LBR;
> + /*
> +			 * TDX module requires both CET_{U, S} to be set even
> + * if only one is supported.
> + */
> + if (e->ecx & XFEATURE_MASK_CET)
> + e->ecx |= XFEATURE_MASK_CET;
> + }
> + }
> +}
> +
> +static void tdx_td_init(struct kvm_vm *vm, uint64_t attributes)
> +{
> + const struct kvm_cpuid2 *cpuid;
> + struct kvm_tdx_init_vm *init_vm;
> +
> + cpuid = kvm_get_supported_cpuid();
> +
> + init_vm = malloc(sizeof(*init_vm) +
> + sizeof(init_vm->cpuid.entries[0]) * cpuid->nent);
> +
> + memset(init_vm, 0, sizeof(*init_vm));
> + memcpy(&init_vm->cpuid, cpuid, kvm_cpuid2_size(cpuid->nent));
> +
> + init_vm->attributes = attributes;
> +
> + tdx_apply_cpuid_restrictions(&init_vm->cpuid);
> +
> + tdx_ioctl(vm->fd, KVM_TDX_INIT_VM, 0, init_vm);
> +}
> +
> +static void tdx_td_vcpu_init(struct kvm_vcpu *vcpu)
> +{
> + const struct kvm_cpuid2 *cpuid = kvm_get_supported_cpuid();
> +
> + vcpu_init_cpuid(vcpu, cpuid);
> + tdx_ioctl(vcpu->fd, KVM_TDX_INIT_VCPU, 0, NULL);
> +}
> +
> +static void tdx_init_mem_region(struct kvm_vm *vm, void *source_pages,
> + uint64_t gpa, uint64_t size)
> +{
> + struct kvm_tdx_init_mem_region mem_region = {
> + .source_addr = (uint64_t)source_pages,
> + .gpa = gpa,
> + .nr_pages = size / PAGE_SIZE,
> + };
> + uint32_t metadata = KVM_TDX_MEASURE_MEMORY_REGION;
> +
> + TEST_ASSERT((mem_region.nr_pages > 0) &&
> + ((mem_region.nr_pages * PAGE_SIZE) == size),
> + "Cannot add partial pages to the guest memory.\n");
> + TEST_ASSERT(((uint64_t)source_pages & (PAGE_SIZE - 1)) == 0,
> + "Source memory buffer is not page aligned\n");
> + tdx_ioctl(vm->fd, KVM_TDX_INIT_MEM_REGION, metadata, &mem_region);
> +}
> +
> +static void tdx_td_finalizemr(struct kvm_vm *vm)
> +{
> + tdx_ioctl(vm->fd, KVM_TDX_FINALIZE_VM, 0, NULL);
> +}
> +
> +/*
> + * TD creation/setup/finalization
> + */
> +
> +static void tdx_enable_capabilities(struct kvm_vm *vm)
> +{
> + int rc;
> +
> + rc = kvm_check_cap(KVM_CAP_X2APIC_API);
> + TEST_ASSERT(rc, "TDX: KVM_CAP_X2APIC_API is not supported!");
> + rc = kvm_check_cap(KVM_CAP_SPLIT_IRQCHIP);
> + TEST_ASSERT(rc, "TDX: KVM_CAP_SPLIT_IRQCHIP is not supported!");
> +
> + vm_enable_cap(vm, KVM_CAP_X2APIC_API,
> + KVM_X2APIC_API_USE_32BIT_IDS |
> + KVM_X2APIC_API_DISABLE_BROADCAST_QUIRK);
> + vm_enable_cap(vm, KVM_CAP_SPLIT_IRQCHIP, 24);
> +}
> +
> +static void tdx_configure_memory_encryption(struct kvm_vm *vm)
> +{
> +	/* Configure the shared/encrypted bits for this VM according to the TDX spec */
> + vm->arch.s_bit = 1ULL << (vm->pa_bits - 1);
> + vm->arch.c_bit = 0;
> + /* Set gpa_protected_mask so that tagging/untagging of GPAs works */
> + vm->gpa_protected_mask = vm->arch.s_bit;
> + /* This VM is protected (has memory encryption) */
> + vm->protected = true;
> +}
> +
> +static void tdx_apply_cr4_restrictions(struct kvm_sregs *sregs)
> +{
> + /* TDX spec 11.6.2: CR4 bit MCE is fixed to 1 */
> + sregs->cr4 |= X86_CR4_MCE;
> +
> + /* Set this because UEFI also sets this up, to handle XMM exceptions */
> + sregs->cr4 |= X86_CR4_OSXMMEXCPT;
> +
> + /* TDX spec 11.6.2: CR4 bit VMXE and SMXE are fixed to 0 */
> + sregs->cr4 &= ~(X86_CR4_VMXE | X86_CR4_SMXE);
> +}
> +
> +static void load_td_boot_code(struct kvm_vm *vm)
> +{
> + void *boot_code_hva = addr_gpa2hva(vm, FOUR_GIGABYTES_GPA - TD_BOOT_CODE_SIZE);
> +
> + TEST_ASSERT(td_boot_code_end - reset_vector == 16,
> + "The reset vector must be 16 bytes in size.");
> + memcpy(boot_code_hva, td_boot, TD_BOOT_CODE_SIZE);
> +}
> +
> +static void load_td_per_vcpu_parameters(struct td_boot_parameters *params,
> + struct kvm_sregs *sregs,
> + struct kvm_vcpu *vcpu,
> + void *guest_code)
> +{
> + /* Store vcpu_index to match what the TDX module would store internally */
> + static uint32_t vcpu_index;
> +
> + struct td_per_vcpu_parameters *vcpu_params = &params->per_vcpu[vcpu_index];
> +
> + TEST_ASSERT(vcpu->initial_stack_addr != 0,
> + "initial stack address should not be 0");
> + TEST_ASSERT(vcpu->initial_stack_addr <= 0xffffffff,
> + "initial stack address must fit in 32 bits");
> + TEST_ASSERT((uint64_t)guest_code <= 0xffffffff,
> + "guest_code must fit in 32 bits");
> + TEST_ASSERT(sregs->cs.selector != 0, "cs.selector should not be 0");
> +
> + vcpu_params->esp_gva = (uint32_t)(uint64_t)vcpu->initial_stack_addr;
> + vcpu_params->ljmp_target.eip_gva = (uint32_t)(uint64_t)guest_code;
> + vcpu_params->ljmp_target.code64_sel = sregs->cs.selector;
> +
> + vcpu_index++;
> +}
> +
> +static void load_td_common_parameters(struct td_boot_parameters *params,
> + struct kvm_sregs *sregs)
> +{
> + /* Set parameters! */
> + params->cr0 = sregs->cr0;
> + params->cr3 = sregs->cr3;
> + params->cr4 = sregs->cr4;
> + params->gdtr.limit = sregs->gdt.limit;
> + params->gdtr.base = sregs->gdt.base;
> + params->idtr.limit = sregs->idt.limit;
> + params->idtr.base = sregs->idt.base;
> +
> + TEST_ASSERT(params->cr0 != 0, "cr0 should not be 0");
> + TEST_ASSERT(params->cr3 != 0, "cr3 should not be 0");
> + TEST_ASSERT(params->cr4 != 0, "cr4 should not be 0");
> + TEST_ASSERT(params->gdtr.base != 0, "gdt base address should not be 0");

Do we also need to check idtr.base?


> +}
> +
> +static void load_td_boot_parameters(struct td_boot_parameters *params,
> + struct kvm_vcpu *vcpu, void *guest_code)
> +{
> + struct kvm_sregs sregs;
> +
> + /* Assemble parameters in sregs */
> + memset(&sregs, 0, sizeof(struct kvm_sregs));
> + vcpu_setup_mode_sregs(vcpu->vm, &sregs);
> + tdx_apply_cr4_restrictions(&sregs);
> + kvm_setup_idt(vcpu->vm, &sregs.idt);
> +
> + if (!params->cr0)
> + load_td_common_parameters(params, &sregs);
> +
> + load_td_per_vcpu_parameters(params, &sregs, vcpu, guest_code);
> +}
> +
> +/**
> + * Adds a vCPU to a TD (Trust Domain) with minimum defaults. It will not set
> + * up any general purpose registers, as they will be initialized by the TDX
> + * module. In TDX, each vCPU's RIP is set to 0xFFFFFFF0. See the Intel TDX EAS
> + * section "Initial State of Guest GPRs" for more information on the vCPUs'
> + * initial register values when first entering the TD.
> + *
> + * Input Args:
> + * vm - Virtual Machine
> + * vcpu_id - The id of the vCPU to add to the VM.
> + * guest_code - The guest code the vCPU will start executing.
> + */
> +struct kvm_vcpu *td_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id, void *guest_code)
> +{
> + struct kvm_vcpu *vcpu;
> +
> + /*
> + * TD setup will not use the value of rip set in vm_vcpu_add anyway, so
> + * NULL can be used for guest_code.
> + */
> + vcpu = vm_vcpu_add(vm, vcpu_id, NULL);
> +
> + tdx_td_vcpu_init(vcpu);
> +
> + load_td_boot_parameters(addr_gpa2hva(vm, TD_BOOT_PARAMETERS_GPA),
> + vcpu, guest_code);
> +
> + return vcpu;
> +}
> +
> +/**
> + * Iterate over set ranges within sparsebit @s. In each iteration,
> + * @range_begin and @range_end will take the beginning and end of the set range,
> + * which are of type sparsebit_idx_t.
> + *
> + * For example, if the range [3, 7] (inclusive) is set, within the iteration,
> + * @range_begin will take the value 3 and @range_end will take the value 7.
> + *
> + * Before using this macro, check that at least one bit is set with
> + * sparsebit_any_set(), because sparsebit_first_set() will abort if no
> + * bits are set.
> + */
> +#define sparsebit_for_each_set_range(s, range_begin, range_end) \
> + for (range_begin = sparsebit_first_set(s), \
> + range_end = sparsebit_next_clear(s, range_begin) - 1; \
> + range_begin && range_end; \
> + range_begin = sparsebit_next_set(s, range_end), \
> + range_end = sparsebit_next_clear(s, range_begin) - 1)
> +/*
> + * sparsebit_next_clear() can return 0 if [x, 2**64-1] are all set, and the -1
> + * would then cause an underflow back to 2**64 - 1. This is expected and
> + * correct.
> + *
> + * If the last range in the sparsebit is [x, y] and we try to iterate,
> + * sparsebit_next_set() will return 0, and sparsebit_next_clear() will try and
> + * find the first range, but that's correct because the condition expression
> + * would cause us to quit the loop.
> + */

Since both sev and tdx need sparsebit_for_each_set_range(),
can it be moved to a header file to avoid code duplication?
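
For reference, the iteration logic of sparsebit_for_each_set_range() can be
mirrored on a plain 64-bit bitmap. This standalone sketch (a hypothetical
helper, not part of the selftest code) collects inclusive [begin, end] runs of
set bits:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Find contiguous runs of set bits in @bitmap, storing up to @max
 * inclusive [begin, end] pairs into @ranges. Returns the number of
 * runs found. Mirrors the shape of sparsebit_for_each_set_range().
 */
static size_t bitmap_set_ranges(uint64_t bitmap, uint8_t ranges[][2], size_t max)
{
	size_t n = 0;
	int i = 0;

	while (i < 64 && n < max) {
		if (!(bitmap >> i & 1)) {
			i++;
			continue;
		}
		/* Found the start of a run; scan to its end. */
		int begin = i;

		while (i < 64 && (bitmap >> i & 1))
			i++;
		ranges[n][0] = (uint8_t)begin;
		ranges[n][1] = (uint8_t)(i - 1);
		n++;
	}
	return n;
}
```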


> +
> +static void load_td_memory_region(struct kvm_vm *vm,
> + struct userspace_mem_region *region)
> +{
> + const struct sparsebit *pages = region->protected_phy_pages;
> + const uint64_t hva_base = region->region.userspace_addr;
> + const vm_paddr_t gpa_base = region->region.guest_phys_addr;
> + const sparsebit_idx_t lowest_page_in_region = gpa_base >>
> + vm->page_shift;
> +
> + sparsebit_idx_t i;
> + sparsebit_idx_t j;
> +
> + if (!sparsebit_any_set(pages))
> + return;
> +
> + sparsebit_for_each_set_range(pages, i, j) {
> + const uint64_t size_to_load = (j - i + 1) * vm->page_size;
> + const uint64_t offset =
> + (i - lowest_page_in_region) * vm->page_size;
> + const uint64_t hva = hva_base + offset;
> + const uint64_t gpa = gpa_base + offset;
> + void *source_addr;
> +
> + /*
> + * KVM_TDX_INIT_MEM_REGION ioctl cannot encrypt memory in place,
> + * hence we have to make a copy if there's only one backing
> + * memory source
> + */
> + source_addr = mmap(NULL, size_to_load, PROT_READ | PROT_WRITE,
> + MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
> + TEST_ASSERT(
> + source_addr != MAP_FAILED,
> + "Could not allocate memory for loading memory region");
> +
> + memcpy(source_addr, (void *)hva, size_to_load);
> +
> + tdx_init_mem_region(vm, source_addr, gpa, size_to_load);
> +
> + munmap(source_addr, size_to_load);
> + }
> +}
> +
> +static void load_td_private_memory(struct kvm_vm *vm)
> +{
> + int ctr;
> + struct userspace_mem_region *region;
> +
> + hash_for_each(vm->regions.slot_hash, ctr, region, slot_node) {
> + load_td_memory_region(vm, region);
> + }
> +}
> +
> +struct kvm_vm *td_create(void)
> +{
> + struct vm_shape shape;
> +
> + shape.mode = VM_MODE_DEFAULT;
> + shape.type = KVM_X86_TDX_VM;
> + return ____vm_create(shape);
> +}
> +
> +static void td_setup_boot_code(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type)
> +{
> + vm_vaddr_t addr;
> + size_t boot_code_allocation = round_up(TD_BOOT_CODE_SIZE, PAGE_SIZE);
> + vm_paddr_t boot_code_base_gpa = FOUR_GIGABYTES_GPA - boot_code_allocation;
> + size_t npages = DIV_ROUND_UP(boot_code_allocation, PAGE_SIZE);
> +
> + vm_userspace_mem_region_add(vm, src_type, boot_code_base_gpa, 1, npages,
> + KVM_MEM_PRIVATE);
> + addr = vm_vaddr_alloc_1to1(vm, boot_code_allocation, boot_code_base_gpa, 1);
> + TEST_ASSERT_EQ(addr, boot_code_base_gpa);
> +
> + load_td_boot_code(vm);
> +}
> +
> +static size_t td_boot_parameters_size(void)
> +{
> + int max_vcpus = kvm_check_cap(KVM_CAP_MAX_VCPUS);
> + size_t total_per_vcpu_parameters_size =
> + max_vcpus * sizeof(struct td_per_vcpu_parameters);
> +
> + return sizeof(struct td_boot_parameters) + total_per_vcpu_parameters_size;
> +}
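
td_boot_parameters_size() sizes an allocation for a header struct followed by a
flexible array of per-vCPU entries. A minimal sketch of the same sizing
pattern, with illustrative struct names:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct per_vcpu_entry {
	uint32_t esp_gva;
	uint32_t eip_gva;
};

struct boot_params {
	uint64_t cr0;
	struct per_vcpu_entry per_vcpu[];	/* flexible array member */
};

/* Total bytes needed for the header plus @nr_vcpus trailing entries. */
static size_t boot_params_size(size_t nr_vcpus)
{
	return sizeof(struct boot_params) +
	       nr_vcpus * sizeof(struct per_vcpu_entry);
}
```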
> +
> +static void td_setup_boot_parameters(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type)
> +{
> + vm_vaddr_t addr;
> + size_t boot_params_size = td_boot_parameters_size();
> + int npages = DIV_ROUND_UP(boot_params_size, PAGE_SIZE);
> + size_t total_size = npages * PAGE_SIZE;
> +
> + vm_userspace_mem_region_add(vm, src_type, TD_BOOT_PARAMETERS_GPA, 2,
> + npages, KVM_MEM_PRIVATE);
> + addr = vm_vaddr_alloc_1to1(vm, total_size, TD_BOOT_PARAMETERS_GPA, 2);
> + TEST_ASSERT_EQ(addr, TD_BOOT_PARAMETERS_GPA);
> +}
> +
> +void td_initialize(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
> + uint64_t attributes)
> +{
> + uint64_t nr_pages_required;
> +
> + tdx_enable_capabilities(vm);
> +
> + tdx_configure_memory_encryption(vm);
> +
> + tdx_td_init(vm, attributes);
> +
> + nr_pages_required = vm_nr_pages_required(VM_MODE_DEFAULT, 1, 0);
> +
> + /*
> + * Add memory (add 0th memslot) for TD. This will be used to setup the
> + * CPU (provide stack space for the CPU) and to load the elf file.
> + */
> + vm_userspace_mem_region_add(vm, src_type, 0, 0, nr_pages_required,
> + KVM_MEM_PRIVATE);
> +
> + kvm_vm_elf_load(vm, program_invocation_name);
> +
> + vm_init_descriptor_tables(vm);
> +
> + td_setup_boot_code(vm, src_type);
> + td_setup_boot_parameters(vm, src_type);
> +}
> +
> +void td_finalize(struct kvm_vm *vm)
> +{
> + sync_exception_handlers_to_guest(vm);
> +
> + load_td_private_memory(vm);
> +
> + tdx_td_finalizemr(vm);
> +}


2024-02-29 13:31:34

by Binbin Wu

[permalink] [raw]
Subject: Re: [RFC PATCH v5 09/29] KVM: selftests: TDX: Add report_fatal_error test



On 12/13/2023 4:46 AM, Sagi Shahar wrote:
> The test checks report_fatal_error functionality.
>
> Signed-off-by: Sagi Shahar <[email protected]>
> Signed-off-by: Ackerley Tng <[email protected]>
> Signed-off-by: Ryan Afranji <[email protected]>
> ---
> .../selftests/kvm/include/x86_64/tdx/tdx.h | 6 ++-
> .../kvm/include/x86_64/tdx/tdx_util.h | 1 +
> .../kvm/include/x86_64/tdx/test_util.h | 19 ++++++++
> .../selftests/kvm/lib/x86_64/tdx/tdx.c | 39 ++++++++++++++++
> .../selftests/kvm/lib/x86_64/tdx/tdx_util.c | 12 +++++
> .../selftests/kvm/lib/x86_64/tdx/test_util.c | 10 +++++
> .../selftests/kvm/x86_64/tdx_vm_tests.c | 45 +++++++++++++++++++
> 7 files changed, 131 insertions(+), 1 deletion(-)
>
> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> index a7161efe4ee2..1340c1070002 100644
> --- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> @@ -3,10 +3,14 @@
> #define SELFTEST_TDX_TDX_H
>
> #include <stdint.h>
> +#include "kvm_util_base.h"
>
> -#define TDG_VP_VMCALL_INSTRUCTION_IO 30
> +#define TDG_VP_VMCALL_REPORT_FATAL_ERROR 0x10003
>
> +#define TDG_VP_VMCALL_INSTRUCTION_IO 30
> +void handle_userspace_tdg_vp_vmcall_exit(struct kvm_vcpu *vcpu);
> uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
> uint64_t write, uint64_t *data);
> +void tdg_vp_vmcall_report_fatal_error(uint64_t error_code, uint64_t data_gpa);
>
> #endif // SELFTEST_TDX_TDX_H
> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h
> index 274b245f200b..32dd6b8fda46 100644
> --- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h
> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h
> @@ -12,5 +12,6 @@ struct kvm_vm *td_create(void);
> void td_initialize(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
> uint64_t attributes);
> void td_finalize(struct kvm_vm *vm);
> +void td_vcpu_run(struct kvm_vcpu *vcpu);
>
> #endif // SELFTESTS_TDX_KVM_UTIL_H
> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
> index b570b6d978ff..6d69921136bd 100644
> --- a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
> @@ -49,4 +49,23 @@ bool is_tdx_enabled(void);
> */
> void tdx_test_success(void);
>
> +/**
> + * Report an error with @error_code to userspace.
> + *
> + * Return value from tdg_vp_vmcall_report_fatal_error is ignored since execution
> + * is not expected to continue beyond this point.
> + */
> +void tdx_test_fatal(uint64_t error_code);
> +
> +/**
> + * Report an error with @error_code to userspace.
> + *
> + * @data_gpa may point to an optional shared guest memory holding the error
> + * string.
> + *
> + * Return value from tdg_vp_vmcall_report_fatal_error is ignored since execution
> + * is not expected to continue beyond this point.
> + */
> +void tdx_test_fatal_with_data(uint64_t error_code, uint64_t data_gpa);
> +
> #endif // SELFTEST_TDX_TEST_UTIL_H
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> index c2414523487a..b854c3aa34ff 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> @@ -1,8 +1,31 @@
> // SPDX-License-Identifier: GPL-2.0-only
>
> +#include <string.h>
> +
> #include "tdx/tdcall.h"
> #include "tdx/tdx.h"
>
> +void handle_userspace_tdg_vp_vmcall_exit(struct kvm_vcpu *vcpu)
> +{
> + struct kvm_tdx_vmcall *vmcall_info = &vcpu->run->tdx.u.vmcall;
> + uint64_t vmcall_subfunction = vmcall_info->subfunction;
> +
> + switch (vmcall_subfunction) {
> + case TDG_VP_VMCALL_REPORT_FATAL_ERROR:
> + vcpu->run->exit_reason = KVM_EXIT_SYSTEM_EVENT;
> + vcpu->run->system_event.ndata = 3;
> + vcpu->run->system_event.data[0] =
> + TDG_VP_VMCALL_REPORT_FATAL_ERROR;
> + vcpu->run->system_event.data[1] = vmcall_info->in_r12;
> + vcpu->run->system_event.data[2] = vmcall_info->in_r13;
> + vmcall_info->status_code = 0;
> + break;
> + default:
> + TEST_FAIL("TD VMCALL subfunction %lu is unsupported.\n",
> + vmcall_subfunction);
> + }
> +}
> +
> uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
> uint64_t write, uint64_t *data)
> {
> @@ -25,3 +48,19 @@ uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
>
> return ret;
> }
> +
> +void tdg_vp_vmcall_report_fatal_error(uint64_t error_code, uint64_t data_gpa)
> +{
> + struct tdx_hypercall_args args;
> +
> + memset(&args, 0, sizeof(struct tdx_hypercall_args));
> +
> + if (data_gpa)
> + error_code |= 0x8000000000000000;
> +
> + args.r11 = TDG_VP_VMCALL_REPORT_FATAL_ERROR;
> + args.r12 = error_code;
> + args.r13 = data_gpa;
> +
> + __tdx_hypercall(&args, 0);
> +}
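
tdg_vp_vmcall_report_fatal_error() above sets bit 63 of the error code when a
data GPA is supplied. Assuming the GHCI layout described in the guest code
later in this thread (bits 31:0 zero, bits 62:32 TD-specific extended code,
bit 63 flags a shared-memory buffer), the encoding can be sketched as:

```c
#include <assert.h>
#include <stdint.h>

#define GHCI_FATAL_GPA_VALID	(1ULL << 63)

/*
 * Build a REPORT_FATAL_ERROR code: the TD-specific extended error code
 * occupies bits 62:32, and bit 63 is set when @data_gpa points at a
 * shared-memory buffer with additional information.
 */
static uint64_t encode_fatal_error(uint32_t extended_code, uint64_t data_gpa)
{
	uint64_t code = ((uint64_t)extended_code << 32) & ~GHCI_FATAL_GPA_VALID;

	if (data_gpa)
		code |= GHCI_FATAL_GPA_VALID;
	return code;
}
```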
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
> index b302060049d5..d745bb6287c1 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
> @@ -10,6 +10,7 @@
>
> #include "kvm_util.h"
> #include "test_util.h"
> +#include "tdx/tdx.h"
> #include "tdx/td_boot.h"
> #include "kvm_util_base.h"
> #include "processor.h"
> @@ -519,3 +520,14 @@ void td_finalize(struct kvm_vm *vm)
>
> tdx_td_finalizemr(vm);
> }
> +
> +void td_vcpu_run(struct kvm_vcpu *vcpu)
> +{
> + vcpu_run(vcpu);
> +
> + /* Handle TD VMCALLs that require userspace handling. */
> + if (vcpu->run->exit_reason == KVM_EXIT_TDX &&
> + vcpu->run->tdx.type == KVM_EXIT_TDX_VMCALL) {
> + handle_userspace_tdg_vp_vmcall_exit(vcpu);
> + }
> +}
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c
> index 6905d0ca3877..7f3cd8089cea 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c
> @@ -32,3 +32,13 @@ void tdx_test_success(void)
> TDX_TEST_SUCCESS_SIZE,
> TDG_VP_VMCALL_INSTRUCTION_IO_WRITE, &code);
> }
> +
> +void tdx_test_fatal_with_data(uint64_t error_code, uint64_t data_gpa)
> +{
> + tdg_vp_vmcall_report_fatal_error(error_code, data_gpa);
> +}
> +
> +void tdx_test_fatal(uint64_t error_code)
> +{
> + tdx_test_fatal_with_data(error_code, 0);
> +}
> diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> index a18d1c9d6026..8638c7bbedaa 100644
> --- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> +++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> @@ -2,6 +2,7 @@
>
> #include <signal.h>
> #include "kvm_util_base.h"
> +#include "tdx/tdx.h"
> #include "tdx/tdx_util.h"
> #include "tdx/test_util.h"
> #include "test_util.h"
> @@ -30,6 +31,49 @@ void verify_td_lifecycle(void)
> printf("\t ... PASSED\n");
> }
>
> +void guest_code_report_fatal_error(void)
> +{
> + uint64_t err;
> +
> + /*
> + * Note: err should follow the GHCI spec definition:
> + * bits 31:0 should be set to 0.
> + * bits 62:32 are used for TD-specific extended error code.
> + * bit 63 is used to mark additional information in shared memory.
> + */
> + err = 0x0BAAAAAD00000000;
> + if (err)
> + tdx_test_fatal(err);

Nit:
Since the error code is a constant, there is no need for the if statement.
Could we even drop the variable?


> +
> + tdx_test_success();
> +}
> +void verify_report_fatal_error(void)
> +{
> + struct kvm_vm *vm;
> + struct kvm_vcpu *vcpu;
> +
> + vm = td_create();
> + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
> + vcpu = td_vcpu_add(vm, 0, guest_code_report_fatal_error);
> + td_finalize(vm);
> +
> + printf("Verifying report_fatal_error:\n");
> +
> + td_vcpu_run(vcpu);
> +
> + TEST_ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_SYSTEM_EVENT);
> + TEST_ASSERT_EQ(vcpu->run->system_event.ndata, 3);
> + TEST_ASSERT_EQ(vcpu->run->system_event.data[0], TDG_VP_VMCALL_REPORT_FATAL_ERROR);
> + TEST_ASSERT_EQ(vcpu->run->system_event.data[1], 0x0BAAAAAD00000000);
> + TEST_ASSERT_EQ(vcpu->run->system_event.data[2], 0);
> +
> + vcpu_run(vcpu);
> + TDX_TEST_ASSERT_SUCCESS(vcpu);
> +
> + kvm_vm_free(vm);
> + printf("\t ... PASSED\n");
> +}
> +
> int main(int argc, char **argv)
> {
> setbuf(stdout, NULL);
> @@ -40,6 +84,7 @@ int main(int argc, char **argv)
> }
>
> run_in_new_process(&verify_td_lifecycle);
> + run_in_new_process(&verify_report_fatal_error);
>
> return 0;
> }


2024-02-29 14:00:22

by Binbin Wu

[permalink] [raw]
Subject: Re: [RFC PATCH v5 10/29] KVM: selftests: TDX: Adding test case for TDX port IO



On 12/13/2023 4:46 AM, Sagi Shahar wrote:
> From: Erdem Aktas <[email protected]>
>
> Verifies TDVMCALL<INSTRUCTION.IO> READ and WRITE operations.
>
> Signed-off-by: Erdem Aktas <[email protected]>
> Signed-off-by: Sagi Shahar <[email protected]>
> Signed-off-by: Ackerley Tng <[email protected]>
> Signed-off-by: Ryan Afranji <[email protected]>
> ---
> .../kvm/include/x86_64/tdx/test_util.h | 34 ++++++++
> .../selftests/kvm/x86_64/tdx_vm_tests.c | 82 +++++++++++++++++++
> 2 files changed, 116 insertions(+)

One nit comment below.

Reviewed-by: Binbin Wu <[email protected]>

>
> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
> index 6d69921136bd..95a5d5be7f0b 100644
> --- a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
> @@ -9,6 +9,40 @@
> #define TDX_TEST_SUCCESS_PORT 0x30
> #define TDX_TEST_SUCCESS_SIZE 4
>
> +/**
> + * Assert that some IO operation involving tdg_vp_vmcall_instruction_io() was
> + * called in the guest.
> + */
> +#define TDX_TEST_ASSERT_IO(VCPU, PORT, SIZE, DIR) \
> + do { \
> + TEST_ASSERT((VCPU)->run->exit_reason == KVM_EXIT_IO, \
> + "Got exit_reason other than KVM_EXIT_IO: %u (%s)\n", \
> + (VCPU)->run->exit_reason, \
> + exit_reason_str((VCPU)->run->exit_reason)); \
> + \
> + TEST_ASSERT(((VCPU)->run->exit_reason == KVM_EXIT_IO) && \
> + ((VCPU)->run->io.port == (PORT)) && \
> + ((VCPU)->run->io.size == (SIZE)) && \
> + ((VCPU)->run->io.direction == (DIR)), \
> + "Got unexpected IO exit values: %u (%s) %d %d %d\n", \
> + (VCPU)->run->exit_reason, \
> + exit_reason_str((VCPU)->run->exit_reason), \
> + (VCPU)->run->io.port, (VCPU)->run->io.size, \
> + (VCPU)->run->io.direction); \
> + } while (0)
> +
> +/**
> + * Check and report if there was some failure in the guest, either an exception
> + * like a triple fault, or if a tdx_test_fatal() was hit.
> + */
> +#define TDX_TEST_CHECK_GUEST_FAILURE(VCPU) \
> + do { \
> + if ((VCPU)->run->exit_reason == KVM_EXIT_SYSTEM_EVENT) \
> + TEST_FAIL("Guest reported error. error code: %lld (0x%llx)\n", \
> + (VCPU)->run->system_event.data[1], \
> + (VCPU)->run->system_event.data[1]); \
> + } while (0)
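
Both macros above wrap their statements in the do { ... } while (0) idiom so
that a multi-statement macro expands to a single statement. A standalone
illustration with a hypothetical macro:

```c
#include <assert.h>

static int calls;

#define COUNT_TWICE() \
	do { \
		calls++; \
		calls++; \
	} while (0)

/* Safe inside an unbraced if/else, unlike a bare { } block. */
static int run(int cond)
{
	calls = 0;
	if (cond)
		COUNT_TWICE();
	else
		calls = -1;
	return calls;
}
```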
> +
> /**
> * Assert that tdx_test_success() was called in the guest.
> */
> diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> index 8638c7bbedaa..75467c407ca7 100644
> --- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> +++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> @@ -2,6 +2,7 @@
>
> #include <signal.h>
> #include "kvm_util_base.h"
> +#include "tdx/tdcall.h"
> #include "tdx/tdx.h"
> #include "tdx/tdx_util.h"
> #include "tdx/test_util.h"
> @@ -74,6 +75,86 @@ void verify_report_fatal_error(void)
> printf("\t ... PASSED\n");
> }
>
> +#define TDX_IOEXIT_TEST_PORT 0x50
> +
> +/*
> + * Verifies IO functionality by writing a |value| to a predefined port.
> + * Verifies that the value read back from the same port is |value| + 1.
> + * If all the checks pass, reports success via tdx_test_success().
> + */
> +void guest_ioexit(void)
> +{
> + uint64_t data_out, data_in, delta;
> + uint64_t ret;
> +
> + data_out = 0xAB;
> + ret = tdg_vp_vmcall_instruction_io(TDX_IOEXIT_TEST_PORT, 1,
> + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE,
> + &data_out);
> + if (ret)
> + tdx_test_fatal(ret);
> +
> + ret = tdg_vp_vmcall_instruction_io(TDX_IOEXIT_TEST_PORT, 1,
> + TDG_VP_VMCALL_INSTRUCTION_IO_READ,
> + &data_in);
> + if (ret)
> + tdx_test_fatal(ret);
> +
> + delta = data_in - data_out;
> + if (delta != 1)

Nit: Is it more direct to compare data_in with 0xAC?

> + tdx_test_fatal(ret);
> +
> + tdx_test_success();
> +}
> +
> +void verify_td_ioexit(void)
> +{
> + struct kvm_vm *vm;
> + struct kvm_vcpu *vcpu;
> +
> + uint32_t port_data;
> +
> + vm = td_create();
> + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
> + vcpu = td_vcpu_add(vm, 0, guest_ioexit);
> + td_finalize(vm);
> +
> + printf("Verifying TD IO Exit:\n");
> +
> + /* Wait for the guest to do an IO write */
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + TDX_TEST_ASSERT_IO(vcpu, TDX_IOEXIT_TEST_PORT, 1,
> + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
> + port_data = *(uint8_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
> +
> + printf("\t ... IO WRITE: OK\n");
> +
> + /*
> + * Wait for the guest to do an IO read. Provide the previously written
> + * data + 1 back to the guest.
> + */
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + TDX_TEST_ASSERT_IO(vcpu, TDX_IOEXIT_TEST_PORT, 1,
> + TDG_VP_VMCALL_INSTRUCTION_IO_READ);
> + *(uint8_t *)((void *)vcpu->run + vcpu->run->io.data_offset) = port_data + 1;
> +
> + printf("\t ... IO READ: OK\n");
> +
> + /*
> + * Wait for the guest to complete execution successfully. The read
> + * value is checked within the guest.
> + */
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + TDX_TEST_ASSERT_SUCCESS(vcpu);
> +
> + printf("\t ... IO verify read/write values: OK\n");
> + kvm_vm_free(vm);
> + printf("\t ... PASSED\n");
> +}
> +
> int main(int argc, char **argv)
> {
> setbuf(stdout, NULL);
> @@ -85,6 +166,7 @@ int main(int argc, char **argv)
>
> run_in_new_process(&verify_td_lifecycle);
> run_in_new_process(&verify_report_fatal_error);
> + run_in_new_process(&verify_td_ioexit);
>
> return 0;
> }


2024-02-29 15:36:51

by Binbin Wu

[permalink] [raw]
Subject: Re: [RFC PATCH v5 06/29] KVM: selftests: TDX: Use KVM_TDX_CAPABILITIES to validate TDs' attribute configuration



On 12/13/2023 4:46 AM, Sagi Shahar wrote:
> From: Ackerley Tng <[email protected]>
>
> This also exercises the KVM_TDX_CAPABILITIES ioctl.
>
> Suggested-by: Isaku Yamahata <[email protected]>
> Signed-off-by: Ackerley Tng <[email protected]>
> Signed-off-by: Ryan Afranji <[email protected]>
> Signed-off-by: Sagi Shahar <[email protected]>
> ---
> .../selftests/kvm/lib/x86_64/tdx/tdx_util.c | 69 ++++++++++++++++++-
> 1 file changed, 66 insertions(+), 3 deletions(-)

Nit: Can also dump 'supported_gpaw' in tdx_read_capabilities().

Reviewed-by: Binbin Wu <[email protected]>

>
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
> index 9b69c733ce01..6b995c3f6153 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
> @@ -27,10 +27,9 @@ static char *tdx_cmd_str[] = {
> };
> #define TDX_MAX_CMD_STR (ARRAY_SIZE(tdx_cmd_str))
>
> -static void tdx_ioctl(int fd, int ioctl_no, uint32_t flags, void *data)
> +static int _tdx_ioctl(int fd, int ioctl_no, uint32_t flags, void *data)
> {
> struct kvm_tdx_cmd tdx_cmd;
> - int r;
>
> TEST_ASSERT(ioctl_no < TDX_MAX_CMD_STR, "Unknown TDX CMD : %d\n",
> ioctl_no);
> @@ -40,11 +39,58 @@ static void tdx_ioctl(int fd, int ioctl_no, uint32_t flags, void *data)
> tdx_cmd.flags = flags;
> tdx_cmd.data = (uint64_t)data;
>
> - r = ioctl(fd, KVM_MEMORY_ENCRYPT_OP, &tdx_cmd);
> + return ioctl(fd, KVM_MEMORY_ENCRYPT_OP, &tdx_cmd);
> +}
> +
> +static void tdx_ioctl(int fd, int ioctl_no, uint32_t flags, void *data)
> +{
> + int r;
> +
> + r = _tdx_ioctl(fd, ioctl_no, flags, data);
> TEST_ASSERT(r == 0, "%s failed: %d %d", tdx_cmd_str[ioctl_no], r,
> errno);
> }
>
> +static struct kvm_tdx_capabilities *tdx_read_capabilities(struct kvm_vm *vm)
> +{
> + int i;
> + int rc = -1;
> + int nr_cpuid_configs = 4;
> + struct kvm_tdx_capabilities *tdx_cap = NULL;
> +
> + do {
> + nr_cpuid_configs *= 2;
> +
> + tdx_cap = realloc(
> + tdx_cap, sizeof(*tdx_cap) +
> + nr_cpuid_configs * sizeof(*tdx_cap->cpuid_configs));
> + TEST_ASSERT(tdx_cap != NULL,
> + "Could not allocate memory for tdx capability nr_cpuid_configs %d\n",
> + nr_cpuid_configs);
> +
> + tdx_cap->nr_cpuid_configs = nr_cpuid_configs;
> + rc = _tdx_ioctl(vm->fd, KVM_TDX_CAPABILITIES, 0, tdx_cap);
> + } while (rc < 0 && errno == E2BIG);
> +
> + TEST_ASSERT(rc == 0, "KVM_TDX_CAPABILITIES failed: %d %d",
> + rc, errno);
> +
> + pr_debug("tdx_cap: attrs: fixed0 0x%016llx fixed1 0x%016llx\n"
> + "tdx_cap: xfam fixed0 0x%016llx fixed1 0x%016llx\n",
> + tdx_cap->attrs_fixed0, tdx_cap->attrs_fixed1,
> + tdx_cap->xfam_fixed0, tdx_cap->xfam_fixed1);
> +
> + for (i = 0; i < tdx_cap->nr_cpuid_configs; i++) {
> + const struct kvm_tdx_cpuid_config *config =
> + &tdx_cap->cpuid_configs[i];
> + pr_debug("cpuid config[%d]: leaf 0x%x sub_leaf 0x%x eax 0x%08x ebx 0x%08x ecx 0x%08x edx 0x%08x\n",
> + i, config->leaf, config->sub_leaf,
> + config->eax, config->ebx, config->ecx, config->edx);
> + }
> +
> + return tdx_cap;
> +}
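
tdx_read_capabilities() above uses a common grow-and-retry pattern: double the
buffer and reissue the ioctl while the kernel reports E2BIG. The same shape,
sketched here against a stub query function standing in for the
KVM_TDX_CAPABILITIES ioctl (stub name and sizes are illustrative):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <stdlib.h>

#define NEEDED_ENTRIES 23	/* arbitrary size the "kernel" requires */

/* Stub for the ioctl: fails with E2BIG unless @nr_entries is enough. */
static int stub_query(int *buf, size_t nr_entries)
{
	if (nr_entries < NEEDED_ENTRIES) {
		errno = E2BIG;
		return -1;
	}
	for (size_t i = 0; i < NEEDED_ENTRIES; i++)
		buf[i] = (int)i;
	return 0;
}

/* Grow-and-retry loop mirroring tdx_read_capabilities(). */
static int *query_with_retry(size_t *out_entries)
{
	size_t nr_entries = 4;
	int *buf = NULL;
	int rc;

	do {
		nr_entries *= 2;
		buf = realloc(buf, nr_entries * sizeof(*buf));
		assert(buf);
		rc = stub_query(buf, nr_entries);
	} while (rc < 0 && errno == E2BIG);

	assert(rc == 0);
	*out_entries = nr_entries;
	return buf;
}
```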
> +
> #define XFEATURE_MASK_CET (XFEATURE_MASK_CET_USER | XFEATURE_MASK_CET_KERNEL)
>
> static void tdx_apply_cpuid_restrictions(struct kvm_cpuid2 *cpuid_data)
> @@ -78,6 +124,21 @@ static void tdx_apply_cpuid_restrictions(struct kvm_cpuid2 *cpuid_data)
> }
> }
>
> +static void tdx_check_attributes(struct kvm_vm *vm, uint64_t attributes)
> +{
> + struct kvm_tdx_capabilities *tdx_cap;
> +
> + tdx_cap = tdx_read_capabilities(vm);
> +
> + /* TDX spec: any bits 0 in attrs_fixed0 must be 0 in attributes */
> + TEST_ASSERT_EQ(attributes & ~tdx_cap->attrs_fixed0, 0);
> +
> + /* TDX spec: any bits 1 in attrs_fixed1 must be 1 in attributes */
> + TEST_ASSERT_EQ(attributes & tdx_cap->attrs_fixed1, tdx_cap->attrs_fixed1);
> +
> + free(tdx_cap);
> +}
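
The fixed0/fixed1 convention checked here works like the VMX CR-fixed MSRs: a
0 bit in attrs_fixed0 forces the attribute bit to 0, and a 1 bit in
attrs_fixed1 forces it to 1. A standalone sketch of the validity check:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * An attribute word is valid iff it sets no bit that fixed0 forces to 0
 * and sets every bit that fixed1 forces to 1.
 */
static bool attrs_valid(uint64_t attrs, uint64_t fixed0, uint64_t fixed1)
{
	return (attrs & ~fixed0) == 0 && (attrs & fixed1) == fixed1;
}
```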
> +
> static void tdx_td_init(struct kvm_vm *vm, uint64_t attributes)
> {
> const struct kvm_cpuid2 *cpuid;
> @@ -91,6 +152,8 @@ static void tdx_td_init(struct kvm_vm *vm, uint64_t attributes)
> memset(init_vm, 0, sizeof(*init_vm));
> memcpy(&init_vm->cpuid, cpuid, kvm_cpuid2_size(cpuid->nent));
>
> + tdx_check_attributes(vm, attributes);
> +
> init_vm->attributes = attributes;
>
> tdx_apply_cpuid_restrictions(&init_vm->cpuid);


2024-03-01 05:28:42

by Yan Zhao

[permalink] [raw]
Subject: Re: [RFC PATCH v5 08/29] KVM: selftests: TDX: Add TDX lifecycle test

> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdcall.S b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdcall.S
> new file mode 100644
> index 000000000000..df9c1ed4bb2d
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdcall.S
> @@ -0,0 +1,90 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/* Adapted from arch/x86/coco/tdx/tdcall.S */
> +
> +#define TDX_HYPERCALL_r10 0 /* offsetof(struct tdx_hypercall_args, r10) */
> +#define TDX_HYPERCALL_r11 8 /* offsetof(struct tdx_hypercall_args, r11) */
> +#define TDX_HYPERCALL_r12 16 /* offsetof(struct tdx_hypercall_args, r12) */
> +#define TDX_HYPERCALL_r13 24 /* offsetof(struct tdx_hypercall_args, r13) */
> +#define TDX_HYPERCALL_r14 32 /* offsetof(struct tdx_hypercall_args, r14) */
> +#define TDX_HYPERCALL_r15 40 /* offsetof(struct tdx_hypercall_args, r15) */
> +
> +/*
> + * Bitmasks of exposed registers (with VMM).
> + */
> +#define TDX_R10 0x400
> +#define TDX_R11 0x800
> +#define TDX_R12 0x1000
> +#define TDX_R13 0x2000
> +#define TDX_R14 0x4000
> +#define TDX_R15 0x8000
> +
> +#define TDX_HCALL_HAS_OUTPUT 0x1
> +
> +/*
> + * These registers are clobbered to hold arguments for each
> + * TDVMCALL. They are safe to expose to the VMM.
> + * Each bit in this mask represents a register ID. Bit field
> + * details can be found in TDX GHCI specification, section
> + * titled "TDCALL [TDG.VP.VMCALL] leaf".
> + */
> +#define TDVMCALL_EXPOSE_REGS_MASK ( TDX_R10 | TDX_R11 | \
> + TDX_R12 | TDX_R13 | \
> + TDX_R14 | TDX_R15 )
> +
> +.code64
> +.section .text
> +
> +.globl __tdx_hypercall
> +.type __tdx_hypercall, @function
> +__tdx_hypercall:
> + /* Set up stack frame */
> + push %rbp
> + movq %rsp, %rbp
> +
> + /* Save callee-saved GPRs as mandated by the x86_64 ABI */
> + push %r15
> + push %r14
> + push %r13
> + push %r12
> +
> + /* Mangle function call ABI into TDCALL ABI: */
> + /* Set TDCALL leaf ID (TDVMCALL (0)) in RAX */
> + xor %eax, %eax
> +
> + /* Copy hypercall registers from arg struct: */
> + movq TDX_HYPERCALL_r10(%rdi), %r10
> + movq TDX_HYPERCALL_r11(%rdi), %r11
> + movq TDX_HYPERCALL_r12(%rdi), %r12
> + movq TDX_HYPERCALL_r13(%rdi), %r13
> + movq TDX_HYPERCALL_r14(%rdi), %r14
> + movq TDX_HYPERCALL_r15(%rdi), %r15
> +
> + movl $TDVMCALL_EXPOSE_REGS_MASK, %ecx
> +
> + tdcall
It looks like a definition for tdcall is missing, which produces the
following error:
lib/x86_64/tdx/tdcall.S:65: Error: no such instruction: `tdcall'

I pulled the code from https://github.com/googleprodkernel/linux-cc.git,
branch tdx-selftests-rfc-v5.

I fixed it on my side by adding this line to tdcall.S:
#define tdcall .byte 0x66,0x0f,0x01,0xcc

> +
> + /* TDVMCALL leaf return code is in R10 */
> + movq %r10, %rax
> +
> + /* Copy hypercall result registers to arg struct if needed */
> + testq $TDX_HCALL_HAS_OUTPUT, %rsi
> + jz .Lout
> +
> + movq %r10, TDX_HYPERCALL_r10(%rdi)
> + movq %r11, TDX_HYPERCALL_r11(%rdi)
> + movq %r12, TDX_HYPERCALL_r12(%rdi)
> + movq %r13, TDX_HYPERCALL_r13(%rdi)
> + movq %r14, TDX_HYPERCALL_r14(%rdi)
> + movq %r15, TDX_HYPERCALL_r15(%rdi)
> +.Lout:
> + /* Restore callee-saved GPRs as mandated by the x86_64 ABI */
> + pop %r12
> + pop %r13
> + pop %r14
> + pop %r15
> +
> + pop %rbp
> + ret
> +
> +/* Disable executable stack */




2024-03-01 06:52:59

by Binbin Wu

[permalink] [raw]
Subject: Re: [RFC PATCH v5 09/29] KVM: selftests: TDX: Add report_fatal_error test



On 12/13/2023 4:46 AM, Sagi Shahar wrote:
> The test checks report_fatal_error functionality.
>
> Signed-off-by: Sagi Shahar <[email protected]>
> Signed-off-by: Ackerley Tng <[email protected]>
> Signed-off-by: Ryan Afranji <[email protected]>
> ---
> .../selftests/kvm/include/x86_64/tdx/tdx.h | 6 ++-
> .../kvm/include/x86_64/tdx/tdx_util.h | 1 +
> .../kvm/include/x86_64/tdx/test_util.h | 19 ++++++++
> .../selftests/kvm/lib/x86_64/tdx/tdx.c | 39 ++++++++++++++++
> .../selftests/kvm/lib/x86_64/tdx/tdx_util.c | 12 +++++
> .../selftests/kvm/lib/x86_64/tdx/test_util.c | 10 +++++
> .../selftests/kvm/x86_64/tdx_vm_tests.c | 45 +++++++++++++++++++
> 7 files changed, 131 insertions(+), 1 deletion(-)
>
> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> index a7161efe4ee2..1340c1070002 100644
> --- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> @@ -3,10 +3,14 @@
> #define SELFTEST_TDX_TDX_H
>
> #include <stdint.h>
> +#include "kvm_util_base.h"
>
> -#define TDG_VP_VMCALL_INSTRUCTION_IO 30
> +#define TDG_VP_VMCALL_REPORT_FATAL_ERROR 0x10003
>
> +#define TDG_VP_VMCALL_INSTRUCTION_IO 30
> +void handle_userspace_tdg_vp_vmcall_exit(struct kvm_vcpu *vcpu);
> uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
> uint64_t write, uint64_t *data);
> +void tdg_vp_vmcall_report_fatal_error(uint64_t error_code, uint64_t data_gpa);
>
> #endif // SELFTEST_TDX_TDX_H
> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h
> index 274b245f200b..32dd6b8fda46 100644
> --- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h
> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h
> @@ -12,5 +12,6 @@ struct kvm_vm *td_create(void);
> void td_initialize(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
> uint64_t attributes);
> void td_finalize(struct kvm_vm *vm);
> +void td_vcpu_run(struct kvm_vcpu *vcpu);
>
> #endif // SELFTESTS_TDX_KVM_UTIL_H
> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
> index b570b6d978ff..6d69921136bd 100644
> --- a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
> @@ -49,4 +49,23 @@ bool is_tdx_enabled(void);
> */
> void tdx_test_success(void);
>
> +/**
> + * Report an error with @error_code to userspace.
> + *
> + * Return value from tdg_vp_vmcall_report_fatal_error is ignored since execution
> + * is not expected to continue beyond this point.
> + */
> +void tdx_test_fatal(uint64_t error_code);
> +
> +/**
> + * Report an error with @error_code to userspace.
> + *
> + * @data_gpa may point to an optional shared guest memory holding the error
> + * string.
> + *
> + * Return value from tdg_vp_vmcall_report_fatal_error is ignored since execution
> + * is not expected to continue beyond this point.
> + */
> +void tdx_test_fatal_with_data(uint64_t error_code, uint64_t data_gpa);
> +
> #endif // SELFTEST_TDX_TEST_UTIL_H
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> index c2414523487a..b854c3aa34ff 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> @@ -1,8 +1,31 @@
> // SPDX-License-Identifier: GPL-2.0-only
>
> +#include <string.h>
> +
> #include "tdx/tdcall.h"
> #include "tdx/tdx.h"
>
> +void handle_userspace_tdg_vp_vmcall_exit(struct kvm_vcpu *vcpu)
> +{
> + struct kvm_tdx_vmcall *vmcall_info = &vcpu->run->tdx.u.vmcall;
> + uint64_t vmcall_subfunction = vmcall_info->subfunction;
> +
> + switch (vmcall_subfunction) {
> + case TDG_VP_VMCALL_REPORT_FATAL_ERROR:
> + vcpu->run->exit_reason = KVM_EXIT_SYSTEM_EVENT;
> + vcpu->run->system_event.ndata = 3;
> + vcpu->run->system_event.data[0] =
> + TDG_VP_VMCALL_REPORT_FATAL_ERROR;
> + vcpu->run->system_event.data[1] = vmcall_info->in_r12;
> + vcpu->run->system_event.data[2] = vmcall_info->in_r13;
> + vmcall_info->status_code = 0;
> + break;
> + default:
> + TEST_FAIL("TD VMCALL subfunction %lu is unsupported.\n",
> + vmcall_subfunction);
> + }
> +}
> +
> uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
> uint64_t write, uint64_t *data)
> {
> @@ -25,3 +48,19 @@ uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
>
> return ret;
> }
> +
> +void tdg_vp_vmcall_report_fatal_error(uint64_t error_code, uint64_t data_gpa)
> +{
> + struct tdx_hypercall_args args;
> +
> + memset(&args, 0, sizeof(struct tdx_hypercall_args));
> +
> + if (data_gpa)
> + error_code |= 0x8000000000000000;
> +
> + args.r11 = TDG_VP_VMCALL_REPORT_FATAL_ERROR;
> + args.r12 = error_code;
> + args.r13 = data_gpa;
> +
> + __tdx_hypercall(&args, 0);
> +}
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
> index b302060049d5..d745bb6287c1 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
> @@ -10,6 +10,7 @@
>
> #include "kvm_util.h"
> #include "test_util.h"
> +#include "tdx/tdx.h"
> #include "tdx/td_boot.h"
> #include "kvm_util_base.h"
> #include "processor.h"
> @@ -519,3 +520,14 @@ void td_finalize(struct kvm_vm *vm)
>
> tdx_td_finalizemr(vm);
> }
> +
> +void td_vcpu_run(struct kvm_vcpu *vcpu)
> +{
> + vcpu_run(vcpu);
> +
> + /* Handle TD VMCALLs that require userspace handling. */
> + if (vcpu->run->exit_reason == KVM_EXIT_TDX &&
> + vcpu->run->tdx.type == KVM_EXIT_TDX_VMCALL) {
> + handle_userspace_tdg_vp_vmcall_exit(vcpu);
> + }
> +}
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c
> index 6905d0ca3877..7f3cd8089cea 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c
> @@ -32,3 +32,13 @@ void tdx_test_success(void)
> TDX_TEST_SUCCESS_SIZE,
> TDG_VP_VMCALL_INSTRUCTION_IO_WRITE, &code);
> }
> +
> +void tdx_test_fatal_with_data(uint64_t error_code, uint64_t data_gpa)
> +{
> + tdg_vp_vmcall_report_fatal_error(error_code, data_gpa);
> +}
> +
> +void tdx_test_fatal(uint64_t error_code)
> +{
> + tdx_test_fatal_with_data(error_code, 0);
> +}
> diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> index a18d1c9d6026..8638c7bbedaa 100644
> --- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> +++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> @@ -2,6 +2,7 @@
>
> #include <signal.h>
> #include "kvm_util_base.h"
> +#include "tdx/tdx.h"
> #include "tdx/tdx_util.h"
> #include "tdx/test_util.h"
> #include "test_util.h"
> @@ -30,6 +31,49 @@ void verify_td_lifecycle(void)
> printf("\t ... PASSED\n");
> }
>
> +void guest_code_report_fatal_error(void)
> +{
> + uint64_t err;
> +
> + /*
> + * Note: err should follow the GHCI spec definition:
> + * bits 31:0 should be set to 0.
> + * bits 62:32 are used for TD-specific extended error code.
> + * bit 63 is used to mark additional information in shared memory.
> + */
> + err = 0x0BAAAAAD00000000;
> + if (err)
> + tdx_test_fatal(err);

I find that tdx_test_fatal() is called a lot, and each call site checks err
before calling it. Would it be simpler to move the err check inside
tdx_test_fatal() so that callers can invoke it unconditionally?


> +
> + tdx_test_success();
> +}
> +void verify_report_fatal_error(void)
> +{
> + struct kvm_vm *vm;
> + struct kvm_vcpu *vcpu;
> +
> + vm = td_create();
> + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
> + vcpu = td_vcpu_add(vm, 0, guest_code_report_fatal_error);
> + td_finalize(vm);
> +
> + printf("Verifying report_fatal_error:\n");
> +
> + td_vcpu_run(vcpu);
> +
> + TEST_ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_SYSTEM_EVENT);
> + TEST_ASSERT_EQ(vcpu->run->system_event.ndata, 3);
> + TEST_ASSERT_EQ(vcpu->run->system_event.data[0], TDG_VP_VMCALL_REPORT_FATAL_ERROR);
> + TEST_ASSERT_EQ(vcpu->run->system_event.data[1], 0x0BAAAAAD00000000);
> + TEST_ASSERT_EQ(vcpu->run->system_event.data[2], 0);
> +
> + vcpu_run(vcpu);
> + TDX_TEST_ASSERT_SUCCESS(vcpu);
> +
> + kvm_vm_free(vm);
> + printf("\t ... PASSED\n");
> +}
> +
> int main(int argc, char **argv)
> {
> setbuf(stdout, NULL);
> @@ -40,6 +84,7 @@ int main(int argc, char **argv)
> }
>
> run_in_new_process(&verify_td_lifecycle);
> + run_in_new_process(&verify_report_fatal_error);
>
> return 0;
> }


2024-03-01 08:06:51

by Yan Zhao

Subject: Re: [RFC PATCH v5 08/29] KVM: selftests: TDX: Add TDX lifecycle test

> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
> index 063ff486fb86..b302060049d5 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
> @@ -224,6 +224,7 @@ static void tdx_enable_capabilities(struct kvm_vm *vm)
> KVM_X2APIC_API_USE_32BIT_IDS |
> KVM_X2APIC_API_DISABLE_BROADCAST_QUIRK);
> vm_enable_cap(vm, KVM_CAP_SPLIT_IRQCHIP, 24);
> + vm_enable_cap(vm, KVM_CAP_MAX_VCPUS, 512);

Is this really a must for the TD life cycle?
I know that, currently, the selftest will fail without this line because a
TD's default max vCPU count is -1, but I suspect that default is itself a bug.
https://lore.kernel.org/all/[email protected]/

2024-03-01 08:30:20

by Binbin Wu

Subject: Re: [RFC PATCH v5 12/29] KVM: selftests: TDX: Add basic get_td_vmcall_info test



On 12/13/2023 4:46 AM, Sagi Shahar wrote:
> The test calls get_td_vmcall_info from the guest and verifies the
> expected returned values.
>
> Signed-off-by: Sagi Shahar <[email protected]>
> Signed-off-by: Ackerley Tng <[email protected]>
> Signed-off-by: Ryan Afranji <[email protected]>
> ---
> .../selftests/kvm/include/x86_64/tdx/tdx.h | 3 +
> .../kvm/include/x86_64/tdx/test_util.h | 27 +++++++
> .../selftests/kvm/lib/x86_64/tdx/tdx.c | 23 ++++++
> .../selftests/kvm/lib/x86_64/tdx/test_util.c | 46 +++++++++++
> .../selftests/kvm/x86_64/tdx_vm_tests.c | 80 +++++++++++++++++++
> 5 files changed, 179 insertions(+)

Reviewed-by: Binbin Wu <[email protected]>

Also, should another case be added with a non-zero value in r12, to exercise
the VMCALL_INVALID_OPERAND path?

>
> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> index 1340c1070002..63788012bf94 100644
> --- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> @@ -5,6 +5,7 @@
> #include <stdint.h>
> #include "kvm_util_base.h"
>
> +#define TDG_VP_VMCALL_GET_TD_VM_CALL_INFO 0x10000
> #define TDG_VP_VMCALL_REPORT_FATAL_ERROR 0x10003
>
> #define TDG_VP_VMCALL_INSTRUCTION_IO 30
> @@ -12,5 +13,7 @@ void handle_userspace_tdg_vp_vmcall_exit(struct kvm_vcpu *vcpu);
> uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
> uint64_t write, uint64_t *data);
> void tdg_vp_vmcall_report_fatal_error(uint64_t error_code, uint64_t data_gpa);
> +uint64_t tdg_vp_vmcall_get_td_vmcall_info(uint64_t *r11, uint64_t *r12,
> + uint64_t *r13, uint64_t *r14);
>
> #endif // SELFTEST_TDX_TDX_H
> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
> index af0ddbfe8d71..8a9b6a1bec3e 100644
> --- a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
> @@ -4,6 +4,7 @@
>
> #include <stdbool.h>
>
> +#include "kvm_util_base.h"
> #include "tdcall.h"
>
> #define TDX_TEST_SUCCESS_PORT 0x30
> @@ -111,4 +112,30 @@ void tdx_test_fatal_with_data(uint64_t error_code, uint64_t data_gpa);
> */
> uint64_t tdx_test_report_to_user_space(uint32_t data);
>
> +/**
> + * Report a 64 bit value from the guest to user space using TDG.VP.VMCALL
> + * <Instruction.IO> call.
> + *
> + * Data is sent to host in 2 calls. LSB is sent (and needs to be read) first.
> + */
> +uint64_t tdx_test_send_64bit(uint64_t port, uint64_t data);
> +
> +/**
> + * Report a 64 bit value from the guest to user space using TDG.VP.VMCALL
> + * <Instruction.IO> call. Data is reported on port TDX_TEST_REPORT_PORT.
> + */
> +uint64_t tdx_test_report_64bit_to_user_space(uint64_t data);
> +
> +/**
> + * Read a 64 bit value from the guest in user space, sent using
> + * tdx_test_send_64bit().
> + */
> +uint64_t tdx_test_read_64bit(struct kvm_vcpu *vcpu, uint64_t port);
> +
> +/**
> + * Read a 64 bit value from the guest in user space, sent using
> + * tdx_test_report_64bit_to_user_space.
> + */
> +uint64_t tdx_test_read_64bit_report_from_guest(struct kvm_vcpu *vcpu);
> +
> #endif // SELFTEST_TDX_TEST_UTIL_H
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> index b854c3aa34ff..e5a9e13c62e2 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> @@ -64,3 +64,26 @@ void tdg_vp_vmcall_report_fatal_error(uint64_t error_code, uint64_t data_gpa)
>
> __tdx_hypercall(&args, 0);
> }
> +
> +uint64_t tdg_vp_vmcall_get_td_vmcall_info(uint64_t *r11, uint64_t *r12,
> + uint64_t *r13, uint64_t *r14)
> +{
> + uint64_t ret;
> + struct tdx_hypercall_args args = {
> + .r11 = TDG_VP_VMCALL_GET_TD_VM_CALL_INFO,
> + .r12 = 0,
> + };
> +
> + ret = __tdx_hypercall(&args, TDX_HCALL_HAS_OUTPUT);
> +
> + if (r11)
> + *r11 = args.r11;
> + if (r12)
> + *r12 = args.r12;
> + if (r13)
> + *r13 = args.r13;
> + if (r14)
> + *r14 = args.r14;
> +
> + return ret;
> +}
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c
> index 55c5a1e634df..3ae651cd5fac 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c
> @@ -7,6 +7,7 @@
> #include <unistd.h>
>
> #include "kvm_util_base.h"
> +#include "tdx/tdcall.h"
> #include "tdx/tdx.h"
> #include "tdx/test_util.h"
>
> @@ -53,3 +54,48 @@ uint64_t tdx_test_report_to_user_space(uint32_t data)
> TDG_VP_VMCALL_INSTRUCTION_IO_WRITE,
> &data_64);
> }
> +
> +uint64_t tdx_test_send_64bit(uint64_t port, uint64_t data)
> +{
> + uint64_t err;
> + uint64_t data_lo = data & 0xFFFFFFFF;
> + uint64_t data_hi = (data >> 32) & 0xFFFFFFFF;
> +
> + err = tdg_vp_vmcall_instruction_io(port, 4,
> + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE,
> + &data_lo);
> + if (err)
> + return err;
> +
> + return tdg_vp_vmcall_instruction_io(port, 4,
> + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE,
> + &data_hi);
> +}
> +
> +uint64_t tdx_test_report_64bit_to_user_space(uint64_t data)
> +{
> + return tdx_test_send_64bit(TDX_TEST_REPORT_PORT, data);
> +}
> +
> +uint64_t tdx_test_read_64bit(struct kvm_vcpu *vcpu, uint64_t port)
> +{
> + uint32_t lo, hi;
> + uint64_t res;
> +
> + TDX_TEST_ASSERT_IO(vcpu, port, 4, TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
> + lo = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
> +
> + vcpu_run(vcpu);
> +
> + TDX_TEST_ASSERT_IO(vcpu, port, 4, TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
> + hi = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
> +
> + res = hi;
> + res = (res << 32) | lo;
> + return res;
> +}
> +
> +uint64_t tdx_test_read_64bit_report_from_guest(struct kvm_vcpu *vcpu)
> +{
> + return tdx_test_read_64bit(vcpu, TDX_TEST_REPORT_PORT);
> +}
> diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> index 1b30e6f5a569..569c8fb0a59f 100644
> --- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> +++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> @@ -260,6 +260,85 @@ void verify_td_cpuid(void)
> printf("\t ... PASSED\n");
> }
>
> +/*
> + * Verifies get_td_vmcall_info functionality.
> + */
> +void guest_code_get_td_vmcall_info(void)
> +{
> + uint64_t err;
> + uint64_t r11, r12, r13, r14;
> +
> + err = tdg_vp_vmcall_get_td_vmcall_info(&r11, &r12, &r13, &r14);
> + if (err)
> + tdx_test_fatal(err);
> +
> + err = tdx_test_report_64bit_to_user_space(r11);
> + if (err)
> + tdx_test_fatal(err);
> +
> + err = tdx_test_report_64bit_to_user_space(r12);
> + if (err)
> + tdx_test_fatal(err);
> +
> + err = tdx_test_report_64bit_to_user_space(r13);
> + if (err)
> + tdx_test_fatal(err);
> +
> + err = tdx_test_report_64bit_to_user_space(r14);
> + if (err)
> + tdx_test_fatal(err);
> +
> + tdx_test_success();
> +}
> +
> +void verify_get_td_vmcall_info(void)
> +{
> + struct kvm_vm *vm;
> + struct kvm_vcpu *vcpu;
> +
> + uint64_t r11, r12, r13, r14;
> +
> + vm = td_create();
> + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
> + vcpu = td_vcpu_add(vm, 0, guest_code_get_td_vmcall_info);
> + td_finalize(vm);
> +
> + printf("Verifying TD get vmcall info:\n");
> +
> + /* Wait for guest to report r11 value */
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + r11 = tdx_test_read_64bit_report_from_guest(vcpu);
> +
> + /* Wait for guest to report r12 value */
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + r12 = tdx_test_read_64bit_report_from_guest(vcpu);
> +
> + /* Wait for guest to report r13 value */
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + r13 = tdx_test_read_64bit_report_from_guest(vcpu);
> +
> + /* Wait for guest to report r14 value */
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + r14 = tdx_test_read_64bit_report_from_guest(vcpu);
> +
> + TEST_ASSERT_EQ(r11, 0);
> + TEST_ASSERT_EQ(r12, 0);
> + TEST_ASSERT_EQ(r13, 0);
> + TEST_ASSERT_EQ(r14, 0);
> +
> + /* Wait for guest to complete execution */
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + TDX_TEST_ASSERT_SUCCESS(vcpu);
> +
> + kvm_vm_free(vm);
> + printf("\t ... PASSED\n");
> +}
> +
> int main(int argc, char **argv)
> {
> setbuf(stdout, NULL);
> @@ -273,6 +352,7 @@ int main(int argc, char **argv)
> run_in_new_process(&verify_report_fatal_error);
> run_in_new_process(&verify_td_ioexit);
> run_in_new_process(&verify_td_cpuid);
> + run_in_new_process(&verify_get_td_vmcall_info);
>
> return 0;
> }


2024-03-01 09:32:51

by Binbin Wu

Subject: Re: [RFC PATCH v5 14/29] KVM: selftests: TDX: Add TDX IO reads test



On 12/13/2023 4:46 AM, Sagi Shahar wrote:
> The test verifies IO reads of various sizes from the host to the guest.
>
> Signed-off-by: Sagi Shahar <[email protected]>
> Signed-off-by: Ackerley Tng <[email protected]>
> Signed-off-by: Ryan Afranji <[email protected]>
> ---
> .../selftests/kvm/x86_64/tdx_vm_tests.c | 87 +++++++++++++++++++
> 1 file changed, 87 insertions(+)

Reviewed-by: Binbin Wu <[email protected]>

>
> diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> index a2b3e1aef151..699cba36e9ce 100644
> --- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> +++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> @@ -429,6 +429,92 @@ void verify_guest_writes(void)
> printf("\t ... PASSED\n");
> }
>
> +#define TDX_IO_READS_TEST_PORT 0x52
> +
> +/*
> + * Verifies IO functionality by reading values of different sizes
> + * from the host.
> + */
> +void guest_io_reads(void)
> +{
> + uint64_t data;
> + uint64_t ret;
> +
> + ret = tdg_vp_vmcall_instruction_io(TDX_IO_READS_TEST_PORT, 1,
> + TDG_VP_VMCALL_INSTRUCTION_IO_READ,
> + &data);
> + if (ret)
> + tdx_test_fatal(ret);
> + if (data != 0xAB)
> + tdx_test_fatal(1);
> +
> + ret = tdg_vp_vmcall_instruction_io(TDX_IO_READS_TEST_PORT, 2,
> + TDG_VP_VMCALL_INSTRUCTION_IO_READ,
> + &data);
> + if (ret)
> + tdx_test_fatal(ret);
> + if (data != 0xABCD)
> + tdx_test_fatal(2);
> +
> + ret = tdg_vp_vmcall_instruction_io(TDX_IO_READS_TEST_PORT, 4,
> + TDG_VP_VMCALL_INSTRUCTION_IO_READ,
> + &data);
> + if (ret)
> + tdx_test_fatal(ret);
> + if (data != 0xFFABCDEF)
> + tdx_test_fatal(4);
> +
> + // Read an invalid number of bytes.
> + ret = tdg_vp_vmcall_instruction_io(TDX_IO_READS_TEST_PORT, 5,
> + TDG_VP_VMCALL_INSTRUCTION_IO_READ,
> + &data);
> + if (ret)
> + tdx_test_fatal(ret);
> +
> + tdx_test_success();
> +}
> +
> +void verify_guest_reads(void)
> +{
> + struct kvm_vm *vm;
> + struct kvm_vcpu *vcpu;
> +
> + vm = td_create();
> + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
> + vcpu = td_vcpu_add(vm, 0, guest_io_reads);
> + td_finalize(vm);
> +
> + printf("Verifying guest reads:\n");
> +
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + TDX_TEST_ASSERT_IO(vcpu, TDX_IO_READS_TEST_PORT, 1,
> + TDG_VP_VMCALL_INSTRUCTION_IO_READ);
> + *(uint8_t *)((void *)vcpu->run + vcpu->run->io.data_offset) = 0xAB;
> +
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + TDX_TEST_ASSERT_IO(vcpu, TDX_IO_READS_TEST_PORT, 2,
> + TDG_VP_VMCALL_INSTRUCTION_IO_READ);
> + *(uint16_t *)((void *)vcpu->run + vcpu->run->io.data_offset) = 0xABCD;
> +
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + TDX_TEST_ASSERT_IO(vcpu, TDX_IO_READS_TEST_PORT, 4,
> + TDG_VP_VMCALL_INSTRUCTION_IO_READ);
> + *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset) = 0xFFABCDEF;
> +
> + td_vcpu_run(vcpu);
> + TEST_ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_SYSTEM_EVENT);
> + TEST_ASSERT_EQ(vcpu->run->system_event.data[1], TDG_VP_VMCALL_INVALID_OPERAND);
> +
> + td_vcpu_run(vcpu);
> + TDX_TEST_ASSERT_SUCCESS(vcpu);
> +
> + kvm_vm_free(vm);
> + printf("\t ... PASSED\n");
> +}
> +
> int main(int argc, char **argv)
> {
> setbuf(stdout, NULL);
> @@ -444,6 +530,7 @@ int main(int argc, char **argv)
> run_in_new_process(&verify_td_cpuid);
> run_in_new_process(&verify_get_td_vmcall_info);
> run_in_new_process(&verify_guest_writes);
> + run_in_new_process(&verify_guest_reads);
>
> return 0;
> }


2024-03-01 12:00:17

by Binbin Wu

Subject: Re: [RFC PATCH v5 13/29] KVM: selftests: TDX: Add TDX IO writes test



On 12/13/2023 4:46 AM, Sagi Shahar wrote:
> The test verifies IO writes of various sizes from the guest to the host.
>
> Signed-off-by: Sagi Shahar <[email protected]>
> Signed-off-by: Ackerley Tng <[email protected]>
> Signed-off-by: Ryan Afranji <[email protected]>
> ---
> .../selftests/kvm/include/x86_64/tdx/tdcall.h | 3 +
> .../selftests/kvm/x86_64/tdx_vm_tests.c | 91 +++++++++++++++++++
> 2 files changed, 94 insertions(+)

Reviewed-by: Binbin Wu <[email protected]>

>
> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
> index 78001bfec9c8..b5e94b7c48fa 100644
> --- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
> @@ -10,6 +10,9 @@
> #define TDG_VP_VMCALL_INSTRUCTION_IO_READ 0
> #define TDG_VP_VMCALL_INSTRUCTION_IO_WRITE 1
>
> +#define TDG_VP_VMCALL_SUCCESS 0x0000000000000000
> +#define TDG_VP_VMCALL_INVALID_OPERAND 0x8000000000000000
> +
> #define TDX_HCALL_HAS_OUTPUT BIT(0)
>
> #define TDX_HYPERCALL_STANDARD 0
> diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> index 569c8fb0a59f..a2b3e1aef151 100644
> --- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> +++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> @@ -339,6 +339,96 @@ void verify_get_td_vmcall_info(void)
> printf("\t ... PASSED\n");
> }
>
> +#define TDX_IO_WRITES_TEST_PORT 0x51
> +
> +/*
> + * Verifies IO functionality by writing values of different sizes
> + * to the host.
> + */
> +void guest_io_writes(void)
> +{
> + uint64_t byte_1 = 0xAB;
> + uint64_t byte_2 = 0xABCD;
> + uint64_t byte_4 = 0xFFABCDEF;
> + uint64_t ret;
> +
> + ret = tdg_vp_vmcall_instruction_io(TDX_IO_WRITES_TEST_PORT, 1,
> + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE,
> + &byte_1);
> + if (ret)
> + tdx_test_fatal(ret);
> +
> + ret = tdg_vp_vmcall_instruction_io(TDX_IO_WRITES_TEST_PORT, 2,
> + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE,
> + &byte_2);
> + if (ret)
> + tdx_test_fatal(ret);
> +
> + ret = tdg_vp_vmcall_instruction_io(TDX_IO_WRITES_TEST_PORT, 4,
> + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE,
> + &byte_4);
> + if (ret)
> + tdx_test_fatal(ret);
> +
> + // Write an invalid number of bytes.
> + ret = tdg_vp_vmcall_instruction_io(TDX_IO_WRITES_TEST_PORT, 5,
> + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE,
> + &byte_4);
> + if (ret)
> + tdx_test_fatal(ret);
> +
> + tdx_test_success();
> +}
> +
> +void verify_guest_writes(void)
> +{
> + struct kvm_vm *vm;
> + struct kvm_vcpu *vcpu;
> +
> + uint8_t byte_1;
> + uint16_t byte_2;
> + uint32_t byte_4;
> +
> + vm = td_create();
> + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
> + vcpu = td_vcpu_add(vm, 0, guest_io_writes);
> + td_finalize(vm);
> +
> + printf("Verifying guest writes:\n");
> +
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + TDX_TEST_ASSERT_IO(vcpu, TDX_IO_WRITES_TEST_PORT, 1,
> + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
> + byte_1 = *(uint8_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
> +
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + TDX_TEST_ASSERT_IO(vcpu, TDX_IO_WRITES_TEST_PORT, 2,
> + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
> + byte_2 = *(uint16_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
> +
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + TDX_TEST_ASSERT_IO(vcpu, TDX_IO_WRITES_TEST_PORT, 4,
> + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
> + byte_4 = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
> +
> + TEST_ASSERT_EQ(byte_1, 0xAB);
> + TEST_ASSERT_EQ(byte_2, 0xABCD);
> + TEST_ASSERT_EQ(byte_4, 0xFFABCDEF);
> +
> + td_vcpu_run(vcpu);
> + TEST_ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_SYSTEM_EVENT);
> + TEST_ASSERT_EQ(vcpu->run->system_event.data[1], TDG_VP_VMCALL_INVALID_OPERAND);
> +
> + td_vcpu_run(vcpu);
> + TDX_TEST_ASSERT_SUCCESS(vcpu);
> +
> + kvm_vm_free(vm);
> + printf("\t ... PASSED\n");
> +}
> +
> int main(int argc, char **argv)
> {
> setbuf(stdout, NULL);
> @@ -353,6 +443,7 @@ int main(int argc, char **argv)
> run_in_new_process(&verify_td_ioexit);
> run_in_new_process(&verify_td_cpuid);
> run_in_new_process(&verify_get_td_vmcall_info);
> + run_in_new_process(&verify_guest_writes);
>
> return 0;
> }


2024-03-01 12:32:56

by Yan Zhao

Subject: Re: [RFC PATCH v5 23/29] KVM: selftests: TDX: Add shared memory test

> +void guest_shared_mem(void)
> +{
> + uint32_t *test_mem_shared_gva =
> + (uint32_t *)TDX_SHARED_MEM_TEST_SHARED_GVA;
> +
> + uint64_t placeholder;
> + uint64_t ret;
> +
> + /* Map gpa as shared */
> + ret = tdg_vp_vmcall_map_gpa(test_mem_shared_gpa, PAGE_SIZE,
> + &placeholder);
> + if (ret)
> + tdx_test_fatal_with_data(ret, __LINE__);
> +
> + *test_mem_shared_gva = TDX_SHARED_MEM_TEST_GUEST_WRITE_VALUE;
> +
> + /* Exit so host can read shared value */
> + ret = tdg_vp_vmcall_instruction_io(TDX_SHARED_MEM_TEST_INFO_PORT, 4,
> + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE,
> + &placeholder);
> + if (ret)
> + tdx_test_fatal_with_data(ret, __LINE__);
> +
> + /* Read value written by host and send it back out for verification */
> + ret = tdg_vp_vmcall_instruction_io(TDX_SHARED_MEM_TEST_INFO_PORT, 4,
> + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE,
> + (uint64_t *)test_mem_shared_gva);
> + if (ret)
> + tdx_test_fatal_with_data(ret, __LINE__);
> +}
> +
> +int verify_shared_mem(void)
> +{
> + struct kvm_vm *vm;
> + struct kvm_vcpu *vcpu;
> +
> + vm_vaddr_t test_mem_private_gva;
> + uint32_t *test_mem_hva;
> +
> + vm = td_create();
> + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
> + vcpu = td_vcpu_add(vm, 0, guest_shared_mem);
> +
> + /*
> + * Set up shared memory page for testing by first allocating as private
> + * and then mapping the same GPA again as shared. This way, the TD does
> + * not have to remap its page tables at runtime.
> + */
> + test_mem_private_gva = vm_vaddr_alloc(vm, vm->page_size,
> + TDX_SHARED_MEM_TEST_PRIVATE_GVA);
> + TEST_ASSERT_EQ(test_mem_private_gva, TDX_SHARED_MEM_TEST_PRIVATE_GVA);
> +
> + test_mem_hva = addr_gva2hva(vm, test_mem_private_gva);
> + TEST_ASSERT(test_mem_hva != NULL,
> + "Guest address not found in guest memory regions\n");
> +
> + test_mem_private_gpa = addr_gva2gpa(vm, test_mem_private_gva);
> + virt_pg_map_shared(vm, TDX_SHARED_MEM_TEST_SHARED_GVA,
> + test_mem_private_gpa);
> +
> + test_mem_shared_gpa = test_mem_private_gpa | BIT_ULL(vm->pa_bits - 1);
> + sync_global_to_guest(vm, test_mem_private_gpa);
> + sync_global_to_guest(vm, test_mem_shared_gpa);
> +
> + td_finalize(vm);
> +
> + printf("Verifying shared memory accesses for TDX\n");
> +
> + /* Begin guest execution; guest writes to shared memory. */
> + printf("\t ... Starting guest execution\n");
> +
> + /* Handle map gpa as shared */
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
The first VM exit should be caused by the MapGPA TDVMCALL, so it cannot be a
guest failure.

It would be better to move this TDX_TEST_CHECK_GUEST_FAILURE(vcpu) line to
after the next td_vcpu_run().
> +
> + td_vcpu_run(vcpu);
> + TDX_TEST_ASSERT_IO(vcpu, TDX_SHARED_MEM_TEST_INFO_PORT, 4,
> + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
> + TEST_ASSERT_EQ(*test_mem_hva, TDX_SHARED_MEM_TEST_GUEST_WRITE_VALUE);
> +
> + *test_mem_hva = TDX_SHARED_MEM_TEST_HOST_WRITE_VALUE;
> + td_vcpu_run(vcpu);
> + TDX_TEST_ASSERT_IO(vcpu, TDX_SHARED_MEM_TEST_INFO_PORT, 4,
> + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
> + TEST_ASSERT_EQ(
> + *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset),
> + TDX_SHARED_MEM_TEST_HOST_WRITE_VALUE);
> +
> + printf("\t ... PASSED\n");
> +
> + kvm_vm_free(vm);
> +
> + return 0;
> +}


2024-03-01 12:40:10

by Yan Zhao

Subject: Re: [RFC PATCH v5 09/29] KVM: selftests: TDX: Add report_fatal_error test

..
> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
> index b570b6d978ff..6d69921136bd 100644
> --- a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
> @@ -49,4 +49,23 @@ bool is_tdx_enabled(void);
> */
> void tdx_test_success(void);
>
> +/**
> + * Report an error with @error_code to userspace.
> + *
> + * Return value from tdg_vp_vmcall_report_fatal_error is ignored since execution
> + * is not expected to continue beyond this point.
> + */
> +void tdx_test_fatal(uint64_t error_code);
> +
> +/**
> + * Report an error with @error_code to userspace.
> + *
> + * @data_gpa may point to an optional shared guest memory holding the error
> + * string.
> + *
> + * Return value from tdg_vp_vmcall_report_fatal_error is ignored since execution
> + * is not expected to continue beyond this point.
> + */
> +void tdx_test_fatal_with_data(uint64_t error_code, uint64_t data_gpa);
I found nowhere that uses "data_gpa" as a GPA; even in patch 23, it is used to
pass a line number ("tdx_test_fatal_with_data(ret, __LINE__)").


> #endif // SELFTEST_TDX_TEST_UTIL_H
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> index c2414523487a..b854c3aa34ff 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> @@ -1,8 +1,31 @@
> // SPDX-License-Identifier: GPL-2.0-only
>
> +#include <string.h>
> +
> #include "tdx/tdcall.h"
> #include "tdx/tdx.h"
>
> +void handle_userspace_tdg_vp_vmcall_exit(struct kvm_vcpu *vcpu)
> +{
> + struct kvm_tdx_vmcall *vmcall_info = &vcpu->run->tdx.u.vmcall;
> + uint64_t vmcall_subfunction = vmcall_info->subfunction;
> +
> + switch (vmcall_subfunction) {
> + case TDG_VP_VMCALL_REPORT_FATAL_ERROR:
> + vcpu->run->exit_reason = KVM_EXIT_SYSTEM_EVENT;
> + vcpu->run->system_event.ndata = 3;
> + vcpu->run->system_event.data[0] =
> + TDG_VP_VMCALL_REPORT_FATAL_ERROR;
> + vcpu->run->system_event.data[1] = vmcall_info->in_r12;
> + vcpu->run->system_event.data[2] = vmcall_info->in_r13;
> + vmcall_info->status_code = 0;
> + break;
> + default:
> + TEST_FAIL("TD VMCALL subfunction %lu is unsupported.\n",
> + vmcall_subfunction);
> + }
> +}
> +
> uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
> uint64_t write, uint64_t *data)
> {
> @@ -25,3 +48,19 @@ uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
>
> return ret;
> }
> +
> +void tdg_vp_vmcall_report_fatal_error(uint64_t error_code, uint64_t data_gpa)
> +{
> + struct tdx_hypercall_args args;
> +
> + memset(&args, 0, sizeof(struct tdx_hypercall_args));
> +
> + if (data_gpa)
> + error_code |= 0x8000000000000000;
>
So, why does this error_code need to set bit 63?


> + args.r11 = TDG_VP_VMCALL_REPORT_FATAL_ERROR;
> + args.r12 = error_code;
> + args.r13 = data_gpa;
> +
> + __tdx_hypercall(&args, 0);
> +}
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
> index b302060049d5..d745bb6287c1 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
> @@ -10,6 +10,7 @@
>
> #include "kvm_util.h"
> #include "test_util.h"
> +#include "tdx/tdx.h"
> #include "tdx/td_boot.h"
> #include "kvm_util_base.h"
> #include "processor.h"
> @@ -519,3 +520,14 @@ void td_finalize(struct kvm_vm *vm)
>
> tdx_td_finalizemr(vm);
> }
> +
> +void td_vcpu_run(struct kvm_vcpu *vcpu)
> +{
> + vcpu_run(vcpu);
> +
> + /* Handle TD VMCALLs that require userspace handling. */
> + if (vcpu->run->exit_reason == KVM_EXIT_TDX &&
> + vcpu->run->tdx.type == KVM_EXIT_TDX_VMCALL) {
> + handle_userspace_tdg_vp_vmcall_exit(vcpu);
> + }
> +}
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c
> index 6905d0ca3877..7f3cd8089cea 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c
> @@ -32,3 +32,13 @@ void tdx_test_success(void)
> TDX_TEST_SUCCESS_SIZE,
> TDG_VP_VMCALL_INSTRUCTION_IO_WRITE, &code);
> }
> +
> +void tdx_test_fatal_with_data(uint64_t error_code, uint64_t data_gpa)
> +{
> + tdg_vp_vmcall_report_fatal_error(error_code, data_gpa);
> +}
> +
> +void tdx_test_fatal(uint64_t error_code)
> +{
> + tdx_test_fatal_with_data(error_code, 0);
> +}
> diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> index a18d1c9d6026..8638c7bbedaa 100644
> --- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> +++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> @@ -2,6 +2,7 @@
>
> #include <signal.h>
> #include "kvm_util_base.h"
> +#include "tdx/tdx.h"
> #include "tdx/tdx_util.h"
> #include "tdx/test_util.h"
> #include "test_util.h"
> @@ -30,6 +31,49 @@ void verify_td_lifecycle(void)
> printf("\t ... PASSED\n");
> }
>
> +void guest_code_report_fatal_error(void)
> +{
> + uint64_t err;
> +
> + /*
> + * Note: err should follow the GHCI spec definition:
> + * bits 31:0 should be set to 0.
> + * bits 62:32 are used for TD-specific extended error code.
> + * bit 63 is used to mark additional information in shared memory.
> + */
> + err = 0x0BAAAAAD00000000;
> + if (err)
> + tdx_test_fatal(err);
> +
> + tdx_test_success();
> +}
> +void verify_report_fatal_error(void)
> +{
> + struct kvm_vm *vm;
> + struct kvm_vcpu *vcpu;
> +
> + vm = td_create();
> + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
> + vcpu = td_vcpu_add(vm, 0, guest_code_report_fatal_error);
> + td_finalize(vm);
> +
> + printf("Verifying report_fatal_error:\n");
> +
> + td_vcpu_run(vcpu);
> +
> + TEST_ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_SYSTEM_EVENT);
> + TEST_ASSERT_EQ(vcpu->run->system_event.ndata, 3);
> + TEST_ASSERT_EQ(vcpu->run->system_event.data[0], TDG_VP_VMCALL_REPORT_FATAL_ERROR);
> + TEST_ASSERT_EQ(vcpu->run->system_event.data[1], 0x0BAAAAAD00000000);
> + TEST_ASSERT_EQ(vcpu->run->system_event.data[2], 0);
> +
> + vcpu_run(vcpu);
> + TDX_TEST_ASSERT_SUCCESS(vcpu);
> +
> + kvm_vm_free(vm);
> + printf("\t ... PASSED\n");
> +}
> +
> int main(int argc, char **argv)
> {
> setbuf(stdout, NULL);
> @@ -40,6 +84,7 @@ int main(int argc, char **argv)
> }
>
> run_in_new_process(&verify_td_lifecycle);
> + run_in_new_process(&verify_report_fatal_error);
>
> return 0;
> }
> --
> 2.43.0.472.g3155946c3a-goog
>
>

2024-03-01 19:06:42

by Binbin Wu

[permalink] [raw]
Subject: Re: [RFC PATCH v5 15/29] KVM: selftests: TDX: Add TDX MSR read/write tests



On 12/13/2023 4:46 AM, Sagi Shahar wrote:
> The test verifies reads and writes for MSR registers with different access
> levels.
>
> Signed-off-by: Sagi Shahar <[email protected]>
> Signed-off-by: Ackerley Tng <[email protected]>
> Signed-off-by: Ryan Afranji <[email protected]>
> ---
> .../selftests/kvm/include/x86_64/tdx/tdx.h | 5 +
> .../selftests/kvm/lib/x86_64/tdx/tdx.c | 27 +++
> .../selftests/kvm/x86_64/tdx_vm_tests.c | 209 ++++++++++++++++++
> 3 files changed, 241 insertions(+)
>
> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> index 63788012bf94..85ba6aab79a7 100644
> --- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> @@ -9,11 +9,16 @@
> #define TDG_VP_VMCALL_REPORT_FATAL_ERROR 0x10003
>
> #define TDG_VP_VMCALL_INSTRUCTION_IO 30
> +#define TDG_VP_VMCALL_INSTRUCTION_RDMSR 31
> +#define TDG_VP_VMCALL_INSTRUCTION_WRMSR 32
> +
> void handle_userspace_tdg_vp_vmcall_exit(struct kvm_vcpu *vcpu);
> uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
> uint64_t write, uint64_t *data);
> void tdg_vp_vmcall_report_fatal_error(uint64_t error_code, uint64_t data_gpa);
> uint64_t tdg_vp_vmcall_get_td_vmcall_info(uint64_t *r11, uint64_t *r12,
> uint64_t *r13, uint64_t *r14);
> +uint64_t tdg_vp_vmcall_instruction_rdmsr(uint64_t index, uint64_t *ret_value);
> +uint64_t tdg_vp_vmcall_instruction_wrmsr(uint64_t index, uint64_t value);
>
> #endif // SELFTEST_TDX_TDX_H
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> index e5a9e13c62e2..88ea6f2a6469 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> @@ -87,3 +87,30 @@ uint64_t tdg_vp_vmcall_get_td_vmcall_info(uint64_t *r11, uint64_t *r12,
>
> return ret;
> }
> +
> +uint64_t tdg_vp_vmcall_instruction_rdmsr(uint64_t index, uint64_t *ret_value)
> +{
> + uint64_t ret;
> + struct tdx_hypercall_args args = {
> + .r11 = TDG_VP_VMCALL_INSTRUCTION_RDMSR,
> + .r12 = index,
> + };
> +
> + ret = __tdx_hypercall(&args, TDX_HCALL_HAS_OUTPUT);
> +
> + if (ret_value)
> + *ret_value = args.r11;
> +
> + return ret;
> +}
> +
> +uint64_t tdg_vp_vmcall_instruction_wrmsr(uint64_t index, uint64_t value)
> +{
> + struct tdx_hypercall_args args = {
> + .r11 = TDG_VP_VMCALL_INSTRUCTION_WRMSR,
> + .r12 = index,
> + .r13 = value,
> + };
> +
> + return __tdx_hypercall(&args, 0);
> +}
> diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> index 699cba36e9ce..5db3701cc6d9 100644
> --- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> +++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> @@ -515,6 +515,213 @@ void verify_guest_reads(void)
> printf("\t ... PASSED\n");
> }
>
> +/*
> + * Define a filter which denies all MSR access except the following:
> + * MSR_X2APIC_APIC_ICR: Allow read/write access (allowed by default)

The default filtering behavior of tdx_msr_test_filter is
KVM_MSR_FILTER_DEFAULT_DENY, and MSR_X2APIC_APIC_ICR is not covered
by any specific range, shouldn't MSR_X2APIC_APIC_ICR be denied by default?

> + * MSR_IA32_MISC_ENABLE: Allow read access
> + * MSR_IA32_POWER_CTL: Allow write access
> + */
> +#define MSR_X2APIC_APIC_ICR 0x830
> +static u64 tdx_msr_test_allow_bits = 0xFFFFFFFFFFFFFFFF;
> +struct kvm_msr_filter tdx_msr_test_filter = {
> + .flags = KVM_MSR_FILTER_DEFAULT_DENY,
> + .ranges = {
> + {
> + .flags = KVM_MSR_FILTER_READ,
> + .nmsrs = 1,
> + .base = MSR_IA32_MISC_ENABLE,
> + .bitmap = (uint8_t *)&tdx_msr_test_allow_bits,
> + }, {
> + .flags = KVM_MSR_FILTER_WRITE,
> + .nmsrs = 1,
> + .base = MSR_IA32_POWER_CTL,
> + .bitmap = (uint8_t *)&tdx_msr_test_allow_bits,
> + },
> + },
> +};
> +
> +/*
> + * Verifies MSR read functionality.
> + */
> +void guest_msr_read(void)
> +{
> + uint64_t data;
> + uint64_t ret;
> +
> + ret = tdg_vp_vmcall_instruction_rdmsr(MSR_X2APIC_APIC_ICR, &data);
> + if (ret)
> + tdx_test_fatal(ret);
> +
> + ret = tdx_test_report_64bit_to_user_space(data);
> + if (ret)
> + tdx_test_fatal(ret);
> +
> + ret = tdg_vp_vmcall_instruction_rdmsr(MSR_IA32_MISC_ENABLE, &data);
> + if (ret)
> + tdx_test_fatal(ret);
> +
> + ret = tdx_test_report_64bit_to_user_space(data);
> + if (ret)
> + tdx_test_fatal(ret);
> +
> + /* We expect this call to fail since MSR_IA32_POWER_CTL is write only */
> + ret = tdg_vp_vmcall_instruction_rdmsr(MSR_IA32_POWER_CTL, &data);
> + if (ret) {
> + ret = tdx_test_report_64bit_to_user_space(ret);
> + if (ret)
> + tdx_test_fatal(ret);
> + } else {
> + tdx_test_fatal(-99);
> + }
> +
> + tdx_test_success();
> +}
> +
> +void verify_guest_msr_reads(void)
> +{
> + struct kvm_vm *vm;
> + struct kvm_vcpu *vcpu;
> +
> + uint64_t data;
> + int ret;
> +
> + vm = td_create();
> + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
> +
> + /*
> + * Set explicit MSR filter map to control access to the MSR registers
> + * used in the test.
> + */
> + printf("\t ... Setting test MSR filter\n");
> + ret = kvm_check_cap(KVM_CAP_X86_USER_SPACE_MSR);
> + TEST_ASSERT(ret, "KVM_CAP_X86_USER_SPACE_MSR is unavailable");
> + vm_enable_cap(vm, KVM_CAP_X86_USER_SPACE_MSR, KVM_MSR_EXIT_REASON_FILTER);
> +
> + ret = kvm_check_cap(KVM_CAP_X86_MSR_FILTER);
> + TEST_ASSERT(ret, "KVM_CAP_X86_MSR_FILTER is unavailable");
> +
> + ret = ioctl(vm->fd, KVM_X86_SET_MSR_FILTER, &tdx_msr_test_filter);
> + TEST_ASSERT(ret == 0,
> + "KVM_X86_SET_MSR_FILTER failed, ret: %i errno: %i (%s)",
> + ret, errno, strerror(errno));
> +
> + vcpu = td_vcpu_add(vm, 0, guest_msr_read);
> + td_finalize(vm);
> +
> + printf("Verifying guest msr reads:\n");
> +
> + printf("\t ... Setting test MSR values\n");
> + /* Write arbitrary values to the MSRs. */
> + vcpu_set_msr(vcpu, MSR_X2APIC_APIC_ICR, 4);
> + vcpu_set_msr(vcpu, MSR_IA32_MISC_ENABLE, 5);
> + vcpu_set_msr(vcpu, MSR_IA32_POWER_CTL, 6);
> +
> + printf("\t ... Running guest\n");
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + data = tdx_test_read_64bit_report_from_guest(vcpu);
> + TEST_ASSERT_EQ(data, 4);
> +
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + data = tdx_test_read_64bit_report_from_guest(vcpu);
> + TEST_ASSERT_EQ(data, 5);
> +
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + data = tdx_test_read_64bit_report_from_guest(vcpu);
> + TEST_ASSERT_EQ(data, TDG_VP_VMCALL_INVALID_OPERAND);
> +
> + td_vcpu_run(vcpu);
> + TDX_TEST_ASSERT_SUCCESS(vcpu);
> +
> + kvm_vm_free(vm);
> + printf("\t ... PASSED\n");
> +}
> +
> +/*
> + * Verifies MSR write functionality.
> + */
> +void guest_msr_write(void)
> +{
> + uint64_t ret;
> +
> + ret = tdg_vp_vmcall_instruction_wrmsr(MSR_X2APIC_APIC_ICR, 4);
> + if (ret)
> + tdx_test_fatal(ret);
> +
> + /* We expect this call to fail since MSR_IA32_MISC_ENABLE is read only */
> + ret = tdg_vp_vmcall_instruction_wrmsr(MSR_IA32_MISC_ENABLE, 5);
> + if (ret) {
> + ret = tdx_test_report_64bit_to_user_space(ret);
> + if (ret)
> + tdx_test_fatal(ret);
> + } else {
> + tdx_test_fatal(-99);
> + }
> +
> +
> + ret = tdg_vp_vmcall_instruction_wrmsr(MSR_IA32_POWER_CTL, 6);
> + if (ret)
> + tdx_test_fatal(ret);
> +
> + tdx_test_success();
> +}
> +
> +void verify_guest_msr_writes(void)
> +{
> + struct kvm_vcpu *vcpu;
> + struct kvm_vm *vm;
> +
> + uint64_t data;
> + int ret;
> +
> + vm = td_create();
> + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
> +
> + /*
> + * Set explicit MSR filter map to control access to the MSR registers
> + * used in the test.
> + */
> + printf("\t ... Setting test MSR filter\n");
> + ret = kvm_check_cap(KVM_CAP_X86_USER_SPACE_MSR);
> + TEST_ASSERT(ret, "KVM_CAP_X86_USER_SPACE_MSR is unavailable");
> + vm_enable_cap(vm, KVM_CAP_X86_USER_SPACE_MSR, KVM_MSR_EXIT_REASON_FILTER);
> +
> + ret = kvm_check_cap(KVM_CAP_X86_MSR_FILTER);
> + TEST_ASSERT(ret, "KVM_CAP_X86_MSR_FILTER is unavailable");
> +
> + ret = ioctl(vm->fd, KVM_X86_SET_MSR_FILTER, &tdx_msr_test_filter);
> + TEST_ASSERT(ret == 0,
> + "KVM_X86_SET_MSR_FILTER failed, ret: %i errno: %i (%s)",
> + ret, errno, strerror(errno));
> +
> + vcpu = td_vcpu_add(vm, 0, guest_msr_write);
> + td_finalize(vm);
> +
> + printf("Verifying guest msr writes:\n");
> +
> + printf("\t ... Running guest\n");
> + /* Only the write to MSR_IA32_MISC_ENABLE should trigger an exit */
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + data = tdx_test_read_64bit_report_from_guest(vcpu);
> + TEST_ASSERT_EQ(data, TDG_VP_VMCALL_INVALID_OPERAND);
> +
> + td_vcpu_run(vcpu);
> + TDX_TEST_ASSERT_SUCCESS(vcpu);
> +
> + printf("\t ... Verifying MSR values written by guest\n");
> +
> + TEST_ASSERT_EQ(vcpu_get_msr(vcpu, MSR_X2APIC_APIC_ICR), 4);
> + TEST_ASSERT_EQ(vcpu_get_msr(vcpu, MSR_IA32_MISC_ENABLE), 0x1800);
> + TEST_ASSERT_EQ(vcpu_get_msr(vcpu, MSR_IA32_POWER_CTL), 6);
> +
> + kvm_vm_free(vm);
> + printf("\t ... PASSED\n");
> +}
> +
> +
> int main(int argc, char **argv)
> {
> setbuf(stdout, NULL);
> @@ -531,6 +738,8 @@ int main(int argc, char **argv)
> run_in_new_process(&verify_get_td_vmcall_info);
> run_in_new_process(&verify_guest_writes);
> run_in_new_process(&verify_guest_reads);
> + run_in_new_process(&verify_guest_msr_writes);
> + run_in_new_process(&verify_guest_msr_reads);
>
> return 0;
> }


2024-03-01 20:26:55

by Binbin Wu

[permalink] [raw]
Subject: Re: [RFC PATCH v5 15/29] KVM: selftests: TDX: Add TDX MSR read/write tests



On 3/1/2024 8:00 PM, Binbin Wu wrote:
>
>
> On 12/13/2023 4:46 AM, Sagi Shahar wrote:
>> The test verifies reads and writes for MSR registers with different
>> access levels.
>>
>> Signed-off-by: Sagi Shahar <[email protected]>
>> Signed-off-by: Ackerley Tng <[email protected]>
>> Signed-off-by: Ryan Afranji <[email protected]>
>> ---
>>   .../selftests/kvm/include/x86_64/tdx/tdx.h    |   5 +
>>   .../selftests/kvm/lib/x86_64/tdx/tdx.c        |  27 +++
>>   .../selftests/kvm/x86_64/tdx_vm_tests.c       | 209 ++++++++++++++++++
>>   3 files changed, 241 insertions(+)
>>
>> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
>> b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
>> index 63788012bf94..85ba6aab79a7 100644
>> --- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
>> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
>> @@ -9,11 +9,16 @@
>>   #define TDG_VP_VMCALL_REPORT_FATAL_ERROR 0x10003
>>     #define TDG_VP_VMCALL_INSTRUCTION_IO 30
>> +#define TDG_VP_VMCALL_INSTRUCTION_RDMSR 31
>> +#define TDG_VP_VMCALL_INSTRUCTION_WRMSR 32
>> +
>>   void handle_userspace_tdg_vp_vmcall_exit(struct kvm_vcpu *vcpu);
>>   uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
>>                         uint64_t write, uint64_t *data);
>>   void tdg_vp_vmcall_report_fatal_error(uint64_t error_code, uint64_t
>> data_gpa);
>>   uint64_t tdg_vp_vmcall_get_td_vmcall_info(uint64_t *r11, uint64_t
>> *r12,
>>                       uint64_t *r13, uint64_t *r14);
>> +uint64_t tdg_vp_vmcall_instruction_rdmsr(uint64_t index, uint64_t
>> *ret_value);
>> +uint64_t tdg_vp_vmcall_instruction_wrmsr(uint64_t index, uint64_t
>> value);
>>     #endif // SELFTEST_TDX_TDX_H
>> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
>> b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
>> index e5a9e13c62e2..88ea6f2a6469 100644
>> --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
>> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
>> @@ -87,3 +87,30 @@ uint64_t tdg_vp_vmcall_get_td_vmcall_info(uint64_t
>> *r11, uint64_t *r12,
>>         return ret;
>>   }
>> +
>> +uint64_t tdg_vp_vmcall_instruction_rdmsr(uint64_t index, uint64_t
>> *ret_value)
>> +{
>> +    uint64_t ret;
>> +    struct tdx_hypercall_args args = {
>> +        .r11 = TDG_VP_VMCALL_INSTRUCTION_RDMSR,
>> +        .r12 = index,
>> +    };
>> +
>> +    ret = __tdx_hypercall(&args, TDX_HCALL_HAS_OUTPUT);
>> +
>> +    if (ret_value)
>> +        *ret_value = args.r11;
>> +
>> +    return ret;
>> +}
>> +
>> +uint64_t tdg_vp_vmcall_instruction_wrmsr(uint64_t index, uint64_t
>> value)
>> +{
>> +    struct tdx_hypercall_args args = {
>> +        .r11 = TDG_VP_VMCALL_INSTRUCTION_WRMSR,
>> +        .r12 = index,
>> +        .r13 = value,
>> +    };
>> +
>> +    return __tdx_hypercall(&args, 0);
>> +}
>> diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
>> b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
>> index 699cba36e9ce..5db3701cc6d9 100644
>> --- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
>> +++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
>> @@ -515,6 +515,213 @@ void verify_guest_reads(void)
>>       printf("\t ... PASSED\n");
>>   }
>>   +/*
>> + * Define a filter which denies all MSR access except the following:
>> + * MSR_X2APIC_APIC_ICR: Allow read/write access (allowed by default)
>
> The default filtering behavior of tdx_msr_test_filter is
> KVM_MSR_FILTER_DEFAULT_DENY, and MSR_X2APIC_APIC_ICR is not covered
> by any specific range, shouldn't MSR_X2APIC_APIC_ICR be denied by
> default?

Sorry, please ignore this comment.

I see the description from the KVM document later:
"x2APIC MSR accesses cannot be filtered (KVM silently ignores filters
that cover any x2APIC MSRs)."



2024-03-02 09:56:37

by Binbin Wu

[permalink] [raw]
Subject: Re: [RFC PATCH v5 16/29] KVM: selftests: TDX: Add TDX HLT exit test



On 12/13/2023 4:46 AM, Sagi Shahar wrote:
> The test verifies that the guest runs TDVMCALL<INSTRUCTION.HLT> and the
> guest vCPU enters the halted state.
>
> Signed-off-by: Erdem Aktas <[email protected]>
> Signed-off-by: Sagi Shahar <[email protected]>
> Signed-off-by: Ackerley Tng <[email protected]>
> Signed-off-by: Ryan Afranji <[email protected]>
> ---
> .../selftests/kvm/include/x86_64/tdx/tdx.h | 2 +
> .../selftests/kvm/lib/x86_64/tdx/tdx.c | 10 +++
> .../selftests/kvm/x86_64/tdx_vm_tests.c | 78 +++++++++++++++++++
> 3 files changed, 90 insertions(+)
>
> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> index 85ba6aab79a7..b18e39d20498 100644
> --- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> @@ -8,6 +8,7 @@
> #define TDG_VP_VMCALL_GET_TD_VM_CALL_INFO 0x10000
> #define TDG_VP_VMCALL_REPORT_FATAL_ERROR 0x10003
>
> +#define TDG_VP_VMCALL_INSTRUCTION_HLT 12
> #define TDG_VP_VMCALL_INSTRUCTION_IO 30
> #define TDG_VP_VMCALL_INSTRUCTION_RDMSR 31
> #define TDG_VP_VMCALL_INSTRUCTION_WRMSR 32
> @@ -20,5 +21,6 @@ uint64_t tdg_vp_vmcall_get_td_vmcall_info(uint64_t *r11, uint64_t *r12,
> uint64_t *r13, uint64_t *r14);
> uint64_t tdg_vp_vmcall_instruction_rdmsr(uint64_t index, uint64_t *ret_value);
> uint64_t tdg_vp_vmcall_instruction_wrmsr(uint64_t index, uint64_t value);
> +uint64_t tdg_vp_vmcall_instruction_hlt(uint64_t interrupt_blocked_flag);
>
> #endif // SELFTEST_TDX_TDX_H
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> index 88ea6f2a6469..9485bafedc38 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> @@ -114,3 +114,13 @@ uint64_t tdg_vp_vmcall_instruction_wrmsr(uint64_t index, uint64_t value)
>
> return __tdx_hypercall(&args, 0);
> }
> +
> +uint64_t tdg_vp_vmcall_instruction_hlt(uint64_t interrupt_blocked_flag)
> +{
> + struct tdx_hypercall_args args = {
> + .r11 = TDG_VP_VMCALL_INSTRUCTION_HLT,
> + .r12 = interrupt_blocked_flag,
> + };
> +
> + return __tdx_hypercall(&args, 0);
> +}
> diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> index 5db3701cc6d9..5fae4c6e5f95 100644
> --- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> +++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> @@ -721,6 +721,83 @@ void verify_guest_msr_writes(void)
> printf("\t ... PASSED\n");
> }
>
> +/*
> + * Verifies HLT functionality.
> + */
> +void guest_hlt(void)
> +{
> + uint64_t ret;
> + uint64_t interrupt_blocked_flag;
> +
> + interrupt_blocked_flag = 0;
> + ret = tdg_vp_vmcall_instruction_hlt(interrupt_blocked_flag);
> + if (ret)
> + tdx_test_fatal(ret);
> +
> + tdx_test_success();
> +}
> +
> +void _verify_guest_hlt(int signum);
> +
> +void wake_me(int interval)
> +{
> + struct sigaction action;
> +
> + action.sa_handler = _verify_guest_hlt;
> + sigemptyset(&action.sa_mask);
> + action.sa_flags = 0;
> +
> + TEST_ASSERT(sigaction(SIGALRM, &action, NULL) == 0,
> + "Could not set the alarm handler!");
> +
> + alarm(interval);
> +}
> +
> +void _verify_guest_hlt(int signum)
> +{
> + struct kvm_vm *vm;
> + static struct kvm_vcpu *vcpu;
> +
> + /*
> + * This function will also be called by SIGALRM handler to check the
> + * vCPU MP State. If vm has been initialized, then we are in the signal
> + * handler. Check the MP state and let the guest run again.
> + */
> + if (vcpu != NULL) {

What about the following case, where there is a bug in KVM such that:

In the guest, tdg_vp_vmcall_instruction_hlt() returns 0, but the
vcpu is not actually halted. The guest will then call tdx_test_success().

The first call of _verify_guest_hlt() will then call kvm_vm_free(vm) to free
the vm, which also frees the vcpu, and 1 second later, on this path, the
vcpu will be accessed after free.

> + struct kvm_mp_state mp_state;
> +
> + vcpu_mp_state_get(vcpu, &mp_state);
> + TEST_ASSERT_EQ(mp_state.mp_state, KVM_MP_STATE_HALTED);
> +
> + /* Let the guest run and finish the test. */
> + mp_state.mp_state = KVM_MP_STATE_RUNNABLE;
> + vcpu_mp_state_set(vcpu, &mp_state);
> + return;
> + }
> +
> + vm = td_create();
> + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
> + vcpu = td_vcpu_add(vm, 0, guest_hlt);
> + td_finalize(vm);
> +
> + printf("Verifying HLT:\n");
> +
> + printf("\t ... Running guest\n");
> +
> + /* Wait 1 second for guest to execute HLT */
> + wake_me(1);
> + td_vcpu_run(vcpu);
> +
> + TDX_TEST_ASSERT_SUCCESS(vcpu);
> +
> + kvm_vm_free(vm);
> + printf("\t ... PASSED\n");
> +}
> +
> +void verify_guest_hlt(void)
> +{
> + _verify_guest_hlt(0);
> +}
>
> int main(int argc, char **argv)
> {
> @@ -740,6 +817,7 @@ int main(int argc, char **argv)
> run_in_new_process(&verify_guest_reads);
> run_in_new_process(&verify_guest_msr_writes);
> run_in_new_process(&verify_guest_msr_reads);
> + run_in_new_process(&verify_guest_hlt);
>
> return 0;
> }


2024-03-02 10:25:20

by Binbin Wu

[permalink] [raw]
Subject: Re: [RFC PATCH v5 18/29] KVM: selftests: TDX: Add TDX MMIO writes test



On 12/13/2023 4:46 AM, Sagi Shahar wrote:
> The test verifies MMIO writes of various sizes from the guest to the host.
>
> Signed-off-by: Sagi Shahar <[email protected]>
> Signed-off-by: Ackerley Tng <[email protected]>
> Signed-off-by: Ryan Afranji <[email protected]>

Patch 17 and 18 test the part that guest has received the #VE caused by
MMIO access, so calls the td vmcall to kvm to do the emulation.

Should the generation of #VE due to MMIO access be covered as well?

> ---
> .../selftests/kvm/include/x86_64/tdx/tdx.h | 2 +
> .../selftests/kvm/lib/x86_64/tdx/tdx.c | 14 +++
> .../selftests/kvm/x86_64/tdx_vm_tests.c | 85 +++++++++++++++++++
> 3 files changed, 101 insertions(+)
>
> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> index 13ce60df5684..502b670ea699 100644
> --- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> @@ -25,5 +25,7 @@ uint64_t tdg_vp_vmcall_instruction_wrmsr(uint64_t index, uint64_t value);
> uint64_t tdg_vp_vmcall_instruction_hlt(uint64_t interrupt_blocked_flag);
> uint64_t tdg_vp_vmcall_ve_request_mmio_read(uint64_t address, uint64_t size,
> uint64_t *data_out);
> +uint64_t tdg_vp_vmcall_ve_request_mmio_write(uint64_t address, uint64_t size,
> + uint64_t data_in);
>
> #endif // SELFTEST_TDX_TDX_H
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> index b19f07ebc0e7..f4afa09f7e3d 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> @@ -143,3 +143,17 @@ uint64_t tdg_vp_vmcall_ve_request_mmio_read(uint64_t address, uint64_t size,
>
> return ret;
> }
> +
> +uint64_t tdg_vp_vmcall_ve_request_mmio_write(uint64_t address, uint64_t size,
> + uint64_t data_in)
> +{
> + struct tdx_hypercall_args args = {
> + .r11 = TDG_VP_VMCALL_VE_REQUEST_MMIO,
> + .r12 = size,
> + .r13 = TDG_VP_VMCALL_VE_REQUEST_MMIO_WRITE,
> + .r14 = address,
> + .r15 = data_in,
> + };
> +
> + return __tdx_hypercall(&args, 0);
> +}
> diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> index 48902b69d13e..5e28ba828a92 100644
> --- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> +++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> @@ -885,6 +885,90 @@ void verify_mmio_reads(void)
> printf("\t ... PASSED\n");
> }
>
> +void guest_mmio_writes(void)
> +{
> + uint64_t ret;
> +
> + ret = tdg_vp_vmcall_ve_request_mmio_write(TDX_MMIO_TEST_ADDR, 1, 0x12);
> + if (ret)
> + tdx_test_fatal(ret);
> +
> + ret = tdg_vp_vmcall_ve_request_mmio_write(TDX_MMIO_TEST_ADDR, 2, 0x1234);
> + if (ret)
> + tdx_test_fatal(ret);
> +
> + ret = tdg_vp_vmcall_ve_request_mmio_write(TDX_MMIO_TEST_ADDR, 4, 0x12345678);
> + if (ret)
> + tdx_test_fatal(ret);
> +
> + ret = tdg_vp_vmcall_ve_request_mmio_write(TDX_MMIO_TEST_ADDR, 8, 0x1234567890ABCDEF);
> + if (ret)
> + tdx_test_fatal(ret);
> +
> + // Write across page boundary.
> + ret = tdg_vp_vmcall_ve_request_mmio_write(PAGE_SIZE - 1, 8, 0);
> + if (ret)
> + tdx_test_fatal(ret);
> +
> + tdx_test_success();
> +}
> +
> +/*
> + * Verifies guest MMIO writes.
> + */
> +void verify_mmio_writes(void)
> +{
> + struct kvm_vm *vm;
> + struct kvm_vcpu *vcpu;
> +
> + uint8_t byte_1;
> + uint16_t byte_2;
> + uint32_t byte_4;
> + uint64_t byte_8;
> +
> + vm = td_create();
> + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
> + vcpu = td_vcpu_add(vm, 0, guest_mmio_writes);
> + td_finalize(vm);
> +
> + printf("Verifying TD MMIO writes:\n");
> +
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + TDX_TEST_ASSERT_MMIO(vcpu, TDX_MMIO_TEST_ADDR, 1, TDG_VP_VMCALL_VE_REQUEST_MMIO_WRITE);
> + byte_1 = *(uint8_t *)(vcpu->run->mmio.data);
> +
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + TDX_TEST_ASSERT_MMIO(vcpu, TDX_MMIO_TEST_ADDR, 2, TDG_VP_VMCALL_VE_REQUEST_MMIO_WRITE);
> + byte_2 = *(uint16_t *)(vcpu->run->mmio.data);
> +
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + TDX_TEST_ASSERT_MMIO(vcpu, TDX_MMIO_TEST_ADDR, 4, TDG_VP_VMCALL_VE_REQUEST_MMIO_WRITE);
> + byte_4 = *(uint32_t *)(vcpu->run->mmio.data);
> +
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + TDX_TEST_ASSERT_MMIO(vcpu, TDX_MMIO_TEST_ADDR, 8, TDG_VP_VMCALL_VE_REQUEST_MMIO_WRITE);
> + byte_8 = *(uint64_t *)(vcpu->run->mmio.data);
> +
> + TEST_ASSERT_EQ(byte_1, 0x12);
> + TEST_ASSERT_EQ(byte_2, 0x1234);
> + TEST_ASSERT_EQ(byte_4, 0x12345678);
> + TEST_ASSERT_EQ(byte_8, 0x1234567890ABCDEF);
> +
> + td_vcpu_run(vcpu);
> + TEST_ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_SYSTEM_EVENT);
> + TEST_ASSERT_EQ(vcpu->run->system_event.data[1], TDG_VP_VMCALL_INVALID_OPERAND);
> +
> + td_vcpu_run(vcpu);
> + TDX_TEST_ASSERT_SUCCESS(vcpu);
> +
> + kvm_vm_free(vm);
> + printf("\t ... PASSED\n");
> +}
> +
> int main(int argc, char **argv)
> {
> setbuf(stdout, NULL);
> @@ -905,6 +989,7 @@ int main(int argc, char **argv)
> run_in_new_process(&verify_guest_msr_reads);
> run_in_new_process(&verify_guest_hlt);
> run_in_new_process(&verify_mmio_reads);
> + run_in_new_process(&verify_mmio_writes);
>
> return 0;
> }


2024-03-04 02:49:46

by Yan Zhao

Subject: Re: [RFC PATCH v5 10/29] KVM: selftests: TDX: Adding test case for TDX port IO

On Tue, Dec 12, 2023 at 12:46:25PM -0800, Sagi Shahar wrote:
> From: Erdem Aktas <[email protected]>
>
> Verifies TDVMCALL<INSTRUCTION.IO> READ and WRITE operations.
>
> Signed-off-by: Erdem Aktas <[email protected]>
> Signed-off-by: Sagi Shahar <[email protected]>
> Signed-off-by: Ackerley Tng <[email protected]>
> Signed-off-by: Ryan Afranji <[email protected]>
> ---
> .../kvm/include/x86_64/tdx/test_util.h | 34 ++++++++
> .../selftests/kvm/x86_64/tdx_vm_tests.c | 82 +++++++++++++++++++
> 2 files changed, 116 insertions(+)
>
> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
> index 6d69921136bd..95a5d5be7f0b 100644
> --- a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
> @@ -9,6 +9,40 @@
> #define TDX_TEST_SUCCESS_PORT 0x30
> #define TDX_TEST_SUCCESS_SIZE 4
>
> +/**
> + * Assert that some IO operation involving tdg_vp_vmcall_instruction_io() was
> + * called in the guest.
> + */
> +#define TDX_TEST_ASSERT_IO(VCPU, PORT, SIZE, DIR) \
> + do { \
> + TEST_ASSERT((VCPU)->run->exit_reason == KVM_EXIT_IO, \
> + "Got exit_reason other than KVM_EXIT_IO: %u (%s)\n", \
> + (VCPU)->run->exit_reason, \
> + exit_reason_str((VCPU)->run->exit_reason)); \
> + \
> + TEST_ASSERT(((VCPU)->run->exit_reason == KVM_EXIT_IO) && \
> + ((VCPU)->run->io.port == (PORT)) && \
> + ((VCPU)->run->io.size == (SIZE)) && \
> + ((VCPU)->run->io.direction == (DIR)), \
> + "Got unexpected IO exit values: %u (%s) %d %d %d\n", \
> + (VCPU)->run->exit_reason, \
> + exit_reason_str((VCPU)->run->exit_reason), \
> + (VCPU)->run->io.port, (VCPU)->run->io.size, \
> + (VCPU)->run->io.direction); \
> + } while (0)
> +
> +/**
> + * Check and report if there was some failure in the guest, either an exception
> + * like a triple fault, or if a tdx_test_fatal() was hit.
> + */
> +#define TDX_TEST_CHECK_GUEST_FAILURE(VCPU) \
> + do { \
> + if ((VCPU)->run->exit_reason == KVM_EXIT_SYSTEM_EVENT) \
> + TEST_FAIL("Guest reported error. error code: %lld (0x%llx)\n", \
> + (VCPU)->run->system_event.data[1], \
> + (VCPU)->run->system_event.data[1]); \
> + } while (0)
> +
> /**
> * Assert that tdx_test_success() was called in the guest.
> */
> diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> index 8638c7bbedaa..75467c407ca7 100644
> --- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> +++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> @@ -2,6 +2,7 @@
>
> #include <signal.h>
> #include "kvm_util_base.h"
> +#include "tdx/tdcall.h"
> #include "tdx/tdx.h"
> #include "tdx/tdx_util.h"
> #include "tdx/test_util.h"
> @@ -74,6 +75,86 @@ void verify_report_fatal_error(void)
> printf("\t ... PASSED\n");
> }
>
> +#define TDX_IOEXIT_TEST_PORT 0x50
> +
> +/*
> + * Verifies IO functionality by writing a |value| to a predefined port.
> + * Verifies that the read value is |value| + 1 from the same port.
> + * If all the tests are passed then write a value to port TDX_TEST_PORT
> + */
> +void guest_ioexit(void)
> +{
> + uint64_t data_out, data_in, delta;
> + uint64_t ret;
> +
> + data_out = 0xAB;
> + ret = tdg_vp_vmcall_instruction_io(TDX_IOEXIT_TEST_PORT, 1,
> + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE,
> + &data_out);
> + if (ret)
> + tdx_test_fatal(ret);
> +
> + ret = tdg_vp_vmcall_instruction_io(TDX_IOEXIT_TEST_PORT, 1,
> + TDG_VP_VMCALL_INSTRUCTION_IO_READ,
> + &data_in);
> + if (ret)
> + tdx_test_fatal(ret);
> +
> + delta = data_in - data_out;
> + if (delta != 1)
> + tdx_test_fatal(ret);
> +
> + tdx_test_success();
> +}
> +
> +void verify_td_ioexit(void)
> +{
> + struct kvm_vm *vm;
> + struct kvm_vcpu *vcpu;
> +
> + uint32_t port_data;
> +
> + vm = td_create();
> + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
> + vcpu = td_vcpu_add(vm, 0, guest_ioexit);
> + td_finalize(vm);
> +
> + printf("Verifying TD IO Exit:\n");
> +
> + /* Wait for guest to do a IO write */
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
This check is in vain, because the first VMExit from the vcpu run is always
KVM_EXIT_IO, caused by tdg_vp_vmcall_instruction_io().


> + TDX_TEST_ASSERT_IO(vcpu, TDX_IOEXIT_TEST_PORT, 1,
> + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
> + port_data = *(uint8_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
> +
> + printf("\t ... IO WRITE: OK\n");
So, even if there's an error in emulating the write to TDX_IOEXIT_TEST_PORT,
and the guest would then detect the failure and trigger tdx_test_fatal(), this
line will still print "IO WRITE: OK", which is not right.

> +
> + /*
> + * Wait for the guest to do a IO read. Provide the previous written data
> + * + 1 back to the guest
> + */
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
This check is in vain too, as in the write case.

> + TDX_TEST_ASSERT_IO(vcpu, TDX_IOEXIT_TEST_PORT, 1,
> + TDG_VP_VMCALL_INSTRUCTION_IO_READ);
> + *(uint8_t *)((void *)vcpu->run + vcpu->run->io.data_offset) = port_data + 1;
> +
> + printf("\t ... IO READ: OK\n");
As in the write case, this line should not be printed until after the guest
has finished checking the return code.

> +
> + /*
> + * Wait for the guest to complete execution successfully. The read
> + * value is checked within the guest.
> + */
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + TDX_TEST_ASSERT_SUCCESS(vcpu);
> +
> + printf("\t ... IO verify read/write values: OK\n");
> + kvm_vm_free(vm);
> + printf("\t ... PASSED\n");
> +}
> +
> int main(int argc, char **argv)
> {
> setbuf(stdout, NULL);
> @@ -85,6 +166,7 @@ int main(int argc, char **argv)
>
> run_in_new_process(&verify_td_lifecycle);
> run_in_new_process(&verify_report_fatal_error);
> + run_in_new_process(&verify_td_ioexit);
>
> return 0;
> }
> --
> 2.43.0.472.g3155946c3a-goog
>
>

2024-03-04 09:49:19

by Yan Zhao

Subject: Re: [RFC PATCH v5 10/29] KVM: selftests: TDX: Adding test case for TDX port IO

On Mon, Mar 04, 2024 at 05:16:53PM +0800, Binbin Wu wrote:
>
>
> On 3/4/2024 10:19 AM, Yan Zhao wrote:
> > On Tue, Dec 12, 2023 at 12:46:25PM -0800, Sagi Shahar wrote:
> > > From: Erdem Aktas <[email protected]>
> > >
> > > Verifies TDVMCALL<INSTRUCTION.IO> READ and WRITE operations.
> > >
> > > Signed-off-by: Erdem Aktas <[email protected]>
> > > Signed-off-by: Sagi Shahar <[email protected]>
> > > Signed-off-by: Ackerley Tng <[email protected]>
> > > Signed-off-by: Ryan Afranji <[email protected]>
> > > ---
> > > .../kvm/include/x86_64/tdx/test_util.h | 34 ++++++++
> > > .../selftests/kvm/x86_64/tdx_vm_tests.c | 82 +++++++++++++++++++
> > > 2 files changed, 116 insertions(+)
> > >
> > > diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
> > > index 6d69921136bd..95a5d5be7f0b 100644
> > > --- a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
> > > +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
> > > @@ -9,6 +9,40 @@
> > > #define TDX_TEST_SUCCESS_PORT 0x30
> > > #define TDX_TEST_SUCCESS_SIZE 4
> > > +/**
> > > + * Assert that some IO operation involving tdg_vp_vmcall_instruction_io() was
> > > + * called in the guest.
> > > + */
> > > +#define TDX_TEST_ASSERT_IO(VCPU, PORT, SIZE, DIR) \
> > > + do { \
> > > + TEST_ASSERT((VCPU)->run->exit_reason == KVM_EXIT_IO, \
> > > + "Got exit_reason other than KVM_EXIT_IO: %u (%s)\n", \
> > > + (VCPU)->run->exit_reason, \
> > > + exit_reason_str((VCPU)->run->exit_reason)); \
> > > + \
> > > + TEST_ASSERT(((VCPU)->run->exit_reason == KVM_EXIT_IO) && \
> > > + ((VCPU)->run->io.port == (PORT)) && \
> > > + ((VCPU)->run->io.size == (SIZE)) && \
> > > + ((VCPU)->run->io.direction == (DIR)), \
> > > + "Got unexpected IO exit values: %u (%s) %d %d %d\n", \
> > > + (VCPU)->run->exit_reason, \
> > > + exit_reason_str((VCPU)->run->exit_reason), \
> > > + (VCPU)->run->io.port, (VCPU)->run->io.size, \
> > > + (VCPU)->run->io.direction); \
> > > + } while (0)
> > > +
> > > +/**
> > > + * Check and report if there was some failure in the guest, either an exception
> > > + * like a triple fault, or if a tdx_test_fatal() was hit.
> > > + */
> > > +#define TDX_TEST_CHECK_GUEST_FAILURE(VCPU) \
> > > + do { \
> > > + if ((VCPU)->run->exit_reason == KVM_EXIT_SYSTEM_EVENT) \
> > > + TEST_FAIL("Guest reported error. error code: %lld (0x%llx)\n", \
> > > + (VCPU)->run->system_event.data[1], \
> > > + (VCPU)->run->system_event.data[1]); \
> > > + } while (0)
> > > +
> > > /**
> > > * Assert that tdx_test_success() was called in the guest.
> > > */
> > > diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> > > index 8638c7bbedaa..75467c407ca7 100644
> > > --- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> > > +++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> > > @@ -2,6 +2,7 @@
> > > #include <signal.h>
> > > #include "kvm_util_base.h"
> > > +#include "tdx/tdcall.h"
> > > #include "tdx/tdx.h"
> > > #include "tdx/tdx_util.h"
> > > #include "tdx/test_util.h"
> > > @@ -74,6 +75,86 @@ void verify_report_fatal_error(void)
> > > printf("\t ... PASSED\n");
> > > }
> > > +#define TDX_IOEXIT_TEST_PORT 0x50
> > > +
> > > +/*
> > > + * Verifies IO functionality by writing a |value| to a predefined port.
> > > + * Verifies that the read value is |value| + 1 from the same port.
> > > + * If all the tests are passed then write a value to port TDX_TEST_PORT
> > > + */
> > > +void guest_ioexit(void)
> > > +{
> > > + uint64_t data_out, data_in, delta;
> > > + uint64_t ret;
> > > +
> > > + data_out = 0xAB;
> > > + ret = tdg_vp_vmcall_instruction_io(TDX_IOEXIT_TEST_PORT, 1,
> > > + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE,
> > > + &data_out);
> > > + if (ret)
> > > + tdx_test_fatal(ret);
> > > +
> > > + ret = tdg_vp_vmcall_instruction_io(TDX_IOEXIT_TEST_PORT, 1,
> > > + TDG_VP_VMCALL_INSTRUCTION_IO_READ,
> > > + &data_in);
> > > + if (ret)
> > > + tdx_test_fatal(ret);
> > > +
> > > + delta = data_in - data_out;
> > > + if (delta != 1)
> > > + tdx_test_fatal(ret);
> > > +
> > > + tdx_test_success();
> > > +}
> > > +
> > > +void verify_td_ioexit(void)
> > > +{
> > > + struct kvm_vm *vm;
> > > + struct kvm_vcpu *vcpu;
> > > +
> > > + uint32_t port_data;
> > > +
> > > + vm = td_create();
> > > + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
> > > + vcpu = td_vcpu_add(vm, 0, guest_ioexit);
> > > + td_finalize(vm);
> > > +
> > > + printf("Verifying TD IO Exit:\n");
> > > +
> > > + /* Wait for guest to do a IO write */
> > > + td_vcpu_run(vcpu);
> > > + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> > This check is in vain, because the first VMExit from the vcpu run is always
> > KVM_EXIT_IO, caused by tdg_vp_vmcall_instruction_io().
>
> I think tdg_vp_vmcall_instruction_io() could fail if RCX (GPR select)
> doesn't meet the requirement (some bits must be 0).
> Although RCX is set by guest code (in the selftest, it is set in
> __tdx_hypercall()) and will not trigger the error, the check can still be
> used as a guard to make sure the guest doesn't pass an invalid RCX.
>
Right. This check can be kept in case a failure is delivered to the TD by the
host kernel directly, though it cannot guard against a failure that triggers
an exit to user space (e.g. if the kernel TDX code has a bug).
>
> >
> >
> > > + TDX_TEST_ASSERT_IO(vcpu, TDX_IOEXIT_TEST_PORT, 1,
> > > + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
> > > + port_data = *(uint8_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
> > > +
> > > + printf("\t ... IO WRITE: OK\n");
> > So, even if there's an error in emulating the write to TDX_IOEXIT_TEST_PORT,
> > and the guest would then detect the failure and trigger tdx_test_fatal(),
> > this line will still print "IO WRITE: OK", which is not right.
> >
> > > +
> > > + /*
> > > + * Wait for the guest to do a IO read. Provide the previous written data
> > > + * + 1 back to the guest
> > > + */
> > > + td_vcpu_run(vcpu);
> > > + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> > This check is in vain too, as in the write case.
> >
> > > + TDX_TEST_ASSERT_IO(vcpu, TDX_IOEXIT_TEST_PORT, 1,
> > > + TDG_VP_VMCALL_INSTRUCTION_IO_READ);
> > > + *(uint8_t *)((void *)vcpu->run + vcpu->run->io.data_offset) = port_data + 1;
> > > +
> > > + printf("\t ... IO READ: OK\n");
> > As in the write case, this line should not be printed until after the guest
> > has finished checking the return code.
> >
> > > +
> > > + /*
> > > + * Wait for the guest to complete execution successfully. The read
> > > + * value is checked within the guest.
> > > + */
> > > + td_vcpu_run(vcpu);
> > > + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> > > + TDX_TEST_ASSERT_SUCCESS(vcpu);
> > > +
> > > + printf("\t ... IO verify read/write values: OK\n");
> > > + kvm_vm_free(vm);
> > > + printf("\t ... PASSED\n");
> > > +}
> > > +
> > > int main(int argc, char **argv)
> > > {
> > > setbuf(stdout, NULL);
> > > @@ -85,6 +166,7 @@ int main(int argc, char **argv)
> > > run_in_new_process(&verify_td_lifecycle);
> > > run_in_new_process(&verify_report_fatal_error);
> > > + run_in_new_process(&verify_td_ioexit);
> > > return 0;
> > > }
> > > --
> > > 2.43.0.472.g3155946c3a-goog
> > >
> > >
>

2024-03-04 15:57:40

by Binbin Wu

Subject: Re: [RFC PATCH v5 24/29] KVM: selftests: Expose _vm_vaddr_alloc



On 12/13/2023 4:46 AM, Sagi Shahar wrote:
> From: Ackerley Tng <[email protected]>
>
> vm_vaddr_alloc always allocates memory in memslot 0. This allows users
> of this function to choose which memslot to allocate virtual memory
> in.

Nit: The patch exposes ____vm_vaddr_alloc() instead of _vm_vaddr_alloc().

> Signed-off-by: Ackerley Tng <[email protected]>
> Signed-off-by: Ryan Afranji <[email protected]>
> Signed-off-by: Sagi Shahar <[email protected]>
> ---
> tools/testing/selftests/kvm/include/kvm_util_base.h | 3 +++
> tools/testing/selftests/kvm/lib/kvm_util.c | 6 +++---
> 2 files changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
> index efd7ae8abb20..5dbebf5cfd07 100644
> --- a/tools/testing/selftests/kvm/include/kvm_util_base.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
> @@ -561,6 +561,9 @@ void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot);
> struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id);
> void vm_populate_vaddr_bitmap(struct kvm_vm *vm);
> vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
> +vm_vaddr_t ____vm_vaddr_alloc(struct kvm_vm *vm, size_t sz,
> + vm_vaddr_t vaddr_min, vm_paddr_t paddr_min,
> + uint32_t data_memslot, bool encrypt);
> vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
> vm_vaddr_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
> enum kvm_mem_region_type type);
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index 28780fa1f0f2..d024abc5379c 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -1410,9 +1410,9 @@ vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz,
> * a unique set of pages, with the minimum real allocation being at least
> * a page.
> */
> -static vm_vaddr_t ____vm_vaddr_alloc(struct kvm_vm *vm, size_t sz,
> - vm_vaddr_t vaddr_min, vm_paddr_t paddr_min,
> - uint32_t data_memslot, bool encrypt)
> +vm_vaddr_t ____vm_vaddr_alloc(struct kvm_vm *vm, size_t sz,
> + vm_vaddr_t vaddr_min, vm_paddr_t paddr_min,
> + uint32_t data_memslot, bool encrypt)
> {
> uint64_t pages = (sz >> vm->page_shift) + ((sz % vm->page_size) != 0);
>


2024-03-04 16:52:32

by Binbin Wu

Subject: Re: [RFC PATCH v5 10/29] KVM: selftests: TDX: Adding test case for TDX port IO



On 3/4/2024 10:19 AM, Yan Zhao wrote:
> On Tue, Dec 12, 2023 at 12:46:25PM -0800, Sagi Shahar wrote:
>> From: Erdem Aktas <[email protected]>
>>
>> Verifies TDVMCALL<INSTRUCTION.IO> READ and WRITE operations.
>>
>> Signed-off-by: Erdem Aktas <[email protected]>
>> Signed-off-by: Sagi Shahar <[email protected]>
>> Signed-off-by: Ackerley Tng <[email protected]>
>> Signed-off-by: Ryan Afranji <[email protected]>
>> ---
>> .../kvm/include/x86_64/tdx/test_util.h | 34 ++++++++
>> .../selftests/kvm/x86_64/tdx_vm_tests.c | 82 +++++++++++++++++++
>> 2 files changed, 116 insertions(+)
>>
>> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
>> index 6d69921136bd..95a5d5be7f0b 100644
>> --- a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
>> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
>> @@ -9,6 +9,40 @@
>> #define TDX_TEST_SUCCESS_PORT 0x30
>> #define TDX_TEST_SUCCESS_SIZE 4
>>
>> +/**
>> + * Assert that some IO operation involving tdg_vp_vmcall_instruction_io() was
>> + * called in the guest.
>> + */
>> +#define TDX_TEST_ASSERT_IO(VCPU, PORT, SIZE, DIR) \
>> + do { \
>> + TEST_ASSERT((VCPU)->run->exit_reason == KVM_EXIT_IO, \
>> + "Got exit_reason other than KVM_EXIT_IO: %u (%s)\n", \
>> + (VCPU)->run->exit_reason, \
>> + exit_reason_str((VCPU)->run->exit_reason)); \
>> + \
>> + TEST_ASSERT(((VCPU)->run->exit_reason == KVM_EXIT_IO) && \
>> + ((VCPU)->run->io.port == (PORT)) && \
>> + ((VCPU)->run->io.size == (SIZE)) && \
>> + ((VCPU)->run->io.direction == (DIR)), \
>> + "Got unexpected IO exit values: %u (%s) %d %d %d\n", \
>> + (VCPU)->run->exit_reason, \
>> + exit_reason_str((VCPU)->run->exit_reason), \
>> + (VCPU)->run->io.port, (VCPU)->run->io.size, \
>> + (VCPU)->run->io.direction); \
>> + } while (0)
>> +
>> +/**
>> + * Check and report if there was some failure in the guest, either an exception
>> + * like a triple fault, or if a tdx_test_fatal() was hit.
>> + */
>> +#define TDX_TEST_CHECK_GUEST_FAILURE(VCPU) \
>> + do { \
>> + if ((VCPU)->run->exit_reason == KVM_EXIT_SYSTEM_EVENT) \
>> + TEST_FAIL("Guest reported error. error code: %lld (0x%llx)\n", \
>> + (VCPU)->run->system_event.data[1], \
>> + (VCPU)->run->system_event.data[1]); \
>> + } while (0)
>> +
>> /**
>> * Assert that tdx_test_success() was called in the guest.
>> */
>> diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
>> index 8638c7bbedaa..75467c407ca7 100644
>> --- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
>> +++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
>> @@ -2,6 +2,7 @@
>>
>> #include <signal.h>
>> #include "kvm_util_base.h"
>> +#include "tdx/tdcall.h"
>> #include "tdx/tdx.h"
>> #include "tdx/tdx_util.h"
>> #include "tdx/test_util.h"
>> @@ -74,6 +75,86 @@ void verify_report_fatal_error(void)
>> printf("\t ... PASSED\n");
>> }
>>
>> +#define TDX_IOEXIT_TEST_PORT 0x50
>> +
>> +/*
>> + * Verifies IO functionality by writing a |value| to a predefined port.
>> + * Verifies that the read value is |value| + 1 from the same port.
>> + * If all the tests are passed then write a value to port TDX_TEST_PORT
>> + */
>> +void guest_ioexit(void)
>> +{
>> + uint64_t data_out, data_in, delta;
>> + uint64_t ret;
>> +
>> + data_out = 0xAB;
>> + ret = tdg_vp_vmcall_instruction_io(TDX_IOEXIT_TEST_PORT, 1,
>> + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE,
>> + &data_out);
>> + if (ret)
>> + tdx_test_fatal(ret);
>> +
>> + ret = tdg_vp_vmcall_instruction_io(TDX_IOEXIT_TEST_PORT, 1,
>> + TDG_VP_VMCALL_INSTRUCTION_IO_READ,
>> + &data_in);
>> + if (ret)
>> + tdx_test_fatal(ret);
>> +
>> + delta = data_in - data_out;
>> + if (delta != 1)
>> + tdx_test_fatal(ret);
>> +
>> + tdx_test_success();
>> +}
>> +
>> +void verify_td_ioexit(void)
>> +{
>> + struct kvm_vm *vm;
>> + struct kvm_vcpu *vcpu;
>> +
>> + uint32_t port_data;
>> +
>> + vm = td_create();
>> + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
>> + vcpu = td_vcpu_add(vm, 0, guest_ioexit);
>> + td_finalize(vm);
>> +
>> + printf("Verifying TD IO Exit:\n");
>> +
>> + /* Wait for guest to do a IO write */
>> + td_vcpu_run(vcpu);
>> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> This check is in vain, because the first VMExit from the vcpu run is always
> KVM_EXIT_IO, caused by tdg_vp_vmcall_instruction_io().

I think tdg_vp_vmcall_instruction_io() could fail if RCX (GPR select) doesn't
meet the requirement (some bits must be 0).
Although RCX is set by guest code (in the selftest, it is set in
__tdx_hypercall()) and will not trigger the error, the check can still be used
as a guard to make sure the guest doesn't pass an invalid RCX.


>
>
>> + TDX_TEST_ASSERT_IO(vcpu, TDX_IOEXIT_TEST_PORT, 1,
>> + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
>> + port_data = *(uint8_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
>> +
>> + printf("\t ... IO WRITE: OK\n");
> So, even if there's an error in emulating the write to TDX_IOEXIT_TEST_PORT,
> and the guest would then detect the failure and trigger tdx_test_fatal(),
> this line will still print "IO WRITE: OK", which is not right.
>
>> +
>> + /*
>> + * Wait for the guest to do a IO read. Provide the previous written data
>> + * + 1 back to the guest
>> + */
>> + td_vcpu_run(vcpu);
>> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> This check is in vain too, as in the write case.
>
>> + TDX_TEST_ASSERT_IO(vcpu, TDX_IOEXIT_TEST_PORT, 1,
>> + TDG_VP_VMCALL_INSTRUCTION_IO_READ);
>> + *(uint8_t *)((void *)vcpu->run + vcpu->run->io.data_offset) = port_data + 1;
>> +
>> + printf("\t ... IO READ: OK\n");
> As in the write case, this line should not be printed until after the guest
> has finished checking the return code.
>
>> +
>> + /*
>> + * Wait for the guest to complete execution successfully. The read
>> + * value is checked within the guest.
>> + */
>> + td_vcpu_run(vcpu);
>> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
>> + TDX_TEST_ASSERT_SUCCESS(vcpu);
>> +
>> + printf("\t ... IO verify read/write values: OK\n");
>> + kvm_vm_free(vm);
>> + printf("\t ... PASSED\n");
>> +}
>> +
>> int main(int argc, char **argv)
>> {
>> setbuf(stdout, NULL);
>> @@ -85,6 +166,7 @@ int main(int argc, char **argv)
>>
>> run_in_new_process(&verify_td_lifecycle);
>> run_in_new_process(&verify_report_fatal_error);
>> + run_in_new_process(&verify_td_ioexit);
>>
>> return 0;
>> }
>> --
>> 2.43.0.472.g3155946c3a-goog
>>
>>


2024-03-04 17:02:32

by Binbin Wu

Subject: Re: [RFC PATCH v5 26/29] KVM: selftests: TDX: Add support for TDG.VP.VEINFO.GET



On 12/13/2023 4:46 AM, Sagi Shahar wrote:
> From: Ackerley Tng <[email protected]>
>
> Signed-off-by: Ackerley Tng <[email protected]>
> Signed-off-by: Ryan Afranji <[email protected]>
> Signed-off-by: Sagi Shahar <[email protected]>
> ---
> .../selftests/kvm/include/x86_64/tdx/tdx.h | 21 +++++++++++++++++++
> .../selftests/kvm/lib/x86_64/tdx/tdx.c | 19 +++++++++++++++++
> 2 files changed, 40 insertions(+)

Reviewed-by: Binbin Wu <[email protected]>

>
> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> index b71bcea40b5c..12863a8beaae 100644
> --- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> @@ -6,6 +6,7 @@
> #include "kvm_util_base.h"
>
> #define TDG_VP_INFO 1
> +#define TDG_VP_VEINFO_GET 3
> #define TDG_MEM_PAGE_ACCEPT 6
>
> #define TDG_VP_VMCALL_GET_TD_VM_CALL_INFO 0x10000
> @@ -41,4 +42,24 @@ uint64_t tdg_vp_info(uint64_t *rcx, uint64_t *rdx,
> uint64_t tdg_vp_vmcall_map_gpa(uint64_t address, uint64_t size, uint64_t *data_out);
> uint64_t tdg_mem_page_accept(uint64_t gpa, uint8_t level);
>
> +/*
> + * Used by the #VE exception handler to gather the #VE exception
> + * info from the TDX module. This is a software only structure
> + * and not part of the TDX module/VMM ABI.
> + *
> + * Adapted from arch/x86/include/asm/tdx.h
> + */
> +struct ve_info {
> + uint64_t exit_reason;
> + uint64_t exit_qual;
> + /* Guest Linear (virtual) Address */
> + uint64_t gla;
> + /* Guest Physical Address */
> + uint64_t gpa;
> + uint32_t instr_len;
> + uint32_t instr_info;
> +};
> +
> +uint64_t tdg_vp_veinfo_get(struct ve_info *ve);
> +
> #endif // SELFTEST_TDX_TDX_H
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> index d8c4ab635c06..71d9f55007f7 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> @@ -241,3 +241,22 @@ uint64_t tdg_mem_page_accept(uint64_t gpa, uint8_t level)
> {
> return __tdx_module_call(TDG_MEM_PAGE_ACCEPT, gpa | level, 0, 0, 0, NULL);
> }
> +
> +uint64_t tdg_vp_veinfo_get(struct ve_info *ve)
> +{
> + uint64_t ret;
> + struct tdx_module_output out;
> +
> + memset(&out, 0, sizeof(struct tdx_module_output));
> +
> + ret = __tdx_module_call(TDG_VP_VEINFO_GET, 0, 0, 0, 0, &out);
> +
> + ve->exit_reason = out.rcx;
> + ve->exit_qual = out.rdx;
> + ve->gla = out.r8;
> + ve->gpa = out.r9;
> + ve->instr_len = out.r10 & 0xffffffff;
> + ve->instr_info = out.r10 >> 32;
> +
> + return ret;
> +}


2024-03-05 00:53:31

by Yan Zhao

Subject: Re: [RFC PATCH v5 15/29] KVM: selftests: TDX: Add TDX MSR read/write tests

> +void verify_guest_msr_writes(void)
> +{
> + struct kvm_vcpu *vcpu;
> + struct kvm_vm *vm;
> +
> + uint64_t data;
> + int ret;
> +
> + vm = td_create();
> + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
> +
> + /*
> + * Set explicit MSR filter map to control access to the MSR registers
> + * used in the test.
> + */
> + printf("\t ... Setting test MSR filter\n");
> + ret = kvm_check_cap(KVM_CAP_X86_USER_SPACE_MSR);
> + TEST_ASSERT(ret, "KVM_CAP_X86_USER_SPACE_MSR is unavailable");
> + vm_enable_cap(vm, KVM_CAP_X86_USER_SPACE_MSR, KVM_MSR_EXIT_REASON_FILTER);
> +
> + ret = kvm_check_cap(KVM_CAP_X86_MSR_FILTER);
> + TEST_ASSERT(ret, "KVM_CAP_X86_MSR_FILTER is unavailable");
> +
> + ret = ioctl(vm->fd, KVM_X86_SET_MSR_FILTER, &tdx_msr_test_filter);
> + TEST_ASSERT(ret == 0,
> + "KVM_X86_SET_MSR_FILTER failed, ret: %i errno: %i (%s)",
> + ret, errno, strerror(errno));
> +
> + vcpu = td_vcpu_add(vm, 0, guest_msr_write);
> + td_finalize(vm);
> +
> + printf("Verifying guest msr writes:\n");
> +
> + printf("\t ... Running guest\n");
> + /* Only the write to MSR_IA32_MISC_ENABLE should trigger an exit */
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + data = tdx_test_read_64bit_report_from_guest(vcpu);
> + TEST_ASSERT_EQ(data, TDG_VP_VMCALL_INVALID_OPERAND);
> +
> + td_vcpu_run(vcpu);
> + TDX_TEST_ASSERT_SUCCESS(vcpu);
> +
> + printf("\t ... Verifying MSR values written by guest\n");
> +
> + TEST_ASSERT_EQ(vcpu_get_msr(vcpu, MSR_X2APIC_APIC_ICR), 4);
> + TEST_ASSERT_EQ(vcpu_get_msr(vcpu, MSR_IA32_MISC_ENABLE), 0x1800);
It's not straightforward to assert that MSR_IA32_MISC_ENABLE is 0x1800.
Rather than assume MSR_IA32_MISC_ENABLE is reset to
(MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL | MSR_IA32_MISC_ENABLE_BTS_UNAVAIL),
which is 0x1800, why not call vcpu_get_msr() before the guest writes and
compare against the saved value here?

> + TEST_ASSERT_EQ(vcpu_get_msr(vcpu, MSR_IA32_POWER_CTL), 6);
> +
> + kvm_vm_free(vm);
> + printf("\t ... PASSED\n");
> +}
> +
> +
> int main(int argc, char **argv)
> {
> setbuf(stdout, NULL);
> @@ -531,6 +738,8 @@ int main(int argc, char **argv)
> run_in_new_process(&verify_get_td_vmcall_info);
> run_in_new_process(&verify_guest_writes);
> run_in_new_process(&verify_guest_reads);
> + run_in_new_process(&verify_guest_msr_writes);
> + run_in_new_process(&verify_guest_msr_reads);
>
> return 0;
> }
> --
> 2.43.0.472.g3155946c3a-goog
>
>

2024-03-05 06:11:12

by Yan Zhao

Subject: Re: [RFC PATCH v5 16/29] KVM: selftests: TDX: Add TDX HLT exit test

On Sat, Mar 02, 2024 at 03:31:07PM +0800, Binbin Wu wrote:
> On 12/13/2023 4:46 AM, Sagi Shahar wrote:
> > The test verifies that the guest runs TDVMCALL<INSTRUCTION.HLT> and the
> > guest vCPU enters the halted state.
> >
> > Signed-off-by: Erdem Aktas <[email protected]>
> > Signed-off-by: Sagi Shahar <[email protected]>
> > Signed-off-by: Ackerley Tng <[email protected]>
> > Signed-off-by: Ryan Afranji <[email protected]>
> > ---
> > .../selftests/kvm/include/x86_64/tdx/tdx.h | 2 +
> > .../selftests/kvm/lib/x86_64/tdx/tdx.c | 10 +++
> > .../selftests/kvm/x86_64/tdx_vm_tests.c | 78 +++++++++++++++++++
> > 3 files changed, 90 insertions(+)
> >
> > diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> > index 85ba6aab79a7..b18e39d20498 100644
> > --- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> > +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> > @@ -8,6 +8,7 @@
> > #define TDG_VP_VMCALL_GET_TD_VM_CALL_INFO 0x10000
> > #define TDG_VP_VMCALL_REPORT_FATAL_ERROR 0x10003
> > +#define TDG_VP_VMCALL_INSTRUCTION_HLT 12
> > #define TDG_VP_VMCALL_INSTRUCTION_IO 30
> > #define TDG_VP_VMCALL_INSTRUCTION_RDMSR 31
> > #define TDG_VP_VMCALL_INSTRUCTION_WRMSR 32
> > @@ -20,5 +21,6 @@ uint64_t tdg_vp_vmcall_get_td_vmcall_info(uint64_t *r11, uint64_t *r12,
> > uint64_t *r13, uint64_t *r14);
> > uint64_t tdg_vp_vmcall_instruction_rdmsr(uint64_t index, uint64_t *ret_value);
> > uint64_t tdg_vp_vmcall_instruction_wrmsr(uint64_t index, uint64_t value);
> > +uint64_t tdg_vp_vmcall_instruction_hlt(uint64_t interrupt_blocked_flag);
> > #endif // SELFTEST_TDX_TDX_H
> > diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> > index 88ea6f2a6469..9485bafedc38 100644
> > --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> > +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> > @@ -114,3 +114,13 @@ uint64_t tdg_vp_vmcall_instruction_wrmsr(uint64_t index, uint64_t value)
> > return __tdx_hypercall(&args, 0);
> > }
> > +
> > +uint64_t tdg_vp_vmcall_instruction_hlt(uint64_t interrupt_blocked_flag)
> > +{
> > + struct tdx_hypercall_args args = {
> > + .r11 = TDG_VP_VMCALL_INSTRUCTION_HLT,
> > + .r12 = interrupt_blocked_flag,
> > + };
> > +
> > + return __tdx_hypercall(&args, 0);
> > +}
> > diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> > index 5db3701cc6d9..5fae4c6e5f95 100644
> > --- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> > +++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> > @@ -721,6 +721,83 @@ void verify_guest_msr_writes(void)
> > printf("\t ... PASSED\n");
> > }
> > +/*
> > + * Verifies HLT functionality.
> > + */
> > +void guest_hlt(void)
> > +{
> > + uint64_t ret;
> > + uint64_t interrupt_blocked_flag;
> > +
> > + interrupt_blocked_flag = 0;
> > + ret = tdg_vp_vmcall_instruction_hlt(interrupt_blocked_flag);
> > + if (ret)
> > + tdx_test_fatal(ret);
> > +
> > + tdx_test_success();
> > +}
> > +
> > +void _verify_guest_hlt(int signum);
> > +
> > +void wake_me(int interval)
> > +{
> > + struct sigaction action;
> > +
> > + action.sa_handler = _verify_guest_hlt;
> > + sigemptyset(&action.sa_mask);
> > + action.sa_flags = 0;
> > +
> > + TEST_ASSERT(sigaction(SIGALRM, &action, NULL) == 0,
> > + "Could not set the alarm handler!");
> > +
> > + alarm(interval);
> > +}
> > +
> > +void _verify_guest_hlt(int signum)
> > +{
> > + struct kvm_vm *vm;
> > + static struct kvm_vcpu *vcpu;
> > +
> > + /*
> > + * This function will also be called by SIGALRM handler to check the
> > + * vCPU MP state. If the vcpu has been initialized, then we are in the signal
> > + * handler. Check the MP state and let the guest run again.
> > + */
> > + if (vcpu != NULL) {
>
> What about the following case, if there is a bug in KVM such that:
>
> In the guest, tdg_vp_vmcall_instruction_hlt() returns 0, but the
> vcpu is not actually halted. The guest will then call tdx_test_success().
>
> The first call of _verify_guest_hlt() will call kvm_vm_free(vm) to free
> the vm, which also frees the vcpu, and 1 second later, on this path the
> vcpu will be accessed after it has been freed.
>
Right. Another possibility is that if a buggy KVM returns success to the
guest without putting the guest into the halted state, the selftest will
still print "PASSED", because the second _verify_guest_hlt() call (after
waiting for 1s) has no chance to be executed before the process exits.

> > + struct kvm_mp_state mp_state;
> > +
> > + vcpu_mp_state_get(vcpu, &mp_state);
> > + TEST_ASSERT_EQ(mp_state.mp_state, KVM_MP_STATE_HALTED);
> > +
> > + /* Let the guest run and finish the test. */
> > + mp_state.mp_state = KVM_MP_STATE_RUNNABLE;
> > + vcpu_mp_state_set(vcpu, &mp_state);
> > + return;
> > + }
> > +
> > + vm = td_create();
> > + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
> > + vcpu = td_vcpu_add(vm, 0, guest_hlt);
> > + td_finalize(vm);
> > +
> > + printf("Verifying HLT:\n");
> > +
> > + printf("\t ... Running guest\n");
> > +
> > + /* Wait 1 second for guest to execute HLT */
> > + wake_me(1);
> > + td_vcpu_run(vcpu);
> > +
> > + TDX_TEST_ASSERT_SUCCESS(vcpu);
> > +
> > + kvm_vm_free(vm);
> > + printf("\t ... PASSED\n");
> > +}
> > +
> > +void verify_guest_hlt(void)
> > +{
> > + _verify_guest_hlt(0);
> > +}
> > int main(int argc, char **argv)
> > {
> > @@ -740,6 +817,7 @@ int main(int argc, char **argv)
> > run_in_new_process(&verify_guest_reads);
> > run_in_new_process(&verify_guest_msr_writes);
> > run_in_new_process(&verify_guest_msr_reads);
> > + run_in_new_process(&verify_guest_hlt);
> > return 0;
> > }
>
>

2024-03-05 07:47:16

by Yan Zhao

[permalink] [raw]
Subject: Re: [RFC PATCH v5 17/29] KVM: selftests: TDX: Add TDX MMIO reads test

> +/* Pick any address that was not mapped into the guest to test MMIO */
> +#define TDX_MMIO_TEST_ADDR 0x200000000
We also need to test the MMIO addresses below:
(1) GPA with shared bit on.
(2) GPA in memslot

2024-03-05 09:40:24

by Yan Zhao

[permalink] [raw]
Subject: Re: [RFC PATCH v5 18/29] KVM: selftests: TDX: Add TDX MMIO writes test

On Tue, Dec 12, 2023 at 12:46:33PM -0800, Sagi Shahar wrote:
> The test verifies MMIO writes of various sizes from the guest to the host.
>
> Signed-off-by: Sagi Shahar <[email protected]>
> Signed-off-by: Ackerley Tng <[email protected]>
> Signed-off-by: Ryan Afranji <[email protected]>
> ---
> .../selftests/kvm/include/x86_64/tdx/tdx.h | 2 +
> .../selftests/kvm/lib/x86_64/tdx/tdx.c | 14 +++
> .../selftests/kvm/x86_64/tdx_vm_tests.c | 85 +++++++++++++++++++
> 3 files changed, 101 insertions(+)
>
> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> index 13ce60df5684..502b670ea699 100644
> --- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> @@ -25,5 +25,7 @@ uint64_t tdg_vp_vmcall_instruction_wrmsr(uint64_t index, uint64_t value);
> uint64_t tdg_vp_vmcall_instruction_hlt(uint64_t interrupt_blocked_flag);
> uint64_t tdg_vp_vmcall_ve_request_mmio_read(uint64_t address, uint64_t size,
> uint64_t *data_out);
> +uint64_t tdg_vp_vmcall_ve_request_mmio_write(uint64_t address, uint64_t size,
> + uint64_t data_in);
>
> #endif // SELFTEST_TDX_TDX_H
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> index b19f07ebc0e7..f4afa09f7e3d 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> @@ -143,3 +143,17 @@ uint64_t tdg_vp_vmcall_ve_request_mmio_read(uint64_t address, uint64_t size,
>
> return ret;
> }
> +
> +uint64_t tdg_vp_vmcall_ve_request_mmio_write(uint64_t address, uint64_t size,
> + uint64_t data_in)
> +{
> + struct tdx_hypercall_args args = {
> + .r11 = TDG_VP_VMCALL_VE_REQUEST_MMIO,
> + .r12 = size,
> + .r13 = TDG_VP_VMCALL_VE_REQUEST_MMIO_WRITE,
> + .r14 = address,
> + .r15 = data_in,
> + };
> +
> + return __tdx_hypercall(&args, 0);
> +}
> diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> index 48902b69d13e..5e28ba828a92 100644
> --- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> +++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> @@ -885,6 +885,90 @@ void verify_mmio_reads(void)
> printf("\t ... PASSED\n");
> }
>
> +void guest_mmio_writes(void)
> +{
> + uint64_t ret;
> +
> + ret = tdg_vp_vmcall_ve_request_mmio_write(TDX_MMIO_TEST_ADDR, 1, 0x12);
> + if (ret)
> + tdx_test_fatal(ret);
> +
> + ret = tdg_vp_vmcall_ve_request_mmio_write(TDX_MMIO_TEST_ADDR, 2, 0x1234);
> + if (ret)
> + tdx_test_fatal(ret);
> +
> + ret = tdg_vp_vmcall_ve_request_mmio_write(TDX_MMIO_TEST_ADDR, 4, 0x12345678);
> + if (ret)
> + tdx_test_fatal(ret);
> +
> + ret = tdg_vp_vmcall_ve_request_mmio_write(TDX_MMIO_TEST_ADDR, 8, 0x1234567890ABCDEF);
> + if (ret)
> + tdx_test_fatal(ret);
> +
> + // Write across page boundary.
> + ret = tdg_vp_vmcall_ve_request_mmio_write(PAGE_SIZE - 1, 8, 0);
> + if (ret)
> + tdx_test_fatal(ret);
> +
> + tdx_test_success();
> +}
> +
> +/*
> > + * Verifies guest MMIO writes.
> + */
> +void verify_mmio_writes(void)
> +{
> + struct kvm_vm *vm;
> + struct kvm_vcpu *vcpu;
> +
> + uint8_t byte_1;
> + uint16_t byte_2;
> + uint32_t byte_4;
> + uint64_t byte_8;
> +
> + vm = td_create();
> + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
> + vcpu = td_vcpu_add(vm, 0, guest_mmio_writes);
> + td_finalize(vm);
> +
> + printf("Verifying TD MMIO writes:\n");
> +
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + TDX_TEST_ASSERT_MMIO(vcpu, TDX_MMIO_TEST_ADDR, 1, TDG_VP_VMCALL_VE_REQUEST_MMIO_WRITE);
> + byte_1 = *(uint8_t *)(vcpu->run->mmio.data);
> +
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + TDX_TEST_ASSERT_MMIO(vcpu, TDX_MMIO_TEST_ADDR, 2, TDG_VP_VMCALL_VE_REQUEST_MMIO_WRITE);
> + byte_2 = *(uint16_t *)(vcpu->run->mmio.data);
> +
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + TDX_TEST_ASSERT_MMIO(vcpu, TDX_MMIO_TEST_ADDR, 4, TDG_VP_VMCALL_VE_REQUEST_MMIO_WRITE);
> + byte_4 = *(uint32_t *)(vcpu->run->mmio.data);
> +
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + TDX_TEST_ASSERT_MMIO(vcpu, TDX_MMIO_TEST_ADDR, 8, TDG_VP_VMCALL_VE_REQUEST_MMIO_WRITE);
> + byte_8 = *(uint64_t *)(vcpu->run->mmio.data);
> +
> + TEST_ASSERT_EQ(byte_1, 0x12);
> + TEST_ASSERT_EQ(byte_2, 0x1234);
> + TEST_ASSERT_EQ(byte_4, 0x12345678);
> + TEST_ASSERT_EQ(byte_8, 0x1234567890ABCDEF);
> +
> + td_vcpu_run(vcpu);
> + TEST_ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_SYSTEM_EVENT);
> + TEST_ASSERT_EQ(vcpu->run->system_event.data[1], TDG_VP_VMCALL_INVALID_OPERAND);
Is it possible that this event is caused by a failure of the last 8-byte
write? I.e. the MMIO exit reaches the host with the correct value
0x1234567890ABCDEF, but the guest sees ret as TDG_VP_VMCALL_INVALID_OPERAND.

And if, coincidentally, the guest gets ret=0 on the next across-page-boundary
write, the selftest will show "PASSED", which is not right.


> + td_vcpu_run(vcpu);
> + TDX_TEST_ASSERT_SUCCESS(vcpu);
> +
> + kvm_vm_free(vm);
> + printf("\t ... PASSED\n");
> +}
> +
> int main(int argc, char **argv)
> {
> setbuf(stdout, NULL);
> @@ -905,6 +989,7 @@ int main(int argc, char **argv)
> run_in_new_process(&verify_guest_msr_reads);
> run_in_new_process(&verify_guest_hlt);
> run_in_new_process(&verify_mmio_reads);
> + run_in_new_process(&verify_mmio_writes);
>
> return 0;
> }
> --
> 2.43.0.472.g3155946c3a-goog
>
>

2024-03-05 10:56:41

by Binbin Wu

[permalink] [raw]
Subject: Re: [RFC PATCH v5 28/29] KVM: selftests: TDX: Add TDX UPM selftest



On 12/13/2023 4:46 AM, Sagi Shahar wrote:
> From: Ackerley Tng <[email protected]>
>
> This tests the use of guest memory with explicit MapGPA calls.
>
> Signed-off-by: Ackerley Tng <[email protected]>
> Signed-off-by: Ryan Afranji <[email protected]>
> Signed-off-by: Sagi Shahar <[email protected]>
> ---
> tools/testing/selftests/kvm/Makefile | 1 +
> .../selftests/kvm/x86_64/tdx_upm_test.c | 401 ++++++++++++++++++
> 2 files changed, 402 insertions(+)
> create mode 100644 tools/testing/selftests/kvm/x86_64/tdx_upm_test.c
>
> diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> index 8c0a6b395ee5..2f2669af15d6 100644
> --- a/tools/testing/selftests/kvm/Makefile
> +++ b/tools/testing/selftests/kvm/Makefile
> @@ -157,6 +157,7 @@ TEST_GEN_PROGS_x86_64 += kvm_binary_stats_test
> TEST_GEN_PROGS_x86_64 += system_counter_offset_test
> TEST_GEN_PROGS_x86_64 += x86_64/tdx_vm_tests
> TEST_GEN_PROGS_x86_64 += x86_64/tdx_shared_mem_test
> +TEST_GEN_PROGS_x86_64 += x86_64/tdx_upm_test
>
> # Compiled outputs used by test targets
> TEST_GEN_PROGS_EXTENDED_x86_64 += x86_64/nx_huge_pages_test
> diff --git a/tools/testing/selftests/kvm/x86_64/tdx_upm_test.c b/tools/testing/selftests/kvm/x86_64/tdx_upm_test.c
> new file mode 100644
> index 000000000000..44671874a4f1
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/x86_64/tdx_upm_test.c
> @@ -0,0 +1,401 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +
> +#include <asm/kvm.h>
> +#include <asm/vmx.h>
> +#include <linux/kvm.h>
> +#include <stdbool.h>
> +#include <stdint.h>
> +
> +#include "kvm_util_base.h"
> +#include "processor.h"
> +#include "tdx/tdcall.h"
> +#include "tdx/tdx.h"
> +#include "tdx/tdx_util.h"
> +#include "tdx/test_util.h"
> +#include "test_util.h"
> +
> +/* TDX UPM test patterns */
> +#define PATTERN_CONFIDENCE_CHECK (0x11)
> +#define PATTERN_HOST_FOCUS (0x22)
> +#define PATTERN_GUEST_GENERAL (0x33)
> +#define PATTERN_GUEST_FOCUS (0x44)
> +
> +/*
> + * 0x80000000 is arbitrarily selected. The selected address need not be the same
> + * as TDX_UPM_TEST_AREA_GVA_PRIVATE, but it should not overlap with selftest
> + * code or boot page.
> + */
> +#define TDX_UPM_TEST_AREA_GPA (0x80000000)
> +/* Test area GPA is arbitrarily selected */
> +#define TDX_UPM_TEST_AREA_GVA_PRIVATE (0x90000000)
> +/* Select any bit that can be used as a flag */
> +#define TDX_UPM_TEST_AREA_GVA_SHARED_BIT (32)
> +/*
> + * TDX_UPM_TEST_AREA_GVA_SHARED is used to map the same GPA twice into the
> + * guest, once as shared and once as private
> + */
> +#define TDX_UPM_TEST_AREA_GVA_SHARED \
> + (TDX_UPM_TEST_AREA_GVA_PRIVATE | \
> + BIT_ULL(TDX_UPM_TEST_AREA_GVA_SHARED_BIT))
> +
> +/* The test area is 2MB in size */
> +#define TDX_UPM_TEST_AREA_SIZE (2 << 20)
> +/* 0th general area is 1MB in size */
> +#define TDX_UPM_GENERAL_AREA_0_SIZE (1 << 20)
> +/* Focus area is 40KB in size */
> +#define TDX_UPM_FOCUS_AREA_SIZE (40 << 10)
> +/* 1st general area is the rest of the space in the test area */
> +#define TDX_UPM_GENERAL_AREA_1_SIZE \
> + (TDX_UPM_TEST_AREA_SIZE - TDX_UPM_GENERAL_AREA_0_SIZE - \
> + TDX_UPM_FOCUS_AREA_SIZE)
> +
> +/*
> + * The test memory area is set up as two general areas, sandwiching a focus
> + * area. The general areas act as control areas. After they are filled, they
> + * are not expected to change throughout the tests. The focus area is memory
> > + * whose permissions change from private to shared and vice versa.
> + *
> + * The focus area is intentionally small, and sandwiched to test that when the
> + * focus area's permissions change, the other areas' permissions are not
> + * affected.
> + */
> +struct __packed tdx_upm_test_area {
> + uint8_t general_area_0[TDX_UPM_GENERAL_AREA_0_SIZE];
> + uint8_t focus_area[TDX_UPM_FOCUS_AREA_SIZE];
> + uint8_t general_area_1[TDX_UPM_GENERAL_AREA_1_SIZE];
> +};
> +
> +static void fill_test_area(struct tdx_upm_test_area *test_area_base,
> + uint8_t pattern)
> +{
> + memset(test_area_base, pattern, sizeof(*test_area_base));
> +}
> +
> +static void fill_focus_area(struct tdx_upm_test_area *test_area_base,
> + uint8_t pattern)
> +{
> + memset(test_area_base->focus_area, pattern,
> + sizeof(test_area_base->focus_area));
> +}
> +
> +static bool check_area(uint8_t *base, uint64_t size, uint8_t expected_pattern)
> +{
> + size_t i;
> +
> + for (i = 0; i < size; i++) {
> + if (base[i] != expected_pattern)
> + return false;
> + }
> +
> + return true;
> +}
> +
> +static bool check_general_areas(struct tdx_upm_test_area *test_area_base,
> + uint8_t expected_pattern)
> +{
> + return (check_area(test_area_base->general_area_0,
> + sizeof(test_area_base->general_area_0),
> + expected_pattern) &&
> + check_area(test_area_base->general_area_1,
> + sizeof(test_area_base->general_area_1),
> + expected_pattern));
> +}
> +
> +static bool check_focus_area(struct tdx_upm_test_area *test_area_base,
> + uint8_t expected_pattern)
> +{
> + return check_area(test_area_base->focus_area,
> + sizeof(test_area_base->focus_area), expected_pattern);
> +}
> +
> +static bool check_test_area(struct tdx_upm_test_area *test_area_base,
> + uint8_t expected_pattern)
> +{
> + return (check_general_areas(test_area_base, expected_pattern) &&
> + check_focus_area(test_area_base, expected_pattern));
> +}
> +
> +static bool fill_and_check(struct tdx_upm_test_area *test_area_base, uint8_t pattern)
> +{
> + fill_test_area(test_area_base, pattern);
> +
> + return check_test_area(test_area_base, pattern);
> +}
> +
> +#define TDX_UPM_TEST_ASSERT(x) \
> + do { \
> + if (!(x)) \
> + tdx_test_fatal(__LINE__); \
> + } while (0)
> +
> +/*
> + * Shared variables between guest and host
> + */
> +static struct tdx_upm_test_area *test_area_gpa_private;
> +static struct tdx_upm_test_area *test_area_gpa_shared;
> +
> +/*
> + * Test stages for syncing with host
> + */
> +enum {
> + SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST = 1,
> + SYNC_CHECK_READ_SHARED_MEMORY_FROM_HOST,
> + SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST_AGAIN,
> +};
> +
> +#define TDX_UPM_TEST_ACCEPT_PRINT_PORT 0x87
> +
> +/**
> + * Does vcpu_run, and also manages memory conversions if requested by the TD.
> + */
> +void vcpu_run_and_manage_memory_conversions(struct kvm_vm *vm,
> + struct kvm_vcpu *vcpu)
> +{
> + for (;;) {
> + vcpu_run(vcpu);
> + if (vcpu->run->exit_reason == KVM_EXIT_TDX &&
> + vcpu->run->tdx.type == KVM_EXIT_TDX_VMCALL &&
> + vcpu->run->tdx.u.vmcall.subfunction == TDG_VP_VMCALL_MAP_GPA) {
> + struct kvm_tdx_vmcall *vmcall_info = &vcpu->run->tdx.u.vmcall;
> + uint64_t gpa = vmcall_info->in_r12 & ~vm->arch.s_bit;
> +
> + handle_memory_conversion(vm, gpa, vmcall_info->in_r13,
> + !(vm->arch.s_bit & vmcall_info->in_r12));
> + vmcall_info->status_code = 0;
> + continue;
> + } else if (
> + vcpu->run->exit_reason == KVM_EXIT_IO &&
> + vcpu->run->io.port == TDX_UPM_TEST_ACCEPT_PRINT_PORT) {
> + uint64_t gpa = tdx_test_read_64bit(
> + vcpu, TDX_UPM_TEST_ACCEPT_PRINT_PORT);
> + printf("\t ... guest accepting 1 page at GPA: 0x%lx\n", gpa);
> + continue;
> + }
> +
> + break;
> + }
> +}
> +
> +static void guest_upm_explicit(void)
> +{
> + uint64_t ret = 0;
> + uint64_t failed_gpa;
> +
> + struct tdx_upm_test_area *test_area_gva_private =
> + (struct tdx_upm_test_area *)TDX_UPM_TEST_AREA_GVA_PRIVATE;
> + struct tdx_upm_test_area *test_area_gva_shared =
> + (struct tdx_upm_test_area *)TDX_UPM_TEST_AREA_GVA_SHARED;
> +
> + /* Check: host reading private memory does not modify guest's view */
> + fill_test_area(test_area_gva_private, PATTERN_GUEST_GENERAL);
> +
> + tdx_test_report_to_user_space(SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST);
> +
> + TDX_UPM_TEST_ASSERT(
> + check_test_area(test_area_gva_private, PATTERN_GUEST_GENERAL));
> +
> + /* Remap focus area as shared */
> + ret = tdg_vp_vmcall_map_gpa((uint64_t)test_area_gpa_shared->focus_area,
> + sizeof(test_area_gpa_shared->focus_area),
> + &failed_gpa);
> + TDX_UPM_TEST_ASSERT(!ret);
> +
> + /* General areas should be unaffected by remapping */
> + TDX_UPM_TEST_ASSERT(
> + check_general_areas(test_area_gva_private, PATTERN_GUEST_GENERAL));
> +
> + /*
> + * Use memory contents to confirm that the memory allocated using mmap
> + * is used as backing memory for shared memory - PATTERN_CONFIDENCE_CHECK
> + * was written by the VMM at the beginning of this test.
> + */
> + TDX_UPM_TEST_ASSERT(
> + check_focus_area(test_area_gva_shared, PATTERN_CONFIDENCE_CHECK));
> +
> + /* Guest can use focus area after remapping as shared */
> + fill_focus_area(test_area_gva_shared, PATTERN_GUEST_FOCUS);
> +
> + tdx_test_report_to_user_space(SYNC_CHECK_READ_SHARED_MEMORY_FROM_HOST);
> +
> + /* Check that guest has the same view of shared memory */
> + TDX_UPM_TEST_ASSERT(
> + check_focus_area(test_area_gva_shared, PATTERN_HOST_FOCUS));
> +
> + /* Remap focus area back to private */
> + ret = tdg_vp_vmcall_map_gpa((uint64_t)test_area_gpa_private->focus_area,
> + sizeof(test_area_gpa_private->focus_area),
> + &failed_gpa);
> + TDX_UPM_TEST_ASSERT(!ret);
> +
> + /* General areas should be unaffected by remapping */
> + TDX_UPM_TEST_ASSERT(
> + check_general_areas(test_area_gva_private, PATTERN_GUEST_GENERAL));
> +
> + /* Focus area should be zeroed after remapping */
> + TDX_UPM_TEST_ASSERT(check_focus_area(test_area_gva_private, 0));
> +
> + tdx_test_report_to_user_space(SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST_AGAIN);
> +
> + /* Check that guest can use private memory after focus area is remapped as private */
> + TDX_UPM_TEST_ASSERT(
> + fill_and_check(test_area_gva_private, PATTERN_GUEST_GENERAL));
> +
> + tdx_test_success();
> +}
> +
> +static void run_selftest(struct kvm_vm *vm, struct kvm_vcpu *vcpu,
> + struct tdx_upm_test_area *test_area_base_hva)
> +{
> + vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);

This check seems to be in vain.

> + TDX_TEST_ASSERT_IO(vcpu, TDX_TEST_REPORT_PORT, TDX_TEST_REPORT_SIZE,
> + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
> + TEST_ASSERT_EQ(*(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset),
> + SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST);
> +
> + /*
> + * Check that host should read PATTERN_CONFIDENCE_CHECK from guest's
> + * private memory.

I think this description is confusing.
It's not actually accessing the guest's private memory.


> This confirms that regular memory (userspace_addr in
> + * struct kvm_userspace_memory_region) is used to back the host's view
> + * of private memory, since PATTERN_CONFIDENCE_CHECK was written to that
> + * memory before starting the guest.
> + */
> + TEST_ASSERT(check_test_area(test_area_base_hva, PATTERN_CONFIDENCE_CHECK),
> + "Host should read PATTERN_CONFIDENCE_CHECK from guest's private memory.");
> +
> + vcpu_run_and_manage_memory_conversions(vm, vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + TDX_TEST_ASSERT_IO(vcpu, TDX_TEST_REPORT_PORT, TDX_TEST_REPORT_SIZE,
> + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
> + TEST_ASSERT_EQ(*(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset),
> + SYNC_CHECK_READ_SHARED_MEMORY_FROM_HOST);
> +
> + TEST_ASSERT(check_focus_area(test_area_base_hva, PATTERN_GUEST_FOCUS),
> + "Host should have the same view of shared memory as guest.");
> + TEST_ASSERT(check_general_areas(test_area_base_hva, PATTERN_CONFIDENCE_CHECK),
> + "Host's view of private memory should still be backed by regular memory.");
> +
> + /* Check that host can use shared memory */
> + fill_focus_area(test_area_base_hva, PATTERN_HOST_FOCUS);
> + TEST_ASSERT(check_focus_area(test_area_base_hva, PATTERN_HOST_FOCUS),
> + "Host should be able to use shared memory.");
> +
> + vcpu_run_and_manage_memory_conversions(vm, vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + TDX_TEST_ASSERT_IO(vcpu, TDX_TEST_REPORT_PORT, TDX_TEST_REPORT_SIZE,
> + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
> + TEST_ASSERT_EQ(*(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset),
> + SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST_AGAIN);
> +
> + TEST_ASSERT(check_general_areas(test_area_base_hva, PATTERN_CONFIDENCE_CHECK),
> + "Host's view of private memory should be backed by regular memory.");
> + TEST_ASSERT(check_focus_area(test_area_base_hva, PATTERN_HOST_FOCUS),
> + "Host's view of private memory should be backed by regular memory.");
> +
> + vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + TDX_TEST_ASSERT_SUCCESS(vcpu);
> +
> + printf("\t ... PASSED\n");
> +}
> +
> +static bool address_between(uint64_t addr, void *lo, void *hi)
> +{
> + return (uint64_t)lo <= addr && addr < (uint64_t)hi;
> +}
> +
> +static void guest_ve_handler(struct ex_regs *regs)
> +{
> + uint64_t ret;
> + struct ve_info ve;
> +
> + ret = tdg_vp_veinfo_get(&ve);
> + TDX_UPM_TEST_ASSERT(!ret);
> +
> + /* For this test, we will only handle EXIT_REASON_EPT_VIOLATION */
> + TDX_UPM_TEST_ASSERT(ve.exit_reason == EXIT_REASON_EPT_VIOLATION);
> +
> + /* Validate GPA in fault */
> + TDX_UPM_TEST_ASSERT(
> + address_between(ve.gpa,
> + test_area_gpa_private->focus_area,
> + test_area_gpa_private->general_area_1));
> +
> + tdx_test_send_64bit(TDX_UPM_TEST_ACCEPT_PRINT_PORT, ve.gpa);
> +
> +#define MEM_PAGE_ACCEPT_LEVEL_4K 0
> +#define MEM_PAGE_ACCEPT_LEVEL_2M 1
> + ret = tdg_mem_page_accept(ve.gpa, MEM_PAGE_ACCEPT_LEVEL_4K);
> + TDX_UPM_TEST_ASSERT(!ret);
> +}
> +
> +static void verify_upm_test(void)
> +{
> + struct kvm_vm *vm;
> + struct kvm_vcpu *vcpu;
> +
> + vm_vaddr_t test_area_gva_private;
> + struct tdx_upm_test_area *test_area_base_hva;
> + uint64_t test_area_npages;
> +
> + vm = td_create();
> + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
> + vcpu = td_vcpu_add(vm, 0, guest_upm_explicit);
> +
> + vm_install_exception_handler(vm, VE_VECTOR, guest_ve_handler);
> +
> + /*
> + * Set up shared memory page for testing by first allocating as private
> + * and then mapping the same GPA again as shared. This way, the TD does
> + * not have to remap its page tables at runtime.
> + */
> + test_area_npages = TDX_UPM_TEST_AREA_SIZE / vm->page_size;
> + vm_userspace_mem_region_add(vm,
> + VM_MEM_SRC_ANONYMOUS, TDX_UPM_TEST_AREA_GPA,
> + 3, test_area_npages, KVM_MEM_PRIVATE);
> +
> + test_area_gva_private = ____vm_vaddr_alloc(
> + vm, TDX_UPM_TEST_AREA_SIZE, TDX_UPM_TEST_AREA_GVA_PRIVATE,
> + TDX_UPM_TEST_AREA_GPA, 3, true);
> + TEST_ASSERT_EQ(test_area_gva_private, TDX_UPM_TEST_AREA_GVA_PRIVATE);
> +
> + test_area_gpa_private = (struct tdx_upm_test_area *)
> + addr_gva2gpa(vm, test_area_gva_private);
> + virt_map_shared(vm, TDX_UPM_TEST_AREA_GVA_SHARED,
> + (uint64_t)test_area_gpa_private,
> + test_area_npages);
> + TEST_ASSERT_EQ(addr_gva2gpa(vm, TDX_UPM_TEST_AREA_GVA_SHARED),
> + (vm_paddr_t)test_area_gpa_private);
> +
> + test_area_base_hva = addr_gva2hva(vm, TDX_UPM_TEST_AREA_GVA_PRIVATE);
> +
> + TEST_ASSERT(fill_and_check(test_area_base_hva, PATTERN_CONFIDENCE_CHECK),
> + "Failed to mark memory intended as backing memory for TD shared memory");
> +
> + sync_global_to_guest(vm, test_area_gpa_private);
> + test_area_gpa_shared = (struct tdx_upm_test_area *)
> + ((uint64_t)test_area_gpa_private | BIT_ULL(vm->pa_bits - 1));
> + sync_global_to_guest(vm, test_area_gpa_shared);
> +
> + td_finalize(vm);
> +
> + printf("Verifying UPM functionality: explicit MapGPA\n");
> +
> + run_selftest(vm, vcpu, test_area_base_hva);
> +
> + kvm_vm_free(vm);
> +}
> +
> +int main(int argc, char **argv)
> +{
> + /* Disable stdout buffering */
> + setbuf(stdout, NULL);
> +
> + if (!is_tdx_enabled()) {
> + printf("TDX is not supported by the KVM\n"
> + "Skipping the TDX tests.\n");
> + return 0;
> + }
> +
> + run_in_new_process(&verify_upm_test);
> +}


2024-03-05 11:53:31

by Yan Zhao

[permalink] [raw]
Subject: Re: [RFC PATCH v5 22/29] KVM: selftests: Add functions to allow mapping as shared

On Tue, Dec 12, 2023 at 12:46:37PM -0800, Sagi Shahar wrote:
> From: Ackerley Tng <[email protected]>
>
> Signed-off-by: Ackerley Tng <[email protected]>
> Signed-off-by: Ryan Afranji <[email protected]>
> Signed-off-by: Sagi Shahar <[email protected]>
> ---
> .../selftests/kvm/include/kvm_util_base.h | 24 ++++++++++++++
> tools/testing/selftests/kvm/lib/kvm_util.c | 32 +++++++++++++++++++
> .../selftests/kvm/lib/x86_64/processor.c | 15 +++++++--
> 3 files changed, 69 insertions(+), 2 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
> index b353617fcdd1..efd7ae8abb20 100644
> --- a/tools/testing/selftests/kvm/include/kvm_util_base.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
> @@ -574,6 +574,8 @@ vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm);
>
> void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
> unsigned int npages);
> +void virt_map_shared(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
> + unsigned int npages);
> void *addr_gpa2hva(struct kvm_vm *vm, vm_paddr_t gpa);
> void *addr_gva2hva(struct kvm_vm *vm, vm_vaddr_t gva);
> vm_paddr_t addr_hva2gpa(struct kvm_vm *vm, void *hva);
> @@ -1034,6 +1036,28 @@ static inline void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr
> virt_arch_pg_map(vm, vaddr, paddr);
> }
>
> +/*
> + * VM Virtual Page Map as Shared
> + *
> + * Input Args:
> + * vm - Virtual Machine
> + * vaddr - VM Virtual Address
> + * paddr - VM Physical Address
> + * memslot - Memory region slot for new virtual translation tables
> + *
> + * Output Args: None
> + *
> + * Return: None
> + *
> + * Within @vm, creates a virtual translation for the page starting
> + * at @vaddr to the page starting at @paddr.
> + */
> +void virt_arch_pg_map_shared(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr);
> +
> +static inline void virt_pg_map_shared(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
> +{
> + virt_arch_pg_map_shared(vm, vaddr, paddr);
> +}
>
> /*
> * Address Guest Virtual to Guest Physical
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index 4f1ae0f1eef0..28780fa1f0f2 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -1573,6 +1573,38 @@ void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
> }
> }
>
> +/*
> + * Map a range of VM virtual address to the VM's physical address as shared
> + *
> + * Input Args:
> + * vm - Virtual Machine
> > + * vaddr - Virtual address to map
> + * paddr - VM Physical Address
> + * npages - The number of pages to map
> + *
> + * Output Args: None
> + *
> + * Return: None
> + *
> + * Within the VM given by @vm, creates a virtual translation for
> + * @npages starting at @vaddr to the page range starting at @paddr.
> + */
> +void virt_map_shared(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
> + unsigned int npages)
> +{
> + size_t page_size = vm->page_size;
> + size_t size = npages * page_size;
> +
> + TEST_ASSERT(vaddr + size > vaddr, "Vaddr overflow");
> + TEST_ASSERT(paddr + size > paddr, "Paddr overflow");
> +
> + while (npages--) {
> + virt_pg_map_shared(vm, vaddr, paddr);
> + vaddr += page_size;
> + paddr += page_size;
> + }
> +}
Set vm->vpages_mapped, as is done in virt_map()?

> +
> /*
> * Address VM Physical to Host Virtual
> *
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> index 566d82829da4..aa2a57ddb8d3 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> @@ -190,7 +190,8 @@ static uint64_t *virt_create_upper_pte(struct kvm_vm *vm,
> return pte;
> }
>
> -void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level)
> +static void ___virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
> + int level, bool protected)
> {
> const uint64_t pg_size = PG_LEVEL_SIZE(level);
> uint64_t *pml4e, *pdpe, *pde;
> @@ -235,17 +236,27 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level)
> "PTE already present for 4k page at vaddr: 0x%lx\n", vaddr);
> *pte = PTE_PRESENT_MASK | PTE_WRITABLE_MASK | (paddr & PHYSICAL_PAGE_MASK);
>
> - if (vm_is_gpa_protected(vm, paddr))
> + if (protected)
> *pte |= vm->arch.c_bit;
> else
> *pte |= vm->arch.s_bit;
> }
>
> +void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level)
> +{
> + ___virt_pg_map(vm, vaddr, paddr, level, vm_is_gpa_protected(vm, paddr));
> +}
> +
> void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
> {
> __virt_pg_map(vm, vaddr, paddr, PG_LEVEL_4K);
> }
>
> +void virt_arch_pg_map_shared(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
> +{
> + ___virt_pg_map(vm, vaddr, paddr, PG_LEVEL_4K, false);
> +}
> +
> void virt_map_level(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
> uint64_t nr_bytes, int level)
> {
> --
> 2.43.0.472.g3155946c3a-goog
>
>

2024-03-06 01:51:47

by Yan Zhao

[permalink] [raw]
Subject: Re: [RFC PATCH v5 23/29] KVM: selftests: TDX: Add shared memory test

> +int verify_shared_mem(void)
> +{
> + struct kvm_vm *vm;
> + struct kvm_vcpu *vcpu;
> +
> + vm_vaddr_t test_mem_private_gva;
> + uint32_t *test_mem_hva;
> +
> + vm = td_create();
> + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
> + vcpu = td_vcpu_add(vm, 0, guest_shared_mem);
> +
> + /*
> + * Set up shared memory page for testing by first allocating as private
> + * and then mapping the same GPA again as shared. This way, the TD does
> + * not have to remap its page tables at runtime.
> + */
> + test_mem_private_gva = vm_vaddr_alloc(vm, vm->page_size,
> + TDX_SHARED_MEM_TEST_PRIVATE_GVA);
> + TEST_ASSERT_EQ(test_mem_private_gva, TDX_SHARED_MEM_TEST_PRIVATE_GVA);
> +
> + test_mem_hva = addr_gva2hva(vm, test_mem_private_gva);
> + TEST_ASSERT(test_mem_hva != NULL,
> + "Guest address not found in guest memory regions\n");
> +
> + test_mem_private_gpa = addr_gva2gpa(vm, test_mem_private_gva);
> + virt_pg_map_shared(vm, TDX_SHARED_MEM_TEST_SHARED_GVA,
> + test_mem_private_gpa);
> +
> + test_mem_shared_gpa = test_mem_private_gpa | BIT_ULL(vm->pa_bits - 1);
Why not use vm->arch.s_bit?

> + sync_global_to_guest(vm, test_mem_private_gpa);
test_mem_private_gpa is not used in the guest. No need to sync it.

> + sync_global_to_guest(vm, test_mem_shared_gpa);
> +
> + td_finalize(vm);
> +
> + printf("Verifying shared memory accesses for TDX\n");
> +
> + /* Begin guest execution; guest writes to shared memory. */
> + printf("\t ... Starting guest execution\n");
> +
> + /* Handle map gpa as shared */
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> +
> + td_vcpu_run(vcpu);
> + TDX_TEST_ASSERT_IO(vcpu, TDX_SHARED_MEM_TEST_INFO_PORT, 4,
> + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
> + TEST_ASSERT_EQ(*test_mem_hva, TDX_SHARED_MEM_TEST_GUEST_WRITE_VALUE);
> +
> + *test_mem_hva = TDX_SHARED_MEM_TEST_HOST_WRITE_VALUE;
> + td_vcpu_run(vcpu);
> + TDX_TEST_ASSERT_IO(vcpu, TDX_SHARED_MEM_TEST_INFO_PORT, 4,
> + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
> + TEST_ASSERT_EQ(
> + *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset),
> + TDX_SHARED_MEM_TEST_HOST_WRITE_VALUE);
> +
> + printf("\t ... PASSED\n");
> +
> + kvm_vm_free(vm);
> +
> + return 0;
> +}


2024-03-06 02:08:22

by Yan Zhao

[permalink] [raw]
Subject: Re: [RFC PATCH v5 23/29] KVM: selftests: TDX: Add shared memory test

On Fri, Mar 01, 2024 at 08:02:43PM +0800, Yan Zhao wrote:
> > +void guest_shared_mem(void)
> > +{
> > + uint32_t *test_mem_shared_gva =
> > + (uint32_t *)TDX_SHARED_MEM_TEST_SHARED_GVA;
> > +
> > + uint64_t placeholder;
> > + uint64_t ret;
> > +
> > + /* Map gpa as shared */
> > + ret = tdg_vp_vmcall_map_gpa(test_mem_shared_gpa, PAGE_SIZE,
> > + &placeholder);
> > + if (ret)
> > + tdx_test_fatal_with_data(ret, __LINE__);
> > +
> > + *test_mem_shared_gva = TDX_SHARED_MEM_TEST_GUEST_WRITE_VALUE;
> > +
> > + /* Exit so host can read shared value */
> > + ret = tdg_vp_vmcall_instruction_io(TDX_SHARED_MEM_TEST_INFO_PORT, 4,
> > + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE,
> > + &placeholder);
> > + if (ret)
> > + tdx_test_fatal_with_data(ret, __LINE__);
> > +
> > + /* Read value written by host and send it back out for verification */
> > + ret = tdg_vp_vmcall_instruction_io(TDX_SHARED_MEM_TEST_INFO_PORT, 4,
> > + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE,
> > + (uint64_t *)test_mem_shared_gva);
> > + if (ret)
> > + tdx_test_fatal_with_data(ret, __LINE__);
> > +}
> > +
> > +int verify_shared_mem(void)
> > +{
> > + struct kvm_vm *vm;
> > + struct kvm_vcpu *vcpu;
> > +
> > + vm_vaddr_t test_mem_private_gva;
> > + uint32_t *test_mem_hva;
> > +
> > + vm = td_create();
> > + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
> > + vcpu = td_vcpu_add(vm, 0, guest_shared_mem);
> > +
> > + /*
> > + * Set up shared memory page for testing by first allocating as private
> > + * and then mapping the same GPA again as shared. This way, the TD does
> > + * not have to remap its page tables at runtime.
> > + */
> > + test_mem_private_gva = vm_vaddr_alloc(vm, vm->page_size,
> > + TDX_SHARED_MEM_TEST_PRIVATE_GVA);
> > + TEST_ASSERT_EQ(test_mem_private_gva, TDX_SHARED_MEM_TEST_PRIVATE_GVA);
> > +
> > + test_mem_hva = addr_gva2hva(vm, test_mem_private_gva);
> > + TEST_ASSERT(test_mem_hva != NULL,
> > + "Guest address not found in guest memory regions\n");
> > +
> > + test_mem_private_gpa = addr_gva2gpa(vm, test_mem_private_gva);
> > + virt_pg_map_shared(vm, TDX_SHARED_MEM_TEST_SHARED_GVA,
> > + test_mem_private_gpa);
> > +
> > + test_mem_shared_gpa = test_mem_private_gpa | BIT_ULL(vm->pa_bits - 1);
> > + sync_global_to_guest(vm, test_mem_private_gpa);
> > + sync_global_to_guest(vm, test_mem_shared_gpa);
> > +
> > + td_finalize(vm);
> > +
> > + printf("Verifying shared memory accesses for TDX\n");
> > +
> > + /* Begin guest execution; guest writes to shared memory. */
> > + printf("\t ... Starting guest execution\n");
> > +
> > + /* Handle map gpa as shared */
> > + td_vcpu_run(vcpu);
> > + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> The first VM exit should be caused by the MapGPA TDVMCALL, so a guest
> failure is impossible here.
>
Ah, if KVM has a bug and returns an error for the guest's MapGPA TDVMCALL
without exiting to user space, then a guest failure is possible here.

> Moving this TDX_TEST_CHECK_GUEST_FAILURE(vcpu) line to after the next
> td_vcpu_run() would be better.
So it looks like the check is required after every vCPU run.
Without it (as below), the selftest would not be able to print out the
guest-reported fatal error.
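A minimal sketch of this suggestion is to fold the fatal-error check into one helper so it runs after every vCPU entry. The struct, constant, and both callees below are simplified stand-ins for the selftest harness, not the real definitions.

```c
#include <assert.h>
#include <stdio.h>

/* Simplified stand-ins for the selftest harness (assumptions). */
struct kvm_vcpu { int exit_reason; };

#define KVM_EXIT_SYSTEM_EVENT 24

static void td_vcpu_run(struct kvm_vcpu *vcpu)
{
	/* Stub: the real helper enters the TD and fills in the exit info. */
	(void)vcpu;
}

#define TDX_TEST_CHECK_GUEST_FAILURE(vcpu)				\
	do {								\
		if ((vcpu)->exit_reason == KVM_EXIT_SYSTEM_EVENT)	\
			fprintf(stderr, "guest reported fatal error\n"); \
	} while (0)

/* Run the vCPU and immediately surface any guest-reported fatal error. */
static void td_vcpu_run_checked(struct kvm_vcpu *vcpu)
{
	td_vcpu_run(vcpu);
	TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
}
```

A test would then call td_vcpu_run_checked() everywhere instead of pairing td_vcpu_run() with a separate check at selected call sites.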

> > +
> > + td_vcpu_run(vcpu);
> > + TDX_TEST_ASSERT_IO(vcpu, TDX_SHARED_MEM_TEST_INFO_PORT, 4,
> > + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
> > + TEST_ASSERT_EQ(*test_mem_hva, TDX_SHARED_MEM_TEST_GUEST_WRITE_VALUE);
> > +
> > + *test_mem_hva = TDX_SHARED_MEM_TEST_HOST_WRITE_VALUE;
> > + td_vcpu_run(vcpu);
> > + TDX_TEST_ASSERT_IO(vcpu, TDX_SHARED_MEM_TEST_INFO_PORT, 4,
> > + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
> > + TEST_ASSERT_EQ(
> > + *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset),
> > + TDX_SHARED_MEM_TEST_HOST_WRITE_VALUE);
> > +
> > + printf("\t ... PASSED\n");
> > +
> > + kvm_vm_free(vm);
> > +
> > + return 0;
> > +}
>
>

2024-03-06 02:24:17

by Yan Zhao

[permalink] [raw]
Subject: Re: [RFC PATCH v5 24/29] KVM: selftests: Expose _vm_vaddr_alloc

On Tue, Dec 12, 2023 at 12:46:39PM -0800, Sagi Shahar wrote:
> From: Ackerley Tng <[email protected]>
>
> vm_vaddr_alloc always allocates memory in memslot 0. Exposing
> ____vm_vaddr_alloc allows users of this function to choose which memslot
> to allocate virtual memory in.
>
> Signed-off-by: Ackerley Tng <[email protected]>
> Signed-off-by: Ryan Afranji <[email protected]>
> Signed-off-by: Sagi Shahar <[email protected]>
> ---
> tools/testing/selftests/kvm/include/kvm_util_base.h | 3 +++
> tools/testing/selftests/kvm/lib/kvm_util.c | 6 +++---
> 2 files changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
> index efd7ae8abb20..5dbebf5cfd07 100644
> --- a/tools/testing/selftests/kvm/include/kvm_util_base.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
> @@ -561,6 +561,9 @@ void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot);
> struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id);
> void vm_populate_vaddr_bitmap(struct kvm_vm *vm);
> vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
> +vm_vaddr_t ____vm_vaddr_alloc(struct kvm_vm *vm, size_t sz,
> + vm_vaddr_t vaddr_min, vm_paddr_t paddr_min,
> + uint32_t data_memslot, bool encrypt);
> vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
> vm_vaddr_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
> enum kvm_mem_region_type type);
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index 28780fa1f0f2..d024abc5379c 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -1410,9 +1410,9 @@ vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz,
> * a unique set of pages, with the minimum real allocation being at least
> * a page.
> */
> -static vm_vaddr_t ____vm_vaddr_alloc(struct kvm_vm *vm, size_t sz,
> - vm_vaddr_t vaddr_min, vm_paddr_t paddr_min,
> - uint32_t data_memslot, bool encrypt)
> +vm_vaddr_t ____vm_vaddr_alloc(struct kvm_vm *vm, size_t sz,
> + vm_vaddr_t vaddr_min, vm_paddr_t paddr_min,
> + uint32_t data_memslot, bool encrypt)
Would it be better to expose a helper function with a meaningful name?
It's annoying to count the number of "_" characters to figure out which
version is used, especially when there are "__" and "____" versions :)
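One way to avoid the underscore counting would be to keep the four-underscore worker static and export a wrapper with a self-describing name. The typedefs, the wrapper name, and the stubbed worker body below are assumptions standing in for the selftest harness; only the naming pattern is the point.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-ins for the harness types (assumptions). */
typedef uint64_t vm_vaddr_t;
typedef uint64_t vm_paddr_t;
struct kvm_vm;

static vm_vaddr_t ____vm_vaddr_alloc(struct kvm_vm *vm, size_t sz,
				     vm_vaddr_t vaddr_min, vm_paddr_t paddr_min,
				     uint32_t data_memslot, bool encrypt)
{
	/* Stub: the real worker allocates and maps guest virtual memory. */
	(void)vm; (void)sz; (void)paddr_min; (void)data_memslot; (void)encrypt;
	return vaddr_min;
}

/* Descriptively named entry point, per the review comment. */
vm_vaddr_t vm_vaddr_alloc_in_memslot(struct kvm_vm *vm, size_t sz,
				     vm_vaddr_t vaddr_min, vm_paddr_t paddr_min,
				     uint32_t memslot, bool encrypt)
{
	return ____vm_vaddr_alloc(vm, sz, vaddr_min, paddr_min,
				  memslot, encrypt);
}
```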

> {
> uint64_t pages = (sz >> vm->page_shift) + ((sz % vm->page_size) != 0);
>
> --
> 2.43.0.472.g3155946c3a-goog
>
>

2024-03-06 05:20:57

by Yan Zhao

[permalink] [raw]
Subject: Re: [RFC PATCH v5 21/29] KVM: selftests: TDX: Add TDG.VP.INFO test

> +/*
> + * TDG.VP.INFO call from the guest. Verify the right values are returned
> + */
> +void verify_tdcall_vp_info(void)
> +{
> + const int num_vcpus = 2;
> + struct kvm_vcpu *vcpus[num_vcpus];
> + struct kvm_vm *vm;
> +
> + uint64_t rcx, rdx, r8, r9, r10, r11;
> + uint32_t ret_num_vcpus, ret_max_vcpus;
> + uint64_t attributes;
> + uint32_t i;
> + const struct kvm_cpuid_entry2 *cpuid_entry;
> + int max_pa = -1;
> +
> + vm = td_create();
> +
> +#define TDX_TDPARAM_ATTR_SEPT_VE_DISABLE_BIT (1UL << 28)
> +#define TDX_TDPARAM_ATTR_PKS_BIT (1UL << 30)
> + /* Setting attributes parameter used by TDH.MNG.INIT to 0x50000000 */
> + attributes = TDX_TDPARAM_ATTR_SEPT_VE_DISABLE_BIT |
> + TDX_TDPARAM_ATTR_PKS_BIT;
> +
> + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, attributes);
> +
> + for (i = 0; i < num_vcpus; i++)
> + vcpus[i] = td_vcpu_add(vm, i, guest_tdcall_vp_info);
> +
> + td_finalize(vm);
> +
> + printf("Verifying TDG.VP.INFO call:\n");
> +
> + /* Get KVM CPUIDs for reference */
> + cpuid_entry = get_cpuid_entry(kvm_get_supported_cpuid(), 0x80000008, 0);
> + TEST_ASSERT(cpuid_entry, "CPUID entry missing\n");
> + max_pa = cpuid_entry->eax & 0xff;
> +
> + for (i = 0; i < num_vcpus; i++) {
> + struct kvm_vcpu *vcpu = vcpus[i];
> +
> + /* Wait for guest to report rcx value */
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + rcx = tdx_test_read_64bit_report_from_guest(vcpu);
> +
> + /* Wait for guest to report rdx value */
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + rdx = tdx_test_read_64bit_report_from_guest(vcpu);
> +
> + /* Wait for guest to report r8 value */
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + r8 = tdx_test_read_64bit_report_from_guest(vcpu);
> +
> + /* Wait for guest to report r9 value */
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + r9 = tdx_test_read_64bit_report_from_guest(vcpu);
> +
> + /* Wait for guest to report r10 value */
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + r10 = tdx_test_read_64bit_report_from_guest(vcpu);
> +
> + /* Wait for guest to report r11 value */
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + r11 = tdx_test_read_64bit_report_from_guest(vcpu);
> +
> + ret_num_vcpus = r8 & 0xFFFFFFFF;
> + ret_max_vcpus = (r8 >> 32) & 0xFFFFFFFF;
> +
> + /* first bits 5:0 of rcx represent the GPAW */
> + TEST_ASSERT_EQ(rcx & 0x3F, max_pa);
> + /* next 63:6 bits of rcx is reserved and must be 0 */
> + TEST_ASSERT_EQ(rcx >> 6, 0);
> + TEST_ASSERT_EQ(rdx, attributes);
> + TEST_ASSERT_EQ(ret_num_vcpus, num_vcpus);
> + TEST_ASSERT_EQ(ret_max_vcpus, 512);

It would be better to assert against kvm_check_cap(KVM_CAP_MAX_VCPUS) here.
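A sketch of the suggested assertion: compare the MAX_VCPUS value that TDG.VP.INFO returns in the upper half of r8 against KVM's advertised capability rather than the literal 512. kvm_check_cap() is stubbed here; the real selftest helper issues the KVM_CHECK_EXTENSION ioctl.

```c
#include <assert.h>
#include <stdint.h>

#define KVM_CAP_MAX_VCPUS 66

static int kvm_check_cap(long cap)
{
	(void)cap;
	return 512;	/* stub value; the real helper asks the kernel */
}

/* Check the upper 32 bits of r8 (MAX_VCPUS) against KVM's capability. */
static void assert_max_vcpus(uint64_t r8)
{
	uint32_t ret_max_vcpus = (r8 >> 32) & 0xFFFFFFFF;

	assert(ret_max_vcpus == (uint32_t)kvm_check_cap(KVM_CAP_MAX_VCPUS));
}
```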

> + /* VCPU_INDEX = i */
> + TEST_ASSERT_EQ(r9, i);
> + /*
> + * verify reserved bits are 0
> + * r10 bit 0 (SYS_RD) indicates that the TDG.SYS.RD/RDM/RDALL
> + * functions are available and can be either 0 or 1.
> + */
> + TEST_ASSERT_EQ(r10 & ~1, 0);
> + TEST_ASSERT_EQ(r11, 0);
> +
> + /* Wait for guest to complete execution */
> + td_vcpu_run(vcpu);
> +
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + TDX_TEST_ASSERT_SUCCESS(vcpu);
> +
> + printf("\t ... Guest completed run on VCPU=%u\n", i);
> + }
> +
> + kvm_vm_free(vm);
> + printf("\t ... PASSED\n");
> +}
> +
> int main(int argc, char **argv)
> {
> setbuf(stdout, NULL);
> @@ -1169,6 +1313,7 @@ int main(int argc, char **argv)
> run_in_new_process(&verify_mmio_writes);
> run_in_new_process(&verify_td_cpuid_tdcall);
> run_in_new_process(&verify_host_reading_private_mem);
> + run_in_new_process(&verify_tdcall_vp_info);
>
> return 0;
> }
> --
> 2.43.0.472.g3155946c3a-goog
>
>

2024-03-06 09:25:04

by Yan Zhao

[permalink] [raw]
Subject: Re: [RFC PATCH v5 28/29] KVM: selftests: TDX: Add TDX UPM selftest

On Tue, Dec 12, 2023 at 12:46:43PM -0800, Sagi Shahar wrote:
> From: Ackerley Tng <[email protected]>
>
> This tests the use of guest memory with explicit MapGPA calls.
>
> Signed-off-by: Ackerley Tng <[email protected]>
> Signed-off-by: Ryan Afranji <[email protected]>
> Signed-off-by: Sagi Shahar <[email protected]>
> ---
> tools/testing/selftests/kvm/Makefile | 1 +
> .../selftests/kvm/x86_64/tdx_upm_test.c | 401 ++++++++++++++++++
> 2 files changed, 402 insertions(+)
> create mode 100644 tools/testing/selftests/kvm/x86_64/tdx_upm_test.c
>
> diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> index 8c0a6b395ee5..2f2669af15d6 100644
> --- a/tools/testing/selftests/kvm/Makefile
> +++ b/tools/testing/selftests/kvm/Makefile
> @@ -157,6 +157,7 @@ TEST_GEN_PROGS_x86_64 += kvm_binary_stats_test
> TEST_GEN_PROGS_x86_64 += system_counter_offset_test
> TEST_GEN_PROGS_x86_64 += x86_64/tdx_vm_tests
> TEST_GEN_PROGS_x86_64 += x86_64/tdx_shared_mem_test
> +TEST_GEN_PROGS_x86_64 += x86_64/tdx_upm_test
>
> # Compiled outputs used by test targets
> TEST_GEN_PROGS_EXTENDED_x86_64 += x86_64/nx_huge_pages_test
> diff --git a/tools/testing/selftests/kvm/x86_64/tdx_upm_test.c b/tools/testing/selftests/kvm/x86_64/tdx_upm_test.c
> new file mode 100644
> index 000000000000..44671874a4f1
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/x86_64/tdx_upm_test.c
> @@ -0,0 +1,401 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +
> +#include <asm/kvm.h>
> +#include <asm/vmx.h>
> +#include <linux/kvm.h>
> +#include <stdbool.h>
> +#include <stdint.h>
> +
> +#include "kvm_util_base.h"
> +#include "processor.h"
> +#include "tdx/tdcall.h"
> +#include "tdx/tdx.h"
> +#include "tdx/tdx_util.h"
> +#include "tdx/test_util.h"
> +#include "test_util.h"
> +
> +/* TDX UPM test patterns */
> +#define PATTERN_CONFIDENCE_CHECK (0x11)
> +#define PATTERN_HOST_FOCUS (0x22)
> +#define PATTERN_GUEST_GENERAL (0x33)
> +#define PATTERN_GUEST_FOCUS (0x44)
> +
> +/*
> + * 0x80000000 is arbitrarily selected. The selected address need not be the same
> + * as TDX_UPM_TEST_AREA_GVA_PRIVATE, but it should not overlap with selftest
> + * code or boot page.
> + */
> +#define TDX_UPM_TEST_AREA_GPA (0x80000000)
> +/* Test area GPA is arbitrarily selected */
> +#define TDX_UPM_TEST_AREA_GVA_PRIVATE (0x90000000)
> +/* Select any bit that can be used as a flag */
> +#define TDX_UPM_TEST_AREA_GVA_SHARED_BIT (32)
> +/*
> + * TDX_UPM_TEST_AREA_GVA_SHARED is used to map the same GPA twice into the
> + * guest, once as shared and once as private
> + */
> +#define TDX_UPM_TEST_AREA_GVA_SHARED \
> + (TDX_UPM_TEST_AREA_GVA_PRIVATE | \
> + BIT_ULL(TDX_UPM_TEST_AREA_GVA_SHARED_BIT))
> +
> +/* The test area is 2MB in size */
> +#define TDX_UPM_TEST_AREA_SIZE (2 << 20)
> +/* 0th general area is 1MB in size */
> +#define TDX_UPM_GENERAL_AREA_0_SIZE (1 << 20)
> +/* Focus area is 40KB in size */
> +#define TDX_UPM_FOCUS_AREA_SIZE (40 << 10)
> +/* 1st general area is the rest of the space in the test area */
> +#define TDX_UPM_GENERAL_AREA_1_SIZE \
> + (TDX_UPM_TEST_AREA_SIZE - TDX_UPM_GENERAL_AREA_0_SIZE - \
> + TDX_UPM_FOCUS_AREA_SIZE)
> +
> +/*
> + * The test memory area is set up as two general areas, sandwiching a focus
> + * area. The general areas act as control areas. After they are filled, they
> + * are not expected to change throughout the tests. The focus area is memory
> + * whose permissions change from private to shared and vice-versa.
> + *
> + * The focus area is intentionally small, and sandwiched to test that when the
> + * focus area's permissions change, the other areas' permissions are not
> + * affected.
> + */
> +struct __packed tdx_upm_test_area {
> + uint8_t general_area_0[TDX_UPM_GENERAL_AREA_0_SIZE];
> + uint8_t focus_area[TDX_UPM_FOCUS_AREA_SIZE];
> + uint8_t general_area_1[TDX_UPM_GENERAL_AREA_1_SIZE];
> +};
> +
> +static void fill_test_area(struct tdx_upm_test_area *test_area_base,
> + uint8_t pattern)
> +{
> + memset(test_area_base, pattern, sizeof(*test_area_base));
> +}
> +
> +static void fill_focus_area(struct tdx_upm_test_area *test_area_base,
> + uint8_t pattern)
> +{
> + memset(test_area_base->focus_area, pattern,
> + sizeof(test_area_base->focus_area));
> +}
> +
> +static bool check_area(uint8_t *base, uint64_t size, uint8_t expected_pattern)
> +{
> + size_t i;
> +
> + for (i = 0; i < size; i++) {
> + if (base[i] != expected_pattern)
> + return false;
> + }
> +
> + return true;
> +}
> +
> +static bool check_general_areas(struct tdx_upm_test_area *test_area_base,
> + uint8_t expected_pattern)
> +{
> + return (check_area(test_area_base->general_area_0,
> + sizeof(test_area_base->general_area_0),
> + expected_pattern) &&
> + check_area(test_area_base->general_area_1,
> + sizeof(test_area_base->general_area_1),
> + expected_pattern));
> +}
> +
> +static bool check_focus_area(struct tdx_upm_test_area *test_area_base,
> + uint8_t expected_pattern)
> +{
> + return check_area(test_area_base->focus_area,
> + sizeof(test_area_base->focus_area), expected_pattern);
> +}
> +
> +static bool check_test_area(struct tdx_upm_test_area *test_area_base,
> + uint8_t expected_pattern)
> +{
> + return (check_general_areas(test_area_base, expected_pattern) &&
> + check_focus_area(test_area_base, expected_pattern));
> +}
> +
> +static bool fill_and_check(struct tdx_upm_test_area *test_area_base, uint8_t pattern)
> +{
> + fill_test_area(test_area_base, pattern);
> +
> + return check_test_area(test_area_base, pattern);
> +}
> +
> +#define TDX_UPM_TEST_ASSERT(x) \
> + do { \
> + if (!(x)) \
> + tdx_test_fatal(__LINE__); \
> + } while (0)
> +
> +/*
> + * Shared variables between guest and host
> + */
> +static struct tdx_upm_test_area *test_area_gpa_private;
> +static struct tdx_upm_test_area *test_area_gpa_shared;
> +
> +/*
> + * Test stages for syncing with host
> + */
> +enum {
> + SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST = 1,
> + SYNC_CHECK_READ_SHARED_MEMORY_FROM_HOST,
> + SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST_AGAIN,
> +};
> +
> +#define TDX_UPM_TEST_ACCEPT_PRINT_PORT 0x87
> +
> +/**
> + * Does vcpu_run, and also manages memory conversions if requested by the TD.
> + */
> +void vcpu_run_and_manage_memory_conversions(struct kvm_vm *vm,
> + struct kvm_vcpu *vcpu)
> +{
> + for (;;) {
> + vcpu_run(vcpu);
> + if (vcpu->run->exit_reason == KVM_EXIT_TDX &&
> + vcpu->run->tdx.type == KVM_EXIT_TDX_VMCALL &&
> + vcpu->run->tdx.u.vmcall.subfunction == TDG_VP_VMCALL_MAP_GPA) {
> + struct kvm_tdx_vmcall *vmcall_info = &vcpu->run->tdx.u.vmcall;
> + uint64_t gpa = vmcall_info->in_r12 & ~vm->arch.s_bit;
> +
> + handle_memory_conversion(vm, gpa, vmcall_info->in_r13,
> + !(vm->arch.s_bit & vmcall_info->in_r12));
> + vmcall_info->status_code = 0;
> + continue;
> + } else if (
> + vcpu->run->exit_reason == KVM_EXIT_IO &&
> + vcpu->run->io.port == TDX_UPM_TEST_ACCEPT_PRINT_PORT) {
> + uint64_t gpa = tdx_test_read_64bit(
> + vcpu, TDX_UPM_TEST_ACCEPT_PRINT_PORT);
> + printf("\t ... guest accepting 1 page at GPA: 0x%lx\n", gpa);
> + continue;
> + }
>
This loop is missing the conversion of TDG_VP_VMCALL_REPORT_FATAL_ERROR to
KVM_EXIT_SYSTEM_EVENT.

> + break;
> + }
> +}
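A sketch of the missing case Yan points out: rewrite a TDG.VP.VMCALL&lt;ReportFatalError&gt; into a KVM_EXIT_SYSTEM_EVENT so TDX_TEST_CHECK_GUEST_FAILURE() can see it, mirroring handle_userspace_tdg_vp_vmcall_exit() in tdx.c. The struct layouts below are simplified stand-ins for the real kvm_run/kvm_tdx_vmcall definitions.

```c
#include <assert.h>
#include <stdint.h>

#define TDG_VP_VMCALL_REPORT_FATAL_ERROR 0x10003ULL
#define KVM_EXIT_SYSTEM_EVENT 24

/* Simplified stand-ins for kvm_tdx_vmcall and kvm_run (assumptions). */
struct fake_vmcall { uint64_t subfunction, in_r12, in_r13, status_code; };
struct fake_run {
	int exit_reason;
	struct { uint64_t data[3]; } system_event;
};

/* Rewrite a fatal-error TDVMCALL as a system event exit. */
static void convert_fatal_error(struct fake_run *run, struct fake_vmcall *vmcall)
{
	if (vmcall->subfunction != TDG_VP_VMCALL_REPORT_FATAL_ERROR)
		return;

	run->exit_reason = KVM_EXIT_SYSTEM_EVENT;
	run->system_event.data[1] = vmcall->in_r12;	/* error code */
	run->system_event.data[2] = vmcall->in_r13;	/* extra data */
	vmcall->status_code = 0;
}
```

In the loop above, this check would run before the MAP_GPA case so a guest-reported fatal error breaks out of the loop as a system event.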
> +
> +static void guest_upm_explicit(void)
> +{
> + uint64_t ret = 0;
> + uint64_t failed_gpa;
> +
> + struct tdx_upm_test_area *test_area_gva_private =
> + (struct tdx_upm_test_area *)TDX_UPM_TEST_AREA_GVA_PRIVATE;
> + struct tdx_upm_test_area *test_area_gva_shared =
> + (struct tdx_upm_test_area *)TDX_UPM_TEST_AREA_GVA_SHARED;
> +
> + /* Check: host reading private memory does not modify guest's view */
> + fill_test_area(test_area_gva_private, PATTERN_GUEST_GENERAL);
> +
> + tdx_test_report_to_user_space(SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST);
> +
> + TDX_UPM_TEST_ASSERT(
> + check_test_area(test_area_gva_private, PATTERN_GUEST_GENERAL));
> +
> + /* Remap focus area as shared */
> + ret = tdg_vp_vmcall_map_gpa((uint64_t)test_area_gpa_shared->focus_area,
> + sizeof(test_area_gpa_shared->focus_area),
> + &failed_gpa);
"failed_gpa" is not filled by the host and is uninitialized.


> + TDX_UPM_TEST_ASSERT(!ret);
> +
> + /* General areas should be unaffected by remapping */
> + TDX_UPM_TEST_ASSERT(
> + check_general_areas(test_area_gva_private, PATTERN_GUEST_GENERAL));
> +
> + /*
> + * Use memory contents to confirm that the memory allocated using mmap
> + * is used as backing memory for shared memory - PATTERN_CONFIDENCE_CHECK
> + * was written by the VMM at the beginning of this test.
> + */
> + TDX_UPM_TEST_ASSERT(
> + check_focus_area(test_area_gva_shared, PATTERN_CONFIDENCE_CHECK));
> +
> + /* Guest can use focus area after remapping as shared */
> + fill_focus_area(test_area_gva_shared, PATTERN_GUEST_FOCUS);
> +
> + tdx_test_report_to_user_space(SYNC_CHECK_READ_SHARED_MEMORY_FROM_HOST);
> +
> + /* Check that guest has the same view of shared memory */
> + TDX_UPM_TEST_ASSERT(
> + check_focus_area(test_area_gva_shared, PATTERN_HOST_FOCUS));
> +
> + /* Remap focus area back to private */
> + ret = tdg_vp_vmcall_map_gpa((uint64_t)test_area_gpa_private->focus_area,
> + sizeof(test_area_gpa_private->focus_area),
> + &failed_gpa);
> + TDX_UPM_TEST_ASSERT(!ret);
> +
> + /* General areas should be unaffected by remapping */
> + TDX_UPM_TEST_ASSERT(
> + check_general_areas(test_area_gva_private, PATTERN_GUEST_GENERAL));
> +
> + /* Focus area should be zeroed after remapping */
> + TDX_UPM_TEST_ASSERT(check_focus_area(test_area_gva_private, 0));
> +
> + tdx_test_report_to_user_space(SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST_AGAIN);
> +
> + /* Check that guest can use private memory after focus area is remapped as private */
> + TDX_UPM_TEST_ASSERT(
> + fill_and_check(test_area_gva_private, PATTERN_GUEST_GENERAL));
> +
> + tdx_test_success();
> +}
> +
> +static void run_selftest(struct kvm_vm *vm, struct kvm_vcpu *vcpu,
> + struct tdx_upm_test_area *test_area_base_hva)
> +{
> + vcpu_run(vcpu);
Should this be td_vcpu_run()?
Otherwise there is nowhere to convert TDG_VP_VMCALL_REPORT_FATAL_ERROR into
the KVM_EXIT_SYSTEM_EVENT exit reason required by TDX_TEST_CHECK_GUEST_FAILURE().

> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + TDX_TEST_ASSERT_IO(vcpu, TDX_TEST_REPORT_PORT, TDX_TEST_REPORT_SIZE,
> + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
> + TEST_ASSERT_EQ(*(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset),
> + SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST);
> +
> + /*
> + * Check that host should read PATTERN_CONFIDENCE_CHECK from guest's
> + * private memory. This confirms that regular memory (userspace_addr in
> + * struct kvm_userspace_memory_region) is used to back the host's view
> + * of private memory, since PATTERN_CONFIDENCE_CHECK was written to that
> + * memory before starting the guest.
> + */
> + TEST_ASSERT(check_test_area(test_area_base_hva, PATTERN_CONFIDENCE_CHECK),
> + "Host should read PATTERN_CONFIDENCE_CHECK from guest's private memory.");
> +
> + vcpu_run_and_manage_memory_conversions(vm, vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + TDX_TEST_ASSERT_IO(vcpu, TDX_TEST_REPORT_PORT, TDX_TEST_REPORT_SIZE,
> + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
> + TEST_ASSERT_EQ(*(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset),
> + SYNC_CHECK_READ_SHARED_MEMORY_FROM_HOST);
> +
> + TEST_ASSERT(check_focus_area(test_area_base_hva, PATTERN_GUEST_FOCUS),
> + "Host should have the same view of shared memory as guest.");
> + TEST_ASSERT(check_general_areas(test_area_base_hva, PATTERN_CONFIDENCE_CHECK),
> + "Host's view of private memory should still be backed by regular memory.");
> +
> + /* Check that host can use shared memory */
> + fill_focus_area(test_area_base_hva, PATTERN_HOST_FOCUS);
> + TEST_ASSERT(check_focus_area(test_area_base_hva, PATTERN_HOST_FOCUS),
> + "Host should be able to use shared memory.");
> +
> + vcpu_run_and_manage_memory_conversions(vm, vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + TDX_TEST_ASSERT_IO(vcpu, TDX_TEST_REPORT_PORT, TDX_TEST_REPORT_SIZE,
> + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
> + TEST_ASSERT_EQ(*(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset),
> + SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST_AGAIN);
> +
> + TEST_ASSERT(check_general_areas(test_area_base_hva, PATTERN_CONFIDENCE_CHECK),
> + "Host's view of private memory should be backed by regular memory.");
> + TEST_ASSERT(check_focus_area(test_area_base_hva, PATTERN_HOST_FOCUS),
> + "Host's view of private memory should be backed by regular memory.");
> +
> + vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + TDX_TEST_ASSERT_SUCCESS(vcpu);
> +
> + printf("\t ... PASSED\n");
> +}
> +
> +static bool address_between(uint64_t addr, void *lo, void *hi)
> +{
> + return (uint64_t)lo <= addr && addr < (uint64_t)hi;
> +}
> +
> +static void guest_ve_handler(struct ex_regs *regs)
> +{
> + uint64_t ret;
> + struct ve_info ve;
> +
> + ret = tdg_vp_veinfo_get(&ve);
> + TDX_UPM_TEST_ASSERT(!ret);
> +
> + /* For this test, we will only handle EXIT_REASON_EPT_VIOLATION */
> + TDX_UPM_TEST_ASSERT(ve.exit_reason == EXIT_REASON_EPT_VIOLATION);
> +
> + /* Validate GPA in fault */
> + TDX_UPM_TEST_ASSERT(
> + address_between(ve.gpa,
> + test_area_gpa_private->focus_area,
> + test_area_gpa_private->general_area_1));
> +
> + tdx_test_send_64bit(TDX_UPM_TEST_ACCEPT_PRINT_PORT, ve.gpa);
> +
> +#define MEM_PAGE_ACCEPT_LEVEL_4K 0
> +#define MEM_PAGE_ACCEPT_LEVEL_2M 1
> + ret = tdg_mem_page_accept(ve.gpa, MEM_PAGE_ACCEPT_LEVEL_4K);
What if ve.gpa ends with 1?
e.g. it could be 0x80100001 if the guest accesses test_area_gva_private->focus_area[1]
after remapping the focus area back to private.

Though this does not happen in the current test, it's better to strip the low 12
bits of ve.gpa before passing it to tdg_mem_page_accept().
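A sketch of the suggested fix: align the faulting GPA down to its 4K page before passing it to tdg_mem_page_accept(), so a byte-granular fault address such as 0x80100001 still names a valid page. The helper name is an assumption for illustration.

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 4096ULL

/* Strip the low 12 bits so the accepted GPA is 4K-aligned. */
static uint64_t accept_gpa(uint64_t ve_gpa)
{
	return ve_gpa & ~(PAGE_SIZE - 1);
}
```

In guest_ve_handler() the call would then become tdg_mem_page_accept(accept_gpa(ve.gpa), MEM_PAGE_ACCEPT_LEVEL_4K).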

> + TDX_UPM_TEST_ASSERT(!ret);
> +}
> +
> +static void verify_upm_test(void)
> +{
> + struct kvm_vm *vm;
> + struct kvm_vcpu *vcpu;
> +
> + vm_vaddr_t test_area_gva_private;
> + struct tdx_upm_test_area *test_area_base_hva;
> + uint64_t test_area_npages;
> +
> + vm = td_create();
> + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
> + vcpu = td_vcpu_add(vm, 0, guest_upm_explicit);
> +
> + vm_install_exception_handler(vm, VE_VECTOR, guest_ve_handler);
> +
> + /*
> + * Set up shared memory page for testing by first allocating as private
> + * and then mapping the same GPA again as shared. This way, the TD does
> + * not have to remap its page tables at runtime.
> + */
> + test_area_npages = TDX_UPM_TEST_AREA_SIZE / vm->page_size;
> + vm_userspace_mem_region_add(vm,
> + VM_MEM_SRC_ANONYMOUS, TDX_UPM_TEST_AREA_GPA,
> + 3, test_area_npages, KVM_MEM_PRIVATE);
> +
> + test_area_gva_private = ____vm_vaddr_alloc(
> + vm, TDX_UPM_TEST_AREA_SIZE, TDX_UPM_TEST_AREA_GVA_PRIVATE,
> + TDX_UPM_TEST_AREA_GPA, 3, true);
> + TEST_ASSERT_EQ(test_area_gva_private, TDX_UPM_TEST_AREA_GVA_PRIVATE);
> +
> + test_area_gpa_private = (struct tdx_upm_test_area *)
> + addr_gva2gpa(vm, test_area_gva_private);
> + virt_map_shared(vm, TDX_UPM_TEST_AREA_GVA_SHARED,
> + (uint64_t)test_area_gpa_private,
> + test_area_npages);
> + TEST_ASSERT_EQ(addr_gva2gpa(vm, TDX_UPM_TEST_AREA_GVA_SHARED),
> + (vm_paddr_t)test_area_gpa_private);
> +
> + test_area_base_hva = addr_gva2hva(vm, TDX_UPM_TEST_AREA_GVA_PRIVATE);
> +
> + TEST_ASSERT(fill_and_check(test_area_base_hva, PATTERN_CONFIDENCE_CHECK),
> + "Failed to mark memory intended as backing memory for TD shared memory");
> +
> + sync_global_to_guest(vm, test_area_gpa_private);
> + test_area_gpa_shared = (struct tdx_upm_test_area *)
> + ((uint64_t)test_area_gpa_private | BIT_ULL(vm->pa_bits - 1));
> + sync_global_to_guest(vm, test_area_gpa_shared);
> +
> + td_finalize(vm);
> +
> + printf("Verifying UPM functionality: explicit MapGPA\n");
> +
> + run_selftest(vm, vcpu, test_area_base_hva);
> +
> + kvm_vm_free(vm);
> +}
> +
> +int main(int argc, char **argv)
> +{
> + /* Disable stdout buffering */
> + setbuf(stdout, NULL);
> +
> + if (!is_tdx_enabled()) {
> + printf("TDX is not supported by the KVM\n"
> + "Skipping the TDX tests.\n");
> + return 0;
> + }
> +
> + run_in_new_process(&verify_upm_test);
> +}
> --
> 2.43.0.472.g3155946c3a-goog
>
>

2024-03-14 21:46:49

by Chen, Zide

[permalink] [raw]
Subject: Re: [RFC PATCH v5 27/29] KVM: selftests: Propagate KVM_EXIT_MEMORY_FAULT to userspace



On 12/12/2023 12:47 PM, Sagi Shahar wrote:
>
>
> -----Original Message-----
> From: Sagi Shahar <[email protected]>
> Sent: Tuesday, December 12, 2023 12:47 PM
> To: [email protected]; Ackerley Tng <[email protected]>; Afranji, Ryan <[email protected]>; Aktas, Erdem <[email protected]>; Sagi Shahar <[email protected]>; Yamahata, Isaku <[email protected]>
> Cc: Sean Christopherson <[email protected]>; Paolo Bonzini <[email protected]>; Shuah Khan <[email protected]>; Peter Gonda <[email protected]>; Xu, Haibo1 <[email protected]>; Chao Peng <[email protected]>; Annapurve, Vishal <[email protected]>; Roger Wang <[email protected]>; Vipin Sharma <[email protected]>; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]
> Subject: [RFC PATCH v5 27/29] KVM: selftests: Propagate KVM_EXIT_MEMORY_FAULT to userspace
>
> Allow userspace to handle KVM_EXIT_MEMORY_FAULT instead of triggering TEST_ASSERT.
>
> From the KVM_EXIT_MEMORY_FAULT documentation:
> Note! KVM_EXIT_MEMORY_FAULT is unique among all KVM exit reasons in that it accompanies a return code of '-1', not '0'! errno will always be set to EFAULT or EHWPOISON when KVM exits with KVM_EXIT_MEMORY_FAULT, userspace should assume kvm_run.exit_reason is stale/undefined for all other error numbers.

If KVM exits to userspace with KVM_EXIT_MEMORY_FAULT, most likely it's because the guest attempted to access the gfn in a way that differs from how KVM is configured with respect to the private/shared property. I'd suggest dropping this patch and working on the selftest code to eliminate this exit.

If we need a testcase to catch this exit intentionally, we may call _vcpu_run() directly from the testcase and keep the common API vcpu_run() intact.
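A sketch of that alternative: keep vcpu_run() strict, and let a test that expects KVM_EXIT_MEMORY_FAULT call _vcpu_run() itself and inspect ret/errno directly. _vcpu_run() is stubbed below; the real one issues the KVM_RUN ioctl with the vCPU argument.

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

static int fake_errno;

static int _vcpu_run(void)
{
	fake_errno = EFAULT;	/* stub: pretend KVM returned -1 with EFAULT */
	return -1;
}

/*
 * Per the quoted documentation, KVM_EXIT_MEMORY_FAULT is accompanied by a
 * return code of -1 with errno set to EFAULT or EHWPOISON.
 */
static bool is_memory_fault_exit(int ret, int err)
{
	return ret == -1 && (err == EFAULT || err == EHWPOISON);
}
```

The testcase would branch on is_memory_fault_exit() and only then consult vcpu->run->memory_fault, leaving the common vcpu_run() assertion untouched.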

>
> Signed-off-by: Sagi Shahar <[email protected]>
> ---
> tools/testing/selftests/kvm/lib/kvm_util.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index d024abc5379c..8fb041e51484 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -1742,6 +1742,10 @@ void vcpu_run(struct kvm_vcpu *vcpu)
> {
> int ret = _vcpu_run(vcpu);
>
> + // Allow this scenario to be handled by the caller.
> + if (ret == -1 && errno == EFAULT)
> + return;
> +
> TEST_ASSERT(!ret, KVM_IOCTL_ERROR(KVM_RUN, ret));
> }
>
> --
> 2.43.0.472.g3155946c3a-goog
>

2024-03-16 06:25:17

by Chen, Zide

[permalink] [raw]
Subject: Re: [RFC PATCH v5 23/29] KVM: selftests: TDX: Add shared memory test



On 12/12/2023 12:47 PM, Sagi Shahar wrote:
>
>
> -----Original Message-----
> From: Sagi Shahar <[email protected]>
> Sent: Tuesday, December 12, 2023 12:47 PM
> To: [email protected]; Ackerley Tng <[email protected]>; Afranji, Ryan <[email protected]>; Aktas, Erdem <[email protected]>; Sagi Shahar <[email protected]>; Yamahata, Isaku <[email protected]>
> Cc: Sean Christopherson <[email protected]>; Paolo Bonzini <[email protected]>; Shuah Khan <[email protected]>; Peter Gonda <[email protected]>; Xu, Haibo1 <[email protected]>; Chao Peng <[email protected]>; Annapurve, Vishal <[email protected]>; Roger Wang <[email protected]>; Vipin Sharma <[email protected]>; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]
> Subject: [RFC PATCH v5 23/29] KVM: selftests: TDX: Add shared memory test
>
> From: Ryan Afranji <[email protected]>
>
> Adds a test that sets up shared memory between the host and guest.
>
> Signed-off-by: Ryan Afranji <[email protected]>
> Signed-off-by: Sagi Shahar <[email protected]>
> Signed-off-by: Ackerley Tng <[email protected]>
> ---
> tools/testing/selftests/kvm/Makefile | 1 +
> .../selftests/kvm/include/x86_64/tdx/tdx.h | 2 +
> .../kvm/include/x86_64/tdx/tdx_util.h | 2 +
> .../selftests/kvm/lib/x86_64/tdx/tdx.c | 26 ++++
> .../selftests/kvm/lib/x86_64/tdx/tdx_util.c | 25 ++++
> .../kvm/x86_64/tdx_shared_mem_test.c | 135 ++++++++++++++++++
> 6 files changed, 191 insertions(+)
> create mode 100644 tools/testing/selftests/kvm/x86_64/tdx_shared_mem_test.c
>
> diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> index 80d4a50eeb9f..8c0a6b395ee5 100644
> --- a/tools/testing/selftests/kvm/Makefile
> +++ b/tools/testing/selftests/kvm/Makefile
> @@ -156,6 +156,7 @@ TEST_GEN_PROGS_x86_64 += steal_time
> TEST_GEN_PROGS_x86_64 += kvm_binary_stats_test
> TEST_GEN_PROGS_x86_64 += system_counter_offset_test
> TEST_GEN_PROGS_x86_64 += x86_64/tdx_vm_tests
> +TEST_GEN_PROGS_x86_64 += x86_64/tdx_shared_mem_test
>
> # Compiled outputs used by test targets
> TEST_GEN_PROGS_EXTENDED_x86_64 += x86_64/nx_huge_pages_test
> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> index 6b176de1e795..db4cc62abb5d 100644
> --- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> @@ -8,6 +8,7 @@
> #define TDG_VP_INFO 1
>
> #define TDG_VP_VMCALL_GET_TD_VM_CALL_INFO 0x10000
> +#define TDG_VP_VMCALL_MAP_GPA 0x10001
> #define TDG_VP_VMCALL_REPORT_FATAL_ERROR 0x10003
>
> #define TDG_VP_VMCALL_INSTRUCTION_CPUID 10
> @@ -36,5 +37,6 @@ uint64_t tdg_vp_vmcall_instruction_cpuid(uint32_t eax, uint32_t ecx,
> uint64_t tdg_vp_info(uint64_t *rcx, uint64_t *rdx,
> uint64_t *r8, uint64_t *r9,
> uint64_t *r10, uint64_t *r11);
> +uint64_t tdg_vp_vmcall_map_gpa(uint64_t address, uint64_t size,
> +uint64_t *data_out);
>
> #endif // SELFTEST_TDX_TDX_H
> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h
> index 32dd6b8fda46..3e850ecb85a6 100644
> --- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h
> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h
> @@ -13,5 +13,7 @@ void td_initialize(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
> uint64_t attributes);
> void td_finalize(struct kvm_vm *vm);
> void td_vcpu_run(struct kvm_vcpu *vcpu);
> +void handle_memory_conversion(struct kvm_vm *vm, uint64_t gpa, uint64_t size,
> + bool shared_to_private);
>
> #endif // SELFTESTS_TDX_KVM_UTIL_H
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> index bcd9cceb3372..061a5c0bef34 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> @@ -4,9 +4,11 @@
>
> #include "tdx/tdcall.h"
> #include "tdx/tdx.h"
> +#include "tdx/tdx_util.h"
>
> void handle_userspace_tdg_vp_vmcall_exit(struct kvm_vcpu *vcpu)
> {
> + struct kvm_vm *vm = vcpu->vm;
> struct kvm_tdx_vmcall *vmcall_info = &vcpu->run->tdx.u.vmcall;
> uint64_t vmcall_subfunction = vmcall_info->subfunction;
>
> @@ -20,6 +22,14 @@ void handle_userspace_tdg_vp_vmcall_exit(struct kvm_vcpu *vcpu)
> vcpu->run->system_event.data[2] = vmcall_info->in_r13;
> vmcall_info->status_code = 0;
> break;
> + case TDG_VP_VMCALL_MAP_GPA:
> + uint64_t gpa = vmcall_info->in_r12 & ~vm->arch.s_bit;
> + bool shared_to_private = !(vm->arch.s_bit &
> + vmcall_info->in_r12);
> + handle_memory_conversion(vm, gpa, vmcall_info->in_r13,
> + shared_to_private);
> + vmcall_info->status_code = 0;
> + break;
> default:
> TEST_FAIL("TD VMCALL subfunction %lu is unsupported.\n",
> vmcall_subfunction);
> @@ -210,3 +220,19 @@ uint64_t tdg_vp_info(uint64_t *rcx, uint64_t *rdx,
>
> return ret;
> }
> +
> +uint64_t tdg_vp_vmcall_map_gpa(uint64_t address, uint64_t size,
> +			       uint64_t *data_out)
> +{
> + uint64_t ret;
> + struct tdx_hypercall_args args = {
> + .r11 = TDG_VP_VMCALL_MAP_GPA,
> + .r12 = address,
> + .r13 = size
> + };
> +
> + ret = __tdx_hypercall(&args, TDX_HCALL_HAS_OUTPUT);
> +
> + if (data_out)
> + *data_out = args.r11;
> + return ret;
> +}
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
> index d745bb6287c1..92fa6bd13229 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
> @@ -531,3 +531,28 @@ void td_vcpu_run(struct kvm_vcpu *vcpu)
> handle_userspace_tdg_vp_vmcall_exit(vcpu);
> }
> }
> +
> +/**
> + * Handle conversion of memory with @size beginning @gpa for @vm. Set
> + * @shared_to_private to true for shared to private conversions and false
> + * otherwise.
> + *
> + * Since this is just for selftests, we will just keep both pieces of backing
> + * memory allocated and not deallocate/allocate memory; we'll just do the
> + * minimum of calling KVM_MEMORY_ENCRYPT_REG_REGION and
> + * KVM_MEMORY_ENCRYPT_UNREG_REGION.
> + */
> +void handle_memory_conversion(struct kvm_vm *vm, uint64_t gpa, uint64_t size,
> + bool shared_to_private)
> +{
> + struct kvm_memory_attributes range;
> +
> + range.address = gpa;
> + range.size = size;
> + range.attributes = shared_to_private ? KVM_MEMORY_ATTRIBUTE_PRIVATE : 0;
> + range.flags = 0;
> +
> +	printf("\t ... calling KVM_SET_MEMORY_ATTRIBUTES ioctl with gpa=%#lx, size=%#lx, attributes=%#llx\n",
> +	       gpa, size, range.attributes);
> +
> +	vm_ioctl(vm, KVM_SET_MEMORY_ATTRIBUTES, &range);
> +}
> diff --git a/tools/testing/selftests/kvm/x86_64/tdx_shared_mem_test.c b/tools/testing/selftests/kvm/x86_64/tdx_shared_mem_test.c
> new file mode 100644
> index 000000000000..ba6bdc470270
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/x86_64/tdx_shared_mem_test.c
> @@ -0,0 +1,135 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +
> +#include <linux/kvm.h>
> +#include <stdint.h>
> +
> +#include "kvm_util_base.h"
> +#include "processor.h"
> +#include "tdx/tdcall.h"
> +#include "tdx/tdx.h"
> +#include "tdx/tdx_util.h"
> +#include "tdx/test_util.h"
> +#include "test_util.h"
> +
> +#define TDX_SHARED_MEM_TEST_PRIVATE_GVA (0x80000000)
> +#define TDX_SHARED_MEM_TEST_VADDR_SHARED_MASK BIT_ULL(30)
> +#define TDX_SHARED_MEM_TEST_SHARED_GVA \
> + (TDX_SHARED_MEM_TEST_PRIVATE_GVA | \
> + TDX_SHARED_MEM_TEST_VADDR_SHARED_MASK)
> +
> +#define TDX_SHARED_MEM_TEST_GUEST_WRITE_VALUE (0xcafecafe)
> +#define TDX_SHARED_MEM_TEST_HOST_WRITE_VALUE (0xabcdabcd)
> +
> +#define TDX_SHARED_MEM_TEST_INFO_PORT 0x87
> +
> +/*
> + * Shared variables between guest and host
> + */
> +static uint64_t test_mem_private_gpa;
> +static uint64_t test_mem_shared_gpa;
> +
> +void guest_shared_mem(void)
> +{
> + uint32_t *test_mem_shared_gva =
> + (uint32_t *)TDX_SHARED_MEM_TEST_SHARED_GVA;
> +
> + uint64_t placeholder;
> + uint64_t ret;
> +
> + /* Map gpa as shared */
> + ret = tdg_vp_vmcall_map_gpa(test_mem_shared_gpa, PAGE_SIZE,
> + &placeholder);
> + if (ret)
> + tdx_test_fatal_with_data(ret, __LINE__);
> +
> + *test_mem_shared_gva = TDX_SHARED_MEM_TEST_GUEST_WRITE_VALUE;
> +
> + /* Exit so host can read shared value */
> + ret = tdg_vp_vmcall_instruction_io(TDX_SHARED_MEM_TEST_INFO_PORT, 4,
> + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE,
> + &placeholder);
> + if (ret)
> + tdx_test_fatal_with_data(ret, __LINE__);
> +
> + /* Read value written by host and send it back out for verification */
> + ret = tdg_vp_vmcall_instruction_io(TDX_SHARED_MEM_TEST_INFO_PORT, 4,
> + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE,
> + (uint64_t *)test_mem_shared_gva);
> + if (ret)
> +		tdx_test_fatal_with_data(ret, __LINE__);
> +}
> +
> +int verify_shared_mem(void)
> +{
> + struct kvm_vm *vm;
> + struct kvm_vcpu *vcpu;
> +
> + vm_vaddr_t test_mem_private_gva;
> + uint32_t *test_mem_hva;
> +
> + vm = td_create();
> + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
> + vcpu = td_vcpu_add(vm, 0, guest_shared_mem);
> +
> + /*
> + * Set up shared memory page for testing by first allocating as private
> + * and then mapping the same GPA again as shared. This way, the TD does
> + * not have to remap its page tables at runtime.
> + */
> + test_mem_private_gva = vm_vaddr_alloc(vm, vm->page_size,
> + TDX_SHARED_MEM_TEST_PRIVATE_GVA);
> + TEST_ASSERT_EQ(test_mem_private_gva, TDX_SHARED_MEM_TEST_PRIVATE_GVA);
> +
> + test_mem_hva = addr_gva2hva(vm, test_mem_private_gva);
> + TEST_ASSERT(test_mem_hva != NULL,
> + "Guest address not found in guest memory regions\n");
> +
> + test_mem_private_gpa = addr_gva2gpa(vm, test_mem_private_gva);
> + virt_pg_map_shared(vm, TDX_SHARED_MEM_TEST_SHARED_GVA,
> + test_mem_private_gpa);

As mentioned in the comments in [PATCH 22/29], how about replacing
virt_pg_map_shared() with the following?

vm_phy_pages_conversion(); // new API to update protected_phy_pages
virt_pg_map(); // set s_bit according to protected_phy_pages
> +
> + test_mem_shared_gpa = test_mem_private_gpa | BIT_ULL(vm->pa_bits - 1);
> + sync_global_to_guest(vm, test_mem_private_gpa);
> + sync_global_to_guest(vm, test_mem_shared_gpa);
> +
> + td_finalize(vm);
> +
> + printf("Verifying shared memory accesses for TDX\n");
> +
> + /* Begin guest execution; guest writes to shared memory. */
> + printf("\t ... Starting guest execution\n");
> +
> + /* Handle map gpa as shared */
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> +
> + td_vcpu_run(vcpu);
> + TDX_TEST_ASSERT_IO(vcpu, TDX_SHARED_MEM_TEST_INFO_PORT, 4,
> + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
> + TEST_ASSERT_EQ(*test_mem_hva, TDX_SHARED_MEM_TEST_GUEST_WRITE_VALUE);
> +
> + *test_mem_hva = TDX_SHARED_MEM_TEST_HOST_WRITE_VALUE;
> + td_vcpu_run(vcpu);
> + TDX_TEST_ASSERT_IO(vcpu, TDX_SHARED_MEM_TEST_INFO_PORT, 4,
> + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
> + TEST_ASSERT_EQ(
> + *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset),
> + TDX_SHARED_MEM_TEST_HOST_WRITE_VALUE);
> +
> + printf("\t ... PASSED\n");
> +
> + kvm_vm_free(vm);
> +
> + return 0;
> +}
> +
> +int main(int argc, char **argv)
> +{
> + if (!is_tdx_enabled()) {
> + printf("TDX is not supported by the KVM\n"
> + "Skipping the TDX tests.\n");
> + return 0;
> + }
> +
> + return verify_shared_mem();
> +}
> --
> 2.43.0.472.g3155946c3a-goog
>

2024-03-16 06:25:22

by Chen, Zide

[permalink] [raw]
Subject: Re: [RFC PATCH v5 22/29] KVM: selftests: Add functions to allow mapping as shared



On 12/12/2023 12:47 PM, Sagi Shahar wrote:

>
>
> -----Original Message-----
> From: Sagi Shahar <[email protected]>
> Sent: Tuesday, December 12, 2023 12:47 PM
> To: [email protected]; Ackerley Tng <[email protected]>; Afranji, Ryan <[email protected]>; Aktas, Erdem <[email protected]>; Sagi Shahar <[email protected]>; Yamahata, Isaku <[email protected]>
> Cc: Sean Christopherson <[email protected]>; Paolo Bonzini <[email protected]>; Shuah Khan <[email protected]>; Peter Gonda <[email protected]>; Xu, Haibo1 <[email protected]>; Chao Peng <[email protected]>; Annapurve, Vishal <[email protected]>; Roger Wang <[email protected]>; Vipin Sharma <[email protected]>; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]
> Subject: [RFC PATCH v5 22/29] KVM: selftests: Add functions to allow mapping as shared

Since protected_phy_pages is introduced to keep track of the guest
memory's private/shared property, it's better to keep it consistent with
the guest mappings.

Instead of adding a set of new APIs to force mapping guest pages as
shared, how about updating the protected_phy_pages sparsebit right before
the mapping, and just calling the existing virt_pg_map() to do the mapping?


> From: Ackerley Tng <[email protected]>
>
> Signed-off-by: Ackerley Tng <[email protected]>
> Signed-off-by: Ryan Afranji <[email protected]>
> Signed-off-by: Sagi Shahar <[email protected]>
> ---
> .../selftests/kvm/include/kvm_util_base.h | 24 ++++++++++++++
> tools/testing/selftests/kvm/lib/kvm_util.c | 32 +++++++++++++++++++
> .../selftests/kvm/lib/x86_64/processor.c | 15 +++++++--
> 3 files changed, 69 insertions(+), 2 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
> index b353617fcdd1..efd7ae8abb20 100644
> --- a/tools/testing/selftests/kvm/include/kvm_util_base.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
> @@ -574,6 +574,8 @@ vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm);
>
> void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
> unsigned int npages);
> +void virt_map_shared(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
> + unsigned int npages);
> void *addr_gpa2hva(struct kvm_vm *vm, vm_paddr_t gpa);
> void *addr_gva2hva(struct kvm_vm *vm, vm_vaddr_t gva);
> vm_paddr_t addr_hva2gpa(struct kvm_vm *vm, void *hva);
> @@ -1034,6 +1036,28 @@ static inline void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr
> virt_arch_pg_map(vm, vaddr, paddr);
> }
>
> +/*
> + * VM Virtual Page Map as Shared
> + *
> + * Input Args:
> + * vm - Virtual Machine
> + * vaddr - VM Virtual Address
> + * paddr - VM Physical Address
> + * memslot - Memory region slot for new virtual translation tables
> + *
> + * Output Args: None
> + *
> + * Return: None
> + *
> + * Within @vm, creates a virtual translation for the page starting
> + * at @vaddr to the page starting at @paddr.
> + */
> +void virt_arch_pg_map_shared(struct kvm_vm *vm, uint64_t vaddr,
> +			     uint64_t paddr);
> +
> +static inline void virt_pg_map_shared(struct kvm_vm *vm, uint64_t vaddr,
> +				      uint64_t paddr)
> +{
> +	virt_arch_pg_map_shared(vm, vaddr, paddr);
> +}
>
> /*
> * Address Guest Virtual to Guest Physical
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index 4f1ae0f1eef0..28780fa1f0f2 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -1573,6 +1573,38 @@ void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
> }
> }
>
> +/*
> + * Map a range of VM virtual addresses to the VM's physical addresses as
> + * shared
> + *
> + * Input Args:
> + * vm - Virtual Machine
> + * vaddr - Virtual address to map
> + * paddr - VM Physical Address
> + * npages - The number of pages to map
> + *
> + * Output Args: None
> + *
> + * Return: None
> + *
> + * Within the VM given by @vm, creates a virtual translation for
> + * @npages starting at @vaddr to the page range starting at @paddr.
> + */
> +void virt_map_shared(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
> + unsigned int npages)
> +{
> + size_t page_size = vm->page_size;
> + size_t size = npages * page_size;
> +
> + TEST_ASSERT(vaddr + size > vaddr, "Vaddr overflow");
> + TEST_ASSERT(paddr + size > paddr, "Paddr overflow");
> +
> + while (npages--) {
> + virt_pg_map_shared(vm, vaddr, paddr);
> + vaddr += page_size;
> + paddr += page_size;
> + }
> +}
> +
> /*
> * Address VM Physical to Host Virtual
> *
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> index 566d82829da4..aa2a57ddb8d3 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> @@ -190,7 +190,8 @@ static uint64_t *virt_create_upper_pte(struct kvm_vm *vm,
> return pte;
> }
>
> -void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level)
> +static void ___virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
> + int level, bool protected)
> {
> const uint64_t pg_size = PG_LEVEL_SIZE(level);
> uint64_t *pml4e, *pdpe, *pde;
> @@ -235,17 +236,27 @@ void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr, int level)
> "PTE already present for 4k page at vaddr: 0x%lx\n", vaddr);
> *pte = PTE_PRESENT_MASK | PTE_WRITABLE_MASK | (paddr & PHYSICAL_PAGE_MASK);
>
> - if (vm_is_gpa_protected(vm, paddr))
> +	if (protected)
> 		*pte |= vm->arch.c_bit;
> 	else
> 		*pte |= vm->arch.s_bit;
> }
>
> +void __virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
> +		   int level)
> +{
> +	___virt_pg_map(vm, vaddr, paddr, level,
> +		       vm_is_gpa_protected(vm, paddr));
> +}
> +
> void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
> {
> 	__virt_pg_map(vm, vaddr, paddr, PG_LEVEL_4K);
> }
>
> +void virt_arch_pg_map_shared(struct kvm_vm *vm, uint64_t vaddr,
> +			     uint64_t paddr)
> +{
> +	___virt_pg_map(vm, vaddr, paddr, PG_LEVEL_4K, false);
> +}

Here, the guest mapping is created as shared regardless of the value of
protected_phy_pages, which could create confusion.

> +
> void virt_map_level(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
> uint64_t nr_bytes, int level)
> {
> --
> 2.43.0.472.g3155946c3a-goog
>

2024-03-21 22:54:57

by Zhang, Dongsheng X

[permalink] [raw]
Subject: Re: [RFC PATCH v5 05/29] KVM: selftests: Add helper functions to create TDX VMs



On 12/12/2023 12:46 PM, Sagi Shahar wrote:
> From: Erdem Aktas <[email protected]>
>
> TDX requires additional IOCTLs to initialize VM and vCPUs to add
> private memory and to finalize the VM memory. Also additional utility
> functions are provided to manipulate a TD, similar to those that
> manipulate a VM in the current selftest framework.
>
> A TD's initial register state cannot be manipulated directly by
> setting the VM's memory, hence boot code is provided at the TD's reset
> vector. This boot code takes boot parameters loaded in the TD's memory
> and sets up the TD for the selftest.
>
> Signed-off-by: Erdem Aktas <[email protected]>
> Signed-off-by: Ryan Afranji <[email protected]>
> Signed-off-by: Sagi Shahar <[email protected]>
> Co-developed-by: Ackerley Tng <[email protected]>
> Signed-off-by: Ackerley Tng <[email protected]>
> ---
> tools/testing/selftests/kvm/Makefile | 2 +
> .../kvm/include/x86_64/tdx/td_boot.h | 82 ++++
> .../kvm/include/x86_64/tdx/td_boot_asm.h | 16 +
> .../kvm/include/x86_64/tdx/tdx_util.h | 16 +
> .../selftests/kvm/lib/x86_64/tdx/td_boot.S | 101 ++++
> .../selftests/kvm/lib/x86_64/tdx/tdx_util.c | 434 ++++++++++++++++++
> 6 files changed, 651 insertions(+)
> create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/td_boot.h
> create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/td_boot_asm.h
> create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h
> create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/td_boot.S
> create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
>
> diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> index b11ac221aba4..a35150ab855f 100644
> --- a/tools/testing/selftests/kvm/Makefile
> +++ b/tools/testing/selftests/kvm/Makefile
> @@ -50,6 +50,8 @@ LIBKVM_x86_64 += lib/x86_64/svm.c
> LIBKVM_x86_64 += lib/x86_64/ucall.c
> LIBKVM_x86_64 += lib/x86_64/vmx.c
> LIBKVM_x86_64 += lib/x86_64/sev.c
> +LIBKVM_x86_64 += lib/x86_64/tdx/tdx_util.c
> +LIBKVM_x86_64 += lib/x86_64/tdx/td_boot.S
>
> LIBKVM_aarch64 += lib/aarch64/gic.c
> LIBKVM_aarch64 += lib/aarch64/gic_v3.c
> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/td_boot.h b/tools/testing/selftests/kvm/include/x86_64/tdx/td_boot.h
> new file mode 100644
> index 000000000000..148057e569d6
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/td_boot.h
> @@ -0,0 +1,82 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +#ifndef SELFTEST_TDX_TD_BOOT_H
> +#define SELFTEST_TDX_TD_BOOT_H
> +
> +#include <stdint.h>
> +#include "tdx/td_boot_asm.h"
> +
> +/*
> + * Layout for boot section (not to scale)
> + *
> + * GPA
> + * ┌─────────────────────────────┬──0x1_0000_0000 (4GB)
> + * │ Boot code trampoline │
> + * ├─────────────────────────────┼──0x0_ffff_fff0: Reset vector (16B below 4GB)
> + * │ Boot code │
> + * ├─────────────────────────────┼──td_boot will be copied here, so that the
> + * │ │ jmp to td_boot is exactly at the reset vector
> + * │ Empty space │
> + * │ │
> + * ├─────────────────────────────┤
> + * │ │
> + * │ │
> + * │ Boot parameters │
> + * │ │
> + * │ │
> + * └─────────────────────────────┴──0x0_ffff_0000: TD_BOOT_PARAMETERS_GPA
> + */
> +#define FOUR_GIGABYTES_GPA (4ULL << 30)
> +
> +/**
> + * The exact memory layout for LGDT or LIDT instructions.
> + */
> +struct __packed td_boot_parameters_dtr {
> + uint16_t limit;
> + uint32_t base;
> +};
> +
> +/**
> + * The exact layout in memory required for a ljmp, including the selector for
> + * changing code segment.
> + */
> +struct __packed td_boot_parameters_ljmp_target {
> + uint32_t eip_gva;
> + uint16_t code64_sel;
> +};
> +
> +/**
> + * Allows each vCPU to be initialized with different eip and esp.
> + */
> +struct __packed td_per_vcpu_parameters {
> + uint32_t esp_gva;
> + struct td_boot_parameters_ljmp_target ljmp_target;
> +};
> +
> +/**
> + * Boot parameters for the TD.
> + *
> + * Unlike a regular VM, we can't ask KVM to set registers such as esp, eip, etc
> + * before boot, so to run selftests, these registers' values have to be
> + * initialized by the TD.
> + *
> + * This struct is loaded in TD private memory at TD_BOOT_PARAMETERS_GPA.
> + *
> + * The TD boot code will read off parameters from this struct and set up the
> + * vcpu for executing selftests.
> + */
> +struct __packed td_boot_parameters {
> + uint32_t cr0;
> + uint32_t cr3;
> + uint32_t cr4;
> + struct td_boot_parameters_dtr gdtr;
> + struct td_boot_parameters_dtr idtr;
> + struct td_per_vcpu_parameters per_vcpu[];
> +};
> +
> +extern void td_boot(void);
> +extern void reset_vector(void);
> +extern void td_boot_code_end(void);
> +
> +#define TD_BOOT_CODE_SIZE (td_boot_code_end - td_boot)
> +
> +#endif /* SELFTEST_TDX_TD_BOOT_H */
> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/td_boot_asm.h b/tools/testing/selftests/kvm/include/x86_64/tdx/td_boot_asm.h
> new file mode 100644
> index 000000000000..0a07104f7deb
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/td_boot_asm.h
> @@ -0,0 +1,16 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +#ifndef SELFTEST_TDX_TD_BOOT_ASM_H
> +#define SELFTEST_TDX_TD_BOOT_ASM_H
> +
> +/*
> + * GPA where TD boot parameters wil lbe loaded.

Typo: "wil lbe" ==> "will be"

> + *
> + * TD_BOOT_PARAMETERS_GPA is arbitrarily chosen to
> + *
> + * + be within the 4GB address space
> + * + provide enough contiguous memory for the struct td_boot_parameters such
> + * that there is one struct td_per_vcpu_parameters for KVM_MAX_VCPUS
> + */
> +#define TD_BOOT_PARAMETERS_GPA 0xffff0000
> +
> +#endif // SELFTEST_TDX_TD_BOOT_ASM_H
> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h
> new file mode 100644
> index 000000000000..274b245f200b
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h
> @@ -0,0 +1,16 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +#ifndef SELFTESTS_TDX_KVM_UTIL_H
> +#define SELFTESTS_TDX_KVM_UTIL_H
> +
> +#include <stdint.h>
> +
> +#include "kvm_util_base.h"
> +
> +struct kvm_vcpu *td_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id, void *guest_code);
> +
> +struct kvm_vm *td_create(void);
> +void td_initialize(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
> + uint64_t attributes);
> +void td_finalize(struct kvm_vm *vm);
> +
> +#endif // SELFTESTS_TDX_KVM_UTIL_H
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/td_boot.S b/tools/testing/selftests/kvm/lib/x86_64/tdx/td_boot.S
> new file mode 100644
> index 000000000000..800e09264d4e
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/td_boot.S
> @@ -0,0 +1,101 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +
> +#include "tdx/td_boot_asm.h"
> +
> +/* Offsets for reading struct td_boot_parameters */
> +#define TD_BOOT_PARAMETERS_CR0 0
> +#define TD_BOOT_PARAMETERS_CR3 4
> +#define TD_BOOT_PARAMETERS_CR4 8
> +#define TD_BOOT_PARAMETERS_GDT 12
> +#define TD_BOOT_PARAMETERS_IDT 18
> +#define TD_BOOT_PARAMETERS_PER_VCPU 24
> +
> +/* Offsets for reading struct td_per_vcpu_parameters */
> +#define TD_PER_VCPU_PARAMETERS_ESP_GVA 0
> +#define TD_PER_VCPU_PARAMETERS_LJMP_TARGET 4
> +
> +#define SIZEOF_TD_PER_VCPU_PARAMETERS 10
> +
> +.code32
> +
> +.globl td_boot
> +td_boot:
> + /* In this procedure, edi is used as a temporary register */
> + cli
> +
> + /* Paging is off */
> +
> + movl $TD_BOOT_PARAMETERS_GPA, %ebx
> +
> + /*
> + * Find the address of struct td_per_vcpu_parameters for this
> + * vCPU based on esi (TDX spec: initialized with vcpu id). Put
> + * struct address into register for indirect addressing
> + */
> + movl $SIZEOF_TD_PER_VCPU_PARAMETERS, %eax
> + mul %esi
> + leal TD_BOOT_PARAMETERS_PER_VCPU(%ebx), %edi
> + addl %edi, %eax
> +
> + /* Setup stack */
> + movl TD_PER_VCPU_PARAMETERS_ESP_GVA(%eax), %esp
> +
> + /* Setup GDT */
> + leal TD_BOOT_PARAMETERS_GDT(%ebx), %edi
> + lgdt (%edi)
> +
> + /* Setup IDT */
> + leal TD_BOOT_PARAMETERS_IDT(%ebx), %edi
> + lidt (%edi)
> +
> + /*
> + * Set up control registers (There are no instructions to
> + * mov from memory to control registers, hence we need to use ebx
> + * as a scratch register)
> + */
> + movl TD_BOOT_PARAMETERS_CR4(%ebx), %edi
> + movl %edi, %cr4
> + movl TD_BOOT_PARAMETERS_CR3(%ebx), %edi
> + movl %edi, %cr3
> + movl TD_BOOT_PARAMETERS_CR0(%ebx), %edi
> + movl %edi, %cr0
> +
> + /* Paging is on after setting the most significant bit on cr0 */
> +
> + /*
> + * Jump to selftest guest code. Far jumps read <segment
> + * selector:new eip> from <addr+4:addr>. This location has
> + * already been set up in boot parameters, and we can read boot
> + * parameters because boot code and boot parameters are loaded so
> + * that GVA and GPA are mapped 1:1.
> + */
> + ljmp *TD_PER_VCPU_PARAMETERS_LJMP_TARGET(%eax)
> +
> +.globl reset_vector
> +reset_vector:
> + jmp td_boot
> + /*
> + * Pad reset_vector to its full size of 16 bytes so that this
> + * can be loaded with the end of reset_vector aligned to GPA=4G
> + */
> + int3
> + int3
> + int3
> + int3
> + int3
> + int3
> + int3
> + int3
> + int3
> + int3
> + int3
> + int3
> + int3
> + int3
> +
> +/* Leave marker so size of td_boot code can be computed */
> +.globl td_boot_code_end
> +td_boot_code_end:
> +
> +/* Disable executable stack */
> +.section .note.GNU-stack,"",%progbits
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
> new file mode 100644
> index 000000000000..9b69c733ce01
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
> @@ -0,0 +1,434 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +
> +#define _GNU_SOURCE
> +#include <asm/kvm.h>
> +#include <asm/kvm_host.h>
> +#include <errno.h>
> +#include <linux/kvm.h>
> +#include <stdint.h>
> +#include <sys/ioctl.h>
> +
> +#include "kvm_util.h"
> +#include "test_util.h"
> +#include "tdx/td_boot.h"
> +#include "kvm_util_base.h"
> +#include "processor.h"
> +
> +/*
> + * TDX ioctls
> + */
> +
> +static char *tdx_cmd_str[] = {
> + "KVM_TDX_CAPABILITIES",
> + "KVM_TDX_INIT_VM",
> + "KVM_TDX_INIT_VCPU",
> + "KVM_TDX_INIT_MEM_REGION",
> + "KVM_TDX_FINALIZE_VM"
> +};
> +#define TDX_MAX_CMD_STR (ARRAY_SIZE(tdx_cmd_str))
> +
> +static void tdx_ioctl(int fd, int ioctl_no, uint32_t flags, void *data)
> +{
> + struct kvm_tdx_cmd tdx_cmd;
> + int r;
> +
> + TEST_ASSERT(ioctl_no < TDX_MAX_CMD_STR, "Unknown TDX CMD : %d\n",
> + ioctl_no);
> +
> + memset(&tdx_cmd, 0x0, sizeof(tdx_cmd));
> + tdx_cmd.id = ioctl_no;
> + tdx_cmd.flags = flags;
> + tdx_cmd.data = (uint64_t)data;
> +
> + r = ioctl(fd, KVM_MEMORY_ENCRYPT_OP, &tdx_cmd);
> + TEST_ASSERT(r == 0, "%s failed: %d %d", tdx_cmd_str[ioctl_no], r,
> + errno);
> +}
> +
> +#define XFEATURE_MASK_CET (XFEATURE_MASK_CET_USER | XFEATURE_MASK_CET_KERNEL)
> +
> +static void tdx_apply_cpuid_restrictions(struct kvm_cpuid2 *cpuid_data)
> +{
> + for (int i = 0; i < cpuid_data->nent; i++) {
> + struct kvm_cpuid_entry2 *e = &cpuid_data->entries[i];
> +
> + if (e->function == 0xd && e->index == 0) {
> + /*
> + * TDX module requires both XTILE_{CFG, DATA} to be set.
> + * Both bits are required for AMX to be functional.
> + */
> + if ((e->eax & XFEATURE_MASK_XTILE) !=
> + XFEATURE_MASK_XTILE) {
> + e->eax &= ~XFEATURE_MASK_XTILE;
> + }
> + }
> + if (e->function == 0xd && e->index == 1) {
> + /*
> + * TDX doesn't support LBR yet.
> + * Disable bits from the XCR0 register.
> + */
> + e->ecx &= ~XFEATURE_MASK_LBR;
> + /*
> + * TDX modules requires both CET_{U, S} to be set even
> + * if only one is supported.
> + */
> + if (e->ecx & XFEATURE_MASK_CET)
> + e->ecx |= XFEATURE_MASK_CET;
> + }
> + }
> +}
> +
> +static void tdx_td_init(struct kvm_vm *vm, uint64_t attributes)
> +{
> + const struct kvm_cpuid2 *cpuid;
> + struct kvm_tdx_init_vm *init_vm;
> +
> + cpuid = kvm_get_supported_cpuid();
> +
> + init_vm = malloc(sizeof(*init_vm) +
> + sizeof(init_vm->cpuid.entries[0]) * cpuid->nent);

A sanity check on the allocation can be added here, like the following:
TEST_ASSERT(init_vm, "vm allocation failed");

> +
> + memset(init_vm, 0, sizeof(*init_vm));

Can use calloc instead (keeping the extra space for the cpuid entries):
init_vm = calloc(1, sizeof(*init_vm) +
		 sizeof(init_vm->cpuid.entries[0]) * cpuid->nent);


> + memcpy(&init_vm->cpuid, cpuid, kvm_cpuid2_size(cpuid->nent));
> +
> + init_vm->attributes = attributes;
> +
> + tdx_apply_cpuid_restrictions(&init_vm->cpuid);
> +
> + tdx_ioctl(vm->fd, KVM_TDX_INIT_VM, 0, init_vm);
> +}
> +
> +static void tdx_td_vcpu_init(struct kvm_vcpu *vcpu)
> +{
> + const struct kvm_cpuid2 *cpuid = kvm_get_supported_cpuid();
> +
> + vcpu_init_cpuid(vcpu, cpuid);
> + tdx_ioctl(vcpu->fd, KVM_TDX_INIT_VCPU, 0, NULL);
> +}
> +
> +static void tdx_init_mem_region(struct kvm_vm *vm, void *source_pages,
> + uint64_t gpa, uint64_t size)
> +{
> + struct kvm_tdx_init_mem_region mem_region = {
> + .source_addr = (uint64_t)source_pages,
> + .gpa = gpa,
> + .nr_pages = size / PAGE_SIZE,
> + };
> + uint32_t metadata = KVM_TDX_MEASURE_MEMORY_REGION;
> +
> + TEST_ASSERT((mem_region.nr_pages > 0) &&
> + ((mem_region.nr_pages * PAGE_SIZE) == size),
> + "Cannot add partial pages to the guest memory.\n");
> + TEST_ASSERT(((uint64_t)source_pages & (PAGE_SIZE - 1)) == 0,
> + "Source memory buffer is not page aligned\n");
> + tdx_ioctl(vm->fd, KVM_TDX_INIT_MEM_REGION, metadata, &mem_region);
> +}
> +
> +static void tdx_td_finalizemr(struct kvm_vm *vm)
> +{
> + tdx_ioctl(vm->fd, KVM_TDX_FINALIZE_VM, 0, NULL);
> +}
> +

Nit: tdx_td_finalizemr ==> tdx_td_finalize_mr

> +/*
> + * TD creation/setup/finalization
> + */
> +
> +static void tdx_enable_capabilities(struct kvm_vm *vm)
> +{
> + int rc;
> +
> + rc = kvm_check_cap(KVM_CAP_X2APIC_API);
> + TEST_ASSERT(rc, "TDX: KVM_CAP_X2APIC_API is not supported!");
> + rc = kvm_check_cap(KVM_CAP_SPLIT_IRQCHIP);
> + TEST_ASSERT(rc, "TDX: KVM_CAP_SPLIT_IRQCHIP is not supported!");
> +
> + vm_enable_cap(vm, KVM_CAP_X2APIC_API,
> + KVM_X2APIC_API_USE_32BIT_IDS |
> + KVM_X2APIC_API_DISABLE_BROADCAST_QUIRK);
> + vm_enable_cap(vm, KVM_CAP_SPLIT_IRQCHIP, 24);
> +}
> +
> +static void tdx_configure_memory_encryption(struct kvm_vm *vm)
> +{
> + /* Configure shared/enCrypted bit for this VM according to TDX spec */
> + vm->arch.s_bit = 1ULL << (vm->pa_bits - 1);
> + vm->arch.c_bit = 0;
> + /* Set gpa_protected_mask so that tagging/untagging of GPAs works */
> + vm->gpa_protected_mask = vm->arch.s_bit;
> + /* This VM is protected (has memory encryption) */
> + vm->protected = true;
> +}
> +
> +static void tdx_apply_cr4_restrictions(struct kvm_sregs *sregs)
> +{
> + /* TDX spec 11.6.2: CR4 bit MCE is fixed to 1 */
> + sregs->cr4 |= X86_CR4_MCE;
> +
> + /* Set this because UEFI also sets this up, to handle XMM exceptions */
> + sregs->cr4 |= X86_CR4_OSXMMEXCPT;
> +
> + /* TDX spec 11.6.2: CR4 bit VMXE and SMXE are fixed to 0 */
> + sregs->cr4 &= ~(X86_CR4_VMXE | X86_CR4_SMXE);
> +}
> +
> +static void load_td_boot_code(struct kvm_vm *vm)
> +{
> + void *boot_code_hva = addr_gpa2hva(vm, FOUR_GIGABYTES_GPA - TD_BOOT_CODE_SIZE);
> +
> + TEST_ASSERT(td_boot_code_end - reset_vector == 16,
> + "The reset vector must be 16 bytes in size.");
> + memcpy(boot_code_hva, td_boot, TD_BOOT_CODE_SIZE);
> +}
> +
> +static void load_td_per_vcpu_parameters(struct td_boot_parameters *params,
> + struct kvm_sregs *sregs,
> + struct kvm_vcpu *vcpu,
> + void *guest_code)
> +{
> + /* Store vcpu_index to match what the TDX module would store internally */
> + static uint32_t vcpu_index;
> +
> + struct td_per_vcpu_parameters *vcpu_params = &params->per_vcpu[vcpu_index];

I think we can use vcpu->id in place of vcpu_index in this function, thus removing vcpu_index

> +
> + TEST_ASSERT(vcpu->initial_stack_addr != 0,
> + "initial stack address should not be 0");
> + TEST_ASSERT(vcpu->initial_stack_addr <= 0xffffffff,
> + "initial stack address must fit in 32 bits");
> + TEST_ASSERT((uint64_t)guest_code <= 0xffffffff,
> + "guest_code must fit in 32 bits");
> + TEST_ASSERT(sregs->cs.selector != 0, "cs.selector should not be 0");
> +
> + vcpu_params->esp_gva = (uint32_t)(uint64_t)vcpu->initial_stack_addr;
> + vcpu_params->ljmp_target.eip_gva = (uint32_t)(uint64_t)guest_code;
> + vcpu_params->ljmp_target.code64_sel = sregs->cs.selector;
> +
> + vcpu_index++;
> +}
> +
> +static void load_td_common_parameters(struct td_boot_parameters *params,
> + struct kvm_sregs *sregs)
> +{
> + /* Set parameters! */
> + params->cr0 = sregs->cr0;
> + params->cr3 = sregs->cr3;
> + params->cr4 = sregs->cr4;
> + params->gdtr.limit = sregs->gdt.limit;
> + params->gdtr.base = sregs->gdt.base;
> + params->idtr.limit = sregs->idt.limit;
> + params->idtr.base = sregs->idt.base;
> +
> + TEST_ASSERT(params->cr0 != 0, "cr0 should not be 0");
> + TEST_ASSERT(params->cr3 != 0, "cr3 should not be 0");
> + TEST_ASSERT(params->cr4 != 0, "cr4 should not be 0");
> + TEST_ASSERT(params->gdtr.base != 0, "gdt base address should not be 0");
> +}
> +
> +static void load_td_boot_parameters(struct td_boot_parameters *params,
> + struct kvm_vcpu *vcpu, void *guest_code)
> +{
> + struct kvm_sregs sregs;
> +
> + /* Assemble parameters in sregs */
> + memset(&sregs, 0, sizeof(struct kvm_sregs));
> + vcpu_setup_mode_sregs(vcpu->vm, &sregs);
> + tdx_apply_cr4_restrictions(&sregs);
> + kvm_setup_idt(vcpu->vm, &sregs.idt);
> +
> + if (!params->cr0)
> + load_td_common_parameters(params, &sregs);
> +
> + load_td_per_vcpu_parameters(params, &sregs, vcpu, guest_code);
> +}
> +
> +/**
> + * Adds a vCPU to a TD (Trust Domain) with minimum defaults. It will not set
> + * up any general-purpose registers, as they will be initialized by the TDX
> + * module. In TDX, the vCPU's RIP is set to 0xFFFFFFF0. See the Intel TDX EAS
> + * section "Initial State of Guest GPRs" for more information on the vCPU's
> + * initial register values when entering the TD for the first time.
> + *
> + * Input Args:
> + * vm - Virtual Machine
> + * vcpu_id - The id of the vCPU to add to the VM.
> + * guest_code - Guest code to be run by the vCPU.
> + */
> +struct kvm_vcpu *td_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id, void *guest_code)
> +{
> + struct kvm_vcpu *vcpu;
> +
> + /*
> + * TD setup will not use the value of rip set in vm_vcpu_add anyway, so
> + * NULL can be used for guest_code.
> + */
> + vcpu = vm_vcpu_add(vm, vcpu_id, NULL);
> +
> + tdx_td_vcpu_init(vcpu);
> +
> + load_td_boot_parameters(addr_gpa2hva(vm, TD_BOOT_PARAMETERS_GPA),
> + vcpu, guest_code);
> +
> + return vcpu;
> +}
> +
> +/**
> + * Iterate over set ranges within sparsebit @s. In each iteration,
> + * @range_begin and @range_end will take the beginning and end of the set range,
> + * which are of type sparsebit_idx_t.
> + *
> + * For example, if the range [3, 7] (inclusive) is set, within the iteration,
> + * @range_begin will take the value 3 and @range_end will take the value 7.
> + *
> + * Ensure that there is at least one bit set before using this macro, e.g. by
> + * checking with sparsebit_any_set(), because sparsebit_first_set() will abort
> + * if none are set.
> + */
> +#define sparsebit_for_each_set_range(s, range_begin, range_end) \
> + for (range_begin = sparsebit_first_set(s), \
> + range_end = sparsebit_next_clear(s, range_begin) - 1; \
> + range_begin && range_end; \
> + range_begin = sparsebit_next_set(s, range_end), \
> + range_end = sparsebit_next_clear(s, range_begin) - 1)
> +/*
> + * sparsebit_next_clear() can return 0 if [x, 2**64-1] are all set, and the -1
> + * would then cause an underflow back to 2**64 - 1. This is expected and
> + * correct.
> + *
> + * If the last range in the sparsebit is [x, y] and we try to iterate,
> + * sparsebit_next_set() will return 0, and sparsebit_next_clear() will try to
> + * find the first range, but that's correct because the condition expression
> + * would cause us to quit the loop.
> + */
> +
> +static void load_td_memory_region(struct kvm_vm *vm,
> + struct userspace_mem_region *region)
> +{
> + const struct sparsebit *pages = region->protected_phy_pages;
> + const uint64_t hva_base = region->region.userspace_addr;
> + const vm_paddr_t gpa_base = region->region.guest_phys_addr;
> + const sparsebit_idx_t lowest_page_in_region = gpa_base >>
> + vm->page_shift;
> +
> + sparsebit_idx_t i;
> + sparsebit_idx_t j;
> +
> + if (!sparsebit_any_set(pages))
> + return;
> +
> + sparsebit_for_each_set_range(pages, i, j) {
> + const uint64_t size_to_load = (j - i + 1) * vm->page_size;
> + const uint64_t offset =
> + (i - lowest_page_in_region) * vm->page_size;
> + const uint64_t hva = hva_base + offset;
> + const uint64_t gpa = gpa_base + offset;
> + void *source_addr;
> +
> + /*
> + * The KVM_TDX_INIT_MEM_REGION ioctl cannot encrypt memory in place,
> + * hence we have to make a copy when there is only one backing
> + * memory source.
> + */
> + source_addr = mmap(NULL, size_to_load, PROT_READ | PROT_WRITE,
> + MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
> + TEST_ASSERT(
> + source_addr != MAP_FAILED,
> + "Could not allocate memory for loading memory region");
> +
> + memcpy(source_addr, (void *)hva, size_to_load);
> +
> + tdx_init_mem_region(vm, source_addr, gpa, size_to_load);
> +
> + munmap(source_addr, size_to_load);
> + }
> +}
> +
> +static void load_td_private_memory(struct kvm_vm *vm)
> +{
> + int ctr;
> + struct userspace_mem_region *region;
> +
> + hash_for_each(vm->regions.slot_hash, ctr, region, slot_node) {
> + load_td_memory_region(vm, region);
> + }
> +}
> +
> +struct kvm_vm *td_create(void)
> +{
> + struct vm_shape shape;
> +
> + shape.mode = VM_MODE_DEFAULT;
> + shape.type = KVM_X86_TDX_VM;
> + return ____vm_create(shape);

Nit:
Init shape to 0s:
struct vm_shape shape = {};

Pass a pointer to shape to ____vm_create() instead:
____vm_create(&shape)

> +}
> +
> +static void td_setup_boot_code(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type)
> +{
> + vm_vaddr_t addr;
> + size_t boot_code_allocation = round_up(TD_BOOT_CODE_SIZE, PAGE_SIZE);
> + vm_paddr_t boot_code_base_gpa = FOUR_GIGABYTES_GPA - boot_code_allocation;
> + size_t npages = DIV_ROUND_UP(boot_code_allocation, PAGE_SIZE);
> +
> + vm_userspace_mem_region_add(vm, src_type, boot_code_base_gpa, 1, npages,
> + KVM_MEM_PRIVATE);
> + addr = vm_vaddr_alloc_1to1(vm, boot_code_allocation, boot_code_base_gpa, 1);
> + TEST_ASSERT_EQ(addr, boot_code_base_gpa);
> +
> + load_td_boot_code(vm);
> +}
> +
> +static size_t td_boot_parameters_size(void)
> +{
> + int max_vcpus = kvm_check_cap(KVM_CAP_MAX_VCPUS);
> + size_t total_per_vcpu_parameters_size =
> + max_vcpus * sizeof(struct td_per_vcpu_parameters);
> +
> + return sizeof(struct td_boot_parameters) + total_per_vcpu_parameters_size;
> +}
> +
> +static void td_setup_boot_parameters(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type)
> +{
> + vm_vaddr_t addr;
> + size_t boot_params_size = td_boot_parameters_size();
> + int npages = DIV_ROUND_UP(boot_params_size, PAGE_SIZE);
> + size_t total_size = npages * PAGE_SIZE;
> +
> + vm_userspace_mem_region_add(vm, src_type, TD_BOOT_PARAMETERS_GPA, 2,
> + npages, KVM_MEM_PRIVATE);
> + addr = vm_vaddr_alloc_1to1(vm, total_size, TD_BOOT_PARAMETERS_GPA, 2);
> + TEST_ASSERT_EQ(addr, TD_BOOT_PARAMETERS_GPA);
> +}
> +
> +void td_initialize(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
> + uint64_t attributes)
> +{
> + uint64_t nr_pages_required;
> +
> + tdx_enable_capabilities(vm);
> +
> + tdx_configure_memory_encryption(vm);
> +
> + tdx_td_init(vm, attributes);
> +
> + nr_pages_required = vm_nr_pages_required(VM_MODE_DEFAULT, 1, 0);
> +
> + /*
> + * Add memory (add 0th memslot) for TD. This will be used to setup the
> + * CPU (provide stack space for the CPU) and to load the elf file.
> + */
> + vm_userspace_mem_region_add(vm, src_type, 0, 0, nr_pages_required,
> + KVM_MEM_PRIVATE);
> +
> + kvm_vm_elf_load(vm, program_invocation_name);
> +
> + vm_init_descriptor_tables(vm);
> +
> + td_setup_boot_code(vm, src_type);
> + td_setup_boot_parameters(vm, src_type);
> +}
> +
> +void td_finalize(struct kvm_vm *vm)
> +{
> + sync_exception_handlers_to_guest(vm);
> +
> + load_td_private_memory(vm);
> +
> + tdx_td_finalizemr(vm);
> +}

2024-03-21 23:21:00

by Zhang, Dongsheng X

[permalink] [raw]
Subject: Re: [RFC PATCH v5 08/29] KVM: selftests: TDX: Add TDX lifecycle test



On 12/12/2023 12:46 PM, Sagi Shahar wrote:
> From: Erdem Aktas <[email protected]>
>
> Add a test to verify the TDX lifecycle by creating a TD and running a
> dummy TDG.VP.VMCALL <Instruction.IO> inside it.
>
> Signed-off-by: Erdem Aktas <[email protected]>
> Signed-off-by: Ryan Afranji <[email protected]>
> Signed-off-by: Sagi Shahar <[email protected]>
> Co-developed-by: Ackerley Tng <[email protected]>
> Signed-off-by: Ackerley Tng <[email protected]>
> ---
> tools/testing/selftests/kvm/Makefile | 4 +
> .../selftests/kvm/include/x86_64/tdx/tdcall.h | 35 ++++++++
> .../selftests/kvm/include/x86_64/tdx/tdx.h | 12 +++
> .../kvm/include/x86_64/tdx/test_util.h | 52 +++++++++++
> .../selftests/kvm/lib/x86_64/tdx/tdcall.S | 90 +++++++++++++++++++
> .../selftests/kvm/lib/x86_64/tdx/tdx.c | 27 ++++++
> .../selftests/kvm/lib/x86_64/tdx/tdx_util.c | 1 +
> .../selftests/kvm/lib/x86_64/tdx/test_util.c | 34 +++++++
> .../selftests/kvm/x86_64/tdx_vm_tests.c | 45 ++++++++++
> 9 files changed, 300 insertions(+)
> create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
> create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
> create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/tdcall.S
> create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c
> create mode 100644 tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
>
> diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> index a35150ab855f..80d4a50eeb9f 100644
> --- a/tools/testing/selftests/kvm/Makefile
> +++ b/tools/testing/selftests/kvm/Makefile
> @@ -52,6 +52,9 @@ LIBKVM_x86_64 += lib/x86_64/vmx.c
> LIBKVM_x86_64 += lib/x86_64/sev.c
> LIBKVM_x86_64 += lib/x86_64/tdx/tdx_util.c
> LIBKVM_x86_64 += lib/x86_64/tdx/td_boot.S
> +LIBKVM_x86_64 += lib/x86_64/tdx/tdcall.S
> +LIBKVM_x86_64 += lib/x86_64/tdx/tdx.c
> +LIBKVM_x86_64 += lib/x86_64/tdx/test_util.c
>
> LIBKVM_aarch64 += lib/aarch64/gic.c
> LIBKVM_aarch64 += lib/aarch64/gic_v3.c
> @@ -152,6 +155,7 @@ TEST_GEN_PROGS_x86_64 += set_memory_region_test
> TEST_GEN_PROGS_x86_64 += steal_time
> TEST_GEN_PROGS_x86_64 += kvm_binary_stats_test
> TEST_GEN_PROGS_x86_64 += system_counter_offset_test
> +TEST_GEN_PROGS_x86_64 += x86_64/tdx_vm_tests
>
> # Compiled outputs used by test targets
> TEST_GEN_PROGS_EXTENDED_x86_64 += x86_64/nx_huge_pages_test
> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
> new file mode 100644
> index 000000000000..78001bfec9c8
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
> @@ -0,0 +1,35 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/* Adapted from arch/x86/include/asm/shared/tdx.h */
> +
> +#ifndef SELFTESTS_TDX_TDCALL_H
> +#define SELFTESTS_TDX_TDCALL_H
> +
> +#include <linux/bits.h>
> +#include <linux/types.h>
> +
> +#define TDG_VP_VMCALL_INSTRUCTION_IO_READ 0
> +#define TDG_VP_VMCALL_INSTRUCTION_IO_WRITE 1

Nit:
Perhaps we could define the following in test_util.c instead?
/* Port I/O direction */
#define PORT_READ 0
#define PORT_WRITE 1

Then use them in place of TDG_VP_VMCALL_INSTRUCTION_IO_READ/TDG_VP_VMCALL_INSTRUCTION_IO_WRITE,
which are rather long.

> +
> +#define TDX_HCALL_HAS_OUTPUT BIT(0)
> +
> +#define TDX_HYPERCALL_STANDARD 0
> +
> +/*
> + * Used in __tdx_hypercall() to pass down and get back registers' values of
> + * the TDCALL instruction when requesting services from the VMM.
> + *
> + * This is a software only structure and not part of the TDX module/VMM ABI.
> + */
> +struct tdx_hypercall_args {
> + u64 r10;
> + u64 r11;
> + u64 r12;
> + u64 r13;
> + u64 r14;
> + u64 r15;
> +};
> +
> +/* Used to request services from the VMM */
> +u64 __tdx_hypercall(struct tdx_hypercall_args *args, unsigned long flags);
> +
> +#endif // SELFTESTS_TDX_TDCALL_H
> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> new file mode 100644
> index 000000000000..a7161efe4ee2
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> @@ -0,0 +1,12 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +#ifndef SELFTEST_TDX_TDX_H
> +#define SELFTEST_TDX_TDX_H
> +
> +#include <stdint.h>
> +
> +#define TDG_VP_VMCALL_INSTRUCTION_IO 30

Nit:
arch/x86/include/uapi/asm/vmx.h already exports the following define:
#define EXIT_REASON_IO_INSTRUCTION 30

Linux kernel example (arch/x86/coco/tdx/tdx.c):
static bool handle_in(struct pt_regs *regs, int size, int port)
{
struct tdx_module_args args = {
.r10 = TDX_HYPERCALL_STANDARD,
.r11 = hcall_func(EXIT_REASON_IO_INSTRUCTION),
.r12 = size,
.r13 = PORT_READ,
.r14 = port,
};

So, just like the kernel, we can use EXIT_REASON_IO_INSTRUCTION here in place of TDG_VP_VMCALL_INSTRUCTION_IO;
we just need a '#include "vmx.h"' or '#include <asm/vmx.h>' to bring in the define.

> +
> +uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
> + uint64_t write, uint64_t *data);
> +
> +#endif // SELFTEST_TDX_TDX_H
> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
> new file mode 100644
> index 000000000000..b570b6d978ff
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
> @@ -0,0 +1,52 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +#ifndef SELFTEST_TDX_TEST_UTIL_H
> +#define SELFTEST_TDX_TEST_UTIL_H
> +
> +#include <stdbool.h>
> +
> +#include "tdcall.h"
> +
> +#define TDX_TEST_SUCCESS_PORT 0x30
> +#define TDX_TEST_SUCCESS_SIZE 4
> +
> +/**
> + * Assert that tdx_test_success() was called in the guest.
> + */
> +#define TDX_TEST_ASSERT_SUCCESS(VCPU) \
> + (TEST_ASSERT( \
> + ((VCPU)->run->exit_reason == KVM_EXIT_IO) && \
> + ((VCPU)->run->io.port == TDX_TEST_SUCCESS_PORT) && \
> + ((VCPU)->run->io.size == TDX_TEST_SUCCESS_SIZE) && \
> + ((VCPU)->run->io.direction == \
> + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE), \
> + "Unexpected exit values while waiting for test completion: %u (%s) %d %d %d\n", \
> + (VCPU)->run->exit_reason, \
> + exit_reason_str((VCPU)->run->exit_reason), \
> + (VCPU)->run->io.port, (VCPU)->run->io.size, \
> + (VCPU)->run->io.direction))
> +
> +/**
> + * Run a test in a new process.
> + *
> + * There might be multiple tests running, and if one test fails, it will
> + * prevent subsequent tests from running because TEST_ASSERT aborts the
> + * process. The run_in_new_process function runs a test in a new process
> + * context and waits for it to finish or fail, preventing TEST_ASSERT from
> + * killing the main testing process.
> + */
> +void run_in_new_process(void (*func)(void));
> +
> +/**
> + * Verify that TDX is supported by KVM.
> + */
> +bool is_tdx_enabled(void);
> +
> +/**
> + * Report test success to userspace.
> + *
> + * Use TDX_TEST_ASSERT_SUCCESS() to assert that this function was called in the
> + * guest.
> + */
> +void tdx_test_success(void);
> +
> +#endif // SELFTEST_TDX_TEST_UTIL_H
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdcall.S b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdcall.S
> new file mode 100644
> index 000000000000..df9c1ed4bb2d
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdcall.S
> @@ -0,0 +1,90 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/* Adapted from arch/x86/coco/tdx/tdcall.S */
> +
> +#define TDX_HYPERCALL_r10 0 /* offsetof(struct tdx_hypercall_args, r10) */
> +#define TDX_HYPERCALL_r11 8 /* offsetof(struct tdx_hypercall_args, r11) */
> +#define TDX_HYPERCALL_r12 16 /* offsetof(struct tdx_hypercall_args, r12) */
> +#define TDX_HYPERCALL_r13 24 /* offsetof(struct tdx_hypercall_args, r13) */
> +#define TDX_HYPERCALL_r14 32 /* offsetof(struct tdx_hypercall_args, r14) */
> +#define TDX_HYPERCALL_r15 40 /* offsetof(struct tdx_hypercall_args, r15) */
> +
> +/*
> + * Bitmasks of exposed registers (with VMM).
> + */
> +#define TDX_R10 0x400
> +#define TDX_R11 0x800
> +#define TDX_R12 0x1000
> +#define TDX_R13 0x2000
> +#define TDX_R14 0x4000
> +#define TDX_R15 0x8000
> +
> +#define TDX_HCALL_HAS_OUTPUT 0x1
> +
> +/*
> + * These registers are clobbered to hold arguments for each
> + * TDVMCALL. They are safe to expose to the VMM.
> + * Each bit in this mask represents a register ID. Bit field
> + * details can be found in TDX GHCI specification, section
> + * titled "TDCALL [TDG.VP.VMCALL] leaf".
> + */
> +#define TDVMCALL_EXPOSE_REGS_MASK ( TDX_R10 | TDX_R11 | \
> + TDX_R12 | TDX_R13 | \
> + TDX_R14 | TDX_R15 )
> +
> +.code64
> +.section .text
> +
> +.globl __tdx_hypercall
> +.type __tdx_hypercall, @function
> +__tdx_hypercall:
> + /* Set up stack frame */
> + push %rbp
> + movq %rsp, %rbp
> +
> + /* Save callee-saved GPRs as mandated by the x86_64 ABI */
> + push %r15
> + push %r14
> + push %r13
> + push %r12
> +
> + /* Mangle function call ABI into TDCALL ABI: */
> + /* Set TDCALL leaf ID (TDVMCALL (0)) in RAX */
> + xor %eax, %eax
> +
> + /* Copy hypercall registers from arg struct: */
> + movq TDX_HYPERCALL_r10(%rdi), %r10
> + movq TDX_HYPERCALL_r11(%rdi), %r11
> + movq TDX_HYPERCALL_r12(%rdi), %r12
> + movq TDX_HYPERCALL_r13(%rdi), %r13
> + movq TDX_HYPERCALL_r14(%rdi), %r14
> + movq TDX_HYPERCALL_r15(%rdi), %r15
> +
> + movl $TDVMCALL_EXPOSE_REGS_MASK, %ecx
> +
> + tdcall
> +
> + /* TDVMCALL leaf return code is in R10 */
> + movq %r10, %rax
> +
> + /* Copy hypercall result registers to arg struct if needed */
> + testq $TDX_HCALL_HAS_OUTPUT, %rsi
> + jz .Lout
> +
> + movq %r10, TDX_HYPERCALL_r10(%rdi)
> + movq %r11, TDX_HYPERCALL_r11(%rdi)
> + movq %r12, TDX_HYPERCALL_r12(%rdi)
> + movq %r13, TDX_HYPERCALL_r13(%rdi)
> + movq %r14, TDX_HYPERCALL_r14(%rdi)
> + movq %r15, TDX_HYPERCALL_r15(%rdi)
> +.Lout:
> + /* Restore callee-saved GPRs as mandated by the x86_64 ABI */
> + pop %r12
> + pop %r13
> + pop %r14
> + pop %r15
> +
> + pop %rbp
> + ret
> +
> +/* Disable executable stack */
> +.section .note.GNU-stack,"",%progbits
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> new file mode 100644
> index 000000000000..c2414523487a
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> @@ -0,0 +1,27 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +
> +#include "tdx/tdcall.h"
> +#include "tdx/tdx.h"
> +
> +uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
> + uint64_t write, uint64_t *data)
> +{
> + uint64_t ret;
> + struct tdx_hypercall_args args = {
> + .r10 = TDX_HYPERCALL_STANDARD,
> + .r11 = TDG_VP_VMCALL_INSTRUCTION_IO,
> + .r12 = size,
> + .r13 = write,
> + .r14 = port,
> + };
> +
> + if (write)
> + args.r15 = *data;
> +
> + ret = __tdx_hypercall(&args, write ? 0 : TDX_HCALL_HAS_OUTPUT);
> +
> + if (!write)
> + *data = args.r11;
> +
> + return ret;
> +}
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
> index 063ff486fb86..b302060049d5 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
> @@ -224,6 +224,7 @@ static void tdx_enable_capabilities(struct kvm_vm *vm)
> KVM_X2APIC_API_USE_32BIT_IDS |
> KVM_X2APIC_API_DISABLE_BROADCAST_QUIRK);
> vm_enable_cap(vm, KVM_CAP_SPLIT_IRQCHIP, 24);
> + vm_enable_cap(vm, KVM_CAP_MAX_VCPUS, 512);
> }
>
> static void tdx_configure_memory_encryption(struct kvm_vm *vm)
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c
> new file mode 100644
> index 000000000000..6905d0ca3877
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c
> @@ -0,0 +1,34 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +
> +#include <stdbool.h>
> +#include <stdint.h>
> +#include <stdlib.h>
> +#include <sys/wait.h>
> +#include <unistd.h>
> +
> +#include "kvm_util_base.h"
> +#include "tdx/tdx.h"
> +#include "tdx/test_util.h"
> +
> +void run_in_new_process(void (*func)(void))
> +{
> + if (fork() == 0) {
> + func();
> + exit(0);
> + }
> + wait(NULL);
> +}
> +
> +bool is_tdx_enabled(void)
> +{
> + return !!(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_TDX_VM));
> +}
> +
> +void tdx_test_success(void)
> +{
> + uint64_t code = 0;
> +
> + tdg_vp_vmcall_instruction_io(TDX_TEST_SUCCESS_PORT,
> + TDX_TEST_SUCCESS_SIZE,
> + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE, &code);
> +}
> diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> new file mode 100644
> index 000000000000..a18d1c9d6026
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> @@ -0,0 +1,45 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +
> +#include <signal.h>
> +#include "kvm_util_base.h"
> +#include "tdx/tdx_util.h"
> +#include "tdx/test_util.h"
> +#include "test_util.h"
> +
> +void guest_code_lifecycle(void)
> +{
> + tdx_test_success();
> +}
> +
> +void verify_td_lifecycle(void)
> +{
> + struct kvm_vm *vm;
> + struct kvm_vcpu *vcpu;
> +
> + vm = td_create();
> + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
> + vcpu = td_vcpu_add(vm, 0, guest_code_lifecycle);
> + td_finalize(vm);
> +
> + printf("Verifying TD lifecycle:\n");
> +
> + vcpu_run(vcpu);
> + TDX_TEST_ASSERT_SUCCESS(vcpu);
> +
> + kvm_vm_free(vm);
> + printf("\t ... PASSED\n");
> +}

Nit:
All the functions used locally inside tdx_vm_tests.c can be declared static:
static void guest_code_lifecycle(void)
static void verify_td_lifecycle(void)

> +
> +int main(int argc, char **argv)
> +{
> + setbuf(stdout, NULL);
> +
> + if (!is_tdx_enabled()) {
> + print_skip("TDX is not supported by KVM");
> + exit(KSFT_SKIP);
> + }
> +
> + run_in_new_process(&verify_td_lifecycle);
> +
> + return 0;
> +}

2024-03-21 23:41:03

by Zhang, Dongsheng X

[permalink] [raw]
Subject: Re: [RFC PATCH v5 15/29] KVM: selftests: TDX: Add TDX MSR read/write tests



On 12/12/2023 12:46 PM, Sagi Shahar wrote:
> The test verifies reads and writes for MSR registers with different access
> levels.
>
> Signed-off-by: Sagi Shahar <[email protected]>
> Signed-off-by: Ackerley Tng <[email protected]>
> Signed-off-by: Ryan Afranji <[email protected]>
> ---
> .../selftests/kvm/include/x86_64/tdx/tdx.h | 5 +
> .../selftests/kvm/lib/x86_64/tdx/tdx.c | 27 +++
> .../selftests/kvm/x86_64/tdx_vm_tests.c | 209 ++++++++++++++++++
> 3 files changed, 241 insertions(+)
>
> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> index 63788012bf94..85ba6aab79a7 100644
> --- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> @@ -9,11 +9,16 @@
> #define TDG_VP_VMCALL_REPORT_FATAL_ERROR 0x10003
>
> #define TDG_VP_VMCALL_INSTRUCTION_IO 30
> +#define TDG_VP_VMCALL_INSTRUCTION_RDMSR 31
> +#define TDG_VP_VMCALL_INSTRUCTION_WRMSR 32

Nit:
"arch/x86/include/uapi/asm/vmx.h" already defines the following:
#define EXIT_REASON_IO_INSTRUCTION 30
#define EXIT_REASON_MSR_READ 31
#define EXIT_REASON_MSR_WRITE 32


> +
> void handle_userspace_tdg_vp_vmcall_exit(struct kvm_vcpu *vcpu);
> uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
> uint64_t write, uint64_t *data);
> void tdg_vp_vmcall_report_fatal_error(uint64_t error_code, uint64_t data_gpa);
> uint64_t tdg_vp_vmcall_get_td_vmcall_info(uint64_t *r11, uint64_t *r12,
> uint64_t *r13, uint64_t *r14);
> +uint64_t tdg_vp_vmcall_instruction_rdmsr(uint64_t index, uint64_t *ret_value);
> +uint64_t tdg_vp_vmcall_instruction_wrmsr(uint64_t index, uint64_t value);
>
> #endif // SELFTEST_TDX_TDX_H
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> index e5a9e13c62e2..88ea6f2a6469 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> @@ -87,3 +87,30 @@ uint64_t tdg_vp_vmcall_get_td_vmcall_info(uint64_t *r11, uint64_t *r12,
>
> return ret;
> }
> +
> +uint64_t tdg_vp_vmcall_instruction_rdmsr(uint64_t index, uint64_t *ret_value)
> +{
> + uint64_t ret;
> + struct tdx_hypercall_args args = {
> + .r11 = TDG_VP_VMCALL_INSTRUCTION_RDMSR,
> + .r12 = index,
> + };
> +
> + ret = __tdx_hypercall(&args, TDX_HCALL_HAS_OUTPUT);
> +
> + if (ret_value)
> + *ret_value = args.r11;
> +
> + return ret;
> +}
> +
> +uint64_t tdg_vp_vmcall_instruction_wrmsr(uint64_t index, uint64_t value)
> +{
> + struct tdx_hypercall_args args = {
> + .r11 = TDG_VP_VMCALL_INSTRUCTION_WRMSR,
> + .r12 = index,
> + .r13 = value,
> + };
> +
> + return __tdx_hypercall(&args, 0);
> +}
> diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> index 699cba36e9ce..5db3701cc6d9 100644
> --- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> +++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> @@ -515,6 +515,213 @@ void verify_guest_reads(void)
> printf("\t ... PASSED\n");
> }
>
> +/*
> + * Define a filter which denies all MSR access except the following:
> + * MSR_X2APIC_APIC_ICR: Allow read/write access (allowed by default)
> + * MSR_IA32_MISC_ENABLE: Allow read access
> + * MSR_IA32_POWER_CTL: Allow write access
> + */
> +#define MSR_X2APIC_APIC_ICR 0x830
> +static u64 tdx_msr_test_allow_bits = 0xFFFFFFFFFFFFFFFF;

Nit:
0xFFFFFFFFFFFFFFFF is error prone; how about defining the following instead?
static u64 tdx_msr_test_allow_bits = ~0ULL;


> +struct kvm_msr_filter tdx_msr_test_filter = {
> + .flags = KVM_MSR_FILTER_DEFAULT_DENY,
> + .ranges = {
> + {
> + .flags = KVM_MSR_FILTER_READ,
> + .nmsrs = 1,
> + .base = MSR_IA32_MISC_ENABLE,
> + .bitmap = (uint8_t *)&tdx_msr_test_allow_bits,
> + }, {
> + .flags = KVM_MSR_FILTER_WRITE,
> + .nmsrs = 1,
> + .base = MSR_IA32_POWER_CTL,
> + .bitmap = (uint8_t *)&tdx_msr_test_allow_bits,
> + },
> + },
> +};
> +
> +/*
> + * Verifies MSR read functionality.
> + */
> +void guest_msr_read(void)
> +{
> + uint64_t data;
> + uint64_t ret;
> +
> + ret = tdg_vp_vmcall_instruction_rdmsr(MSR_X2APIC_APIC_ICR, &data);
> + if (ret)
> + tdx_test_fatal(ret);
> +
> + ret = tdx_test_report_64bit_to_user_space(data);
> + if (ret)
> + tdx_test_fatal(ret);
> +
> + ret = tdg_vp_vmcall_instruction_rdmsr(MSR_IA32_MISC_ENABLE, &data);
> + if (ret)
> + tdx_test_fatal(ret);
> +
> + ret = tdx_test_report_64bit_to_user_space(data);
> + if (ret)
> + tdx_test_fatal(ret);
> +
> + /* We expect this call to fail since MSR_IA32_POWER_CTL is write only */
> + ret = tdg_vp_vmcall_instruction_rdmsr(MSR_IA32_POWER_CTL, &data);
> + if (ret) {
> + ret = tdx_test_report_64bit_to_user_space(ret);
> + if (ret)
> + tdx_test_fatal(ret);
> + } else {
> + tdx_test_fatal(-99);
> + }
> +
> + tdx_test_success();
> +}
> +
> +void verify_guest_msr_reads(void)
> +{
> + struct kvm_vm *vm;
> + struct kvm_vcpu *vcpu;
> +
> + uint64_t data;
> + int ret;
> +
> + vm = td_create();
> + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
> +
> + /*
> + * Set explicit MSR filter map to control access to the MSR registers
> + * used in the test.
> + */
> + printf("\t ... Setting test MSR filter\n");
> + ret = kvm_check_cap(KVM_CAP_X86_USER_SPACE_MSR);
> + TEST_ASSERT(ret, "KVM_CAP_X86_USER_SPACE_MSR is unavailable");
> + vm_enable_cap(vm, KVM_CAP_X86_USER_SPACE_MSR, KVM_MSR_EXIT_REASON_FILTER);
> +
> + ret = kvm_check_cap(KVM_CAP_X86_MSR_FILTER);
> + TEST_ASSERT(ret, "KVM_CAP_X86_MSR_FILTER is unavailable");
> +
> + ret = ioctl(vm->fd, KVM_X86_SET_MSR_FILTER, &tdx_msr_test_filter);
> + TEST_ASSERT(ret == 0,
> + "KVM_X86_SET_MSR_FILTER failed, ret: %i errno: %i (%s)",
> + ret, errno, strerror(errno));
> +
> + vcpu = td_vcpu_add(vm, 0, guest_msr_read);
> + td_finalize(vm);
> +
> + printf("Verifying guest msr reads:\n");
> +
> + printf("\t ... Setting test MSR values\n");
> + /* Write arbitrary values to the MSRs. */
> + vcpu_set_msr(vcpu, MSR_X2APIC_APIC_ICR, 4);
> + vcpu_set_msr(vcpu, MSR_IA32_MISC_ENABLE, 5);
> + vcpu_set_msr(vcpu, MSR_IA32_POWER_CTL, 6);
> +
> + printf("\t ... Running guest\n");
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + data = tdx_test_read_64bit_report_from_guest(vcpu);
> + TEST_ASSERT_EQ(data, 4);
> +
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + data = tdx_test_read_64bit_report_from_guest(vcpu);
> + TEST_ASSERT_EQ(data, 5);
> +
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + data = tdx_test_read_64bit_report_from_guest(vcpu);
> + TEST_ASSERT_EQ(data, TDG_VP_VMCALL_INVALID_OPERAND);
> +
> + td_vcpu_run(vcpu);
> + TDX_TEST_ASSERT_SUCCESS(vcpu);
> +
> + kvm_vm_free(vm);
> + printf("\t ... PASSED\n");
> +}
> +
> +/*
> + * Verifies MSR write functionality.
> + */
> +void guest_msr_write(void)
> +{
> + uint64_t ret;
> +
> + ret = tdg_vp_vmcall_instruction_wrmsr(MSR_X2APIC_APIC_ICR, 4);
> + if (ret)
> + tdx_test_fatal(ret);
> +
> + /* We expect this call to fail since MSR_IA32_MISC_ENABLE is read only */
> + ret = tdg_vp_vmcall_instruction_wrmsr(MSR_IA32_MISC_ENABLE, 5);
> + if (ret) {
> + ret = tdx_test_report_64bit_to_user_space(ret);
> + if (ret)
> + tdx_test_fatal(ret);
> + } else {
> + tdx_test_fatal(-99);
> + }
> +
> +
> + ret = tdg_vp_vmcall_instruction_wrmsr(MSR_IA32_POWER_CTL, 6);
> + if (ret)
> + tdx_test_fatal(ret);
> +
> + tdx_test_success();
> +}
> +
> +void verify_guest_msr_writes(void)
> +{
> + struct kvm_vcpu *vcpu;
> + struct kvm_vm *vm;
> +
> + uint64_t data;
> + int ret;
> +
> + vm = td_create();
> + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
> +
> + /*
> + * Set explicit MSR filter map to control access to the MSR registers
> + * used in the test.
> + */
> + printf("\t ... Setting test MSR filter\n");
> + ret = kvm_check_cap(KVM_CAP_X86_USER_SPACE_MSR);
> + TEST_ASSERT(ret, "KVM_CAP_X86_USER_SPACE_MSR is unavailable");
> + vm_enable_cap(vm, KVM_CAP_X86_USER_SPACE_MSR, KVM_MSR_EXIT_REASON_FILTER);
> +
> + ret = kvm_check_cap(KVM_CAP_X86_MSR_FILTER);
> + TEST_ASSERT(ret, "KVM_CAP_X86_MSR_FILTER is unavailable");
> +
> + ret = ioctl(vm->fd, KVM_X86_SET_MSR_FILTER, &tdx_msr_test_filter);
> + TEST_ASSERT(ret == 0,
> + "KVM_X86_SET_MSR_FILTER failed, ret: %i errno: %i (%s)",
> + ret, errno, strerror(errno));
> +
> + vcpu = td_vcpu_add(vm, 0, guest_msr_write);
> + td_finalize(vm);
> +
> + printf("Verifying guest msr writes:\n");
> +
> + printf("\t ... Running guest\n");
> + /* Only the write to MSR_IA32_MISC_ENABLE should trigger an exit */
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + data = tdx_test_read_64bit_report_from_guest(vcpu);
> + TEST_ASSERT_EQ(data, TDG_VP_VMCALL_INVALID_OPERAND);
> +
> + td_vcpu_run(vcpu);
> + TDX_TEST_ASSERT_SUCCESS(vcpu);
> +
> + printf("\t ... Verifying MSR values written by guest\n");
> +
> + TEST_ASSERT_EQ(vcpu_get_msr(vcpu, MSR_X2APIC_APIC_ICR), 4);
> + TEST_ASSERT_EQ(vcpu_get_msr(vcpu, MSR_IA32_MISC_ENABLE), 0x1800);
> + TEST_ASSERT_EQ(vcpu_get_msr(vcpu, MSR_IA32_POWER_CTL), 6);
> +
> + kvm_vm_free(vm);
> + printf("\t ... PASSED\n");
> +}
> +
> +
> int main(int argc, char **argv)
> {
> setbuf(stdout, NULL);
> @@ -531,6 +738,8 @@ int main(int argc, char **argv)
> run_in_new_process(&verify_get_td_vmcall_info);
> run_in_new_process(&verify_guest_writes);
> run_in_new_process(&verify_guest_reads);
> + run_in_new_process(&verify_guest_msr_writes);
> + run_in_new_process(&verify_guest_msr_reads);
>
> return 0;
> }

2024-03-21 23:45:59

by Zhang, Dongsheng X

[permalink] [raw]
Subject: Re: [RFC PATCH v5 17/29] KVM: selftests: TDX: Add TDX MMIO reads test



On 12/12/2023 12:46 PM, Sagi Shahar wrote:
> The test verifies MMIO reads of various sizes from the host to the guest.
>
> Signed-off-by: Sagi Shahar <[email protected]>
> Signed-off-by: Ackerley Tng <[email protected]>
> Signed-off-by: Ryan Afranji <[email protected]>
> ---
> .../selftests/kvm/include/x86_64/tdx/tdcall.h | 2 +
> .../selftests/kvm/include/x86_64/tdx/tdx.h | 3 +
> .../kvm/include/x86_64/tdx/test_util.h | 23 +++++
> .../selftests/kvm/lib/x86_64/tdx/tdx.c | 19 ++++
> .../selftests/kvm/x86_64/tdx_vm_tests.c | 87 +++++++++++++++++++
> 5 files changed, 134 insertions(+)
>
> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
> index b5e94b7c48fa..95fcdbd8404e 100644
> --- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
> @@ -9,6 +9,8 @@
>
> #define TDG_VP_VMCALL_INSTRUCTION_IO_READ 0
> #define TDG_VP_VMCALL_INSTRUCTION_IO_WRITE 1
> +#define TDG_VP_VMCALL_VE_REQUEST_MMIO_READ 0
> +#define TDG_VP_VMCALL_VE_REQUEST_MMIO_WRITE 1
>
> #define TDG_VP_VMCALL_SUCCESS 0x0000000000000000
> #define TDG_VP_VMCALL_INVALID_OPERAND 0x8000000000000000
> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> index b18e39d20498..13ce60df5684 100644
> --- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> @@ -12,6 +12,7 @@
> #define TDG_VP_VMCALL_INSTRUCTION_IO 30
> #define TDG_VP_VMCALL_INSTRUCTION_RDMSR 31
> #define TDG_VP_VMCALL_INSTRUCTION_WRMSR 32
> +#define TDG_VP_VMCALL_VE_REQUEST_MMIO 48
>
> void handle_userspace_tdg_vp_vmcall_exit(struct kvm_vcpu *vcpu);
> uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
> @@ -22,5 +23,7 @@ uint64_t tdg_vp_vmcall_get_td_vmcall_info(uint64_t *r11, uint64_t *r12,
> uint64_t tdg_vp_vmcall_instruction_rdmsr(uint64_t index, uint64_t *ret_value);
> uint64_t tdg_vp_vmcall_instruction_wrmsr(uint64_t index, uint64_t value);
> uint64_t tdg_vp_vmcall_instruction_hlt(uint64_t interrupt_blocked_flag);
> +uint64_t tdg_vp_vmcall_ve_request_mmio_read(uint64_t address, uint64_t size,
> + uint64_t *data_out);
>
> #endif // SELFTEST_TDX_TDX_H
> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
> index 8a9b6a1bec3e..af412b764604 100644
> --- a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
> @@ -35,6 +35,29 @@
> (VCPU)->run->io.direction); \
> } while (0)
>
> +
> +/**
> + * Assert that some MMIO operation involving TDG.VP.VMCALL <#VERequestMMIO> was
> + * called in the guest.
> + */
> +#define TDX_TEST_ASSERT_MMIO(VCPU, ADDR, SIZE, DIR) \
> + do { \
> + TEST_ASSERT((VCPU)->run->exit_reason == KVM_EXIT_MMIO, \
> + "Got exit_reason other than KVM_EXIT_MMIO: %u (%s)\n", \
> + (VCPU)->run->exit_reason, \
> + exit_reason_str((VCPU)->run->exit_reason)); \
> + \
> + TEST_ASSERT(((VCPU)->run->exit_reason == KVM_EXIT_MMIO) && \
> + ((VCPU)->run->mmio.phys_addr == (ADDR)) && \
> + ((VCPU)->run->mmio.len == (SIZE)) && \
> + ((VCPU)->run->mmio.is_write == (DIR)), \
> + "Got an unexpected MMIO exit values: %u (%s) %llu %d %d\n", \
> + (VCPU)->run->exit_reason, \
> + exit_reason_str((VCPU)->run->exit_reason), \
> + (VCPU)->run->mmio.phys_addr, (VCPU)->run->mmio.len, \
> + (VCPU)->run->mmio.is_write); \
> + } while (0)
> +
> /**
> * Check and report if there was some failure in the guest, either an exception
> * like a triple fault, or if a tdx_test_fatal() was hit.
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> index 9485bafedc38..b19f07ebc0e7 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> @@ -124,3 +124,22 @@ uint64_t tdg_vp_vmcall_instruction_hlt(uint64_t interrupt_blocked_flag)
>
> return __tdx_hypercall(&args, 0);
> }
> +
> +uint64_t tdg_vp_vmcall_ve_request_mmio_read(uint64_t address, uint64_t size,
> + uint64_t *data_out)
> +{
> + uint64_t ret;
> + struct tdx_hypercall_args args = {
> + .r11 = TDG_VP_VMCALL_VE_REQUEST_MMIO,
> + .r12 = size,
> + .r13 = TDG_VP_VMCALL_VE_REQUEST_MMIO_READ,
> + .r14 = address,
> + };
> +
> + ret = __tdx_hypercall(&args, TDX_HCALL_HAS_OUTPUT);
> +
> + if (data_out)
> + *data_out = args.r11;
> +
> + return ret;
> +}
> diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> index 5fae4c6e5f95..48902b69d13e 100644
> --- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> +++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> @@ -799,6 +799,92 @@ void verify_guest_hlt(void)
> _verify_guest_hlt(0);
> }
>
> +/* Pick any address that was not mapped into the guest to test MMIO */
> +#define TDX_MMIO_TEST_ADDR 0x200000000
> +
> +void guest_mmio_reads(void)
> +{
> + uint64_t data;
> + uint64_t ret;
> +
> + ret = tdg_vp_vmcall_ve_request_mmio_read(TDX_MMIO_TEST_ADDR, 1, &data);
> + if (ret)
> + tdx_test_fatal(ret);
> + if (data != 0x12)
> + tdx_test_fatal(1);
> +
> + ret = tdg_vp_vmcall_ve_request_mmio_read(TDX_MMIO_TEST_ADDR, 2, &data);
> + if (ret)
> + tdx_test_fatal(ret);
> + if (data != 0x1234)
> + tdx_test_fatal(2);
> +
> + ret = tdg_vp_vmcall_ve_request_mmio_read(TDX_MMIO_TEST_ADDR, 4, &data);
> + if (ret)
> + tdx_test_fatal(ret);
> + if (data != 0x12345678)
> + tdx_test_fatal(4);
> +
> + ret = tdg_vp_vmcall_ve_request_mmio_read(TDX_MMIO_TEST_ADDR, 8, &data);
> + if (ret)
> + tdx_test_fatal(ret);
> + if (data != 0x1234567890ABCDEF)
> + tdx_test_fatal(8);
> +
> + // Read an invalid number of bytes.
> + ret = tdg_vp_vmcall_ve_request_mmio_read(TDX_MMIO_TEST_ADDR, 10, &data);
> + if (ret)
> + tdx_test_fatal(ret);
> +
> + tdx_test_success();
> +}
> +
> +/*
> + * Varifies guest MMIO reads.

Nit: typo? Varifies ==> Verifies

> + */
> +void verify_mmio_reads(void)
> +{
> + struct kvm_vm *vm;
> + struct kvm_vcpu *vcpu;
> +
> + vm = td_create();
> + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
> + vcpu = td_vcpu_add(vm, 0, guest_mmio_reads);
> + td_finalize(vm);
> +
> + printf("Verifying TD MMIO reads:\n");
> +
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + TDX_TEST_ASSERT_MMIO(vcpu, TDX_MMIO_TEST_ADDR, 1, TDG_VP_VMCALL_VE_REQUEST_MMIO_READ);
> + *(uint8_t *)vcpu->run->mmio.data = 0x12;
> +
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + TDX_TEST_ASSERT_MMIO(vcpu, TDX_MMIO_TEST_ADDR, 2, TDG_VP_VMCALL_VE_REQUEST_MMIO_READ);
> + *(uint16_t *)vcpu->run->mmio.data = 0x1234;
> +
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + TDX_TEST_ASSERT_MMIO(vcpu, TDX_MMIO_TEST_ADDR, 4, TDG_VP_VMCALL_VE_REQUEST_MMIO_READ);
> + *(uint32_t *)vcpu->run->mmio.data = 0x12345678;
> +
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + TDX_TEST_ASSERT_MMIO(vcpu, TDX_MMIO_TEST_ADDR, 8, TDG_VP_VMCALL_VE_REQUEST_MMIO_READ);
> + *(uint64_t *)vcpu->run->mmio.data = 0x1234567890ABCDEF;
> +
> + td_vcpu_run(vcpu);
> + TEST_ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_SYSTEM_EVENT);
> + TEST_ASSERT_EQ(vcpu->run->system_event.data[1], TDG_VP_VMCALL_INVALID_OPERAND);
> +
> + td_vcpu_run(vcpu);
> + TDX_TEST_ASSERT_SUCCESS(vcpu);
> +
> + kvm_vm_free(vm);
> + printf("\t ... PASSED\n");
> +}
> +
> int main(int argc, char **argv)
> {
> setbuf(stdout, NULL);
> @@ -818,6 +904,7 @@ int main(int argc, char **argv)
> run_in_new_process(&verify_guest_msr_writes);
> run_in_new_process(&verify_guest_msr_reads);
> run_in_new_process(&verify_guest_hlt);
> + run_in_new_process(&verify_mmio_reads);
>
> return 0;
> }

2024-03-21 23:47:02

by Zhang, Dongsheng X

Subject: Re: [RFC PATCH v5 18/29] KVM: selftests: TDX: Add TDX MMIO writes test



On 12/12/2023 12:46 PM, Sagi Shahar wrote:
> The test verifies MMIO writes of various sizes from the guest to the host.
>
> Signed-off-by: Sagi Shahar <[email protected]>
> Signed-off-by: Ackerley Tng <[email protected]>
> Signed-off-by: Ryan Afranji <[email protected]>
> ---
> .../selftests/kvm/include/x86_64/tdx/tdx.h | 2 +
> .../selftests/kvm/lib/x86_64/tdx/tdx.c | 14 +++
> .../selftests/kvm/x86_64/tdx_vm_tests.c | 85 +++++++++++++++++++
> 3 files changed, 101 insertions(+)
>
> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> index 13ce60df5684..502b670ea699 100644
> --- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> @@ -25,5 +25,7 @@ uint64_t tdg_vp_vmcall_instruction_wrmsr(uint64_t index, uint64_t value);
> uint64_t tdg_vp_vmcall_instruction_hlt(uint64_t interrupt_blocked_flag);
> uint64_t tdg_vp_vmcall_ve_request_mmio_read(uint64_t address, uint64_t size,
> uint64_t *data_out);
> +uint64_t tdg_vp_vmcall_ve_request_mmio_write(uint64_t address, uint64_t size,
> + uint64_t data_in);
>
> #endif // SELFTEST_TDX_TDX_H
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> index b19f07ebc0e7..f4afa09f7e3d 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> @@ -143,3 +143,17 @@ uint64_t tdg_vp_vmcall_ve_request_mmio_read(uint64_t address, uint64_t size,
>
> return ret;
> }
> +
> +uint64_t tdg_vp_vmcall_ve_request_mmio_write(uint64_t address, uint64_t size,
> + uint64_t data_in)
> +{
> + struct tdx_hypercall_args args = {
> + .r11 = TDG_VP_VMCALL_VE_REQUEST_MMIO,
> + .r12 = size,
> + .r13 = TDG_VP_VMCALL_VE_REQUEST_MMIO_WRITE,
> + .r14 = address,
> + .r15 = data_in,
> + };
> +
> + return __tdx_hypercall(&args, 0);
> +}
> diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> index 48902b69d13e..5e28ba828a92 100644
> --- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> +++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
> @@ -885,6 +885,90 @@ void verify_mmio_reads(void)
> printf("\t ... PASSED\n");
> }
>
> +void guest_mmio_writes(void)
> +{
> + uint64_t ret;
> +
> + ret = tdg_vp_vmcall_ve_request_mmio_write(TDX_MMIO_TEST_ADDR, 1, 0x12);
> + if (ret)
> + tdx_test_fatal(ret);
> +
> + ret = tdg_vp_vmcall_ve_request_mmio_write(TDX_MMIO_TEST_ADDR, 2, 0x1234);
> + if (ret)
> + tdx_test_fatal(ret);
> +
> + ret = tdg_vp_vmcall_ve_request_mmio_write(TDX_MMIO_TEST_ADDR, 4, 0x12345678);
> + if (ret)
> + tdx_test_fatal(ret);
> +
> + ret = tdg_vp_vmcall_ve_request_mmio_write(TDX_MMIO_TEST_ADDR, 8, 0x1234567890ABCDEF);
> + if (ret)
> + tdx_test_fatal(ret);
> +
> + // Write across page boundary.
> + ret = tdg_vp_vmcall_ve_request_mmio_write(PAGE_SIZE - 1, 8, 0);
> + if (ret)
> + tdx_test_fatal(ret);
> +
> + tdx_test_success();
> +}
> +
> +/*
> + * Varifies guest MMIO writes.
> + */

Nit: typo? Varifies ==> Verifies

> +void verify_mmio_writes(void)
> +{
> + struct kvm_vm *vm;
> + struct kvm_vcpu *vcpu;
> +
> + uint8_t byte_1;
> + uint16_t byte_2;
> + uint32_t byte_4;
> + uint64_t byte_8;
> +
> + vm = td_create();
> + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
> + vcpu = td_vcpu_add(vm, 0, guest_mmio_writes);
> + td_finalize(vm);
> +
> + printf("Verifying TD MMIO writes:\n");
> +
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + TDX_TEST_ASSERT_MMIO(vcpu, TDX_MMIO_TEST_ADDR, 1, TDG_VP_VMCALL_VE_REQUEST_MMIO_WRITE);
> + byte_1 = *(uint8_t *)(vcpu->run->mmio.data);
> +
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + TDX_TEST_ASSERT_MMIO(vcpu, TDX_MMIO_TEST_ADDR, 2, TDG_VP_VMCALL_VE_REQUEST_MMIO_WRITE);
> + byte_2 = *(uint16_t *)(vcpu->run->mmio.data);
> +
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + TDX_TEST_ASSERT_MMIO(vcpu, TDX_MMIO_TEST_ADDR, 4, TDG_VP_VMCALL_VE_REQUEST_MMIO_WRITE);
> + byte_4 = *(uint32_t *)(vcpu->run->mmio.data);
> +
> + td_vcpu_run(vcpu);
> + TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
> + TDX_TEST_ASSERT_MMIO(vcpu, TDX_MMIO_TEST_ADDR, 8, TDG_VP_VMCALL_VE_REQUEST_MMIO_WRITE);
> + byte_8 = *(uint64_t *)(vcpu->run->mmio.data);
> +
> + TEST_ASSERT_EQ(byte_1, 0x12);
> + TEST_ASSERT_EQ(byte_2, 0x1234);
> + TEST_ASSERT_EQ(byte_4, 0x12345678);
> + TEST_ASSERT_EQ(byte_8, 0x1234567890ABCDEF);
> +
> + td_vcpu_run(vcpu);
> + TEST_ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_SYSTEM_EVENT);
> + TEST_ASSERT_EQ(vcpu->run->system_event.data[1], TDG_VP_VMCALL_INVALID_OPERAND);
> +
> + td_vcpu_run(vcpu);
> + TDX_TEST_ASSERT_SUCCESS(vcpu);
> +
> + kvm_vm_free(vm);
> + printf("\t ... PASSED\n");
> +}
> +
> int main(int argc, char **argv)
> {
> setbuf(stdout, NULL);
> @@ -905,6 +989,7 @@ int main(int argc, char **argv)
> run_in_new_process(&verify_guest_msr_reads);
> run_in_new_process(&verify_guest_hlt);
> run_in_new_process(&verify_mmio_reads);
> + run_in_new_process(&verify_mmio_writes);
>
> return 0;
> }

2024-03-22 21:34:00

by Chen, Zide

Subject: Re: [RFC PATCH v5 08/29] KVM: selftests: TDX: Add TDX lifecycle test



On 12/12/2023 12:46 PM, Sagi Shahar wrote:
> From: Erdem Aktas <[email protected]>
>
> Adding a test to verify TDX lifecycle by creating a TD and running a
> dummy TDG.VP.VMCALL <Instruction.IO> inside it.
>
> Signed-off-by: Erdem Aktas <[email protected]>
> Signed-off-by: Ryan Afranji <[email protected]>
> Signed-off-by: Sagi Shahar <[email protected]>
> Co-developed-by: Ackerley Tng <[email protected]>
> Signed-off-by: Ackerley Tng <[email protected]>
> ---
> tools/testing/selftests/kvm/Makefile | 4 +
> .../selftests/kvm/include/x86_64/tdx/tdcall.h | 35 ++++++++
> .../selftests/kvm/include/x86_64/tdx/tdx.h | 12 +++
> .../kvm/include/x86_64/tdx/test_util.h | 52 +++++++++++
> .../selftests/kvm/lib/x86_64/tdx/tdcall.S | 90 +++++++++++++++++++
> .../selftests/kvm/lib/x86_64/tdx/tdx.c | 27 ++++++
> .../selftests/kvm/lib/x86_64/tdx/tdx_util.c | 1 +
> .../selftests/kvm/lib/x86_64/tdx/test_util.c | 34 +++++++
> .../selftests/kvm/x86_64/tdx_vm_tests.c | 45 ++++++++++
> 9 files changed, 300 insertions(+)
> create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
> create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
> create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/tdcall.S
> create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c
> create mode 100644 tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
>
> diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> index a35150ab855f..80d4a50eeb9f 100644
> --- a/tools/testing/selftests/kvm/Makefile
> +++ b/tools/testing/selftests/kvm/Makefile
> @@ -52,6 +52,9 @@ LIBKVM_x86_64 += lib/x86_64/vmx.c
> LIBKVM_x86_64 += lib/x86_64/sev.c
> LIBKVM_x86_64 += lib/x86_64/tdx/tdx_util.c
> LIBKVM_x86_64 += lib/x86_64/tdx/td_boot.S
> +LIBKVM_x86_64 += lib/x86_64/tdx/tdcall.S
> +LIBKVM_x86_64 += lib/x86_64/tdx/tdx.c
> +LIBKVM_x86_64 += lib/x86_64/tdx/test_util.c
>
> LIBKVM_aarch64 += lib/aarch64/gic.c
> LIBKVM_aarch64 += lib/aarch64/gic_v3.c
> @@ -152,6 +155,7 @@ TEST_GEN_PROGS_x86_64 += set_memory_region_test
> TEST_GEN_PROGS_x86_64 += steal_time
> TEST_GEN_PROGS_x86_64 += kvm_binary_stats_test
> TEST_GEN_PROGS_x86_64 += system_counter_offset_test
> +TEST_GEN_PROGS_x86_64 += x86_64/tdx_vm_tests
>
> # Compiled outputs used by test targets
> TEST_GEN_PROGS_EXTENDED_x86_64 += x86_64/nx_huge_pages_test
> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
> new file mode 100644
> index 000000000000..78001bfec9c8
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
> @@ -0,0 +1,35 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/* Adapted from arch/x86/include/asm/shared/tdx.h */
> +
> +#ifndef SELFTESTS_TDX_TDCALL_H
> +#define SELFTESTS_TDX_TDCALL_H
> +
> +#include <linux/bits.h>
> +#include <linux/types.h>
> +
> +#define TDG_VP_VMCALL_INSTRUCTION_IO_READ 0
> +#define TDG_VP_VMCALL_INSTRUCTION_IO_WRITE 1
> +
> +#define TDX_HCALL_HAS_OUTPUT BIT(0)
> +
> +#define TDX_HYPERCALL_STANDARD 0
> +
> +/*
> + * Used in __tdx_hypercall() to pass down and get back registers' values of
> + * the TDCALL instruction when requesting services from the VMM.
> + *
> + * This is a software only structure and not part of the TDX module/VMM ABI.
> + */
> +struct tdx_hypercall_args {
> + u64 r10;
> + u64 r11;
> + u64 r12;
> + u64 r13;
> + u64 r14;
> + u64 r15;
> +};
> +
> +/* Used to request services from the VMM */
> +u64 __tdx_hypercall(struct tdx_hypercall_args *args, unsigned long flags);
> +
> +#endif // SELFTESTS_TDX_TDCALL_H
> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> new file mode 100644
> index 000000000000..a7161efe4ee2
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
> @@ -0,0 +1,12 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +#ifndef SELFTEST_TDX_TDX_H
> +#define SELFTEST_TDX_TDX_H
> +
> +#include <stdint.h>
> +
> +#define TDG_VP_VMCALL_INSTRUCTION_IO 30
> +
> +uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
> + uint64_t write, uint64_t *data);
> +
> +#endif // SELFTEST_TDX_TDX_H
> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
> new file mode 100644
> index 000000000000..b570b6d978ff
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
> @@ -0,0 +1,52 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +#ifndef SELFTEST_TDX_TEST_UTIL_H
> +#define SELFTEST_TDX_TEST_UTIL_H
> +
> +#include <stdbool.h>
> +
> +#include "tdcall.h"
> +
> +#define TDX_TEST_SUCCESS_PORT 0x30
> +#define TDX_TEST_SUCCESS_SIZE 4
> +
> +/**
> + * Assert that tdx_test_success() was called in the guest.
> + */
> +#define TDX_TEST_ASSERT_SUCCESS(VCPU) \
> + (TEST_ASSERT( \
> + ((VCPU)->run->exit_reason == KVM_EXIT_IO) && \
> + ((VCPU)->run->io.port == TDX_TEST_SUCCESS_PORT) && \
> + ((VCPU)->run->io.size == TDX_TEST_SUCCESS_SIZE) && \
> + ((VCPU)->run->io.direction == \
> + TDG_VP_VMCALL_INSTRUCTION_IO_WRITE), \
> + "Unexpected exit values while waiting for test completion: %u (%s) %d %d %d\n", \
> + (VCPU)->run->exit_reason, \
> + exit_reason_str((VCPU)->run->exit_reason), \
> + (VCPU)->run->io.port, (VCPU)->run->io.size, \
> + (VCPU)->run->io.direction))
> +
> +/**
> + * Run a test in a new process.
> + *
> + * There might be multiple tests we are running and if one test fails, it will
> + * prevent the subsequent tests to run due to how tests are failing with
> + * TEST_ASSERT function. The run_in_new_process function will run a test in a
> + * new process context and wait for it to finish or fail to prevent TEST_ASSERT
> + * to kill the main testing process.
> + */
> +void run_in_new_process(void (*func)(void));
> +
> +/**
> + * Verify that the TDX is supported by KVM.
> + */
> +bool is_tdx_enabled(void);
> +
> +/**
> + * Report test success to userspace.
> + *
> + * Use TDX_TEST_ASSERT_SUCCESS() to assert that this function was called in the
> + * guest.
> + */
> +void tdx_test_success(void);
> +
> +#endif // SELFTEST_TDX_TEST_UTIL_H
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdcall.S b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdcall.S
> new file mode 100644
> index 000000000000..df9c1ed4bb2d
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdcall.S
> @@ -0,0 +1,90 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/* Adapted from arch/x86/coco/tdx/tdcall.S */
> +
> +#define TDX_HYPERCALL_r10 0 /* offsetof(struct tdx_hypercall_args, r10) */
> +#define TDX_HYPERCALL_r11 8 /* offsetof(struct tdx_hypercall_args, r11) */
> +#define TDX_HYPERCALL_r12 16 /* offsetof(struct tdx_hypercall_args, r12) */
> +#define TDX_HYPERCALL_r13 24 /* offsetof(struct tdx_hypercall_args, r13) */
> +#define TDX_HYPERCALL_r14 32 /* offsetof(struct tdx_hypercall_args, r14) */
> +#define TDX_HYPERCALL_r15 40 /* offsetof(struct tdx_hypercall_args, r15) */
> +
> +/*
> + * Bitmasks of exposed registers (with VMM).
> + */
> +#define TDX_R10 0x400
> +#define TDX_R11 0x800
> +#define TDX_R12 0x1000
> +#define TDX_R13 0x2000
> +#define TDX_R14 0x4000
> +#define TDX_R15 0x8000
> +
> +#define TDX_HCALL_HAS_OUTPUT 0x1
> +
> +/*
> + * These registers are clobbered to hold arguments for each
> + * TDVMCALL. They are safe to expose to the VMM.
> + * Each bit in this mask represents a register ID. Bit field
> + * details can be found in TDX GHCI specification, section
> + * titled "TDCALL [TDG.VP.VMCALL] leaf".
> + */
> +#define TDVMCALL_EXPOSE_REGS_MASK ( TDX_R10 | TDX_R11 | \
> + TDX_R12 | TDX_R13 | \
> + TDX_R14 | TDX_R15 )
> +
> +.code64
> +.section .text
> +
> +.globl __tdx_hypercall
> +.type __tdx_hypercall, @function
> +__tdx_hypercall:
> + /* Set up stack frame */
> + push %rbp
> + movq %rsp, %rbp
> +
> + /* Save callee-saved GPRs as mandated by the x86_64 ABI */
> + push %r15
> + push %r14
> + push %r13
> + push %r12
> +
> + /* Mangle function call ABI into TDCALL ABI: */
> + /* Set TDCALL leaf ID (TDVMCALL (0)) in RAX */
> + xor %eax, %eax
> +
> + /* Copy hypercall registers from arg struct: */
> + movq TDX_HYPERCALL_r10(%rdi), %r10
> + movq TDX_HYPERCALL_r11(%rdi), %r11
> + movq TDX_HYPERCALL_r12(%rdi), %r12
> + movq TDX_HYPERCALL_r13(%rdi), %r13
> + movq TDX_HYPERCALL_r14(%rdi), %r14
> + movq TDX_HYPERCALL_r15(%rdi), %r15
> +
> + movl $TDVMCALL_EXPOSE_REGS_MASK, %ecx
> +
> + tdcall
> +
> + /* TDVMCALL leaf return code is in R10 */
> + movq %r10, %rax
> +
> + /* Copy hypercall result registers to arg struct if needed */
> + testq $TDX_HCALL_HAS_OUTPUT, %rsi
> + jz .Lout
> +
> + movq %r10, TDX_HYPERCALL_r10(%rdi)
> + movq %r11, TDX_HYPERCALL_r11(%rdi)
> + movq %r12, TDX_HYPERCALL_r12(%rdi)
> + movq %r13, TDX_HYPERCALL_r13(%rdi)
> + movq %r14, TDX_HYPERCALL_r14(%rdi)
> + movq %r15, TDX_HYPERCALL_r15(%rdi)
> +.Lout:
> + /* Restore callee-saved GPRs as mandated by the x86_64 ABI */
> + pop %r12
> + pop %r13
> + pop %r14
> + pop %r15
> +
> + pop %rbp
> + ret
> +
> +/* Disable executable stack */
> +.section .note.GNU-stack,"",%progbits
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> new file mode 100644
> index 000000000000..c2414523487a
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> @@ -0,0 +1,27 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +
> +#include "tdx/tdcall.h"
> +#include "tdx/tdx.h"
> +
> +uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
> + uint64_t write, uint64_t *data)
> +{
> + uint64_t ret;
> + struct tdx_hypercall_args args = {
> + .r10 = TDX_HYPERCALL_STANDARD,
> + .r11 = TDG_VP_VMCALL_INSTRUCTION_IO,
> + .r12 = size,
> + .r13 = write,
> + .r14 = port,
> + };
> +
> + if (write)
> + args.r15 = *data;
> +
> + ret = __tdx_hypercall(&args, write ? 0 : TDX_HCALL_HAS_OUTPUT);
> +
> + if (!write)
> + *data = args.r11;
> +
> + return ret;
> +}
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
> index 063ff486fb86..b302060049d5 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
> @@ -224,6 +224,7 @@ static void tdx_enable_capabilities(struct kvm_vm *vm)
> KVM_X2APIC_API_USE_32BIT_IDS |
> KVM_X2APIC_API_DISABLE_BROADCAST_QUIRK);
> vm_enable_cap(vm, KVM_CAP_SPLIT_IRQCHIP, 24);
> + vm_enable_cap(vm, KVM_CAP_MAX_VCPUS, 512);

Since the TDX spec doesn't define a maximum vCPU count, is it a good idea
to fix it at 512 in this common code?

How about moving this line to the specific test case where you actually
verify this capability?

For example, move it to [RFC PATCH v5 21/29] KVM: selftests: TDX: Add
TDG.VP.INFO test

+ vm_enable_cap(vm, KVM_CAP_MAX_VCPUS, 512);
..

TEST_ASSERT_EQ(ret_max_vcpus, 512);



2024-04-12 04:42:41

by Ackerley Tng

Subject: Re: [RFC PATCH v5 08/29] KVM: selftests: TDX: Add TDX lifecycle test

"Zhang, Dongsheng X" <[email protected]> writes:

> On 12/12/2023 12:46 PM, Sagi Shahar wrote:
>> From: Erdem Aktas <[email protected]>
>>
>> Adding a test to verify TDX lifecycle by creating a TD and running a
>> dummy TDG.VP.VMCALL <Instruction.IO> inside it.
>>
>> Signed-off-by: Erdem Aktas <[email protected]>
>> Signed-off-by: Ryan Afranji <[email protected]>
>> Signed-off-by: Sagi Shahar <[email protected]>
>> Co-developed-by: Ackerley Tng <[email protected]>
>> Signed-off-by: Ackerley Tng <[email protected]>
>> ---
>> tools/testing/selftests/kvm/Makefile | 4 +
>> .../selftests/kvm/include/x86_64/tdx/tdcall.h | 35 ++++++++
>> .../selftests/kvm/include/x86_64/tdx/tdx.h | 12 +++
>> .../kvm/include/x86_64/tdx/test_util.h | 52 +++++++++++
>> .../selftests/kvm/lib/x86_64/tdx/tdcall.S | 90 +++++++++++++++++++
>> .../selftests/kvm/lib/x86_64/tdx/tdx.c | 27 ++++++
>> .../selftests/kvm/lib/x86_64/tdx/tdx_util.c | 1 +
>> .../selftests/kvm/lib/x86_64/tdx/test_util.c | 34 +++++++
>> .../selftests/kvm/x86_64/tdx_vm_tests.c | 45 ++++++++++
>> 9 files changed, 300 insertions(+)
>> create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
>> create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
>> create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
>> create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/tdcall.S
>> create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
>> create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c
>> create mode 100644 tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
>>
>> diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
>> index a35150ab855f..80d4a50eeb9f 100644
>> --- a/tools/testing/selftests/kvm/Makefile
>> +++ b/tools/testing/selftests/kvm/Makefile
>> @@ -52,6 +52,9 @@ LIBKVM_x86_64 += lib/x86_64/vmx.c
>> LIBKVM_x86_64 += lib/x86_64/sev.c
>> LIBKVM_x86_64 += lib/x86_64/tdx/tdx_util.c
>> LIBKVM_x86_64 += lib/x86_64/tdx/td_boot.S
>> +LIBKVM_x86_64 += lib/x86_64/tdx/tdcall.S
>> +LIBKVM_x86_64 += lib/x86_64/tdx/tdx.c
>> +LIBKVM_x86_64 += lib/x86_64/tdx/test_util.c
>>
>> LIBKVM_aarch64 += lib/aarch64/gic.c
>> LIBKVM_aarch64 += lib/aarch64/gic_v3.c
>> @@ -152,6 +155,7 @@ TEST_GEN_PROGS_x86_64 += set_memory_region_test
>> TEST_GEN_PROGS_x86_64 += steal_time
>> TEST_GEN_PROGS_x86_64 += kvm_binary_stats_test
>> TEST_GEN_PROGS_x86_64 += system_counter_offset_test
>> +TEST_GEN_PROGS_x86_64 += x86_64/tdx_vm_tests
>>
>> # Compiled outputs used by test targets
>> TEST_GEN_PROGS_EXTENDED_x86_64 += x86_64/nx_huge_pages_test
>> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
>> new file mode 100644
>> index 000000000000..78001bfec9c8
>> --- /dev/null
>> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
>> @@ -0,0 +1,35 @@
>> +/* SPDX-License-Identifier: GPL-2.0-only */
>> +/* Adapted from arch/x86/include/asm/shared/tdx.h */
>> +
>> +#ifndef SELFTESTS_TDX_TDCALL_H
>> +#define SELFTESTS_TDX_TDCALL_H
>> +
>> +#include <linux/bits.h>
>> +#include <linux/types.h>
>> +
>> +#define TDG_VP_VMCALL_INSTRUCTION_IO_READ 0
>> +#define TDG_VP_VMCALL_INSTRUCTION_IO_WRITE 1
>
> Nit:
> Probably we can define the following instead in test_util.c?
> /* Port I/O direction */
> #define PORT_READ 0
> #define PORT_WRITE 1
>
> Then use them in place of TDG_VP_VMCALL_INSTRUCTION_IO_READ/TDG_VP_VMCALL_INSTRUCTION_IO_WRITE?
> which are too long
>

I was actually thinking of aligning all the macro definitions with the
definitions in the Intel GHCI Spec, so

3.9 TDG.VP.VMCALL<Instruction.IO>

becomes TDG_VP_VMCALL_INSTRUCTION_IO and then add suffixes READ and
WRITE for the directions.

PORT_READ and PORT_WRITE seem a little too unspecific, but I agree that
TDG_VP_VMCALL_INSTRUCTION_IO_READ/TDG_VP_VMCALL_INSTRUCTION_IO_WRITE are
long.

>> +
>> +#define TDX_HCALL_HAS_OUTPUT BIT(0)
>> +
>> +#define TDX_HYPERCALL_STANDARD 0
>> +
>> +/*
>> + * Used in __tdx_hypercall() to pass down and get back registers' values of
>> + * the TDCALL instruction when requesting services from the VMM.
>> + *
>> + * This is a software only structure and not part of the TDX module/VMM ABI.
>> + */
>> +struct tdx_hypercall_args {
>> + u64 r10;
>> + u64 r11;
>> + u64 r12;
>> + u64 r13;
>> + u64 r14;
>> + u64 r15;
>> +};
>> +
>> +/* Used to request services from the VMM */
>> +u64 __tdx_hypercall(struct tdx_hypercall_args *args, unsigned long flags);
>> +
>> +#endif // SELFTESTS_TDX_TDCALL_H
>> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
>> new file mode 100644
>> index 000000000000..a7161efe4ee2
>> --- /dev/null
>> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
>> @@ -0,0 +1,12 @@
>> +/* SPDX-License-Identifier: GPL-2.0-only */
>> +#ifndef SELFTEST_TDX_TDX_H
>> +#define SELFTEST_TDX_TDX_H
>> +
>> +#include <stdint.h>
>> +
>> +#define TDG_VP_VMCALL_INSTRUCTION_IO 30
>
> Nit:
> arch/x86/include/uapi/asm/vmx.h already exports the following define:
> #define EXIT_REASON_IO_INSTRUCTION 30
>
> Linux kernel example (arch/x86/coco/tdx/tdx.c):
> static bool handle_in(struct pt_regs *regs, int size, int port)
> {
> struct tdx_module_args args = {
> .r10 = TDX_HYPERCALL_STANDARD,
> .r11 = hcall_func(EXIT_REASON_IO_INSTRUCTION),
> .r12 = size,
> .r13 = PORT_READ,
> .r14 = port,
> };
>
> So just like the kernel, here we can also use EXIT_REASON_IO_INSTRUCTION in place of TDG_VP_VMCALL_INSTRUCTION_IO,
> just need to do a '#include "vmx.h"' or '#include <asm/vmx.h>' to bring in the define
>

I think aligning macro definitions with the spec is better in this case.

It seems odd to be calling an EXIT_REASON_* when making a hypercall.

Later on in this patch series this macro is added

#define TDG_VP_VMCALL_VE_REQUEST_MMIO 48

which matches

3.7 TDG.VP.VMCALL<#VE.RequestMMIO>

in the Intel GHCI Spec.

The equivalent EXIT_REASON is EXIT_REASON_EPT_VIOLATION, which I feel
doesn't carry the same meaning as an explicit request for MMIO, as in
TDG_VP_VMCALL_VE_REQUEST_MMIO.

So I think even though the numbers are the same, they don't carry the
same meaning and it's probably better to have different macro
definitions.

Or we could define one in terms of the other?

Later on in this patch series other macros are also added, specific to TDX

#define TDG_VP_VMCALL_GET_TD_VM_CALL_INFO 0x10000
#define TDG_VP_VMCALL_MAP_GPA 0x10001
#define TDG_VP_VMCALL_REPORT_FATAL_ERROR 0x10003

which matches

3.1 TDG.VP.VMCALL<GetTdVmCallInfo>
3.2 TDG.VP.VMCALL<MapGPA>
3.4 TDG.VP.VMCALL<ReportFatalError>

in the Intel GHCI Spec.

It's nice to have the naming convention for all the VMCALLs line up. :)
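If we did want to define one in terms of the other, a minimal sketch could
look like the following (EXIT_REASON_IO_INSTRUCTION normally comes from
arch/x86/include/uapi/asm/vmx.h; it is redefined here only so the snippet is
self-contained):

```c
/* Value from arch/x86/include/uapi/asm/vmx.h, redefined here only to keep
 * this sketch self-contained. */
#define EXIT_REASON_IO_INSTRUCTION 30

/* GHCI-named leaf defined in terms of the (numerically identical) VMX exit
 * reason, so both names stay available and the spec naming lines up. */
#define TDG_VP_VMCALL_INSTRUCTION_IO EXIT_REASON_IO_INSTRUCTION
```

That keeps the GHCI naming convention while making the numeric relationship
explicit in one place.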

>> +
>> +uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
>> + uint64_t write, uint64_t *data);
>> +

>> <snip>

>> +void verify_td_lifecycle(void)
>> +{
>> + struct kvm_vm *vm;
>> + struct kvm_vcpu *vcpu;
>> +
>> + vm = td_create();
>> + td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
>> + vcpu = td_vcpu_add(vm, 0, guest_code_lifecycle);
>> + td_finalize(vm);
>> +
>> + printf("Verifying TD lifecycle:\n");
>> +
>> + vcpu_run(vcpu);
>> + TDX_TEST_ASSERT_SUCCESS(vcpu);
>> +
>> + kvm_vm_free(vm);
>> + printf("\t ... PASSED\n");
>> +}
>
> Nit:
> All the functions used locally inside tdx_vm_tests.c can be declared static:
> static void guest_code_lifecycle(void)
> static void verify_td_lifecycle(void)
>

Will fix this, thanks!

>> +
>> +int main(int argc, char **argv)
>> +{
>> + setbuf(stdout, NULL);
>> +
>> + if (!is_tdx_enabled()) {
>> + print_skip("TDX is not supported by the KVM");
>> + exit(KSFT_SKIP);
>> + }
>> +
>> + run_in_new_process(&verify_td_lifecycle);
>> +
>> + return 0;
>> +}

2024-04-12 04:56:58

by Ackerley Tng

Subject: Re: [RFC PATCH v5 09/29] KVM: selftests: TDX: Add report_fatal_error test

Yan Zhao <[email protected]> writes:

> ...
>> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
>> index b570b6d978ff..6d69921136bd 100644
>> --- a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
>> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
>> @@ -49,4 +49,23 @@ bool is_tdx_enabled(void);
>> */
>> void tdx_test_success(void);
>>
>> +/**
>> + * Report an error with @error_code to userspace.
>> + *
>> + * Return value from tdg_vp_vmcall_report_fatal_error is ignored since execution
>> + * is not expected to continue beyond this point.
>> + */
>> +void tdx_test_fatal(uint64_t error_code);
>> +
>> +/**
>> + * Report an error with @error_code to userspace.
>> + *
>> + * @data_gpa may point to an optional shared guest memory holding the error
>> + * string.
>> + *
>> + * Return value from tdg_vp_vmcall_report_fatal_error is ignored since execution
>> + * is not expected to continue beyond this point.
>> + */
>> +void tdx_test_fatal_with_data(uint64_t error_code, uint64_t data_gpa);
> I found that nowhere is "data_gpa" actually used as a GPA; even in patch
> 23, its usage is to pass a line number ("tdx_test_fatal_with_data(ret, __LINE__)").
>
>

This function tdx_test_fatal_with_data() is meant to provide a generic
interface for TDX tests to use TDG.VP.VMCALL<ReportFatalError>, and so
the parameters of tdx_test_fatal_with_data() generically allow error_code and
data_gpa to be specified.

The tests just happen to use the data_gpa parameter to pass __LINE__ to
the host VMM, but future tests that use tdx_test_fatal_with_data() from
the TDX testing library could pass an actual GPA through data_gpa.
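To illustrate the two calling styles, here is a small host-side sketch; the
tdx_test_fatal_with_data() below is a hypothetical stand-in that records its
arguments, so the conventions can be shown without an actual TD guest:

```c
#include <stdint.h>

/* Hypothetical stand-in for the library function, recording its arguments
 * so the two calling styles can be exercised outside a TD guest. */
static uint64_t last_error_code, last_data_gpa;

static void tdx_test_fatal_with_data(uint64_t error_code, uint64_t data_gpa)
{
	last_error_code = error_code;
	last_data_gpa = data_gpa;
}

/* Style used by the current tests: smuggle a line number through data_gpa. */
static void report_with_line(uint64_t ret)
{
	tdx_test_fatal_with_data(ret, __LINE__);
}

/* Style a future test could use: pass a real shared-memory GPA holding an
 * error string. */
static void report_with_gpa(uint64_t ret, uint64_t err_string_gpa)
{
	tdx_test_fatal_with_data(ret, err_string_gpa);
}
```

Both styles go through the same generic interface; only the interpretation of
data_gpa by the host VMM differs.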

>> #endif // SELFTEST_TDX_TEST_UTIL_H
>> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
>> index c2414523487a..b854c3aa34ff 100644
>> --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
>> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
>> @@ -1,8 +1,31 @@
>> // SPDX-License-Identifier: GPL-2.0-only
>>
>> +#include <string.h>
>> +
>> #include "tdx/tdcall.h"
>> #include "tdx/tdx.h"
>>
>> +void handle_userspace_tdg_vp_vmcall_exit(struct kvm_vcpu *vcpu)
>> +{
>> + struct kvm_tdx_vmcall *vmcall_info = &vcpu->run->tdx.u.vmcall;
>> + uint64_t vmcall_subfunction = vmcall_info->subfunction;
>> +
>> + switch (vmcall_subfunction) {
>> + case TDG_VP_VMCALL_REPORT_FATAL_ERROR:
>> + vcpu->run->exit_reason = KVM_EXIT_SYSTEM_EVENT;
>> + vcpu->run->system_event.ndata = 3;
>> + vcpu->run->system_event.data[0] =
>> + TDG_VP_VMCALL_REPORT_FATAL_ERROR;
>> + vcpu->run->system_event.data[1] = vmcall_info->in_r12;
>> + vcpu->run->system_event.data[2] = vmcall_info->in_r13;
>> + vmcall_info->status_code = 0;
>> + break;
>> + default:
>> + TEST_FAIL("TD VMCALL subfunction %lu is unsupported.\n",
>> + vmcall_subfunction);
>> + }
>> +}
>> +
>> uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
>> uint64_t write, uint64_t *data)
>> {
>> @@ -25,3 +48,19 @@ uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
>>
>> return ret;
>> }
>> +
>> +void tdg_vp_vmcall_report_fatal_error(uint64_t error_code, uint64_t data_gpa)
>> +{
>> + struct tdx_hypercall_args args;
>> +
>> + memset(&args, 0, sizeof(struct tdx_hypercall_args));
>> +
>> + if (data_gpa)
>> + error_code |= 0x8000000000000000;
>>
> So, why this error_code needs to set bit 63?
>
>

The Intel GHCI spec says that in R12, bit 63 is set if the GPA is valid. Since
this is a generic TDX testing library function, the check allows the user to
call tdg_vp_vmcall_report_fatal_error() with an error_code and data_gpa
without worrying about setting bit 63 beforehand; if the user has already set
bit 63, there is no issue.

>> + args.r11 = TDG_VP_VMCALL_REPORT_FATAL_ERROR;
>> + args.r12 = error_code;
>> + args.r13 = data_gpa;
>> +
>> + __tdx_hypercall(&args, 0);
>> +}

>> <snip>

2024-04-12 05:35:10

by Ackerley Tng

Subject: Re: [RFC PATCH v5 05/29] KVM: selftests: Add helper functions to create TDX VMs


Thank you for your other comments!

>> <snip>

>> +static void load_td_per_vcpu_parameters(struct td_boot_parameters *params,
>> + struct kvm_sregs *sregs,
>> + struct kvm_vcpu *vcpu,
>> + void *guest_code)
>> +{
>> + /* Store vcpu_index to match what the TDX module would store internally */
>> + static uint32_t vcpu_index;
>> +
>> + struct td_per_vcpu_parameters *vcpu_params = &params->per_vcpu[vcpu_index];
>
> I think we can use vcpu->id in place of vcpu_index in this function, thus removing vcpu_index
>

td_per_vcpu_parameters is used in the selftest setup code (see
tools/testing/selftests/kvm/lib/x86_64/tdx/td_boot.S, where it is read via
ESI) to access the set of parameters belonging to the vcpu running the
selftest code, based on vcpu_index.

ESI is used because according to the TDX base spec, RSI contains the
vcpu index, which starts "from 0 and allocated sequentially on each
successful TDH.VP.INIT".

Hence, vcpu_index is static and is incremented each time
load_td_per_vcpu_parameters() is called (which is each time td_vcpu_add() is
called), in line with the TDX base spec.

vcpu->id can be specified by the user when vm_vcpu_add() is called, but
that may not be the same as vcpu_index.

>> +
>> + TEST_ASSERT(vcpu->initial_stack_addr != 0,
>> + "initial stack address should not be 0");
>> + TEST_ASSERT(vcpu->initial_stack_addr <= 0xffffffff,
>> + "initial stack address must fit in 32 bits");
>> + TEST_ASSERT((uint64_t)guest_code <= 0xffffffff,
>> + "guest_code must fit in 32 bits");
>> + TEST_ASSERT(sregs->cs.selector != 0, "cs.selector should not be 0");
>> +
>> + vcpu_params->esp_gva = (uint32_t)(uint64_t)vcpu->initial_stack_addr;
>> + vcpu_params->ljmp_target.eip_gva = (uint32_t)(uint64_t)guest_code;
>> + vcpu_params->ljmp_target.code64_sel = sregs->cs.selector;
>> +
>> + vcpu_index++;
>> +}

>> <snip>

2024-04-12 11:58:45

by Yan Zhao

Subject: Re: [RFC PATCH v5 09/29] KVM: selftests: TDX: Add report_fatal_error test

On Fri, Apr 12, 2024 at 04:56:36AM +0000, Ackerley Tng wrote:
> Yan Zhao <[email protected]> writes:
>
> > ...
> >> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
> >> index b570b6d978ff..6d69921136bd 100644
> >> --- a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
> >> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
> >> @@ -49,4 +49,23 @@ bool is_tdx_enabled(void);
> >> */
> >> void tdx_test_success(void);
> >>
> >> +/**
> >> + * Report an error with @error_code to userspace.
> >> + *
> >> + * Return value from tdg_vp_vmcall_report_fatal_error is ignored since execution
> >> + * is not expected to continue beyond this point.
> >> + */
> >> +void tdx_test_fatal(uint64_t error_code);
> >> +
> >> +/**
> >> + * Report an error with @error_code to userspace.
> >> + *
> >> + * @data_gpa may point to an optional shared guest memory holding the error
> >> + * string.
> >> + *
> >> + * Return value from tdg_vp_vmcall_report_fatal_error is ignored since execution
> >> + * is not expected to continue beyond this point.
> >> + */
> >> +void tdx_test_fatal_with_data(uint64_t error_code, uint64_t data_gpa);
> > I found nowhere is using "data_gpa" as a gpa, even in patch 23, it's
> > usage is to pass a line number ("tdx_test_fatal_with_data(ret, __LINE__)").
> >
> >
>
> This function tdx_test_fatal_with_data() is meant to provide a generic
> interface for TDX tests to use TDG.VP.VMCALL<ReportFatalError>, and so
> the parameters of tdx_test_fatal_with_data() generically allow error_code and
> data_gpa to be specified.
>
> The tests just happen to use the data_gpa parameter to pass __LINE__ to
> the host VMM, but other tests in future that use the
> tdx_test_fatal_with_data() function in the TDX testing library could
> actually pass a GPA through using data_gpa.
>
> >> #endif // SELFTEST_TDX_TEST_UTIL_H
> >> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> >> index c2414523487a..b854c3aa34ff 100644
> >> --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> >> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> >> @@ -1,8 +1,31 @@
> >> // SPDX-License-Identifier: GPL-2.0-only
> >>
> >> +#include <string.h>
> >> +
> >> #include "tdx/tdcall.h"
> >> #include "tdx/tdx.h"
> >>
> >> +void handle_userspace_tdg_vp_vmcall_exit(struct kvm_vcpu *vcpu)
> >> +{
> >> + struct kvm_tdx_vmcall *vmcall_info = &vcpu->run->tdx.u.vmcall;
> >> + uint64_t vmcall_subfunction = vmcall_info->subfunction;
> >> +
> >> + switch (vmcall_subfunction) {
> >> + case TDG_VP_VMCALL_REPORT_FATAL_ERROR:
> >> + vcpu->run->exit_reason = KVM_EXIT_SYSTEM_EVENT;
> >> + vcpu->run->system_event.ndata = 3;
> >> + vcpu->run->system_event.data[0] =
> >> + TDG_VP_VMCALL_REPORT_FATAL_ERROR;
> >> + vcpu->run->system_event.data[1] = vmcall_info->in_r12;
> >> + vcpu->run->system_event.data[2] = vmcall_info->in_r13;
> >> + vmcall_info->status_code = 0;
> >> + break;
> >> + default:
> >> + TEST_FAIL("TD VMCALL subfunction %lu is unsupported.\n",
> >> + vmcall_subfunction);
> >> + }
> >> +}
> >> +
> >> uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
> >> uint64_t write, uint64_t *data)
> >> {
> >> @@ -25,3 +48,19 @@ uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
> >>
> >> return ret;
> >> }
> >> +
> >> +void tdg_vp_vmcall_report_fatal_error(uint64_t error_code, uint64_t data_gpa)
> >> +{
> >> + struct tdx_hypercall_args args;
> >> +
> >> + memset(&args, 0, sizeof(struct tdx_hypercall_args));
> >> +
> >> + if (data_gpa)
> >> + error_code |= 0x8000000000000000;
> >>
> > So, why this error_code needs to set bit 63?
> >
> >
>
> The Intel GHCI Spec says in R12, bit 63 is set if the GPA is valid. As a
But the "__LINE__" above is obviously not a valid GPA.

Do you think it's better to check that "data_gpa" has the shared bit set and
is 4K-aligned before setting bit 63?

> generic TDX testing library function, this check allows the user to use
> tdg_vp_vmcall_report_fatal_error() with error_code and data_gpa and not
> worry about setting bit 63 before calling
> tdg_vp_vmcall_report_fatal_error(), though if the user set bit 63 before
> that, there is no issue.
>
> >> + args.r11 = TDG_VP_VMCALL_REPORT_FATAL_ERROR;
> >> + args.r12 = error_code;
> >> + args.r13 = data_gpa;
> >> +
> >> + __tdx_hypercall(&args, 0);
> >> +}
>
> >> <snip>
>

2024-04-15 08:06:09

by Ackerley Tng

Subject: Re: [RFC PATCH v5 09/29] KVM: selftests: TDX: Add report_fatal_error test

Yan Zhao <[email protected]> writes:

> On Fri, Apr 12, 2024 at 04:56:36AM +0000, Ackerley Tng wrote:
>> Yan Zhao <[email protected]> writes:
>>
>> > ...
>> >> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
>> >> index b570b6d978ff..6d69921136bd 100644
>> >> --- a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
>> >> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
>> >> @@ -49,4 +49,23 @@ bool is_tdx_enabled(void);
>> >> */
>> >> void tdx_test_success(void);
>> >>
>> >> +/**
>> >> + * Report an error with @error_code to userspace.
>> >> + *
>> >> + * Return value from tdg_vp_vmcall_report_fatal_error is ignored since execution
>> >> + * is not expected to continue beyond this point.
>> >> + */
>> >> +void tdx_test_fatal(uint64_t error_code);
>> >> +
>> >> +/**
>> >> + * Report an error with @error_code to userspace.
>> >> + *
>> >> + * @data_gpa may point to an optional shared guest memory holding the error
>> >> + * string.
>> >> + *
>> >> + * Return value from tdg_vp_vmcall_report_fatal_error is ignored since execution
>> >> + * is not expected to continue beyond this point.
>> >> + */
>> >> +void tdx_test_fatal_with_data(uint64_t error_code, uint64_t data_gpa);
>> > I found nowhere is using "data_gpa" as a gpa, even in patch 23, it's
>> > usage is to pass a line number ("tdx_test_fatal_with_data(ret, __LINE__)").
>> >
>> >
>>
>> This function tdx_test_fatal_with_data() is meant to provide a generic
>> interface for TDX tests to use TDG.VP.VMCALL<ReportFatalError>, and so
>> the parameters of tdx_test_fatal_with_data() generically allow error_code and
>> data_gpa to be specified.
>>
>> The tests just happen to use the data_gpa parameter to pass __LINE__ to
>> the host VMM, but other tests in future that use the
>> tdx_test_fatal_with_data() function in the TDX testing library could
>> actually pass a GPA through using data_gpa.
>>
>> >> #endif // SELFTEST_TDX_TEST_UTIL_H
>> >> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
>> >> index c2414523487a..b854c3aa34ff 100644
>> >> --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
>> >> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
>> >> @@ -1,8 +1,31 @@
>> >> // SPDX-License-Identifier: GPL-2.0-only
>> >>
>> >> +#include <string.h>
>> >> +
>> >> #include "tdx/tdcall.h"
>> >> #include "tdx/tdx.h"
>> >>
>> >> +void handle_userspace_tdg_vp_vmcall_exit(struct kvm_vcpu *vcpu)
>> >> +{
>> >> + struct kvm_tdx_vmcall *vmcall_info = &vcpu->run->tdx.u.vmcall;
>> >> + uint64_t vmcall_subfunction = vmcall_info->subfunction;
>> >> +
>> >> + switch (vmcall_subfunction) {
>> >> + case TDG_VP_VMCALL_REPORT_FATAL_ERROR:
>> >> + vcpu->run->exit_reason = KVM_EXIT_SYSTEM_EVENT;
>> >> + vcpu->run->system_event.ndata = 3;
>> >> + vcpu->run->system_event.data[0] =
>> >> + TDG_VP_VMCALL_REPORT_FATAL_ERROR;
>> >> + vcpu->run->system_event.data[1] = vmcall_info->in_r12;
>> >> + vcpu->run->system_event.data[2] = vmcall_info->in_r13;
>> >> + vmcall_info->status_code = 0;
>> >> + break;
>> >> + default:
>> >> + TEST_FAIL("TD VMCALL subfunction %lu is unsupported.\n",
>> >> + vmcall_subfunction);
>> >> + }
>> >> +}
>> >> +
>> >> uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
>> >> uint64_t write, uint64_t *data)
>> >> {
>> >> @@ -25,3 +48,19 @@ uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
>> >>
>> >> return ret;
>> >> }
>> >> +
>> >> +void tdg_vp_vmcall_report_fatal_error(uint64_t error_code, uint64_t data_gpa)
>> >> +{
>> >> + struct tdx_hypercall_args args;
>> >> +
>> >> + memset(&args, 0, sizeof(struct tdx_hypercall_args));
>> >> +
>> >> + if (data_gpa)
>> >> + error_code |= 0x8000000000000000;
>> >>
>> > So, why this error_code needs to set bit 63?
>> >
>> >
>>
>> The Intel GHCI Spec says in R12, bit 63 is set if the GPA is valid. As a
> But above "__LINE__" is obviously not a valid GPA.
>
> Do you think it's better to check "data_gpa" is with shared bit on and
> aligned in 4K before setting bit 63?
>

I read "valid" in the spec to mean that the value in R13 "should be
considered as useful" or "should be passed on to the host VMM via the
TDX module", and not so much as in "validated".

We could validate the data_gpa as you suggested, checking alignment and the
shared bit, but perhaps that could be done in a higher-level function that
calls tdg_vp_vmcall_report_fatal_error()?

If it helps, shall we rename "data_gpa" to "data" for this lower-level,
generic helper function and remove these two lines:

if (data_gpa)
error_code |= 0x8000000000000000;

A higher-level function could perhaps do the validation as you suggested
and then set bit 63.

Are you objecting to the use of R13 to hold extra test information, such
as __LINE__?

I feel that R13 is just another register that could be used to hold
error information, and in the case of this test, we can use it to send
__LINE__ to aid in debugging selftests. On the host side of the
selftest we can printf() :).

>> generic TDX testing library function, this check allows the user to use
>> tdg_vp_vmcall_report_fatal_error() with error_code and data_gpa and not
>> worry about setting bit 63 before calling
>> tdg_vp_vmcall_report_fatal_error(), though if the user set bit 63 before
>> that, there is no issue.
>>
>> >> + args.r11 = TDG_VP_VMCALL_REPORT_FATAL_ERROR;
>> >> + args.r12 = error_code;
>> >> + args.r13 = data_gpa;
>> >> +
>> >> + __tdx_hypercall(&args, 0);
>> >> +}
>>
>> >> <snip>
>>

2024-04-15 10:16:24

by Yan Zhao

Subject: Re: [RFC PATCH v5 09/29] KVM: selftests: TDX: Add report_fatal_error test

On Mon, Apr 15, 2024 at 08:05:49AM +0000, Ackerley Tng wrote:
> Yan Zhao <[email protected]> writes:
>
> > On Fri, Apr 12, 2024 at 04:56:36AM +0000, Ackerley Tng wrote:
> >> Yan Zhao <[email protected]> writes:
> >>
> >> > ...
> >> >> diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
> >> >> index b570b6d978ff..6d69921136bd 100644
> >> >> --- a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
> >> >> +++ b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
> >> >> @@ -49,4 +49,23 @@ bool is_tdx_enabled(void);
> >> >> */
> >> >> void tdx_test_success(void);
> >> >>
> >> >> +/**
> >> >> + * Report an error with @error_code to userspace.
> >> >> + *
> >> >> + * Return value from tdg_vp_vmcall_report_fatal_error is ignored since execution
> >> >> + * is not expected to continue beyond this point.
> >> >> + */
> >> >> +void tdx_test_fatal(uint64_t error_code);
> >> >> +
> >> >> +/**
> >> >> + * Report an error with @error_code to userspace.
> >> >> + *
> >> >> + * @data_gpa may point to an optional shared guest memory holding the error
> >> >> + * string.
> >> >> + *
> >> >> + * Return value from tdg_vp_vmcall_report_fatal_error is ignored since execution
> >> >> + * is not expected to continue beyond this point.
> >> >> + */
> >> >> +void tdx_test_fatal_with_data(uint64_t error_code, uint64_t data_gpa);
> >> > I found nowhere is using "data_gpa" as a gpa, even in patch 23, it's
> >> > usage is to pass a line number ("tdx_test_fatal_with_data(ret, __LINE__)").
> >> >
> >> >
> >>
> >> This function tdx_test_fatal_with_data() is meant to provide a generic
> >> interface for TDX tests to use TDG.VP.VMCALL<ReportFatalError>, and so
> >> the parameters of tdx_test_fatal_with_data() generically allow error_code and
> >> data_gpa to be specified.
> >>
> >> The tests just happen to use the data_gpa parameter to pass __LINE__ to
> >> the host VMM, but other tests in future that use the
> >> tdx_test_fatal_with_data() function in the TDX testing library could
> >> actually pass a GPA through using data_gpa.
> >>
> >> >> #endif // SELFTEST_TDX_TEST_UTIL_H
> >> >> diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> >> >> index c2414523487a..b854c3aa34ff 100644
> >> >> --- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> >> >> +++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
> >> >> @@ -1,8 +1,31 @@
> >> >> // SPDX-License-Identifier: GPL-2.0-only
> >> >>
> >> >> +#include <string.h>
> >> >> +
> >> >> #include "tdx/tdcall.h"
> >> >> #include "tdx/tdx.h"
> >> >>
> >> >> +void handle_userspace_tdg_vp_vmcall_exit(struct kvm_vcpu *vcpu)
> >> >> +{
> >> >> + struct kvm_tdx_vmcall *vmcall_info = &vcpu->run->tdx.u.vmcall;
> >> >> + uint64_t vmcall_subfunction = vmcall_info->subfunction;
> >> >> +
> >> >> + switch (vmcall_subfunction) {
> >> >> + case TDG_VP_VMCALL_REPORT_FATAL_ERROR:
> >> >> + vcpu->run->exit_reason = KVM_EXIT_SYSTEM_EVENT;
> >> >> + vcpu->run->system_event.ndata = 3;
> >> >> + vcpu->run->system_event.data[0] =
> >> >> + TDG_VP_VMCALL_REPORT_FATAL_ERROR;
> >> >> + vcpu->run->system_event.data[1] = vmcall_info->in_r12;
> >> >> + vcpu->run->system_event.data[2] = vmcall_info->in_r13;
> >> >> + vmcall_info->status_code = 0;
> >> >> + break;
> >> >> + default:
> >> >> + TEST_FAIL("TD VMCALL subfunction %lu is unsupported.\n",
> >> >> + vmcall_subfunction);
> >> >> + }
> >> >> +}
> >> >> +
> >> >> uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
> >> >> uint64_t write, uint64_t *data)
> >> >> {
> >> >> @@ -25,3 +48,19 @@ uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
> >> >>
> >> >> return ret;
> >> >> }
> >> >> +
> >> >> +void tdg_vp_vmcall_report_fatal_error(uint64_t error_code, uint64_t data_gpa)
> >> >> +{
> >> >> + struct tdx_hypercall_args args;
> >> >> +
> >> >> + memset(&args, 0, sizeof(struct tdx_hypercall_args));
> >> >> +
> >> >> + if (data_gpa)
> >> >> + error_code |= 0x8000000000000000;
> >> >>
> >> > So, why this error_code needs to set bit 63?
> >> >
> >> >
> >>
> >> The Intel GHCI Spec says in R12, bit 63 is set if the GPA is valid. As a
> > But above "__LINE__" is obviously not a valid GPA.
> >
> > Do you think it's better to check "data_gpa" is with shared bit on and
> > aligned in 4K before setting bit 63?
> >
>
> I read "valid" in the spec to mean that the value in R13 "should be
> considered as useful" or "should be passed on to the host VMM via the
> TDX module", and not so much as in "validated".
>
> We could validate the data_gpa as you suggested to check alignment and
> shared bit, but perhaps that could be a higher-level function that calls
> tdg_vp_vmcall_report_fatal_error()?
>
> If it helps, shall we rename "data_gpa" to "data" for this lower-level,
> generic helper function and remove these two lines
>
> if (data_gpa)
> error_code |= 0x8000000000000000;
>
> A higher-level function could perhaps do the validation as you suggested
> and then set bit 63.
This could be all right, but I'm not sure if it would be a burden for the
higher-level function to set bit 63, which is a GHCI detail.

What about adding another "data_gpa_valid" parameter and then testing
"data_gpa_valid" rather than "data_gpa" to decide whether to set bit 63?

>
> Are you objecting to the use of R13 to hold extra test information, such
> as __LINE__?
>
> I feel that R13 is just another register that could be used to hold
> error information, and in the case of this test, we can use it to send
> __LINE__ to aid in debugging selftests. On the host side of the
> selftest we can printf() :).
>
Hmm, I just feel it's confusing to use R13 as an error-code holder and to set
the gpa_valid bit at the same time.
Since it looks complicated to convert __LINE__ to a string in shared memory,
maybe it's ok to pass it in R13 :)

> >> generic TDX testing library function, this check allows the user to use
> >> tdg_vp_vmcall_report_fatal_error() with error_code and data_gpa and not
> >> worry about setting bit 63 before calling
> >> tdg_vp_vmcall_report_fatal_error(), though if the user set bit 63 before
> >> that, there is no issue.
> >>
> >> >> + args.r11 = TDG_VP_VMCALL_REPORT_FATAL_ERROR;
> >> >> + args.r12 = error_code;
> >> >> + args.r13 = data_gpa;
> >> >> +
> >> >> + __tdx_hypercall(&args, 0);
> >> >> +}
> >>
> >> >> <snip>
> >>

2024-04-16 18:53:48

by Sean Christopherson

Subject: Re: [RFC PATCH v5 09/29] KVM: selftests: TDX: Add report_fatal_error test

On Mon, Apr 15, 2024, Yan Zhao wrote:
> On Mon, Apr 15, 2024 at 08:05:49AM +0000, Ackerley Tng wrote:
> > >> The Intel GHCI Spec says in R12, bit 63 is set if the GPA is valid. As a
> > > But above "__LINE__" is obviously not a valid GPA.
> > >
> > > Do you think it's better to check "data_gpa" is with shared bit on and
> > > aligned in 4K before setting bit 63?
> > >
> >
> > I read "valid" in the spec to mean that the value in R13 "should be
> > considered as useful" or "should be passed on to the host VMM via the
> > TDX module", and not so much as in "validated".
> >
> > We could validate the data_gpa as you suggested to check alignment and
> > shared bit, but perhaps that could be a higher-level function that calls
> > tdg_vp_vmcall_report_fatal_error()?
> >
> > If it helps, shall we rename "data_gpa" to "data" for this lower-level,
> > generic helper function and remove these two lines
> >
> > if (data_gpa)
> > error_code |= 0x8000000000000000;
> >
> > A higher-level function could perhaps do the validation as you suggested
> > and then set bit 63.
> This could be all right. But I'm not sure if it would be a burden for
> higher-level function to set bit 63 which is of GHCI details.
>
> What about adding another "data_gpa_valid" parameter and then test
> "data_gpa_valid" rather than test "data_gpa" to set bit 63?

Who cares what the GHCI says about validity? The GHCI is a spec for getting
random guests to play nice with random hosts. Selftests own both, and the goal
of selftests is to test that KVM (and KVM's dependencies) adhere to their relevant
specs. And more importantly, KVM is NOT inheriting the GHCI ABI verbatim[*].

So except for the bits and bobs that *KVM* (or the TDX module) gets involved in,
just ignore the GHCI (or even deliberately abuse it). To put it differently, use
selftests to verify *KVM's* ABI and functionality.

As it pertains to this thread, while I haven't looked at any of this in detail,
I'm guessing that whether or not bit 63 is set is a complete "don't care", i.e.
KVM and the TDX Module should pass it through as-is.

[*] https://lore.kernel.org/all/[email protected]

2024-04-17 22:43:00

by Yan Zhao

Subject: Re: [RFC PATCH v5 09/29] KVM: selftests: TDX: Add report_fatal_error test

On Tue, Apr 16, 2024 at 11:50:19AM -0700, Sean Christopherson wrote:
> On Mon, Apr 15, 2024, Yan Zhao wrote:
> > On Mon, Apr 15, 2024 at 08:05:49AM +0000, Ackerley Tng wrote:
> > > >> The Intel GHCI Spec says in R12, bit 63 is set if the GPA is valid. As a
> > > > But above "__LINE__" is obviously not a valid GPA.
> > > >
> > > > Do you think it's better to check "data_gpa" is with shared bit on and
> > > > aligned in 4K before setting bit 63?
> > > >
> > >
> > > I read "valid" in the spec to mean that the value in R13 "should be
> > > considered as useful" or "should be passed on to the host VMM via the
> > > TDX module", and not so much as in "validated".
> > >
> > > We could validate the data_gpa as you suggested to check alignment and
> > > shared bit, but perhaps that could be a higher-level function that calls
> > > tdg_vp_vmcall_report_fatal_error()?
> > >
> > > If it helps, shall we rename "data_gpa" to "data" for this lower-level,
> > > generic helper function and remove these two lines
> > >
> > > if (data_gpa)
> > > error_code |= 0x8000000000000000;
> > >
> > > A higher-level function could perhaps do the validation as you suggested
> > > and then set bit 63.
> > This could be all right. But I'm not sure if it would be a burden for
> > higher-level function to set bit 63 which is of GHCI details.
> >
> > What about adding another "data_gpa_valid" parameter and then test
> > "data_gpa_valid" rather than test "data_gpa" to set bit 63?
>
> Who cares what the GHCI says about validity? The GHCI is a spec for getting
> random guests to play nice with random hosts. Selftests own both, and the goal
> of selftests is to test that KVM (and KVM's dependencies) adhere to their relevant
> specs. And more importantly, KVM is NOT inheriting the GHCI ABI verbatim[*].
>
> So except for the bits and bobs that *KVM* (or the TDX module) gets involved in,
> just ignore the GHCI (or even deliberately abuse it). To put it differently, use
> selftests verify *KVM's* ABI and functionality.
>
> As it pertains to this thread, while I haven't looked at any of this in detail,
> I'm guessing that whether or not bit 63 is set is a complete "don't care", i.e.
> KVM and the TDX Module should pass it through as-is.
>
> [*] https://lore.kernel.org/all/[email protected]
Ok. It makes sense for KVM_EXIT_TDX.
But what if the TDVMCALL is handled in TDX-specific code in the kernel in the
future? (Not possible?)
Should the guest set bits correctly according to the GHCI?



2024-04-22 21:24:11

by Sean Christopherson

Subject: Re: [RFC PATCH v5 09/29] KVM: selftests: TDX: Add report_fatal_error test

On Thu, Apr 18, 2024, Yan Zhao wrote:
> On Tue, Apr 16, 2024 at 11:50:19AM -0700, Sean Christopherson wrote:
> > On Mon, Apr 15, 2024, Yan Zhao wrote:
> > > On Mon, Apr 15, 2024 at 08:05:49AM +0000, Ackerley Tng wrote:
> > > > >> The Intel GHCI Spec says in R12, bit 63 is set if the GPA is valid. As a
> > > > > But above "__LINE__" is obviously not a valid GPA.
> > > > >
> > > > > Do you think it's better to check "data_gpa" is with shared bit on and
> > > > > aligned in 4K before setting bit 63?
> > > > >
> > > >
> > > > I read "valid" in the spec to mean that the value in R13 "should be
> > > > considered as useful" or "should be passed on to the host VMM via the
> > > > TDX module", and not so much as in "validated".
> > > >
> > > > We could validate the data_gpa as you suggested to check alignment and
> > > > shared bit, but perhaps that could be a higher-level function that calls
> > > > tdg_vp_vmcall_report_fatal_error()?
> > > >
> > > > If it helps, shall we rename "data_gpa" to "data" for this lower-level,
> > > > generic helper function and remove these two lines
> > > >
> > > > if (data_gpa)
> > > > error_code |= 0x8000000000000000;
> > > >
> > > > A higher-level function could perhaps do the validation as you suggested
> > > > and then set bit 63.
> > > This could be all right. But I'm not sure if it would be a burden for
> > > higher-level function to set bit 63 which is of GHCI details.
> > >
> > > What about adding another "data_gpa_valid" parameter and then test
> > > "data_gpa_valid" rather than test "data_gpa" to set bit 63?
> >
> > Who cares what the GHCI says about validity? The GHCI is a spec for getting
> > random guests to play nice with random hosts. Selftests own both, and the goal
> > of selftests is to test that KVM (and KVM's dependencies) adhere to their relevant
> > specs. And more importantly, KVM is NOT inheriting the GHCI ABI verbatim[*].
> >
> > So except for the bits and bobs that *KVM* (or the TDX module) gets involved in,
> > just ignore the GHCI (or even deliberately abuse it). To put it differently, use
> > selftests verify *KVM's* ABI and functionality.
> >
> > As it pertains to this thread, while I haven't looked at any of this in detail,
> > I'm guessing that whether or not bit 63 is set is a complete "don't care", i.e.
> > KVM and the TDX Module should pass it through as-is.
> >
> > [*] https://lore.kernel.org/all/[email protected]
> Ok. It makes sense to KVM_EXIT_TDX.
> But what if the TDVMCALL is handled in TDX specific code in kernel in future?
> (not possible?)

KVM will "handle" ReportFatalError, and will do so before this code lands[*], but
I *highly* doubt KVM will ever do anything but forward the information to userspace,
e.g. as KVM_SYSTEM_EVENT_CRASH with data[] filled in with the raw register information.

> Should guest set bits correctly according to GHCI?

No. Selftests exist first and foremost to verify KVM behavior, not to verify
firmware behavior. We can and should use selftests to verify that *KVM* doesn't
*violate* the GHCI, but that doesn't mean that selftests themselves can't ignore
and/or abuse the GHCI, especially since the GHCI definition for ReportFatalError
is frankly awful.

E.g. the GHCI prescribes actual behavior for R13, but then doesn't say *anything*
about what's in the data page. Why!?!?! If the format in the data page is
completely undefined, what's the point of restricting R13 to only be allowed to
hold a GPA?

And the wording is just as awful:

The VMM must validate that this GPA has the Shared bit set. In other words,
that a shared-mapping is used, and that this is a valid mapping for the TD.

I'm pretty sure it's just saying that the TDX module isn't going to verify the
operand, i.e. that the VMM needs to protect itself, but it would be so much
better to simply state "The TDX Module does not verify this GPA", because saying
the VMM "must" do something leads to pointless discussions like this one, where
we're debating over whether or not *our* VMM should inject an error into *our* guest.

Anyways, we should do what makes sense for selftests and ignore the stupidity of
the GHCI when doing so yields better code. If that means abusing R13, go for it.
If it's a sticking point for anyone, just use one of the "optional" registers.

Whatever we do, bury the host and guest side of selftests behind #defines or helpers
so that there are at most two pieces of code that care which register holds which
piece of information.

[*] https://lore.kernel.org/all/[email protected]

2024-06-05 18:38:33

by Verma, Vishal L

Subject: Re: [RFC PATCH v5 00/29] TDX KVM selftests

On Tue, 2023-12-12 at 12:46 -0800, Sagi Shahar wrote:
> Hello,
>
> This is v4 of the patch series for TDX selftests.
>
> It has been updated for Intel’s v17 of the TDX host patches which was
> proposed here:
> https://lore.kernel.org/all/[email protected]/
>
> The tree can be found at:
> https://github.com/googleprodkernel/linux-cc/tree/tdx-selftests-rfc-v5

Hello,

I wanted to check if there were any plans from Google to refresh this
series for the current TDX patches and the kvm-coco-queue baseline?

I'm setting up a CI system that the team is using to test updates to
the different TDX patch series; it currently runs the KVM Unit Tests
and kvm selftests, and we'd like to be able to add these three new
TDX tests to that as well.

I tried to take a quick shot at rebasing it, but ran into several
conflicts since kvm-coco-queue has in the meantime made changes e.g. in
tools/testing/selftests/kvm/lib/x86_64/processor.c vcpu_setup().

If you can help rebase this, Rick's MMU prep series might be a good
baseline to use:
https://lore.kernel.org/all/[email protected]/

This is also available in a tree at:
https://github.com/intel/tdx/tree/tdx_kvm_dev-2024-05-30

Thank you,
Vishal

>
> Changes from RFC v4:
>
> Added patch to propagate KVM_EXIT_MEMORY_FAULT to userspace.
>
> Minor tweaks to align the tests to the new TDX 1.5 spec such as changes
> in the expected values in TDG.VP.INFO.
>
> In RFCv5, TDX selftest code is organized into:
>
> + headers in tools/testing/selftests/kvm/include/x86_64/tdx/
> + common code in tools/testing/selftests/kvm/lib/x86_64/tdx/
> + selftests in tools/testing/selftests/kvm/x86_64/tdx_*
>
> Dependencies
>
> + Peter’s patches, which provide functions for the host to allocate
>   and track protected memory in the guest.
>   https://lore.kernel.org/all/[email protected]/
>
> Further work for this patch series/TODOs
>
> + Sean’s comments for the non-confidential UPM selftests patch series
>   at https://lore.kernel.org/lkml/[email protected]/T/#u apply
>   here as well
> + Add ucall support for TDX selftests
>
> I would also like to acknowledge the following people, who helped
> review or test patches in previous versions:
>
> + Sean Christopherson <[email protected]>
> + Zhenzhong Duan <[email protected]>
> + Peter Gonda <[email protected]>
> + Andrew Jones <[email protected]>
> + Maxim Levitsky <[email protected]>
> + Xiaoyao Li <[email protected]>
> + David Matlack <[email protected]>
> + Marc Orr <[email protected]>
> + Isaku Yamahata <[email protected]>
> + Maciej S. Szmigiero <[email protected]>
>
> Links to earlier patch series
>
> + RFC v1: https://lore.kernel.org/lkml/[email protected]/T/#u
> + RFC v2: https://lore.kernel.org/lkml/[email protected]/T/#u
> + RFC v3: https://lore.kernel.org/lkml/[email protected]/T/#u
> + RFC v4: https://lore.kernel.org/lkml/[email protected]/
>
> *** BLURB HERE ***
>
> Ackerley Tng (12):
>   KVM: selftests: Add function to allow one-to-one GVA to GPA mappings
>   KVM: selftests: Expose function that sets up sregs based on VM's mode
>   KVM: selftests: Store initial stack address in struct kvm_vcpu
>   KVM: selftests: Refactor steps in vCPU descriptor table initialization
>   KVM: selftests: TDX: Use KVM_TDX_CAPABILITIES to validate TDs'
>     attribute configuration
>   KVM: selftests: TDX: Update load_td_memory_region for VM memory backed
>     by guest memfd
>   KVM: selftests: Add functions to allow mapping as shared
>   KVM: selftests: Expose _vm_vaddr_alloc
>   KVM: selftests: TDX: Add support for TDG.MEM.PAGE.ACCEPT
>   KVM: selftests: TDX: Add support for TDG.VP.VEINFO.GET
>   KVM: selftests: TDX: Add TDX UPM selftest
>   KVM: selftests: TDX: Add TDX UPM selftests for implicit conversion
>
> Erdem Aktas (3):
>   KVM: selftests: Add helper functions to create TDX VMs
>   KVM: selftests: TDX: Add TDX lifecycle test
>   KVM: selftests: TDX: Adding test case for TDX port IO
>
> Roger Wang (1):
>   KVM: selftests: TDX: Add TDG.VP.INFO test
>
> Ryan Afranji (2):
>   KVM: selftests: TDX: Verify the behavior when host consumes a TD
>     private memory
>   KVM: selftests: TDX: Add shared memory test
>
> Sagi Shahar (11):
>   KVM: selftests: TDX: Add report_fatal_error test
>   KVM: selftests: TDX: Add basic TDX CPUID test
>   KVM: selftests: TDX: Add basic get_td_vmcall_info test
>   KVM: selftests: TDX: Add TDX IO writes test
>   KVM: selftests: TDX: Add TDX IO reads test
>   KVM: selftests: TDX: Add TDX MSR read/write tests
>   KVM: selftests: TDX: Add TDX HLT exit test
>   KVM: selftests: TDX: Add TDX MMIO reads test
>   KVM: selftests: TDX: Add TDX MMIO writes test
>   KVM: selftests: TDX: Add TDX CPUID TDVMCALL test
>   KVM: selftests: Propagate KVM_EXIT_MEMORY_FAULT to userspace
>
>  tools/testing/selftests/kvm/Makefile          |    8 +
>  .../selftests/kvm/include/kvm_util_base.h     |   30 +
>  .../selftests/kvm/include/x86_64/processor.h  |    4 +
>  .../kvm/include/x86_64/tdx/td_boot.h          |   82 +
>  .../kvm/include/x86_64/tdx/td_boot_asm.h      |   16 +
>  .../selftests/kvm/include/x86_64/tdx/tdcall.h |   59 +
>  .../selftests/kvm/include/x86_64/tdx/tdx.h    |   65 +
>  .../kvm/include/x86_64/tdx/tdx_util.h         |   19 +
>  .../kvm/include/x86_64/tdx/test_util.h        |  164 ++
>  tools/testing/selftests/kvm/lib/kvm_util.c    |  101 +-
>  .../selftests/kvm/lib/x86_64/processor.c      |   77 +-
>  .../selftests/kvm/lib/x86_64/tdx/td_boot.S    |  101 ++
>  .../selftests/kvm/lib/x86_64/tdx/tdcall.S     |  158 ++
>  .../selftests/kvm/lib/x86_64/tdx/tdx.c        |  262 ++++
>  .../selftests/kvm/lib/x86_64/tdx/tdx_util.c   |  558 +++++++
>  .../selftests/kvm/lib/x86_64/tdx/test_util.c  |  101 ++
>  .../kvm/x86_64/tdx_shared_mem_test.c          |  135 ++
>  .../selftests/kvm/x86_64/tdx_upm_test.c       |  469 ++++++
>  .../selftests/kvm/x86_64/tdx_vm_tests.c       | 1319 +++++++++++++++++
>  19 files changed, 3693 insertions(+), 35 deletions(-)
>  create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/td_boot.h
>  create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/td_boot_asm.h
>  create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
>  create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
>  create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h
>  create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
>  create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/td_boot.S
>  create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/tdcall.S
>  create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
>  create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
>  create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c
>  create mode 100644 tools/testing/selftests/kvm/x86_64/tdx_shared_mem_test.c
>  create mode 100644 tools/testing/selftests/kvm/x86_64/tdx_upm_test.c
>  create mode 100644 tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
>

2024-06-05 20:12:06

by Sagi Shahar

Subject: Re: [RFC PATCH v5 00/29] TDX KVM selftests

On Wed, Jun 5, 2024 at 1:38 PM Verma, Vishal L <[email protected]> wrote:
>
> On Tue, 2023-12-12 at 12:46 -0800, Sagi Shahar wrote:
> > Hello,
> >
> > This is v4 of the patch series for TDX selftests.
> >
> > It has been updated for Intel’s v17 of the TDX host patches which was
> > proposed here:
> > https://lore.kernel.org/all/[email protected]/
> >
> > The tree can be found at:
> > https://github.com/googleprodkernel/linux-cc/tree/tdx-selftests-rfc-v5
>
> Hello,
>
> I wanted to check if there were any plans from Google to refresh this
> series for the current TDX patches and the kvm-coco-queue baseline?
>
I'm going to work on it soon and was planning on using Isaku's V19 of
the TDX host patches.

> I'm setting up a CI system that the team is using to test updates to
> the different TDX patch series, and it currently runs the KVM Unit
> tests, and kvm selftests, and we'd like to be able to add these three
> new TDX tests to that as well.
>
> I tried to take a quick shot at rebasing it, but ran into several
> conflicts since kvm-coco-queue has in the meantime made changes e.g. in
> tools/testing/selftests/kvm/lib/x86_64/processor.c vcpu_setup().
>
> If you can help rebase this, Rick's MMU prep series might be a good
> baseline to use:
> https://lore.kernel.org/all/[email protected]/

This patch series only includes the basic TDX MMU changes and is
missing a lot of the TDX support. I'm not sure how it can be used as a
baseline without the rest of the TDX patches. Are there other patch
series, posted on top of this one, that provide the rest of the TDX
support?
>
> This is also available in a tree at:
> https://github.com/intel/tdx/tree/tdx_kvm_dev-2024-05-30
>
> Thank you,
> Vishal
>

2024-06-05 20:16:04

by Verma, Vishal L

Subject: Re: [RFC PATCH v5 00/29] TDX KVM selftests

On Wed, 2024-06-05 at 15:10 -0500, Sagi Shahar wrote:
> On Wed, Jun 5, 2024 at 1:38 PM Verma, Vishal L <[email protected]> wrote:
> >
> > On Tue, 2023-12-12 at 12:46 -0800, Sagi Shahar wrote:
> > > Hello,
> > >
> > > This is v4 of the patch series for TDX selftests.
> > >
> > > It has been updated for Intel’s v17 of the TDX host patches which was
> > > proposed here:
> > > https://lore.kernel.org/all/[email protected]/
> > >
> > > The tree can be found at:
> > > https://github.com/googleprodkernel/linux-cc/tree/tdx-selftests-rfc-v5
> >
> > Hello,
> >
> > I wanted to check if there were any plans from Google to refresh this
> > series for the current TDX patches and the kvm-coco-queue baseline?
> >
> I'm going to work on it soon and was planning on using Isaku's V19 of
> the TDX host patches

That's great, thank you!

>
> > I'm setting up a CI system that the team is using to test updates to
> > the different TDX patch series, and it currently runs the KVM Unit
> > tests, and kvm selftests, and we'd like to be able to add these three
> > new TDX tests to that as well.
> >
> > I tried to take a quick shot at rebasing it, but ran into several
> > conflicts since kvm-coco-queue has in the meantime made changes e.g. in
> > tools/testing/selftests/kvm/lib/x86_64/processor.c vcpu_setup().
> >
> > If you can help rebase this, Rick's MMU prep series might be a good
> > baseline to use:
> > https://lore.kernel.org/all/[email protected]/
>
> This patch series only includes the basic TDX MMU changes and is
> missing a lot of the TDX support. Not sure how this can be used as a
> baseline without the rest of the TDX patches. Are there other patch
> series that were posted based on this series which provides the rest
> of the TDX support?

Hm, you're right. I was looking more narrowly because of the
kvm-coco-queue conflicts, for some of which even v19 might be too old.
The MMU prep series uses a much more recent kvm-coco-queue baseline.

Rick, can we post a branch with /everything/ on this MMU prep baseline
for this selftest refresh?

> >
> > This is also available in a tree at:
> > https://github.com/intel/tdx/tree/tdx_kvm_dev-2024-05-30
> >
> > Thank you,
> > Vishal
> >

2024-06-05 20:18:59

by Verma, Vishal L

Subject: Re: [RFC PATCH v5 00/29] TDX KVM selftests

On Wed, 2024-06-05 at 20:15 +0000, Verma, Vishal L wrote:
> On Wed, 2024-06-05 at 15:10 -0500, Sagi Shahar wrote:
> > On Wed, Jun 5, 2024 at 1:38 PM Verma, Vishal L <[email protected]> wrote:
> > >
> > > On Tue, 2023-12-12 at 12:46 -0800, Sagi Shahar wrote:
> > > > Hello,
> > > >
> > > > This is v4 of the patch series for TDX selftests.
> > > >
> > > > It has been updated for Intel’s v17 of the TDX host patches which was
> > > > proposed here:
> > > > https://lore.kernel.org/all/[email protected]/
> > > >
> > > > The tree can be found at:
> > > > https://github.com/googleprodkernel/linux-cc/tree/tdx-selftests-rfc-v5
> > >
> > > Hello,
> > >
> > > I wanted to check if there were any plans from Google to refresh this
> > > series for the current TDX patches and the kvm-coco-queue baseline?
> > >
> > I'm going to work on it soon and was planning on using Isaku's V19 of
> > the TDX host patches
>
> That's great, thank you!
>
> >
> > > I'm setting up a CI system that the team is using to test updates to
> > > the different TDX patch series, and it currently runs the KVM Unit
> > > tests, and kvm selftests, and we'd like to be able to add these three
> > > new TDX tests to that as well.
> > >
> > > I tried to take a quick shot at rebasing it, but ran into several
> > > conflicts since kvm-coco-queue has in the meantime made changes e.g. in
> > > tools/testing/selftests/kvm/lib/x86_64/processor.c vcpu_setup().
> > >
> > > If you can help rebase this, Rick's MMU prep series might be a good
> > > baseline to use:
> > > https://lore.kernel.org/all/[email protected]/
> >
> > This patch series only includes the basic TDX MMU changes and is
> > missing a lot of the TDX support. Not sure how this can be used as a
> > baseline without the rest of the TDX patches. Are there other patch
> > series that were posted based on this series which provides the rest
> > of the TDX support?
>
> Hm you're right, I was looking more narrowly because of the kvm-coco-
> queue conflicts, for some of which even v19 might be too old. The MMU
> prep series uses a much more recent kvm-coco-queue baseline.
>
> Rick, can we post a branch with /everything/ on this MMU prep baseline
> for this selftest refresh?

Actually I see the branch below does contain everything, not just the
MMU prep patches. Sagi, is this fine for a baseline?

>
> > >
> > > This is also available in a tree at:
> > > https://github.com/intel/tdx/tree/tdx_kvm_dev-2024-05-30
> > >
> > > >

2024-06-05 20:42:48

by Sagi Shahar

Subject: Re: [RFC PATCH v5 00/29] TDX KVM selftests

On Wed, Jun 5, 2024 at 3:18 PM Verma, Vishal L <[email protected]> wrote:
>
> On Wed, 2024-06-05 at 20:15 +0000, Verma, Vishal L wrote:
> > On Wed, 2024-06-05 at 15:10 -0500, Sagi Shahar wrote:
> > > On Wed, Jun 5, 2024 at 1:38 PM Verma, Vishal L <[email protected]> wrote:
> > > >
> > > > On Tue, 2023-12-12 at 12:46 -0800, Sagi Shahar wrote:
> > > > > Hello,
> > > > >
> > > > > This is v4 of the patch series for TDX selftests.
> > > > >
> > > > > It has been updated for Intel’s v17 of the TDX host patches which was
> > > > > proposed here:
> > > > > https://lore.kernel.org/all/[email protected]/
> > > > >
> > > > > The tree can be found at:
> > > > > https://github.com/googleprodkernel/linux-cc/tree/tdx-selftests-rfc-v5
> > > >
> > > > Hello,
> > > >
> > > > I wanted to check if there were any plans from Google to refresh this
> > > > series for the current TDX patches and the kvm-coco-queue baseline?
> > > >
> > > I'm going to work on it soon and was planning on using Isaku's V19 of
> > > the TDX host patches
> >
> > That's great, thank you!
> >
> > >
> > > > I'm setting up a CI system that the team is using to test updates to
> > > > the different TDX patch series, and it currently runs the KVM Unit
> > > > tests, and kvm selftests, and we'd like to be able to add these three
> > > > new TDX tests to that as well.
> > > >
> > > > I tried to take a quick shot at rebasing it, but ran into several
> > > > conflicts since kvm-coco-queue has in the meantime made changes e.g. in
> > > > tools/testing/selftests/kvm/lib/x86_64/processor.c vcpu_setup().
> > > >
> > > > If you can help rebase this, Rick's MMU prep series might be a good
> > > > baseline to use:
> > > > https://lore.kernel.org/all/[email protected]/
> > >
> > > This patch series only includes the basic TDX MMU changes and is
> > > missing a lot of the TDX support. Not sure how this can be used as a
> > > baseline without the rest of the TDX patches. Are there other patch
> > > series that were posted based on this series which provides the rest
> > > of the TDX support?
> >
> > Hm you're right, I was looking more narrowly because of the kvm-coco-
> > queue conflicts, for some of which even v19 might be too old. The MMU
> > prep series uses a much more recent kvm-coco-queue baseline.
> >
> > Rick, can we post a branch with /everything/ on this MMU prep baseline
> > for this selftest refresh?
>
> Actually I see the branch below does contain everything, not just the
> MMU prep patches. Sagi, is this fine for a baseline?
>
Maybe for internal development, but I don't think I can post an
upstream patchset based on an internal Intel development branch.
Do you know if there's a plan to post a patch series based on that branch soon?
> >
> > > >
> > > > This is also available in a tree at:
> > > > https://github.com/intel/tdx/tree/tdx_kvm_dev-2024-05-30
> > > >
> > > > >

2024-06-05 20:57:07

by Edgecombe, Rick P

Subject: Re: [RFC PATCH v5 00/29] TDX KVM selftests

On Wed, 2024-06-05 at 15:42 -0500, Sagi Shahar wrote:
> > > Hm you're right, I was looking more narrowly because of the kvm-coco-
> > > queue conflicts, for some of which even v19 might be too old. The MMU
> > > prep series uses a much more recent kvm-coco-queue baseline.
> > >
> > > Rick, can we post a branch with /everything/ on this MMU prep baseline
> > > for this selftest refresh?
> >
> > Actually I see the branch below does contain everything, not just the
> > MMU prep patches. Sagi, is this fine for a baseline?
> >
> Maybe for internal development but I don't think I can post an
> upstream patchset based on an internal Intel development branch.
> Do you know if there's a plan to post a patch series based on that branch
> soon?

We don't currently have plans to post a whole ~130 patch series. Instead we plan
to post subsections out of the series as they slowly move into a maintainer
branch.

We are trying to use the selftests as part of the development of the base TDX
series. So we need to be able to run them on development branches to catch
regressions and such. For this purpose, we wouldn't need updates to be posted to
the mailing list. It probably needs either some sort of co-development, or
otherwise we will need to maintain an internal fork of the selftests.

We also need to add some specific tests that can cover gaps in our current
testing. Probably we could contribute those back to the series.

What do you think?

2024-06-05 21:41:46

by Sagi Shahar

Subject: Re: [RFC PATCH v5 00/29] TDX KVM selftests

On Wed, Jun 5, 2024 at 3:56 PM Edgecombe, Rick P
<[email protected]> wrote:
>
> On Wed, 2024-06-05 at 15:42 -0500, Sagi Shahar wrote:
> > > > Hm you're right, I was looking more narrowly because of the kvm-coco-
> > > > queue conflicts, for some of which even v19 might be too old. The MMU
> > > > prep series uses a much more recent kvm-coco-queue baseline.
> > > >
> > > > Rick, can we post a branch with /everything/ on this MMU prep baseline
> > > > for this selftest refresh?
> > >
> > > Actually I see the branch below does contain everything, not just the
> > > MMU prep patches. Sagi, is this fine for a baseline?
> > >
> > Maybe for internal development but I don't think I can post an
> > upstream patchset based on an internal Intel development branch.
> > Do you know if there's a plan to post a patch series based on that branch
> > soon?
>
> We don't currently have plans to post a whole ~130 patch series. Instead we plan
> to post subsections out of the series as they slowly move into a maintainer
> branch.

So this means that we won't be able to post an updated version of the
selftests for a while, unless we lock it to the V19 patchset, which is
based on v6.8-rc5.
Do you have an estimate on when the TDX patches will get to the point
where they could support the basic lifecycle selftest?
>
> We are trying to use the selftests as part of the development of the base TDX
> base series. So we need to be able to run them on development branches to catch
> regressions and such. For this purpose, we wouldn't need updates to be posted to
> the mailing list. It probably needs either some sort of co-development, or
> otherwise we will need to maintain an internal fork of the selftests.
>
> We also need to add some specific tests that can cover gaps in our current
> testing. Probably we could contribute those back to the series.
>
> What do you think?

I will take a look at rebasing the selftests on top of the Intel
development branch and I can post it on our github branch. We can talk
about co-development offline. We already have some code that was
suggested by Isaku as part of our tests.

2024-06-05 21:47:31

by Edgecombe, Rick P

Subject: Re: [RFC PATCH v5 00/29] TDX KVM selftests

On Wed, 2024-06-05 at 16:34 -0500, Sagi Shahar wrote:
> > We don't currently have plans to post a whole ~130 patch series. Instead we
> > plan
> > to post subsections out of the series as they slowly move into a maintainer
> > branch.
>
> So this means that we won't be able to post an updated version of the
> selftests for a while unless we lock it to the V19 patchset which is
> based on v6.8-rc5
> Do you have an estimate on when the TDX patches get to the point where
> they could support the basic lifecycle selftest?

I don't understand. The MMU prep series postings come with a full branch on
GitHub that can boot a TD. What is different if we post the other commits as
patches vs just linking to them in github? The selftests won't be upstreamed
ahead of the base TDX support anyway, right?

> >
> > We are trying to use the selftests as part of the development of the base
> > TDX
> > base series. So we need to be able to run them on development branches to
> > catch
> > regressions and such. For this purpose, we wouldn't need updates to be
> > posted to
> > the mailing list. It probably needs either some sort of co-development, or
> > otherwise we will need to maintain an internal fork of the selftests.
> >
> > We also need to add some specific tests that can cover gaps in our current
> > testing. Probably we could contribute those back to the series.
> >
> > What do you think?
>
> I will take a look at rebasing the selftests on top of the Intel
> development branch and I can post it on our github branch. We can talk
> about co-development offline. We already have some code that was
> suggested by Isaku as part of our tests.

That would be great, thanks.