2023-07-25 22:18:03

by Ryan Afranji

Subject: [PATCH v4 00/28] TDX KVM selftests

Hello,

This is v4 of the patch series for TDX selftests.

It has been updated for Intel’s v14 of the TDX host patches, which were
proposed here:
https://lore.kernel.org/lkml/[email protected]/

The tree can be found at:
https://github.com/googleprodkernel/linux-cc/tree/tdx-selftests-rfc-v4

Changes from RFC v3:

In v14, TDX can only run with UPM enabled, so the necessary changes were
made to handle that.

td_vcpu_run() was added to handle TdVmCalls that are now handled in
userspace.

The comments under the patch "KVM: selftests: Require GCC to realign
stacks on function entry" were addressed with the following patch:
https://lore.kernel.org/lkml/Y%2FfHLdvKHlK6D%[email protected]/T/

Other minor tweaks were also made to integrate the selftest
infrastructure with v14.

In RFCv4, TDX selftest code is organized into:

+ headers in tools/testing/selftests/kvm/include/x86_64/tdx/
+ common code in tools/testing/selftests/kvm/lib/x86_64/tdx/
+ selftests in tools/testing/selftests/kvm/x86_64/tdx_*

Dependencies

+ Peter’s patches, which provide functions for the host to allocate
and track protected memory in the
guest. https://lore.kernel.org/lkml/[email protected]/T/

Further work for this patch series/TODOs

+ Sean’s comments for the non-confidential UPM selftests patch series
at https://lore.kernel.org/lkml/[email protected]/T/#u apply
here as well
+ Add ucall support for TDX selftests

I would also like to acknowledge the following people, who helped
review or test patches in RFCv1, RFCv2, and RFCv3:

+ Sean Christopherson <[email protected]>
+ Zhenzhong Duan <[email protected]>
+ Peter Gonda <[email protected]>
+ Andrew Jones <[email protected]>
+ Maxim Levitsky <[email protected]>
+ Xiaoyao Li <[email protected]>
+ David Matlack <[email protected]>
+ Marc Orr <[email protected]>
+ Isaku Yamahata <[email protected]>
+ Maciej S. Szmigiero <[email protected]>

Links to earlier patch series

+ RFC v1: https://lore.kernel.org/lkml/[email protected]/T/#u
+ RFC v2: https://lore.kernel.org/lkml/[email protected]/T/#u
+ RFC v3: https://lore.kernel.org/lkml/[email protected]/T/#u

Ackerley Tng (12):
KVM: selftests: Add function to allow one-to-one GVA to GPA mappings
KVM: selftests: Expose function that sets up sregs based on VM's mode
KVM: selftests: Store initial stack address in struct kvm_vcpu
KVM: selftests: Refactor steps in vCPU descriptor table initialization
KVM: selftests: TDX: Use KVM_TDX_CAPABILITIES to validate TDs'
attribute configuration
KVM: selftests: TDX: Update load_td_memory_region for VM memory backed
by guest memfd
KVM: selftests: Add functions to allow mapping as shared
KVM: selftests: Expose _vm_vaddr_alloc
KVM: selftests: TDX: Add support for TDG.MEM.PAGE.ACCEPT
KVM: selftests: TDX: Add support for TDG.VP.VEINFO.GET
KVM: selftests: TDX: Add TDX UPM selftest
KVM: selftests: TDX: Add TDX UPM selftests for implicit conversion

Erdem Aktas (3):
KVM: selftests: Add helper functions to create TDX VMs
KVM: selftests: TDX: Add TDX lifecycle test
KVM: selftests: TDX: Adding test case for TDX port IO

Roger Wang (1):
KVM: selftests: TDX: Add TDG.VP.INFO test

Ryan Afranji (2):
KVM: selftests: TDX: Verify the behavior when host consumes a TD
private memory
KVM: selftests: TDX: Add shared memory test

Sagi Shahar (10):
KVM: selftests: TDX: Add report_fatal_error test
KVM: selftests: TDX: Add basic TDX CPUID test
KVM: selftests: TDX: Add basic get_td_vmcall_info test
KVM: selftests: TDX: Add TDX IO writes test
KVM: selftests: TDX: Add TDX IO reads test
KVM: selftests: TDX: Add TDX MSR read/write tests
KVM: selftests: TDX: Add TDX HLT exit test
KVM: selftests: TDX: Add TDX MMIO reads test
KVM: selftests: TDX: Add TDX MMIO writes test
KVM: selftests: TDX: Add TDX CPUID TDVMCALL test

tools/testing/selftests/kvm/Makefile | 8 +
.../selftests/kvm/include/kvm_util_base.h | 35 +
.../selftests/kvm/include/x86_64/processor.h | 4 +
.../kvm/include/x86_64/tdx/td_boot.h | 82 +
.../kvm/include/x86_64/tdx/td_boot_asm.h | 16 +
.../selftests/kvm/include/x86_64/tdx/tdcall.h | 59 +
.../selftests/kvm/include/x86_64/tdx/tdx.h | 65 +
.../kvm/include/x86_64/tdx/tdx_util.h | 19 +
.../kvm/include/x86_64/tdx/test_util.h | 164 ++
tools/testing/selftests/kvm/lib/kvm_util.c | 115 +-
.../selftests/kvm/lib/x86_64/processor.c | 77 +-
.../selftests/kvm/lib/x86_64/tdx/td_boot.S | 101 ++
.../selftests/kvm/lib/x86_64/tdx/tdcall.S | 158 ++
.../selftests/kvm/lib/x86_64/tdx/tdx.c | 262 ++++
.../selftests/kvm/lib/x86_64/tdx/tdx_util.c | 565 +++++++
.../selftests/kvm/lib/x86_64/tdx/test_util.c | 101 ++
.../kvm/x86_64/tdx_shared_mem_test.c | 134 ++
.../selftests/kvm/x86_64/tdx_upm_test.c | 469 ++++++
.../selftests/kvm/x86_64/tdx_vm_tests.c | 1322 +++++++++++++++++
19 files changed, 3730 insertions(+), 26 deletions(-)
create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/td_boot.h
create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/td_boot_asm.h
create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h
create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/td_boot.S
create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/tdcall.S
create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c
create mode 100644 tools/testing/selftests/kvm/x86_64/tdx_shared_mem_test.c
create mode 100644 tools/testing/selftests/kvm/x86_64/tdx_upm_test.c
create mode 100644 tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c

--
2.41.0.487.g6d72f3e995-goog



2023-07-25 22:18:52

by Ryan Afranji

Subject: [PATCH v4 17/28] KVM: selftests: TDX: Add TDX MMIO reads test

From: Sagi Shahar <[email protected]>

The test verifies MMIO reads of various sizes from the host to the guest.

Signed-off-by: Sagi Shahar <[email protected]>
Signed-off-by: Ackerley Tng <[email protected]>
Change-Id: I64fc74b41b5efb14be8e098b95b6bd66ab8e52d7
Signed-off-by: Ryan Afranji <[email protected]>
---
.../selftests/kvm/include/x86_64/tdx/tdcall.h | 2 +
.../selftests/kvm/include/x86_64/tdx/tdx.h | 3 +
.../kvm/include/x86_64/tdx/test_util.h | 23 +++++
.../selftests/kvm/lib/x86_64/tdx/tdx.c | 19 ++++
.../selftests/kvm/x86_64/tdx_vm_tests.c | 87 +++++++++++++++++++
5 files changed, 134 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
index b5e94b7c48fa..95fcdbd8404e 100644
--- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
@@ -9,6 +9,8 @@

#define TDG_VP_VMCALL_INSTRUCTION_IO_READ 0
#define TDG_VP_VMCALL_INSTRUCTION_IO_WRITE 1
+#define TDG_VP_VMCALL_VE_REQUEST_MMIO_READ 0
+#define TDG_VP_VMCALL_VE_REQUEST_MMIO_WRITE 1

#define TDG_VP_VMCALL_SUCCESS 0x0000000000000000
#define TDG_VP_VMCALL_INVALID_OPERAND 0x8000000000000000
diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
index b18e39d20498..13ce60df5684 100644
--- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
@@ -12,6 +12,7 @@
#define TDG_VP_VMCALL_INSTRUCTION_IO 30
#define TDG_VP_VMCALL_INSTRUCTION_RDMSR 31
#define TDG_VP_VMCALL_INSTRUCTION_WRMSR 32
+#define TDG_VP_VMCALL_VE_REQUEST_MMIO 48

void handle_userspace_tdg_vp_vmcall_exit(struct kvm_vcpu *vcpu);
uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
@@ -22,5 +23,7 @@ uint64_t tdg_vp_vmcall_get_td_vmcall_info(uint64_t *r11, uint64_t *r12,
uint64_t tdg_vp_vmcall_instruction_rdmsr(uint64_t index, uint64_t *ret_value);
uint64_t tdg_vp_vmcall_instruction_wrmsr(uint64_t index, uint64_t value);
uint64_t tdg_vp_vmcall_instruction_hlt(uint64_t interrupt_blocked_flag);
+uint64_t tdg_vp_vmcall_ve_request_mmio_read(uint64_t address, uint64_t size,
+ uint64_t *data_out);

#endif // SELFTEST_TDX_TDX_H
diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
index 8a9b6a1bec3e..af412b764604 100644
--- a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
@@ -35,6 +35,29 @@
(VCPU)->run->io.direction); \
} while (0)

+
+/**
+ * Assert that the guest performed some MMIO operation via TDG.VP.VMCALL
+ * <#VE.RequestMMIO>.
+ */
+#define TDX_TEST_ASSERT_MMIO(VCPU, ADDR, SIZE, DIR) \
+ do { \
+ TEST_ASSERT((VCPU)->run->exit_reason == KVM_EXIT_MMIO, \
+ "Got exit_reason other than KVM_EXIT_MMIO: %u (%s)\n", \
+ (VCPU)->run->exit_reason, \
+ exit_reason_str((VCPU)->run->exit_reason)); \
+ \
+ TEST_ASSERT(((VCPU)->run->exit_reason == KVM_EXIT_MMIO) && \
+ ((VCPU)->run->mmio.phys_addr == (ADDR)) && \
+ ((VCPU)->run->mmio.len == (SIZE)) && \
+ ((VCPU)->run->mmio.is_write == (DIR)), \
+ "Got unexpected MMIO exit values: %u (%s) %llu %d %d\n", \
+ (VCPU)->run->exit_reason, \
+ exit_reason_str((VCPU)->run->exit_reason), \
+ (VCPU)->run->mmio.phys_addr, (VCPU)->run->mmio.len, \
+ (VCPU)->run->mmio.is_write); \
+ } while (0)
+
/**
* Check and report if there was some failure in the guest, either an exception
* like a triple fault, or if a tdx_test_fatal() was hit.
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
index 9485bafedc38..b19f07ebc0e7 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
@@ -124,3 +124,22 @@ uint64_t tdg_vp_vmcall_instruction_hlt(uint64_t interrupt_blocked_flag)

return __tdx_hypercall(&args, 0);
}
+
+uint64_t tdg_vp_vmcall_ve_request_mmio_read(uint64_t address, uint64_t size,
+ uint64_t *data_out)
+{
+ uint64_t ret;
+ struct tdx_hypercall_args args = {
+ .r11 = TDG_VP_VMCALL_VE_REQUEST_MMIO,
+ .r12 = size,
+ .r13 = TDG_VP_VMCALL_VE_REQUEST_MMIO_READ,
+ .r14 = address,
+ };
+
+ ret = __tdx_hypercall(&args, TDX_HCALL_HAS_OUTPUT);
+
+ if (data_out)
+ *data_out = args.r11;
+
+ return ret;
+}
diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
index f77caf262f84..88cb219ae9b4 100644
--- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
+++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
@@ -799,6 +799,92 @@ void verify_guest_hlt(void)
_verify_guest_hlt(0);
}

+/* Pick any address that was not mapped into the guest to test MMIO */
+#define TDX_MMIO_TEST_ADDR 0x200000000
+
+void guest_mmio_reads(void)
+{
+ uint64_t data;
+ uint64_t ret;
+
+ ret = tdg_vp_vmcall_ve_request_mmio_read(TDX_MMIO_TEST_ADDR, 1, &data);
+ if (ret)
+ tdx_test_fatal(ret);
+ if (data != 0x12)
+ tdx_test_fatal(1);
+
+ ret = tdg_vp_vmcall_ve_request_mmio_read(TDX_MMIO_TEST_ADDR, 2, &data);
+ if (ret)
+ tdx_test_fatal(ret);
+ if (data != 0x1234)
+ tdx_test_fatal(2);
+
+ ret = tdg_vp_vmcall_ve_request_mmio_read(TDX_MMIO_TEST_ADDR, 4, &data);
+ if (ret)
+ tdx_test_fatal(ret);
+ if (data != 0x12345678)
+ tdx_test_fatal(4);
+
+ ret = tdg_vp_vmcall_ve_request_mmio_read(TDX_MMIO_TEST_ADDR, 8, &data);
+ if (ret)
+ tdx_test_fatal(ret);
+ if (data != 0x1234567890ABCDEF)
+ tdx_test_fatal(8);
+
+ /* Read an invalid number of bytes. */
+ ret = tdg_vp_vmcall_ve_request_mmio_read(TDX_MMIO_TEST_ADDR, 10, &data);
+ if (ret)
+ tdx_test_fatal(ret);
+
+ tdx_test_success();
+}
+
+/*
+ * Verifies guest MMIO reads.
+ */
+void verify_mmio_reads(void)
+{
+ struct kvm_vm *vm;
+ struct kvm_vcpu *vcpu;
+
+ vm = td_create();
+ td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
+ vcpu = td_vcpu_add(vm, 0, guest_mmio_reads);
+ td_finalize(vm);
+
+ printf("Verifying TD MMIO reads:\n");
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_MMIO(vcpu, TDX_MMIO_TEST_ADDR, 1, TDG_VP_VMCALL_VE_REQUEST_MMIO_READ);
+ *(uint8_t *)vcpu->run->mmio.data = 0x12;
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_MMIO(vcpu, TDX_MMIO_TEST_ADDR, 2, TDG_VP_VMCALL_VE_REQUEST_MMIO_READ);
+ *(uint16_t *)vcpu->run->mmio.data = 0x1234;
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_MMIO(vcpu, TDX_MMIO_TEST_ADDR, 4, TDG_VP_VMCALL_VE_REQUEST_MMIO_READ);
+ *(uint32_t *)vcpu->run->mmio.data = 0x12345678;
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_MMIO(vcpu, TDX_MMIO_TEST_ADDR, 8, TDG_VP_VMCALL_VE_REQUEST_MMIO_READ);
+ *(uint64_t *)vcpu->run->mmio.data = 0x1234567890ABCDEF;
+
+ td_vcpu_run(vcpu);
+ ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_SYSTEM_EVENT);
+ ASSERT_EQ(vcpu->run->system_event.data[1], TDG_VP_VMCALL_INVALID_OPERAND);
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_ASSERT_SUCCESS(vcpu);
+
+ kvm_vm_free(vm);
+ printf("\t ... PASSED\n");
+}
+
int main(int argc, char **argv)
{
setbuf(stdout, NULL);
@@ -818,6 +904,7 @@ int main(int argc, char **argv)
run_in_new_process(&verify_guest_msr_writes);
run_in_new_process(&verify_guest_msr_reads);
run_in_new_process(&verify_guest_hlt);
+ run_in_new_process(&verify_mmio_reads);

return 0;
}
--
2.41.0.487.g6d72f3e995-goog


2023-07-25 22:19:04

by Ryan Afranji

Subject: [PATCH v4 23/28] KVM: selftests: TDX: Add shared memory test

Adds a test that sets up shared memory between the host and guest.

Signed-off-by: Ryan Afranji <[email protected]>
Signed-off-by: Sagi Shahar <[email protected]>
Signed-off-by: Ackerley Tng <[email protected]>
Change-Id: I95b15afb84790c046d4af2a58b03e57b5ea6439b
---
tools/testing/selftests/kvm/Makefile | 1 +
.../selftests/kvm/include/x86_64/tdx/tdx.h | 2 +
.../kvm/include/x86_64/tdx/tdx_util.h | 2 +
.../selftests/kvm/lib/x86_64/tdx/tdx.c | 26 ++++
.../selftests/kvm/lib/x86_64/tdx/tdx_util.c | 25 ++++
.../kvm/x86_64/tdx_shared_mem_test.c | 134 ++++++++++++++++++
6 files changed, 190 insertions(+)
create mode 100644 tools/testing/selftests/kvm/x86_64/tdx_shared_mem_test.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 5cdced288025..cb2aaa7820c3 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -140,6 +140,7 @@ TEST_GEN_PROGS_x86_64 += steal_time
TEST_GEN_PROGS_x86_64 += kvm_binary_stats_test
TEST_GEN_PROGS_x86_64 += system_counter_offset_test
TEST_GEN_PROGS_x86_64 += x86_64/tdx_vm_tests
+TEST_GEN_PROGS_x86_64 += x86_64/tdx_shared_mem_test

# Compiled outputs used by test targets
TEST_GEN_PROGS_EXTENDED_x86_64 += x86_64/nx_huge_pages_test
diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
index 6b176de1e795..db4cc62abb5d 100644
--- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
@@ -8,6 +8,7 @@
#define TDG_VP_INFO 1

#define TDG_VP_VMCALL_GET_TD_VM_CALL_INFO 0x10000
+#define TDG_VP_VMCALL_MAP_GPA 0x10001
#define TDG_VP_VMCALL_REPORT_FATAL_ERROR 0x10003

#define TDG_VP_VMCALL_INSTRUCTION_CPUID 10
@@ -36,5 +37,6 @@ uint64_t tdg_vp_vmcall_instruction_cpuid(uint32_t eax, uint32_t ecx,
uint64_t tdg_vp_info(uint64_t *rcx, uint64_t *rdx,
uint64_t *r8, uint64_t *r9,
uint64_t *r10, uint64_t *r11);
+uint64_t tdg_vp_vmcall_map_gpa(uint64_t address, uint64_t size, uint64_t *data_out);

#endif // SELFTEST_TDX_TDX_H
diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h
index 32dd6b8fda46..3e850ecb85a6 100644
--- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h
@@ -13,5 +13,7 @@ void td_initialize(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
uint64_t attributes);
void td_finalize(struct kvm_vm *vm);
void td_vcpu_run(struct kvm_vcpu *vcpu);
+void handle_memory_conversion(struct kvm_vm *vm, uint64_t gpa, uint64_t size,
+ bool shared_to_private);

#endif // SELFTESTS_TDX_KVM_UTIL_H
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
index bcd9cceb3372..061a5c0bef34 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
@@ -4,9 +4,11 @@

#include "tdx/tdcall.h"
#include "tdx/tdx.h"
+#include "tdx/tdx_util.h"

void handle_userspace_tdg_vp_vmcall_exit(struct kvm_vcpu *vcpu)
{
+ struct kvm_vm *vm = vcpu->vm;
struct kvm_tdx_vmcall *vmcall_info = &vcpu->run->tdx.u.vmcall;
uint64_t vmcall_subfunction = vmcall_info->subfunction;

@@ -20,6 +22,14 @@ void handle_userspace_tdg_vp_vmcall_exit(struct kvm_vcpu *vcpu)
vcpu->run->system_event.data[2] = vmcall_info->in_r13;
vmcall_info->status_code = 0;
break;
+ case TDG_VP_VMCALL_MAP_GPA: {
+ uint64_t gpa = vmcall_info->in_r12 & ~vm->arch.s_bit;
+ bool shared_to_private = !(vm->arch.s_bit &
+ vmcall_info->in_r12);
+ handle_memory_conversion(vm, gpa, vmcall_info->in_r13,
+ shared_to_private);
+ vmcall_info->status_code = 0;
+ break;
+ }
default:
TEST_FAIL("TD VMCALL subfunction %lu is unsupported.\n",
vmcall_subfunction);
@@ -210,3 +220,19 @@ uint64_t tdg_vp_info(uint64_t *rcx, uint64_t *rdx,

return ret;
}
+
+uint64_t tdg_vp_vmcall_map_gpa(uint64_t address, uint64_t size, uint64_t *data_out)
+{
+ uint64_t ret;
+ struct tdx_hypercall_args args = {
+ .r11 = TDG_VP_VMCALL_MAP_GPA,
+ .r12 = address,
+ .r13 = size
+ };
+
+ ret = __tdx_hypercall(&args, TDX_HCALL_HAS_OUTPUT);
+
+ if (data_out)
+ *data_out = args.r11;
+ return ret;
+}
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
index c8591480b412..453ede061347 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
@@ -538,3 +538,28 @@ void td_vcpu_run(struct kvm_vcpu *vcpu)
handle_userspace_tdg_vp_vmcall_exit(vcpu);
}
}
+
+/**
+ * Handle conversion of the memory range of @size bytes starting at @gpa for
+ * @vm. Set @shared_to_private to true for shared-to-private conversions and
+ * false otherwise.
+ *
+ * Since this is just for selftests, we will just keep both pieces of backing
+ * memory allocated and not deallocate/allocate memory; we'll just do the
+ * minimum of calling KVM_MEMORY_ENCRYPT_REG_REGION and
+ * KVM_MEMORY_ENCRYPT_UNREG_REGION.
+ */
+void handle_memory_conversion(struct kvm_vm *vm, uint64_t gpa, uint64_t size,
+ bool shared_to_private)
+{
+ struct kvm_memory_attributes range;
+
+ range.address = gpa;
+ range.size = size;
+ range.attributes = shared_to_private ? KVM_MEMORY_ATTRIBUTE_PRIVATE : 0;
+ range.flags = 0;
+
+ printf("\t ... calling KVM_SET_MEMORY_ATTRIBUTES ioctl with gpa=%#lx, size=%#lx, attributes=%#llx\n", gpa, size, range.attributes);
+
+ vm_ioctl(vm, KVM_SET_MEMORY_ATTRIBUTES, &range);
+}
diff --git a/tools/testing/selftests/kvm/x86_64/tdx_shared_mem_test.c b/tools/testing/selftests/kvm/x86_64/tdx_shared_mem_test.c
new file mode 100644
index 000000000000..6ebaddba0e11
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/tdx_shared_mem_test.c
@@ -0,0 +1,134 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#include <linux/kvm.h>
+#include <stdint.h>
+
+#include "kvm_util_base.h"
+#include "processor.h"
+#include "tdx/tdcall.h"
+#include "tdx/tdx.h"
+#include "tdx/tdx_util.h"
+#include "tdx/test_util.h"
+#include "test_util.h"
+
+#define TDX_SHARED_MEM_TEST_PRIVATE_GVA (0x80000000)
+#define TDX_SHARED_MEM_TEST_VADDR_SHARED_MASK BIT_ULL(30)
+#define TDX_SHARED_MEM_TEST_SHARED_GVA \
+ (TDX_SHARED_MEM_TEST_PRIVATE_GVA | \
+ TDX_SHARED_MEM_TEST_VADDR_SHARED_MASK)
+
+#define TDX_SHARED_MEM_TEST_GUEST_WRITE_VALUE (0xcafecafe)
+#define TDX_SHARED_MEM_TEST_HOST_WRITE_VALUE (0xabcdabcd)
+
+#define TDX_SHARED_MEM_TEST_INFO_PORT 0x87
+
+/*
+ * Shared variables between guest and host
+ */
+static uint64_t test_mem_private_gpa;
+static uint64_t test_mem_shared_gpa;
+
+void guest_shared_mem(void)
+{
+ uint32_t *test_mem_shared_gva =
+ (uint32_t *)TDX_SHARED_MEM_TEST_SHARED_GVA;
+
+ uint64_t placeholder;
+ uint64_t ret;
+
+ /* Map gpa as shared */
+ ret = tdg_vp_vmcall_map_gpa(test_mem_shared_gpa, PAGE_SIZE,
+ &placeholder);
+ if (ret)
+ tdx_test_fatal_with_data(ret, __LINE__);
+
+ *test_mem_shared_gva = TDX_SHARED_MEM_TEST_GUEST_WRITE_VALUE;
+
+ /* Exit so host can read shared value */
+ ret = tdg_vp_vmcall_instruction_io(TDX_SHARED_MEM_TEST_INFO_PORT, 4,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE,
+ &placeholder);
+ if (ret)
+ tdx_test_fatal_with_data(ret, __LINE__);
+
+ /* Read value written by host and send it back out for verification */
+ ret = tdg_vp_vmcall_instruction_io(TDX_SHARED_MEM_TEST_INFO_PORT, 4,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE,
+ (uint64_t *)test_mem_shared_gva);
+ if (ret)
+ tdx_test_fatal_with_data(ret, __LINE__);
+}
+
+int verify_shared_mem(void)
+{
+ struct kvm_vm *vm;
+ struct kvm_vcpu *vcpu;
+
+ vm_vaddr_t test_mem_private_gva;
+ uint32_t *test_mem_hva;
+
+ vm = td_create();
+ td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
+ vcpu = td_vcpu_add(vm, 0, guest_shared_mem);
+
+ /*
+ * Set up shared memory page for testing by first allocating as private
+ * and then mapping the same GPA again as shared. This way, the TD does
+ * not have to remap its page tables at runtime.
+ */
+ test_mem_private_gva = vm_vaddr_alloc(vm, vm->page_size,
+ TDX_SHARED_MEM_TEST_PRIVATE_GVA);
+ ASSERT_EQ(test_mem_private_gva, TDX_SHARED_MEM_TEST_PRIVATE_GVA);
+
+ test_mem_hva = addr_gva2hva(vm, test_mem_private_gva);
+ TEST_ASSERT(test_mem_hva != NULL,
+ "Guest address not found in guest memory regions\n");
+
+ test_mem_private_gpa = addr_gva2gpa(vm, test_mem_private_gva);
+ virt_pg_map_shared(vm, TDX_SHARED_MEM_TEST_SHARED_GVA,
+ test_mem_private_gpa);
+
+ test_mem_shared_gpa = test_mem_private_gpa | BIT_ULL(vm->pa_bits - 1);
+ sync_global_to_guest(vm, test_mem_private_gpa);
+ sync_global_to_guest(vm, test_mem_shared_gpa);
+
+ td_finalize(vm);
+
+ printf("Verifying shared memory accesses for TDX\n");
+
+ /* Begin guest execution; guest writes to shared memory. */
+ printf("\t ... Starting guest execution\n");
+
+ /* Handle map gpa as shared */
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_ASSERT_IO(vcpu, TDX_SHARED_MEM_TEST_INFO_PORT, 4,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
+ ASSERT_EQ(*test_mem_hva, TDX_SHARED_MEM_TEST_GUEST_WRITE_VALUE);
+
+ *test_mem_hva = TDX_SHARED_MEM_TEST_HOST_WRITE_VALUE;
+ td_vcpu_run(vcpu);
+ TDX_TEST_ASSERT_IO(vcpu, TDX_SHARED_MEM_TEST_INFO_PORT, 4,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
+ ASSERT_EQ(*(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset),
+ TDX_SHARED_MEM_TEST_HOST_WRITE_VALUE);
+
+ printf("\t ... PASSED\n");
+
+ kvm_vm_free(vm);
+
+ return 0;
+}
+
+int main(int argc, char **argv)
+{
+ if (!is_tdx_enabled()) {
+ printf("TDX is not supported by KVM\n"
+ "Skipping the TDX tests.\n");
+ return 0;
+ }
+
+ return verify_shared_mem();
+}
--
2.41.0.487.g6d72f3e995-goog


2023-07-25 22:37:47

by Ryan Afranji

Subject: [PATCH v4 05/28] KVM: selftests: Add helper functions to create TDX VMs

From: Erdem Aktas <[email protected]>

TDX requires additional IOCTLs to initialize a VM and its vCPUs, to add
private memory, and to finalize the VM's memory. Additional utility
functions are also provided to manipulate a TD, similar to those that
manipulate a VM in the current selftest framework.

A TD's initial register state cannot be manipulated directly by the
host, hence boot code is provided at the TD's reset vector. This boot
code reads the boot parameters loaded into the TD's memory and sets up
the TD for the selftest.

Signed-off-by: Erdem Aktas <[email protected]>
Signed-off-by: Ryan Afranji <[email protected]>
Signed-off-by: Sagi Shahar <[email protected]>
Co-developed-by: Ackerley Tng <[email protected]>
Signed-off-by: Ackerley Tng <[email protected]>
Change-Id: I3bad2e7004da358ae78a0892293e992e8e30f70d
---
tools/testing/selftests/kvm/Makefile | 2 +
.../selftests/kvm/include/kvm_util_base.h | 4 +
.../kvm/include/x86_64/tdx/td_boot.h | 82 ++++
.../kvm/include/x86_64/tdx/td_boot_asm.h | 16 +
.../kvm/include/x86_64/tdx/tdx_util.h | 16 +
tools/testing/selftests/kvm/lib/kvm_util.c | 2 +-
.../selftests/kvm/lib/x86_64/tdx/td_boot.S | 101 ++++
.../selftests/kvm/lib/x86_64/tdx/tdx_util.c | 441 ++++++++++++++++++
8 files changed, 663 insertions(+), 1 deletion(-)
create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/td_boot.h
create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/td_boot_asm.h
create mode 100644 tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h
create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/td_boot.S
create mode 100644 tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 04f3ac6ee117..9b2210b64ab5 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -39,6 +39,8 @@ LIBKVM_x86_64 += lib/x86_64/processor.c
LIBKVM_x86_64 += lib/x86_64/svm.c
LIBKVM_x86_64 += lib/x86_64/ucall.c
LIBKVM_x86_64 += lib/x86_64/vmx.c
+LIBKVM_x86_64 += lib/x86_64/tdx/tdx_util.c
+LIBKVM_x86_64 += lib/x86_64/tdx/td_boot.S

LIBKVM_aarch64 += lib/aarch64/gic.c
LIBKVM_aarch64 += lib/aarch64/gic_v3.c
diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 12524d94a4eb..6ed57162b49d 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -792,6 +792,10 @@ static inline vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
return _vm_phy_pages_alloc(vm, num, paddr_min, memslot, vm->protected);
}

+uint64_t vm_nr_pages_required(enum vm_guest_mode mode,
+ uint32_t nr_runnable_vcpus,
+ uint64_t extra_mem_pages);
+
/*
* ____vm_create() does KVM_CREATE_VM and little else. __vm_create() also
* loads the test binary into guest memory and creates an IRQ chip (x86 only).
diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/td_boot.h b/tools/testing/selftests/kvm/include/x86_64/tdx/td_boot.h
new file mode 100644
index 000000000000..148057e569d6
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/td_boot.h
@@ -0,0 +1,82 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef SELFTEST_TDX_TD_BOOT_H
+#define SELFTEST_TDX_TD_BOOT_H
+
+#include <stdint.h>
+#include "tdx/td_boot_asm.h"
+
+/*
+ * Layout for boot section (not to scale)
+ *
+ * GPA
+ * ┌─────────────────────────────┬──0x1_0000_0000 (4GB)
+ * │ Boot code trampoline │
+ * ├─────────────────────────────┼──0x0_ffff_fff0: Reset vector (16B below 4GB)
+ * │ Boot code │
+ * ├─────────────────────────────┼──td_boot will be copied here, so that the
+ * │ │ jmp to td_boot is exactly at the reset vector
+ * │ Empty space │
+ * │ │
+ * ├─────────────────────────────┤
+ * │ │
+ * │ │
+ * │ Boot parameters │
+ * │ │
+ * │ │
+ * └─────────────────────────────┴──0x0_ffff_0000: TD_BOOT_PARAMETERS_GPA
+ */
+#define FOUR_GIGABYTES_GPA (4ULL << 30)
+
+/**
+ * The exact memory layout for LGDT or LIDT instructions.
+ */
+struct __packed td_boot_parameters_dtr {
+ uint16_t limit;
+ uint32_t base;
+};
+
+/**
+ * The exact layout in memory required for a ljmp, including the selector for
+ * changing code segment.
+ */
+struct __packed td_boot_parameters_ljmp_target {
+ uint32_t eip_gva;
+ uint16_t code64_sel;
+};
+
+/**
+ * Allows each vCPU to be initialized with different eip and esp.
+ */
+struct __packed td_per_vcpu_parameters {
+ uint32_t esp_gva;
+ struct td_boot_parameters_ljmp_target ljmp_target;
+};
+
+/**
+ * Boot parameters for the TD.
+ *
+ * Unlike a regular VM, we can't ask KVM to set registers such as esp, eip, etc
+ * before boot, so to run selftests, these registers' values have to be
+ * initialized by the TD.
+ *
+ * This struct is loaded in TD private memory at TD_BOOT_PARAMETERS_GPA.
+ *
+ * The TD boot code will read off parameters from this struct and set up the
+ * vcpu for executing selftests.
+ */
+struct __packed td_boot_parameters {
+ uint32_t cr0;
+ uint32_t cr3;
+ uint32_t cr4;
+ struct td_boot_parameters_dtr gdtr;
+ struct td_boot_parameters_dtr idtr;
+ struct td_per_vcpu_parameters per_vcpu[];
+};
+
+extern void td_boot(void);
+extern void reset_vector(void);
+extern void td_boot_code_end(void);
+
+#define TD_BOOT_CODE_SIZE (td_boot_code_end - td_boot)
+
+#endif /* SELFTEST_TDX_TD_BOOT_H */
diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/td_boot_asm.h b/tools/testing/selftests/kvm/include/x86_64/tdx/td_boot_asm.h
new file mode 100644
index 000000000000..0a07104f7deb
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/td_boot_asm.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef SELFTEST_TDX_TD_BOOT_ASM_H
+#define SELFTEST_TDX_TD_BOOT_ASM_H
+
+/*
+ * GPA where TD boot parameters will be loaded.
+ *
+ * TD_BOOT_PARAMETERS_GPA is arbitrarily chosen to
+ *
+ * + be within the 4GB address space
+ * + provide enough contiguous memory for the struct td_boot_parameters such
+ * that there is one struct td_per_vcpu_parameters for KVM_MAX_VCPUS
+ */
+#define TD_BOOT_PARAMETERS_GPA 0xffff0000
+
+#endif // SELFTEST_TDX_TD_BOOT_ASM_H
diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h
new file mode 100644
index 000000000000..274b245f200b
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef SELFTESTS_TDX_KVM_UTIL_H
+#define SELFTESTS_TDX_KVM_UTIL_H
+
+#include <stdint.h>
+
+#include "kvm_util_base.h"
+
+struct kvm_vcpu *td_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id, void *guest_code);
+
+struct kvm_vm *td_create(void);
+void td_initialize(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
+ uint64_t attributes);
+void td_finalize(struct kvm_vm *vm);
+
+#endif // SELFTESTS_TDX_KVM_UTIL_H
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 5bbcddcd6796..e2a0d979b54f 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -309,7 +309,7 @@ struct kvm_vm *____vm_create(struct vm_shape shape)
return vm;
}

-static uint64_t vm_nr_pages_required(enum vm_guest_mode mode,
+uint64_t vm_nr_pages_required(enum vm_guest_mode mode,
uint32_t nr_runnable_vcpus,
uint64_t extra_mem_pages)
{
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/td_boot.S b/tools/testing/selftests/kvm/lib/x86_64/tdx/td_boot.S
new file mode 100644
index 000000000000..800e09264d4e
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/td_boot.S
@@ -0,0 +1,101 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#include "tdx/td_boot_asm.h"
+
+/* Offsets for reading struct td_boot_parameters */
+#define TD_BOOT_PARAMETERS_CR0 0
+#define TD_BOOT_PARAMETERS_CR3 4
+#define TD_BOOT_PARAMETERS_CR4 8
+#define TD_BOOT_PARAMETERS_GDT 12
+#define TD_BOOT_PARAMETERS_IDT 18
+#define TD_BOOT_PARAMETERS_PER_VCPU 24
+
+/* Offsets for reading struct td_per_vcpu_parameters */
+#define TD_PER_VCPU_PARAMETERS_ESP_GVA 0
+#define TD_PER_VCPU_PARAMETERS_LJMP_TARGET 4
+
+#define SIZEOF_TD_PER_VCPU_PARAMETERS 10
+
+.code32
+
+.globl td_boot
+td_boot:
+ /* In this procedure, edi is used as a temporary register */
+ cli
+
+ /* Paging is off */
+
+ movl $TD_BOOT_PARAMETERS_GPA, %ebx
+
+ /*
+ * Find the address of struct td_per_vcpu_parameters for this
+ * vCPU based on esi (TDX spec: initialized with vcpu id). Put
+ * struct address into register for indirect addressing
+ */
+ movl $SIZEOF_TD_PER_VCPU_PARAMETERS, %eax
+ mul %esi
+ leal TD_BOOT_PARAMETERS_PER_VCPU(%ebx), %edi
+ addl %edi, %eax
+
+ /* Setup stack */
+ movl TD_PER_VCPU_PARAMETERS_ESP_GVA(%eax), %esp
+
+ /* Setup GDT */
+ leal TD_BOOT_PARAMETERS_GDT(%ebx), %edi
+ lgdt (%edi)
+
+ /* Setup IDT */
+ leal TD_BOOT_PARAMETERS_IDT(%ebx), %edi
+ lidt (%edi)
+
+ /*
+ * Set up control registers. There is no instruction to mov from
+ * memory to a control register, so each value is staged through
+ * edi as a scratch register.
+ */
+ movl TD_BOOT_PARAMETERS_CR4(%ebx), %edi
+ movl %edi, %cr4
+ movl TD_BOOT_PARAMETERS_CR3(%ebx), %edi
+ movl %edi, %cr3
+ movl TD_BOOT_PARAMETERS_CR0(%ebx), %edi
+ movl %edi, %cr0
+
+ /* Paging is enabled once CR0 bit 31 (PG) is set above */
+
+ /*
+ * Jump to selftest guest code. Far jumps read <segment
+ * selector:new eip> from <addr+4:addr>. This location has
+ * already been set up in boot parameters, and we can read boot
+ * parameters because boot code and boot parameters are loaded so
+ * that GVA and GPA are mapped 1:1.
+ */
+ ljmp *TD_PER_VCPU_PARAMETERS_LJMP_TARGET(%eax)
+
+.globl reset_vector
+reset_vector:
+ jmp td_boot
+ /*
+ * Pad reset_vector to its full size of 16 bytes so that this
+ * can be loaded with the end of reset_vector aligned to GPA=4G
+ */
+ int3
+ int3
+ int3
+ int3
+ int3
+ int3
+ int3
+ int3
+ int3
+ int3
+ int3
+ int3
+ int3
+ int3
+
+/* Leave marker so size of td_boot code can be computed */
+.globl td_boot_code_end
+td_boot_code_end:
+
+/* Disable executable stack */
+.section .note.GNU-stack,"",%progbits
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
new file mode 100644
index 000000000000..c90ad29f7733
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
@@ -0,0 +1,441 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#define _GNU_SOURCE
+#include <asm/kvm.h>
+#include <asm/kvm_host.h>
+#include <errno.h>
+#include <linux/kvm.h>
+#include <stdint.h>
+#include <sys/ioctl.h>
+
+#include "kvm_util.h"
+#include "test_util.h"
+#include "tdx/td_boot.h"
+#include "kvm_util_base.h"
+#include "processor.h"
+
+/*
+ * TDX ioctls
+ */
+
+static char *tdx_cmd_str[] = {
+ "KVM_TDX_CAPABILITIES",
+ "KVM_TDX_INIT_VM",
+ "KVM_TDX_INIT_VCPU",
+ "KVM_TDX_INIT_MEM_REGION",
+ "KVM_TDX_FINALIZE_VM"
+};
+#define TDX_MAX_CMD_STR (ARRAY_SIZE(tdx_cmd_str))
+
+static void tdx_ioctl(int fd, int ioctl_no, uint32_t flags, void *data)
+{
+ struct kvm_tdx_cmd tdx_cmd;
+ int r;
+
+ TEST_ASSERT(ioctl_no < TDX_MAX_CMD_STR, "Unknown TDX CMD : %d\n",
+ ioctl_no);
+
+ memset(&tdx_cmd, 0x0, sizeof(tdx_cmd));
+ tdx_cmd.id = ioctl_no;
+ tdx_cmd.flags = flags;
+ tdx_cmd.data = (uint64_t)data;
+
+ r = ioctl(fd, KVM_MEMORY_ENCRYPT_OP, &tdx_cmd);
+ TEST_ASSERT(r == 0, "%s failed: %d %d", tdx_cmd_str[ioctl_no], r,
+ errno);
+}
+
+#define XFEATURE_LBR 15
+#define XFEATURE_CET_U 11
+#define XFEATURE_CET_S 12
+
+#define XFEATURE_MASK_LBR (1 << XFEATURE_LBR)
+#define XFEATURE_MASK_CET_U (1 << XFEATURE_CET_U)
+#define XFEATURE_MASK_CET_S (1 << XFEATURE_CET_S)
+#define XFEATURE_MASK_CET (XFEATURE_MASK_CET_U | XFEATURE_MASK_CET_S)
+
+static void tdx_apply_cpuid_restrictions(struct kvm_cpuid2 *cpuid_data)
+{
+ for (int i = 0; i < cpuid_data->nent; i++) {
+ struct kvm_cpuid_entry2 *e = &cpuid_data->entries[i];
+
+ if (e->function == 0xd && e->index == 0) {
+ /*
+ * The TDX module requires XTILE_{CFG, DATA} to be set together
+ * or not at all, since both are needed for AMX; clear both
+ * if either is missing.
+ */
+ if ((e->eax & XFEATURE_MASK_XTILE) !=
+ XFEATURE_MASK_XTILE) {
+ e->eax &= ~XFEATURE_MASK_XTILE;
+ }
+ }
+ if (e->function == 0xd && e->index == 1) {
+ /*
+ * TDX doesn't support LBR yet.
+ * Disable bits from the XCR0 register.
+ */
+ e->ecx &= ~XFEATURE_MASK_LBR;
+ /*
+ * The TDX module requires both CET_{U, S} to be set even
+ * if only one is supported.
+ */
+ if (e->ecx & XFEATURE_MASK_CET)
+ e->ecx |= XFEATURE_MASK_CET;
+ }
+ }
+}
+
+static void tdx_td_init(struct kvm_vm *vm, uint64_t attributes)
+{
+ const struct kvm_cpuid2 *cpuid;
+ struct kvm_tdx_init_vm *init_vm;
+
+ cpuid = kvm_get_supported_cpuid();
+
+ init_vm = malloc(sizeof(*init_vm) +
+ sizeof(init_vm->cpuid.entries[0]) * cpuid->nent);
+ TEST_ASSERT(init_vm, "init_vm allocation failed");
+
+ memset(init_vm, 0, sizeof(*init_vm));
+ memcpy(&init_vm->cpuid, cpuid, kvm_cpuid2_size(cpuid->nent));
+
+ init_vm->attributes = attributes;
+
+ tdx_apply_cpuid_restrictions(&init_vm->cpuid);
+
+ tdx_ioctl(vm->fd, KVM_TDX_INIT_VM, 0, init_vm);
+}
+
+static void tdx_td_vcpu_init(struct kvm_vcpu *vcpu)
+{
+ const struct kvm_cpuid2 *cpuid = kvm_get_supported_cpuid();
+
+ vcpu_init_cpuid(vcpu, cpuid);
+ tdx_ioctl(vcpu->fd, KVM_TDX_INIT_VCPU, 0, NULL);
+}
+
+static void tdx_init_mem_region(struct kvm_vm *vm, void *source_pages,
+ uint64_t gpa, uint64_t size)
+{
+ struct kvm_tdx_init_mem_region mem_region = {
+ .source_addr = (uint64_t)source_pages,
+ .gpa = gpa,
+ .nr_pages = size / PAGE_SIZE,
+ };
+ uint32_t metadata = KVM_TDX_MEASURE_MEMORY_REGION;
+
+ TEST_ASSERT((mem_region.nr_pages > 0) &&
+ ((mem_region.nr_pages * PAGE_SIZE) == size),
+ "Cannot add partial pages to the guest memory.\n");
+ TEST_ASSERT(((uint64_t)source_pages & (PAGE_SIZE - 1)) == 0,
+ "Source memory buffer is not page aligned\n");
+ tdx_ioctl(vm->fd, KVM_TDX_INIT_MEM_REGION, metadata, &mem_region);
+}
+
+static void tdx_td_finalizemr(struct kvm_vm *vm)
+{
+ tdx_ioctl(vm->fd, KVM_TDX_FINALIZE_VM, 0, NULL);
+}
+
+/*
+ * TD creation/setup/finalization
+ */
+
+static void tdx_enable_capabilities(struct kvm_vm *vm)
+{
+ int rc;
+
+ rc = kvm_check_cap(KVM_CAP_X2APIC_API);
+ TEST_ASSERT(rc, "TDX: KVM_CAP_X2APIC_API is not supported!");
+ rc = kvm_check_cap(KVM_CAP_SPLIT_IRQCHIP);
+ TEST_ASSERT(rc, "TDX: KVM_CAP_SPLIT_IRQCHIP is not supported!");
+
+ vm_enable_cap(vm, KVM_CAP_X2APIC_API,
+ KVM_X2APIC_API_USE_32BIT_IDS |
+ KVM_X2APIC_API_DISABLE_BROADCAST_QUIRK);
+ vm_enable_cap(vm, KVM_CAP_SPLIT_IRQCHIP, 24);
+}
+
+static void tdx_configure_memory_encryption(struct kvm_vm *vm)
+{
+ /* Configure the shared (s) and encrypted (c) bits for this VM per the TDX spec */
+ vm->arch.s_bit = 1ULL << (vm->pa_bits - 1);
+ vm->arch.c_bit = 0;
+ /* Set gpa_protected_mask so that tagging/untagging of GPAs works */
+ vm->gpa_protected_mask = vm->arch.s_bit;
+ /* This VM is protected (has memory encryption) */
+ vm->protected = true;
+}
+
+static void tdx_apply_cr4_restrictions(struct kvm_sregs *sregs)
+{
+ /* TDX spec 11.6.2: CR4 bit MCE is fixed to 1 */
+ sregs->cr4 |= X86_CR4_MCE;
+
+ /* Set this because UEFI also sets this up, to handle XMM exceptions */
+ sregs->cr4 |= X86_CR4_OSXMMEXCPT;
+
+ /* TDX spec 11.6.2: CR4 bit VMXE and SMXE are fixed to 0 */
+ sregs->cr4 &= ~(X86_CR4_VMXE | X86_CR4_SMXE);
+}
+
+static void load_td_boot_code(struct kvm_vm *vm)
+{
+ void *boot_code_hva = addr_gpa2hva(vm, FOUR_GIGABYTES_GPA - TD_BOOT_CODE_SIZE);
+
+ TEST_ASSERT(td_boot_code_end - reset_vector == 16,
+ "The reset vector must be 16 bytes in size.");
+ memcpy(boot_code_hva, td_boot, TD_BOOT_CODE_SIZE);
+}
+
+static void load_td_per_vcpu_parameters(struct td_boot_parameters *params,
+ struct kvm_sregs *sregs,
+ struct kvm_vcpu *vcpu,
+ void *guest_code)
+{
+ /* Store vcpu_index to match what the TDX module would store internally */
+ static uint32_t vcpu_index;
+
+ struct td_per_vcpu_parameters *vcpu_params = &params->per_vcpu[vcpu_index];
+
+ TEST_ASSERT(vcpu->initial_stack_addr != 0,
+ "initial stack address should not be 0");
+ TEST_ASSERT(vcpu->initial_stack_addr <= 0xffffffff,
+ "initial stack address must fit in 32 bits");
+ TEST_ASSERT((uint64_t)guest_code <= 0xffffffff,
+ "guest_code must fit in 32 bits");
+ TEST_ASSERT(sregs->cs.selector != 0, "cs.selector should not be 0");
+
+ vcpu_params->esp_gva = (uint32_t)(uint64_t)vcpu->initial_stack_addr;
+ vcpu_params->ljmp_target.eip_gva = (uint32_t)(uint64_t)guest_code;
+ vcpu_params->ljmp_target.code64_sel = sregs->cs.selector;
+
+ vcpu_index++;
+}
+
+static void load_td_common_parameters(struct td_boot_parameters *params,
+ struct kvm_sregs *sregs)
+{
+ /* Copy boot parameters common to all vCPUs from sregs */
+ params->cr0 = sregs->cr0;
+ params->cr3 = sregs->cr3;
+ params->cr4 = sregs->cr4;
+ params->gdtr.limit = sregs->gdt.limit;
+ params->gdtr.base = sregs->gdt.base;
+ params->idtr.limit = sregs->idt.limit;
+ params->idtr.base = sregs->idt.base;
+
+ TEST_ASSERT(params->cr0 != 0, "cr0 should not be 0");
+ TEST_ASSERT(params->cr3 != 0, "cr3 should not be 0");
+ TEST_ASSERT(params->cr4 != 0, "cr4 should not be 0");
+ TEST_ASSERT(params->gdtr.base != 0, "gdt base address should not be 0");
+}
+
+static void load_td_boot_parameters(struct td_boot_parameters *params,
+ struct kvm_vcpu *vcpu, void *guest_code)
+{
+ struct kvm_sregs sregs;
+
+ /* Assemble parameters in sregs */
+ memset(&sregs, 0, sizeof(struct kvm_sregs));
+ vcpu_setup_mode_sregs(vcpu->vm, &sregs);
+ tdx_apply_cr4_restrictions(&sregs);
+ kvm_setup_idt(vcpu->vm, &sregs.idt);
+
+ if (!params->cr0)
+ load_td_common_parameters(params, &sregs);
+
+ load_td_per_vcpu_parameters(params, &sregs, vcpu, guest_code);
+}
+
+/**
+ * Adds a vCPU to a TD (Trust Domain) with minimum defaults. It will not set
+ * up any general purpose registers as they will be initialized by the TDX
+ * module. In TDX, a vCPU's RIP is set to 0xFFFFFFF0. See Intel TDX EAS
+ * Section "Initial State of Guest GPRs" for more information on a vCPU's
+ * initial register values when first entering the TD.
+ *
+ * Input Args:
+ * vm - Virtual Machine
+ * vcpu_id - The id of the vCPU to add to the VM.
+ * guest_code - Guest entry point for this vCPU.
+ */
+struct kvm_vcpu *td_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id, void *guest_code)
+{
+ struct kvm_vcpu *vcpu;
+
+ /*
+ * TD setup will not use the value of rip set in vm_vcpu_add anyway, so
+ * NULL can be used for guest_code.
+ */
+ vcpu = vm_vcpu_add(vm, vcpu_id, NULL);
+
+ tdx_td_vcpu_init(vcpu);
+
+ load_td_boot_parameters(addr_gpa2hva(vm, TD_BOOT_PARAMETERS_GPA),
+ vcpu, guest_code);
+
+ return vcpu;
+}
+
+/**
+ * Iterate over set ranges within sparsebit @s. In each iteration,
+ * @range_begin and @range_end will take the beginning and end of the set range,
+ * which are of type sparsebit_idx_t.
+ *
+ * For example, if the range [3, 7] (inclusive) is set, within the iteration,
+ * @range_begin will take the value 3 and @range_end will take the value 7.
+ *
+ * Ensure that there is at least one bit set before using this macro, e.g.
+ * by checking with sparsebit_any_set(), because sparsebit_first_set() will
+ * abort if none are set.
+ */
+#define sparsebit_for_each_set_range(s, range_begin, range_end) \
+ for (range_begin = sparsebit_first_set(s), \
+ range_end = sparsebit_next_clear(s, range_begin) - 1; \
+ range_begin && range_end; \
+ range_begin = sparsebit_next_set(s, range_end), \
+ range_end = sparsebit_next_clear(s, range_begin) - 1)
+/*
+ * sparsebit_next_clear() can return 0 if [x, 2**64-1] are all set, and the -1
+ * would then cause an underflow back to 2**64 - 1. This is expected and
+ * correct.
+ *
+ * If the last range in the sparsebit is [x, y] and we try to iterate,
+ * sparsebit_next_set() will return 0, and sparsebit_next_clear() will try and
+ * find the first range, but that's correct because the condition expression
+ * would cause us to quit the loop.
+ */
+
+static void load_td_memory_region(struct kvm_vm *vm,
+ struct userspace_mem_region *region)
+{
+ const struct sparsebit *pages = region->protected_phy_pages;
+ const uint64_t hva_base = region->region.userspace_addr;
+ const vm_paddr_t gpa_base = region->region.guest_phys_addr;
+ const sparsebit_idx_t lowest_page_in_region = gpa_base >>
+ vm->page_shift;
+
+ sparsebit_idx_t i;
+ sparsebit_idx_t j;
+
+ if (!sparsebit_any_set(pages))
+ return;
+
+ sparsebit_for_each_set_range(pages, i, j) {
+ const uint64_t size_to_load = (j - i + 1) * vm->page_size;
+ const uint64_t offset =
+ (i - lowest_page_in_region) * vm->page_size;
+ const uint64_t hva = hva_base + offset;
+ const uint64_t gpa = gpa_base + offset;
+ void *source_addr;
+
+ /*
+ * The KVM_TDX_INIT_MEM_REGION ioctl cannot encrypt memory in
+ * place, so make a separate copy of the source pages when
+ * there is only one backing memory source.
+ */
+ source_addr = mmap(NULL, size_to_load, PROT_READ | PROT_WRITE,
+ MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+ TEST_ASSERT(
+ source_addr != MAP_FAILED,
+ "Could not allocate memory for loading memory region");
+
+ memcpy(source_addr, (void *)hva, size_to_load);
+
+ tdx_init_mem_region(vm, source_addr, gpa, size_to_load);
+
+ munmap(source_addr, size_to_load);
+ }
+}
+
+static void load_td_private_memory(struct kvm_vm *vm)
+{
+ int ctr;
+ struct userspace_mem_region *region;
+
+ hash_for_each(vm->regions.slot_hash, ctr, region, slot_node) {
+ load_td_memory_region(vm, region);
+ }
+}
+
+struct kvm_vm *td_create(void)
+{
+ struct vm_shape shape;
+
+ shape.mode = VM_MODE_DEFAULT;
+ shape.type = KVM_X86_PROTECTED_VM;
+ return ____vm_create(shape);
+}
+
+static void td_setup_boot_code(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type)
+{
+ vm_vaddr_t addr;
+ size_t boot_code_allocation = round_up(TD_BOOT_CODE_SIZE, PAGE_SIZE);
+ vm_paddr_t boot_code_base_gpa = FOUR_GIGABYTES_GPA - boot_code_allocation;
+ size_t npages = DIV_ROUND_UP(boot_code_allocation, PAGE_SIZE);
+
+ vm_userspace_mem_region_add(vm, src_type, boot_code_base_gpa, 1, npages,
+ KVM_MEM_PRIVATE);
+ addr = vm_vaddr_alloc_1to1(vm, boot_code_allocation, boot_code_base_gpa, 1);
+ ASSERT_EQ(addr, boot_code_base_gpa);
+
+ load_td_boot_code(vm);
+}
+
+static size_t td_boot_parameters_size(void)
+{
+ int max_vcpus = kvm_check_cap(KVM_CAP_MAX_VCPUS);
+ size_t total_per_vcpu_parameters_size =
+ max_vcpus * sizeof(struct td_per_vcpu_parameters);
+
+ return sizeof(struct td_boot_parameters) + total_per_vcpu_parameters_size;
+}
+
+static void td_setup_boot_parameters(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type)
+{
+ vm_vaddr_t addr;
+ size_t boot_params_size = td_boot_parameters_size();
+ int npages = DIV_ROUND_UP(boot_params_size, PAGE_SIZE);
+ size_t total_size = npages * PAGE_SIZE;
+
+ vm_userspace_mem_region_add(vm, src_type, TD_BOOT_PARAMETERS_GPA, 2,
+ npages, KVM_MEM_PRIVATE);
+ addr = vm_vaddr_alloc_1to1(vm, total_size, TD_BOOT_PARAMETERS_GPA, 2);
+ ASSERT_EQ(addr, TD_BOOT_PARAMETERS_GPA);
+}
+
+void td_initialize(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
+ uint64_t attributes)
+{
+ uint64_t nr_pages_required;
+
+ tdx_enable_capabilities(vm);
+
+ tdx_configure_memory_encryption(vm);
+
+ tdx_td_init(vm, attributes);
+
+ nr_pages_required = vm_nr_pages_required(VM_MODE_DEFAULT, 1, 0);
+
+ /*
+ * Add memory (the 0th memslot) for the TD. This will be used to set
+ * up the vCPUs (provide stack space) and to load the ELF file.
+ */
+ vm_userspace_mem_region_add(vm, src_type, 0, 0, nr_pages_required,
+ KVM_MEM_PRIVATE);
+
+ kvm_vm_elf_load(vm, program_invocation_name);
+
+ vm_init_descriptor_tables(vm);
+
+ td_setup_boot_code(vm, src_type);
+ td_setup_boot_parameters(vm, src_type);
+}
+
+void td_finalize(struct kvm_vm *vm)
+{
+ sync_exception_handlers_to_guest(vm);
+
+ load_td_private_memory(vm);
+
+ tdx_td_finalizemr(vm);
+}
--
2.41.0.487.g6d72f3e995-goog


2023-07-25 22:37:54

by Ryan Afranji

Subject: [PATCH v4 14/28] KVM: selftests: TDX: Add TDX IO reads test

From: Sagi Shahar <[email protected]>

The test verifies IO reads of various sizes from the host to the guest.

Signed-off-by: Sagi Shahar <[email protected]>
Signed-off-by: Ackerley Tng <[email protected]>
Change-Id: If5b43a7b084823707b5b4a546c16800d5c61daaf
Signed-off-by: Ryan Afranji <[email protected]>
---
.../selftests/kvm/x86_64/tdx_vm_tests.c | 87 +++++++++++++++++++
1 file changed, 87 insertions(+)

diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
index ca0136930775..9ac0b793f8c5 100644
--- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
+++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
@@ -429,6 +429,92 @@ void verify_guest_writes(void)
printf("\t ... PASSED\n");
}

+#define TDX_IO_READS_TEST_PORT 0x52
+
+/*
+ * Verifies IO functionality by reading values of different sizes
+ * from the host.
+ */
+void guest_io_reads(void)
+{
+ uint64_t data;
+ uint64_t ret;
+
+ ret = tdg_vp_vmcall_instruction_io(TDX_IO_READS_TEST_PORT, 1,
+ TDG_VP_VMCALL_INSTRUCTION_IO_READ,
+ &data);
+ if (ret)
+ tdx_test_fatal(ret);
+ if (data != 0xAB)
+ tdx_test_fatal(1);
+
+ ret = tdg_vp_vmcall_instruction_io(TDX_IO_READS_TEST_PORT, 2,
+ TDG_VP_VMCALL_INSTRUCTION_IO_READ,
+ &data);
+ if (ret)
+ tdx_test_fatal(ret);
+ if (data != 0xABCD)
+ tdx_test_fatal(2);
+
+ ret = tdg_vp_vmcall_instruction_io(TDX_IO_READS_TEST_PORT, 4,
+ TDG_VP_VMCALL_INSTRUCTION_IO_READ,
+ &data);
+ if (ret)
+ tdx_test_fatal(ret);
+ if (data != 0xFFABCDEF)
+ tdx_test_fatal(4);
+
+ // Read an invalid number of bytes.
+ ret = tdg_vp_vmcall_instruction_io(TDX_IO_READS_TEST_PORT, 5,
+ TDG_VP_VMCALL_INSTRUCTION_IO_READ,
+ &data);
+ if (ret)
+ tdx_test_fatal(ret);
+
+ tdx_test_success();
+}
+
+void verify_guest_reads(void)
+{
+ struct kvm_vm *vm;
+ struct kvm_vcpu *vcpu;
+
+ vm = td_create();
+ td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
+ vcpu = td_vcpu_add(vm, 0, guest_io_reads);
+ td_finalize(vm);
+
+ printf("Verifying guest reads:\n");
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_IO(vcpu, TDX_IO_READS_TEST_PORT, 1,
+ TDG_VP_VMCALL_INSTRUCTION_IO_READ);
+ *(uint8_t *)((void *)vcpu->run + vcpu->run->io.data_offset) = 0xAB;
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_IO(vcpu, TDX_IO_READS_TEST_PORT, 2,
+ TDG_VP_VMCALL_INSTRUCTION_IO_READ);
+ *(uint16_t *)((void *)vcpu->run + vcpu->run->io.data_offset) = 0xABCD;
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_IO(vcpu, TDX_IO_READS_TEST_PORT, 4,
+ TDG_VP_VMCALL_INSTRUCTION_IO_READ);
+ *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset) = 0xFFABCDEF;
+
+ td_vcpu_run(vcpu);
+ ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_SYSTEM_EVENT);
+ ASSERT_EQ(vcpu->run->system_event.data[1], TDG_VP_VMCALL_INVALID_OPERAND);
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_ASSERT_SUCCESS(vcpu);
+
+ kvm_vm_free(vm);
+ printf("\t ... PASSED\n");
+}
+
int main(int argc, char **argv)
{
setbuf(stdout, NULL);
@@ -444,6 +530,7 @@ int main(int argc, char **argv)
run_in_new_process(&verify_td_cpuid);
run_in_new_process(&verify_get_td_vmcall_info);
run_in_new_process(&verify_guest_writes);
+ run_in_new_process(&verify_guest_reads);

return 0;
}
--
2.41.0.487.g6d72f3e995-goog


2023-07-25 22:59:30

by Ryan Afranji

Subject: [PATCH v4 19/28] KVM: selftests: TDX: Add TDX CPUID TDVMCALL test

From: Sagi Shahar <[email protected]>

This test issues a CPUID TDVMCALL from inside the guest to get the CPUID
values as seen by KVM.

Signed-off-by: Sagi Shahar <[email protected]>
Signed-off-by: Ackerley Tng <[email protected]>
Change-Id: Ibd1219a119f09e6537717bf48473a387a7c2f51e
Signed-off-by: Ryan Afranji <[email protected]>
---
.../selftests/kvm/include/x86_64/tdx/tdx.h | 4 +
.../selftests/kvm/lib/x86_64/tdx/tdx.c | 26 +++++
.../selftests/kvm/x86_64/tdx_vm_tests.c | 94 +++++++++++++++++++
3 files changed, 124 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
index 502b670ea699..b13a533234fd 100644
--- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
@@ -8,6 +8,7 @@
#define TDG_VP_VMCALL_GET_TD_VM_CALL_INFO 0x10000
#define TDG_VP_VMCALL_REPORT_FATAL_ERROR 0x10003

+#define TDG_VP_VMCALL_INSTRUCTION_CPUID 10
#define TDG_VP_VMCALL_INSTRUCTION_HLT 12
#define TDG_VP_VMCALL_INSTRUCTION_IO 30
#define TDG_VP_VMCALL_INSTRUCTION_RDMSR 31
@@ -27,5 +28,8 @@ uint64_t tdg_vp_vmcall_ve_request_mmio_read(uint64_t address, uint64_t size,
uint64_t *data_out);
uint64_t tdg_vp_vmcall_ve_request_mmio_write(uint64_t address, uint64_t size,
uint64_t data_in);
+uint64_t tdg_vp_vmcall_instruction_cpuid(uint32_t eax, uint32_t ecx,
+ uint32_t *ret_eax, uint32_t *ret_ebx,
+ uint32_t *ret_ecx, uint32_t *ret_edx);

#endif // SELFTEST_TDX_TDX_H
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
index f4afa09f7e3d..a45e2ceb6eda 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
@@ -157,3 +157,29 @@ uint64_t tdg_vp_vmcall_ve_request_mmio_write(uint64_t address, uint64_t size,

return __tdx_hypercall(&args, 0);
}
+
+uint64_t tdg_vp_vmcall_instruction_cpuid(uint32_t eax, uint32_t ecx,
+ uint32_t *ret_eax, uint32_t *ret_ebx,
+ uint32_t *ret_ecx, uint32_t *ret_edx)
+{
+ uint64_t ret;
+ struct tdx_hypercall_args args = {
+ .r11 = TDG_VP_VMCALL_INSTRUCTION_CPUID,
+ .r12 = eax,
+ .r13 = ecx,
+ };
+
+ ret = __tdx_hypercall(&args, TDX_HCALL_HAS_OUTPUT);
+
+ if (ret_eax)
+ *ret_eax = args.r12;
+ if (ret_ebx)
+ *ret_ebx = args.r13;
+ if (ret_ecx)
+ *ret_ecx = args.r14;
+ if (ret_edx)
+ *ret_edx = args.r15;
+
+ return ret;
+}
diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
index 79fe56c1052d..a6da9fda1c6b 100644
--- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
+++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
@@ -969,6 +969,99 @@ void verify_mmio_writes(void)
printf("\t ... PASSED\n");
}

+/*
+ * Verifies CPUID TDVMCALL functionality. The guest issues a CPUID
+ * TDVMCALL for leaf 0x1 and sends the returned values to userspace
+ * using IO writes, to be checked against the expected values.
+ */
+void guest_code_cpuid_tdcall(void)
+{
+ uint64_t err;
+ uint32_t eax, ebx, ecx, edx;
+
+ // Read CPUID leaf 0x1 from host.
+ err = tdg_vp_vmcall_instruction_cpuid(/*eax=*/1, /*ecx=*/0,
+ &eax, &ebx, &ecx, &edx);
+ if (err)
+ tdx_test_fatal(err);
+
+ err = tdx_test_report_to_user_space(eax);
+ if (err)
+ tdx_test_fatal(err);
+
+ err = tdx_test_report_to_user_space(ebx);
+ if (err)
+ tdx_test_fatal(err);
+
+ err = tdx_test_report_to_user_space(ecx);
+ if (err)
+ tdx_test_fatal(err);
+
+ err = tdx_test_report_to_user_space(edx);
+ if (err)
+ tdx_test_fatal(err);
+
+ tdx_test_success();
+}
+
+void verify_td_cpuid_tdcall(void)
+{
+ struct kvm_vm *vm;
+ struct kvm_vcpu *vcpu;
+
+ uint32_t eax, ebx, ecx, edx;
+ const struct kvm_cpuid_entry2 *cpuid_entry;
+
+ vm = td_create();
+ td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
+ vcpu = td_vcpu_add(vm, 0, guest_code_cpuid_tdcall);
+ td_finalize(vm);
+
+ printf("Verifying TD CPUID TDVMCALL:\n");
+
+ /* Wait for guest to report CPUID values */
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_IO(vcpu, TDX_TEST_REPORT_PORT, 4,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
+ eax = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_IO(vcpu, TDX_TEST_REPORT_PORT, 4,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
+ ebx = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_IO(vcpu, TDX_TEST_REPORT_PORT, 4,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
+ ecx = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_IO(vcpu, TDX_TEST_REPORT_PORT, 4,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
+ edx = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_SUCCESS(vcpu);
+
+ /* Get KVM CPUIDs for reference */
+ cpuid_entry = get_cpuid_entry(kvm_get_supported_cpuid(), 1, 0);
+ TEST_ASSERT(cpuid_entry, "CPUID entry missing\n");
+
+ ASSERT_EQ(cpuid_entry->eax, eax);
+ // Mask the initial APIC ID (EBX bits 31:24) when comparing ebx.
+ ASSERT_EQ(cpuid_entry->ebx & ~0xFF000000, ebx & ~0xFF000000);
+ ASSERT_EQ(cpuid_entry->ecx, ecx);
+ ASSERT_EQ(cpuid_entry->edx, edx);
+
+ kvm_vm_free(vm);
+ printf("\t ... PASSED\n");
+}
+
int main(int argc, char **argv)
{
setbuf(stdout, NULL);
@@ -990,6 +1083,7 @@ int main(int argc, char **argv)
run_in_new_process(&verify_guest_hlt);
run_in_new_process(&verify_mmio_reads);
run_in_new_process(&verify_mmio_writes);
+ run_in_new_process(&verify_td_cpuid_tdcall);

return 0;
}
--
2.41.0.487.g6d72f3e995-goog


2023-07-25 23:02:19

by Ryan Afranji

Subject: [PATCH v4 09/28] KVM: selftests: TDX: Add report_fatal_error test

From: Sagi Shahar <[email protected]>

The test checks report_fatal_error functionality.

Signed-off-by: Sagi Shahar <[email protected]>
Signed-off-by: Ackerley Tng <[email protected]>
Change-Id: I1bf45c42ac23aa81bb7d52f05273786d5832e377
Signed-off-by: Ryan Afranji <[email protected]>
---
.../selftests/kvm/include/x86_64/tdx/tdx.h | 6 ++-
.../kvm/include/x86_64/tdx/tdx_util.h | 1 +
.../kvm/include/x86_64/tdx/test_util.h | 19 ++++++++
.../selftests/kvm/lib/x86_64/tdx/tdx.c | 39 ++++++++++++++++
.../selftests/kvm/lib/x86_64/tdx/tdx_util.c | 12 +++++
.../selftests/kvm/lib/x86_64/tdx/test_util.c | 10 +++++
.../selftests/kvm/x86_64/tdx_vm_tests.c | 45 +++++++++++++++++++
7 files changed, 131 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
index a7161efe4ee2..1340c1070002 100644
--- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx.h
@@ -3,10 +3,14 @@
#define SELFTEST_TDX_TDX_H

#include <stdint.h>
+#include "kvm_util_base.h"

-#define TDG_VP_VMCALL_INSTRUCTION_IO 30
+#define TDG_VP_VMCALL_REPORT_FATAL_ERROR 0x10003

+#define TDG_VP_VMCALL_INSTRUCTION_IO 30
+void handle_userspace_tdg_vp_vmcall_exit(struct kvm_vcpu *vcpu);
uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
uint64_t write, uint64_t *data);
+void tdg_vp_vmcall_report_fatal_error(uint64_t error_code, uint64_t data_gpa);

#endif // SELFTEST_TDX_TDX_H
diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h
index 274b245f200b..32dd6b8fda46 100644
--- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdx_util.h
@@ -12,5 +12,6 @@ struct kvm_vm *td_create(void);
void td_initialize(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
uint64_t attributes);
void td_finalize(struct kvm_vm *vm);
+void td_vcpu_run(struct kvm_vcpu *vcpu);

#endif // SELFTESTS_TDX_KVM_UTIL_H
diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
index b570b6d978ff..6d69921136bd 100644
--- a/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/test_util.h
@@ -49,4 +49,23 @@ bool is_tdx_enabled(void);
*/
void tdx_test_success(void);

+/**
+ * Report an error with @error_code to userspace.
+ *
+ * Return value from tdg_vp_vmcall_report_fatal_error is ignored since execution
+ * is not expected to continue beyond this point.
+ */
+void tdx_test_fatal(uint64_t error_code);
+
+/**
+ * Report an error with @error_code to userspace.
+ *
+ * @data_gpa may point to an optional shared guest memory holding the error
+ * string.
+ *
+ * Return value from tdg_vp_vmcall_report_fatal_error is ignored since execution
+ * is not expected to continue beyond this point.
+ */
+void tdx_test_fatal_with_data(uint64_t error_code, uint64_t data_gpa);
+
#endif // SELFTEST_TDX_TEST_UTIL_H
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
index c2414523487a..b854c3aa34ff 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx.c
@@ -1,8 +1,31 @@
// SPDX-License-Identifier: GPL-2.0-only

+#include <string.h>
+
#include "tdx/tdcall.h"
#include "tdx/tdx.h"

+void handle_userspace_tdg_vp_vmcall_exit(struct kvm_vcpu *vcpu)
+{
+ struct kvm_tdx_vmcall *vmcall_info = &vcpu->run->tdx.u.vmcall;
+ uint64_t vmcall_subfunction = vmcall_info->subfunction;
+
+ switch (vmcall_subfunction) {
+ case TDG_VP_VMCALL_REPORT_FATAL_ERROR:
+ vcpu->run->exit_reason = KVM_EXIT_SYSTEM_EVENT;
+ vcpu->run->system_event.ndata = 3;
+ vcpu->run->system_event.data[0] =
+ TDG_VP_VMCALL_REPORT_FATAL_ERROR;
+ vcpu->run->system_event.data[1] = vmcall_info->in_r12;
+ vcpu->run->system_event.data[2] = vmcall_info->in_r13;
+ vmcall_info->status_code = 0;
+ break;
+ default:
+ TEST_FAIL("TD VMCALL subfunction %lu is unsupported.\n",
+ vmcall_subfunction);
+ }
+}
+
uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,
uint64_t write, uint64_t *data)
{
@@ -25,3 +48,19 @@ uint64_t tdg_vp_vmcall_instruction_io(uint64_t port, uint64_t size,

return ret;
}
+
+void tdg_vp_vmcall_report_fatal_error(uint64_t error_code, uint64_t data_gpa)
+{
+ struct tdx_hypercall_args args;
+
+ memset(&args, 0, sizeof(struct tdx_hypercall_args));
+
+ if (data_gpa)
+ error_code |= 0x8000000000000000;
+
+ args.r11 = TDG_VP_VMCALL_REPORT_FATAL_ERROR;
+ args.r12 = error_code;
+ args.r13 = data_gpa;
+
+ __tdx_hypercall(&args, 0);
+}
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
index 4ebb8ddb2a93..c8591480b412 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/tdx_util.c
@@ -10,6 +10,7 @@

#include "kvm_util.h"
#include "test_util.h"
+#include "tdx/tdx.h"
#include "tdx/td_boot.h"
#include "kvm_util_base.h"
#include "processor.h"
@@ -526,3 +527,14 @@ void td_finalize(struct kvm_vm *vm)

tdx_td_finalizemr(vm);
}
+
+void td_vcpu_run(struct kvm_vcpu *vcpu)
+{
+ vcpu_run(vcpu);
+
+ /* Handle TD VMCALLs that require userspace handling. */
+ if (vcpu->run->exit_reason == KVM_EXIT_TDX &&
+ vcpu->run->tdx.type == KVM_EXIT_TDX_VMCALL) {
+ handle_userspace_tdg_vp_vmcall_exit(vcpu);
+ }
+}
diff --git a/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c b/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c
index f63d90898a6a..0419c3c54341 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/tdx/test_util.c
@@ -32,3 +32,13 @@ void tdx_test_success(void)
TDX_TEST_SUCCESS_SIZE,
TDG_VP_VMCALL_INSTRUCTION_IO_WRITE, &code);
}
+
+void tdx_test_fatal_with_data(uint64_t error_code, uint64_t data_gpa)
+{
+ tdg_vp_vmcall_report_fatal_error(error_code, data_gpa);
+}
+
+void tdx_test_fatal(uint64_t error_code)
+{
+ tdx_test_fatal_with_data(error_code, 0);
+}
diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
index a18d1c9d6026..7741c6f585c0 100644
--- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
+++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
@@ -2,6 +2,7 @@

#include <signal.h>
#include "kvm_util_base.h"
+#include "tdx/tdx.h"
#include "tdx/tdx_util.h"
#include "tdx/test_util.h"
#include "test_util.h"
@@ -30,6 +31,49 @@ void verify_td_lifecycle(void)
printf("\t ... PASSED\n");
}

+void guest_code_report_fatal_error(void)
+{
+ uint64_t err;
+
+ /*
+ * Note: err should follow the GHCI spec definition:
+ * bits 31:0 should be set to 0,
+ * bits 62:32 hold a TD-specific extended error code, and
+ * bit 63 marks that additional information is in shared memory.
+ */
+ err = 0x0BAAAAAD00000000;
+ if (err)
+ tdx_test_fatal(err);
+
+ tdx_test_success();
+}
+
+void verify_report_fatal_error(void)
+{
+ struct kvm_vm *vm;
+ struct kvm_vcpu *vcpu;
+
+ vm = td_create();
+ td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
+ vcpu = td_vcpu_add(vm, 0, guest_code_report_fatal_error);
+ td_finalize(vm);
+
+ printf("Verifying report_fatal_error:\n");
+
+ td_vcpu_run(vcpu);
+
+ ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_SYSTEM_EVENT);
+ ASSERT_EQ(vcpu->run->system_event.ndata, 3);
+ ASSERT_EQ(vcpu->run->system_event.data[0], TDG_VP_VMCALL_REPORT_FATAL_ERROR);
+ ASSERT_EQ(vcpu->run->system_event.data[1], 0x0BAAAAAD00000000);
+ ASSERT_EQ(vcpu->run->system_event.data[2], 0);
+
+ vcpu_run(vcpu);
+ TDX_TEST_ASSERT_SUCCESS(vcpu);
+
+ kvm_vm_free(vm);
+ printf("\t ... PASSED\n");
+}
+
int main(int argc, char **argv)
{
setbuf(stdout, NULL);
@@ -40,6 +84,7 @@ int main(int argc, char **argv)
}

run_in_new_process(&verify_td_lifecycle);
+ run_in_new_process(&verify_report_fatal_error);

return 0;
}
--
2.41.0.487.g6d72f3e995-goog


2023-07-25 23:57:22

by Ryan Afranji

Subject: [PATCH v4 13/28] KVM: selftests: TDX: Add TDX IO writes test

From: Sagi Shahar <[email protected]>

The test verifies IO writes of various sizes from the guest to the host.

Signed-off-by: Sagi Shahar <[email protected]>
Signed-off-by: Ackerley Tng <[email protected]>
Change-Id: I91edb81a93d7bfd881ccde517c95b5728a2ba85e
Signed-off-by: Ryan Afranji <[email protected]>
---
.../selftests/kvm/include/x86_64/tdx/tdcall.h | 3 +
.../selftests/kvm/x86_64/tdx_vm_tests.c | 91 +++++++++++++++++++
2 files changed, 94 insertions(+)

diff --git a/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h b/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
index 78001bfec9c8..b5e94b7c48fa 100644
--- a/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
+++ b/tools/testing/selftests/kvm/include/x86_64/tdx/tdcall.h
@@ -10,6 +10,9 @@
#define TDG_VP_VMCALL_INSTRUCTION_IO_READ 0
#define TDG_VP_VMCALL_INSTRUCTION_IO_WRITE 1

+#define TDG_VP_VMCALL_SUCCESS 0x0000000000000000
+#define TDG_VP_VMCALL_INVALID_OPERAND 0x8000000000000000
+
#define TDX_HCALL_HAS_OUTPUT BIT(0)

#define TDX_HYPERCALL_STANDARD 0
diff --git a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
index 9e9c3ad08a21..ca0136930775 100644
--- a/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
+++ b/tools/testing/selftests/kvm/x86_64/tdx_vm_tests.c
@@ -339,6 +339,96 @@ void verify_get_td_vmcall_info(void)
printf("\t ... PASSED\n");
}

+#define TDX_IO_WRITES_TEST_PORT 0x51
+
+/*
+ * Verifies IO functionality by writing values of different sizes
+ * to the host.
+ */
+void guest_io_writes(void)
+{
+ uint64_t byte_1 = 0xAB;
+ uint64_t byte_2 = 0xABCD;
+ uint64_t byte_4 = 0xFFABCDEF;
+ uint64_t ret;
+
+ ret = tdg_vp_vmcall_instruction_io(TDX_IO_WRITES_TEST_PORT, 1,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE,
+ &byte_1);
+ if (ret)
+ tdx_test_fatal(ret);
+
+ ret = tdg_vp_vmcall_instruction_io(TDX_IO_WRITES_TEST_PORT, 2,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE,
+ &byte_2);
+ if (ret)
+ tdx_test_fatal(ret);
+
+ ret = tdg_vp_vmcall_instruction_io(TDX_IO_WRITES_TEST_PORT, 4,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE,
+ &byte_4);
+ if (ret)
+ tdx_test_fatal(ret);
+
+ /* Write an invalid number of bytes. */
+ ret = tdg_vp_vmcall_instruction_io(TDX_IO_WRITES_TEST_PORT, 5,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE,
+ &byte_4);
+ if (ret)
+ tdx_test_fatal(ret);
+
+ tdx_test_success();
+}
+
+void verify_guest_writes(void)
+{
+ struct kvm_vm *vm;
+ struct kvm_vcpu *vcpu;
+
+ uint8_t byte_1;
+ uint16_t byte_2;
+ uint32_t byte_4;
+
+ vm = td_create();
+ td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
+ vcpu = td_vcpu_add(vm, 0, guest_io_writes);
+ td_finalize(vm);
+
+ printf("Verifying guest writes:\n");
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_IO(vcpu, TDX_IO_WRITES_TEST_PORT, 1,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
+ byte_1 = *(uint8_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_IO(vcpu, TDX_IO_WRITES_TEST_PORT, 2,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
+ byte_2 = *(uint16_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_IO(vcpu, TDX_IO_WRITES_TEST_PORT, 4,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
+ byte_4 = *(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset);
+
+ ASSERT_EQ(byte_1, 0xAB);
+ ASSERT_EQ(byte_2, 0xABCD);
+ ASSERT_EQ(byte_4, 0xFFABCDEF);
+
+ td_vcpu_run(vcpu);
+ ASSERT_EQ(vcpu->run->exit_reason, KVM_EXIT_SYSTEM_EVENT);
+ ASSERT_EQ(vcpu->run->system_event.data[1], TDG_VP_VMCALL_INVALID_OPERAND);
+
+ td_vcpu_run(vcpu);
+ TDX_TEST_ASSERT_SUCCESS(vcpu);
+
+ kvm_vm_free(vm);
+ printf("\t ... PASSED\n");
+}
+
int main(int argc, char **argv)
{
setbuf(stdout, NULL);
@@ -353,6 +443,7 @@ int main(int argc, char **argv)
run_in_new_process(&verify_td_ioexit);
run_in_new_process(&verify_td_cpuid);
run_in_new_process(&verify_get_td_vmcall_info);
+ run_in_new_process(&verify_guest_writes);

return 0;
}
--
2.41.0.487.g6d72f3e995-goog


2023-07-26 00:07:30

by Ryan Afranji

Subject: [PATCH v4 27/28] KVM: selftests: TDX: Add TDX UPM selftest

From: Ackerley Tng <[email protected]>

This tests the use of guest memory with explicit MapGPA calls.

Signed-off-by: Ackerley Tng <[email protected]>
Change-Id: I47d39d4fec991f4bd465bd5ef7f2a7d5affd73c7
Signed-off-by: Ryan Afranji <[email protected]>
---
tools/testing/selftests/kvm/Makefile | 1 +
.../selftests/kvm/x86_64/tdx_upm_test.c | 401 ++++++++++++++++++
2 files changed, 402 insertions(+)
create mode 100644 tools/testing/selftests/kvm/x86_64/tdx_upm_test.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index cb2aaa7820c3..f2d8f1acc352 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -141,6 +141,7 @@ TEST_GEN_PROGS_x86_64 += kvm_binary_stats_test
TEST_GEN_PROGS_x86_64 += system_counter_offset_test
TEST_GEN_PROGS_x86_64 += x86_64/tdx_vm_tests
TEST_GEN_PROGS_x86_64 += x86_64/tdx_shared_mem_test
+TEST_GEN_PROGS_x86_64 += x86_64/tdx_upm_test

# Compiled outputs used by test targets
TEST_GEN_PROGS_EXTENDED_x86_64 += x86_64/nx_huge_pages_test
diff --git a/tools/testing/selftests/kvm/x86_64/tdx_upm_test.c b/tools/testing/selftests/kvm/x86_64/tdx_upm_test.c
new file mode 100644
index 000000000000..748a4ed5f88b
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/tdx_upm_test.c
@@ -0,0 +1,401 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#include <asm/kvm.h>
+#include <asm/vmx.h>
+#include <linux/kvm.h>
+#include <stdbool.h>
+#include <stdint.h>
+
+#include "kvm_util_base.h"
+#include "processor.h"
+#include "tdx/tdcall.h"
+#include "tdx/tdx.h"
+#include "tdx/tdx_util.h"
+#include "tdx/test_util.h"
+#include "test_util.h"
+
+/* TDX UPM test patterns */
+#define PATTERN_CONFIDENCE_CHECK (0x11)
+#define PATTERN_HOST_FOCUS (0x22)
+#define PATTERN_GUEST_GENERAL (0x33)
+#define PATTERN_GUEST_FOCUS (0x44)
+
+/*
+ * 0x80000000 is arbitrarily selected. The selected address need not be the same
+ * as TDX_UPM_TEST_AREA_GVA_PRIVATE, but it should not overlap with selftest
+ * code or boot page.
+ */
+#define TDX_UPM_TEST_AREA_GPA (0x80000000)
+/* Test area GPA is arbitrarily selected */
+#define TDX_UPM_TEST_AREA_GVA_PRIVATE (0x90000000)
+/* Select any bit that can be used as a flag */
+#define TDX_UPM_TEST_AREA_GVA_SHARED_BIT (32)
+/*
+ * TDX_UPM_TEST_AREA_GVA_SHARED is used to map the same GPA twice into the
+ * guest, once as shared and once as private
+ */
+#define TDX_UPM_TEST_AREA_GVA_SHARED \
+ (TDX_UPM_TEST_AREA_GVA_PRIVATE | \
+ BIT_ULL(TDX_UPM_TEST_AREA_GVA_SHARED_BIT))
+
+/* The test area is 2MB in size */
+#define TDX_UPM_TEST_AREA_SIZE (2 << 20)
+/* 0th general area is 1MB in size */
+#define TDX_UPM_GENERAL_AREA_0_SIZE (1 << 20)
+/* Focus area is 40KB in size */
+#define TDX_UPM_FOCUS_AREA_SIZE (40 << 10)
+/* 1st general area is the rest of the space in the test area */
+#define TDX_UPM_GENERAL_AREA_1_SIZE \
+ (TDX_UPM_TEST_AREA_SIZE - TDX_UPM_GENERAL_AREA_0_SIZE - \
+ TDX_UPM_FOCUS_AREA_SIZE)
+
+/*
+ * The test memory area is set up as two general areas, sandwiching a focus
+ * area. The general areas act as control areas. After they are filled, they
+ * are not expected to change throughout the tests. The focus area is memory
+ * whose permissions change from private to shared and vice-versa.
+ *
+ * The focus area is intentionally small, and sandwiched to test that when the
+ * focus area's permissions change, the other areas' permissions are not
+ * affected.
+ */
+struct __packed tdx_upm_test_area {
+ uint8_t general_area_0[TDX_UPM_GENERAL_AREA_0_SIZE];
+ uint8_t focus_area[TDX_UPM_FOCUS_AREA_SIZE];
+ uint8_t general_area_1[TDX_UPM_GENERAL_AREA_1_SIZE];
+};
+
+static void fill_test_area(struct tdx_upm_test_area *test_area_base,
+ uint8_t pattern)
+{
+ memset(test_area_base, pattern, sizeof(*test_area_base));
+}
+
+static void fill_focus_area(struct tdx_upm_test_area *test_area_base,
+ uint8_t pattern)
+{
+ memset(test_area_base->focus_area, pattern,
+ sizeof(test_area_base->focus_area));
+}
+
+static bool check_area(uint8_t *base, uint64_t size, uint8_t expected_pattern)
+{
+ size_t i;
+
+ for (i = 0; i < size; i++) {
+ if (base[i] != expected_pattern)
+ return false;
+ }
+
+ return true;
+}
+
+static bool check_general_areas(struct tdx_upm_test_area *test_area_base,
+ uint8_t expected_pattern)
+{
+ return (check_area(test_area_base->general_area_0,
+ sizeof(test_area_base->general_area_0),
+ expected_pattern) &&
+ check_area(test_area_base->general_area_1,
+ sizeof(test_area_base->general_area_1),
+ expected_pattern));
+}
+
+static bool check_focus_area(struct tdx_upm_test_area *test_area_base,
+ uint8_t expected_pattern)
+{
+ return check_area(test_area_base->focus_area,
+ sizeof(test_area_base->focus_area), expected_pattern);
+}
+
+static bool check_test_area(struct tdx_upm_test_area *test_area_base,
+ uint8_t expected_pattern)
+{
+ return (check_general_areas(test_area_base, expected_pattern) &&
+ check_focus_area(test_area_base, expected_pattern));
+}
+
+static bool fill_and_check(struct tdx_upm_test_area *test_area_base, uint8_t pattern)
+{
+ fill_test_area(test_area_base, pattern);
+
+ return check_test_area(test_area_base, pattern);
+}
+
+#define TDX_UPM_TEST_ASSERT(x) \
+ do { \
+ if (!(x)) \
+ tdx_test_fatal(__LINE__); \
+ } while (0)
+
+/*
+ * Shared variables between guest and host
+ */
+static struct tdx_upm_test_area *test_area_gpa_private;
+static struct tdx_upm_test_area *test_area_gpa_shared;
+
+/*
+ * Test stages for syncing with host
+ */
+enum {
+ SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST = 1,
+ SYNC_CHECK_READ_SHARED_MEMORY_FROM_HOST,
+ SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST_AGAIN,
+};
+
+#define TDX_UPM_TEST_ACCEPT_PRINT_PORT 0x87
+
+/**
+ * Runs the vCPU, and also manages memory conversions if requested by the TD.
+ */
+void vcpu_run_and_manage_memory_conversions(struct kvm_vm *vm,
+ struct kvm_vcpu *vcpu)
+{
+ for (;;) {
+ vcpu_run(vcpu);
+ if (vcpu->run->exit_reason == KVM_EXIT_TDX &&
+ vcpu->run->tdx.type == KVM_EXIT_TDX_VMCALL &&
+ vcpu->run->tdx.u.vmcall.subfunction == TDG_VP_VMCALL_MAP_GPA) {
+ struct kvm_tdx_vmcall *vmcall_info = &vcpu->run->tdx.u.vmcall;
+ uint64_t gpa = vmcall_info->in_r12 & ~vm->arch.s_bit;
+
+ handle_memory_conversion(vm, gpa, vmcall_info->in_r13,
+ !(vm->arch.s_bit & vmcall_info->in_r12));
+ vmcall_info->status_code = 0;
+ continue;
+ } else if (
+ vcpu->run->exit_reason == KVM_EXIT_IO &&
+ vcpu->run->io.port == TDX_UPM_TEST_ACCEPT_PRINT_PORT) {
+ uint64_t gpa = tdx_test_read_64bit(
+ vcpu, TDX_UPM_TEST_ACCEPT_PRINT_PORT);
+ printf("\t ... guest accepting 1 page at GPA: 0x%lx\n", gpa);
+ continue;
+ }
+
+ break;
+ }
+}
+
+static void guest_upm_explicit(void)
+{
+ uint64_t ret = 0;
+ uint64_t failed_gpa;
+
+ struct tdx_upm_test_area *test_area_gva_private =
+ (struct tdx_upm_test_area *)TDX_UPM_TEST_AREA_GVA_PRIVATE;
+ struct tdx_upm_test_area *test_area_gva_shared =
+ (struct tdx_upm_test_area *)TDX_UPM_TEST_AREA_GVA_SHARED;
+
+ /* Check: host reading private memory does not modify guest's view */
+ fill_test_area(test_area_gva_private, PATTERN_GUEST_GENERAL);
+
+ tdx_test_report_to_user_space(SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST);
+
+ TDX_UPM_TEST_ASSERT(
+ check_test_area(test_area_gva_private, PATTERN_GUEST_GENERAL));
+
+ /* Remap focus area as shared */
+ ret = tdg_vp_vmcall_map_gpa((uint64_t)test_area_gpa_shared->focus_area,
+ sizeof(test_area_gpa_shared->focus_area),
+ &failed_gpa);
+ TDX_UPM_TEST_ASSERT(!ret);
+
+ /* General areas should be unaffected by remapping */
+ TDX_UPM_TEST_ASSERT(
+ check_general_areas(test_area_gva_private, PATTERN_GUEST_GENERAL));
+
+ /*
+ * Use memory contents to confirm that the memory allocated using mmap
+ * is used as backing memory for shared memory - PATTERN_CONFIDENCE_CHECK
+ * was written by the VMM at the beginning of this test.
+ */
+ TDX_UPM_TEST_ASSERT(
+ check_focus_area(test_area_gva_shared, PATTERN_CONFIDENCE_CHECK));
+
+ /* Guest can use focus area after remapping as shared */
+ fill_focus_area(test_area_gva_shared, PATTERN_GUEST_FOCUS);
+
+ tdx_test_report_to_user_space(SYNC_CHECK_READ_SHARED_MEMORY_FROM_HOST);
+
+ /* Check that guest has the same view of shared memory */
+ TDX_UPM_TEST_ASSERT(
+ check_focus_area(test_area_gva_shared, PATTERN_HOST_FOCUS));
+
+ /* Remap focus area back to private */
+ ret = tdg_vp_vmcall_map_gpa((uint64_t)test_area_gpa_private->focus_area,
+ sizeof(test_area_gpa_private->focus_area),
+ &failed_gpa);
+ TDX_UPM_TEST_ASSERT(!ret);
+
+ /* General areas should be unaffected by remapping */
+ TDX_UPM_TEST_ASSERT(
+ check_general_areas(test_area_gva_private, PATTERN_GUEST_GENERAL));
+
+ /* Focus area should be zeroed after remapping */
+ TDX_UPM_TEST_ASSERT(check_focus_area(test_area_gva_private, 0));
+
+ tdx_test_report_to_user_space(SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST_AGAIN);
+
+ /* Check that guest can use private memory after focus area is remapped as private */
+ TDX_UPM_TEST_ASSERT(
+ fill_and_check(test_area_gva_private, PATTERN_GUEST_GENERAL));
+
+ tdx_test_success();
+}
+
+static void run_selftest(struct kvm_vm *vm, struct kvm_vcpu *vcpu,
+ struct tdx_upm_test_area *test_area_base_hva)
+{
+ vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_IO(vcpu, TDX_TEST_REPORT_PORT, TDX_TEST_REPORT_SIZE,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
+ ASSERT_EQ(*(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset),
+ SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST);
+
+ /*
+ * Check that the host reads PATTERN_CONFIDENCE_CHECK from the guest's
+ * private memory. This confirms that regular memory (userspace_addr in
+ * struct kvm_userspace_memory_region) is used to back the host's view
+ * of private memory, since PATTERN_CONFIDENCE_CHECK was written to that
+ * memory before starting the guest.
+ */
+ TEST_ASSERT(check_test_area(test_area_base_hva, PATTERN_CONFIDENCE_CHECK),
+ "Host should read PATTERN_CONFIDENCE_CHECK from guest's private memory.");
+
+ vcpu_run_and_manage_memory_conversions(vm, vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_IO(vcpu, TDX_TEST_REPORT_PORT, TDX_TEST_REPORT_SIZE,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
+ ASSERT_EQ(*(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset),
+ SYNC_CHECK_READ_SHARED_MEMORY_FROM_HOST);
+
+ TEST_ASSERT(check_focus_area(test_area_base_hva, PATTERN_GUEST_FOCUS),
+ "Host should have the same view of shared memory as guest.");
+ TEST_ASSERT(check_general_areas(test_area_base_hva, PATTERN_CONFIDENCE_CHECK),
+ "Host's view of private memory should still be backed by regular memory.");
+
+ /* Check that host can use shared memory */
+ fill_focus_area(test_area_base_hva, PATTERN_HOST_FOCUS);
+ TEST_ASSERT(check_focus_area(test_area_base_hva, PATTERN_HOST_FOCUS),
+ "Host should be able to use shared memory.");
+
+ vcpu_run_and_manage_memory_conversions(vm, vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_IO(vcpu, TDX_TEST_REPORT_PORT, TDX_TEST_REPORT_SIZE,
+ TDG_VP_VMCALL_INSTRUCTION_IO_WRITE);
+ ASSERT_EQ(*(uint32_t *)((void *)vcpu->run + vcpu->run->io.data_offset),
+ SYNC_CHECK_READ_PRIVATE_MEMORY_FROM_HOST_AGAIN);
+
+ TEST_ASSERT(check_general_areas(test_area_base_hva, PATTERN_CONFIDENCE_CHECK),
+ "Host's view of private memory should be backed by regular memory.");
+ TEST_ASSERT(check_focus_area(test_area_base_hva, PATTERN_HOST_FOCUS),
+ "Host's view of the focus area should retain the host-written pattern.");
+
+ vcpu_run(vcpu);
+ TDX_TEST_CHECK_GUEST_FAILURE(vcpu);
+ TDX_TEST_ASSERT_SUCCESS(vcpu);
+
+ printf("\t ... PASSED\n");
+}
+
+static bool address_between(uint64_t addr, void *lo, void *hi)
+{
+ return (uint64_t)lo <= addr && addr < (uint64_t)hi;
+}
+
+static void guest_ve_handler(struct ex_regs *regs)
+{
+ uint64_t ret;
+ struct ve_info ve;
+
+ ret = tdg_vp_veinfo_get(&ve);
+ TDX_UPM_TEST_ASSERT(!ret);
+
+ /* For this test, we will only handle EXIT_REASON_EPT_VIOLATION */
+ TDX_UPM_TEST_ASSERT(ve.exit_reason == EXIT_REASON_EPT_VIOLATION);
+
+ /* Validate GPA in fault */
+ TDX_UPM_TEST_ASSERT(
+ address_between(ve.gpa,
+ test_area_gpa_private->focus_area,
+ test_area_gpa_private->general_area_1));
+
+ tdx_test_send_64bit(TDX_UPM_TEST_ACCEPT_PRINT_PORT, ve.gpa);
+
+#define MEM_PAGE_ACCEPT_LEVEL_4K 0
+#define MEM_PAGE_ACCEPT_LEVEL_2M 1
+ ret = tdg_mem_page_accept(ve.gpa, MEM_PAGE_ACCEPT_LEVEL_4K);
+ TDX_UPM_TEST_ASSERT(!ret);
+}
+
+static void verify_upm_test(void)
+{
+ struct kvm_vm *vm;
+ struct kvm_vcpu *vcpu;
+
+ vm_vaddr_t test_area_gva_private;
+ struct tdx_upm_test_area *test_area_base_hva;
+ uint64_t test_area_npages;
+
+ vm = td_create();
+ td_initialize(vm, VM_MEM_SRC_ANONYMOUS, 0);
+ vcpu = td_vcpu_add(vm, 0, guest_upm_explicit);
+
+ vm_install_exception_handler(vm, VE_VECTOR, guest_ve_handler);
+
+ /*
+ * Set up shared memory page for testing by first allocating as private
+ * and then mapping the same GPA again as shared. This way, the TD does
+ * not have to remap its page tables at runtime.
+ */
+ test_area_npages = TDX_UPM_TEST_AREA_SIZE / vm->page_size;
+ vm_userspace_mem_region_add(vm,
+ VM_MEM_SRC_ANONYMOUS, TDX_UPM_TEST_AREA_GPA,
+ 3, test_area_npages, KVM_MEM_PRIVATE);
+
+ test_area_gva_private = _vm_vaddr_alloc(
+ vm, TDX_UPM_TEST_AREA_SIZE, TDX_UPM_TEST_AREA_GVA_PRIVATE,
+ TDX_UPM_TEST_AREA_GPA, 3, true);
+ ASSERT_EQ(test_area_gva_private, TDX_UPM_TEST_AREA_GVA_PRIVATE);
+
+ test_area_gpa_private = (struct tdx_upm_test_area *)
+ addr_gva2gpa(vm, test_area_gva_private);
+ virt_map_shared(vm, TDX_UPM_TEST_AREA_GVA_SHARED,
+ (uint64_t)test_area_gpa_private,
+ test_area_npages);
+ ASSERT_EQ(addr_gva2gpa(vm, TDX_UPM_TEST_AREA_GVA_SHARED),
+ (vm_paddr_t)test_area_gpa_private);
+
+ test_area_base_hva = addr_gva2hva(vm, TDX_UPM_TEST_AREA_GVA_PRIVATE);
+
+ TEST_ASSERT(fill_and_check(test_area_base_hva, PATTERN_CONFIDENCE_CHECK),
+ "Failed to mark memory intended as backing memory for TD shared memory");
+
+ sync_global_to_guest(vm, test_area_gpa_private);
+ test_area_gpa_shared = (struct tdx_upm_test_area *)
+ ((uint64_t)test_area_gpa_private | BIT_ULL(vm->pa_bits - 1));
+ sync_global_to_guest(vm, test_area_gpa_shared);
+
+ td_finalize(vm);
+
+ printf("Verifying UPM functionality: explicit MapGPA\n");
+
+ run_selftest(vm, vcpu, test_area_base_hva);
+
+ kvm_vm_free(vm);
+}
+
+int main(int argc, char **argv)
+{
+ /* Disable stdout buffering */
+ setbuf(stdout, NULL);
+
+ if (!is_tdx_enabled()) {
+ printf("TDX is not supported by KVM\n"
+ "Skipping the TDX tests.\n");
+ return 0;
+ }
+
+ run_in_new_process(&verify_upm_test);
+}
--
2.41.0.487.g6d72f3e995-goog


2023-07-26 19:31:04

by Isaku Yamahata

Subject: Re: [PATCH v4 00/28] TDX KVM selftests

On Tue, Jul 25, 2023 at 10:00:53PM +0000,
Ryan Afranji <[email protected]> wrote:

> Hello,
>
> This is v4 of the patch series for TDX selftests.
>
> It has been updated for Intel’s v14 of the TDX host patches which was
> proposed here:
> https://lore.kernel.org/lkml/[email protected]/
>
> The tree can be found at:
> https://github.com/googleprodkernel/linux-cc/tree/tdx-selftests-rfc-v4
>
> Changes from RFC v3:
>
> In v14, TDX can only run with UPM enabled so the necessary changes were
> made to handle that.

Thank you for the updates. Let me give it a try with the v15 TDX KVM patch series.

--
Isaku Yamahata <[email protected]>