2024-03-14 23:31:20

by Sean Christopherson

Subject: [PATCH 00/18] KVM: selftests: Clean up x86's DT initialization

The vast majority of this series is x86 specific, and aims to clean up the
core library's handling of descriptor tables and segments. Currently, the
library (a) waits until vCPUs are created to allocate per-VM assets, and
(b) forces tests to opt in to allocating the structures needed to handle
exceptions, which has resulted in some rather odd tests, and makes it
unnecessarily difficult to debug unexpected exceptions.

By the end of this series, the descriptor tables, segments, and exception
handlers are allocated and installed when non-barebones VMs are created.

Patch 1 is a selftests-tree-wide change to drop kvm_util_base.h. The
existence of that file has baffled (and annoyed) me for quite a long time.
After rereading its initial changelog multiple times, I realized that the
_only_ reason it exists is so that files don't need to manually #include
ucall_common.h.

Patch 1 will obviously create conflicts all over the place, though with
the help of meld, I've found them all trivially easy to resolve. If no one
objects to the removal of kvm_util_base.h, I will try to bribe Paolo into
grabbing it early in the 6.10 cycle so that everyone can bring it into
the arch trees.

Ackerley Tng (1):
KVM: selftests: Fix off-by-one initialization of GDT limit

Sean Christopherson (17):
Revert "kvm: selftests: move base kvm_util.h declarations to
kvm_util_base.h"
KVM: selftests: Add kvm_util_types.h to hold common types, e.g.
vm_vaddr_t
KVM: selftests: Move GDT, IDT, and TSS fields to x86's kvm_vm_arch
KVM: selftests: Move platform_info_test's main assert into guest code
KVM: selftests: Rework platform_info_test to actually verify #GP
KVM: selftests: Explicitly clobber the IDT in the "delete memslot"
testcase
KVM: selftests: Move x86's descriptor table helpers "up" in
processor.c
KVM: selftests: Rename x86's vcpu_setup() to vcpu_init_sregs()
KVM: selftests: Init IDT and exception handlers for all VMs/vCPUs on
x86
KVM: selftests: Map x86's exception_handlers at VM creation, not vCPU
setup
KVM: selftests: Allocate x86's GDT during VM creation
KVM: selftests: Drop superfluous switch() on vm->mode in
vcpu_init_sregs()
KVM: selftests: Fold x86's descriptor tables helpers into
vcpu_init_sregs()
KVM: selftests: Allocate x86's TSS at VM creation
KVM: selftests: Add macro for TSS selector, rename up code/data macros
KVM: selftests: Init x86's segments during VM creation
KVM: selftests: Drop @selector from segment helpers

.../selftests/kvm/aarch64/arch_timer.c | 1 +
tools/testing/selftests/kvm/arch_timer.c | 1 +
.../selftests/kvm/demand_paging_test.c | 1 +
.../selftests/kvm/dirty_log_perf_test.c | 1 +
tools/testing/selftests/kvm/dirty_log_test.c | 1 +
.../testing/selftests/kvm/guest_memfd_test.c | 2 +-
.../testing/selftests/kvm/guest_print_test.c | 1 +
.../selftests/kvm/include/aarch64/processor.h | 2 +
.../selftests/kvm/include/aarch64/ucall.h | 2 +-
.../testing/selftests/kvm/include/kvm_util.h | 1111 +++++++++++++++-
.../selftests/kvm/include/kvm_util_base.h | 1135 -----------------
.../selftests/kvm/include/kvm_util_types.h | 20 +
.../selftests/kvm/include/s390x/ucall.h | 2 +-
.../kvm/include/x86_64/kvm_util_arch.h | 6 +
.../selftests/kvm/include/x86_64/processor.h | 5 +-
.../selftests/kvm/include/x86_64/ucall.h | 2 +-
.../selftests/kvm/kvm_page_table_test.c | 1 +
.../selftests/kvm/lib/aarch64/processor.c | 2 +
tools/testing/selftests/kvm/lib/kvm_util.c | 1 +
tools/testing/selftests/kvm/lib/memstress.c | 1 +
.../selftests/kvm/lib/riscv/processor.c | 1 +
.../testing/selftests/kvm/lib/ucall_common.c | 5 +-
.../selftests/kvm/lib/x86_64/processor.c | 305 ++---
.../testing/selftests/kvm/riscv/arch_timer.c | 1 +
tools/testing/selftests/kvm/rseq_test.c | 1 +
tools/testing/selftests/kvm/s390x/cmma_test.c | 1 +
tools/testing/selftests/kvm/s390x/memop.c | 1 +
tools/testing/selftests/kvm/s390x/tprot.c | 1 +
.../selftests/kvm/set_memory_region_test.c | 12 +
tools/testing/selftests/kvm/steal_time.c | 1 +
tools/testing/selftests/kvm/x86_64/amx_test.c | 2 -
.../x86_64/dirty_log_page_splitting_test.c | 1 +
.../x86_64/exit_on_emulation_failure_test.c | 2 +-
.../selftests/kvm/x86_64/fix_hypercall_test.c | 2 -
.../selftests/kvm/x86_64/hyperv_evmcs.c | 2 -
.../selftests/kvm/x86_64/hyperv_features.c | 6 -
.../testing/selftests/kvm/x86_64/hyperv_ipi.c | 3 -
.../selftests/kvm/x86_64/kvm_pv_test.c | 3 -
.../selftests/kvm/x86_64/monitor_mwait_test.c | 3 -
.../selftests/kvm/x86_64/platform_info_test.c | 59 +-
.../selftests/kvm/x86_64/pmu_counters_test.c | 3 -
.../kvm/x86_64/pmu_event_filter_test.c | 6 -
.../smaller_maxphyaddr_emulation_test.c | 3 -
.../selftests/kvm/x86_64/svm_int_ctl_test.c | 3 -
.../kvm/x86_64/svm_nested_shutdown_test.c | 5 +-
.../kvm/x86_64/svm_nested_soft_inject_test.c | 5 +-
.../kvm/x86_64/ucna_injection_test.c | 5 -
.../kvm/x86_64/userspace_msr_exit_test.c | 3 -
.../vmx_exception_with_invalid_guest_state.c | 3 -
.../selftests/kvm/x86_64/vmx_pmu_caps_test.c | 3 -
.../selftests/kvm/x86_64/xapic_ipi_test.c | 2 -
.../selftests/kvm/x86_64/xcr0_cpuid_test.c | 3 -
.../selftests/kvm/x86_64/xen_shinfo_test.c | 2 -
53 files changed, 1335 insertions(+), 1421 deletions(-)
delete mode 100644 tools/testing/selftests/kvm/include/kvm_util_base.h
create mode 100644 tools/testing/selftests/kvm/include/kvm_util_types.h


base-commit: e9a2bba476c8332ed547fce485c158d03b0b9659
--
2.44.0.291.gc1ea87d7ee-goog



2024-03-14 23:31:37

by Sean Christopherson

Subject: [PATCH 02/18] KVM: selftests: Add kvm_util_types.h to hold common types, e.g. vm_vaddr_t

Move the base types unique to KVM selftests out of kvm_util.h and into a
new header, kvm_util_types.h. This will allow kvm_util_arch.h, i.e. core
arch headers, to reference common types, e.g. vm_vaddr_t and vm_paddr_t.

No functional change intended.

Signed-off-by: Sean Christopherson <[email protected]>
---
.../testing/selftests/kvm/include/kvm_util.h | 16 +--------------
.../selftests/kvm/include/kvm_util_types.h | 20 +++++++++++++++++++
2 files changed, 21 insertions(+), 15 deletions(-)
create mode 100644 tools/testing/selftests/kvm/include/kvm_util_types.h

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 95baee5142a7..acdcddf78e3f 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -21,28 +21,14 @@
#include <sys/ioctl.h>

#include "kvm_util_arch.h"
+#include "kvm_util_types.h"
#include "sparsebit.h"

-/*
- * Provide a version of static_assert() that is guaranteed to have an optional
- * message param. If _ISOC11_SOURCE is defined, glibc (/usr/include/assert.h)
- * #undefs and #defines static_assert() as a direct alias to _Static_assert(),
- * i.e. effectively makes the message mandatory. Many KVM selftests #define
- * _GNU_SOURCE for various reasons, and _GNU_SOURCE implies _ISOC11_SOURCE. As
- * a result, static_assert() behavior is non-deterministic and may or may not
- * require a message depending on #include order.
- */
-#define __kvm_static_assert(expr, msg, ...) _Static_assert(expr, msg)
-#define kvm_static_assert(expr, ...) __kvm_static_assert(expr, ##__VA_ARGS__, #expr)
-
#define KVM_DEV_PATH "/dev/kvm"
#define KVM_MAX_VCPUS 512

#define NSEC_PER_SEC 1000000000L

-typedef uint64_t vm_paddr_t; /* Virtual Machine (Guest) physical address */
-typedef uint64_t vm_vaddr_t; /* Virtual Machine (Guest) virtual address */
-
struct userspace_mem_region {
struct kvm_userspace_memory_region2 region;
struct sparsebit *unused_phy_pages;
diff --git a/tools/testing/selftests/kvm/include/kvm_util_types.h b/tools/testing/selftests/kvm/include/kvm_util_types.h
new file mode 100644
index 000000000000..764491366eb9
--- /dev/null
+++ b/tools/testing/selftests/kvm/include/kvm_util_types.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef SELFTEST_KVM_UTIL_TYPES_H
+#define SELFTEST_KVM_UTIL_TYPES_H
+
+/*
+ * Provide a version of static_assert() that is guaranteed to have an optional
+ * message param. If _ISOC11_SOURCE is defined, glibc (/usr/include/assert.h)
+ * #undefs and #defines static_assert() as a direct alias to _Static_assert(),
+ * i.e. effectively makes the message mandatory. Many KVM selftests #define
+ * _GNU_SOURCE for various reasons, and _GNU_SOURCE implies _ISOC11_SOURCE. As
+ * a result, static_assert() behavior is non-deterministic and may or may not
+ * require a message depending on #include order.
+ */
+#define __kvm_static_assert(expr, msg, ...) _Static_assert(expr, msg)
+#define kvm_static_assert(expr, ...) __kvm_static_assert(expr, ##__VA_ARGS__, #expr)
+
+typedef uint64_t vm_paddr_t; /* Virtual Machine (Guest) physical address */
+typedef uint64_t vm_vaddr_t; /* Virtual Machine (Guest) virtual address */
+
+#endif /* SELFTEST_KVM_UTIL_TYPES_H */
--
2.44.0.291.gc1ea87d7ee-goog


2024-03-14 23:32:13

by Sean Christopherson

Subject: [PATCH 04/18] KVM: selftests: Fix off-by-one initialization of GDT limit

From: Ackerley Tng <[email protected]>

Fix an off-by-one bug in the initialization of the GDT limit, which as
defined in the SDM is inclusive, not exclusive.

Note, vcpu_init_descriptor_tables() gets the limit correct; it's only
vcpu_setup() that is broken, i.e. only tests that _don't_ invoke
vcpu_init_descriptor_tables() can have problems. The fact that the
selftests effectively initialize the GDT twice will be cleaned up in the
near future.

Signed-off-by: Ackerley Tng <[email protected]>
[sean: rewrite changelog]
Signed-off-by: Sean Christopherson <[email protected]>
---
tools/testing/selftests/kvm/lib/x86_64/processor.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 45f965c052a1..eaeba907bb53 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -522,7 +522,7 @@ static void kvm_setup_gdt(struct kvm_vm *vm, struct kvm_dtable *dt)
vm->arch.gdt = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);

dt->base = vm->arch.gdt;
- dt->limit = getpagesize();
+ dt->limit = getpagesize() - 1;
}

static void kvm_setup_tss_64bit(struct kvm_vm *vm, struct kvm_segment *segp,
--
2.44.0.291.gc1ea87d7ee-goog


2024-03-14 23:32:14

by Sean Christopherson

Subject: [PATCH 03/18] KVM: selftests: Move GDT, IDT, and TSS fields to x86's kvm_vm_arch

Now that kvm_vm_arch exists, move the GDT, IDT, and TSS fields to x86's
implementation, as the structures are firmly x86-only.

Signed-off-by: Sean Christopherson <[email protected]>
---
.../testing/selftests/kvm/include/kvm_util.h | 3 ---
.../kvm/include/x86_64/kvm_util_arch.h | 6 +++++
.../selftests/kvm/lib/x86_64/processor.c | 22 +++++++++----------
.../kvm/x86_64/svm_nested_shutdown_test.c | 2 +-
.../kvm/x86_64/svm_nested_soft_inject_test.c | 2 +-
5 files changed, 19 insertions(+), 16 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index acdcddf78e3f..58d6a4d6ce4f 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -94,9 +94,6 @@ struct kvm_vm {
bool pgd_created;
vm_paddr_t ucall_mmio_addr;
vm_paddr_t pgd;
- vm_vaddr_t gdt;
- vm_vaddr_t tss;
- vm_vaddr_t idt;
vm_vaddr_t handlers;
uint32_t dirty_ring_size;
uint64_t gpa_tag_mask;
diff --git a/tools/testing/selftests/kvm/include/x86_64/kvm_util_arch.h b/tools/testing/selftests/kvm/include/x86_64/kvm_util_arch.h
index 9f1725192aa2..b14ff3a88b4a 100644
--- a/tools/testing/selftests/kvm/include/x86_64/kvm_util_arch.h
+++ b/tools/testing/selftests/kvm/include/x86_64/kvm_util_arch.h
@@ -5,7 +5,13 @@
#include <stdbool.h>
#include <stdint.h>

+#include "kvm_util_types.h"
+
struct kvm_vm_arch {
+ vm_vaddr_t gdt;
+ vm_vaddr_t tss;
+ vm_vaddr_t idt;
+
uint64_t c_bit;
uint64_t s_bit;
int sev_fd;
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 74a4c736c9ae..45f965c052a1 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -417,7 +417,7 @@ static void kvm_seg_set_unusable(struct kvm_segment *segp)

static void kvm_seg_fill_gdt_64bit(struct kvm_vm *vm, struct kvm_segment *segp)
{
- void *gdt = addr_gva2hva(vm, vm->gdt);
+ void *gdt = addr_gva2hva(vm, vm->arch.gdt);
struct desc64 *desc = gdt + (segp->selector >> 3) * 8;

desc->limit0 = segp->limit & 0xFFFF;
@@ -518,21 +518,21 @@ vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)

static void kvm_setup_gdt(struct kvm_vm *vm, struct kvm_dtable *dt)
{
- if (!vm->gdt)
- vm->gdt = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
+ if (!vm->arch.gdt)
+ vm->arch.gdt = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);

- dt->base = vm->gdt;
+ dt->base = vm->arch.gdt;
dt->limit = getpagesize();
}

static void kvm_setup_tss_64bit(struct kvm_vm *vm, struct kvm_segment *segp,
int selector)
{
- if (!vm->tss)
- vm->tss = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
+ if (!vm->arch.tss)
+ vm->arch.tss = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);

memset(segp, 0, sizeof(*segp));
- segp->base = vm->tss;
+ segp->base = vm->arch.tss;
segp->limit = 0x67;
segp->selector = selector;
segp->type = 0xb;
@@ -1091,7 +1091,7 @@ static void set_idt_entry(struct kvm_vm *vm, int vector, unsigned long addr,
int dpl, unsigned short selector)
{
struct idt_entry *base =
- (struct idt_entry *)addr_gva2hva(vm, vm->idt);
+ (struct idt_entry *)addr_gva2hva(vm, vm->arch.idt);
struct idt_entry *e = &base[vector];

memset(e, 0, sizeof(*e));
@@ -1144,7 +1144,7 @@ void vm_init_descriptor_tables(struct kvm_vm *vm)
extern void *idt_handlers;
int i;

- vm->idt = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
+ vm->arch.idt = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
vm->handlers = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
/* Handlers have the same address in both address spaces.*/
for (i = 0; i < NUM_INTERRUPTS; i++)
@@ -1158,9 +1158,9 @@ void vcpu_init_descriptor_tables(struct kvm_vcpu *vcpu)
struct kvm_sregs sregs;

vcpu_sregs_get(vcpu, &sregs);
- sregs.idt.base = vm->idt;
+ sregs.idt.base = vm->arch.idt;
sregs.idt.limit = NUM_INTERRUPTS * sizeof(struct idt_entry) - 1;
- sregs.gdt.base = vm->gdt;
+ sregs.gdt.base = vm->arch.gdt;
sregs.gdt.limit = getpagesize() - 1;
kvm_seg_set_kernel_data_64bit(NULL, DEFAULT_DATA_SELECTOR, &sregs.gs);
vcpu_sregs_set(vcpu, &sregs);
diff --git a/tools/testing/selftests/kvm/x86_64/svm_nested_shutdown_test.c b/tools/testing/selftests/kvm/x86_64/svm_nested_shutdown_test.c
index d6fcdcc3af31..f4a1137e04ab 100644
--- a/tools/testing/selftests/kvm/x86_64/svm_nested_shutdown_test.c
+++ b/tools/testing/selftests/kvm/x86_64/svm_nested_shutdown_test.c
@@ -53,7 +53,7 @@ int main(int argc, char *argv[])

vcpu_alloc_svm(vm, &svm_gva);

- vcpu_args_set(vcpu, 2, svm_gva, vm->idt);
+ vcpu_args_set(vcpu, 2, svm_gva, vm->arch.idt);

vcpu_run(vcpu);
TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_SHUTDOWN);
diff --git a/tools/testing/selftests/kvm/x86_64/svm_nested_soft_inject_test.c b/tools/testing/selftests/kvm/x86_64/svm_nested_soft_inject_test.c
index 0c7ce3d4e83a..2478a9e50743 100644
--- a/tools/testing/selftests/kvm/x86_64/svm_nested_soft_inject_test.c
+++ b/tools/testing/selftests/kvm/x86_64/svm_nested_soft_inject_test.c
@@ -166,7 +166,7 @@ static void run_test(bool is_nmi)

idt_alt_vm = vm_vaddr_alloc_page(vm);
idt_alt = addr_gva2hva(vm, idt_alt_vm);
- idt = addr_gva2hva(vm, vm->idt);
+ idt = addr_gva2hva(vm, vm->arch.idt);
memcpy(idt_alt, idt, getpagesize());
} else {
idt_alt_vm = 0;
--
2.44.0.291.gc1ea87d7ee-goog


2024-03-14 23:33:44

by Sean Christopherson

Subject: [PATCH 08/18] KVM: selftests: Move x86's descriptor table helpers "up" in processor.c

Move x86's various descriptor table helpers in processor.c up above
kvm_arch_vm_post_create() and vcpu_setup() so that the helpers can be
made static and invoked from the aforementioned functions.

No functional change intended.

Signed-off-by: Sean Christopherson <[email protected]>
---
.../selftests/kvm/lib/x86_64/processor.c | 191 +++++++++---------
1 file changed, 95 insertions(+), 96 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index eaeba907bb53..3640d3290f0a 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -540,6 +540,21 @@ static void kvm_setup_tss_64bit(struct kvm_vm *vm, struct kvm_segment *segp,
kvm_seg_fill_gdt_64bit(vm, segp);
}

+void vcpu_init_descriptor_tables(struct kvm_vcpu *vcpu)
+{
+ struct kvm_vm *vm = vcpu->vm;
+ struct kvm_sregs sregs;
+
+ vcpu_sregs_get(vcpu, &sregs);
+ sregs.idt.base = vm->arch.idt;
+ sregs.idt.limit = NUM_INTERRUPTS * sizeof(struct idt_entry) - 1;
+ sregs.gdt.base = vm->arch.gdt;
+ sregs.gdt.limit = getpagesize() - 1;
+ kvm_seg_set_kernel_data_64bit(NULL, DEFAULT_DATA_SELECTOR, &sregs.gs);
+ vcpu_sregs_set(vcpu, &sregs);
+ *(vm_vaddr_t *)addr_gva2hva(vm, (vm_vaddr_t)(&exception_handlers)) = vm->handlers;
+}
+
static void vcpu_setup(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
{
struct kvm_sregs sregs;
@@ -572,6 +587,86 @@ static void vcpu_setup(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
vcpu_sregs_set(vcpu, &sregs);
}

+static void set_idt_entry(struct kvm_vm *vm, int vector, unsigned long addr,
+ int dpl, unsigned short selector)
+{
+ struct idt_entry *base =
+ (struct idt_entry *)addr_gva2hva(vm, vm->arch.idt);
+ struct idt_entry *e = &base[vector];
+
+ memset(e, 0, sizeof(*e));
+ e->offset0 = addr;
+ e->selector = selector;
+ e->ist = 0;
+ e->type = 14;
+ e->dpl = dpl;
+ e->p = 1;
+ e->offset1 = addr >> 16;
+ e->offset2 = addr >> 32;
+}
+
+static bool kvm_fixup_exception(struct ex_regs *regs)
+{
+ if (regs->r9 != KVM_EXCEPTION_MAGIC || regs->rip != regs->r10)
+ return false;
+
+ if (regs->vector == DE_VECTOR)
+ return false;
+
+ regs->rip = regs->r11;
+ regs->r9 = regs->vector;
+ regs->r10 = regs->error_code;
+ return true;
+}
+
+void route_exception(struct ex_regs *regs)
+{
+ typedef void(*handler)(struct ex_regs *);
+ handler *handlers = (handler *)exception_handlers;
+
+ if (handlers && handlers[regs->vector]) {
+ handlers[regs->vector](regs);
+ return;
+ }
+
+ if (kvm_fixup_exception(regs))
+ return;
+
+ ucall_assert(UCALL_UNHANDLED,
+ "Unhandled exception in guest", __FILE__, __LINE__,
+ "Unhandled exception '0x%lx' at guest RIP '0x%lx'",
+ regs->vector, regs->rip);
+}
+
+void vm_init_descriptor_tables(struct kvm_vm *vm)
+{
+ extern void *idt_handlers;
+ int i;
+
+ vm->arch.idt = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
+ vm->handlers = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
+ /* Handlers have the same address in both address spaces.*/
+ for (i = 0; i < NUM_INTERRUPTS; i++)
+ set_idt_entry(vm, i, (unsigned long)(&idt_handlers)[i], 0,
+ DEFAULT_CODE_SELECTOR);
+}
+
+void vm_install_exception_handler(struct kvm_vm *vm, int vector,
+ void (*handler)(struct ex_regs *))
+{
+ vm_vaddr_t *handlers = (vm_vaddr_t *)addr_gva2hva(vm, vm->handlers);
+
+ handlers[vector] = (vm_vaddr_t)handler;
+}
+
+void assert_on_unhandled_exception(struct kvm_vcpu *vcpu)
+{
+ struct ucall uc;
+
+ if (get_ucall(vcpu, &uc) == UCALL_UNHANDLED)
+ REPORT_GUEST_ASSERT(uc);
+}
+
void kvm_arch_vm_post_create(struct kvm_vm *vm)
{
vm_create_irqchip(vm);
@@ -1087,102 +1182,6 @@ void kvm_init_vm_address_properties(struct kvm_vm *vm)
}
}

-static void set_idt_entry(struct kvm_vm *vm, int vector, unsigned long addr,
- int dpl, unsigned short selector)
-{
- struct idt_entry *base =
- (struct idt_entry *)addr_gva2hva(vm, vm->arch.idt);
- struct idt_entry *e = &base[vector];
-
- memset(e, 0, sizeof(*e));
- e->offset0 = addr;
- e->selector = selector;
- e->ist = 0;
- e->type = 14;
- e->dpl = dpl;
- e->p = 1;
- e->offset1 = addr >> 16;
- e->offset2 = addr >> 32;
-}
-
-
-static bool kvm_fixup_exception(struct ex_regs *regs)
-{
- if (regs->r9 != KVM_EXCEPTION_MAGIC || regs->rip != regs->r10)
- return false;
-
- if (regs->vector == DE_VECTOR)
- return false;
-
- regs->rip = regs->r11;
- regs->r9 = regs->vector;
- regs->r10 = regs->error_code;
- return true;
-}
-
-void route_exception(struct ex_regs *regs)
-{
- typedef void(*handler)(struct ex_regs *);
- handler *handlers = (handler *)exception_handlers;
-
- if (handlers && handlers[regs->vector]) {
- handlers[regs->vector](regs);
- return;
- }
-
- if (kvm_fixup_exception(regs))
- return;
-
- ucall_assert(UCALL_UNHANDLED,
- "Unhandled exception in guest", __FILE__, __LINE__,
- "Unhandled exception '0x%lx' at guest RIP '0x%lx'",
- regs->vector, regs->rip);
-}
-
-void vm_init_descriptor_tables(struct kvm_vm *vm)
-{
- extern void *idt_handlers;
- int i;
-
- vm->arch.idt = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
- vm->handlers = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
- /* Handlers have the same address in both address spaces.*/
- for (i = 0; i < NUM_INTERRUPTS; i++)
- set_idt_entry(vm, i, (unsigned long)(&idt_handlers)[i], 0,
- DEFAULT_CODE_SELECTOR);
-}
-
-void vcpu_init_descriptor_tables(struct kvm_vcpu *vcpu)
-{
- struct kvm_vm *vm = vcpu->vm;
- struct kvm_sregs sregs;
-
- vcpu_sregs_get(vcpu, &sregs);
- sregs.idt.base = vm->arch.idt;
- sregs.idt.limit = NUM_INTERRUPTS * sizeof(struct idt_entry) - 1;
- sregs.gdt.base = vm->arch.gdt;
- sregs.gdt.limit = getpagesize() - 1;
- kvm_seg_set_kernel_data_64bit(NULL, DEFAULT_DATA_SELECTOR, &sregs.gs);
- vcpu_sregs_set(vcpu, &sregs);
- *(vm_vaddr_t *)addr_gva2hva(vm, (vm_vaddr_t)(&exception_handlers)) = vm->handlers;
-}
-
-void vm_install_exception_handler(struct kvm_vm *vm, int vector,
- void (*handler)(struct ex_regs *))
-{
- vm_vaddr_t *handlers = (vm_vaddr_t *)addr_gva2hva(vm, vm->handlers);
-
- handlers[vector] = (vm_vaddr_t)handler;
-}
-
-void assert_on_unhandled_exception(struct kvm_vcpu *vcpu)
-{
- struct ucall uc;
-
- if (get_ucall(vcpu, &uc) == UCALL_UNHANDLED)
- REPORT_GUEST_ASSERT(uc);
-}
-
const struct kvm_cpuid_entry2 *get_cpuid_entry(const struct kvm_cpuid2 *cpuid,
uint32_t function, uint32_t index)
{
--
2.44.0.291.gc1ea87d7ee-goog


2024-03-14 23:34:32

by Sean Christopherson

Subject: [PATCH 10/18] KVM: selftests: Init IDT and exception handlers for all VMs/vCPUs on x86

Initialize the IDT and exception handlers for all non-barebones VMs and
vCPUs on x86. Forcing tests to manually configure the IDT just to save
8KiB of memory is a terrible tradeoff; it also leads to weird tests
(multiple tests have deliberately relied on shutdown to indicate success)
and to hard-to-debug failures, e.g. instead of a precise unexpected
exception report, tests see only shutdown.

Signed-off-by: Sean Christopherson <[email protected]>
---
tools/testing/selftests/kvm/include/x86_64/processor.h | 2 --
tools/testing/selftests/kvm/lib/x86_64/processor.c | 8 ++++++--
tools/testing/selftests/kvm/x86_64/amx_test.c | 2 --
tools/testing/selftests/kvm/x86_64/fix_hypercall_test.c | 2 --
tools/testing/selftests/kvm/x86_64/hyperv_evmcs.c | 2 --
tools/testing/selftests/kvm/x86_64/hyperv_features.c | 6 ------
tools/testing/selftests/kvm/x86_64/hyperv_ipi.c | 3 ---
tools/testing/selftests/kvm/x86_64/kvm_pv_test.c | 3 ---
tools/testing/selftests/kvm/x86_64/monitor_mwait_test.c | 3 ---
tools/testing/selftests/kvm/x86_64/platform_info_test.c | 3 ---
tools/testing/selftests/kvm/x86_64/pmu_counters_test.c | 3 ---
.../testing/selftests/kvm/x86_64/pmu_event_filter_test.c | 6 ------
.../kvm/x86_64/smaller_maxphyaddr_emulation_test.c | 3 ---
tools/testing/selftests/kvm/x86_64/svm_int_ctl_test.c | 3 ---
.../selftests/kvm/x86_64/svm_nested_shutdown_test.c | 3 ---
.../selftests/kvm/x86_64/svm_nested_soft_inject_test.c | 3 ---
tools/testing/selftests/kvm/x86_64/ucna_injection_test.c | 4 ----
.../selftests/kvm/x86_64/userspace_msr_exit_test.c | 3 ---
.../kvm/x86_64/vmx_exception_with_invalid_guest_state.c | 3 ---
tools/testing/selftests/kvm/x86_64/vmx_pmu_caps_test.c | 3 ---
tools/testing/selftests/kvm/x86_64/xapic_ipi_test.c | 2 --
tools/testing/selftests/kvm/x86_64/xcr0_cpuid_test.c | 3 ---
tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c | 2 --
23 files changed, 6 insertions(+), 69 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index d6ffe03c9d0b..4804abe00158 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -1129,8 +1129,6 @@ struct idt_entry {
uint32_t offset2; uint32_t reserved;
};

-void vm_init_descriptor_tables(struct kvm_vm *vm);
-void vcpu_init_descriptor_tables(struct kvm_vcpu *vcpu);
void vm_install_exception_handler(struct kvm_vm *vm, int vector,
void (*handler)(struct ex_regs *));

diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index d6bfe96a6a77..5813d93b2e7c 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -540,7 +540,7 @@ static void kvm_setup_tss_64bit(struct kvm_vm *vm, struct kvm_segment *segp,
kvm_seg_fill_gdt_64bit(vm, segp);
}

-void vcpu_init_descriptor_tables(struct kvm_vcpu *vcpu)
+static void vcpu_init_descriptor_tables(struct kvm_vcpu *vcpu)
{
struct kvm_vm *vm = vcpu->vm;
struct kvm_sregs sregs;
@@ -585,6 +585,8 @@ static void vcpu_init_sregs(struct kvm_vm *vm, struct kvm_vcpu *vcpu)

sregs.cr3 = vm->pgd;
vcpu_sregs_set(vcpu, &sregs);
+
+ vcpu_init_descriptor_tables(vcpu);
}

static void set_idt_entry(struct kvm_vm *vm, int vector, unsigned long addr,
@@ -638,7 +640,7 @@ void route_exception(struct ex_regs *regs)
regs->vector, regs->rip);
}

-void vm_init_descriptor_tables(struct kvm_vm *vm)
+static void vm_init_descriptor_tables(struct kvm_vm *vm)
{
extern void *idt_handlers;
int i;
@@ -670,6 +672,8 @@ void assert_on_unhandled_exception(struct kvm_vcpu *vcpu)
void kvm_arch_vm_post_create(struct kvm_vm *vm)
{
vm_create_irqchip(vm);
+ vm_init_descriptor_tables(vm);
+
sync_global_to_guest(vm, host_cpu_is_intel);
sync_global_to_guest(vm, host_cpu_is_amd);

diff --git a/tools/testing/selftests/kvm/x86_64/amx_test.c b/tools/testing/selftests/kvm/x86_64/amx_test.c
index eae521f050e0..ab6c31aee447 100644
--- a/tools/testing/selftests/kvm/x86_64/amx_test.c
+++ b/tools/testing/selftests/kvm/x86_64/amx_test.c
@@ -246,8 +246,6 @@ int main(int argc, char *argv[])
vcpu_regs_get(vcpu, &regs1);

/* Register #NM handler */
- vm_init_descriptor_tables(vm);
- vcpu_init_descriptor_tables(vcpu);
vm_install_exception_handler(vm, NM_VECTOR, guest_nm_handler);

/* amx cfg for guest_code */
diff --git a/tools/testing/selftests/kvm/x86_64/fix_hypercall_test.c b/tools/testing/selftests/kvm/x86_64/fix_hypercall_test.c
index f3c2239228b1..762628f7d4ba 100644
--- a/tools/testing/selftests/kvm/x86_64/fix_hypercall_test.c
+++ b/tools/testing/selftests/kvm/x86_64/fix_hypercall_test.c
@@ -110,8 +110,6 @@ static void test_fix_hypercall(struct kvm_vcpu *vcpu, bool disable_quirk)
{
struct kvm_vm *vm = vcpu->vm;

- vm_init_descriptor_tables(vm);
- vcpu_init_descriptor_tables(vcpu);
vm_install_exception_handler(vcpu->vm, UD_VECTOR, guest_ud_handler);

if (disable_quirk)
diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_evmcs.c b/tools/testing/selftests/kvm/x86_64/hyperv_evmcs.c
index 4c7257ecd2a6..4238691a755c 100644
--- a/tools/testing/selftests/kvm/x86_64/hyperv_evmcs.c
+++ b/tools/testing/selftests/kvm/x86_64/hyperv_evmcs.c
@@ -258,8 +258,6 @@ int main(int argc, char *argv[])
vcpu_args_set(vcpu, 3, vmx_pages_gva, hv_pages_gva, addr_gva2gpa(vm, hcall_page));
vcpu_set_msr(vcpu, HV_X64_MSR_VP_INDEX, vcpu->id);

- vm_init_descriptor_tables(vm);
- vcpu_init_descriptor_tables(vcpu);
vm_install_exception_handler(vm, UD_VECTOR, guest_ud_handler);
vm_install_exception_handler(vm, NMI_VECTOR, guest_nmi_handler);

diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_features.c b/tools/testing/selftests/kvm/x86_64/hyperv_features.c
index b923a285e96f..068e9c69710d 100644
--- a/tools/testing/selftests/kvm/x86_64/hyperv_features.c
+++ b/tools/testing/selftests/kvm/x86_64/hyperv_features.c
@@ -156,9 +156,6 @@ static void guest_test_msrs_access(void)
vcpu_init_cpuid(vcpu, prev_cpuid);
}

- vm_init_descriptor_tables(vm);
- vcpu_init_descriptor_tables(vcpu);
-
/* TODO: Make this entire test easier to maintain. */
if (stage >= 21)
vcpu_enable_cap(vcpu, KVM_CAP_HYPERV_SYNIC2, 0);
@@ -532,9 +529,6 @@ static void guest_test_hcalls_access(void)
while (true) {
vm = vm_create_with_one_vcpu(&vcpu, guest_hcall);

- vm_init_descriptor_tables(vm);
- vcpu_init_descriptor_tables(vcpu);
-
/* Hypercall input/output */
hcall_page = vm_vaddr_alloc_pages(vm, 2);
memset(addr_gva2hva(vm, hcall_page), 0x0, 2 * getpagesize());
diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_ipi.c b/tools/testing/selftests/kvm/x86_64/hyperv_ipi.c
index f1617762c22f..c6a03141cdaa 100644
--- a/tools/testing/selftests/kvm/x86_64/hyperv_ipi.c
+++ b/tools/testing/selftests/kvm/x86_64/hyperv_ipi.c
@@ -256,16 +256,13 @@ int main(int argc, char *argv[])
hcall_page = vm_vaddr_alloc_pages(vm, 2);
memset(addr_gva2hva(vm, hcall_page), 0x0, 2 * getpagesize());

- vm_init_descriptor_tables(vm);

vcpu[1] = vm_vcpu_add(vm, RECEIVER_VCPU_ID_1, receiver_code);
- vcpu_init_descriptor_tables(vcpu[1]);
vcpu_args_set(vcpu[1], 2, hcall_page, addr_gva2gpa(vm, hcall_page));
vcpu_set_msr(vcpu[1], HV_X64_MSR_VP_INDEX, RECEIVER_VCPU_ID_1);
vcpu_set_hv_cpuid(vcpu[1]);

vcpu[2] = vm_vcpu_add(vm, RECEIVER_VCPU_ID_2, receiver_code);
- vcpu_init_descriptor_tables(vcpu[2]);
vcpu_args_set(vcpu[2], 2, hcall_page, addr_gva2gpa(vm, hcall_page));
vcpu_set_msr(vcpu[2], HV_X64_MSR_VP_INDEX, RECEIVER_VCPU_ID_2);
vcpu_set_hv_cpuid(vcpu[2]);
diff --git a/tools/testing/selftests/kvm/x86_64/kvm_pv_test.c b/tools/testing/selftests/kvm/x86_64/kvm_pv_test.c
index 9e2879af7c20..cef0bd80038b 100644
--- a/tools/testing/selftests/kvm/x86_64/kvm_pv_test.c
+++ b/tools/testing/selftests/kvm/x86_64/kvm_pv_test.c
@@ -146,9 +146,6 @@ int main(void)

vcpu_clear_cpuid_entry(vcpu, KVM_CPUID_FEATURES);

- vm_init_descriptor_tables(vm);
- vcpu_init_descriptor_tables(vcpu);
-
enter_guest(vcpu);
kvm_vm_free(vm);
}
diff --git a/tools/testing/selftests/kvm/x86_64/monitor_mwait_test.c b/tools/testing/selftests/kvm/x86_64/monitor_mwait_test.c
index 853802641e1e..9c8445379d76 100644
--- a/tools/testing/selftests/kvm/x86_64/monitor_mwait_test.c
+++ b/tools/testing/selftests/kvm/x86_64/monitor_mwait_test.c
@@ -80,9 +80,6 @@ int main(int argc, char *argv[])
vm = vm_create_with_one_vcpu(&vcpu, guest_code);
vcpu_clear_cpuid_feature(vcpu, X86_FEATURE_MWAIT);

- vm_init_descriptor_tables(vm);
- vcpu_init_descriptor_tables(vcpu);
-
while (1) {
vcpu_run(vcpu);
TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_IO);
diff --git a/tools/testing/selftests/kvm/x86_64/platform_info_test.c b/tools/testing/selftests/kvm/x86_64/platform_info_test.c
index 6300bb70f028..9cf2b9fbf459 100644
--- a/tools/testing/selftests/kvm/x86_64/platform_info_test.c
+++ b/tools/testing/selftests/kvm/x86_64/platform_info_test.c
@@ -51,9 +51,6 @@ int main(int argc, char *argv[])

vm = vm_create_with_one_vcpu(&vcpu, guest_code);

- vm_init_descriptor_tables(vm);
- vcpu_init_descriptor_tables(vcpu);
-
msr_platform_info = vcpu_get_msr(vcpu, MSR_PLATFORM_INFO);
vcpu_set_msr(vcpu, MSR_PLATFORM_INFO,
msr_platform_info | MSR_PLATFORM_INFO_MAX_TURBO_RATIO);
diff --git a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
index 29609b52f8fa..ff6d21d148de 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
@@ -31,9 +31,6 @@ static struct kvm_vm *pmu_vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
struct kvm_vm *vm;

vm = vm_create_with_one_vcpu(vcpu, guest_code);
- vm_init_descriptor_tables(vm);
- vcpu_init_descriptor_tables(*vcpu);
-
sync_global_to_guest(vm, kvm_pmu_version);
sync_global_to_guest(vm, is_forced_emulation_enabled);

diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
index 3c85d1ae9893..5cbe9d331acb 100644
--- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
+++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
@@ -337,9 +337,6 @@ static void test_pmu_config_disable(void (*guest_code)(void))
vm_enable_cap(vm, KVM_CAP_PMU_CAPABILITY, KVM_PMU_CAP_DISABLE);

vcpu = vm_vcpu_add(vm, 0, guest_code);
- vm_init_descriptor_tables(vm);
- vcpu_init_descriptor_tables(vcpu);
-
TEST_ASSERT(!sanity_check_pmu(vcpu),
"Guest should not be able to use disabled PMU.");

@@ -876,9 +873,6 @@ int main(int argc, char *argv[])

vm = vm_create_with_one_vcpu(&vcpu, guest_code);

- vm_init_descriptor_tables(vm);
- vcpu_init_descriptor_tables(vcpu);
-
TEST_REQUIRE(sanity_check_pmu(vcpu));

if (use_amd_pmu())
diff --git a/tools/testing/selftests/kvm/x86_64/smaller_maxphyaddr_emulation_test.c b/tools/testing/selftests/kvm/x86_64/smaller_maxphyaddr_emulation_test.c
index 416207c38a17..0d682d6b76f1 100644
--- a/tools/testing/selftests/kvm/x86_64/smaller_maxphyaddr_emulation_test.c
+++ b/tools/testing/selftests/kvm/x86_64/smaller_maxphyaddr_emulation_test.c
@@ -60,9 +60,6 @@ int main(int argc, char *argv[])
vm = vm_create_with_one_vcpu(&vcpu, guest_code);
vcpu_args_set(vcpu, 1, kvm_is_tdp_enabled());

- vm_init_descriptor_tables(vm);
- vcpu_init_descriptor_tables(vcpu);
-
vcpu_set_cpuid_property(vcpu, X86_PROPERTY_MAX_PHY_ADDR, MAXPHYADDR);

rc = kvm_check_cap(KVM_CAP_EXIT_ON_EMULATION_FAILURE);
diff --git a/tools/testing/selftests/kvm/x86_64/svm_int_ctl_test.c b/tools/testing/selftests/kvm/x86_64/svm_int_ctl_test.c
index 32bef39bec21..916e04248fbb 100644
--- a/tools/testing/selftests/kvm/x86_64/svm_int_ctl_test.c
+++ b/tools/testing/selftests/kvm/x86_64/svm_int_ctl_test.c
@@ -93,9 +93,6 @@ int main(int argc, char *argv[])

vm = vm_create_with_one_vcpu(&vcpu, l1_guest_code);

- vm_init_descriptor_tables(vm);
- vcpu_init_descriptor_tables(vcpu);
-
vm_install_exception_handler(vm, VINTR_IRQ_NUMBER, vintr_irq_handler);
vm_install_exception_handler(vm, INTR_IRQ_NUMBER, intr_irq_handler);

diff --git a/tools/testing/selftests/kvm/x86_64/svm_nested_shutdown_test.c b/tools/testing/selftests/kvm/x86_64/svm_nested_shutdown_test.c
index f4a1137e04ab..00135cbba35e 100644
--- a/tools/testing/selftests/kvm/x86_64/svm_nested_shutdown_test.c
+++ b/tools/testing/selftests/kvm/x86_64/svm_nested_shutdown_test.c
@@ -48,9 +48,6 @@ int main(int argc, char *argv[])
TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_SVM));

vm = vm_create_with_one_vcpu(&vcpu, l1_guest_code);
- vm_init_descriptor_tables(vm);
- vcpu_init_descriptor_tables(vcpu);
-
vcpu_alloc_svm(vm, &svm_gva);

vcpu_args_set(vcpu, 2, svm_gva, vm->arch.idt);
diff --git a/tools/testing/selftests/kvm/x86_64/svm_nested_soft_inject_test.c b/tools/testing/selftests/kvm/x86_64/svm_nested_soft_inject_test.c
index 2478a9e50743..7b6481d6c0d3 100644
--- a/tools/testing/selftests/kvm/x86_64/svm_nested_soft_inject_test.c
+++ b/tools/testing/selftests/kvm/x86_64/svm_nested_soft_inject_test.c
@@ -152,9 +152,6 @@ static void run_test(bool is_nmi)

vm = vm_create_with_one_vcpu(&vcpu, l1_guest_code);

- vm_init_descriptor_tables(vm);
- vcpu_init_descriptor_tables(vcpu);
-
vm_install_exception_handler(vm, NMI_VECTOR, guest_nmi_handler);
vm_install_exception_handler(vm, BP_VECTOR, guest_bp_handler);
vm_install_exception_handler(vm, INT_NR, guest_int_handler);
diff --git a/tools/testing/selftests/kvm/x86_64/ucna_injection_test.c b/tools/testing/selftests/kvm/x86_64/ucna_injection_test.c
index bc9be20f9600..6eeb5dd1e65c 100644
--- a/tools/testing/selftests/kvm/x86_64/ucna_injection_test.c
+++ b/tools/testing/selftests/kvm/x86_64/ucna_injection_test.c
@@ -284,10 +284,6 @@ int main(int argc, char *argv[])
cmcidis_vcpu = create_vcpu_with_mce_cap(vm, 1, false, cmci_disabled_guest_code);
cmci_vcpu = create_vcpu_with_mce_cap(vm, 2, true, cmci_enabled_guest_code);

- vm_init_descriptor_tables(vm);
- vcpu_init_descriptor_tables(ucna_vcpu);
- vcpu_init_descriptor_tables(cmcidis_vcpu);
- vcpu_init_descriptor_tables(cmci_vcpu);
vm_install_exception_handler(vm, CMCI_VECTOR, guest_cmci_handler);
vm_install_exception_handler(vm, GP_VECTOR, guest_gp_handler);

diff --git a/tools/testing/selftests/kvm/x86_64/userspace_msr_exit_test.c b/tools/testing/selftests/kvm/x86_64/userspace_msr_exit_test.c
index f4f61a2d2464..fffda40e286f 100644
--- a/tools/testing/selftests/kvm/x86_64/userspace_msr_exit_test.c
+++ b/tools/testing/selftests/kvm/x86_64/userspace_msr_exit_test.c
@@ -531,9 +531,6 @@ KVM_ONE_VCPU_TEST(user_msr, msr_filter_allow, guest_code_filter_allow)

vm_ioctl(vm, KVM_X86_SET_MSR_FILTER, &filter_allow);

- vm_init_descriptor_tables(vm);
- vcpu_init_descriptor_tables(vcpu);
-
vm_install_exception_handler(vm, GP_VECTOR, guest_gp_handler);

/* Process guest code userspace exits. */
diff --git a/tools/testing/selftests/kvm/x86_64/vmx_exception_with_invalid_guest_state.c b/tools/testing/selftests/kvm/x86_64/vmx_exception_with_invalid_guest_state.c
index fad3634fd9eb..3fd6eceab46f 100644
--- a/tools/testing/selftests/kvm/x86_64/vmx_exception_with_invalid_guest_state.c
+++ b/tools/testing/selftests/kvm/x86_64/vmx_exception_with_invalid_guest_state.c
@@ -115,9 +115,6 @@ int main(int argc, char *argv[])
vm = vm_create_with_one_vcpu(&vcpu, guest_code);
get_set_sigalrm_vcpu(vcpu);

- vm_init_descriptor_tables(vm);
- vcpu_init_descriptor_tables(vcpu);
-
vm_install_exception_handler(vm, UD_VECTOR, guest_ud_handler);

/*
diff --git a/tools/testing/selftests/kvm/x86_64/vmx_pmu_caps_test.c b/tools/testing/selftests/kvm/x86_64/vmx_pmu_caps_test.c
index ea0cb3cae0f7..1b6e20e3a56d 100644
--- a/tools/testing/selftests/kvm/x86_64/vmx_pmu_caps_test.c
+++ b/tools/testing/selftests/kvm/x86_64/vmx_pmu_caps_test.c
@@ -86,9 +86,6 @@ KVM_ONE_VCPU_TEST(vmx_pmu_caps, guest_wrmsr_perf_capabilities, guest_code)
struct ucall uc;
int r, i;

- vm_init_descriptor_tables(vcpu->vm);
- vcpu_init_descriptor_tables(vcpu);
-
vcpu_set_msr(vcpu, MSR_IA32_PERF_CAPABILITIES, host_cap.capabilities);

vcpu_args_set(vcpu, 1, host_cap.capabilities);
diff --git a/tools/testing/selftests/kvm/x86_64/xapic_ipi_test.c b/tools/testing/selftests/kvm/x86_64/xapic_ipi_test.c
index 725c206ba0b9..f51084061134 100644
--- a/tools/testing/selftests/kvm/x86_64/xapic_ipi_test.c
+++ b/tools/testing/selftests/kvm/x86_64/xapic_ipi_test.c
@@ -410,8 +410,6 @@ int main(int argc, char *argv[])

vm = vm_create_with_one_vcpu(&params[0].vcpu, halter_guest_code);

- vm_init_descriptor_tables(vm);
- vcpu_init_descriptor_tables(params[0].vcpu);
vm_install_exception_handler(vm, IPI_VECTOR, guest_ipi_handler);

virt_pg_map(vm, APIC_DEFAULT_GPA, APIC_DEFAULT_GPA);
diff --git a/tools/testing/selftests/kvm/x86_64/xcr0_cpuid_test.c b/tools/testing/selftests/kvm/x86_64/xcr0_cpuid_test.c
index 25a0b0db5c3c..95ce192d0753 100644
--- a/tools/testing/selftests/kvm/x86_64/xcr0_cpuid_test.c
+++ b/tools/testing/selftests/kvm/x86_64/xcr0_cpuid_test.c
@@ -109,9 +109,6 @@ int main(int argc, char *argv[])
vm = vm_create_with_one_vcpu(&vcpu, guest_code);
run = vcpu->run;

- vm_init_descriptor_tables(vm);
- vcpu_init_descriptor_tables(vcpu);
-
while (1) {
vcpu_run(vcpu);

diff --git a/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c b/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c
index d2ea0435f4f7..a7236f17dfd0 100644
--- a/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c
+++ b/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c
@@ -553,8 +553,6 @@ int main(int argc, char *argv[])
};
vm_ioctl(vm, KVM_XEN_HVM_SET_ATTR, &vec);

- vm_init_descriptor_tables(vm);
- vcpu_init_descriptor_tables(vcpu);
vm_install_exception_handler(vm, EVTCHN_VECTOR, evtchn_handler);

if (do_runstate_tests) {
--
2.44.0.291.gc1ea87d7ee-goog


2024-03-14 23:34:38

by Sean Christopherson

Subject: [PATCH 11/18] KVM: selftests: Map x86's exception_handlers at VM creation, not vCPU setup

Map x86's exception handlers at VM creation, not vCPU setup, as the
mapping is per-VM, i.e. doesn't need to be (re)done for every vCPU.

Signed-off-by: Sean Christopherson <[email protected]>
---
tools/testing/selftests/kvm/lib/x86_64/processor.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 5813d93b2e7c..f4046029f168 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -552,7 +552,6 @@ static void vcpu_init_descriptor_tables(struct kvm_vcpu *vcpu)
sregs.gdt.limit = getpagesize() - 1;
kvm_seg_set_kernel_data_64bit(NULL, DEFAULT_DATA_SELECTOR, &sregs.gs);
vcpu_sregs_set(vcpu, &sregs);
- *(vm_vaddr_t *)addr_gva2hva(vm, (vm_vaddr_t)(&exception_handlers)) = vm->handlers;
}

static void vcpu_init_sregs(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
@@ -651,6 +650,8 @@ static void vm_init_descriptor_tables(struct kvm_vm *vm)
for (i = 0; i < NUM_INTERRUPTS; i++)
set_idt_entry(vm, i, (unsigned long)(&idt_handlers)[i], 0,
DEFAULT_CODE_SELECTOR);
+
+ *(vm_vaddr_t *)addr_gva2hva(vm, (vm_vaddr_t)(&exception_handlers)) = vm->handlers;
}

void vm_install_exception_handler(struct kvm_vm *vm, int vector,
--
2.44.0.291.gc1ea87d7ee-goog


2024-03-14 23:34:55

by Sean Christopherson

Subject: [PATCH 12/18] KVM: selftests: Allocate x86's GDT during VM creation

Allocate the GDT during creation of non-barebones VMs instead of waiting
until the first vCPU is created, as the whole point of non-barebones VMs
is to be able to run vCPUs, i.e. the GDT is going to get allocated no
matter what.

Signed-off-by: Sean Christopherson <[email protected]>
---
tools/testing/selftests/kvm/lib/x86_64/processor.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index f4046029f168..8547833ffa26 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -518,9 +518,6 @@ vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)

static void kvm_setup_gdt(struct kvm_vm *vm, struct kvm_dtable *dt)
{
- if (!vm->arch.gdt)
- vm->arch.gdt = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
-
dt->base = vm->arch.gdt;
dt->limit = getpagesize() - 1;
}
@@ -644,6 +641,7 @@ static void vm_init_descriptor_tables(struct kvm_vm *vm)
extern void *idt_handlers;
int i;

+ vm->arch.gdt = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
vm->arch.idt = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
vm->handlers = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
/* Handlers have the same address in both address spaces.*/
--
2.44.0.291.gc1ea87d7ee-goog


2024-03-14 23:35:27

by Sean Christopherson

Subject: [PATCH 13/18] KVM: selftests: Drop superfluous switch() on vm->mode in vcpu_init_sregs()

Replace the switch statement on vm->mode in x86's vcpu_init_sregs() with
a simple assert that the VM has a 48-bit virtual address space. A switch
statement is both overkill and misleading, as the existing code incorrectly
implies that VMs with LA57 would need a different configuration for the
LDT, TSS, and flat segments. In all likelihood, the only difference that
would be needed for selftests is CR4.LA57 itself.

Signed-off-by: Sean Christopherson <[email protected]>
---
.../selftests/kvm/lib/x86_64/processor.c | 25 ++++++++-----------
1 file changed, 10 insertions(+), 15 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 8547833ffa26..561c0aa93608 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -555,6 +555,8 @@ static void vcpu_init_sregs(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
{
struct kvm_sregs sregs;

+ TEST_ASSERT_EQ(vm->mode, VM_MODE_PXXV48_4K);
+
/* Set mode specific system register values. */
vcpu_sregs_get(vcpu, &sregs);

@@ -562,22 +564,15 @@ static void vcpu_init_sregs(struct kvm_vm *vm, struct kvm_vcpu *vcpu)

kvm_setup_gdt(vm, &sregs.gdt);

- switch (vm->mode) {
- case VM_MODE_PXXV48_4K:
- sregs.cr0 = X86_CR0_PE | X86_CR0_NE | X86_CR0_PG;
- sregs.cr4 |= X86_CR4_PAE | X86_CR4_OSFXSR;
- sregs.efer |= (EFER_LME | EFER_LMA | EFER_NX);
+ sregs.cr0 = X86_CR0_PE | X86_CR0_NE | X86_CR0_PG;
+ sregs.cr4 |= X86_CR4_PAE | X86_CR4_OSFXSR;
+ sregs.efer |= (EFER_LME | EFER_LMA | EFER_NX);

- kvm_seg_set_unusable(&sregs.ldt);
- kvm_seg_set_kernel_code_64bit(vm, DEFAULT_CODE_SELECTOR, &sregs.cs);
- kvm_seg_set_kernel_data_64bit(vm, DEFAULT_DATA_SELECTOR, &sregs.ds);
- kvm_seg_set_kernel_data_64bit(vm, DEFAULT_DATA_SELECTOR, &sregs.es);
- kvm_setup_tss_64bit(vm, &sregs.tr, 0x18);
- break;
-
- default:
- TEST_FAIL("Unknown guest mode, mode: 0x%x", vm->mode);
- }
+ kvm_seg_set_unusable(&sregs.ldt);
+ kvm_seg_set_kernel_code_64bit(vm, DEFAULT_CODE_SELECTOR, &sregs.cs);
+ kvm_seg_set_kernel_data_64bit(vm, DEFAULT_DATA_SELECTOR, &sregs.ds);
+ kvm_seg_set_kernel_data_64bit(vm, DEFAULT_DATA_SELECTOR, &sregs.es);
+ kvm_setup_tss_64bit(vm, &sregs.tr, 0x18);

sregs.cr3 = vm->pgd;
vcpu_sregs_set(vcpu, &sregs);
--
2.44.0.291.gc1ea87d7ee-goog


2024-03-14 23:35:30

by Sean Christopherson

Subject: [PATCH 05/18] KVM: selftests: Move platform_info_test's main assert into guest code

As a first step toward gracefully handling the expected #GP on RDMSR in
platform_info_test, move the test's assert on the non-faulting RDMSR
result into the guest itself. This will allow using a unified flow for
the host userspace side of things.

Signed-off-by: Sean Christopherson <[email protected]>
---
.../selftests/kvm/x86_64/platform_info_test.c | 20 +++++++++++--------
1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/platform_info_test.c b/tools/testing/selftests/kvm/x86_64/platform_info_test.c
index 87011965dc41..cdad7e2124c8 100644
--- a/tools/testing/selftests/kvm/x86_64/platform_info_test.c
+++ b/tools/testing/selftests/kvm/x86_64/platform_info_test.c
@@ -29,7 +29,9 @@ static void guest_code(void)

for (;;) {
msr_platform_info = rdmsr(MSR_PLATFORM_INFO);
- GUEST_SYNC(msr_platform_info);
+ GUEST_ASSERT_EQ(msr_platform_info & MSR_PLATFORM_INFO_MAX_TURBO_RATIO,
+ MSR_PLATFORM_INFO_MAX_TURBO_RATIO);
+ GUEST_SYNC(0);
asm volatile ("inc %r11");
}
}
@@ -42,13 +44,15 @@ static void test_msr_platform_info_enabled(struct kvm_vcpu *vcpu)
vcpu_run(vcpu);
TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_IO);

- get_ucall(vcpu, &uc);
- TEST_ASSERT(uc.cmd == UCALL_SYNC,
- "Received ucall other than UCALL_SYNC: %lu", uc.cmd);
- TEST_ASSERT((uc.args[1] & MSR_PLATFORM_INFO_MAX_TURBO_RATIO) ==
- MSR_PLATFORM_INFO_MAX_TURBO_RATIO,
- "Expected MSR_PLATFORM_INFO to have max turbo ratio mask: %i.",
- MSR_PLATFORM_INFO_MAX_TURBO_RATIO);
+ switch (get_ucall(vcpu, &uc)) {
+ case UCALL_SYNC:
+ break;
+ case UCALL_ABORT:
+ REPORT_GUEST_ASSERT(uc);
+ default:
+ TEST_FAIL("Unexpected ucall %lu", uc.cmd);
+ break;
+ }
}

static void test_msr_platform_info_disabled(struct kvm_vcpu *vcpu)
--
2.44.0.291.gc1ea87d7ee-goog


2024-03-14 23:35:42

by Sean Christopherson

Subject: [PATCH 14/18] KVM: selftests: Fold x86's descriptor tables helpers into vcpu_init_sregs()

Now that the per-VM, on-demand allocation logic in kvm_setup_gdt() and
vcpu_init_descriptor_tables() is gone, fold them into vcpu_init_sregs().

Note, both kvm_setup_gdt() and vcpu_init_descriptor_tables() configured the
GDT, which is why it looks like kvm_setup_gdt() disappears.

Opportunistically delete the pointless zeroing of the IDT limit (it was
being unconditionally overwritten by vcpu_init_descriptor_tables()).

Signed-off-by: Sean Christopherson <[email protected]>
---
.../selftests/kvm/lib/x86_64/processor.c | 32 ++++---------------
1 file changed, 6 insertions(+), 26 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 561c0aa93608..5cf845975f66 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -516,12 +516,6 @@ vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
return vm_untag_gpa(vm, PTE_GET_PA(*pte)) | (gva & ~HUGEPAGE_MASK(level));
}

-static void kvm_setup_gdt(struct kvm_vm *vm, struct kvm_dtable *dt)
-{
- dt->base = vm->arch.gdt;
- dt->limit = getpagesize() - 1;
-}
-
static void kvm_setup_tss_64bit(struct kvm_vm *vm, struct kvm_segment *segp,
int selector)
{
@@ -537,32 +531,19 @@ static void kvm_setup_tss_64bit(struct kvm_vm *vm, struct kvm_segment *segp,
kvm_seg_fill_gdt_64bit(vm, segp);
}

-static void vcpu_init_descriptor_tables(struct kvm_vcpu *vcpu)
+static void vcpu_init_sregs(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
{
- struct kvm_vm *vm = vcpu->vm;
struct kvm_sregs sregs;

+ TEST_ASSERT_EQ(vm->mode, VM_MODE_PXXV48_4K);
+
+ /* Set mode specific system register values. */
vcpu_sregs_get(vcpu, &sregs);
+
sregs.idt.base = vm->arch.idt;
sregs.idt.limit = NUM_INTERRUPTS * sizeof(struct idt_entry) - 1;
sregs.gdt.base = vm->arch.gdt;
sregs.gdt.limit = getpagesize() - 1;
- kvm_seg_set_kernel_data_64bit(NULL, DEFAULT_DATA_SELECTOR, &sregs.gs);
- vcpu_sregs_set(vcpu, &sregs);
-}
-
-static void vcpu_init_sregs(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
-{
- struct kvm_sregs sregs;
-
- TEST_ASSERT_EQ(vm->mode, VM_MODE_PXXV48_4K);
-
- /* Set mode specific system register values. */
- vcpu_sregs_get(vcpu, &sregs);
-
- sregs.idt.limit = 0;
-
- kvm_setup_gdt(vm, &sregs.gdt);

sregs.cr0 = X86_CR0_PE | X86_CR0_NE | X86_CR0_PG;
sregs.cr4 |= X86_CR4_PAE | X86_CR4_OSFXSR;
@@ -572,12 +553,11 @@ static void vcpu_init_sregs(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
kvm_seg_set_kernel_code_64bit(vm, DEFAULT_CODE_SELECTOR, &sregs.cs);
kvm_seg_set_kernel_data_64bit(vm, DEFAULT_DATA_SELECTOR, &sregs.ds);
kvm_seg_set_kernel_data_64bit(vm, DEFAULT_DATA_SELECTOR, &sregs.es);
+ kvm_seg_set_kernel_data_64bit(NULL, DEFAULT_DATA_SELECTOR, &sregs.gs);
kvm_setup_tss_64bit(vm, &sregs.tr, 0x18);

sregs.cr3 = vm->pgd;
vcpu_sregs_set(vcpu, &sregs);
-
- vcpu_init_descriptor_tables(vcpu);
}

static void set_idt_entry(struct kvm_vm *vm, int vector, unsigned long addr,
--
2.44.0.291.gc1ea87d7ee-goog


2024-03-14 23:36:02

by Sean Christopherson

Subject: [PATCH 01/18] Revert "kvm: selftests: move base kvm_util.h declarations to kvm_util_base.h"

Effectively revert the movement of code from kvm_util.h => kvm_util_base.h,
as the TL;DR of the justification for the move was to avoid #ifdefs and/or
circular dependencies between what ended up being ucall_common.h and what
was (and now again, is) kvm_util.h.

But avoiding #ifdef and circular includes is trivial: don't do that. The
cost of removing kvm_util_base.h is a few extra includes of ucall_common.h,
but that cost is practically nothing. On the other hand, having a "base"
version of a header that is really just the header itself is confusing,
and makes it weird/hard to choose names for headers that actually are
"base" headers, e.g. to hold core KVM selftests typedefs.

For all intents and purposes, this reverts commit
7d9a662ed9f0403e7b94940dceb81552b8edb931.

Signed-off-by: Sean Christopherson <[email protected]>
---
.../selftests/kvm/aarch64/arch_timer.c | 1 +
tools/testing/selftests/kvm/arch_timer.c | 1 +
.../selftests/kvm/demand_paging_test.c | 1 +
.../selftests/kvm/dirty_log_perf_test.c | 1 +
tools/testing/selftests/kvm/dirty_log_test.c | 1 +
.../testing/selftests/kvm/guest_memfd_test.c | 2 +-
.../testing/selftests/kvm/guest_print_test.c | 1 +
.../selftests/kvm/include/aarch64/processor.h | 2 +
.../selftests/kvm/include/aarch64/ucall.h | 2 +-
.../testing/selftests/kvm/include/kvm_util.h | 1128 +++++++++++++++-
.../selftests/kvm/include/kvm_util_base.h | 1135 -----------------
.../selftests/kvm/include/s390x/ucall.h | 2 +-
.../selftests/kvm/include/x86_64/processor.h | 3 +-
.../selftests/kvm/include/x86_64/ucall.h | 2 +-
.../selftests/kvm/kvm_page_table_test.c | 1 +
.../selftests/kvm/lib/aarch64/processor.c | 2 +
tools/testing/selftests/kvm/lib/kvm_util.c | 1 +
tools/testing/selftests/kvm/lib/memstress.c | 1 +
.../selftests/kvm/lib/riscv/processor.c | 1 +
.../testing/selftests/kvm/lib/ucall_common.c | 5 +-
.../testing/selftests/kvm/riscv/arch_timer.c | 1 +
tools/testing/selftests/kvm/rseq_test.c | 1 +
tools/testing/selftests/kvm/s390x/cmma_test.c | 1 +
tools/testing/selftests/kvm/s390x/memop.c | 1 +
tools/testing/selftests/kvm/s390x/tprot.c | 1 +
tools/testing/selftests/kvm/steal_time.c | 1 +
.../x86_64/dirty_log_page_splitting_test.c | 1 +
.../x86_64/exit_on_emulation_failure_test.c | 2 +-
.../kvm/x86_64/ucna_injection_test.c | 1 -
29 files changed, 1156 insertions(+), 1147 deletions(-)
delete mode 100644 tools/testing/selftests/kvm/include/kvm_util_base.h

diff --git a/tools/testing/selftests/kvm/aarch64/arch_timer.c b/tools/testing/selftests/kvm/aarch64/arch_timer.c
index ddba2c2fb5de..ee83b2413da8 100644
--- a/tools/testing/selftests/kvm/aarch64/arch_timer.c
+++ b/tools/testing/selftests/kvm/aarch64/arch_timer.c
@@ -12,6 +12,7 @@
#include "gic.h"
#include "processor.h"
#include "timer_test.h"
+#include "ucall_common.h"
#include "vgic.h"

#define GICD_BASE_GPA 0x8000000ULL
diff --git a/tools/testing/selftests/kvm/arch_timer.c b/tools/testing/selftests/kvm/arch_timer.c
index ae1f1a6d8312..40cb6c1bec0a 100644
--- a/tools/testing/selftests/kvm/arch_timer.c
+++ b/tools/testing/selftests/kvm/arch_timer.c
@@ -29,6 +29,7 @@
#include <sys/sysinfo.h>

#include "timer_test.h"
+#include "ucall_common.h"

struct test_args test_args = {
.nr_vcpus = NR_VCPUS_DEF,
diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
index bf3609f71854..a36227731564 100644
--- a/tools/testing/selftests/kvm/demand_paging_test.c
+++ b/tools/testing/selftests/kvm/demand_paging_test.c
@@ -22,6 +22,7 @@
#include "test_util.h"
#include "memstress.h"
#include "guest_modes.h"
+#include "ucall_common.h"
#include "userfaultfd_util.h"

#ifdef __NR_userfaultfd
diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c
index 504f6fe980e8..f0d3ccdb023c 100644
--- a/tools/testing/selftests/kvm/dirty_log_perf_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c
@@ -18,6 +18,7 @@
#include "test_util.h"
#include "memstress.h"
#include "guest_modes.h"
+#include "ucall_common.h"

#ifdef __aarch64__
#include "aarch64/vgic.h"
diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index eaad5b20854c..d95120f9dc40 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -23,6 +23,7 @@
#include "test_util.h"
#include "guest_modes.h"
#include "processor.h"
+#include "ucall_common.h"

#define DIRTY_MEM_BITS 30 /* 1G */
#define PAGE_SHIFT_4K 12
diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
index 92eae206baa6..38320de686f6 100644
--- a/tools/testing/selftests/kvm/guest_memfd_test.c
+++ b/tools/testing/selftests/kvm/guest_memfd_test.c
@@ -19,8 +19,8 @@
#include <sys/types.h>
#include <sys/stat.h>

+#include "kvm_util.h"
#include "test_util.h"
-#include "kvm_util_base.h"

static void test_file_read_write(int fd)
{
diff --git a/tools/testing/selftests/kvm/guest_print_test.c b/tools/testing/selftests/kvm/guest_print_test.c
index 3502caa3590c..8092c2d0f5d6 100644
--- a/tools/testing/selftests/kvm/guest_print_test.c
+++ b/tools/testing/selftests/kvm/guest_print_test.c
@@ -13,6 +13,7 @@
#include "test_util.h"
#include "kvm_util.h"
#include "processor.h"
+#include "ucall_common.h"

struct guest_vals {
uint64_t a;
diff --git a/tools/testing/selftests/kvm/include/aarch64/processor.h b/tools/testing/selftests/kvm/include/aarch64/processor.h
index 9e518b562827..1814af7d8567 100644
--- a/tools/testing/selftests/kvm/include/aarch64/processor.h
+++ b/tools/testing/selftests/kvm/include/aarch64/processor.h
@@ -8,6 +8,8 @@
#define SELFTEST_KVM_PROCESSOR_H

#include "kvm_util.h"
+#include "ucall_common.h"
+
#include <linux/stringify.h>
#include <linux/types.h>
#include <asm/sysreg.h>
diff --git a/tools/testing/selftests/kvm/include/aarch64/ucall.h b/tools/testing/selftests/kvm/include/aarch64/ucall.h
index 4b68f37efd36..4ec801f37f00 100644
--- a/tools/testing/selftests/kvm/include/aarch64/ucall.h
+++ b/tools/testing/selftests/kvm/include/aarch64/ucall.h
@@ -2,7 +2,7 @@
#ifndef SELFTEST_KVM_UCALL_H
#define SELFTEST_KVM_UCALL_H

-#include "kvm_util_base.h"
+#include "kvm_util.h"

#define UCALL_EXIT_REASON KVM_EXIT_MMIO

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index c9286811a4cb..95baee5142a7 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -1,13 +1,1133 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
- * tools/testing/selftests/kvm/include/kvm_util.h
- *
* Copyright (C) 2018, Google LLC.
*/
#ifndef SELFTEST_KVM_UTIL_H
#define SELFTEST_KVM_UTIL_H

-#include "kvm_util_base.h"
-#include "ucall_common.h"
+#include "test_util.h"
+
+#include <linux/compiler.h>
+#include "linux/hashtable.h"
+#include "linux/list.h"
+#include <linux/kernel.h>
+#include <linux/kvm.h>
+#include "linux/rbtree.h"
+#include <linux/types.h>
+
+#include <asm/atomic.h>
+#include <asm/kvm.h>
+
+#include <sys/ioctl.h>
+
+#include "kvm_util_arch.h"
+#include "sparsebit.h"
+
+/*
+ * Provide a version of static_assert() that is guaranteed to have an optional
+ * message param. If _ISOC11_SOURCE is defined, glibc (/usr/include/assert.h)
+ * #undefs and #defines static_assert() as a direct alias to _Static_assert(),
+ * i.e. effectively makes the message mandatory. Many KVM selftests #define
+ * _GNU_SOURCE for various reasons, and _GNU_SOURCE implies _ISOC11_SOURCE. As
+ * a result, static_assert() behavior is non-deterministic and may or may not
+ * require a message depending on #include order.
+ */
+#define __kvm_static_assert(expr, msg, ...) _Static_assert(expr, msg)
+#define kvm_static_assert(expr, ...) __kvm_static_assert(expr, ##__VA_ARGS__, #expr)
+
+#define KVM_DEV_PATH "/dev/kvm"
+#define KVM_MAX_VCPUS 512
+
+#define NSEC_PER_SEC 1000000000L
+
+typedef uint64_t vm_paddr_t; /* Virtual Machine (Guest) physical address */
+typedef uint64_t vm_vaddr_t; /* Virtual Machine (Guest) virtual address */
+
+struct userspace_mem_region {
+ struct kvm_userspace_memory_region2 region;
+ struct sparsebit *unused_phy_pages;
+ struct sparsebit *protected_phy_pages;
+ int fd;
+ off_t offset;
+ enum vm_mem_backing_src_type backing_src_type;
+ void *host_mem;
+ void *host_alias;
+ void *mmap_start;
+ void *mmap_alias;
+ size_t mmap_size;
+ struct rb_node gpa_node;
+ struct rb_node hva_node;
+ struct hlist_node slot_node;
+};
+
+struct kvm_vcpu {
+ struct list_head list;
+ uint32_t id;
+ int fd;
+ struct kvm_vm *vm;
+ struct kvm_run *run;
+#ifdef __x86_64__
+ struct kvm_cpuid2 *cpuid;
+#endif
+ struct kvm_dirty_gfn *dirty_gfns;
+ uint32_t fetch_index;
+ uint32_t dirty_gfns_count;
+};
+
+struct userspace_mem_regions {
+ struct rb_root gpa_tree;
+ struct rb_root hva_tree;
+ DECLARE_HASHTABLE(slot_hash, 9);
+};
+
+enum kvm_mem_region_type {
+ MEM_REGION_CODE,
+ MEM_REGION_DATA,
+ MEM_REGION_PT,
+ MEM_REGION_TEST_DATA,
+ NR_MEM_REGIONS,
+};
+
+struct kvm_vm {
+ int mode;
+ unsigned long type;
+ uint8_t subtype;
+ int kvm_fd;
+ int fd;
+ unsigned int pgtable_levels;
+ unsigned int page_size;
+ unsigned int page_shift;
+ unsigned int pa_bits;
+ unsigned int va_bits;
+ uint64_t max_gfn;
+ struct list_head vcpus;
+ struct userspace_mem_regions regions;
+ struct sparsebit *vpages_valid;
+ struct sparsebit *vpages_mapped;
+ bool has_irqchip;
+ bool pgd_created;
+ vm_paddr_t ucall_mmio_addr;
+ vm_paddr_t pgd;
+ vm_vaddr_t gdt;
+ vm_vaddr_t tss;
+ vm_vaddr_t idt;
+ vm_vaddr_t handlers;
+ uint32_t dirty_ring_size;
+ uint64_t gpa_tag_mask;
+
+ struct kvm_vm_arch arch;
+
+ /* Cache of information for binary stats interface */
+ int stats_fd;
+ struct kvm_stats_header stats_header;
+ struct kvm_stats_desc *stats_desc;
+
+ /*
+ * KVM region slots. These are the default memslots used by page
+ * allocators, e.g., lib/elf uses the memslots[MEM_REGION_CODE]
+ * memslot.
+ */
+ uint32_t memslots[NR_MEM_REGIONS];
+};
+
+struct vcpu_reg_sublist {
+ const char *name;
+ long capability;
+ int feature;
+ int feature_type;
+ bool finalize;
+ __u64 *regs;
+ __u64 regs_n;
+ __u64 *rejects_set;
+ __u64 rejects_set_n;
+ __u64 *skips_set;
+ __u64 skips_set_n;
+};
+
+struct vcpu_reg_list {
+ char *name;
+ struct vcpu_reg_sublist sublists[];
+};
+
+#define for_each_sublist(c, s) \
+ for ((s) = &(c)->sublists[0]; (s)->regs; ++(s))
+
+#define kvm_for_each_vcpu(vm, i, vcpu) \
+ for ((i) = 0; (i) <= (vm)->last_vcpu_id; (i)++) \
+ if (!((vcpu) = vm->vcpus[i])) \
+ continue; \
+ else
+
+struct userspace_mem_region *
+memslot2region(struct kvm_vm *vm, uint32_t memslot);
+
+static inline struct userspace_mem_region *vm_get_mem_region(struct kvm_vm *vm,
+ enum kvm_mem_region_type type)
+{
+ assert(type < NR_MEM_REGIONS);
+ return memslot2region(vm, vm->memslots[type]);
+}
+
+/* Minimum allocated guest virtual and physical addresses */
+#define KVM_UTIL_MIN_VADDR 0x2000
+#define KVM_GUEST_PAGE_TABLE_MIN_PADDR 0x180000
+
+#define DEFAULT_GUEST_STACK_VADDR_MIN 0xab6000
+#define DEFAULT_STACK_PGS 5
+
+enum vm_guest_mode {
+ VM_MODE_P52V48_4K,
+ VM_MODE_P52V48_16K,
+ VM_MODE_P52V48_64K,
+ VM_MODE_P48V48_4K,
+ VM_MODE_P48V48_16K,
+ VM_MODE_P48V48_64K,
+ VM_MODE_P40V48_4K,
+ VM_MODE_P40V48_16K,
+ VM_MODE_P40V48_64K,
+ VM_MODE_PXXV48_4K, /* For 48bits VA but ANY bits PA */
+ VM_MODE_P47V64_4K,
+ VM_MODE_P44V64_4K,
+ VM_MODE_P36V48_4K,
+ VM_MODE_P36V48_16K,
+ VM_MODE_P36V48_64K,
+ VM_MODE_P36V47_16K,
+ NUM_VM_MODES,
+};
+
+struct vm_shape {
+ uint32_t type;
+ uint8_t mode;
+ uint8_t subtype;
+ uint16_t padding;
+};
+
+kvm_static_assert(sizeof(struct vm_shape) == sizeof(uint64_t));
+
+#define VM_TYPE_DEFAULT 0
+
+#define VM_SHAPE(__mode) \
+({ \
+ struct vm_shape shape = { \
+ .mode = (__mode), \
+ .type = VM_TYPE_DEFAULT \
+ }; \
+ \
+ shape; \
+})
+
+#if defined(__aarch64__)
+
+extern enum vm_guest_mode vm_mode_default;
+
+#define VM_MODE_DEFAULT vm_mode_default
+#define MIN_PAGE_SHIFT 12U
+#define ptes_per_page(page_size) ((page_size) / 8)
+
+#elif defined(__x86_64__)
+
+#define VM_MODE_DEFAULT VM_MODE_PXXV48_4K
+#define MIN_PAGE_SHIFT 12U
+#define ptes_per_page(page_size) ((page_size) / 8)
+
+#elif defined(__s390x__)
+
+#define VM_MODE_DEFAULT VM_MODE_P44V64_4K
+#define MIN_PAGE_SHIFT 12U
+#define ptes_per_page(page_size) ((page_size) / 16)
+
+#elif defined(__riscv)
+
+#if __riscv_xlen == 32
+#error "RISC-V 32-bit kvm selftests not supported"
+#endif
+
+#define VM_MODE_DEFAULT VM_MODE_P40V48_4K
+#define MIN_PAGE_SHIFT 12U
+#define ptes_per_page(page_size) ((page_size) / 8)
+
+#endif
+
+#define VM_SHAPE_DEFAULT VM_SHAPE(VM_MODE_DEFAULT)
+
+#define MIN_PAGE_SIZE (1U << MIN_PAGE_SHIFT)
+#define PTES_PER_MIN_PAGE ptes_per_page(MIN_PAGE_SIZE)
+
+struct vm_guest_mode_params {
+ unsigned int pa_bits;
+ unsigned int va_bits;
+ unsigned int page_size;
+ unsigned int page_shift;
+};
+extern const struct vm_guest_mode_params vm_guest_mode_params[];
+
+int open_path_or_exit(const char *path, int flags);
+int open_kvm_dev_path_or_exit(void);
+
+bool get_kvm_param_bool(const char *param);
+bool get_kvm_intel_param_bool(const char *param);
+bool get_kvm_amd_param_bool(const char *param);
+
+int get_kvm_param_integer(const char *param);
+int get_kvm_intel_param_integer(const char *param);
+int get_kvm_amd_param_integer(const char *param);
+
+unsigned int kvm_check_cap(long cap);
+
+static inline bool kvm_has_cap(long cap)
+{
+ return kvm_check_cap(cap);
+}
+
+#define __KVM_SYSCALL_ERROR(_name, _ret) \
+ "%s failed, rc: %i errno: %i (%s)", (_name), (_ret), errno, strerror(errno)
+
+/*
+ * Use the "inner", double-underscore macro when reporting errors from within
+ * other macros so that the name of ioctl() and not its literal numeric value
+ * is printed on error. The "outer" macro is strongly preferred when reporting
+ * errors "directly", i.e. without an additional layer of macros, as it reduces
+ * the probability of passing in the wrong string.
+ */
+#define __KVM_IOCTL_ERROR(_name, _ret) __KVM_SYSCALL_ERROR(_name, _ret)
+#define KVM_IOCTL_ERROR(_ioctl, _ret) __KVM_IOCTL_ERROR(#_ioctl, _ret)
+
+#define kvm_do_ioctl(fd, cmd, arg) \
+({ \
+ kvm_static_assert(!_IOC_SIZE(cmd) || sizeof(*arg) == _IOC_SIZE(cmd)); \
+ ioctl(fd, cmd, arg); \
+})
+
+#define __kvm_ioctl(kvm_fd, cmd, arg) \
+ kvm_do_ioctl(kvm_fd, cmd, arg)
+
+#define kvm_ioctl(kvm_fd, cmd, arg) \
+({ \
+ int ret = __kvm_ioctl(kvm_fd, cmd, arg); \
+ \
+ TEST_ASSERT(!ret, __KVM_IOCTL_ERROR(#cmd, ret)); \
+})
+
+static __always_inline void static_assert_is_vm(struct kvm_vm *vm) { }
+
+#define __vm_ioctl(vm, cmd, arg) \
+({ \
+ static_assert_is_vm(vm); \
+ kvm_do_ioctl((vm)->fd, cmd, arg); \
+})
+
+/*
+ * Assert that a VM or vCPU ioctl() succeeded, with extra magic to detect if
+ * the ioctl() failed because KVM killed/bugged the VM. To detect a dead VM,
+ * probe KVM_CAP_USER_MEMORY, which (a) has been supported by KVM since before
+ * selftests existed and (b) should never outright fail, i.e. is supposed to
+ * return 0 or 1. If KVM kills a VM, KVM returns -EIO for all ioctl()s for the
+ * VM and its vCPUs, including KVM_CHECK_EXTENSION.
+ */
+#define __TEST_ASSERT_VM_VCPU_IOCTL(cond, name, ret, vm) \
+do { \
+ int __errno = errno; \
+ \
+ static_assert_is_vm(vm); \
+ \
+ if (cond) \
+ break; \
+ \
+ if (errno == EIO && \
+ __vm_ioctl(vm, KVM_CHECK_EXTENSION, (void *)KVM_CAP_USER_MEMORY) < 0) { \
+ TEST_ASSERT(errno == EIO, "KVM killed the VM, should return -EIO"); \
+ TEST_FAIL("KVM killed/bugged the VM, check the kernel log for clues"); \
+ } \
+ errno = __errno; \
+ TEST_ASSERT(cond, __KVM_IOCTL_ERROR(name, ret)); \
+} while (0)
+
+#define TEST_ASSERT_VM_VCPU_IOCTL(cond, cmd, ret, vm) \
+ __TEST_ASSERT_VM_VCPU_IOCTL(cond, #cmd, ret, vm)
+
+#define vm_ioctl(vm, cmd, arg) \
+({ \
+ int ret = __vm_ioctl(vm, cmd, arg); \
+ \
+ __TEST_ASSERT_VM_VCPU_IOCTL(!ret, #cmd, ret, vm); \
+})
+
+static __always_inline void static_assert_is_vcpu(struct kvm_vcpu *vcpu) { }
+
+#define __vcpu_ioctl(vcpu, cmd, arg) \
+({ \
+ static_assert_is_vcpu(vcpu); \
+ kvm_do_ioctl((vcpu)->fd, cmd, arg); \
+})
+
+#define vcpu_ioctl(vcpu, cmd, arg) \
+({ \
+ int ret = __vcpu_ioctl(vcpu, cmd, arg); \
+ \
+ __TEST_ASSERT_VM_VCPU_IOCTL(!ret, #cmd, ret, (vcpu)->vm); \
+})
+
+/*
+ * Looks up and returns the value corresponding to the capability
+ * (KVM_CAP_*) given by cap.
+ */
+static inline int vm_check_cap(struct kvm_vm *vm, long cap)
+{
+ int ret = __vm_ioctl(vm, KVM_CHECK_EXTENSION, (void *)cap);
+
+ TEST_ASSERT_VM_VCPU_IOCTL(ret >= 0, KVM_CHECK_EXTENSION, ret, vm);
+ return ret;
+}
+
+static inline int __vm_enable_cap(struct kvm_vm *vm, uint32_t cap, uint64_t arg0)
+{
+ struct kvm_enable_cap enable_cap = { .cap = cap, .args = { arg0 } };
+
+ return __vm_ioctl(vm, KVM_ENABLE_CAP, &enable_cap);
+}
+static inline void vm_enable_cap(struct kvm_vm *vm, uint32_t cap, uint64_t arg0)
+{
+ struct kvm_enable_cap enable_cap = { .cap = cap, .args = { arg0 } };
+
+ vm_ioctl(vm, KVM_ENABLE_CAP, &enable_cap);
+}
+
+static inline void vm_set_memory_attributes(struct kvm_vm *vm, uint64_t gpa,
+ uint64_t size, uint64_t attributes)
+{
+ struct kvm_memory_attributes attr = {
+ .attributes = attributes,
+ .address = gpa,
+ .size = size,
+ .flags = 0,
+ };
+
+ /*
+ * KVM_SET_MEMORY_ATTRIBUTES overwrites _all_ attributes. These flows
+ * need significant enhancements to support multiple attributes.
+ */
+ TEST_ASSERT(!attributes || attributes == KVM_MEMORY_ATTRIBUTE_PRIVATE,
+ "Update me to support multiple attributes!");
+
+ vm_ioctl(vm, KVM_SET_MEMORY_ATTRIBUTES, &attr);
+}
+
+
+static inline void vm_mem_set_private(struct kvm_vm *vm, uint64_t gpa,
+ uint64_t size)
+{
+ vm_set_memory_attributes(vm, gpa, size, KVM_MEMORY_ATTRIBUTE_PRIVATE);
+}
+
+static inline void vm_mem_set_shared(struct kvm_vm *vm, uint64_t gpa,
+ uint64_t size)
+{
+ vm_set_memory_attributes(vm, gpa, size, 0);
+}
+
+void vm_guest_mem_fallocate(struct kvm_vm *vm, uint64_t gpa, uint64_t size,
+ bool punch_hole);
+
+static inline void vm_guest_mem_punch_hole(struct kvm_vm *vm, uint64_t gpa,
+ uint64_t size)
+{
+ vm_guest_mem_fallocate(vm, gpa, size, true);
+}
+
+static inline void vm_guest_mem_allocate(struct kvm_vm *vm, uint64_t gpa,
+ uint64_t size)
+{
+ vm_guest_mem_fallocate(vm, gpa, size, false);
+}
+
+void vm_enable_dirty_ring(struct kvm_vm *vm, uint32_t ring_size);
+const char *vm_guest_mode_string(uint32_t i);
+
+void kvm_vm_free(struct kvm_vm *vmp);
+void kvm_vm_restart(struct kvm_vm *vmp);
+void kvm_vm_release(struct kvm_vm *vmp);
+int kvm_memcmp_hva_gva(void *hva, struct kvm_vm *vm, const vm_vaddr_t gva,
+ size_t len);
+void kvm_vm_elf_load(struct kvm_vm *vm, const char *filename);
+int kvm_memfd_alloc(size_t size, bool hugepages);
+
+void vm_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent);
+
+static inline void kvm_vm_get_dirty_log(struct kvm_vm *vm, int slot, void *log)
+{
+ struct kvm_dirty_log args = { .dirty_bitmap = log, .slot = slot };
+
+ vm_ioctl(vm, KVM_GET_DIRTY_LOG, &args);
+}
+
+static inline void kvm_vm_clear_dirty_log(struct kvm_vm *vm, int slot, void *log,
+ uint64_t first_page, uint32_t num_pages)
+{
+ struct kvm_clear_dirty_log args = {
+ .dirty_bitmap = log,
+ .slot = slot,
+ .first_page = first_page,
+ .num_pages = num_pages
+ };
+
+ vm_ioctl(vm, KVM_CLEAR_DIRTY_LOG, &args);
+}
+
+static inline uint32_t kvm_vm_reset_dirty_ring(struct kvm_vm *vm)
+{
+ return __vm_ioctl(vm, KVM_RESET_DIRTY_RINGS, NULL);
+}
+
+static inline int vm_get_stats_fd(struct kvm_vm *vm)
+{
+ int fd = __vm_ioctl(vm, KVM_GET_STATS_FD, NULL);
+
+ TEST_ASSERT_VM_VCPU_IOCTL(fd >= 0, KVM_GET_STATS_FD, fd, vm);
+ return fd;
+}
+
+static inline void read_stats_header(int stats_fd, struct kvm_stats_header *header)
+{
+ ssize_t ret;
+
+ ret = pread(stats_fd, header, sizeof(*header), 0);
+ TEST_ASSERT(ret == sizeof(*header),
+ "Failed to read '%lu' header bytes, ret = '%ld'",
+ sizeof(*header), ret);
+}
+
+struct kvm_stats_desc *read_stats_descriptors(int stats_fd,
+ struct kvm_stats_header *header);
+
+static inline ssize_t get_stats_descriptor_size(struct kvm_stats_header *header)
+{
+ /*
+ * The base size of the descriptor is defined by KVM's ABI, but the
+ * size of the name field is variable, as far as KVM's ABI is
+ * concerned. For a given instance of KVM, the name field is the same
+ * size for all stats and is provided in the overall stats header.
+ */
+ return sizeof(struct kvm_stats_desc) + header->name_size;
+}
+
+static inline struct kvm_stats_desc *get_stats_descriptor(struct kvm_stats_desc *stats,
+ int index,
+ struct kvm_stats_header *header)
+{
+ /*
+ * Note, size_desc includes the size of the name field, which is
+ * variable. i.e. this is NOT equivalent to &stats_desc[i].
+ */
+ return (void *)stats + index * get_stats_descriptor_size(header);
+}
+
+void read_stat_data(int stats_fd, struct kvm_stats_header *header,
+ struct kvm_stats_desc *desc, uint64_t *data,
+ size_t max_elements);
+
+void __vm_get_stat(struct kvm_vm *vm, const char *stat_name, uint64_t *data,
+ size_t max_elements);
+
+static inline uint64_t vm_get_stat(struct kvm_vm *vm, const char *stat_name)
+{
+ uint64_t data;
+
+ __vm_get_stat(vm, stat_name, &data, 1);
+ return data;
+}
+
+void vm_create_irqchip(struct kvm_vm *vm);
+
+static inline int __vm_create_guest_memfd(struct kvm_vm *vm, uint64_t size,
+ uint64_t flags)
+{
+ struct kvm_create_guest_memfd guest_memfd = {
+ .size = size,
+ .flags = flags,
+ };
+
+ return __vm_ioctl(vm, KVM_CREATE_GUEST_MEMFD, &guest_memfd);
+}
+
+static inline int vm_create_guest_memfd(struct kvm_vm *vm, uint64_t size,
+ uint64_t flags)
+{
+ int fd = __vm_create_guest_memfd(vm, size, flags);
+
+ TEST_ASSERT(fd >= 0, KVM_IOCTL_ERROR(KVM_CREATE_GUEST_MEMFD, fd));
+ return fd;
+}
+
+void vm_set_user_memory_region(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
+ uint64_t gpa, uint64_t size, void *hva);
+int __vm_set_user_memory_region(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
+ uint64_t gpa, uint64_t size, void *hva);
+void vm_set_user_memory_region2(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
+ uint64_t gpa, uint64_t size, void *hva,
+ uint32_t guest_memfd, uint64_t guest_memfd_offset);
+int __vm_set_user_memory_region2(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
+ uint64_t gpa, uint64_t size, void *hva,
+ uint32_t guest_memfd, uint64_t guest_memfd_offset);
+
+void vm_userspace_mem_region_add(struct kvm_vm *vm,
+ enum vm_mem_backing_src_type src_type,
+ uint64_t guest_paddr, uint32_t slot, uint64_t npages,
+ uint32_t flags);
+void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
+ uint64_t guest_paddr, uint32_t slot, uint64_t npages,
+ uint32_t flags, int guest_memfd_fd, uint64_t guest_memfd_offset);
+
+#ifndef vm_arch_has_protected_memory
+static inline bool vm_arch_has_protected_memory(struct kvm_vm *vm)
+{
+ return false;
+}
+#endif
+
+void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags);
+void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa);
+void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot);
+struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id);
+void vm_populate_vaddr_bitmap(struct kvm_vm *vm);
+vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
+vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
+vm_vaddr_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
+ enum kvm_mem_region_type type);
+vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz,
+ vm_vaddr_t vaddr_min,
+ enum kvm_mem_region_type type);
+vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages);
+vm_vaddr_t __vm_vaddr_alloc_page(struct kvm_vm *vm,
+ enum kvm_mem_region_type type);
+vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm);
+
+void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
+ unsigned int npages);
+void *addr_gpa2hva(struct kvm_vm *vm, vm_paddr_t gpa);
+void *addr_gva2hva(struct kvm_vm *vm, vm_vaddr_t gva);
+vm_paddr_t addr_hva2gpa(struct kvm_vm *vm, void *hva);
+void *addr_gpa2alias(struct kvm_vm *vm, vm_paddr_t gpa);
+
+
+static inline vm_paddr_t vm_untag_gpa(struct kvm_vm *vm, vm_paddr_t gpa)
+{
+ return gpa & ~vm->gpa_tag_mask;
+}
+
+void vcpu_run(struct kvm_vcpu *vcpu);
+int _vcpu_run(struct kvm_vcpu *vcpu);
+
+static inline int __vcpu_run(struct kvm_vcpu *vcpu)
+{
+ return __vcpu_ioctl(vcpu, KVM_RUN, NULL);
+}
+
+void vcpu_run_complete_io(struct kvm_vcpu *vcpu);
+struct kvm_reg_list *vcpu_get_reg_list(struct kvm_vcpu *vcpu);
+
+static inline void vcpu_enable_cap(struct kvm_vcpu *vcpu, uint32_t cap,
+ uint64_t arg0)
+{
+ struct kvm_enable_cap enable_cap = { .cap = cap, .args = { arg0 } };
+
+ vcpu_ioctl(vcpu, KVM_ENABLE_CAP, &enable_cap);
+}
+
+static inline void vcpu_guest_debug_set(struct kvm_vcpu *vcpu,
+ struct kvm_guest_debug *debug)
+{
+ vcpu_ioctl(vcpu, KVM_SET_GUEST_DEBUG, debug);
+}
+
+static inline void vcpu_mp_state_get(struct kvm_vcpu *vcpu,
+ struct kvm_mp_state *mp_state)
+{
+ vcpu_ioctl(vcpu, KVM_GET_MP_STATE, mp_state);
+}
+static inline void vcpu_mp_state_set(struct kvm_vcpu *vcpu,
+ struct kvm_mp_state *mp_state)
+{
+ vcpu_ioctl(vcpu, KVM_SET_MP_STATE, mp_state);
+}
+
+static inline void vcpu_regs_get(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
+{
+ vcpu_ioctl(vcpu, KVM_GET_REGS, regs);
+}
+
+static inline void vcpu_regs_set(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
+{
+ vcpu_ioctl(vcpu, KVM_SET_REGS, regs);
+}
+static inline void vcpu_sregs_get(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
+{
+ vcpu_ioctl(vcpu, KVM_GET_SREGS, sregs);
+}
+static inline void vcpu_sregs_set(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
+{
+ vcpu_ioctl(vcpu, KVM_SET_SREGS, sregs);
+}
+static inline int _vcpu_sregs_set(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
+{
+ return __vcpu_ioctl(vcpu, KVM_SET_SREGS, sregs);
+}
+static inline void vcpu_fpu_get(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
+{
+ vcpu_ioctl(vcpu, KVM_GET_FPU, fpu);
+}
+static inline void vcpu_fpu_set(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
+{
+ vcpu_ioctl(vcpu, KVM_SET_FPU, fpu);
+}
+
+static inline int __vcpu_get_reg(struct kvm_vcpu *vcpu, uint64_t id, void *addr)
+{
+ struct kvm_one_reg reg = { .id = id, .addr = (uint64_t)addr };
+
+ return __vcpu_ioctl(vcpu, KVM_GET_ONE_REG, &reg);
+}
+static inline int __vcpu_set_reg(struct kvm_vcpu *vcpu, uint64_t id, uint64_t val)
+{
+ struct kvm_one_reg reg = { .id = id, .addr = (uint64_t)&val };
+
+ return __vcpu_ioctl(vcpu, KVM_SET_ONE_REG, &reg);
+}
+static inline void vcpu_get_reg(struct kvm_vcpu *vcpu, uint64_t id, void *addr)
+{
+ struct kvm_one_reg reg = { .id = id, .addr = (uint64_t)addr };
+
+ vcpu_ioctl(vcpu, KVM_GET_ONE_REG, &reg);
+}
+static inline void vcpu_set_reg(struct kvm_vcpu *vcpu, uint64_t id, uint64_t val)
+{
+ struct kvm_one_reg reg = { .id = id, .addr = (uint64_t)&val };
+
+ vcpu_ioctl(vcpu, KVM_SET_ONE_REG, &reg);
+}
+
+#ifdef __KVM_HAVE_VCPU_EVENTS
+static inline void vcpu_events_get(struct kvm_vcpu *vcpu,
+ struct kvm_vcpu_events *events)
+{
+ vcpu_ioctl(vcpu, KVM_GET_VCPU_EVENTS, events);
+}
+static inline void vcpu_events_set(struct kvm_vcpu *vcpu,
+ struct kvm_vcpu_events *events)
+{
+ vcpu_ioctl(vcpu, KVM_SET_VCPU_EVENTS, events);
+}
+#endif
+#ifdef __x86_64__
+static inline void vcpu_nested_state_get(struct kvm_vcpu *vcpu,
+ struct kvm_nested_state *state)
+{
+ vcpu_ioctl(vcpu, KVM_GET_NESTED_STATE, state);
+}
+static inline int __vcpu_nested_state_set(struct kvm_vcpu *vcpu,
+ struct kvm_nested_state *state)
+{
+ return __vcpu_ioctl(vcpu, KVM_SET_NESTED_STATE, state);
+}
+
+static inline void vcpu_nested_state_set(struct kvm_vcpu *vcpu,
+ struct kvm_nested_state *state)
+{
+ vcpu_ioctl(vcpu, KVM_SET_NESTED_STATE, state);
+}
+#endif
+static inline int vcpu_get_stats_fd(struct kvm_vcpu *vcpu)
+{
+ int fd = __vcpu_ioctl(vcpu, KVM_GET_STATS_FD, NULL);
+
+	TEST_ASSERT_VM_VCPU_IOCTL(fd >= 0, KVM_GET_STATS_FD, fd, vcpu->vm);
+ return fd;
+}
+
+int __kvm_has_device_attr(int dev_fd, uint32_t group, uint64_t attr);
+
+static inline void kvm_has_device_attr(int dev_fd, uint32_t group, uint64_t attr)
+{
+ int ret = __kvm_has_device_attr(dev_fd, group, attr);
+
+ TEST_ASSERT(!ret, "KVM_HAS_DEVICE_ATTR failed, rc: %i errno: %i", ret, errno);
+}
+
+int __kvm_device_attr_get(int dev_fd, uint32_t group, uint64_t attr, void *val);
+
+static inline void kvm_device_attr_get(int dev_fd, uint32_t group,
+ uint64_t attr, void *val)
+{
+ int ret = __kvm_device_attr_get(dev_fd, group, attr, val);
+
+ TEST_ASSERT(!ret, KVM_IOCTL_ERROR(KVM_GET_DEVICE_ATTR, ret));
+}
+
+int __kvm_device_attr_set(int dev_fd, uint32_t group, uint64_t attr, void *val);
+
+static inline void kvm_device_attr_set(int dev_fd, uint32_t group,
+ uint64_t attr, void *val)
+{
+ int ret = __kvm_device_attr_set(dev_fd, group, attr, val);
+
+ TEST_ASSERT(!ret, KVM_IOCTL_ERROR(KVM_SET_DEVICE_ATTR, ret));
+}
+
+static inline int __vcpu_has_device_attr(struct kvm_vcpu *vcpu, uint32_t group,
+ uint64_t attr)
+{
+ return __kvm_has_device_attr(vcpu->fd, group, attr);
+}
+
+static inline void vcpu_has_device_attr(struct kvm_vcpu *vcpu, uint32_t group,
+ uint64_t attr)
+{
+ kvm_has_device_attr(vcpu->fd, group, attr);
+}
+
+static inline int __vcpu_device_attr_get(struct kvm_vcpu *vcpu, uint32_t group,
+ uint64_t attr, void *val)
+{
+ return __kvm_device_attr_get(vcpu->fd, group, attr, val);
+}
+
+static inline void vcpu_device_attr_get(struct kvm_vcpu *vcpu, uint32_t group,
+ uint64_t attr, void *val)
+{
+ kvm_device_attr_get(vcpu->fd, group, attr, val);
+}
+
+static inline int __vcpu_device_attr_set(struct kvm_vcpu *vcpu, uint32_t group,
+ uint64_t attr, void *val)
+{
+ return __kvm_device_attr_set(vcpu->fd, group, attr, val);
+}
+
+static inline void vcpu_device_attr_set(struct kvm_vcpu *vcpu, uint32_t group,
+ uint64_t attr, void *val)
+{
+ kvm_device_attr_set(vcpu->fd, group, attr, val);
+}
+
+int __kvm_test_create_device(struct kvm_vm *vm, uint64_t type);
+int __kvm_create_device(struct kvm_vm *vm, uint64_t type);
+
+static inline int kvm_create_device(struct kvm_vm *vm, uint64_t type)
+{
+ int fd = __kvm_create_device(vm, type);
+
+ TEST_ASSERT(fd >= 0, KVM_IOCTL_ERROR(KVM_CREATE_DEVICE, fd));
+ return fd;
+}
+
+void *vcpu_map_dirty_ring(struct kvm_vcpu *vcpu);
+
+/*
+ * VM VCPU Args Set
+ *
+ * Input Args:
+ * vm - Virtual Machine
+ * num - number of arguments
+ * ... - arguments, each of type uint64_t
+ *
+ * Output Args: None
+ *
+ * Return: None
+ *
+ * Sets the first @num input parameters for the function at @vcpu's entry point,
+ * per the C calling convention of the architecture, to the values given as
+ * variable args. Each of the variable args is expected to be of type uint64_t.
+ * The maximum value of @num is architecture specific.
+ */
+void vcpu_args_set(struct kvm_vcpu *vcpu, unsigned int num, ...);
+
+void kvm_irq_line(struct kvm_vm *vm, uint32_t irq, int level);
+int _kvm_irq_line(struct kvm_vm *vm, uint32_t irq, int level);
+
+#define KVM_MAX_IRQ_ROUTES 4096
+
+struct kvm_irq_routing *kvm_gsi_routing_create(void);
+void kvm_gsi_routing_irqchip_add(struct kvm_irq_routing *routing,
+ uint32_t gsi, uint32_t pin);
+int _kvm_gsi_routing_write(struct kvm_vm *vm, struct kvm_irq_routing *routing);
+void kvm_gsi_routing_write(struct kvm_vm *vm, struct kvm_irq_routing *routing);
+
+const char *exit_reason_str(unsigned int exit_reason);
+
+vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
+ uint32_t memslot);
+vm_paddr_t __vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
+ vm_paddr_t paddr_min, uint32_t memslot,
+ bool protected);
+vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm);
+
+static inline vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
+ vm_paddr_t paddr_min, uint32_t memslot)
+{
+ /*
+ * By default, allocate memory as protected for VMs that support
+ * protected memory, as the majority of memory for such VMs is
+ * protected, i.e. using shared memory is effectively opt-in.
+ */
+ return __vm_phy_pages_alloc(vm, num, paddr_min, memslot,
+ vm_arch_has_protected_memory(vm));
+}
+
+/*
+ * ____vm_create() does KVM_CREATE_VM and little else. __vm_create() also
+ * loads the test binary into guest memory and creates an IRQ chip (x86 only).
+ * __vm_create() does NOT create vCPUs, @nr_runnable_vcpus is used purely to
+ * calculate the amount of memory needed for per-vCPU data, e.g. stacks.
+ */
+struct kvm_vm *____vm_create(struct vm_shape shape);
+struct kvm_vm *__vm_create(struct vm_shape shape, uint32_t nr_runnable_vcpus,
+ uint64_t nr_extra_pages);
+
+static inline struct kvm_vm *vm_create_barebones(void)
+{
+ return ____vm_create(VM_SHAPE_DEFAULT);
+}
+
+#ifdef __x86_64__
+static inline struct kvm_vm *vm_create_barebones_protected_vm(void)
+{
+ const struct vm_shape shape = {
+ .mode = VM_MODE_DEFAULT,
+ .type = KVM_X86_SW_PROTECTED_VM,
+ };
+
+ return ____vm_create(shape);
+}
+#endif
+
+static inline struct kvm_vm *vm_create(uint32_t nr_runnable_vcpus)
+{
+ return __vm_create(VM_SHAPE_DEFAULT, nr_runnable_vcpus, 0);
+}
+
+struct kvm_vm *__vm_create_with_vcpus(struct vm_shape shape, uint32_t nr_vcpus,
+ uint64_t extra_mem_pages,
+ void *guest_code, struct kvm_vcpu *vcpus[]);
+
+static inline struct kvm_vm *vm_create_with_vcpus(uint32_t nr_vcpus,
+ void *guest_code,
+ struct kvm_vcpu *vcpus[])
+{
+ return __vm_create_with_vcpus(VM_SHAPE_DEFAULT, nr_vcpus, 0,
+ guest_code, vcpus);
+}
+
+
+struct kvm_vm *__vm_create_shape_with_one_vcpu(struct vm_shape shape,
+ struct kvm_vcpu **vcpu,
+ uint64_t extra_mem_pages,
+ void *guest_code);
+
+/*
+ * Create a VM with a single vCPU with reasonable defaults and @extra_mem_pages
+ * additional pages of guest memory. Returns the VM and vCPU (via out param).
+ */
+static inline struct kvm_vm *__vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
+ uint64_t extra_mem_pages,
+ void *guest_code)
+{
+ return __vm_create_shape_with_one_vcpu(VM_SHAPE_DEFAULT, vcpu,
+ extra_mem_pages, guest_code);
+}
+
+static inline struct kvm_vm *vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
+ void *guest_code)
+{
+ return __vm_create_with_one_vcpu(vcpu, 0, guest_code);
+}
+
+static inline struct kvm_vm *vm_create_shape_with_one_vcpu(struct vm_shape shape,
+ struct kvm_vcpu **vcpu,
+ void *guest_code)
+{
+ return __vm_create_shape_with_one_vcpu(shape, vcpu, 0, guest_code);
+}
+
+struct kvm_vcpu *vm_recreate_with_one_vcpu(struct kvm_vm *vm);
+
+void kvm_pin_this_task_to_pcpu(uint32_t pcpu);
+void kvm_print_vcpu_pinning_help(void);
+void kvm_parse_vcpu_pinning(const char *pcpus_string, uint32_t vcpu_to_pcpu[],
+ int nr_vcpus);
+
+unsigned long vm_compute_max_gfn(struct kvm_vm *vm);
+unsigned int vm_calc_num_guest_pages(enum vm_guest_mode mode, size_t size);
+unsigned int vm_num_host_pages(enum vm_guest_mode mode, unsigned int num_guest_pages);
+unsigned int vm_num_guest_pages(enum vm_guest_mode mode, unsigned int num_host_pages);
+static inline unsigned int
+vm_adjust_num_guest_pages(enum vm_guest_mode mode, unsigned int num_guest_pages)
+{
+ unsigned int n;
+ n = vm_num_guest_pages(mode, vm_num_host_pages(mode, num_guest_pages));
+#ifdef __s390x__
+ /* s390 requires 1M aligned guest sizes */
+ n = (n + 255) & ~255;
+#endif
+ return n;
+}
+
+#define sync_global_to_guest(vm, g) ({ \
+ typeof(g) *_p = addr_gva2hva(vm, (vm_vaddr_t)&(g)); \
+ memcpy(_p, &(g), sizeof(g)); \
+})
+
+#define sync_global_from_guest(vm, g) ({ \
+ typeof(g) *_p = addr_gva2hva(vm, (vm_vaddr_t)&(g)); \
+ memcpy(&(g), _p, sizeof(g)); \
+})
+
+/*
+ * Write a global value, but only in the VM's (guest's) domain. Primarily used
+ * for "globals" that hold per-VM values (VMs always duplicate code and global
+ * data into their own region of physical memory), but can be used anytime it's
+ * undesirable to change the host's copy of the global.
+ */
+#define write_guest_global(vm, g, val) ({ \
+ typeof(g) *_p = addr_gva2hva(vm, (vm_vaddr_t)&(g)); \
+ typeof(g) _val = val; \
+ \
+ memcpy(_p, &(_val), sizeof(g)); \
+})
+
+void assert_on_unhandled_exception(struct kvm_vcpu *vcpu);
+
+void vcpu_arch_dump(FILE *stream, struct kvm_vcpu *vcpu,
+ uint8_t indent);
+
+static inline void vcpu_dump(FILE *stream, struct kvm_vcpu *vcpu,
+ uint8_t indent)
+{
+ vcpu_arch_dump(stream, vcpu, indent);
+}
+
+/*
+ * Adds a vCPU with reasonable defaults (e.g. a stack)
+ *
+ * Input Args:
+ * vm - Virtual Machine
+ * vcpu_id - The id of the VCPU to add to the VM.
+ */
+struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id);
+void vcpu_arch_set_entry_point(struct kvm_vcpu *vcpu, void *guest_code);
+
+static inline struct kvm_vcpu *vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id,
+ void *guest_code)
+{
+ struct kvm_vcpu *vcpu = vm_arch_vcpu_add(vm, vcpu_id);
+
+ vcpu_arch_set_entry_point(vcpu, guest_code);
+
+ return vcpu;
+}
+
+/* Re-create a vCPU after restarting a VM, e.g. for state save/restore tests. */
+struct kvm_vcpu *vm_arch_vcpu_recreate(struct kvm_vm *vm, uint32_t vcpu_id);
+
+static inline struct kvm_vcpu *vm_vcpu_recreate(struct kvm_vm *vm,
+ uint32_t vcpu_id)
+{
+ return vm_arch_vcpu_recreate(vm, vcpu_id);
+}
+
+void vcpu_arch_free(struct kvm_vcpu *vcpu);
+
+void virt_arch_pgd_alloc(struct kvm_vm *vm);
+
+static inline void virt_pgd_alloc(struct kvm_vm *vm)
+{
+ virt_arch_pgd_alloc(vm);
+}
+
+/*
+ * VM Virtual Page Map
+ *
+ * Input Args:
+ * vm - Virtual Machine
+ * vaddr - VM Virtual Address
+ * paddr - VM Physical Address
+ * memslot - Memory region slot for new virtual translation tables
+ *
+ * Output Args: None
+ *
+ * Return: None
+ *
+ * Within @vm, creates a virtual translation for the page starting
+ * at @vaddr to the page starting at @paddr.
+ */
+void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr);
+
+static inline void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
+{
+ virt_arch_pg_map(vm, vaddr, paddr);
+}
+
+
+/*
+ * Address Guest Virtual to Guest Physical
+ *
+ * Input Args:
+ * vm - Virtual Machine
+ * gva - VM virtual address
+ *
+ * Output Args: None
+ *
+ * Return:
+ * Equivalent VM physical address
+ *
+ * Returns the VM physical address of the translated VM virtual
+ * address given by @gva.
+ */
+vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva);
+
+static inline vm_paddr_t addr_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
+{
+ return addr_arch_gva2gpa(vm, gva);
+}
+
+/*
+ * Virtual Translation Tables Dump
+ *
+ * Input Args:
+ * stream - Output FILE stream
+ * vm - Virtual Machine
+ * indent - Left margin indent amount
+ *
+ * Output Args: None
+ *
+ * Return: None
+ *
+ * Dumps to the FILE stream given by @stream, the contents of all the
+ * virtual translation tables for the VM given by @vm.
+ */
+void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent);
+
+static inline void virt_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
+{
+ virt_arch_dump(stream, vm, indent);
+}
+
+
+static inline int __vm_disable_nx_huge_pages(struct kvm_vm *vm)
+{
+ return __vm_enable_cap(vm, KVM_CAP_VM_DISABLE_NX_HUGE_PAGES, 0);
+}
+
+/*
+ * Arch hook that is invoked via a constructor, i.e. before executing main(),
+ * to allow for arch-specific setup that is common to all tests, e.g. computing
+ * the default guest "mode".
+ */
+void kvm_selftest_arch_init(void);
+
+void kvm_arch_vm_post_create(struct kvm_vm *vm);
+
+bool vm_is_gpa_protected(struct kvm_vm *vm, vm_paddr_t paddr);
+
+uint32_t guest_get_vcpuid(void);

#endif /* SELFTEST_KVM_UTIL_H */
diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
deleted file mode 100644
index 3e0db283a46a..000000000000
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ /dev/null
@@ -1,1135 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * tools/testing/selftests/kvm/include/kvm_util_base.h
- *
- * Copyright (C) 2018, Google LLC.
- */
-#ifndef SELFTEST_KVM_UTIL_BASE_H
-#define SELFTEST_KVM_UTIL_BASE_H
-
-#include "test_util.h"
-
-#include <linux/compiler.h>
-#include "linux/hashtable.h"
-#include "linux/list.h"
-#include <linux/kernel.h>
-#include <linux/kvm.h>
-#include "linux/rbtree.h"
-#include <linux/types.h>
-
-#include <asm/atomic.h>
-#include <asm/kvm.h>
-
-#include <sys/ioctl.h>
-
-#include "kvm_util_arch.h"
-#include "sparsebit.h"
-
-/*
- * Provide a version of static_assert() that is guaranteed to have an optional
- * message param. If _ISOC11_SOURCE is defined, glibc (/usr/include/assert.h)
- * #undefs and #defines static_assert() as a direct alias to _Static_assert(),
- * i.e. effectively makes the message mandatory. Many KVM selftests #define
- * _GNU_SOURCE for various reasons, and _GNU_SOURCE implies _ISOC11_SOURCE. As
- * a result, static_assert() behavior is non-deterministic and may or may not
- * require a message depending on #include order.
- */
-#define __kvm_static_assert(expr, msg, ...) _Static_assert(expr, msg)
-#define kvm_static_assert(expr, ...) __kvm_static_assert(expr, ##__VA_ARGS__, #expr)
-
-#define KVM_DEV_PATH "/dev/kvm"
-#define KVM_MAX_VCPUS 512
-
-#define NSEC_PER_SEC 1000000000L
-
-typedef uint64_t vm_paddr_t; /* Virtual Machine (Guest) physical address */
-typedef uint64_t vm_vaddr_t; /* Virtual Machine (Guest) virtual address */
-
-struct userspace_mem_region {
- struct kvm_userspace_memory_region2 region;
- struct sparsebit *unused_phy_pages;
- struct sparsebit *protected_phy_pages;
- int fd;
- off_t offset;
- enum vm_mem_backing_src_type backing_src_type;
- void *host_mem;
- void *host_alias;
- void *mmap_start;
- void *mmap_alias;
- size_t mmap_size;
- struct rb_node gpa_node;
- struct rb_node hva_node;
- struct hlist_node slot_node;
-};
-
-struct kvm_vcpu {
- struct list_head list;
- uint32_t id;
- int fd;
- struct kvm_vm *vm;
- struct kvm_run *run;
-#ifdef __x86_64__
- struct kvm_cpuid2 *cpuid;
-#endif
- struct kvm_dirty_gfn *dirty_gfns;
- uint32_t fetch_index;
- uint32_t dirty_gfns_count;
-};
-
-struct userspace_mem_regions {
- struct rb_root gpa_tree;
- struct rb_root hva_tree;
- DECLARE_HASHTABLE(slot_hash, 9);
-};
-
-enum kvm_mem_region_type {
- MEM_REGION_CODE,
- MEM_REGION_DATA,
- MEM_REGION_PT,
- MEM_REGION_TEST_DATA,
- NR_MEM_REGIONS,
-};
-
-struct kvm_vm {
- int mode;
- unsigned long type;
- uint8_t subtype;
- int kvm_fd;
- int fd;
- unsigned int pgtable_levels;
- unsigned int page_size;
- unsigned int page_shift;
- unsigned int pa_bits;
- unsigned int va_bits;
- uint64_t max_gfn;
- struct list_head vcpus;
- struct userspace_mem_regions regions;
- struct sparsebit *vpages_valid;
- struct sparsebit *vpages_mapped;
- bool has_irqchip;
- bool pgd_created;
- vm_paddr_t ucall_mmio_addr;
- vm_paddr_t pgd;
- vm_vaddr_t gdt;
- vm_vaddr_t tss;
- vm_vaddr_t idt;
- vm_vaddr_t handlers;
- uint32_t dirty_ring_size;
- uint64_t gpa_tag_mask;
-
- struct kvm_vm_arch arch;
-
- /* Cache of information for binary stats interface */
- int stats_fd;
- struct kvm_stats_header stats_header;
- struct kvm_stats_desc *stats_desc;
-
- /*
- * KVM region slots. These are the default memslots used by page
- * allocators, e.g., lib/elf uses the memslots[MEM_REGION_CODE]
- * memslot.
- */
- uint32_t memslots[NR_MEM_REGIONS];
-};
-
-struct vcpu_reg_sublist {
- const char *name;
- long capability;
- int feature;
- int feature_type;
- bool finalize;
- __u64 *regs;
- __u64 regs_n;
- __u64 *rejects_set;
- __u64 rejects_set_n;
- __u64 *skips_set;
- __u64 skips_set_n;
-};
-
-struct vcpu_reg_list {
- char *name;
- struct vcpu_reg_sublist sublists[];
-};
-
-#define for_each_sublist(c, s) \
- for ((s) = &(c)->sublists[0]; (s)->regs; ++(s))
-
-#define kvm_for_each_vcpu(vm, i, vcpu) \
- for ((i) = 0; (i) <= (vm)->last_vcpu_id; (i)++) \
- if (!((vcpu) = vm->vcpus[i])) \
- continue; \
- else
-
-struct userspace_mem_region *
-memslot2region(struct kvm_vm *vm, uint32_t memslot);
-
-static inline struct userspace_mem_region *vm_get_mem_region(struct kvm_vm *vm,
- enum kvm_mem_region_type type)
-{
- assert(type < NR_MEM_REGIONS);
- return memslot2region(vm, vm->memslots[type]);
-}
-
-/* Minimum allocated guest virtual and physical addresses */
-#define KVM_UTIL_MIN_VADDR 0x2000
-#define KVM_GUEST_PAGE_TABLE_MIN_PADDR 0x180000
-
-#define DEFAULT_GUEST_STACK_VADDR_MIN 0xab6000
-#define DEFAULT_STACK_PGS 5
-
-enum vm_guest_mode {
- VM_MODE_P52V48_4K,
- VM_MODE_P52V48_16K,
- VM_MODE_P52V48_64K,
- VM_MODE_P48V48_4K,
- VM_MODE_P48V48_16K,
- VM_MODE_P48V48_64K,
- VM_MODE_P40V48_4K,
- VM_MODE_P40V48_16K,
- VM_MODE_P40V48_64K,
- VM_MODE_PXXV48_4K, /* For 48bits VA but ANY bits PA */
- VM_MODE_P47V64_4K,
- VM_MODE_P44V64_4K,
- VM_MODE_P36V48_4K,
- VM_MODE_P36V48_16K,
- VM_MODE_P36V48_64K,
- VM_MODE_P36V47_16K,
- NUM_VM_MODES,
-};
-
-struct vm_shape {
- uint32_t type;
- uint8_t mode;
- uint8_t subtype;
- uint16_t padding;
-};
-
-kvm_static_assert(sizeof(struct vm_shape) == sizeof(uint64_t));
-
-#define VM_TYPE_DEFAULT 0
-
-#define VM_SHAPE(__mode) \
-({ \
- struct vm_shape shape = { \
- .mode = (__mode), \
- .type = VM_TYPE_DEFAULT \
- }; \
- \
- shape; \
-})
-
-#if defined(__aarch64__)
-
-extern enum vm_guest_mode vm_mode_default;
-
-#define VM_MODE_DEFAULT vm_mode_default
-#define MIN_PAGE_SHIFT 12U
-#define ptes_per_page(page_size) ((page_size) / 8)
-
-#elif defined(__x86_64__)
-
-#define VM_MODE_DEFAULT VM_MODE_PXXV48_4K
-#define MIN_PAGE_SHIFT 12U
-#define ptes_per_page(page_size) ((page_size) / 8)
-
-#elif defined(__s390x__)
-
-#define VM_MODE_DEFAULT VM_MODE_P44V64_4K
-#define MIN_PAGE_SHIFT 12U
-#define ptes_per_page(page_size) ((page_size) / 16)
-
-#elif defined(__riscv)
-
-#if __riscv_xlen == 32
-#error "RISC-V 32-bit kvm selftests not supported"
-#endif
-
-#define VM_MODE_DEFAULT VM_MODE_P40V48_4K
-#define MIN_PAGE_SHIFT 12U
-#define ptes_per_page(page_size) ((page_size) / 8)
-
-#endif
-
-#define VM_SHAPE_DEFAULT VM_SHAPE(VM_MODE_DEFAULT)
-
-#define MIN_PAGE_SIZE (1U << MIN_PAGE_SHIFT)
-#define PTES_PER_MIN_PAGE ptes_per_page(MIN_PAGE_SIZE)
-
-struct vm_guest_mode_params {
- unsigned int pa_bits;
- unsigned int va_bits;
- unsigned int page_size;
- unsigned int page_shift;
-};
-extern const struct vm_guest_mode_params vm_guest_mode_params[];
-
-int open_path_or_exit(const char *path, int flags);
-int open_kvm_dev_path_or_exit(void);
-
-bool get_kvm_param_bool(const char *param);
-bool get_kvm_intel_param_bool(const char *param);
-bool get_kvm_amd_param_bool(const char *param);
-
-int get_kvm_param_integer(const char *param);
-int get_kvm_intel_param_integer(const char *param);
-int get_kvm_amd_param_integer(const char *param);
-
-unsigned int kvm_check_cap(long cap);
-
-static inline bool kvm_has_cap(long cap)
-{
- return kvm_check_cap(cap);
-}
-
-#define __KVM_SYSCALL_ERROR(_name, _ret) \
- "%s failed, rc: %i errno: %i (%s)", (_name), (_ret), errno, strerror(errno)
-
-/*
- * Use the "inner", double-underscore macro when reporting errors from within
- * other macros so that the name of ioctl() and not its literal numeric value
- * is printed on error. The "outer" macro is strongly preferred when reporting
- * errors "directly", i.e. without an additional layer of macros, as it reduces
- * the probability of passing in the wrong string.
- */
-#define __KVM_IOCTL_ERROR(_name, _ret) __KVM_SYSCALL_ERROR(_name, _ret)
-#define KVM_IOCTL_ERROR(_ioctl, _ret) __KVM_IOCTL_ERROR(#_ioctl, _ret)
-
-#define kvm_do_ioctl(fd, cmd, arg) \
-({ \
- kvm_static_assert(!_IOC_SIZE(cmd) || sizeof(*arg) == _IOC_SIZE(cmd)); \
- ioctl(fd, cmd, arg); \
-})
-
-#define __kvm_ioctl(kvm_fd, cmd, arg) \
- kvm_do_ioctl(kvm_fd, cmd, arg)
-
-#define kvm_ioctl(kvm_fd, cmd, arg) \
-({ \
- int ret = __kvm_ioctl(kvm_fd, cmd, arg); \
- \
- TEST_ASSERT(!ret, __KVM_IOCTL_ERROR(#cmd, ret)); \
-})
-
-static __always_inline void static_assert_is_vm(struct kvm_vm *vm) { }
-
-#define __vm_ioctl(vm, cmd, arg) \
-({ \
- static_assert_is_vm(vm); \
- kvm_do_ioctl((vm)->fd, cmd, arg); \
-})
-
-/*
- * Assert that a VM or vCPU ioctl() succeeded, with extra magic to detect if
- * the ioctl() failed because KVM killed/bugged the VM. To detect a dead VM,
- * probe KVM_CAP_USER_MEMORY, which (a) has been supported by KVM since before
- * selftests existed and (b) should never outright fail, i.e. is supposed to
- * return 0 or 1. If KVM kills a VM, KVM returns -EIO for all ioctl()s for the
- * VM and its vCPUs, including KVM_CHECK_EXTENSION.
- */
-#define __TEST_ASSERT_VM_VCPU_IOCTL(cond, name, ret, vm) \
-do { \
- int __errno = errno; \
- \
- static_assert_is_vm(vm); \
- \
- if (cond) \
- break; \
- \
- if (errno == EIO && \
- __vm_ioctl(vm, KVM_CHECK_EXTENSION, (void *)KVM_CAP_USER_MEMORY) < 0) { \
- TEST_ASSERT(errno == EIO, "KVM killed the VM, should return -EIO"); \
- TEST_FAIL("KVM killed/bugged the VM, check the kernel log for clues"); \
- } \
- errno = __errno; \
- TEST_ASSERT(cond, __KVM_IOCTL_ERROR(name, ret)); \
-} while (0)
-
-#define TEST_ASSERT_VM_VCPU_IOCTL(cond, cmd, ret, vm) \
- __TEST_ASSERT_VM_VCPU_IOCTL(cond, #cmd, ret, vm)
-
-#define vm_ioctl(vm, cmd, arg) \
-({ \
- int ret = __vm_ioctl(vm, cmd, arg); \
- \
- __TEST_ASSERT_VM_VCPU_IOCTL(!ret, #cmd, ret, vm); \
-})
-
-static __always_inline void static_assert_is_vcpu(struct kvm_vcpu *vcpu) { }
-
-#define __vcpu_ioctl(vcpu, cmd, arg) \
-({ \
- static_assert_is_vcpu(vcpu); \
- kvm_do_ioctl((vcpu)->fd, cmd, arg); \
-})
-
-#define vcpu_ioctl(vcpu, cmd, arg) \
-({ \
- int ret = __vcpu_ioctl(vcpu, cmd, arg); \
- \
- __TEST_ASSERT_VM_VCPU_IOCTL(!ret, #cmd, ret, (vcpu)->vm); \
-})
-
-/*
- * Looks up and returns the value corresponding to the capability
- * (KVM_CAP_*) given by cap.
- */
-static inline int vm_check_cap(struct kvm_vm *vm, long cap)
-{
- int ret = __vm_ioctl(vm, KVM_CHECK_EXTENSION, (void *)cap);
-
- TEST_ASSERT_VM_VCPU_IOCTL(ret >= 0, KVM_CHECK_EXTENSION, ret, vm);
- return ret;
-}
-
-static inline int __vm_enable_cap(struct kvm_vm *vm, uint32_t cap, uint64_t arg0)
-{
- struct kvm_enable_cap enable_cap = { .cap = cap, .args = { arg0 } };
-
- return __vm_ioctl(vm, KVM_ENABLE_CAP, &enable_cap);
-}
-static inline void vm_enable_cap(struct kvm_vm *vm, uint32_t cap, uint64_t arg0)
-{
- struct kvm_enable_cap enable_cap = { .cap = cap, .args = { arg0 } };
-
- vm_ioctl(vm, KVM_ENABLE_CAP, &enable_cap);
-}
-
-static inline void vm_set_memory_attributes(struct kvm_vm *vm, uint64_t gpa,
- uint64_t size, uint64_t attributes)
-{
- struct kvm_memory_attributes attr = {
- .attributes = attributes,
- .address = gpa,
- .size = size,
- .flags = 0,
- };
-
- /*
- * KVM_SET_MEMORY_ATTRIBUTES overwrites _all_ attributes. These flows
- * need significant enhancements to support multiple attributes.
- */
- TEST_ASSERT(!attributes || attributes == KVM_MEMORY_ATTRIBUTE_PRIVATE,
- "Update me to support multiple attributes!");
-
- vm_ioctl(vm, KVM_SET_MEMORY_ATTRIBUTES, &attr);
-}
-
-
-static inline void vm_mem_set_private(struct kvm_vm *vm, uint64_t gpa,
- uint64_t size)
-{
- vm_set_memory_attributes(vm, gpa, size, KVM_MEMORY_ATTRIBUTE_PRIVATE);
-}
-
-static inline void vm_mem_set_shared(struct kvm_vm *vm, uint64_t gpa,
- uint64_t size)
-{
- vm_set_memory_attributes(vm, gpa, size, 0);
-}
-
-void vm_guest_mem_fallocate(struct kvm_vm *vm, uint64_t gpa, uint64_t size,
- bool punch_hole);
-
-static inline void vm_guest_mem_punch_hole(struct kvm_vm *vm, uint64_t gpa,
- uint64_t size)
-{
- vm_guest_mem_fallocate(vm, gpa, size, true);
-}
-
-static inline void vm_guest_mem_allocate(struct kvm_vm *vm, uint64_t gpa,
- uint64_t size)
-{
- vm_guest_mem_fallocate(vm, gpa, size, false);
-}
-
-void vm_enable_dirty_ring(struct kvm_vm *vm, uint32_t ring_size);
-const char *vm_guest_mode_string(uint32_t i);
-
-void kvm_vm_free(struct kvm_vm *vmp);
-void kvm_vm_restart(struct kvm_vm *vmp);
-void kvm_vm_release(struct kvm_vm *vmp);
-int kvm_memcmp_hva_gva(void *hva, struct kvm_vm *vm, const vm_vaddr_t gva,
- size_t len);
-void kvm_vm_elf_load(struct kvm_vm *vm, const char *filename);
-int kvm_memfd_alloc(size_t size, bool hugepages);
-
-void vm_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent);
-
-static inline void kvm_vm_get_dirty_log(struct kvm_vm *vm, int slot, void *log)
-{
- struct kvm_dirty_log args = { .dirty_bitmap = log, .slot = slot };
-
- vm_ioctl(vm, KVM_GET_DIRTY_LOG, &args);
-}
-
-static inline void kvm_vm_clear_dirty_log(struct kvm_vm *vm, int slot, void *log,
- uint64_t first_page, uint32_t num_pages)
-{
- struct kvm_clear_dirty_log args = {
- .dirty_bitmap = log,
- .slot = slot,
- .first_page = first_page,
- .num_pages = num_pages
- };
-
- vm_ioctl(vm, KVM_CLEAR_DIRTY_LOG, &args);
-}
-
-static inline uint32_t kvm_vm_reset_dirty_ring(struct kvm_vm *vm)
-{
- return __vm_ioctl(vm, KVM_RESET_DIRTY_RINGS, NULL);
-}
-
-static inline int vm_get_stats_fd(struct kvm_vm *vm)
-{
- int fd = __vm_ioctl(vm, KVM_GET_STATS_FD, NULL);
-
- TEST_ASSERT_VM_VCPU_IOCTL(fd >= 0, KVM_GET_STATS_FD, fd, vm);
- return fd;
-}
-
-static inline void read_stats_header(int stats_fd, struct kvm_stats_header *header)
-{
- ssize_t ret;
-
- ret = pread(stats_fd, header, sizeof(*header), 0);
- TEST_ASSERT(ret == sizeof(*header),
- "Failed to read '%lu' header bytes, ret = '%ld'",
- sizeof(*header), ret);
-}
-
-struct kvm_stats_desc *read_stats_descriptors(int stats_fd,
- struct kvm_stats_header *header);
-
-static inline ssize_t get_stats_descriptor_size(struct kvm_stats_header *header)
-{
- /*
- * The base size of the descriptor is defined by KVM's ABI, but the
- * size of the name field is variable, as far as KVM's ABI is
- * concerned. For a given instance of KVM, the name field is the same
- * size for all stats and is provided in the overall stats header.
- */
- return sizeof(struct kvm_stats_desc) + header->name_size;
-}
-
-static inline struct kvm_stats_desc *get_stats_descriptor(struct kvm_stats_desc *stats,
- int index,
- struct kvm_stats_header *header)
-{
- /*
- * Note, size_desc includes the size of the name field, which is
- * variable. i.e. this is NOT equivalent to &stats_desc[i].
- */
- return (void *)stats + index * get_stats_descriptor_size(header);
-}
-
-void read_stat_data(int stats_fd, struct kvm_stats_header *header,
- struct kvm_stats_desc *desc, uint64_t *data,
- size_t max_elements);
-
-void __vm_get_stat(struct kvm_vm *vm, const char *stat_name, uint64_t *data,
- size_t max_elements);
-
-static inline uint64_t vm_get_stat(struct kvm_vm *vm, const char *stat_name)
-{
- uint64_t data;
-
- __vm_get_stat(vm, stat_name, &data, 1);
- return data;
-}
-
-void vm_create_irqchip(struct kvm_vm *vm);
-
-static inline int __vm_create_guest_memfd(struct kvm_vm *vm, uint64_t size,
- uint64_t flags)
-{
- struct kvm_create_guest_memfd guest_memfd = {
- .size = size,
- .flags = flags,
- };
-
- return __vm_ioctl(vm, KVM_CREATE_GUEST_MEMFD, &guest_memfd);
-}
-
-static inline int vm_create_guest_memfd(struct kvm_vm *vm, uint64_t size,
- uint64_t flags)
-{
- int fd = __vm_create_guest_memfd(vm, size, flags);
-
- TEST_ASSERT(fd >= 0, KVM_IOCTL_ERROR(KVM_CREATE_GUEST_MEMFD, fd));
- return fd;
-}
-
-void vm_set_user_memory_region(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
- uint64_t gpa, uint64_t size, void *hva);
-int __vm_set_user_memory_region(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
- uint64_t gpa, uint64_t size, void *hva);
-void vm_set_user_memory_region2(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
- uint64_t gpa, uint64_t size, void *hva,
- uint32_t guest_memfd, uint64_t guest_memfd_offset);
-int __vm_set_user_memory_region2(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
- uint64_t gpa, uint64_t size, void *hva,
- uint32_t guest_memfd, uint64_t guest_memfd_offset);
-
-void vm_userspace_mem_region_add(struct kvm_vm *vm,
- enum vm_mem_backing_src_type src_type,
- uint64_t guest_paddr, uint32_t slot, uint64_t npages,
- uint32_t flags);
-void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
- uint64_t guest_paddr, uint32_t slot, uint64_t npages,
- uint32_t flags, int guest_memfd_fd, uint64_t guest_memfd_offset);
-
-#ifndef vm_arch_has_protected_memory
-static inline bool vm_arch_has_protected_memory(struct kvm_vm *vm)
-{
- return false;
-}
-#endif
-
-void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags);
-void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa);
-void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot);
-struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id);
-void vm_populate_vaddr_bitmap(struct kvm_vm *vm);
-vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
-vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
-vm_vaddr_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
- enum kvm_mem_region_type type);
-vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz,
- vm_vaddr_t vaddr_min,
- enum kvm_mem_region_type type);
-vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages);
-vm_vaddr_t __vm_vaddr_alloc_page(struct kvm_vm *vm,
- enum kvm_mem_region_type type);
-vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm);
-
-void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
- unsigned int npages);
-void *addr_gpa2hva(struct kvm_vm *vm, vm_paddr_t gpa);
-void *addr_gva2hva(struct kvm_vm *vm, vm_vaddr_t gva);
-vm_paddr_t addr_hva2gpa(struct kvm_vm *vm, void *hva);
-void *addr_gpa2alias(struct kvm_vm *vm, vm_paddr_t gpa);
-
-
-static inline vm_paddr_t vm_untag_gpa(struct kvm_vm *vm, vm_paddr_t gpa)
-{
- return gpa & ~vm->gpa_tag_mask;
-}
-
-void vcpu_run(struct kvm_vcpu *vcpu);
-int _vcpu_run(struct kvm_vcpu *vcpu);
-
-static inline int __vcpu_run(struct kvm_vcpu *vcpu)
-{
- return __vcpu_ioctl(vcpu, KVM_RUN, NULL);
-}
-
-void vcpu_run_complete_io(struct kvm_vcpu *vcpu);
-struct kvm_reg_list *vcpu_get_reg_list(struct kvm_vcpu *vcpu);
-
-static inline void vcpu_enable_cap(struct kvm_vcpu *vcpu, uint32_t cap,
- uint64_t arg0)
-{
- struct kvm_enable_cap enable_cap = { .cap = cap, .args = { arg0 } };
-
- vcpu_ioctl(vcpu, KVM_ENABLE_CAP, &enable_cap);
-}
-
-static inline void vcpu_guest_debug_set(struct kvm_vcpu *vcpu,
- struct kvm_guest_debug *debug)
-{
- vcpu_ioctl(vcpu, KVM_SET_GUEST_DEBUG, debug);
-}
-
-static inline void vcpu_mp_state_get(struct kvm_vcpu *vcpu,
- struct kvm_mp_state *mp_state)
-{
- vcpu_ioctl(vcpu, KVM_GET_MP_STATE, mp_state);
-}
-static inline void vcpu_mp_state_set(struct kvm_vcpu *vcpu,
- struct kvm_mp_state *mp_state)
-{
- vcpu_ioctl(vcpu, KVM_SET_MP_STATE, mp_state);
-}
-
-static inline void vcpu_regs_get(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
-{
- vcpu_ioctl(vcpu, KVM_GET_REGS, regs);
-}
-
-static inline void vcpu_regs_set(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
-{
- vcpu_ioctl(vcpu, KVM_SET_REGS, regs);
-}
-static inline void vcpu_sregs_get(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
-{
- vcpu_ioctl(vcpu, KVM_GET_SREGS, sregs);
-
-}
-static inline void vcpu_sregs_set(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
-{
- vcpu_ioctl(vcpu, KVM_SET_SREGS, sregs);
-}
-static inline int _vcpu_sregs_set(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
-{
- return __vcpu_ioctl(vcpu, KVM_SET_SREGS, sregs);
-}
-static inline void vcpu_fpu_get(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
-{
- vcpu_ioctl(vcpu, KVM_GET_FPU, fpu);
-}
-static inline void vcpu_fpu_set(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
-{
- vcpu_ioctl(vcpu, KVM_SET_FPU, fpu);
-}
-
-static inline int __vcpu_get_reg(struct kvm_vcpu *vcpu, uint64_t id, void *addr)
-{
- struct kvm_one_reg reg = { .id = id, .addr = (uint64_t)addr };
-
- return __vcpu_ioctl(vcpu, KVM_GET_ONE_REG, &reg);
-}
-static inline int __vcpu_set_reg(struct kvm_vcpu *vcpu, uint64_t id, uint64_t val)
-{
- struct kvm_one_reg reg = { .id = id, .addr = (uint64_t)&val };
-
- return __vcpu_ioctl(vcpu, KVM_SET_ONE_REG, &reg);
-}
-static inline void vcpu_get_reg(struct kvm_vcpu *vcpu, uint64_t id, void *addr)
-{
- struct kvm_one_reg reg = { .id = id, .addr = (uint64_t)addr };
-
- vcpu_ioctl(vcpu, KVM_GET_ONE_REG, &reg);
-}
-static inline void vcpu_set_reg(struct kvm_vcpu *vcpu, uint64_t id, uint64_t val)
-{
- struct kvm_one_reg reg = { .id = id, .addr = (uint64_t)&val };
-
- vcpu_ioctl(vcpu, KVM_SET_ONE_REG, &reg);
-}
-
-#ifdef __KVM_HAVE_VCPU_EVENTS
-static inline void vcpu_events_get(struct kvm_vcpu *vcpu,
- struct kvm_vcpu_events *events)
-{
- vcpu_ioctl(vcpu, KVM_GET_VCPU_EVENTS, events);
-}
-static inline void vcpu_events_set(struct kvm_vcpu *vcpu,
- struct kvm_vcpu_events *events)
-{
- vcpu_ioctl(vcpu, KVM_SET_VCPU_EVENTS, events);
-}
-#endif
-#ifdef __x86_64__
-static inline void vcpu_nested_state_get(struct kvm_vcpu *vcpu,
- struct kvm_nested_state *state)
-{
- vcpu_ioctl(vcpu, KVM_GET_NESTED_STATE, state);
-}
-static inline int __vcpu_nested_state_set(struct kvm_vcpu *vcpu,
- struct kvm_nested_state *state)
-{
- return __vcpu_ioctl(vcpu, KVM_SET_NESTED_STATE, state);
-}
-
-static inline void vcpu_nested_state_set(struct kvm_vcpu *vcpu,
- struct kvm_nested_state *state)
-{
- vcpu_ioctl(vcpu, KVM_SET_NESTED_STATE, state);
-}
-#endif
-static inline int vcpu_get_stats_fd(struct kvm_vcpu *vcpu)
-{
- int fd = __vcpu_ioctl(vcpu, KVM_GET_STATS_FD, NULL);
-
- TEST_ASSERT_VM_VCPU_IOCTL(fd >= 0, KVM_CHECK_EXTENSION, fd, vcpu->vm);
- return fd;
-}
-
-int __kvm_has_device_attr(int dev_fd, uint32_t group, uint64_t attr);
-
-static inline void kvm_has_device_attr(int dev_fd, uint32_t group, uint64_t attr)
-{
- int ret = __kvm_has_device_attr(dev_fd, group, attr);
-
- TEST_ASSERT(!ret, "KVM_HAS_DEVICE_ATTR failed, rc: %i errno: %i", ret, errno);
-}
-
-int __kvm_device_attr_get(int dev_fd, uint32_t group, uint64_t attr, void *val);
-
-static inline void kvm_device_attr_get(int dev_fd, uint32_t group,
- uint64_t attr, void *val)
-{
- int ret = __kvm_device_attr_get(dev_fd, group, attr, val);
-
- TEST_ASSERT(!ret, KVM_IOCTL_ERROR(KVM_GET_DEVICE_ATTR, ret));
-}
-
-int __kvm_device_attr_set(int dev_fd, uint32_t group, uint64_t attr, void *val);
-
-static inline void kvm_device_attr_set(int dev_fd, uint32_t group,
- uint64_t attr, void *val)
-{
- int ret = __kvm_device_attr_set(dev_fd, group, attr, val);
-
- TEST_ASSERT(!ret, KVM_IOCTL_ERROR(KVM_SET_DEVICE_ATTR, ret));
-}
-
-static inline int __vcpu_has_device_attr(struct kvm_vcpu *vcpu, uint32_t group,
- uint64_t attr)
-{
- return __kvm_has_device_attr(vcpu->fd, group, attr);
-}
-
-static inline void vcpu_has_device_attr(struct kvm_vcpu *vcpu, uint32_t group,
- uint64_t attr)
-{
- kvm_has_device_attr(vcpu->fd, group, attr);
-}
-
-static inline int __vcpu_device_attr_get(struct kvm_vcpu *vcpu, uint32_t group,
- uint64_t attr, void *val)
-{
- return __kvm_device_attr_get(vcpu->fd, group, attr, val);
-}
-
-static inline void vcpu_device_attr_get(struct kvm_vcpu *vcpu, uint32_t group,
- uint64_t attr, void *val)
-{
- kvm_device_attr_get(vcpu->fd, group, attr, val);
-}
-
-static inline int __vcpu_device_attr_set(struct kvm_vcpu *vcpu, uint32_t group,
- uint64_t attr, void *val)
-{
- return __kvm_device_attr_set(vcpu->fd, group, attr, val);
-}
-
-static inline void vcpu_device_attr_set(struct kvm_vcpu *vcpu, uint32_t group,
- uint64_t attr, void *val)
-{
- kvm_device_attr_set(vcpu->fd, group, attr, val);
-}
-
-int __kvm_test_create_device(struct kvm_vm *vm, uint64_t type);
-int __kvm_create_device(struct kvm_vm *vm, uint64_t type);
-
-static inline int kvm_create_device(struct kvm_vm *vm, uint64_t type)
-{
- int fd = __kvm_create_device(vm, type);
-
- TEST_ASSERT(fd >= 0, KVM_IOCTL_ERROR(KVM_CREATE_DEVICE, fd));
- return fd;
-}
-
-void *vcpu_map_dirty_ring(struct kvm_vcpu *vcpu);
-
-/*
- * VM VCPU Args Set
- *
- * Input Args:
- * vm - Virtual Machine
- * num - number of arguments
- * ... - arguments, each of type uint64_t
- *
- * Output Args: None
- *
- * Return: None
- *
- * Sets the first @num input parameters for the function at @vcpu's entry point,
- * per the C calling convention of the architecture, to the values given as
- * variable args. Each of the variable args is expected to be of type uint64_t.
- * The maximum @num can be is specific to the architecture.
- */
-void vcpu_args_set(struct kvm_vcpu *vcpu, unsigned int num, ...);
-
-void kvm_irq_line(struct kvm_vm *vm, uint32_t irq, int level);
-int _kvm_irq_line(struct kvm_vm *vm, uint32_t irq, int level);
-
-#define KVM_MAX_IRQ_ROUTES 4096
-
-struct kvm_irq_routing *kvm_gsi_routing_create(void);
-void kvm_gsi_routing_irqchip_add(struct kvm_irq_routing *routing,
- uint32_t gsi, uint32_t pin);
-int _kvm_gsi_routing_write(struct kvm_vm *vm, struct kvm_irq_routing *routing);
-void kvm_gsi_routing_write(struct kvm_vm *vm, struct kvm_irq_routing *routing);
-
-const char *exit_reason_str(unsigned int exit_reason);
-
-vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
- uint32_t memslot);
-vm_paddr_t __vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
- vm_paddr_t paddr_min, uint32_t memslot,
- bool protected);
-vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm);
-
-static inline vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
- vm_paddr_t paddr_min, uint32_t memslot)
-{
- /*
- * By default, allocate memory as protected for VMs that support
- * protected memory, as the majority of memory for such VMs is
- * protected, i.e. using shared memory is effectively opt-in.
- */
- return __vm_phy_pages_alloc(vm, num, paddr_min, memslot,
- vm_arch_has_protected_memory(vm));
-}
-
-/*
- * ____vm_create() does KVM_CREATE_VM and little else. __vm_create() also
- * loads the test binary into guest memory and creates an IRQ chip (x86 only).
- * __vm_create() does NOT create vCPUs, @nr_runnable_vcpus is used purely to
- * calculate the amount of memory needed for per-vCPU data, e.g. stacks.
- */
-struct kvm_vm *____vm_create(struct vm_shape shape);
-struct kvm_vm *__vm_create(struct vm_shape shape, uint32_t nr_runnable_vcpus,
- uint64_t nr_extra_pages);
-
-static inline struct kvm_vm *vm_create_barebones(void)
-{
- return ____vm_create(VM_SHAPE_DEFAULT);
-}
-
-#ifdef __x86_64__
-static inline struct kvm_vm *vm_create_barebones_protected_vm(void)
-{
- const struct vm_shape shape = {
- .mode = VM_MODE_DEFAULT,
- .type = KVM_X86_SW_PROTECTED_VM,
- };
-
- return ____vm_create(shape);
-}
-#endif
-
-static inline struct kvm_vm *vm_create(uint32_t nr_runnable_vcpus)
-{
- return __vm_create(VM_SHAPE_DEFAULT, nr_runnable_vcpus, 0);
-}
-
-struct kvm_vm *__vm_create_with_vcpus(struct vm_shape shape, uint32_t nr_vcpus,
- uint64_t extra_mem_pages,
- void *guest_code, struct kvm_vcpu *vcpus[]);
-
-static inline struct kvm_vm *vm_create_with_vcpus(uint32_t nr_vcpus,
- void *guest_code,
- struct kvm_vcpu *vcpus[])
-{
- return __vm_create_with_vcpus(VM_SHAPE_DEFAULT, nr_vcpus, 0,
- guest_code, vcpus);
-}
-
-
-struct kvm_vm *__vm_create_shape_with_one_vcpu(struct vm_shape shape,
- struct kvm_vcpu **vcpu,
- uint64_t extra_mem_pages,
- void *guest_code);
-
-/*
- * Create a VM with a single vCPU with reasonable defaults and @extra_mem_pages
- * additional pages of guest memory. Returns the VM and vCPU (via out param).
- */
-static inline struct kvm_vm *__vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
- uint64_t extra_mem_pages,
- void *guest_code)
-{
- return __vm_create_shape_with_one_vcpu(VM_SHAPE_DEFAULT, vcpu,
- extra_mem_pages, guest_code);
-}
-
-static inline struct kvm_vm *vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
- void *guest_code)
-{
- return __vm_create_with_one_vcpu(vcpu, 0, guest_code);
-}
-
-static inline struct kvm_vm *vm_create_shape_with_one_vcpu(struct vm_shape shape,
- struct kvm_vcpu **vcpu,
- void *guest_code)
-{
- return __vm_create_shape_with_one_vcpu(shape, vcpu, 0, guest_code);
-}
-
-struct kvm_vcpu *vm_recreate_with_one_vcpu(struct kvm_vm *vm);
-
-void kvm_pin_this_task_to_pcpu(uint32_t pcpu);
-void kvm_print_vcpu_pinning_help(void);
-void kvm_parse_vcpu_pinning(const char *pcpus_string, uint32_t vcpu_to_pcpu[],
- int nr_vcpus);
-
-unsigned long vm_compute_max_gfn(struct kvm_vm *vm);
-unsigned int vm_calc_num_guest_pages(enum vm_guest_mode mode, size_t size);
-unsigned int vm_num_host_pages(enum vm_guest_mode mode, unsigned int num_guest_pages);
-unsigned int vm_num_guest_pages(enum vm_guest_mode mode, unsigned int num_host_pages);
-static inline unsigned int
-vm_adjust_num_guest_pages(enum vm_guest_mode mode, unsigned int num_guest_pages)
-{
- unsigned int n;
- n = vm_num_guest_pages(mode, vm_num_host_pages(mode, num_guest_pages));
-#ifdef __s390x__
- /* s390 requires 1M aligned guest sizes */
- n = (n + 255) & ~255;
-#endif
- return n;
-}
-
-#define sync_global_to_guest(vm, g) ({ \
- typeof(g) *_p = addr_gva2hva(vm, (vm_vaddr_t)&(g)); \
- memcpy(_p, &(g), sizeof(g)); \
-})
-
-#define sync_global_from_guest(vm, g) ({ \
- typeof(g) *_p = addr_gva2hva(vm, (vm_vaddr_t)&(g)); \
- memcpy(&(g), _p, sizeof(g)); \
-})
-
-/*
- * Write a global value, but only in the VM's (guest's) domain. Primarily used
- * for "globals" that hold per-VM values (VMs always duplicate code and global
- * data into their own region of physical memory), but can be used anytime it's
- * undesirable to change the host's copy of the global.
- */
-#define write_guest_global(vm, g, val) ({ \
- typeof(g) *_p = addr_gva2hva(vm, (vm_vaddr_t)&(g)); \
- typeof(g) _val = val; \
- \
- memcpy(_p, &(_val), sizeof(g)); \
-})
-
-void assert_on_unhandled_exception(struct kvm_vcpu *vcpu);
-
-void vcpu_arch_dump(FILE *stream, struct kvm_vcpu *vcpu,
- uint8_t indent);
-
-static inline void vcpu_dump(FILE *stream, struct kvm_vcpu *vcpu,
- uint8_t indent)
-{
- vcpu_arch_dump(stream, vcpu, indent);
-}
-
-/*
- * Adds a vCPU with reasonable defaults (e.g. a stack)
- *
- * Input Args:
- * vm - Virtual Machine
- * vcpu_id - The id of the VCPU to add to the VM.
- */
-struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id);
-void vcpu_arch_set_entry_point(struct kvm_vcpu *vcpu, void *guest_code);
-
-static inline struct kvm_vcpu *vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id,
- void *guest_code)
-{
- struct kvm_vcpu *vcpu = vm_arch_vcpu_add(vm, vcpu_id);
-
- vcpu_arch_set_entry_point(vcpu, guest_code);
-
- return vcpu;
-}
-
-/* Re-create a vCPU after restarting a VM, e.g. for state save/restore tests. */
-struct kvm_vcpu *vm_arch_vcpu_recreate(struct kvm_vm *vm, uint32_t vcpu_id);
-
-static inline struct kvm_vcpu *vm_vcpu_recreate(struct kvm_vm *vm,
- uint32_t vcpu_id)
-{
- return vm_arch_vcpu_recreate(vm, vcpu_id);
-}
-
-void vcpu_arch_free(struct kvm_vcpu *vcpu);
-
-void virt_arch_pgd_alloc(struct kvm_vm *vm);
-
-static inline void virt_pgd_alloc(struct kvm_vm *vm)
-{
- virt_arch_pgd_alloc(vm);
-}
-
-/*
- * VM Virtual Page Map
- *
- * Input Args:
- * vm - Virtual Machine
- * vaddr - VM Virtual Address
- * paddr - VM Physical Address
- * memslot - Memory region slot for new virtual translation tables
- *
- * Output Args: None
- *
- * Return: None
- *
- * Within @vm, creates a virtual translation for the page starting
- * at @vaddr to the page starting at @paddr.
- */
-void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr);
-
-static inline void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
-{
- virt_arch_pg_map(vm, vaddr, paddr);
-}
-
-
-/*
- * Address Guest Virtual to Guest Physical
- *
- * Input Args:
- * vm - Virtual Machine
- * gva - VM virtual address
- *
- * Output Args: None
- *
- * Return:
- * Equivalent VM physical address
- *
- * Returns the VM physical address of the translated VM virtual
- * address given by @gva.
- */
-vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva);
-
-static inline vm_paddr_t addr_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
-{
- return addr_arch_gva2gpa(vm, gva);
-}
-
-/*
- * Virtual Translation Tables Dump
- *
- * Input Args:
- * stream - Output FILE stream
- * vm - Virtual Machine
- * indent - Left margin indent amount
- *
- * Output Args: None
- *
- * Return: None
- *
- * Dumps to the FILE stream given by @stream, the contents of all the
- * virtual translation tables for the VM given by @vm.
- */
-void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent);
-
-static inline void virt_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
-{
- virt_arch_dump(stream, vm, indent);
-}
-
-
-static inline int __vm_disable_nx_huge_pages(struct kvm_vm *vm)
-{
- return __vm_enable_cap(vm, KVM_CAP_VM_DISABLE_NX_HUGE_PAGES, 0);
-}
-
-/*
- * Arch hook that is invoked via a constructor, i.e. before exeucting main(),
- * to allow for arch-specific setup that is common to all tests, e.g. computing
- * the default guest "mode".
- */
-void kvm_selftest_arch_init(void);
-
-void kvm_arch_vm_post_create(struct kvm_vm *vm);
-
-bool vm_is_gpa_protected(struct kvm_vm *vm, vm_paddr_t paddr);
-
-uint32_t guest_get_vcpuid(void);
-
-#endif /* SELFTEST_KVM_UTIL_BASE_H */
diff --git a/tools/testing/selftests/kvm/include/s390x/ucall.h b/tools/testing/selftests/kvm/include/s390x/ucall.h
index b231bf2e49d6..8035a872a351 100644
--- a/tools/testing/selftests/kvm/include/s390x/ucall.h
+++ b/tools/testing/selftests/kvm/include/s390x/ucall.h
@@ -2,7 +2,7 @@
#ifndef SELFTEST_KVM_UCALL_H
#define SELFTEST_KVM_UCALL_H

-#include "kvm_util_base.h"
+#include "kvm_util.h"

#define UCALL_EXIT_REASON KVM_EXIT_S390_SIEIC

diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
index 3bd03b088dda..d6ffe03c9d0b 100644
--- a/tools/testing/selftests/kvm/include/x86_64/processor.h
+++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
@@ -18,7 +18,8 @@
#include <linux/kvm_para.h>
#include <linux/stringify.h>

-#include "../kvm_util.h"
+#include "kvm_util.h"
+#include "ucall_common.h"

extern bool host_cpu_is_intel;
extern bool host_cpu_is_amd;
diff --git a/tools/testing/selftests/kvm/include/x86_64/ucall.h b/tools/testing/selftests/kvm/include/x86_64/ucall.h
index 06b244bd06ee..d3825dcc3cd9 100644
--- a/tools/testing/selftests/kvm/include/x86_64/ucall.h
+++ b/tools/testing/selftests/kvm/include/x86_64/ucall.h
@@ -2,7 +2,7 @@
#ifndef SELFTEST_KVM_UCALL_H
#define SELFTEST_KVM_UCALL_H

-#include "kvm_util_base.h"
+#include "kvm_util.h"

#define UCALL_EXIT_REASON KVM_EXIT_IO

diff --git a/tools/testing/selftests/kvm/kvm_page_table_test.c b/tools/testing/selftests/kvm/kvm_page_table_test.c
index e0ba97ac1c56..e16ef18bcfc0 100644
--- a/tools/testing/selftests/kvm/kvm_page_table_test.c
+++ b/tools/testing/selftests/kvm/kvm_page_table_test.c
@@ -21,6 +21,7 @@
#include "kvm_util.h"
#include "processor.h"
#include "guest_modes.h"
+#include "ucall_common.h"

#define TEST_MEM_SLOT_INDEX 1

diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
index a9eb17295be4..0ac7cc89f38c 100644
--- a/tools/testing/selftests/kvm/lib/aarch64/processor.c
+++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
@@ -11,6 +11,8 @@
#include "guest_modes.h"
#include "kvm_util.h"
#include "processor.h"
+#include "ucall_common.h"
+
#include <linux/bitfield.h>
#include <linux/sizes.h>

diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index b2262b5fad9e..cec39b52b90d 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -9,6 +9,7 @@
#include "test_util.h"
#include "kvm_util.h"
#include "processor.h"
+#include "ucall_common.h"

#include <assert.h>
#include <sched.h>
diff --git a/tools/testing/selftests/kvm/lib/memstress.c b/tools/testing/selftests/kvm/lib/memstress.c
index cf2c73971308..96432ad9efa6 100644
--- a/tools/testing/selftests/kvm/lib/memstress.c
+++ b/tools/testing/selftests/kvm/lib/memstress.c
@@ -10,6 +10,7 @@
#include "kvm_util.h"
#include "memstress.h"
#include "processor.h"
+#include "ucall_common.h"

struct memstress_args memstress_args;

diff --git a/tools/testing/selftests/kvm/lib/riscv/processor.c b/tools/testing/selftests/kvm/lib/riscv/processor.c
index e8211f5d6863..79b67e2627cb 100644
--- a/tools/testing/selftests/kvm/lib/riscv/processor.c
+++ b/tools/testing/selftests/kvm/lib/riscv/processor.c
@@ -10,6 +10,7 @@

#include "kvm_util.h"
#include "processor.h"
+#include "ucall_common.h"

#define DEFAULT_RISCV_GUEST_STACK_VADDR_MIN 0xac0000

diff --git a/tools/testing/selftests/kvm/lib/ucall_common.c b/tools/testing/selftests/kvm/lib/ucall_common.c
index f5af65a41c29..42151e571953 100644
--- a/tools/testing/selftests/kvm/lib/ucall_common.c
+++ b/tools/testing/selftests/kvm/lib/ucall_common.c
@@ -1,9 +1,12 @@
// SPDX-License-Identifier: GPL-2.0-only
-#include "kvm_util.h"
#include "linux/types.h"
#include "linux/bitmap.h"
#include "linux/atomic.h"

+#include "kvm_util.h"
+#include "ucall_common.h"
+
+
#define GUEST_UCALL_FAILED -1

struct ucall_header {
diff --git a/tools/testing/selftests/kvm/riscv/arch_timer.c b/tools/testing/selftests/kvm/riscv/arch_timer.c
index e22848f747c0..d6375af0b23e 100644
--- a/tools/testing/selftests/kvm/riscv/arch_timer.c
+++ b/tools/testing/selftests/kvm/riscv/arch_timer.c
@@ -14,6 +14,7 @@
#include "kvm_util.h"
#include "processor.h"
#include "timer_test.h"
+#include "ucall_common.h"

static int timer_irq = IRQ_S_TIMER;

diff --git a/tools/testing/selftests/kvm/rseq_test.c b/tools/testing/selftests/kvm/rseq_test.c
index 28f97fb52044..d81f9b9c5809 100644
--- a/tools/testing/selftests/kvm/rseq_test.c
+++ b/tools/testing/selftests/kvm/rseq_test.c
@@ -19,6 +19,7 @@
#include "kvm_util.h"
#include "processor.h"
#include "test_util.h"
+#include "ucall_common.h"

#include "../rseq/rseq.c"

diff --git a/tools/testing/selftests/kvm/s390x/cmma_test.c b/tools/testing/selftests/kvm/s390x/cmma_test.c
index 626a2b8a2037..9e0033906638 100644
--- a/tools/testing/selftests/kvm/s390x/cmma_test.c
+++ b/tools/testing/selftests/kvm/s390x/cmma_test.c
@@ -18,6 +18,7 @@
#include "test_util.h"
#include "kvm_util.h"
#include "kselftest.h"
+#include "ucall_common.h"

#define MAIN_PAGE_COUNT 512

diff --git a/tools/testing/selftests/kvm/s390x/memop.c b/tools/testing/selftests/kvm/s390x/memop.c
index bb3ca9a5d731..9b31693be1cb 100644
--- a/tools/testing/selftests/kvm/s390x/memop.c
+++ b/tools/testing/selftests/kvm/s390x/memop.c
@@ -15,6 +15,7 @@
#include "test_util.h"
#include "kvm_util.h"
#include "kselftest.h"
+#include "ucall_common.h"

enum mop_target {
LOGICAL,
diff --git a/tools/testing/selftests/kvm/s390x/tprot.c b/tools/testing/selftests/kvm/s390x/tprot.c
index c73f948c9b63..7a742a673b7c 100644
--- a/tools/testing/selftests/kvm/s390x/tprot.c
+++ b/tools/testing/selftests/kvm/s390x/tprot.c
@@ -8,6 +8,7 @@
#include "test_util.h"
#include "kvm_util.h"
#include "kselftest.h"
+#include "ucall_common.h"

#define PAGE_SHIFT 12
#define PAGE_SIZE (1 << PAGE_SHIFT)
diff --git a/tools/testing/selftests/kvm/steal_time.c b/tools/testing/selftests/kvm/steal_time.c
index bae0c5026f82..4c669d0cb8c0 100644
--- a/tools/testing/selftests/kvm/steal_time.c
+++ b/tools/testing/selftests/kvm/steal_time.c
@@ -18,6 +18,7 @@
#include "test_util.h"
#include "kvm_util.h"
#include "processor.h"
+#include "ucall_common.h"

#define NR_VCPUS 4
#define ST_GPA_BASE (1 << 30)
diff --git a/tools/testing/selftests/kvm/x86_64/dirty_log_page_splitting_test.c b/tools/testing/selftests/kvm/x86_64/dirty_log_page_splitting_test.c
index ee3b384b991c..2929c067c207 100644
--- a/tools/testing/selftests/kvm/x86_64/dirty_log_page_splitting_test.c
+++ b/tools/testing/selftests/kvm/x86_64/dirty_log_page_splitting_test.c
@@ -17,6 +17,7 @@
#include "test_util.h"
#include "memstress.h"
#include "guest_modes.h"
+#include "ucall_common.h"

#define VCPUS 2
#define SLOTS 2
diff --git a/tools/testing/selftests/kvm/x86_64/exit_on_emulation_failure_test.c b/tools/testing/selftests/kvm/x86_64/exit_on_emulation_failure_test.c
index 6c2e5e0ceb1f..fbac69d49b39 100644
--- a/tools/testing/selftests/kvm/x86_64/exit_on_emulation_failure_test.c
+++ b/tools/testing/selftests/kvm/x86_64/exit_on_emulation_failure_test.c
@@ -8,8 +8,8 @@
#define _GNU_SOURCE /* for program_invocation_short_name */

#include "flds_emulation.h"
-
#include "test_util.h"
+#include "ucall_common.h"

#define MMIO_GPA 0x700000000
#define MMIO_GVA MMIO_GPA
diff --git a/tools/testing/selftests/kvm/x86_64/ucna_injection_test.c b/tools/testing/selftests/kvm/x86_64/ucna_injection_test.c
index dcbb3c29fb8e..bc9be20f9600 100644
--- a/tools/testing/selftests/kvm/x86_64/ucna_injection_test.c
+++ b/tools/testing/selftests/kvm/x86_64/ucna_injection_test.c
@@ -24,7 +24,6 @@
#include <string.h>
#include <time.h>

-#include "kvm_util_base.h"
#include "kvm_util.h"
#include "mce.h"
#include "processor.h"
--
2.44.0.291.gc1ea87d7ee-goog


2024-03-14 23:36:05

by Sean Christopherson

[permalink] [raw]
Subject: [PATCH 15/18] KVM: selftests: Allocate x86's TSS at VM creation

Allocate x86's per-VM TSS at creation of a non-barebones VM. Like the
GDT, the TSS is needed to actually run vCPUs, i.e. every non-barebones VM
is all but guaranteed to allocate the TSS sooner or later.

Signed-off-by: Sean Christopherson <[email protected]>
---
tools/testing/selftests/kvm/lib/x86_64/processor.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 5cf845975f66..03b9387a1d2e 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -519,9 +519,6 @@ vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
static void kvm_setup_tss_64bit(struct kvm_vm *vm, struct kvm_segment *segp,
int selector)
{
- if (!vm->arch.tss)
- vm->arch.tss = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
-
memset(segp, 0, sizeof(*segp));
segp->base = vm->arch.tss;
segp->limit = 0x67;
@@ -619,6 +616,8 @@ static void vm_init_descriptor_tables(struct kvm_vm *vm)
vm->arch.gdt = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
vm->arch.idt = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
vm->handlers = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
+ vm->arch.tss = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
+
/* Handlers have the same address in both address spaces.*/
for (i = 0; i < NUM_INTERRUPTS; i++)
set_idt_entry(vm, i, (unsigned long)(&idt_handlers)[i], 0,
--
2.44.0.291.gc1ea87d7ee-goog


2024-03-14 23:36:10

by Sean Christopherson

[permalink] [raw]
Subject: [PATCH 06/18] KVM: selftests: Rework platform_info_test to actually verify #GP

Rework platform_info_test to actually handle and verify the expected #GP
on RDMSR when the associated KVM capability is disabled. Currently, the
test _deliberately_ doesn't handle the #GP, and instead lets it escalate
to a triple-fault shutdown.

In addition to verifying that KVM generates the correct fault, handling
the #GP will be necessary (without even more shenanigans) when a future
change to the core KVM selftests library configures the IDT and exception
handlers by default (the test subtly relies on the IDT limit being '0').

Signed-off-by: Sean Christopherson <[email protected]>
---
.../selftests/kvm/x86_64/platform_info_test.c | 66 +++++++++----------
1 file changed, 33 insertions(+), 33 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/platform_info_test.c b/tools/testing/selftests/kvm/x86_64/platform_info_test.c
index cdad7e2124c8..6300bb70f028 100644
--- a/tools/testing/selftests/kvm/x86_64/platform_info_test.c
+++ b/tools/testing/selftests/kvm/x86_64/platform_info_test.c
@@ -26,40 +26,18 @@
static void guest_code(void)
{
uint64_t msr_platform_info;
+ uint8_t vector;

- for (;;) {
- msr_platform_info = rdmsr(MSR_PLATFORM_INFO);
- GUEST_ASSERT_EQ(msr_platform_info & MSR_PLATFORM_INFO_MAX_TURBO_RATIO,
- MSR_PLATFORM_INFO_MAX_TURBO_RATIO);
- GUEST_SYNC(0);
- asm volatile ("inc %r11");
- }
-}
+ GUEST_SYNC(true);
+ msr_platform_info = rdmsr(MSR_PLATFORM_INFO);
+ GUEST_ASSERT_EQ(msr_platform_info & MSR_PLATFORM_INFO_MAX_TURBO_RATIO,
+ MSR_PLATFORM_INFO_MAX_TURBO_RATIO);

-static void test_msr_platform_info_enabled(struct kvm_vcpu *vcpu)
-{
- struct ucall uc;
+ GUEST_SYNC(false);
+ vector = rdmsr_safe(MSR_PLATFORM_INFO, &msr_platform_info);
+ GUEST_ASSERT_EQ(vector, GP_VECTOR);

- vm_enable_cap(vcpu->vm, KVM_CAP_MSR_PLATFORM_INFO, true);
- vcpu_run(vcpu);
- TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_IO);
-
- switch (get_ucall(vcpu, &uc)) {
- case UCALL_SYNC:
- break;
- case UCALL_ABORT:
- REPORT_GUEST_ASSERT(uc);
- default:
- TEST_FAIL("Unexpected ucall %lu", uc.cmd);
- break;
- }
-}
-
-static void test_msr_platform_info_disabled(struct kvm_vcpu *vcpu)
-{
- vm_enable_cap(vcpu->vm, KVM_CAP_MSR_PLATFORM_INFO, false);
- vcpu_run(vcpu);
- TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_SHUTDOWN);
+ GUEST_DONE();
}

int main(int argc, char *argv[])
@@ -67,16 +45,38 @@ int main(int argc, char *argv[])
struct kvm_vcpu *vcpu;
struct kvm_vm *vm;
uint64_t msr_platform_info;
+ struct ucall uc;

TEST_REQUIRE(kvm_has_cap(KVM_CAP_MSR_PLATFORM_INFO));

vm = vm_create_with_one_vcpu(&vcpu, guest_code);

+ vm_init_descriptor_tables(vm);
+ vcpu_init_descriptor_tables(vcpu);
+
msr_platform_info = vcpu_get_msr(vcpu, MSR_PLATFORM_INFO);
vcpu_set_msr(vcpu, MSR_PLATFORM_INFO,
msr_platform_info | MSR_PLATFORM_INFO_MAX_TURBO_RATIO);
- test_msr_platform_info_enabled(vcpu);
- test_msr_platform_info_disabled(vcpu);
+
+ for (;;) {
+ vcpu_run(vcpu);
+ TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_IO);
+
+ switch (get_ucall(vcpu, &uc)) {
+ case UCALL_SYNC:
+ vm_enable_cap(vm, KVM_CAP_MSR_PLATFORM_INFO, uc.args[1]);
+ break;
+ case UCALL_DONE:
+ goto done;
+ case UCALL_ABORT:
+ REPORT_GUEST_ASSERT(uc);
+ default:
+ TEST_FAIL("Unexpected ucall %lu", uc.cmd);
+ break;
+ }
+ }
+
+done:
vcpu_set_msr(vcpu, MSR_PLATFORM_INFO, msr_platform_info);

kvm_vm_free(vm);
--
2.44.0.291.gc1ea87d7ee-goog


2024-03-14 23:36:19

by Sean Christopherson

[permalink] [raw]
Subject: [PATCH 16/18] KVM: selftests: Add macro for TSS selector, rename up code/data macros

Add a proper #define for the TSS selector instead of open coding 0x18 and
hoping future developers don't use that selector for something else.

Opportunistically rename the code and data selector macros to shorten the
names, align the naming with the kernel's scheme, and capture that they
are *kernel* segments.

Signed-off-by: Sean Christopherson <[email protected]>
---
.../selftests/kvm/lib/x86_64/processor.c | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 03b9387a1d2e..67235013f6f9 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -15,8 +15,9 @@
#define NUM_INTERRUPTS 256
#endif

-#define DEFAULT_CODE_SELECTOR 0x8
-#define DEFAULT_DATA_SELECTOR 0x10
+#define KERNEL_CS 0x8
+#define KERNEL_DS 0x10
+#define KERNEL_TSS 0x18

#define MAX_NR_CPUID_ENTRIES 100

@@ -547,11 +548,11 @@ static void vcpu_init_sregs(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
sregs.efer |= (EFER_LME | EFER_LMA | EFER_NX);

kvm_seg_set_unusable(&sregs.ldt);
- kvm_seg_set_kernel_code_64bit(vm, DEFAULT_CODE_SELECTOR, &sregs.cs);
- kvm_seg_set_kernel_data_64bit(vm, DEFAULT_DATA_SELECTOR, &sregs.ds);
- kvm_seg_set_kernel_data_64bit(vm, DEFAULT_DATA_SELECTOR, &sregs.es);
- kvm_seg_set_kernel_data_64bit(NULL, DEFAULT_DATA_SELECTOR, &sregs.gs);
- kvm_setup_tss_64bit(vm, &sregs.tr, 0x18);
+ kvm_seg_set_kernel_code_64bit(vm, KERNEL_CS, &sregs.cs);
+ kvm_seg_set_kernel_data_64bit(vm, KERNEL_DS, &sregs.ds);
+ kvm_seg_set_kernel_data_64bit(vm, KERNEL_DS, &sregs.es);
+ kvm_seg_set_kernel_data_64bit(NULL, KERNEL_DS, &sregs.gs);
+ kvm_setup_tss_64bit(vm, &sregs.tr, KERNEL_TSS);

sregs.cr3 = vm->pgd;
vcpu_sregs_set(vcpu, &sregs);
@@ -620,8 +621,7 @@ static void vm_init_descriptor_tables(struct kvm_vm *vm)

/* Handlers have the same address in both address spaces.*/
for (i = 0; i < NUM_INTERRUPTS; i++)
- set_idt_entry(vm, i, (unsigned long)(&idt_handlers)[i], 0,
- DEFAULT_CODE_SELECTOR);
+ set_idt_entry(vm, i, (unsigned long)(&idt_handlers)[i], 0, KERNEL_CS);

*(vm_vaddr_t *)addr_gva2hva(vm, (vm_vaddr_t)(&exception_handlers)) = vm->handlers;
}
--
2.44.0.291.gc1ea87d7ee-goog


2024-03-14 23:36:34

by Sean Christopherson

[permalink] [raw]
Subject: [PATCH 07/18] KVM: selftests: Explicitly clobber the IDT in the "delete memslot" testcase

Explicitly clobber the guest IDT in the "delete memslot" test, which
expects the deleted memslot to result in either a KVM emulation error, or
a triple fault shutdown. A future change to the core selftests library
will configuring the guest IDT and exception handlers by default, i.e.
will install a guest #PF handler and put the guest into an infinite #NPF
loop (the guest hits a !PRESENT SPTE when trying to vector a #PF, and KVM
reinjects the #PF without fixing the #NPF, because there is no memslot).

Note, it's not clear whether or not KVM's behavior is reasonable in this
case, e.g. arguably KVM should try (and fail) to emulate in response to
the #NPF. But barring a goofy/broken userspace, this scenario will likely
never happen in practice. Punt the KVM investigation to the future.

Signed-off-by: Sean Christopherson <[email protected]>
---
tools/testing/selftests/kvm/set_memory_region_test.c | 12 ++++++++++++
1 file changed, 12 insertions(+)

diff --git a/tools/testing/selftests/kvm/set_memory_region_test.c b/tools/testing/selftests/kvm/set_memory_region_test.c
index 06b43ed23580..9b814ea16eb4 100644
--- a/tools/testing/selftests/kvm/set_memory_region_test.c
+++ b/tools/testing/selftests/kvm/set_memory_region_test.c
@@ -221,8 +221,20 @@ static void test_move_memory_region(void)

static void guest_code_delete_memory_region(void)
{
+ struct desc_ptr idt;
uint64_t val;

+ /*
+ * Clobber the IDT so that a #PF due to the memory region being deleted
+ * escalates to triple-fault shutdown. Because the memory region is
+ * deleted, there will be no valid mappings. As a result, KVM will
+ * repeatedly intercept the stage-2 page fault that occurs when trying
+ * to vector the guest's #PF. I.e. trying to actually handle the #PF
+ * in the guest will never succeed, and so isn't an option.
+ */
+ memset(&idt, 0, sizeof(idt));
+ __asm__ __volatile__("lidt %0" :: "m"(idt));
+
GUEST_SYNC(0);

/* Spin until the memory region is deleted. */
--
2.44.0.291.gc1ea87d7ee-goog


2024-03-14 23:36:46

by Sean Christopherson

[permalink] [raw]
Subject: [PATCH 17/18] KVM: selftests: Init x86's segments during VM creation

Initialize x86's various segments in the GDT during creation of relevant
VMs instead of waiting until vCPUs come along. Re-installing the segments
for every vCPU is both wasteful and confusing, as is installing KERNEL_DS
multiple times; NOT installing KERNEL_DS for GS is icing on the cake.

Signed-off-by: Sean Christopherson <[email protected]>
---
.../selftests/kvm/lib/x86_64/processor.c | 68 ++++++-------------
1 file changed, 20 insertions(+), 48 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 67235013f6f9..dab719ee7734 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -438,24 +438,7 @@ static void kvm_seg_fill_gdt_64bit(struct kvm_vm *vm, struct kvm_segment *segp)
desc->base3 = segp->base >> 32;
}

-
-/*
- * Set Long Mode Flat Kernel Code Segment
- *
- * Input Args:
- * vm - VM whose GDT is being filled, or NULL to only write segp
- * selector - selector value
- *
- * Output Args:
- * segp - Pointer to KVM segment
- *
- * Return: None
- *
- * Sets up the KVM segment pointed to by @segp, to be a code segment
- * with the selector value given by @selector.
- */
-static void kvm_seg_set_kernel_code_64bit(struct kvm_vm *vm, uint16_t selector,
- struct kvm_segment *segp)
+static void kvm_seg_set_kernel_code_64bit(uint16_t selector, struct kvm_segment *segp)
{
memset(segp, 0, sizeof(*segp));
segp->selector = selector;
@@ -467,27 +450,9 @@ static void kvm_seg_set_kernel_code_64bit(struct kvm_vm *vm, uint16_t selector,
segp->g = true;
segp->l = true;
segp->present = 1;
- if (vm)
- kvm_seg_fill_gdt_64bit(vm, segp);
}

-/*
- * Set Long Mode Flat Kernel Data Segment
- *
- * Input Args:
- * vm - VM whose GDT is being filled, or NULL to only write segp
- * selector - selector value
- *
- * Output Args:
- * segp - Pointer to KVM segment
- *
- * Return: None
- *
- * Sets up the KVM segment pointed to by @segp, to be a data segment
- * with the selector value given by @selector.
- */
-static void kvm_seg_set_kernel_data_64bit(struct kvm_vm *vm, uint16_t selector,
- struct kvm_segment *segp)
+static void kvm_seg_set_kernel_data_64bit(uint16_t selector, struct kvm_segment *segp)
{
memset(segp, 0, sizeof(*segp));
segp->selector = selector;
@@ -498,8 +463,6 @@ static void kvm_seg_set_kernel_data_64bit(struct kvm_vm *vm, uint16_t selector,
*/
segp->g = true;
segp->present = true;
- if (vm)
- kvm_seg_fill_gdt_64bit(vm, segp);
}

vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
@@ -517,16 +480,15 @@ vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
return vm_untag_gpa(vm, PTE_GET_PA(*pte)) | (gva & ~HUGEPAGE_MASK(level));
}

-static void kvm_setup_tss_64bit(struct kvm_vm *vm, struct kvm_segment *segp,
- int selector)
+static void kvm_seg_set_tss_64bit(vm_vaddr_t base, struct kvm_segment *segp,
+ int selector)
{
memset(segp, 0, sizeof(*segp));
- segp->base = vm->arch.tss;
+ segp->base = base;
segp->limit = 0x67;
segp->selector = selector;
segp->type = 0xb;
segp->present = 1;
- kvm_seg_fill_gdt_64bit(vm, segp);
}

static void vcpu_init_sregs(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
@@ -548,11 +510,11 @@ static void vcpu_init_sregs(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
sregs.efer |= (EFER_LME | EFER_LMA | EFER_NX);

kvm_seg_set_unusable(&sregs.ldt);
- kvm_seg_set_kernel_code_64bit(vm, KERNEL_CS, &sregs.cs);
- kvm_seg_set_kernel_data_64bit(vm, KERNEL_DS, &sregs.ds);
- kvm_seg_set_kernel_data_64bit(vm, KERNEL_DS, &sregs.es);
- kvm_seg_set_kernel_data_64bit(NULL, KERNEL_DS, &sregs.gs);
- kvm_setup_tss_64bit(vm, &sregs.tr, KERNEL_TSS);
+ kvm_seg_set_kernel_code_64bit(KERNEL_CS, &sregs.cs);
+ kvm_seg_set_kernel_data_64bit(KERNEL_DS, &sregs.ds);
+ kvm_seg_set_kernel_data_64bit(KERNEL_DS, &sregs.es);
+ kvm_seg_set_kernel_data_64bit(KERNEL_DS, &sregs.gs);
+ kvm_seg_set_tss_64bit(vm->arch.tss, &sregs.tr, KERNEL_TSS);

sregs.cr3 = vm->pgd;
vcpu_sregs_set(vcpu, &sregs);
@@ -612,6 +574,7 @@ void route_exception(struct ex_regs *regs)
static void vm_init_descriptor_tables(struct kvm_vm *vm)
{
extern void *idt_handlers;
+ struct kvm_segment seg;
int i;

vm->arch.gdt = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
@@ -624,6 +587,15 @@ static void vm_init_descriptor_tables(struct kvm_vm *vm)
set_idt_entry(vm, i, (unsigned long)(&idt_handlers)[i], 0, KERNEL_CS);

*(vm_vaddr_t *)addr_gva2hva(vm, (vm_vaddr_t)(&exception_handlers)) = vm->handlers;
+
+ kvm_seg_set_kernel_code_64bit(KERNEL_CS, &seg);
+ kvm_seg_fill_gdt_64bit(vm, &seg);
+
+ kvm_seg_set_kernel_data_64bit(KERNEL_DS, &seg);
+ kvm_seg_fill_gdt_64bit(vm, &seg);
+
+ kvm_seg_set_tss_64bit(vm->arch.tss, &seg, KERNEL_TSS);
+ kvm_seg_fill_gdt_64bit(vm, &seg);
}

void vm_install_exception_handler(struct kvm_vm *vm, int vector,
--
2.44.0.291.gc1ea87d7ee-goog


2024-03-14 23:37:09

by Sean Christopherson

[permalink] [raw]
Subject: [PATCH 18/18] KVM: selftests: Drop @selector from segment helpers

Drop the @selector from the kernel code, data, and TSS builders and
instead hardcode the respective selector in the helper. Accepting a
selector but not a base makes the selector useless, e.g. the data helper
can't create per-vCPU segments for FS or GS, and so loading GS with KERNEL_DS is
the only logical choice.

And for code and TSS, there is no known reason to ever want multiple
segments, e.g. there are zero plans to support 32-bit kernel code (and
again, that would require more than just the selector).

If KVM selftests ever do add support for per-vCPU segments, it'd arguably
be more readable to add a dedicated helper for building/setting the
per-vCPU segment, and move the common data segment code to an inner
helper.

Lastly, hardcoding the selector reduces the probability of setting the
wrong selector in the vCPU versus what was created by the VM in the GDT.

Signed-off-by: Sean Christopherson <[email protected]>
---
.../selftests/kvm/lib/x86_64/processor.c | 29 +++++++++----------
1 file changed, 14 insertions(+), 15 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index dab719ee7734..6abd50d6e59d 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -438,10 +438,10 @@ static void kvm_seg_fill_gdt_64bit(struct kvm_vm *vm, struct kvm_segment *segp)
desc->base3 = segp->base >> 32;
}

-static void kvm_seg_set_kernel_code_64bit(uint16_t selector, struct kvm_segment *segp)
+static void kvm_seg_set_kernel_code_64bit(struct kvm_segment *segp)
{
memset(segp, 0, sizeof(*segp));
- segp->selector = selector;
+ segp->selector = KERNEL_CS;
segp->limit = 0xFFFFFFFFu;
segp->s = 0x1; /* kTypeCodeData */
segp->type = 0x08 | 0x01 | 0x02; /* kFlagCode | kFlagCodeAccessed
@@ -452,10 +452,10 @@ static void kvm_seg_set_kernel_code_64bit(uint16_t selector, struct kvm_segment
segp->present = 1;
}

-static void kvm_seg_set_kernel_data_64bit(uint16_t selector, struct kvm_segment *segp)
+static void kvm_seg_set_kernel_data_64bit(struct kvm_segment *segp)
{
memset(segp, 0, sizeof(*segp));
- segp->selector = selector;
+ segp->selector = KERNEL_DS;
segp->limit = 0xFFFFFFFFu;
segp->s = 0x1; /* kTypeCodeData */
segp->type = 0x00 | 0x01 | 0x02; /* kFlagData | kFlagDataAccessed
@@ -480,13 +480,12 @@ vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
return vm_untag_gpa(vm, PTE_GET_PA(*pte)) | (gva & ~HUGEPAGE_MASK(level));
}

-static void kvm_seg_set_tss_64bit(vm_vaddr_t base, struct kvm_segment *segp,
- int selector)
+static void kvm_seg_set_tss_64bit(vm_vaddr_t base, struct kvm_segment *segp)
{
memset(segp, 0, sizeof(*segp));
segp->base = base;
segp->limit = 0x67;
- segp->selector = selector;
+ segp->selector = KERNEL_TSS;
segp->type = 0xb;
segp->present = 1;
}
@@ -510,11 +509,11 @@ static void vcpu_init_sregs(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
sregs.efer |= (EFER_LME | EFER_LMA | EFER_NX);

kvm_seg_set_unusable(&sregs.ldt);
- kvm_seg_set_kernel_code_64bit(KERNEL_CS, &sregs.cs);
- kvm_seg_set_kernel_data_64bit(KERNEL_DS, &sregs.ds);
- kvm_seg_set_kernel_data_64bit(KERNEL_DS, &sregs.es);
- kvm_seg_set_kernel_data_64bit(KERNEL_DS, &sregs.gs);
- kvm_seg_set_tss_64bit(vm->arch.tss, &sregs.tr, KERNEL_TSS);
+ kvm_seg_set_kernel_code_64bit(&sregs.cs);
+ kvm_seg_set_kernel_data_64bit(&sregs.ds);
+ kvm_seg_set_kernel_data_64bit(&sregs.es);
+ kvm_seg_set_kernel_data_64bit(&sregs.gs);
+ kvm_seg_set_tss_64bit(vm->arch.tss, &sregs.tr);

sregs.cr3 = vm->pgd;
vcpu_sregs_set(vcpu, &sregs);
@@ -588,13 +587,13 @@ static void vm_init_descriptor_tables(struct kvm_vm *vm)

*(vm_vaddr_t *)addr_gva2hva(vm, (vm_vaddr_t)(&exception_handlers)) = vm->handlers;

- kvm_seg_set_kernel_code_64bit(KERNEL_CS, &seg);
+ kvm_seg_set_kernel_code_64bit(&seg);
kvm_seg_fill_gdt_64bit(vm, &seg);

- kvm_seg_set_kernel_data_64bit(KERNEL_DS, &seg);
+ kvm_seg_set_kernel_data_64bit(&seg);
kvm_seg_fill_gdt_64bit(vm, &seg);

- kvm_seg_set_tss_64bit(vm->arch.tss, &seg, KERNEL_TSS);
+ kvm_seg_set_tss_64bit(vm->arch.tss, &seg);
kvm_seg_fill_gdt_64bit(vm, &seg);
}

--
2.44.0.291.gc1ea87d7ee-goog


2024-03-14 23:37:15

by Sean Christopherson

[permalink] [raw]
Subject: [PATCH 09/18] KVM: selftests: Rename x86's vcpu_setup() to vcpu_init_sregs()

Rename vcpu_setup() to be more descriptive and precise; there is a whole
lot of "setup" that is done for a vCPU that isn't in said helper.

No functional change intended.

Signed-off-by: Sean Christopherson <[email protected]>
---
tools/testing/selftests/kvm/lib/x86_64/processor.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 3640d3290f0a..d6bfe96a6a77 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -555,7 +555,7 @@ void vcpu_init_descriptor_tables(struct kvm_vcpu *vcpu)
*(vm_vaddr_t *)addr_gva2hva(vm, (vm_vaddr_t)(&exception_handlers)) = vm->handlers;
}

-static void vcpu_setup(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
+static void vcpu_init_sregs(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
{
struct kvm_sregs sregs;

@@ -716,7 +716,7 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id)

vcpu = __vm_vcpu_add(vm, vcpu_id);
vcpu_init_cpuid(vcpu, kvm_get_supported_cpuid());
- vcpu_setup(vm, vcpu);
+ vcpu_init_sregs(vm, vcpu);

/* Setup guest general purpose registers */
vcpu_regs_get(vcpu, &regs);
--
2.44.0.291.gc1ea87d7ee-goog


2024-03-28 02:29:24

by Ackerley Tng

[permalink] [raw]
Subject: Re: [PATCH 13/18] KVM: selftests: Drop superfluous switch() on vm->mode in vcpu_init_sregs()

Sean Christopherson <[email protected]> writes:

> Replace the switch statement on vm->mode in x86's vcpu_init_sregs() with
> a simple assert that the VM has a 48-bit virtual address space. A switch
> statement is both overkill and misleading, as the existing code incorrectly
> implies that VMs with LA57 would need different configuration for the
> LDT, TSS, and flat segments. In all likelihood, the only difference that
> would be needed for selftests is CR4.LA57 itself.
>
> Signed-off-by: Sean Christopherson <[email protected]>
> ---
> .../selftests/kvm/lib/x86_64/processor.c | 25 ++++++++-----------
> 1 file changed, 10 insertions(+), 15 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> index 8547833ffa26..561c0aa93608 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> @@ -555,6 +555,8 @@ static void vcpu_init_sregs(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
> {
> struct kvm_sregs sregs;
>
> + TEST_ASSERT_EQ(vm->mode, VM_MODE_PXXV48_4K);
> +
> /* Set mode specific system register values. */
> vcpu_sregs_get(vcpu, &sregs);
>
> @@ -562,22 +564,15 @@ static void vcpu_init_sregs(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
>
> kvm_setup_gdt(vm, &sregs.gdt);
>
> - switch (vm->mode) {
> - case VM_MODE_PXXV48_4K:
> - sregs.cr0 = X86_CR0_PE | X86_CR0_NE | X86_CR0_PG;
> - sregs.cr4 |= X86_CR4_PAE | X86_CR4_OSFXSR;
> - sregs.efer |= (EFER_LME | EFER_LMA | EFER_NX);
> + sregs.cr0 = X86_CR0_PE | X86_CR0_NE | X86_CR0_PG;
> + sregs.cr4 |= X86_CR4_PAE | X86_CR4_OSFXSR;
> + sregs.efer |= (EFER_LME | EFER_LMA | EFER_NX);
>
> - kvm_seg_set_unusable(&sregs.ldt);
> - kvm_seg_set_kernel_code_64bit(vm, DEFAULT_CODE_SELECTOR, &sregs.cs);
> - kvm_seg_set_kernel_data_64bit(vm, DEFAULT_DATA_SELECTOR, &sregs.ds);
> - kvm_seg_set_kernel_data_64bit(vm, DEFAULT_DATA_SELECTOR, &sregs.es);
> - kvm_setup_tss_64bit(vm, &sregs.tr, 0x18);
> - break;
> -
> - default:
> - TEST_FAIL("Unknown guest mode, mode: 0x%x", vm->mode);
> - }
> + kvm_seg_set_unusable(&sregs.ldt);
> + kvm_seg_set_kernel_code_64bit(vm, DEFAULT_CODE_SELECTOR, &sregs.cs);
> + kvm_seg_set_kernel_data_64bit(vm, DEFAULT_DATA_SELECTOR, &sregs.ds);
> + kvm_seg_set_kernel_data_64bit(vm, DEFAULT_DATA_SELECTOR, &sregs.es);
> + kvm_setup_tss_64bit(vm, &sregs.tr, 0x18);
>
> sregs.cr3 = vm->pgd;
> vcpu_sregs_set(vcpu, &sregs);
> --
> 2.44.0.291.gc1ea87d7ee-goog

Reviewed-by: Ackerley Tng <[email protected]>

2024-03-28 02:33:53

by Ackerley Tng

[permalink] [raw]
Subject: Re: [PATCH 12/18] KVM: selftests: Allocate x86's GDT during VM creation

Sean Christopherson <[email protected]> writes:

> Allocate the GDT during creation of non-barebones VMs instead of waiting
> until the first vCPU is created, as the whole point of non-barebones VMs
> is to be able to run vCPUs, i.e. the GDT is going to get allocated no
> matter what.
>
> Signed-off-by: Sean Christopherson <[email protected]>
> ---
> tools/testing/selftests/kvm/lib/x86_64/processor.c | 4 +---
> 1 file changed, 1 insertion(+), 3 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> index f4046029f168..8547833ffa26 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> @@ -518,9 +518,6 @@ vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
>
> static void kvm_setup_gdt(struct kvm_vm *vm, struct kvm_dtable *dt)
> {
> - if (!vm->arch.gdt)
> - vm->arch.gdt = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
> -
> dt->base = vm->arch.gdt;
> dt->limit = getpagesize() - 1;
> }
> @@ -644,6 +641,7 @@ static void vm_init_descriptor_tables(struct kvm_vm *vm)
> extern void *idt_handlers;
> int i;
>
> + vm->arch.gdt = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
> vm->arch.idt = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
> vm->handlers = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
> /* Handlers have the same address in both address spaces.*/
> --
> 2.44.0.291.gc1ea87d7ee-goog

Reviewed-by: Ackerley Tng <[email protected]>

2024-03-28 02:36:02

by Ackerley Tng

[permalink] [raw]
Subject: Re: [PATCH 11/18] KVM: selftests: Map x86's exception_handlers at VM creation, not vCPU setup

Sean Christopherson <[email protected]> writes:

> Map x86's exception handlers at VM creation, not vCPU setup, as the
> mapping is per-VM, i.e. doesn't need to be (re)done for every vCPU.
>
> Signed-off-by: Sean Christopherson <[email protected]>
> ---
> tools/testing/selftests/kvm/lib/x86_64/processor.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> index 5813d93b2e7c..f4046029f168 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> @@ -552,7 +552,6 @@ static void vcpu_init_descriptor_tables(struct kvm_vcpu *vcpu)
> sregs.gdt.limit = getpagesize() - 1;
> kvm_seg_set_kernel_data_64bit(NULL, DEFAULT_DATA_SELECTOR, &sregs.gs);
> vcpu_sregs_set(vcpu, &sregs);
> - *(vm_vaddr_t *)addr_gva2hva(vm, (vm_vaddr_t)(&exception_handlers)) = vm->handlers;
> }
>
> static void vcpu_init_sregs(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
> @@ -651,6 +650,8 @@ static void vm_init_descriptor_tables(struct kvm_vm *vm)
> for (i = 0; i < NUM_INTERRUPTS; i++)
> set_idt_entry(vm, i, (unsigned long)(&idt_handlers)[i], 0,
> DEFAULT_CODE_SELECTOR);
> +
> + *(vm_vaddr_t *)addr_gva2hva(vm, (vm_vaddr_t)(&exception_handlers)) = vm->handlers;
> }
>
> void vm_install_exception_handler(struct kvm_vm *vm, int vector,
> --
> 2.44.0.291.gc1ea87d7ee-goog

Reviewed-by: Ackerley Tng <[email protected]>

2024-03-28 02:47:01

by Ackerley Tng

[permalink] [raw]
Subject: Re: [PATCH 10/18] KVM: selftests: Init IDT and exception handlers for all VMs/vCPUs on x86

Sean Christopherson <[email protected]> writes:

> Initialize the IDT and exception handlers for all non-barebones VMs and
> vCPUs on x86. Forcing tests to manually configure the IDT just to save
> 8KiB of memory is a terrible tradeoff, and also leads to weird tests
> (multiple tests have deliberately relied on shutdown to indicate success),
> and hard-to-debug failures, e.g. instead of a precise unexpected exception
> failure, tests see only shutdown.
>
> Signed-off-by: Sean Christopherson <[email protected]>
> ---
> tools/testing/selftests/kvm/include/x86_64/processor.h | 2 --
> tools/testing/selftests/kvm/lib/x86_64/processor.c | 8 ++++++--
> tools/testing/selftests/kvm/x86_64/amx_test.c | 2 --
> tools/testing/selftests/kvm/x86_64/fix_hypercall_test.c | 2 --
> tools/testing/selftests/kvm/x86_64/hyperv_evmcs.c | 2 --
> tools/testing/selftests/kvm/x86_64/hyperv_features.c | 6 ------
> tools/testing/selftests/kvm/x86_64/hyperv_ipi.c | 3 ---
> tools/testing/selftests/kvm/x86_64/kvm_pv_test.c | 3 ---
> tools/testing/selftests/kvm/x86_64/monitor_mwait_test.c | 3 ---
> tools/testing/selftests/kvm/x86_64/platform_info_test.c | 3 ---
> tools/testing/selftests/kvm/x86_64/pmu_counters_test.c | 3 ---
> .../testing/selftests/kvm/x86_64/pmu_event_filter_test.c | 6 ------
> .../kvm/x86_64/smaller_maxphyaddr_emulation_test.c | 3 ---
> tools/testing/selftests/kvm/x86_64/svm_int_ctl_test.c | 3 ---
> .../selftests/kvm/x86_64/svm_nested_shutdown_test.c | 3 ---
> .../selftests/kvm/x86_64/svm_nested_soft_inject_test.c | 3 ---
> tools/testing/selftests/kvm/x86_64/ucna_injection_test.c | 4 ----
> .../selftests/kvm/x86_64/userspace_msr_exit_test.c | 3 ---
> .../kvm/x86_64/vmx_exception_with_invalid_guest_state.c | 3 ---
> tools/testing/selftests/kvm/x86_64/vmx_pmu_caps_test.c | 3 ---
> tools/testing/selftests/kvm/x86_64/xapic_ipi_test.c | 2 --
> tools/testing/selftests/kvm/x86_64/xcr0_cpuid_test.c | 3 ---
> tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c | 2 --
> 23 files changed, 6 insertions(+), 69 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
> index d6ffe03c9d0b..4804abe00158 100644
> --- a/tools/testing/selftests/kvm/include/x86_64/processor.h
> +++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
> @@ -1129,8 +1129,6 @@ struct idt_entry {
> uint32_t offset2; uint32_t reserved;
> };
>
> -void vm_init_descriptor_tables(struct kvm_vm *vm);
> -void vcpu_init_descriptor_tables(struct kvm_vcpu *vcpu);
> void vm_install_exception_handler(struct kvm_vm *vm, int vector,
> void (*handler)(struct ex_regs *));
>
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> index d6bfe96a6a77..5813d93b2e7c 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> @@ -540,7 +540,7 @@ static void kvm_setup_tss_64bit(struct kvm_vm *vm, struct kvm_segment *segp,
> kvm_seg_fill_gdt_64bit(vm, segp);
> }
>
> -void vcpu_init_descriptor_tables(struct kvm_vcpu *vcpu)
> +static void vcpu_init_descriptor_tables(struct kvm_vcpu *vcpu)
> {
> struct kvm_vm *vm = vcpu->vm;
> struct kvm_sregs sregs;
> @@ -585,6 +585,8 @@ static void vcpu_init_sregs(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
>
> sregs.cr3 = vm->pgd;
> vcpu_sregs_set(vcpu, &sregs);
> +
> + vcpu_init_descriptor_tables(vcpu);
> }
>
> static void set_idt_entry(struct kvm_vm *vm, int vector, unsigned long addr,
> @@ -638,7 +640,7 @@ void route_exception(struct ex_regs *regs)
> regs->vector, regs->rip);
> }
>
> -void vm_init_descriptor_tables(struct kvm_vm *vm)
> +static void vm_init_descriptor_tables(struct kvm_vm *vm)
> {
> extern void *idt_handlers;
> int i;
> @@ -670,6 +672,8 @@ void assert_on_unhandled_exception(struct kvm_vcpu *vcpu)
> void kvm_arch_vm_post_create(struct kvm_vm *vm)
> {
> vm_create_irqchip(vm);
> + vm_init_descriptor_tables(vm);
> +
> sync_global_to_guest(vm, host_cpu_is_intel);
> sync_global_to_guest(vm, host_cpu_is_amd);
>
> diff --git a/tools/testing/selftests/kvm/x86_64/amx_test.c b/tools/testing/selftests/kvm/x86_64/amx_test.c
> index eae521f050e0..ab6c31aee447 100644
> --- a/tools/testing/selftests/kvm/x86_64/amx_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/amx_test.c
> @@ -246,8 +246,6 @@ int main(int argc, char *argv[])
> vcpu_regs_get(vcpu, &regs1);
>
> /* Register #NM handler */
> - vm_init_descriptor_tables(vm);
> - vcpu_init_descriptor_tables(vcpu);
> vm_install_exception_handler(vm, NM_VECTOR, guest_nm_handler);
>
> /* amx cfg for guest_code */
> diff --git a/tools/testing/selftests/kvm/x86_64/fix_hypercall_test.c b/tools/testing/selftests/kvm/x86_64/fix_hypercall_test.c
> index f3c2239228b1..762628f7d4ba 100644
> --- a/tools/testing/selftests/kvm/x86_64/fix_hypercall_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/fix_hypercall_test.c
> @@ -110,8 +110,6 @@ static void test_fix_hypercall(struct kvm_vcpu *vcpu, bool disable_quirk)
> {
> struct kvm_vm *vm = vcpu->vm;
>
> - vm_init_descriptor_tables(vm);
> - vcpu_init_descriptor_tables(vcpu);
> vm_install_exception_handler(vcpu->vm, UD_VECTOR, guest_ud_handler);
>
> if (disable_quirk)
> diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_evmcs.c b/tools/testing/selftests/kvm/x86_64/hyperv_evmcs.c
> index 4c7257ecd2a6..4238691a755c 100644
> --- a/tools/testing/selftests/kvm/x86_64/hyperv_evmcs.c
> +++ b/tools/testing/selftests/kvm/x86_64/hyperv_evmcs.c
> @@ -258,8 +258,6 @@ int main(int argc, char *argv[])
> vcpu_args_set(vcpu, 3, vmx_pages_gva, hv_pages_gva, addr_gva2gpa(vm, hcall_page));
> vcpu_set_msr(vcpu, HV_X64_MSR_VP_INDEX, vcpu->id);
>
> - vm_init_descriptor_tables(vm);
> - vcpu_init_descriptor_tables(vcpu);
> vm_install_exception_handler(vm, UD_VECTOR, guest_ud_handler);
> vm_install_exception_handler(vm, NMI_VECTOR, guest_nmi_handler);
>
> diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_features.c b/tools/testing/selftests/kvm/x86_64/hyperv_features.c
> index b923a285e96f..068e9c69710d 100644
> --- a/tools/testing/selftests/kvm/x86_64/hyperv_features.c
> +++ b/tools/testing/selftests/kvm/x86_64/hyperv_features.c
> @@ -156,9 +156,6 @@ static void guest_test_msrs_access(void)
> vcpu_init_cpuid(vcpu, prev_cpuid);
> }
>
> - vm_init_descriptor_tables(vm);
> - vcpu_init_descriptor_tables(vcpu);
> -
> /* TODO: Make this entire test easier to maintain. */
> if (stage >= 21)
> vcpu_enable_cap(vcpu, KVM_CAP_HYPERV_SYNIC2, 0);
> @@ -532,9 +529,6 @@ static void guest_test_hcalls_access(void)
> while (true) {
> vm = vm_create_with_one_vcpu(&vcpu, guest_hcall);
>
> - vm_init_descriptor_tables(vm);
> - vcpu_init_descriptor_tables(vcpu);
> -
> /* Hypercall input/output */
> hcall_page = vm_vaddr_alloc_pages(vm, 2);
> memset(addr_gva2hva(vm, hcall_page), 0x0, 2 * getpagesize());
> diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_ipi.c b/tools/testing/selftests/kvm/x86_64/hyperv_ipi.c
> index f1617762c22f..c6a03141cdaa 100644
> --- a/tools/testing/selftests/kvm/x86_64/hyperv_ipi.c
> +++ b/tools/testing/selftests/kvm/x86_64/hyperv_ipi.c
> @@ -256,16 +256,13 @@ int main(int argc, char *argv[])
> hcall_page = vm_vaddr_alloc_pages(vm, 2);
> memset(addr_gva2hva(vm, hcall_page), 0x0, 2 * getpagesize());
>
> - vm_init_descriptor_tables(vm);
>
> vcpu[1] = vm_vcpu_add(vm, RECEIVER_VCPU_ID_1, receiver_code);
> - vcpu_init_descriptor_tables(vcpu[1]);
> vcpu_args_set(vcpu[1], 2, hcall_page, addr_gva2gpa(vm, hcall_page));
> vcpu_set_msr(vcpu[1], HV_X64_MSR_VP_INDEX, RECEIVER_VCPU_ID_1);
> vcpu_set_hv_cpuid(vcpu[1]);
>
> vcpu[2] = vm_vcpu_add(vm, RECEIVER_VCPU_ID_2, receiver_code);
> - vcpu_init_descriptor_tables(vcpu[2]);
> vcpu_args_set(vcpu[2], 2, hcall_page, addr_gva2gpa(vm, hcall_page));
> vcpu_set_msr(vcpu[2], HV_X64_MSR_VP_INDEX, RECEIVER_VCPU_ID_2);
> vcpu_set_hv_cpuid(vcpu[2]);
> diff --git a/tools/testing/selftests/kvm/x86_64/kvm_pv_test.c b/tools/testing/selftests/kvm/x86_64/kvm_pv_test.c
> index 9e2879af7c20..cef0bd80038b 100644
> --- a/tools/testing/selftests/kvm/x86_64/kvm_pv_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/kvm_pv_test.c
> @@ -146,9 +146,6 @@ int main(void)
>
> vcpu_clear_cpuid_entry(vcpu, KVM_CPUID_FEATURES);
>
> - vm_init_descriptor_tables(vm);
> - vcpu_init_descriptor_tables(vcpu);
> -
> enter_guest(vcpu);
> kvm_vm_free(vm);
> }
> diff --git a/tools/testing/selftests/kvm/x86_64/monitor_mwait_test.c b/tools/testing/selftests/kvm/x86_64/monitor_mwait_test.c
> index 853802641e1e..9c8445379d76 100644
> --- a/tools/testing/selftests/kvm/x86_64/monitor_mwait_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/monitor_mwait_test.c
> @@ -80,9 +80,6 @@ int main(int argc, char *argv[])
> vm = vm_create_with_one_vcpu(&vcpu, guest_code);
> vcpu_clear_cpuid_feature(vcpu, X86_FEATURE_MWAIT);
>
> - vm_init_descriptor_tables(vm);
> - vcpu_init_descriptor_tables(vcpu);
> -
> while (1) {
> vcpu_run(vcpu);
> TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_IO);
> diff --git a/tools/testing/selftests/kvm/x86_64/platform_info_test.c b/tools/testing/selftests/kvm/x86_64/platform_info_test.c
> index 6300bb70f028..9cf2b9fbf459 100644
> --- a/tools/testing/selftests/kvm/x86_64/platform_info_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/platform_info_test.c
> @@ -51,9 +51,6 @@ int main(int argc, char *argv[])
>
> vm = vm_create_with_one_vcpu(&vcpu, guest_code);
>
> - vm_init_descriptor_tables(vm);
> - vcpu_init_descriptor_tables(vcpu);
> -
> msr_platform_info = vcpu_get_msr(vcpu, MSR_PLATFORM_INFO);
> vcpu_set_msr(vcpu, MSR_PLATFORM_INFO,
> msr_platform_info | MSR_PLATFORM_INFO_MAX_TURBO_RATIO);
> diff --git a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
> index 29609b52f8fa..ff6d21d148de 100644
> --- a/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/pmu_counters_test.c
> @@ -31,9 +31,6 @@ static struct kvm_vm *pmu_vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
> struct kvm_vm *vm;
>
> vm = vm_create_with_one_vcpu(vcpu, guest_code);
> - vm_init_descriptor_tables(vm);
> - vcpu_init_descriptor_tables(*vcpu);
> -
> sync_global_to_guest(vm, kvm_pmu_version);
> sync_global_to_guest(vm, is_forced_emulation_enabled);
>
> diff --git a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
> index 3c85d1ae9893..5cbe9d331acb 100644
> --- a/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/pmu_event_filter_test.c
> @@ -337,9 +337,6 @@ static void test_pmu_config_disable(void (*guest_code)(void))
> vm_enable_cap(vm, KVM_CAP_PMU_CAPABILITY, KVM_PMU_CAP_DISABLE);
>
> vcpu = vm_vcpu_add(vm, 0, guest_code);
> - vm_init_descriptor_tables(vm);
> - vcpu_init_descriptor_tables(vcpu);
> -
> TEST_ASSERT(!sanity_check_pmu(vcpu),
> "Guest should not be able to use disabled PMU.");
>
> @@ -876,9 +873,6 @@ int main(int argc, char *argv[])
>
> vm = vm_create_with_one_vcpu(&vcpu, guest_code);
>
> - vm_init_descriptor_tables(vm);
> - vcpu_init_descriptor_tables(vcpu);
> -
> TEST_REQUIRE(sanity_check_pmu(vcpu));
>
> if (use_amd_pmu())
> diff --git a/tools/testing/selftests/kvm/x86_64/smaller_maxphyaddr_emulation_test.c b/tools/testing/selftests/kvm/x86_64/smaller_maxphyaddr_emulation_test.c
> index 416207c38a17..0d682d6b76f1 100644
> --- a/tools/testing/selftests/kvm/x86_64/smaller_maxphyaddr_emulation_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/smaller_maxphyaddr_emulation_test.c
> @@ -60,9 +60,6 @@ int main(int argc, char *argv[])
> vm = vm_create_with_one_vcpu(&vcpu, guest_code);
> vcpu_args_set(vcpu, 1, kvm_is_tdp_enabled());
>
> - vm_init_descriptor_tables(vm);
> - vcpu_init_descriptor_tables(vcpu);
> -
> vcpu_set_cpuid_property(vcpu, X86_PROPERTY_MAX_PHY_ADDR, MAXPHYADDR);
>
> rc = kvm_check_cap(KVM_CAP_EXIT_ON_EMULATION_FAILURE);
> diff --git a/tools/testing/selftests/kvm/x86_64/svm_int_ctl_test.c b/tools/testing/selftests/kvm/x86_64/svm_int_ctl_test.c
> index 32bef39bec21..916e04248fbb 100644
> --- a/tools/testing/selftests/kvm/x86_64/svm_int_ctl_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/svm_int_ctl_test.c
> @@ -93,9 +93,6 @@ int main(int argc, char *argv[])
>
> vm = vm_create_with_one_vcpu(&vcpu, l1_guest_code);
>
> - vm_init_descriptor_tables(vm);
> - vcpu_init_descriptor_tables(vcpu);
> -
> vm_install_exception_handler(vm, VINTR_IRQ_NUMBER, vintr_irq_handler);
> vm_install_exception_handler(vm, INTR_IRQ_NUMBER, intr_irq_handler);
>
> diff --git a/tools/testing/selftests/kvm/x86_64/svm_nested_shutdown_test.c b/tools/testing/selftests/kvm/x86_64/svm_nested_shutdown_test.c
> index f4a1137e04ab..00135cbba35e 100644
> --- a/tools/testing/selftests/kvm/x86_64/svm_nested_shutdown_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/svm_nested_shutdown_test.c
> @@ -48,9 +48,6 @@ int main(int argc, char *argv[])
> TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_SVM));
>
> vm = vm_create_with_one_vcpu(&vcpu, l1_guest_code);
> - vm_init_descriptor_tables(vm);
> - vcpu_init_descriptor_tables(vcpu);
> -
> vcpu_alloc_svm(vm, &svm_gva);
>
> vcpu_args_set(vcpu, 2, svm_gva, vm->arch.idt);
> diff --git a/tools/testing/selftests/kvm/x86_64/svm_nested_soft_inject_test.c b/tools/testing/selftests/kvm/x86_64/svm_nested_soft_inject_test.c
> index 2478a9e50743..7b6481d6c0d3 100644
> --- a/tools/testing/selftests/kvm/x86_64/svm_nested_soft_inject_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/svm_nested_soft_inject_test.c
> @@ -152,9 +152,6 @@ static void run_test(bool is_nmi)
>
> vm = vm_create_with_one_vcpu(&vcpu, l1_guest_code);
>
> - vm_init_descriptor_tables(vm);
> - vcpu_init_descriptor_tables(vcpu);
> -
> vm_install_exception_handler(vm, NMI_VECTOR, guest_nmi_handler);
> vm_install_exception_handler(vm, BP_VECTOR, guest_bp_handler);
> vm_install_exception_handler(vm, INT_NR, guest_int_handler);
> diff --git a/tools/testing/selftests/kvm/x86_64/ucna_injection_test.c b/tools/testing/selftests/kvm/x86_64/ucna_injection_test.c
> index bc9be20f9600..6eeb5dd1e65c 100644
> --- a/tools/testing/selftests/kvm/x86_64/ucna_injection_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/ucna_injection_test.c
> @@ -284,10 +284,6 @@ int main(int argc, char *argv[])
> cmcidis_vcpu = create_vcpu_with_mce_cap(vm, 1, false, cmci_disabled_guest_code);
> cmci_vcpu = create_vcpu_with_mce_cap(vm, 2, true, cmci_enabled_guest_code);
>
> - vm_init_descriptor_tables(vm);
> - vcpu_init_descriptor_tables(ucna_vcpu);
> - vcpu_init_descriptor_tables(cmcidis_vcpu);
> - vcpu_init_descriptor_tables(cmci_vcpu);
> vm_install_exception_handler(vm, CMCI_VECTOR, guest_cmci_handler);
> vm_install_exception_handler(vm, GP_VECTOR, guest_gp_handler);
>
> diff --git a/tools/testing/selftests/kvm/x86_64/userspace_msr_exit_test.c b/tools/testing/selftests/kvm/x86_64/userspace_msr_exit_test.c
> index f4f61a2d2464..fffda40e286f 100644
> --- a/tools/testing/selftests/kvm/x86_64/userspace_msr_exit_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/userspace_msr_exit_test.c
> @@ -531,9 +531,6 @@ KVM_ONE_VCPU_TEST(user_msr, msr_filter_allow, guest_code_filter_allow)
>
> vm_ioctl(vm, KVM_X86_SET_MSR_FILTER, &filter_allow);
>
> - vm_init_descriptor_tables(vm);
> - vcpu_init_descriptor_tables(vcpu);
> -
> vm_install_exception_handler(vm, GP_VECTOR, guest_gp_handler);
>
> /* Process guest code userspace exits. */
> diff --git a/tools/testing/selftests/kvm/x86_64/vmx_exception_with_invalid_guest_state.c b/tools/testing/selftests/kvm/x86_64/vmx_exception_with_invalid_guest_state.c
> index fad3634fd9eb..3fd6eceab46f 100644
> --- a/tools/testing/selftests/kvm/x86_64/vmx_exception_with_invalid_guest_state.c
> +++ b/tools/testing/selftests/kvm/x86_64/vmx_exception_with_invalid_guest_state.c
> @@ -115,9 +115,6 @@ int main(int argc, char *argv[])
> vm = vm_create_with_one_vcpu(&vcpu, guest_code);
> get_set_sigalrm_vcpu(vcpu);
>
> - vm_init_descriptor_tables(vm);
> - vcpu_init_descriptor_tables(vcpu);
> -
> vm_install_exception_handler(vm, UD_VECTOR, guest_ud_handler);
>
> /*
> diff --git a/tools/testing/selftests/kvm/x86_64/vmx_pmu_caps_test.c b/tools/testing/selftests/kvm/x86_64/vmx_pmu_caps_test.c
> index ea0cb3cae0f7..1b6e20e3a56d 100644
> --- a/tools/testing/selftests/kvm/x86_64/vmx_pmu_caps_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/vmx_pmu_caps_test.c
> @@ -86,9 +86,6 @@ KVM_ONE_VCPU_TEST(vmx_pmu_caps, guest_wrmsr_perf_capabilities, guest_code)
> struct ucall uc;
> int r, i;
>
> - vm_init_descriptor_tables(vcpu->vm);
> - vcpu_init_descriptor_tables(vcpu);
> -
> vcpu_set_msr(vcpu, MSR_IA32_PERF_CAPABILITIES, host_cap.capabilities);
>
> vcpu_args_set(vcpu, 1, host_cap.capabilities);
> diff --git a/tools/testing/selftests/kvm/x86_64/xapic_ipi_test.c b/tools/testing/selftests/kvm/x86_64/xapic_ipi_test.c
> index 725c206ba0b9..f51084061134 100644
> --- a/tools/testing/selftests/kvm/x86_64/xapic_ipi_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/xapic_ipi_test.c
> @@ -410,8 +410,6 @@ int main(int argc, char *argv[])
>
> vm = vm_create_with_one_vcpu(&params[0].vcpu, halter_guest_code);
>
> - vm_init_descriptor_tables(vm);
> - vcpu_init_descriptor_tables(params[0].vcpu);
> vm_install_exception_handler(vm, IPI_VECTOR, guest_ipi_handler);
>
> virt_pg_map(vm, APIC_DEFAULT_GPA, APIC_DEFAULT_GPA);
> diff --git a/tools/testing/selftests/kvm/x86_64/xcr0_cpuid_test.c b/tools/testing/selftests/kvm/x86_64/xcr0_cpuid_test.c
> index 25a0b0db5c3c..95ce192d0753 100644
> --- a/tools/testing/selftests/kvm/x86_64/xcr0_cpuid_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/xcr0_cpuid_test.c
> @@ -109,9 +109,6 @@ int main(int argc, char *argv[])
> vm = vm_create_with_one_vcpu(&vcpu, guest_code);
> run = vcpu->run;
>
> - vm_init_descriptor_tables(vm);
> - vcpu_init_descriptor_tables(vcpu);
> -
> while (1) {
> vcpu_run(vcpu);
>
> diff --git a/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c b/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c
> index d2ea0435f4f7..a7236f17dfd0 100644
> --- a/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c
> @@ -553,8 +553,6 @@ int main(int argc, char *argv[])
> };
> vm_ioctl(vm, KVM_XEN_HVM_SET_ATTR, &vec);
>
> - vm_init_descriptor_tables(vm);
> - vcpu_init_descriptor_tables(vcpu);
> vm_install_exception_handler(vm, EVTCHN_VECTOR, evtchn_handler);
>
> if (do_runstate_tests) {
> --
> 2.44.0.291.gc1ea87d7ee-goog

Reviewed-by: Ackerley Tng <[email protected]>

2024-03-28 02:47:23

by Ackerley Tng

Subject: Re: [PATCH 09/18] KVM: selftests: Rename x86's vcpu_setup() to vcpu_init_sregs()

Sean Christopherson <[email protected]> writes:

> Rename vcpu_setup() to be more descriptive and precise; there is a whole
> lot of "setup" that is done for a vCPU that isn't in said helper.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <[email protected]>
> ---
> tools/testing/selftests/kvm/lib/x86_64/processor.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> index 3640d3290f0a..d6bfe96a6a77 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> @@ -555,7 +555,7 @@ void vcpu_init_descriptor_tables(struct kvm_vcpu *vcpu)
> *(vm_vaddr_t *)addr_gva2hva(vm, (vm_vaddr_t)(&exception_handlers)) = vm->handlers;
> }
>
> -static void vcpu_setup(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
> +static void vcpu_init_sregs(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
> {
> struct kvm_sregs sregs;
>
> @@ -716,7 +716,7 @@ struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id)
>
> vcpu = __vm_vcpu_add(vm, vcpu_id);
> vcpu_init_cpuid(vcpu, kvm_get_supported_cpuid());
> - vcpu_setup(vm, vcpu);
> + vcpu_init_sregs(vm, vcpu);
>
> /* Setup guest general purpose registers */
> vcpu_regs_get(vcpu, &regs);
> --
> 2.44.0.291.gc1ea87d7ee-goog

Reviewed-by: Ackerley Tng <[email protected]>

2024-03-28 02:47:51

by Ackerley Tng

Subject: Re: [PATCH 08/18] KVM: selftests: Move x86's descriptor table helpers "up" in processor.c

Sean Christopherson <[email protected]> writes:

> Move x86's various descriptor table helpers in processor.c up above
> kvm_arch_vm_post_create() and vcpu_setup() so that the helpers can be
> made static and invoked from the aforementioned functions.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <[email protected]>
> ---
> .../selftests/kvm/lib/x86_64/processor.c | 191 +++++++++---------
> 1 file changed, 95 insertions(+), 96 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> index eaeba907bb53..3640d3290f0a 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> @@ -540,6 +540,21 @@ static void kvm_setup_tss_64bit(struct kvm_vm *vm, struct kvm_segment *segp,
> kvm_seg_fill_gdt_64bit(vm, segp);
> }
>
> +void vcpu_init_descriptor_tables(struct kvm_vcpu *vcpu)
> +{
> + struct kvm_vm *vm = vcpu->vm;
> + struct kvm_sregs sregs;
> +
> + vcpu_sregs_get(vcpu, &sregs);
> + sregs.idt.base = vm->arch.idt;
> + sregs.idt.limit = NUM_INTERRUPTS * sizeof(struct idt_entry) - 1;
> + sregs.gdt.base = vm->arch.gdt;
> + sregs.gdt.limit = getpagesize() - 1;
> + kvm_seg_set_kernel_data_64bit(NULL, DEFAULT_DATA_SELECTOR, &sregs.gs);
> + vcpu_sregs_set(vcpu, &sregs);
> + *(vm_vaddr_t *)addr_gva2hva(vm, (vm_vaddr_t)(&exception_handlers)) = vm->handlers;
> +}
> +
> static void vcpu_setup(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
> {
> struct kvm_sregs sregs;
> @@ -572,6 +587,86 @@ static void vcpu_setup(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
> vcpu_sregs_set(vcpu, &sregs);
> }
>
> +static void set_idt_entry(struct kvm_vm *vm, int vector, unsigned long addr,
> + int dpl, unsigned short selector)
> +{
> + struct idt_entry *base =
> + (struct idt_entry *)addr_gva2hva(vm, vm->arch.idt);
> + struct idt_entry *e = &base[vector];
> +
> + memset(e, 0, sizeof(*e));
> + e->offset0 = addr;
> + e->selector = selector;
> + e->ist = 0;
> + e->type = 14;
> + e->dpl = dpl;
> + e->p = 1;
> + e->offset1 = addr >> 16;
> + e->offset2 = addr >> 32;
> +}
> +
> +static bool kvm_fixup_exception(struct ex_regs *regs)
> +{
> + if (regs->r9 != KVM_EXCEPTION_MAGIC || regs->rip != regs->r10)
> + return false;
> +
> + if (regs->vector == DE_VECTOR)
> + return false;
> +
> + regs->rip = regs->r11;
> + regs->r9 = regs->vector;
> + regs->r10 = regs->error_code;
> + return true;
> +}
> +
> +void route_exception(struct ex_regs *regs)
> +{
> + typedef void(*handler)(struct ex_regs *);
> + handler *handlers = (handler *)exception_handlers;
> +
> + if (handlers && handlers[regs->vector]) {
> + handlers[regs->vector](regs);
> + return;
> + }
> +
> + if (kvm_fixup_exception(regs))
> + return;
> +
> + ucall_assert(UCALL_UNHANDLED,
> + "Unhandled exception in guest", __FILE__, __LINE__,
> + "Unhandled exception '0x%lx' at guest RIP '0x%lx'",
> + regs->vector, regs->rip);
> +}
> +
> +void vm_init_descriptor_tables(struct kvm_vm *vm)
> +{
> + extern void *idt_handlers;
> + int i;
> +
> + vm->arch.idt = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
> + vm->handlers = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
> + /* Handlers have the same address in both address spaces.*/
> + for (i = 0; i < NUM_INTERRUPTS; i++)
> + set_idt_entry(vm, i, (unsigned long)(&idt_handlers)[i], 0,
> + DEFAULT_CODE_SELECTOR);
> +}
> +
> +void vm_install_exception_handler(struct kvm_vm *vm, int vector,
> + void (*handler)(struct ex_regs *))
> +{
> + vm_vaddr_t *handlers = (vm_vaddr_t *)addr_gva2hva(vm, vm->handlers);
> +
> + handlers[vector] = (vm_vaddr_t)handler;
> +}
> +
> +void assert_on_unhandled_exception(struct kvm_vcpu *vcpu)
> +{
> + struct ucall uc;
> +
> + if (get_ucall(vcpu, &uc) == UCALL_UNHANDLED)
> + REPORT_GUEST_ASSERT(uc);
> +}
> +
> void kvm_arch_vm_post_create(struct kvm_vm *vm)
> {
> vm_create_irqchip(vm);
> @@ -1087,102 +1182,6 @@ void kvm_init_vm_address_properties(struct kvm_vm *vm)
> }
> }
>
> -static void set_idt_entry(struct kvm_vm *vm, int vector, unsigned long addr,
> - int dpl, unsigned short selector)
> -{
> - struct idt_entry *base =
> - (struct idt_entry *)addr_gva2hva(vm, vm->arch.idt);
> - struct idt_entry *e = &base[vector];
> -
> - memset(e, 0, sizeof(*e));
> - e->offset0 = addr;
> - e->selector = selector;
> - e->ist = 0;
> - e->type = 14;
> - e->dpl = dpl;
> - e->p = 1;
> - e->offset1 = addr >> 16;
> - e->offset2 = addr >> 32;
> -}
> -
> -
> -static bool kvm_fixup_exception(struct ex_regs *regs)
> -{
> - if (regs->r9 != KVM_EXCEPTION_MAGIC || regs->rip != regs->r10)
> - return false;
> -
> - if (regs->vector == DE_VECTOR)
> - return false;
> -
> - regs->rip = regs->r11;
> - regs->r9 = regs->vector;
> - regs->r10 = regs->error_code;
> - return true;
> -}
> -
> -void route_exception(struct ex_regs *regs)
> -{
> - typedef void(*handler)(struct ex_regs *);
> - handler *handlers = (handler *)exception_handlers;
> -
> - if (handlers && handlers[regs->vector]) {
> - handlers[regs->vector](regs);
> - return;
> - }
> -
> - if (kvm_fixup_exception(regs))
> - return;
> -
> - ucall_assert(UCALL_UNHANDLED,
> - "Unhandled exception in guest", __FILE__, __LINE__,
> - "Unhandled exception '0x%lx' at guest RIP '0x%lx'",
> - regs->vector, regs->rip);
> -}
> -
> -void vm_init_descriptor_tables(struct kvm_vm *vm)
> -{
> - extern void *idt_handlers;
> - int i;
> -
> - vm->arch.idt = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
> - vm->handlers = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
> - /* Handlers have the same address in both address spaces.*/
> - for (i = 0; i < NUM_INTERRUPTS; i++)
> - set_idt_entry(vm, i, (unsigned long)(&idt_handlers)[i], 0,
> - DEFAULT_CODE_SELECTOR);
> -}
> -
> -void vcpu_init_descriptor_tables(struct kvm_vcpu *vcpu)
> -{
> - struct kvm_vm *vm = vcpu->vm;
> - struct kvm_sregs sregs;
> -
> - vcpu_sregs_get(vcpu, &sregs);
> - sregs.idt.base = vm->arch.idt;
> - sregs.idt.limit = NUM_INTERRUPTS * sizeof(struct idt_entry) - 1;
> - sregs.gdt.base = vm->arch.gdt;
> - sregs.gdt.limit = getpagesize() - 1;
> - kvm_seg_set_kernel_data_64bit(NULL, DEFAULT_DATA_SELECTOR, &sregs.gs);
> - vcpu_sregs_set(vcpu, &sregs);
> - *(vm_vaddr_t *)addr_gva2hva(vm, (vm_vaddr_t)(&exception_handlers)) = vm->handlers;
> -}
> -
> -void vm_install_exception_handler(struct kvm_vm *vm, int vector,
> - void (*handler)(struct ex_regs *))
> -{
> - vm_vaddr_t *handlers = (vm_vaddr_t *)addr_gva2hva(vm, vm->handlers);
> -
> - handlers[vector] = (vm_vaddr_t)handler;
> -}
> -
> -void assert_on_unhandled_exception(struct kvm_vcpu *vcpu)
> -{
> - struct ucall uc;
> -
> - if (get_ucall(vcpu, &uc) == UCALL_UNHANDLED)
> - REPORT_GUEST_ASSERT(uc);
> -}
> -
> const struct kvm_cpuid_entry2 *get_cpuid_entry(const struct kvm_cpuid2 *cpuid,
> uint32_t function, uint32_t index)
> {
> --
> 2.44.0.291.gc1ea87d7ee-goog

Reviewed-by: Ackerley Tng <[email protected]>

2024-03-28 02:48:26

by Ackerley Tng

Subject: Re: [PATCH 03/18] KVM: selftests: Move GDT, IDT, and TSS fields to x86's kvm_vm_arch

Sean Christopherson <[email protected]> writes:

> Now that kvm_vm_arch exists, move the GDT, IDT, and TSS fields to x86's
> implementation, as the structures are firmly x86-only.
>
> Signed-off-by: Sean Christopherson <[email protected]>
> ---
> .../testing/selftests/kvm/include/kvm_util.h | 3 ---
> .../kvm/include/x86_64/kvm_util_arch.h | 6 +++++
> .../selftests/kvm/lib/x86_64/processor.c | 22 +++++++++----------
> .../kvm/x86_64/svm_nested_shutdown_test.c | 2 +-
> .../kvm/x86_64/svm_nested_soft_inject_test.c | 2 +-
> 5 files changed, 19 insertions(+), 16 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
> index acdcddf78e3f..58d6a4d6ce4f 100644
> --- a/tools/testing/selftests/kvm/include/kvm_util.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util.h
> @@ -94,9 +94,6 @@ struct kvm_vm {
> bool pgd_created;
> vm_paddr_t ucall_mmio_addr;
> vm_paddr_t pgd;
> - vm_vaddr_t gdt;
> - vm_vaddr_t tss;
> - vm_vaddr_t idt;
> vm_vaddr_t handlers;
> uint32_t dirty_ring_size;
> uint64_t gpa_tag_mask;
> diff --git a/tools/testing/selftests/kvm/include/x86_64/kvm_util_arch.h b/tools/testing/selftests/kvm/include/x86_64/kvm_util_arch.h
> index 9f1725192aa2..b14ff3a88b4a 100644
> --- a/tools/testing/selftests/kvm/include/x86_64/kvm_util_arch.h
> +++ b/tools/testing/selftests/kvm/include/x86_64/kvm_util_arch.h
> @@ -5,7 +5,13 @@
> #include <stdbool.h>
> #include <stdint.h>
>
> +#include "kvm_util_types.h"
> +
> struct kvm_vm_arch {
> + vm_vaddr_t gdt;
> + vm_vaddr_t tss;
> + vm_vaddr_t idt;
> +
> uint64_t c_bit;
> uint64_t s_bit;
> int sev_fd;
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> index 74a4c736c9ae..45f965c052a1 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> @@ -417,7 +417,7 @@ static void kvm_seg_set_unusable(struct kvm_segment *segp)
>
> static void kvm_seg_fill_gdt_64bit(struct kvm_vm *vm, struct kvm_segment *segp)
> {
> - void *gdt = addr_gva2hva(vm, vm->gdt);
> + void *gdt = addr_gva2hva(vm, vm->arch.gdt);
> struct desc64 *desc = gdt + (segp->selector >> 3) * 8;
>
> desc->limit0 = segp->limit & 0xFFFF;
> @@ -518,21 +518,21 @@ vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
>
> static void kvm_setup_gdt(struct kvm_vm *vm, struct kvm_dtable *dt)
> {
> - if (!vm->gdt)
> - vm->gdt = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
> + if (!vm->arch.gdt)
> + vm->arch.gdt = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
>
> - dt->base = vm->gdt;
> + dt->base = vm->arch.gdt;
> dt->limit = getpagesize();
> }
>
> static void kvm_setup_tss_64bit(struct kvm_vm *vm, struct kvm_segment *segp,
> int selector)
> {
> - if (!vm->tss)
> - vm->tss = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
> + if (!vm->arch.tss)
> + vm->arch.tss = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
>
> memset(segp, 0, sizeof(*segp));
> - segp->base = vm->tss;
> + segp->base = vm->arch.tss;
> segp->limit = 0x67;
> segp->selector = selector;
> segp->type = 0xb;
> @@ -1091,7 +1091,7 @@ static void set_idt_entry(struct kvm_vm *vm, int vector, unsigned long addr,
> int dpl, unsigned short selector)
> {
> struct idt_entry *base =
> - (struct idt_entry *)addr_gva2hva(vm, vm->idt);
> + (struct idt_entry *)addr_gva2hva(vm, vm->arch.idt);
> struct idt_entry *e = &base[vector];
>
> memset(e, 0, sizeof(*e));
> @@ -1144,7 +1144,7 @@ void vm_init_descriptor_tables(struct kvm_vm *vm)
> extern void *idt_handlers;
> int i;
>
> - vm->idt = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
> + vm->arch.idt = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
> vm->handlers = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
> /* Handlers have the same address in both address spaces.*/
> for (i = 0; i < NUM_INTERRUPTS; i++)
> @@ -1158,9 +1158,9 @@ void vcpu_init_descriptor_tables(struct kvm_vcpu *vcpu)
> struct kvm_sregs sregs;
>
> vcpu_sregs_get(vcpu, &sregs);
> - sregs.idt.base = vm->idt;
> + sregs.idt.base = vm->arch.idt;
> sregs.idt.limit = NUM_INTERRUPTS * sizeof(struct idt_entry) - 1;
> - sregs.gdt.base = vm->gdt;
> + sregs.gdt.base = vm->arch.gdt;
> sregs.gdt.limit = getpagesize() - 1;
> kvm_seg_set_kernel_data_64bit(NULL, DEFAULT_DATA_SELECTOR, &sregs.gs);
> vcpu_sregs_set(vcpu, &sregs);
> diff --git a/tools/testing/selftests/kvm/x86_64/svm_nested_shutdown_test.c b/tools/testing/selftests/kvm/x86_64/svm_nested_shutdown_test.c
> index d6fcdcc3af31..f4a1137e04ab 100644
> --- a/tools/testing/selftests/kvm/x86_64/svm_nested_shutdown_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/svm_nested_shutdown_test.c
> @@ -53,7 +53,7 @@ int main(int argc, char *argv[])
>
> vcpu_alloc_svm(vm, &svm_gva);
>
> - vcpu_args_set(vcpu, 2, svm_gva, vm->idt);
> + vcpu_args_set(vcpu, 2, svm_gva, vm->arch.idt);
>
> vcpu_run(vcpu);
> TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_SHUTDOWN);
> diff --git a/tools/testing/selftests/kvm/x86_64/svm_nested_soft_inject_test.c b/tools/testing/selftests/kvm/x86_64/svm_nested_soft_inject_test.c
> index 0c7ce3d4e83a..2478a9e50743 100644
> --- a/tools/testing/selftests/kvm/x86_64/svm_nested_soft_inject_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/svm_nested_soft_inject_test.c
> @@ -166,7 +166,7 @@ static void run_test(bool is_nmi)
>
> idt_alt_vm = vm_vaddr_alloc_page(vm);
> idt_alt = addr_gva2hva(vm, idt_alt_vm);
> - idt = addr_gva2hva(vm, vm->idt);
> + idt = addr_gva2hva(vm, vm->arch.idt);
> memcpy(idt_alt, idt, getpagesize());
> } else {
> idt_alt_vm = 0;
> --
> 2.44.0.291.gc1ea87d7ee-goog

Reviewed-by: Ackerley Tng <[email protected]>

2024-03-28 02:48:43

by Ackerley Tng

Subject: Re: [PATCH 02/18] KVM: sefltests: Add kvm_util_types.h to hold common types, e.g. vm_vaddr_t

Sean Christopherson <[email protected]> writes:

> Move the base types unique to KVM selftests out of kvm_util.h and into a
> new header, kvm_util_types.h. This will allow kvm_util_arch.h, i.e. core
> arch headers, to reference common types, e.g. vm_vaddr_t and vm_paddr_t.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <[email protected]>
> ---
> .../testing/selftests/kvm/include/kvm_util.h | 16 +--------------
> .../selftests/kvm/include/kvm_util_types.h | 20 +++++++++++++++++++
> 2 files changed, 21 insertions(+), 15 deletions(-)
> create mode 100644 tools/testing/selftests/kvm/include/kvm_util_types.h
>
> diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
> index 95baee5142a7..acdcddf78e3f 100644
> --- a/tools/testing/selftests/kvm/include/kvm_util.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util.h
> @@ -21,28 +21,14 @@
> #include <sys/ioctl.h>
>
> #include "kvm_util_arch.h"
> +#include "kvm_util_types.h"
> #include "sparsebit.h"
>
> -/*
> - * Provide a version of static_assert() that is guaranteed to have an optional
> - * message param. If _ISOC11_SOURCE is defined, glibc (/usr/include/assert.h)
> - * #undefs and #defines static_assert() as a direct alias to _Static_assert(),
> - * i.e. effectively makes the message mandatory. Many KVM selftests #define
> - * _GNU_SOURCE for various reasons, and _GNU_SOURCE implies _ISOC11_SOURCE. As
> - * a result, static_assert() behavior is non-deterministic and may or may not
> - * require a message depending on #include order.
> - */
> -#define __kvm_static_assert(expr, msg, ...) _Static_assert(expr, msg)
> -#define kvm_static_assert(expr, ...) __kvm_static_assert(expr, ##__VA_ARGS__, #expr)
> -
> #define KVM_DEV_PATH "/dev/kvm"
> #define KVM_MAX_VCPUS 512
>
> #define NSEC_PER_SEC 1000000000L
>
> -typedef uint64_t vm_paddr_t; /* Virtual Machine (Guest) physical address */
> -typedef uint64_t vm_vaddr_t; /* Virtual Machine (Guest) virtual address */
> -
> struct userspace_mem_region {
> struct kvm_userspace_memory_region2 region;
> struct sparsebit *unused_phy_pages;
> diff --git a/tools/testing/selftests/kvm/include/kvm_util_types.h b/tools/testing/selftests/kvm/include/kvm_util_types.h
> new file mode 100644
> index 000000000000..764491366eb9
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/include/kvm_util_types.h
> @@ -0,0 +1,20 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +#ifndef SELFTEST_KVM_UTIL_TYPES_H
> +#define SELFTEST_KVM_UTIL_TYPES_H
> +
> +/*
> + * Provide a version of static_assert() that is guaranteed to have an optional
> + * message param. If _ISOC11_SOURCE is defined, glibc (/usr/include/assert.h)
> + * #undefs and #defines static_assert() as a direct alias to _Static_assert(),
> + * i.e. effectively makes the message mandatory. Many KVM selftests #define
> + * _GNU_SOURCE for various reasons, and _GNU_SOURCE implies _ISOC11_SOURCE. As
> + * a result, static_assert() behavior is non-deterministic and may or may not
> + * require a message depending on #include order.
> + */
> +#define __kvm_static_assert(expr, msg, ...) _Static_assert(expr, msg)
> +#define kvm_static_assert(expr, ...) __kvm_static_assert(expr, ##__VA_ARGS__, #expr)
> +
> +typedef uint64_t vm_paddr_t; /* Virtual Machine (Guest) physical address */
> +typedef uint64_t vm_vaddr_t; /* Virtual Machine (Guest) virtual address */
> +
> +#endif /* SELFTEST_KVM_UTIL_TYPES_H */
> --
> 2.44.0.291.gc1ea87d7ee-goog

Reviewed-by: Ackerley Tng <[email protected]>

2024-03-28 02:49:41

by Ackerley Tng

Subject: Re: [PATCH 01/18] Revert "kvm: selftests: move base kvm_util.h declarations to kvm_util_base.h"

Sean Christopherson <[email protected]> writes:

> Effectively revert the movement of code from kvm_util.h => kvm_util_base.h,
> as the TL;DR of the justification for the move was to avoid #ifdefs and/or
> circular dependencies between what ended up being ucall_common.h and what
> was (and now again is) kvm_util.h.
>
> But avoiding #ifdef and circular includes is trivial: don't do that. The
> cost of removing kvm_util_base.h is a few extra includes of ucall_common.h,
> but that cost is practically nothing. On the other hand, having a "base"
> version of a header that is really just the header itself is confusing,
> and makes it weird/hard to choose names for headers that actually are
> "base" headers, e.g. to hold core KVM selftests typedefs.
>
> For all intents and purposes, this reverts commit
> 7d9a662ed9f0403e7b94940dceb81552b8edb931.
>
> Signed-off-by: Sean Christopherson <[email protected]>
> ---
> .../selftests/kvm/aarch64/arch_timer.c | 1 +
> tools/testing/selftests/kvm/arch_timer.c | 1 +
> .../selftests/kvm/demand_paging_test.c | 1 +
> .../selftests/kvm/dirty_log_perf_test.c | 1 +
> tools/testing/selftests/kvm/dirty_log_test.c | 1 +
> .../testing/selftests/kvm/guest_memfd_test.c | 2 +-
> .../testing/selftests/kvm/guest_print_test.c | 1 +
> .../selftests/kvm/include/aarch64/processor.h | 2 +
> .../selftests/kvm/include/aarch64/ucall.h | 2 +-
> .../testing/selftests/kvm/include/kvm_util.h | 1128 +++++++++++++++-
> .../selftests/kvm/include/kvm_util_base.h | 1135 -----------------
> .../selftests/kvm/include/s390x/ucall.h | 2 +-
> .../selftests/kvm/include/x86_64/processor.h | 3 +-
> .../selftests/kvm/include/x86_64/ucall.h | 2 +-
> .../selftests/kvm/kvm_page_table_test.c | 1 +
> .../selftests/kvm/lib/aarch64/processor.c | 2 +
> tools/testing/selftests/kvm/lib/kvm_util.c | 1 +
> tools/testing/selftests/kvm/lib/memstress.c | 1 +
> .../selftests/kvm/lib/riscv/processor.c | 1 +
> .../testing/selftests/kvm/lib/ucall_common.c | 5 +-
> .../testing/selftests/kvm/riscv/arch_timer.c | 1 +
> tools/testing/selftests/kvm/rseq_test.c | 1 +
> tools/testing/selftests/kvm/s390x/cmma_test.c | 1 +
> tools/testing/selftests/kvm/s390x/memop.c | 1 +
> tools/testing/selftests/kvm/s390x/tprot.c | 1 +
> tools/testing/selftests/kvm/steal_time.c | 1 +
> .../x86_64/dirty_log_page_splitting_test.c | 1 +
> .../x86_64/exit_on_emulation_failure_test.c | 2 +-
> .../kvm/x86_64/ucna_injection_test.c | 1 -
> 29 files changed, 1156 insertions(+), 1147 deletions(-)
> delete mode 100644 tools/testing/selftests/kvm/include/kvm_util_base.h
>
> diff --git a/tools/testing/selftests/kvm/aarch64/arch_timer.c b/tools/testing/selftests/kvm/aarch64/arch_timer.c
> index ddba2c2fb5de..ee83b2413da8 100644
> --- a/tools/testing/selftests/kvm/aarch64/arch_timer.c
> +++ b/tools/testing/selftests/kvm/aarch64/arch_timer.c
> @@ -12,6 +12,7 @@
> #include "gic.h"
> #include "processor.h"
> #include "timer_test.h"
> +#include "ucall_common.h"
> #include "vgic.h"
>
> #define GICD_BASE_GPA 0x8000000ULL
> diff --git a/tools/testing/selftests/kvm/arch_timer.c b/tools/testing/selftests/kvm/arch_timer.c
> index ae1f1a6d8312..40cb6c1bec0a 100644
> --- a/tools/testing/selftests/kvm/arch_timer.c
> +++ b/tools/testing/selftests/kvm/arch_timer.c
> @@ -29,6 +29,7 @@
> #include <sys/sysinfo.h>
>
> #include "timer_test.h"
> +#include "ucall_common.h"
>
> struct test_args test_args = {
> .nr_vcpus = NR_VCPUS_DEF,
> diff --git a/tools/testing/selftests/kvm/demand_paging_test.c b/tools/testing/selftests/kvm/demand_paging_test.c
> index bf3609f71854..a36227731564 100644
> --- a/tools/testing/selftests/kvm/demand_paging_test.c
> +++ b/tools/testing/selftests/kvm/demand_paging_test.c
> @@ -22,6 +22,7 @@
> #include "test_util.h"
> #include "memstress.h"
> #include "guest_modes.h"
> +#include "ucall_common.h"
> #include "userfaultfd_util.h"
>
> #ifdef __NR_userfaultfd
> diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c
> index 504f6fe980e8..f0d3ccdb023c 100644
> --- a/tools/testing/selftests/kvm/dirty_log_perf_test.c
> +++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c
> @@ -18,6 +18,7 @@
> #include "test_util.h"
> #include "memstress.h"
> #include "guest_modes.h"
> +#include "ucall_common.h"
>
> #ifdef __aarch64__
> #include "aarch64/vgic.h"
> diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
> index eaad5b20854c..d95120f9dc40 100644
> --- a/tools/testing/selftests/kvm/dirty_log_test.c
> +++ b/tools/testing/selftests/kvm/dirty_log_test.c
> @@ -23,6 +23,7 @@
> #include "test_util.h"
> #include "guest_modes.h"
> #include "processor.h"
> +#include "ucall_common.h"
>
> #define DIRTY_MEM_BITS 30 /* 1G */
> #define PAGE_SHIFT_4K 12
> diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
> index 92eae206baa6..38320de686f6 100644
> --- a/tools/testing/selftests/kvm/guest_memfd_test.c
> +++ b/tools/testing/selftests/kvm/guest_memfd_test.c
> @@ -19,8 +19,8 @@
> #include <sys/types.h>
> #include <sys/stat.h>
>
> +#include "kvm_util.h"
> #include "test_util.h"
> -#include "kvm_util_base.h"
>
> static void test_file_read_write(int fd)
> {
> diff --git a/tools/testing/selftests/kvm/guest_print_test.c b/tools/testing/selftests/kvm/guest_print_test.c
> index 3502caa3590c..8092c2d0f5d6 100644
> --- a/tools/testing/selftests/kvm/guest_print_test.c
> +++ b/tools/testing/selftests/kvm/guest_print_test.c
> @@ -13,6 +13,7 @@
> #include "test_util.h"
> #include "kvm_util.h"
> #include "processor.h"
> +#include "ucall_common.h"
>
> struct guest_vals {
> uint64_t a;
> diff --git a/tools/testing/selftests/kvm/include/aarch64/processor.h b/tools/testing/selftests/kvm/include/aarch64/processor.h
> index 9e518b562827..1814af7d8567 100644
> --- a/tools/testing/selftests/kvm/include/aarch64/processor.h
> +++ b/tools/testing/selftests/kvm/include/aarch64/processor.h
> @@ -8,6 +8,8 @@
> #define SELFTEST_KVM_PROCESSOR_H
>
> #include "kvm_util.h"
> +#include "ucall_common.h"
> +
> #include <linux/stringify.h>
> #include <linux/types.h>
> #include <asm/sysreg.h>
> diff --git a/tools/testing/selftests/kvm/include/aarch64/ucall.h b/tools/testing/selftests/kvm/include/aarch64/ucall.h
> index 4b68f37efd36..4ec801f37f00 100644
> --- a/tools/testing/selftests/kvm/include/aarch64/ucall.h
> +++ b/tools/testing/selftests/kvm/include/aarch64/ucall.h
> @@ -2,7 +2,7 @@
> #ifndef SELFTEST_KVM_UCALL_H
> #define SELFTEST_KVM_UCALL_H
>
> -#include "kvm_util_base.h"
> +#include "kvm_util.h"
>
> #define UCALL_EXIT_REASON KVM_EXIT_MMIO
>
> diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
> index c9286811a4cb..95baee5142a7 100644
> --- a/tools/testing/selftests/kvm/include/kvm_util.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util.h
> @@ -1,13 +1,1133 @@
> /* SPDX-License-Identifier: GPL-2.0-only */
> /*
> - * tools/testing/selftests/kvm/include/kvm_util.h
> - *
> * Copyright (C) 2018, Google LLC.
> */
> #ifndef SELFTEST_KVM_UTIL_H
> #define SELFTEST_KVM_UTIL_H
>
> -#include "kvm_util_base.h"
> -#include "ucall_common.h"
> +#include "test_util.h"
> +
> +#include <linux/compiler.h>
> +#include "linux/hashtable.h"
> +#include "linux/list.h"
> +#include <linux/kernel.h>
> +#include <linux/kvm.h>
> +#include "linux/rbtree.h"
> +#include <linux/types.h>
> +
> +#include <asm/atomic.h>
> +#include <asm/kvm.h>
> +
> +#include <sys/ioctl.h>
> +
> +#include "kvm_util_arch.h"
> +#include "sparsebit.h"
> +
> +/*
> + * Provide a version of static_assert() that is guaranteed to have an optional
> + * message param. If _ISOC11_SOURCE is defined, glibc (/usr/include/assert.h)
> + * #undefs and #defines static_assert() as a direct alias to _Static_assert(),
> + * i.e. effectively makes the message mandatory. Many KVM selftests #define
> + * _GNU_SOURCE for various reasons, and _GNU_SOURCE implies _ISOC11_SOURCE. As
> + * a result, static_assert() behavior is non-deterministic and may or may not
> + * require a message depending on #include order.
> + */
> +#define __kvm_static_assert(expr, msg, ...) _Static_assert(expr, msg)
> +#define kvm_static_assert(expr, ...) __kvm_static_assert(expr, ##__VA_ARGS__, #expr)
> +
> +#define KVM_DEV_PATH "/dev/kvm"
> +#define KVM_MAX_VCPUS 512
> +
> +#define NSEC_PER_SEC 1000000000L
> +
> +typedef uint64_t vm_paddr_t; /* Virtual Machine (Guest) physical address */
> +typedef uint64_t vm_vaddr_t; /* Virtual Machine (Guest) virtual address */
> +
> +struct userspace_mem_region {
> + struct kvm_userspace_memory_region2 region;
> + struct sparsebit *unused_phy_pages;
> + struct sparsebit *protected_phy_pages;
> + int fd;
> + off_t offset;
> + enum vm_mem_backing_src_type backing_src_type;
> + void *host_mem;
> + void *host_alias;
> + void *mmap_start;
> + void *mmap_alias;
> + size_t mmap_size;
> + struct rb_node gpa_node;
> + struct rb_node hva_node;
> + struct hlist_node slot_node;
> +};
> +
> +struct kvm_vcpu {
> + struct list_head list;
> + uint32_t id;
> + int fd;
> + struct kvm_vm *vm;
> + struct kvm_run *run;
> +#ifdef __x86_64__
> + struct kvm_cpuid2 *cpuid;
> +#endif
> + struct kvm_dirty_gfn *dirty_gfns;
> + uint32_t fetch_index;
> + uint32_t dirty_gfns_count;
> +};
> +
> +struct userspace_mem_regions {
> + struct rb_root gpa_tree;
> + struct rb_root hva_tree;
> + DECLARE_HASHTABLE(slot_hash, 9);
> +};
> +
> +enum kvm_mem_region_type {
> + MEM_REGION_CODE,
> + MEM_REGION_DATA,
> + MEM_REGION_PT,
> + MEM_REGION_TEST_DATA,
> + NR_MEM_REGIONS,
> +};
> +
> +struct kvm_vm {
> + int mode;
> + unsigned long type;
> + uint8_t subtype;
> + int kvm_fd;
> + int fd;
> + unsigned int pgtable_levels;
> + unsigned int page_size;
> + unsigned int page_shift;
> + unsigned int pa_bits;
> + unsigned int va_bits;
> + uint64_t max_gfn;
> + struct list_head vcpus;
> + struct userspace_mem_regions regions;
> + struct sparsebit *vpages_valid;
> + struct sparsebit *vpages_mapped;
> + bool has_irqchip;
> + bool pgd_created;
> + vm_paddr_t ucall_mmio_addr;
> + vm_paddr_t pgd;
> + vm_vaddr_t gdt;
> + vm_vaddr_t tss;
> + vm_vaddr_t idt;
> + vm_vaddr_t handlers;
> + uint32_t dirty_ring_size;
> + uint64_t gpa_tag_mask;
> +
> + struct kvm_vm_arch arch;
> +
> + /* Cache of information for binary stats interface */
> + int stats_fd;
> + struct kvm_stats_header stats_header;
> + struct kvm_stats_desc *stats_desc;
> +
> + /*
> + * KVM region slots. These are the default memslots used by page
> + * allocators, e.g., lib/elf uses the memslots[MEM_REGION_CODE]
> + * memslot.
> + */
> + uint32_t memslots[NR_MEM_REGIONS];
> +};
> +
> +struct vcpu_reg_sublist {
> + const char *name;
> + long capability;
> + int feature;
> + int feature_type;
> + bool finalize;
> + __u64 *regs;
> + __u64 regs_n;
> + __u64 *rejects_set;
> + __u64 rejects_set_n;
> + __u64 *skips_set;
> + __u64 skips_set_n;
> +};
> +
> +struct vcpu_reg_list {
> + char *name;
> + struct vcpu_reg_sublist sublists[];
> +};
> +
> +#define for_each_sublist(c, s) \
> + for ((s) = &(c)->sublists[0]; (s)->regs; ++(s))
> +
> +#define kvm_for_each_vcpu(vm, i, vcpu) \
> + for ((i) = 0; (i) <= (vm)->last_vcpu_id; (i)++) \
> + if (!((vcpu) = vm->vcpus[i])) \
> + continue; \
> + else
> +
> +struct userspace_mem_region *
> +memslot2region(struct kvm_vm *vm, uint32_t memslot);
> +
> +static inline struct userspace_mem_region *vm_get_mem_region(struct kvm_vm *vm,
> + enum kvm_mem_region_type type)
> +{
> + assert(type < NR_MEM_REGIONS);
> + return memslot2region(vm, vm->memslots[type]);
> +}
> +
> +/* Minimum allocated guest virtual and physical addresses */
> +#define KVM_UTIL_MIN_VADDR 0x2000
> +#define KVM_GUEST_PAGE_TABLE_MIN_PADDR 0x180000
> +
> +#define DEFAULT_GUEST_STACK_VADDR_MIN 0xab6000
> +#define DEFAULT_STACK_PGS 5
> +
> +enum vm_guest_mode {
> + VM_MODE_P52V48_4K,
> + VM_MODE_P52V48_16K,
> + VM_MODE_P52V48_64K,
> + VM_MODE_P48V48_4K,
> + VM_MODE_P48V48_16K,
> + VM_MODE_P48V48_64K,
> + VM_MODE_P40V48_4K,
> + VM_MODE_P40V48_16K,
> + VM_MODE_P40V48_64K,
> + VM_MODE_PXXV48_4K, /* For 48bits VA but ANY bits PA */
> + VM_MODE_P47V64_4K,
> + VM_MODE_P44V64_4K,
> + VM_MODE_P36V48_4K,
> + VM_MODE_P36V48_16K,
> + VM_MODE_P36V48_64K,
> + VM_MODE_P36V47_16K,
> + NUM_VM_MODES,
> +};
> +
> +struct vm_shape {
> + uint32_t type;
> + uint8_t mode;
> + uint8_t subtype;
> + uint16_t padding;
> +};
> +
> +kvm_static_assert(sizeof(struct vm_shape) == sizeof(uint64_t));
> +
> +#define VM_TYPE_DEFAULT 0
> +
> +#define VM_SHAPE(__mode) \
> +({ \
> + struct vm_shape shape = { \
> + .mode = (__mode), \
> + .type = VM_TYPE_DEFAULT \
> + }; \
> + \
> + shape; \
> +})
> +
> +#if defined(__aarch64__)
> +
> +extern enum vm_guest_mode vm_mode_default;
> +
> +#define VM_MODE_DEFAULT vm_mode_default
> +#define MIN_PAGE_SHIFT 12U
> +#define ptes_per_page(page_size) ((page_size) / 8)
> +
> +#elif defined(__x86_64__)
> +
> +#define VM_MODE_DEFAULT VM_MODE_PXXV48_4K
> +#define MIN_PAGE_SHIFT 12U
> +#define ptes_per_page(page_size) ((page_size) / 8)
> +
> +#elif defined(__s390x__)
> +
> +#define VM_MODE_DEFAULT VM_MODE_P44V64_4K
> +#define MIN_PAGE_SHIFT 12U
> +#define ptes_per_page(page_size) ((page_size) / 16)
> +
> +#elif defined(__riscv)
> +
> +#if __riscv_xlen == 32
> +#error "RISC-V 32-bit kvm selftests not supported"
> +#endif
> +
> +#define VM_MODE_DEFAULT VM_MODE_P40V48_4K
> +#define MIN_PAGE_SHIFT 12U
> +#define ptes_per_page(page_size) ((page_size) / 8)
> +
> +#endif
> +
> +#define VM_SHAPE_DEFAULT VM_SHAPE(VM_MODE_DEFAULT)
> +
> +#define MIN_PAGE_SIZE (1U << MIN_PAGE_SHIFT)
> +#define PTES_PER_MIN_PAGE ptes_per_page(MIN_PAGE_SIZE)
> +
> +struct vm_guest_mode_params {
> + unsigned int pa_bits;
> + unsigned int va_bits;
> + unsigned int page_size;
> + unsigned int page_shift;
> +};
> +extern const struct vm_guest_mode_params vm_guest_mode_params[];
> +
> +int open_path_or_exit(const char *path, int flags);
> +int open_kvm_dev_path_or_exit(void);
> +
> +bool get_kvm_param_bool(const char *param);
> +bool get_kvm_intel_param_bool(const char *param);
> +bool get_kvm_amd_param_bool(const char *param);
> +
> +int get_kvm_param_integer(const char *param);
> +int get_kvm_intel_param_integer(const char *param);
> +int get_kvm_amd_param_integer(const char *param);
> +
> +unsigned int kvm_check_cap(long cap);
> +
> +static inline bool kvm_has_cap(long cap)
> +{
> + return kvm_check_cap(cap);
> +}
> +
> +#define __KVM_SYSCALL_ERROR(_name, _ret) \
> + "%s failed, rc: %i errno: %i (%s)", (_name), (_ret), errno, strerror(errno)
> +
> +/*
> + * Use the "inner", double-underscore macro when reporting errors from within
> + * other macros so that the name of ioctl() and not its literal numeric value
> + * is printed on error. The "outer" macro is strongly preferred when reporting
> + * errors "directly", i.e. without an additional layer of macros, as it reduces
> + * the probability of passing in the wrong string.
> + */
> +#define __KVM_IOCTL_ERROR(_name, _ret) __KVM_SYSCALL_ERROR(_name, _ret)
> +#define KVM_IOCTL_ERROR(_ioctl, _ret) __KVM_IOCTL_ERROR(#_ioctl, _ret)
> +
> +#define kvm_do_ioctl(fd, cmd, arg) \
> +({ \
> + kvm_static_assert(!_IOC_SIZE(cmd) || sizeof(*arg) == _IOC_SIZE(cmd)); \
> + ioctl(fd, cmd, arg); \
> +})
> +
> +#define __kvm_ioctl(kvm_fd, cmd, arg) \
> + kvm_do_ioctl(kvm_fd, cmd, arg)
> +
> +#define kvm_ioctl(kvm_fd, cmd, arg) \
> +({ \
> + int ret = __kvm_ioctl(kvm_fd, cmd, arg); \
> + \
> + TEST_ASSERT(!ret, __KVM_IOCTL_ERROR(#cmd, ret)); \
> +})
> +
> +static __always_inline void static_assert_is_vm(struct kvm_vm *vm) { }
> +
> +#define __vm_ioctl(vm, cmd, arg) \
> +({ \
> + static_assert_is_vm(vm); \
> + kvm_do_ioctl((vm)->fd, cmd, arg); \
> +})
> +
> +/*
> + * Assert that a VM or vCPU ioctl() succeeded, with extra magic to detect if
> + * the ioctl() failed because KVM killed/bugged the VM. To detect a dead VM,
> + * probe KVM_CAP_USER_MEMORY, which (a) has been supported by KVM since before
> + * selftests existed and (b) should never outright fail, i.e. is supposed to
> + * return 0 or 1. If KVM kills a VM, KVM returns -EIO for all ioctl()s for the
> + * VM and its vCPUs, including KVM_CHECK_EXTENSION.
> + */
> +#define __TEST_ASSERT_VM_VCPU_IOCTL(cond, name, ret, vm) \
> +do { \
> + int __errno = errno; \
> + \
> + static_assert_is_vm(vm); \
> + \
> + if (cond) \
> + break; \
> + \
> + if (errno == EIO && \
> + __vm_ioctl(vm, KVM_CHECK_EXTENSION, (void *)KVM_CAP_USER_MEMORY) < 0) { \
> + TEST_ASSERT(errno == EIO, "KVM killed the VM, should return -EIO"); \
> + TEST_FAIL("KVM killed/bugged the VM, check the kernel log for clues"); \
> + } \
> + errno = __errno; \
> + TEST_ASSERT(cond, __KVM_IOCTL_ERROR(name, ret)); \
> +} while (0)
> +
> +#define TEST_ASSERT_VM_VCPU_IOCTL(cond, cmd, ret, vm) \
> + __TEST_ASSERT_VM_VCPU_IOCTL(cond, #cmd, ret, vm)
> +
> +#define vm_ioctl(vm, cmd, arg) \
> +({ \
> + int ret = __vm_ioctl(vm, cmd, arg); \
> + \
> + __TEST_ASSERT_VM_VCPU_IOCTL(!ret, #cmd, ret, vm); \
> +})
> +
> +static __always_inline void static_assert_is_vcpu(struct kvm_vcpu *vcpu) { }
> +
> +#define __vcpu_ioctl(vcpu, cmd, arg) \
> +({ \
> + static_assert_is_vcpu(vcpu); \
> + kvm_do_ioctl((vcpu)->fd, cmd, arg); \
> +})
> +
> +#define vcpu_ioctl(vcpu, cmd, arg) \
> +({ \
> + int ret = __vcpu_ioctl(vcpu, cmd, arg); \
> + \
> + __TEST_ASSERT_VM_VCPU_IOCTL(!ret, #cmd, ret, (vcpu)->vm); \
> +})
> +
> +/*
> + * Looks up and returns the value corresponding to the capability
> + * (KVM_CAP_*) given by cap.
> + */
> +static inline int vm_check_cap(struct kvm_vm *vm, long cap)
> +{
> + int ret = __vm_ioctl(vm, KVM_CHECK_EXTENSION, (void *)cap);
> +
> + TEST_ASSERT_VM_VCPU_IOCTL(ret >= 0, KVM_CHECK_EXTENSION, ret, vm);
> + return ret;
> +}
> +
> +static inline int __vm_enable_cap(struct kvm_vm *vm, uint32_t cap, uint64_t arg0)
> +{
> + struct kvm_enable_cap enable_cap = { .cap = cap, .args = { arg0 } };
> +
> + return __vm_ioctl(vm, KVM_ENABLE_CAP, &enable_cap);
> +}
> +static inline void vm_enable_cap(struct kvm_vm *vm, uint32_t cap, uint64_t arg0)
> +{
> + struct kvm_enable_cap enable_cap = { .cap = cap, .args = { arg0 } };
> +
> + vm_ioctl(vm, KVM_ENABLE_CAP, &enable_cap);
> +}
> +
> +static inline void vm_set_memory_attributes(struct kvm_vm *vm, uint64_t gpa,
> + uint64_t size, uint64_t attributes)
> +{
> + struct kvm_memory_attributes attr = {
> + .attributes = attributes,
> + .address = gpa,
> + .size = size,
> + .flags = 0,
> + };
> +
> + /*
> + * KVM_SET_MEMORY_ATTRIBUTES overwrites _all_ attributes. These flows
> + * need significant enhancements to support multiple attributes.
> + */
> + TEST_ASSERT(!attributes || attributes == KVM_MEMORY_ATTRIBUTE_PRIVATE,
> + "Update me to support multiple attributes!");
> +
> + vm_ioctl(vm, KVM_SET_MEMORY_ATTRIBUTES, &attr);
> +}
> +
> +
> +static inline void vm_mem_set_private(struct kvm_vm *vm, uint64_t gpa,
> + uint64_t size)
> +{
> + vm_set_memory_attributes(vm, gpa, size, KVM_MEMORY_ATTRIBUTE_PRIVATE);
> +}
> +
> +static inline void vm_mem_set_shared(struct kvm_vm *vm, uint64_t gpa,
> + uint64_t size)
> +{
> + vm_set_memory_attributes(vm, gpa, size, 0);
> +}
> +
> +void vm_guest_mem_fallocate(struct kvm_vm *vm, uint64_t gpa, uint64_t size,
> + bool punch_hole);
> +
> +static inline void vm_guest_mem_punch_hole(struct kvm_vm *vm, uint64_t gpa,
> + uint64_t size)
> +{
> + vm_guest_mem_fallocate(vm, gpa, size, true);
> +}
> +
> +static inline void vm_guest_mem_allocate(struct kvm_vm *vm, uint64_t gpa,
> + uint64_t size)
> +{
> + vm_guest_mem_fallocate(vm, gpa, size, false);
> +}
> +
> +void vm_enable_dirty_ring(struct kvm_vm *vm, uint32_t ring_size);
> +const char *vm_guest_mode_string(uint32_t i);
> +
> +void kvm_vm_free(struct kvm_vm *vmp);
> +void kvm_vm_restart(struct kvm_vm *vmp);
> +void kvm_vm_release(struct kvm_vm *vmp);
> +int kvm_memcmp_hva_gva(void *hva, struct kvm_vm *vm, const vm_vaddr_t gva,
> + size_t len);
> +void kvm_vm_elf_load(struct kvm_vm *vm, const char *filename);
> +int kvm_memfd_alloc(size_t size, bool hugepages);
> +
> +void vm_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent);
> +
> +static inline void kvm_vm_get_dirty_log(struct kvm_vm *vm, int slot, void *log)
> +{
> + struct kvm_dirty_log args = { .dirty_bitmap = log, .slot = slot };
> +
> + vm_ioctl(vm, KVM_GET_DIRTY_LOG, &args);
> +}
> +
> +static inline void kvm_vm_clear_dirty_log(struct kvm_vm *vm, int slot, void *log,
> + uint64_t first_page, uint32_t num_pages)
> +{
> + struct kvm_clear_dirty_log args = {
> + .dirty_bitmap = log,
> + .slot = slot,
> + .first_page = first_page,
> + .num_pages = num_pages
> + };
> +
> + vm_ioctl(vm, KVM_CLEAR_DIRTY_LOG, &args);
> +}
> +
> +static inline uint32_t kvm_vm_reset_dirty_ring(struct kvm_vm *vm)
> +{
> + return __vm_ioctl(vm, KVM_RESET_DIRTY_RINGS, NULL);
> +}
> +
> +static inline int vm_get_stats_fd(struct kvm_vm *vm)
> +{
> + int fd = __vm_ioctl(vm, KVM_GET_STATS_FD, NULL);
> +
> + TEST_ASSERT_VM_VCPU_IOCTL(fd >= 0, KVM_GET_STATS_FD, fd, vm);
> + return fd;
> +}
> +
> +static inline void read_stats_header(int stats_fd, struct kvm_stats_header *header)
> +{
> + ssize_t ret;
> +
> + ret = pread(stats_fd, header, sizeof(*header), 0);
> + TEST_ASSERT(ret == sizeof(*header),
> + "Failed to read '%lu' header bytes, ret = '%ld'",
> + sizeof(*header), ret);
> +}
> +
> +struct kvm_stats_desc *read_stats_descriptors(int stats_fd,
> + struct kvm_stats_header *header);
> +
> +static inline ssize_t get_stats_descriptor_size(struct kvm_stats_header *header)
> +{
> + /*
> + * The base size of the descriptor is defined by KVM's ABI, but the
> + * size of the name field is variable, as far as KVM's ABI is
> + * concerned. For a given instance of KVM, the name field is the same
> + * size for all stats and is provided in the overall stats header.
> + */
> + return sizeof(struct kvm_stats_desc) + header->name_size;
> +}
> +
> +static inline struct kvm_stats_desc *get_stats_descriptor(struct kvm_stats_desc *stats,
> + int index,
> + struct kvm_stats_header *header)
> +{
> + /*
> + * Note, size_desc includes the size of the name field, which is
> + * variable. i.e. this is NOT equivalent to &stats_desc[i].
> + */
> + return (void *)stats + index * get_stats_descriptor_size(header);
> +}
> +
> +void read_stat_data(int stats_fd, struct kvm_stats_header *header,
> + struct kvm_stats_desc *desc, uint64_t *data,
> + size_t max_elements);
> +
> +void __vm_get_stat(struct kvm_vm *vm, const char *stat_name, uint64_t *data,
> + size_t max_elements);
> +
> +static inline uint64_t vm_get_stat(struct kvm_vm *vm, const char *stat_name)
> +{
> + uint64_t data;
> +
> + __vm_get_stat(vm, stat_name, &data, 1);
> + return data;
> +}
> +
> +void vm_create_irqchip(struct kvm_vm *vm);
> +
> +static inline int __vm_create_guest_memfd(struct kvm_vm *vm, uint64_t size,
> + uint64_t flags)
> +{
> + struct kvm_create_guest_memfd guest_memfd = {
> + .size = size,
> + .flags = flags,
> + };
> +
> + return __vm_ioctl(vm, KVM_CREATE_GUEST_MEMFD, &guest_memfd);
> +}
> +
> +static inline int vm_create_guest_memfd(struct kvm_vm *vm, uint64_t size,
> + uint64_t flags)
> +{
> + int fd = __vm_create_guest_memfd(vm, size, flags);
> +
> + TEST_ASSERT(fd >= 0, KVM_IOCTL_ERROR(KVM_CREATE_GUEST_MEMFD, fd));
> + return fd;
> +}
> +
> +void vm_set_user_memory_region(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
> + uint64_t gpa, uint64_t size, void *hva);
> +int __vm_set_user_memory_region(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
> + uint64_t gpa, uint64_t size, void *hva);
> +void vm_set_user_memory_region2(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
> + uint64_t gpa, uint64_t size, void *hva,
> + uint32_t guest_memfd, uint64_t guest_memfd_offset);
> +int __vm_set_user_memory_region2(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
> + uint64_t gpa, uint64_t size, void *hva,
> + uint32_t guest_memfd, uint64_t guest_memfd_offset);
> +
> +void vm_userspace_mem_region_add(struct kvm_vm *vm,
> + enum vm_mem_backing_src_type src_type,
> + uint64_t guest_paddr, uint32_t slot, uint64_t npages,
> + uint32_t flags);
> +void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
> + uint64_t guest_paddr, uint32_t slot, uint64_t npages,
> + uint32_t flags, int guest_memfd_fd, uint64_t guest_memfd_offset);
> +
> +#ifndef vm_arch_has_protected_memory
> +static inline bool vm_arch_has_protected_memory(struct kvm_vm *vm)
> +{
> + return false;
> +}
> +#endif
> +
> +void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags);
> +void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa);
> +void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot);
> +struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id);
> +void vm_populate_vaddr_bitmap(struct kvm_vm *vm);
> +vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
> +vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
> +vm_vaddr_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
> + enum kvm_mem_region_type type);
> +vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz,
> + vm_vaddr_t vaddr_min,
> + enum kvm_mem_region_type type);
> +vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages);
> +vm_vaddr_t __vm_vaddr_alloc_page(struct kvm_vm *vm,
> + enum kvm_mem_region_type type);
> +vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm);
> +
> +void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
> + unsigned int npages);
> +void *addr_gpa2hva(struct kvm_vm *vm, vm_paddr_t gpa);
> +void *addr_gva2hva(struct kvm_vm *vm, vm_vaddr_t gva);
> +vm_paddr_t addr_hva2gpa(struct kvm_vm *vm, void *hva);
> +void *addr_gpa2alias(struct kvm_vm *vm, vm_paddr_t gpa);
> +
> +
> +static inline vm_paddr_t vm_untag_gpa(struct kvm_vm *vm, vm_paddr_t gpa)
> +{
> + return gpa & ~vm->gpa_tag_mask;
> +}
> +
> +void vcpu_run(struct kvm_vcpu *vcpu);
> +int _vcpu_run(struct kvm_vcpu *vcpu);
> +
> +static inline int __vcpu_run(struct kvm_vcpu *vcpu)
> +{
> + return __vcpu_ioctl(vcpu, KVM_RUN, NULL);
> +}
> +
> +void vcpu_run_complete_io(struct kvm_vcpu *vcpu);
> +struct kvm_reg_list *vcpu_get_reg_list(struct kvm_vcpu *vcpu);
> +
> +static inline void vcpu_enable_cap(struct kvm_vcpu *vcpu, uint32_t cap,
> + uint64_t arg0)
> +{
> + struct kvm_enable_cap enable_cap = { .cap = cap, .args = { arg0 } };
> +
> + vcpu_ioctl(vcpu, KVM_ENABLE_CAP, &enable_cap);
> +}
> +
> +static inline void vcpu_guest_debug_set(struct kvm_vcpu *vcpu,
> + struct kvm_guest_debug *debug)
> +{
> + vcpu_ioctl(vcpu, KVM_SET_GUEST_DEBUG, debug);
> +}
> +
> +static inline void vcpu_mp_state_get(struct kvm_vcpu *vcpu,
> + struct kvm_mp_state *mp_state)
> +{
> + vcpu_ioctl(vcpu, KVM_GET_MP_STATE, mp_state);
> +}
> +static inline void vcpu_mp_state_set(struct kvm_vcpu *vcpu,
> + struct kvm_mp_state *mp_state)
> +{
> + vcpu_ioctl(vcpu, KVM_SET_MP_STATE, mp_state);
> +}
> +
> +static inline void vcpu_regs_get(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
> +{
> + vcpu_ioctl(vcpu, KVM_GET_REGS, regs);
> +}
> +
> +static inline void vcpu_regs_set(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
> +{
> + vcpu_ioctl(vcpu, KVM_SET_REGS, regs);
> +}
> +static inline void vcpu_sregs_get(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
> +{
> + vcpu_ioctl(vcpu, KVM_GET_SREGS, sregs);
> +
> +}
> +static inline void vcpu_sregs_set(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
> +{
> + vcpu_ioctl(vcpu, KVM_SET_SREGS, sregs);
> +}
> +static inline int _vcpu_sregs_set(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
> +{
> + return __vcpu_ioctl(vcpu, KVM_SET_SREGS, sregs);
> +}
> +static inline void vcpu_fpu_get(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
> +{
> + vcpu_ioctl(vcpu, KVM_GET_FPU, fpu);
> +}
> +static inline void vcpu_fpu_set(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
> +{
> + vcpu_ioctl(vcpu, KVM_SET_FPU, fpu);
> +}
> +
> +static inline int __vcpu_get_reg(struct kvm_vcpu *vcpu, uint64_t id, void *addr)
> +{
> + struct kvm_one_reg reg = { .id = id, .addr = (uint64_t)addr };
> +
> + return __vcpu_ioctl(vcpu, KVM_GET_ONE_REG, &reg);
> +}
> +static inline int __vcpu_set_reg(struct kvm_vcpu *vcpu, uint64_t id, uint64_t val)
> +{
> + struct kvm_one_reg reg = { .id = id, .addr = (uint64_t)&val };
> +
> + return __vcpu_ioctl(vcpu, KVM_SET_ONE_REG, &reg);
> +}
> +static inline void vcpu_get_reg(struct kvm_vcpu *vcpu, uint64_t id, void *addr)
> +{
> + struct kvm_one_reg reg = { .id = id, .addr = (uint64_t)addr };
> +
> + vcpu_ioctl(vcpu, KVM_GET_ONE_REG, &reg);
> +}
> +static inline void vcpu_set_reg(struct kvm_vcpu *vcpu, uint64_t id, uint64_t val)
> +{
> + struct kvm_one_reg reg = { .id = id, .addr = (uint64_t)&val };
> +
> + vcpu_ioctl(vcpu, KVM_SET_ONE_REG, &reg);
> +}
> +
> +#ifdef __KVM_HAVE_VCPU_EVENTS
> +static inline void vcpu_events_get(struct kvm_vcpu *vcpu,
> + struct kvm_vcpu_events *events)
> +{
> + vcpu_ioctl(vcpu, KVM_GET_VCPU_EVENTS, events);
> +}
> +static inline void vcpu_events_set(struct kvm_vcpu *vcpu,
> + struct kvm_vcpu_events *events)
> +{
> + vcpu_ioctl(vcpu, KVM_SET_VCPU_EVENTS, events);
> +}
> +#endif
> +#ifdef __x86_64__
> +static inline void vcpu_nested_state_get(struct kvm_vcpu *vcpu,
> + struct kvm_nested_state *state)
> +{
> + vcpu_ioctl(vcpu, KVM_GET_NESTED_STATE, state);
> +}
> +static inline int __vcpu_nested_state_set(struct kvm_vcpu *vcpu,
> + struct kvm_nested_state *state)
> +{
> + return __vcpu_ioctl(vcpu, KVM_SET_NESTED_STATE, state);
> +}
> +
> +static inline void vcpu_nested_state_set(struct kvm_vcpu *vcpu,
> + struct kvm_nested_state *state)
> +{
> + vcpu_ioctl(vcpu, KVM_SET_NESTED_STATE, state);
> +}
> +#endif
> +static inline int vcpu_get_stats_fd(struct kvm_vcpu *vcpu)
> +{
> + int fd = __vcpu_ioctl(vcpu, KVM_GET_STATS_FD, NULL);
> +
> + TEST_ASSERT_VM_VCPU_IOCTL(fd >= 0, KVM_GET_STATS_FD, fd, vcpu->vm);
> + return fd;
> +}
> +
> +int __kvm_has_device_attr(int dev_fd, uint32_t group, uint64_t attr);
> +
> +static inline void kvm_has_device_attr(int dev_fd, uint32_t group, uint64_t attr)
> +{
> + int ret = __kvm_has_device_attr(dev_fd, group, attr);
> +
> + TEST_ASSERT(!ret, "KVM_HAS_DEVICE_ATTR failed, rc: %i errno: %i", ret, errno);
> +}
> +
> +int __kvm_device_attr_get(int dev_fd, uint32_t group, uint64_t attr, void *val);
> +
> +static inline void kvm_device_attr_get(int dev_fd, uint32_t group,
> + uint64_t attr, void *val)
> +{
> + int ret = __kvm_device_attr_get(dev_fd, group, attr, val);
> +
> + TEST_ASSERT(!ret, KVM_IOCTL_ERROR(KVM_GET_DEVICE_ATTR, ret));
> +}
> +
> +int __kvm_device_attr_set(int dev_fd, uint32_t group, uint64_t attr, void *val);
> +
> +static inline void kvm_device_attr_set(int dev_fd, uint32_t group,
> + uint64_t attr, void *val)
> +{
> + int ret = __kvm_device_attr_set(dev_fd, group, attr, val);
> +
> + TEST_ASSERT(!ret, KVM_IOCTL_ERROR(KVM_SET_DEVICE_ATTR, ret));
> +}
> +
> +static inline int __vcpu_has_device_attr(struct kvm_vcpu *vcpu, uint32_t group,
> + uint64_t attr)
> +{
> + return __kvm_has_device_attr(vcpu->fd, group, attr);
> +}
> +
> +static inline void vcpu_has_device_attr(struct kvm_vcpu *vcpu, uint32_t group,
> + uint64_t attr)
> +{
> + kvm_has_device_attr(vcpu->fd, group, attr);
> +}
> +
> +static inline int __vcpu_device_attr_get(struct kvm_vcpu *vcpu, uint32_t group,
> + uint64_t attr, void *val)
> +{
> + return __kvm_device_attr_get(vcpu->fd, group, attr, val);
> +}
> +
> +static inline void vcpu_device_attr_get(struct kvm_vcpu *vcpu, uint32_t group,
> + uint64_t attr, void *val)
> +{
> + kvm_device_attr_get(vcpu->fd, group, attr, val);
> +}
> +
> +static inline int __vcpu_device_attr_set(struct kvm_vcpu *vcpu, uint32_t group,
> + uint64_t attr, void *val)
> +{
> + return __kvm_device_attr_set(vcpu->fd, group, attr, val);
> +}
> +
> +static inline void vcpu_device_attr_set(struct kvm_vcpu *vcpu, uint32_t group,
> + uint64_t attr, void *val)
> +{
> + kvm_device_attr_set(vcpu->fd, group, attr, val);
> +}
> +
> +int __kvm_test_create_device(struct kvm_vm *vm, uint64_t type);
> +int __kvm_create_device(struct kvm_vm *vm, uint64_t type);
> +
> +static inline int kvm_create_device(struct kvm_vm *vm, uint64_t type)
> +{
> + int fd = __kvm_create_device(vm, type);
> +
> + TEST_ASSERT(fd >= 0, KVM_IOCTL_ERROR(KVM_CREATE_DEVICE, fd));
> + return fd;
> +}
> +
> +void *vcpu_map_dirty_ring(struct kvm_vcpu *vcpu);
> +
> +/*
> + * VM VCPU Args Set
> + *
> + * Input Args:
> + * vm - Virtual Machine
> + * num - number of arguments
> + * ... - arguments, each of type uint64_t
> + *
> + * Output Args: None
> + *
> + * Return: None
> + *
> + * Sets the first @num input parameters for the function at @vcpu's entry point,
> + * per the C calling convention of the architecture, to the values given as
> + * variable args. Each of the variable args is expected to be of type uint64_t.
> + * The maximum @num can be is specific to the architecture.
> + */
> +void vcpu_args_set(struct kvm_vcpu *vcpu, unsigned int num, ...);
> +
> +void kvm_irq_line(struct kvm_vm *vm, uint32_t irq, int level);
> +int _kvm_irq_line(struct kvm_vm *vm, uint32_t irq, int level);
> +
> +#define KVM_MAX_IRQ_ROUTES 4096
> +
> +struct kvm_irq_routing *kvm_gsi_routing_create(void);
> +void kvm_gsi_routing_irqchip_add(struct kvm_irq_routing *routing,
> + uint32_t gsi, uint32_t pin);
> +int _kvm_gsi_routing_write(struct kvm_vm *vm, struct kvm_irq_routing *routing);
> +void kvm_gsi_routing_write(struct kvm_vm *vm, struct kvm_irq_routing *routing);
> +
> +const char *exit_reason_str(unsigned int exit_reason);
> +
> +vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
> + uint32_t memslot);
> +vm_paddr_t __vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
> + vm_paddr_t paddr_min, uint32_t memslot,
> + bool protected);
> +vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm);
> +
> +static inline vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
> + vm_paddr_t paddr_min, uint32_t memslot)
> +{
> + /*
> + * By default, allocate memory as protected for VMs that support
> + * protected memory, as the majority of memory for such VMs is
> + * protected, i.e. using shared memory is effectively opt-in.
> + */
> + return __vm_phy_pages_alloc(vm, num, paddr_min, memslot,
> + vm_arch_has_protected_memory(vm));
> +}
> +
> +/*
> + * ____vm_create() does KVM_CREATE_VM and little else. __vm_create() also
> + * loads the test binary into guest memory and creates an IRQ chip (x86 only).
> + * __vm_create() does NOT create vCPUs, @nr_runnable_vcpus is used purely to
> + * calculate the amount of memory needed for per-vCPU data, e.g. stacks.
> + */
> +struct kvm_vm *____vm_create(struct vm_shape shape);
> +struct kvm_vm *__vm_create(struct vm_shape shape, uint32_t nr_runnable_vcpus,
> + uint64_t nr_extra_pages);
> +
> +static inline struct kvm_vm *vm_create_barebones(void)
> +{
> + return ____vm_create(VM_SHAPE_DEFAULT);
> +}
> +
> +#ifdef __x86_64__
> +static inline struct kvm_vm *vm_create_barebones_protected_vm(void)
> +{
> + const struct vm_shape shape = {
> + .mode = VM_MODE_DEFAULT,
> + .type = KVM_X86_SW_PROTECTED_VM,
> + };
> +
> + return ____vm_create(shape);
> +}
> +#endif
> +
> +static inline struct kvm_vm *vm_create(uint32_t nr_runnable_vcpus)
> +{
> + return __vm_create(VM_SHAPE_DEFAULT, nr_runnable_vcpus, 0);
> +}
> +
> +struct kvm_vm *__vm_create_with_vcpus(struct vm_shape shape, uint32_t nr_vcpus,
> + uint64_t extra_mem_pages,
> + void *guest_code, struct kvm_vcpu *vcpus[]);
> +
> +static inline struct kvm_vm *vm_create_with_vcpus(uint32_t nr_vcpus,
> + void *guest_code,
> + struct kvm_vcpu *vcpus[])
> +{
> + return __vm_create_with_vcpus(VM_SHAPE_DEFAULT, nr_vcpus, 0,
> + guest_code, vcpus);
> +}
> +
> +struct kvm_vm *__vm_create_shape_with_one_vcpu(struct vm_shape shape,
> + struct kvm_vcpu **vcpu,
> + uint64_t extra_mem_pages,
> + void *guest_code);
> +
> +/*
> + * Create a VM with a single vCPU with reasonable defaults and @extra_mem_pages
> + * additional pages of guest memory. Returns the VM and vCPU (via out param).
> + */
> +static inline struct kvm_vm *__vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
> + uint64_t extra_mem_pages,
> + void *guest_code)
> +{
> + return __vm_create_shape_with_one_vcpu(VM_SHAPE_DEFAULT, vcpu,
> + extra_mem_pages, guest_code);
> +}
> +
> +static inline struct kvm_vm *vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
> + void *guest_code)
> +{
> + return __vm_create_with_one_vcpu(vcpu, 0, guest_code);
> +}
> +
> +static inline struct kvm_vm *vm_create_shape_with_one_vcpu(struct vm_shape shape,
> + struct kvm_vcpu **vcpu,
> + void *guest_code)
> +{
> + return __vm_create_shape_with_one_vcpu(shape, vcpu, 0, guest_code);
> +}
> +
> +struct kvm_vcpu *vm_recreate_with_one_vcpu(struct kvm_vm *vm);
> +
> +void kvm_pin_this_task_to_pcpu(uint32_t pcpu);
> +void kvm_print_vcpu_pinning_help(void);
> +void kvm_parse_vcpu_pinning(const char *pcpus_string, uint32_t vcpu_to_pcpu[],
> + int nr_vcpus);
> +
> +unsigned long vm_compute_max_gfn(struct kvm_vm *vm);
> +unsigned int vm_calc_num_guest_pages(enum vm_guest_mode mode, size_t size);
> +unsigned int vm_num_host_pages(enum vm_guest_mode mode, unsigned int num_guest_pages);
> +unsigned int vm_num_guest_pages(enum vm_guest_mode mode, unsigned int num_host_pages);
> +static inline unsigned int
> +vm_adjust_num_guest_pages(enum vm_guest_mode mode, unsigned int num_guest_pages)
> +{
> + unsigned int n;
> + n = vm_num_guest_pages(mode, vm_num_host_pages(mode, num_guest_pages));
> +#ifdef __s390x__
> + /* s390 requires 1M aligned guest sizes */
> + n = (n + 255) & ~255;
> +#endif
> + return n;
> +}
> +
> +#define sync_global_to_guest(vm, g) ({ \
> + typeof(g) *_p = addr_gva2hva(vm, (vm_vaddr_t)&(g)); \
> + memcpy(_p, &(g), sizeof(g)); \
> +})
> +
> +#define sync_global_from_guest(vm, g) ({ \
> + typeof(g) *_p = addr_gva2hva(vm, (vm_vaddr_t)&(g)); \
> + memcpy(&(g), _p, sizeof(g)); \
> +})
> +
> +/*
> + * Write a global value, but only in the VM's (guest's) domain. Primarily used
> + * for "globals" that hold per-VM values (VMs always duplicate code and global
> + * data into their own region of physical memory), but can be used anytime it's
> + * undesirable to change the host's copy of the global.
> + */
> +#define write_guest_global(vm, g, val) ({ \
> + typeof(g) *_p = addr_gva2hva(vm, (vm_vaddr_t)&(g)); \
> + typeof(g) _val = val; \
> + \
> + memcpy(_p, &(_val), sizeof(g)); \
> +})
> +
> +void assert_on_unhandled_exception(struct kvm_vcpu *vcpu);
> +
> +void vcpu_arch_dump(FILE *stream, struct kvm_vcpu *vcpu,
> + uint8_t indent);
> +
> +static inline void vcpu_dump(FILE *stream, struct kvm_vcpu *vcpu,
> + uint8_t indent)
> +{
> + vcpu_arch_dump(stream, vcpu, indent);
> +}
> +
> +/*
> + * Adds a vCPU with reasonable defaults (e.g. a stack)
> + *
> + * Input Args:
> + * vm - Virtual Machine
> + * vcpu_id - The id of the VCPU to add to the VM.
> + */
> +struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id);
> +void vcpu_arch_set_entry_point(struct kvm_vcpu *vcpu, void *guest_code);
> +
> +static inline struct kvm_vcpu *vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id,
> + void *guest_code)
> +{
> + struct kvm_vcpu *vcpu = vm_arch_vcpu_add(vm, vcpu_id);
> +
> + vcpu_arch_set_entry_point(vcpu, guest_code);
> +
> + return vcpu;
> +}
> +
> +/* Re-create a vCPU after restarting a VM, e.g. for state save/restore tests. */
> +struct kvm_vcpu *vm_arch_vcpu_recreate(struct kvm_vm *vm, uint32_t vcpu_id);
> +
> +static inline struct kvm_vcpu *vm_vcpu_recreate(struct kvm_vm *vm,
> + uint32_t vcpu_id)
> +{
> + return vm_arch_vcpu_recreate(vm, vcpu_id);
> +}
> +
> +void vcpu_arch_free(struct kvm_vcpu *vcpu);
> +
> +void virt_arch_pgd_alloc(struct kvm_vm *vm);
> +
> +static inline void virt_pgd_alloc(struct kvm_vm *vm)
> +{
> + virt_arch_pgd_alloc(vm);
> +}
> +
> +/*
> + * VM Virtual Page Map
> + *
> + * Input Args:
> + * vm - Virtual Machine
> + * vaddr - VM Virtual Address
> + * paddr - VM Physical Address
> + * memslot - Memory region slot for new virtual translation tables
> + *
> + * Output Args: None
> + *
> + * Return: None
> + *
> + * Within @vm, creates a virtual translation for the page starting
> + * at @vaddr to the page starting at @paddr.
> + */
> +void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr);
> +
> +static inline void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
> +{
> + virt_arch_pg_map(vm, vaddr, paddr);
> +}
> +
> +/*
> + * Address Guest Virtual to Guest Physical
> + *
> + * Input Args:
> + * vm - Virtual Machine
> + * gva - VM virtual address
> + *
> + * Output Args: None
> + *
> + * Return:
> + * Equivalent VM physical address
> + *
> + * Returns the VM physical address of the translated VM virtual
> + * address given by @gva.
> + */
> +vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva);
> +
> +static inline vm_paddr_t addr_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
> +{
> + return addr_arch_gva2gpa(vm, gva);
> +}
> +
> +/*
> + * Virtual Translation Tables Dump
> + *
> + * Input Args:
> + * stream - Output FILE stream
> + * vm - Virtual Machine
> + * indent - Left margin indent amount
> + *
> + * Output Args: None
> + *
> + * Return: None
> + *
> + * Dumps to the FILE stream given by @stream, the contents of all the
> + * virtual translation tables for the VM given by @vm.
> + */
> +void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent);
> +
> +static inline void virt_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
> +{
> + virt_arch_dump(stream, vm, indent);
> +}
> +
> +static inline int __vm_disable_nx_huge_pages(struct kvm_vm *vm)
> +{
> + return __vm_enable_cap(vm, KVM_CAP_VM_DISABLE_NX_HUGE_PAGES, 0);
> +}
> +
> +/*
> + * Arch hook that is invoked via a constructor, i.e. before executing main(),
> + * to allow for arch-specific setup that is common to all tests, e.g. computing
> + * the default guest "mode".
> + */
> +void kvm_selftest_arch_init(void);
> +
> +void kvm_arch_vm_post_create(struct kvm_vm *vm);
> +
> +bool vm_is_gpa_protected(struct kvm_vm *vm, vm_paddr_t paddr);
> +
> +uint32_t guest_get_vcpuid(void);
>
> #endif /* SELFTEST_KVM_UTIL_H */
> diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
> deleted file mode 100644
> index 3e0db283a46a..000000000000
> --- a/tools/testing/selftests/kvm/include/kvm_util_base.h
> +++ /dev/null
> @@ -1,1135 +0,0 @@
> -/* SPDX-License-Identifier: GPL-2.0-only */
> -/*
> - * tools/testing/selftests/kvm/include/kvm_util_base.h
> - *
> - * Copyright (C) 2018, Google LLC.
> - */
> -#ifndef SELFTEST_KVM_UTIL_BASE_H
> -#define SELFTEST_KVM_UTIL_BASE_H
> -
> -#include "test_util.h"
> -
> -#include <linux/compiler.h>
> -#include "linux/hashtable.h"
> -#include "linux/list.h"
> -#include <linux/kernel.h>
> -#include <linux/kvm.h>
> -#include "linux/rbtree.h"
> -#include <linux/types.h>
> -
> -#include <asm/atomic.h>
> -#include <asm/kvm.h>
> -
> -#include <sys/ioctl.h>
> -
> -#include "kvm_util_arch.h"
> -#include "sparsebit.h"
> -
> -/*
> - * Provide a version of static_assert() that is guaranteed to have an optional
> - * message param. If _ISOC11_SOURCE is defined, glibc (/usr/include/assert.h)
> - * #undefs and #defines static_assert() as a direct alias to _Static_assert(),
> - * i.e. effectively makes the message mandatory. Many KVM selftests #define
> - * _GNU_SOURCE for various reasons, and _GNU_SOURCE implies _ISOC11_SOURCE. As
> - * a result, static_assert() behavior is non-deterministic and may or may not
> - * require a message depending on #include order.
> - */
> -#define __kvm_static_assert(expr, msg, ...) _Static_assert(expr, msg)
> -#define kvm_static_assert(expr, ...) __kvm_static_assert(expr, ##__VA_ARGS__, #expr)
> -
> -#define KVM_DEV_PATH "/dev/kvm"
> -#define KVM_MAX_VCPUS 512
> -
> -#define NSEC_PER_SEC 1000000000L
> -
> -typedef uint64_t vm_paddr_t; /* Virtual Machine (Guest) physical address */
> -typedef uint64_t vm_vaddr_t; /* Virtual Machine (Guest) virtual address */
> -
> -struct userspace_mem_region {
> - struct kvm_userspace_memory_region2 region;
> - struct sparsebit *unused_phy_pages;
> - struct sparsebit *protected_phy_pages;
> - int fd;
> - off_t offset;
> - enum vm_mem_backing_src_type backing_src_type;
> - void *host_mem;
> - void *host_alias;
> - void *mmap_start;
> - void *mmap_alias;
> - size_t mmap_size;
> - struct rb_node gpa_node;
> - struct rb_node hva_node;
> - struct hlist_node slot_node;
> -};
> -
> -struct kvm_vcpu {
> - struct list_head list;
> - uint32_t id;
> - int fd;
> - struct kvm_vm *vm;
> - struct kvm_run *run;
> -#ifdef __x86_64__
> - struct kvm_cpuid2 *cpuid;
> -#endif
> - struct kvm_dirty_gfn *dirty_gfns;
> - uint32_t fetch_index;
> - uint32_t dirty_gfns_count;
> -};
> -
> -struct userspace_mem_regions {
> - struct rb_root gpa_tree;
> - struct rb_root hva_tree;
> - DECLARE_HASHTABLE(slot_hash, 9);
> -};
> -
> -enum kvm_mem_region_type {
> - MEM_REGION_CODE,
> - MEM_REGION_DATA,
> - MEM_REGION_PT,
> - MEM_REGION_TEST_DATA,
> - NR_MEM_REGIONS,
> -};
> -
> -struct kvm_vm {
> - int mode;
> - unsigned long type;
> - uint8_t subtype;
> - int kvm_fd;
> - int fd;
> - unsigned int pgtable_levels;
> - unsigned int page_size;
> - unsigned int page_shift;
> - unsigned int pa_bits;
> - unsigned int va_bits;
> - uint64_t max_gfn;
> - struct list_head vcpus;
> - struct userspace_mem_regions regions;
> - struct sparsebit *vpages_valid;
> - struct sparsebit *vpages_mapped;
> - bool has_irqchip;
> - bool pgd_created;
> - vm_paddr_t ucall_mmio_addr;
> - vm_paddr_t pgd;
> - vm_vaddr_t gdt;
> - vm_vaddr_t tss;
> - vm_vaddr_t idt;
> - vm_vaddr_t handlers;
> - uint32_t dirty_ring_size;
> - uint64_t gpa_tag_mask;
> -
> - struct kvm_vm_arch arch;
> -
> - /* Cache of information for binary stats interface */
> - int stats_fd;
> - struct kvm_stats_header stats_header;
> - struct kvm_stats_desc *stats_desc;
> -
> - /*
> - * KVM region slots. These are the default memslots used by page
> - * allocators, e.g., lib/elf uses the memslots[MEM_REGION_CODE]
> - * memslot.
> - */
> - uint32_t memslots[NR_MEM_REGIONS];
> -};
> -
> -struct vcpu_reg_sublist {
> - const char *name;
> - long capability;
> - int feature;
> - int feature_type;
> - bool finalize;
> - __u64 *regs;
> - __u64 regs_n;
> - __u64 *rejects_set;
> - __u64 rejects_set_n;
> - __u64 *skips_set;
> - __u64 skips_set_n;
> -};
> -
> -struct vcpu_reg_list {
> - char *name;
> - struct vcpu_reg_sublist sublists[];
> -};
> -
> -#define for_each_sublist(c, s) \
> - for ((s) = &(c)->sublists[0]; (s)->regs; ++(s))
> -
> -#define kvm_for_each_vcpu(vm, i, vcpu) \
> - for ((i) = 0; (i) <= (vm)->last_vcpu_id; (i)++) \
> - if (!((vcpu) = vm->vcpus[i])) \
> - continue; \
> - else
> -
> -struct userspace_mem_region *
> -memslot2region(struct kvm_vm *vm, uint32_t memslot);
> -
> -static inline struct userspace_mem_region *vm_get_mem_region(struct kvm_vm *vm,
> - enum kvm_mem_region_type type)
> -{
> - assert(type < NR_MEM_REGIONS);
> - return memslot2region(vm, vm->memslots[type]);
> -}
> -
> -/* Minimum allocated guest virtual and physical addresses */
> -#define KVM_UTIL_MIN_VADDR 0x2000
> -#define KVM_GUEST_PAGE_TABLE_MIN_PADDR 0x180000
> -
> -#define DEFAULT_GUEST_STACK_VADDR_MIN 0xab6000
> -#define DEFAULT_STACK_PGS 5
> -
> -enum vm_guest_mode {
> - VM_MODE_P52V48_4K,
> - VM_MODE_P52V48_16K,
> - VM_MODE_P52V48_64K,
> - VM_MODE_P48V48_4K,
> - VM_MODE_P48V48_16K,
> - VM_MODE_P48V48_64K,
> - VM_MODE_P40V48_4K,
> - VM_MODE_P40V48_16K,
> - VM_MODE_P40V48_64K,
> - VM_MODE_PXXV48_4K, /* For 48bits VA but ANY bits PA */
> - VM_MODE_P47V64_4K,
> - VM_MODE_P44V64_4K,
> - VM_MODE_P36V48_4K,
> - VM_MODE_P36V48_16K,
> - VM_MODE_P36V48_64K,
> - VM_MODE_P36V47_16K,
> - NUM_VM_MODES,
> -};
> -
> -struct vm_shape {
> - uint32_t type;
> - uint8_t mode;
> - uint8_t subtype;
> - uint16_t padding;
> -};
> -
> -kvm_static_assert(sizeof(struct vm_shape) == sizeof(uint64_t));
> -
> -#define VM_TYPE_DEFAULT 0
> -
> -#define VM_SHAPE(__mode) \
> -({ \
> - struct vm_shape shape = { \
> - .mode = (__mode), \
> - .type = VM_TYPE_DEFAULT \
> - }; \
> - \
> - shape; \
> -})
> -
> -#if defined(__aarch64__)
> -
> -extern enum vm_guest_mode vm_mode_default;
> -
> -#define VM_MODE_DEFAULT vm_mode_default
> -#define MIN_PAGE_SHIFT 12U
> -#define ptes_per_page(page_size) ((page_size) / 8)
> -
> -#elif defined(__x86_64__)
> -
> -#define VM_MODE_DEFAULT VM_MODE_PXXV48_4K
> -#define MIN_PAGE_SHIFT 12U
> -#define ptes_per_page(page_size) ((page_size) / 8)
> -
> -#elif defined(__s390x__)
> -
> -#define VM_MODE_DEFAULT VM_MODE_P44V64_4K
> -#define MIN_PAGE_SHIFT 12U
> -#define ptes_per_page(page_size) ((page_size) / 16)
> -
> -#elif defined(__riscv)
> -
> -#if __riscv_xlen == 32
> -#error "RISC-V 32-bit kvm selftests not supported"
> -#endif
> -
> -#define VM_MODE_DEFAULT VM_MODE_P40V48_4K
> -#define MIN_PAGE_SHIFT 12U
> -#define ptes_per_page(page_size) ((page_size) / 8)
> -
> -#endif
> -
> -#define VM_SHAPE_DEFAULT VM_SHAPE(VM_MODE_DEFAULT)
> -
> -#define MIN_PAGE_SIZE (1U << MIN_PAGE_SHIFT)
> -#define PTES_PER_MIN_PAGE ptes_per_page(MIN_PAGE_SIZE)
> -
> -struct vm_guest_mode_params {
> - unsigned int pa_bits;
> - unsigned int va_bits;
> - unsigned int page_size;
> - unsigned int page_shift;
> -};
> -extern const struct vm_guest_mode_params vm_guest_mode_params[];
> -
> -int open_path_or_exit(const char *path, int flags);
> -int open_kvm_dev_path_or_exit(void);
> -
> -bool get_kvm_param_bool(const char *param);
> -bool get_kvm_intel_param_bool(const char *param);
> -bool get_kvm_amd_param_bool(const char *param);
> -
> -int get_kvm_param_integer(const char *param);
> -int get_kvm_intel_param_integer(const char *param);
> -int get_kvm_amd_param_integer(const char *param);
> -
> -unsigned int kvm_check_cap(long cap);
> -
> -static inline bool kvm_has_cap(long cap)
> -{
> - return kvm_check_cap(cap);
> -}
> -
> -#define __KVM_SYSCALL_ERROR(_name, _ret) \
> - "%s failed, rc: %i errno: %i (%s)", (_name), (_ret), errno, strerror(errno)
> -
> -/*
> - * Use the "inner", double-underscore macro when reporting errors from within
> - * other macros so that the name of ioctl() and not its literal numeric value
> - * is printed on error. The "outer" macro is strongly preferred when reporting
> - * errors "directly", i.e. without an additional layer of macros, as it reduces
> - * the probability of passing in the wrong string.
> - */
> -#define __KVM_IOCTL_ERROR(_name, _ret) __KVM_SYSCALL_ERROR(_name, _ret)
> -#define KVM_IOCTL_ERROR(_ioctl, _ret) __KVM_IOCTL_ERROR(#_ioctl, _ret)
> -
> -#define kvm_do_ioctl(fd, cmd, arg) \
> -({ \
> - kvm_static_assert(!_IOC_SIZE(cmd) || sizeof(*arg) == _IOC_SIZE(cmd)); \
> - ioctl(fd, cmd, arg); \
> -})
> -
> -#define __kvm_ioctl(kvm_fd, cmd, arg) \
> - kvm_do_ioctl(kvm_fd, cmd, arg)
> -
> -#define kvm_ioctl(kvm_fd, cmd, arg) \
> -({ \
> - int ret = __kvm_ioctl(kvm_fd, cmd, arg); \
> - \
> - TEST_ASSERT(!ret, __KVM_IOCTL_ERROR(#cmd, ret)); \
> -})
> -
> -static __always_inline void static_assert_is_vm(struct kvm_vm *vm) { }
> -
> -#define __vm_ioctl(vm, cmd, arg) \
> -({ \
> - static_assert_is_vm(vm); \
> - kvm_do_ioctl((vm)->fd, cmd, arg); \
> -})
> -
> -/*
> - * Assert that a VM or vCPU ioctl() succeeded, with extra magic to detect if
> - * the ioctl() failed because KVM killed/bugged the VM. To detect a dead VM,
> - * probe KVM_CAP_USER_MEMORY, which (a) has been supported by KVM since before
> - * selftests existed and (b) should never outright fail, i.e. is supposed to
> - * return 0 or 1. If KVM kills a VM, KVM returns -EIO for all ioctl()s for the
> - * VM and its vCPUs, including KVM_CHECK_EXTENSION.
> - */
> -#define __TEST_ASSERT_VM_VCPU_IOCTL(cond, name, ret, vm) \
> -do { \
> - int __errno = errno; \
> - \
> - static_assert_is_vm(vm); \
> - \
> - if (cond) \
> - break; \
> - \
> - if (errno == EIO && \
> - __vm_ioctl(vm, KVM_CHECK_EXTENSION, (void *)KVM_CAP_USER_MEMORY) < 0) { \
> - TEST_ASSERT(errno == EIO, "KVM killed the VM, should return -EIO"); \
> - TEST_FAIL("KVM killed/bugged the VM, check the kernel log for clues"); \
> - } \
> - errno = __errno; \
> - TEST_ASSERT(cond, __KVM_IOCTL_ERROR(name, ret)); \
> -} while (0)
> -
> -#define TEST_ASSERT_VM_VCPU_IOCTL(cond, cmd, ret, vm) \
> - __TEST_ASSERT_VM_VCPU_IOCTL(cond, #cmd, ret, vm)
> -
> -#define vm_ioctl(vm, cmd, arg) \
> -({ \
> - int ret = __vm_ioctl(vm, cmd, arg); \
> - \
> - __TEST_ASSERT_VM_VCPU_IOCTL(!ret, #cmd, ret, vm); \
> -})
> -
> -static __always_inline void static_assert_is_vcpu(struct kvm_vcpu *vcpu) { }
> -
> -#define __vcpu_ioctl(vcpu, cmd, arg) \
> -({ \
> - static_assert_is_vcpu(vcpu); \
> - kvm_do_ioctl((vcpu)->fd, cmd, arg); \
> -})
> -
> -#define vcpu_ioctl(vcpu, cmd, arg) \
> -({ \
> - int ret = __vcpu_ioctl(vcpu, cmd, arg); \
> - \
> - __TEST_ASSERT_VM_VCPU_IOCTL(!ret, #cmd, ret, (vcpu)->vm); \
> -})
> -
> -/*
> - * Looks up and returns the value corresponding to the capability
> - * (KVM_CAP_*) given by cap.
> - */
> -static inline int vm_check_cap(struct kvm_vm *vm, long cap)
> -{
> - int ret = __vm_ioctl(vm, KVM_CHECK_EXTENSION, (void *)cap);
> -
> - TEST_ASSERT_VM_VCPU_IOCTL(ret >= 0, KVM_CHECK_EXTENSION, ret, vm);
> - return ret;
> -}
> -
> -static inline int __vm_enable_cap(struct kvm_vm *vm, uint32_t cap, uint64_t arg0)
> -{
> - struct kvm_enable_cap enable_cap = { .cap = cap, .args = { arg0 } };
> -
> - return __vm_ioctl(vm, KVM_ENABLE_CAP, &enable_cap);
> -}
> -static inline void vm_enable_cap(struct kvm_vm *vm, uint32_t cap, uint64_t arg0)
> -{
> - struct kvm_enable_cap enable_cap = { .cap = cap, .args = { arg0 } };
> -
> - vm_ioctl(vm, KVM_ENABLE_CAP, &enable_cap);
> -}
> -
> -static inline void vm_set_memory_attributes(struct kvm_vm *vm, uint64_t gpa,
> - uint64_t size, uint64_t attributes)
> -{
> - struct kvm_memory_attributes attr = {
> - .attributes = attributes,
> - .address = gpa,
> - .size = size,
> - .flags = 0,
> - };
> -
> - /*
> - * KVM_SET_MEMORY_ATTRIBUTES overwrites _all_ attributes. These flows
> - * need significant enhancements to support multiple attributes.
> - */
> - TEST_ASSERT(!attributes || attributes == KVM_MEMORY_ATTRIBUTE_PRIVATE,
> - "Update me to support multiple attributes!");
> -
> - vm_ioctl(vm, KVM_SET_MEMORY_ATTRIBUTES, &attr);
> -}
> -
> -
> -static inline void vm_mem_set_private(struct kvm_vm *vm, uint64_t gpa,
> - uint64_t size)
> -{
> - vm_set_memory_attributes(vm, gpa, size, KVM_MEMORY_ATTRIBUTE_PRIVATE);
> -}
> -
> -static inline void vm_mem_set_shared(struct kvm_vm *vm, uint64_t gpa,
> - uint64_t size)
> -{
> - vm_set_memory_attributes(vm, gpa, size, 0);
> -}
> -
> -void vm_guest_mem_fallocate(struct kvm_vm *vm, uint64_t gpa, uint64_t size,
> - bool punch_hole);
> -
> -static inline void vm_guest_mem_punch_hole(struct kvm_vm *vm, uint64_t gpa,
> - uint64_t size)
> -{
> - vm_guest_mem_fallocate(vm, gpa, size, true);
> -}
> -
> -static inline void vm_guest_mem_allocate(struct kvm_vm *vm, uint64_t gpa,
> - uint64_t size)
> -{
> - vm_guest_mem_fallocate(vm, gpa, size, false);
> -}
> -
> -void vm_enable_dirty_ring(struct kvm_vm *vm, uint32_t ring_size);
> -const char *vm_guest_mode_string(uint32_t i);
> -
> -void kvm_vm_free(struct kvm_vm *vmp);
> -void kvm_vm_restart(struct kvm_vm *vmp);
> -void kvm_vm_release(struct kvm_vm *vmp);
> -int kvm_memcmp_hva_gva(void *hva, struct kvm_vm *vm, const vm_vaddr_t gva,
> - size_t len);
> -void kvm_vm_elf_load(struct kvm_vm *vm, const char *filename);
> -int kvm_memfd_alloc(size_t size, bool hugepages);
> -
> -void vm_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent);
> -
> -static inline void kvm_vm_get_dirty_log(struct kvm_vm *vm, int slot, void *log)
> -{
> - struct kvm_dirty_log args = { .dirty_bitmap = log, .slot = slot };
> -
> - vm_ioctl(vm, KVM_GET_DIRTY_LOG, &args);
> -}
> -
> -static inline void kvm_vm_clear_dirty_log(struct kvm_vm *vm, int slot, void *log,
> - uint64_t first_page, uint32_t num_pages)
> -{
> - struct kvm_clear_dirty_log args = {
> - .dirty_bitmap = log,
> - .slot = slot,
> - .first_page = first_page,
> - .num_pages = num_pages
> - };
> -
> - vm_ioctl(vm, KVM_CLEAR_DIRTY_LOG, &args);
> -}
> -
> -static inline uint32_t kvm_vm_reset_dirty_ring(struct kvm_vm *vm)
> -{
> - return __vm_ioctl(vm, KVM_RESET_DIRTY_RINGS, NULL);
> -}
> -
> -static inline int vm_get_stats_fd(struct kvm_vm *vm)
> -{
> - int fd = __vm_ioctl(vm, KVM_GET_STATS_FD, NULL);
> -
> - TEST_ASSERT_VM_VCPU_IOCTL(fd >= 0, KVM_GET_STATS_FD, fd, vm);
> - return fd;
> -}
> -
> -static inline void read_stats_header(int stats_fd, struct kvm_stats_header *header)
> -{
> - ssize_t ret;
> -
> - ret = pread(stats_fd, header, sizeof(*header), 0);
> - TEST_ASSERT(ret == sizeof(*header),
> - "Failed to read '%lu' header bytes, ret = '%ld'",
> - sizeof(*header), ret);
> -}
> -
> -struct kvm_stats_desc *read_stats_descriptors(int stats_fd,
> - struct kvm_stats_header *header);
> -
> -static inline ssize_t get_stats_descriptor_size(struct kvm_stats_header *header)
> -{
> - /*
> - * The base size of the descriptor is defined by KVM's ABI, but the
> - * size of the name field is variable, as far as KVM's ABI is
> - * concerned. For a given instance of KVM, the name field is the same
> - * size for all stats and is provided in the overall stats header.
> - */
> - return sizeof(struct kvm_stats_desc) + header->name_size;
> -}
> -
> -static inline struct kvm_stats_desc *get_stats_descriptor(struct kvm_stats_desc *stats,
> - int index,
> - struct kvm_stats_header *header)
> -{
> - /*
> - * Note, size_desc includes the size of the name field, which is
> - * variable. i.e. this is NOT equivalent to &stats_desc[i].
> - */
> - return (void *)stats + index * get_stats_descriptor_size(header);
> -}
> -
> -void read_stat_data(int stats_fd, struct kvm_stats_header *header,
> - struct kvm_stats_desc *desc, uint64_t *data,
> - size_t max_elements);
> -
> -void __vm_get_stat(struct kvm_vm *vm, const char *stat_name, uint64_t *data,
> - size_t max_elements);
> -
> -static inline uint64_t vm_get_stat(struct kvm_vm *vm, const char *stat_name)
> -{
> - uint64_t data;
> -
> - __vm_get_stat(vm, stat_name, &data, 1);
> - return data;
> -}
> -
> -void vm_create_irqchip(struct kvm_vm *vm);
> -
> -static inline int __vm_create_guest_memfd(struct kvm_vm *vm, uint64_t size,
> - uint64_t flags)
> -{
> - struct kvm_create_guest_memfd guest_memfd = {
> - .size = size,
> - .flags = flags,
> - };
> -
> - return __vm_ioctl(vm, KVM_CREATE_GUEST_MEMFD, &guest_memfd);
> -}
> -
> -static inline int vm_create_guest_memfd(struct kvm_vm *vm, uint64_t size,
> - uint64_t flags)
> -{
> - int fd = __vm_create_guest_memfd(vm, size, flags);
> -
> - TEST_ASSERT(fd >= 0, KVM_IOCTL_ERROR(KVM_CREATE_GUEST_MEMFD, fd));
> - return fd;
> -}
> -
> -void vm_set_user_memory_region(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
> - uint64_t gpa, uint64_t size, void *hva);
> -int __vm_set_user_memory_region(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
> - uint64_t gpa, uint64_t size, void *hva);
> -void vm_set_user_memory_region2(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
> - uint64_t gpa, uint64_t size, void *hva,
> - uint32_t guest_memfd, uint64_t guest_memfd_offset);
> -int __vm_set_user_memory_region2(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
> - uint64_t gpa, uint64_t size, void *hva,
> - uint32_t guest_memfd, uint64_t guest_memfd_offset);
> -
> -void vm_userspace_mem_region_add(struct kvm_vm *vm,
> - enum vm_mem_backing_src_type src_type,
> - uint64_t guest_paddr, uint32_t slot, uint64_t npages,
> - uint32_t flags);
> -void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
> - uint64_t guest_paddr, uint32_t slot, uint64_t npages,
> - uint32_t flags, int guest_memfd_fd, uint64_t guest_memfd_offset);
> -
> -#ifndef vm_arch_has_protected_memory
> -static inline bool vm_arch_has_protected_memory(struct kvm_vm *vm)
> -{
> - return false;
> -}
> -#endif
> -
> -void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags);
> -void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa);
> -void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot);
> -struct kvm_vcpu *__vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id);
> -void vm_populate_vaddr_bitmap(struct kvm_vm *vm);
> -vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
> -vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
> -vm_vaddr_t __vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
> - enum kvm_mem_region_type type);
> -vm_vaddr_t vm_vaddr_alloc_shared(struct kvm_vm *vm, size_t sz,
> - vm_vaddr_t vaddr_min,
> - enum kvm_mem_region_type type);
> -vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages);
> -vm_vaddr_t __vm_vaddr_alloc_page(struct kvm_vm *vm,
> - enum kvm_mem_region_type type);
> -vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm);
> -
> -void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
> - unsigned int npages);
> -void *addr_gpa2hva(struct kvm_vm *vm, vm_paddr_t gpa);
> -void *addr_gva2hva(struct kvm_vm *vm, vm_vaddr_t gva);
> -vm_paddr_t addr_hva2gpa(struct kvm_vm *vm, void *hva);
> -void *addr_gpa2alias(struct kvm_vm *vm, vm_paddr_t gpa);
> -
> -
> -static inline vm_paddr_t vm_untag_gpa(struct kvm_vm *vm, vm_paddr_t gpa)
> -{
> - return gpa & ~vm->gpa_tag_mask;
> -}
> -
> -void vcpu_run(struct kvm_vcpu *vcpu);
> -int _vcpu_run(struct kvm_vcpu *vcpu);
> -
> -static inline int __vcpu_run(struct kvm_vcpu *vcpu)
> -{
> - return __vcpu_ioctl(vcpu, KVM_RUN, NULL);
> -}
> -
> -void vcpu_run_complete_io(struct kvm_vcpu *vcpu);
> -struct kvm_reg_list *vcpu_get_reg_list(struct kvm_vcpu *vcpu);
> -
> -static inline void vcpu_enable_cap(struct kvm_vcpu *vcpu, uint32_t cap,
> - uint64_t arg0)
> -{
> - struct kvm_enable_cap enable_cap = { .cap = cap, .args = { arg0 } };
> -
> - vcpu_ioctl(vcpu, KVM_ENABLE_CAP, &enable_cap);
> -}
> -
> -static inline void vcpu_guest_debug_set(struct kvm_vcpu *vcpu,
> - struct kvm_guest_debug *debug)
> -{
> - vcpu_ioctl(vcpu, KVM_SET_GUEST_DEBUG, debug);
> -}
> -
> -static inline void vcpu_mp_state_get(struct kvm_vcpu *vcpu,
> - struct kvm_mp_state *mp_state)
> -{
> - vcpu_ioctl(vcpu, KVM_GET_MP_STATE, mp_state);
> -}
> -static inline void vcpu_mp_state_set(struct kvm_vcpu *vcpu,
> - struct kvm_mp_state *mp_state)
> -{
> - vcpu_ioctl(vcpu, KVM_SET_MP_STATE, mp_state);
> -}
> -
> -static inline void vcpu_regs_get(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
> -{
> - vcpu_ioctl(vcpu, KVM_GET_REGS, regs);
> -}
> -
> -static inline void vcpu_regs_set(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
> -{
> - vcpu_ioctl(vcpu, KVM_SET_REGS, regs);
> -}
> -static inline void vcpu_sregs_get(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
> -{
> - vcpu_ioctl(vcpu, KVM_GET_SREGS, sregs);
> -
> -}
> -static inline void vcpu_sregs_set(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
> -{
> - vcpu_ioctl(vcpu, KVM_SET_SREGS, sregs);
> -}
> -static inline int _vcpu_sregs_set(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
> -{
> - return __vcpu_ioctl(vcpu, KVM_SET_SREGS, sregs);
> -}
> -static inline void vcpu_fpu_get(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
> -{
> - vcpu_ioctl(vcpu, KVM_GET_FPU, fpu);
> -}
> -static inline void vcpu_fpu_set(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
> -{
> - vcpu_ioctl(vcpu, KVM_SET_FPU, fpu);
> -}
> -
> -static inline int __vcpu_get_reg(struct kvm_vcpu *vcpu, uint64_t id, void *addr)
> -{
> - struct kvm_one_reg reg = { .id = id, .addr = (uint64_t)addr };
> -
> - return __vcpu_ioctl(vcpu, KVM_GET_ONE_REG, &reg);
> -}
> -static inline int __vcpu_set_reg(struct kvm_vcpu *vcpu, uint64_t id, uint64_t val)
> -{
> - struct kvm_one_reg reg = { .id = id, .addr = (uint64_t)&val };
> -
> - return __vcpu_ioctl(vcpu, KVM_SET_ONE_REG, &reg);
> -}
> -static inline void vcpu_get_reg(struct kvm_vcpu *vcpu, uint64_t id, void *addr)
> -{
> - struct kvm_one_reg reg = { .id = id, .addr = (uint64_t)addr };
> -
> - vcpu_ioctl(vcpu, KVM_GET_ONE_REG, &reg);
> -}
> -static inline void vcpu_set_reg(struct kvm_vcpu *vcpu, uint64_t id, uint64_t val)
> -{
> - struct kvm_one_reg reg = { .id = id, .addr = (uint64_t)&val };
> -
> - vcpu_ioctl(vcpu, KVM_SET_ONE_REG, &reg);
> -}
> -
> -#ifdef __KVM_HAVE_VCPU_EVENTS
> -static inline void vcpu_events_get(struct kvm_vcpu *vcpu,
> - struct kvm_vcpu_events *events)
> -{
> - vcpu_ioctl(vcpu, KVM_GET_VCPU_EVENTS, events);
> -}
> -static inline void vcpu_events_set(struct kvm_vcpu *vcpu,
> - struct kvm_vcpu_events *events)
> -{
> - vcpu_ioctl(vcpu, KVM_SET_VCPU_EVENTS, events);
> -}
> -#endif
> -#ifdef __x86_64__
> -static inline void vcpu_nested_state_get(struct kvm_vcpu *vcpu,
> - struct kvm_nested_state *state)
> -{
> - vcpu_ioctl(vcpu, KVM_GET_NESTED_STATE, state);
> -}
> -static inline int __vcpu_nested_state_set(struct kvm_vcpu *vcpu,
> - struct kvm_nested_state *state)
> -{
> - return __vcpu_ioctl(vcpu, KVM_SET_NESTED_STATE, state);
> -}
> -
> -static inline void vcpu_nested_state_set(struct kvm_vcpu *vcpu,
> - struct kvm_nested_state *state)
> -{
> - vcpu_ioctl(vcpu, KVM_SET_NESTED_STATE, state);
> -}
> -#endif
> -static inline int vcpu_get_stats_fd(struct kvm_vcpu *vcpu)
> -{
> - int fd = __vcpu_ioctl(vcpu, KVM_GET_STATS_FD, NULL);
> -
> - TEST_ASSERT_VM_VCPU_IOCTL(fd >= 0, KVM_CHECK_EXTENSION, fd, vcpu->vm);
> - return fd;
> -}
> -
> -int __kvm_has_device_attr(int dev_fd, uint32_t group, uint64_t attr);
> -
> -static inline void kvm_has_device_attr(int dev_fd, uint32_t group, uint64_t attr)
> -{
> - int ret = __kvm_has_device_attr(dev_fd, group, attr);
> -
> - TEST_ASSERT(!ret, "KVM_HAS_DEVICE_ATTR failed, rc: %i errno: %i", ret, errno);
> -}
> -
> -int __kvm_device_attr_get(int dev_fd, uint32_t group, uint64_t attr, void *val);
> -
> -static inline void kvm_device_attr_get(int dev_fd, uint32_t group,
> - uint64_t attr, void *val)
> -{
> - int ret = __kvm_device_attr_get(dev_fd, group, attr, val);
> -
> - TEST_ASSERT(!ret, KVM_IOCTL_ERROR(KVM_GET_DEVICE_ATTR, ret));
> -}
> -
> -int __kvm_device_attr_set(int dev_fd, uint32_t group, uint64_t attr, void *val);
> -
> -static inline void kvm_device_attr_set(int dev_fd, uint32_t group,
> - uint64_t attr, void *val)
> -{
> - int ret = __kvm_device_attr_set(dev_fd, group, attr, val);
> -
> - TEST_ASSERT(!ret, KVM_IOCTL_ERROR(KVM_SET_DEVICE_ATTR, ret));
> -}
> -
> -static inline int __vcpu_has_device_attr(struct kvm_vcpu *vcpu, uint32_t group,
> - uint64_t attr)
> -{
> - return __kvm_has_device_attr(vcpu->fd, group, attr);
> -}
> -
> -static inline void vcpu_has_device_attr(struct kvm_vcpu *vcpu, uint32_t group,
> - uint64_t attr)
> -{
> - kvm_has_device_attr(vcpu->fd, group, attr);
> -}
> -
> -static inline int __vcpu_device_attr_get(struct kvm_vcpu *vcpu, uint32_t group,
> - uint64_t attr, void *val)
> -{
> - return __kvm_device_attr_get(vcpu->fd, group, attr, val);
> -}
> -
> -static inline void vcpu_device_attr_get(struct kvm_vcpu *vcpu, uint32_t group,
> - uint64_t attr, void *val)
> -{
> - kvm_device_attr_get(vcpu->fd, group, attr, val);
> -}
> -
> -static inline int __vcpu_device_attr_set(struct kvm_vcpu *vcpu, uint32_t group,
> - uint64_t attr, void *val)
> -{
> - return __kvm_device_attr_set(vcpu->fd, group, attr, val);
> -}
> -
> -static inline void vcpu_device_attr_set(struct kvm_vcpu *vcpu, uint32_t group,
> - uint64_t attr, void *val)
> -{
> - kvm_device_attr_set(vcpu->fd, group, attr, val);
> -}
> -
> -int __kvm_test_create_device(struct kvm_vm *vm, uint64_t type);
> -int __kvm_create_device(struct kvm_vm *vm, uint64_t type);
> -
> -static inline int kvm_create_device(struct kvm_vm *vm, uint64_t type)
> -{
> - int fd = __kvm_create_device(vm, type);
> -
> - TEST_ASSERT(fd >= 0, KVM_IOCTL_ERROR(KVM_CREATE_DEVICE, fd));
> - return fd;
> -}
> -
> -void *vcpu_map_dirty_ring(struct kvm_vcpu *vcpu);
> -
> -/*
> - * VM VCPU Args Set
> - *
> - * Input Args:
> - * vm - Virtual Machine
> - * num - number of arguments
> - * ... - arguments, each of type uint64_t
> - *
> - * Output Args: None
> - *
> - * Return: None
> - *
> - * Sets the first @num input parameters for the function at @vcpu's entry point,
> - * per the C calling convention of the architecture, to the values given as
> - * variable args. Each of the variable args is expected to be of type uint64_t.
> - * The maximum @num can be is specific to the architecture.
> - */
> -void vcpu_args_set(struct kvm_vcpu *vcpu, unsigned int num, ...);
> -
> -void kvm_irq_line(struct kvm_vm *vm, uint32_t irq, int level);
> -int _kvm_irq_line(struct kvm_vm *vm, uint32_t irq, int level);
> -
> -#define KVM_MAX_IRQ_ROUTES 4096
> -
> -struct kvm_irq_routing *kvm_gsi_routing_create(void);
> -void kvm_gsi_routing_irqchip_add(struct kvm_irq_routing *routing,
> - uint32_t gsi, uint32_t pin);
> -int _kvm_gsi_routing_write(struct kvm_vm *vm, struct kvm_irq_routing *routing);
> -void kvm_gsi_routing_write(struct kvm_vm *vm, struct kvm_irq_routing *routing);
> -
> -const char *exit_reason_str(unsigned int exit_reason);
> -
> -vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
> - uint32_t memslot);
> -vm_paddr_t __vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
> - vm_paddr_t paddr_min, uint32_t memslot,
> - bool protected);
> -vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm);
> -
> -static inline vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
> - vm_paddr_t paddr_min, uint32_t memslot)
> -{
> - /*
> - * By default, allocate memory as protected for VMs that support
> - * protected memory, as the majority of memory for such VMs is
> - * protected, i.e. using shared memory is effectively opt-in.
> - */
> - return __vm_phy_pages_alloc(vm, num, paddr_min, memslot,
> - vm_arch_has_protected_memory(vm));
> -}
> -
> -/*
> - * ____vm_create() does KVM_CREATE_VM and little else. __vm_create() also
> - * loads the test binary into guest memory and creates an IRQ chip (x86 only).
> - * __vm_create() does NOT create vCPUs, @nr_runnable_vcpus is used purely to
> - * calculate the amount of memory needed for per-vCPU data, e.g. stacks.
> - */
> -struct kvm_vm *____vm_create(struct vm_shape shape);
> -struct kvm_vm *__vm_create(struct vm_shape shape, uint32_t nr_runnable_vcpus,
> - uint64_t nr_extra_pages);
> -
> -static inline struct kvm_vm *vm_create_barebones(void)
> -{
> - return ____vm_create(VM_SHAPE_DEFAULT);
> -}
> -
> -#ifdef __x86_64__
> -static inline struct kvm_vm *vm_create_barebones_protected_vm(void)
> -{
> - const struct vm_shape shape = {
> - .mode = VM_MODE_DEFAULT,
> - .type = KVM_X86_SW_PROTECTED_VM,
> - };
> -
> - return ____vm_create(shape);
> -}
> -#endif
> -
> -static inline struct kvm_vm *vm_create(uint32_t nr_runnable_vcpus)
> -{
> - return __vm_create(VM_SHAPE_DEFAULT, nr_runnable_vcpus, 0);
> -}
> -
> -struct kvm_vm *__vm_create_with_vcpus(struct vm_shape shape, uint32_t nr_vcpus,
> - uint64_t extra_mem_pages,
> - void *guest_code, struct kvm_vcpu *vcpus[]);
> -
> -static inline struct kvm_vm *vm_create_with_vcpus(uint32_t nr_vcpus,
> - void *guest_code,
> - struct kvm_vcpu *vcpus[])
> -{
> - return __vm_create_with_vcpus(VM_SHAPE_DEFAULT, nr_vcpus, 0,
> - guest_code, vcpus);
> -}
> -
> -
> -struct kvm_vm *__vm_create_shape_with_one_vcpu(struct vm_shape shape,
> - struct kvm_vcpu **vcpu,
> - uint64_t extra_mem_pages,
> - void *guest_code);
> -
> -/*
> - * Create a VM with a single vCPU with reasonable defaults and @extra_mem_pages
> - * additional pages of guest memory. Returns the VM and vCPU (via out param).
> - */
> -static inline struct kvm_vm *__vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
> - uint64_t extra_mem_pages,
> - void *guest_code)
> -{
> - return __vm_create_shape_with_one_vcpu(VM_SHAPE_DEFAULT, vcpu,
> - extra_mem_pages, guest_code);
> -}
> -
> -static inline struct kvm_vm *vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
> - void *guest_code)
> -{
> - return __vm_create_with_one_vcpu(vcpu, 0, guest_code);
> -}
> -
> -static inline struct kvm_vm *vm_create_shape_with_one_vcpu(struct vm_shape shape,
> - struct kvm_vcpu **vcpu,
> - void *guest_code)
> -{
> - return __vm_create_shape_with_one_vcpu(shape, vcpu, 0, guest_code);
> -}
> -
> -struct kvm_vcpu *vm_recreate_with_one_vcpu(struct kvm_vm *vm);
> -
> -void kvm_pin_this_task_to_pcpu(uint32_t pcpu);
> -void kvm_print_vcpu_pinning_help(void);
> -void kvm_parse_vcpu_pinning(const char *pcpus_string, uint32_t vcpu_to_pcpu[],
> - int nr_vcpus);
> -
> -unsigned long vm_compute_max_gfn(struct kvm_vm *vm);
> -unsigned int vm_calc_num_guest_pages(enum vm_guest_mode mode, size_t size);
> -unsigned int vm_num_host_pages(enum vm_guest_mode mode, unsigned int num_guest_pages);
> -unsigned int vm_num_guest_pages(enum vm_guest_mode mode, unsigned int num_host_pages);
> -static inline unsigned int
> -vm_adjust_num_guest_pages(enum vm_guest_mode mode, unsigned int num_guest_pages)
> -{
> - unsigned int n;
> - n = vm_num_guest_pages(mode, vm_num_host_pages(mode, num_guest_pages));
> -#ifdef __s390x__
> - /* s390 requires 1M aligned guest sizes */
> - n = (n + 255) & ~255;
> -#endif
> - return n;
> -}
> -
> -#define sync_global_to_guest(vm, g) ({ \
> - typeof(g) *_p = addr_gva2hva(vm, (vm_vaddr_t)&(g)); \
> - memcpy(_p, &(g), sizeof(g)); \
> -})
> -
> -#define sync_global_from_guest(vm, g) ({ \
> - typeof(g) *_p = addr_gva2hva(vm, (vm_vaddr_t)&(g)); \
> - memcpy(&(g), _p, sizeof(g)); \
> -})
> -
> -/*
> - * Write a global value, but only in the VM's (guest's) domain. Primarily used
> - * for "globals" that hold per-VM values (VMs always duplicate code and global
> - * data into their own region of physical memory), but can be used anytime it's
> - * undesirable to change the host's copy of the global.
> - */
> -#define write_guest_global(vm, g, val) ({ \
> - typeof(g) *_p = addr_gva2hva(vm, (vm_vaddr_t)&(g)); \
> - typeof(g) _val = val; \
> - \
> - memcpy(_p, &(_val), sizeof(g)); \
> -})
> -
> -void assert_on_unhandled_exception(struct kvm_vcpu *vcpu);
> -
> -void vcpu_arch_dump(FILE *stream, struct kvm_vcpu *vcpu,
> - uint8_t indent);
> -
> -static inline void vcpu_dump(FILE *stream, struct kvm_vcpu *vcpu,
> - uint8_t indent)
> -{
> - vcpu_arch_dump(stream, vcpu, indent);
> -}
> -
> -/*
> - * Adds a vCPU with reasonable defaults (e.g. a stack)
> - *
> - * Input Args:
> - * vm - Virtual Machine
> - * vcpu_id - The id of the VCPU to add to the VM.
> - */
> -struct kvm_vcpu *vm_arch_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id);
> -void vcpu_arch_set_entry_point(struct kvm_vcpu *vcpu, void *guest_code);
> -
> -static inline struct kvm_vcpu *vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpu_id,
> - void *guest_code)
> -{
> - struct kvm_vcpu *vcpu = vm_arch_vcpu_add(vm, vcpu_id);
> -
> - vcpu_arch_set_entry_point(vcpu, guest_code);
> -
> - return vcpu;
> -}
> -
> -/* Re-create a vCPU after restarting a VM, e.g. for state save/restore tests. */
> -struct kvm_vcpu *vm_arch_vcpu_recreate(struct kvm_vm *vm, uint32_t vcpu_id);
> -
> -static inline struct kvm_vcpu *vm_vcpu_recreate(struct kvm_vm *vm,
> - uint32_t vcpu_id)
> -{
> - return vm_arch_vcpu_recreate(vm, vcpu_id);
> -}
> -
> -void vcpu_arch_free(struct kvm_vcpu *vcpu);
> -
> -void virt_arch_pgd_alloc(struct kvm_vm *vm);
> -
> -static inline void virt_pgd_alloc(struct kvm_vm *vm)
> -{
> - virt_arch_pgd_alloc(vm);
> -}
> -
> -/*
> - * VM Virtual Page Map
> - *
> - * Input Args:
> - * vm - Virtual Machine
> - * vaddr - VM Virtual Address
> - * paddr - VM Physical Address
> - * memslot - Memory region slot for new virtual translation tables
> - *
> - * Output Args: None
> - *
> - * Return: None
> - *
> - * Within @vm, creates a virtual translation for the page starting
> - * at @vaddr to the page starting at @paddr.
> - */
> -void virt_arch_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr);
> -
> -static inline void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr)
> -{
> - virt_arch_pg_map(vm, vaddr, paddr);
> -}
> -
> -
> -/*
> - * Address Guest Virtual to Guest Physical
> - *
> - * Input Args:
> - * vm - Virtual Machine
> - * gva - VM virtual address
> - *
> - * Output Args: None
> - *
> - * Return:
> - * Equivalent VM physical address
> - *
> - * Returns the VM physical address of the translated VM virtual
> - * address given by @gva.
> - */
> -vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva);
> -
> -static inline vm_paddr_t addr_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
> -{
> - return addr_arch_gva2gpa(vm, gva);
> -}
> -
> -/*
> - * Virtual Translation Tables Dump
> - *
> - * Input Args:
> - * stream - Output FILE stream
> - * vm - Virtual Machine
> - * indent - Left margin indent amount
> - *
> - * Output Args: None
> - *
> - * Return: None
> - *
> - * Dumps to the FILE stream given by @stream, the contents of all the
> - * virtual translation tables for the VM given by @vm.
> - */
> -void virt_arch_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent);
> -
> -static inline void virt_dump(FILE *stream, struct kvm_vm *vm, uint8_t indent)
> -{
> - virt_arch_dump(stream, vm, indent);
> -}
> -
> -
> -static inline int __vm_disable_nx_huge_pages(struct kvm_vm *vm)
> -{
> - return __vm_enable_cap(vm, KVM_CAP_VM_DISABLE_NX_HUGE_PAGES, 0);
> -}
> -
> -/*
> - * Arch hook that is invoked via a constructor, i.e. before exeucting main(),
> - * to allow for arch-specific setup that is common to all tests, e.g. computing
> - * the default guest "mode".
> - */
> -void kvm_selftest_arch_init(void);
> -
> -void kvm_arch_vm_post_create(struct kvm_vm *vm);
> -
> -bool vm_is_gpa_protected(struct kvm_vm *vm, vm_paddr_t paddr);
> -
> -uint32_t guest_get_vcpuid(void);
> -
> -#endif /* SELFTEST_KVM_UTIL_BASE_H */
> diff --git a/tools/testing/selftests/kvm/include/s390x/ucall.h b/tools/testing/selftests/kvm/include/s390x/ucall.h
> index b231bf2e49d6..8035a872a351 100644
> --- a/tools/testing/selftests/kvm/include/s390x/ucall.h
> +++ b/tools/testing/selftests/kvm/include/s390x/ucall.h
> @@ -2,7 +2,7 @@
> #ifndef SELFTEST_KVM_UCALL_H
> #define SELFTEST_KVM_UCALL_H
>
> -#include "kvm_util_base.h"
> +#include "kvm_util.h"
>
> #define UCALL_EXIT_REASON KVM_EXIT_S390_SIEIC
>
> diff --git a/tools/testing/selftests/kvm/include/x86_64/processor.h b/tools/testing/selftests/kvm/include/x86_64/processor.h
> index 3bd03b088dda..d6ffe03c9d0b 100644
> --- a/tools/testing/selftests/kvm/include/x86_64/processor.h
> +++ b/tools/testing/selftests/kvm/include/x86_64/processor.h
> @@ -18,7 +18,8 @@
> #include <linux/kvm_para.h>
> #include <linux/stringify.h>
>
> -#include "../kvm_util.h"
> +#include "kvm_util.h"
> +#include "ucall_common.h"
>
> extern bool host_cpu_is_intel;
> extern bool host_cpu_is_amd;
> diff --git a/tools/testing/selftests/kvm/include/x86_64/ucall.h b/tools/testing/selftests/kvm/include/x86_64/ucall.h
> index 06b244bd06ee..d3825dcc3cd9 100644
> --- a/tools/testing/selftests/kvm/include/x86_64/ucall.h
> +++ b/tools/testing/selftests/kvm/include/x86_64/ucall.h
> @@ -2,7 +2,7 @@
> #ifndef SELFTEST_KVM_UCALL_H
> #define SELFTEST_KVM_UCALL_H
>
> -#include "kvm_util_base.h"
> +#include "kvm_util.h"
>
> #define UCALL_EXIT_REASON KVM_EXIT_IO
>
> diff --git a/tools/testing/selftests/kvm/kvm_page_table_test.c b/tools/testing/selftests/kvm/kvm_page_table_test.c
> index e0ba97ac1c56..e16ef18bcfc0 100644
> --- a/tools/testing/selftests/kvm/kvm_page_table_test.c
> +++ b/tools/testing/selftests/kvm/kvm_page_table_test.c
> @@ -21,6 +21,7 @@
> #include "kvm_util.h"
> #include "processor.h"
> #include "guest_modes.h"
> +#include "ucall_common.h"
>
> #define TEST_MEM_SLOT_INDEX 1
>
> diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
> index a9eb17295be4..0ac7cc89f38c 100644
> --- a/tools/testing/selftests/kvm/lib/aarch64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
> @@ -11,6 +11,8 @@
> #include "guest_modes.h"
> #include "kvm_util.h"
> #include "processor.h"
> +#include "ucall_common.h"
> +
> #include <linux/bitfield.h>
> #include <linux/sizes.h>
>
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index b2262b5fad9e..cec39b52b90d 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -9,6 +9,7 @@
> #include "test_util.h"
> #include "kvm_util.h"
> #include "processor.h"
> +#include "ucall_common.h"
>
> #include <assert.h>
> #include <sched.h>
> diff --git a/tools/testing/selftests/kvm/lib/memstress.c b/tools/testing/selftests/kvm/lib/memstress.c
> index cf2c73971308..96432ad9efa6 100644
> --- a/tools/testing/selftests/kvm/lib/memstress.c
> +++ b/tools/testing/selftests/kvm/lib/memstress.c
> @@ -10,6 +10,7 @@
> #include "kvm_util.h"
> #include "memstress.h"
> #include "processor.h"
> +#include "ucall_common.h"
>
> struct memstress_args memstress_args;
>
> diff --git a/tools/testing/selftests/kvm/lib/riscv/processor.c b/tools/testing/selftests/kvm/lib/riscv/processor.c
> index e8211f5d6863..79b67e2627cb 100644
> --- a/tools/testing/selftests/kvm/lib/riscv/processor.c
> +++ b/tools/testing/selftests/kvm/lib/riscv/processor.c
> @@ -10,6 +10,7 @@
>
> #include "kvm_util.h"
> #include "processor.h"
> +#include "ucall_common.h"
>
> #define DEFAULT_RISCV_GUEST_STACK_VADDR_MIN 0xac0000
>
> diff --git a/tools/testing/selftests/kvm/lib/ucall_common.c b/tools/testing/selftests/kvm/lib/ucall_common.c
> index f5af65a41c29..42151e571953 100644
> --- a/tools/testing/selftests/kvm/lib/ucall_common.c
> +++ b/tools/testing/selftests/kvm/lib/ucall_common.c
> @@ -1,9 +1,12 @@
> // SPDX-License-Identifier: GPL-2.0-only
> -#include "kvm_util.h"
> #include "linux/types.h"
> #include "linux/bitmap.h"
> #include "linux/atomic.h"
>
> +#include "kvm_util.h"
> +#include "ucall_common.h"
> +
> +
> #define GUEST_UCALL_FAILED -1
>
> struct ucall_header {
> diff --git a/tools/testing/selftests/kvm/riscv/arch_timer.c b/tools/testing/selftests/kvm/riscv/arch_timer.c
> index e22848f747c0..d6375af0b23e 100644
> --- a/tools/testing/selftests/kvm/riscv/arch_timer.c
> +++ b/tools/testing/selftests/kvm/riscv/arch_timer.c
> @@ -14,6 +14,7 @@
> #include "kvm_util.h"
> #include "processor.h"
> #include "timer_test.h"
> +#include "ucall_common.h"
>
> static int timer_irq = IRQ_S_TIMER;
>
> diff --git a/tools/testing/selftests/kvm/rseq_test.c b/tools/testing/selftests/kvm/rseq_test.c
> index 28f97fb52044..d81f9b9c5809 100644
> --- a/tools/testing/selftests/kvm/rseq_test.c
> +++ b/tools/testing/selftests/kvm/rseq_test.c
> @@ -19,6 +19,7 @@
> #include "kvm_util.h"
> #include "processor.h"
> #include "test_util.h"
> +#include "ucall_common.h"
>
> #include "../rseq/rseq.c"
>
> diff --git a/tools/testing/selftests/kvm/s390x/cmma_test.c b/tools/testing/selftests/kvm/s390x/cmma_test.c
> index 626a2b8a2037..9e0033906638 100644
> --- a/tools/testing/selftests/kvm/s390x/cmma_test.c
> +++ b/tools/testing/selftests/kvm/s390x/cmma_test.c
> @@ -18,6 +18,7 @@
> #include "test_util.h"
> #include "kvm_util.h"
> #include "kselftest.h"
> +#include "ucall_common.h"
>
> #define MAIN_PAGE_COUNT 512
>
> diff --git a/tools/testing/selftests/kvm/s390x/memop.c b/tools/testing/selftests/kvm/s390x/memop.c
> index bb3ca9a5d731..9b31693be1cb 100644
> --- a/tools/testing/selftests/kvm/s390x/memop.c
> +++ b/tools/testing/selftests/kvm/s390x/memop.c
> @@ -15,6 +15,7 @@
> #include "test_util.h"
> #include "kvm_util.h"
> #include "kselftest.h"
> +#include "ucall_common.h"
>
> enum mop_target {
> LOGICAL,
> diff --git a/tools/testing/selftests/kvm/s390x/tprot.c b/tools/testing/selftests/kvm/s390x/tprot.c
> index c73f948c9b63..7a742a673b7c 100644
> --- a/tools/testing/selftests/kvm/s390x/tprot.c
> +++ b/tools/testing/selftests/kvm/s390x/tprot.c
> @@ -8,6 +8,7 @@
> #include "test_util.h"
> #include "kvm_util.h"
> #include "kselftest.h"
> +#include "ucall_common.h"
>
> #define PAGE_SHIFT 12
> #define PAGE_SIZE (1 << PAGE_SHIFT)
> diff --git a/tools/testing/selftests/kvm/steal_time.c b/tools/testing/selftests/kvm/steal_time.c
> index bae0c5026f82..4c669d0cb8c0 100644
> --- a/tools/testing/selftests/kvm/steal_time.c
> +++ b/tools/testing/selftests/kvm/steal_time.c
> @@ -18,6 +18,7 @@
> #include "test_util.h"
> #include "kvm_util.h"
> #include "processor.h"
> +#include "ucall_common.h"
>
> #define NR_VCPUS 4
> #define ST_GPA_BASE (1 << 30)
> diff --git a/tools/testing/selftests/kvm/x86_64/dirty_log_page_splitting_test.c b/tools/testing/selftests/kvm/x86_64/dirty_log_page_splitting_test.c
> index ee3b384b991c..2929c067c207 100644
> --- a/tools/testing/selftests/kvm/x86_64/dirty_log_page_splitting_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/dirty_log_page_splitting_test.c
> @@ -17,6 +17,7 @@
> #include "test_util.h"
> #include "memstress.h"
> #include "guest_modes.h"
> +#include "ucall_common.h"
>
> #define VCPUS 2
> #define SLOTS 2
> diff --git a/tools/testing/selftests/kvm/x86_64/exit_on_emulation_failure_test.c b/tools/testing/selftests/kvm/x86_64/exit_on_emulation_failure_test.c
> index 6c2e5e0ceb1f..fbac69d49b39 100644
> --- a/tools/testing/selftests/kvm/x86_64/exit_on_emulation_failure_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/exit_on_emulation_failure_test.c
> @@ -8,8 +8,8 @@
> #define _GNU_SOURCE /* for program_invocation_short_name */
>
> #include "flds_emulation.h"
> -
> #include "test_util.h"
> +#include "ucall_common.h"
>
> #define MMIO_GPA 0x700000000
> #define MMIO_GVA MMIO_GPA
> diff --git a/tools/testing/selftests/kvm/x86_64/ucna_injection_test.c b/tools/testing/selftests/kvm/x86_64/ucna_injection_test.c
> index dcbb3c29fb8e..bc9be20f9600 100644
> --- a/tools/testing/selftests/kvm/x86_64/ucna_injection_test.c
> +++ b/tools/testing/selftests/kvm/x86_64/ucna_injection_test.c
> @@ -24,7 +24,6 @@
> #include <string.h>
> #include <time.h>
>
> -#include "kvm_util_base.h"
> #include "kvm_util.h"
> #include "mce.h"
> #include "processor.h"
> --
> 2.44.0.291.gc1ea87d7ee-goog

Happy that this got simplified!

Reviewed-by: Ackerley Tng <[email protected]>

2024-03-28 02:50:47

by Ackerley Tng

[permalink] [raw]
Subject: Re: [PATCH 14/18] KVM: selftests: Fold x86's descriptor tables helpers into vcpu_init_sregs()

Sean Christopherson <[email protected]> writes:

> Now that the per-VM, on-demand allocation logic in kvm_setup_gdt() and
> vcpu_init_descriptor_tables() is gone, fold them into vcpu_init_sregs().
>
> Note, both kvm_setup_gdt() and vcpu_init_descriptor_tables() configured the
> GDT, which is why it looks like kvm_setup_gdt() disappears.
>
> Opportunistically delete the pointless zeroing of the IDT limit (it was
> being unconditionally overwritten by vcpu_init_descriptor_tables()).
>
> Signed-off-by: Sean Christopherson <[email protected]>
> ---
> .../selftests/kvm/lib/x86_64/processor.c | 32 ++++---------------
> 1 file changed, 6 insertions(+), 26 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> index 561c0aa93608..5cf845975f66 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> @@ -516,12 +516,6 @@ vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
> return vm_untag_gpa(vm, PTE_GET_PA(*pte)) | (gva & ~HUGEPAGE_MASK(level));
> }
>
> -static void kvm_setup_gdt(struct kvm_vm *vm, struct kvm_dtable *dt)
> -{
> - dt->base = vm->arch.gdt;
> - dt->limit = getpagesize() - 1;
> -}
> -
> static void kvm_setup_tss_64bit(struct kvm_vm *vm, struct kvm_segment *segp,
> int selector)
> {
> @@ -537,32 +531,19 @@ static void kvm_setup_tss_64bit(struct kvm_vm *vm, struct kvm_segment *segp,
> kvm_seg_fill_gdt_64bit(vm, segp);
> }
>
> -static void vcpu_init_descriptor_tables(struct kvm_vcpu *vcpu)
> +static void vcpu_init_sregs(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
> {
> - struct kvm_vm *vm = vcpu->vm;
> struct kvm_sregs sregs;
>
> + TEST_ASSERT_EQ(vm->mode, VM_MODE_PXXV48_4K);
> +
> + /* Set mode specific system register values. */
> vcpu_sregs_get(vcpu, &sregs);
> +
> sregs.idt.base = vm->arch.idt;
> sregs.idt.limit = NUM_INTERRUPTS * sizeof(struct idt_entry) - 1;
> sregs.gdt.base = vm->arch.gdt;
> sregs.gdt.limit = getpagesize() - 1;
> - kvm_seg_set_kernel_data_64bit(NULL, DEFAULT_DATA_SELECTOR, &sregs.gs);
> - vcpu_sregs_set(vcpu, &sregs);
> -}
> -
> -static void vcpu_init_sregs(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
> -{
> - struct kvm_sregs sregs;
> -
> - TEST_ASSERT_EQ(vm->mode, VM_MODE_PXXV48_4K);
> -
> - /* Set mode specific system register values. */
> - vcpu_sregs_get(vcpu, &sregs);
> -
> - sregs.idt.limit = 0;
> -
> - kvm_setup_gdt(vm, &sregs.gdt);
>
> sregs.cr0 = X86_CR0_PE | X86_CR0_NE | X86_CR0_PG;
> sregs.cr4 |= X86_CR4_PAE | X86_CR4_OSFXSR;
> @@ -572,12 +553,11 @@ static void vcpu_init_sregs(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
> kvm_seg_set_kernel_code_64bit(vm, DEFAULT_CODE_SELECTOR, &sregs.cs);
> kvm_seg_set_kernel_data_64bit(vm, DEFAULT_DATA_SELECTOR, &sregs.ds);
> kvm_seg_set_kernel_data_64bit(vm, DEFAULT_DATA_SELECTOR, &sregs.es);
> + kvm_seg_set_kernel_data_64bit(NULL, DEFAULT_DATA_SELECTOR, &sregs.gs);
> kvm_setup_tss_64bit(vm, &sregs.tr, 0x18);
>
> sregs.cr3 = vm->pgd;
> vcpu_sregs_set(vcpu, &sregs);
> -
> - vcpu_init_descriptor_tables(vcpu);
> }
>
> static void set_idt_entry(struct kvm_vm *vm, int vector, unsigned long addr,
> --
> 2.44.0.291.gc1ea87d7ee-goog

Reviewed-by: Ackerley Tng <[email protected]>

2024-03-28 02:51:55

by Ackerley Tng

[permalink] [raw]
Subject: Re: [PATCH 18/18] KVM: selftests: Drop @selector from segment helpers

Sean Christopherson <[email protected]> writes:

> Drop the @selector from the kernel code, data, and TSS builders and
> instead hardcode the respective selector in the helper. Accepting a
> selector but not a base makes the selector useless, e.g. the data helper
> can't create per-vCPU segments for FS or GS, and so loading GS with KERNEL_DS is
> the only logical choice.
>
> And for code and TSS, there is no known reason to ever want multiple
> segments, e.g. there are zero plans to support 32-bit kernel code (and
> again, that would require more than just the selector).
>
> If KVM selftests ever do add support for per-vCPU segments, it'd arguably
> be more readable to add a dedicated helper for building/setting the
> per-vCPU segment, and move the common data segment code to an inner
> helper.
>
> Lastly, hardcoding the selector reduces the probability of setting the
> wrong selector in the vCPU versus what was created by the VM in the GDT.
>
> Signed-off-by: Sean Christopherson <[email protected]>
> ---
> .../selftests/kvm/lib/x86_64/processor.c | 29 +++++++++----------
> 1 file changed, 14 insertions(+), 15 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> index dab719ee7734..6abd50d6e59d 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> @@ -438,10 +438,10 @@ static void kvm_seg_fill_gdt_64bit(struct kvm_vm *vm, struct kvm_segment *segp)
> desc->base3 = segp->base >> 32;
> }
>
> -static void kvm_seg_set_kernel_code_64bit(uint16_t selector, struct kvm_segment *segp)
> +static void kvm_seg_set_kernel_code_64bit(struct kvm_segment *segp)
> {
> memset(segp, 0, sizeof(*segp));
> - segp->selector = selector;
> + segp->selector = KERNEL_CS;
> segp->limit = 0xFFFFFFFFu;
> segp->s = 0x1; /* kTypeCodeData */
> segp->type = 0x08 | 0x01 | 0x02; /* kFlagCode | kFlagCodeAccessed
> @@ -452,10 +452,10 @@ static void kvm_seg_set_kernel_code_64bit(uint16_t selector, struct kvm_segment
> segp->present = 1;
> }
>
> -static void kvm_seg_set_kernel_data_64bit(uint16_t selector, struct kvm_segment *segp)
> +static void kvm_seg_set_kernel_data_64bit(struct kvm_segment *segp)
> {
> memset(segp, 0, sizeof(*segp));
> - segp->selector = selector;
> + segp->selector = KERNEL_DS;
> segp->limit = 0xFFFFFFFFu;
> segp->s = 0x1; /* kTypeCodeData */
> segp->type = 0x00 | 0x01 | 0x02; /* kFlagData | kFlagDataAccessed
> @@ -480,13 +480,12 @@ vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
> return vm_untag_gpa(vm, PTE_GET_PA(*pte)) | (gva & ~HUGEPAGE_MASK(level));
> }
>
> -static void kvm_seg_set_tss_64bit(vm_vaddr_t base, struct kvm_segment *segp,
> - int selector)
> +static void kvm_seg_set_tss_64bit(vm_vaddr_t base, struct kvm_segment *segp)
> {
> memset(segp, 0, sizeof(*segp));
> segp->base = base;
> segp->limit = 0x67;
> - segp->selector = selector;
> + segp->selector = KERNEL_TSS;
> segp->type = 0xb;
> segp->present = 1;
> }
> @@ -510,11 +509,11 @@ static void vcpu_init_sregs(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
> sregs.efer |= (EFER_LME | EFER_LMA | EFER_NX);
>
> kvm_seg_set_unusable(&sregs.ldt);
> - kvm_seg_set_kernel_code_64bit(KERNEL_CS, &sregs.cs);
> - kvm_seg_set_kernel_data_64bit(KERNEL_DS, &sregs.ds);
> - kvm_seg_set_kernel_data_64bit(KERNEL_DS, &sregs.es);
> - kvm_seg_set_kernel_data_64bit(KERNEL_DS, &sregs.gs);
> - kvm_seg_set_tss_64bit(vm->arch.tss, &sregs.tr, KERNEL_TSS);
> + kvm_seg_set_kernel_code_64bit(&sregs.cs);
> + kvm_seg_set_kernel_data_64bit(&sregs.ds);
> + kvm_seg_set_kernel_data_64bit(&sregs.es);
> + kvm_seg_set_kernel_data_64bit(&sregs.gs);
> + kvm_seg_set_tss_64bit(vm->arch.tss, &sregs.tr);
>
> sregs.cr3 = vm->pgd;
> vcpu_sregs_set(vcpu, &sregs);
> @@ -588,13 +587,13 @@ static void vm_init_descriptor_tables(struct kvm_vm *vm)
>
> *(vm_vaddr_t *)addr_gva2hva(vm, (vm_vaddr_t)(&exception_handlers)) = vm->handlers;
>
> - kvm_seg_set_kernel_code_64bit(KERNEL_CS, &seg);
> + kvm_seg_set_kernel_code_64bit(&seg);
> kvm_seg_fill_gdt_64bit(vm, &seg);
>
> - kvm_seg_set_kernel_data_64bit(KERNEL_DS, &seg);
> + kvm_seg_set_kernel_data_64bit(&seg);
> kvm_seg_fill_gdt_64bit(vm, &seg);
>
> - kvm_seg_set_tss_64bit(vm->arch.tss, &seg, KERNEL_TSS);
> + kvm_seg_set_tss_64bit(vm->arch.tss, &seg);
> kvm_seg_fill_gdt_64bit(vm, &seg);
> }
>
> --
> 2.44.0.291.gc1ea87d7ee-goog

Reviewed-by: Ackerley Tng <[email protected]>

2024-03-28 02:52:42

by Ackerley Tng

[permalink] [raw]
Subject: Re: [PATCH 15/18] KVM: selftests: Allocate x86's TSS at VM creation

Sean Christopherson <[email protected]> writes:

> Allocate x86's per-VM TSS at creation of a non-barebones VM. Like the
> GDT, the TSS is needed to actually run vCPUs, i.e. every non-barebones VM
> is all but guaranteed to allocate the TSS sooner or later.
>
> Signed-off-by: Sean Christopherson <[email protected]>
> ---
> tools/testing/selftests/kvm/lib/x86_64/processor.c | 5 ++---
> 1 file changed, 2 insertions(+), 3 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> index 5cf845975f66..03b9387a1d2e 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> @@ -519,9 +519,6 @@ vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
> static void kvm_setup_tss_64bit(struct kvm_vm *vm, struct kvm_segment *segp,
> int selector)
> {
> - if (!vm->arch.tss)
> - vm->arch.tss = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
> -
> memset(segp, 0, sizeof(*segp));
> segp->base = vm->arch.tss;
> segp->limit = 0x67;
> @@ -619,6 +616,8 @@ static void vm_init_descriptor_tables(struct kvm_vm *vm)
> vm->arch.gdt = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
> vm->arch.idt = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
> vm->handlers = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
> + vm->arch.tss = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
> +
> /* Handlers have the same address in both address spaces.*/
> for (i = 0; i < NUM_INTERRUPTS; i++)
> set_idt_entry(vm, i, (unsigned long)(&idt_handlers)[i], 0,
> --
> 2.44.0.291.gc1ea87d7ee-goog

Reviewed-by: Ackerley Tng <[email protected]>

2024-03-28 02:53:10

by Ackerley Tng

[permalink] [raw]
Subject: Re: [PATCH 16/18] KVM: selftests: Add macro for TSS selector, rename up code/data macros

Sean Christopherson <[email protected]> writes:

> Add a proper #define for the TSS selector instead of open coding 0x18 and
> hoping future developers don't use that selector for something else.
>
> Opportunistically rename the code and data selector macros to shorten the
> names, align the naming with the kernel's scheme, and capture that they
> are *kernel* segments.
>
> Signed-off-by: Sean Christopherson <[email protected]>
> ---
> .../selftests/kvm/lib/x86_64/processor.c | 18 +++++++++---------
> 1 file changed, 9 insertions(+), 9 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> index 03b9387a1d2e..67235013f6f9 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> @@ -15,8 +15,9 @@
> #define NUM_INTERRUPTS 256
> #endif
>
> -#define DEFAULT_CODE_SELECTOR 0x8
> -#define DEFAULT_DATA_SELECTOR 0x10
> +#define KERNEL_CS 0x8
> +#define KERNEL_DS 0x10
> +#define KERNEL_TSS 0x18
>
> #define MAX_NR_CPUID_ENTRIES 100
>
> @@ -547,11 +548,11 @@ static void vcpu_init_sregs(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
> sregs.efer |= (EFER_LME | EFER_LMA | EFER_NX);
>
> kvm_seg_set_unusable(&sregs.ldt);
> - kvm_seg_set_kernel_code_64bit(vm, DEFAULT_CODE_SELECTOR, &sregs.cs);
> - kvm_seg_set_kernel_data_64bit(vm, DEFAULT_DATA_SELECTOR, &sregs.ds);
> - kvm_seg_set_kernel_data_64bit(vm, DEFAULT_DATA_SELECTOR, &sregs.es);
> - kvm_seg_set_kernel_data_64bit(NULL, DEFAULT_DATA_SELECTOR, &sregs.gs);
> - kvm_setup_tss_64bit(vm, &sregs.tr, 0x18);
> + kvm_seg_set_kernel_code_64bit(vm, KERNEL_CS, &sregs.cs);
> + kvm_seg_set_kernel_data_64bit(vm, KERNEL_DS, &sregs.ds);
> + kvm_seg_set_kernel_data_64bit(vm, KERNEL_DS, &sregs.es);
> + kvm_seg_set_kernel_data_64bit(NULL, KERNEL_DS, &sregs.gs);
> + kvm_setup_tss_64bit(vm, &sregs.tr, KERNEL_TSS);
>
> sregs.cr3 = vm->pgd;
> vcpu_sregs_set(vcpu, &sregs);
> @@ -620,8 +621,7 @@ static void vm_init_descriptor_tables(struct kvm_vm *vm)
>
> /* Handlers have the same address in both address spaces.*/
> for (i = 0; i < NUM_INTERRUPTS; i++)
> - set_idt_entry(vm, i, (unsigned long)(&idt_handlers)[i], 0,
> - DEFAULT_CODE_SELECTOR);
> + set_idt_entry(vm, i, (unsigned long)(&idt_handlers)[i], 0, KERNEL_CS);
>
> *(vm_vaddr_t *)addr_gva2hva(vm, (vm_vaddr_t)(&exception_handlers)) = vm->handlers;
> }
> --
> 2.44.0.291.gc1ea87d7ee-goog

Reviewed-by: Ackerley Tng <[email protected]>

2024-03-28 02:53:30

by Ackerley Tng

[permalink] [raw]
Subject: Re: [PATCH 17/18] KVM: selftests: Init x86's segments during VM creation

Sean Christopherson <[email protected]> writes:

> Initialize x86's various segments in the GDT during creation of relevant
> VMs instead of waiting until vCPUs come along. Re-installing the segments
> for every vCPU is both wasteful and confusing, as is installing KERNEL_DS
> multiple times; NOT installing KERNEL_DS for GS is icing on the cake.
>
> Signed-off-by: Sean Christopherson <[email protected]>
> ---
> .../selftests/kvm/lib/x86_64/processor.c | 68 ++++++-------------
> 1 file changed, 20 insertions(+), 48 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> index 67235013f6f9..dab719ee7734 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
> @@ -438,24 +438,7 @@ static void kvm_seg_fill_gdt_64bit(struct kvm_vm *vm, struct kvm_segment *segp)
> desc->base3 = segp->base >> 32;
> }
>
> -
> -/*
> - * Set Long Mode Flat Kernel Code Segment
> - *
> - * Input Args:
> - * vm - VM whose GDT is being filled, or NULL to only write segp
> - * selector - selector value
> - *
> - * Output Args:
> - * segp - Pointer to KVM segment
> - *
> - * Return: None
> - *
> - * Sets up the KVM segment pointed to by @segp, to be a code segment
> - * with the selector value given by @selector.
> - */
> -static void kvm_seg_set_kernel_code_64bit(struct kvm_vm *vm, uint16_t selector,
> - struct kvm_segment *segp)
> +static void kvm_seg_set_kernel_code_64bit(uint16_t selector, struct kvm_segment *segp)
> {
> memset(segp, 0, sizeof(*segp));
> segp->selector = selector;
> @@ -467,27 +450,9 @@ static void kvm_seg_set_kernel_code_64bit(struct kvm_vm *vm, uint16_t selector,
> segp->g = true;
> segp->l = true;
> segp->present = 1;
> - if (vm)
> - kvm_seg_fill_gdt_64bit(vm, segp);
> }
>
> -/*
> - * Set Long Mode Flat Kernel Data Segment
> - *
> - * Input Args:
> - * vm - VM whose GDT is being filled, or NULL to only write segp
> - * selector - selector value
> - *
> - * Output Args:
> - * segp - Pointer to KVM segment
> - *
> - * Return: None
> - *
> - * Sets up the KVM segment pointed to by @segp, to be a data segment
> - * with the selector value given by @selector.
> - */
> -static void kvm_seg_set_kernel_data_64bit(struct kvm_vm *vm, uint16_t selector,
> - struct kvm_segment *segp)
> +static void kvm_seg_set_kernel_data_64bit(uint16_t selector, struct kvm_segment *segp)
> {
> memset(segp, 0, sizeof(*segp));
> segp->selector = selector;
> @@ -498,8 +463,6 @@ static void kvm_seg_set_kernel_data_64bit(struct kvm_vm *vm, uint16_t selector,
> */
> segp->g = true;
> segp->present = true;
> - if (vm)
> - kvm_seg_fill_gdt_64bit(vm, segp);
> }
>
> vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
> @@ -517,16 +480,15 @@ vm_paddr_t addr_arch_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
> return vm_untag_gpa(vm, PTE_GET_PA(*pte)) | (gva & ~HUGEPAGE_MASK(level));
> }
>
> -static void kvm_setup_tss_64bit(struct kvm_vm *vm, struct kvm_segment *segp,
> - int selector)
> +static void kvm_seg_set_tss_64bit(vm_vaddr_t base, struct kvm_segment *segp,
> + int selector)
> {
> memset(segp, 0, sizeof(*segp));
> - segp->base = vm->arch.tss;
> + segp->base = base;
> segp->limit = 0x67;
> segp->selector = selector;
> segp->type = 0xb;
> segp->present = 1;
> - kvm_seg_fill_gdt_64bit(vm, segp);
> }
>
> static void vcpu_init_sregs(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
> @@ -548,11 +510,11 @@ static void vcpu_init_sregs(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
> sregs.efer |= (EFER_LME | EFER_LMA | EFER_NX);
>
> kvm_seg_set_unusable(&sregs.ldt);
> - kvm_seg_set_kernel_code_64bit(vm, KERNEL_CS, &sregs.cs);
> - kvm_seg_set_kernel_data_64bit(vm, KERNEL_DS, &sregs.ds);
> - kvm_seg_set_kernel_data_64bit(vm, KERNEL_DS, &sregs.es);
> - kvm_seg_set_kernel_data_64bit(NULL, KERNEL_DS, &sregs.gs);
> - kvm_setup_tss_64bit(vm, &sregs.tr, KERNEL_TSS);
> + kvm_seg_set_kernel_code_64bit(KERNEL_CS, &sregs.cs);
> + kvm_seg_set_kernel_data_64bit(KERNEL_DS, &sregs.ds);
> + kvm_seg_set_kernel_data_64bit(KERNEL_DS, &sregs.es);
> + kvm_seg_set_kernel_data_64bit(KERNEL_DS, &sregs.gs);
> + kvm_seg_set_tss_64bit(vm->arch.tss, &sregs.tr, KERNEL_TSS);
>
> sregs.cr3 = vm->pgd;
> vcpu_sregs_set(vcpu, &sregs);
> @@ -612,6 +574,7 @@ void route_exception(struct ex_regs *regs)
> static void vm_init_descriptor_tables(struct kvm_vm *vm)
> {
> extern void *idt_handlers;
> + struct kvm_segment seg;
> int i;
>
> vm->arch.gdt = __vm_vaddr_alloc_page(vm, MEM_REGION_DATA);
> @@ -624,6 +587,15 @@ static void vm_init_descriptor_tables(struct kvm_vm *vm)
> set_idt_entry(vm, i, (unsigned long)(&idt_handlers)[i], 0, KERNEL_CS);
>
> *(vm_vaddr_t *)addr_gva2hva(vm, (vm_vaddr_t)(&exception_handlers)) = vm->handlers;
> +
> + kvm_seg_set_kernel_code_64bit(KERNEL_CS, &seg);
> + kvm_seg_fill_gdt_64bit(vm, &seg);
> +
> + kvm_seg_set_kernel_data_64bit(KERNEL_DS, &seg);
> + kvm_seg_fill_gdt_64bit(vm, &seg);
> +
> + kvm_seg_set_tss_64bit(vm->arch.tss, &seg, KERNEL_TSS);
> + kvm_seg_fill_gdt_64bit(vm, &seg);
> }
>
> void vm_install_exception_handler(struct kvm_vm *vm, int vector,
> --
> 2.44.0.291.gc1ea87d7ee-goog

Reviewed-by: Ackerley Tng <[email protected]>

2024-04-29 21:38:55

by Sean Christopherson

[permalink] [raw]
Subject: Re: [PATCH 00/18] KVM: selftests: Clean up x86's DT initialization

On Thu, 14 Mar 2024 16:26:19 -0700, Sean Christopherson wrote:
> The vast majority of this series is x86 specific, and aims to clean up the
> core library's handling of descriptor tables and segments. Currently, the
> library (a) waits until vCPUs are created to allocate per-VM assets, and
> (b) forces tests to opt in to allocate the structures needed to handle
> exceptions, which has resulted in some rather odd tests, and makes it
> unnecessarily difficult to debug unexpected exceptions.
>
> [...]

Applied to kvm-x86 selftests_utils (I had already pushed stuff to "selftests",
so the name is less than awesome).

Other KVM maintainers, I also created tags/kvm-x86-selftests-utils if you want
to pull this into your tree to avoid later merge conflicts with the
kvm_util_base.h => kvm_util.h rename. I don't expect to push anything else to
the branch, but never say never.

Note, this branch+tag also has the _GNU_SOURCE change, and the global pseudo-RNG
series ("thank yous" incoming), i.e. it's hopefully a one-stop shop for all your
6.10 selftests needs!

[01/18] Revert "kvm: selftests: move base kvm_util.h declarations to kvm_util_base.h"
https://github.com/kvm-x86/linux/commit/2b7deea3ec7c
[02/18] KVM: sefltests: Add kvm_util_types.h to hold common types, e.g. vm_vaddr_t
https://github.com/kvm-x86/linux/commit/f54884f93898
[03/18] KVM: selftests: Move GDT, IDT, and TSS fields to x86's kvm_vm_arch
https://github.com/kvm-x86/linux/commit/3a085fbf8228
[04/18] KVM: selftests: Fix off-by-one initialization of GDT limit
https://github.com/kvm-x86/linux/commit/0d95817e0753
[05/18] KVM: selftests: Move platform_info_test's main assert into guest code
https://github.com/kvm-x86/linux/commit/53635ec253c0
[06/18] KVM: selftests: Rework platform_info_test to actually verify #GP
https://github.com/kvm-x86/linux/commit/dec79eab2b48
[07/18] KVM: selftests: Explicitly clobber the IDT in the "delete memslot" testcase
https://github.com/kvm-x86/linux/commit/61c3cffd4cbf
[08/18] KVM: selftests: Move x86's descriptor table helpers "up" in processor.c
https://github.com/kvm-x86/linux/commit/b62c32c532cd
[09/18] KVM: selftests: Rename x86's vcpu_setup() to vcpu_init_sregs()
https://github.com/kvm-x86/linux/commit/d8c63805e4e5
[10/18] KVM: selftests: Init IDT and exception handlers for all VMs/vCPUs on x86
https://github.com/kvm-x86/linux/commit/c1b9793b45d5
[11/18] KVM: selftests: Map x86's exception_handlers at VM creation, not vCPU setup
https://github.com/kvm-x86/linux/commit/44c93b277269
[12/18] KVM: selftests: Allocate x86's GDT during VM creation
https://github.com/kvm-x86/linux/commit/2a511ca99493
[13/18] KVM: selftests: Drop superfluous switch() on vm->mode in vcpu_init_sregs()
https://github.com/kvm-x86/linux/commit/1051e29cb915
[14/18] KVM: selftests: Fold x86's descriptor tables helpers into vcpu_init_sregs()
https://github.com/kvm-x86/linux/commit/23ef21f58cf8
[15/18] KVM: selftests: Allocate x86's TSS at VM creation
https://github.com/kvm-x86/linux/commit/a2834e6e0b98
[16/18] KVM: selftests: Add macro for TSS selector, rename up code/data macros
https://github.com/kvm-x86/linux/commit/f18ef97fc602
[17/18] KVM: selftests: Init x86's segments during VM creation
https://github.com/kvm-x86/linux/commit/0f53a0245068
[18/18] KVM: selftests: Drop @selector from segment helpers
https://github.com/kvm-x86/linux/commit/b093f87fd195

--
https://github.com/kvm-x86/linux/tree/next