Given the high cost of the NX hugepages mitigation in terms of TLB
performance, it may be desirable to disable it on a per-VM basis. In the
case of public cloud providers with many VMs on a single host, some VMs
may be more trusted than others. To maximize performance on critical VMs,
while still providing some protection to the host from iTLB Multihit,
allow the mitigation to be selectively disabled.
Disabling NX hugepages on a VM is relatively straightforward, but I took this
as an opportunity to add some NX hugepages test coverage and clean up selftests
infrastructure a bit.
Patches 1-2 add some library calls for accessing stats via the binary stats API.
Patches 3-5 improve memslot ID handling in the KVM util library.
Patch 6 is a misc logging improvement.
Patches 7 and 13 implement an NX hugepages test.
Patches 8, 9, 10, and 12 implement disabling NX on a VM.
Patch 11 is a small cleanup of a bad merge.
This series was tested with the new selftest and the rest of the KVM selftests
on an Intel Haswell machine.
The following tests failed, but I do not believe the failures are related
to this series:
userspace_io_test
vmx_nested_tsc_scaling_test
vmx_preemption_timer_test
Ben Gardon (13):
selftests: KVM: Dump VM stats in binary stats test
selftests: KVM: Test reading a single stat
selftests: KVM: Wrap memslot IDs in a struct for readability
selftests: KVM: Add memslot parameter to VM vaddr allocation
selftests: KVM: Add memslot parameter to elf_load
selftests: KVM: Improve error message in vm_phy_pages_alloc
selftests: KVM: Add NX huge pages test
KVM: x86/MMU: Factor out updating NX hugepages state for a VM
KVM: x86/MMU: Track NX hugepages on a per-VM basis
KVM: x86/MMU: Allow NX huge pages to be disabled on a per-vm basis
KVM: x86: Fix errant brace in KVM capability handling
KVM: x86/MMU: Require reboot permission to disable NX hugepages
selftests: KVM: Test disabling NX hugepages on a VM
arch/x86/include/asm/kvm_host.h | 3 +
arch/x86/kvm/mmu.h | 9 +-
arch/x86/kvm/mmu/mmu.c | 23 +-
arch/x86/kvm/mmu/spte.c | 7 +-
arch/x86/kvm/mmu/spte.h | 3 +-
arch/x86/kvm/mmu/tdp_mmu.c | 3 +-
arch/x86/kvm/x86.c | 24 +-
include/uapi/linux/kvm.h | 1 +
tools/testing/selftests/kvm/Makefile | 3 +-
.../selftests/kvm/aarch64/psci_cpu_on_test.c | 2 +-
.../selftests/kvm/dirty_log_perf_test.c | 7 +-
tools/testing/selftests/kvm/dirty_log_test.c | 45 +--
.../selftests/kvm/hardware_disable_test.c | 2 +-
.../selftests/kvm/include/kvm_util_base.h | 57 ++--
.../selftests/kvm/include/x86_64/vmx.h | 4 +-
.../selftests/kvm/kvm_binary_stats_test.c | 6 +
.../selftests/kvm/kvm_page_table_test.c | 9 +-
.../selftests/kvm/lib/aarch64/processor.c | 7 +-
tools/testing/selftests/kvm/lib/elf.c | 5 +-
tools/testing/selftests/kvm/lib/kvm_util.c | 311 +++++++++++++++---
.../selftests/kvm/lib/kvm_util_internal.h | 2 +-
.../selftests/kvm/lib/perf_test_util.c | 4 +-
.../selftests/kvm/lib/riscv/processor.c | 5 +-
.../selftests/kvm/lib/s390x/processor.c | 9 +-
.../kvm/lib/x86_64/nx_huge_pages_guest.S | 45 +++
.../selftests/kvm/lib/x86_64/processor.c | 11 +-
tools/testing/selftests/kvm/lib/x86_64/svm.c | 8 +-
tools/testing/selftests/kvm/lib/x86_64/vmx.c | 26 +-
.../selftests/kvm/max_guest_memory_test.c | 6 +-
.../kvm/memslot_modification_stress_test.c | 6 +-
.../testing/selftests/kvm/memslot_perf_test.c | 11 +-
.../selftests/kvm/set_memory_region_test.c | 8 +-
tools/testing/selftests/kvm/steal_time.c | 3 +-
tools/testing/selftests/kvm/x86_64/amx_test.c | 6 +-
.../testing/selftests/kvm/x86_64/cpuid_test.c | 2 +-
.../kvm/x86_64/emulator_error_test.c | 2 +-
.../selftests/kvm/x86_64/hyperv_clock.c | 2 +-
.../selftests/kvm/x86_64/hyperv_features.c | 6 +-
.../selftests/kvm/x86_64/kvm_clock_test.c | 2 +-
.../selftests/kvm/x86_64/mmu_role_test.c | 3 +-
.../selftests/kvm/x86_64/nx_huge_pages_test.c | 149 +++++++++
.../kvm/x86_64/nx_huge_pages_test.sh | 25 ++
.../selftests/kvm/x86_64/set_boot_cpu_id.c | 2 +-
tools/testing/selftests/kvm/x86_64/smm_test.c | 2 +-
.../selftests/kvm/x86_64/vmx_dirty_log_test.c | 10 +-
.../selftests/kvm/x86_64/xapic_ipi_test.c | 2 +-
.../selftests/kvm/x86_64/xen_shinfo_test.c | 4 +-
.../selftests/kvm/x86_64/xen_vmcall_test.c | 2 +-
48 files changed, 704 insertions(+), 190 deletions(-)
create mode 100644 tools/testing/selftests/kvm/lib/x86_64/nx_huge_pages_guest.S
create mode 100644 tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c
create mode 100755 tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.sh
--
2.35.1.616.g0bdcbb4464-goog
Track whether NX hugepages are enabled on a per-VM basis instead of as a
host-wide setting. With this commit, the per-VM state will always be the
same as the host-wide setting, but in future commits, it will be allowed
to differ.
No functional change intended.
Signed-off-by: Ben Gardon <[email protected]>
---
arch/x86/include/asm/kvm_host.h | 2 ++
arch/x86/kvm/mmu.h | 8 ++++----
arch/x86/kvm/mmu/mmu.c | 7 +++++--
arch/x86/kvm/mmu/spte.c | 7 ++++---
arch/x86/kvm/mmu/spte.h | 3 ++-
arch/x86/kvm/mmu/tdp_mmu.c | 3 ++-
6 files changed, 19 insertions(+), 11 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index f72e80178ffc..0a0c54639dd8 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1240,6 +1240,8 @@ struct kvm_arch {
hpa_t hv_root_tdp;
spinlock_t hv_root_tdp_lock;
#endif
+
+ bool nx_huge_pages;
};
struct kvm_vm_stat {
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index bf8dbc4bb12a..dd28fe8d13ae 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -173,9 +173,9 @@ struct kvm_page_fault {
int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
extern int nx_huge_pages;
-static inline bool is_nx_huge_page_enabled(void)
+static inline bool is_nx_huge_page_enabled(struct kvm *kvm)
{
- return READ_ONCE(nx_huge_pages);
+ return READ_ONCE(kvm->arch.nx_huge_pages);
}
static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
@@ -191,8 +191,8 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
.user = err & PFERR_USER_MASK,
.prefetch = prefetch,
.is_tdp = likely(vcpu->arch.mmu->page_fault == kvm_tdp_page_fault),
- .nx_huge_page_workaround_enabled = is_nx_huge_page_enabled(),
-
+ .nx_huge_page_workaround_enabled =
+ is_nx_huge_page_enabled(vcpu->kvm),
.max_level = KVM_MAX_HUGEPAGE_LEVEL,
.req_level = PG_LEVEL_4K,
.goal_level = PG_LEVEL_4K,
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 1b59b56642f1..dc9672f70468 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6195,8 +6195,10 @@ static void __set_nx_huge_pages(bool val)
nx_huge_pages = itlb_multihit_kvm_mitigation = val;
}
-static int kvm_update_nx_huge_pages(struct kvm *kvm)
+static void kvm_update_nx_huge_pages(struct kvm *kvm)
{
+ kvm->arch.nx_huge_pages = nx_huge_pages;
+
mutex_lock(&kvm->slots_lock);
kvm_mmu_zap_all_fast(kvm);
mutex_unlock(&kvm->slots_lock);
@@ -6227,7 +6229,7 @@ static int set_nx_huge_pages(const char *val, const struct kernel_param *kp)
mutex_lock(&kvm_lock);
list_for_each_entry(kvm, &vm_list, vm_list)
- kvm_set_nx_huge_pages(kvm);
+ kvm_update_nx_huge_pages(kvm);
mutex_unlock(&kvm_lock);
}
@@ -6448,6 +6450,7 @@ int kvm_mmu_post_init_vm(struct kvm *kvm)
{
int err;
+ kvm->arch.nx_huge_pages = READ_ONCE(nx_huge_pages);
err = kvm_vm_create_worker_thread(kvm, kvm_nx_lpage_recovery_worker, 0,
"kvm-nx-lpage-recovery",
&kvm->arch.nx_lpage_recovery_thread);
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 4739b53c9734..877ad30bc7ad 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -116,7 +116,7 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
spte |= spte_shadow_accessed_mask(spte);
if (level > PG_LEVEL_4K && (pte_access & ACC_EXEC_MASK) &&
- is_nx_huge_page_enabled()) {
+ is_nx_huge_page_enabled(vcpu->kvm)) {
pte_access &= ~ACC_EXEC_MASK;
}
@@ -215,7 +215,8 @@ static u64 make_spte_executable(u64 spte)
* This is used during huge page splitting to build the SPTEs that make up the
* new page table.
*/
-u64 make_huge_page_split_spte(u64 huge_spte, int huge_level, int index)
+u64 make_huge_page_split_spte(struct kvm *kvm, u64 huge_spte, int huge_level,
+ int index)
{
u64 child_spte;
int child_level;
@@ -243,7 +244,7 @@ u64 make_huge_page_split_spte(u64 huge_spte, int huge_level, int index)
* When splitting to a 4K page, mark the page executable as the
* NX hugepage mitigation no longer applies.
*/
- if (is_nx_huge_page_enabled())
+ if (is_nx_huge_page_enabled(kvm))
child_spte = make_spte_executable(child_spte);
}
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 73f12615416f..e4142caff4b1 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -415,7 +415,8 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
unsigned int pte_access, gfn_t gfn, kvm_pfn_t pfn,
u64 old_spte, bool prefetch, bool can_unsync,
bool host_writable, u64 *new_spte);
-u64 make_huge_page_split_spte(u64 huge_spte, int huge_level, int index);
+u64 make_huge_page_split_spte(struct kvm *kvm, u64 huge_spte, int huge_level,
+ int index);
u64 make_nonleaf_spte(u64 *child_pt, bool ad_disabled);
u64 make_mmio_spte(struct kvm_vcpu *vcpu, u64 gfn, unsigned int access);
u64 mark_spte_for_access_track(u64 spte);
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index af60922906ef..98a45a87f0b2 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1466,7 +1466,8 @@ static int tdp_mmu_split_huge_page(struct kvm *kvm, struct tdp_iter *iter,
* not been linked in yet and thus is not reachable from any other CPU.
*/
for (i = 0; i < PT64_ENT_PER_PAGE; i++)
- sp->spt[i] = make_huge_page_split_spte(huge_spte, level, i);
+ sp->spt[i] = make_huge_page_split_spte(kvm, huge_spte,
+ level, i);
/*
* Replace the huge spte with a pointer to the populated lower level
--
2.35.1.616.g0bdcbb4464-goog
There's currently no test coverage of NX hugepages in KVM selftests, so
add a basic test to ensure that the feature works as intended.
Signed-off-by: Ben Gardon <[email protected]>
---
tools/testing/selftests/kvm/Makefile | 3 +-
.../kvm/lib/x86_64/nx_huge_pages_guest.S | 45 +++++++
.../selftests/kvm/x86_64/nx_huge_pages_test.c | 122 ++++++++++++++++++
.../kvm/x86_64/nx_huge_pages_test.sh | 25 ++++
4 files changed, 194 insertions(+), 1 deletion(-)
create mode 100644 tools/testing/selftests/kvm/lib/x86_64/nx_huge_pages_guest.S
create mode 100644 tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c
create mode 100755 tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.sh
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 04099f453b59..6ee30c0df323 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -38,7 +38,7 @@ ifeq ($(ARCH),riscv)
endif
LIBKVM = lib/assert.c lib/elf.c lib/io.c lib/kvm_util.c lib/rbtree.c lib/sparsebit.c lib/test_util.c lib/guest_modes.c lib/perf_test_util.c
-LIBKVM_x86_64 = lib/x86_64/apic.c lib/x86_64/processor.c lib/x86_64/vmx.c lib/x86_64/svm.c lib/x86_64/ucall.c lib/x86_64/handlers.S
+LIBKVM_x86_64 = lib/x86_64/apic.c lib/x86_64/processor.c lib/x86_64/vmx.c lib/x86_64/svm.c lib/x86_64/ucall.c lib/x86_64/handlers.S lib/x86_64/nx_huge_pages_guest.S
LIBKVM_aarch64 = lib/aarch64/processor.c lib/aarch64/ucall.c lib/aarch64/handlers.S lib/aarch64/spinlock.c lib/aarch64/gic.c lib/aarch64/gic_v3.c lib/aarch64/vgic.c
LIBKVM_s390x = lib/s390x/processor.c lib/s390x/ucall.c lib/s390x/diag318_test_handler.c
LIBKVM_riscv = lib/riscv/processor.c lib/riscv/ucall.c
@@ -56,6 +56,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/kvm_clock_test
TEST_GEN_PROGS_x86_64 += x86_64/kvm_pv_test
TEST_GEN_PROGS_x86_64 += x86_64/mmio_warning_test
TEST_GEN_PROGS_x86_64 += x86_64/mmu_role_test
+TEST_GEN_PROGS_x86_64 += x86_64/nx_huge_pages_test
TEST_GEN_PROGS_x86_64 += x86_64/platform_info_test
TEST_GEN_PROGS_x86_64 += x86_64/pmu_event_filter_test
TEST_GEN_PROGS_x86_64 += x86_64/set_boot_cpu_id
diff --git a/tools/testing/selftests/kvm/lib/x86_64/nx_huge_pages_guest.S b/tools/testing/selftests/kvm/lib/x86_64/nx_huge_pages_guest.S
new file mode 100644
index 000000000000..09c66b9562a3
--- /dev/null
+++ b/tools/testing/selftests/kvm/lib/x86_64/nx_huge_pages_guest.S
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * tools/testing/selftests/kvm/lib/x86_64/nx_huge_pages_guest.S
+ *
+ * Copyright (C) 2022, Google LLC.
+ */
+
+.include "kvm_util.h"
+
+#define HPAGE_SIZE (2*1024*1024)
+#define PORT_SUCCESS 0x70
+
+.global guest_code0
+.global guest_code1
+
+.align HPAGE_SIZE
+exit_vm:
+ mov $0x1,%edi
+ mov $0x2,%esi
+ mov a_string,%edx
+ mov $0x1,%ecx
+ xor %eax,%eax
+ jmp ucall
+
+
+guest_code0:
+ mov data1, %eax
+ mov data2, %eax
+ jmp exit_vm
+
+.align HPAGE_SIZE
+guest_code1:
+ mov data1, %eax
+ mov data2, %eax
+ jmp exit_vm
+data1:
+.quad 0
+
+.align HPAGE_SIZE
+data2:
+.quad 0
+a_string:
+.string "why does the ucall function take a string argument?"
+
+
diff --git a/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c b/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c
new file mode 100644
index 000000000000..5cbcc777d0ab
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c
@@ -0,0 +1,122 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c
+ *
+ * Usage: to be run via nx_huge_pages_test.sh, which does the necessary
+ * environment setup and teardown
+ *
+ * Copyright (C) 2022, Google LLC.
+ */
+
+#define _GNU_SOURCE
+
+#include <stdint.h>
+#include <fcntl.h>
+
+#include <test_util.h>
+#include "kvm_util.h"
+
+#define HPAGE_SLOT MEMSLOT(10)
+#define HPAGE_PADDR_START (10*1024*1024)
+#define HPAGE_SLOT_NPAGES (100*1024*1024/4096)
+
+/* Defined in nx_huge_pages_guest.S */
+void guest_code0(void);
+void guest_code1(void);
+
+static void run_guest_code(struct kvm_vm *vm, void (*guest_code)(void))
+{
+ struct kvm_regs regs;
+
+ vcpu_regs_get(vm, 0, ®s);
+ regs.rip = (uint64_t)guest_code;
+ vcpu_regs_set(vm, 0, ®s);
+ vcpu_run(vm, 0);
+}
+
+static void check_2m_page_count(struct kvm_vm *vm, int expected_pages_2m)
+{
+ int actual_pages_2m;
+
+ actual_pages_2m = vm_get_single_stat(vm, "pages_2m");
+
+ TEST_ASSERT(actual_pages_2m == expected_pages_2m,
+ "Unexpected 2m page count. Expected %d, got %d",
+ expected_pages_2m, actual_pages_2m);
+}
+
+static void check_split_count(struct kvm_vm *vm, int expected_splits)
+{
+ int actual_splits;
+
+ actual_splits = vm_get_single_stat(vm, "nx_lpage_splits");
+
+ TEST_ASSERT(actual_splits == expected_splits,
+ "Unexpected nx lpage split count. Expected %d, got %d",
+ expected_splits, actual_splits);
+}
+
+int main(int argc, char **argv)
+{
+ struct kvm_vm *vm;
+
+ vm = vm_create(VM_MODE_DEFAULT, DEFAULT_GUEST_PHY_PAGES, O_RDWR);
+
+ vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS_HUGETLB,
+ HPAGE_PADDR_START, HPAGE_SLOT,
+ HPAGE_SLOT_NPAGES, 0);
+
+ kvm_vm_elf_load(vm, program_invocation_name, HPAGE_SLOT);
+
+ vm_vcpu_add_default(vm, 0, guest_code0);
+
+ check_2m_page_count(vm, 0);
+ check_split_count(vm, 0);
+
+ /*
+ * Running guest_code0 will access data1 and data2.
+ * This should result in part of the huge page containing guest_code0,
+ * and part of the hugepage containing the ucall function being mapped
+ * at 4K. The huge pages containing data1 and data2 will be mapped
+ * at 2M.
+ */
+ run_guest_code(vm, guest_code0);
+ check_2m_page_count(vm, 2);
+ check_split_count(vm, 2);
+
+ /*
+ * guest_code1 is in the same huge page as data1, so it will cause
+ * that huge page to be remapped at 4k.
+ */
+ run_guest_code(vm, guest_code1);
+ check_2m_page_count(vm, 1);
+ check_split_count(vm, 3);
+
+ /* Run guest_code0 again to check that it has no effect. */
+ run_guest_code(vm, guest_code0);
+ check_2m_page_count(vm, 1);
+ check_split_count(vm, 3);
+
+ /* Give recovery thread time to run */
+ sleep(3);
+ check_2m_page_count(vm, 1);
+ check_split_count(vm, 0);
+
+ /*
+ * The split 2M pages should have been reclaimed, so run guest_code0
+ * again to check that pages are mapped at 2M again.
+ */
+ run_guest_code(vm, guest_code0);
+ check_2m_page_count(vm, 2);
+ check_split_count(vm, 2);
+
+ /* Pages are once again split from running guest_code1. */
+ run_guest_code(vm, guest_code1);
+ check_2m_page_count(vm, 1);
+ check_split_count(vm, 3);
+
+ kvm_vm_free(vm);
+
+ return 0;
+}
+
diff --git a/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.sh b/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.sh
new file mode 100755
index 000000000000..a5f946fb0626
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.sh
@@ -0,0 +1,25 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0-only
+
+# tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.sh
+# Copyright (C) 2022, Google LLC.
+
+NX_HUGE_PAGES=$(cat /sys/module/kvm/parameters/nx_huge_pages)
+NX_HUGE_PAGES_RECOVERY_RATIO=$(cat /sys/module/kvm/parameters/nx_huge_pages_recovery_ratio)
+NX_HUGE_PAGES_RECOVERY_PERIOD=$(cat /sys/module/kvm/parameters/nx_huge_pages_recovery_period_ms)
+HUGE_PAGES=$(cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages)
+
+echo 1 > /sys/module/kvm/parameters/nx_huge_pages
+echo 1 > /sys/module/kvm/parameters/nx_huge_pages_recovery_ratio
+echo 2 > /sys/module/kvm/parameters/nx_huge_pages_recovery_period_ms
+echo 200 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+
+./nx_huge_pages_test
+RET=$?
+
+echo $NX_HUGE_PAGES > /sys/module/kvm/parameters/nx_huge_pages
+echo $NX_HUGE_PAGES_RECOVERY_RATIO > /sys/module/kvm/parameters/nx_huge_pages_recovery_ratio
+echo $NX_HUGE_PAGES_RECOVERY_PERIOD > /sys/module/kvm/parameters/nx_huge_pages_recovery_period_ms
+echo $HUGE_PAGES > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+
+exit $RET
--
2.35.1.616.g0bdcbb4464-goog
On Thu, Mar 10, 2022, Ben Gardon wrote:
> selftests: KVM: Wrap memslot IDs in a struct for readability
> selftests: KVM: Add memslot parameter to VM vaddr allocation
> selftests: KVM: Add memslot parameter to elf_load
I really, really, don't want to go down this path of proliferating memslot crud
throughout the virtual memory allocators. I would much rather we solve this by
teaching the VM creation helpers to (optionally) use hugepages. The amount of
churn required just so that one test can back code with hugepages is absurd, and
there's bound to be tests in the future that want to force hugepages as well.
Ensure that the userspace actor attempting to disable NX hugepages has
permission to reboot the system. Since disabling NX hugepages would
allow a guest to crash the system, doing so warrants a comparably
privileged capability.
This approach is the simplest permission gating, but passing a file
descriptor opened for write for the module parameter would also work
well and be more precise.
The latter approach was suggested by Sean Christopherson.
Suggested-by: Jim Mattson <[email protected]>
Signed-off-by: Ben Gardon <[email protected]>
---
arch/x86/kvm/x86.c | 18 +++++++++++++++++-
1 file changed, 17 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 74351cbb9b5b..995f30667619 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4256,7 +4256,6 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
case KVM_CAP_SYS_ATTRIBUTES:
case KVM_CAP_VAPIC:
case KVM_CAP_ENABLE_CAP:
- case KVM_CAP_VM_DISABLE_NX_HUGE_PAGES:
r = 1;
break;
case KVM_CAP_EXIT_HYPERCALL:
@@ -4359,6 +4358,14 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
case KVM_CAP_DISABLE_QUIRKS2:
r = KVM_X86_VALID_QUIRKS;
break;
+ case KVM_CAP_VM_DISABLE_NX_HUGE_PAGES:
+ /*
+ * Since the risk of disabling NX hugepages is a guest crashing
+ * the system, ensure the userspace process has permission to
+ * reboot the system.
+ */
+ r = capable(CAP_SYS_BOOT);
+ break;
default:
break;
}
@@ -6050,6 +6057,15 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
mutex_unlock(&kvm->lock);
break;
case KVM_CAP_VM_DISABLE_NX_HUGE_PAGES:
+ /*
+ * Since the risk of disabling NX hugepages is a guest crashing
+ * the system, ensure the userspace process has permission to
+ * reboot the system.
+ */
+ if (!capable(CAP_SYS_BOOT)) {
+ r = -EPERM;
+ break;
+ }
kvm->arch.disable_nx_huge_pages = true;
kvm_update_nx_huge_pages(kvm);
r = 0;
--
2.35.1.616.g0bdcbb4464-goog
Add an argument to the NX huge pages test to test disabling the feature
on a VM using the new capability.
Signed-off-by: Ben Gardon <[email protected]>
---
.../selftests/kvm/include/kvm_util_base.h | 2 +
tools/testing/selftests/kvm/lib/kvm_util.c | 7 +++
.../selftests/kvm/x86_64/nx_huge_pages_test.c | 49 ++++++++++++++-----
.../kvm/x86_64/nx_huge_pages_test.sh | 2 +-
4 files changed, 48 insertions(+), 12 deletions(-)
diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 530b5272fae2..8302cf9b1e1d 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -420,4 +420,6 @@ uint64_t vm_get_single_stat(struct kvm_vm *vm, const char *stat_name);
uint32_t guest_get_vcpuid(void);
+void vm_disable_nx_huge_pages(struct kvm_vm *vm);
+
#endif /* SELFTEST_KVM_UTIL_BASE_H */
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index f9591dad1010..880786fe9fac 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -2760,3 +2760,10 @@ uint64_t vm_get_single_stat(struct kvm_vm *vm, const char *stat_name)
return value;
}
+void vm_disable_nx_huge_pages(struct kvm_vm *vm)
+{
+ struct kvm_enable_cap cap = { 0 };
+
+ cap.cap = KVM_CAP_VM_DISABLE_NX_HUGE_PAGES;
+ vm_enable_cap(vm, &cap);
+}
diff --git a/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c b/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c
index 5cbcc777d0ab..1020a4758664 100644
--- a/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c
+++ b/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.c
@@ -56,12 +56,39 @@ static void check_split_count(struct kvm_vm *vm, int expected_splits)
expected_splits, actual_splits);
}
+static void help(void)
+{
+ puts("");
+ printf("usage: nx_huge_pages_test.sh [-x]\n");
+ puts("");
+ printf(" -x: Allow executable huge pages on the VM.\n");
+ puts("");
+ exit(0);
+}
+
int main(int argc, char **argv)
{
struct kvm_vm *vm;
+ bool disable_nx = false;
+ int opt;
+
+ while ((opt = getopt(argc, argv, "x")) != -1) {
+ switch (opt) {
+ case 'x':
+ disable_nx = true;
+ break;
+ case 'h':
+ default:
+ help();
+ break;
+ }
+ }
vm = vm_create(VM_MODE_DEFAULT, DEFAULT_GUEST_PHY_PAGES, O_RDWR);
+ if (disable_nx)
+ vm_disable_nx_huge_pages(vm);
+
vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS_HUGETLB,
HPAGE_PADDR_START, HPAGE_SLOT,
HPAGE_SLOT_NPAGES, 0);
@@ -81,25 +108,25 @@ int main(int argc, char **argv)
* at 2M.
*/
run_guest_code(vm, guest_code0);
- check_2m_page_count(vm, 2);
- check_split_count(vm, 2);
+ check_2m_page_count(vm, disable_nx ? 4 : 2);
+ check_split_count(vm, disable_nx ? 0 : 2);
/*
* guest_code1 is in the same huge page as data1, so it will cause
* that huge page to be remapped at 4k.
*/
run_guest_code(vm, guest_code1);
- check_2m_page_count(vm, 1);
- check_split_count(vm, 3);
+ check_2m_page_count(vm, disable_nx ? 4 : 1);
+ check_split_count(vm, disable_nx ? 0 : 3);
 /* Run guest_code0 again to check that it has no effect. */
run_guest_code(vm, guest_code0);
- check_2m_page_count(vm, 1);
- check_split_count(vm, 3);
+ check_2m_page_count(vm, disable_nx ? 4 : 1);
+ check_split_count(vm, disable_nx ? 0 : 3);
/* Give recovery thread time to run */
sleep(3);
- check_2m_page_count(vm, 1);
+ check_2m_page_count(vm, disable_nx ? 4 : 1);
check_split_count(vm, 0);
/*
@@ -107,13 +134,13 @@ int main(int argc, char **argv)
* again to check that pages are mapped at 2M again.
*/
run_guest_code(vm, guest_code0);
- check_2m_page_count(vm, 2);
- check_split_count(vm, 2);
+ check_2m_page_count(vm, disable_nx ? 4 : 2);
+ check_split_count(vm, disable_nx ? 0 : 2);
/* Pages are once again split from running guest_code1. */
run_guest_code(vm, guest_code1);
- check_2m_page_count(vm, 1);
- check_split_count(vm, 3);
+ check_2m_page_count(vm, disable_nx ? 4 : 1);
+ check_split_count(vm, disable_nx ? 0 : 3);
kvm_vm_free(vm);
diff --git a/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.sh b/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.sh
index a5f946fb0626..205d8c9fd750 100755
--- a/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.sh
+++ b/tools/testing/selftests/kvm/x86_64/nx_huge_pages_test.sh
@@ -14,7 +14,7 @@ echo 1 > /sys/module/kvm/parameters/nx_huge_pages_recovery_ratio
echo 2 > /sys/module/kvm/parameters/nx_huge_pages_recovery_period_ms
echo 200 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
-./nx_huge_pages_test
+./nx_huge_pages_test "${@}"
RET=$?
echo $NX_HUGE_PAGES > /sys/module/kvm/parameters/nx_huge_pages
--
2.35.1.616.g0bdcbb4464-goog
Factor out the code to update the NX hugepages state for an individual
VM. This will be expanded in future commits to allow per-VM control of
NX hugepages.
No functional change intended.
Signed-off-by: Ben Gardon <[email protected]>
---
arch/x86/kvm/mmu/mmu.c | 18 +++++++++++-------
1 file changed, 11 insertions(+), 7 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 3b8da8b0745e..1b59b56642f1 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6195,6 +6195,15 @@ static void __set_nx_huge_pages(bool val)
nx_huge_pages = itlb_multihit_kvm_mitigation = val;
}
+static int kvm_update_nx_huge_pages(struct kvm *kvm)
+{
+ mutex_lock(&kvm->slots_lock);
+ kvm_mmu_zap_all_fast(kvm);
+ mutex_unlock(&kvm->slots_lock);
+
+ wake_up_process(kvm->arch.nx_lpage_recovery_thread);
+}
+
static int set_nx_huge_pages(const char *val, const struct kernel_param *kp)
{
bool old_val = nx_huge_pages;
@@ -6217,13 +6226,8 @@ static int set_nx_huge_pages(const char *val, const struct kernel_param *kp)
mutex_lock(&kvm_lock);
- list_for_each_entry(kvm, &vm_list, vm_list) {
- mutex_lock(&kvm->slots_lock);
- kvm_mmu_zap_all_fast(kvm);
- mutex_unlock(&kvm->slots_lock);
-
- wake_up_process(kvm->arch.nx_lpage_recovery_thread);
- }
+ list_for_each_entry(kvm, &vm_list, vm_list)
+ kvm_set_nx_huge_pages(kvm);
mutex_unlock(&kvm_lock);
}
--
2.35.1.616.g0bdcbb4464-goog
Retrieve the value of a single stat by name in the binary stats test to
ensure the kvm_util library functions work.
CC: Jing Zhang <[email protected]>
Signed-off-by: Ben Gardon <[email protected]>
---
.../selftests/kvm/include/kvm_util_base.h | 1 +
.../selftests/kvm/kvm_binary_stats_test.c | 3 ++
tools/testing/selftests/kvm/lib/kvm_util.c | 53 +++++++++++++++++++
3 files changed, 57 insertions(+)
diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index c5f4a67772cb..09ee70c0df26 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -401,6 +401,7 @@ void assert_on_unhandled_exception(struct kvm_vm *vm, uint32_t vcpuid);
int vm_get_stats_fd(struct kvm_vm *vm);
int vcpu_get_stats_fd(struct kvm_vm *vm, uint32_t vcpuid);
void dump_vm_stats(struct kvm_vm *vm);
+uint64_t vm_get_single_stat(struct kvm_vm *vm, const char *stat_name);
uint32_t guest_get_vcpuid(void);
diff --git a/tools/testing/selftests/kvm/kvm_binary_stats_test.c b/tools/testing/selftests/kvm/kvm_binary_stats_test.c
index afc4701ce8dd..97bde355f105 100644
--- a/tools/testing/selftests/kvm/kvm_binary_stats_test.c
+++ b/tools/testing/selftests/kvm/kvm_binary_stats_test.c
@@ -177,6 +177,9 @@ static void vm_stats_test(struct kvm_vm *vm)
/* Dump VM stats */
dump_vm_stats(vm);
+
+ /* Read a single stat. */
+ printf("remote_tlb_flush: %lu\n", vm_get_single_stat(vm, "remote_tlb_flush"));
}
static void vcpu_stats_test(struct kvm_vm *vm, int vcpu_id)
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 4d21c3b46780..1d3493d7fd55 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -2699,3 +2699,56 @@ void dump_vm_stats(struct kvm_vm *vm)
close(stats_fd);
}
+static int vm_get_stat_data(struct kvm_vm *vm, const char *stat_name,
+ uint64_t **data)
+{
+ struct kvm_stats_desc *stats_desc;
+ struct kvm_stats_header *header;
+ struct kvm_stats_desc *desc;
+ size_t size_desc;
+ int stats_fd;
+ int ret = -EINVAL;
+ int i;
+
+ *data = NULL;
+
+ stats_fd = vm_get_stats_fd(vm);
+
+ header = read_vm_stats_header(stats_fd);
+
+ stats_desc = read_vm_stats_desc(stats_fd, header);
+
+ size_desc = stats_desc_size(header);
+
+ /* Read kvm stats data one by one */
+ for (i = 0; i < header->num_desc; ++i) {
+ desc = (void *)stats_desc + (i * size_desc);
+
+ if (strcmp(desc->name, stat_name))
+ continue;
+
+ ret = read_stat_data(stats_fd, header, desc, data);
+ }
+
+ free(stats_desc);
+ free(header);
+
+ close(stats_fd);
+
+ return ret;
+}
+
+uint64_t vm_get_single_stat(struct kvm_vm *vm, const char *stat_name)
+{
+ uint64_t *data;
+ uint64_t value;
+ int ret;
+
+ ret = vm_get_stat_data(vm, stat_name, &data);
+ TEST_ASSERT(ret == 1, "Stat %s expected to have 1 element, but has %d",
+ stat_name, ret);
+ value = *data;
+ free(data);
+ return value;
+}
+
--
2.35.1.616.g0bdcbb4464-goog
On Thu, Mar 10, 2022, Ben Gardon wrote:
> Those patches are a lot of churn, but at least to me, they make the
> code much more readable. Currently there are many functions which just
> pass along 0 for the memslot, and often have multiple other numerical
> arguments, which makes it hard to understand what the function is
> doing.
Yeah, my solution for that was to rip out all the params. E.g. the most used
function I ended up with is
static inline struct kvm_vm *vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
void *guest_code)
{
return __vm_create_with_one_vcpu(vcpu, 0, guest_code);
}
and then the usage is
vm = vm_create_with_one_vcpu(&vcpu, guest_main);
supp_cpuid = kvm_get_supported_cpuid();
cpuid2 = vcpu_get_cpuid(vcpu);
My overarching complaint with the selftests is that they make the hard things hard,
and the easy things harder. If a test wants to be backed by hugepages, it shouldn't
have to manually specify a memslot.
Let me post my selftests rework as RFC (_very_ RFC at this point). I was hoping to
do more than compile test before posting anything, but it's going to be multiple
weeks before I'll get back to it. Hopefully it'll start a discussion on actually
rewriting the framework so that writing new tests is less painful, and so that every
new thing that comes along doesn't require poking at 50 different tests.
On Thu, Mar 10, 2022 at 11:58 AM Sean Christopherson <[email protected]> wrote:
>
> On Thu, Mar 10, 2022, Ben Gardon wrote:
> > selftests: KVM: Wrap memslot IDs in a struct for readability
> > selftests: KVM: Add memslot parameter to VM vaddr allocation
> > selftests: KVM: Add memslot parameter to elf_load
>
> I really, really, don't want to go down this path of proliferating memslot crud
> throughout the virtual memory allocators. I would much rather we solve this by
> teaching the VM creation helpers to (optionally) use hugepages. The amount of
> churn required just so that one test can back code with hugepages is absurd, and
> there's bound to be tests in the future that want to force hugepages as well.
I agree that proliferating the memslots argument isn't strictly
required for this test, but doing so makes it much easier to make
assertions about hugepage counts and such because you don't have your
stacks and page tables backed with hugepages.
Those patches are a lot of churn, but at least to me, they make the
code much more readable. Currently there are many functions which just
pass along 0 for the memslot, and often have multiple other numerical
arguments, which makes it hard to understand what the function is
doing.
I don't think explicitly specifying memslots really adds that much
overhead to the tests, and I'd rather have control over that than
implicitly cramming everything into memslot 0.
If you have a better way to manage the memslots and create virtual
mappings for / load code into other memslots, I'm open to it, but we
should do something about it.
In many places in the KVM selftests, memslots are referred to by raw
integer IDs. This makes it difficult to tell which argument to a
function is which. Wrap the memslot ID in a struct and provide a
convenience macro to create the structs. This makes the code clearer
and highlights where memslot 0 is tacitly used in many library
functions.
No functional change intended.
Signed-off-by: Ben Gardon <[email protected]>
---
.../selftests/kvm/dirty_log_perf_test.c | 7 +-
tools/testing/selftests/kvm/dirty_log_test.c | 43 +++++----
.../selftests/kvm/include/kvm_util_base.h | 42 +++++----
.../selftests/kvm/include/x86_64/vmx.h | 4 +-
.../selftests/kvm/kvm_page_table_test.c | 9 +-
.../selftests/kvm/lib/aarch64/processor.c | 2 +-
tools/testing/selftests/kvm/lib/kvm_util.c | 88 ++++++++++---------
.../selftests/kvm/lib/kvm_util_internal.h | 2 +-
.../selftests/kvm/lib/perf_test_util.c | 4 +-
.../selftests/kvm/lib/riscv/processor.c | 2 +-
.../selftests/kvm/lib/s390x/processor.c | 6 +-
tools/testing/selftests/kvm/lib/x86_64/vmx.c | 4 +-
.../selftests/kvm/max_guest_memory_test.c | 6 +-
.../kvm/memslot_modification_stress_test.c | 6 +-
.../testing/selftests/kvm/memslot_perf_test.c | 11 ++-
.../selftests/kvm/set_memory_region_test.c | 8 +-
tools/testing/selftests/kvm/steal_time.c | 3 +-
.../kvm/x86_64/emulator_error_test.c | 2 +-
.../selftests/kvm/x86_64/mmu_role_test.c | 3 +-
tools/testing/selftests/kvm/x86_64/smm_test.c | 2 +-
.../selftests/kvm/x86_64/vmx_dirty_log_test.c | 10 +--
.../selftests/kvm/x86_64/xen_shinfo_test.c | 4 +-
.../selftests/kvm/x86_64/xen_vmcall_test.c | 2 +-
23 files changed, 151 insertions(+), 119 deletions(-)
diff --git a/tools/testing/selftests/kvm/dirty_log_perf_test.c b/tools/testing/selftests/kvm/dirty_log_perf_test.c
index 101759ac93a4..04817f65cf18 100644
--- a/tools/testing/selftests/kvm/dirty_log_perf_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_perf_test.c
@@ -101,7 +101,7 @@ static void toggle_dirty_logging(struct kvm_vm *vm, int slots, bool enable)
int slot = PERF_TEST_MEM_SLOT_INDEX + i;
int flags = enable ? KVM_MEM_LOG_DIRTY_PAGES : 0;
- vm_mem_region_set_flags(vm, slot, flags);
+ vm_mem_region_set_flags(vm, MEMSLOT(slot), flags);
}
}
@@ -122,7 +122,7 @@ static void get_dirty_log(struct kvm_vm *vm, unsigned long *bitmaps[], int slots
for (i = 0; i < slots; i++) {
int slot = PERF_TEST_MEM_SLOT_INDEX + i;
- kvm_vm_get_dirty_log(vm, slot, bitmaps[i]);
+ kvm_vm_get_dirty_log(vm, MEMSLOT(slot), bitmaps[i]);
}
}
@@ -134,7 +134,8 @@ static void clear_dirty_log(struct kvm_vm *vm, unsigned long *bitmaps[],
for (i = 0; i < slots; i++) {
int slot = PERF_TEST_MEM_SLOT_INDEX + i;
- kvm_vm_clear_dirty_log(vm, slot, bitmaps[i], 0, pages_per_slot);
+ kvm_vm_clear_dirty_log(vm, MEMSLOT(slot), bitmaps[i], 0,
+ pages_per_slot);
}
}
diff --git a/tools/testing/selftests/kvm/dirty_log_test.c b/tools/testing/selftests/kvm/dirty_log_test.c
index 3fcd89e195c7..1241c9a2729c 100644
--- a/tools/testing/selftests/kvm/dirty_log_test.c
+++ b/tools/testing/selftests/kvm/dirty_log_test.c
@@ -26,7 +26,7 @@
#define VCPU_ID 1
/* The memory slot index to track dirty pages */
-#define TEST_MEM_SLOT_INDEX 1
+#define TEST_MEMSLOT MEMSLOT(1)
/* Default guest test virtual memory offset */
#define DEFAULT_GUEST_TEST_MEM 0xc0000000
@@ -229,17 +229,19 @@ static void clear_log_create_vm_done(struct kvm_vm *vm)
vm_enable_cap(vm, &cap);
}
-static void dirty_log_collect_dirty_pages(struct kvm_vm *vm, int slot,
+static void dirty_log_collect_dirty_pages(struct kvm_vm *vm,
+ struct kvm_memslot memslot,
void *bitmap, uint32_t num_pages)
{
- kvm_vm_get_dirty_log(vm, slot, bitmap);
+ kvm_vm_get_dirty_log(vm, memslot, bitmap);
}
-static void clear_log_collect_dirty_pages(struct kvm_vm *vm, int slot,
+static void clear_log_collect_dirty_pages(struct kvm_vm *vm,
+ struct kvm_memslot memslot,
void *bitmap, uint32_t num_pages)
{
- kvm_vm_get_dirty_log(vm, slot, bitmap);
- kvm_vm_clear_dirty_log(vm, slot, bitmap, 0, num_pages);
+ kvm_vm_get_dirty_log(vm, memslot, bitmap);
+ kvm_vm_clear_dirty_log(vm, memslot, bitmap, 0, num_pages);
}
/* Should only be called after a GUEST_SYNC */
@@ -293,7 +295,7 @@ static inline void dirty_gfn_set_collected(struct kvm_dirty_gfn *gfn)
}
static uint32_t dirty_ring_collect_one(struct kvm_dirty_gfn *dirty_gfns,
- int slot, void *bitmap,
+ struct kvm_memslot memslot, void *bitmap,
uint32_t num_pages, uint32_t *fetch_index)
{
struct kvm_dirty_gfn *cur;
@@ -303,8 +305,9 @@ static uint32_t dirty_ring_collect_one(struct kvm_dirty_gfn *dirty_gfns,
cur = &dirty_gfns[*fetch_index % test_dirty_ring_count];
if (!dirty_gfn_is_dirtied(cur))
break;
- TEST_ASSERT(cur->slot == slot, "Slot number didn't match: "
- "%u != %u", cur->slot, slot);
+ TEST_ASSERT(cur->slot == memslot.id,
+ "Slot number didn't match: %u != %u",
+ cur->slot, memslot.id);
TEST_ASSERT(cur->offset < num_pages, "Offset overflow: "
"0x%llx >= 0x%x", cur->offset, num_pages);
//pr_info("fetch 0x%x page %llu\n", *fetch_index, cur->offset);
@@ -331,7 +334,8 @@ static void dirty_ring_continue_vcpu(void)
sem_post(&sem_vcpu_cont);
}
-static void dirty_ring_collect_dirty_pages(struct kvm_vm *vm, int slot,
+static void dirty_ring_collect_dirty_pages(struct kvm_vm *vm,
+ struct kvm_memslot memslot,
void *bitmap, uint32_t num_pages)
{
/* We only have one vcpu */
@@ -352,7 +356,7 @@ static void dirty_ring_collect_dirty_pages(struct kvm_vm *vm, int slot,
/* Only have one vcpu */
count = dirty_ring_collect_one(vcpu_map_dirty_ring(vm, VCPU_ID),
- slot, bitmap, num_pages, &fetch_index);
+ memslot, bitmap, num_pages, &fetch_index);
cleared = kvm_vm_reset_dirty_ring(vm);
@@ -408,8 +412,9 @@ struct log_mode {
/* Hook when the vm creation is done (before vcpu creation) */
void (*create_vm_done)(struct kvm_vm *vm);
/* Hook to collect the dirty pages into the bitmap provided */
- void (*collect_dirty_pages) (struct kvm_vm *vm, int slot,
- void *bitmap, uint32_t num_pages);
+ void (*collect_dirty_pages)(struct kvm_vm *vm,
+ struct kvm_memslot memslot,
+ void *bitmap, uint32_t num_pages);
/* Hook to call when after each vcpu run */
void (*after_vcpu_run)(struct kvm_vm *vm, int ret, int err);
void (*before_vcpu_join) (void);
@@ -473,14 +478,15 @@ static void log_mode_create_vm_done(struct kvm_vm *vm)
mode->create_vm_done(vm);
}
-static void log_mode_collect_dirty_pages(struct kvm_vm *vm, int slot,
+static void log_mode_collect_dirty_pages(struct kvm_vm *vm,
+ struct kvm_memslot memslot,
void *bitmap, uint32_t num_pages)
{
struct log_mode *mode = &log_modes[host_log_mode];
TEST_ASSERT(mode->collect_dirty_pages != NULL,
"collect_dirty_pages() is required for any log mode!");
- mode->collect_dirty_pages(vm, slot, bitmap, num_pages);
+ mode->collect_dirty_pages(vm, memslot, bitmap, num_pages);
}
static void log_mode_after_vcpu_run(struct kvm_vm *vm, int ret, int err)
@@ -755,8 +761,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
/* Add an extra memory slot for testing dirty logging */
vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
guest_test_phys_mem,
- TEST_MEM_SLOT_INDEX,
- guest_num_pages,
+ TEST_MEMSLOT, guest_num_pages,
KVM_MEM_LOG_DIRTY_PAGES);
/* Do mapping for the dirty track memory slot */
@@ -786,8 +791,8 @@ static void run_test(enum vm_guest_mode mode, void *arg)
while (iteration < p->iterations) {
/* Give the vcpu thread some time to dirty some pages */
usleep(p->interval * 1000);
- log_mode_collect_dirty_pages(vm, TEST_MEM_SLOT_INDEX,
- bmap, host_num_pages);
+ log_mode_collect_dirty_pages(vm, TEST_MEMSLOT, bmap,
+ host_num_pages);
/*
* See vcpu_sync_stop_requested definition for details on why
diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 09ee70c0df26..69a6b5e509ab 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -27,6 +27,14 @@
*/
struct kvm_vm;
+/* Simple int wrapper to represent memslots for callers of kvm_util. */
+struct kvm_memslot {
+ uint32_t id;
+};
+
+#define MEMSLOT(_id) ((struct kvm_memslot){ .id = _id })
+
+
typedef uint64_t vm_paddr_t; /* Virtual Machine (Guest) physical address */
typedef uint64_t vm_vaddr_t; /* Virtual Machine (Guest) virtual address */
@@ -114,9 +122,10 @@ struct kvm_vm *vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm);
void kvm_vm_free(struct kvm_vm *vmp);
void kvm_vm_restart(struct kvm_vm *vmp, int perm);
void kvm_vm_release(struct kvm_vm *vmp);
-void kvm_vm_get_dirty_log(struct kvm_vm *vm, int slot, void *log);
-void kvm_vm_clear_dirty_log(struct kvm_vm *vm, int slot, void *log,
- uint64_t first_page, uint32_t num_pages);
+void kvm_vm_get_dirty_log(struct kvm_vm *vm, struct kvm_memslot memslot,
+ void *log);
+void kvm_vm_clear_dirty_log(struct kvm_vm *vm, struct kvm_memslot memslot,
+ void *log, uint64_t first_page, uint32_t num_pages);
uint32_t kvm_vm_reset_dirty_ring(struct kvm_vm *vm);
int kvm_memcmp_hva_gva(void *hva, struct kvm_vm *vm, const vm_vaddr_t gva,
@@ -148,14 +157,15 @@ void vcpu_dump(FILE *stream, struct kvm_vm *vm, uint32_t vcpuid,
void vm_create_irqchip(struct kvm_vm *vm);
-void vm_set_user_memory_region(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
- uint64_t gpa, uint64_t size, void *hva);
-int __vm_set_user_memory_region(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
- uint64_t gpa, uint64_t size, void *hva);
+void vm_set_user_memory_region(struct kvm_vm *vm, struct kvm_memslot memslot,
+ uint32_t flags, uint64_t gpa, uint64_t size,
+ void *hva);
+int __vm_set_user_memory_region(struct kvm_vm *vm, struct kvm_memslot memslot,
+ uint32_t flags, uint64_t gpa, uint64_t size,
+ void *hva);
void vm_userspace_mem_region_add(struct kvm_vm *vm,
- enum vm_mem_backing_src_type src_type,
- uint64_t guest_paddr, uint32_t slot, uint64_t npages,
- uint32_t flags);
+ enum vm_mem_backing_src_type src_type, uint64_t guest_paddr,
+ struct kvm_memslot memslot, uint64_t npages, uint32_t flags);
void vcpu_ioctl(struct kvm_vm *vm, uint32_t vcpuid, unsigned long ioctl,
void *arg);
@@ -165,9 +175,11 @@ void vm_ioctl(struct kvm_vm *vm, unsigned long ioctl, void *arg);
int _vm_ioctl(struct kvm_vm *vm, unsigned long cmd, void *arg);
void kvm_ioctl(struct kvm_vm *vm, unsigned long ioctl, void *arg);
int _kvm_ioctl(struct kvm_vm *vm, unsigned long ioctl, void *arg);
-void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags);
-void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa);
-void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot);
+void vm_mem_region_set_flags(struct kvm_vm *vm, struct kvm_memslot memslot,
+ uint32_t flags);
+void vm_mem_region_move(struct kvm_vm *vm, struct kvm_memslot memslot,
+ uint64_t new_gpa);
+void vm_mem_region_delete(struct kvm_vm *vm, struct kvm_memslot memslot);
void vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpuid);
vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages);
@@ -307,9 +319,9 @@ void virt_pgd_alloc(struct kvm_vm *vm);
void virt_pg_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr);
vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
- uint32_t memslot);
+ struct kvm_memslot memslot);
vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
- vm_paddr_t paddr_min, uint32_t memslot);
+ vm_paddr_t paddr_min, struct kvm_memslot memslot);
vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm);
/*
diff --git a/tools/testing/selftests/kvm/include/x86_64/vmx.h b/tools/testing/selftests/kvm/include/x86_64/vmx.h
index 583ceb0d1457..cc1dd1f82a1d 100644
--- a/tools/testing/selftests/kvm/include/x86_64/vmx.h
+++ b/tools/testing/selftests/kvm/include/x86_64/vmx.h
@@ -612,9 +612,9 @@ void nested_pg_map(struct vmx_pages *vmx, struct kvm_vm *vm,
void nested_map(struct vmx_pages *vmx, struct kvm_vm *vm,
uint64_t nested_paddr, uint64_t paddr, uint64_t size);
void nested_map_memslot(struct vmx_pages *vmx, struct kvm_vm *vm,
- uint32_t memslot);
+ struct kvm_memslot memslot);
void prepare_eptp(struct vmx_pages *vmx, struct kvm_vm *vm,
- uint32_t eptp_memslot);
+ struct kvm_memslot eptp_memslot);
void prepare_virtualize_apic_accesses(struct vmx_pages *vmx, struct kvm_vm *vm);
#endif /* SELFTEST_KVM_VMX_H */
diff --git a/tools/testing/selftests/kvm/kvm_page_table_test.c b/tools/testing/selftests/kvm/kvm_page_table_test.c
index ba1fdc3dcf4a..74687be63e8a 100644
--- a/tools/testing/selftests/kvm/kvm_page_table_test.c
+++ b/tools/testing/selftests/kvm/kvm_page_table_test.c
@@ -22,7 +22,7 @@
#include "processor.h"
#include "guest_modes.h"
-#define TEST_MEM_SLOT_INDEX 1
+#define TEST_MEMSLOT MEMSLOT(1)
/* Default size(1GB) of the memory for testing */
#define DEFAULT_TEST_MEM_SIZE (1 << 30)
@@ -300,7 +300,7 @@ static struct kvm_vm *pre_init_before_test(enum vm_guest_mode mode, void *arg)
/* Add an extra memory slot with specified backing src type */
vm_userspace_mem_region_add(vm, src_type, guest_test_phys_mem,
- TEST_MEM_SLOT_INDEX, guest_num_pages, 0);
+ TEST_MEMSLOT, guest_num_pages, 0);
/* Do mapping(GVA->GPA) for the testing memory slot */
virt_map(vm, guest_test_virt_mem, guest_test_phys_mem, guest_num_pages);
@@ -398,8 +398,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
ts_diff.tv_sec, ts_diff.tv_nsec);
/* Test the stage of KVM updating mappings */
- vm_mem_region_set_flags(vm, TEST_MEM_SLOT_INDEX,
- KVM_MEM_LOG_DIRTY_PAGES);
+ vm_mem_region_set_flags(vm, TEST_MEMSLOT, KVM_MEM_LOG_DIRTY_PAGES);
*current_stage = KVM_UPDATE_MAPPINGS;
@@ -411,7 +410,7 @@ static void run_test(enum vm_guest_mode mode, void *arg)
ts_diff.tv_sec, ts_diff.tv_nsec);
/* Test the stage of KVM adjusting mappings */
- vm_mem_region_set_flags(vm, TEST_MEM_SLOT_INDEX, 0);
+ vm_mem_region_set_flags(vm, TEST_MEMSLOT, 0);
*current_stage = KVM_ADJUST_MAPPINGS;
diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
index 9343d82519b4..a9e505e351e0 100644
--- a/tools/testing/selftests/kvm/lib/aarch64/processor.c
+++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
@@ -80,7 +80,7 @@ void virt_pgd_alloc(struct kvm_vm *vm)
if (!vm->pgd_created) {
vm_paddr_t paddr = vm_phy_pages_alloc(vm,
page_align(vm, ptrs_per_pgd(vm) * 8) / vm->page_size,
- KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0);
+ KVM_GUEST_PAGE_TABLE_MIN_PADDR, MEMSLOT(0));
vm->pgd = paddr;
vm->pgd_created = true;
}
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 1d3493d7fd55..97d1badaba8b 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -357,7 +357,7 @@ struct kvm_vm *vm_create(enum vm_guest_mode mode, uint64_t phy_pages, int perm)
vm->vpages_mapped = sparsebit_alloc();
if (phy_pages != 0)
vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
- 0, 0, phy_pages, 0);
+ 0, MEMSLOT(0), phy_pages, 0);
return vm;
}
@@ -488,9 +488,10 @@ void kvm_vm_restart(struct kvm_vm *vmp, int perm)
}
}
-void kvm_vm_get_dirty_log(struct kvm_vm *vm, int slot, void *log)
+void kvm_vm_get_dirty_log(struct kvm_vm *vm, struct kvm_memslot memslot,
+ void *log)
{
- struct kvm_dirty_log args = { .dirty_bitmap = log, .slot = slot };
+ struct kvm_dirty_log args = { .dirty_bitmap = log, .slot = memslot.id };
int ret;
ret = ioctl(vm->fd, KVM_GET_DIRTY_LOG, &args);
@@ -498,11 +499,11 @@ void kvm_vm_get_dirty_log(struct kvm_vm *vm, int slot, void *log)
__func__, strerror(-ret));
}
-void kvm_vm_clear_dirty_log(struct kvm_vm *vm, int slot, void *log,
- uint64_t first_page, uint32_t num_pages)
+void kvm_vm_clear_dirty_log(struct kvm_vm *vm, struct kvm_memslot memslot,
+ void *log, uint64_t first_page, uint32_t num_pages)
{
struct kvm_clear_dirty_log args = {
- .dirty_bitmap = log, .slot = slot,
+ .dirty_bitmap = log, .slot = memslot.id,
.first_page = first_page,
.num_pages = num_pages
};
@@ -861,11 +862,12 @@ static void vm_userspace_mem_region_hva_insert(struct rb_root *hva_tree,
}
-int __vm_set_user_memory_region(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
- uint64_t gpa, uint64_t size, void *hva)
+int __vm_set_user_memory_region(struct kvm_vm *vm, struct kvm_memslot memslot,
+ uint32_t flags, uint64_t gpa, uint64_t size,
+ void *hva)
{
struct kvm_userspace_memory_region region = {
- .slot = slot,
+ .slot = memslot.id,
.flags = flags,
.guest_phys_addr = gpa,
.memory_size = size,
@@ -875,10 +877,11 @@ int __vm_set_user_memory_region(struct kvm_vm *vm, uint32_t slot, uint32_t flags
return ioctl(vm->fd, KVM_SET_USER_MEMORY_REGION, &region);
}
-void vm_set_user_memory_region(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
- uint64_t gpa, uint64_t size, void *hva)
+void vm_set_user_memory_region(struct kvm_vm *vm, struct kvm_memslot memslot,
+ uint32_t flags, uint64_t gpa, uint64_t size,
+ void *hva)
{
- int ret = __vm_set_user_memory_region(vm, slot, flags, gpa, size, hva);
+ int ret = __vm_set_user_memory_region(vm, memslot, flags, gpa, size, hva);
TEST_ASSERT(!ret, "KVM_SET_USER_MEMORY_REGION failed, errno = %d (%s)",
errno, strerror(errno));
@@ -892,7 +895,7 @@ void vm_set_user_memory_region(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
* src_type - Storage source for this region.
* NULL to use anonymous memory.
* guest_paddr - Starting guest physical address
- * slot - KVM region slot
+ * memslot - KVM region slot
* npages - Number of physical pages
* flags - KVM memory region flags (e.g. KVM_MEM_LOG_DIRTY_PAGES)
*
@@ -907,9 +910,8 @@ void vm_set_user_memory_region(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
* region is created with the flags given by flags.
*/
void vm_userspace_mem_region_add(struct kvm_vm *vm,
- enum vm_mem_backing_src_type src_type,
- uint64_t guest_paddr, uint32_t slot, uint64_t npages,
- uint32_t flags)
+ enum vm_mem_backing_src_type src_type, uint64_t guest_paddr,
+ struct kvm_memslot memslot, uint64_t npages, uint32_t flags)
{
int ret;
struct userspace_mem_region *region;
@@ -949,15 +951,15 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
/* Confirm no region with the requested slot already exists. */
hash_for_each_possible(vm->regions.slot_hash, region, slot_node,
- slot) {
- if (region->region.slot != slot)
+ memslot.id) {
+ if (region->region.slot != memslot.id)
continue;
TEST_FAIL("A mem region with the requested slot "
"already exists.\n"
" requested slot: %u paddr: 0x%lx npages: 0x%lx\n"
" existing slot: %u paddr: 0x%lx size: 0x%lx",
- slot, guest_paddr, npages,
+ memslot.id, guest_paddr, npages,
region->region.slot,
(uint64_t) region->region.guest_phys_addr,
(uint64_t) region->region.memory_size);
@@ -1024,7 +1026,7 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
region->unused_phy_pages = sparsebit_alloc();
sparsebit_set_num(region->unused_phy_pages,
guest_paddr >> vm->page_shift, npages);
- region->region.slot = slot;
+ region->region.slot = memslot.id;
region->region.flags = flags;
region->region.guest_phys_addr = guest_paddr;
region->region.memory_size = npages * vm->page_size;
@@ -1034,13 +1036,13 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
" rc: %i errno: %i\n"
" slot: %u flags: 0x%x\n"
" guest_phys_addr: 0x%lx size: 0x%lx",
- ret, errno, slot, flags,
+ ret, errno, memslot.id, flags,
guest_paddr, (uint64_t) region->region.memory_size);
/* Add to quick lookup data structures */
vm_userspace_mem_region_gpa_insert(&vm->regions.gpa_tree, region);
vm_userspace_mem_region_hva_insert(&vm->regions.hva_tree, region);
- hash_add(vm->regions.slot_hash, &region->slot_node, slot);
+ hash_add(vm->regions.slot_hash, &region->slot_node, memslot.id);
/* If shared memory, create an alias. */
if (region->fd >= 0) {
@@ -1072,17 +1074,17 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
* memory slot ID).
*/
struct userspace_mem_region *
-memslot2region(struct kvm_vm *vm, uint32_t memslot)
+memslot2region(struct kvm_vm *vm, struct kvm_memslot memslot)
{
struct userspace_mem_region *region;
hash_for_each_possible(vm->regions.slot_hash, region, slot_node,
- memslot)
- if (region->region.slot == memslot)
+ memslot.id)
+ if (region->region.slot == memslot.id)
return region;
fprintf(stderr, "No mem region with the requested slot found,\n"
- " requested slot: %u\n", memslot);
+ " requested slot: %u\n", memslot.id);
fputs("---- vm dump ----\n", stderr);
vm_dump(stderr, vm, 2);
TEST_FAIL("Mem region not found");
@@ -1103,12 +1105,13 @@ memslot2region(struct kvm_vm *vm, uint32_t memslot)
* Sets the flags of the memory region specified by the value of slot,
* to the values given by flags.
*/
-void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags)
+void vm_mem_region_set_flags(struct kvm_vm *vm, struct kvm_memslot memslot,
+ uint32_t flags)
{
int ret;
struct userspace_mem_region *region;
- region = memslot2region(vm, slot);
+ region = memslot2region(vm, memslot);
region->region.flags = flags;
@@ -1116,7 +1119,7 @@ void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags)
TEST_ASSERT(ret == 0, "KVM_SET_USER_MEMORY_REGION IOCTL failed,\n"
" rc: %i errno: %i slot: %u flags: 0x%x",
- ret, errno, slot, flags);
+ ret, errno, memslot.id, flags);
}
/*
@@ -1124,7 +1127,7 @@ void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags)
*
* Input Args:
* vm - Virtual Machine
- * slot - Slot of the memory region to move
+ * memslot - Memslot of the memory region to move
* new_gpa - Starting guest physical address
*
* Output Args: None
@@ -1133,12 +1136,13 @@ void vm_mem_region_set_flags(struct kvm_vm *vm, uint32_t slot, uint32_t flags)
*
* Change the gpa of a memory region.
*/
-void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa)
+void vm_mem_region_move(struct kvm_vm *vm, struct kvm_memslot memslot,
+ uint64_t new_gpa)
{
struct userspace_mem_region *region;
int ret;
- region = memslot2region(vm, slot);
+ region = memslot2region(vm, memslot);
region->region.guest_phys_addr = new_gpa;
@@ -1146,7 +1150,7 @@ void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa)
TEST_ASSERT(!ret, "KVM_SET_USER_MEMORY_REGION failed\n"
"ret: %i errno: %i slot: %u new_gpa: 0x%lx",
- ret, errno, slot, new_gpa);
+ ret, errno, memslot.id, new_gpa);
}
/*
@@ -1154,7 +1158,7 @@ void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa)
*
* Input Args:
* vm - Virtual Machine
- * slot - Slot of the memory region to delete
+ * memslot - Memslot of the memory region to delete
*
* Output Args: None
*
@@ -1162,9 +1166,9 @@ void vm_mem_region_move(struct kvm_vm *vm, uint32_t slot, uint64_t new_gpa)
*
* Delete a memory region.
*/
-void vm_mem_region_delete(struct kvm_vm *vm, uint32_t slot)
+void vm_mem_region_delete(struct kvm_vm *vm, struct kvm_memslot memslot)
{
- __vm_mem_region_delete(vm, memslot2region(vm, slot), true);
+ __vm_mem_region_delete(vm, memslot2region(vm, memslot), true);
}
/*
@@ -1356,7 +1360,8 @@ vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min)
virt_pgd_alloc(vm);
vm_paddr_t paddr = vm_phy_pages_alloc(vm, pages,
- KVM_UTIL_MIN_PFN * vm->page_size, 0);
+ KVM_UTIL_MIN_PFN * vm->page_size,
+ MEMSLOT(0));
/*
* Find an unused range of virtual page addresses of at least
@@ -2377,7 +2382,7 @@ const char *exit_reason_str(unsigned int exit_reason)
* not enough pages are available at or above paddr_min.
*/
vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
- vm_paddr_t paddr_min, uint32_t memslot)
+ vm_paddr_t paddr_min, struct kvm_memslot memslot)
{
struct userspace_mem_region *region;
sparsebit_idx_t pg, base;
@@ -2404,7 +2409,7 @@ vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
if (pg == 0) {
fprintf(stderr, "No guest physical page available, "
"paddr_min: 0x%lx page_size: 0x%x memslot: %u\n",
- paddr_min, vm->page_size, memslot);
+ paddr_min, vm->page_size, memslot.id);
fputs("---- vm dump ----\n", stderr);
vm_dump(stderr, vm, 2);
abort();
@@ -2417,7 +2422,7 @@ vm_paddr_t vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
}
vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
- uint32_t memslot)
+ struct kvm_memslot memslot)
{
return vm_phy_pages_alloc(vm, 1, paddr_min, memslot);
}
@@ -2427,7 +2432,8 @@ vm_paddr_t vm_phy_page_alloc(struct kvm_vm *vm, vm_paddr_t paddr_min,
vm_paddr_t vm_alloc_page_table(struct kvm_vm *vm)
{
- return vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0);
+ return vm_phy_page_alloc(vm, KVM_GUEST_PAGE_TABLE_MIN_PADDR,
+ MEMSLOT(0));
}
/*
diff --git a/tools/testing/selftests/kvm/lib/kvm_util_internal.h b/tools/testing/selftests/kvm/lib/kvm_util_internal.h
index a03febc24ba6..386ad653391c 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util_internal.h
+++ b/tools/testing/selftests/kvm/lib/kvm_util_internal.h
@@ -123,6 +123,6 @@ void regs_dump(FILE *stream, struct kvm_regs *regs, uint8_t indent);
void sregs_dump(FILE *stream, struct kvm_sregs *sregs, uint8_t indent);
struct userspace_mem_region *
-memslot2region(struct kvm_vm *vm, uint32_t memslot);
+memslot2region(struct kvm_vm *vm, struct kvm_memslot memslot);
#endif /* SELFTEST_KVM_UTIL_INTERNAL_H */
diff --git a/tools/testing/selftests/kvm/lib/perf_test_util.c b/tools/testing/selftests/kvm/lib/perf_test_util.c
index 722df3a28791..e19bb2b66bc5 100644
--- a/tools/testing/selftests/kvm/lib/perf_test_util.c
+++ b/tools/testing/selftests/kvm/lib/perf_test_util.c
@@ -169,8 +169,8 @@ struct kvm_vm *perf_test_create_vm(enum vm_guest_mode mode, int vcpus,
vm_paddr_t region_start = pta->gpa + region_pages * pta->guest_page_size * i;
vm_userspace_mem_region_add(vm, backing_src, region_start,
- PERF_TEST_MEM_SLOT_INDEX + i,
- region_pages, 0);
+ MEMSLOT(PERF_TEST_MEM_SLOT_INDEX + i),
+ region_pages, 0);
}
/* Do mapping for the demand paging memory slot */
diff --git a/tools/testing/selftests/kvm/lib/riscv/processor.c b/tools/testing/selftests/kvm/lib/riscv/processor.c
index d377f2603d98..7a0ff26b9f8d 100644
--- a/tools/testing/selftests/kvm/lib/riscv/processor.c
+++ b/tools/testing/selftests/kvm/lib/riscv/processor.c
@@ -59,7 +59,7 @@ void virt_pgd_alloc(struct kvm_vm *vm)
if (!vm->pgd_created) {
vm_paddr_t paddr = vm_phy_pages_alloc(vm,
page_align(vm, ptrs_per_pte(vm) * 8) / vm->page_size,
- KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0);
+ KVM_GUEST_PAGE_TABLE_MIN_PADDR, MEMSLOT(0));
vm->pgd = paddr;
vm->pgd_created = true;
}
diff --git a/tools/testing/selftests/kvm/lib/s390x/processor.c b/tools/testing/selftests/kvm/lib/s390x/processor.c
index f87c7137598e..1c873a26e6de 100644
--- a/tools/testing/selftests/kvm/lib/s390x/processor.c
+++ b/tools/testing/selftests/kvm/lib/s390x/processor.c
@@ -22,7 +22,8 @@ void virt_pgd_alloc(struct kvm_vm *vm)
return;
paddr = vm_phy_pages_alloc(vm, PAGES_PER_REGION,
- KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0);
+ KVM_GUEST_PAGE_TABLE_MIN_PADDR,
+ MEMSLOT(0));
memset(addr_gpa2hva(vm, paddr), 0xff, PAGES_PER_REGION * vm->page_size);
vm->pgd = paddr;
@@ -39,7 +40,8 @@ static uint64_t virt_alloc_region(struct kvm_vm *vm, int ri)
uint64_t taddr;
taddr = vm_phy_pages_alloc(vm, ri < 4 ? PAGES_PER_REGION : 1,
- KVM_GUEST_PAGE_TABLE_MIN_PADDR, 0);
+ KVM_GUEST_PAGE_TABLE_MIN_PADDR,
+ MEMSLOT(0));
memset(addr_gpa2hva(vm, taddr), 0xff, PAGES_PER_REGION * vm->page_size);
return (taddr & REGION_ENTRY_ORIGIN)
diff --git a/tools/testing/selftests/kvm/lib/x86_64/vmx.c b/tools/testing/selftests/kvm/lib/x86_64/vmx.c
index d089d8b850b5..7ea9455b3e71 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/vmx.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/vmx.c
@@ -505,7 +505,7 @@ void nested_map(struct vmx_pages *vmx, struct kvm_vm *vm,
* physical pages in VM.
*/
void nested_map_memslot(struct vmx_pages *vmx, struct kvm_vm *vm,
- uint32_t memslot)
+ struct kvm_memslot memslot)
{
sparsebit_idx_t i, last;
struct userspace_mem_region *region =
@@ -526,7 +526,7 @@ void nested_map_memslot(struct vmx_pages *vmx, struct kvm_vm *vm,
}
void prepare_eptp(struct vmx_pages *vmx, struct kvm_vm *vm,
- uint32_t eptp_memslot)
+ struct kvm_memslot eptp_memslot)
{
vmx->eptp = (void *)vm_vaddr_alloc_page(vm);
vmx->eptp_hva = addr_gva2hva(vm, (uintptr_t)vmx->eptp);
diff --git a/tools/testing/selftests/kvm/max_guest_memory_test.c b/tools/testing/selftests/kvm/max_guest_memory_test.c
index 3875c4b23a04..a946a90604ea 100644
--- a/tools/testing/selftests/kvm/max_guest_memory_test.c
+++ b/tools/testing/selftests/kvm/max_guest_memory_test.c
@@ -239,7 +239,8 @@ int main(int argc, char *argv[])
if ((gpa - start_gpa) >= max_mem)
break;
- vm_set_user_memory_region(vm, slot, 0, gpa, slot_size, mem);
+ vm_set_user_memory_region(vm, MEMSLOT(slot),
+ 0, gpa, slot_size, mem);
#ifdef __x86_64__
/* Identity map memory in the guest using 1gb pages. */
@@ -277,7 +278,8 @@ int main(int argc, char *argv[])
* references to the removed regions.
*/
for (slot = (slot - 1) & ~1ull; slot >= first_slot; slot -= 2)
- vm_set_user_memory_region(vm, slot, 0, 0, 0, NULL);
+ vm_set_user_memory_region(vm, MEMSLOT(slot),
+ 0, 0, 0, NULL);
munmap(mem, slot_size / 2);
diff --git a/tools/testing/selftests/kvm/memslot_modification_stress_test.c b/tools/testing/selftests/kvm/memslot_modification_stress_test.c
index 1410d0a9141a..465f24ac7b88 100644
--- a/tools/testing/selftests/kvm/memslot_modification_stress_test.c
+++ b/tools/testing/selftests/kvm/memslot_modification_stress_test.c
@@ -26,7 +26,7 @@
#include "test_util.h"
#include "guest_modes.h"
-#define DUMMY_MEMSLOT_INDEX 7
+#define DUMMY_MEMSLOT MEMSLOT(7)
#define DEFAULT_MEMSLOT_MODIFICATION_ITERATIONS 10
@@ -81,9 +81,9 @@ static void add_remove_memslot(struct kvm_vm *vm, useconds_t delay,
for (i = 0; i < nr_modifications; i++) {
usleep(delay);
vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, gpa,
- DUMMY_MEMSLOT_INDEX, pages, 0);
+ DUMMY_MEMSLOT, pages, 0);
- vm_mem_region_delete(vm, DUMMY_MEMSLOT_INDEX);
+ vm_mem_region_delete(vm, DUMMY_MEMSLOT);
}
}
diff --git a/tools/testing/selftests/kvm/memslot_perf_test.c b/tools/testing/selftests/kvm/memslot_perf_test.c
index 1727f75e0c2c..a18e3a7a19c8 100644
--- a/tools/testing/selftests/kvm/memslot_perf_test.c
+++ b/tools/testing/selftests/kvm/memslot_perf_test.c
@@ -293,7 +293,7 @@ static bool prepare_vm(struct vm_data *data, int nslots, uint64_t *maxslots,
npages += rempages;
vm_userspace_mem_region_add(data->vm, VM_MEM_SRC_ANONYMOUS,
- guest_addr, slot, npages,
+ guest_addr, MEMSLOT(slot), npages,
0);
guest_addr += npages * 4096;
}
@@ -308,7 +308,7 @@ static bool prepare_vm(struct vm_data *data, int nslots, uint64_t *maxslots,
npages += rempages;
gpa = vm_phy_pages_alloc(data->vm, npages, guest_addr,
- slot + 1);
+ MEMSLOT(slot + 1));
TEST_ASSERT(gpa == guest_addr,
"vm_phy_pages_alloc() failed\n");
@@ -586,9 +586,12 @@ static void test_memslot_move_loop(struct vm_data *data, struct sync_area *sync)
uint64_t movesrcgpa;
movesrcgpa = vm_slot2gpa(data, data->nslots - 1);
- vm_mem_region_move(data->vm, data->nslots - 1 + 1,
+ vm_mem_region_move(data->vm,
+ MEMSLOT(data->nslots - 1 + 1),
MEM_TEST_MOVE_GPA_DEST);
- vm_mem_region_move(data->vm, data->nslots - 1 + 1, movesrcgpa);
+ vm_mem_region_move(data->vm,
+ MEMSLOT(data->nslots - 1 + 1),
+ movesrcgpa);
}
static void test_memslot_do_unmap(struct vm_data *data,
diff --git a/tools/testing/selftests/kvm/set_memory_region_test.c b/tools/testing/selftests/kvm/set_memory_region_test.c
index 73bc297dabe6..aca694607165 100644
--- a/tools/testing/selftests/kvm/set_memory_region_test.c
+++ b/tools/testing/selftests/kvm/set_memory_region_test.c
@@ -31,7 +31,7 @@
* Somewhat arbitrary location and slot, intended to not overlap anything.
*/
#define MEM_REGION_GPA 0xc0000000
-#define MEM_REGION_SLOT 10
+#define MEM_REGION_SLOT MEMSLOT(10)
static const uint64_t MMIO_VAL = 0xbeefull;
@@ -282,7 +282,7 @@ static void test_delete_memory_region(void)
* Delete the primary memslot. This should cause an emulation error or
* shutdown due to the page tables getting nuked.
*/
- vm_mem_region_delete(vm, 0);
+ vm_mem_region_delete(vm, MEMSLOT(0));
pthread_join(vcpu_thread, NULL);
@@ -367,7 +367,7 @@ static void test_add_max_memory_regions(void)
mem_aligned = (void *)(((size_t) mem + alignment - 1) & ~(alignment - 1));
for (slot = 0; slot < max_mem_slots; slot++)
- vm_set_user_memory_region(vm, slot, 0,
+ vm_set_user_memory_region(vm, MEMSLOT(slot), 0,
((uint64_t)slot * MEM_REGION_SIZE),
MEM_REGION_SIZE,
mem_aligned + (uint64_t)slot * MEM_REGION_SIZE);
@@ -377,7 +377,7 @@ static void test_add_max_memory_regions(void)
MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
TEST_ASSERT(mem_extra != MAP_FAILED, "Failed to mmap() host");
- ret = __vm_set_user_memory_region(vm, max_mem_slots, 0,
+ ret = __vm_set_user_memory_region(vm, MEMSLOT(max_mem_slots), 0,
(uint64_t)max_mem_slots * MEM_REGION_SIZE,
MEM_REGION_SIZE, mem_extra);
TEST_ASSERT(ret == -1 && errno == EINVAL,
diff --git a/tools/testing/selftests/kvm/steal_time.c b/tools/testing/selftests/kvm/steal_time.c
index 62f2eb9ee3d5..a3d5c0407506 100644
--- a/tools/testing/selftests/kvm/steal_time.c
+++ b/tools/testing/selftests/kvm/steal_time.c
@@ -276,7 +276,8 @@ int main(int ac, char **av)
/* Create a one VCPU guest and an identity mapped memslot for the steal time structure */
vm = vm_create_default(0, 0, guest_code);
gpages = vm_calc_num_guest_pages(VM_MODE_DEFAULT, STEAL_TIME_SIZE * NR_VCPUS);
- vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, ST_GPA_BASE, 1, gpages, 0);
+ vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, ST_GPA_BASE,
+ MEMSLOT(1), gpages, 0);
virt_map(vm, ST_GPA_BASE, ST_GPA_BASE, gpages);
ucall_init(vm, NULL);
diff --git a/tools/testing/selftests/kvm/x86_64/emulator_error_test.c b/tools/testing/selftests/kvm/x86_64/emulator_error_test.c
index f070ff0224fa..fe2d78313878 100644
--- a/tools/testing/selftests/kvm/x86_64/emulator_error_test.c
+++ b/tools/testing/selftests/kvm/x86_64/emulator_error_test.c
@@ -17,7 +17,7 @@
#define MEM_REGION_GVA 0x0000123456789000
#define MEM_REGION_GPA 0x0000000700000000
-#define MEM_REGION_SLOT 10
+#define MEM_REGION_SLOT MEMSLOT(10)
#define MEM_REGION_SIZE PAGE_SIZE
static void guest_code(void)
diff --git a/tools/testing/selftests/kvm/x86_64/mmu_role_test.c b/tools/testing/selftests/kvm/x86_64/mmu_role_test.c
index da2325fcad87..9ff6bc4c278d 100644
--- a/tools/testing/selftests/kvm/x86_64/mmu_role_test.c
+++ b/tools/testing/selftests/kvm/x86_64/mmu_role_test.c
@@ -66,7 +66,8 @@ static void mmu_role_test(u32 *cpuid_reg, u32 evil_cpuid_val)
* KVM x86 zaps all shadow pages on memslot deletion.
*/
vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
- MMIO_GPA << 1, 10, 1, 0);
+ MMIO_GPA << 1,
+ MEMSLOT(10), 1, 0);
/* Set up a #PF handler to eat the RSVD #PF and signal all done! */
vm_init_descriptor_tables(vm);
diff --git a/tools/testing/selftests/kvm/x86_64/smm_test.c b/tools/testing/selftests/kvm/x86_64/smm_test.c
index a626d40fdb48..3de2958106f7 100644
--- a/tools/testing/selftests/kvm/x86_64/smm_test.c
+++ b/tools/testing/selftests/kvm/x86_64/smm_test.c
@@ -24,7 +24,7 @@
#define PAGE_SIZE 4096
#define SMRAM_SIZE 65536
-#define SMRAM_MEMSLOT ((1 << 16) | 1)
+#define SMRAM_MEMSLOT MEMSLOT((1 << 16) | 1)
#define SMRAM_PAGES (SMRAM_SIZE / PAGE_SIZE)
#define SMRAM_GPA 0x1000000
#define SMRAM_STAGE 0xfe
diff --git a/tools/testing/selftests/kvm/x86_64/vmx_dirty_log_test.c b/tools/testing/selftests/kvm/x86_64/vmx_dirty_log_test.c
index 68f26a8b4f42..9adba67c1e1c 100644
--- a/tools/testing/selftests/kvm/x86_64/vmx_dirty_log_test.c
+++ b/tools/testing/selftests/kvm/x86_64/vmx_dirty_log_test.c
@@ -20,7 +20,7 @@
#define VCPU_ID 1
/* The memory slot index to track dirty pages */
-#define TEST_MEM_SLOT_INDEX 1
+#define TEST_MEMSLOT MEMSLOT(1)
#define TEST_MEM_PAGES 3
/* L1 guest test virtual memory offset */
@@ -89,7 +89,7 @@ int main(int argc, char *argv[])
/* Add an extra memory slot for testing dirty logging */
vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
GUEST_TEST_MEM,
- TEST_MEM_SLOT_INDEX,
+ TEST_MEMSLOT,
TEST_MEM_PAGES,
KVM_MEM_LOG_DIRTY_PAGES);
@@ -106,8 +106,8 @@ int main(int argc, char *argv[])
* Note that prepare_eptp should be called only L1's GPA map is done,
* meaning after the last call to virt_map.
*/
- prepare_eptp(vmx, vm, 0);
- nested_map_memslot(vmx, vm, 0);
+ prepare_eptp(vmx, vm, MEMSLOT(0));
+ nested_map_memslot(vmx, vm, MEMSLOT(0));
nested_map(vmx, vm, NESTED_TEST_MEM1, GUEST_TEST_MEM, 4096);
nested_map(vmx, vm, NESTED_TEST_MEM2, GUEST_TEST_MEM, 4096);
@@ -132,7 +132,7 @@ int main(int argc, char *argv[])
* The nested guest wrote at offset 0x1000 in the memslot, but the
* dirty bitmap must be filled in according to L1 GPA, not L2.
*/
- kvm_vm_get_dirty_log(vm, TEST_MEM_SLOT_INDEX, bmap);
+ kvm_vm_get_dirty_log(vm, TEST_MEMSLOT, bmap);
if (uc.args[1]) {
TEST_ASSERT(test_bit(0, bmap), "Page 0 incorrectly reported clean\n");
TEST_ASSERT(host_test_mem[0] == 1, "Page 0 not written by guest\n");
diff --git a/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c b/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c
index d9d9d1deec45..37f173a0f189 100644
--- a/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c
+++ b/tools/testing/selftests/kvm/x86_64/xen_shinfo_test.c
@@ -22,11 +22,11 @@
#define SHINFO_REGION_GVA 0xc0000000ULL
#define SHINFO_REGION_GPA 0xc0000000ULL
-#define SHINFO_REGION_SLOT 10
+#define SHINFO_REGION_SLOT MEMSLOT(10)
#define PAGE_SIZE 4096
#define DUMMY_REGION_GPA (SHINFO_REGION_GPA + (2 * PAGE_SIZE))
-#define DUMMY_REGION_SLOT 11
+#define DUMMY_REGION_SLOT MEMSLOT(11)
#define SHINFO_ADDR (SHINFO_REGION_GPA)
#define PVTIME_ADDR (SHINFO_REGION_GPA + PAGE_SIZE)
diff --git a/tools/testing/selftests/kvm/x86_64/xen_vmcall_test.c b/tools/testing/selftests/kvm/x86_64/xen_vmcall_test.c
index adc94452b57c..edf5f5600766 100644
--- a/tools/testing/selftests/kvm/x86_64/xen_vmcall_test.c
+++ b/tools/testing/selftests/kvm/x86_64/xen_vmcall_test.c
@@ -14,7 +14,7 @@
#define VCPU_ID 5
#define HCALL_REGION_GPA 0xc0000000ULL
-#define HCALL_REGION_SLOT 10
+#define HCALL_REGION_SLOT MEMSLOT(10)
#define PAGE_SIZE 4096
static struct kvm_vm *vm;
--
2.35.1.616.g0bdcbb4464-goog
Currently, the vm_vaddr_alloc functions implicitly allocate in memslot
0. Add an argument to allow allocations in any memslot. This will be
used in future commits to allow loading code in a test memslot with
different backing memory types.
No functional change intended.
Signed-off-by: Ben Gardon <[email protected]>
---
.../selftests/kvm/include/kvm_util_base.h | 8 ++++---
.../selftests/kvm/lib/aarch64/processor.c | 5 +++--
tools/testing/selftests/kvm/lib/elf.c | 3 ++-
tools/testing/selftests/kvm/lib/kvm_util.c | 17 +++++++-------
.../selftests/kvm/lib/riscv/processor.c | 3 ++-
.../selftests/kvm/lib/s390x/processor.c | 3 ++-
.../selftests/kvm/lib/x86_64/processor.c | 11 +++++-----
tools/testing/selftests/kvm/lib/x86_64/svm.c | 8 +++----
tools/testing/selftests/kvm/lib/x86_64/vmx.c | 22 +++++++++----------
tools/testing/selftests/kvm/x86_64/amx_test.c | 6 ++---
.../testing/selftests/kvm/x86_64/cpuid_test.c | 2 +-
.../selftests/kvm/x86_64/hyperv_clock.c | 2 +-
.../selftests/kvm/x86_64/hyperv_features.c | 6 ++---
.../selftests/kvm/x86_64/kvm_clock_test.c | 2 +-
.../selftests/kvm/x86_64/xapic_ipi_test.c | 2 +-
15 files changed, 54 insertions(+), 46 deletions(-)
diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 69a6b5e509ab..f70dfa3e1202 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -181,9 +181,11 @@ void vm_mem_region_move(struct kvm_vm *vm, struct kvm_memslot memslot,
uint64_t new_gpa);
void vm_mem_region_delete(struct kvm_vm *vm, struct kvm_memslot memslot);
void vm_vcpu_add(struct kvm_vm *vm, uint32_t vcpuid);
-vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min);
-vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages);
-vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm);
+vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
+ struct kvm_memslot memslot);
+vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages,
+ struct kvm_memslot memslot);
+vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm, struct kvm_memslot memslot);
void virt_map(struct kvm_vm *vm, uint64_t vaddr, uint64_t paddr,
unsigned int npages);
diff --git a/tools/testing/selftests/kvm/lib/aarch64/processor.c b/tools/testing/selftests/kvm/lib/aarch64/processor.c
index a9e505e351e0..163746259d93 100644
--- a/tools/testing/selftests/kvm/lib/aarch64/processor.c
+++ b/tools/testing/selftests/kvm/lib/aarch64/processor.c
@@ -322,7 +322,8 @@ void aarch64_vcpu_add_default(struct kvm_vm *vm, uint32_t vcpuid,
DEFAULT_STACK_PGS * vm->page_size :
vm->page_size;
uint64_t stack_vaddr = vm_vaddr_alloc(vm, stack_size,
- DEFAULT_ARM64_GUEST_STACK_VADDR_MIN);
+ DEFAULT_ARM64_GUEST_STACK_VADDR_MIN,
+ MEMSLOT(0));
vm_vcpu_add(vm, vcpuid);
aarch64_vcpu_setup(vm, vcpuid, init);
@@ -426,7 +427,7 @@ void route_exception(struct ex_regs *regs, int vector)
void vm_init_descriptor_tables(struct kvm_vm *vm)
{
vm->handlers = vm_vaddr_alloc(vm, sizeof(struct handlers),
- vm->page_size);
+ vm->page_size, MEMSLOT(0));
*(vm_vaddr_t *)addr_gva2hva(vm, (vm_vaddr_t)(&exception_handlers)) = vm->handlers;
}
diff --git a/tools/testing/selftests/kvm/lib/elf.c b/tools/testing/selftests/kvm/lib/elf.c
index 13e8e3dcf984..88d03cb80423 100644
--- a/tools/testing/selftests/kvm/lib/elf.c
+++ b/tools/testing/selftests/kvm/lib/elf.c
@@ -162,7 +162,8 @@ void kvm_vm_elf_load(struct kvm_vm *vm, const char *filename)
seg_vend |= vm->page_size - 1;
size_t seg_size = seg_vend - seg_vstart + 1;
- vm_vaddr_t vaddr = vm_vaddr_alloc(vm, seg_size, seg_vstart);
+ vm_vaddr_t vaddr = vm_vaddr_alloc(vm, seg_size, seg_vstart,
+ MEMSLOT(0));
TEST_ASSERT(vaddr == seg_vstart, "Unable to allocate "
"virtual memory for segment at requested min addr,\n"
" segment idx: %u\n"
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 97d1badaba8b..04abfc7e6b5c 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1340,8 +1340,7 @@ static vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz,
* vm - Virtual Machine
* sz - Size in bytes
* vaddr_min - Minimum starting virtual address
- * data_memslot - Memory region slot for data pages
- * pgd_memslot - Memory region slot for new virtual translation tables
+ * memslot - Memory region slot for data pages
*
* Output Args: None
*
@@ -1354,14 +1353,15 @@ static vm_vaddr_t vm_vaddr_unused_gap(struct kvm_vm *vm, size_t sz,
* a unique set of pages, with the minimum real allocation being at least
* a page.
*/
-vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min)
+vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min,
+ struct kvm_memslot memslot)
{
uint64_t pages = (sz >> vm->page_shift) + ((sz % vm->page_size) != 0);
virt_pgd_alloc(vm);
vm_paddr_t paddr = vm_phy_pages_alloc(vm, pages,
KVM_UTIL_MIN_PFN * vm->page_size,
- MEMSLOT(0));
+ memslot);
/*
* Find an unused range of virtual page addresses of at least
@@ -1396,9 +1396,10 @@ vm_vaddr_t vm_vaddr_alloc(struct kvm_vm *vm, size_t sz, vm_vaddr_t vaddr_min)
* Allocates at least N system pages worth of bytes within the virtual address
* space of the vm.
*/
-vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages)
+vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages,
+ struct kvm_memslot memslot)
{
- return vm_vaddr_alloc(vm, nr_pages * getpagesize(), KVM_UTIL_MIN_VADDR);
+ return vm_vaddr_alloc(vm, nr_pages * getpagesize(), KVM_UTIL_MIN_VADDR, memslot);
}
/*
@@ -1415,9 +1416,9 @@ vm_vaddr_t vm_vaddr_alloc_pages(struct kvm_vm *vm, int nr_pages)
* Allocates at least one system page worth of bytes within the virtual address
* space of the vm.
*/
-vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm)
+vm_vaddr_t vm_vaddr_alloc_page(struct kvm_vm *vm, struct kvm_memslot memslot)
{
- return vm_vaddr_alloc_pages(vm, 1);
+ return vm_vaddr_alloc_pages(vm, 1, memslot);
}
/*
diff --git a/tools/testing/selftests/kvm/lib/riscv/processor.c b/tools/testing/selftests/kvm/lib/riscv/processor.c
index 7a0ff26b9f8d..9b554d6939a5 100644
--- a/tools/testing/selftests/kvm/lib/riscv/processor.c
+++ b/tools/testing/selftests/kvm/lib/riscv/processor.c
@@ -281,7 +281,8 @@ void vm_vcpu_add_default(struct kvm_vm *vm, uint32_t vcpuid, void *guest_code)
DEFAULT_STACK_PGS * vm->page_size :
vm->page_size;
unsigned long stack_vaddr = vm_vaddr_alloc(vm, stack_size,
- DEFAULT_RISCV_GUEST_STACK_VADDR_MIN);
+ DEFAULT_RISCV_GUEST_STACK_VADDR_MIN,
+ MEMSLOT(0));
unsigned long current_gp = 0;
struct kvm_mp_state mps;
diff --git a/tools/testing/selftests/kvm/lib/s390x/processor.c b/tools/testing/selftests/kvm/lib/s390x/processor.c
index 1c873a26e6de..edcba350dbef 100644
--- a/tools/testing/selftests/kvm/lib/s390x/processor.c
+++ b/tools/testing/selftests/kvm/lib/s390x/processor.c
@@ -169,7 +169,8 @@ void vm_vcpu_add_default(struct kvm_vm *vm, uint32_t vcpuid, void *guest_code)
vm->page_size);
stack_vaddr = vm_vaddr_alloc(vm, stack_size,
- DEFAULT_GUEST_STACK_VADDR_MIN);
+ DEFAULT_GUEST_STACK_VADDR_MIN,
+ MEMSLOT(0));
vm_vcpu_add(vm, vcpuid);
diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index 9f000dfb5594..afcc13655790 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -597,7 +597,7 @@ vm_paddr_t addr_gva2gpa(struct kvm_vm *vm, vm_vaddr_t gva)
static void kvm_setup_gdt(struct kvm_vm *vm, struct kvm_dtable *dt)
{
if (!vm->gdt)
- vm->gdt = vm_vaddr_alloc_page(vm);
+ vm->gdt = vm_vaddr_alloc_page(vm, MEMSLOT(0));
dt->base = vm->gdt;
dt->limit = getpagesize();
@@ -607,7 +607,7 @@ static void kvm_setup_tss_64bit(struct kvm_vm *vm, struct kvm_segment *segp,
int selector)
{
if (!vm->tss)
- vm->tss = vm_vaddr_alloc_page(vm);
+ vm->tss = vm_vaddr_alloc_page(vm, MEMSLOT(0));
memset(segp, 0, sizeof(*segp));
segp->base = vm->tss;
@@ -710,7 +710,8 @@ void vm_vcpu_add_default(struct kvm_vm *vm, uint32_t vcpuid, void *guest_code)
struct kvm_regs regs;
vm_vaddr_t stack_vaddr;
stack_vaddr = vm_vaddr_alloc(vm, DEFAULT_STACK_PGS * getpagesize(),
- DEFAULT_GUEST_STACK_VADDR_MIN);
+ DEFAULT_GUEST_STACK_VADDR_MIN,
+ MEMSLOT(0));
/* Create VCPU */
vm_vcpu_add(vm, vcpuid);
@@ -1377,8 +1378,8 @@ void vm_init_descriptor_tables(struct kvm_vm *vm)
extern void *idt_handlers;
int i;
- vm->idt = vm_vaddr_alloc_page(vm);
- vm->handlers = vm_vaddr_alloc_page(vm);
+ vm->idt = vm_vaddr_alloc_page(vm, MEMSLOT(0));
+ vm->handlers = vm_vaddr_alloc_page(vm, MEMSLOT(0));
/* Handlers have the same address in both address spaces.*/
for (i = 0; i < NUM_INTERRUPTS; i++)
set_idt_entry(vm, i, (unsigned long)(&idt_handlers)[i], 0,
diff --git a/tools/testing/selftests/kvm/lib/x86_64/svm.c b/tools/testing/selftests/kvm/lib/x86_64/svm.c
index 736ee4a23df6..6d935cc1225a 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/svm.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/svm.c
@@ -32,18 +32,18 @@ u64 rflags;
struct svm_test_data *
vcpu_alloc_svm(struct kvm_vm *vm, vm_vaddr_t *p_svm_gva)
{
- vm_vaddr_t svm_gva = vm_vaddr_alloc_page(vm);
+ vm_vaddr_t svm_gva = vm_vaddr_alloc_page(vm, MEMSLOT(0));
struct svm_test_data *svm = addr_gva2hva(vm, svm_gva);
- svm->vmcb = (void *)vm_vaddr_alloc_page(vm);
+ svm->vmcb = (void *)vm_vaddr_alloc_page(vm, MEMSLOT(0));
svm->vmcb_hva = addr_gva2hva(vm, (uintptr_t)svm->vmcb);
svm->vmcb_gpa = addr_gva2gpa(vm, (uintptr_t)svm->vmcb);
- svm->save_area = (void *)vm_vaddr_alloc_page(vm);
+ svm->save_area = (void *)vm_vaddr_alloc_page(vm, MEMSLOT(0));
svm->save_area_hva = addr_gva2hva(vm, (uintptr_t)svm->save_area);
svm->save_area_gpa = addr_gva2gpa(vm, (uintptr_t)svm->save_area);
- svm->msr = (void *)vm_vaddr_alloc_page(vm);
+ svm->msr = (void *)vm_vaddr_alloc_page(vm, MEMSLOT(0));
svm->msr_hva = addr_gva2hva(vm, (uintptr_t)svm->msr);
svm->msr_gpa = addr_gva2gpa(vm, (uintptr_t)svm->msr);
memset(svm->msr_hva, 0, getpagesize());
diff --git a/tools/testing/selftests/kvm/lib/x86_64/vmx.c b/tools/testing/selftests/kvm/lib/x86_64/vmx.c
index 7ea9455b3e71..3678969e992a 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/vmx.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/vmx.c
@@ -77,48 +77,48 @@ int vcpu_enable_evmcs(struct kvm_vm *vm, int vcpu_id)
struct vmx_pages *
vcpu_alloc_vmx(struct kvm_vm *vm, vm_vaddr_t *p_vmx_gva)
{
- vm_vaddr_t vmx_gva = vm_vaddr_alloc_page(vm);
+ vm_vaddr_t vmx_gva = vm_vaddr_alloc_page(vm, MEMSLOT(0));
struct vmx_pages *vmx = addr_gva2hva(vm, vmx_gva);
/* Setup of a region of guest memory for the vmxon region. */
- vmx->vmxon = (void *)vm_vaddr_alloc_page(vm);
+ vmx->vmxon = (void *)vm_vaddr_alloc_page(vm, MEMSLOT(0));
vmx->vmxon_hva = addr_gva2hva(vm, (uintptr_t)vmx->vmxon);
vmx->vmxon_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->vmxon);
/* Setup of a region of guest memory for a vmcs. */
- vmx->vmcs = (void *)vm_vaddr_alloc_page(vm);
+ vmx->vmcs = (void *)vm_vaddr_alloc_page(vm, MEMSLOT(0));
vmx->vmcs_hva = addr_gva2hva(vm, (uintptr_t)vmx->vmcs);
vmx->vmcs_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->vmcs);
/* Setup of a region of guest memory for the MSR bitmap. */
- vmx->msr = (void *)vm_vaddr_alloc_page(vm);
+ vmx->msr = (void *)vm_vaddr_alloc_page(vm, MEMSLOT(0));
vmx->msr_hva = addr_gva2hva(vm, (uintptr_t)vmx->msr);
vmx->msr_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->msr);
memset(vmx->msr_hva, 0, getpagesize());
/* Setup of a region of guest memory for the shadow VMCS. */
- vmx->shadow_vmcs = (void *)vm_vaddr_alloc_page(vm);
+ vmx->shadow_vmcs = (void *)vm_vaddr_alloc_page(vm, MEMSLOT(0));
vmx->shadow_vmcs_hva = addr_gva2hva(vm, (uintptr_t)vmx->shadow_vmcs);
vmx->shadow_vmcs_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->shadow_vmcs);
/* Setup of a region of guest memory for the VMREAD and VMWRITE bitmaps. */
- vmx->vmread = (void *)vm_vaddr_alloc_page(vm);
+ vmx->vmread = (void *)vm_vaddr_alloc_page(vm, MEMSLOT(0));
vmx->vmread_hva = addr_gva2hva(vm, (uintptr_t)vmx->vmread);
vmx->vmread_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->vmread);
memset(vmx->vmread_hva, 0, getpagesize());
- vmx->vmwrite = (void *)vm_vaddr_alloc_page(vm);
+ vmx->vmwrite = (void *)vm_vaddr_alloc_page(vm, MEMSLOT(0));
vmx->vmwrite_hva = addr_gva2hva(vm, (uintptr_t)vmx->vmwrite);
vmx->vmwrite_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->vmwrite);
memset(vmx->vmwrite_hva, 0, getpagesize());
/* Setup of a region of guest memory for the VP Assist page. */
- vmx->vp_assist = (void *)vm_vaddr_alloc_page(vm);
+ vmx->vp_assist = (void *)vm_vaddr_alloc_page(vm, MEMSLOT(0));
vmx->vp_assist_hva = addr_gva2hva(vm, (uintptr_t)vmx->vp_assist);
vmx->vp_assist_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->vp_assist);
/* Setup of a region of guest memory for the enlightened VMCS. */
- vmx->enlightened_vmcs = (void *)vm_vaddr_alloc_page(vm);
+ vmx->enlightened_vmcs = (void *)vm_vaddr_alloc_page(vm, MEMSLOT(0));
vmx->enlightened_vmcs_hva =
addr_gva2hva(vm, (uintptr_t)vmx->enlightened_vmcs);
vmx->enlightened_vmcs_gpa =
@@ -528,14 +528,14 @@ void nested_map_memslot(struct vmx_pages *vmx, struct kvm_vm *vm,
void prepare_eptp(struct vmx_pages *vmx, struct kvm_vm *vm,
struct kvm_memslot eptp_memslot)
{
- vmx->eptp = (void *)vm_vaddr_alloc_page(vm);
+ vmx->eptp = (void *)vm_vaddr_alloc_page(vm, MEMSLOT(0));
vmx->eptp_hva = addr_gva2hva(vm, (uintptr_t)vmx->eptp);
vmx->eptp_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->eptp);
}
void prepare_virtualize_apic_accesses(struct vmx_pages *vmx, struct kvm_vm *vm)
{
- vmx->apic_access = (void *)vm_vaddr_alloc_page(vm);
+ vmx->apic_access = (void *)vm_vaddr_alloc_page(vm, MEMSLOT(0));
vmx->apic_access_hva = addr_gva2hva(vm, (uintptr_t)vmx->apic_access);
vmx->apic_access_gpa = addr_gva2gpa(vm, (uintptr_t)vmx->apic_access);
}
diff --git a/tools/testing/selftests/kvm/x86_64/amx_test.c b/tools/testing/selftests/kvm/x86_64/amx_test.c
index 52a3ef6629e8..2c12a3bf1f62 100644
--- a/tools/testing/selftests/kvm/x86_64/amx_test.c
+++ b/tools/testing/selftests/kvm/x86_64/amx_test.c
@@ -360,15 +360,15 @@ int main(int argc, char *argv[])
vm_install_exception_handler(vm, NM_VECTOR, guest_nm_handler);
/* amx cfg for guest_code */
- amx_cfg = vm_vaddr_alloc_page(vm);
+ amx_cfg = vm_vaddr_alloc_page(vm, MEMSLOT(0));
memset(addr_gva2hva(vm, amx_cfg), 0x0, getpagesize());
/* amx tiledata for guest_code */
- tiledata = vm_vaddr_alloc_pages(vm, 2);
+ tiledata = vm_vaddr_alloc_pages(vm, 2, MEMSLOT(0));
memset(addr_gva2hva(vm, tiledata), rand() | 1, 2 * getpagesize());
/* xsave data for guest_code */
- xsavedata = vm_vaddr_alloc_pages(vm, 3);
+ xsavedata = vm_vaddr_alloc_pages(vm, 3, MEMSLOT(0));
memset(addr_gva2hva(vm, xsavedata), 0, 3 * getpagesize());
vcpu_args_set(vm, VCPU_ID, 3, amx_cfg, tiledata, xsavedata);
diff --git a/tools/testing/selftests/kvm/x86_64/cpuid_test.c b/tools/testing/selftests/kvm/x86_64/cpuid_test.c
index 16d2465c5634..d0250f32d729 100644
--- a/tools/testing/selftests/kvm/x86_64/cpuid_test.c
+++ b/tools/testing/selftests/kvm/x86_64/cpuid_test.c
@@ -145,7 +145,7 @@ static void run_vcpu(struct kvm_vm *vm, uint32_t vcpuid, int stage)
struct kvm_cpuid2 *vcpu_alloc_cpuid(struct kvm_vm *vm, vm_vaddr_t *p_gva, struct kvm_cpuid2 *cpuid)
{
int size = sizeof(*cpuid) + cpuid->nent * sizeof(cpuid->entries[0]);
- vm_vaddr_t gva = vm_vaddr_alloc(vm, size, KVM_UTIL_MIN_VADDR);
+ vm_vaddr_t gva = vm_vaddr_alloc(vm, size, KVM_UTIL_MIN_VADDR, MEMSLOT(0));
struct kvm_cpuid2 *guest_cpuids = addr_gva2hva(vm, gva);
memcpy(guest_cpuids, cpuid, size);
diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_clock.c b/tools/testing/selftests/kvm/x86_64/hyperv_clock.c
index e0b2bb1339b1..8cc31ce181a0 100644
--- a/tools/testing/selftests/kvm/x86_64/hyperv_clock.c
+++ b/tools/testing/selftests/kvm/x86_64/hyperv_clock.c
@@ -214,7 +214,7 @@ int main(void)
vcpu_set_hv_cpuid(vm, VCPU_ID);
- tsc_page_gva = vm_vaddr_alloc_page(vm);
+ tsc_page_gva = vm_vaddr_alloc_page(vm, MEMSLOT(0));
memset(addr_gva2hva(vm, tsc_page_gva), 0x0, getpagesize());
TEST_ASSERT((addr_gva2gpa(vm, tsc_page_gva) & (getpagesize() - 1)) == 0,
"TSC page has to be page aligned\n");
diff --git a/tools/testing/selftests/kvm/x86_64/hyperv_features.c b/tools/testing/selftests/kvm/x86_64/hyperv_features.c
index 672915ce73d8..64cbb2cabcda 100644
--- a/tools/testing/selftests/kvm/x86_64/hyperv_features.c
+++ b/tools/testing/selftests/kvm/x86_64/hyperv_features.c
@@ -191,7 +191,7 @@ static void guest_test_msrs_access(void)
while (true) {
vm = vm_create_default(VCPU_ID, 0, guest_msr);
- msr_gva = vm_vaddr_alloc_page(vm);
+ msr_gva = vm_vaddr_alloc_page(vm, MEMSLOT(0));
memset(addr_gva2hva(vm, msr_gva), 0x0, getpagesize());
msr = addr_gva2hva(vm, msr_gva);
@@ -534,11 +534,11 @@ static void guest_test_hcalls_access(void)
vm_install_exception_handler(vm, UD_VECTOR, guest_ud_handler);
/* Hypercall input/output */
- hcall_page = vm_vaddr_alloc_pages(vm, 2);
+ hcall_page = vm_vaddr_alloc_pages(vm, 2, MEMSLOT(0));
hcall = addr_gva2hva(vm, hcall_page);
memset(addr_gva2hva(vm, hcall_page), 0x0, 2 * getpagesize());
- hcall_params = vm_vaddr_alloc_page(vm);
+ hcall_params = vm_vaddr_alloc_page(vm, MEMSLOT(0));
memset(addr_gva2hva(vm, hcall_params), 0x0, getpagesize());
vcpu_args_set(vm, VCPU_ID, 2, addr_gva2gpa(vm, hcall_page), hcall_params);
diff --git a/tools/testing/selftests/kvm/x86_64/kvm_clock_test.c b/tools/testing/selftests/kvm/x86_64/kvm_clock_test.c
index 97731454f3f3..c0d3bc5a1e7d 100644
--- a/tools/testing/selftests/kvm/x86_64/kvm_clock_test.c
+++ b/tools/testing/selftests/kvm/x86_64/kvm_clock_test.c
@@ -194,7 +194,7 @@ int main(void)
vm = vm_create_default(VCPU_ID, 0, guest_main);
- pvti_gva = vm_vaddr_alloc(vm, getpagesize(), 0x10000);
+ pvti_gva = vm_vaddr_alloc(vm, getpagesize(), 0x10000, MEMSLOT(0));
pvti_gpa = addr_gva2gpa(vm, pvti_gva);
vcpu_args_set(vm, VCPU_ID, 2, pvti_gpa, pvti_gva);
diff --git a/tools/testing/selftests/kvm/x86_64/xapic_ipi_test.c b/tools/testing/selftests/kvm/x86_64/xapic_ipi_test.c
index afbbc40df884..ffb92f304302 100644
--- a/tools/testing/selftests/kvm/x86_64/xapic_ipi_test.c
+++ b/tools/testing/selftests/kvm/x86_64/xapic_ipi_test.c
@@ -427,7 +427,7 @@ int main(int argc, char *argv[])
vm_vcpu_add_default(vm, SENDER_VCPU_ID, sender_guest_code);
- test_data_page_vaddr = vm_vaddr_alloc_page(vm);
+ test_data_page_vaddr = vm_vaddr_alloc_page(vm, MEMSLOT(0));
data =
(struct test_data_page *)addr_gva2hva(vm, test_data_page_vaddr);
memset(data, 0, sizeof(*data));
--
2.35.1.616.g0bdcbb4464-goog
The braces around the KVM_CAP_XSAVE2 block also surround the
KVM_CAP_PMU_CAPABILITY block, likely the result of a merge issue. Simply
move the curly brace back to where it belongs.
Fixes: ba7bb663f5547 ("KVM: x86: Provide per VM capability for disabling PMU virtualization")
Signed-off-by: Ben Gardon <[email protected]>
---
arch/x86/kvm/x86.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 73df90a6932b..74351cbb9b5b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4352,10 +4352,10 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
if (r < sizeof(struct kvm_xsave))
r = sizeof(struct kvm_xsave);
break;
+ }
case KVM_CAP_PMU_CAPABILITY:
r = enable_pmu ? KVM_CAP_PMU_VALID_MASK : 0;
break;
- }
case KVM_CAP_DISABLE_QUIRKS2:
r = KVM_X86_VALID_QUIRKS;
break;
--
2.35.1.616.g0bdcbb4464-goog
On Thu, Mar 10, 2022 at 10:59 AM David Dunn <[email protected]> wrote:
>
> Ben,
>
> Thanks for adding this test. Nit below.
>
> Reviewed-by: David Dunn <[email protected]>
>
> On Thu, Mar 10, 2022 at 8:46 AM Ben Gardon <[email protected]> wrote:
>>
>>
>> + /* Give recovery thread time to run */
>> + sleep(3);
>>
> Is there any way to make this sleep be based on constants which control the recovery thread? Looking at the parameters below, this seems excessive. The recovery period is 2ms with a 1/1=100% recovery ratio. So this is padded out 1000x. Maybe it doesn't matter because normally this test runs in parallel with other tests, but it does seem like a pretty large hardcoded sleep.
Woops, I meant to make the recovery period 2 seconds, which would also
be preposterously large. We absolutely can and should tighten that up.
100ms recovery period and 150ms sleep would probably work perfectly.
I'll make that change if/when I send out a v2.
>
> Dave
Add kvm_util library functions to read KVM stats through the binary
stats interface and then dump them to stdout when running the binary
stats test. Subsequent commits will extend the kvm_util code and use it
to make assertions in a test for NX hugepages.
CC: Jing Zhang <[email protected]>
Signed-off-by: Ben Gardon <[email protected]>
---
.../selftests/kvm/include/kvm_util_base.h | 1 +
.../selftests/kvm/kvm_binary_stats_test.c | 3 +
tools/testing/selftests/kvm/lib/kvm_util.c | 143 ++++++++++++++++++
3 files changed, 147 insertions(+)
diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 92cef0ffb19e..c5f4a67772cb 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -400,6 +400,7 @@ void assert_on_unhandled_exception(struct kvm_vm *vm, uint32_t vcpuid);
int vm_get_stats_fd(struct kvm_vm *vm);
int vcpu_get_stats_fd(struct kvm_vm *vm, uint32_t vcpuid);
+void dump_vm_stats(struct kvm_vm *vm);
uint32_t guest_get_vcpuid(void);
diff --git a/tools/testing/selftests/kvm/kvm_binary_stats_test.c b/tools/testing/selftests/kvm/kvm_binary_stats_test.c
index 17f65d514915..afc4701ce8dd 100644
--- a/tools/testing/selftests/kvm/kvm_binary_stats_test.c
+++ b/tools/testing/selftests/kvm/kvm_binary_stats_test.c
@@ -174,6 +174,9 @@ static void vm_stats_test(struct kvm_vm *vm)
stats_test(stats_fd);
close(stats_fd);
TEST_ASSERT(fcntl(stats_fd, F_GETFD) == -1, "Stats fd not freed");
+
+ /* Dump VM stats */
+ dump_vm_stats(vm);
}
static void vcpu_stats_test(struct kvm_vm *vm, int vcpu_id)
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 1665a220abcb..4d21c3b46780 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -2556,3 +2556,146 @@ int vcpu_get_stats_fd(struct kvm_vm *vm, uint32_t vcpuid)
return ioctl(vcpu->fd, KVM_GET_STATS_FD, NULL);
}
+
+/* Caller is responsible for freeing the returned kvm_stats_header. */
+static struct kvm_stats_header *read_vm_stats_header(int stats_fd)
+{
+ struct kvm_stats_header *header;
+ ssize_t ret;
+
+ /* Read kvm stats header */
+ header = malloc(sizeof(*header));
+ TEST_ASSERT(header, "Allocate memory for stats header");
+
+ ret = read(stats_fd, header, sizeof(*header));
+ TEST_ASSERT(ret == sizeof(*header), "Read stats header");
+
+ return header;
+}
+
+static void dump_header(int stats_fd, struct kvm_stats_header *header)
+{
+ ssize_t ret;
+ char *id;
+
+ printf("flags: %u\n", header->flags);
+ printf("name size: %u\n", header->name_size);
+ printf("num_desc: %u\n", header->num_desc);
+ printf("id_offset: %u\n", header->id_offset);
+ printf("desc_offset: %u\n", header->desc_offset);
+ printf("data_offset: %u\n", header->data_offset);
+
+ /* Read kvm stats id string */
+ id = malloc(header->name_size);
+ TEST_ASSERT(id, "Allocate memory for id string");
+ ret = pread(stats_fd, id, header->name_size, header->id_offset);
+ TEST_ASSERT(ret == header->name_size, "Read id string");
+
+ printf("id: %s\n", id);
+
+ free(id);
+}
+
+static ssize_t stats_desc_size(struct kvm_stats_header *header)
+{
+ return sizeof(struct kvm_stats_desc) + header->name_size;
+}
+
+/* Caller is responsible for freeing the returned kvm_stats_desc. */
+static struct kvm_stats_desc *read_vm_stats_desc(int stats_fd,
+ struct kvm_stats_header *header)
+{
+ struct kvm_stats_desc *stats_desc;
+ size_t size_desc;
+ ssize_t ret;
+
+ size_desc = header->num_desc * stats_desc_size(header);
+
+ /* Allocate memory for stats descriptors */
+ stats_desc = malloc(size_desc);
+ TEST_ASSERT(stats_desc, "Allocate memory for stats descriptors");
+
+ /* Read kvm stats descriptors */
+ ret = pread(stats_fd, stats_desc, size_desc, header->desc_offset);
+ TEST_ASSERT(ret == size_desc, "Read KVM stats descriptors");
+
+ return stats_desc;
+}
+
+/* Caller is responsible for freeing the memory *data. */
+static int read_stat_data(int stats_fd, struct kvm_stats_header *header,
+ struct kvm_stats_desc *desc, uint64_t **data)
+{
+ u64 *stats_data;
+ ssize_t ret;
+
+ stats_data = malloc(desc->size * sizeof(*stats_data));
+
+ ret = pread(stats_fd, stats_data, desc->size * sizeof(*stats_data),
+ header->data_offset + desc->offset);
+
+ /* ret is in bytes. */
+ ret = ret / sizeof(*stats_data);
+
+ TEST_ASSERT(ret == desc->size,
+ "Read data of KVM stats: %s", desc->name);
+
+ *data = stats_data;
+
+ return ret;
+}
+
+static void dump_stat(int stats_fd, struct kvm_stats_header *header,
+ struct kvm_stats_desc *desc)
+{
+ u64 *stats_data;
+ ssize_t ret;
+ int i;
+
+ printf("\tflags: %u\n", desc->flags);
+ printf("\texponent: %d\n", desc->exponent);
+ printf("\tsize: %u\n", desc->size);
+ printf("\toffset: %u\n", desc->offset);
+ printf("\tbucket_size: %u\n", desc->bucket_size);
+ printf("\tname: %s\n", (char *)&desc->name);
+
+ ret = read_stat_data(stats_fd, header, desc, &stats_data);
+
+ printf("\tdata: %lu", *stats_data);
+ for (i = 1; i < ret; i++)
+ printf(", %lu", *(stats_data + i));
+ printf("\n\n");
+
+ free(stats_data);
+}
+
+void dump_vm_stats(struct kvm_vm *vm)
+{
+ struct kvm_stats_desc *stats_desc;
+ struct kvm_stats_header *header;
+ struct kvm_stats_desc *desc;
+ size_t size_desc;
+ int stats_fd;
+ int i;
+
+ stats_fd = vm_get_stats_fd(vm);
+
+ header = read_vm_stats_header(stats_fd);
+ dump_header(stats_fd, header);
+
+ stats_desc = read_vm_stats_desc(stats_fd, header);
+
+ size_desc = stats_desc_size(header);
+
+ /* Read kvm stats data one by one */
+ for (i = 0; i < header->num_desc; ++i) {
+ desc = (void *)stats_desc + (i * size_desc);
+ dump_stat(stats_fd, header, desc);
+ }
+
+ free(stats_desc);
+ free(header);
+
+ close(stats_fd);
+}
+
--
2.35.1.616.g0bdcbb4464-goog
Okay, I'll hold off on the memslot refactor in v2, but if folks have
feedback on the other aspects of the v1 patch series, I'd appreciate
it.
If not, I'll try to get a v2 sent out.
I think the commits adding utility functions for the binary stats
interface (currently part of the binary stats test) could be queued
separately from the rest of this series, and they will be helpful for
other folks working on new selftests.
On Thu, Mar 10, 2022 at 8:19 PM Sean Christopherson <[email protected]> wrote:
>
> On Thu, Mar 10, 2022, Ben Gardon wrote:
> > Those patches are a lot of churn, but at least to me, they make the
> > code much more readable. Currently there are many functions which just
> > pass along 0 for the memslot, and often have multiple other numerical
> > arguments, which makes it hard to understand what the function is
> > doing.
>
> Yeah, my solution for that was to rip out all the params. E.g. the most used
> function I ended up with is
>
> static inline struct kvm_vm *vm_create_with_one_vcpu(struct kvm_vcpu **vcpu,
> void *guest_code)
> {
> return __vm_create_with_one_vcpu(vcpu, 0, guest_code);
> }
>
> and then the usage is
>
> vm = vm_create_with_one_vcpu(&vcpu, guest_main);
>
> supp_cpuid = kvm_get_supported_cpuid();
> cpuid2 = vcpu_get_cpuid(vcpu);
>
> My overarching complaint with the selftests is that they make the hard things hard,
> and the easy things harder. If a test wants to be backed by hugepages, it shouldn't
> have to manually specify a memslot.
>
> Let me post my selftests rework as RFC (_very_ RFC at this point). I was hoping to
> do more than compile test before posting anything, but it's going to be multiple
> weeks before I'll get back to it. Hopefully it'll start a discussion on actually
> rewriting the framework so that writing new tests is less painful, and so that every
> new thing that comes along doesn't require poking at 50 different tests.