This series introduces a new KVM selftest (mem_slot_test) whose goal
is to verify that memory slots can be added up to the maximum allowed.
Adding one slot beyond the limit is then attempted, which should fail
with an error.
Patch 01 is needed so that the VM fd can be accessed from the
test code (for the ioctl call that attempts to add an extra slot).
I ran the test successfully on x86_64, aarch64, and s390x, which
is why it is enabled to build on those arches.
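In short, the core of the test boils down to the sketch below (taken
from patch 02 in this series; variable declarations, the mmap() of the
host backing for the extra slot, and cleanup are abbreviated here):

    max_mem_slots = kvm_check_cap(KVM_CAP_NR_MEMSLOTS);
    vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);

    /* Fill every slot up to the advertised limit. */
    for (slot = 0; slot < max_mem_slots; slot++) {
        vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
                                    guest_addr, slot, mem_reg_npages, 0);
        guest_addr += mem_reg_size;
    }

    /* Adding one more slot must be rejected with EINVAL. */
    ret = ioctl(vm_get_fd(vm), KVM_SET_USER_MEMORY_REGION,
                &(struct kvm_userspace_memory_region) {slot, 0, guest_addr,
                                                       mem_reg_size, (uint64_t) mem});
    TEST_ASSERT(ret == -1 && errno == EINVAL,
                "Adding one more memory slot should fail with EINVAL");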
- Changelog -
v4 -> v5:
- Initialize the guest_addr and mem_reg_size variables on definition
[krish.sadhukhan]
v3 -> v4:
- Discarded mem_reg_flags variable. Simply using 0 instead [drjones]
- Discarded kvm_region pointer. Instead passing a compound literal in
the ioctl [drjones]
- All variables are declared in the declaration block [drjones]
v2 -> v3:
- Keep alphabetical order of .gitignore and Makefile [drjones]
- Use memory region flags equals to zero [drjones]
- Changed mmap() assert from 'mem != NULL' to 'mem != MAP_FAILED' [drjones]
- kvm_region is declared alongside the other variables and malloc()'ed
later [drjones]
- Combined two asserts into a single 'ret == -1 && errno == EINVAL'
[drjones]
v1 -> v2:
- Rebased to queue
- vm_get_fd() returns int instead of unsigned int (patch 01) [drjones]
- Removed MEM_REG_FLAGS and GUEST_VM_MODE defines [drjones]
- Replaced DEBUG() with pr_info() [drjones]
- Calculate number of guest pages with vm_calc_num_guest_pages()
[drjones]
- Using a memory region of 1 MB size (matches the minimum needed
for s390x)
- Removed the increment of guest_addr after the loop [drjones]
- Added assert for the errno when adding a slot beyond-the-limit [drjones]
- Prefer the KVM_MEM_READONLY flag, but switch to KVM_MEM_LOG_DIRTY_PAGES on
s390x, so that both flags are covered. This also somewhat tests the KVM_CAP_READONLY_MEM capability check [drjones]
- Moved the test logic to test_add_max_slots(), which makes it easier to add new cases to the "suite".
v1: https://lore.kernel.org/kvm/[email protected]
Wainer dos Santos Moschetta (2):
selftests: kvm: Add vm_get_fd() in kvm_util
selftests: kvm: Add mem_slot_test test
tools/testing/selftests/kvm/.gitignore | 1 +
tools/testing/selftests/kvm/Makefile | 3 +
.../testing/selftests/kvm/include/kvm_util.h | 1 +
tools/testing/selftests/kvm/lib/kvm_util.c | 5 ++
tools/testing/selftests/kvm/mem_slot_test.c | 69 +++++++++++++++++++
5 files changed, 79 insertions(+)
create mode 100644 tools/testing/selftests/kvm/mem_slot_test.c
--
2.17.2
This patch introduces the mem_slot_test test, which checks that a
VM can have memory slots added up to the limit defined by
KVM_CAP_NR_MEMSLOTS. It then attempts to add one more slot to
verify that it fails as expected.
Signed-off-by: Wainer dos Santos Moschetta <[email protected]>
Reviewed-by: Andrew Jones <[email protected]>
---
tools/testing/selftests/kvm/.gitignore | 1 +
tools/testing/selftests/kvm/Makefile | 3 +
tools/testing/selftests/kvm/mem_slot_test.c | 69 +++++++++++++++++++++
3 files changed, 73 insertions(+)
create mode 100644 tools/testing/selftests/kvm/mem_slot_test.c
diff --git a/tools/testing/selftests/kvm/.gitignore b/tools/testing/selftests/kvm/.gitignore
index 16877c3daabf..127d27188427 100644
--- a/tools/testing/selftests/kvm/.gitignore
+++ b/tools/testing/selftests/kvm/.gitignore
@@ -21,4 +21,5 @@
/demand_paging_test
/dirty_log_test
/kvm_create_max_vcpus
+/mem_slot_test
/steal_time
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index 712a2ddd2a27..338b6cdce1a0 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -32,12 +32,14 @@ TEST_GEN_PROGS_x86_64 += clear_dirty_log_test
TEST_GEN_PROGS_x86_64 += demand_paging_test
TEST_GEN_PROGS_x86_64 += dirty_log_test
TEST_GEN_PROGS_x86_64 += kvm_create_max_vcpus
+TEST_GEN_PROGS_x86_64 += mem_slot_test
TEST_GEN_PROGS_x86_64 += steal_time
TEST_GEN_PROGS_aarch64 += clear_dirty_log_test
TEST_GEN_PROGS_aarch64 += demand_paging_test
TEST_GEN_PROGS_aarch64 += dirty_log_test
TEST_GEN_PROGS_aarch64 += kvm_create_max_vcpus
+TEST_GEN_PROGS_aarch64 += mem_slot_test
TEST_GEN_PROGS_aarch64 += steal_time
TEST_GEN_PROGS_s390x = s390x/memop
@@ -46,6 +48,7 @@ TEST_GEN_PROGS_s390x += s390x/sync_regs_test
TEST_GEN_PROGS_s390x += demand_paging_test
TEST_GEN_PROGS_s390x += dirty_log_test
TEST_GEN_PROGS_s390x += kvm_create_max_vcpus
+TEST_GEN_PROGS_s390x += mem_slot_test
TEST_GEN_PROGS += $(TEST_GEN_PROGS_$(UNAME_M))
LIBKVM += $(LIBKVM_$(UNAME_M))
diff --git a/tools/testing/selftests/kvm/mem_slot_test.c b/tools/testing/selftests/kvm/mem_slot_test.c
new file mode 100644
index 000000000000..3cab22fa6bd6
--- /dev/null
+++ b/tools/testing/selftests/kvm/mem_slot_test.c
@@ -0,0 +1,69 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * mem_slot_test
+ *
+ * Copyright (C) 2020, Red Hat, Inc.
+ *
+ * Test suite for memory region operations.
+ */
+#define _GNU_SOURCE /* for program_invocation_short_name */
+#include <linux/kvm.h>
+#include <sys/mman.h>
+
+#include "test_util.h"
+#include "kvm_util.h"
+
+/*
+ * Test that memory slots can be added up to KVM_CAP_NR_MEMSLOTS, and that
+ * any attempt to add one more slot fails.
+ */
+static void test_add_max_slots(void)
+{
+ int ret;
+ struct kvm_vm *vm;
+ uint32_t max_mem_slots;
+ uint32_t slot;
+ uint64_t guest_addr = 0x0;
+ uint64_t mem_reg_npages;
+ uint64_t mem_reg_size = 0x100000; /* Aligned 1MB is needed for s390x */
+ void *mem;
+
+ max_mem_slots = kvm_check_cap(KVM_CAP_NR_MEMSLOTS);
+ TEST_ASSERT(max_mem_slots > 0,
+ "KVM_CAP_NR_MEMSLOTS should be greater than 0");
+ pr_info("Allowed number of memory slots: %i\n", max_mem_slots);
+
+ vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
+
+ mem_reg_npages = vm_calc_num_guest_pages(VM_MODE_DEFAULT, mem_reg_size);
+
+ /* Check that memory slots can be added up to the maximum allowed */
+ pr_info("Adding slots 0..%i, each memory region with %ldK size\n",
+ (max_mem_slots - 1), mem_reg_size >> 10);
+ for (slot = 0; slot < max_mem_slots; slot++) {
+ vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS,
+ guest_addr, slot, mem_reg_npages,
+ 0);
+ guest_addr += mem_reg_size;
+ }
+
+ /* Check that no memory slot can be added beyond the limit */
+ mem = mmap(NULL, mem_reg_size, PROT_READ | PROT_WRITE,
+ MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+ TEST_ASSERT(mem != MAP_FAILED, "Failed to mmap() host");
+
+ ret = ioctl(vm_get_fd(vm), KVM_SET_USER_MEMORY_REGION,
+ &(struct kvm_userspace_memory_region) {slot, 0, guest_addr,
+ mem_reg_size, (uint64_t) mem});
+ TEST_ASSERT(ret == -1 && errno == EINVAL,
+ "Adding one more memory slot should fail with EINVAL");
+
+ munmap(mem, mem_reg_size);
+ kvm_vm_free(vm);
+}
+
+int main(int argc, char *argv[])
+{
+ test_add_max_slots();
+ return 0;
+}
--
2.17.2
Introduces the vm_get_fd() function in kvm_util which returns
the VM file descriptor.
Reviewed-by: Andrew Jones <[email protected]>
Signed-off-by: Wainer dos Santos Moschetta <[email protected]>
---
tools/testing/selftests/kvm/include/kvm_util.h | 1 +
tools/testing/selftests/kvm/lib/kvm_util.c | 5 +++++
2 files changed, 6 insertions(+)
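For reference, a minimal usage sketch of the new accessor (this mirrors
how patch 02 in this series uses it; the region contents below are a
placeholder to be filled by the caller):

    struct kvm_vm *vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
    struct kvm_userspace_memory_region region = {};
    int ret;

    /* Fill in region: slot, flags, guest_phys_addr, memory_size, userspace_addr. */

    /* vm_get_fd() exposes the VM fd for ioctls the library does not wrap. */
    ret = ioctl(vm_get_fd(vm), KVM_SET_USER_MEMORY_REGION, &region);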
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index a99b875f50d2..4e122819ee24 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -254,6 +254,7 @@ bool vm_is_unrestricted_guest(struct kvm_vm *vm);
unsigned int vm_get_page_size(struct kvm_vm *vm);
unsigned int vm_get_page_shift(struct kvm_vm *vm);
unsigned int vm_get_max_gfn(struct kvm_vm *vm);
+int vm_get_fd(struct kvm_vm *vm);
unsigned int vm_calc_num_guest_pages(enum vm_guest_mode mode, size_t size);
unsigned int vm_num_host_pages(enum vm_guest_mode mode, unsigned int num_guest_pages);
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 8a3523d4434f..3e36a1eb8771 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1734,6 +1734,11 @@ unsigned int vm_get_max_gfn(struct kvm_vm *vm)
return vm->max_gfn;
}
+int vm_get_fd(struct kvm_vm *vm)
+{
+ return vm->fd;
+}
+
static unsigned int vm_calc_num_pages(unsigned int num_pages,
unsigned int page_shift,
unsigned int new_page_shift,
--
2.17.2
On Thu, Apr 09, 2020 at 07:09:03PM -0300, Wainer dos Santos Moschetta wrote:
> This series introduces a new KVM selftest (mem_slot_test) whose goal
> is to verify that memory slots can be added up to the maximum allowed.
> Adding one slot beyond the limit is then attempted, which should fail
> with an error.
>
> Patch 01 is needed so that the VM fd can be accessed from the
> test code (for the ioctl call that attempts to add an extra slot).
>
> I ran the test successfully on x86_64, aarch64, and s390x, which
> is why it is enabled to build on those arches.
Any objection to folding these patches into a series I have to clean up
set_memory_region_test (which was mentioned in a prior version) and add
this as a testcase to set_memory_region_test instead of creating a whole
new test?
A large chunk of set_memory_region_test will still be x86_64 only, but
having the test reside in common code will hopefully make it easier to
extend to other architectures.
On Fri, Apr 10, 2020 at 01:45:09PM -0700, Sean Christopherson wrote:
> On Thu, Apr 09, 2020 at 07:09:03PM -0300, Wainer dos Santos Moschetta wrote:
> > This series introduces a new KVM selftest (mem_slot_test) whose goal
> > is to verify that memory slots can be added up to the maximum allowed.
> > Adding one slot beyond the limit is then attempted, which should fail
> > with an error.
> >
> > Patch 01 is needed so that the VM fd can be accessed from the
> > test code (for the ioctl call that attempts to add an extra slot).
> >
> > I ran the test successfully on x86_64, aarch64, and s390x, which
> > is why it is enabled to build on those arches.
>
> Any objection to folding these patches into a series I have to clean up
> set_memory_region_test (which was mentioned in a prior version) and add
> this as a testcase to set_memory_region_test instead of creating a whole
> new test?
>
> A large chunk of set_memory_region_test will still be x86_64 only, but
> having the test reside in common code will hopefully make it easier to
> extend to other architectures.
>
Yes, that would be my preference as well. In the end I decided it could be
done later, but I would still prefer that it be done from the beginning.
Thanks,
drew