2021-09-14 16:48:35

by Peter Gonda

Subject: [PATCH 0/4 V8] Add AMD SEV and SEV-ES intra host migration support

Intra host migration provides a low-cost mechanism for userspace VMM
upgrades. It is an alternative to traditional (i.e., remote) live
migration. Whereas remote migration handles moving a guest to a new host,
intra host migration only handles moving a guest to a new userspace VMM
within a host. This can be used to update, rollback, change flags of the
VMM, etc. The lower cost compared to live migration comes from the fact
that the guest's memory does not need to be copied between processes. A
handle to the guest memory simply gets passed to the new VMM; this could
be done via /dev/shm with share=on or a similar mechanism.
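
As an illustration only (not part of this series; the helper names are made
up and error handling is elided), such a hand-off could be built on
memfd-backed guest RAM plus SCM_RIGHTS fd passing:

#define _GNU_SOURCE
#include <string.h>
#include <sys/mman.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

/* Old VMM: back guest RAM with a shareable fd instead of anon memory. */
static int create_guest_mem(size_t size)
{
	int fd = memfd_create("guest-ram", MFD_CLOEXEC);

	ftruncate(fd, size);
	return fd;
}

/* Old VMM: hand the fd to the new VMM over a UNIX domain socket. */
static void send_guest_mem_fd(int sock, int mem_fd)
{
	char cbuf[CMSG_SPACE(sizeof(int))] = { 0 };
	char byte = 0;
	struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
	struct msghdr msg = {
		.msg_iov = &iov,
		.msg_iovlen = 1,
		.msg_control = cbuf,
		.msg_controllen = sizeof(cbuf),
	};
	struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

	cmsg->cmsg_level = SOL_SOCKET;
	cmsg->cmsg_type = SCM_RIGHTS;
	cmsg->cmsg_len = CMSG_LEN(sizeof(int));
	memcpy(CMSG_DATA(cmsg), &mem_fd, sizeof(int));

	sendmsg(sock, &msg, 0);
}

The new VMM receives the fd with recvmsg(), mmap()s it MAP_SHARED, and
registers the same pages with KVM via KVM_SET_USER_MEMORY_REGION.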

The guest state can be transferred from an old VMM to a new VMM as follows:

1. Export guest state from KVM to the old user-space VMM via a getter
   user-space/kernel API.
2. Transfer guest state from the old VMM to the new VMM via IPC.
3. Import guest state into KVM from the new user-space VMM via a setter
   user-space/kernel API.

In other words, the guest state moves between VMMs by exporting it from KVM
using getters, sending that data to the new VMM, then setting it again in
KVM.

In the common case for intra host migration, we can rely on the normal
ioctls for passing data from one VMM to the next. SEV, SEV-ES, and other
confidential compute environments make most of this information opaque, and
render KVM ioctls such as "KVM_GET_REGS" irrelevant. As a result, we need
the ability to pass this opaque metadata from one VMM to the next. The
easiest way to do this is to leave this data in the kernel, and transfer
ownership of the metadata from one KVM VM (or vCPU) to the next. For
example, we need to move the SEV enabled ASID, VMSAs, and GHCB metadata
from one VMM to the next. In general, we need to be able to hand off any
data that would be unsafe/impossible for the kernel to hand directly to
userspace (and cannot be reproduced using data that can be handed safely to
userspace).
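
With the capability added by this series, that hand-off is a single call on
the destination VM's fd. A minimal userspace sketch, mirroring the selftest
helper added in patch 4 (migrate_protected_vm is an illustrative name):

#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Move the source VM's SEV state (ASID, VMSAs, etc.) to the dst VM. */
static int migrate_protected_vm(int dst_vm_fd, int src_vm_fd)
{
	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_VM_MIGRATE_PROTECTED_VM_FROM,
		.args = { src_vm_fd },
	};

	return ioctl(dst_vm_fd, KVM_ENABLE_CAP, &cap);
}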

V8
* Update to require that @dst is not SEV or SEV-ES enabled.
* Address selftest feedback.

V7
* Address selftest feedback.

V6
* Add selftest.

V5:
* Fix up locking scheme
* Address marcorr@ comments.

V4:
* Move to seanjc@'s suggestion of source VM FD based single ioctl design.

v3:
* Fix memory leak found by dan.carpenter@

v2:
* Added marcorr@ reviewed by tag
* Renamed function introduced in 1/3
* Edited with seanjc@'s review comments
** Cleaned up WARN usage
** Userspace makes random token now
* Edited with brijesh.singh@'s review comments
** Checks for different LAUNCH_* states in send function

v1: https://lore.kernel.org/kvm/[email protected]/

base-commit: 680c7e3be6a3

Cc: Marc Orr <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: Sean Christopherson <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Dr. David Alan Gilbert <[email protected]>
Cc: Brijesh Singh <[email protected]>
Cc: Vitaly Kuznetsov <[email protected]>
Cc: Wanpeng Li <[email protected]>
Cc: Jim Mattson <[email protected]>
Cc: Joerg Roedel <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: [email protected]
Cc: [email protected]

Peter Gonda (4):
KVM: SEV: Add support for SEV intra host migration
KVM: SEV: Add support for SEV-ES intra host migration
selftest: KVM: Add open sev dev helper
selftest: KVM: Add intra host migration tests

Documentation/virt/kvm/api.rst | 15 ++
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/svm/sev.c | 187 ++++++++++++++++
arch/x86/kvm/svm/svm.c | 1 +
arch/x86/kvm/svm/svm.h | 2 +
arch/x86/kvm/x86.c | 6 +
include/uapi/linux/kvm.h | 1 +
tools/testing/selftests/kvm/Makefile | 1 +
.../testing/selftests/kvm/include/kvm_util.h | 1 +
.../selftests/kvm/include/x86_64/svm_util.h | 2 +
tools/testing/selftests/kvm/lib/kvm_util.c | 24 ++-
tools/testing/selftests/kvm/lib/x86_64/svm.c | 13 ++
.../selftests/kvm/x86_64/sev_vm_tests.c | 203 ++++++++++++++++++
13 files changed, 447 insertions(+), 10 deletions(-)
create mode 100644 tools/testing/selftests/kvm/x86_64/sev_vm_tests.c

--
2.33.0.309.g3052b89438-goog


2021-09-14 16:48:38

by Peter Gonda

Subject: [PATCH 1/4 V8] KVM: SEV: Add support for SEV intra host migration

For SEV to work with intra host migration, contents of the SEV info struct
such as the ASID (used to index the encryption key in the AMD SP) and
the list of memory regions need to be transferred to the target VM.
This change adds a command for a target VMM to get a source SEV VM's sev
info.

Signed-off-by: Peter Gonda <[email protected]>
Suggested-by: Sean Christopherson <[email protected]>
Reviewed-by: Marc Orr <[email protected]>
Cc: Marc Orr <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: Sean Christopherson <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Dr. David Alan Gilbert <[email protected]>
Cc: Brijesh Singh <[email protected]>
Cc: Vitaly Kuznetsov <[email protected]>
Cc: Wanpeng Li <[email protected]>
Cc: Jim Mattson <[email protected]>
Cc: Joerg Roedel <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: [email protected]
Cc: [email protected]
---
Documentation/virt/kvm/api.rst | 15 ++++
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/svm/sev.c | 136 ++++++++++++++++++++++++++++++++
arch/x86/kvm/svm/svm.c | 1 +
arch/x86/kvm/svm/svm.h | 2 +
arch/x86/kvm/x86.c | 6 ++
include/uapi/linux/kvm.h | 1 +
7 files changed, 162 insertions(+)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 4ea1bb28297b..4d5dbb4f14ec 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -6702,6 +6702,21 @@ MAP_SHARED mmap will result in an -EINVAL return.
When enabled the VMM may make use of the ``KVM_ARM_MTE_COPY_TAGS`` ioctl to
perform a bulk copy of tags to/from the guest.

+7.29 KVM_CAP_VM_MIGRATE_PROTECTED_VM_FROM
+-----------------------------------------
+
+Architectures: x86 SEV enabled
+Type: vm
+Parameters: args[0] is the fd of the source vm
+Returns: 0 on success
+
+This capability enables userspace to migrate the encryption context from the VM
+indicated by the fd to the VM this is called on.
+
+This is intended to support intra-host migration of VMs between userspace
+VMMs. This allows for upgrading the VMM process without interrupting the
+guest.
+
8. Other capabilities.
======================

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 09b256db394a..96bf358f5a46 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1456,6 +1456,7 @@ struct kvm_x86_ops {
int (*mem_enc_reg_region)(struct kvm *kvm, struct kvm_enc_region *argp);
int (*mem_enc_unreg_region)(struct kvm *kvm, struct kvm_enc_region *argp);
int (*vm_copy_enc_context_from)(struct kvm *kvm, unsigned int source_fd);
+ int (*vm_migrate_protected_vm_from)(struct kvm *kvm, unsigned int source_fd);

int (*get_msr_feature)(struct kvm_msr_entry *entry);

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 46eb1ba62d3d..6fc1935b52ea 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1501,6 +1501,142 @@ static int sev_receive_finish(struct kvm *kvm, struct kvm_sev_cmd *argp)
return sev_issue_cmd(kvm, SEV_CMD_RECEIVE_FINISH, &data, &argp->error);
}

+static int sev_lock_for_migration(struct kvm *kvm)
+{
+ struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+
+ /*
+ * Bail if this VM is already involved in a migration to avoid deadlock
+ * between two VMs trying to migrate to/from each other.
+ */
+ if (atomic_cmpxchg_acquire(&sev->migration_in_progress, 0, 1))
+ return -EBUSY;
+
+ mutex_lock(&kvm->lock);
+
+ return 0;
+}
+
+static void sev_unlock_after_migration(struct kvm *kvm)
+{
+ struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+
+ mutex_unlock(&kvm->lock);
+ atomic_set_release(&sev->migration_in_progress, 0);
+}
+
+
+static int sev_lock_vcpus_for_migration(struct kvm *kvm)
+{
+ struct kvm_vcpu *vcpu;
+ int i, j;
+
+ kvm_for_each_vcpu(i, vcpu, kvm) {
+ if (mutex_lock_killable(&vcpu->mutex))
+ goto out_unlock;
+ }
+
+ return 0;
+
+out_unlock:
+ kvm_for_each_vcpu(j, vcpu, kvm) {
+ if (i == j)
+ break;
+
+ mutex_unlock(&vcpu->mutex);
+ }
+ return -EINTR;
+}
+
+static void sev_unlock_vcpus_for_migration(struct kvm *kvm)
+{
+ struct kvm_vcpu *vcpu;
+ int i;
+
+ kvm_for_each_vcpu(i, vcpu, kvm) {
+ mutex_unlock(&vcpu->mutex);
+ }
+}
+
+static void sev_migrate_from(struct kvm_sev_info *dst,
+ struct kvm_sev_info *src)
+{
+ dst->active = true;
+ dst->asid = src->asid;
+ dst->misc_cg = src->misc_cg;
+ dst->handle = src->handle;
+ dst->pages_locked = src->pages_locked;
+
+ src->asid = 0;
+ src->active = false;
+ src->handle = 0;
+ src->pages_locked = 0;
+ src->misc_cg = NULL;
+
+ INIT_LIST_HEAD(&dst->regions_list);
+ list_replace_init(&src->regions_list, &dst->regions_list);
+}
+
+int svm_vm_migrate_from(struct kvm *kvm, unsigned int source_fd)
+{
+ struct kvm_sev_info *dst_sev = &to_kvm_svm(kvm)->sev_info;
+ struct file *source_kvm_file;
+ struct kvm *source_kvm;
+ struct kvm_vcpu *vcpu;
+ int i, ret;
+
+ ret = sev_lock_for_migration(kvm);
+ if (ret)
+ return ret;
+
+ if (sev_guest(kvm)) {
+ ret = -EINVAL;
+ goto out_unlock;
+ }
+
+ source_kvm_file = fget(source_fd);
+ if (!file_is_kvm(source_kvm_file)) {
+ ret = -EBADF;
+ goto out_fput;
+ }
+
+ source_kvm = source_kvm_file->private_data;
+ ret = sev_lock_for_migration(source_kvm);
+ if (ret)
+ goto out_fput;
+
+ if (!sev_guest(source_kvm) || sev_es_guest(source_kvm)) {
+ ret = -EINVAL;
+ goto out_source;
+ }
+ ret = sev_lock_vcpus_for_migration(kvm);
+ if (ret)
+ goto out_source;
+ ret = sev_lock_vcpus_for_migration(source_kvm);
+ if (ret)
+ goto out_dst_vcpu;
+
+ sev_migrate_from(dst_sev, &to_kvm_svm(source_kvm)->sev_info);
+ kvm_for_each_vcpu(i, vcpu, source_kvm) {
+ kvm_vcpu_reset(vcpu, /* init_event= */ false);
+ }
+ ret = 0;
+
+out_source_vcpu:
+ sev_unlock_vcpus_for_migration(source_kvm);
+
+out_dst_vcpu:
+ sev_unlock_vcpus_for_migration(kvm);
+
+out_source:
+ sev_unlock_after_migration(source_kvm);
+out_fput:
+ if (source_kvm_file)
+ fput(source_kvm_file);
+out_unlock:
+ sev_unlock_after_migration(kvm);
+ return ret;
+}
+
int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
{
struct kvm_sev_cmd sev_cmd;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 1a70e11f0487..51d767266dc6 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4625,6 +4625,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
.mem_enc_unreg_region = svm_unregister_enc_region,

.vm_copy_enc_context_from = svm_vm_copy_asid_from,
+ .vm_migrate_protected_vm_from = svm_vm_migrate_from,

.can_emulate_instruction = svm_can_emulate_instruction,

diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 524d943f3efc..67bfb43301e1 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -80,6 +80,7 @@ struct kvm_sev_info {
u64 ap_jump_table; /* SEV-ES AP Jump Table address */
struct kvm *enc_context_owner; /* Owner of copied encryption context */
struct misc_cg *misc_cg; /* For misc cgroup accounting */
+ atomic_t migration_in_progress;
};

struct kvm_svm {
@@ -552,6 +553,7 @@ int svm_register_enc_region(struct kvm *kvm,
int svm_unregister_enc_region(struct kvm *kvm,
struct kvm_enc_region *range);
int svm_vm_copy_asid_from(struct kvm *kvm, unsigned int source_fd);
+int svm_vm_migrate_from(struct kvm *kvm, unsigned int source_fd);
void pre_sev_run(struct vcpu_svm *svm, int cpu);
void __init sev_set_cpu_caps(void);
void __init sev_hardware_setup(void);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 86539c1686fa..ef254f34b5a4 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5654,6 +5654,12 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm,
if (kvm_x86_ops.vm_copy_enc_context_from)
r = kvm_x86_ops.vm_copy_enc_context_from(kvm, cap->args[0]);
return r;
+ case KVM_CAP_VM_MIGRATE_PROTECTED_VM_FROM:
+ r = -EINVAL;
+ if (kvm_x86_ops.vm_migrate_protected_vm_from)
+ r = kvm_x86_ops.vm_migrate_protected_vm_from(
+ kvm, cap->args[0]);
+ return r;
case KVM_CAP_EXIT_HYPERCALL:
if (cap->args[0] & ~KVM_EXIT_HYPERCALL_VALID_MASK) {
r = -EINVAL;
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index a067410ebea5..77b292ed01c1 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1112,6 +1112,7 @@ struct kvm_ppc_resize_hpt {
#define KVM_CAP_BINARY_STATS_FD 203
#define KVM_CAP_EXIT_ON_EMULATION_FAILURE 204
#define KVM_CAP_ARM_MTE 205
+#define KVM_CAP_VM_MIGRATE_PROTECTED_VM_FROM 206

#ifdef KVM_CAP_IRQ_ROUTING

--
2.33.0.309.g3052b89438-goog

2021-09-14 16:49:19

by Peter Gonda

Subject: [PATCH 3/4 V8] selftest: KVM: Add open sev dev helper

Refactors out open path support from open_kvm_dev_path_or_exit() and
adds new helper for SEV device path.

Signed-off-by: Peter Gonda <[email protected]>
Suggested-by: Sean Christopherson <[email protected]>
Cc: Marc Orr <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: Sean Christopherson <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Dr. David Alan Gilbert <[email protected]>
Cc: Brijesh Singh <[email protected]>
Cc: Vitaly Kuznetsov <[email protected]>
Cc: Wanpeng Li <[email protected]>
Cc: Jim Mattson <[email protected]>
Cc: Joerg Roedel <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: [email protected]
Cc: [email protected]
---
.../testing/selftests/kvm/include/kvm_util.h | 1 +
.../selftests/kvm/include/x86_64/svm_util.h | 2 ++
tools/testing/selftests/kvm/lib/kvm_util.c | 24 +++++++++++--------
tools/testing/selftests/kvm/lib/x86_64/svm.c | 13 ++++++++++
4 files changed, 30 insertions(+), 10 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 010b59b13917..368e88305046 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -80,6 +80,7 @@ struct vm_guest_mode_params {
};
extern const struct vm_guest_mode_params vm_guest_mode_params[];

+int open_path_or_exit(const char *path, int flags);
int open_kvm_dev_path_or_exit(void);
int kvm_check_cap(long cap);
int vm_enable_cap(struct kvm_vm *vm, struct kvm_enable_cap *cap);
diff --git a/tools/testing/selftests/kvm/include/x86_64/svm_util.h b/tools/testing/selftests/kvm/include/x86_64/svm_util.h
index b7531c83b8ae..587fbe408b99 100644
--- a/tools/testing/selftests/kvm/include/x86_64/svm_util.h
+++ b/tools/testing/selftests/kvm/include/x86_64/svm_util.h
@@ -46,4 +46,6 @@ static inline bool cpu_has_svm(void)
return ecx & CPUID_SVM;
}

+int open_sev_dev_path_or_exit(void);
+
#endif /* SELFTEST_KVM_SVM_UTILS_H */
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 10a8ed691c66..06a6c04010fb 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -31,6 +31,19 @@ static void *align(void *x, size_t size)
return (void *) (((size_t) x + mask) & ~mask);
}

+int open_path_or_exit(const char *path, int flags)
+{
+ int fd;
+
+ fd = open(path, flags);
+ if (fd < 0) {
+ print_skip("%s not available (errno: %d)", path, errno);
+ exit(KSFT_SKIP);
+ }
+
+ return fd;
+}
+
/*
* Open KVM_DEV_PATH if available, otherwise exit the entire program.
*
@@ -42,16 +55,7 @@ static void *align(void *x, size_t size)
*/
static int _open_kvm_dev_path_or_exit(int flags)
{
- int fd;
-
- fd = open(KVM_DEV_PATH, flags);
- if (fd < 0) {
- print_skip("%s not available, is KVM loaded? (errno: %d)",
- KVM_DEV_PATH, errno);
- exit(KSFT_SKIP);
- }
-
- return fd;
+ return open_path_or_exit(KVM_DEV_PATH, flags);
}

int open_kvm_dev_path_or_exit(void)
diff --git a/tools/testing/selftests/kvm/lib/x86_64/svm.c b/tools/testing/selftests/kvm/lib/x86_64/svm.c
index 2ac98d70d02b..14a8618efa9c 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/svm.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/svm.c
@@ -13,6 +13,8 @@
#include "processor.h"
#include "svm_util.h"

+#define SEV_DEV_PATH "/dev/sev"
+
struct gpr64_regs guest_regs;
u64 rflags;

@@ -160,3 +162,14 @@ void nested_svm_check_supported(void)
exit(KSFT_SKIP);
}
}
+
+/*
+ * Open SEV_DEV_PATH if available, otherwise exit the entire program.
+ *
+ * Return:
+ * The opened file descriptor of /dev/sev.
+ */
+int open_sev_dev_path_or_exit(void)
+{
+ return open_path_or_exit(SEV_DEV_PATH, 0);
+}
--
2.33.0.309.g3052b89438-goog

2021-09-14 16:49:32

by Peter Gonda

Subject: [PATCH 4/4 V8] selftest: KVM: Add intra host migration tests

Adds testcases for intra host migration for SEV and SEV-ES. Also adds
locking test to confirm no deadlock exists.

Signed-off-by: Peter Gonda <[email protected]>
Suggested-by: Sean Christopherson <[email protected]>
Reviewed-by: Marc Orr <[email protected]>
Cc: Marc Orr <[email protected]>
Cc: Sean Christopherson <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Brijesh Singh <[email protected]>
Cc: [email protected]
Cc: [email protected]
---
tools/testing/selftests/kvm/Makefile | 1 +
.../selftests/kvm/x86_64/sev_vm_tests.c | 203 ++++++++++++++++++
2 files changed, 204 insertions(+)
create mode 100644 tools/testing/selftests/kvm/x86_64/sev_vm_tests.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index c103873531e0..44fd3566fb51 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -72,6 +72,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/vmx_pmu_msrs_test
TEST_GEN_PROGS_x86_64 += x86_64/xen_shinfo_test
TEST_GEN_PROGS_x86_64 += x86_64/xen_vmcall_test
TEST_GEN_PROGS_x86_64 += x86_64/vmx_pi_mmio_test
+TEST_GEN_PROGS_x86_64 += x86_64/sev_vm_tests
TEST_GEN_PROGS_x86_64 += access_tracking_perf_test
TEST_GEN_PROGS_x86_64 += demand_paging_test
TEST_GEN_PROGS_x86_64 += dirty_log_test
diff --git a/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c b/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c
new file mode 100644
index 000000000000..ec3bbc96e73a
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c
@@ -0,0 +1,203 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#include <linux/kvm.h>
+#include <linux/psp-sev.h>
+#include <stdio.h>
+#include <sys/ioctl.h>
+#include <stdlib.h>
+#include <errno.h>
+#include <pthread.h>
+
+#include "test_util.h"
+#include "kvm_util.h"
+#include "processor.h"
+#include "svm_util.h"
+#include "kselftest.h"
+#include "../lib/kvm_util_internal.h"
+
+#define SEV_POLICY_ES 0b100
+
+#define NR_MIGRATE_TEST_VCPUS 4
+#define NR_MIGRATE_TEST_VMS 3
+#define NR_LOCK_TESTING_THREADS 3
+#define NR_LOCK_TESTING_ITERATIONS 10000
+
+static void sev_ioctl(int vm_fd, int cmd_id, void *data)
+{
+ struct kvm_sev_cmd cmd = {
+ .id = cmd_id,
+ .data = (uint64_t)data,
+ .sev_fd = open_sev_dev_path_or_exit(),
+ };
+ int ret;
+
+ ret = ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
+ TEST_ASSERT((ret == 0 || cmd.error == SEV_RET_SUCCESS),
+ "%d failed: return code: %d, errno: %d, fw error: %d",
+ cmd_id, ret, errno, cmd.error);
+}
+
+static struct kvm_vm *sev_vm_create(bool es)
+{
+ struct kvm_vm *vm;
+ struct kvm_sev_launch_start start = { 0 };
+ int i;
+
+ vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
+ sev_ioctl(vm->fd, es ? KVM_SEV_ES_INIT : KVM_SEV_INIT, NULL);
+ for (i = 0; i < NR_MIGRATE_TEST_VCPUS; ++i)
+ vm_vcpu_add(vm, i);
+ if (es)
+ start.policy |= SEV_POLICY_ES;
+ sev_ioctl(vm->fd, KVM_SEV_LAUNCH_START, &start);
+ if (es)
+ sev_ioctl(vm->fd, KVM_SEV_LAUNCH_UPDATE_VMSA, NULL);
+ return vm;
+}
+
+static struct kvm_vm *__vm_create(void)
+{
+ struct kvm_vm *vm;
+ int i;
+
+ vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
+ for (i = 0; i < NR_MIGRATE_TEST_VCPUS; ++i)
+ vm_vcpu_add(vm, i);
+
+ return vm;
+}
+
+static int __sev_migrate_from(int dst_fd, int src_fd)
+{
+ struct kvm_enable_cap cap = {
+ .cap = KVM_CAP_VM_MIGRATE_PROTECTED_VM_FROM,
+ .args = { src_fd }
+ };
+
+ return ioctl(dst_fd, KVM_ENABLE_CAP, &cap);
+}
+
+
+static void sev_migrate_from(int dst_fd, int src_fd)
+{
+ int ret;
+
+ ret = __sev_migrate_from(dst_fd, src_fd);
+ TEST_ASSERT(!ret, "Migration failed, ret: %d, errno: %d\n", ret, errno);
+}
+
+static void test_sev_migrate_from(bool es)
+{
+ struct kvm_vm *src_vm;
+ struct kvm_vm *dst_vms[NR_MIGRATE_TEST_VMS];
+ int i;
+
+ src_vm = sev_vm_create(es);
+ for (i = 0; i < NR_MIGRATE_TEST_VMS; ++i)
+ dst_vms[i] = __vm_create();
+
+ /* Initial migration from the src to the first dst. */
+ sev_migrate_from(dst_vms[0]->fd, src_vm->fd);
+
+ for (i = 1; i < NR_MIGRATE_TEST_VMS; i++)
+ sev_migrate_from(dst_vms[i]->fd, dst_vms[i - 1]->fd);
+
+ /* Migrate the guest back to the original VM. */
+ sev_migrate_from(src_vm->fd, dst_vms[NR_MIGRATE_TEST_VMS - 1]->fd);
+
+ kvm_vm_free(src_vm);
+ for (i = 0; i < NR_MIGRATE_TEST_VMS; ++i)
+ kvm_vm_free(dst_vms[i]);
+}
+
+struct locking_thread_input {
+ struct kvm_vm *vm;
+ int source_fds[NR_LOCK_TESTING_THREADS];
+};
+
+static void *locking_test_thread(void *arg)
+{
+ int i, j;
+ struct locking_thread_input *input = (struct locking_thread_input *)arg;
+
+ for (i = 0; i < NR_LOCK_TESTING_ITERATIONS; ++i) {
+ j = i % NR_LOCK_TESTING_THREADS;
+ __sev_migrate_from(input->vm->fd, input->source_fds[j]);
+ }
+
+ return NULL;
+}
+
+static void test_sev_migrate_locking(void)
+{
+ struct locking_thread_input input[NR_LOCK_TESTING_THREADS];
+ pthread_t pt[NR_LOCK_TESTING_THREADS];
+ int i;
+
+ for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i) {
+ input[i].vm = sev_vm_create(/* es= */ false);
+ input[0].source_fds[i] = input[i].vm->fd;
+ }
+ for (i = 1; i < NR_LOCK_TESTING_THREADS; ++i)
+ memcpy(input[i].source_fds, input[0].source_fds,
+ sizeof(input[i].source_fds));
+
+ for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i)
+ pthread_create(&pt[i], NULL, locking_test_thread, &input[i]);
+
+ for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i)
+ pthread_join(pt[i], NULL);
+}
+
+static void test_sev_migrate_parameters(void)
+{
+ struct kvm_vm *sev_vm, *sev_es_vm, *vm_no_vcpu, *vm_no_sev,
+ *sev_es_vm_no_vmsa;
+ int ret;
+
+ sev_vm = sev_vm_create(/* es= */ false);
+ sev_es_vm = sev_vm_create(/* es= */ true);
+ vm_no_vcpu = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
+ vm_no_sev = __vm_create();
+ sev_es_vm_no_vmsa = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
+ sev_ioctl(sev_es_vm_no_vmsa->fd, KVM_SEV_ES_INIT, NULL);
+ vm_vcpu_add(sev_es_vm_no_vmsa, 1);
+
+
+ ret = __sev_migrate_from(sev_vm->fd, sev_es_vm->fd);
+ TEST_ASSERT(
+ ret == -1 && errno == EINVAL,
+ "Should not be able migrate to SEV enabled VM. ret: %d, errno: %d\n",
+ ret, errno);
+
+ ret = __sev_migrate_from(sev_es_vm->fd, sev_vm->fd);
+ TEST_ASSERT(
+ ret == -1 && errno == EINVAL,
+ "Should not be able migrate to SEV-ES enabled VM. ret: %d, errno: %d\n",
+ ret, errno);
+
+ ret = __sev_migrate_from(vm_no_vcpu->fd, sev_es_vm->fd);
+ TEST_ASSERT(
+ ret == -1 && errno == EINVAL,
+ "SEV-ES migrations require same number of vCPUS. ret: %d, errno: %d\n",
+ ret, errno);
+
+ ret = __sev_migrate_from(vm_no_vcpu->fd, sev_es_vm_no_vmsa->fd);
+ TEST_ASSERT(
+ ret == -1 && errno == EINVAL,
+ "SEV-ES migrations require UPDATE_VMSA. ret %d, errno: %d\n",
+ ret, errno);
+
+ ret = __sev_migrate_from(vm_no_vcpu->fd, vm_no_sev->fd);
+ TEST_ASSERT(ret == -1 && errno == EINVAL,
+ "Migrations require SEV enabled. ret %d, errno: %d\n", ret,
+ errno);
+}
+
+int main(int argc, char *argv[])
+{
+ test_sev_migrate_from(/* es= */ false);
+ test_sev_migrate_from(/* es= */ true);
+ test_sev_migrate_locking();
+ test_sev_migrate_parameters();
+ return 0;
+}
--
2.33.0.309.g3052b89438-goog

2021-09-14 16:50:12

by Peter Gonda

Subject: [PATCH 2/4 V8] KVM: SEV: Add support for SEV-ES intra host migration

For SEV-ES to work with intra host migration the VMSAs, GHCB metadata,
and other SEV-ES info needs to be preserved along with the guest's
memory.

Signed-off-by: Peter Gonda <[email protected]>
Reviewed-by: Marc Orr <[email protected]>
Cc: Marc Orr <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: Sean Christopherson <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Dr. David Alan Gilbert <[email protected]>
Cc: Brijesh Singh <[email protected]>
Cc: Vitaly Kuznetsov <[email protected]>
Cc: Wanpeng Li <[email protected]>
Cc: Jim Mattson <[email protected]>
Cc: Joerg Roedel <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: [email protected]
Cc: [email protected]
---
arch/x86/kvm/svm/sev.c | 53 +++++++++++++++++++++++++++++++++++++++++-
1 file changed, 52 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 6fc1935b52ea..321b55654f36 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1576,6 +1576,51 @@ static void sev_migrate_from(struct kvm_sev_info *dst,
list_replace_init(&src->regions_list, &dst->regions_list);
}

+static int sev_es_migrate_from(struct kvm *dst, struct kvm *src)
+{
+ int i;
+ struct kvm_vcpu *dst_vcpu, *src_vcpu;
+ struct vcpu_svm *dst_svm, *src_svm;
+
+ if (atomic_read(&src->online_vcpus) != atomic_read(&dst->online_vcpus))
+ return -EINVAL;
+
+ kvm_for_each_vcpu(i, src_vcpu, src) {
+ if (!src_vcpu->arch.guest_state_protected)
+ return -EINVAL;
+ }
+
+ kvm_for_each_vcpu(i, src_vcpu, src) {
+ src_svm = to_svm(src_vcpu);
+ dst_vcpu = kvm_get_vcpu(dst, i);
+ dst_svm = to_svm(dst_vcpu);
+
+ /*
+ * Transfer VMSA and GHCB state to the destination. Nullify and
+ * clear source fields as appropriate, the state now belongs to
+ * the destination.
+ */
+ dst_vcpu->vcpu_id = src_vcpu->vcpu_id;
+ dst_svm->vmsa = src_svm->vmsa;
+ src_svm->vmsa = NULL;
+ dst_svm->ghcb = src_svm->ghcb;
+ src_svm->ghcb = NULL;
+ dst_svm->vmcb->control.ghcb_gpa = src_svm->vmcb->control.ghcb_gpa;
+ dst_svm->ghcb_sa = src_svm->ghcb_sa;
+ src_svm->ghcb_sa = NULL;
+ dst_svm->ghcb_sa_len = src_svm->ghcb_sa_len;
+ src_svm->ghcb_sa_len = 0;
+ dst_svm->ghcb_sa_sync = src_svm->ghcb_sa_sync;
+ src_svm->ghcb_sa_sync = false;
+ dst_svm->ghcb_sa_free = src_svm->ghcb_sa_free;
+ src_svm->ghcb_sa_free = false;
+ }
+ to_kvm_svm(src)->sev_info.es_active = false;
+
+ return 0;
+}
+
int svm_vm_migrate_from(struct kvm *kvm, unsigned int source_fd)
{
struct kvm_sev_info *dst_sev = &to_kvm_svm(kvm)->sev_info;
@@ -1604,7 +1649,7 @@ int svm_vm_migrate_from(struct kvm *kvm, unsigned int source_fd)
if (ret)
goto out_fput;

- if (!sev_guest(source_kvm) || sev_es_guest(source_kvm)) {
+ if (!sev_guest(source_kvm)) {
ret = -EINVAL;
goto out_source;
}
@@ -1615,6 +1660,12 @@ int svm_vm_migrate_from(struct kvm *kvm, unsigned int source_fd)
if (ret)
goto out_dst_vcpu;

+ if (sev_es_guest(source_kvm)) {
+ ret = sev_es_migrate_from(kvm, source_kvm);
+ if (ret)
+ goto out_source_vcpu;
+ }
+
sev_migrate_from(dst_sev, &to_kvm_svm(source_kvm)->sev_info);
kvm_for_each_vcpu(i, vcpu, source_kvm) {
kvm_vcpu_reset(vcpu, /* init_event= */ false);
--
2.33.0.309.g3052b89438-goog

2021-09-15 17:11:20

by Marc Orr

Subject: Re: [PATCH 3/4 V8] selftest: KVM: Add open sev dev helper

On Tue, Sep 14, 2021 at 9:47 AM Peter Gonda <[email protected]> wrote:
>
> Refactors out open path support from open_kvm_dev_path_or_exit() and
> adds new helper for SEV device path.
>
> Signed-off-by: Peter Gonda <[email protected]>
> Suggested-by: Sean Christopherson <[email protected]>
> Cc: Marc Orr <[email protected]>
> Cc: Paolo Bonzini <[email protected]>
> Cc: Sean Christopherson <[email protected]>
> Cc: David Rientjes <[email protected]>
> Cc: Dr. David Alan Gilbert <[email protected]>
> Cc: Brijesh Singh <[email protected]>
> Cc: Vitaly Kuznetsov <[email protected]>
> Cc: Wanpeng Li <[email protected]>
> Cc: Jim Mattson <[email protected]>
> Cc: Joerg Roedel <[email protected]>
> Cc: Thomas Gleixner <[email protected]>
> Cc: Ingo Molnar <[email protected]>
> Cc: Borislav Petkov <[email protected]>
> Cc: "H. Peter Anvin" <[email protected]>
> Cc: [email protected]
> Cc: [email protected]
> ---
> .../testing/selftests/kvm/include/kvm_util.h | 1 +
> .../selftests/kvm/include/x86_64/svm_util.h | 2 ++
> tools/testing/selftests/kvm/lib/kvm_util.c | 24 +++++++++++--------
> tools/testing/selftests/kvm/lib/x86_64/svm.c | 13 ++++++++++
> 4 files changed, 30 insertions(+), 10 deletions(-)
>
> diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
> index 010b59b13917..368e88305046 100644
> --- a/tools/testing/selftests/kvm/include/kvm_util.h
> +++ b/tools/testing/selftests/kvm/include/kvm_util.h
> @@ -80,6 +80,7 @@ struct vm_guest_mode_params {
> };
> extern const struct vm_guest_mode_params vm_guest_mode_params[];
>
> +int open_path_or_exit(const char *path, int flags);
> int open_kvm_dev_path_or_exit(void);
> int kvm_check_cap(long cap);
> int vm_enable_cap(struct kvm_vm *vm, struct kvm_enable_cap *cap);
> diff --git a/tools/testing/selftests/kvm/include/x86_64/svm_util.h b/tools/testing/selftests/kvm/include/x86_64/svm_util.h
> index b7531c83b8ae..587fbe408b99 100644
> --- a/tools/testing/selftests/kvm/include/x86_64/svm_util.h
> +++ b/tools/testing/selftests/kvm/include/x86_64/svm_util.h
> @@ -46,4 +46,6 @@ static inline bool cpu_has_svm(void)
> return ecx & CPUID_SVM;
> }
>
> +int open_sev_dev_path_or_exit(void);
> +
> #endif /* SELFTEST_KVM_SVM_UTILS_H */
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index 10a8ed691c66..06a6c04010fb 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -31,6 +31,19 @@ static void *align(void *x, size_t size)
> return (void *) (((size_t) x + mask) & ~mask);
> }
>
> +int open_path_or_exit(const char *path, int flags)
> +{
> + int fd;
> +
> + fd = open(path, flags);
> + if (fd < 0) {
> + print_skip("%s not available (errno: %d)", path, errno);
> + exit(KSFT_SKIP);
> + }
> +
> + return fd;
> +}
> +
> /*
> * Open KVM_DEV_PATH if available, otherwise exit the entire program.
> *
> @@ -42,16 +55,7 @@ static void *align(void *x, size_t size)
> */
> static int _open_kvm_dev_path_or_exit(int flags)
> {
> - int fd;
> -
> - fd = open(KVM_DEV_PATH, flags);
> - if (fd < 0) {
> - print_skip("%s not available, is KVM loaded? (errno: %d)",
> - KVM_DEV_PATH, errno);
> - exit(KSFT_SKIP);
> - }
> -
> - return fd;
> + return open_path_or_exit(KVM_DEV_PATH, flags);
> }
>
> int open_kvm_dev_path_or_exit(void)
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/svm.c b/tools/testing/selftests/kvm/lib/x86_64/svm.c
> index 2ac98d70d02b..14a8618efa9c 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/svm.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/svm.c
> @@ -13,6 +13,8 @@
> #include "processor.h"
> #include "svm_util.h"
>
> +#define SEV_DEV_PATH "/dev/sev"
> +
> struct gpr64_regs guest_regs;
> u64 rflags;
>
> @@ -160,3 +162,14 @@ void nested_svm_check_supported(void)
> exit(KSFT_SKIP);
> }
> }
> +
> +/*
> + * Open SEV_DEV_PATH if available, otherwise exit the entire program.
> + *
> + * Return:
> + * The opened file descriptor of /dev/sev.
> + */
> +int open_sev_dev_path_or_exit(void)
> +{
> + return open_path_or_exit(SEV_DEV_PATH, 0);
> +}
> --
> 2.33.0.309.g3052b89438-goog
>

Reviewed-by: Marc Orr <[email protected]>

2021-09-15 17:29:43

by Marc Orr

Subject: Re: [PATCH 4/4 V8] selftest: KVM: Add intra host migration tests

On Tue, Sep 14, 2021 at 9:47 AM Peter Gonda <[email protected]> wrote:
>
> Adds testcases for intra host migration for SEV and SEV-ES. Also adds
> locking test to confirm no deadlock exists.
>
> Signed-off-by: Peter Gonda <[email protected]>
> Suggested-by: Sean Christopherson <[email protected]>
> Reviewed-by: Marc Orr <[email protected]>
> Cc: Marc Orr <[email protected]>
> Cc: Sean Christopherson <[email protected]>
> Cc: David Rientjes <[email protected]>
> Cc: Brijesh Singh <[email protected]>
> Cc: [email protected]
> Cc: [email protected]
> ---
> tools/testing/selftests/kvm/Makefile | 1 +
> .../selftests/kvm/x86_64/sev_vm_tests.c | 203 ++++++++++++++++++
> 2 files changed, 204 insertions(+)
> create mode 100644 tools/testing/selftests/kvm/x86_64/sev_vm_tests.c
>
> diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> index c103873531e0..44fd3566fb51 100644
> --- a/tools/testing/selftests/kvm/Makefile
> +++ b/tools/testing/selftests/kvm/Makefile
> @@ -72,6 +72,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/vmx_pmu_msrs_test
> TEST_GEN_PROGS_x86_64 += x86_64/xen_shinfo_test
> TEST_GEN_PROGS_x86_64 += x86_64/xen_vmcall_test
> TEST_GEN_PROGS_x86_64 += x86_64/vmx_pi_mmio_test
> +TEST_GEN_PROGS_x86_64 += x86_64/sev_vm_tests
> TEST_GEN_PROGS_x86_64 += access_tracking_perf_test
> TEST_GEN_PROGS_x86_64 += demand_paging_test
> TEST_GEN_PROGS_x86_64 += dirty_log_test
> diff --git a/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c b/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c
> new file mode 100644
> index 000000000000..ec3bbc96e73a
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c
> @@ -0,0 +1,203 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +#include <linux/kvm.h>
> +#include <linux/psp-sev.h>
> +#include <stdio.h>
> +#include <sys/ioctl.h>
> +#include <stdlib.h>
> +#include <errno.h>
> +#include <pthread.h>
> +
> +#include "test_util.h"
> +#include "kvm_util.h"
> +#include "processor.h"
> +#include "svm_util.h"
> +#include "kselftest.h"
> +#include "../lib/kvm_util_internal.h"
> +
> +#define SEV_POLICY_ES 0b100
> +
> +#define NR_MIGRATE_TEST_VCPUS 4
> +#define NR_MIGRATE_TEST_VMS 3
> +#define NR_LOCK_TESTING_THREADS 3
> +#define NR_LOCK_TESTING_ITERATIONS 10000
> +
> +static void sev_ioctl(int vm_fd, int cmd_id, void *data)
> +{
> + struct kvm_sev_cmd cmd = {
> + .id = cmd_id,
> + .data = (uint64_t)data,
> + .sev_fd = open_sev_dev_path_or_exit(),
> + };
> + int ret;
> +
> + ret = ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
> + TEST_ASSERT((ret == 0 || cmd.error == SEV_RET_SUCCESS),
> + "%d failed: return code: %d, errno: %d, fw error: %d",
> + cmd_id, ret, errno, cmd.error);
> +}
> +
> +static struct kvm_vm *sev_vm_create(bool es)
> +{
> + struct kvm_vm *vm;
> + struct kvm_sev_launch_start start = { 0 };
> + int i;
> +
> + vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
> + sev_ioctl(vm->fd, es ? KVM_SEV_ES_INIT : KVM_SEV_INIT, NULL);
> + for (i = 0; i < NR_MIGRATE_TEST_VCPUS; ++i)
> + vm_vcpu_add(vm, i);
> + if (es)
> + start.policy |= SEV_POLICY_ES;
> + sev_ioctl(vm->fd, KVM_SEV_LAUNCH_START, &start);
> + if (es)
> + sev_ioctl(vm->fd, KVM_SEV_LAUNCH_UPDATE_VMSA, NULL);
> + return vm;
> +}

I should've suggested this in my original review. But is it worth
moving `sev_vm_create()` and `sev_ioctl()` into the broader selftests
library, so others can leverage this function to write selftests?

> +
> +static struct kvm_vm *__vm_create(void)
> +{
> + struct kvm_vm *vm;
> + int i;
> +
> + vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
> + for (i = 0; i < NR_MIGRATE_TEST_VCPUS; ++i)
> + vm_vcpu_add(vm, i);
> +
> + return vm;
> +}
> +
> +static int __sev_migrate_from(int dst_fd, int src_fd)
> +{
> + struct kvm_enable_cap cap = {
> + .cap = KVM_CAP_VM_MIGRATE_PROTECTED_VM_FROM,
> + .args = { src_fd }
> + };
> +
> + return ioctl(dst_fd, KVM_ENABLE_CAP, &cap);
> +}
> +
> +
> +static void sev_migrate_from(int dst_fd, int src_fd)
> +{
> + int ret;
> +
> + ret = __sev_migrate_from(dst_fd, src_fd);
> + TEST_ASSERT(!ret, "Migration failed, ret: %d, errno: %d\n", ret, errno);
> +}
> +
> +static void test_sev_migrate_from(bool es)
> +{
> + struct kvm_vm *src_vm;
> + struct kvm_vm *dst_vms[NR_MIGRATE_TEST_VMS];
> + int i;
> +
> + src_vm = sev_vm_create(es);
> + for (i = 0; i < NR_MIGRATE_TEST_VMS; ++i)
> + dst_vms[i] = __vm_create();
> +
> + /* Initial migration from the src to the first dst. */
> + sev_migrate_from(dst_vms[0]->fd, src_vm->fd);
> +
> + for (i = 1; i < NR_MIGRATE_TEST_VMS; i++)
> + sev_migrate_from(dst_vms[i]->fd, dst_vms[i - 1]->fd);
> +
> + /* Migrate the guest back to the original VM. */
> + sev_migrate_from(src_vm->fd, dst_vms[NR_MIGRATE_TEST_VMS - 1]->fd);
> +
> + kvm_vm_free(src_vm);
> + for (i = 0; i < NR_MIGRATE_TEST_VMS; ++i)
> + kvm_vm_free(dst_vms[i]);
> +}
> +
> +struct locking_thread_input {
> + struct kvm_vm *vm;
> + int source_fds[NR_LOCK_TESTING_THREADS];
> +};
> +
> +static void *locking_test_thread(void *arg)
> +{
> + int i, j;
> + struct locking_thread_input *input = (struct locking_thread_input *)arg;
> +
> + for (i = 0; i < NR_LOCK_TESTING_ITERATIONS; ++i) {
> + j = i % NR_LOCK_TESTING_THREADS;
> + __sev_migrate_from(input->vm->fd, input->source_fds[j]);
> + }
> +
> + return NULL;
> +}
> +
> +static void test_sev_migrate_locking(void)
> +{
> + struct locking_thread_input input[NR_LOCK_TESTING_THREADS];
> + pthread_t pt[NR_LOCK_TESTING_THREADS];
> + int i;
> +
> + for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i) {
> + input[i].vm = sev_vm_create(/* es= */ false);
> + input[0].source_fds[i] = input[i].vm->fd;
> + }
> + for (i = 1; i < NR_LOCK_TESTING_THREADS; ++i)
> + memcpy(input[i].source_fds, input[0].source_fds,
> + sizeof(input[i].source_fds));
> +
> + for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i)
> + pthread_create(&pt[i], NULL, locking_test_thread, &input[i]);
> +
> + for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i)
> + pthread_join(pt[i], NULL);
> +}
> +
> +static void test_sev_migrate_parameters(void)
> +{
> + struct kvm_vm *sev_vm, *sev_es_vm, *vm_no_vcpu, *vm_no_sev,
> + *sev_es_vm_no_vmsa;
> + int ret;
> +
> + sev_vm = sev_vm_create(/* es= */ false);
> + sev_es_vm = sev_vm_create(/* es= */ true);
> + vm_no_vcpu = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
> + vm_no_sev = __vm_create();
> + sev_es_vm_no_vmsa = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
> + sev_ioctl(sev_es_vm_no_vmsa->fd, KVM_SEV_ES_INIT, NULL);
> + vm_vcpu_add(sev_es_vm_no_vmsa, 1);
> +
> +
> + ret = __sev_migrate_from(sev_vm->fd, sev_es_vm->fd);
> + TEST_ASSERT(
> + ret == -1 && errno == EINVAL,
> + "Should not be able migrate to SEV enabled VM. ret: %d, errno: %d\n",
> + ret, errno);
> +
> + ret = __sev_migrate_from(sev_es_vm->fd, sev_vm->fd);
> + TEST_ASSERT(
> + ret == -1 && errno == EINVAL,
> + "Should not be able migrate to SEV-ES enabled VM. ret: %d, errno: %d\n",
> + ret, errno);
> +
> + ret = __sev_migrate_from(vm_no_vcpu->fd, sev_es_vm->fd);
> + TEST_ASSERT(
> + ret == -1 && errno == EINVAL,
> + "SEV-ES migrations require same number of vCPUS. ret: %d, errno: %d\n",
> + ret, errno);

How do we know that this failed because `vm_no_vcpu` has no vCPUs or
because it's not a SEV-ES VM?

> +
> + ret = __sev_migrate_from(vm_no_vcpu->fd, sev_es_vm_no_vmsa->fd);
> + TEST_ASSERT(
> + ret == -1 && errno == EINVAL,
> + "SEV-ES migrations require UPDATE_VMSA. ret %d, errno: %d\n",
> + ret, errno);

Same question. How do we know why this failed? `sev_es_vm_no_vmsa` did
not have any vCPUs added. Would it be cleaner to add an additional
param to `sev_vm_create()` to skip calling UPDATE_VMSA? Then,
`sev_es_vm_no_vmsa` can be created from `sev_vm_create()` and it's
obvious to the reader that the VMs are identical except for this aspect.
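
An untested sketch of that refactor (the skip_vmsa parameter is
hypothetical; everything else is the existing helper from this patch):

/* Optionally skip LAUNCH_UPDATE_VMSA so the no-VMSA VM is otherwise
 * identical to a regular SEV-ES VM. */
static struct kvm_vm *sev_vm_create(bool es, bool skip_vmsa)
{
	struct kvm_vm *vm;
	struct kvm_sev_launch_start start = { 0 };
	int i;

	vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
	sev_ioctl(vm->fd, es ? KVM_SEV_ES_INIT : KVM_SEV_INIT, NULL);
	for (i = 0; i < NR_MIGRATE_TEST_VCPUS; ++i)
		vm_vcpu_add(vm, i);
	if (es)
		start.policy |= SEV_POLICY_ES;
	sev_ioctl(vm->fd, KVM_SEV_LAUNCH_START, &start);
	if (es && !skip_vmsa)
		sev_ioctl(vm->fd, KVM_SEV_LAUNCH_UPDATE_VMSA, NULL);
	return vm;
}

The no-VMSA case then becomes sev_vm_create(true, true), differing from
sev_es_vm only in the skipped UPDATE_VMSA.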

> +
> + ret = __sev_migrate_from(vm_no_vcpu->fd, vm_no_sev->fd);
> + TEST_ASSERT(ret == -1 && errno == EINVAL,
> + "Migrations require SEV enabled. ret %d, errno: %d\n", ret,
> + errno);

`vm_no_sev` has vCPUs. Therefore, how do we know why this failed --
(a) differing vCPU counts or (b) no SEV?

> +}
> +
> +int main(int argc, char *argv[])
> +{
> + test_sev_migrate_from(/* es= */ false);
> + test_sev_migrate_from(/* es= */ true);
> + test_sev_migrate_locking();
> + test_sev_migrate_parameters();
> + return 0;
> +}
> --
> 2.33.0.309.g3052b89438-goog
>

2021-09-21 14:22:23

by Peter Gonda

Subject: Re: [PATCH 4/4 V8] selftest: KVM: Add intra host migration tests

On Wed, Sep 15, 2021 at 11:28 AM Marc Orr <[email protected]> wrote:
>
> On Tue, Sep 14, 2021 at 9:47 AM Peter Gonda <[email protected]> wrote:
> >
> > Adds testcases for intra host migration for SEV and SEV-ES. Also adds
> > locking test to confirm no deadlock exists.
> >
> > Signed-off-by: Peter Gonda <[email protected]>
> > Suggested-by: Sean Christopherson <[email protected]>
> > Reviewed-by: Marc Orr <[email protected]>
> > Cc: Marc Orr <[email protected]>
> > Cc: Sean Christopherson <[email protected]>
> > Cc: David Rientjes <[email protected]>
> > Cc: Brijesh Singh <[email protected]>
> > Cc: [email protected]
> > Cc: [email protected]
> > ---
> > tools/testing/selftests/kvm/Makefile | 1 +
> > .../selftests/kvm/x86_64/sev_vm_tests.c | 203 ++++++++++++++++++
> > 2 files changed, 204 insertions(+)
> > create mode 100644 tools/testing/selftests/kvm/x86_64/sev_vm_tests.c
> >
> > diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> > index c103873531e0..44fd3566fb51 100644
> > --- a/tools/testing/selftests/kvm/Makefile
> > +++ b/tools/testing/selftests/kvm/Makefile
> > @@ -72,6 +72,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/vmx_pmu_msrs_test
> > TEST_GEN_PROGS_x86_64 += x86_64/xen_shinfo_test
> > TEST_GEN_PROGS_x86_64 += x86_64/xen_vmcall_test
> > TEST_GEN_PROGS_x86_64 += x86_64/vmx_pi_mmio_test
> > +TEST_GEN_PROGS_x86_64 += x86_64/sev_vm_tests
> > TEST_GEN_PROGS_x86_64 += access_tracking_perf_test
> > TEST_GEN_PROGS_x86_64 += demand_paging_test
> > TEST_GEN_PROGS_x86_64 += dirty_log_test
> > diff --git a/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c b/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c
> > new file mode 100644
> > index 000000000000..ec3bbc96e73a
> > --- /dev/null
> > +++ b/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c
> > @@ -0,0 +1,203 @@
> > +// SPDX-License-Identifier: GPL-2.0-only
> > +#include <linux/kvm.h>
> > +#include <linux/psp-sev.h>
> > +#include <stdio.h>
> > +#include <sys/ioctl.h>
> > +#include <stdlib.h>
> > +#include <errno.h>
> > +#include <pthread.h>
> > +
> > +#include "test_util.h"
> > +#include "kvm_util.h"
> > +#include "processor.h"
> > +#include "svm_util.h"
> > +#include "kselftest.h"
> > +#include "../lib/kvm_util_internal.h"
> > +
> > +#define SEV_POLICY_ES 0b100
> > +
> > +#define NR_MIGRATE_TEST_VCPUS 4
> > +#define NR_MIGRATE_TEST_VMS 3
> > +#define NR_LOCK_TESTING_THREADS 3
> > +#define NR_LOCK_TESTING_ITERATIONS 10000
> > +
> > +static void sev_ioctl(int vm_fd, int cmd_id, void *data)
> > +{
> > + struct kvm_sev_cmd cmd = {
> > + .id = cmd_id,
> > + .data = (uint64_t)data,
> > + .sev_fd = open_sev_dev_path_or_exit(),
> > + };
> > + int ret;
> > +
> > + ret = ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
> > + TEST_ASSERT((ret == 0 || cmd.error == SEV_RET_SUCCESS),
> > + "%d failed: return code: %d, errno: %d, fw error: %d",
> > + cmd_id, ret, errno, cmd.error);
> > +}
> > +
> > +static struct kvm_vm *sev_vm_create(bool es)
> > +{
> > + struct kvm_vm *vm;
> > + struct kvm_sev_launch_start start = { 0 };
> > + int i;
> > +
> > + vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
> > + sev_ioctl(vm->fd, es ? KVM_SEV_ES_INIT : KVM_SEV_INIT, NULL);
> > + for (i = 0; i < NR_MIGRATE_TEST_VCPUS; ++i)
> > + vm_vcpu_add(vm, i);
> > + if (es)
> > + start.policy |= SEV_POLICY_ES;
> > + sev_ioctl(vm->fd, KVM_SEV_LAUNCH_START, &start);
> > + if (es)
> > + sev_ioctl(vm->fd, KVM_SEV_LAUNCH_UPDATE_VMSA, NULL);
> > + return vm;
> > +}
>
> I should've suggested this in my original review. But is it worth
> moving `sev_vm_create()` and `sev_ioctl()` into the broader selftests
> library, so others can leverage this function to write selftests?

This function isn't complete. It doesn't get to launch_finish, i.e. it
only goes far enough for the copyless migration ioctls to work. I think
this would be a good expansion, but it could happen in a follow-up
series, thoughts?
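
For reference, an untested sketch of the remaining steps a fuller helper
would need (sev_launch_finish is a made-up name; the KVM_SEV_* commands are
from the existing SEV launch API):

/*
 * Untested: complete the launch flow so the guest can actually run. A
 * real helper would first encrypt guest memory with
 * KVM_SEV_LAUNCH_UPDATE_DATA and fetch the measurement with
 * KVM_SEV_LAUNCH_MEASURE.
 */
static void sev_launch_finish(struct kvm_vm *vm)
{
	/* Transition the guest launch state to RUNNING. */
	sev_ioctl(vm->fd, KVM_SEV_LAUNCH_FINISH, NULL);
}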

>
> > +
> > +static struct kvm_vm *__vm_create(void)
> > +{
> > + struct kvm_vm *vm;
> > + int i;
> > +
> > + vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
> > + for (i = 0; i < NR_MIGRATE_TEST_VCPUS; ++i)
> > + vm_vcpu_add(vm, i);
> > +
> > + return vm;
> > +}
> > +
> > +static int __sev_migrate_from(int dst_fd, int src_fd)
> > +{
> > + struct kvm_enable_cap cap = {
> > + .cap = KVM_CAP_VM_MIGRATE_PROTECTED_VM_FROM,
> > + .args = { src_fd }
> > + };
> > +
> > + return ioctl(dst_fd, KVM_ENABLE_CAP, &cap);
> > +}
> > +
> > +
> > +static void sev_migrate_from(int dst_fd, int src_fd)
> > +{
> > + int ret;
> > +
> > + ret = __sev_migrate_from(dst_fd, src_fd);
> > + TEST_ASSERT(!ret, "Migration failed, ret: %d, errno: %d\n", ret, errno);
> > +}
> > +
> > +static void test_sev_migrate_from(bool es)
> > +{
> > + struct kvm_vm *src_vm;
> > + struct kvm_vm *dst_vms[NR_MIGRATE_TEST_VMS];
> > + int i;
> > +
> > + src_vm = sev_vm_create(es);
> > + for (i = 0; i < NR_MIGRATE_TEST_VMS; ++i)
> > + dst_vms[i] = __vm_create();
> > +
> > + /* Initial migration from the src to the first dst. */
> > + sev_migrate_from(dst_vms[0]->fd, src_vm->fd);
> > +
> > + for (i = 1; i < NR_MIGRATE_TEST_VMS; i++)
> > + sev_migrate_from(dst_vms[i]->fd, dst_vms[i - 1]->fd);
> > +
> > + /* Migrate the guest back to the original VM. */
> > + sev_migrate_from(src_vm->fd, dst_vms[NR_MIGRATE_TEST_VMS - 1]->fd);
> > +
> > + kvm_vm_free(src_vm);
> > + for (i = 0; i < NR_MIGRATE_TEST_VMS; ++i)
> > + kvm_vm_free(dst_vms[i]);
> > +}
> > +
> > +struct locking_thread_input {
> > + struct kvm_vm *vm;
> > + int source_fds[NR_LOCK_TESTING_THREADS];
> > +};
> > +
> > +static void *locking_test_thread(void *arg)
> > +{
> > + int i, j;
> > + struct locking_thread_input *input = (struct locking_thread_input *)arg;
> > +
> > + for (i = 0; i < NR_LOCK_TESTING_ITERATIONS; ++i) {
> > + j = i % NR_LOCK_TESTING_THREADS;
> > + __sev_migrate_from(input->vm->fd, input->source_fds[j]);
> > + }
> > +
> > + return NULL;
> > +}
> > +
> > +static void test_sev_migrate_locking(void)
> > +{
> > + struct locking_thread_input input[NR_LOCK_TESTING_THREADS];
> > + pthread_t pt[NR_LOCK_TESTING_THREADS];
> > + int i;
> > +
> > + for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i) {
> > + input[i].vm = sev_vm_create(/* es= */ false);
> > + input[0].source_fds[i] = input[i].vm->fd;
> > + }
> > + for (i = 1; i < NR_LOCK_TESTING_THREADS; ++i)
> > + memcpy(input[i].source_fds, input[0].source_fds,
> > + sizeof(input[i].source_fds));
> > +
> > + for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i)
> > + pthread_create(&pt[i], NULL, locking_test_thread, &input[i]);
> > +
> > + for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i)
> > + pthread_join(pt[i], NULL);
> > +}
> > +
> > +static void test_sev_migrate_parameters(void)
> > +{
> > + struct kvm_vm *sev_vm, *sev_es_vm, *vm_no_vcpu, *vm_no_sev,
> > + *sev_es_vm_no_vmsa;
> > + int ret;
> > +
> > + sev_vm = sev_vm_create(/* es= */ false);
> > + sev_es_vm = sev_vm_create(/* es= */ true);
> > + vm_no_vcpu = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
> > + vm_no_sev = __vm_create();
> > + sev_es_vm_no_vmsa = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
> > + sev_ioctl(sev_es_vm_no_vmsa->fd, KVM_SEV_ES_INIT, NULL);
> > + vm_vcpu_add(sev_es_vm_no_vmsa, 1);
> > +
> > +
> > + ret = __sev_migrate_from(sev_vm->fd, sev_es_vm->fd);
> > + TEST_ASSERT(
> > + ret == -1 && errno == EINVAL,
> > + "Should not be able migrate to SEV enabled VM. ret: %d, errno: %d\n",
> > + ret, errno);
> > +
> > + ret = __sev_migrate_from(sev_es_vm->fd, sev_vm->fd);
> > + TEST_ASSERT(
> > + ret == -1 && errno == EINVAL,
> > + "Should not be able migrate to SEV-ES enabled VM. ret: %d, errno: %d\n",
> > + ret, errno);
> > +
> > + ret = __sev_migrate_from(vm_no_vcpu->fd, sev_es_vm->fd);
> > + TEST_ASSERT(
> > + ret == -1 && errno == EINVAL,
> > + "SEV-ES migrations require same number of vCPUS. ret: %d, errno: %d\n",
> > + ret, errno);
>
> How do we know that this failed because `vm_no_vcpu` has no vCPUs or
> because it's not a SEV-ES VM?

Actually with V8 we only migrate to non-SEV(-ES) enabled guests.

>
> > +
> > + ret = __sev_migrate_from(vm_no_vcpu->fd, sev_es_vm_no_vmsa->fd);
> > + TEST_ASSERT(
> > + ret == -1 && errno == EINVAL,
> > + "SEV-ES migrations require UPDATE_VMSA. ret %d, errno: %d\n",
> > + ret, errno);
>
> Same question. How do we know why this failed? `sev_es_vm_no_vmsa` did
> not have any vCPUs added. Would it be cleaner to add an additional
> param to `sev_vm_create()` to skip calling UPDATE_VMSA? Then,
> `sev_es_vm_no_vmsa` can be created from `sev_vm_create()` and it's
> obvious to the reader that the VMs are identical except for this aspect.
>
> > +
> > + ret = __sev_migrate_from(vm_no_vcpu->fd, vm_no_sev->fd);
> > + TEST_ASSERT(ret == -1 && errno == EINVAL,
> > + "Migrations require SEV enabled. ret %d, errno: %d\n", ret,
> > + errno);
>
> `vm_no_sev` has vCPUs. Therefore, how do we know why this failed --
> (a) differing vCPU counts or (b) no SEV?

Ditto, we require dst to be non-SEV enabled.

>
> > +}
> > +
> > +int main(int argc, char *argv[])
> > +{
> > + test_sev_migrate_from(/* es= */ false);
> > + test_sev_migrate_from(/* es= */ true);
> > + test_sev_migrate_locking();
> > + test_sev_migrate_parameters();
> > + return 0;
> > +}
> > --
> > 2.33.0.309.g3052b89438-goog
> >

2021-09-22 16:36:02

by Marc Orr

Subject: Re: [PATCH 4/4 V8] selftest: KVM: Add intra host migration tests

On Tue, Sep 21, 2021 at 7:20 AM Peter Gonda <[email protected]> wrote:
>
> On Wed, Sep 15, 2021 at 11:28 AM Marc Orr <[email protected]> wrote:
> >
> > On Tue, Sep 14, 2021 at 9:47 AM Peter Gonda <[email protected]> wrote:
> > >
> > > Adds testcases for intra host migration for SEV and SEV-ES. Also adds
> > > locking test to confirm no deadlock exists.
> > >
> > > Signed-off-by: Peter Gonda <[email protected]>
> > > Suggested-by: Sean Christopherson <[email protected]>
> > > Reviewed-by: Marc Orr <[email protected]>
> > > Cc: Marc Orr <[email protected]>
> > > Cc: Sean Christopherson <[email protected]>
> > > Cc: David Rientjes <[email protected]>
> > > Cc: Brijesh Singh <[email protected]>
> > > Cc: [email protected]
> > > Cc: [email protected]
> > > ---
> > > tools/testing/selftests/kvm/Makefile | 1 +
> > > .../selftests/kvm/x86_64/sev_vm_tests.c | 203 ++++++++++++++++++
> > > 2 files changed, 204 insertions(+)
> > > create mode 100644 tools/testing/selftests/kvm/x86_64/sev_vm_tests.c
> > >
> > > diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> > > index c103873531e0..44fd3566fb51 100644
> > > --- a/tools/testing/selftests/kvm/Makefile
> > > +++ b/tools/testing/selftests/kvm/Makefile
> > > @@ -72,6 +72,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/vmx_pmu_msrs_test
> > > TEST_GEN_PROGS_x86_64 += x86_64/xen_shinfo_test
> > > TEST_GEN_PROGS_x86_64 += x86_64/xen_vmcall_test
> > > TEST_GEN_PROGS_x86_64 += x86_64/vmx_pi_mmio_test
> > > +TEST_GEN_PROGS_x86_64 += x86_64/sev_vm_tests
> > > TEST_GEN_PROGS_x86_64 += access_tracking_perf_test
> > > TEST_GEN_PROGS_x86_64 += demand_paging_test
> > > TEST_GEN_PROGS_x86_64 += dirty_log_test
> > > diff --git a/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c b/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c
> > > new file mode 100644
> > > index 000000000000..ec3bbc96e73a
> > > --- /dev/null
> > > +++ b/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c
> > > @@ -0,0 +1,203 @@
> > > +// SPDX-License-Identifier: GPL-2.0-only
> > > +#include <linux/kvm.h>
> > > +#include <linux/psp-sev.h>
> > > +#include <stdio.h>
> > > +#include <sys/ioctl.h>
> > > +#include <stdlib.h>
> > > +#include <errno.h>
> > > +#include <pthread.h>
> > > +
> > > +#include "test_util.h"
> > > +#include "kvm_util.h"
> > > +#include "processor.h"
> > > +#include "svm_util.h"
> > > +#include "kselftest.h"
> > > +#include "../lib/kvm_util_internal.h"
> > > +
> > > +#define SEV_POLICY_ES 0b100
> > > +
> > > +#define NR_MIGRATE_TEST_VCPUS 4
> > > +#define NR_MIGRATE_TEST_VMS 3
> > > +#define NR_LOCK_TESTING_THREADS 3
> > > +#define NR_LOCK_TESTING_ITERATIONS 10000
> > > +
> > > +static void sev_ioctl(int vm_fd, int cmd_id, void *data)
> > > +{
> > > + struct kvm_sev_cmd cmd = {
> > > + .id = cmd_id,
> > > + .data = (uint64_t)data,
> > > + .sev_fd = open_sev_dev_path_or_exit(),
> > > + };
> > > + int ret;
> > > +
> > > + ret = ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
> > > + TEST_ASSERT((ret == 0 || cmd.error == SEV_RET_SUCCESS),
> > > + "%d failed: return code: %d, errno: %d, fw error: %d",
> > > + cmd_id, ret, errno, cmd.error);
> > > +}
> > > +
> > > +static struct kvm_vm *sev_vm_create(bool es)
> > > +{
> > > + struct kvm_vm *vm;
> > > + struct kvm_sev_launch_start start = { 0 };
> > > + int i;
> > > +
> > > + vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
> > > + sev_ioctl(vm->fd, es ? KVM_SEV_ES_INIT : KVM_SEV_INIT, NULL);
> > > + for (i = 0; i < NR_MIGRATE_TEST_VCPUS; ++i)
> > > + vm_vcpu_add(vm, i);
> > > + if (es)
> > > + start.policy |= SEV_POLICY_ES;
> > > + sev_ioctl(vm->fd, KVM_SEV_LAUNCH_START, &start);
> > > + if (es)
> > > + sev_ioctl(vm->fd, KVM_SEV_LAUNCH_UPDATE_VMSA, NULL);
> > > + return vm;
> > > +}
> >
> > I should've suggested this in my original review. But is it worth
> > moving `sev_vm_create()` and `sev_ioctl()` into the broader selftests
> > library, so others can leverage this function to write selftests?
>
> This function isn't fully complete. It doesn't get to launch_finish,
> i.e. it only goes far enough for copyless migration ioctls to work. I
> think this would be a good expansion but could happen in follow up
> series, thoughts?

SGTM. Let's leave it here for now then.

>
> >
> > > +
> > > +static struct kvm_vm *__vm_create(void)
> > > +{
> > > + struct kvm_vm *vm;
> > > + int i;
> > > +
> > > + vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
> > > + for (i = 0; i < NR_MIGRATE_TEST_VCPUS; ++i)
> > > + vm_vcpu_add(vm, i);
> > > +
> > > + return vm;
> > > +}
> > > +
> > > +static int __sev_migrate_from(int dst_fd, int src_fd)
> > > +{
> > > + struct kvm_enable_cap cap = {
> > > + .cap = KVM_CAP_VM_MIGRATE_PROTECTED_VM_FROM,
> > > + .args = { src_fd }
> > > + };
> > > +
> > > + return ioctl(dst_fd, KVM_ENABLE_CAP, &cap);
> > > +}
> > > +
> > > +
> > > +static void sev_migrate_from(int dst_fd, int src_fd)
> > > +{
> > > + int ret;
> > > +
> > > + ret = __sev_migrate_from(dst_fd, src_fd);
> > > + TEST_ASSERT(!ret, "Migration failed, ret: %d, errno: %d\n", ret, errno);
> > > +}
> > > +
> > > +static void test_sev_migrate_from(bool es)
> > > +{
> > > + struct kvm_vm *src_vm;
> > > + struct kvm_vm *dst_vms[NR_MIGRATE_TEST_VMS];
> > > + int i;
> > > +
> > > + src_vm = sev_vm_create(es);
> > > + for (i = 0; i < NR_MIGRATE_TEST_VMS; ++i)
> > > + dst_vms[i] = __vm_create();
> > > +
> > > + /* Initial migration from the src to the first dst. */
> > > + sev_migrate_from(dst_vms[0]->fd, src_vm->fd);
> > > +
> > > + for (i = 1; i < NR_MIGRATE_TEST_VMS; i++)
> > > + sev_migrate_from(dst_vms[i]->fd, dst_vms[i - 1]->fd);
> > > +
> > > + /* Migrate the guest back to the original VM. */
> > > + sev_migrate_from(src_vm->fd, dst_vms[NR_MIGRATE_TEST_VMS - 1]->fd);
> > > +
> > > + kvm_vm_free(src_vm);
> > > + for (i = 0; i < NR_MIGRATE_TEST_VMS; ++i)
> > > + kvm_vm_free(dst_vms[i]);
> > > +}
> > > +
> > > +struct locking_thread_input {
> > > + struct kvm_vm *vm;
> > > + int source_fds[NR_LOCK_TESTING_THREADS];
> > > +};
> > > +
> > > +static void *locking_test_thread(void *arg)
> > > +{
> > > + int i, j;
> > > + struct locking_thread_input *input = (struct locking_thread_input *)arg;
> > > +
> > > + for (i = 0; i < NR_LOCK_TESTING_ITERATIONS; ++i) {
> > > + j = i % NR_LOCK_TESTING_THREADS;
> > > + __sev_migrate_from(input->vm->fd, input->source_fds[j]);
> > > + }
> > > +
> > > + return NULL;
> > > +}
> > > +
> > > +static void test_sev_migrate_locking(void)
> > > +{
> > > + struct locking_thread_input input[NR_LOCK_TESTING_THREADS];
> > > + pthread_t pt[NR_LOCK_TESTING_THREADS];
> > > + int i;
> > > +
> > > + for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i) {
> > > + input[i].vm = sev_vm_create(/* es= */ false);
> > > + input[0].source_fds[i] = input[i].vm->fd;
> > > + }
> > > + for (i = 1; i < NR_LOCK_TESTING_THREADS; ++i)
> > > + memcpy(input[i].source_fds, input[0].source_fds,
> > > + sizeof(input[i].source_fds));
> > > +
> > > + for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i)
> > > + pthread_create(&pt[i], NULL, locking_test_thread, &input[i]);
> > > +
> > > + for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i)
> > > + pthread_join(pt[i], NULL);
> > > +}
> > > +
> > > +static void test_sev_migrate_parameters(void)
> > > +{
> > > + struct kvm_vm *sev_vm, *sev_es_vm, *vm_no_vcpu, *vm_no_sev,
> > > + *sev_es_vm_no_vmsa;
> > > + int ret;
> > > +
> > > + sev_vm = sev_vm_create(/* es= */ false);
> > > + sev_es_vm = sev_vm_create(/* es= */ true);
> > > + vm_no_vcpu = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
> > > + vm_no_sev = __vm_create();
> > > + sev_es_vm_no_vmsa = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
> > > + sev_ioctl(sev_es_vm_no_vmsa->fd, KVM_SEV_ES_INIT, NULL);
> > > + vm_vcpu_add(sev_es_vm_no_vmsa, 1);
> > > +
> > > +
> > > + ret = __sev_migrate_from(sev_vm->fd, sev_es_vm->fd);
> > > + TEST_ASSERT(
> > > + ret == -1 && errno == EINVAL,
> > > + "Should not be able migrate to SEV enabled VM. ret: %d, errno: %d\n",
> > > + ret, errno);
> > > +
> > > + ret = __sev_migrate_from(sev_es_vm->fd, sev_vm->fd);
> > > + TEST_ASSERT(
> > > + ret == -1 && errno == EINVAL,
> > > + "Should not be able migrate to SEV-ES enabled VM. ret: %d, errno: %d\n",
> > > + ret, errno);
> > > +
> > > + ret = __sev_migrate_from(vm_no_vcpu->fd, sev_es_vm->fd);
> > > + TEST_ASSERT(
> > > + ret == -1 && errno == EINVAL,
> > > + "SEV-ES migrations require same number of vCPUS. ret: %d, errno: %d\n",
> > > + ret, errno);
> >
> > How do we know that this failed because `vm_no_vcpu` has no vCPUs or
> > because it's not a SEV-ES VM?
>
> Actually, with V8 we only migrate to non-SEV(-ES)-enabled guests.

I think my point is that the test case should be written to treat the
underlying KVM code as a black box. Without looking at the KVM code,
the test case should be set up to be accepted perfectly by KVM and then
mutated in a minimal way to trigger the intended failure case.

Here, we've defined `vm_no_vcpu`, which as far as I can tell is: (1)
not a SEV VM, (2) not a SEV-ES VM, (3) has no vCPUs. Based on the
error message in the TEST_ASSERT, the intention here is to verify that
a migration that would otherwise work fails because the target has a
different number of vCPUs than the source. Therefore, I think
`vm_no_vcpu` should be defined as a SEV-ES VM, so that the test case
is setup such that it would've otherwise passed if `vm_no_vcpu` had
the correct number of vCPUs added.

>
> >
> > > +
> > > + ret = __sev_migrate_from(vm_no_vcpu->fd, sev_es_vm_no_vmsa->fd);
> > > + TEST_ASSERT(
> > > + ret == -1 && errno == EINVAL,
> > > + "SEV-ES migrations require UPDATE_VMSA. ret %d, errno: %d\n",
> > > + ret, errno);
> >
> > Same question. How do we know why this failed? `sev_es_vm_no_vmsa` did
> > not have any vCPUs added. Would it be cleaner to add an additional
> > param to `sev_vm_create()` to skip calling UPDATE_VMSA? Then,
> > `sev_es_vm_no_vmsa` can be created from `sev_vm_create()` and it's
> > obvious to the reader that the VMs are identical except for this aspect.
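
A sketch of that suggestion, purely hypothetical and not the posted code: thread an update_vmsa flag through sev_vm_create() so the no-VMSA VM differs from sev_es_vm in exactly one respect:

static struct kvm_vm *sev_vm_create(bool es, bool update_vmsa)
{
        struct kvm_vm *vm;
        struct kvm_sev_launch_start start = { 0 };
        int i;

        vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
        sev_ioctl(vm->fd, es ? KVM_SEV_ES_INIT : KVM_SEV_INIT, NULL);
        for (i = 0; i < NR_MIGRATE_TEST_VCPUS; ++i)
                vm_vcpu_add(vm, i);
        if (es)
                start.policy |= SEV_POLICY_ES;
        sev_ioctl(vm->fd, KVM_SEV_LAUNCH_START, &start);
        /* Skipping LAUNCH_UPDATE_VMSA is the only delta vs. a normal SEV-ES VM. */
        if (es && update_vmsa)
                sev_ioctl(vm->fd, KVM_SEV_LAUNCH_UPDATE_VMSA, NULL);
        return vm;
}

/* Identical to sev_es_vm except for the missing VMSAs: */
sev_es_vm_no_vmsa = sev_vm_create(/* es= */ true, /* update_vmsa= */ false);
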
> >
> > > +
> > > + ret = __sev_migrate_from(vm_no_vcpu->fd, vm_no_sev->fd);
> > > + TEST_ASSERT(ret == -1 && errno == EINVAL,
> > > + "Migrations require SEV enabled. ret %d, errno: %d\n", ret,
> > > + errno);
> >
> > `vm_no_sev` has vCPUs. Therefore, how do we know why this failed --
> > (a) differing vCPU counts or (b) no SEV?
>
> Ditto, we require dst to be non-SEV enabled.

Understood. But I think the test should treat KVM as a black box.
Therefore, I think in this test case, `vm_no_vcpu` should be defined
to have the same number of vCPUs as `vm_no_sev`.

>
> >
> > > +}
> > > +
> > > +int main(int argc, char *argv[])
> > > +{
> > > + test_sev_migrate_from(/* es= */ false);
> > > + test_sev_migrate_from(/* es= */ true);
> > > + test_sev_migrate_locking();
> > > + test_sev_migrate_parameters();
> > > + return 0;
> > > +}
> > > --
> > > 2.33.0.309.g3052b89438-goog
> > >

2021-09-22 16:54:16

by Peter Gonda

[permalink] [raw]
Subject: Re: [PATCH 4/4 V8] selftest: KVM: Add intra host migration tests


On Wed, Sep 22, 2021 at 10:34 AM Marc Orr <[email protected]> wrote:
>
> On Tue, Sep 21, 2021 at 7:20 AM Peter Gonda <[email protected]> wrote:
> >
> > On Wed, Sep 15, 2021 at 11:28 AM Marc Orr <[email protected]> wrote:
> > >
> > > On Tue, Sep 14, 2021 at 9:47 AM Peter Gonda <[email protected]> wrote:
> > > >
> > > > Adds testcases for intra host migration for SEV and SEV-ES. Also adds a
> > > > locking test to confirm no deadlock exists.
> > > >
> > > > Signed-off-by: Peter Gonda <[email protected]>
> > > > Suggested-by: Sean Christopherson <[email protected]>
> > > > Reviewed-by: Marc Orr <[email protected]>
> > > > Cc: Marc Orr <[email protected]>
> > > > Cc: Sean Christopherson <[email protected]>
> > > > Cc: David Rientjes <[email protected]>
> > > > Cc: Brijesh Singh <[email protected]>
> > > > Cc: [email protected]
> > > > Cc: [email protected]
> > > > ---
> > > > tools/testing/selftests/kvm/Makefile | 1 +
> > > > .../selftests/kvm/x86_64/sev_vm_tests.c | 203 ++++++++++++++++++
> > > > 2 files changed, 204 insertions(+)
> > > > create mode 100644 tools/testing/selftests/kvm/x86_64/sev_vm_tests.c
> > > >
> > > > diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> > > > index c103873531e0..44fd3566fb51 100644
> > > > --- a/tools/testing/selftests/kvm/Makefile
> > > > +++ b/tools/testing/selftests/kvm/Makefile
> > > > @@ -72,6 +72,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/vmx_pmu_msrs_test
> > > > TEST_GEN_PROGS_x86_64 += x86_64/xen_shinfo_test
> > > > TEST_GEN_PROGS_x86_64 += x86_64/xen_vmcall_test
> > > > TEST_GEN_PROGS_x86_64 += x86_64/vmx_pi_mmio_test
> > > > +TEST_GEN_PROGS_x86_64 += x86_64/sev_vm_tests
> > > > TEST_GEN_PROGS_x86_64 += access_tracking_perf_test
> > > > TEST_GEN_PROGS_x86_64 += demand_paging_test
> > > > TEST_GEN_PROGS_x86_64 += dirty_log_test
> > > > diff --git a/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c b/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c
> > > > new file mode 100644
> > > > index 000000000000..ec3bbc96e73a
> > > > --- /dev/null
> > > > +++ b/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c
> > > > @@ -0,0 +1,203 @@
> > > > +// SPDX-License-Identifier: GPL-2.0-only
> > > > +#include <linux/kvm.h>
> > > > +#include <linux/psp-sev.h>
> > > > +#include <stdio.h>
> > > > +#include <sys/ioctl.h>
> > > > +#include <stdlib.h>
> > > > +#include <errno.h>
> > > > +#include <pthread.h>
> > > > +
> > > > +#include "test_util.h"
> > > > +#include "kvm_util.h"
> > > > +#include "processor.h"
> > > > +#include "svm_util.h"
> > > > +#include "kselftest.h"
> > > > +#include "../lib/kvm_util_internal.h"
> > > > +
> > > > +#define SEV_POLICY_ES 0b100
> > > > +
> > > > +#define NR_MIGRATE_TEST_VCPUS 4
> > > > +#define NR_MIGRATE_TEST_VMS 3
> > > > +#define NR_LOCK_TESTING_THREADS 3
> > > > +#define NR_LOCK_TESTING_ITERATIONS 10000
> > > > +
> > > > +static void sev_ioctl(int vm_fd, int cmd_id, void *data)
> > > > +{
> > > > + struct kvm_sev_cmd cmd = {
> > > > + .id = cmd_id,
> > > > + .data = (uint64_t)data,
> > > > + .sev_fd = open_sev_dev_path_or_exit(),
> > > > + };
> > > > + int ret;
> > > > +
> > > > + ret = ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
> > > > + TEST_ASSERT((ret == 0 || cmd.error == SEV_RET_SUCCESS),
> > > > + "%d failed: return code: %d, errno: %d, fw error: %d",
> > > > + cmd_id, ret, errno, cmd.error);
> > > > +}
> > > > +
> > > > +static struct kvm_vm *sev_vm_create(bool es)
> > > > +{
> > > > + struct kvm_vm *vm;
> > > > + struct kvm_sev_launch_start start = { 0 };
> > > > + int i;
> > > > +
> > > > + vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
> > > > + sev_ioctl(vm->fd, es ? KVM_SEV_ES_INIT : KVM_SEV_INIT, NULL);
> > > > + for (i = 0; i < NR_MIGRATE_TEST_VCPUS; ++i)
> > > > + vm_vcpu_add(vm, i);
> > > > + if (es)
> > > > + start.policy |= SEV_POLICY_ES;
> > > > + sev_ioctl(vm->fd, KVM_SEV_LAUNCH_START, &start);
> > > > + if (es)
> > > > + sev_ioctl(vm->fd, KVM_SEV_LAUNCH_UPDATE_VMSA, NULL);
> > > > + return vm;
> > > > +}
> > >
> > > I should've suggested this in my original review. But is it worth
> > > moving `sev_vm_create()` and `sev_ioctl()` into the broader selftests
> > > library, so others can leverage this function to write selftests?
> >
> > This function isn't fully complete. It doesn't get to launch_finish,
> > i.e. it only goes far enough for copyless migration ioctls to work. I
> > think this would be a good expansion but it could happen in a follow-up
> > series, thoughts?
>
> SGTM. Let's leave it here for now then.
>
> >
> > >
> > > > +
> > > > +static struct kvm_vm *__vm_create(void)
> > > > +{
> > > > + struct kvm_vm *vm;
> > > > + int i;
> > > > +
> > > > + vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
> > > > + for (i = 0; i < NR_MIGRATE_TEST_VCPUS; ++i)
> > > > + vm_vcpu_add(vm, i);
> > > > +
> > > > + return vm;
> > > > +}
> > > > +
> > > > +static int __sev_migrate_from(int dst_fd, int src_fd)
> > > > +{
> > > > + struct kvm_enable_cap cap = {
> > > > + .cap = KVM_CAP_VM_MIGRATE_PROTECTED_VM_FROM,
> > > > + .args = { src_fd }
> > > > + };
> > > > +
> > > > + return ioctl(dst_fd, KVM_ENABLE_CAP, &cap);
> > > > +}
> > > > +
> > > > +
> > > > +static void sev_migrate_from(int dst_fd, int src_fd)
> > > > +{
> > > > + int ret;
> > > > +
> > > > + ret = __sev_migrate_from(dst_fd, src_fd);
> > > > + TEST_ASSERT(!ret, "Migration failed, ret: %d, errno: %d\n", ret, errno);
> > > > +}
> > > > +
> > > > +static void test_sev_migrate_from(bool es)
> > > > +{
> > > > + struct kvm_vm *src_vm;
> > > > + struct kvm_vm *dst_vms[NR_MIGRATE_TEST_VMS];
> > > > + int i;
> > > > +
> > > > + src_vm = sev_vm_create(es);
> > > > + for (i = 0; i < NR_MIGRATE_TEST_VMS; ++i)
> > > > + dst_vms[i] = __vm_create();
> > > > +
> > > > + /* Initial migration from the src to the first dst. */
> > > > + sev_migrate_from(dst_vms[0]->fd, src_vm->fd);
> > > > +
> > > > + for (i = 1; i < NR_MIGRATE_TEST_VMS; i++)
> > > > + sev_migrate_from(dst_vms[i]->fd, dst_vms[i - 1]->fd);
> > > > +
> > > > + /* Migrate the guest back to the original VM. */
> > > > + sev_migrate_from(src_vm->fd, dst_vms[NR_MIGRATE_TEST_VMS - 1]->fd);
> > > > +
> > > > + kvm_vm_free(src_vm);
> > > > + for (i = 0; i < NR_MIGRATE_TEST_VMS; ++i)
> > > > + kvm_vm_free(dst_vms[i]);
> > > > +}
> > > > +
> > > > +struct locking_thread_input {
> > > > + struct kvm_vm *vm;
> > > > + int source_fds[NR_LOCK_TESTING_THREADS];
> > > > +};
> > > > +
> > > > +static void *locking_test_thread(void *arg)
> > > > +{
> > > > + int i, j;
> > > > + struct locking_thread_input *input = (struct locking_thread_input *)arg;
> > > > +
> > > > + for (i = 0; i < NR_LOCK_TESTING_ITERATIONS; ++i) {
> > > > + j = i % NR_LOCK_TESTING_THREADS;
> > > > + __sev_migrate_from(input->vm->fd, input->source_fds[j]);
> > > > + }
> > > > +
> > > > + return NULL;
> > > > +}
> > > > +
> > > > +static void test_sev_migrate_locking(void)
> > > > +{
> > > > + struct locking_thread_input input[NR_LOCK_TESTING_THREADS];
> > > > + pthread_t pt[NR_LOCK_TESTING_THREADS];
> > > > + int i;
> > > > +
> > > > + for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i) {
> > > > + input[i].vm = sev_vm_create(/* es= */ false);
> > > > + input[0].source_fds[i] = input[i].vm->fd;
> > > > + }
> > > > + for (i = 1; i < NR_LOCK_TESTING_THREADS; ++i)
> > > > + memcpy(input[i].source_fds, input[0].source_fds,
> > > > + sizeof(input[i].source_fds));
> > > > +
> > > > + for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i)
> > > > + pthread_create(&pt[i], NULL, locking_test_thread, &input[i]);
> > > > +
> > > > + for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i)
> > > > + pthread_join(pt[i], NULL);
> > > > +}
> > > > +
> > > > +static void test_sev_migrate_parameters(void)
> > > > +{
> > > > + struct kvm_vm *sev_vm, *sev_es_vm, *vm_no_vcpu, *vm_no_sev,
> > > > + *sev_es_vm_no_vmsa;
> > > > + int ret;
> > > > +
> > > > + sev_vm = sev_vm_create(/* es= */ false);
> > > > + sev_es_vm = sev_vm_create(/* es= */ true);
> > > > + vm_no_vcpu = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
> > > > + vm_no_sev = __vm_create();
> > > > + sev_es_vm_no_vmsa = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
> > > > + sev_ioctl(sev_es_vm_no_vmsa->fd, KVM_SEV_ES_INIT, NULL);
> > > > + vm_vcpu_add(sev_es_vm_no_vmsa, 1);
> > > > +
> > > > +
> > > > + ret = __sev_migrate_from(sev_vm->fd, sev_es_vm->fd);
> > > > + TEST_ASSERT(
> > > > + ret == -1 && errno == EINVAL,
> > > > + "Should not be able migrate to SEV enabled VM. ret: %d, errno: %d\n",
> > > > + ret, errno);
> > > > +
> > > > + ret = __sev_migrate_from(sev_es_vm->fd, sev_vm->fd);
> > > > + TEST_ASSERT(
> > > > + ret == -1 && errno == EINVAL,
> > > > + "Should not be able migrate to SEV-ES enabled VM. ret: %d, errno: %d\n",
> > > > + ret, errno);
> > > > +
> > > > + ret = __sev_migrate_from(vm_no_vcpu->fd, sev_es_vm->fd);
> > > > + TEST_ASSERT(
> > > > + ret == -1 && errno == EINVAL,
> > > > + "SEV-ES migrations require same number of vCPUS. ret: %d, errno: %d\n",
> > > > + ret, errno);
> > >
> > > How do we know that this failed because `vm_no_vcpu` has no vCPUs or
> > > because it's not a SEV-ES VM?
> >
> > Actually, with V8 we only migrate to non-SEV(-ES)-enabled guests.
>
> I think my point is that the test case should be written to treat the
> underlying KVM code as a black box. Without looking at the KVM code,
> the test case should be set up to be accepted perfectly by KVM and then
> mutated in a minimal way to trigger the intended failure case.
>
> Here, we've defined `vm_no_vcpu`, which as far as I can tell is: (1)
> not a SEV VM, (2) not a SEV-ES VM, (3) has no vCPUs. Based on the
> error message in the TEST_ASSERT, the intention here is to verify that
> a migration that would otherwise work fails because the target has a
> different number of vCPUs than the source. Therefore, I think
> `vm_no_vcpu` should be defined as a SEV-ES VM, so that the test case
> is setup such that it would've otherwise passed if `vm_no_vcpu` had
> the correct number of vCPUs added.

I think I get what you are asking for, but I think this case is good as
written; it is the second case that should be updated. Now, to migrate,
the src must be SEV or SEV-ES enabled (with the VMSAs set up), and the
dst must NOT be SEV or SEV-ES enabled but must have the same number of
vCPUs.

So if |vm_no_vcpu| had the same 4 vCPUs as |sev_es_vm|, this call would work.
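
To make that concrete, a sketch using the helpers from the patch; dst_matching and dst_short are hypothetical names, not code from the series:

/*
 * The V8 rule restated in code: the dst must be a plain, non-SEV VM
 * with the same number of vCPUs as the SEV-ES src.
 */
struct kvm_vm *dst_matching = __vm_create();    /* NR_MIGRATE_TEST_VCPUS vCPUs */
struct kvm_vm *dst_short = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
int ret;

vm_vcpu_add(dst_short, 0);      /* a single vCPU, fewer than the src */

/* The vCPU count is the only delta from a valid dst: expect EINVAL. */
ret = __sev_migrate_from(dst_short->fd, sev_es_vm->fd);
TEST_ASSERT(ret == -1 && errno == EINVAL,
            "Mismatched vCPU count should fail. ret: %d, errno: %d\n",
            ret, errno);

/*
 * With matching vCPU counts the otherwise-identical call succeeds (done
 * last here, since a successful migration moves the SEV state out of
 * sev_es_vm).
 */
sev_migrate_from(dst_matching->fd, sev_es_vm->fd);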

>
> >
> > >
> > > > +
> > > > + ret = __sev_migrate_from(vm_no_vcpu->fd, sev_es_vm_no_vmsa->fd);
> > > > + TEST_ASSERT(
> > > > + ret == -1 && errno == EINVAL,
> > > > + "SEV-ES migrations require UPDATE_VMSA. ret %d, errno: %d\n",
> > > > + ret, errno);
> > >
> > > Same question. How do we know why this failed? `sev_es_vm_no_vmsa` did
> > > not have any vCPUs added. Would it be cleaner to add an additional
> > > param to `sev_vm_create()` to skip calling UPDATE_VMSA? Then,
> > > `sev_es_vm_no_vmsa` can be created from `sev_vm_create()` and it's
> > > obvious to the reader that the VMs are identical except for this aspect.
> > >
> > > > +
> > > > + ret = __sev_migrate_from(vm_no_vcpu->fd, vm_no_sev->fd);
> > > > + TEST_ASSERT(ret == -1 && errno == EINVAL,
> > > > + "Migrations require SEV enabled. ret %d, errno: %d\n", ret,
> > > > + errno);
> > >
> > > `vm_no_sev` has vCPUs. Therefore, how do we know why this failed --
> > > (a) differing vCPU counts or (b) no SEV?
> >
> > Ditto, we require dst to be non-SEV enabled.
>
> Understood. But I think the test should treat KVM as a black box.
> Therefore, I think in this test case, `vm_no_vcpu` should be defined
> to have the same number of vCPUs as `vm_no_sev`.

Ack, I lazily reused |vm_no_vcpu|. I will add a |vm_no_sev_two| or
something which has the same number of vCPUs as |vm_no_sev|.
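
A sketch of that follow-up, with |vm_no_sev_two| as the hypothetical name floated above rather than code from the series:

/*
 * A second non-SEV VM with the same vCPU count as vm_no_sev, so the
 * missing SEV enablement on the source is the only possible cause of
 * the failure.
 */
struct kvm_vm *vm_no_sev_two = __vm_create();   /* NR_MIGRATE_TEST_VCPUS vCPUs */
int ret;

ret = __sev_migrate_from(vm_no_sev_two->fd, vm_no_sev->fd);
TEST_ASSERT(ret == -1 && errno == EINVAL,
            "Migrations require SEV enabled. ret %d, errno: %d\n", ret, errno);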

>
> >
> > >
> > > > +}
> > > > +
> > > > +int main(int argc, char *argv[])
> > > > +{
> > > > + test_sev_migrate_from(/* es= */ false);
> > > > + test_sev_migrate_from(/* es= */ true);
> > > > + test_sev_migrate_locking();
> > > > + test_sev_migrate_parameters();
> > > > + return 0;
> > > > +}
> > > > --
> > > > 2.33.0.309.g3052b89438-goog
> > > >

2021-09-22 17:01:22

by Marc Orr

[permalink] [raw]
Subject: Re: [PATCH 4/4 V8] selftest: KVM: Add intra host migration tests

On Wed, Sep 22, 2021 at 9:52 AM Peter Gonda <[email protected]> wrote:
>
>
> On Wed, Sep 22, 2021 at 10:34 AM Marc Orr <[email protected]> wrote:
> >
> > On Tue, Sep 21, 2021 at 7:20 AM Peter Gonda <[email protected]> wrote:
> > >
> > > On Wed, Sep 15, 2021 at 11:28 AM Marc Orr <[email protected]> wrote:
> > > >
> > > > On Tue, Sep 14, 2021 at 9:47 AM Peter Gonda <[email protected]> wrote:
> > > > >
> > > > > Adds testcases for intra host migration for SEV and SEV-ES. Also adds a
> > > > > locking test to confirm no deadlock exists.
> > > > >
> > > > > Signed-off-by: Peter Gonda <[email protected]>
> > > > > Suggested-by: Sean Christopherson <[email protected]>
> > > > > Reviewed-by: Marc Orr <[email protected]>
> > > > > Cc: Marc Orr <[email protected]>
> > > > > Cc: Sean Christopherson <[email protected]>
> > > > > Cc: David Rientjes <[email protected]>
> > > > > Cc: Brijesh Singh <[email protected]>
> > > > > Cc: [email protected]
> > > > > Cc: [email protected]
> > > > > ---
> > > > > tools/testing/selftests/kvm/Makefile | 1 +
> > > > > .../selftests/kvm/x86_64/sev_vm_tests.c | 203 ++++++++++++++++++
> > > > > 2 files changed, 204 insertions(+)
> > > > > create mode 100644 tools/testing/selftests/kvm/x86_64/sev_vm_tests.c
> > > > >
> > > > > diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> > > > > index c103873531e0..44fd3566fb51 100644
> > > > > --- a/tools/testing/selftests/kvm/Makefile
> > > > > +++ b/tools/testing/selftests/kvm/Makefile
> > > > > @@ -72,6 +72,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/vmx_pmu_msrs_test
> > > > > TEST_GEN_PROGS_x86_64 += x86_64/xen_shinfo_test
> > > > > TEST_GEN_PROGS_x86_64 += x86_64/xen_vmcall_test
> > > > > TEST_GEN_PROGS_x86_64 += x86_64/vmx_pi_mmio_test
> > > > > +TEST_GEN_PROGS_x86_64 += x86_64/sev_vm_tests
> > > > > TEST_GEN_PROGS_x86_64 += access_tracking_perf_test
> > > > > TEST_GEN_PROGS_x86_64 += demand_paging_test
> > > > > TEST_GEN_PROGS_x86_64 += dirty_log_test
> > > > > diff --git a/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c b/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c
> > > > > new file mode 100644
> > > > > index 000000000000..ec3bbc96e73a
> > > > > --- /dev/null
> > > > > +++ b/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c
> > > > > @@ -0,0 +1,203 @@
> > > > > +// SPDX-License-Identifier: GPL-2.0-only
> > > > > +#include <linux/kvm.h>
> > > > > +#include <linux/psp-sev.h>
> > > > > +#include <stdio.h>
> > > > > +#include <sys/ioctl.h>
> > > > > +#include <stdlib.h>
> > > > > +#include <errno.h>
> > > > > +#include <pthread.h>
> > > > > +
> > > > > +#include "test_util.h"
> > > > > +#include "kvm_util.h"
> > > > > +#include "processor.h"
> > > > > +#include "svm_util.h"
> > > > > +#include "kselftest.h"
> > > > > +#include "../lib/kvm_util_internal.h"
> > > > > +
> > > > > +#define SEV_POLICY_ES 0b100
> > > > > +
> > > > > +#define NR_MIGRATE_TEST_VCPUS 4
> > > > > +#define NR_MIGRATE_TEST_VMS 3
> > > > > +#define NR_LOCK_TESTING_THREADS 3
> > > > > +#define NR_LOCK_TESTING_ITERATIONS 10000
> > > > > +
> > > > > +static void sev_ioctl(int vm_fd, int cmd_id, void *data)
> > > > > +{
> > > > > + struct kvm_sev_cmd cmd = {
> > > > > + .id = cmd_id,
> > > > > + .data = (uint64_t)data,
> > > > > + .sev_fd = open_sev_dev_path_or_exit(),
> > > > > + };
> > > > > + int ret;
> > > > > +
> > > > > + ret = ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
> > > > > + TEST_ASSERT((ret == 0 || cmd.error == SEV_RET_SUCCESS),
> > > > > + "%d failed: return code: %d, errno: %d, fw error: %d",
> > > > > + cmd_id, ret, errno, cmd.error);
> > > > > +}
> > > > > +
> > > > > +static struct kvm_vm *sev_vm_create(bool es)
> > > > > +{
> > > > > + struct kvm_vm *vm;
> > > > > + struct kvm_sev_launch_start start = { 0 };
> > > > > + int i;
> > > > > +
> > > > > + vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
> > > > > + sev_ioctl(vm->fd, es ? KVM_SEV_ES_INIT : KVM_SEV_INIT, NULL);
> > > > > + for (i = 0; i < NR_MIGRATE_TEST_VCPUS; ++i)
> > > > > + vm_vcpu_add(vm, i);
> > > > > + if (es)
> > > > > + start.policy |= SEV_POLICY_ES;
> > > > > + sev_ioctl(vm->fd, KVM_SEV_LAUNCH_START, &start);
> > > > > + if (es)
> > > > > + sev_ioctl(vm->fd, KVM_SEV_LAUNCH_UPDATE_VMSA, NULL);
> > > > > + return vm;
> > > > > +}
> > > >
> > > > I should've suggested this in my original review. But is it worth
> > > > moving `sev_vm_create()` and `sev_ioctl()` into the broader selftests
> > > > library, so others can leverage this function to write selftests?
> > >
> > > This function isn't fully complete. It doesn't get to launch_finish,
> > > i.e. it only goes far enough for copyless migration ioctls to work. I
> > > think this would be a good expansion but it could happen in a follow-up
> > > series, thoughts?
> >
> > SGTM. Let's leave it here for now then.
> >
> > >
> > > >
> > > > > +
> > > > > +static struct kvm_vm *__vm_create(void)
> > > > > +{
> > > > > + struct kvm_vm *vm;
> > > > > + int i;
> > > > > +
> > > > > + vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
> > > > > + for (i = 0; i < NR_MIGRATE_TEST_VCPUS; ++i)
> > > > > + vm_vcpu_add(vm, i);
> > > > > +
> > > > > + return vm;
> > > > > +}
> > > > > +
> > > > > +static int __sev_migrate_from(int dst_fd, int src_fd)
> > > > > +{
> > > > > + struct kvm_enable_cap cap = {
> > > > > + .cap = KVM_CAP_VM_MIGRATE_PROTECTED_VM_FROM,
> > > > > + .args = { src_fd }
> > > > > + };
> > > > > +
> > > > > + return ioctl(dst_fd, KVM_ENABLE_CAP, &cap);
> > > > > +}
> > > > > +
> > > > > +
> > > > > +static void sev_migrate_from(int dst_fd, int src_fd)
> > > > > +{
> > > > > + int ret;
> > > > > +
> > > > > + ret = __sev_migrate_from(dst_fd, src_fd);
> > > > > + TEST_ASSERT(!ret, "Migration failed, ret: %d, errno: %d\n", ret, errno);
> > > > > +}
> > > > > +
> > > > > +static void test_sev_migrate_from(bool es)
> > > > > +{
> > > > > + struct kvm_vm *src_vm;
> > > > > + struct kvm_vm *dst_vms[NR_MIGRATE_TEST_VMS];
> > > > > + int i;
> > > > > +
> > > > > + src_vm = sev_vm_create(es);
> > > > > + for (i = 0; i < NR_MIGRATE_TEST_VMS; ++i)
> > > > > + dst_vms[i] = __vm_create();
> > > > > +
> > > > > + /* Initial migration from the src to the first dst. */
> > > > > + sev_migrate_from(dst_vms[0]->fd, src_vm->fd);
> > > > > +
> > > > > + for (i = 1; i < NR_MIGRATE_TEST_VMS; i++)
> > > > > + sev_migrate_from(dst_vms[i]->fd, dst_vms[i - 1]->fd);
> > > > > +
> > > > > + /* Migrate the guest back to the original VM. */
> > > > > + sev_migrate_from(src_vm->fd, dst_vms[NR_MIGRATE_TEST_VMS - 1]->fd);
> > > > > +
> > > > > + kvm_vm_free(src_vm);
> > > > > + for (i = 0; i < NR_MIGRATE_TEST_VMS; ++i)
> > > > > + kvm_vm_free(dst_vms[i]);
> > > > > +}
> > > > > +
> > > > > +struct locking_thread_input {
> > > > > + struct kvm_vm *vm;
> > > > > + int source_fds[NR_LOCK_TESTING_THREADS];
> > > > > +};
> > > > > +
> > > > > +static void *locking_test_thread(void *arg)
> > > > > +{
> > > > > + int i, j;
> > > > > + struct locking_thread_input *input = (struct locking_thread_input *)arg;
> > > > > +
> > > > > + for (i = 0; i < NR_LOCK_TESTING_ITERATIONS; ++i) {
> > > > > + j = i % NR_LOCK_TESTING_THREADS;
> > > > > + __sev_migrate_from(input->vm->fd, input->source_fds[j]);
> > > > > + }
> > > > > +
> > > > > + return NULL;
> > > > > +}
> > > > > +
> > > > > +static void test_sev_migrate_locking(void)
> > > > > +{
> > > > > + struct locking_thread_input input[NR_LOCK_TESTING_THREADS];
> > > > > + pthread_t pt[NR_LOCK_TESTING_THREADS];
> > > > > + int i;
> > > > > +
> > > > > + for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i) {
> > > > > + input[i].vm = sev_vm_create(/* es= */ false);
> > > > > + input[0].source_fds[i] = input[i].vm->fd;
> > > > > + }
> > > > > + for (i = 1; i < NR_LOCK_TESTING_THREADS; ++i)
> > > > > + memcpy(input[i].source_fds, input[0].source_fds,
> > > > > + sizeof(input[i].source_fds));
> > > > > +
> > > > > + for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i)
> > > > > + pthread_create(&pt[i], NULL, locking_test_thread, &input[i]);
> > > > > +
> > > > > + for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i)
> > > > > + pthread_join(pt[i], NULL);
> > > > > +}
> > > > > +
> > > > > +static void test_sev_migrate_parameters(void)
> > > > > +{
> > > > > + struct kvm_vm *sev_vm, *sev_es_vm, *vm_no_vcpu, *vm_no_sev,
> > > > > + *sev_es_vm_no_vmsa;
> > > > > + int ret;
> > > > > +
> > > > > + sev_vm = sev_vm_create(/* es= */ false);
> > > > > + sev_es_vm = sev_vm_create(/* es= */ true);
> > > > > + vm_no_vcpu = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
> > > > > + vm_no_sev = __vm_create();
> > > > > + sev_es_vm_no_vmsa = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
> > > > > + sev_ioctl(sev_es_vm_no_vmsa->fd, KVM_SEV_ES_INIT, NULL);
> > > > > + vm_vcpu_add(sev_es_vm_no_vmsa, 1);
> > > > > +
> > > > > +
> > > > > + ret = __sev_migrate_from(sev_vm->fd, sev_es_vm->fd);
> > > > > + TEST_ASSERT(
> > > > > + ret == -1 && errno == EINVAL,
> > > > > + "Should not be able migrate to SEV enabled VM. ret: %d, errno: %d\n",
> > > > > + ret, errno);
> > > > > +
> > > > > + ret = __sev_migrate_from(sev_es_vm->fd, sev_vm->fd);
> > > > > + TEST_ASSERT(
> > > > > + ret == -1 && errno == EINVAL,
> > > > > + "Should not be able migrate to SEV-ES enabled VM. ret: %d, errno: %d\n",
> > > > > + ret, errno);
> > > > > +
> > > > > + ret = __sev_migrate_from(vm_no_vcpu->fd, sev_es_vm->fd);
> > > > > + TEST_ASSERT(
> > > > > + ret == -1 && errno == EINVAL,
> > > > > + "SEV-ES migrations require same number of vCPUS. ret: %d, errno: %d\n",
> > > > > + ret, errno);
> > > >
> > > > How do we know that this failed because `vm_no_vcpu` has no vCPUs or
> > > > because it's not a SEV-ES VM?
> > >
> > > Actually, with V8 we only migrate to non-SEV(-ES)-enabled guests.
> >
> > I think my point is that the test case should be written to treat the
> > underlying KVM code as a black box. Without looking at the KVM code,
> > the test case should be set up to be accepted perfectly by KVM and then
> > mutated in a minimal way to trigger the intended failure case.
> >
> > Here, we've defined `vm_no_vcpu`, which as far as I can tell is: (1)
> > not a SEV VM, (2) not a SEV-ES VM, (3) has no vCPUs. Based on the
> > error message in the TEST_ASSERT, the intention here is to verify that
> > a migration that would otherwise work fails because the target has a
> > different number of vCPUs than the source. Therefore, I think
> > `vm_no_vcpu` should be defined as a SEV-ES VM, so that the test case
> > is setup such that it would've otherwise passed if `vm_no_vcpu` had
> > the correct number of vCPUs added.
>
> I think I get what you are asking for, but I think this case is good as
> written; it is the second case that should be updated. Now, to migrate,
> the src must be SEV or SEV-ES enabled (with the VMSAs set up), and the
> dst must NOT be SEV or SEV-ES enabled but must have the same number of
> vCPUs.
>
> So if |vm_no_vcpu| had the same 4 vCPUs as |sev_es_vm|, this call would work.

Got it now. SGTM. Thanks!

>
> >
> > >
> > > >
> > > > > +
> > > > > + ret = __sev_migrate_from(vm_no_vcpu->fd, sev_es_vm_no_vmsa->fd);
> > > > > + TEST_ASSERT(
> > > > > + ret == -1 && errno == EINVAL,
> > > > > + "SEV-ES migrations require UPDATE_VMSA. ret %d, errno: %d\n",
> > > > > + ret, errno);
> > > >
> > > > Same question. How do we know why this failed? `sev_es_vm_no_vmsa` did
> > > > not have any vCPUs added. Would it be cleaner to add an additional
> > > > param to `sev_vm_create()` to skip calling UPDATE_VMSA? Then,
> > > > `sev_es_vm_no_vmsa` can be created from `sev_vm_create()` and it's
> > > > obvious to the reader that the VMs are identical except for this aspect.
> > > >
> > > > > +
> > > > > + ret = __sev_migrate_from(vm_no_vcpu->fd, vm_no_sev->fd);
> > > > > + TEST_ASSERT(ret == -1 && errno == EINVAL,
> > > > > + "Migrations require SEV enabled. ret %d, errno: %d\n", ret,
> > > > > + errno);
> > > >
> > > > `vm_no_sev` has vCPUs. Therefore, how do we know why this failed --
> > > > (a) differing vCPU counts or (b) no SEV?
> > >
> > > Ditto, we require dst to be non-SEV enabled.
> >
> > Understood. But I think the test should treat KVM as a black box.
> > Therefore, I think in this test case, `vm_no_vcpu` should be defined
> > to have the same number of vCPUs as `vm_no_sev`.
>
> Ack, I lazily reused |vm_no_vcpu|. I will add a |vm_no_sev_two| or
> something which has the same number of vCPUs as |vm_no_sev|.
>
> >
> > >
> > > >
> > > > > +}
> > > > > +
> > > > > +int main(int argc, char *argv[])
> > > > > +{
> > > > > + test_sev_migrate_from(/* es= */ false);
> > > > > + test_sev_migrate_from(/* es= */ true);
> > > > > + test_sev_migrate_locking();
> > > > > + test_sev_migrate_parameters();
> > > > > + return 0;
> > > > > +}
> > > > > --
> > > > > 2.33.0.309.g3052b89438-goog
> > > > >

2021-09-22 17:06:34

by Marc Orr

[permalink] [raw]
Subject: Re: [PATCH 4/4 V8] selftest: KVM: Add intra host migration tests

On Tue, Sep 14, 2021 at 9:47 AM Peter Gonda <[email protected]> wrote:
>
> Adds testcases for intra host migration for SEV and SEV-ES. Also adds a
> locking test to confirm no deadlock exists.
>
> Signed-off-by: Peter Gonda <[email protected]>
> Suggested-by: Sean Christopherson <[email protected]>
> Reviewed-by: Marc Orr <[email protected]>
> Cc: Marc Orr <[email protected]>
> Cc: Sean Christopherson <[email protected]>
> Cc: David Rientjes <[email protected]>
> Cc: Brijesh Singh <[email protected]>
> Cc: [email protected]
> Cc: [email protected]
> ---
> tools/testing/selftests/kvm/Makefile | 1 +
> .../selftests/kvm/x86_64/sev_vm_tests.c | 203 ++++++++++++++++++
> 2 files changed, 204 insertions(+)
> create mode 100644 tools/testing/selftests/kvm/x86_64/sev_vm_tests.c
>
> diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> index c103873531e0..44fd3566fb51 100644
> --- a/tools/testing/selftests/kvm/Makefile
> +++ b/tools/testing/selftests/kvm/Makefile
> @@ -72,6 +72,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/vmx_pmu_msrs_test
> TEST_GEN_PROGS_x86_64 += x86_64/xen_shinfo_test
> TEST_GEN_PROGS_x86_64 += x86_64/xen_vmcall_test
> TEST_GEN_PROGS_x86_64 += x86_64/vmx_pi_mmio_test
> +TEST_GEN_PROGS_x86_64 += x86_64/sev_vm_tests
> TEST_GEN_PROGS_x86_64 += access_tracking_perf_test
> TEST_GEN_PROGS_x86_64 += demand_paging_test
> TEST_GEN_PROGS_x86_64 += dirty_log_test
> diff --git a/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c b/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c
> new file mode 100644
> index 000000000000..ec3bbc96e73a
> --- /dev/null
> +++ b/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c
> @@ -0,0 +1,203 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +#include <linux/kvm.h>
> +#include <linux/psp-sev.h>
> +#include <stdio.h>
> +#include <sys/ioctl.h>
> +#include <stdlib.h>
> +#include <errno.h>
> +#include <pthread.h>
> +
> +#include "test_util.h"
> +#include "kvm_util.h"
> +#include "processor.h"
> +#include "svm_util.h"
> +#include "kselftest.h"
> +#include "../lib/kvm_util_internal.h"
> +
> +#define SEV_POLICY_ES 0b100
> +
> +#define NR_MIGRATE_TEST_VCPUS 4
> +#define NR_MIGRATE_TEST_VMS 3
> +#define NR_LOCK_TESTING_THREADS 3
> +#define NR_LOCK_TESTING_ITERATIONS 10000
> +
> +static void sev_ioctl(int vm_fd, int cmd_id, void *data)
> +{
> + struct kvm_sev_cmd cmd = {
> + .id = cmd_id,
> + .data = (uint64_t)data,
> + .sev_fd = open_sev_dev_path_or_exit(),
> + };
> + int ret;
> +
> + ret = ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
> + TEST_ASSERT((ret == 0 || cmd.error == SEV_RET_SUCCESS),
> + "%d failed: return code: %d, errno: %d, fw error: %d",
> + cmd_id, ret, errno, cmd.error);
> +}
> +
> +static struct kvm_vm *sev_vm_create(bool es)
> +{
> + struct kvm_vm *vm;
> + struct kvm_sev_launch_start start = { 0 };
> + int i;
> +
> + vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
> + sev_ioctl(vm->fd, es ? KVM_SEV_ES_INIT : KVM_SEV_INIT, NULL);
> + for (i = 0; i < NR_MIGRATE_TEST_VCPUS; ++i)
> + vm_vcpu_add(vm, i);
> + if (es)
> + start.policy |= SEV_POLICY_ES;
> + sev_ioctl(vm->fd, KVM_SEV_LAUNCH_START, &start);
> + if (es)
> + sev_ioctl(vm->fd, KVM_SEV_LAUNCH_UPDATE_VMSA, NULL);
> + return vm;
> +}
> +
> +static struct kvm_vm *__vm_create(void)
> +{
> + struct kvm_vm *vm;
> + int i;
> +
> + vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
> + for (i = 0; i < NR_MIGRATE_TEST_VCPUS; ++i)
> + vm_vcpu_add(vm, i);
> +
> + return vm;
> +}
> +
> +static int __sev_migrate_from(int dst_fd, int src_fd)
> +{
> + struct kvm_enable_cap cap = {
> + .cap = KVM_CAP_VM_MIGRATE_PROTECTED_VM_FROM,
> + .args = { src_fd }
> + };
> +
> + return ioctl(dst_fd, KVM_ENABLE_CAP, &cap);
> +}
> +
> +
> +static void sev_migrate_from(int dst_fd, int src_fd)
> +{
> + int ret;
> +
> + ret = __sev_migrate_from(dst_fd, src_fd);
> + TEST_ASSERT(!ret, "Migration failed, ret: %d, errno: %d\n", ret, errno);
> +}
> +
> +static void test_sev_migrate_from(bool es)
> +{
> + struct kvm_vm *src_vm;
> + struct kvm_vm *dst_vms[NR_MIGRATE_TEST_VMS];
> + int i;
> +
> + src_vm = sev_vm_create(es);
> + for (i = 0; i < NR_MIGRATE_TEST_VMS; ++i)
> + dst_vms[i] = __vm_create();
> +
> + /* Initial migration from the src to the first dst. */
> + sev_migrate_from(dst_vms[0]->fd, src_vm->fd);
> +
> + for (i = 1; i < NR_MIGRATE_TEST_VMS; i++)
> + sev_migrate_from(dst_vms[i]->fd, dst_vms[i - 1]->fd);
> +
> + /* Migrate the guest back to the original VM. */
> + sev_migrate_from(src_vm->fd, dst_vms[NR_MIGRATE_TEST_VMS - 1]->fd);
> +
> + kvm_vm_free(src_vm);
> + for (i = 0; i < NR_MIGRATE_TEST_VMS; ++i)
> + kvm_vm_free(dst_vms[i]);
> +}
> +
> +struct locking_thread_input {
> + struct kvm_vm *vm;
> + int source_fds[NR_LOCK_TESTING_THREADS];
> +};
> +
> +static void *locking_test_thread(void *arg)
> +{
> + int i, j;
> + struct locking_thread_input *input = (struct locking_thread_input *)arg;
> +
> + for (i = 0; i < NR_LOCK_TESTING_ITERATIONS; ++i) {
> + j = i % NR_LOCK_TESTING_THREADS;
> + __sev_migrate_from(input->vm->fd, input->source_fds[j]);
> + }
> +
> + return NULL;
> +}
> +
> +static void test_sev_migrate_locking(void)
> +{
> + struct locking_thread_input input[NR_LOCK_TESTING_THREADS];
> + pthread_t pt[NR_LOCK_TESTING_THREADS];
> + int i;
> +
> + for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i) {
> + input[i].vm = sev_vm_create(/* es= */ false);
> + input[0].source_fds[i] = input[i].vm->fd;
> + }
> + for (i = 1; i < NR_LOCK_TESTING_THREADS; ++i)
> + memcpy(input[i].source_fds, input[0].source_fds,
> + sizeof(input[i].source_fds));
> +
> + for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i)
> + pthread_create(&pt[i], NULL, locking_test_thread, &input[i]);
> +
> + for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i)
> + pthread_join(pt[i], NULL);
> +}
> +
> +static void test_sev_migrate_parameters(void)
> +{
> + struct kvm_vm *sev_vm, *sev_es_vm, *vm_no_vcpu, *vm_no_sev,
> + *sev_es_vm_no_vmsa;
> + int ret;
> +
> + sev_vm = sev_vm_create(/* es= */ false);
> + sev_es_vm = sev_vm_create(/* es= */ true);
> + vm_no_vcpu = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
> + vm_no_sev = __vm_create();
> + sev_es_vm_no_vmsa = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
> + sev_ioctl(sev_es_vm_no_vmsa->fd, KVM_SEV_ES_INIT, NULL);
> + vm_vcpu_add(sev_es_vm_no_vmsa, 1);
> +
> +
> + ret = __sev_migrate_from(sev_vm->fd, sev_es_vm->fd);
> + TEST_ASSERT(
> + ret == -1 && errno == EINVAL,
> + "Should not be able migrate to SEV enabled VM. ret: %d, errno: %d\n",
> + ret, errno);
> +
> + ret = __sev_migrate_from(sev_es_vm->fd, sev_vm->fd);
> + TEST_ASSERT(
> + ret == -1 && errno == EINVAL,
> + "Should not be able migrate to SEV-ES enabled VM. ret: %d, errno: %d\n",
> + ret, errno);
> +
> + ret = __sev_migrate_from(vm_no_vcpu->fd, sev_es_vm->fd);
> + TEST_ASSERT(
> + ret == -1 && errno == EINVAL,
> + "SEV-ES migrations require same number of vCPUS. ret: %d, errno: %d\n",
> + ret, errno);
> +
> + ret = __sev_migrate_from(vm_no_vcpu->fd, sev_es_vm_no_vmsa->fd);
> + TEST_ASSERT(
> + ret == -1 && errno == EINVAL,
> + "SEV-ES migrations require UPDATE_VMSA. ret %d, errno: %d\n",
> + ret, errno);
> +
> + ret = __sev_migrate_from(vm_no_vcpu->fd, vm_no_sev->fd);
> + TEST_ASSERT(ret == -1 && errno == EINVAL,
> + "Migrations require SEV enabled. ret %d, errno: %d\n", ret,
> + errno);
> +}
> +
> +int main(int argc, char *argv[])
> +{
> + test_sev_migrate_from(/* es= */ false);
> + test_sev_migrate_from(/* es= */ true);
> + test_sev_migrate_locking();
> + test_sev_migrate_parameters();
> + return 0;
> +}
> --
> 2.33.0.309.g3052b89438-goog
>

Reviewed-by: Marc Orr <[email protected]>

2021-09-22 17:07:32

by Peter Gonda

[permalink] [raw]
Subject: Re: [PATCH 4/4 V8] selftest: KVM: Add intra host migration tests

On Wed, Sep 22, 2021 at 11:00 AM Marc Orr <[email protected]> wrote:
>
> On Wed, Sep 22, 2021 at 9:52 AM Peter Gonda <[email protected]> wrote:
> >
> >
> > On Wed, Sep 22, 2021 at 10:34 AM Marc Orr <[email protected]> wrote:
> > >
> > > On Tue, Sep 21, 2021 at 7:20 AM Peter Gonda <[email protected]> wrote:
> > > >
> > > > On Wed, Sep 15, 2021 at 11:28 AM Marc Orr <[email protected]> wrote:
> > > > >
> > > > > On Tue, Sep 14, 2021 at 9:47 AM Peter Gonda <[email protected]> wrote:
> > > > > >
> > > > > > Adds testcases for intra host migration for SEV and SEV-ES. Also adds a
> > > > > > locking test to confirm no deadlock exists.
> > > > > >
> > > > > > Signed-off-by: Peter Gonda <[email protected]>
> > > > > > Suggested-by: Sean Christopherson <[email protected]>
> > > > > > Reviewed-by: Marc Orr <[email protected]>
> > > > > > Cc: Marc Orr <[email protected]>
> > > > > > Cc: Sean Christopherson <[email protected]>
> > > > > > Cc: David Rientjes <[email protected]>
> > > > > > Cc: Brijesh Singh <[email protected]>
> > > > > > Cc: [email protected]
> > > > > > Cc: [email protected]
> > > > > > ---
> > > > > > tools/testing/selftests/kvm/Makefile | 1 +
> > > > > > .../selftests/kvm/x86_64/sev_vm_tests.c | 203 ++++++++++++++++++
> > > > > > 2 files changed, 204 insertions(+)
> > > > > > create mode 100644 tools/testing/selftests/kvm/x86_64/sev_vm_tests.c
> > > > > >
> > > > > > diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> > > > > > index c103873531e0..44fd3566fb51 100644
> > > > > > --- a/tools/testing/selftests/kvm/Makefile
> > > > > > +++ b/tools/testing/selftests/kvm/Makefile
> > > > > > @@ -72,6 +72,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/vmx_pmu_msrs_test
> > > > > > TEST_GEN_PROGS_x86_64 += x86_64/xen_shinfo_test
> > > > > > TEST_GEN_PROGS_x86_64 += x86_64/xen_vmcall_test
> > > > > > TEST_GEN_PROGS_x86_64 += x86_64/vmx_pi_mmio_test
> > > > > > +TEST_GEN_PROGS_x86_64 += x86_64/sev_vm_tests
> > > > > > TEST_GEN_PROGS_x86_64 += access_tracking_perf_test
> > > > > > TEST_GEN_PROGS_x86_64 += demand_paging_test
> > > > > > TEST_GEN_PROGS_x86_64 += dirty_log_test
> > > > > > diff --git a/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c b/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c
> > > > > > new file mode 100644
> > > > > > index 000000000000..ec3bbc96e73a
> > > > > > --- /dev/null
> > > > > > +++ b/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c
> > > > > > @@ -0,0 +1,203 @@
> > > > > > +// SPDX-License-Identifier: GPL-2.0-only
> > > > > > +#include <linux/kvm.h>
> > > > > > +#include <linux/psp-sev.h>
> > > > > > +#include <stdio.h>
> > > > > > +#include <sys/ioctl.h>
> > > > > > +#include <stdlib.h>
> > > > > > +#include <errno.h>
> > > > > > +#include <pthread.h>
> > > > > > +
> > > > > > +#include "test_util.h"
> > > > > > +#include "kvm_util.h"
> > > > > > +#include "processor.h"
> > > > > > +#include "svm_util.h"
> > > > > > +#include "kselftest.h"
> > > > > > +#include "../lib/kvm_util_internal.h"
> > > > > > +
> > > > > > +#define SEV_POLICY_ES 0b100
> > > > > > +
> > > > > > +#define NR_MIGRATE_TEST_VCPUS 4
> > > > > > +#define NR_MIGRATE_TEST_VMS 3
> > > > > > +#define NR_LOCK_TESTING_THREADS 3
> > > > > > +#define NR_LOCK_TESTING_ITERATIONS 10000
> > > > > > +
> > > > > > +static void sev_ioctl(int vm_fd, int cmd_id, void *data)
> > > > > > +{
> > > > > > + struct kvm_sev_cmd cmd = {
> > > > > > + .id = cmd_id,
> > > > > > + .data = (uint64_t)data,
> > > > > > + .sev_fd = open_sev_dev_path_or_exit(),
> > > > > > + };
> > > > > > + int ret;
> > > > > > +
> > > > > > + ret = ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
> > > > > > + TEST_ASSERT((ret == 0 || cmd.error == SEV_RET_SUCCESS),
> > > > > > + "%d failed: return code: %d, errno: %d, fw error: %d",
> > > > > > + cmd_id, ret, errno, cmd.error);
> > > > > > +}
> > > > > > +
> > > > > > +static struct kvm_vm *sev_vm_create(bool es)
> > > > > > +{
> > > > > > + struct kvm_vm *vm;
> > > > > > + struct kvm_sev_launch_start start = { 0 };
> > > > > > + int i;
> > > > > > +
> > > > > > + vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
> > > > > > + sev_ioctl(vm->fd, es ? KVM_SEV_ES_INIT : KVM_SEV_INIT, NULL);
> > > > > > + for (i = 0; i < NR_MIGRATE_TEST_VCPUS; ++i)
> > > > > > + vm_vcpu_add(vm, i);
> > > > > > + if (es)
> > > > > > + start.policy |= SEV_POLICY_ES;
> > > > > > + sev_ioctl(vm->fd, KVM_SEV_LAUNCH_START, &start);
> > > > > > + if (es)
> > > > > > + sev_ioctl(vm->fd, KVM_SEV_LAUNCH_UPDATE_VMSA, NULL);
> > > > > > + return vm;
> > > > > > +}
> > > > >
> > > > > I should've suggested this in my original review. But is it worth
> > > > > moving `sev_vm_create()` and `sev_ioctl()` into the broader selftests
> > > > > library, so others can leverage this function to write selftests?
> > > >
> > > > This function isn't fully complete. It doesn't get to launch_finish,
> > > > i.e. it only goes far enough for copyless migration ioctls to work. I
> > > > think this would be a good expansion but it could happen in a follow-up
> > > > series, thoughts?
> > >
> > > SGTM. Let's leave it here for now then.
> > >
> > > >
> > > > >
> > > > > > +
> > > > > > +static struct kvm_vm *__vm_create(void)
> > > > > > +{
> > > > > > + struct kvm_vm *vm;
> > > > > > + int i;
> > > > > > +
> > > > > > + vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
> > > > > > + for (i = 0; i < NR_MIGRATE_TEST_VCPUS; ++i)
> > > > > > + vm_vcpu_add(vm, i);
> > > > > > +
> > > > > > + return vm;
> > > > > > +}
> > > > > > +
> > > > > > +static int __sev_migrate_from(int dst_fd, int src_fd)
> > > > > > +{
> > > > > > + struct kvm_enable_cap cap = {
> > > > > > + .cap = KVM_CAP_VM_MIGRATE_PROTECTED_VM_FROM,
> > > > > > + .args = { src_fd }
> > > > > > + };
> > > > > > +
> > > > > > + return ioctl(dst_fd, KVM_ENABLE_CAP, &cap);
> > > > > > +}
> > > > > > +
> > > > > > +
> > > > > > +static void sev_migrate_from(int dst_fd, int src_fd)
> > > > > > +{
> > > > > > + int ret;
> > > > > > +
> > > > > > + ret = __sev_migrate_from(dst_fd, src_fd);
> > > > > > + TEST_ASSERT(!ret, "Migration failed, ret: %d, errno: %d\n", ret, errno);
> > > > > > +}
> > > > > > +
> > > > > > +static void test_sev_migrate_from(bool es)
> > > > > > +{
> > > > > > + struct kvm_vm *src_vm;
> > > > > > + struct kvm_vm *dst_vms[NR_MIGRATE_TEST_VMS];
> > > > > > + int i;
> > > > > > +
> > > > > > + src_vm = sev_vm_create(es);
> > > > > > + for (i = 0; i < NR_MIGRATE_TEST_VMS; ++i)
> > > > > > + dst_vms[i] = __vm_create();
> > > > > > +
> > > > > > + /* Initial migration from the src to the first dst. */
> > > > > > + sev_migrate_from(dst_vms[0]->fd, src_vm->fd);
> > > > > > +
> > > > > > + for (i = 1; i < NR_MIGRATE_TEST_VMS; i++)
> > > > > > + sev_migrate_from(dst_vms[i]->fd, dst_vms[i - 1]->fd);
> > > > > > +
> > > > > > + /* Migrate the guest back to the original VM. */
> > > > > > + sev_migrate_from(src_vm->fd, dst_vms[NR_MIGRATE_TEST_VMS - 1]->fd);
> > > > > > +
> > > > > > + kvm_vm_free(src_vm);
> > > > > > + for (i = 0; i < NR_MIGRATE_TEST_VMS; ++i)
> > > > > > + kvm_vm_free(dst_vms[i]);
> > > > > > +}
> > > > > > +
> > > > > > +struct locking_thread_input {
> > > > > > + struct kvm_vm *vm;
> > > > > > + int source_fds[NR_LOCK_TESTING_THREADS];
> > > > > > +};
> > > > > > +
> > > > > > +static void *locking_test_thread(void *arg)
> > > > > > +{
> > > > > > + int i, j;
> > > > > > + struct locking_thread_input *input = (struct locking_thread_input *)arg;
> > > > > > +
> > > > > > + for (i = 0; i < NR_LOCK_TESTING_ITERATIONS; ++i) {
> > > > > > + j = i % NR_LOCK_TESTING_THREADS;
> > > > > > + __sev_migrate_from(input->vm->fd, input->source_fds[j]);
> > > > > > + }
> > > > > > +
> > > > > > + return NULL;
> > > > > > +}
> > > > > > +
> > > > > > +static void test_sev_migrate_locking(void)
> > > > > > +{
> > > > > > + struct locking_thread_input input[NR_LOCK_TESTING_THREADS];
> > > > > > + pthread_t pt[NR_LOCK_TESTING_THREADS];
> > > > > > + int i;
> > > > > > +
> > > > > > + for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i) {
> > > > > > + input[i].vm = sev_vm_create(/* es= */ false);
> > > > > > + input[0].source_fds[i] = input[i].vm->fd;
> > > > > > + }
> > > > > > + for (i = 1; i < NR_LOCK_TESTING_THREADS; ++i)
> > > > > > + memcpy(input[i].source_fds, input[0].source_fds,
> > > > > > + sizeof(input[i].source_fds));
> > > > > > +
> > > > > > + for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i)
> > > > > > + pthread_create(&pt[i], NULL, locking_test_thread, &input[i]);
> > > > > > +
> > > > > > + for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i)
> > > > > > + pthread_join(pt[i], NULL);
> > > > > > +}
> > > > > > +
> > > > > > +static void test_sev_migrate_parameters(void)
> > > > > > +{
> > > > > > + struct kvm_vm *sev_vm, *sev_es_vm, *vm_no_vcpu, *vm_no_sev,
> > > > > > + *sev_es_vm_no_vmsa;
> > > > > > + int ret;
> > > > > > +
> > > > > > + sev_vm = sev_vm_create(/* es= */ false);
> > > > > > + sev_es_vm = sev_vm_create(/* es= */ true);
> > > > > > + vm_no_vcpu = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
> > > > > > + vm_no_sev = __vm_create();
> > > > > > + sev_es_vm_no_vmsa = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
> > > > > > + sev_ioctl(sev_es_vm_no_vmsa->fd, KVM_SEV_ES_INIT, NULL);
> > > > > > + vm_vcpu_add(sev_es_vm_no_vmsa, 1);
> > > > > > +
> > > > > > +
> > > > > > + ret = __sev_migrate_from(sev_vm->fd, sev_es_vm->fd);
> > > > > > + TEST_ASSERT(
> > > > > > + ret == -1 && errno == EINVAL,
> > > > > > + "Should not be able migrate to SEV enabled VM. ret: %d, errno: %d\n",
> > > > > > + ret, errno);
> > > > > > +
> > > > > > + ret = __sev_migrate_from(sev_es_vm->fd, sev_vm->fd);
> > > > > > + TEST_ASSERT(
> > > > > > + ret == -1 && errno == EINVAL,
> > > > > > + "Should not be able migrate to SEV-ES enabled VM. ret: %d, errno: %d\n",
> > > > > > + ret, errno);
> > > > > > +
> > > > > > + ret = __sev_migrate_from(vm_no_vcpu->fd, sev_es_vm->fd);
> > > > > > + TEST_ASSERT(
> > > > > > + ret == -1 && errno == EINVAL,
> > > > > > + "SEV-ES migrations require same number of vCPUS. ret: %d, errno: %d\n",
> > > > > > + ret, errno);
> > > > >
> > > > > How do we know that this failed because `vm_no_vcpu` has no vCPUs or
> > > > > because it's not a SEV-ES VM?
> > > >
> > > > Actually, with V8 we only migrate to non-SEV(-ES)-enabled guests.
> > >
> > > I think my point is that the test case should be written to treat the
> > > underlying KVM code as a black box. Without looking at the KVM code,
> > > the test case should be set up to be accepted perfectly by KVM and then
> > > mutated in a minimal way to trigger the intended failure case.
> > >
> > > Here, we've defined `vm_no_vcpu`, which as far as I can tell is: (1)
> > > not a SEV VM, (2) not a SEV-ES VM, (3) has no vCPUs. Based on the
> > > error message in the TEST_ASSERT, the intention here is to verify that
> > > a migration that would otherwise work fails because the target has a
> > > different number of vCPUs than the source. Therefore, I think
> > > `vm_no_vcpu` should be defined as a SEV-ES VM, so that the test case
> > > is setup such that it would've otherwise passed if `vm_no_vcpu` had
> > > the correct number of vCPUs added.
> >
> > I think I get what you are asking for, but I think this case is good as
> > written; it is the second case that should be updated. Now, to migrate,
> > the src must be SEV or SEV-ES enabled (with the VMSAs set up), and the
> > dst must NOT be SEV or SEV-ES enabled but must have the same number of
> > vCPUs.
> >
> > So if |vm_no_vcpu| had the same 4 vCPUs as |sev_es_vm|, this call would work.
>
> Got it now. SGTM. Thanks!
>
> >
> > >
> > > >
> > > > >
> > > > > > +
> > > > > > + ret = __sev_migrate_from(vm_no_vcpu->fd, sev_es_vm_no_vmsa->fd);
> > > > > > + TEST_ASSERT(
> > > > > > + ret == -1 && errno == EINVAL,
> > > > > > + "SEV-ES migrations require UPDATE_VMSA. ret %d, errno: %d\n",
> > > > > > + ret, errno);
> > > > >
> > > > > Same question. How do we know why this failed? `sev_es_vm_no_vmsa` did
> > > > > not have any vCPUs added. Would it be cleaner to add an additional
> > > > > param to `sev_vm_create()` to skip calling UPDATE_VMSA? Then,
> > > > > `sev_es_vm_no_vmsa` can be created from `sev_vm_create()` and it's
> > > > > obvious to the reader that the VMs are identical except for this aspect.
> > > > >
> > > > > > +
> > > > > > + ret = __sev_migrate_from(vm_no_vcpu->fd, vm_no_sev->fd);
> > > > > > + TEST_ASSERT(ret == -1 && errno == EINVAL,
> > > > > > + "Migrations require SEV enabled. ret %d, errno: %d\n", ret,
> > > > > > + errno);
> > > > >
> > > > > `vm_no_sev` has vCPUs. Therefore, how do we know why this failed --
> > > > > (a) differing vCPU counts or (b) no SEV?
> > > >
> > > > Ditto, we require dst to be non-SEV enabled.
> > >
> > > Understood. But I think the test should treat KVM as a black box.
> > > Therefore, I think in this test case, `vm_no_vcpu` should be defined
> > > to have the same number of vCPUs as `vm_no_sev`.
> >
> > Ack, I lazily reused |vm_no_vcpu|. I will add a |vm_no_sev_two| or
> > something which has the same number of vCPUs as |vm_no_sev|.

Actually, I should have checked before replying. They both have 0 vCPUs,
so technically they have the same number. Is this OK with you?

> >
> > >
> > > >
> > > > >
> > > > > > +}
> > > > > > +
> > > > > > +int main(int argc, char *argv[])
> > > > > > +{
> > > > > > + test_sev_migrate_from(/* es= */ false);
> > > > > > + test_sev_migrate_from(/* es= */ true);
> > > > > > + test_sev_migrate_locking();
> > > > > > + test_sev_migrate_parameters();
> > > > > > + return 0;
> > > > > > +}
> > > > > > --
> > > > > > 2.33.0.309.g3052b89438-goog
> > > > > >

2021-09-22 17:08:37

by Marc Orr

[permalink] [raw]
Subject: Re: [PATCH 4/4 V8] selftest: KVM: Add intra host migration tests

On Wed, Sep 22, 2021 at 10:02 AM Peter Gonda <[email protected]> wrote:
>
> On Wed, Sep 22, 2021 at 11:00 AM Marc Orr <[email protected]> wrote:
> >
> > On Wed, Sep 22, 2021 at 9:52 AM Peter Gonda <[email protected]> wrote:
> > >
> > >
> > > On Wed, Sep 22, 2021 at 10:34 AM Marc Orr <[email protected]> wrote:
> > > >
> > > > On Tue, Sep 21, 2021 at 7:20 AM Peter Gonda <[email protected]> wrote:
> > > > >
> > > > > On Wed, Sep 15, 2021 at 11:28 AM Marc Orr <[email protected]> wrote:
> > > > > >
> > > > > > On Tue, Sep 14, 2021 at 9:47 AM Peter Gonda <[email protected]> wrote:
> > > > > > >
> > > > > > > Adds testcases for intra host migration for SEV and SEV-ES. Also adds a
> > > > > > > locking test to confirm no deadlock exists.
> > > > > > >
> > > > > > > Signed-off-by: Peter Gonda <[email protected]>
> > > > > > > Suggested-by: Sean Christopherson <[email protected]>
> > > > > > > Reviewed-by: Marc Orr <[email protected]>
> > > > > > > Cc: Marc Orr <[email protected]>
> > > > > > > Cc: Sean Christopherson <[email protected]>
> > > > > > > Cc: David Rientjes <[email protected]>
> > > > > > > Cc: Brijesh Singh <[email protected]>
> > > > > > > Cc: [email protected]
> > > > > > > Cc: [email protected]
> > > > > > > ---
> > > > > > > tools/testing/selftests/kvm/Makefile | 1 +
> > > > > > > .../selftests/kvm/x86_64/sev_vm_tests.c | 203 ++++++++++++++++++
> > > > > > > 2 files changed, 204 insertions(+)
> > > > > > > create mode 100644 tools/testing/selftests/kvm/x86_64/sev_vm_tests.c
> > > > > > >
> > > > > > > diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
> > > > > > > index c103873531e0..44fd3566fb51 100644
> > > > > > > --- a/tools/testing/selftests/kvm/Makefile
> > > > > > > +++ b/tools/testing/selftests/kvm/Makefile
> > > > > > > @@ -72,6 +72,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/vmx_pmu_msrs_test
> > > > > > > TEST_GEN_PROGS_x86_64 += x86_64/xen_shinfo_test
> > > > > > > TEST_GEN_PROGS_x86_64 += x86_64/xen_vmcall_test
> > > > > > > TEST_GEN_PROGS_x86_64 += x86_64/vmx_pi_mmio_test
> > > > > > > +TEST_GEN_PROGS_x86_64 += x86_64/sev_vm_tests
> > > > > > > TEST_GEN_PROGS_x86_64 += access_tracking_perf_test
> > > > > > > TEST_GEN_PROGS_x86_64 += demand_paging_test
> > > > > > > TEST_GEN_PROGS_x86_64 += dirty_log_test
> > > > > > > diff --git a/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c b/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c
> > > > > > > new file mode 100644
> > > > > > > index 000000000000..ec3bbc96e73a
> > > > > > > --- /dev/null
> > > > > > > +++ b/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c
> > > > > > > @@ -0,0 +1,203 @@
> > > > > > > +// SPDX-License-Identifier: GPL-2.0-only
> > > > > > > +#include <linux/kvm.h>
> > > > > > > +#include <linux/psp-sev.h>
> > > > > > > +#include <stdio.h>
> > > > > > > +#include <sys/ioctl.h>
> > > > > > > +#include <stdlib.h>
> > > > > > > +#include <errno.h>
> > > > > > > +#include <pthread.h>
> > > > > > > +
> > > > > > > +#include "test_util.h"
> > > > > > > +#include "kvm_util.h"
> > > > > > > +#include "processor.h"
> > > > > > > +#include "svm_util.h"
> > > > > > > +#include "kselftest.h"
> > > > > > > +#include "../lib/kvm_util_internal.h"
> > > > > > > +
> > > > > > > +#define SEV_POLICY_ES 0b100
> > > > > > > +
> > > > > > > +#define NR_MIGRATE_TEST_VCPUS 4
> > > > > > > +#define NR_MIGRATE_TEST_VMS 3
> > > > > > > +#define NR_LOCK_TESTING_THREADS 3
> > > > > > > +#define NR_LOCK_TESTING_ITERATIONS 10000
> > > > > > > +
> > > > > > > +static void sev_ioctl(int vm_fd, int cmd_id, void *data)
> > > > > > > +{
> > > > > > > + struct kvm_sev_cmd cmd = {
> > > > > > > + .id = cmd_id,
> > > > > > > + .data = (uint64_t)data,
> > > > > > > + .sev_fd = open_sev_dev_path_or_exit(),
> > > > > > > + };
> > > > > > > + int ret;
> > > > > > > +
> > > > > > > + ret = ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
> > > > > > > + TEST_ASSERT((ret == 0 || cmd.error == SEV_RET_SUCCESS),
> > > > > > > + "%d failed: return code: %d, errno: %d, fw error: %d",
> > > > > > > + cmd_id, ret, errno, cmd.error);
> > > > > > > +}
> > > > > > > +
> > > > > > > +static struct kvm_vm *sev_vm_create(bool es)
> > > > > > > +{
> > > > > > > + struct kvm_vm *vm;
> > > > > > > + struct kvm_sev_launch_start start = { 0 };
> > > > > > > + int i;
> > > > > > > +
> > > > > > > + vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
> > > > > > > + sev_ioctl(vm->fd, es ? KVM_SEV_ES_INIT : KVM_SEV_INIT, NULL);
> > > > > > > + for (i = 0; i < NR_MIGRATE_TEST_VCPUS; ++i)
> > > > > > > + vm_vcpu_add(vm, i);
> > > > > > > + if (es)
> > > > > > > + start.policy |= SEV_POLICY_ES;
> > > > > > > + sev_ioctl(vm->fd, KVM_SEV_LAUNCH_START, &start);
> > > > > > > + if (es)
> > > > > > > + sev_ioctl(vm->fd, KVM_SEV_LAUNCH_UPDATE_VMSA, NULL);
> > > > > > > + return vm;
> > > > > > > +}
> > > > > >
> > > > > > I should've suggested this in my original review. But is it worth
> > > > > > moving `sev_vm_create()` and `sev_ioctl()` into the broader selftests
> > > > > > library, so others can leverage this function to write selftests?
> > > > >
> > > > > This function isn't fully complete. It doesn't get to launch_finish,
> > > > > i.e. it only goes far enough for copyless migration ioctls to work. I
> > > > > think this would be a good expansion but it could happen in a follow-up
> > > > > series, thoughts?
> > > >
> > > > SGTM. Let's leave it here for now then.
> > > >
> > > > >
> > > > > >
> > > > > > > +
> > > > > > > +static struct kvm_vm *__vm_create(void)
> > > > > > > +{
> > > > > > > + struct kvm_vm *vm;
> > > > > > > + int i;
> > > > > > > +
> > > > > > > + vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
> > > > > > > + for (i = 0; i < NR_MIGRATE_TEST_VCPUS; ++i)
> > > > > > > + vm_vcpu_add(vm, i);
> > > > > > > +
> > > > > > > + return vm;
> > > > > > > +}
> > > > > > > +
> > > > > > > +static int __sev_migrate_from(int dst_fd, int src_fd)
> > > > > > > +{
> > > > > > > + struct kvm_enable_cap cap = {
> > > > > > > + .cap = KVM_CAP_VM_MIGRATE_PROTECTED_VM_FROM,
> > > > > > > + .args = { src_fd }
> > > > > > > + };
> > > > > > > +
> > > > > > > + return ioctl(dst_fd, KVM_ENABLE_CAP, &cap);
> > > > > > > +}
> > > > > > > +
> > > > > > > +
> > > > > > > +static void sev_migrate_from(int dst_fd, int src_fd)
> > > > > > > +{
> > > > > > > + int ret;
> > > > > > > +
> > > > > > > + ret = __sev_migrate_from(dst_fd, src_fd);
> > > > > > > + TEST_ASSERT(!ret, "Migration failed, ret: %d, errno: %d\n", ret, errno);
> > > > > > > +}
> > > > > > > +
> > > > > > > +static void test_sev_migrate_from(bool es)
> > > > > > > +{
> > > > > > > + struct kvm_vm *src_vm;
> > > > > > > + struct kvm_vm *dst_vms[NR_MIGRATE_TEST_VMS];
> > > > > > > + int i;
> > > > > > > +
> > > > > > > + src_vm = sev_vm_create(es);
> > > > > > > + for (i = 0; i < NR_MIGRATE_TEST_VMS; ++i)
> > > > > > > + dst_vms[i] = __vm_create();
> > > > > > > +
> > > > > > > + /* Initial migration from the src to the first dst. */
> > > > > > > + sev_migrate_from(dst_vms[0]->fd, src_vm->fd);
> > > > > > > +
> > > > > > > + for (i = 1; i < NR_MIGRATE_TEST_VMS; i++)
> > > > > > > + sev_migrate_from(dst_vms[i]->fd, dst_vms[i - 1]->fd);
> > > > > > > +
> > > > > > > + /* Migrate the guest back to the original VM. */
> > > > > > > + sev_migrate_from(src_vm->fd, dst_vms[NR_MIGRATE_TEST_VMS - 1]->fd);
> > > > > > > +
> > > > > > > + kvm_vm_free(src_vm);
> > > > > > > + for (i = 0; i < NR_MIGRATE_TEST_VMS; ++i)
> > > > > > > + kvm_vm_free(dst_vms[i]);
> > > > > > > +}
> > > > > > > +
> > > > > > > +struct locking_thread_input {
> > > > > > > + struct kvm_vm *vm;
> > > > > > > + int source_fds[NR_LOCK_TESTING_THREADS];
> > > > > > > +};
> > > > > > > +
> > > > > > > +static void *locking_test_thread(void *arg)
> > > > > > > +{
> > > > > > > + int i, j;
> > > > > > > + struct locking_thread_input *input = (struct locking_thread_input *)arg;
> > > > > > > +
> > > > > > > + for (i = 0; i < NR_LOCK_TESTING_ITERATIONS; ++i) {
> > > > > > > + j = i % NR_LOCK_TESTING_THREADS;
> > > > > > > + __sev_migrate_from(input->vm->fd, input->source_fds[j]);
> > > > > > > + }
> > > > > > > +
> > > > > > > + return NULL;
> > > > > > > +}
> > > > > > > +
> > > > > > > +static void test_sev_migrate_locking(void)
> > > > > > > +{
> > > > > > > + struct locking_thread_input input[NR_LOCK_TESTING_THREADS];
> > > > > > > + pthread_t pt[NR_LOCK_TESTING_THREADS];
> > > > > > > + int i;
> > > > > > > +
> > > > > > > + for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i) {
> > > > > > > + input[i].vm = sev_vm_create(/* es= */ false);
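> > > > > > > + /* Collect every VM's fd into input[0]; copied to the rest below. */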
> > > > > > > + input[0].source_fds[i] = input[i].vm->fd;
> > > > > > > + }
> > > > > > > + for (i = 1; i < NR_LOCK_TESTING_THREADS; ++i)
> > > > > > > + memcpy(input[i].source_fds, input[0].source_fds,
> > > > > > > + sizeof(input[i].source_fds));
> > > > > > > +
> > > > > > > + for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i)
> > > > > > > + pthread_create(&pt[i], NULL, locking_test_thread, &input[i]);
> > > > > > > +
> > > > > > > + for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i)
> > > > > > > + pthread_join(pt[i], NULL);
> > > > > > > +}
> > > > > > > +
> > > > > > > +static void test_sev_migrate_parameters(void)
> > > > > > > +{
> > > > > > > + struct kvm_vm *sev_vm, *sev_es_vm, *vm_no_vcpu, *vm_no_sev,
> > > > > > > + *sev_es_vm_no_vmsa;
> > > > > > > + int ret;
> > > > > > > +
> > > > > > > + sev_vm = sev_vm_create(/* es= */ false);
> > > > > > > + sev_es_vm = sev_vm_create(/* es= */ true);
> > > > > > > + vm_no_vcpu = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
> > > > > > > + vm_no_sev = __vm_create();
> > > > > > > + sev_es_vm_no_vmsa = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
> > > > > > > + sev_ioctl(sev_es_vm_no_vmsa->fd, KVM_SEV_ES_INIT, NULL);
> > > > > > > + vm_vcpu_add(sev_es_vm_no_vmsa, 1);
> > > > > > > +
> > > > > > > + ret = __sev_migrate_from(sev_vm->fd, sev_es_vm->fd);
> > > > > > > + TEST_ASSERT(
> > > > > > > + ret == -1 && errno == EINVAL,
> > > > > > > + "Should not be able migrate to SEV enabled VM. ret: %d, errno: %d\n",
> > > > > > > + ret, errno);
> > > > > > > +
> > > > > > > + ret = __sev_migrate_from(sev_es_vm->fd, sev_vm->fd);
> > > > > > > + TEST_ASSERT(
> > > > > > > + ret == -1 && errno == EINVAL,
> > > > > > > + "Should not be able migrate to SEV-ES enabled VM. ret: %d, errno: %d\n",
> > > > > > > + ret, errno);
> > > > > > > +
> > > > > > > + ret = __sev_migrate_from(vm_no_vcpu->fd, sev_es_vm->fd);
> > > > > > > + TEST_ASSERT(
> > > > > > > + ret == -1 && errno == EINVAL,
> > > > > > > + "SEV-ES migrations require same number of vCPUS. ret: %d, errno: %d\n",
> > > > > > > + ret, errno);
> > > > > >
> > > > > > How do we know that this failed because `vm_no_vcpu` has no vCPUs or
> > > > > > because it's not a SEV-ES VM?
> > > > >
> > > > > Actually, with V8 we only migrate to non-SEV(-ES) enabled guests.
> > > >
> > > > I think my point is that the test case should be written to treat the
> > > > underlying KVM code as a black box. Without looking at the KVM code,
> > > > the test case should be set up to be accepted cleanly by KVM and then
> > > > mutated in a minimal way to trigger the intended failure case.
> > > >
> > > > Here, we've defined `vm_no_vcpu`, which as far as I can tell is: (1)
> > > > not a SEV VM, (2) not a SEV-ES VM, (3) has no vCPUs. Based on the
> > > > error message in the TEST_ASSERT, the intention here is to verify that
> > > > a migration that would otherwise work fails because the target has a
> > > > different number of vCPUs than the source. Therefore, I think
> > > > `vm_no_vcpu` should be defined as a SEV-ES VM, so that the test case
> > > > is setup such that it would've otherwise passed if `vm_no_vcpu` had
> > > > the correct number of vCPUs added.
> > >
> > > I think I get what you are asking for, but I think this is good as
> > > written; the second case should be updated. Now, to migrate, the src
> > > should be SEV or SEV-ES (with the VMSAs set up) and the dst should be
> > > NOT SEV or SEV-ES enabled but should have the same # of vCPUs.
> > >
> > > So if |vm_no_vcpu| had 3 vCPUs like |sev_es_vm| this call would work.
> >
> > Got it now. SGTM. Thanks!
> >
> > >
> > > >
> > > > >
> > > > > >
> > > > > > > +
> > > > > > > + ret = __sev_migrate_from(vm_no_vcpu->fd, sev_es_vm_no_vmsa->fd);
> > > > > > > + TEST_ASSERT(
> > > > > > > + ret == -1 && errno == EINVAL,
> > > > > > > + "SEV-ES migrations require UPDATE_VMSA. ret %d, errno: %d\n",
> > > > > > > + ret, errno);
> > > > > >
> > > > > > Same question. How do we know why this failed? `sev_es_vm_no_vmsa` did
> > > > > > not have any vCPUs added. Would it be cleaner to add an additional
> > > > > > param to `sev_vm_create()` to skip calling UPDATE_VMSA? Then,
> > > > > > `sev_es_vm_no_vmsa` can be created from `sev_vm_create()` and it's
> > > > > > obvious to the reader that the VMs are identical except for this aspect.
> > > > > >
> > > > > > > +
> > > > > > > + ret = __sev_migrate_from(vm_no_vcpu->fd, vm_no_sev->fd);
> > > > > > > + TEST_ASSERT(ret == -1 && errno == EINVAL,
> > > > > > > + "Migrations require SEV enabled. ret %d, errno: %d\n", ret,
> > > > > > > + errno);
> > > > > >
> > > > > > `vm_no_sev` has vCPUs. Therefore, how do we know why this failed --
> > > > > > (a) differing vCPU counts or (b) no SEV?
> > > > >
> > > > > Ditto, we require the dst to be non-SEV enabled.
> > > >
> > > > Understood. But I think the test should treat KVM as a black box.
> > > > Therefore, I think in this test case, `vm_no_vcpu` should be defined
> > > > to have the same number of vCPUs as `vm_no_sev`.
> > >
> > > Ack, I lazily reused |vm_no_vcpu|. I will add a |vm_no_sev_two| or
> > > something which has the same number of vCPUs as |vm_no_sev|.
>
> Actually I should have checked before replying. They both have 0 vCPUs
> so technically they have the same number. Is this OK with you?

Looks good. Just sent a separate reply with my Reviewed-by tag.
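
To summarize the rules this thread converged on for V8: the source must be
SEV or SEV-ES enabled (with VMSAs updated for SEV-ES), the destination must
not be SEV(-ES) enabled, and for SEV-ES both sides must have the same number
of vCPUs. In kernel terms the checks amount to roughly the following sketch;
the function name and exact error paths are illustrative, not the posted
patch:

/* Illustrative sketch of the V8 migration preconditions. */
static int sev_migrate_check_params(struct kvm *dst_kvm, struct kvm *src_kvm)
{
        if (!sev_guest(src_kvm))        /* src must be SEV(-ES) enabled */
                return -EINVAL;

        if (sev_guest(dst_kvm))         /* dst must not already be SEV(-ES) enabled */
                return -EINVAL;

        /* SEV-ES hands over per-vCPU VMSAs, so the vCPU counts must match. */
        if (sev_es_guest(src_kvm) &&
            atomic_read(&src_kvm->online_vcpus) !=
            atomic_read(&dst_kvm->online_vcpus))
                return -EINVAL;

        return 0;
}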

2021-09-28 12:43:57

by Joerg Roedel

[permalink] [raw]
Subject: Re: [PATCH 1/4 V8] KVM: SEV: Add support for SEV intra host migration

On Tue, Sep 14, 2021 at 09:47:24AM -0700, Peter Gonda wrote:
> +static int sev_lock_vcpus_for_migration(struct kvm *kvm)
> +{
> + struct kvm_vcpu *vcpu;
> + int i, j;
> +
> + kvm_for_each_vcpu(i, vcpu, kvm) {
> + if (mutex_lock_killable(&vcpu->mutex))
> + goto out_unlock;
> + }
> +
> + return 0;
> +
> +out_unlock:
> + kvm_for_each_vcpu(j, vcpu, kvm) {
> + mutex_unlock(&vcpu->mutex);
> + if (i == j)
> + break;

Hmm, doesn't the mutex_unlock() need to happen after the check?

2021-09-28 15:32:19

by Peter Gonda

[permalink] [raw]
Subject: Re: [PATCH 1/4 V8] KVM: SEV: Add support for SEV intra host migration

On Tue, Sep 28, 2021 at 6:42 AM Joerg Roedel <[email protected]> wrote:
>
> On Tue, Sep 14, 2021 at 09:47:24AM -0700, Peter Gonda wrote:
> > +static int sev_lock_vcpus_for_migration(struct kvm *kvm)
> > +{
> > + struct kvm_vcpu *vcpu;
> > + int i, j;
> > +
> > + kvm_for_each_vcpu(i, vcpu, kvm) {
> > + if (mutex_lock_killable(&vcpu->mutex))
> > + goto out_unlock;
> > + }
> > +
> > + return 0;
> > +
> > +out_unlock:
> > + kvm_for_each_vcpu(j, vcpu, kvm) {
> > + mutex_unlock(&vcpu->mutex);
> > + if (i == j)
> > + break;
>
> Hmm, doesn't the mutex_unlock() need to happen after the check?
>

Ah, good catch, thanks for the review Joerg! Yes, you are right: this
results in calling mutex_unlock() on a mutex we didn't successfully
lock. I'll fix it in the next version.
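
Concretely, the unwind needs the check before the unlock so the vcpu whose
mutex_lock_killable() failed is never unlocked. A sketch of the corrected
loop (the error path shown is illustrative):

out_unlock:
        kvm_for_each_vcpu(j, vcpu, kvm) {
                if (i == j)
                        break;  /* vcpu i was never locked; stop here */
                mutex_unlock(&vcpu->mutex);
        }
        return -EINTR;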