2021-10-21 17:47:46

by Peter Gonda

Subject: [PATCH 0/5 V11] Add AMD SEV and SEV-ES intra host migration support

Intra host migration provides a low-cost mechanism for userspace VMM
upgrades. It is an alternative to traditional (i.e., remote) live
migration. Whereas remote migration handles moving a guest to a new host,
intra host migration only handles moving a guest to a new userspace VMM
within a host. This can be used to update, roll back, or change the
flags of the VMM, etc. The lower cost compared to live migration comes
from the fact that the guest's memory does not need to be copied
between processes. A handle to the guest memory is simply passed to the
new VMM; this could be done via /dev/shm with share=on or a similar
mechanism.
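
For illustration only, here is a minimal sketch of backing guest RAM with a
file under /dev/shm so that a second VMM process can later map the same
pages. The helper name, path, and size handling are made up; this series
does not prescribe any particular mechanism:

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

/*
 * Map (and optionally create) shared guest RAM, e.g. "/dev/shm/guest-ram".
 * The old VMM creates the file; the new VMM maps the existing one.
 */
static void *map_shared_guest_ram(const char *path, size_t size, int create)
{
	int fd = open(path, O_RDWR | (create ? O_CREAT : 0), 0600);
	void *mem;

	if (fd < 0)
		return NULL;
	if (create && ftruncate(fd, size)) {
		close(fd);
		return NULL;
	}

	/* MAP_SHARED is what lets both VMM processes see the same pages. */
	mem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	close(fd);	/* the mapping keeps the memory alive */
	return mem == MAP_FAILED ? NULL : mem;
}

The old VMM would create and size the file once at guest creation; the new
VMM maps the same path and registers the mapping with KVM through its
memslots as usual.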

The guest state can be transferred from an old VMM to a new VMM as follows:
1. Export the guest state from KVM to the old user-space VMM via a getter
   user-space/kernel API.
2. Transfer the guest state from the old VMM to the new VMM via IPC.
3. Import the guest state into KVM from the new user-space VMM via a setter
   user-space/kernel API.
In other words, guest state is handed off between VMMs by exporting it from
KVM using getters, sending that data to the new VMM, and then setting it
again in KVM.

In the common case for intra host migration, we can rely on the normal
ioctls for passing data from one VMM to the next. SEV, SEV-ES, and other
confidential compute environments make most of this information opaque, and
render KVM ioctls such as "KVM_GET_REGS" irrelevant. As a result, we need
the ability to pass this opaque metadata from one VMM to the next. The
easiest way to do this is to leave this data in the kernel, and transfer
ownership of the metadata from one KVM VM (or vCPU) to the next. For
example, we need to move the SEV enabled ASID, VMSAs, and GHCB metadata
from one VMM to the next. In general, we need to be able to hand off any
data that would be unsafe/impossible for the kernel to hand directly to
userspace (and cannot be reproduced using data that can be handed safely to
userspace).
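
As a concrete (if simplified) sketch of the destination side of that
hand-off, assuming the KVM_CAP_VM_MIGRATE_PROTECTED_VM_FROM capability added
by this series; the helper name is made up, and the selftest in patch 5/5
issues the same ioctl:

#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Move SEV/SEV-ES state from the source VM's fd into the destination VM. */
static int migrate_sev_from(int dst_vm_fd, int src_vm_fd)
{
	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_VM_MIGRATE_PROTECTED_VM_FROM,
		.args = { src_vm_fd },
	};

	return ioctl(dst_vm_fd, KVM_ENABLE_CAP, &cap);
}

On success, ownership of the SEV ASID, VMSAs, and related metadata moves to
the destination VM.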

V11
* Zero SEV-ES vCPU state on source.
* Rebase onto SEV-ES fixes.

V10
* Add new starting patch to refactor all SEV-ES related vCPU data into a
struct for easier copying.

V9
* Fix sev_lock_vcpus_for_migration from unlocking the vCPU mutex it
failed to lock.

V8
* Update to require that @dst is not SEV or SEV-ES enabled.
* Address selftest feedback.

V7
* Address selftest feedback.

V6
* Add selftest.

V5:
* Fix up locking scheme
* Address marcorr@ comments.

V4:
* Move to seanjc@'s suggestion of source VM FD based single ioctl design.

v3:
* Fix memory leak found by dan.carpenter@

v2:
* Added marcorr@ reviewed by tag
* Renamed function introduced in 1/3
* Edited with seanjc@'s review comments
** Cleaned up WARN usage
** Userspace makes random token now
* Edited with brijesh.singh@'s review comments
** Checks for different LAUNCH_* states in send function

v1: https://lore.kernel.org/kvm/[email protected]/

base-commit: 9f1ee7b169af

Cc: Marc Orr <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: Sean Christopherson <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Dr. David Alan Gilbert <[email protected]>
Cc: Brijesh Singh <[email protected]>
Cc: Tom Lendacky <[email protected]>
Cc: Vitaly Kuznetsov <[email protected]>
Cc: Wanpeng Li <[email protected]>
Cc: Jim Mattson <[email protected]>
Cc: Joerg Roedel <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: [email protected]
Cc: [email protected]

Peter Gonda (5):
Refactor out sev_es_state struct
KVM: SEV: Add support for SEV intra host migration
KVM: SEV: Add support for SEV-ES intra host migration
selftest: KVM: Add open sev dev helper
selftest: KVM: Add intra host migration tests

Documentation/virt/kvm/api.rst | 15 +
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/svm/sev.c | 268 +++++++++++++++---
arch/x86/kvm/svm/svm.c | 9 +-
arch/x86/kvm/svm/svm.h | 28 +-
arch/x86/kvm/x86.c | 6 +
include/uapi/linux/kvm.h | 1 +
tools/testing/selftests/kvm/Makefile | 1 +
.../testing/selftests/kvm/include/kvm_util.h | 1 +
.../selftests/kvm/include/x86_64/svm_util.h | 2 +
tools/testing/selftests/kvm/lib/kvm_util.c | 24 +-
tools/testing/selftests/kvm/lib/x86_64/svm.c | 13 +
.../selftests/kvm/x86_64/sev_vm_tests.c | 203 +++++++++++++
13 files changed, 506 insertions(+), 66 deletions(-)
create mode 100644 tools/testing/selftests/kvm/x86_64/sev_vm_tests.c

--
2.33.0.1079.g6e70778dc9-goog


2021-10-21 17:48:06

by Peter Gonda

Subject: [PATCH V11 5/5] selftest: KVM: Add intra host migration tests

Adds testcases for intra host migration for SEV and SEV-ES. Also adds a
locking test to confirm no deadlock exists.

Signed-off-by: Peter Gonda <[email protected]>
Suggested-by: Sean Christopherson <[email protected]>
Cc: Marc Orr <[email protected]>
Cc: Sean Christopherson <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Brijesh Singh <[email protected]>
Cc: Tom Lendacky <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: [email protected]
Cc: [email protected]
---
tools/testing/selftests/kvm/Makefile | 1 +
.../selftests/kvm/x86_64/sev_vm_tests.c | 203 ++++++++++++++++++
2 files changed, 204 insertions(+)
create mode 100644 tools/testing/selftests/kvm/x86_64/sev_vm_tests.c

diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index d1774f461393..c6b37d2045d9 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -72,7 +72,8 @@ TEST_GEN_PROGS_x86_64 += x86_64/tsc_msrs_test
TEST_GEN_PROGS_x86_64 += x86_64/vmx_pmu_msrs_test
TEST_GEN_PROGS_x86_64 += x86_64/xen_shinfo_test
TEST_GEN_PROGS_x86_64 += x86_64/xen_vmcall_test
+TEST_GEN_PROGS_x86_64 += x86_64/sev_vm_tests
TEST_GEN_PROGS_x86_64 += access_tracking_perf_test
TEST_GEN_PROGS_x86_64 += demand_paging_test
TEST_GEN_PROGS_x86_64 += dirty_log_test
TEST_GEN_PROGS_x86_64 += dirty_log_perf_test
diff --git a/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c b/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c
new file mode 100644
index 000000000000..ec3bbc96e73a
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/sev_vm_tests.c
@@ -0,0 +1,203 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#include <linux/kvm.h>
+#include <linux/psp-sev.h>
+#include <stdio.h>
+#include <sys/ioctl.h>
+#include <stdlib.h>
+#include <errno.h>
+#include <pthread.h>
+
+#include "test_util.h"
+#include "kvm_util.h"
+#include "processor.h"
+#include "svm_util.h"
+#include "kselftest.h"
+#include "../lib/kvm_util_internal.h"
+
+#define SEV_POLICY_ES 0b100
+
+#define NR_MIGRATE_TEST_VCPUS 4
+#define NR_MIGRATE_TEST_VMS 3
+#define NR_LOCK_TESTING_THREADS 3
+#define NR_LOCK_TESTING_ITERATIONS 10000
+
+static void sev_ioctl(int vm_fd, int cmd_id, void *data)
+{
+ struct kvm_sev_cmd cmd = {
+ .id = cmd_id,
+ .data = (uint64_t)data,
+ .sev_fd = open_sev_dev_path_or_exit(),
+ };
+ int ret;
+
+ ret = ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
+ TEST_ASSERT((ret == 0 || cmd.error == SEV_RET_SUCCESS),
+ "%d failed: return code: %d, errno: %d, fw error: %d",
+ cmd_id, ret, errno, cmd.error);
+}
+
+static struct kvm_vm *sev_vm_create(bool es)
+{
+ struct kvm_vm *vm;
+ struct kvm_sev_launch_start start = { 0 };
+ int i;
+
+ vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
+ sev_ioctl(vm->fd, es ? KVM_SEV_ES_INIT : KVM_SEV_INIT, NULL);
+ for (i = 0; i < NR_MIGRATE_TEST_VCPUS; ++i)
+ vm_vcpu_add(vm, i);
+ if (es)
+ start.policy |= SEV_POLICY_ES;
+ sev_ioctl(vm->fd, KVM_SEV_LAUNCH_START, &start);
+ if (es)
+ sev_ioctl(vm->fd, KVM_SEV_LAUNCH_UPDATE_VMSA, NULL);
+ return vm;
+}
+
+static struct kvm_vm *__vm_create(void)
+{
+ struct kvm_vm *vm;
+ int i;
+
+ vm = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
+ for (i = 0; i < NR_MIGRATE_TEST_VCPUS; ++i)
+ vm_vcpu_add(vm, i);
+
+ return vm;
+}
+
+static int __sev_migrate_from(int dst_fd, int src_fd)
+{
+ struct kvm_enable_cap cap = {
+ .cap = KVM_CAP_VM_MIGRATE_PROTECTED_VM_FROM,
+ .args = { src_fd }
+ };
+
+ return ioctl(dst_fd, KVM_ENABLE_CAP, &cap);
+}
+
+
+static void sev_migrate_from(int dst_fd, int src_fd)
+{
+ int ret;
+
+ ret = __sev_migrate_from(dst_fd, src_fd);
+ TEST_ASSERT(!ret, "Migration failed, ret: %d, errno: %d\n", ret, errno);
+}
+
+static void test_sev_migrate_from(bool es)
+{
+ struct kvm_vm *src_vm;
+ struct kvm_vm *dst_vms[NR_MIGRATE_TEST_VMS];
+ int i;
+
+ src_vm = sev_vm_create(es);
+ for (i = 0; i < NR_MIGRATE_TEST_VMS; ++i)
+ dst_vms[i] = __vm_create();
+
+ /* Initial migration from the src to the first dst. */
+ sev_migrate_from(dst_vms[0]->fd, src_vm->fd);
+
+ for (i = 1; i < NR_MIGRATE_TEST_VMS; i++)
+ sev_migrate_from(dst_vms[i]->fd, dst_vms[i - 1]->fd);
+
+ /* Migrate the guest back to the original VM. */
+ sev_migrate_from(src_vm->fd, dst_vms[NR_MIGRATE_TEST_VMS - 1]->fd);
+
+ kvm_vm_free(src_vm);
+ for (i = 0; i < NR_MIGRATE_TEST_VMS; ++i)
+ kvm_vm_free(dst_vms[i]);
+}
+
+struct locking_thread_input {
+ struct kvm_vm *vm;
+ int source_fds[NR_LOCK_TESTING_THREADS];
+};
+
+static void *locking_test_thread(void *arg)
+{
+ int i, j;
+ struct locking_thread_input *input = (struct locking_thread_input *)arg;
+
+ for (i = 0; i < NR_LOCK_TESTING_ITERATIONS; ++i) {
+ j = i % NR_LOCK_TESTING_THREADS;
+ __sev_migrate_from(input->vm->fd, input->source_fds[j]);
+ }
+
+ return NULL;
+}
+
+static void test_sev_migrate_locking(void)
+{
+ struct locking_thread_input input[NR_LOCK_TESTING_THREADS];
+ pthread_t pt[NR_LOCK_TESTING_THREADS];
+ int i;
+
+ for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i) {
+ input[i].vm = sev_vm_create(/* es= */ false);
+ input[0].source_fds[i] = input[i].vm->fd;
+ }
+ for (i = 1; i < NR_LOCK_TESTING_THREADS; ++i)
+ memcpy(input[i].source_fds, input[0].source_fds,
+ sizeof(input[i].source_fds));
+
+ for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i)
+ pthread_create(&pt[i], NULL, locking_test_thread, &input[i]);
+
+ for (i = 0; i < NR_LOCK_TESTING_THREADS; ++i)
+ pthread_join(pt[i], NULL);
+}
+
+static void test_sev_migrate_parameters(void)
+{
+ struct kvm_vm *sev_vm, *sev_es_vm, *vm_no_vcpu, *vm_no_sev,
+ *sev_es_vm_no_vmsa;
+ int ret;
+
+ sev_vm = sev_vm_create(/* es= */ false);
+ sev_es_vm = sev_vm_create(/* es= */ true);
+ vm_no_vcpu = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
+ vm_no_sev = __vm_create();
+ sev_es_vm_no_vmsa = vm_create(VM_MODE_DEFAULT, 0, O_RDWR);
+ sev_ioctl(sev_es_vm_no_vmsa->fd, KVM_SEV_ES_INIT, NULL);
+ vm_vcpu_add(sev_es_vm_no_vmsa, 1);
+
+
+ ret = __sev_migrate_from(sev_vm->fd, sev_es_vm->fd);
+ TEST_ASSERT(
+ ret == -1 && errno == EINVAL,
+ "Should not be able migrate to SEV enabled VM. ret: %d, errno: %d\n",
+ ret, errno);
+
+ ret = __sev_migrate_from(sev_es_vm->fd, sev_vm->fd);
+ TEST_ASSERT(
+ ret == -1 && errno == EINVAL,
+ "Should not be able migrate to SEV-ES enabled VM. ret: %d, errno: %d\n",
+ ret, errno);
+
+ ret = __sev_migrate_from(vm_no_vcpu->fd, sev_es_vm->fd);
+ TEST_ASSERT(
+ ret == -1 && errno == EINVAL,
+ "SEV-ES migrations require same number of vCPUS. ret: %d, errno: %d\n",
+ ret, errno);
+
+ ret = __sev_migrate_from(vm_no_vcpu->fd, sev_es_vm_no_vmsa->fd);
+ TEST_ASSERT(
+ ret == -1 && errno == EINVAL,
+ "SEV-ES migrations require UPDATE_VMSA. ret %d, errno: %d\n",
+ ret, errno);
+
+ ret = __sev_migrate_from(vm_no_vcpu->fd, vm_no_sev->fd);
+ TEST_ASSERT(ret == -1 && errno == EINVAL,
+ "Migrations require SEV enabled. ret %d, errno: %d\n", ret,
+ errno);
+}
+
+int main(int argc, char *argv[])
+{
+ test_sev_migrate_from(/* es= */ false);
+ test_sev_migrate_from(/* es= */ true);
+ test_sev_migrate_locking();
+ test_sev_migrate_parameters();
+ return 0;
+}
--
2.33.0.1079.g6e70778dc9-goog

2021-10-21 17:48:25

by Peter Gonda

Subject: [PATCH V11 4/5] selftest: KVM: Add open sev dev helper

Refactors the open path support out of open_kvm_dev_path_or_exit() and
adds a new helper for the SEV device path.

Signed-off-by: Peter Gonda <[email protected]>
Suggested-by: Sean Christopherson <[email protected]>
Cc: Marc Orr <[email protected]>
Cc: Sean Christopherson <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Brijesh Singh <[email protected]>
Cc: Tom Lendacky <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: [email protected]
Cc: [email protected]
---
.../testing/selftests/kvm/include/kvm_util.h | 1 +
.../selftests/kvm/include/x86_64/svm_util.h | 2 ++
tools/testing/selftests/kvm/lib/kvm_util.c | 24 +++++++++++--------
tools/testing/selftests/kvm/lib/x86_64/svm.c | 13 ++++++++++
4 files changed, 30 insertions(+), 10 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 010b59b13917..368e88305046 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -80,6 +80,7 @@ struct vm_guest_mode_params {
};
extern const struct vm_guest_mode_params vm_guest_mode_params[];

+int open_path_or_exit(const char *path, int flags);
int open_kvm_dev_path_or_exit(void);
int kvm_check_cap(long cap);
int vm_enable_cap(struct kvm_vm *vm, struct kvm_enable_cap *cap);
diff --git a/tools/testing/selftests/kvm/include/x86_64/svm_util.h b/tools/testing/selftests/kvm/include/x86_64/svm_util.h
index b7531c83b8ae..587fbe408b99 100644
--- a/tools/testing/selftests/kvm/include/x86_64/svm_util.h
+++ b/tools/testing/selftests/kvm/include/x86_64/svm_util.h
@@ -46,4 +46,6 @@ static inline bool cpu_has_svm(void)
return ecx & CPUID_SVM;
}

+int open_sev_dev_path_or_exit(void);
+
#endif /* SELFTEST_KVM_SVM_UTILS_H */
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 10a8ed691c66..06a6c04010fb 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -31,6 +31,19 @@ static void *align(void *x, size_t size)
return (void *) (((size_t) x + mask) & ~mask);
}

+int open_path_or_exit(const char *path, int flags)
+{
+ int fd;
+
+ fd = open(path, flags);
+ if (fd < 0) {
+ print_skip("%s not available (errno: %d)", path, errno);
+ exit(KSFT_SKIP);
+ }
+
+ return fd;
+}
+
/*
* Open KVM_DEV_PATH if available, otherwise exit the entire program.
*
@@ -42,16 +55,7 @@ static void *align(void *x, size_t size)
*/
static int _open_kvm_dev_path_or_exit(int flags)
{
- int fd;
-
- fd = open(KVM_DEV_PATH, flags);
- if (fd < 0) {
- print_skip("%s not available, is KVM loaded? (errno: %d)",
- KVM_DEV_PATH, errno);
- exit(KSFT_SKIP);
- }
-
- return fd;
+ return open_path_or_exit(KVM_DEV_PATH, flags);
}

int open_kvm_dev_path_or_exit(void)
diff --git a/tools/testing/selftests/kvm/lib/x86_64/svm.c b/tools/testing/selftests/kvm/lib/x86_64/svm.c
index 2ac98d70d02b..14a8618efa9c 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/svm.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/svm.c
@@ -13,6 +13,8 @@
#include "processor.h"
#include "svm_util.h"

+#define SEV_DEV_PATH "/dev/sev"
+
struct gpr64_regs guest_regs;
u64 rflags;

@@ -160,3 +162,14 @@ void nested_svm_check_supported(void)
exit(KSFT_SKIP);
}
}
+
+/*
+ * Open SEV_DEV_PATH if available, otherwise exit the entire program.
+ *
+ * Return:
+ * The opened file descriptor of /dev/sev.
+ */
+int open_sev_dev_path_or_exit(void)
+{
+ return open_path_or_exit(SEV_DEV_PATH, 0);
+}
--
2.33.0.1079.g6e70778dc9-goog

2021-11-06 03:33:20

by Sean Christopherson

Subject: Re: [PATCH V11 4/5] selftest: KVM: Add open sev dev helper

On Thu, Oct 21, 2021, Peter Gonda wrote:
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index 10a8ed691c66..06a6c04010fb 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -31,6 +31,19 @@ static void *align(void *x, size_t size)
> return (void *) (((size_t) x + mask) & ~mask);
> }
>
> +int open_path_or_exit(const char *path, int flags)
> +{
> + int fd;
> +
> + fd = open(path, flags);
> + if (fd < 0) {
> + print_skip("%s not available (errno: %d)", path, errno);

While you're here, can you add the strerror(errno) as well? There are some other
enhancements I'd like to make as some failure modes are really annoying, e.g. if
the max vCPUs test fails/skips due to ulimits, but printing the human friendly
version is an easy one to pick off.
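
Something like this, for example (untested, and strerror() might need an
explicit #include <string.h> if it isn't already pulled in):

	print_skip("%s not available (errno: %d, %s)",
		   path, errno, strerror(errno));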

> + exit(KSFT_SKIP);
> + }
> +
> + return fd;
> +}
> +
> /*
> * Open KVM_DEV_PATH if available, otherwise exit the entire program.
> *
> @@ -42,16 +55,7 @@ static void *align(void *x, size_t size)
> */
> static int _open_kvm_dev_path_or_exit(int flags)
> {
> - int fd;
> -
> - fd = open(KVM_DEV_PATH, flags);
> - if (fd < 0) {
> - print_skip("%s not available, is KVM loaded? (errno: %d)",
> - KVM_DEV_PATH, errno);
> - exit(KSFT_SKIP);
> - }
> -
> - return fd;
> + return open_path_or_exit(KVM_DEV_PATH, flags);
> }
>
> int open_kvm_dev_path_or_exit(void)
> diff --git a/tools/testing/selftests/kvm/lib/x86_64/svm.c b/tools/testing/selftests/kvm/lib/x86_64/svm.c
> index 2ac98d70d02b..14a8618efa9c 100644
> --- a/tools/testing/selftests/kvm/lib/x86_64/svm.c
> +++ b/tools/testing/selftests/kvm/lib/x86_64/svm.c
> @@ -13,6 +13,8 @@
> #include "processor.h"
> #include "svm_util.h"
>
> +#define SEV_DEV_PATH "/dev/sev"
> +
> struct gpr64_regs guest_regs;
> u64 rflags;
>
> @@ -160,3 +162,14 @@ void nested_svm_check_supported(void)
> exit(KSFT_SKIP);
> }
> }
> +
> +/*
> + * Open SEV_DEV_PATH if available, otherwise exit the entire program.
> + *
> + * Return:
> + * The opened file descriptor of /dev/sev.
> + */
> +int open_sev_dev_path_or_exit(void)
> +{
> + return open_path_or_exit(SEV_DEV_PATH, 0);
> +}
> --
> 2.33.0.1079.g6e70778dc9-goog
>