2023-08-08 00:58:10

by Ackerley Tng

Subject: [RFC PATCH 00/11] New KVM ioctl to link a gmem inode to a new gmem file

Hello,

This patchset builds upon the code at
https://lore.kernel.org/lkml/[email protected]/T/.

This code is available at
https://github.com/googleprodkernel/linux-cc/tree/kvm-gmem-link-migrate-rfcv1.

In guest_mem v11, a split file/inode model was proposed, where memslot
bindings belong to the file and pages belong to the inode. This model
lends itself well to having different VMs use separate files pointing
to the same inode.
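
Roughly, as a sketch of the model (not a diagram from the original
series):

  VM A's gmem file --+
                     +--> gmem inode (owns the pages)
  VM B's gmem file --+

  (memslot bindings live in each file's struct kvm_gmem)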

This RFC proposes a new ioctl, KVM_LINK_GUEST_MEMFD, that takes a VM
and a gmem fd, and returns another gmem fd referencing a different
file associated with that VM. This RFC also updates
KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM to migrate memory context
(slot->arch.lpage_info and kvm->mem_attr_array) from the source to the
destination VM, intra-host.

Intended usage of the two ioctls (a rough C sketch follows the list):

1. The source VM's gmem fd is passed to the destination VM over a unix
   socket (SCM_RIGHTS).
2. The destination VM uses the new ioctl KVM_LINK_GUEST_MEMFD to link
   the source VM's fd to a new fd.
3. The destination VM passes the new fds to KVM_SET_USER_MEMORY_REGION,
   binding each new file (which points to the same inode as the
   corresponding source VM file) to a memslot.
4. KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM is used to move kvm->mem_attr_array
   and slot->arch.lpage_info to the destination VM.
5. Run the destination VM as normal.
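
A minimal userspace sketch of the flow in C. This is illustrative
only: the argument layout of KVM_LINK_GUEST_MEMFD (here, a pointer to
the source fd) is an assumption, the unix-socket receive and memslot
setup (steps 1 and 3) are elided, and only the KVM_ENABLE_CAP usage
(args[0] = source VM fd) is taken from the selftests in this series.

  #include <errno.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  /* src_gmem_fd was received over a unix socket in step 1. */
  static int link_and_migrate(int dst_vm_fd, int src_vm_fd, int src_gmem_fd)
  {
  	int dst_gmem_fd;
  	struct kvm_enable_cap cap = {
  		.cap = KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM,
  		.args = { src_vm_fd },
  	};

  	/* Step 2: link the source fd to a new file on this VM. */
  	dst_gmem_fd = ioctl(dst_vm_fd, KVM_LINK_GUEST_MEMFD, &src_gmem_fd);
  	if (dst_gmem_fd < 0)
  		return -errno;

  	/*
  	 * Step 3 (elided): bind dst_gmem_fd to a memslot with
  	 * KVM_SET_USER_MEMORY_REGION; the new file points to the same
  	 * inode as the source VM's file.
  	 */

  	/* Step 4: move kvm->mem_attr_array and slot->arch.lpage_info. */
  	return ioctl(dst_vm_fd, KVM_ENABLE_CAP, &cap);
  }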

Some other approaches considered were:

+ Using the linkat() syscall, but that requires a mount/directory for
  the source fd to be linked into
+ Using the dup() syscall, but that only duplicates the fd; both fds
  still refer to the same file (see the sketch below)
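
For example, with dup() (a tiny sketch; gmem_fd is an existing gmem
fd), both descriptors share one struct file, so the per-file memslot
bindings cannot be kept per-VM:

  int fd2 = dup(gmem_fd);
  /*
   * fd2 and gmem_fd refer to the same struct file, hence the same
   * kvm_gmem and the same memslot bindings; a second VM still could
   * not bind the memory through a file of its own.
   */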

---

Ackerley Tng (11):
KVM: guest_mem: Refactor out kvm_gmem_alloc_file()
KVM: guest_mem: Add ioctl KVM_LINK_GUEST_MEMFD
KVM: selftests: Add tests for KVM_LINK_GUEST_MEMFD ioctl
KVM: selftests: Test transferring private memory to another VM
KVM: x86: Refactor sev's flag migration_in_progress to kvm struct
KVM: x86: Refactor common code out of sev.c
KVM: x86: Refactor common migration preparation code out of
sev_vm_move_enc_context_from
KVM: x86: Let moving encryption context be configurable
KVM: x86: Handle moving of memory context for intra-host migration
KVM: selftests: Generalize migration functions from
sev_migrate_tests.c
KVM: selftests: Add tests for migration of private mem

arch/x86/include/asm/kvm_host.h | 4 +-
arch/x86/kvm/svm/sev.c | 85 ++-----
arch/x86/kvm/svm/svm.h | 3 +-
arch/x86/kvm/x86.c | 221 +++++++++++++++++-
arch/x86/kvm/x86.h | 6 +
include/linux/kvm_host.h | 18 ++
include/uapi/linux/kvm.h | 8 +
tools/testing/selftests/kvm/Makefile | 1 +
.../testing/selftests/kvm/guest_memfd_test.c | 42 ++++
.../selftests/kvm/include/kvm_util_base.h | 31 +++
.../kvm/x86_64/private_mem_migrate_tests.c | 93 ++++++++
.../selftests/kvm/x86_64/sev_migrate_tests.c | 48 ++--
virt/kvm/guest_mem.c | 151 ++++++++++--
virt/kvm/kvm_main.c | 10 +
virt/kvm/kvm_mm.h | 7 +
15 files changed, 596 insertions(+), 132 deletions(-)
create mode 100644 tools/testing/selftests/kvm/x86_64/private_mem_migrate_tests.c

--
2.41.0.640.ga95def55d0-goog


2023-08-08 01:08:56

by Ackerley Tng

Subject: [RFC PATCH 08/11] KVM: x86: Let moving encryption context be configurable

SEV-capable VMs may also use the KVM_X86_SW_PROTECTED_VM type, but
they will still need architecture-specific handling to move encryption
context. Hence, make moving of encryption context configurable and
store that configuration in a per-VM flag.

Co-developed-by: Vishal Annapurve <[email protected]>
Signed-off-by: Vishal Annapurve <[email protected]>
Signed-off-by: Ackerley Tng <[email protected]>
---
arch/x86/include/asm/kvm_host.h | 2 ++
arch/x86/kvm/svm/sev.c | 2 ++
arch/x86/kvm/x86.c | 9 ++++++++-
3 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 71c1236e4f18..ab45a3d3c867 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1445,6 +1445,8 @@ struct kvm_arch {
*/
#define SPLIT_DESC_CACHE_MIN_NR_OBJECTS (SPTE_ENT_PER_PAGE + 1)
struct kvm_mmu_memory_cache split_desc_cache;
+
+ bool vm_move_enc_ctxt_supported;
};

struct kvm_vm_stat {
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index e0e206aa3e62..b09e6477e309 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -256,6 +256,8 @@ static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
goto e_no_asid;
sev->asid = asid;

+ kvm->arch.vm_move_enc_ctxt_supported = true;
+
ret = sev_platform_init(&argp->error);
if (ret)
goto e_free;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 75d48379d94d..a1a28dd77b94 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6351,7 +6351,14 @@ static int kvm_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd)
if (r)
goto out_mark_migration_done;

- r = static_call(kvm_x86_vm_move_enc_context_from)(kvm, source_kvm);
+ /*
+ * Different types of VMs will allow userspace to define if moving
+ * encryption context should be supported.
+ */
+ if (kvm->arch.vm_move_enc_ctxt_supported &&
+ kvm_x86_ops.vm_move_enc_context_from) {
+ r = static_call(kvm_x86_vm_move_enc_context_from)(kvm, source_kvm);
+ }

kvm_unlock_two_vms(kvm, source_kvm);
out_mark_migration_done:
--
2.41.0.640.ga95def55d0-goog


2023-08-08 01:29:03

by Ackerley Tng

Subject: [RFC PATCH 06/11] KVM: x86: Refactor common code out of sev.c

Split sev_lock_two_vms() into kvm_mark_migration_in_progress() and
kvm_lock_two_vms(), and likewise split sev_unlock_two_vms() into
kvm_unlock_two_vms() and kvm_mark_migration_done(), then refactor
sev.c to use these new functions. Separating the migration_in_progress
flag from the VM locks lets callers choose when to clear the flag
relative to dropping the locks; a sketch of the resulting caller
pattern follows.
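
The resulting caller pattern, sketched with error handling trimmed
(kvm is the destination VM, source_kvm the source; the complete
versions are in the hunks below):

  	ret = kvm_mark_migration_in_progress(kvm, source_kvm);
  	if (ret)
  		return ret;

  	ret = kvm_lock_two_vms(kvm, source_kvm);
  	if (ret)
  		goto out_mark_migration_done;

  	/* ... migrate or mirror state from source_kvm into kvm ... */

  	kvm_unlock_two_vms(kvm, source_kvm);
  out_mark_migration_done:
  	kvm_mark_migration_done(kvm, source_kvm);
  	return ret;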

Co-developed-by: Sagi Shahar <[email protected]>
Signed-off-by: Sagi Shahar <[email protected]>
Co-developed-by: Vishal Annapurve <[email protected]>
Signed-off-by: Vishal Annapurve <[email protected]>
Signed-off-by: Ackerley Tng <[email protected]>
---
arch/x86/kvm/svm/sev.c | 59 ++++++++++------------------------------
arch/x86/kvm/x86.c | 62 ++++++++++++++++++++++++++++++++++++++++++
arch/x86/kvm/x86.h | 6 ++++
3 files changed, 82 insertions(+), 45 deletions(-)

diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 725289b523c7..3c4313417966 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -1554,47 +1554,6 @@ static bool is_cmd_allowed_from_mirror(u32 cmd_id)
return false;
}

-static int sev_lock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm)
-{
- int r = -EBUSY;
-
- if (dst_kvm == src_kvm)
- return -EINVAL;
-
- /*
- * Bail if these VMs are already involved in a migration to avoid
- * deadlock between two VMs trying to migrate to/from each other.
- */
- if (atomic_cmpxchg_acquire(&dst_kvm->migration_in_progress, 0, 1))
- return -EBUSY;
-
- if (atomic_cmpxchg_acquire(&src_kvm->migration_in_progress, 0, 1))
- goto release_dst;
-
- r = -EINTR;
- if (mutex_lock_killable(&dst_kvm->lock))
- goto release_src;
- if (mutex_lock_killable_nested(&src_kvm->lock, SINGLE_DEPTH_NESTING))
- goto unlock_dst;
- return 0;
-
-unlock_dst:
- mutex_unlock(&dst_kvm->lock);
-release_src:
- atomic_set_release(&src_kvm->migration_in_progress, 0);
-release_dst:
- atomic_set_release(&dst_kvm->migration_in_progress, 0);
- return r;
-}
-
-static void sev_unlock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm)
-{
- mutex_unlock(&dst_kvm->lock);
- mutex_unlock(&src_kvm->lock);
- atomic_set_release(&dst_kvm->migration_in_progress, 0);
- atomic_set_release(&src_kvm->migration_in_progress, 0);
-}
-
/* vCPU mutex subclasses. */
enum sev_migration_role {
SEV_MIGRATION_SOURCE = 0,
@@ -1777,9 +1736,12 @@ int sev_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd)
}

source_kvm = f.file->private_data;
- ret = sev_lock_two_vms(kvm, source_kvm);
+ ret = kvm_mark_migration_in_progress(kvm, source_kvm);
if (ret)
goto out_fput;
+ ret = kvm_lock_two_vms(kvm, source_kvm);
+ if (ret)
+ goto out_mark_migration_done;

if (sev_guest(kvm) || !sev_guest(source_kvm)) {
ret = -EINVAL;
@@ -1823,8 +1785,10 @@ int sev_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd)
sev_misc_cg_uncharge(cg_cleanup_sev);
put_misc_cg(cg_cleanup_sev->misc_cg);
cg_cleanup_sev->misc_cg = NULL;
out_unlock:
- sev_unlock_two_vms(kvm, source_kvm);
+ kvm_unlock_two_vms(kvm, source_kvm);
+out_mark_migration_done:
+ kvm_mark_migration_done(kvm, source_kvm);
out_fput:
fdput(f);
return ret;
@@ -2057,9 +2021,12 @@ int sev_vm_copy_enc_context_from(struct kvm *kvm, unsigned int source_fd)
}

source_kvm = f.file->private_data;
- ret = sev_lock_two_vms(kvm, source_kvm);
+ ret = kvm_mark_migration_in_progress(kvm, source_kvm);
if (ret)
goto e_source_fput;
+ ret = kvm_lock_two_vms(kvm, source_kvm);
+ if (ret)
+ goto e_mark_migration_done;

/*
* Mirrors of mirrors should work, but let's not get silly. Also
@@ -2100,7 +2067,9 @@ int sev_vm_copy_enc_context_from(struct kvm *kvm, unsigned int source_fd)
*/

e_unlock:
- sev_unlock_two_vms(kvm, source_kvm);
+ kvm_unlock_two_vms(kvm, source_kvm);
+e_mark_migration_done:
+ kvm_mark_migration_done(kvm, source_kvm);
e_source_fput:
fdput(f);
return ret;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index de195ad83ec0..494b75ef7197 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4340,6 +4340,68 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
}
EXPORT_SYMBOL_GPL(kvm_get_msr_common);

+int kvm_mark_migration_in_progress(struct kvm *dst_kvm, struct kvm *src_kvm)
+{
+ int r;
+
+ if (dst_kvm == src_kvm)
+ return -EINVAL;
+
+ /*
+ * Bail if these VMs are already involved in a migration to avoid
+ * deadlock between two VMs trying to migrate to/from each other.
+ */
+ r = -EBUSY;
+ if (atomic_cmpxchg_acquire(&dst_kvm->migration_in_progress, 0, 1))
+ return r;
+
+ if (atomic_cmpxchg_acquire(&src_kvm->migration_in_progress, 0, 1))
+ goto release_dst;
+
+ return 0;
+
+release_dst:
+ atomic_set_release(&dst_kvm->migration_in_progress, 0);
+ return r;
+}
+EXPORT_SYMBOL_GPL(kvm_mark_migration_in_progress);
+
+void kvm_mark_migration_done(struct kvm *dst_kvm, struct kvm *src_kvm)
+{
+ atomic_set_release(&dst_kvm->migration_in_progress, 0);
+ atomic_set_release(&src_kvm->migration_in_progress, 0);
+}
+EXPORT_SYMBOL_GPL(kvm_mark_migration_done);
+
+int kvm_lock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm)
+{
+ int r;
+
+ if (dst_kvm == src_kvm)
+ return -EINVAL;
+
+ r = -EINTR;
+ if (mutex_lock_killable(&dst_kvm->lock))
+ return r;
+
+ if (mutex_lock_killable_nested(&src_kvm->lock, SINGLE_DEPTH_NESTING))
+ goto unlock_dst;
+
+ return 0;
+
+unlock_dst:
+ mutex_unlock(&dst_kvm->lock);
+ return r;
+}
+EXPORT_SYMBOL_GPL(kvm_lock_two_vms);
+
+void kvm_unlock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm)
+{
+ mutex_unlock(&dst_kvm->lock);
+ mutex_unlock(&src_kvm->lock);
+}
+EXPORT_SYMBOL_GPL(kvm_unlock_two_vms);
+
/*
* Read or write a bunch of msrs. All parameters are kernel addresses.
*
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 82e3dafc5453..4c6edaf5ac5b 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -539,4 +539,10 @@ int kvm_sev_es_string_io(struct kvm_vcpu *vcpu, unsigned int size,
unsigned int port, void *data, unsigned int count,
int in);

+int kvm_mark_migration_in_progress(struct kvm *dst_kvm, struct kvm *src_kvm);
+void kvm_mark_migration_done(struct kvm *dst_kvm, struct kvm *src_kvm);
+
+int kvm_lock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm);
+void kvm_unlock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm);
+
#endif
--
2.41.0.640.ga95def55d0-goog


2023-08-08 01:33:28

by Ackerley Tng

Subject: [RFC PATCH 10/11] KVM: selftests: Generalize migration functions from sev_migrate_tests.c

Move __sev_migrate_from() and sev_migrate_from() from
sev_migrate_tests.c into kvm_util_base.h as __vm_migrate_from() and
vm_migrate_from(), so that they can be reused by the private (guest
mem) migration tests.

Signed-off-by: Ackerley Tng <[email protected]>
---
.../selftests/kvm/include/kvm_util_base.h | 13 +++++
.../selftests/kvm/x86_64/sev_migrate_tests.c | 48 +++++++------------
2 files changed, 30 insertions(+), 31 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 868925b26a7b..af6ebead5bc3 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -331,6 +331,19 @@ static inline void vm_enable_cap(struct kvm_vm *vm, uint32_t cap, uint64_t arg0)
vm_ioctl(vm, KVM_ENABLE_CAP, &enable_cap);
}

+static inline int __vm_migrate_from(struct kvm_vm *dst, struct kvm_vm *src)
+{
+ return __vm_enable_cap(dst, KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM, src->fd);
+}
+
+static inline void vm_migrate_from(struct kvm_vm *dst, struct kvm_vm *src)
+{
+ int ret;
+
+ ret = __vm_migrate_from(dst, src);
+ TEST_ASSERT(!ret, "Migration failed, ret: %d, errno: %d\n", ret, errno);
+}
+
static inline void vm_set_memory_attributes(struct kvm_vm *vm, uint64_t gpa,
uint64_t size, uint64_t attributes)
{
diff --git a/tools/testing/selftests/kvm/x86_64/sev_migrate_tests.c b/tools/testing/selftests/kvm/x86_64/sev_migrate_tests.c
index c7ef97561038..cee8219fe8d2 100644
--- a/tools/testing/selftests/kvm/x86_64/sev_migrate_tests.c
+++ b/tools/testing/selftests/kvm/x86_64/sev_migrate_tests.c
@@ -80,20 +80,6 @@ static struct kvm_vm *aux_vm_create(bool with_vcpus)
return vm;
}

-static int __sev_migrate_from(struct kvm_vm *dst, struct kvm_vm *src)
-{
- return __vm_enable_cap(dst, KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM, src->fd);
-}
-
-
-static void sev_migrate_from(struct kvm_vm *dst, struct kvm_vm *src)
-{
- int ret;
-
- ret = __sev_migrate_from(dst, src);
- TEST_ASSERT(!ret, "Migration failed, ret: %d, errno: %d\n", ret, errno);
-}
-
static void test_sev_migrate_from(bool es)
{
struct kvm_vm *src_vm;
@@ -105,13 +91,13 @@ static void test_sev_migrate_from(bool es)
dst_vms[i] = aux_vm_create(true);

/* Initial migration from the src to the first dst. */
- sev_migrate_from(dst_vms[0], src_vm);
+ vm_migrate_from(dst_vms[0], src_vm);

for (i = 1; i < NR_MIGRATE_TEST_VMS; i++)
- sev_migrate_from(dst_vms[i], dst_vms[i - 1]);
+ vm_migrate_from(dst_vms[i], dst_vms[i - 1]);

/* Migrate the guest back to the original VM. */
- ret = __sev_migrate_from(src_vm, dst_vms[NR_MIGRATE_TEST_VMS - 1]);
+ ret = __vm_migrate_from(src_vm, dst_vms[NR_MIGRATE_TEST_VMS - 1]);
TEST_ASSERT(ret == -1 && errno == EIO,
"VM that was migrated from should be dead. ret %d, errno: %d\n", ret,
errno);
@@ -133,7 +119,7 @@ static void *locking_test_thread(void *arg)

for (i = 0; i < NR_LOCK_TESTING_ITERATIONS; ++i) {
j = i % NR_LOCK_TESTING_THREADS;
- __sev_migrate_from(input->vm, input->source_vms[j]);
+ __vm_migrate_from(input->vm, input->source_vms[j]);
}

return NULL;
@@ -170,7 +156,7 @@ static void test_sev_migrate_parameters(void)

vm_no_vcpu = vm_create_barebones();
vm_no_sev = aux_vm_create(true);
- ret = __sev_migrate_from(vm_no_vcpu, vm_no_sev);
+ ret = __vm_migrate_from(vm_no_vcpu, vm_no_sev);
TEST_ASSERT(ret == -1 && errno == EINVAL,
"Migrations require SEV enabled. ret %d, errno: %d\n", ret,
errno);
@@ -184,25 +170,25 @@ static void test_sev_migrate_parameters(void)
sev_ioctl(sev_es_vm_no_vmsa->fd, KVM_SEV_ES_INIT, NULL);
__vm_vcpu_add(sev_es_vm_no_vmsa, 1);

- ret = __sev_migrate_from(sev_vm, sev_es_vm);
+ ret = __vm_migrate_from(sev_vm, sev_es_vm);
TEST_ASSERT(
ret == -1 && errno == EINVAL,
"Should not be able migrate to SEV enabled VM. ret: %d, errno: %d\n",
ret, errno);

- ret = __sev_migrate_from(sev_es_vm, sev_vm);
+ ret = __vm_migrate_from(sev_es_vm, sev_vm);
TEST_ASSERT(
ret == -1 && errno == EINVAL,
"Should not be able migrate to SEV-ES enabled VM. ret: %d, errno: %d\n",
ret, errno);

- ret = __sev_migrate_from(vm_no_vcpu, sev_es_vm);
+ ret = __vm_migrate_from(vm_no_vcpu, sev_es_vm);
TEST_ASSERT(
ret == -1 && errno == EINVAL,
"SEV-ES migrations require same number of vCPUS. ret: %d, errno: %d\n",
ret, errno);

- ret = __sev_migrate_from(vm_no_vcpu, sev_es_vm_no_vmsa);
+ ret = __vm_migrate_from(vm_no_vcpu, sev_es_vm_no_vmsa);
TEST_ASSERT(
ret == -1 && errno == EINVAL,
"SEV-ES migrations require UPDATE_VMSA. ret %d, errno: %d\n",
@@ -355,14 +341,14 @@ static void test_sev_move_copy(void)

sev_mirror_create(mirror_vm, sev_vm);

- sev_migrate_from(dst_mirror_vm, mirror_vm);
- sev_migrate_from(dst_vm, sev_vm);
+ vm_migrate_from(dst_mirror_vm, mirror_vm);
+ vm_migrate_from(dst_vm, sev_vm);

- sev_migrate_from(dst2_vm, dst_vm);
- sev_migrate_from(dst2_mirror_vm, dst_mirror_vm);
+ vm_migrate_from(dst2_vm, dst_vm);
+ vm_migrate_from(dst2_mirror_vm, dst_mirror_vm);

- sev_migrate_from(dst3_mirror_vm, dst2_mirror_vm);
- sev_migrate_from(dst3_vm, dst2_vm);
+ vm_migrate_from(dst3_mirror_vm, dst2_mirror_vm);
+ vm_migrate_from(dst3_vm, dst2_vm);

kvm_vm_free(dst_vm);
kvm_vm_free(sev_vm);
@@ -384,8 +370,8 @@ static void test_sev_move_copy(void)

sev_mirror_create(mirror_vm, sev_vm);

- sev_migrate_from(dst_mirror_vm, mirror_vm);
- sev_migrate_from(dst_vm, sev_vm);
+ vm_migrate_from(dst_mirror_vm, mirror_vm);
+ vm_migrate_from(dst_vm, sev_vm);

kvm_vm_free(mirror_vm);
kvm_vm_free(dst_mirror_vm);
--
2.41.0.640.ga95def55d0-goog


2023-08-08 01:44:08

by Ackerley Tng

Subject: [RFC PATCH 11/11] KVM: selftests: Add tests for migration of private mem

Add tests showing that private mem (in guest_mem files) can be
migrated; the tests also demonstrate the migration flow.

Signed-off-by: Ackerley Tng <[email protected]>
---
.../kvm/x86_64/private_mem_migrate_tests.c | 54 ++++++++++---------
1 file changed, 30 insertions(+), 24 deletions(-)

diff --git a/tools/testing/selftests/kvm/x86_64/private_mem_migrate_tests.c b/tools/testing/selftests/kvm/x86_64/private_mem_migrate_tests.c
index 4226de3ebd41..2691497cf207 100644
--- a/tools/testing/selftests/kvm/x86_64/private_mem_migrate_tests.c
+++ b/tools/testing/selftests/kvm/x86_64/private_mem_migrate_tests.c
@@ -5,28 +5,28 @@
#include <linux/kvm.h>
#include <linux/sizes.h>

-#define TRANSFER_PRIVATE_MEM_TEST_SLOT 10
-#define TRANSFER_PRIVATE_MEM_GPA ((uint64_t)(1ull << 32))
-#define TRANSFER_PRIVATE_MEM_GVA TRANSFER_PRIVATE_MEM_GPA
-#define TRANSFER_PRIVATE_MEM_VALUE 0xdeadbeef
+#define MIGRATE_PRIVATE_MEM_TEST_SLOT 10
+#define MIGRATE_PRIVATE_MEM_GPA ((uint64_t)(1ull << 32))
+#define MIGRATE_PRIVATE_MEM_GVA MIGRATE_PRIVATE_MEM_GPA
+#define MIGRATE_PRIVATE_MEM_VALUE 0xdeadbeef

-static void transfer_private_mem_guest_code_src(void)
+static void migrate_private_mem_data_guest_code_src(void)
{
- uint64_t volatile *const ptr = (uint64_t *)TRANSFER_PRIVATE_MEM_GVA;
+ uint64_t volatile *const ptr = (uint64_t *)MIGRATE_PRIVATE_MEM_GVA;

- *ptr = TRANSFER_PRIVATE_MEM_VALUE;
+ *ptr = MIGRATE_PRIVATE_MEM_VALUE;

GUEST_SYNC1(*ptr);
}

-static void transfer_private_mem_guest_code_dst(void)
+static void migrate_private_mem_guest_code_dst(void)
{
- uint64_t volatile *const ptr = (uint64_t *)TRANSFER_PRIVATE_MEM_GVA;
+ uint64_t volatile *const ptr = (uint64_t *)MIGRATE_PRIVATE_MEM_GVA;

GUEST_SYNC1(*ptr);
}

-static void test_transfer_private_mem(void)
+static void test_migrate_private_mem_data(bool migrate)
{
struct kvm_vm *src_vm, *dst_vm;
struct kvm_vcpu *src_vcpu, *dst_vcpu;
@@ -40,40 +40,43 @@ static void test_transfer_private_mem(void)

/* Build the source VM, use it to write to private memory */
src_vm = __vm_create_shape_with_one_vcpu(
- shape, &src_vcpu, 0, transfer_private_mem_guest_code_src);
+ shape, &src_vcpu, 0, migrate_private_mem_data_guest_code_src);
src_memfd = vm_create_guest_memfd(src_vm, SZ_4K, 0);

- vm_mem_add(src_vm, DEFAULT_VM_MEM_SRC, TRANSFER_PRIVATE_MEM_GPA,
- TRANSFER_PRIVATE_MEM_TEST_SLOT, 1, KVM_MEM_PRIVATE,
+ vm_mem_add(src_vm, DEFAULT_VM_MEM_SRC, MIGRATE_PRIVATE_MEM_GPA,
+ MIGRATE_PRIVATE_MEM_TEST_SLOT, 1, KVM_MEM_PRIVATE,
src_memfd, 0);

- virt_map(src_vm, TRANSFER_PRIVATE_MEM_GVA, TRANSFER_PRIVATE_MEM_GPA, 1);
- vm_set_memory_attributes(src_vm, TRANSFER_PRIVATE_MEM_GPA, SZ_4K,
+ virt_map(src_vm, MIGRATE_PRIVATE_MEM_GVA, MIGRATE_PRIVATE_MEM_GPA, 1);
+ vm_set_memory_attributes(src_vm, MIGRATE_PRIVATE_MEM_GPA, SZ_4K,
KVM_MEMORY_ATTRIBUTE_PRIVATE);

vcpu_run(src_vcpu);
TEST_ASSERT_KVM_EXIT_REASON(src_vcpu, KVM_EXIT_IO);
get_ucall(src_vcpu, &uc);
- TEST_ASSERT(uc.args[0] == TRANSFER_PRIVATE_MEM_VALUE,
+ TEST_ASSERT(uc.args[0] == MIGRATE_PRIVATE_MEM_VALUE,
"Source VM should be able to write to private memory");

/* Build the destination VM with linked fd */
dst_vm = __vm_create_shape_with_one_vcpu(
- shape, &dst_vcpu, 0, transfer_private_mem_guest_code_dst);
+ shape, &dst_vcpu, 0, migrate_private_mem_guest_code_dst);
dst_memfd = vm_link_guest_memfd(dst_vm, src_memfd, 0);

- vm_mem_add(dst_vm, DEFAULT_VM_MEM_SRC, TRANSFER_PRIVATE_MEM_GPA,
- TRANSFER_PRIVATE_MEM_TEST_SLOT, 1, KVM_MEM_PRIVATE,
+ vm_mem_add(dst_vm, DEFAULT_VM_MEM_SRC, MIGRATE_PRIVATE_MEM_GPA,
+ MIGRATE_PRIVATE_MEM_TEST_SLOT, 1, KVM_MEM_PRIVATE,
dst_memfd, 0);

- virt_map(dst_vm, TRANSFER_PRIVATE_MEM_GVA, TRANSFER_PRIVATE_MEM_GPA, 1);
- vm_set_memory_attributes(dst_vm, TRANSFER_PRIVATE_MEM_GPA, SZ_4K,
- KVM_MEMORY_ATTRIBUTE_PRIVATE);
+ virt_map(dst_vm, MIGRATE_PRIVATE_MEM_GVA, MIGRATE_PRIVATE_MEM_GPA, 1);
+ if (migrate)
+ vm_migrate_from(dst_vm, src_vm);
+ else
+ vm_set_memory_attributes(dst_vm, MIGRATE_PRIVATE_MEM_GPA, SZ_4K,
+ KVM_MEMORY_ATTRIBUTE_PRIVATE);

vcpu_run(dst_vcpu);
TEST_ASSERT_KVM_EXIT_REASON(dst_vcpu, KVM_EXIT_IO);
get_ucall(dst_vcpu, &uc);
- TEST_ASSERT(uc.args[0] == TRANSFER_PRIVATE_MEM_VALUE,
+ TEST_ASSERT(uc.args[0] == MIGRATE_PRIVATE_MEM_VALUE,
"Destination VM should be able to read value transferred");
}

@@ -81,7 +84,10 @@ int main(int argc, char *argv[])
{
TEST_REQUIRE(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_SW_PROTECTED_VM));

- test_transfer_private_mem();
+ test_migrate_private_mem_data(false);
+
+ if (kvm_check_cap(KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM))
+ test_migrate_private_mem_data(true);

return 0;
}
--
2.41.0.640.ga95def55d0-goog


2023-08-08 02:37:46

by Ackerley Tng

Subject: [RFC PATCH 01/11] KVM: guest_mem: Refactor out kvm_gmem_alloc_file()

kvm_gmem_alloc_file() allocates and builds a file out of an inode. It
will be reused later by __kvm_gmem_link().

Signed-off-by: Ackerley Tng <[email protected]>
---
virt/kvm/guest_mem.c | 53 ++++++++++++++++++++++++++------------------
1 file changed, 32 insertions(+), 21 deletions(-)

diff --git a/virt/kvm/guest_mem.c b/virt/kvm/guest_mem.c
index 3a3e38151b45..30d0ab8745ee 100644
--- a/virt/kvm/guest_mem.c
+++ b/virt/kvm/guest_mem.c
@@ -365,12 +365,42 @@ static const struct inode_operations kvm_gmem_iops = {
.setattr = kvm_gmem_setattr,
};

+static struct file *kvm_gmem_alloc_file(struct kvm *kvm, struct inode *inode,
+ struct vfsmount *mnt)
+{
+ struct file *file;
+ struct kvm_gmem *gmem;
+
+ gmem = kzalloc(sizeof(*gmem), GFP_KERNEL);
+ if (!gmem)
+ return ERR_PTR(-ENOMEM);
+
+ file = alloc_file_pseudo(inode, mnt, "kvm-gmem", O_RDWR, &kvm_gmem_fops);
+ if (IS_ERR(file))
+ goto err;
+
+ file->f_flags |= O_LARGEFILE;
+ file->f_mapping = inode->i_mapping;
+
+ kvm_get_kvm(kvm);
+ gmem->kvm = kvm;
+ xa_init(&gmem->bindings);
+
+ file->private_data = gmem;
+
+ list_add(&gmem->entry, &inode->i_mapping->private_list);
+
+ return file;
+err:
+ kfree(gmem);
+ return file;
+}
+
static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags,
struct vfsmount *mnt)
{
const char *anon_name = "[kvm-gmem]";
const struct qstr qname = QSTR_INIT(anon_name, strlen(anon_name));
- struct kvm_gmem *gmem;
struct inode *inode;
struct file *file;
int fd, err;
@@ -399,34 +429,15 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags,
goto err_inode;
}

- file = alloc_file_pseudo(inode, mnt, "kvm-gmem", O_RDWR, &kvm_gmem_fops);
+ file = kvm_gmem_alloc_file(kvm, inode, mnt);
if (IS_ERR(file)) {
err = PTR_ERR(file);
goto err_fd;
}

- file->f_flags |= O_LARGEFILE;
- file->f_mapping = inode->i_mapping;
-
- gmem = kzalloc(sizeof(*gmem), GFP_KERNEL);
- if (!gmem) {
- err = -ENOMEM;
- goto err_file;
- }
-
- kvm_get_kvm(kvm);
- gmem->kvm = kvm;
- xa_init(&gmem->bindings);
-
- file->private_data = gmem;
-
- list_add(&gmem->entry, &inode->i_mapping->private_list);
-
fd_install(fd, file);
return fd;

-err_file:
- fput(file);
err_fd:
put_unused_fd(fd);
err_inode:
--
2.41.0.640.ga95def55d0-goog


2023-08-10 14:26:30

by Paolo Bonzini

Subject: Re: [RFC PATCH 08/11] KVM: x86: Let moving encryption context be configurable

On 8/8/23 01:01, Ackerley Tng wrote:
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 75d48379d94d..a1a28dd77b94 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -6351,7 +6351,14 @@ static int kvm_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd)
> if (r)
> goto out_mark_migration_done;
>
> - r = static_call(kvm_x86_vm_move_enc_context_from)(kvm, source_kvm);
> + /*
> + * Different types of VMs will allow userspace to define if moving
> + * encryption context should be supported.
> + */
> + if (kvm->arch.vm_move_enc_ctxt_supported &&
> + kvm_x86_ops.vm_move_enc_context_from) {
> + r = static_call(kvm_x86_vm_move_enc_context_from)(kvm, source_kvm);
> + }

Rather than "supported" this is more "required". So perhaps
kvm->arch.use_vm_enc_ctxt_op?

Paolo


2023-08-19 21:41:16

by Ackerley Tng

Subject: Re: [RFC PATCH 08/11] KVM: x86: Let moving encryption context be configurable

Paolo Bonzini <[email protected]> writes:

> On 8/8/23 01:01, Ackerley Tng wrote:
>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>> index 75d48379d94d..a1a28dd77b94 100644
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -6351,7 +6351,14 @@ static int kvm_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd)
>> if (r)
>> goto out_mark_migration_done;
>>
>> - r = static_call(kvm_x86_vm_move_enc_context_from)(kvm, source_kvm);
>> + /*
>> + * Different types of VMs will allow userspace to define if moving
>> + * encryption context should be supported.
>> + */
>> + if (kvm->arch.vm_move_enc_ctxt_supported &&
>> + kvm_x86_ops.vm_move_enc_context_from) {
>> + r = static_call(kvm_x86_vm_move_enc_context_from)(kvm, source_kvm);
>> + }
>
> Rather than "supported" this is more "required". So perhaps
> kvm->arch.use_vm_enc_ctxt_op?
>
> Paolo

Thanks, that is a great suggestion, I'll incorporate this in the next
revision!