The idea that no parameter would ever be necessary when enabling SEV or
SEV-ES for a VM was decidedly optimistic. The first source of variability
that was encountered is the desired set of VMSA features, as that affects
the measurement of the VM's initial state and cannot be changed
arbitrarily by the hypervisor.
This series adds all the APIs that are needed to customize the features,
with room for future enhancements:
- a new /dev/kvm device attribute to retrieve the set of supported
features (right now, only debug swap)
- a new sub-operation for KVM_MEMORY_ENCRYPT_OP that can take a struct,
replacing the existing KVM_SEV_INIT and KVM_SEV_ES_INIT
It then puts the new op to work by including the VMSA features as a field
of the struct. The existing KVM_SEV_INIT and KVM_SEV_ES_INIT use the full
set of supported VMSA features for backwards compatibility; but I am
considering also making them use zero as the feature mask, and will gladly
adjust the patches if so requested.
In order to avoid creating *two* new KVM_MEMORY_ENCRYPT_OP sub-operations,
I decided that I might as well make SEV and SEV-ES use VM types. And then,
why not make
a SEV-ES VM, when created with the new VM type instead of KVM_SEV_ES_INIT,
reject KVM_GET_REGS/KVM_SET_REGS and friends on the vCPU file descriptor
once the VMSA has been encrypted... Which is how the API should have
always behaved.
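For concreteness, here is a rough userspace sketch of the intended flow.
It is not part of the series; it assumes the uAPI names introduced in
patches 12 to 14 are available in the installed headers, and it omits
most error handling:

  #include <err.h>
  #include <fcntl.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  /* Inside main(); rough sketch only. */
  int kvm_fd = open("/dev/kvm", O_RDWR);
  int sev_fd = open("/dev/sev", O_RDWR);
  int vm_fd  = ioctl(kvm_fd, KVM_CREATE_VM, KVM_X86_SEV_ES_VM);

  struct kvm_sev_init init = { .vmsa_features = 0 };
  struct kvm_sev_cmd cmd = {
          .id     = KVM_SEV_INIT2,
          .data   = (__u64)(unsigned long)&init,
          .sev_fd = sev_fd,
  };

  if (ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd) < 0)
          err(1, "KVM_SEV_INIT2 (fw error %u)", cmd.error);

  /* After KVM_SEV_LAUNCH_UPDATE_VMSA encrypts the VMSA, KVM_GET_REGS,
   * KVM_SET_REGS and friends fail with EINVAL on this VM's vCPUs. */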
The series is structured as follows:
- patches 1 to 5 are unrelated fixes and improvements for the SEV code
and documentation. In particular they change sev.c so that it is
compiled only if SEV is enabled in kconfig
- patches 6 to 8 introduce the new device attribute to retrieve supported
VMSA features
- patch 9 disables DEBUG_SWAP by default
- patches 10 and 11 introduce new infrastructure for VM types, replacing
the similar code in the TDX patches
- patches 12 to 14 introduce the new VM types for SEV and
  SEV-ES, and KVM_SEV_INIT2 as a new sub-operation for KVM_MEMORY_ENCRYPT_OP.
- patch 15 tests the new ioctl.
The idea is that SEV SNP will only ever support KVM_SEV_INIT2. I have
patches in progress for QEMU to support this new API.
Thanks,
Paolo
v2->v3:
- use u64_to_user_ptr()
- Compile sev.c if and only if CONFIG_KVM_AMD_SEV=y
- remove double signoffs
- rebase on top of kvm-x86/next
- no bit masking hacks; store CoCo features in kvm_arch
- store supported VM types in kvm_caps
- introduce to_kvm_sev_info
- clearer output messages for failed assertions
- remove broken test
Paolo Bonzini (14):
KVM: SEV: fix compat ABI for KVM_MEMORY_ENCRYPT_OP
KVM: x86: use u64_to_user_ptr()
KVM: SVM: Compile sev.c if and only if CONFIG_KVM_AMD_SEV=y
Documentation: kvm/sev: separate description of firmware
KVM: introduce new vendor op for KVM_GET_DEVICE_ATTR
KVM: SEV: publish supported VMSA features
KVM: SEV: store VMSA features in kvm_sev_info
KVM: SEV: disable DEBUG_SWAP by default
KVM: x86: add fields to struct kvm_arch for CoCo features
KVM: x86: Add supported_vm_types to kvm_caps
KVM: SEV: introduce to_kvm_sev_info
KVM: SEV: define VM types for SEV and SEV-ES
KVM: SEV: introduce KVM_SEV_INIT2 operation
selftests: kvm: add tests for KVM_SEV_INIT2
Sean Christopherson (1):
KVM: SVM: Invert handling of SEV and SEV_ES feature flags
Documentation/virt/kvm/api.rst | 2 +
.../virt/kvm/x86/amd-memory-encryption.rst | 81 ++++++--
arch/x86/include/asm/kvm-x86-ops.h | 1 +
arch/x86/include/asm/kvm_host.h | 8 +-
arch/x86/include/uapi/asm/kvm.h | 35 ++++
arch/x86/kvm/Makefile | 7 +-
arch/x86/kvm/cpuid.c | 2 +-
arch/x86/kvm/svm/sev.c | 133 +++++++++----
arch/x86/kvm/svm/svm.c | 15 +-
arch/x86/kvm/svm/svm.h | 37 +++-
arch/x86/kvm/x86.c | 174 ++++++++++++------
arch/x86/kvm/x86.h | 2 +
tools/testing/selftests/kvm/Makefile | 1 +
.../selftests/kvm/include/kvm_util_base.h | 6 +-
.../selftests/kvm/set_memory_region_test.c | 8 +-
.../selftests/kvm/x86_64/sev_init2_tests.c | 149 +++++++++++++++
16 files changed, 527 insertions(+), 134 deletions(-)
create mode 100644 tools/testing/selftests/kvm/x86_64/sev_init2_tests.c
--
2.39.1
There is no danger to the kernel if userspace provides a 64-bit value that
has the high bits set but, for whatever reason, happens to resolve to an
address that has something mapped there: kvm_x86_dev_get_attr() and the TSC
attribute accessors all use the checked put_user()/get_user() helpers, so
such an access simply fails with -EFAULT.
Suggested-by: Sean Christopherson <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
---
arch/x86/kvm/x86.c | 24 +++---------------------
1 file changed, 3 insertions(+), 21 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index f3f7405e0628..14c969782d73 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4791,25 +4791,13 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
return r;
}
-static inline void __user *kvm_get_attr_addr(struct kvm_device_attr *attr)
-{
- void __user *uaddr = (void __user*)(unsigned long)attr->addr;
-
- if ((u64)(unsigned long)uaddr != attr->addr)
- return ERR_PTR_USR(-EFAULT);
- return uaddr;
-}
-
static int kvm_x86_dev_get_attr(struct kvm_device_attr *attr)
{
- u64 __user *uaddr = kvm_get_attr_addr(attr);
+ u64 __user *uaddr = u64_to_user_ptr(attr->addr);
if (attr->group)
return -ENXIO;
- if (IS_ERR(uaddr))
- return PTR_ERR(uaddr);
-
switch (attr->attr) {
case KVM_X86_XCOMP_GUEST_SUPP:
if (put_user(kvm_caps.supported_xcr0, uaddr))
@@ -5664,12 +5652,9 @@ static int kvm_arch_tsc_has_attr(struct kvm_vcpu *vcpu,
static int kvm_arch_tsc_get_attr(struct kvm_vcpu *vcpu,
struct kvm_device_attr *attr)
{
- u64 __user *uaddr = kvm_get_attr_addr(attr);
+ u64 __user *uaddr = u64_to_user_ptr(attr->addr);
int r;
- if (IS_ERR(uaddr))
- return PTR_ERR(uaddr);
-
switch (attr->attr) {
case KVM_VCPU_TSC_OFFSET:
r = -EFAULT;
@@ -5687,13 +5672,10 @@ static int kvm_arch_tsc_get_attr(struct kvm_vcpu *vcpu,
static int kvm_arch_tsc_set_attr(struct kvm_vcpu *vcpu,
struct kvm_device_attr *attr)
{
- u64 __user *uaddr = kvm_get_attr_addr(attr);
+ u64 __user *uaddr = u64_to_user_ptr(attr->addr);
struct kvm *kvm = vcpu->kvm;
int r;
- if (IS_ERR(uaddr))
- return PTR_ERR(uaddr);
-
switch (attr->attr) {
case KVM_VCPU_TSC_OFFSET: {
u64 offset, tsc, ns;
--
2.39.1
The description of firmware is split between the "SEV Key Management"
header and the KVM_SEV_INIT ioctl. Put these two bits together and
rename "SEV Key Management" to what it actually is, namely a description
of the KVM_MEMORY_ENCRYPT_OP API.
Reviewed-by: Michael Roth <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
---
.../virt/kvm/x86/amd-memory-encryption.rst | 29 +++++++++++--------
1 file changed, 17 insertions(+), 12 deletions(-)
diff --git a/Documentation/virt/kvm/x86/amd-memory-encryption.rst b/Documentation/virt/kvm/x86/amd-memory-encryption.rst
index 995780088eb2..37c5c37f4f6e 100644
--- a/Documentation/virt/kvm/x86/amd-memory-encryption.rst
+++ b/Documentation/virt/kvm/x86/amd-memory-encryption.rst
@@ -46,14 +46,8 @@ SEV hardware uses ASIDs to associate a memory encryption key with a VM.
Hence, the ASID for the SEV-enabled guests must be from 1 to a maximum value
defined in the CPUID 0x8000001f[ecx] field.
-SEV Key Management
-==================
-
-The SEV guest key management is handled by a separate processor called the AMD
-Secure Processor (AMD-SP). Firmware running inside the AMD-SP provides a secure
-key management interface to perform common hypervisor activities such as
-encrypting bootstrap code, snapshot, migrating and debugging the guest. For more
-information, see the SEV Key Management spec [api-spec]_
+``KVM_MEMORY_ENCRYPT_OP`` API
+=============================
The main ioctl to access SEV is KVM_MEMORY_ENCRYPT_OP. If the argument
to KVM_MEMORY_ENCRYPT_OP is NULL, the ioctl returns 0 if SEV is enabled
@@ -87,10 +81,6 @@ guests, such as launching, running, snapshotting, migrating and decommissioning.
The KVM_SEV_INIT command is used by the hypervisor to initialize the SEV platform
context. In a typical workflow, this command should be the first command issued.
-The firmware can be initialized either by using its own non-volatile storage or
-the OS can manage the NV storage for the firmware using the module parameter
-``init_ex_path``. If the file specified by ``init_ex_path`` does not exist or
-is invalid, the OS will create or override the file with output from PSP.
Returns: 0 on success, -negative on error
@@ -434,6 +424,21 @@ issued by the hypervisor to make the guest ready for execution.
Returns: 0 on success, -negative on error
+Firmware Management
+===================
+
+The SEV guest key management is handled by a separate processor called the AMD
+Secure Processor (AMD-SP). Firmware running inside the AMD-SP provides a secure
+key management interface to perform common hypervisor activities such as
+encrypting bootstrap code, snapshot, migrating and debugging the guest. For more
+information, see the SEV Key Management spec [api-spec]_
+
+The AMD-SP firmware can be initialized either by using its own non-volatile
+storage or the OS can manage the NV storage for the firmware using
+parameter ``init_ex_path`` of the ``ccp`` module. If the file specified
+by ``init_ex_path`` does not exist or is invalid, the OS will create or
+override the file with PSP non-volatile storage.
+
References
==========
--
2.39.1
This simplifies the implementation of KVM_CHECK_EXTENSION(KVM_CAP_VM_TYPES),
and also allows the vendor module to specify which VM types are supported.
Suggested-by: Sean Christopherson <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
---
arch/x86/kvm/x86.c | 12 ++++++------
arch/x86/kvm/x86.h | 2 ++
2 files changed, 8 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9d91eb1b3080..3b87e65904ae 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -94,6 +94,7 @@
struct kvm_caps kvm_caps __read_mostly = {
.supported_mce_cap = MCG_CTL_P | MCG_SER_P,
+ .supported_vm_types = BIT(KVM_X86_DEFAULT_VM),
};
EXPORT_SYMBOL_GPL(kvm_caps);
@@ -4578,9 +4579,7 @@ static int kvm_ioctl_get_supported_hv_cpuid(struct kvm_vcpu *vcpu,
static bool kvm_is_vm_type_supported(unsigned long type)
{
- return type == KVM_X86_DEFAULT_VM ||
- (type == KVM_X86_SW_PROTECTED_VM &&
- IS_ENABLED(CONFIG_KVM_SW_PROTECTED_VM) && tdp_mmu_enabled);
+ return type < 32 && (kvm_caps.supported_vm_types & BIT(type));
}
int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
@@ -4781,9 +4780,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
r = kvm_caps.has_notify_vmexit;
break;
case KVM_CAP_VM_TYPES:
- r = BIT(KVM_X86_DEFAULT_VM);
- if (kvm_is_vm_type_supported(KVM_X86_SW_PROTECTED_VM))
- r |= BIT(KVM_X86_SW_PROTECTED_VM);
+ r = kvm_caps.supported_vm_types;
break;
default:
break;
@@ -9790,6 +9787,9 @@ int kvm_x86_vendor_init(struct kvm_x86_init_ops *ops)
kvm_register_perf_callbacks(ops->handle_intel_pt_intr);
+ if (IS_ENABLED(CONFIG_KVM_SW_PROTECTED_VM) && tdp_mmu_enabled)
+ kvm_caps.supported_vm_types |= BIT(KVM_X86_SW_PROTECTED_VM);
+
if (!kvm_cpu_cap_has(X86_FEATURE_XSAVES))
kvm_caps.supported_xss = 0;
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 2f7e19166658..997017d35f3a 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -24,6 +24,8 @@ struct kvm_caps {
bool has_bus_lock_exit;
/* notify VM exit supported? */
bool has_notify_vmexit;
+ /* bit mask of VM types */
+ u32 supported_vm_types;
u64 supported_mce_cap;
u64 supported_xcr0;
--
2.39.1
Suggested-by: Sean Christopherson <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
---
arch/x86/kvm/svm/sev.c | 4 ++--
arch/x86/kvm/svm/svm.h | 5 +++++
2 files changed, 7 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 2db0b2b36120..2549a539a686 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -93,7 +93,7 @@ static int sev_flush_asids(unsigned int min_asid, unsigned int max_asid)
static inline bool is_mirroring_enc_context(struct kvm *kvm)
{
- return !!to_kvm_svm(kvm)->sev_info.enc_context_owner;
+ return !!to_kvm_sev_info(kvm)->enc_context_owner;
}
static bool sev_vcpu_has_debug_swap(struct vcpu_svm *svm)
@@ -644,7 +644,7 @@ static int __sev_launch_update_vmsa(struct kvm *kvm, struct kvm_vcpu *vcpu,
clflush_cache_range(svm->sev_es.vmsa, PAGE_SIZE);
vmsa.reserved = 0;
- vmsa.handle = to_kvm_svm(kvm)->sev_info.handle;
+ vmsa.handle = to_kvm_sev_info(kvm)->handle;
vmsa.address = __sme_pa(svm->sev_es.vmsa);
vmsa.len = PAGE_SIZE;
ret = sev_issue_cmd(kvm, SEV_CMD_LAUNCH_UPDATE_VMSA, &vmsa, error);
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index d6147ad18571..ebf2160bf0c6 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -319,6 +319,11 @@ static __always_inline struct kvm_svm *to_kvm_svm(struct kvm *kvm)
return container_of(kvm, struct kvm_svm, kvm);
}
+static __always_inline struct kvm_sev_info *to_kvm_sev_info(struct kvm *kvm)
+{
+ return &to_kvm_svm(kvm)->sev_info;
+}
+
static __always_inline bool sev_guest(struct kvm *kvm)
{
#ifdef CONFIG_KVM_AMD_SEV
--
2.39.1
The idea that no parameter would ever be necessary when enabling SEV or
SEV-ES for a VM was decidedly optimistic. In fact, in some sense it's
already a parameter whether SEV or SEV-ES is desired. Another possible
source of variability is the desired set of VMSA features, as that affects
the measurement of the VM's initial state and cannot be changed
arbitrarily by the hypervisor.
Create a new sub-operation for KVM_MEMORY_ENCRYPT_OP that can take a struct,
and put the new op to work by including the VMSA features as a field of the
struct. The existing KVM_SEV_INIT and KVM_SEV_ES_INIT use the full set of
supported VMSA features for backwards compatibility.
The struct also includes the usual bells and whistles for future
extensibility: a flags field that must be zero for now, and some padding
at the end.
Signed-off-by: Paolo Bonzini <[email protected]>
---
.../virt/kvm/x86/amd-memory-encryption.rst | 40 +++++++++++++--
arch/x86/include/uapi/asm/kvm.h | 9 ++++
arch/x86/kvm/svm/sev.c | 50 +++++++++++++++++--
3 files changed, 92 insertions(+), 7 deletions(-)
diff --git a/Documentation/virt/kvm/x86/amd-memory-encryption.rst b/Documentation/virt/kvm/x86/amd-memory-encryption.rst
index 5ed11bc16b96..b951d82af26c 100644
--- a/Documentation/virt/kvm/x86/amd-memory-encryption.rst
+++ b/Documentation/virt/kvm/x86/amd-memory-encryption.rst
@@ -75,15 +75,49 @@ are defined in ``<linux/psp-dev.h>``.
KVM implements the following commands to support common lifecycle events of SEV
guests, such as launching, running, snapshotting, migrating and decommissioning.
-1. KVM_SEV_INIT
----------------
+1. KVM_SEV_INIT2
+----------------
-The KVM_SEV_INIT command is used by the hypervisor to initialize the SEV platform
+The KVM_SEV_INIT2 command is used by the hypervisor to initialize the SEV platform
context. In a typical workflow, this command should be the first command issued.
+For this command to be accepted, either KVM_X86_SEV_VM or KVM_X86_SEV_ES_VM
+must have been passed to the KVM_CREATE_VM ioctl. A virtual machine created
+with those machine types in turn cannot be run until KVM_SEV_INIT2 is invoked.
+
+Parameters: struct kvm_sev_init (in)
Returns: 0 on success, -negative on error
+::
+
+ struct kvm_sev_init {
+ __u64 vmsa_features; /* initial value of features field in VMSA */
+ __u32 flags; /* must be 0 */
+ __u32 pad[9];
+ };
+
+It is an error if the hypervisor does not support any of the bits that
+are set in ``flags`` or ``vmsa_features``. ``vmsa_features`` must be
+0 for SEV virtual machines, as they do not have a VMSA.
+
+This command replaces the deprecated KVM_SEV_INIT and KVM_SEV_ES_INIT commands.
+The commands did not have any parameters (the ```data``` field was unused) and
+only work for the KVM_X86_DEFAULT_VM machine type (0).
+
+They behave as if:
+
+* the VM type is KVM_X86_SEV_VM for KVM_SEV_INIT, or KVM_X86_SEV_ES_VM for
+ KVM_SEV_ES_INIT
+
+* the ``flags`` and ``vmsa_features`` fields of ``struct kvm_sev_init`` are
+ set to zero
+
+If the ``KVM_X86_SEV_VMSA_FEATURES`` attribute does not exist, the hypervisor only
+supports KVM_SEV_INIT and KVM_SEV_ES_INIT. In that case, note that KVM_SEV_ES_INIT
+might set the debug swap VMSA feature (bit 5) depending on the value of the
+``debug_swap`` parameter of ``kvm-amd.ko``.
+
2. KVM_SEV_LAUNCH_START
-----------------------
diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index 9d950b0b64c9..51b13080ed4b 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -690,6 +690,9 @@ enum sev_cmd_id {
/* Guest Migration Extension */
KVM_SEV_SEND_CANCEL,
+ /* Second time is the charm; improved versions of the above ioctls. */
+ KVM_SEV_INIT2,
+
KVM_SEV_NR_MAX,
};
@@ -701,6 +704,12 @@ struct kvm_sev_cmd {
__u32 sev_fd;
};
+struct kvm_sev_init {
+ __u64 vmsa_features;
+ __u32 flags;
+ __u32 pad[9];
+};
+
struct kvm_sev_launch_start {
__u32 handle;
__u32 policy;
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 1248ccf433e8..909e67a9044b 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -239,23 +239,30 @@ static void sev_unbind_asid(struct kvm *kvm, unsigned int handle)
sev_decommission(handle);
}
-static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
+static int __sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp,
+ struct kvm_sev_init *data,
+ unsigned long vm_type)
{
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+ bool es_active = kvm->arch.has_protected_state;
+ u64 valid_vmsa_features = es_active ? sev_supported_vmsa_features : 0;
int ret;
if (kvm->created_vcpus)
return -EINVAL;
- if (kvm->arch.vm_type != KVM_X86_DEFAULT_VM)
+ if (data->flags)
+ return -EINVAL;
+
+ if (data->vmsa_features & ~valid_vmsa_features)
return -EINVAL;
if (unlikely(sev->active))
return -EINVAL;
sev->active = true;
- sev->es_active = argp->id == KVM_SEV_ES_INIT;
- sev->vmsa_features = 0;
+ sev->es_active = es_active;
+ sev->vmsa_features = data->vmsa_features;
ret = sev_asid_new(sev);
if (ret)
@@ -283,6 +290,38 @@ static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
return ret;
}
+static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+ struct kvm_sev_init data = {
+ .vmsa_features = 0,
+ };
+ unsigned long vm_type;
+
+ if (kvm->arch.vm_type != KVM_X86_DEFAULT_VM)
+ return -EINVAL;
+
+ vm_type = (argp->id == KVM_SEV_INIT ? KVM_X86_SEV_VM : KVM_X86_SEV_ES_VM);
+ return __sev_guest_init(kvm, argp, &data, vm_type);
+}
+
+static int sev_guest_init2(struct kvm *kvm, struct kvm_sev_cmd *argp)
+{
+ struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+ struct kvm_sev_init data;
+
+ if (!sev->need_init)
+ return -EINVAL;
+
+ if (kvm->arch.vm_type != KVM_X86_SEV_VM &&
+ kvm->arch.vm_type != KVM_X86_SEV_ES_VM)
+ return -EINVAL;
+
+ if (copy_from_user(&data, u64_to_user_ptr(argp->data), sizeof(data)))
+ return -EFAULT;
+
+ return __sev_guest_init(kvm, argp, &data, kvm->arch.vm_type);
+}
+
static int sev_bind_asid(struct kvm *kvm, unsigned int handle, int *error)
{
unsigned int asid = sev_get_asid(kvm);
@@ -1898,6 +1937,9 @@ int sev_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
case KVM_SEV_INIT:
r = sev_guest_init(kvm, &sev_cmd);
break;
+ case KVM_SEV_INIT2:
+ r = sev_guest_init2(kvm, &sev_cmd);
+ break;
case KVM_SEV_LAUNCH_START:
r = sev_launch_start(kvm, &sev_cmd);
break;
--
2.39.1
Signed-off-by: Paolo Bonzini <[email protected]>
---
tools/testing/selftests/kvm/Makefile | 1 +
.../selftests/kvm/include/kvm_util_base.h | 6 +-
.../selftests/kvm/set_memory_region_test.c | 8 +-
.../selftests/kvm/x86_64/sev_init2_tests.c | 149 ++++++++++++++++++
4 files changed, 156 insertions(+), 8 deletions(-)
create mode 100644 tools/testing/selftests/kvm/x86_64/sev_init2_tests.c
diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
index ce58098d80fd..0dcefabeae0c 100644
--- a/tools/testing/selftests/kvm/Makefile
+++ b/tools/testing/selftests/kvm/Makefile
@@ -118,6 +118,7 @@ TEST_GEN_PROGS_x86_64 += x86_64/tsc_msrs_test
TEST_GEN_PROGS_x86_64 += x86_64/vmx_pmu_caps_test
TEST_GEN_PROGS_x86_64 += x86_64/xen_shinfo_test
TEST_GEN_PROGS_x86_64 += x86_64/xen_vmcall_test
+TEST_GEN_PROGS_x86_64 += x86_64/sev_init2_tests
TEST_GEN_PROGS_x86_64 += x86_64/sev_migrate_tests
TEST_GEN_PROGS_x86_64 += x86_64/amx_test
TEST_GEN_PROGS_x86_64 += x86_64/max_vcpuid_cap_test
diff --git a/tools/testing/selftests/kvm/include/kvm_util_base.h b/tools/testing/selftests/kvm/include/kvm_util_base.h
index 070f250036fc..bb813cc9dcac 100644
--- a/tools/testing/selftests/kvm/include/kvm_util_base.h
+++ b/tools/testing/selftests/kvm/include/kvm_util_base.h
@@ -850,17 +850,15 @@ static inline struct kvm_vm *vm_create_barebones(void)
return ____vm_create(VM_SHAPE_DEFAULT);
}
-#ifdef __x86_64__
-static inline struct kvm_vm *vm_create_barebones_protected_vm(void)
+static inline struct kvm_vm *vm_create_barebones_type(unsigned long type)
{
const struct vm_shape shape = {
.mode = VM_MODE_DEFAULT,
- .type = KVM_X86_SW_PROTECTED_VM,
+ .type = type,
};
return ____vm_create(shape);
}
-#endif
static inline struct kvm_vm *vm_create(uint32_t nr_runnable_vcpus)
{
diff --git a/tools/testing/selftests/kvm/set_memory_region_test.c b/tools/testing/selftests/kvm/set_memory_region_test.c
index 06b43ed23580..904d58793fc6 100644
--- a/tools/testing/selftests/kvm/set_memory_region_test.c
+++ b/tools/testing/selftests/kvm/set_memory_region_test.c
@@ -339,7 +339,7 @@ static void test_invalid_memory_region_flags(void)
#ifdef __x86_64__
if (kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_SW_PROTECTED_VM))
- vm = vm_create_barebones_protected_vm();
+ vm = vm_create_barebones_type(KVM_X86_SW_PROTECTED_VM);
else
#endif
vm = vm_create_barebones();
@@ -462,7 +462,7 @@ static void test_add_private_memory_region(void)
pr_info("Testing ADD of KVM_MEM_GUEST_MEMFD memory regions\n");
- vm = vm_create_barebones_protected_vm();
+ vm = vm_create_barebones_type(KVM_X86_SW_PROTECTED_VM);
test_invalid_guest_memfd(vm, vm->kvm_fd, 0, "KVM fd should fail");
test_invalid_guest_memfd(vm, vm->fd, 0, "VM's fd should fail");
@@ -471,7 +471,7 @@ static void test_add_private_memory_region(void)
test_invalid_guest_memfd(vm, memfd, 0, "Regular memfd() should fail");
close(memfd);
- vm2 = vm_create_barebones_protected_vm();
+ vm2 = vm_create_barebones_type(KVM_X86_SW_PROTECTED_VM);
memfd = vm_create_guest_memfd(vm2, MEM_REGION_SIZE, 0);
test_invalid_guest_memfd(vm, memfd, 0, "Other VM's guest_memfd() should fail");
@@ -499,7 +499,7 @@ static void test_add_overlapping_private_memory_regions(void)
pr_info("Testing ADD of overlapping KVM_MEM_GUEST_MEMFD memory regions\n");
- vm = vm_create_barebones_protected_vm();
+ vm = vm_create_barebones_type(KVM_X86_SW_PROTECTED_VM);
memfd = vm_create_guest_memfd(vm, MEM_REGION_SIZE * 4, 0);
diff --git a/tools/testing/selftests/kvm/x86_64/sev_init2_tests.c b/tools/testing/selftests/kvm/x86_64/sev_init2_tests.c
new file mode 100644
index 000000000000..fe55aa5a1b04
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86_64/sev_init2_tests.c
@@ -0,0 +1,149 @@
+// SPDX-License-Identifier: GPL-2.0-only
+#include <linux/kvm.h>
+#include <linux/psp-sev.h>
+#include <stdio.h>
+#include <sys/ioctl.h>
+#include <stdlib.h>
+#include <errno.h>
+#include <pthread.h>
+
+#include "test_util.h"
+#include "kvm_util.h"
+#include "processor.h"
+#include "svm_util.h"
+#include "kselftest.h"
+
+#define SVM_SEV_FEAT_DEBUG_SWAP 32u
+
+/*
+ * Some features may have hidden dependencies, or may only work
+ * for certain VM types. Err on the side of safety and don't
+ * expect that all supported features can be passed one by one
+ * to KVM_SEV_INIT2.
+ *
+ * (Well, right now there's only one...)
+ */
+#define KNOWN_FEATURES SVM_SEV_FEAT_DEBUG_SWAP
+
+int kvm_fd;
+u64 supported_vmsa_features;
+bool have_sev_es;
+
+static int __sev_ioctl(int vm_fd, int cmd_id, void *data)
+{
+ struct kvm_sev_cmd cmd = {
+ .id = cmd_id,
+ .data = (uint64_t)data,
+ .sev_fd = open_sev_dev_path_or_exit(),
+ };
+ int ret;
+
+ ret = ioctl(vm_fd, KVM_MEMORY_ENCRYPT_OP, &cmd);
+ TEST_ASSERT(ret < 0 || cmd.error == SEV_RET_SUCCESS,
+ "%d failed: fw error: %d\n",
+ cmd_id, cmd.error);
+
+ return ret;
+}
+
+static void test_init2(unsigned long vm_type, struct kvm_sev_init *init)
+{
+ struct kvm_vm *vm;
+ int ret;
+
+ vm = vm_create_barebones_type(vm_type);
+ ret = __sev_ioctl(vm->fd, KVM_SEV_INIT2, init);
+ TEST_ASSERT(ret == 0,
+ "KVM_SEV_INIT2 return code is %d (expected 0), errno: %d",
+ ret, errno);
+ kvm_vm_free(vm);
+}
+
+static void test_init2_invalid(unsigned long vm_type, struct kvm_sev_init *init, const char *msg)
+{
+ struct kvm_vm *vm;
+ int ret;
+
+ vm = vm_create_barebones_type(vm_type);
+ ret = __sev_ioctl(vm->fd, KVM_SEV_INIT2, init);
+ TEST_ASSERT(ret == -1 && errno == EINVAL,
+ "KVM_SEV_INIT2 should fail, %s.",
+ msg);
+ kvm_vm_free(vm);
+}
+
+void test_vm_types(void)
+{
+ test_init2(KVM_X86_SEV_VM, &(struct kvm_sev_init){});
+
+ /*
+ * TODO: check that unsupported types cannot be created. Probably
+ * a separate selftest.
+ */
+ if (have_sev_es)
+ test_init2(KVM_X86_SEV_ES_VM, &(struct kvm_sev_init){});
+
+ test_init2_invalid(0, &(struct kvm_sev_init){},
+ "VM type is KVM_X86_DEFAULT_VM");
+ if (kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_SW_PROTECTED_VM))
+ test_init2_invalid(KVM_X86_SW_PROTECTED_VM, &(struct kvm_sev_init){},
+ "VM type is KVM_X86_SW_PROTECTED_VM");
+}
+
+void test_flags(uint32_t vm_type)
+{
+ int i;
+
+ for (i = 0; i < 32; i++)
+ test_init2_invalid(vm_type,
+ &(struct kvm_sev_init){ .flags = BIT(i) },
+ "invalid flag");
+}
+
+void test_features(uint32_t vm_type, uint64_t supported_features)
+{
+ int i;
+
+ for (i = 0; i < 64; i++) {
+ if (!(supported_features & BIT_ULL(i)))
+ test_init2_invalid(vm_type,
+ &(struct kvm_sev_init){ .vmsa_features = BIT_ULL(i) },
+ "unknown feature");
+ else if (KNOWN_FEATURES & BIT_ULL(i))
+ test_init2(vm_type,
+ &(struct kvm_sev_init){ .vmsa_features = BIT_ULL(i) });
+ }
+}
+
+int main(int argc, char *argv[])
+{
+ int kvm_fd = open_kvm_dev_path_or_exit();
+ bool have_sev;
+
+ TEST_REQUIRE(__kvm_has_device_attr(kvm_fd, 0, KVM_X86_SEV_VMSA_FEATURES) == 0);
+ kvm_device_attr_get(kvm_fd, 0, KVM_X86_SEV_VMSA_FEATURES, &supported_vmsa_features);
+
+ have_sev = kvm_cpu_has(X86_FEATURE_SEV);
+ TEST_ASSERT(have_sev == !!(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_SEV_VM)),
+ "sev: KVM_CAP_VM_TYPES (%x) does not match cpuid (checking %x)",
+ kvm_check_cap(KVM_CAP_VM_TYPES), 1 << KVM_X86_SEV_VM);
+
+ TEST_REQUIRE(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_SEV_VM));
+ have_sev_es = kvm_cpu_has(X86_FEATURE_SEV_ES);
+
+ TEST_ASSERT(have_sev_es == !!(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_SEV_ES_VM)),
+ "sev-es: KVM_CAP_VM_TYPES (%x) does not match cpuid (checking %x)",
+ kvm_check_cap(KVM_CAP_VM_TYPES), 1 << KVM_X86_SEV_ES_VM);
+
+ test_vm_types();
+
+ test_flags(KVM_X86_SEV_VM);
+ if (have_sev_es)
+ test_flags(KVM_X86_SEV_ES_VM);
+
+ test_features(KVM_X86_SEV_VM, 0);
+ if (have_sev_es)
+ test_features(KVM_X86_SEV_ES_VM, supported_vmsa_features);
+
+ return 0;
+}
--
2.39.1
Allow vendor modules to provide their own attributes on /dev/kvm.
To avoid proliferation of vendor ops, implement KVM_HAS_DEVICE_ATTR
and KVM_GET_DEVICE_ATTR in terms of the same function. You're not
supposed to use KVM_GET_DEVICE_ATTR to do complicated computations,
especially on /dev/kvm.
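For illustration, a minimal sketch of the intended userspace side, using
only the generic KVM_HAS_DEVICE_ATTR/KVM_GET_DEVICE_ATTR ioctls on the
/dev/kvm file descriptor and assuming the KVM_X86_SEV_VMSA_FEATURES
attribute that is added later in this series:

  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  /* Sketch: kvm_fd is an open fd for /dev/kvm. */
  __u64 vmsa_features = 0;
  struct kvm_device_attr attr = {
          .group = 0,
          .attr  = KVM_X86_SEV_VMSA_FEATURES,  /* added later in this series */
          .addr  = (__u64)(unsigned long)&vmsa_features,
  };

  if (ioctl(kvm_fd, KVM_HAS_DEVICE_ATTR, &attr) == 0 &&
      ioctl(kvm_fd, KVM_GET_DEVICE_ATTR, &attr) == 0)
          printf("supported VMSA features: %#llx\n",
                 (unsigned long long)vmsa_features);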
Reviewed-by: Michael Roth <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
---
arch/x86/include/asm/kvm-x86-ops.h | 1 +
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/x86.c | 43 ++++++++++++++++++++----------
3 files changed, 31 insertions(+), 14 deletions(-)
diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 378ed944b849..ac8b7614e79d 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -122,6 +122,7 @@ KVM_X86_OP(enter_smm)
KVM_X86_OP(leave_smm)
KVM_X86_OP(enable_smi_window)
#endif
+KVM_X86_OP_OPTIONAL(dev_get_attr)
KVM_X86_OP_OPTIONAL(mem_enc_ioctl)
KVM_X86_OP_OPTIONAL(mem_enc_register_region)
KVM_X86_OP_OPTIONAL(mem_enc_unregister_region)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index aaf5a25ea7ed..651ce10cc152 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1770,6 +1770,7 @@ struct kvm_x86_ops {
void (*enable_smi_window)(struct kvm_vcpu *vcpu);
#endif
+ int (*dev_get_attr)(u64 attr, u64 *val);
int (*mem_enc_ioctl)(struct kvm *kvm, void __user *argp);
int (*mem_enc_register_region)(struct kvm *kvm, struct kvm_enc_region *argp);
int (*mem_enc_unregister_region)(struct kvm *kvm, struct kvm_enc_region *argp);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 14c969782d73..a12af5042d82 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4791,34 +4791,49 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
return r;
}
-static int kvm_x86_dev_get_attr(struct kvm_device_attr *attr)
+static int __kvm_x86_dev_get_attr(struct kvm_device_attr *attr, u64 *val)
{
- u64 __user *uaddr = u64_to_user_ptr(attr->addr);
+ int r;
if (attr->group)
return -ENXIO;
switch (attr->attr) {
case KVM_X86_XCOMP_GUEST_SUPP:
- if (put_user(kvm_caps.supported_xcr0, uaddr))
- return -EFAULT;
- return 0;
+ r = 0;
+ *val = kvm_caps.supported_xcr0;
+ break;
default:
- return -ENXIO;
+ r = -ENXIO;
+ if (kvm_x86_ops.dev_get_attr)
+ r = static_call(kvm_x86_dev_get_attr)(attr->attr, val);
+ break;
}
+
+ return r;
+}
+
+static int kvm_x86_dev_get_attr(struct kvm_device_attr *attr)
+{
+ u64 __user *uaddr = u64_to_user_ptr(attr->addr);
+ int r;
+ u64 val;
+
+ r = __kvm_x86_dev_get_attr(attr, &val);
+ if (r < 0)
+ return r;
+
+ if (put_user(val, uaddr))
+ return -EFAULT;
+
+ return 0;
}
static int kvm_x86_dev_has_attr(struct kvm_device_attr *attr)
{
- if (attr->group)
- return -ENXIO;
+ u64 val;
- switch (attr->attr) {
- case KVM_X86_XCOMP_GUEST_SUPP:
- return 0;
- default:
- return -ENXIO;
- }
+ return __kvm_x86_dev_get_attr(attr, &val);
}
long kvm_arch_dev_ioctl(struct file *filp,
--
2.39.1
From: Sean Christopherson <[email protected]>
Leave SEV and SEV_ES '0' in kvm_cpu_caps by default, and instead set them
in sev_set_cpu_caps() if SEV and SEV-ES support are fully enabled. Aside
from the fact that sev_set_cpu_caps() is wildly misleading when it *clears*
capabilities, this will allow compiling out sev.c without falsely
advertising SEV/SEV-ES support in KVM_GET_SUPPORTED_CPUID.
Signed-off-by: Sean Christopherson <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
---
arch/x86/kvm/cpuid.c | 2 +-
arch/x86/kvm/svm/sev.c | 8 ++++----
2 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index adba49afb5fe..bde4df13a7e8 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -761,7 +761,7 @@ void kvm_set_cpu_caps(void)
kvm_cpu_cap_mask(CPUID_8000_000A_EDX, 0);
kvm_cpu_cap_mask(CPUID_8000_001F_EAX,
- 0 /* SME */ | F(SEV) | 0 /* VM_PAGE_FLUSH */ | F(SEV_ES) |
+ 0 /* SME */ | 0 /* SEV */ | 0 /* VM_PAGE_FLUSH */ | 0 /* SEV_ES */ |
F(SME_COHERENT));
kvm_cpu_cap_mask(CPUID_8000_0021_EAX,
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index f06f9e51ad9d..aec3453fd73c 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2178,10 +2178,10 @@ void sev_vm_destroy(struct kvm *kvm)
void __init sev_set_cpu_caps(void)
{
- if (!sev_enabled)
- kvm_cpu_cap_clear(X86_FEATURE_SEV);
- if (!sev_es_enabled)
- kvm_cpu_cap_clear(X86_FEATURE_SEV_ES);
+ if (sev_enabled)
+ kvm_cpu_cap_set(X86_FEATURE_SEV);
+ if (sev_es_enabled)
+ kvm_cpu_cap_set(X86_FEATURE_SEV_ES);
}
void __init sev_hardware_setup(void)
--
2.39.1
Some VM types have characteristics in common; in fact, the only use
of VM types right now is kvm_arch_has_private_mem and it assumes that
_all_ nonzero VM types have private memory.
We will soon introduce a VM type for SEV and SEV-ES VMs, and at that
point we will have two special characteristics of confidential VMs
that depend on the VM type: not just if memory is private, but
also whether guest state is protected. For the latter we have
kvm->arch.guest_state_protected, which is only set on a fully initialized
VM.
For VM types with protected guest state, we can actually fix a problem in
the SEV-ES implementation, where ioctls to set registers do not cause an
error even if the VM has been initialized and the guest state encrypted.
Make sure that, when the new VM types are used, these ioctls fail instead.
Signed-off-by: Paolo Bonzini <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
---
arch/x86/include/asm/kvm_host.h | 7 ++-
arch/x86/kvm/x86.c | 95 +++++++++++++++++++++++++++------
2 files changed, 84 insertions(+), 18 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 651ce10cc152..4c05b3001475 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1279,12 +1279,14 @@ enum kvm_apicv_inhibit {
};
struct kvm_arch {
- unsigned long vm_type;
unsigned long n_used_mmu_pages;
unsigned long n_requested_mmu_pages;
unsigned long n_max_mmu_pages;
unsigned int indirect_shadow_pages;
u8 mmu_valid_gen;
+ u8 vm_type;
+ bool has_private_mem;
+ bool has_protected_state;
struct hlist_head mmu_page_hash[KVM_NUM_MMU_PAGES];
struct list_head active_mmu_pages;
struct list_head zapped_obsolete_pages;
@@ -2136,8 +2138,9 @@ void kvm_mmu_new_pgd(struct kvm_vcpu *vcpu, gpa_t new_pgd);
void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
int tdp_max_root_level, int tdp_huge_page_level);
+
#ifdef CONFIG_KVM_PRIVATE_MEM
-#define kvm_arch_has_private_mem(kvm) ((kvm)->arch.vm_type != KVM_X86_DEFAULT_VM)
+#define kvm_arch_has_private_mem(kvm) ((kvm)->arch.has_private_mem)
#else
#define kvm_arch_has_private_mem(kvm) false
#endif
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a12af5042d82..9d91eb1b3080 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5510,12 +5510,16 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
return 0;
}
-static void kvm_vcpu_ioctl_x86_get_debugregs(struct kvm_vcpu *vcpu,
- struct kvm_debugregs *dbgregs)
+static int kvm_vcpu_ioctl_x86_get_debugregs(struct kvm_vcpu *vcpu,
+ struct kvm_debugregs *dbgregs)
{
unsigned long val;
unsigned int i;
+ if (vcpu->kvm->arch.has_protected_state &&
+ vcpu->arch.guest_state_protected)
+ return -EINVAL;
+
memset(dbgregs, 0, sizeof(*dbgregs));
BUILD_BUG_ON(ARRAY_SIZE(vcpu->arch.db) != ARRAY_SIZE(dbgregs->db));
@@ -5525,6 +5529,7 @@ static void kvm_vcpu_ioctl_x86_get_debugregs(struct kvm_vcpu *vcpu,
kvm_get_dr(vcpu, 6, &val);
dbgregs->dr6 = val;
dbgregs->dr7 = vcpu->arch.dr7;
+ return 0;
}
static int kvm_vcpu_ioctl_x86_set_debugregs(struct kvm_vcpu *vcpu,
@@ -5532,6 +5537,10 @@ static int kvm_vcpu_ioctl_x86_set_debugregs(struct kvm_vcpu *vcpu,
{
unsigned int i;
+ if (vcpu->kvm->arch.has_protected_state &&
+ vcpu->arch.guest_state_protected)
+ return -EINVAL;
+
if (dbgregs->flags)
return -EINVAL;
@@ -5552,9 +5561,13 @@ static int kvm_vcpu_ioctl_x86_set_debugregs(struct kvm_vcpu *vcpu,
}
-static void kvm_vcpu_ioctl_x86_get_xsave2(struct kvm_vcpu *vcpu,
- u8 *state, unsigned int size)
+static int kvm_vcpu_ioctl_x86_get_xsave2(struct kvm_vcpu *vcpu,
+ u8 *state, unsigned int size)
{
+ if (vcpu->kvm->arch.has_protected_state &&
+ fpstate_is_confidential(&vcpu->arch.guest_fpu))
+ return -EINVAL;
+
/*
* Only copy state for features that are enabled for the guest. The
* state itself isn't problematic, but setting bits in the header for
@@ -5571,22 +5584,27 @@ static void kvm_vcpu_ioctl_x86_get_xsave2(struct kvm_vcpu *vcpu,
XFEATURE_MASK_FPSSE;
if (fpstate_is_confidential(&vcpu->arch.guest_fpu))
- return;
+ return 0;
fpu_copy_guest_fpstate_to_uabi(&vcpu->arch.guest_fpu, state, size,
supported_xcr0, vcpu->arch.pkru);
+ return 0;
}
-static void kvm_vcpu_ioctl_x86_get_xsave(struct kvm_vcpu *vcpu,
- struct kvm_xsave *guest_xsave)
+static int kvm_vcpu_ioctl_x86_get_xsave(struct kvm_vcpu *vcpu,
+ struct kvm_xsave *guest_xsave)
{
- kvm_vcpu_ioctl_x86_get_xsave2(vcpu, (void *)guest_xsave->region,
- sizeof(guest_xsave->region));
+ return kvm_vcpu_ioctl_x86_get_xsave2(vcpu, (void *)guest_xsave->region,
+ sizeof(guest_xsave->region));
}
static int kvm_vcpu_ioctl_x86_set_xsave(struct kvm_vcpu *vcpu,
struct kvm_xsave *guest_xsave)
{
+ if (vcpu->kvm->arch.has_protected_state &&
+ fpstate_is_confidential(&vcpu->arch.guest_fpu))
+ return -EINVAL;
+
if (fpstate_is_confidential(&vcpu->arch.guest_fpu))
return 0;
@@ -5596,18 +5614,23 @@ static int kvm_vcpu_ioctl_x86_set_xsave(struct kvm_vcpu *vcpu,
&vcpu->arch.pkru);
}
-static void kvm_vcpu_ioctl_x86_get_xcrs(struct kvm_vcpu *vcpu,
- struct kvm_xcrs *guest_xcrs)
+static int kvm_vcpu_ioctl_x86_get_xcrs(struct kvm_vcpu *vcpu,
+ struct kvm_xcrs *guest_xcrs)
{
+ if (vcpu->kvm->arch.has_protected_state &&
+ vcpu->arch.guest_state_protected)
+ return -EINVAL;
+
if (!boot_cpu_has(X86_FEATURE_XSAVE)) {
guest_xcrs->nr_xcrs = 0;
- return;
+ return 0;
}
guest_xcrs->nr_xcrs = 1;
guest_xcrs->flags = 0;
guest_xcrs->xcrs[0].xcr = XCR_XFEATURE_ENABLED_MASK;
guest_xcrs->xcrs[0].value = vcpu->arch.xcr0;
+ return 0;
}
static int kvm_vcpu_ioctl_x86_set_xcrs(struct kvm_vcpu *vcpu,
@@ -5615,6 +5638,10 @@ static int kvm_vcpu_ioctl_x86_set_xcrs(struct kvm_vcpu *vcpu,
{
int i, r = 0;
+ if (vcpu->kvm->arch.has_protected_state &&
+ vcpu->arch.guest_state_protected)
+ return -EINVAL;
+
if (!boot_cpu_has(X86_FEATURE_XSAVE))
return -EINVAL;
@@ -5997,7 +6024,9 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
case KVM_GET_DEBUGREGS: {
struct kvm_debugregs dbgregs;
- kvm_vcpu_ioctl_x86_get_debugregs(vcpu, &dbgregs);
+ r = kvm_vcpu_ioctl_x86_get_debugregs(vcpu, &dbgregs);
+ if (r < 0)
+ break;
r = -EFAULT;
if (copy_to_user(argp, &dbgregs,
@@ -6027,7 +6056,9 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
if (!u.xsave)
break;
- kvm_vcpu_ioctl_x86_get_xsave(vcpu, u.xsave);
+ r = kvm_vcpu_ioctl_x86_get_xsave(vcpu, u.xsave);
+ if (r < 0)
+ break;
r = -EFAULT;
if (copy_to_user(argp, u.xsave, sizeof(struct kvm_xsave)))
@@ -6056,7 +6087,9 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
if (!u.xsave)
break;
- kvm_vcpu_ioctl_x86_get_xsave2(vcpu, u.buffer, size);
+ r = kvm_vcpu_ioctl_x86_get_xsave2(vcpu, u.buffer, size);
+ if (r < 0)
+ break;
r = -EFAULT;
if (copy_to_user(argp, u.xsave, size))
@@ -6072,7 +6105,9 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
if (!u.xcrs)
break;
- kvm_vcpu_ioctl_x86_get_xcrs(vcpu, u.xcrs);
+ r = kvm_vcpu_ioctl_x86_get_xcrs(vcpu, u.xcrs);
+ if (r < 0)
+ break;
r = -EFAULT;
if (copy_to_user(argp, u.xcrs,
@@ -6216,6 +6251,11 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
}
#endif
case KVM_GET_SREGS2: {
+ r = -EINVAL;
+ if (vcpu->kvm->arch.has_protected_state &&
+ vcpu->arch.guest_state_protected)
+ goto out;
+
u.sregs2 = kzalloc(sizeof(struct kvm_sregs2), GFP_KERNEL);
r = -ENOMEM;
if (!u.sregs2)
@@ -6228,6 +6268,11 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
break;
}
case KVM_SET_SREGS2: {
+ r = -EINVAL;
+ if (vcpu->kvm->arch.has_protected_state &&
+ vcpu->arch.guest_state_protected)
+ goto out;
+
u.sregs2 = memdup_user(argp, sizeof(struct kvm_sregs2));
if (IS_ERR(u.sregs2)) {
r = PTR_ERR(u.sregs2);
@@ -11444,6 +11489,10 @@ static void __get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
{
+ if (vcpu->kvm->arch.has_protected_state &&
+ vcpu->arch.guest_state_protected)
+ return -EINVAL;
+
vcpu_load(vcpu);
__get_regs(vcpu, regs);
vcpu_put(vcpu);
@@ -11485,6 +11534,10 @@ static void __set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
{
+ if (vcpu->kvm->arch.has_protected_state &&
+ vcpu->arch.guest_state_protected)
+ return -EINVAL;
+
vcpu_load(vcpu);
__set_regs(vcpu, regs);
vcpu_put(vcpu);
@@ -11557,6 +11610,10 @@ static void __get_sregs2(struct kvm_vcpu *vcpu, struct kvm_sregs2 *sregs2)
int kvm_arch_vcpu_ioctl_get_sregs(struct kvm_vcpu *vcpu,
struct kvm_sregs *sregs)
{
+ if (vcpu->kvm->arch.has_protected_state &&
+ vcpu->arch.guest_state_protected)
+ return -EINVAL;
+
vcpu_load(vcpu);
__get_sregs(vcpu, sregs);
vcpu_put(vcpu);
@@ -11824,6 +11881,10 @@ int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
{
int ret;
+ if (vcpu->kvm->arch.has_protected_state &&
+ vcpu->arch.guest_state_protected)
+ return -EINVAL;
+
vcpu_load(vcpu);
ret = __set_sregs(vcpu, sregs);
vcpu_put(vcpu);
@@ -12513,6 +12574,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
return -EINVAL;
kvm->arch.vm_type = type;
+ kvm->arch.has_private_mem =
+ (type == KVM_X86_SW_PROTECTED_VM);
ret = kvm_page_track_init(kvm);
if (ret)
--
2.39.1
Signed-off-by: Paolo Bonzini <[email protected]>
---
Documentation/virt/kvm/api.rst | 2 ++
arch/x86/include/uapi/asm/kvm.h | 2 ++
arch/x86/kvm/svm/sev.c | 16 +++++++++++++---
arch/x86/kvm/svm/svm.c | 7 +++++++
arch/x86/kvm/svm/svm.h | 1 +
arch/x86/kvm/x86.c | 2 ++
6 files changed, 27 insertions(+), 3 deletions(-)
diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 0b5a33ee71ee..f0b76ff5030d 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -8819,6 +8819,8 @@ means the VM type with value @n is supported. Possible values of @n are::
#define KVM_X86_DEFAULT_VM 0
#define KVM_X86_SW_PROTECTED_VM 1
+ #define KVM_X86_SEV_VM 2
+ #define KVM_X86_SEV_ES_VM 3
Note, KVM_X86_SW_PROTECTED_VM is currently only for development and testing.
Do not use KVM_X86_SW_PROTECTED_VM for "real" VMs, and especially not in
diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index d0c1b459f7e9..9d950b0b64c9 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -857,5 +857,7 @@ struct kvm_hyperv_eventfd {
#define KVM_X86_DEFAULT_VM 0
#define KVM_X86_SW_PROTECTED_VM 1
+#define KVM_X86_SEV_VM 2
+#define KVM_X86_SEV_ES_VM 3
#endif /* _ASM_X86_KVM_H */
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 2549a539a686..1248ccf433e8 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -247,6 +247,9 @@ static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
if (kvm->created_vcpus)
return -EINVAL;
+ if (kvm->arch.vm_type != KVM_X86_DEFAULT_VM)
+ return -EINVAL;
+
if (unlikely(sev->active))
return -EINVAL;
@@ -264,6 +267,7 @@ static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
INIT_LIST_HEAD(&sev->regions_list);
INIT_LIST_HEAD(&sev->mirror_vms);
+ sev->need_init = false;
kvm_set_apicv_inhibit(kvm, APICV_INHIBIT_REASON_SEV);
@@ -1799,7 +1803,8 @@ int sev_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd)
if (ret)
goto out_fput;
- if (sev_guest(kvm) || !sev_guest(source_kvm)) {
+ if (kvm->arch.vm_type != source_kvm->arch.vm_type ||
+ sev_guest(kvm) || !sev_guest(source_kvm)) {
ret = -EINVAL;
goto out_unlock;
}
@@ -2118,6 +2123,7 @@ int sev_vm_copy_enc_context_from(struct kvm *kvm, unsigned int source_fd)
mirror_sev->asid = source_sev->asid;
mirror_sev->fd = source_sev->fd;
mirror_sev->es_active = source_sev->es_active;
+ mirror_sev->need_init = false;
mirror_sev->handle = source_sev->handle;
INIT_LIST_HEAD(&mirror_sev->regions_list);
INIT_LIST_HEAD(&mirror_sev->mirror_vms);
@@ -2183,10 +2189,14 @@ void sev_vm_destroy(struct kvm *kvm)
void __init sev_set_cpu_caps(void)
{
- if (sev_enabled)
+ if (sev_enabled) {
kvm_cpu_cap_set(X86_FEATURE_SEV);
- if (sev_es_enabled)
+ kvm_caps.supported_vm_types |= BIT(KVM_X86_SEV_VM);
+ }
+ if (sev_es_enabled) {
kvm_cpu_cap_set(X86_FEATURE_SEV_ES);
+ kvm_caps.supported_vm_types |= BIT(KVM_X86_SEV_ES_VM);
+ }
}
void __init sev_hardware_setup(void)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 1cf9e5f1fd02..f4a750426b24 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4089,6 +4089,9 @@ static void svm_cancel_injection(struct kvm_vcpu *vcpu)
static int svm_vcpu_pre_run(struct kvm_vcpu *vcpu)
{
+ if (to_kvm_sev_info(vcpu->kvm)->need_init)
+ return -EINVAL;
+
return 1;
}
@@ -4890,6 +4893,10 @@ static void svm_vm_destroy(struct kvm *kvm)
static int svm_vm_init(struct kvm *kvm)
{
+ if (kvm->arch.vm_type != KVM_X86_DEFAULT_VM &&
+ kvm->arch.vm_type != KVM_X86_SW_PROTECTED_VM)
+ to_kvm_sev_info(kvm)->need_init = true;
+
if (!pause_filter_count || !pause_filter_thresh)
kvm->arch.pause_in_guest = true;
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index ebf2160bf0c6..7a921acc534f 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -79,6 +79,7 @@ enum {
struct kvm_sev_info {
bool active; /* SEV enabled guest */
bool es_active; /* SEV-ES enabled guest */
+ bool need_init; /* waiting for SEV_INIT2 */
unsigned int asid; /* ASID used for this guest */
unsigned int handle; /* SEV firmware handle */
int fd; /* SEV device fd */
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 3b87e65904ae..b9dfe3179332 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12576,6 +12576,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
kvm->arch.vm_type = type;
kvm->arch.has_private_mem =
(type == KVM_X86_SW_PROTECTED_VM);
+ kvm->arch.has_protected_state =
+ (type == KVM_X86_SEV_ES_VM);
ret = kvm_page_track_init(kvm);
if (ret)
--
2.39.1
Stop compiling sev.c when CONFIG_KVM_AMD_SEV=n, as the number of #ifdefs
in sev.c is getting ridiculous, and having #ifdefs inside of SEV helpers
is quite confusing.
To minimize #ifdefs in code flows, #ifdef away only the kvm_x86_ops hooks
and the #VMGEXIT handler. Stubs are also restricted to functions that
check sev_enabled and to the destruction functions sev_free_vcpu() and
sev_vm_destroy(), where the style of their callers is to leave checks
to the callers. Most call sites instead rely on dead code elimination
to take care of functions that are guarded with sev_guest() or
sev_es_guest().
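As a rough illustration of the pattern the existing callers follow (a
sketch, not a quote from the patch):

  /*
   * sev_guest() is a compile-time "false" when CONFIG_KVM_AMD_SEV=n, so
   * the compiler discards the call below entirely; only the declaration
   * of sev_init_vmcb() is needed, not a stub definition.
   */
  if (sev_guest(vcpu->kvm))
          sev_init_vmcb(svm);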
Co-developed-by: Sean Christopherson <[email protected]>
Signed-off-by: Sean Christopherson <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
---
arch/x86/kvm/Makefile | 7 ++++---
arch/x86/kvm/svm/sev.c | 23 -----------------------
arch/x86/kvm/svm/svm.c | 5 ++++-
arch/x86/kvm/svm/svm.h | 26 +++++++++++++++++---------
4 files changed, 25 insertions(+), 36 deletions(-)
diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile
index 475b5fa917a6..744a1ea3ee5c 100644
--- a/arch/x86/kvm/Makefile
+++ b/arch/x86/kvm/Makefile
@@ -25,9 +25,10 @@ kvm-intel-y += vmx/vmx.o vmx/vmenter.o vmx/pmu_intel.o vmx/vmcs12.o \
kvm-intel-$(CONFIG_X86_SGX_KVM) += vmx/sgx.o
kvm-intel-$(CONFIG_KVM_HYPERV) += vmx/hyperv.o vmx/hyperv_evmcs.o
-kvm-amd-y += svm/svm.o svm/vmenter.o svm/pmu.o svm/nested.o svm/avic.o \
- svm/sev.o
-kvm-amd-$(CONFIG_KVM_HYPERV) += svm/hyperv.o
+kvm-amd-y += svm/svm.o svm/vmenter.o svm/pmu.o svm/nested.o svm/avic.o
+
+kvm-amd-$(CONFIG_KVM_AMD_SEV) += svm/sev.o
+kvm-amd-$(CONFIG_KVM_HYPERV) += svm/hyperv.o
ifdef CONFIG_HYPERV
kvm-y += kvm_onhyperv.o
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index aec3453fd73c..2f4f54ab8e1b 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -32,22 +32,6 @@
#include "cpuid.h"
#include "trace.h"
-#ifndef CONFIG_KVM_AMD_SEV
-/*
- * When this config is not defined, SEV feature is not supported and APIs in
- * this file are not used but this file still gets compiled into the KVM AMD
- * module.
- *
- * We will not have MISC_CG_RES_SEV and MISC_CG_RES_SEV_ES entries in the enum
- * misc_res_type {} defined in linux/misc_cgroup.h.
- *
- * Below macros allow compilation to succeed.
- */
-#define MISC_CG_RES_SEV MISC_CG_RES_TYPES
-#define MISC_CG_RES_SEV_ES MISC_CG_RES_TYPES
-#endif
-
-#ifdef CONFIG_KVM_AMD_SEV
/* enable/disable SEV support */
static bool sev_enabled = true;
module_param_named(sev, sev_enabled, bool, 0444);
@@ -59,11 +43,6 @@ module_param_named(sev_es, sev_es_enabled, bool, 0444);
/* enable/disable SEV-ES DebugSwap support */
static bool sev_es_debug_swap_enabled = true;
module_param_named(debug_swap, sev_es_debug_swap_enabled, bool, 0444);
-#else
-#define sev_enabled false
-#define sev_es_enabled false
-#define sev_es_debug_swap_enabled false
-#endif /* CONFIG_KVM_AMD_SEV */
static u8 sev_enc_bit;
static DECLARE_RWSEM(sev_deactivate_lock);
@@ -2186,7 +2165,6 @@ void __init sev_set_cpu_caps(void)
void __init sev_hardware_setup(void)
{
-#ifdef CONFIG_KVM_AMD_SEV
unsigned int eax, ebx, ecx, edx, sev_asid_count, sev_es_asid_count;
bool sev_es_supported = false;
bool sev_supported = false;
@@ -2286,7 +2264,6 @@ void __init sev_hardware_setup(void)
if (!sev_es_enabled || !cpu_feature_enabled(X86_FEATURE_DEBUG_SWAP) ||
!cpu_feature_enabled(X86_FEATURE_NO_NESTED_DATA_BP))
sev_es_debug_swap_enabled = false;
-#endif
}
void sev_hardware_unsetup(void)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index e90b429c84f1..eaa973dbe543 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3306,7 +3306,9 @@ static int (*const svm_exit_handlers[])(struct kvm_vcpu *vcpu) = {
[SVM_EXIT_RSM] = rsm_interception,
[SVM_EXIT_AVIC_INCOMPLETE_IPI] = avic_incomplete_ipi_interception,
[SVM_EXIT_AVIC_UNACCELERATED_ACCESS] = avic_unaccelerated_access_interception,
+#ifdef CONFIG_KVM_AMD_SEV
[SVM_EXIT_VMGEXIT] = sev_handle_vmgexit,
+#endif
};
static void dump_vmcb(struct kvm_vcpu *vcpu)
@@ -5014,6 +5016,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
.enable_smi_window = svm_enable_smi_window,
#endif
+#ifdef CONFIG_KVM_AMD_SEV
.mem_enc_ioctl = sev_mem_enc_ioctl,
.mem_enc_register_region = sev_mem_enc_register_region,
.mem_enc_unregister_region = sev_mem_enc_unregister_region,
@@ -5021,7 +5024,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
.vm_copy_enc_context_from = sev_vm_copy_enc_context_from,
.vm_move_enc_context_from = sev_vm_move_enc_context_from,
-
+#endif
.check_emulate_instruction = svm_check_emulate_instruction,
.apic_init_signal_blocked = svm_apic_init_signal_blocked,
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 8ef95139cd24..52bc955ed06f 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -664,13 +664,10 @@ void avic_refresh_virtual_apic_mode(struct kvm_vcpu *vcpu);
/* sev.c */
+#ifdef CONFIG_KVM_AMD_SEV
#define GHCB_VERSION_MAX 1ULL
#define GHCB_VERSION_MIN 1ULL
-
-extern unsigned int max_sev_asid;
-
-void sev_vm_destroy(struct kvm *kvm);
int sev_mem_enc_ioctl(struct kvm *kvm, void __user *argp);
int sev_mem_enc_register_region(struct kvm *kvm,
struct kvm_enc_region *range);
@@ -681,19 +678,30 @@ int sev_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd);
void sev_guest_memory_reclaimed(struct kvm *kvm);
void pre_sev_run(struct vcpu_svm *svm, int cpu);
-void __init sev_set_cpu_caps(void);
-void __init sev_hardware_setup(void);
-void sev_hardware_unsetup(void);
-int sev_cpu_init(struct svm_cpu_data *sd);
void sev_init_vmcb(struct vcpu_svm *svm);
void sev_vcpu_after_set_cpuid(struct vcpu_svm *svm);
-void sev_free_vcpu(struct kvm_vcpu *vcpu);
int sev_handle_vmgexit(struct kvm_vcpu *vcpu);
int sev_es_string_io(struct vcpu_svm *svm, int size, unsigned int port, int in);
void sev_es_vcpu_reset(struct vcpu_svm *svm);
void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector);
void sev_es_prepare_switch_to_guest(struct sev_es_save_area *hostsa);
void sev_es_unmap_ghcb(struct vcpu_svm *svm);
+void sev_free_vcpu(struct kvm_vcpu *vcpu);
+void sev_vm_destroy(struct kvm *kvm);
+void __init sev_set_cpu_caps(void);
+void __init sev_hardware_setup(void);
+void sev_hardware_unsetup(void);
+int sev_cpu_init(struct svm_cpu_data *sd);
+extern unsigned int max_sev_asid;
+#else
+static inline void sev_free_vcpu(struct kvm_vcpu *vcpu) {}
+static inline void sev_vm_destroy(struct kvm *kvm) {}
+static inline void __init sev_set_cpu_caps(void) {}
+static inline void __init sev_hardware_setup(void) {}
+static inline void sev_hardware_unsetup(void) {}
+static inline int sev_cpu_init(struct svm_cpu_data *sd) { return 0; }
+#define max_sev_asid 0
+#endif
/* vmenter.S */
--
2.39.1
Compute the set of features to be stored in the VMSA when KVM is
initialized; move it from there into kvm_sev_info when SEV is initialized,
and then into the initial VMSA.
The new variable can then be used to return the set of supported features
to userspace, via the KVM_GET_DEVICE_ATTR ioctl.
Signed-off-by: Paolo Bonzini <[email protected]>
---
.../virt/kvm/x86/amd-memory-encryption.rst | 12 +++++++++++
arch/x86/include/uapi/asm/kvm.h | 1 +
arch/x86/kvm/svm/sev.c | 20 +++++++++++++++++--
arch/x86/kvm/svm/svm.c | 1 +
arch/x86/kvm/svm/svm.h | 2 ++
5 files changed, 34 insertions(+), 2 deletions(-)
diff --git a/Documentation/virt/kvm/x86/amd-memory-encryption.rst b/Documentation/virt/kvm/x86/amd-memory-encryption.rst
index 37c5c37f4f6e..5ed11bc16b96 100644
--- a/Documentation/virt/kvm/x86/amd-memory-encryption.rst
+++ b/Documentation/virt/kvm/x86/amd-memory-encryption.rst
@@ -424,6 +424,18 @@ issued by the hypervisor to make the guest ready for execution.
Returns: 0 on success, -negative on error
+Device attribute API
+====================
+
+Attributes of the SEV implementation can be retrieved through the
+``KVM_HAS_DEVICE_ATTR`` and ``KVM_GET_DEVICE_ATTR`` ioctls on the ``/dev/kvm``
+device node.
+
+Currently only one attribute is implemented:
+
+* group 0, attribute ``KVM_X86_SEV_VMSA_FEATURES``: return the set of all
+ bits that are accepted in the ``vmsa_features`` of ``KVM_SEV_INIT2``.
+
Firmware Management
===================
diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index ef11aa4cab42..d0c1b459f7e9 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -459,6 +459,7 @@ struct kvm_sync_regs {
/* attributes for system fd (group 0) */
#define KVM_X86_XCOMP_GUEST_SUPP 0
+#define KVM_X86_SEV_VMSA_FEATURES 1
struct kvm_vmx_nested_state_data {
__u8 vmcs12[KVM_STATE_NESTED_VMX_VMCS_SIZE];
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 2f4f54ab8e1b..16a5c64232b7 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -43,6 +43,7 @@ module_param_named(sev_es, sev_es_enabled, bool, 0444);
/* enable/disable SEV-ES DebugSwap support */
static bool sev_es_debug_swap_enabled = true;
module_param_named(debug_swap, sev_es_debug_swap_enabled, bool, 0444);
+static u64 sev_supported_vmsa_features;
static u8 sev_enc_bit;
static DECLARE_RWSEM(sev_deactivate_lock);
@@ -597,8 +598,7 @@ static int sev_es_sync_vmsa(struct vcpu_svm *svm)
save->xss = svm->vcpu.arch.ia32_xss;
save->dr6 = svm->vcpu.arch.dr6;
- if (sev_es_debug_swap_enabled)
- save->sev_features |= SVM_SEV_FEAT_DEBUG_SWAP;
+ save->sev_features = sev_supported_vmsa_features;
pr_debug("Virtual Machine Save Area (VMSA):\n");
print_hex_dump_debug("", DUMP_PREFIX_NONE, 16, 1, save, sizeof(*save), false);
@@ -1834,6 +1834,18 @@ int sev_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd)
return ret;
}
+int sev_dev_get_attr(u64 attr, u64 *val)
+{
+ switch (attr) {
+ case KVM_X86_SEV_VMSA_FEATURES:
+ *val = sev_supported_vmsa_features;
+ return 0;
+
+ default:
+ return -ENXIO;
+ }
+}
+
int sev_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
{
struct kvm_sev_cmd sev_cmd;
@@ -2264,6 +2276,10 @@ void __init sev_hardware_setup(void)
if (!sev_es_enabled || !cpu_feature_enabled(X86_FEATURE_DEBUG_SWAP) ||
!cpu_feature_enabled(X86_FEATURE_NO_NESTED_DATA_BP))
sev_es_debug_swap_enabled = false;
+
+ sev_supported_vmsa_features = 0;
+ if (sev_es_debug_swap_enabled)
+ sev_supported_vmsa_features |= SVM_SEV_FEAT_DEBUG_SWAP;
}
void sev_hardware_unsetup(void)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index eaa973dbe543..595642099772 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -5017,6 +5017,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
#endif
#ifdef CONFIG_KVM_AMD_SEV
+ .dev_get_attr = sev_dev_get_attr,
.mem_enc_ioctl = sev_mem_enc_ioctl,
.mem_enc_register_region = sev_mem_enc_register_region,
.mem_enc_unregister_region = sev_mem_enc_unregister_region,
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 52bc955ed06f..8f2394169703 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -692,8 +692,10 @@ void __init sev_set_cpu_caps(void);
void __init sev_hardware_setup(void);
void sev_hardware_unsetup(void);
int sev_cpu_init(struct svm_cpu_data *sd);
+int sev_dev_get_attr(u64 attr, u64 *val);
extern unsigned int max_sev_asid;
#else
+static inline int sev_dev_get_attr(u64 attr, u64 *val) { return -ENXIO; }
static inline void sev_free_vcpu(struct kvm_vcpu *vcpu) {}
static inline void sev_vm_destroy(struct kvm *kvm) {}
static inline void __init sev_set_cpu_caps(void) {}
--
2.39.1
Right now, the set of features that are stored in the VMSA upon
initialization is fixed and depends on the module parameters for
kvm-amd.ko. However, the hypervisor cannot really change it at will
because the feature word has to match between the hypervisor and whatever
computes a measurement of the VMSA for attestation purposes.
Add a field to kvm_sev_info that holds the set of features to be stored
in the VMSA; and query it instead of referring to the module parameters.
Because KVM_SEV_INIT and KVM_SEV_ES_INIT accept no parameters, this
does not yet introduce any functional change, but it paves the way for
an API that allows customization of the features per-VM.
Signed-off-by: Paolo Bonzini <[email protected]>
Message-Id: <[email protected]>
Reviewed-by: Michael Roth <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
---
arch/x86/kvm/svm/sev.c | 22 ++++++++++++++++++----
arch/x86/kvm/svm/svm.c | 2 +-
arch/x86/kvm/svm/svm.h | 3 ++-
3 files changed, 21 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 16a5c64232b7..b46612db0594 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -96,6 +96,14 @@ static inline bool is_mirroring_enc_context(struct kvm *kvm)
return !!to_kvm_svm(kvm)->sev_info.enc_context_owner;
}
+static bool sev_vcpu_has_debug_swap(struct vcpu_svm *svm)
+{
+ struct kvm_vcpu *vcpu = &svm->vcpu;
+ struct kvm_sev_info *sev = &to_kvm_svm(vcpu->kvm)->sev_info;
+
+ return sev->vmsa_features & SVM_SEV_FEAT_DEBUG_SWAP;
+}
+
/* Must be called with the sev_bitmap_lock held */
static bool __sev_recycle_asids(unsigned int min_asid, unsigned int max_asid)
{
@@ -244,6 +252,8 @@ static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
sev->active = true;
sev->es_active = argp->id == KVM_SEV_ES_INIT;
+ sev->vmsa_features = sev_supported_vmsa_features;
+
ret = sev_asid_new(sev);
if (ret)
goto e_no_asid;
@@ -263,6 +273,7 @@ static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
sev_asid_free(sev);
sev->asid = 0;
e_no_asid:
+ sev->vmsa_features = 0;
sev->es_active = false;
sev->active = false;
return ret;
@@ -557,6 +568,8 @@ static int sev_launch_update_data(struct kvm *kvm, struct kvm_sev_cmd *argp)
static int sev_es_sync_vmsa(struct vcpu_svm *svm)
{
+ struct kvm_vcpu *vcpu = &svm->vcpu;
+ struct kvm_sev_info *sev = &to_kvm_svm(vcpu->kvm)->sev_info;
struct sev_es_save_area *save = svm->sev_es.vmsa;
/* Check some debug related fields before encrypting the VMSA */
@@ -598,7 +611,7 @@ static int sev_es_sync_vmsa(struct vcpu_svm *svm)
save->xss = svm->vcpu.arch.ia32_xss;
save->dr6 = svm->vcpu.arch.dr6;
- save->sev_features = sev_supported_vmsa_features;
+ save->sev_features = sev->vmsa_features;
pr_debug("Virtual Machine Save Area (VMSA):\n");
print_hex_dump_debug("", DUMP_PREFIX_NONE, 16, 1, save, sizeof(*save), false);
@@ -1678,6 +1691,7 @@ static void sev_migrate_from(struct kvm *dst_kvm, struct kvm *src_kvm)
dst->pages_locked = src->pages_locked;
dst->enc_context_owner = src->enc_context_owner;
dst->es_active = src->es_active;
+ dst->vmsa_features = src->vmsa_features;
src->asid = 0;
src->active = false;
@@ -3048,7 +3062,7 @@ static void sev_es_init_vmcb(struct vcpu_svm *svm)
svm_set_intercept(svm, TRAP_CR8_WRITE);
vmcb->control.intercepts[INTERCEPT_DR] = 0;
- if (!sev_es_debug_swap_enabled) {
+ if (!sev_vcpu_has_debug_swap(svm)) {
vmcb_set_intercept(&vmcb->control, INTERCEPT_DR7_READ);
vmcb_set_intercept(&vmcb->control, INTERCEPT_DR7_WRITE);
recalc_intercepts(svm);
@@ -3103,7 +3117,7 @@ void sev_es_vcpu_reset(struct vcpu_svm *svm)
sev_enc_bit));
}
-void sev_es_prepare_switch_to_guest(struct sev_es_save_area *hostsa)
+void sev_es_prepare_switch_to_guest(struct vcpu_svm *svm, struct sev_es_save_area *hostsa)
{
/*
* All host state for SEV-ES guests is categorized into three swap types
@@ -3131,7 +3145,7 @@ void sev_es_prepare_switch_to_guest(struct sev_es_save_area *hostsa)
* the CPU (Type-B). If DebugSwap is disabled/unsupported, the CPU both
* saves and loads debug registers (Type-A).
*/
- if (sev_es_debug_swap_enabled) {
+ if (sev_vcpu_has_debug_swap(svm)) {
hostsa->dr0 = native_get_debugreg(0);
hostsa->dr1 = native_get_debugreg(1);
hostsa->dr2 = native_get_debugreg(2);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 595642099772..1cf9e5f1fd02 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1523,7 +1523,7 @@ static void svm_prepare_switch_to_guest(struct kvm_vcpu *vcpu)
struct sev_es_save_area *hostsa;
hostsa = (struct sev_es_save_area *)(page_address(sd->save_area) + 0x400);
- sev_es_prepare_switch_to_guest(hostsa);
+ sev_es_prepare_switch_to_guest(svm, hostsa);
}
if (tsc_scaling)
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 8f2394169703..d6147ad18571 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -85,6 +85,7 @@ struct kvm_sev_info {
unsigned long pages_locked; /* Number of pages locked */
struct list_head regions_list; /* List of registered regions */
u64 ap_jump_table; /* SEV-ES AP Jump Table address */
+ u64 vmsa_features;
struct kvm *enc_context_owner; /* Owner of copied encryption context */
struct list_head mirror_vms; /* List of VMs mirroring */
struct list_head mirror_entry; /* Use as a list entry of mirrors */
@@ -684,7 +685,7 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu);
int sev_es_string_io(struct vcpu_svm *svm, int size, unsigned int port, int in);
void sev_es_vcpu_reset(struct vcpu_svm *svm);
void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector);
-void sev_es_prepare_switch_to_guest(struct sev_es_save_area *hostsa);
+void sev_es_prepare_switch_to_guest(struct vcpu_svm *svm, struct sev_es_save_area *hostsa);
void sev_es_unmap_ghcb(struct vcpu_svm *svm);
void sev_free_vcpu(struct kvm_vcpu *vcpu);
void sev_vm_destroy(struct kvm *kvm);
--
2.39.1
Disable all VMSA features in KVM_SEV_INIT and KVM_SEV_ES_INIT. They are
not actually supported by SEV (a SEV guest does not have a VMSA to which
you can apply features) and they cause unexpected changes in measurement
for SEV-ES.
Going forward, the way to enable them will be to use a new initialization ioctl
that takes the VMSA features as a parameter.
Signed-off-by: Paolo Bonzini <[email protected]>
---
arch/x86/kvm/svm/sev.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index b46612db0594..2db0b2b36120 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -252,7 +252,7 @@ static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
sev->active = true;
sev->es_active = argp->id == KVM_SEV_ES_INIT;
- sev->vmsa_features = sev_supported_vmsa_features;
+ sev->vmsa_features = 0;
ret = sev_asid_new(sev);
if (ret)
--
2.39.1
On Mon, Feb 26, 2024 at 02:03:29PM -0500, Paolo Bonzini wrote:
> The idea that no parameter would ever be necessary when enabling SEV or
> SEV-ES for a VM was decidedly optimistic. The first source of variability
> that was encountered is the desired set of VMSA features, as that affects
> the measurement of the VM's initial state and cannot be changed
> arbitrarily by the hypervisor.
>
> This series adds all the APIs that are needed to customize the features,
> with room for future enhancements:
>
> - a new /dev/kvm device attribute to retrieve the set of supported
> features (right now, only debug swap)
>
> - a new sub-operation for KVM_MEM_ENCRYPT_OP that can take a struct,
> replacing the existing KVM_SEV_INIT and KVM_SEV_ES_INIT
>
> It then puts the new op to work by including the VMSA features as a field
> of the struct. The existing KVM_SEV_INIT and KVM_SEV_ES_INIT use the full set of
> supported VMSA features for backwards compatibility; but I am considering
> also making them use zero as the feature mask, and will gladly adjust the
> patches if so requested.
>
> In order to avoid creating *two* new KVM_MEM_ENCRYPT_OPs, I decided that
> I could as well make SEV and SEV-ES use VM types. And then, why not make
> a SEV-ES VM, when created with the new VM type instead of KVM_SEV_ES_INIT,
> reject KVM_GET_REGS/KVM_SET_REGS and friends on the vCPU file descriptor
> once the VMSA has been encrypted... Which is how the API should have
> always behaved.
>
> The series is structured as follows:
>
> - patches 1 to 5 are unrelated fixes and improvements for the SEV code
> and documentation. In particular they change sev.c so that it is
> compiled only if SEV is enabled in kconfig
>
> - patches 6 to 8 introduce the new device attribute to retrieve supported
> VMSA features
>
> - patch 9 disables DEBUG_SWAP by default
>
> - patches 10 and 11 introduce new infrastructure for VM types, replacing
> the similar code in the TDX patches
>
> - patches 12 to 14 introduce the new VM types for SEV and
> SEV-ES, and KVM_SEV_INIT2 as a new sub-operation for KVM_MEM_ENCRYPT_OP.
>
> - patch 15 tests the new ioctl.
>
> The idea is that SEV SNP will only ever support KVM_SEV_INIT2. I have
> patches in progress for QEMU to support this new API.
>
> Thanks,
>
> Paolo
>
>
> v2->v3:
> - use u64_to_user_addr()
> - Compile sev.c if and only if CONFIG_KVM_AMD_SEV=y
> - remove double signoffs
> - rebase on top of kvm-x86/next
I can't apply this series on top of current kvm-x86/next. What exact
commit is the series based on?
Confused...
--
An old man doll... just what I always wanted! - Clara
On Tue, Feb 27, 2024, Bagas Sanjaya wrote:
> On Mon, Feb 26, 2024 at 02:03:29PM -0500, Paolo Bonzini wrote:
> > v2->v3:
> > - use u64_to_user_addr()
> > - Compile sev.c if and only if CONFIG_KVM_AMD_SEV=y
> > - remove double signoffs
> > - rebase on top of kvm-x86/next
>
> I can't apply this series on top of current kvm-x86/next. What exact
> commit is the series based on?
Note that kvm-x86/next is my tree at https://github.com/kvm-x86/linux/tree/next.
Are you pulling that, or are you based off kvm/next (Paolo's tree at
git://git.kernel.org/pub/scm/virt/kvm/kvm.git)?
Because this series applies for me on all of these tags from kvm-x86.
kvm-x86-next-2024.02.22
kvm-x86-next-2024.02.23
kvm-x86-next-2024.02.26
kvm-x86-next-2024.02.26-2
On Mon, Feb 26, 2024 at 02:03:42PM -0500,
Paolo Bonzini <[email protected]> wrote:
> Signed-off-by: Paolo Bonzini <[email protected]>
> ---
> Documentation/virt/kvm/api.rst | 2 ++
> arch/x86/include/uapi/asm/kvm.h | 2 ++
> arch/x86/kvm/svm/sev.c | 16 +++++++++++++---
> arch/x86/kvm/svm/svm.c | 7 +++++++
> arch/x86/kvm/svm/svm.h | 1 +
> arch/x86/kvm/x86.c | 2 ++
> 6 files changed, 27 insertions(+), 3 deletions(-)
>
> diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
> index 0b5a33ee71ee..f0b76ff5030d 100644
> --- a/Documentation/virt/kvm/api.rst
> +++ b/Documentation/virt/kvm/api.rst
> @@ -8819,6 +8819,8 @@ means the VM type with value @n is supported. Possible values of @n are::
>
> #define KVM_X86_DEFAULT_VM 0
> #define KVM_X86_SW_PROTECTED_VM 1
> + #define KVM_X86_SEV_VM 2
> + #define KVM_X86_SEV_ES_VM 3
>
> Note, KVM_X86_SW_PROTECTED_VM is currently only for development and testing.
> Do not use KVM_X86_SW_PROTECTED_VM for "real" VMs, and especially not in
> diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
> index d0c1b459f7e9..9d950b0b64c9 100644
> --- a/arch/x86/include/uapi/asm/kvm.h
> +++ b/arch/x86/include/uapi/asm/kvm.h
> @@ -857,5 +857,7 @@ struct kvm_hyperv_eventfd {
>
> #define KVM_X86_DEFAULT_VM 0
> #define KVM_X86_SW_PROTECTED_VM 1
> +#define KVM_X86_SEV_VM 2
> +#define KVM_X86_SEV_ES_VM 3
>
> #endif /* _ASM_X86_KVM_H */
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index 2549a539a686..1248ccf433e8 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -247,6 +247,9 @@ static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
> if (kvm->created_vcpus)
> return -EINVAL;
>
> + if (kvm->arch.vm_type != KVM_X86_DEFAULT_VM)
> + return -EINVAL;
> +
> if (unlikely(sev->active))
> return -EINVAL;
>
> @@ -264,6 +267,7 @@ static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
>
> INIT_LIST_HEAD(&sev->regions_list);
> INIT_LIST_HEAD(&sev->mirror_vms);
> + sev->need_init = false;
>
> kvm_set_apicv_inhibit(kvm, APICV_INHIBIT_REASON_SEV);
>
> @@ -1799,7 +1803,8 @@ int sev_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd)
> if (ret)
> goto out_fput;
>
> - if (sev_guest(kvm) || !sev_guest(source_kvm)) {
> + if (kvm->arch.vm_type != source_kvm->arch.vm_type ||
> + sev_guest(kvm) || !sev_guest(source_kvm)) {
> ret = -EINVAL;
> goto out_unlock;
> }
> @@ -2118,6 +2123,7 @@ int sev_vm_copy_enc_context_from(struct kvm *kvm, unsigned int source_fd)
> mirror_sev->asid = source_sev->asid;
> mirror_sev->fd = source_sev->fd;
> mirror_sev->es_active = source_sev->es_active;
> + mirror_sev->need_init = false;
> mirror_sev->handle = source_sev->handle;
> INIT_LIST_HEAD(&mirror_sev->regions_list);
> INIT_LIST_HEAD(&mirror_sev->mirror_vms);
> @@ -2183,10 +2189,14 @@ void sev_vm_destroy(struct kvm *kvm)
>
> void __init sev_set_cpu_caps(void)
> {
> - if (sev_enabled)
> + if (sev_enabled) {
> kvm_cpu_cap_set(X86_FEATURE_SEV);
> - if (sev_es_enabled)
> + kvm_caps.supported_vm_types |= BIT(KVM_X86_SEV_VM);
> + }
> + if (sev_es_enabled) {
> kvm_cpu_cap_set(X86_FEATURE_SEV_ES);
> + kvm_caps.supported_vm_types |= BIT(KVM_X86_SEV_ES_VM);
> + }
> }
>
> void __init sev_hardware_setup(void)
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 1cf9e5f1fd02..f4a750426b24 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -4089,6 +4089,9 @@ static void svm_cancel_injection(struct kvm_vcpu *vcpu)
>
> static int svm_vcpu_pre_run(struct kvm_vcpu *vcpu)
> {
> + if (to_kvm_sev_info(vcpu->kvm)->need_init)
> + return -EINVAL;
> +
> return 1;
> }
>
> @@ -4890,6 +4893,10 @@ static void svm_vm_destroy(struct kvm *kvm)
>
> static int svm_vm_init(struct kvm *kvm)
> {
> + if (kvm->arch.vm_type != KVM_X86_DEFAULT_VM &&
> + kvm->arch.vm_type != KVM_X86_SW_PROTECTED_VM)
> + to_kvm_sev_info(kvm)->need_init = true;
> +
> if (!pause_filter_count || !pause_filter_thresh)
> kvm->arch.pause_in_guest = true;
>
> diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
> index ebf2160bf0c6..7a921acc534f 100644
> --- a/arch/x86/kvm/svm/svm.h
> +++ b/arch/x86/kvm/svm/svm.h
> @@ -79,6 +79,7 @@ enum {
> struct kvm_sev_info {
> bool active; /* SEV enabled guest */
> bool es_active; /* SEV-ES enabled guest */
> + bool need_init; /* waiting for SEV_INIT2 */
> unsigned int asid; /* ASID used for this guest */
> unsigned int handle; /* SEV firmware handle */
> int fd; /* SEV device fd */
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 3b87e65904ae..b9dfe3179332 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -12576,6 +12576,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
> kvm->arch.vm_type = type;
> kvm->arch.has_private_mem =
> (type == KVM_X86_SW_PROTECTED_VM);
> + kvm->arch.has_protected_state =
> + (type == KVM_X86_SEV_ES_VM);
Can we push it down into the init_vm() op? I hesitate to add a TDX check here.
kvm_page_track_init() and kvm_mmu_init_vm() wouldn't depend on it.
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index f4a750426b24..a083873b9057 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4893,6 +4893,9 @@ static void svm_vm_destroy(struct kvm *kvm)
static int svm_vm_init(struct kvm *kvm)
{
+ if (kvm->arch.vm_type == KVM_X86_SEV_ES_VM)
+ kvm->arch.has_protected_state = true;
+
if (kvm->arch.vm_type != KVM_X86_DEFAULT_VM &&
kvm->arch.vm_type != KVM_X86_SW_PROTECTED_VM)
to_kvm_sev_info(kvm)->need_init = true;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b9dfe3179332..3b87e65904ae 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12576,8 +12576,6 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
kvm->arch.vm_type = type;
kvm->arch.has_private_mem =
(type == KVM_X86_SW_PROTECTED_VM);
- kvm->arch.has_protected_state =
- (type == KVM_X86_SEV_ES_VM);
ret = kvm_page_track_init(kvm);
if (ret)
--
Isaku Yamahata <[email protected]>
On Tue, Feb 27, 2024 at 09:49:24AM -0800, Sean Christopherson wrote:
> On Tue, Feb 27, 2024, Bagas Sanjaya wrote:
> > On Mon, Feb 26, 2024 at 02:03:29PM -0500, Paolo Bonzini wrote:
> > > v2->v3:
> > > - use u64_to_user_addr()
> > > - Compile sev.c if and only if CONFIG_KVM_AMD_SEV=y
> > > - remove double signoffs
> > > - rebase on top of kvm-x86/next
> >
> > I can't apply this series on top of current kvm-x86/next. What exact
> > commit is the series based on?
>
> Note that kvm-x86/next is my tree at https://github.com/kvm-x86/linux/tree/next.
> Are you pulling that, or are you based off kvm/next (Paolo's tree at
> git://git.kernel.org/pub/scm/virt/kvm/kvm.git)?
I pulled from the former.
>
> Because this series applies for me on all of these tags from kvm-x86.
>
> kvm-x86-next-2024.02.22
> kvm-x86-next-2024.02.23
> kvm-x86-next-2024.02.26
> kvm-x86-next-2024.02.26-2
Successfully applied against kvm-x86-next-2024.02.26 tag. Thanks for the
pointer!
--
An old man doll... just what I always wanted! - Clara
On Mon, Feb 26, 2024 at 02:03:34PM -0500, Paolo Bonzini wrote:
> +``KVM_MEMORY_ENCRYPT_OP`` API
> +=============================
Nit: I think, to be consistent with command names in section headings below,
the API name heading above should not be inlined.
>
> The main ioctl to access SEV is KVM_MEMORY_ENCRYPT_OP. If the argument
> to KVM_MEMORY_ENCRYPT_OP is NULL, the ioctl returns 0 if SEV is enabled
> @@ -87,10 +81,6 @@ guests, such as launching, running, snapshotting, migrating and decommissioning.
> The KVM_SEV_INIT command is used by the hypervisor to initialize the SEV platform
> context. In a typical workflow, this command should be the first command issued.
>
> -The firmware can be initialized either by using its own non-volatile storage or
> -the OS can manage the NV storage for the firmware using the module parameter
> -``init_ex_path``. If the file specified by ``init_ex_path`` does not exist or
> -is invalid, the OS will create or override the file with output from PSP.
>
> Returns: 0 on success, -negative on error
>
> @@ -434,6 +424,21 @@ issued by the hypervisor to make the guest ready for execution.
>
> Returns: 0 on success, -negative on error
>
> +Firmware Management
> +===================
> +
> +The SEV guest key management is handled by a separate processor called the AMD
> +Secure Processor (AMD-SP). Firmware running inside the AMD-SP provides a secure
> +key management interface to perform common hypervisor activities such as
> +encrypting bootstrap code, snapshot, migrating and debugging the guest. For more
> +information, see the SEV Key Management spec [api-spec]_
> +
> +The AMD-SP firmware can be initialized either by using its own non-volatile
> +storage or the OS can manage the NV storage for the firmware using
> +parameter ``init_ex_path`` of the ``ccp`` module. If the file specified
> +by ``init_ex_path`` does not exist or is invalid, the OS will create or
> +override the file with PSP non-volatile storage.
> +
This one LGTM.
Other than the nit,
Reviewed-by: Bagas Sanjaya <[email protected]>
--
An old man doll... just what I always wanted! - Clara
On Mon, Feb 26, 2024 at 02:03:31PM -0500, Paolo Bonzini wrote:
> There is no danger to the kernel if userspace provides a 64-bit value that
> has the high bits set, but for whatever reason happ[ens to resolve to an
^
remove the messy char.
> address that has something mapped there. KVM uses the checked version
> of put_user() in kvm_x86_dev_get_attr().
Judging from the code change, it's not just kvm_x86_dev_get_attr() that is converted.
Thanks,
Yilun
>
> Suggested-by: Sean Christopherson <[email protected]>
> Signed-off-by: Paolo Bonzini <[email protected]>
> ---
> arch/x86/kvm/x86.c | 24 +++---------------------
> 1 file changed, 3 insertions(+), 21 deletions(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index f3f7405e0628..14c969782d73 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -4791,25 +4791,13 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
> return r;
> }
>
> -static inline void __user *kvm_get_attr_addr(struct kvm_device_attr *attr)
> -{
> - void __user *uaddr = (void __user*)(unsigned long)attr->addr;
> -
> - if ((u64)(unsigned long)uaddr != attr->addr)
> - return ERR_PTR_USR(-EFAULT);
> - return uaddr;
> -}
> -
> static int kvm_x86_dev_get_attr(struct kvm_device_attr *attr)
> {
> - u64 __user *uaddr = kvm_get_attr_addr(attr);
> + u64 __user *uaddr = u64_to_user_ptr(attr->addr);
>
> if (attr->group)
> return -ENXIO;
>
> - if (IS_ERR(uaddr))
> - return PTR_ERR(uaddr);
> -
> switch (attr->attr) {
> case KVM_X86_XCOMP_GUEST_SUPP:
> if (put_user(kvm_caps.supported_xcr0, uaddr))
> @@ -5664,12 +5652,9 @@ static int kvm_arch_tsc_has_attr(struct kvm_vcpu *vcpu,
> static int kvm_arch_tsc_get_attr(struct kvm_vcpu *vcpu,
> struct kvm_device_attr *attr)
> {
> - u64 __user *uaddr = kvm_get_attr_addr(attr);
> + u64 __user *uaddr = u64_to_user_ptr(attr->addr);
> int r;
>
> - if (IS_ERR(uaddr))
> - return PTR_ERR(uaddr);
> -
> switch (attr->attr) {
> case KVM_VCPU_TSC_OFFSET:
> r = -EFAULT;
> @@ -5687,13 +5672,10 @@ static int kvm_arch_tsc_get_attr(struct kvm_vcpu *vcpu,
> static int kvm_arch_tsc_set_attr(struct kvm_vcpu *vcpu,
> struct kvm_device_attr *attr)
> {
> - u64 __user *uaddr = kvm_get_attr_addr(attr);
> + u64 __user *uaddr = u64_to_user_ptr(attr->addr);
> struct kvm *kvm = vcpu->kvm;
> int r;
>
> - if (IS_ERR(uaddr))
> - return PTR_ERR(uaddr);
> -
> switch (attr->attr) {
> case KVM_VCPU_TSC_OFFSET: {
> u64 offset, tsc, ns;
> --
> 2.39.1
>
>
>
On Mon, Feb 26, 2024 at 02:03:42PM -0500, Paolo Bonzini wrote:
> Signed-off-by: Paolo Bonzini <[email protected]>
> ---
> Documentation/virt/kvm/api.rst | 2 ++
> arch/x86/include/uapi/asm/kvm.h | 2 ++
> arch/x86/kvm/svm/sev.c | 16 +++++++++++++---
> arch/x86/kvm/svm/svm.c | 7 +++++++
> arch/x86/kvm/svm/svm.h | 1 +
> arch/x86/kvm/x86.c | 2 ++
> 6 files changed, 27 insertions(+), 3 deletions(-)
>
> diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
> index 0b5a33ee71ee..f0b76ff5030d 100644
> --- a/Documentation/virt/kvm/api.rst
> +++ b/Documentation/virt/kvm/api.rst
> @@ -8819,6 +8819,8 @@ means the VM type with value @n is supported. Possible values of @n are::
>
> #define KVM_X86_DEFAULT_VM 0
> #define KVM_X86_SW_PROTECTED_VM 1
> + #define KVM_X86_SEV_VM 2
> + #define KVM_X86_SEV_ES_VM 3
>
> Note, KVM_X86_SW_PROTECTED_VM is currently only for development and testing.
> Do not use KVM_X86_SW_PROTECTED_VM for "real" VMs, and especially not in
> diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
> index d0c1b459f7e9..9d950b0b64c9 100644
> --- a/arch/x86/include/uapi/asm/kvm.h
> +++ b/arch/x86/include/uapi/asm/kvm.h
> @@ -857,5 +857,7 @@ struct kvm_hyperv_eventfd {
>
> #define KVM_X86_DEFAULT_VM 0
> #define KVM_X86_SW_PROTECTED_VM 1
> +#define KVM_X86_SEV_VM 2
> +#define KVM_X86_SEV_ES_VM 3
>
> #endif /* _ASM_X86_KVM_H */
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index 2549a539a686..1248ccf433e8 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -247,6 +247,9 @@ static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
> if (kvm->created_vcpus)
> return -EINVAL;
>
> + if (kvm->arch.vm_type != KVM_X86_DEFAULT_VM)
^
IIUC it should be KVM_X86_SEV_VM?
> + return -EINVAL;
> +
> if (unlikely(sev->active))
> return -EINVAL;
>
> @@ -264,6 +267,7 @@ static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
>
> INIT_LIST_HEAD(&sev->regions_list);
> INIT_LIST_HEAD(&sev->mirror_vms);
> + sev->need_init = false;
>
> kvm_set_apicv_inhibit(kvm, APICV_INHIBIT_REASON_SEV);
>
> @@ -1799,7 +1803,8 @@ int sev_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd)
> if (ret)
> goto out_fput;
>
> - if (sev_guest(kvm) || !sev_guest(source_kvm)) {
> + if (kvm->arch.vm_type != source_kvm->arch.vm_type ||
> + sev_guest(kvm) || !sev_guest(source_kvm)) {
> ret = -EINVAL;
> goto out_unlock;
> }
> @@ -2118,6 +2123,7 @@ int sev_vm_copy_enc_context_from(struct kvm *kvm, unsigned int source_fd)
> mirror_sev->asid = source_sev->asid;
> mirror_sev->fd = source_sev->fd;
> mirror_sev->es_active = source_sev->es_active;
> + mirror_sev->need_init = false;
> mirror_sev->handle = source_sev->handle;
> INIT_LIST_HEAD(&mirror_sev->regions_list);
> INIT_LIST_HEAD(&mirror_sev->mirror_vms);
> @@ -2183,10 +2189,14 @@ void sev_vm_destroy(struct kvm *kvm)
>
> void __init sev_set_cpu_caps(void)
> {
> - if (sev_enabled)
> + if (sev_enabled) {
> kvm_cpu_cap_set(X86_FEATURE_SEV);
> - if (sev_es_enabled)
> + kvm_caps.supported_vm_types |= BIT(KVM_X86_SEV_VM);
> + }
> + if (sev_es_enabled) {
> kvm_cpu_cap_set(X86_FEATURE_SEV_ES);
> + kvm_caps.supported_vm_types |= BIT(KVM_X86_SEV_ES_VM);
> + }
> }
>
> void __init sev_hardware_setup(void)
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 1cf9e5f1fd02..f4a750426b24 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -4089,6 +4089,9 @@ static void svm_cancel_injection(struct kvm_vcpu *vcpu)
>
> static int svm_vcpu_pre_run(struct kvm_vcpu *vcpu)
> {
> + if (to_kvm_sev_info(vcpu->kvm)->need_init)
> + return -EINVAL;
> +
> return 1;
> }
>
> @@ -4890,6 +4893,10 @@ static void svm_vm_destroy(struct kvm *kvm)
>
> static int svm_vm_init(struct kvm *kvm)
> {
> + if (kvm->arch.vm_type != KVM_X86_DEFAULT_VM &&
> + kvm->arch.vm_type != KVM_X86_SW_PROTECTED_VM)
> + to_kvm_sev_info(kvm)->need_init = true;
> +
> if (!pause_filter_count || !pause_filter_thresh)
> kvm->arch.pause_in_guest = true;
>
> diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
> index ebf2160bf0c6..7a921acc534f 100644
> --- a/arch/x86/kvm/svm/svm.h
> +++ b/arch/x86/kvm/svm/svm.h
> @@ -79,6 +79,7 @@ enum {
> struct kvm_sev_info {
> bool active; /* SEV enabled guest */
> bool es_active; /* SEV-ES enabled guest */
> + bool need_init; /* waiting for SEV_INIT2 */
> unsigned int asid; /* ASID used for this guest */
> unsigned int handle; /* SEV firmware handle */
> int fd; /* SEV device fd */
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 3b87e65904ae..b9dfe3179332 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -12576,6 +12576,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
> kvm->arch.vm_type = type;
> kvm->arch.has_private_mem =
> (type == KVM_X86_SW_PROTECTED_VM);
> + kvm->arch.has_protected_state =
> + (type == KVM_X86_SEV_ES_VM);
>
> ret = kvm_page_track_init(kvm);
> if (ret)
> --
> 2.39.1
>
>
>
On Mon, Feb 26, 2024 at 02:03:43PM -0500, Paolo Bonzini wrote:
> The idea that no parameter would ever be necessary when enabling SEV or
> SEV-ES for a VM was decidedly optimistic. In fact, in some sense it's
> already a parameter whether SEV or SEV-ES is desired. Another possible
> source of variability is the desired set of VMSA features, as that affects
> the measurement of the VM's initial state and cannot be changed
> arbitrarily by the hypervisor.
>
> Create a new sub-operation for KVM_MEMORY_ENCRYPT_OP that can take a struct,
> and put the new op to work by including the VMSA features as a field of the
> struct. The existing KVM_SEV_INIT and KVM_SEV_ES_INIT use the full set of
> supported VMSA features for backwards compatibility.
>
> The struct also includes the usual bells and whistles for future
> extensibility: a flags field that must be zero for now, and some padding
> at the end.
>
> Signed-off-by: Paolo Bonzini <[email protected]>
> ---
> .../virt/kvm/x86/amd-memory-encryption.rst | 40 +++++++++++++--
> arch/x86/include/uapi/asm/kvm.h | 9 ++++
> arch/x86/kvm/svm/sev.c | 50 +++++++++++++++++--
> 3 files changed, 92 insertions(+), 7 deletions(-)
>
> diff --git a/Documentation/virt/kvm/x86/amd-memory-encryption.rst b/Documentation/virt/kvm/x86/amd-memory-encryption.rst
> index 5ed11bc16b96..b951d82af26c 100644
> --- a/Documentation/virt/kvm/x86/amd-memory-encryption.rst
> +++ b/Documentation/virt/kvm/x86/amd-memory-encryption.rst
> @@ -75,15 +75,49 @@ are defined in ``<linux/psp-dev.h>``.
> KVM implements the following commands to support common lifecycle events of SEV
> guests, such as launching, running, snapshotting, migrating and decommissioning.
>
> -1. KVM_SEV_INIT
> ----------------
> +1. KVM_SEV_INIT2
> +----------------
>
> -The KVM_SEV_INIT command is used by the hypervisor to initialize the SEV platform
> +The KVM_SEV_INIT2 command is used by the hypervisor to initialize the SEV platform
> context. In a typical workflow, this command should be the first command issued.
>
> +For this command to be accepted, either KVM_X86_SEV_VM or KVM_X86_SEV_ES_VM
> +must have been passed to the KVM_CREATE_VM ioctl. A virtual machine created
> +with those machine types in turn cannot be run until KVM_SEV_INIT2 is invoked.
> +
> +Parameters: struct kvm_sev_init (in)
>
> Returns: 0 on success, -negative on error
>
> +::
> +
> + struct struct kvm_sev_init {
Remove the duplicated "struct"
> + __u64 vmsa_features; /* initial value of features field in VMSA */
> + __u32 flags; /* must be 0 */
> + __u32 pad[9];
> + };
> +
> +It is an error if the hypervisor does not support any of the bits that
> +are set in ``flags`` or ``vmsa_features``. ``vmsa_features`` must be
> +0 for SEV virtual machines, as they do not have a VMSA.
> +
> +This command replaces the deprecated KVM_SEV_INIT and KVM_SEV_ES_INIT commands.
> +The commands did not have any parameters (the ```data``` field was unused) and
> +only work for the KVM_X86_DEFAULT_VM machine type (0).
> +
> +They behave as if:
> +
> +* the VM type is KVM_X86_SEV_VM for KVM_SEV_INIT, or KVM_X86_SEV_ES_VM for
> + KVM_SEV_ES_INIT
> +
> +* the ``flags`` and ``vmsa_features`` fields of ``struct kvm_sev_init`` are
> + set to zero
> +
> +If the ``KVM_X86_SEV_VMSA_FEATURES`` attribute does not exist, the hypervisor only
> +supports KVM_SEV_INIT and KVM_SEV_ES_INIT. In that case, note that KVM_SEV_ES_INIT
> +might set the debug swap VMSA feature (bit 5) depending on the value of the
> +``debug_swap`` parameter of ``kvm-amd.ko``.
> +
> 2. KVM_SEV_LAUNCH_START
> -----------------------
>
> diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
> index 9d950b0b64c9..51b13080ed4b 100644
> --- a/arch/x86/include/uapi/asm/kvm.h
> +++ b/arch/x86/include/uapi/asm/kvm.h
> @@ -690,6 +690,9 @@ enum sev_cmd_id {
> /* Guest Migration Extension */
> KVM_SEV_SEND_CANCEL,
>
> + /* Second time is the charm; improved versions of the above ioctls. */
> + KVM_SEV_INIT2,
> +
> KVM_SEV_NR_MAX,
> };
>
> @@ -701,6 +704,12 @@ struct kvm_sev_cmd {
> __u32 sev_fd;
> };
>
> +struct kvm_sev_init {
> + __u64 vmsa_features;
> + __u32 flags;
> + __u32 pad[9];
> +};
> +
> struct kvm_sev_launch_start {
> __u32 handle;
> __u32 policy;
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index 1248ccf433e8..909e67a9044b 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -239,23 +239,30 @@ static void sev_unbind_asid(struct kvm *kvm, unsigned int handle)
> sev_decommission(handle);
> }
>
> -static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
> +static int __sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp,
> + struct kvm_sev_init *data,
> + unsigned long vm_type)
> {
> struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> + bool es_active = kvm->arch.has_protected_state;
> + u64 valid_vmsa_features = es_active ? sev_supported_vmsa_features : 0;
> int ret;
>
> if (kvm->created_vcpus)
> return -EINVAL;
>
> - if (kvm->arch.vm_type != KVM_X86_DEFAULT_VM)
> + if (data->flags)
> + return -EINVAL;
> +
> + if (data->vmsa_features & ~valid_vmsa_features)
> return -EINVAL;
>
> if (unlikely(sev->active))
> return -EINVAL;
>
> sev->active = true;
> - sev->es_active = argp->id == KVM_SEV_ES_INIT;
> - sev->vmsa_features = 0;
> + sev->es_active = es_active;
> + sev->vmsa_features = data->vmsa_features;
>
> ret = sev_asid_new(sev);
> if (ret)
> @@ -283,6 +290,38 @@ static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
> return ret;
> }
>
> +static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
> +{
> + struct kvm_sev_init data = {
> + .vmsa_features = 0,
> + };
> + unsigned long vm_type;
> +
> + if (kvm->arch.vm_type != KVM_X86_DEFAULT_VM)
^
Same here, KVM_X86_SEV_VM?
Thanks,
Yilun
> + return -EINVAL;
> +
> + vm_type = (argp->id == KVM_SEV_INIT ? KVM_X86_SEV_VM : KVM_X86_SEV_ES_VM);
> + return __sev_guest_init(kvm, argp, &data, vm_type);
> +}
> +
> +static int sev_guest_init2(struct kvm *kvm, struct kvm_sev_cmd *argp)
> +{
> + struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> + struct kvm_sev_init data;
> +
> + if (!sev->need_init)
> + return -EINVAL;
> +
> + if (kvm->arch.vm_type != KVM_X86_SEV_VM &&
> + kvm->arch.vm_type != KVM_X86_SEV_ES_VM)
> + return -EINVAL;
> +
> + if (copy_from_user(&data, u64_to_user_ptr(argp->data), sizeof(data)))
> + return -EFAULT;
> +
> + return __sev_guest_init(kvm, argp, &data, kvm->arch.vm_type);
> +}
> +
> static int sev_bind_asid(struct kvm *kvm, unsigned int handle, int *error)
> {
> unsigned int asid = sev_get_asid(kvm);
> @@ -1898,6 +1937,9 @@ int sev_mem_enc_ioctl(struct kvm *kvm, void __user *argp)
> case KVM_SEV_INIT:
> r = sev_guest_init(kvm, &sev_cmd);
> break;
> + case KVM_SEV_INIT2:
> + r = sev_guest_init2(kvm, &sev_cmd);
> + break;
> case KVM_SEV_LAUNCH_START:
> r = sev_launch_start(kvm, &sev_cmd);
> break;
> --
> 2.39.1
>
>
>
On Mon, Mar 04, 2024, Xu Yilun wrote:
> On Mon, Feb 26, 2024 at 02:03:42PM -0500, Paolo Bonzini wrote:
> > Signed-off-by: Paolo Bonzini <[email protected]>
> > ---
> > diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
> > index d0c1b459f7e9..9d950b0b64c9 100644
> > --- a/arch/x86/include/uapi/asm/kvm.h
> > +++ b/arch/x86/include/uapi/asm/kvm.h
> > @@ -857,5 +857,7 @@ struct kvm_hyperv_eventfd {
> >
> > #define KVM_X86_DEFAULT_VM 0
> > #define KVM_X86_SW_PROTECTED_VM 1
> > +#define KVM_X86_SEV_VM 2
> > +#define KVM_X86_SEV_ES_VM 3
> >
> > #endif /* _ASM_X86_KVM_H */
> > diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> > index 2549a539a686..1248ccf433e8 100644
> > --- a/arch/x86/kvm/svm/sev.c
> > +++ b/arch/x86/kvm/svm/sev.c
> > @@ -247,6 +247,9 @@ static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
> > if (kvm->created_vcpus)
> > return -EINVAL;
> >
> > + if (kvm->arch.vm_type != KVM_X86_DEFAULT_VM)
>
> IIUC it should be KVM_X86_SEV_VM?
No, this is for the KVM_SEV_INIT version 1, which is restricted to "default" VMs.
The idea is that KVM_X86_SEV_VM and KVM_X86_SEV_ES_VM guests must be initialized
via KVM_SEV_INIT2.
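For illustration, a rough userspace sketch of that flow (just a sketch, not
code from this series; error handling omitted, /dev/sev opened only to pass
the usual SEV firmware fd):

#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int sev_es_vm_create(void)
{
	int kvm = open("/dev/kvm", O_RDWR);
	int sev_fd = open("/dev/sev", O_RDWR);

	/* The VM type replaces the SEV vs. SEV-ES choice that the legacy
	 * KVM_SEV_INIT/KVM_SEV_ES_INIT sub-commands used to encode. */
	int vm = ioctl(kvm, KVM_CREATE_VM, KVM_X86_SEV_ES_VM);

	struct kvm_sev_init init = {
		.vmsa_features = 0,	/* or bits from KVM_X86_SEV_VMSA_FEATURES */
		.flags = 0,
	};
	struct kvm_sev_cmd cmd = {
		.id     = KVM_SEV_INIT2,
		.data   = (unsigned long)&init,
		.sev_fd = sev_fd,
	};

	/* Until this succeeds, need_init stays set and vCPUs cannot run. */
	ioctl(vm, KVM_MEMORY_ENCRYPT_OP, &cmd);
	return vm;
}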
On 3/4/24 16:32, Xu Yilun wrote:
>> @@ -247,6 +247,9 @@ static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
>> if (kvm->created_vcpus)
>> return -EINVAL;
>>
>> + if (kvm->arch.vm_type != KVM_X86_DEFAULT_VM)
> ^
>
> IIUC it should be KVM_X86_SEV_VM?
No, this is the legacy ioctl that only works with default-type VMs.
Paolo
On Mon, Feb 26, 2024 at 02:03:32PM -0500, Paolo Bonzini wrote:
> From: Sean Christopherson <[email protected]>
>
> Leave SEV and SEV_ES '0' in kvm_cpu_caps by default, and instead set them
> in sev_set_cpu_caps() if SEV and SEV-ES support are fully enabled. Aside
> from the fact that sev_set_cpu_caps() is wildly misleading when it *clears*
> capabilities, this will allow compiling out sev.c without falsely
> advertising SEV/SEV-ES support in KVM_GET_SUPPORTED_CPUID.
>
> Signed-off-by: Sean Christopherson <[email protected]>
> Signed-off-by: Paolo Bonzini <[email protected]>
Reviewed-by: Michael Roth <[email protected]>
> ---
> arch/x86/kvm/cpuid.c | 2 +-
> arch/x86/kvm/svm/sev.c | 8 ++++----
> 2 files changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index adba49afb5fe..bde4df13a7e8 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -761,7 +761,7 @@ void kvm_set_cpu_caps(void)
> kvm_cpu_cap_mask(CPUID_8000_000A_EDX, 0);
>
> kvm_cpu_cap_mask(CPUID_8000_001F_EAX,
> - 0 /* SME */ | F(SEV) | 0 /* VM_PAGE_FLUSH */ | F(SEV_ES) |
> + 0 /* SME */ | 0 /* SEV */ | 0 /* VM_PAGE_FLUSH */ | 0 /* SEV_ES */ |
> F(SME_COHERENT));
>
> kvm_cpu_cap_mask(CPUID_8000_0021_EAX,
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index f06f9e51ad9d..aec3453fd73c 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -2178,10 +2178,10 @@ void sev_vm_destroy(struct kvm *kvm)
>
> void __init sev_set_cpu_caps(void)
> {
> - if (!sev_enabled)
> - kvm_cpu_cap_clear(X86_FEATURE_SEV);
> - if (!sev_es_enabled)
> - kvm_cpu_cap_clear(X86_FEATURE_SEV_ES);
> + if (sev_enabled)
> + kvm_cpu_cap_set(X86_FEATURE_SEV);
> + if (sev_es_enabled)
> + kvm_cpu_cap_set(X86_FEATURE_SEV_ES);
> }
>
> void __init sev_hardware_setup(void)
> --
> 2.39.1
>
>
On Mon, Feb 26, 2024 at 02:03:39PM -0500, Paolo Bonzini wrote:
> Some VM types have characteristics in common; in fact, the only use
> of VM types right now is kvm_arch_has_private_mem and it assumes that
> _all_ nonzero VM types have private memory.
>
> We will soon introduce a VM type for SEV and SEV-ES VMs, and at that
> point we will have two special characteristics of confidential VMs
> that depend on the VM type: not just if memory is private, but
> also whether guest state is protected. For the latter we have
> kvm->arch.guest_state_protected, which is only set on a fully initialized
> VM.
>
> For VM types with protected guest state, we can actually fix a problem in
> the SEV-ES implementation, where ioctls to set registers do not cause an
> error even if the VM has been initialized and the guest state encrypted.
> Make sure that, when using the new VM types, this becomes an error.
>
> Signed-off-by: Paolo Bonzini <[email protected]>
> Message-Id: <[email protected]>
> Signed-off-by: Paolo Bonzini <[email protected]>
> ---
> arch/x86/include/asm/kvm_host.h | 7 ++-
> arch/x86/kvm/x86.c | 95 +++++++++++++++++++++++++++------
> 2 files changed, 84 insertions(+), 18 deletions(-)
>
> @@ -5552,9 +5561,13 @@ static int kvm_vcpu_ioctl_x86_set_debugregs(struct kvm_vcpu *vcpu,
> }
>
>
> -static void kvm_vcpu_ioctl_x86_get_xsave2(struct kvm_vcpu *vcpu,
> - u8 *state, unsigned int size)
> +static int kvm_vcpu_ioctl_x86_get_xsave2(struct kvm_vcpu *vcpu,
> + u8 *state, unsigned int size)
> {
> + if (vcpu->kvm->arch.has_protected_state &&
> + fpstate_is_confidential(&vcpu->arch.guest_fpu))
> + return -EINVAL;
> +
> /*
> * Only copy state for features that are enabled for the guest. The
> * state itself isn't problematic, but setting bits in the header for
> @@ -5571,22 +5584,27 @@ static void kvm_vcpu_ioctl_x86_get_xsave2(struct kvm_vcpu *vcpu,
> XFEATURE_MASK_FPSSE;
>
> if (fpstate_is_confidential(&vcpu->arch.guest_fpu))
> - return;
> + return 0;
>
> fpu_copy_guest_fpstate_to_uabi(&vcpu->arch.guest_fpu, state, size,
> supported_xcr0, vcpu->arch.pkru);
> + return 0;
> }
>
> -static void kvm_vcpu_ioctl_x86_get_xsave(struct kvm_vcpu *vcpu,
> - struct kvm_xsave *guest_xsave)
> +static int kvm_vcpu_ioctl_x86_get_xsave(struct kvm_vcpu *vcpu,
> + struct kvm_xsave *guest_xsave)
> {
> - kvm_vcpu_ioctl_x86_get_xsave2(vcpu, (void *)guest_xsave->region,
> - sizeof(guest_xsave->region));
> + return kvm_vcpu_ioctl_x86_get_xsave2(vcpu, (void *)guest_xsave->region,
> + sizeof(guest_xsave->region));
> }
>
> static int kvm_vcpu_ioctl_x86_set_xsave(struct kvm_vcpu *vcpu,
> struct kvm_xsave *guest_xsave)
> {
> + if (vcpu->kvm->arch.has_protected_state &&
> + fpstate_is_confidential(&vcpu->arch.guest_fpu))
> + return -EINVAL;
> +
> if (fpstate_is_confidential(&vcpu->arch.guest_fpu))
> return 0;
I've been trying to get SNP running on top of these patches and hit an
issue with these due to fpstate_set_confidential() being done during
svm_vcpu_create(), so when QEMU tries to sync FPU state prior to calling
SNP_LAUNCH_FINISH it errors out. I think the same would happen with
SEV-ES as well.
Maybe fpstate_set_confidential() should be relocated to SEV_LAUNCH_FINISH
site as part of these patches?
Also, do you happen to have a pointer to the WIP QEMU patches? Happy to
help with posting/testing those since we'll need similar for
SEV_INIT2-based SNP patches.
Thanks,
Mike
On Wed, Mar 13, 2024 at 09:49:52PM -0500, Michael Roth wrote:
> On Mon, Feb 26, 2024 at 02:03:39PM -0500, Paolo Bonzini wrote:
> > Some VM types have characteristics in common; in fact, the only use
> > of VM types right now is kvm_arch_has_private_mem and it assumes that
> > _all_ nonzero VM types have private memory.
> >
> > We will soon introduce a VM type for SEV and SEV-ES VMs, and at that
> > point we will have two special characteristics of confidential VMs
> > that depend on the VM type: not just if memory is private, but
> > also whether guest state is protected. For the latter we have
> > kvm->arch.guest_state_protected, which is only set on a fully initialized
> > VM.
> >
> > For VM types with protected guest state, we can actually fix a problem in
> > the SEV-ES implementation, where ioctls to set registers do not cause an
> > error even if the VM has been initialized and the guest state encrypted.
> > Make sure that, when using the new VM types, this becomes an error.
> >
> > Signed-off-by: Paolo Bonzini <[email protected]>
> > Message-Id: <[email protected]>
> > Signed-off-by: Paolo Bonzini <[email protected]>
> > ---
> > arch/x86/include/asm/kvm_host.h | 7 ++-
> > arch/x86/kvm/x86.c | 95 +++++++++++++++++++++++++++------
> > 2 files changed, 84 insertions(+), 18 deletions(-)
> >
> > @@ -5552,9 +5561,13 @@ static int kvm_vcpu_ioctl_x86_set_debugregs(struct kvm_vcpu *vcpu,
> > }
> >
> >
> > -static void kvm_vcpu_ioctl_x86_get_xsave2(struct kvm_vcpu *vcpu,
> > - u8 *state, unsigned int size)
> > +static int kvm_vcpu_ioctl_x86_get_xsave2(struct kvm_vcpu *vcpu,
> > + u8 *state, unsigned int size)
> > {
> > + if (vcpu->kvm->arch.has_protected_state &&
> > + fpstate_is_confidential(&vcpu->arch.guest_fpu))
> > + return -EINVAL;
> > +
> > /*
> > * Only copy state for features that are enabled for the guest. The
> > * state itself isn't problematic, but setting bits in the header for
> > @@ -5571,22 +5584,27 @@ static void kvm_vcpu_ioctl_x86_get_xsave2(struct kvm_vcpu *vcpu,
> > XFEATURE_MASK_FPSSE;
> >
> > if (fpstate_is_confidential(&vcpu->arch.guest_fpu))
> > - return;
> > + return 0;
> >
> > fpu_copy_guest_fpstate_to_uabi(&vcpu->arch.guest_fpu, state, size,
> > supported_xcr0, vcpu->arch.pkru);
> > + return 0;
> > }
> >
> > -static void kvm_vcpu_ioctl_x86_get_xsave(struct kvm_vcpu *vcpu,
> > - struct kvm_xsave *guest_xsave)
> > +static int kvm_vcpu_ioctl_x86_get_xsave(struct kvm_vcpu *vcpu,
> > + struct kvm_xsave *guest_xsave)
> > {
> > - kvm_vcpu_ioctl_x86_get_xsave2(vcpu, (void *)guest_xsave->region,
> > - sizeof(guest_xsave->region));
> > + return kvm_vcpu_ioctl_x86_get_xsave2(vcpu, (void *)guest_xsave->region,
> > + sizeof(guest_xsave->region));
> > }
> >
> > static int kvm_vcpu_ioctl_x86_set_xsave(struct kvm_vcpu *vcpu,
> > struct kvm_xsave *guest_xsave)
> > {
> > + if (vcpu->kvm->arch.has_protected_state &&
> > + fpstate_is_confidential(&vcpu->arch.guest_fpu))
> > + return -EINVAL;
> > +
> > if (fpstate_is_confidential(&vcpu->arch.guest_fpu))
> > return 0;
>
> I've been trying to get SNP running on top of these patches and hit an
> issue with these due to fpstate_set_confidential() being done during
> svm_vcpu_create(), so when QEMU tries to sync FPU state prior to calling
> SNP_LAUNCH_FINISH it errors out. I think the same would happen with
> SEV-ES as well.
>
> Maybe fpstate_set_confidential() should be relocated to SEV_LAUNCH_FINISH
> site as part of these patches?
Talked to Tom a bit about this and that might not make much sense unless
we actually want to add some code to sync that FPU state into the VMSA
prior to encryption/measurement. Otherwise, it might as well be set to
confidential as soon as vCPU is created.
And if userspace wants to write FPU register state that will not actually
become part of the guest state, it probably does make sense to return an
error for new VM types and leave it to userspace to deal with
special-casing that vs. the other ioctls like SET_REGS/SREGS/etc.
-Mike
>
> Also, do you happen to have a pointer to the WIP QEMU patches? Happy to
> help with posting/testing those since we'll need similar for
> SEV_INIT2-based SNP patches.
>
> Thanks,
>
> Mike
>
On Thu, Mar 14, 2024, Michael Roth wrote:
> On Wed, Mar 13, 2024 at 09:49:52PM -0500, Michael Roth wrote:
> > I've been trying to get SNP running on top of these patches and hit an
> > issue with these due to fpstate_set_confidential() being done during
> > svm_vcpu_create(), so when QEMU tries to sync FPU state prior to calling
> > SNP_LAUNCH_FINISH it errors out. I think the same would happen with
> > SEV-ES as well.
> >
> > Maybe fpstate_set_confidential() should be relocated to SEV_LAUNCH_FINISH
> > site as part of these patches?
>
> Talked to Tom a bit about this and that might not make much sense unless
> we actually want to add some code to sync that FPU state into the VMSA
> prior to encryption/measurement. Otherwise, it might as well be set to
> confidential as soon as vCPU is created.
>
> And if userspace wants to write FPU register state that will not actually
> become part of the guest state, it probably does make sense to return an
> error for new VM types and leave it to userspace to deal with
> special-casing that vs. the other ioctls like SET_REGS/SREGS/etc.
Won't regs and sregs suffer the same fate? That might not matter _today_ for
"real" VMs, but it would be a blocking issue for selftests, which need to stuff
state to jumpstart vCPUs.
And maybe someday real VMs will catch up to the times and stop starting at the
RESET vector...
On Thu, Mar 14, 2024 at 03:56:27PM -0700, Sean Christopherson wrote:
> On Thu, Mar 14, 2024, Michael Roth wrote:
> > On Wed, Mar 13, 2024 at 09:49:52PM -0500, Michael Roth wrote:
> > > I've been trying to get SNP running on top of these patches and hit an
> > > issue with these due to fpstate_set_confidential() being done during
> > > svm_vcpu_create(), so when QEMU tries to sync FPU state prior to calling
> > > SNP_LAUNCH_FINISH it errors out. I think the same would happen with
> > > SEV-ES as well.
> > >
> > > Maybe fpstate_set_confidential() should be relocated to SEV_LAUNCH_FINISH
> > > site as part of these patches?
> >
> > Talked to Tom a bit about this and that might not make much sense unless
> > we actually want to add some code to sync that FPU state into the VMSA
> > prior to encryption/measurement. Otherwise, it might as well be set to
> > confidential as soon as vCPU is created.
> >
> > And if userspace wants to write FPU register state that will not actually
> > become part of the guest state, it probably does make sense to return an
> > error for new VM types and leave it to userspace to deal with
> > special-casing that vs. the other ioctls like SET_REGS/SREGS/etc.
>
> Won't regs and sregs suffer the same fate? That might not matter _today_ for
> "real" VMs, but it would be a blocking issue for selftests, which need to stuff
> state to jumpstart vCPUs.
SET_REGS/SREGS and the others only throw an error when
vcpu->arch.guest_state_protected gets set, which doesn't happen until
sev_launch_update_vmsa(). So in those cases userspace is still able to sync
additional/non-reset state prior to initial launch. It's just XSAVE/XSAVE2 that
are a bit more restrictive because they check fpstate_is_confidential()
instead, which gets set during vCPU creation.
Somewhat related, but just noticed that KVM_SET_FPU also relies on
fpstate_is_confidential() but still silently returns 0 with this series.
Seems like it should be handled the same way as XSAVE/XSAVE2, whatever we
end up doing.
-Mike
>
> And maybe someday real VMs will catch up to the times and stop starting at the
> RESET vector...
>
On Thu, Mar 14, 2024, Michael Roth wrote:
> On Thu, Mar 14, 2024 at 03:56:27PM -0700, Sean Christopherson wrote:
> > On Thu, Mar 14, 2024, Michael Roth wrote:
> > > On Wed, Mar 13, 2024 at 09:49:52PM -0500, Michael Roth wrote:
> > > > I've been trying to get SNP running on top of these patches and hit an
> > > > issue with these due to fpstate_set_confidential() being done during
> > > > svm_vcpu_create(), so when QEMU tries to sync FPU state prior to calling
> > > > SNP_LAUNCH_FINISH it errors out. I think the same would happen with
> > > > SEV-ES as well.
> > > > Maybe fpstate_set_confidential() should be relocated to SEV_LAUNCH_FINISH
> > > > site as part of these patches?
> > >
> > > Talked to Tom a bit about this and that might not make much sense unless
> > > we actually want to add some code to sync that FPU state into the VMSA
Is manually copying required for register state? If so, manually copying everything
seems like the way to go, otherwise we'll end up with a confusing ABI where a
rather arbitrary set of bits are (not) configurable by userspace.
> > > prior to encryption/measurement. Otherwise, it might as well be set to
> > > confidential as soon as vCPU is created.
> > >
> > > And if userspace wants to write FPU register state that will not actually
> > > become part of the guest state, it probably does make sense to return an
> > > error for new VM types and leave it to userspace to deal with
> > > special-casing that vs. the other ioctls like SET_REGS/SREGS/etc.
> >
> > Won't regs and sregs suffer the same fate? That might not matter _today_ for
> > "real" VMs, but it would be a blocking issue for selftests, which need to stuff
> > state to jumpstart vCPUs.
>
> SET_REGS/SREGS and the others only throw an error when
> vcpu->arch.guest_state_protected gets set, which doesn't happen until
Ah, I misread the diff and didn't see the existing check on fpstate_is_confidential().
Side topic, I could have sworn KVM didn't allocate the guest fpstate for SEV-ES,
but git blame says otherwise. Avoiding that allocation would have been an argument
for immediately marking the fpstate confidential.
That said, any reason not to free the state when the fpstate is marked confidential?
> sev_launch_update_vmsa(). So in those cases userspace is still able to sync
> additional/non-reset state prior to initial launch. It's just XSAVE/XSAVE2 that
> are a bit more restrictive because they check fpstate_is_confidential()
> instead, which gets set during vCPU creation.
>
> Somewhat related, but just noticed that KVM_SET_FPU also relies on
> fpstate_is_confidential() but still silently returns 0 with this series.
> Seems like it should be handled the same way as XSAVE/XSAVE2, whatever we
> end up doing.
+1
Also, I think a less confusing and more robust way to deal with the new VM types
would be to condition only the return code on whether or not the VM has protected
state, e.g.
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9d670a45aea4..0e245738d4c5 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5606,10 +5606,6 @@ static int kvm_vcpu_ioctl_x86_set_debugregs(struct kvm_vcpu *vcpu,
static int kvm_vcpu_ioctl_x86_get_xsave2(struct kvm_vcpu *vcpu,
u8 *state, unsigned int size)
{
- if (vcpu->kvm->arch.has_protected_state &&
- fpstate_is_confidential(&vcpu->arch.guest_fpu))
- return -EINVAL;
-
/*
* Only copy state for features that are enabled for the guest. The
* state itself isn't problematic, but setting bits in the header for
@@ -5626,7 +5622,7 @@ static int kvm_vcpu_ioctl_x86_get_xsave2(struct kvm_vcpu *vcpu,
XFEATURE_MASK_FPSSE;
if (fpstate_is_confidential(&vcpu->arch.guest_fpu))
- return 0;
+ return vcpu->kvm->arch.has_protected_state ? -EINVAL : 0;
fpu_copy_guest_fpstate_to_uabi(&vcpu->arch.guest_fpu, state, size,
supported_xcr0, vcpu->arch.pkru);
On Fri, Mar 15, 2024 at 3:57 PM Sean Christopherson <[email protected]> wrote:
>
> On Thu, Mar 14, 2024, Michael Roth wrote:
> > On Thu, Mar 14, 2024 at 03:56:27PM -0700, Sean Christopherson wrote:
> > > On Thu, Mar 14, 2024, Michael Roth wrote:
> > > > On Wed, Mar 13, 2024 at 09:49:52PM -0500, Michael Roth wrote:
> > > > > I've been trying to get SNP running on top of these patches and hit an
> > > > > issue with these due to fpstate_set_confidential() being done during
> > > > > svm_vcpu_create(), so when QEMU tries to sync FPU state prior to calling
> > > > > SNP_LAUNCH_FINISH it errors out. I think the same would happen with
> > > > > SEV-ES as well.
> > > > > Maybe fpstate_set_confidential() should be relocated to SEV_LAUNCH_FINISH
> > > > > site as part of these patches?
> > > >
> > > > Talked to Tom a bit about this and that might not make much sense unless
> > > > we actually want to add some code to sync that FPU state into the VMSA
>
> Is manually copying required for register state? If so, manually copying everything
> seems like the way to go, otherwise we'll end up with a confusing ABI where a
> rather arbitrary set of bits are (not) configurable by userspace.
Yes, see sev_es_sync_vmsa. I'll add FPU as well.
> > SET_REGS/SREGS and the others only throw an error when
> > vcpu->arch.guest_state_protected gets set, which doesn't happen until
>
> Ah, I misread the diff and didn't see the existing check on fpstate_is_confidential().
>
> Side topic, I could have sworn KVM didn't allocate the guest fpstate for SEV-ES,
> but git blame says otherwise. Avoiding that allocation would have been an argument
> for immediately marking the fpstate confidential.
>
> That said, any reason not to free the state when the fpstate is marked confidential?
No reason not to do it, except not wanting to add more cases to code
that's already pretty hairy.
> > sev_launch_update_vmsa(). So in those cases userspace is still able to sync
> > additional/non-reset state prior to initial launch. It's just XSAVE/XSAVE2 that
> > are a bit more restrictive because they check fpstate_is_confidential()
> > instead, which gets set during vCPU creation.
> >
> > Somewhat related, but just noticed that KVM_SET_FPU also relies on
> > fpstate_is_confidential() but still silently returns 0 with this series.
> > Seems like it should be handled the same way as XSAVE/XSAVE2, whatever we
> > end up doing.
>
> +1
>
> Also, I think a less confusing and more robust way to deal with the new VM types
> would be to condition only the return code on whether or not the VM has protected
> state
Makes sense (I found KVM_GET/SET_FPU independently and will fix that
as well in the next submission).
Paolo
On Thu, Mar 14, 2024 at 3:50 AM Michael Roth <[email protected]> wrote:
> I've been trying to get SNP running on top of these patches and hit an
> issue with these due to fpstate_set_confidential() being done during
> svm_vcpu_create(), so when QEMU tries to sync FPU state prior to calling
> SNP_LAUNCH_FINISH it errors out. I think the same would happen with
> SEV-ES as well.
>
> Maybe fpstate_set_confidential() should be relocated to SEV_LAUNCH_FINISH
> site as part of these patches?
To SEV_LAUNCH_UPDATE_VMSA, I think, since that's where the last
opportunity lies to sync the contents of struct kvm_vcpu.
> Also, do you happen to have a pointer to the WIP QEMU patches? Happy to
> help with posting/testing those since we'll need similar for
> SEV_INIT2-based SNP patches.
Pushed to https://gitlab.com/bonzini/qemu, branch sevinit2. There is a
hackish commit "runstate: skip initial CPU reset if reset is not
actually possible" that needs some auditing, because I'd like to
replace
- cpu_synchronize_all_post_reset();
+ if (cpus_are_resettable()) {
+ cpu_synchronize_all_post_reset();
+ } else {
+ /* Assume that cpu_synchronize_all_post_init() was enough. */
+ assert(runstate_check(RUN_STATE_PRELAUNCH));
+ }
with
- cpu_synchronize_all_post_reset();
+ /*
+ * cpu_synchronize_all_post_init() has already happened if the VM hasn't
+ * launched.
+ */
+ if (!runstate_check(RUN_STATE_PRELAUNCH)) {
+ cpu_synchronize_all_post_reset();
+ }
Paolo
On Mon, Feb 26, 2024 at 02:03:33PM -0500,
Paolo Bonzini <[email protected]> wrote:
> diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
> index 8ef95139cd24..52bc955ed06f 100644
> --- a/arch/x86/kvm/svm/svm.h
> +++ b/arch/x86/kvm/svm/svm.h
> @@ -664,13 +664,10 @@ void avic_refresh_virtual_apic_mode(struct kvm_vcpu *vcpu);
>
> /* sev.c */
>
> +#ifdef CONFIG_KVM_AMD_SEV
> #define GHCB_VERSION_MAX 1ULL
> #define GHCB_VERSION_MIN 1ULL
>
> -
> -extern unsigned int max_sev_asid;
> -
> -void sev_vm_destroy(struct kvm *kvm);
> int sev_mem_enc_ioctl(struct kvm *kvm, void __user *argp);
> int sev_mem_enc_register_region(struct kvm *kvm,
> struct kvm_enc_region *range);
> @@ -681,19 +678,30 @@ int sev_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd);
> void sev_guest_memory_reclaimed(struct kvm *kvm);
>
> void pre_sev_run(struct vcpu_svm *svm, int cpu);
> -void __init sev_set_cpu_caps(void);
> -void __init sev_hardware_setup(void);
> -void sev_hardware_unsetup(void);
> -int sev_cpu_init(struct svm_cpu_data *sd);
> void sev_init_vmcb(struct vcpu_svm *svm);
> void sev_vcpu_after_set_cpuid(struct vcpu_svm *svm);
> -void sev_free_vcpu(struct kvm_vcpu *vcpu);
> int sev_handle_vmgexit(struct kvm_vcpu *vcpu);
> int sev_es_string_io(struct vcpu_svm *svm, int size, unsigned int port, int in);
> void sev_es_vcpu_reset(struct vcpu_svm *svm);
> void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector);
> void sev_es_prepare_switch_to_guest(struct sev_es_save_area *hostsa);
> void sev_es_unmap_ghcb(struct vcpu_svm *svm);
> +void sev_free_vcpu(struct kvm_vcpu *vcpu);
> +void sev_vm_destroy(struct kvm *kvm);
> +void __init sev_set_cpu_caps(void);
> +void __init sev_hardware_setup(void);
> +void sev_hardware_unsetup(void);
> +int sev_cpu_init(struct svm_cpu_data *sd);
> +extern unsigned int max_sev_asid;
> +#else
> +static inline void sev_free_vcpu(struct kvm_vcpu *vcpu) {}
> +static inline void sev_vm_destroy(struct kvm *kvm) {}
> +static inline void __init sev_set_cpu_caps(void) {}
> +static inline void __init sev_hardware_setup(void) {}
> +static inline void sev_hardware_unsetup(void) {}
> +static inline int sev_cpu_init(struct svm_cpu_data *sd) { return 0; }
> +#define max_sev_asid 0
> +#endif
>
> /* vmenter.S */
>
> --
This causes compile errors with -Werror=implicit-function-declaration when
CONFIG_KVM_AMD=y and CONFIG_KVM_AMD_SEV=n. As discussed in [1], the stubs
aren't needed due to dead code elimination, but the declarations are needed.
[1] https://lore.kernel.org/kvm/[email protected]/
Please feel free to squash the fix.
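
For reference, the pattern that makes the stubs unnecessary but the
declarations mandatory looks roughly like this (simplified, not the
verbatim svm.h/svm.c code):

/* svm.h: prototype visible regardless of CONFIG_KVM_AMD_SEV */
void sev_init_vmcb(struct vcpu_svm *svm);

/* svm.c */
static void init_vmcb(struct kvm_vcpu *vcpu)
{
        struct vcpu_svm *svm = to_svm(vcpu);
        /* ... */

        /*
         * sev_guest() is constant false when CONFIG_KVM_AMD_SEV=n, so the
         * compiler eliminates the call and no definition of sev_init_vmcb()
         * is ever referenced -- but the declaration must still be in scope
         * to avoid -Werror=implicit-function-declaration.
         */
        if (sev_guest(vcpu->kvm))
                sev_init_vmcb(svm);
}
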
CC arch/x86/kvm/svm/svm.o
/linux/arch/x86/kvm/svm/svm.c: In function 'init_vmcb':
/linux/arch/x86/kvm/svm/svm.c:1367:17: error: implicit declaration of function 'sev_init_vmcb'; did you mean 'init_vmcb'? [-Werror=implicit-function-declaration]
1367 | sev_init_vmcb(svm);
| ^~~~~~~~~~~~~
| init_vmcb
Similar warnings for sev_es_vcpu_reset(), sev_es_unmap_ghcb(),
sev_es_prepare_switch_to_guest(), sev_es_string_io(), pre_sev_run(),
sev_vcpu_after_set_cpuid(), and sev_vcpu_deliver_sipi_vector().
Reported-by: Rick Edgecombe <[email protected]>
Signed-off-by: Isaku Yamahata <[email protected]>
---
arch/x86/kvm/svm/svm.h | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 52bc955ed06f..eff9f19e5bcc 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -663,6 +663,14 @@ void avic_refresh_virtual_apic_mode(struct kvm_vcpu *vcpu);
/* sev.c */
+void pre_sev_run(struct vcpu_svm *svm, int cpu);
+void sev_init_vmcb(struct vcpu_svm *svm);
+void sev_vcpu_after_set_cpuid(struct vcpu_svm *svm);
+int sev_es_string_io(struct vcpu_svm *svm, int size, unsigned int port, int in);
+void sev_es_vcpu_reset(struct vcpu_svm *svm);
+void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector);
+void sev_es_prepare_switch_to_guest(struct sev_es_save_area *hostsa);
+void sev_es_unmap_ghcb(struct vcpu_svm *svm);
#ifdef CONFIG_KVM_AMD_SEV
#define GHCB_VERSION_MAX 1ULL
@@ -677,15 +685,7 @@ int sev_vm_copy_enc_context_from(struct kvm *kvm, unsigned int source_fd);
int sev_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd);
void sev_guest_memory_reclaimed(struct kvm *kvm);
-void pre_sev_run(struct vcpu_svm *svm, int cpu);
-void sev_init_vmcb(struct vcpu_svm *svm);
-void sev_vcpu_after_set_cpuid(struct vcpu_svm *svm);
int sev_handle_vmgexit(struct kvm_vcpu *vcpu);
-int sev_es_string_io(struct vcpu_svm *svm, int size, unsigned int port, int in);
-void sev_es_vcpu_reset(struct vcpu_svm *svm);
-void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector);
-void sev_es_prepare_switch_to_guest(struct sev_es_save_area *hostsa);
-void sev_es_unmap_ghcb(struct vcpu_svm *svm);
void sev_free_vcpu(struct kvm_vcpu *vcpu);
void sev_vm_destroy(struct kvm *kvm);
void __init sev_set_cpu_caps(void);
--
2.43.2
--
Isaku Yamahata <[email protected]>