2021-07-22 11:53:55

by Joerg Roedel

Subject: [PATCH v2 0/4] KVM: SVM: Add initial GHCB protocol version 2 support

From: Joerg Roedel <[email protected]>

Hi,

here is a small set of patches which I took from the pending SEV-SNP
patch-sets to enable basic support for GHCB protocol version 2.

When SEV-SNP is not supported, only two new MSR protocol VMGEXIT calls
need to be supported:

- MSR-based AP-reset-hold
- MSR-based HV-feature-request

These calls are implemented here and the supported protocol version is
then lifted to 2.

This is submitted separately because the MSR-based AP-reset-hold call
is required to support kexec/kdump in SEV-ES guests.

Regards,

Joerg

Changes v1->v2:

- Rebased to v5.14-rc2
- Addressed Sean's review comments from the SNP patch-set.

Brijesh Singh (2):
KVM: SVM: Add support for Hypervisor Feature support MSR protocol
KVM: SVM: Increase supported GHCB protocol version

Joerg Roedel (1):
KVM: SVM: Get rid of *ghcb_msr_bits() functions

Tom Lendacky (1):
KVM: SVM: Add support to handle AP reset MSR protocol

arch/x86/include/asm/kvm_host.h | 10 ++-
arch/x86/include/asm/sev-common.h | 14 ++++
arch/x86/include/asm/svm.h | 1 -
arch/x86/include/uapi/asm/svm.h | 1 +
arch/x86/kvm/svm/sev.c | 107 ++++++++++++++++++++----------
arch/x86/kvm/svm/svm.h | 3 +-
arch/x86/kvm/x86.c | 5 +-
7 files changed, 103 insertions(+), 38 deletions(-)


base-commit: 2734d6c1b1a089fb593ef6a23d4b70903526fe0c
--
2.31.1


2021-07-22 11:54:01

by Joerg Roedel

Subject: [PATCH v2 2/4] KVM: SVM: Add support to handle AP reset MSR protocol

From: Tom Lendacky <[email protected]>

Add support for AP Reset Hold being invoked using the GHCB MSR protocol,
available in version 2 of the GHCB specification.

Signed-off-by: Tom Lendacky <[email protected]>
Signed-off-by: Brijesh Singh <[email protected]>
Signed-off-by: Joerg Roedel <[email protected]>
---
arch/x86/include/asm/kvm_host.h | 10 ++++++-
arch/x86/include/asm/sev-common.h | 1 +
arch/x86/include/asm/svm.h | 1 -
arch/x86/kvm/svm/sev.c | 49 ++++++++++++++++++++++++-------
arch/x86/kvm/x86.c | 5 +++-
5 files changed, 53 insertions(+), 13 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 974cbfb1eefe..9afd9d33819a 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -231,6 +231,11 @@ enum x86_intercept_stage;
KVM_GUESTDBG_INJECT_BP | \
KVM_GUESTDBG_INJECT_DB)

+enum ap_reset_hold_type {
+ AP_RESET_HOLD_NONE,
+ AP_RESET_HOLD_NAE_EVENT,
+ AP_RESET_HOLD_MSR_PROTO,
+};

#define PFERR_PRESENT_BIT 0
#define PFERR_WRITE_BIT 1
@@ -903,6 +908,8 @@ struct kvm_vcpu_arch {
#if IS_ENABLED(CONFIG_HYPERV)
hpa_t hv_root_tdp;
#endif
+
+ enum ap_reset_hold_type reset_hold_type;
};

struct kvm_lpage_info {
@@ -1647,7 +1654,8 @@ int kvm_fast_pio(struct kvm_vcpu *vcpu, int size, unsigned short port, int in);
int kvm_emulate_cpuid(struct kvm_vcpu *vcpu);
int kvm_emulate_halt(struct kvm_vcpu *vcpu);
int kvm_vcpu_halt(struct kvm_vcpu *vcpu);
-int kvm_emulate_ap_reset_hold(struct kvm_vcpu *vcpu);
+int kvm_emulate_ap_reset_hold(struct kvm_vcpu *vcpu,
+ enum ap_reset_hold_type type);
int kvm_emulate_wbinvd(struct kvm_vcpu *vcpu);

void kvm_get_segment(struct kvm_vcpu *vcpu, struct kvm_segment *var, int seg);
diff --git a/arch/x86/include/asm/sev-common.h b/arch/x86/include/asm/sev-common.h
index 8540972cad04..04470aab421b 100644
--- a/arch/x86/include/asm/sev-common.h
+++ b/arch/x86/include/asm/sev-common.h
@@ -11,6 +11,7 @@
#define GHCB_MSR_INFO_POS 0
#define GHCB_DATA_LOW 12
#define GHCB_MSR_INFO_MASK (BIT_ULL(GHCB_DATA_LOW) - 1)
+#define GHCB_DATA_MASK GENMASK_ULL(51, 0)

#define GHCB_DATA(v) \
(((unsigned long)(v) & ~GHCB_MSR_INFO_MASK) >> GHCB_DATA_LOW)
diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index e322676039f4..5a28f223a9a8 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -164,7 +164,6 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
u8 reserved_sw[32];
};

-
#define TLB_CONTROL_DO_NOTHING 0
#define TLB_CONTROL_FLUSH_ALL_ASID 1
#define TLB_CONTROL_FLUSH_ASID 3
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index d7b3557b8dbb..a32ef011025f 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2210,6 +2210,9 @@ static int sev_es_validate_vmgexit(struct vcpu_svm *svm)

void sev_es_unmap_ghcb(struct vcpu_svm *svm)
{
+ /* Clear any indication that the vCPU is in a type of AP Reset Hold */
+ svm->vcpu.arch.reset_hold_type = AP_RESET_HOLD_NONE;
+
if (!svm->ghcb)
return;

@@ -2353,6 +2356,11 @@ static void set_ghcb_msr_cpuid_resp(struct vcpu_svm *svm, u64 reg, u64 value)
svm->vmcb->control.ghcb_gpa = msr;
}

+static void set_ghcb_msr_ap_rst_resp(struct vcpu_svm *svm, u64 value)
+{
+ svm->vmcb->control.ghcb_gpa = GHCB_MSR_AP_RESET_HOLD_RESP | (value << GHCB_DATA_LOW);
+}
+
static void set_ghcb_msr(struct vcpu_svm *svm, u64 value)
{
svm->vmcb->control.ghcb_gpa = value;
@@ -2406,6 +2414,17 @@ static int sev_handle_vmgexit_msr_protocol(struct vcpu_svm *svm)

break;
}
+ case GHCB_MSR_AP_RESET_HOLD_REQ: {
+ ret = kvm_emulate_ap_reset_hold(&svm->vcpu, AP_RESET_HOLD_MSR_PROTO);
+
+ /*
+ * Preset the result to a non-SIPI return and then only set
+ * the result to non-zero when delivering a SIPI.
+ */
+ set_ghcb_msr_ap_rst_resp(svm, 0);
+
+ break;
+ }
case GHCB_MSR_TERM_REQ: {
u64 reason_set, reason_code;

@@ -2491,7 +2510,7 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu)
ret = svm_invoke_exit_handler(vcpu, SVM_EXIT_IRET);
break;
case SVM_VMGEXIT_AP_HLT_LOOP:
- ret = kvm_emulate_ap_reset_hold(vcpu);
+ ret = kvm_emulate_ap_reset_hold(vcpu, AP_RESET_HOLD_NAE_EVENT);
break;
case SVM_VMGEXIT_AP_JUMP_TABLE: {
struct kvm_sev_info *sev = &to_kvm_svm(vcpu->kvm)->sev_info;
@@ -2628,13 +2647,23 @@ void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector)
return;
}

- /*
- * Subsequent SIPI: Return from an AP Reset Hold VMGEXIT, where
- * the guest will set the CS and RIP. Set SW_EXIT_INFO_2 to a
- * non-zero value.
- */
- if (!svm->ghcb)
- return;
-
- ghcb_set_sw_exit_info_2(svm->ghcb, 1);
+ /* Subsequent SIPI */
+ switch (vcpu->arch.reset_hold_type) {
+ case AP_RESET_HOLD_NAE_EVENT:
+ /*
+ * Return from an AP Reset Hold VMGEXIT, where the guest will
+ * set the CS and RIP. Set SW_EXIT_INFO_2 to a non-zero value.
+ */
+ ghcb_set_sw_exit_info_2(svm->ghcb, 1);
+ break;
+ case AP_RESET_HOLD_MSR_PROTO:
+ /*
+ * Return from an AP Reset Hold VMGEXIT, where the guest will
+ * set the CS and RIP. Set GHCB data field to a non-zero value.
+ */
+ set_ghcb_msr_ap_rst_resp(svm, 1);
+ break;
+ default:
+ break;
+ }
}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a4fd10604f72..927ef55bdb2d 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8504,10 +8504,13 @@ int kvm_emulate_halt(struct kvm_vcpu *vcpu)
}
EXPORT_SYMBOL_GPL(kvm_emulate_halt);

-int kvm_emulate_ap_reset_hold(struct kvm_vcpu *vcpu)
+int kvm_emulate_ap_reset_hold(struct kvm_vcpu *vcpu,
+ enum ap_reset_hold_type type)
{
int ret = kvm_skip_emulated_instruction(vcpu);

+ vcpu->arch.reset_hold_type = type;
+
return __kvm_vcpu_halt(vcpu, KVM_MP_STATE_AP_RESET_HOLD, KVM_EXIT_AP_RESET_HOLD) && ret;
}
EXPORT_SYMBOL_GPL(kvm_emulate_ap_reset_hold);
--
2.31.1

2021-07-22 11:54:52

by Joerg Roedel

Subject: [PATCH v2 1/4] KVM: SVM: Get rid of *ghcb_msr_bits() functions

From: Joerg Roedel <[email protected]>

Replace the get function with macros and the set function with
hypercall-specific setters. This avoids preserving any previous
bits in the GHCB MSR and improves code readability.

Suggested-by: Sean Christopherson <[email protected]>
Signed-off-by: Joerg Roedel <[email protected]>
---
arch/x86/include/asm/sev-common.h | 9 +++++++
arch/x86/kvm/svm/sev.c | 41 +++++++++++--------------------
2 files changed, 24 insertions(+), 26 deletions(-)

diff --git a/arch/x86/include/asm/sev-common.h b/arch/x86/include/asm/sev-common.h
index 2cef6c5a52c2..8540972cad04 100644
--- a/arch/x86/include/asm/sev-common.h
+++ b/arch/x86/include/asm/sev-common.h
@@ -50,6 +50,10 @@
(GHCB_MSR_CPUID_REQ | \
(((unsigned long)reg & GHCB_MSR_CPUID_REG_MASK) << GHCB_MSR_CPUID_REG_POS) | \
(((unsigned long)fn) << GHCB_MSR_CPUID_FUNC_POS))
+#define GHCB_MSR_CPUID_FN(msr) \
+ (((msr) >> GHCB_MSR_CPUID_FUNC_POS) & GHCB_MSR_CPUID_FUNC_MASK)
+#define GHCB_MSR_CPUID_REG(msr) \
+ (((msr) >> GHCB_MSR_CPUID_REG_POS) & GHCB_MSR_CPUID_REG_MASK)

/* AP Reset Hold */
#define GHCB_MSR_AP_RESET_HOLD_REQ 0x006
@@ -67,6 +71,11 @@
#define GHCB_SEV_TERM_REASON(reason_set, reason_val) \
(((((u64)reason_set) & GHCB_MSR_TERM_REASON_SET_MASK) << GHCB_MSR_TERM_REASON_SET_POS) | \
((((u64)reason_val) & GHCB_MSR_TERM_REASON_MASK) << GHCB_MSR_TERM_REASON_POS))
+#define GHCB_MSR_TERM_REASON_SET(msr) \
+ (((msr) >> GHCB_MSR_TERM_REASON_SET_POS) & GHCB_MSR_TERM_REASON_SET_MASK)
+#define GHCB_MSR_TERM_REASON(msr) \
+ (((msr) >> GHCB_MSR_TERM_REASON_POS) & GHCB_MSR_TERM_REASON_MASK)
+

#define GHCB_SEV_ES_REASON_GENERAL_REQUEST 0
#define GHCB_SEV_ES_REASON_PROTOCOL_UNSUPPORTED 1
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 6710d9ee2e4b..d7b3557b8dbb 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2342,16 +2342,15 @@ static bool setup_vmgexit_scratch(struct vcpu_svm *svm, bool sync, u64 len)
return true;
}

-static void set_ghcb_msr_bits(struct vcpu_svm *svm, u64 value, u64 mask,
- unsigned int pos)
+static void set_ghcb_msr_cpuid_resp(struct vcpu_svm *svm, u64 reg, u64 value)
{
- svm->vmcb->control.ghcb_gpa &= ~(mask << pos);
- svm->vmcb->control.ghcb_gpa |= (value & mask) << pos;
-}
+ u64 msr;

-static u64 get_ghcb_msr_bits(struct vcpu_svm *svm, u64 mask, unsigned int pos)
-{
- return (svm->vmcb->control.ghcb_gpa >> pos) & mask;
+ msr = GHCB_MSR_CPUID_RESP;
+ msr |= (reg & GHCB_MSR_CPUID_REG_MASK) << GHCB_MSR_CPUID_REG_POS;
+ msr |= (value & GHCB_MSR_CPUID_VALUE_MASK) << GHCB_MSR_CPUID_VALUE_POS;
+
+ svm->vmcb->control.ghcb_gpa = msr;
}

static void set_ghcb_msr(struct vcpu_svm *svm, u64 value)
@@ -2380,9 +2379,7 @@ static int sev_handle_vmgexit_msr_protocol(struct vcpu_svm *svm)
case GHCB_MSR_CPUID_REQ: {
u64 cpuid_fn, cpuid_reg, cpuid_value;

- cpuid_fn = get_ghcb_msr_bits(svm,
- GHCB_MSR_CPUID_FUNC_MASK,
- GHCB_MSR_CPUID_FUNC_POS);
+ cpuid_fn = GHCB_MSR_CPUID_FN(control->ghcb_gpa);

/* Initialize the registers needed by the CPUID intercept */
vcpu->arch.regs[VCPU_REGS_RAX] = cpuid_fn;
@@ -2394,9 +2391,8 @@ static int sev_handle_vmgexit_msr_protocol(struct vcpu_svm *svm)
break;
}

- cpuid_reg = get_ghcb_msr_bits(svm,
- GHCB_MSR_CPUID_REG_MASK,
- GHCB_MSR_CPUID_REG_POS);
+ cpuid_reg = GHCB_MSR_CPUID_REG(control->ghcb_gpa);
+
if (cpuid_reg == 0)
cpuid_value = vcpu->arch.regs[VCPU_REGS_RAX];
else if (cpuid_reg == 1)
@@ -2406,26 +2402,19 @@ static int sev_handle_vmgexit_msr_protocol(struct vcpu_svm *svm)
else
cpuid_value = vcpu->arch.regs[VCPU_REGS_RDX];

- set_ghcb_msr_bits(svm, cpuid_value,
- GHCB_MSR_CPUID_VALUE_MASK,
- GHCB_MSR_CPUID_VALUE_POS);
+ set_ghcb_msr_cpuid_resp(svm, cpuid_reg, cpuid_value);

- set_ghcb_msr_bits(svm, GHCB_MSR_CPUID_RESP,
- GHCB_MSR_INFO_MASK,
- GHCB_MSR_INFO_POS);
break;
}
case GHCB_MSR_TERM_REQ: {
u64 reason_set, reason_code;

- reason_set = get_ghcb_msr_bits(svm,
- GHCB_MSR_TERM_REASON_SET_MASK,
- GHCB_MSR_TERM_REASON_SET_POS);
- reason_code = get_ghcb_msr_bits(svm,
- GHCB_MSR_TERM_REASON_MASK,
- GHCB_MSR_TERM_REASON_POS);
+ reason_set = GHCB_MSR_TERM_REASON_SET(control->ghcb_gpa);
+ reason_code = GHCB_MSR_TERM_REASON(control->ghcb_gpa);
+
pr_info("SEV-ES guest requested termination: %#llx:%#llx\n",
reason_set, reason_code);
+
fallthrough;
}
default:
--
2.31.1

2021-07-22 11:55:31

by Joerg Roedel

Subject: [PATCH v2 3/4] KVM: SVM: Add support for Hypervisor Feature support MSR protocol

From: Brijesh Singh <[email protected]>

Version 2 of the GHCB specification introduced advertisement of
supported Hypervisor SEV features. This request is required to support
the GHCB version 2 protocol.

Signed-off-by: Brijesh Singh <[email protected]>
Signed-off-by: Joerg Roedel <[email protected]>
---
arch/x86/include/asm/sev-common.h | 4 ++++
arch/x86/include/uapi/asm/svm.h | 1 +
arch/x86/kvm/svm/sev.c | 21 +++++++++++++++++++++
arch/x86/kvm/svm/svm.h | 1 +
4 files changed, 27 insertions(+)

diff --git a/arch/x86/include/asm/sev-common.h b/arch/x86/include/asm/sev-common.h
index 04470aab421b..ad7448b5c74d 100644
--- a/arch/x86/include/asm/sev-common.h
+++ b/arch/x86/include/asm/sev-common.h
@@ -64,6 +64,10 @@
#define GHCB_MSR_HV_FT_REQ 0x080
#define GHCB_MSR_HV_FT_RESP 0x081

+/* GHCB Hypervisor Feature Request/Response */
+#define GHCB_MSR_HV_FT_REQ 0x080
+#define GHCB_MSR_HV_FT_RESP 0x081
+
#define GHCB_MSR_TERM_REQ 0x100
#define GHCB_MSR_TERM_REASON_SET_POS 12
#define GHCB_MSR_TERM_REASON_SET_MASK 0xf
diff --git a/arch/x86/include/uapi/asm/svm.h b/arch/x86/include/uapi/asm/svm.h
index efa969325ede..fbb6f8d27a80 100644
--- a/arch/x86/include/uapi/asm/svm.h
+++ b/arch/x86/include/uapi/asm/svm.h
@@ -108,6 +108,7 @@
#define SVM_VMGEXIT_AP_JUMP_TABLE 0x80000005
#define SVM_VMGEXIT_SET_AP_JUMP_TABLE 0
#define SVM_VMGEXIT_GET_AP_JUMP_TABLE 1
+#define SVM_VMGEXIT_HV_FT 0x8000fffd
#define SVM_VMGEXIT_UNSUPPORTED_EVENT 0x8000ffff

/* Exit code reserved for hypervisor/software use */
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index a32ef011025f..4565c360d87d 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2180,6 +2180,7 @@ static int sev_es_validate_vmgexit(struct vcpu_svm *svm)
case SVM_VMGEXIT_AP_HLT_LOOP:
case SVM_VMGEXIT_AP_JUMP_TABLE:
case SVM_VMGEXIT_UNSUPPORTED_EVENT:
+ case SVM_VMGEXIT_HV_FT:
break;
default:
goto vmgexit_err;
@@ -2361,6 +2362,16 @@ static void set_ghcb_msr_ap_rst_resp(struct vcpu_svm *svm, u64 value)
svm->vmcb->control.ghcb_gpa = GHCB_MSR_AP_RESET_HOLD_RESP | (value << GHCB_DATA_LOW);
}

+static void set_ghcb_msr_hv_feat_resp(struct vcpu_svm *svm, u64 value)
+{
+ u64 msr;
+
+ msr = GHCB_MSR_HV_FT_RESP;
+ msr |= (value << GHCB_DATA_LOW);
+
+ svm->vmcb->control.ghcb_gpa = msr;
+}
+
static void set_ghcb_msr(struct vcpu_svm *svm, u64 value)
{
svm->vmcb->control.ghcb_gpa = value;
@@ -2425,6 +2436,10 @@ static int sev_handle_vmgexit_msr_protocol(struct vcpu_svm *svm)

break;
}
+ case GHCB_MSR_HV_FT_REQ: {
+ set_ghcb_msr_hv_feat_resp(svm, GHCB_HV_FT_SUPPORTED);
+ break;
+ }
case GHCB_MSR_TERM_REQ: {
u64 reason_set, reason_code;

@@ -2537,6 +2552,12 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu)
ret = 1;
break;
}
+ case SVM_VMGEXIT_HV_FT: {
+ ghcb_set_sw_exit_info_2(ghcb, GHCB_HV_FT_SUPPORTED);
+
+ ret = 1;
+ break;
+ }
case SVM_VMGEXIT_UNSUPPORTED_EVENT:
vcpu_unimpl(vcpu,
"vmgexit: unsupported event - exit_info_1=%#llx, exit_info_2=%#llx\n",
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 7e2090752d8f..9cafeba3340e 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -550,6 +550,7 @@ void svm_vcpu_unblocking(struct kvm_vcpu *vcpu);
#define GHCB_VERSION_MAX 1ULL
#define GHCB_VERSION_MIN 1ULL

+#define GHCB_HV_FT_SUPPORTED 0

extern unsigned int max_sev_asid;

--
2.31.1

2021-07-22 11:55:35

by Joerg Roedel

Subject: [PATCH v2 4/4] KVM: SVM: Increase supported GHCB protocol version

From: Brijesh Singh <[email protected]>

Now that KVM has basic support for version 2 of the GHCB specification,
bump the maximum supported protocol version. The SNP-specific functions
are still missing, but those are only required when the hypervisor
supports running SNP guests.

Signed-off-by: Brijesh Singh <[email protected]>
Signed-off-by: Joerg Roedel <[email protected]>
---
arch/x86/kvm/svm/svm.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 9cafeba3340e..84580dc90c97 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -547,7 +547,7 @@ void svm_vcpu_unblocking(struct kvm_vcpu *vcpu);

/* sev.c */

-#define GHCB_VERSION_MAX 1ULL
+#define GHCB_VERSION_MAX 2ULL
#define GHCB_VERSION_MIN 1ULL

#define GHCB_HV_FT_SUPPORTED 0
--
2.31.1

2021-07-22 12:03:19

by Joerg Roedel

Subject: [PATCH v2.1 3/4] KVM: SVM: Add support for Hypervisor Feature support MSR protocol

From: Brijesh Singh <[email protected]>

Version 2 of the GHCB specification introduced advertisement of
supported Hypervisor SEV features. This request is required to support
the GHCB version 2 protocol.

Signed-off-by: Brijesh Singh <[email protected]>
Signed-off-by: Joerg Roedel <[email protected]>
---
arch/x86/include/uapi/asm/svm.h | 1 +
arch/x86/kvm/svm/sev.c | 21 +++++++++++++++++++++
arch/x86/kvm/svm/svm.h | 1 +
3 files changed, 23 insertions(+)

diff --git a/arch/x86/include/uapi/asm/svm.h b/arch/x86/include/uapi/asm/svm.h
index efa969325ede..fbb6f8d27a80 100644
--- a/arch/x86/include/uapi/asm/svm.h
+++ b/arch/x86/include/uapi/asm/svm.h
@@ -108,6 +108,7 @@
#define SVM_VMGEXIT_AP_JUMP_TABLE 0x80000005
#define SVM_VMGEXIT_SET_AP_JUMP_TABLE 0
#define SVM_VMGEXIT_GET_AP_JUMP_TABLE 1
+#define SVM_VMGEXIT_HV_FT 0x8000fffd
#define SVM_VMGEXIT_UNSUPPORTED_EVENT 0x8000ffff

/* Exit code reserved for hypervisor/software use */
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index a32ef011025f..4565c360d87d 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2180,6 +2180,7 @@ static int sev_es_validate_vmgexit(struct vcpu_svm *svm)
case SVM_VMGEXIT_AP_HLT_LOOP:
case SVM_VMGEXIT_AP_JUMP_TABLE:
case SVM_VMGEXIT_UNSUPPORTED_EVENT:
+ case SVM_VMGEXIT_HV_FT:
break;
default:
goto vmgexit_err;
@@ -2361,6 +2362,16 @@ static void set_ghcb_msr_ap_rst_resp(struct vcpu_svm *svm, u64 value)
svm->vmcb->control.ghcb_gpa = GHCB_MSR_AP_RESET_HOLD_RESP | (value << GHCB_DATA_LOW);
}

+static void set_ghcb_msr_hv_feat_resp(struct vcpu_svm *svm, u64 value)
+{
+ u64 msr;
+
+ msr = GHCB_MSR_HV_FT_RESP;
+ msr |= (value << GHCB_DATA_LOW);
+
+ svm->vmcb->control.ghcb_gpa = msr;
+}
+
static void set_ghcb_msr(struct vcpu_svm *svm, u64 value)
{
svm->vmcb->control.ghcb_gpa = value;
@@ -2425,6 +2436,10 @@ static int sev_handle_vmgexit_msr_protocol(struct vcpu_svm *svm)

break;
}
+ case GHCB_MSR_HV_FT_REQ: {
+ set_ghcb_msr_hv_feat_resp(svm, GHCB_HV_FT_SUPPORTED);
+ break;
+ }
case GHCB_MSR_TERM_REQ: {
u64 reason_set, reason_code;

@@ -2537,6 +2552,12 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu)
ret = 1;
break;
}
+ case SVM_VMGEXIT_HV_FT: {
+ ghcb_set_sw_exit_info_2(ghcb, GHCB_HV_FT_SUPPORTED);
+
+ ret = 1;
+ break;
+ }
case SVM_VMGEXIT_UNSUPPORTED_EVENT:
vcpu_unimpl(vcpu,
"vmgexit: unsupported event - exit_info_1=%#llx, exit_info_2=%#llx\n",
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 7e2090752d8f..9cafeba3340e 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -550,6 +550,7 @@ void svm_vcpu_unblocking(struct kvm_vcpu *vcpu);
#define GHCB_VERSION_MAX 1ULL
#define GHCB_VERSION_MIN 1ULL

+#define GHCB_HV_FT_SUPPORTED 0

extern unsigned int max_sev_asid;

--
2.31.1

2021-09-01 21:13:24

by Sean Christopherson

Subject: Re: [PATCH v2 1/4] KVM: SVM: Get rid of *ghcb_msr_bits() functions

On Thu, Jul 22, 2021, Joerg Roedel wrote:
> From: Joerg Roedel <[email protected]>
>
> Replace the get function with macros and the set function with
> hypercall-specific setters. This avoids preserving any previous
> bits in the GHCB MSR and improves code readability.
>
> Suggested-by: Sean Christopherson <[email protected]>
> Signed-off-by: Joerg Roedel <[email protected]>
> ---
> arch/x86/include/asm/sev-common.h | 9 +++++++
> arch/x86/kvm/svm/sev.c | 41 +++++++++++--------------------
> 2 files changed, 24 insertions(+), 26 deletions(-)
>
> diff --git a/arch/x86/include/asm/sev-common.h b/arch/x86/include/asm/sev-common.h
> index 2cef6c5a52c2..8540972cad04 100644
> --- a/arch/x86/include/asm/sev-common.h
> +++ b/arch/x86/include/asm/sev-common.h
> @@ -50,6 +50,10 @@
> (GHCB_MSR_CPUID_REQ | \
> (((unsigned long)reg & GHCB_MSR_CPUID_REG_MASK) << GHCB_MSR_CPUID_REG_POS) | \
> (((unsigned long)fn) << GHCB_MSR_CPUID_FUNC_POS))
> +#define GHCB_MSR_CPUID_FN(msr) \
> + (((msr) >> GHCB_MSR_CPUID_FUNC_POS) & GHCB_MSR_CPUID_FUNC_MASK)
> +#define GHCB_MSR_CPUID_REG(msr) \
> + (((msr) >> GHCB_MSR_CPUID_REG_POS) & GHCB_MSR_CPUID_REG_MASK)
>
> /* AP Reset Hold */
> #define GHCB_MSR_AP_RESET_HOLD_REQ 0x006
> @@ -67,6 +71,11 @@
> #define GHCB_SEV_TERM_REASON(reason_set, reason_val) \
> (((((u64)reason_set) & GHCB_MSR_TERM_REASON_SET_MASK) << GHCB_MSR_TERM_REASON_SET_POS) | \
> ((((u64)reason_val) & GHCB_MSR_TERM_REASON_MASK) << GHCB_MSR_TERM_REASON_POS))
> +#define GHCB_MSR_TERM_REASON_SET(msr) \
> + (((msr) >> GHCB_MSR_TERM_REASON_SET_POS) & GHCB_MSR_TERM_REASON_SET_MASK)
> +#define GHCB_MSR_TERM_REASON(msr) \
> + (((msr) >> GHCB_MSR_TERM_REASON_POS) & GHCB_MSR_TERM_REASON_MASK)
> +
>
> #define GHCB_SEV_ES_REASON_GENERAL_REQUEST 0
> #define GHCB_SEV_ES_REASON_PROTOCOL_UNSUPPORTED 1
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index 6710d9ee2e4b..d7b3557b8dbb 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -2342,16 +2342,15 @@ static bool setup_vmgexit_scratch(struct vcpu_svm *svm, bool sync, u64 len)
> return true;
> }
>
> -static void set_ghcb_msr_bits(struct vcpu_svm *svm, u64 value, u64 mask,
> - unsigned int pos)
> +static void set_ghcb_msr_cpuid_resp(struct vcpu_svm *svm, u64 reg, u64 value)
> {
> - svm->vmcb->control.ghcb_gpa &= ~(mask << pos);
> - svm->vmcb->control.ghcb_gpa |= (value & mask) << pos;
> -}
> + u64 msr;
>
> -static u64 get_ghcb_msr_bits(struct vcpu_svm *svm, u64 mask, unsigned int pos)
> -{
> - return (svm->vmcb->control.ghcb_gpa >> pos) & mask;
> + msr = GHCB_MSR_CPUID_RESP;
> + msr |= (reg & GHCB_MSR_CPUID_REG_MASK) << GHCB_MSR_CPUID_REG_POS;
> + msr |= (value & GHCB_MSR_CPUID_VALUE_MASK) << GHCB_MSR_CPUID_VALUE_POS;
> +
> + svm->vmcb->control.ghcb_gpa = msr;

I would rather have the get/set pairs be roughly symmetric, i.e. both functions
or both macros, and both work on svm->vmcb->control.ghcb_gpa or both be purely
functional (that may not be the correct word).

I don't have a strong preference on function vs. macro. But for the second one,
my preference would be to have the helper generate the value as opposed to taken
and filling a pointer, e.g. to yield something like:

cpuid_reg = GHCB_MSR_CPUID_REG(control->ghcb_gpa);

if (cpuid_reg == 0)
cpuid_value = vcpu->arch.regs[VCPU_REGS_RAX];
else if (cpuid_reg == 1)
cpuid_value = vcpu->arch.regs[VCPU_REGS_RBX];
else if (cpuid_reg == 2)
cpuid_value = vcpu->arch.regs[VCPU_REGS_RCX];
else
cpuid_value = vcpu->arch.regs[VCPU_REGS_RDX];

control->ghcb_gpa = MAKE_GHCB_MSR_RESP(cpuid_reg, cpuid_value);


The advantage is that it's obvious from the code that control->ghcb_gpa is being
read _and_ written.

> case GHCB_MSR_TERM_REQ: {
> u64 reason_set, reason_code;
>
> - reason_set = get_ghcb_msr_bits(svm,
> - GHCB_MSR_TERM_REASON_SET_MASK,
> - GHCB_MSR_TERM_REASON_SET_POS);
> - reason_code = get_ghcb_msr_bits(svm,
> - GHCB_MSR_TERM_REASON_MASK,
> - GHCB_MSR_TERM_REASON_POS);
> + reason_set = GHCB_MSR_TERM_REASON_SET(control->ghcb_gpa);
> + reason_code = GHCB_MSR_TERM_REASON(control->ghcb_gpa);
> +
> pr_info("SEV-ES guest requested termination: %#llx:%#llx\n",
> reason_set, reason_code);
> +
> fallthrough;

Not related to this patch, but why use fallthrough and more importantly, why is
this an -EINVAL return? Why wouldn't KVM forward the request to userspace instead
of returning an opaque -EINVAL?

> }
> default:
> --
> 2.31.1
>

2021-09-01 22:46:54

by Sean Christopherson

Subject: Re: [PATCH v2 1/4] KVM: SVM: Get rid of *ghcb_msr_bits() functions

On Wed, Sep 01, 2021, Sean Christopherson wrote:
> > -static u64 get_ghcb_msr_bits(struct vcpu_svm *svm, u64 mask, unsigned int pos)
> > -{
> > - return (svm->vmcb->control.ghcb_gpa >> pos) & mask;
> > + msr = GHCB_MSR_CPUID_RESP;
> > + msr |= (reg & GHCB_MSR_CPUID_REG_MASK) << GHCB_MSR_CPUID_REG_POS;
> > + msr |= (value & GHCB_MSR_CPUID_VALUE_MASK) << GHCB_MSR_CPUID_VALUE_POS;
> > +
> > + svm->vmcb->control.ghcb_gpa = msr;
>
> I would rather have the get/set pairs be roughly symmetric, i.e. both functions
> or both macros, and both work on svm->vmcb->control.ghcb_gpa or both be purely
> functional (that may not be the correct word).
>
> I don't have a strong preference on function vs. macro. But for the second one,
> my preference would be to have the helper generate the value as opposed to taken
> and filling a pointer, e.g. to yield something like:
>
> cpuid_reg = GHCB_MSR_CPUID_REG(control->ghcb_gpa);
>
> if (cpuid_reg == 0)
> cpuid_value = vcpu->arch.regs[VCPU_REGS_RAX];
> else if (cpuid_reg == 1)
> cpuid_value = vcpu->arch.regs[VCPU_REGS_RBX];
> else if (cpuid_reg == 2)
> cpuid_value = vcpu->arch.regs[VCPU_REGS_RCX];
> else
> cpuid_value = vcpu->arch.regs[VCPU_REGS_RDX];
>
> control->ghcb_gpa = MAKE_GHCB_MSR_RESP(cpuid_reg, cpuid_value);
>
>
> The advantage is that it's obvious from the code that control->ghcb_gpa is being
> read _and_ written.

Ah, but in the next patch I see there's the existing ghcb_set_sw_exit_info_2().
Hrm. I think I still prefer open coding "control->ghcb_gpa = ..." with the right
hand side being a macro. That would gel with the INFO_REQ, e.g.

case GHCB_MSR_SEV_INFO_REQ:
control->ghcb_gpa = GHCB_MSR_SEV_INFO(GHCB_VERSION_MAX,
GHCB_VERSION_MIN,
sev_enc_bit));
break;

and drop set_ghcb_msr() altogether.

Side topic, what about renaming control->ghcb_gpa => control->ghcb_msr so that
the code for the MSR protocol is a bit more self-documenting? The APM defines
the field as "Guest physical address of GHCB", so it's not exactly prescribing a
specific name.

2021-09-01 22:48:12

by Sean Christopherson

Subject: Re: [PATCH v2.1 3/4] KVM: SVM: Add support for Hypervisor Feature support MSR protocol

On Thu, Jul 22, 2021, Joerg Roedel wrote:
> From: Brijesh Singh <[email protected]>
>
> Version 2 of the GHCB specification introduced advertisement of
> supported Hypervisor SEV features. This request is required to support
> the GHCB version 2 protocol.
>
> Signed-off-by: Brijesh Singh <[email protected]>
> Signed-off-by: Joerg Roedel <[email protected]>
> ---
> arch/x86/include/uapi/asm/svm.h | 1 +
> arch/x86/kvm/svm/sev.c | 21 +++++++++++++++++++++
> arch/x86/kvm/svm/svm.h | 1 +
> 3 files changed, 23 insertions(+)
>
> diff --git a/arch/x86/include/uapi/asm/svm.h b/arch/x86/include/uapi/asm/svm.h
> index efa969325ede..fbb6f8d27a80 100644
> --- a/arch/x86/include/uapi/asm/svm.h
> +++ b/arch/x86/include/uapi/asm/svm.h
> @@ -108,6 +108,7 @@
> #define SVM_VMGEXIT_AP_JUMP_TABLE 0x80000005
> #define SVM_VMGEXIT_SET_AP_JUMP_TABLE 0
> #define SVM_VMGEXIT_GET_AP_JUMP_TABLE 1
> +#define SVM_VMGEXIT_HV_FT 0x8000fffd

For this KVM-only (for all intents and purposes) name, please use the verbose
SVM_VMGEXIT_HYPERVISOR_FEATURES.

https://lkml.kernel.org/r/[email protected]

> #define SVM_VMGEXIT_UNSUPPORTED_EVENT 0x8000ffff
>
> /* Exit code reserved for hypervisor/software use */
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index a32ef011025f..4565c360d87d 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -2180,6 +2180,7 @@ static int sev_es_validate_vmgexit(struct vcpu_svm *svm)
> case SVM_VMGEXIT_AP_HLT_LOOP:
> case SVM_VMGEXIT_AP_JUMP_TABLE:
> case SVM_VMGEXIT_UNSUPPORTED_EVENT:
> + case SVM_VMGEXIT_HV_FT:
> break;
> default:
> goto vmgexit_err;
> @@ -2361,6 +2362,16 @@ static void set_ghcb_msr_ap_rst_resp(struct vcpu_svm *svm, u64 value)
> svm->vmcb->control.ghcb_gpa = GHCB_MSR_AP_RESET_HOLD_RESP | (value << GHCB_DATA_LOW);
> }
>
> +static void set_ghcb_msr_hv_feat_resp(struct vcpu_svm *svm, u64 value)
> +{
> + u64 msr;
> +
> + msr = GHCB_MSR_HV_FT_RESP;
> + msr |= (value << GHCB_DATA_LOW);
> +
> + svm->vmcb->control.ghcb_gpa = msr;
> +}
> +
> static void set_ghcb_msr(struct vcpu_svm *svm, u64 value)
> {
> svm->vmcb->control.ghcb_gpa = value;
> @@ -2425,6 +2436,10 @@ static int sev_handle_vmgexit_msr_protocol(struct vcpu_svm *svm)
>
> break;
> }
> + case GHCB_MSR_HV_FT_REQ: {
> + set_ghcb_msr_hv_feat_resp(svm, GHCB_HV_FT_SUPPORTED);

I definitely think there are too many small wrappers that bury the write to
svm->vmcb->control.ghcb_gpa. E.g. with a rename, this

control->ghcb_msr = GHCB_MSR_HV_FT_RESP |
(GHCB_HV_FT_SUPPORTED << GHCB_DATA_LOW);

or maybe add a generic helper for simple data responses? E.g. GHCB_MSR_AP_RESET_HOLD_REQ
can share a macro.

control->ghcb_msr = GHCB_MSR_RESP_WITH_DATA(GHCB_MSR_HV_FT_RESP,
GHCB_HV_FT_SUPPORTED);

> + break;
> + }

Unnecessary braces.

> case GHCB_MSR_TERM_REQ: {
> u64 reason_set, reason_code;
>
> @@ -2537,6 +2552,12 @@ int sev_handle_vmgexit(struct kvm_vcpu *vcpu)
> ret = 1;
> break;
> }
> + case SVM_VMGEXIT_HV_FT: {
> + ghcb_set_sw_exit_info_2(ghcb, GHCB_HV_FT_SUPPORTED);
> +
> + ret = 1;
> + break;
> + }

Unnecessary braces.

> case SVM_VMGEXIT_UNSUPPORTED_EVENT:
> vcpu_unimpl(vcpu,
> "vmgexit: unsupported event - exit_info_1=%#llx, exit_info_2=%#llx\n",
> diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
> index 7e2090752d8f..9cafeba3340e 100644
> --- a/arch/x86/kvm/svm/svm.h
> +++ b/arch/x86/kvm/svm/svm.h
> @@ -550,6 +550,7 @@ void svm_vcpu_unblocking(struct kvm_vcpu *vcpu);
> #define GHCB_VERSION_MAX 1ULL
> #define GHCB_VERSION_MIN 1ULL
>
> +#define GHCB_HV_FT_SUPPORTED 0
>
> extern unsigned int max_sev_asid;
>
> --
> 2.31.1
>

2021-09-01 22:49:17

by Sean Christopherson

Subject: Re: [PATCH v2 2/4] KVM: SVM: Add support to handle AP reset MSR protocol

On Thu, Jul 22, 2021, Joerg Roedel wrote:
> diff --git a/arch/x86/include/asm/sev-common.h b/arch/x86/include/asm/sev-common.h
> index 8540972cad04..04470aab421b 100644
> --- a/arch/x86/include/asm/sev-common.h
> +++ b/arch/x86/include/asm/sev-common.h
> @@ -11,6 +11,7 @@
> #define GHCB_MSR_INFO_POS 0
> #define GHCB_DATA_LOW 12
> #define GHCB_MSR_INFO_MASK (BIT_ULL(GHCB_DATA_LOW) - 1)
> +#define GHCB_DATA_MASK GENMASK_ULL(51, 0)

Not used in this patch, not sure if it's intended to be used in GHCB_DATA below?

> #define GHCB_DATA(v) \
> (((unsigned long)(v) & ~GHCB_MSR_INFO_MASK) >> GHCB_DATA_LOW)
> diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
> index e322676039f4..5a28f223a9a8 100644
> --- a/arch/x86/include/asm/svm.h
> +++ b/arch/x86/include/asm/svm.h
> @@ -164,7 +164,6 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
> u8 reserved_sw[32];
> };
>
> -

Unrelated whitespace change.

> #define TLB_CONTROL_DO_NOTHING 0
> #define TLB_CONTROL_FLUSH_ALL_ASID 1
> #define TLB_CONTROL_FLUSH_ASID 3

2021-09-09 13:37:37

by Joerg Roedel

Subject: Re: [PATCH v2 1/4] KVM: SVM: Get rid of *ghcb_msr_bits() functions

Hi Sean,

On Wed, Sep 01, 2021 at 09:31:52PM +0000, Sean Christopherson wrote:
> On Wed, Sep 01, 2021, Sean Christopherson wrote:
> > control->ghcb_gpa = MAKE_GHCB_MSR_RESP(cpuid_reg, cpuid_value);

Made that change, but kept the set_ghcb_msr_cpuid_resp() and renamed it
to ghcb_msr_cpuid_resp(). It now returns the MSR value for the CPUID
response.

I prefer to keep the more complicated response setters as functions
rather than macros, for readability.


> case GHCB_MSR_SEV_INFO_REQ:
> control->ghcb_gpa = GHCB_MSR_SEV_INFO(GHCB_VERSION_MAX,
> GHCB_VERSION_MIN,
> sev_enc_bit));
> break;
>
> and drop set_ghcb_msr() altogether.

Makes sense, I replaced the set_ghcb_msr() calls with the above.

> Side topic, what about renaming control->ghcb_gpa => control->ghcb_msr so that
> the code for the MSR protocol is a bit more self-documenting? The APM defines
> the field as "Guest physical address of GHCB", so it's not exactly prescribing a
> specific name.

No strong opinion here, I let this up to the AMD engineers to decide. If
we change the name I can add a separate patch for this.

Regards,

Joerg

2021-09-09 13:38:21

by Joerg Roedel

Subject: Re: [PATCH v2 1/4] KVM: SVM: Get rid of *ghcb_msr_bits() functions

On Wed, Sep 01, 2021 at 09:12:10PM +0000, Sean Christopherson wrote:
> On Thu, Jul 22, 2021, Joerg Roedel wrote:
> > case GHCB_MSR_TERM_REQ: {
> > u64 reason_set, reason_code;
> >
> > - reason_set = get_ghcb_msr_bits(svm,
> > - GHCB_MSR_TERM_REASON_SET_MASK,
> > - GHCB_MSR_TERM_REASON_SET_POS);
> > - reason_code = get_ghcb_msr_bits(svm,
> > - GHCB_MSR_TERM_REASON_MASK,
> > - GHCB_MSR_TERM_REASON_POS);
> > + reason_set = GHCB_MSR_TERM_REASON_SET(control->ghcb_gpa);
> > + reason_code = GHCB_MSR_TERM_REASON(control->ghcb_gpa);
> > +
> > pr_info("SEV-ES guest requested termination: %#llx:%#llx\n",
> > reason_set, reason_code);
> > +
> > fallthrough;
>
> Not related to this patch, but why use fallthrough and more importantly, why is
> this an -EINVAL return? Why wouldn't KVM forward the request to userspace instead
> of returning an opaque -EINVAL?

I guess it is to signal an error condition up the call-chain to get the
guest terminated, as requested.

Regards,

Joerg

2021-09-21 01:38:26

by Sean Christopherson

Subject: Re: [PATCH v2 1/4] KVM: SVM: Get rid of *ghcb_msr_bits() functions

On Thu, Sep 09, 2021, Joerg Roedel wrote:
> On Wed, Sep 01, 2021 at 09:12:10PM +0000, Sean Christopherson wrote:
> > On Thu, Jul 22, 2021, Joerg Roedel wrote:
> > > case GHCB_MSR_TERM_REQ: {
> > > u64 reason_set, reason_code;
> > >
> > > - reason_set = get_ghcb_msr_bits(svm,
> > > - GHCB_MSR_TERM_REASON_SET_MASK,
> > > - GHCB_MSR_TERM_REASON_SET_POS);
> > > - reason_code = get_ghcb_msr_bits(svm,
> > > - GHCB_MSR_TERM_REASON_MASK,
> > > - GHCB_MSR_TERM_REASON_POS);
> > > + reason_set = GHCB_MSR_TERM_REASON_SET(control->ghcb_gpa);
> > > + reason_code = GHCB_MSR_TERM_REASON(control->ghcb_gpa);
> > > +
> > > pr_info("SEV-ES guest requested termination: %#llx:%#llx\n",
> > > reason_set, reason_code);
> > > +
> > > fallthrough;
> >
> > Not related to this patch, but why use fallthrough and more importantly, why is
> > this an -EINVAL return? Why wouldn't KVM forward the request to userspace instead
> > of returning an opaque -EINVAL?
>
> I guess it is to signal an error condition up the call-chain to get the
> guest terminated, like requested.

Yes, but it's bizarre/unfortunate that KVM doesn't take this opportunity to
forward the termination info to the VMM. The above pr_info() should not exist.
If that information is relevant then it should be handed to the VMM directly, not
dumped to dmesg.