2023-02-27 03:54:31

by Santosh Shukla

Subject: [PATCHv3 00/10] SVM: virtual NMI

v2:
https://lore.kernel.org/all/[email protected]/

v3:
- 09/11: Combined the x86_ops delayed NMI change with the vNMI changes into one,
for better readability (Sean's suggestion)
- The series includes the suggestions and fixes proposed in the v2 review.
Refer to each patch for its change history (v2-->v3).

Series based on [1] and tested on AMD EPYC-Genoa.


APM: (Ch. 15.21.10 - NMI Virtualization)
https://www.amd.com/en/support/tech-docs/amd64-architecture-programmers-manual-volumes-1-5

For past history and work, refer to v5:
https://lkml.org/lkml/2022/10/27/261

Thanks,
Santosh
[1] https://github.com/kvm-x86/linux branch kvm-x86/next(62ef199250cd46f)


Maxim Levitsky (2):
KVM: nSVM: Raise event on nested VM exit if L1 doesn't intercept IRQs
KVM: SVM: add wrappers to enable/disable IRET interception

Santosh Shukla (5):
KVM: nSVM: Disable intercept of VINTR if saved RFLAGS.IF is 0
x86/cpu: Add CPUID feature bit for VNMI
KVM: SVM: Add VNMI bit definition
KVM: x86: add support for delayed virtual NMI injection interface
KVM: nSVM: implement support for nested VNMI

Sean Christopherson (3):
KVM: x86: Raise an event request when processing NMIs if an NMI is
pending
KVM: x86: Tweak the code and comment related to handling concurrent
NMIs
KVM: x86: Save/restore all NMIs when multiple NMIs are pending

arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/include/asm/kvm-x86-ops.h | 2 +
arch/x86/include/asm/kvm_host.h | 11 ++-
arch/x86/include/asm/svm.h | 9 ++
arch/x86/kvm/svm/nested.c | 73 +++++++++++++-
arch/x86/kvm/svm/svm.c | 152 +++++++++++++++++++++++------
arch/x86/kvm/svm/svm.h | 28 ++++++
arch/x86/kvm/x86.c | 46 +++++++--
8 files changed, 279 insertions(+), 43 deletions(-)

--
2.25.1



2023-02-27 03:54:45

by Santosh Shukla

Subject: [PATCHv3 01/10] KVM: nSVM: Disable intercept of VINTR if saved RFLAGS.IF is 0

From: Santosh Shukla <[email protected]>

Disable intercept of virtual interrupts (used to
detect interrupt windows) if the saved RFLAGS.IF is '0', as
the effective RFLAGS.IF for L1 interrupts will never be set
while L2 is running (L2's RFLAGS.IF doesn't affect L1 IRQs).

Suggested-by: Sean Christopherson <[email protected]>
Signed-off-by: Santosh Shukla <[email protected]>
---
v3:
https://lore.kernel.org/all/[email protected]/
suggested by Sean.

arch/x86/kvm/svm/nested.c | 15 ++++++++++-----
arch/x86/kvm/svm/svm.c | 10 ++++++++++
2 files changed, 20 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index fbade158d368..107258ed46ee 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -139,13 +139,18 @@ void recalc_intercepts(struct vcpu_svm *svm)

if (g->int_ctl & V_INTR_MASKING_MASK) {
/*
- * Once running L2 with HF_VINTR_MASK, EFLAGS.IF and CR8
- * does not affect any interrupt we may want to inject;
- * therefore, writes to CR8 are irrelevant to L0, as are
- * interrupt window vmexits.
+ * If L2 is active and V_INTR_MASKING is enabled in vmcb12,
+ * disable intercept of CR8 writes as L2's CR8 does not affect
+ * any interrupt KVM may want to inject.
+ *
+ * Similarly, disable intercept of virtual interrupts (used to
+ * detect interrupt windows) if the saved RFLAGS.IF is '0', as
+ * the effective RFLAGS.IF for L1 interrupts will never be set
+ * while L2 is running (L2's RFLAGS.IF doesn't affect L1 IRQs).
*/
vmcb_clr_intercept(c, INTERCEPT_CR8_WRITE);
- vmcb_clr_intercept(c, INTERCEPT_VINTR);
+ if (!(svm->vmcb01.ptr->save.rflags & X86_EFLAGS_IF))
+ vmcb_clr_intercept(c, INTERCEPT_VINTR);
}

/*
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index b43775490074..cf6ae093ed19 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -1583,6 +1583,16 @@ static void svm_set_vintr(struct vcpu_svm *svm)

svm_set_intercept(svm, INTERCEPT_VINTR);

+ /*
+ * Recalculating intercepts may have cleared the VINTR intercept. If
+ * V_INTR_MASKING is enabled in vmcb12, then the effective RFLAGS.IF
+ * for L1 physical interrupts is L1's RFLAGS.IF at the time of VMRUN.
+ * Requesting an interrupt window if save.RFLAGS.IF=0 is pointless as
+ * interrupts will never be unblocked while L2 is running.
+ */
+ if (!svm_is_intercept(svm, INTERCEPT_VINTR))
+ return;
+
/*
* This is just a dummy VINTR to actually cause a vmexit to happen.
* Actual injection of virtual interrupts happens through EVENTINJ.
--
2.25.1


2023-02-27 03:55:25

by Santosh Shukla

Subject: [PATCHv3 02/10] KVM: nSVM: Raise event on nested VM exit if L1 doesn't intercept IRQs

From: Maxim Levitsky <[email protected]>

If L1 doesn't intercept interrupts, then KVM will use vmcb02's
V_IRQ to detect an interrupt window for L1.

In this case, on nested VM exit, KVM might need to copy the V_IRQ bit
from vmcb02 to vmcb01 in order to continue waiting for an interrupt
window.

To keep it simple, just raise the KVM_REQ_EVENT request; processing it
will re-enable the interrupt window if needed.

Note that this is a theoretical bug, because KVM already raises a
KVM_REQ_EVENT request on each nested VM exit: the nested VM exit resets
RFLAGS and kvm_set_rflags() raises KVM_REQ_EVENT in response.

However, raising this request explicitly, together with documenting
why it is needed, is still preferred.

Signed-off-by: Maxim Levitsky <[email protected]>
[reworded description as per Sean's v2 comment]
Signed-off-by: Santosh Shukla <[email protected]>
---
v3:
https://lore.kernel.org/all/[email protected]/
suggested by Sean.

arch/x86/kvm/svm/nested.c | 25 +++++++++++++++++++++++++
1 file changed, 25 insertions(+)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 107258ed46ee..74e9e9e76d77 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -1025,6 +1025,31 @@ int nested_svm_vmexit(struct vcpu_svm *svm)

svm_switch_vmcb(svm, &svm->vmcb01);

+ /* Note about synchronizing some of int_ctl bits from vmcb02 to vmcb01:
+ *
+ * V_IRQ, V_IRQ_VECTOR, V_INTR_PRIO_MASK, V_IGN_TPR:
+ * If the L1 doesn't intercept interrupts, then
+ * (even if the L1 does use virtual interrupt masking),
+ * KVM will use the vmcb02's V_INTR to detect interrupt window.
+ *
+ * In this case, KVM raises KVM_REQ_EVENT to ensure that the interrupt
+ * window is not lost, effectively moving the V_IRQ from vmcb02 to vmcb01.
+ *
+ * V_TPR:
+ * If the L1 doesn't use virtual interrupt masking, then the L1's vTPR
+ * is stored in the vmcb02 but its value doesn't need to be copied
+ * from/to vmcb01 because it is copied from/to the TPR APIC's register
+ * on each VM entry/exit.
+ *
+ * V_GIF:
+ * If the nested vGIF is not used, KVM uses vmcb02's V_GIF for L1's
+ * V_GIF, however, the L1 vGIF is reset to false on each VM exit, thus
+ * there is no need to copy it from vmcb02 to vmcb01.
+ */
+
+ if (!nested_exit_on_intr(svm))
+ kvm_make_request(KVM_REQ_EVENT, &svm->vcpu);
+
if (unlikely(svm->lbrv_enabled && (svm->nested.ctl.virt_ext & LBR_CTL_ENABLE_MASK))) {
svm_copy_lbrs(vmcb12, vmcb02);
svm_update_lbrv(vcpu);
--
2.25.1


2023-02-27 03:55:38

by Santosh Shukla

Subject: [PATCHv3 03/10] KVM: SVM: add wrappers to enable/disable IRET interception

From: Maxim Levitsky <[email protected]>

SEV-ES guests don't use IRET interception to detect the end of an NMI.

Therefore it makes sense to add wrappers that avoid repeating the
SEV-ES check.

No functional change is intended.

Suggested-by: Sean Christopherson <[email protected]>
Signed-off-by: Maxim Levitsky <[email protected]>
[Renamed the IRET intercept API to svm_{clr,set}_iret_intercept()]
Signed-off-by: Santosh Shukla <[email protected]>
---
v3:
Reworded commit description per Sean's v2 comment:
https://lore.kernel.org/all/[email protected]/

arch/x86/kvm/svm/svm.c | 28 +++++++++++++++++++---------
1 file changed, 19 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index cf6ae093ed19..da936723e8ca 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2490,16 +2490,29 @@ static int task_switch_interception(struct kvm_vcpu *vcpu)
has_error_code, error_code);
}

+static void svm_clr_iret_intercept(struct vcpu_svm *svm)
+{
+ if (!sev_es_guest(svm->vcpu.kvm))
+ svm_clr_intercept(svm, INTERCEPT_IRET);
+}
+
+static void svm_set_iret_intercept(struct vcpu_svm *svm)
+{
+ if (!sev_es_guest(svm->vcpu.kvm))
+ svm_set_intercept(svm, INTERCEPT_IRET);
+}
+
static int iret_interception(struct kvm_vcpu *vcpu)
{
struct vcpu_svm *svm = to_svm(vcpu);

++vcpu->stat.nmi_window_exits;
svm->awaiting_iret_completion = true;
- if (!sev_es_guest(vcpu->kvm)) {
- svm_clr_intercept(svm, INTERCEPT_IRET);
+
+ svm_clr_iret_intercept(svm);
+ if (!sev_es_guest(vcpu->kvm))
svm->nmi_iret_rip = kvm_rip_read(vcpu);
- }
+
kvm_make_request(KVM_REQ_EVENT, vcpu);
return 1;
}
@@ -3491,8 +3504,7 @@ static void svm_inject_nmi(struct kvm_vcpu *vcpu)
return;

svm->nmi_masked = true;
- if (!sev_es_guest(vcpu->kvm))
- svm_set_intercept(svm, INTERCEPT_IRET);
+ svm_set_iret_intercept(svm);
++vcpu->stat.nmi_injections;
}

@@ -3632,12 +3644,10 @@ static void svm_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked)

if (masked) {
svm->nmi_masked = true;
- if (!sev_es_guest(vcpu->kvm))
- svm_set_intercept(svm, INTERCEPT_IRET);
+ svm_set_iret_intercept(svm);
} else {
svm->nmi_masked = false;
- if (!sev_es_guest(vcpu->kvm))
- svm_clr_intercept(svm, INTERCEPT_IRET);
+ svm_clr_iret_intercept(svm);
}
}

--
2.25.1


2023-02-27 03:56:09

by Santosh Shukla

Subject: [PATCHv3 04/10] KVM: x86: Raise an event request when processing NMIs if an NMI is pending

From: Sean Christopherson <[email protected]>

Don't raise KVM_REQ_EVENT if no NMIs are pending at the end of
process_nmi(). Finishing process_nmi() without a pending NMI will become
much more likely when KVM gains support for AMD's vNMI, which allows
pending vNMIs in hardware, i.e. doesn't require explicit injection.

Signed-off-by: Sean Christopherson <[email protected]>
Signed-off-by: Santosh Shukla <[email protected]>
---
v3:
- renamed iret_intercept API
https://lore.kernel.org/all/[email protected]/

arch/x86/kvm/x86.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index f706621c35b8..1cd9cadc82af 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10148,7 +10148,9 @@ static void process_nmi(struct kvm_vcpu *vcpu)

vcpu->arch.nmi_pending += atomic_xchg(&vcpu->arch.nmi_queued, 0);
vcpu->arch.nmi_pending = min(vcpu->arch.nmi_pending, limit);
- kvm_make_request(KVM_REQ_EVENT, vcpu);
+
+ if (vcpu->arch.nmi_pending)
+ kvm_make_request(KVM_REQ_EVENT, vcpu);
}

void kvm_make_scan_ioapic_request_mask(struct kvm *kvm,
--
2.25.1


2023-02-27 03:56:43

by Santosh Shukla

Subject: [PATCHv3 05/10] KVM: x86: Tweak the code and comment related to handling concurrent NMIs

From: Sean Christopherson <[email protected]>

Tweak the code and comment that deal with concurrent NMIs to explicitly
call out that x86 allows exactly one pending NMI, but that KVM needs to
temporarily allow two pending NMIs in order to work around the fact that
the target vCPU cannot immediately recognize an incoming NMI, unlike bare
metal.

No functional change intended.

Signed-off-by: Sean Christopherson <[email protected]>
Signed-off-by: Santosh Shukla <[email protected]>
---
v3:
https://lore.kernel.org/all/[email protected]/
per Sean's comment.

arch/x86/kvm/x86.c | 15 +++++++++++----
1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 1cd9cadc82af..16590e094899 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10136,15 +10136,22 @@ static int kvm_check_and_inject_events(struct kvm_vcpu *vcpu,

static void process_nmi(struct kvm_vcpu *vcpu)
{
- unsigned limit = 2;
+ unsigned int limit;

/*
- * x86 is limited to one NMI running, and one NMI pending after it.
- * If an NMI is already in progress, limit further NMIs to just one.
- * Otherwise, allow two (and we'll inject the first one immediately).
+ * x86 is limited to one NMI pending, but because KVM can't react to
+ * incoming NMIs as quickly as bare metal, e.g. if the vCPU is
+ * scheduled out, KVM needs to play nice with two queued NMIs showing
+ * up at the same time. To handle this scenario, allow two NMIs to be
+ * (temporarily) pending so long as NMIs are not blocked and KVM is not
+ * waiting for a previous NMI injection to complete (which effectively
+ * blocks NMIs). KVM will immediately inject one of the two NMIs, and
+ * will request an NMI window to handle the second NMI.
*/
if (static_call(kvm_x86_get_nmi_mask)(vcpu) || vcpu->arch.nmi_injected)
limit = 1;
+ else
+ limit = 2;

vcpu->arch.nmi_pending += atomic_xchg(&vcpu->arch.nmi_queued, 0);
vcpu->arch.nmi_pending = min(vcpu->arch.nmi_pending, limit);
--
2.25.1


2023-02-27 03:56:46

by Santosh Shukla

Subject: [PATCHv3 06/10] KVM: x86: Save/restore all NMIs when multiple NMIs are pending

From: Sean Christopherson <[email protected]>

Save all pending NMIs in KVM_GET_VCPU_EVENTS, and queue KVM_REQ_NMI if one
or more NMIs are pending after KVM_SET_VCPU_EVENTS in order to re-evaluate
pending NMIs with respect to NMI blocking.

KVM allows multiple NMIs to be pending in order to faithfully emulate bare
metal handling of simultaneous NMIs (on bare metal, truly simultaneous
NMIs are impossible, i.e. one will always arrive first and be consumed).
Support for simultaneous NMIs botched the save/restore though. KVM only
saves one pending NMI, but allows userspace to restore 255 pending NMIs
as kvm_vcpu_events.nmi.pending is a u8, and KVM's internal state is stored
in an unsigned int.

Fixes: 7460fb4a3400 ("KVM: Fix simultaneous NMIs")
Signed-off-by: Sean Christopherson <[email protected]>
Signed-off-by: Santosh Shukla <[email protected]>
---
v3:
- There is a checkpatch warning about the Fixes tag, shown below:

WARNING: Unknown commit id '7460fb4a3400', maybe rebased or not pulled?
#19:
Fixes: 7460fb4a3400 ("KVM: Fix simultaneous NMIs")

total: 0 errors, 1 warnings, 20 lines checked

The commit being fixed has been in the kernel since v3.2, so the warning
can be ignored.
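
Purely for illustration (not part of the patch), a minimal user-space
sketch of how this save/restore path is exercised; it assumes already
open KVM vCPU fds and uses only the existing KVM_GET/SET_VCPU_EVENTS
API, in which nmi.pending is a u8 and can therefore carry up to 255
pending NMIs:

#include <linux/kvm.h>
#include <sys/ioctl.h>

static int migrate_nmi_state(int src_vcpu_fd, int dst_vcpu_fd)
{
	struct kvm_vcpu_events events;

	if (ioctl(src_vcpu_fd, KVM_GET_VCPU_EVENTS, &events) < 0)
		return -1;

	/* With this patch, nmi.pending is the full count, not just 0/1. */
	events.flags |= KVM_VCPUEVENT_VALID_NMI_PENDING;

	/* KVM re-evaluates NMI blocking for the restored count. */
	return ioctl(dst_vcpu_fd, KVM_SET_VCPU_EVENTS, &events);
}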


arch/x86/kvm/x86.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 16590e094899..b22074f467e0 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5113,7 +5113,7 @@ static void kvm_vcpu_ioctl_x86_get_vcpu_events(struct kvm_vcpu *vcpu,
events->interrupt.shadow = static_call(kvm_x86_get_interrupt_shadow)(vcpu);

events->nmi.injected = vcpu->arch.nmi_injected;
- events->nmi.pending = vcpu->arch.nmi_pending != 0;
+ events->nmi.pending = vcpu->arch.nmi_pending;
events->nmi.masked = static_call(kvm_x86_get_nmi_mask)(vcpu);

/* events->sipi_vector is never valid when reporting to user space */
@@ -5200,8 +5200,11 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
events->interrupt.shadow);

vcpu->arch.nmi_injected = events->nmi.injected;
- if (events->flags & KVM_VCPUEVENT_VALID_NMI_PENDING)
+ if (events->flags & KVM_VCPUEVENT_VALID_NMI_PENDING) {
vcpu->arch.nmi_pending = events->nmi.pending;
+ if (vcpu->arch.nmi_pending)
+ kvm_make_request(KVM_REQ_NMI, vcpu);
+ }
static_call(kvm_x86_set_nmi_mask)(vcpu, events->nmi.masked);

if (events->flags & KVM_VCPUEVENT_VALID_SIPI_VECTOR &&
--
2.25.1


2023-02-27 03:57:45

by Santosh Shukla

Subject: [PATCHv3 07/10] x86/cpu: Add CPUID feature bit for VNMI

The VNMI feature allows the hypervisor to inject an NMI into the guest
without using the event injection mechanism. The benefit of VNMI over
event injection is that it does not require tracking the guest's NMI
state or intercepting IRET to detect NMI completion. VNMI achieves this
by exposing 3 capability bits in the VMCB intr_ctrl field, which help
with virtualizing NMI injection and NMI masking.

The presence of this feature is indicated via CPUID function
0x8000000A_EDX[25].

Reviewed-by: Maxim Levitsky <[email protected]>
Signed-off-by: Santosh Shukla <[email protected]>
---
arch/x86/include/asm/cpufeatures.h | 1 +
1 file changed, 1 insertion(+)
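
As an aside (not part of the patch), a hypothetical user-space check for
the bit described above, assuming GCC's <cpuid.h> helper:

#include <cpuid.h>
#include <stdbool.h>

static bool cpu_has_amd_vnmi(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* CPUID Fn8000_000A (SVM features); vNMI is reported in EDX[25]. */
	if (!__get_cpuid(0x8000000A, &eax, &ebx, &ecx, &edx))
		return false;

	return edx & (1u << 25);
}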

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index cdb7e1492311..b3ae49f36008 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -365,6 +365,7 @@
#define X86_FEATURE_VGIF (15*32+16) /* Virtual GIF */
#define X86_FEATURE_X2AVIC (15*32+18) /* Virtual x2apic */
#define X86_FEATURE_V_SPEC_CTRL (15*32+20) /* Virtual SPEC_CTRL */
+#define X86_FEATURE_AMD_VNMI (15*32+25) /* Virtual NMI */
#define X86_FEATURE_SVME_ADDR_CHK (15*32+28) /* "" SVME addr check */

/* Intel-defined CPU features, CPUID level 0x00000007:0 (ECX), word 16 */
--
2.25.1


2023-02-27 03:57:49

by Santosh Shukla

Subject: [PATCHv3 08/10] KVM: SVM: Add VNMI bit definition

VNMI exposes 3 capability bits (V_NMI, V_NMI_MASK, and V_NMI_ENABLE) to
virtualize NMI and NMI masking. These capability bits are part of
VMCB::intr_ctrl:

V_NMI_PENDING_MASK (bit 11) - Indicates whether a virtual NMI is
pending in the guest.
V_NMI_BLOCKING_MASK (bit 12) - Indicates whether a virtual NMI is
masked in the guest.
V_NMI_ENABLE_MASK (bit 26) - Enables the NMI virtualization feature
for the guest.

When the hypervisor wants to inject an NMI, it sets the V_NMI bit. The
processor clears the V_NMI bit and sets V_NMI_MASK, which indicates
that the guest is handling the NMI. After the guest has handled the
NMI, the processor clears V_NMI_MASK on successful completion of the
IRET instruction, or if a VMEXIT occurs while delivering the virtual
NMI.

To enable the VNMI capability, the hypervisor needs to set the
V_NMI_ENABLE bit to 1.

Reviewed-by: Maxim Levitsky <[email protected]>
Signed-off-by: Santosh Shukla <[email protected]>
---
v3:
- Renamed V_NMI bits per Sean's v2 comment for
better readability.
https://lore.kernel.org/all/[email protected]/

arch/x86/include/asm/svm.h | 9 +++++++++
1 file changed, 9 insertions(+)
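
For illustration only (not part of the patch), a minimal sketch of how a
hypervisor could use the definitions added below to queue a virtual NMI;
the function name is made up for the example:

static void example_queue_vnmi(struct vmcb_control_area *ctrl)
{
	/* NMI virtualization must be enabled for the guest. */
	ctrl->int_ctl |= V_NMI_ENABLE_MASK;

	/*
	 * Request delivery of one virtual NMI.  The processor clears this
	 * bit and sets V_NMI_BLOCKING_MASK when it takes the NMI, and
	 * clears V_NMI_BLOCKING_MASK again when the guest's IRET completes.
	 */
	ctrl->int_ctl |= V_NMI_PENDING_MASK;
}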

diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index cb1ee53ad3b1..9691081d9231 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -183,6 +183,12 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
#define V_GIF_SHIFT 9
#define V_GIF_MASK (1 << V_GIF_SHIFT)

+#define V_NMI_PENDING_SHIFT 11
+#define V_NMI_PENDING_MASK (1 << V_NMI_PENDING_SHIFT)
+
+#define V_NMI_BLOCKING_SHIFT 12
+#define V_NMI_BLOCKING_MASK (1 << V_NMI_BLOCKING_SHIFT)
+
#define V_INTR_PRIO_SHIFT 16
#define V_INTR_PRIO_MASK (0x0f << V_INTR_PRIO_SHIFT)

@@ -197,6 +203,9 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
#define V_GIF_ENABLE_SHIFT 25
#define V_GIF_ENABLE_MASK (1 << V_GIF_ENABLE_SHIFT)

+#define V_NMI_ENABLE_SHIFT 26
+#define V_NMI_ENABLE_MASK (1 << V_NMI_ENABLE_SHIFT)
+
#define AVIC_ENABLE_SHIFT 31
#define AVIC_ENABLE_MASK (1 << AVIC_ENABLE_SHIFT)

--
2.25.1


2023-02-27 03:58:20

by Santosh Shukla

Subject: [PATCHv3 09/10] KVM: x86: add support for delayed virtual NMI injection interface

Introduce two new vendor callbacks to support delayed virtual NMI
injection, e.g. the vNMI feature of SVM:

- kvm_x86_is_vnmi_pending()
- kvm_x86_set_vnmi_pending()

Using these callbacks, KVM can take advantage of the hardware's
accelerated delayed NMI delivery (currently vNMI on SVM).

Once an NMI is set to pending via this interface, it is assumed that
the hardware will deliver the NMI on its own to the guest once
all the x86 conditions for the NMI delivery are met.

Note that the kvm_x86_set_vnmi_pending() callback is allowed
to fail, in which case a normal NMI injection will be attempted
when the NMI can be delivered (possibly by using an NMI window).

With vNMI, that can happen either if a vNMI is already pending or
if a nested guest is running.

When the vNMI injection fails due to the 'vNMI is already pending'
condition, the new NMI will be dropped unless the new NMI can be
injected immediately, so no NMI window will be requested.

Use the .set_vnmi_pending() callback to make pending NMIs visible to
the hardware for AMD's VNMI feature.

Note that vNMI doesn't need the NMI window mechanism to pend a new
virtual NMI, and that KVM can now detect an NMI made pending by
userspace (via the KVM_VCPUEVENT_VALID_NMI_PENDING flag) and pend it
by raising a KVM_REQ_NMI request.

Signed-off-by: Santosh Shukla <[email protected]>
Co-developed-by: Maxim Levitsky <[email protected]>
Signed-off-by: Maxim Levitsky <[email protected]>
---
v3:
- Fixed SOB.
- Merged the V_NMI implementation with the x86_ops delayed NMI
API proposal for better readability.
- Added an early WARN_ON for the VNMI case in svm_enable_nmi_window.
- Indentation and style fixes per v2 comments.
- Removed the `svm->nmi_masked` check from svm_enable_nmi_window
and replaced it with svm_get_nmi_mask().
- Note that I am keeping the kvm_get_total_nmi_pending() logic
as in v2, since `events->nmi.pending` is a u8, not a boolean.
https://lore.kernel.org/all/Y9mwz%[email protected]/
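
As a reading aid only (not part of the patch), a condensed sketch of how
the common x86 code is intended to use the two optional callbacks; see
the x86.c hunk below for the real logic:

static void example_process_nmi(struct kvm_vcpu *vcpu, unsigned int limit)
{
	/* An NMI already pending in hardware (vNMI) consumes one slot. */
	if (static_call(kvm_x86_is_vnmi_pending)(vcpu))
		limit--;

	vcpu->arch.nmi_pending += atomic_xchg(&vcpu->arch.nmi_queued, 0);
	vcpu->arch.nmi_pending = min(vcpu->arch.nmi_pending, limit);

	/* Try to hand one NMI to hardware; the callback may refuse it. */
	if (vcpu->arch.nmi_pending &&
	    static_call(kvm_x86_set_vnmi_pending)(vcpu))
		vcpu->arch.nmi_pending--;

	/* Anything left is injected via the classic software path. */
	if (vcpu->arch.nmi_pending)
		kvm_make_request(KVM_REQ_EVENT, vcpu);
}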

arch/x86/include/asm/kvm-x86-ops.h | 2 +
arch/x86/include/asm/kvm_host.h | 11 ++-
arch/x86/kvm/svm/svm.c | 113 +++++++++++++++++++++++------
arch/x86/kvm/svm/svm.h | 22 ++++++
arch/x86/kvm/x86.c | 26 ++++++-
5 files changed, 147 insertions(+), 27 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 8dc345cc6318..092ef2398857 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -68,6 +68,8 @@ KVM_X86_OP(get_interrupt_shadow)
KVM_X86_OP(patch_hypercall)
KVM_X86_OP(inject_irq)
KVM_X86_OP(inject_nmi)
+KVM_X86_OP_OPTIONAL_RET0(is_vnmi_pending)
+KVM_X86_OP_OPTIONAL_RET0(set_vnmi_pending)
KVM_X86_OP(inject_exception)
KVM_X86_OP(cancel_injection)
KVM_X86_OP(interrupt_allowed)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 792a6037047a..f8a44c6c8633 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -878,7 +878,11 @@ struct kvm_vcpu_arch {
u64 tsc_scaling_ratio; /* current scaling ratio */

atomic_t nmi_queued; /* unprocessed asynchronous NMIs */
- unsigned nmi_pending; /* NMI queued after currently running handler */
+ /*
+ * NMI queued after currently running handler
+ * (not including a hardware pending NMI (e.g vNMI))
+ */
+ unsigned int nmi_pending;
bool nmi_injected; /* Trying to inject an NMI this entry */
bool smi_pending; /* SMI queued after currently running handler */
u8 handling_intr_from_guest;
@@ -1640,6 +1644,10 @@ struct kvm_x86_ops {
int (*nmi_allowed)(struct kvm_vcpu *vcpu, bool for_injection);
bool (*get_nmi_mask)(struct kvm_vcpu *vcpu);
void (*set_nmi_mask)(struct kvm_vcpu *vcpu, bool masked);
+ /* returns true, if a NMI is pending injection on hardware level (e.g vNMI) */
+ bool (*is_vnmi_pending)(struct kvm_vcpu *vcpu);
+ /* attempts make a NMI pending via hardware interface (e.g vNMI) */
+ bool (*set_vnmi_pending)(struct kvm_vcpu *vcpu);
void (*enable_nmi_window)(struct kvm_vcpu *vcpu);
void (*enable_irq_window)(struct kvm_vcpu *vcpu);
void (*update_cr8_intercept)(struct kvm_vcpu *vcpu, int tpr, int irr);
@@ -2004,6 +2012,7 @@ int kvm_pic_set_irq(struct kvm_pic *pic, int irq, int irq_source_id, int level);
void kvm_pic_clear_all(struct kvm_pic *pic, int irq_source_id);

void kvm_inject_nmi(struct kvm_vcpu *vcpu);
+int kvm_get_total_nmi_pending(struct kvm_vcpu *vcpu);

void kvm_update_dr7(struct kvm_vcpu *vcpu);

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index da936723e8ca..84d9d2566629 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -230,6 +230,8 @@ module_param(dump_invalid_vmcb, bool, 0644);
bool intercept_smi = true;
module_param(intercept_smi, bool, 0444);

+bool vnmi = true;
+module_param(vnmi, bool, 0444);

static bool svm_gp_erratum_intercept = true;

@@ -1311,6 +1313,9 @@ static void init_vmcb(struct kvm_vcpu *vcpu)
if (kvm_vcpu_apicv_active(vcpu))
avic_init_vmcb(svm, vmcb);

+ if (vnmi)
+ svm->vmcb->control.int_ctl |= V_NMI_ENABLE_MASK;
+
if (vgif) {
svm_clr_intercept(svm, INTERCEPT_STGI);
svm_clr_intercept(svm, INTERCEPT_CLGI);
@@ -3508,6 +3513,38 @@ static void svm_inject_nmi(struct kvm_vcpu *vcpu)
++vcpu->stat.nmi_injections;
}

+static bool svm_is_vnmi_pending(struct kvm_vcpu *vcpu)
+{
+ struct vcpu_svm *svm = to_svm(vcpu);
+
+ if (!is_vnmi_enabled(svm))
+ return false;
+
+ return !!(svm->vmcb->control.int_ctl & V_NMI_BLOCKING_MASK);
+}
+
+static bool svm_set_vnmi_pending(struct kvm_vcpu *vcpu)
+{
+ struct vcpu_svm *svm = to_svm(vcpu);
+
+ if (!is_vnmi_enabled(svm))
+ return false;
+
+ if (svm->vmcb->control.int_ctl & V_NMI_PENDING_MASK)
+ return false;
+
+ svm->vmcb->control.int_ctl |= V_NMI_PENDING_MASK;
+ vmcb_mark_dirty(svm->vmcb, VMCB_INTR);
+
+ /*
+ * NMI isn't yet technically injected but
+ * this rough estimation should be good enough
+ */
+ ++vcpu->stat.nmi_injections;
+
+ return true;
+}
+
static void svm_inject_irq(struct kvm_vcpu *vcpu, bool reinjected)
{
struct vcpu_svm *svm = to_svm(vcpu);
@@ -3603,6 +3640,35 @@ static void svm_update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
svm_set_intercept(svm, INTERCEPT_CR8_WRITE);
}

+static bool svm_get_nmi_mask(struct kvm_vcpu *vcpu)
+{
+ struct vcpu_svm *svm = to_svm(vcpu);
+
+ if (is_vnmi_enabled(svm))
+ return svm->vmcb->control.int_ctl & V_NMI_BLOCKING_MASK;
+ else
+ return svm->nmi_masked;
+}
+
+static void svm_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked)
+{
+ struct vcpu_svm *svm = to_svm(vcpu);
+
+ if (is_vnmi_enabled(svm)) {
+ if (masked)
+ svm->vmcb->control.int_ctl |= V_NMI_BLOCKING_MASK;
+ else
+ svm->vmcb->control.int_ctl &= ~V_NMI_BLOCKING_MASK;
+
+ } else {
+ svm->nmi_masked = masked;
+ if (masked)
+ svm_set_iret_intercept(svm);
+ else
+ svm_clr_iret_intercept(svm);
+ }
+}
+
bool svm_nmi_blocked(struct kvm_vcpu *vcpu)
{
struct vcpu_svm *svm = to_svm(vcpu);
@@ -3614,8 +3680,10 @@ bool svm_nmi_blocked(struct kvm_vcpu *vcpu)
if (is_guest_mode(vcpu) && nested_exit_on_nmi(svm))
return false;

- return (vmcb->control.int_state & SVM_INTERRUPT_SHADOW_MASK) ||
- svm->nmi_masked;
+ if (svm_get_nmi_mask(vcpu))
+ return true;
+
+ return vmcb->control.int_state & SVM_INTERRUPT_SHADOW_MASK;
}

static int svm_nmi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
@@ -3633,24 +3701,6 @@ static int svm_nmi_allowed(struct kvm_vcpu *vcpu, bool for_injection)
return 1;
}

-static bool svm_get_nmi_mask(struct kvm_vcpu *vcpu)
-{
- return to_svm(vcpu)->nmi_masked;
-}
-
-static void svm_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked)
-{
- struct vcpu_svm *svm = to_svm(vcpu);
-
- if (masked) {
- svm->nmi_masked = true;
- svm_set_iret_intercept(svm);
- } else {
- svm->nmi_masked = false;
- svm_clr_iret_intercept(svm);
- }
-}
-
bool svm_interrupt_blocked(struct kvm_vcpu *vcpu)
{
struct vcpu_svm *svm = to_svm(vcpu);
@@ -3731,7 +3781,14 @@ static void svm_enable_nmi_window(struct kvm_vcpu *vcpu)
{
struct vcpu_svm *svm = to_svm(vcpu);

- if (svm->nmi_masked && !svm->awaiting_iret_completion)
+ /*
+ * An NMI window is not needed when vNMI is enabled.  If this point
+ * is reached anyway, WARN and continue with the single-step logic
+ * below.
+ */
+ WARN_ON_ONCE(is_vnmi_enabled(svm));
+
+ if (svm_get_nmi_mask(vcpu) && !svm->awaiting_iret_completion)
return; /* IRET will cause a vm exit */

if (!gif_set(svm)) {
@@ -3745,8 +3802,8 @@ static void svm_enable_nmi_window(struct kvm_vcpu *vcpu)
* problem (IRET or exception injection or interrupt shadow)
*/
svm->nmi_singlestep_guest_rflags = svm_get_rflags(vcpu);
- svm->nmi_singlestep = true;
svm->vmcb->save.rflags |= (X86_EFLAGS_TF | X86_EFLAGS_RF);
+ svm->nmi_singlestep = true;
}

static void svm_flush_tlb_current(struct kvm_vcpu *vcpu)
@@ -4780,6 +4837,8 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
.patch_hypercall = svm_patch_hypercall,
.inject_irq = svm_inject_irq,
.inject_nmi = svm_inject_nmi,
+ .is_vnmi_pending = svm_is_vnmi_pending,
+ .set_vnmi_pending = svm_set_vnmi_pending,
.inject_exception = svm_inject_exception,
.cancel_injection = svm_cancel_injection,
.interrupt_allowed = svm_interrupt_allowed,
@@ -5070,6 +5129,16 @@ static __init int svm_hardware_setup(void)
pr_info("Virtual GIF supported\n");
}

+ vnmi = vgif && vnmi && boot_cpu_has(X86_FEATURE_AMD_VNMI);
+ if (vnmi)
+ pr_info("Virtual NMI enabled\n");
+
+ if (!vnmi) {
+ svm_x86_ops.is_vnmi_pending = NULL;
+ svm_x86_ops.set_vnmi_pending = NULL;
+ }
+
+
if (lbrv) {
if (!boot_cpu_has(X86_FEATURE_LBRV))
lbrv = false;
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 839809972da1..fb48c347bbe0 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -36,6 +36,7 @@ extern bool npt_enabled;
extern int vgif;
extern bool intercept_smi;
extern bool x2avic_enabled;
+extern bool vnmi;

/*
* Clean bits in VMCB.
@@ -548,6 +549,27 @@ static inline bool is_x2apic_msrpm_offset(u32 offset)
(msr < (APIC_BASE_MSR + 0x100));
}

+static inline struct vmcb *get_vnmi_vmcb_l1(struct vcpu_svm *svm)
+{
+ if (!vnmi)
+ return NULL;
+
+ if (is_guest_mode(&svm->vcpu))
+ return NULL;
+ else
+ return svm->vmcb01.ptr;
+}
+
+static inline bool is_vnmi_enabled(struct vcpu_svm *svm)
+{
+ struct vmcb *vmcb = get_vnmi_vmcb_l1(svm);
+
+ if (vmcb)
+ return !!(vmcb->control.int_ctl & V_NMI_ENABLE_MASK);
+ else
+ return false;
+}
+
/* svm.c */
#define MSR_INVALID 0xffffffffU

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b22074f467e0..b5354249fe00 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5113,7 +5113,7 @@ static void kvm_vcpu_ioctl_x86_get_vcpu_events(struct kvm_vcpu *vcpu,
events->interrupt.shadow = static_call(kvm_x86_get_interrupt_shadow)(vcpu);

events->nmi.injected = vcpu->arch.nmi_injected;
- events->nmi.pending = vcpu->arch.nmi_pending;
+ events->nmi.pending = kvm_get_total_nmi_pending(vcpu);
events->nmi.masked = static_call(kvm_x86_get_nmi_mask)(vcpu);

/* events->sipi_vector is never valid when reporting to user space */
@@ -5201,9 +5201,9 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,

vcpu->arch.nmi_injected = events->nmi.injected;
if (events->flags & KVM_VCPUEVENT_VALID_NMI_PENDING) {
- vcpu->arch.nmi_pending = events->nmi.pending;
- if (vcpu->arch.nmi_pending)
- kvm_make_request(KVM_REQ_NMI, vcpu);
+ vcpu->arch.nmi_pending = 0;
+ atomic_set(&vcpu->arch.nmi_queued, events->nmi.pending);
+ kvm_make_request(KVM_REQ_NMI, vcpu);
}
static_call(kvm_x86_set_nmi_mask)(vcpu, events->nmi.masked);

@@ -10156,13 +10156,31 @@ static void process_nmi(struct kvm_vcpu *vcpu)
else
limit = 2;

+ /*
+ * Adjust the limit to account for pending virtual NMIs, which aren't
+ * tracked in vcpu->arch.nmi_pending.
+ */
+ if (static_call(kvm_x86_is_vnmi_pending)(vcpu))
+ limit--;
+
vcpu->arch.nmi_pending += atomic_xchg(&vcpu->arch.nmi_queued, 0);
vcpu->arch.nmi_pending = min(vcpu->arch.nmi_pending, limit);

+ if (vcpu->arch.nmi_pending &&
+ (static_call(kvm_x86_set_vnmi_pending)(vcpu)))
+ vcpu->arch.nmi_pending--;
+
if (vcpu->arch.nmi_pending)
kvm_make_request(KVM_REQ_EVENT, vcpu);
}

+/* Return total number of NMIs pending injection to the VM */
+int kvm_get_total_nmi_pending(struct kvm_vcpu *vcpu)
+{
+ return vcpu->arch.nmi_pending + static_call(kvm_x86_is_vnmi_pending)(vcpu);
+}
+
+
void kvm_make_scan_ioapic_request_mask(struct kvm *kvm,
unsigned long *vcpu_bitmap)
{
--
2.25.1


2023-02-27 04:01:00

by Santosh Shukla

Subject: [PATCHv3 10/10] KVM: nSVM: implement support for nested VNMI

Allow L1 to use vNMI to accelerate its injection of NMIs into L2 by
passing the vNMI int_ctl bits through between vmcb12 and vmcb02.

If both L1 and L2 use vNMI, copy the vNMI bits from vmcb12 to vmcb02
on nested entry and back on nested exit.
If L1 uses vNMI but L2 doesn't, copy the vNMI bits from vmcb01 to
vmcb02 on nested entry and back on nested exit.

Tested with KVM-unit-tests and a nested guest scenario.

Co-developed-by: Maxim Levitsky <[email protected]>
Signed-off-by: Maxim Levitsky <[email protected]>
Signed-off-by: Santosh Shukla <[email protected]>
---
v3:
- Fixed indentation and style issues.
- Fixed SOB.
- Removed use of the `svm->nmi_masked` variable for the nested SVM case.
- Reworded the commit description.
https://lore.kernel.org/all/[email protected]/

arch/x86/kvm/svm/nested.c | 33 +++++++++++++++++++++++++++++++++
arch/x86/kvm/svm/svm.c | 5 +++++
arch/x86/kvm/svm/svm.h | 6 ++++++
3 files changed, 44 insertions(+)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 74e9e9e76d77..b018fe2fdf88 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -281,6 +281,11 @@ static bool __nested_vmcb_check_controls(struct kvm_vcpu *vcpu,
if (CC(!nested_svm_check_tlb_ctl(vcpu, control->tlb_ctl)))
return false;

+ if (CC((control->int_ctl & V_NMI_ENABLE_MASK) &&
+ !vmcb12_is_intercept(control, INTERCEPT_NMI))) {
+ return false;
+ }
+
return true;
}

@@ -436,6 +441,9 @@ void nested_sync_control_from_vmcb02(struct vcpu_svm *svm)
if (nested_vgif_enabled(svm))
mask |= V_GIF_MASK;

+ if (nested_vnmi_enabled(svm))
+ mask |= V_NMI_BLOCKING_MASK | V_NMI_PENDING_MASK;
+
svm->nested.ctl.int_ctl &= ~mask;
svm->nested.ctl.int_ctl |= svm->vmcb->control.int_ctl & mask;
}
@@ -655,6 +663,17 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm,
else
int_ctl_vmcb01_bits |= (V_GIF_MASK | V_GIF_ENABLE_MASK);

+ if (vnmi) {
+ if (vmcb01->control.int_ctl & V_NMI_PENDING_MASK) {
+ svm->vcpu.arch.nmi_pending++;
+ kvm_make_request(KVM_REQ_EVENT, &svm->vcpu);
+ }
+ if (nested_vnmi_enabled(svm))
+ int_ctl_vmcb12_bits |= (V_NMI_PENDING_MASK |
+ V_NMI_ENABLE_MASK |
+ V_NMI_BLOCKING_MASK);
+ }
+
/* Copied from vmcb01. msrpm_base can be overwritten later. */
vmcb02->control.nested_ctl = vmcb01->control.nested_ctl;
vmcb02->control.iopm_base_pa = vmcb01->control.iopm_base_pa;
@@ -1058,6 +1077,20 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
svm_update_lbrv(vcpu);
}

+ if (vnmi) {
+ if (vmcb02->control.int_ctl & V_NMI_BLOCKING_MASK)
+ vmcb01->control.int_ctl |= V_NMI_BLOCKING_MASK;
+ else
+ vmcb01->control.int_ctl &= ~V_NMI_BLOCKING_MASK;
+
+ if (vcpu->arch.nmi_pending) {
+ vcpu->arch.nmi_pending--;
+ vmcb01->control.int_ctl |= V_NMI_PENDING_MASK;
+ } else
+ vmcb01->control.int_ctl &= ~V_NMI_PENDING_MASK;
+
+ }
+
/*
* On vmexit the GIF is set to false and
* no event can be injected in L1.
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 84d9d2566629..08b7856e2da2 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4226,6 +4226,8 @@ static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)

svm->vgif_enabled = vgif && guest_cpuid_has(vcpu, X86_FEATURE_VGIF);

+ svm->vnmi_enabled = vnmi && guest_cpuid_has(vcpu, X86_FEATURE_AMD_VNMI);
+
svm_recalc_instruction_intercepts(vcpu, svm);

/* For sev guests, the memory encryption bit is not reserved in CR3. */
@@ -4981,6 +4983,9 @@ static __init void svm_set_cpu_caps(void)
if (vgif)
kvm_cpu_cap_set(X86_FEATURE_VGIF);

+ if (vnmi)
+ kvm_cpu_cap_set(X86_FEATURE_AMD_VNMI);
+
/* Nested VM can receive #VMEXIT instead of triggering #GP */
kvm_cpu_cap_set(X86_FEATURE_SVME_ADDR_CHK);
}
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index fb48c347bbe0..e229eadbf1ce 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -266,6 +266,7 @@ struct vcpu_svm {
bool pause_filter_enabled : 1;
bool pause_threshold_enabled : 1;
bool vgif_enabled : 1;
+ bool vnmi_enabled : 1;

u32 ldr_reg;
u32 dfr_reg;
@@ -540,6 +541,11 @@ static inline bool nested_npt_enabled(struct vcpu_svm *svm)
return svm->nested.ctl.nested_ctl & SVM_NESTED_CTL_NP_ENABLE;
}

+static inline bool nested_vnmi_enabled(struct vcpu_svm *svm)
+{
+ return svm->vnmi_enabled && (svm->nested.ctl.int_ctl & V_NMI_ENABLE_MASK);
+}
+
static inline bool is_x2apic_msrpm_offset(u32 offset)
{
/* 4 msrs per u8, and 4 u8 in u32 */
--
2.25.1


2023-02-27 04:49:06

by Santosh Shukla

Subject: Re: [PATCHv3 00/10] SVM: virtual NMI



On 2/27/2023 9:23 AM, Santosh Shukla wrote:
> v2:
> https://lore.kernel.org/all/[email protected]/
>
> v3:

Please ignore this series; I missed sending one additional patch in it:
"0001-KVM-nSVM-Don-t-sync-vmcb02-V_IRQ-back-to-vmcb12-if-K.patch"
which came out of a v2 review comment. I will send v4 right away. Sorry for the noise!

Thanks,
Santosh
