2020-09-15 19:24:31

by Sean Christopherson

Subject: [PATCH v2 0/2] KVM: VMX: Clean up IRQ/NMI handling

Clean up KVM's handling of IRQ and NMI exits to move the invocation of the
IRQ handler to a standalone assembly routine, and to then consolidate the
NMI handling to use the same indirect call approach instead of using INTn.

The IRQ cleanup was suggested by Josh Poimboeuf in the context of a false
positive objtool warning[*]. I believe Josh intended to use UNWIND hints
instead of trickery to avoid objtool complaints. I opted for trickery in
the form of a redundant, but explicit, restoration of RSP after the hidden
IRET. AFAICT, there are no existing UNWIND hints that would let objtool
know that the stack is magically being restored, and adding a new hint to
save a single MOV <reg>, <reg> instruction seemed like overkill.
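
Concretely, the "trickery" is just the epilogue of the new assembly
subroutine in patch 1; the MOV is redundant at runtime because the hidden
IRET has already unwound RSP to the correct value, but it keeps objtool's
model of the stack intact:

    mov %_ASM_BP, %_ASM_SP      /* runtime no-op; satisfies objtool */
    pop %_ASM_BP
    ret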

The NMI consolidation was loosely suggested by Andi Kleen. Andi's actual
suggestion was to export and directly call the NMI handler, but that's a
more involved change (unless I'm misunderstanding the wants of the NMI
handler), whereas piggybacking the IRQ code is simple and seems like a
worthwhile intermediate step.
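
In code, the piggybacking boils down to the shared helper added in patch 2,
which looks up the vector's entry point in the host IDT and invokes it via
the new assembly subroutine, bracketing both the IRQ and NMI paths with the
same kvm_before/after_interrupt() calls:

    static void handle_interrupt_nmi_irqoff(struct kvm_vcpu *vcpu, u32 intr_info)
    {
            unsigned int vector = intr_info & INTR_INFO_VECTOR_MASK;
            gate_desc *desc = (gate_desc *)host_idt_base + vector;

            kvm_before_interrupt(vcpu);
            vmx_do_interrupt_nmi_irqoff(gate_offset(desc));
            kvm_after_interrupt(vcpu);
    }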

Sean Christopherson (2):
KVM: VMX: Move IRQ invocation to assembly subroutine
KVM: VMX: Invoke NMI handler via indirect call instead of INTn

arch/x86/kvm/vmx/vmenter.S | 34 +++++++++++++++++++++
arch/x86/kvm/vmx/vmx.c | 61 +++++++++++---------------------------
2 files changed, 51 insertions(+), 44 deletions(-)

--
2.28.0


2020-09-15 19:26:32

by Sean Christopherson

Subject: [PATCH v2 2/2] KVM: VMX: Invoke NMI handler via indirect call instead of INTn

Rework NMI VM-Exit handling to invoke the kernel handler by function
call instead of INTn. INTn microcode is relatively expensive, and
aligning the IRQ and NMI handling will make it easier to update KVM
should some newfangled method for invoking the handlers come along.

Suggested-by: Andi Kleen <[email protected]>
Signed-off-by: Sean Christopherson <[email protected]>
---
arch/x86/kvm/vmx/vmx.c | 30 +++++++++++++++---------------
1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 391f079d9136..b0eca151931d 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6411,40 +6411,40 @@ static void vmx_apicv_post_state_restore(struct kvm_vcpu *vcpu)

void vmx_do_interrupt_nmi_irqoff(unsigned long entry);

+static void handle_interrupt_nmi_irqoff(struct kvm_vcpu *vcpu, u32 intr_info)
+{
+ unsigned int vector = intr_info & INTR_INFO_VECTOR_MASK;
+ gate_desc *desc = (gate_desc *)host_idt_base + vector;
+
+ kvm_before_interrupt(vcpu);
+ vmx_do_interrupt_nmi_irqoff(gate_offset(desc));
+ kvm_after_interrupt(vcpu);
+}
+
static void handle_exception_nmi_irqoff(struct vcpu_vmx *vmx)
{
u32 intr_info = vmx_get_intr_info(&vmx->vcpu);

/* if exit due to PF check for async PF */
- if (is_page_fault(intr_info)) {
+ if (is_page_fault(intr_info))
vmx->vcpu.arch.apf.host_apf_flags = kvm_read_and_reset_apf_flags();
/* Handle machine checks before interrupts are enabled */
- } else if (is_machine_check(intr_info)) {
+ else if (is_machine_check(intr_info))
kvm_machine_check();
/* We need to handle NMIs before interrupts are enabled */
- } else if (is_nmi(intr_info)) {
- kvm_before_interrupt(&vmx->vcpu);
- asm("int $2");
- kvm_after_interrupt(&vmx->vcpu);
- }
+ else if (is_nmi(intr_info))
+ handle_interrupt_nmi_irqoff(&vmx->vcpu, intr_info);
}

static void handle_external_interrupt_irqoff(struct kvm_vcpu *vcpu)
{
- unsigned int vector;
- gate_desc *desc;
u32 intr_info = vmx_get_intr_info(vcpu);

if (WARN_ONCE(!is_external_intr(intr_info),
"KVM: unexpected VM-Exit interrupt info: 0x%x", intr_info))
return;

- vector = intr_info & INTR_INFO_VECTOR_MASK;
- desc = (gate_desc *)host_idt_base + vector;
-
- kvm_before_interrupt(vcpu);
- vmx_do_interrupt_nmi_irqoff(gate_offset(desc));
- kvm_after_interrupt(vcpu);
+ handle_interrupt_nmi_irqoff(vcpu, intr_info);
}

static void vmx_handle_exit_irqoff(struct kvm_vcpu *vcpu)
--
2.28.0

2020-09-15 19:27:15

by Sean Christopherson

Subject: [PATCH v2 1/2] KVM: VMX: Move IRQ invocation to assembly subroutine

Move the asm blob that invokes the appropriate IRQ handler after VM-Exit
into a proper subroutine. Unconditionally create a stack frame in the
subroutine so that, as objtool sees things, the function has standard
stack behavior. The dynamic stack adjustment makes using unwind hints
problematic.

Suggested-by: Josh Poimboeuf <[email protected]>
Cc: Uros Bizjak <[email protected]>
Signed-off-by: Sean Christopherson <[email protected]>
---
arch/x86/kvm/vmx/vmenter.S | 34 ++++++++++++++++++++++++++++++++++
arch/x86/kvm/vmx/vmx.c | 33 +++------------------------------
2 files changed, 37 insertions(+), 30 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
index 799db084a336..90ad7a6246e3 100644
--- a/arch/x86/kvm/vmx/vmenter.S
+++ b/arch/x86/kvm/vmx/vmenter.S
@@ -4,6 +4,7 @@
#include <asm/bitsperlong.h>
#include <asm/kvm_vcpu_regs.h>
#include <asm/nospec-branch.h>
+#include <asm/segment.h>

#define WORD_SIZE (BITS_PER_LONG / 8)

@@ -294,3 +295,36 @@ SYM_FUNC_START(vmread_error_trampoline)

ret
SYM_FUNC_END(vmread_error_trampoline)
+
+SYM_FUNC_START(vmx_do_interrupt_nmi_irqoff)
+ /*
+ * Unconditionally create a stack frame, getting the correct RSP on the
+ * stack (for x86-64) would take two instructions anyways, and RBP can
+ * be used to restore RSP to make objtool happy (see below).
+ */
+ push %_ASM_BP
+ mov %_ASM_SP, %_ASM_BP
+
+#ifdef CONFIG_X86_64
+ /*
+ * Align RSP to a 16-byte boundary (to emulate CPU behavior) before
+ * creating the synthetic interrupt stack frame for the IRQ/NMI.
+ */
+ and $-16, %rsp
+ push $__KERNEL_DS
+ push %rbp
+#endif
+ pushf
+ push $__KERNEL_CS
+ CALL_NOSPEC _ASM_ARG1
+
+ /*
+ * "Restore" RSP from RBP, even though IRET has already unwound RSP to
+ * the correct value. objtool doesn't know the callee will IRET and,
+ * without the explicit restore, thinks the stack is getting walloped.
+ * Using an unwind hint is problematic due to x86-64's dynamic alignment.
+ */
+ mov %_ASM_BP, %_ASM_SP
+ pop %_ASM_BP
+ ret
+SYM_FUNC_END(vmx_do_interrupt_nmi_irqoff)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 46ba2e03a892..391f079d9136 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6409,6 +6409,8 @@ static void vmx_apicv_post_state_restore(struct kvm_vcpu *vcpu)
memset(vmx->pi_desc.pir, 0, sizeof(vmx->pi_desc.pir));
}

+void vmx_do_interrupt_nmi_irqoff(unsigned long entry);
+
static void handle_exception_nmi_irqoff(struct vcpu_vmx *vmx)
{
u32 intr_info = vmx_get_intr_info(&vmx->vcpu);
@@ -6430,10 +6432,6 @@ static void handle_exception_nmi_irqoff(struct vcpu_vmx *vmx)
static void handle_external_interrupt_irqoff(struct kvm_vcpu *vcpu)
{
unsigned int vector;
- unsigned long entry;
-#ifdef CONFIG_X86_64
- unsigned long tmp;
-#endif
gate_desc *desc;
u32 intr_info = vmx_get_intr_info(vcpu);

@@ -6443,36 +6441,11 @@ static void handle_external_interrupt_irqoff(struct kvm_vcpu *vcpu)

vector = intr_info & INTR_INFO_VECTOR_MASK;
desc = (gate_desc *)host_idt_base + vector;
- entry = gate_offset(desc);

kvm_before_interrupt(vcpu);
-
- asm volatile(
-#ifdef CONFIG_X86_64
- "mov %%rsp, %[sp]\n\t"
- "and $-16, %%rsp\n\t"
- "push %[ss]\n\t"
- "push %[sp]\n\t"
-#endif
- "pushf\n\t"
- "push %[cs]\n\t"
- CALL_NOSPEC
- :
-#ifdef CONFIG_X86_64
- [sp]"=&r"(tmp),
-#endif
- ASM_CALL_CONSTRAINT
- :
- [thunk_target]"r"(entry),
-#ifdef CONFIG_X86_64
- [ss]"i"(__KERNEL_DS),
-#endif
- [cs]"i"(__KERNEL_CS)
- );
-
+ vmx_do_interrupt_nmi_irqoff(gate_offset(desc));
kvm_after_interrupt(vcpu);
}
-STACK_FRAME_NON_STANDARD(handle_external_interrupt_irqoff);

static void vmx_handle_exit_irqoff(struct kvm_vcpu *vcpu)
{
--
2.28.0

2020-09-15 19:31:11

by Josh Poimboeuf

Subject: Re: [PATCH v2 1/2] KVM: VMX: Move IRQ invocation to assembly subroutine

On Tue, Sep 15, 2020 at 12:15:04PM -0700, Sean Christopherson wrote:
> Move the asm blob that invokes the appropriate IRQ handler after VM-Exit
> into a proper subroutine. Unconditionally create a stack frame in the
> subroutine so that, as objtool sees things, the function has standard
> stack behavior. The dynamic stack adjustment makes using unwind hints
> problematic.
>
> Suggested-by: Josh Poimboeuf <[email protected]>
> Cc: Uros Bizjak <[email protected]>
> Signed-off-by: Sean Christopherson <[email protected]>

Acked-by: Josh Poimboeuf <[email protected]>

--
Josh

2020-09-15 19:41:43

by Uros Bizjak

Subject: Re: [PATCH v2 1/2] KVM: VMX: Move IRQ invocation to assembly subroutine

On Tue, Sep 15, 2020 at 9:15 PM Sean Christopherson
<[email protected]> wrote:
>
> Move the asm blob that invokes the appropriate IRQ handler after VM-Exit
> into a proper subroutine. Unconditionally create a stack frame in the
> subroutine so that, as objtool sees things, the function has standard
> stack behavior. The dynamic stack adjustment makes using unwind hints
> problematic.
>
> Suggested-by: Josh Poimboeuf <[email protected]>
> Cc: Uros Bizjak <[email protected]>
> Signed-off-by: Sean Christopherson <[email protected]>

Acked-by: Uros Bizjak <[email protected]>

Uros.

2020-09-22 13:41:07

by Paolo Bonzini

Subject: Re: [PATCH v2 0/2] KVM: VMX: Clean up IRQ/NMI handling

On 15/09/20 21:15, Sean Christopherson wrote:
> Clean up KVM's handling of IRQ and NMI exits to move the invocation of the
> IRQ handler to a standalone assembly routine, and to then consolidate the
> NMI handling to use the same indirect call approach instead of using INTn.
>
> The IRQ cleanup was suggested by Josh Poimboeuf in the context of a false
> positive objtool warning[*]. I believe Josh intended to use UNWIND hints
> instead of trickery to avoid objtool complaints. I opted for trickery in
> the form of a redundant, but explicit, restoration of RSP after the hidden
> IRET. AFAICT, there are no existing UNWIND hints that would let objtool
> know that the stack is magically being restored, and adding a new hint to
> save a single MOV <reg>, <reg> instruction seemed like overkill.
>
> The NMI consolidation was loosely suggested by Andi Kleen. Andi's actual
> suggestion was to export and directly call the NMI handler, but that's a
> more involved change (unless I'm misunderstanding the wants of the NMI
> handler), whereas piggybacking the IRQ code is simple and seems like a
> worthwhile intermediate step.
>
> Sean Christopherson (2):
> KVM: VMX: Move IRQ invocation to assembly subroutine
> KVM: VMX: Invoke NMI handler via indirect call instead of INTn
>
> arch/x86/kvm/vmx/vmenter.S | 34 +++++++++++++++++++++
> arch/x86/kvm/vmx/vmx.c | 61 +++++++++++---------------------------
> 2 files changed, 51 insertions(+), 44 deletions(-)
>

Queued, thanks.

Paolo

2021-04-26 09:36:42

by Lai Jiangshan

Subject: Re: [PATCH v2 2/2] KVM: VMX: Invoke NMI handler via indirect call instead of INTn

Add CC: Andy Lutomirski
Add CC: Steven Rostedt

I think this patch broke NMI handling.

On Wed, Sep 16, 2020 at 3:27 AM Sean Christopherson
<[email protected]> wrote:
>
> Rework NMI VM-Exit handling to invoke the kernel handler by function
> call instead of INTn. INTn microcode is relatively expensive, and
> aligning the IRQ and NMI handling will make it easier to update KVM
> should some newfangled method for invoking the handlers come along.
>
> Suggested-by: Andi Kleen <[email protected]>
> Signed-off-by: Sean Christopherson <[email protected]>
> ---
> arch/x86/kvm/vmx/vmx.c | 30 +++++++++++++++---------------
> 1 file changed, 15 insertions(+), 15 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 391f079d9136..b0eca151931d 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -6411,40 +6411,40 @@ static void vmx_apicv_post_state_restore(struct kvm_vcpu *vcpu)
>
> void vmx_do_interrupt_nmi_irqoff(unsigned long entry);
>
> +static void handle_interrupt_nmi_irqoff(struct kvm_vcpu *vcpu, u32 intr_info)
> +{
> + unsigned int vector = intr_info & INTR_INFO_VECTOR_MASK;
> + gate_desc *desc = (gate_desc *)host_idt_base + vector;
> +
> + kvm_before_interrupt(vcpu);
> + vmx_do_interrupt_nmi_irqoff(gate_offset(desc));
> + kvm_after_interrupt(vcpu);
> +}
> +
> static void handle_exception_nmi_irqoff(struct vcpu_vmx *vmx)
> {
> u32 intr_info = vmx_get_intr_info(&vmx->vcpu);
>
> /* if exit due to PF check for async PF */
> - if (is_page_fault(intr_info)) {
> + if (is_page_fault(intr_info))
> vmx->vcpu.arch.apf.host_apf_flags = kvm_read_and_reset_apf_flags();
> /* Handle machine checks before interrupts are enabled */
> - } else if (is_machine_check(intr_info)) {
> + else if (is_machine_check(intr_info))
> kvm_machine_check();
> /* We need to handle NMIs before interrupts are enabled */
> - } else if (is_nmi(intr_info)) {
> - kvm_before_interrupt(&vmx->vcpu);
> - asm("int $2");
> - kvm_after_interrupt(&vmx->vcpu);
> - }
> + else if (is_nmi(intr_info))
> + handle_interrupt_nmi_irqoff(&vmx->vcpu, intr_info);
> }

When handle_interrupt_nmi_irqoff() is called, we may lose the
CPU-hidden-NMI-masked state due to IRET of #DB, #BP or other traps
between VMEXIT and handle_interrupt_nmi_irqoff().

But the NMI handler in the Linux kernel *expects* the CPU-hidden-NMI-masked
state to still be set in the CPU, so that no nested NMI intrudes into the
beginning of the handler.

The original code "int $2" can provide the needed CPU-hidden-NMI-masked
state when entering #NMI, but I doubt this change does.

I may have missed something, especially since I haven't read all of the
earlier discussion about the change. More importantly, I haven't found the
original suggestion from Andi Kleen. (Quote from the cover letter):

The NMI consolidation was loosely suggested by Andi Kleen. Andi's actual
suggestion was to export and directly call the NMI handler, but that's a
more involved change (unless I'm misunderstanding the wants of the NMI
handler), whereas piggybacking the IRQ code is simple and seems like a
worthwhile intermediate step.
(End of quote)

I think we need to change it back, or change it to call the NMI handler
immediately after VMEXIT, before leaving the "noinstr" section if needed.
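
To illustrate, here is a rough sketch of the direct-call variant (NOT a
working patch: exc_nmi() is not exported today, and the pt_regs argument
is hand-waved, so treat the names as illustrative only):

    /* Rough sketch only: exc_nmi() is not an existing export, and a
     * real version would have to supply valid pt_regs. */
    void exc_nmi(struct pt_regs *regs);

    static void handle_nmi_irqoff(struct kvm_vcpu *vcpu)
    {
            kvm_before_interrupt(vcpu);
            exc_nmi(NULL);          /* placeholder: needs real regs */
            kvm_after_interrupt(vcpu);
    }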

Thanks,
Lai

2021-04-26 10:42:41

by Paolo Bonzini

Subject: Re: [PATCH v2 2/2] KVM: VMX: Invoke NMI handler via indirect call instead of INTn

On 26/04/21 11:33, Lai Jiangshan wrote:
> When handle_interrupt_nmi_irqoff() is called, we may lose the
> CPU-hidden-NMI-masked state due to IRET of #DB, #BP or other traps
> between VMEXIT and handle_interrupt_nmi_irqoff().
>
> But the NMI handler in the Linux kernel *expects* the CPU-hidden-NMI-masked
> state to still be set in the CPU, so that no nested NMI intrudes into the
> beginning of the handler.
>
> The original code "int $2" can provide the needed CPU-hidden-NMI-masked
> state when entering #NMI, but I doubt this change does.

How would "int $2" block NMIs? The hidden effect of this change (and I
should have reviewed better the effect on the NMI entry code) is that
the call will not use the IST anymore.

However, I'm not sure which of the two situations is better: entering
the NMI handler on the IST without setting the hidden NMI-blocked flag
could be a recipe for bad things as well.
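
(For context, the NMI vector is installed as an IST gate in the host IDT;
the IST stack switch happens only on delivery through the IDT, i.e. a
hardware NMI or INTn, never on a plain indirect CALL to the gate's offset.
From memory, so treat it as an approximation, the relevant setup in
arch/x86/kernel/idt.c looks like:

    ISTG(X86_TRAP_NMI, asm_exc_nmi, IST_INDEX_NMI),

which is why the new code now reaches asm_exc_nmi on whatever stack KVM
happens to be on.)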

Paolo

2021-04-26 11:46:30

by Maxim Levitsky

Subject: Re: [PATCH v2 2/2] KVM: VMX: Invoke NMI handler via indirect call instead of INTn

On Mon, 2021-04-26 at 12:40 +0200, Paolo Bonzini wrote:
> On 26/04/21 11:33, Lai Jiangshan wrote:
> > When handle_interrupt_nmi_irqoff() is called, we may lose the
> > CPU-hidden-NMI-masked state due to IRET of #DB, #BP or other traps
> > between VMEXIT and handle_interrupt_nmi_irqoff().
> >
> > But the NMI handler in the Linux kernel *expects* the CPU-hidden-NMI-masked
> > state to still be set in the CPU, so that no nested NMI intrudes into the
> > beginning of the handler.
> >
> > The original code "int $2" can provide the needed CPU-hidden-NMI-masked
> > state when entering #NMI, but I doubt this change does.
>
> How would "int $2" block NMIs? The hidden effect of this change (and I
> should have reviewed better the effect on the NMI entry code) is that
> the call will not use the IST anymore.
>
> However, I'm not sure which of the two situations is better: entering
> the NMI handler on the IST without setting the hidden NMI-blocked flag
> could be a recipe for bad things as well.

If I understand this correctly, we can't really set the NMI blocked flag
on Intel, but only keep it from being cleared by an iret after it
was set by the intercepted NMI.

Thus the goal of this patchset was to make sure that we don't call any
interrupt handlers that can do an iret before we call the NMI handler.

Indeed I don't think that doing int $2 helps, unless I'm missing something.
We just need to make sure that we call the NMI handler as soon as possible.


If only Intel had the GI flag....


My 0.2 cents.

Best regards,
Maxim Levitsky
>
> Paolo
>


2021-04-26 14:00:43

by Steven Rostedt

Subject: Re: [PATCH v2 2/2] KVM: VMX: Invoke NMI handler via indirect call instead of INTn

On Mon, 26 Apr 2021 14:44:49 +0300
Maxim Levitsky <[email protected]> wrote:

> On Mon, 2021-04-26 at 12:40 +0200, Paolo Bonzini wrote:
> > On 26/04/21 11:33, Lai Jiangshan wrote:
> > > When handle_interrupt_nmi_irqoff() is called, we may lose the
> > > CPU-hidden-NMI-masked state due to IRET of #DB, #BP or other traps
> > > between VMEXIT and handle_interrupt_nmi_irqoff().
> > >
> > > But the NMI handler in the Linux kernel *expects* the CPU-hidden-NMI-masked
> > > state to still be set in the CPU, so that no nested NMI intrudes into the
> > > beginning of the handler.

This is incorrect. The Linux kernel has for some time handled the case of
nested NMIs. It had to, to implement the ftrace breakpoint updates, as they
would trigger an int3 in an NMI, which would "unmask" the NMIs. There was
also a long-standing bug where a page fault could do the same (the reason
you could never dump all tasks from NMI without triple faulting!).

But that's been fixed a long time ago, and I even wrote an LWN article
about it ;-)

https://lwn.net/Articles/484932/

The NMI handler can handle the case of nested NMIs: it implements a
software "latch" to remember that another NMI needs to be executed if a
nested one arrives, and it runs the latched NMI after the first one has
finished.
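
Schematically, the latch behaves like this simplified C model (illustrative
only; the real implementation lives in the x86 entry asm and keeps its
state on the NMI stack):

    /* Simplified, illustrative model of the nested-NMI "latch". */
    static bool nmi_executing;
    static bool nmi_latched;

    static void do_nmi_work(void) { /* process the NMI sources */ }

    void nmi_entry(void)
    {
            if (nmi_executing) {
                    /* Nested NMI: latch it and return immediately. */
                    nmi_latched = true;
                    return;
            }
            nmi_executing = true;
    again:
            do_nmi_work();
            if (nmi_latched) {
                    /* Replay the NMI that arrived while we were busy. */
                    nmi_latched = false;
                    goto again;
            }
            nmi_executing = false;
    }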

-- Steve

2021-04-26 14:52:00

by Andi Kleen

Subject: Re: [PATCH v2 2/2] KVM: VMX: Invoke NMI handler via indirect call instead of INTn

> > The original code "int $2" can provide the needed CPU-hidden-NMI-masked
> > state when entering #NMI, but I doubt this change does.
>
> How would "int $2" block NMIs? The hidden effect of this change (and I
> should have reviewed better the effect on the NMI entry code) is that the
> call will not use the IST anymore.

My understanding is that int $2 does not block NMIs.

So reentries might have been possible.

-Andi

2021-04-26 15:11:09

by Andy Lutomirski

Subject: Re: [PATCH v2 2/2] KVM: VMX: Invoke NMI handler via indirect call instead of INTn


> On Apr 26, 2021, at 7:51 AM, Andi Kleen <[email protected]> wrote:
>
> 
>>
>>> The original code "int $2" can provide the needed CPU-hidden-NMI-masked
>>> state when entering #NMI, but I doubt this change does.
>>
>> How would "int $2" block NMIs? The hidden effect of this change (and I
>> should have reviewed better the effect on the NMI entry code) is that the
>> call will not use the IST anymore.
>
> My understanding is that int $2 does not block NMIs.
>
> So reentries might have been possible.
>

The C NMI code has its own reentrancy protection and has for years. It should work fine for this use case.

> -Andi

2021-04-27 00:56:12

by Lai Jiangshan

Subject: Re: [PATCH v2 2/2] KVM: VMX: Invoke NMI handler via indirect call instead of INTn

(Correct Sean Christopherson's email address)

On Mon, Apr 26, 2021 at 6:40 PM Paolo Bonzini <[email protected]> wrote:
>
> On 26/04/21 11:33, Lai Jiangshan wrote:
> > When handle_interrupt_nmi_irqoff() is called, we may lose the
> > CPU-hidden-NMI-masked state due to IRET of #DB, #BP or other traps
> > between VMEXIT and handle_interrupt_nmi_irqoff().
> >
> > But the NMI handler in the Linux kernel *expects* the CPU-hidden-NMI-masked
> > state to still be set in the CPU, so that no nested NMI intrudes into the
> > beginning of the handler.
> >
> > The original code "int $2" can provide the needed CPU-hidden-NMI-masked
> > state when entering #NMI, but I doubt this change does.
>
> How would "int $2" block NMIs?

Sorry, I haven't checked it.

> The hidden effect of this change (and I
> should have reviewed better the effect on the NMI entry code) is that
> the call will not use the IST anymore.
>
> However, I'm not sure which of the two situations is better: entering
> the NMI handler on the IST without setting the hidden NMI-blocked flag
> could be a recipe for bad things as well.

The change makes the ASM NMI entry run on the kernel stack. But the
ASM NMI entry expects to run on the IST stack, where it plays with the
"NMI executing" variable kept on the IST stack. With this change, the
stranded ASM NMI entry will use a wrong/garbage "NMI executing" variable
on the kernel stack and may do something very wrong.

On Mon, Apr 26, 2021 at 9:59 PM Steven Rostedt <[email protected]> wrote:
> > > > But the NMI handler in the Linux kernel *expects* the CPU-hidden-NMI-masked
> > > > state to still be set in the CPU, so that no nested NMI intrudes into the
> > > > beginning of the handler.
>
>
> This is incorrect. The Linux kernel has for some time handled the case of
> nested NMIs. It had to, to implement the ftrace breakpoint updates, as they
> would trigger an int3 in an NMI, which would "unmask" the NMIs. There was
> also a long-standing bug where a page fault could do the same (the reason
> you could never dump all tasks from NMI without triple faulting!).
>
> But that's been fixed a long time ago, and I even wrote an LWN article
> about it ;-)
>
> https://lwn.net/Articles/484932/
>
> The NMI handler can handle the case of nested NMIs: it implements a
> software "latch" to remember that another NMI needs to be executed if a
> nested one arrives, and it runs the latched NMI after the first one has
> finished.

Sorry, in my reply "the NMI handler" meant the ASM entry installed in
the IDT, which really expects NMIs to be masked at the beginning.

The C NMI handler can handle the case of nested NMIs, which is useful
here. I think we should change it to call the C NMI handler directly
here as Andy Lutomirski suggested:

On Mon, Apr 26, 2021 at 11:09 PM Andy Lutomirski <[email protected]> wrote:
> The C NMI code has its own reentrancy protection and has for years.
> It should work fine for this use case.

I think this is the right way.

2021-04-27 01:01:49

by Steven Rostedt

Subject: Re: [PATCH v2 2/2] KVM: VMX: Invoke NMI handler via indirect call instead of INTn

On Tue, 27 Apr 2021 08:54:37 +0800
Lai Jiangshan <[email protected]> wrote:

> > However, I'm not sure which of the two situations is better: entering
> > the NMI handler on the IST without setting the hidden NMI-blocked flag
> > could be a recipe for bad things as well.
>
> The change makes the ASM NMI entry run on the kernel stack. But the
> ASM NMI entry expects to run on the IST stack, where it plays with the
> "NMI executing" variable kept on the IST stack. With this change, the
> stranded ASM NMI entry will use a wrong/garbage "NMI executing" variable
> on the kernel stack and may do something very wrong.

I missed this detail.

>
> Sorry, in my reply "the NMI handler" meant the ASM entry installed in
> the IDT, which really expects NMIs to be masked at the beginning.
>
> The C NMI handler can handle the case of nested NMIs, which is useful
> here. I think we should change it to call the C NMI handler directly
> here as Andy Lutomirski suggested:

Yes, because that's the way x86_32 works.

>
> On Mon, Apr 26, 2021 at 11:09 PM Andy Lutomirski <[email protected]> wrote:
> > The C NMI code has its own reentrancy protection and has for years.
> > It should work fine for this use case.
>
> I think this is the right way.

Agreed.

-- Steve

2021-04-27 07:07:22

by Paolo Bonzini

Subject: Re: [PATCH v2 2/2] KVM: VMX: Invoke NMI handler via indirect call instead of INTn

On 27/04/21 02:54, Lai Jiangshan wrote:
> The C NMI handler can handle the case of nested NMIs, which is useful
> here. I think we should change it to call the C NMI handler directly
> here as Andy Lutomirski suggested:

Great, can you send a patch?

Paolo

> On Mon, Apr 26, 2021 at 11:09 PM Andy Lutomirski <[email protected]> wrote:
>> The C NMI code has its own reentrancy protection and has for years.
>> It should work fine for this use case.
>
> I think this is the right way.
>

2021-04-30 02:58:11

by Lai Jiangshan

Subject: Re: [PATCH v2 2/2] KVM: VMX: Invoke NMI handler via indirect call instead of INTn

On Tue, Apr 27, 2021 at 3:05 PM Paolo Bonzini <[email protected]> wrote:
>
> On 27/04/21 02:54, Lai Jiangshan wrote:
> > The C NMI handler can handle the case of nested NMIs, which is useful
> > here. I think we should change it to call the C NMI handler directly
> > here as Andy Lutomirski suggested:
>
> Great, can you send a patch?
>

Hello, I sent it several days ago; could you review it, please? I will
then update the patchset with the feedback applied. And thanks to Steven
for the reviews.

https://lore.kernel.org/lkml/[email protected]/

thanks
Lai