2020-05-29 15:45:43

by Paolo Bonzini

Subject: [PATCH v3 00/28] KVM: nSVM: event fixes and migration support

This is basically the same as v2 except that it has a small fix to
"KVM: x86: enable event window in inject_pending_event", where
a second pending interrupt or NMI was not enabling the window-open
vmexit (caught by apic.flat). In addition I've renamed
inject_pending_event to handle_processor_events.
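
For reference, the fix boils down to re-checking for further pending events
after one has been injected and, if the next one cannot be delivered yet,
asking the vendor code for a window-open vmexit. A minimal sketch of that
step (callback names simplified, not the exact v3 code):

	/*
	 * If an NMI or an injectable interrupt is still pending after an
	 * event has been injected, request a vmexit as soon as the
	 * corresponding window opens so the second event is not lost.
	 */
	if (vcpu->arch.nmi_pending)
		kvm_x86_ops.enable_nmi_window(vcpu);
	if (kvm_cpu_has_injectable_intr(vcpu))
		kvm_x86_ops.enable_irq_window(vcpu);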

The series now passes kvm-unit-tests and various nested hypervisor tests,
so it's *really* ready for review! (Thanks, Krish, for looking at it so
far.)

I'm quite pleased with the overall look of the code, though the
INT_CTL arbitration is a bit ugly. I have plans to implement nested
vGIF and vLS, and then I will probably clean it up.

Paolo

Paolo Bonzini (28):
KVM: x86: track manually whether an event has been injected
KVM: x86: enable event window in inject_pending_event
KVM: nSVM: inject exceptions via svm_check_nested_events
KVM: nSVM: remove exit_required
KVM: nSVM: correctly inject INIT vmexits
KVM: SVM: always update CR3 in VMCB
KVM: nVMX: always update CR3 in VMCS
KVM: nSVM: move map argument out of enter_svm_guest_mode
KVM: nSVM: extract load_nested_vmcb_control
KVM: nSVM: extract preparation of VMCB for nested run
KVM: nSVM: move MMU setup to nested_prepare_vmcb_control
KVM: nSVM: clean up tsc_offset update
KVM: nSVM: pass vmcb_control_area to copy_vmcb_control_area
KVM: nSVM: remove trailing padding for struct vmcb_control_area
KVM: nSVM: save all control fields in svm->nested
KVM: nSVM: restore clobbered INT_CTL fields after clearing VINTR
KVM: nSVM: synchronize VMCB controls updated by the processor on every vmexit
KVM: nSVM: remove unnecessary if
KVM: nSVM: extract svm_set_gif
KVM: SVM: preserve VGIF across VMCB switch
KVM: nSVM: synthesize correct EXITINTINFO on vmexit
KVM: nSVM: remove HF_VINTR_MASK
KVM: nSVM: remove HF_HIF_MASK
KVM: nSVM: split nested_vmcb_check_controls
KVM: nSVM: leave guest mode when clearing EFER.SVME
KVM: MMU: pass arbitrary CR0/CR4/EFER to kvm_init_shadow_mmu
selftests: kvm: add a SVM version of state-test
KVM: nSVM: implement KVM_GET_NESTED_STATE and KVM_SET_NESTED_STATE

Vitaly Kuznetsov (2):
selftests: kvm: introduce cpu_has_svm() check
selftests: kvm: fix smm test on SVM

arch/x86/include/asm/kvm_host.h | 12 +-
arch/x86/include/asm/svm.h | 9 +-
arch/x86/include/uapi/asm/kvm.h | 17 +-
arch/x86/kvm/cpuid.h | 5 +
arch/x86/kvm/irq.c | 1 +
arch/x86/kvm/mmu.h | 2 +-
arch/x86/kvm/mmu/mmu.c | 14 +-
arch/x86/kvm/svm/nested.c | 624 ++++++++++++------
arch/x86/kvm/svm/svm.c | 154 ++---
arch/x86/kvm/svm/svm.h | 33 +-
arch/x86/kvm/vmx/nested.c | 5 -
arch/x86/kvm/vmx/vmx.c | 25 +-
arch/x86/kvm/x86.c | 146 ++--
.../selftests/kvm/include/x86_64/svm_util.h | 10 +
tools/testing/selftests/kvm/x86_64/smm_test.c | 19 +-
.../testing/selftests/kvm/x86_64/state_test.c | 62 +-
16 files changed, 708 insertions(+), 430 deletions(-)

--
2.26.2


2020-05-29 15:46:30

by Paolo Bonzini

Subject: [PATCH 14/30] KVM: nSVM: remove trailing padding for struct vmcb_control_area

Allow placing the VMCB structs on the stack or in other structs without
wasting too much space. Add BUILD_BUG_ON as a quick safeguard against typos.
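
As a worked check of the arithmetic, a compile-time layout check in the same
style as __unused_size_checks() below could also pin the save-area offset
(a sketch only; the function name is made up, BUILD_BUG_ON comes from
<linux/build_bug.h>, and the architectural save area starts at VMCB offset
0x400):

static inline void __unused_vmcb_layout_checks(void)
{
	/* the control area plus the explicit pad must fill offsets 0x000-0x3ff */
	BUILD_BUG_ON(offsetof(struct vmcb, save) != 0x400);
	/* the pad expression in struct vmcb underflows if the control area grows past 1024 bytes */
	BUILD_BUG_ON(sizeof(struct vmcb_control_area) > 1024);
}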

Signed-off-by: Paolo Bonzini <[email protected]>
---
arch/x86/include/asm/svm.h | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index 6ece8561ba66..8a1f5382a4ea 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -96,7 +96,6 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
u8 reserved_6[8]; /* Offset 0xe8 */
u64 avic_logical_id; /* Offset 0xf0 */
u64 avic_physical_id; /* Offset 0xf8 */
- u8 reserved_7[768];
};


@@ -203,8 +202,16 @@ struct __attribute__ ((__packed__)) vmcb_save_area {
u64 last_excp_to;
};

+
+static inline void __unused_size_checks(void)
+{
+ BUILD_BUG_ON(sizeof(struct vmcb_save_area) != 0x298);
+ BUILD_BUG_ON(sizeof(struct vmcb_control_area) != 256);
+}
+
struct __attribute__ ((__packed__)) vmcb {
struct vmcb_control_area control;
+ u8 reserved_control[1024 - sizeof(struct vmcb_control_area)];
struct vmcb_save_area save;
};

--
2.26.2


2020-05-29 15:46:33

by Paolo Bonzini

Subject: [PATCH 13/30] KVM: nSVM: pass vmcb_control_area to copy_vmcb_control_area

This will come in handy when we put a struct vmcb_control_area in
svm->nested.
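
For context, the anticipated caller then looks roughly like the line below
(the svm->nested.ctl field is only illustrative of where the series is
heading; it is not introduced in this patch):

	copy_vmcb_control_area(&svm->nested.ctl, &nested_vmcb->control);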

Signed-off-by: Paolo Bonzini <[email protected]>
---
arch/x86/kvm/svm/nested.c | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 5ca403a69148..fd9742c1a860 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -141,11 +141,9 @@ void recalc_intercepts(struct vcpu_svm *svm)
c->intercept |= g->intercept;
}

-static void copy_vmcb_control_area(struct vmcb *dst_vmcb, struct vmcb *from_vmcb)
+static void copy_vmcb_control_area(struct vmcb_control_area *dst,
+ struct vmcb_control_area *from)
{
- struct vmcb_control_area *dst = &dst_vmcb->control;
- struct vmcb_control_area *from = &from_vmcb->control;
-
dst->intercept_cr = from->intercept_cr;
dst->intercept_dr = from->intercept_dr;
dst->intercept_exceptions = from->intercept_exceptions;
@@ -419,7 +417,7 @@ int nested_svm_vmrun(struct vcpu_svm *svm)
else
hsave->save.cr3 = kvm_read_cr3(&svm->vcpu);

- copy_vmcb_control_area(hsave, vmcb);
+ copy_vmcb_control_area(&hsave->control, &vmcb->control);

svm->nested.nested_run_pending = 1;
enter_svm_guest_mode(svm, vmcb_gpa, nested_vmcb);
@@ -550,7 +548,7 @@ int nested_svm_vmexit(struct vcpu_svm *svm)
nested_vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK;

/* Restore the original control entries */
- copy_vmcb_control_area(vmcb, hsave);
+ copy_vmcb_control_area(&vmcb->control, &hsave->control);

svm->vmcb->control.tsc_offset = svm->vcpu.arch.tsc_offset =
svm->vcpu.arch.l1_tsc_offset;
--
2.26.2


2020-05-29 15:46:55

by Paolo Bonzini

Subject: [PATCH 06/30] KVM: SVM: always update CR3 in VMCB

svm_load_mmu_pgd is delaying the write of GUEST_CR3 to prepare_vmcs02 as
an optimization, but this is only correct before the nested vmentry.
If userspace is modifying CR3 with KVM_SET_SREGS after the VM has
already been put in guest mode, the value of CR3 will not be updated.
Remove the optimization, which almost never triggers anyway.
This was added in commit 689f3bf21628 ("KVM: x86: unify callbacks
to load paging root", 2020-03-16) just to keep the two vendor-specific
modules closer, but we'll fix VMX too.
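
The failure mode is easiest to see from userspace; a minimal sketch of the
ioctl sequence whose CR3 write was being lost (vcpu_fd and new_cr3 are
placeholders):

	struct kvm_sregs sregs;   /* from <linux/kvm.h> */

	ioctl(vcpu_fd, KVM_GET_SREGS, &sregs);
	sregs.cr3 = new_cr3;
	/*
	 * Before this patch, if the vCPU was already in guest mode the new
	 * value ended up in vcpu->arch.cr3 but never in vmcb->save.cr3.
	 */
	ioctl(vcpu_fd, KVM_SET_SREGS, &sregs);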

Fixes: 689f3bf21628 ("KVM: x86: unify callbacks to load paging root")
Signed-off-by: Paolo Bonzini <[email protected]>
---
arch/x86/kvm/svm/nested.c | 6 +-----
arch/x86/kvm/svm/svm.c | 16 +++++-----------
2 files changed, 6 insertions(+), 16 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index dcac4c3510ab..8756c9f463fd 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -256,11 +256,7 @@ void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
svm_set_efer(&svm->vcpu, nested_vmcb->save.efer);
svm_set_cr0(&svm->vcpu, nested_vmcb->save.cr0);
svm_set_cr4(&svm->vcpu, nested_vmcb->save.cr4);
- if (npt_enabled) {
- svm->vmcb->save.cr3 = nested_vmcb->save.cr3;
- svm->vcpu.arch.cr3 = nested_vmcb->save.cr3;
- } else
- (void)kvm_set_cr3(&svm->vcpu, nested_vmcb->save.cr3);
+ (void)kvm_set_cr3(&svm->vcpu, nested_vmcb->save.cr3);

/* Guest paging mode is active - reset mmu */
kvm_mmu_reset_context(&svm->vcpu);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 545f63ebc720..feb96a410f2d 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3447,7 +3447,6 @@ static fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
static void svm_load_mmu_pgd(struct kvm_vcpu *vcpu, unsigned long root)
{
struct vcpu_svm *svm = to_svm(vcpu);
- bool update_guest_cr3 = true;
unsigned long cr3;

cr3 = __sme_set(root);
@@ -3456,18 +3455,13 @@ static void svm_load_mmu_pgd(struct kvm_vcpu *vcpu, unsigned long root)
mark_dirty(svm->vmcb, VMCB_NPT);

/* Loading L2's CR3 is handled by enter_svm_guest_mode. */
- if (is_guest_mode(vcpu))
- update_guest_cr3 = false;
- else if (test_bit(VCPU_EXREG_CR3, (ulong *)&vcpu->arch.regs_avail))
- cr3 = vcpu->arch.cr3;
- else /* CR3 is already up-to-date. */
- update_guest_cr3 = false;
+ if (!test_bit(VCPU_EXREG_CR3, (ulong *)&vcpu->arch.regs_avail))
+ return;
+ cr3 = vcpu->arch.cr3;
}

- if (update_guest_cr3) {
- svm->vmcb->save.cr3 = cr3;
- mark_dirty(svm->vmcb, VMCB_CR);
- }
+ svm->vmcb->save.cr3 = cr3;
+ mark_dirty(svm->vmcb, VMCB_CR);
}

static int is_disabled(void)
--
2.26.2


2020-05-29 17:46:13

by Krish Sadhukhan

Subject: Re: [PATCH 06/30] KVM: SVM: always update CR3 in VMCB


On 5/29/20 8:39 AM, Paolo Bonzini wrote:
> svm_load_mmu_pgd is delaying the write of GUEST_CR3 to prepare_vmcs02

Did you mean to say enter_svm_guest_mode here ?
> as
> an optimization, but this is only correct before the nested vmentry.
> If userspace is modifying CR3 with KVM_SET_SREGS after the VM has
> already been put in guest mode, the value of CR3 will not be updated.
> Remove the optimization, which almost never triggers anyway.
> This was added in commit 689f3bf21628 ("KVM: x86: unify callbacks
> to load paging root", 2020-03-16) just to keep the two vendor-specific
> modules closer, but we'll fix VMX too.
>
> Fixes: 689f3bf21628 ("KVM: x86: unify callbacks to load paging root")
> Signed-off-by: Paolo Bonzini <[email protected]>
> ---
> arch/x86/kvm/svm/nested.c | 6 +-----
> arch/x86/kvm/svm/svm.c | 16 +++++-----------
> 2 files changed, 6 insertions(+), 16 deletions(-)
>
> diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> index dcac4c3510ab..8756c9f463fd 100644
> --- a/arch/x86/kvm/svm/nested.c
> +++ b/arch/x86/kvm/svm/nested.c
> @@ -256,11 +256,7 @@ void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
> svm_set_efer(&svm->vcpu, nested_vmcb->save.efer);
> svm_set_cr0(&svm->vcpu, nested_vmcb->save.cr0);
> svm_set_cr4(&svm->vcpu, nested_vmcb->save.cr4);
> - if (npt_enabled) {
> - svm->vmcb->save.cr3 = nested_vmcb->save.cr3;
> - svm->vcpu.arch.cr3 = nested_vmcb->save.cr3;
> - } else
> - (void)kvm_set_cr3(&svm->vcpu, nested_vmcb->save.cr3);
> + (void)kvm_set_cr3(&svm->vcpu, nested_vmcb->save.cr3);
>
> /* Guest paging mode is active - reset mmu */
> kvm_mmu_reset_context(&svm->vcpu);
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 545f63ebc720..feb96a410f2d 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -3447,7 +3447,6 @@ static fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
> static void svm_load_mmu_pgd(struct kvm_vcpu *vcpu, unsigned long root)
> {
> struct vcpu_svm *svm = to_svm(vcpu);
> - bool update_guest_cr3 = true;
> unsigned long cr3;
>
> cr3 = __sme_set(root);
> @@ -3456,18 +3455,13 @@ static void svm_load_mmu_pgd(struct kvm_vcpu *vcpu, unsigned long root)
> mark_dirty(svm->vmcb, VMCB_NPT);
>
> /* Loading L2's CR3 is handled by enter_svm_guest_mode. */
> - if (is_guest_mode(vcpu))
> - update_guest_cr3 = false;
> - else if (test_bit(VCPU_EXREG_CR3, (ulong *)&vcpu->arch.regs_avail))
> - cr3 = vcpu->arch.cr3;
> - else /* CR3 is already up-to-date. */
> - update_guest_cr3 = false;
> + if (!test_bit(VCPU_EXREG_CR3, (ulong *)&vcpu->arch.regs_avail))
> + return;
> + cr3 = vcpu->arch.cr3;
> }
>
> - if (update_guest_cr3) {
> - svm->vmcb->save.cr3 = cr3;
> - mark_dirty(svm->vmcb, VMCB_CR);
> - }
> + svm->vmcb->save.cr3 = cr3;
> + mark_dirty(svm->vmcb, VMCB_CR);
> }
>
> static int is_disabled(void)

2020-05-29 17:59:08

by Sean Christopherson

Subject: Re: [PATCH 06/30] KVM: SVM: always update CR3 in VMCB

On Fri, May 29, 2020 at 10:41:58AM -0700, Krish Sadhukhan wrote:
>
> On 5/29/20 8:39 AM, Paolo Bonzini wrote:
> >svm_load_mmu_pgd is delaying the write of GUEST_CR3 to prepare_vmcs02
>
> Did you mean to say enter_svm_guest_mode here ?

Heh, looks like Vitaly passed the s/vmcs/vmcb bug on to Paolo, too. :-D

2020-05-29 18:01:50

by Sean Christopherson

Subject: Re: [PATCH v3 00/28] KVM: nSVM: event fixes and migration support

> [PATCH v3 00/28] KVM: nSVM: event fixes and migration support

You've got something funky going on with the way you generate cover letters,
looks like it doesn't count patches authored by someone else. The 'v3' is
also missing from the patches, though I suppose some heathens do that on
purpose.

> Paolo Bonzini (28):
>
> Vitaly Kuznetsov (2):

2020-05-29 19:10:11

by Paolo Bonzini

Subject: Re: [PATCH v3 00/28] KVM: nSVM: event fixes and migration support

On 29/05/20 19:59, Sean Christopherson wrote:
>> [PATCH v3 00/28] KVM: nSVM: event fixes and migration support
> You've got something funky going on with the way you generate cover letters,
> looks like it doesn't count patches authored by someone else. The 'v3' is
> also missing from the patches, though I suppose some heathens do that on
> purpose.

No, I simply had a draft of the cover letter ready, and when I decided
to include Vitaly's patches I didn't update the subject.

Paolo