2023-07-21 23:15:25

by Sean Christopherson

Subject: [PATCH 0/2] KVM: x86: Acquire SRCU in fastpath handler

Acquire SRCU for read when handling fastpath MSR writes so that side
effects of emulating the WRMSR, e.g. triggering PMU events when the guest
is counting instructions retired, don't dereference SRCU-protected data
(the PMU event filter) without actually holding SRCU.

Note, the PMU case could (and should) also be fixed by making the PMU
filter code smarter, e.g. by snapshotting which PMC events need to be
emulated, thus avoiding the filter lookup entirely. But acquiring SRCU
is relatively cheap, and this isn't the first bug of this nature.

Which is a perfect segue into patch 2, which reverts a hack-a-fix to
fudge around SVM needing to do the front half of emulation when skipping
the WRMSR.

Note #2, the fastpath also doesn't honor the MSR filter for TSC_DEADLINE.
That's a problem for another day.
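
To illustrate the snapshot idea from above (making the PMU filter code
smarter), here's a rough userspace model; the names, types, and example
policy are made up for illustration and are not actual KVM code. The point
is that the filter is consulted only when a counter is (re)programmed, so
the fastpath only checks a cached verdict:

#include <stdbool.h>
#include <stdio.h>

struct pmc {
	unsigned long long eventsel;
	bool event_allowed;	/* snapshot of the filter verdict */
};

/* Stand-in for the SRCU-protected filter lookup (the expensive part). */
static bool filter_allows(unsigned long long eventsel)
{
	return (eventsel & 0xff) != 0xc4;	/* arbitrary example policy */
}

/* Reprogram path: the filter lookup happens here, where protection is held. */
static void reprogram_counter(struct pmc *pmc, unsigned long long eventsel)
{
	pmc->eventsel = eventsel;
	pmc->event_allowed = filter_allows(eventsel);
}

/* Fastpath: no filter lookup, just the snapshot taken at reprogram time. */
static void trigger_event(struct pmc *pmc)
{
	if (pmc->event_allowed)
		printf("emulate event 0x%llx\n", pmc->eventsel);
}

int main(void)
{
	struct pmc pmc;

	reprogram_counter(&pmc, 0xc0);	/* e.g. instructions retired */
	trigger_event(&pmc);
	return 0;
}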

Sean Christopherson (2):
KVM: x86: Acquire SRCU read lock when handling fastpath MSR writes
Revert "KVM: SVM: Skip WRMSR fastpath on VM-Exit if next RIP isn't
valid"

 arch/x86/kvm/svm/svm.c | 10 ++--------
 arch/x86/kvm/x86.c     |  4 ++++
 2 files changed, 6 insertions(+), 8 deletions(-)


base-commit: fdf0eaf11452d72945af31804e2a1048ee1b574c
--
2.41.0.487.g6d72f3e995-goog



2023-07-21 23:16:06

by Sean Christopherson

Subject: [PATCH 1/2] KVM: x86: Acquire SRCU read lock when handling fastpath MSR writes

Temporarily acquire kvm->srcu for read when potentially emulating WRMSR in
the VM-Exit fastpath handler, as several of the common helpers used during
emulation expect the caller to provide SRCU protection. E.g. if the guest
is counting instructions retired, KVM will query the PMU event filter when
stepping over the WRMSR.

dump_stack+0x85/0xdf
lockdep_rcu_suspicious+0x109/0x120
pmc_event_is_allowed+0x165/0x170
kvm_pmu_trigger_event+0xa5/0x190
handle_fastpath_set_msr_irqoff+0xca/0x1e0
svm_vcpu_run+0x5c3/0x7b0 [kvm_amd]
vcpu_enter_guest+0x2108/0x2580

Alternatively, check_pmu_event_filter() could acquire kvm->srcu, but this
isn't the first bug of this nature, e.g. see commit 5c30e8101e8d ("KVM:
SVM: Skip WRMSR fastpath on VM-Exit if next RIP isn't valid"). Providing
protection for the entirety of WRMSR emulation will allow reverting the
aforementioned commit, and will avoid having to play whack-a-mole when new
uses of SRCU-protected structures are inevitably added in common emulation
helpers.
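
For context, the filter lookup that needs protection boils down to an
srcu_dereference() of the per-VM filter. Roughly (paraphrased and heavily
simplified, see check_pmu_event_filter() in arch/x86/kvm/pmu.c for the real
code):

static bool check_pmu_event_filter(struct kvm_pmc *pmc)
{
	struct kvm *kvm = pmc->vcpu->kvm;
	struct kvm_x86_pmu_event_filter *filter;

	/* Legal only if the caller holds kvm->srcu for read. */
	filter = srcu_dereference(kvm->arch.pmu_event_filter, &kvm->srcu);
	if (!filter)
		return true;

	/* ... match the PMC's event against the filter's allow/deny list ... */
	return true;
}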

Fixes: dfdeda67ea2d ("KVM: x86/pmu: Prevent the PMU from counting disallowed events")
Reported-by: Greg Thelen <[email protected]>
Reported-by: Aaron Lewis <[email protected]>
Signed-off-by: Sean Christopherson <[email protected]>
---
arch/x86/kvm/x86.c | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a6b9bea62fb8..8c073a4af484 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2172,6 +2172,8 @@ fastpath_t handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu)
 	u64 data;
 	fastpath_t ret = EXIT_FASTPATH_NONE;
 
+	kvm_vcpu_srcu_read_lock(vcpu);
+
 	switch (msr) {
 	case APIC_BASE_MSR + (APIC_ICR >> 4):
 		data = kvm_read_edx_eax(vcpu);
@@ -2194,6 +2196,8 @@ fastpath_t handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu)
 	if (ret != EXIT_FASTPATH_NONE)
 		trace_kvm_msr_write(msr, data);
 
+	kvm_vcpu_srcu_read_unlock(vcpu);
+
 	return ret;
 }
 EXPORT_SYMBOL_GPL(handle_fastpath_set_msr_irqoff);
--
2.41.0.487.g6d72f3e995-goog


2023-07-21 23:18:12

by Sean Christopherson

Subject: [PATCH 2/2] Revert "KVM: SVM: Skip WRMSR fastpath on VM-Exit if next RIP isn't valid"

Now that handle_fastpath_set_msr_irqoff() acquires kvm->srcu, i.e. allows
dereferencing memslots during WRMSR emulation, drop the requirement that
"next RIP" is valid. In hindsight, acquiring kvm->srcu would have been a
better fix than avoiding the fastpath, but at the time it was thought that
accessing SRCU-protected data in the fastpath was a one-off edge case.
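
For reference, the SVM skip logic is shaped roughly like this (a simplified
paraphrase, not the exact upstream code); without a valid next RIP, skipping
the WRMSR means decoding it from guest memory, i.e. dereferencing
SRCU-protected memslots:

static int svm_skip_emulated_instruction(struct kvm_vcpu *vcpu)
{
	struct vcpu_svm *svm = to_svm(vcpu);

	if (nrips && svm->vmcb->control.next_rip) {
		/* Hardware provided the next RIP, no guest memory access. */
		kvm_rip_write(vcpu, svm->vmcb->control.next_rip);
	} else {
		/*
		 * Fall back to the emulator to decode and skip the
		 * instruction, which reads guest memory and thus requires
		 * SRCU protection for the memslots.
		 */
		if (!kvm_emulate_instruction(vcpu, EMULTYPE_SKIP))
			return 0;
	}

	svm_set_interrupt_shadow(vcpu, 0);
	return 1;
}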

This reverts commit 5c30e8101e8d5d020b1d7119117889756a6ed713.

Signed-off-by: Sean Christopherson <[email protected]>
---
arch/x86/kvm/svm/svm.c | 10 ++--------
1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index d381ad424554..cea08e5fa69b 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3986,14 +3986,8 @@ static int svm_vcpu_pre_run(struct kvm_vcpu *vcpu)
 
 static fastpath_t svm_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
 {
-	struct vmcb_control_area *control = &to_svm(vcpu)->vmcb->control;
-
-	/*
-	 * Note, the next RIP must be provided as SRCU isn't held, i.e. KVM
-	 * can't read guest memory (dereference memslots) to decode the WRMSR.
-	 */
-	if (control->exit_code == SVM_EXIT_MSR && control->exit_info_1 &&
-	    nrips && control->next_rip)
+	if (to_svm(vcpu)->vmcb->control.exit_code == SVM_EXIT_MSR &&
+	    to_svm(vcpu)->vmcb->control.exit_info_1)
 		return handle_fastpath_set_msr_irqoff(vcpu);
 
 	return EXIT_FASTPATH_NONE;
--
2.41.0.487.g6d72f3e995-goog


2023-07-29 17:48:28

by Paolo Bonzini
