2023-09-28 19:14:50

by Maxim Levitsky

Subject: [PATCH v2 0/4] AVIC bugfixes and workarounds

This patch series includes several AVIC fixes I found while working
on a new version of the nested AVIC code.

While developing it, I also realized that a very simple workaround for
AVIC's errata #1235 exists, and included it in this series as well.

Changes since v1:

- Added 'Fixes:' tags.
- Reworked the workaround for AVIC errata #1235.
- Dropped the IOMMU patch, as it is no longer needed.

Best regards,
Maxim Levitsky

Maxim Levitsky (4):
  x86: KVM: SVM: always update the x2avic msr interception
  x86: KVM: SVM: add support for Invalid IPI Vector interception
  x86: KVM: SVM: refresh AVIC inhibition in svm_leave_nested()
  x86: KVM: SVM: workaround for AVIC's errata #1235

arch/x86/include/asm/svm.h | 1 +
arch/x86/kvm/svm/avic.c | 68 +++++++++++++++++++++++++++-----------
arch/x86/kvm/svm/nested.c | 3 ++
arch/x86/kvm/svm/svm.c | 3 +-
arch/x86/kvm/svm/svm.h | 1 +
5 files changed, 55 insertions(+), 21 deletions(-)

--
2.26.3

2023-09-28 20:07:08

by Maxim Levitsky

Subject: [PATCH v2 1/4] x86: KVM: SVM: always update the x2avic msr interception

The following problem has existed since x2AVIC was enabled in KVM:

svm_set_x2apic_msr_interception is called to enable the interception of
the x2APIC MSRs.

In particular, it is called at the moment the guest resets its APIC.

Assuming that the guest's APIC was in x2APIC mode, the reset will bring
it back to xAPIC mode.

svm_set_x2apic_msr_interception, however, has an erroneous check for
'!apic_x2apic_mode()' which prevents it from doing anything in this case.

As a result, all x2APIC MSRs are left unintercepted, which exposes
the bare-metal x2APIC (if enabled) to the guest.
Oops.

Remove the erroneous '!apic_x2apic_mode()' check to fix that.

This fixes CVE-2023-5090

Fixes: 4d1d7942e36a ("KVM: SVM: Introduce logic to (de)activate x2AVIC mode")
Cc: [email protected]
Signed-off-by: Maxim Levitsky <[email protected]>
---
arch/x86/kvm/svm/svm.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 9507df93f410a63..acdd0b89e4715a3 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -913,8 +913,7 @@ void svm_set_x2apic_msr_interception(struct vcpu_svm *svm, bool intercept)
if (intercept == svm->x2avic_msrs_intercepted)
return;

- if (!x2avic_enabled ||
- !apic_x2apic_mode(svm->vcpu.arch.apic))
+ if (!x2avic_enabled)
return;

for (i = 0; i < MAX_DIRECT_ACCESS_MSRS; i++) {
--
2.26.3
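
For illustration, here is a minimal standalone sketch (not KVM code; all
names are hypothetical) of the bug pattern described in the commit message:
the update helper bails out based on the *new* APIC mode, so the
x2APIC -> xAPIC transition on reset never restores interception.

#include <stdbool.h>
#include <stdio.h>

static bool guest_in_x2apic_mode;		/* current (new) APIC mode */
static bool x2apic_msrs_intercepted = true;	/* cached interception state */

/* Buggy variant: mirrors the erroneous '!apic_x2apic_mode()' check. */
static void set_x2apic_msr_interception_buggy(bool intercept)
{
	if (intercept == x2apic_msrs_intercepted)
		return;
	if (!guest_in_x2apic_mode)	/* the erroneous check */
		return;
	x2apic_msrs_intercepted = intercept;
}

/* Fixed variant: the interception state is updated unconditionally. */
static void set_x2apic_msr_interception_fixed(bool intercept)
{
	if (intercept == x2apic_msrs_intercepted)
		return;
	x2apic_msrs_intercepted = intercept;
}

int main(void)
{
	/* Guest switches to x2APIC mode: interception is dropped. */
	guest_in_x2apic_mode = true;
	set_x2apic_msr_interception_buggy(false);

	/* APIC reset: back to xAPIC mode, interception must be restored. */
	guest_in_x2apic_mode = false;
	set_x2apic_msr_interception_buggy(true);
	printf("buggy: intercepted=%d (expected 1)\n", x2apic_msrs_intercepted);

	/* Same sequence with the fixed helper restores interception. */
	x2apic_msrs_intercepted = true;
	guest_in_x2apic_mode = true;
	set_x2apic_msr_interception_fixed(false);
	guest_in_x2apic_mode = false;
	set_x2apic_msr_interception_fixed(true);
	printf("fixed: intercepted=%d (expected 1)\n", x2apic_msrs_intercepted);
	return 0;
}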

2023-09-28 20:24:08

by Maxim Levitsky

Subject: [PATCH v2 3/4] x86: KVM: SVM: refresh AVIC inhibition in svm_leave_nested()

svm_leave_nested(), similar to a nested VM exit, gets the vCPU out of nested
mode and thus should end the local inhibition of AVIC on this vCPU.

Failure to do so can lead to hangs on guest reboot.

Raise the KVM_REQ_APICV_UPDATE request to refresh the AVIC state of the
current vCPU in this case.

Fixes: f44509f849fe ("KVM: x86: SVM: allow AVIC to co-exist with a nested guest running")
Cc: [email protected]
Signed-off-by: Maxim Levitsky <[email protected]>
---
arch/x86/kvm/svm/nested.c | 3 +++
1 file changed, 3 insertions(+)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index dd496c9e5f91f28..3fea8c47679e689 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -1253,6 +1253,9 @@ void svm_leave_nested(struct kvm_vcpu *vcpu)

nested_svm_uninit_mmu_context(vcpu);
vmcb_mark_all_dirty(svm->vmcb);
+
+ if (kvm_apicv_activated(vcpu->kvm))
+ kvm_make_request(KVM_REQ_APICV_UPDATE, vcpu);
}

kvm_clear_request(KVM_REQ_GET_NESTED_STATE_PAGES, vcpu);
--
2.26.3
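
For illustration, here is a minimal standalone model (not KVM code; all names
are hypothetical, and process_requests() stands in for KVM_REQ_APICV_UPDATE
handling on the next guest entry) of the failure mode fixed here: forcing the
vCPU out of nested mode without requesting an APICv refresh leaves AVIC
inhibited on that vCPU.

#include <stdbool.h>
#include <stdio.h>

static bool nested_active;
static bool apicv_active = true;
static bool update_requested;

/* Models processing of a pending KVM_REQ_APICV_UPDATE on guest entry. */
static void process_requests(void)
{
	if (update_requested) {
		apicv_active = !nested_active;	/* re-evaluate the inhibition */
		update_requested = false;
	}
}

static void enter_nested(void)
{
	nested_active = true;
	update_requested = true;	/* nested entry inhibits AVIC locally */
	process_requests();
}

static void leave_nested(bool with_fix)
{
	nested_active = false;
	if (with_fix)
		update_requested = true;	/* the kvm_make_request() added here */
	process_requests();
}

int main(void)
{
	enter_nested();
	leave_nested(false);
	printf("without fix: apicv_active=%d (stays inhibited)\n", apicv_active);

	enter_nested();
	leave_nested(true);
	printf("with fix:    apicv_active=%d\n", apicv_active);
	return 0;
}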

2023-09-28 22:46:40

by Maxim Levitsky

Subject: [PATCH v2 4/4] x86: KVM: SVM: workaround for AVIC's errata #1235

On Zen2 (and likely on Zen1 as well), AVIC doesn't reliably detect a change
in the 'is_running' bit during ICR write emulation and might skip a
VM exit if that bit was recently cleared.

The absence of the VM exit leads to KVM not waking up / not triggering a
nested VM exit on the target(s) of the IPI, which can, in some cases,
lead to unbounded delays in guest execution.

As I recently discovered, a reasonable workaround exists: have KVM
never set the is_running bit.

This workaround ensures that (*) all ICR writes always cause a VM exit
and are therefore correctly emulated, at the expense of never enjoying
VM-exit-less ICR emulation.

The workaround does carry a performance penalty, but according to my
benchmarks it is still much better than not using AVIC at all, because
AVIC is still used for the receiving end of the IPIs and for posted
interrupts.

If the user is aware of the errata and it doesn't affect their workload,
the workaround can be disabled with 'avic_zen2_errata_workaround=0'.

(*) More precisely, all ICR writes except those using the 'Self' shorthand:
in this case AVIC skips reading the physical ID table and just sets bits
in the IRR of the local APIC. Thankfully, the errata cannot trigger in
this case, so an extra workaround is not needed.

Signed-off-by: Maxim Levitsky <[email protected]>
---
arch/x86/kvm/svm/avic.c | 63 +++++++++++++++++++++++++++++------------
arch/x86/kvm/svm/svm.h | 1 +
2 files changed, 46 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
index 4b74ea91f4e6bb6..28bb0e6b321660d 100644
--- a/arch/x86/kvm/svm/avic.c
+++ b/arch/x86/kvm/svm/avic.c
@@ -62,6 +62,9 @@ static_assert(__AVIC_GATAG(AVIC_VM_ID_MASK, AVIC_VCPU_ID_MASK) == -1u);
static bool force_avic;
module_param_unsafe(force_avic, bool, 0444);

+static int avic_zen2_errata_workaround = -1;
+module_param(avic_zen2_errata_workaround, int, 0444);
+
/* Note:
* This hash table is used to map VM_ID to a struct kvm_svm,
* when handling AMD IOMMU GALOG notification to schedule in
@@ -276,7 +279,7 @@ static u64 *avic_get_physical_id_entry(struct kvm_vcpu *vcpu,

static int avic_init_backing_page(struct kvm_vcpu *vcpu)
{
- u64 *entry, new_entry;
+ u64 *entry;
int id = vcpu->vcpu_id;
struct vcpu_svm *svm = to_svm(vcpu);

@@ -308,10 +311,10 @@ static int avic_init_backing_page(struct kvm_vcpu *vcpu)
if (!entry)
return -EINVAL;

- new_entry = __sme_set((page_to_phys(svm->avic_backing_page) &
- AVIC_PHYSICAL_ID_ENTRY_BACKING_PAGE_MASK) |
- AVIC_PHYSICAL_ID_ENTRY_VALID_MASK);
- WRITE_ONCE(*entry, new_entry);
+ svm->avic_physical_id_entry = __sme_set((page_to_phys(svm->avic_backing_page) &
+ AVIC_PHYSICAL_ID_ENTRY_BACKING_PAGE_MASK) |
+ AVIC_PHYSICAL_ID_ENTRY_VALID_MASK);
+ WRITE_ONCE(*entry, svm->avic_physical_id_entry);

svm->avic_physical_id_cache = entry;

@@ -835,7 +838,7 @@ static int svm_ir_list_add(struct vcpu_svm *svm, struct amd_iommu_pi_data *pi)
* will update the pCPU info when the vCPU awkened and/or scheduled in.
* See also avic_vcpu_load().
*/
- entry = READ_ONCE(*(svm->avic_physical_id_cache));
+ entry = READ_ONCE(svm->avic_physical_id_entry);
if (entry & AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK)
amd_iommu_update_ga(entry & AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK,
true, pi->ir_data);
@@ -1027,7 +1030,6 @@ avic_update_iommu_vcpu_affinity(struct kvm_vcpu *vcpu, int cpu, bool r)

void avic_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
{
- u64 entry;
int h_physical_id = kvm_cpu_get_apicid(cpu);
struct vcpu_svm *svm = to_svm(vcpu);
unsigned long flags;
@@ -1056,14 +1058,23 @@ void avic_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
*/
spin_lock_irqsave(&svm->ir_list_lock, flags);

- entry = READ_ONCE(*(svm->avic_physical_id_cache));
- WARN_ON_ONCE(entry & AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK);

- entry &= ~AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK;
- entry |= (h_physical_id & AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK);
- entry |= AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK;
+ WARN_ON_ONCE(svm->avic_physical_id_entry & AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK);
+
+ svm->avic_physical_id_entry &= ~AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK;
+ svm->avic_physical_id_entry |=
+ (h_physical_id & AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK);
+
+ svm->avic_physical_id_entry |= AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK;
+
+ /*
+ * Do not update the actual physical ID table entry if the workaround
+ * for errata #1235 is enabled - with the workaround active, the
+ * is_running bit of the physical ID entry is never set.
+ */
+ if (!avic_zen2_errata_workaround)
+ WRITE_ONCE(*(svm->avic_physical_id_cache), svm->avic_physical_id_entry);

- WRITE_ONCE(*(svm->avic_physical_id_cache), entry);
avic_update_iommu_vcpu_affinity(vcpu, h_physical_id, true);

spin_unlock_irqrestore(&svm->ir_list_lock, flags);
@@ -1071,7 +1082,6 @@ void avic_vcpu_load(struct kvm_vcpu *vcpu, int cpu)

void avic_vcpu_put(struct kvm_vcpu *vcpu)
{
- u64 entry;
struct vcpu_svm *svm = to_svm(vcpu);
unsigned long flags;

@@ -1084,10 +1094,9 @@ void avic_vcpu_put(struct kvm_vcpu *vcpu)
* can't be scheduled out and thus avic_vcpu_{put,load}() can't run
* recursively.
*/
- entry = READ_ONCE(*(svm->avic_physical_id_cache));

/* Nothing to do if IsRunning == '0' due to vCPU blocking. */
- if (!(entry & AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK))
+ if (!(svm->avic_physical_id_entry & AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK))
return;

/*
@@ -1102,8 +1111,14 @@ void avic_vcpu_put(struct kvm_vcpu *vcpu)

avic_update_iommu_vcpu_affinity(vcpu, -1, 0);

- entry &= ~AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK;
- WRITE_ONCE(*(svm->avic_physical_id_cache), entry);
+ svm->avic_physical_id_entry &= ~AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK;
+
+ /*
+ * Do not update the actual physical ID table entry.
+ * See the explanation in avic_vcpu_load().
+ */
+ if (!avic_zen2_errata_workaround)
+ WRITE_ONCE(*(svm->avic_physical_id_cache), svm->avic_physical_id_entry);

spin_unlock_irqrestore(&svm->ir_list_lock, flags);

@@ -1217,5 +1232,17 @@ bool avic_hardware_setup(void)

amd_iommu_register_ga_log_notifier(&avic_ga_log_notifier);

+ if (avic_zen2_errata_workaround == -1) {
+
+ /* Assume that Zen1 and Zen2 have errata #1235 */
+ if (boot_cpu_data.x86 == 0x17)
+ avic_zen2_errata_workaround = 1;
+ else
+ avic_zen2_errata_workaround = 0;
+ }
+
+ if (avic_zen2_errata_workaround)
+ pr_info("Workaround for AVIC errata #1235 is enabled\n");
+
return true;
}
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index be67ab7fdd104e3..98dc45b9c194d2e 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -265,6 +265,7 @@ struct vcpu_svm {
u32 ldr_reg;
u32 dfr_reg;
struct page *avic_backing_page;
+ u64 avic_physical_id_entry;
u64 *avic_physical_id_cache;

/*
--
2.26.3
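
To make the shape of the workaround easier to see outside of the diff
context, here is a minimal standalone sketch (not KVM code; the bit layout,
table size and all names are hypothetical): the vCPU keeps a shadow copy of
its physical ID entry, and the is_running bit is only propagated to the
hardware-visible table when the workaround is disabled, so on affected parts
every ICR write takes the (correctly emulated) VM exit path.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define ENTRY_IS_RUNNING	(1ull << 62)
#define ENTRY_HOST_CPU_MASK	0xffull

static bool errata_workaround = true;	/* e.g. auto-enabled on family 0x17 */
static uint64_t phys_id_table[1];	/* stands in for the AVIC physical ID table */
static uint64_t shadow_entry;		/* per-vCPU cached copy of the entry */

static void vcpu_load(unsigned int host_cpu)
{
	shadow_entry &= ~ENTRY_HOST_CPU_MASK;
	shadow_entry |= (uint64_t)host_cpu & ENTRY_HOST_CPU_MASK;
	shadow_entry |= ENTRY_IS_RUNNING;

	/* Only the shadow copy tracks is_running while the workaround is on. */
	if (!errata_workaround)
		phys_id_table[0] = shadow_entry;
}

static void vcpu_put(void)
{
	shadow_entry &= ~ENTRY_IS_RUNNING;
	if (!errata_workaround)
		phys_id_table[0] = shadow_entry;
}

int main(void)
{
	vcpu_load(3);
	printf("table is_running=%d (workaround on  -> always 0)\n",
	       !!(phys_id_table[0] & ENTRY_IS_RUNNING));
	vcpu_put();

	errata_workaround = false;	/* e.g. avic_zen2_errata_workaround=0 */
	vcpu_load(3);
	printf("table is_running=%d (workaround off -> 1)\n",
	       !!(phys_id_table[0] & ENTRY_IS_RUNNING));
	return 0;
}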

2023-09-29 03:10:23

by Sean Christopherson

Subject: Re: [PATCH v2 0/4] AVIC bugfixes and workarounds

On Thu, Sep 28, 2023, Maxim Levitsky wrote:
> Maxim Levitsky (4):
> x86: KVM: SVM: always update the x2avic msr interception
> x86: KVM: SVM: add support for Invalid IPI Vector interception
> x86: KVM: SVM: refresh AVIC inhibition in svm_leave_nested()

Paolo, I assume you'll take the first three directly for 6.6?

> x86: KVM: SVM: workaround for AVIC's errata #1235

2023-09-29 07:26:34

by Sean Christopherson

Subject: Re: [PATCH v2 3/4] x86: KVM: SVM: refresh AVIC inhibition in svm_leave_nested()

On Thu, Sep 28, 2023, Maxim Levitsky wrote:
> svm_leave_nested(), similar to a nested VM exit, gets the vCPU out of nested
> mode and thus should end the local inhibition of AVIC on this vCPU.
>
> Failure to do so can lead to hangs on guest reboot.
>
> Raise the KVM_REQ_APICV_UPDATE request to refresh the AVIC state of the
> current vCPU in this case.
>
> Fixes: f44509f849fe ("KVM: x86: SVM: allow AVIC to co-exist with a nested guest running")
> Cc: [email protected]
> Signed-off-by: Maxim Levitsky <[email protected]>
> ---

Reviewed-by: Sean Christopherson <[email protected]>

2023-09-29 13:01:14

by Sean Christopherson

Subject: Re: [PATCH v2 4/4] x86: KVM: SVM: workaround for AVIC's errata #1235

On Thu, Sep 28, 2023, Maxim Levitsky wrote:
> diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
> index 4b74ea91f4e6bb6..28bb0e6b321660d 100644
> --- a/arch/x86/kvm/svm/avic.c
> +++ b/arch/x86/kvm/svm/avic.c
> @@ -62,6 +62,9 @@ static_assert(__AVIC_GATAG(AVIC_VM_ID_MASK, AVIC_VCPU_ID_MASK) == -1u);
> static bool force_avic;
> module_param_unsafe(force_avic, bool, 0444);
>
> +static int avic_zen2_errata_workaround = -1;
> +module_param(avic_zen2_errata_workaround, int, 0444);
> +
> /* Note:
> * This hash table is used to map VM_ID to a struct kvm_svm,
> * when handling AMD IOMMU GALOG notification to schedule in
> @@ -276,7 +279,7 @@ static u64 *avic_get_physical_id_entry(struct kvm_vcpu *vcpu,
>
> static int avic_init_backing_page(struct kvm_vcpu *vcpu)
> {
> - u64 *entry, new_entry;
> + u64 *entry;
> int id = vcpu->vcpu_id;
> struct vcpu_svm *svm = to_svm(vcpu);
>
> @@ -308,10 +311,10 @@ static int avic_init_backing_page(struct kvm_vcpu *vcpu)
> if (!entry)
> return -EINVAL;
>
> - new_entry = __sme_set((page_to_phys(svm->avic_backing_page) &
> - AVIC_PHYSICAL_ID_ENTRY_BACKING_PAGE_MASK) |
> - AVIC_PHYSICAL_ID_ENTRY_VALID_MASK);
> - WRITE_ONCE(*entry, new_entry);
> + svm->avic_physical_id_entry = __sme_set((page_to_phys(svm->avic_backing_page) &
> + AVIC_PHYSICAL_ID_ENTRY_BACKING_PAGE_MASK) |
> + AVIC_PHYSICAL_ID_ENTRY_VALID_MASK);
> + WRITE_ONCE(*entry, svm->avic_physical_id_entry);

Aha! Rather than deal with the dummy entry at runtime, simply point the pointer
at the dummy entry during setup.

And instead of adding a dedicated erratum param, let's piggyback on VMX's enable_ipiv.
It's not a true disable, but IMO it's close enough. That will make the param
much more self-documenting, and won't feel so awkward if someone wants to disable
IPI virtualization for other reasons.

Then we can do this in three steps:

1. Move enable_ipiv to common code
2. Let userspace disable enable_ipiv for SVM+AVIC
3. Disable enable_ipiv for affected CPUs

The biggest downside to using enable_ipiv is that the "auto" behavior for the
erratum will be a bit ugly, but that's a solvable problem.
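
As an aside, one possible shape of such an "auto" resolution - purely an
illustration of the idea, not the contents of the attached patches, and the
tri-state parameter here is hypothetical:

#include <stdbool.h>
#include <stdio.h>

/*
 * -1 means "auto": disable IPI virtualization on families assumed to be
 * affected by erratum #1235; 0/1 are explicit admin overrides.
 */
static bool resolve_enable_ipiv(int param, unsigned int cpu_family)
{
	bool affected = (cpu_family == 0x17);	/* assume Zen1/Zen2 are affected */

	if (param == 0)
		return false;	/* admin explicitly disabled IPI virtualization */
	if (param == 1)
		return true;	/* admin explicitly enabled it despite the erratum */
	return !affected;	/* "auto": off only on affected parts */
}

int main(void)
{
	printf("Zen2, auto      -> ipiv=%d\n", resolve_enable_ipiv(-1, 0x17));
	printf("Zen3, auto      -> ipiv=%d\n", resolve_enable_ipiv(-1, 0x19));
	printf("Zen2, forced on -> ipiv=%d\n", resolve_enable_ipiv(1, 0x17));
	return 0;
}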

If you've no objection to the above approach, I'll post the attached patches along
with a massaged version of this patch.

The attached patches apply on top of an AVIC cleanup[*], which (shameless plug)
could use a review ;-)

[*] https://lore.kernel.org/all/[email protected]


Attachments:
0001-KVM-VMX-Move-enable_ipiv-knob-to-common-x86.patch (2.66 kB)
0002-KVM-SVM-Add-enable_ipiv-param-skip-physical-ID-progr.patch (3.24 kB)

2023-09-29 13:21:43

by Sean Christopherson

Subject: Re: [PATCH v2 1/4] x86: KVM: SVM: always update the x2avic msr interception

On Thu, Sep 28, 2023, Maxim Levitsky wrote:
> The following problem has existed since x2AVIC was enabled in KVM:
>
> svm_set_x2apic_msr_interception is called to enable the interception of

Nit, svm_set_x2apic_msr_interception().

Definitely not worth another version though.

> the x2APIC MSRs.
>
> In particular, it is called at the moment the guest resets its APIC.
>
> Assuming that the guest's APIC was in x2APIC mode, the reset will bring
> it back to xAPIC mode.
>
> svm_set_x2apic_msr_interception, however, has an erroneous check for
> '!apic_x2apic_mode()' which prevents it from doing anything in this case.
>
> As a result, all x2APIC MSRs are left unintercepted, which exposes
> the bare-metal x2APIC (if enabled) to the guest.
> Oops.
>
> Remove the erroneous '!apic_x2apic_mode()' check to fix that.
>
> This fixes CVE-2023-5090
>
> Fixes: 4d1d7942e36a ("KVM: SVM: Introduce logic to (de)activate x2AVIC mode")
> Cc: [email protected]
> Signed-off-by: Maxim Levitsky <[email protected]>
> ---

Reviewed-by: Sean Christopherson <[email protected]>

2023-09-29 21:50:35

by Paolo Bonzini

Subject: Re: [PATCH v2 0/4] AVIC bugfixes and workarounds

On Fri, Sep 29, 2023 at 4:09 AM Sean Christopherson <[email protected]> wrote:
>
> On Thu, Sep 28, 2023, Maxim Levitsky wrote:
> > Maxim Levitsky (4):
> > x86: KVM: SVM: always update the x2avic msr interception
> > x86: KVM: SVM: add support for Invalid IPI Vector interception
> > x86: KVM: SVM: refresh AVIC inhibition in svm_leave_nested()
>
> Paolo, I assume you'll take the first three directly for 6.6?

Yes.

Paolo

> > x86: KVM: SVM: workaround for AVIC's errata #1235
>

2023-10-03 03:18:35

by Suravee Suthikulpanit

Subject: Re: [PATCH v2 1/4] x86: KVM: SVM: always update the x2avic msr interception

Maxim,

Thanks for finding and fixing this.

On 9/29/2023 7:24 AM, Sean Christopherson wrote:
> On Thu, Sep 28, 2023, Maxim Levitsky wrote:
>> The following problem has existed since x2AVIC was enabled in KVM:
>>
>> svm_set_x2apic_msr_interception is called to enable the interception of
>
> Nit, svm_set_x2apic_msr_interception().
>
> Definitely not worth another version though.
>
>> the x2APIC MSRs.
>>
>> In particular, it is called at the moment the guest resets its APIC.
>>
>> Assuming that the guest's APIC was in x2APIC mode, the reset will bring
>> it back to xAPIC mode.
>>
>> svm_set_x2apic_msr_interception, however, has an erroneous check for
>> '!apic_x2apic_mode()' which prevents it from doing anything in this case.
>>
>> As a result, all x2APIC MSRs are left unintercepted, which exposes
>> the bare-metal x2APIC (if enabled) to the guest.
>> Oops.
>>
>> Remove the erroneous '!apic_x2apic_mode()' check to fix that.
>>
>> This fixes CVE-2023-5090
>>
>> Fixes: 4d1d7942e36a ("KVM: SVM: Introduce logic to (de)activate x2AVIC mode")
>> Cc: [email protected]
>> Signed-off-by: Maxim Levitsky <[email protected]>
>> ---
>
> Reviewed-by: Sean Christopherson <[email protected]>
Reviewed-by: Suravee Suthikulpanit <[email protected]>
Tested-by: Suravee Suthikulpanit <[email protected]>

2023-10-12 14:47:59

by Paolo Bonzini

Subject: Re: [PATCH v2 0/4] AVIC bugfixes and workarounds

Queued patches 1-3, thanks.

Paolo