2017-12-20 12:06:16

by Paolo Bonzini

Subject: [PATCH 0/3] KVM: vmx: MSR bitmap cleanups and optimizations

This is v2 of the patch "KVM: vmx: speed up MSR bitmap merge",
taking into account Jim and David's suggestions.

Paolo

Paolo Bonzini (3):
KVM: vmx: speed up MSR bitmap merge
KVM: vmx: simplify MSR bitmap setup
KVM: VMX: introduce X2APIC_MSR macro

arch/x86/kvm/vmx.c | 99 +++++++++++++++++++++++++++---------------------------
1 file changed, 50 insertions(+), 49 deletions(-)

--
1.8.3.1


2017-12-20 12:06:28

by Paolo Bonzini

Subject: [PATCH 3/3] KVM: VMX: introduce X2APIC_MSR macro

Remove duplicate expression in nested_vmx_prepare_msr_bitmap, and make
the register names clearer in hardware_setup.
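
For reference, the macro checks out against the literals it replaces
(APIC register offsets as in arch/x86/include/asm/apicdef.h; the worked
values below are a sanity check added here, not part of the patch):

    #define X2APIC_MSR(r) (APIC_BASE_MSR + ((r) >> 4))

    /*
     * Each 16-byte xAPIC MMIO register maps to one x2APIC MSR starting
     * at APIC_BASE_MSR (0x800), hence the >> 4:
     *
     *   X2APIC_MSR(APIC_TASKPRI)  = 0x800 + (0x080 >> 4) = 0x808  TPR
     *   X2APIC_MSR(APIC_EOI)      = 0x800 + (0x0b0 >> 4) = 0x80b  EOI
     *   X2APIC_MSR(APIC_TMCCT)    = 0x800 + (0x390 >> 4) = 0x839  TMCCT
     *   X2APIC_MSR(APIC_SELF_IPI) = 0x800 + (0x3f0 >> 4) = 0x83f  SELF IPI
     */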

Suggested-by: Jim Mattson <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
---
arch/x86/kvm/vmx.c | 19 +++++++++----------
1 file changed, 9 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 905aaa778306..65e09096a5ab 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -5256,6 +5256,8 @@ static void pt_disable_intercept_for_msr(bool flag)
}
}

+#define X2APIC_MSR(r) (APIC_BASE_MSR + ((r) >> 4))
+
static void vmx_disable_intercept_msr_x2apic(u32 msr, int type, bool apicv_active)
{
if (apicv_active) {
@@ -7136,7 +7138,7 @@ static __init int hardware_setup(void)
set_bit(0, vmx_vpid_bitmap); /* 0 is reserved for host */

for (msr = 0x800; msr <= 0x8ff; msr++) {
- if (msr == 0x839 /* TMCCT */)
+ if (msr == X2APIC_MSR(APIC_TMCCT))
continue;
vmx_disable_intercept_msr_x2apic(msr, MSR_TYPE_R, true);
}
@@ -7145,12 +7147,9 @@ static __init int hardware_setup(void)
* TPR reads and writes can be virtualized even if virtual interrupt
* delivery is not in use.
*/
- vmx_disable_intercept_msr_x2apic(0x808, MSR_TYPE_R | MSR_TYPE_W, false);
-
- /* EOI */
- vmx_disable_intercept_msr_x2apic(0x80b, MSR_TYPE_W, true);
- /* SELF-IPI */
- vmx_disable_intercept_msr_x2apic(0x83f, MSR_TYPE_W, true);
+ vmx_disable_intercept_msr_x2apic(X2APIC_MSR(APIC_TASKPRI), MSR_TYPE_R | MSR_TYPE_W, false);
+ vmx_disable_intercept_msr_x2apic(X2APIC_MSR(APIC_EOI), MSR_TYPE_W, true);
+ vmx_disable_intercept_msr_x2apic(X2APIC_MSR(APIC_SELF_IPI), MSR_TYPE_W, true);

if (enable_ept)
vmx_enable_tdp();
@@ -10344,17 +10343,17 @@ static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,

nested_vmx_disable_intercept_for_msr(
msr_bitmap_l1, msr_bitmap_l0,
- APIC_BASE_MSR + (APIC_TASKPRI >> 4),
+ X2APIC_MSR(APIC_TASKPRI),
MSR_TYPE_W);

if (nested_cpu_has_vid(vmcs12)) {
nested_vmx_disable_intercept_for_msr(
msr_bitmap_l1, msr_bitmap_l0,
- APIC_BASE_MSR + (APIC_EOI >> 4),
+ X2APIC_MSR(APIC_EOI),
MSR_TYPE_W);
nested_vmx_disable_intercept_for_msr(
msr_bitmap_l1, msr_bitmap_l0,
- APIC_BASE_MSR + (APIC_SELF_IPI >> 4),
+ X2APIC_MSR(APIC_SELF_IPI),
MSR_TYPE_W);
}
kunmap(page);
--
1.8.3.1

2017-12-20 12:06:30

by Paolo Bonzini

Subject: [PATCH 2/3] KVM: vmx: simplify MSR bitmap setup

The APICv-enabled MSR bitmap is a superset of the APICv-disabled bitmap.
Make that obvious in vmx_disable_intercept_msr_x2apic.

Signed-off-by: Paolo Bonzini <[email protected]>
---
arch/x86/kvm/vmx.c | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 9f9c3194440f..905aaa778306 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -5263,12 +5263,9 @@ static void vmx_disable_intercept_msr_x2apic(u32 msr, int type, bool apicv_activ
msr, type);
__vmx_disable_intercept_for_msr(vmx_msr_bitmap_longmode_x2apic_apicv,
msr, type);
- } else {
- __vmx_disable_intercept_for_msr(vmx_msr_bitmap_legacy_x2apic,
- msr, type);
- __vmx_disable_intercept_for_msr(vmx_msr_bitmap_longmode_x2apic,
- msr, type);
}
+ __vmx_disable_intercept_for_msr(vmx_msr_bitmap_legacy_x2apic, msr, type);
+ __vmx_disable_intercept_for_msr(vmx_msr_bitmap_longmode_x2apic, msr, type);
}

static bool vmx_get_enable_apicv(struct kvm_vcpu *vcpu)
@@ -7148,7 +7145,6 @@ static __init int hardware_setup(void)
* TPR reads and writes can be virtualized even if virtual interrupt
* delivery is not in use.
*/
- vmx_disable_intercept_msr_x2apic(0x808, MSR_TYPE_W, true);
vmx_disable_intercept_msr_x2apic(0x808, MSR_TYPE_R | MSR_TYPE_W, false);

/* EOI */
--
1.8.3.1


2017-12-20 12:06:24

by Paolo Bonzini

Subject: [PATCH 1/3] KVM: vmx: speed up MSR bitmap merge

The bulk of the MSR bitmap is either immutable, or can be copied from
the L1 bitmap. By initializing it at VMXON time, and copying the mutable
parts one long at a time on vmentry (rather than one bit), about 4000
clock cycles (30%) can be saved on a nested VMLAUNCH/VMRESUME.

The resulting for loop only has four iterations, so it is cheap enough
to reinitialize the MSR write bitmaps on every iteration, and it makes
the code simpler.
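
To make the long-at-a-time copy concrete, here is a standalone sketch
of the merge loop (simplified from the diff below; the function and
parameter names are illustrative, not the kernel's):

    #define BITS_PER_LONG (8 * sizeof(unsigned long))

    /*
     * Merge the read-intercept words for MSRs 0x800-0x8ff from the L1
     * bitmap into the L0 bitmap.  With 64-bit longs the loop runs four
     * times (256 MSRs / 64 bits per word).  Per the SDM layout, the
     * write-low bitmap sits 0x800 bytes past the read-low bitmap, so
     * the matching write word is 0x800 / sizeof(long) longs away; the
     * write intercepts are set wholesale here and the interesting ones
     * cleared afterwards.
     */
    static void merge_x2apic_reads(unsigned long *l0,
                                   const unsigned long *l1,
                                   int pass_through_reads)
    {
            unsigned int msr;

            for (msr = 0x800; msr <= 0x8ff; msr += BITS_PER_LONG) {
                    unsigned int word = msr / BITS_PER_LONG;

                    /* Reads: take L1's choices, or intercept all. */
                    l0[word] = pass_through_reads ? l1[word] : ~0UL;
                    /* Writes: intercept everything for now. */
                    l0[word + 0x800 / sizeof(unsigned long)] = ~0UL;
            }
    }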

Suggested-by: Jim Mattson <[email protected]>
Signed-off-by: Paolo Bonzini <[email protected]>
---
v1->v2: do not WARN in nested_vmx_merge_msr_bitmap [David]
rename function to nested_vmx_prepare_msr_bitmap,
it's used even if there's no L1 bitmap [Paolo]

arch/x86/kvm/vmx.c | 78 +++++++++++++++++++++++++++++-------------------------
1 file changed, 42 insertions(+), 36 deletions(-)

diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 669f5f74857d..9f9c3194440f 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -5183,11 +5183,6 @@ static void nested_vmx_disable_intercept_for_msr(unsigned long *msr_bitmap_l1,
{
int f = sizeof(unsigned long);

- if (!cpu_has_vmx_msr_bitmap()) {
- WARN_ON(1);
- return;
- }
-
/*
* See Intel PRM Vol. 3, 20.6.9 (MSR-Bitmap Address). Early manuals
* have the write-low and read-high bitmap offsets the wrong way round.
@@ -7459,6 +7454,7 @@ static int enter_vmx_operation(struct kvm_vcpu *vcpu)
(unsigned long *)__get_free_page(GFP_KERNEL);
if (!vmx->nested.msr_bitmap)
goto out_msr_bitmap;
+ memset(vmx->nested.msr_bitmap, 0xff, PAGE_SIZE);
}

vmx->nested.cached_vmcs12 = kmalloc(VMCS12_SIZE, GFP_KERNEL);
@@ -10151,8 +10147,8 @@ static void vmx_inject_page_fault_nested(struct kvm_vcpu *vcpu,
}
}

-static inline bool nested_vmx_merge_msr_bitmap(struct kvm_vcpu *vcpu,
- struct vmcs12 *vmcs12);
+static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
+ struct vmcs12 *vmcs12);

static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu,
struct vmcs12 *vmcs12)
@@ -10241,11 +10237,7 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu,
(unsigned long)(vmcs12->posted_intr_desc_addr &
(PAGE_SIZE - 1)));
}
- if (cpu_has_vmx_msr_bitmap() &&
- nested_cpu_has(vmcs12, CPU_BASED_USE_MSR_BITMAPS) &&
- nested_vmx_merge_msr_bitmap(vcpu, vmcs12))
- ;
- else
+ if (!nested_vmx_prepare_msr_bitmap(vcpu, vmcs12))
vmcs_clear_bits(CPU_BASED_VM_EXEC_CONTROL,
CPU_BASED_USE_MSR_BITMAPS);
}
@@ -10313,14 +10305,19 @@ static int nested_vmx_check_tpr_shadow_controls(struct kvm_vcpu *vcpu,
* Merge L0's and L1's MSR bitmap, return false to indicate that
* we do not use the hardware.
*/
-static inline bool nested_vmx_merge_msr_bitmap(struct kvm_vcpu *vcpu,
- struct vmcs12 *vmcs12)
+static inline bool nested_vmx_prepare_msr_bitmap(struct kvm_vcpu *vcpu,
+ struct vmcs12 *vmcs12)
{
int msr;
struct page *page;
unsigned long *msr_bitmap_l1;
unsigned long *msr_bitmap_l0 = to_vmx(vcpu)->nested.msr_bitmap;

+ /* Nothing to do if the MSR bitmap is not in use. */
+ if (!cpu_has_vmx_msr_bitmap() ||
+ !nested_cpu_has(vmcs12, CPU_BASED_USE_MSR_BITMAPS))
+ return false;
+
/* This shortcut is ok because we support only x2APIC MSRs so far. */
if (!nested_cpu_has_virt_x2apic_mode(vmcs12))
return false;
@@ -10328,32 +10325,41 @@ static inline bool nested_vmx_merge_msr_bitmap(struct kvm_vcpu *vcpu,
page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->msr_bitmap);
if (is_error_page(page))
return false;
- msr_bitmap_l1 = (unsigned long *)kmap(page);

- memset(msr_bitmap_l0, 0xff, PAGE_SIZE);
+ msr_bitmap_l1 = (unsigned long *)kmap(page);
+ if (nested_cpu_has_apic_reg_virt(vmcs12)) {
+ /*
+ * L0 need not intercept reads for MSRs between 0x800 and 0x8ff, it
+ * just lets the processor take the value from the virtual-APIC page;
+ * take those 256 bits directly from the L1 bitmap.
+ */
+ for (msr = 0x800; msr <= 0x8ff; msr += BITS_PER_LONG) {
+ unsigned word = msr / BITS_PER_LONG;
+ msr_bitmap_l0[word] = msr_bitmap_l1[word];
+ msr_bitmap_l0[word + (0x800 / sizeof(long))] = ~0;
+ }
+ } else {
+ for (msr = 0x800; msr <= 0x8ff; msr += BITS_PER_LONG) {
+ unsigned word = msr / BITS_PER_LONG;
+ msr_bitmap_l0[word] = ~0;
+ msr_bitmap_l0[word + (0x800 / sizeof(long))] = ~0;
+ }
+ }

- if (nested_cpu_has_virt_x2apic_mode(vmcs12)) {
- if (nested_cpu_has_apic_reg_virt(vmcs12))
- for (msr = 0x800; msr <= 0x8ff; msr++)
- nested_vmx_disable_intercept_for_msr(
- msr_bitmap_l1, msr_bitmap_l0,
- msr, MSR_TYPE_R);
+ nested_vmx_disable_intercept_for_msr(
+ msr_bitmap_l1, msr_bitmap_l0,
+ APIC_BASE_MSR + (APIC_TASKPRI >> 4),
+ MSR_TYPE_W);

+ if (nested_cpu_has_vid(vmcs12)) {
nested_vmx_disable_intercept_for_msr(
- msr_bitmap_l1, msr_bitmap_l0,
- APIC_BASE_MSR + (APIC_TASKPRI >> 4),
- MSR_TYPE_R | MSR_TYPE_W);
-
- if (nested_cpu_has_vid(vmcs12)) {
- nested_vmx_disable_intercept_for_msr(
- msr_bitmap_l1, msr_bitmap_l0,
- APIC_BASE_MSR + (APIC_EOI >> 4),
- MSR_TYPE_W);
- nested_vmx_disable_intercept_for_msr(
- msr_bitmap_l1, msr_bitmap_l0,
- APIC_BASE_MSR + (APIC_SELF_IPI >> 4),
- MSR_TYPE_W);
- }
+ msr_bitmap_l1, msr_bitmap_l0,
+ APIC_BASE_MSR + (APIC_EOI >> 4),
+ MSR_TYPE_W);
+ nested_vmx_disable_intercept_for_msr(
+ msr_bitmap_l1, msr_bitmap_l0,
+ APIC_BASE_MSR + (APIC_SELF_IPI >> 4),
+ MSR_TYPE_W);
}
kunmap(page);
kvm_release_page_clean(page);
--
1.8.3.1


2017-12-20 17:07:43

by Jim Mattson

Subject: Re: [PATCH 3/3] KVM: VMX: introduce X2APIC_MSR macro

Reviewed-by: Jim Mattson <[email protected]>

On Wed, Dec 20, 2017 at 4:05 AM, Paolo Bonzini <[email protected]> wrote:
> Remove duplicate expression in nested_vmx_prepare_msr_bitmap, and make
> the register names clearer in hardware_setup.
> [...]

2017-12-20 18:02:42

by Jim Mattson

Subject: Re: [PATCH 1/3] KVM: vmx: speed up MSR bitmap merge

Reviewed-by: Jim Mattson <[email protected]>

On Wed, Dec 20, 2017 at 4:05 AM, Paolo Bonzini <[email protected]> wrote:
> The bulk of the MSR bitmap is either immutable, or can be copied from
> the L1 bitmap. By initializing it at VMXON time, and copying the mutable
> parts one long at a time on vmentry (rather than one bit), about 4000
> clock cycles (30%) can be saved on a nested VMLAUNCH/VMRESUME.
>
> The resulting for loop only has four iterations, so it is cheap enough
> to reinitialize the MSR write bitmaps on every iteration, and it makes
> the code simpler.
> [...]

2017-12-20 19:40:13

by Jim Mattson

Subject: Re: [PATCH 2/3] KVM: vmx: simplify MSR bitmap setup

This doesn't look right to me. Without APIC-register virtualization,
the only X2APIC MSR intercept that should be disabled is TPR.

On Wed, Dec 20, 2017 at 4:05 AM, Paolo Bonzini <[email protected]> wrote:
> The APICv-enabled MSR bitmap is a superset of the APICv-disabled bitmap.
> Make that obvious in vmx_disable_intercept_msr_x2apic.
> [...]

2017-12-20 21:18:09

by Paolo Bonzini

Subject: Re: [PATCH 2/3] KVM: vmx: simplify MSR bitmap setup

On 20/12/2017 20:40, Jim Mattson wrote:
> This doesn't look right to me. Without APIC-register virtualization,
> the only X2APIC MSR intercept that should be disabled is TPR.

Of course... The bitmap that has to be outside the "if" is
*_x2apic_apicv, not *_x2apic. I sent the wrong version of the series
(and this was the only difference, together with s/superset/subset/ in
the commit message). Will resend tomorrow.
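
Concretely, the fixed helper should look like this (a sketch of the
resend, not the resend itself):

    static void vmx_disable_intercept_msr_x2apic(u32 msr, int type,
                                                 bool apicv_active)
    {
            /*
             * The APICv-active bitmaps are updated unconditionally:
             * the APICv-disabled bitmap is the subset, so anything
             * disabled there must be disabled here as well.
             */
            __vmx_disable_intercept_for_msr(vmx_msr_bitmap_legacy_x2apic_apicv,
                                            msr, type);
            __vmx_disable_intercept_for_msr(vmx_msr_bitmap_longmode_x2apic_apicv,
                                            msr, type);
            /*
             * Callers pass apicv_active == false for intercepts that
             * must be disabled even without APICv (i.e. only TPR).
             */
            if (!apicv_active) {
                    __vmx_disable_intercept_for_msr(vmx_msr_bitmap_legacy_x2apic,
                                                    msr, type);
                    __vmx_disable_intercept_for_msr(vmx_msr_bitmap_longmode_x2apic,
                                                    msr, type);
            }
    }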

Paolo
