2020-04-28 06:25:25

by Wanpeng Li

Subject: [PATCH v4 0/7] KVM: VMX: Tscdeadline timer emulation fastpath

In our cloud environment, we observe that IPIs and the timer cause the
main vmexits. Following the single-target IPI fastpath, let's optimize
tscdeadline timer latency by introducing a tscdeadline timer emulation
fastpath that skips various KVM-related checks when possible, i.e. after
a vmexit due to tscdeadline timer emulation, handle it and re-enter the
guest immediately without going through the full KVM run loop when
possible.
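
Conceptually the fastpath short-circuits the exit path; a rough sketch
of the resulting flow (simplified pseudocode only, not the literal patch
code; vmenter() and fastpath_handler() stand in for the vmx_vcpu_run()
internals and vmx_exit_handlers_fastpath()):

	for (;;) {
		vmenter();				/* run the guest */
		exit_fastpath = fastpath_handler(vcpu);	/* irqs still off */
		if (exit_fastpath == EXIT_FASTPATH_REENTER_GUEST &&
		    !kvm_vcpu_exit_request(vcpu))
			continue;	/* skip the full KVM run loop */
		break;			/* fall back to the normal path */
	}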

Tested on an SKX server.

cyclictest in guest (w/o mwait exposed, adaptive lapic timer advance at the default -1):

5540.5ns -> 4602ns 17%

kvm-unit-test/vmexit.flat:

w/o advance timer:
tscdeadline_immed: 3028.5 -> 2494.75 17.6%
tscdeadline: 5765.7 -> 5285 8.3%

w/ adaptive advance timer default -1:
tscdeadline_immed: 3123.75 -> 2583 17.3%
tscdeadline: 4663.75 -> 4537 2.7%

Tested-by: Haiwei Li <[email protected]>
Cc: Haiwei Li <[email protected]>

v3 -> v4:
* fix bad indentation
* rename CONT_RUN to REENTER_GUEST
* rename kvm_need_cancel_enter_guest to kvm_vcpu_exit_request
* rename EXIT_FASTPATH_CONT_RUN to EXIT_FASTPATH_REENTER_GUEST
* introduce EXIT_FASTPATH_NOP
* don't squash several things into one patch
* introduce REENTER_GUEST together with its first usage
* introduce __handle_preemption_timer subfunction

v2 -> v3:
* skip the interrupt notification and use vmx_sync_pir_to_irr before each cont_run
* add a from_timer_fn argument to apic_timer_expired
* remove all kinds of duplicated code

v1 -> v2:
* move more stuff from vmx.c to lapic.c
* remove redundant checking
* check more conditions before bailing out of CONT_RUN
* don't break AMD
* don't special-case LVTT
* clean up code

Wanpeng Li (7):
KVM: VMX: Introduce generic fastpath handler
KVM: X86: Enable fastpath when APICv is enabled
KVM: X86: Introduce more exit_fastpath_completion enum values
KVM: X86: Introduce kvm_vcpu_exit_request() helper
KVM: VMX: Optimize posted-interrupt delivery for timer fastpath
KVM: X86: TSCDEADLINE MSR emulation fastpath
KVM: VMX: Handle preemption timer fastpath

arch/x86/include/asm/kvm_host.h | 3 ++
arch/x86/kvm/lapic.c | 18 +++++++----
arch/x86/kvm/svm/svm.c | 11 ++++---
arch/x86/kvm/vmx/vmx.c | 66 +++++++++++++++++++++++++++++++++--------
arch/x86/kvm/x86.c | 44 ++++++++++++++++++++-------
arch/x86/kvm/x86.h | 3 +-
virt/kvm/kvm_main.c | 1 +
7 files changed, 110 insertions(+), 36 deletions(-)

--
2.7.4


2020-04-28 06:25:34

by Wanpeng Li

Subject: [PATCH v4 1/7] KVM: VMX: Introduce generic fastpath handler

From: Wanpeng Li <[email protected]>

Introduce a generic fastpath handler to handle the MSR fastpath, the
VMX-preemption timer fastpath, etc., and move it after
vmx_complete_interrupts() so that a later patch can catch the case where
a vmexit occurred while another event was being delivered to the guest.
There is no observed performance difference in IPI fastpath testing
after this move.

Tested-by: Haiwei Li <[email protected]>
Cc: Haiwei Li <[email protected]>
Signed-off-by: Wanpeng Li <[email protected]>
---
arch/x86/kvm/vmx/vmx.c | 21 ++++++++++++++++-----
1 file changed, 16 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 3ab6ca6..9b5adb4 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6583,6 +6583,20 @@ void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp)
}
}

+static enum exit_fastpath_completion vmx_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
+{
+ if (!is_guest_mode(vcpu)) {
+ switch (to_vmx(vcpu)->exit_reason) {
+ case EXIT_REASON_MSR_WRITE:
+ return handle_fastpath_set_msr_irqoff(vcpu);
+ default:
+ return EXIT_FASTPATH_NONE;
+ }
+ }
+
+ return EXIT_FASTPATH_NONE;
+}
+
bool __vmx_vcpu_run(struct vcpu_vmx *vmx, unsigned long *regs, bool launched);

static enum exit_fastpath_completion vmx_vcpu_run(struct kvm_vcpu *vcpu)
@@ -6757,17 +6771,14 @@ static enum exit_fastpath_completion vmx_vcpu_run(struct kvm_vcpu *vcpu)
if (unlikely(vmx->exit_reason & VMX_EXIT_REASONS_FAILED_VMENTRY))
return EXIT_FASTPATH_NONE;

- if (!is_guest_mode(vcpu) && vmx->exit_reason == EXIT_REASON_MSR_WRITE)
- exit_fastpath = handle_fastpath_set_msr_irqoff(vcpu);
- else
- exit_fastpath = EXIT_FASTPATH_NONE;
-
vmx->loaded_vmcs->launched = 1;
vmx->idt_vectoring_info = vmcs_read32(IDT_VECTORING_INFO_FIELD);

vmx_recover_nmi_blocking(vmx);
vmx_complete_interrupts(vmx);

+ exit_fastpath = vmx_exit_handlers_fastpath(vcpu);
+
return exit_fastpath;
}

--
2.7.4

2020-04-28 06:25:48

by Wanpeng Li

Subject: [PATCH v4 2/7] KVM: X86: Enable fastpath when APICv is enabled

From: Wanpeng Li <[email protected]>

We can't observe any benefit from the single-target IPI fastpath when
APICv is disabled, so let's enable the IPI and timer fastpaths only
when APICv is enabled for now.

Tested-by: Haiwei Li <[email protected]>
Cc: Haiwei Li <[email protected]>
Signed-off-by: Wanpeng Li <[email protected]>
---
arch/x86/kvm/svm/svm.c | 2 +-
arch/x86/kvm/vmx/vmx.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 8f8fc65..1e7220e 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -3344,7 +3344,7 @@ static void svm_cancel_injection(struct kvm_vcpu *vcpu)

static enum exit_fastpath_completion svm_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
{
- if (!is_guest_mode(vcpu) &&
+ if (!is_guest_mode(vcpu) && vcpu->arch.apicv_active &&
to_svm(vcpu)->vmcb->control.exit_code == SVM_EXIT_MSR &&
to_svm(vcpu)->vmcb->control.exit_info_1)
return handle_fastpath_set_msr_irqoff(vcpu);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 9b5adb4..f207004 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6585,7 +6585,7 @@ void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp)

static enum exit_fastpath_completion vmx_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
{
- if (!is_guest_mode(vcpu)) {
+ if (!is_guest_mode(vcpu) && vcpu->arch.apicv_active) {
switch (to_vmx(vcpu)->exit_reason) {
case EXIT_REASON_MSR_WRITE:
return handle_fastpath_set_msr_irqoff(vcpu);
--
2.7.4

2020-04-28 06:25:52

by Wanpeng Li

Subject: [PATCH v4 4/7] KVM: X86: Introduce kvm_vcpu_exit_request() helper

From: Wanpeng Li <[email protected]>

Introduce the kvm_vcpu_exit_request() helper; we need to check some
conditions before re-entering the guest immediately. If the fastpath
completes but something prevents re-entering the guest immediately, we
skip invoking the exit handler and go through the full run loop instead.

Tested-by: Haiwei Li <[email protected]>
Cc: Haiwei Li <[email protected]>
Signed-off-by: Wanpeng Li <[email protected]>
---
arch/x86/kvm/vmx/vmx.c | 3 +++
arch/x86/kvm/x86.c | 10 ++++++++--
arch/x86/kvm/x86.h | 1 +
3 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index e12a42e..24cadf4 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6777,6 +6777,9 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
vmx_complete_interrupts(vmx);

exit_fastpath = vmx_exit_handlers_fastpath(vcpu);
+ if (exit_fastpath == EXIT_FASTPATH_REENTER_GUEST &&
+ kvm_vcpu_exit_request(vcpu))
+ exit_fastpath = EXIT_FASTPATH_NOP;

return exit_fastpath;
}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index df38b40..afe052c 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1581,6 +1581,13 @@ int kvm_emulate_wrmsr(struct kvm_vcpu *vcpu)
}
EXPORT_SYMBOL_GPL(kvm_emulate_wrmsr);

+bool kvm_vcpu_exit_request(struct kvm_vcpu *vcpu)
+{
+ return vcpu->mode == EXITING_GUEST_MODE || kvm_request_pending(vcpu) ||
+ need_resched() || signal_pending(current);
+}
+EXPORT_SYMBOL_GPL(kvm_vcpu_exit_request);
+
/*
* The fast path for frequent and performance sensitive wrmsr emulation,
* i.e. the sending of IPI, sending IPI early in the VM-Exit flow reduces
@@ -8366,8 +8373,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
if (kvm_lapic_enabled(vcpu) && vcpu->arch.apicv_active)
kvm_x86_ops.sync_pir_to_irr(vcpu);

- if (vcpu->mode == EXITING_GUEST_MODE || kvm_request_pending(vcpu)
- || need_resched() || signal_pending(current)) {
+ if (kvm_vcpu_exit_request(vcpu)) {
vcpu->mode = OUTSIDE_GUEST_MODE;
smp_wmb();
local_irq_enable();
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 2f02dc0..6eb62e9 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -364,5 +364,6 @@ static inline bool kvm_dr7_valid(u64 data)
void kvm_load_guest_xsave_state(struct kvm_vcpu *vcpu);
void kvm_load_host_xsave_state(struct kvm_vcpu *vcpu);
u64 kvm_spec_ctrl_valid_bits(struct kvm_vcpu *vcpu);
+bool kvm_vcpu_exit_request(struct kvm_vcpu *vcpu);

#endif
--
2.7.4

2020-04-28 06:25:55

by Wanpeng Li

Subject: [PATCH v4 6/7] KVM: X86: TSCDEADLINE MSR emulation fastpath

From: Wanpeng Li <[email protected]>

This patch implements the tscdeadline MSR emulation fastpath: after a
vmexit due to a wrmsr to the tscdeadline MSR, handle it as soon as
possible and re-enter the guest immediately without checking various
KVM state when possible.

Tested-by: Haiwei Li <[email protected]>
Cc: Haiwei Li <[email protected]>
Signed-off-by: Wanpeng Li <[email protected]>
---
arch/x86/kvm/lapic.c | 18 ++++++++++++------
arch/x86/kvm/vmx/vmx.c | 12 ++++++++----
arch/x86/kvm/x86.c | 30 ++++++++++++++++++++++++------
3 files changed, 44 insertions(+), 16 deletions(-)

diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 38f7dc9..3589237 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -1593,7 +1593,7 @@ static void kvm_apic_inject_pending_timer_irqs(struct kvm_lapic *apic)
}
}

-static void apic_timer_expired(struct kvm_lapic *apic)
+static void apic_timer_expired(struct kvm_lapic *apic, bool from_timer_fn)
{
struct kvm_vcpu *vcpu = apic->vcpu;
struct kvm_timer *ktimer = &apic->lapic_timer;
@@ -1604,6 +1604,12 @@ static void apic_timer_expired(struct kvm_lapic *apic)
if (apic_lvtt_tscdeadline(apic) || ktimer->hv_timer_in_use)
ktimer->expired_tscdeadline = ktimer->tscdeadline;

+ if (!from_timer_fn && vcpu->arch.apicv_active) {
+ WARN_ON(kvm_get_running_vcpu() != vcpu);
+ kvm_apic_inject_pending_timer_irqs(apic);
+ return;
+ }
+
if (kvm_use_posted_timer_interrupt(apic->vcpu)) {
if (apic->lapic_timer.timer_advance_ns)
__kvm_wait_lapic_expire(vcpu);
@@ -1643,7 +1649,7 @@ static void start_sw_tscdeadline(struct kvm_lapic *apic)
expire = ktime_sub_ns(expire, ktimer->timer_advance_ns);
hrtimer_start(&ktimer->timer, expire, HRTIMER_MODE_ABS_HARD);
} else
- apic_timer_expired(apic);
+ apic_timer_expired(apic, false);

local_irq_restore(flags);
}
@@ -1751,7 +1757,7 @@ static void start_sw_period(struct kvm_lapic *apic)

if (ktime_after(ktime_get(),
apic->lapic_timer.target_expiration)) {
- apic_timer_expired(apic);
+ apic_timer_expired(apic, false);

if (apic_lvtt_oneshot(apic))
return;
@@ -1813,7 +1819,7 @@ static bool start_hv_timer(struct kvm_lapic *apic)
if (atomic_read(&ktimer->pending)) {
cancel_hv_timer(apic);
} else if (expired) {
- apic_timer_expired(apic);
+ apic_timer_expired(apic, false);
cancel_hv_timer(apic);
}
}
@@ -1863,7 +1869,7 @@ void kvm_lapic_expired_hv_timer(struct kvm_vcpu *vcpu)
goto out;
WARN_ON(swait_active(&vcpu->wq));
cancel_hv_timer(apic);
- apic_timer_expired(apic);
+ apic_timer_expired(apic, false);

if (apic_lvtt_period(apic) && apic->lapic_timer.period) {
advance_periodic_target_expiration(apic);
@@ -2369,7 +2375,7 @@ static enum hrtimer_restart apic_timer_fn(struct hrtimer *data)
struct kvm_timer *ktimer = container_of(data, struct kvm_timer, timer);
struct kvm_lapic *apic = container_of(ktimer, struct kvm_lapic, lapic_timer);

- apic_timer_expired(apic);
+ apic_timer_expired(apic, true);

if (lapic_is_periodic(apic)) {
advance_periodic_target_expiration(apic);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index ce19b0e..bb5c4f1 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5994,7 +5994,8 @@ static int vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
if (exit_fastpath == EXIT_FASTPATH_SKIP_EMUL_INS) {
kvm_skip_emulated_instruction(vcpu);
return 1;
- }
+ } else if (exit_fastpath == EXIT_FASTPATH_NOP)
+ return 1;

if (exit_reason >= kvm_vmx_max_exit_handlers)
goto unexpected_vmexit;
@@ -6605,6 +6606,7 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
struct vcpu_vmx *vmx = to_vmx(vcpu);
unsigned long cr3, cr4;

+REENTER_GUEST:
/* Record the guest's net vcpu time for enforced NMI injections. */
if (unlikely(!enable_vnmi &&
vmx->loaded_vmcs->soft_vnmi_blocked))
@@ -6779,10 +6781,12 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)

exit_fastpath = vmx_exit_handlers_fastpath(vcpu);
if (exit_fastpath == EXIT_FASTPATH_REENTER_GUEST) {
- if (!kvm_vcpu_exit_request(vcpu))
+ if (!kvm_vcpu_exit_request(vcpu)) {
vmx_sync_pir_to_irr(vcpu);
- else
- exit_fastpath = EXIT_FASTPATH_NOP;
+ /* static call is better with retpolines */
+ goto REENTER_GUEST;
+ }
+ exit_fastpath = EXIT_FASTPATH_NOP;
}

return exit_fastpath;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index afe052c..f3a5fe4 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1616,27 +1616,45 @@ static int handle_fastpath_set_x2apic_icr_irqoff(struct kvm_vcpu *vcpu, u64 data
return 1;
}

+static int handle_fastpath_set_tscdeadline(struct kvm_vcpu *vcpu, u64 data)
+{
+ if (!kvm_x86_ops.set_hv_timer ||
+ kvm_mwait_in_guest(vcpu->kvm) ||
+ kvm_can_post_timer_interrupt(vcpu))
+ return 1;
+
+ kvm_set_lapic_tscdeadline_msr(vcpu, data);
+ return 0;
+}
+
fastpath_t handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu)
{
u32 msr = kvm_rcx_read(vcpu);
u64 data;
- int ret = 0;
+ int ret = EXIT_FASTPATH_NONE;

switch (msr) {
case APIC_BASE_MSR + (APIC_ICR >> 4):
data = kvm_read_edx_eax(vcpu);
- ret = handle_fastpath_set_x2apic_icr_irqoff(vcpu, data);
+ if (!handle_fastpath_set_x2apic_icr_irqoff(vcpu, data))
+ ret = EXIT_FASTPATH_SKIP_EMUL_INS;
+ break;
+ case MSR_IA32_TSCDEADLINE:
+ data = kvm_read_edx_eax(vcpu);
+ if (!handle_fastpath_set_tscdeadline(vcpu, data))
+ ret = EXIT_FASTPATH_REENTER_GUEST;
break;
default:
- return EXIT_FASTPATH_NONE;
+ ret = EXIT_FASTPATH_NONE;
}

- if (!ret) {
+ if (ret != EXIT_FASTPATH_NONE) {
trace_kvm_msr_write(msr, data);
- return EXIT_FASTPATH_SKIP_EMUL_INS;
+ if (ret == EXIT_FASTPATH_REENTER_GUEST)
+ kvm_skip_emulated_instruction(vcpu);
}

- return EXIT_FASTPATH_NONE;
+ return ret;
}
EXPORT_SYMBOL_GPL(handle_fastpath_set_msr_irqoff);

--
2.7.4

2020-04-28 06:25:58

by Wanpeng Li

Subject: [PATCH v4 7/7] KVM: VMX: Handle preemption timer fastpath

From: Wanpeng Li <[email protected]>

This patch implements the preemption timer fastpath: after the
VMX-preemption timer counts down to zero and the timer fires, handle it
as soon as possible and re-enter the guest immediately without checking
various KVM state when possible.

Tested on an SKX server.

cyclictest in guest (w/o mwait exposed, adaptive lapic timer advance at the default -1):

5540.5ns -> 4602ns 17%

kvm-unit-test/vmexit.flat:

w/o advance timer:
tscdeadline_immed: 3028.5 -> 2494.75 17.6%
tscdeadline: 5765.7 -> 5285 8.3%

w/ adaptive advance timer default -1:
tscdeadline_immed: 3123.75 -> 2583 17.3%
tscdeadline: 4663.75 -> 4537 2.7%

Tested-by: Haiwei Li <[email protected]>
Cc: Haiwei Li <[email protected]>
Signed-off-by: Wanpeng Li <[email protected]>
---
arch/x86/kvm/vmx/vmx.c | 23 +++++++++++++++++++++--
1 file changed, 21 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index bb5c4f1..7dcc99f 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5556,17 +5556,34 @@ static int handle_pml_full(struct kvm_vcpu *vcpu)
return 1;
}

-static int handle_preemption_timer(struct kvm_vcpu *vcpu)
+static int __handle_preemption_timer(struct kvm_vcpu *vcpu)
{
struct vcpu_vmx *vmx = to_vmx(vcpu);

if (!vmx->req_immediate_exit &&
- !unlikely(vmx->loaded_vmcs->hv_timer_soft_disabled))
+ !unlikely(vmx->loaded_vmcs->hv_timer_soft_disabled)) {
kvm_lapic_expired_hv_timer(vcpu);
+ return 1;
+ }

+ return 0;
+}
+
+static int handle_preemption_timer(struct kvm_vcpu *vcpu)
+{
+ __handle_preemption_timer(vcpu);
return 1;
}

+static fastpath_t handle_fastpath_preemption_timer(struct kvm_vcpu *vcpu)
+{
+ if (__handle_preemption_timer(vcpu)) {
+ trace_kvm_exit(EXIT_REASON_PREEMPTION_TIMER, vcpu, KVM_ISA_VMX);
+ return EXIT_FASTPATH_REENTER_GUEST;
+ }
+ return EXIT_FASTPATH_NONE;
+}
+
/*
* When nested=0, all VMX instruction VM Exits filter here. The handlers
* are overwritten by nested_vmx_setup() when nested=1.
@@ -6590,6 +6607,8 @@ static fastpath_t vmx_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
switch (to_vmx(vcpu)->exit_reason) {
case EXIT_REASON_MSR_WRITE:
return handle_fastpath_set_msr_irqoff(vcpu);
+ case EXIT_REASON_PREEMPTION_TIMER:
+ return handle_fastpath_preemption_timer(vcpu);
default:
return EXIT_FASTPATH_NONE;
}
--
2.7.4

2020-04-28 06:27:09

by Wanpeng Li

Subject: [PATCH v4 5/7] KVM: VMX: Optimize posted-interrupt delivery for timer fastpath

From: Wanpeng Li <[email protected]>

Optimize posted-interrupt delivery, especially for the timer fastpath
scenario. I observe that kvm_x86_ops.deliver_posted_interrupt() has
higher latency than vmx_sync_pir_to_irr() in the timer fastpath
scenario: it has to wait for vmentry, and only after that can the CPU
handle the external interrupt, ack the notification vector, read the
posted-interrupt descriptor, etc. That is slower than evaluating and
delivering the interrupt immediately during vmentry. Let's skip sending
the notification interrupt to the target pCPU and instead call
vmx_sync_pir_to_irr() before each guest re-entry.

Tested-by: Haiwei Li <[email protected]>
Cc: Haiwei Li <[email protected]>
Signed-off-by: Wanpeng Li <[email protected]>
---
arch/x86/kvm/vmx/vmx.c | 12 ++++++++----
virt/kvm/kvm_main.c | 1 +
2 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 24cadf4..ce19b0e 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -3909,7 +3909,8 @@ static int vmx_deliver_posted_interrupt(struct kvm_vcpu *vcpu, int vector)
if (pi_test_and_set_on(&vmx->pi_desc))
return 0;

- if (!kvm_vcpu_trigger_posted_interrupt(vcpu, false))
+ if (vcpu != kvm_get_running_vcpu() &&
+ !kvm_vcpu_trigger_posted_interrupt(vcpu, false))
kvm_vcpu_kick(vcpu);

return 0;
@@ -6777,9 +6778,12 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
vmx_complete_interrupts(vmx);

exit_fastpath = vmx_exit_handlers_fastpath(vcpu);
- if (exit_fastpath == EXIT_FASTPATH_REENTER_GUEST &&
- kvm_vcpu_exit_request(vcpu))
- exit_fastpath = EXIT_FASTPATH_NOP;
+ if (exit_fastpath == EXIT_FASTPATH_REENTER_GUEST) {
+ if (!kvm_vcpu_exit_request(vcpu))
+ vmx_sync_pir_to_irr(vcpu);
+ else
+ exit_fastpath = EXIT_FASTPATH_NOP;
+ }

return exit_fastpath;
}
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 33e1eee..2482f3c 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -4644,6 +4644,7 @@ struct kvm_vcpu *kvm_get_running_vcpu(void)

return vcpu;
}
+EXPORT_SYMBOL_GPL(kvm_get_running_vcpu);

/**
* kvm_get_running_vcpus - get the per-CPU array of currently running vcpus.
--
2.7.4

2020-04-28 06:28:26

by Wanpeng Li

Subject: [PATCH v4 3/7] KVM: X86: Introduce more exit_fastpath_completion enum values

From: Wanpeng Li <[email protected]>

Introduce another two exit_fastpath_completion enum values.

- EXIT_FASTPATH_REENTER_GUEST: the fastpath is complete and nothing
else prevents re-entering the guest immediately.
- EXIT_FASTPATH_NOP: KVM will still go through its full run loop, but
it will skip invoking the exit handler.

They will be used by later patches. In addition, add a fastpath_t
typedef since the enum name makes lines a bit long.

Tested-by: Haiwei Li <[email protected]>
Cc: Haiwei Li <[email protected]>
Signed-off-by: Wanpeng Li <[email protected]>
---
arch/x86/include/asm/kvm_host.h | 3 +++
arch/x86/kvm/svm/svm.c | 9 ++++-----
arch/x86/kvm/vmx/vmx.c | 9 ++++-----
arch/x86/kvm/x86.c | 4 ++--
arch/x86/kvm/x86.h | 2 +-
5 files changed, 14 insertions(+), 13 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 7cd68d1..1535484 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -188,7 +188,10 @@ enum {
enum exit_fastpath_completion {
EXIT_FASTPATH_NONE,
EXIT_FASTPATH_SKIP_EMUL_INS,
+ EXIT_FASTPATH_REENTER_GUEST,
+ EXIT_FASTPATH_NOP,
};
+typedef enum exit_fastpath_completion fastpath_t;

struct x86_emulate_ctxt;
struct x86_exception;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 1e7220e..26f623f 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2911,8 +2911,7 @@ static void svm_get_exit_info(struct kvm_vcpu *vcpu, u64 *info1, u64 *info2)
*info2 = control->exit_info_2;
}

-static int handle_exit(struct kvm_vcpu *vcpu,
- enum exit_fastpath_completion exit_fastpath)
+static int handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
{
struct vcpu_svm *svm = to_svm(vcpu);
struct kvm_run *kvm_run = vcpu->run;
@@ -3342,7 +3341,7 @@ static void svm_cancel_injection(struct kvm_vcpu *vcpu)
svm_complete_interrupts(svm);
}

-static enum exit_fastpath_completion svm_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
+static fastpath_t svm_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
{
if (!is_guest_mode(vcpu) && vcpu->arch.apicv_active &&
to_svm(vcpu)->vmcb->control.exit_code == SVM_EXIT_MSR &&
@@ -3354,9 +3353,9 @@ static enum exit_fastpath_completion svm_exit_handlers_fastpath(struct kvm_vcpu

void __svm_vcpu_run(unsigned long vmcb_pa, unsigned long *regs);

-static enum exit_fastpath_completion svm_vcpu_run(struct kvm_vcpu *vcpu)
+static fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu)
{
- enum exit_fastpath_completion exit_fastpath;
+ fastpath_t exit_fastpath;
struct vcpu_svm *svm = to_svm(vcpu);

svm->vmcb->save.rax = vcpu->arch.regs[VCPU_REGS_RAX];
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index f207004..e12a42e 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -5881,8 +5881,7 @@ void dump_vmcs(void)
* The guest has exited. See if we can fix it or if we need userspace
* assistance.
*/
-static int vmx_handle_exit(struct kvm_vcpu *vcpu,
- enum exit_fastpath_completion exit_fastpath)
+static int vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
{
struct vcpu_vmx *vmx = to_vmx(vcpu);
u32 exit_reason = vmx->exit_reason;
@@ -6583,7 +6582,7 @@ void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp)
}
}

-static enum exit_fastpath_completion vmx_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
+static fastpath_t vmx_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
{
if (!is_guest_mode(vcpu) && vcpu->arch.apicv_active) {
switch (to_vmx(vcpu)->exit_reason) {
@@ -6599,9 +6598,9 @@ static enum exit_fastpath_completion vmx_exit_handlers_fastpath(struct kvm_vcpu

bool __vmx_vcpu_run(struct vcpu_vmx *vmx, unsigned long *regs, bool launched);

-static enum exit_fastpath_completion vmx_vcpu_run(struct kvm_vcpu *vcpu)
+static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
{
- enum exit_fastpath_completion exit_fastpath;
+ fastpath_t exit_fastpath;
struct vcpu_vmx *vmx = to_vmx(vcpu);
unsigned long cr3, cr4;

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 856b6fc2..df38b40 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1609,7 +1609,7 @@ static int handle_fastpath_set_x2apic_icr_irqoff(struct kvm_vcpu *vcpu, u64 data
return 1;
}

-enum exit_fastpath_completion handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu)
+fastpath_t handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu)
{
u32 msr = kvm_rcx_read(vcpu);
u64 data;
@@ -8168,7 +8168,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
bool req_int_win =
dm_request_for_irq_injection(vcpu) &&
kvm_cpu_accept_dm_intr(vcpu);
- enum exit_fastpath_completion exit_fastpath;
+ fastpath_t exit_fastpath;

bool req_immediate_exit = false;

diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 7b5ed8e..2f02dc0 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -274,7 +274,7 @@ bool kvm_mtrr_check_gfn_range_consistency(struct kvm_vcpu *vcpu, gfn_t gfn,
bool kvm_vector_hashing_enabled(void);
int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
int emulation_type, void *insn, int insn_len);
-enum exit_fastpath_completion handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu);
+fastpath_t handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu);

extern u64 host_xcr0;
extern u64 supported_xcr0;
--
2.7.4

2020-04-28 10:07:49

by Xinlong Lin

Subject: Re: [PATCH v4 6/7] KVM: X86: TSCDEADLINE MSR emulation fastpath

On Tuesday, 28 Apr 2020 at 14:23, Wanpeng Li <[email protected]> wrote:
>
> From: Wanpeng Li <[email protected]>
>
> This patch implements the tscdeadline MSR emulation fastpath: after a
> vmexit due to a wrmsr to the tscdeadline MSR, handle it as soon as
> possible and re-enter the guest immediately without checking various
> KVM state when possible.
>
> Tested-by: Haiwei Li <[email protected]>
> Cc: Haiwei Li <[email protected]>
> Signed-off-by: Wanpeng Li <[email protected]>
> ---
> arch/x86/kvm/lapic.c | 18 ++++++++++++------
> arch/x86/kvm/vmx/vmx.c | 12 ++++++++----
> arch/x86/kvm/x86.c | 30 ++++++++++++++++++++++++------
> 3 files changed, 44 insertions(+), 16 deletions(-)
>
> [...]
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index ce19b0e..bb5c4f1 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -5994,7 +5994,8 @@ static int vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
> if (exit_fastpath == EXIT_FASTPATH_SKIP_EMUL_INS) {
> kvm_skip_emulated_instruction(vcpu);

Can we move this kvm_skip_emulated_instruction to handle_fastpath_set_msr_irqoff? This will keep the style consistent.

> return 1;
> - }
> + } else if (exit_fastpath == EXIT_FASTPATH_NOP)
> + return 1;
>
> if (exit_reason >= kvm_vmx_max_exit_handlers)
> goto unexpected_vmexit;
> @@ -6605,6 +6606,7 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
> struct vcpu_vmx *vmx = to_vmx(vcpu);
> unsigned long cr3, cr4;
>
> +REENTER_GUEST:
> /* Record the guest's net vcpu time for enforced NMI injections. */
> if (unlikely(!enable_vnmi &&
> vmx->loaded_vmcs->soft_vnmi_blocked))
> @@ -6779,10 +6781,12 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
>
> exit_fastpath = vmx_exit_handlers_fastpath(vcpu);
> if (exit_fastpath == EXIT_FASTPATH_REENTER_GUEST) {
> - if (!kvm_vcpu_exit_request(vcpu))
> + if (!kvm_vcpu_exit_request(vcpu)) {
> vmx_sync_pir_to_irr(vcpu);
> - else
> - exit_fastpath = EXIT_FASTPATH_NOP;
> + /* static call is better with retpolines */
> + goto REENTER_GUEST;
> + }
> + exit_fastpath = EXIT_FASTPATH_NOP;
> }
>
> return exit_fastpath;
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index afe052c..f3a5fe4 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -1616,27 +1616,45 @@ static int handle_fastpath_set_x2apic_icr_irqoff(struct kvm_vcpu *vcpu, u64 data
> return 1;
> }
>
> +static int handle_fastpath_set_tscdeadline(struct kvm_vcpu *vcpu, u64 data)
> +{
> + if (!kvm_x86_ops.set_hv_timer ||
> + kvm_mwait_in_guest(vcpu->kvm) ||
> + kvm_can_post_timer_interrupt(vcpu))
> + return 1;
> +
> + kvm_set_lapic_tscdeadline_msr(vcpu, data);
> + return 0;
> +}
> +
> fastpath_t handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu)
> {
> u32 msr = kvm_rcx_read(vcpu);
> u64 data;
> - int ret = 0;
> + int ret = EXIT_FASTPATH_NONE;
>
> switch (msr) {
> case APIC_BASE_MSR + (APIC_ICR >> 4):
> data = kvm_read_edx_eax(vcpu);
> - ret = handle_fastpath_set_x2apic_icr_irqoff(vcpu, data);
> + if (!handle_fastpath_set_x2apic_icr_irqoff(vcpu, data))
> + ret = EXIT_FASTPATH_SKIP_EMUL_INS;
> + break;
> + case MSR_IA32_TSCDEADLINE:
> + data = kvm_read_edx_eax(vcpu);
> + if (!handle_fastpath_set_tscdeadline(vcpu, data))
> + ret = EXIT_FASTPATH_REENTER_GUEST;
> break;
> default:
> - return EXIT_FASTPATH_NONE;
> + ret = EXIT_FASTPATH_NONE;
> }
>
> - if (!ret) {
> + if (ret != EXIT_FASTPATH_NONE) {
> trace_kvm_msr_write(msr, data);
> - return EXIT_FASTPATH_SKIP_EMUL_INS;
> + if (ret == EXIT_FASTPATH_REENTER_GUEST)
> + kvm_skip_emulated_instruction(vcpu);
> }
>
> - return EXIT_FASTPATH_NONE;
> + return ret;
> }
> EXPORT_SYMBOL_GPL(handle_fastpath_set_msr_irqoff);
>
> --
> 2.7.4

2020-04-28 10:08:36

by Wanpeng Li

Subject: Re: [PATCH v4 6/7] KVM: X86: TSCDEADLINE MSR emulation fastpath

On Tue, 28 Apr 2020 at 17:59, Xinlong Lin <[email protected]> wrote:
>
> On Tuesday, 28 Apr 2020 at 14:23, Wanpeng Li <[email protected]> wrote:
> >
> > From: Wanpeng Li <[email protected]>
> >
> > This patch implements the tscdeadline MSR emulation fastpath: after a
> > vmexit due to a wrmsr to the tscdeadline MSR, handle it as soon as
> > possible and re-enter the guest immediately without checking various
> > KVM state when possible.
> >
> > [...]
> >
> > @@ -5994,7 +5994,8 @@ static int vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
> > if (exit_fastpath == EXIT_FASTPATH_SKIP_EMUL_INS) {
> > kvm_skip_emulated_instruction(vcpu);
>
> Can we move this kvm_skip_emulated_instruction to handle_fastpath_set_msr_irqoff? This will keep the style consistent.

It can have other users sooner or later.

Wanpeng

2020-04-30 13:31:13

by Vitaly Kuznetsov

Subject: Re: [PATCH v4 1/7] KVM: VMX: Introduce generic fastpath handler

Wanpeng Li <[email protected]> writes:

> From: Wanpeng Li <[email protected]>
>
> Introduce a generic fastpath handler to handle the MSR fastpath, the
> VMX-preemption timer fastpath, etc., and move it after
> vmx_complete_interrupts() so that a later patch can catch the case where
> a vmexit occurred while another event was being delivered to the guest.
> There is no observed performance difference in IPI fastpath testing
> after this move.
>
> Tested-by: Haiwei Li <[email protected]>
> Cc: Haiwei Li <[email protected]>
> Signed-off-by: Wanpeng Li <[email protected]>
> ---
> arch/x86/kvm/vmx/vmx.c | 21 ++++++++++++++++-----
> 1 file changed, 16 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 3ab6ca6..9b5adb4 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -6583,6 +6583,20 @@ void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp)
> }
> }
>
> +static enum exit_fastpath_completion vmx_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
> +{
> + if (!is_guest_mode(vcpu)) {

Nitpick: do we actually expect to have any fastpath handlers anytime
soon? If not, we could've written this as

if (is_guest_mode(vcpu))
return EXIT_FASTPATH_NONE;

and save on indentation.

> + switch (to_vmx(vcpu)->exit_reason) {
> + case EXIT_REASON_MSR_WRITE:
> + return handle_fastpath_set_msr_irqoff(vcpu);
> + default:
> + return EXIT_FASTPATH_NONE;
> + }
> + }
> +
> + return EXIT_FASTPATH_NONE;
> +}
> +
> bool __vmx_vcpu_run(struct vcpu_vmx *vmx, unsigned long *regs, bool launched);
>
> static enum exit_fastpath_completion vmx_vcpu_run(struct kvm_vcpu *vcpu)
> @@ -6757,17 +6771,14 @@ static enum exit_fastpath_completion vmx_vcpu_run(struct kvm_vcpu *vcpu)
> if (unlikely(vmx->exit_reason & VMX_EXIT_REASONS_FAILED_VMENTRY))
> return EXIT_FASTPATH_NONE;
>
> - if (!is_guest_mode(vcpu) && vmx->exit_reason == EXIT_REASON_MSR_WRITE)
> - exit_fastpath = handle_fastpath_set_msr_irqoff(vcpu);
> - else
> - exit_fastpath = EXIT_FASTPATH_NONE;
> -
> vmx->loaded_vmcs->launched = 1;
> vmx->idt_vectoring_info = vmcs_read32(IDT_VECTORING_INFO_FIELD);
>
> vmx_recover_nmi_blocking(vmx);
> vmx_complete_interrupts(vmx);
>
> + exit_fastpath = vmx_exit_handlers_fastpath(vcpu);
> +
> return exit_fastpath;
> }

Reviewed-by: Vitaly Kuznetsov <[email protected]>

--
Vitaly

2020-04-30 13:34:44

by Paolo Bonzini

Subject: Re: [PATCH v4 5/7] KVM: VMX: Optimize posted-interrupt delivery for timer fastpath

On 28/04/20 08:23, Wanpeng Li wrote:
> - if (exit_fastpath == EXIT_FASTPATH_REENTER_GUEST &&
> - kvm_vcpu_exit_request(vcpu))
> - exit_fastpath = EXIT_FASTPATH_NOP;
> + if (exit_fastpath == EXIT_FASTPATH_REENTER_GUEST) {
> + if (!kvm_vcpu_exit_request(vcpu))
> + vmx_sync_pir_to_irr(vcpu);
> + else
> + exit_fastpath = EXIT_FASTPATH_NOP;
> + }

This part should be in patch 3; not a big deal, I can reorganize that
myself.

Paolo

2020-04-30 13:36:36

by Vitaly Kuznetsov

Subject: Re: [PATCH v4 2/7] KVM: X86: Enable fastpath when APICv is enabled

Wanpeng Li <[email protected]> writes:

> From: Wanpeng Li <[email protected]>
>
> We can't observe any benefit from the single-target IPI fastpath when
> APICv is disabled, so let's enable the IPI and timer fastpaths only
> when APICv is enabled for now.
>
> Tested-by: Haiwei Li <[email protected]>
> Cc: Haiwei Li <[email protected]>
> Signed-off-by: Wanpeng Li <[email protected]>
> ---
> arch/x86/kvm/svm/svm.c | 2 +-
> arch/x86/kvm/vmx/vmx.c | 2 +-
> 2 files changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 8f8fc65..1e7220e 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -3344,7 +3344,7 @@ static void svm_cancel_injection(struct kvm_vcpu *vcpu)
>
> static enum exit_fastpath_completion svm_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
> {
> - if (!is_guest_mode(vcpu) &&
> + if (!is_guest_mode(vcpu) && vcpu->arch.apicv_active &&
> to_svm(vcpu)->vmcb->control.exit_code == SVM_EXIT_MSR &&
> to_svm(vcpu)->vmcb->control.exit_info_1)
> return handle_fastpath_set_msr_irqoff(vcpu);
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 9b5adb4..f207004 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -6585,7 +6585,7 @@ void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp)
>
> static enum exit_fastpath_completion vmx_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
> {
> - if (!is_guest_mode(vcpu)) {
> + if (!is_guest_mode(vcpu) && vcpu->arch.apicv_active) {
> switch (to_vmx(vcpu)->exit_reason) {
> case EXIT_REASON_MSR_WRITE:
> return handle_fastpath_set_msr_irqoff(vcpu);

I think the apicv_active checks are specific to APIC MSRs, but
handle_fastpath_set_msr_irqoff() can handle any other MSR as well. I'd
suggest moving the check inside handle_fastpath_set_msr_irqoff().
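
Something like this, perhaps (completely untested sketch, and
is_apic_fastpath_msr() is a made-up helper name just to illustrate):

	fastpath_t handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu)
	{
		u32 msr = kvm_rcx_read(vcpu);

		/* Only the APIC MSR fastpaths depend on APICv. */
		if (is_apic_fastpath_msr(msr) && !vcpu->arch.apicv_active)
			return EXIT_FASTPATH_NONE;

		/* ... the rest of the function unchanged ... */
	}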

Also, enabling Hyper-V SynIC leads to disabling apicv. Is it still
pointless to keep the fastpath enabled?

--
Vitaly

2020-04-30 23:46:36

by Wanpeng Li

Subject: Re: [PATCH v4 5/7] KVM: VMX: Optimize posted-interrupt delivery for timer fastpath

On Thu, 30 Apr 2020 at 21:32, Paolo Bonzini <[email protected]> wrote:
>
> On 28/04/20 08:23, Wanpeng Li wrote:
> > - if (exit_fastpath == EXIT_FASTPATH_REENTER_GUEST &&
> > - kvm_vcpu_exit_request(vcpu))
> > - exit_fastpath = EXIT_FASTPATH_NOP;
> > + if (exit_fastpath == EXIT_FASTPATH_REENTER_GUEST) {
> > + if (!kvm_vcpu_exit_request(vcpu))
> > + vmx_sync_pir_to_irr(vcpu);
> > + else
> > + exit_fastpath = EXIT_FASTPATH_NOP;
> > + }
>
> This part should be in patch 3; not a big deal, I can reorganize that
> myself.

Great, thanks.

Wanpeng

2020-05-01 14:17:07

by Sean Christopherson

Subject: Re: [PATCH v4 1/7] KVM: VMX: Introduce generic fastpath handler

On Thu, Apr 30, 2020 at 03:28:46PM +0200, Vitaly Kuznetsov wrote:
> Wanpeng Li <[email protected]> writes:
>
> > From: Wanpeng Li <[email protected]>
> >
> > Introduce a generic fastpath handler to handle the MSR fastpath, the
> > VMX-preemption timer fastpath, etc., and move it after
> > vmx_complete_interrupts() so that a later patch can catch the case
> > where a vmexit occurred while another event was being delivered to
> > the guest. There is no observed performance difference in IPI
> > fastpath testing after this move.
> >
> > Tested-by: Haiwei Li <[email protected]>
> > Cc: Haiwei Li <[email protected]>
> > Signed-off-by: Wanpeng Li <[email protected]>
> > ---
> > arch/x86/kvm/vmx/vmx.c | 21 ++++++++++++++++-----
> > 1 file changed, 16 insertions(+), 5 deletions(-)
> >
> > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> > index 3ab6ca6..9b5adb4 100644
> > --- a/arch/x86/kvm/vmx/vmx.c
> > +++ b/arch/x86/kvm/vmx/vmx.c
> > @@ -6583,6 +6583,20 @@ void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp)
> > }
> > }
> >
> > +static enum exit_fastpath_completion vmx_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
> > +{
> > + if (!is_guest_mode(vcpu)) {
>
> Nitpick: do we actually expect to have any fastpath handlers anytime
> soon? If not, we could've written this as
>
> if (is_guest_mode(vcpu))
> return EXIT_FASTPATH_NONE;
>
> and save on indentation.

Agreed. An alternative approach would be to do the check in the caller, e.g.

if (is_guest_mode(vcpu))
return EXIT_FASTPATH_NONE;

return vmx_exit_handlers_fastpath(vcpu);

I don't have a strong preference either way.

> > + switch (to_vmx(vcpu)->exit_reason) {
> > + case EXIT_REASON_MSR_WRITE:
> > + return handle_fastpath_set_msr_irqoff(vcpu);
> > + default:
> > + return EXIT_FASTPATH_NONE;
> > + }
> > + }
> > +
> > + return EXIT_FASTPATH_NONE;
> > +}
> > +
> > bool __vmx_vcpu_run(struct vcpu_vmx *vmx, unsigned long *regs, bool launched);
> >
> > static enum exit_fastpath_completion vmx_vcpu_run(struct kvm_vcpu *vcpu)
> > @@ -6757,17 +6771,14 @@ static enum exit_fastpath_completion vmx_vcpu_run(struct kvm_vcpu *vcpu)
> > if (unlikely(vmx->exit_reason & VMX_EXIT_REASONS_FAILED_VMENTRY))
> > return EXIT_FASTPATH_NONE;
> >
> > - if (!is_guest_mode(vcpu) && vmx->exit_reason == EXIT_REASON_MSR_WRITE)
> > - exit_fastpath = handle_fastpath_set_msr_irqoff(vcpu);
> > - else
> > - exit_fastpath = EXIT_FASTPATH_NONE;
> > -
> > vmx->loaded_vmcs->launched = 1;
> > vmx->idt_vectoring_info = vmcs_read32(IDT_VECTORING_INFO_FIELD);
> >
> > vmx_recover_nmi_blocking(vmx);
> > vmx_complete_interrupts(vmx);
> >
> > + exit_fastpath = vmx_exit_handlers_fastpath(vcpu);

No need for capturing the result in a local variable, just return the function
call.

> > +
> > return exit_fastpath;
> > }
>
> Reviewed-by: Vitaly Kuznetsov <[email protected]>
>
> --
> Vitaly
>

2020-05-01 22:54:22

by Wanpeng Li

Subject: Re: [PATCH v4 1/7] KVM: VMX: Introduce generic fastpath handler

On Fri, 1 May 2020 at 22:12, Sean Christopherson
<[email protected]> wrote:
>
> On Thu, Apr 30, 2020 at 03:28:46PM +0200, Vitaly Kuznetsov wrote:
> > Wanpeng Li <[email protected]> writes:
> >
> > > From: Wanpeng Li <[email protected]>
> > >
> > > Introduce a generic fastpath handler to handle the MSR fastpath,
> > > the VMX-preemption timer fastpath, etc., and move it after
> > > vmx_complete_interrupts() so that a later patch can catch the case
> > > where a vmexit occurred while another event was being delivered to
> > > the guest. There is no observed performance difference in IPI
> > > fastpath testing after this move.
> > >
> > > Tested-by: Haiwei Li <[email protected]>
> > > Cc: Haiwei Li <[email protected]>
> > > Signed-off-by: Wanpeng Li <[email protected]>
> > > ---
> > > arch/x86/kvm/vmx/vmx.c | 21 ++++++++++++++++-----
> > > 1 file changed, 16 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> > > index 3ab6ca6..9b5adb4 100644
> > > --- a/arch/x86/kvm/vmx/vmx.c
> > > +++ b/arch/x86/kvm/vmx/vmx.c
> > > @@ -6583,6 +6583,20 @@ void vmx_update_host_rsp(struct vcpu_vmx *vmx, unsigned long host_rsp)
> > > }
> > > }
> > >
> > > +static enum exit_fastpath_completion vmx_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
> > > +{
> > > + if (!is_guest_mode(vcpu)) {
> >
> > Nitpick: do we actually expect to have any fastpath handlers anytime
> > soon? If not, we could've written this as
> >
> > if (is_guest_mode(vcpu))
> > return EXIT_FASTPATH_NONE;
> >
> > and save on indentation.
>
> Agreed. An alternative approach would be to do the check in the caller, e.g.
>
> if (is_guest_mode(vcpu))
> return EXIT_FASTPATH_NONE;
>
> return vmx_exit_handlers_fastpath(vcpu);
>
> I don't have a strong preference either way.
>
> > > + switch (to_vmx(vcpu)->exit_reason) {
> > > + case EXIT_REASON_MSR_WRITE:
> > > + return handle_fastpath_set_msr_irqoff(vcpu);
> > > + default:
> > > + return EXIT_FASTPATH_NONE;
> > > + }
> > > + }
> > > +
> > > + return EXIT_FASTPATH_NONE;
> > > +}
> > > +
> > > bool __vmx_vcpu_run(struct vcpu_vmx *vmx, unsigned long *regs, bool launched);
> > >
> > > static enum exit_fastpath_completion vmx_vcpu_run(struct kvm_vcpu *vcpu)
> > > @@ -6757,17 +6771,14 @@ static enum exit_fastpath_completion vmx_vcpu_run(struct kvm_vcpu *vcpu)
> > > if (unlikely(vmx->exit_reason & VMX_EXIT_REASONS_FAILED_VMENTRY))
> > > return EXIT_FASTPATH_NONE;
> > >
> > > - if (!is_guest_mode(vcpu) && vmx->exit_reason == EXIT_REASON_MSR_WRITE)
> > > - exit_fastpath = handle_fastpath_set_msr_irqoff(vcpu);
> > > - else
> > > - exit_fastpath = EXIT_FASTPATH_NONE;
> > > -
> > > vmx->loaded_vmcs->launched = 1;
> > > vmx->idt_vectoring_info = vmcs_read32(IDT_VECTORING_INFO_FIELD);
> > >
> > > vmx_recover_nmi_blocking(vmx);
> > > vmx_complete_interrupts(vmx);
> > >
> > > + exit_fastpath = vmx_exit_handlers_fastpath(vcpu);
>
> No need for capturing the result in a local variable, just return the function
> call.

As you know, later patches need to handle the local variable; even
though we could make 1/7 nicer, it would just be overridden.

Wanpeng

2020-05-04 17:09:20

by Paolo Bonzini

Subject: Re: [PATCH v4 2/7] KVM: X86: Enable fastpath when APICv is enabled

On 30/04/20 15:34, Vitaly Kuznetsov wrote:
>> static enum exit_fastpath_completion vmx_exit_handlers_fastpath(struct kvm_vcpu *vcpu)
>> {
>> - if (!is_guest_mode(vcpu)) {
>> + if (!is_guest_mode(vcpu) && vcpu->arch.apicv_active) {
>> switch (to_vmx(vcpu)->exit_reason) {
>> case EXIT_REASON_MSR_WRITE:
>> return handle_fastpath_set_msr_irqoff(vcpu);
> I think the apicv_active checks are specific to APIC MSRs, but
> handle_fastpath_set_msr_irqoff() can handle any other MSR as well. I'd
> suggest moving the check inside handle_fastpath_set_msr_irqoff().
>
> Also, enabling Hyper-V SynIC leads to disabling apicv. Is it still
> pointless to keep the fastpath enabled?

Indeed, only fast paths that only apply to apicv should be disabled (and
ideally there should be a WARN_ON in the code that doesn't support !apicv).
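
E.g. something along these lines in the handlers that rely on APICv
(untested sketch):

	/* This fastpath should be unreachable with APICv disabled. */
	WARN_ON_ONCE(!vcpu->arch.apicv_active);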

Paolo

2020-05-04 17:24:33

by Paolo Bonzini

Subject: Re: [PATCH v4 0/7] KVM: VMX: Tscdeadline timer emulation fastpath

On 28/04/20 08:23, Wanpeng Li wrote:
> In our cloud environment, we observe that IPIs and the timer cause the
> main vmexits. Following the single-target IPI fastpath, let's optimize
> tscdeadline timer latency by introducing a tscdeadline timer emulation
> fastpath that skips various KVM-related checks when possible, i.e.
> after a vmexit due to tscdeadline timer emulation, handle it and
> re-enter the guest immediately without going through the full KVM run
> loop when possible.
>
> Tested on an SKX server.
>
> cyclictest in guest (w/o mwait exposed, adaptive lapic timer advance at the default -1):
>
> 5540.5ns -> 4602ns 17%
>
> kvm-unit-test/vmexit.flat:
>
> w/o advance timer:
> tscdeadline_immed: 3028.5 -> 2494.75 17.6%
> tscdeadline: 5765.7 -> 5285 8.3%
>
> w/ adaptive advance timer default -1:
> tscdeadline_immed: 3123.75 -> 2583 17.3%
> tscdeadline: 4663.75 -> 4537 2.7%
>
> Tested-by: Haiwei Li <[email protected]>
> Cc: Haiwei Li <[email protected]>
>
> v3 -> v4:
> * fix bad indentation
> * rename CONT_RUN to REENTER_GUEST
> * rename kvm_need_cancel_enter_guest to kvm_vcpu_exit_request
> * rename EXIT_FASTPATH_CONT_RUN to EXIT_FASTPATH_REENTER_GUEST
> * introduce EXIT_FASTPATH_NOP
> * don't squash several things into one patch
> * introduce REENTER_GUEST together with its first usage
> * introduce __handle_preemption_timer subfunction
>
> v2 -> v3:
> * skip the interrupt notification and use vmx_sync_pir_to_irr before each cont_run
> * add a from_timer_fn argument to apic_timer_expired
> * remove all kinds of duplicated code
>
> v1 -> v2:
> * move more stuff from vmx.c to lapic.c
> * remove redundant checking
> * check more conditions before bailing out of CONT_RUN
> * don't break AMD
> * don't special-case LVTT
> * clean up code
>
> Wanpeng Li (7):
> KVM: VMX: Introduce generic fastpath handler
> KVM: X86: Enable fastpath when APICv is enabled
> KVM: X86: Introduce more exit_fastpath_completion enum values
> KVM: X86: Introduce kvm_vcpu_exit_request() helper
> KVM: VMX: Optimize posted-interrupt delivery for timer fastpath
> KVM: X86: TSCDEADLINE MSR emulation fastpath
> KVM: VMX: Handle preemption timer fastpath
>
> arch/x86/include/asm/kvm_host.h | 3 ++
> arch/x86/kvm/lapic.c | 18 +++++++----
> arch/x86/kvm/svm/svm.c | 11 ++++---
> arch/x86/kvm/vmx/vmx.c | 66 +++++++++++++++++++++++++++++++++--------
> arch/x86/kvm/x86.c | 44 ++++++++++++++++++++-------
> arch/x86/kvm/x86.h | 3 +-
> virt/kvm/kvm_main.c | 1 +
> 7 files changed, 110 insertions(+), 36 deletions(-)
>

Queued all except patch 2, pending testing (and understanding the
rationale behind it). I will post my version of patch 3 separately.

Thanks,

Paolo