Date: Mon, 13 May 2019 12:54:17 -0700
From: Sean Christopherson
To: Wanpeng Li
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Paolo Bonzini,
	Radim Krčmář, Liran Alon
Subject: Re: [PATCH 3/3] KVM: LAPIC: Optimize timer latency further
Message-ID: <20190513195417.GM28561@linux.intel.com>
References: <1557401361-3828-1-git-send-email-wanpengli@tencent.com>
 <1557401361-3828-4-git-send-email-wanpengli@tencent.com>
In-Reply-To: <1557401361-3828-4-git-send-email-wanpengli@tencent.com>

On Thu, May 09, 2019 at 07:29:21PM +0800, Wanpeng Li wrote:
> From: Wanpeng Li
> 
> The advance lapic timer logic tries to hide the hypervisor overhead between
> the host timer firing and the guest observing that the timer has fired.
> However, it only hides the time between apic_timer_fn/handle_preemption_timer
> -> wait_lapic_expire, not the delay up to the real VM-entry point referred to
> in the original commit d0659d946be0 ("KVM: x86: add option to advance
> tscdeadline hrtimer expiration"). There are 700+ CPU cycles between the end
> of wait_lapic_expire and the world switch on my Haswell desktop, and it grows
> to 2400+ cycles if vmentry_l1d_flush is set to always.
> 
> This patch tries to narrow that last gap. It measures the time between the
> end of wait_lapic_expire and the world switch and takes that time into
> account when busy waiting, otherwise the guest still observes the latency
> between wait_lapic_expire and the world switch; the delay is also taken into
> account when adaptively tuning the timer advancement. The patch can reduce
> the latency by ~50% (from ~1600+ cycles to ~800+ cycles on a Haswell desktop)
> for kvm-unit-tests/tscdeadline_latency when testing busy waits.
> 
> Cc: Paolo Bonzini
> Cc: Radim Krčmář
> Cc: Sean Christopherson
> Cc: Liran Alon
> Signed-off-by: Wanpeng Li
> ---
>  arch/x86/kvm/lapic.c   | 23 +++++++++++++++++++++--
>  arch/x86/kvm/lapic.h   |  8 ++++++++
>  arch/x86/kvm/vmx/vmx.c |  2 ++
>  3 files changed, 31 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
> index e7a0660..01d3a87 100644
> --- a/arch/x86/kvm/lapic.c
> +++ b/arch/x86/kvm/lapic.c
> @@ -1545,13 +1545,19 @@ void wait_lapic_expire(struct kvm_vcpu *vcpu)
> 
>  	tsc_deadline = apic->lapic_timer.expired_tscdeadline;
>  	apic->lapic_timer.expired_tscdeadline = 0;
> -	guest_tsc = kvm_read_l1_tsc(vcpu, rdtsc());
> +	guest_tsc = kvm_read_l1_tsc(vcpu, (apic->lapic_timer.measure_delay_done == 2) ?
> +		rdtsc() + apic->lapic_timer.vmentry_delay : rdtsc());
>  	trace_kvm_wait_lapic_expire(vcpu->vcpu_id, guest_tsc - tsc_deadline);
> 
>  	if (guest_tsc < tsc_deadline)
>  		__wait_lapic_expire(vcpu, tsc_deadline - guest_tsc);
> 
>  	adaptive_tune_timer_advancement(vcpu, guest_tsc, tsc_deadline);
> +
> +	if (!apic->lapic_timer.measure_delay_done) {
> +		apic->lapic_timer.measure_delay_done = 1;
> +		apic->lapic_timer.vmentry_delay = rdtsc();
> +	}
>  }
> 
>  static void start_sw_tscdeadline(struct kvm_lapic *apic)
> @@ -1837,6 +1843,18 @@ static void apic_manage_nmi_watchdog(struct kvm_lapic *apic, u32 lvt0_val)
>  	}
>  }
> 
> +void kvm_lapic_measure_vmentry_delay(struct kvm_vcpu *vcpu)
> +{
> +	struct kvm_timer *ktimer = &vcpu->arch.apic->lapic_timer;

This will fault (NULL pointer dereference) if the APIC is not in-kernel,
i.e. if vcpu->arch.apic is NULL.

> +
> +	if (ktimer->measure_delay_done == 1) {
> +		ktimer->vmentry_delay = rdtsc() -
> +			ktimer->vmentry_delay;
> +		ktimer->measure_delay_done = 2;

Measuring the delay a single time is bound to result in random outliers,
e.g. if an NMI happens to occur after wait_lapic_expire().

Rather than reinvent the wheel, can we simply move the call to
wait_lapic_expire() into vmx.c and svm.c?
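E.g. for VMX, something along these lines (completely untested, just a sketch
of where the call would land; the svm.c side would be analogous, and the
existing call in vcpu_enter_guest() would be dropped):

	if (vcpu->arch.cr2 != read_cr2())
		write_cr2(vcpu->arch.cr2);

	/* Busy wait for the timer deadline as close to VM-Enter as we can get. */
	wait_lapic_expire(vcpu);

	vmx->fail = __vmx_vcpu_run(vmx, (unsigned long *)&vcpu->arch.regs,
				   vmx->loaded_vmcs->launched);

That puts the busy wait as close to VM-Enter as the C code can get, so there
is (almost) no residual delay left to calibrate in the first place.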
For VMX we'd probably want to support the advancement only when
enable_unrestricted_guest=true so that we avoid the emulation_required case,
but other than that I don't see anything that requires wait_lapic_expire()
to be called where it currently is.

> +	}
> +}
> +EXPORT_SYMBOL_GPL(kvm_lapic_measure_vmentry_delay);
> +
>  int kvm_lapic_reg_write(struct kvm_lapic *apic, u32 reg, u32 val)
>  {
>  	int ret = 0;
> @@ -2318,7 +2336,8 @@ int kvm_create_lapic(struct kvm_vcpu *vcpu, int timer_advance_ns)
>  		apic->lapic_timer.timer_advance_ns = timer_advance_ns;
>  		apic->lapic_timer.timer_advance_adjust_done = true;
>  	}
> -
> +	apic->lapic_timer.vmentry_delay = 0;
> +	apic->lapic_timer.measure_delay_done = 0;
> 
>  	/*
>  	 * APIC is created enabled. This will prevent kvm_lapic_set_base from
> diff --git a/arch/x86/kvm/lapic.h b/arch/x86/kvm/lapic.h
> index d6d049b..f1d037b 100644
> --- a/arch/x86/kvm/lapic.h
> +++ b/arch/x86/kvm/lapic.h
> @@ -35,6 +35,13 @@ struct kvm_timer {
>  	atomic_t pending;	/* accumulated triggered timers */
>  	bool hv_timer_in_use;
>  	bool timer_advance_adjust_done;
> +	/*
> +	 * vmentry_delay measurement state:
> +	 * 0 - measurement not started
> +	 * 1 - start TSC recorded
> +	 * 2 - delta computed
> +	 */
> +	u32 measure_delay_done;
> +	u64 vmentry_delay;
>  };
> 
>  struct kvm_lapic {
> @@ -230,6 +237,7 @@ void kvm_lapic_switch_to_hv_timer(struct kvm_vcpu *vcpu);
>  void kvm_lapic_expired_hv_timer(struct kvm_vcpu *vcpu);
>  bool kvm_lapic_hv_timer_in_use(struct kvm_vcpu *vcpu);
>  void kvm_lapic_restart_hv_timer(struct kvm_vcpu *vcpu);
> +void kvm_lapic_measure_vmentry_delay(struct kvm_vcpu *vcpu);
> 
>  static inline enum lapic_mode kvm_apic_mode(u64 apic_base)
>  {
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 9663d41..a939bf5 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -6437,6 +6437,8 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
>  	if (vcpu->arch.cr2 != read_cr2())
>  		write_cr2(vcpu->arch.cr2);
> 
> +	kvm_lapic_measure_vmentry_delay(vcpu);

This should be wrapped in an unlikely of some form given that it happens
literally once out of thousands/millions of runs (rough sketch at the bottom
of this mail).

> +
>  	vmx->fail = __vmx_vcpu_run(vmx, (unsigned long *)&vcpu->arch.regs,
>  				   vmx->loaded_vmcs->launched);
> 
> --
> 2.7.4
> 
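To be concrete about the unlikely() comment above, I'm thinking something
along the lines of the below (again completely untested, using the existing
lapic_in_kernel() helper):

	if (unlikely(lapic_in_kernel(vcpu) &&
		     vcpu->arch.apic->lapic_timer.measure_delay_done == 1))
		kvm_lapic_measure_vmentry_delay(vcpu);

which also avoids the NULL apic dereference noted earlier.  Alternatively,
the unlikely()/NULL checks could live in an inline wrapper in lapic.h so the
call site in vmx_vcpu_run() stays a one-liner.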