Message-ID: <54D67685.5050205@redhat.com>
Date: Sat, 07 Feb 2015 21:33:09 +0100
From: Paolo Bonzini
To: Marcelo Tosatti
CC: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, riel@redhat.com,
    rkrcmar@redhat.com, dmatlack@google.com
Subject: Re: [PATCH 2/2] KVM: x86: optimize delivery of TSC deadline timer interrupt
References: <1423225019-11001-1-git-send-email-pbonzini@redhat.com>
            <1423225019-11001-3-git-send-email-pbonzini@redhat.com>
            <20150206205137.GA27561@amt.cnet>
In-Reply-To: <20150206205137.GA27561@amt.cnet>

On 06/02/2015 21:51, Marcelo Tosatti wrote:
> On Fri, Feb 06, 2015 at 01:16:59PM +0100, Paolo Bonzini wrote:
>> The newly-added tracepoint shows the following results on
>> the tscdeadline_latency test:
>>
>>     qemu-kvm-8387  [002]  6425.558974: kvm_vcpu_wakeup: poll time 10407 ns
>>     qemu-kvm-8387  [002]  6425.558984: kvm_vcpu_wakeup: poll time 0 ns
>>     qemu-kvm-8387  [002]  6425.561242: kvm_vcpu_wakeup: poll time 10477 ns
>>     qemu-kvm-8387  [002]  6425.561251: kvm_vcpu_wakeup: poll time 0 ns
>>
>> and so on.  This is because we need to go through kvm_vcpu_block again
>> after the timer IRQ is injected.  Avoid it by polling once before
>> entering kvm_vcpu_block.
>>
>> On my machine (Xeon E5 Sandy Bridge) this removes about 500 cycles (7%)
>> from the latency of the TSC deadline timer.
>>
>> Signed-off-by: Paolo Bonzini
>> ---
>>  arch/x86/kvm/x86.c | 14 +++++++++-----
>>  1 file changed, 9 insertions(+), 5 deletions(-)
>>
>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>> index 0b8dd13676ef..1e766033ebff 100644
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -6389,11 +6389,15 @@ static inline int __vcpu_run(struct kvm *kvm, struct kvm_vcpu *vcpu)
>>  	    !vcpu->arch.apf.halted)
>>  		return vcpu_enter_guest(vcpu);
>>
>> -	srcu_read_unlock(&kvm->srcu, vcpu->srcu_idx);
>> -	kvm_vcpu_block(vcpu);
>> -	vcpu->srcu_idx = srcu_read_lock(&kvm->srcu);
>> -	if (!kvm_check_request(KVM_REQ_UNHALT, vcpu))
>> -		return 1;
>> +	if (kvm_arch_vcpu_runnable(vcpu))
>> +		clear_bit(KVM_REQ_UNHALT, &vcpu->requests);
>> +	else {
>
> Why the clear_bit? Since only kvm_vcpu_block in the below section
> sets it, and that section clears it as well.

You're right.

> Can remove another 300 cycles from do_div when programming LAPIC
> tscdeadline timer.

Do you mean using something like lib/reciprocal_div.c?  Good idea,
though that's not latency, it's just being slow. :)

Paolo
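
[Editor's note: for readers unfamiliar with lib/reciprocal_div.c, the idea
being discussed is to replace a runtime division by a constant divisor with
a multiply by a precomputed reciprocal plus a shift, which is far cheaper
than do_div.  Below is a minimal, self-contained sketch of that technique;
the function names are invented for illustration and this is not the kernel
API.  Unlike the kernel library, this naive round-up variant can be off by
one for dividends near 2^32, so treat it only as a demonstration of the
cost trade-off.]

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Precompute roughly ceil(2^32 / d) once, when the divisor becomes
     * known.  Valid for d > 1. */
    static inline uint32_t recip_value(uint32_t d)
    {
            return (uint32_t)(((1ULL << 32) + d - 1) / d);
    }

    /* Each subsequent "division" is a 64-bit multiply and a shift,
     * with no hardware divide on the hot path. */
    static inline uint32_t recip_divide(uint32_t a, uint32_t r)
    {
            return (uint32_t)(((uint64_t)a * r) >> 32);
    }

    int main(void)
    {
            uint32_t d = 1000;           /* illustrative fixed divisor */
            uint32_t r = recip_value(d); /* computed once, up front */
            uint32_t a = 123456789;

            assert(recip_divide(a, r) == a / d);
            printf("%u / %u = %u\n", a, d, recip_divide(a, r));
            return 0;
    }

[The appeal in the tscdeadline path, presumably, is that the divisor there
is effectively constant per vCPU, so the reciprocal could be computed once
at setup and reused every time the timer is armed.]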