Date: Thu, 5 Feb 2015 18:53:44 +0100
From: Radim Krčmář
To: Paolo Bonzini
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, riel@redhat.com,
	mtosatti@redhat.com
Subject: Re: [PATCH RFC] kvm: x86: add halt_poll module parameter
Message-ID: <20150205175344.GB14367@potion.redhat.com>
In-Reply-To: <1423152325-5094-1-git-send-email-pbonzini@redhat.com>
References: <1423152325-5094-1-git-send-email-pbonzini@redhat.com>

2015-02-05 17:05+0100, Paolo Bonzini:
> This patch introduces a new module parameter for the KVM module; when it
> is present, KVM attempts a bit of polling on every HLT before scheduling
> itself out via kvm_vcpu_block.
>
> This parameter helps a lot for latency-bound workloads---in particular
> I tested it with O_DSYNC writes with a battery-backed disk in the host.
> In this case, writes are fast (because the data doesn't have to go all
> the way to the platters) but they cannot be merged by either the host or
> the guest.  KVM's performance here is usually around 30% of bare metal,
> or 50% if you use cache=directsync or cache=writethrough (these
> parameters keep the guest from sending pointless flush requests, and
> at the same time they are not slow thanks to the battery-backed cache).
> The bad performance happens because on every halt the host CPU decides
> to halt itself too.
> When the interrupt comes, the vCPU thread is then
> migrated to a new physical CPU,

Unrelated: are the drawbacks of migrating vs. waking up evaluated
correctly?

> and in general the latency is horrible
> because the vCPU thread has to be scheduled back in.
>
> With this patch performance reaches 60-65% of bare metal and, more
> importantly, 99% of what you get if you use idle=poll in the guest.

(Hm, I would have thought that this could outperform idle=poll ...)

> This means that the tunable gets rid of this particular bottleneck, and
> more work can be done to improve performance in the kernel or QEMU.
>
> Of course there is some price to pay; every time an otherwise idle vCPU
> is interrupted by an interrupt, it will poll unnecessarily and thus
> impose a little load on the host.  The above results were obtained with
> a mostly random value of the parameter (2000000)

(I guess that translated to about 0.66 ms.)

> and the load was around 1.5-2.5% CPU usage on one of the host's cores
> for each idle guest vCPU.

(10 exits_per_second / 4 VCPUs * 0.0066 second_per_exit = 1.65% load.)

> The patch also adds a new stat, /sys/kernel/debug/kvm/halt_successful_poll,
> that can be used to tune the parameter.  It counts how many HLT
> instructions received an interrupt during the polling period; each
> successful poll avoids Linux scheduling the VCPU thread out and back in,
> and may also avoid a likely trip to C1 and back for the physical CPU.
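(Tangent: to tune it, I would probably just watch the new counter next to
the existing halt_exits stat from userspace.  A throwaway sketch follows;
it assumes debugfs is mounted at /sys/kernel/debug, and the tsc_khz and
halt_poll values are hard-coded guesses, so treat it as an illustration
only:)

/*
 * Sample halt_exits and halt_successful_poll one second apart and
 * estimate the worst-case time burned in failed polls.  Run as root.
 */
#include <stdio.h>
#include <unistd.h>

#define HALT_POLL_CYCLES 2000000ULL	/* whatever the module was loaded with */
#define TSC_KHZ          3000000ULL	/* guess; check the "tsc:" lines in dmesg */

static unsigned long long read_stat(const char *name)
{
	char path[128];
	unsigned long long val = 0;
	FILE *f;

	snprintf(path, sizeof(path), "/sys/kernel/debug/kvm/%s", name);
	f = fopen(path, "r");
	if (f) {
		if (fscanf(f, "%llu", &val) != 1)
			val = 0;
		fclose(f);
	}
	return val;
}

int main(void)
{
	unsigned long long halts, polls, failed;

	halts = read_stat("halt_exits");
	polls = read_stat("halt_successful_poll");
	sleep(1);
	halts = read_stat("halt_exits") - halts;
	polls = read_stat("halt_successful_poll") - polls;

	/* every failed poll burns up to halt_poll TSC cycles */
	failed = halts - polls;
	printf("failed polls/s: %llu, worst-case wasted: %.2f ms/s\n",
	       failed, failed * (double)HALT_POLL_CYCLES / TSC_KHZ);
	return 0;
}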
> While the VM is idle, a Linux 4 VCPU VM halts around 10 times per second.

(Looks like tickless.)

> Of these halts, almost all are failed polls.  During the benchmark,
> instead, basically all halts end within the polling period, except a more
> or less constant stream of 50 per second coming from vCPUs that are not
> running the benchmark.  The wasted time is thus very low.  Things may
> be slightly different for Windows VMs, which have a ~10 ms timer tick.

(Windows userspace can force the timer tick down to 0.5 ms :)

> The effect is also visible on Marcelo's recently-introduced latency
> test for the TSC deadline timer.  Though of course a non-RT kernel has
> awful latency bounds, the latency of the timer is around 8000-10000 clock
> cycles compared to 20000-120000 without setting halt_poll.  For the TSC
> deadline timer, thus, the effect is both a smaller average latency and
> a smaller variance.
>
> Signed-off-by: Paolo Bonzini
> ---

It is going to be hard to balance performance against wasted idle time,
so it's good to have it disabled by default.

(I guess the value is going to be overestimated when used.)

Reviewed-by: Radim Krčmář

> +++ b/arch/x86/kvm/x86.c
> @@ -5819,13 +5823,29 @@ void kvm_arch_exit(void)
>  int kvm_emulate_halt(struct kvm_vcpu *vcpu)
> +	if (halt_poll) {
> +		u64 start, curr;
> +		rdtscll(start);

(I agree that all x86 with VMM have TSC.)

> +		do {
> +			/*
> +			 * This sets KVM_REQ_UNHALT if an interrupt
> +			 * arrives.
> +			 */
> +			if (kvm_vcpu_check_block(vcpu) < 0) {
> +				++vcpu->stat.halt_successful_poll;
> +				break;
> +			}
> +			rdtscll(curr);
> +		} while(!need_resched() && curr - start < halt_poll);

(We can get preempted and possibly rescheduled to another CPU.  With an
unstable TSC, this could then loop longer than requested -- likely not by
much, thanks to the unsigned subtraction -- ok, rare cases are ignored.)

> +++ b/virt/kvm/kvm_main.c
> @@ -1813,6 +1813,20 @@ void mark_page_dirty(struct kvm *kvm, gfn_t gfn)
>  }
>  EXPORT_SYMBOL_GPL(mark_page_dirty);
>
> +int kvm_vcpu_check_block(struct kvm_vcpu *vcpu)

(I think the returned 'int' will be wasted on us, so it could be a
'bool', which would allow a better name ...)

> +{
> +	if (kvm_arch_vcpu_runnable(vcpu)) {
> +		kvm_make_request(KVM_REQ_UNHALT, vcpu);
> +		return -EINTR;
> +	}
> +	if (kvm_cpu_has_pending_timer(vcpu))
> +		return -EINTR;
> +	if (signal_pending(current))
> +		return -EINTR;
> +
> +	return 0;
> +}
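(To be concrete, something along these lines is what I had in mind -- the
name is only a strawman:)

static bool kvm_vcpu_has_pending_wakeup(struct kvm_vcpu *vcpu)
{
	/*
	 * Strawman sketch: the same checks as kvm_vcpu_check_block()
	 * above, collapsed into a predicate, because the polling loop
	 * only cares whether the halt can finish early.
	 */
	if (kvm_arch_vcpu_runnable(vcpu)) {
		kvm_make_request(KVM_REQ_UNHALT, vcpu);
		return true;
	}

	return kvm_cpu_has_pending_timer(vcpu) || signal_pending(current);
}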