Date: Tue, 5 Apr 2016 17:54:39 +0200
From: Radim Krčmář
To: Yang Zhang
Cc: Rik van Riel, Luiz Capitulino, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, pbonzini@redhat.com, mtosatti@redhat.com, bsd@redhat.com
Subject: Re: [PATCH] kvm: x86: make lapic hrtimer pinned
Message-ID: <20160405155438.GD21537@potion.brq.redhat.com>
References: <20160404164607.09e306fa@redhat.com> <1459803623.6219.28.camel@redhat.com> <57035899.6080609@gmail.com>
In-Reply-To: <57035899.6080609@gmail.com>

2016-04-05 14:18+0800, Yang Zhang:
> On 2016/4/5 5:00, Rik van Riel wrote:
>> Given that delivering a timer to a guest seems to
>> involve trapping from the guest to the host, anyway,
>> I don't see a downside to your patch.
>>
>> If that is ever changed (eg. allowing delivery of
>> a timer interrupt to a VCPU without trapping to the
>> host), we may want to revisit this.
>
> Posted interrupts help in this case. Currently, KVM doesn't use PI for the
> lapic timer because the lapic timer and the VCPU share the same affinity.
> Now, we can change to using PI for the lapic timer. The only concern is:
> what is the frequency of timer migration in upstream Linux? If it is
> frequent, will it bring additional cost?

If the timer migration frequency mattered, that would be a scheduler bug.
:) Additional costs arise when the timer and the VCPU are on two different
CPUs: e.g. if both CPUs are in a deep C-state, we wasted one wakeup, and the
timer sometimes needs to send an interrupt.

A finely tuned KVM could benefit from having the lapic timer backend on a
different physical core, but the general case would need some experimentation
to decide.

I think that we'd still want timer interrupts on the same physical core if
the host didn't have PI, and the fraction of timers that can be injected
without a guest entry is important for deciding whether PI makes the effort
worthwhile. The biggest benefit might come from handling multiple lapic
timers in one host interrupt.