Date: Thu, 11 Jul 2013 13:56:31 +0300
From: Gleb Natapov
To: Raghavendra K T
Cc: Andrew Jones, mingo@redhat.com, ouyang@cs.pitt.edu, habanero@linux.vnet.ibm.com, jeremy@goop.org, x86@kernel.org, konrad.wilk@oracle.com, hpa@zytor.com, pbonzini@redhat.com, linux-doc@vger.kernel.org, xen-devel@lists.xensource.com, peterz@infradead.org, mtosatti@redhat.com, stefano.stabellini@eu.citrix.com, andi@firstfloor.org, attilio.rao@citrix.com, gregkh@suse.de, agraf@suse.de, chegu_vinod@hp.com, torvalds@linux-foundation.org, avi.kivity@gmail.com, tglx@linutronix.de, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, riel@redhat.com, virtualization@lists.linux-foundation.org, srivatsa.vaddagiri@gmail.com
Subject: Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks
Message-ID: <20130711105631.GH8575@redhat.com>

On Thu, Jul 11, 2013 at 04:23:58PM +0530, Raghavendra K T wrote:
> On 07/11/2013 03:41 PM, Gleb Natapov wrote:
> >On Thu, Jul 11, 2013 at 03:40:38PM +0530, Raghavendra K T wrote:
> >>>>>>Gleb,
> >>>>>>Can you elaborate a little more on what you have in mind regarding a
> >>>>>>per-VM ple_window? (Maintaining part of it as a per-VM variable is
> >>>>>>clear to me), but do we have to load it on every guest entry?
> >>>>>>
> >>>>>Only when it changes; that shouldn't be too often, no?
> >>>>
> >>>>OK, thinking about how to do that: read the register and write it back
> >>>>if a change is needed during guest entry?
> >>>>
> >>>Why not do it like in the patch you've linked? When the value changes,
> >>>write it to the VMCS of the current vcpu.
> >>>
> >>
> >>Yes, that can be done. So the running vcpu's ple_window gets updated only
> >>after the next PLE exit, right?
> >I am not sure what you mean. You cannot change a vcpu's ple_window while
> >the vcpu is in guest mode.
> >
>
> I agree with that; we are both on the same page.
> What I meant is: suppose the per-VM ple_window changes while a vcpu x of
> that VM is running; that vcpu will get its ple_window updated during its
> next run.

Ah, I think "per VM" is what confuses me. Why do you want a "per VM"
ple_window and not a "per vcpu" one? With a per-vcpu ple_window, the value
cannot change while the vcpu is running.

--
			Gleb.
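For illustration, here is a minimal, self-contained C sketch of the
"write it to the VMCS only when it changes" scheme discussed above. The
struct layout and function names are hypothetical, not the actual KVM
code; in the real kernel this would be a vmcs_write32() of the PLE_WINDOW
VMCS field on the vcpu's loaded VMCS, done on the guest-entry path:

#include <stdint.h>

/* Hypothetical per-vcpu state; field names are illustrative only. */
struct vcpu {
	uint32_t ple_window;		/* desired PLE window for this vcpu */
	uint32_t ple_window_in_vmcs;	/* value last written to the VMCS  */
};

/* Stand-in for vmcs_write32(PLE_WINDOW, val); the VMWRITE is elided. */
static void vmcs_write_ple_window(uint32_t val)
{
	(void)val;
}

/*
 * Called on the guest-entry path.  The common case costs a single
 * compare; the VMCS field is rewritten only when the desired value has
 * changed since the last entry -- "only when it changes".
 */
static void sync_ple_window(struct vcpu *v)
{
	if (v->ple_window != v->ple_window_in_vmcs) {
		vmcs_write_ple_window(v->ple_window);
		v->ple_window_in_vmcs = v->ple_window;
	}
}

Note that with a per-vcpu ple_window, as suggested above, the desired
value can only change through the vcpu's own PLE-exit handling, so the
compare-and-write never observes a concurrent update from another CPU;
a shared per-VM value would not have that property.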