Date: Thu, 11 Jul 2013 16:44:15 +0530
From: Raghavendra K T
Organization: IBM
To: Gleb Natapov
Cc: Andrew Jones, mingo@redhat.com, ouyang@cs.pitt.edu, habanero@linux.vnet.ibm.com, jeremy@goop.org, x86@kernel.org, konrad.wilk@oracle.com, hpa@zytor.com, pbonzini@redhat.com, linux-doc@vger.kernel.org, xen-devel@lists.xensource.com, peterz@infradead.org, mtosatti@redhat.com, stefano.stabellini@eu.citrix.com, andi@firstfloor.org, attilio.rao@citrix.com, gregkh@suse.de, agraf@suse.de, chegu_vinod@hp.com, torvalds@linux-foundation.org, avi.kivity@gmail.com, tglx@linutronix.de, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, riel@redhat.com, virtualization@lists.linux-foundation.org, srivatsa.vaddagiri@gmail.com
Subject: Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks

On 07/11/2013 04:26 PM, Gleb Natapov wrote:
> On Thu, Jul 11, 2013 at 04:23:58PM +0530, Raghavendra K T wrote:
>> On 07/11/2013 03:41 PM, Gleb Natapov wrote:
>>> On Thu, Jul 11, 2013 at 03:40:38PM +0530, Raghavendra K T wrote:
>>>>>>>> Gleb,
>>>>>>>> Can you elaborate a little more on what you have in mind regarding a
>>>>>>>> per-VM ple_window? (Maintaining part of it as a per-VM variable is
>>>>>>>> clear to me, but do we have to load it on every guest entry?)
>>>>>>>>
>>>>>>> Only when it changes; that shouldn't be too often, no?
>>>>>>
>>>>>> Ok. Thinking about how to do it: read the register and write it back
>>>>>> if it needs to change during guest entry?
>>>>>>
>>>>> Why not do it like in the patch you've linked? When the value changes,
>>>>> write it to the VMCS of the current vcpu.
>>>>>
>>>> Yes, that can be done. So the running vcpu's ple_window gets updated
>>>> only after the next PLE exit, right?
>>> I am not sure what you mean. You cannot change a vcpu's ple_window while
>>> the vcpu is in guest mode.
>>>
>> I agree with that; both of us are on the same page.
>> What I meant is: suppose the per-VM ple_window changes while a vcpu x of
>> that VM is running; that vcpu will get its ple_window updated during its
>> next run.
> Ah, I think "per VM" is what confuses me. Why do you want a "per VM"
> ple_window and not a "per vcpu" one? With a per-vcpu one, the ple_window
> cannot change while the vcpu is running.
>

Okay. Got that.
My initial feeling was that with a per-vcpu value a vcpu does not "feel" the global load, but I think that should be no problem. Instead, we will not need atomic operations to update ple_window, which is better.
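
For concreteness, here is a rough sketch of the scheme being discussed -- purely illustrative, not code from the posted series. It assumes a per-vcpu cached ple_window plus a dirty flag (the names ple_window, ple_window_dirty, vmx_set_ple_window and vmx_flush_ple_window are my own; only vmcs_write32() and the PLE_WINDOW VMCS field are the real VMX interfaces): the value is changed only outside guest mode, and the VMCS field is rewritten at the next VM entry only when the cached value actually changed, so no atomic update is needed.

/*
 * Illustrative sketch only (not from the series).  Field and helper
 * names are assumptions.
 */
struct vcpu_vmx {
	/* ... existing fields ... */
	unsigned int ple_window;	/* per-vcpu cached value */
	bool ple_window_dirty;		/* VMCS copy is stale */
};

/* Adjust the window from the PLE exit handler (vcpu not in guest mode). */
static void vmx_set_ple_window(struct vcpu_vmx *vmx, unsigned int window)
{
	if (vmx->ple_window != window) {
		vmx->ple_window = window;
		vmx->ple_window_dirty = true;	/* picked up at next entry */
	}
}

/* Called on the VM-entry path, just before resuming the guest. */
static void vmx_flush_ple_window(struct vcpu_vmx *vmx)
{
	if (vmx->ple_window_dirty) {
		vmx->ple_window_dirty = false;
		vmcs_write32(PLE_WINDOW, vmx->ple_window);
	}
}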