Date: Mon, 31 Jul 2017 20:08:14 +0800
From: "Longpeng (Mike)"
To: David Hildenbrand
Subject: Re: [RFC] KVM: optimize the kvm_vcpu_on_spin
Message-ID: <597F1DAE.4020809@huawei.com>
References: <1501309377-195256-1-git-send-email-longpeng2@huawei.com>

Hi David,

On 2017/7/31 19:31, David Hildenbrand wrote:
> [no idea if this change makes sense (and especially if it has any bad
> side effects), do you have performance numbers? I'll just have a look at
> the general structure of the patch in the meanwhile]
>

I don't have any test results yet. Could you suggest which benchmarks
would be suitable for this change?

>> +bool kvm_arch_vcpu_spin_kernmode(struct kvm_vcpu *vcpu)
>
> kvm_arch_vcpu_in_kernel() ?
>

Um... yes, I'll correct this.

>> +{
>> +	return kvm_x86_ops->get_cpl(vcpu) == 0;
>> +}
>> +
>>  int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
>>  {
>>  	return kvm_vcpu_exiting_guest_mode(vcpu) == IN_GUEST_MODE;
>> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
>> index 648b34c..f8f0d74 100644
>> --- a/include/linux/kvm_host.h
>> +++ b/include/linux/kvm_host.h
>> @@ -272,6 +272,9 @@ struct kvm_vcpu {
>>  	} spin_loop;
>>  #endif
>>  	bool preempted;
>> +	/* If vcpu is in kernel-mode when preempted */
>> +	bool in_kernmode;
>> +
>
> Why do you have to store that ...
>
> [...]
>> +	me->in_kernmode = kvm_arch_vcpu_spin_kernmode(me);
>>  	kvm_vcpu_set_in_spin_loop(me, true);
>>  	/*
>>  	 * We boost the priority of a VCPU that is runnable but not
>> @@ -2351,6 +2353,8 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me)
>>  			continue;
>>  		if (swait_active(&vcpu->wq) && !kvm_arch_vcpu_runnable(vcpu))
>>  			continue;
>> +		if (me->in_kernmode && !vcpu->in_kernmode)
>
> Wouldn't it be easier to simply have
>
> in_kernel = kvm_arch_vcpu_in_kernel(me);
> ...
> if (in_kernel && !kvm_arch_vcpu_in_kernel(vcpu))
> ...
>

I'm not sure whether reading the vCPU's privilege level is cheap on all
architectures, so I cache it in kvm_sched_out() to keep the extra cost
inside kvm_vcpu_on_spin() as small as possible.
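Just to make the trade-off concrete, here is a rough sketch of the two
variants we are discussing (not the actual patch; I use the
kvm_arch_vcpu_in_kernel() name you suggested, and I elide the existing
eligibility checks):

/* Variant A (this RFC): cache the mode once per preemption. */
static void kvm_sched_out(struct preempt_notifier *pn,
			  struct task_struct *next)
{
	struct kvm_vcpu *vcpu = preempt_notifier_to_vcpu(pn);

	if (current->state == TASK_RUNNING) {
		vcpu->preempted = true;
		/* one get_cpl() here; the spin loop only reads a flag */
		vcpu->in_kernmode = kvm_arch_vcpu_in_kernel(vcpu);
	}
	kvm_arch_vcpu_put(vcpu);
}

/* Variant B (your suggestion): query the mode inside the spin loop. */
void kvm_vcpu_on_spin(struct kvm_vcpu *me)
{
	bool in_kernel = kvm_arch_vcpu_in_kernel(me);
	struct kvm_vcpu *vcpu;
	int i;

	kvm_for_each_vcpu(i, vcpu, me->kvm) {
		/* ... existing eligibility checks ... */
		if (in_kernel && !kvm_arch_vcpu_in_kernel(vcpu))
			continue;
		/* ... try directed yield to this vcpu ... */
	}
}

Variant B avoids the new field, but it may call kvm_arch_vcpu_in_kernel()
once per candidate vCPU inside the yield loop; that per-call cost is what
I was worried about.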
>> +			continue;
>>  		if (!kvm_vcpu_eligible_for_directed_yield(vcpu))
>>  			continue;
>>
>> @@ -4009,8 +4013,11 @@ static void kvm_sched_out(struct preempt_notifier *pn,
>>  {
>>  	struct kvm_vcpu *vcpu = preempt_notifier_to_vcpu(pn);
>>
>> -	if (current->state == TASK_RUNNING)
>> +	if (current->state == TASK_RUNNING) {
>>  		vcpu->preempted = true;
>> +		vcpu->in_kernmode = kvm_arch_vcpu_spin_kernmode(vcpu);
>> +	}
>> +
>
> so you don't have to do this change, too.
>
>>  	kvm_arch_vcpu_put(vcpu);
>>  }
>>
>>
>
>

--
Regards,
Longpeng(Mike)