Subject: Re: [PATCH v2 1/4] KVM: add spinlock optimization framework
To: "Longpeng(Mike)", pbonzini@redhat.com, rkrcmar@redhat.com
Cc: agraf@suse.com, borntraeger@de.ibm.com, cohuck@redhat.com,
    christoffer.dall@linaro.org, marc.zyngier@arm.com,
    james.hogan@imgtec.com, kvm@vger.kernel.org,
    linux-kernel@vger.kernel.org, weidong.huang@huawei.com,
    arei.gonglei@huawei.com, wangxinxin.wang@huawei.com,
    longpeng.mike@gmail.com
References: <1502165135-4784-1-git-send-email-longpeng2@huawei.com>
 <1502165135-4784-2-git-send-email-longpeng2@huawei.com>
From: David Hildenbrand
Organization: Red Hat GmbH
Message-ID: <8f4e336d-82b2-9148-eac3-ad411e6a16ef@redhat.com>
Date: Tue, 8 Aug 2017 10:42:20 +0200
In-Reply-To: <1502165135-4784-2-git-send-email-longpeng2@huawei.com>

> +bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
> +{
> +	return false;
> +}

why don't we need an EXPORT_SYMBOL here?

> +
>  /* Just ensure a guest exit from a particular CPU */
>  static void exit_vm_noop(void *info)
>  {
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 15252d7..e7720d2 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -2317,7 +2317,7 @@ static bool kvm_vcpu_eligible_for_directed_yield(struct kvm_vcpu *vcpu)
>  #endif
>  }
> 
> -void kvm_vcpu_on_spin(struct kvm_vcpu *me)
> +void kvm_vcpu_on_spin(struct kvm_vcpu *me, bool me_in_kern)
>  {
>  	struct kvm *kvm = me->kvm;
>  	struct kvm_vcpu *vcpu;
> @@ -2348,6 +2348,8 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me)
>  			continue;
>  		if (swait_active(&vcpu->wq) && !kvm_arch_vcpu_runnable(vcpu))
>  			continue;
> +		if (me_in_kern && !kvm_arch_vcpu_in_kernel(vcpu))
> +			continue;

hm, does this patch compile? (me_in_kern)

I would even move this to another patch. Maybe even split into

a) introducing kvm_arch_vcpu_in_kernel() for all archs
b) modifying kvm_vcpu_on_spin(), passing the result from
   kvm_arch_vcpu_in_kernel()
c) filling kvm_arch_vcpu_in_kernel() with life for different archs
   (multiple patches)
d) pimping kvm_vcpu_on_spin()

(see the P.S. at the very end for a rough sketch of a) + b))

>  		if (!kvm_vcpu_eligible_for_directed_yield(vcpu))
>  			continue;
> 

-- 
Thanks,

David
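
P.S.: a rough, completely untested sketch of what I have in mind for a) + b).
It assumes the two-argument kvm_vcpu_on_spin() from your patch;
handle_spin_exit() is a made-up stand-in for wherever an arch already handles
its pause/WFE exit. Treat it as an illustration of the split, not as concrete
code:

#include <linux/kvm_host.h>

/*
 * a) per-arch helper with a conservative default ("we can't tell, so
 *    pretend user mode"). Whether it also needs an EXPORT_SYMBOL_GPL()
 *    depends on whether anything modular ends up calling it -- that was
 *    my question above.
 */
bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
{
	return false;
}

/*
 * b) the arch exit handler computes the hint once and hands it to the
 *    generic code, which then only consumes the bool.
 */
static int handle_spin_exit(struct kvm_vcpu *vcpu)
{
	kvm_vcpu_on_spin(vcpu, kvm_arch_vcpu_in_kernel(vcpu));
	return 1;	/* keep running the guest */
}

That way each arch only has to implement kvm_arch_vcpu_in_kernel(), and the
generic yield logic never needs to know how an arch determines the guest's
privilege level.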