Subject: Re: [RFC] KVM: optimize the kvm_vcpu_on_spin
From: David Hildenbrand
Organization: Red Hat GmbH
To: "Longpeng (Mike)"
Cc: pbonzini@redhat.com, rkrcmar@redhat.com, agraf@suse.com, borntraeger@de.ibm.com, cohuck@redhat.com, christoffer.dall@linaro.org, marc.zyngier@arm.com, james.hogan@imgtec.com, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, weidong.huang@huawei.com, arei.gonglei@huawei.com, wangxinxin.wang@huawei.com, longpeng.mike@gmail.com
Date: Mon, 31 Jul 2017 14:27:17 +0200
Message-ID: <0677ed6e-280a-d2f3-d873-1daf99b39551@redhat.com>
In-Reply-To: <597F1DAE.4020809@huawei.com>
References: <1501309377-195256-1-git-send-email-longpeng2@huawei.com> <597F1DAE.4020809@huawei.com>

> I'm not sure whether the operation of getting the vcpu's priority level is
> expensive on all architectures, so I record it in kvm_sched_out() to
> minimize the extra cycles cost in kvm_vcpu_on_spin().

As you only care about x86 right now either way, you can directly optimize
here for the good (here: x86) case, keeping the changes, and therefore the
possible bugs, minimal.

-- 
Thanks,

David