From: "Zhang, Yang Z"
To: Paolo Bonzini, "H. Peter Anvin", "Hansen, Dave", "Li, Liang Z", kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: gleb@kernel.org, mtosatti@redhat.com, tglx@linutronix.de, mingo@redhat.com, x86@kernel.org, joro@8bytes.org, "Hao, Xudong"
Subject: RE: [v6] kvm/fpu: Enable fully eager restore kvm FPU
Date: Fri, 24 Apr 2015 07:46:42 +0000
In-Reply-To: <5539F45D.6020400@redhat.com>
References: <1429823583-3226-1-git-send-email-liang.z.li@intel.com> <55390F9A.2070808@intel.com> <553955D9.3030600@zytor.com> <5539F45D.6020400@redhat.com>

Paolo Bonzini wrote on 2015-04-24:
>
> On 24/04/2015 03:16, Zhang, Yang Z wrote:
>>> This is interesting since previous measurements on KVM have had the
>>> exact opposite results. I think we need to understand this a lot
>>> more.
>>
>> What I can tell is that a vmexit is heavy, so it is reasonable to see
>> an improvement in some cases, especially since the kernel uses eager
>> FPU now, which means each schedule may trigger a vmexit.
>
> On the other hand, vmexits are getting lighter and lighter on newer
> processors; Sandy Bridge has less than half the vmexit cost of
> Core 2 (IIRC approximately 1000 vs. 2500 clock cycles).

1000 cycles? I remember it takes about 4000 cycles even on an HSW server.

> Also, the measurements were done on Westmere, but Sandy Bridge is the
> first processor to have XSAVEOPT and thus use the eager FPU.
>
> Paolo

Best regards,
Yang
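
For context on where such cycle numbers come from: vmexit round-trip cost is usually measured by timing a forced exit (e.g. CPUID, which unconditionally exits under VMX) with RDTSC from inside the guest, similar in spirit to the kvm-unit-tests vmexit benchmark. The sketch below is purely illustrative (the helper names are made up, and it measures the full guest->host->guest round trip including CPUID emulation, so it overstates the raw exit/entry cost discussed above):

/* Guest-side sketch: average cycles per forced VM exit via CPUID.
 * Assumes a stable TSC and that it runs inside a VMX guest. */
#include <stdint.h>
#include <stdio.h>

static inline uint64_t rdtsc(void)
{
	uint32_t lo, hi;
	asm volatile("rdtsc" : "=a"(lo), "=d"(hi));
	return ((uint64_t)hi << 32) | lo;
}

static inline void force_vmexit(void)
{
	uint32_t eax = 0, ebx, ecx, edx;
	/* CPUID always traps to the hypervisor on VMX */
	asm volatile("cpuid" : "+a"(eax), "=b"(ebx), "=c"(ecx), "=d"(edx));
	(void)ebx; (void)ecx; (void)edx;
}

int main(void)
{
	const int iters = 100000;
	uint64_t start = rdtsc();
	for (int i = 0; i < iters; i++)
		force_vmexit();
	uint64_t cycles = (rdtsc() - start) / iters;
	printf("~%llu cycles per CPUID exit round trip\n",
	       (unsigned long long)cycles);
	return 0;
}

Running the same loop on bare metal gives the native CPUID cost, which can be subtracted to estimate the exit/entry overhead itself.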