Date: Tue, 16 Jan 2018 18:16:51 +0100
From: Radim Krčmář
To: Vitaly Kuznetsov
Cc: kvm@vger.kernel.org, x86@kernel.org, Paolo Bonzini, "K. Y. Srinivasan",
    Haiyang Zhang, Stephen Hemminger, "Michael Kelley (EOSG)",
    Mohammed Gamal, Cathy Avery, Bandan Das, linux-kernel@vger.kernel.org
Subject: Re: [RFC 0/6] Enlightened VMCS support for KVM on Hyper-V
Message-ID: <20180116171650.GB1824@flask>
References: <20180115173105.31845-1-vkuznets@redhat.com>
In-Reply-To: <20180115173105.31845-1-vkuznets@redhat.com>

2018-01-15 18:30+0100, Vitaly Kuznetsov:
> Early RFC. I'll refer to this patchset in my DevConf/FOSDEM
> presentations.
>
> When running nested KVM on Hyper-V it's possible to use the so-called
> 'Enlightened VMCS' and do normal memory reads/writes instead of
> VMWRITE/VMREAD instructions. Tests show that this speeds up a
> tight CPUID loop almost 3 times:
>
> Before:
>   ./cpuid_tight
>   20459
>
> After:
>   ./cpuid_tight
>   7698

Nice!

> checkpatch.pl errors/warnings and possible 32-bit brokenness are known
> issues.
>
> Main RFC questions I have are:
> - Do we want to have this per L2 VM or per L1 host?

IIUC, eVMCS replaces the VMCS when enabled, hence doing it for all VMs
would be simplest -- we wouldn't need to set up a VMCS nor reconfigure
Hyper-V on the fly.
(I'm thinking we could have a union in loaded_vmcs for the actually
used type of VMCS.)

> - How can we achieve zero overhead for non-Hyper-V deployments? Use static
>   keys? But this will only work if we decide to do eVMCS per host.

Static keys seem like a good choice.

> - Can we do better than a big switch in evmcs_read()/evmcs_write()? And
>   probably don't use 'case' defines which checkpatch.pl hates.

I'd go for a separate mapping from each Intel VMCS field to its MS eVMCS
offset and dirty bit, something like vmcs_field_to_offset_table.

Thanks.