Date: Wed, 30 Nov 2016 22:52:35 +0100
From: Radim Krčmář
To: David Matlack
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, jmattson@google.com, pbonzini@redhat.com
Subject: Re: [PATCH v3 3/5] KVM: nVMX: fix checks on CR{0,4} during virtual VMX operation
Message-ID: <20161130215234.GA8372@potion>
References: <1480472050-58023-1-git-send-email-dmatlack@google.com> <1480472050-58023-4-git-send-email-dmatlack@google.com>
In-Reply-To: <1480472050-58023-4-git-send-email-dmatlack@google.com>

2016-11-29 18:14-0800, David Matlack:
> KVM emulates MSR_IA32_VMX_CR{0,4}_FIXED1 with the value -1ULL, meaning
> all CR0 and CR4 bits are allowed to be 1 during VMX operation.
>
> This does not match real hardware, which disallows the high 32 bits of
> CR0 to be 1, and disallows reserved bits of CR4 to be 1 (including bits
> which are defined in the SDM but missing according to CPUID). A guest
> can induce a VM-entry failure by setting these bits in GUEST_CR0 and
> GUEST_CR4, despite MSR_IA32_VMX_CR{0,4}_FIXED1 indicating they are
> valid.
>
> Since KVM has allowed all bits to be 1 in CR0 and CR4, the existing
> checks on these registers do not verify must-be-0 bits. Fix these checks
> to identify must-be-0 bits according to MSR_IA32_VMX_CR{0,4}_FIXED1.
>
> This patch should introduce no change in behavior in KVM, since these
> MSRs are still -1ULL.
>
> Signed-off-by: David Matlack
> ---
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> @@ -4104,6 +4110,40 @@ static void ept_save_pdptrs(struct kvm_vcpu *vcpu)
> +static bool nested_guest_cr0_valid(struct kvm_vcpu *vcpu, unsigned long val)
> +{
> +	u64 fixed0 = to_vmx(vcpu)->nested.nested_vmx_cr0_fixed0;
> +	u64 fixed1 = to_vmx(vcpu)->nested.nested_vmx_cr0_fixed1;
> +	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
> +
> +	if (to_vmx(vcpu)->nested.nested_vmx_secondary_ctls_high &
> +		SECONDARY_EXEC_UNRESTRICTED_GUEST &&
> +	    nested_cpu_has2(vmcs12, SECONDARY_EXEC_UNRESTRICTED_GUEST))
> +		fixed0 &= ~(X86_CR0_PE | X86_CR0_PG);

These bits also seem to be guaranteed in fixed1 ... complicated
dependencies.

There is another exception, SDM 26.3.1.1 (Checks on Guest Control
Registers, Debug Registers, and MSRs):

  Bit 29 (corresponding to CR0.NW) and bit 30 (CD) are never checked
  because the values of these bits are not changed by VM entry; see
  Section 26.3.2.1.

And another check:

  If bit 31 in the CR0 field (corresponding to PG) is 1, bit 0 in that
  field (PE) must also be 1.
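Both exceptions could be folded into this helper; a minimal untested
sketch, assuming fixed_bits_valid() from earlier in this patch is the
usual ((val & fixed1) | fixed0) == val test:

  static bool nested_guest_cr0_valid(struct kvm_vcpu *vcpu, unsigned long val)
  {
  	u64 fixed0 = to_vmx(vcpu)->nested.nested_vmx_cr0_fixed0;
  	u64 fixed1 = to_vmx(vcpu)->nested.nested_vmx_cr0_fixed1;
  	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);

  	if (to_vmx(vcpu)->nested.nested_vmx_secondary_ctls_high &
  		SECONDARY_EXEC_UNRESTRICTED_GUEST &&
  	    nested_cpu_has2(vmcs12, SECONDARY_EXEC_UNRESTRICTED_GUEST))
  		fixed0 &= ~(X86_CR0_PE | X86_CR0_PG);

  	/*
  	 * SDM 26.3.1.1: CR0.NW (bit 29) and CR0.CD (bit 30) are never
  	 * checked, because VM entry does not change them; exempt them
  	 * from both masks.
  	 */
  	fixed0 &= ~(X86_CR0_CD | X86_CR0_NW);
  	fixed1 |= X86_CR0_CD | X86_CR0_NW;

  	/* SDM 26.3.1.1: if CR0.PG is 1, CR0.PE must also be 1. */
  	if ((val & X86_CR0_PG) && !(val & X86_CR0_PE))
  		return false;

  	return fixed_bits_valid(val, fixed0, fixed1);
  }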
> +
> +	return fixed_bits_valid(val, fixed0, fixed1);
> +}
> +
> +static bool nested_host_cr0_valid(struct kvm_vcpu *vcpu, unsigned long val)
> +{
> +	u64 fixed0 = to_vmx(vcpu)->nested.nested_vmx_cr0_fixed0;
> +	u64 fixed1 = to_vmx(vcpu)->nested.nested_vmx_cr0_fixed1;
> +
> +	return fixed_bits_valid(val, fixed0, fixed1);
> +}
> +
> +static bool nested_cr4_valid(struct kvm_vcpu *vcpu, unsigned long val)
> +{
> +	u64 fixed0 = to_vmx(vcpu)->nested.nested_vmx_cr4_fixed0;
> +	u64 fixed1 = to_vmx(vcpu)->nested.nested_vmx_cr4_fixed1;
> +
> +	return fixed_bits_valid(val, fixed0, fixed1);
> +}
> +
> +/* No difference in the restrictions on guest and host CR4 in VMX operation. */
> +#define nested_guest_cr4_valid nested_cr4_valid
> +#define nested_host_cr4_valid nested_cr4_valid

We should use the cr0 and cr4 checks also in handle_vmon(); see the
sketch at the end of this mail.

I've applied this series to kvm/queue for early testing.
Please send a replacement patch or patch(es) on top of this series.

Thanks.
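For handle_vmon(), I imagine something like this untested sketch,
using kvm_read_cr0()/kvm_read_cr4() for the vCPU's current register
values:

  	/*
  	 * In handle_vmon(), before entering VMX operation: VMXON
  	 * #GPs if CR0 or CR4 does not satisfy the host FIXED0/FIXED1
  	 * constraints.
  	 */
  	if (!nested_host_cr0_valid(vcpu, kvm_read_cr0(vcpu)) ||
  	    !nested_host_cr4_valid(vcpu, kvm_read_cr4(vcpu))) {
  		kvm_inject_gp(vcpu, 0);
  		return 1;
  	}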