Date: Wed, 30 Nov 2016 17:33:30 -0500 (EST)
From: Paolo Bonzini
To: Radim Krčmář
Cc: David Matlack, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, jmattson@google.com
Subject: Re: [PATCH v3 3/5] KVM: nVMX: fix checks on CR{0,4} during virtual VMX operation

----- Original Message -----
> From: "Radim Krčmář"
> To: "David Matlack"
> Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, jmattson@google.com, pbonzini@redhat.com
> Sent: Wednesday, November 30, 2016 10:52:35 PM
> Subject: Re: [PATCH v3 3/5] KVM: nVMX: fix checks on CR{0,4} during virtual VMX operation
>
> 2016-11-29 18:14-0800, David Matlack:
> > KVM emulates MSR_IA32_VMX_CR{0,4}_FIXED1 with the value -1ULL, meaning
> > all CR0 and CR4 bits are allowed to be 1 during VMX operation.
> >
> > This does not match real hardware, which disallows the high 32 bits of
> > CR0 to be 1, and disallows reserved bits of CR4 to be 1 (including bits
> > which are defined in the SDM but missing according to CPUID). A guest
> > can induce a VM-entry failure by setting these bits in GUEST_CR0 and
> > GUEST_CR4, despite MSR_IA32_VMX_CR{0,4}_FIXED1 indicating they are
> > valid.
> >
> > Since KVM has allowed all bits to be 1 in CR0 and CR4, the existing
> > checks on these registers do not verify must-be-0 bits. Fix these checks
> > to identify must-be-0 bits according to MSR_IA32_VMX_CR{0,4}_FIXED1.
> >
> > This patch should introduce no change in behavior in KVM, since these
> > MSRs are still -1ULL.
> >
> > Signed-off-by: David Matlack
> > ---
> > diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> > @@ -4104,6 +4110,40 @@ static void ept_save_pdptrs(struct kvm_vcpu *vcpu)
> > +static bool nested_guest_cr0_valid(struct kvm_vcpu *vcpu, unsigned long val)
> > +{
> > +	u64 fixed0 = to_vmx(vcpu)->nested.nested_vmx_cr0_fixed0;
> > +	u64 fixed1 = to_vmx(vcpu)->nested.nested_vmx_cr0_fixed1;
> > +	struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
> > +
> > +	if (to_vmx(vcpu)->nested.nested_vmx_secondary_ctls_high &
> > +		SECONDARY_EXEC_UNRESTRICTED_GUEST &&
> > +	    nested_cpu_has2(vmcs12, SECONDARY_EXEC_UNRESTRICTED_GUEST))
> > +		fixed0 &= ~(X86_CR0_PE | X86_CR0_PG);
>
> These bits also seem to be guaranteed in fixed1 ... complicated
> dependencies.

Bits that are set in fixed0 must be set in fixed1 too. Since patch 4
always sets CR0_FIXED1 to all-ones (matching bare metal), this is okay.
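
For reference, the FIXED0/FIXED1 convention these helpers rely on is that
any bit set in FIXED0 must be 1 and any bit clear in FIXED1 must be 0.
fixed_bits_valid() is not quoted in this hunk, so the sketch below only
restates its expected semantics; the actual helper in the series may
differ in detail:

	/*
	 * Sketch only: restates the expected semantics of the
	 * fixed_bits_valid() helper used above; not taken verbatim
	 * from the patch.
	 */
	static inline bool fixed_bits_valid(u64 val, u64 fixed0, u64 fixed1)
	{
		/* every bit set in fixed0 must also be set in val ... */
		if ((val & fixed0) != fixed0)
			return false;

		/* ... and every bit clear in fixed1 must be clear in val */
		return (val & fixed1) == val;
	}

With that reading, clearing PE/PG from fixed0 in the unrestricted-guest
case only relaxes the must-be-1 requirement; it cannot allow a bit that
fixed1 forbids, since fixed1 is left untouched.
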
> There is another exception, SDM 26.3.1.1 (Checks on Guest Control
> Registers, Debug Registers, and MSRs):
>
>   Bit 29 (corresponding to CR0.NW) and bit 30 (CD) are never checked
>   because the values of these bits are not changed by VM entry; see
>   Section 26.3.2.1.

Same here, we never check them anyway.

> And another check:
>
>   If bit 31 in the CR0 field (corresponding to PG) is 1, bit 0 in that
>   field (PE) must also be 1.

This should not be a problem: a failed vmentry is reflected into L1
anyway. We only need to check insofar as our check could end up being
more restrictive than what the processor does.

Paolo

> > +
> > +	return fixed_bits_valid(val, fixed0, fixed1);
> > +}
> > +
> > +static bool nested_host_cr0_valid(struct kvm_vcpu *vcpu, unsigned long val)
> > +{
> > +	u64 fixed0 = to_vmx(vcpu)->nested.nested_vmx_cr0_fixed0;
> > +	u64 fixed1 = to_vmx(vcpu)->nested.nested_vmx_cr0_fixed1;
> > +
> > +	return fixed_bits_valid(val, fixed0, fixed1);
> > +}
> > +
> > +static bool nested_cr4_valid(struct kvm_vcpu *vcpu, unsigned long val)
> > +{
> > +	u64 fixed0 = to_vmx(vcpu)->nested.nested_vmx_cr4_fixed0;
> > +	u64 fixed1 = to_vmx(vcpu)->nested.nested_vmx_cr4_fixed1;
> > +
> > +	return fixed_bits_valid(val, fixed0, fixed1);
> > +}
> > +
> > +/* No difference in the restrictions on guest and host CR4 in VMX operation. */
> > +#define nested_guest_cr4_valid	nested_cr4_valid
> > +#define nested_host_cr4_valid	nested_cr4_valid
>
> We should also use the cr0 and cr4 checks in handle_vmon().
>
> I've applied this series to kvm/queue for early testing.
> Please send a replacement patch or patch(es) on top of this series.
>
> Thanks.
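
For completeness, the handle_vmon() change suggested above could look
roughly like the sketch below. The exact placement, the use of
kvm_read_cr0()/kvm_read_cr4(), and the #GP injection are assumptions about
how such a follow-up might be written, not the final patch; the helper
names follow the hunks quoted in this thread (nested_host_cr4_valid is
#defined to nested_cr4_valid):

	/*
	 * Sketch: in handle_vmon(), before entering VMX operation.
	 * The SDM has VMXON raise #GP(0) when the current CR0/CR4
	 * values are not supported in VMX operation.
	 */
	if (!nested_host_cr0_valid(vcpu, kvm_read_cr0(vcpu)) ||
	    !nested_host_cr4_valid(vcpu, kvm_read_cr4(vcpu))) {
		kvm_inject_gp(vcpu, 0);
		return 1;
	}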