From: Jim Mattson
Date: Thu, 6 Aug 2020 14:32:33 -0700
Subject: Re: [PATCH] KVM: x86: Don't attempt to load PDPTRs when 64-bit mode is enabled
To: Sean Christopherson
Cc: Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li, Joerg Roedel,
	kvm list, LKML, Oliver Upton, Peter Shier, Maxim Levitsky
References: <20200714015732.32426-1-sean.j.christopherson@intel.com>
	<084a332afc149c0c647e86f71fea49bb0665a843.camel@redhat.com>
In-Reply-To: <084a332afc149c0c647e86f71fea49bb0665a843.camel@redhat.com>

On Wed, Aug 5, 2020 at 12:04 AM Maxim Levitsky wrote:
>
> On Mon, 2020-07-13 at 18:57 -0700, Sean Christopherson wrote:
> > Don't attempt to load PDPTRs if EFER.LME=1, i.e. if 64-bit mode is
> > enabled.  A recent change to reload the PDPTRs when CR0.CD or CR0.NW is
> > toggled botched the EFER.LME handling and sends KVM down the PDPTR path
> > when is_paging() is true, i.e. when the guest toggles CD/NW in 64-bit
> > mode.
> >
> > Split the CR0 checks for 64-bit vs. 32-bit PAE into separate paths.  The
> > 64-bit path is specifically checking state when paging is toggled on,
> > i.e. CR0.PG transitions from 0->1.
> > The PDPTR path now needs to run if
> > the new CR0 state has paging enabled, irrespective of whether paging was
> > already enabled.  Trying to shave a few cycles to make the PDPTR path an
> > "else if" case is a mess.
> >
> > Fixes: d42e3fae6faed ("kvm: x86: Read PDPTEs on CR0.CD and CR0.NW changes")
> > Cc: Jim Mattson
> > Cc: Oliver Upton
> > Cc: Peter Shier
> > Signed-off-by: Sean Christopherson
> > ---
> >
> > The other way to fix this, with a much smaller diff stat, is to simply
> > move the !is_paging(vcpu) check inside (vcpu->arch.efer & EFER_LME).  But
> > that results in a ridiculous amount of nested conditionals for what is a
> > very straightforward check, e.g.
> >
> > 	if (cr0 & X86_CR0_PG) {
> > 		if (vcpu->arch.efer & EFER_LME) {
> > 			if (!is_paging(vcpu)) {
> > 				...
> > 			}
> > 		}
> > 	}
> >
> > Since this doesn't need to be backported anywhere, I didn't see any value
> > in having an intermediate step.
> >
> >  arch/x86/kvm/x86.c | 24 ++++++++++++------------
> >  1 file changed, 12 insertions(+), 12 deletions(-)
> >
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index 95ef629228691..5f526d94c33f3 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -819,22 +819,22 @@ int kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
> >  	if ((cr0 & X86_CR0_PG) && !(cr0 & X86_CR0_PE))
> >  		return 1;
> >
> > -	if (cr0 & X86_CR0_PG) {
> >  #ifdef CONFIG_X86_64
> > -		if (!is_paging(vcpu) && (vcpu->arch.efer & EFER_LME)) {
> > -			int cs_db, cs_l;
> > +	if ((vcpu->arch.efer & EFER_LME) && !is_paging(vcpu) &&
> > +	    (cr0 & X86_CR0_PG)) {
> > +		int cs_db, cs_l;
> >
> > -			if (!is_pae(vcpu))
> > -				return 1;
> > -			kvm_x86_ops.get_cs_db_l_bits(vcpu, &cs_db, &cs_l);
> > -			if (cs_l)
> > -				return 1;
> > -		} else
> > -#endif
> > -		if (is_pae(vcpu) && ((cr0 ^ old_cr0) & pdptr_bits) &&
> > -		    !load_pdptrs(vcpu, vcpu->arch.walk_mmu, kvm_read_cr3(vcpu)))
> > +		if (!is_pae(vcpu))
> > +			return 1;
> > +		kvm_x86_ops.get_cs_db_l_bits(vcpu, &cs_db, &cs_l);
> > +		if (cs_l)
> >  			return 1;
> >  	}
> > +#endif
> > +	if (!(vcpu->arch.efer & EFER_LME) && (cr0 & X86_CR0_PG) &&
> > +	    is_pae(vcpu) && ((cr0 ^ old_cr0) & pdptr_bits) &&
> > +	    !load_pdptrs(vcpu, vcpu->arch.walk_mmu, kvm_read_cr3(vcpu)))
> > +		return 1;

It might be worth commenting on the subtlety of the test below being
skipped if the PDPTEs were loaded above. I'm assuming that the PDPTEs
shouldn't be loaded if the instruction faults.

> >  	if (!(cr0 & X86_CR0_PG) && kvm_read_cr4_bits(vcpu, X86_CR4_PCIDE))
> >  		return 1;
>
> I also investigated this issue (same symptom: OVMF doesn't boot), and
> after looking at the Intel and AMD PRMs, this looks like the correct
> solution. I also tested it and it works.
>
> Reviewed-by: Maxim Levitsky
>
> Best regards,
> 	Maxim Levitsky