From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Oliver Upton, Peter Shier
Subject: [PATCH] KVM: x86: Don't attempt to load PDPTRs when 64-bit mode is enabled
Date: Mon, 13 Jul 2020 18:57:32 -0700
Message-Id: <20200714015732.32426-1-sean.j.christopherson@intel.com>

Don't attempt to load PDPTRs if EFER.LME=1, i.e. if 64-bit mode is
enabled.  A recent change to reload the PDPTRs when CR0.CD or CR0.NW is
toggled botched the EFER.LME handling and sends KVM down the PDPTR path
when is_paging() is true, i.e. when the guest toggles CD/NW in 64-bit
mode.

Split the CR0 checks for 64-bit vs. 32-bit PAE into separate paths.  The
64-bit path is specifically checking state when paging is toggled on,
i.e. CR0.PG transitions from 0->1.  The PDPTR path now needs to run if
the new CR0 state has paging enabled, irrespective of whether paging was
already enabled.  Trying to shave a few cycles to make the PDPTR path an
"else if" case is a mess.

Fixes: d42e3fae6faed ("kvm: x86: Read PDPTEs on CR0.CD and CR0.NW changes")
Cc: Jim Mattson
Cc: Oliver Upton
Cc: Peter Shier
Signed-off-by: Sean Christopherson
---
The other way to fix this, with a much smaller diff stat, is to simply
move the !is_paging(vcpu) check inside (vcpu->arch.efer & EFER_LME).  But
that results in a ridiculous amount of nested conditionals for what is a
very straightforward check, e.g.

	if (cr0 & X86_CR0_PG) {
		if (vcpu->arch.efer & EFER_LME) {
			if (!is_paging(vcpu)) {
				...
			}
		}
	}

Since this doesn't need to be backported anywhere, I didn't see any
value in having an intermediate step.
 arch/x86/kvm/x86.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 95ef629228691..5f526d94c33f3 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -819,22 +819,22 @@ int kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
 	if ((cr0 & X86_CR0_PG) && !(cr0 & X86_CR0_PE))
 		return 1;
 
-	if (cr0 & X86_CR0_PG) {
 #ifdef CONFIG_X86_64
-		if (!is_paging(vcpu) && (vcpu->arch.efer & EFER_LME)) {
-			int cs_db, cs_l;
+	if ((vcpu->arch.efer & EFER_LME) && !is_paging(vcpu) &&
+	    (cr0 & X86_CR0_PG)) {
+		int cs_db, cs_l;
 
-			if (!is_pae(vcpu))
-				return 1;
-			kvm_x86_ops.get_cs_db_l_bits(vcpu, &cs_db, &cs_l);
-			if (cs_l)
-				return 1;
-		} else
-#endif
-		if (is_pae(vcpu) && ((cr0 ^ old_cr0) & pdptr_bits) &&
-		    !load_pdptrs(vcpu, vcpu->arch.walk_mmu, kvm_read_cr3(vcpu)))
+		if (!is_pae(vcpu))
+			return 1;
+		kvm_x86_ops.get_cs_db_l_bits(vcpu, &cs_db, &cs_l);
+		if (cs_l)
 			return 1;
 	}
+#endif
+	if (!(vcpu->arch.efer & EFER_LME) && (cr0 & X86_CR0_PG) &&
+	    is_pae(vcpu) && ((cr0 ^ old_cr0) & pdptr_bits) &&
+	    !load_pdptrs(vcpu, vcpu->arch.walk_mmu, kvm_read_cr3(vcpu)))
+		return 1;
 
 	if (!(cr0 & X86_CR0_PG) && kvm_read_cr4_bits(vcpu, X86_CR4_PCIDE))
 		return 1;
-- 
2.26.0