From: Paolo Bonzini
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: seanjc@google.com, michael.roth@amd.com, isaku.yamahata@intel.com,
	thomas.lendacky@amd.com
Subject: [PATCH 03/21] KVM: x86/mmu: Replace hardcoded value 0 for the initial value for SPTE
Date: Tue, 27 Feb 2024 18:20:42 -0500
Message-Id: <20240227232100.478238-4-pbonzini@redhat.com>
In-Reply-To: <20240227232100.478238-1-pbonzini@redhat.com>
References: <20240227232100.478238-1-pbonzini@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit

From: Sean Christopherson

TDX support will need the "suppress #VE" bit (bit 63) set as the initial
value of an SPTE.  To keep the code change small, introduce a new macro,
SHADOW_NONPRESENT_VALUE, for the initial value of a shadow page table
entry (SPTE), replace the hard-coded value 0 with it, and initialize
shadow page tables with this value.

The plan is to unconditionally set the "suppress #VE" bit for both AMD
and Intel because: 1) AMD hardware uses bit 63 as the NX bit for present
SPTEs and ignores it for non-present SPTEs; 2) for conventional VMX
guests, KVM never enables "EPT-violation #VE" in the VMCS controls, so
the "suppress #VE" bit is ignored by hardware.

Signed-off-by: Sean Christopherson
Signed-off-by: Isaku Yamahata
Message-Id:
[Remove unnecessary CONFIG_X86_64 check. - Paolo]
Signed-off-by: Paolo Bonzini
---
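(Editor's note, not part of the patch: a minimal sketch of where this is
heading, assuming the later TDX patches redefine the non-present value to
carry bit 63.  The macro name and the use of BIT_ULL() below are
assumptions for illustration, not taken from this series.)

#include <linux/bits.h>

/*
 * Hypothetical future definition (illustration only): keep the
 * "suppress #VE" bit (bit 63) set in every non-present SPTE so that,
 * once EPT-violation #VE is enabled, hardware does not inject #VE for
 * an ordinary non-present entry.  This patch keeps the value at 0ULL,
 * so behavior is unchanged for now.
 */
#define SHADOW_NONPRESENT_VALUE_TDX	BIT_ULL(63)	/* hypothetical name */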
 arch/x86/kvm/mmu/mmu.c         | 14 +++++++++-----
 arch/x86/kvm/mmu/paging_tmpl.h |  2 +-
 arch/x86/kvm/mmu/spte.h        |  2 ++
 arch/x86/kvm/mmu/tdp_mmu.c     | 14 +++++++-------
 4 files changed, 19 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index e4cc7f764980..b5baf11359ad 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -567,9 +567,9 @@ static u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep)
 
 	if (!is_shadow_present_pte(old_spte) ||
 	    !spte_has_volatile_bits(old_spte))
-		__update_clear_spte_fast(sptep, 0ull);
+		__update_clear_spte_fast(sptep, SHADOW_NONPRESENT_VALUE);
 	else
-		old_spte = __update_clear_spte_slow(sptep, 0ull);
+		old_spte = __update_clear_spte_slow(sptep, SHADOW_NONPRESENT_VALUE);
 
 	if (!is_shadow_present_pte(old_spte))
 		return old_spte;
@@ -603,7 +603,7 @@ static u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep)
  */
 static void mmu_spte_clear_no_track(u64 *sptep)
 {
-	__update_clear_spte_fast(sptep, 0ull);
+	__update_clear_spte_fast(sptep, SHADOW_NONPRESENT_VALUE);
 }
 
 static u64 mmu_spte_get_lockless(u64 *sptep)
@@ -1950,7 +1950,8 @@ static bool kvm_sync_page_check(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 
 static int kvm_sync_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, int i)
 {
-	if (!sp->spt[i])
+	/* sp->spt[i] has initial value of shadow page table allocation */
+	if (sp->spt[i] == SHADOW_NONPRESENT_VALUE)
 		return 0;
 
 	return vcpu->arch.mmu->sync_spte(vcpu, sp, i);
@@ -6173,7 +6174,10 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
 	vcpu->arch.mmu_page_header_cache.kmem_cache = mmu_page_header_cache;
 	vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO;
 
-	vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;
+	vcpu->arch.mmu_shadow_page_cache.init_value =
+		SHADOW_NONPRESENT_VALUE;
+	if (!vcpu->arch.mmu_shadow_page_cache.init_value)
+		vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;
 
 	vcpu->arch.mmu = &vcpu->arch.root_mmu;
 	vcpu->arch.walk_mmu = &vcpu->arch.root_mmu;
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 4d4e98fe4f35..bebd73cd61bb 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -911,7 +911,7 @@ static int FNAME(sync_spte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp, int
 	gpa_t pte_gpa;
 	gfn_t gfn;
 
-	if (WARN_ON_ONCE(!sp->spt[i]))
+	if (WARN_ON_ONCE(sp->spt[i] == SHADOW_NONPRESENT_VALUE))
 		return 0;
 
 	first_pte_gpa = FNAME(get_level1_sp_gpa)(sp);
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index a129951c9a88..4d1799ba2bf8 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -149,6 +149,8 @@ static_assert(MMIO_SPTE_GEN_LOW_BITS == 8 && MMIO_SPTE_GEN_HIGH_BITS == 11);
 
 #define MMIO_SPTE_GEN_MASK	GENMASK_ULL(MMIO_SPTE_GEN_LOW_BITS + MMIO_SPTE_GEN_HIGH_BITS - 1, 0)
 
+#define SHADOW_NONPRESENT_VALUE	0ULL
+
 extern u64 __read_mostly shadow_host_writable_mask;
 extern u64 __read_mostly shadow_mmu_writable_mask;
 extern u64 __read_mostly shadow_nx_mask;
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index d078157e62aa..c8a4d92497b4 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -603,7 +603,7 @@ static inline int tdp_mmu_zap_spte_atomic(struct kvm *kvm,
 	 * here since the SPTE is going from non-present to non-present.  Use
 	 * the raw write helper to avoid an unnecessary check on volatile bits.
 	 */
-	__kvm_tdp_mmu_write_spte(iter->sptep, 0);
+	__kvm_tdp_mmu_write_spte(iter->sptep, SHADOW_NONPRESENT_VALUE);
 
 	return 0;
 }
@@ -740,8 +740,8 @@ static void __tdp_mmu_zap_root(struct kvm *kvm, struct kvm_mmu_page *root,
 			continue;
 
 		if (!shared)
-			tdp_mmu_iter_set_spte(kvm, &iter, 0);
-		else if (tdp_mmu_set_spte_atomic(kvm, &iter, 0))
+			tdp_mmu_iter_set_spte(kvm, &iter, SHADOW_NONPRESENT_VALUE);
+		else if (tdp_mmu_set_spte_atomic(kvm, &iter, SHADOW_NONPRESENT_VALUE))
 			goto retry;
 	}
 }
@@ -808,8 +808,8 @@ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
 	if (WARN_ON_ONCE(!is_shadow_present_pte(old_spte)))
 		return false;
 
-	tdp_mmu_set_spte(kvm, kvm_mmu_page_as_id(sp), sp->ptep, old_spte, 0,
-			 sp->gfn, sp->role.level + 1);
+	tdp_mmu_set_spte(kvm, kvm_mmu_page_as_id(sp), sp->ptep, old_spte,
+			 SHADOW_NONPRESENT_VALUE, sp->gfn, sp->role.level + 1);
 
 	return true;
 }
@@ -843,7 +843,7 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
 		    !is_last_spte(iter.old_spte, iter.level))
 			continue;
 
-		tdp_mmu_iter_set_spte(kvm, &iter, 0);
+		tdp_mmu_iter_set_spte(kvm, &iter, SHADOW_NONPRESENT_VALUE);
 
 		/*
 		 * Zappings SPTEs in invalid roots doesn't require a TLB flush,
@@ -1276,7 +1276,7 @@ static bool set_spte_gfn(struct kvm *kvm, struct tdp_iter *iter,
 	 * invariant that the PFN of a present leaf SPTE can never change.
 	 * See handle_changed_spte().
 	 */
-	tdp_mmu_iter_set_spte(kvm, iter, 0);
+	tdp_mmu_iter_set_spte(kvm, iter, SHADOW_NONPRESENT_VALUE);
 
 	if (!pte_write(range->arg.pte)) {
 		new_spte = kvm_mmu_changed_pte_notifier_make_spte(iter->old_spte,
-- 
2.39.0
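
(Editor's note: the kvm_mmu_create() hunk above depends on the MMU memory
cache honoring a non-zero init_value, introduced earlier in this series.
Below is a minimal sketch of how such an allocator might fill fresh pages
when init_value is set; the helper name and the memset64()-based approach
are assumptions for illustration, not the series' actual implementation.)

#include <linux/gfp.h>
#include <linux/string.h>	/* memset64() */

/*
 * Illustrative only: fill a freshly allocated page-sized object with the
 * cache's 64-bit init value instead of relying on __GFP_ZERO, so every
 * SPTE slot starts out as SHADOW_NONPRESENT_VALUE.
 */
static void *cache_alloc_obj_sketch(u64 init_value, gfp_t gfp)
{
	void *obj = (void *)__get_free_page(gfp);

	if (obj && init_value)
		memset64(obj, init_value, PAGE_SIZE / sizeof(u64));

	return obj;
}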