From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini, erdemaktas@google.com, Sean Christopherson, Sagi Shahar, Kai Huang, chen.bo@intel.com, hang.yuan@intel.com, tina.zhang@intel.com, Sean Christopherson
Subject: [PATCH v19 052/130] KVM: x86/mmu: Track shadow MMIO value on a per-VM basis
Date: Mon, 26 Feb 2024 00:25:54 -0800
Message-Id: <34d7a0c8724f4fce4da50fe3028373c31213aa8a.1708933498.git.isaku.yamahata@intel.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Isaku Yamahata

TDX will use a shadow PTE value for MMIO that differs from VMX's. Add a
member to struct kvm_arch and track the MMIO value per-VM instead of in a
global variable. Using the per-VM EPT entry value for MMIO keeps the
existing VMX logic working unchanged. Introduce a separate setter function
so that a guest TD can override the value later.

Also require MMIO SPTE caching for TDX. In practice this is already the
case, because TDX requires EPT and KVM's EPT support allows MMIO SPTE
caching.

Signed-off-by: Sean Christopherson
Signed-off-by: Isaku Yamahata
---
v19:
- fix typo in the commit message.
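(Note for reviewers, not part of the patch: a minimal sketch of how a
TDX-specific init path might consume the new setter. The function name
example_tdx_vm_init_mmio() and the TDX_MMIO_SPTE_VALUE constant are
hypothetical placeholders; the actual TDX MMIO value is introduced by a
later patch in this series.)

    #include "mmu.h"	/* kvm_mmu_set_mmio_spte_value() */

    /*
     * Hypothetical placeholder for whatever non-present SPTE encoding a
     * TD requires to fault on MMIO.  Must be nonzero, since
     * make_mmio_spte() now WARNs on a zero per-VM MMIO value.
     */
    #define TDX_MMIO_SPTE_VALUE	BIT_ULL(0)

    static void example_tdx_vm_init_mmio(struct kvm *kvm)
    {
    	/* Override the VMX default installed by kvm_mmu_init_vm(). */
    	kvm_mmu_set_mmio_spte_value(kvm, TDX_MMIO_SPTE_VALUE);
    }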
---
 arch/x86/include/asm/kvm_host.h |  2 ++
 arch/x86/kvm/mmu.h              |  1 +
 arch/x86/kvm/mmu/mmu.c          |  8 +++++---
 arch/x86/kvm/mmu/spte.c         | 10 ++++++++--
 arch/x86/kvm/mmu/spte.h         |  4 ++--
 arch/x86/kvm/mmu/tdp_mmu.c      |  6 +++---
 6 files changed, 21 insertions(+), 10 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index de6dd42d226f..6c10d8d1017f 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1312,6 +1312,8 @@ struct kvm_arch {
 	 */
 	spinlock_t mmu_unsync_pages_lock;
 
+	u64 shadow_mmio_value;
+
 	struct iommu_domain *iommu_domain;
 	bool iommu_noncoherent;
 #define __KVM_HAVE_ARCH_NONCOHERENT_DMA
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 395b55684cb9..ab2854f337ab 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -101,6 +101,7 @@ static inline u8 kvm_get_shadow_phys_bits(void)
 }
 
 void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask);
+void kvm_mmu_set_mmio_spte_value(struct kvm *kvm, u64 mmio_value);
 void kvm_mmu_set_me_spte_mask(u64 me_value, u64 me_mask);
 void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_exec_only);
 
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 211c0e72f45d..84e7a289ad07 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2515,7 +2515,7 @@ static int mmu_page_zap_pte(struct kvm *kvm, struct kvm_mmu_page *sp,
 				return kvm_mmu_prepare_zap_page(kvm, child,
 								invalid_list);
 		}
-	} else if (is_mmio_spte(pte)) {
+	} else if (is_mmio_spte(kvm, pte)) {
 		mmu_spte_clear_no_track(spte);
 	}
 	return 0;
@@ -4184,7 +4184,7 @@ static int handle_mmio_page_fault(struct kvm_vcpu *vcpu, u64 addr, bool direct)
 	if (WARN_ON_ONCE(reserved))
 		return -EINVAL;
 
-	if (is_mmio_spte(spte)) {
+	if (is_mmio_spte(vcpu->kvm, spte)) {
 		gfn_t gfn = get_mmio_spte_gfn(spte);
 		unsigned int access = get_mmio_spte_access(spte);
 
@@ -4837,7 +4837,7 @@ EXPORT_SYMBOL_GPL(kvm_mmu_new_pgd);
 static bool sync_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, gfn_t gfn,
 			   unsigned int access)
 {
-	if (unlikely(is_mmio_spte(*sptep))) {
+	if (unlikely(is_mmio_spte(vcpu->kvm, *sptep))) {
 		if (gfn != get_mmio_spte_gfn(*sptep)) {
 			mmu_spte_clear_no_track(sptep);
 			return true;
@@ -6357,6 +6357,8 @@ static bool kvm_has_zapped_obsolete_pages(struct kvm *kvm)
 
 void kvm_mmu_init_vm(struct kvm *kvm)
 {
+
+	kvm->arch.shadow_mmio_value = shadow_mmio_value;
 	INIT_LIST_HEAD(&kvm->arch.active_mmu_pages);
 	INIT_LIST_HEAD(&kvm->arch.zapped_obsolete_pages);
 	INIT_LIST_HEAD(&kvm->arch.possible_nx_huge_pages);
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 02a466de2991..318135daf685 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -74,10 +74,10 @@ u64 make_mmio_spte(struct kvm_vcpu *vcpu, u64 gfn, unsigned int access)
 	u64 spte = generation_mmio_spte_mask(gen);
 	u64 gpa = gfn << PAGE_SHIFT;
 
-	WARN_ON_ONCE(!shadow_mmio_value);
+	WARN_ON_ONCE(!vcpu->kvm->arch.shadow_mmio_value);
 	access &= shadow_mmio_access_mask;
-	spte |= shadow_mmio_value | access;
+	spte |= vcpu->kvm->arch.shadow_mmio_value | access;
 	spte |= gpa | shadow_nonpresent_or_rsvd_mask;
 	spte |= (gpa & shadow_nonpresent_or_rsvd_mask)
 		<< SHADOW_NONPRESENT_OR_RSVD_MASK_LEN;
 
@@ -411,6 +411,12 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask)
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_set_mmio_spte_mask);
 
+void kvm_mmu_set_mmio_spte_value(struct kvm *kvm, u64 mmio_value)
+{
+	kvm->arch.shadow_mmio_value = mmio_value;
+}
+EXPORT_SYMBOL_GPL(kvm_mmu_set_mmio_spte_value);
+
 void kvm_mmu_set_me_spte_mask(u64 me_value, u64 me_mask)
 {
 	/* shadow_me_value must be a subset of shadow_me_mask */
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 26bc95bbc962..1a163aee9ec6 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -264,9 +264,9 @@ static inline struct kvm_mmu_page *root_to_sp(hpa_t root)
 	return spte_to_child_sp(root);
 }
 
-static inline bool is_mmio_spte(u64 spte)
+static inline bool is_mmio_spte(struct kvm *kvm, u64 spte)
 {
-	return (spte & shadow_mmio_mask) == shadow_mmio_value &&
+	return (spte & shadow_mmio_mask) == kvm->arch.shadow_mmio_value &&
 	       likely(enable_mmio_caching);
 }
 
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index bdeb23ff9e71..04c6af49c3e8 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -462,8 +462,8 @@ static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 	 * impact the guest since both the former and current SPTEs
 	 * are nonpresent.
 	 */
-	if (WARN_ON_ONCE(!is_mmio_spte(old_spte) &&
-			 !is_mmio_spte(new_spte) &&
+	if (WARN_ON_ONCE(!is_mmio_spte(kvm, old_spte) &&
+			 !is_mmio_spte(kvm, new_spte) &&
 			 !is_removed_spte(new_spte)))
 		pr_err("Unexpected SPTE change! Nonpresent SPTEs\n"
 		       "should not be replaced with another,\n"
@@ -978,7 +978,7 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
 	}
 
 	/* If a MMIO SPTE is installed, the MMIO will need to be emulated. */
-	if (unlikely(is_mmio_spte(new_spte))) {
+	if (unlikely(is_mmio_spte(vcpu->kvm, new_spte))) {
 		vcpu->stat.pf_mmio_spte_created++;
 		trace_mark_mmio_spte(rcu_dereference(iter->sptep), iter->gfn,
 				     new_spte);
-- 
2.25.1