From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini, erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack, Sean Christopherson
Subject: [PATCH v11 033/113] KVM: x86/mmu: Track shadow MMIO value on a per-VM basis
Date: Thu, 12 Jan 2023 08:31:41 -0800

From: Isaku Yamahata <isaku.yamahata@intel.com>

TDX will use a shadow PTE value for MMIO that differs from VMX.  Add a
member to kvm_arch and track the MMIO value per-VM instead of in a
global variable.  By using the per-VM EPT entry value for MMIO, the
existing VMX logic keeps working.
Introduce a separate setter function so that a guest TD can override the
value later.  Also require MMIO SPTE caching for TDX; this is always
satisfiable because TDX requires EPT, and KVM with EPT allows MMIO SPTE
caching.

Signed-off-by: Sean Christopherson
Signed-off-by: Isaku Yamahata
---
 arch/x86/include/asm/kvm_host.h |  2 ++
 arch/x86/kvm/mmu.h              |  1 +
 arch/x86/kvm/mmu/mmu.c          |  7 ++++---
 arch/x86/kvm/mmu/spte.c         | 10 ++++++++--
 arch/x86/kvm/mmu/spte.h         |  4 ++--
 arch/x86/kvm/mmu/tdp_mmu.c      | 14 +++++++++++---
 6 files changed, 28 insertions(+), 10 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 73c987b3d2b6..807da4b95aba 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1243,6 +1243,8 @@ struct kvm_arch {
 	 */
 	spinlock_t mmu_unsync_pages_lock;
 
+	u64 shadow_mmio_value;
+
 	struct list_head assigned_dev_head;
 	struct iommu_domain *iommu_domain;
 	bool iommu_noncoherent;
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index a45f7a96b821..50d240d52697 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -101,6 +101,7 @@ static inline u8 kvm_get_shadow_phys_bits(void)
 }
 
 void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask);
+void kvm_mmu_set_mmio_spte_value(struct kvm *kvm, u64 mmio_value);
 void kvm_mmu_set_me_spte_mask(u64 me_value, u64 me_mask);
 void kvm_mmu_set_ept_masks(bool has_ad_bits, bool has_exec_only);
 
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 59befdfeec23..8d3d7deebdd0 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2450,7 +2450,7 @@ static int mmu_page_zap_pte(struct kvm *kvm, struct kvm_mmu_page *sp,
 			return kvm_mmu_prepare_zap_page(kvm, child,
 							invalid_list);
 		}
-	} else if (is_mmio_spte(pte)) {
+	} else if (is_mmio_spte(kvm, pte)) {
 		mmu_spte_clear_no_track(spte);
 	}
 	return 0;
@@ -4119,7 +4119,7 @@ static int handle_mmio_page_fault(struct kvm_vcpu *vcpu, u64 addr, bool direct)
 	if (WARN_ON(reserved))
 		return -EINVAL;
 
-	if (is_mmio_spte(spte)) {
+	if (is_mmio_spte(vcpu->kvm, spte)) {
 		gfn_t gfn = get_mmio_spte_gfn(spte);
 		unsigned int access = get_mmio_spte_access(spte);
 
@@ -4628,7 +4628,7 @@ static unsigned long get_cr3(struct kvm_vcpu *vcpu)
 static bool sync_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, gfn_t gfn,
 			   unsigned int access)
 {
-	if (unlikely(is_mmio_spte(*sptep))) {
+	if (unlikely(is_mmio_spte(vcpu->kvm, *sptep))) {
 		if (gfn != get_mmio_spte_gfn(*sptep)) {
 			mmu_spte_clear_no_track(sptep);
 			return true;
@@ -6111,6 +6111,7 @@ int kvm_mmu_init_vm(struct kvm *kvm)
 	struct kvm_page_track_notifier_node *node = &kvm->arch.mmu_sp_tracker;
 	int r;
 
+	kvm->arch.shadow_mmio_value = shadow_mmio_value;
 	INIT_LIST_HEAD(&kvm->arch.active_mmu_pages);
 	INIT_LIST_HEAD(&kvm->arch.zapped_obsolete_pages);
 	INIT_LIST_HEAD(&kvm->arch.possible_nx_huge_pages);
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index cc0bc058fb25..a23e9205fc42 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -74,10 +74,10 @@ u64 make_mmio_spte(struct kvm_vcpu *vcpu, u64 gfn, unsigned int access)
 	u64 spte = generation_mmio_spte_mask(gen);
 	u64 gpa = gfn << PAGE_SHIFT;
 
-	WARN_ON_ONCE(!shadow_mmio_value);
+	WARN_ON_ONCE(!vcpu->kvm->arch.shadow_mmio_value);
 	access &= shadow_mmio_access_mask;
-	spte |= shadow_mmio_value | access;
+	spte |= vcpu->kvm->arch.shadow_mmio_value | access;
 	spte |= gpa | shadow_nonpresent_or_rsvd_mask;
 	spte |= (gpa & shadow_nonpresent_or_rsvd_mask)
 		<< SHADOW_NONPRESENT_OR_RSVD_MASK_LEN;
@@ -413,6 +413,12 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_value, u64 mmio_mask, u64 access_mask)
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_set_mmio_spte_mask);
 
+void kvm_mmu_set_mmio_spte_value(struct kvm *kvm, u64 mmio_value)
+{
+	kvm->arch.shadow_mmio_value = mmio_value;
+}
+EXPORT_SYMBOL_GPL(kvm_mmu_set_mmio_spte_value);
+
 void kvm_mmu_set_me_spte_mask(u64 me_value, u64 me_mask)
 {
 	/* shadow_me_value must be a subset of shadow_me_mask */
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 471378ee9071..256395eb593f 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -251,9 +251,9 @@ static inline struct kvm_mmu_page *sptep_to_sp(u64 *sptep)
 	return to_shadow_page(__pa(sptep));
 }
 
-static inline bool is_mmio_spte(u64 spte)
+static inline bool is_mmio_spte(struct kvm *kvm, u64 spte)
 {
-	return (spte & shadow_mmio_mask) == shadow_mmio_value &&
+	return (spte & shadow_mmio_mask) == kvm->arch.shadow_mmio_value &&
 	       likely(enable_mmio_caching);
 }
 
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 6111e3e9266d..dffacb7eb15a 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -19,6 +19,14 @@ int kvm_mmu_init_tdp_mmu(struct kvm *kvm)
 {
 	struct workqueue_struct *wq;
 
+	/*
+	 * TDs require mmio_caching to clear the suppress_ve bit of the SPTE
+	 * for the GPA of MMIO, so that the TD can convert the #VE triggered
+	 * by MMIO into TDG.VP.VMCALL.
+	 */
+	if (kvm->arch.vm_type == KVM_X86_TDX_VM && !enable_mmio_caching)
+		return -EOPNOTSUPP;
+
 	if (!tdp_enabled || !READ_ONCE(tdp_mmu_enabled))
 		return 0;
 
@@ -587,8 +595,8 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 		 * impact the guest since both the former and current SPTEs
 		 * are nonpresent.
 		 */
-		if (WARN_ON(!is_mmio_spte(old_spte) &&
-			    !is_mmio_spte(new_spte) &&
+		if (WARN_ON(!is_mmio_spte(kvm, old_spte) &&
+			    !is_mmio_spte(kvm, new_spte) &&
 			    !is_removed_spte(new_spte)))
 			pr_err("Unexpected SPTE change! Nonpresent SPTEs\n"
 			       "should not be replaced with another,\n"
@@ -1114,7 +1122,7 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
 	}
 
 	/* If a MMIO SPTE is installed, the MMIO will need to be emulated. */
-	if (unlikely(is_mmio_spte(new_spte))) {
+	if (unlikely(is_mmio_spte(vcpu->kvm, new_spte))) {
 		vcpu->stat.pf_mmio_spte_created++;
 		trace_mark_mmio_spte(rcu_dereference(iter->sptep), iter->gfn,
 				     new_spte);
-- 
2.25.1
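
A note on intended usage: kvm_mmu_init_vm() seeds kvm->arch.shadow_mmio_value
with the global VMX default, and the new exported setter lets a backend
override that value per-VM.  A minimal sketch of how a TDX backend might call
it follows; the hook name tdx_vm_init() and the value 0 are assumptions about
later patches in this series, not part of this patch:

/* Hypothetical TDX-side caller; not part of this patch. */
static int tdx_vm_init(struct kvm *kvm)
{
	/*
	 * Assumed value: 0 keeps the SUPPRESS_VE bit clear in MMIO SPTEs,
	 * so a TD access to an MMIO GFN raises #VE and the guest TD
	 * converts it into TDG.VP.VMCALL, per the tdp_mmu.c comment above.
	 */
	kvm_mmu_set_mmio_spte_value(kvm, 0);
	return 0;
}

Any such caller also has to leave enable_mmio_caching on, since
kvm_mmu_init_tdp_mmu() now fails TDX VM creation with -EOPNOTSUPP when MMIO
caching is disabled.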