From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
    erdemaktas@google.com, Sean Christopherson, Sagi Shahar, David Matlack,
    Kai Huang, Zhi Wang, chen.bo@intel.com, hang.yuan@intel.com,
    tina.zhang@intel.com, Xiaoyao Li
Subject: [RFC PATCH v4 11/16] KVM: x86/tdp_mmu: Split the large page when zap leaf
Date: Tue, 25 Jul 2023 15:23:57 -0700

From: Xiaoyao Li

When TDX is enabled, a large page cannot be zapped if it contains mixed
pages, i.e. sub-pages whose private/shared memory attributes differ. In
that case, the large page has to be split first.
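In outline, tdp_mmu_zap_leafs() gains the check sketched below. This is a
simplified restatement of the tdp_mmu.c hunk in this patch, not compilable
on its own; the yield/retry handling, the reuse of a previously allocated
split_sp, and the TLB-flush bookkeeping are elided:

	if (is_private && kvm_gfn_shared_mask(kvm) &&
	    is_large_pte(iter.old_spte)) {
		gfn_t gfn = iter.gfn & ~kvm_gfn_shared_mask(kvm);
		gfn_t mask = KVM_PAGES_PER_HPAGE(iter.level) - 1;
		struct kvm_memory_slot *slot = gfn_to_memslot(kvm, gfn);
		struct kvm_mmu_page *sp;

		/*
		 * Split when the large page has mixed private/shared
		 * sub-pages, or when it only partially overlaps the
		 * zap range [start, end).
		 */
		if (kvm_hugepage_test_mixed(slot, gfn, iter.level) ||
		    (gfn & mask) < start ||
		    end < (gfn & mask) + KVM_PAGES_PER_HPAGE(iter.level)) {
			sp = tdp_mmu_alloc_sp_for_split(kvm, &iter, false);
			tdp_mmu_init_sp(sp, iter.sptep, iter.gfn);
			tdp_mmu_split_huge_page(kvm, &iter, sp, false);
			continue;	/* re-visit the now-split SPTEs */
		}
	}
	/* A 4K leaf, or a large page fully inside the range: zap it. */
	tdp_mmu_iter_set_spte(kvm, &iter, SHADOW_NONPRESENT_VALUE);

tdp_mmu_split_huge_page() installs child SPTEs at the next lower level over
the same range, so the loop then zaps only the children that fall inside
[start, end).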
Signed-off-by: Xiaoyao Li
Signed-off-by: Isaku Yamahata
---
 arch/x86/kvm/Kconfig            |  1 +
 arch/x86/kvm/mmu/mmu.c          |  6 +--
 arch/x86/kvm/mmu/mmu_internal.h |  9 +++++
 arch/x86/kvm/mmu/tdp_mmu.c      | 68 +++++++++++++++++++++++++++++++--
 4 files changed, 78 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index c7cb060c4ddc..47613ad41220 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -93,6 +93,7 @@ config KVM_INTEL
 	tristate "KVM for Intel (and compatible) processors support"
 	depends on KVM && IA32_FEAT_CTL
 	select KVM_SW_PROTECTED_VM if INTEL_TDX_HOST
+	select KVM_GENERIC_MEMORY_ATTRIBUTES if INTEL_TDX_HOST
 	select KVM_PRIVATE_MEM if INTEL_TDX_HOST
 	help
 	  Provides support for KVM on processors equipped with Intel's VT
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index bf4f23129ad0..949ef2fa8264 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -7503,8 +7503,8 @@ void kvm_mmu_pre_destroy_vm(struct kvm *kvm)
 }
 
 #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
-static bool hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t gfn,
-				int level)
+bool kvm_hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t gfn,
+			     int level)
 {
 	return lpage_info_slot(gfn, slot, level)->disallow_lpage & KVM_LPAGE_MIXED_FLAG;
 }
@@ -7563,7 +7563,7 @@ static bool hugepage_has_attrs(struct kvm *kvm, struct kvm_memory_slot *slot,
 		return range_has_attrs(kvm, start, end, attrs);
 
 	for (gfn = start; gfn < end; gfn += KVM_PAGES_PER_HPAGE(level - 1)) {
-		if (hugepage_test_mixed(slot, gfn, level - 1) ||
+		if (kvm_hugepage_test_mixed(slot, gfn, level - 1) ||
 		    attrs != kvm_get_memory_attributes(kvm, gfn))
 			return false;
 	}
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 2dc733b15c39..bc3d38762ace 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -464,4 +464,13 @@ void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
 void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 
+#ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
+bool kvm_hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t gfn, int level);
+#else
+static inline bool kvm_hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t gfn, int level)
+{
+	return false;
+}
+#endif
+
 #endif /* __KVM_X86_MMU_INTERNAL_H */
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 548b559280d7..e1169082c68c 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1004,6 +1004,14 @@ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
 	return true;
 }
 
+
+static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
+						       struct tdp_iter *iter,
+						       bool shared);
+
+static int tdp_mmu_split_huge_page(struct kvm *kvm, struct tdp_iter *iter,
+				   struct kvm_mmu_page *sp, bool shared);
+
 /*
  * If can_yield is true, will release the MMU lock and reschedule if the
  * scheduler needs the CPU or there is contention on the MMU lock. If this
@@ -1015,13 +1023,15 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
 			      gfn_t start, gfn_t end, bool can_yield, bool flush,
 			      bool zap_private)
 {
+	bool is_private = is_private_sp(root);
+	struct kvm_mmu_page *split_sp = NULL;
 	struct tdp_iter iter;
 
 	end = min(end, tdp_mmu_max_gfn_exclusive());
 
 	lockdep_assert_held_write(&kvm->mmu_lock);
 
-	WARN_ON_ONCE(zap_private && !is_private_sp(root));
+	WARN_ON_ONCE(zap_private && !is_private);
 	if (!zap_private && is_private_sp(root))
 		return false;
 
@@ -1046,12 +1056,66 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
 		    !is_last_spte(iter.old_spte, iter.level))
 			continue;
 
+		if (is_private && kvm_gfn_shared_mask(kvm) &&
+		    is_large_pte(iter.old_spte)) {
+			gfn_t gfn = iter.gfn & ~kvm_gfn_shared_mask(kvm);
+			gfn_t mask = KVM_PAGES_PER_HPAGE(iter.level) - 1;
+			struct kvm_memory_slot *slot;
+			struct kvm_mmu_page *sp;
+
+			slot = gfn_to_memslot(kvm, gfn);
+			if (kvm_hugepage_test_mixed(slot, gfn, iter.level) ||
+			    (gfn & mask) < start ||
+			    end < (gfn & mask) + KVM_PAGES_PER_HPAGE(iter.level)) {
+				WARN_ON_ONCE(!can_yield);
+				if (split_sp) {
+					sp = split_sp;
+					split_sp = NULL;
+					sp->role = tdp_iter_child_role(&iter);
+				} else {
+					WARN_ON(iter.yielded);
+					if (flush && can_yield) {
+						kvm_flush_remote_tlbs(kvm);
+						flush = false;
+					}
+					sp = tdp_mmu_alloc_sp_for_split(kvm, &iter, false);
+					if (iter.yielded) {
+						split_sp = sp;
+						continue;
+					}
+				}
+				KVM_BUG_ON(!sp, kvm);
+
+				tdp_mmu_init_sp(sp, iter.sptep, iter.gfn);
+				if (tdp_mmu_split_huge_page(kvm, &iter, sp, false)) {
+					kvm_flush_remote_tlbs(kvm);
+					flush = false;
+					/* force retry on this gfn. */
+					iter.yielded = true;
+				} else
+					flush = true;
+				continue;
+			}
+		}
+
 		tdp_mmu_iter_set_spte(kvm, &iter, SHADOW_NONPRESENT_VALUE);
 		flush = true;
 	}
 
 	rcu_read_unlock();
 
+	if (split_sp) {
+		WARN_ON(!can_yield);
+		if (flush) {
+			kvm_flush_remote_tlbs(kvm);
+			flush = false;
+		}
+
+		write_unlock(&kvm->mmu_lock);
+		tdp_mmu_free_sp(split_sp);
+		write_lock(&kvm->mmu_lock);
+	}
+
 	/*
 	 * Because this flow zaps _only_ leaf SPTEs, the caller doesn't need
 	 * to provide RCU protection as no 'struct kvm_mmu_page' will be freed.
@@ -1608,8 +1672,6 @@ static struct kvm_mmu_page *tdp_mmu_alloc_sp_for_split(struct kvm *kvm,
 	KVM_BUG_ON(kvm_mmu_page_role_is_private(role) !=
 		   is_private_sptep(iter->sptep), kvm);
 
-	/* TODO: Large page isn't supported for private SPTE yet. */
-	KVM_BUG_ON(kvm_mmu_page_role_is_private(role), kvm);
 
 	/*
 	 * Since we are allocating while under the MMU lock we have to be
-- 
2.25.1