From: isaku.yamahata@intel.com
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: isaku.yamahata@intel.com, isaku.yamahata@gmail.com, Paolo Bonzini,
    erdemaktas@google.com, Sean Christopherson, Sagi Shahar,
    David Matlack, Sean Christopherson
Subject: [PATCH v10 031/108] KVM: x86/mmu: Replace hardcoded value 0 for the initial value for SPTE
Date: Sat, 29 Oct 2022 23:22:32 -0700
Message-Id: <0de1d5dfbce49b5e9d4f93289296b726180b8dd0.1667110240.git.isaku.yamahata@intel.com>
In-Reply-To:
References:

From: Isaku Yamahata <isaku.yamahata@intel.com>

TDX support will need the "suppress #VE" bit (bit 63) set as the initial
value for an SPTE.  To reduce the size of the code change, introduce a
new macro, SHADOW_NONPRESENT_VALUE, for the initial value of a shadow
page table entry (SPTE), and replace the hard-coded value 0 with it.
Initialize shadow page tables with this value.

The plan is to unconditionally set the "suppress #VE" bit for both AMD
and Intel, because: 1) AMD hardware doesn't use this bit, and 2) for
conventional VMX guests, KVM never enables "EPT-violation #VE" in the
VMCS controls, so the "suppress #VE" bit is ignored by hardware.

Signed-off-by: Sean Christopherson
Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
---
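Not part of the commit message, just context for review: a rough sketch of
where SHADOW_NONPRESENT_VALUE is expected to end up once TDX support lands.
This patch keeps the macro at 0ULL, so there is no functional change yet; a
later patch in the series can then flip bit 63 on 64-bit builds along the
lines below.  SUPPRESS_VE_SHIFT is an illustrative name, not something this
patch defines.

/*
 * Sketch of a possible follow-up definition (BIT_ULL() is from
 * <linux/bits.h>).  Bit 63 of an EPT entry is the "suppress #VE" bit:
 * when set, an EPT violation on that entry causes a VM exit instead of
 * injecting #VE into the guest.
 */
#define SUPPRESS_VE_SHIFT	63

#ifdef CONFIG_X86_64
/*
 * Setting the bit unconditionally is safe: AMD hardware doesn't use it,
 * and VMX ignores it unless "EPT-violation #VE" is enabled, which KVM
 * never does for conventional guests.
 */
#define SHADOW_NONPRESENT_VALUE	BIT_ULL(SUPPRESS_VE_SHIFT)
#else
/* 32-bit builds keep the historical all-zero non-present value. */
#define SHADOW_NONPRESENT_VALUE	0ULL
#endif

With a non-zero SHADOW_NONPRESENT_VALUE, zero-initializing shadow pages via
__GFP_ZERO would no longer produce valid non-present SPTEs on 64-bit, which
is why this patch already routes 64-bit allocations through
mmu_topup_shadow_page_cache()/kvm_init_shadow_page() and drops gfp_zero for
that cache.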
 arch/x86/kvm/mmu/mmu.c     | 50 +++++++++++++++++++++++++++++++++-----
 arch/x86/kvm/mmu/spte.h    |  2 ++
 arch/x86/kvm/mmu/tdp_mmu.c | 15 ++++++------
 3 files changed, 54 insertions(+), 13 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 10017a9f26ee..e7e11f51f8b4 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -538,9 +538,9 @@ static u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep)

 	if (!is_shadow_present_pte(old_spte) ||
 	    !spte_has_volatile_bits(old_spte))
-		__update_clear_spte_fast(sptep, 0ull);
+		__update_clear_spte_fast(sptep, SHADOW_NONPRESENT_VALUE);
 	else
-		old_spte = __update_clear_spte_slow(sptep, 0ull);
+		old_spte = __update_clear_spte_slow(sptep, SHADOW_NONPRESENT_VALUE);

 	if (!is_shadow_present_pte(old_spte))
 		return old_spte;
@@ -574,7 +574,7 @@ static u64 mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep)
  */
 static void mmu_spte_clear_no_track(u64 *sptep)
 {
-	__update_clear_spte_fast(sptep, 0ull);
+	__update_clear_spte_fast(sptep, SHADOW_NONPRESENT_VALUE);
 }

 static u64 mmu_spte_get_lockless(u64 *sptep)
@@ -642,6 +642,39 @@ static void walk_shadow_page_lockless_end(struct kvm_vcpu *vcpu)
 	}
 }

+#ifdef CONFIG_X86_64
+static inline void kvm_init_shadow_page(void *page)
+{
+	memset64(page, SHADOW_NONPRESENT_VALUE, 4096 / 8);
+}
+
+static int mmu_topup_shadow_page_cache(struct kvm_vcpu *vcpu)
+{
+	struct kvm_mmu_memory_cache *mc = &vcpu->arch.mmu_shadow_page_cache;
+	int start, end, i, r;
+
+	start = kvm_mmu_memory_cache_nr_free_objects(mc);
+	r = kvm_mmu_topup_memory_cache(mc, PT64_ROOT_MAX_LEVEL);
+
+	/*
+	 * Note, topup may have allocated objects even if it failed to allocate
+	 * the minimum number of objects required to make forward progress _at
+	 * this time_.  Initialize newly allocated objects even on failure, as
+	 * userspace can free memory and rerun the vCPU in response to -ENOMEM.
+	 */
+	end = kvm_mmu_memory_cache_nr_free_objects(mc);
+	for (i = start; i < end; i++)
+		kvm_init_shadow_page(mc->objects[i]);
+	return r;
+}
+#else
+static int mmu_topup_shadow_page_cache(struct kvm_vcpu *vcpu)
+{
+	return kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
+					  PT64_ROOT_MAX_LEVEL);
+}
+#endif /* CONFIG_X86_64 */
+
 static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
 {
 	int r;
@@ -651,8 +684,7 @@ static int mmu_topup_memory_caches(struct kvm_vcpu *vcpu, bool maybe_indirect)
 				       1 + PT64_ROOT_MAX_LEVEL + PTE_PREFETCH_NUM);
 	if (r)
 		return r;
-	r = kvm_mmu_topup_memory_cache(&vcpu->arch.mmu_shadow_page_cache,
-				       PT64_ROOT_MAX_LEVEL);
+	r = mmu_topup_shadow_page_cache(vcpu);
 	if (r)
 		return r;
 	if (maybe_indirect) {
@@ -5870,7 +5902,13 @@ int kvm_mmu_create(struct kvm_vcpu *vcpu)
 	vcpu->arch.mmu_page_header_cache.kmem_cache = mmu_page_header_cache;
 	vcpu->arch.mmu_page_header_cache.gfp_zero = __GFP_ZERO;

-	vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;
+	/*
+	 * When X86_64, initial SEPT entries are initialized with
+	 * SHADOW_NONPRESENT_VALUE.  Otherwise zeroed.  See
+	 * mmu_topup_shadow_page_cache().
+	 */
+	if (!IS_ENABLED(CONFIG_X86_64))
+		vcpu->arch.mmu_shadow_page_cache.gfp_zero = __GFP_ZERO;

 	vcpu->arch.mmu = &vcpu->arch.root_mmu;
 	vcpu->arch.walk_mmu = &vcpu->arch.root_mmu;
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index 7670c13ce251..42ecaa75da15 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -148,6 +148,8 @@ static_assert(MMIO_SPTE_GEN_LOW_BITS == 8 && MMIO_SPTE_GEN_HIGH_BITS == 11);

 #define MMIO_SPTE_GEN_MASK	GENMASK_ULL(MMIO_SPTE_GEN_LOW_BITS + MMIO_SPTE_GEN_HIGH_BITS - 1, 0)

+#define SHADOW_NONPRESENT_VALUE	0ULL
+
 extern u64 __read_mostly shadow_host_writable_mask;
 extern u64 __read_mostly shadow_mmu_writable_mask;
 extern u64 __read_mostly shadow_nx_mask;
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index eab765442d0b..38bc4c2f0f1f 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -694,7 +694,7 @@ static inline int tdp_mmu_zap_spte_atomic(struct kvm *kvm,
 	 * here since the SPTE is going from non-present to non-present.  Use
 	 * the raw write helper to avoid an unnecessary check on volatile bits.
 	 */
-	__kvm_tdp_mmu_write_spte(iter->sptep, 0);
+	__kvm_tdp_mmu_write_spte(iter->sptep, SHADOW_NONPRESENT_VALUE);

 	return 0;
 }
@@ -871,8 +871,8 @@ static void __tdp_mmu_zap_root(struct kvm *kvm, struct kvm_mmu_page *root,
 			continue;

 		if (!shared)
-			tdp_mmu_set_spte(kvm, &iter, 0);
-		else if (tdp_mmu_set_spte_atomic(kvm, &iter, 0))
+			tdp_mmu_set_spte(kvm, &iter, SHADOW_NONPRESENT_VALUE);
+		else if (tdp_mmu_set_spte_atomic(kvm, &iter, SHADOW_NONPRESENT_VALUE))
 			goto retry;
 	}
 }
@@ -928,8 +928,9 @@ bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
 	if (WARN_ON_ONCE(!is_shadow_present_pte(old_spte)))
 		return false;

-	__tdp_mmu_set_spte(kvm, kvm_mmu_page_as_id(sp), sp->ptep, old_spte, 0,
-			   sp->gfn, sp->role.level + 1, true, true);
+	__tdp_mmu_set_spte(kvm, kvm_mmu_page_as_id(sp), sp->ptep, old_spte,
+			   SHADOW_NONPRESENT_VALUE, sp->gfn, sp->role.level + 1,
+			   true, true);

 	return true;
 }
@@ -963,7 +964,7 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root,
 		    !is_last_spte(iter.old_spte, iter.level))
 			continue;

-		tdp_mmu_set_spte(kvm, &iter, 0);
+		tdp_mmu_set_spte(kvm, &iter, SHADOW_NONPRESENT_VALUE);
 		flush = true;
 	}

@@ -1328,7 +1329,7 @@ static bool set_spte_gfn(struct kvm *kvm, struct tdp_iter *iter,
 	 * invariant that the PFN of a present leaf SPTE can never change.
 	 * See __handle_changed_spte().
 	 */
-	tdp_mmu_set_spte(kvm, iter, 0);
+	tdp_mmu_set_spte(kvm, iter, SHADOW_NONPRESENT_VALUE);

 	if (!pte_write(range->pte)) {
 		new_spte = kvm_mmu_changed_pte_notifier_make_spte(iter->old_spte,
-- 
2.25.1