Reply-To: Sean Christopherson <seanjc@google.com>
Date: Sat, 23 Jul 2022 01:23:24 +0000
In-Reply-To: <20220723012325.1715714-1-seanjc@google.com>
Message-Id: <20220723012325.1715714-6-seanjc@google.com>
Mime-Version: 1.0
References: <20220723012325.1715714-1-seanjc@google.com>
X-Mailer: git-send-email 2.37.1.359.gd136c6c3e2-goog
Subject: [PATCH v2 5/6] KVM: x86/mmu: Add helper to convert SPTE value to its shadow page
From: Sean Christopherson <seanjc@google.com>
To: Sean Christopherson <seanjc@google.com>, Paolo Bonzini
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Yosry Ahmed, Mingwei Zhang, Ben Gardon
Content-Type: text/plain; charset="UTF-8"

Add a helper to convert a SPTE to its shadow page to deduplicate a variety
of flows and hopefully avoid future bugs, e.g. if KVM attempts to get the
shadow page for a SPTE without dropping high bits.

Opportunistically add a comment in mmu_free_root_page() documenting why it
treats the root HPA as a SPTE.

No functional change intended.

Signed-off-by: Sean Christopherson <seanjc@google.com>
---
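Note (illustrative only, not part of the commit): the helper's job is simply
to mask off the non-address bits of a SPTE before looking up its shadow page.
Below is a minimal, self-contained userspace sketch of that masking idea;
EXAMPLE_BASE_ADDR_MASK and example_spte_to_pa() are made-up stand-ins, not
the kernel's actual SPTE_BASE_ADDR_MASK definition or KVM code.

#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative stand-in for SPTE_BASE_ADDR_MASK: assume bits 51:12 hold the
 * physical address, everything else (permission bits, high software bits) is
 * metadata that must be dropped before the PA can be used.
 */
#define EXAMPLE_BASE_ADDR_MASK	(((1ULL << 52) - 1) & ~0xfffULL)

/* Mirrors the masking that spte_to_sp() applies before to_shadow_page(). */
static uint64_t example_spte_to_pa(uint64_t spte)
{
	return spte & EXAMPLE_BASE_ADDR_MASK;
}

int main(void)
{
	/* A fake SPTE: PA 0x123456000 plus low R/W/X bits and a high software bit. */
	uint64_t spte = 0x123456000ULL | 0x7ULL | (1ULL << 62);

	printf("raw SPTE : 0x%016llx\n", (unsigned long long)spte);
	printf("masked PA: 0x%016llx\n", (unsigned long long)example_spte_to_pa(spte));
	return 0;
}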
 arch/x86/kvm/mmu/mmu.c          | 17 ++++++++++-------
 arch/x86/kvm/mmu/mmu_internal.h | 12 ------------
 arch/x86/kvm/mmu/spte.h         | 17 +++++++++++++++++
 arch/x86/kvm/mmu/tdp_mmu.h      |  2 ++
 4 files changed, 29 insertions(+), 19 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index e9252e7cd5a2..ed3cfb31853b 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1798,7 +1798,7 @@ static int __mmu_unsync_walk(struct kvm_mmu_page *sp,
 			continue;
 		}
 
-		child = to_shadow_page(ent & SPTE_BASE_ADDR_MASK);
+		child = spte_to_sp(ent);
 
 		if (child->unsync_children) {
 			if (mmu_pages_add(pvec, child, i))
@@ -2357,7 +2357,7 @@ static void validate_direct_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 		 * so we should update the spte at this point to get
 		 * a new sp with the correct access.
 		 */
-		child = to_shadow_page(*sptep & SPTE_BASE_ADDR_MASK);
+		child = spte_to_sp(*sptep);
 		if (child->role.access == direct_access)
 			return;
 
@@ -2378,7 +2378,7 @@ static int mmu_page_zap_pte(struct kvm *kvm, struct kvm_mmu_page *sp,
 		if (is_last_spte(pte, sp->role.level)) {
 			drop_spte(kvm, spte);
 		} else {
-			child = to_shadow_page(pte & SPTE_BASE_ADDR_MASK);
+			child = spte_to_sp(pte);
 			drop_parent_pte(child, spte);
 
 			/*
@@ -2817,7 +2817,7 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
 			struct kvm_mmu_page *child;
 			u64 pte = *sptep;
 
-			child = to_shadow_page(pte & SPTE_BASE_ADDR_MASK);
+			child = spte_to_sp(pte);
 			drop_parent_pte(child, sptep);
 			flush = true;
 		} else if (pfn != spte_to_pfn(*sptep)) {
@@ -3429,7 +3429,11 @@ static void mmu_free_root_page(struct kvm *kvm, hpa_t *root_hpa,
 	if (!VALID_PAGE(*root_hpa))
 		return;
 
-	sp = to_shadow_page(*root_hpa & SPTE_BASE_ADDR_MASK);
+	/*
+	 * The "root" may be a special root, e.g. a PAE entry, treat it as a
+	 * SPTE to ensure any non-PA bits are dropped.
+	 */
+	sp = spte_to_sp(*root_hpa);
 	if (WARN_ON(!sp))
 		return;
 
@@ -3914,8 +3918,7 @@ void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu)
 		hpa_t root = vcpu->arch.mmu->pae_root[i];
 
 		if (IS_VALID_PAE_ROOT(root)) {
-			root &= SPTE_BASE_ADDR_MASK;
-			sp = to_shadow_page(root);
+			sp = spte_to_sp(root);
 			mmu_sync_children(vcpu, sp, true);
 		}
 	}
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 2a887d08b722..04457b5ec968 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -133,18 +133,6 @@ struct kvm_mmu_page {
 
 extern struct kmem_cache *mmu_page_header_cache;
 
-static inline struct kvm_mmu_page *to_shadow_page(hpa_t shadow_page)
-{
-	struct page *page = pfn_to_page(shadow_page >> PAGE_SHIFT);
-
-	return (struct kvm_mmu_page *)page_private(page);
-}
-
-static inline struct kvm_mmu_page *sptep_to_sp(u64 *sptep)
-{
-	return to_shadow_page(__pa(sptep));
-}
-
 static inline int kvm_mmu_role_as_id(union kvm_mmu_page_role role)
 {
 	return role.smm ? 1 : 0;
diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h
index cabe3fbb4f39..a240b7eca54f 100644
--- a/arch/x86/kvm/mmu/spte.h
+++ b/arch/x86/kvm/mmu/spte.h
@@ -207,6 +207,23 @@ static inline int spte_index(u64 *sptep)
  */
 extern u64 __read_mostly shadow_nonpresent_or_rsvd_lower_gfn_mask;
 
+static inline struct kvm_mmu_page *to_shadow_page(hpa_t shadow_page)
+{
+	struct page *page = pfn_to_page((shadow_page) >> PAGE_SHIFT);
+
+	return (struct kvm_mmu_page *)page_private(page);
+}
+
+static inline struct kvm_mmu_page *spte_to_sp(u64 spte)
+{
+	return to_shadow_page(spte & SPTE_BASE_ADDR_MASK);
+}
+
+static inline struct kvm_mmu_page *sptep_to_sp(u64 *sptep)
+{
+	return to_shadow_page(__pa(sptep));
+}
+
 static inline bool is_mmio_spte(u64 spte)
 {
 	return (spte & shadow_mmio_mask) == shadow_mmio_value &&
diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h
index c163f7cc23ca..d3714200b932 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.h
+++ b/arch/x86/kvm/mmu/tdp_mmu.h
@@ -5,6 +5,8 @@
 
 #include <linux/kvm_host.h>
 
+#include "spte.h"
+
 hpa_t kvm_tdp_mmu_get_vcpu_root_hpa(struct kvm_vcpu *vcpu);
 
 __must_check static inline bool kvm_tdp_mmu_get_root(struct kvm_mmu_page *root)
-- 
2.37.1.359.gd136c6c3e2-goog