From: Sean Christopherson
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH 4/6] KVM: x86/mmu: Make kvm_mmu_page definition and accessor internal-only
Date: Mon, 22 Jun 2020 13:20:32 -0700
Message-Id:
<20200622202034.15093-5-sean.j.christopherson@intel.com>
X-Mailer: git-send-email 2.26.0
In-Reply-To: <20200622202034.15093-1-sean.j.christopherson@intel.com>
References: <20200622202034.15093-1-sean.j.christopherson@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Make 'struct kvm_mmu_page' MMU-only; nothing outside of the MMU should
be poking into the gory details of shadow pages.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
 arch/x86/include/asm/kvm_host.h | 46 ++-----------------------------
 arch/x86/kvm/mmu/mmu_internal.h | 48 +++++++++++++++++++++++++++++++++
 2 files changed, 50 insertions(+), 44 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index f8998e97457f..86933c467a1e 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -322,43 +322,6 @@ struct kvm_rmap_head {
 	unsigned long val;
 };
 
-struct kvm_mmu_page {
-	struct list_head link;
-	struct hlist_node hash_link;
-	struct list_head lpage_disallowed_link;
-
-	bool unsync;
-	u8 mmu_valid_gen;
-	bool mmio_cached;
-	bool lpage_disallowed; /* Can't be replaced by an equiv large page */
-
-	/*
-	 * The following two entries are used to key the shadow page in the
-	 * hash table.
-	 */
-	union kvm_mmu_page_role role;
-	gfn_t gfn;
-
-	u64 *spt;
-	/* hold the gfn of each spte inside spt */
-	gfn_t *gfns;
-	int root_count;          /* Currently serving as active root */
-	unsigned int unsync_children;
-	struct kvm_rmap_head parent_ptes; /* rmap pointers to parent sptes */
-	DECLARE_BITMAP(unsync_child_bitmap, 512);
-
-#ifdef CONFIG_X86_32
-	/*
-	 * Used out of the mmu-lock to avoid reading spte values while an
-	 * update is in progress; see the comments in __get_spte_lockless().
-	 */
-	int clear_spte_count;
-#endif
-
-	/* Number of writes since the last time traversal visited this page.  */
-	atomic_t write_flooding_count;
-};
-
 struct kvm_pio_request {
 	unsigned long linear_rip;
 	unsigned long count;
@@ -384,6 +347,8 @@ struct kvm_mmu_root_info {
 
 #define KVM_MMU_NUM_PREV_ROOTS 3
 
+struct kvm_mmu_page;
+
 /*
  * x86 supports 4 paging modes (5-level 64-bit, 4-level 64-bit, 3-level 32-bit,
  * and 2-level 32-bit).  The kvm_mmu structure abstracts the details of the
@@ -1557,13 +1522,6 @@ static inline gpa_t translate_gpa(struct kvm_vcpu *vcpu, gpa_t gpa, u32 access,
 	return gpa;
 }
 
-static inline struct kvm_mmu_page *page_header(hpa_t shadow_page)
-{
-	struct page *page = pfn_to_page(shadow_page >> PAGE_SHIFT);
-
-	return (struct kvm_mmu_page *)page_private(page);
-}
-
 static inline u16 kvm_read_ldt(void)
 {
 	u16 ldt;
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index d7938c37c7de..8afa60f0a1a5 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -2,6 +2,54 @@
 #ifndef __KVM_X86_MMU_INTERNAL_H
 #define __KVM_X86_MMU_INTERNAL_H
 
+#include <linux/types.h>
+
+#include <asm/kvm_host.h>
+
+struct kvm_mmu_page {
+	struct list_head link;
+	struct hlist_node hash_link;
+	struct list_head lpage_disallowed_link;
+
+	bool unsync;
+	u8 mmu_valid_gen;
+	bool mmio_cached;
+	bool lpage_disallowed; /* Can't be replaced by an equiv large page */
+
+	/*
+	 * The following two entries are used to key the shadow page in the
+	 * hash table.
+	 */
+	union kvm_mmu_page_role role;
+	gfn_t gfn;
+
+	u64 *spt;
+	/* hold the gfn of each spte inside spt */
+	gfn_t *gfns;
+	int root_count;          /* Currently serving as active root */
+	unsigned int unsync_children;
+	struct kvm_rmap_head parent_ptes; /* rmap pointers to parent sptes */
+	DECLARE_BITMAP(unsync_child_bitmap, 512);
+
+#ifdef CONFIG_X86_32
+	/*
+	 * Used out of the mmu-lock to avoid reading spte values while an
+	 * update is in progress; see the comments in __get_spte_lockless().
+	 */
+	int clear_spte_count;
+#endif
+
+	/* Number of writes since the last time traversal visited this page.  */
+	atomic_t write_flooding_count;
+};
+
+static inline struct kvm_mmu_page *page_header(hpa_t shadow_page)
+{
+	struct page *page = pfn_to_page(shadow_page >> PAGE_SHIFT);
+
+	return (struct kvm_mmu_page *)page_private(page);
+}
+
 void kvm_mmu_gfn_disallow_lpage(struct kvm_memory_slot *slot, gfn_t gfn);
 void kvm_mmu_gfn_allow_lpage(struct kvm_memory_slot *slot, gfn_t gfn);
 bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
-- 
2.26.0