Date: Fri, 20 Nov 2015 17:47:17 +0900
From: Takuya Yoshikawa
To: pbonzini@redhat.com
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, mtosatti@redhat.com,
    guangrong.xiao@linux.intel.com
Subject: [PATCH 08/10] KVM: x86: MMU: Use for_each_rmap_spte macro instead of pte_list_walk()
Message-Id: <20151120174717.7b0c31caa267aa83027e8d8f@lab.ntt.co.jp>
In-Reply-To: <20151120174005.9880b89f54eee2cec2422da5@lab.ntt.co.jp>
References: <20151120174005.9880b89f54eee2cec2422da5@lab.ntt.co.jp>

kvm_mmu_mark_parents_unsync() is the only user of pte_list_walk(),
which does nearly the same thing as the for_each_rmap_spte macro.  The
only difference is that an is_shadow_present_pte() check cannot be
placed there, because kvm_mmu_mark_parents_unsync() can be called with
a new parent pointer whose entry is not set yet.

By calling mark_unsync() separately for the parent and adding the
parent pointer to the parent_ptes chain later in kvm_mmu_get_page(),
the macro works with no problem.

Signed-off-by: Takuya Yoshikawa
---
 arch/x86/kvm/mmu.c | 36 +++++++++++++-----------------------
 1 file changed, 13 insertions(+), 23 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 7f46e3e..4e29d9a 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1007,26 +1007,6 @@ static void pte_list_remove(u64 *spte, struct kvm_rmap_head *rmap_head)
 	}
 }
 
-typedef void (*pte_list_walk_fn) (u64 *spte);
-static void pte_list_walk(struct kvm_rmap_head *rmap_head, pte_list_walk_fn fn)
-{
-	struct pte_list_desc *desc;
-	int i;
-
-	if (!rmap_head->val)
-		return;
-
-	if (!(rmap_head->val & 1))
-		return fn((u64 *)rmap_head->val);
-
-	desc = (struct pte_list_desc *)(rmap_head->val & ~1ul);
-	while (desc) {
-		for (i = 0; i < PTE_LIST_EXT && desc->sptes[i]; ++i)
-			fn(desc->sptes[i]);
-		desc = desc->more;
-	}
-}
-
 static struct kvm_rmap_head *__gfn_to_rmap(gfn_t gfn, int level,
 					   struct kvm_memory_slot *slot)
 {
@@ -1749,7 +1729,12 @@ static struct kvm_mmu_page *kvm_mmu_alloc_page(struct kvm_vcpu *vcpu, int direct
 static void mark_unsync(u64 *spte);
 static void kvm_mmu_mark_parents_unsync(struct kvm_mmu_page *sp)
 {
-	pte_list_walk(&sp->parent_ptes, mark_unsync);
+	u64 *sptep;
+	struct rmap_iterator iter;
+
+	for_each_rmap_spte(&sp->parent_ptes, &iter, sptep) {
+		mark_unsync(sptep);
+	}
 }
 
 static void mark_unsync(u64 *spte)
@@ -2119,12 +2104,17 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 		if (sp->unsync && kvm_sync_page_transient(vcpu, sp))
 			break;
 
-		mmu_page_add_parent_pte(vcpu, sp, parent_pte);
 		if (sp->unsync_children) {
 			kvm_make_request(KVM_REQ_MMU_SYNC, vcpu);
 			kvm_mmu_mark_parents_unsync(sp);
-		} else if (sp->unsync)
+			if (parent_pte)
+				mark_unsync(parent_pte);
+		} else if (sp->unsync) {
 			kvm_mmu_mark_parents_unsync(sp);
+			if (parent_pte)
+				mark_unsync(parent_pte);
+		}
+		mmu_page_add_parent_pte(vcpu, sp, parent_pte);
 
 		__clear_sp_write_flooding_count(sp);
 		trace_kvm_mmu_get_page(sp, false);
-- 
2.1.0
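
For reference, the stand-alone userspace sketch below models the
difference described in the changelog: a pte_list_walk()-style walker
hands every chained pointer to its callback, while a
for_each_rmap_spte()-style iterator is only safe for entries that are
already present (modeled here by skipping unset entries), which is why
a brand-new parent pointer has to be handed to mark_unsync() explicitly
before its entry is written.  All toy_* names, mark(), and TOY_LIST_EXT
are invented for illustration only and are not kernel code.

/*
 * Minimal userspace model of the walk/iterate difference; the toy_*
 * names are invented for illustration and do not exist in the kernel.
 */
#include <stdio.h>

#define TOY_LIST_EXT 3

struct toy_entry {
	unsigned long pte;		/* 0 means "entry not set yet" */
};

struct toy_rmap_head {
	struct toy_entry entries[TOY_LIST_EXT];
	int count;
};

/* pte_list_walk() style: the callback sees every chained pointer. */
static void toy_walk_all(struct toy_rmap_head *head,
			 void (*fn)(struct toy_entry *))
{
	int i;

	for (i = 0; i < head->count; i++)
		fn(&head->entries[i]);
}

/* for_each_rmap_spte() style: entries that are not set are skipped. */
#define toy_for_each_present(head, i, e)				\
	for ((i) = 0; (i) < (head)->count; (i)++)			\
		if (((e) = &(head)->entries[i])->pte != 0)

static void mark(struct toy_entry *e)
{
	printf("marking entry, pte=%lx\n", e->pte);
}

int main(void)
{
	struct toy_rmap_head head = {
		.entries = { { 0xabc }, { 0 } },  /* second entry not set yet */
		.count = 2,
	};
	struct toy_entry *e;
	int i;

	toy_walk_all(&head, mark);		/* visits both entries */

	toy_for_each_present(&head, i, e)	/* skips the unset entry ... */
		mark(e);
	mark(&head.entries[1]);			/* ... so it must be marked
						   explicitly, as the patch does
						   with mark_unsync(parent_pte) */
	return 0;
}

Built with any C99 compiler, the first walk prints both entries while
the second prints only the present one, mirroring why the patch adds
the explicit mark_unsync(parent_pte) calls before
mmu_page_add_parent_pte().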