From: Xiao Guangrong
Date: Wed, 28 Apr 2010 11:55:36 +0800
To: Avi Kivity
CC: Marcelo Tosatti, KVM list, LKML
Subject: [PATCH v3 6/10] KVM MMU: don't write-protect if there is a new mapping to an unsync page
Message-ID: <4BD7B1B8.8070202@cn.fujitsu.com>
In-Reply-To: <4BD7AE34.5000408@cn.fujitsu.com>

Two cases can occur in kvm_mmu_get_page():

- The wanted sp is already in the hash cache. If that sp is unsync, we
  only need to update it so that the new mapping is valid; we neither
  mark it sync nor write-protect sp->gfn, since reusing it does not
  break the unsync rule (one unsync shadow page per gfn).

- The wanted sp does not exist yet, so we must create a new sp for the
  gfn. The gfn may already have another, unsync shadow page, and to
  keep the unsync rule we must sync that page first (mark it sync and
  write-protect the gfn).

Once multiple unsync shadow pages are enabled, we sync the existing
shadow pages only when the new sp is not allowed to become unsync
(the relaxed form of the unsync rule is then: every pte page is
allowed to become unsync).

Signed-off-by: Xiao Guangrong
---
 arch/x86/kvm/mmu.c |   14 +++++++++++---
 1 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index ec283c3..fb0c33c 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1333,7 +1333,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 	unsigned index;
 	unsigned quadrant;
 	struct hlist_head *bucket;
-	struct kvm_mmu_page *sp;
+	struct kvm_mmu_page *sp, *unsync_sp = NULL;
 	struct hlist_node *node, *tmp;
 
 	role = vcpu->arch.mmu.base_role;
@@ -1352,12 +1352,17 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 	hlist_for_each_entry_safe(sp, node, tmp, bucket, hash_link)
 		if (sp->gfn == gfn) {
 			if (sp->unsync)
-				if (kvm_sync_page(vcpu, sp))
-					continue;
+				unsync_sp = sp;
 
 			if (sp->role.word != role.word)
 				continue;
 
+			if (!direct && unsync_sp &&
+			      kvm_sync_page_transient(vcpu, unsync_sp)) {
+				unsync_sp = NULL;
+				break;
+			}
+
 			mmu_page_add_parent_pte(vcpu, sp, parent_pte);
 			if (sp->unsync_children) {
 				set_bit(KVM_REQ_MMU_SYNC, &vcpu->requests);
@@ -1366,6 +1371,9 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 		trace_kvm_mmu_get_page(sp, false);
 		return sp;
 	}
+	if (!direct && unsync_sp)
+		kvm_sync_page(vcpu, unsync_sp);
+
 	++vcpu->kvm->stat.mmu_cache_miss;
 	sp = kvm_mmu_alloc_page(vcpu, parent_pte);
 	if (!sp)
--
1.6.1.2
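
P.S. For readers outside mmu.c, here is a tiny standalone model of the
bookkeeping the changelog describes -- a hypothetical sketch, not
kernel code: all toy_* names are invented, the "direct" check is
dropped, and zapping is reduced to a stub. It only shows the two cases
above: on a cache hit the unsync page is reused after a transient sync,
so the gfn stays writable; on a cache miss the old unsync page is
fully synced (marked sync and write-protected) first.

#include <stdbool.h>
#include <stdio.h>

struct toy_sp {
	unsigned long gfn;
	unsigned long role;
	bool unsync;
	bool write_protected;
};

/* Transient sync: validate the mapping but leave the page unsync and
 * its gfn writable.  Returns true if the page had to be zapped. */
static bool toy_sync_page_transient(struct toy_sp *sp)
{
	(void)sp;
	return false;	/* in this toy, validation always succeeds */
}

/* Full sync: mark the page sync and write-protect its gfn. */
static void toy_sync_page(struct toy_sp *sp)
{
	sp->unsync = false;
	sp->write_protected = true;
}

static struct toy_sp *toy_get_page(struct toy_sp *cache, int n,
				   unsigned long gfn, unsigned long role)
{
	struct toy_sp *unsync_sp = NULL;
	int i;

	for (i = 0; i < n; i++) {
		struct toy_sp *sp = &cache[i];

		if (sp->gfn != gfn)
			continue;
		if (sp->unsync)
			unsync_sp = sp;		/* remember, don't sync yet */
		if (sp->role != role)
			continue;

		/* Cache hit: validate without write-protecting. */
		if (unsync_sp && toy_sync_page_transient(unsync_sp)) {
			unsync_sp = NULL;	/* zapped, nothing to sync */
			break;
		}
		return sp;			/* reuse; gfn stays writable */
	}

	/* Cache miss: sync the old unsync page before a new one is made. */
	if (unsync_sp)
		toy_sync_page(unsync_sp);
	return NULL;	/* the caller would allocate a fresh sp here */
}

int main(void)
{
	struct toy_sp cache[] = {
		{ .gfn = 42, .role = 1, .unsync = true },
	};

	/* Same gfn, different role: miss path, the old page gets synced. */
	toy_get_page(cache, 1, 42, 2);
	printf("unsync=%d write_protected=%d\n",
	       cache[0].unsync, cache[0].write_protected);
	return 0;
}

Compiled and run, this prints "unsync=0 write_protected=1": only the
miss path write-protects the gfn, which is exactly the point of the
patch.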