Message-ID: <4DF07601.9060705@redhat.com>
Date: Thu, 09 Jun 2011 10:28:01 +0300
From: Avi Kivity
To: Xiao Guangrong
CC: Marcelo Tosatti, LKML, KVM
Subject: Re: [PATCH 14/15] KVM: MMU: mmio page fault support
References: <4DEE205E.8000601@cn.fujitsu.com> <4DEE2281.1000008@cn.fujitsu.com>
In-Reply-To: <4DEE2281.1000008@cn.fujitsu.com>

On 06/07/2011 04:07 PM, Xiao Guangrong wrote:
> The idea is from Avi:
>
> | We could cache the result of a miss in an spte by using a reserved bit, and
> | checking the page fault error code (or seeing if we get an ept violation or
> | ept misconfiguration), so if we get repeated mmio on a page, we don't need to
> | search the slot list/tree.
> | (https://lkml.org/lkml/2011/2/22/221)
>
> When the page fault is caused by mmio, we cache the info in the shadow page
> table and also set the reserved bits in the shadow page table, so if the mmio
> is caused again, we can quickly identify it and emulate it directly.
>
> Searching an mmio gfn in the memslots is heavy since we need to walk all
> memslots; this feature reduces that cost, and also avoids walking the guest
> page table for soft mmu.
>
> This feature can be disabled/enabled at runtime. If
> shadow_notrap_nonpresent_pte is enabled, PFEC.RSVD is always set and we need
> to walk the shadow page table for every page fault, so this feature is
> disabled when shadow_notrap_nonpresent is enabled.

Maybe it's time to kill off bypass_guest_pf=1. It's not as effective as it
used to be, since unsync pages always use shadow_trap_nonpresent_pte, and
since we convert between the two nonpresent_ptes during sync and unsync.

> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 4f475ab..227cf10 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -91,6 +91,9 @@ module_param(dbg, bool, 0644);
>  static int oos_shadow = 1;
>  module_param(oos_shadow, bool, 0644);
>
> +static int __read_mostly mmio_pf = 1;
> +module_param(mmio_pf, bool, 0644);

Why make it a module parameter?

> +static void mark_mmio_spte(u64 *sptep, u64 gfn, unsigned access)
> +{
> +	access &= ACC_WRITE_MASK | ACC_USER_MASK;
> +
> +	__set_spte(sptep, shadow_mmio_mask | access | gfn << PAGE_SHIFT);
> +}

This can only work for shadow. Is it worth the complexity? Also, shadow
walking is not significantly faster than guest page table walking. And if we
miss, we have to walk the guest page tables in any case.

> +
> +static bool quickly_check_mmio_pf(struct kvm_vcpu *vcpu, u64 addr, bool direct)
> +{
> +	if (direct && vcpu_match_mmio_gpa(vcpu, addr))
> +		return true;
> +
> +	if (vcpu_match_mmio_gva(vcpu, addr))
> +		return true;
> +
> +	return false;
> +}

There is also the case of nesting - it's not direct and it's not a gva.

-- 
error compiling committee.c: too many arguments to function