2010-06-15 02:49:48

by Xiao Guangrong

Subject: [PATCH 0/6] KVM: MMU: support pte prefetch when intercepted guest #PF

Hi Avi, Marcelo,

This patchset supports pte prefetch when a guest #PF is intercepted;
the aim is to reduce the number of guest #PFs that must be intercepted by the VMM.

If we hit any failure in the prefetch path, we exit it and do not
try the other ptes, to avoid turning it into a heavy path.

In my performance tests, with EPT enabled unixbench shows a ~1.2%
improvement; with EPT disabled, unixbench shows a ~3.6% improvement.
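
To make the bail-out policy concrete, here is a minimal self-contained sketch of
the intended control flow; PREFETCH_NUM, try_prefetch_one() and prefetch_window()
are names invented for this sketch, not the patch's actual code:

#include <stdbool.h>
#include <stdio.h>

#define PREFETCH_NUM 16   /* assumed prefetch window size */

/*
 * Stand-in for "try to install one speculative mapping": the real path
 * would read the guest pte, check that it is present and accessed, and
 * map it; here we simply simulate a failure at entry 5.
 */
static bool try_prefetch_one(int i)
{
	return i != 5;
}

/*
 * Walk the window and stop at the first failure instead of retrying the
 * remaining ptes, so the #PF handler never becomes a heavy path.
 */
static int prefetch_window(void)
{
	int i, done = 0;

	for (i = 0; i < PREFETCH_NUM; i++) {
		if (!try_prefetch_one(i))
			break;
		done++;
	}
	return done;
}

int main(void)
{
	printf("prefetched %d of %d entries\n", prefetch_window(), PREFETCH_NUM);
	return 0;
}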





2010-06-15 11:45:14

by Avi Kivity

Subject: Re: [PATCH 0/6] KVM: MMU: support pte prefetch when intercepted guest #PF

On 06/15/2010 05:46 AM, Xiao Guangrong wrote:
> Hi Avi, Marcelo,
>
> This patchset supports pte prefetch when a guest #PF is intercepted;
> the aim is to reduce the number of guest #PFs that must be intercepted by the VMM.
>
> If we hit any failure in the prefetch path, we exit it and do not
> try the other ptes, to avoid turning it into a heavy path.
>
> In my performance tests, with EPT enabled unixbench shows a
> ~1.2% improvement;

Once the guest has faulted in all memory, we shouldn't see much
improvement, yes?

> with EPT disabled,
> unixbench shows a ~3.6% improvement.
>

I'm a little worried about this. In some workloads, prefetch can often
fail due to gpte.a=0 so we spend effort doing nothing. There is also
the issue of marking pages as accessed or even dirty when in fact the
guest did not access them.

We should map those pages with pte.a=pte.d=0 so we don't confuse host
memory management. On EPT (which lacks a/d bits) we can't enable it
(but we can on NPT).
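
As a rough, self-contained sketch of that policy (ACCESSED_MASK, DIRTY_MASK and
make_spte_sketch() are illustrative stand-ins, not KVM's real spte code; with
NPT/shadow paging there are real bits to leave clear, while EPT of this
generation has none at all):

#include <stdint.h>
#include <stdio.h>

/* assumed bit positions, as in ordinary x86 page tables */
#define ACCESSED_MASK (1ull << 5)
#define DIRTY_MASK    (1ull << 6)

/*
 * Build an spte: a speculatively prefetched mapping is installed with
 * accessed and dirty clear, so host memory management does not think
 * the guest touched the page.  A real guest access sets accessed (and
 * dirty for a write) as usual.
 */
static uint64_t make_spte_sketch(uint64_t base, int speculative, int write)
{
	uint64_t spte = base;

	if (!speculative) {
		spte |= ACCESSED_MASK;
		if (write)
			spte |= DIRTY_MASK;
	}
	return spte;
}

int main(void)
{
	printf("speculative spte: %#llx\n",
	       (unsigned long long)make_spte_sketch(0x1000, 1, 0));
	printf("real write spte:  %#llx\n",
	       (unsigned long long)make_spte_sketch(0x1000, 0, 1));
	return 0;
}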

--
error compiling committee.c: too many arguments to function

2010-06-17 07:52:37

by Xiao Guangrong

Subject: Re: [PATCH 0/6] KVM: MMU: support pte prefetch when intercepted guest #PF



Avi Kivity wrote:
> On 06/15/2010 05:46 AM, Xiao Guangrong wrote:
>> Hi Avi, Marcelo,
>>
>> This patchset supports pte prefetch when a guest #PF is intercepted;
>> the aim is to reduce the number of guest #PFs that must be intercepted by the VMM.
>>
>> If we hit any failure in the prefetch path, we exit it and do not
>> try the other ptes, to avoid turning it into a heavy path.
>>
>> In my performance tests, with EPT enabled unixbench shows a
>> ~1.2% improvement;
>
> Once the guest has faulted in all memory, we shouldn't see much
> improvement, yes?

I think you are right; this path only prefetches valid mappings with pte.A=1.

>
>> with EPT disabled,
>> unixbench shows a ~3.6% improvement.
>>
>
> I'm a little worried about this. In some workloads, prefetch can often
> fail due to gpte.a=0 so we spend effort doing nothing.

Yes, prefetch does not always succeed, but the prefetch path is fast and does
not cost much time; in the worst case we only need to read 128 bytes of guest
ptes. When it does succeed, it saves a lot of overhead.
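
As a back-of-the-envelope illustration of that worst case (PREFETCH_NUM and the
single bulk read are assumptions mirroring the description above, not the actual
patch code):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PREFETCH_NUM 16   /* assumed prefetch window size */

int main(void)
{
	/* 64-bit guest ptes: the whole window fits in one small read */
	uint64_t gptes[PREFETCH_NUM];
	size_t bytes = sizeof(gptes);   /* 16 * 8 = 128 bytes */

	/*
	 * In KVM this would be one kvm_read_guest_atomic() covering the
	 * whole window rather than a separate read per pte.
	 */
	memset(gptes, 0, bytes);
	printf("worst-case guest pte read: %zu bytes\n", bytes);
	return 0;
}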

> There is also
> the issue of marking pages as accessed or even dirty when in fact the
> guest did not access them.

>
> We should map those pages with pte.a=pte.d=0 so we don't confuse host
> memory management. On EPT (which lacks a/d bits) we can't enable it
> (but we can on NPT).
>

You are right, this is the speculative path.

For the pte.A bit:
we call mmu_set_spte() with speculative = true, so pte.a is already set to 0 in
this path.

For the pte.D bit:
We should also set pte.d = 0 in the speculative path; the same problem exists in
the invlpg/pte write paths. I will fix it in the next version.

2010-06-17 08:00:48

by Avi Kivity

Subject: Re: [PATCH 0/6] KVM: MMU: support pte prefetch when intercepted guest #PF

On 06/17/2010 10:49 AM, Xiao Guangrong wrote:
>
> Avi Kivity wrote:
>
>> On 06/15/2010 05:46 AM, Xiao Guangrong wrote:
>>
>>> Hi Avi, Marcelo,
>>>
>>> This patchset supports pte prefetch when a guest #PF is intercepted;
>>> the aim is to reduce the number of guest #PFs that must be intercepted by the VMM.
>>>
>>> If we hit any failure in the prefetch path, we exit it and do not
>>> try the other ptes, to avoid turning it into a heavy path.
>>>
>>> In my performance tests, with EPT enabled unixbench shows a
>>> ~1.2% improvement;
>>>
>> Once the guest has faulted in all memory, we shouldn't see much
>> improvement, yes?
>>
> I think you are right; this path only prefetches valid mappings with pte.A=1.
>

I mean for tdp. Faulting is rare once the guest has touched all of memory.

>>> with EPT disabled,
>>> unixbench shows a ~3.6% improvement.
>>>
>>>
>> I'm a little worried about this. In some workloads, prefetch can often
>> fail due to gpte.a=0 so we spend effort doing nothing.
>>
> Yes, prefetch does not always succeed, but the prefetch path is fast and does
> not cost much time; in the worst case we only need to read 128 bytes of guest
> ptes. When it does succeed, it saves a lot of overhead.
>

Ok.

>> We should map those pages with pte.a=pte.d=0 so we don't confuse host
>> memory management. On EPT (which lacks a/d bits) we can't enable it
>> (but we can on NPT).
>>
>>
> You are right, this is the speculative path.
>
> For the pte.A bit:
> we call mmu_set_spte() with speculative = true, so pte.a is already set to 0 in
> this path.
>
> For the pte.D bit:
> We should also set pte.d = 0 in the speculative path; the same problem exists in
> the invlpg/pte write paths. I will fix it in the next version.
>

It's not enough to set spte.d=0; we also need to sample it later.
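
A minimal self-contained sketch of what sampling it later could mean
(drop_spte_sketch() and mark_page_dirty_stub() are illustrative stand-ins, not
KVM's real helpers): when an spte that was installed with dirty clear is
eventually torn down, the dirty bit the CPU may have set on a real guest write
has to be read back and propagated to the host.

#include <stdint.h>
#include <stdio.h>

#define DIRTY_MASK (1ull << 6)   /* assumed dirty bit position */

/* stand-in for marking the backing host page dirty */
static void mark_page_dirty_stub(uint64_t gfn)
{
	printf("gfn %llu: page marked dirty\n", (unsigned long long)gfn);
}

/*
 * When an spte installed with dirty clear is dropped, sample the dirty
 * bit the hardware may have set in the meantime and propagate it, so the
 * guest's write is not lost to the host.
 */
static void drop_spte_sketch(uint64_t spte, uint64_t gfn)
{
	if (spte & DIRTY_MASK)
		mark_page_dirty_stub(gfn);
}

int main(void)
{
	drop_spte_sketch(DIRTY_MASK, 42);   /* guest wrote the page */
	drop_spte_sketch(0, 43);            /* guest never wrote it */
	return 0;
}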


--
error compiling committee.c: too many arguments to function