2012-05-02 05:28:49

by Xiao Guangrong

Subject: Re: [PATCH v4 06/10] KVM: MMU: fast path of handling guest page fault

On 05/01/2012 09:34 AM, Marcelo Tosatti wrote:


>
> It is getting better, but not yet, there are still reads of sptep
> scattered all over (as mentioned before, I think a pattern of read spte
> once, work on top of that, atomically write and then deal with results
> _everywhere_ (where mmu lock is held) is more consistent.
>


But we only need to care about the paths which depend on is_writable_pte(), no?

So, the places that call is_writable_pte() are spte_has_volatile_bits(),
spte_write_protect() and set_spte().

I have changed these functions:
In spte_has_volatile_bits():
static bool spte_has_volatile_bits(u64 spte)
{
+ /*
+ * Always atomically update the spte if it can be updated
+ * out of mmu-lock.
+ */
+ if (spte_can_lockless_update(spte))
+ return true;
+

In spte_write_protect():

+ spte = mmu_spte_update(sptep, spte);
+
+ if (is_writable_pte(spte))
+ *flush |= true;
+
The 'spte' comes from an atomic read-write (xchg).

In set_spte():
set_pte:
- mmu_spte_update(sptep, spte);
+ entry = mmu_spte_update(sptep, spte);
/*
* If we overwrite a writable spte with a read-only one we
* should flush remote TLBs. Otherwise rmap_write_protect
The 'entry' is also the latest value.

> /*
> * If we overwrite a writable spte with a read-only one we
> * should flush remote TLBs. Otherwise rmap_write_protect
> * will find a read-only spte, even though the writable spte
> * might be cached on a CPU's TLB.
> */
> if (is_writable_pte(entry) && !is_writable_pte(*sptep))
> kvm_flush_remote_tlbs(vcpu->kvm);
>
> This is obviously inconsistent with the above.
>


'entry' is not a problem since it comes from an atomic read-write as
mentioned above; I need to change this code to:

/*
* Optimization: for pte sync, if spte was writable the hash
* lookup is unnecessary (and expensive). Write protection
* is responsibility of mmu_get_page / kvm_sync_page.
* Same reasoning can be applied to dirty page accounting.
*/
if (!can_unsync && is_writable_pte(entry)) /* Use 'entry' instead of '*sptep'. */
	goto set_pte;
......


if (is_writable_pte(entry) && !is_writable_pte(spte)) /* Use 'spte' instead of '*sptep'. */
kvm_flush_remote_tlbs(vcpu->kvm);
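
Just to make the pattern concrete, here is a tiny userspace model of the
idea (this is not the kernel code; PT_WRITABLE_MASK below is a stand-in
bit and spte_xchg() a stand-in for the kernel's xchg): read the spte once,
publish the new value with an atomic exchange, and base every later
decision only on the value the exchange returned.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PT_WRITABLE_MASK (1ULL << 1)	/* stand-in for the real bit */

static bool is_writable_pte(uint64_t pte)
{
	return pte & PT_WRITABLE_MASK;
}

/* Atomically install @new_spte and return the value it replaced. */
static uint64_t spte_xchg(uint64_t *sptep, uint64_t new_spte)
{
	return __atomic_exchange_n(sptep, new_spte, __ATOMIC_SEQ_CST);
}

int main(void)
{
	uint64_t slot = PT_WRITABLE_MASK;	/* a writable spte */
	uint64_t new_spte = slot & ~PT_WRITABLE_MASK;
	uint64_t entry;

	/* 'entry' is the authoritative old value; never re-read the slot. */
	entry = spte_xchg(&slot, new_spte);

	if (is_writable_pte(entry) && !is_writable_pte(new_spte))
		printf("writable -> read-only: would flush remote TLBs\n");

	return 0;
}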


2012-05-02 21:11:45

by Marcelo Tosatti

Subject: Re: [PATCH v4 06/10] KVM: MMU: fast path of handling guest page fault

On Wed, May 02, 2012 at 01:28:39PM +0800, Xiao Guangrong wrote:
> On 05/01/2012 09:34 AM, Marcelo Tosatti wrote:
>
>
> >
> > It is getting better, but not yet, there are still reads of sptep
> > scattered all over (as mentioned before, I think a pattern of read spte
> > once, work on top of that, atomically write and then deal with results
> > _everywhere_ (where mmu lock is held) is more consistent.
> >
>
>
> But we only need to care about the paths which depend on is_writable_pte(), no?

Yes.

> So, the places that call is_writable_pte() are spte_has_volatile_bits(),
> spte_write_protect() and set_spte().
>
> I have changed these functions:
> In spte_has_volatile_bits():
> static bool spte_has_volatile_bits(u64 spte)
> {
> + /*
> + * Always atomically update the spte if it can be updated
> + * out of mmu-lock.
> + */
> + if (spte_can_lockless_update(spte))
> + return true;
> +
>
> In spte_write_protect():
>
> + spte = mmu_spte_update(sptep, spte);
> +
> + if (is_writable_pte(spte))
> + *flush |= true;
> +
> The 'spte' comes from an atomic read-write (xchg).
>
> In set_spte():
> set_pte:
> - mmu_spte_update(sptep, spte);
> + entry = mmu_spte_update(sptep, spte);
> /*
> * If we overwrite a writable spte with a read-only one we
> * should flush remote TLBs. Otherwise rmap_write_protect
> The 'entry' is also the latest value.
>
> > /*
> > * If we overwrite a writable spte with a read-only one we
> > * should flush remote TLBs. Otherwise rmap_write_protect
> > * will find a read-only spte, even though the writable spte
> > * might be cached on a CPU's TLB.
> > */
> > if (is_writable_pte(entry) && !is_writable_pte(*sptep))
> > kvm_flush_remote_tlbs(vcpu->kvm);
> >
> > This is obviously inconsistent with the above.
> >
>
>
> 'entry' is not a problem since it comes from an atomic read-write as
> mentioned above; I need to change this code to:
>
> /*
> * Optimization: for pte sync, if spte was writable the hash
> * lookup is unnecessary (and expensive). Write protection
> * is responsibility of mmu_get_page / kvm_sync_page.
> * Same reasoning can be applied to dirty page accounting.
> */
> if (!can_unsync && is_writable_pte(entry)) /* Use 'entry' instead of '*sptep'. */
> 	goto set_pte;
> ......
>
>
> if (is_writable_pte(entry) && !is_writable_pte(spte)) /* Use 'spte' instead of '*sptep'. */
> kvm_flush_remote_tlbs(vcpu->kvm);

What is of more importance than the ability to verify that this or that
particular case is OK at the moment is to write code in such a way that
it is easy to verify that it is correct.

Thus the suggestion above:

"scattered all over (as mentioned before, i think a pattern of read spte
once, work on top of that, atomically write and then deal with results
_everywhere_ (where mmu lock is held) is more consistent."

2012-05-03 11:26:56

by Xiao Guangrong

Subject: Re: [PATCH v4 06/10] KVM: MMU: fast path of handling guest page fault

On 05/03/2012 05:07 AM, Marcelo Tosatti wrote:


>> 'entry' is not a problem since it comes from an atomic read-write as
>> mentioned above; I need to change this code to:
>>
>> /*
>> * Optimization: for pte sync, if spte was writable the hash
>> * lookup is unnecessary (and expensive). Write protection
>> * is responsibility of mmu_get_page / kvm_sync_page.
>> * Same reasoning can be applied to dirty page accounting.
>> */
>> if (!can_unsync && is_writable_pte(entry)) /* Use 'entry' instead of '*sptep'. */
>> 	goto set_pte;
>> ......
>>
>>
>> if (is_writable_pte(entry) && !is_writable_pte(spte)) /* Use 'spte' instead of '*sptep'. */
>> kvm_flush_remote_tlbs(vcpu->kvm);
>
> What is of more importance than the ability to verify that this or that
> particular case is OK at the moment is to write code in such a way that
> it is easy to verify that it is correct.
>
> Thus the suggestion above:
>
> "scattered all over (as mentioned before, i think a pattern of read spte
> once, work on top of that, atomically write and then deal with results
> _everywhere_ (where mmu lock is held) is more consistent."
>


Marcelo, thanks for taking the time to patiently review and reply to my mail.

I am confused by '_everywhere_': does it mean all of the paths that read/update
the spte? Why not verify only the paths which depend on is_writable_pte()?

For the reason of "it is easy to verify that it is correct"? But these
paths are safe since they do not care about PT_WRITABLE_MASK at all. What
these paths care about is that the Dirty bit and Accessed bit are not
lost; that is why we always treat the spte as "volatile" if it can be
updated out of mmu-lock.

For further development? We can add a comment to is_writable_pte() to
warn developers to use it more carefully.

It is also very hard to verify the spte everywhere. :(

Actually, the current code that cares about PT_WRITABLE_MASK is just for
the TLB flush; maybe we can fold it into mmu_spte_update().
[
There are three ways to modify a spte: present -> nonpresent,
nonpresent -> present, and present -> present.

But we only need to care about present -> present for the lockless case.
]

/*
 * Return true if we need to flush TLBs because the spte changed from
 * writable to read-only.
 */
bool mmu_update_spte(u64 *sptep, u64 spte)
{
	u64 last_spte, old_spte = *sptep;
	bool flush = false;

	last_spte = xchg(sptep, spte);

	if ((is_writable_pte(last_spte) ||
	     spte_has_updated_lockless(old_spte, last_spte)) &&
	    !is_writable_pte(spte))
		flush = true;

	.... track Dirty/Accessed bit ...

	return flush;
}

Furthermore, the style of "if (spte has changed) goto beginning" is feasible
in set_spte() since this path is a fast path. (I can speed up mmu_need_write_protect.)
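
With that folded in, a caller such as spte_write_protect() could, just as
a rough untested sketch, shrink to:

	spte &= ~PT_WRITABLE_MASK;
	*flush |= mmu_update_spte(sptep, spte);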

2012-05-05 14:13:08

by Marcelo Tosatti

Subject: Re: [PATCH v4 06/10] KVM: MMU: fast path of handling guest page fault

On Thu, May 03, 2012 at 07:26:38PM +0800, Xiao Guangrong wrote:
> On 05/03/2012 05:07 AM, Marcelo Tosatti wrote:
>
>
> >> 'entry' is not a problem since it comes from an atomic read-write as
> >> mentioned above; I need to change this code to:
> >>
> >> /*
> >> * Optimization: for pte sync, if spte was writable the hash
> >> * lookup is unnecessary (and expensive). Write protection
> >> * is responsibility of mmu_get_page / kvm_sync_page.
> >> * Same reasoning can be applied to dirty page accounting.
> >> */
> >> if (!can_unsync && is_writable_pte(entry)) /* Use 'entry' instead of '*sptep'. */
> >> 	goto set_pte;
> >> ......
> >>
> >>
> >> if (is_writable_pte(entry) && !is_writable_pte(spte)) /* Use 'spte' instead of '*sptep'. */
> >> kvm_flush_remote_tlbs(vcpu->kvm);
> >
> > What is of more importance than the ability to verify that this or that
> > particular case is OK at the moment is to write code in such a way that
> > it is easy to verify that it is correct.
> >
> > Thus the suggestion above:
> >
> > "scattered all over (as mentioned before, i think a pattern of read spte
> > once, work on top of that, atomically write and then deal with results
> > _everywhere_ (where mmu lock is held) is more consistent."
> >
>
>
> Marcelo, thanks for taking the time to patiently review and reply to my mail.
>
> I am confused by '_everywhere_': does it mean all of the paths that read/update
> the spte? Why not verify only the paths which depend on is_writable_pte()?

I meant any path that updates from present->present.

> For the reason of "it is easy to verify that it is correct"? But these
> paths are safe since they do not care about PT_WRITABLE_MASK at all. What
> these paths care about is that the Dirty bit and Accessed bit are not
> lost; that is why we always treat the spte as "volatile" if it can be
> updated out of mmu-lock.
>
> For further development? We can add a comment to is_writable_pte() to
> warn developers to use it more carefully.
>
> It is also very hard to verify the spte everywhere. :(
>
> Actually, the current code that cares about PT_WRITABLE_MASK is just for
> the TLB flush; maybe we can fold it into mmu_spte_update().
> [
> There are three ways to modify a spte: present -> nonpresent,
> nonpresent -> present, and present -> present.
>
> But we only need to care about present -> present for the lockless case.
> ]

We also need to take memory ordering into account, which was not an issue
before. So it is not only the TLB flush.

> /*
>  * Return true if we need to flush TLBs because the spte changed from
>  * writable to read-only.
>  */
> bool mmu_update_spte(u64 *sptep, u64 spte)
> {
> 	u64 last_spte, old_spte = *sptep;
> 	bool flush = false;
>
> 	last_spte = xchg(sptep, spte);
>
> 	if ((is_writable_pte(last_spte) ||
> 	     spte_has_updated_lockless(old_spte, last_spte)) &&
> 	    !is_writable_pte(spte))
> 		flush = true;
>
> 	.... track Dirty/Accessed bit ...
>
> 	return flush;
> }
>
> Furthermore, the style of "if (spte has changed) goto beginning" is feasible
> in set_spte() since this path is a fast path. (I can speed up mmu_need_write_protect.)

What do you mean exactly?

It would be better if all these complications introduced by lockless
updates could be avoided, say by using A/D bits as Avi suggested.

2012-05-06 09:36:39

by Avi Kivity

Subject: Re: [PATCH v4 06/10] KVM: MMU: fast path of handling guest page fault

On 05/05/2012 05:08 PM, Marcelo Tosatti wrote:
> It would be better if all these complications introduced by lockless
> updates could be avoided, say by using A/D bits as Avi suggested.

Note that using A/D bits introduces new tradeoffs (when just a few bits
are dirtied per iteration, we reduce guest overhead but increase host
overhead, since the host has to scan a large number of sptes), and also a
large fraction of deployed systems don't have A/D bit support. But it
should affect our thinking, since in the long term all hosts will have it.

--
error compiling committee.c: too many arguments to function

2012-05-07 06:52:40

by Xiao Guangrong

Subject: Re: [PATCH v4 06/10] KVM: MMU: fast path of handling guest page fault

On 05/05/2012 10:08 PM, Marcelo Tosatti wrote:


>>
>> I am confused by '_everywhere_': does it mean all of the paths that read/update
>> the spte? Why not verify only the paths which depend on is_writable_pte()?
>
> I meant any path that updates from present->present.
>


OK, got it. So let us focus on mmu_spte_update() only. :)

>> For the reason of "it is easy to verify that it is correct"? But these
>> paths are safe since they do not care about PT_WRITABLE_MASK at all. What
>> these paths care about is that the Dirty bit and Accessed bit are not
>> lost; that is why we always treat the spte as "volatile" if it can be
>> updated out of mmu-lock.
>>
>> For further development? We can add a comment to is_writable_pte() to
>> warn developers to use it more carefully.
>>
>> It is also very hard to verify the spte everywhere. :(
>>
>> Actually, the current code that cares about PT_WRITABLE_MASK is just for
>> the TLB flush; maybe we can fold it into mmu_spte_update().
>> [
>> There are three ways to modify a spte: present -> nonpresent,
>> nonpresent -> present, and present -> present.
>>
>> But we only need to care about present -> present for the lockless case.
>> ]
>
> We also need to take memory ordering into account, which was not an issue
> before. So it is not only the TLB flush.


It seems we do not need an explicit barrier: we always use an atomic xchg
to update the spte, and that already guarantees the memory ordering.

In mmu_spte_update():

/* The return value indicates whether the TLB needs to be flushed. */
static bool mmu_spte_update(u64 *sptep, u64 new_spte)
{
	u64 old_spte;
	bool flush = false;

	old_spte = xchg(sptep, new_spte);

	if (is_writable_pte(old_spte) && !is_writable_pte(new_spte))
		flush = true;

	.....
}
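
(For reference, a userspace analogue of that exchange, assuming the GCC
builtins, would be:

	old_spte = __atomic_exchange_n(sptep, new_spte, __ATOMIC_SEQ_CST);

and on x86 an xchg with a memory operand has implicit LOCK semantics,
i.e. it acts as a full memory barrier, which is where the ordering
guarantee comes from.)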

>
>> /*
>>  * Return true if we need to flush TLBs because the spte changed from
>>  * writable to read-only.
>>  */
>> bool mmu_update_spte(u64 *sptep, u64 spte)
>> {
>> 	u64 last_spte, old_spte = *sptep;
>> 	bool flush = false;
>>
>> 	last_spte = xchg(sptep, spte);
>>
>> 	if ((is_writable_pte(last_spte) ||
>> 	     spte_has_updated_lockless(old_spte, last_spte)) &&
>> 	    !is_writable_pte(spte))
>> 		flush = true;
>>
>> 	.... track Dirty/Accessed bit ...
>>
>> 	return flush;
>> }
>>
>> Furthermore, the style of "if (spte has changed) goto beginning" is feasible
>> in set_spte() since this path is a fast path. (I can speed up mmu_need_write_protect.)
>
> What do you mean exactly?
>
> It would be better if all these complications introduced by lockless
> updates could be avoided, say by using A/D bits as Avi suggested.


Anyway, I do not object if we have a better way to do these things, but ......