-stable review patch. If anyone has any objections, please let us know.
---------------------
From: Andrea Arcangeli <[email protected]>
upstream commit: 4539b35881ae9664b0e2953438dd83f5ee02c0b4
When kvm emulates an invlpg instruction, it can drop a shadow pte but
leave the guest TLBs intact. This can cause memory corruption when
swapping out: without this fix, another cpu can still write to a freed
host physical page.
If rmap_remove is called, the SMP TLB flush must always happen before
mmu_lock is released, because the VM takes mmu_lock before it can finally
add the page to the freelist after swapout. The mmu notifier makes it safe
to flush the TLB after freeing the page (otherwise it would never be safe),
so we can do a single flush for multiple invalidated sptes.
Cc: [email protected]
Signed-off-by: Andrea Arcangeli <[email protected]>
Acked-by: Marcelo Tosatti <[email protected]>
Signed-off-by: Avi Kivity <[email protected]>
[mtosatti: backport to 2.6.29]
Signed-off-by: Chris Wright <[email protected]>
---
arch/x86/kvm/paging_tmpl.h | 4 ++++
1 file changed, 4 insertions(+)
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -476,16 +476,20 @@ static int FNAME(shadow_invlpg_entry)(st
if (level == PT_PAGE_TABLE_LEVEL ||
((level == PT_DIRECTORY_LEVEL) && is_large_pte(*sptep))) {
struct kvm_mmu_page *sp = page_header(__pa(sptep));
+ int need_flush = 0;
sw->pte_gpa = (sp->gfn << PAGE_SHIFT);
sw->pte_gpa += (sptep - sp->spt) * sizeof(pt_element_t);
if (is_shadow_present_pte(*sptep)) {
+ need_flush = 1;
rmap_remove(vcpu->kvm, sptep);
if (is_large_pte(*sptep))
--vcpu->kvm->stat.lpages;
}
set_shadow_pte(sptep, shadow_trap_nonpresent_pte);
+ if (need_flush)
+ kvm_flush_remote_tlbs(vcpu->kvm);
return 1;
}
if (!is_shadow_present_pte(*sptep))