This patch series adds support for stage2 hardware DBM, which is used
only for dirty logging for now.
It works well in several migration test cases, including VMs backed by 4K
pages or 2M THP. I compared the SHA256 digests of all guest memory on the
source and destination VMs and they are identical, which means no dirty
pages are missed under hardware DBM.
However, there are some known issues that remain unsolved.
1. Some mechanisms that rely on "write permission faults" become invalid,
such as kvm_set_pfn_dirty and "mmap page sharing".
kvm_set_pfn_dirty is called in user_mem_abort when the guest takes a write
fault. This guarantees that the physical page will not be dropped directly
when the host kernel reclaims memory. With hardware dirty management we no
longer get a chance to call kvm_set_pfn_dirty.
For the "mmap page sharing" (copy-on-write) mechanism, the host kernel
allocates a new physical page when the guest writes to a page that is
shared with other page table entries. With hardware dirty management we no
longer get a chance to do this either. (A simplified sketch of the
write-fault path involved is shown after this list.)
I need to do some survey on how stage1 hardware DBM solves these problems.
It would help if anyone can figure it out.
2. Page table modification races: Though I have found and solved some data
races that occur when the kernel changes page table entries, I suspect
there are still data races I am not aware of. It would be great if anyone
can point them out.
3. Performance: On the Kunpeng 920 platform, KVM takes about 40ms per 64GB
of memory to traverse all PTEs and collect the dirty log; scaling linearly,
a 512GB guest would need roughly 320ms per full scan. This causes
unbearable downtime for migration if the memory size is too big. I will
try to solve this problem in patch v1.
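To illustrate issue 1, here is a heavily simplified sketch (not the exact
upstream code) of what the write-permission-fault path in user_mem_abort()
does today and what hw DBM bypasses. The helper name and argument list are
invented for the illustration; kvm_s2pte_mkwrite, kvm_set_pfn_dirty and
mark_page_dirty are the existing helpers used in virt/kvm/arm/mmu.c:

	/*
	 * Sketch: host-side side effects of a stage2 write permission
	 * fault. With hw DBM the guest write no longer faults, so none
	 * of this runs.
	 */
	static void handle_write_fault_sketch(struct kvm *kvm, gfn_t gfn,
					      kvm_pfn_t pfn, pte_t *new_pte)
	{
		/* Make the stage2 mapping writable. */
		*new_pte = kvm_s2pte_mkwrite(*new_pte);

		/*
		 * Mark the backing struct page dirty so the host will not
		 * silently drop it on memory reclaim. Resolving the fault
		 * with a writable gfn_to_pfn lookup is also what lets the
		 * host break CoW ("mmap page sharing") for this page.
		 */
		kvm_set_pfn_dirty(pfn);

		/* Record the page in KVM's dirty log as well. */
		mark_page_dirty(kvm, gfn);
	}

With hw DBM enabled, the hardware grants write access directly instead of
raising a permission fault, so these host-side side effects are skipped.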
Keqian Zhu (7):
KVM: arm64: Add some basic functions for hw DBM
KVM: arm64: Set DBM bit of PTEs if hw DBM enabled
KVM: arm64: Traverse page table entries when sync dirty log
KVM: arm64: Stepwise write protect page table by mask bit
KVM: arm64: Modify stage2 young mechanism to support hw DBM
KVM: arm64: Save stage2 PTE dirty info if it is covered
KVM: arm64: Enable stage2 hardware DBM
arch/arm64/include/asm/kvm_host.h | 1 +
arch/arm64/include/asm/kvm_mmu.h | 44 +++++-
arch/arm64/include/asm/pgtable-prot.h | 1 +
arch/arm64/include/asm/sysreg.h | 2 +
arch/arm64/kvm/reset.c | 9 +-
virt/kvm/arm/arm.c | 6 +-
virt/kvm/arm/mmu.c | 202 ++++++++++++++++++++++++--
7 files changed, 246 insertions(+), 19 deletions(-)
--
2.19.1
kvm_set_pte is called to replace a target PTE with a desired one. We
always replace it, but if hw DBM is enabled and the dirty info would
otherwise be covered (lost), we should let the caller know, so that the
caller can decide whether to save the dirty info.
kvm_set_pmd and kvm_set_pud are not modified, because we only use DBM
in PTEs for now.
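For reference, a minimal illustration of the intended caller pattern, as
used in the hunks below (kvm_set_pte and mark_page_dirty are the existing
helpers; pte and addr come from the surrounding caller):

	/*
	 * If the old PTE carried hw-DBM dirty state that this update
	 * would lose, transfer it to the dirty bitmap first.
	 */
	if (kvm_set_pte(pte, __pte(0)))
		mark_page_dirty(kvm, addr >> PAGE_SHIFT);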
Signed-off-by: Keqian Zhu <[email protected]>
---
virt/kvm/arm/mmu.c | 39 +++++++++++++++++++++++++++++++++++----
1 file changed, 35 insertions(+), 4 deletions(-)
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index e1d9e4b98cb6..43d89c6333f0 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -185,10 +185,34 @@ static void clear_stage2_pmd_entry(struct kvm *kvm, pmd_t *pmd, phys_addr_t addr
put_page(virt_to_page(pmd));
}
-static inline void kvm_set_pte(pte_t *ptep, pte_t new_pte)
+/*
+ * @ret: true if dirty info is covered.
+ */
+static inline bool kvm_set_pte(pte_t *ptep, pte_t new_pte)
{
+#ifdef CONFIG_ARM64_HW_AFDBM
+ pteval_t old_pteval, new_pteval, pteval;
+
+ if (!kvm_hw_dbm_enabled() || pte_none(*ptep) ||
+ !kvm_s2pte_readonly(&new_pte)) {
+ WRITE_ONCE(*ptep, new_pte);
+ dsb(ishst);
+ return false;
+ }
+
+ new_pteval = pte_val(new_pte);
+ pteval = READ_ONCE(pte_val(*ptep));
+ do {
+ old_pteval = pteval;
+ pteval = cmpxchg_relaxed(&pte_val(*ptep), old_pteval, new_pteval);
+ } while (pteval != old_pteval);
+
+ return !kvm_s2pte_readonly((pte_t *)&pteval);
+#else
WRITE_ONCE(*ptep, new_pte);
dsb(ishst);
+ return false;
+#endif
}
static inline void kvm_set_pmd(pmd_t *pmdp, pmd_t new_pmd)
@@ -249,7 +273,10 @@ static void unmap_stage2_ptes(struct kvm *kvm, pmd_t *pmd,
if (!pte_none(*pte)) {
pte_t old_pte = *pte;
- kvm_set_pte(pte, __pte(0));
+ if (kvm_set_pte(pte, __pte(0))) {
+ mark_page_dirty(kvm, addr >> PAGE_SHIFT);
+ }
+
kvm_tlb_flush_vmid_ipa(kvm, addr);
/* No need to invalidate the cache for device mappings */
@@ -1291,13 +1318,17 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
if (pte_val(old_pte) == pte_val(*new_pte))
return 0;
- kvm_set_pte(pte, __pte(0));
+ if (kvm_set_pte(pte, __pte(0))) {
+ mark_page_dirty(kvm, addr >> PAGE_SHIFT);
+ }
kvm_tlb_flush_vmid_ipa(kvm, addr);
} else {
get_page(virt_to_page(pte));
}
- kvm_set_pte(pte, *new_pte);
+ if (kvm_set_pte(pte, *new_pte)) {
+ mark_page_dirty(kvm, addr >> PAGE_SHIFT);
+ }
return 0;
}
--
2.19.1