2023-01-05 10:20:20

by Lai Jiangshan

Subject: [PATCH 0/7] kvm: x86/mmu: Share the same code to invalidate each vTLB entry

From: Lai Jiangshan <[email protected]>

FNAME(invlpg) and FNAME(sync_page) both invalidate vTLB entries, but
they do so in slightly different ways.

Make them use the same method and share the same code.
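
The idea, reduced to a toy model (illustrative only; none of these
names exist in KVM): an invlpg-style caller and a sync-style caller
both funnel into one shared per-entry invalidation helper, instead of
each carrying its own copy of the logic.

#include <stdbool.h>
#include <stddef.h>

struct vtlb_entry {
	unsigned long gva;
	bool valid;
};

/* The single shared per-entry invalidation primitive. */
static void vtlb_invalidate_entry(struct vtlb_entry *e)
{
	e->valid = false;
}

/* invlpg-style caller: invalidate the entry for one address. */
static void vtlb_invlpg(struct vtlb_entry *tlb, size_t n, unsigned long gva)
{
	size_t i;

	for (i = 0; i < n; i++) {
		if (tlb[i].valid && tlb[i].gva == gva)
			vtlb_invalidate_entry(&tlb[i]);
	}
}

/* sync-style caller: walk all entries, invalidating stale ones. */
static void vtlb_sync(struct vtlb_entry *tlb, size_t n,
		      bool (*still_valid)(unsigned long gva))
{
	size_t i;

	for (i = 0; i < n; i++) {
		if (tlb[i].valid && !still_valid(tlb[i].gva))
			vtlb_invalidate_entry(&tlb[i]);
	}
}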

Lai Jiangshan (7):
kvm: x86/mmu: Use KVM_MMU_ROOT_XXX for kvm_mmu_invalidate_gva()
kvm: x86/mmu: Use kvm_mmu_invalidate_gva() in kvm_mmu_invpcid_gva()
kvm: x86/mmu: Use kvm_mmu_invalidate_gva() in
nested_ept_invalidate_addr()
kvm: x86/mmu: Reduce the update to the spte in FNAME(sync_page)
kvm: x86/mmu: Move the code out of FNAME(sync_page)'s loop body into
mmu.c
kvm: x86/mmu: Remove FNAME(invlpg)
kvm: x86/mmu: Remove @no_dirty_log from FNAME(prefetch_gpte)

arch/x86/include/asm/kvm_host.h | 7 +-
arch/x86/kvm/mmu/mmu.c | 177 +++++++++++++++++----------
arch/x86/kvm/mmu/paging_tmpl.h | 207 ++++++++------------------------
arch/x86/kvm/vmx/nested.c | 5 +-
arch/x86/kvm/x86.c | 2 +-
5 files changed, 176 insertions(+), 222 deletions(-)

--
2.19.1.6.gb485710b


2023-01-05 10:20:36

by Lai Jiangshan

Subject: [PATCH 7/7] kvm: x86/mmu: Remove @no_dirty_log from FNAME(prefetch_gpte)

From: Lai Jiangshan <[email protected]>

FNAME(prefetch_gpte) is always called with @no_dirty_log=true, so drop
the parameter and simplify the gfn_to_memslot_dirty_bitmap() call
accordingly.
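
The simplification is plain constant folding; a minimal standalone
sketch (placeholder ACC_WRITE_MASK value, not KVM code):

#include <assert.h>
#include <stdbool.h>

#define ACC_WRITE_MASK 0x2u	/* placeholder value, illustration only */

int main(void)
{
	unsigned int pte_access = ACC_WRITE_MASK;
	bool no_dirty_log = true;	/* what every caller passes */

	/* With no_dirty_log known to be true, the old conjunction is
	 * equivalent to its second operand alone. */
	assert((no_dirty_log && (pte_access & ACC_WRITE_MASK)) ==
	       (bool)(pte_access & ACC_WRITE_MASK));
	return 0;
}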

Signed-off-by: Lai Jiangshan <[email protected]>
---
arch/x86/kvm/mmu/paging_tmpl.h | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 62aac5d7d38c..2db844d5d33c 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -519,7 +519,7 @@ static int FNAME(walk_addr)(struct guest_walker *walker,

static bool
FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
- u64 *spte, pt_element_t gpte, bool no_dirty_log)
+ u64 *spte, pt_element_t gpte)
{
struct kvm_memory_slot *slot;
unsigned pte_access;
@@ -535,8 +535,7 @@ FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
pte_access = sp->role.access & FNAME(gpte_access)(gpte);
FNAME(protect_clean_gpte)(vcpu->arch.mmu, &pte_access, gpte);

- slot = gfn_to_memslot_dirty_bitmap(vcpu, gfn,
- no_dirty_log && (pte_access & ACC_WRITE_MASK));
+ slot = gfn_to_memslot_dirty_bitmap(vcpu, gfn, pte_access & ACC_WRITE_MASK);
if (!slot)
return false;

@@ -605,7 +604,7 @@ static void FNAME(pte_prefetch)(struct kvm_vcpu *vcpu, struct guest_walker *gw,
if (is_shadow_present_pte(*spte))
continue;

- if (!FNAME(prefetch_gpte)(vcpu, sp, spte, gptep[i], true))
+ if (!FNAME(prefetch_gpte)(vcpu, sp, spte, gptep[i]))
break;
}
}
--
2.19.1.6.gb485710b

2023-01-19 01:16:36

by Lai Jiangshan

Subject: Re: [PATCH 0/7] kvm: x86/mmu: Share the same code to invalidate each vTLB entry

On Thu, Jan 5, 2023 at 5:57 PM Lai Jiangshan <[email protected]> wrote:
> [...]

Hello

Ping.

Cheers,
Lai