2021-07-26 17:55:54

by Mingwei Zhang

Subject: [PATCH v2 1/3] KVM: x86/mmu: Remove redundant spte present check in mmu_set_spte

Drop an unnecessary is_shadow_present_pte() check when updating the rmaps
after installing a non-MMIO SPTE. set_spte() is used only to create
shadow-present SPTEs (e.g. MMIO SPTEs are handled early on), and
mmu_set_spte() runs with mmu_lock held for write, i.e. the SPTE can't be
zapped between writing the SPTE and updating the rmaps.

Opportunistically combine the "new SPTE" logic for large pages and rmaps.

No functional change intended.

Suggested-by: Ben Gardon <[email protected]>
Signed-off-by: Mingwei Zhang <[email protected]>
---
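Note for reviewers: with this patch applied, the tail of mmu_set_spte()
should look roughly as follows (reconstructed from the hunk below, not
pasted verbatim from the tree):

	pgprintk("%s: setting spte %llx\n", __func__, *sptep);
	trace_kvm_mmu_set_spte(level, gfn, sptep);

	/*
	 * The SPTE is known to be shadow-present here, so both the lpage
	 * accounting and the rmap update are keyed solely off of whether
	 * a new SPTE was installed, i.e. !was_rmapped.
	 */
	if (!was_rmapped) {
		if (is_large_pte(*sptep))
			++vcpu->kvm->stat.lpages;
		rmap_count = rmap_add(vcpu, sptep, gfn);
		if (rmap_count > RMAP_RECYCLE_THRESHOLD)
			rmap_recycle(vcpu, sptep, gfn);
	}

	return ret;
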
arch/x86/kvm/mmu/mmu.c | 14 ++++++--------
1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index b888385d1933..442cc554ebd6 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2690,15 +2690,13 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep,

 	pgprintk("%s: setting spte %llx\n", __func__, *sptep);
 	trace_kvm_mmu_set_spte(level, gfn, sptep);
-	if (!was_rmapped && is_large_pte(*sptep))
-		++vcpu->kvm->stat.lpages;
 
-	if (is_shadow_present_pte(*sptep)) {
-		if (!was_rmapped) {
-			rmap_count = rmap_add(vcpu, sptep, gfn);
-			if (rmap_count > RMAP_RECYCLE_THRESHOLD)
-				rmap_recycle(vcpu, sptep, gfn);
-		}
+	if (!was_rmapped) {
+		if (is_large_pte(*sptep))
+			++vcpu->kvm->stat.lpages;
+		rmap_count = rmap_add(vcpu, sptep, gfn);
+		if (rmap_count > RMAP_RECYCLE_THRESHOLD)
+			rmap_recycle(vcpu, sptep, gfn);
 	}
 
 	return ret;
--
2.32.0.432.gabb21c7263-goog


2021-07-26 20:25:53

by Ben Gardon

Subject: Re: [PATCH v2 1/3] KVM: x86/mmu: Remove redundant spte present check in mmu_set_spte

On Mon, Jul 26, 2021 at 10:54 AM Mingwei Zhang <[email protected]> wrote:
>
> Drop an unnecessary is_shadow_present_pte() check when updating the rmaps
> after installing a non-MMIO SPTE. set_spte() is used only to create
> shadow-present SPTEs (e.g. MMIO SPTEs are handled early on), and
> mmu_set_spte() runs with mmu_lock held for write, i.e. the SPTE can't be
> zapped between writing the SPTE and updating the rmaps.
>
> Opportunistically combine the "new SPTE" logic for large pages and rmaps.
>
> No functional change intended.
>
> Suggested-by: Ben Gardon <[email protected]>
> Signed-off-by: Mingwei Zhang <[email protected]>

Reviewed-by: Ben Gardon <[email protected]>

> ---
> arch/x86/kvm/mmu/mmu.c | 14 ++++++--------
> 1 file changed, 6 insertions(+), 8 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index b888385d1933..442cc554ebd6 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -2690,15 +2690,13 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
>
>  	pgprintk("%s: setting spte %llx\n", __func__, *sptep);
>  	trace_kvm_mmu_set_spte(level, gfn, sptep);
> -	if (!was_rmapped && is_large_pte(*sptep))
> -		++vcpu->kvm->stat.lpages;
>
> -	if (is_shadow_present_pte(*sptep)) {
> -		if (!was_rmapped) {
> -			rmap_count = rmap_add(vcpu, sptep, gfn);
> -			if (rmap_count > RMAP_RECYCLE_THRESHOLD)
> -				rmap_recycle(vcpu, sptep, gfn);
> -		}
> +	if (!was_rmapped) {
> +		if (is_large_pte(*sptep))
> +			++vcpu->kvm->stat.lpages;
> +		rmap_count = rmap_add(vcpu, sptep, gfn);
> +		if (rmap_count > RMAP_RECYCLE_THRESHOLD)
> +			rmap_recycle(vcpu, sptep, gfn);
>  	}
>
>  	return ret;
> --
> 2.32.0.432.gabb21c7263-goog
>

2021-07-29 18:17:53

by Sean Christopherson

Subject: Re: [PATCH v2 1/3] KVM: x86/mmu: Remove redundant spte present check in mmu_set_spte

On Mon, Jul 26, 2021, Mingwei Zhang wrote:
> Drop an unnecessary is_shadow_present_pte() check when updating the rmaps
> after installing a non-MMIO SPTE. set_spte() is used only to create
> shadow-present SPTEs (e.g. MMIO SPTEs are handled early on), and
> mmu_set_spte() runs with mmu_lock held for write, i.e. the SPTE can't be
> zapped between writing the SPTE and updating the rmaps.
>
> Opportunistically combine the "new SPTE" logic for large pages and rmaps.
>
> No functional change intended.
>
> Suggested-by: Ben Gardon <[email protected]>
> Signed-off-by: Mingwei Zhang <[email protected]>
> ---

Reviewed-by: Sean Christopherson <[email protected]>