From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Yu Zhang, Sean Christopherson, Paolo Bonzini
Subject: [PATCH 5.10 005/289] KVM: x86/mmu: Remove the defunct update_pte() paging hook
Date: Mon, 17 May 2021 15:58:50 +0200
Message-Id: <20210517140305.340027792@linuxfoundation.org>
In-Reply-To: <20210517140305.140529752@linuxfoundation.org>
References: <20210517140305.140529752@linuxfoundation.org>

From: Sean Christopherson

commit c5e2184d1544f9e56140791eff1a351bea2e63b9 upstream.

Remove the update_pte() shadow paging logic, which was obsoleted by
commit 4731d4c7a077 ("KVM: MMU: out of sync shadow core"), but never
removed. As pointed out by Yu, KVM never write-protects leaf page
tables for the purposes of shadow paging, and instead marks their
associated shadow page as unsync so that the guest can write PTEs at
will.

The update_pte() path, which predates the unsync logic, optimizes COW
scenarios by refreshing leaf SPTEs when they are written, as opposed to
zapping the SPTE, restarting the guest, and installing the new SPTE on
the subsequent fault. Since KVM no longer write-protects leaf page
tables, update_pte() is unreachable and can be dropped.

Reported-by: Yu Zhang
Signed-off-by: Sean Christopherson
Message-Id: <20210115004051.4099250-1-seanjc@google.com>
Signed-off-by: Paolo Bonzini
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/include/asm/kvm_host.h |    3 --
 arch/x86/kvm/mmu/mmu.c          |   49 +---------------------------------------
 arch/x86/kvm/x86.c              |    1 -
 3 files changed, 2 insertions(+), 51 deletions(-)

--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -358,8 +358,6 @@ struct kvm_mmu {
 	int (*sync_page)(struct kvm_vcpu *vcpu,
 			 struct kvm_mmu_page *sp);
 	void (*invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa);
-	void (*update_pte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
-			   u64 *spte, const void *pte);
 	hpa_t root_hpa;
 	gpa_t root_pgd;
 	union kvm_mmu_role mmu_role;
@@ -1019,7 +1017,6 @@ struct kvm_arch {
 struct kvm_vm_stat {
 	ulong mmu_shadow_zapped;
 	ulong mmu_pte_write;
-	ulong mmu_pte_updated;
 	ulong mmu_pde_zapped;
 	ulong mmu_flooded;
 	ulong mmu_recycled;
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1715,13 +1715,6 @@ static int nonpaging_sync_page(struct kv
 	return 0;
 }
 
-static void nonpaging_update_pte(struct kvm_vcpu *vcpu,
-				 struct kvm_mmu_page *sp, u64 *spte,
-				 const void *pte)
-{
-	WARN_ON(1);
-}
-
 #define KVM_PAGE_ARRAY_NR 16
 
 struct kvm_mmu_pages {
@@ -3820,7 +3813,6 @@ static void nonpaging_init_context(struc
 	context->gva_to_gpa = nonpaging_gva_to_gpa;
 	context->sync_page = nonpaging_sync_page;
 	context->invlpg = NULL;
-	context->update_pte = nonpaging_update_pte;
 	context->root_level = 0;
 	context->shadow_root_level = PT32E_ROOT_LEVEL;
 	context->direct_map = true;
@@ -4402,7 +4394,6 @@ static void paging64_init_context_common
 	context->gva_to_gpa = paging64_gva_to_gpa;
 	context->sync_page = paging64_sync_page;
 	context->invlpg = paging64_invlpg;
-	context->update_pte = paging64_update_pte;
 	context->shadow_root_level = level;
 	context->direct_map = false;
 }
@@ -4431,7 +4422,6 @@ static void paging32_init_context(struct
 	context->gva_to_gpa = paging32_gva_to_gpa;
 	context->sync_page = paging32_sync_page;
 	context->invlpg = paging32_invlpg;
-	context->update_pte = paging32_update_pte;
 	context->shadow_root_level = PT32E_ROOT_LEVEL;
 	context->direct_map = false;
 }
@@ -4513,7 +4503,6 @@ static void init_kvm_tdp_mmu(struct kvm_
 	context->page_fault = kvm_tdp_page_fault;
 	context->sync_page = nonpaging_sync_page;
 	context->invlpg = NULL;
-	context->update_pte = nonpaging_update_pte;
 	context->shadow_root_level = kvm_mmu_get_tdp_level(vcpu);
 	context->direct_map = true;
 	context->get_guest_pgd = get_cr3;
@@ -4690,7 +4679,6 @@ void kvm_init_shadow_ept_mmu(struct kvm_
 	context->gva_to_gpa = ept_gva_to_gpa;
 	context->sync_page = ept_sync_page;
 	context->invlpg = ept_invlpg;
-	context->update_pte = ept_update_pte;
 	context->root_level = level;
 	context->direct_map = false;
 	context->mmu_role.as_u64 = new_role.as_u64;
@@ -4838,19 +4826,6 @@ void kvm_mmu_unload(struct kvm_vcp
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_unload);
 
-static void mmu_pte_write_new_pte(struct kvm_vcpu *vcpu,
-				  struct kvm_mmu_page *sp, u64 *spte,
-				  const void *new)
-{
-	if (sp->role.level != PG_LEVEL_4K) {
-		++vcpu->kvm->stat.mmu_pde_zapped;
-		return;
-	}
-
-	++vcpu->kvm->stat.mmu_pte_updated;
-	vcpu->arch.mmu->update_pte(vcpu, sp, spte, new);
-}
-
 static bool need_remote_flush(u64 old, u64 new)
 {
 	if (!is_shadow_present_pte(old))
@@ -4966,22 +4941,6 @@ static u64 *get_written_sptes(struct kvm
 	return spte;
 }
 
-/*
- * Ignore various flags when determining if a SPTE can be immediately
- * overwritten for the current MMU.
- *  - level: explicitly checked in mmu_pte_write_new_pte(), and will never
- *    match the current MMU role, as MMU's level tracks the root level.
- *  - access: updated based on the new guest PTE
- *  - quadrant: handled by get_written_sptes()
- *  - invalid: always false (loop only walks valid shadow pages)
- */
-static const union kvm_mmu_page_role role_ign = {
-	.level = 0xf,
-	.access = 0x7,
-	.quadrant = 0x3,
-	.invalid = 0x1,
-};
-
 static void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
 			      const u8 *new, int bytes,
 			      struct kvm_page_track_notifier_node *node)
@@ -5032,14 +4991,10 @@ static void kvm_mmu_pte_write(struct kvm
 
 		local_flush = true;
 		while (npte--) {
-			u32 base_role = vcpu->arch.mmu->mmu_role.base.word;
-
 			entry = *spte;
 			mmu_page_zap_pte(vcpu->kvm, sp, spte, NULL);
-			if (gentry &&
-			    !((sp->role.word ^ base_role) & ~role_ign.word) &&
-			    rmap_can_add(vcpu))
-				mmu_pte_write_new_pte(vcpu, sp, spte, &gentry);
+			if (gentry && sp->role.level != PG_LEVEL_4K)
+				++vcpu->kvm->stat.mmu_pde_zapped;
 			if (need_remote_flush(entry, *spte))
 				remote_flush = true;
 			++spte;
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -233,7 +233,6 @@ struct kvm_stats_debugfs_item debugfs_en
 	VCPU_STAT("halt_poll_fail_ns", halt_poll_fail_ns),
 	VM_STAT("mmu_shadow_zapped", mmu_shadow_zapped),
 	VM_STAT("mmu_pte_write", mmu_pte_write),
-	VM_STAT("mmu_pte_updated", mmu_pte_updated),
 	VM_STAT("mmu_pde_zapped", mmu_pde_zapped),
 	VM_STAT("mmu_flooded", mmu_flooded),
 	VM_STAT("mmu_recycled", mmu_recycled),
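
For readers following the reasoning in the changelog, here is a minimal,
self-contained sketch of the write-tracking behavior after this patch:
every SPTE touched by a tracked guest write is simply zapped, and the
next guest access rebuilds it through the fault path, rather than being
refreshed in place via the old update_pte() hook. This is illustrative
toy code, not kernel code; all toy_* names are hypothetical stand-ins,
and the real path involves locking, rmaps, and TLB flushing that are
omitted here.

/*
 * Toy model, loosely following the post-patch kvm_mmu_pte_write() loop.
 * All identifiers are hypothetical; only the control flow mirrors the
 * patch: zap unconditionally, count non-4K zaps, never refresh in place.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TOY_PG_LEVEL_4K 1

struct toy_stats {
	unsigned long mmu_pte_write;	/* tracked guest PTE writes seen */
	unsigned long mmu_pde_zapped;	/* non-leaf entries zapped */
};

struct toy_shadow_page {
	int level;	/* 1 = 4K leaf table, 2+ = higher-level table */
};

/* Zap one SPTE: the next guest access faults and rebuilds it. */
static void toy_zap_spte(uint64_t *spte)
{
	*spte = 0;
}

static void toy_pte_write(struct toy_stats *stats, struct toy_shadow_page *sp,
			  uint64_t *sptes, int npte, bool have_new_gpte)
{
	++stats->mmu_pte_write;
	while (npte--) {
		toy_zap_spte(&sptes[npte]);
		/* Mirrors: if (gentry && sp->role.level != PG_LEVEL_4K) */
		if (have_new_gpte && sp->level != TOY_PG_LEVEL_4K)
			++stats->mmu_pde_zapped;
	}
}

int main(void)
{
	struct toy_stats stats = { 0 };
	struct toy_shadow_page sp = { .level = 2 };
	uint64_t sptes[2] = { 0xabc123, 0xdef456 };

	toy_pte_write(&stats, &sp, sptes, 2, true);
	printf("pte_write=%lu pde_zapped=%lu spte0=%#llx\n",
	       stats.mmu_pte_write, stats.mmu_pde_zapped,
	       (unsigned long long)sptes[0]);
	return 0;
}

The design point, per the changelog: leaf page tables are never
write-protected for shadow paging (their shadow pages are marked unsync
instead), so the writes that still reach this loop can only hit
non-leaf tables, where zap-and-refault is the simple, correct response
and the in-place refresh path was dead code.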