From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Peter Feiner,
    Paolo Bonzini, Ben Gardon, Sasha Levin
Subject: [PATCH 5.10 089/126] KVM: x86/mmu: Factor out handling of removed page tables
Date: Mon, 5 Apr 2021 10:54:11 +0200
Message-Id: <20210405085034.014120821@linuxfoundation.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210405085031.040238881@linuxfoundation.org>
References: <20210405085031.040238881@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Precedence: bulk
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Ben Gardon

[ Upstream commit a066e61f13cf4b17d043ad8bea0cdde2b1e5ee49 ]

Factor out the code that handles a disconnected subtree of the TDP paging
structure from the code that handles the change to an individual SPTE.
Future commits will build on this to allow asynchronous page freeing.

No functional change intended.

Reviewed-by: Peter Feiner
Acked-by: Paolo Bonzini
Signed-off-by: Ben Gardon
Message-Id: <20210202185734.1680553-6-bgardon@google.com>
Signed-off-by: Paolo Bonzini
Signed-off-by: Sasha Levin
---
 arch/x86/kvm/mmu/tdp_mmu.c | 71 ++++++++++++++++++++++----------------
 1 file changed, 42 insertions(+), 29 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index ad9f8f187045..f52a22bc0fe8 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -234,6 +234,45 @@ static void handle_changed_spte_dirty_log(struct kvm *kvm, int as_id, gfn_t gfn,
 	}
 }
 
+/**
+ * handle_removed_tdp_mmu_page - handle a pt removed from the TDP structure
+ *
+ * @kvm: kvm instance
+ * @pt: the page removed from the paging structure
+ *
+ * Given a page table that has been removed from the TDP paging structure,
+ * iterates through the page table to clear SPTEs and free child page tables.
+ */
+static void handle_removed_tdp_mmu_page(struct kvm *kvm, u64 *pt)
+{
+	struct kvm_mmu_page *sp = sptep_to_sp(pt);
+	int level = sp->role.level;
+	gfn_t gfn = sp->gfn;
+	u64 old_child_spte;
+	int i;
+
+	trace_kvm_mmu_prepare_zap_page(sp);
+
+	list_del(&sp->link);
+
+	if (sp->lpage_disallowed)
+		unaccount_huge_nx_page(kvm, sp);
+
+	for (i = 0; i < PT64_ENT_PER_PAGE; i++) {
+		old_child_spte = READ_ONCE(*(pt + i));
+		WRITE_ONCE(*(pt + i), 0);
+		handle_changed_spte(kvm, kvm_mmu_page_as_id(sp),
+				    gfn + (i * KVM_PAGES_PER_HPAGE(level - 1)),
+				    old_child_spte, 0, level - 1);
+	}
+
+	kvm_flush_remote_tlbs_with_address(kvm, gfn,
+					   KVM_PAGES_PER_HPAGE(level));
+
+	free_page((unsigned long)pt);
+	kmem_cache_free(mmu_page_header_cache, sp);
+}
+
 /**
  * handle_changed_spte - handle bookkeeping associated with an SPTE change
  * @kvm: kvm instance
@@ -254,10 +293,6 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 	bool was_leaf = was_present && is_last_spte(old_spte, level);
 	bool is_leaf = is_present && is_last_spte(new_spte, level);
 	bool pfn_changed = spte_to_pfn(old_spte) != spte_to_pfn(new_spte);
-	u64 *pt;
-	struct kvm_mmu_page *sp;
-	u64 old_child_spte;
-	int i;
 
 	WARN_ON(level > PT64_ROOT_MAX_LEVEL);
 	WARN_ON(level < PG_LEVEL_4K);
@@ -319,31 +354,9 @@ static void __handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
 	 * Recursively handle child PTs if the change removed a subtree from
 	 * the paging structure.
 	 */
-	if (was_present && !was_leaf && (pfn_changed || !is_present)) {
-		pt = spte_to_child_pt(old_spte, level);
-		sp = sptep_to_sp(pt);
-
-		trace_kvm_mmu_prepare_zap_page(sp);
-
-		list_del(&sp->link);
-
-		if (sp->lpage_disallowed)
-			unaccount_huge_nx_page(kvm, sp);
-
-		for (i = 0; i < PT64_ENT_PER_PAGE; i++) {
-			old_child_spte = READ_ONCE(*(pt + i));
-			WRITE_ONCE(*(pt + i), 0);
-			handle_changed_spte(kvm, as_id,
-				gfn + (i * KVM_PAGES_PER_HPAGE(level - 1)),
-				old_child_spte, 0, level - 1);
-		}
-
-		kvm_flush_remote_tlbs_with_address(kvm, gfn,
-				KVM_PAGES_PER_HPAGE(level));
-
-		free_page((unsigned long)pt);
-		kmem_cache_free(mmu_page_header_cache, sp);
-	}
+	if (was_present && !was_leaf && (pfn_changed || !is_present))
+		handle_removed_tdp_mmu_page(kvm,
+				spte_to_child_pt(old_spte, level));
 }
 
 static void handle_changed_spte(struct kvm *kvm, int as_id, gfn_t gfn,
-- 
2.30.1
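
For readers without arch/x86/kvm/mmu/tdp_mmu.c open, the shape of the change
is: __handle_changed_spte() no longer open-codes the walk that clears and
frees a disconnected child page table; it only detects that a present
non-leaf SPTE went away and hands the child table to
handle_removed_tdp_mmu_page(). What follows is a minimal, self-contained C
sketch of that factoring pattern only; the toy types and names (toy_table,
ENTRIES, handle_changed_entry, handle_removed_table) are invented for
illustration and are not KVM code.

/*
 * Toy illustration of the pattern in this patch: the generic "entry
 * changed" handler only decides that a non-leaf entry was removed and
 * delegates the walk/clear/free of the disconnected child table to a
 * dedicated helper.  All types and names here are invented for the
 * sketch; none of this is KVM code.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define ENTRIES 4	/* entries per toy table (512 in a real x86 page table) */

struct toy_table {
	int level;			/* level of the entries it holds; 1 == leaf */
	uint64_t entry[ENTRIES];	/* non-leaf: encoded pointer to a child table */
};

static void handle_changed_entry(int level, uint64_t old, uint64_t new);

/* Factored-out helper: tear down a table that was unlinked from its parent. */
static void handle_removed_table(struct toy_table *pt)
{
	int i;

	for (i = 0; i < ENTRIES; i++) {
		uint64_t old = pt->entry[i];

		pt->entry[i] = 0;
		/* Recurse through the generic handler for each child entry. */
		handle_changed_entry(pt->level, old, 0);
	}
	printf("freeing level-%d table %p\n", pt->level, (void *)pt);
	free(pt);
}

/* Generic handler: now contains only the "was a subtree removed?" decision. */
static void handle_changed_entry(int level, uint64_t old, uint64_t new)
{
	int was_present = old != 0;
	int was_leaf = was_present && level == 1;

	if (was_present && !was_leaf && new == 0)
		handle_removed_table((struct toy_table *)(uintptr_t)old);
}

int main(void)
{
	struct toy_table *leaf = calloc(1, sizeof(*leaf));
	struct toy_table *root = calloc(1, sizeof(*root));
	uint64_t old;

	leaf->level = 1;
	leaf->entry[0] = 0x1000;	/* pretend final mapping, not a pointer */
	root->level = 2;
	root->entry[0] = (uint64_t)(uintptr_t)leaf;

	/* Zap the root entry: the helper tears down the child table. */
	old = root->entry[0];
	root->entry[0] = 0;
	handle_changed_entry(root->level, old, 0);

	free(root);
	return 0;
}

The point mirrored from the patch is that the generic handler keeps only the
decision, while all teardown bookkeeping (clearing entries, recursing,
freeing) lives in one helper, which is what lets later commits defer that
work and free pages asynchronously.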