Date: Mon, 28 Mar 2022 17:45:31 +0000
From: David Matlack
To: Ben Gardon
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org, Paolo Bonzini,
 Peter Xu, Sean Christopherson, Jim Mattson, David Dunn, Jing Zhang,
 Junaid Shahid
Subject: Re: [PATCH v2 9/9] KVM: x86/mmu: Promote pages in-place when disabling dirty logging
Message-ID:
References: <20220321224358.1305530-1-bgardon@google.com>
 <20220321224358.1305530-10-bgardon@google.com>
In-Reply-To: <20220321224358.1305530-10-bgardon@google.com>

On Mon, Mar 21, 2022 at 03:43:58PM -0700, Ben Gardon wrote:
> When disabling dirty logging, the TDP MMU currently zaps each leaf entry
> mapping memory in the relevant memslot. This is very slow. Doing the zaps
> under the mmu read lock requires a TLB flush for every zap, and the
> zapping causes a storm of EPT/NPT violations.
>
> Instead of zapping, replace the split large pages with large page
> mappings directly. While this sort of operation has historically only
> been done in the vCPU page fault handler context, refactorings earlier
> in this series and the relative simplicity of the TDP MMU make it
> possible here as well.
>
> Running the dirty_log_perf_test on an Intel Skylake with 96 vCPUs and 1G
> of memory per vCPU, this reduces the time required to disable dirty
> logging from over 45 seconds to just over 1 second. It also avoids
> provoking page faults, improving vCPU performance while disabling
> dirty logging.
>
> Signed-off-by: Ben Gardon
> ---
>  arch/x86/kvm/mmu/mmu.c          |  4 +-
>  arch/x86/kvm/mmu/mmu_internal.h |  6 +++
>  arch/x86/kvm/mmu/tdp_mmu.c      | 73 ++++++++++++++++++++++++++++++++-
>  3 files changed, 79 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 6f98111f8f8b..a99c23ef90b6 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -100,7 +100,7 @@ module_param_named(flush_on_reuse, force_flush_and_sync_on_reuse, bool, 0644);
>   */
>  bool tdp_enabled = false;
>
> -static int max_huge_page_level __read_mostly;
> +int max_huge_page_level;
>  static int tdp_root_level __read_mostly;
>  static int max_tdp_level __read_mostly;
>
> @@ -4486,7 +4486,7 @@ static inline bool boot_cpu_is_amd(void)
>   * the direct page table on host, use as much mmu features as
>   * possible, however, kvm currently does not do execution-protection.
>   */
> -static void
> +void
>  build_tdp_shadow_zero_bits_mask(struct rsvd_bits_validate *shadow_zero_check,
> 				int shadow_root_level)
>  {
> diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> index 1bff453f7cbe..6c08a5731fcb 100644
> --- a/arch/x86/kvm/mmu/mmu_internal.h
> +++ b/arch/x86/kvm/mmu/mmu_internal.h
> @@ -171,4 +171,10 @@ void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
>  void account_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp);
>  void unaccount_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp);
>
> +void
> +build_tdp_shadow_zero_bits_mask(struct rsvd_bits_validate *shadow_zero_check,
> +			       int shadow_root_level);
> +
> +extern int max_huge_page_level __read_mostly;
> +
>  #endif /* __KVM_X86_MMU_INTERNAL_H */
> diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> index af60922906ef..eb8929e394ec 100644
> --- a/arch/x86/kvm/mmu/tdp_mmu.c
> +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> @@ -1709,6 +1709,66 @@ void kvm_tdp_mmu_clear_dirty_pt_masked(struct kvm *kvm,
>  	clear_dirty_pt_masked(kvm, root, gfn, mask, wrprot);
>  }
>
> +static bool try_promote_lpage(struct kvm *kvm,
> +			      const struct kvm_memory_slot *slot,
> +			      struct tdp_iter *iter)

Use "huge_page" instead of "lpage" to be consistent with eager page
splitting and the rest of the Linux kernel. Some of the old KVM methods
still use "lpage" and "large page", but we're slowly moving away from
that.

> +{
> +	struct kvm_mmu_page *sp = sptep_to_sp(iter->sptep);
> +	struct rsvd_bits_validate shadow_zero_check;
> +	bool map_writable;
> +	kvm_pfn_t pfn;
> +	u64 new_spte;
> +	u64 mt_mask;
> +
> +	/*
> +	 * If addresses are being invalidated, don't do in-place promotion to
> +	 * avoid accidentally mapping an invalidated address.
> +	 */
> +	if (unlikely(kvm->mmu_notifier_count))
> +		return false;

Why is this necessary? Seeing this makes me wonder if we need a similar
check for eager page splitting.
> +
> +	if (iter->level > max_huge_page_level || iter->gfn < slot->base_gfn ||
> +	    iter->gfn >= slot->base_gfn + slot->npages)
> +		return false;
> +
> +	pfn = __gfn_to_pfn_memslot(slot, iter->gfn, true, NULL, true,
> +				   &map_writable, NULL);
> +	if (is_error_noslot_pfn(pfn))
> +		return false;
> +
> +	/*
> +	 * Can't reconstitute an lpage if the constituent pages can't be
> +	 * mapped higher.
> +	 */
> +	if (iter->level > kvm_mmu_max_mapping_level(kvm, slot, iter->gfn,
> +						    pfn, PG_LEVEL_NUM))
> +		return false;
> +
> +	build_tdp_shadow_zero_bits_mask(&shadow_zero_check, iter->root_level);
> +
> +	/*
> +	 * In some cases, a vCPU pointer is required to get the MT mask,
> +	 * however in most cases it can be generated without one. If a
> +	 * vCPU pointer is needed, kvm_x86_try_get_mt_mask will fail.
> +	 * In that case, bail on in-place promotion.
> +	 */
> +	if (unlikely(!static_call(kvm_x86_try_get_mt_mask)(kvm, iter->gfn,
> +							   kvm_is_mmio_pfn(pfn),
> +							   &mt_mask)))
> +		return false;
> +
> +	__make_spte(kvm, sp, slot, ACC_ALL, iter->gfn, pfn, 0, false, true,
> +		    map_writable, mt_mask, &shadow_zero_check, &new_spte);
> +
> +	if (tdp_mmu_set_spte_atomic(kvm, iter, new_spte))
> +		return true;
> +
> +	/* Re-read the SPTE, as it must have been changed by another thread. */
> +	iter->old_spte = READ_ONCE(*rcu_dereference(iter->sptep));

Huge page promotion could be retried in this case.

> +
> +	return false;
> +}
> +
>  /*
>   * Clear leaf entries which could be replaced by large mappings, for
>   * GFNs within the slot.
> @@ -1729,8 +1789,17 @@ static void zap_collapsible_spte_range(struct kvm *kvm,
>  		if (tdp_mmu_iter_cond_resched(kvm, &iter, false, true))
>  			continue;
>
> -		if (!is_shadow_present_pte(iter.old_spte) ||
> -		    !is_last_spte(iter.old_spte, iter.level))
> +		if (iter.level > max_huge_page_level ||
> +		    iter.gfn < slot->base_gfn ||
> +		    iter.gfn >= slot->base_gfn + slot->npages)

I feel like I've been seeing this "does slot contain gfn" calculation a
lot in recent commits.
It's probably time to create a helper function. No need to do this
cleanup as part of your series though, unless you want to :).

> +			continue;
> +
> +		if (!is_shadow_present_pte(iter.old_spte))
> +			continue;
> +
> +		/* Try to promote the constituent pages to an lpage. */
> +		if (!is_last_spte(iter.old_spte, iter.level) &&
> +		    try_promote_lpage(kvm, slot, &iter))
>  			continue;

If iter.old_spte is not a leaf, the old loop would always continue to
the next SPTE. Now we try to promote it, and if promotion fails we run
through the rest of the loop. This seems broken. For example, in the
next line we end up grabbing the pfn of the non-leaf SPTE (which would
be the PFN of the TDP MMU page table?) and treating that as the PFN
backing this GFN, which is wrong. In the worst case we end up zapping an
SPTE that we didn't need to, but we should still fix up this code.

>
>  		pfn = spte_to_pfn(iter.old_spte);
> --
> 2.35.1.894.gb6a874cedc-goog
>