From: Vipin Sharma
Date: Wed, 28 Dec 2022 14:08:10 -0800
Subject: Re: [Patch v3 8/9] KVM: x86/mmu: Make split_shadow_page_cache NUMA aware
To: Ben Gardon
Cc: seanjc@google.com, pbonzini@redhat.com, dmatlack@google.com,
        kvm@vger.kernel.org, linux-kernel@vger.kernel.org
References: <20221222023457.1764-1-vipinsh@google.com> <20221222023457.1764-9-vipinsh@google.com>
On Tue, Dec 27, 2022 at 11:43 AM Ben Gardon wrote:
>
> On Wed, Dec 21, 2022 at 6:35 PM Vipin Sharma wrote:
> >
> > Make split_shadow_page_cache NUMA aware and allocate page table pages
> > during the split based on the underlying physical page's NUMA node.
> >
> > Signed-off-by: Vipin Sharma
> > ---
> >  arch/x86/include/asm/kvm_host.h |  2 +-
> >  arch/x86/kvm/mmu/mmu.c          | 50 ++++++++++++++++++---------------
> >  2 files changed, 29 insertions(+), 23 deletions(-)
> >
> > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > index b1f319ad6f89..7b3f36ae37a4 100644
> > --- a/arch/x86/include/asm/kvm_host.h
> > +++ b/arch/x86/include/asm/kvm_host.h
> > @@ -1410,7 +1410,7 @@ struct kvm_arch {
> >           *
> >           * Protected by kvm->slots_lock.
> >           */
> > -        struct kvm_mmu_memory_cache split_shadow_page_cache;
> > +        struct kvm_mmu_memory_cache split_shadow_page_cache[MAX_NUMNODES];
> >          struct kvm_mmu_memory_cache split_page_header_cache;
> >
> >          /*
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index 511c6ef265ee..7454bfc49a51 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -6126,7 +6126,7 @@ static void kvm_mmu_invalidate_zap_pages_in_memslot(struct kvm *kvm,
> >  int kvm_mmu_init_vm(struct kvm *kvm)
> >  {
> >          struct kvm_page_track_notifier_node *node = &kvm->arch.mmu_sp_tracker;
> > -        int r;
> > +        int r, nid;
> >
> >          INIT_LIST_HEAD(&kvm->arch.active_mmu_pages);
> >          INIT_LIST_HEAD(&kvm->arch.possible_nx_huge_pages);
> > @@ -6145,8 +6145,9 @@ int kvm_mmu_init_vm(struct kvm *kvm)
> >          INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_page_header_cache,
> >                                    mmu_page_header_cache, NUMA_NO_NODE);
> >
> > -        INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_shadow_page_cache,
> > -                                  NULL, NUMA_NO_NODE);
> > +        for_each_node(nid)
>
> Again, assuming no one sets CONFIG_NODE_SHIFT to a ridiculous value,
> it would probably be fine to initialize the entire array here since
> that doesn't take any extra memory and we're not in a super hot path.

This goes through the entire array. I think you are confusing it with
for_each_online_node().
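For reference, the two iterators differ in which node state they walk.
Roughly, paraphrasing include/linux/nodemask.h (the exact definitions
vary by kernel version and config):

  /* Walks every possible node on the system, online or not. */
  #define for_each_node(node)        for_each_node_state(node, N_POSSIBLE)

  /* Walks only the nodes that are currently online. */
  #define for_each_online_node(node) for_each_node_state(node, N_ONLINE)

So the loop above covers every possible node, not just the online ones.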
>
> > +                INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_shadow_page_cache[nid],
> > +                                          NULL, NUMA_NO_NODE);
> >          spin_lock_init(&kvm->arch.split_shadow_page_cache_lock);
> >
> >          INIT_KVM_MMU_MEMORY_CACHE(&kvm->arch.split_desc_cache,
> > @@ -6157,10 +6158,13 @@ int kvm_mmu_init_vm(struct kvm *kvm)
> >
> >  static void mmu_free_vm_memory_caches(struct kvm *kvm)
> >  {
> > +        int nid;
> > +
> >          kvm_mmu_free_memory_cache(&kvm->arch.split_desc_cache);
> >          kvm_mmu_free_memory_cache(&kvm->arch.split_page_header_cache);
> > -        mmu_free_sp_memory_cache(&kvm->arch.split_shadow_page_cache,
> > -                                 &kvm->arch.split_shadow_page_cache_lock);
> > +        for_each_node(nid)
>
> Again, could just iterate over the whole array here.
>
> > +                mmu_free_sp_memory_cache(&kvm->arch.split_shadow_page_cache[nid],
> > +                                         &kvm->arch.split_shadow_page_cache_lock);
> >  }
> >
> >  void kvm_mmu_uninit_vm(struct kvm *kvm)
> > @@ -6269,7 +6273,7 @@ static inline bool need_topup(struct kvm_mmu_memory_cache *cache, int min)
> >          return kvm_mmu_memory_cache_nr_free_objects(cache) < min;
> >  }
> >
> > -static bool need_topup_split_caches_or_resched(struct kvm *kvm)
> > +static bool need_topup_split_caches_or_resched(struct kvm *kvm, int nid)
> >  {
> >          if (need_resched() || rwlock_needbreak(&kvm->mmu_lock))
> >                  return true;
> > @@ -6281,10 +6285,10 @@ static bool need_topup_split_caches_or_resched(struct kvm *kvm)
> >           */
> >          return need_topup(&kvm->arch.split_desc_cache, SPLIT_DESC_CACHE_MIN_NR_OBJECTS) ||
> >                 need_topup(&kvm->arch.split_page_header_cache, 1) ||
> > -               need_topup(&kvm->arch.split_shadow_page_cache, 1);
> > +               need_topup(&kvm->arch.split_shadow_page_cache[nid], 1);
> >  }
> >
> > -static int topup_split_caches(struct kvm *kvm)
> > +static int topup_split_caches(struct kvm *kvm, int nid)
> >  {
> >          /*
> >           * Allocating rmap list entries when splitting huge pages for nested
> > @@ -6314,18 +6318,21 @@ static int topup_split_caches(struct kvm *kvm)
> >          if (r)
> >                  return r;
> >
> > -        return mmu_topup_sp_memory_cache(&kvm->arch.split_shadow_page_cache,
> > +        return mmu_topup_sp_memory_cache(&kvm->arch.split_shadow_page_cache[nid],
> >                                           &kvm->arch.split_shadow_page_cache_lock,
> >                                           1);
> >  }
> >
> > -static struct kvm_mmu_page *shadow_mmu_get_sp_for_split(struct kvm *kvm, u64 *huge_sptep)
> > +static struct kvm_mmu_page *shadow_mmu_get_sp_for_split(struct kvm *kvm,
> > +                                                        u64 *huge_sptep,
> > +                                                        u64 huge_spte)
>
> These can go on the same line. Git diff is showing it weirdly.

They are aligned to "struct kvm *kvm" and both will be on different
lines to keep them in the 80 char limit.
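To illustrate with the prototype from the hunk above, the patch keeps:

static struct kvm_mmu_page *shadow_mmu_get_sp_for_split(struct kvm *kvm,
                                                        u64 *huge_sptep,
                                                        u64 huge_spte)

The opening parenthesis already sits at column 56, so folding
huge_sptep and huge_spte onto one continuation line aligned to
"struct kvm *kvm" would end around column 87, past the 80-column
limit, while keeping each parameter on its own aligned line stays
within it.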
>
> >  {
> >          struct kvm_mmu_page *huge_sp = sptep_to_sp(huge_sptep);
> >          struct shadow_page_caches caches = {};
> >          union kvm_mmu_page_role role;
> >          unsigned int access;
> >          gfn_t gfn;
> > +        int nid;
> >
> >          gfn = kvm_mmu_page_get_gfn(huge_sp, spte_index(huge_sptep));
> >          access = kvm_mmu_page_get_access(huge_sp, spte_index(huge_sptep));
> > @@ -6338,9 +6345,11 @@ static struct kvm_mmu_page *shadow_mmu_get_sp_for_split(struct kvm *kvm, u64 *hu
> >           */
> >          role = kvm_mmu_child_role(huge_sptep, /*direct=*/true, access);
> >
> > +        nid = kvm_pfn_to_page_table_nid(spte_to_pfn(huge_spte));
> > +
> >          /* Direct SPs do not require a shadowed_info_cache. */
> >          caches.page_header_cache = &kvm->arch.split_page_header_cache;
> > -        caches.shadow_page_cache = &kvm->arch.split_shadow_page_cache;
> > +        caches.shadow_page_cache = &kvm->arch.split_shadow_page_cache[nid];
> >          caches.shadow_page_cache_lock = &kvm->arch.split_shadow_page_cache_lock;
> >
> >          /* Safe to pass NULL for vCPU since requesting a direct SP. */
> > @@ -6360,7 +6369,7 @@ static void shadow_mmu_split_huge_page(struct kvm *kvm,
> >          gfn_t gfn;
> >          int index;
> >
> > -        sp = shadow_mmu_get_sp_for_split(kvm, huge_sptep);
> > +        sp = shadow_mmu_get_sp_for_split(kvm, huge_sptep, huge_spte);
> >
> >          for (index = 0; index < SPTE_ENT_PER_PAGE; index++) {
> >                  sptep = &sp->spt[index];
> > @@ -6398,7 +6407,7 @@ static int shadow_mmu_try_split_huge_page(struct kvm *kvm,
> >                                            u64 *huge_sptep)
> >  {
> >          struct kvm_mmu_page *huge_sp = sptep_to_sp(huge_sptep);
> > -        int level, r = 0;
> > +        int level, r = 0, nid;
> >          gfn_t gfn;
> >          u64 spte;
> >
> > @@ -6406,13 +6415,14 @@ static int shadow_mmu_try_split_huge_page(struct kvm *kvm,
> >          gfn = kvm_mmu_page_get_gfn(huge_sp, spte_index(huge_sptep));
> >          level = huge_sp->role.level;
> >          spte = *huge_sptep;
> > +        nid = kvm_pfn_to_page_table_nid(spte_to_pfn(spte));
> >
> >          if (kvm_mmu_available_pages(kvm) <= KVM_MIN_FREE_MMU_PAGES) {
> >                  r = -ENOSPC;
> >                  goto out;
> >          }
> >
> > -        if (need_topup_split_caches_or_resched(kvm)) {
> > +        if (need_topup_split_caches_or_resched(kvm, nid)) {
> >                  write_unlock(&kvm->mmu_lock);
> >                  cond_resched();
> >                  /*
> > @@ -6420,7 +6430,7 @@ static int shadow_mmu_try_split_huge_page(struct kvm *kvm,
> >                   * rmap iterator should be restarted because the MMU lock was
> >                   * dropped.
> >                   */
> > -                r = topup_split_caches(kvm) ?: -EAGAIN;
> > +                r = topup_split_caches(kvm, nid) ?: -EAGAIN;
> >                  write_lock(&kvm->mmu_lock);
> >                  goto out;
> >          }
> > @@ -6709,17 +6719,15 @@ void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen)
> >  }
> >
> >  static unsigned long mmu_shrink_cache(struct kvm_mmu_memory_cache *cache,
> > -                                      int cache_count,
> >                                        spinlock_t *cache_lock)
> >  {
> >          unsigned long freed = 0;
> >          int nid;
> >
> >          spin_lock(cache_lock);
> > -        for (nid = 0; nid < cache_count; nid++) {
> > -                if (node_online(nid) && cache[nid].nobjs)
> > +        for_each_online_node(nid)
> > +                if (cache[nid].nobjs)
> >                          freed += kvm_mmu_empty_memory_cache(&cache[nid]);
> > -        }
> >          spin_unlock(cache_lock);
> >          return freed;
> >  }
> > @@ -6741,8 +6749,7 @@ mmu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> >          first_kvm = kvm;
> >          list_move_tail(&kvm->vm_list, &vm_list);
> >
> > -        freed += mmu_shrink_cache(&kvm->arch.split_shadow_page_cache,
> > -                                  1,
> > +        freed += mmu_shrink_cache(kvm->arch.split_shadow_page_cache,
> >                                    &kvm->arch.split_shadow_page_cache_lock);
> >
> >          if (freed >= sc->nr_to_scan)
> > @@ -6750,7 +6757,6 @@ mmu_shrink_scan(struct shrinker *shrink, struct shrink_control *sc)
> >
> >          kvm_for_each_vcpu(i, vcpu, kvm) {
> >                  freed += mmu_shrink_cache(vcpu->arch.mmu_shadow_page_cache,
> > -                                          MAX_NUMNODES,
> >                                            &vcpu->arch.mmu_shadow_page_cache_lock);
> >          }
> >
> > --
> > 2.39.0.314.g84b9a713c41-goog
> >