From: "Maciej S. Szmigiero"
To: Paolo Bonzini, Vitaly Kuznetsov
Cc: Sean Christopherson, Wanpeng Li, Jim Mattson, Igor Mammedov,
	Marc Zyngier, James Morse, Julien Thierry, Suzuki K Poulose,
	Huacai Chen, Aleksandar Markovic, Paul Mackerras,
	Christian Borntraeger, Janosch Frank, David Hildenbrand,
	Cornelia Huck, Claudio Imbrenda, Joerg Roedel,
	kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v5 01/13] KVM: x86: Cache total page count to avoid traversing the memslot array
Date: Mon, 20 Sep 2021 23:38:49 +0200
Message-Id:
X-Mailer: git-send-email 2.33.0
In-Reply-To:
References:
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: "Maciej S. Szmigiero"

There is no point in recalculating from scratch the total number of pages
in all memslots each time a memslot is created or deleted.
Just cache the value and update it accordingly on each such operation so
the code doesn't need to traverse the whole memslot array each time.

Signed-off-by: Maciej S. Szmigiero
---
 arch/x86/include/asm/kvm_host.h |  2 +-
 arch/x86/kvm/mmu/mmu.c          | 24 ------------------------
 arch/x86/kvm/x86.c              | 20 +++++++++++++++++---
 3 files changed, 18 insertions(+), 28 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index f8f48a7ec577..315d5368ba84 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1038,6 +1038,7 @@ struct kvm_x86_msr_filter {
 #define APICV_INHIBIT_REASON_X2APIC 5
 
 struct kvm_arch {
+	u64 n_memslots_pages;
 	unsigned long n_used_mmu_pages;
 	unsigned long n_requested_mmu_pages;
 	unsigned long n_max_mmu_pages;
@@ -1572,7 +1573,6 @@ void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
 				   const struct kvm_memory_slot *memslot);
 void kvm_mmu_zap_all(struct kvm *kvm);
 void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen);
-unsigned long kvm_mmu_calculate_default_mmu_pages(struct kvm *kvm);
 void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned long kvm_nr_mmu_pages);
 
 int load_pdptrs(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu, unsigned long cr3);
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 2d7e61122af8..61b9b7b5c10c 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -6133,30 +6133,6 @@ int kvm_mmu_module_init(void)
 	return ret;
 }
 
-/*
- * Calculate mmu pages needed for kvm.
- */
-unsigned long kvm_mmu_calculate_default_mmu_pages(struct kvm *kvm)
-{
-	unsigned long nr_mmu_pages;
-	unsigned long nr_pages = 0;
-	struct kvm_memslots *slots;
-	struct kvm_memory_slot *memslot;
-	int i;
-
-	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
-		slots = __kvm_memslots(kvm, i);
-
-		kvm_for_each_memslot(memslot, slots)
-			nr_pages += memslot->npages;
-	}
-
-	nr_mmu_pages = nr_pages * KVM_PERMILLE_MMU_PAGES / 1000;
-	nr_mmu_pages = max(nr_mmu_pages, KVM_MIN_ALLOC_MMU_PAGES);
-
-	return nr_mmu_pages;
-}
-
 void kvm_mmu_destroy(struct kvm_vcpu *vcpu)
 {
 	kvm_mmu_unload(vcpu);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 28ef14155726..65fdf27b9423 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11609,9 +11609,23 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 				const struct kvm_memory_slot *new,
 				enum kvm_mr_change change)
 {
-	if (!kvm->arch.n_requested_mmu_pages)
-		kvm_mmu_change_mmu_pages(kvm,
-				kvm_mmu_calculate_default_mmu_pages(kvm));
+	if (change == KVM_MR_CREATE)
+		kvm->arch.n_memslots_pages += new->npages;
+	else if (change == KVM_MR_DELETE) {
+		WARN_ON(kvm->arch.n_memslots_pages < old->npages);
+		kvm->arch.n_memslots_pages -= old->npages;
+	}
+
+	if (!kvm->arch.n_requested_mmu_pages) {
+		u64 memslots_pages;
+		unsigned long nr_mmu_pages;
+
+		memslots_pages = kvm->arch.n_memslots_pages * KVM_PERMILLE_MMU_PAGES;
+		do_div(memslots_pages, 1000);
+		nr_mmu_pages = max_t(typeof(nr_mmu_pages),
+				     memslots_pages, KVM_MIN_ALLOC_MMU_PAGES);
+		kvm_mmu_change_mmu_pages(kvm, nr_mmu_pages);
+	}
 
 	kvm_mmu_slot_apply_flags(kvm, old, new, change);
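
For reference, below is a minimal standalone C sketch (not kernel code) of the
same incremental-accounting idea: the cached total is adjusted on memslot
create/delete and the default MMU page count is derived from it with the
per-mille-plus-floor formula used in the hunk above.  All names and the
constant values here (20 per mille, a 64-page floor) are assumptions for
illustration only, not the kernel's definitions, and plain 64-bit division
stands in for do_div().

#include <stdint.h>
#include <stdio.h>

/* Assumed stand-ins for KVM_PERMILLE_MMU_PAGES and KVM_MIN_ALLOC_MMU_PAGES. */
#define PERMILLE_MMU_PAGES	20ULL
#define MIN_ALLOC_MMU_PAGES	64ULL

/* Cached total, playing the role of kvm->arch.n_memslots_pages. */
static uint64_t n_memslots_pages;

/* Incremental accounting on memslot create/delete instead of a full rescan. */
static void memslot_created(uint64_t npages)
{
	n_memslots_pages += npages;
}

static void memslot_deleted(uint64_t npages)
{
	n_memslots_pages -= npages;
}

/* Same shape as the patch: total pages * permille / 1000, with a floor. */
static uint64_t default_mmu_pages(void)
{
	uint64_t pages = n_memslots_pages * PERMILLE_MMU_PAGES / 1000;

	return pages > MIN_ALLOC_MMU_PAGES ? pages : MIN_ALLOC_MMU_PAGES;
}

int main(void)
{
	memslot_created(1ULL << 20);	/* 4 GiB worth of 4 KiB pages */
	printf("after create: %llu mmu pages\n",
	       (unsigned long long)default_mmu_pages());

	memslot_deleted(1ULL << 20);
	printf("after delete: %llu mmu pages\n",
	       (unsigned long long)default_mmu_pages());
	return 0;
}

With the assumed constants this prints 20971 pages after the create and falls
back to the 64-page floor after the delete, mirroring how the new
kvm_arch_commit_memory_region() code recomputes nr_mmu_pages from the cached
counter alone instead of walking every memslot.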